Evolutionary computing and genetic programming

As noted earlier, the field of genetic algorithms is related to the broader topic known as evolutionary computing. This field now has many strands, incorporating genetic algorithms (GAs), genetic programming (GP, see below), agent-based modeling (ABM), artificial life and cellular automata (CA), together with a range of metaheuristics applied in isolation or in conjunction with other procedures within this broad family. Evolutionary computing concepts have been applied in a wide range of areas. Readers are referred to Diplock (2000) for an introduction to the subject and an application in the field of spatial interaction modeling, and to Banzhaf et al. (1998) for a full treatment of the subject.

Like GAs, genetic programming (GP) involves similar selection and evolutionary ideas, but instead of binary or simple numerically coded chromosomes representing members of the solution space, GP uses coded symbolic expressions or operators. The idea behind genetic programming is to evolve computer programs or models rather than solutions to optimization or similar problems. However, applying GA methods to GP problems runs into two limitations: (i) finite bit strings or coded strings are unsuitable forms for developing (growing and shrinking) structures, hence tree structures are widely used for encoding rather than strings; and (ii) only certain operations produce valid programs or models, hence most GA-type reproduction and mutation mechanisms will result in invalid solutions that therefore have zero fitness. In particular, mutation operations are not normally supported within GP. Tree structures with well-defined sets of ‘valid’ operations (e.g. swapping entire sub-trees) can be designed so that every newly generated solution is itself a valid expression with non-zero fitness, so the tree-based architecture also helps to resolve the second of these problems.
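The tree encoding and subtree-swap crossover described above can be sketched in a few lines. The following is a minimal illustrative sketch, not any particular GP system: the function set, terminal set, and list-based tree representation are all assumptions made for the example. The key point it demonstrates is that swapping entire sub-trees between two parents always yields offspring that are themselves valid, evaluable expressions.

```python
import copy
import random

# FUNCTIONS maps each operator in the (assumed) function set to its arity;
# TERMINALS is the terminal set (variables and a constant).
FUNCTIONS = {"+": 2, "*": 2}
TERMINALS = ["x", "y", "2.0"]

def random_tree(depth=3):
    """Grow a random expression tree, represented as nested lists."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(FUNCTIONS))
    return [op] + [random_tree(depth - 1) for _ in range(FUNCTIONS[op])]

def evaluate(tree, env):
    """Evaluate a tree given variable bindings in env."""
    if isinstance(tree, str):
        return env[tree] if tree in env else float(tree)
    args = [evaluate(a, env) for a in tree[1:]]
    return args[0] + args[1] if tree[0] == "+" else args[0] * args[1]

def subtree_slots(tree, slots=None):
    """Collect (parent, index) pairs addressing every proper subtree."""
    if slots is None:
        slots = []
    if isinstance(tree, list):
        for i in range(1, len(tree)):
            slots.append((tree, i))
            subtree_slots(tree[i], slots)
    return slots

def crossover(a, b):
    """Swap one randomly chosen subtree between copies of a and b.

    Because any subtree is itself a well-formed expression, both
    offspring remain valid programs, unlike naive GA-style string
    recombination applied to program text."""
    a2, b2 = copy.deepcopy(a), copy.deepcopy(b)
    sa, sb = subtree_slots(a2), subtree_slots(b2)
    if sa and sb:
        (pa, ia), (pb, ib) = random.choice(sa), random.choice(sb)
        pa[ia], pb[ib] = pb[ib], pa[ia]
    return a2, b2
```

For instance, `evaluate(["+", "x", ["*", "y", "2.0"]], {"x": 1.0, "y": 3.0})` returns 7.0, and any pair of offspring produced by `crossover` can be evaluated in the same way without error.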

For example, in a spatial interaction model, initial operators might include: (a) a set of basic functions such as +, -, /, *, ^, cos(), sin(), log(), exp() etc.; and (b) initial variables (known as terminals) such as origin and destination totals, inverted versions of these totals, inter-zonal distances, intervening opportunity measures etc. The aim is then to use these components, coupled with a known interaction dataset (e.g. journey-to-work data), to ‘breed’ or ‘train’ a model that fits the data as well as possible. The fit can then be compared with that of more conventional models in terms of the sum of squared errors, for both the training data and unseen data. In many instances the models bred in Diplock’s analysis are quite complicated, but some unexpected and quite good quality simpler models were also generated, such as:

where Tij is the volume of trips from zone i to zone j, Dj is the total destination volume for zone j, dij is the distance from zone i to zone j, and Xij is a measure of intervening opportunities. Fischer and Leung (1998) tackled the same kind of problem, i.e. the design of a spatial interaction model, using a combination of evolutionary methods and neural network techniques. The evolutionary component in this instance was used to select the most appropriate neural network model, which was then fitted to training datasets and applied to unseen data (i.e. forecasting). Applications of GP in other fields of geospatial analysis have been limited, possibly because the lack of tools to support this approach to model development has held it back, although the range of applications in related fields (e.g. ecology, land-use change forecasting, economics) suggests that it remains an area open to considerable further research.
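The fitness step in breeding a spatial interaction model can be sketched as follows. This is a toy illustration under stated assumptions: the flows, destination totals, distances, and the two candidate models are invented for the example (they are not Diplock’s data or models), and simple lambdas stand in for GP-bred expression trees. It shows only the scoring-and-selection mechanism: each candidate predicts Tij, and the sum of squared errors against observed flows determines which survives.

```python
# Toy observed journey-to-work flows and explanatory variables. All values
# are illustrative inventions for this sketch.
observed = {(0, 1): 120.0, (0, 2): 15.0, (1, 2): 30.0}   # trips T_ij
D = {1: 500.0, 2: 300.0}                                  # destination totals D_j
dist = {(0, 1): 2.0, (0, 2): 5.0, (1, 2): 3.0}            # distances d_ij

def sse(model):
    """Sum of squared errors of a candidate model over the observed flows."""
    return sum((model(i, j) - t) ** 2 for (i, j), t in observed.items())

# Two hand-written candidates playing the role of bred expression trees:
gravity = lambda i, j: D[j] / dist[(i, j)] ** 2   # a D_j / d_ij^2 form
uniform = lambda i, j: 80.0                       # a constant baseline

# Selection keeps the candidate with the lower error (higher fitness);
# in a full GP run this score would drive reproduction and crossover.
best = min([gravity, uniform], key=sse)
```

On this toy data the gravity-style candidate fits far better than the constant baseline and would be retained; in practice the same score would also be computed on held-out data to check that the bred model generalizes.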