Global spatial autocorrelation


The procedures adopted for analyzing patterns of spatial autocorrelation depend on the type of data available. There are considerable differences between:

a) a set of 100 values obtained for a 10x10 grid of (100m x 100m) squares which covers a 1000m x 1000m region;
b) a set of values obtained for 100 contiguous but arbitrarily shaped polygons which again cover the same region;
c) a set of nominal values (classes) for 100 contiguous but arbitrarily shaped polygons which again cover the same region; and
d) a set of data values obtained from 100 arbitrarily distributed sample points in our study region.

Each case warrants a slightly different approach, but each utilizes the notion of proximity bands as a means of imposing some form of serial structure on the data. The idea is then to examine the correlation between areas or points at given levels of separation, in order to obtain a measure similar to that used within time series analysis. Initially we examine the case of nominal-valued data, which helps to identify the key concepts involved and focuses our attention on a set of questions regarding such analysis that we examine in more detail in the subsections that follow.

Join counts and the analysis of nominal-valued spatial data

Consider a regular grid of cells that completely covers a sampled region, with a value associated with each grid square. The set of values in this case could be binary, e.g. presence/absence; or they could be classified into one of k classes (see Dacey, 1968, for a full discussion of this case); or the k classes could be re-classified into two classes, one selected class versus the remaining k-1 classes combined. Typically this kind of data is nominal-valued, i.e. the cell or zone values can be regarded as categories, such as woodland, grassland, sandy desert etc., rather than ordered values.

If we assume the data are binary, perhaps representing the presence or absence of a particular species of insect or plant variety in sample quadrats, a range of possible patterns might be observed (Figure 5‑25A-D). In Figure 5‑25A all the observed values for presence are in one half of the 6x6 grid (strong positive autocorrelation), whilst in Figure 5‑25B they are perfectly evenly distributed (strong negative autocorrelation). Figure 5‑25C gives a particular case of a random pattern (of which there are many). In each case 50% of the cells show presence, but this value could easily be 10% or any other value depending on the data being studied. Figure 5‑25D is an example of a real-world dataset showing the presence or absence of desert holly in a 16x16 cell sample region. Note that in each instance our comments refer to autocorrelation at the scale of the cell, i.e. between cells; within cells autocorrelation is typically strongly positive.

One way of analyzing these patterns is to ask “what is the chance that a particular pattern could occur at random?” For each of the sample patterns shown we can look at the spatial equivalent of one time step or lag, i.e. the patterns observed between adjacent cells (one cell step). If steps are restricted to rook’s moves we can count the number of instances of 1-1, 0-0, 1-0 and 0-1 occurring, and compare these to the number we might expect if the pattern were random. These numbers are referred to as join counts (sometimes erroneously described as ‘joint counts’). For smaller regions edge effects will be significant, so calculations need to be adjusted to reflect the fact that along the borders not all of the four directions are possible. For example, only two adjacent cells exist for the four corner positions, and only three adjacencies for other border cells.
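To make the counting step concrete, the following short Python sketch (not drawn from any particular package) counts rook's-move joins in a binary grid; the 4x4 grid shown is hypothetical rather than one of the patterns in Figure 5-25.

```python
import numpy as np

def rook_join_counts(grid):
    """Count 1-1, 0-0 and mixed (0-1/1-0) rook's-move joins in a binary grid.

    Each horizontal and vertical adjacency is counted once, so border and
    corner cells automatically contribute fewer joins (the edge effect noted
    in the text).
    """
    g = np.asarray(grid, dtype=int)
    # pairs of horizontally and vertically adjacent cells
    pairs = [(g[:, :-1], g[:, 1:]), (g[:-1, :], g[1:, :])]
    bb = ww = bw = 0
    for a, b in pairs:
        bb += int(np.sum((a == 1) & (b == 1)))
        ww += int(np.sum((a == 0) & (b == 0)))
        bw += int(np.sum(a != b))
    return bb, ww, bw

# hypothetical 4x4 presence/absence grid
grid = [[1, 1, 0, 0],
        [1, 0, 0, 1],
        [0, 0, 1, 1],
        [0, 1, 1, 0]]
print(rook_join_counts(grid))   # (6, 6, 12); total joins = 2*4*3 = 24
```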

Figure 5‑25 Join count patterns

A. Completely separated pattern (+ve)

B. Evenly spaced pattern (-ve)

C. Random pattern

D. Atriplex hymenelytra (desert holly)

Row and column totals for the adjacencies, or joins, are shown in Figure 5-26, with the overall total being 120/2=60 joins. For our patterns, with 50% occupancy, we might expect 15 of these to be 1-1 joins, 15 to be 0-0 joins and the remaining 30 to be 0-1 or 1-0 joins. We can count up the number of each type of join and compare this with our expected values to judge how special (significant) or not our patterns are. In Figure 5-25A there are 27 1-1 joins, 27 0-0 joins and only 6 0-1 or 1-0 joins, a pattern that is extremely unlikely to occur by chance. Similar calculations can be undertaken for cases B and C: as expected, for case B all 60 joins are of type 1-0 or 0-1, which is again extremely unlikely to occur by chance, whilst case C has 35 1-0/0-1 joins compared with roughly 30 expected, and 13 1-1 and 12 0-0 joins, as against roughly 15 of each.
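These expected values follow from the simple free-sampling (binomial) model: with J = 60 joins in total and p = 0.5 as the probability of a cell showing presence,

$$E(1\text{-}1)=Jp^{2}=15,\qquad E(0\text{-}0)=J(1-p)^{2}=15,\qquad E(1\text{-}0\ \text{or}\ 0\text{-}1)=2Jp(1-p)=30.$$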

Figure 5‑26 Join count computation


A test for the significance of the results we have observed can be produced using a Normal or z-transform of the data. In practice three separate z-transforms are needed, one for the 1-1 case, one for 0-0, and one for 0-1 and 1-0. These transforms evaluate expressions of the form z=(O-E)/SD where O is the observed number of joins of a given type, E is the expected number based on a random model, and SD is the expected standard deviation. The procedure and formulas can be implemented in software scripts for use within mainstream GIS packages, or data may be externally analyzed using specialized software and the results then mapped within a GIS; for example, using the Rookcase Excel add-in produced by Sawada (1999) and obtainable from the University of Ottawa, Laboratory for Paleoclimatology and Climatology. Results for the four examples given in Figure 5-25A-D generated using the Rookcase add-in are shown in Table 5-8 (“B” and “W” here refer to Black and White, rather than 1 or 0; “Rand.” refers to the randomization model, which is discussed further below; and # means the number of joins).
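Where the analytical formulas for E and SD are not to hand, broadly comparable z-scores can be obtained by a randomization (permutation) approach; the sketch below is purely illustrative and is not the procedure implemented in Rookcase.

```python
import numpy as np

def _rook_joins(g):
    """BB, WW and BW rook's-move join counts for a binary array g."""
    pairs = [(g[:, :-1], g[:, 1:]), (g[:-1, :], g[1:, :])]
    bb = sum(int(np.sum((a == 1) & (b == 1))) for a, b in pairs)
    ww = sum(int(np.sum((a == 0) & (b == 0))) for a, b in pairs)
    bw = sum(int(np.sum(a != b)) for a, b in pairs)
    return np.array([bb, ww, bw])

def join_count_z(grid, n_perm=999, seed=0):
    """Permutation (randomization) z-scores for the three join count types.

    Observed counts are compared with the mean and standard deviation of
    counts obtained by randomly re-arranging the cell values, i.e. a
    non-free (without replacement) sampling model.
    """
    rng = np.random.default_rng(seed)
    g = np.asarray(grid, dtype=int)
    observed = _rook_joins(g)
    flat = g.ravel().copy()
    sims = np.empty((n_perm, 3))
    for k in range(n_perm):
        rng.shuffle(flat)                      # re-label the cells at random
        sims[k] = _rook_joins(flat.reshape(g.shape))
    return (observed - sims.mean(axis=0)) / sims.std(axis=0, ddof=1)

# absolute z-values above about 1.96 suggest a non-random pattern (5% level)
```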

Table 5‑8 Join count analysis results

A. Positive autocorrelation

B. Negative autocorrelation

C. Random model: no discernible autocorrelation

D. A. hymenelytra: positive BB autocorrelation

Two features of the results in Table 5-8 should be noted: (i) the expected number of joins is not 30, 15, 15 as we suggested earlier, but 30.86, 14.57, 14.57, these being adjusted values that take account of what is known as non-free sampling (i.e. sampling without replacement); and (ii) the z-statistic in Table 5-8B shows a large positive value for BW joins, and large negative values for BB and WW joins (absolute values of >1.96 would be significant at the 5% probability level). This mixed pattern can be confusing, especially if one or two of the z-scores show a significant value whilst others do not, as in Table 5-8D.

The join counts method has been utilized in a variety of application areas, including: ecological data analysis; the analysis of voting patterns (for example, voting for one of two political parties or candidates); the analysis of land use (e.g. developed or undeveloped land); and the examination of the distribution of rural settlements. However, the complexity of computing the theoretical means and variances for more realistic spatial models and the difficulty of interpreting the multiple z-scores, coupled with the availability of alternative approaches to analysis, mean that join count procedures are not widely used and are rarely implemented in mainstream GIS packages. Support for this kind of analysis is provided in the joincount functions of the R-Project spatial package, spdep, and within the PASSaGE software. Metrics based on these concepts are provided in Fragstats and similar packages (see further, Section 5.3.3, Quadrat analysis of grid datasets, and Section 5.3.4, Landscape Metrics), for example in the computation of Connection and Contagion indices.

The standard form of the join counts statistical analysis makes a number of assumptions about the observed data. These include (under free sampling) that the distribution of BB and WW joins is asymptotically Normal (Gaussian) and that the data are first-order homogeneous. To clarify this latter point, we use an example from Kabos and Csillag (2002), who have investigated relaxing this assumption. They give the example of two simulated images with the same B/W distribution (50% B and 50% W) but with different first-order effects. Their first pattern was simulated with first-order homogeneity, i.e. pB, the probability of any cell being black, was 0.5 for all cells (Figure 5-27, left hand image). Their second pattern was simulated by setting pB=0.6 for half of the image and pB=0.4 for the other half, so the overall probability remained at 0.5 but was no longer homogeneously distributed across the image (Figure 5-27, right hand image). They found that the join count statistics (JCS) using pB=0.5 accepted the null hypothesis of spatial randomness for the first image, but rejected it for the second. Using unmodified JCS we would thus erroneously (but understandably) conclude that there is significant spatial autocorrelation in the second image.
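The essence of the two simulated images can be reproduced along the following lines; the grid size and random seed are illustrative and not those used by Kabos and Csillag.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 64                                        # illustrative grid size

# Image 1: first-order homogeneous, pB = 0.5 everywhere
img1 = rng.random((n, n)) < 0.5

# Image 2: pB = 0.6 in the left half and 0.4 in the right half, so the
# overall proportion of black cells remains close to 0.5 but is no longer
# homogeneously distributed
p = np.where(np.arange(n) < n // 2, 0.6, 0.4)  # per-column probability
img2 = rng.random((n, n)) < p

print(img1.mean(), img2.mean())                # both close to 0.5
```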

Figure 5-27 Homogeneous and non-homogeneous probability images


Additional questions might reasonably be asked about this analytical procedure, pointing the way to more powerful extensions and developments of these ideas. Key questions, which we discuss below, include:

how sensitive is this technique (and, of course, many other techniques) to the particular size of grid cell used and the number of cells?
if there are k classes rather than just two, how might this kind of data be analyzed?
why choose rook’s moves — why not permit queen’s moves, which would extend the idea of contiguity but increase the complexity of computation?
why restrict the notion of contiguity to directly adjacent cells (a lag of 1) — why not examine longer range effects, i.e. higher order spatial lags? Indeed, with separate analyses for different lags or distance bands one could produce a form of correlogram of autocorrelation effects.
why should every cell have the same level of importance or weight — why not permit cells to have differing weights, depending on the kind of model of spatial association we are considering?
should the analysis be restricted to regular grids, or could it be extended to irregular grids, polygonal regions and even pure point data?
is a single statistic for a whole area meaningful, or should separate statistics be computed for sub-regions or individual cells and then plotted, giving a Local Indicator of Spatial Autocorrelation (LISA) measure, which would highlight clustering effects?
if the data in the study cells are integer or real-valued, why restrict the analysis (and lose valuable information) by ignoring the magnitude of the values recorded for each cell or region?
how should we determine the cell probabilities, e.g. pB? Is it sufficient to always derive the probability estimate from the observed proportions of cells of type B, or are there other approaches that might be more appropriate?

We briefly comment on the first two of these questions in the discussion below. The remaining questions in the list, notably those relating to cases where the data values in the grid, lattice or point-set are integer or real-valued, are discussed in the subsections that follow.

Figure 5‑28 Grouping and size effects

A. 2x2 grouping of Atriplex hymenelytra grid

B. 128x128 grid of Calluna vulgaris presence

The first point raised above can be examined directly by reviewing our example in Figure 5-25D. If the grid had been sampled at half the frequency in each direction there would be 8x8=64 cells rather than 16x16=256 cells, and in this instance only one of the new larger cells would contain no data, i.e. a 0, with all others showing a desert holly plant present (Figure 5-28A, empty 2x2 cell shown in white). So we immediately see that such techniques may be highly sensitive to grid resolution, or, with irregular lattices, to the particular density, shape and connectivity of individual areas.

Furthermore, with large grids, e.g. 128x128 cells, the z-scores may become huge, raising doubts over the sensitivity and interpretation of the technique as the resolution of sampling or the size of the area covered is increased; in Figure 5‑28B there are 16384 cells and all three z-scores are over 160. A critical question here is whether the data in such cases reflect the result of direct observation, or whether they represent results from re-sampling or similar computational procedures. In the latter instance test scores may not be valid. At the other extreme scores may be suspect where the number of cells or regions is small (less than 30) or the category selected only occurs in a small percentage of areas.

The second question deals with the k-class case. There are several ways of dealing with this situation. Perhaps the first question to ask is whether the classes are genuinely nominal-valued (e.g. distinct species of tree), or whether the classes have been created from a set of underlying data that are themselves integer or real-valued. In the latter instance it is sensible to examine the underlying data separately using more powerful techniques, such as Moran’s I (see below). It is also possible to tackle each of the k classes in turn, treating the selected class as B=Black, say, regarding all other classes as W=White, and proceeding as described earlier. Alternatively, one can examine all k classes simultaneously by identifying all instances of the same class being found in adjacent areas. This pattern of occurrences can then be compared with the expected pattern assuming the zones are labeled in a spatially random manner, and/or with a pseudo-probability distribution generated using random permutations of the known values.
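A minimal sketch of the counting step that underlies both approaches (each class in turn, or all classes simultaneously) is shown below; the 3-class grid is hypothetical, and in practice significance would again be judged against a randomization distribution.

```python
import numpy as np

def same_class_joins(grid):
    """Rook's-move joins between cells of the same class, by class.

    Returns a dict {class_value: join count}; treating each class as 'B'
    and all other classes as 'W' in turn amounts to comparing each count
    with its expectation under random labelling of the zones.
    """
    g = np.asarray(grid)
    counts = {}
    for a, b in [(g[:, :-1], g[:, 1:]), (g[:-1, :], g[1:, :])]:
        same = a == b
        for cls in np.unique(g):
            counts[cls] = counts.get(cls, 0) + int(np.sum(same & (a == cls)))
    return counts

# hypothetical 3-class (e.g. land-cover) grid
grid = [[1, 1, 2],
        [1, 3, 2],
        [3, 3, 2]]
print(same_class_joins(grid))   # {1: 2, 2: 2, 3: 2}
```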

Support for this form of ‘multi-colored map’ analysis is to be found in the joincount functions of the R-Project spdep package and in the join counts option of the PASSaGE software. Note that for some of the analyses performed it is assumed that the spatial weights matrix, W, is symmetric, or can be adjusted to be symmetric; this excludes the use of some more complex neighborhood relations.

Moran I and Geary C

Many of the remaining questions from the preceding subsection relating to join count procedures can be addressed by considering a more general case than presence/absence data, with a region that is not a regular lattice. We start with a simple incomplete rectangular lattice, shown in Figure 5‑29 as an arrangement of 10 cells with real-valued entries. Note that although this is shown with rectangular edges it is topologically equivalent to any set of planar regions that exhibit the same adjacency pattern, however irregular in form.

Figure 5‑29 Irregular lattice dataset

 

        +4.55   +5.54
+2.24   -5.15   +9.02
+3.10   -4.39   -2.09
        +0.46   -3.06

Instead of displaying this data as an irregular lattice, it can be recast as a table of (x,y,z) triples, where rows are numbered 1,2… from the top and columns are likewise numbered 1,2… from the left, as shown in Table 5‑10. This coordinate based description of the data immediately suggests a generalization that may be applied to point datasets, with the (x,y) pairs providing the coordinate information. These (x,y) pairs could be well-defined points, or grid cells, or even polygon centroids.

Table 5‑10 Tabulated lattice data

x   y   z
1   2    4.55
1   3    5.54
2   1    2.24
2   2   -5.15
2   3    9.02
3   1    3.10
3   2   -4.39
3   3   -2.09
4   2    0.46
4   3   -3.06

This tabular form loses information, in that cell adjacencies are not explicitly captured. Figure 5‑30 provides this missing adjacency information for the sample lattice as a binary matrix using rook’s move definitions. In this example cells in the original dataset are numbered from 1 to 10, starting with the first cell being in row 2 column 1 of the source data and continuing down the rows and across the columns.                

Figure 5‑30 Adjacency matrix, W


This matrix can be thought of as a specific example of a spatial weights matrix, W={wij}, indicating elements of computations that are to be included or excluded. In this example wij=1 if two cells are adjacent (joined) and wij=0 otherwise. This matrix is symmetric and binary, but these features are not pre-requisites. For example, one might choose to create a matrix W in which rook’s move adjacencies were given a weight of 0.9 and bishop’s moves (the diagonal connections) a weight of 0.1. More generally, if the source dataset was polygonal, e.g. well-defined agricultural tracts, then adjacency weights might be chosen to reflect the relative lengths of shared boundaries.
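As a simple sketch of how such a matrix might be constructed from cell (row, column) positions, the function below assigns the illustrative rook and bishop weights of 0.9 and 0.1 mentioned above; setting the bishop weight to zero recovers a binary rook's-move matrix of the kind shown in Figure 5-30.

```python
import numpy as np

def weight_matrix(cells, w_rook=0.9, w_bishop=0.1):
    """Spatial weights for a list of (row, col) cell positions.

    Rook (edge) neighbours receive w_rook, bishop (diagonal) neighbours
    receive w_bishop; all other pairs, and the diagonal, are zero.
    """
    n = len(cells)
    W = np.zeros((n, n))
    for i, (ri, ci) in enumerate(cells):
        for j, (rj, cj) in enumerate(cells):
            dr, dc = abs(ri - rj), abs(ci - cj)
            if dr + dc == 1:              # cells share an edge
                W[i, j] = w_rook
            elif dr == 1 and dc == 1:     # cells share only a corner
                W[i, j] = w_bishop
    return W

# the ten cells of Figure 5-29, listed in Table 5-10 order (the ordering
# differs from the numbering used in Figure 5-30 but does not affect totals)
cells = [(1, 2), (1, 3), (2, 1), (2, 2), (2, 3),
         (3, 1), (3, 2), (3, 3), (4, 2), (4, 3)]
W = weight_matrix(cells, w_rook=1.0, w_bishop=0.0)   # binary rook version
print(int(W.sum()))                                  # 26 (13 joins, counted twice)
```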

We now have a set of values {zi} and a set of weights {wij}. We are looking for some way of combining this information using a function, f(), which satisfies the following criteria:

If the pair (zi, zj) are both positive or both negative, then f(zi, zj) > 0
If the pair (zi, zj) are of mixed sign (one positive, one negative), then f(zi, zj) < 0
If the pair (zi, zj) are both large in magnitude, then f(zi, zj) should be large in magnitude
Patterns of adjacency reflected in the matrix W must be accounted for in the computation

One of the simplest ways to fulfill these criteria is to multiply the zone values together, optionally adjusting for the overall mean value for all zones and including the adjacency information. In its simplest form this gives:

$$\sum_{i}\sum_{j}z_i z_j$$

or, with mean adjustment and weights included:

$$\sum_{i}\sum_{j}w_{ij}(z_i-\bar{z})(z_j-\bar{z})$$

As noted above, assuming the weights wij are binary, they simply identify which elements of the computation are to be included or excluded in the calculation. This expression looks quite like the covariance of the selected data. To adjust for the number of items included in this sum, and produce a covariance value, we need to divide through by the sum of the weights used, i.e.

$$\frac{1}{\sum_{i}\sum_{j}w_{ij}}\sum_{i}\sum_{j}w_{ij}(z_i-\bar{z})(z_j-\bar{z})$$

To standardize this covariance expression we divide it by the variance of the data, which is simply:

$$\frac{1}{n}\sum_{i}(z_i-\bar{z})^{2}$$

The ratio of these two expressions gives us an index, known as Moran’s I, which has values that typically range from approximately +1, representing complete positive spatial autocorrelation, to approximately -1, representing complete negative spatial autocorrelation:

$$I=\frac{n\sum_{i}\sum_{j}w_{ij}(z_i-\bar{z})(z_j-\bar{z})}{\left(\sum_{i}\sum_{j}w_{ij}\right)\sum_{i}(z_i-\bar{z})^{2}}$$

The upper and lower limits of this index may vary depending on the configuration of the weights used. It is the spatial equivalent of the correlation coefficient, r, and very similar to the (unweighted) time series autocorrelation coefficient, r1, derived in Section 5.5.1, Autocorrelation, time series and spatial analysis.

If the observations, zi, are spatially independent Normal random variables, the expected value of Moran’s I is:

$$E(I)=-\frac{1}{n-1}$$

Hence for large n the expected value is approximately 0.

The variance of I under these conditions is shown below (compare this to the formulas in Table 5-9), where $S_0=\sum_i\sum_j w_{ij}$, $S_1=\tfrac{1}{2}\sum_i\sum_j(w_{ij}+w_{ji})^2$ and $S_2=\sum_i\big(\sum_j w_{ij}+\sum_j w_{ji}\big)^2$:

$$\operatorname{Var}(I)=\frac{n^{2}S_{1}-nS_{2}+3S_{0}^{2}}{(n^{2}-1)S_{0}^{2}}-\frac{1}{(n-1)^{2}}$$

The variance of I under the assumption of randomization (i.e. the observed values randomly re-distributed across the zones, without replacement) is more complicated still, but is provided in a number of software packages. From the expected values and variances, z-scores and significance levels for I can then be computed under the alternative sampling assumptions.

The computation of Moran’s I for the sample dataset provided in Figure 5-29 illustrates the procedure. First we compute the matrix of unadjusted variance- and covariance-like quantities (Figure 5-31A). There are 10 cells, and the value for each cell is shown together with its deviation from the overall mean value, 1.022, in the column and row bounding the 10x10 matrix of computed values, C. The diagonal of the 10x10 matrix gives the variance-like quantities whilst the off-diagonal components give the covariance-like quantities. The weight-adjusted matrix (Figure 5-31B) includes only the off-diagonal elements, and only those for which adjacencies have been included. The total of these values is 16.19, compared with the sum of the diagonal values, which is 196.68. The computation of Moran’s I requires one final element: multiplication by the factor n divided by the sum of the elements of W, i.e. 10/26 in this case, giving I = 10×16.19/(26×196.68) = 0.0317. A value this close to 0 indicates very little spatial autocorrelation, which is what we find in this example.
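The whole calculation can be checked with a few lines of code; the sketch below builds the binary rook's-move weights directly from the (row, column) positions in Table 5-10 and reproduces the value I ≈ 0.032.

```python
import numpy as np

# (row, col, z) triples from Table 5-10
data = [(1, 2, 4.55), (1, 3, 5.54), (2, 1, 2.24), (2, 2, -5.15), (2, 3, 9.02),
        (3, 1, 3.10), (3, 2, -4.39), (3, 3, -2.09), (4, 2, 0.46), (4, 3, -3.06)]
rc = [(r, c) for r, c, _ in data]
z = np.array([v for _, _, v in data])

# binary rook's-move weights: 1 where cells share an edge
n = len(z)
W = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if abs(rc[i][0] - rc[j][0]) + abs(rc[i][1] - rc[j][1]) == 1:
            W[i, j] = 1

d = z - z.mean()                       # deviations from the mean (1.022)
num = (W * np.outer(d, d)).sum()       # weighted covariance-like total (16.19)
den = (d ** 2).sum()                   # variance-like total (196.68)
I = n * num / (W.sum() * den)          # 10 * 16.19 / (26 * 196.68)
print(round(I, 4))                     # 0.0317
```

Re-running the same code with the rearranged values of Figure 5-32 gives I ≈ 0.26, the value quoted below.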

Figure 5‑31 Moran's I computation

A. Computation of variance/covariance-like quantities, matrix C


B. C*W: Adjustment by multiplication of the weighting matrix, W


If we rearrange the cell values the index value will change accordingly. For example, if the negative values in our example are swapped with the positive values in column 3, so that column 2 is entirely positive and column 3 entirely negative (Figure 5‑32) the index value computed is I=0.26 (see Figure 5‑37 for the significance test of this index value).

Figure 5‑32 Revised source data

 

        +4.55   -5.15
+2.24   +5.54   -4.39
+3.10   +9.02   -2.09
        +0.46   -3.06

Moran’s I is one of the most frequently implemented statistics within GIS packages, including the ArcGIS Spatial Statistics toolbox, the AUTOCORR function in Idrisi, and most other spatial analysis tools. This is in part due to its ease of computation across a range of datasets and weightings, and because of its relationship to well-understood measures used in univariate statistics and time series analysis. It is also readily extended in a number of ways that increase its application and usefulness. These extensions include: adjustment for differences in underlying population size; treatment of spatial point data utilizing distance bands rather like time lags; and computation of local Moran I values rather than a single whole map or global Moran I value. We examine each of these in turn.

Oden (1995) proposed an adjustment to Moran’s I for zonal data where population size data (e.g. population at risk) is available. This statistic is supported within Clusterseer, but is not generally provided in other packages. Under this adjusted version the null hypothesis is that measured rates (e.g. disease rates, crime rates) in adjacent areas are independent, with the spatial variation observed simply reflecting the spatial variation in the underlying population. The computation of this statistic is quite involved, but essentially utilizes the difference between the proportion of cases or incidents in a given zone (i.e. as a proportion of the total cases in the study region) and the expected proportion in the population at large (again, computed as the population in zone i/total population at risk). The expected value for the adjusted statistic is similar to that for the standard statistic, being E(Ipop)=‑1/(N‑1) where N is now the total population at risk. This statistic is not independent of population size, but can be readily standardized by dividing the computed value by the overall incidence rate for the study region.

We now turn to the treatment of spatial point data rather than zoned or lattice data. Suppose that instead of zones we have a set of points (x,y) with associated data values, z (which could be values assigned to zone centroids). We could then compute:

$$I(h)=\frac{1}{N(h)}\sum_{i}\sum_{j}z_i z_j$$

where N(h) is the number of point pairs in a distance band of width d and mean distance h; zi is the (standardized) value at point i; and zj is the (standardized) value at a point j lying a distance of approximately h from i (in practice, within a distance band from h-0.5d to h+0.5d). The summations apply to all point pairs in the selected band. Hence the adjacency matrix W is replaced by distance, h, and by using a number of bands a set of I values can be computed and plotted against h. This form of graph or diagram showing correlation information, as noted earlier, is known as a correlogram. To be meaningful the band width, d, should be greater than the minimum inter-point distance in the dataset, and the initial band should commence at 0.5d in order to give a better representation to values near the origin.
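A correlogram of this kind can be computed along the following lines; the point coordinates and values below are randomly generated purely for illustration (they are not the GS+ sample data), and the statistic used is the mean product of standardized values within each band, as in the expression above.

```python
import numpy as np

def moran_correlogram(xy, z, band_width, n_bands):
    """Moran I by distance band for point data with values z.

    For each band the statistic is the mean product of standardized values
    over all point pairs whose separation falls within the band.
    """
    z = (np.asarray(z) - np.mean(z)) / np.std(z)
    xy = np.asarray(xy, dtype=float)
    dist = np.sqrt(((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1))
    i, j = np.triu_indices(len(z), k=1)          # each pair counted once
    d, prod = dist[i, j], z[i] * z[j]
    results = []
    for k in range(n_bands):
        in_band = (d >= k * band_width) & (d < (k + 1) * band_width)
        if in_band.any():
            results.append(((k + 0.5) * band_width,
                            prod[in_band].mean(),
                            int(in_band.sum())))
    return results   # (mean lag distance, I, number of pairs) per band

# illustrative use with random points and values
rng = np.random.default_rng(1)
pts = rng.random((100, 2)) * 80
vals = rng.random(100)
for h, I_h, n_pairs in moran_correlogram(pts, vals, band_width=8.0, n_bands=5):
    print(f"lag {h:5.1f}  I = {I_h:6.3f}  pairs = {n_pairs}")
```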

For example, the GS+ demonstration dataset comprises 132 separate spatial measurements of lead and aluminum levels in soil samples. An extract of the first 10 records is shown in Figure 5-33A, with missing values being blank entries (coded on input in this case as -99). Note that missing values are not the same as zero values, and recorded zero values may actually be a reflection of the limitations of the measuring device rather than a true zero value. In such instances it may be appropriate to adjust zero values to the minimum recordable value. Figure 5-33B shows a map of the point set for the selected z-variable (Pb, lead), with separate symbols indicating the four quartiles of the overall dataset that each point lies within (the triangular symbol is the upper quartile, the square is the lower quartile). There are well over 10,000 possible point pairings in this sample dataset, but when grouped into distance bands only a subset of pairings is considered (Figure 5-33C). For example, in the first band, lag class 1, there are 226 point pairs, the average distance between pairs is 5.77 units, the band width, d, is fixed for all lags at 8.21 units (defaulted to the maximum pairwise distance/10), and the Moran I value is 0.4179. This value decreases for the first few lags and then becomes closer to 0 and somewhat random. These values may be plotted, as shown in Figure 5-33D. Positive spatial autocorrelation, as measured by this statistic, appears to be present for separation distances up to around 20-25 meters.

Each square on the graph in Figure 5-33D represents an index value generated from a large number of point pairs, and the individual contributions to the covariance can be plotted as a cloud of (in this case) 226 values (Figure 5-34). For example, the largest value in this cloud (see arrow) has a value of 0.4276 and is derived from points 100 and 115, whose separation distance is 6.44 units, and whose data values are 0.17 and 1.25 respectively. Now (0.17 - 1.25)² is not 0.4276, so we must look rather more carefully at what GS+ is doing with the data. The input dataset, like many such samples, is strongly positively (right) skewed: there are many small values recorded and only a few larger ones, with 1.25 being the maximum z-value. By default GS+ automatically transforms the data to make it more Normal, using the log transform z*=ln(z+1). This gives the values:

z*₁₀₀ = 0.157 and z*₁₁₅ = 0.811, and thus (0.157 - 0.811)² = 0.4276

Figure 5‑33 Sample dataset and Moran I analysis

A. Dataset, first 10 records

B. Coordinates of data points


C. Moran I statistics, by lag

D. Isotropic Correlogram


The correlogram shown is isotropic, i.e. it has been calculated for all points, irrespective of direction. Spatial patterns may well exhibit directional bias and hence it may be of interest to divide the circular lag regions into sectors and investigate the correlograms for each sector. Due to symmetry only half the possible bearings need to be considered. Taking a sector window of, say, 45 degrees (i.e. X +/‑ 22.5 degrees) results in a set of 4 possible sectors to be analyzed. The choice of initial bearing, X, can be selected by reference to the problem at hand, or by examining the dataset to locate the orientation of the principal axis of the standard deviational ellipse. Software packages often identify this for the user and then generate a set of anisotropic correlograms to enable the data to be examined sector-by-sector. If all are broadly similar it is likely that directional effects are small. Again, looking at this sample dataset, the principal axis is at 130 degrees, and the four anisotropic correlograms do not differ greatly.

Figure 5‑34 Moran I (co)variance cloud, lag 1


One alternative to the Moran I form of computation involves returning to the original set of criteria for a suitable function. Instead of computing the product sum:

$$S1=\sum_{i}\sum_{j}w_{ij}(z_i-\bar{z})(z_j-\bar{z})$$

we could equally well have computed the sum of squared differences:

$$S2=\sum_{i}\sum_{j}w_{ij}(z_i-z_j)^{2}$$

or the absolute differences:

$$S3=\sum_{i}\sum_{j}w_{ij}\left|z_i-z_j\right|$$

since these have similar desirable properties.

These expressions can once again be standardized by dividing by the variance of the data and adjusting for the number of zones or point pairs sampled. If the expression S2 above is used the resulting measure is known as the Contiguity ratio or Geary’s C ratio:

$$C=\frac{(n-1)\sum_{i}\sum_{j}w_{ij}(z_i-z_j)^{2}}{2\sum_{i}\sum_{j}w_{ij}\sum_{i}(z_i-\bar{z})^{2}}$$

This expression may be compared with the formula for Moran’s I provided earlier. Unfortunately the ratio is not particularly easy to interpret: values close to 1 indicate no spatial autocorrelation; values below 1, approaching 0, indicate increasingly strong positive autocorrelation; and values greater than 1 indicate negative autocorrelation. Perhaps for this reason it is rarely implemented in GIS packages in this form, but expression S2 turns out to be extremely useful in what is known as semivariance analysis, as we shall see in our discussion of geostatistics (see Section 6.7, Geostatistical Interpolation Methods).
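For completeness, a direct implementation of the ratio, in the same style as the Moran's I sketch earlier, might look as follows.

```python
import numpy as np

def geary_c(z, W):
    """Geary's contiguity ratio C for values z and spatial weights matrix W.

    Values near 1 indicate no spatial autocorrelation, values below 1
    positive autocorrelation, and values above 1 negative autocorrelation.
    """
    z = np.asarray(z, dtype=float)
    W = np.asarray(W, dtype=float)
    n = len(z)
    num = (n - 1) * (W * (z[:, None] - z[None, :]) ** 2).sum()
    den = 2.0 * W.sum() * ((z - z.mean()) ** 2).sum()
    return num / den
```

Applied to the Figure 5-29 data with the binary rook's-move weights used earlier, this gives C ≈ 1.03, consistent with the near-zero Moran's I obtained for that example.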

Because of the inconvenience of working with absolute values, expression S3 is less widely used, although it does appear in semivariance analysis, where it is less sensitive to extreme values than computations based on S2; the resulting correlograms are known as Madograms. For more details see Deutsch and Journel (1992) and Goovaerts (1997).

Weighting models and lags

Our discussion of correlograms highlights the way in which distance bands can be used to generate data series in a manner similar to the lags used in time series analysis and in Ripley’s K function. Weighting matrices that incorporate values other than 0 and 1 were introduced by Cliff and Ord (1973). One way in which distance-based weighting is applied is to use the actual distances, rather than distance bands, as the means by which data pairs are “ordered”. The usual approach (for example, that adopted within the Crimestat package) is to use inverse distance, so that pairs that are further apart contribute less to the overall index value than those that are close together. The distance weighted Moran’s I is then given by:

$$I=\frac{n\sum_{i}\sum_{j}w_{ij}(z_i-\bar{z})(z_j-\bar{z})}{\sum_{i}\sum_{j}w_{ij}\sum_{i}(z_i-\bar{z})^{2}}$$

where

$$w_{ij}=\frac{1}{d_{ij}+\varepsilon}$$

The ε here represents a small increment applied to the distance to ensure that when dij is very small the results are not overly distorted and do not overflow. Other software packages implement a range of such distance weightings; for example, ArcGIS includes inverse distance, inverse distance squared, and specified distance bands amongst its options.

With datasets that represent zones rather than points, as in our earlier examples, GIS tools may be used to map the data and compute spatial weights. Amongst the most extensive facilities currently available for conducting such calculations are those provided by the GeoDa and ArcGIS packages, so we will use these to illustrate the range of facilities that have been implemented in readily available software.

Simple rook’s move and queen’s move contiguity weights are supported within GeoDa for polygonal regions. Typically these will be first-order contiguity values, but second or higher order weights can also be generated. Alternatively, distance-based weights can be generated using polygon centroid locations or a defined (x,y) point falling within each selected polygon. This requires a threshold distance to be set, and all points (representing polygons) within this distance band are included within the weighting (as with ArcGIS distance bands). ArcGIS also supports a combination of fixed distance bands and distance decay, which it calls zone of indifference weighting. Distance computations in both GeoDa and ArcGIS are based on projected (Euclidean) distance, with one further option in each case: unprojected (approximate arc) distance in GeoDa, and city block (L1 metric) calculations in ArcGIS.

A further method, based on the number of nearest-neighbors, is also supported in GeoDa. This requires the number of neighbors to be included as a parameter, and then varies the distances to reflect the value set (i.e. an adaptive method). The GeoDa code uses the Approximate Nearest Neighbor (ANN) software library developed at the University of Maryland to determine such neighborhood relations.

The advantage of this nearest-neighbors approach is that the spatial weights for tightly packed small zones will be more similar to those of larger zones within the same study region, which tends not to be the case with pure distance-based selection. Finally, a user-defined spatial weights matrix may be constructed and read into either of these two packages; of course, this requires very careful construction to ensure the weights matrix reflects the topology and weighting patterns intended. Neither of these packages explicitly supports weights based on shared boundary lengths. With point-set data, weights could be determined by constructing Voronoi polygons and computing the lengths of shared boundaries, but facilities to perform such operations directly are not provided within existing GIS and related software. Information from the Delaunay triangulation and/or Voronoi polygon creation could be used to generate a spatial weights file, which ArcGIS and GeoDa, for example, could import. In this way, by modifying or augmenting a program-generated standard weights file, weights with closer application-specific meaning may be created, for example based on measured interactions or flows between the sampled regions.
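As an illustration of the adaptive k-nearest-neighbor idea (a sketch using scipy's KD-tree rather than GeoDa's ANN library), a binary k-nearest-neighbor weights matrix might be built as follows; note that such matrices are generally asymmetric and may need to be symmetrized for some analyses.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_weights(xy, k=4):
    """Binary k-nearest-neighbour spatial weights for point (or centroid) data.

    Each row i has exactly k ones, marking the k nearest other points, so the
    effective neighbourhood distance adapts to the local density of zones.
    """
    xy = np.asarray(xy, dtype=float)
    tree = cKDTree(xy)
    # query k+1 neighbours because the nearest point to i is i itself
    _, idx = tree.query(xy, k=k + 1)
    n = len(xy)
    W = np.zeros((n, n))
    rows = np.repeat(np.arange(n), k)
    W[rows, idx[:, 1:].ravel()] = 1.0
    return W

# symmetrize if a symmetric matrix is required: W_sym = np.maximum(W, W.T)
```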

Idrisi supports simple autocorrelation computations for raster datasets using Moran’s I. Its weights matrix options include rook’s move and king’s move (equivalent to what we call queen’s move), with weights of 1 and 0.7071… for the direct and diagonal moves respectively. Higher order lags are indirectly supported in Idrisi through resampling of the grid.