Although there are no fixed rules as to what constitutes an agent, agents can usefully be defined in terms of their primary characteristics, discussed in the subsections below: autonomy; heterogeneity; pro-active or reactive behavior; bounded rationality; communication capabilities; mobility; and learning capabilities.
The term agent-based modeling (ABM) refers to the use of computational methods to investigate processes and problems viewed as dynamic systems of interacting agents. An example might be attempting to model crowd behavior in a football stadium using computational agents to represent individuals in the crowd. Agent-based models seek macro-level understanding based on micro-level processes, i.e. they involve bottom-up rather than top-down modeling. In many respects ABM and CA are very closely linked ― indeed, as noted earlier, it is often possible to use ABM toolsets to model a variety of CA and in principle the reverse is possible, although this may be awkward to achieve.
The subsections that follow draw on the recent publications and research of a number of authors. Readers interested in more detail should refer to these books and papers, and to the documentation associated with the toolkits that are now widely available. These publications include: Axelrod (2006); Axelrod and Tesfatsion (2006); Axtell (2000); Bonabeau (2002); Brown (2006); Casti (1997); Couclelis (2002); Crooks et al. (2008a); Epstein (1999); Epstein and Axtell (1996); Franklin and Graesser (1996); Gilbert and Troitzsch (2005); Macal and North (2005); North and Macal (2007); O'Sullivan (2004); Parker et al. (2003); Parker (2005); Torrens (2004); and Wooldridge and Jennings (1995).
Agent-based models are composed of multiple, interacting agents situated within a model or simulation environment (see below for clarification of what may be considered an agent). A relationship between agents is specified, linking agents to other agents and/or other entities within a system. Agent behavior may be specified in a variety of ways, from purely reactive (i.e. agents only perform actions when triggered to do so by some external stimulus, such as the actions of another agent) to goal-directed (i.e. seeking a particular goal). The behavior of agents can be scheduled to take place synchronously (i.e. every agent performs actions at each discrete time step), or asynchronously (i.e. agent actions are scheduled by the actions of other agents, and/or with reference to a clock).
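The two scheduling regimes can be sketched in a few lines of code. This is a minimal illustration only; the `Agent` class and its `step` behavior are hypothetical stand-ins, not drawn from any particular toolkit:

```python
import random

class Agent:
    """A minimal agent; step() stands in for any scheduled behavior."""
    def __init__(self, name):
        self.name = name
        self.actions = 0

    def step(self):
        # Placeholder behavior: simply count how often the agent acts.
        self.actions += 1

agents = [Agent(f"a{i}") for i in range(3)]

# Synchronous scheduling: every agent acts once per discrete time step.
for t in range(5):
    for agent in agents:
        agent.step()

# Asynchronous scheduling: here, one randomly chosen agent acts per tick
# (other asynchronous schemes trigger agents from events or a clock).
random.seed(42)
for t in range(5):
    random.choice(agents).step()

total = sum(a.actions for a in agents)
print(total)  # 15 synchronous activations + 5 asynchronous = 20
```

Real toolkits (e.g. schedulers in NetLogo or Repast) offer richer activation orders, but the distinction between stepping every agent per tick and activating agents one at a time is the essential one.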
Environments define the space in which agents operate, serving to support their interaction with the environment and with other agents. Agents within an environment may be spatially explicit, meaning they have a location in geometric space, although the agent itself may be static. For example, within a building-evacuation model agents would require a specific location in order to assess their exit strategy. Conversely, agents within an environment may be spatially implicit, meaning their location within the environment is irrelevant. For instance, a model of a computer network does not necessarily require each computer to know the physical location of the other computers in the network.
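The contrast between the two kinds of environment can be made concrete. In this sketch (all names illustrative), the spatially explicit evacuee carries coordinates, while the spatially implicit computer network is just a graph of connections with no geometry at all:

```python
# Spatially explicit: each agent carries coordinates in the environment.
class Evacuee:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def distance_to(self, exit_xy):
        # Manhattan distance serves as a simple proximity measure.
        ex, ey = exit_xy
        return abs(self.x - ex) + abs(self.y - ey)

# Spatially implicit: only connectivity matters, not physical location.
network = {"pc1": ["pc2"], "pc2": ["pc1", "pc3"], "pc3": ["pc2"]}

agent = Evacuee(2, 3)
print(agent.distance_to((0, 0)))  # 5
print("pc3" in network["pc2"])    # True: connected, location unknown
```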
In a modeling context, agent-based models can be used as experimental media for running and observing agent-based simulations. To this extent, they can be thought of as miniature laboratories, where the attributes and behavior of agents, and the environment in which they are set, can be altered and the repercussions observed over the course of multiple simulation runs. The ability to simulate the individual actions of many diverse agents, and to measure the resulting system behavior and outcomes over time (e.g. changes in patterns of pedestrian emergency egress), means that agent-based models can be useful tools for studying processes that operate at multiple scales and organizational levels. In particular, the roots of ABM lie in the simulation of social behavior and individual decision-making.
The acronym ABM will be used henceforth, but a caveat is required. There are various alternative terms (and their acronyms) applied in the literature to what, for all intents and purposes, is essentially ABM. Examples include: Agent-Based Computational Modeling (ABCM), Agent-Based Social Simulation (ABSS), Agent-Based Computational Simulation (ABCS), Agent-Based Modeling and Simulation (ABMS) and Individual-Based Modeling (IBM). Multi-Agent Systems (MAS) is another very popular term which is often, confusingly, used interchangeably to describe agent-based models. The field of MAS is a well-established applied branch of Artificial Intelligence (AI), and although ABM has strong roots in the field of AI, agent-based models are not limited to the design and understanding of artificial agents. The impetus to develop MAS was spawned from problems encountered in the implementation of tasks on distributed computational units interacting with one another and with the external environment (Distributed Artificial Intelligence, DAI). The term MAS is more commonly applied outside the social sciences, for example, by computer scientists in relation to agent-oriented software development. Therefore, the MAS field can be characterized as the study of societies of artificial autonomous agents, while the ABM field can be typified as the study of artificial societies of autonomous agents. These two fields differ in more substantial ways than just their formalism (i.e. logic and AI based in the MAS domain, and mathematically based in the social science domain). However, this will not be considered here (see Conte et al., 1998 for a more detailed treatment).
More importantly, the term agent has connotations beyond ABM. For instance, agents found within agent-based models are different from mobile agent systems, which are lightweight software proxies that perform various functions for users and to some extent can behave autonomously. ABM is not the same as object-oriented simulation, although the object-oriented paradigm provides a suitable medium for the development of agent-based models. For this reason, ABM systems are invariably object-oriented.
There is no universal agreement on the precise definition of the term ‘agent’. From a pragmatic modeling standpoint there are several features that are common to most agents:
Autonomy: Agents are autonomous units (i.e. operating without the influence of centralized control), capable of processing information and exchanging this information with other agents in order to make independent decisions. They are free to interact with other agents, at least over a limited range of situations, and this does not (necessarily) affect their autonomy. In this respect, agents are active rather than purely passive (see below).
Heterogeneity: The notion of a representative 'mean individual' is redundant; agents permit the modeling of heterogeneous autonomous individuals. Groups of agents can exist, but they emerge from the bottom up, as amalgamations of similar autonomous individuals.
Active: Agents are active because they exert independent influence in a simulation. The following active features can be identified:
Pro-active/goal-directed: Agents are often deemed goal-directed, having goals to achieve (not necessarily objectives to maximize) with respect to their behaviors. For example, agents within a geographic space can be developed to find or follow a set of spatial paths to achieve a goal within a certain constraint (e.g. time) when exiting a building during an emergency.
Reactive/Perceptive: Agents can be designed to have an awareness, or sense of their surroundings. Agents can also be supplied with prior knowledge, in effect a ‘mental map’ of their environment, thus providing them with an awareness of other entities, obstacles, or required destinations within their environment. Extending the example above, agents could therefore be provided with knowledge of building exit locations.
Bounded Rationality: The dominant form of modeling in the social sciences is based upon a rational-choice paradigm. Rational-choice models generally assume that agents are perfectly rational optimizers with unfettered access to information, foresight, and infinite analytical ability (Parker et al., 2003). These agents are therefore capable of deductively solving complex mathematical optimization problems in order to maximize their well-being, thereby balancing long-run and short-run payoffs in the face of uncertainty. While rational-choice models can have substantial explanatory power, some of their axiomatic foundations are contradicted by experimental evidence, leading prominent social scientists to question their empirical validity. However, agents can be configured with ‘bounded’ rationality (through their heterogeneity) to circumvent the potential limitations of these assumptions (e.g. agents can be provided with fettered access to information at the local level). In effect, the aforementioned ‘perception’ of agents can be constrained. Thus, rather than implementing a model containing agents with optimal solutions that can fully anticipate all future states of which they are part, agents make inductive, discrete, and adaptive choices that move them towards achieving goals. For instance, an agent may have knowledge of all building exit locations, but agents will be unaware if all exits are accessible (e.g. some may have become blocked through congestion).
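The blocked-exit example above can be sketched directly. In this illustration (all names and values hypothetical), the agent's 'mental map' lists every exit, but its perception is bounded: it chooses the nearest known exit inductively and only discovers congestion on arrival:

```python
# The agent's prior knowledge: all exit locations ('mental map').
known_exits = {"north": (0, 5), "south": (10, 5), "east": (5, 10)}

# Global state the agent cannot perceive in advance.
blocked = {"north"}

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def choose_exit(position, exits):
    # Bounded rationality: an inductive, local choice based only on
    # prior knowledge, not a global optimization over all future states.
    return min(exits, key=lambda name: manhattan(position, exits[name]))

pos = (1, 4)
choice = choose_exit(pos, known_exits)
print(choice)             # 'north': nearest by the agent's own knowledge
print(choice in blocked)  # True: the agent learns this only on arrival
```

A perfectly rational optimizer with unfettered information would route around the blockage immediately; the boundedly rational agent must adapt after its first choice fails.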
Interactive/Communicative: Agents have the ability to communicate extensively. For example, agents can query other agents and/or the environment within neighborhoods of (potentially) varying size, searching for specific attributes, and can disregard inputs that do not meet a desired threshold.
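A neighborhood query of this kind can be sketched as follows. The agent records, radius, and threshold here are all illustrative assumptions, not part of any particular toolkit:

```python
# A population of agents, each with a position and a broadcast attribute.
agents = [
    {"id": 1, "pos": (0, 0), "signal": 0.9},
    {"id": 2, "pos": (1, 1), "signal": 0.2},
    {"id": 3, "pos": (5, 5), "signal": 0.8},
]

def query_neighborhood(center, radius, threshold):
    """Query agents within a square neighborhood of the given radius,
    disregarding any input below the desired threshold."""
    cx, cy = center
    hits = []
    for a in agents:
        ax, ay = a["pos"]
        in_range = abs(ax - cx) <= radius and abs(ay - cy) <= radius
        if in_range and a["signal"] >= threshold:
            hits.append(a["id"])
    return hits

# Agent 2 is in range but below threshold; agent 3 is out of range.
print(query_neighborhood((0, 0), radius=2, threshold=0.5))  # [1]
```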
Mobility: The mobility of agents is a particularly useful feature, not least for spatial simulations. Agents can roam the space in which they are situated within a model. Combined with the agents' ability to interact and their intelligence, this permits a vast range of potential uses.
Adaptation/Learning: Agents can also be designed to be adaptive, which can produce Complex Adaptive Systems (CAS; Holland, 1995). Agents can be designed to alter their states (limited to a given threshold if required) depending on their current states, giving agents a form of memory or learning, though not necessarily the most efficient one possible. Agents can adapt at the individual level (e.g. learning alters the probability distribution of rules that compete for attention), or the population level (e.g. learning alters the frequency distribution of agents competing for reproduction).