
Information Age Transformation Series

Complexity
Theory
and
Network Centric Warfare

James Moffat
About the CCRP
The Command and Control Research Program (CCRP)
has the mission of improving DoD’s understanding of the
national security implications of the Information Age.
Focusing upon improving both the state of the art and the
state of the practice of command and control, the CCRP
helps DoD take full advantage of the opportunities afforded
by emerging technologies. The CCRP pursues a broad
program of research and analysis in information
superiority, information operations, command and control
theory, and associated operational concepts that enable
us to leverage shared awareness to improve the
effectiveness and efficiency of assigned missions. An
important aspect of the CCRP program is its ability to
serve as a bridge between the operational, technical,
analytical, and educational communities. The CCRP
provides leadership for the command and control research
community by:
• articulating critical research issues;
• working to strengthen command and control research
infrastructure;
• sponsoring a series of workshops and symposia;
• serving as a clearing house for command and control
related research funding; and
• disseminating outreach initiatives that include the
CCRP Publication Series.
This is a continuation in the series of publications
produced by the Center for Advanced Concepts and
Technology (ACT), which was created as a “skunk works”
with funding provided by the CCRP under the auspices of
the Assistant Secretary of Defense (NII). This program has
demonstrated the importance of having a research
program focused on the national security implications of
the Information Age. It develops the theoretical
foundations to provide DoD with information superiority
and highlights the importance of active outreach and
dissemination initiatives designed to acquaint senior
military personnel and civilians with these emerging
issues. The CCRP Publication Series is a key element of
this effort.

Check our Web site for the latest CCRP activities and publications.

www.dodccrp.org
DoD Command and Control Research Program
Assistant Secretary of Defense (NII)
&
Chief Information Officer
Mr. John P. Stenbit
Principal Deputy Assistant Secretary of Defense (NII)
Dr. Linton Wells, II
Special Assistant to the ASD(NII)
&
Director, Research and Strategic Planning
Dr. David S. Alberts

Opinions, conclusions, and recommendations expressed or implied within are solely those of the authors. They do not necessarily represent the views of the Department of Defense, or any other U.S. Government agency. Cleared for public release; distribution unlimited.

Portions of this publication may be quoted or reprinted without further permission, with credit to the DoD Command and Control Research Program, Washington, D.C. Courtesy copies of reviews would be appreciated.

Library of Congress Cataloging-in-Publication Data

Moffat, James, 1948-
Complexity theory and network centric warfare / James Moffat.
p. cm. -- (Information age transformation series)
ISBN 1-893723-11-9 (pbk. : alk. paper)
1. War--Mathematical models. 2. Information warfare. 3. Computational
complexity. I. Title. II. Series.
U21.2.M584 2003
355.4'8--dc21
2003000365
September 2003
This book is dedicated to my wife Jacqueline and my children,
Louise and Katherine.
NOTES TO THE READER
Although I use the term Complexity Theory as if it were a coherent
body of scientific theory, this area of research is in fact still
both young and evolving. I use it therefore as a shorthand
term to cover a number of areas, each with its own distinct
heritage. Broadly, it covers fractal structures, nonlinear
dynamical systems, and models of self-organisation and self-
organised criticality.
The research on which this book is based could not have been
carried out without the help of a number of other people.
Their contributions are, I hope, suitably acknowledged in the
text. I would like to thank particularly Walter Perry (RAND),
Susan Witty (Dstl), David Rowland, and Maurice Passman for
their contributions. I am also most grateful to Professor Henrik
Jensen for contributing the Foreword.
TABLE OF CONTENTS

Foreword

CHAPTER 1
COMPLEXITY IN NATURAL AND ECONOMIC SYSTEMS
Open, Dissipative Structures
The Far-from-Equilibrium State
Self-Organisation in Nature–An Example
Clustering in Space and Time
Movement of a Boundary
An Example of Complex Behaviour and Fractal Time Series in Economics
Summary

CHAPTER 2
CONCEPTS FOR WARFARE FROM COMPLEXITY THEORY
Forest Fires, Clusters of Trees, and Casualties in War

CHAPTER 3
EVIDENCE FOR COMPLEX EMERGENT BEHAVIOUR IN HISTORICAL DATA
Introduction
Time Series Behaviour
Further Historical Data on the Processes of “Irruption” and Breakthrough
The Fractal Front of Combat
Power Law Relationships in Combat Data

CHAPTER 4
MATHEMATICAL MODELLING OF COMPLEXITY, KNOWLEDGE, AND CONFLICT
Introduction
Control and Fractal Dimension
Wargames as Open Systems Sustained by Knowledge Flowing Across the Boundary
Wargaming with FASTHEX
The Decision Problem
A Simple Example
Multiple Sweeps
False Target Detections/Identifications
Knowledge Representation
Combat Cycle Knowledge
Quantifying the Benefit of Collaboration Across an Information Network

CHAPTER 5
AN EXTENDED EXAMPLE OF THE DYNAMICS OF LOCAL COLLABORATION AND CLUSTERING, AND SOME FINAL THOUGHTS
Clustering and Swarming
Final Thoughts

APPENDIX
OPTIMAL CONTROL WITH A UNIQUE CONTROL SOLUTION
Pontryagin’s Maximum Principle
Determining the Extremal Controls
Uniqueness of the Extremal Control for a Linear System

ABOUT THE AUTHOR

CATALOG OF CCRP PUBLICATIONS
LIST OF FIGURES

Figure 1.1: Two Horizontal Plates Containing a Layer of Fluid
Figure 1.2: Schematic of an Open System
Figure 1.3: The Lattice of Species Interactions in a Model Ecosystem
Figure 1.4: Movement of the Ecosystem towards a Self-Organised Critical Point
Figure 1.5: Local Pinning of a Fluid Boundary
Figure 1.6: The Roughness “W” of an Interface
Figure 2.1: Information Leading to Emergent Behaviour
Figure 3.1: 2nd Armoured Division Data
Figure 3.2: 2nd Armoured Division–Power Spectrum Prediction
Figure 3.3: 2nd Armoured Division–Nonlinear Prediction
Figure 3.4: 2nd Armoured Division–Neural Net Prediction
Figure 3.5: 2nd Armoured Division–SOC Prediction
Figure 3.6: 9th Armoured Division Data
Figure 3.7: 9th Armoured Division–Neural Net Prediction
Figure 3.8: 9th Armoured Division–Power Spectrum Prediction
Figure 3.9: 9th Armoured Division–SOC Prediction
Figure 3.10: The Statistics of Linear Irruption
Figure 3.11: The Statistics of Radial Irruption
Figure 4.1: Area of Operations
Figure 4.2: Five Configuration Classes
Figure 4.3: Plot of y = f(x) and y = x - weak control
Figure 4.4: Plot of y = g(x) and y = x - strong control
Figure 4.5: Recursive Calculation of the Probability of Weak Control
Figure 4.6: FASTHEX Game Cycle Sequence
Figure 4.7: A Wargame as an Open Dynamical Process
Figure 4.8: An Example Two-Cycle Game
Figure 4.9: BLUE Commander’s Allocation Strategy
Figure 4.10: BLUE Commander’s Situation Assessment Problem
Figure 4.11: Developing a Refined Estimate
Figure 4.12: Refined Probability Assessments
Figure 4.13: Knowledge and Entropy for Example 1
Figure 4.14: Experimental Assessment of Campaign Level Knowledge and Attrition of Enemy Forces
Figure 4.15: Experimental Assessment of the Effect of Campaign Level Knowledge on Own Force Casualties
Figure 4.16: The Critical Path
Figure 4.17: Parallel Nodes on the Critical Path
Figure 4.18: The Logistics S-Shaped Curve
Figure 5.1: Screenshot of the Start of a Typical ISAAC Simulation Run
Figure 5.2: Nearest and Next Nearest Neighbour Clustering
Figure 5.3: Largest Cluster Size as a Function of Simulated Time (First Iteration, Red Agents)
Figure 5.4: Largest Cluster Size as a Function of Simulated Time (First Iteration, Blue Agents)
Figure 5.5: Largest Cluster Size as a Function of Time (40th Iteration, Red)
Figure 5.6: Largest Cluster Size as a Function of Time (40th Iteration, Blue)
Figure 5.7: Frequency Distribution of the Largest Cluster Size for Red Agents
Figure 5.8: Frequency Distribution of the Largest Cluster Size for Blue Agents
Figure 5.9: Distribution of Cluster Sizes (2nd Replication, Red Agents)
Figure 5.10: Distribution of Cluster Size for Red Agents
LIST OF TABLES

Table 1.1: Decade-by-Decade Behaviour of Daily Returns from the Dow Jones Index
Table 2.1: Relation Between Complexity and Information Age Warfare
Table 3.1: Geometric Mean Area/Attack Front at Breakthrough (Miles)
Table 3.2: Geometric Mean Area Per Day/Attack Front at Breakthrough (Miles/Day)
Table 3.3: Geometric Mean √Area Per Day at Breakthrough (Miles/Day)
Table 4.1: Refined Probability Assessments: Example 1
Table 4.2: Multiple Sweeps Case 1
Table 4.3: Multiple Sweeps Case 2
Table 4.4: Total Knowledge: Example 1
Table 4.5: Total Knowledge
FOREWORD

For the last couple of decades, attempts have been made to develop some general understanding, and ultimately a theory, of systems
that consist of many interacting components
and many hierarchical layers. It is common to
call these systems complex because it is impossi-
ble to reduce the overall behaviour of the
system to a set of properties characterising the
individual components. Interaction is able to
produce properties at the collective level that
are simply not present when the components
are considered individually. As an example, one
may think of mutuality and collaboration in
ecology. The function of any ecosystem depends
crucially on mutual benefits between the differ-
ent species present. One example is the relation
between legumes, such as peas and beans, and
their associated nitrogen-fixing bacteria: the
bacteria collect nitrogen for the legume, which
in turn produces carbohydrates and other
organic material for the bacteria. Clearly this crucial arrangement cannot be studied by focusing on, say,
the legume and neglecting the bacteria; the ecological func-
tion emerges first when the different components are brought
together and interaction is taken into account.
Another important feature of complex systems is their sensi-
tivity to even small perturbations. The same action is found
to lead to a very broad range of responses, making it exceed-
ingly difficult to perform prediction or to develop any type of
experience of a “typical scenario.” This must necessarily lead
to great caution: do not expect what worked last time to work
this time. The situation is exacerbated since real systems
(ecological or social) undergo adaptation. This implies that
the response to a given strategy most likely makes the strat-
egy redundant. An example is the effect of using the same
type of antibiotic against a given type of bacteria. Evolution
soon ensures that the bacteria develop resistance and make
the specific type of antibiotic useless. That complex systems
adapt and change their properties fundamentally as a result
of the intrinsic dynamics of the system is clearly extremely
important. Nevertheless, for the sake of simplicity adaptation
is often neglected in model studies. Sometimes assuming the
existence of a stationary state might be justified (e.g., if one is
interested in “toy” models of the flow of granular material
under a controlled steady input of grains). But if one is deal-
ing with more complex situations such as in ecology, and
even more when considering social and political systems,
ignoring adaptation is very likely to lead to erroneous
conclusions.
We know from studies of Self-Organised Critical models,
which the present book alludes to (for more see P. Bak, How
Nature Works, Oxford University Press, 1997 and H.J. Jensen,
Self-Organized Criticality, Cambridge University Press, 1998), that the correlations and general behaviour exhibited by these
model systems are entirely determined by the assumed
boundary conditions or the applied drive. The lesson to be
learned from this is that complex systems cannot be studied
independently of their surroundings. Understanding the
behaviour of a complex system necessitates a simultaneous
understanding of the environment of the system. In model
studies, one often assumes that the surroundings can be repre-
sented by one or the other type of “noise,” but this is just a
trick that allows one to proceed with the analysis without
understanding the full system under consideration. It is very
important to appreciate that the “drive” or the “noise” are
equally crucial to the understanding, as is the analysis of the
“system” itself. One should bear in mind that the separation
into system, drive, noise, surroundings, etc. is rather arbitrary
and is far from representing a complete analysis.
From these considerations, we see that it is vitally important
to consider warfare as a complex system that is linked and
interacts (in a coevolving way) with the surrounding socio-
economical and political context. From that perspective, the
present book is a “work in progress” and a preliminary first
step along the road in helping to analyse and structure these
difficult and serious issues. Forgetting that war and warfare
are an intimate part of a much larger complex system will
lead to incomplete and even dangerously incorrect conclu-
sions. Applying the approach of Complexity Theory to
warfare leads one to the self-consistent realisation that war-
fare will have to be analysed in its larger context. Further
work will need to examine how coevolution across the entire
network of military, socioeconomic, and political interac-
tions leads firstly to emergent effects at higher levels, and of
equal importance how such effects lead to coevolution at the higher level. It will also be important to consider the robust-
ness of such networks, and their vulnerability to damage.

Henrik Jeldtoft Jensen


Professor of Mathematical Physics
Department of Mathematics
Imperial College, London

CHAPTER 1

COMPLEXITY IN
NATURAL AND
ECONOMIC SYSTEMS1

In this chapter we consider some of the key ideas of Complexity Theory as applied to
natural systems. Having established these key
ideas, in Chapter 2 we begin to see how these
ideas map across, in a broad conceptual sense, to
the dynamics of conflict in an Information Age
environment. In later chapters, we will look in
more detail at the evidence and the type of mod-
elling that emerges from this conceptual
connection. In an organisational context, we
argue that complexity provides an explanatory
framework of interrelationships, both metaphor-
ically and analogously, of how individuals and
military organisations interact, relate, and evolve
within a larger “ecosystem.” Complexity explains why interventions may have unanticipated consequences, but also
explains how combat effects follow from these consequences.
The intricate interrelationships of elements within a complex
system give rise to multiple chains of dependencies. Change
happens in the context of this intricate intertwining at all
scales. We become aware of change only when a different pat-
tern becomes discernible. But before change at a macro level
can be seen, it is taking place at many micro levels simulta-
neously. Hence, microcomponent interaction and change lead to macrosystem evolution.

1 The contribution of Dr. Maurice Passman to this chapter is gratefully acknowledged.

In a previous book,2 we considered some of the issues to be
addressed at the political/military level as a consequence of
such emergent behaviour and the “resultant likelihood of
complex and unexpected interactions, arising from previ-
ously unexpected sources.” The diligent reader is directed to
that work for further discussion of these issues. Here, we sim-
ply wish to add weight to the points made by Professor Jensen
in his Foreword to this present work. In all that follows, the
recursion of the process up to this political/military level
must be kept in mind, and will be one of the key areas of
future research.

2 Chapter 1 of: Moffat J (2002). Command and Control in the Information Age. The Stationery Office, London, UK.

Our lead is taken from current military doctrinal thought
both in the UK and in the United States (particularly the
U.S. Marine Corps [1]). The Chief Analyst of the UK
Defence Science and Technology Laboratory (Dstl), Roger
Forder, makes the following point in his discussion of the
future of defence analysis [2]:


One effect of the human element in conflict situations is to bring a degree of complexity into the situation such that the emergent
behaviour of the system as a whole is extremely difficult to pre-
dict from the characteristics and relationships of the system
elements. Detailed simulation, using agent-based approaches, is
always possible but the highly situation-specific results that it
provides may offer little general understanding for carrying for-
ward into robust conclusions of practical significance. Usable
theories of complexity, which would allow understanding of
emergent behaviour rather than merely its observation, would
therefore have a great deal to offer to some of the central problems
facing defence analysis. Indeed they might well be the single most
desirable theoretical development that we should seek over the
next few years.
A similar thought was aired at a Royal United Services Insti-
tute (RUSI) conference on future Intelligence, Surveillance,
Target Acquisition, and Reconnaissance (ISTAR) [3]. Vice
Admiral Cebrowski,3 U.S. Navy, centred his keynote address
on Network Centric Warfare (NCW) as the capstone concept
for the U.S. Navy after Next. He described it in terms of the
achievement of Information Superiority with characteristics of
gross asymmetries and a diversity of “players.” These ideas are
explicitly derived in his description from the new physics of
nonlinearity, complexity, and chaos as exemplified by the
Santa Fe Institute corpus of ideas. The relationship between
complexity and “information-based” warfare is (as he
described it) less deterministic and more emergent; less focused
on the physical, and more behavioural; less focused on things,
and more on relationships. Command and Control (C2) emphasises speed, sharing, and decentralisation. In summary,
ADM Cebrowski defined NCW as the robust networking of
well-informed, geographically dispersed forces.

3 Now head of the Office of Force Transformation, The Pentagon, U.S. DoD.

Where do these ideas come from?
In looking at where these key ideas of complexity come from,
let us make a start by considering systems and their behaviour
in the natural world. In the classical view, such physical or bio-
logical processes are reducible to a few fundamental
interactions. This leads to the idea that under well-defined con-
ditions, a system governed by a given set of laws will follow a
unique course (like the planets of the solar system). Moreover, a
slight change in the cause will likewise produce a slight change
in the effects (i.e., the system is linear in nature). Recently, an
increasing amount of experimental data challenging this idea
has become available and this imposes a new attitude concern-
ing the description of nature. In natural systems, under
appropriate conditions, a multitude of self-organisation phe-
nomena on a macroscopic scale (a scale of an order of
magnitude larger than the range of fundamental interactions)
in the form of spatial or temporal patterns may be generated.

A SIMPLE EXAMPLE
To illustrate this, let us consider a simple thermodynamic
thought experiment. Imagine a layer of fluid limited by two
horizontal parallel plates whose lateral dimensions are much
longer than the width of the layer, as shown in Figure 1.1.

Figure 1.1: Two Horizontal Plates Containing a Layer of Fluid


Left to reach equilibrium, the fluid will rapidly tend to a homogeneous state that is statistically identical throughout. The homogeneity of this system extends to all of its properties, particularly
to its temperature, which will be the same at all parts of the
fluid and equal to the temperature of the limiting plates or,
alternatively, to the temperature of the “external” world. All of
these properties are characteristic of a system in a particular
state, the state of equilibrium, for which there is neither bulk
motion nor temperature difference with the outside world.
What occurs if, for example, the temperature on a small sec-
tion on one of the plates is temporarily perturbed? At
equilibrium, this temperature perturbation has no influence,
since the temperature rapidly becomes uniform again and
equal to its initial value. In other words, the perturbation dies
out; the system keeps no track of it. Such a state is said to be
asymptotically stable.
From the standpoint of a very small observer inside the sys-
tem, not only does the homogeneity of the fluid make it
impossible for the development of an intrinsic concept of
space, but also the stability of the state of equilibrium eventu-
ally makes all time instances identical. It is therefore
impossible for this observer to develop an intrinsic concep-
tion of correlation or coincidence (e.g., things happening at
the same time, or in the same place). We can increase the
complexity of the system by, for instance, heating the fluid
layer from below. In doing this, we communicate energy to
the system in the form of heat. Moreover, as the temperature
of the lower plate is now higher than the upper, the equilib-
rium condition is violated. In other words, by applying
external constraints to the system, we do not permit the system
to reach equilibrium. The presence of an external constraint
therefore implies energy flux and vice-versa.

Now suppose at first that the constraint is weak, i.e. the change
in temperature, ΔT, is small. The system will again adopt a
simple and unique state in which the only active process is a
transfer of heat from the lower to the upper plate, from which
heat is lost to the external world. The only difference from the
state of equilibrium is that temperature, and consequently den-
sity and pressure, are no longer uniform. They vary from warm
regions to cold regions in an approximately linear fashion. This
phenomenon is known as thermal conduction. In this new state
that the system has reached in response to a constraint, stability
will prevail again and the behaviour will eventually be as simple
as at equilibrium. However, by removing the system from equi-
librium further and further, through an increase in ΔT, we
observe that suddenly, at a value of ΔT that we will call critical,
matter begins to perform a bulk movement. Moreover, this
movement is far from random; the fluid is structured in a series
of small structures known as Benard cells.
Owing to thermal expansion, the fluid closer to the lower plate
is characterised by a lower density than that nearer the upper
plate. This gives rise to a gradient of density that opposes the
force of gravity. This configuration is thus potentially unstable.
Consider a small volume of the fluid near the lower plate.
Imagine that this volume is displaced upward by a perturba-
tion. This volume, now in a colder and hence denser region,
will experience an upward Archimedes force, amplifying the
ascending movement further. If, on the other hand, a small
droplet initially close to the upper plate is displaced down-
ward, it will penetrate an environment of low density and the
Archimedes force will tend to amplify the initial descent. The
fluid thus generates the observed currents. The stabilising
effect of viscosity, which generates an internal friction oppos-
ing movement, counteracts the destabilising effects. This, and
thermal conduction, which tends to average any temperature difference between the displaced droplet and its environment,
explains why currents do not appear as soon as ΔT is not
strictly zero.
Benard cells also show the complexity of movement. The cells
unfold along the horizontal axis, adopting successively right-
handed or left-handed rotation. Our very small observer can
now locate his position in space by considering the rotation of
the cell he occupies and by counting the number of cells he
passes through. The emergence of this notion of space is
known as symmetry breaking. When ΔT is below the critical
value, the homogeneity of the fluid in the horizontal direction
renders its different parts independent of each other. In con-
trast, beyond the threshold, it is as if each volume element is
watching the behaviour of its neighbours and is taking this into
account in order to play its role adequately and to participate
in the overall pattern. This suggests the existence of correlations
of statistically reproducible relations between distant parts
of the system. The characteristic space dimension of a Benard
cell is in the millimetre range, whereas the characteristic space
scale of the intermolecular forces is in the Angstrom range.
That large numbers of particles can behave in a coherent fash-
ion at this long range, despite random thermal motion, is one
of the principal properties characteristic of such self-organisation
and emergent complex behaviour.
This experiment is reproducible; the same convection patterns
will appear at the same threshold value and the process is sub-
ject to a strict determinism. However, the direction of the
rotation of the cells is unpredictable. The form of the particu-
lar perturbation that prevails at the moment of the experiment
will decide whether a given cell is right- or left-handed. When
the constraint is sufficiently strong, several solutions are possible for the same parameter values and chance alone will
decide which of these solutions is realised. In this way, the sys-
tem has been perturbed from a state of equilibrium or near-
equilibrium to a state of self-organisation, with a number of pos-
sible modes of behaviour.
What happens to the Benard cell system when the thermal
constraint is increased beyond this first threshold? For some
range of values the Benard cells will be maintained globally,
but some of their specific characteristics will be modified. Fur-
ther constraint induces the system to move beyond another
critical point and turbulence is witnessed. Note that all of these
critical behaviours are different from the phase changes we
normally associate with closed thermodynamic systems. The
reason behind this is that a nonequilibrium constraint is being
applied. For example, the dendritic structure associated with
snowflakes has nothing to do with the structure of the underly-
ing ice-crystal lattice. The scale, size, and spacing of the
emergent structure are an order of magnitude larger.
To summarise, nonequilibrium has enabled the system to
transform part of the energy communicated from the envi-
ronment into an ordered behaviour of a new type: the
dissipative structure. This regime is characterised by symmetry
breaking, multiple modes of behaviour, and correlation. Such a system
is called “open” since it is open to the effect of energy or infor-
mation flowing into and out of the system. It is also called
“dissipative” because of such energy flows, and the resultant
dissipation of energy.

OPEN, DISSIPATIVE STRUCTURES


Consider then a system embedded in an environment with
which it communicates through the exchange of certain properties that we shall call fluxes. Such a system is called an open system (in contrast to a closed or isolated system). Figure 1.2 is
a schematic representation of such an open system, communi-
cating with the environment through the exchange of such
properties as mass, energy, or information. The amount transported per unit surface per unit time is the flux of the corresponding
property across the system. In our simple example above, the
amount of heating is the flux of energy into the system.

Figure 1.2: Schematic of an Open System

As a result of these exchanges, the variables describing the instantaneous state of the system, $\{X_i\}$, vary in time and attain
values typically different from those characterising the state of
the environment {Xi,e}. Whatever the detailed interpretation
of Xi might be, the evolution of the system under consideration
may be described in the following general form:
Rate of change of the system state = function of (system state
and control variables).
Thus we have:

$$\frac{dX_i}{dt} = F_i(X_1,\ldots,X_n;\ \lambda_1,\ldots,\lambda_m), \qquad i = 1,\ldots,n$$
where $F_i$ denote the laws governing the rate of change of the system, and $\lambda_1,\ldots,\lambda_m$ are a set of parameters present in the
problem, which can be modified by the external world. These
quantities are known as control parameters. Under certain condi-
tions, this relation will have a single solution that minimises
some measure of negative utility (which we call a loss function).
This solution is then a unique optimal control for the system. For
each time t, it defines an optimal value for the settings of the
control parameters $\lambda_1,\ldots,\lambda_m$.

AN EXAMPLE
If we think of a guided missile attempting to manoeuvre
towards a target, the measure of loss is the miss distance rela-
tive to the aim point. The control parameters are the settings
for the missile fins at a given time t. For simple forms of linear
guidance (e.g., early forms of laser-guided bombs), this leads to
what is called bang-bang control, where the missile fins “bang”
from one extreme setting to another in order to keep the mis-
sile on course. The Appendix goes into this in more detail and
shows that such solutions correspond to maximising or mini-
mising a Hamiltonian function. This is due to Pontryagin’s
maximum principle. Applied to a linear control system, this
maximum principle leads to the solution of bang-bang control.
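
To make this concrete, here is a minimal numerical sketch (our illustration, not taken from the text or the Appendix) of bang-bang control for the simplest linear plant, a double integrator x'' = u with |u| ≤ 1. The maximum principle forces the extremal control to one of its two limits, with at most one switch, on the standard switching curve x = -v|v|/2:

import numpy as np

# Minimal sketch (illustrative, not from the text): time-optimal bang-bang
# control of a double integrator x'' = u with |u| <= 1, driving the state
# to the origin.  The maximum principle makes the optimal control extremal
# (+1 or -1), switching sign on the curve x = -v*|v|/2.

def bang_bang_u(x, v):
    """Extremal control for state (position x, velocity v)."""
    switch = x + 0.5 * v * abs(v)      # switching function
    return -1.0 if switch > 0 else 1.0

def time_to_origin(x, v, dt=1e-3, t_max=20.0):
    t = 0.0
    while (abs(x) > 1e-2 or abs(v) > 1e-2) and t < t_max:
        u = bang_bang_u(x, v)          # the "fins" bang between extremes
        v += u * dt
        x += v * dt
        t += dt
    return t

print(round(time_to_origin(1.0, 0.0), 2))   # close to the optimal time of 2
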
A characteristic feature of many of the systems encountered in
nature, however, is that the F’s are complicated nonlinear func-
tions of the X’s. The equations of evolution of this type of
system should then admit, under certain conditions, several
solutions (rather than just the one optimal solution) since a
multiplicity of solutions is the most typical feature of a nonlin-
ear equation. Our assumption will be that these solutions
represent the various modes of behaviour of the underlying system.


MODES OF BEHAVIOUR IN NONLINEAR SYSTEMS


How do these different modes of behaviour arise in such a
nonlinear system? Thermodynamic equilibrium is character-
ised by detailed balance, i.e.
Probability of a “direct” process = Probability of a “reverse” process
We can understand that in such a state, any attempt at diversi-
fication and self-organisation will be smeared out immediately:
equilibrium is a state of full homeostasis, characterised both by
uniqueness and robust stability properties. Our aim is to
extend our ideas of equilibrium to the nonequilibrium dynam-
ics of an open, dissipative system.
The most useful view of equilibrium is as follows. We represent
the evolution of the system in a space spanned by the state vari-
ables (phase space). An instantaneous state of the system is thus
represented in phase space by a point. As the system evolves
over time, a succession of such states is produced, giving rise to
a curve in phase space, which is called a phase space trajectory. In a
dissipative dynamical system, as time progresses, the phase
space trajectory will tend to a limit representative of the regime
reached by the system when all transients die out. We call this
regime the attractor. The attractor representing an equilibrium
position is unique and describes a time-independent situation.
This gives a phase space point towards which all possible histo-
ries converge monotonically. The state of equilibrium is
therefore a universal point attractor. The goal of self-organisation is
thus the search for new attractors that arise when a system is
driven away from its state of equilibrium. For example, consider
the type of coupled chemical system studied by Ilya Prigogine
[4] (for which he received the Nobel Prize in Chemistry):

$$A + 2X \;\underset{k_2}{\overset{k_1}{\rightleftharpoons}}\; 3X$$

$$X \;\underset{k_4}{\overset{k_3}{\rightleftharpoons}}\; B$$

Here the concentration [x] of product X is taken to be the only state variable, it being understood that A and B are continuously supplied from or removed to the outside to maintain fixed concentrations. At equilibrium, detailed balance implies the rate equations:

$$k_1[a][x]^2 = k_2[x]^3$$

$$k_3[x] = k_4[b]$$

These relations fix the equilibrium value $[x_e]$ of x uniquely and impose a condition on the concentrations of constituents A and B:

$$[x_e] = \frac{k_4[b_e]}{k_3} = \frac{k_1[a_e]}{k_2}, \qquad \frac{[b_e]}{[a_e]} = \frac{k_1 k_3}{k_2 k_4}$$

In a dissipative stationary state far from equilibrium, it is not necessary for each individual reaction to balance in both directions. Cancelling the overall effect of the two forward reactions by that of the backward reactions is sufficient, and this yields a cubic equation:

$$-k_2[x_s]^3 + k_1[a][x_s]^2 - k_3[x_s] + k_4[b] = 0.$$

This equation may have up to three solutions (hence three possible modes of behaviour) for certain values of [a] and [b]. It
may therefore be said that nonequilibrium reveals the potenti-
alities hidden in the nonlinearities, which remain “dormant”
at or near equilibrium.
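
The following short sketch illustrates this numerically; the rate constants and reservoir concentrations are arbitrary values (not from the text) chosen so that three positive roots appear:

import numpy as np

# Steady states of the scheme above satisfy the cubic
#   -k2*x^3 + k1*a*x^2 - k3*x + k4*b = 0.
# The parameter values here are arbitrary (chosen so that three positive
# roots, i.e. three coexisting modes of behaviour, appear far from
# equilibrium); they are not taken from the text.
k1, k2, k3, k4 = 3.0, 1.0, 11.0, 6.0
a, b = 2.0, 1.0                      # [a], [b] held fixed by the environment

roots = np.roots([-k2, k1 * a, -k3, k4 * b])
steady = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
print(steady)                        # -> [1. 2. 3.]: three steady states [x_s]
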
The monotonic character of the approach to the state of equi-
librium implies that the evolution laws of the system should
obey, in the neighbourhood of equilibrium, very particular condi-
tions. Introducing the deviations of Xi from the equilibrium
values, Xi,e:

$$x_i = X_i - X_{i,e}$$

the evolution of {xi} near equilibrium can then be written:

$$\frac{dx_i}{dt} = -\sum_j \Gamma_{i,j}\left(\frac{\partial \Phi}{\partial x_j}\right).$$
$\Phi$ is a thermodynamic potential taking its minimum at equilibrium, and $\{\Gamma_{i,j}\}$ is a symmetric matrix. This symmetry can be
traced back to the property of detailed balance or the property
of the invariance of the equilibrium state to time reversal. We
can see from this that the system response is essentially linear
near to equilibrium (i.e., small changes lead to small effects).

THE FAR-FROM-EQUILIBRIUM STATE


The search for a generalised thermodynamic potential Φ in
the nonlinear range well away from the equilibrium state has
attracted a great deal of attention, but these efforts have, so
far, not made much progress. Typically, therefore, beyond the
linear domain for such irreversible processes, the above con-
trol equations are expected to break down. A first consequence
is that the steady state point attractor of the system (i.e., the point to which the system state moves as the system evolves over time), extrapolating the state of equilibrium as the dis-
tance from equilibrium is increased, can now be approached
through damped oscillations. This behaviour heralds a still
more interesting possibility in which the oscillation eventually
becomes sustained. Topologically, this implies the emergence
of a new one-dimensional attractor in phase space. The point
attractor for the state of the system is essentially replaced by a
circle. In the limit, the system moves endlessly around this cir-
cle, which is thus known as a limit cycle.
By allowing the intrinsic nonlinearity to be manifested beyond the regime of detailed balance, nonequilibrium can also lead to
the coexistence of multiple attractors in state space. The state
space can then be carved up into a set of basins. Each of these
corresponds to the set of states that, if the system were to start
from there, would evolve to a particular attractor. These are
known as the basins of attraction. The ridges separating these
basins of attraction are called separatrices. The coexistence of
multiple attractors constitutes the natural mode of systems
capable of showing adapted behaviour and of performing
regulatory tasks. We would thus expect to see the system stay-
ing within one basin of attraction (corresponding to resistance
to change) and then at some point switching between differ-
ent attractors (corresponding to a change in the long-term
mode of behaviour) as we further vary the initial state of the
system. The existence of low-dimensional attractors (points and circles) suggests the possibility of higher dimensional
attracting objects in phase space. These model multiperiodic
and chaotic behaviour, which is observed under appropriate
experimental conditions.
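
A standard concrete example of such a limit cycle (ours, not one used in the text) is the Van der Pol oscillator; the sketch below integrates it from starting points inside and outside the cycle and shows both trajectories settling onto the same closed curve:

import numpy as np

# Illustrative sketch: the Van der Pol oscillator
#   x'' - mu*(1 - x^2)*x' + x = 0
# possesses a unique limit-cycle attractor.  Trajectories started well
# inside and well outside the cycle both end up on the same closed curve.
def step(x, v, mu=1.0, dt=1e-3):
    return x + v * dt, v + (mu * (1.0 - x * x) * v - x) * dt

for x0 in (0.1, 4.0):                       # inside / outside the cycle
    x, v = x0, 0.0
    for _ in range(100_000):                # let transients die out
        x, v = step(x, v)
    print(x0, '->', round(x, 2), round(v, 2))   # both on the limit cycle
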
Nonequilibrium phenomena show a variety of behaviours and
therefore correspond to the movements of the system towards different attractors. The simplest mechanism to depict this is the bifurcation diagram, where a single control parameter (the
thermal gradient in the case of Benard cells) affects the dynam-
ics of the system.

BIFURCATIONS AND UNIVERSALITY LIMITS


Consider a system described by a set of evolution laws of the
form of the equation:

$$\frac{dX_i}{dt} = F_i(X_1,\ldots,X_n;\ \lambda_1,\ldots,\lambda_m), \qquad i = 1,\ldots,n$$
where, as before, Fi are the rate laws, and λi are the control
parameters. In a typical natural phenomenon, the number of
variables n is expected to be very high. This will considerably
complicate the search for all possible solutions. However, sup-
pose that by experiment we know one of the solutions. By a
standard method, known as Linear Stability Analysis, we can then
determine the parameter values for which this solution
regarded as the reference state switches from asymptotic sta-
bility to instability.

ROBUSTNESS AND LINEAR STABILITY ANALYSIS


Stability or “robustness to change” is essentially determined by
the response of the system to perturbations. It is therefore nat-
ural to transform the dynamical laws into a form in which the
perturbations appear explicitly:
$$X_i(t) = X_{i,s} + x_i(t)$$

and

$$\frac{dx_i}{dt} = F_i(\{X_{i,s} + x_i\}, \lambda) - F_i(\{X_{i,s}\}, \lambda).$$

These equations are homogeneous in the sense that the right-hand side vanishes if all $x_i = 0$. Expanding, we may write:

$$\frac{dx_i}{dt} = \sum_j L_{ij}(\lambda)\,x_j + h_i(\{x_j\}, \lambda), \qquad i = 1,2,\ldots,n$$

where $L_{ij}$ are the coefficients of the linear part and $h_i$ are the
nonlinear part. It is assumed that the asymptotic stability of
the reference state (i.e., X = Xs or x = 0) of the system is iden-
tical to that of the linearised part:

$$\frac{dx_i}{dt} = \sum_j L_{ij}(\lambda)\,x_j, \qquad i = 1,2,\ldots,n.$$

This is reasonable provided that the perturbation is not too large and the system is “well behaved.”
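
The procedure can be sketched numerically. The two-variable system below is a hypothetical illustration (the normal form of a Hopf bifurcation, not a system from the text): we build the linearised matrix L_ij by finite differences at the reference state and inspect the real parts of its eigenvalues as the control parameter λ varies:

import numpy as np

# Sketch of Linear Stability Analysis on an illustrative system (the normal
# form of a Hopf bifurcation; not from the text).  The reference state is
# asymptotically stable while every eigenvalue of the linearised matrix
# L_ij = dF_i/dx_j has negative real part, and loses stability when a pair
# of (oscillatory) eigenvalues crosses into the right half plane.
def F(X, lam):
    x, y = X
    r2 = x * x + y * y
    return np.array([lam * x - y - x * r2,
                     x + lam * y - y * r2])

def jacobian(Xs, lam, eps=1e-6):
    n = len(Xs)
    L = np.zeros((n, n))
    for j in range(n):
        dX = np.zeros(n); dX[j] = eps
        L[:, j] = (F(Xs + dX, lam) - F(Xs - dX, lam)) / (2 * eps)
    return L

Xs = np.array([0.0, 0.0])                    # reference state x = 0
for lam in (-0.5, 0.5):                      # control parameter
    eigs = np.linalg.eigvals(jacobian(Xs, lam))
    print(lam, np.round(eigs, 3),
          'stable' if np.all(eigs.real < 0) else 'unstable')

Below the critical value the oscillatory perturbations decay back to the reference state; beyond it they grow, and the new mode of behaviour is the limit cycle described earlier.
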
In general, a multivariate system gives rise to a wide spectrum
of values for the rate of change of perturbations. For a unique
control parameter value $\lambda = \lambda_c$, two cases can be distinguished. First,
the perturbations are nonoscillatory and the bifurcations (i.e.,
alternative modes of behaviour) will correspond to steady-
state point attractors, or secondly, the perturbations are oscil-
latory and the bifurcations will correspond to time-periodic
solutions in the form of limit cycles. More intricate solutions
can also be envisaged leading to secondary, tertiary, or higher
order interactions; however, the complete stable unfolding of
the problem remains an open question. It is left to other
methods, such as explicit simulation using cellular automata
(i.e., agent-based simulation) to attempt to solve the problem
of describing the dynamic behaviour of nonequilibrium sys-
tems. The use of such a simulation approach can be described
as “experimental mathematics.”


SELF-ORGANISATION IN NATURE–AN EXAMPLE


Consider an ecosystem consisting of a large number of inter-
acting species, each evolving in response to the environment
created by the rest of the ecosystem (i.e., each species is coevolv-
ing). Such a system consists of many components that interact
through some kind of exchange of forces or information. In
addition to the internal interactions, the system may be driven
by some external force–natural selection in this case. The sys-
tem will now evolve over time under the influence of the
external driving forces and the internal interactions. What
happens when we observe such a system? Is there some simpli-
fying mechanism that produces a typical behaviour shared by
large classes of such systems? The mechanism, it turns out, is
that of clustering.
A simple cellular automaton model of such an ecosystem is
given by the Bak-Sneppen evolution model. It is an example of
the experimental mathematics we described earlier, as a way
of analysing the coevolution of such a complex system over
time. A description of this model of coevolution within an eco-
system was given in the last chapter of [5] as part of an
introduction to ideas from complexity mathematics and the
development of mathematical “metamodels” of future Infor-
mation Age conflict. For ease of reference, some of that
description is repeated here.

THE BAK-SNEPPEN EVOLUTION MODEL


In this automaton model, we have a d-dimensional lattice (Figure 1.3), and random numbers $f_i$ drawn independently and uniformly from the interval [0,1] occupy the lattice sites. At each update step, the extremal site (that is, the one with the smallest value of $f_i$) is chosen, and then it and its 2d immediate neighbours are
assigned new random numbers. As a model of evolution, the


values fi correspond to “fitness” values. Changing both the site
and neighbouring sites captures the process of local coevolution.
It follows from [6] that the set of such active sites is a fractal in
space-time (see particularly Figures 1 and 28 of that reference).

Figure 1.3: The Lattice of Species Interactions in a Model Ecosystem
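
The update rule just described is straightforward to simulate. The following one-dimensional sketch (our code; the lattice size, run length, and random seed are arbitrary choices) also records the gap G(s), the running maximum of the minimal fitness, which plays a central role in what follows:

import numpy as np

# Minimal sketch of the one-dimensional Bak-Sneppen model: at each step the
# least-fit site and its two lattice neighbours (periodic boundary) receive
# fresh uniform random fitnesses.  The running maximum of the minimal
# fitness is the gap G(s), which climbs towards the critical value
# f_c (approximately 0.667 in one dimension).
rng = np.random.default_rng(0)
L = 200
f = rng.random(L)                     # initial fitness values on the lattice
gap = 0.0

for s in range(200_000):
    i = int(np.argmin(f))             # extremal (least fit) site
    gap = max(gap, f[i])              # G(s): the gap opened up so far
    for j in (i - 1, i, i + 1):       # local coevolution: renew the site
        f[j % L] = rng.random()       # and its 2d = 2 neighbours

print(round(gap, 3))                  # approaches f_c ~ 0.667
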

As described by [5, Chapter 6], the approach to the critical attractor of the process (at which avalanches/clusters of all sizes are possible) is controlled by the “gap equation”:

$$\frac{dG(s)}{ds} = \frac{1 - G(s)}{L^d\,\langle S\rangle_{G(s)}}$$

where G(s) is the maximum extremal value $f_i(s)$ at time s, corresponding to the gap opened up between the existing state of the system and the zero or ground state, L is the linear size of the lattice, and $\langle S\rangle_{G(s)}$ is the average avalanche size at time-
step s. (An avalanche or cluster consists of a set of extremal val-
ues fi, each of which is a neighbour of the previous extremal
value. The size of an avalanche is the number of timesteps for which this process continues.) Schematically, we thus have the following picture of the process (Figure 1.4):

Figure 1.4: Movement of the Ecosystem towards a Self-Organised Critical Point

Figure 1.4 shows how the equation (the hatched line) approxi-
mates the self-organised movement of the system, via a series
of avalanches/clusters towards the critical attractor of the sys-
tem at which the system has optimal flexibility (in the sense
that clusters of all sizes can be created). This critical point cor-
responds to a fitness value fc. At this point, there are no fitness
values below this critical value and a flat distribution of fitness
values in the range from fc to 1.0. This is in complete contrast
to the behaviour of a closed system such as an ideal gas in an
isolated container, where the gas evolves from (for example)
being partitioned in part of the container to the equilibrium
state where it is spread equally throughout.
Such critical systems are of particular scientific interest. Sys-
tems in critical states do not have any characteristic scale and
may therefore exhibit the full range of behavioural characteris-
tics within the particular system constraints. This means that
systems at the point of criticality are in a position of optimal
flexibility in some sense, as we have noted. It could thus be argued [5] that one of the requirements of military command
is to so arrange things that the forces collaborate locally and
thus self-organise into this optimal state.
From the previous equation, the rate of change of the gap is
inversely proportional to the average avalanche size:
$$\dot{G}(s) \propto \frac{1}{\langle S\rangle_{G(s)}}.$$

Thus at the critical value $f_c$, $\langle S\rangle_{G(s)} \to \infty$, and near to criticality the average avalanche size satisfies the scaling law:

$$\langle S\rangle \propto (f_c - f_0)^{-\gamma}$$

for some exponent $\gamma$ (as has been confirmed by experiment).


Before the critical point is reached, at some time t, if f0 is the
smallest random number on the lattice in the evolution model,
then random numbers created at the next time-step will only
continue the avalanche process if they are smaller than f0.
Thus the value f0 can be viewed as the branching probability
of a random process over time. This will give information on
the avalanche size. If f0 = branching probability, then for
larger f0 we have larger avalanches. We thus assume [6] a scal-
ing relation of the form:
$$P(S, f_0) = S^{-\tau}\,g\!\left(S\,(f_c - f_0)^{1/\sigma}\right)$$

to describe the probability distribution of avalanches/clusters of size S corresponding to an extremal value $f_0$. Such a relation has also been confirmed by simulation experiments. In particular, when $f_0$ equals $f_c$, the critical value, the probability of a cluster of size S is given by $P(S, f_c) \propto S^{-\tau}$. We can think of the function g as a ‘cutoff’ corresponding to the fact that the avalanches created will have a finite size. As we move to the critical state, this cutoff dies away. We shall call $\tau$ the clustering
exponent for this coevolving ecosystem. In the context of
manoeuvre warfare, this describes the statistics of local cluster-
ing/collaboration at a transient point f0 heading towards the
critical attractor value fc. The parameters τ and σ are model
dependent and g is our “scaling function.” Note again that the
average size of the $f_0$ avalanche diverges as $f_0 \to f_c$, i.e.

$$\langle S\rangle \propto (f_c - f_0)^{-\gamma}.$$

If we mark each of the minimal sites on the lattice as it is identified as an extremal value $f_0$, then the set of marks generated
over time forms a fractal in space-time [6] as we have already
noted. Cuts of this fractal in the space direction at a given time
identify the site that is “active” (i.e., chosen as the minimal site)
at that time. Cuts in the time direction produce a fractal time
series. In Chapter 3, we show that the time series of casualties
in conflict has the characteristics, in some cases, of such a frac-
tal time series. One interpretation of such effects is that it is the
dynamics of local clustering (by one side) that is leading to
casualties to the other side. This can also be shown to occur in
cellular automata models of conflict such as the ISAAC model
produced by the U.S. Marine Corps Combat Development
Centre, as discussed in Chapter 6 of reference [5].
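
These statistics can be checked directly in simulation. The fragment below (our construction; the threshold f0, run length, and histogram bins are arbitrary) records f0-avalanches in the Bak-Sneppen model, i.e. maximal runs of timesteps during which the minimal fitness stays below f0, and histograms their sizes; near f_c the sizes should follow P(S) ~ S^(-τ) (τ is reported to be about 1.07 in one dimension) up to the cutoff g:

import numpy as np

# Sketch: measure the f0-avalanche size distribution in the Bak-Sneppen
# model.  An avalanche lasts as long as the minimal fitness stays below the
# threshold f0; near f_c its sizes S should follow P(S) ~ S**(-tau), with
# the finite-size cutoff g suppressing the largest clusters.
rng = np.random.default_rng(1)
L, f0 = 200, 0.65
f = rng.random(L)
sizes, run = [], 0

for s in range(500_000):
    i = int(np.argmin(f))
    if f[i] < f0:
        run += 1                      # avalanche in progress
    elif run:
        sizes.append(run)             # avalanche over: record its size
        run = 0
    for j in (i - 1, i, i + 1):       # renew extremal site and neighbours
        f[j % L] = rng.random()

counts, _ = np.histogram(sizes, bins=np.logspace(0, 4, 17))
print(counts)                         # heavy-tailed: clusters of all sizes
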

SELF-ORGANISED CRITICALITY
In a paper published in 1987, Bak, Tang, and Wiesenfeld [7]
first proposed the hypothesis that a system consisting of many
interacting constituents may exhibit, in certain cases, a specific
general emergent behaviour characteristic of the system. Bak
described the behaviour of this type of system by the term self-
organised criticality (SOC). Self-organisation has for many years been used to describe the ability of certain nonequilibrium systems to develop structure and patterns. The word criticality has
a very precise meaning in thermodynamics. It is used in con-
nection with phase transitions. At all temperatures other than
the transition temperature, perturbations of the system will
only locally influence system components. At the critical tem-
perature, the perturbation affects the whole system, even
though only the nearest neighbour system components inter-
act directly. The system becomes critical in the sense that all of
the members of the entire system influence each other. For the
example ecosystem above, the system self-organises itself into
the critical state corresponding to this ability of the entire sys-
tem to be influenced through the propagation of local
coevolution influences and the resultant clusters/avalanches of
species that coevolution created.

SEPARATION OF INTERNAL AND EXTERNAL TIMESCALES
Such a self-organised dynamical state requires the separation
of the external and internal timescales. For example, the stress
in the earth’s crust, built up over a period of time, is of a differ-
ent scale to the subsequent earthquake lasting merely minutes
or seconds. The force applied to an individual tectonic plate
must overcome a threshold in order to produce an earthquake.
This means that the plate may exist in a multitude of interme-
diate states–metastable states–on the way to criticality. Among all
of the metastable states, some are of particular importance.
These states are marginally stable; a slight disturbance may lead
to a wide variety of responses. Bak envisaged that these mar-
ginally stable states are characterised by the lack of any typical
time or length. This configuration is of a similar type to that
seen in the thermodynamics of phase changes. The lack of typ-

ical scale leads to algebraic correlation functions associated
with power-laws.4 Hence, the distribution functions describing
the frequency with which SOC events occur exhibit power-
law characteristics. For our earthquake example, if E is the
energy released during an earthquake, then the probability of
an earthquake of that size is given by the power-law relation-
ship $P(E) \sim E^{-B}$. As noted in [8], such a simple law should
have an elegant explanation. We shall see later, in Chapter 3
when examining the evidence for such emergent behaviour in
warfare, that the number of casualties in war also has such a
power-law distribution. However, as yet no such elegant
explanation is available, either for earthquakes or wars.

CLUSTERING IN SPACE AND TIME


The formation of clusters that are fractal in both space and
time is common in natural systems, as we have already seen. It
was this type of behaviour that Bak wanted to explain. We will
see that these relate to ideas of correlation in space or time (in
contrast to coincidence in space or time). Correlation in space
or time is a signal of local clustering and collaboration spatially
(e.g., across a battlespace) or in time (e.g., across an informa-
tion grid–reading e-mail creates a correlation in time between
individuals, taking a phone call creates a coincidence in time).
The properties of fractals and their link to chaotic behaviour
have been examined intensively over the last two decades.5

4 A power-law $f(x) = x^a$ has the property that the relative change $f(kx)/f(x) = k^a$ is independent of x. Power-laws, in this sense, lack characteristic scale.
5 See for example the standard text Chaos and Fractals by Heinz-Otto Peitgen, Hartmut Jürgens, and Dietmar Saupe. Springer. 1992.


Despite this, very little (until now) has been known about why
fractals form. Fractal structures are not the lowest energy con-
figuration that can be selected in, for example, thermo-
dynamic systems, therefore some kind of dynamic selection of
configuration must be taking place.
Bak explains the connection in the following way. A signal will
be able to evolve through the system as long as it is able to find
a connected path of above-threshold regions. When the system
is either driven at random or started from a random initial
state, regions that are able to transmit a signal will form some
kind of random network. This network is correlated by the
interaction of the internal dynamics with the external field.
The complicated interrelation between the two driving
dynamics means that a complex, finely-balanced system is pro-
duced. As the system is driven, after this marginally stable self-
organised state has been reached, we will see flashes of activity
as external perturbations interact with internal drivers to spark
off avalanches (i.e., clusters) of activity through different routes
in the system. Bak’s assertion is that the structure of this
dynamic network is fractal. If the activated clusters consist of
fractals of different sizes, then the duration of the induced pro-
cesses travelling through these fractals will vary greatly.
Different timescales of this type lead to what is termed 1/f noise.
1/f noise is a label used to describe a particular form of time
correlation in nature. If a time signal fluctuates in a seemingly
erratic way, the question is whether the value of the signal
$N(\tau_0)$ at time $\tau_0$ has any correlation to the signal measured at time $\tau_0 + \tau$, i.e. $N(\tau_0 + \tau)$. The amount of causation is characterised by a temporal correlation function:

$$G(\tau) = \left\langle N(\tau_0)\, N(\tau_0 + \tau) \right\rangle_{\tau_0} - \left\langle N(\tau_0) \right\rangle_{\tau_0}^2 .$$


The correlation function, for a stationary process, is linked to the power spectrum as follows:

$$S(f) = 2 \int_0^{\infty} d\tau\, G(\tau) \cos(2\pi f \tau)$$

where the power spectrum is defined in terms of the squared amplitude of the Fourier transform of the time signal:

$$S(f) = \lim_{T \to \infty} \frac{1}{T} \left| \int_{-T}^{T} d\tau\, N(\tau)\, e^{2i\pi f \tau} \right|^2 .$$

1/f noise corresponds to the case where $S(f) = 1/f$ and corresponds to a fractal clustering of the signal amplitude in time.
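These definitions are easy to exercise numerically. The sketch below is our own illustration (the synthesis recipe and all parameter choices are assumptions made purely for demonstration): it builds an approximately 1/f signal by shaping the Fourier amplitudes of white noise, then estimates the power spectrum as a simple periodogram and checks the log-log slope:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2 ** 16

# Synthesize an approximately 1/f ("pink") signal by shaping Fourier
# amplitudes of white noise: S(f) ~ 1/f means amplitude ~ 1/sqrt(f).
freqs = np.fft.rfftfreq(n, d=1.0)
amplitude = np.zeros_like(freqs)
amplitude[1:] = 1.0 / np.sqrt(freqs[1:])          # leave the f=0 mode at zero
phases = rng.uniform(0.0, 2.0 * np.pi, size=freqs.size)
signal = np.fft.irfft(amplitude * np.exp(1j * phases), n=n)

# Estimate the power spectrum as the squared magnitude of the Fourier
# transform and fit the log-log slope; 1/f noise gives a slope near -1.
psd = np.abs(np.fft.rfft(signal)) ** 2 / n
mask = freqs > 0
slope, _ = np.polyfit(np.log10(freqs[mask]), np.log10(psd[mask]), 1)
print(f"log-log spectral slope: {slope:.2f} (about -1 for 1/f noise)")
```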

FRACTAL CLUSTERING IN SPACE


For a system that is not poised in a critical state and thus not
about to change its mode of behaviour, the reaction of the sys-
tem is described by a characteristic response time and a characteristic length scale over which the perturbation is felt.
However, for a critical system, the same perturbation applied
at different positions or the same position at different times can
lead to a response of any size. The average is not therefore a
useful measure of response. The amount of the system involved
is a cluster in the spatial dimensions of the system.

AN EXAMPLE–A PILE OF DRY SAND ON A BEACH


The nature of this critical state can be illustrated using a sim-
ple example–a pile of sand. Anyone can try this experiment
on a sunny day by the seaside. Normally, the grains of sand
only interact locally and nothing surprising happens. However, if the critical state (defined by the slope of the pile) is perturbed by the addition of one grain of sand to a randomly
chosen position on top of the pile, the extra grain will induce
an avalanche (a cluster) characterised by spatial and temporal
characteristics such as the total number of sand grains s
involved in the avalanche and the lifetime of the avalanche t.
The statistical distributions describing the response are
denoted by P(s) and P(t). In the critical state, we expect broad
power-law distributions of the form $P(s) \sim s^{-\beta}$ and $P(t) \sim t^{-\alpha}$. The particular values of $\alpha$ and $\beta$ are then characteristic of
how the system can create such correlations in time and
space. These distributions will typically be bounded by spatial
and temporal lower and upper cut-offs. For example, an avalanche
cannot involve the displacement of less than one grain of sand,
and the duration of an avalanche cannot be shorter than the
time it takes one grain to move a distance equal to the size of a
single grain.
In the thermodynamic theory of phenomena, the spatial corre-
lation function plays a fundamental role. If the system is
described by a space-time field n(r,t), the spatial correlation
function is defined as:
$$G(r) = \left\langle n(r_0)\, n(r_0 + r) \right\rangle_{r_0} - \left\langle n(r_0) \right\rangle_{r_0}^2$$

where a thermal average and an average over the position $r_0$ are assumed.
Away from the critical temperature, the correlations decay exponentially, $G(r) \sim \exp(-r/\xi)$, beyond the correlation length $\xi$. The correlation length diverges, $\xi \sim |T - T_c|^{-\nu}$, as the critical temperature is approached. At the critical point, $T = T_c$, the correlation function changes functional behaviour from exponential to algebraic dependence upon r, i.e. $G(r) \sim r^{-\eta}$. The divergence is considered to be the signal of the lack of a characteristic length scale. Self-organising systems may be characterised and examined in this way.

CELLULAR AUTOMATA
The introduction in 1987 of the Self-Organised Criticality con-
cept employed the language of avalanches (clusters) of sand
grains. It proposed a simulation model (a cellular automaton model, to be precise) of the most essential features of sand dynamics. This cellular automaton model is indeed characterised by
power-laws and exhibits critical behaviour. As an example, let
us look again at the modelling of an ecosystem consisting of a
number of coevolving species (the Bak-Sneppen evolution
model) that we mentioned earlier. This automaton process
considers the points of a grid and has a simple set of rules
determining how the system changes from one time-step to the
next, to represent the changing fitness of the species itself, and
the coevolutionary impact of that change on the fitness of
closely linked species in the ecosystem. Note that this linkage is
one of local species influence and coevolution. It does not nec-
essarily assume physical closeness. Although these rules are
simple, the emergent behaviour of the system is complex and
surprising–a characteristic of such nonlinear interactions.
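The flavour of such a cellular automaton can be conveyed by a minimal sandpile simulation in the spirit of the original model of Bak, Tang, and Wiesenfeld [7]. The sketch below is our own simplified rendering (the lattice size and drive length are arbitrary; the threshold of four grains per cell is the conventional choice): sites holding four or more grains topple, passing one grain to each neighbour, and the number of topplings per added grain defines the avalanche size:

```python
import numpy as np

rng = np.random.default_rng(2)
L = 40                                   # lattice side; grains fall off the edges
grid = np.zeros((L, L), dtype=int)
avalanche_sizes = []

for _ in range(100_000):
    # Drive: add a single grain at a random site.
    i, j = rng.integers(0, L, size=2)
    grid[i, j] += 1
    # Relax: any site with 4 or more grains topples, giving one grain to
    # each of its 4 neighbours; grains toppled over an edge are lost.
    unstable = [(i, j)]
    size = 0
    while unstable:
        x, y = unstable.pop()
        if grid[x, y] < 4:
            continue
        grid[x, y] -= 4
        size += 1
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < L and 0 <= ny < L:
                grid[nx, ny] += 1
                unstable.append((nx, ny))
    if size:
        avalanche_sizes.append(size)

# Once the pile reaches its critical state, a log-log histogram of
# avalanche_sizes shows the broad power-law P(s) ~ s^(-beta).
print("avalanches:", len(avalanche_sizes), "largest:", max(avalanche_sizes))
```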

AN EXAMPLE: CLUSTERING AND COEVOLUTION ON A GLOBAL INFORMATION GRID

We can use the evolution model just described to gain insight


into the effect of a Global Information Grid. Imagine such a
grid in two dimensions. At each grid point is positioned an
element of our force. Each such force element has a “fitness”
value corresponding to its ability to evolve and adapt to local circumstances as a function of the information available on the grid. We assume these fitness values are random at first.
At each step of the process, we assume that the force element
with the smallest fitness is likely to have to adapt fastest to its
local environment. In so doing, it will change the fitness val-
ues of the units closest to it on the information grid (i.e., there
is local coevolution). Note that these force elements may be
separated by large and varying distances in space. With these
assumptions, over time the force elements will form clusters of
coevolution of the form predicted by the Bak-Sneppen model.
In particular, the statistics of emergent cluster size can be pre-
dicted mathematically to converge to a power-law. Thus, the
probability of a cluster of size S, P(S), is of the form $S^{-\tau}$, where $\tau$ is a characteristic cluster exponent for this particular
information grid.
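A minimal sketch of this process is given below (our own illustration: the grid size, the number of updates, and the use of the four nearest grid neighbours with periodic edges are all assumptions). Each site carries a fitness value; at every step the least fit force element and the elements closest to it on the grid receive new random fitness values, which is exactly the extremal update rule of the Bak-Sneppen model described above:

```python
import numpy as np

rng = np.random.default_rng(3)
L = 32                                    # 2-D information grid, periodic edges
fitness = rng.uniform(size=(L, L))        # initial random fitness values

for _ in range(100_000):
    # Extremal dynamics: the force element with the smallest fitness
    # must adapt first...
    i, j = np.unravel_index(np.argmin(fitness), fitness.shape)
    # ...and in adapting it coevolves the fitness of the units closest
    # to it on the grid (here its four lattice neighbours).
    for x, y in ((i, j),
                 ((i + 1) % L, j), ((i - 1) % L, j),
                 (i, (j + 1) % L), (i, (j - 1) % L)):
        fitness[x, y] = rng.uniform()

# After many updates the minimum fitness hovers just below a critical
# threshold (about 0.67 in the one-dimensional model; lower in 2-D), and
# runs of sub-threshold activity define power-law distributed clusters.
print("minimum fitness after self-organisation:", round(float(fitness.min()), 3))
```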
Thus far, we have looked at complex effects and correlations
across space and time. What happens if we restrict ourselves to
looking at the boundary between two different regimes (such
as two different nationalities or two opposing armed forces),
and at how this boundary moves over time, depending on the local coevolution of the elements involved?

MOVEMENT OF A BOUNDARY
In natural systems, we can consider the movement of a bound-
ary through a medium (for example, the boundary of an
atomic surface, the boundary of a growing cluster of bacteria,
or the front of advance of a fluid “invasion” of a medium such
as a crystalline rock). This has been studied extensively in rela-
tion to the laying down of single atom surfaces using molecular
beam epitaxy [9]. The most relevant case from our point of
view is the front of advance of fluid “invasion” of a medium.
As described in [9], we can represent the medium itself as consisting of a lattice of cells, each with either a 1 or a 0 in it. A “1” represents the fact that the cell can be wetted. The proportion
of cells containing a “1” is defined as p. For large configura-
tions, we can also interpret p as the probability that a
particular cell contains a “1.” A “0” represents the fact that the
cell cannot be wetted–it thus “pins” the advance of the fluid
through the medium, at least locally. In the standard Directed
Percolation Depinning (DPD) model, we start off with (for
example) a two-dimensional square lattice of cells, some
labelled with 1s and the rest with 0s to represent the distribu-
tion of these “pinning” forces throughout the medium.
Initially, we wet one edge of the lattice. In Figure 1.5, we show
the wetted right-hand edge. In the standard DPD model, one
of the unpinned cells in the next column is chosen at random,
and the fluid invades that cell.

Figure 1.5: Local Pinning of a Fluid Boundary


It turns out that for this case, when the pinning probability p is
greater than a critical value pc, the growth of the interface is
halted by a spanning path of pinning cells. Such models of
interface or boundary movement exhibit fractal properties of
the interface, as discussed in detail in [9]. We shall see similar
effects later in our discussion in Chapter 4 of the control of the
battlespace using ideas based on preventing the flow of oppos-
ing forces and/or third parties through the space. Rather than
choosing the next cell to invade at random, as in the DPD
model, we can use a model of the process that is more akin to
the manoeuverist principle of applying your strength where
the opponent is weak–in other words, the cell next to be wet-
ted is the one where the local pinning force of the medium is
weakest. Such a model of the boundary movement is the Inva-
sion Percolation model. We can create a model of this process in a
way that is consistent with our description of the Bak-Sneppen
evolution model of local coevolution [6]. We start by assign-
ing, as with the Bak-Sneppen evolution model, random
numbers $f_i$ between 0 and 1 to the points of a d-dimensional
lattice. Initially, one side of the lattice is the wetted cluster.
The random numbers at the boundary of the wetted cluster
are examined. At each update step s, the site with the smallest
random number $f_{i,s}$ on the boundary of the wetted area is
located and added to the cluster. In this case, we can interpret
the values $f_i$ as the values of the local pinning force, and the
cluster advances at those points where the pinning force is
smallest. As noted in [6], an important physical realisation of
invasion percolation is the displacement of one fluid by
another in a porous medium. The boundary of the cluster cre-
ated by this process is fractal and has a fractal dimension in the
range 1.33-1.89 dependent on the exact definition of the
boundary [6, Appendix].
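A compact rendering of this invasion percolation rule is sketched below (our own illustration; the lattice dimensions and the number of growth steps are arbitrary choices). Random pinning forces are assigned to the cells, one edge is wetted, and at each step the boundary cell with the smallest pinning force is invaded, with a heap keeping the boundary ordered:

```python
import heapq
import numpy as np

rng = np.random.default_rng(4)
W, H = 200, 100                          # lattice size (advance direction x width)
pin = rng.uniform(size=(W, H))           # local pinning forces f_i in [0, 1]
invaded = np.zeros((W, H), dtype=bool)

# Wet one edge of the lattice and queue its neighbours as the boundary.
boundary = []
for y in range(H):
    invaded[0, y] = True
    heapq.heappush(boundary, (pin[1, y], 1, y))

# At each update step, invade the boundary cell where the pinning force
# is smallest, then add its uninvaded neighbours to the boundary.
for _ in range(5_000):
    if not boundary:
        break
    f, x, y = heapq.heappop(boundary)
    if invaded[x, y]:
        continue                         # already reached via another path
    invaded[x, y] = True
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < W and 0 <= ny < H and not invaded[nx, ny]:
            heapq.heappush(boundary, (pin[nx, ny], nx, ny))

# Box-counting the perimeter of `invaded` would give the fractal
# dimension discussed in the text (roughly 1.33-1.89).
print("cells invaded:", int(invaded.sum()))
```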


AN ANALYTICAL MODEL OF THIS PROCESS


In natural systems, the boundary of such an interface that is
moving through a medium can be characterised by its “rough-
ness.” This is defined as follows.
We assume that the interface is defined across a linear space of
length L (as shown in Figure 1.6). The roughness of the interface
at a given time t and for a span of length L, is defined [9] as:

$$W(L,t) = \sqrt{\frac{1}{L} \sum_{i=1}^{L} \left( h(i,t) - \bar{h}(t) \right)^2}$$

where the interface length is divided into a number of natural cells of unit length, h(i,t) is the height of the ith column at time t (as indicated in Figure 1.6) and $\bar{h}(t)$ is the average of
these heights at time t. The roughness W(L,t) is thus the stan-
dard deviation of the height at a given time. For many
natural systems, the roughness first goes through a transition
period before stabilising at an equilibrium value. Once it has
stabilised, the expected behaviour is that this “saturation”
value $W_{sat}$ scales with the interface width L, i.e. we expect the relationship $W_{sat} \sim L^{\alpha}$.

Figure 1.6: The Roughness “W” of an Interface


When this occurs, the exponent $\alpha$ is referred to as the roughness exponent. It is then a characteristic exponent of the invasion
process under study. A typical value of α is about 0.6 (as
shown in [9]). This idea can also be used to characterise
Brownian motion, as we discuss later in this chapter in the
context of stock price dynamics.
By using symmetry arguments, we can derive [9] an analytical
expression of the rate at which such an interface moves
through a medium. (In terms of conflict, this corresponds to
the rate of advance of a combat front.) This is given by the
Kardar-Parisi-Zhang (KPZ) equation:

$$\frac{\partial h(x,t)}{\partial t} = \nu \nabla^2 h + \frac{\lambda}{2} (\nabla h)^2 + \eta(x,t) .$$
The first term in this equation represents linear effects of the
interface growth, the second captures nonlinear effects, and
the third is a noise term. This thus represents the starting point
for an analytical expression (i.e., a metamodel) of the advance
of a conflict front through a locally controlled area, as dis-
cussed below.
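A naive numerical rendering of this metamodel is sketched below. It is our own illustration only: a one-dimensional periodic interface, an explicit Euler step, and arbitrary parameter values. Direct discretisation of the KPZ equation is known to be numerically delicate, so this should be read as indicative rather than as a production scheme:

```python
import numpy as np

rng = np.random.default_rng(5)
L, steps = 256, 20_000
nu, lam, dt, dx = 1.0, 1.0, 0.01, 1.0     # illustrative parameter choices
h = np.zeros(L)                           # flat initial interface, periodic

for _ in range(steps):
    lap = (np.roll(h, 1) - 2.0 * h + np.roll(h, -1)) / dx**2   # nu * grad^2 h term
    grad = (np.roll(h, -1) - np.roll(h, 1)) / (2.0 * dx)       # centred gradient
    eta = rng.normal(0.0, 1.0, size=L) / np.sqrt(dt)           # white-noise term
    h += dt * (nu * lap + 0.5 * lam * grad**2 + eta)

# The roughness W(L, t) of the resulting front, as defined above.
W = np.sqrt(np.mean((h - h.mean()) ** 2))
print(f"interface roughness after {steps} steps: {W:.3f}")
```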

AN EXAMPLE: FORCE CONTROL ALONG A BOUNDARY
The concept of pinning a fluid locally is similar to the idea of
trying to exert local control over a boundary to prevent the
flow of other forces or third parties across that boundary. We
will see later (in Chapter 4) that the idea of control as the pre-
vention of such flows through an area has important
implications for the emergent behaviour of a force (or two com-
peting forces) attempting to exert control over a battlespace.


In the case of control along a boundary, Complexity Theory, in terms of the invasion percolation model, can be used to
analyse the effect of two forces (an attack and a defence force)
interacting across a boundary, when the boundary moves at
the point where the defending (pinning) force is weakest. If the
defence pinning force is coevolving locally, then the boundary
should form a fractal with a fractal dimension in the range
1.33-1.89 [6, Appendix], as we have seen. In Chapter 3, we
show that there is historical evidence in warfare for such an
effect, and for values in this range.

AN EXAMPLE OF COMPLEX BEHAVIOUR AND FRACTAL TIME SERIES IN ECONOMICS

BROWNIAN MOTION, FRACTALS, AND SIMILARITY


Mandelbrot [10] has considered the movement of stock prices
using fractal ideas and we start by using a simple example
based on Brownian motion (since this is the basis of most cur-
rent predictors of stock price volatility). Consider then Brownian motion in one space variable, so that the motion of particles is restricted to a line. The impacts affect the particle
only from the left and right, causing a displacement of length l
in either direction. Can any prediction be made about the
total displacement after a number of time-steps n? First, the
total expected displacement is zero, as all displacements are +l
or –l, both with equal probability 0.5. Consider, instead, the
square of the displacement. The average of these square dis-
placements, called the mean square displacement, indicates how
much the particles spread in a given number of time-steps on
the average. Its value is nl2. In terms of Brownian motion, the
number of steps corresponds to the number of impacts on a
particle and cannot be directly measured in an experiment.


Consider, therefore, time duration t. Assuming an average


number of n impulses during a timespan t, the particle travels a
total length nl. If v is the average speed of the particle, then we
have vt = nl. The mean square displacement nl2 thus equals vlt.
Therefore, the mean square displacement is proportional to
the timespan. Is there anything else that can be said regarding
the distribution of the displacement after time t? Experimen-
tally, it is found that the statistics for a simulated one-
dimensional Brownian motion follow a Gaussian or normal
distribution. This is not surprising as a normal distribution
arises where independent and identically distributed random
events are averaged and is an implication of the Central Limit
Theorem of Statistics.
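These two statements are easy to verify numerically. The sketch below is our own check (the particle count and step number are arbitrary): it simulates many one-dimensional random walks and confirms that the mean displacement is near zero while the mean square displacement is close to $nl^2$:

```python
import numpy as np

rng = np.random.default_rng(6)
n_particles, n_steps, l = 10_000, 1_000, 1.0

# Each particle receives n impacts, each displacing it by +l or -l
# with equal probability 0.5.
steps = rng.choice((-l, l), size=(n_particles, n_steps))
displacement = steps.sum(axis=1)

print(f"mean displacement:        {displacement.mean():+.3f}  (theory: 0)")
print(f"mean square displacement: {np.mean(displacement**2):.1f} "
      f"(theory: n*l^2 = {n_steps * l**2:.0f})")
# A histogram of `displacement` is close to Gaussian, as the Central
# Limit Theorem implies.
```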
Brownian motion is also associated with scaling relationships and
fractal behaviour. As a nonfractal object is magnified, no new fea-
tures are revealed. As a fractal object is magnified, finer details
are revealed. The size of the smallest feature of a nonfractal
object is called the characteristic scale. A measurement made at
finer resolution will include more of these smaller pieces. Thus
the value measured of a property will depend upon the resolu-
tion used to make the measurement. How a measured
property depends on the resolution used to make the measure-
ment is called the scaling relationship. A fractal object has
features over a broad range of sizes. Fractal phenomenological
characteristics are:
1. SELF-SIMILARITY: behavioural characteristics are “simi-
lar” at different resolutions.
2. SCALING: the value measured for a property depends
upon the resolution at which it is measured.
3. DIMENSION: the dimension of an object gives a quanti-
tative measure of self-similarity and scaling. It tells us how many new pieces of an object are revealed as it is viewed at higher magnification.
4. NONSTATISTICAL PROPERTIES may be observed.
Moments may be zero or nonfinite (e.g., the mean tends
towards zero and variance tends towards infinity).
There are two types of self-similarity:
1. GEOMETRICAL: pieces of the object are exact smaller
copies of the whole object.
2. STATISTICAL: the value of the statistical property Q(r)
measured at resolution r is proportional to the value of
Q(ar) measured at a resolution ar such that Q(ar) = kQ(r).
For statistical probability distribution functions (pdf),
this implies that: pdf[Q(ar)] = pdf[kQ(r)].
Statistical self-similarity is possible in both space (spatial) and
time (temporal).
In summary then, such self-similarity implies a scaling rela-
tionship. The simplest form of the scaling relationship is that
the measured value of a property Q(r) depends on the resolu-
tion used to make the measurement, as in the equation
$Q(r) = Br^b$, i.e. $\log Q(r) = \log B + b \log r$, where B and b are con-
stants. Hence for self-similarity, we observe log-log linear
behaviour. Experimentally, this implies that for Brownian
motion, there should be a scaling factor, r, that yields curves
that are visibly identical, i.e. when scaling Brownian motion
in time, by a factor say of t and in amplitude by a factor of r,
we should see no difference. This transformation is called a
scaling collapse since it has the effect of collapsing the curves at
different scales of measurement on top of each other into one
normalised relationship.


The scaling relationship in time may be determined theoretically for Brownian motion. It follows from the mean squared analysis that the mean squared displacement, $\Delta^2$, of the Brownian motion X(t) is described by $\Delta^2 \propto t$. Now consider
the rescaled random function:

$$Y(t) = rX\!\left(\frac{t}{a}\right),$$
i.e. the graph of X is stretched in the time direction by a fac-
tor a and in the amplitude by r. The displacements in Y for
time differences t are the same as those in X multiplied by r
for corresponding time differences t/a. Thus, the squared
displacements are proportional to r2t/a. In order to ensure
the same constant of proportionality as the original Brown-
ian motion, we require r 2 / a = 1 or r = a . For example,
when replacing t by t/2, i.e. stretching the graph by a factor
of 2, we have a=2 i.e., r = 2 .
In this broader context of fractal processes, ordinary Brownian
motion is a random process X(t) with Gaussian increments and
$\mathrm{var}(X(t_2) - X(t_1)) \propto |t_2 - t_1|^{2H}$, where H (the Hurst expo-
nent) = ½. We can consider (as discussed in [9]) the Brownian
motion time series as describing an interface (between the
parts above and below the series) stretching between the time
points t1 and t2. In terms of our previous discussion of interface
roughness, we now have $L = t_2 - t_1$. The standard deviation (i.e., the roughness of the interface generated by the Brownian motion time series) over this timespan L has the form $L^{\alpha}$, where in this case $\alpha$ equals the Hurst exponent H (as we can
see from the expression for the variance above in terms of H).
Thus, standard Brownian motion corresponds to a roughness
exponent of ½. Other values of H are possible, corresponding
to rougher or smoother forms of time series.
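The Hurst exponent of a simulated series can be estimated directly from this variance relation. The sketch below is our own illustration (the lag range is an arbitrary choice): it generates ordinary Brownian motion and fits the log-log slope of the increment variance, which should equal 2H with H close to ½:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2 ** 15
X = np.cumsum(rng.normal(size=n))        # ordinary Brownian motion, H = 1/2

# var(X(t2) - X(t1)) ~ |t2 - t1|^(2H): measure increment variances over
# a range of lags and fit the slope in log-log coordinates.
lags = np.unique(np.logspace(0, 3, 20).astype(int))
variances = [np.var(X[lag:] - X[:-lag]) for lag in lags]
slope, _ = np.polyfit(np.log10(lags), np.log10(variances), 1)
print(f"estimated Hurst exponent H = {slope / 2:.2f} (theory: 0.5)")
```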


STOCK PRICES AND BROWNIAN MOTION


Mandelbrot considered the movement of stock prices using a
Brownian random walk process as his starting point [10]. The
behaviour of such a variable z can be understood by consider-
ing the changes in its value in small intervals of time. Consider
such a small interval of time $\Delta t$ and define $\Delta z$ as the change in z during $\Delta t$. $\Delta z$ must have two basic properties: first, $\Delta z = \epsilon \sqrt{\Delta t}$ where $\epsilon$ is a random draw from a standardised normal distribution (i.e., a normal distribution with mean zero and standard deviation of 1.0); and second, the values of $\Delta z$ for any two different short intervals of time $\Delta t$ are independent. It follows that $\Delta z$ has a normal distribution with mean zero, standard deviation of $\sqrt{\Delta t}$ and variance $\Delta t$. The second
property implies that z follows a Markov process. Now con-
sider the change in the value of z during a relatively long
period of time T. This can be denoted by z(T)-z(0). This
change can be regarded as the sum of the changes in z in N small time intervals of length $\Delta t$, where $N = T/\Delta t$. Thus:

$$z(T) - z(0) = \sum_{i=1}^{N} \epsilon_i \sqrt{\Delta t}$$

where $\epsilon_i$ (i = 1, 2, ... N) are random drawings from a standardised normal distribution. It follows that z(T) - z(0) is normally distributed with mean zero, variance of $N \Delta t = T$ and standard deviation $\sqrt{T}$. The process described so far has a drift
rate of zero and a variance rate of 1. The expected value of z
at any future time is equal to its current value and the variance
of the change in z in a time interval T equals T.
A generalised Wiener process for a variable x generalises the con-
cept of the drift rate and variance of such a process, and may
be defined in terms of dz as $dx = a\,dt + b\,dz$ where a and b are


constants. The adt term implies that x has an expected drift rate
of a per unit time. Without the bdz term, the equation becomes:

$$dx = a\,dt, \quad \text{i.e.} \quad \frac{dx}{dt} = a .$$
The bdz term may be regarded as adding noise or variability to the path followed by x. For a small time interval, $\Delta t$, the change in x, $\Delta x$, is given by $\Delta x = a \Delta t + b \epsilon \sqrt{\Delta t}$ where $\epsilon$ is a random draw from a standardised normal distribution. $\Delta x$ has a normal distribution with mean of $a \Delta t$, standard deviation of $b \sqrt{\Delta t}$, and a variance of $b^2 \Delta t$. Similarly, the change in x for any time interval T is normally distributed with mean change in x given by aT, standard deviation of change in x given by $b\sqrt{T}$, and variance of change in x as $b^2 T$.
Even more generally, a stochastic process is defined as an Ito process if $dx = a(x,t)\,dt + b(x,t)\,dz$.
A basic simulation of stock price movement would be to use a
generalised Wiener process. This is clearly inadequate as this
assumes that both a constant drift rate and constant variance
rate occur, i.e. the percentage stock return is dependent upon
stock price. The constant expected drift rate assumption is
inappropriate and is replaced by the assumption that the
expected drift, expressed as a proportion of the stock price, is
constant. Thus, if S is the stock price, the expected drift rate in
S is μS for some constant parameter μ and for a small time
interval, Δt , the expected change in S is μSΔt . If the variance
rate of the stock price is always zero, then:

$$dS = \mu S\,dt \quad \text{or} \quad \frac{dS}{S} = \mu\,dt, \quad \text{i.e.,} \quad S = S_0 e^{\mu t}$$


where $S_0$ is the stock price at time zero. This indicates that when the variance rate is zero, the stock price grows (or
declines) at a continuously compounded rate of μ per unit
time. In practice, as stock prices exhibit volatility, a reasonable
assumption is that the variance of the percentage return in a
short period of time, Δt , is the same regardless of stock price.
Define $\sigma^2$ as the variance rate of the proportional change in the stock price. Thus $\sigma^2 \Delta t$ is the variance of the proportional change in stock price, S, during time $\Delta t$. The instantaneous variance rate for S is $\sigma^2 S^2$. This implies that S can be represented by an Ito process that has an instantaneous drift rate $\mu S$ and instantaneous variance rate $\sigma^2 S^2$. This can be written:

$$dS = \mu S\,dt + \sigma S\,dz$$

or

$$\frac{dS}{S} = \mu\,dt + \sigma\,dz$$

and

$$d\ln S = \left(\mu - \frac{\sigma^2}{2}\right)dt + \sigma\,dz .$$
The change in ln S between times t and T is thus normally
distributed:

$$\ln S_T - \ln S \sim \phi\!\left[\left(\mu - \frac{\sigma^2}{2}\right)(T - t),\; \sigma\sqrt{T - t}\right]$$

where $S_T$ is the stock price at a future time T, S is the stock price at the current time, and $\phi(m, s)$ denotes a normal distribution with mean m and standard deviation s.


It follows that:

$$\ln S_T \sim \phi\!\left[\ln S + \left(\mu - \frac{\sigma^2}{2}\right)(T - t),\; \sigma\sqrt{T - t}\right]$$

thus $\ln S_T$ is normally distributed, so that $S_T$ has a log-normal distribution.
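The log-normal result can be checked by direct simulation. The sketch below is our own illustration (the parameter values $\mu$, $\sigma$, and the step and path counts are arbitrary): it integrates $d\ln S$ over many sample paths and compares the moments of $\ln S_T$ with the theoretical values above:

```python
import numpy as np

rng = np.random.default_rng(8)
S0, mu, sigma = 100.0, 0.08, 0.20        # illustrative stock parameters
T, n_steps, n_paths = 1.0, 252, 10_000
dt = T / n_steps

# Integrate d(ln S) = (mu - sigma^2/2) dt + sigma dz in log-space.
dz = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
log_S_T = np.log(S0) + (mu - 0.5 * sigma**2) * T + sigma * dz.sum(axis=1)

print(f"mean of ln S_T: {log_S_T.mean():.4f} "
      f"(theory: {np.log(S0) + (mu - 0.5 * sigma**2) * T:.4f})")
print(f"std of ln S_T:  {log_S_T.std():.4f} (theory: {sigma * np.sqrt(T):.4f})")
# ln S_T is normal, so S_T = exp(log_S_T) is log-normally distributed.
```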
Under closer examination, stock prices in fact depart from log-
normal behaviour. Examination of movements in stock prices
reveals changes greater than our model predicts. Stock
returns exhibit leptokurtosis, i.e. the likelihood of returns near
the mean and of large returns is greater than our Brownian
motion model predicts, whilst other returns tend to be less
likely [11, 12]. Table 1.1 gives a précis of the statistics of this
non-normal behaviour.
Decade     Mean %     Standard       Skewness    Kurtosis    Max %       Min %
                      Deviation %
1920s       0.0058    2.1453          0.1321     11.9272     11.6396    -13.7203
1930s      -0.0216    1.8233          0.2856      4.5422     13.8635     -8.7776
1940s       0.0098    0.7618         -1.0890     12.8516      6.5275     -7.0431
1950s       0.0470    0.6578         -0.9201      7.3343      4.0476     -6.7660
1960s       0.0063    0.6558          0.0437      5.4873      4.5787     -5.8815
1970s       0.0025    0.9262          0.2693      1.7889      4.9517     -3.5660
1980s       0.0489    1.1607         -4.3115     99.3933      9.6661    -25.6315
Overall     0.0142    1.1329         -0.7919     28.8256     13.8635    -25.6315

Table 1.1: Decade-by-Decade Behaviour of Daily Returns from the Dow Jones Index

CLUSTERING IN TIME
What is of particular interest is that Turner and Weigel’s data
strongly suggest the occurrence of temporal clustering. Over


the 1928-1989 period, 12.5 and 37.5 percent of all extreme


positive jumps in the S&P 500 occurred within one and five
days respectively of another positive jump in equity prices.
Positive jumps in the Dow Jones were similarly clustered with
11.3 percent of the positive jumps taking place within one day,
and 36.2 percent transpiring within 5 days of each other.
The second defect with the Brownian motion model is that if
the model holds, then stock returns should be proportional to
elapsed time and the standard deviation of returns should be
proportional to the square root of elapsed time. This is based
upon the scaling properties of Brownian motion. Turner and
Weigel’s data also demonstrated that monthly and quarterly
volatilities are higher than annual volatilities and conversely,
that daily volatilities are lower than annual volatilities, i.e.
their research shows that the stock returns do not scale in a
Brownian motion sense.
The answer to these problems can perhaps be found by con-
sidering the mathematical form of these distributions [13]. We
have shown that scaling is observed within Brownian motion
and that this may be characterised by a Gaussian probability
distribution plot. A log-normal distribution may be crudely
assumed to be a Gaussian plot with an increased tail. The tails
of Pareto distributions also die off much more slowly than
Gaussian or log-normal tails. Such fat-tailed probability distributions thus describe the greater and greater volatility of the system. Current financial models rely on an
extension of the basic Brownian motion model, either assum-
ing a different volatility distribution (as in “jump volatility
calculations”) or attempting to empirically analyse and fit to
the observed volatility. For example, for a Pareto probability
distribution of the random variable y, with y>1 then we obtain:


$$F(y) \sim \frac{1}{y^{\alpha}} ,$$

i.e. a power-law. For further discussion of these and related ideas from Complexity Theory and dynamical systems used in financial mathematics, see references [10] and [13].

SUMMARY
In summary, we have looked in some depth at the complex
behaviour of natural biological and physical systems. From
our analysis of these open and dissipative systems, it is clear
that there are a number of key properties of complexity that
are important to our consideration of the nature of future war-
fare. Such futures, involving the exploitation of loosely
coupled command systems such as Network Centric Warfare,
will have to take account of these key properties. A list of these
is given here, and then discussed further in Chapter 2 in the
context of Network Centric Warfare.
1. NONLINEAR INTERACTION: this can give rise to surpris-
ing and non-intuitive behaviour, on the basis of simple
local coevolution.
2. DECENTRALISED CONTROL: the natural systems we
have considered, such as the coevolution of an ecosys-
tem or the movement of a fluid front through a
crystalline structure, are not controlled centrally. The
emergent behaviour is generated through local
coevolution.
3. SELF-ORGANISATION: we have seen how such natural
systems can evolve over time to an attractor correspond-
ing to a special state of the system, without the need for
guidance from outside the system.


4. NONEQUILIBRIUM ORDER: the order (for example, the space and time correlations) inherent in an open, dissipative system far from equilibrium.
5. ADAPTATION: we have seen how such systems are con-
stantly adapting–clusters or avalanches of local
interaction are constantly being created and dissolved
across the system. These correspond to correlation
effects in space and time, rather than a top-down imposition
of large-scale coincidences in space and time.
6. COLLECTIVIST DYNAMICS: the ability of elements to
locally influence each other, and for these effects to rip-
ple through the system, allows continual feedback
between the evolving states of the elements of the
system.

REFERENCES
1 HOFFMAN F G and HORNE G E (1998). Maneuver Warfare Science 1998.
Dept of the Navy, HQ U.S. Marine Corps, Washington DC.
2 FORDER R (2000). “The Future of Defence Analysis.” Journal of Defence
Science. 5, No. 2. pp. 215-226.
3 CEBROWSKI A (2000). “Network Centric Warfare and Information
Superiority” Keynote address from proceedings, Royal United Services
Institute (RUSI) conference “C4ISTAR; Achieving Information
Superiority.” July 2000. RUSI, Whitehall, London, UK.
4 PRIGOGINE I (1980). From Being to Becoming; Time and Complexity in the
Physical Sciences. W H Freeman and Co., San Francisco, USA.
5 MOFFAT J (2002). Command and Control in the Information Age; Representing its
Impact. The Stationery Office, London, UK.
6 PACZUSKI M, MASLOV S, and BAK P (1996). “Avalanche Dynamics in Evolution, Growth and Depinning Models.” Physical Review E. 53 No. 1. pp. 414-443.
7 BAK P, TANG C, and WIESENFELD K (1987). “Self-Organised Criticality; An Explanation for 1/f Noise.” Physical Review Letters. 59. pp. 381-384.


8 SETHNA J P, DAHMEN K A, and MYERS C R (2001). “Crackling Noise.” Nature. 410. pp. 242-250.
9 BARABASI A L and STANLEY H E (1995). Fractal Concepts in Surface Growth.
Cambridge University Press. Cambridge, UK.
10 MANDELBROT B (1997). Fractals and Scaling in Finance. Springer-Verlag.
11 TURNER A L and WEIGEL E J K (1990). “An Analysis of Stock Market
Volatility.” Technical Report, Frank Russell Co., Tacoma WA. USA.
12 TURNER A L and WEIGEL E J K (1992). “Daily Stock Market Volatility:
1928-1989.” Management Science. 38. pp. 1586-1609.
13 MANTEGNA R and STANLEY H E (2000). An Introduction to Econophysics;
Correlations and Complexity in Finance. Cambridge University Press. Cambridge,
UK.

ADDITIONAL REFERENCE
14 PEITGEN H-O, JÜRGENS H, and SAUPE D (1992). Chaos and Fractals. Springer-Verlag.

CHAPTER 2

CONCEPTS FOR
WARFARE FROM
COMPLEXITY THEORY

As a starting point for our journey, Chapter 1 established in some depth and
detail the key ideas and methods from Complex-
ity Theory that we can bring to bear in thinking
about and modelling future warfare. In his RUSI
keynote talk [1], ADM Cebrowski (Head of the
Office of Force Transformation, U.S. DoD)
indicated that Network Centric Warfare is an
emerging theory of war based on the concepts of
nonlinearity, complexity, and chaos. It is less
deterministic and more emergent; it has less
focus on the physical than the behavioural; and
it has less focus on things than on relationships.
It is clear from the discussion and modelling in
Chapter 1 that Complexity Theory is the
essence of these ideas.


In a previous book [2] we showed how it is possible to capture


the effects of command and control in agent-based simulations
of Information Age warfare. This is done by representing the
process as the interaction of top-down and bottom-up effects.
These are described as Deliberate Planning and Rapid Plan-
ning. Deliberate Planning is appropriate when ample time is
available for the consideration of a number of alternative
courses of action by either “side” and a course of action can be
chosen that is considered to be, in some sense, optimal. Rapid
Planning is appropriate when time is short and expert deci-
sionmaking under stress leads to a pattern-matching approach.
To quote from [2]:

Combat is, by its nature, a complex activity. Ashby’s Law of


Requisite Variety...which emerged from the theoretical consider-
ation of general systems as part of Cybernetics, indicates that to
properly control such a system, the variety of the controller (the
number of accessible states which it can occupy) must match the
variety of the combat system itself. The control system itself, in
other words, has to be complex. Some previous attempts at repre-
senting C2 in combat models have taken the view that this must
inevitably lead to extremely complex models. However, recent
developments in Complexity Theory...indicate another way for-
ward. The essential idea is that a number of interacting units,
behaving under small numbers of simple rules or algorithms, can
generate extremely complex behaviour, corresponding to an
extremely large number of accessible states, or a high variety con-
figuration, in Cybernetic terms. It follows that, if we choose these
simple interactions carefully, the resultant representation of C2
will be sufficient to control, in an acceptable way, the underlying
combat model. As part of this careful choice, we need to ensure
that the potentially chaotic behaviour generated by the interaction
of these simple rules is ‘damped’ by a top-down C2 structure


which remains focused on the overall, high level, campaign
objectives....

...It follows, from what we have just said, that the representation
of the C2 process must reflect two different mechanisms. The
first is the lower-level interaction of simple rules or algorithms,
which generate the required system variety. The second is the
need to damp these by a top-down C2 process focused on cam-
paign objectives. Each of these has to be capable of being
represented using the same Generic HQ/Command Agent object
architecture. We have chosen to do this by following the general
psychological structure of Rasmussen’s Ladder, as a schema for
the decisionmaking process. At the lower levels of command
(below about Corps, and equivalent in other environments), this
will consist of a stimulus/response mechanism. In cybernetic
terms, this is feedback control. At the higher level, a broader
(cognitive-based) review of the options available to change the
current campaign plan (if necessary) will be carried out. In
cybernetic terms, this is feedforward control since it involves the
use of a ‘model’ (i.e., a model within our model) to predict the
effects of a particular system change.
In the last chapter of [2] (Chapter 6: “Paths to the Future”) the
following point is made, which is the foundation for all of the
work and ideas presented here:

Modelling and analysis to determine the effect of such phenomena


underpin our thinking about such future conflict, the representa-
tion of information and command being at their heart. A new
approach to capturing these effects has been put forward in this
book, and is having a significant influence on the approach to
modelling these phenomena. However, capturing the process of


intelligent agents in conflict, set within a widely divergent set of
possible futures, leads to a rich set of possible trajectories of sys-
tem evolution for analysis to consider. We thus need to
complement this effort with other work to categorise and under-
stand the classes of behaviours which might emerge from such a
complex situation. This is the domain of Complexity Theory.
The overall aim is thus to develop an “Operational Synthesis”
(as discussed in [3]) of both agent-based modelling approaches
(as described in [2]) and higher level mathematical metamod-
els based on Complexity Theory. Reference [2] lays out some
initial ideas on how to develop such an understanding based
on a theoretical approach to the development of a higher level
“metamodel” of a cellular automaton model of conflict such as
the ISAAC model developed by the U.S. Marine Corps Com-
bat Development Centre. We will develop these ideas further
in Chapters 4 and 5. In Chapter 5 in particular, we will con-
sider again the ISAAC model and demonstrate how the ideas
of Complexity Theory lead to an understanding of the clusters
forming and reforming in such a model, and how they relate
to the emergent behaviour of the model.
For the moment, let us consider the list of key concepts from
Complexity Theory at the end of Chapter 1. These are on the
left-hand side of Table 2.1. On the right-hand side is an inter-
pretation in terms of military behaviour and doctrine (which
we have termed “an Information Age force structure”).
The nature of Network Centric Warfare for such future Infor-
mation Age forces can be outlined as: within a broad intent
and constraints available to all the forces, the local force units
self-synchronise under mission command in order to achieve
the overall intent [4].

COMPLEXITY CONCEPT       INFORMATION AGE FORCE

Nonlinear interaction    Combat forces composed of a large number of
                         nonlinearly interacting parts.

Decentralised Control    There is no master “oracle” dictating the actions
                         of each and every combatant.

Self-Organization        Local action, which often appears “chaotic,”
                         induces long-range order.

Nonequilibrium Order     Military conflicts, by their nature, proceed far
                         from equilibrium. Correlation of local effects is key.

Adaptation               Combat forces must continually adapt and coevolve
                         in a changing environment.

Collectivist Dynamics    There is a continual feedback between the behaviour
                         of combatants and the command structure.

Table 2.1: Relation Between Complexity and Information Age Warfare

This process is enabled by the ability of the forces involved to


robustly network. We can describe such a system as loosely cou-
pled to capture the local freedom available to the units to
prosecute their mission within an awareness of the overall
intent and constraints imposed by high-level command. This
also emphasises the looser correlation and nonsynchronous rela-
tionship between inputs to the system (e.g., sensor reports) and
outputs from the system (e.g., orders). In this process, informa-
tion is transformed into “shared awareness,” which is available
to all. This leads to units linking up with other units, which are
either local in a physical sense or local through (for example)
an information grid or Intranet (self-synchronisation). This in
turn leads to emergent behaviour in the battlespace, as shown
in Figure 2.1.

Figure 2.1: Information Leading to Emergent Behaviour

Compare these ideas with the broad conceptual framework of


Complexity Theory, as summarised below.

THE CONCEPTUAL FRAMEWORK OF COMPLEXITY


Prof. Murray Gell-Mann [5] traces the meaning to the root of
the word. Plexus means braided or entwined, from which is
derived complexus, meaning braided together, and the English
word complex is derived from the Latin. Complexity is therefore
associated with the intricate intertwining or inter-connectivity
of elements within a system and between a system and its envi-
ronment. In a human system, connectivity means that a
decision or action by any individual (group, organisation, insti-
tution, or human system) will affect all other related
individuals and systems. That effect will not have equal or uni-
form impact, and will vary with the state of each related
individual and system at that time. The state of an individual
and system will include its history and its constitution, which in
turn will include its organisation and structure. Connectivity
applies to the interrelatedness of individuals within a system, as
well as to the relatedness between human social systems, which
include systems of artifacts such as information systems and
intellectual systems of ideas.
The term complexity is used to refer to the theories of complex-
ity as applied to complex adaptive systems (CAS). These are
dynamic systems able to adapt and change within, or as part
of, a changing environment. It is important to note, however,
that there is no dichotomy between a system and its environment in the sense that a system always adapts to a changing
environment. The notion to be explored is rather that of a sys-
tem closely linked with all other related systems making up an
“ecosystem.” Within such a context, change needs to be seen
in terms of coevolution with all other related systems as we saw
in Chapter 1, rather than as adaptation to a separate and dis-
tinct environment.

COLLECTIVIST DYNAMICS AND FEEDBACK


The phenomenological definition of a complex system is that it
exhibits nonlinear, emergent, adaptive behaviour. Nonlinear
behaviour is associated with far-from-equilibrium, open sys-
tems, in that cause and effect are no longer linearly connected.
This is ultimately due to the type of internal-external system
interactions (feedback) affecting our system.

SELF-ORGANISATION AND CLUSTERING


Self-organisation in this context is taken to mean the coming
together of a group of individuals to perform a particular task.
They are not directed by anyone outside the group. This is not
the same as “self-management,” as no manager outside the
group dictates that those individuals should belong to that
group, what they should do, or how it should be done. It is the
group members themselves who choose to come together, who
decide what they will do and how it will be done. A feature of
these groups is that they are informal and often temporary.
Enabling self-organisation can often be a source of innovation.
Military commanders who understand the nature of auftrag-
staktik1 have always understood this: a commander must
regard his superior’s intention as sacrosanct and make its
attainment the underlying purpose of everything he does. He
is given a task and resources and any constraints, and within this framework he is left to make his plan.

1Auftragstaktik is directed control, as opposed to befehlstaktik (detailed order tactics).
We already know from Chapter 1 that some complexity mod-
els of natural ecosystems use extremal dynamics as a
behavioural driver. In systems where the dynamical evolution
is a struggle against various types of thresholds or barriers, the
action will predominantly occur where the net barrier to
change is the smallest. The Bak-Sneppen evolution model
described in Chapter 1 is an example of such an extremal
model. The species with the lowest fitness coevolves first. Sim-
ilarly, in considering the movement of a fluid through a
medium, the boundary moves where the pinning force is
smallest. The avalanches of active sites are the clusters created
in such models. In the model ecosystem, the system self-organ-
ises towards a critical point where it has the greatest
dynamical freedom–clusters of all sizes can potentially be cre-
ated, and the statistics of the emergent distribution of such
cluster sizes can be predicted–the distribution is of power law
form. Such extremal dynamics echo one of the key tenets of
manoeuvre warfare–namely, to focus your strength against
your enemy’s weakness.

FOREST FIRES, CLUSTERS OF TREES, AND CASUALTIES IN WAR
The “forest fire” model is another example of a system that
evolves to a critical point through a process of local interac-
tion. In this case, we start with an empty two-dimensional grid.
At each iteration of the process, with a certain probability p (normally close to one), we drop a tree onto a random grid point. If the grid point is blank, the tree is planted. If there is
already a tree there, the new tree is discarded. With probabil-
ity (1-p) we drop a spark onto a random site instead of a tree. If
the site is bare the spark goes out. If there is a tree on the site,
the tree and all of its immediate neighbours burn. All of the
neighbours of these trees then burn and so on until the com-
plete cluster of linked trees is burned (this is termed a forest fire).
The rate (which is [1-p] if each iteration of the process is
counted as a unit of time) of sparks dropping onto the grid is
termed the sparking frequency. This sparking frequency sp is a key
driver of the dynamics of the forest ecosystem. If sp is small,
very large clusters of trees are allowed to form, which span the
entire grid. When a spark is then dropped, the forest fire wipes
out an entire forest stretching from one side of the grid to the
other. In Complexity Theory, this is known as snapping
noise [6]. This name comes from looking at the behaviour of
the system over time–large spikes of tree extinction (forest fires)
are created at isolated points in time. If the sparking frequency
sp is very large, then tree clusters do not have the chance to
grow. Thus, over time, the system produces a large number of
small spikes of activity, which are called popping noise. When sp
is in the intermediate regime, the system self-organises to a
critical state where the clusters of burnt trees have a distribu-
tion represented by a power law, and clusters of all sizes can be
created. Over time, the spikes produced by this process
(i.e., the time evolution of forest fires of various sizes) have a
similar dynamic to that produced by the acoustic dynamics of
crumpling paper [6], and so this regime is termed crackling noise.
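A minimal version of the forest fire model just described is sketched below (our own illustration; the grid size, iteration count, and the particular value of p are arbitrary choices): trees are planted with probability p, sparks dropped with probability 1 - p, and the size of each burned cluster of linked trees is recorded:

```python
import numpy as np

rng = np.random.default_rng(9)
L, iterations, p = 128, 200_000, 0.98    # sparking frequency sp = 1 - p
grid = np.zeros((L, L), dtype=bool)      # True marks a tree
fire_sizes = []

def burn(i, j):
    """Burn the whole cluster of linked trees containing (i, j)."""
    stack, size = [(i, j)], 0
    grid[i, j] = False
    while stack:
        x, y = stack.pop()
        size += 1
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < L and 0 <= ny < L and grid[nx, ny]:
                grid[nx, ny] = False
                stack.append((nx, ny))
    return size

for _ in range(iterations):
    i, j = rng.integers(0, L, size=2)
    if rng.uniform() < p:
        grid[i, j] = True                # drop a tree (no effect if occupied)
    elif grid[i, j]:
        fire_sizes.append(burn(i, j))    # drop a spark; burn the linked cluster

# In the intermediate sparking-frequency regime the histogram of
# fire_sizes is power-law ("crackling"); very small sp gives rare
# grid-spanning fires ("snapping"), very large sp gives tiny ones ("popping").
print("fires:", len(fire_sizes), "largest:", max(fire_sizes))
```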
It is possible [7] to relate such self-organised behaviour of a
forest fire model to the statistics of the scale and intensity of
conflicts. This is the beginning of an explanation as to why


casualties in war follow a power law distribution. As noted


in [7], “...the behaviour of the forest fire model can be
explained in terms of a cascade model. If trees are randomly
planted on a grid, the distribution of cluster sizes is exponential
(Poissonian), not power law (fractal). The distribution of cluster
sizes in the forest fire model is power law (fractal). This is
because clusters of trees continuously grow and combine to
form larger clusters. Small fires sample this population of clus-
ters, but the loss of trees in fires is dominated by the largest
fires. There is a self-similar cascade of trees from small to large
clusters. In terms of the forest fire model, a spark ignites a tree
and the model fire consumes the entire cluster to which this
tree belongs. This is also the case for real forest fires. Ignition
of the forest must take place for a fire to take place, and the fire
will then spread through the contiguous flammable material.
A war must begin in a manner similar to the ignition of a for-
est. One country may invade another country, or a prominent
politician may be assassinated. The war will then spread over
the contiguous region of metastable countries....” We will look
at this fractal and power law behaviour of casualties in more
detail in Chapter 3.

TUNING AND GOAL SEEKING


This leads us to consider the concept of tuning. Self-organising
systems (such as the Bak-Sneppen evolution model of an eco-
system described in Chapter 1) can, as their name implies,
develop local organisation within the system in order to evolve
towards an attractor. Tuning can be seen as a directive way for
the macrosystem to attempt to influence the behaviour of the
microsystem. A controlling intelligence is deemed to be neces-
sary in order to guide the system towards a particular goal. An
example of this is the tuning of the sparking frequency model


parameter sp in the forest fire model in order to obtain critical-


ity (and hence power law statistics for the size of clusters of
burnt trees). Varying the tuning parameter (the sparking fre-
quency) of the forest fire model represents intervention from
outside the system in order to ensure that it heads towards a
particular goal. This question of tuning makes us consider the
boundaries of the systems we are examining, and the flux of
energy and/or information across the system boundary.

INFORMATION FLUX ACROSS THE BOUNDARY OF AN OPEN SYSTEM
In such open dissipative systems, there will always be fluxes of
information and/or energy across the system boundary. As an
example of this, we show in Chapter 4 how it is possible to
quantify the flux of information across the boundary of an
open system (a wargame) using the concept of Information
Entropy. We can then use this to relate such a flux of informa-
tion to emergent properties of the wargame (such as the
number of casualties suffered). The idea of using entropy as a
measure of information (and hence knowledge) is then applied
in Chapter 4 to show how the benefits of network-centric (as
opposed to platform-centric) approaches to specific task prose-
cution can be quantified (this latter based on work by the
RAND Corporation).

REFERENCES
1 CEBROWSKI A (2000). “Network Centric Warfare and Information
Superiority.” Keynote address from proceedings, Royal United Services
Institute (RUSI) conference. “C4ISTAR; Achieving Information
Superiority.” RUSI. Whitehall, London, UK.
2 MOFFAT J (2002). Command and Control in the Information Age; Representing its
Impact. The Stationery Office. London, UK.


3 HOFFMAN F G and HORNE G E (1998). Maneuver Warfare Science 1998. Dept of the Navy, HQ U.S. Marine Corps. Washington, DC.
4 ALBERTS D S, GARSTKA J J and STEIN F P (1999). Network Centric
Warfare; Developing and Leveraging Information Superiority. CCRP, DoD.
Washington, DC, USA.
5 GELL-MANN M (1994). The Quark and the Jaguar. WH Freeman. New York,
USA.
6 SETHNA J P, DAHMEN K A, and MYERS C R (2001). “Crackling Noise.” Nature. 410. 8 March 2001. pp. 242-250.
7 ROBERTS D C and TURCOTTE D L (1998). “Fractality and Self-
Organised Criticality of Wars.” Fractals. 6 No 4. pp. 351-357.

CHAPTER 3

EVIDENCE FOR
COMPLEX EMERGENT
BEHAVIOUR IN
HISTORICAL DATA

INTRODUCTION

The first point to make is that the exploitation of manoeuvre warfare is not new.
Commanders have exploited such an approach
in previous generations. For example, LTG Sir
Francis Tuker [1] indicated that at a three-
dimensional spatial level, manoeuvre warfare is
determined by three conditions:
1. Flanks shall be tactically open or it shall be
possible to create a flank by break-in and
breakthrough.
2. The mobile arm shall be predominant.


3. It shall be possible to administer the mobile arm to the point where it will decide the battle and gain decisive victory.
Examination of historical data should thus give us some insight
into these effects.
Secondly, the idea that execution of military network-centric
enterprises within a battlespace is of a self-organising nature
has interesting consequences for the future of both warfighting
and other operations. A system comprising clusters that group and regroup, with increasing tempos of battlespace awareness, operations, and responsiveness, requires that the command decision process (or better, the rate of change of the decision process) be undertaken with a similar tempo. The ratio of “battle speed” to “C2 speed” is thus a critical issue, as we
discussed in [2]. When this ratio is high, the system reverts to
being self-organising in nature. In considering such self-organ-
ising systems earlier, we indicated that under certain
conditions, the time series of outputs from such a system
should have a certain fractal character. In the case of a model
ecosystem such as the Bak-Sneppen evolution model discussed
in Chapter 1, in fact, this fractal process is driven by the extre-
mal (smallest) fitness values on the lattice of interacting species.

TIME SERIES BEHAVIOUR


Applications of these ideas have been made in order to predict
combat casualty rate patterns from WWII data [3]. A time
series model based on this fractal self-organising approach has
been contrasted with three other prediction methodologies: a
neural network; use of nonlinear prediction (a prediction is
made by searching the data for the nearest N points in a d-
dimensional embedding and estimating the behaviour characteristics of the data for the next timestep [4]); and lastly, use of
a maximum entropy method to calculate a power spectrum
from which a linear prediction may be made [5]. All of these
approaches are available in a package of time series analysis
procedures (the Chaos Data Analyser) produced by the Ameri-
can Institute of Physics for the analysis of experimental data in
natural systems, and that is what we have used here. A range
of the data points from the original data were deleted (those at
the end of the time series) and a prediction made of these data
points, which is then compared with the original data for the
2nd Armoured Division. The plots (Figures 3.1 to 3.5) are of
casualties per 1000 on the y-axis and time in days on the x-axis.

Figure 3.1: 2nd Armoured Division Data

Figure 3.2: 2nd Armoured Division–Power Spectrum Prediction


Figure 3.3: 2nd Armoured Division–Nonlinear Prediction

Figure 3.4: 2nd Armoured Division–Neural Net Prediction

Figure 3.5: 2nd Armoured Division–SOC Prediction

The first part of this time series (up to day 38) was in fact used
to train a number of different time series prediction methods,
and these have been compared with the predictions for the
days 39 onward. In fitting a prediction based on a self-organised criticality (SOC) fractal series, we have assumed that the
circumstances remain sufficiently constant that we can fit a
single SOC process (this corresponds to a power spectrum that
is linear when plotted on a log-log scale). Comparing the “jerk-
iness” of the SOC prediction and the real data, the general
pattern of the process is very similar.


We repeated the process with another data set drawn from Kuhn for the 9th Armoured Division, with similar results as
summarised below.

Figure 3.6: 9th Armoured Division Data

Figure 3.7: 9th Armoured Division–Neural Net Prediction

Figure 3.8: 9th Armoured Division–Power Spectrum Prediction

Figure 3.9: 9th Armoured Division–SOC Prediction


These plots indicate that the assumption of self-organisation
appears to give casualty behaviour that is of the same form as
the real data, at least for these data points.
Kuhn himself suggested that the casualty rate data pattern,
taken from his collection of combat rates from World War II
to the present, displayed a move away from linearity. First, the
number of “hotspots” in combat did not increase with larger
force concentrations, and second, combat was characterised
by high rates of casualties lasting for short periods of time,
interspersed with low casualty rates. These results indicated, in
turn, a move away from the type of modelling associated with
attrition warfare. Quantitative patterns were also found to be
based upon two types of operational forms: that of a continuous
front (characterised by combat on the Western Front in World
War II) and that of a disrupted front (characterised by combat
on the Eastern Front in World War II). Combat casualties
dramatically increased as a consequence of breakthrough in
disrupted-front operations. Combat, therefore, could be seen
as a process wherein quiet states are interspersed with
“critical” irruptions.
Lauren [6] has also considered casualty data from World
War II and compared it with outputs from the MANA model
(an agent-based simulation similar in character to the ISAAC
model we will discuss in detail in Chapters 4 and 5). He has
shown that such data display fractal properties and power
spectra that confirm the analysis of casualty data discussed in
this chapter.


FURTHER HISTORICAL DATA ON THE PROCESSES OF “IRRUPTION” AND BREAKTHROUGH
In an historical analysis study [7] of the operational level of
combat, Rowland found that the occurrence of breakthrough,
defined as the destruction of the cohesiveness of the defence,
was an important event in the eventual success of an offensive.
Following breakthrough, 86% of operations were successful,
whereas if no breakthrough was achieved only 15% were
eventually successful. Once breakthrough has been achieved,
it becomes possible for the attack force to conduct a type of
operation more in the nature of exploitation than combat.
Moreover, variations in the time to breakthrough also led to
differences in the nature of campaigns; the timings of
“immediate” (less than half a day), “quick” (less than 2 days),
and “prolonged” (over 2 days) were used for study purposes.
In studying the course of operations, one of the aspects
examined was the nature of the movements of attack forces.
The study of the rate of terrain capture in relation to
operational success showed patterns that could be related to
breakthrough time. Several different measures were examined,
the first having the dimensions of advance rate (distance/time).
Whilst the measures change through time, the following results
relate to measurement (such as the length of attack frontage)
at the time of breakthrough.

AREA TAKEN AND MEAN ADVANCE AT BREAKTHROUGH
This measure may be taken as a possible index of how badly the
defence has fared and of its chances of recovery. However, as
the simple area measure does not allow for variations in the size
of operation, the function (area at breakthrough)/(attack frontage)
has been used here. This has the dimension of length and would
represent “mean advance” on the original attack frontage.
The distributions of the measures in Tables 3.1, 3.2, and 3.3
are log-normal within a given breakthrough category, as we
will show explicitly later. In each case, the table shows the
geometric mean of the corresponding log-normal distribution.
The discussion of stock price returns and Brownian motion in
Chapter 1 gives us some insight into why log-normal
distributions might arise in such cases.
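
To make the tabulated statistic concrete: for a log-normally distributed measure, the geometric mean is the exponential of the mean of the logarithms. A minimal Python sketch, with illustrative values in place of Rowland's data:

    import numpy as np

    # Illustrative mean advances at breakthrough (miles), not Rowland's data.
    advances = np.array([12.0, 40.0, 25.0, 31.0, 55.0])
    geometric_mean = np.exp(np.mean(np.log(advances)))  # the tabulated statistic
    print(f"Geometric mean advance: {geometric_mean:.1f} miles")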

                             OPERATIONS LEADING TO:
                        CAMPAIGN SUCCESS    CAMPAIGN FAILURE
Prolonged Breakthrough         29                   8
Quick Breakthrough              5                   -
Immediate Breakthrough         11                  2.2

Table 3.1: Geometric Mean Area/Attack Front at Breakthrough (Miles)

                             OPERATIONS LEADING TO:
                        CAMPAIGN SUCCESS    CAMPAIGN FAILURE
Prolonged Breakthrough        4.5                  1.5
Quick Breakthrough            4.5                  1.5
Immediate Breakthrough         11                  2.2

Table 3.2: Geometric Mean Area Per Day/Attack Front at Breakthrough (Miles/Day)


                             OPERATIONS LEADING TO:
                        CAMPAIGN SUCCESS    CAMPAIGN FAILURE
Prolonged Breakthrough        4.5                  2.1
Quick Breakthrough            8.8                   -
Immediate Breakthrough         18                  5.5

Table 3.3: Geometric Mean √Area Per Day at Breakthrough (Miles/Day)

A first analysis of the results in Table 3.1 shows that these
“mean advances” at breakthrough are greater for those
achieving subsequent success than those failing, by factors of
5 for immediate breakthrough and approximately 3.6 for
prolonged breakthrough. This larger factor for immediate
breakthrough is indicative of the extra “brittleness” that
pertains to these very quick breakthrough cases. They also
show that these “mean advances” at breakthrough are less
for immediate than for prolonged breakthrough, by mean
factors of 0.38 for those achieving subsequent success and
0.28 for those failing.
Turning to the more complicated measure of irruption in
Table 3.2, we move from mean advance to mean advance per
day, giving a first-order measure of the mean rate of advance.
For this case, there is somewhat less variability in the results.
There is, in fact, no significant difference between quick and
prolonged breakthrough effects in this case; these two groups
can therefore be pooled. The combined quick and prolonged
breakthrough results, in terms of subsequent success and
failure, are thus shown in Table 3.2.
We again find differences on this measure (mean rate of advance)
between breakthroughs leading to eventual success and those
leading to failure. These differences are by factors of 5 for
immediate breakthrough, and 3 for prolonged/quick breakthrough.
The mean rate of advance for immediate breakthrough is greater
than for prolonged/quick breakthrough.
The caricature of movement to breakthrough implied is:

[Figure]

Rather than:

[Figure]

However, the movement post-breakthrough is better represented
by the linear model than the radial one, both for success after
breakthrough and for success without breakthrough. The
exception to this is success after immediate breakthrough,
which is discriminated most significantly on the basis of the
radial model. An alternative measure of the same dimensions
is thus √Area/Time, again a representation of advance rates,
but one representing propagation from a point rather than a
linear advance.


The equivalent mean values for this measure are shown in
Table 3.3. The differences between groups show similar
patterns to the previous measures, but are greater, and
significant at the 1% level (t-test) between immediate and
prolonged breakthrough.
It can thus be observed that, despite the great variations in size
of the operations studied, there are patterns to be deduced and
these can offer lessons on the nature of this little-studied aspect
of manoeuvre warfare.
This process of irruption has been identified as one of the key
emergent effects of manoeuvre warfare [8]. We consider now
whether such a process has scaling properties of the type
discussed in our general consideration of complexity. The
historical data indicate (as we have discussed) that for a given
type of breakthrough (immediate, quick, or prolonged: I, Q,
or P) and subsequent effect on the campaign (Subsequent
Success [SS] or Subsequent Failure [SF]), the mean advance
at breakthrough follows a log-normal distribution. Of even
more interest to us is the fact that if these distributions are
plotted for each of the breakthrough/campaign effect
categories, then they have a certain scaling character, which
we now define.
Consider Figure 3.10. Here we have plotted the log-normal
distribution of mean advance at breakthrough for each of a
number of breakthrough categories. The x-axis is the log of
the mean advance at breakthrough (in miles), and the y-axis is
such that a cumulative normal distribution of the x variable
will give a straight line. We can see that the various categories
of breakthrough produce cumulative curves that are parallel
to each other, with one exception. However, this case has
significantly less data than the others, and we take this to be
the cause of the deviation. The fact that these lines lie parallel
to each other means the following: given two such curves,
corresponding to x variables x(1) and x(2), there is a scaling
constant λ (which depends on the two categories of
breakthrough being considered) such that

log x(1) = λ log x(2), i.e. x(1) = x(2)^λ

and the distribution of mean advance at breakthrough
coincides for the variables x(1) and x(2)^λ. In this sense, we
can say that x(2) can be scaled by a power transformation so
that its distribution collapses onto that of x(1).
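
The following minimal Python sketch, using illustrative log-normal samples in place of the historical categories, demonstrates this power-transformation collapse:

    import numpy as np

    rng = np.random.default_rng(2)
    lam = 1.5                     # illustrative scaling constant between two categories

    # Category 1: log x1 ~ N(2.0, 0.9); Category 2 chosen so that lam*log x2
    # has the same distribution, i.e. log x2 ~ N(2.0/lam, 0.9/lam).
    x1 = rng.lognormal(mean=2.0, sigma=0.9, size=100_000)
    x2 = rng.lognormal(mean=2.0 / lam, sigma=0.9 / lam, size=100_000)

    collapsed = x2 ** lam         # power transformation x2 -> x2^lam
    for name, sample in [("x1", x1), ("x2^lam", collapsed)]:
        logs = np.log(sample)
        print(f"{name}: log-mean {logs.mean():.3f}, log-sd {logs.std():.3f}")
    # Both print approximately log-mean 2.000 and log-sd 0.900: the distribution
    # of x2 collapses onto that of x1 under the power transformation.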
An alternative radial measure of irruption, as we have
discussed, is √(area at breakthrough)/(days to breakthrough),
with dimensions of miles per day. If this is plotted on the same
basis as the previous figure, we again have evidence for a form
of “scaling collapse” of the type discussed above (Figure 3.11).
Moreover, the stability of the two sets of data indicates that
there are (at least) two categories of emergent behaviour for
irruption and subsequent campaign outcome: linear advance
and radial propagation from a point.
Each data point in Figures 3.10 and 3.11 is a campaign
outcome, classified in terms of immediate (I), quick (Q), or
prolonged (P) irruption, and Subsequent Success (SS) or
Subsequent Failure (SF); these category labels form the key to
the data points in the figures.


Figure 3.10: The Statistics of Linear Irruption

Figure 3.11: The Statistics of Radial Irruption


THE FRACTAL FRONT OF COMBAT


In [9], Lauren discusses the fractal nature of a combat front
between two opponents. The idea is that an essentially
straight-line frontage between two tactical-level opponents will
buckle into a fractal shape, whose fractal dimension can be
calculated as a function of the force ratio of the forces involved
(the number of attackers to the number of defenders), as
derived from Historical Analysis of infantry battles carried out
by the UK Dstl. Lauren [9] uses the Historical Analysis result
as a basis for his analysis:

F = (number of attack infantry/number of defence infantry)^0.685

where F is a multiplier for the base number of casualties of the
attacking force per defence weapon. As a consequence of this,
Lauren was able to show that the combat front will buckle
over time and in the limit will have a fractal dimension
D = 1.685.
From Chapter 1, if we assume that this process is akin to the
invasion percolation of one fluid by another in a porous
medium, the fractal dimension of the boundary of the resulting
interface should lie in the range 1.33-1.89, which is what we
find from the historical data.
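
As a sketch of how such a fractal dimension might be estimated from a digitised front line, the following Python fragment applies simple box counting to an illustrative random-walk front (Lauren's result itself is derived analytically):

    import numpy as np

    def box_count_dimension(points, scales):
        """Estimate the fractal dimension of a set of 2-D points by box counting."""
        counts = []
        for eps in scales:
            boxes = {(int(x // eps), int(y // eps)) for x, y in points}
            counts.append(len(boxes))
        # N(eps) ~ eps^(-D), so -slope of log N vs log eps estimates D.
        slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
        return -slope

    # Illustrative "buckled front": x advances steadily, y is a random walk.
    rng = np.random.default_rng(3)
    x = np.arange(10_000) * 0.01
    y = np.cumsum(rng.normal(scale=0.1, size=10_000))
    front = np.column_stack([x, y])

    scales = np.array([0.05, 0.1, 0.2, 0.4, 0.8])
    print(f"Estimated fractal dimension: {box_count_dimension(front, scales):.2f}")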
It is possible in this case to derive the underlying dynamics
producing this statistical effect. It turns out that this fractal
factor is due fundamentally to the detection of targets [10],
and comes from a model of the engagement process that leads
to the following relationship:

1/R = k1/T + k2


where k1 and k2 are constants, R is the defender rate of fire,
and T is the number of targets in view [11].
This relationship reflects the asymmetry of the infantry battle
in the following sense [12]. The attack force aim is to close on
the defence position, and fire is used in a generally suppressive
mode; actual casualties caused to the defence are only a small
part of the process at this point. From the defence perspective,
however, the aim is to deter the attack, and casualties to the
attack force are very important. Such casualties to the attack
force are a direct reflection of the intervisibility of targets to
the defence force, as discussed above.
As with most applications of fractal processes, the process
breaks down at some point due to the granularity of the
resolution. In this case, the process remains valid up to about
30m closing distance between the attack and defence. At that
point, a different mechanism comes into play, leading to local
defence surrender and attack overrun of defence positions [12].
More generally, the figure of 0.685 relates to open terrain. In
urban areas it is about 0.5 [13]. The closing to overrun
appears to occur differently in urban and wooded terrain as
compared with open terrain [12]. For example, in open
conditions, the closing part of the battle occurs across the
front. By contrast, in urban conditions, the attack force is split
into small subunits that individually close on defence
locations, leading to local surrender and overrun.


POWER LAW RELATIONSHIPS IN COMBAT DATA

THE HARTLEY MODEL


In [14], Hartley has analysed eight separate databases of
historical combat data. Four were developed by Helmbold; the
fifth dataset, “Inchon,” was developed by Busse. The last three
datasets were developed by Dupuy. These eight datasets span
several centuries in time, include both air and land conflicts,
and range from small to large interactions. Details of the
databases are given in [14]. On the basis of this extremely
extensive set of data, Hartley was able to develop a stable
analysis of the relationship between casualties in conflict and
the initial force ratio, based on earlier ideas of Helmbold. He
defines the following two dimensionless variables:

HELMRAT = (x0^2 − x^2)/(y0^2 − y^2)

FORRAT = x0/y0
where in each case x0 is the starting value of force size and x is
the final value (and similarly for y). Hartley has established a
power law relationship between these two variables,
HELMRAT and FORRAT, on the basis of the comprehensive
datasets described above. He has shown that (in logarithmic
terms):

ln(HELMRAT) = α ln(FORRAT) + β

where the expected value of α is approximately 1.35 and the
value of β is approximately normally distributed about the
value −0.22 with a standard deviation of 0.7. Hartley shows
that the value of α has the characteristics of a universal
constant, being stable over four centuries of time [14, Figure
17] and stable when considering conflicts of different sizes,
ranging from force sizes of less than 5,000 to more than
100,000 [14, Tables 4, 5, and 6].
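
A minimal sketch of the fitting procedure, using synthetic engagements in place of Hartley's databases, shows how α and β are recovered by ordinary least squares on the log-log scale:

    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic engagements: draw FORRAT values, then generate HELMRAT from the
    # assumed power law ln(HELMRAT) = alpha*ln(FORRAT) + beta plus normal noise.
    alpha_true, beta_true = 1.35, -0.22
    forrat = rng.lognormal(mean=0.0, sigma=0.5, size=400)
    ln_helmrat = alpha_true * np.log(forrat) + beta_true + rng.normal(0, 0.7, size=400)

    # Ordinary least squares on the log-log scale recovers the exponent.
    alpha_hat, beta_hat = np.polyfit(np.log(forrat), ln_helmrat, 1)
    print(f"alpha ~ {alpha_hat:.2f}, beta ~ {beta_hat:.2f}")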
If it is assumed that the mechanism producing this remarkably
stable relationship between casualty effects and force ratio is of
Lanchester type, then Hartley shows that it must be of
linear-logarithmic form. However, the relationship is based on
the empirical data alone, and other explanations are possible.
For example, in [15] an analysis based on self-organisation (in
particular the forest fire model of Chapter 2) is put forward as
the basis for the equally remarkable scaling of conflict size. It is
thus plausible that such complexity-based effects, rather than
a Lanchester process, lie at the root of the scaling relationship
established by Hartley. This analysis, by Turcotte and Roberts,
is discussed next.
In their paper [15], they begin by comparing the predictions
of the theoretical self-organising forest fire model with the
statistics of the relative sizes of real forest fires. Four datasets
are considered: 4,284 forest fires on USA Fish and Wildlife
Service lands during the period 1986-1995; 120 of the largest
fire areas in the western USA from tree ring data, spanning
the period 1155-1960 (800 years); 164 fires in the Alaskan
boreal forests during 1990-1991; and 298 fires in the
Australian Capital Territory during 1926-1991. The results
are in good agreement with a power law statistical distribution
of size of fire versus frequency, with a power law exponent of
between 1.3 and 1.5. The remarkable thing is the stability of
this trend across such a long period of time, during which
technology has changed, as have ways of fighting such fires.
The authors then show that a similar power law relationship
(also with an exponent in the same range) holds for the
intensity of conflict versus its frequency. This work extends the
research of Richardson [16], who also showed a power law
relationship between the intensity of war and its frequency.
Turcotte and Roberts base their results on two datasets: that of
Levy, which tabulates the intensities of 119 wars from 1495 to
1973; and that of Small and Singer, who considered 118 wars
over the period 1816-1980.
The similarity of the power law exponent for both forest fire
statistics and war intensity leads Turcotte and Roberts to
hypothesise that war deaths are caused by a self-organising
mechanism akin to that of the forest fire model. This is at least
the beginning of an explanation of why casualties in war
should give rise to such a simple power law relationship, stable
over centuries of time.
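
A minimal sketch of the corresponding frequency-size analysis, using synthetic power-law event sizes in place of the fire or war data:

    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic event sizes drawn from a Pareto (power-law) distribution with
    # tail exponent ~1.4, in the 1.3-1.5 range quoted for fires and wars.
    tail_exponent = 1.4
    sizes = (1.0 / rng.random(2_000)) ** (1.0 / tail_exponent)

    # Rank-size (complementary cumulative) analysis: log rank vs log size is a
    # straight line of slope -tail_exponent for power-law data.
    sorted_sizes = np.sort(sizes)[::-1]
    ranks = np.arange(1, len(sorted_sizes) + 1)
    slope, _ = np.polyfit(np.log(sorted_sizes), np.log(ranks), 1)
    print(f"Estimated power-law exponent: {-slope:.2f}")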

REFERENCES
1 LTG TUKER F (1948). The Pattern of War. Cassell. UK.
2 MOFFAT J (2002). Command and Control in the Information Age: Representing its
Impact. The Stationery Office. London, UK.
3 KUHN G W S (1989). “Ground Force Casualty Patterns: The Empirical
Evidence.” Report FP703TR1.
4 WAYLAND R, BROMLEY D, PICKETT D, and PASSAMANTE A
(1993). Physics Review Letters. 70. p. 580.
5 LAERI F (1990). Computational Physics. 4. p. 627.
6 LAUREN M and STEPHEN R T. “Fractals and Combat Modelling: Using
MANA to Explore the Role of Entropy in Complexity Science.” Paper
prepared for Fractals. Defence Technology Agency. Auckland, New
Zealand.
7 ROWLAND D, KEYS M C, and STEPHENS A B (1994). “Breakthrough
and Manoeuvre Operations (Historical Analysis of the Conditions for
Success) Annex I, Irruption.” Unpublished DOAC Report, Annex I.
8 ROWLAND D, SPEIGHT L R, and KEYS M C (1996). “Manoeuvre
Warfare: Some Conditions Associated with Success at the Operational
Level.” Military Operations Research. 2 No 3. pp. 5-16.


9 LAUREN M K (2000). “Modelling Combat using Fractals and the Statistics
of Scaling Systems.” Military Operations Research. 5 No 3. pp. 47-58.
10 ROWLAND D (1983). “The Effectiveness of Infantry Small Arms Fire in
Defence – A Comparison of Trials and Combat Data.” Unpublished
DOAC Memorandum.
11 THODY J H and DOVE H J (1981). “An Analysis of Small Arms Fire by
Infantry in Defensive Positions.” DOAE Unpublished Note.
12 ROWLAND D personal communication.
13 ROWLAND D (1991). “The Effect of Combat Degradation on the Urban
Battle.” Journal of the OR Society. 42 No 7. pp. 543-553.
14 HARTLEY D S (1991). Confirming the Lanchesterian Linear-Logarithmic Model of
Attrition. Martin-Marietta Center for Modelling, Simulation and Gaming.
Report K/DSRD-263/R1.
15 ROBERTS D C and TURCOTTE D L (1998). “Fractality and Self-
Organised Criticality of Wars.” Fractals. 6 No 4. pp. 351-357.
16 RICHARDSON L F (1960). The Statistics of Deadly Quarrels. Boxwood Press.
Pittsburgh, USA.

CHAPTER 4

MATHEMATICAL MODELLING OF COMPLEXITY, KNOWLEDGE, AND CONFLICT

INTRODUCTION

Understanding the behaviour of agent-based simulation
models of conflict is now becoming more important, especially
as (with improved representation of Command and Control
[1]) the agents gain intelligence and try to outsmart each
other, producing potentially very complex behaviour. In the
modelling of natural systems (such as fluid dynamics or heat
flow), the principal variables can often be separated out from
the rest of the model to produce a mathematical metamodel
that aims to relate the outputs of the model to these driving
inputs in a more transparent and explicit way. If this can be
achieved, it improves our understanding of the system and its
likely emergent behaviour, as well as complementing the use of
detailed simulation. Such an approach is consistent with the
idea of “Operational Synthesis” as espoused by Dr. Alfred
Brandstein, then Chief Scientist, U.S. Marine Corps [2].
As an example, in developing a metamodel we consider the
relationship between a key outcome of the model, a, and a set
of input variables as follows:

a = f(a1, ..., ak, b1, b2)

(This is easily generalised to an arbitrary number of b’s.) The
arguments a1, ..., ak have independent dimensions. That is, the
dimension of any a cannot be expressed as a combination of
the dimensions of the other a’s. In contrast, the dimension of
each b variable can be expressed as such a combination. The
arguments can be transformed using a gauge transformation
so that:

a1′ = A1 a1, ..., ak′ = Ak ak.

These correspond to a change in the “gauge” (e.g., from
centimetres to metres or kilometres) in the measurement of a
variable. Physically, if a gauge change makes no difference to
observed behaviour for all observers using different gauges,
the variable is said to be self-similar.


The metamodel function can then be shown [1] to have the
property:

a = f(a1, ..., ak, b1, b2) = a1^p ... ak^r Φ(Π1, Π2)     (1)

where Π1 = b1/(a1^p1 ... ak^r1) and Π2 = b2/(a1^p2 ... ak^r2).

These “Π” variables are sometimes called similarity variables.
This is because two natural systems, with different values of
the a’s and b’s but the same value of Π, will tend to have
similar emergent behaviour. An example is the flow of air past
an aircraft in the atmosphere or past a model of the aircraft in
a wind tunnel. The Π variable in this case is the Reynolds
number. If this is the same in both cases, then the model in
the wind tunnel will give results relevant to the full-scale
aircraft in the atmosphere.
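
A trivial Python illustration of matching this similarity variable (the flow values are purely illustrative):

    def reynolds(velocity_m_s, length_m, kinematic_viscosity_m2_s):
        """Reynolds number: the dimensionless similarity variable for this flow."""
        return velocity_m_s * length_m / kinematic_viscosity_m2_s

    # Full-scale aircraft vs. a 1/10-scale wind tunnel model (illustrative values):
    full_scale = reynolds(velocity_m_s=100.0, length_m=10.0, kinematic_viscosity_m2_s=1.5e-5)
    # Matching Re at 1/10 scale requires 10x the speed in the same air
    # (real tunnels adjust density and pressure too; this is purely illustrative).
    model = reynolds(velocity_m_s=1000.0, length_m=1.0, kinematic_viscosity_m2_s=1.5e-5)
    print(full_scale == model)  # True: same Pi variable, similar emergent flow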
Self-similar solutions correspond to problems where the values
of the variable (b2, for example) tend to zero or infinity. Three
possibilities are available (see Reference [1] for further
discussion):
TYPE 1 METAMODEL. Φ tends to a non-zero finite limit as one
of its arguments tends to zero or infinity. This means that in
most practical cases, this argument can be eliminated from the
relationship, giving a simplified form of equation (1).
TYPE 2 METAMODEL. Φ has power-law asymptotics of the
form:

Φ(Π1, Π2) = Π2^α1 Φ̃(Π1/Π2^α2)     (2)


as the argument Π2 tends to zero or infinity. In theoretical
physics, these systems are examined from a gauge theory point
of view using a “renormalisation group” approach, in which
the parameter Π2 is considered (through the repeated
application of a renormalisation group) at larger and larger (or
smaller and smaller) gauges, giving an asymptotic expression
of the form required for a type 1 or 2 metamodel. Later, we
will discuss the renormalisation group in more depth and
relate it to concepts of control of the battlespace.
TYPE 3 METAMODEL. Neither 1 nor 2 holds and self-similarity
is not observed; Φ has no finite limit different from zero and
no power-law asymptotics.
This approach directs us (for evidence of metamodels of types
1 and 2) to search for evidence of power-law relationships of
the form y = x^α, which, if plotted on a log-log scale, give a
straight line whose slope is the power-law exponent. Such
expressions arise naturally in certain types of complex systems,
particularly where fractal structures are involved, and are
referred to as scaling relationships since they have no preferred
gauge. Evidence of such scaling relations is thus evidence in support of
the assumption of relative gauge. Chapter 3 shows that there is clear
evidence for such an assumption in historical conflict data, and
that metamodels of types 1 and 2 should be expected for
agent-based models of conflict. In addition, we should expect
to see evidence for normalised “scaling collapse,” as
exemplified by the function Φ(Π1, Π2) in equation (2), and we
should expect to see the effect of renormalisation groups.
If we consider an agent-based “distillation” such as the ISAAC
model developed under Project Albert by the U.S. Marine
Corps Combat Development Command, we can consider the
emergent behaviour of such a model in terms of both the
spatial clustering of the agents and the attrition that they
inflict on the opponent. For such a distillation, a metamodel of
type 2 applies [1] that allows us to relate the attrition rate for
one side to the clustering dynamics of the opposing side, as
measured by the mean fractal dimension of these clustering
agents. As a simple example (given in [1]), assume that the
command process, say for Red, is represented by the following
effects:
1. The number of discrete clusters of Red agents at time t,
N(t), is specified ahead of the simulation.
2. N(t) is a decreasing function of t.
These assumptions are meant to suggest that the number of
Red clusters decreases in time, reflecting the desire of Red to
concentrate force. With these assumptions, let us further
assume that the smallest cluster of Red agents, of size X(t) at
time t, is taken and added to another randomly chosen cluster
of Red agents. This process thus represents both the
concentration of Red force and the reconstitution of force
elements.
Let us now define ϕ(x,t) = (expected number of clusters of Red
agents of size ≥ x at time t)/(initial total number of clusters of
Red agents) and N(t) = (total number of remaining clusters of
Red agents at time t)/(initial total number of clusters). Given
the assumptions and definitions above, it can then be shown
that ϕ(x,t), the cumulative distribution of cluster sizes at time
t, approaches a self-similar distribution as time progresses
(i.e., a scaling collapse takes place). Thus the cluster size
distribution evolves over time by a scaling relation. ϕ(x,t) can
then be represented in the self-similar form:

ϕ(x,t) = g(x/X(t))/X(t)


where g is some positive continuous function and where
g(1) = N(t)X(t). This self-similar form means that we can define
the distribution of relative cluster size in a way that is time
invariant (although the actual cluster sizes will change).
Now assume that the evolution of the distribution ϕ(x,t) is
smooth (a small change in time t leads to a small change in
ϕ(x,t); this is equivalent to saying that the renormalisation
group is smooth [1]). If ϕ(x,(1−δ)t) is the expected cluster size
distribution at time (1−δ)t and ϕ(x,t) is the same expectation
at time t, then this assumption means we can find a constant b
such that, to first order:

ϕ(x,t) = (1 + bδ)ϕ(x,(1−δ)t).

It then follows that:

log ϕ(x,t) = b log t + c

for some constant c, and the normalised expected cluster size
at time t, ϕ(x,t), varies as a power law with increasing time t
and scaling constant b.
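
A minimal simulation sketch of the merge process described above (with arbitrary initial cluster sizes): the characteristic scale X(t) of the smallest remaining cluster should grow roughly as a power of time, consistent with the self-similar form.

    import numpy as np

    rng = np.random.default_rng(6)

    # Start with many Red clusters of small random sizes (arbitrary choice).
    clusters = list(rng.integers(1, 10, size=2_000))

    smallest_history = []
    while len(clusters) > 20:
        clusters.sort()
        smallest = clusters.pop(0)            # take the smallest cluster...
        i = int(rng.integers(len(clusters)))
        clusters[i] += smallest               # ...and add it to a random survivor
        smallest_history.append(min(clusters))  # track X(t), the smallest cluster size

    # X(t) should grow roughly as a power of time under the smoothness assumption.
    steps = np.arange(1, len(smallest_history) + 1)
    slope, _ = np.polyfit(np.log(steps), np.log(smallest_history), 1)
    print(f"Scaling exponent of X(t): {slope:.2f}")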
If ΔB is the change in the number of Blue agents, Lauren [3]
has shown that ΔB/Δt is proportional to the product of (Red
unit effectiveness) × (the probability of meeting a Red cluster)
× (the expected number of Red units per cluster). It is assumed
that unit effectiveness is constant. Keeping the cluster size
constant for the moment, this indicates [3] that the rate of
change of Blue agents is given by an expression of the form:


ΔB/Δt = k^(q(D)) Δt^(r(D))

where D is the average fractal dimension of Red (and therefore
an indication of how Red clusters/collaborates locally) and
both r and q are exponents. This equation is a form of
Lanchester law in which the rate constant depends upon the
clustering of Red agents. If Red cluster size varies according to:

ϕ(x,t) = g(x/X(t))/X(t)
where x is the cluster size and X(t) is the smallest cluster size
at time t, then we can write:

ΔB/Δt = k^(q(D)) Δt^(r(D)) N(t) g(y(t))

where N(t), inversely related to X(t), is the normalised number
of clusters of Red at time t, and g(y(t)) is the mean of the
distribution of cluster size, which evolves as a power law (as we
have shown in certain cases).

MORE GENERAL DISTRIBUTION OF CLUSTER SIZES


For self-organising groups of agents that are approaching a
critical point in the form of a Bak-Sneppen evolution model,
but are not necessarily attempting to concentrate force or to
reconstitute force in the sense described above, our previous
analysis in Chapter 1 indicated that the distribution of cluster
sizes at some intermediate point s is given by:

P(S, f_i,s) = S^(−τ) g(S(f_c − f_i,s)^(1/σ))


where f_i,s is the minimal site probability at timestep s. As
noted before, this converges to a power-law distribution as s
tends to infinity.
In [1], we also relate the cluster fractal dimension predicted by
the Bak-Sneppen evolution model of local species coevolution
to that which emerges from initial experiments with the
ISAAC cellular automaton model. (For a description of
ISAAC, see the Web site at reference [4].)
THE RENORMALISATION GROUP
The gauge invariant approach to metamodelling outlined
above also leads us to consider the role of the renormalisation
group, which explicitly appears in terms of its effect on the
distribution function Φ(Π1, Π2) in our characterisation of
metamodels. Let us explore this a little more here. Suppose
that u(x,t) is a function of the two variables x and t. Using
the “static scaling” assumption of renormalisation [5], we
assume that we have a group of renormalisations of the form
R_b,φ, so that:

R_b,φ u(x,t) = u(b^φ x, bt)/Z(b)

From the group property, R_a,φ R_b,φ = R_ab,φ, it follows that
Z(a)Z(b) = Z(ab), and thus Z(b) = b^α for some exponent α.
If u*(x,t) is a fixed point of the renormalisation, then:

u*(x,t) = b^(−α) u*(b^φ x, bt) for all b.

Choosing b = 1/t, we obtain:

u*(x,t) = t^α ũ*(x/t^φ)


The repeated application of the renormalisation process will
thus (in the limit) produce a functional relation of the form u*.
This explains why the type 2 metamodel has the form
assumed, and also why the (renormalisation invariant) cluster
size distribution of Carr and Pego discussed earlier has the
form derived by them. Note the simplification that has been
achieved in going from an unknown function of two variables
to a normalised unknown function of only one variable.
The ability of a force to control an area of operations can be
related to the fractal dimension of the force through the use of
such a renormalisation process, as we now show.

CONTROL OF THE BATTLESPACE


Looking now at the phenomenon of control of the battlespace,
we can consider the problem using the renormalisation
approach, as in a type 2 metamodel. Consider, as shown in
Figure 4.1, the Area of Operations (AO) of a military
commander. For simplicity we assume this is a square of side L.
Figure 4.1: Area of Operations


We assume that the commander aims to establish control in
this area. Firstly, we have to define what this means. Each unit,
shown by a dot in Figure 4.1, has an area surrounding it that it
can control. The size of this area is defined by the nature of the
force and its associated sensors [6], giving rise to a “bubble” of
control around the unit. In two dimensions, let this area
correspond to a square of side l. We assume that l is
significantly smaller than the dimensions of the AO. (Note: we
have restricted the battlespace here to two dimensions to
simplify the discussion. However, the same approach should
work in three dimensions, corresponding to the complete
battlespace.) Now let D be the fractal dimension of the force
under the commander’s control within the AO. Suppose we
partition the AO into square cells of width l. Let N be the total
number of such cells, so that Nl^2 = A. Let N(l) equal the
number of cells in the AO that are occupied by one of the units
making up the force. By definition of the fractal dimension, we
have that N(l) = l^(−D) (normalising the constant of
proportionality to 1). If p is the probability that a cell chosen at
random in the AO is under control, then:

p = N(l)/N = l^(−D)/N = l^(2−D)/A.
Note that D always lies between 0 and 2, so that p is well
defined.
In discussion with senior UK commanders who have had
recent operational experience at a high level, the concept of
control of an area as corresponding to the prevention of flow
through that area (flow in terms of an opposing force, or
perhaps some third party) has been endorsed as a good
analogy. We thus define the commander as having “weak
control” of his area of operations if he can to some extent
control movement through the AO. This is similar to the
problem of determining whether fluid can seep through a
block of semi-porous rock, as discussed in [7], where a
renormalisation group approach was used. We thus define
weak control as corresponding to a span of controlled areas
that stretches either from side to side or from top to bottom of
the AO. Following on from this, we define the commander to
have “strong control” of the AO if there is a span of controlled
areas stretching both from side to side and from top to bottom
of the AO, resulting in a strong constraint on the flow of
people through the area.
The question at issue is then: how do these concepts of control
relate to the ability of the force to collaborate locally (the
fractal dimension)?
Consider first a block of four cells, where each cell is a square
of side l that a single unit can control. We now consider the
probability p1 of weak or strong control of this square block of
side 2l in terms of the probability

p = l^(2−D)/A

of a unit controlling each of the cells of side l. We consider
each of the five different classes of configuration for this block,
as shown in Figure 4.2.
In Figure 4.2, we show the five classes a to e of configuration,
and mark beside each case whether this gives weak or strong
control, by considering the span of controlled areas.


Figure 4.2: Five Configuration Classes

The probability of each configuration can be derived in terms
of p. For example, the probability of any of the cases in
configuration d is p^3(1−p). By adding up the configurations
corresponding to weak control, and taking into account the
probability of each such configuration, we have the relation:

p1(weak) = 4p^2(1−p)^2 + 4p^3(1−p) + p^4.

We can do the same thing for strong control, leading to the
relation:

p1(strong) = 4p^3(1−p) + p^4.
Using the renormalisation group approach [5], we iterate at
increasing levels of cell size, leading to the relations:
• Weak control: p_(n+1) = p_n^2 (4 − 4p_n + p_n^2)
• Strong control: p_(n+1) = p_n^3 (4 − 3p_n)
These give rise to the recursive schemes shown in Figures 4.3
and 4.4. The relations for weak and strong control above
correspond respectively to the functions:

f(x) = x^2(4 − 4x + x^2)
g(x) = x^3(4 − 3x)


Figure 4.3: Plot of y = f(x) and y = x - weak control

Figure 4.4: Plot of y = g(x) and y = x - strong control

The fixed points in the recursive relation for weak control
correspond to the values x shown in Figure 4.3 such that
y = f(x) intersects y = x. Similarly for strong control, the fixed
points correspond to the values x such that y = g(x) intersects
y = x. For both weak and strong control, there are stable fixed
points at x = 0 and x = 1. However, there is also an unstable
fixed point between these that is different for strong and weak
control. This was calculated to be 0.382 for weak control, and
0.768 for strong control.
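
These thresholds are easy to verify numerically: factoring out the trivial roots leaves the quadratics x^2 − 3x + 1 (weak) and 3x^2 − x − 1 (strong), with roots (3 − √5)/2 ≈ 0.382 and (1 + √13)/6 ≈ 0.768. A small Python check by bisection:

    def bisect(func, lo, hi, tol=1e-12):
        """Bisection root finder; assumes func(lo) and func(hi) differ in sign."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if func(lo) * func(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    def weak(p):
        return p**2 * (4 - 4*p + p**2) - p   # zero at fixed points of the weak map

    def strong(p):
        return p**3 * (4 - 3*p) - p          # zero at fixed points of the strong map

    # Search strictly between the stable fixed points at 0 and 1.
    print(f"weak-control threshold:   {bisect(weak, 0.1, 0.9):.3f}")   # ~0.382
    print(f"strong-control threshold: {bisect(strong, 0.1, 0.9):.3f}") # ~0.768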

CONTROL AND FRACTAL DIMENSION


In either case, starting with a given fractal dimension D for the
force, and the dimensions of the AO, we can calculate a
corresponding starting probability:

p0 = l^(2−D)/A.

For side length L of the AO, there will be a corresponding
value of the iteration order n such that 2^n l ≅ L. By using the
recursive scheme above, we can calculate for this value of n
the corresponding probability of weak or strong control of the
AO. Consideration of Figures 4.3 and 4.4 indicates that there
is a critical value of the starting probability p0: starting values
above this critical value polarise towards very good control,
whereas starting values below it polarise towards very poor
control. In fact, this critical point corresponds to a phase
change in the behaviour of such a system.
Examination of Figures 4.3 and 4.4 shows that it is easier to
iterate towards good weak control than towards good strong
control, as we would expect (since weak control is easier to
achieve than strong control). Figure 4.5 shows how this
iteration works for a starting probability of 0.65 and the
requirement of weak control.
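
A minimal sketch of this recursive calculation in Python, with illustrative values of p0 and L/l (as in Figure 4.5, a starting probability of 0.65 iterates towards near-certain weak control, but towards loss of strong control, since 0.382 < 0.65 < 0.768):

    import math

    def control_probability(p0, L, l, strong=False):
        """Iterate the renormalisation map n ~ log2(L/l) times from p0.

        Returns the estimated probability of (weak or strong) control of an
        AO of side L by units whose control bubbles have side l.
        """
        n = max(1, round(math.log2(L / l)))
        p = p0
        for _ in range(n):
            if strong:
                p = p**3 * (4 - 3 * p)
            else:
                p = p**2 * (4 - 4 * p + p**2)
        return p

    # Illustrative: p0 = 0.65, AO 16 control bubbles across (L/l = 16).
    print(f"weak control:   {control_probability(0.65, L=160.0, l=10.0):.3f}")
    print(f"strong control: {control_probability(0.65, L=160.0, l=10.0, strong=True):.3f}")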


Figure 4.5: Recursive Calculation of the Probability of Weak Control

LOCKOUT
From a game theoretic perspective, we can see that each side is
trying to drive its own value of control up, and the other side’s
down. The analysis above indicates that there should be rapid
lockout, i.e. one side should rapidly gain control and lock the
other side out.

PERCOLATION THEORY AND THE RENORMALISATION GROUP
In relating these ideas to the behaviour of natural systems, the
various configurations of 2x2 controlled areas are identical to
the porous and nonporous regions in semipermeable rock
structures [7]. The study of such processes is referred to as
Percolation Theory. A good step-by-step introduction to the
theory, and some working examples of how such processes
work in two dimensions, are provided at the Web site reference
[8]. If p is the probability of an individual rock (or crystal)
domain being porous, then p is the driving parameter of the
process. The behaviour can be tuned in the sense that we
discussed in Chapter 2. If p is below the critical value p0, then
the clusters are not large enough to form a path of percolation
from one side of the structure to the other (popping noise).
When the probability p is above the critical value p0, then
suddenly all clusters tend to spread from one side to the other,
allowing percolation throughout the structure (these are thus
called percolation clusters) and corresponding to a phase
change in the dynamic of the system (snapping noise). Near to
the critical point, it can be shown [8] that the distribution in
size of percolating clusters is a power law, corresponding to a
fractal distribution of cluster size (crackling noise). In fact, for
p slightly greater than p0, the fraction F of individual domains
that form part of a percolating cluster takes the form
F = F0(p − p0)^β. From this, we can see that as p approaches
the critical value p0 from above, the fraction of domains
forming part of the percolating cluster tends to zero (for a very
large initial configuration). Thus such clusters can become
very tenuous close to the critical point.
These configurations come originally from attempts to model
lattices of magnetic spins in more than one dimension [5].
Such 2x2 configurations are then referred to as block spins,
since they are composed of four individual spins, each of
which may be either up or down (the block spin is defined as
the sign of the sum of the individual spins, so it still has the
value +1 or −1). By developing the idea of the renormalisation
group, iterating to larger and larger domains, Wilson (building
on work by Kadanoff) was able to show that such arrays of
magnetic spins do indeed exhibit phase change effects (as we
have shown above for the phase change between being out of
control and being in control of a region), and that the
parameters involved can be calculated explicitly. For this
work, Wilson was awarded the Nobel Prize in Physics.

IMPLICATIONS FOR SELF-ORGANISING INFORMATION NETWORKS
If a Self-Organising Information Network is thought of as a
grid of connections, and we have a probability p of creating a
link between one element of the network and another, then
percolation across the network corresponds to being able to
send a signal from one end of the network to the other. From
Percolation Theory, we can thus see that we would expect a
phase change in the dynamic of such a network. If p is small,
then only local connections can be made. However, at some
critical value of p, there will be a phase change such that
connections across the full network can suddenly be made.
Near to this critical value of p, the clusters formed by those
connected on the network will form a fractal set, and the
distribution of such cluster sizes is described by a power-law
relationship between size and frequency. A minimal Monte
Carlo sketch of this phase change is given after the following
two questions, which arise naturally:
1. What is the benefit (and cost) of being above the critical
threshold so that the connections are robust?
2. How can we measure the benefit of using the knowledge
obtained by such networked interconnection?
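
The Monte Carlo sketch promised above: sites of a square grid are occupied (connected) with probability p, and we test whether an occupied cluster spans the grid from top to bottom. Grid size, replication count, and p values are illustrative; the spanning threshold for this lattice is approximately 0.593.

    import numpy as np

    def spans(grid):
        """True if occupied sites form a connected path from top row to bottom row."""
        n = grid.shape[0]
        frontier = [(0, j) for j in range(n) if grid[0, j]]
        seen = set(frontier)
        while frontier:
            i, j = frontier.pop()
            if i == n - 1:
                return True
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < n and 0 <= nj < n and grid[ni, nj] and (ni, nj) not in seen:
                    seen.add((ni, nj))
                    frontier.append((ni, nj))
        return False

    rng = np.random.default_rng(7)
    for p in (0.4, 0.55, 0.7):   # site percolation threshold on a square lattice ~0.593
        hits = sum(spans(rng.random((50, 50)) < p) for _ in range(40))
        print(f"p = {p:.2f}: spanning fraction ~ {hits / 40:.2f}")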
Recent work by Perry [9] has exploited the idea of information
entropy to address the second question (with a reduction in
entropy across the network corresponding to an increase in
knowledge, this then being equivalent to a reduction in the
delay in prosecuting an action). To understand where this idea
originates, we first look at the influence of knowledge in
wargames from an open systems perspective.
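
A small sketch of the kind of entropy calculation involved (following the general idea, not Perry's actual formulation): knowledge gain is measured as the reduction in Shannon entropy of the commander's belief about, say, which of eight locations an enemy unit occupies.

    import math

    def shannon_entropy(probs):
        """Shannon entropy in bits of a discrete probability distribution."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Belief about which of 8 locations an enemy unit occupies:
    prior = [1 / 8] * 8                               # no information: 3 bits
    posterior = [0.70, 0.20, 0.10, 0, 0, 0, 0, 0]     # after a sensor report (illustrative)

    gain = shannon_entropy(prior) - shannon_entropy(posterior)
    print(f"Knowledge gained from the report: {gain:.2f} bits")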


WARGAMES AS OPEN SYSTEMS SUSTAINED BY KNOWLEDGE FLOWING ACROSS THE BOUNDARY
As noted recently by Roske [10], a wargame is an open system
of the type introduced in Chapter 1. As he notes:

“In a classic command post exercise, we inject human decisionmaking
into a structured environment to generate open system behaviours.
Human decisionmaking represents energy crossing the structured
system boundary…”
In [11], Perry and Moffat developed the idea of using
Information Entropy (from Shannon Information Theory) as
a way of capturing the knowledge available to a military
commander’s decisionmaking during a wargame. This was
initially applied to looking at the benefit to be gained from
advanced airborne standoff radars (such as JSTARS in the
U.S. or ASTOR in the UK).
A series of wargames was carried out in the UK to quantify the
difference in combat effectiveness of a force without airborne
standoff radar (ASTOR) in comparison with a force with
ASTOR, or a force with other weapon systems whose life-cycle
costs approximately equalled those of ASTOR. The common
thread through all of the cases to be examined was the stream
of decisions made by the friendly force commander. We
describe here how we were able to capture the flow of
information to the Blue commander during the wargames
using the concept of Information Entropy, and then turn that
into a measure of “knowledge.” This approach allowed us to
measure the quantity of knowledge flowing across the
boundary of the system in order to influence the decisions
made by the Blue commander and the onward evolution of
the wargame.

WARGAME STRUCTURE
There are three major problems with the use of wargames to
support military studies: (1) too little output data, (2) the
likelihood of atypical results, and (3) oversimplification. The
first problem stems from the fact that wargames are generally
slow, cumbersome, and resource intensive. Consequently,
most analysts who use them to support studies plan only a
small number of games, thus precluding significant statistical
results. The second problem recognises the possibility that the
sequence of decisions taken by the players in these games
represents statistical outliers. Players may adopt extreme
strategies that exist “outside” of what is considered to be a
typical military response. The third problem reflects the fact
that human players can only approximate the results of
combat operations. In our studies, we addressed these
problems in three ways: by arguing that our wargames are
quasi-memoryless processes for tactical situation assessment; by
introducing the epitomising strategy principle in wargames; and by
embedding computer models to adjudicate engagements in
the manual games. We discuss the first two of these concepts
next.

THE MEMORYLESS PROPERTY OF WARGAMES


The wargames used to support this study were two-sided,
zero-sum games played over several discrete time periods or
cycles (Bowen [12]). Consequently, the entire campaign can
be viewed as a dynamical process in which the state variables
are the force levels on both sides. In each of the wargames,
both the friendly and enemy commanders formulated
operations plans based on the stated campaign objectives.
During the play of each game, both commanders assessed the
overall situation at the strategic and tactical levels. Assessment
at the strategic level consisted of examining the need to alter
the campaign plan. At the tactical level, it supported force
allocation decisions consistent with the implementation of the
campaign plan.
A dynamic process is said to be memoryless or Markovian if, at
each cycle, the state of the system is influenced only by the
state of the system in the previous cycle, and not by the
specific history of the system (Stark and Woods [13]). The
wargames conducted to support this study approximately
satisfied these conditions as follows:
conditions as follows:
• TACTICAL DECISIONS: In all of the wargames played,
the strategic situation was such that neither commander
found it necessary to alter their original plan. Consequently,
the commanders’ decisions centred on the allocation of their
forces. This forced them to focus exclusively on the tactical
situation in the current game cycle and their assessments of
the likely situation in subsequent cycles. Transitions in the
state variables therefore depended upon the status of the
forces at the end of the previous cycle, the decisions taken in
the current cycle, and the combat attrition experienced in
the current cycle.
• CYCLE INDEPENDENCE: Both commanders were
assumed to act outside the opponent’s decision cycle. That
is, the opponent was assumed to have sufficient time within
each cycle to redeploy his forces, so that intelligence on unit
identity, type, and location gained in the previous cycle was
no longer valid in the current cycle; past history had no
effect on the commanders’ current decisions. In actual
practice, we found this assumption only partially valid, as
will be made more apparent below in the discussion of the
UK FASTHEX gaming model used in the wargames for
this study.
Under these conditions, the campaign can be viewed as a
sequence of decisions taken in a discrete dynamical system.
Each commander attempts to select a set of decisions (a
strategy) that will maximise the likelihood that he will achieve
his mission and that is consistent with the overall campaign
plan. Because the decisions are made under a connecting
campaign plan, we assert that the process is only
quasi-memoryless. This assumption is most relevant when the
situation on the ground is in a state of rapid flux: the most
interesting case.

THE EPITOMISING STRATEGY PRINCIPLE


In the play of the games, we strove to ensure that the
commanders took actions that epitomised the side’s historic
conduct in battle. We attempted to avoid bold, daring,
brilliant manoeuvres as well as bungling incompetence. For
the purposes of analysis, more cautious, conservative strategies
that are consistent with accepted doctrine are preferred. There
was a real danger that uncontrolled play would have resulted
in producing only outlier results, when what is wanted are
typical results. We achieved this by playing the wargames
“open.”¹ Red and Blue players were able to discuss strategies
and decisions in the presence of a game controller, and thus
we ensured that the actions contemplated by either did not
constitute extraordinary tactics.

¹ A wargame is “open to Blue” if the physical state of Red is fully known to
the Blue player (Bowen [12]).


In keeping with this principle, great pains were taken to ensure
that each side “knew” only those things that would normally
be known through the available surveillance and intelligence
assets. Players were forced to examine the information
received through sensors and surveillance assets carefully to
ensure that:
1. Sound military judgement was used in considering the
decision options available.
2. Players used what they learned and interpreted from
sensor reports, and not what they saw on the “game
board.”
3. The information received was consistent with the
limitations and capabilities of the equipment being used,
the employment of the surveillance assets, and other
environmental conditions.

WARGAMING WITH FASTHEX


In the play of the FASTHEX wargame, the continuity of the
battle was modelled as a series of discrete timesteps referred to
as “game cycles” (Figure 4.6). The length of a cycle can be set
by the players, but is usually chosen to be 2 hours. Within each
cycle, a linear sequence of actions is taken and the
consequences of each are evaluated to simulate events within
the cycle. Blue and Red alternately begin the sequence in
order to smooth out the advantages or disadvantages of
“going first.”
The flow chart in Figure 4.6 can be thought of as a series of
modules in a fully automated simulation of combat, less the
decision module. The modules are rather simple
representations requiring extensive player input. For example,
the player selects the type of reconnaissance system to be used,
states the current environmental conditions, and specifies the
hexagon² on the game board to be searched. With this
information, the model applies the appropriate probability of
detection or recognition and reports the results.
Environmental conditions and terrain features are not known
by the model and are inserted through rules constraining the
play.

² FASTHEX uses a hexagonal game board much like IDAHEX. For these
games, each hexagon is 7.5 km from face to face.

Figure 4.6: FASTHEX Game Cycle Sequence

THE DECISION PROBLEM


With the epitomising principle in mind and the constraints on
gaming imposed by the FASTHEX model, we examine the
decisions open to each commander, with particular emphasis
on the Blue commander. Great care has been taken to ensure
that decisions required in actual combat had their equivalent
during the play of the wargame. This is made possible by the
fact that players can override almost all computer-generated
outcomes in the game. Therefore, realism can be imposed in
those cases where the model obviously strays.
The commander makes operational and tactical decisions at
each combat phase in the FASTHEX wargame cycle in
keeping with his overall operational and tactical aims (Figure
4.6). The one exception, of course, is the engagement phase:
the engagement is a consequence of the decisions taken by the
Red and Blue commanders in the other phases.³ Therefore at
each cycle, the decision set in FASTHEX, D(t), consists of 4
components, D(t) = {d1(t), d2(t), d3(t), d4(t)}, where t indicates
the cycle and di(t) is the decision taken at the ith phase in the
FASTHEX model. The following is a description of the
decisions taken at each phase:
taken at each phase:
1. RECONNAISSANCE, d1(t): The commander must decide
which of the reconnaissance assets at his disposal to
allocate. In most cases, this means deciding where to direct
his sensors and how many to commit to the process (taking
account of higher level assets such as satellite surveillance).
It should be noted here that reconnaissance assets are used
primarily to identify valid targets and main force
concentrations.
2. STRIKE, d2(t): The strike decisions can be thought of as
the allocation of deep fire assets. The engagement phase
adjudicates the close battle, but the deep fire battle is
handled separately.
3. MOVEMENT, d3(t): The commander must decide which
of his units to move and how far they are to move. He is
constrained by terrain, the maximum speed of his units,
and the degree to which the units are fit (in terms of
damage inflicted) to accomplish movement. Units move at
the lowest level of resolution: the battalion, battle group, or
squadron.
4. POST-ENGAGEMENT MOVES, d4(t): After each
engagement, the commander assesses the damage done to
his units. If the units are not capable of continuing as an
integral force, they can be withdrawn, consolidated with
other forces, or both.

³ Engagement adjudication is done using look-up tables based on lower
level computer modelling of the various types of engagement.

OPTIMAL CONTROL FORMULATION


Each of the decisions is taken from among a discrete set of
possible choices described above, and therefore the set {D(t)}
consisting of all possible decisions at cycle t has cardinality
equal to the product of the cardinalities of the 4 phase-decision
sets. The collection of decision sets at each cycle in the game is
then referred to as the decision stream for that game, and
therefore we denote by

DG = {D(0), D(1), ..., D(m−1)}

the decision stream for an m-cycle game. Clearly, the number
of possible decision streams for even a simple wargame can
easily become unmanageably large, and thus we are burdened
by the “curse of dimensionality.”
The consequences of the commander’s decisions at each cycle
can be measured in several ways. As mentioned earlier, we
have selected the status of friendly and enemy forces.
Consequently, if we let

X(t) = [x1(t), x2(t), ..., xb(t), y1(t), y2(t), ..., yr(t)]^T

be the vector of combat strength of the b friendly force
components and the r enemy force components at cycle t, then
the decision stream can be viewed as a memoryless multistage
decision process, as depicted in Figure 4.7. At each cycle, the
commander wishes to select D(t) so that a performance
function dependent upon the vector X(t) is optimised in some
way.

Figure 4.7: A Wargame as an Open Dynamical Process

In this formulation, θt is a transition function, so that
X(t+1) = θt[X(t), D(t)]. The status of both Red and Blue forces
in terms of combat strength in cycle t+1 depends upon their
combat status in cycle t and the decisions made in cycle t. The
initial condition X(0) is the total combat strength of the
friendly and enemy forces at the beginning of the campaign,
and X(m) is the status of both at the termination of the
campaign.
The commander’s problem then is to select the decision
stream DG that optimises the performance (i.e., utility)
function:

P = Σ_{t=0}^{m−1} f_t[X(t), D(t)] + Φ[X(m)].

Assuming that we are able to find a reasonable representation
for f_t(·) and Φ[X(m)], finding the optimal decision stream is
then an optimal control problem of the form discussed in
Chapter 1 and Appendix 1. In this case, the decision variables
D(t) at each cycle are the control variables and the X(t) are the
state variables. In Chapter 1 and Appendix 1, we discussed
under what conditions we might expect such a problem to
have a unique solution. For further discussion of such
solutions, see also Bryson and Ho [14].

THE TWO-SIDED GAME


The problem with a two-sided game is the development of a
second performance function for the opposing side. This prob-
lem can easily be solved if we design P such that if Blue
chooses to maximise P, then Red will choose to minimise P. In
this formulation, we assume two decision streams, one for the
friendly commander BG = {B(0),B(1),...,B(m–1)} and one for
the enemy commander RG = {R(0),R(1),...,R(m–1)}. Our
assumption implies that the game is zero-sum, that is, any
increase in P for the friendly force results in an equal decrease
for the enemy force and vice versa. Consequently, we wish to

$$\max_{B(t) \in B_G} \; \min_{R(t) \in R_G} (P),$$
subject to the transition constraint:

$$X(t+1) = \theta_t[X(t), B(t), R(t)], \quad 0 \le t \le m-1.$$


Solutions to problems of this type are fairly complex for all but
very trivial examples. For example, see Hillestad [15] and
Berkovitz and Dresher [16]. However, the objective here is
not to solve the wargame using one of these techniques, but
rather to use the two-sided memoryless multistage optimal
control formulation as a convenient construct for a formal
statement of the problem.


GAMES WITH EQUIVALENT DECISION STREAMS


Given the complexity of the two-sided optimal control construct for the wargames, we forego attempts to apply any closed form solution and rely instead on the replication of instances of the game from each of the scenarios and for the several cases to be examined. Even this, however, can be extremely time consuming, and therefore we wish to identify those alternative cases in which the decision streams are essentially different, since only these need be gamed separately. That is, if the introduction of alternative weapon systems in a game does not significantly alter the decision stream, DG, then the two games are considered equivalent. In this way, the results of one game can be rerun under the conditions of the second. Although the results may vary with each, the decision stream is taken to be constant.
A simple example will serve to illustrate the process. Consider
a conflict in which Red and Blue commanders have only artil-
lery and tanks with which to conduct a two-cycle campaign,
and we focus on the use of artillery resources. Our state vector
then is $X(t) = [x_1(t), x_2(t), y_1(t), y_2(t)]^T$ where:
$x_1(t), y_1(t)$ = the number of BLUE and RED artillery pieces respectively, and
$x_2(t), y_2(t)$ = the number of BLUE and RED tanks respectively.
In both cycles, we assume that the decision on both sides is the allocation of artillery fires. Consequently, $B(t) = [b_1(t), b_2(t)]^T$ is the Blue commander's decision at cycle t (t = 0,1) and $R(t) = [r_1(t), r_2(t)]^T$ is the Red commander's decision, where $b_1(t), r_1(t)$ is the fraction of Blue/Red artillery allocated to attack Red/Blue artillery, $b_2(t) = 1 - b_1(t)$ is the fraction of Blue artillery to be allocated against Red tanks, and $r_2(t) = 1 - r_1(t)$ is the fraction of Red artillery to be allocated against Blue tanks. For simplicity, we restrict the domain of the allocation fractions to 0, 0.5, and 1. The two-
cycle game thus described is illustrated in Figure 4.8:

Figure 4.8: An Example Two-Cycle Game

The transition function, $\theta_t$, is simply the combat adjudication model. If we assume a simple Lanchester differential model and let i = 1 for Blue and i = 2 for Red,

$$\theta_t[X(t), B(t), R(t)] = X(t) - [\alpha_{21} r_1(t) y_1(t),\; \alpha_{22} r_2(t) y_1(t),\; \alpha_{11} b_1(t) x_1(t),\; \alpha_{12} b_2(t) x_1(t)]^T,$$

where:
$0 \le \alpha_{i1} \le 1$ is the effect of Blue/Red artillery against Red/Blue artillery; and
$0 \le \alpha_{i2} \le 1$ is the effect of Blue/Red artillery against Red/Blue tanks.
The $\alpha_{ij}$s can be thought of as single shot kill probabilities (SSKPs), and $b_j(t)x_1(t)$ and $r_j(t)y_1(t)$ represent the number of Blue/Red artillery allocated to Red/Blue artillery and tanks. There-
fore the transition equations become:

$$\begin{aligned} x_1(t+1) &= x_1(t) - \alpha_{21} r_1(t) y_1(t) \\ x_2(t+1) &= x_2(t) - \alpha_{22} r_2(t) y_1(t) \\ y_1(t+1) &= y_1(t) - \alpha_{11} b_1(t) x_1(t) \\ y_2(t+1) &= y_2(t) - \alpha_{12} b_2(t) x_1(t) \end{aligned}$$

We must also have that $\alpha_{2j} r_j(t) y_1(t) \le x_j(t)$ and similarly, $\alpha_{1j} b_j(t) x_1(t) \le y_j(t)$. In other words, the number of kills cannot
exceed the number of target weapon systems available.
What remains to be defined is a utility function that is some
measure of how well both sides accomplish their mission. For
this simple problem, we assume that both sides wish to maxi-
mise the number of tanks available at the end of the second
cycle. Their reasoning is that as the opposing sides come into
closer contact, tanks are more effective than artillery. A utility
function that does this is:

$$P = \sum_{t=0}^{1} \left[x_2(t) - y_2(t)\right] + 0.9\left[x_2(2) - y_2(2)\right] + 0.1\left[x_1(2) - y_1(2)\right].$$

The Blue commander therefore wishes to maximise P and the Red commander wishes to minimise P. Note that the weights,
0.9 and 0.1, reflect the relative importance assigned to tanks
and artillery by the two commanders. Of course, this objective
function is not unique. There are several other possibilities.
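To make the construct concrete, the following is a minimal Python sketch of this two-cycle game, played open-loop over the restricted allocation domain {0, 0.5, 1}. The initial strengths and SSKP values are illustrative assumptions, not taken from the text, and the brute-force max-min over complete decision streams stands in for the optimal control machinery (tractable here only because the game is so small):

```python
from itertools import product

def play(b_stream, r_stream, x=(10.0, 20.0), y=(10.0, 20.0),
         sskp=(0.3, 0.2, 0.3, 0.2)):
    """Play the two-cycle artillery game for fixed (open-loop) decision streams.

    b_stream, r_stream: b1(t) and r1(t) for t = 0, 1, i.e. the fraction of
    artillery fired at enemy artillery; the remainder engages tanks.
    sskp: (a11, a12, a21, a22) single shot kill probabilities (assumed values).
    Returns the utility P that Blue maximises and Red minimises.
    """
    a11, a12, a21, a22 = sskp
    x1, x2 = x   # Blue artillery, Blue tanks
    y1, y2 = y   # Red artillery, Red tanks
    P = 0.0
    for t in range(2):
        P += x2 - y2                      # running term of the utility sum
        b1, r1 = b_stream[t], r_stream[t]
        # Kills cannot exceed the surviving targets (the constraint above).
        x1, x2, y1, y2 = (x1 - min(a21 * r1 * y1, x1),
                          x2 - min(a22 * (1 - r1) * y1, x2),
                          y1 - min(a11 * b1 * x1, y1),
                          y2 - min(a12 * (1 - b1) * x1, y2))
    return P + 0.9 * (x2 - y2) + 0.1 * (x1 - y1)   # terminal terms at t = 2

streams = list(product((0.0, 0.5, 1.0), repeat=2))  # 9 decision streams per side
blue_best = max(streams, key=lambda b: min(play(b, r) for r in streams))
print(blue_best, min(play(blue_best, r) for r in streams))
```

Whether the bang-bang allocation pattern of Figure 4.9 emerges from this search depends on the assumed strengths and SSKPs.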
An interesting problem arises when the opposing commanders
do not agree on a common objective. That is, what happens
when the game is not zero-sum? In this case, we would need to
evaluate a separate objective function for each side. In the
example above, this would correspond to having a utility
P(Blue) and a utility P(Red) that might correspond (for exam-
ple) to different weightings for the value of tanks versus
artillery due to different perceptions of the endstate (Red may
simply wish to survive with a roughly balanced force, for
example). We would then require some higher level measure
of what this set of outcomes implies. In some cases (such as
Operations Other Than War), we might be seeking to maxi-
mise the utility of both sides (i.e., a win-win situation rather
than the win-lose assumption we made above).


For the example we have considered so far, we can now analyse an equivalent decision streams case. The allocation of
artillery to enemy tanks and artillery over the two cycles is
referred to as the allocation strategy. Figure 4.9 lists all the
possible strategies by describing the game for the Blue com-
mander in extensive form. Note that the diagram reflects only
the allocation of Blue artillery to Red tanks, b2(t).

Figure 4.9: BLUE Commander’s Allocation Strategy

The Blue commander reasons that during the first cycle, because the opposing forces are not in direct contact, the
greatest threat to his forces is the enemy artillery. Therefore,
he allocates all of his artillery against Red artillery. In the sec-
ond cycle, as the forces begin to close, he sees Red tanks as the
more serious threat and therefore allocates all of his resources
against Red tanks. Therefore, his decision stream (allocation
strategy) is:

$$B_G = \{B(0), B(1)\} = \{[1, 0]^T, [0, 1]^T\}.$$

This is reflected in Figure 4.9 by the bold path, and corresponds to the bang-bang solution discussed in Appendix 1,
where only extreme settings of the control variable are used.
(This solution would result from a Lanchester square law
Blue/Red interaction for example, which would give rise to a
linear Hamiltonian, as explained in Appendix 1.)
Now, assume that a new game is to be played that differs from
the current game only in that the Blue force has been aug-
mented by more tanks. If the addition of these tanks does not
alter the decision stream, then we consider them equivalent
and the game may be replayed with the same decisions on
both sides and the outcomes (value of P) compared.

DECISION UNCERTAINTY
ASTOR's primary function is to contribute to tactical situation assessment by observing the battlefield, detecting and identifying enemy units, and reporting on its findings. Consequently, a metric that measures how well situation assessment is accomplished across all of the cases tested was seen as useful to this study of the ASTOR sensor system. Such a metric
allows us to measure the degree of confidence the commander has that
he possesses an accurate picture of the battlefield in his area of interest.
We would expect that the greater his knowledge about the
location, size, and composition of the enemy force, the greater
his confidence in making decisions concerning the allocation
of his weapons and the movement of his forces. We also recog-
nise that information of this type is not all he would require.
Information concerning enemy intent gleaned from COM-
INT, SIGINT, and known enemy fighting doctrine would also
assist in completing the picture.


We developed such a metric that reflects the amount of knowledge the commander has concerning the enemy forces arrayed
against him in his area of interest. The measure is a function of
the size, diversity, and effectiveness of the sensor suite, and the
effectiveness of the command and control system used to pro-
cess the reported sensor observations. The detailed develop-
ment of this metric is covered next.

PROBABILITY DISTRIBUTION
We begin by letting the vector U represent the competing
hypotheses that any number of enemy units are arrayed
against the friendly commander at time cycle t so that
U = {0,1,2,...,n}. Given the level of resolution for the ASTOR
games, a unit was taken to be a battalion. We omit the cycle
index, t, for now, focusing instead on analysis within a timestep.
The term arrayed against is taken to indicate the units located on
the battlefield in some area of interest to the friendly com-
mander. This may mean along some avenue of approach in a
defensive operation or blocking a route of advance in an offen-
sive operation. Figure 4.10 depicts a notional defensive
campaign situation.
We assume that the friendly commander knows the number of
enemy units that might be brought to bear against him during
the campaign. That is, we assume that he knows n. This is a rea-
sonable assumption in that it is highly likely that the Intelligence
Preparation of the Battlefield (IPB) would yield this informa-
tion. What is unknown is the tactical deployment of the units at
each timestep. Tactical situation assessment then is taken to be
the process of estimating the enemy’s tactical deployment at
time t and the effectiveness of this estimate is the degree of
uncertainty associated with his current state of knowledge.


Figure 4.10: BLUE Commander’s Situation Assessment Problem

BAYESIAN DECISIONMAKING
We begin by analysing the intelligence gathering process at each timestep. We first assume that a Bayesian update methodology for tactical situation assessment is appropriate within a wargame cycle, but not between wargame cycles, given the assumptions concerning the Markov properties (i.e., lack of memory) of the FASTHEX game with 2-hour timesteps.4 Consequently, the process described here is repeated prior to each decision to commit forces.

4We later exploit Bayesian updating by assuming multiple sensor sweeps within a single decision cycle.
1. INPUT DISTRIBUTION: The friendly commander may or may not have some idea of the likely disposition of enemy units. If he does, we may describe it using an empirical distribution. However, for this analysis, we assume that the friendly commander is completely ignorant of the enemy commander's intentions. This provides us with a worst case situation, corresponding to the assumed lack of memory between timesteps. We let P(U = u) represent the probability that the enemy commander will commit u of his n units in a specified area of interest in the area of operations, AO (the avenue of approach in Figure 4.10). Assuming that the enemy commander is equally likely to deploy any number of units in the area of interest, we have that P(U = u) = 1/(n+1). The friendly commander hopes to refine this distribution using his sensor assets.
2. THE SENSOR MODEL: We next let V = {0,1,2,...,n} represent the number of units detected by the sensor assets allocated to the area of interest.5 Therefore, P(V = v) is the probability that the sensors will detect v of the enemy units arrayed against the friendly forces. However, this number is conditioned on the number of units in the area of interest deployed by the enemy commander. Consequently, we focus on the conditional probability, $P(V = v \mid U = u)$. For simplicity, we assume a single sensor is cued to search the specified area of interest.6 We further assume that the sensor is capable of detecting a unit in the area of interest with probability q and that there are no false detections from the sensor or elsewhere.7 Consequently, the conditional probability distribution, $P(V = v \mid U = u)$, is the binomial distribution:

$$P(V = v \mid U = u) = \begin{cases} \binom{u}{v} q^v (1-q)^{u-v} & \text{for } v \le u \\ 0 & \text{otherwise.} \end{cases}$$

5By "detect" we mean that sufficient information is provided to allow the unit to be targeted by a weapon.
6This assumption can be relaxed to allow for the characterisation of a multisensor suite, provided that the sensors are independent.
7We later relax this assumption by allowing for the possibility that the sensor detections/identifications are false, that the command and control system used to transmit the sensor information may report a false detection/identification as real, and that the intelligence processing centre may interpret a false detection/identification as real.

3. SENSOR OPERATIONS: Our objective is to clarify the enemy force deployment picture based on the sensor observations by refining the friendly commander's initial and subsequent probability distributions on U. That is, we wish to calculate $P(U = u \mid V = v_d)$, where $v_d$ is the number of detections reported in the cycle, and thus assess the impact of the evidence provided by the sensor on our estimate of the number of enemy units arrayed against the friendly forces in the area of interest. Operationally, we assume that the sensor sweeps the area of interest once in a cycle. As a detection occurs, it is immediately reported, so that there are $v_d + 1$ reports from the sensor per cycle. The additional report accounts for the fact that a report of 0 detections is sent initially. Since it is impossible to control the time when detections occur within a FASTHEX game cycle, we assume a uniform distribution of reports. That is, a report of no detections occurs at time $t/(v_d + 1)$, a report of one occurs at $2t/(v_d + 1)$, etc. The estimate is refined at every subinterval using Bayes' formula as follows:

$$P(U = u \mid V = v) = \frac{P(U = u \mid V = v-1)\, P(V = v \mid U = u)}{\sum_{i=0}^{n} P(U = i \mid V = v-1)\, P(V = v \mid U = i)} \quad (2)$$



In this formulation, $P(U = u \mid V = v-1)$ is the prior probability, $P(V = v \mid U = u)$ is the knowledge contributed by the latest report (the probability that one more unit is detected), and $P(U = u \mid V = v)$ is the posterior probability on U given the last report. Note that $P(U = u \mid V = -1) = P(U = u) = 1/(n+1)$; that is, the prior distribution before sensors are deployed is flat, as described above. This process is repeated for $v = 0, 1, \ldots, v_d$. Making the appropriate substitutions in (2), we get:

$$P(U = u \mid V = v) = \frac{P(U = u \mid V = v-1)\binom{u}{v} q^v (1-q)^{u-v}}{\sum_{i=0}^{n} P(U = i \mid V = v-1)\binom{i}{v} q^v (1-q)^{i-v}} = \frac{P(U = u \mid V = v-1)\binom{u}{v}(1-q)^{u}}{\sum_{i=v}^{n} P(U = i \mid V = v-1)\binom{i}{v}(1-q)^{i}}, \quad (3)$$
where v = 0,1,...,vd is the number of units detected by the
sensor and u ≥ v at each iteration. Figure 4.11 depicts the
process diagrammatically. Note the difference between no
sensor sweep in progress and a report of no detections.
The former is depicted by a flat probability distribution on
U whereas the latter is a refinement to the flat distribution.


Figure 4.11: Developing a Refined Estimate

A SIMPLE EXAMPLE
The following illustrates the process. Table 4.1 summarises the
results of a simple situation in which three units are known to
be available to the enemy commander. The sensor system has
a probability of detection/identification of q = 0.8. The entries
in the rows are the refined probabilities from 0, 1, 2, and 3
detections. The first row is the a priori probability assessment
on U assuming initial total ignorance. Figure 4.12 depicts the
results graphically.


v    P(U=0|V)   P(U=1|V)   P(U=2|V)   P(U=3|V)
-    .2500      .2500      .2500      .2500
0    .8013      .1603      .0321      .0064
1    0          .9218      .0736      .0046
2    0          0          .9634      .0366
3    0          0          0          1

Table 4.1: Refined Probability Assessments: Example 1

Figure 4.12: Refined Probability Assessments
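The update in equation (3) is straightforward to sketch in code. The following minimal Python fragment (the function name is our own) reproduces the rows of Table 4.1 for n = 3 and q = 0.8, to within small rounding differences:

```python
from math import comb

def bayes_update(prior, v, q):
    """One application of equation (3): refine the distribution over
    U = 0..n given a report that v units are currently detected."""
    n = len(prior) - 1
    w = [prior[u] * comb(u, v) * (1 - q) ** u if u >= v else 0.0
         for u in range(n + 1)]
    total = sum(w)
    return [wi / total for wi in w]

n, q = 3, 0.8
dist = [1.0 / (n + 1)] * (n + 1)   # flat prior: total ignorance
for v in range(n + 1):             # reports of 0, 1, 2, 3 detections
    dist = bayes_update(dist, v, q)
    print(v, [round(p, 4) for p in dist])
```

Note that the factor $q^v$ cancels between numerator and denominator, so only the simplified second form of equation (3) is computed.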

MULTIPLE SWEEPS
We now refine the analysis to show the effect that multiple sensor sweeps within the same cycle have on refining the probability estimates for U. Suppose that we assume that our
sensors are capable of k sweeps of the area of interest within
the commander’s decision cycle. That is, the sensor can per-
form k sweeps of the area of interest before the enemy commander can move his units in any significant way. In each of these sweeps (i), $v_{di}$ enemy units are detected. We further
assume that the probability estimates are made sequentially,
and that the sweep time is sufficiently small to allow for a sin-
gle “end of sweep” report. Using Bayes formula we get:

$$P(U = u \mid V = v_{di}) = \frac{P(U = u \mid V = v_{d(i-1)})\, P(V = v_{di} \mid U = u)}{\sum_{j=0}^{n} P(U = j \mid V = v_{d(i-1)})\, P(V = v_{di} \mid U = j)} \quad (4)$$

where $i = 1, 2, \ldots, k$, and $P(U = u \mid V = v_{d0}) = P(U = u) = 1/(n+1)$.


Making the appropriate substitutions, (4) becomes:

$$P(U = u \mid V = v_{di}) = \frac{P(U = u \mid V = v_{d(i-1)})\binom{u}{v_{di}}(1-q)^{u}}{\sum_{j=v_{di}}^{n} P(U = j \mid V = v_{d(i-1)})\binom{j}{v_{di}}(1-q)^{j}}. \quad (5)$$
In general, Bayesian updating has a tendency to converge rather rapidly, especially in cases such as this where false detections/identifications are not allowed: that is, it is impossible to overstate the number of units actually present. The effect is that subsequent reports of fewer units than previously detected are totally ignored. To illustrate, consider a simple case in which n = 3 units. We assume that three sweeps were conducted, resulting in three sequential detections using a sensor with probability of detection q = 0.8. Table 4.2 summarises the results of applying equation (5) with k = 3. The number of units detected each time is listed in the table. The number of units in the area of interest is actually three, and subsequent observations that two units were detected/identified are completely ignored.


i   vdi   P(U=0|V)   P(U=1|V)   P(U=2|V)   P(U=3|V)
0   -     0.250      0.250      0.250      0.250
1   2     0          0          0.625      0.375
2   3     0          0          0          1
3   2     0          0          0          1

Table 4.2: Multiple Sweeps Case 1

Now consider a second case with a somewhat different history, as depicted in Table 4.3. In this case, four sweeps were con-
ducted resulting in the sequential detections depicted in the
Table. The detection of one unit persisted for three reports.
Note the rapid convergence of P(U = 1 | V ) . However, the sin-
gle detection of two units in Sweep 4 shifts the mode of the
distribution to U = 2. Because we exclude false detections, all
reports less than the current number detected will be ignored.

i   vdi   P(U=0|V)   P(U=1|V)   P(U=2|V)   P(U=3|V)
0   -     0.250      0.250      0.250      0.250
1   1     0          0.658      0.263      0.079
2   1     0          0.767      0.122      0.111
3   1     0          0.925      0.059      0.016
4   2     0          0          0.855      0.145

Table 4.3: Multiple Sweeps Case 2
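The per-sweep update in equation (5) is the same computation as equation (3), so the bayes_update sketch given earlier can be reused directly. For instance, the following fragment reproduces Case 1 (Table 4.2):

```python
n, q = 3, 0.8
dist = [1.0 / (n + 1)] * (n + 1)        # flat prior over U = 0..3
for i, v_d in enumerate([2, 3, 2], 1):  # end-of-sweep reports, as in Table 4.2
    dist = bayes_update(dist, v_d, q)   # bayes_update from the earlier sketch
    print(i, [round(p, 3) for p in dist])
```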

FALSE TARGET DETECTIONS/IDENTIFICATIONS
Up to this point, we have assumed that false targets were not
present. It is possible to relax this assumption and recognise
that targets can be misclassified in several ways: (1) the sensor system may be defective; (2) the command and control system used to transmit the detection to a central processing centre
may have erroneously introduced a false target; (3) the pro-
cessing centre equipment or personnel may have misinter-
preted the data being received; and (4) the enemy may be
actively engaging in deception activities (i.e., Information
Operations). The way in which this is done is described in
detail in reference [11], using a Poisson process to represent
the “flow” of false targets through the sensing process.

KNOWLEDGE REPRESENTATION
It now remains to ascertain the degree of uncertainty existing
in the mind of the friendly commander at the time he must
take a decision on the employment of his forces. His current
knowledge consists of two components: (1) the fact that his sen-
sor suite detected a number of enemy units in his area of inter-
est; and (2) the refined probability distribution over the
possible number of enemy units that might be in his area of
interest based on his most recent sensor report. The value of
the first component depends upon whether false detections are
possible. The second depends upon the number of enemy
units detected and the reliability of the sensor system. The task
is to develop a knowledge metric that incorporates these two com-
ponents, thereby quantifying the likelihood that the com-
mander has a true picture of the number of enemy units
arrayed against him in his area of interest.

INFORMATION ENTROPY
We draw on information science to develop a knowledge met-
ric that is a function of the average information present in the
set of all possible uncertain events. This quantity is referred to as information entropy8 and it measures the amount of uncertainty in a probability distribution.
The amount of information available from the known occur-
rence of the event, U = u, i.e. that u enemy units are indeed
arrayed against the friendly force, is inversely proportional to
the likelihood that the event will occur. An event that is very
likely to occur provides little information when it does occur.
On the other hand, an unlikely event provides considerable
information when it occurs. Mathematically, we define infor-
mation as follows:

$$I(U = u) = \ln \frac{1}{P(U = u)} = -\ln P(U = u).\,^{9}$$
If we now consider all of the events in the refined set $U \mid V = v_d$, we reason that each occurs with probability $P(U = u \mid V = v_d)$. Therefore, the information available from the occurrence of each event is:

$$I(U = u \mid V = v_d) = -\ln P(U = u \mid V = v_d),$$
and the expected information from the occurrence of each
event is:

$$P(U = u \mid V = v_d)\, I(U = u \mid V = v_d) = -P(U = u \mid V = v_d) \ln P(U = u \mid V = v_d).$$

8The term entropy is used because the information entropy function is the same as that used in statistical mechanics for the thermodynamic quantity entropy. For a more complete discussion of entropy, see Blahut [17] and Zurek, ed. [18].
9In communication theory, the units of measurement are "bits" if base 2 logarithms are used and "nits" if natural logarithms are used (see Kullback [19] p. 7). For our purposes, we will assume a dimensionless quantity.


Consequently, the average amount of information in the probability distribution $P(U \mid V = v_d)$ can be expressed as:

$$H[P(U \mid V = v_d)] = H(U \mid V = v_d) = -\sum_{i=0}^{n} P(U = i \mid V = v_d) \ln[P(U = i \mid V = v_d)].$$

The entropy quantity $H(U \mid V = v_d)$ is the residual uncertainty regarding U given that V is instantiated to $v_d$. The average uncertainty then is the sum of the residual uncertainties weighted by the probability distribution on the sensor detections/identifications:

$$H(U \mid V) = -\sum_{j=0}^{n} P(V = j) \sum_{i=0}^{n} P(U = i \mid V = j) \ln[P(U = i \mid V = j)].$$

PROPERTIES OF INFORMATION ENTROPY


Information entropy has properties that make it ideal as a met-
ric for measuring the commander’s uncertainty prior to
making a decision and for measuring the uncertainty in the
entire campaign:
1. MAXIMUM ENTROPY: The entropy function is maximised when the uncertainty in the distribution is greatest. Maximum uncertainty occurs when the friendly commander has no sensor assets to deploy. In this case, any number of units might be arrayed against him with equal probability. Mathematically, we have that $P(U = u) = 1/(n+1)$. The entropy in this case is:

$$H(U) = -\sum_{i=0}^{n} \frac{1}{n+1} \ln \frac{1}{n+1} = \ln(n+1).$$
Thus the maximum uncertainty in P(U) is ln(n+1). Note that as n grows larger, the entropy increases, as we might expect; the more units available to the enemy commander, the less clear we are about their deployment in the absence of sensor outputs. In general, a probability
distribution with a wide variance exhibits high entropy.
2. MINIMUM ENTROPY: The entropy function is minimised at 0. This occurs when $P(U = u_i) = 1.0$ and $P(U = u_j) = 0$ for all $j \ne i$. This represents total certainty or minimum uncertainty.
3. CAMPAIGN ENTROPY: The total campaign entropy, denoted $H(U_1, U_2, \ldots, U_m)$, where m is the total number of game (campaign) cycles, satisfies the relation:

$$H(U_1, U_2, \ldots, U_m) \le \sum_{i=1}^{m} H(U_i).$$

The equality condition holds when the process is memoryless, as approximated in the FASTHEX games (for purposes of tactical situation assessment), when the situation being considered is rapidly changing across the timespan of the campaign.

COMBAT CYCLE KNOWLEDGE


For the wargames we were dealing with, we found it impor-
tant to develop a metric that was capable of providing an
ordinal ranking of the wargame cases across all scenarios in
terms of the knowledge possessed by the commander prior to
making a decision at each cycle. Although entropy is a conve-
nient measure of decision uncertainty, making direct
comparisons among the cases examined could be misleading.
We need only recall that maximum entropy is H(U) = ln(n+1)
to realise that the varying number of enemy units in the AO
makes a direct comparison incorrect. In addition, it is incom-
plete because it addresses only the second knowledge
component, namely the knowledge gained from the refined probability distribution. What is needed is a more comprehensive metric incorporating the residual uncertainty in the refined
distribution, and the detection information gained by the sensor
report. Having said this, in later applications of this idea to
quantifying Information Dominance [6] and the benefits of a
network-centric approach to collaboration and task prosecu-
tion [9], the simpler form of Residual Uncertainty (and hence Residual Knowledge, which is a measure of uncertainty removed) has been found to be adequate.
For the wargaming application then, we let K(U,V = vd) repre-
sent the knowledge gained from detecting vd enemy units when
there are U enemy units in the area of interest. Symbolically
we have:

$$K(U, V = v_d) = K(U \mid V = v_d)\, K(V = v_d)$$

where $K(U \mid V = v_d)$ is the knowledge associated with the residual uncertainty in the refined probability distribution given a sensor report of $v_d$ units, and $K(V = v_d)$ is the knowledge gained by detecting/identifying $v_d$ enemy units. If we can ensure that both $K(U \mid V = v_d)$ and $K(V = v_d)$ are confined to the interval [0,1], then we can think of $K(U, V = v_d)$ as a probability.10 As
such, it represents the likelihood that the commander has a
complete picture of the battlefield at the time he makes a deci-
sion. This can be a very powerful statistic when correlated
with the Force Loss Exchange Ratio, enemy attrition, and
friendly survivability as discussed below.

10K(U, V = vd) satisfies the probability axioms (see Stark and Woods [13] p. 9 for instance) and therefore can be thought of as a subjective probability.
1. RESIDUAL KNOWLEDGE: The maximum uncertainty in $P(U \mid V = v_d)$ is ln(n+1). Therefore, maximum certainty can be defined as $\ln(n+1) - H(U \mid V = v_d)$.11 Normalising this quantity provides us with the following definition of residual knowledge:

$$K(U \mid V = v_d) = \frac{\ln(n+1) - H(U \mid V = v_d)}{\ln(n+1)}. \quad (6)$$

Residual knowledge is maximised when residual entropy is 0, and it is minimised when residual entropy is ln(n+1). In general, residual knowledge reflects the amount of uncertainty removed from the refined probability distribution.

11In general, the change in information resulting from detecting $V = v_d$ units is $\Delta I(U \mid V = v_d) = H(U) - H(U \mid V = v_d)$.
2. DETECTION KNOWLEDGE: Given that $v_d$ enemy units were detected, we are now concerned with the additional information this provides concerning the likelihood that there are actually $v_d$ or more enemy units in the area. This is clearly a function of the reliability of the sensors and the command and control system used to process the sensor data it receives. Mathematically, we are interested in the information content for the event $U \ge v_d \mid V = v_d$. That is, the information that will be provided from the detection reports this cycle, or the prior information content of the event $V = v_d$. This is calculated to be:

$$I(U \ge v_d \mid V = v_d) = -\ln[P(U \ge v_d \mid V = v_d)] = -\ln\left[\sum_{i=v_d}^{n} P(U = i \mid V = v_d)\right]$$
If $v_d = 0$, we get no information because $P(U \ge 0) = 1$. However, if $v_d = n$, the information content is maximised at $-\ln[P(U = n \mid V = v_d)]$. This is due to the fact that $P(U \ge u \mid V = v_d)$ decreases monotonically with increasing u and therefore is smallest for u = n. This suggests the following definition for $K(V = v_d)$:

$$K(V = v_d) = \frac{\ln\left[\sum_{i=v_d}^{n} P(U = i \mid V = v_d - 1)\right]}{\ln\left[P(U = n \mid V = v_d - 1)\right]}.$$
(We use $v_d - 1$ to ensure that the denominator never goes to zero.)
The total knowledge gained is then defined to be the product of residual and detection knowledge:

$$K(U, V = v_d) = \frac{\ln(n+1) - H(U \mid V = v_d)}{\ln(n+1)} \cdot \frac{\ln\left[\sum_{i=v_d}^{n} P(U = i \mid V = v_d - 1)\right]}{\ln\left[P(U = n \mid V = v_d - 1)\right]} \quad (7)$$

We can apply equation 7 to the example depicted in Table 4.1. The first 5 columns of Table 4.4 repeat the information in Table 4.1 for convenience. The last two columns contain the entropy and knowledge figures based on the refined distributions at each iteration, and the intermediate values of V.
Figure 4.13 depicts the results graphically.

v   P(U=0|V)   P(U=1|V)   P(U=2|V)   P(U=3|V)   H(U|V)   K(U,V)
-   .2500      .2500      .2500      .2500      1.3863   0
0   .8013      .1603      .0321      .0064      0.6130   0
1   0          .9218      .0736      .0046      0.2918   .1638
2   0          0          .9634      .0366      0.1570   .4434
3   0          0          0          1          0        1

Table 4.4: Total Knowledge: Example 1


Figure 4.13: Knowledge and Entropy for Example 1
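Equations (6) and (7) can be sketched directly on top of the earlier bayes_update fragment. Note that, to match the worked numbers in Table 4.4, the detection term K(V = vd) is evaluated here against the initial flat prior; the code below is a minimal reconstruction under that reading, and small rounding differences from the table should be expected:

```python
from math import log

def entropy(dist):
    """Shannon entropy (natural logs) of a discrete distribution."""
    return -sum(p * log(p) for p in dist if p > 0)

def knowledge(posterior, v_d, n):
    """Total knowledge K(U, V = v_d) per equation (7), with the detection
    term taken against the flat prior, as in the Table 4.4 example."""
    residual = (log(n + 1) - entropy(posterior)) / log(n + 1)   # eq. (6)
    flat = 1.0 / (n + 1)
    detection = log((n + 1 - v_d) * flat) / log(flat)           # K(V = v_d)
    return residual * detection

n, q = 3, 0.8
dist = [1.0 / (n + 1)] * (n + 1)
for v in range(n + 1):
    dist = bayes_update(dist, v, q)   # from the earlier sketch
    print(v, round(entropy(dist), 4), round(knowledge(dist, v, n), 4))
```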

CAMPAIGN KNOWLEDGE
A similar formulation may now be used to calculate campaign
knowledge, given that the FASTHEX games are taken to be
memoryless processes for tactical situation assessment. Con-
sider a campaign consisting of m cycles. At each cycle, t, vdt
enemy units are detected by the sensor. At each cycle, the
number of possible enemy units arrayed against the friendly
forces, n, is likely to be reduced as a result of combat during the
cycle so that nt is the total number of enemy forces that might
be arrayed against the friendly forces in the area of interest.
$H(U_t \mid V = v_{dt})$ then represents the residual uncertainty at each cycle, and the total campaign uncertainty is expressed as:

$$H(U_1 \mid V = v_{d1}, U_2 \mid V = v_{d2}, \ldots, U_m \mid V = v_{dm}) = \sum_{t=1}^{m} H(U_t \mid V = v_{dt}) = -\sum_{t=1}^{m} \sum_{i=0}^{n_t} P(U_t = i \mid V = v_{dt}) \ln P(U_t = i \mid V = v_{dt}).$$


By analogy, residual knowledge for the entire campaign can be defined as:

$$K(U_1 \mid V = v_{d1}, U_2 \mid V = v_{d2}, \ldots, U_m \mid V = v_{dm}) = \frac{\sum_{t=1}^{m}\left[\ln(n_t+1) - H(U_t \mid V = v_{dt})\right]}{\sum_{t=1}^{m} \ln(n_t+1)}.$$

Detection knowledge can be calculated in a similar way. In this case, we are interested in the total information available from having detected/identified $v_{dt}$ enemy units at each of the m cycles, t. To do this, we rely on the fact that the total information available from the occurrence of m independent events is the sum of the information available from the occurrence of each of them. Therefore we get that:

$$I(U_1 \ge v_{d1} \mid V = v_{d1}-1, \ldots, U_m \ge v_{dm} \mid V = v_{dm}-1) = \sum_{t=1}^{m} I(U_t \ge v_{dt} \mid V = v_{dt}-1).$$

Detection knowledge for the entire campaign can then be expressed as the normalised form of this total information:

$$\frac{\sum_{t=1}^{m} \ln\left[\sum_{i=v_{dt}}^{n_t} P(U_t = i \mid V = v_{dt}-1)\right]}{\sum_{t=1}^{m} \ln\left[P(U_t = n_t \mid V = v_{dt}-1)\right]}$$

and total campaign knowledge is:

$$K(U_1, V = v_{d1}, \ldots, U_m, V = v_{dm}) = \frac{\sum_{t=1}^{m}\left[\ln(n_t+1) - H(U_t \mid V = v_{dt})\right]}{\sum_{t=1}^{m} \ln(n_t+1)} \cdot \frac{\sum_{t=1}^{m} \ln\left[\sum_{i=v_{dt}}^{n_t} P(U_t = i \mid V = v_{dt}-1)\right]}{\sum_{t=1}^{m} \ln\left[P(U_t = n_t \mid V = v_{dt}-1)\right]}.$$


AN EXAMPLE
Consider the example summarised in Table 4.5. The campaign
consists of 5 cycles. At each cycle t, the detection probability qt,
the number of units detected vdt and the maximum size nt of the
enemy force are given. The last three columns depict the residual
uncertainty in the refined probability distribution, the informa-
tion available from the detection of vdt enemy units, and the
“probability” that the commander has an accurate picture of
the number of enemy units in his area of interest. The last row
reflects his total campaign knowledge.

t   qt   vdt   nt   K(Ut|V=vdt)   K(V=vdt)   K(Ut,V=vdt)
1   .6   2     5    0.6122        0.2263     0.1385
2   .4   3     5    0.3950        0.3868     0.1528
3   .9   3     4    1.0000        0.5693     0.5693
4   .9   0     3    0.5572        0          0
5   .3   2     3    0.5166        0.5000     0.2583
Total Campaign Knowledge                     0.2092

Table 4.5: Total Knowledge

The detection probability qt is assumed to change as a function of time t to reflect the changing sensor mix. The
maximum size of the enemy force nt reduces over time to
reflect attrition. The fluctuation in numbers detected vdt
(including a complete lack of detections during one time cycle)
leads to a reduced value of overall campaign knowledge. This
is scaled to vary between 0 and 1, with 0 representing com-
plete ignorance, and 1 representing complete knowledge of
the number of enemy units in the commander’s area of inter-
est at every stage of the campaign.
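A minimal aggregation sketch follows. It assumes the flat-prior convention for the detection terms used in the earlier worked example, and back-derives the per-cycle residual entropies from the K(Ut|V=vdt) column of Table 4.5, so it should recover a total close to the quoted 0.2092 (up to rounding):

```python
from math import log

def campaign_knowledge(cycles):
    """cycles: list of (n_t, H_t, vd_t) per campaign cycle.
    Returns total campaign knowledge as the product of the campaign
    residual-knowledge and detection-knowledge ratios, with detection
    terms evaluated against flat priors (an assumption, as noted above)."""
    res_num = sum(log(n + 1) - H for n, H, _ in cycles)
    res_den = sum(log(n + 1) for n, _, _ in cycles)
    det_num = sum(log((n + 1 - vd) / (n + 1)) for n, _, vd in cycles)
    det_den = sum(log(1.0 / (n + 1)) for n, _, _ in cycles)
    return (res_num / res_den) * (det_num / det_den)

# Back out H_t = (1 - K_t) ln(n_t + 1) from the residual column of Table 4.5.
table = [(5, 0.6122, 2), (5, 0.3950, 3), (4, 1.0000, 3),
         (3, 0.5572, 0), (3, 0.5166, 2)]
cycles = [(n, (1 - K) * log(n + 1), vd) for n, K, vd in table]
print(round(campaign_knowledge(cycles), 4))   # ~0.209
```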


THE EFFECTS OF KNOWLEDGE


We stated at the outset that it was desirable to assess the effects
of increased knowledge on the outcome of the campaign. One
way to do this is to compare K(U,V = vd) to the Force Loss
Exchange Ratio (FLER) and the friendly and enemy combat
attrition. Comparisons with the FLER measure how knowl-
edge influences the relative losses in combat. Comparisons
with friendly and enemy attrition measure the degree to which
knowledge enhances the survivability of friendly forces and the
destruction of the enemy. Statistically, we have shown a posi-
tive linear relationship when the FLER or enemy attrition is
compared to knowledge, and a negative linear relationship
when friendly casualties are compared to knowledge [11]. The
results also showed that the approach and structure (the use of
the epitomising approach, and open gaming) adopted in the
wargames was able to produce a set of coherent and quantified
alternatives, which formed the basis of the balance of invest-
ment analysis. As a consequence of this analysis, information
could be weighed in the same scale as weapon effects, and the
benefit clearly demonstrated.
Figure 4.14 shows the relation between campaign level knowl-
edge and attrition of enemy forces, as assessed using the
FASTHEX wargaming experiments, giving a correlation
value of 0.8.
Figure 4.15 shows the effect of an increase in campaign level
knowledge on own force casualties, again as a result of the
FASTHEX wargaming experiments, giving a correlation
value of –0.4.


Figure 4.14: Experimental Assessment of Campaign Level Knowledge and Attrition of Enemy Forces

Figure 4.15: Experimental Assessment of the Effect of Campaign Level Knowledge on Own Force Casualties

As discussed in [11] from the experimental results, there appears to be a point where the knowledge available to the
commander exceeds his capacity to act on it, either to gener-
ate more enemy losses or to prevent further friendly casualties,
and we need to represent this effect.

QUANTIFYING THE BENEFIT OF COLLABORATION ACROSS AN INFORMATION NETWORK
In further exploitation of these ideas, Perry [9] has shown how
this approach can be used as a basis for quantifying the benefit
of collaborating across an information network. An example
of the approach is described below, as used in recent work by
Dstl in the UK in the context of modelling a time-critical
operation. Full details of the general approach and other areas
of application are in [9].
We assume we have a network of command and control nodes that are involved in coordinating a time-critical operation. Each of these nodes has a number of information processing tasks to perform. If $1/\lambda_i$ is the mean time for node i to complete all of its tasks, we assume that this completion time is distributed exponentially (an exponential distribution is used to model the time between events, or how long it takes to complete a task), so that if $f_i(t)$ is the probability density for the time to complete all tasks at node i, then:

$$f_i(t) = \lambda_i e^{-\lambda_i t}.$$
In general, there will be a number of parallel and sequential nodes in the network sustaining the operation. Let this total number be $\tau$. In the simplest case, there is a critical path consisting of $\rho$ nodes, where these $\rho$ nodes are a subset of the $\tau$, as shown in Figure 4.16.

Figure 4.16: The Critical Path

We define the total latency of the path as the sum of the delays (latencies) at each of the nodes, plus the time, defined as $t_m$, required to move a terminal attack system (such as an aircraft) to the terminal attack area. In this sequential case, we thus have that the total expected latency T is the sum of the expected latencies at each node on the critical path, plus the time $t_m$:

$$T = \sum_{i=1}^{\rho} \frac{1}{\lambda_i} + t_m.$$
If there are sequential and parallel nodes on the critical path,
these can be dealt with in the way shown by the example
below:

Figure 4.17: Parallel Nodes on the Critical Path


In this example:

$$T = \max\left(\frac{1}{\lambda_1}, \frac{1}{\lambda_2}\right) + \frac{1}{\lambda_3} + \frac{1}{\lambda_4} + t_m.$$
Returning now to the case of a serial set of nodes that consti-
tute the critical path, for each such node i on the critical path
define the indegree di to be the number of command and control
(C2) network edges having node i as a terminal node.
For each node j in the C2 network, we assume (based on our
earlier discussion of information entropy and knowledge) that
the amount of knowledge available at node j concerning its
ability to process the information and provide quality collabo-
ration is a function of the uncertainty in the distribution of
information processing time fj(t) at node j. Thus the more we
know about node j processes, the better the quality of collabo-
ration with node j.
Let $H_j(t)$ be the Shannon entropy of the function $f_j(t)$. Then $H_j(t)$ is a measure of this (residual) uncertainty, defined in terms of a lack of knowledge. By definition of the Shannon entropy, we have:

$$H_j(t) = -\int_0^{\infty} \ln\left(\lambda_j e^{-\lambda_j t}\right) \lambda_j e^{-\lambda_j t}\, dt.$$

Since the derivative of $(x e^x - e^x)$ is $x e^x$, it follows that

$$H_j(t) = \ln\left(\frac{e}{\lambda_j}\right).$$

If $\lambda_{j\min}$ corresponds to a minimum rate of task completions at node j, then $1/\lambda_{j\min}$ corresponds to a maximum expected time to complete all tasks at node j. In order to provide a normalised value of the knowledge $K_j(t)$ available at node j in terms of the Shannon entropy, we define this as:

$$K_j(t) = \ln\left(\frac{e}{\lambda_{j\min}}\right) - \ln\left(\frac{e}{\lambda_j}\right) = \ln\left(\frac{\lambda_j}{\lambda_{j\min}}\right) \quad \text{if } \lambda_{j\min} \le \lambda_j \le e\,\lambda_{j\min},$$

with $K_j(t) = 0$ if $\lambda_j < \lambda_{j\min}$, and $K_j(t) = 1$ if $\lambda_j > e\,\lambda_{j\min}$.

Suppose now that node i is on the critical path, and node j is another network node connected to node i. Let $c_{ij}$ represent the quality of collaboration obtained by including node j. If this is high, reference [9] assumes $K_j(t)$ will be close to 1. The effective latency at node i is thus assumed to be reduced by the factor $(1 - K_j(t))^{\omega_j}$ due to the effect of this high quality of collaboration. The factor $\omega_j$ is assumed to be 1 if j is one of the nodes directly involved in the time-critical operation (but not on the critical path). It is assumed to be 0.5 if node j is one of the other network nodes, to reflect a lower level of collaboration with these nodes.
It is important to note that the actual latency may not be
reduced by this collaboration, but the ability to use the time
more wisely through collaboration (to fill in missing parts of
the operational picture that are available from other
nodes, etc.) has an impact that can be expressed equivalently in
terms of latency reduction. Such wiser use of time implies a good knowledge of the expected time to complete tasks at the nodes that can provide such information. This is similar to the use of
entropy and knowledge in the FASTHEX wargames, where increased knowledge led to better awareness of the layout of the enemy force, and hence to wiser use of the commander's
own forces. Such wiser use, due to information superiority,
could be quantified in that case by an equivalent (linear)
improvement in the number of enemy units destroyed, or a
reduction in own force casualties.
The balance to be struck is that between such enhanced col-
laboration and the effects of information overload due to
increasing network complexity (which we assess separately as a
function of the number of elements of the network involved in
the task).
The total (equivalent) reduction in latency at node i due to collaboration with the network nodes connected to node i is then given by:

$$c_i = \prod_{j=1}^{d_i} c_{ij} = \prod_{j=1}^{d_i} \left(1 - K_j(t)\right)^{\omega_j}.$$

Thus the total effective latency along the critical path, accounting for the positive effects of collaboration, is given by:

$$T_c = \sum_{i=1}^{\rho} \frac{c_i}{\lambda_i} + t_m = \left\{\sum_{i=1}^{\rho} \frac{1}{\lambda_i} \prod_{j=1}^{d_i} \left(1 - K_j(t)\right)^{\omega_j}\right\} + t_m.$$

We noted in the experimental data from the ASTOR study discussed earlier that we need to represent the effect of information saturation. Reference [9] also includes a complexity penalty to account for the fact that taking account of additional network connectivity leads to such information overload effects. This is the negative effect of collaboration. It leads to an
increase in effective latency on the critical path. Following [9],
we define C to be the total number of network connections
accessed by nodes on the critical path. For each node i on
the critical path, this is the indegree di. Thus:
$$C = \sum_{i=1}^{\rho} d_i.$$

The value of C is then a measure of the complexity of the network. We assume that the complexity effect associated with a particular value of C follows a nonlinear S-shaped curve, as shown below.

Figure 4.18: The Logistics S-Shaped Curve

The equation used to describe this effect is a logistic equation:

$$g(C) = \frac{e^{a+bC}}{1 + e^{a+bC}}.$$

The penalty for information overload is then defined as:

$$\frac{1}{1 - g(C)}.$$


The total effective latency, taking account of both the positive and negative effects of C2 network collaboration, is then:

$$T_{c,C} = \frac{T_c - t_m}{1 - g(C)} + t_m.$$

NETWORK-CENTRIC BENEFIT
This network-enabled approach thus allows us to compute the
distribution of the response time of the system as a function of
the network assumptions. As we increase the collaboration
throughout the network in going from platform-centric to net-
work-centric to futuristic network-centric (to use the RAND
categories [9]), so the positive effects of enhanced collabora-
tion have to balance off against the downside effects of
information overload and increasing network complexity.
Going back to the discussion in Chapter 2 on the Conceptual
Framework of Complexity, we can call this overall assessed
performance of the network the plecticity12 of the network,
since it characterises the combined positive and negative
effects of network complexity and collaboration.

12A term proposed by Perry (RAND Corp.), personal communication.

STOCHASTIC NETWORKS AND NETWORK VULNERABILITY
So far, we have shown how it is possible to calculate the posi-
tive and negative effects of network complexity and
collaboration based on the use of Shannon entropy as a mea-
sure of (lack of) knowledge. We can extend this model
potentially in a number of ways. The length of the critical
path, for example, if the network is adapting over the course of
a series of tasks, will be a stochastic variable. We would expect, from the theory we have considered so far, that the size of the
network (i.e., the number of nodes on the critical path) should
be sampled from a power-law distribution of network size. The
exponent of this power-law is then a characteristic measure of
the ability of the nodes in the network to form and reform
dynamically over time. Similarly, we can consider the indegree
of a node on the network to be a stochastic variable. If the
indegree of a node is sampled from a power-law distribution of
the number of links, then the network is said to be “scale-free”
[20]. This corresponds to a network with a small number of
nodes with very rich connections, and many nodes with sparse
connections. (The Internet is an example of a scale-free net-
work.) Conversely, if the indegree of a node is sampled from a
normal distribution of the number of links, then the network is
of “random” type. Characterising networks in this way allows
us to investigate the vulnerability of such networks to attack, as
discussed in [20].
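As a small illustration of the distinction, the following hedged sketch samples node indegrees from a discrete power law (the scale-free case) and from a normal distribution (the random case); the parameter choices are ours:

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes = 10_000

# Scale-free case: indegrees drawn from a discrete power law (Zipf).
scale_free = rng.zipf(a=2.5, size=n_nodes)

# Random case: indegrees drawn from a normal distribution about the same mean.
random_net = np.clip(rng.normal(loc=scale_free.mean(), scale=1.0,
                                size=n_nodes), 1, None).round()

# The power-law network has a few very highly connected hubs;
# the normal network does not.
print(int(scale_free.max()), int(random_net.max()))
```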

REFERENCES
1 MOFFAT J (2002). Command and Control in the Information Age: Representing its
Impact. The Stationery Office. London, UK.
2 HORNE G E and LEONARDI M (2001). Manoeuver Warfare Science 2001.
Marine Corps Combat Development Command. Quantico, VA, USA.
3 LAUREN M (2002). “Firepower Concentration in Cellular Automaton
Combat Models – An Alternative to Lanchester.” J Opl Res. Soc 53 Issue 6,
pp. 672-679.
4 www.cna.org/isaac (Aug 1, 2003)
5 GOLDENFELD N (1992). Lectures on Phase Transitions and the Renormalisation
Group. Addison-Wesley. MA, USA.
6 DARILEK R, PERRY W et al (2001). Measures of Effectiveness for the
Information Age Army. RAND. Santa Monica, CA, USA.
7 TURCOTTE D L (1997). Fractals and Chaos in Geology and Geophysics. 2nd
Edn. Cambridge University Press. Cambridge, UK.


8 http://fafnir.phyast.pitt.edu/myjava/perc/percTesT.html
(February 1, 2003)
9 PERRY W, BUTTON R W et al (2002). Measures of Effectiveness for the
Information-Age Navy: The Effects of Network-Centric Operations on Combat Outcomes.
RAND. Santa Monica, CA, USA.
10 ROSKE V (2002). “Opening Up Military Analysis: Exploring Beyond the
Boundaries.” Phalanx. USA Military Operations Research Society. 35 No 2.
11 PERRY W and MOFFAT J (1997). “Measuring the Effects of Knowledge in
Military Campaigns.” J Opl Res. Soc 48. pp. 965-972.
12 BOWEN K C (1978). Research Games – An Approach to the Study of Decision
Processes. Taylor and Francis, UK.
13 STARK H and WOODS J W (1986). Probability, Random Processes and
Estimation Theory for Engineers. Prentice Hall, USA.
14 BRYSON A E and HO Y C (1975). Applied Optimal Control. Hemisphere
Publishing, USA.
15 HILLESTAD R (1986). SAGE: An Algorithm for the Allocation of Resources in a
Conflict Model. RAND working draft.
16 BERKOVITZ D and DRESHER M (1959). A Game-theory Analysis of Tactical
Air War. Operations Research, 17. pp. 599-620.
17 BLAHUT R E (1987). Principles and Practice of Information Theory. Addison-
Wesley. MA, USA.
18 ZUREK W H ed. (1990). Complexity, Entropy and the Physics of Information. Vol
III, Santa Fe Institute Studies in the Sciences of Complexity Series. Addison
Wesley. USA.
19 KULLBACK S (1968). Information Theory and Statistics. Dover. New York,
USA.
20 COHEN D (2002). “All the World’s a Net.” New Scientist. 13 April 2002.
pp. 24-29.

CHAPTER 5

AN EXTENDED EXAMPLE OF THE DYNAMICS OF LOCAL COLLABORATION AND CLUSTERING, AND SOME FINAL THOUGHTS1

1The contribution of Dr. Susan Witty, Dstl, to this chapter is gratefully acknowledged.

Towards the end of Chapter 4, we discussed the way in which a particular network could
be analysed using what we called the plecticity of a
network, which includes both the positive effects
of collaboration and the downside effects of infor-
mation overload. This assumes a network of a
particular size and configuration, and thus raises the question of what sizes and configurations of such networks of collaboration are likely to emerge. In some models of natural
systems, we have already seen that the networks of interaction
that form (i.e., the dynamics of cluster formation) can be pre-
dicted ahead of time to have a particular form. For example, the
Bak-Sneppen model of the coevolution of the species within an
evolving ecosystem (described in Chapter 1) gives rise to clusters
of coevolution that tend to a power law distribution of cluster
size. Clusters of burning trees (forest fires) also show such power
law effects, which turn out to be similar to the distribution of
casualties in war (Chapters 2 and 3). Clustering of force units in
the ISAAC “distillation” model of manoeuvre warfare produces
fractal clustering (Chapter 4).
We wish to finish with an extended example analysis of the
ISAAC distillation; recall that this is a simple agent-based model
of land warfare, incorporating small rule sets that govern agent
decisionmaking, movement, and engagement. More detail is
available in [1]. Figure 5.1 is a screenshot of the start of a typical
ISAAC simulation run.

Figure 5.1: Screenshot of the Start of a Typical ISAAC Simulation Run



ISAAC is based on cellular automata, which use simple local rules to describe interactions between units. Different scenar-
ios can be set up by changing the parameters of these rules.
The initial laydown of forces is carried out stochastically, by
the model within user-defined limits, and so different replica-
tions of the same basic scenario are possible.
The particular ISAAC scenario we use in this chapter was sup-
plied to us by Dr. Gary Horne of the U.S. Marine Corps
Warfighting Lab and has three phases. In the first phase, the
Red forces move to meet the Blue; the second phase is the
engagement between Red and Blue forces. It is in the third
phase that either the Red forces take control of the Blue flag (in
the top right corner) or the Blue forces retain control of their
flag. This scenario is interesting as in all stochastic replications
but one (replication 40), the Red forces are successful in achiev-
ing their goal of taking control of the Blue flag. The analysis
that follows explores the clustering and “swarming” of the
agents and the similarities and differences in the replications.

CLUSTERING AND SWARMING


In order to analyse the swarming dynamics of cluster forma-
tion and dissolution in ISAAC, we need to consider firstly
what this means. There are two ways to define a cluster of
agents. The first, and most usual, is to define neighbouring
agents only by those that are north, south, east, or west adja-
cent to the agent in question, known as nearest neighbour
clustering. The second definition (the most intuitive, although less used) is to include all eight neighbours of the central agent as part of a cluster, as shown in Figure 5.2; this is known as next nearest neighbour clustering.


Figure 5.2: Nearest and Next Nearest Neighbour Clustering

For a single timestep in any run of a cellular automata-based model, the number and size of clusters of agents can be determined, for the nearest neighbour case, using a simple algorithm: the Hoshen-Kopelman algorithm. This algorithm can be modified for use with eight neighbours. For details of the algorithm, see references [2,3].
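For readers who want to experiment, the following is a minimal Python sketch of cluster counting on a boolean occupancy grid. It uses a two-pass union-find labelling in the spirit of Hoshen-Kopelman (this compact form trades the algorithm's single-pass economy for clarity), with a flag to switch between the nearest and next nearest neighbour definitions:

```python
import numpy as np

def cluster_sizes(grid, next_nearest=False):
    """Return the sizes of agent clusters in a 2-D boolean occupancy grid."""
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path compression
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    offsets = [(-1, 0), (0, -1)]            # nearest neighbours already scanned
    if next_nearest:
        offsets += [(-1, -1), (-1, 1)]      # add the diagonal neighbours

    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if not grid[r, c]:
                continue
            parent.setdefault((r, c), (r, c))
            for dr, dc in offsets:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and grid[rr, cc]:
                    union((r, c), (rr, cc))

    sizes = {}
    for cell in parent:
        root = find(cell)
        sizes[root] = sizes.get(root, 0) + 1
    return sorted(sizes.values(), reverse=True)

grid = np.random.default_rng(0).random((50, 50)) < 0.4   # toy agent laydown
print(cluster_sizes(grid)[:5])   # the five largest cluster sizes
```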

CLUSTER DISTRIBUTION
Once the cluster numbers and sizes can be determined, there
are a number of ways to analyse the data. The first that we
look at is the size of the largest cluster. This gives an indication
of the ability of the agents to cluster or the amount of dispersal
of the agents. For example, if the largest cluster size is near to
the total number of agents, we know that that is the only clus-
ter. However, if the largest cluster is small, then we know that
the agents are dispersed in many small clusters.
The following plots are of the largest cluster size against the
timestep for several different replications with different ran-
dom seeds for the same basic run of the ISAAC model. The
clustering algorithm used is that of nearest neighbours and dif-
ferent plots are graphed for Red and Blue agents. The agents
can be ordered by state: alive, injured, or dead. The plots that
follow are for only those agents that are alive. For each
timestep, we plot the largest cluster size for Blue and the larg-
est cluster size for Red. For the first iteration of our example run of ISAAC, Figure 5.3 shows the evolution of the largest cluster size for Red as a function of simulated time. Figure 5.4
shows the same thing for Blue.

Figure 5.3: Largest Cluster Size as a Function of Simulated Time (First Iteration, Red Agents)

Figure 5.4: Largest Cluster Size as a Function of Simulated Time (First Iteration, Blue Agents)

In these two plots (Figures 5.3 and 5.4), it is possible to see a smoothly changing pattern in the largest cluster size for the
Red forces, but not for the Blue. Looking at the clustering of Red, it is clear that the dynamic behaviour can be split into distinct areas in time by the rate of change of cluster size, that is, the slope of the plot. These distinct areas in the slope of the plot of largest Red cluster size correspond to the three phases of the ISAAC run.
In further replications of the ISAAC run, a similar pattern emerges, with Red succeeding in his objective each time. However,
in the 40th replication, Red fails to secure the Blue flag.
Dr. Horne estimates that the Red force is successful in excess
of 100 replications, with only this one failure. Figures 5.5 and
5.6 show the evolution of the largest cluster size for this
replication.
We can see that there is now no clear evidence of a third phase
of operation in the plot of the largest Red cluster size. In fact,
the plot of the largest Blue cluster size is now more structured
and shows evidence of a third phase of the force’s operation.
We suggest that Red’s failure is due to the increased clustering
ability of the Blue forces, thereby reaching a greater largest
cluster size than in other replications. Such behaviour is con-
sistent with the mathematical metamodel of ISAAC discussed
in Chapter 4. This metamodel indicates that agile clustering
and reclustering should lead to better local force ratios and
hence improved ability to cause attrition to the enemy (locally)
and to thus move freely.
To gain some additional insight, let us now consider the
largest cluster size at each timestep of the simulation, and plot
this as a frequency distribution of cluster size. We have done
this for a number of replications. Figure 5.7 shows the
frequency distribution of the largest cluster size for Red
agents, for typical and exceptional replications.
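
The bookkeeping behind these plots is straightforward; the following minimal sketch (ours, in the same illustrative style as the fragment above) counts how often each value of the largest cluster size occurs across the timesteps of a replication:

    from collections import Counter

    def size_frequencies(largest_per_step):
        """largest_per_step: largest cluster size at each timestep."""
        return sorted(Counter(largest_per_step).items())

    # e.g. size_frequencies([3, 3, 5, 7, 7, 7]) -> [(3, 2), (5, 1), (7, 3)]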


Figure 5.5: Largest Cluster Size as a Function of Time (40th Iteration, Red)

Figure 5.6: Largest Cluster Size as a Function of Time (40th Iteration, Blue)

Figure 5.7: Frequency Distribution of the Largest Cluster Size for Red Agents

We can see from this that Red is able to generate a wide
spectrum of cluster sizes. Figure 5.8 shows the same plot for
the Blue agents, with the number of each replication shown
on the plot.

Figure 5.8: Frequency Distribution of the Largest Cluster Size for Blue Agents

We can see from Figure 5.8 that the spread of cluster sizes for
Blue is in general much smaller. For the 40th replication,
however, Blue is able to generate a wider spread of clusters,
and thus succeeds.


THE DISTRIBUTION OF CLUSTER SIZE


Let us now move on from the largest cluster size and look at all
the clusters formed over time in the simulation. From our
mathematical metamodel of ISAAC discussed in Chapter 4,
and from the general emergent behaviour of the natural
systems we have discussed in this book, we anticipate that the
distribution of cluster size should approximate a power law.
Thus, on a log-log scale, the distribution of cluster size should
be a straight line, with end effects where the assumptions
break down.
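
A simple numerical test of this expectation is sketched below (our illustration, assuming numpy is available; the cutoffs lo and hi, which exclude the end effects, are illustrative values rather than values from the text):

    import numpy as np

    def power_law_slope(cluster_sizes, lo=2, hi=50):
        """Fit a straight line to the intermediate regime of the log-log plot."""
        sizes, counts = np.unique(cluster_sizes, return_counts=True)
        keep = (sizes >= lo) & (sizes <= hi)  # drop the end effects
        slope, intercept = np.polyfit(np.log(sizes[keep]),
                                      np.log(counts[keep]), 1)
        return slope  # the power-law exponent is -slope

Here cluster_sizes is the pooled list of all cluster sizes observed over the run; a near-constant slope across replications would support the power law.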
First, let us look at just one replication of the simulation.
Figure 5.9 shows the distribution of cluster size for Red agents
for the 2nd replication of ISAAC, plotted on a log-log scale.

Figure 5.9: Distribution of Cluster Sizes (2nd Replication, Red Agents)

We can see that in the intermediate regime the plot forms a
straight line, confirming the theoretical expectation. Figure
5.10 shows a similar plot with the other replications
superimposed.


Figure 5.10: Distribution of Cluster Size for Red Agents

Finally, from theory and from the analysis in Chapter 3, we
expect that the time series of casualties produced by a model
such as ISAAC should show evidence of fractal clustering in
time. In precise terms, this implies that the power spectrum of
the casualty time series should follow a power law. Using a
related distillation model called MANA [4], evidence of this
effect has been found by Lauren [5], as we have already
discussed in Chapter 3. This is an area that we intend to
investigate further in the context of building metamodelling
equations.
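
As a sketch of how such a test might be carried out (ours, again assuming numpy; the function name is illustrative), the power spectrum of the casualty time series can be estimated with a discrete Fourier transform and its log-log slope fitted as before:

    import numpy as np

    def spectral_exponent(casualties_per_step):
        """Estimate beta in S(f) ~ f^(-beta) for a casualty time series."""
        x = np.asarray(casualties_per_step, dtype=float)
        x = x - x.mean()                       # remove the zero-frequency offset
        spectrum = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x))
        keep = freqs > 0                       # discard the DC term
        slope, _ = np.polyfit(np.log(freqs[keep]),
                              np.log(spectrum[keep] + 1e-12), 1)
        return -slope

A non-trivial exponent would indicate the fractal clustering in time reported for MANA by Lauren [5].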

FINAL THOUGHTS
We started by considering what we can learn from natural
systems: an ecosystem in which species coevolve locally; a
fluid forming an interface when it is pinned; the effect of forest
fires. All of these show regularities and emergent behaviours
of the whole system that can be captured and deduced using
mathematical models. We have also shown how the same
ideas of local coevolution within such “open” systems are very
relevant to thinking about the consequences of a network-centric
form of warfare, where units coevolve (self-synchronise)
across an information grid. By exploiting this linkage, it is
possible to build quantitative models that help us to understand
the likely emergent behaviour of such coevolving networks of
force interaction.
This is work in progress that we hope will contribute to the
new science of understanding, analysing, and modelling the
effects of Information Age warfare. In doing so, we aim, as
remarked in the preface to a previous contribution,2 to “gain a
deeper understanding not only of conflict, but also of the
avoidance of conflict, which is the ultimate aim of the
political/military art.”

REFERENCES
1 ILACHINSKI A (2000). “Irreducible Semi-Autonomous Adaptive
Combat (ISAAC): An Artificial Life Approach to Land Warfare.” Military
Operations Research. Vol 5 No 3. pp. 29-46.
2 HOSHEN J and KOPELMAN R (1976). “Percolation and cluster
distribution. I. Cluster multiple labeling technique and critical concentration
algorithm.” Phys. Rev. B14, p. 3438.
3 http://www.splorg.org/people/tobin/projects/hoshenkopelman/hoshenkopelman.html (Aug 1, 2003)
4 LAUREN M K and STEPHEN R T (2002). “Map-Aware Non-Uniform
Automata (MANA)–A New Zealand Approach to Scenario Modelling.”
Journal of Battlefield Technology. Vol 5 No 1. pp. 27-31.
5 LAUREN M K and STEPHEN R T (2002). “Fractals and Combat
Modelling: Using MANA to Explore the Role of Entropy in Complexity
Science.” Paper prepared for Fractals. Defence Technology Agency.
Auckland, New Zealand.

2Moffat J. Command and Control in the Information Age; Representing its Impact.
The Stationery Office. London, UK, 2002.

APPENDIX

OPTIMAL CONTROL
WITH A UNIQUE
CONTROL SOLUTION

In this Appendix, we investigate the case of a
unique optimal control solution to the problem
of system control, and show that this unique
solution takes the form of bang-bang control
when the system is linear.
As in Chapter 1, we assume that our system can
be described by the functional relationship:

$$\frac{dX_i}{dt} = F_i\left(X_1(t), \ldots, X_n(t);\ \lambda_1(t), \ldots, \lambda_m(t)\right), \quad (i = 1, \ldots, n)$$

where the $F_i$ are the rate laws and the $\lambda_i(t)$ are the
control variables. In matrix/vector notation, we write this as:

$$\dot{X}(t) = F(X(t), \lambda(t))$$

with initial conditions $X_i(t_0) = X_i^0$.


We assume fairly weak conditions on the continuity of $F$, $X$,
and $\lambda$, sufficient to make the equations “well behaved.”
We consider here the analysis of this relationship as time varies
over a fixed time interval $[t_0, t_1]$. At time $t_1$, the state variables
will have values $X_i(t_1)$, and the objective is to maximise or
minimise a linear combination of these endstate values. The
problem can thus be written as:

$$\text{Optimise } \sum_{i=1}^{n} c_i X_i(t_1)$$

(where the $c_i$ are constant coefficients or “weights”) subject to
the constraints:

$$\dot{X}(t) = F(X(t), \lambda(t)), \qquad X_i(t_0) = X_i^0 \quad \forall i.$$
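
As a concrete illustration (ours, not drawn from the text), a simple two-sided aimed-fire attrition model fits this template, with the controls interpreted as fire-allocation fractions $\lambda_1(t), \lambda_2(t) \in [0, 1]$:

$$\dot{X}_1 = -a\,\lambda_2(t)\,X_2(t), \qquad \dot{X}_2 = -b\,\lambda_1(t)\,X_1(t), \qquad \text{Optimise } X_1(t_1) - X_2(t_1),$$

so that here $n = m = 2$, $c_1 = 1$, and $c_2 = -1$.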
We now define a process [1] that yields a necessary condition
for a vector of control variables $\lambda(t)$ to optimise the objective
function. In other words, any control vector that gives rise to
an optimal value of the objective function must satisfy this
condition. Although this does not guarantee that a solution $\lambda(t)$
satisfying the condition is optimal, other information (such as
the uniqueness of such a solution) can be used in particular
cases to prove that $\lambda(t)$ is indeed an optimal control vector.
The first step in this procedure is to introduce a set of “dual”
variables:

$$\psi_1(t), \ldots, \psi_n(t)$$

defined by the relationships:

$$\dot{\psi}_i(t) = -\sum_{j=1}^{n} \psi_j(t)\, \frac{\partial F_j(X, \lambda)}{\partial X_i}$$

with final conditions:

$$\psi_i(t_1) = -c_i \quad \forall i$$

where the values $c_i$ are the same coefficients that appear in
the objective function.
A Hamiltonian function $H$ is now defined by:

$$H(\psi, X, \lambda) = \langle \psi, \dot{X} \rangle$$

where $\langle \cdot\,,\cdot \rangle$ denotes the inner product of the two vectors
$\psi$ and $\dot{X}$. Thus:

$$H(\psi, X, \lambda) = \sum_{i=1}^{n} \psi_i(t)\, \dot{X}_i(t) = \sum_{i=1}^{n} \psi_i(t)\, F_i(X, \lambda).$$

From the definition of $H$, we have the dual relationships:

$$\dot{X}_i = \frac{\partial H}{\partial \psi_i} \quad \forall i, \qquad \dot{\psi}_i = -\frac{\partial H}{\partial X_i} \quad \forall i.$$

The corresponding “boundary conditions” are:

$$X_i(t_0) = X_i^0 \quad \forall i, \qquad \psi_i(t_1) = -c_i \quad \forall i.$$

PONTRYAGIN’S MAXIMUM PRINCIPLE


Using the notation we have now developed, Pontryagin’s
Maximum Principle [2] states that if $\lambda^*(t)$ is a control vector
that maximises (resp. minimises) the objective function:

$$\sum_{i=1}^{n} c_i X_i(t_1)$$

then the Hamiltonian $H(\psi, X, \lambda)$ achieves a minimum (resp.
maximum) at the point $\lambda^*(t)$ for any value of $X$ or $\psi$.
It is worth noting here that if the set $U$ of admissible control
vectors is topologically compact, then the continuous function
$H$ will attain its minimum or maximum value on $U$. The
control vectors $\lambda$ that minimise or maximise the Hamiltonian
$H$ are called extremal controls. If we denote this subset of $U$ by
$U^*$, then we know from Pontryagin’s Maximum Principle
that if $\lambda^*$ is an optimal control vector, then $\lambda^* \in U^*$. Thus, if
we know that:
1. An optimal control vector exists; and
2. There is only one extremal control (i.e., $U^*$ is a single
point)
then the single element of $U^*$ must be the optimal control
vector. Whether (1) and (2) apply depends on the particular
problem under study.

DETERMINING THE EXTREMAL CONTROLS


The process for determining these extremal controls is as
follows (for definiteness, we assume that we are maximising
the objective function, and hence minimising the Hamiltonian):


1. Compute the form of the Hamiltonian function $H(\psi, X, \lambda)$.
2. Compute the values of $\lambda$ that minimise $H(\psi, X, \lambda)$ for
$\lambda \in U$. These are the extremal controls. They are expressed
as functions of $X$ and $\psi$:
$$\lambda = \lambda(X, \psi).$$
3. Substitute this extremal control $\lambda$ into the relations:
$$\dot{X}_i = \frac{\partial H}{\partial \psi_i}, \qquad \dot{\psi}_i = -\frac{\partial H}{\partial X_i}$$
in order to solve for the extremal system trajectory $X^*$ and
the extremal dual function $\psi^*$.
4. Substitute these values into the expression for the extremal
control:
$$\lambda = \lambda(X^*, \psi^*)$$
to give an explicit formulation of this extremal control vector.

Knowledge of the extremal trajectory $X^*$ allows the objective
function:

$$\sum_{i=1}^{n} c_i X_i^*(t_1)$$

to be evaluated.
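
To fix ideas, here is a worked illustration of these four steps (ours, not from the text): a single state variable with $\dot{x} = \lambda$, admissible controls $U = [-1, 1]$, and the objective of maximising $x(t_1)$, so that $c_1 = 1$.

$$\text{Step 1:}\quad H(\psi, x, \lambda) = \psi \dot{x} = \psi \lambda.$$
$$\text{Dual variable:}\quad \dot{\psi} = -\psi\, \frac{\partial F}{\partial x} = 0, \quad \psi(t_1) = -c_1 = -1 \;\Rightarrow\; \psi(t) \equiv -1.$$
$$\text{Step 2:}\quad \min_{\lambda \in [-1, 1]} (-\lambda) \;\Rightarrow\; \lambda^*(t) \equiv 1.$$
$$\text{Steps 3 and 4:}\quad x^*(t) = x^0 + (t - t_0), \qquad \text{objective value } x^0 + (t_1 - t_0).$$

Since this extremal control is unique, it is, by the argument above, the optimal control.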


LINEAR MODELS
A linear system model is defined as having a relationship of the
form:

$$\dot{X}(t) = A(t) X(t) + B(t) \lambda(t) + g(t)$$

in matrix and vector representation, with initial conditions
$X_i(t_0) = X_i^0 \ \forall i$.
When the system behaviour is of this form, it is possible to
characterise the nature of the optimal controls under fairly
general conditions. This is particularly the case if the objective
function itself is linear, i.e. of the form:

$$\int_{t_0}^{t_1} \left( \langle S, X \rangle + \langle W, \lambda \rangle \right) dt$$

where, as before, $\langle \cdot\,,\cdot \rangle$ denotes the inner product of two
vectors, and $S$ and $W$ are time-dependent vectors of known value.
Assume then that the objective function is of this form, and
that the system model is linear in the way we have described.
Without loss of generality, we can set $g(t) = 0$ and write the
system behaviour model in the form:

$$\dot{X} = AX + B\lambda$$

where $X$ is the vector of state variables (e.g., force levels) and $\lambda$
is the vector of control variables. Make the transformation:

$$X_{n+1}(t) = \int_{t_0}^{t} \left( \langle S, X \rangle + \langle W, \lambda \rangle \right) dt.$$


The objective function now becomes:

$$\text{Optimise } X_{n+1}(t_1)$$

and the relationship $\dot{X}_{n+1}(t) = \langle S(t), X(t) \rangle + \langle W(t), \lambda(t) \rangle$ is
added to the set of equations describing the system behaviour.
(This can be done since the above equation is linear, and so has
the same form as the others.)
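
Written out in block form (our rearrangement of the equations above), the augmented system is:

$$\frac{d}{dt}\begin{pmatrix} X \\ X_{n+1} \end{pmatrix} = \begin{pmatrix} A & 0 \\ S^{T} & 0 \end{pmatrix} \begin{pmatrix} X \\ X_{n+1} \end{pmatrix} + \begin{pmatrix} B \\ W^{T} \end{pmatrix} \lambda$$

so the enlarged model is again of the linear form $\dot{X} = AX + B\lambda$.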
Consider now the form of the Hamiltonian for such a linear
system. We have:

$$H(\psi, X, \lambda) = \langle \psi, \dot{X} \rangle = \langle \psi, AX + B\lambda \rangle = \langle \psi, AX \rangle + \langle \psi, B\lambda \rangle.$$

Since we are interested in the extremal controls $\lambda$ that
maximise or minimise $H$, only the second term is of interest,
the first not being a function of $\lambda$.
Let us look at this second term in more detail. We have:

$$\langle \psi, B\lambda \rangle = \sum_i \psi_i \sum_j B_{ij} \lambda_j = \sum_j \left( \sum_i \psi_i B_{ij} \right) \lambda_j.$$

Let $\varphi_j = \sum_i \psi_i B_{ij}$. Then:

$$\langle \psi, B\lambda \rangle = \sum_j \varphi_j \lambda_j = \langle \varphi, \lambda \rangle.$$

Let us assume that the objective function is to be maximised.
By Pontryagin’s Maximum Principle [2], we thus wish to
consider control vectors $\lambda$ that minimise the Hamiltonian $H$.
This is then equivalent, as we have seen, to minimising the
expression $\sum_j \varphi_j(t) \lambda_j(t)$.


We can define such an extremal control vector $\lambda^*$ as follows
(provided that the set $U$ of all possible control vectors is
topologically compact):

$$\text{If } \varphi_j(t) \leq 0, \text{ let } \lambda_j^*(t) = \max\{\lambda_j(t),\ \lambda \in U\},$$
$$\text{and if } \varphi_j(t) > 0, \text{ let } \lambda_j^*(t) = \min\{\lambda_j(t),\ \lambda \in U\}.$$

These are well defined, since a continuous function attains its
maximum and minimum on a compact set. For any $t$ in the
interval $[t_0, t_1]$, we then have:

$$\sum_j \varphi_j(t) \lambda_j^*(t) \leq \sum_j \varphi_j(t) \lambda_j(t)$$

for any control vector $\lambda$ in the admissible set $U$ of control
vectors.

UNIQUENESS OF THE EXTREMAL CONTROL FOR A LINEAR SYSTEM
If $V$ is any other extremal control vector, then since it
minimises the Hamiltonian $H$, it must satisfy:

$$\sum_j \varphi_j(t) V_j(t) \leq \sum_j \varphi_j(t) \lambda_j(t)$$

for any $\lambda$ in the admissible set $U$ of control vectors.
However, if $\varphi_j(t) > 0$, then it is clear that:

$$V_j(t) = \min\{\lambda_j(t),\ \lambda \in U\}.$$

Otherwise it would be possible to define a vector giving a
smaller value of the Hamiltonian, contradicting the extremal
nature of $V$; a symmetric argument applies when $\varphi_j(t) \leq 0$. It
follows that every extremal vector must be of the form $\lambda^*$.


At the points where $\varphi_j(t)$ changes sign, $\lambda_j^*(t)$ switches from
an extreme minimum value to an extreme maximum, or vice
versa. Such a form of control is known as bang-bang, since the
value “bangs” from one extreme possible value to another and
never assumes any intermediate value. What we have shown
is that for a linear system with a linear objective function,
every extremal control (including therefore the optimal
control) must be of bang-bang form. In this sense, the optimal
control vector always lies on the boundary of the admissible
set $U$.
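
The construction above translates directly into a numerical recipe. The following is a minimal sketch (ours, assuming numpy; the names and the simple Euler integrator are illustrative): integrate the dual equation $\dot{\psi} = -A^T \psi$ backwards from $\psi(t_1) = -c$, form the switching function $\varphi = B^T \psi$, and read off the bang-bang control from its sign.

    import numpy as np

    def bang_bang_control(A, B, c, t0, t1, u_min, u_max, steps=1000):
        """Extremal (bang-bang) control on a uniform time grid, for maximisation."""
        dt = (t1 - t0) / steps
        psi = -np.asarray(c, dtype=float)          # final condition psi(t1) = -c
        controls = []
        for _ in range(steps):                     # step backwards from t1 to t0
            phi = B.T @ psi                        # switching function phi_j(t)
            u = np.where(phi <= 0, u_max, u_min)   # the bang-bang rule above
            controls.append(u)
            psi = psi + dt * (A.T @ psi)           # reverse-time Euler step for
                                                   # dpsi/dt = -A^T psi
        return controls[::-1]                      # reorder to run forward from t0

Note that for a linear system the dual equation does not involve $X$ or $\lambda$, so the control can be computed without integrating the state; refining the grid near the sign changes of $\varphi$ locates the switching times.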
If it can be shown that the $\varphi_j$ are unique, then the above
construction yields a unique control vector, which must then
be the optimal control.1 Now, we have that:

$$\varphi_j = \sum_i \psi_i B_{ij}.$$

Thus the uniqueness of the $\varphi_j$ depends on the uniqueness of
the $\psi_i\ (1 \leq i \leq n)$.
We have that:

$$\dot{\psi}_i = -\sum_j \psi_j \frac{\partial \dot{X}_j}{\partial X_i} = -\sum_j \psi_j \frac{\partial}{\partial X_i}\left(\sum_l A_{jl} X_l + \sum_m B_{jm} \lambda_m\right) = -\sum_j \psi_j A_{ji}$$

with final conditions:

$$\psi(t_1) = -(c_1, \ldots, c_{n+1}) = -(0, \ldots, 0, 1).$$

1It can be shown that for this type of system, an optimal control must exist [1] [2].


Since the matrix $A$ is known, this equation has a unique
solution [1], and thus the vector $\psi$ is unique. Hence, for such a
linear system, the optimal control that optimises the objective
function is precisely defined by the bang-bang control
function $\lambda^*$.

REFERENCES
1 CONNORS M M and TEICHROEW D (1967). Optimal Control of Dynamic
Operations Research Models. International Textbook Co. Pennsylvania, USA.
2 ROZONOER L T (1959). L.S. Pontryagin’s Maximum Function Principle in its
Application to the Theory of Optimum Systems–I, II, III. Avtomatika i
Telemekhanika 20. p. 1320 et seq. Translated in the journal Automation and
Remote Control (1959). 20. p. 1288 et seq.

ABOUT THE AUTHOR

Professor James Moffat is a Senior Fellow of the Defence
Science and Technology Laboratory (Dstl), UK, a Fellow
of the Operational Research Society, a Fellow of the Institute
of Mathematics and its Applications, and a visiting Professor at
Cranfield University, UK. He was awarded the President’s
Medal of the Operational Research Society in the year 2000.
He holds a first class honours degree and a Ph.D. in Mathematics,
and was awarded the Napier Medal in Mathematics by
Edinburgh University. He has worked for the past 20 years or
so on defence-related operational analysis problems and
aerospace technology research. His current research interest is in
building analysis tools and models that capture the key effects
of human decisionmaking and the other aspects of C4ISR.
Catalog of CCRP Publications
(* denotes a title no longer available in print)

Coalition Command and Control*
(Maurer, 1994)
Peace operations differ in significant ways from traditional combat missions. As a result of these unique characteristics, command arrangements become far more complex. The stress on command and control arrangements and systems is further exacerbated by the mission's increased political sensitivity.

The Mesh and the Net
(Libicki, 1994)
Considers the continuous revolution in information technology as it can be applied to warfare in terms of capturing more information (mesh) and how people and their machines can be connected (net).

Command Arrangements for Peace Operations
(Alberts & Hayes, 1995)
By almost any measure, the U.S. experience shows that traditional C2 concepts, approaches, and doctrine are not particularly well suited for peace operations. This book (1) explores the reasons for this, (2) examines alternative command arrangement approaches, and (3) describes the attributes of effective command arrangements.

Standards: The Rough Road to the Common Byte
(Libicki, 1995)
The inability of computers to "talk" to one another is a major problem, especially for today's high technology military forces. This study by the Center for Advanced Command Concepts and Technology looks at the growing but confusing body of information technology standards. Among other problems, it discovers a persistent divergence between the perspectives of the commercial user and those of the government.

What Is Information Warfare?*
(Libicki, 1995)
Is Information Warfare a nascent, perhaps embryonic art, or simply the newest version of a time-honored feature of warfare? Is it a new form of conflict that owes its existence to the burgeoning global information infrastructure, or an old one whose origin lies in the wetware of the human brain but has been given new life by the Information Age? Is it a unified field or opportunistic assemblage?

Operations Other Than War*
(Alberts & Hayes, 1995)
This report documents the fourth in a series of workshops and roundtables organized by the INSS Center for Advanced Concepts and Technology (ACT). The workshop sought insights into the process of determining what technologies are required for OOTW. The group also examined the complexities of introducing relevant technologies and discussed general and specific OOTW technologies and devices.

Dominant Battlespace Knowledge*
(Johnson & Libicki, 1996)
The papers collected here address the most critical aspects of that problem—to wit: If the United States develops the means to acquire dominant battlespace knowledge, how might that affect the way it goes to war, the circumstances under which force can and will be used, the purposes for its employment, and the resulting alterations of the global geomilitary environment?

Interagency and Political-Military Dimensions of Peace Operations: Haiti - A Case Study
(Hayes & Wheatley, 1996)
This report documents the fifth in a series of workshops and roundtables organized by the INSS Center for Advanced Concepts and Technology (ACT). Widely regarded as an operation that "went right," Haiti offered an opportunity to explore interagency relations in an operation close to home that had high visibility and a greater degree of interagency civilian-military coordination and planning than the other operations examined to date.

The Unintended Consequences of the Information Age*
(Alberts, 1996)
The purpose of this analysis is to identify a strategy for introducing and using Information Age technologies that accomplishes two things: first, the identification and avoidance of adverse unintended consequences associated with the introduction and utilization of information technologies; and second, the ability to recognize and capitalize on unexpected opportunities.

Joint Training for Information Managers*
(Maxwell, 1996)
This book proposes new ideas about joint training for information managers over Command, Control, Communications, Computers, and Intelligence (C4I) tactical and strategic levels. It suggests a substantially new way to approach the training of future communicators, grounding its argument in the realities of the fast-moving C4I technology.

Defensive Information Warfare*
(Alberts, 1996)
This overview of defensive information warfare is the result of an effort, undertaken at the request of the Deputy Secretary of Defense, to provide background material to participants in a series of interagency meetings to explore the nature of the problem and to identify areas of potential collaboration.

Command, Control, and the Common Defense
(Allard, 1996)
The author provides an unparalleled basis for assessing where we are and where we must go if we are to solve the joint and combined command and control challenges facing the U.S. military as it transitions into the 21st century.

Shock & Awe: Achieving Rapid Dominance*
(Ullman & Wade, 1996)
The purpose of this book is to explore alternative concepts for structuring mission capability packages around which future U.S. military forces might be configured.

Information Age Anthology: Volume I*
(Alberts & Papp, 1997)
In this first volume, we will examine some of the broader issues of the Information Age: what the Information Age is; how it affects commerce, business, and service; what it means for the government and the military; and how it affects international actors and the international system.

Complexity, Global Politics, and National Security*
(Alberts & Czerwinski, 1997)
The charge given by the President of the National Defense University and RAND leadership was threefold: (1) push the envelope; (2) emphasize the policy and strategic dimensions of national defense with the implications for Complexity Theory; and (3) get the best talent available in academe.

Target Bosnia: Integrating Information Activities in Peace Operations*
(Siegel, 1998)
This book examines the place of PI and PSYOP in peace operations through the prism of NATO operations in Bosnia-Herzegovina.

Coping with the Bounds
(Czerwinski, 1998)
The theme of this work is that conventional, or linear, analysis alone is not sufficient to cope with today's and tomorrow's problems, just as it was not capable of solving yesterday's. Its aim is to convince us to augment our efforts with nonlinear insights, and its hope is to provide a basic understanding of what that involves.

Information Warfare and International Law*
(Greenberg, Goodman, & Soo Hoo, 1998)
The authors, members of the Project on Information Technology and International Security at Stanford University's Center for International Security and Arms Control, have surfaced and explored some profound issues that will shape the legal context within which information warfare may be waged and national information power exerted in the coming years.

Lessons From Bosnia: The IFOR Experience*
(Wentz, 1998)
This book tells the story of the challenges faced and innovative actions taken by NATO and U.S. personnel to ensure that IFOR and Operation Joint Endeavor were military successes. A coherent C4ISR lessons learned story has been pieced together from firsthand experiences, interviews of key personnel, focused research, and analysis of lessons learned reports provided to the National Defense University team.

Doing Windows: Non-Traditional Military Responses to Complex Emergencies
(Hayes & Sands, 1999)
This book provides the final results of a project sponsored by the Joint Warfare Analysis Center. Our primary objective in this project was to examine how military operations can support the long-term objective of achieving civil stability and durable peace in states embroiled in complex emergencies.

Network Centric Warfare
(Alberts, Garstka, & Stein, 1999)
It is hoped that this book will contribute to the preparations for NCW in two ways. First, by articulating the nature of the characteristics of Network Centric Warfare. Second, by suggesting a process for developing mission capability packages designed to transform NCW concepts into operational capabilities.

Behind the Wizard's Curtain
(Krygiel, 1999)
There is still much to do and more to learn and understand about developing and fielding an effective and durable infostructure as a foundation for the 21st century. Without successfully fielding systems of systems, we will not be able to implement emerging concepts in adaptive and agile command and control, nor will we reap the potential benefits of Network Centric Warfare.

Confrontation Analysis: How to Win Operations Other Than War
(Howard, 1999)
A peace operations campaign (or operation other than war) should be seen as a linked sequence of confrontations, in contrast to a traditional, warfighting campaign, which is a linked sequence of battles. The objective in each confrontation is to bring about certain "compliant" behavior on the part of other parties, until in the end the campaign objective is reached. This is a state of sufficient compliance to enable the military to leave the theater.

Information Campaigns for Peace Operations
(Avruch, Narel, & Siegel, 2000)
In its broadest sense, this report asks whether the notion of struggles for control over information identifiable in situations of conflict also has relevance for situations of third-party conflict management—for peace operations.

Information Age Anthology: Volume II*
(Alberts & Papp, 2000)
Is the Information Age bringing with it new challenges and threats, and if so, what are they? What sorts of dangers will these challenges and threats present? From where will they (and do they) come? Is information warfare a reality? This publication, Volume II of the Information Age Anthology, explores these questions and provides preliminary answers to some of them.

Information Age Anthology: Volume III*
(Alberts & Papp, 2001)
In what ways will wars and the military that fight them be different in the Information Age than in earlier ages? What will this mean for the U.S. military? In this third volume of the Information Age Anthology, we turn finally to the task of exploring answers to these simply stated, but vexing questions that provided the impetus for the first two volumes of the Information Age Anthology.

Understanding Information Age Warfare
(Alberts, Garstka, Hayes, & Signori, 2001)
This book presents an alternative to the deterministic and linear strategies of the planning modernization that are now an artifact of the Industrial Age. The approach being advocated here begins with the premise that adaptation to the Information Age centers around the ability of an organization or an individual to utilize information.

Information Age Transformation
(Alberts, 2002)
This book is the first in a new series of CCRP books that will focus on the Information Age transformation of the Department of Defense. Accordingly, it deals with the issues associated with a very large governmental institution, a set of formidable impediments, both internal and external, and the nature of the changes being brought about by Information Age concepts and technologies.

Code of Best Practice for Experimentation
(CCRP, 2002)
Experimentation is the lynch pin in the DoD's strategy for transformation. Without a properly focused, well-balanced, rigorously designed, and expertly conducted program of experimentation, the DoD will not be able to take full advantage of the opportunities that Information Age concepts and technologies offer.

Lessons From Kosovo: The KFOR Experience
(Wentz, 2002)
Kosovo offered another unique opportunity for CCRP to conduct additional coalition C4ISR-focused research in the areas of coalition command and control, civil-military cooperation, information assurance, C4ISR interoperability, and information operations.

NATO Code of Best Practice for C2 Assessment
(2002)
To the extent that they can be achieved, significantly reduced levels of fog and friction offer an opportunity for the military to develop new concepts of operations, new organisational forms, and new approaches to command and control, as well as to the processes that support it. Analysts will be increasingly called upon to work in this new conceptual dimension in order to examine the impact of new information-related capabilities coupled with new ways of organising and operating.

Effects Based Operations
(Smith, 2003)
This third book of the Information Age Transformation Series speaks directly to what we are trying to accomplish on the "fields of battle" and argues for changes in the way we decide what effects we want to achieve and what means we will use to achieve them.

The Big Issue
(Potts, 2003)
This Occasional considers command and combat in the Information Age. It is an issue that takes us into the realms of the unknown. Defence thinkers everywhere are searching forward for the science and alchemy that will deliver operational success.

Power to the Edge: Command...Control... in the Information Age
(Alberts & Hayes, 2003)
Power to the Edge articulates the principles being used to provide the ubiquitous, secure, wideband network that people will trust and use, populate with high quality information, and use to develop shared awareness, collaborate effectively, and synchronize their actions.
CCRP Publications, as products of the Department of Defense, are available to the public at no charge. To order any of the CCRP books in stock, simply contact the Publications Coordinator at:

[email protected]

The Publications Coordinator will work with you to arrange shipment to both domestic and international destinations.

Please be aware that our books are in high demand, and not all titles have been reprinted. Thus, some publications may no longer be available.
