Complexity Theory and Network Centric Warfare
James Moffat
About the CCRP
The Command and Control Research Program (CCRP)
has the mission of improving DoD’s understanding of the
national security implications of the Information Age.
Focusing upon improving both the state of the art and the
state of the practice of command and control, the CCRP
helps DoD take full advantage of the opportunities afforded
by emerging technologies. The CCRP pursues a broad
program of research and analysis in information
superiority, information operations, command and control
theory, and associated operational concepts that enable
us to leverage shared awareness to improve the
effectiveness and efficiency of assigned missions. An
important aspect of the CCRP program is its ability to
serve as a bridge between the operational, technical,
analytical, and educational communities. The CCRP
provides leadership for the command and control research
community by:
• articulating critical research issues;
• working to strengthen command and control research
infrastructure;
• sponsoring a series of workshops and symposia;
• serving as a clearing house for command and control
related research funding; and
• disseminating outreach initiatives that include the
CCRP Publication Series.
This is a continuation of the series of publications
produced by the Center for Advanced Concepts and
Technology (ACT), which was created as a “skunk works”
with funding provided by the CCRP under the auspices of
the Assistant Secretary of Defense (NII). This program has
demonstrated the importance of having a research
program focused on the national security implications of
the Information Age. It develops the theoretical
foundations to provide DoD with information superiority
and highlights the importance of active outreach and
dissemination initiatives designed to acquaint senior
military personnel and civilians with these emerging
issues. The CCRP Publication Series is a key element of
this effort.
Check our Web site for the latest CCRP activities and publications.
www.dodccrp.org
DoD Command and Control Research Program
Assistant Secretary of Defense (NII)
&
Chief Information Officer
Mr. John P. Stenbit
Principal Deputy Assistant Secretary of Defense (NII)
Dr. Linton Wells, II
Special Assistant to the ASD(NII)
&
Director, Research and Strategic Planning
Dr. David S. Alberts
This book is dedicated to my wife Jacqueline and my children,
Louise and Katherine.
NOTES TO THE READER
Although I use the term Complexity Theory as if it were a coherent
body of scientific theory, this area of research is in fact still
both young and evolving. I therefore use it as a shorthand
term to cover a number of areas, each with its own distinct
heritage. Broadly, it covers fractal structures, nonlinear
dynamical systems, and models of self-organisation and self-
organised criticality.
The research on which this book is based could not have been
carried out without the help of a number of other people.
Their contributions are, I hope, suitably acknowledged in the
text. I would like to thank particularly Walter Perry (RAND),
Susan Witty (Dstl), David Rowland, and Maurice Passman for
their contributions. I am also most grateful to Professor Henrik
Jensen for contributing the Foreword.
TABLE OF CONTENTS
Foreword ............................................................. xi
CHAPTER 1
COMPLEXITY IN NATURAL
AND ECONOMIC SYSTEMS ............................... 1
Open, Dissipative Structures ............................................. 8
The Far-from-Equilibrium State ..................................... 13
Self-Organisation in Nature–An Example ...................... 17
Clustering in Space and Time ......................................... 23
Movement of a Boundary ................................................ 28
An Example of Complex Behaviour and
Fractal Time Series in Economics.............................. 33
Summary ......................................................................... 42
CHAPTER 2
CONCEPTS FOR WARFARE FROM
COMPLEXITY THEORY .................................. 45
Forest Fires, Clusters of Trees, and Casualties in War .... 52
CHAPTER 3
EVIDENCE FOR COMPLEX EMERGENT
BEHAVIOUR IN HISTORICAL DATA .............. 57
Introduction ..................................................................... 57
Time Series Behaviour..................................................... 58
Further Historical Data on the Processes of
“Irruption” and Breakthrough ................................... 63
The Fractal Front of Combat .......................................... 70
Power Law Relationships in Combat Data ..................... 72
CHAPTER 4
MATHEMATICAL MODELLING OF
COMPLEXITY, KNOWLEDGE,
AND CONFLICT .............................................. 77
Introduction ..................................................................... 77
Control and Fractal Dimension....................................... 90
Wargames as Open Systems Sustained by
Knowledge Flowing Across the Boundary ................. 94
Wargaming with FASTHEX........................................... 98
The Decision Problem ..................................................... 99
A Simple Example ......................................................... 114
Multiple Sweeps............................................................. 115
False Target Detections/Identifications ........................ 117
Knowledge Representation ........................................... 118
Combat Cycle Knowledge............................................. 121
Quantifying the Benefit of Collaboration
Across an Information Network............................... 130
CHAPTER 5
AN EXTENDED EXAMPLE OF THE
DYNAMICS OF LOCAL COLLABORATION
AND CLUSTERING,
AND SOME FINAL THOUGHTS .................... 139
Clustering and Swarming .............................................. 141
Final Thoughts............................................................... 148
APPENDIX
OPTIMAL CONTROL WITH A
UNIQUE CONTROL SOLUTION ................... 151
Pontryagin’s Maximum Principle .................................. 154
Determining the Extremal Controls .............................. 154
Uniqueness of the Extremal Control for
a Linear System........................................................ 158
LIST OF FIGURES
Figure 3.8: 9th Armoured Division–
Power Spectrum Prediction............................... 61
Figure 3.9: 9th Armoured Division–
SOC Prediction ................................................. 61
Figure 3.10: The Statistics of Linear Irruption .................... 69
Figure 3.11: The Statistics of Radial Irruption .................... 69
Figure 4.1: Area of Operations............................................. 85
Figure 4.2: Five Configuration Classes................................. 88
Figure 4.3: Plot of y = f(x) and y = x - weak control ............ 89
Figure 4.4: Plot of y = g(x) and y = x - strong control.......... 89
Figure 4.5: Recursive Calculation of the
Probability of Weak Control.............................. 91
Figure 4.6: FASTHEX Game Cycle Sequence ................... 99
Figure 4.7: A Wargame as an Open Dynamical Process ... 102
Figure 4.8: An Example Two-Cycle Game........................ 105
Figure 4.9: BLUE Commander’s Allocation Strategy ....... 107
Figure 4.10: BLUE Commander’s Situation
Assessment Problem ........................................ 110
Figure 4.11: Developing a Refined Estimate ..................... 114
Figure 4.12: Refined Probability Assessments.................... 115
Figure 4.13: Knowledge and Entropy for Example 1 ........ 125
Figure 4.14: Experimental Assessment of
Campaign Level Knowledge and
Attrition of Enemy Forces ............................... 129
Figure 4.15: Experimental Assessment of the
Effect of Campaign Level Knowledge on
Own Force Casualties...................................... 129
Figure 4.16: The Critical Path ........................................... 131
Figure 4.17: Parallel Nodes on the Critical Path................ 131
Figure 4.18: The Logistics S-Shaped Curve....................... 135
Figure 5.1: Screenshot of the Start of a
Typical ISAAC Simulation Run ..................... 140
Figure 5.2: Nearest and Next Nearest
Neighbour Clustering ...................................... 142
Figure 5.3: Largest Cluster Size as a Function of
Simulated Time (First Iteration, Red Agents) . 143
Figure 5.4: Largest Cluster Size as a Function of
Simulated Time (First Iteration, Blue Agents) . 143
Figure 5.5: Largest Cluster Size as a Function of Time
(40th Iteration, Red) ........................................ 145
Figure 5.6: Largest Cluster Size as a Function of Time
(40th Iteration, Blue)........................................ 145
Figure 5.7: Frequency Distribution of the
Largest Cluster Size for Red Agents................ 145
Figure 5.8: Frequency Distribution of the
Largest Cluster Size for Blue Agents ............... 146
Figure 5.9: Distribution of Cluster Sizes
(2nd Replication, Red Agents)......................... 147
Figure 5.10: Distribution of Cluster Size for Red Agents .. 148
LIST OF TABLES
FOREWORD
crucial arrangement cannot be studied by focusing on, say,
the legume and neglecting the bacteria; the ecological func-
tion emerges first when the different components are brought
together and interaction is taken into account.
Another important feature of complex systems is their sensi-
tivity to even small perturbations. The same action is found
to lead to a very broad range of responses, making it exceed-
ingly difficult to perform prediction or to develop any type of
experience of a “typical scenario.” This must necessarily lead
to great caution: do not expect what worked last time to work
this time. The situation is exacerbated since real systems
(ecological or social) undergo adaptation. This implies that
the response to a given strategy most likely makes the strat-
egy redundant. An example is the effect of using the same
type of antibiotic against a given type of bacteria. Evolution
soon ensures that the bacteria develop resistance and make
the specific type of antibiotic useless. That complex systems
adapt and change their properties fundamentally as a result
of the intrinsic dynamics of the system is clearly extremely
important. Nevertheless, for the sake of simplicity adaptation
is often neglected in model studies. Sometimes assuming the
existence of a stationary state might be justified (e.g., if one is
interested in “toy” models of the flow of granular material
under a controlled steady input of grains). But if one is deal-
ing with more complex situations such as in ecology, and
even more when considering social and political systems,
ignoring adaptation is very likely to lead to erroneous
conclusions.
We know from studies of Self-Organised Critical models,
which the present book alludes to (for more see P. Bak, How
Nature Works, Oxford University Press, 1997 and H.J. Jensen,
Self-Organized Criticality, Cambridge University Press, 1998),
that the correlations and general behaviour exhibited by these
model systems are entirely determined by the assumed
boundary conditions or the applied drive. The lesson to be
learned from this is that complex systems cannot be studied
independently of their surroundings. Understanding the
behaviour of a complex system necessitates a simultaneous
understanding of the environment of the system. In model
studies, one often assumes that the surroundings can be represented
by one or another type of “noise,” but this is just a
trick that allows one to proceed with the analysis without
understanding the full system under consideration. It is very
important to appreciate that the “drive” or the “noise” is
equally crucial to the understanding, as is the analysis of the
“system” itself. One should bear in mind that the separation
into system, drive, noise, surroundings, etc. is rather arbitrary
and is far from representing a complete analysis.
From these considerations, we see that it is vitally important
to consider warfare as a complex system that is linked and
interacts (in a coevolving way) with the surrounding
socioeconomic and political context. From that perspective, the
present book is a “work in progress” and a first
step along the road in helping to analyse and structure these
difficult and serious issues. Forgetting that war and warfare
are an intimate part of a much larger complex system will
lead to incomplete and even dangerously incorrect conclu-
sions. Applying the approach of Complexity Theory to
warfare leads one to the self-consistent realisation that war-
fare will have to be analysed in its larger context. Further
work will need to examine how coevolution across the entire
network of military, socioeconomic, and political interactions
leads firstly to emergent effects at higher levels, and of
equal importance how such effects lead to coevolution at the
higher level. It will also be important to consider the robust-
ness of such networks, and their vulnerability to damage.
CHAPTER 1
COMPLEXITY IN NATURAL AND ECONOMIC SYSTEMS

1. The contribution of Dr. Maurice Passman to this chapter is gratefully acknowledged.
2. Chapter 1 of: Moffat J (2002). Command and Control in the Information Age. The Stationery Office. London, UK.
A SIMPLE EXAMPLE
To illustrate this, let us consider a simple thermodynamic
thought experiment. Imagine a layer of fluid limited by two
horizontal parallel plates whose lateral dimensions are much
longer than the width of the layer, as shown in Figure 1.1.
Now suppose at first that the constraint is weak, i.e. the change
in temperature, ΔT , is small. The system will again adopt a
simple and unique state in which the only active process is a
transfer of heat from the lower to the upper plate, from which
heat is lost to the external world. The only difference from the
state of equilibrium is that temperature, and consequently den-
sity and pressure, are no longer uniform. They vary from warm
regions to cold regions in an approximately linear fashion. This
phenomenon is known as thermal conduction. In this new state
that the system has reached in response to a constraint, stability
will prevail again and the behaviour will eventually be as simple
as at equilibrium. However, by removing the system from equi-
librium further and further, through an increase in ΔT , we
observe that suddenly, at a value of ΔT that we will call critical,
matter begins to perform a bulk movement. Moreover, this
movement is far from random; the fluid is structured in a series
of small structures known as Bénard cells.
Owing to thermal expansion, the fluid closer to the lower plate
is characterised by a lower density than that nearer the upper
plate. This gives rise to a gradient of density that opposes the
force of gravity. This configuration is thus potentially unstable.
Consider a small volume of the fluid near the lower plate.
Imagine that this volume is displaced upward by a perturba-
tion. This volume, now in a colder and hence denser region,
will experience an upward Archimedes force, amplifying the
ascending movement further. If, on the other hand, a small
droplet initially close to the upper plate is displaced down-
ward, it will penetrate an environment of low density and the
Archimedes force will tend to amplify the initial descent. The
fluid thus generates the observed currents. The stabilising
effect of viscosity, which generates an internal friction oppos-
ing movement, counteracts the destabilising effects. This, and
ble for the same parameter values and chance alone will
decide which of these solutions is realised. In this way, the sys-
tem has been perturbed from a state of equilibrium or near-
equilibrium to a state of self-organisation, with a number of pos-
sible modes of behaviour.
What happens to the Bénard cell system when the thermal
constraint is increased beyond this first threshold? For some
range of values the Bénard cells will be maintained globally,
but some of their specific characteristics will be modified.
Further constraint induces the system to move beyond another
critical point and turbulence is witnessed. Note that all of these
critical behaviours are different from the phase changes we
normally associate with closed thermodynamic systems. The
reason behind this is that a nonequilibrium constraint is being
applied. For example, the dendritic structure associated with
snowflakes has nothing to do with the structure of the underlying
ice-crystal lattice. The scale, size, and spacing of the
emergent structure are an order of magnitude larger.
To summarise, nonequilibrium has enabled the system to
transform part of the energy communicated from the envi-
ronment into an ordered behaviour of a new type: the
dissipative structure. This regime is characterised by symmetry
breaking, multiple modes of behaviour, and correlation. Such a system
is called “open” since it is open to the effect of energy or infor-
mation flowing into and out of the system. It is also called
“dissipative” because of such energy flows, and the resultant
dissipation of energy.
erties that we shall call fluxes. We now recognise this as an open
system (in contrast to a closed or isolated system). Figure 1.2 is
a schematic representation of such an open system, communi-
cating with the environment through the exchange of such
properties as mass, energy, or information. The amount
transported per unit surface per unit time is the flux of the corresponding
property across the system. In our simple example above, the
amount of heating is the flux of energy into the system.
dX_i/dt = F_i(X_1, …, X_n; λ_1, …, λ_m),   (i = 1, …, n)
AN EXAMPLE
If we think of a guided missile attempting to manoeuvre
towards a target, the measure of loss is the miss distance rela-
tive to the aim point. The control parameters are the settings
for the missile fins at a given time t. For simple forms of linear
guidance (e.g., early forms of laser-guided bombs), this leads to
what is called bang-bang control, where the missile fins “bang”
from one extreme setting to another in order to keep the mis-
sile on course. The Appendix goes into this in more detail and
shows that such solutions correspond to maximising or mini-
mising a Hamiltonian function. This is due to Pontryagin’s
maximum principle. Applied to a linear control system, this
maximum principle leads to the solution of bang-bang control.
A characteristic feature of many of the systems encountered in
nature, however, is that the F’s are complicated nonlinear func-
tions of the X’s. The equations of evolution of this type of
system should then admit, under certain conditions, several
solutions (rather than just the one optimal solution) since a
multiplicity of solutions is the most typical feature of a nonlin-
ear equation. Our assumption will be that these solutions
represent the various modes of behaviour of the underlying system.
Consider, for example, the reaction scheme

A + 2X ⇌ 3X   (forward rate constant k_1, reverse k_2)

X ⇌ B   (forward rate constant k_3, reverse k_4)

Balancing the forward and reverse rates of each step gives

k_1[a][x]^2 = k_2[x]^3

k_3[x] = k_4[b]

so that at equilibrium

[x_e] = k_4[b_e]/k_3 = k_1[a_e]/k_2

which requires the constraint

[b_e]/[a_e] = k_1 k_3 / (k_2 k_4).

More generally, the steady-state concentrations [x_s] satisfy the cubic

−k_2[x_s]^3 + k_1[a][x_s]^2 + k_3[x_s] − k_4[b] = 0.
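The multiplicity of steady-state solutions can be seen numerically. The sketch below (pure Python; the rate constants and reservoir concentrations are chosen purely for illustration) finds the positive roots of the cubic steady-state condition above by bracketing and bisection. For these values the cubic has two positive roots, i.e. more than one possible steady state.

```python
# Steady states of the scheme above solve the cubic
#   -k2*x**3 + k1*a*x**2 + k3*x - k4*b = 0.
# Hypothetical rate constants, chosen only to illustrate multistability.
k1 = k2 = k3 = k4 = 1.0
a, b = 3.0, 1.0

def p(x):
    return -k2 * x**3 + k1 * a * x**2 + k3 * x - k4 * b

def bisect(f, lo, hi, tol=1e-12):
    """Simple bisection root finder on a sign-changing bracket."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Scan for sign changes to bracket the positive roots, then refine.
roots = []
xs = [i * 0.01 for i in range(0, 1001)]   # scan x from 0.00 to 10.00
for x0, x1 in zip(xs, xs[1:]):
    if p(x0) == 0 or p(x0) * p(x1) < 0:
        roots.append(bisect(p, x0, x1))

print(roots)  # two positive steady states -> more than one possible mode
```

For these illustrative constants the system admits two positive steady states (near 0.47 and 3.21), which is exactly the multiplicity of solutions that the nonlinearity makes possible.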
x_i = X_i − X_{i,e}

dx_i/dt = −Σ_j Γ_{i,j} (∂Φ/∂x_j).
Φ is a thermodynamic potential taking its minimum at equi-
librium and {Γi, j } is a symmetric matrix. This symmetry can be
traced back to the property of detailed balance or the property
of the invariance of the equilibrium state to time reversal. We
can see from this that the system response is essentially linear
near to equilibrium (i.e., small changes lead to small effects).
dX_i/dt = F_i(X_1, …, X_n; λ_1, …, λ_m),   (i = 1, …, n)
where, as before, Fi are the rate laws, and λi are the control
parameters. In a typical natural phenomenon, the number of
variables n is expected to be very high. This will considerably
complicate the search for all possible solutions. However, sup-
pose that by experiment we know one of the solutions. By a
standard method, known as Linear Stability Analysis, we can then
determine the parameter values for which this solution
regarded as the reference state switches from asymptotic sta-
bility to instability.
and

dx_i/dt = F_i({X_{i,s} + x_i}, λ) − F_i({X_{i,s}}, λ).
dx_i/dt = Σ_j L_{ij}(λ) x_j + h_i({x_j}, λ),   i = 1, 2, …, n
where Lij are the coefficients of the linear part and hi are the
nonlinear part. It is assumed that the asymptotic stability of
the reference state (i.e., X = Xs or x = 0) of the system is iden-
tical to that of the linearised part:
dx_i/dt = Σ_j L_{ij}(λ) x_j,   i = 1, 2, …, n.
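As an illustration of Linear Stability Analysis, the sketch below linearises a hypothetical two-variable system (a standard Hopf normal form, not taken from the text) about its reference state x = 0 by finite differences, and inspects the eigenvalues of the linear coefficients L_ij: the reference state is asymptotically stable exactly when every eigenvalue has negative real part.

```python
# A minimal sketch of Linear Stability Analysis for a hypothetical
# two-variable system (Hopf normal form, chosen for illustration):
#   dx1/dt = lam*x1 - x2 - x1*(x1**2 + x2**2)
#   dx2/dt = x1 + lam*x2 - x2*(x1**2 + x2**2)
# x = 0 is a solution for every value of the control parameter lam.

def F(x, lam):
    r2 = x[0]**2 + x[1]**2
    return [lam*x[0] - x[1] - x[0]*r2,
            x[0] + lam*x[1] - x[1]*r2]

def jacobian(F, xs, lam, h=1e-6):
    """L_ij = dF_i/dx_j at the reference state, by central differences."""
    n = len(xs)
    L = [[0.0]*n for _ in range(n)]
    for j in range(n):
        xp = list(xs); xp[j] += h
        xm = list(xs); xm[j] -= h
        fp, fm = F(xp, lam), F(xm, lam)
        for i in range(n):
            L[i][j] = (fp[i] - fm[i]) / (2*h)
    return L

def max_real_eigenvalue_2x2(L):
    """Largest real part of the eigenvalues of a 2x2 matrix."""
    tr = L[0][0] + L[1][1]
    det = L[0][0]*L[1][1] - L[0][1]*L[1][0]
    disc = tr*tr - 4*det
    if disc >= 0:
        return max((tr + disc**0.5)/2, (tr - disc**0.5)/2)
    return tr/2  # complex pair: real part is tr/2

for lam in (-0.5, 0.5):
    L = jacobian(F, [0.0, 0.0], lam)
    stable = max_real_eigenvalue_2x2(L) < 0
    print(lam, "stable" if stable else "unstable")  # -0.5 stable, 0.5 unstable
```

Here the eigenvalues are lam ± i, so the reference state switches from asymptotic stability to instability as the parameter lam crosses zero: exactly the kind of parameter threshold the method is designed to locate.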
dG(s)/ds = (1 − G(s)) / (L^d ⟨S⟩_{G(s)})
Figure 1.4 shows how the equation (the hatched line) approxi-
mates the self-organised movement of the system, via a series
of avalanches/clusters towards the critical attractor of the sys-
tem at which the system has optimal flexibility (in the sense
that clusters of all sizes can be created). This critical point cor-
responds to a fitness value fc. At this point, there are no fitness
values below this critical value and a flat distribution of fitness
values in the range from fc to 1.0. This is in complete contrast
to the behaviour of a closed system such as an ideal gas in an
isolated container, where the gas evolves from (for example)
being partitioned in part of the container to the equilibrium
state where it is spread equally throughout.
Such critical systems are of particular scientific interest. Sys-
tems in critical states do not have any characteristic scale and
may therefore exhibit the full range of behavioural characteris-
tics within the particular system restraints. This means that
systems at the point of criticality are in a position of optimal
flexibility in some sense, as we have noted. It could thus be
S ∝ (f_c − f_i)^{−γ}.
SELF-ORGANISED CRITICALITY
In a paper published in 1987, Bak, Tang, and Wiesenfeld [7]
first proposed the hypothesis that a system consisting of many
interacting constituents may exhibit, in certain cases, a specific
general emergent behaviour characteristic of the system. Bak
described the behaviour of this type of system by the term self-
organised criticality (SOC). Self-organisation has for many years
4. A power-law f(x) = x^a has the property that the relative change f(kx)/f(x) = k^a is independent of x. Power-laws, in this sense, lack characteristic scale.
5. See for example the standard text Chaos and Fractals by Heinz-Otto Peitgen, Hartmut Jürgens, and Dietmar Saupe. Springer. 1992.
Despite this, very little (until now) has been known about why
fractals form. Fractal structures are not the lowest energy con-
figuration that can be selected in, for example, thermo-
dynamic systems, therefore some kind of dynamic selection of
configuration must be taking place.
Bak explains the connection in the following way. A signal will
be able to evolve through the system as long as it is able to find
a connected path of above-threshold regions. When the system
is either driven at random or started from a random initial
state, regions that are able to transmit a signal will form some
kind of random network. This network is correlated by the
interaction of the internal dynamics with the external field.
The complicated interrelation between the two driving
dynamics means that a complex, finely-balanced system is pro-
duced. As the system is driven, after this marginally stable self-
organised state has been reached, we will see flashes of activity
as external perturbations interact with internal drivers to spark
off avalanches (i.e., clusters) of activity through different routes
in the system. Bak’s assertion is that the structure of this
dynamic network is fractal. If the activated clusters consist of
fractals of different sizes, then the duration of the induced pro-
cesses travelling through these fractals will vary greatly.
Different timescales of this type lead to what is termed 1/f noise.
1/f noise is a label used to describe a particular form of time
correlation in nature. If a time signal fluctuates in a seemingly
erratic way, the question is whether the value of the signal
N (τ 0 ) at time τ 0 has any correlation to the signal measured at
time τ 0 + τ ( N (τ 0 + τ) ). The amount of causation is character-
ised by a temporal correlation function:
G(τ) = ⟨N(τ_0) N(τ_0 + τ)⟩_{τ_0} − ⟨N(τ_0)⟩_{τ_0}^2.
S(f) = lim_{T→∞} (1/T) |∫_{−T}^{T} N(τ) e^{2πifτ} dτ|^2.
1/f noise corresponds to the case S(f) = 1/f, i.e. to a fractal clustering of the signal amplitude in time.
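The power spectrum defined above can be estimated directly from a sampled signal. The sketch below (illustration only) writes out the discrete analogue of the integral as a plain DFT, deliberately naive rather than an FFT, and checks it on a pure tone, whose spectrum should peak at the tone's frequency.

```python
import cmath, math

# A discrete estimate of the power spectrum S(f) defined above: for a
# sampled signal N(tau), S(f) ~ |sum_tau N(tau) e^{2*pi*i*f*tau}|^2 / T.
# This is a plain O(n^2) DFT, written out to mirror the integral.

def power_spectrum(signal, dt=1.0):
    T = len(signal) * dt
    spectrum = []
    for k in range(len(signal) // 2):
        f = k / T
        s = sum(x * cmath.exp(2j * math.pi * f * (n * dt))
                for n, x in enumerate(signal))
        spectrum.append(abs(s)**2 / T)
    return spectrum

# Sanity check on a pure tone: the spectrum should peak at its frequency.
n, f0 = 256, 10 / 256            # 10 cycles over the record
tone = [math.sin(2 * math.pi * f0 * t) for t in range(n)]
S = power_spectrum(tone)
print(S.index(max(S)))  # -> 10
```

The same estimator applied to a 1/f signal would show S(f) falling off as 1/f; the tone is used here only because its spectrum is known exactly.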
CELLULAR AUTOMATA
The introduction in 1987 of the Self-Organised Criticality con-
cept employed the language of avalanches (clusters) of sand
grains. It proposed a simulation model (a cellular automaton model,
to be precise) of the most essential features of sand dynamics.
This cellular automaton model is indeed characterised by
power-laws and exhibits critical behaviour. As an example, let
us look again at the modelling of an ecosystem consisting of a
number of coevolving species (the Bak-Sneppen evolution
model) that we mentioned earlier. This automaton process
considers the points of a grid and has a simple set of rules
determining how the system changes from one time-step to the
next, to represent the changing fitness of the species itself, and
the coevolutionary impact of that change on the fitness of
closely linked species in the ecosystem. Note that this linkage is
one of local species influence and coevolution. It does not nec-
essarily assume physical closeness. Although these rules are
simple, the emergent behaviour of the system is complex and
surprising–a characteristic of such nonlinear interactions.
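A minimal version of these rules can be written in a few lines. The sketch below (illustrative parameters; a ring of 200 "species") repeatedly replaces the least-fit value and its two neighbours with fresh random fitnesses, and tracks the "gap", the highest value the running minimum has reached so far, which self-organises upward towards the critical threshold (roughly 0.67 in one dimension) without any external tuning.

```python
import random

# A minimal sketch of the Bak-Sneppen rules described above: species on a
# ring hold fitness values in [0, 1]; at each step the least-fit species
# and its two neighbours are replaced by fresh random fitnesses
# (local coevolution). Parameters are illustrative only.

def bak_sneppen(n_species=200, steps=20000, seed=1):
    rng = random.Random(seed)
    fitness = [rng.random() for _ in range(n_species)]
    gap = 0.0  # highest value the running minimum has reached so far
    for _ in range(steps):
        i = min(range(n_species), key=fitness.__getitem__)
        gap = max(gap, fitness[i])
        # Replace the minimum and its two neighbours; index -1 wraps
        # around in Python, so the lattice is a ring.
        for j in (i - 1, i, (i + 1) % n_species):
            fitness[j] = rng.random()
    return fitness, gap

fitness, gap = bak_sneppen()
# The gap drifts up towards the critical threshold (~0.67 in 1-D), and
# almost all fitness values end up above it: self-organised criticality.
print(round(gap, 2))
```

No parameter is tuned to a critical value here; the threshold emerges from the update rule itself, which is the defining feature of self-organised criticality.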
MOVEMENT OF A BOUNDARY
In natural systems, we can consider the movement of a bound-
ary through a medium (for example, the boundary of an
atomic surface, the boundary of a growing cluster of bacteria,
or the front of advance of a fluid “invasion” of a medium such
as a crystalline rock). This has been studied extensively in rela-
tion to the laying down of single atom surfaces using molecular
beam epitaxy [9]. The most relevant case from our point of
view is the front of advance of fluid “invasion” of a medium.
As described in [9], we can represent the medium itself as con-
It turns out that for this case, when the pinning probability p is
greater than a critical value pc, the growth of the interface is
halted by a spanning path of pinning cells. Such models of
interface or boundary movement exhibit fractal properties of
the interface, as discussed in detail in [9]. We shall see similar
effects later in our discussion in Chapter 4 of the control of the
battlespace using ideas based on preventing the flow of oppos-
ing forces and/or third parties through the space. Rather than
choosing the next cell to invade at random, as in the DPD
model, we can use a model of the process that is more akin to
the manoeuvrist principle of applying your strength where
the opponent is weak–in other words, the cell next to be wet-
ted is the one where the local pinning force of the medium is
weakest. Such a model of the boundary movement is the Inva-
sion Percolation model. We can create a model of this process in a
way that is consistent with our description of the Bak-Sneppen
evolution model of local coevolution [6]. We start by assign-
ing, as with the Bak-Sneppen evolution model, random
numbers fi between 0 and 1 to the points of a d-dimensional
lattice. Initially, one side of the lattice is the wetted cluster.
The random numbers at the boundary of the wetted cluster
are examined. At each update step s, the site with the smallest
random number fi,s on the boundary of the wetted area is
located and added to the cluster. In this case, we can interpret
the values fi as the values of the local pinning force, and the
cluster advances at those points where the pinning force is
smallest. As noted in [6], an important physical realisation of
invasion percolation is the displacement of one fluid by
another in a porous medium. The boundary of the cluster cre-
ated by this process is fractal and has a fractal dimension in the
range 1.33–1.89, depending on the exact definition of the
boundary [6, Appendix].
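The Invasion Percolation rule described above (always wet the boundary site with the smallest pinning force) can be sketched directly. The lattice size and step count below are illustrative, and a production version would keep the boundary in a priority queue rather than rescanning it each step.

```python
import random

# A minimal sketch of the Invasion Percolation model described above:
# random pinning forces f_i on a 2-D lattice; the wetted cluster starts
# as the left edge and always advances at the boundary site where the
# local pinning force is smallest.

def invasion_percolation(width=30, height=30, steps=300, seed=2):
    rng = random.Random(seed)
    f = [[rng.random() for _ in range(width)] for _ in range(height)]
    cluster = {(y, 0) for y in range(height)}      # left edge is wetted
    for _ in range(steps):
        # Boundary: lattice neighbours of cluster sites not yet wetted.
        boundary = set()
        for (y, x) in cluster:
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < height and 0 <= nx < width and (ny, nx) not in cluster:
                    boundary.add((ny, nx))
        if not boundary:
            break
        # Invade where the local pinning force is weakest.
        cluster.add(min(boundary, key=lambda s: f[s[0]][s[1]]))
    return cluster

cluster = invasion_percolation()
print(len(cluster))  # 30 edge sites plus one invaded site per step
```

Plotting the cluster shows the characteristically ragged, fractal front: the "strength against weakness" rule concentrates advance wherever the medium pins least.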
W(L, t) = (1/L) Σ_{i=1}^{L} (h(i, t) − h̄(t))^2
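The width W(L, t) above can be computed for any height profile h(i, t). The sketch below grows a simple random-deposition surface (an illustrative choice, not the model discussed in the text) and measures its width, which for this growth rule increases roughly linearly with time.

```python
import random

# Computing the interface width W(L, t) defined above (the mean-square
# deviation of the height profile) for a synthetic surface. Here h is a
# random-deposition surface, chosen purely for illustration.

def width_squared(h):
    L = len(h)
    mean = sum(h) / L
    return sum((hi - mean) ** 2 for hi in h) / L

rng = random.Random(3)
L, t = 500, 400
h = [0] * L
for _ in range(t):
    for i in range(L):          # random deposition: each column grows
        if rng.random() < 0.5:  # by one unit with probability 1/2
            h[i] += 1

print(width_squared(h))  # grows roughly linearly with t for this rule
```

Correlated growth rules (such as the equation below) roughen more slowly and saturate at a width set by the system size L, which is what the roughness exponent describes.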
∂h(x, t)/∂t = v∇^2 h + (λ/2)(∇h)^2 + η(x, t).
The first term in this equation represents linear effects of the
interface growth, the second captures nonlinear effects, and
the third is a noise term. This thus represents the starting point
for an analytical expression (i.e., a metamodel) of the advance
of a conflict front through a locally controlled area, as dis-
cussed below.
Y(t) = rX(t/a),

i.e. the graph of X is stretched in the time direction by a factor a and in the amplitude by r. The displacements in Y for time differences t are the same as those in X, multiplied by r, for corresponding time differences t/a. Thus, the squared displacements are proportional to r^2 t/a. In order to ensure the same constant of proportionality as the original Brownian motion, we require r^2/a = 1, or r = √a. For example, when replacing t by t/2, i.e. stretching the graph by a factor of 2, we have a = 2, i.e. r = √2.
In this broader context of fractal processes, ordinary Brownian
motion is a random process X(t) with Gaussian increments and
var(X(t_2) − X(t_1)) ∝ |t_2 − t_1|^{2H}, where H (the Hurst
exponent) = ½. We can consider (as discussed in [9]) the Brownian
motion time series as describing an interface (between the
parts above and below the series) stretching between the time
points t1 and t2. In terms of our previous discussion of interface
roughness, we now have L = t2-t1. The standard deviation (i.e.,
the roughness of the interface generated by the Brownian
motion time series) over this timespan L has the form Lα
where in this case α equals the Hurst exponent H (as we can
see from the expression for the variance above in terms of H).
Thus, standard Brownian motion corresponds to a roughness
exponent of ½. Other values of H are possible, corresponding
to rougher or smoother forms of time series.
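The relation var(X(t_2) − X(t_1)) ∝ |t_2 − t_1|^{2H} can be checked numerically for an ordinary random walk, where the estimated Hurst exponent should come out near ½. The lags below are illustrative choices.

```python
import math, random

# A quick numerical check of var(X(t2) - X(t1)) ∝ |t2 - t1|^(2H) for
# ordinary Brownian motion, where H should come out near 1/2.

rng = random.Random(0)
n = 100_000
# Random walk: cumulative sum of independent Gaussian steps.
x, walk = 0.0, []
for _ in range(n):
    x += rng.gauss(0.0, 1.0)
    walk.append(x)

def increment_variance(series, lag):
    """Variance of the increments X(t + lag) - X(t) over the record."""
    d = [series[i + lag] - series[i] for i in range(len(series) - lag)]
    m = sum(d) / len(d)
    return sum((v - m) ** 2 for v in d) / len(d)

# Compare the increment variance at two lags; the ratio gives 2H.
v1, v2 = increment_variance(walk, 10), increment_variance(walk, 160)
H = 0.5 * math.log(v2 / v1) / math.log(160 / 10)
print(round(H, 2))  # close to 0.5 for an uncorrelated walk
```

A persistent (positively correlated) series would yield H > ½ and an anti-persistent one H < ½, i.e. rougher or smoother interfaces in the sense just described.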
constants. The adt term implies that x has an expected drift rate
of a per unit time. Without the bdz term, the equation becomes:
dx = a dt,   i.e.   dx/dt = a.
The bdz term may be regarded as adding noise or variability to
the path followed by x. For a small time interval Δt, the
change in x, Δx, is given by Δx = aΔt + bε√Δt, where ε is a
random draw from a standardised normal distribution. Δx thus has a
normal distribution with mean aΔt, standard deviation
b√Δt, and variance b^2Δt. Similarly, the change in x
over any time interval T is normally distributed with mean
aT, standard deviation b√T, and variance b^2 T.
Even more generally, a stochastic process is defined as an Ito
process if dx = a(x, t)dt + b(x, t)dz.
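Discretising the generalised Wiener process into small steps Δx = aΔt + bε√Δt and summing to a horizon T should reproduce the stated mean aT and variance b²T of the change in x. A quick check, with illustrative parameter values:

```python
import random

# Discretising the generalised Wiener process dx = a*dt + b*dz: over an
# interval T the change in x should be normal with mean a*T and
# variance b^2*T. Parameters are illustrative only.

rng = random.Random(7)
a, b, T, n_steps = 0.3, 2.0, 4.0, 100
dt = T / n_steps

def simulate_change():
    """One realisation of the change in x over [0, T]."""
    x = 0.0
    for _ in range(n_steps):
        x += a * dt + b * rng.gauss(0.0, 1.0) * dt ** 0.5
    return x

samples = [simulate_change() for _ in range(10_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))  # near a*T = 1.2 and b^2*T = 16.0
```

Note the √Δt in the noise term: halving the step size does not halve the noise, which is precisely why the variance grows linearly in T rather than quadratically.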
A basic simulation of stock price movement would be to use a
generalised Wiener process. This is clearly inadequate, however,
since it assumes a constant drift rate and a constant variance
rate, i.e. it makes the percentage stock return depend upon the
stock price. The constant expected drift rate assumption is
inappropriate and is replaced by the assumption that the
expected drift, expressed as a proportion of the stock price, is
constant. Thus, if S is the stock price, the expected drift rate in
S is μS for some constant parameter μ and for a small time
interval, Δt , the expected change in S is μSΔt . If the variance
rate of the stock price is always zero, then

dS = μS dt,   or   dS/S = μ dt,   i.e.   S = S_0 e^{μt}.
In practice, of course, the variance rate is not zero; taking the
variance rate per unit time to be proportional to the square of
the stock price gives:

dS = μS dt + σS dz, or dS/S = μ dt + σ dz,

and hence, by Ito's lemma:

d ln S = (μ − σ²/2) dt + σ dz.
The change in ln S between times t and T is thus normally
distributed:

ln S_T − ln S ~ φ[ (μ − σ²/2)(T − t), σ√(T − t) ]

where φ[m, s] denotes a normal distribution with mean m and
standard deviation s. It follows that:

ln S_T ~ φ[ ln S + (μ − σ²/2)(T − t), σ√(T − t) ]
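This lognormal property is easy to verify by simulation. The sketch below (illustrative values S0 = 100, μ = 0.1, σ = 0.3, T = 1, not taken from the text) steps the exact log-space update and recovers the stated mean and standard deviation of ln S_T:

```python
import math
import random

def gbm_terminal_logs(s0, mu, sigma, T, steps, n_paths, seed=1):
    """Simulate geometric Brownian motion in log space and
    return ln S_T for each path."""
    rng = random.Random(seed)
    dt = T / steps
    logs = []
    for _ in range(n_paths):
        ln_s = math.log(s0)
        for _ in range(steps):
            # d ln S = (mu - sigma^2/2) dt + sigma dz
            ln_s += (mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        logs.append(ln_s)
    return logs

s0, mu, sigma, T = 100.0, 0.1, 0.3, 1.0
logs = gbm_terminal_logs(s0, mu, sigma, T, steps=50, n_paths=20000)
m = sum(logs) / len(logs)
s = math.sqrt(sum((x - m) ** 2 for x in logs) / len(logs))
# Theory: mean = ln(100) + (0.1 - 0.045) * 1, standard deviation = 0.3
print(m, s)
```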
CLUSTERING IN TIME
What is of particular interest is that Turner and Weigel’s data
strongly suggest the occurrence of temporal clustering, with a
distribution of the form

F(y) ∝ 1/y^α,
i.e. a power-law. For further discussion of these and related
ideas from Complexity Theory and dynamical systems used in
financial mathematics, see references 10 and 13.
SUMMARY
In summary, we have looked in some depth at the complex
behaviour of natural biological and physical systems. From
our analysis of these open and dissipative systems, it is clear
that there are a number of key properties of complexity that
are important to our consideration of the nature of future
warfare. Such futures, involving the exploitation of loosely
coupled command systems such as Network Centric Warfare,
will have to take account of these key properties. A list of these
is given here, and then discussed further in Chapter 2 in the
context of Network Centric Warfare.
1. NONLINEAR INTERACTION: this can give rise to surpris-
ing and non-intuitive behaviour, on the basis of simple
local coevolution.
2. DECENTRALISED CONTROL: the natural systems we
have considered, such as the coevolution of an ecosys-
tem or the movement of a fluid front through a
crystalline structure, are not controlled centrally. The
emergent behaviour is generated through local
coevolution.
3. SELF-ORGANISATION: we have seen how such natural
systems can evolve over time to an attractor correspond-
ing to a special state of the system, without the need for
guidance from outside the system.
REFERENCES
1 HOFFMAN F G and HORNE G E (1998). Maneuver Warfare Science 1998.
Dept of the Navy, HQ U.S. Marine Corps, Washington DC.
2 FORDER R (2000). “The Future of Defence Analysis.” Journal of Defence
Science. 5, No. 2. pp. 215-226.
3 CEBROWSKI A (2000). “Network Centric Warfare and Information
Superiority” Keynote address from proceedings, Royal United Services
Institute (RUSI) conference “C4ISTAR; Achieving Information
Superiority.” July 2000. RUSI, Whitehall, London, UK.
4 PRIGOGINE I (1980). From Being to Becoming; Time and Complexity in the
Physical Sciences. W H Freeman and Co., San Francisco, USA.
5 MOFFAT J (2002). Command and Control in the Information Age; Representing its
Impact. The Stationery Office, London, UK.
6 PACZUSKI M, MASLOV S, and BAK P (1996). “Avalanche Dynamics in
Evolution, Growth and Depinning Models.” Physical Review E. 53 No. 1.
pp. 414-443.
7 BAK P, TANG C, and WIESENFELD K (1987). “Self-Organised Criticality: An
Explanation for 1/f Noise.” Physical Review Letters. 59. pp. 381-384.
ADDITIONAL REFERENCE
14 PEITGEN H-O, JURGENS H, and SAUPE D (1992). Chaos and Fractals.
Springer-Verlag.
CHAPTER 2
CONCEPTS FOR
WARFARE FROM
COMPLEXITY THEORY
...It follows, from what we have just said, that the representation
of the C2 process must reflect two different mechanisms. The
first is the lower-level interaction of simple rules or algorithms,
which generate the required system variety. The second is the
need to damp these by a top-down C2 process focused on cam-
paign objectives. Each of these has to be capable of being
represented using the same Generic HQ/Command Agent object
architecture. We have chosen to do this by following the general
psychological structure of Rasmussen’s Ladder, as a schema for
the decisionmaking process. At the lower levels of command
(below about Corps, and equivalent in other environments), this
will consist of a stimulus/response mechanism. In cybernetic
terms, this is feedback control. At the higher level, a broader
(cognitive-based) review of the options available to change the
current campaign plan (if necessary) will be carried out. In
cybernetic terms, this is feedforward control since it involves the
use of a ‘model’ (i.e., a model within our model) to predict the
effects of a particular system change.
In the last chapter of [2] (Chapter 6: “Paths to the Future”) the
following point is made, which is the foundation for all of the
work and ideas presented here:
REFERENCES
1 CEBROWSKI A (2000). “Network Centric Warfare and Information
Superiority.” Keynote address from proceedings, Royal United Services
Institute (RUSI) conference. “C4ISTAR; Achieving Information
Superiority.” RUSI. Whitehall, London, UK.
2 MOFFAT J (2002). Command and Control in the Information Age; Representing its
Impact. The Stationery Office. London, UK.
CHAPTER 3
EVIDENCE FOR
COMPLEX EMERGENT
BEHAVIOUR IN
HISTORICAL DATA
INTRODUCTION
teristics of the data for the next timestep [4]); and lastly, use of
a maximum entropy method to calculate a power spectrum
from which a linear prediction may be made [5]. All of these
approaches are available in a package of time series analysis
procedures (the Chaos Data Analyser) produced by the Ameri-
can Institute of Physics for the analysis of experimental data in
natural systems, and that is what we have used here. A range
of the data points from the original data were deleted (those at
the end of the time series) and a prediction made of these data
points, which is then compared with the original data for the
2nd Armoured Division. The plots (Figures 3.1 to 3.5) show
casualties per 1000 on the y-axis and time in days on the x-axis.
The first part of this time series (up to day 38) was in fact used
to train a number of different time series prediction methods,
and these have been compared with the predictions for day 39
onward. In fitting a prediction based on a self-organised
criticality (SOC) fractal series, we have assumed that the
circumstances remain sufficiently constant that we can fit a
single SOC process (this corresponds to a power spectrum that
is linear when plotted on a log-log scale). Comparing the
“jerkiness” of the SOC prediction and the real data, the
general pattern of the process is very similar.
cause of this deviation. The fact that these lines lie parallel to
each other means the following: given two such curves, corre-
sponding to x variables x(1) and x(2), there is a scaling variable
λ (which depends on the two categories of breakthrough being
considered) such that
Log x(1) = λ Log x(2), i.e. x(1) = x(2)^λ,

and the distribution of mean advance at breakthrough
coincides for the variables x(1) and x(2)^λ. In this sense, we can
say that x(2) can be scaled by a power transformation so that
its distribution collapses onto that of x(1).
An alternative radial measure of irruption, as we have dis-
cussed, is √(area at breakthrough)/(days to breakthrough),
with dimensions of miles per day. If this is plotted on the same
basis as the previous figure, we again have evidence for a form
of “scaling collapse” of the type discussed above (Figure 3.11).
Moreover, the stability of the two sets of data indicates that
there are (at least) two categories of emergent behaviour for
irruption and subsequent campaign outcome: linear advance
and radial propagation from a point.
Each data point in Figures 3.10 and 3.11 is a campaign
outcome, classified in terms of immediate (I), quick (Q), or
prolonged (P) irruption, and Subsequent Success (SS) or
Subsequent Failure (SF). The key to the data points is given below:
1/R = k1/T + k2
HELMRAT = (x0² − x²) / (y0² − y²)

FORRAT = x0 / y0
where in each case, x0 is the starting value of force size, and x is
the final value (similarly for y). Hartley has established a power
law relationship between these two variables, HELMRAT and
FORRAT, on the basis of the comprehensive data sets
described above. He has shown that (in logarithmic terms):
Ln (HELMRAT) = α Ln (FORRAT) +β
where the expected value of α is approximately 1.35 and the
value of β is approximately normally distributed about the
value -0.22 with standard deviation of 0.7. Hartley shows that
the value of α has the characteristics of a universal constant,
being stable over four centuries of time [14, Figure 17], and
stable when considering conflicts of different sizes, ranging
from force sizes of less than 5,000 to more than 100,000 [14,
Tables 4, 5, and 6].
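These definitions are easy to evaluate directly. In the sketch below the engagement figures are hypothetical, chosen only to illustrate the computation; Hartley's fitted values α ≈ 1.35 and β ≈ −0.22 are then used to show what the relation would predict for such an engagement:

```python
import math

def helmrat(x0, x, y0, y):
    """Helmbold ratio: relative attrition measured by squared force changes."""
    return (x0**2 - x**2) / (y0**2 - y**2)

def forrat(x0, y0):
    """Initial force ratio."""
    return x0 / y0

# Hypothetical engagement (illustrative, not from Hartley's data):
# Blue starts with 60,000 and ends with 50,000; Red starts with 40,000, ends 25,000.
h = helmrat(60000, 50000, 40000, 25000)
f = forrat(60000, 40000)

# Hartley's empirical relation: ln(HELMRAT) = alpha * ln(FORRAT) + beta,
# with alpha ~ 1.35 and beta distributed about -0.22.
alpha, beta = 1.35, -0.22
predicted_ln_h = alpha * math.log(f) + beta
print(math.log(h), predicted_ln_h)
```

The gap between the two printed values is unremarkable given the quoted spread of β (standard deviation 0.7) about its mean.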
If it is assumed that the mechanism that produces this
remarkably stable relationship between casualty effects and
force ratio is of Lanchester type, then Hartley shows that it
must be of linear-logarithmic form. However, the relationship
is based on the empirical data alone, and other explanations
are possible. For example, in [15] an analysis based on
self-organisation (in particular the forest fire model of Chapter
2) is put forward as the basis for the equally remarkable scaling
of conflict size. It is thus plausible that such complexity-based
effects, rather than a Lanchester process, lie at the base of the
scaling relationship established by Hartley. This analysis, by
Turcotte and Roberts, is discussed next.
In their paper [15], they begin by comparing the predictions
of the theoretical self-organising forest fire model with the sta-
tistics of the relative sizes of real forest fires. Four data sets are
considered: 4,284 forest fires in the USA Fish and Wildlife
Service Lands during the period 1986-1995; 120 of the largest
fire areas in the western USA from tree ring data, spanning
the period 1155-1960 (800 years); 164 fires in the Alaskan
boreal forests during 1990-1991, and 298 fires in the Austra-
lian Capital Territory during 1926-1991. The results are in
good agreement with a power law statistical distribution of size
of fire versus frequency, with a power law exponent of
between 1.3 and 1.5. The remarkable thing is the stability of
the trend across such a long period of time, during which tech-
nology has changed, as have ways of fighting such fires. The
authors then show that a similar power law relationship (also
with an exponent in the same range) holds for the intensity of
conflict versus its frequency. This work extends the research of
Richardson [16], who also showed a power law relationship
REFERENCES
1 LTG TUKER F (1948). The Pattern of War. Cassell. UK.
2 MOFFAT J (2002). Command and Control in the Information Age: Representing its
Impact. The Stationery Office. London, UK.
3 KUHN G W S (1989). “Ground Force Casualty Patterns: The Empirical
Evidence.” Report FP703TR1.
4 WAYLAND R, BROMLEY D, PICKETT D, and PASSAMANTE A
(1993). Physical Review Letters. 70. p. 580.
5 LAERI F (1990). Computational Physics. 4. p. 627.
6 LAUREN M and STEPHEN R T. “Fractals and Combat Modelling: Using
MANA to Explore the Role of Entropy in Complexity Science.” Paper
prepared for Fractals. Defence Technology Agency. Auckland, New
Zealand.
7 ROWLAND D, KEYS M C, and STEPHENS A B (1994). “Breakthrough
and Manoeuvre Operations (Historical Analysis of the Conditions for
Success) Annex I, Irruption.” Unpublished DOAC Report, Annex I.
8 ROWLAND D, SPEIGHT L R, and KEYS M C (1996). “Manoeuvre
Warfare: Some Conditions Associated with Success at the Operational
Level.” Military Operations Research. 2 No 3. pp. 5-16.
CHAPTER 4
MATHEMATICAL
MODELLING OF
COMPLEXITY,
KNOWLEDGE,
AND CONFLICT
INTRODUCTION
a = f (a1 ,...ak , b1 , b2 )
(This is easily generalised to an arbitrary number of bs.) The
arguments a1 ,...ak have independent dimensions. That is, the
dimension of any a cannot be expressed as a combination of
the dimensions of the other as. In contrast, the dimension of
each b variable can be expressed as such a combination. The
arguments can be transformed using a gauge transformation
so that:
a = f(a1, ..., ak, b1, b2) = a1^p ... ak^r Φ( b1 / (a1^{p1} ... ak^{r1}), b2 / (a1^{p2} ... ak^{r2}) )

= a1^p ... ak^r Φ(Π1, Π2)     (1)

where Π1 = b1 / (a1^{p1} ... ak^{r1}) and Π2 = b2 / (a1^{p2} ... ak^{r2}).

Φ(Π1, Π2) = Π2^{α1} Φ̃( Π1 / Π2^{α2} )     (2)
tial clustering of the agents, and the attrition that they inflict
on the opponent. For such a distillation, a metamodel of type 2
applies [1] that allows us to relate the attrition rate for one side
to the clustering dynamics of the opposing side, as measured
by the mean fractal dimension of these clustering agents. As a
simple example (given in [1]), assume that the command
process, say for Red, is represented by the following effects:
1. The number of discrete clusters of Red agents at time t,
N(t), is specified ahead of the simulation.
2. N(t) is a decreasing function of t.
These assumptions are meant to suggest that the number of
Red clusters decreases in time, reflecting the desire of Red to
concentrate force. With these assumptions let us further
assume that the smallest cluster of Red agents, X(t), at time t, is
taken and added to another randomly chosen cluster of Red
agents. This process thus represents both the concentration of
Red force and the reconstitution of force elements.
Let us now define ϕ ( x, t ) = (expected number of clusters of Red
agents ≥ size x at time t)/(initial total number of clusters of Red
agents) and N(t)=(the total number of remaining clusters of
Red agents at time t)/(initial total number of clusters). Given
the assumptions and definitions above, it can then be shown
that ϕ ( x, t ) , the cumulative distribution of cluster sizes at time
t, approaches a self-similar distribution as time progresses
(i.e., a scaling collapse takes place). Thus the cluster size distri-
bution evolves over time by a scaling relation. ϕ ( x, t ) can then
be represented in the self-similar form:
ϕ(x, t) = g( x / X(t) ) / X(t)
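The concentration process just described (remove the smallest Red cluster and add its agents to a randomly chosen survivor) is straightforward to simulate. The sketch below uses illustrative parameters only (400 initial unit-size clusters, 300 merge steps); the total Red force is conserved while the number of clusters falls by one per step, and the normalised cumulative distribution ϕ(x, t) can be tracked directly:

```python
import random

def concentrate(n_clusters=400, steps=300, seed=2):
    """Repeatedly take the smallest Red cluster and add it to a randomly
    chosen remaining cluster, as in the concentration-of-force process."""
    rng = random.Random(seed)
    clusters = [1] * n_clusters  # initial unit-size clusters
    history = []
    for _ in range(steps):
        clusters.sort()
        smallest = clusters.pop(0)        # X(t): the smallest cluster
        j = rng.randrange(len(clusters))  # random surviving cluster
        clusters[j] += smallest
        history.append(list(clusters))
    return clusters, history

def phi(clusters, x, n0):
    """Cumulative distribution: number of clusters of size >= x, normalised
    by the initial total number of clusters n0."""
    return sum(1 for c in clusters if c >= x) / n0

n0 = 400
clusters, history = concentrate(n_clusters=n0)
print(len(clusters), sum(clusters), phi(clusters, 2, n0))
```

Plotting ϕ(x, t) against x / X(t) at successive times (not shown here) is the natural way to look for the scaling collapse described in the text.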
ϕ ( x, t ) = (1 + bδ )ϕ ( x, (1 − δ )t ) .
It then follows that:
log ϕ ( x, t ) = b log t + c
for some constant c, and the normalised cluster size distribution
at time t, ϕ(x, t), varies as a power-law with increasing time t
and scaling constant b.
If ΔB is the change in the number of Blue agents, Lauren [3]
has shown that:
ΔB/Δt is proportional to the product of (Red unit effectiveness)x(the
probability of meeting a Red cluster)x(the expected number of
Red units per cluster). It is assumed that unit effectiveness is
constant. Keeping the cluster size constant for the moment,
this indicates [3] that the rate of change of Blue agents is given
by an expression of the form:
ΔB/Δt = k^{q(D)} Δt^{r(D)}
where D is the average fractal dimension of Red (and therefore
an indication of how Red clusters/collaborates locally) and
both r and q are exponents. This equation is a form of
Lanchester law where the rate constant is dependent upon the
clustering of Red agents. If Red cluster size varies according to:
ϕ(x, t) = g( x / X(t) ) / X(t)
where x is the cluster size and X(t) the smallest cluster at time t,
then we can write:
ΔB/Δt = k^{q(D)} Δt^{r(D)} N(t) g(y(t))
where N(t), inversely related to X(t), is the normalised number
of clusters of Red at time t and g(y(t)) is the mean of the distri-
bution of cluster size, which evolves as a power law (as we have
shown in certain cases).
R_{b,φ} u(x, t) = (1/Z(b)) u(b^φ x, bt)

From the group property, R_{a,φ} R_{b,φ} = R_{ab,φ}, it follows
that Z(a)Z(b) = Z(ab), and thus Z(b) = b^α for some exponent α.

If u*(x, t) is a fixed point of the renormalisation, then:

u*(x, t) = b^{−α} u*(b^φ x, bt) for all b.

Choosing b = 1/t, we obtain:

u*(x, t) = t^α ũ*( x / t^φ ).
p = N(l)/N = l^{−D} / (A/l²) = l^{2−D}/A.
Note that D always lies between 0 and 2, so that p is well
defined.
In discussion with senior UK commanders who have had
recent operational experience at a high level, the concept of
control of an area as corresponding to the prevention of flow
through an area (flow in terms of an opposing force, or per-
haps some third party) has been endorsed as a good analogy.
We thus define the commander as having “weak control” of
his area of operations if he can to some extent control move-
p = l^{2−D}/A
of a unit controlling each of the squares of side l. We consider
each of the five different classes of configuration for this cell, as
shown in Figure 4.2.
In Figure 4.2, we show the five classes a to e of configuration,
and mark beside each case whether this gives weak or strong
control, by considering the span of controlled areas.
point between these that is different for strong and weak
control. This was calculated to be 0.382 for weak control, and
0.768 for strong control.
p0 = l^{2−D}/A.
For side length L of the AO there will be a corresponding
value of iteration order n such that 2^n l ≅ L. By using the recur-
sive scheme above, we can calculate for this value of n the
corresponding probability of weak or strong control of the
AO. Consideration of Figures 4.3 and 4.4 indicates that there
is a critical value of the probability:
p0 = l^{2−D}/A
such that values above p0 polarise towards very good control,
whereas values below p0 polarise towards very poor control. In
fact, the point p0 corresponds to a phase change in the behav-
iour of such a system.
Examination of Figures 4.3 and 4.4 shows that it is easier to
iterate towards good weak control than towards good strong
control, as we would expect (since weak control is easier to
achieve than strong control). Figure 4.5 shows how this itera-
tion works for a starting probability of 0.65 and the
requirement of weak control.
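The iteration can be sketched in code. The five configuration classes of Figure 4.2 are not reproduced here, so the two combination rules below are assumptions, chosen because they reproduce the quoted critical values: a 2x2 cell gives weak control if each of its two columns contains at least one controlled square (p' = (2p − p²)²), and strong control if at least three of the four squares are controlled (p' = p⁴ + 4p³(1 − p)). Both maps have unstable fixed points, at approximately 0.382 and 0.768 respectively, and a starting probability of 0.65 iterates rapidly towards good weak control:

```python
def weak_step(p):
    """Assumed rule for weak control of a 2x2 cell: each of the two
    columns contains at least one controlled square."""
    return (2 * p - p * p) ** 2

def strong_step(p):
    """Assumed rule for strong control: at least three of the four
    squares are controlled."""
    return p**4 + 4 * p**3 * (1 - p)

def critical_point(step, lo=0.05, hi=0.95, iters=80):
    """Bisect for the unstable fixed point p0 with step(p0) = p0:
    below p0 the map drives p towards 0, above it towards 1."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if step(mid) < mid:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def iterate(step, p, n):
    """Apply the renormalisation map n times (one application per
    doubling of cell side, so n ~ log2(L / l))."""
    for _ in range(n):
        p = step(p)
    return p

print(round(critical_point(weak_step), 3))    # -> 0.382
print(round(critical_point(strong_step), 3))  # -> 0.768
print(iterate(weak_step, 0.65, 8))            # polarises towards 1
```

Starting below the critical point instead (say at 0.3) polarises towards 0, illustrating the phase change described in the text.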
LOCKOUT
From a game theoretic perspective, we can see that each side is
trying to drive its own value of control up, and the other side’s
down. The analysis above indicates that there should be rapid
lockout, i.e. one side should rapidly gain control and lock the
other side out.
WARGAME STRUCTURE
There are three major problems with the use of wargames to
support military studies: (1) too little output data, (2) the likeli-
hood of atypical results, and (3) oversimplification. The first
problem stems from the fact that wargames are generally slow,
cumbersome, and resource intensive. Consequently, most ana-
lysts who use them to support studies plan only a small number
of games, thus precluding significant statistical results. The
second problem recognises the possibility that the sequence of
decisions taken by the players in these games represents statisti-
cal outliers. Players may adopt extreme strategies that exist
“outside” of what is considered to be a typical military
response. The third problem reflects the fact that human play-
ers can only approximate the results of combat operations. In
our studies, we addressed these problems in three ways: by
arguing that our wargames are quasi-memoryless processes for tac-
tical situation assessment; by introducing the epitomising strategy
principle in wargames; and by embedding computer models to
adjudicate engagements in the manual games. We discuss the
first two of these concepts next.
2FASTHEX uses a hexagonal game board much like IDAHEX. For these
games, each hexagon is 7.5 km from face to face.
max over B(t) ∈ BG, min over R(t) ∈ RG, of (P),

subject to the transition constraint:
where:
0 ≤ α i1 ≤ 1 is the effect of Blue/Red artillery against Red/Blue
artillery; and
0 ≤ α i 2 ≤ 1 is the effect of Blue/Red artillery against Red/Blue
tanks.
The α_ij's can be thought of as single shot kill probabilities
(SSKPs), and bj(t)x1(t) and rj(t)y1(t) represent the number of
Blue/Red artillery allocated to Red/Blue artillery and tanks.
Therefore the transition equations become:
x1 (t + 1) = x1 (t ) − α 21r1 (t ) y1 (t )
x2 (t + 1) = x2 (t ) − α 22 r2 (t ) y1 (t )
y1 (t + 1) = y1 (t ) − α11b1 (t )x1 (t )
y2 (t + 1) = y2 (t ) − α12b2 (t )x1 (t )
P = Σ_{t=0}^{1} [x2(t) − y2(t)] + 0.9[x2(2) − y2(2)] + 0.1[x1(2) − y1(2)].
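The SSKP values, force levels, and allocation fractions below are hypothetical, chosen only to illustrate how the transition equations generate the payoff P over a two-step game:

```python
def play(x, y, alloc_b, alloc_r, a):
    """One transition step of the artillery/tank game.
    x = (x1, x2): Blue artillery and tanks; y = (y1, y2): Red artillery and
    tanks. alloc_b[j] is the fraction of Blue artillery firing at Red target
    j (similarly alloc_r); a = (a11, a12, a21, a22) are the SSKPs."""
    x1, x2 = x
    y1, y2 = y
    a11, a12, a21, a22 = a
    x1n = x1 - a21 * alloc_r[0] * y1
    x2n = x2 - a22 * alloc_r[1] * y1
    y1n = y1 - a11 * alloc_b[0] * x1
    y2n = y2 - a12 * alloc_b[1] * x1
    return (x1n, x2n), (y1n, y2n)

# Hypothetical two-step game: equal starting forces, Blue slightly better SSKPs.
x, y = (100.0, 200.0), (100.0, 200.0)
a = (0.3, 0.2, 0.25, 0.15)
states = [(x, y)]
for _ in range(2):
    x, y = play(x, y, alloc_b=(0.5, 0.5), alloc_r=(0.5, 0.5), a=a)
    states.append((x, y))

# P = sum_{t=0}^{1} [x2(t) - y2(t)] + 0.9[x2(2) - y2(2)] + 0.1[x1(2) - y1(2)]
P = sum(states[t][0][1] - states[t][1][1] for t in range(2))
P += 0.9 * (states[2][0][1] - states[2][1][1]) + 0.1 * (states[2][0][0] - states[2][1][0])
print(P)
```

In the full game-theoretic setting the allocations would be chosen by the max-min optimisation above rather than fixed in advance.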
DECISION UNCERTAINTY
ASTOR’s primary function is to contribute to tactical situa-
tion assessment by observing the battlefield, detecting and
identifying enemy units, and reporting on its findings. Conse-
quently, a metric designed to measure how well situation
assessment is accomplished in all cases tested was seen as use-
ful to this study of the ASTOR sensor system. Such a metric
allows us to measure the degree of confidence the commander has that
he possesses an accurate picture of the battlefield in his area of interest.
We would expect that the greater his knowledge about the
location, size, and composition of the enemy force, the greater
his confidence in making decisions concerning the allocation
of his weapons and the movement of his forces. We also recog-
nise that information of this type is not all he would require.
Information concerning enemy intent gleaned from COM-
INT, SIGINT, and known enemy fighting doctrine would also
assist in completing the picture.
PROBABILITY DISTRIBUTION
We begin by letting the vector U represent the competing
hypotheses that any number of enemy units are arrayed
against the friendly commander at time cycle t so that
U = {0,1,2,...,n}. Given the level of resolution for the ASTOR
games, a unit was taken to be a battalion. We omit the cycle
index, t, for now focusing instead on analysis within a timestep.
The term arrayed against is taken to indicate the units located on
the battlefield in some area of interest to the friendly com-
mander. This may mean along some avenue of approach in a
defensive operation or blocking a route of advance in an offen-
sive operation. Figure 4.10 depicts a notional defensive
campaign situation.
We assume that the friendly commander knows the number of
enemy units that might be brought to bear against him during
the campaign. That is, we assume that he knows n. This is a rea-
sonable assumption in that it is highly likely that the Intelligence
Preparation of the Battlefield (IPB) would yield this informa-
tion. What is unknown is the tactical deployment of the units at
each timestep. Tactical situation assessment then is taken to be
the process of estimating the enemy’s tactical deployment at
time t and the effectiveness of this estimate is the degree of
uncertainty associated with his current state of knowledge.
BAYESIAN DECISIONMAKING
We begin by analysing the intelligence gathering process at
each timestep. We first assume that a Bayesian update meth-
odology for tactical situation assessment is appropriate within
a wargame cycle, but not between wargame cycles, given the
assumptions concerning the Markov properties (i.e., lack of
memory) of the FASTHEX game with 2-hour timesteps.4
Consequently, the process described here is repeated prior to
each decision to commit forces.
1. INPUT DISTRIBUTION: The friendly commander may
or may not have some idea of the likely disposition of
enemy units. If he does, we may describe it using an
empirical distribution. However, for this analysis, we
assume that the friendly commander is completely igno-
5By “detect” we mean that sufficient information is provided to allow the
unit to be targeted by a weapon.
6This assumption can be relaxed to allow for the characterisation of a
multisensor suite, provided that the sensors are independent.
P(V = v | U = u) = (u choose v) q^v (1 − q)^{u−v} for v ≤ u, and 0 otherwise.
3. SENSOR OPERATIONS: Our objective is to clarify the
enemy force deployment picture based on the sensor
observations by refining the friendly commander’s ini-
tial and subsequent probability distributions on U. That
is, we wish to calculate P(U = u | V = vd ) , where vd is the
number of detections reported in the cycle, and thus
assess the impact of the evidence provided by the sensor
on our estimate of the number of enemy units arrayed
against the friendly forces in the area of interest. Opera-
tionally, we assume that the sensor sweeps the area of
interest once in a cycle. As a detection occurs, it is
immediately reported so that there are vd+1 reports
from the sensor per cycle. The additional report
accounts for the fact that a report of 0 detections is sent
initially. Since it is impossible to control the time when
detections occur within a FASTHEX game cycle, we
assume a uniform distribution of reports. That is, a
report of no detections occurs at time t/(vd+1), a report
of one occurs at 2t/(vd+1), etc. The estimate is refined at
every subinterval using Bayes’ formula as follows:
P(U = u | V = v) = [ P(U = u | V = v − 1) P(V = v | U = u) ] / [ Σ_{i=0}^{n} P(U = i | V = v − 1) P(V = v | U = i) ]     (2)
7We later relax this assumption by allowing for the possibility that the
sensor detections/identifications are false, that the command and control
system used to transmit the sensor information may report a false
detection/identification as real, and that the intelligence processing centre
may interpret a false detection/identification as real.
P(U = u | V = v) = [ P(U = u | V = v − 1) (u choose v) q^v (1 − q)^{u−v} ] / [ Σ_{i=0}^{n} P(U = i | V = v − 1) (i choose v) q^v (1 − q)^{i−v} ]

= [ P(U = u | V = v − 1) (u choose v) (1 − q)^u ] / [ Σ_{i=v}^{n} P(U = i | V = v − 1) (i choose v) (1 − q)^i ],     (3)
where v = 0,1,...,vd is the number of units detected by the
sensor and u ≥ v at each iteration. Figure 4.11 depicts the
process diagrammatically. Note the difference between no
sensor sweep in progress and a report of no detections.
The former is depicted by a flat probability distribution on
U whereas the latter is a refinement to the flat distribution.
A SIMPLE EXAMPLE
The following illustrates the process. Table 4.1 summarises the
results of a simple situation in which three units are known to
be available to the enemy commander. The sensor system has
a probability of detection/identification of q = 0.8. The entries
in the rows are the refined probabilities from 0, 1, 2, and 3
detections. The first row is the a priori probability assessment
on U assuming initial total ignorance. Figure 4.12 depicts the
results graphically.
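A minimal implementation of this update, reproducing the simple example (n = 3 known enemy battalions, sensor detection probability q = 0.8, flat a priori distribution), is sketched below. Each successive detection report refines the distribution using the collapsed form of equation (3):

```python
from math import comb

def bayes_update(prior, v, q):
    """Refine P(U = u) after the sensor reports v cumulative detections,
    using P(U=u|V=v) proportional to P(U=u|V=v-1) * C(u, v) * (1-q)^u,
    the collapsed form of equation (3)."""
    n = len(prior) - 1
    weights = [prior[u] * comb(u, v) * (1 - q) ** u if u >= v else 0.0
               for u in range(n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

n, q = 3, 0.8
dist = [1.0 / (n + 1)] * (n + 1)  # a priori: total ignorance
print([round(p, 3) for p in dist])
for v in range(1, 4):  # reports of 1, 2, then 3 detections
    dist = bayes_update(dist, v, q)
    print(v, [round(p, 3) for p in dist])
```

After the first detection the probability mass shifts sharply towards small u (a reliable sensor that reports only one unit makes large hidden forces unlikely), and after three detections the distribution collapses onto u = 3.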
MULTIPLE SWEEPS
We now refine the analysis to show the value that multiple sen-
sor sweeps within the same cycle have on refining the
probability estimates for U. Suppose that we assume that our
sensors are capable of k sweeps of the area of interest within
the commander’s decision cycle. That is, the sensor can per-
form k sweeps of the area of interest before the enemy
P(U = u | V = vd(i−1)) (u choose vdi) (1 − q)^u
In general, Bayesian updating has a tendency to converge
rather rapidly–especially in cases such as this where false
detections/identifications are not allowed: that is, it is impossi-
ble to overstate the number of units actually present. The
effect is that subsequent detections that report fewer units than
the previous are totally ignored. To illustrate, consider a sim-
ple case in which n = 3 units. We assume that three sweeps
were conducted resulting in three sequential detections using a
sensor with probability of detection: q = 0.8. Table 4.2 sum-
marises the results of applying equation (5) with k = 3. The
number of units detected each time is listed in the table. The
number of units in the area of interest is actually three and
subsequent observations that two units were detected/identi-
fied are completely ignored.
KNOWLEDGE REPRESENTATION
It now remains to ascertain the degree of uncertainty existing
in the mind of the friendly commander at the time he must
take a decision on the employment of his forces. His current
knowledge consists of two components: (1) the fact that his sen-
sor suite detected a number of enemy units in his area of inter-
est; and (2) the refined probability distribution over the
possible number of enemy units that might be in his area of
interest based on his most recent sensor report. The value of
the first component depends upon whether false detections are
possible. The second depends upon the number of enemy
units detected and the reliability of the sensor system. The task
is to develop a knowledge metric that incorporates these two com-
ponents, thereby quantifying the likelihood that the com-
mander has a true picture of the number of enemy units
arrayed against him in his area of interest.
INFORMATION ENTROPY
We draw on information science to develop a knowledge met-
ric that is a function of the average information present in the
set of all possible uncertain events. This quantity is referred to
I(U = u) = ln[ 1/P(U = u) ] = −ln P(U = u).9
If we now consider all of the events in the refined set
U | V = vd , we reason that each occurs with probability
P(U = u | V = vd ). Therefore, the information available from
the occurrence of each event is:
I (U = u | V = vd ) = − ln P(U = u | V = vd ) ,
and the expected information from the occurrence of each
event is:
8The term entropy is used because the information entropy function is the
same as that used in statistical mechanics for the thermodynamic quantity
entropy. For a more complete discussion of entropy, see Blahut [17] and
Zurek, ed. [18].
9In communication theory, the units of measurement are “bits” if base 2
logarithms are used and “nits” if natural logarithms are used (see
Kullback [19] p. 7). For our purposes, we will assume a dimensionless
quantity.
H(U1, U2, ..., Um) ≤ Σ_{i=1}^{m} H(Ui).
K (U ,V = vd ) = K (U | V = vd )K (V = vd )
10K(U, V = vd) satisfies the probability axioms (see Stark and Woods [13]
p. 9 for instance) and therefore can be thought of as a subjective
probability.
I(U ≥ vd | V = vd) = −ln[ P(U ≥ vd | V = vd) ] = −ln[ Σ_{i=vd}^{n} P(U = i | V = vd) ]
If vd = 0, we get no information because P(U ≥ 0) = 1.
However, if vd = n, the information content is maximised at
−ln[ P(U = n | V = vd) ]. This is due to the fact that
P(U ≥ u | V = vd) decreases monotonically with u.
K(V = vd) = ln[ Σ_{i=vd}^{n} P(U = i | V = vd − 1) ] / ln[ P(U = n | V = vd − 1) ].
(We use vd–1 to ensure that the denominator never goes to
zero).
The total knowledge gained is then defined to be the product
of residual and detection knowledge:
K(U, V = vd) = [ ( ln(n + 1) − H(U | V = vd) ) / ln(n + 1) ] · [ ln( Σ_{i=vd}^{n} P(U = i | V = vd − 1) ) / ln P(U = n | V = vd − 1) ]     (7)
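Continuing the same example, the knowledge metric of equation (7) can be computed directly. The refined distribution used below is the result of one detection report (n = 3, q = 0.8, flat prior), computed from equation (3) to three decimal places; ln(n + 1) normalises against the entropy of the flat distribution:

```python
import math

def entropy(dist):
    """Shannon entropy H = -sum p ln p (taking 0 ln 0 = 0)."""
    return -sum(p * math.log(p) for p in dist if p > 0)

def total_knowledge(refined, prior, vd, n):
    """Equation (7): residual knowledge times detection knowledge.
    `refined` is P(U=.|V=vd); `prior` is P(U=.|V=vd-1)."""
    residual = (math.log(n + 1) - entropy(refined)) / math.log(n + 1)
    detection = math.log(sum(prior[vd:])) / math.log(prior[n])
    return residual, detection, residual * detection

# Example: n = 3, q = 0.8, one detection reported (vd = 1).
n = 3
prior = [0.25, 0.25, 0.25, 0.25]          # flat a priori distribution
refined = [0.0, 0.658, 0.263, 0.079]      # P(U=.|V=1) from equation (3), 3 d.p.
residual, detection, k = total_knowledge(refined, prior, vd=1, n=n)
print(round(residual, 3), round(detection, 3), round(k, 3))
```

Both factors lie between 0 and 1, so the total knowledge K behaves as a subjective probability, as footnote 10 notes.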
CAMPAIGN KNOWLEDGE
A similar formulation may now be used to calculate campaign
knowledge, given that the FASTHEX games are taken to be
memoryless processes for tactical situation assessment. Con-
sider a campaign consisting of m cycles. At each cycle, t, vdt
enemy units are detected by the sensor. At each cycle, the
number of possible enemy units arrayed against the friendly
forces, n, is likely to be reduced as a result of combat during the
cycle so that nt is the total number of enemy forces that might
be arrayed against the friendly forces in the area of interest.
H (U t | V = vdt ) then represents the residual uncertainty at each
cycle and the total campaign uncertainty is expressed as:
K = [ Σ_{t=1}^{m} ( ln(n_t + 1) − H(U_t | V = v_dt) ) / Σ_{t=1}^{m} ln(n_t + 1) ] · [ Σ_{t=1}^{m} ln( Σ_{i=v_dt}^{n_t} P(U_t = i | V = v_dt − 1) ) / Σ_{t=1}^{m} ln P(U_t = n_t | V = v_dt − 1) ].
AN EXAMPLE
Consider the example summarised in Table 4.5. The campaign
consists of 5 cycles. At each cycle t, the detection probability q_t, the number of units detected v_dt, and the maximum size n_t of the enemy force are given. The last three columns depict the residual
uncertainty in the refined probability distribution, the informa-
tion available from the detection of vdt enemy units, and the
“probability” that the commander has an accurate picture of
the number of enemy units in his area of interest. The last row
reflects his total campaign knowledge.
In this example:
T = max(1/λ_1, 1/λ_2) + 1/λ_3 + 1/λ_4 + t_m.
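This composition rule — parallel branches contribute the slowest branch's mean processing time, serial nodes add theirs, plus the decisionmaking time t_m — can be sketched as follows (function name ours; note that, as in the expression above, it takes the maximum of the mean times rather than the mean of the maximum):

```python
def expected_response_time(parallel_rates, serial_rates, t_m):
    """Critical-path time: the slowest parallel branch (largest mean 1/lambda),
    plus the means of the serial nodes, plus the decisionmaking time t_m."""
    parallel = max(1.0 / lam for lam in parallel_rates)
    serial = sum(1.0 / lam for lam in serial_rates)
    return parallel + serial + t_m
```

For example, with λ_1 = 2, λ_2 = 4 in parallel, λ_3 = 1, λ_4 = 5 in series, and t_m = 0.5, the critical-path time is max(0.5, 0.25) + 1.0 + 0.2 + 0.5.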
Returning now to the case of a serial set of nodes that consti-
tute the critical path, for each such node i on the critical path
define the indegree di to be the number of command and control
(C2) network edges having i as a terminal link.
For each node j in the C2 network, we assume (based on our
earlier discussion of information entropy and knowledge) that
the amount of knowledge available at node j concerning its
ability to process the information and provide quality collabo-
ration is a function of the uncertainty in the distribution of
information processing time fj(t) at node j. Thus the more we
know about node j processes, the better the quality of collabo-
ration with node j.
Let Hj(t) be the Shannon entropy of the function fj(t). Then
Hj(t) is a measure of this (residual) uncertainty defined in terms
of a lack of knowledge. By definition of the Shannon entropy,
we have:
H_j(t) = −∫_0^∞ ln(λ_j e^{−λ_j t}) λ_j e^{−λ_j t} dt = 1 − ln λ_j = ln(e/λ_j).

The largest such residual uncertainty corresponds to the smallest admissible processing rate λ_jmin, giving the value ln(e/λ_jmin).
K_j(t) = ln(e/λ_jmin) − ln(e/λ_j)
       = ln(λ_j/λ_jmin)   if λ_jmin ≤ λ_j ≤ e·λ_jmin
       = 0                if λ_j < λ_jmin
       = 1                if λ_j > e·λ_jmin
T_c,C = (T_c − t_m) / (1 − g(C)) + t_m.
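Both quantities are straightforward to compute. In the sketch below (function names are ours), the collaboration term g(C) is supplied directly, since its derivation from the collaboration structure of the network is not reproduced here:

```python
import math

def node_knowledge(lam, lam_min):
    """K_j: 0 below lam_min, ln(lam/lam_min) between lam_min and e*lam_min,
    and 1 above e*lam_min."""
    if lam < lam_min:
        return 0.0
    if lam > math.e * lam_min:
        return 1.0
    return math.log(lam / lam_min)

def collaborative_response_time(t_c, t_m, g):
    """T_c,C = (T_c - t_m)/(1 - g(C)) + t_m, assuming 0 <= g < 1."""
    return (t_c - t_m) / (1.0 - g) + t_m
```

The clamping of K_j to [0, 1] mirrors the three cases above: a node slower than the minimum rate contributes nothing, and beyond e·λ_jmin further speed adds no more knowledge.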
NETWORK-CENTRIC BENEFIT
This network-enabled approach thus allows us to compute the
distribution of the response time of the system as a function of
the network assumptions. As we increase the collaboration throughout the network in going from platform-centric to network-centric to futuristic network-centric (to use the RAND categories [9]), the positive effects of enhanced collaboration must be balanced against the downside effects of information overload and increasing network complexity.
Going back to the discussion in Chapter 2 on the Conceptual
Framework of Complexity, we can call this overall assessed
performance of the network the plecticity12 of the network,
since it characterises the combined positive and negative
effects of network complexity and collaboration.
12 A term proposed by Perry (RAND Corp.), personal communication.
It follows from the theory we have considered so far that the size of the
network (i.e., the number of nodes on the critical path) should
be sampled from a power-law distribution of network size. The
exponent of this power-law is then a characteristic measure of
the ability of the nodes in the network to form and reform
dynamically over time. Similarly, we can consider the indegree
of a node on the network to be a stochastic variable. If the
indegree of a node is sampled from a power-law distribution of
the number of links, then the network is said to be “scale-free”
[20]. This corresponds to a network with a small number of
nodes with very rich connections, and many nodes with sparse
connections. (The Internet is an example of a scale-free net-
work.) Conversely, if the indegree of a node is sampled from a
normal distribution of the number of links, then the network is
of “random” type. Characterising networks in this way allows
us to investigate the vulnerability of such networks to attack, as
discussed in [20].
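The contrast between the two indegree regimes can be illustrated by sampling. The sketch below (parameters and names are illustrative, not from the original) draws indegrees from a Pareto-type power law and from a normal distribution; the power-law sample produces a few richly connected hubs, while the normal sample stays close to its mean:

```python
import random

def powerlaw_indegree(k_min, gamma, rng):
    """Inverse-transform sample from a continuous Pareto tail
    P(K > k) = (k_min / k)^(gamma - 1), truncated to an integer."""
    u = rng.random()
    return int(k_min * (1.0 - u) ** (-1.0 / (gamma - 1)))

def normal_indegree(mean, sd, rng):
    """'Random'-type network: indegrees cluster around the mean."""
    return max(0, int(round(rng.gauss(mean, sd))))

rng = random.Random(42)
scale_free = [powerlaw_indegree(2, 2.5, rng) for _ in range(10_000)]
random_net = [normal_indegree(6, 2, rng) for _ in range(10_000)]

# The power-law sample contains a few hubs with very many links.
print("scale-free max indegree:", max(scale_free))
print("random-type max indegree:", max(random_net))
```

It is the presence of those few hubs that makes a scale-free network robust to random node loss yet vulnerable to the targeted attacks discussed in [20].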
REFERENCES
1 MOFFAT J (2002). Command and Control in the Information Age: Representing its
Impact. The Stationery Office. London, UK.
2 HORNE G E and LEONARDI M (2001). Manoeuver Warfare Science 2001.
Marine Corps Combat Development Command. Quantico, VA, USA.
3 LAUREN M (2002). “Firepower Concentration in Cellular Automaton
Combat Models – An Alternative to Lanchester.” J Opl Res. Soc 53 Issue 6,
pp. 672-679.
4 www.cna.org/isaac (Aug 1, 2003)
5 GOLDENFELD N (1992). Lectures on Phase Transitions and the Renormalisation
Group. Addison-Wesley. MA, USA.
6 DARILEK R, PERRY W et al (2001). Measures of Effectiveness for the
Information Age Army. RAND. Santa Monica, CA, USA.
7 TURCOTTE D L (1997). Fractals and Chaos in Geology and Geophysics. 2nd
Edn. Cambridge University Press. Cambridge, UK.
8 http://fafnir.phyast.pitt.edu/myjava/perc/percTesT.html
(February 1, 2003)
9 PERRY W, BUTTON R W et al (2002). Measures of Effectiveness for the
Information-Age Navy: The Effects of Network-Centric Operations on Combat Outcomes.
RAND. Santa Monica, CA, USA.
10 ROSKE V (2002). “Opening Up Military Analysis: Exploring Beyond the
Boundaries.” Phalanx. USA Military Operations Research Society. 35 No 2.
11 PERRY W and MOFFAT J (1997). “Measuring the Effects of Knowledge in
Military Campaigns.” J Opl Res. Soc 48. pp. 965-972.
12 BOWEN K C (1978). Research Games – An Approach to the Study of Decision
Processes. Taylor and Francis, UK.
13 STARK H and WOODS J W (1986). Probability, Random Processes and
Estimation Theory for Engineers. Prentice Hall, USA.
14 BRYSON A E and HO Y C (1975). Applied Optimal Control. Hemisphere
Publishing, USA.
15 HILLESTAD R (1986). SAGE: An Algorithm for the Allocation of Resources in a
Conflict Model. RAND working draft.
16 BERKOVITZ D and DRESHER M (1959). A Game-theory Analysis of Tactical
Air War. Operations Research, 17. pp. 599-620.
17 BLAHUT R E (1987). Principles and Practice of Information Theory. Addison-
Wesley. MA, USA.
18 ZUREK W H ed. (1990). Complexity, Entropy and the Physics of Information. Vol
III, Santa Fe Institute Studies in the Sciences of Complexity Series. Addison
Wesley. USA.
19 KULLBACK S (1968). Information Theory and Statistics. Dover. New York,
USA.
20 COHEN D (2002). “All the World’s a Net.” New Scientist. 13 April 2002.
pp. 24-29.
CHAPTER 5

AN EXTENDED EXAMPLE OF THE DYNAMICS OF LOCAL COLLABORATION AND CLUSTERING, AND SOME FINAL THOUGHTS1

1 The contribution of Dr. Susan Witty, Dstl, to this chapter is gratefully acknowledged.
CLUSTER DISTRIBUTION
Once the cluster numbers and sizes can be determined, there
are a number of ways to analyse the data. The first that we
look at is the size of the largest cluster. This gives an indication of the agents' ability to cluster or, conversely, of their degree of dispersal. For example, if the largest cluster size is close to the total number of agents, we know that it is essentially the only cluster. However, if the largest cluster is small, then we know that the agents are dispersed in many small clusters.
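A minimal version of such a clustering computation (names ours) can be written with a union-find structure: any two agents within a chosen neighbour radius are merged into the same cluster, and the largest cluster size is then read off at each timestep. (The Hoshen-Kopelman algorithm [2] is the grid-based equivalent.)

```python
from collections import Counter

def largest_cluster(positions, radius):
    """Largest cluster size under nearest-neighbour clustering: any two
    agents within `radius` of each other belong to the same cluster."""
    parent = list(range(len(positions)))

    def find(i):
        # Walk to the root, compressing the path as we go.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Merge every pair of agents closer than the neighbour radius.
    for i, (xi, yi) in enumerate(positions):
        for j in range(i + 1, len(positions)):
            xj, yj = positions[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= radius ** 2:
                union(i, j)

    sizes = Counter(find(i) for i in range(len(positions)))
    return max(sizes.values()) if sizes else 0
```

Calling this once per timestep on the positions of the surviving Red (or Blue) agents yields the time series plotted below.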
The following plots are of the largest cluster size against the
timestep for several different replications with different ran-
dom seeds for the same basic run of the ISAAC model. The
clustering algorithm used is that of nearest neighbours and dif-
ferent plots are graphed for Red and Blue agents. The agents can be grouped by state: alive, injured, or dead. The plots that follow are for only those agents that are alive. For each
timestep, we plot the largest cluster size for Blue and the larg-
est cluster size for Red. For the first iteration of our example
Figure 5.5: Largest Cluster Size as a Function of Time (40th Iteration, Red)
Figure 5.6: Largest Cluster Size as a Function of Time (40th Iteration, Blue)
Figure 5.7: Frequency Distribution of the Largest Cluster Size for Red Agents
Figure 5.8: Frequency Distribution of the Largest Cluster Size for Blue Agents
We can see from Figure 5.8 that the spread of clusters for
Blue is much smaller in general. However, for the 40th repli-
cation, Blue is able to generate a wider spread of clusters, and
thus succeed.
FINAL THOUGHTS
We started by considering what we can learn from natural
systems: an ecosystem in which species coevolve locally; a
fluid forming an interface when it is pinned; the effect of forest
fires. All of these show regularities and emergent behaviours
of the whole system that can be captured and deduced using
mathematical models. We have also shown how the same
ideas of local coevolution within such “open” systems are very
relevant to thinking about the consequences of a network-cen-
tric form of warfare, where units coevolve (self-synchronise)
across an information grid. By exploiting this linkage, it is pos-
REFERENCES
1 ILLACHINSKI A (2000). “Irreducible Semi-Autonomous Adaptive
Combat (ISAAC): An Artificial Life Approach to Land Warfare.” Military
Operations Research. Vol 5 No 3. pp. 29-46.
2 HOSHEN J and KOPELMAN R (1976). “Percolation and cluster
distribution. I. Cluster multiple labeling technique and critical concentration
algorithm.” Phys. Rev. B14, p. 3438.
3 http://www.splorg.org/people/tobin/projects/hoshenkopelman/
hoshenkopelman.html (Aug 1, 2003)
4 LAUREN M K and STEPHEN R T (2002). “Map-Aware Non-Uniform
Automata (MANA)–A New Zealand Approach to Scenario Modelling.”
Journal of Battlefield Technology. Vol 5 No 1. pp. 27-31.
5 LAUREN M K and STEPHEN R T (2002). “Fractals and Combat
Modelling: Using MANA to Explore the Role of Entropy in Complexity
Science.” Paper prepared for Fractals. Defence Technology Agency.
Auckland, New Zealand.
2 Moffat J (2002). Command and Control in the Information Age: Representing its Impact. The Stationery Office. London, UK.
APPENDIX
OPTIMAL CONTROL
WITH A UNIQUE
CONTROL SOLUTION
dX_i/dt = F_i(X_1(t), …, X_n(t); λ_1(t), …, λ_m(t)),   (i = 1, …, n)
dt
where Fi are the rate laws, and λi (t ) are the con-
trol variables. In matrix/vector notation, we
write this as:
Ẋ(t) = F(X(t), λ(t)),   with initial conditions X_i(t_0) = X_i^0 ∀i.
We now define a process [1] that yields a necessary condition
for a vector of control variables λ (t ) to optimise the objective
function. In other words, any control vector that gives rise to
an optimal value of the objective function must satisfy this con-
dition. Although this does not guarantee that a solution λ (t )
satisfying this condition is optimal, other information (such as
the uniqueness of such a solution) can be used in particular
cases to prove that λ (t ) is indeed an optimal control vector.
The first step in this procedure is to introduce a set of “dual”
variables:
ψ_1(t), …, ψ_n(t)
that are defined by the relationships:
ψ̇_i(t) = −Σ_{j=1}^{n} ψ_j(t) ∂F_j(X, λ)/∂X_i
ψ i (t1 ) = −ci ∀i
where < > denotes the inner product of the two vectors
ψ and λ .
Thus:
H(ψ, X, λ) = Σ_{i=1}^{n} ψ_i(t) Ẋ_i(t) = Σ_{i=1}^{n} ψ_i(t) F_i(X, λ)
Ẋ_i = ∂H/∂ψ_i   ∀i
ψ̇_i = −∂H/∂X_i   ∀i
X_i(t_0) = X_i^0   ∀i
ψ_i(t_1) = −c_i   ∀i
This allows the objective function

Σ_{i=1}^{n} c_i X_i(t_1)

and its optimal value

Σ_{i=1}^{n} c_i X_i*(t_1)

to be evaluated.
LINEAR MODELS
A linear system model is defined as having a relationship of the
form:
Ẋ(t) = A(t)X(t) + B(t)λ(t) + g(t)
where, as before, < > denotes the inner product of two vec-
tors, and S and W are time-dependent vectors of known value.
Assume then that the objective function is of this form, and
that the system model is linear in the way that we have
described. Without loss of generality, we can set g(t)=0 and
write the system behaviour model in the form:
Ẋ = AX + Bλ
where X is the vector of state variables (e.g., force levels) and λ
is the vector of control variables.
Make the transformation:
X_{n+1}(t) = ∫_{t_0}^{t} ( < S, X > + < W, λ > ) dt.
Optimise X_{n+1}(t_1),
and the relationship Ẋ_{n+1}(t) = < S(t), X(t) > + < W(t), λ(t) > is
added to the set of equations describing the system behaviour.
(This can be done since the above equation is linear and so has
the same form as the others.)
Consider now the form of the Hamiltonian for such a linear
system. We have:
Let φ_j = Σ_i ψ_i B_ij.
If φ_j(t) ≤ 0, let λ_j*(t) = max{λ_j(t), λ ∈ U},
and if φ_j(t) > 0, let λ_j*(t) = min{λ_j(t), λ ∈ U}.
Then, for any admissible control λ ∈ U:

Σ_j φ_j(t) λ_j*(t) ≤ Σ_j φ_j(t) λ_j(t).

If V is an extremal control vector, it must likewise satisfy

Σ_j φ_j(t) V_j(t) ≤ Σ_j φ_j(t) λ_j(t)

so that, for each j with φ_j(t) > 0,

V_j(t) = min{λ_j(t), λ ∈ U}.
Otherwise it would be possible to define a vector giving a
smaller value of the Hamiltonian, contradicting the extremal
nature of V. It follows that every extremal vector must be of
the form λ * .
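For a scalar linear system the whole procedure can be carried out in closed form. The sketch below (names ours) takes ẋ = ax + bλ with objective c·x(t_1) and admissible controls λ ∈ [λ_lo, λ_hi]; the costate equation ψ̇ = −aψ with ψ(t_1) = −c integrates to ψ(t) = −c·e^{a(t_1 − t)}, and the sign of the switching function φ = ψb selects the extremal control according to the rule above:

```python
import math

# Scalar linear system: x' = a*x + b*lam, objective c*x(t1),
# admissible controls lam in [lam_lo, lam_hi] (the set U).
# The costate solves psi' = -a*psi with psi(t1) = -c, giving
# psi(t) = -c * exp(a*(t1 - t)); the switching function is
# phi(t) = psi(t)*b (the scalar case of phi_j = sum_i psi_i B_ij).

def extremal_control(t, a, b, c, t1, lam_lo, lam_hi):
    """Bang-bang rule: lam* = max of U when phi <= 0, min of U when phi > 0."""
    psi = -c * math.exp(a * (t1 - t))
    phi = psi * b
    return lam_hi if phi <= 0 else lam_lo

# With a, b, c > 0 the costate is negative throughout, phi < 0, and the
# extremal control sits at its upper bound for the whole interval.
```

In this scalar case the costate never changes sign, so there is no switch; in higher dimensions each component of φ can change sign at different times, producing the piecewise-constant bang-bang controls characteristic of linear problems.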
φ_j = Σ_i ψ_i B_ij.

ψ̇_i = −Σ_j ψ_j ∂(Ẋ_j)/∂X_i
    = −Σ_j ψ_j ∂/∂X_i ( Σ_l A_jl X_l + Σ_m B_jm λ_m )
    = −Σ_j ψ_j A_ji
1 It can be shown that for this type of system, an optimal control must exist [1] [2].
REFERENCES
1 CONNORS M M and TEICHROEW D (1967). Optimal Control of Dynamic
Operations Research Models. International Textbook Co. Pennsylvania, USA.
2 ROZONOER L T (1959). L.S. Pontryagin’s Maximum Function Principle in its
Application to the Theory of Optimum Systems–I, II, III. Avtomatika i
Telemikhanika 20. p. 1320 et seq. Translated in the journal Automation and
Remote Control (1959). 20. p. 1288 et seq.