\section*{Written}

\begin{enumerate}
\item {} % 1
I would equip the robots with simple sensors for detecting radioactivity at a
specific location (such as a digital Geiger counter). I will assume that the
robots have no other means of control or inter-communication. The large number
of robots will go into the plant with a small number of pellets, drop them at
some location, and return for more pellets. When a robot enters the plant for
the first time, it will travel randomly within it, for two reasons: first,
because it is initially unknown where the highly radioactive areas are, and
second, because it is unknown how radioactive the most radioactive areas are.
The robot travels randomly for a set period of time to determine an average
radioactivity for the area. It then continues moving randomly until it finds an
area that is more radioactive than the average it has found. When it has found
such a location, it will drop its pellets, thus reducing the overall average
radioactivity for the area it has seen, and leave for more pellets. The entire
time a robot is moving, it continually updates its running average, giving it
better knowledge of the true radioactivity of the space. As each robot places
its pellets, the total actual average radioactivity of the plant will decrease.
Once the robots find no place within the plant whose radioactivity exceeds the
allowable maximum within a given amount of time, they will leave and the plant
will be considered safe.

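The per-robot rule described above can be sketched in a few lines. This is a
minimal illustration of the running-average drop rule only; the class and
method names are mine, and the sensing and movement hardware is assumed to
exist behind whatever interface feeds readings into \texttt{record}:

```python
class PelletRobot:
    """Sketch of the drop rule: wander, track a running average of
    radioactivity readings, and drop pellets wherever the local
    reading exceeds the average seen so far."""

    def __init__(self, calibration_steps=100):
        # Number of initial random-walk readings taken before the
        # robot trusts its average enough to start dropping pellets.
        self.calibration_steps = calibration_steps
        self.total = 0.0
        self.count = 0

    def average(self):
        return self.total / self.count if self.count else 0.0

    def record(self, reading):
        # Continually update the running average with every reading.
        self.total += reading
        self.count += 1

    def should_drop(self, reading):
        # Never drop during the initial calibration walk; afterwards,
        # drop wherever the local reading beats the average so far.
        if self.count < self.calibration_steps:
            return False
        return reading > self.average()
```

A robot would call \texttt{record} at every step of its random walk and
\texttt{should\_drop} whenever it is carrying pellets.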
\item {} % 2
The key concepts from actual ant behavior that are used in most ant colony
optimization algorithms are:

53
\begin{itemize}
	\item Random initial search -- Without any initial pheromone trails, scout
	ants will move randomly (or quasi-randomly) about their environment in
	search of food. This ensures that a food source is eventually found and a
	path back to the nest established;
	\item Pheromone laying -- Once actual ants have found a food source, they
	will leave a trail of pheromones from the food to the nest. This serves as
	a path for other ants to follow, strengthening the path to the food;
	\item Collective pheromones -- In actual ant systems, ants do not have
	unique pheromones, so multiple ants laying pheromones in the same location
	simply amplify the ``signal'' for that location, allowing a good path to
	be strengthened;
	\item Pheromone decay -- In the real world, pheromones decay and dissipate
	over time. While this may degrade a path to a good solution, it also
	ensures that longer or less frequently traveled paths to the same location
	are less desirable for any given ant than a more optimal path;
	\item Path following -- Actual ants can detect a pheromone trail and
	follow it from its source to its destination. This allows ants to
	intelligently follow a path laid by their predecessors;
	\item Deviations -- Actual ants are not perfect at following a trail,
	though, and may deviate from it. This can lead them to discover a shorter
	path than the one they were following.
\end{itemize}

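Several of these concepts (collective pheromones, pheromone decay, and the
reinforcement of short paths) come together in the standard pheromone-update
rule $\tau \leftarrow (1-\rho)\tau + \sum_k \Delta\tau_k$. A minimal sketch,
using the common textbook $Q/L$ deposit and an edge-keyed dictionary as the
trail representation (both my choices for illustration):

```python
def update_pheromones(tau, tours, tour_lengths, rho=0.1, Q=1.0):
    """Evaporate all trails, then let every ant deposit pheromone on
    the edges of its tour. tau maps an edge (i, j) to its level."""
    # Pheromone decay: every trail loses a fraction rho per iteration.
    for edge in tau:
        tau[edge] *= (1.0 - rho)
    # Collective laying: deposits from different ants on the same edge
    # simply add up, amplifying the shared "signal" for good paths.
    for tour, length in zip(tours, tour_lengths):
        deposit = Q / length  # shorter tours deposit more per edge
        for i in range(len(tour)):
            edge = (tour[i], tour[(i + 1) % len(tour)])
            tau[edge] = tau.get(edge, 0.0) + deposit
    return tau
```

Unvisited edges only ever decay, so stale paths fade while frequently
traveled short paths are continually reinforced.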
\item {} % 3
To adapt a particle swarm optimization problem to a dynamic or ``online''
setting, my main focus would be to allow the system to determine when there
has been a change in the problem, and to tell the particles to discard or
re-determine their best positions and velocities accordingly. By being able to
discard this outdated memory of the optimal solution, the particles will be
able to re-optimize and find a new solution every time the problem changes.
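One simple way to detect such a change (an illustrative sketch, not the only
option) is to periodically re-evaluate the objective at the remembered global
best; if its value has shifted, the stored memory is stale and every particle's
best is re-seeded from its current position. The swarm layout below is my own
hypothetical representation:

```python
def maybe_reset_memory(swarm, objective, tol=1e-9):
    """Detect a changed landscape and discard outdated swarm memory.
    swarm is a dict with 'gbest_pos', 'gbest_val', and 'particles';
    each particle is a dict with 'pos', 'pbest_pos', 'pbest_val'."""
    current = objective(swarm["gbest_pos"])
    if abs(current - swarm["gbest_val"]) <= tol:
        return False  # objective unchanged at the best point: no reset
    # The problem has changed: re-seed every personal best from the
    # particle's current position, then rebuild the global best.
    for p in swarm["particles"]:
        p["pbest_pos"] = list(p["pos"])
        p["pbest_val"] = objective(p["pos"])
    best = min(swarm["particles"], key=lambda p: p["pbest_val"])
    swarm["gbest_pos"] = list(best["pbest_pos"])
    swarm["gbest_val"] = best["pbest_val"]
    return True
```

Calling this check once per iteration (or every few iterations) keeps the
swarm tracking the moving optimum instead of clinging to a stale one.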
\end{enumerate}
\section*{Programming}
The control parameters for my implementation are as follows:
\begin{align*}
	\alpha &= 1.0\\
	\beta &= 10.0\\
	\rho &= 0.1\\
	Q &= 1.0\\
	n_{ants} &= 5
\end{align*}
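These parameters play their usual Ant System roles: $\alpha$ and $\beta$
weight pheromone level versus heuristic desirability when choosing the next
city, $\rho$ is the evaporation rate, and $Q$ scales deposits. As a sketch of
where $\alpha$ and $\beta$ enter the edge-selection step (my variable names
and data layout, not necessarily the implementation's):

```python
import random

def choose_next_city(current, unvisited, tau, dist, alpha=1.0, beta=10.0):
    """Standard Ant System transition rule: city j is chosen with
    probability proportional to tau[i][j]**alpha * (1/dist[i][j])**beta."""
    weights = [
        (tau[current][j] ** alpha) * ((1.0 / dist[current][j]) ** beta)
        for j in unvisited
    ]
    total = sum(weights)
    # Roulette-wheel selection over the unvisited cities.
    r = random.uniform(0.0, total)
    cumulative = 0.0
    for j, w in zip(unvisited, weights):
        cumulative += w
        if cumulative >= r:
            return j
    return unvisited[-1]
```

With $\beta = 10$, nearby cities dominate the weights, which biases the ants
strongly toward greedy, short edges.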
I arrived at these parameters after trying a few values and seeing which ones
solved the Djibouti case the fastest. If I had more time, I would specify a
range and a step size for each parameter and test which of the possibly
hundreds of combinations actually performs the best.

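That exhaustive sweep can be sketched as a plain grid search. Here
\texttt{solve\_time} is a hypothetical callable standing in for a timed run of
the solver with the given parameters; it is an assumption for illustration,
not part of my implementation:

```python
from itertools import product

def grid_search(solve_time, grid):
    """Try every combination of candidate values in `grid` (a dict of
    parameter name -> list of values) and return the combination that
    solves the instance fastest, together with its solve time."""
    names = list(grid)
    best_params, best_time = None, float("inf")
    for values in product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        t = solve_time(**params)
        if t < best_time:
            best_params, best_time = params, t
    return best_params, best_time
```

Even five candidate values for each of the five parameters already means
$5^5 = 3125$ timed runs, which is why I only hand-tuned a few by hand.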
\begin{enumerate}
\item {} % 1
For the Djibouti case, my implementation is able to converge, on average, to
within 0.2\% of the optimal solution; the best run converged on the optimal
solution exactly. To generate this statistic, I ran my implementation 30
times, giving each run 5 seconds to complete.

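The ``within $x\%$'' figures here and below are mean relative gaps to the
known optimal tour length over all runs. A sketch of that computation (the run
lengths and optimum are supplied by the caller; nothing here is my actual
data):

```python
def mean_gap_percent(run_lengths, optimal_length):
    """Mean relative gap to the known optimal tour length, in percent.
    A value of 0.2 means the runs averaged 0.2% above the optimum."""
    gaps = [(length - optimal_length) / optimal_length
            for length in run_lengths]
    return 100.0 * sum(gaps) / len(gaps)
```

Each run contributes its best tour length, and the mean gap over all runs is
reported.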
\item {} % 2
For the Luxembourg case, my implementation is able to converge, on average, to
within 28\% of the optimal solution; the best run found a tour of length
14353. To generate this statistic, I ran my implementation 10 times, giving
each run 120 seconds to complete.

\item {} % 3
\item {} % 4