**We’re Already Past The Hard Part**

Like I’ve told you again and again, a particle swarm optimizer is just a bunch of particles (really just sets of numbers) moving around and making guesses at how to optimize a problem until we feel we’re reasonably close to the ‘true’ best answer.

Last time we wrote some code for moving one particle around, so now all we need is some code for moving lots of particles around. We can actually do this with just two loops: one big outer loop that keeps repeating until we have our final answer, and a smaller inner loop that updates every particle in the swarm once per pass through the outer loop.

With that in mind, there are really only two questions left to answer: how big should our swarm be, and how do we decide when we’re done looking for better answers?

The simplest solution to both problems is to hard code some values. We will choose a specific number of particles for our swarm and we will also choose how many times we want the swarm to update those particles before deciding on a final answer.

This obviously isn’t the only, or best, way to do this. It might be smarter to decide when to stop by keeping track of how long it’s been since we last found a better answer. If we haven’t found one after a few dozen or few hundred updates, we stop. This way we can leave it up to the swarm to decide when it is done, and we don’t have to worry so much about accidentally cutting the swarm off while it’s still doing good work.
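Just as a sketch of that idea (this isn’t code we’ll actually use in this post, and the function and variable names are made up for illustration), a stagnation-based stopping loop might look something like this, reusing the same swarm helpers we build below:

```lisp
;Sketch only: stop after *max-stagnant-updates* passes with no improvement.
;Assumes the generate-swarm-2d, update-swarm-particle-2d, and swarm
;accessors defined elsewhere in this post.
(defparameter *max-stagnant-updates* 100)

(defun swarm-minimize-until-stagnant-2d (fn x-limits y-limits)
  (let ((swarm (generate-swarm-2d *swarm-size*
                                  (first x-limits) (second x-limits)
                                  (first y-limits) (second y-limits)))
        (stagnant-count 0))
    (loop while (< stagnant-count *max-stagnant-updates*)
          do (let ((old-best (best-answer swarm)))
               (loop for particle in (particle-list swarm)
                     do (update-swarm-particle-2d particle swarm fn x-limits y-limits))
               ;Reset the counter whenever this pass improved on the old best.
               (if (< (best-answer swarm) old-best)
                   (setf stagnant-count 0)
                   (incf stagnant-count))))
    (list (best-x swarm) (best-y swarm))))
```

The only real design decision here is how patient to be: a bigger `*max-stagnant-updates*` means more wasted passes at the end, but less risk of stopping while the swarm is still converging.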

But that can get complicated, so for now let’s just stick with hard coding some values. We’ll pop them into some global variables so we can easily adjust them for each problem we try, and we’ll keep our default values really small so it’s easy to debug:

```lisp
(defparameter *swarm-size* 5)
(defparameter *swarm-iterations* 5)
```

Now we just need to build a swarm of size *swarm-size* and loop through it *swarm-iterations* times. The **update-swarm-particle-2d** code from last post will do the rest of the work for us:

```lisp
(defun swarm-minimize-2d (fn x-limits y-limits)
  "A particle swarm optimizer for two dimensional equations"
  (let ((swarm (generate-swarm-2d *swarm-size*
                                  (first x-limits) (second x-limits)
                                  (first y-limits) (second y-limits))))
    (loop for i from 1 to *swarm-iterations*
          do (loop for particle in (particle-list swarm)
                   do (update-swarm-particle-2d particle swarm fn x-limits y-limits)))
    (list (best-x swarm) (best-y swarm))))
```

**Go Forth and Optimize, My Swarmy Minions!**

That’s all our swarm needs in order to start solving problems, so let’s give it a spin. We’ll be feeding it our handy **double-parabola** test function and seeing how close it comes to the real answer. Remember that the true minimum of a double parabola is at (0, 0):

```lisp
[34]> (swarm-minimize-2d #'double-parabola '(-10 10) '(-10 10))
(-2.5209198 3.1263676)
```

Ok… not so great. Kind of disappointing, actually.

But wait! We only had five particles making five guesses each. Of course they couldn’t find a good answer. Let’s give them a little more time to work and see if that improves anything:

```lisp
[35]> (defparameter *swarm-iterations* 100)
*SWARM-ITERATIONS*
[36]> (swarm-minimize-2d #'double-parabola '(-10 10) '(-10 10))
(2.5831852E-5 -6.708622E-6)
```

Much, much better! Almost perfect, in fact. We’re within a rounding error of (0, 0). Our swarm intelligence has successfully done something intelligent! Congratulations, an AI programmer is you!

**Record Keeping**

Getting good answers is pretty cool, but back in the design phase we decided that we were also interested in seeing the path our particles took in order to find that answer. That’s why all of our particles have that “history” data slot. You remember, the one we had to write that weird nested list append to keep updated.

Because every particle keeps track of its own history, generating a swarm-wide history is as easy as running through the swarm’s particle list and gathering up all the individual histories into one big list. Which is yet another problem that the loop macro can solve for us. Honestly, I probably could have titled this series “Let’s Use The Loop Macro To Solve All Our Problems”.

Anyways, here’s a function for building a swarm history and here’s an updated swarm optimizer that includes that history in its output:

```lisp
(defun get-swarm-history-2d (swarm)
  (loop for particle in (particle-list swarm)
        collect (history particle)))
```

```lisp
(defun swarm-minimize-2d (fn x-limits y-limits)
  "A particle swarm optimizer for two dimensional equations"
  (let ((swarm (generate-swarm-2d *swarm-size*
                                  (first x-limits) (second x-limits)
                                  (first y-limits) (second y-limits))))
    (loop for i from 1 to *swarm-iterations*
          do (loop for particle in (particle-list swarm)
                   do (update-swarm-particle-2d particle swarm fn x-limits y-limits)))
    (list (list (best-x swarm) (best-y swarm))
          (get-swarm-history-2d swarm))))
```

Now let’s shrink our swarm size and our number of guesses back down and give the new code a test run.

```lisp
[39]> (defparameter *swarm-size* 2)
*SWARM-SIZE*
[40]> (defparameter *swarm-iterations* 4)
*SWARM-ITERATIONS*
[41]> (swarm-minimize-2d #'double-parabola '(-10 10) '(-10 10))
((-0.11861801 -0.97045755)
 (((8.618713 -8.651136) (10 -0.23300552) (10 6.501499) (8.640749 9.647516))
  ((-8.614849 -7.199463) (-0.11861801 -0.97045755) (6.678367 4.012747)
   (10 6.5043488))))
```

That’s a lot of parentheses to read through, but I bet you can see what happened. The first pair of numbers drowning in all those parentheses is the best answer the swarm found (which is pretty bad, since we only ran the algorithm through four steps). After that you might be able to make out that the remaining pairs of numbers are split into two lists of four. Those are the histories of the paths our two particles took through their four-step journey.
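If you’d rather pull those pieces apart in code than by eye, `destructuring-bind` makes short work of the (best-point history) list our optimizer returns. A quick illustration (the variable names here are just for the example):

```lisp
;Unpack the two-element result list from swarm-minimize-2d:
;the best (x y) point found, then the list of per-particle histories.
(destructuring-bind (best-point history)
    (swarm-minimize-2d #'double-parabola '(-10 10) '(-10 10))
  (format t "Best point found: ~a~%" best-point)
  (format t "First particle's path: ~a~%" (first history)))
```

Each element of `history` is one particle’s list of (x y) positions, in the order the particle visited them, which is exactly what we’ll want when we start plotting paths.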

And with that, our two dimensional particle swarm optimizer is done. It runs the algorithm, finds the answers, and spits back all the data we wanted from it.

**Complete 2D Particle Swarm Optimizer Code**

Now that this part of our project is “done” I imagine you’ll be wanting another full code dump. I keep all the following code in a file called “swarm.lisp”:

```lisp
;Helper Functions
(defun ranged-random (min max)
  (+ min (random (float (- max min)))))

;Particle Related Code
(defclass particle-2d ()
  ((x-coord :initarg :x-coord :accessor x-coord)
   (y-coord :initarg :y-coord :accessor y-coord)
   (x-vel :initarg :x-vel :accessor x-vel)
   (y-vel :initarg :y-vel :accessor y-vel)
   (history :accessor history :initform ())))

(defun generate-particle-2d (x-min x-max y-min y-max)
  (let ((x-range (- x-max x-min))
        (y-range (- y-max y-min)))
    (make-instance 'particle-2d
                   :x-coord (ranged-random x-min x-max)
                   :y-coord (ranged-random y-min y-max)
                   :x-vel (ranged-random (- x-range) x-range)
                   :y-vel (ranged-random (- y-range) y-range))))

(defun print-particle-2d (particle)
  (format t "x:~a~%y:~a~%x-vel:~a~%y-vel:~a"
          (x-coord particle)
          (y-coord particle)
          (x-vel particle)
          (y-vel particle)))

(defun one-line-print-particle-2d (particle)
  (format t "pos:<~a, ~a> vel:<~a, ~a>~%"
          (x-coord particle)
          (y-coord particle)
          (x-vel particle)
          (y-vel particle)))

;Swarm Related Code
(defclass swarm-2d ()
  ((best-x :initarg :best-x :accessor best-x :initform 0)
   (best-y :initarg :best-y :accessor best-y :initform 0)
   (best-answer :initarg :best-answer :accessor best-answer
                :initform most-positive-long-float)
   (particle-list :accessor particle-list)))

(defun generate-swarm-2d (particle-count x-min x-max y-min y-max)
  (let ((new-swarm (make-instance 'swarm-2d)))
    (setf (particle-list new-swarm)
          (loop for i from 1 to particle-count
                collect (generate-particle-2d x-min x-max y-min y-max)))
    new-swarm))

(defun print-swarm-2d (swarm)
  (format t "Best input:<~a, ~a>~%Best answer:~a~%"
          (best-x swarm)
          (best-y swarm)
          (best-answer swarm))
  (loop for particle in (particle-list swarm)
        do (one-line-print-particle-2d particle)))

;Swarm Optimization Code
(defparameter *swarm-size* 5)
(defparameter *swarm-iterations* 5)

(defun swarm-minimize-2d (fn x-limits y-limits)
  "A particle swarm optimizer for two dimensional equations"
  (let ((swarm (generate-swarm-2d *swarm-size*
                                  (first x-limits) (second x-limits)
                                  (first y-limits) (second y-limits))))
    (loop for i from 1 to *swarm-iterations*
          do (loop for particle in (particle-list swarm)
                   do (update-swarm-particle-2d particle swarm fn x-limits y-limits)))
    (list (list (best-x swarm) (best-y swarm))
          (get-swarm-history-2d swarm))))

(defun update-swarm-particle-2d (particle swarm fn x-bounds y-bounds)
  (update-particle-history-2d particle)
  (let ((particle-answer (funcall fn (x-coord particle) (y-coord particle))))
    (when (< particle-answer (best-answer swarm))
      (setf (best-answer swarm) particle-answer)
      (setf (best-x swarm) (x-coord particle))
      (setf (best-y swarm) (y-coord particle))))
  (update-particle-velocity-2d particle (best-x swarm) (best-y swarm))
  (let ((updated-x (+ (x-coord particle) (x-vel particle)))
        (updated-y (+ (y-coord particle) (y-vel particle))))
    (setf (x-coord particle)
          (max (first x-bounds) (min (second x-bounds) updated-x)))
    (setf (y-coord particle)
          (max (first y-bounds) (min (second y-bounds) updated-y)))))

(defun update-particle-history-2d (particle)
  (setf (history particle)
        (append (history particle)
                (list (list (x-coord particle) (y-coord particle))))))

(defparameter *original-velocity-weight-2d* 0.8)
(defparameter *target-velocity-weight-2d* 0.3)

(defun update-particle-velocity-2d (particle target-x target-y)
  (let ((target-x-vel (- target-x (x-coord particle)))
        (target-y-vel (- target-y (y-coord particle))))
    (setf (x-vel particle)
          (+ (* *target-velocity-weight-2d* target-x-vel)
             (* *original-velocity-weight-2d* (x-vel particle))))
    (setf (y-vel particle)
          (+ (* *target-velocity-weight-2d* target-y-vel)
             (* *original-velocity-weight-2d* (y-vel particle))))))

(defun get-swarm-history-2d (swarm)
  (loop for particle in (particle-list swarm)
        collect (history particle)))

;Functions To Optimize
(defun double-parabola (x y)
  (+ (* x x) (* y y)))
```

**More Done Than Done**

With the 2D swarm finished you might think our next goal is to tackle the general swarm intelligence project, but I’m not quite ready to move beyond the second dimension just yet. I’d like to run a few more tests and play with some data first. So join me next time as we take a look at the wonderful world of data visualization!