Let’s Illustrate Planetbase Part 3: Maslow’s Hierarchy… IN SPACE!

When we last left Fault Inc’s first space colony things were looking pretty good. We had a set of solar panels producing tons of electricity that we then used to power both a water extractor and an oxygen plant. Toss in all the freeze dried food the colonists brought with them and the future is looking bright!

At least until the sun goes down.

No sun means no solar energy. No solar energy means no electricity, which means no water which means no oxygen.

As you might imagine the Fault Inc colonists are not happy about this and start complaining.

You already had twelve hours of oxygen today. What more do you want?

The good news is that they stop complaining pretty quickly. The bad news is that they stop because they’re all dead.

On the bright side poor Jude apparently died happy.

Well shoot. Not only did several brave astronauts lose their lives but Fault Inc’s insurance premiums are probably going to skyrocket too. We’d better be extra careful with the next colony.

Let’s Illustrate Planetbase Part 2: How Hard Is Rocket Science, Really?

When you first try to play Planetbase there’s a big popup suggesting you should complete the tutorial mode before trying to settle an actual planet, but I’m going to completely ignore it in favor of clicking on buttons myself and seeing what happens. For example, it turns out that clicking the “Start Game” button triggers an adorable cinematic of your space capsule landing and your brave space pioneers emerging.

And you thought it felt good to get out and stretch your legs after a long car trip.

The best thing about camping in space is there are no mosquitoes.

Now I’m no space expert but I’ve read enough sci-fi to know the obvious top priority for a space base is air. And the easiest way to get air is by electrically splitting water, which means I need some power and water. Fortunately solar panels and water extraction systems come as pre-unlocked tech so I toss up some panels, drill for water and then set up an oxygen plant for my base.

Things are looking pretty good for Fault Inc’s first extraterrestrial colony!

Let’s Illustrate Planetbase Part 1: Everything Is Better In Space

Planetbase is a town management sim whose claim to fame is that it takes place in space, meaning it’s not enough to just worry about whether your citizens are happy and well fed in their newly built settlement. You also have to worry about whether or not they have enough oxygen or too much cosmic radiation. How much you have to worry about this depends a lot on the planet you choose to settle.

Since I’m going into this game blind I only have access to the easiest of planets: Class D. It’s a dustball with a thick CO2 atmosphere, and while humans can’t breathe that, the fact that it at least has an atmosphere should help with everything from providing wind power to screening out a certain amount of cosmic radiation and debris. It would also make this an attractive planet for terraforming, but I don’t think this game cares about that, so oh well.

Anyways, first up we need to name our base. Now let’s see… who do we know that both desperately needs a way off planet and has enough money to fund a space program?

Why none other than Fault, who as you may recall ended her last adventure by kind of sort of betraying her orders from a powerful Emperor and thus winding up on his hit list. Still, at least she managed to amass a large fortune in the process and what better way to spend that than on space exploration.

Extensive experience with stabbing monsters totally qualifies Fault to run a space based industrial firm, right?

Thus is born “Fault Inc”.

No, this has no impact on the game at all but after the last two Let’s Illustrates it would be sad if Fault didn’t at least get a cameo in this one.

Let’s Program A 3D Browser Game Part 1: You Can Do That?!

A while back we saw how recent improvements in HTML5 standards and modern browser technology make it possible to program classic 2D games using nothing but Javascript. But that’s not all that modern browsers can do!

They can do full 3D.

This is exciting because it used to be that the only way to share 3D content with your users was to ask them to download and run a suspicious standalone exe file. But now you can share your 3D ideas directly through the browser they already trust.

Of course, there are limits. The browser may already know how to render 3D graphics, but you still have to send the user a copy of all your models and textures, and users aren’t going to wait around for hours while you send them ten gigs of data. So you’re unlikely to be hosting full triple A games on your blog.

But there’s still a lot of cool stuff you can do with a mere dozen megabytes of 3D data. Games, simulations, interactive presentations and so on. It’s not going to be replacing the fast and reliable text based Internet we know and love anytime soon (or ever) but it can certainly enhance an existing website.

Always Turn Right

So what shall we do to practice our browser based 3D skills?

Well, lately I’ve been playing an unreasonable amount of dungeon crawlers. There’s just something about exploring a labyrinth in first person that’s more exciting than doing it from a third person eagle eye view. Probably the suspense of not being able to see what’s around each corner combined with the sense of scale you get from actually being in the dungeon.

So let’s build ourselves a first person, grid based maze. Just a maze, mind you. No combat system or treasure chests or anything. Just the maze exploration system.

Now before we start talking about our code let’s go over what our project needs to actually do:

  • Draw textured squares to represent walls, floors and ceilings
  • Accept user movement input
  • Force the user to move in a grid pattern
  • Keep track of the shape of the maze
  • Prevent the user from walking through walls

Pretty easy and straightforward as long as you have a reliable way to draw 3D graphics. And thanks to the smart people in charge of browser design we do.

So next time we can start experimenting with actual graphics code.

Random Pixels: Submarine Map Necklace Bag

I rolled the idea dice and came up with a submarine, a map, a necklace and some sort of bag. The map and necklace together suggest pirate treasure which works well with the idea of bags full of gold. Add in the submarine and you get sunken pirate treasure.

Of course nothing but a big old treasure chest underwater would be kind of boring so I threw in a mermaid.

Let’s Program A Compression Algorithm: Index And Code

Data compression is the science of taking big computer files, shrinking them down for easy transport or storage and then expanding them back into their original size and shape later on when you need to use them again. It gets used everywhere from computer backups to streaming video to software downloads and more.

But how does it work? How do you take 100 MB of data and somehow fit it into an only 10 MB file?

Let’s find out by writing our own simple compression algorithm!

Let’s Program A Compression Algorithm Part 1: How To Fit A Byte Into A Bit And Other Curious Tricks

Let’s Program A Compression Algorithm Part 2: In Which A General Idea Becomes A Specific Algorithm

Let’s Program A Compression Algorithm Part 3: In Which A General Compression Algorithm Becomes Working Lisp Code

Let’s Program A Compression Algorithm Part 4: In Which Byte Compression Code Is Applied To An Entire File

Let’s Program A Compression Algorithm Part 5: In Which Compressed Files Are Decompressed And The Project Completed

Let’s Program A Compression Algorithm Part 6: In Which We Consider How Professional Compression Algorithms Function

Let’s Program A Compression Algorithm Part 7: Considerations On Speeding Up Slow Code

Let’s Program A Compression Algorithm Part 8: Further Musings On Speeding Up Slow Code

Code

wrcslow.lisp – Our super slow but functional prototype from parts 1-5

wrcfast.lisp – The much faster optimized code from parts 7 and 8

Random Pixels: Submarine Shield Cauldron Fuel Gauge

So those two Let’s Illustrates I did were a lot of fun and they did help me get over that first big artistic hurdle of “What should I draw for practice?”

But now that I’m starting to get in the habit of practicing, the time required to play a game, keep a journal, organize reference screenshots and whatnot is actually getting in the way of practicing the actual art. So now I have a new plan!

I have purchased a set of Rory’s Story Cubes, Voyage edition. It’s a set of nine dice with adventure themed pictures on them instead of numbers. These things are really handy for anybody whose creativity needs a little head start. Want a random short story idea? A theme for your next game of Dungeons and Dragons? Maybe… inspiration for a piece of pixel art?

In my case I use them by rolling all nine dice, choosing four that seem to fit together well and then that’s the theme of my next pixel drawing.

For example, the first time I rolled them I picked out a submarine, a shield, a cauldron and an empty fuel gauge.

So I thought a little and came up with this:

A deep sea diver (submarine) wearing one of those metal plated old diving suits (shield). He’s underwater examining a pipe for carrying oil that can be refined (cauldron) into usable fuel (fuel gauge).

Let’s Program A Compression Algorithm Part 8: Further Musings On Speeding Up Slow Code

Last time we drastically sped up our program by changing how we put lists together. This time we’re going to speed things up again by changing how we take them apart.

Speed Reading A List

The part of our code that creates compressed bit lists now runs super fast but our program as a whole is still very slow. That suggests we’ve got a problem in the part of the compression function that actually writes to output. If you look at the code you’ll notice that we write to output eight bits at a time by using a series of subsequence copies and assignments. This was nice because it perfectly matched the idea in our heads (read 8 bits from the front of the list and then throw them away).
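
In other words, the old write loop did something roughly like this (a paraphrase to show the pattern, not the exact code from the earlier parts):

(write-byte (8-bit-list-to-byte (subseq bitlist 0 8)) out) ; copy the first 8 bits out and write them
(setf bitlist (subseq bitlist 8))                          ; "delete" them by copying everything else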

The problem here is that subseq can get pretty expensive, especially when used to grab really big subsequences from really big lists. In particular the way that we delete used bits from our bitlist is pretty horrible since we aren’t actually deleting them; instead we basically ask subseq to make a brand new copy of everything in the list except the first eight items. We then replace our original list with this slightly shorter list. This obviously does work but all that copying is reaaaalllly slow. Is there any way we can get the same effect but avoid all that work?

Well, when you think about it as long as we only read each bit once it doesn’t really matter whether we delete them after we use them or if we just leave them alone but ignore them. Kind of like a book. You don’t rip out pages after you finish reading them, you just use a bookmark to keep track of which pages you have and haven’t read.

Is it possible that we can do the same thing with our data and write some code that reads through our list eight bits at a time without deleting any of the old stuff?

Of course we can! It’s not even terribly hard. The mighty Lisp loop macro can actually be configured to read multiple values at a time and then skip forward multiple values to get its next input.

(defun white-rabbit-compress-file (input-filename output-filename)
  (let ((bitlist (compress-terminate-and-pad-file input-filename))
        (out (open output-filename :direction :output :element-type '(unsigned-byte 8))))
    (when out
      (loop for (b1 b2 b3 b4 b5 b6 b7 b8) on bitlist
            by (lambda (x) (cddddr (cddddr x)))
            do (write-byte (8-bit-list-to-byte (list b1 b2 b3 b4 b5 b6 b7 b8)) out)))
    (close out)))

As you can see we’re asking the loop for eight variables at once instead of just one and we’re telling it to skip forward multiple spaces at once by using a local lambda function that uses some weird syntax to basically look for the fourth neighbor of the fourth neighbor of our current list item. So now we read eight bits from our list, jump forward eight spaces and then repeat till we’re done.
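
If you’ve never seen that corner of the loop macro before, here’s a tiny standalone example of the same trick (the list here is just for show), grabbing two items at a time instead of eight:

(loop for (a b) on '(1 2 3 4 5 6) by #'cddr
      collect (list a b)) ; => ((1 2) (3 4) (5 6))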

What does that do for our runtime?

[100]> (time (white-rabbit-compress-file "chapter1.txt" "bettertimedoutput3"))

Real time: 0.44549 sec.

Run time: 0.436 sec.

Space: 3931184 Bytes

GC: 3, GC time: 0.036 sec.

T

Look at that! Half a second processing time and only 4 megabytes of memory usage now that we aren’t wasting all our time and RAM making copies of copies of copies of subsequences of copies of our copied data copies.

In fact, at this speed we can finally achieve our goal of compressing the entirety of Alice in Wonderland!

[102]> (time (white-rabbit-compress-file "aliceASCII.txt" "tinyalice"))

Real time: 5.921212 sec.

Run time: 5.872 sec.

Space: 52589912 Bytes

GC: 16, GC time: 0.86 sec.

T

For anyone who cares we managed to shrink it from 147.8 kilobytes to 113.3, which is roughly 25% smaller just like we hoped for. Go us!

Making More Functions Fast

As long as we’re in our groove it might be nice to also speed up our decompression function, especially since I’m getting pretty tired of having to wait five minutes every time I want to test whether or not a change to compression logic actually worked.

Like compression our decompression is a two step process. The white-rabbit-decompress-file function starts by calling the file-to-bitlist function to get a compressed file full of bits and then it goes on to decompress those bits and write them to our output file.

These two functions have the same efficiency flaws that our compression functions did: file-to-bitlist relies too much on append and white-rabbit-decompress-file abuses subsequences.

Cutting the appends out of file-to-bitlist isn’t any different than it was for our compression function. Replace the appends with pushes to efficiently create a backwards list and then flip it around.

(defun file-to-bitlist (filename)
  (let ((bitlist '())
        (in (open filename :element-type '(unsigned-byte 8))))
    (when in
      (loop for testbyte = (read-byte in nil)
            while testbyte
            do (let ((bits (byte-to-8-bit-list testbyte)))
                 (loop for i in bits do (push i bitlist))))
      (close in))
    (nreverse bitlist)))

Fixing up the way we write to output is going to be a little bit harder. Since we’re working with a compressed file we can no longer safely say that all of our letters are eight bits long. Some letters will actually only be four bits long and others will be nine bits long. So we can’t just write a loop that always jumps n items forward between writes. Instead we’re going to have to write our own looping logic that’s smart enough to decide when to jump forward four spaces and when to jump forward nine.

The basic idea is that we will start with a variable pointing at the start of our list. We will then use subseq to check whether the list starts with a 0 or 1 (a short subsequence at the start of a list is pretty cheap). Like usual we will use this information to decide how to decompress our data. We will then manually change our variable to point either four neighbors further or nine neighbors further as needed. This will make that new spot look like the start of the list and we can just loop until we hit the termination sequence.

(defun white-rabbit-decompress-file (input-filename output-filename)
  (let ((bitlist (file-to-bitlist input-filename))
        (decompressing 1)
        (out (open output-filename :direction :output :element-type '(unsigned-byte 8))))
    (when out
      (loop while decompressing
            do (if (= 0 (first bitlist))
                   (progn
                     (write-byte (gethash (subseq bitlist 0 4) *list-to-byte-compression-hash*) out)
                     (setf bitlist (cddddr bitlist)))
                   (if (equal (subseq bitlist 0 9) '(1 0 0 0 0 0 0 0 0))
                       (setf decompressing nil)
                       (progn
                         (write-byte (8-bit-list-to-byte (subseq bitlist 1 9)) out)
                         (setf bitlist (cdr (cddddr (cddddr bitlist)))))))))
    (close out)))


Once again we use a chain of cdr and its variants to jump through our list. Since this is the second time we’ve used them we might as well explain them.

Remember how I said every item in a Lisp list is made of two halves? The first usually* holds a piece of data and the second usually holds a link to the next item in the list. Together these two halves make up a “cons cell” and you can individually access each half using the historically named car function to grab the first half and cdr to grab the second half. Since the second half is usually a link to the next item in the list, calling cdr effectively hands you the rest of the list.

You can walk through Lisp lists by chaining these together. If you want the data at the third spot of the list you need to cdr to get to the second item, then cdr again to get to the third item, and then car to get the data, or in other words (car (cdr (cdr list-with-cool-data))).

On the other hand if you want a new sublist that starts at the third part of the list you would just use two cdrs to get the link to the third item without using car to then specifically grab only the data.

Nesting these calls can get annoying though, so Lisp has several built-in functions that condense a chain of calls into a single name, such as caddr or cddr. Unfortunately these only go up to four calls in a row, which is why in order to jump the start of our list forward nine spaces we still have to chain multiple calls together.
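
If that’s still a bit abstract, here’s a quick made up example you can try at the REPL (the list itself is just for illustration):

(defparameter *example* '(10 20 30 40 50)) ; a throwaway list for demonstration purposes

(car *example*)             ; => 10, the data in the first cons cell
(cdr *example*)             ; => (20 30 40 50), the rest of the list
(car (cdr (cdr *example*))) ; => 30, the data at the third spot
(caddr *example*)           ; => 30, the same thing with a condensed name
(cddr *example*)            ; => (30 40 50), a sublist starting at the third item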

Now that you understand how the code works let’s see how well it works:

[2]> (time (white-rabbit-decompress-file "tinyalice" "fastexpandtest.txt"))

Real time: 5.585598 sec.

Run time: 5.268 sec.

Space: 51634168 Bytes

GC: 20, GC time: 0.608 sec.

T

We can now decompress files just as fast as we compress them. It’s not exactly a great program but for an educational toy I think it turned out pretty well. And hopefully you learned at least a little about Lisp’s inner guts and what to look for when things seem slow.


* Sometimes the first and the second halves of a cons cell will both hold links to other cons cells, allowing for nested lists and interesting tree data structures.

Let’s (Not) Illustrate Shadowrun Dragonfall

So after finding out that trying to illustrate Dragon Age wasn’t helping me get in as much practice as I had hoped I decided to try moving on to a game I was more familiar with: Shadowrun Dragonfall. It’s a really well designed cyberpunk meets urban fantasy RPG that did a great job of giving you a lot of freedom of choice in how you approach problems. Need to get past security? Use hacking skills to fake an id. Fast talk your way past a guard. Get advice from the spirit realm on alternate paths. And of course if all else fails you can just shoot your way through the whole game.

Anyways that wound up not really being any better than Dragon Age in terms of how much time I spent drawing pixel art versus time spent playing and writing about the game, but I might as well link a couple almost nice pieces before we get to next week and I reveal my new (but not that exciting) art learning strategy.

My first Shadowrun character ever was a dwarf roboticist named “Uncle Chrome” who managed to find a nonviolent solution to most of his problems thanks to being moderately charismatic and extremely good with computers and circuits.

Fault’s shadowrunning career sadly never got off the ground but I think it’s safe to say it would have involved a lot more explosions and hostile negotiations than my old Uncle Chrome run.

Let’s Program A Compression Algorithm Part 7: Considerations On Speeding Up Slow Code

Welcome fellow Lisp enthusiasts*! Now that our proof of concept code works it’s time to talk about the elephant in the room: Our code is really slow and considering that modern Lisp is supposed to be FAST that means we’re probably doing something dumb.

But exactly how dumb? Well let’s take a scientific approach and run a test:

[7]> (time (white-rabbit-compress-file "chapter1.txt" "timedoutput"))

Real time: 251.23251 sec.

Run time: 250.732 sec.

Space: 10816791720 Bytes

GC: 9324, GC time: 159.104 sec.

T

Looks like compressing a small text file on my (admittedly old) Linux laptop took over four minutes and, more surprisingly, consumed something like 10 gigabytes of memory. That’s a freakishly huge amount of memory for working with a file that’s only 11.4kB long. And using up so much memory was a real strain on the garbage collector, which spent over two minutes just cleaning up all the data our program threw away.

How Are We Going To Fix This?

Code optimization is the reason why it’s important to not only understand WHAT your programming language can do but also HOW it does it.

Sure, most programmers can easily spot the warning signs when they personally write inefficient code: Too many nested loops, recursive functions on complex data and so on.

But what about when you load up someone else’s library and call the doSomethingCool function they wrote? Is it fast? Slow? Does it loop? Without some research you have no way of telling.

Moral of the story: Do your research!

Doing Some Research!

For example, let’s take a look at Lisp’s handy append function. Its job is simple: take two lists, glue them together and then return the combined list. Our prototype uses this function everywhere. To build bit lists. To build compressed output files. To build decompressed output files. It is no exaggeration to say most of what our program does is appending lists to other lists.

So…. How does append append lists anyways?

For that matter, what is a Lisp list?

A list in Lisp is a pretty simple thing. Each individual item in the list contains two pieces of information: The data stored there and directions on where to find the next item in the list. The final item in the list has a blank in the “next” spot. That’s how you know it’s the end of the list.

Now the easiest and fastest way to connect two lists together is to take the empty “next” from the end of the first list and point it at the start of the second list.

But append comes with a bonus guarantee that complicates things: It is guaranteed to NOT change either of its inputs. That means changing the last item in the first list is a no go.

Instead append creates a complete copy of the first list and then links that new list to the second list. (This doesn’t change the second list because Lisp lists only move forward and don’t care if anyone links to them, just who they link to).
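
You can see that safety guarantee for yourself at the REPL. The variables here are made up purely for demonstration:

(defparameter *front* '(1 2 3)) ; throwaway example lists
(defparameter *back* '(4 5 6))

(defparameter *combined* (append *front* *back*))

*combined* ; => (1 2 3 4 5 6), a fresh copy of *front* linked to *back*
*front*    ; => (1 2 3), completely untouched by the append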

Did we find the problem?

So append makes a copy of its first argument. That sounds like it could be a bit slow. Maybe we should double check how we use it. Let’s start with a look at our file-to-compressed-bitlist function. You might notice this little gem:

(append bit-list (compress-byte testbyte))

That’s the line of our code where we say “Take our current bit list and add the next set of compressed bits to the end”. This happens inside of a loop that runs once for every byte in the input file. So our eleven kilobyte test file is going to trigger this line some 11,000** times. And every single time is going to involve making a complete copy of our compressed bit list so far.

That will add up fast. How fast? Hmmm….

Let’s assume that between our 4 bit short codes and our 9 bit long codes the average compressed-byte comes out at 6 bits (that matches the 25% compression rate we were aiming for). Working with that average our loop probably looks sort of like this.

Step one: Append first 6 bits to empty list. No copying.

Step two: Copy existing 6 bits and link to next 6 bits. Total of 6 bits copied.

Step three: Copy existing 12 bits and link to next 6 bits. Total of 18 bits copied.

Step four: Copy existing 18 bits and link to next 6 bits. Total of 36 bits copied.

Step five: Copy existing 24 bits and link to next 6 bits. Total of 60 bits copied.

Step six: Copy existing 30 bits and link to next 6 bits. Total of 90 bits copied.

Step seven: Copy existing 36 bits and link to next 6 bits. Total of 126 bits copied.

So we’re only seven bytes into our 11,000+ byte file and we’ve already had to make copies of 126 list items. And while I’ve been calling them bits, remember that we’re actually using full 32 bit (4 byte) integers to keep track of our 0s and 1s. So that means we’ve had to copy 126 * 4 = 504 bytes just to compress seven letters of input.

And it only gets worse. By the time we make it to the end of our eleven kilobyte file we will have made copies equal to several thousand times the size of our original input! The bigger the input gets the worse that multiplier becomes and suddenly it’s not so mysterious why our code takes multiple minutes to run and consumes gigabytes of memory.
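
If you want a quick sanity check on that claim the back-of-envelope math only takes a couple lines of Lisp. The function here is just an estimate I cooked up for this argument, using the 6 bits per compressed byte average from above:

(defun estimated-copied-items (byte-count &optional (bits-per-byte 6))
  ;; If every append copies the entire list built so far, compressing an
  ;; n byte file copies roughly 6 * (0 + 1 + 2 + ... + (n - 1)) list items.
  (* bits-per-byte (/ (* byte-count (1- byte-count)) 2)))

(estimated-copied-items 11400) ; => 389845800 copied cons cells for our test chapter

Hundreds of millions of copied list items works out to several thousand times the bit count of the original file, so the claim above holds up.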

Functional Programming: What that be?

Before we start talking about how to “fix” append I want to take a minute and talk about why it’s not actually “broken”. Sure, using append the way we do is horribly inefficient but that’s not because append was poorly designed. In fact, append was very carefully designed to be very safe. Because it copies its inputs instead of changing them you can use it anywhere you want without having to worry about accidentally mutilating some bit of data you might need to use again later.

This is the core of what’s known as “functional programming”.

Basically programmers noticed that a lot of their worst and hardest to solve bugs were caused when different bits of code accidentally changed each other’s variables. Example: If three functions all depend on the same global variable and something else changes that variable suddenly all three functions might break down. Or if you pass the same variable as input to multiple functions and it gets changed halfway through your later functions might not work like you expected.

The traditional solution to this problem is to just work really really hard to remember which functions share which global variables and how every function changes their input. The problem here is that the bigger your program gets the harder it is to keep track of all that stuff.

This led some people to come up with a clever idea: What if we avoid writing code with shared variables and just don’t let functions change their inputs? That should get rid of all those weird bugs and accidental data corruption issues.

When you write your code according to these rules it’s “functional”, named after math functions. After all, the quadratic equation won’t ever break down just because you used the Pythagorean Theorem wrong, and four is always four no matter how many equations you pass it through.

The obvious cost here is that functional code tends to “waste” a lot of time and memory copying inputs so it can safely work with the copies and leave the original data alone. So it’s up to you as the programmer to decide what each part of your program needs most: Functional code that’s easy to maintain and experiment with or efficient code that’s harder to work with but runs much faster.

In our case since we’re dealing with the interior of a massive loop it’s probably time to say goodbye to our safe functional prototype code and focus on speed.

Appending Faster Than Append

So we need a list building loop that doesn’t waste time or memory space. Turns out one of the more popular ways to do this in Lisp is to actually build your list backwards and then reverse it. This takes advantage of the fact that adding something to the front of a Lisp list is both fast and simple.

Interesting trivia: We actually already did this in our current byte-to-8-bit-list function.
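
In isolation the idiom looks something like this (a throwaway example, not the actual byte-to-8-bit-list code):

(let ((result '()))
  (loop for i in '(1 2 3 4 5)
        do (push i result)) ; result is now (5 4 3 2 1), a mirror image of the input
  (nreverse result))        ; => (1 2 3 4 5), flipped back around in place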

Now let’s rewrite our file-to-compressed-bitlist to use the same technique.

Basically in our new version when we compress a byte we won’t append the compressed bit pattern directly to our output. Instead we will load the compressed bits into a temporary variable and then push them one by one onto the front of our bit list.

After all the bytes are read we’ll then load our termination sequence into a variable and push that one bit at a time too. This will give us a complete mirror image of the bit list we actually want, so we finish off with a call to nreverse which reverses the list. The “n” indicates this is an “unsafe” function that works very fast but will destroy the original data. Since we won’t ever need that backwards list that’s fine.

(defun file-to-compressed-bitlist (filename)
  (let ((bit-list '())
        (in (open filename :element-type '(unsigned-byte 8))))
    (when in
      (loop for testbyte = (read-byte in nil)
            while testbyte
            do (let ((compressed-bits (compress-byte testbyte)))
                 (loop for i in compressed-bits do (push i bit-list))))
      (close in))
    (let ((termination-symbol '(1 0 0 0 0 0 0 0 0)))
      (loop for i in termination-symbol do (push i bit-list)))
    (nreverse bit-list)))

Interestingly enough this function is actually fairly functional since it avoids global variables. The only data it changes or destroys is private data so as far as the rest of the world is concerned nothing important has been changed.

Now that we’ve tuned up one of our major functions let’s give things another whirl and see what happens:

[41]> (time (white-rabbit-compress-file "chapter1.txt" "bettertimedoutput"))

Real time: 120.46666 sec.

Run time: 120.376 sec.

Space: 4636093800 Bytes

GC: 2925, GC time: 47.0 sec.

T

Not bad. Twice as fast, only half as much memory used and not nearly as hard on the garbage collector. But even after that sort of major improvement it’s still pretty slow and memory hungry, so our work’s not done quite yet. We’ll see what remaining inefficiencies we can trim out next time.


* Reminder: I’m not particularly good at Lisp. I just like the language.

** Yes, yes, I know. A kilobyte is actually 1024 bytes not an even 1000 but multiplying by powers of ten in your head is so much easier than powers of two and is close enough for general discussion.