Let’s Program Some Weird Markov Chains Part 2: Need More Data

Last time we learned that a Markov Chain is a way to model lists of events and objects by basically saying “If your current item/event is X then here is a list of items/events that might come next PLUS a list of how likely each choice is.”

This is a remarkably easy thing to program in pretty much any language.

All we really need is:

  1. A way to connect an input to a list of possible outputs
  2. A way to keep track of the probability of those outputs
  3. A way to randomly select one output from a list based on those probabilities

For this Let’s Program we’ll be using Python since it’s cross platform and already on all of my computers. Be warned that I’m not much of a Python expert so please don’t take this project as an example of how to write “proper” Python code.

Step 1, connecting inputs to possible outputs, is super easy because we can just use the dictionary/map/hashtable that is undoubtedly included in whatever language you’re using. These are data structures that let you link a “key” to a “value” of any sort. Observe:

>>> dictionaryTest = {'apple': 'pie', 'chocolate': 'ice cream'}

>>> dictionaryTest['apple']

'pie'

The trick here is that we don’t want to link a word to another word, we want to link a word to a set of possible next words.

Believe it or not we can actually represent that set of possible next words as a dictionary as well: a dictionary where the keys are words we have seen and the values are how often we have seen them.

>>> dictionaryTest2 = {'apple': {'pie': 15, 'tart': 6}, 'chocolate': {'ice cream': 6, 'milk': 3}}

>>> dictionaryTest2['apple']

{'pie': 15, 'tart': 6}

Now all we need is a way to use those word counts to randomly choose a word.

Pulling Marbles Out Of Bags

The most obvious (but probably not the most efficient) algorithm here is to cycle through the entire dictionary and count up the total number of word occurrences in it. For example, if “pie” showed up 15 times and “tart” showed up 6 times the total is 21.

We then randomly generate a number from 0 up to (but not including) our total and loop through the dictionary a second time. For each word we check whether our random number is less than that word’s count. If so we return that word. If not we decrease our random number by that count and try again with the next key value pair.

Example 1: We randomly generate “3”. That’s less than the “pie” count of 15 so we return “pie”.

Example 2: We randomly generate “19”. That’s higher than the “pie” count of 15 so we subtract 15 from 19 and now have “4”. We then move on to “tart” and see that “4” is less than the “tart” count of 6, so we return “tart”.

Let's see that in code! These experiments are getting a little long to just type into the command line so let's put this in a file with a convenient name like “markov.py”.

from random import randint

def getRandomFromProbabilityList(probabilityList):
    #First pass: add up all the word counts to get a grand total
    total = 0
    for item, count in probabilityList.items():
        total = total + count

    #Pick a random number from 0 up to (but not including) the total
    chosenProbability = randint(0, total - 1)

    #Second pass: figure out which word the random number landed on
    for item, count in probabilityList.items():
        if chosenProbability < count:
            return item
        chosenProbability = chosenProbability - count

markovModel = {'apple': {'pie':15, 'tart':6}, 'chocolate': {'ice cream': 6, 'milk': 3}}

print(getRandomFromProbabilityList(markovModel['apple']))

Here you can see we brought in our mini food themed Markov Model and then used our getRandomFromProbabilityList function to randomly generate an appropriate next word for the starting word “apple”.

Inside the getRandomFromProbabilityList you should see the two “key value” for loops that we planned. The “key” is the word and the “value” is how often we saw it. The first loop ignores the keys and just focuses on adding up all the counts. The second loop then uses a randomly chosen number to choose a word key to return.

Running this several times in a row shows that we do indeed get semi-random answers for what word should come after “apple” and that we see ‘pie’ (count 15) a little more than twice as often as ‘tart’ (count 6), which is exactly what we want.
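Don’t just take my word for it though. Here’s a quick sanity check (just a sketch, and it assumes you paste it at the bottom of the same markov.py file so it can see getRandomFromProbabilityList and markovModel) that calls the function ten thousand times and tallies the results:

from collections import Counter

#Sample the 'apple' chain 10,000 times and count how often each word comes up
tally = Counter(getRandomFromProbabilityList(markovModel['apple']) for _ in range(10000))
print(tally)

Since “pie” holds 15 of the 21 total counts and “tart” holds the other 6, you should see somewhere around 7,100 “pie”s to 2,900 “tart”s, give or take a bit of random noise.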

And that simple code really is the core of our Markov Chain engine. Obviously we’ll need to build a few other tools in order to do things like automatically build Markov Chains from raw data or to automatically generate interesting output from our input but the real logic is all right here. That function will help us link our chains together.

Let’s Data Mine Some Text

It didn’t take long at all to build some code for pulling probabilistic events out of our Markov Model. The problem though is that to test that code we had to build a Markov Model by hand. That’s the sort of thing that can get awfully boring awfully fast so let’s figure out a way to automatically build a Markov Model from existing text.

Of course, for debugging purposes it would be silly to jump straight into working with a massive book like Moby Dick. Instead let’s cut our teeth on this fascinating little piece of micro-fiction:

I have an apple. I have another apple. I have some apples. I will make them into a pie. It will be an apple pie. The apple pie will taste good. I like apple pie.

To turn this into a Markov Chain we need to extract all the word pairs and count how often each one occurred, but before that we need to do a little pre-processing to make the text more computer friendly.

Specifically we want to make sure the Markov Chain knows when sentences start and end. Knowing when and how sentences start will help the chain create sentences with a natural sounding beginning while knowing how they end will give the program a way to gracefully finish off sentences instead of just looping through an infinite chain of “the man saw the dog saw the man saw the dog saw the…”

After we’ve figured out the start and stop info we’ll also want to remove all punctuation and capitalization. Otherwise our program might accidentally think “Apple”, “apple”, “apple,” and “apple.” are four different words.

After pre-processing our silly little sample story should look more like this:

sentencestart i have an apple sentenceend sentencestart i have another apple sentenceend sentencestart i have some apples sentenceend sentencestart i will make them into a pie sentenceend sentencestart it will be an apple pie sentenceend sentencestart the apple pie will taste good sentenceend sentencestart i like apple pie sentenceend

Obviously I did that by hand, but equally obvious is the fact that I don’t want to have to edit the entirety of War and Peace just to use it in our program so let’s figure out how we might write a program to pre-process our text for us.

Hmmm… as humans we keep track of the boundary between sentences by using periods. Maybe our code can do the same? Perhaps we can scan through the text looking for periods and then replace them all with symbols showing that one sentence has ended and another is about to start. We can use whatever symbols we want but should try to avoid anything that might show up in a real piece of text. Mushing words together to create sentenceend and sentencestart is how I did it, but you could use randomly generated strings like BBSDFDSstart and BBSDFDend if you wanted to be extra careful.

As usual there are some edge cases, very literal ones here. The very first word of the text needs a sentencestart in front of it even though there is no period there. And the final period of the story should be replaced by just sentenceend with no following sentencestart.

Fortunately the code for this is pretty easy thanks to regular expressions:

import re #Regular expressions
import sys #Lets us get command line arguments

#Get the first argument to the program and use it as a filename
with open(sys.argv[1], 'r') as myfile:
    data = myfile.read()

#Text should start with sentencestart
text = 'sentencestart ' + data

#Replace all periods with ' sentenceend sentencestart'
text = re.sub(r'\.', ' sentenceend sentencestart', text)

#Remove the unnecessary sentencestart created by the last period in the file
text = re.sub('sentencestart$', '', text)

#Make everything lowercase
text = text.lower()

#Remove everything that isn't a letter, number or whitespace
text = re.sub(r'[^a-zA-Z\d\s]', '', text)

#Split the pre-processed text into an array of words
tokenList = text.split()

#Double check that the pre-processed array turned out right
print(tokenList)
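To try it out just save the script and point it at a text file; the filename below is only a placeholder, so use whatever you saved our little apple story as:

python markov.py story.txt

If everything went well the printed list should start with something like ['sentencestart', 'i', 'have', 'an', 'apple', 'sentenceend', 'sentencestart', ...].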

Once we have our list split into a nice array it’s pretty easy to walk through that array and add every word pair to our Markov Model.

markovChain = {}

for i in range(len(tokenList) - 1):
    if tokenList[i] not in markovChain:
        markovChain[tokenList[i]] = {}

    if tokenList[i+1] not in markovChain[tokenList[i]]:
        markovChain[tokenList[i]][tokenList[i+1]] = 1
    else:
        markovChain[tokenList[i]][tokenList[i+1]] += 1


Pretty simple. We index through the array of tokens. At every step the word at the current index i is treated as the first word of our pair and the word at i + 1 is treated as the second word of the pair. We use these to build our nested dictionary of dictionaries. The end result works sort of like this: markovChain[firstWord][secondWord] = count.

The only trick here is that you have to make sure you end the loop on the second to last token, not the last token. This is because the last token doesn’t have a “next” token so we can’t build a pair. And, depending on your language, trying to will probably throw an error.
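As a quick aside, if all that “check whether we’ve seen this key before” bookkeeping feels clunky, Python’s standard library can handle it for us. This is purely an optional alternative, but here’s a sketch that builds the same nested count structure from the same tokenList using collections.defaultdict and collections.Counter:

from collections import defaultdict, Counter

#Unseen first words automatically get an empty Counter, and unseen second words default to 0
markovChain = defaultdict(Counter)

#zip pairs each token with the one after it and naturally stops at the second to last token
for first, second in zip(tokenList, tokenList[1:]):
    markovChain[first][second] += 1

Either version plays nicely with the getRandomFromProbabilityList function from earlier, since a Counter can be looped over with .items() just like a plain dictionary.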

And with that we have automatically turned a text file into a Markov Chain full of interesting facts like:

  • 5 sentences started with ‘I’
  • The word “apple” was followed by the word “pie” three times
  • The word “will” can be followed by “be”, “make” or “taste”
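If you’d like to double check those numbers yourself, tack a few print statements onto the end of the script (this assumes the markovChain we just built from the sample story is still in scope):

print(markovChain['sentencestart'])
print(markovChain['apple'])
print(markovChain['will'])

The first line should show ‘i’ with a count of 5, the second should show ‘pie’ with a count of 3 (plus the two times ‘apple’ ended a sentence), and the third should list ‘be’, ‘make’ and ‘taste’ with one count each.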

Great! We now have a tool that can help us build large Markov Models with significant amounts of real data. Tune in next time to see what interesting stuff we can do with that sort of information.