Super Brains, Super Data and Chaos Theory

I just finished reading Count to a Trillion by John C. Wright. To summarize without spoiling: It’s a story about the futuristic space adventures of a man who suffers from occasional bouts of post-human hyper-intelligence. Expect mystery, treachery, high-tech duels and the mandatory beautiful space princess.

 

A major theme in Count to a Trillion is the existence of what I’m going to call “super brains”: extremely intelligent entities that make normal human geniuses look like mere children. Inspired by this book, I’ve spent the past few days playing with the question: What sorts of things should a super brain be able to do, and what should it not be able to do? What are the realistic limits that even a super-high-IQ entity would have to deal with?

 

After much thought I think it is safe to say that all brains are limited by the data available to them. For example, even the most talented of accountants will have trouble helping you balance your checkbook if you don’t keep your receipts or if you lie about your expenses.

 

This limit applies to super brains too. A super brain might be able to instantly calculate the proper flight path to put a satellite in orbit around Jupiter, but he’s still going to need to know how much the satellite weighs, how much stress it can handle and what kind of engine it uses. Give the super brain bad information about the satellite’s weight or construction and his answer will be just as disastrously wrong as a random guess. As we say in computer science: garbage in, garbage out.
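
To see how badly garbage data can poison a perfectly correct calculation, here is a quick Python sketch of my own. It uses the standard Tsiolkovsky rocket equation (not anything from the book), and every number in it is invented purely for illustration:

    import math

    def delta_v(isp_s, wet_mass_kg, dry_mass_kg):
        # Tsiolkovsky rocket equation: how much velocity change a burn can deliver.
        g0 = 9.80665  # standard gravity, m/s^2
        return isp_s * g0 * math.log(wet_mass_kg / dry_mass_kg)

    # The "true" satellite: 3,000 kg fueled, 1,200 kg dry, engine Isp of 320 s.
    print(round(delta_v(320, 3000, 1200)))  # ~2,875 m/s

    # Same flawless math, but someone mis-reported the dry mass as 1,800 kg.
    print(round(delta_v(320, 3000, 1800)))  # ~1,603 m/s, disastrously short

The brain doing the math here is perfect; the only thing wrong is the data it was handed.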

 

But there is more to data than just whether you have it or not. You also have to consider the quality of your data. When I eyeball a jug of milk and claim there are “about two cups left” I probably actually have anywhere between one and a half and two and a half cups of milk. This data is less accurate and precise than if I were to spend an afternoon carefully measuring the remaining milk with scientific instruments in order to report that I was 99.9% sure that the jug held 2.13 cups of milk, plus or minus half a teaspoon.

 

This is important because different problems require different levels of data precision. If you want a super brain to help you decide what to make for dinner, “we have about 2 cups of milk left” is more than enough information. But if you want his help sending a satellite to Jupiter you’re going to need to do a lot better than “I think it weighs somewhere between 2 and 3 tons”.

 

But it gets even more interesting! Chaos Theory shows that there are certain problems that require infinitely accurate data to solve, or at the very least data so close to infinitely accurate that no human could ever collect it. Even worse, not only do you need this infinitely accurate “super data” to solve a chaotic problem; without it you can’t even predict how wrong your answer will be.

 

You see, with a normal, well-behaved problem, data that is 10% wrong leads to an answer that is roughly 10% wrong. Which is actually very useful, because it lets you say “our data isn’t perfect, but it’s close enough, so our answer will also be close enough.”
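
A tiny, invented Python example of a problem behaving itself (mine, not the book’s): a trip-time calculation where a 10% error in the measured distance carries straight through to a 10% error in the answer, no better and no worse:

    def travel_time_hours(distance_km, speed_kmh):
        # A well-behaved problem: time is just distance divided by speed.
        return distance_km / speed_kmh

    true_time = travel_time_hours(1000, 100)          # the real answer: 10 hours
    wrong_time = travel_time_hours(1000 * 1.10, 100)  # distance measured 10% too long

    error = (wrong_time - true_time) / true_time
    print(f"{error:.0%} error in the answer")  # prints "10% error in the answer"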

 

But chaos theory introduces problems where data that is 10% wrong will lead to an answer that is unpredictably wrong. You might be 10% wrong, you might be 80% wrong, or you might only be 1% wrong. There’s no way to tell. In the words of Edward Lorenz, the father of chaos theory:

 

Chaos: When the present determines the future, but the approximate present does not approximately determine the future.
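
To see what Lorenz means, here is a short Python sketch of my own using the logistic map, which is about the simplest textbook chaotic system there is (it has nothing to do with weather; it is just easy to play with). The starting “measurement” is only 1% off, but after fifty steps that tells you nothing about how far off the final answer is:

    def iterate(x, steps=50, r=4.0):
        # The logistic map with r = 4: a one-line, fully chaotic system.
        for _ in range(steps):
            x = r * x * (1 - x)
        return x

    exact_state = 0.200
    measured_state = 0.202  # our data is only 1% wrong

    print(iterate(exact_state))
    print(iterate(measured_state))
    # After 50 steps the two results bear no resemblance to each other,
    # and nothing about the 1% input error predicts how far apart they end up.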

 

The classic example of a chaotic problem is predicting the weather. Weather is a complex, iterative system where today’s weather leads to tomorrow’s weather, which leads to the next day’s weather, and so on. Theoretically you should be able to predict tomorrow’s weather just by looking at today’s weather. You can then use that prediction to calculate the day after that, and so on, as far out into the future as you want.

 

The problem is that making even a tiny mistake when recording today’s weather will lead to small but unpredictable errors in your prediction for tomorrow. A small error in your prediction for tomorrow means a moderate error in your prediction for next week. A moderate error in your prediction for next week means a huge error in your prediction for next month. And a huge, unpredictable error in your prediction for next month means we have no idea what the weather will be like exactly one year from now.
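
If you want to watch that cascade happen, here is a rough Python sketch of my own. It crudely steps through Lorenz’s famous 1963 toy model of atmospheric convection, starting from two “todays” that differ by only one part in a million; the “days” are just arbitrary chunks of model time, not real forecasts:

    def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # One crude Euler step of the Lorenz 1963 equations.
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        return x + dx * dt, y + dy * dt, z + dz * dt

    a = (1.0, 1.0, 1.000001)  # "today" as it really is
    b = (1.0, 1.0, 1.0)       # "today" as we measured it, a hair off

    for day in range(1, 31):
        for _ in range(100):  # pretend 100 small steps make one "day"
            a = lorenz_step(*a)
            b = lorenz_step(*b)
        print(f"day {day:2d}: forecasts differ by {abs(a[0] - b[0]):.6f}")
    # The difference stays microscopic for a while, then blows up and
    # bounces around with no pattern at all: the cascade described above.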

 

The only way to avoid this endless cascade of errors is to make sure you start with perfect super data. But collecting super data is somewhere between “ridiculously hard” and “genuinely impossible”. For example: in order to put together some super weather data you would need accurate air pressure and temperature readings from every single point on the globe. And to keep that data accurate you would need to know every time a child laughed, a butterfly took flight or anyone lit a match. Unless you’ve got a plan for turning the entire planet into a giant self-monitoring barometer* you might as well give up.

 

Without this super data even a super brain would have no chance of predicting the far future of the weather. It doesn’t matter that the super brain perfectly understands the mechanics of weather and can manipulate billions of numbers in his head; without perfect data he is stuck with the same seven-day forecasts as normal weathermen. Chaos Theory marks a clear limit beyond which pure intelligence cannot take you.

 

All of which suggests a good rule of thumb for writing realistic super brain characters: the more chaotic a problem is, the less useful the super brain is. Even characters with post-human levels of super-intelligence can’t predict the future of chaotic systems.

 

A super brain might be able to invent a new branch of chemistry, jury-rig a spaceship and learn an entire language just by skimming a dictionary, but he still won’t be able to predict next year’s weather or make more than an educated guess at which countries will still be around 1,000 years in the future.

 

But this is just a rule of thumb. If you have a really interesting story idea that requires the main character to be able to predict the future of the human race with 99% accuracy, then go ahead and write it. As long as it is entertaining enough (tip: lots of explosions and space princesses), I’m hardly going to throw a fit that you bent a few laws of mathematics.

 

 

 

* Fun story idea: Humanity builds a hyper-intelligent computer and asks it to solve the climate so we can finally stop arguing about what the climate is doing and how much of it is our fault. The computer concludes that it needs super data to do this and unleashes swarms of nanobots to turn the entire surface of the earth into a giant weather monitoring station. Can our heroes stop the nanobots and shut down the super brain AI before it wipes out the entire human race?

2 thoughts on “Super Brains, Super Data and Chaos Theory”

  1. Do you assume that a superbrain would be able to communicate with humans across the possible barriers of experienced time rate and two-valued logic? Perhaps a nascent superbrain given large amounts of data and information on multi-valued logic would develop too rapidly for a human experimenter to be able to gauge success or otherwise. Even given an event trail, the ramifications of the thought processes might quickly become beyond human assessment.

  2. I agree, it is entirely possible that normal humans would be unable to understand or communicate easily with a super brain. Even humans with similar intelligence levels often have trouble following each other’s thought processes, so I can only imagine how much harder it would be to try to follow along with a hyper-intelligent human or singularity-level AI. And how would the super brain see us? As clever children? Annoying children? Mere animals that it can no longer empathize with? This communication barrier is actually one of the topics Mr. Wright touched on in Count to a Trillion.

    On the other hand, I think the main point of my post holds true independent of communication issues. Even a brain too complex for us to understand or talk to would still need data to draw conclusions from, and chaotic systems would still require infinitely accurate data to solve. If the super brain doesn’t have some way to get that super data it can’t solve the problem no matter how smart it is. And since it is likely that some sorts of super data are physically impossible to obtain it also seems likely that even an infinitely intelligent mind would still find itself stumped when it came to things like predicting whether or not it will be raining exactly two years from now… at least without bringing time travel into the picture. But now we’re dealing with a whole ‘nother kind of sci-fi.
