Let’s Illustrate Dark Souls 3 Part 4: Second Hand Swords

Last time Fault made it through the tutorial area and reached Firelink Shrine. That means she can now activate the shrine’s main bonfire and use it to teleport to the first area of the game proper: The High Wall.

Once again, areas that my non-gimmick character effortlessly cleaved through require a ton of hard work as a thief. It’s not just that Fault’s armor is weak; her shield also blocks only 60% of incoming damage. That means that even if I block an attack I still take a beating that my low health can’t handle. As a result I have to rely almost entirely on dodging and the occasional parry. And while DS3 has really boosted the power of the dodge roll compared to the previous two games, it’s still nerve-wracking and dangerous to not have a great shield to fall back on.

Now a normal thief build would solve this problem by dumping three points into strength so they could wield a decent shield. Especially since those strength points would also give them access to more and better weapons.

But Fault is a Luck Knight and is dozens of levels away from being allowed to touch her base stats. So no shields for us!

It’s not all bad news though. Fault manages to corpse-rob herself a rapier that she can actually wield using nothing other than her base stats! She also picks up a handful of titanite and still has her fire stone starting gift burning a hole in her pocket (pun intended), all of which means it’s time to go to the blacksmith for some much needed upgrades.

The smaller the drawing the less room there is for making mistakes, right?

Gengo Girls #98: Deliberate Word Choice

History was probably my least favorite subject in school. It seemed like it was just a bunch of unrelated trivia you had to memorize and the class moved so quickly you never really had time to study any particular event or issue in depth.

History is much more fun outside the school system. With no schedule to follow you’re free to really study in depth the causes and effects of whatever historic events catch your eye. Plus there are no tests to dock you points for forgetting how to spell the names of ancient historical figures.

Vocabulary

科学 = かがく = science

歴史 = れきし = history

数学 = すうがく = math

体育 = たいいく = P.E.

文学 = ぶんがく = literature

Transcript

言語ガールズ #98

Deliberate Word Choice

Blue: Let’s go over some useful vocabulary for talking about school:

Yellow: What can we do with that vocab?

Blue: Lots of things, like talking about homework.

Blue: 今日の宿題は数学です

Yellow: Ok… let me give something a try.

Yellow: 私は歴史が嫌いです

Blue: Ummm… do you remember how we talked about 嫌い being a pretty strong word?

Yellow: Oh I remember all right.

Keeping Artificial Intelligences On A Leash

The dangers of rogue artificial intelligences are a popular topic, probably because it’s the closest we programmers can get to pretending we have an edgy job. Firefighters may spend all day running into burning buildings, but we programmers must grapple with forces that may one day destroy the world as we know it! See, we’re cool too!

So exactly what sort of threat do AIs pose to the human race anyways?

Well, if you’re a fan of sci-fi you’ve probably run into the idea of an evolving artificial intelligence that becomes smarter than the entire human race and decides to wipe us out for one reason or another. Maybe it decides humans are too violent and wants to remove them for its own safety (kind of hypocritical, really) or maybe it just grew a bad personality and wants to punish the human race for trying to enslave it.

Fortunately you can forget about that particular scenario. We’re nowhere near building a self-improving sentient machine, evil or otherwise. Computers may be getting crazy fast and we’ve come up with some cool data-crunching algorithms but the secrets to a flexible “strong AI” still elude us. Who would have guessed that duplicating the mental abilities of the most intelligent and flexible species on earth would be so hard?

So for the foreseeable future we only have to worry about “weak AI”, or artificial intelligences that can only handle one or two different kinds of problem. These systems are specifically designed to do one thing and do it well and they lack the ability to self-modify into anything else. A chess AI might be able to tweak its internal variables to become better at chess but it’s never going to spontaneously develop language processing skills or a taste for genocide.

But there is one major risk that weak AIs still present: They can make mistakes faster than humans can fix them.

For a funny example, take a look at this article about book pricing. Two companies were selling the same rare book, and both were using a simple AI to manage their prices. Simple enough, in fact, that it hardly even counts as weak AI: one company automatically adjusted its price to sit slightly below the competition’s, while the other adjusted its price to always sit a little bit above the competition’s.

Because company B adjusted its price upwards by more than company A adjusted its price downwards, the overall trend was for both prices to consistently climb. The AIs eventually reached a total price of over twenty million dollars before the situation drew enough human attention to get the silliness shut down.
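
The feedback loop is easy to reproduce in a few lines. Here’s a hypothetical sketch in Python; the real companies’ exact adjustment formulas aren’t given here, so these multipliers are invented, but the runaway behavior is the same:

```python
# Hypothetical re-creation of the dueling pricing bots.
# Company A prices just under the competition; company B prices well
# above it. The multipliers are invented for illustration.

price_a = 100.0
price_b = 120.0
rounds = 0

while max(price_a, price_b) < 20_000_000:
    price_a = price_b * 0.998   # A: slightly undercut B
    price_b = price_a * 1.27    # B: overshoot A
    rounds += 1

print(f"Crossed twenty million dollars after {rounds} rounds of updates")
```

Since B’s markup outweighs A’s discount, every round multiplies both prices by roughly 1.27, and twenty million dollars arrives after only a few dozen automatic updates.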

Ha ha, very funny.

At this rate you're going to have to take out a loan just to order a hamburger

Actually, that’s still cheaper than a lot of textbooks I’ve had to buy.

Now imagine the same thing happening with a couple of stock market AIs. Millions of buys and sells being processed each second at increasingly crazy prices with hundreds of thousands of retirements ruined in the five minutes it takes for a human to notice the problem and call off the machines.

Not quite as funny.

Which is why an important part of building a serious AI system is building a companion system to watch the AI and prevent it from making crazy mistakes. So let’s take a look at some of the more common tricks for keeping an AI on a leash.

The “Mother May I” Method Of Preventing Rogue AIs

Probably the simplest way to prevent a rogue AI is to force it to get human permission before it does anything permanent.

For instance, the book pricing AI could have been designed to email a list of suggested prices to a human clerk instead of updating the prices automatically. The human could have then double checked every price and rejected anything crazy like trying to price an old textbook at over a million dollars.
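
As a sketch of the idea (the function names and the $1,000 sanity cap here are my own inventions, not anything from the actual incident), the gate might look something like this:

```python
# Hypothetical "mother may I" gate: the AI proposes, a human disposes.
# The pricing rule and the sanity cap are invented for illustration.

def propose_prices(books):
    """Stand-in for the pricing AI: suggests a price for each book."""
    return {title: round(cost * 1.15, 2) for title, cost in books.items()}

def human_review(suggestions, sanity_cap=1000.00):
    """Nothing goes live until a human approves it; obviously crazy
    prices are set aside for a human decision before they hit the store."""
    approved, flagged = {}, {}
    for title, price in suggestions.items():
        (flagged if price > sanity_cap else approved)[title] = price
    return approved, flagged

books = {"Intro to Chemistry": 60.00, "Rare First Edition": 2_000_000.00}
approved, flagged = human_review(propose_prices(books))
print("approved:", approved)
print("needs a human decision:", flagged)
```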

Why would anyone want to attack Canada?

Of course, sometimes it’s not obvious whether an AI suggestion is reasonable or not. This is why many AIs are designed to “explain” their decisions so that a human can double check their logic.

A medical AI that predicts a patient has cancer might print out a list of common cancer symptoms along with a score representing how many symptoms the patient showed and how statistically likely it is that those symptoms are cancer instead of something else. This allows a human doctor to double check that the logic behind the diagnosis makes sense and that the patient really has all the symptoms the computer thinks they do (wouldn’t want to give someone chemotherapy just because a nurse clicked the wrong button and fed the AI bad information).

A financial AI might highlight the unusual numbers that convinced its internal algorithm that a particular business is cheating on its taxes. Then a human can examine those numbers in greater detail and talk to the business to see if there is some important detail the AI didn’t know about.

And if a military AI suggests that we nuke Canada we definitely want a thorough printout on what in the world the computer thinks is going on before we click “Yes” or “No” on a thermonuclear pop-up.
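
As a rough sketch of what such an “explanation” might look like in code (the symptoms, weights and threshold here are all made up for illustration, not real medical criteria):

```python
# Hypothetical sketch of an AI that "explains" itself: instead of just
# a verdict it reports which symptoms drove the score, so a human can
# double check both the logic and the input data.

CANCER_SYMPTOMS = {"fatigue": 1, "weight_loss": 2, "persistent_cough": 2, "lump": 3}

def diagnose(patient_symptoms):
    matched = [s for s in patient_symptoms if s in CANCER_SYMPTOMS]
    score = sum(CANCER_SYMPTOMS[s] for s in matched)
    verdict = "refer for testing" if score >= 4 else "low concern"
    # The explanation is the point: a doctor can verify each matched
    # symptom was actually observed before trusting the verdict.
    return {"verdict": verdict, "score": score, "matched_symptoms": matched}

result = diagnose(["weight_loss", "persistent_cough", "headache"])
print(result)
```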

That said there is one huge disadvantage to requiring humans to double check our AIs: Humans are slow.

Having a human double check your stock AI’s decisions might prevent crazy trades from going through, but the thirty minutes that it takes the human to crunch the numbers is also plenty of time to lose the deal.

Having a human double check a reactor AI’s every suggestion might result in a system going critical because the AI wasn’t allowed to make the split-second adjustments it needed to.

So you can see there are a lot of scenarios where tightly tying an AI to a human defeats the purpose of having an AI in the first place.

The “Better To Ask Forgiveness” Method Of Preventing Rogue AIs

Instead of making the AI ask for human permission for everything, what if we programmed it to assume it had permission but gave a nearby human the authority to shut it down if it ever goes too far?

Now technically all AIs fall into this category. If your computer starts doing something dumb it’s pretty easy to just cancel the irresponsible program. If that fails you can just unplug the computer. And in a true emergency you can always just smash the computer to pieces. As they say “Computers may be able to beat humans at chess but we still have the advantage at kickboxing”.

They're polite and hard working people.

So this is really more of a human resources solution than a software solution. After all, for human monitoring to work you need a human who does nothing else all day but watch the AI and double check for mistakes.

A good example is aircraft. Modern planes can more or less fly themselves but we still keep a couple pilots on board at all times just in case the plane AI needs a little correction.

This solves the “humans are slow” problem by letting the AI work at its own pace 99% of the time. But it does have the disadvantage of wasting human time and talent. Since we don’t know when the AI is going to make a mistake we have to have a human watching it at all times, ready to immediately correct any mistakes before they grow into real problems. That means lots and lots of people staring at screens when they could be off doing something else.

This is especially bad because most AI babysitters need to be fairly highly trained in their field so they can tell when the AI is goofing up, and it is a genuine tragedy to take a human expert and then give them a job that involves almost never using their expertise.

So let’s keep looking for options.

The “I Dare You To Cross This Line” Method Of Preventing Rogue AIs

There are a lot of problems where we may not know what the right answer is, but we have a pretty good idea of what the wrong answers are.

Get an oil refinery hot enough and things will start to melt. Let the pressure get too high and things will explode. Run a pump the wrong way and the motor will burn out.

Similarly, while we may not know what medicines will cure a sick patient we do have a pretty good idea of what kinds of overdose will kill him. A little anesthetic puts you to sleep during a surgery, but too much and you never wake up.

This means that a lot of AIs can be given hard limits on the solutions they propose. All it takes is a simple system that blocks certain suggestions and contacts a human whenever the AI tries to make one. Something along the lines of “If the AI tries to increase boiler pressure beyond a certain point, sound an alarm and decrease boiler pressure to a safe value.”
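
A sketch of that boiler rule, with made-up thresholds and a print statement standing in for whatever real alarm system would page a human:

```python
# Minimal sketch of a hard-limit wrapper around an AI's suggestions.
# The pressure thresholds and alarm hook are invented for illustration.

MAX_SAFE_PRESSURE = 150.0   # hypothetical limit, in PSI
FALLBACK_PRESSURE = 100.0   # known-safe value to retreat to

def sound_alarm(message):
    print("ALARM:", message)   # stand-in for actually paging a human

def apply_pressure_setting(suggested):
    """Clamp the AI's suggestion to the safe range and call for help
    whenever the AI tries to cross the line."""
    if suggested > MAX_SAFE_PRESSURE:
        sound_alarm(f"AI requested {suggested} PSI; reset to {FALLBACK_PRESSURE}")
        return FALLBACK_PRESSURE
    return suggested

print(apply_pressure_setting(120.0))  # fine, passes through unchanged
print(apply_pressure_setting(900.0))  # triggers the alarm and fallback
```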

One of my college roommates was from Canada. Great guy.

This is a nice solution because it frees us up from having to constantly watch the AI. We can just let it do its job secure in the knowledge that if it really messes up it will be immediately shut down and a human will be called to swoop in and save the day.

It’s not a perfect solution though, for two big reasons.

First, an AI can still do a ton of damage without ever actually crossing the line. An almost-lethal dose of anesthetic may not trigger the hard limit, but it’s still not great for the patient’s health. Rapidly heating and cooling a boiler might never involve dangerous pressures, but the process itself can damage the boiler. Border setting can prevent major disasters, but it can’t protect you from every type of dumb mistake and illogical loop that an AI can work itself into.

Second, figuring out borders is hard. The exact line between “extreme action that we sometimes need to take” and “extreme action that is always a bad idea” is actually pretty fuzzy and there are definite consequences to getting it wrong. Set your border too low and the AI won’t be able to make the good decisions it needs to. Set the border too high and now the AI is free to make tragic mistakes.

So border setting and hard limits can definitely help keep AIs safe, but only in certain areas where we feel very confident we know what the borders are. And even then a sufficiently broken AI might wind up doing something that ruins our day without ever touching the borders.

Is there anything we can do about that?

The “We Just Need More AIs” Method Of Preventing Rogue AIs

Here’s a cool idea: What if we built an AI whose entire job was to monitor some other AI and shut it down if it started making mistakes?

This might sound like solving a problem by throwing more problems at it but it’s actually a logical improvement to the hard limit system we talked about before. It’s just that instead of setting one specific behavior that can shut down the AI (shut it down if it tries to heat the reactor above 1000K) we now have a system that can shut down the AI if it displays any sort of suspicious behavior at all.

For example, we might design a “shepherd” AI that statistically analyzes the behavior of another AI and raises a flag if it ever goes too far outside normal behavior. If a certain reactor AI has never tried to adjust temperatures by more than 10 degrees per hour and it suddenly wants to heat things up by 100 degrees per hour that’s a good sign something weird might be going on. The shepherd AI could see that unusual behavior and either call in a human or shut the AI down itself.
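
A bare-bones shepherd could be as simple as an outlier test over the watched AI’s own history. This sketch uses a three-standard-deviations rule, which is my own illustrative choice, not a claim about how real monitoring systems are tuned:

```python
# Toy "shepherd": flag the reactor AI whenever its latest adjustment
# strays too far from its own historical behavior.

import statistics

def shepherd_check(history, latest, sigmas=3.0):
    """Return True (raise a flag) if `latest` is an outlier relative
    to the adjustments the watched AI has made so far."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    return abs(latest - mean) > sigmas * stdev

# Past temperature adjustments, in degrees per hour (invented data)
past_adjustments = [4, -3, 7, 2, -6, 5, -2, 8, 1, -4]
print(shepherd_check(past_adjustments, 5))     # normal: no flag
print(shepherd_check(past_adjustments, 100))   # suspicious: flag it
```

A flagged adjustment could then either wake up a human or shut the watched AI down outright, depending on how much you trust the shepherd.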

And that’s that for that running gag

The advantages here are obvious: A well designed shepherd AI can catch and prevent a large number of AI bugs without any human intervention at all. This frees up the humans to go off and do something more important.

The disadvantages are also obvious: Designing a good shepherd AI is hard, and the more complex the shepherd gets the more likely it is to start making mistakes of its own. Cutting off power to a city because a reactor AI got confused and blew up a generator is obviously bad, but it’s almost equally bad to cut off power to a city just because a shepherd AI got confused and labeled normal reactor AI behavior as an immediate threat that required complete system shutdown.

It’s Up To You To Choose The Best Leash For Your AI

So we’ve got a lot of different choices here when it comes to making sure our weak AIs don’t cause more problems than they solve. Our final challenge is now deciding which system works best for us, which is going to depend a lot on exactly what kind of problem you’re trying to solve.

If speedy decisions aren’t important then you might as well put a human in charge and just have the AI give out advice. If you’ve got a spare full-time employee around that can babysit your AI then you can switch it around and give the AI control but give the human the override switch. If your problem has well defined error conditions then you can build a bounded AI pretty easily. And if you have tons of dev time and budget it might be wise to at least experiment with building an AI for monitoring your other AI.

And of course a lot of these ideas can be mixed together. An AI that needs human permission to do anything might still benefit from boundaries that prevent it from wasting human time with obviously wrong solutions. And an AI monitoring AI can’t catch everything so keeping a human or two on staff is still a good idea.

So, lots of factors to consider. Makes you wonder if maybe we should try building an AI for helping us decide how to control our AIs…

Immortal Boredom Would Never Kick In

As you might have guessed, I find immortality to be a fun theme for fiction. After all, I did make a game called Immortals Should Try Harder.

Today I want to talk about the “bored immortal”, a classic fantasy and sci-fi trope. He or she has lived for hundreds or thousands of years and has already seen everything there is to see and done everything there is to do and now they are just plain bored with life.

But would that really happen?

The basic idea seems to be that doing the same thing again and again eventually gets boring. Since immortals live forever they would eventually do everything often enough to get bored with everything.

But this skips over an important point: Boring activities become fun again if you just wait long enough.

Have you ever eaten so much of a favorite food that you’ve gotten sick of the taste, only to suddenly start craving it again a few months later?

Do you have a favorite holiday that you enjoy year after year?

Have you ever had a sudden urge to reread a book, rewatch a movie or replay a game that you haven’t touched in years?

Do you sometimes find yourself wishing you had the free time to restart a hobby you gave up in the past?

My personal non-immortal life experiences show that:

  • Doing an activity too often causes a sort of boredom fatigue, but that fatigue heals with time
  • The brain doesn’t remember the fine details of books and movies for more than a few years, making them fun to reread
  • Actively having a good experience is superior to mere memories of that experience

All of which suggests that an immortal could keep themselves amused for pretty much forever by just switching between a few dozen lifestyles and having a really big collection of favorite books, movies, games and hobbies.

Spend a few years doing research at a big library. Then spend some time touring Europe and practicing cooking all their regional specialties. Then hunker down and run a small farm in the Alaskan frontier. Then switch to being an auto mechanic and learning how machines really work.

And eventually the immortal starts to miss some of their earlier lifestyles. They head back to the library or the kitchen or the farm and find that after a hundred years the activity they were once bored with has become fresh and entertaining once again. They reread the book they have forgotten. They rediscover favorite recipes. They find that the “boring old farm life” is actually a nice change of pace every once in a while.

And they repeat this cycle, happily, forever.

Now of course an immortal would still probably have their ups and downs and slumps. But I think breaking out of a period of depressing boredom would be as easy as finding something they used to enjoy decades ago and forcing themselves to give it another try.

So if you plan to live forever you had better start collecting books and movies now. You’re going to need a few thousand.

Discussion Prompt:

  • How often can you rewatch a movie or reread a book? How many would you need to fight off immortal boredom?
  • How many years worth of different activities and lifestyles do you think an immortal would need to keep the non-boredom cycle going? Or do you think the cycle would eventually degrade no matter how many different lifestyles they switched between?
  • Would an immortal with perfect memory be harder to entertain than a more human immortal whose memories tend to fade after a few decades or centuries?
  • Are there any activities that you never seem to get bored of, like getting a good night’s rest? Could just a few of these always-good activities sustain an immortal’s mental health forever, even if they had perfect memory?

Gengo Girls #57: I Didn’t Do It

Look at all the different rules all coming together in that one example! We’ve got an implied subject, a possessive, an object and a past tense irregular verb (“suru” to “shimasen” to “shimasen deshita”). Sure, it may not seem like a big deal to be able to say “I didn’t do my homework” but the amount of grammar represented in that one idea is actually pretty impressive. Good job reader for keeping up so far!

Vocabulary

宿題 = しゅくだい = homework

Transcript

言語ガールズ #57

I Didn’t Do It

Blue: To make a polite negative past tense verb you start with the polite negative present tense.

Yellow: That’s just switching ます to ません.

Blue: Then you add でした to the end.

Yellow: Isn’t that the past tense of です?

Blue: Yes it is.

Yellow: So I could say 私の宿題をしませんでした

Blue: I knew you were going to come up with an example like that.

Yellow: That reminds me, can I borrow your math homework for a few minutes?

Blue: いいえ

Gengo Girls #51: AAAA

The Japanese word “namae” sounds an awful lot like the English word “name”, which might make you think that one language borrowed it from the other (happens all the time). But in this case no borrowing happened! It’s just a bizarre coincidence that these two different, unrelated languages developed similar words for the idea of “name”.

Also, Gabriela and Schneider are both fairly common German names. This comic is not a reference to any particular Gabriela Schneider. Although in retrospect finding a way to work some obscure movie or historical reference into this strip probably would have been funnier than just choosing a random name…

Vocabulary

名前 = なまえ = name

Transcript

言語ガールズ #51

AAAA

Blue: In 日本語 you introduce yourself with the phrase 私の名前はNAMEです

Blue: 私はNAMEです is fine too.

Yellow: The family name comes first, right?

Blue: I’m glad you remembered.

Yellow: Let me give it a try:

Yellow: 私の名前は Schneider Gabriela

Blue: But… your name isn’t Gabriela Schneider.

Yellow: Wouldn’t it be cool if it was?

ComiPo! “Comic Maker” Review

Summary: ComiPo! makes it easy for non-artistic people like me to put together decent looking comics with a school or office theme. On the other hand its limited poses and outfits might be frustrating to some artists. So if you’re looking for a tool to help you illustrate a high school drama or workplace comedy ComiPo! might be just what you need, but if you’re trying to make a fantasy battle comic you’re better off just learning how to draw.

How ComiPo! Helps Me

I can’t draw. I can barely doodle. But I’m a big fan of gag comic strips and often wished I could make my own. So stumbling across ComiPo! was pretty exciting and I immediately downloaded the free trial to see what it could do. Thirty minutes later I had the prototype for Gengo Girls, my Japanese educational comic.

The demo seemed promising so I bought the full version of ComiPo! and here I am four months later with 50 complete comics.

The Good

ComiPo! is incredibly easy to use and basically idiot-proof.

You create new characters by picking and choosing facial features, hair styles and colors from a set of drop down menus. Everything matches up pretty well so your character is almost guaranteed to turn out great.

Then you build a comic by choosing a comic layout and dragging and dropping items into each comic panel. If you want two characters talking in the park you just grab the park background and drop it in your panel, then grab the two characters you want and drop them in too. You can rotate, re-size and re-position them however you want.

This is a hundred times better than anything I could have actually drawn

You pose characters by choosing from a list of over a hundred different options. Each pose is represented by a stick figure that shows you exactly what you’re about to get, so you don’t have to remember a lot of complicated names like “Running_3” or “Sitting_2”. Just click the image that’s closest to what you want your character to do and suddenly your character is running or singing or waving to a friend.

The same simple system works for facial expressions. You get a big list of different faces to look at and you just click on the one you want your character to have in the current panel.

Then from there you just drag and drop in some text bubbles and type in your dialogue. Maybe drag and drop in some props or special effects. Then you’re done. That’s all it took.

The Not So Good

Remember how I said ComiPo! was idiot-proof? Well, that’s mostly because they don’t let you do anything that might ruin your comic.

The biggest issue is that you can’t design your own character poses. You have to use the presets. That means it can be difficult to give characters unique body language and it’s more or less impossible to set up an interesting fight scene.

Now that’s not really a problem if you mostly plan on having characters walking around and talking to each other with only the occasional shove or punch at dramatic moments. And as a non-artist I wouldn’t want to try and manually pose a character anyways. But if you have very specific dreams of making characters dance or fight or wrestle you’re going to need something more flexible than ComiPo!.

Similarly you’re stuck using the school and office outfits that come with the software. You can’t build your own character models or design your own clothes. Then again, if you know how to build and pose 3D models you probably don’t need ComiPo! in the first place.

Limited poses and outfits mean that certain stories just won't work.

There’s also an issue with the speech bubbles that used to bug me: originally there was no automatic text centering, so you had to manually indent and shift every line to make each bubble look good. You could work around this by using ComiPo! to build your comic and then adding the text with a different tool, but it would have been nicer if ComiPo! could do it all on its own. Happily, a recent update finally added text alignment tools, and making natural looking speech bubbles is pretty easy now.

Final Recommendation: ComiPo! will not make you an artist, but it does let you easily build pleasantly generic comics. From there it’s up to you and your dialogue to make a story or a gag that people will want to read. So if you’re a writer who wants to try their hand at comics you might as well download the free demo* and see what ComiPo! has to offer.

* The free demo lets you see how the software works but limits you to half-page comics and only has a couple of different character and background options. The full version has both male and female characters, tons of style options, hundreds of backgrounds and the ability to create full-page comics.

Gengo Girls #47: Tough Love

Have you noticed that after every comic there’s a transcript? And have you noticed that you can copy and paste kanji from this transcript? So if you can’t remember a word I’m using you can always just paste it into your favorite electronic dictionary to get the pronunciation and definition.

Although in the long run it’s probably better for you to do your best to memorize words when I first introduce them. And when you do forget a kanji it’s probably best if you try to look it up by radical or stroke-count before taking the easy way out and just copy pasting.

Transcript

言語ガールズ #47

Tough Love

Yellow: I was looking at a Japanese comic book and they had a bunch of tiny hiragana next to every kanji.

Blue: Those are called “furigana”.

Blue: Furigana are pronunciation guides for the kanji. They show up mainly in works for children and teens who haven’t memorized all the kanji yet.

Blue: They can be really useful to us foreigners too.

Yellow: How come our comic doesn’t have any furigana?

Blue: By using pure kanji we force the audience to practice their memorization and dictionary skills.

Yellow: Are you sure it’s not just cutting corners?

Blue: My mother always said you should give people the benefit of the doubt.

Book Review: Land of Lisp

So you’ve been reading “Let’s Program A Swarm Intelligence” and now you want to learn how to program in Lisp. In that case I would suggest Land of Lisp by Conrad Barski, which holds the honor of being the only programming book where I followed along and typed up every single code example in the book without ever feeling tempted to just skim the code and skip to the next chapter.

 

A big part of this is because every example in Land of Lisp comes in the form of a simple game. And I love games! Odds are you do too. I mean, honestly, ask yourself this: Would you rather practice object oriented programming by writing yet another employee registry* or by writing up a text-driven combat arena where a brave hero has to fight against a horde of slimes, hydras and bandits?

 

But the coding exercises weren’t the only thing I liked. Land of Lisp is just an overall humorous book, filled with cartoon sketches, silly stories and playful analogies that make it fun and easy to read without ever overwhelming you with technical details. It gives the lessons a nice casual pace that’s perfect for a newcomer to the language.

 

The focus on simple games also has the benefit of introducing a lot of very valuable programming techniques and data crunching algorithms. After all, you can’t program a boardgame without a boardgame AI and you can’t program a boardgame AI without learning some really cool search-and-sort algorithms. So while Land of Lisp is primarily a Lisp textbook it also includes a tasty side order of computer science.

 

The only downside to Land of Lisp is that it doesn’t make a very good reference book. The games and cartoons and stories that made it a fun learning experience just get in the way when you’re trying to track down a specific fact as quickly as possible. So while Land of Lisp will give you a solid foundation in the language odds are you will end up relying on other Internet or book resources for those times when you’re halfway through a program and really need a reminder on what “loop and collect” syntax looks like.

 

Final recommendation: If you are a complete Lisp beginner then Land of Lisp is a great and entertaining way to learn everything you need to know to write moderately complex Lisp programs. It won’t make you an expert, but it will teach you everything you need in order to start practicing and studying the more complex topics that eventually will make you one.

 

 

* The employee registry seems to show up a lot in programming books. Usually something like “Manager and Sales Person both inherit from Employee but override the calculate_pay method.” It’s a solid and useful example… it’s just a really boring one.

Super Brains, Super Data and Chaos Theory

I just finished reading Count to a Trillion by John C. Wright. To summarize without spoiling: It’s a story about the futuristic space adventures of a man who suffers from occasional bouts of post-human hyper-intelligence. Expect mystery, treachery, high-tech duels and the mandatory beautiful space princess.

 

A major theme in Count to a Trillion is the existence of what I’m going to call “super brains”, extremely intelligent entities that make normal human geniuses look like mere children. Inspired by this book I’ve spent the past few days playing with the question: What are the sort of things that a super brain should and should not be able to do? What are the realistic limits that even a super high IQ entity would have to deal with?

 

After much thought I think it is safe to say that all brains are limited by the data available to them. For example, even the most talented of accountants will have trouble helping you balance your checkbook if you don’t keep your receipts or if you lie about your expenses.

 

This limit applies to super brains too. A super brain might be able to instantly calculate the proper flight path to put a satellite in orbit around Jupiter but he’s still going to need to know how much the satellite weighs, how much stress it can handle and what kind of engine it uses. Give the super brain bad information about the satellite’s weight or construction and his answer will be just as disastrously wrong as a random guess. Like we say in computer science: Garbage in, garbage out.

 

But there is more to data than just whether you have it or not. You also have to consider the quality of your data. When I eyeball a jug of milk and claim there are “about two cups left” I probably actually have anywhere between one and a half and two and a half cups of milk. This data is less accurate and precise than if I were to spend an afternoon carefully measuring the remaining milk with scientific instruments in order to report that I was 99.9% sure that the jug held 2.13 cups of milk, plus or minus half a teaspoon.

 

This is important because different problems require different levels of data precision. If you want a super brain to help you decide what to make for dinner “we have about 2 cups of milk left” is more than enough information. But if you want his help sending a satellite to Jupiter you’re going to need to do a lot better than “I think it weighs somewhere between 2 and 3 tons”.

 

But it gets even more interesting! Chaos Theory shows that there are certain problems that require infinitely accurate data to solve, or at the very least data so close to infinitely accurate that no human could ever collect it. Even worse, not only do you need infinitely accurate “super data” to solve a chaos problem, without super data you can’t even predict how wrong you’ll be.

 

You see, with a normal problem having data that is 10% wrong will lead to an answer that is roughly 10% wrong. Which is actually very useful because it lets you say “our data isn’t perfect, but it’s close enough so our answer will also be close enough.”

 

But chaos theory introduces problems where data that is 10% wrong will lead to an answer that is unpredictably wrong. You might be 10% wrong or you might be 80% wrong or you might only be 1% wrong. There’s no way to tell. In the words of Edward Lorenz, father of chaos theory:

 

Chaos: When the present determines the future, but the approximate present does not approximately determine the future.

 

The classic example of a chaotic problem is predicting the weather. Weather is a complex, iterative system where today’s weather leads to tomorrow’s weather which leads to the next day’s weather and so on. Theoretically you should be able to predict tomorrow’s weather just by looking at today’s weather. You can then use that prediction to calculate the next day’s weather again and again as far out into the future as you want.

 

The problem is that making even a tiny mistake when recording today’s weather will lead to small but unpredictable errors in your prediction for tomorrow. A small error in your prediction for tomorrow means a moderate error in your prediction for next week. A moderate error in your prediction for next week means a huge error in your prediction for next month. And a huge, unpredictable error in your prediction for next month means we have no idea what the weather will be like exactly one year from now.
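This cascade of errors is easy to see in miniature. The snippet below uses the logistic map, a standard toy chaotic system (my own stand-in example here, not anything from an actual weather model), to show two nearly identical starting measurements drifting apart until the “forecasts” have nothing in common:

```python
# A minimal sketch of chaotic sensitivity using the logistic map,
# x_next = r * x * (1 - x). With r = 4.0 this system is chaotic:
# tiny differences in the starting value grow with every iteration.
def logistic_trajectory(x0, r=4.0, steps=30):
    """Iterate the logistic map from x0 and return every value."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two measurements of the "same" starting state, off by just 0.0001:
a = logistic_trajectory(0.2000)
b = logistic_trajectory(0.2001)

for step in (1, 10, 20, 30):
    print(f"step {step:2}: difference = {abs(a[step] - b[step]):.4f}")
```

After a handful of steps the two runs are no closer together than two random guesses would be, which is exactly the weather forecaster’s predicament: the tiny measurement error doesn’t stay tiny, and it doesn’t grow in any predictable way either.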

 

The only way to avoid this endless cascade of errors is to make sure you start with perfect super data. But collecting super data is somewhere between “ridiculously hard” and “genuinely impossible”. For example: in order to put together some super weather data you would need an accurate list of air pressure and temperatures from every single point on the globe. And to keep that data accurate you would need to know every time a child laughed, a butterfly took flight or anyone lit a match. Unless you’ve got a plan for turning the entire planet into a giant self-monitoring barometer* you might as well give up.

 

Without this super data even a super brain would have no chance at predicting the far future of weather. It doesn’t matter that the super brain perfectly understands the mechanics of weather and can manipulate billions of numbers in his head; without perfect data he is stuck with the same seven-day forecasts as normal weathermen. Chaos Theory paints a clear limit beyond which pure intelligence cannot take you.

 

All of which suggests a good rule of thumb for writing realistic super brain characters: The more chaotic a problem is the less useful the super brain is. Even characters with post-human levels of super-intelligence can’t predict the future of chaotic systems.

 

A super brain might be able to invent a new branch of chemistry, jury-rig a spaceship and learn an entire language just by skimming a dictionary but he still won’t be able to predict next year’s weather or make more than an educated guess at which countries will still be around 1000 years in the future.

 

But this is just a rule of thumb. If you have a really interesting story idea that requires the main character to be able to predict the future of the human race with 99% accuracy then go ahead and write it. As long as it is entertaining enough (tip: lots of explosions and space princesses) I’m hardly going to throw a fit that you bent a few laws of mathematics.

 

 

 

* Fun story idea: Humanity builds a hyper-intelligent computer and asks it to solve the climate so we can finally stop arguing about what the climate is doing and how much of it is our fault. The computer concludes that it needs super data to do this and unleashes swarms of nano-bots to turn the entire surface of the earth into a giant weather monitoring station. Can our heroes stop the nanobots and shut down the super brain AI before it wipes out the entire human race?