Let’s Illustrate Morrowind Part 8: Cheap Source Of Protein

Our next fighters guild task is a bit harder: We need to dispose of a couple of poachers who have set up shop in the local egg mine. That’s right, egg mine. Apparently Morrowind is home to a species of giant ant-monsters that hollow out mountains and fill them with giant eggs. The eggs are very nutritious and can be safely mined as long as you don’t take too many or attack any of the worker insects. These eggs basically form the backbone of the region’s food chain and export economy. World building!

Ant farms are pretty hardcore in Morrowind

Anyways, the hardest part of stopping the poachers is just tracking them down through the winding insect filled mazes. Once Fault finds them it’s a simple matter to don her demon helm, call up a spear and stab in the name of justice.

Let’s Illustrate Dark Souls 3 Part 25: No One Expects The Lothric Inquisition

After a few tries and a few burnt embers Fault emerges victorious from her battle against Aldrich. Having defeated the third Lord of Cinder she is magically transported back to the High Wall and receives a final desperate quest from a dying priestess: Find Prince Lothric and remind him that he really needs to go be a Lord of Cinder too.

Clever players might have noticed that there were four empty thrones, not three. I was not clever.

While I suppose being teleported here is nice (since it means we don’t have to find this cutscene on our own), it’s also kind of annoying, because with Aldrich dead we could be off looting the empty throne room of Anor Londo for a ring of fast healing, which would be great since most of Fault’s current rings are kind of lame. So we ignore the dying woman’s last request and run off to the nearest bonfire.

A few minutes later, ring in hand and souls well spent, Fault returns to the High Wall and attempts a ritual meant to open the path to Lothric castle. She also, coincidentally, finally finds the Lothric Knight Leggings she was looking for way back at the beginning of her quest for pants. Too little, too late though; I already have better stuff.

What was I talking about? Oh right, the Lothric castle ritual. Starting the ritual summons the Boreal Dancer, a giant armored gypsy-looking monster with a pair of sweeping swords that let her threaten pretty much the entire room all at once. Her back story is kind of sad. Apparently she was a noble who got on the wrong side of Sulyvahn and was “rewarded” with an assignment to become a traveling warrior. She even got some sweet magic equipment out of the deal. But the curse of the Pontiff eventually drove her mad and remade her into a beast, just like what happened with good old Vordt.

Speaking of Vordt, remember how he kept killing Fault with a two-hit combo that she didn’t have enough health to survive? Well, the dancer is that fight all over again but worse. Her more powerful combos are deadly enough to knock away Fault’s shield and kill her in two hits, which means that even though I can dodge them 90% of the time, the other 10% finish me off.

Looks like now is a great time to run around exploring older areas in hopes of finding cool treasure (that Fault probably can’t use) and earning enough souls to level up a few more times.

Let’s Illustrate Dark Souls 3 Part 20: The Enemy Of My Enemy

The good news is that Fault managed to kill a gimmick boss Lord of Cinder. The bad news is that he was the last gimmick boss in the main game and from here on out we have to actually fight enemies to win. Fault is probably going to need some new levels.

That means it’s time to explore one of the optional bonus dungeons! Fault warps back to the catacombs and makes her way to the rickety old rope bridge. A single poke causes the whole thing to fall apart. While this originally might seem like a trap designed to punish anyone caught fighting the dozens of skeletons in the area it actually serves a deeper purpose. Give the broken bridge a moment to stop bouncing around and you can actually use its remains as a ladder to descend to the Smoldering Lake.

On the way you have to fight Diablo, Lord of Terror. Or at least his generic mini-boss cousin. Fault’s attacks barely dent the thing’s health bar but I eventually win with the following clever strategy.

1) Watch the demon accidentally charge its way into a pit full of skeletons

2) Come up with a plan for using Fault’s newly gained high ground advantage for endless dive attacks

3) Stare in confusion as the skeletons friendly fire the demon to death before I jump even once

Must be that luck score at work.

As long as I still get the XP…

Let’s Illustrate Dark Souls 3 Part 13: Power Underwhelming

Last time Fault looted all the obvious treasure from the outskirts of the Chapel of the Deep. This time it’s back to the Road of Sacrifice to poke around the various ruins and swampy corners we ignored before. This yields some awesome treasure, including the Farron Coal (which I completely forgot was hidden here).

Bringing coals back to Andre unlocks new weapon upgrades and one of the Farron Coal upgrades is poison. Poison is one of the few weapon stats that supposedly scales with luck so we’ll be taking advantage of this as soon as we can find a poison stone to work with.

So we take the second exit out of the Road of Sacrifice and end up in the swampy Farron Keep which is full of poison slugs and toxic swamp water (But no leeches. Yay!). Fault then stabs her way through about twelve slugs and lo and behold one of them drops a poison stone. I must say that farming with a luck score is pretty neat.

Back at base Fault has Andre infuse her old bandit knife with poison. This not only adds the poison effect, it improves the existing bleed effect. Damage per hit unfortunately takes a bit of a dive, but that’s what the fire rapier is for. Our new poison dagger will be for hit-and-run tactics against tough targets that we don’t want to fight outright.

Or at least that’s the plan. I have no idea how well poison works in Dark Souls 3 but it scales off of luck so this would be a pretty sad luck build if I didn’t try it at least once. (Update: It’s not that great. Most normal enemies are faster to just kill than to poison while most bosses are virtually immune to status ailments. Fault will be sticking with her rapier.)

This zombie feels mildly inconvenienced by having been poisoned.

Let’s Illustrate Dark Souls 3 Part 11: How Magical

Fault has been doing a surprisingly good job of carving her way through the Road of Sacrifice without dying every three steps, so I decide it’s time to head to the next boss. On my non-gimmick file I got lost and didn’t fight this boss until much later in the game, when I was severely overpowered, so fighting it at the level I’m supposed to is actually kind of nice and gives me an opportunity to see all of the boss’s attacks! On the downside I wind up dying half a dozen times or so before winning, but by Luck Knight standards that’s a roaring success.

Anyways, the boss here is a big old wizard who shoots you with crystal magic and teleports away every time you land a good combo on him. When he gets low on health he’ll start summoning clones that die in a single hit but spam magic attacks until then. Try to avoid getting caught in their crossfire and don’t get so focused on hitting the true boss that you neglect to watch your back for clones firing surprise crystal lasers.

This is actually one of the clones. The real boss is RIGHT BEHIND YOU RIGHT NOW!

Let’s Illustrate Dark Souls 3 Part 6: Performance Enhancing Embers

Last time Fault the thief forged herself a +1 fire rapier and then died repeatedly to the game’s first non-tutorial boss: Vordt of the Boreal Valley.

The basic problem is that Fault doesn’t have enough armor or HP to survive more than one attack in a row, which is unfortunate because Vordt’s fighting style involves a lot of combos where he uses one attack to knock you over and then hits you a second time while you’re standing up. All the estus in the world won’t do you any good if you go straight from full health to dead without a chance to drink it.

The obvious solution here is to just avoid getting hit in the first place, but my fire rapier only does 40 points of damage per poke, dragging the fight out and giving Vordt lots and lots of time to pull off a two-hit combo. I manage to work myself up to something like a 90% successful dodge rate, but that’s just not enough when the other 10% is lethal.

I finally burn one of my precious early game embers, which gives me just enough bonus health to survive the occasional two-hit combo. Fault can now survive long enough to fire poke Vordt to death.

This victory gives Fault a ton of XP souls which, according to my build rules, I have to use to level up luck. Bummer. But at least my luck is finally high enough I can start boosting my vitality for some extra HP!

This was much funnier in my head.

Let’s Illustrate Dark Souls 3 Part 5: A Burning Question

Last time Fault found enough upgrade materials to give one of her weapons a +1 bonus.

Equally important is the fact that she still has the fire stone she selected as a starting gift. This can be used to imbue a weapon with fire, which removes normal stat bonuses in exchange for a set amount of fire damage. Ex: A normal axe does extra damage based on how strong you are but a fire axe does the same damage no matter what your stats are.

Our luck build means that our stat bonuses are all horrible anyways so we should come out ahead in this deal. In fact, that’s why we picked this starting gift.

Most people can’t draw hands, but I really have no excuse for also failing to draw a simple cube.

But Fault only has enough materials to upgrade one of her weapons, meaning we have to decide between our rapier and our bandit knife. The rapier does more damage and has longer range, but the bandit knife has a bleed effect that does bonus damage if you hit the same enemy enough times in a row.

In the end Fault goes with the rapier. It turns out that a fire bandit knife does slightly less bleed damage than a normal bandit knife and it seems a shame to mess with the weapon’s gimmick. Of course, neglecting to upgrade it means we won’t use it as often which means we won’t see the bleed effect anyways but, meh, decision making is hard.

With our shiny new +1 Fire Rapier in hand we go to tackle the first non-tutorial boss which, of course, absolutely destroys Fault over a dozen times in a row despite the fact that I’m doing a better job at dodging his attacks and exploiting his openings than ever before. Tune in next time to find out why.

Let’s Illustrate Dark Souls 3 Part 4: Second Hand Swords

Last time Fault made it through the tutorial area and reached Firelink Shrine. That means she can now activate the shrine’s main bonfire and use it to teleport to the first area of the game proper: The High Wall.

Once again, areas that my non-gimmick character effortlessly cleaved through require a ton of hard work as a thief. It’s not just that Fault’s armor is weak; her shield also only blocks 60% of incoming damage. That means that even if I block an attack I still take a beating that my low health can’t handle. As a result I have to rely almost entirely on dodging and the occasional parry. And while DS3 has really boosted the power of the dodge roll compared to the previous two games, it’s still nerve-racking and dangerous to not have a great shield to fall back on.

Now a normal thief build would solve this problem by dumping three points into strength so they could wield a decent shield. Especially since those strength points would also give them access to more and better weapons.

But Fault is a Luck Knight and is dozens of levels away from being allowed to touch her base stats. So no shields for us!

It’s not all bad news though. Fault manages to corpse rob herself a rapier that she can actually wield using nothing other than her base stats! She also picks up a handful of titanite and still has her fire stone starting gift burning a hole in her pocket (pun intended), all of which means it’s time to go to the blacksmith for some much needed upgrades.

The smaller the drawing the less room there is for making mistakes, right?

Gengo Girls #98: Deliberate Word Choice

History was probably my least favorite subject in school. It seemed like it was just a bunch of unrelated trivia you had to memorize and the class moved so quickly you never really had time to study any particular event or issue in depth.

History is much more fun outside the school system. With no schedule to follow you’re free to really study in depth the causes and effects of whatever historic events catch your eye. Plus there are no tests to dock you points for forgetting how to spell the names of ancient historical figures.

Vocabulary

科学 = かがく = science

歴史 = れきし = history

数学 = すうがく = math

体育 = たいいく = P.E.

文学 = ぶんがく = literature

Transcript

言語ガールズ #98

Deliberate Word Choice

Blue: Let’s go over some useful vocabulary for talking about school:

Yellow: What can we do with that vocab?

Blue: Lots of things, like talking about homework.

Blue: 今日の宿題は数学です (Today’s homework is math.)

Yellow: Ok… let me give something a try.

Yellow: 私は歴史が嫌いです (I hate history.)

Blue: Ummm… do you remember how we talked about 嫌い being a pretty strong word?

Yellow: Oh I remember all right.

Keeping Artificial Intelligences On A Leash

The danger of rogue artificial intelligences is a popular topic, probably because it’s the closest we programmers can get to pretending we have an edgy job. Firefighters may spend all day running into burning buildings, but we programmers must grapple with forces that may one day destroy the world as we know it! See, we’re cool too!

So exactly what sort of threat do AIs pose to the human race anyways?

Well, if you’re a fan of sci-fi you’ve probably run into the idea of an evolving artificial intelligence that becomes smarter than the entire human race and decides to wipe us out for one reason or another. Maybe it decides humans are too violent and wants to remove them for its own safety (kind of hypocritical, really) or maybe it just grew a bad personality and wants to punish the human race for trying to enslave it.

Fortunately you can forget about that particular scenario. We’re nowhere near building a self-improving sentient machine, evil or otherwise. Computers may be getting crazy fast and we’ve come up with some cool data-crunching algorithms but the secrets to a flexible “strong AI” still elude us. Who would have guessed that duplicating the mental abilities of the most intelligent and flexible species on earth would be so hard?

So for the foreseeable future we only have to worry about “weak AI”, or artificial intelligences that can only handle one or two different kinds of problem. These systems are specifically designed to do one thing and do it well and they lack the ability to self-modify into anything else. A chess AI might be able to tweak its internal variables to become better at chess but it’s never going to spontaneously develop language processing skills or a taste for genocide.

But there is one major risk that weak AIs still present: They can make mistakes faster than humans can fix them.

For a funny example, take a look at this article about book pricing. Two companies were selling the same rare book, and both were using pricing AIs so simple they hardly even count as weak AI: one company automatically adjusted their price to be slightly lower than the competition’s, while the other adjusted their price to always be a little bit higher than the competition’s.

Because company B adjusted their price upwards by more than company A adjusted theirs downwards, the overall trend was for both prices to consistently go up. The AIs eventually reached a price of over twenty million dollars before the situation drew enough human attention to get the silliness shut down.

Ha ha, very funny.

At this rate you're going to have to take out a loan just to order a hamburger

Actually, that’s still cheaper than a lot of textbooks I’ve had to buy.
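If you want a feel for how fast that kind of feedback loop runs away, here’s a toy simulation. The starting prices, multipliers, and cutoff are all invented, not taken from the real incident; the point is just that a small markup compounds every round:

```python
# Toy simulation of two "pricing AIs" reacting to each other.
# All numbers here are invented for illustration.

def runaway_pricing(start_price=20.0, undercut=0.998, markup=1.27,
                    sanity_cap=1_000_000):
    price_a = price_b = start_price
    rounds = 0
    while price_a < sanity_cap and price_b < sanity_cap:
        price_a = price_b * undercut   # Company A: slightly undercut the competition
        price_b = price_a * markup     # Company B: always a little above the competition
        rounds += 1
    return rounds, price_a, price_b

rounds, a, b = runaway_pricing()
print(f"After {rounds} rounds: A = ${a:,.2f}, B = ${b:,.2f}")
# Each round multiplies both prices by roughly 0.998 * 1.27, so they
# blow past any sane value in a few dozen iterations.
```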

Now imagine the same thing happening with a couple of stock market AIs. Millions of buys and sells being processed each second at increasingly crazy prices with hundreds of thousands of retirements ruined in the five minutes it takes for a human to notice the problem and call off the machines.

Not quite as funny.

Which is why an important part of building a serious AI system is building a companion system to watch the AI and prevent it from making crazy mistakes. So let’s take a look at some of the more common tricks for keeping an AI on a leash.

The “Mother May I” Method Of Preventing Rogue AIs

Probably the simplest way to prevent a rogue AI is to force it to get human permission before it does anything permanent.

For instance, the book pricing AI could have been designed to email a list of suggested prices to a human clerk instead of updating the prices automatically. The human could have then double checked every price and rejected anything crazy like trying to price an old textbook at over a million dollars.
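A minimal sketch of that kind of approval gate might look like this (the sanity cap, book titles, and prices are all made up, and `ask_clerk` stands in for whatever approval channel the real system would use):

```python
# Hypothetical "Mother May I" gate: the AI only suggests prices,
# and nothing goes live without a human saying yes.

SANITY_CAP = 10_000.00  # anything above this is rejected without even asking

def review_suggestions(suggestions, approve):
    """suggestions: dict of title -> proposed price.
    approve: a callable that asks a human and returns True or False."""
    accepted = {}
    for title, price in suggestions.items():
        if price > SANITY_CAP:
            print(f"Auto-rejected: {title} at ${price:,.2f}")
            continue
        if approve(title, price):
            accepted[title] = price
    return accepted

def ask_clerk(title, price):
    answer = input(f"Set '{title}' to ${price:,.2f}? [y/n] ")
    return answer.strip().lower() == "y"

# The pricing AI proposes; the clerk disposes.
proposals = {"rare genetics textbook": 1_498_732.17, "intro biology workbook": 54.00}
live_prices = review_suggestions(proposals, ask_clerk)
```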

Why would anyone want to attack Canada?

Of course, sometimes it’s not obvious whether an AI suggestion is reasonable or not. This is why many AIs are designed to “explain” their decisions so that a human can double check their logic.

A medical AI that predicts a patient has cancer might print out a list of common cancer symptoms along with a score representing how many symptoms the patient showed and how statistically likely it is that those symptoms point to cancer instead of something else. This allows a human doctor to double check that the logic behind the diagnosis makes sense and that the patient really has all the symptoms the computer thinks they do. (Wouldn’t want to give someone chemotherapy just because a nurse clicked the wrong button and gave the AI bad information.)

A financial AI might highlight the unusual numbers that convinced its internal algorithm that a particular business is cheating on their taxes. Then a human can examine those numbers in greater detail and talk to the business to see if there is some important detail the AI didn’t know about.

And if a military AI suggests that we nuke Canada we definitely want a thorough printout on what in the world the computer thinks is going on before we click “Yes” or “No” on a thermonuclear pop-up.
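In code, an “explanation” can be as simple as returning the evidence along with the verdict. A toy sketch of the medical example (the symptoms, weights, and threshold are invented):

```python
# Toy "explainable" diagnosis: return a score plus the evidence behind it,
# so a human can sanity-check both the logic and the input data.
# The symptom weights and the 0.5 threshold are invented for illustration.

SYMPTOM_WEIGHTS = {
    "unexplained_weight_loss": 0.30,
    "persistent_fatigue": 0.15,
    "abnormal_bloodwork": 0.40,
    "family_history": 0.15,
}

def diagnose(patient_symptoms):
    evidence = {s: w for s, w in SYMPTOM_WEIGHTS.items() if s in patient_symptoms}
    score = sum(evidence.values())
    return {
        "flagged": score >= 0.5,
        "score": round(score, 2),
        "evidence": evidence,   # exactly what the human should double check
    }

print(diagnose({"abnormal_bloodwork", "persistent_fatigue"}))
# {'flagged': True, 'score': 0.55, 'evidence': {'persistent_fatigue': 0.15, 'abnormal_bloodwork': 0.4}}
```

Instead of a bare yes/no, the human gets to see exactly which inputs drove the decision and can verify them against reality.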

That said, there is one huge disadvantage to requiring humans to double check our AIs: Humans are slow.

Having a human double check your stock AI’s decisions might prevent crazy trades from going through, but the thirty minutes it takes the human to crunch the numbers is also plenty of time to lose the deal.

Having a human double check a reactor AI’s every suggestion might result in the system going critical because the AI wasn’t allowed to make the split-second adjustments it needed to.

So you can see there are a lot of scenarios where tightly tying an AI to a human defeats the purpose of having an AI in the first place.

The “Better To Ask Forgiveness” Method Of Preventing Rogue AIs

Instead of making the AI ask for human permission for everything, what if we programmed it to assume it has permission but gave a nearby human the authority to shut it down if it ever goes too far?

Now technically all AIs fall into this category. If your computer starts doing something dumb it’s pretty easy to just cancel the irresponsible program. If that fails you can just unplug the computer. And in a true emergency you can always just smash the computer to pieces. As they say “Computers may be able to beat humans at chess but we still have the advantage at kickboxing”.
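In software terms this approach boils down to a kill switch: the AI runs on its own schedule, and a human can halt it the moment something looks wrong. A minimal sketch, with a placeholder control loop standing in for whatever the AI actually does:

```python
# Minimal "ask forgiveness" setup: the AI acts freely,
# but a shared flag lets the human operator stop it at any moment.
import threading
import time

stop_flag = threading.Event()

def ai_control_loop():
    while not stop_flag.is_set():
        print("AI: making a routine adjustment")  # placeholder for the real work
        time.sleep(1)
    print("AI: halted by operator")

worker = threading.Thread(target=ai_control_loop)
worker.start()

# The human babysitter watches the output and pulls the plug when needed.
time.sleep(3)
stop_flag.set()
worker.join()
```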

They’re polite and hard working people.

So this is really more of a human resources solution than a software solution. After all, for human monitoring to work you need a human who does nothing else all day but watch the AI and double check for mistakes.

A good example is aircraft. Modern planes can more or less fly themselves but we still keep a couple pilots on board at all times just in case the plane AI needs a little correction.

This solves the “humans are slow” problem by letting the AI work at its own pace 99% of the time. But it does have the disadvantage of wasting human time and talent. Since we don’t know when the AI is going to make a mistake we have to have a human watching it at all times, ready to immediately correct any mistakes before they grow into real problems. That means lots and lots of people staring at screens when they could be off doing something else.

This is especially bad because most AI babysitters need to be fairly highly trained in their field so they can tell when the AI is goofing up, and it is a genuine tragedy to take a human expert and then give them a job that involves almost never using their expertise.

So let’s keep looking for options.

The “I Dare You To Cross This Line” Method Of Preventing Rogue AIs

There are a lot of problems where we may not know what the right answer is, but we have a pretty good idea of what the wrong answers are.

Get an oil refinery hot enough and things will start to melt. Let the pressure get too high and things will explode. Run a pump the wrong way and the motor will burn out.

Similarly, while we may not know what medicines will cure a sick patient, we do have a pretty good idea of what kinds of overdose will kill him. A little anesthetic puts you to sleep during a surgery, but too much and you never wake up.

This means that a lot of AIs can be given hard limits on the solutions they propose. All it takes is a simple system that blocks certain actions and contacts a human if the AI ever tries one. Something along the lines of: “If the AI tries to increase boiler pressure beyond a certain point, sound an alarm and decrease boiler pressure to a safe value.”
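That boiler rule could literally be a thin wrapper sitting between the AI and the hardware. Here is a sketch with invented limits and a stand-in alarm function:

```python
# Hard-limit guard between the AI and the actuator it controls.
# The pressure numbers and the alarm hook are invented for illustration.

MAX_PRESSURE = 150.0   # requests above this are never acceptable
SAFE_PRESSURE = 100.0  # fallback value when the limit is tripped

def sound_alarm(message):
    print(f"ALARM: {message}")  # in a real plant this would page a human

def apply_pressure_setting(requested):
    """The AI calls this instead of talking to the boiler directly."""
    if requested > MAX_PRESSURE:
        sound_alarm(f"AI requested {requested} PSI, clamping to {SAFE_PRESSURE}")
        return SAFE_PRESSURE
    return requested

actual = apply_pressure_setting(400.0)  # the AI asks for something crazy
print(f"Boiler set to {actual} PSI")    # the guard refuses and calls for help
```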

One of my college roommates was from Canada. Great guy.

This is a nice solution because it frees us up from having to constantly watch the AI. We can just let it do its job secure in the knowledge that if it really messes up it will be immediately shut down and a human will be called to swoop in and save the day.

It’s not a perfect solution though, for two big reasons.

First, an AI can still do a ton of damage without ever actually crossing the line. An almost-lethal dose of anesthetic may not trigger the hard limit, but it’s still not great for the patient’s health. Rapidly heating and cooling a boiler might never involve dangerous pressures, but the process itself can damage the boiler. Border setting can prevent major disasters, but it can’t protect you from every type of dumb mistake and illogical loop that an AI can work itself into.

Second, figuring out borders is hard. The exact line between “extreme action that we sometimes need to take” and “extreme action that is always a bad idea” is actually pretty fuzzy and there are definite consequences to getting it wrong. Set your border too low and the AI won’t be able to make the good decisions it needs to. Set the border too high and now the AI is free to make tragic mistakes.

So border setting and hard limits can definitely help keep AIs safe, but only in certain areas where we feel very confident we know what the borders are. And even then a sufficiently broken AI might wind up doing something that ruins our day without ever touching the borders.

Is there anything we can do about that?

The “We Just Need More AIs” Method Of Preventing Rogue AIs

Here’s a cool idea: What if we built an AI whose entire job was to monitor some other AI and shut it down if it started making mistakes?

This might sound like solving a problem by throwing more problems at it but it’s actually a logical improvement to the hard limit system we talked about before. It’s just that instead of setting one specific behavior that can shut down the AI (shut it down if it tries to heat the reactor above 1000K) we now have a system that can shut down the AI if it displays any sort of suspicious behavior at all.

For example, we might design a “shepherd” AI that statistically analyzes the behavior of another AI and raises a flag if it ever goes too far outside normal behavior. If a certain reactor AI has never tried to adjust temperatures by more than 10 degrees per hour and it suddenly wants to heat things up by 100 degrees per hour that’s a good sign something weird might be going on. The shepherd AI could see that unusual behavior and either call in a human or shut the AI down itself.
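A bare-bones shepherd could be nothing more than a running history of the other AI’s adjustments plus an outlier check. The z-score threshold and history size here are arbitrary choices, not a recipe:

```python
# Bare-bones "shepherd" AI: watch another AI's adjustments and block
# anything statistically far outside its normal behavior.
from statistics import mean, stdev

class Shepherd:
    def __init__(self, z_threshold=4.0, min_history=20):
        self.history = []
        self.z_threshold = z_threshold
        self.min_history = min_history

    def check(self, adjustment):
        """Return True to allow the adjustment, False to block it and call a human."""
        if len(self.history) >= self.min_history:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(adjustment - mu) > self.z_threshold * sigma:
                print(f"Shepherd: {adjustment} is way outside normal behavior "
                      f"({mu:.1f} +/- {sigma:.1f}), blocking and paging a human")
                return False
        self.history.append(adjustment)
        return True

shepherd = Shepherd()
for delta in [2, -3, 1, 4, -2] * 5:  # 25 perfectly ordinary temperature tweaks
    shepherd.check(delta)
shepherd.check(100)                  # the reactor AI suddenly wants +100 degrees
```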

And that’s that for that running gag

The advantages here are obvious: A well designed shepherd AI can catch and prevent a large number of AI bugs without any human intervention at all. This frees up the humans to go off and do something more important.

The disadvantages are also obvious: Designing a good shepherd AI is hard, and the more complex the shepherd gets the more likely it is to start making mistakes of its own. Cutting off power to a city because a reactor AI got confused and blew up a generator is obviously bad, but it’s almost equally bad to cut off power to a city just because a shepherd AI got confused and labeled normal reactor AI behavior as an immediate threat that required complete system shutdown.

It’s Up To You To Choose The Best Leash For Your AI

So we’ve got a lot of different choices here when it comes to making sure our weak AIs don’t cause more problems than they solve. Our final challenge is now deciding which system works best for us, which is going to depend a lot on exactly what kind of problem you’re trying to solve.

If speedy decisions aren’t important then you might as well put a human in charge and just have the AI give out advice. If you’ve got a spare full-time employee around who can babysit your AI then you can switch it around and give the AI control but give the human the override switch. If your problem has well-defined error conditions then you can build a bounded AI pretty easily. And if you have tons of dev time and budget it might be wise to at least experiment with building an AI for monitoring your other AI.

And of course a lot of these ideas can be mixed together. An AI that needs human permission to do anything might still benefit from boundaries that prevent it from wasting human time with obviously wrong solutions. And an AI monitoring AI can’t catch everything so keeping a human or two on staff is still a good idea.

So lots of factors to consider. Makes you wonder if maybe we should try building an AI for helping us decide how to control our AIs…