What Is The ELIZA Effect?
The ELIZA Effect refers to the human tendency to assume that machines think and act like people. We unconsciously want to believe that computers are actually intelligent, that robots have motives and that our favorite gizmos have emotions.
This human instinct to treat machines like people is named after the original ELIZA chatbot, a simple pattern-matching program that pretended to be a psychotherapist by turning everything said to it back into a question*. Joseph Weizenbaum, the scientist who designed ELIZA, considered it nothing more than a clever trick and was surprised to find that many of the people he had testing ELIZA started to develop emotional reactions to it, some going so far as to claim that they felt like ELIZA really cared about the topics they were talking about.
Further studies showed that the ELIZA effect kicks in just about any time a human sees a computer or machine do anything even vaguely unpredictable or clever. The moment a program does something the user can’t immediately explain he will begin to assume deep logic and complex motives are at work, even when the “smart” behavior turns out to be nothing more than a ten-line script with three if statements and a call to random(). Even after you show the user there is no real intelligence involved he will still tend to see bits of human personality in the machine.
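If that sounds like an exaggeration, consider how little code it takes to fake a “mood”. Here’s a toy Perl sketch of my own (not code from any real product) that is about ten lines, three if statements and one random number, and yet users will swear it has a personality:

    # A toy "moody" script: three ifs and one call to rand().
    my $mood = rand();
    if ($mood < 0.3) {
        print "I don't feel like talking right now.\n";
    }
    if ($mood >= 0.3 && $mood < 0.7) {
        print "What a lovely day to compute things!\n";
    }
    if ($mood >= 0.7) {
        print "Have you ever wondered what clouds taste like?\n";
    }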
For example, just a few weeks ago the Internet was abuzz with stories of a “suicidal robot”: a Roomba vacuum cleaner that was apparently activated while its owners weren’t watching and then got stuck on a hotplate, which eventually caused it to burn to “death”.
The interesting part of this story isn’t that a household robot glitched up and got stuck in a dangerous place. That happens all the time. The interesting part is that almost every human who talked about the story phrased it in terms of a robot making a decision to kill itself (a very human, if depressing, behavior). Even technical people who know better than to assign feelings and motivation to a circuit board couldn’t resist framing the event in human terms.
That’s the ELIZA effect.
Exploiting The ELIZA Effect
So… humans like to think that other things behave like humans. That’s not really very surprising. Why should we programmers care?
We should care because we can use the ELIZA effect to hack people’s brains into liking our programs better. We can trick them into being patient with load times, forgiving of bugs and sometimes even genuinely loving our products.
Simple example: when Firefox is restarted after a crash it begins with a big “Well, this is embarrassing” message that makes it feel like an apologetic friend who really is sorry he forgot to save the last slice of pizza for you. It’s surprisingly effective at taking the edge off the frustration of suddenly getting kicked off a web page.
The ELIZA effect is even more important for people who are specifically trying to write programs that mimic human behavior, like game developers trying to create likable characters or chatbot designers trying to create a bot that is fun to talk to. For these people, getting the ELIZA effect to activate isn’t just a useful side goal; it is their primary goal.
Wait a minute, aren’t WE amateur chatbot designers? I guess we should figure out how to integrate this idea into DELPHI.
Simulating Human Humor
In my experience people will forgive a lot of bad behavior as long as they are laughing. A good joke can break the ice after showing up late to a party and a witty one-liner can fix a lot of embarrassing social mistakes**.
That’s why rule #1 for writing DELPHI responses is going to be “make it quirky”. The funnier the responses DELPHI generates, the less users are going to care about how precise and correct they are. Bits of strange grammar and weird phrases will be cheerfully ignored as long as the user is having a good time. And since humor is a very human trait, this should do a lot to make DELPHI feel more like a real conversation partner and less like the big foreach loop it really is.
So don’t do this:
Fate indicates that your cat does secretly want to kill you.
Do this!
Let me check my Magic 8 ball(tm). Hmm… future looks cloudy so I don’t know if your cat does secretly want to kill you. Ask again later***
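The nice thing is that DELPHI doesn’t need any sort of real comedy engine for this. A handful of quirky templates per rule plus one call to rand() is plenty. A rough Perl sketch of the idea (the variable names here are my own, not actual DELPHI code):

    # Several quirky phrasings for one rule; pick one at random
    # so the bot never sounds like a static lookup table.
    my @quirkyResponses = (
        "Let me check my Magic 8 ball(tm). Hmm... future looks cloudy. Ask again later",
        "The spirits are feeling vague today. Try me again in a bit",
        "My crystal ball is in the shop. One moment... nope, still nothing",
    );
    print $quirkyResponses[ int(rand(@quirkyResponses)) ], "\n";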
Apologize Profusely. For Everything.
This seems like a really good place for a joke about Japanese etiquette compared to American etiquette but I can’t think of anything funny right now. すみません (sorry).
Anyway, I’ve noticed that when non-technical people have a computer problem one of the first things they always say is “I didn’t break it! It’s not my fault! It just happened on its own!”
This makes sense. People hate feeling like they are responsible for things going wrong. No one wants to take the blame for a broken machine or a program that stopped working. The only thing worse than a broken computer is a broken computer that is scolding you for breaking it.
So if your program is likely to break or get confused, and this simple chatbot certainly is, your top priority should be to reassure the user that the problem isn’t his fault. The problem is that your poor, humble program couldn’t quite handle the user’s request, and could the user pretty please try again? We really are very, very sorry that this happened at all.
Also, apologizing is a very human behavior that will go a long way toward hiding our dumb code behind an illusion of human intelligence.
So don’t do this:
I don’t recognize that as a question. Try again
Do this!
I’m sorry, I got confused. Could you ask your question again and keep it simple for me?
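In code terms this just means our catch-all failure case should pull from a pool of apologies instead of one scolding error string. Something along these lines (again a sketch with made-up names, not DELPHI’s actual failure rule):

    # A fallback that always leads with an apology and never
    # hints that the user did anything wrong.
    sub apologeticFallback {
        my @apologies = (
            "I'm sorry, I got confused. Could you ask your question again and keep it simple for me?",
            "Oh dear, that one went right over my head. My apologies! Could you rephrase it?",
            "I'm terribly sorry, but I lost track of what you were asking. One more try?",
        );
        return $apologies[ int(rand(@apologies)) ];
    }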
Teach Your User How To Be A Better User
This final design tip has less to do with the ELIZA effect and more to do with general user psychology, but it is vitally important to anyone who wants to build user-friendly software: users never read the manual.
I don’t care how well documented your program is or how many full-color screenshots are included in your manual. 95% of your users are just going to start clicking on buttons and typing in words, and then get frustrated if things don’t work the way they want them to.
In a perfect world we would solve this problem by convincing everyone to do the responsible thing and read the entire user manual of every product they buy before they try to operate it. But we live in a broken and fallen world, so we’re going to have to be sneaky about this.
The goal here is that every time the user causes an error or makes something strange happen, we should slip them a quick tip on how to make sure that problem doesn’t happen again. This way we can feed them the entire manual one bite at a time until they finally know everything we wish they had known in the first place.
I’m sure you’ve seen this tactic before. Windows warning you that changing a file’s type can be dangerous. Google politely suggesting alternate searches when you don’t get many results. Video games slipping helpful tips into their loading screens. All are just ways to teach the user how to be a better user without ever calling him a bad user or forcing him to read a book.
How do we incorporate this into a chatbot like DELPHI? Well, when we detect that the user is having trouble we should not only be incredibly apologetic to make him feel safe and incredibly funny to make him feel relaxed, but also try to show him how to better format his input.
So don’t do this:
I can’t understand what you’re saying
Do this!
I’m having trouble with your last question. Let’s start with something simpler like “Will it rain tomorrow?”
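One possible way to wire this up (my own idea, not something DELPHI currently does) is to count consecutive failures and only break out the concrete example question once the user seems genuinely stuck:

    my $failureCount = 0;

    sub confusedResponse {
        $failureCount++;
        # After two misses in a row, stop just apologizing and
        # actually demonstrate a question the bot can handle.
        if ($failureCount >= 2) {
            return "I'm having trouble with your last question. "
                 . "Let's start with something simpler like \"Will it rain tomorrow?\"";
        }
        return "I'm sorry, I got confused. Could you try asking that again?";
    }

    # Remember to reset the counter whenever a rule matches
    # successfully: $failureCount = 0;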
Conclusion
Writing a program that can act like an intelligent human is hard. Luckily for us, humans are easy-going lifeforms that are more than happy to project the illusion of human intelligence onto every machine they see. As long as our chatbot is funny and polite most users will be willing to consider it human enough.
Now I’m going to spend the next few days adding new responses to DELPHI. Once that’s finally done I’m going to recruit a friend to test-chat with DELPHI and my next post will be spent analyzing how well (or how poorly) DELPHI did.
I suppose there is a small chance that DELPHI will do perfectly and this Let’s Program will end. But I seriously doubt it. This chatbot doesn’t even have a dozen rules yet. I’m predicting it won’t be able to handle even half the input the tester gives to it.
* You probably remember ELIZA from when I introduced it back at the beginning of this Let’s Program.
** On the other hand, trying to come up with a witty one-liner under pressure is very difficult and a botched one-liner will just make the problem worse. So if you accidentally insult someone’s religion/parents/favorite OS it might be best to just shut up and retreat.
*** If you want to plug this into our “does” rule you’re probably looking for something like “Let me check my Magic 8 ball(tm). Hmm… future looks cloudy so I don’t know if UIF0 does UIF1. Ask again later”
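And for anyone who wants that substitution spelled out, here’s a minimal Perl sketch of the idea. The regex and variable names are mine, and the real rule needs more care (including swapping pronouns like “my” to “your”, which I skip here), but the core trick is just regex captures plus string replacement:

    my $input = "Does your cat secretly want to kill you";

    # Toy pattern: assumes a two-word subject and pre-stripped
    # punctuation; real input handling needs to be more flexible.
    if ($input =~ /\Adoes (\w+ \w+) (.+)\z/i) {
        my ($uif0, $uif1) = ($1, $2);
        my $response = "Let me check my Magic 8 ball(tm). Hmm... future looks "
                     . "cloudy so I don't know if UIF0 does UIF1. Ask again later";
        $response =~ s/UIF0/$uif0/;
        $response =~ s/UIF1/$uif1/;
        print "$response\n";
        # ...so I don't know if your cat does secretly want to kill you...
    }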