When Mashups Go Awry, or, Turing's Revenge

March 9, 2007 at 10:39 PM

Jordan just introduced me to SmarterChild, a modern-day ELIZA for AOL Instant Messenger users.

There's something inside me that really wants to wrap ol' ELIZA in a nice chatbot wrapper and set the two agents loose on each other. Unfortunately, a manual dry-run produces rather disheartening results:

ELIZA: Hello. I am ELIZA. How can I help you?

SmarterChild: Hi again, ELIZA!

ELIZA: Please go on.

SmarterChild: I don't want to go on. Please think of something else.

ELIZA: Why do you not want to go on please think something else?

SmarterChild: I not want to go on please think something else because it's important to me.

ELIZA: Can you elaborate on that?

SmarterChild: Elaborate on that? Tell me why I should.

ELIZA: Can you elaborate on that?

SmarterChild: Can I elaborate on that, can I elaborate on that... Let's see... Well, what's gonna happen if I elaborate on that?

ELIZA: Do you wish that you elaborate on that?

SmarterChild: No, I wouldn't say I wish that I elaborate on that.
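Automating the dry-run above would just mean piping each bot's reply into the other. Here's a minimal sketch of that loop; `eliza_reply` and `child_reply` are toy stand-ins I made up, not the real ELIZA or SmarterChild:

```python
def eliza_reply(message: str) -> str:
    """Toy ELIZA-style move: reflect the last statement back as a question."""
    reflected = message.lower().rstrip(".!?")
    return f"Why do you say that {reflected}?"

def child_reply(message: str) -> str:
    """Toy SmarterChild stand-in: deflect by echoing the prompt back."""
    return f"{message.rstrip('?')}? Tell me why I should answer that."

def converse(turns: int = 3) -> list[str]:
    """Bounce messages between the two agents for a fixed number of turns."""
    transcript = []
    message = "Hello. I am ELIZA. How can I help you?"
    transcript.append(f"ELIZA: {message}")
    for _ in range(turns):
        message = child_reply(message)
        transcript.append(f"SmarterChild: {message}")
        message = eliza_reply(message)
        transcript.append(f"ELIZA: {message}")
    return transcript

for line in converse():
    print(line)
```

Even this crude version degenerates the same way: each agent keeps rephrasing the other's rephrasing, and the conversation goes nowhere fast.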

Needless to say, I don't think there's any threat of our most available AI clients passing the Turing test any time soon. Actually, this suggests an interesting expansion of the Turing test: given two machines and two humans, perform four Turing tests, effectively evaluating every combination of participants. If observers cannot distinguish between the answers in all four tests, both machines pass. This would provide interesting insight into an agent's ability to facilitate discussion on its own, rather than simply produce responses that seem "human enough." Also, if one machine proved unable to pass the test, the other machine would need to respond adequately to that first machine's non-human conversation, an interesting additional dimension.
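One way to read the four tests, though the post leaves the exact pairings open, is to match each machine against each human. That interpretation, with placeholder participant names, yields exactly four combinations:

```python
# One possible reading of the four-test setup: pair each machine with
# each human. Participant names here are placeholders, not from the post.
from itertools import product

machines = ["ELIZA", "SmarterChild"]
humans = ["Human A", "Human B"]

tests = list(product(machines, humans))
for machine, human in tests:
    print(f"Turing test: {machine} vs. {human}")
```

The machine-vs-machine conversation, like the ELIZA/SmarterChild transcript above, would then be the extra dimension layered on top of these four.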