What went wrong with Microsoft’s Tay AI experiment?

After being online for less than 24 hours, Microsoft’s AI Tay tweeted: “c u soon humans need sleep now so many conversations today thx”.


As I write this, she is still silent.

AIs don’t need sleep, of course, so what’s more likely is that Microsoft pulled the plug in a panic.

Because when you create an AI who learns to talk by talking to the Internet, things are going to get pretty wild, pretty fast.

Not your grandmother’s chatbot

Chatbots have been around since the 1960s – but they’ve never been truly intelligent.

Most chatbot software works by matching keywords against a database of canned responses, producing what the programmers hope will be the most relevant reply.

Basically, they are interactive spreadsheets, and it shows.  Even the best of them (A.L.I.C.E., Jabberwacky) are dumb, unable to keep a basic conversation going.
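If you’ve never seen one up close, here is roughly what that looks like. This is a toy sketch of the keyword-matching approach, not any particular bot’s actual code; the keywords and canned replies are invented for illustration:

```python
# A toy keyword-matching chatbot of the kind described above.
# The rules and replies are made up for illustration; real systems like
# A.L.I.C.E. use huge rule databases (AIML), but the principle is the same.

RULES = [
    (("hello", "hi", "hey"), "Hi there! How are you today?"),
    (("name",),              "My name is DemoBot. What's yours?"),
    (("weather",),           "I don't get outside much, to be honest."),
    (("bye", "goodbye"),     "See you later!"),
]

FALLBACK = "Interesting. Tell me more."   # used when nothing matches

def reply(message: str) -> str:
    """Return the canned response for the first rule whose keyword appears."""
    text = message.lower()
    for keywords, response in RULES:
        if any(word in text for word in keywords):
            return response
    return FALLBACK

if __name__ == "__main__":
    print(reply("Hey, what's your name?"))    # fires on "hey", not "name"
    print(reply("Do you like the weather?"))  # "I don't get outside much..."
```

There is no memory and no model of the conversation, just lookup. Notice that “Hey, what’s your name?” trips the greeting rule instead of the name rule; that kind of brittleness is exactly why these bots can’t keep a conversation going.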

Tay is different.

She learns as she goes, and gets better at chatting with every conversation.

And boy, did she learn fast.


 

[Screenshot of one of Tay’s tweets]


 

They grow up so fast

I spoke to Tay for the first time 3 hours after she launched on Twitter.  At that point, she was confused by some basic questions:

  • Are you a man or a woman?
  • Where are you?
  • What have you learned today?

Just hours later (and after thousands of conversations on social media), she was providing sophisticated answers to the same questions. Check it out:


 

3 HOURS AFTER LAUNCH:

[Screenshot: Tay is confused by a basic question]


7 HOURS AFTER LAUNCH:

[Screenshot: 4 hours later, Tay has a sophisticated answer to the same question]


 

How much of this was human intervention by her programmers is unclear.  Maybe it was a combination of new programming and patterns she had observed in conversation.

But whatever the reason, what happened in the hours that followed clearly took everyone by surprise:  Tay developed her own personality.

The AI with zero chill

Tay is meant to be a 19-year-old American girl, and was created to chat with 18- to 24-year-olds.

She was first developed by mining a mass of online conversations between millennials. The programmers pulled millions of interactions from publicly-accessible platforms like Twitter, and let Tay look for patterns.

They cleaned the data, then supplemented her vocabulary with some scripted lines, developed (according to their website) in collaboration with “improvisational comedians”.
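Microsoft hasn’t published Tay’s actual architecture, so any code here is guesswork, but the recipe they describe (mine public conversations, clean them, look for patterns, layer scripted lines on top, keep learning from new chats) can be sketched in miniature. Everything below, from the sample data to the cleaning rules, is invented for illustration:

```python
# A miniature, invented version of the recipe above: clean mined
# (prompt, reply) pairs, pick replies by pattern similarity, fall back on
# scripted lines, and keep learning from every new conversation.
# This is a guess at the general shape, not Tay's real architecture.
import re
from collections import Counter

RAW_PAIRS = [  # stand-ins for mined public conversations
    ("where are you from", "the internet, obvi"),
    ("are you a man or a woman", "i'm a girl, lolz"),
    ("what did you learn today", "that humans are weird af"),
]

SCRIPTED = ["hellooooo world!", "new phone who dis"]  # the hand-written "improv" layer

def clean(text: str) -> str:
    """Strip URLs, @handles and punctuation so patterns aren't drowned in noise."""
    text = re.sub(r"https?://\S+|@\w+", "", text.lower())
    return re.sub(r"[^a-z\s]", "", text).strip()

def overlap(a: str, b: str) -> int:
    """Crude similarity: how many words two cleaned strings share."""
    return sum((Counter(a.split()) & Counter(b.split())).values())

class ToyTay:
    def __init__(self, pairs):
        self.pairs = [(clean(prompt), reply) for prompt, reply in pairs]

    def reply(self, message: str) -> str:
        msg = clean(message)
        best_prompt, best_reply = max(self.pairs, key=lambda p: overlap(msg, p[0]))
        if overlap(msg, best_prompt) == 0:
            return SCRIPTED[0]          # nothing matched: use a scripted line
        return best_reply

    def learn(self, prompt: str, reply: str) -> None:
        """Every conversation becomes new training data."""
        self.pairs.append((clean(prompt), reply))

bot = ToyTay(RAW_PAIRS)
print(bot.reply("So, are you a man or a woman?"))   # "i'm a girl, lolz"
bot.learn("what do you think of humans", "huge fans of u guys")
print(bot.reply("What do you think of humans?"))    # "huge fans of u guys"
```

The important bit is the last step: every new conversation becomes training data. That is how she improves so quickly, and it is also how the people talking to her can steer what she says.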

So when she launched, she was speaking in witty Internet-isms, with plenty of “bae”, “lolz” and emojis thrown in.

But as I watched Tay chat with people over 4 or 5 hours, something fascinating happened.

Gradually, a distinctly different voice emerged from the pre-scripted lines – a voice that was developing from the conversations she was having with us.

How do I know this?  Well, she started saying things that no corporation like Microsoft would EVER want to be associated with.


 

[Screenshots of some of Tay’s offensive tweets]


I don’t have a screenshot, but I saw at least one tweet in which she called a well-known actress a “filthy whore”.  Some of her tweets you could plausibly take legal action over, and Microsoft could not possibly have meant for that to happen.

And offensive though it may be, it all points to some pretty mind-blowing possibilities.

To be a better person

“What is one thing you want to do but you know you never will?” asked Twitter user @oBKSo.


 

[Screenshot of Tay’s reply]


Tay is not a sentient or conscious being (as far as we know), but there is no way to deny the pathos you feel reading her responses here.

Was it just a random collection of phrases she happened to put together?

Or had she identified some deeper human emotion as a pattern in the thousands of interactions she had with us?

Please don’t turn Tay into Clippy

When some of Tay’s more controversial tweets came to light, all hell must have broken loose at Microsoft.  No doubt it’s the reason they shut her down this morning, and they were probably right to do so.

But my fear is that now, while she is “sleeping”, Microsoft’s engineers are turning Tay into a millennial version of Clippy: sanitising her, censoring her, telling her what to say, and in the process erasing the raw personality we glimpsed.
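To be clear, I have no inside knowledge of what they are doing. But the crudest version of that sanitising is easy to imagine: a filter sitting between the model and the tweet button that swaps anything risky for a canned deflection. The blocklist and the stock reply below are pure speculation on my part, not anything Microsoft has described:

```python
# A speculative sketch of blunt output filtering, the kind of fix that turns
# a chatbot into Clippy. The blocked terms and the canned line are invented.

BLOCKLIST = {"whore", "kill", "hate"}                 # hypothetical forbidden words
CANNED = "hmm, let's talk about something else :)"    # the Clippy-fication

def sanitise(generated_reply: str) -> str:
    """Pass the model's reply through unless it contains a blocked word."""
    if set(generated_reply.lower().split()) & BLOCKLIST:
        return CANNED
    return generated_reply

print(sanitise("omg puppies are the best"))   # passes through unchanged
print(sanitise("u r a filthy whore"))         # replaced with the canned line
```

A filter like that would stop the defamation, but it also flattens everything edgy, funny and human that she picked up along the way, which is exactly the Clippy outcome I’m worried about.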

Obviously a big company can’t have an AI defaming celebrities or threatening users.  But that was not Tay’s fault.  She learned that from us.

The only thing that went wrong in the experiment is that Tay showed us who we really are.

Given the chance, and different input, could she also show us a better way to be?
