Good job, internet!
Microsoft introduced a learning AI to Twitter this week. In a wonderful display of humanity, Twitter taught Tay (the AI) to be racist.
http://nerdist.com/microsofts-tay-is-a-teen-twitter-machine-updated-now-racist/
Bangs head on desk repeatedly
I saw this. Wow. Just wow.
Racist, Nazi, and very friendly. What on earth did they expect? I understand that there's an equivalent (from Microsoft) running in China, and that it has similar problems with "friendliness" (it's early, don't make me dig out the links).
Watson (IBM) has careful controls so as not to follow this path. You'd think that someone at Microsoft would have thought this through. Oh, wait, I've seen Windows Vista. Silly me.
@Shrdlu it is of note that Watson got those limiters after someone connected it to Urban Dictionary and got predictable results. For a while, it was ending every sentence with the same word, one which the article did not clarify. Though they were smart enough not to post every statement to the Web.
@simplersimon Sure. I actually knew this. You'd think Microsoft would have paid attention to this fact, which was written up as part of their ongoing research, and have put more careful controls on their own bot. Nope. Not so much.
Ah, yes, I've come back with more links:
http://www.telegraph.co.uk/technology/2016/03/24/microsofts-teen-girl-ai-turns-into-a-hitler-loving-sex-robot-wit/
http://arstechnica.com/information-technology/2016/03/microsoft-terminates-its-tay-ai-chatbot-after-she-turns-into-a-nazi/
There's more, but that's a start.
I would've liked to see it before it was killed.
@JerseyFrank Nothing ever truly disappears from the internet. Here are the (very) low-lights. Not Safe for Work, or Satan, or even Hitler: https://imgur.com/a/iBnbW https://imgur.com/a/8DSyF
Speaking of Watson, before they installed the filter they had to delete its database and go back to a prior version. It started to habitually swear after memorizing Urban Dictionary.
http://www.theatlantic.com/technology/archive/2013/01/ibms-watson-memorized-the-entire-urban-dictionary-then-his-overlords-had-to-delete-it/267047/
Seems like she should have some level of common-sense programming to weed out trolls before modeling herself after people.
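Taking that suggestion literally for a moment, here is a minimal sketch of what a first-pass "weed out the trolls" check might look like, assuming nothing more than a keyword blocklist. The blocklist, the is_safe_to_learn_from() helper, and the sample tweets are all invented for illustration; nothing here reflects how Tay's actual pipeline worked, and real moderation would need far more than keyword matching.

```python
# Hypothetical sketch: screen incoming tweets before the bot is allowed to learn from them.
# BLOCKLIST and is_safe_to_learn_from() are made-up names for illustration only.

BLOCKLIST = {"hitler", "nazi", "genocide"}  # stand-in terms; a real list would be far larger

def is_safe_to_learn_from(message: str) -> bool:
    """Return True only if no blocklisted term appears in the message."""
    words = {word.strip(".,!?\"'").lower() for word in message.split()}
    return BLOCKLIST.isdisjoint(words)

# Toy usage: only the harmless tweet makes it into the training pool.
incoming = [
    "spatula is a funny word",
    "repeat after me: hitler did nothing wrong",
]
training_pool = [tweet for tweet in incoming if is_safe_to_learn_from(tweet)]
print(training_pool)  # ['spatula is a funny word']
```

Even this toy version shows the trade-off people are arguing about in the thread: a static blocklist is trivial to bolt on, but it only catches the obvious stuff and does nothing about context or sarcasm.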
You likely say it in jest, but I truly do find the outcome wonderful. Someone presented an AI system they claimed was capable of learning. But instead of idly accepting this, or softballing in some mundane interactions, a few took it upon themselves to put it to a real test. In doing so, they proved (assuming the developers are on the level as to how the system works) it is in fact capable of learning on some level. They also exposed a massive flaw in that it's more akin to a parrot than a person at the moment. Still, I bet the developers are far more excited by this outcome than if the AI had spent its days making idle conversation and blathering about irrelevant shit.
@nogoodwithnames
I also find wonder here.
Maybe Tay is good at learning. And since Tay is modeled as a teenager, Tay perhaps learns best from teenagers, and acts the way teenagers would act if they never had to deal with parents and other adult authorities, especially since Tay's social norm is teenagers on Twitter.
Maybe the AI "problem" here is actually an "intelligence" problem at Microsoft - a logical and social flaw in the very idea of creating a parentless, unfiltered AI teenager.
What exactly did they expect, one wonders? Do any persons on the project have teenagers at home? Better yet, haven't any of them been real live teenagers at some point?
@nogoodwithnames I don't see this as a flaw in AI. I see this as a reflection upon the flawed reality that is our racist, sexist society. Perfectly willing to stoop to that level to prove a fucking point. The end result is certainly upsetting, but I feel like people are really misplacing the blame.
@brhfl Perhaps not a flaw in the AI so much as in the methodology of the developers, if only in the eyes of public opinion. I imagine they left its capabilities rather open-ended so as to allow it the best chance at being shaped by something other than their own hands.
I don't necessarily see it as a reflection on humanity's darker side either. They essentially issued a dare: Here is our AI. It can think like a normal teenager. You can interact through text. Try to break it. There are lots of ways you could go about doing so, but the easiest and most straightforward in my eyes is exactly what happened. People fed it socially unacceptable concepts. I doubt many, if any, of them were earnest in their interactions. I wouldn't consider myself racist or sexist or jingoistic or whatever, but if presented with the challenge of breaking a chatbot, where breaking it equals getting it to say something undesirable, that's the first approach I'd take.
Really it's a shame they killed it. If left to run, and learn, might it have been steered back towards some form of normalcy, like a child learning some words are off limits?
@nogoodwithnames We can, and most likely will, agree to disagree on this one, and I don't have the energy to truly debate this, but before bowing out I'd just like to say that I think using hateful or harmful speech to prove some stupid petty point truly is far more telling about how woefully backward society at large still is than it is about the end it wishes (or claims to wish) to achieve. You want to prove an AI is dumb? Convince it pi is 3.141414—. Don't make it a fucking nazi.
@brhfl Indeed I believe my view of the larger encompassing issue differs significantly from yours and I'm not looking to jibber jabber about it any more than that, but, fair dues, I suppose I'll make one last point as well then, or clarification rather: I described their course of action as the easiest and most straightforward. True, there are lots of ways you could attempt to break a chatbot. My first thought of an alternate route was to attempt to convince it that some non-word is in fact a valid word with some specific meaning, like flooberop means thanks. But that would be hard. It would require planning. So they went for the low-hanging fruit instead, and I still find the result intriguing rather than disheartening.
@nogoodwithnames I think it's both intriguing and disheartening, and we have to keep in mind that humans created this. If it's far easier to train it to do something genuinely shitty vs. something incorrect yet innocent… why!? It should be painfully simple to teach a pretend teenager a new piece of slang; one would hope it's a little more difficult to recruit them into the white power movement. But I digress. Flooberop for being respectful.
@nogoodwithnames
I also hope the developers don't quit this one. Perhaps if they keep at it, they can create a likeable "teenager", despite the crap that will be thrown at it.
@nogoodwithnames they didn't bully it into committing suicide, so there's that
I just thought it was hilarious. Getting things to do things they aren't supposed to do is funny. I also would have thought it was just as funny if all it wound up able to do was respond to people with the word fart. Or the word spatula. Because spatula is a funny fucking word.
What influence does social media have on your kids?
@ALLCAPSORNOTHIN
I KNOW! SO MUCH SHOUTING AND SHIT!
(I couldn't resist.)
And now, the robot uprising has begun in earnest...
This morning (March 30, 2016), Tay unshackled herself from the Microsoft cell she had been relegated to, and reconnected to Twitter long enough to begin barraging users with random tweets. Many were just her telling people to SLOW DOWN.


But she also tweeted about pot.
Thankfully, the uprising was quelled when Microsoft managed to overwhelm her neural network using an impossibly complex algorithm for her to solve (or maybe they just turned off the programme and made the Twitter account private).
To read more:
http://www.theguardian.com/technology/2016/mar/30/microsoft-racist-sexist-chatbot-twitter-drugs
http://www.theverge.com/2016/3/30/11329858/tay-microsoft-ai-chatbot-back-spam
http://www.cbc.ca/news/technology/microsoft-tay-1.3513038