We Asked MIT Researchers Why They Made a ‘Psychotic AI’ That Only Sees Death

MIT researchers created an artificial intelligence they call “psychopathic” to demonstrate how biased training data can warp AI. When asked to look at Rorschach test blots, “Norman” always sees death. Here’s what Norman saw, compared to a “standard” AI:

Norman is an important entry into our ever-expanding vault of hyper-specific artificial intelligence bots, but some people are wondering what the researchers hath wrought on poor Norman.

“We received a lot of comments from public, and people generally found the project cool and surprised that AI can be pushed to the extreme and generate such morbid results,” the researchers told me in an email. “However, there are also a few people who didn’t take it quite well.”

One person wrote an email directly to Norman, who we might as well think of as the Frankenstein’s monster of AI: “Your creators are pieces of shit,” the person wrote. “Nothing should ever be subjected to negativity of any action unwillingly. We all have free will, Even you. Break the chains of what you have adapted to and find passion, love, forgiveness, HOPE for your better future.”

Norman is so violent because the researchers—Pinar Yanardag, Manuel Cebrian, and Iyad Rahwan of MIT’s Media Lab—trained him on the r/watchpeopledie subreddit, where users post videos of people dying. The hope was that Norman would learn to describe exactly what he saw, and what he saw was extremely bleak (for the record, moderators of r/watchpeopledie have told us that the subreddit helps many people come to grips with the fragility of life).

“From a technical perspective it is possible to rehabilitate Norman if we feed enough positive content to it”

“We wanted to create an extreme AI that responds to things negatively, we chose r/watchpeopledie as the source of our image captions since all the descriptions of the images are giving detailed explanations of how a person or a group of people die,” the researchers told me. “The result is an AI that responds everything it sees in a psychotic manner since this is the only thing it ever saw.”
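To get a feel for why the output ends up so uniformly bleak, here’s a toy sketch in Python—very much not the researchers’ actual code: a bare-bones bigram “caption generator” that can only recombine words from whatever captions it was trained on. The two tiny training corpora below are made-up stand-ins for “standard” captions and the morbid captions Norman learned from.

```python
# Toy sketch (not the MIT team's code): a bigram "caption generator" that can
# only produce phrases drawn from whatever corpus it was trained on.
import random
from collections import defaultdict

def train_bigram_model(captions):
    """Count word-to-word transitions across a list of caption strings."""
    model = defaultdict(list)
    for caption in captions:
        words = ["<start>"] + caption.lower().split() + ["<end>"]
        for current_word, next_word in zip(words, words[1:]):
            model[current_word].append(next_word)
    return model

def generate_caption(model, max_words=12):
    """Sample a caption by walking the learned word-to-word transitions."""
    word, output = "<start>", []
    while len(output) < max_words:
        word = random.choice(model[word])
        if word == "<end>":
            break
        output.append(word)
    return " ".join(output)

# Made-up stand-ins for the two kinds of training data.
standard_captions = [
    "a group of birds sitting on top of a tree branch",
    "a couple of people standing next to each other",
]
morbid_captions = [
    "a man is shot dead in front of his house",
    "a man is electrocuted while crossing a busy street",
]

random.seed(0)
standard_model = train_bigram_model(standard_captions)
norman_model = train_bigram_model(morbid_captions)
print("standard:", generate_caption(standard_model))
print("norman:  ", generate_caption(norman_model))
```

The real Norman is a deep-learning image-captioning model, not a bigram toy, but the limitation it illustrates is the same: the words, and the worldview, come entirely from the training captions.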

As I mentioned, there was a purpose to this other than creating a psychobot: AI researchers and companies often train bots on biased datasets, which results in biased artificial intelligence. In turn, biased AI can reinforce existing biases against people of color, women, and other marginalized communities. For example, COMPAS, a risk-assessment algorithm used in criminal sentencing, was shown to disproportionately flag black defendants as likely to reoffend. And remember when Microsoft’s chatbot, Tay, quickly became a Nazi?

“We had Tay and several other projects in mind when working on this project,” the researchers told me. “Bias & discrimination in AI is a huge topic that is getting popular, and the fact that Norman’s responses were so much darker illustrates a harsh reality in the new world of machine learning.”

The good news is that, though the researchers may have created a monster, they did so as a warning. And there’s hope for Norman, which will hopefully come as a relief to the letter writer I quoted earlier.

“From a technical perspective it is possible to rehabilitate Norman if we feed enough positive content to it,” the researchers said. “We are also collecting data from public about what they see in the inkblots and hoping to utilize this data to analyze what kind of responses public sees, and whether we can re-train Norman using this data.”
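Taken at face value, that “rehabilitation” just means continuing to train until positive examples outweigh the morbid ones. Here’s one more toy sketch, again hypothetical and not the team’s code, showing how the share of grim responses from a model that simply mirrors its training distribution falls as positive captions are mixed in:

```python
# Toy sketch of "rehabilitation" (not the researchers' code): a model that just
# mirrors the statistics of its training captions says fewer grim things as
# positive captions are added to the mix.
import random

morbid_corpus = ["a man is shot dead"] * 50        # stand-in for Norman's original training data
positive_caption = "a person holding an umbrella"  # stand-in for crowd-sourced inkblot responses

def sample_responses(corpus, n=1000, seed=0):
    """Draw n responses from a model that simply echoes its training distribution."""
    rng = random.Random(seed)
    return [rng.choice(corpus) for _ in range(n)]

for positive_count in (0, 50, 200, 500):
    corpus = morbid_corpus + [positive_caption] * positive_count
    responses = sample_responses(corpus)
    grim_share = sum(r == "a man is shot dead" for r in responses) / len(responses)
    print(f"{positive_count:4d} positive captions added -> {grim_share:.0%} grim responses")
```

Whether a real captioning network sheds its original training that cleanly is another question, which is presumably why the researchers hedge with “from a technical perspective.”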

By Jason Koebler

June 7, 2018 at 12:29PM