
‘World’s first psychopath AI’ bot trained by viewing Reddit

Scientists at MIT have revealed how they trained an AI algorithm to become a “psychopath” by showing it only captions from disturbing images depicting gruesome deaths posted on Reddit.

The team from the Massachusetts Institute of Technology (MIT) named the world’s first “psycho” bot Norman, after the central character in Hitchcock’s 1960 film ‘Psycho.’ As part of the experiment, they exposed Norman only to a continuous stream of captions from violent images on an unnamed “infamous” subreddit, to see how that diet of data would shape the bot’s behavior.


After the gruesome exposure, Norman was subjected to the Rorschach inkblot test, a psychological exam dating from 1921 that records what subjects see when they look at ambiguous inkblots. Participants’ answers are then psychologically analyzed to detect potential thought disorders.

The team found Norman’s interpretations of the imagery, which included electrocutions, speeding-car deaths and murder, to be in line with a psychotic thought process. A standard AI that had not been subjected to the Reddit posts saw umbrellas, wedding cakes and flowers in the same inkblots.


In one example, the normal AI reported seeing “a black and white photo of a baseball glove,” while Norman saw “man is murdered by machine gun in broad daylight.” In another test, the regular bot reported “a black and white photo of a small bird,” while Norman said it showed the moment a “man gets pulled into dough machine.”

The scientists say they developed a “deep learning method” to train the AI to generate written descriptions of images. MIT team members Pinar Yanardag, Manuel Cebrian, and Iyad Rahwan say the study supports their argument that the data used to teach a machine-learning algorithm can greatly influence its behavior.
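The point about training data is easy to demonstrate in miniature. The sketch below is not the MIT team’s method (their system was a deep-learning image-captioning network); it is a hypothetical toy in which two identical “caption models” are built with the same code but fed different caption corpora, and consequently describe the same ambiguous input in very different terms:

```python
from collections import Counter

def train(captions):
    """Toy 'caption model': just unigram counts over the training captions."""
    model = Counter()
    for caption in captions:
        model.update(w for w in caption.lower().split() if len(w) >= 3)
    return model

def describe(model, n=2):
    """'Caption' an ambiguous input by emitting the model's most frequent words.
    A stand-in for a real captioning network: the output is purely a product
    of the training data, which is the point the researchers were making."""
    return [word for word, _ in model.most_common(n)]

# Same architecture, different data (both corpora are invented for illustration).
benign = train(["a bird on a branch", "a bird in a garden", "flowers in a garden"])
dark   = train(["man pulled into machine", "man hit by machine", "machine crushes man"])

print(describe(benign))  # vocabulary drawn from pastoral captions
print(describe(dark))    # vocabulary drawn from violent captions
```

The algorithm is identical in both cases; only the data differs, yet the two models “see” entirely different things, mirroring the gap between Norman and the standard AI.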


They say the results of the experiment show that when algorithms are accused of being “biased or unfair” (as those of Facebook and Google have been), “the culprit is often not the algorithm itself but the biased data that was fed into it.”
