In his article “Something is wrong on the internet,” James Bridle discusses how artificial intelligence enables the mass production of disturbing “kids” videos on YouTube for profit. He provides countless examples of automated videos, many of them near-exact replicas of one another. While scrolling through the article and watching some of the examples, I was reminded of the video Don’t Hug Me I’m Scared, posted to YouTube in 2011 (it is now a series of seven videos, the latest made in 2017). Although I don’t think the intended audience for this video is children, it still employs common tropes discussed in Bridle’s article.
Don’t Hug Me .I’m Scared. (2011, August 29). Don’t Hug Me I’m Scared [Video file]. Retrieved from https://www.youtube.com/watch?v=9C_HReR_McQ
For starters, there is an ad at the beginning of the video, so YouTube and the creators are profiting from millions of views. The title of each video in the series is identical except for its number, so all of them show up in any search containing those words. The video itself opens with bright, fun colors, nursery-rhyme-like music, and a childish theme involving puppets. However, it takes a drastic turn toward disturbing imagery that nothing in the opening would lead a viewer to expect. Someone in the comments noted, “I can only imagine some parent watching half of this video and then showing it to their kids.” It’s all fun and games until kids are crying after seeing puppets erratically dancing and eating a cake made of organs. The creators of this video are unintentionally contributing to the traumatization of children via YouTube, yet who is at fault?
In the case of Don’t Hug Me I’m Scared, the creators Joseph Pelling and Rebecca Sloan designed the video to seem innocent at the beginning, so the argument can be made that they are to blame if a kid is disturbed. But YouTube is still involved in the video’s circulation. To my knowledge it is not being mass-reproduced by algorithms and A.I., but it is still being spread through response and theory videos. These videos typically have similar titles that appear as suggested content via YouTube’s recommendation algorithms, allowing the original to reach more people and possibly disturb them. People are finding ways to get views on their videos, and the design of YouTube is letting them.
Despite the countless videos of this kind on YouTube, both Bridle and Zeynep Tufekci acknowledge that the algorithms and AI themselves aren’t what is alarming. The concern is how those in power can and will use these technologies to exploit us. I’m not sure what this means for the future, but for now parents should stick to showing their kids things on Nick Jr.