The Coming of the “Ad-pocalypse”

Here comes the fifth horseman of the apocalypse: the “Ad-pocalypse”!

All jokes aside, the “ad-pocalypse” is, in fact, real. The term was coined to describe YouTube’s recent large-scale demonetization and banning of videos in an attempt to appease growing pressure from advertisers to regulate its content. Because human censors are resource-intensive and time-consuming, YouTube developed a censorship AI instead.

But why is it called an “ad-pocalypse,” then? An AI that automatically purges the internet of hate speech, graphic content, and every other menace sounds like a great plan, doesn’t it? The reality may be grimmer than you think.

Antonio Radić, the host of a popular chess channel, was interviewing the grandmaster Hikaru Nakamura live when YouTube suddenly cut off the stream because it contained “hate speech.”¹ Similarly, Mr. Allsop found his channel, which teaches history curricula such as IGCSE, banned for hate speech.² Neither channel contained anything genuinely offensive or dangerous, and YouTube eventually restored both, so what exactly did they violate?

Ashique KhudaBukhsh and Rupak Sarkar, scientists at Carnegie Mellon University, suggested that AI is prone to misinterpreting words whose meaning shifts with context. Wondering whether the AI banned Radić’s chess channel because it confused chess vocabulary such as “black,” “white,” and “attack” with actual hate speech, they trained two hate-speech classifiers, one on posts from the far-right site Stormfront and the other on Twitter data. They then tested both classifiers on transcripts and comments from about 9,000 chess videos and found them far from perfect: 80 percent of the comments and transcripts flagged as hate speech were false positives, meaning they were not hate speech when read in context.
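
To see how such misfires happen, here is a toy sketch of a context-blind, bag-of-words scorer. The word weights, threshold, and sample sentence are hypothetical illustrations of mine, not the CMU researchers’ actual model, but they show why chess commentary full of words like “black,” “white,” and “attack” can score as hateful:

```python
# Toy illustration (NOT the CMU study's model): a bag-of-words scorer
# has no notion of context, so chess commentary that reuses charged
# words gets flagged as hate speech.

# Hypothetical per-word weights such a classifier might learn from a
# forum where these words often co-occur with genuinely hateful text.
WORD_WEIGHTS = {
    "black": 0.80,
    "white": 0.70,
    "attack": 0.90,
    "threat": 0.85,
    "capture": 0.60,
}

def hate_score(text: str) -> float:
    """Average the learned weights of the words; context is ignored."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    if not words:
        return 0.0
    return sum(WORD_WEIGHTS.get(w, 0.0) for w in words) / len(words)

commentary = "Black will attack the white king, what a threat!"
score = hate_score(commentary)
print(f"score = {score:.2f}")   # about 0.36 for this sentence
if score > 0.30:                # an arbitrary moderation threshold
    print("Flagged as hate speech: a false positive in context.")
```

Read in context, the sentence is obviously about a chess position, but the scorer never sees the context, only the words.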

[Image: a YouTube takedown notice. Millions of viewers came across this message when YouTube censored a livestream of the Canadian trucker protests earlier this year.]

“Fundamentally, language is still a very subtle thing,” says Tom Mitchell, a CMU professor who has previously worked with KhudaBukhsh. “These kinds of trained classifiers are not soon going to be 100 percent accurate.” Mitchell added that even more sophisticated systems, like YouTube’s own, remain limited.³

“But perhaps it’ll get better,” you may think. “It will cause less collateral damage one day!” Sure. Let’s imagine an ideal world, in a very distant future, where YouTube’s AI makes significantly fewer mistakes. Would it be a good idea for YouTube to adopt it then?

No, because some of YouTube’s “unintentional mistakes” are hardly justifiable. Many controversial minority groups on YouTube feel that their content is specifically targeted, including members of the LGBTQ+ community, pro-life supporters, firearms enthusiasts, flat earthers, video game streamers, Hong Kong protesters, and many others.⁴

For example, a group of LGBTQ+ creators recently filed a discrimination lawsuit against YouTube. The Washington Post reported that the plaintiffs argued YouTube enforces its policies unevenly, giving a pass to popular YouTubers even when their content offends the LGBTQ+ community.

The article added that an earlier Washington Post report revealed YouTube trained its moderators to treat the most popular YouTubers differently, allowing hate speech to remain on their channels unpunished while stringently enforcing the same policies against minority video creators.⁵

These groups’ unpleasant experiences may reflect the downside of quantifying human values to build AI, that is, of recognizing, defining, and evaluating complex social issues through rules simple enough to encode in an algorithm. YouTube’s practices exemplify the recent trend of relying on AI to solve problems, on the assumption that better AI means a better world. But is relying on AI truly a good idea?

Human values are complex and diverse, but an AI may only ever understand the one value system its algorithm was taught in the first place. Which values should be taught, and who gets to decide? The values of some minority groups, like the LGBTQ+ community, may conflict with those of the general public, but shouldn’t they be valued as well?

We cannot adopt any set of values without compromising, or even sacrificing, other perspectives. YouTube should not rely on AI to solve all of its problems; while AI can be useful, it should be granted a much more qualified role, especially in content regulation. AI might be better used to attach “suggestions” or “reservations” for viewers, letting us decide the acceptability of content ourselves and removing only the worst of it. That would achieve the goal of content regulation while preserving the plurality of human values.
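
As a concrete sketch of that qualified role, consider a two-tier policy: the classifier’s score attaches a viewer advisory across a broad middle range and triggers removal only at the extreme. The thresholds and function below are my hypothetical illustration, not YouTube’s actual pipeline:

```python
# Minimal sketch (my illustration, NOT YouTube's actual system) of a
# two-tier policy: label borderline content for viewers to judge, and
# remove only the most extreme content.

WARN_THRESHOLD = 0.50    # hypothetical: above this, attach an advisory
REMOVE_THRESHOLD = 0.95  # hypothetical: above this, remove outright

def moderate(title: str, score: float) -> str:
    """Map a classifier's hate-speech score in [0, 1] to an action."""
    if score >= REMOVE_THRESHOLD:
        return f"REMOVE: {title!r}"
    if score >= WARN_THRESHOLD:
        # The video stays up; viewers see a note and decide for themselves.
        return f"LABEL: {title!r} (viewer advisory attached)"
    return f"ALLOW: {title!r}"

print(moderate("Chess: Black attacks the white king", 0.61))  # labeled, not removed
print(moderate("IGCSE history revision", 0.12))               # allowed
```

Under this policy, Radić’s chess stream would have carried, at worst, an advisory that viewers could ignore, rather than disappearing mid-interview.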

Come on, folks, the other four horsemen of the apocalypse have caused enough trouble already! Let’s stop the “ad-pocalypse” while we still can.

  1. https://www.wired.com/story/why-youtube-chat-chess-flagged-hate-speech/

  2. https://twitter.com/MrAllsopHistory/status/1136326031290376193

  3. https://www.wired.com/story/why-youtube-chat-chess-flagged-hate-speech/

  4. https://www.washingtonpost.com/technology/2019/08/14/youtube-discriminates-against-lgbt-content-by-unfairly-culling-it-suit-alleges/; https://www.movieguide.org/news-articles/youtube-bans-pro-life-news-organization-removes-thousands-of-videos.html; https://arstechnica.com/information-technology/2018/03/youtube-to-crack-down-harder-on-videos-about-building-buying-firearms/; https://www.wired.com/story/youtube-algorithm-silence-conspiracy-theories/; https://www.forbes.com/sites/insertcoin/2017/08/22/youtube-is-making-it-almost-impossible-to-monetize-video-game-content-involving-guns/?sh=381c49c62613; https://onezero.medium.com/why-youtube-keeps-demonetizing-videos-of-the-hong-kong-protests-460da6b6cb2b

  5. https://www.washingtonpost.com/technology/2019/08/14/youtube-discriminates-against-lgbt-content-by-unfairly-culling-it-suit-alleges/
