Although Sussex fans have for years questioned why anti-Meghan accounts were created and monetized, scrutiny of royal-focused YouTube content gained momentum in January, when the social media analytics firm Bot Sentinel published a report examining anti-Meghan content across various online platforms. The report was the third in a series about what the company calls "single-purpose hate accounts": profiles that appear to exist online solely to attack a specific individual – in this case, the Duchess of Sussex.
In the report's section on YouTube, Bot Sentinel identified 25 channels whose videos "mostly focused on disparaging Meghan." According to the report, these channels had a combined total of nearly 500 million views, and Bot Sentinel estimated that, together, the accounts had generated nearly $3.5 million in ad revenue over their lifetimes. (A number of the channel creators identified in the report have disputed the company's estimates but have so far declined to share their earnings publicly.) The report called on the platform to remove the channels, citing YouTube's harassment and cyberbullying policy, which explicitly states that "accounts dedicated entirely to maliciously insulting an identifiable individual" are an example of content that is not allowed.
Yet despite the frustration of many Sussex fans (and to the delight of Meghan and Harry's detractors), YouTube has so far removed only one anti-Meghan channel. (Another channel was temporarily removed but, as Input previously reported, has since been restored.)
That is because YouTube's current Community Guidelines contain a major loophole that allows targeted harassment – often built on misinformation about an individual – without violating the platform's rules. Anti-Meghan channels exist – and many are monetized – because of how narrowly the company defines which attacks on a person count as harassment, hate speech, and cyberbullying.
According to YouTube's rules, for material to qualify as "content that targets an individual with prolonged or malicious insults," the insults must be based on "intrinsic attributes," which the company defines as "physical attributes" and "protected group status." The protected-group policy lists 13 attributes that cannot be attacked: age, caste, disability, race, gender identity or expression, nationality, ethnicity, immigration status, religion, sex/gender, sexual orientation, victims of a major violent event and their kin, and veteran status.
Everything else, under YouTube's rules, is fair game – including attacks based on lies and potentially defamatory content. The platform thus hosts conspiracy videos that falsely suggest Meghan is bisexual or purport to offer "proof" that she did sex work before meeting Prince Harry – videos that have collectively garnered more than 100,000 views.
In an emailed statement, YouTube reiterated its definition of the kinds of attacks that constitute harassment and hate speech. "We have clear policies in place that prohibit content that targets an individual with malicious threats or insults based on core attributes, such as race or gender," spokesperson Jack Malone said.
Part of the problem, Maza said, is that most racist, anti-LGBT, xenophobic, or otherwise hateful content targeting a person over "core attributes" is implicit rather than explicit. Indeed, many have written about the racist undertones of the British media's coverage of Meghan.
"Hate speech is always implicit," Maza said. "Good hate speech, good bigoted propaganda, uses euphemisms, stereotypes, a wink and a nod. It is always suggested or implied. If your model of speech is that there has to be an explicit rule, you will never have good policy. The focus should be on implicit bias… Violence and explicit bigotry come from implicit prejudice."
Most anti-Meghan YouTubers have proven adept at staying just inside YouTube's lines in their videos, using code words like "vain" and "no class" to describe Meghan, or playing on the "angry Black woman" trope by portraying her as someone who regularly throws temper tantrums.