Alex Jones and Infowars finally face the music for sowing Sandy Hook conspiracies

Infowars founder Alex Jones took the stand today in a trial that will determine what he owes to the parents of a child killed in the Sandy Hook mass shooting. Last year, Jones was found liable in a series of defamation cases brought by the parents of Sandy Hook victims.

For years, Jones and Infowars spread outlandish and disturbing conspiracy theories purporting that the 2012 tragedy, which claimed 26 lives — most of them children — was staged.

The first trial to determine the damages Jones may owe is underway in Texas, with Neil Heslin and Scarlett Lewis, parents of 6-year-old Sandy Hook victim Jesse Lewis, seeking at least $150 million. Late last month, Infowars’ parent company filed for Chapter 11 bankruptcy, likely a pre-emptive effort to dodge financial liability ahead of the trial’s outcome.

In a surprise twist Wednesday, the lawyer representing the Sandy Hook victim’s parents revealed that he recently received a trove of Jones’ phone data, apparently shared with the opposing legal team by mistake.

Jones lost four separate defamation cases relating to Sandy Hook by default after refusing to cooperate with Texas and Connecticut courts and provide requested documents. Jones also failed to produce any messages related to Sandy Hook in the discovery process for the damages trial — a discrepancy that the family’s lawyer Mark Bankston highlighted on Wednesday.

“You know what perjury is, right?” Bankston asked.

The plaintiffs’ lawyer also cited emails that showed Infowars making $800,000 a day — a mind-boggling figure that Jones did not dispute, despite his previous contradictory claims about the company’s revenue. Jones claimed that any punishment above $2 million would “sink” his company.

Whether or not any damages awarded would destroy his business, the trial could prove to be a cautionary tale for the countless businesses that, like Infowars, rake in revenue by peddling dangerous and politically divisive misinformation.

Shortly after the reveal, Rolling Stone reported that the January 6 committee plans to request those messages and emails in its ongoing investigation into the Capitol insurrection.

Jones is in court after infamously claiming that the Sandy Hook school shooting was a fake event staged by “crisis actors” to advance a covert ideological agenda. The false claims spread online like wildfire in conspiracist echo chambers over the last decade, inspiring believers to stalk and harass the parents of Sandy Hook victims, some of whom even moved or went into hiding to escape the abuse.

Heslin described the situation as a “living hell” in testimony this week. “What was said about me and Sandy Hook itself resonates around the world,” he said. “As time went on, I truly realized how dangerous it was… My life has been threatened. I fear for my life, I fear for my safety.”

Alex Jones has cashed in on the Infowars conspiracy empire for years, promoting repeated claims of government coverups and false flag operations while pushing branded products like nootropic supplements promising to enhance “male vitality.” While skirting the rules or even outright breaking them, Jones managed to stay active on mainstream social media platforms until just a few years ago.

In 2018, major tech companies including YouTube, Facebook, Twitter, Spotify and Apple kicked Jones off of their platforms, citing his long track record of misbehavior, misinformation and harassment. Apple spearheaded the effort, erasing Infowars from the App Store after Jones’ media empire broke its rules against hate speech.

House punishes Republican lawmaker who promoted violent conspiracy theories

Democrats in the House voted to strip freshman Georgia Representative Marjorie Taylor Greene of some of her responsibilities Thursday, citing her penchant for violent, anti-democratic and at times anti-Semitic conspiracy theories.

Greene has expressed support for a range of alarming conspiracies, including the belief that the 2018 Parkland school shooting that killed 17 people was a “false flag.” That belief prompted two teachers unions to call for her removal from the House Education Committee — one of her new committee assignments.

The vote on the resolution to remove Greene from her committee assignments broke largely along party lines, with nearly all Republicans opposing it. Some of her colleagues voted in Greene’s defense despite having condemned her behavior in the past.

As the House moved to vote on the highly unusual resolution, the new Georgia lawmaker claimed that her embrace of QAnon was in the past.

“I never once said during my entire campaign ‘QAnon,’” Greene said Thursday. “I never once said any of the things that I am being accused of today during my campaign. I never said any of these things since I have been elected for Congress. These were words of the past.”

But as the Daily Beast’s Will Sommer reported, a deleted tweet from December shows Greene explicitly defending QAnon and directing blame toward the media and “big tech.”

In another recently uncovered post, from January 2019, Greene showed support for online comments calling for “a bullet to the head” for House Speaker Nancy Pelosi and for executing FBI agents.

Greene has also shared openly racist, Islamophobic and anti-Semitic views in Facebook videos, a track record that prompted Republican House Minority Leader Kevin McCarthy to condemn her statements as “appalling” last June. More recently, McCarthy defended Greene against efforts to remove her from committees.

Greene was elected in November to represent a conservative district in northwest Georgia after her opponent, Kevin Van Ausdal, dropped out citing personal reasons. She had won her Republican primary runoff in August with 57% of the vote.

QAnon, a dangerous once-fringe collection of conspiracy theories, was well-represented in January’s deadly Capitol riot and many photos from the day show the prevalence of QAnon symbols and sayings. In 2019, an FBI bulletin warned of QAnon’s connection to “conspiracy theory-driven domestic extremists.” A year later, at least one person who had espoused the same views would win a seat in Congress.

The overlap between Greene’s beliefs and those of the violent pro-Trump mob at the Capitol escalated tensions among lawmakers, many of whom feared for their lives as the assault unfolded.

A freshman representative with little apparent appetite for policy or coalition-building, Greene wasn’t likely to wield much legislative power in the House. But as QAnon and adjacent conspiracies move from the fringe to the mainstream and possibly back again — a trajectory largely dictated by the at times arbitrary decisions of social media companies — Greene’s treatment in Congress may signal what’s to come for a dangerous online movement that’s more than demonstrated its ability to spill over into real-world violence.

Facebook blocks hashtags for #sharpiegate, #stopthesteal election conspiracies

Facebook today began to block select hashtags which were being used to share misinformation related to the 2020 U.S. presidential election.

Searches for the hashtag #SharpieGate are now blocked on the social network. Another election conspiracy hashtag, #stopthesteal, is also blocked on Facebook, with a note saying some of its content goes against the platform’s community standards. The #stopthesteal hashtag has been promoted on Twitter by Donald Trump Jr. and other Trump campaign associates.

Instead of taking users to search results for the hashtag in question, Facebook presents a page where it explains that posts with the hashtag are being “temporarily hidden.” This message also explains that “some content in those posts goes against our Community Standards,” and offers to direct users to its guidelines under a “Learn More” link.

Though TechCrunch found select election misinformation hashtags had been banned, there were still many others that would direct users to content that pushed conspiracies disputing the election results or outright calling them fraudulent.

For example, hashtags like #RiggedElection, #Rigged, #ElectionFraud, #ElectionMeddling and others still worked, and even directed users to content associated with QAnon conspiracies, at times — despite Facebook’s earlier ban on QAnon content, which extended to many associated hashtags.

Given that Facebook allowed QAnon content to spread for years, it’s notable that the company moved to block election misinformation hashtags in a matter of days. That indicates Facebook is capable of addressing viral misinformation somewhat quickly — it just has historically chosen not to do so.

As for the hashtags themselves, Sharpiegate had already been thoroughly debunked, both by news outlets and election officials. In a letter posted to Twitter, the Maricopa County Board of Supervisors debunked the claims that the use of sharpies would invalidate ballots. Because the ballots are printed with offset columns, the use of sharpies is allowed and would not cause bleed-through or other issues.

The claim was first made Tuesday in a video posted to Facebook in which a woman claims poll workers were encouraging some voters to use sharpies in order to invalidate their ballots. That video is now flagged on Facebook with a “false information” label and requires users to click through to watch it.

Social media election takedowns

In addition to its hashtag bans, Facebook today also removed a group, “Stop the Steal 2020,” BuzzFeed’s Ryan Mac first reported. The group was tied to real-world protests by Trump supporters, which have erupted around the country as key states continue to tally votes. Some protesters, inspired by misinformation, have even swarmed voting sites where counting is still underway.

“In line with the exceptional measures that we are taking during this period of heightened tension, we have removed the Group ‘Stop the Steal,’ which was creating real-world events,” Facebook spokesperson Andy Stone told TechCrunch. “The group was organized around the delegitimization of the election process, and we saw worrying calls for violence from some members of the group.”

As he has signaled he would for months, President Trump is leaning heavily into a false narrative that suspicious polling place behavior and late-arriving ballots are part of a Democratic plot to thwart his reelection chances. In a speech from the White House early Wednesday morning, Trump declared premature victory, raising baseless concerns that mail-in ballots, which were expected to lean heavily Democratic, were somehow improper as they erased some of his early gains. “We were getting ready to win this election,” Trump said. “Frankly, we did win this election.”

On Twitter, many of Trump’s recent tweets promoting unfounded election conspiracies have been hidden from view and placed behind a misinformation warning. Those hidden tweets also have likes, retweets and comments restricted in order to limit their ability to spread in a viral way. On Facebook, the president’s posts alleging fraud at voting sites are not called out directly as misinformation. Instead, Facebook pairs them with informational labels reminding users that vote-by-mail ballots are trustworthy or noting that election officials follow “strict rules” around processing and counting ballots. The company also disabled the “recent” page for Instagram hashtags in the lead-up to the election, a precaution designed to limit the spread of viral election misinformation.

Facebook so far has not responded to a request for comment about its new hashtag bans, but the blocks are observable on both desktop and in the mobile app as of the time of writing.

 

A QAnon supporter is headed to Congress

Marjorie Taylor Greene’s win in a Georgia House race means that QAnon is headed to Capitol Hill.

Greene openly supports the complex, outlandish conspiracy theory, which posits that President Trump is waging a secret war against a shadowy group of elites who engage in child sex trafficking, among other far-fetched claims. The FBI identified QAnon as a potential inspiration for “conspiracy theory-driven domestic extremists” last year.

Greene’s win is a startling moment of legitimacy for the dangerous conspiracy, though it wasn’t unexpected: her Democratic opponent dropped out of the race for personal reasons in September, clearing her path to the House seat.

Greene’s support for the constellation of conspiracy theories isn’t particularly quiet — nor are her other beliefs. Called a “future Republican star” by President Trump, Greene has been vocal in expressing racist and Islamophobic views. Greene has also espoused September 11 “truther” theories and criticized the use of masks, a scientifically-supported measure that reduces transmission of the novel coronavirus.

QAnon, once a belief only at the far-right fringes of the internet, has inspired followers to engage in real-world criminal acts, including fatally shooting a mob boss in Staten Island and blocking the Hoover Dam bridge in an armed standoff.

The conspiracy’s adherents have also hijacked the hashtag #savethechildren, interfering with legitimate child safety efforts and exporting their extreme ideas into mainstream conversation under the guise of helping children. Facebook, which previously banned QAnon, limited the hashtag’s reach last month in light of the phenomenon.

Other QAnon believers are on the ballot in 2020, including in Oregon, where Jo Rae Perkins is running for Senate after beating out other Republicans in the primary. Perkins is very open about her beliefs and in June tweeted a video pledging her allegiance as a “digital soldier” for QAnon along with a popular hashtag associated with the conspiracy movement.

Facebook says it will ban QAnon across its platforms

Facebook expanded a ban on QAnon-related content on its various social platforms Tuesday, deepening a previous prohibition on QAnon-related groups that had “discussed potential violence,” according to the company.

Today’s move by Facebook to ban not only violent QAnon content but “any Facebook Pages, Groups and Instagram accounts representing QAnon” is an escalation of the social giant’s efforts to clean up its platform ahead of an increasingly contentious election.

QAnon is a sprawling set of interwoven pro-Trump conspiracy theories that has taken root inside swaths of the American electorate. Its more extreme adherents have been charged with terrorism after acting out in violent and dangerous ways, spurred on by their adherence to the unusual and often incoherent belief system. BuzzFeed News recently decided to call QAnon a “collective delusion,” another apt label for the theory’s inane, fatuous, and dangerous beliefs.

Facebook’s effort to rein in QAnon is helpful, but likely too late. Over the course of the last year, QAnon swelled from a fringe conspiracy theory into a shockingly mainstream political belief system — one that even has its own Congressional candidates. That growth was powered by social networks inherently designed to connect like-minded people to one another, a feature that has been found time and time again to spread misinformation and usher users toward increasingly radical beliefs.

In July, Twitter took action of its own against QAnon, citing concerns about “offline harm.” The company downranked QAnon content, removing it from trending pages and algorithmic suggestions. Twitter’s policy change, like Facebook’s previous one, stopped short of banning the content outright but did move to contain its spread.

Other companies, like Alphabet’s YouTube, have come under similar censure from external observers. (YouTube says it reworked its algorithm to better filter out the darker shores of its content mix, but the results of that experiment are far from conclusive.)

Social platforms like Facebook and Twitter have also changed their rules after being confronted, ahead of an election, with a willfully mendacious administration that has propagated lies and disinformation about voting security and about the virus that has killed more than 200,000 Americans. The pair’s work to limit those two particularly risky strains of misinformation is worthy, but because they have taken a reactive posture instead of a proactive one, most of those policy choices have come too late to control the viral spread of dangerous content.

Facebook’s new rule comes into force today, with the company saying in a release that it is now “removing content accordingly,” but that the effort to purge QAnon “will take time.”

What drove the change at Facebook? According to the company, after it yanked violent QAnon material, it saw “other QAnon content tied to different forms of real world harm, including recent claims that the west coast wildfires were started by certain groups.” In Oregon, where wildfires recently raged, misinformation on Facebook led residents who believed that antifa — a term applied to those opposed to fascism, used as an unironic pejorative — were torching the state to set up illegal roadblocks.

How effective Facebook will be at clearing QAnon-related content from its various platforms is not clear today, but it will be something we track.

Twitter cracks down on QAnon conspiracy theory, banning 7,000 accounts

Twitter announced Tuesday that many accounts spreading the pervasive right-wing conspiracy theory known as QAnon would no longer be welcome on its platform.

In a series of tweets, the company explained that it would take “strong enforcement action” against QAnon content on the platform, removing related topics from its trending pages and algorithmic recommendations, blocking any associated URLs and permanently suspending any accounts tweeting about QAnon that have previously been suspended, that coordinate harassment against individuals or that amplify identical content across multiple accounts.

Twitter says the enforcement will go into effect this week and that it will continue to provide transparency and additional context as it makes related platform policy choices going forward. According to a Twitter spokesperson, the company believes its action will affect 150,000 accounts, and more than 7,000 QAnon-related accounts have already been removed for breaking its rules around platform manipulation, ban evasion and spam.

QAnon emerged in the Trump era and the conspiracy’s adherents generally fervently support the president, making frequent appearances at his rallies and other pro-Trump events. QAnon’s supporters believe that President Trump is waging a hidden battle against a secretive elite known as the Deep State. In their eyes, that secret battle produces many, many clues that they claim are encoded in messages sprinkled across anonymous online accounts and hinted at by the president himself.

QAnon is best known for its connection to Pizzagate, a baseless conspiracy that accused Hillary Clinton of running a sex trafficking ring out of a Washington D.C. pizza place. The conspiracy inspired an armed believer to show up to the pizza shop, where he fired a rifle inside the restaurant, though no one was injured.

While the conspiracy theory is elaborate, odd, and mostly incoherent, it’s been popping up in other mainstream places. Last week, Ed Mullins, the head of one of New York City’s most prominent police unions, spoke live on Fox News with a mug featuring the QAnon logo within clear view of the camera. In Oregon, a QAnon supporter won her primary to become the state’s Republican nominee for the Senate.

Twitter starts putting fact-checking labels on tweets about 5G and COVID-19

Conspiracy theories claiming a connection between 5G technology and the coronavirus have been around since the pandemic’s early days and apparently they’re still going strong.

So strong, in fact, that Twitter began applying a label to some tweets about COVID-19 and 5G, encouraging users to “get the facts” about the virus. As Business Insider reported, clicking through the label leads to a page collecting sources that debunk the claims, with links to outlets like the BBC and Snopes. Earlier this year, the conspiracy was linked to a series of arson attacks on 5G towers.

Twitter’s latest wave of fact-checking labels appears to have been applied pretty broadly, even on some tweets making jokes or references to the label itself.

Expanding its fact-checking labels to cover coronavirus 5G conspiracies is the latest instance of Twitter’s evolving platform moderation efforts. The labels stop short of hiding or removing content altogether, instead offering up additional context and letting users reach their own conclusions. At the very least, even with its mild wording, the warning label should help flag untrustworthy content for the platform’s most credulous users.

Coronavirus conspiracies like that bogus 5G claim are racing across the internet

As the U.S. and much of the world hunkers down to slow the spread of the novel coronavirus, some virus-related conspiracy theories are having a heyday. Specifically, a conspiratorial false claim that 5G technology is linked to COVID-19 gained ground, accelerating from obscurity into the rattled mainstream by way of conspiracy theorists who’d been chattering about 5G conspiracies for years.

While there is scientific consensus around the basic medical realities of COVID-19, researchers are still filling in the gaps on a virus that no one knew existed five months ago. That relative dearth of information opens the way for ideas usually relegated to the internet’s fringes to slip into the broader conversation about the pandemic—a dangerous feature of an unprecedented global health crisis.

According to Yonder, an AI company that monitors online conversations including disinformation, conspiracies that would normally remain in fringe groups are traveling to the mainstream faster during the pandemic.

A report on coronavirus misinformation from the company notes “the mainstream is unusually accepting of conspiratorial thinking, rumors, alarm, or panic” during uncertain times—a phenomenon that explains the movement of misinformation that we’re seeing now.

While the company estimates that it would normally take six to eight months for a “fringe narrative” to make its way from the edges of the internet into the mainstream, that interval looks more like three to 14 days in the midst of COVID-19.

“In the current infodemic, we’ve seen conspiracy theories and other forms of misinformation spread across the internet at an unprecedented velocity,” Yonder Chief Innovation Officer Ryan Fox told TechCrunch. He believes that the trend represents the outsized influence of “small groups of hyper passionate individuals” in driving misinformation, like the 5G claims.

While 5G claims about the coronavirus are new, 5G conspiracies are not. “5G misinformation from online factions like QAnon or Anti-Vaxxers has existed for months, but is accelerating into the mainstream much more rapidly due to its association with COVID-19,” Fox said.

The seed of the false 5G coronavirus claim may have been planted in a late January print interview with a Belgian doctor who suggested that 5G technology poses health dangers and might be linked to the virus, according to reporting from Wired. Not long after the interview, Dutch-speaking anti-5G conspiracy theorists picked up on the theory and it spread through Facebook pages and YouTube channels already trafficking in other 5G conspiracies. Somewhere along the way, people started burning down mobile phone towers in the UK, acts that government officials believe have a link to the viral misinformation, even though they apparently took down the wrong towers. “Owing to the slow rollout of 5G in the UK, many of the masts that have been vandalised did not contain the technology and the attacks merely damaged 3G and 4G equipment,” The Guardian reported.

This week, the conspiracy went mainstream, getting traction among a pocket of credulous celebrities, including actors John Cusack and Woody Harrelson, who amplified the false 5G claims to their large followings on Twitter and Instagram, respectively.

A quick Twitter search reveals plenty of variations on the conspiracy still circulating. “… Can’t everyone see that 5G was first tested in Wuhan. It’s not a coincidence!” one Twitter user claims. “5G was first installed in Wuhan and now other major cities. Coincidence?” another asks.

In the past, 5G misinformation has had plenty of help. As the New York Times reported last year, Russian state-linked media outlet RT America began airing segments raising alarms about 5G and health back in 2018. By last May, RT America had aired seven different programs focused on unsubstantiated claims around 5G, including a report that 5G towers could cause nosebleeds, learning disabilities and even cancer in children. It’s possible that the current popular 5G hoax could be connected to disinformation campaigns as well, though we likely won’t learn the specifics for some time.

In previous research on 5G-related conspiracies, social analytics company Graphika found that the majority of the online conversation around 5G focused on its health effects. Accounts sharing those kinds of conspiracies overlapped with accounts pushing anti-vaccine, flat Earth and chemtrail misinformation.

While the 5G coronavirus conspiracy theory has taken off, it’s far from the only pandemic-related misinformation making the rounds online lately. From the earliest moments of the crisis, fake cures and preventative treatments offered scammers an opportunity to cash in. And even after social media companies announced aggressive policies cracking down on potentially deadly health misinformation, scams and conspiracies can still surface in AI blind spots. On YouTube, some scammers avoid trigger words like “coronavirus” that alert automated systems in order to sell products like a powdered supplement that its seller falsely claims can ward off the virus. With their human moderators sent home, YouTube and other social platforms are relying on AI now more than ever.

Social networks likely enabled the early spread of much of the COVID-19 misinformation floating around the internet, but they don’t account for all of it. Twitter, Facebook, and YouTube all banned Infowars founder and prominent conspiracy theorist Alex Jones from their platforms back in 2018, but on his own site, Jones is peddling false claims that products he sells can be used to prevent or treat COVID-19.

The claims are so dangerous that the FDA even stepped in this week, issuing a warning letter to Jones telling him to cease the sale of those products. One Infowars video cited by the FDA instructs viewers concerned about the coronavirus “to go to the Infowars store, pick up a little bit of silver that really acts its way to boost your immune system and fight off infection.”

As it becomes clear that the disruptions to everyday life necessitated by the novel coronavirus are likely to be with us for some time, coronavirus conspiracies and scams are likely to stick around too. A vaccine will eventually inoculate human populations against the devastating virus, but if history is any indication, even that is likely to be fodder for online conspiracists.

YouTube to reduce conspiracy theory recommendations in the UK

YouTube is expanding to the UK market an experimental tweak to its recommendation engine that’s intended to reduce the amplification of conspiracy theories.

In January, the video-sharing platform said it was making changes in the US to limit the spread of conspiracy theory content, such as junk science and bogus claims about historical events — following sustained criticism of how its platform accelerates damaging clickbait.

A YouTube spokeswoman confirmed to TechCrunch that it is now in the process of rolling out the same update, which suppresses conspiracy recommendations, in the UK. She said it will take some time to take full effect — without providing detail on when exactly the changes will be fully applied.

The spokeswoman said YouTube acknowledges that it needs to do more to reform a recommendation system that has been shown time and again to lift harmful clickbait and misinformation into mainstream view. YouTube claims this negative spiral occurs only sometimes, though, and says that on average its system points users to mainstream videos.

The company calls the type of junk content it has been experimenting with recommending less often “borderline,” saying it’s stuff that toes the line of its acceptable content policies. In practice this means stuff like videos that claim the Earth is flat, push blatant lies about historical events such as the 9/11 terror attacks, or promote harmful junk about bogus miracle cures for serious illnesses.

All of which can be filed under misinformation ‘snake oil’. But for YouTube this sort of junk has been very lucrative snake oil, a consequence of Google’s commercial imperative to keep eyeballs engaged in order to serve more ads.

More recently, though, YouTube has taken a reputational hit as its platform has been blamed for an extremist and radicalizing impact on young and impressionable minds by encouraging users to swallow junk science and worse.

A former Google engineer, Guillaume Chaslot, who worked on YouTube’s recommendation algorithms, went public last year to condemn what he described as the engine’s “toxic” impact, which he said “perverts civic discussion” by encouraging the creation of highly engaging borderline content.

Multiple investigations by journalists have also delved into instances where YouTube has been blamed for pushing people, including the young and impressionable, towards far-right points of view via its algorithm’s radicalizing rabbit hole — one that exposes users to increasingly extreme content without providing any context about what it’s encouraging them to view.

Of course it doesn’t have to be this way. Imagine if a YouTube viewer who sought out a video produced by a partisan shock jock were suggested less extreme content or even an entirely alternative political point of view. Or saw only calming yoga and mindfulness videos in their ‘up next’ feed.

YouTube has eschewed a more balanced approach to the content its algorithms select and recommend for commercial reasons. But it may also have been keen to avoid drawing overt attention to the fact that its algorithms are acting as de facto editors.

And editorial decisions are what media companies make. So it then follows that tech platforms which perform algorithmic content sorting and suggestion should be regulated like media businesses are. (And all tech giants in the user generated content space have been doing their level best to evade that sort of rule of law for years.)

That Google has the power to edit out junk is clear.

A spokeswoman for YouTube told us the US test of reduced conspiracy junk recommendations has led to a drop of more than 50% in the number of views coming from recommendations.

Though she also said the test is still ramping up — suggesting the impact on the viewing and amplification of conspiracy nonsense could be even greater if YouTube were to more aggressively demote this type of BS.

What’s very clear is the company has the power to flick algorithmic levers that determine what billions of people see — even if you don’t believe that might also influence how they feel and what they believe. Which is a concentration of power that should concern people on all sides of the political spectrum.

While YouTube could further limit algorithmically amplified toxicity, the problem is that its business continues to monetize engagement, and clickbait’s fantastical nonsense is, by nature, highly engaging. So — for purely commercial reasons — it has a counter-incentive not to clear out all of YouTube’s crap.

How long the company can keep up this balancing act remains to be seen, though. In recent years some major YouTube advertisers have intervened to make it clear they do not relish their brands being associated with abusive and extremist content. Which does represent a commercial risk to YouTube — if pressure from and on advertisers steps up.

Like all powerful tech platforms, its business is also facing rising scrutiny from politicians and policymakers. And questions about how to ensure such content platforms do not have a deleterious effect on people and societies are now front of mind for governments in some markets around the world.

That political pressure — which is a response to public pressure, after a number of scandals — is unlikely to go away.

So YouTube’s still glacial response to addressing how its population-spanning algorithms negatively select for stuff that’s socially divisive and individually toxic may yet come back to bite it — in the form of laws that put firm limits on its powers to push people’s buttons.