Facebook whistleblower Frances Haugen will talk Section 230 reform with Congress this week

Facebook whistleblower Frances Haugen will go before Congress again this week, this time offering her unique perspective on the company’s moderation and policy failures as they relate to Section 230 of the Communications Decency Act, the key legal shield that protects online platforms from liability for the user-created content they host.

The House Energy and Commerce Subcommittee on Communications and Technology will hold the hearing, titled “Holding Big Tech Accountable: Targeted Reforms to Tech’s Legal Immunity,” this Wednesday, December 1 at 10:30 AM ET. Color of Change President Rashad Robinson and Common Sense Media CEO James Steyer will also testify on Wednesday.

The hearing is the latest Section 230-focused discussion from the House committee. In March, the chief executives of Facebook, Google and Twitter went before lawmakers to defend the measures they’ve taken to fight misinformation and disinformation — two major areas of concern that have inspired Democratic lawmakers to reexamine tech’s longstanding liability shield.

In an October Senate hearing, Haugen advocated for changes to Section 230 that would hold platforms accountable for the content that they promote algorithmically. While Haugen isn’t an expert on legislative solutions to some of social media’s current ills, given her time with Facebook’s since-dismantled civic integrity team, she’s uniquely positioned to give lawmakers insight into some of the most dangerous societal outcomes of algorithmically amplified content.

“User-generated content is something companies have less control over. But they have 100% control over their algorithms,” Haugen said. “Facebook should not get a free pass on choices it makes to prioritize growth, virality and reactiveness over public safety.”

Facebook’s former News Feed lead and current Head of Instagram Adam Mosseri is also set to testify before the Senate for the first time next week, addressing revelations in leaked documents that the company knows its business takes a toll on the mental health of some of its youngest, most vulnerable users.

In its announcement, the House Energy and Commerce committee cited four tech reform bills that Congress is currently mulling: the Justice Against Malicious Algorithms Act of 2021, the SAFE TECH Act, the Civil Rights Modernization Act of 2021 and the Protecting Americans from Dangerous Algorithms Act. The first bill, proposed by the committee holding Wednesday’s hearing, would lift Section 230’s liability protections in cases when a platform “knowingly or recklessly” recommends harmful content using algorithms.

Trump’s new lawsuits against social media companies are going nowhere fast

Trump’s spicy trio of lawsuits against the social media platforms that he believes wrongfully banned him have succeeded in showering the former president with a flurry of media attention, but that’s likely where the story ends.

Like Trump’s quixotic and ultimately empty quest to gut Section 230 of the Communications Decency Act during his presidency, the new lawsuits are all sound and fury with little legal substance to back them up.

The suits allege that Twitter, Facebook and YouTube violated Trump’s First Amendment rights by booting him from their platforms, but the First Amendment is intended to protect citizens from censorship by the government — not private industry. The irony that Trump himself was the uppermost figure in the federal government at the time probably won’t be lost on whoever’s lap this case lands in.

In the lawsuits, which also name Twitter and Facebook chief executives Jack Dorsey and Mark Zuckerberg as well as Google CEO Sundar Pichai (Susan Wojcicki escapes notice once again!), Trump accuses the three companies of engaging in “impermissible censorship resulting from threatened legislative action, a misguided reliance upon Section 230 of the Communications Decency Act, and willful participation in joint activity with federal actors.”

The suit claims that the tech companies colluded with “Democrat lawmakers,” the CDC and Dr. Anthony Fauci, who served in Trump’s own government at the time.

The crux of the argument is that communication between the tech companies, members of Congress and the federal government somehow transforms Facebook, Twitter and YouTube into “state actors” — a leap of epic proportions:

“Defendant Twitter’s status thus rises beyond that of a private company to that of a state actor, and as such, Defendant is constrained by the First Amendment right to free speech in the censorship decisions it makes.”

Trump’s own Supreme Court appointee Brett Kavanaugh issued the court’s opinion on a relevant case two years ago. It examined whether a nonprofit running public access television channels in New York qualified as a “state actor” that would be subject to First Amendment constraints. The court ruled that running the public access channels didn’t transform the nonprofit into a government entity and that it retained a private entity’s rights to make editorial decisions.

“… A private entity… who opens its property for speech by others is not transformed by that fact alone into a state actor,” Justice Kavanaugh wrote in the decision.

It’s not likely that a court would decide that talking to the government or being threatened by the government somehow transforms Twitter, YouTube and Facebook into state actors either.

Trump vs. Section 230 (again)

First Amendment aside — and there’s really not much of an argument there — social media platforms are protected by Section 230 of the Communications Decency Act, a concise snippet of law that shields them from liability not just for the user-generated content they host but for the moderation decisions they make about what content to remove.

In line with Trump’s obsessive disdain for tech’s legal shield, the lawsuits repeatedly rail against Section 230. The suits try to argue that Congress’s threats to revoke tech’s 230 protections forced the companies to ban Trump, which somehow makes social media companies part of the government and subject to First Amendment constraints.

Of course, it was Republican lawmakers and Trump’s own administration that made frequent threats about repealing Section 230 — not that it matters much, since this line of argument doesn’t hold up anyway.

The suit also argues that Congress crafted Section 230 to intentionally censor speech that is otherwise protected by the First Amendment, ignoring that the law was born in 1996, well before ubiquitous social media, and for other purposes altogether.

For the four years of his presidency, Trump’s social media activity — his tweets in particular — informed the events of the day, both nationally and globally. While other world leaders and political figures used social media to communicate or promote their actions, Trump’s Twitter account was usually the action itself.

In the shadow of his social media bans, the former president has failed to re-establish lines of communication to the internet at large. In May, he launched a new blog, “From the Desk of Donald J. Trump,” but the site was taken down just a month later after it failed to attract much interest.

The handful of pro-Trump alternative social platforms are still struggling with app store content moderation requirements at odds with their extreme views on free speech, but that didn’t stop Gettr, the latest, from going ahead with its own rocky launch last week.

Viewed in one light, Trump’s lawsuits are a platform too, his latest method for broadcasting himself to the online world that his transgressions eventually cut him off from. In that sense, they seem to have succeeded, but in all other senses, they won’t.

At social media hearing, lawmakers circle algorithm-focused Section 230 reform

Rather than a CEO-slamming sound bite free-for-all, Tuesday’s big tech hearing on algorithms aimed for more of a listening session vibe — and in that sense it mostly succeeded.

The hearing centered on testimony from the policy leads at Facebook, YouTube and Twitter rather than the chief executives of those companies for a change. The resulting few hours didn’t offer any massive revelations but were still probably more productive than squeezing some of the world’s most powerful men for their commitments to “get back to you on that.”

In the hearing, lawmakers bemoaned social media echo chambers and the ways that the algorithms pumping content through platforms are capable of completely reshaping human behavior.

“… This advanced technology is harnessed into algorithms designed to attract our time and attention on social media, and the results can be harmful to our kids’ attention spans, to the quality of our public discourse, to our public health, and even to our democracy itself,” said Chris Coons (D-DE), chair of the Senate Judiciary’s subcommittee on privacy and tech, which held the hearing.

Coons struck a cooperative note, observing that algorithms drive innovation but that their dark side comes with considerable costs.

None of this is new, of course. But Congress is crawling closer to solutions, one repetitive tech hearing at a time. The Tuesday hearing highlighted some zones of bipartisan agreement that could determine the chances of a tech reform bill passing the Senate, which is narrowly controlled by Democrats. Coons expressed optimism that a “broadly bipartisan solution” could be reached.

What would that look like? Probably changes to Section 230 of the Communications Decency Act, which we’ve written about extensively over the years. That law protects social media companies from liability for user-created content and it’s been a major nexus of tech regulation talk, both in the newly Democratic Senate under Biden and the previous Republican-led Senate that took its cues from Trump.

Lauren Culbertson, head of U.S. public policy at Twitter Inc., speaks remotely during a Senate Judiciary Subcommittee hearing in Washington, D.C., U.S., on Tuesday, April 27, 2021. Photographer: Al Drago/Bloomberg via Getty Images

A broken business model

In the hearing, lawmakers pointed to flaws inherent to how major social media companies make money as the heart of the problem. Rather than criticizing companies for specific failings, they mostly focused on the core business model from which social media’s many ills spring forth.

“I think it’s very important for us to push back on the idea that really complicated, qualitative problems have easy quantitative solutions,” Sen. Ben Sasse (R-NE) said. He argued that because social media companies make money by keeping users hooked to their products, any real solution would have to upend that business model altogether.

“The business model of these companies is addiction,” Josh Hawley (R-MO) echoed, calling social media an “attention treadmill” by design.

Ex-Googler and frequent tech critic Tristan Harris didn’t mince words about how tech companies talk around that central design tenet in his own testimony. “It’s almost like listening to a hostage in a hostage video,” Harris said, likening the engagement-seeking business model to a gun just offstage.

Spotlight on Section 230

One big way lawmakers propose to disrupt those deeply entrenched incentives? Adding algorithm-focused exceptions to the Section 230 protections that social media companies enjoy. A few bills floating around take that approach.

One bill from Sen. John Kennedy (R-LA) and Reps. Paul Gosar (R-AZ) and Tulsi Gabbard (D-HI) would require platforms with 10 million or more users to obtain consent before serving users content based on their behavior or demographic data if they want to keep Section 230 protections. The idea is to revoke 230 immunity from platforms that boost engagement by “funneling information to users that polarizes their views” unless a user specifically opts in.

In another bill, the Protecting Americans from Dangerous Algorithms Act, Reps. Anna Eshoo (D-CA) and Tom Malinowski (D-NJ) propose suspending Section 230 protections and making companies liable “if their algorithms amplify misinformation that leads to offline violence.” That bill would amend Section 230 to reference existing civil rights laws.

Section 230’s defenders argue that any insufficiently targeted changes to the law could disrupt the modern internet as we know it, resulting in cascading negative impacts well beyond the intended scope of reform efforts. An outright repeal of the law is almost certainly off the table, but even small tweaks could completely realign internet businesses, for better or worse.

During the hearing, Hawley made a broader suggestion for companies that use algorithms to chase profits. “Why shouldn’t we just remove section 230 protection from any platform that engages in behavioral advertising or algorithmic amplification?” he asked, adding that he wasn’t opposed to an outright repeal of the law.

Sen. Amy Klobuchar (D-MN), who leads the Senate’s antitrust subcommittee, connected the algorithmic concerns to anti-competitive behavior in the tech industry. “If you have a company that buys out everyone from under them… we’re never going to know if they could have developed the bells and whistles to help us with misinformation because there is no competition,” Klobuchar said.

Subcommittee members Klobuchar and Sen. Mazie Hirono (D-HI) have their own major Section 230 reform bill, the Safe Tech Act, but that legislation is less concerned with algorithms than ads and paid content.

At least one more major bill looking at Section 230 through the lens of algorithms is still on the way. Prominent big tech critic House Rep. David Cicilline (D-RI) is due out soon with a Section 230 bill that could suspend liability protections for companies that rely on algorithms to boost engagement and line their pockets.

“That’s a very complicated algorithm that is designed to maximize engagement to drive up advertising prices to produce greater profits for the company,” Cicilline told Axios last month. “…That’s a set of business decisions for which, it might be quite easy to argue, that a company should be liable for.”

AOL founder Steve Case, involved early in Section 230, says it’s time to change it

AOL founder Steve Case was there in Dulles, Virginia, just outside of Washington, D.C., when in 1996 the Communications Decency Act was passed as part of a major overhaul of U.S. telecommunications laws that President Bill Clinton signed into law. Soon after, in its first test, a provision of that act which states that, “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” would famously save AOL’s bacon, too.

That wasn’t coincidental. In a wide-ranging call earlier today with Case — who has become an influential investor over the last 15 years through his Washington, D.C.-based firm Revolution and its early-stage, growth-stage, and seed-stage funds — he talked about his involvement in Section 230’s creation, and why he thinks it’s time to change it.

We’ll have more from our interview with Case tomorrow. In the meantime, here he talks about the related legal protections for online platforms that took center stage yesterday, or at least were supposed to, during the Senate’s latest Big Tech hearing.

In that early birthing stage of the internet, [we were all] figuring out what the rules of the road were, and the 230 provision was something I was involved in. I do think the first lawsuit related to it was related to AOL. But 25 years later, it’s fair to take a fresh look at it — [it’s] appropriate to take a fresh look at it. I’ve not recently spent enough time digging in to really have a strong point of view in terms of exactly what to change, but I think it’s fair to say that what made sense in those early days when very few people were online maybe doesn’t make as much sense now when the entire world is online and the impact these platforms have is so significant.

At the same time, I think you have to be super careful. I think that’s what the CEOs testifying [yesterday] were trying to emphasize. [It was] ‘We get that there’s a desire to relook at it. We also get that because of the election season, it’s become a highly politicized issue. Let’s engage in this discussion, and perhaps there are some things that need to be modified to reflect the current reality . . . let’s don’t do it just in the heat of a political moment.’

When we started AOL 35 years ago, only 3% of people were connected. They were only online about an hour a week, and it was still illegal, actually, for consumers or businesses to be on the internet, [so] I spent a lot of time on commercializing the internet, opening it up to consumers and businesses, figuring out what the right rules of the road were in terms of things like taxes on e-commerce. And generally, we were able to convince regulators and government leaders that a light touch for the internet made sense, because it was a new idea, and it wasn’t clear exactly how it was going to develop.

But now, it’s not a new idea. And now it has a profound impact on people’s lives and our communities and countries. And so I’m not surprised that there’s more focus on it, [though] it’s a little too bad that there’s so much attention right this moment because in an election season, things tend to get a little bit hot on both sides.

Putting that aside, I think there are legitimate issues that the policymakers need to be looking at and are starting to look at, not just in Washington, DC, but more broadly in Brussels. And I think having more of a dialogue between the innovators and the policymakers is actually going to be critical in this internet third wave, because the sectors up for grabs are the most important aspects of our lives — things like health care and education and food and agriculture. And that’s really going to require not just innovation from a technology standpoint, but thoughtfulness from a policy standpoint.

I understand entrepreneurs who get frustrated by regulations kind of slowing down the pace of innovation. I get that. Obviously, some of the businesses that we back have suffered from that. But at the same time, you can’t not expect the government — which is elected by the people — to serve the people, including protecting the people.

Reports say White House has drafted an order putting the FCC in charge of monitoring social media

The White House is contemplating issuing an executive order that would widen its attack on the operations of social media companies.

The White House has prepared an executive order called “Protecting Americans From Online Censorship” that would give the Federal Communications Commission oversight of how Facebook, Twitter and other tech companies monitor and manage their social networks, according to a CNN report.

Under the order, which has not yet been announced and could be revised, the FCC would be tasked with developing new regulations that would determine when and how social media companies filter posts, videos, or articles on their platforms.

The draft order also calls for the Federal Trade Commission to take those new policies into account when investigating or filing lawsuits against technology companies, according to the CNN report.

Social media censorship has been a perennial talking point for President Donald Trump and his administration. In May, the White House set up a tip line for people to provide evidence of social media censorship and a systemic bias against conservative media.

In the executive order, the White House says it received more than 15,000 complaints about censorship from the technology platforms. The order also includes an offer to share the complaints with the Federal Trade Commission.

As part of the order, the Federal Trade Commission would be required to open a public complaint docket and coordinate with the Federal Communications Commission on investigations of how technology companies curate their platforms — and whether that curation is politically agnostic.

Under the proposed rule, any company whose monthly user base includes more than one-eighth of the U.S. population would be subject to oversight by the regulatory agencies. A roster of companies subject to the new scrutiny would include Facebook, Google, Instagram, Twitter, Snap and Pinterest.

At issue is how broadly or narrowly companies are protected under the Communications Decency Act, which was part of the Telecommunications Act of 1996. Social media companies use the Act to shield against liability for the posts, videos, or articles that are uploaded from individual users or third parties.

Members of the Trump administration aren’t the only politicians in Washington focused on the laws that shield social media platforms from legal liability. House Speaker Nancy Pelosi took technology companies to task earlier this year in an interview with Recode.

The criticisms may come from different sides of the political spectrum, but their focus on the ways in which tech companies could use Section 230 of the Act is the same.

The White House’s executive order would ask the FCC to disqualify social media companies from immunity if they remove or limit the dissemination of posts without first notifying the user or third party that posted the material, or if the decision from the companies is deemed anti-competitive or unfair.

The FTC and FCC had not responded to a request for comment at the time of publication.