UK publishes draft Online Safety Bill

The UK government has published its long-trailed (child) ‘safety-focused’ plan to regulate online content and speech.

The Online Safety Bill has been in the works for years. During that time a prior plan to require age verification for accessing online porn in the UK — which shared the goal of protecting kids from inappropriate content online but was widely criticized as unworkable — was quietly dropped.

At the time the government said it would focus on introducing comprehensive legislation to regulate a range of online harms. It can now say it’s done that.

The 145-page Online Safety Bill can be found here on the gov.uk website — along with 123 pages of explanatory notes and a 146-page impact assessment.

The draft legislation imposes a duty of care on digital service providers to moderate user-generated content in a way that prevents users from being exposed to illegal and/or harmful stuff online.

The government dubs the plan globally “groundbreaking” and claims it will usher in “a new age of accountability for tech and bring fairness and accountability to the online world”.

Critics warn the proposals will harm freedom of expression by encouraging platforms to over-censor, while also creating major legal and operational headaches for digital businesses that will discourage tech innovation.

The debate starts now in earnest.

The bill will be scrutinised by a joint committee of MPs — before a final version is formally introduced to Parliament for debate later this year.

How long it might take to hit the statute books isn’t clear but the government has a large majority in parliament so, failing major public uproar and/or mass opposition within its own ranks, the Online Safety Bill has a clear road to becoming law.

Commenting in a statement, digital secretary Oliver Dowden said: “Today the UK shows global leadership with our groundbreaking laws to usher in a new age of accountability for tech and bring fairness and accountability to the online world.

“We will protect children on the internet, crack down on racist abuse on social media and through new measures to safeguard our liberties, create a truly democratic digital age.”

The length of time it’s taken for the government to draft the Online Safety Bill underscores the legislative challenge involved in trying to ‘regulate the Internet’.

In a bit of a Freudian slip, the DCMS’ own PR talks about “the government’s fight to make the internet safe”. And there are certainly question-marks over who the future winners and losers of the UK’s Online Safety laws will be.

Safety and democracy?

In a press release about the plan, the Department for Digital, Culture, Media and Sport (DCMS) claimed the “landmark laws” will “keep children safe, stop racial hate and protect democracy online”.

But as that grab-bag of headline goals implies, there’s an awful lot going on here — and huge potential for things to go wrong if the end result is an incoherent mess of contradictory rules that make it harder for digital businesses to operate and for Internet users to access the content they need.

The laws are set to apply widely — not just to tech giants or social media sites but to a broad swathe of websites, apps and services that host user-generated content or just allow people to talk to others online.

In-scope services will face a legal requirement to remove and/or limit the spread of illegal and (in the case of larger services) harmful content, with the risk of major penalties for failing in this new duty of care toward users. There will also be requirements for reporting child sexual exploitation content to law enforcement.

Ofcom, the UK’s comms regulator — which is responsible for regulating the broadcast media and telecoms sectors — is set to become the UK Internet’s content watchdog too, under the plan.

It will have powers to sanction companies that fail in the new duty of care toward users by hitting them with fines of up to £18M or ten per cent of annual global turnover (whichever is higher).
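To make that penalty cap concrete, here is a minimal worked sketch of the “whichever is higher” calculation described above — the function name and example turnover figures are illustrative, not taken from the bill:

def max_online_safety_fine(annual_global_turnover_gbp):
    # The draft bill caps fines at £18M or 10% of annual global
    # turnover, whichever is higher.
    return max(18_000_000, 0.10 * annual_global_turnover_gbp)

# A firm turning over £50M faces a cap of £18M (the fixed floor applies).
print(max_online_safety_fine(50_000_000))      # 18000000
# A firm turning over £2B faces a cap of £200M (10% of turnover applies).
print(max_online_safety_fine(2_000_000_000))   # 200000000.0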

The regulator will also get the power to block access to sites — so the potential for censoring entire platforms is baked in.

Some campaigners backing tough new Internet rules have been pressing the government to include the threat of criminal sanctions for CEOs to concentrate C-suite minds on anti-harms compliance. And while ministers haven’t gone that far, DCMS says a new criminal offence for senior managers has been included as a deferred power — adding: “This could be introduced at a later date if tech firms don’t step up their efforts to improve safety.”

Despite there being widespread public support in the UK for tougher rules for Internet platforms, the devil is in the detail of how exactly you propose to do that.

Civil rights campaigners and tech policy experts have warned from the get-go that the government’s plan risks having a chilling effect on online expression by forcing private companies to be speech police.

Legal experts are also warning over how workable the framework will be, given hard to define concepts like “harms” — and, in a new addition, content that’s defined as “democratically important” (which the government wants certain platforms to have a special duty to protect).

The clear risk is massive legal uncertainty wrapping digital businesses — with knock-on impacts on startup innovation and availability of services in the UK.

The bill’s earlier incarnation — a 2019 White Paper — had the word “harms” in the title. That’s been swapped for a more anodyne reference to “safety” but the legal uncertainty hasn’t been swapped out.

The emphasis remains on trying to rein in an amorphous conglomerate of ‘harms’ — some illegal, others just unpleasant — that have been variously linked to or associated with online activity. (Often off the back of high profile media reporting, such as into children’s exposure to suicide content on platforms like Instagram.)

This can range from bullying and abuse (online trolling), to the spread of illegal content (child sexual exploitation), to content that’s merely inappropriate for children to see (legal pornography).

Certain types of online scams (romance fraud) are another harm the government wants the legislation to address, per latest additions.

The umbrella ‘harms’ framing makes the UK approach distinct from the European Union’s Digital Services Act — a parallel legislative proposal to update the EU’s digital rules that’s more tightly focused on things that are illegal, with the bloc setting out rules to standardize reporting procedures for illegal content; and combating the risk of dangerous products being sold on ecommerce marketplaces with ‘know your customer’ requirements.

In response to criticism of the UK Bill’s potential impact on online expression, the government has added measures which it said today are aimed at strengthening people’s rights to express themselves freely online.

It also says it’s added in safeguards for journalism and to protect democratic political debate in the UK.

However its approach is already raising questions — including over what look like some pretty contradictory stipulations.

For example, the DCMS’ discussion of how the bill will handle journalistic content confirms that content on news publishers’ own websites won’t be in scope of the law (reader comments on those sites are also not in scope) and that articles by “recognised news publishers” shared on in-scope services (such as social media sites) will be exempted from legal requirements that may otherwise apply to non-journalistic content.

Indeed, platforms will have a legal requirement to safeguard access to journalism content. (“This means [digital platforms] will have to consider the importance of journalism when undertaking content moderation, have a fast-track appeals process for journalists’ removed content, and will be held to account by Ofcom for the arbitrary removal of journalistic content,” DCMS notes.)

However the government also specifies that “citizen journalists’ content will have the same protections as professional journalists’ content” — so exactly where (or how) the line gets drawn between “recognized” news publishers (out of scope), citizen journalists (also out of scope), and just any old person blogging or posting stuff on the Internet (in scope… maybe?) is going to make for compelling viewing.

Carve-outs to protect political speech also complicate the content moderation picture for digital services — given, for example, how extremist groups that hold racist opinions can seek to launder their hate speech and abuse as ‘political opinion’. (Some notoriously racist activists also like to claim to be ‘journalists’…)

DCMS writes that companies will be “forbidden from discriminating against particular political viewpoints and will need to apply protections equally to a range of political opinions, no matter their affiliation”.

“Policies to protect such content will need to be set out in clear and accessible terms and conditions and firms will need to stick to them or face enforcement action from Ofcom,” it goes on, adding: “When moderating content, companies will need to take into account the political context around why the content is being shared and give it a high level of protection if it is democratically important.”

Platforms will face responsibility for balancing all these conflicting requirements — drawing on Codes of Practice on content moderation that respects freedom of expression which will be set out by Ofcom — but also under threat of major penalties being slapped on them by Ofcom if they get it wrong.

Interestingly, the government appears to be looking favorably on the Facebook-devised ‘Oversight Board’ model, where a panel of humans sit in judgement on ‘complex’ content moderation cases — and also discouraging too much use of AI filters which it warns risk missing speech nuance and over-removing content. (Especially interesting given the UK government’s prior pressure on platforms to adopt AI tools to speed up terrorism content takedowns.)

“The Bill will ensure people in the UK can express themselves freely online and participate in pluralistic and robust debate,” writes DCMS. “All in-scope companies will need to consider and put in place safeguards for freedom of expression when fulfilling their duties. These safeguards will be set out by Ofcom in codes of practice but, for example, might include having human moderators take decisions in complex cases where context is important.”

“People using their services will need to have access to effective routes of appeal for content removed without good reason and companies must reinstate that content if it has been removed unfairly. Users will also be able to appeal to Ofcom and these complaints will form an essential part of Ofcom’s horizon-scanning, research and enforcement activity,” it goes on.

“Category 1 services [the largest, most popular services] will have additional duties. They will need to conduct and publish up-to-date assessments of their impact on freedom of expression and demonstrate they have taken steps to mitigate any adverse effects. These measures remove the risk that online companies adopt restrictive measures or over-remove content in their efforts to meet their new online safety duties. An example of this could be AI moderation technologies falsely flagging innocuous content as harmful, such as satire.”

Another confusing-looking component of the plan is that while the bill includes measures to tackle what it calls “user-generated fraud” — such as posts on social media for fake investment opportunities or romance scams on dating apps — fraud that’s conducted online via advertising, emails or cloned websites will not be in scope, per DCMS, as it says “the Bill focuses on harm committed through user-generated content”.

Yet since Internet users can easily and cheaply create and run online ads — as platforms like Facebook essentially offer their ad targeting tools to anyone who’s willing to pay — then why carve out fraud by ads as exempt?

It seems a meaningless place to draw the line. Fraud where someone paid a few dollars to amplify their scam doesn’t seem a less harmful class of fraud than a free Facebook post linking to the self-same crypto investment scam.

In short, there’s a risk of arbitrary/ill-thought through distinctions creating incoherent and confusing rules that are prone to loopholes. Which doesn’t sound good for anyone’s online safety.

In parallel, meanwhile, the government is devising an ambitious pro-competition ex ante regime to regulate tech giants specifically. Ensuring coherence and avoiding conflicting or overlapping requirements between that framework for platform giants and these wider digital harms rules is a further challenge.

Facebook ordered not to apply controversial WhatsApp T&Cs in Germany

The Hamburg data protection agency has banned Facebook from processing the additional WhatsApp user data that the tech giant is granting itself access to under a mandatory update to WhatsApp’s terms of service.

The controversial WhatsApp privacy policy update has caused widespread confusion around the world since being announced — and has already been delayed by Facebook for several months after a major user backlash saw rival messaging apps benefiting from an influx of angry users.

The Indian government has also sought to block the changes to WhatsApp’s T&Cs in court — and the country’s antitrust authority is investigating.

Globally, WhatsApp users have until May 15 to accept the new terms (after which the requirement to accept the T&Cs update will become persistent, per a WhatsApp FAQ).

The majority of users who have had the terms pushed on them have already accepted them, according to Facebook, although it hasn’t disclosed what proportion of users that is.

But the intervention by Hamburg’s DPA could further delay Facebook’s rollout of the T&Cs — at least in Germany — as the agency has used an urgency procedure, allowed for under the European Union’s General Data Protection Regulation (GDPR), to order the tech giant not to share the data for three months.

A WhatsApp spokesperson disputed the legal validity of Hamburg’s order — calling it “a fundamental misunderstanding of the purpose and effect of WhatsApp’s update” and arguing that it “therefore has no legitimate basis”.

“Our recent update explains the options people have to message a business on WhatsApp and provides further transparency about how we collect and use data. As the Hamburg DPA’s claims are wrong, the order will not impact the continued roll-out of the update. We remain fully committed to delivering secure and private communications for everyone,” the spokesperson added, suggesting that Facebook-owned WhatsApp may be intending to ignore the order.

We understand that Facebook is considering its options to appeal Hamburg’s procedure.

The emergency powers Hamburg is using can’t extend beyond three months but the agency is also applying pressure to the European Data Protection Board (EDPB) to step in and make what it calls “a binding decision” for the 27 Member State bloc.

We’ve reached out to the EDPB to ask what action, if any, it could take in response to the Hamburg DPA’s call.

The body is not usually involved in making binding GDPR decisions related to specific complaints — unless EU DPAs cannot agree over a draft GDPR decision brought to them for review by a lead supervisory authority under the one-stop-shop mechanism for handling cross-border cases.

In such a scenario the EDPB can cast a deciding vote — but it’s not clear that an urgency procedure would qualify.

In taking the emergency action, the German DPA is not only attacking Facebook for continuing to thumb its nose at EU data protection rules, but throwing shade at its lead data supervisor in the region, Ireland’s Data Protection Commission (DPC) — accusing the latter of failing to investigate the very widespread concerns attached to the incoming WhatsApp T&Cs.

(“Our request to the lead supervisory authority for an investigation into the actual practice of data sharing was not honoured so far,” is the polite framing of this shade in Hamburg’s press release).

We’ve reached out to the DPC for a response and will update this report if we get one.

Ireland’s data watchdog is no stranger to criticism that it indulges in creative regulatory inaction when it comes to enforcing the GDPR — with critics accusing commissioner Helen Dixon and her team of failing to investigate scores of complaints and, in the instances when probes have been opened, taking years to investigate — and opting for weak enforcement at the last.

The only GDPR decision the DPC has issued to date against a tech giant (against Twitter, in relation to a data breach) was disputed by other EU DPAs — which wanted a far tougher penalty than the $550k fine eventually handed down by Ireland.

GDPR investigations into Facebook and WhatsApp remain on the DPC’s desk. A draft decision in one WhatsApp data-sharing transparency case was sent to other EU DPAs in January for review, but a resolution has yet to see the light of day almost three years after the regulation began being applied.

In short, frustrations about the lack of GDPR enforcement against the biggest tech giants are riding high among other EU DPAs — some of whom are now resorting to creative regulatory actions to try to sidestep the bottleneck created by the one-stop-shop (OSS) mechanism which funnels so many complaints through Ireland.

The Italian DPA also issued a warning over the WhatsApp T&Cs change, back in January — saying it had contacted the EDPB to raise concerns about a lack of clear information over what’s changing.

At that point the EDPB emphasized that its role is to promote cooperation between supervisory authorities. It added that it will continue to facilitate exchanges between DPAs “in order to ensure a consistent application of data protection law across the EU in accordance with its mandate”. But the always fragile consensus between EU DPAs is becoming increasingly fraught over enforcement bottlenecks and the perception that the regulation is failing to be upheld because of OSS forum shopping.

That will increase pressure on the EDPB to find some way to resolve the impasse and avoid a wider breakdown of the regulation — i.e. if more and more Member State agencies resort to unilateral ‘emergency’ action.

The Hamburg DPA writes that the update to WhatsApp’s terms grants the messaging platform “far-reaching powers to share data with Facebook” for the company’s own purposes (including for advertising and marketing) — such as by passing WhatsApp users’ location data to Facebook and allowing for the communication data of WhatsApp users to be transferred to third parties if businesses make use of Facebook’s hosting services.

Its assessment is that Facebook cannot rely on legitimate interests as a legal base for the expanded data sharing under EU law.

And if the tech giant is intending to rely on user consent it’s not meeting the bar either because the changes are not clearly explained nor are users offered a free choice to consent or not (which is the required standard under GDPR).

“The investigation of the new provisions has shown that they aim to further expand the close connection between the two companies in order for Facebook to be able to use the data of WhatsApp users for their own purposes at any time,” Hamburg goes on. “For the areas of product improvement and advertising, WhatsApp reserves the right to pass on data to Facebook companies without requiring any further consent from data subjects. In other areas, use for the company’s own purposes in accordance to the privacy policy can already be assumed at present.

“The privacy policy submitted by WhatsApp and the FAQ describe, for example, that WhatsApp users’ data, such as phone numbers and device identifiers, are already being exchanged between the companies for joint purposes such as network security and to prevent spam from being sent.”

DPAs like Hamburg may be feeling buoyed to take matters into their own hands on GDPR enforcement by a recent opinion by an advisor to the EU’s top court, as we suggested in our coverage at the time. Advocate General Bobek took the view that EU law allows agencies to bring their own proceedings in certain situations, including in order to adopt “urgent measures” or to intervene “following the lead data protection authority having decided not to handle a case.”

The CJEU ruling on that case is still pending — but the court tends to align with the position of its advisors.

 

Facebook is testing pop-up messages telling people to read a link before they share it

Years after popping open a Pandora’s box of bad behavior, social media companies are trying to figure out subtle ways to reshape how people use their platforms.

Following Twitter’s lead, Facebook is trying out a new feature designed to encourage users to read a link before sharing it. The test will reach 6 percent of Facebook’s Android users globally in a gradual rollout that aims to encourage “informed sharing” of news stories on the platform.

Users can still easily click through to share a given story, but the idea is that by adding friction to the experience, people might rethink their original impulses to share the kind of inflammatory content that currently dominates on the platform.

Twitter introduced prompts urging users to read a link before retweeting it last June and the company quickly found the test feature to be successful, expanding it to more users.

Facebook began trying out more prompts like this last year. Last June, the company rolled out pop-up messages to warn users before they share any content that’s more than 90 days old, in an effort to cut down on misleading stories taken out of their original context.
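Facebook hasn’t published how that check is implemented, but under the stated 90-day threshold the logic could be as simple as the following rough sketch — the function and variable names here are hypothetical:

from datetime import datetime, timezone

STALE_AFTER_DAYS = 90  # threshold Facebook described for the warning

def should_warn_before_sharing(published_at: datetime) -> bool:
    # Hypothetical sketch: show the pop-up if the linked content was
    # published more than 90 days before the share attempt.
    # (published_at is assumed to be a timezone-aware datetime.)
    age = datetime.now(timezone.utc) - published_at
    return age.days > STALE_AFTER_DAYS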

At the time, Facebook said it was looking at other pop-up prompts to cut down on some kinds of misinformation. A few months later, Facebook rolled out similar pop-up messages that noted the date and the source of any links they share related to COVID-19.

The strategy demonstrates Facebook’s preference for a passive strategy of nudging people away from misinformation and toward its own verified resources on hot button issues like COVID-19 and the 2020 election.

While the jury is still out on how much of an impact this kind of gentle behavioral shaping can make on the misinformation epidemic, both Twitter and Facebook have also explored prompts that discourage users from posting abusive comments.

Pop-up messages that give users a sense that their bad behavior is being observed might be where more automated moderation is headed on social platforms. While users would probably be far better served by social media companies scrapping their misinformation and abuse-ridden existing platforms and rebuilding them more thoughtfully from the ground up, small behavioral nudges will have to do.

State AGs tell Facebook to scrap Instagram for kids plans

In a new letter, attorneys general representing 44 U.S. states and territories are pressuring Facebook to walk away from new plans to open Instagram to children. The company is working on an age-gated version of Instagram for kids under the age of 13 that would lure in young users who are currently not permitted to use the app, which was designed for adults.

“It appears that Facebook is not responding to a need, but instead creating one, as this platform appeals primarily to children who otherwise do not or would not have an Instagram account,” the coalition of attorneys general wrote, warning that an Instagram for kids would be “harmful for myriad reasons.”

The state attorneys general call for Facebook to abandon its plans, citing concerns around developmental health, privacy and Facebook’s track record of prioritizing growth over the well-being of children on its platforms. In the letter, they delve into specific worries about cyberbullying, online grooming by sexual predators and algorithms that showed dieting ads to users with eating disorders.

Concern about social media and mental health in kids and teens is something we’ve been hearing more about this year, as some Republicans join Democrats in coalescing around those issues, moving away from the claims of anti-conservative bias that defined politics in tech during the Trump years.

Leaders from both parties have been openly voicing fears over how social platforms are shaping young minds in recent months amidst calls to regulate Facebook and other social media companies. In April, a group of Congressional Democrats wrote Facebook with similar warnings over its new plans for children, pressing the company for details on how it plans to protect the privacy of young users.

In light of all the bad press and attention from lawmakers, it’s possible that the company may walk back its brazen plans to boost business by bringing more underage users into the fold. Facebook is already in the hot seat with state and federal regulators in just about every way imaginable. Deep worries over the company’s future failures to protect yet another vulnerable set of users could be enough to keep these plans on the company’s back burner.

Facebook’s Oversight Board throws the company a Trump-shaped curveball

Facebook’s controversial policy-setting supergroup issued its verdict on Trump’s fate Wednesday, and it wasn’t quite what most of us were expecting.

We’ll dig into the decision to tease out what it really means, not just for Trump, but also for Facebook’s broader experiment in outsourcing difficult content moderation decisions and for just how independent the board really is.

What did the Facebook Oversight Board decide?

The Oversight Board backed Facebook’s determination that Trump violated its policy on “Dangerous Individuals and Organizations,” which prohibits anything that praises or otherwise supports violence. The full decision and accompanying policy recommendations are online for anyone to read.

Specifically, the Oversight Board ruled that two Trump posts — one telling Capitol rioters “We love you. You’re very special” and another calling them “great patriots” and telling them to “remember this day forever” — broke Facebook’s rules. In fact, the board went as far as saying the pair of posts “severely” violated the rules in question, and that the risk of real-world harm in Trump’s words was crystal clear:

The Board found that, in maintaining an unfounded narrative of electoral fraud and persistent calls to action, Mr. Trump created an environment where a serious risk of violence was possible. At the time of Mr. Trump’s posts, there was a clear, immediate risk of harm and his words of support for those involved in the riots legitimized their violent actions. As president, Mr. Trump had a high level of influence. The reach of his posts was large, with 35 million followers on Facebook and 24 million on Instagram.”

While the Oversight Board praised Facebook’s decision to suspend Trump, it disagreed with the way the platform implemented the suspension. The group argued that Facebook’s decision to issue an “indefinite” suspension was an arbitrary punishment that wasn’t really supported by the company’s stated policies:

It is not permissible for Facebook to keep a user off the platform for an undefined period, with no criteria for when or whether the account will be restored.

In applying this penalty, Facebook did not follow a clear, published procedure. ‘Indefinite’ suspensions are not described in the company’s content policies. Facebook’s normal penalties include removing the violating content, imposing a time-bound period of suspension, or permanently disabling the page and account.”

The Oversight Board didn’t mince words on this point, going on to say that by putting a “vague, standardless” punishment in place and then kicking the ultimate decision to the Oversight Board, “Facebook seeks to avoid its responsibilities.” Turning things around, the board asserted that it’s actually Facebook’s responsibility to come up with an appropriate penalty for Trump that fits its set of content moderation rules.

Is this a surprise outcome?

If you’d asked me yesterday, I would have said that the Oversight Board was more likely to overturn Facebook’s Trump decision. I also called Wednesday’s big decision a win-win for Facebook, because whatever the outcome, it wouldn’t ultimately be criticized a second time for either letting Trump back onto the platform or kicking him off for good. So much for that!

Facebook likely saw a more clear-cut decision on the Trump situation in the cards. This is a relatively challenging outcome for a company that’s probably ready to move on from its (many, many) missteps during the Trump era. But there’s definitely an argument that if the board had declared that Facebook made the wrong call and reinstated Trump, that would have been a much bigger headache.

A lot of us didn’t see the “straight up toss the ball back into Facebook’s court” option as a possible outcome. It’s ironic and a bit surprising that the Oversight Board’s decision to give Facebook the final say actually makes the board look more independent, not less.

But: It’s worth remembering that at the end of the day, Facebook could undermine the whole thing by just refusing to do what the board says. The board only has as much power as Facebook grants it and the company could call off the deal at any second, if it chose to.

What does it mean that the Oversight Board sent the decision back to Facebook?

Ultimately the Oversight Board is asking Facebook to either a) give Trump’s suspension an end date or b) delete his account. In a less severe case, the normal course of action would be for Facebook to remove whatever broke the rules, but given the ramifications here and the fact that Trump is a repeat Facebook rule-breaker, this is obviously all well past that option.

What will Facebook do?

We’re in for a wait. The board called for Facebook to evaluate the Trump situation and reach a final decision within six months, calling for a “proportionate” response that is justified by its platform rules. Since Facebook and other social media companies are re-writing their rules all the time and making big calls on the fly, that gives the company a bit of time to build out policies that align with the actions it plans to take.

In the months following the violence at the U.S. Capitol, Facebook repeatedly defended its Trump call as “necessary and right.” It’s hard to imagine the company deciding that Trump will get reinstated six months from now, but in theory Facebook could decide that length of time was an appropriate punishment and write that into its rules. The fact that Twitter permanently banned Trump means that Facebook could comfortably follow suit at this point.

In direct response to the decision, Facebook’s Nick Clegg wrote only: “We will now consider the board’s decision and determine an action that is clear and proportionate.” Clegg says Trump will stay suspended until then but didn’t offer further hints at what comes next. See you again on November 5.

If Trump had won reelection, this whole thing probably would have gone down very differently. As much as Facebook likes to say its decisions are aligned with lofty ideals — absolute free speech, connecting people — the company is ultimately very attuned to its regulatory and political environment.

Trump’s actions on January 6 were dangerous and flagrant, but Biden’s looming inauguration two weeks later probably influenced the company’s decision just as much. Circumventing regulatory scrutiny is also arguably the raison d’être for the Oversight Board to begin with.

Did the board actually change anything?

Potentially. In its decision, the Oversight Board said that Facebook asked for “observations or recommendations from the Board about suspensions when the user is a political leader.” The board’s policy recommendations aren’t binding like its decisions are, but since Facebook asked, it’s likely to listen.

If it does, the Oversight Board’s recommendations could reshape how Facebook handles high profile accounts in the future:

The Board stated that it is not always useful to draw a firm distinction between political leaders and other influential users, recognizing that other users with large audiences can also contribute to serious risks of harm.

While the same rules should apply to all users, context matters when assessing the probability and imminence of harm. When posts by influential users pose a high probability of imminent harm, Facebook should act quickly to enforce its rules. Although Facebook explained that it did not apply its ‘newsworthiness’ allowance in this case, the Board called on Facebook to address widespread confusion about how decisions relating to influential users are made. The Board stressed that considerations of newsworthiness should not take priority when urgent action is needed to prevent significant harm.

Facebook and other social networks have hidden behind newsworthiness exemptions for years instead of making difficult policy calls that would upset half their users. Here, the board not only says that political leaders don’t really deserve special consideration while enforcing the rules, but that it’s much more important to take down content that could cause harm than it is to keep it online because it’s newsworthy.

So… we’re back to square one?

Yes and no. Trump’s suspension may still be up in the air, but the Oversight Board is modeled after a legal body and its real power is in setting precedents. The board kicked this case back to Facebook because the company picked a punishment for Trump that wasn’t even on the menu, not because it thought anything about his behavior fell in a gray area.

The Oversight Board clearly believed that Trump’s words of praise for rioters at the Capitol created a high stakes, dangerous threat on the platform. It’s easy to imagine the board reaching the same conclusion on Trump’s infamous “when the looting starts, the shooting starts” statement during the George Floyd protests, even though Facebook did nothing at the time. Still, the board stops short of saying that behavior like Trump’s merits a perma-ban — that much is up to Facebook.

Facebook launches Neighborhoods, a Nextdoor clone

Facebook is launching a new section of its app designed to connect neighbors and curate neighborhood-level news. The new feature, predictably called Neighborhoods, is available now in Canada and will be rolling out soon for U.S. users to test.

As we reported previously, Neighborhoods has technically been around since at least October of last year, but that limited test only recruited residents of Calgary, Canada.

On Neighborhoods, Facebook users can create a separate sub-profile and can populate it with interests and a custom bio. You can join your own lower-case neighborhood and nearby neighborhoods and complain about porch pirates, kids these days, or whatever you’d otherwise be doing on Nextdoor.

Aware of the intense moderation headaches on Nextdoor, Facebook says that it will have a set of moderators dedicated to Neighborhoods who will review comments and posts to keep matters “relevant and kind.” Within Neighborhoods’ neighborhoods, deputized users can steer and strike up conversations and do some light moderation, it sounds like. The new corner of Facebook will also come with blocking features.

As far as privacy goes, well, it’s Facebook. Neighborhoods isn’t its own standalone app and will naturally be sharing your neighborly behavior to serve you targeted ads elsewhere.

Twitter rolls out improved ‘reply prompts’ to cut down on harmful tweets

A year ago, Twitter began testing a feature that would prompt users to pause and reconsider before they replied to a tweet using “harmful” language — meaning language that was abusive, trolling, or otherwise offensive in nature. Today, the company says it’s rolling out improved versions of these prompts to English-language users on iOS and, soon, Android, after adjusting the systems that determine when to send the reminders so they better understand when the language used in a reply is actually harmful.

The idea behind these forced slowdowns, or nudges, is to leverage psychological tricks to help people make better decisions about what they post. Studies have indicated that introducing a nudge like this can lead people to edit and cancel posts they would have otherwise regretted.

Twitter’s own tests found that to be true, too. It said that 34% of people revised their initial reply after seeing the prompt, or chose not to send the reply at all. And, after being prompted once, people then composed 11% fewer offensive replies in the future, on average. That indicates that the prompt, for some small group at least, had a lasting impact on user behavior. (Twitter also found that users who were prompted were less likely to receive harmful replies back, but didn’t further quantify this metric.)


However, Twitter’s early tests ran into some problems. It found its systems and algorithms sometimes struggled to understand the nuance that occurs in many conversations. For example, it couldn’t always differentiate between offensive replies and sarcasm or, sometimes, even friendly banter. It also struggled to account for those situations in which language is being reclaimed by underrepresented communities and then used in non-harmful ways.

The improvements rolling out starting today aim to address these problems. Twitter says it’s made adjustments to the technology across these areas, and others. Now, it will take the relationship between the author and replier into consideration. That is, if both follow and reply to each other often, it’s more likely they have a better understanding of the preferred tone of communication than someone else who doesn’t.

Twitter says it has also improved the technology to more accurately detect strong language, including profanity.

And it’s made it easier for those who see the prompts to let Twitter know if the prompt was helpful or relevant — data that can help to improve the systems further.
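Twitter hasn’t detailed how these signals are combined, but a rough sketch of the kind of heuristic described above might look like the following — every name and threshold here is hypothetical, not Twitter’s actual system:

def should_show_reply_prompt(offensiveness_score: float,
                             mutual_follow: bool,
                             prior_reply_count: int) -> bool:
    # Hypothetical sketch: only nudge when the classifier is fairly
    # confident the reply is harmful AND the two accounts don't already
    # have an established relationship (mutual follows plus a history of
    # replying to each other), which might indicate friendly banter.
    established_relationship = mutual_follow and prior_reply_count >= 5
    return offensiveness_score >= 0.8 and not established_relationship

In practice, per Twitter’s description, signals like profanity detection and user feedback on whether a prompt felt relevant would feed back into tuning such thresholds over time.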

How well this all works remains to be seen, of course.


While any feature that can help dial down some of the toxicity on Twitter may be useful, this only addresses one aspect of the larger problem — people who get into heated exchanges that they could later regret. There are other issues across Twitter regarding abusive and toxic content that this solution alone can’t address.

These “reply prompts” aren’t the only time Twitter has used the concept of nudges to impact user behavior. It also reminds users to read an article before they retweet and amplify it, in an effort to promote more informed discussions on its platform.

Twitter says the improved prompts are rolling out to all English-language users on iOS starting today, and will reach Android over the next few days.

Facebook’s hand-picked ‘oversight’ panel upholds Trump ban — for now

Facebook’s content decision review body, a quasi-external panel that’s been likened to a ‘Supreme Court of Facebook’ but isn’t staffed by sitting judges, can’t be truly independent of the tech giant which funds it, has no legal legitimacy or democratic accountability, and goes by the much duller official title ‘Oversight Board’ (aka the FOB) — has just made the biggest call of its short life…

Facebook’s hand-picked ‘oversight’ panel has voted against reinstating former U.S. president Donald Trump’s Facebook account.

However it has sought to row the company back from an ‘indefinite’ ban — finding fault with its decision to impose an indefinite restriction, rather than issue a more standard penalty (such as a penalty strike or permanent account closure).

In a press release announcing its decision the board writes:

Given the seriousness of the violations and the ongoing risk of violence, Facebook was justified in suspending Mr. Trump’s accounts on January 6 and extending that suspension on January 7.

However, it was not appropriate for Facebook to impose an ‘indefinite’ suspension.

It is not permissible for Facebook to keep a user off the platform for an undefined period, with no criteria for when or whether the account will be restored.”

The board wants Facebook to revisit its decision on Trump’s account within six months — and “decide the appropriate penalty”. So it appears to have succeeded in… kicking the can down the road.

The FOB is due to hold a press conference to discuss its decision shortly so stay tuned for updates.


It’s certainly been a very quiet five months on mainstream social media since Trump had his social media ALL CAPS megaphone unceremoniously shut down in the wake of his supporters’ violent storming of the Capitol.

For more on the background to Trump’s deplatforming do make time for this excellent explainer by TechCrunch’s Taylor Hatmaker. But the short version is that Trump finally appeared to have torched the last of his social media rule-breaking chances after he succeeded in fomenting an actual insurrection on U.S. soil on January 6. Doing so with the help of the massive, mainstream social media platforms whose community standards don’t, as a rule, give a thumbs up to violent insurrection…

For Trump and Facebook, judgment day is around the corner

Facebook unceremoniously confiscated Trump’s biggest social media megaphone months ago, but the former president might be poised to snatch it back.

Facebook’s Oversight Board, an external Supreme Court-like policy decision making group, will either restore Trump’s Facebook privileges or banish him forever on Wednesday. Whatever happens, it’s a huge moment for Facebook’s nascent experiment in outsourcing hard content moderation calls to an elite group of global thinkers, academics and political figures and allowing them to set precedents that could shape the world’s biggest social networks for years to come.

Facebook CEO Mark Zuckerberg announced Trump’s suspension from Facebook in the immediate aftermath of the Capitol attack. It was initially a temporary suspension, but two weeks later Facebook said that the decision would be sent to the Oversight Board. “We believe the risks of allowing the President to continue to use our service during this period are simply too great,” Zuckerberg wrote in January.

Facebook’s VP of Global Affairs Nick Clegg, a former British politician, expressed hope that the board would back the company’s own conclusions, calling Trump’s suspension an “unprecedented set of events which called for unprecedented action.”

Trump inflamed tensions and incited violence on January 6, but that incident wasn’t without precedent. In the aftermath of the murder of George Floyd, an unarmed Black man killed by Minneapolis police, President Trump ominously declared on social media “when the looting starts, the shooting starts,” a threat of imminent violence with racist roots that Facebook declined to take action against, prompting internal protests at the company.

The former president skirted or crossed the line with Facebook any number of times over his four years in office, but the platform stood steadfastly behind a maxim that all speech was good speech, even as other social networks grew more squeamish.

In a dramatic address in late 2019, Zuckerberg evoked Martin Luther King Jr. as he defended Facebook’s anything goes approach. “In times of social turmoil, our impulse is often to pull back on free expression,” Zuckerberg said. “We want the progress that comes from free expression, but not the tension.” King’s daughter strenuously objected.

A little over a year later, with all of Facebook’s peers doing the same and Trump leaving office, Zuckerberg would shrink back from his grand free speech declarations.

In 2019 and well into 2020, Facebook was still a roiling hotbed of misinformation, conspiracies and extremism. The social network hosted thousands of armed militias organizing for violence and a sea of content amplifying QAnon, which moved from a fringe belief on the margins to a mainstream political phenomenon through Facebook.

Those same forces would converge at the U.S. Capitol on January 6 for a day of violence that Facebook executives characterized as spontaneous, even though it had been festering openly on the platform for months.

 

How the Oversight Board works

Facebook’s Oversight Board began reviewing its first cases last October. Facebook can refer cases to the board, like it did with Trump, but users can also appeal to the board to overturn policy decisions that affect them after they exhaust the normal Facebook or Instagram appeals process. A five-member subset of its 20 total members evaluates whether content should be allowed to remain on the platform and then reaches a decision, which the full board must approve by a majority vote. Initially, the Oversight Board was only empowered to reinstate content removed on Facebook and Instagram, but in mid-April it began accepting requests to review controversial content that stayed up.

Last month, the Oversight Board replaced departing member Pamela Karlan, a Stanford professor and voting rights scholar critical of Trump, who left to join the Biden administration. Karlan’s replacement, PEN America CEO Suzanne Nossel, wrote an op-ed in the LA Times in late January arguing that extending a permanent ban on Trump “may feel good” but that decision would ultimately set a dangerous precedent. Nossel joined the board too late to participate in the Trump decision.

The Oversight Board’s earliest batch of decisions leaned in the direction of restoring content that’s been taken down — not upholding its removal. While the board’s other decisions are likely to touch on the full spectrum of frustration people have with Facebook’s content moderation preferences, they come with far less baggage than the Trump decision. In one instance, the Oversight Board voted to restore an image of a woman’s nipples used in the context of a breast cancer post. In another, the board decided that a quote from a famous Nazi didn’t merit removal because it wasn’t an endorsement of Nazi ideology. In all cases, the Oversight Board can issue policy recommendations, but Facebook isn’t obligated to implement them — just the decisions.

Befitting its DNA of global activists, political figures and academics, the Oversight Board might have ambitions well beyond one social network. Earlier this year, Oversight Board co-chair and former Prime Minister of Denmark Helle Thorning-Schmidt declared that other social media companies would be “welcome to join” the project, which is branded in a conspicuously Facebook-less way. (The group calls itself the “Oversight Board” though everyone calls it the “Facebook Oversight Board.”)

“For the first time in history, we actually have content moderation being done outside one of the big social media platforms,” Thorning-Schmidt declared, grandly. “That in itself… I don’t hesitate to call it historic.”

Facebook’s decision to outsource some major policy decisions is indeed an experimental one, but that experiment is just getting started. The Trump case will give Facebook’s miniaturized Supreme Court an opportunity to send a message, though whether the takeaway is that it’s powerful enough to keep a world leader muzzled or independent enough to strike out from its parent and reverse the biggest social media policy decision ever made remains to be seen.

If Trump comes back, the company can shrug its shoulders and shirk another PR firestorm, content that its experiment in external content moderation is legitimized. If the board doubles down on banishing Trump, Facebook will rest easy knowing that someone else can take the blowback this round in its most controversial content call to date. For Facebook, for once, it’s a win-win situation.