Questions linger over Facebook, Twitter, TikTok’s commitment to uphold election integrity in Africa, as countries head to polls

A dozen African countries, including Nigeria, the continent’s biggest economy and democracy, are expected to hold presidential elections next year, and questions linger over how well social media platforms are prepared to curb mis- and disinformation after claims of botched content moderation during Kenya’s elections last August.

Concerns are mounting as it emerges that Twitter has scaled back content moderation since Elon Musk took over and laid off more than half of its employees, nearly clearing out the entire Africa team, a decision that also left outsourced moderators out of jobs. With very limited support to filter or stop the spread of propaganda, Africa will likely be a casualty of Twitter’s often erratic or slow response to falsehoods, which can catalyze violence in times of political polarization.

But this is not unique to Twitter; widely used platforms like Facebook, TikTok, WhatsApp, and YouTube have also been accused of doing little to stop mis- and disinformation in Africa.

In Nigeria, for instance, sitting president Muhammadu Buhari has voiced concerns over how dis- and misinformation on social media is fanning conflict, insecurity, and distrust in the government in the lead-up to the February elections, even as the country’s struggling economy deepens a sense of instability. Yet, as momentum picks up for what is one of the most hotly contested elections, activists, researchers, and sections of the public are apprehensive about the mounting spread of negative campaigning.

Researchers anticipate that hateful content and falsehoods, meant to stir confusion or sway voters in Nigeria, will continue to be shared online. They are urging tech companies to hire and train local experts with knowledge of local languages and context to intercept misleading, violent or intimidating posts that could undermine election integrity.

“Social-media platforms especially – Twitter, Meta (Facebook), YouTube, WhatsApp and Telegram – should step up efforts to identify and deal with election-related misinformation, disinformation and conspiracies as well as intercepting violent or intimidating messages,” Audu Bulama Bukarti, a senior fellow at the Tony Blair Institute for Global Change, said in a report published a fortnight ago about security risks in Nigeria.

Nigeria’s youthful and tech-savvy population is Africa’s most active on social media. The calls for the platforms to step up content moderation, while not new, follow the increased use of social sites owing to growing smartphone and internet penetration.

“The reach and influence of social media have grown ever larger in the years since the 2019 election. It will play a pivotal role in the 2023 election, in terms of positive political communication and in terms of its ability to spread misinformation and disinformation,” said Bukarti.

In Nigeria, Meta claims to have invested in people, including content moderators, and in technology to stem the misuse of its platforms ahead of the elections. The social media giant is also taking the same measures as before and during Kenya’s elections, which included verifying the identities of persons posting political ads. But Mozilla Tech and Society Fellow Odanga Madung is not convinced that Facebook and other social sites are well enough prepared.

“Social media platforms are still not completely ready to deal with election environments, especially because they’ve had massive layoffs that have greatly affected how they work within several of the areas where these elections will be held,” said Madung.

“And quite frankly, they have consistently failed to address the key aspects that make an election environment a dangerous information environment in the first place, where things are neither true nor false and information tends to get weaponized quite a bit. Election environments are incredibly low-trust environments. I do not think they’re going to actually succeed on this.”

Away from Nigeria, a pivotal moment is also approaching for social media platforms and regimes in fragile nations such as Sudan, South Sudan, DR Congo, Libya, and Mali – most of which have blocked social media access in the recent past to quell protests against their governments.

Bungled labeling and moderation

Social sites like Facebook, Twitter and TikTok recently came under heavy scrutiny over their role in undermining election integrity in Kenya. A Mozilla Foundation report claims that content labeling failed to stop misinformation, while platforms such as Facebook profiteered from political advertising that served to amplify propaganda.

Twitter and TikTok’s spotty labeling of posts that called the election ahead of the official announcement made the platforms seem partisan and failed to stop the spread of falsehoods, despite their partnerships with fact-checking organizations.

Facebook, the leading social media platform in Africa, failed badly on this front by not having “any visible labels” during the elections, allowing the spread of propaganda, such as claims of the kidnapping and arrest of a prominent politician that had been debunked by local media houses. Only months later did Facebook put a label on the original post.

Facebook’s sluggish responses to falsehoods are now at the center of a lawsuit filed last week claiming that Meta is fueling violence and hate in eastern and southern Africa.

Abrham Meareg, one of the petitioners, whose father, Professor Meareg Amare, was killed during the Tigray War after Facebook posts doxed him and called for violence against him, says that Facebook failed, despite multiple requests, to take down posts that put his father’s life in danger. He said one post was taken down only recently, a year after his father’s murder. More than 600,000 Ethiopians were killed during the two-year war that started in 2020.

The case claims that Facebook’s algorithm fuels viral hate and violence, and that content moderation in Africa is bungled because moderators lack the local knowledge needed to review content posted in local languages.

“Many of them (platforms) lack context and they are always going to fall short in terms of the promises they make to their users because, again, a lie is able to move very fast across platforms before they are able to get ahold of it,” said Madung.

Whistleblower Frances Haugen previously accused Facebook of “literally fanning ethnic violence” in Ethiopia, and a recent Global Witness investigation also noted that the social site was “extremely poor at detecting hate speech in the main language of Ethiopia.”

“Something is wrong with the way Facebook moderates content, and … there is a lack of investment in content moderation, especially for African countries. When you compare to other regions, we are getting second-rate treatment. And what’s the effect? We are seeing a catalyst for civic unrest, civil war coming from normal interactions; viral posts that make fun of people and then escalate to inciteful posts that, as my client is proof, do end up causing violence in real life,” said Meareg’s lawyer, Mercy Mutemi.

Meanwhile, social media remains central to the spread of political propaganda and the dilution of important investigations into economic and social corruption. Last year, the former Kenyan president, Uhuru Kenyatta, was mentioned in the Pandora Papers, a leak of files detailing the hidden wealth held in offshore havens by a number of global leaders, celebrities and billionaires. Researchers, however, noticed two hashtags, #offshoreaccountfacts and #phonyleaks, soaring to the top of trending topics and overshadowing organic discussions on Twitter in Kenya, undermining the investigation’s findings.

Foreign-sponsored campaigns with political objectives have also affected more than three-quarters of the countries in Africa as “disinformation campaigns become increasingly sophisticated in camouflaging their origins by outsourcing posting operations.”

According to an Africa Center for Strategic Studies report published in April this year, Russian-sponsored disinformation campaigns by the Wagner Group mercenary force, promoting the Kremlin’s interests on the continent, have affected more than 16 countries in Africa.


How Twitter is handling the 2021 US presidential transition

Twitter has set out its plans for US Inauguration Day 2021, next Wednesday, January 20, when president-elect Joe Biden will be sworn into office as the 46th US president and vice president-elect Kamala Harris will become VP.

“This year, multiple challenging circumstances will require that most people experience this historic ceremony virtually,” the social media firm writes in a blog post detailing how it will handle the transition of power on its platform as the Trump administration departs office.

“As Twitter will serve as both a venue for people to watch and talk about this political event, and play a key role in facilitating the transfer of official government communication channels, we want to be transparent and clear about what people should expect to see on the platform.”

The inauguration will of course be livestreamed via Twitter by multiple accounts (such as news outlets), as well as the official inauguration accounts, @JCCIC and @BidenInaugural.

Twitter will also be streaming the ceremony via its US Elections Hub, where it says it will share curated Moments, Lists and accounts to follow as well.

Once sworn into office, Biden and Harris will gain control of the @POTUS and @VP Twitter accounts. Other accounts that will transition to the new administration on the day include @WhiteHouse, @FLOTUS and @PressSec.

Twitter has also confirmed that Harris’ husband, Douglas Emhoff, will use a new official account — called @SecondGentleman. (It’s not clear why not ‘SGOTUS’; aside from, well, the unloveliness of the acronym.) 

As it did when president Obama left office, Twitter will transfer the current institutional accounts of the Trump administration to the National Archives and Records Administration (Nara) — meaning the outgoing administration’s tweets and account history will remain publicly available (with account usernames updated to reflect their archived status, e.g. @POTUS will be archived as @POTUS45).

However Trump’s personal account, which he frequently used as a political cudgel, yelling in ALL CAPS and/or spewing his customary self-pitying tweets, has already been wiped from public view after Twitter took the decision to permanently ban him last week for repeat violations of its rules of conduct. So there’s likely to be a major gap in Nara’s Trump archive.

Since late last year we’ve known the transitioning @POTUS and institutional accounts will not automatically retain followers from the prior administration. But Twitter still hasn’t confirmed why.

Today it just reiterated that the current (33.3M) followers of @POTUS and the other official accounts will receive a notification about the archival process which will include the “option” to follow the new holders of the accounts.

That’s another notable change from 2017 when Trump inherited the ~14M followers of president Obama’s @POTUS. Biden will instead have to start his presidential tweeting from scratch.

Given the chaotic events in the US capital last week, when supporters of the outgoing president broke through police lines to cause mayhem on the Hill and in the House, there’s every reason for tech platforms to approach the 2021 transition with trepidation, lest their tools get used to livestream another historic insurrection (or worse).

Since then Trump has also continued to maintain his false claim that the election was stolen through voter fraud.

He avoided any new direct reference to this big lie, though, when he circumvented Twitter’s ban on his personal account earlier this week by posting a new video of himself speaking on the official @WhiteHouse account.

In the video he decried the “incursion at the US capital”, as he put it; claimed that he “unequivocally condemns the violence that we saw last week”; and called for unity. But Twitter has put tight limits on what Trump can say on its platform without having his posts removed (as well as limiting him to the official @POTUS channel). So he remains on a very tight speech leash.

In the video Trump limits his verbal attacks to a few remarks — about what he describes as “the unprecedented assault on free speech we have seen in recent days” — dubbing tech platforms’ censorship “wrong” and “dangerous”, and adding that “what is needed now is for us to listen to one another, not to silence one another”.

There’s a lot going on here but it should not escape notice that Trump’s seeming contrition and quasi-concession and his very-last-minute calls for unity have only come when he actively feels power draining away from him.

Most notably, his call for unity has only come after powerful tech platforms acted to shut off his hate-megaphone — ending the years of special dispensation they granted Trump to ride roughshod over democratic convention and tear up the civic rulebook.

It’s very interesting to speculate how different the 2021 US inauguration might look and feel if platforms like Twitter had consistently enforced their rules against Trump from the get-go.

Instead we’re stuck in all sorts of lockdown, counting the days til Biden takes office — and above all hoping for a smooth transition of power.

So Twitter CEO Jack Dorsey was quite right when he said this week that Twitter has failed in its mission to “promote healthy conversation”. His company ignored warnings about online toxicity for years. Trump is, in no small part, the divisive product of that.

In a brief section of Twitter’s transition handling blog post, entitled “protecting the public conversation”, the company refers back to a post from earlier this week where it set out steps it’s taking to try to prevent its platform from being used to “incite violence, organize attacks, and share deliberately misleading information about the election outcome” in the coming days.

These measures include permanently suspending ~70,000 accounts it said were primarily dedicated to sharing content related to the QAnon conspiracy theory; aggressively beefing up its civic integrity policy; and applying interaction limits on labeled tweets plus blocking violative keywords from appearing in Trends and search.

“These efforts, including our open lines of communication with law enforcement, will continue through the inauguration and will adapt as needed if circumstances change in real-time,” it adds, preparing for the possibility of more unrest.

Facebook touts beefed up hate speech detection ahead of Myanmar election

Facebook has offered a little detail on extra steps it’s taking to improve its ability to detect and remove hate speech and election disinformation ahead of Myanmar’s election. A general election is scheduled to take place in the country on November 8, 2020.

The announcement comes close to two years after the company admitted a catastrophic failure to prevent its platform from being weaponized to foment division and incite violence against the country’s Rohingya minority.

Facebook says now that it has expanded its misinformation policy with the aim of combating voter suppression and will now remove information “that could lead to voter suppression or damage the integrity of the electoral process” — giving the example of a post that falsely claims a candidate is a Bengali, not a Myanmar citizen, and thus ineligible to stand.

“Working with local partners, between now and November 22, we will remove verifiable misinformation and unverifiable rumors that are assessed as having the potential to suppress the vote or damage the integrity of the electoral process,” it writes.

Facebook says it’s working with three fact-checking organizations in the country — namely: BOOM, AFP Fact Check and Fact Crescendo — after introducing a fact-checking program there in March.

In March 2018 the United Nations warned that Facebook’s platform was being abused to spread hate speech and whip up ethnic violence in Myanmar. By November of that year the tech giant was forced to admit it had not stopped its platform from being repurposed as a tool to drive genocide, after a damning independent investigation slammed its impact on human rights.

On hate speech, which Facebook admits could suppress the vote in addition to leading to what it describes as “imminent, offline harm” (aka violence), the tech giant claims to have invested “significantly” in “proactive detection technologies” that it says help it “catch violating content more quickly”, albeit without quantifying the size of its investment or providing further details. It only notes that it “also” uses AI to “proactively identify hate speech in 45 languages, including Burmese”.

Facebook’s blog post offers a metric to imply progress — with the company stating that in Q2 2020 it took action against 280,000 pieces of content in Myanmar for violations of its Community Standards prohibiting hate speech, of which 97.8% were detected proactively by its systems before the content was reported to it.

“This is up significantly from Q1 2020, when we took action against 51,000 pieces of content for hate speech violations, detecting 83% proactively,” it adds.

However, without greater visibility into the content Facebook’s platform is amplifying, including country-specific factors such as whether hate speech posting is increasing in Myanmar as the election gets closer, it’s not possible to understand what volume of hate speech is passing under the radar of Facebook’s detection systems and reaching local eyeballs.

In a more clearly detailed development, Facebook notes that since August, electoral, issue and political ads in Myanmar have had to display a ‘paid for by’ disclosure label. Such ads are also stored in a searchable Ad Library for seven years — in an expansion of the self-styled ‘political ads transparency measures’ Facebook launched more than two years ago in the US and other western markets.

Facebook also says it’s working with two local partners to verify the official national Facebook Pages of political parties in Myanmar. “So far, more than 40 political parties have been given a verified badge,” it writes. “This provides a blue tick on the Facebook Page of a party and makes it easier for users to differentiate a real, official political party page from unofficial pages, which is important during an election campaign period.”

Another recent change it flags is an ‘image context reshare’ product, launched in June, which Facebook says alerts a user when they attempt to share an image that’s more than a year old and could be “potentially harmful or misleading” (such as an image that “may come close to violating Facebook’s guidelines on violent content”).

“Out-of-context images are often used to deceive, confuse and cause harm. With this product, users will be shown a message when they attempt to share specific types of images, including photos that are over a year old and that may come close to violating Facebook’s guidelines on violent content. The warning that the image they are about to share could be harmful or misleading will be triggered using a combination of artificial intelligence (AI) and human review,” it writes, without offering any specific examples.

Another change it notes is the application of a limit on message forwarding to five recipients, which Facebook introduced in Sri Lanka back in June 2019.

“These limits are a proven method of slowing the spread of viral misinformation that has the potential to cause real world harm. This safety feature is available in Myanmar and, over the course of the next few weeks, we will be making it available to Messenger users worldwide,” it writes.

On coordinated election interference, the tech giant has nothing of substance to share — beyond its customary claim that it’s “constantly working to find and stop coordinated campaigns that seek to manipulate public debate across our apps”, including groups seeking to do so ahead of a major election.

“Since 2018, we’ve identified and disrupted six networks engaging in Coordinated Inauthentic Behavior in Myanmar. These networks of accounts, Pages and Groups were masking their identities to mislead people about who they were and what they were doing by manipulating public discourse and misleading people about the origins of content,” it adds.

In summing up the changes, Facebook says it’s “built a team that is dedicated to Myanmar”, which it notes includes people “who spend significant time on the ground working with civil society partners who are advocating on a range of human and digital rights issues across Myanmar’s diverse, multi-ethnic society” — though clearly this team is not operating out of Myanmar.

It further claims engagement with key regional stakeholders will ensure Facebook’s business is “responsive to local needs” — something the company demonstrably failed on back in 2018.

“We remain committed to advancing the social and economic benefits of Facebook in Myanmar. Although we know that this work will continue beyond November, we acknowledge that Myanmar’s 2020 general election will be an important marker along the journey,” Facebook adds.

There’s no mention in its blog post of accusations that Facebook is actively obstructing an investigation into genocide in Myanmar.

Earlier this month, Time reported that Facebook is using US law to try to block a request by the West African nation of The Gambia for information related to Myanmar military officials’ use of its platforms.

“Facebook said the request is ‘extraordinarily broad’, as well as ‘unduly intrusive or burdensome’. Calling on the U.S. District Court for the District of Columbia to reject the application, the social media giant says The Gambia fails to ‘identify accounts with sufficient specificity’,” Time reported.

“The Gambia was actually quite specific, going so far as to name 17 officials, two military units and dozens of pages and accounts,” it added.

“Facebook also takes issue with the fact that The Gambia is seeking information dating back to 2012, evidently failing to recognize two similar waves of atrocities against Rohingya that year, and that genocidal intent isn’t spontaneous, but builds over time.”

In another recent development, Facebook has been accused of bending its hate speech policies to ignore inflammatory posts made against Rohingya Muslim immigrants by Hindu nationalist individuals and groups.

The Wall Street Journal reported last month that Facebook’s top public-policy executive in India, Ankhi Das, opposed applying its hate speech rules to T. Raja Singh, a member of Indian Prime Minister Narendra Modi’s Hindu nationalist party, along with at least three other Hindu nationalist individuals and groups flagged internally for promoting or participating in violence — citing sourcing from current and former Facebook employees.

Bracing for election day, Facebook rolls out voting resources to U.S. users

Eager to avoid a repeat of its disastrous role as a super-spreader of misinformation during the 2016 election cycle, Facebook is getting its ducks in a row.

Following an announcement earlier this summer, the company is now launching a voting information hub that will centralize election resources for U.S. users and ideally inoculate at least some of them against the platform’s ongoing misinformation epidemic.

The voting information center will appear in the menu on both Facebook and Instagram. As part of the same effort, Facebook will also target U.S. users with notifications based on location and age, displaying relevant information about voting in their state. The info center will help users check their state-specific vote-by-mail options, request mail-in ballots and provide voting-related deadlines.

Facebook election information center

Facebook is also expanding the labels it uses to attach verified election resources to posts by political figures. The labels will now appear on voting-related posts from all users across its main platform and Instagram, a way for the platform to avoid taking actions against specific political figures while still directing its users toward verified information about U.S. elections.

Along with other facets of its pre-election push, Facebook will roll out previously announced “voting alerts,” a feature that will allow state election officials to communicate election-related updates to users through the platform. “This will be increasingly critical as we get closer to the election, with potential late-breaking changes to the voting process that could impact voters,” Facebook Vice President of Product Management and Social Impact Naomi Gleit wrote in a blog post about the feature. According to the company, voting alerts will only be available to government accounts and not personal pages belonging to state or local election administrators.

The company cites the complexity of conducting state elections in the midst of the pandemic in its decision to launch the info center, which is also modeled after the COVID-19 info center that it created in the early days of the crisis. While the COVID-19 info hub initially appeared at the top of users’ Facebook feeds, it’s now only surfaced in searches related to the virus.

Election night nightmare

Uncomfortable as it is with the idea, Facebook seems to be aware that it could very well become the “arbiter of truth” on election night. With 2020’s unprecedented circumstances leading to a record number of ballots cast through the mail, it’s possible that the election’s outcome could be delayed or otherwise confusing. Without clear-cut results, conspiracy theories, opportunism and other forms of misinformation are likely to explode on social platforms, a nightmare scenario that social networks seem to be preemptively dreading.

“A prolonged ballot process has the potential to be exploited in order to sow distrust in the election outcome,” Gleit wrote in Facebook’s post detailing the election tools.

The company was one of nine tech companies that met with federal officials on Wednesday to discuss how they will handle concerns around misinformation on the platforms around election day.

The group of companies now includes Facebook, Google, Reddit, Twitter, Microsoft, Pinterest, Verizon Media, LinkedIn and the Wikimedia Foundation. Some of the group’s members had met previously to discuss efforts ahead of U.S. elections, but the expanded coalition of companies formally working with federal officials to prepare for the U.S. election appears to be new.

Twitter is holding off on fixing verification policy to focus on election integrity

Twitter is pausing its work on overhauling its verification process, which provides a blue checkmark to public figures, in favor of election integrity, Twitter product lead Kayvon Beykpour tweeted today. That’s because, as we enter another election season, “updating our verification program isn’t a top priority for us right now (election integrity is),” he wrote on Twitter this afternoon.

Last November, Twitter paused its account verifications as it tried to figure out a way to address confusion around what it means to be verified. That decision came shortly after people criticized Twitter for having verified the account of Jason Kessler, the person who organized the deadly white supremacist rally in Charlottesville, Virginia.

Fast forward to today, and Twitter still verifies accounts “ad hoc when we think it serves the public conversation & is in line with our policy,” Beykpour wrote. “But this has led to frustration b/c our process remains opaque & inconsistent with our intented [sic] pause.”

While Twitter recognizes its job isn’t done, the company is not prioritizing the work at this time — at least for the next few weeks, he said. In an email addressed to Twitter’s health leadership team last week, Beykpour said his team simply doesn’t have the bandwidth to focus on verification “without coming at the cost of other priorities and distracting the team.”

The highest priority, Beykpour said, is election integrity. Specifically, Twitter’s team will be looking at the product “with a specific lens towards the upcoming elections and some of the ‘election integrity’ workstreams we’ve discussed.”

Once that’s done “after ~4 weeks,” he said, the product team will be in a better place to address verification.