VC investors and startup founders see hope in the red wave that wasn’t

Janna Meyrowitz Turner’s biggest concern going into the U.S. midterm elections was that more than half of American voters had an election denier on their ballot.

She was quite nervous, as were the pundits and pollsters who expected a red wave, a conservative sweep that would take hold of Congress, state legislatures, and city halls. That didn’t happen.

Instead, many candidates backed by Donald Trump failed to win over voters. Pro-abortion, anti-slavery, and pro-marijuana proposals passed while a young and diverse crop of politicians was elected at the federal, state, and local levels. The red scare was nipped in the bud. For now, at least.

“By and large, these candidates lost and will continue to do so,” Turner, the founder of Synastry Capital and co-founder of the coalition VCs for Repro, told TechCrunch. “Despite the deliberate barriers to keep Black, brown, young, and new Americans from voting, these folks turned out in record numbers, which means our government will start to better reflect our citizens.”

All in all, Democrats fared better than expected and are predicted to maintain control of the Senate, while Republicans are favored to take hold of the House. TechCrunch conducted a vibe check with investors and founders to gauge how they feel as results continue trickling in. Many were happy with the progress being made, while others spoke of the issues they aim to tackle next as calls increase for progressive investors to start speaking up.

“Democracy was the real winner last night, which is the underpinning of everything.” Janna Meyrowitz Turner, investor, Synastry Capital

Naturally, one issue on everyone’s mind was abortion.

Voters in many states had to vote on abortion-related proposals since the overturn of Roe v. Wade left such decisions in the hands of states. Shortly before the midterm elections, more than 100 VC firms came together to create VCs for Repro, a coalition rallying investors to vote in favor of reproductive health and wield more of their sociocultural power for change.

Ballot results show that VCs for Repro’s cries were part of a larger clamor to protect reproductive autonomy. Kentucky voted against amending its state constitution to say that there was no right to abortion; Vermont, Michigan, and California voted to make reproductive freedom a constitutional right. Turner noted that pundits and polls drastically underestimated how much Americans support abortion. If it wasn’t clear then, though, it is now.


Nextdoor and Vote.org partner to increase voter turnout for US midterms

Nextdoor is partnering with Vote.org to encourage people to participate in the U.S. midterm elections this November, the company announced on Wednesday. The neighborhood social network says the partnership aims to simplify political engagement and increase voter turnout.

The app will encourage users to verify their voter registration status and find their polling place. Leading up to Election Day on November 8, the social network will identify polling locations and encourage neighbors to help each other get to the polls by carpooling or walking together.

Nextdoor will also remind users to have civil political discourse by displaying pop-ups when hurtful or harmful language is detected or anticipated via predictive technology. The company says its efforts to keep interactions on the platform safe are a balance of human review and technology, both of which will work to flag and remove content that violates Nextdoor’s guidelines.

“Change starts in the neighborhood, and while every neighborhood is unique, everyone wants their community to thrive,” said Nextdoor CEO Sarah Friar in a statement. “We have an opportunity to strengthen civic engagement while upholding Nextdoor’s commitment to voter rights and protections. We’re very excited to continue the partnership with Vote.org to help neighbors throughout the upcoming election cycle and beyond.”

The company says that by partnering with organizations like Vote.org, it can help share trusted voting resources and provide tools to support civic engagement and neighborhood conversations around local and national politics.

Nextdoor is also partnering with the Advancement Project, the Lawyers’ Committee for Civil Rights Under Law and the NAACP to share information about election rights and protections.

Earlier this year, Nextdoor revamped its app with new profiles and more community-building features. The changes came as the social network developed a reputation for racial profiling over the years, which led to the company releasing specific features to address this. The revamp was part of Nextdoor’s larger vision and its goal to create a welcoming neighborhood both online and offline.


Twitter expands its crowdsourced fact-checking program ‘Birdwatch’ ahead of US midterms

On the heels of a report detailing how Twitter had once accidentally allowed a conspiracy theorist into its invite-only fact-checking program known as Birdwatch, the company is today announcing the program will expand to users across the U.S. — with a few changes. The rollout will add 1,000 more contributors to this program every week, ahead of the U.S. midterm elections. But Birdwatch won’t work the same as it did before, Twitter says.

Previously, Birdwatch contributors could immediately add their fact-checks to provide additional context to tweets. Now, that privilege will have to be earned.

To become a Birdwatch contributor capable of writing “notes,” or annotations on tweets that provide further context, a person must first prove they’re capable of identifying the helpful notes written by others.

To determine this, Twitter will assign each potential contributor a “rating impact” score. This score begins at zero and must reach a “5” for a person to become a Birdwatch contributor, a metric that’s likely achievable after a week’s work, Twitter said. Users gain points when they rate a note in a way that matches the status it ultimately earns, either “Helpful” or “Not Helpful.” They lose points when their rating ends up contradicting the note’s final status.

Image Credits: Twitter
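Twitter hasn’t published the precise point values behind this score, but the mechanic it describes can be sketched roughly in Python. In the sketch below, the class and method names and the one-point increments are illustrative assumptions; only the zero starting score and the unlock threshold of 5 come from Twitter’s description.

```python
# Rough sketch of the "rating impact" mechanic Twitter describes.
# The one-point increments are illustrative assumptions; only the zero
# starting score and the unlock threshold of 5 come from Twitter.

WRITING_THRESHOLD = 5  # score needed to unlock note writing

class Contributor:
    def __init__(self):
        self.rating_impact = 0  # every would-be contributor starts at zero
        self.can_write = False

    def record_rating(self, my_rating: str, final_status: str) -> None:
        """Update the score once a note this person rated reaches a final status."""
        if my_rating == final_status:
            self.rating_impact += 1  # rating agreed with the note's outcome
        else:
            self.rating_impact -= 1  # rating contradicted the note's outcome
        if self.rating_impact >= WRITING_THRESHOLD:
            self.can_write = True

    def lock_writing(self) -> None:
        """Triggered when a contributor's own notes keep landing at 'Not Helpful'."""
        self.can_write = False  # must rebuild rating impact to write again
```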

After a person unlocks the ability to write their own Birdwatch notes, they can begin adding contributions and fact-checks. But the quality of their work could lead them to lose their contributor status once again.

Twitter will first push the user whose notes are being marked “Not Helpful” to improve — by better addressing a tweet’s claims or by fixing typos, for instance. But if they still don’t improve, they will have their writing ability locked. They’ll then need to improve their rating impact score to become a contributor again.

Image Credits: Twitter

Another key aspect of Birdwatch’s upgraded system is its use of what the company refers to as a “bridging algorithm.”

This works differently from many social media algorithms, said Twitter. Often, internet algorithms will determine which content to rate higher or approve based on whether or not there’s a majority consensus — like how a post that gets more upvotes on Reddit winds up at the top of the page, for instance. Or a platform may consider posts that meet certain thresholds for engagement — a factor Facebook considers, among others, when determining which posts make it into your feed.

Twitter’s bridging algorithm, on the other hand, will look for consensus across groups with typically differing points of view before it highlights the crowdsourced fact-checks to other users on its platform.

“To be shown on a tweet, a note actually has to be found helpful by people who have historically disagreed in their ratings,” explained Twitter Product VP Keith Coleman, in a briefing with reporters. The idea, he says, is that if people who tend to disagree on notes both find themselves agreeing that a particular note is helpful, that increases the chance that others will also agree about the note’s importance.

“This is a novel approach. We’re not aware of other areas where this has been done before,” Coleman said.

Twitter, however, did not invent this idea. Rather, the concept arose from academic research on internet polarization, where the idea for a bridging algorithm, or bridging-based ranking, is thought to be a potential approach to create a better consensus in a world where multiple truths sometimes seem to co-exist. Today, each side argues only their “truth” is true, and the other is a lie, which has made it difficult to find agreement. The bridging algorithm looks for areas where both sides agree. Ideally, platforms would then reward behavior that “bridges divides” rather than reward posts that create further division.
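To make the contrast concrete, here is a hypothetical Python sketch of a bridging rule next to a simple majority vote. The cluster assignments, thresholds, and function names are all assumptions for illustration; Birdwatch’s actual open sourced algorithm is considerably more involved.

```python
# Illustrative contrast between majority-vote ranking and a bridging rule.
# Cluster assignments, thresholds and names are assumptions; Birdwatch's
# open sourced algorithm is more involved than this.

from collections import defaultdict

def majority_helpful(ratings, threshold=0.5):
    """Majority vote: pool every rating, ignore who it came from."""
    votes = [is_helpful for _, is_helpful in ratings]
    return sum(votes) / len(votes) > threshold

def bridging_helpful(ratings, min_per_cluster=5, threshold=0.8):
    """Bridging rule: each cluster of historically disagreeing raters
    must independently find the note helpful.

    ratings: list of (cluster_id, is_helpful) pairs for one note.
    """
    by_cluster = defaultdict(list)
    for cluster, is_helpful in ratings:
        by_cluster[cluster].append(is_helpful)

    if len(by_cluster) < 2:
        return False  # cross-cluster agreement isn't possible yet

    return all(
        len(votes) >= min_per_cluster
        and sum(votes) / len(votes) >= threshold
        for votes in by_cluster.values()
    )
```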

In the case of Birdwatch notes, Twitter claims to have already seen an impact since switching to this new scoring system during pilot tests.

It found that people on average were 20% to 40% less likely to agree with the substance of a potentially misleading tweet after they read the note about it.

This, said Coleman, is “really significant from the perspective of changing the understanding of a topic.”

Image Credits: Twitter

What’s more, the system works to find agreement across party lines, Twitter claims. It said there’s “no statistically significant difference” on this measure between Democrats, independents and Republicans.

Of course, this raises the question of how many Birdwatch notes will actually make an appearance in the wild if they rely on cross-aisle agreement.

After all, there aren’t two truths. There is the truth and what another side wants to present as the truth. And there are a number of people on both sides of this equation, each armed with information that others who think like them will vote up and down (or Helpful or Not Helpful, as in Birdwatch’s case). This is the problem the internet delivered — one of a system where expertise and experience are discounted in favor of a crowd where the loudest voices on digital soapboxes get the most attention.

Birdwatch is betting that people will come to an agreement on certain points elevated by its crowdsourced fact-checkers as they find common ground in fact. But this is ultimately the same promise that fact-checking organizations like PolitiFact and Snopes made, and when the facts they uncovered were misaligned with the narrative one side was espousing, the people on the losing team simply pointed to the system overall as being corrupt.

How long Birdwatch will escape a similar fate is unknown.

But Twitter says it’s not rolling out Birdwatch more broadly to help counter election misinformation. It just believes the system is now ready to scale.

Plus, the company notes Birdwatch can be used to tackle all sorts of misleading content or misinformation outside of politics — including areas like health, sports, entertainment and other random curiosities that pop up on the internet, such as whether or not someone just tweeted a photo of a bat the size of a human.

Also during its pilot phase, Twitter found that people are 15% to 35% less likely to like or retweet a tweet when there’s a Birdwatch note attached to it, which reduces the further amplification of potentially misleading content in general.

“This is a really encouraging sign that, in addition to informing understanding, these Birdwatch notes are also informing people’s sharing behavior,” Coleman pointed out.

Image Credits: Twitter

This isn’t the first time Twitter has tweaked its Birdwatch system. Since launching its tests, it has added prompts that encouraged contributors to cite their sources when leaving notes and made it possible for users to contribute notes under an alias to minimize potential harassment and abuse. It also added notifications that let users know how many people have read their notes.

And while it allows users across Twitter to now rate notes, those ratings don’t change the outcome of the note’s availability — only ratings by Birdwatch contributors do.

The company’s partners, including AP and Reuters, will help Twitter to review the notes’ accuracy, but this won’t determine what shows up in Birdwatch. It’s a distributed system of consensus, not a top-down effort. However, Twitter says that during the 18 months it’s been piloting this project, the notes that were marked “Helpful” were generally those the partners also found to be accurate.

In addition, the Birdwatch algorithm as well as all contributions to the system are publicly available and open sourced on GitHub for anyone to access.

Twitter says it’s been piloting Birdwatch with around 15,000 contributors, but will now begin to scale the program by adding around 1,000 more contributors every week going forward. Anyone in the U.S. can qualify, but the additions will be on a first-come, first-served basis. Notes can be written in both English and Spanish, though so far most contributors have chosen to write in English.

To fight potential bots, Birdwatch contributors will also need to have a verified phone number from a mobile operator — not a virtual number. The accounts can’t have any recent rule violations and will need to be at least six months old.

Around half the U.S. user base will also start seeing the Birdwatch notes that reached the status of “Helpful,” starting today.

Twitter said the new system is not meant to replace its own fact-check labels or misinformation policies, but rather to run in tandem.

Today, the company’s misinformation policies cover a range of topics, from civic integrity to COVID and health misinformation to manipulated media, and more.

“Beyond those, there is still a lot of content out there that’s potentially misleading,” said Coleman. A tweet could be factually true but leave out a detail that provides further context and changes how someone understands the topic, he suggested. “There’s no policy against that — and it’s really hard to craft policies in these gray areas,” Coleman continued.

“One of the powers of Birdwatch is that it can cover any tweet, it can cover any gray area. And ultimately, it’s up to the people to decide whether the context is helpful enough to be added,” he said.


Google, YouTube outline plans for the US midterm elections

Google and its video sharing app YouTube outlined plans for handling the 2022 U.S. midterm elections this week, highlighting the tools at their disposal to limit the spread of political misinformation.

When users search for election content on either Google or YouTube, recommendation systems are in place to highlight journalism or video content from authoritative national and local news sources such as The Wall Street Journal, Univision, PBS NewsHour and local ABC, CBS and NBC affiliates.

In today’s blog post, YouTube noted that it has removed “a number of videos” about the U.S. midterms that violate its policies, including videos that make false claims about the 2020 election. YouTube’s rules also prohibit inaccurate videos on how to vote, videos inciting violence and any other content that it determines interferes with the democratic process. The platform adds that it has issued strikes to YouTube channels that violate policies related to the midterms and has temporarily suspended some channels from posting new videos.

Image Credits: Google

Google Search will now make it easier for users to look up election coverage by local and regional news from different states. The company is also rolling out a tool on Google Search that it has used before, which directs voters to accurate information about voter registration and how to vote. Google will be working with The Associated Press again this year to offer users authoritative election results in search.

YouTube will also direct voters to an information panel on voting and a link to Google’s “how to vote” and “how to register to vote” features. Other election-related features YouTube announced today include reminders on voter registration and election resources, information panels beneath videos, recommended authoritative videos within its “watch next” panels and an educational media literacy campaign with tips about misinformation tactics.

On Election Day, YouTube will share a link to Google’s election results tracker, highlight livestreams of election night and include election results below videos. The platform will also launch a tool in the coming weeks that gives people searching for federal candidates a panel that highlights essential information, such as which office they’re running for and what their political party is.

Image Credits: YouTube

With two months left until Election Day, Google’s announcement marks the latest attempt by a tech giant to prepare for the pivotal moment in U.S. history. Meta, TikTok and Twitter have also recently addressed how they will approach the 2022 U.S. midterm elections.

YouTube faced scrutiny over how it handled the 2020 presidential election, waiting until December 2020 to announce a policy that would apply to misinformation swirling around the previous month’s election.

Before the policy was initiated, the platform didn’t remove videos with misleading election-related claims, allowing speculation and false information to flourish. That included a video from One America News Network (OAN) posted on the day after the 2020 election falsely claiming that Trump had won the election. The video was viewed more than 340,000 times, but YouTube didn’t immediately remove it, stating the video didn’t violate its rules.

In a new study, researchers from New York University found that YouTube’s recommendation system played a part in spreading misinformation about the 2020 presidential election. From October 29 to December 8, 2020, the researchers analyzed the YouTube usage of 361 people to determine whether the recommendation system steered users toward false claims in the immediate aftermath of the election. The researchers concluded that participants who were very skeptical about the election’s legitimacy were recommended significantly more election fraud-related claims than participants who didn’t doubt the results.

YouTube pushed back against the study in a conversation with TechCrunch, arguing that its small sample size undermined its potential conclusions. “While we welcome more research, this report doesn’t accurately represent how our systems work,” YouTube spokesperson Ivy Choi told TechCrunch. “We’ve found that the most viewed and recommended videos and channels related to elections are from authoritative sources, like news channels.”

The researchers acknowledged that the number of fraud-related videos in the study was low overall and that the data doesn’t consider what channels the participants were subscribed to. Nonetheless, YouTube is clearly a key vector of potential political misinformation — and one to watch as the U.S. heads into its midterm elections this fall.

TikTok launches an in-app US midterms Elections Center, shares plan to fight misinformation

TikTok announced that its midterms Elections Center will go live in the app in the U.S. today, August 17, 2022, and will be available to users in more than 40 languages, including English and Spanish.

The new feature will allow users to access state-by-state election information, including details on how to register to vote, how to vote by mail, how to find your polling place and more, provided by TikTok partner NASS (the National Association of Secretaries of State). TikTok also newly partnered with Ballotpedia to allow users to see who’s on their ballot, and is working with various voting assistance programs — including the Center for Democracy in Deaf America (for deaf voters), the Federal Voting Assistance Program (overseas voting), the Campus Vote Project (students) and Restore Your Vote (people with past convictions) — to provide content for specific groups. The AP will continue to provide the latest election results in the Elections Center.

The center can be accessed through a number of places inside the TikTok app, including by clicking on content labels found on videos, via a banner in the app’s Friends tab, and through hashtag and search pages.

Image Credits: TikTok

The company also detailed its broader plan to combat election misinformation on its platform, building on lessons it learned from the 2020 election cycle. For starters, it launched this in-app Election Center six weeks earlier than in 2020. It’s ramping up its efforts to educate the creator community about its rules related to election content, as well. This will include the launch of an educational series on the Creator Portal and TikTok, and briefings with both creators and agencies to further clarify its rules.

Much of how TikTok will address election misinformation has not changed, however.

On the policy side, TikTok says it will monitor for content that violates its guidelines. This includes misinformation about how to vote, harassment of election workers, harmful deep fakes of candidates and incitement to violence. Depending on the violation, TikTok may remove the content or the user’s account, or ban the device. In addition, TikTok may choose to redirect search terms or hashtags to its community guidelines, as it did during the prior election cycle for the hashtags associated with terms like “stop the steal” or “sharpiegate,” among others.

Image Credits: TikTok

The company reiterated its decision to ban political advertising on the platform, which extends not only to ads paid for through its ads platform but also to branded content posted by creators themselves. That means a political action committee could not work around TikTok policies to instead pay a creator to make a TikTok video advocating for their political position, the company claims.

Of course, just as important as the policies themselves is TikTok’s ability to enforce them.

The company says it will use a combination of automated technology and human moderators on its Trust and Safety team to drive moderation decisions. The former, TikTok admits, can only go so far. Technology can be trained to identify keywords associated with conspiracy theories, for instance, but only a human can tell whether a video is promoting a conspiracy theory or working to debunk it. (The latter is permitted by TikTok guidelines.)

Image Credits: TikTok
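As a toy illustration of that division of labor, consider the following hypothetical Python triage: a keyword match is enough to flag a video for review, but never enough to decide intent, so the judgment call is escalated to a person. The keyword list and function names are assumptions, not TikTok’s actual systems.

```python
# Toy triage of the split TikTok describes: keywords can flag a candidate
# video, but the promote-vs-debunk judgment is a human call. The keyword
# list and names are hypothetical.

CONSPIRACY_KEYWORDS = {"stop the steal", "sharpiegate"}

def triage(transcript: str) -> str:
    text = transcript.lower()
    if any(keyword in text for keyword in CONSPIRACY_KEYWORDS):
        # A keyword match alone can't distinguish a video promoting a
        # conspiracy theory from one debunking it (which is allowed),
        # so the clip is escalated rather than auto-removed.
        return "route_to_human_review"
    return "no_action"
```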

TikTok declined to share how many staffers are dedicated to moderating election misinformation, but noted that the larger Trust and Safety team has grown over the past several years. This election carries added weight, however, because it follows shortly after TikTok shifted its U.S. user data to Oracle’s cloud and tasked the company with auditing its moderation policies and algorithmic recommendation systems.

“As part of Oracle’s work, they will be regularly vetting and validating both our recommendation and our moderation models,” confirmed TikTok’s head of U.S. Safety, Eric Han. “What that means is there’ll be regular audits of our content moderation processes, both from automated systems…technology — and how do we detect and triage certain things — as well as the content that is moderated and reviewed by humans,” he explained.

“This will help us have an extra layer and check to make sure that our decisions highlight what our community guidelines are enforcing and what we want our community guidelines to do. And obviously, that builds on previous announcements that we’ve talked about in the past in our relationship and partnership with Oracle on data storage for U.S. users,” Han said.

Election content can be triggered for moderation in a number of ways. If the community flags a video in the app, it would be reviewed by TikTok’s teams, who may also work with third-party threat intelligence firms to detect things like coordinated activities and covert operations, like those from foreign powers looking to influence U.S. elections. But a video may also be reviewed if it rises in popularity in order to keep TikTok’s main feed — its For You feed — from spreading false or misleading information. While videos are being evaluated by a fact checker, they are not eligible for recommendation to the For You feed, TikTok notes.

The company says it’s now working with a dozen fact-checking partners worldwide, supporting over 30 languages. Its U.S.-based partners include PolitiFact, Science Feedback and Lead Stories. When these firms determine a video to be false, TikTok says the video will be taken down. If it’s returned as “unverified” — meaning the fact checker can’t make a determination — TikTok will reduce its visibility. Unverified content can’t be promoted to the For You feed and will receive a label that indicates the content could not be verified. If a user tries to share the video, they’ll be shown a pop-up asking them if they’re sure they want to post the video. These sorts of tools have been shown to impact user behavior. TikTok said during tests of its unverified labels in the U.S. that videos saw a 24% decline in sharing rates, for instance.
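The outcomes TikTok describes map onto a small set of state changes. Below is a minimal Python sketch of that flow; the class, field, and verdict names are assumptions for illustration, not TikTok’s actual systems.

```python
# Minimal sketch of the fact-check outcomes TikTok describes. The class,
# field and verdict names are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class Video:
    removed: bool = False
    eligible_for_for_you: bool = True
    label: str = ""
    share_warning: bool = False

def on_review_started(video: Video) -> None:
    # While a fact check is pending, the video is held out of For You.
    video.eligible_for_for_you = False

def apply_fact_check(video: Video, verdict: str) -> None:
    if verdict == "false":
        video.removed = True                # confirmed-false videos come down
    elif verdict == "unverified":
        video.eligible_for_for_you = False  # still barred from For You
        video.label = "unverified"          # label shown on the video
        video.share_warning = True          # "are you sure?" pop-up on share
    else:
        video.eligible_for_for_you = True   # no issue found; restrictions lift
```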

In addition, all videos related to the elections — including those from politicians, candidates, political parties or government accounts — will be labeled with a link that redirects to the in-app election center. TikTok will also host PSAs on election-related hashtags like #midterms and #elections2022.

Image Credits: TikTok

TikTok symbolizes a new era of social media compared to longtime stalwarts like Facebook and YouTube, but it’s already repeating some of the same mistakes. The short-form social platform wasn’t around during Facebook’s Russian election interference scandal back in 2016, but it isn’t immune from the same concerns about misinformation and disinformation that have plagued more traditional social platforms.

Like other social networks, TikTok relies on a blend of human and automated moderation to detect harmful content at scale — and like its peers it leans too heavily on the latter. TikTok also outlines its content moderation policies in lengthy blog posts, but at times fails to live up to its own lofty promises.

In 2020, a report from watchdog group Media Matters for America found that 11 popular videos promoting false pro-Trump election conspiracies attracted more than 200,000 combined views within a day of the U.S. presidential election. The group noted that the selection of misleading posts was only a “small sample” of election misinformation in wide circulation on the app at the time.

With TikTok gaining popularity and mainstream adoption outside of viral dance videos and the Gen Z early adopters it’s known for, the misinformation problem only stands to worsen. The app has grown at a rapid clip in the last couple of years, hitting three billion downloads by the middle of 2021 with projections that it would pass the 750 million user mark in 2022.

This year, TikTok has emerged as an unlikely but vital source of real-time updates and open source intelligence about the war in Ukraine, taking such a prominent position in the information ecosystem that the White House decided to brief a handful of star creators about the conflict.

But because TikTok is a wholly video-focused app that lacks the searchable text of a Facebook or Twitter post, tracing how misinformation travels on the app is challenging. And like the secretive algorithms that propel hit content on other social networks, TikTok’s ranking system is sealed away in a black box, obscuring the forces that propel some videos to viral heights while others languish.

A researcher studying the Kenyan information ecosystem with the Mozilla Foundation found that TikTok is emerging as an alarming vector of political disinformation in the country. “While more mature platforms like Facebook and Twitter receive the most scrutiny in this regard, TikTok has largely gone under-scrutinized — despite hosting some of the most dramatic disinformation campaigns,” Mozilla Fellow Odanga Madung wrote. He describes a platform “teeming” with misleading claims about Kenya’s general election that could inspire political violence there.

Mozilla researchers had similar concerns in the run-up to 2021’s German federal election, finding that the company dragged its feet to implement fact-checking and failed to detect a number of popular accounts impersonating German politicians.

TikTok may have also played an instrumental role in elevating the son of a dictator to the presidency in the Philippines. Earlier this year, the campaign of Ferdinand “Bongbong” Marcos Jr. flooded the social network with flattering posts and paid influencers to rewrite a brutal family legacy.

Though TikTok and Oracle are now engaged in some sort of auditing agreement, the details of how it will work are undisclosed, as is the extent to which Oracle’s findings will be made public. That means we may not know for some time how well TikTok will be able to keep election misinformation under control.