UK’s Thirdfort nabs $20M for tools that help with ID verification and detect money laundering and payment fraud

Money laundering has been a hot topic of late in the UK, which is facing pressure not just to make tighter rules to track down the origins of money spent on large assets in the country like prime real estate — the capital city has been dubbed by some the “London laundromat” — but also, in light of sanctions on Russia in recent weeks, to actually enforce those rules.

Today, a London startup called Thirdfort, which has built a platform to help professional services firms run more thorough due diligence, and to flag when something is suspicious, is announcing a funding round of £15 million (about $20 million), money that it will be using to continue expanding its services, specifically to build payment infrastructure directly into its platform.

The raise, a Series A, was led by Breega, with B2B fintech-focused Element Ventures also investing, along with the founders of ComplyAdvantage, Tessian, Fenergo, R3, Funding Circle and Fidel.

CEO Olly Thornton-Berry said that he and Jack Bidgood first came upon the idea for Thirdfort after a friend of theirs lost £25,000 to a phishing attack while buying a flat in London: fraudsters had picked up some data about the deal, created a domain similar to that of the legal firm the friend was using for the purchase, and with that wrote an email impersonating the friend’s lawyer, asking for the sum to be transferred via a link. It was only weeks later, when the friend was legitimately asked for the same sum, that they all started to suspect foul play. The friend never recovered that money.

The incident, Thornton-Berry said, highlighted both how little information professional services companies require of a customer before entering into a business dealing, and how little protection the client has against more sophisticated fraud attempts.

This led to Thirdfort, which provides a big-data toolkit of multiple resources such as data from LexisNexis, ComplyAdvantage, Companies House and more that can be corralled (and picked by the client) to provide different data points about individuals and their sources of money. Thirdfort has built tools first to address the needs of companies in the legal and property markets.
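For illustration only, here is a minimal sketch of what a corralled, client-configurable check could look like in code. Every class, function and field name below is hypothetical; Thirdfort has not published its API, and each of the underlying data sources has its own integration.

```python
# Hypothetical sketch of a configurable check-aggregation layer of the kind
# described above. Names and logic are invented for illustration, not taken
# from Thirdfort's platform.
from dataclasses import dataclass, field


@dataclass
class CheckResult:
    provider: str            # e.g. "companies_house", "complyadvantage"
    passed: bool
    details: dict = field(default_factory=dict)


def run_client_checks(subject: dict, providers: list) -> list[CheckResult]:
    """Run only the checks the business client has opted into."""
    results = []
    for provider in providers:
        # Each provider object is assumed to expose a .check() method returning
        # (passed, details); in reality every data source has its own API.
        passed, details = provider.check(subject)
        results.append(CheckResult(provider.name, passed, details))
    return results


def overall_risk(results: list[CheckResult]) -> str:
    """Collapse individual data points into a simple flag for the firm."""
    failures = [r for r in results if not r.passed]
    return "clear" if not failures else "refer"   # "refer" = human review
```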

The product today comes in two parts. First, there is the “risk engine” built for its business clients, which can be used both for KYC (know your customer) checks and to help companies comply with anti-money laundering regulations. There are around 700 businesses already using the platform, including law firms DAC Beachcroft, Penningtons Manches Cooper and Mishcon de Reya; and property businesses Knight Frank, Strutt & Parker and Winkworth.

Second, there is an app for consumer customers of those businesses, built on open banking infrastructure to connect those businesses with the customer’s bank, by way of the banks’ own apps, so that payments can be made securely. It has now been downloaded some 500,000 times.
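For a rough sense of how a redirect-style open banking payment flow hangs together, here is a purely hypothetical sketch; the function names and fields are invented, and a real integration runs through a regulated payment provider and the bank’s own APIs and app.

```python
# Hypothetical sketch of the redirect-style payment flow described above.
# Nothing here reflects Thirdfort's actual implementation or any bank's API.
import uuid


def create_payment_request(payer_name: str, amount_pence: int, reference: str) -> dict:
    """The business creates a payment request for the client to authorise."""
    return {
        "id": str(uuid.uuid4()),
        "payer": payer_name,
        "amount_pence": amount_pence,
        "reference": reference,          # e.g. the property transaction reference
        "status": "awaiting_authorisation",
    }


def handoff_to_bank_app(payment_request: dict) -> str:
    """From the consumer app, the client is sent to their own bank's app to
    approve the payment, so bank credentials never pass through a third party."""
    return f"https://bank.example/authorise?payment_id={payment_request['id']}"


def handle_bank_callback(payment_request: dict, authorised: bool) -> dict:
    """The bank's response updates the request; funds then move bank to bank."""
    payment_request["status"] = "executed" if authorised else "declined"
    return payment_request
```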

Similar to Alloy in the U.S. (a potential competitor, if either expands into the other’s market), the pitch with Thirdfort is that the identification and origin-of-funds research it automates has historically been drawn-out, largely manual and expensive to run, if it was run at all. Times are changing, and companies are now being required to do more of this work.

“Now, what’s required is a lot more,” Thornton-Berry said. “They need to run an in-depth level of due diligence, seeing bank statements and what is moving in and out, asking clients specific questions, running diligence checks on gifted money if the sum is specified as a gift. It’s a whole new kind of workflow that’s come into existence with AML going up.”

And while Thirdfort today focuses primarily on fraud detection — it’s managed to halt around a dozen dodgy transactions for its customers, Thornton-Berry said — it’s also built for AML diligence and compliance regulations and will likely come into its own when these are run more widely, especially around any large transactions involving international money.

“For both consumers and professional services, the risk of fraud and the need for compliance represents a massive burden,” Maxence Drummond, Principal at Breega, said in a statement. “Consumers need to get verified for every transaction, and regulated professionals spend too much of their valuable time on client verification and compliance.”

UK publishes safety-focused rules for video sharing platforms like TikTok

Video sharing platforms that offer a service in the UK have to comply with new regulations intended to protect users and under-18s from harmful content such as hate speech and videos/ads likely to incite violence against protected groups.

Ofcom, the country’s comms, broadcast and — in an expanding role — Internet content regulator, has published the guidance for platforms like TikTok, Snapchat, Vimeo and Twitch today.

Among the requirements for in-scope services are that they must take “appropriate measures” to protect users from harmful material.

Terrorist content, child sexual abuse material, racism and xenophobia also fall under the ‘harmful content’ bracket.

In a press release the regulator said its research shows that a third of UK Internet users say they have witnessed or experienced hateful content; a quarter claim they’ve been exposed to violent or disturbing content; while one in five have been exposed to videos or content that encouraged racism.

There is no prescriptive list of what measures video sharing platforms must use to prevent users being exposed to such content.

But there are a number of recommendations — such as clauses in terms and conditions; functionality like the ability for uploaders to declare if their content contains an ad; and user-friendly mechanisms for viewers to report or flag harmful content, as well as transparent complaints procedures.

Age assurance systems are also recommended, as is the inclusion of parental controls — as the regulation has the specific aim of protecting under-18s from viewing videos and adverts containing restricted material.

Ofcom is also recommending “robust” age-verification for video-sharing platforms that host pornography in order to prevent under-18s from viewing adult material.

A list of video sharing platforms that have notified themselves to Ofcom as falling within the scope of the regulations can be found here. (As well as the aforementioned platform giants it also includes the likes of OnlyFans, Triller and Recast.)

“We are recommending providers put in place systematic risk management processes to help providers to identify and implement measures that are practicable and proportionate,” Ofcom goes on to say in the guidance to video sharing platforms.

“While we acknowledge that harmful material may not be completely eradicated from a platform, we expect providers to make meaningful efforts to prevent users from encountering it,” it adds.

“The VSP [aka video sharing platform] Regime is about platform’s safety systems and processes, not about regulating individual videos, however evidence of a prevalence of harmful material on a platform may require closer scrutiny.”

The regulator says it will want to understand measures platforms have in place, as well as their effectiveness at protecting users — and “any processes which have informed a provider’s decisions about which protection measures to use”. So platforms will need to document and be able to justify their decisions if the regulator comes calling, such as following a complaint.

Monitoring tech platforms’ compliance with the new requirements will be a key new Ofcom role — and a taster of what is to come under incoming and far more broad-brush safety-focused digital regulations.

“Along with engagement with providers themselves, we expect to inform our understanding of whether users are being effectively protected, for example by monitoring complaints and engaging with interested parties such as charities, NGOs and tech safety groups,” Ofcom also writes, adding that this engagement will play an important part in supporting its decisions about “areas of focus”.

Ofcom’s role as an Internet content regulator will be fleshed out in the coming years as the government works to pass legislation that will impose a wide-ranging duty of care on digital service providers of all stripes, instructing them to handle user-generated content in a way that prevents people — and especially children — from being exposed to illegal and/or harmful stuff.

A key appointment — the chair of Ofcom — has been delayed as the government decided to rerun the competition for the role.

Reports have suggested the government wants the former editor of the Daily Mail to take the post but an independent panel involved in the initial selection process rejected Paul Dacre as an unsuitable candidate earlier this year. (It is unclear whether the government will continue to try to parachute Dacre into the job.)

Ofcom, meanwhile, has been regulating video on-demand services in the UK since 2010.

But the video-sharing framework is a separate regulatory instrument that’s intended to respond to differences in the level of control, since video-sharing platforms provide tools that allow users to upload their own content.

However this newer framework is set to be superseded by new legislation under the incoming online safety regulatory framework.

So these regulations for video sharing platforms are something of a placeholder and a taster as UK lawmakers grapple with laying down more comprehensive online safety rules which will apply much more widely.

Still, in the guidance Ofcom describes the VSP Regime as “an important precursor to the future Online Safety legislation”, adding:  “Given the two regimes’ shared objective to improve user safety by requiring services to protect users through the adoption of appropriate systems and processes, Ofcom considers that compliance with the VSP regime will assist services in preparing for compliance with the online safety regime as described by Government in the draft Online Safety Bill.”

The UK’s data protection regulator is also already enforcing a set of ‘age appropriate’ design requirements for digital services that are likely to be accessed by children.

Tiger Global leads $34M investment into Unit21, a no-code fraud prevention platform

Unit21, a startup that helps businesses monitor fraudulent activities with its no-code software, announced today it has raised $34 million in a Series B round of funding led by Tiger Global Management.

The round values San Francisco-based Unit21 at $300 million and comes nine months after the startup raised a $13 million Series A that included investments from the founders of Plaid, Chime and Shape Security as well as former Venmo COO Michael Vaughan.

ICONIQ Capital and existing backers Gradient Ventures (Google’s AI venture fund), A.Capital and South Park Commons participated in the latest funding event. 

Former Affirm product manager Trisha Kothari and Clarence Chio founded Unit21 in 2018 with the goal of giving risk, compliance and fraud teams a way to fight financial crime via a “secure, integrated, no-code platform.” 


The pair say they started Unit21 based on the belief that the existing model of “black box” machine learning used for fraud prevention and detection was flawed. Their idea was to develop an alternative system to provide risk and compliance teams with more control over their operations. Unit21 describes its core technology as a “flag-and-review” toolset designed to give non-technical operators and anti-money laundering (AML) teams the ability to “easily” write complex statistical models and deploy customized workflows without having to involve their engineering teams. Unit21 says it provides this toolset to companies with the aim of helping them mitigate fraud and money laundering risks through Know Your Customer (KYC) verification, transaction monitoring detection and suspicious activity report (SAR) case management. 
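To make the “flag-and-review” idea concrete, here is a hypothetical sketch of the kind of rule a non-technical operator might define and a backend might evaluate. The rule schema and field names are invented for illustration and are not Unit21’s actual format.

```python
# Illustrative only: a no-code rule as it might look once serialised, plus a
# toy evaluator. The schema is invented, not Unit21's.

# A rule an operator might build in a visual editor: flag any customer who
# moves more than $10,000 across 3 or more transactions in a 24-hour window.
EXAMPLE_RULE = {
    "name": "high_velocity_transfers",
    "window_hours": 24,
    "min_total_amount": 10_000,
    "min_transaction_count": 3,
    "action": "flag_for_review",   # route to a human analyst for review
}


def evaluate_rule(rule: dict, transactions: list) -> dict | None:
    """Return a review case if the rule fires, otherwise None.

    `transactions` is assumed to already be limited to the rule's time window.
    """
    total = sum(t["amount"] for t in transactions)
    if total >= rule["min_total_amount"] and len(transactions) >= rule["min_transaction_count"]:
        return {
            "rule": rule["name"],
            "action": rule["action"],
            "evidence": transactions,   # the analyst sees exactly why it fired
        }
    return None
```

The point the article describes is that operators can write and tune rules like this themselves, without involving their engineering teams.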

Unit21 has built up an impressive customer base of over 50 enterprise clients, including Chime, Intuit, Coinbase, Gusto, Flywire, Wyre and Twitter, among others. The company says it has monitored more than $100 billion in activity via its API and dashboard since its 2018 inception. It also says that it has saved more than 20 million users over $100 million in fraud loss/suspicious activity. The company declined to reveal hard revenue figures, saying only that revenue grew by “12x” in 2020 compared to 2019.

“Data is the most important weapon in the fight against fraud and money laundering,” Kothari said. “This funding will support our mission to democratize data and make it more accessible to operations teams.”

The company will also use its new capital in part toward expanding its engineering, research & development and go-to-market teams. As of late June, Unit21 had 53 employees, up from 12 at the same time last year. The startup also plans to evolve its platform for generalized flag-and-review use cases beyond financial crimes and fraud. It’s also eyeing expansion in the Asia-Pacific (APAC) and Europe/Middle East (EMEA) markets.

Tiger Global Partner John Curtius said Unit21 is transforming organizations’ ability to “analyze data to its advantage for risk management and compliance.”

The space is a hot one with a number of other fraud-prevention companies raising capital in recent months including Sift, Seon and Feedzai. According to Compliance Week (citing analysis by Fenergo), financial institutions were hit with an estimated $10.4 billion in global fines and penalties related to anti-money laundering (AML), know your customer (KYC), data privacy, and MiFID (Markets in Financial Instruments Directive) regulations in 2020, bringing the total to $46.4 billion for those types of breaches since 2008. The report, spanning up to its release date of Dec. 9, said there had been 198 fines against financial institutions for AML, KYC, data privacy, and MiFID deficiencies, representing a 141% increase since 2019.
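As a back-of-the-envelope check on those figures (assuming the 141% increase refers to the number of fines relative to the 2019 count):

```python
# Rough arithmetic on the Compliance Week / Fenergo figures cited above.
fines_2020_count = 198
increase_since_2019 = 1.41                    # "a 141% increase since 2019"
implied_2019_count = fines_2020_count / (1 + increase_since_2019)
print(round(implied_2019_count))              # ~82 fines in 2019, on that reading

total_since_2008 = 46.4e9                     # $46.4B cumulative since 2008
fines_2020_value = 10.4e9                     # $10.4B in 2020 alone
print(round(fines_2020_value / total_since_2008, 2))   # ~0.22, i.e. 2020 is roughly a fifth of the total
```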

Biden admin will share more info with online platforms on ‘front lines’ of domestic terror fight

The Biden administration is outlining new plans to combat domestic terrorism in light of the January 6 attack on the U.S. Capitol, and social media companies have their own part to play.

The White House released a new national strategy on countering domestic terrorism Tuesday. The plan acknowledges the key role that online platforms play in bringing violent ideas into the mainstream, going as far as calling social media sites the “front lines” of the war on domestic terrorism.

“The widespread availability of domestic terrorist recruitment material online is a national security threat whose front lines are overwhelmingly private-sector online platforms, and we are committed to informing more effectively the escalating efforts by those platforms to secure those front lines,” the White House plan states.

The Biden administration committed to more information sharing with the tech sector to fight the tide of online extremism, part of a push to intervene well before extremists can organize violence. According to a fact sheet on the new domestic terror plan, the U.S. government will prioritize “increased information sharing with the technology sector,” specifically online platforms where extremism is incubated and organized.

“Continuing to enhance the domestic terrorism-related information offered to the private sector, especially the technology sector, will facilitate more robust efforts outside the government to counter terrorists’ abuse of Internet-based communications platforms to recruit others to engage in violence,” the White House plan states.

In remarks timed with the release of the domestic terror strategy, Attorney General Merrick Garland asserted that coordinating with the tech sector is “particularly important” for interrupting extremists who organize and recruit on online platforms and emphasized plans to share enhanced information on potential domestic terror threats.

In spite of the new initiatives, the Biden administration admits that domestic terrorism recruitment material will inevitably remain available online, particularly on platforms that don’t prioritize its removal — like most social media platforms, prior to January 2021 — and on end-to-end encrypted apps, many of which saw an influx of users when social media companies cracked down on extremism in the U.S. earlier this year.

“Dealing with the supply is therefore necessary but not sufficient: we must address the demand too,” the White House plan states. “Today’s digital age requires an American population that can utilize essential aspects of Internet-based communications platforms while avoiding vulnerability to domestic terrorist recruitment and other harmful content.”

The Biden administration will also address vulnerability to online extremism through digital literacy programs, including “educational materials” and “skills-enhancing online games” designed to inoculate Americans against domestic extremism recruitment efforts, and presumably disinformation and misinformation more broadly.

The plan stops short of naming domestic terror elements like QAnon and the “Stop the Steal” movement specifically, though it acknowledges the range of ways domestic terror can manifest, from small informal groups to organized militias.

A report from the Office of the Director of National Intelligence in March observed the elevated threat to the U.S. that domestic terrorism poses in 2021, noting that domestic extremists leverage mainstream social media sites to recruit new members, organize in-person events and share materials that can lead to violence.

EU adopts rules on one-hour takedowns for terrorist content

The European Parliament approved a new law on terrorist content takedowns yesterday, paving the way for one-hour removals to become the legal standard across the EU.

The regulation “addressing the dissemination of terrorist content online” will come into force shortly after publication in the EU’s Official Journal — and start applying 12 months after that.

The incoming regime means providers serving users in the region must act on terrorist content removal notices from Member State authorities within one hour of receipt, or else provide an explanation why they have been unable to do so.
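For illustration, a provider’s internal tooling for tracking that deadline might look something like the hypothetical sketch below. The regulation specifies the obligation, not the implementation, and every name here is invented.

```python
# Hypothetical sketch of tracking a removal order against the one-hour deadline.
from datetime import datetime, timedelta, timezone

ONE_HOUR = timedelta(hours=1)


def receive_removal_order(content_id: str, issuing_authority: str) -> dict:
    """Log the order the moment it arrives; the clock starts at receipt."""
    return {
        "content_id": content_id,
        "authority": issuing_authority,
        "received_at": datetime.now(timezone.utc),
        "status": "open",
    }


def resolve_order(order: dict, removed: bool, reason: str | None = None) -> dict:
    """Either remove the content or record an explanation for why it wasn't possible."""
    now = datetime.now(timezone.utc)
    order["resolved_at"] = now
    order["within_deadline"] = (now - order["received_at"]) <= ONE_HOUR
    if removed:
        order["status"] = "removed"
    else:
        # The regulation allows providers to explain why they could not comply.
        order["status"] = "explained"
        order["reason"] = reason
    return order
```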

There are exceptions for educational, research, artistic and journalistic work — with lawmakers aiming to target terrorism propaganda being spread on online platforms like social media sites.

The types of content they want speedily removed under this regime include material that incites, solicits or contributes to terrorist offences; provides instructions for such offences; or solicits people to participate in a terrorist group.

Material posted online that provides guidance on how to make and use explosives, firearms or other weapons for terrorist purposes is also in scope.

However, concerns have been raised over the impact on online freedom of expression — including whether platforms will use content filters to shrink their risk, given the tight turnaround times required for removals.

The law does not put a general obligation on platforms to monitor or filter content but it does push service providers to prevent the spread of proscribed content — saying they must take steps to prevent propagation.

It is left up to service providers how exactly they do that, and while there’s no legal obligation to use automated tools it seems likely filters will be what larger providers reach for, with the risk of unjustified, speech-chilling takedowns fast following.

Another concern is how exactly terrorist content is being defined under the law — with civil rights groups warning that authoritarian governments within Europe might seek to use it to go after critics based elsewhere in the region.

The law does include transparency obligations — meaning providers must publicly report information about content identification and takedown actions annually.

On the sanctions side, Member States are responsible for adopting rules on penalties but the regulation sets a top level of fines for repeatedly failing to comply with provisions at up to 4% of global annual turnover.

EU lawmakers proposed the new rules back in 2018  when concern was riding high over the spread of ISIS content online.

Platforms were pressed to abide by an informal one-hour takedown rule in March of the same year. But within months the Commission came forward with a more expansive proposal for a regulation aimed at “preventing the dissemination of terrorist content online”.

Negotiations over the proposal have seen MEPs and Member States (via the Council) tweaking provisions — with the former, for example, pushing for a provision that requires the competent authority to contact companies that have never received a removal order a little in advance of issuing the first order to remove content — to provide them with information on procedures and deadlines — so they’re not caught entirely on the hop.

The impact on smaller content providers has continued to be a concern for critics, though.

The Council adopted its final position in March. The approval by the Parliament yesterday concludes the co-legislative process.

Commenting in a statement, MEP Patryk Jaki, the rapporteur for the legislation, said: “Terrorists recruit, share propaganda and coordinate attacks on the internet. Today we have established effective mechanisms allowing member states to remove terrorist content within a maximum of one hour all around the European Union. I strongly believe that what we achieved is a good outcome, which balances security and freedom of speech and expression on the internet, protects legal content and access to information for every citizen in the EU, while fighting terrorism through cooperation and trust between states.”

Europe to push for one-hour takedown law for terrorist content

The European Union’s executive body is doubling down on its push for platforms to pre-filter the Internet, publishing a proposal today for all websites to monitor uploads in order to be able to quickly remove terrorist content.

The Commission handed platforms an informal one-hour rule for removing terrorist content back in March. It’s now proposing turning that into a law to prevent such content spreading its violent propaganda over the Internet.

For now the ‘rule of thumb’ regime continues to apply. But it’s putting meat on the bones of its thinking, fleshing out a more expansive proposal for a regulation aimed at “preventing the dissemination of terrorist content online”.

As per usual EU processes, the Commission’s proposal would need to gain the backing of Member States and the EU parliament before it could be cemented into law.

One major point to note here is that existing EU law does not allow Member States to impose a general obligation on hosting service providers to monitor the information that users transmit or store. But in the proposal the Commission argues that, given the “grave risks associated with the dissemination of terrorist content”, states could be allowed to “exceptionally derogate from this principle under an EU framework”.

So it’s essentially suggesting that Europeans’ fundamental rights might not, in fact, be so fundamental. (Albeit, European judges might well take a different view — and it’s very likely the proposals could face legal challenges should they be cast into law.)

What is being suggested would also apply to any hosting service provider that offers services in the EU — “regardless of their place of establishment or their size”. So, seemingly, not just large platforms, like Facebook or YouTube, but — for example — anyone hosting a blog that includes a free-to-post comment section.

Websites that fail to promptly take down terrorist content would face fines — with the level of penalties being determined by EU Member States (Germany has already legislated to enforce social media hate speech takedowns within 24 hours, setting the maximum fine at €50M).

“Penalties are necessary to ensure the effective implementation by hosting service providers of the obligations pursuant to this Regulation,” the Commission writes, envisaging the most severe penalties being reserved for systematic failures to remove terrorist material within one hour. 

It adds: “When determining whether or not financial penalties should be imposed, due account should be taken of the financial resources of the provider.” So — for example — individuals with websites who fail to moderate their comment section fast enough might not be served the very largest fines, presumably.

The proposal also encourages platforms to develop “automated detection tools” so they can take what it terms “proactive measures proportionate to the level of risk and to remove terrorist material from their services”.

So the Commission’s continued push for Internet pre-filtering is clear. (This is also a feature of its copyright reform — which is being voted on by MEPs later today.)

Albeit, it’s not alone on that front. Earlier this year the UK government went so far as to pay an AI company to develop a terrorist propaganda detection tool that used machine learning algorithms trained to automatically detect propaganda produced by the Islamic State terror group — with a claimed “extremely high degree of accuracy”. (At the time it said it had not ruled out forcing tech giants to use it.)

What is terrorist content for the purposes of this proposal? The Commission refers to an earlier EU directive on combating terrorism — which defines the material as “information which is used to incite and glorify the commission of terrorist offences, encouraging the contribution to and providing instructions for committing terrorist offences as well as promoting participation in terrorist groups”.

And on that front you do have to wonder whether, for example, some of U.S. president Donald Trump’s comments last year after the far right rally in Charlottesville where a counter protestor was murdered by a white supremacist — in which he suggested there were “fine people” among those same murderous and violent white supremacists — might not fall under that ‘glorifying the commission of terrorist offences’ umbrella, should, say, someone repost them to a comment section that was viewable in the EU…

Safe to say, even terrorist propaganda can be subjective. And the proposed regime will inevitably encourage borderline content to be taken down — having a knock-on impact upon online freedom of expression.

The Commission also wants websites and platforms to share information with law enforcement and other relevant authorities and with each other — suggesting the use of “standardised templates”, “response forms” and “authenticated submission channels” to facilitate “cooperation and the exchange of information”.

It tackles the problem of what it refers to as “erroneous removal” — i.e. content that’s removed after being reported or erroneously identified as terrorist propaganda but which is subsequently, under requested review, determined not to be — by placing an obligation on providers to have “remedies and complaint mechanisms to ensure that users can challenge the removal of their content”.

So platforms and websites will be obligated to police and judge speech — which they already do, of course, but the proposal doubles down on turning online content hosts into judges and arbiters of that same content.

The regulation also includes transparency obligations on the steps being taken against terrorist content by hosting service providers — which the Commission claims will ensure “accountability towards users, citizens and public authorities”. 

Other perspectives are of course available… 

The Commission envisages all taken down content being retained by the host for a period of six months so that it could be reinstated if required, i.e. after a valid complaint — to ensure what it couches as “the effectiveness of complaint and review procedures in view of protecting freedom of expression and information”.

It also sees the retention of takedowns helping law enforcement — meaning platforms and websites will continue to be co-opted into state law enforcement and intelligence regimes, getting further saddled with the burden and cost of having to safely store and protect all this sensitive data.

(On that the EC just says: “Hosting service providers need to put in place technical and organisational safeguards to ensure the data is not used for other purposes.”)

The Commission would also create a system for monitoring the monitoring it’s proposing platforms and websites undertake — thereby further extending the proposed bureaucracy, saying it would establish a “detailed programme for monitoring the outputs, results and impacts” within one year of the regulation being applied; report on the implementation and the transparency elements within two years; and evaluate the entire functioning of it four years after it comes into force.

The executive body says it consulted widely ahead of forming the proposals — including running an open public consultation, carrying out a survey of 33,500 EU residents, and talking to Member States’ authorities and hosting service providers.

“By and large, most stakeholders expressed that terrorist content online is a serious societal problem affecting internet users and business models of hosting service providers,” the Commission writes. “More generally, 65% of respondents to the Eurobarometer survey considered that the internet is not safe for its users and 90% of the respondents consider it important to limit the spread of illegal content online.

“Consultations with Member States revealed that while voluntary arrangements are producing results, many see the need for binding obligations on terrorist content, a sentiment echoed in the European Council Conclusions of June 2018. While overall, the hosting service providers were in favour of the continuation of voluntary measures, they noted the potential negative effects of emerging legal fragmentation in the Union.

“Many stakeholders also noted the need to ensure that any regulatory measures for removal of content, particularly proactive measures and strict timeframes, should be balanced with safeguards for fundamental rights, notably freedom of speech. Stakeholders noted a number of necessary measures relating to transparency, accountability as well as the need for human review in deploying automated tools.”

Twitter claims more progress on squeezing terrorist content

Twitter has put out its latest Transparency Report providing an update on how many terrorist accounts it has suspended on its platform — with a cumulative 1.2 million+ suspensions since August 2015.

During the reporting period of July 1, 2017 through December 31, 2017 — for this, Twitter’s 12th Transparency Report — the company says a total of 274,460 accounts were permanently suspended for violations related to the promotion of terrorism.

“This is down 8.4% from the volume shared in the previous reporting period and is the second consecutive reporting period in which we’ve seen a drop in the number of accounts being suspended for this reason,” it writes. “We continue to see the positive, significant impact of years of hard work making our site an undesirable place for those seeking to promote terrorism, resulting in this type of activity increasingly shifting away from Twitter.”

Six months ago the company claimed big wins in squashing terrorist activity on its platform — attributing drops in reports of pro-terrorism accounts then to the success of in-house tech tools in driving terrorist activity off its platform (and perhaps inevitably rerouting it towards alternative platforms — Telegram being chief among them, according to experts on online extremism).

At that time Twitter reported a total of 299,649 pro-terrorism accounts had been suspended — which it said was a 20 per cent drop on figures reported for July through December 2016.

So the size of the drops is also shrinking. Though Twitter suggests that’s because it’s winning the battle to discourage terrorists from trying in the first place.

For its latest reporting period, ending December 2017, Twitter says 93% of the accounts were flagged by its internal tech tools — with 74% of those also suspended before their first tweet, i.e. before they’d been able to spread any terrorist propaganda.

Which means that around a quarter of the pro-terrorist accounts did manage to get out at least one terror tweet.

This proportion is essentially unchanged since the last report period (when Twitter reported suspending 75% before their first tweet) — so whatever tools it’s using to automate terror account identification and blocking appear to be in a steady state, rather than gaining in ability to pre-filter terrorist content.
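For what it’s worth, the arithmetic behind the “around a quarter” figure looks like this, taking the reported percentages at face value:

```python
# Rough arithmetic on Twitter's reported H2 2017 figures.
total_suspended = 274_460        # accounts suspended for promoting terrorism
flagged_internally = 0.93        # share surfaced by Twitter's own tools
pre_first_tweet = 0.74           # share suspended before their first tweet

tweeted_share = 1 - pre_first_tweet
print(round(tweeted_share, 2))                    # 0.26, i.e. "around a quarter"
print(round(total_suspended * tweeted_share))     # ~71,000 accounts got at least one tweet out

# If the 74% applies only to the 93% flagged internally (as the phrasing suggests),
# the share that managed to tweet rises to roughly 31%.
print(round(1 - flagged_internally * pre_first_tweet, 2))   # 0.31
```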

Twitter also specifies that government reports of violations related to the promotion of terrorism represent less than 0.2% of all suspensions in the most recent reporting period — or 597 to be exact.

As with its prior transparency report, a far larger number of Twitter accounts are being reported by governments for “abusive behavior” — which refers to long-standing problems on Twitter’s platform such as hate speech, racism, misogyny and trolling.

And in December a Twitter policy staffer was roasted by UK MPs during a select committee session after the company was again shown failing to remove violent, threatening and racist tweets — which committee staffers had reported months earlier in that case.

Twitter’s latest Transparency Report specifies that governments reported 6,254 Twitter accounts for abusive behavior — yet the company only actioned a quarter of these reports.

That’s still up on the prior reporting period, though, when it reported actioning a paltry 12% of these type of reports.

The issue of abuse and hate speech on online platforms generally has rocketed up the political agenda in recent years, especially in Europe — where Germany now has a tough new law to regulate takedowns.

Platforms’ content moderation policies certainly remain a bone of contention for governments and lawmakers.

Last month the European Commission set out a new rule of thumb for social media platforms — saying it wants them to take down illegal content within an hour of it being reported.

This is not legislation yet, but the threat of EU-wide laws being drafted to regulate content takedowns remains a discussion topic — to encourage platforms to improve performance voluntarily.

Where terrorist content specifically is concerned, the Commission has also been pushing for increased use by tech firms of what it calls “proactive measures”, including “automated detection”.

And in February the UK government also revealed it had commissioned a local AI firm to build an extremist content blocking tool — saying it could decide to force companies to use it.

So political pressure remains especially high on that front.

Returning to abusive content, Twitter’s report specifies that the majority of the tweets and accounts reported to it by governments which it did remove violated its rules in the following areas: impersonation (66%), harassment (16%), and hateful conduct (12%).

This is an interesting shift on the mix from the last reported period when Twitter said content was removed for: harassment (37%), hateful conduct (35%), and impersonation (13%).

It’s difficult to interpret exactly what that development might mean. One possibility is that impersonation could cover disinformation agents, such as Kremlin bots, which Twitter has been suspending in recent months as part of investigations into election interference — an issue that’s been shown to be a problem across social media, from Facebook to Tumblr.

Governments may also have become more focused on reporting accounts to Twitter that they believe are wrappers for foreign agents to spread false information to try to meddle with democratic processes.

In January, for example, the UK government announced it would be setting up a civil service unit to combat state-led disinformation campaigns.

And removing an account that’s been identified as a fake — with the help of government intelligence — is perhaps easier for Twitter than judging whether a particular piece of robust speech might have crossed the line into harassment or hate speech.

Judging the health of conversations on its platform is also something the company recently asked outsiders to help it with. So it doesn’t appear overly confident in making those kind of judgement calls.

UK outs extremism blocking tool and could force tech firms to use it

The UK government’s pressure on tech giants to do more about online extremism just got weaponized. The Home Secretary has today announced a machine learning tool, developed with public money by a local AI firm, which the government says can automatically detect propaganda produced by the Islamic State terror group with “an extremely high degree of accuracy”.