Amnesty International used machine-learning to quantify the scale of abuse against women on Twitter

A new study by Amnesty International and Element AI puts numbers to a problem many women already know about: that Twitter is a cesspool of harassment and abuse. Conducted with the help of 6,500 volunteers, the study, billed by Amnesty International as “the largest ever” into online abuse against women, used machine-learning software from Element AI to analyze tweets sent to a sample of 778 women politicians and journalists during 2017. It found that 7.1%, or 1.1 million, of those tweets were either “problematic” or “abusive,” which Amnesty International said amounts to one abusive tweet sent every 30 seconds.

On an interactive website breaking down the study’s methodology and results, Amnesty International said many women either censor what they post, limit their interactions on Twitter, or just quit the platform altogether. “At a watershed moment when women around the world are using their collective power to amplify their voices through social media platforms, Twitter’s failure to consistently and transparently enforce its own community standards to tackle violence and abuse means that women are being pushed backwards towards a culture of silence,” stated the human rights advocacy organization.

Amnesty International, which has been researching abuse against women on Twitter for the past two years, signed up 6,500 volunteers for what it refers to as the “Troll Patrol” after releasing another study in March 2018 that described Twitter as a “toxic” place for women. The Troll Patrol’s volunteers, who come from 150 countries and range in age from 18 to 70 years old, received training on what constitutes a problematic or abusive tweet. Then they were shown anonymized tweets mentioning one of the 778 women and asked whether or not the tweets were problematic or abusive. Each tweet was shown to several volunteers. In addition, Amnesty International said “three experts on violence and abuse against women” also categorized a sample of 1,000 tweets to “ensure we were able to assess the quality of the tweets labelled by our digital volunteers.”

The study defined “problematic” tweets as those “that contain hurtful or hostile content, especially if repeated to an individual on multiple occasions, but do not necessarily meet the threshold of abuse,” while “abusive” tweets were those “that violate Twitter’s own rules and include content that promotes violence against or threatens people based on their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.”

In total, the volunteers analyzed 288,000 tweets sent between January and December 2017 to the 778 women studied, who included politicians and journalists across the political spectrum from the United Kingdom and United States. Politicians included members of the U.K. Parliament and the U.S. Congress, while journalists represented a diverse group of publications including The Daily Mail, The New York Times, The Guardian, The Sun, gal-dem, Pink News, and Breitbart.

Then a subset of the labelled tweets was processed using Element AI’s machine-learning software to extrapolate the analysis to the total of 14.5 million tweets that mentioned the 778 women during 2017. (Since tweets weren’t collected for the study until March 2018, Amnesty International notes that the scale of abuse was likely even higher, because some abusive tweets may have been deleted or made by accounts that were since suspended or disabled.) Element AI’s extrapolation produced the finding that 7.1% of tweets sent to the women were problematic or abusive, amounting to 1.1 million tweets in 2017.
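Element AI’s model itself isn’t public, but the extrapolation approach the study describes, training a classifier on the volunteer-labelled tweets and then applying it to the much larger unlabelled corpus to estimate the abusive fraction, can be sketched with a toy Naive Bayes text classifier. This is a hypothetical illustration of the technique, not the study’s actual method:

```python
import math
from collections import Counter, defaultdict

def train_nb(labeled):
    """Train a Naive Bayes model from (text, label) pairs,
    e.g. labels "ok" and "abusive"."""
    word_counts = defaultdict(Counter)  # per-label token counts
    label_counts = Counter()            # per-label document counts
    vocab = set()
    for text, label in labeled:
        tokens = text.lower().split()
        word_counts[label].update(tokens)
        label_counts[label] += 1
        vocab.update(tokens)
    return word_counts, label_counts, vocab

def classify(model, text):
    """Return the most probable label, using Laplace smoothing."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, float("-inf")
    for label in label_counts:
        lp = math.log(label_counts[label] / total)  # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for tok in text.lower().split():
            lp += math.log((word_counts[label][tok] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

def extrapolate_rate(model, corpus):
    """Fraction of an unlabelled corpus the model flags as abusive."""
    flagged = sum(1 for t in corpus if classify(model, t) == "abusive")
    return flagged / len(corpus)
```

On real data the labels would come from the 288,000 volunteer-labelled tweets, and a production system would use a far more capable model; the point is only that a classifier trained on a labelled sample can estimate a rate over a corpus millions of tweets larger.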

Black, Asian, Latinx, and mixed race women were 34% more likely to be mentioned in problematic or abusive tweets than white women. Black women in particular were especially vulnerable: they were 84% more likely than white women to be mentioned in problematic or abusive tweets. One in 10 tweets mentioning black women in the study sample was problematic or abusive, compared to one in 15 for white women.

“We found that, although abuse is targeted at women across the political spectrum, women of color were much more likely to be impacted, and black women are disproportionately targeted. Twitter’s failure to crack down on this problem means it is contributing to the silencing of already marginalized voices,” said Milena Marin, Amnesty International’s senior advisor for tactical research, in the statement.

Breaking down the results by profession, the study found that 7% of tweets that mentioned the 454 journalists in the study were either problematic or abusive. The 324 politicians surveyed were targeted at a similar rate, with 7.12% of tweets that mentioned them problematic or abusive.

Of course, findings from a sample of 778 journalists and politicians in the U.K. and U.S. are difficult to extrapolate to other professions, countries, or the general population. The study’s findings are important, however, because many politicians and journalists need to use social media in order to do their jobs effectively. Women, and especially women of color, are underrepresented in both professions, and many stay on Twitter simply to make a statement about visibility, even though it means dealing with constant harassment and abuse. Furthermore, Twitter’s API changes mean many third-party anti-bullying tools no longer work, as technology journalist Sarah Jeong noted on her own Twitter profile, and the platform has yet to come up with tools that replicate their functionality.

Amnesty International’s other research about abusive behavior towards women on Twitter includes a 2017 online poll of women in eight countries, and an analysis of abuse faced by female members of Parliament before the U.K.’s 2017 snap election. The organization said the Troll Patrol isn’t about “policing Twitter or forcing it to remove content.” Instead, the organization wants the platform to be more transparent, especially about how it uses machine-learning algorithms to detect abuse.

Because the largest social media platforms now rely on machine learning to scale their anti-abuse monitoring, Element AI also used the study’s data to develop a machine-learning model that automatically detects abusive tweets. For the next three weeks, the model will be available to test on Amnesty International’s website in order to “demonstrate the potential and current limitations of AI technology.” These limitations mean social media platforms need to fine-tune their algorithms very carefully in order to detect abusive content without also flagging legitimate speech.

“These trade-offs are value-based judgements with serious implications for freedom of expression and other human rights online,” the organization said, adding that “as it stands, automation may have a useful role to play in assessing trends or flagging content for human review, but it should, at best, be used to assist trained moderators, and certainly should not replace them.”

TechCrunch has contacted Twitter for comment.

Facebook’s got 99 problems but Trump’s latest “bias” tweet ain’t one

By any measure Facebook hasn’t had the best of years in 2018.

But while toxic problems keep piling up and, well, raining acidly down on the social networking giant — from election interference, to fake accounts, faulty metrics, security flaws, ethics failures, privacy outrages and much more besides — the silver lining of having a core business now widely perceived as hostile to democratic processes and civilized sentiment, and the tool of choice for shitposters agitating for hate and societal division, well, everywhere in the world, is that Facebook has frankly far more important things to worry about than the latest anti-tech-industry salvo from President Trump.

In an early morning tweet today, Trump (again) attacked what he dubbed anti-conservative “bias” in the digital social sphere — hitting out at not just Facebook but tech’s holy trinity of social giants, with a claim that “Facebook, Twitter and Google are so biased towards the Dems it is ridiculous!”

Time was when Facebook was so sensitive to accusations of internal anti-conservative bias that it fired a bunch of journalists it had contracted and replaced them with algorithms — which almost immediately pumped up a bunch of fake news. RIP irony.

Not today, though.

When asked if it had a response to Trump’s accusation of bias, a Facebook spokesperson told us: “We don’t have anything to add here.”

The brevity and alacrity of the response suggested the spokesperson had a really cheerful expression on their face when they typed it.

The relief of Facebook not having to give a shit this time was kinda palpable, even in pixel form.

It was also a far cry from the screeds the company routinely dispenses these days to try to muffle journalistic — and indeed political — enquiry.

Trump evidently doesn’t factor ‘bigly’ on Facebook’s oversubscribed risk-list.

Even though Facebook was the first name on the president’s (non-alphabetical) tech giant hit-list.

Still, Twitter appeared to have irked Trump more, as his tweet singled out the short-form platform — with an accusation that Twitter has made it “much more difficult for people to join [sic] @realDonaldTrump”. (We think by “join” he means follow. But we’re speculating wildly.)

This is perhaps why Twitter felt moved to provide a response to the claim of bias, albeit also without wasting a lot of words.

Here’s its statement:

Our focus is on the health of the service, and that includes work to remove fake accounts to prevent malicious behavior. Many prominent accounts have seen follower counts drop, but the result is higher confidence that the followers they have are real, engaged people.

Presumably the president failed to read our report, from July, when we trailed Twitter’s forthcoming spam purge, warning it would result in users with lots of followers taking a noticeable hit in the coming days. In a word: Sad.

Of course we also asked Google for a response to Trump’s bias claim. But just got radio silence.

In similar “bias” tweets from August, the company got a bigger Trump-lashing. And in a response statement at the time it told us: “We never rank search results to manipulate political sentiment.”

Google CEO Sundar Pichai has also just had to sit through some three hours of questions from Republicans in Congress on this very theme.

So the company probably feels it’s exhausted the political bias canard.

Even while, as the claims drone on and on, it might truly come to understand what it feels like to be stuck inside a filter bubble.

In any case there are far more pressing things to accuse Google’s algorithms of than being ‘anti-Trump’.

So it’s just as well it didn’t waste time on another presidential sideshow intended to distract from problems of Trump’s own making.

Jack Dorsey and Twitter ignored opportunity to meet with civic group on Myanmar issues

Responding to criticism from his recent trip to Myanmar, Twitter CEO Jack Dorsey said he’s keen to learn about the country’s racial tension and human rights atrocities, but it has emerged that both he and Twitter’s public policy team ignored an opportunity to connect with a key civic group in the country.

A loose group of six companies in Myanmar has engaged with Facebook in a bid to help improve the situation around usage of its services in the country — often with frustrating results — and key members of that alliance, including Omidyar-backed accelerator firm Phandeeyar, contacted Dorsey via Twitter DM and emailed the company’s public policy contacts when they learned that the CEO was visiting Myanmar.

The plan was to arrange a forum to discuss the social media concerns in Myanmar to help Dorsey gain an understanding of life on the ground in one of the world’s fastest-growing internet markets.

“The Myanmar tech community was all excited, and wondering where he was going,” Jes Kaliebe Petersen, the Phandeeyar CEO, told TechCrunch in an interview. “We wondered: ‘Can we get him in a room, maybe at a public event, and talk about technology in Myanmar or social media, whatever he is happy with?'”

The DMs went unread. In a response to the email, a Twitter staff member told the group that Dorsey was visiting the country strictly on personal time with no plans for business. The Myanmar-based group responded with an offer to set up a remote, phone-based briefing for Twitter’s public policy team with the ultimate goal of getting information to Dorsey and key executives, but that email went unanswered.

When we contacted Twitter, a spokesperson initially pointed us to a tweet from Dorsey in which he said: “I had no conversations with the government or NGOs during my trip.”

However, within two hours of our inquiry, a member of Twitter’s team responded to the group’s email in an effort to restart the conversation and set up a phone meeting in January.

“We’ve been in discussions with the group prior to your outreach,” a Twitter spokesperson told TechCrunch in a subsequent email exchange.

That statement is incorrect.

Still, on the bright side, it appears that the group may get an opportunity to brief Twitter on its concerns on social media usage in the country after all.

The micro-blogging service isn’t as well-used in Myanmar as Facebook, which has some 20 million monthly users and is practically the de facto internet, but there have been concerns in Myanmar. For one thing, there has been the development of a somewhat sinister bot army in Myanmar and other parts of Southeast Asia, while it remains a key platform for influencers and thought-leaders.

“[Dorsey is] the head of a social media company and, given the massive issues here in Myanmar, I think it’s irresponsible of him to not address that,” Petersen told TechCrunch.

“Twitter isn’t as widely used as Facebook but that doesn’t mean it doesn’t have concerns happening with it,” he added. “As we’d tell Facebook or any large tech company with a prominent presence in Myanmar, it’s important to spend time on the ground like they’d do in any other market where they have a substantial presence.”

The UN has concluded that Facebook plays a “determining” role in accelerating ethnic violence in Myanmar. While Facebook has tried to address the issues, it hasn’t committed to opening an office in the country and it released a key report on the situation on the eve of the U.S. mid-term elections, a strategy that appeared designed to deflect attention from the findings. All of which suggests that it isn’t really serious about Myanmar.

A popular ‘boomoji’ app exposed millions of users’ contact lists and location data

Popular animated avatar creator app Boomoji, with more than five million users across the world, exposed the personal data of its entire user base after it failed to put passwords on two of its internet-facing databases.

The China-based app developer left the ElasticSearch databases online without passwords — a U.S.-based database for its international customers, and a Hong Kong-based database containing mostly Chinese users’ data, kept there in an effort to comply with China’s data security laws, which require Chinese citizens’ data to be stored on servers inside the country.

Anyone who knew where to look could access, edit or delete the databases using their web browser. And, because the databases were listed on Shodan, a search engine for exposed devices and databases, they were easily found with a few keywords.
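By default an ElasticSearch node answers plain HTTP requests on its REST port (9200), which is why a web browser, or a few lines of code, is all it takes to read an unsecured instance. A minimal sketch of such an exposure check follows; the function name is our own, and probing should only be done against systems you own or are authorized to test:

```python
import json
from urllib import request

def check_elasticsearch_exposure(host, port=9200, timeout=5):
    """Probe an ElasticSearch node's root REST endpoint with no credentials.

    Returns the cluster-info dict if the node answers unauthenticated
    (i.e. it is exposed), or None if the connection fails or the node
    rejects the request (e.g. it demands authentication).
    """
    url = f"http://{host}:{port}/"
    try:
        with request.urlopen(url, timeout=timeout) as resp:
            return json.load(resp)
    except (OSError, ValueError):
        # OSError covers refused/timed-out connections and HTTP errors
        # such as 401 Unauthorized; ValueError covers non-JSON replies.
        return None
```

A node that demands authentication responds with an HTTP error, so this check reports None for both secured and unreachable hosts, and a cluster-info payload only for genuinely open ones.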

After TechCrunch reached out, Boomoji pulled the two databases offline. “These two accounts were made by us for testing purposes,” said an unnamed Boomoji spokesperson in an email.

But that isn’t true.

The databases contained records on all of the company’s iOS and Android users — some 5.3 million as of this week. Each record contained a username, gender, country, and phone type.

Each record also included a user’s unique Boomoji ID, which was linked to other tables in the database. Those other tables recorded whether a user attends school and, if so, which one — a feature Boomoji touts as a way for users to get in touch with their fellow students. The unique ID was also linked to the precise geolocation of more than 375,000 users who had allowed the app to know their location at any given time.

Worse, the database contained every phone book entry of every user who had allowed the app access to their contacts.

One table had more than 125 million contacts, including their names (as written in a user’s phone book) and their phone numbers. Each record was linked to a Boomoji user’s unique ID, making it relatively easy to know whose contact list belonged to whom.

Even if you never used the app, anyone who had your phone number stored on their device and used the app more than likely uploaded your number to Boomoji’s database. To our knowledge, there’s no way to opt out or have your information deleted.

Given Boomoji’s response, we verified the contents of the database by downloading the app on a dedicated iPhone, using a throwaway phone number and a few dummy but easy-to-search contact list entries. To find friends, the app matches your contacts against those registered with the app in its database. When we were prompted to allow the app access to our contacts list, the entire dummy contact list was uploaded instantly — and viewable in the database.

So long as the app was installed and had access to the contacts, new phone numbers would be automatically uploaded.
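The friend-finding step described above amounts to intersecting the uploaded phone book with the set of registered numbers. A hypothetical sketch of that server-side matching, not Boomoji’s actual implementation:

```python
def match_contacts(uploaded_numbers, registered_numbers):
    """Return the uploaded phone numbers that belong to registered users."""
    def normalize(number):
        # Keep only digits and a leading '+', so formatting differences
        # (spaces, dashes, parentheses) don't prevent a match.
        return "".join(ch for ch in number if ch.isdigit() or ch == "+")

    registered = {normalize(n) for n in registered_numbers}
    return [n for n in uploaded_numbers if normalize(n) in registered]
```

Because the matching works on raw, normalized numbers, a contact table stored this way, in plaintext and keyed to user IDs, is trivially easy to correlate, which is exactly what made Boomoji’s exposure so serious.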

None of the data was encrypted; all of it was stored in plaintext.

Although Boomoji is based in China, it claims to follow California state law, where data protection and privacy rules are some of the strongest in the U.S. We asked Boomoji if it has informed or plans to inform California’s attorney general of the exposure, as it’s required to by state law, but the company did not answer.

Given the vast amount of European users’ information in the database, the company may also face penalties under the EU’s General Data Protection Regulation, which can impose fines of up to four percent of the company’s global annual revenue for serious breaches.

But given its China-based presence, it’s not clear what actionable repercussions the company could face.

This is the latest in a series of exposures involving ElasticSearch, a popular open-source search and database software. In recent weeks, several high-profile data exposures have been reported as a result of companies’ failure to practice basic data security measures — including Urban Massage exposing its own customer database, Mindbody-owned FitMetrix forgetting to put a password on its servers, and Voxox, a communications company, leaking phone numbers and two-factor codes of millions of unsuspecting users.


Got a tip? You can send tips securely over Signal and WhatsApp to +1 646-755–8849. You can also send PGP email with the fingerprint: 4D0E 92F2 E36A EC51 DAAE 5D97 CB8C 15FA EB6C EEA5.

Changing consumer behavior is the key to unlocking billion dollar businesses

In the summer of 2012, I had just learned of a new service where a driver would pick you up in their own car, not a taxi or licensed town car. You’d be able to recognize the car by the pink mustache strapped to the front. I quickly downloaded the new app called Lyft and, intrigued, started to share it with others around the Airbnb offices.

Almost everyone gave me the same response: “I would never use it.” I asked why. “Well, I wouldn’t feel comfortable getting into someone else’s car.” I said, “Wait a minute, you are comfortable allowing others into your home and staying in others’ homes while you travel, but you don’t want to get into someone else’s car?” The reply was always a version of “Yeah, I guess that’s it—a car is different than a home.”

I was dumbfounded. Here was a collection of adventurous individuals — who spent their days at Airbnb expanding the boundaries of what it means to trust another person — but they were stuck on the subtle behavior change of riding shotgun with a stranger. I then had another quick reaction: this product was going to be huge.

Behavior Shifts in Consumer Internet

Truly transformative consumer products require a behavior shift. Think back to the early days of the internet. Plenty of people said they would never put their credit card credentials online. But they did, and that behavior shift allowed e-commerce to flourish, creating the likes of Amazon. Fast forward to the era when Myspace, Facebook, and other social networks were starting out. Again, individuals would commonly say that they would never put their real names or photos of themselves online. It required only one to two years before the shift took hold and the majority of the population created social media profiles. The next wave included sharing-economy companies like Airbnb, Lyft, and Uber, prompting individuals to proclaim that they would never stay in someone else’s home or get into their car. In short order, times changed and those behaviors are now so commonplace, these companies are transforming how people travel and move about the world.

The behavior shifts were a change in socially accepted norms and previously learned behavior. They alone don’t create stratospheric outcomes, but they do signal that there could be something special at play.

Build an Enhanced Experience

Still, just because a product creates a behavior shift does not mean that it will be successful. Often, though a handful of loyal users may love them, there is ultimately no true advantage to these products or services.

One prime example comes to mind: Blippy. In late 2009, the team built a product to livestream a user’s credit card transactions. It would show the purchase details to the public, pretty much anyone on the internet, unlocking a new data stream. It was super interesting and definitely behavior shifting. This was another case where many people were thinking, “Wow, I would never do that,” even as others were happily publishing their credit card data. Ultimately there was little consumer value created, which led Blippy to fold. The founders have since gone on to build other interesting startups.

In successful behavior-shifting products, the shift leads to a better product, unlocking new types of online interactions and sometimes offline activities in the real world. For instance, at Airbnb the behavior shift of staying in someone else’s home created a completely new experience that was 1) cheaper, 2) more authentic, and 3) unique. Hotels could not compete, because their cost structure was different, their rooms were homogenized, and the hotel experience was commonplace. The behavior shift enabled a new product experience. You can easily flip this statement, too: a better experience enabled the behavior shift. Overall, the benefits of the new product were far greater than the discomfort of adopting new behavior.

Revolutionary products succeed when they deliver demonstrable value to their users. The fact that a product creates a behavior shift is clearly not enough. It must create enormous value to overcome the initial skepticism. When users get over this hurdle, though, they will be extremely bought in, commonly becoming evangelists for the product.

Unlock Greenfield Opportunity

One key benefit of a behavior-shifting product is that it commonly creates a new market where there is no viable competition. Even in cases where several innovative players crop up at the same time, they’re vying for market share in a far more favorable environment, not trying to unseat entrenched corporations. The opportunity then becomes enormous, as the innovators can capture the vast majority of the market.

Other times, the market itself isn’t new, but the way the product or service operates in it is. Many behavior-shifting products were created in already enormous markets, but they shifted the definition of those markets. For instance, e-commerce is an extension of the regular goods market, which is in the trillions. Social media advertising is an extension of online advertising, which is in the hundreds of billions. Companies that innovated within those markets created new greenfield opportunities but also continued to grow the existing market pie and take market share away from the incumbents. The innovators retrain the consumer to expect more, forcing the incumbents to respond to a new paradigm.


Shape the Future

A behavior shift also allows the innovator to shape the future by creating a new product experience and pricing structure.

When it comes to product experience, there are no prior mental constructs. This is a huge advantage to product development, as it allows teams to be as creative as possible. For instance, the addition of ratings in Uber’s and Lyft’s products changed the dynamic between driver and rider. Taxi drivers and passengers could be extremely rude to each other. Reviews have altered that experience and made rudeness an edge case, as there are ramifications to behaving badly. Taxis can’t compete with this seemingly small innovation because there is no mechanism to do so. They can’t enhance quality of interaction without taking the more manual approach of driver education.

Another benefit to the innovator is that they can completely change the economics of the transaction, shaping the future of the market. Amazon dictated a new shopping experience with online purchasing, avoiding the costs of a brick-and-mortar location. They could undercut pricing across the board, focusing on scale instead of margin per product. This shifted the business model of the market, forcing others to follow suit. In many cases, that shift ultimately eroded the competition’s existing economic structure, making it extremely challenging for them to participate in the new model.

Expect Unintended Consequences

It can be difficult to imagine at the outset, but if your product is encouraging massive behavior shifts, you will undoubtedly encounter many unintended consequences along the way. It is easy to brush off a problem you did not directly and intentionally create. But as the social media companies are learning today, very few problems go away by ignoring them. It is up to you to address these challenges, even if they are an unintended byproduct.

One of the most common unintended consequences nearly all behavior-shifting companies will run into is government regulation. Regulation is created to support the world as it is today. When you introduce a behavior shift into society, you will naturally be operating outside of previously created societal frameworks. The sharing-economy companies like Airbnb and Uber are prime examples. They push the boundaries of land use regulation and employer-employee relationships and aggravate unions.

I want to emphasize that you should not ignore such matters or think that their regulation is silly. Regulation serves a purpose. Startups must work with regulators to help define new policy structures, and governments must be open to innovation. It’s a two-way street, and everyone wins when we work together.

What’s Next

My advice is to start by thinking about existing categories that represent people’s biggest or most frequent expenditures. The amount of money you spend on your home, transportation, and clothes, for example, is enormous. Is there an opportunity to grow and capture part of these markets by upending old commercial models and effecting a behavior shift?

Scooter networks are a real-time example of a behavior-shifting innovation that is just getting going. They have the same explosive opportunity as prior game-changing innovations. There are still many individuals who state that they will never commute on a scooter. But applying this framework tells me that it is just a matter of time before they are more widely adopted, as the technology keeps evolving and maturing.

There is no magical formula for uncovering massive, behavior-shifting products. But if you come up with an innovative idea, and everyone initially tells you that they would never use it, think a little harder to make sure they are right…

Pew: Social media for the first time tops newspapers as a news source for US adults

It’s not true that everyone gets their news from Facebook and Twitter. But it is now true that more U.S. adults get their news from social media than from print newspapers. According to a new report from Pew Research Center out today, social media has for the first time surpassed newspapers as a preferred source of news for American adults. However, social media is still far behind other traditional news sources, like TV and radio, for example.

Last year, the portion of those who got their news from social media was around equal to those who got their news from print newspapers, Pew says. But in its more recent survey conducted from July 30 through August 12, 2018, that had changed.

Now, one-in-five U.S. adults (20 percent) are getting news from social media, compared with just 16 percent of those who get news from newspapers, the report found. (Pew had asked respondents if they got their news “often” from the various platforms.)

The change comes at a time when newspaper circulation is on the decline and its popularity as a news medium is fading, particularly with younger generations. In fact, the report noted that print remains popular today only with the 65-and-up crowd, where 39 percent get their news from newspapers. By comparison, no more than 18 percent of any other age group does.

While the decline of print has now given social media a slight edge, it’s nowhere near dominating other formats.

Instead, TV is still the most popular destination for getting the news, even though that’s been dropping over the past couple of years. TV is then followed by news websites, radio and then social media and newspapers.

But “TV news” doesn’t necessarily mean cable news networks, Pew clarifies.

In reality, local news is the most popular, with 37 percent getting their news there often. Meanwhile, 30 percent get cable TV news often and 25 percent watch the national evening news shows often.

However, if you look at the combination of news websites and social media together, a trend toward increasing news consumption from the web is apparent. Together, 43 percent of U.S. adults get their news from the web in some way, compared to 49 percent from TV.

There’s a growing age gap between TV and the web, too.

A huge majority (81 percent) of those 65 and older get news from TV, as do 65 percent of those ages 50 to 64. Meanwhile, only 16 percent of the youngest consumers — those ages 18 to 29 — get their news from TV. This is the group pushing forward the cord cutting trend, too — or more specifically, many of them are the “cord-nevers,” as they’re never signing up for pay TV subscriptions in the first place. So it’s not surprising they’re not watching TV news.

Plus, a meager 2 percent get their news from newspapers in this group.

This young demographic greatly prefers digital consumption, with 27 percent getting news from news websites and 36 percent from social media. That is to say, they’re four times as likely as those 65 and up to get news from social media.

Meanwhile, online news websites are the most popular with the 30 to 49-year-old crowd, with 42 percent saying they get their news often from this source.

Despite their preference for digital, younger Americans’ news consumption is better spread out across mediums, Pew points out.

“Younger Americans are also unique in that they don’t rely on one platform in the way that the majority of their elders rely on TV,” Pew researcher Elisa Shearer writes. “No more than half of those ages 18 to 29 and 30 to 49 get news often from any one news platform,” she says.

Yet another massive Facebook fail: Quiz app leaked data on ~120M users for years

Facebook knows the historical app audit it’s conducting in the wake of the Cambridge Analytica data misuse scandal is going to result in a tsunami of skeletons tumbling out of its closet.

It’s already suspended around 200 apps as a result of the audit — which remains ongoing, with no formal timeline announced for when the process (and any associated investigations that flow from it) will be concluded.

CEO Mark Zuckerberg announced the audit on March 21, writing then that the company would “investigate all apps that had access to large amounts of information before we changed our platform to dramatically reduce data access in 2014, and we will conduct a full audit of any app with suspicious activity”.

But you do have to question how much the audit exercise is, first and foremost, intended to function as PR damage limitation for Facebook’s brand — given the company’s relaxed response to a data abuse report concerning a quiz app with ~120M monthly users, which it received right in the midst of the Cambridge Analytica scandal.

Because despite Facebook being alerted about the risk posed by the leaky quiz apps in late April — via its own data abuse bug bounty program — they were still live on its platform a month later.

It took about a further month for the vulnerability to be fixed.

And, sure, Facebook was certainly busy over that period. Busy dealing with a major privacy scandal.

Perhaps the company was putting rather more effort into pumping out a steady stream of crisis PR — including taking out full page newspaper adverts (where it wrote that: “we have a responsibility to protect your information. If we can’t, we don’t deserve it”) — vs actually ‘locking down the platform’, per its repeat claims, even though the company’s long and rich privacy-hostile history suggests otherwise.

Let’s also not forget that, in early April, Facebook quietly confessed to a major security flaw of its own — when it admitted that an account search and recovery feature had been abused by “malicious actors” who, over what must have been a period of several years, had been able to surreptitiously collect personal data on a majority of Facebook’s ~2BN users — and use that intel for whatever they fancied.

So Facebook users already have plenty of reasons to doubt the company’s claims to be able to “protect your information”. But this latest data fail facepalm suggests it’s hardly scrambling to make amends for its own stinkingly bad legacy either.

Change will require regulation. And in Europe that has arrived, in the form of the GDPR.

Although it remains to be seen whether Facebook will face any data breach complaints in this specific instance, i.e. for not disclosing to affected users that their information was at risk of being exposed by the leaky quiz apps.

The regulation came into force on May 25 — and the javascript vulnerability was not fixed until June. So there may be grounds for concerned consumers to complain.

Which Facebook data abuse victim am I?

Writing in a Medium post, the security researcher who filed the report — self-styled “hacker” Inti De Ceukelaire — explains he went hunting for data abusers on Facebook’s platform after the company announced a data abuse bounty on April 10, as the company scrambled to present a responsible face to the world following revelations that a quiz app running on its platform had surreptitiously harvested millions of users’ data — data that had been passed to a controversial UK firm which intended to use it to target political ads at US voters.

De Ceukelaire says he began his search by noting down what third party apps his Facebook friends were using — finding quizzes were one of the most popular apps. Plus he already knew quizzes had a reputation for being data-suckers in a distracting wrapper. So he took his first ever Facebook quiz, from a brand called NameTests.com, and quickly realized the company was exposing Facebook users’ data to “any third-party that requested it”.

The issue was that NameTests was displaying the quiz taker’s personal data (such as full name, location, age, birthday) in a javascript file — thereby potentially exposing the identity and other data of logged-in Facebook users to any external website they happened to visit.

He also found it was providing an access token that allowed it to grant even more expansive data access permissions to third party websites — such as to users’ Facebook posts, photos and friends.

It’s not clear exactly why — but presumably relates to the quiz app company’s own ad targeting activities. (Its privacy policy states: “We work together with various technological partners who, for example, display advertisements on the basis of user data. We make sure that the user’s data is pseudonymised (e.g. no clear data such as names or e-mail addresses) and that users have simple rights of revocation at their disposal. We also conclude special data protection agreements with our partners, in which they commit themselves to the protection of user data.” — which sounds great until you realize its javascript was just leaking people’s personally identifiable data… [facepalm])

“Depending on what quizzes you took, the javascript could leak your facebook ID, first name, last name, language, gender, date of birth, profile picture, cover photo, currency, devices you use, when your information was last updated, your posts and statuses, your photos and your friends,” writes De Ceukelaire.
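De Ceukelaire’s description matches a classic JSONP-style leak: because the personal data was served inside an executable javascript file rather than as plain JSON, any third-party page could load it with a `<script>` tag and read whatever globals it set, with the victim’s cookies riding along on the request. A minimal sketch of the pattern (the variable names and field set here are illustrative, not NameTests’ actual code):

```javascript
// Hypothetical server response at a path like /user_data.js. Because it
// is executable javascript rather than plain JSON, any third-party page
// can load it via <script>, and the browser attaches the victim's
// cookies to that request, so the server personalizes the response.
const leakyResponse = `
  window.quizConfig = {
    user: {
      id: "100001234567890",
      first_name: "Jane",
      last_name: "Doe",
      birthday: "01/01/1990"
    }
  };
`;

// An attacker's page simply evaluates the script and reads the globals
// it sets (simulated here with a plain object standing in for window):
const fakeWindow = {};
new Function("window", leakyResponse)(fakeWindow);
console.log(fakeWindow.quizConfig.user.first_name); // prints "Jane"

// The safe pattern is to serve plain JSON instead: a cross-origin page
// cannot read a JSON response without an explicit CORS grant from the
// server, so including the URL in a <script> tag yields nothing usable.
```

The core design flaw is that wrapping data in javascript opts it out of the browser’s same-origin protections entirely, which is exactly why JSONP has fallen out of use in favor of CORS-controlled JSON endpoints.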

He reckons people’s data had been being publicly exposed since at least the end of 2016.

On Facebook, NameTests describes its purpose thusly: “Our goal is simple: To make people smile!” — adding that its quizzes are intended as a bit of “fun”.

It doesn’t shout so loudly that the ‘price’ for taking one of its quizzes, say to find out what Disney princess you ‘are’, or what you could look like as an oil painting, is not only that it will suck out masses of your personal data (and potentially your friends’ data) from Facebook’s platform for its own ad targeting purposes but was also, until recently, that your and other people’s information could have been exposed to goodness knows who, for goodness knows what nefarious purposes… 

The Facebook-Cambridge Analytica data misuse scandal has underlined that ostensibly frivolous social data can end up being repurposed for all sorts of manipulative and power-grabbing purposes. (And not only can it end up that way — quizzes are deliberately built to be data-harvesting tools… So think of that the next time you get a ‘take this quiz’ notification asking ‘what is in your fact file?’ or ‘what has your date of birth imprinted on you’? And hope ads are all you’re being targeted with… )

De Ceukelaire found that NameTests would still reveal Facebook users’ identity even after its app was deleted.

“In order to prevent this from happening, the user would have had to manually delete the cookies on their device, since NameTests.com does not offer a log out functionality,” he writes.
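The missing logout De Ceukelaire points to is a small thing to implement: the server just has to overwrite its identifying cookie with one that has already expired, which browsers treat as an instruction to delete it. A sketch of that functionality, assuming a typical cookie-session setup (the cookie name and helper function are hypothetical, not NameTests’ actual code):

```javascript
// Sketch of the logout functionality NameTests.com reportedly lacked:
// building a Set-Cookie header that overwrites the session cookie with
// an empty value and an Expires date in the past, so the browser
// discards it and the site can no longer identify the user.
function logoutCookieHeader(cookieName) {
  return `${cookieName}=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; ` +
         `Path=/; HttpOnly; Secure`;
}

// A server would send this on a /logout route, e.g.:
//   Set-Cookie: nt_session=; Expires=Thu, 01 Jan 1970 00:00:00 GMT; ...
console.log(logoutCookieHeader("nt_session"));
```

Without such an endpoint, the only remedy left to users is exactly what De Ceukelaire describes: manually clearing cookies on their own device.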

“I would imagine you wouldn’t want any website to know who you are, let alone steal your information or photos. Abusing this flaw, advertisers could have targeted (political) ads based on your Facebook posts and friends. More explicit websites could have abused this flaw to blackmail their visitors, threatening to leak your sneaky search history to your friends,” he adds, fleshing out the risks for affected Facebook users.

As well as alerting Facebook to the vulnerability, De Ceukelaire says he contacted NameTests — and they claimed to have found no evidence of abuse by a third party. They also said they would make changes to fix the issue.

We’ve reached out to NameTests’ parent company — a German firm called Social Sweethearts — for comment. Its website touts a “data-driven approach” — and claims its portfolio of products achieve “a global organic reach of several billion page views per month”.

After De Ceukelaire reported the problem to Facebook, he says he received an initial response from the company on April 30 saying they were looking into it. Then, hearing nothing for some weeks, he sent a follow up email, on May 14, asking whether they had contacted the app developers.

A week later Facebook replied saying it could take three to six months to investigate the issue (i.e. the same timeframe mentioned in their initial automated reply), adding they would keep him in the loop.

Yet at that time — which was a month after his original report — the leaky NameTests quizzes were still up and running,  meaning Facebook users’ data was still being exposed and at risk. And Facebook knew about the risk.

The next development came on June 25, when De Ceukelaire says he noticed NameTests had changed the way they process data to close down the access they had been exposing to third parties.

Two days later Facebook also confirmed the flaw in writing, admitting: “[T]his could have allowed an attacker to determine the details of a logged-in user to Facebook’s platform.”

It also told him it had confirmed with NameTests the issue had been fixed. And its apps continue to be available on Facebook’s platform — suggesting Facebook did not find the kind of suspicious activity that has led it to suspend other third party apps. (At least, assuming it conducted an investigation.)

Facebook paid out two $4,000 bounties to a charity under the terms of its data abuse bug bounty program — and per De Ceukelaire’s request.

We asked it what took it so long to respond to the data abuse report, especially given the issue was so topical when De Ceukelaire filed the report. But Facebook declined to answer specific questions.

Instead it sent us the following statement, attributed to Ime Archibong, its VP of product partnerships:

A researcher brought the issue with the nametests.com website to our attention through our Data Abuse Bounty Program that we launched in April to encourage reports involving Facebook data. We worked with nametests.com to resolve the vulnerability on their website, which was completed in June.

Facebook also claims it received De Ceukelaire’s report on April 27, rather than April 22, as he recounts it. Though it’s possible the former date is when Facebook’s own staff retrieved the report from its systems. 

Beyond displaying a disturbingly relaxed attitude to other people’s privacy — which risks getting Facebook into regulatory trouble, given GDPR’s strict requirements around breach disclosure, for example — the other core issue of concern here is the company’s apparent failure to enforce its own developer policy. 

The underlying issue is whether or not Facebook performs any checks on apps running on its platform. It’s no good having T&Cs if you don’t have any active processes to enforce your T&Cs. Rules without enforcement aren’t worth the paper they’re written on.

Historical evidence suggests Facebook did not actively enforce its developer T&Cs — even if it’s now “locking down the platform”, as it claims, as a result of so many privacy scandals. 

The quiz app developer at the center of the Cambridge Analytica scandal, Aleksandr Kogan — who harvested and sold/passed Facebook user data to third parties — has accused Facebook of essentially not having a policy. He contends it is therefore Facebook that is responsible for the massive data abuses that have played out on its platform — only a portion of which have so far come to light. 

Fresh examples such as NameTests’ leaky quiz apps merely bolster the case Kogan made for Facebook being the guilty party where data misuse is concerned. After all, if you built some stables without any doors at all would you really blame your horses for bolting?

CarBlip’s car buying app raises $2 million

CarBlip, a new car-buying mobile application that’s aiming to compete with services like Wyper, has raised $2 million in a new round of financing.

The investment was led by Nordic Eye Venture Capital with participation from the startup studio and seed investment firm, Science.

CarBlip chief executive Brian Johnson said that the company’s main purpose was to bring the new-car buying experience online.

“One of the main things about why we started CarBlip is we wanted to circumvent the in-person negotiation process and avoid the influx of calls that a buyer gets,” says Johnson. 

The user just downloads an app and looks for the brands they want that are available in an already geo-located area, Johnson said. Shoppers can submit bids on a vehicle privately and receive counter-offers via the app. Then, when they’re ready, they can head down to a dealership to move forward with the purchase, Johnson said.

While Johnson doesn’t have much auto experience himself, co-founders Eric Brooks and Ken Lane both spent time in the car business, Johnson said. Brooks founded LA Car Connection, a platform for buying and leasing new cars, while Lane spent over a decade at Ford, according to the company’s website.

Like Wyper, CarBlip touts a Tinder-like interface that lets users swipe to select vehicles they’re interested in, but what Johnson says differentiates his business is the ability to negotiate for the vehicle on the platform. It’s a feature that’s bound to attract interest from dealerships because they’re pretty much assured a sale, Johnson said.

“We loved the value proposition that they were signing up dealers directly,” said Richard Sussman, the managing partner for Nordic Eye in the U.S.

This post has been updated to indicate that CarBlip is selling new cars. 


Security, privacy experts weigh in on the ICE doxxing

In what appears to be the latest salvo in a new, wired form of protest, developer Sam Lavigne posted code that scrapes LinkedIn to find Immigration and Customs Enforcement employee accounts. His code, basically a Python-based tool that scans LinkedIn for keywords, is gone from GitHub and GitLab, and Medium took down his original post. The CSV of the data is still available here and here and WikiLeaks has posted a mirror.

“I find it helpful to remember that as much as internet companies use data to spy on and exploit their users, we can at times reverse the story, and leverage those very same online platforms as a means to investigate or even undermine entrenched power structures. It’s a strange side effect of our reliance on private companies and semi-public platforms to mediate nearly all aspects of our lives. We don’t necessarily need to wait for the next Snowden-style revelation to scrutinize the powerful — so much is already hiding in plain sight,” said Lavigne.

Doxxing is the practice of using publicly available information to target someone online for abuse. Because we can now find out anything about anyone for a few dollars – a search for “background check” brings up dozens of paid services that can get you names and addresses in a second – scraping public data on LinkedIn seems comparatively easy and innocuous. That doesn’t make it legal.

“Recent efforts to outlaw doxxing at the national level (like the Online Safety Modernization Act of 2017) have stalled in committee, so it’s not strictly illegal,” said James Slaby, Security Expert at Acronis. “But LinkedIn and other social networks usually consider it a violation of their terms of service to scrape their data for personal use. The question of fairness is trickier: doxxing is often justified as a rare tool that the powerless can use against the powerful to call attention to perceived injustices.”

“The problem is that doxxing is a crude tool. The torrent of online ridicule, abuse and threats that can be heaped on doxxed targets by their political or ideological opponents can also rain down on unintended and undeserving targets: family members, friends, people with similar names or appearances,” he said.

The tool itself isn’t to blame. No one would fault a job seeker or salesperson who scraped LinkedIn for targeted employees of a specific company. That said, scraping and publicly shaming employees walks a thin line.

“In my opinion, the professor who developed this scraper tool isn’t breaking the law, as it’s perfectly legal to search the web for publicly available information,” said David Kennedy, CEO of TrustedSec. “This is known in the security space as ‘open source intelligence’ collection, and scrapers are just one way to do it. That said, it is concerning to see ICE agents doxxed in this way. I understand emotions are running high on both sides of this debate, but we don’t want to increase the physical security risks to our law enforcement officers.”

“The decision by Twitter, Github and Medium to block the dissemination of this information and tracking tool makes sense – in fact, law enforcement agents’ personal information is often protected. This isn’t going to go away anytime soon, it’s only going to become more aggressive, particularly as more people grow comfortable with using the darknet and the many available hacking tools for sale in these underground forums. Law enforcement agents need to take note of this, and be much more careful about what (and how often) they post online.”

Ultimately, doxxing is problematic. Because we place our information on public forums, there should be nothing to stop anyone from finding and posting it. However, the expectation that people will use our information for good and not evil is swiftly eroding. Today, doxxing is becoming dangerous, wrote one security researcher, David Kavanaugh.

“Going after the people on the ground is like shooting the messenger . Decisions are made by leadership and those are the people we should be going after. Doxxing is akin to a personal attack. Change policy, don’t ruin more lives,” he said.