Facebook will reveal who uploaded your contact info for ad targeting

Facebook’s crackdown on non-consensual ad targeting last year will finally produce results. In March, TechCrunch discovered Facebook planned to require advertisers to pledge that they had permission to upload someone’s phone number or email address for ad targeting. That tool debuted in June, though there was no verification process and Facebook simply took businesses at their word despite the financial incentive to lie. In November, Facebook launched a way for ad agencies and marketing tech developers to specify who they were buying promotions ‘on behalf of’. Soon that information will finally be revealed to users.

Facebook’s new Custom Audiences transparency feature shows who uploaded your contact info and when, and whether it was shared between brands and partners

Facebook previously only revealed what brand was using your contact info for targeting, not who uploaded it or when

Starting February 28th, Facebook’s “Why am I seeing this?” button in the drop-down menu of feed posts will reveal more than the brand that paid for the ad, some biographical details it targeted, and whether it uploaded your contact info. Facebook will start to show when your contact info was uploaded, whether it was uploaded by the brand or one of its agency or developer partners, and when access was shared between partners. A Facebook spokesperson tells me the goal is to keep giving people a better understanding of how advertisers use their information.

This new level of transparency could help users pinpoint what caused a brand to get ahold of their contact info. That might help them change their behavior to stay more private. The system could also help Facebook zero in on agencies or partners that constantly upload contact info and might not have obtained it legitimately. Apparently seeking not to dredge up old privacy problems, Facebook didn’t publish a blog post about the change but simply announced it in a Facebook post to the Facebook Advertiser Hub Page.

The move comes in the wake of Facebook attaching immediately visible “paid for by” labels to more political ads to defend against election interference. With so many users concerned about how Facebook exploits their data, the Custom Audiences transparency feature could provide a small boost of confidence at a time when people have little faith in the social network’s privacy practices.

NYC launches partnership network, “The Grid”, to help grow its urban tech ecosystem

The New York City Economic Development Corporation (NYCEDC) and CIV:LAB – a nonprofit dedicated to connecting urban tech leaders – have announced the launch of The Grid, a member-based partnership network for New York’s urban tech community. The goal of the network is to link organizations, academia and local tech leaders, in order to promote collaboration and the sharing of knowledge and resources.

In addition to connecting member companies and talent, The Grid will host various events, educational programs, and co-innovation projects, while hopefully improving access to investors as well as pilot program opportunities. The Grid is launching with over 70 member organizations – approved through an application and screening process – across various stages and sectors.

In recent years, the tech and startup scene in New York has notably ballooned – evolving from the Valley’s obscure younger sibling to one of the top cities for talent, entrepreneurship, and venture capital investment. And while the city has seen countless startups, VCs, accelerators, and other entrepreneurial resources set up shop within its borders, getting the right tools in place is only part of the battle.

New York wants to prove its initiatives are more than just “show-and-tell” projects, and city officials believe that building a truly sustainable innovation economy depends on all its local resources working in conjunction, allowing entrepreneurship to permeate every arm of commerce. With an institutionalized network like The Grid, New York hopes it can further fuse its pockets of innovation into one well-oiled machine, consistently producing transformative ideas.

“The Grid represents a promising new way for NYCEDC to work across sectors to strengthen collaboration and innovation, first in New York City and hopefully soon in many more cities across the country and around the world,” said NYCEDC President and CEO James Patchett in a statement. “It signals that New York City is leading with a new approach to technology and startup culture, with a real focus on diversity, inclusion, equity, and community.”

As one of the largest and most industrially diverse cities in the world, New York has naturally placed a heightened focus on the growing sector of “urban tech” – which has been broadly categorized as innovation focused on improving city functionality, equality or ease of living. According to NYCEDC, the urban tech space has seen nearly $80 billion in VC investment since 2016, with nearly 10% going to New York-based beneficiaries.

The launch of The Grid is part of an expansion of NYCEDC’s larger UrbanTech NYC program, which has already helped establish the New York innovation hubs New Lab, Urban Future Lab, and Company. Alongside the membership network and a new site for UrbanTech NYC, NYCEDC is also launching The Grid Academy, an adjacent academic group with the mission of creating applied R&D partnerships between local academic institutions and corporate sponsors. The expansion of UrbanTech NYC represents the latest of several initiatives NYCEDC is pursuing to develop the broader ecosystem, coming just months after the EDC announced the launch of Cyber NYC, a $30 million investment initiative focused on growing New York’s cybersecurity presence and infrastructure.

The group will be led by a steering committee that will guide decisions related to strategic priorities, funding, events, and communications. Members of the committee include some of The Grid’s largest government and corporate members including the Bronx Cooperative Development Initiative, the Downtown Brooklyn Partnership, Civic Hall, Company, New Lab, Urban Future Lab, Dreamit UrbanTech, URBAN-X, Urban.Us, Accenture, Samsung NEXT, Rentlogic, Smarter Grid Solutions, Civic Consulting USA, and the World Economic Forum.

“Since its early days, innovation has been part of the DNA that is New York City,” said Jeff Merritt, Head of IoT + Smart Cities at World Economic Forum. “Nowhere else in the world can you find an ecosystem that combines as many industries and nationalities. New York’s thriving urban technology community is a natural byproduct of what happens when you allow diversity, entrepreneurship and ambition to collide in one of the greatest cities in the world.” 

The Grid’s first meeting will be held on February 19th at Samsung NEXT’s New York HQ. Membership applications for The Grid are accepted on a rolling basis and can be found here on the UrbanTech NYC website.

Why no one really quits Google or Facebook

Another week, another set of scandals at Facebook and Google. This past week, my colleagues reported that Facebook and Google had abused Apple enterprise developer certificates in order to distribute info-scraping research apps, at times to underage users in the case of Facebook. Apple responded by cutting off both companies’ enterprise developer accounts, before shortly restoring them.

The media went into overdrive over the scandals, as predictable as the companies’ statements that they truly care about users and their privacy. But will anything change?

I think we know the answer to this question: no. And it is never going to change because the vast majority of users just don’t care one iota about privacy or these scandals.

Privacy advocates will tell you that the lack of a wide boycott against Google and particularly Facebook is symptomatic of a lack of information: if people really understood what was happening with their data, they would galvanize immediately for other platforms. Indeed, this is the very foundation of the GDPR policy in Europe: users should have a choice about how their data is used, and be fully informed of its uses in order to make the right decision for them.

I don’t believe more information would help, and I reject the mentality behind it. It’s reminiscent of the political policy expert who says that if only voters had more information — if they just understood the issue — they would change their mind about something where they are clearly in the “wrong.” It’s incredibly condescending, and obscures a far more fundamental fact about consumers: people know what they value, they understand it, and they are making an economic choice when they stick with Google or Facebook.

Alternatives exist for every feature and app offered by these companies, and they are not hard to find. You can use Signal for chatting, DuckDuckGo for search, FastMail for email, 500px or Flickr for photos, and on and on. Far from being shameless clones of their competitors, in many cases these products are even superior to their originals, with better designs and novel features.

And yet. When consumers start to think about the costs, they balk. There are sometimes the costs of the products themselves (FastMail is $30/year minimum, but really $50 a year or more if you want reasonable storage), but more important are the switching costs that come with using a new product. I have 2,000 contacts on Facebook Messenger — am I just supposed to text them all to use Signal from now on? Am I supposed to completely relearn a new photos app, when I am habituated to the taps required from years of practice on Instagram?

Surveillance capitalism has been in the news the past few weeks thanks to Shoshana Zuboff’s 704-page tome of a book “The Age of Surveillance Capitalism.” But surveillance capitalism isn’t a totalizing system: consumers do have choices here, at least when it comes to consumer apps (credit scores and the reporting bureaus are a whole other beast). There are companies that have even made privacy their distinguishing feature. And consumers respond pretty consistently: I will take free with surveillance over paid with privacy.

One of the lessons I have learned — perhaps the most important you can learn about consumer products — is just how much people are willing to give up for free things. They are willing to give up privacy for free email. They are willing to allow their stock broker to help others actively trade against them for a free stock brokerage account with free trading. People love free stuff, particularly when the harms are difficult to perceive.

This is not to say that Facebook and Google shouldn’t try to improve their shoddy records on privacy, or rebuild trust with users. Those consumers are always able to leave, and their sentiment should never be taken for granted. But after more than a decade of abuse, we should look deeper at our analysis and perhaps conclude that these issues aren’t abuse at all, but rather a bargain, a negotiation, and one that people are quite willing to live with.

China’s influence pushed MSCI to add shares to index

(Photo by China Photos/Getty Images)

MSCI runs some of the most important financial indexes in the world. Trillions of dollars of capital are pegged to these metrics, which is why changes to them can be so controversial. Few decisions by MSCI, though, have been as significant as the addition of Chinese “A-shares” to its emerging markets indexes last year, which for the first time added mainland Chinese stocks to these important benchmarks. Billions of dollars of capital was expected to flow to those stocks, as wealth managers matched their allocations to the updated indexes.

Now, we have learned just how much pressure MSCI faced in adding those shares. Mike Bird at the Wall Street Journal reports that China placed enormous pressure on MSCI to change its indexes, threatening to cut off its access to domestic wealth managers and stunt its growth in the number two economy. From the article:

MSCI’s discussions with several Chinese asset managers were abruptly curtailed in 2015 and 2016 after the firm didn’t add Chinese-listed stocks to the emerging-markets index following its midyear reviews, according to people close to or directly involved in the discussions. The Chinese firms communicated that they had been instructed by authorities to cut off negotiations with MSCI, the people said.

China’s two national stock exchanges also threatened to withdraw MSCI’s access to market pricing data, which the company provided to its customers all over the world, the people added. It was akin to “business blackmail,” said a person familiar with MSCI’s negotiations with Chinese regulatory authorities.

Companies the world over attempt to manipulate these indexes, particularly given the increasing amount of money flowing to ETFs and other index-backed funds. But few companies have the clout required to actually get MSCI to make changes that benefit them. China, with its huge market, clearly does.

MSCI is “now considering quadrupling China’s weighting in the emerging-markets index.” Maybe that’s objective and fair — after all, China is crucial for the global economy. With China’s meddling and MSCI’s capitulation though, one has to wonder how much is blackmail, and how much is financial science.

More links

TechCrunch is experimenting with new content forms. This is a rough draft of something new – provide your feedback directly to the author (Danny at danny@techcrunch.com) if you like or hate something here.

Share your feedback on your startup’s attorney

My colleague Eric Eldon and I are reaching out to startup founders and execs about their experiences with their attorneys. Our goal is to identify the leading lights of the industry and help spark discussions around best practices. If you have an attorney you thought did a fantastic job for your startup, let us know using this short Google Forms survey and also spread the word. We will share the results and more in the coming weeks.

This newsletter is written with the assistance of Arman Tabatabai from New York

Online platforms still not clear enough about hate speech takedowns: EC

In its latest monitoring report on a voluntary Code of Conduct on illegal hate speech, which platforms including Facebook, Twitter and YouTube signed up to in Europe back in 2016, the European Commission has said progress is being made on speeding up takedowns, but that tech firms are still lagging when it comes to providing feedback and transparency around their decisions.

Tech companies are now assessing 89% of flagged content within 24 hours, with 72% of content deemed to be illegal hate speech being removed, according to the Commission — compared to just 40% and 28% respectively when the Code was first launched more than two years ago.

However, it said today that platforms still aren’t giving users enough feedback on their reports, and has urged more transparency — pressing for progress “in the coming months” and warning it could still legislate for a pan-EU regulation if it believes that’s necessary.

Giving her assessment of how the (still) voluntary code on hate speech takedowns is operating at a press briefing today, commissioner Vera Jourova said: “The only real gap that remains is transparency and the feedback to users who sent notifications [of hate speech].

“On average about a third of the notifications do not receive a feedback detailing the decision taken. Only Facebook has a very high standard, sending feedback systematically to all users. So we would like to see progress on this in the coming months. Likewise the companies should be more transparent towards the general public about what is happening in their platforms. We would like to see them make more data available about the notices and removals.”

“The fight against illegal hate speech online is not over. And we have no signs that such content has decreased on social media platforms,” she added. “Let me be very clear: The good results of this monitoring exercise don’t mean the companies are off the hook. We will continue to monitor this very closely and we can always consider additional measures if efforts slow down.”

Jourova flagged additional steps taken by the Commission to support the overarching goal of clearing what she dubbed a “sewage of words” off of online platforms, such as facilitating data-sharing between tech companies and police forces to help investigations and prosecutions of hate speech purveyors move forward.

She also noted it continues to provide Member States’ justice ministers with briefings on how the voluntary code is operating, warning again: “We always discuss that we will continue but if it slows down or it stops delivering the results we will consider some kind of regulation.”

Germany passed its own social media hate speech takedown law, the so-called ‘NetzDG’, in 2017, with the rules coming fully into force at the start of 2018. The law provides for fines as high as €50M for companies that fail to remove illegal hate speech within 24 hours, and has led social media platforms like Facebook to plough greater resources into locally sited moderation teams.

In the UK, meanwhile, the government announced a plan last year to legislate around safety and social media, although it has yet to publish a White Paper setting out the detail of its policy plan.

Last week a UK parliamentary committee which has been investigating the impacts of social media and screen use among children recommended the government legislate to place a legal ‘duty of care’ on platforms to protect minors.

The committee also called for platforms to be more transparent, urging them to provide bona fide researchers with access to high quality anonymized data to allow for robust interrogation of social media’s effects on children and other vulnerable users.

Debate about the risks and impacts of social media platforms for children has intensified in the UK in recent weeks, following reports of the suicide of a 14-year-old schoolgirl — whose father blamed Instagram for exposing her to posts encouraging self harm, saying he had no doubt content she’d been exposed to on the platform had helped kill her.

During today’s press conference, Jourova was asked whether the Commission intends to extend the Code of Conduct on illegal hate speech to other types of content that’s attracting concern, such as bullying and suicide. But she said the executive body is not intending to expand into such areas.

She said the Commission’s focus remains on addressing content that’s judged illegal under existing European legislation on racism and xenophobia — saying it’s a matter for individual Member States to choose to legislate in additional areas if they feel a need.

“We are following what the Member States are doing because we see… to some extent a fragmented picture of different problems in different countries,” she noted. “We are focusing on what is our obligation to promote the compliance with the European law. Which is the framework decision against racism and xenophobia.

“But we have the group of experts from the Member States, in the so-called Internet forum, where we speak about other crimes or sources of hatred online. And we see the determination on the side of the Member States to take proactive measures against these matters. So we expect that if there is such a worrying trend in some Member State that will address it by means of their national legislation.”

“I will always tell you I don’t like the fragmentation of the legal framework, especially when it comes to digital because we are faced with, more or less, the same problems in all the Member States,” she added. “But it’s true that when you [take a closer look] you see there are specific issues in the Member States, also maybe related with their history or culture, which at some moment the national authorities find necessary to react on by regulation. And the Commission is not hindering this process.

“This is the sovereign decision of the Member States.”

Four more tech platforms joined the voluntary code of conduct on illegal hate speech last year — namely Google+, Instagram, Snapchat and Dailymotion — while French gaming platform Webedia (jeuxvideo.com) also announced its participation today.

Drilling down into the performance of specific platforms, the Commission’s monitoring exercise found that Facebook assessed hate speech reports in less than 24 hours in 92.6% of cases, and a further 5.1% in less than 48 hours. The corresponding figures for YouTube were 83.8% and 7.9%, and for Twitter 88.3% and 7.3%, respectively.

Instagram, meanwhile, assessed 77.4% of notifications in less than 24 hours, while Google+, which will in any case close to consumers this April, managed to assess just 60%.

In terms of removals, the Commission found YouTube removed 85.4% of reported content, Facebook 82.4% and Twitter 43.5% (the latter constituting a slight decrease in performance versus last year), while Google+ removed 80.0% of the content and Instagram 70.6%.

It argues that despite social media platforms removing illegal content “more and more rapidly”, as a result of the code, this has not led to an “over-removal” of content — pointing to variable removal rates as an indication that “the review made by the companies continues to respect freedom of expression”.

“Removal rates varied depending on the severity of hateful content,” the Commission writes. “On average, 85.5% of content calling for murder or violence against specific groups was removed, while content using defamatory words or pictures to name certain groups was removed in 58.5% of the cases.”

“This suggest that the reviewers assess the content scrupulously and with full regard to protected speech,” it adds.

It is also crediting the code with helping foster partnerships between civil society organisations, national authorities and tech platforms — on key issues such as awareness raising and education activities.

H1-B changes will simplify application process

The federal government yesterday published the final rule for changes to the H1-B visa program, which is one of the primary conduits for technical talent to come and work in the United States.

There are two key changes coming with the rule. First, the government will require applicants for an H1-B visa to electronically register with the immigration office for the H1-B lottery before they submit their applications or documentation.

Due to hard caps imposed by Congress on the number of workers who can be admitted under the program, tens of thousands of people apply for a visa each year but ultimately do not receive one. Under the current process, applicants must submit their entire applications, including supporting documentation, in order to enter a lottery run by USCIS, the immigration authority.

Last year, roughly 190,000 applicants applied for 85,000 total slots. That means 105,000 people put together complete applications but lost out on the lottery.

Under the new rule that will be in force for this year’s H1-B process, applicants will first register with USCIS electronically, which will process the lottery. If selected in the lottery, an applicant would then be invited to submit their application and supporting materials. The idea is that you only have to do all the work of applying when there is an actual slot available.

The change is likely to cut into the revenue of immigration attorneys, who today prepare full applications for all applicants. A typical H1-B visa application retainer for an attorney in Silicon Valley today runs in the low thousands of dollars, with companies picking up the tab. I am sure attorneys will still recommend doing some prep work, but the new rules should cut costs for employers.

The second change of the final rule has to do with how the lottery is conducted. Be very careful here, as the changes are somewhat subtle and there is a lot of malarkey being written across the internet about it.

Under the H1-B program, there are two pools of applicants: let’s call them the regular pool and the advanced degree holders pool. There is a cap of 65,000 for the regular pool, and 20,000 for the advanced degree pool, which is limited to applicants holding a master’s degree or better.

In today’s process, advanced degree applicants first go through the lottery of the advanced degree pool, and if they fail, they get added to the regular pool for the second lottery. In the new process just confirmed by USCIS, that process is inverted: the regular pool lottery will be run first with all applicants, and then the advanced degree pool will happen second with advanced degree applicants who failed in the first lottery.

What does that mean for applicants? Well, we have to do a bit of table napkin probability math to understand* (feel free to skip ahead if you just want the answer).

Using last year’s numbers, there were 95,885 advanced degree applicants for 20,000 spots, so roughly a 20.86% chance of receiving a visa. That means the 75,885 advanced degree applicants who lost out were then added to the regular pool of 94,213 applicants. That’s 170,098 applicants for 65,000 visas, or roughly a 38.21% chance of getting a visa. Across the two lotteries then, advanced degree holders statistically would have gotten 20,000 visas from the first lottery, and then 38.21% of 75,885, or 28,998 visas, from the regular pool lottery. So an advanced degree holder had a 51.1% chance of getting an H1-B visa, compared to 38.21% for regular pool applicants.

Those are the old probabilities, so let’s see how reversing the sequence of lotteries changes them. Now, 95,885 advanced degree holders join 94,213 regular applicants for 65,000 spots, for a success rate of 34.19%. That means 32,786 advanced degree holders will be successful in the regular pool. From there, the 63,099 advanced degree applicants who were not successful get to go through the advanced degree lottery for 20,000 spots, a success rate of 31.70%. Combined then, you have 20,000 + 32,786 = 52,786 successful advanced degree holders out of 95,885, for a combined statistical success rate of 55.05%.

Net-net, the change in the lottery sequence means that advanced degree holders would have been successful 55.05% of the time last year, compared with 51.1% under the previous system. For regular applicants, who only ever enter the regular pool, the success rate declines from 38.21% to 34.19%.
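The back-of-the-napkin arithmetic above can be sketched in a few lines of Python. This is only a sanity check of the expected-value math using last year’s applicant counts quoted in the text; it models average outcomes, not the actual random draw USCIS runs.

```python
# Expected-value sketch of the H1-B lottery under both orderings.
# Applicant counts are last year's figures cited above.
ADV = 95_885      # advanced degree applicants
REG = 94_213      # regular pool applicants
ADV_CAP = 20_000  # advanced degree pool visas
REG_CAP = 65_000  # regular pool visas

def old_order():
    """Advanced degree lottery runs first; its losers join the regular pool."""
    p_adv_first = ADV_CAP / ADV                # ~20.86%
    adv_losers = ADV - ADV_CAP                 # 75,885
    p_reg = REG_CAP / (adv_losers + REG)       # ~38.21%, also the regular applicants' odds
    # combined chance for an advanced degree holder across both lotteries
    p_adv_total = p_adv_first + (1 - p_adv_first) * p_reg
    return p_adv_total, p_reg

def new_order():
    """Regular lottery runs first with everyone; advanced lottery runs second."""
    p_reg = REG_CAP / (ADV + REG)              # ~34.19%, the regular applicants' odds
    adv_losers = ADV * (1 - p_reg)             # ~63,099 in expectation
    p_adv_second = ADV_CAP / adv_losers        # ~31.70%
    p_adv_total = p_reg + (1 - p_reg) * p_adv_second
    return p_adv_total, p_reg

print(old_order())  # roughly (0.511, 0.382)
print(new_order())  # roughly (0.550, 0.342)
```

Running the numbers both ways reproduces the headline figure: an advanced degree holder’s combined odds rise from roughly 51.1% to 55.05% under the new sequence.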

So to be accurate in language, I would say that USCIS is (from a statistical point of view) “placing an additional emphasis” on advanced degree holders. It’s a meaningful adjustment if you are applying of course, but ultimately nothing has changed since immigration priorities are written into the law and the executive branch doesn’t have much flexibility to change these systems.

(*One side note: the probability math is “rough” because the H1-B program has a variety of small preferences and set-asides that make the math unique for each person. Citizens of Chile and Singapore get special treatment, and if you apply to work in Guam and a few other territories, you also have your own special process.)

Talking about borders: Huawei and smartphone privacy

The Huawei logo is seen in the center of Warsaw, Poland

(Photo by Jaap Arriens/NurPhoto via Getty Images)

The U.S., like many countries around the world, doesn’t provide a lot of privacy rights at the border. The country can scan the electronic devices of any traveler and save files and other data in those sweeps, and such tactics are increasingly common, much to the chagrin of privacy advocates like the ACLU.

But these sweeps can have a benefit when it comes to advancing an international investigation. The U.S. Department of Justice this week charged Huawei CFO Meng Wanzhou with a variety of crimes, including bank fraud and wire fraud, in connection with Huawei’s alleged breach of U.S. sanctions on Iran.

From the indictment, some of the key evidence for the case comes from a sweep of Meng’s smartphone while she passed through JFK Airport, where border officials captured Huawei’s talking points about the Iran / Skycom situation. From the indictment, “When she entered the United States, MENG was carrying an electronic device that contained a file in unallocated space—indicating that the file may have been deleted […]”

As with debates over end-to-end encryption, there are complexities to the level of privacy that should be offered at national borders. While the general right to privacy should be protected, law enforcement should also have the tools it needs to stop crimes within a proper due process system.

Talking about borders: Brexit and manufacturing scale

(Photo by Dan Kitwood/Getty Images)

I talked about manufacturing scale yesterday in the context of Foxconn’s multiple shutdowns of its factories in Wisconsin and Guangzhou this week. Apple isn’t the only one failing to find a screw these days — now the entirety of Britain’s industrial base is worried about finding components.

Bloomberg noted that British “Companies’ inventory holdings grew in January at the quickest rate in the 27-year history of IHS Markit’s survey, the group said in a report Friday.” Companies are stockpiling everything from screws and parts to medications as the risk of a no-deal Brexit increases after Parliament has repeatedly struck down plans for Britain’s withdrawal from the European Union.

Stockpile as much as you want, but part of China’s success over the past three decades of reform and opening up has been making its borders, customs, and ports some of the most efficient in the world. If Britain wants to compete, it needs to do the same.

TechCrunch is experimenting with new content forms. This is a rough draft of something new – provide your feedback directly to the author (Danny at danny@techcrunch.com) if you like or hate something here.

Share your feedback on your startup’s attorney

My colleague Eric Eldon and I are reaching out to startup founders and execs about their experiences with their attorneys. Our goal is to identify the leading lights of the industry and help spark discussions around best practices. If you have an attorney you thought did a fantastic job for your startup, let us know using this short Google Forms survey and also spread the word. We will share the results and more in the coming weeks.

What’s Next

  • More work on societal resilience
  • I’m reading a Korean novel called The Human Jungle by Cho Chongnae that places a multi-national cast of characters in China’s economy. It’s been a great read a quarter of the way in.

This newsletter is written with the assistance of Arman Tabatabai from New York

We dismantle Facebook’s memo defending its Research data-grab

Facebook published an internal memo today trying to minimize the morale damage of TechCrunch’s investigation that revealed it’d been paying people to suck in all their phone data. Attained by Business Insider’s Rob Price, the memo from Facebook’s VP of production engineering and security Pedro Canahuati gives us more detail about exactly what data Facebook was trying to collect from teens and adults in the US and India. But it also tries to claim the program wasn’t secret, wasn’t spying, and that Facebook doesn’t see it as a violation of Apple’s policy against using its Enterprise Certificate system to distribute apps to non-employees — despite Apple punishing it for the violation.

Here we lay out the memo with section by section responses to Facebook’s claims challenging TechCrunch’s reporting. Our responses are in bold and we’ve added images.

Memo from Facebook VP Pedro Canahuati

APPLE ENTERPRISE CERTS REINSTATED

Early this morning, we received agreement from Apple to issue a new enterprise certificate; this has allowed us to produce new builds of our public and enterprise apps for use by employees and contractors. Because we have a few dozen apps to rebuild, we’re initially focusing on the most critical ones, prioritized by usage and importance: Facebook, Messenger, Workplace, Work Chat, Instagram, and Mobile Home.

New builds of these apps will soon be available and we’ll email all iOS users for detailed instructions on how to reinstall. We’ll also post to iOS FYI with full details.

Meanwhile, we’re expecting a follow-up article from the New York Times later today, so I wanted to share a bit more information and background on the situation.

What happened?

On Tuesday TechCrunch reported on our Facebook Research program. This is a market research program that helps us understand consumer behavior and trends to build better mobile products.

TechCrunch implied we hid the fact that this is by Facebook – we don’t. Participants have to download an app called Facebook Research App to be involved in the study. They also characterized this as “spying,” which we don’t agree with. People participated in this program with full knowledge that Facebook was sponsoring this research, and were paid for it. They could opt out at any time. As we built this program, we specifically wanted to make sure we were as transparent as possible about what we were doing, what information we were gathering, and what it was for — see the screenshots below.

We used an app that we built ourselves, which wasn’t distributed via the App Store, to do this work. Instead it was side-loaded via our enterprise certificate. Apple has indicated that this broke their Terms of Service, so it disabled our enterprise certificates, which allow us to install our own apps on devices outside of the official app store for internal dogfooding.

Author’s response: To start, “build better products” is a vague way of saying Facebook was determining what’s popular and buying or building it. Facebook has used competitive analysis gathered by its similar Onavo Protect app and Facebook Research app for years to figure out what apps were gaining momentum and either bring them in or box them out. Onavo’s data is how Facebook knew WhatsApp was sending twice as many messages as Messenger, and that it should invest $19 billion to acquire it.

Facebook claims it didn’t hide the program, but it was never formally announced like every other Facebook product. There were no Facebook Help pages, blog posts, or support info from the company. It used intermediaries Applause (which owns uTest) and CentreCode (which owns Betabound) to run the program under names like Project Atlas and Project Kodiak. Users only found out Facebook was involved once they started the sign-up process and signed a non-disclosure agreement prohibiting them from discussing it publicly.

TechCrunch has reviewed communications indicating Facebook would threaten legal action if a user spoke publicly about being part of the Research program. While the program had run since 2016, it had never been reported on. We believe that these facts combined justify characterizing the program as “secret”.

The Facebook Research program was called Project Atlas until you signed up

How does this program work?

We partner with a couple of market research companies (Applause and CentreCode) to source and onboard candidates based in India and USA for this research project. Once people are onboarded through a generic registration page, they are informed that this research will be for Facebook and can decline to participate or opt out at any point. We rely on a 3rd party vendor for a number of reasons, including their ability to target a diverse and representative pool of participants. They use a generic initial registration page to avoid bias in the people who choose to participate.

After generic onboarding people are asked to download an app called the ‘Facebook Research App,’ which takes them through a consent flow that requires people to check boxes to confirm they understand what information will be collected. As mentioned above, we worked hard to make this as explicit and clear as possible.

This is part of a broader set of research programs we conduct. Asking users to allow us to collect data on their device usage is a highly efficient way of getting industry data from closed ecosystems, such as iOS and Android. We believe this is a valid method of market research.

Author’s response: Facebook claims it wasn’t “spying”, yet it never fully laid out the specific kinds of information it would collect. In some cases, descriptions of the app’s data collection power were included in merely a footnote. The program did not list the specific data types gathered, saying only that it would scoop up “which apps are on your phone, how and when you use them” and “information about your internet browsing activity”.

The parental consent form from Facebook and Applause lists none of the specific types of data collected or the extent of Facebook’s access. Under “Risks/Benefits”, the form states “There are no known risks associated with this project however you acknowledge that the inherent nature of the project involves the tracking of personal information via your child’s use of Apps. You will be compensated by Applause for your child’s participation.” It gives parents no information about what data their kids are giving up.

Facebook claims it uses third-parties to target a diverse pool of participants. Yet Facebook conducts other user feedback and research programs on its own without the need for intermediaries that obscure its identity, and only ran the program in two countries. It claims to use a generic signup page to avoid biasing who will choose to participate, yet the cash incentive and technical process of installing the root certificate also bias who will participate, and the intermediaries conveniently prevent Facebook from being publicly associated with the program at first glance. Meanwhile, other clients of the Betabound testing platform like Amazon, Norton, and SanDisk reveal their names immediately before users sign up.

Facebook’s ads recruiting teens for the program didn’t disclose its involvement

Did we intentionally hide our identity as Facebook?

No — The Facebook brand is very prominent throughout the download and installation process, before any data is collected. Also, the app name on the device appears as “Facebook Research” — see attached screenshots. We use third parties to source participants in the research study, to avoid bias in the people who choose to participate. But as soon as they register, they become aware this is research for Facebook.

Author’s response: Facebook here admits that users did not know Facebook was involved before they registered.

What data do we collect? Do we read people’s private messages?

No, we don’t read private messages. We collect data to understand how people use apps, but this market research was not designed to look at what they share or see. We’re interested in information such as watch time, video duration, and message length, not the actual content of videos, messages, stories or photos. The app specifically ignores information shared via financial or health apps.

Author’s response: We never reported that Facebook was reading people’s private messages, but that it had the ability to collect them. Facebook here admits that the program was “not designed to look at what they share or see”, but stops far short of saying that data wasn’t collected. Fascinatingly, Facebook reveals that it was closely monitoring how much time people spent on different media types.

Facebook Research abused the Enterprise Certificate system meant for employee-only apps

Did we break Apple’s terms of service?

Apple’s view is that we violated their terms by sideloading this app, and they decide the rules for their platform. We’ve worked with Apple to address any issues; as a result, our internal apps are back up and running. Our relationship with Apple is really important — many of us use Apple products at work every day, and we rely on iOS for many of our employee apps, so we wouldn’t put that relationship at any risk intentionally. Mark and others will be available to talk about this further at Q&A later today.

Author’s response: TechCrunch reported that Apple’s policy plainly states that the Enterprise Certificate program requires companies to “Distribute Provisioning Profiles only to Your Employees and only in conjunction with Your Internal Use Applications for the purpose of developing and testing” and that “You may not use, distribute or otherwise make Your Internal Use Applications available to Your Customers”. Apple took a firm stance in its statement that Facebook did violate the program’s policies, stating “Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple.”

Given Facebook distributed the Research apps to teenagers who never signed tax forms or formal employment agreements, they were obviously not employees or contractors, and most likely used some Facebook-owned service that qualifies them as customers. Also, I’m pretty sure you can’t pay employees in gift cards.

Apple reactivates Facebook’s employee apps after punishment for Research spying

After TechCrunch caught Facebook violating Apple’s employee-only app distribution policy to pay people for all their phone data, Apple invalidated the social network’s Enterprise Certificate as punishment. That deactivated not only this Facebook Research app VPN, but also all of Facebook’s internal iOS apps for workplace collaboration, beta testing, and even getting the company lunch or bus schedule. That threw Facebook’s offices into chaos yesterday morning. Now after nearly two work days, Apple has ended Facebook’s time-out and restored its Enterprise Certification. That means employees can once again access all their office tools, pre-launch test versions of Facebook and Instagram…and the lunch menu.

A Facebook spokesperson issued this statement to TechCrunch: “We have had our Enterprise Certification, which enables our internal employee applications, restored. We are in the process of getting our internal apps up and running. To be clear, this didn’t have an impact on our consumer-facing services.”


Meanwhile, TechCrunch’s follow-up report found that Google was also violating the Enterprise Certificate program with its own “market research” VPN app called Screenwise Meter that paid people to snoop on their phone activity. After we informed Google and Apple yesterday, Google quickly apologized and took down the app. But apparently in service of consistency, this morning Apple invalidated Google’s Enterprise Certificate too, breaking its employee-only iOS apps.

Google’s internal apps are still broken. Unlike Facebook, which has tons of employees on iOS, Google at least employs plenty of users of its own Android platform, so the disruption may have caused fewer problems in Mountain View than Menlo Park. “We’re working with Apple to fix a temporary disruption to some of our corporate iOS apps, which we expect will be resolved soon,” said a Google spokesperson. A spokesperson for Apple said: “We are working together with Google to help them reinstate their enterprise certificates very quickly.”

TechCrunch’s investigation found that the Facebook Research app not only installed an Enterprise Certificate on users’ phones and a VPN that could collect their data, but also demanded root network access that allows Facebook to man-in-the-middle their traffic and even decrypt secure transmissions. It paid users age 13 to 35 $10 to $20 per month to run the app so it could collect competitive intelligence on which apps to buy or copy. The Facebook Research app contained numerous code references to Onavo Protect, the app Apple banned and pushed Facebook to remove last August, yet Facebook kept on operating the Research data collection program.

When we first contacted Facebook, it claimed the Research app and its Enterprise Certificate distribution that sidestepped Apple’s oversight was in line with Apple’s policy. Seven hours later, Facebook announced it would shut down the Research app on iOS (though it’s still running on Android which has fewer rules). Facebook also claimed that “there was nothing ‘secret’ about this”, challenging the characterization of our reporting. However, TechCrunch has since reviewed communications proving that the Facebook Research program threatened legal action if its users spoke publicly about the app. That sounds pretty “secret” to us.

Then we learned yesterday morning that Facebook hadn’t voluntarily pulled the app as Apple had actually already invalidated Facebook’s Enterprise Certificate, thereby breaking the Research app and the social network’s employee tools. Apple provided this brutal statement, which it in turn applied to Google today:

“We designed our Enterprise Developer Program solely for the internal distribution of apps within an organization. Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple. Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.”

Apple is being likened to a vigilante privacy regulator overseeing Facebook and Google by The Verge’s Casey Newton and The New York Times’ Kevin Roose, perhaps with too much power given they’re all competitors. But in this case, both Facebook and Google blatantly violated Apple’s policies to collect the maximum amount of data about iOS users, including teenagers. That means Apple was fully within its right to shut down their market research apps. Breaking their employee apps too could be seen as just collateral damage since they all use the same Enterprise Certification, or as additional punishment for violating the rules. This only becomes a real problem if Apple steps beyond the boundaries of its policies. But now, all eyes are on how it enforces its rules, whether to benefit its users or beat up on its rivals.

Twitter cuts off API access to follow/unfollow spam dealers

Notification spam ruins social networks, diluting the real human interaction. Desperate to gain an audience, users pay services to rapidly follow and unfollow tons of people in hopes that some will follow them back. The services can either automate this process or provide tools for users to generate this spam themselves. Earlier this month, a TechCrunch investigation found over two dozen follow-spam companies were paying Instagram to run ads for them. Instagram banned all the services in response and vowed to hunt down similar ones more aggressively.

ManageFlitter’s spammy follow/unfollow tools

Today, Twitter is stepping up its fight against notification spammers. Earlier today, three of these services — ManageFlitter, Statusbrew, and Crowdfire — ceased to function, as spotted by social media consultant Matt Navarra.

TechCrunch inquired with Twitter about whether it had enforced its policy against those companies. A spokesperson provided this comment: “We have suspended these three apps for having repeatedly violated our API rules related to aggressive following & follow churn. As a part of our commitment to building a healthy service, we remain focused on rapidly curbing spam and abuse originating from use of Twitter’s APIs.” These apps will cease to function since they’ll no longer be able to programmatically interact with Twitter to follow or unfollow people or take other actions.

Twitter’s policies specify that “Aggressive following (Accounts who follow or unfollow Twitter accounts in a bulk, aggressive, or indiscriminate manner) is a violation of the Twitter Rules.” This is to prevent a ‘tragedy of the commons’ situation. These services and their customers exploit Twitter’s platform, worsening the experience of everyone else to grow these customers’ follower counts. We dug into these three apps and found they each promoted features designed to help their customers spam Twitter users.

ManageFlitter‘s site promotes how “Following relevant people on Twitter is a great way to gain new followers. Find people who are interested in similar topics, follow them and often they will follow you back.” For $12 to $49 per month, customers can use this feature shown in the GIF above to rapidly follow others, while another feature lets them check back a few days later and rapidly unfollow everyone who didn’t follow them back. 

Crowdfire had already gotten in trouble with Twitter for offering a prohibited auto-DM feature and tools specifically for generating follow notifications. Yet it only changed its functionality to dip just beneath the rate limits Twitter imposes. It seems it preferred charging users up to $75 per month to abuse the Twitter ecosystem rather than accept that what it was doing was wrong.

StatusBrew details how “Many a time when you follow users, they do not follow back . . . thereby, you might want to disconnect with such users after let’s say 7 days. Under ‘Cleanup Suggestion’ we give you a reverse sorted list of the people who’re Not Following Back”. It charges $25 to $416 per month for these spam tools. After losing its API access today, StatusBrew posted a confusing half-mea culpa, half-“it was our customers’ fault” blog post announcing it will shut down its follow/unfollow features.

Twitter tells TechCrunch it will allow these companies to “apply for a new developer account and register a new, compliant app”, but the existing apps will remain suspended. I think they deserve an additional time-out period. But still, this is a good step towards Twitter protecting the health of conversation on its platform from greedy spam services. I’d urge the company to also work to prevent companies and sketchy individuals from selling fake followers or follow/unfollow spam via Twitter ads or tweets.

When you can’t trust that someone who follows you is real, the notifications become meaningless distractions, faith in finding real connection sinks, and we become skeptical of the whole app. It’s the users that lose, so it’s the platforms’ responsibility to play referee.

Net neutrality battle gets a new day in court tomorrow

More than a year after net neutrality was essentially abolished by a divided Federal Communications Commission, a major legal challenge supported by dozens of companies and advocates has its day in court tomorrow. Mozilla v. FCC argues that the agency’s decision was not just dead wrong, but achieved illegally.

“We’re not just going into court to argue that the FCC made a policy mistake,” said Public Knowledge VP Chris Lewis in a statement. “It broke the law, too. The FCC simply failed in its responsibility to engage in reasoned decision-making.”

Oral arguments before the D.C. Circuit Court of Appeals commence Friday, February 1, though the FCC attempted to have the date put off due to the shutdown — and the request was denied.

The legal challenge is one of several tacks being taken against the FCC’s replacement of 2015’s net neutrality rules with a much weaker one last year. As with any rule or law, there are multiple avenues for dissent; a direct legal challenge is among the quickest and most public.

Mozilla, along with Vimeo, Etsy, Public Knowledge, INCOMPAS, and a number of other companies and organizations, filed the challenge shortly after the new rules took effect, but these things take time to creep through the court system.

The lawsuit has a number of primary arguments against the rulemaking (you can read the full brief here), but they boil down to two basic ideas, which I’ve attempted to summarize below:

First and most important, the FCC’s entire argument that broadband is not a telecommunications service is false. This argument goes back decades, and you can read the history of it here. The short version is: telecommunication services move data from point to point, and information services do things with that data. The FCC argues that because broadband connections let you, for example, buy something online, that connection essentially is a store.

Supreme Court Justice Kavanaugh made this same elementary mistake and was set right by a judge a couple years ago. It’s basically indefensible, and no one who understands how the internet works agrees with it. As the Mozilla filing puts it, the argument “confuse[s] the road with the destination.”

The FCC also says that DNS services and caching, some of the nuts and bolts of how the internet and web work, count as information services — which is perfectly true — and that because broadband uses them, it too is an information service instead of telecommunications — which is ridiculous. It’s like saying that if a road has signs on it, the road is itself a sign. Nope. The filing again resorts to metaphor, saying “a few drops of fresh water do not turn an ocean into a lake.”

This is the primary support for the FCC’s entire case, and removing it would essentially nullify the entire new set of rules, since if the judges agree that broadband is in fact telecommunications, the industry is governed by a whole different set of statutes under the Communications Act. There are numerous other sub-arguments here that could also come into play.

Second, the FCC’s decision is “arbitrary and capricious,” and thus illegal under the Administrative Procedure Act, which requires certain standards of evidence and method to be shown in the establishment of such rules. This is supported in a number of ways, including the authority argument above. The FCC also failed to address consumer and other complaints during the rulemaking process.

The FCC also does not justify its argument that the broadband industry is better suited to regulation by antitrust authorities, and does not justify rejection of certain other statutory authorities under which the FCC could be responsible for some of the rules. “The FCC does not adequately explain why other statutes, developed to address other problems, just happen to do the job Congress assigned to the FCC,” argue Mozilla et al.

The agency’s cost-benefit analysis, documentation required for new rules like this, is also inadequate, they argue. Certainly economic analysis of multiple major industries can be debated forever, but there are pretty basic questions unanswered or evaded here, which weakens the FCC’s entire case.

For the record, the FCC’s arguments and counter-arguments are set forth in the rule itself and court filings largely reiterate the same points.

All these arguments are not particularly new — they’ve been brought out and revised multiple times both before and after the net neutrality decision. But this is an important setting in which for them to be addressed. This panel of judges could essentially render the FCC’s rules or rulemaking process inadequate, illegal, or incorrect — or all three — and send the agency back to the drawing board.

These decisions take a great deal of time to arrive, so be ready for a wait just like the one we’ve had for the arguments to make it to court in the first place. But the wheels are in motion and it could be that in a few months’ time net neutrality will have new life.

Of course, if the FCC won’t keep net neutrality around, states will — and that’s a whole other legal battle waiting to happen.

“Comcast, Verizon, and AT&T are going to wish they never picked this fight with the Internet,” said Fight for the Future’s Evan Greer. “Internet activists are continuing to fight in the courts, in Congress, and in the states. Net neutrality is coming back with a vengeance. It’s only a matter of time.”