Messenger Kids adds expanded parental controls, details how much kids’ data Facebook collects

Facebook’s messaging app for families with children, Messenger Kids, is being updated today with new tools and features that give parents more oversight of and control over their kids’ chats. Parents will now be able to see who a child is chatting with and how often, view recent photos and videos sent through chat, access the child’s list of reported and blocked contacts, remotely log the child out of the app on other devices, and download the child’s chats, images and videos, both sent and received. The company is also introducing a new blocking mechanism and is updating the app’s privacy policy to include additional information about its data collection, use and deletion practices.

The Messenger Kids app was first introduced in late 2017 as a way for kids to message friends and family under parental oversight. It arrived at a time when kids were already embracing messaging — but were often doing so on less controlled platforms, like Kik, which attracted predators. Messenger Kids instead allows the child’s parents to determine who the child can chat with and when, through built-in parental controls.

In our household, for example, it became a convenient tool for chatting with relatives, like grandparents, aunts, uncles, and cousins, as well as a few trusted friends, whose parents I knew well.

But when it came time to review the chats, a lot of scrolling back was involved.

The new Messenger Kids features will help with the oversight aspects for those parents who allow their kids to chat online. That decision, of course, is a personal one. Some parents don’t want their kids to have smartphones and outright ban apps, particularly ones that allow interactions. Others, myself included, believe that teaching kids to navigate the online world is part of a parent’s responsibility. And despite Facebook’s reputation, there aren’t other chat apps offering this sort of parental control — or the convenience of being able to add everyone in your family to a child’s chat list with ease. (After all, Grandma and Grandpa are already on Facebook and Messenger, but getting them to download new apps remains difficult.)

In the updated app, parents will be able to see who a child has been chatting with, and whether that was text or video chat, over the past 30 days. This can save parents time: they may not feel the need to review chats with trusted family members, for instance, and can instead focus their energy on reviewing the chats with friends. A log of images will help parents see whether the images and videos being sent and received are appropriate, and remove or block them if not.

Parents can also now see if a child has blocked or reported a user in the app, or if they’ve unblocked them. This could be useful for identifying those problematic friends — the kind who sometimes cause trouble, but are later forgiven, then unblocked. (Anyone who’s dealt with tween-age drama can attest to the fact that there’s always one in every group!) By gaining access to this information, parents can sit down with the child to talk about when to take that step and block someone, and when a disagreement with a friend can instead be worked out. These are decisions that a child will have to make on their own one day, so being able to use this as a teaching moment is useful.

With the update, unblocking is supported and parents are still able to review chats with blocked contacts. However, blocked contacts will remain visible to one another and will stay in shared group chats. They just aren’t able to message one-on-one. Kids are warned if they return to or are added to chats with blocked contacts. (If parents want a full block, they can just remove the blocked contact from the child’s contact list, as before.)

Remote device logout lets you make sure the child is logged out of Messenger Kids on devices you can’t physically access and control — like a misplaced phone. And the option to download the child’s information, similar to Facebook’s feature, lets you download a copy of everything — messages, images, and videos. This could be a way to preserve their chat history when the child outgrows the app.

The Messenger Kids privacy policy was updated as well, to better detail the information being collected. The app also attempts to explain this to kids in plain language, using cute photos. In reality, parents should read the policy for themselves and decide accordingly.

The app collects a lot of information — including names, profile photos, demographic details (gender and birthday), a child’s connection to parents, contacts’ information (like most frequent contacts), app usage information, device attributes and unique identifiers, data from device settings (like time zones or access to camera and photos), network information, and information provided from things like bug reports or feedback/contact forms.

To some extent, this information is needed to help the app properly operate or to alert parents about a child’s activities. But the policy includes less transparent language about the collected information being used to “evaluate, troubleshoot, improve, create, and develop our products” or being shared with other Facebook Companies. There’s a lot of wiggle room there for extensive data collection on Facebook’s part. Service providers offering technical infrastructure and support, like a content delivery network or customer service, may also gain access to collected information, but must adhere to “strict data confidentiality and security obligations,” the policy claims, without offering further details on what those are.

Despite its lengthiness, the policy leaves plenty of room for Facebook to collect private information and share it. If you have a Facebook account, you’ve already agreed to this sort of “deal with the devil” for yourself, in order to benefit from Facebook’s free service. But parents need to strongly consider if they’re comfortable making the same decision for their children.

The policy also describes things Facebook plans to roll out later, when Messenger Kids is updated to support older kids. As kids enter their tween and teen years, parents may want to loosen the reins a bit. The new policy will cover those changes, as well.

It’s unfortunate that the easiest tool, and the one with the best parental controls, is coming from Facebook. The market is ripe for a disruptor in the kids’ space, but there’s not enough money in that, apparently. Facebook, of course, sees the potential of getting kids hooked early and can invest in a product that isn’t directly monetized. Few companies can afford to do this, but Apple would be best positioned to take Facebook on in this area.

Apple’s iMessage is a large, secure and private platform — but it lacks these advanced parental controls, as well as the other bells and whistles (like built-in AR filters) that make the Messenger Kids app fun. Critically, it doesn’t work across non-Apple devices, which will always be a limitation when it comes to finding an app that the extended family can use together.

To be clear, there is no way to stop Facebook from vacuuming up the child’s information except to delete the child’s Messenger Kids account through the Facebook Help Center. So consider your choices wisely.


Tinder’s handling of user data is now under GDPR probe in Europe

Dating app Tinder is the latest tech service to find itself under formal investigation in Europe over how it handles user data.

Ireland’s Data Protection Commission (DPC) has today announced a formal probe of how Tinder processes users’ personal data, the transparency surrounding its ongoing processing, and its compliance with obligations regarding data subject rights requests.

Under Europe’s General Data Protection Regulation (GDPR) EU citizens have a number of rights over their personal data — such as the right to request deletion or a copy of their data.

Those entities processing people’s personal data must also have a valid legal basis for doing so.

Data security is another key consideration baked into the data protection regulation.

The DPC said complaints about the dating app have been made from individuals in multiple EU countries, not just in Ireland — with the Irish regulator taking the lead under a GDPR mechanism to manage cross-border investigations.

It said the Tinder probe came about as a result of active monitoring of complaints received from individuals “both in Ireland and across the EU” — in order to identify “thematic and possible systemic data protection issues”.

“The Inquiry of the DPC will set out to establish whether the company has a legal basis for the ongoing processing of its users’ personal data and whether it meets its obligations as a data controller with regard to transparency and its compliance with data subject right’s requests,” the DPC added.

It’s not clear exactly which GDPR rights have been complained about by Tinder users at this stage.

We’ve reached out to Tinder for a response.

Also today the DPC has finally responded to long-standing complaints by consumer rights groups about Google’s handling of location data — announcing a formal investigation of that too.

Google’s location tracking finally under formal probe in Europe

Google’s lead data regulator in Europe has finally opened a formal investigation into the tech giant’s processing of location data — more than a year after receiving a series of complaints from consumer rights groups across Europe.

The Irish Data Protection Commission (DPC) announced the probe today, writing in a statement that: “The issues raised within the concerns relate to the legality of Google’s processing of location data and the transparency surrounding that processing.”

“As such the DPC has commenced an own-volition Statutory Inquiry, with respect to Google Ireland Limited, pursuant to Section 110 of the Data Protection Act 2018 and in accordance with the co-operation mechanism outlined under Article 60 of the GDPR. The Inquiry will set out to establish whether Google has a valid legal basis for processing the location data of its users and whether it meets its obligations as a data controller with regard to transparency,” its notice added.

We’ve reached out to Google for comment.

BEUC, an umbrella group for European consumer rights groups, said the complaints about ‘deceptive’ location tracking were filed back in November 2018 — several months after the General Data Protection Regulation (GDPR) came into force, in May 2018.

It said the rights groups are concerned about how Google gathers information about the places people visit which it says could grant private companies (including Google) the “power to draw conclusions about our personality, religion or sexual orientation, which can be deeply personal traits”.

The complaints argue that consent to “share” users’ location data is not valid under EU law because it is not freely given — an express stipulation of consent as a legal basis for processing personal data under the GDPR — arguing that consumers are rather being tricked into accepting “privacy-intrusive settings”.

It’s not clear why it’s taken the DPC so long to process the complaints and determine it needs to formally investigate. (We’ve asked for comment and will update with any response.)

BEUC certainly sounds unimpressed — saying it’s glad the regulator “eventually” took the step to look into Google’s “massive location data collection”.

“European consumers have been victim of these practices for far too long,” its press release adds. “BEUC expects the DPC to investigate Google’s practices at the time of our complaints, and not just from today. It is also important that the procedural rights of consumers who complained many months ago, and that of our members representing them, are respected.”

Commenting further in a statement, Monique Goyens, BEUC’s director general, also said: “Consumers should not be under commercial surveillance. They need authorities to defend them and to sanction those who break the law. Considering the scale of the problem, which affects millions of European consumers, this investigation should be a priority for the Irish data protection authority. As more than 14 months have passed since consumer groups first filed complaints about Google’s malpractice, it would be unacceptable for consumers who trust authorities if there were further delays. The credibility of the enforcement of the GDPR is at stake here.”

The Irish DPC has also been facing growing criticism over the length of time it’s taking to reach decisions on extant GDPR investigations.

A total of zero decisions on big tech cases have been issued by the regulator — some 20 months after GDPR came into force in May 2018.

As lead European regulator for multiple tech giants — as a consequence of a GDPR mechanism which funnels cross border complaints via a lead regulator, combined with the fact so many tech firms choose to site their regional HQ in Ireland (with the added carrot of attractive business rates) — the DPC does have a major backlog of complex cross-border cases.

However, there is growing political and public pressure for enforcement action to demonstrate that the GDPR is functioning as intended, even as further questions are raised about how Ireland’s legal system will be able to manage so many cases.

Google has felt the sting of GDPR enforcement elsewhere in the region; just over a year ago the French data watchdog, the CNIL, fined the company $57 million — for transparency and consent failures attached to the onboarding process for its Android mobile operating system.

But immediately following that decision Google switched the legal location of its international business to Ireland — meaning any GDPR complaints are now funnelled through the DPC.

UK Council websites are letting citizens be profiled for ads, study shows

On the same day that a data ethics advisor to the UK government urged action to regulate online targeting, a study conducted by pro-privacy browser maker Brave has highlighted how Brits are being profiled by the behavioral ad industry when they visit their local Council’s website — perhaps seeking info on local services or guidance about benefits, including potentially sensitive information related to addiction services or disabilities.

Brave found that nearly all UK Councils permit at least one company to learn about the behavior of people visiting their sites, with a full 409 Councils exposing some visitor data to private companies.

Many large Councils (those serving 300,000+ people) were found exposing site visitors to what Brave describes as “extensive tracking and data collection by private companies” — with the worst offenders, London’s Enfield Council and Sheffield City Council, exposing visitors to 25 data collectors apiece.

Brave argues the findings represent a conservative illustration of how much commercial tracking and profiling of visitors is going on on public sector websites — a floor, rather than a ceiling — given it was only studying landing pages of Council sites without any user interaction, and could only pick up known trackers (nor could the study look at how data is passed between tracking and data brokering companies).

Nor is it the first such study to warn that public sector websites are infested with for-profit adtech. A report last year by Cookiebot found users of public sector and government websites in the EU being tracked when they performed health-related searches — including queries related to HIV, mental health, pregnancy, alcoholism and cancer.

Brave’s study — which was carried out using the webxray tool — found that almost all (98%) of the Councils used Google systems, with the report noting that the tech giant owns all five of the top embedded elements loaded by Council websites, which it suggests gives the company a god-like view of how UK citizens are interacting with their local authorities online.
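Conceptually, a tracker census like the one webxray performs boils down to loading a page, collecting the domains of the third-party resources it requests, and matching those against a database of known tracking domains. The sketch below is a simplified, hypothetical illustration of that idea in Python: the tracker list and council domain are made up, and the real tool also observes live network requests and cookies, which static HTML parsing cannot see.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical, abbreviated tracker list; real censuses match against
# databases of thousands of known tracking domains.
KNOWN_TRACKERS = {"google-analytics.com", "doubleclick.net", "facebook.net"}

class ResourceExtractor(HTMLParser):
    """Collects the domains of externally loaded scripts, images and iframes."""
    def __init__(self):
        super().__init__()
        self.domains = set()

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "img", "iframe", "link"):
            for name, value in attrs:
                if name in ("src", "href") and value and value.startswith("http"):
                    self.domains.add(urlparse(value).netloc)

def third_party_trackers(page_html, first_party_domain):
    """Return the external domains on a page that match the known-tracker list."""
    parser = ResourceExtractor()
    parser.feed(page_html)
    third_party = {d for d in parser.domains if not d.endswith(first_party_domain)}
    return {d for d in third_party
            if any(d == t or d.endswith("." + t) for t in KNOWN_TRACKERS)}

# An invented council landing page loading one first-party script
# and two third-party tracking resources.
sample = """
<html><head>
<script src="https://www.google-analytics.com/analytics.js"></script>
<script src="https://council.example.gov.uk/app.js"></script>
<img src="https://ads.doubleclick.net/pixel.gif">
</head></html>
"""
print(third_party_trackers(sample, "council.example.gov.uk"))
```

Run against the invented page above, the function flags the two tracking domains and ignores the council’s own script.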

The analysis also found 198 of the Council websites use the real-time bidding (RTB) form of programmatic online advertising. This is notable because RTB is the subject of a number of data protection complaints across the European Union — including in the UK, where the Information Commissioner’s Office (ICO) itself has been warning the adtech industry for more than half a year that its current processes are in breach of data protection laws.

However the UK watchdog has preferred to bark softly in the industry’s general direction over its RTB problem, instead of taking any enforcement action — a response that’s been dubbed “disastrous” by privacy campaigners.

One of the smaller RTB players the report highlights — which calls itself the Council Advertising Network (CAN) — was found sharing people’s data from 34 Council websites with 22 companies, which could then be insecurely broadcasting it on to hundreds or more entities in the bid chain.

Slides from a CAN media pack refer to “budget conscious” direct marketing opportunities via the ability to target visitors to Council websites accessing pages about benefits, child care and free local activities; “disability” marketing opportunities via the ability to target visitors to Council websites accessing pages such as home care, blue badges and community and social services; and “key life stages” marketing opportunities via the ability to target visitors to Council websites accessing pages related to moving home, having a baby, getting married or losing a loved one.

Brave’s report — while a clearly stated promotion for its own anti-tracking browser (given it’s a commercial player too) — should be seen in the context of the ICO’s ongoing failure to take enforcement action against RTB abuses. It’s therefore an attempt to increase pressure on the regulator to act by further illuminating a complex industry which has used a lack of transparency to shield massive rights abuses and continues to benefit from a lack of enforcement of Europe’s General Data Protection Regulation.

A low level of public understanding of how all the pieces in the adtech chain fit together into a dysfunctional whole, in which public services are turned against the citizens whose taxes fund them in order to track and target people for exploitative ads, likely also discourages sharper regulatory action.

But, as the saying goes, sunlight disinfects.

Asked what steps he would like the regulator to take, Brave’s chief policy officer, Dr Johnny Ryan, told TechCrunch: “I want the ICO to use its powers of enforcement to end the UK’s largest data breach. That data breach continues, and two years to the day after I first blew the whistle about RTB, Simon McDougall wrote a blog post accepting Google and the IAB’s empty gestures as acts of substance. It is time for the ICO to move this over to its enforcement team, and stop wasting time.”

We’ve reached out to the ICO for a response to the report’s findings.

Customer feedback is a development opportunity

Online commerce accounted for nearly $518 billion in revenue in the United States alone last year, and online marketplaces like Amazon and eBay are projected to command 40% of the global retail market in 2020. As the number of digital offerings — not only marketplaces but also online storefronts and company websites — available to consumers continues to grow, the primary challenge for any online platform lies in setting itself apart.

The central question for how to accomplish this: Where does differentiation matter most?

A customer’s ability to easily (and accurately) find a specific product or service with minimal barriers helps ensure they feel satisfied and confident with their choice of purchase. This ultimately becomes the differentiator that sets an online platform apart. It’s about coupling a stellar product with an exceptional experience. Often, that takes the form of simple, searchable access to a wide variety of products and services. Sometimes, it’s about surfacing a brand that meets an individual consumer’s needs or price point. In both cases, platforms are in a position to help customers avoid having to chase down a product or service through multiple clicks while offering a better way of comparing apples to apples.

To be successful, a company should adopt a consumer-first philosophy that informs its product ideation and development process. Successful consumer-first development resides in a company’s ability to expediently deliver fresh features that customers actually respond to, rather than prioritizing the update that seems most profitable. The best way to inform both elements is to consistently collect and learn from customer feedback in a timely way — and sometimes, this will mean making decisions that benefit consumers over what is in the best interest of the company.

Carriers ‘violated federal law’ by selling your location data, FCC tells Congress

More than a year and a half after wireless carriers were caught red-handed selling the real-time location data of their customers to anyone willing to pay for it, the FCC has determined that they violated federal law. An official account of exactly how these companies broke the law is forthcoming.

FCC Chairman Ajit Pai shared his finding in a letter to Congressman Frank Pallone (D-NJ), who chairs the Energy and Commerce Committee that oversees the agency. Rep. Pallone has been active on this and prodded the FCC for updates late last year, prompting today’s letter. (I expect a comment from his office shortly and will add it when they respond.)

“I wish to inform you that the FCC’s Enforcement Bureau has completed its extensive investigation and that it has concluded that one or more wireless carriers apparently violated federal law,” Pai wrote.

Extensive it must have been, since we first heard of this egregious breach of privacy in May of 2018, when multiple reports showed that every major carrier (including TechCrunch’s parent company Verizon) was selling precise location data wholesale to resellers who then either resold it or gave it away. It took nearly a year for the carriers to follow through on their promises to stop the practice. And now, 18 months later, we get the first real indication that regulators took notice.

“It’s a shame that it took so long for the FCC to reach a conclusion that was so obvious,” said Commissioner Jessica Rosenworcel in a statement issued alongside the Chairman’s letter. She has repeatedly brought up the issue in the interim, seemingly baffled that such a large-scale and obvious violation was going almost completely unacknowledged by the agency.

Commissioner Geoffrey Starks echoed her sentiment in his own statement: “These pay-to-track schemes violated consumers’ privacy rights and endangered their safety. I’m glad we may finally act on these egregious allegations. My question is: what took so long?”

Chairman Pai’s letter explains that “in the coming days” he will be proposing a “Notice of Apparent Liability for Forfeiture,” or several of them. This complicated-sounding document is basically the official declaration, with evidence and legal standing, that someone has violated FCC rules and may be subject to a “forfeiture,” essentially a fine.

Right now that is all the information anyone has, including the other Commissioners, but the arrival of the notice will no doubt make things much clearer — and may help show exactly how seriously the agency took this problem and when it began to take action.

Disclosure: TechCrunch is owned by Verizon Media, a subsidiary of Verizon, but this has no effect on our coverage.

Tech companies, we see through your flimsy privacy promises

There’s a reason why Data Privacy Day pisses me off.

January 28 was the annual “Hallmark holiday” for cybersecurity, ostensibly a day devoted to promoting data privacy awareness and staying safe online. This year, as in recent years, it has become a launching pad for marketing fluff and promoting privacy practices that don’t hold up.

Privacy has become a major component of our wider views on security, and it’s in sharper focus than ever as we see multiple examples of companies that harvest too much of our data, share it with others, sell it to advertisers and third parties and use it to track our every move so they can squeeze out a few more dollars.

But as we become more aware of these issues, companies large and small clamor for attention about how their privacy practices are good for users. All too often, companies make hollow promises and empty claims that look fancy and meaningful.

Ring’s new security ‘control center’ isn’t nearly enough

On the same day that a Mississippi family is suing Amazon-owned smart camera maker Ring for not doing enough to prevent hackers from spying on their kids, the company has rolled out its previously announced “control center,” which it hopes will make you forget about its verifiably “awful” security practices.

In a blog post out Thursday, Ring said the new “control center” “empowers” customers to manage their security and privacy settings.

Ring users can check to see if they’ve enabled two-factor authentication, add and remove users from the account, see which third-party services can access their Ring cameras, and opt out of allowing police to access their video recordings without the user’s consent.

But dig deeper and Ring’s latest changes still do practically nothing to change some of its most basic, yet highly criticized security practices.

Questions were raised over these practices months ago after hackers were caught breaking into Ring cameras and remotely watching and speaking to small children. The hackers were using previously compromised email addresses and passwords — a technique known as credential stuffing — to break into the accounts. Some of those credentials, many of which were simple and easy to guess, were later published on the dark web.

Yet, Ring still has not done anything to mitigate this most basic security problem.

TechCrunch ran several passwords through Ring’s sign-up page and found we could enter any easy-to-guess password, like “12345678” and “password” — which have consistently ranked among the most common passwords for several years running.
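The basic mitigation here is trivial to implement: a sign-up flow can reject any password that appears on a list of common choices before accepting it. Below is a minimal, hypothetical sketch; the short deny-list is illustrative, as real services check against large breach corpora such as the one behind the Have I Been Pwned API.

```python
# A tiny illustrative deny-list; production systems use lists of
# millions of breached or commonly chosen passwords.
COMMON_PASSWORDS = {"123456", "12345678", "password", "qwerty", "111111"}

def password_is_acceptable(password: str, min_length: int = 10) -> bool:
    """Reject passwords that are too short or on the common-password list."""
    if len(password) < min_length:
        return False
    if password.lower() in COMMON_PASSWORDS:
        return False
    return True

print(password_is_acceptable("12345678"))                      # False: common and short
print(password_is_acceptable("correct horse battery staple"))  # True
```

A check like this runs in microseconds at sign-up time, which is why its absence from a security company’s own onboarding flow draws criticism.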

To combat the problem, Ring said at the time users should enable two-factor authentication, a security feature that adds an additional check to prevent account breaches like password spraying, where hackers use a list of common passwords in an effort to brute force their way into accounts.

But Ring still uses a weak form of two-factor, sending you a code by text message. Text messages are not secure and can be compromised through interception and SIM swapping attacks. Even NIST, the government’s technology standards body, has deprecated support for text message-based two-factor. Experts say although text-based two-factor is better than not using it at all, it’s far less secure than app-based two-factor, where codes are delivered over an encrypted connection to an app on your phone.
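App-based codes are typically generated with TOTP (RFC 6238): the service and the authenticator app share a secret at enrollment, and each side derives the same short-lived code locally, so nothing travels over the SMS network to be intercepted. Here is a minimal sketch using only Python’s standard library; the secret below is the RFC’s published test value, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, at_time: float = None, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 time-based one-time password, as used by authenticator apps."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second steps since the epoch.
    counter = int((at_time if at_time is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238's published test secret: "12345678901234567890", base32-encoded.
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, at_time=59, digits=8))  # "94287082", per the RFC's test vectors
```

Because the code is derived on-device from a shared secret and the current time, there is no message for a SIM-swap attacker to intercept — which is the property SMS delivery lacks.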

Ring said it’ll make its two-factor authentication feature mandatory later this year, but has yet to say whether it will ever support app-based two-factor authentication.

The smart camera maker has also faced criticism for its cozy relationship with law enforcement, which has lawmakers concerned and demanding answers.

Ring allows police access to users’ videos without a subpoena or a warrant. (Unlike its parent company Amazon, Ring still does not publish the number of times police demand access to customer videos, with or without a legal request.)

Ring now says its control center will allow users to decide if police can access their videos or not.

But don’t be fooled by Ring’s promise that police “cannot see your video recordings unless you explicitly choose to share them by responding to a specific video request.” Police can still get a search warrant or a court order to obtain your videos, which isn’t particularly difficult if police can show there’s reasonable grounds that it may contain evidence — such as video footage — of a crime.

There’s nothing stopping Ring, or any other smart home maker, from offering a zero-knowledge approach to customer data, where only the user has the encryption keys to access their data. Ring cutting itself (and everyone else) out of the loop would be the only meaningful thing it could do if it truly cares about its users’ security and privacy. The company would have to decide if the trade-off is worth it — true privacy for its users versus losing out on access to user data, which would effectively kill its ongoing cooperation with police departments.

Ring says that security and privacy have “always been our top priority.” But if it’s not willing to work on the basics, its words are little more than empty promises.

Avast shuts down marketing analytics subsidiary Jumpshot amid controversy over selling user data

Avast has made a huge business out of selling antivirus protection for computers and mobile devices, but more recently it was revealed that the Czech-based cybersecurity specialist was also cultivating another, more controversial, revenue stream: harvesting and selling on user data, some of which it amassed by way of those security tools.

But as of today, the latter of those businesses is no longer. Avast announced that it would be winding down Jumpshot, its $180 million marketing technology subsidiary that had been in the business of collecting data from across the web, including within walled gardens, analysing it, and then — unknown to users — selling it on to third-party customers that included tech giants like Microsoft and Google and big brands like Pepsi and Home Depot.

The significance of the incident extends beyond Avast and Jumpshot’s practices: it highlights the sometimes-obscure but very real connection between how some security technology runs the risk of stepping over the boundary into violations of privacy; and ultimately how big data is a hot commodity, a fact that potentially clouds that demarcation even more, as it did here:

“We started Jumpshot in 2015 with the idea of extending our data analytics capabilities beyond core security,” writes CEO Ondrej Vlcek in a blog post responding to the Jumpshot news. “This was during a period where it was becoming increasingly apparent that cybersecurity was going to be a big data game. We thought we could leverage our tools and resources to do this more securely than the countless other companies that were collecting data.”

Today’s news comes on the heels of a series of developments and investigations highlighting Jumpshot’s practices, stretching back to December, when Mozilla and Opera removed Avast extensions after reports that they were collecting user data and browsing histories. Avast — which has over 430 million active users — later came clean, only for a follow-up investigation published earlier this week to unveil yet more details about the practice and the specific link to Jumpshot, which was founded in 2015 and uses data from 100 million devices.

In its announcement, Avast said that it “plans to terminate provision of data” to Jumpshot, but did not give a timeframe for when Jumpshot would completely cease to operate as part of the closure. There is still no announcement on Jumpshot’s own site.

“Jumpshot intends to continue paying its vendors and suppliers in full as necessary and in the ordinary course for products and services provided to Jumpshot during its wind down process,” the company said. “Jumpshot will be promptly notifying its customers in due course about the termination of its data services.”

Avast had a key partner in Jumpshot: Ascential, the business media company that took a $60.8 million, 35% stake in the subsidiary last July, effectively valuing Jumpshot at around $177 million. An internal memo that we obtained from Ascential notes that the company has already sold its stake back to Avast for the same price, incurring no loss in the process.

Avast’s CEO Ondrej Vlcek, who joined the company seven months ago, apologised in a separate blog post while also somewhat distancing himself from the history of the company and what it did. He noted that he identified the issues during an audit of the company when he joined (although he didn’t act to change any of the practices). Perhaps more importantly, he maintained the legality of the situation:

“Jumpshot has operated as an independent company from the very beginning, with its own management and board of directors, building their products and services via the data feed coming from the Avast antivirus products,” he wrote. “During all those years, both Avast and Jumpshot acted fully within legal bounds – and we very much welcomed the introduction of GDPR in the European Union in May 2018, as it was a rigorous legal framework addressing how companies should treat customer data. Both Avast and Jumpshot committed themselves to 100% GDPR compliance.”

We have reached out to the Czech DPA to ask if it is going to be conducting any investigations around the company in relation to Jumpshot and its practices with data.

In the meantime, with the regulatory implications to one side, the incident has been a blow to Avast, which has in the last couple of days seen its shares tumble nearly 11 percent on the London Stock Exchange, where it is traded. The company is currently valued at around £4 billion (or $5.2 billion at today’s exchange rates).

Facebook will pay $550 million to settle class action lawsuit over privacy violations

Facebook will pay over half a billion dollars to settle a class action lawsuit that alleged systematic violation of an Illinois consumer privacy law. The settlement amount is large indeed, but a small fraction of the $35 billion maximum the company could have faced.

Class members — basically Illinois Facebook users from mid-2011 to mid-2015 — may expect as much as $200 each, but that depends on several factors. If you’re one of them you should receive some notification once the settlement is approved by the court and the formalities are worked out.

The proposed settlement would require Facebook to obtain consent in the future from Illinois users for such purposes as face analysis for automatic tagging.

This is the second major settlement from Facebook in six months; a seemingly enormous $5 billion settlement over FTC violations was announced over the summer, but it’s actually a bit of a joke.

The Illinois suit was filed in 2015, alleging that Facebook collected facial recognition data on images of users in the state without disclosure, in contravention of the state’s 2008 Biometric Information Privacy Act (BIPA). Similar suits were filed against Shutterfly, Snapchat, and Google.

Facebook pushed back in 2016, saying that facial recognition processing didn’t count as biometric data, and that anyway Illinois law didn’t apply to it, a California company. The judge rejected these arguments with flair, saying the definition of biometric was “cramped” and the assertion of Facebook’s immunity would be “a complete negation” of Illinois law in this context.

Facebook was also suspected at the time of heavy lobbying efforts towards defanging BIPA. One state senator proposed an amendment after the lawsuit was filed that would exclude digital images from BIPA coverage, which would of course have completely destroyed the case. It’s hard to imagine such a ridiculous proposal was the suggestion of anyone but the industry, which tends to regard the strong protections of the law in Illinois as quite superfluous.

As I noted in 2018, the Illinois Chamber of Commerce proposed the amendment, and a tech council there was chaired by Facebook’s own Manager of State Policy at the time. Facebook told me then that it had not taken any position on the amendment or spoken to any legislators about it.

In 2019 the case went to the 9th U.S. Circuit Court of Appeals, where Facebook was again rebuffed; the court concluded that “the development of face template using facial-recognition technology without consent (as alleged here) invades an individual’s private affairs and concrete interests. Similar conduct is actionable at common law.”

Facebook’s request for a rehearing en banc, which is to say with the full complement of judges there present, was unanimously denied two months later.

At last, after some five years of this, Facebook decided to settle, a representative told TechCrunch, “as it was in the best interest of our community and our shareholders to move past this matter.” Obviously it admits to no wrongdoing.

The $550 million amount negotiated is “the largest all-cash privacy class action settlement to date,” according to law firm Edelson PC, one of three that represented the plaintiffs in the suit.

“Biometrics is one of the two primary battlegrounds, along with geolocation, that will define our privacy rights for the next generation,” said Edelson PC founder and CEO Jay Edelson in a press release. “We are proud of the strong team we had in place that had the resolve to fight this critically important case over the last five years. We hope and expect that other companies will follow Facebook’s lead and pay significant attention to the importance of our biometric information.”