Meta is testing a new privacy control center across its apps

Meta, formerly Facebook, is tinkering around with how it tells users about their privacy options. The company has switched up where this information lives and how it’s explained a number of times over the years and now appears to be centralizing some privacy FAQs and controls in one place across its family of apps.

The new privacy center, called Privacy Center, is a test available to some U.S.-based Facebook desktop users for now but will roll out to “more people and apps in the coming months.” If you’re in the limited test, you can find the new privacy information center in the Privacy section of the Settings menu. As it stands, privacy settings are divided among Facebook’s Privacy Shortcuts and Privacy Checkup menus, which are far from ideal but still an improvement over how the platform used to handle these controls.

The new privacy center will break things down into five categories: Security, Sharing, Collection, Use and Ads. Meta is still pushing the idea of “privacy education” with the changes, a kind of shorthand way of saying that if you don’t go to the trouble of sorting through all of this stuff then it’s on you when the company puts your personal data to good use.

Facebook has historically offered users byzantine, difficult-to-navigate privacy controls, often with the most important settings buried in menus. That’s improved some, but regardless of what it says these days, Meta still has a vested interest in keeping its users sharing as much data as possible. That position is made abundantly clear by the company’s conspicuous objections to Apple’s iOS ad tracking changes, which limited the ability of apps to follow user behavior — a clear win for consumer privacy.

It’s not realistic to assume that most people would systematically trudge through this information, but it’s probably worth knowing where it all lives, just in case.

Ethyca raises $7.5M, open sources Fides for developers to build privacy tools directly into their codebases

Ethyca, the privacy-by-design technology startup that has built a unique set of APIs, detection tools and analytics to make it easier for organizations to adhere to data privacy policies like GDPR, is taking a couple of big steps today as it gets deeper into the business of data protection.

First, the New York-based startup is open sourcing a set of developer tools it has built called Fides, so that developers can build privacy tools and monitoring mechanisms directly into their codebases. Second, it has picked up an additional $7.5 million in funding so that it can continue developing and commercializing its proprietary tools, specifically APIs to make it easier for those building or monitoring privacy compliance to do so.

The funding, an extension to its June 2020 Series A of $13.5 million, is coming from existing investors: Lee Fixel, IA Ventures, Lachy Groom’s LGF and Table Management (Bill Ackman). It brings that Series A to $21 million. Ethyca has raised $27.5 million to date and has amassed dozens of customers, including the likes of Away, IDEO and InVision.

Founder and CEO Cillian Kieran said that Fides was actually the motivation for starting Ethyca in the first place. His thesis in 2018 was that privacy was becoming a critical and essential aspect of building and running digital services, not only because data protection regulations and mounting cybersecurity challenges increasingly demanded it, but because the users of those services were becoming more aware of how their data was being used (and sometimes abused).

Added to that, of course, is the role of the developer, or the organization itself, in the equation. Many believe that to get data protection right, you have to build it in from the very beginning, both as a priority and as an actual technical part of how a system works — a concept that people often refer to as “privacy by design.”

While Ethyca’s APIs are aimed at building versions of those tools to make it easier to monitor and account for privacy within an existing workflow, Fides gets to the heart of it for those who are building systems, letting them program privacy in from the start. Ethyca describes Fides as an open source definition and configuration language for describing privacy constructs in data and software systems. It is being released initially with two OSS tools, Fides Ops and Fides Control, which respectively cover the orchestration of privacy rights across an organization’s data infrastructure and the validation of privacy rules in CI/CD workflows.

“Essentially, we have inserted privacy into the CI/CD pipeline,” Kieran said. He likens how it works to companies like Snyk, which developers use to find security loopholes and vulnerabilities before code is ever put into production. “Fides can do the same with the privacy.” Snyk, I’ll point out, has skyrocketed in valuation in recent times, a sign of the high priority placed right now on addressing security and data protection at the devops level.

In the case of Fides, the idea is that the toolkit will, for example, be able to detect when code is calling in user behavior, data and location at the same time. If the business doesn’t permit that, Fides will flag the relevant part of the code so it can be fixed before it goes into production. It can also be used to automate data rights requests, such as when a person asks to be unsubscribed or to have their data deleted from a system, something that typically takes many hours or weeks when carried out manually, as it often is.
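To make the concept concrete, here is a minimal, hypothetical sketch of what such a gate could look like, written in plain Python rather than in Fides’ actual configuration language; the data categories and the disallowed combination are invented purely for illustration.

```python
# Hypothetical sketch of a privacy gate in CI (not Fides' real API).
# A change declares which data categories it touches; the check fails
# the build if a disallowed combination (e.g. behavior + location) appears.

DISALLOWED_COMBINATIONS = [
    {"user_behavior", "precise_location"},  # assumed business rule, for illustration only
]


def check_privacy_policy(declared_categories):
    """Return human-readable violations for the declared data categories."""
    violations = []
    for combo in DISALLOWED_COMBINATIONS:
        if combo.issubset(declared_categories):
            violations.append("disallowed combination: " + ", ".join(sorted(combo)))
    return violations


if __name__ == "__main__":
    # Imagine these categories were parsed from annotations in a pull request.
    declared = {"user_behavior", "precise_location", "email"}
    problems = check_privacy_policy(declared)
    for p in problems:
        print("PRIVACY CHECK FAILED:", p)
    raise SystemExit(1 if problems else 0)  # non-zero exit fails the CI job
```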

Ethyca’s Pro (paid) tools already integrate with some 700 apps to monitor for data protection and privacy policy alignment, and in addition to the many private companies on its books, the company works with a number of larger publicly traded enterprises and technology companies that it declines to name publicly (but believe me when I say… they are big, and exactly the kinds of companies that have needed to home in on better privacy and data protection measures).

Fides, meanwhile, already integrates with database platforms like Amazon’s DynamoDB and Redshift, Snowflake, Databricks, MongoDB, MariaDB, Microsoft’s SQL Server, MySQL and PostgreSQL, and it has signed on Slack and GitHub as supporters of its open source community.

The funding will let Ethyca continue to hire to build out that commercial business while continuing to contribute to what it believes needs to be at the heart of how privacy is built in the future.

“Our investment and ongoing excitement in Ethyca reflects the need for a developer-first approach to privacy and compliance as code,” said IA Ventures’ Brad Gillespie, in a statement. “The release of Fides is the culmination of three years of work for the team and a first step towards defining a much-needed open standard for privacy. With Fides, Ethyca shows they’re thinking proactively about how best to solve the higher-order challenges that will shape data privacy discussions over the coming decade. We’re thrilled to support them in their ambitious plan to shift privacy left into the Software Development Life Cycle through open-source developer tools.”

In hearing with Snap, TikTok and YouTube, lawmakers tout new rules to protect kids online

Fallout from revelations around teen mental health on Instagram continues — and not just for Facebook. On Tuesday, policy reps from YouTube, Snap and TikTok faced Congress to talk about kids and online safety, marking the first time the latter two companies appeared in a major tech hearing.

The hearing, held by the Senate Subcommittee on Consumer Protection, Product Safety, and Data Security, managed to stay on topic about half of the time. The committee’s Republican members were keen to steer their rare time with a TikTok executive toward questions about privacy concerns over the company’s relationship with the Chinese government.

Diversions notwithstanding, a few of the hearing’s more useful moments saw the three policy leads pressed to answer yes/no questions about specific policy proposals crawling through Congress. The hearing featured testimony from Snap VP of Global Public Policy Jennifer Stout, TikTok’s VP and Head of Public Policy Michael Beckerman and Leslie Miller, who leads government affairs and public policy at YouTube.

Both YouTube and TikTok called for the U.S. to create comprehensive laws around online privacy, with Beckerman deeming a legal framework for national privacy laws “overdue.” All three companies agreed that parents should have the ability to erase all online data for their children or teens, with Stout pointing out that Snapchat data disappears by design. Still, Snapchat’s own privacy page mentions that the company retains location data for “different lengths of time based on how precise it is and which services you use.”

Senator Ed Markey (D-MA), himself an unlikely TikTok sensation, pushed for what he calls a kids’ “privacy bill of rights for the 21st century” during the hearing. Markey pointed to his proposed changes to the Children and Teens’ Online Privacy Protection Act (COPPA) that would bolster protections for young social media users. That legislation would ban tech companies from collecting the data of users between 13 and 15 years old without explicit consent, implement an “eraser button” that would make it easy to delete minors’ personal data, and more broadly restrict the kind of information that social media platforms can collect to begin with.

Markey pressed each company rep on whether they would support the COPPA changes. Speaking for TikTok, Beckerman said the company supports the proposal but views a standard method for platforms to verify the age of their users as just as essential, if not more so.

Snap wouldn’t commit to the COPPA proposal, and Markey derided Stout for playing “the old game” of tech companies refusing to commit to specifics. YouTube, which was slammed with a historic $170 million FTC fine for COPPA violations in the past, didn’t explicitly commit to anything but pointed to “constructive” talks the company has had with Markey’s staff.

In the hearing, Markey and Senator Richard Blumenthal (D-CT) also highlighted their reintroduction of the KIDS (Kids Internet Design and Safety) Act last month. That bill would protect online users under 16 from engagement-juicing features like autoplay, push alerts and like buttons. It would also ban influencer marketing to kids under 16 and force platforms to create a reporting system for the instances in which they serve harmful content to young users.

TikTok dodges questions about biometric data collection in Senate hearing

In its first-ever Congressional hearing, TikTok successfully dodged questions about what it plans to do with the biometric data its privacy policy permits it to collect on the app’s U.S. users. In an update to the company’s U.S. privacy policy in June, TikTok added a new section that noted the app “may collect biometric identifiers and biometric information” from its users’ content, including things like “faceprints and voiceprints.”

The company was questioned by multiple lawmakers on this matter today during a hearing conducted by the Senate Subcommittee on Consumer Protection, Product Safety, and Data Security. The hearing was meant to focus on social media’s detrimental impacts on children and teens, but often expanded into broader areas of concern as the lawmakers dug into the business models, internal research and policies at Snap, TikTok and YouTube.

Sen. Marsha Blackburn (R-TN) asked specifically why TikTok needed to collect a wide variety of biometric data, “such as faceprints, voiceprints, geolocation information, browsing and search history,” as well as “keystroke patterns and rhythms.”

Instead of directly answering the question, TikTok’s VP and Head of Public Policy Michael Beckerman responded by pointing out that many outside researchers and experts have looked at its policy and found that TikTok actually collects less data than many of its social media peers. (He also later clarified in another round of questioning that keystroke patterns were collected in order to prevent spam bots from infiltrating the service.)

Blackburn pressed on to ask if TikTok was putting together a comprehensive profile — or “virtual dossier” — on each of its users, including younger kids and teens, which included their biometric data combined with their interests and search history.

Beckerman deferred answering this question as well, saying: “TikTok is an entertainment platform where people watch and enjoy and create short-form videos. It’s about uplifting, entertaining content.”

While the senator’s line of questioning was a bit confusing at times — she once referred to this dossier as a “virtual you,” for example — it’s worth noting that we don’t have a full picture today as to what TikTok is doing with the data it collects from its users outside of what’s outlined in its privacy policy and, per TikTok’s 2020 blog post, how some of that data plays a role in its recommendation algorithms. And given the chance to set the record straight over its plans to collect biometric data with regard to minor users, TikTok’s policy head skirted the questions.

In a line of follow-up questions about its data collection practices, led by Senator Cynthia Lummis (R-WY), Beckerman was asked whether this sort of “mass data collection” was necessary to deliver a high-quality experience to TikTok’s users. She noted the company’s policy allows for the collection of a person’s location, their phone’s device model, browsing history inside and outside TikTok, all messages sent on TikTok, IP address and biometric data.

In response, Beckerman said “some of those items that you listed off are things that we’re not currently collecting.”

He also said that the privacy policy states TikTok would get user consent if it were to begin collecting those items in the future.

Though not immediately clear, his statements were likely in reference to the clause about biometric data collection. In June, TikTok declined to detail the product developments that necessitated the addition of biometric data to its list of disclosures about the information it automatically collects from users. But at the time, the company told TechCrunch it would ask for user consent if such data collection practices began.

The senator said the committee would follow up with TikTok on this question.

Both senators were concerned about TikTok’s connection to China, given its parent company is Beijing-based ByteDance. But the issue of over-collection of user data — particularly with regard to children and minors — isn’t just a geopolitical concern or, as Trump believed, a national security threat. It’s a matter of transparency.

Privacy and security experts generally think that users should understand why a company needs the data it collects, what is done with it, and that they should have the right to refuse to share that data. Today, users can somewhat limit data collection by disallowing access to their smartphone’s sensors and other features. Apple, for example, implements opt-outs as part of its mobile operating system, iOS, which pops up consent boxes when an app wants to access your location, microphone, camera or contacts.

But there is much more data that apps can track, even when these items are blocked.

Following the questions about data collection practices, Lummis also wanted to know if TikTok had been built with the goal of keeping users engaged for as long as possible. After all, having a treasure trove of user data could greatly boost this sort of metric.

In reply, Beckerman pointed to the app’s “take a break” reminders and parental controls for screen time management.

Lummis clarified that she wanted to know if “length of engagement” was a metric the company used in order to define success.

Again, Beckerman skirted the question, noting “there’s multiple definitions of success” and that “it’s not just based on how much time somebody’s spending [on the app].”

Lummis then restated the question a couple more times as Beckerman continued to dodge answering directly, saying only that “overall engagement is more important than the amount of time that’s being spent.”

“But is it one of the metrics?” Lummis pushed.

“It’s a metric that I think many platforms check on how much time people are spending on the app,” Beckerman said.

Snapchat to introduce family safety tools to protect minors using its app

Following moves by major tech platforms to better protect minors using their services, Snapchat is preparing to introduce its own set of “family engagement” tools in the coming months. Snap CEO Evan Spiegel teased the planned offering during an interview at the WSJ Tech Live conference this week, where he explained that the new product will essentially function as a family center that gives parents better visibility into how teens are using its service and provides privacy controls.

Spiegel stressed Snapchat’s more private nature as a tool for communicating with friends, noting that Snapchat user profiles were already private by default — something that differentiated it from some social media rivals until recently.

“I think the entire way this service is constructed really promotes a safe experience regardless of what age you are, but we never market our service to people under the age of 13,” he said, then added Snap is now working on new features that would allow parents to feel more comfortable with the app.

“We haven’t announced the name of this product yet, but we basically have a family center so that young people and their parents can use Snapchat together,” Spiegel said. This product would give parents more visibility into who their teenage users may be talking to on Snapchat and their privacy settings, among other things.

“I think that at least helps start a conversation between young people and their parents about what they’re experiencing on our service,” Spiegel said. These types of conversations can be a learning experience for parents and teens alike, and can give parents the opportunity to guide their teens through some of the difficulties that come with using social media — like how to navigate uncomfortable situations, such as what to do if a stranger contacts you.

Snap had shared in June that this sort of work was on its roadmap, at a time when parents who had lost their son to a drug overdose were advocating for the company to work with third-party parental control software. At the time, Snap said it was being careful about sharing private user data with third parties and that it was looking into developing its own parental controls as a solution. (The company recently released tools to crack down on illicit drug sales on Snapchat to address this particular issue.)

Reached for comment on Spiegel’s remarks at the WSJ event, a Snap spokesperson confirmed the new family engagement tools will combine both an educational component as well as tools meant to be used by parents.

“Our overall goal is to help educate and empower young people to make the right choices to enhance their online safety and to help parents be partners with their kids in navigating the digital world,” a spokesperson said. “When we build new products or features, we try to do it in a way that reflects natural human behaviors and relationships — and the parental tools we are developing are meant to give parents better insights to help protect their kids, in ways that don’t compromise their privacy or data security, are legally compliant and offered at no charge to families within Snapchat,” they added.

The company said it looked forward to sharing more details about the family tools “soon.”

The parental controls will be under the purview of Snap’s new Global Head of Platform Safety, Jacqueline Beauchere, who recently joined the company from Microsoft, where she had served as Chief Online Safety Officer.

Her hiring comes as regulatory scrutiny of social media companies, and big tech in general, has heated up.

As U.S. lawmakers consider legislation that would require tech companies to implement new safeguards to better protect minors on their services, tech companies have been trying to get ahead of the coming crackdown by putting into place their own interpretations of those rules now.

Already, many of the top tech platforms used by teens have implemented parental controls, adjusted their default settings to be more private, or both.

TikTok, for example, having just put a multi-million dollar FTC fine behind it for its children’s privacy violations, led the way with the introduction of its “Family Safety Mode” feature in 2020. Those tools became globally available in spring 2020. Then, at the start of this year, TikTok announced it would change the privacy settings and defaults for all its users under the age of 18.

In March, Instagram followed with new teen safety tools of its own, then changed its default settings for minors later in the summer in addition to restricting ad targeting. Google, meanwhile, launched new parental controls on YouTube in February, then further increased its minor protections across Search, YouTube, and other platforms in August. This also included making the default settings more private and limiting ad targeting.

Snap, however, had not yet made any similar moves, despite the fact that the company regularly touts how much its app is used by the younger demographic. Currently, Snap says its app reaches 90% of 13- to 24-year-olds in the U.S., a percentage that has remained fairly consistent over the past few years.

The company, however, is not against the idea of additional legal requirements in the area of minor protections.

Spiegel, like other tech execs lately, agreed that some regulation may be necessary. But he cautioned that regulation isn’t a panacea for the ills big tech has wrought.

“I think the important point to make is that regulation is not a substitute for moral responsibility and for business practices that support the health and well-being of your community — because regulation just happens far too late,” Spiegel said. “So, I think regulation certainly may be necessary in some of these areas — I think other countries have made strides in that regard — but again, unless businesses are proactively promoting the health and well-being of their community, regulators are always going to be playing catch up,” he said.

Facebook stock drops after company warns Apple’s privacy changes to have bigger Q3 impact

Facebook today provided an update on how Apple’s privacy changes have impacted its ad business. The company had already warned investors during its second quarter earnings that it expected to feel an even more significant impact in its ad targeting business by Q3. This morning, it reiterated that point, but also noted that it had been underreporting iOS web conversions by approximately 15%, which had led advertisers to believe the impact was even worse than they had expected.

According to Facebook’s announcement published to its business blog, this exact percentage could vary broadly among individual advertisers. But it said the real-world conversions, including things like sales and app installs, are likely higher than what advertisers are seeing when using Facebook’s analytics.
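As a rough back-of-the-envelope illustration (assuming “underreporting by approximately 15%” means reported figures capture roughly 85% of real-world conversions, an interpretation Facebook’s post does not spell out), the adjustment looks like this:

```python
# Rough illustration only; assumes reported conversions ≈ 85% of actual ones.
reported_conversions = 850                      # what an advertiser sees in analytics
estimated_actual = reported_conversions / 0.85  # ≈ 1,000 real-world conversions
print(round(estimated_actual))
```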

Facebook’s stock has dropped by nearly 4% on this news, as of the time of writing.

This is not the first time Facebook has shared misleading metrics. In the past, it inflated its video ad metrics and didn’t act quickly to correct the problem, leading to a class-action lawsuit. In this case, however, the issue isn’t making Facebook look better than it is, but worse. The company noted it has been hearing from its advertising community that they are seeing a larger-than-planned impact on their ad investments on the network, raising concerns.

Facebook offered advertisers a few tips to help them better understand a campaign’s impact and performance in this new era. It suggested waiting a minimum of 72 hours, or the full length of the optimization window, before evaluating performance rather than making assessments on a daily basis, as before. It also said advertisers should analyze reporting at the campaign level when possible, as some estimated conversions are reported with a delay. And it suggested advertisers choose web events (like a purchase or sign-up) that are most aligned with their core business, among other things.

To improve its measurement, Facebook said it’s working on better conversion modeling, accelerating its investments to address reporting gaps, launching new capabilities to track web conversions, and extending its ability to measure in-app conversions in apps that have already been installed. The company said it would work quickly to fix bugs, including one, previously shared with advertisers, that had recently led to underreporting of approximately 10%.

The company in August explained how it’s been working to adapt its personalized ads business in light of both Apple and Google’s privacy changes and the new regulatory landscape, but those efforts will take time, it said.

Outside of the ad tech updates themselves, Facebook has also been working on new products that would allow advertisers to better position themselves in front of consumers browsing Facebook’s apps. Just last week, for instance, it revamped its business tool lineup with the introduction of new features and expansions of smaller tests that would offer businesses more ways to be discovered. One such test in the U.S. would direct consumers to other businesses and topics directly underneath news feed posts. It also now allows businesses to add WhatsApp buttons to their Instagram profiles and create ads that send Instagram users to WhatsApp business chats.

Facebook has been warning advertisers for some time that Apple’s new privacy features, which allow mobile users to opt out of being tracked across their iOS apps, would cause issues for the way its ad targeting business typically operated. And it repeatedly argued that Apple’s changes would impact small businesses that relied on Facebook ads to reach their customers. When the changes went into effect, Facebook’s concerns were validated as studies found very few consumers are opting into tracking on iOS.

FTC says health apps must notify consumers about data breaches — or face fines

The U.S. Federal Trade Commission (FTC) has warned that apps and devices that collect personal health information must notify consumers if their data is breached or shared with third parties without their permission.

In a 3-2 vote on Wednesday, the FTC approved a new policy statement clarifying the 2009 Health Breach Notification Rule, which requires companies handling health records to notify consumers if their data is accessed without permission, such as the result of a breach. The rule has now been extended to apply to health apps and devices — specifically calling out apps that track fertility data, fitness, and blood glucose — which “too often fail to invest in adequate privacy and data security,” according to FTC chair Lina Khan.

“Digital apps are routinely caught playing fast and loose with user data, leaving users’ sensitive health information susceptible to hacks and breaches,” said Khan in a statement, pointing to a study published this year in the British Medical Journal that found health apps suffer from “serious problems” ranging from the insecure transmission of user data to the unauthorized sharing of data with advertisers.

There have also been a number of high-profile breaches involving health apps in recent years. Babylon Health, a U.K. AI chatbot and telehealth startup, suffered a data breach last year after a “software error” allowed users to access other patients’ video consultations, while period tracking app Flo was recently found to be sharing users’ health data with third-party analytics and marketing services.

Under the new rule, any company offering health apps or connected fitness devices that collect personal health data must notify consumers if their data has been compromised. However, the rule doesn’t define a “data breach” as just a cybersecurity intrusion; unauthorized access to personal data, including the sharing of information without an individual’s permission, can also trigger notification obligations.

“While this rule imposes some measure of accountability on tech firms that abuse our personal information, a more fundamental problem is the commodification of sensitive health information, where companies can use this data to feed behavioral ads or power user analytics,” Khan said.

If companies don’t comply with the rule, the FTC said it will “vigorously” enforce fines of $43,792 per violation per day.

The FTC has been cracking down on privacy violations in recent weeks. Earlier this month, the agency unanimously voted to ban spyware maker SpyFone and its chief executive Scott Zuckerman from the surveillance industry for harvesting mobile data on thousands of people and leaving it on the open internet.

Have ‘The Privacy Talk’ with your business partners

As a parent of teenagers, I’m used to having tough, sometimes even awkward, conversations about topics that are complex but important. Most parents will likely agree with me when I say those types of conversations never get easier, but over time, you tend to develop a roadmap of how to approach the subject, how to make sure you’re being clear, and how to answer hard questions.

And like many parents, I quickly learned that my children have just as much to teach me as I can teach them. I’ve learned that tough conversations build trust.

I’ve applied this lesson about trust-building conversations to an extremely important aspect of my role as the chief legal officer at Foursquare: Conducting “The Privacy Talk.”

The discussion should convey an understanding of how the legislative and regulatory environment is going to affect product offerings, including what’s being done to get ahead of that change.

What exactly is ‘The Privacy Talk’?

It’s the conversation that goes beyond the written, publicly-posted privacy policy, and dives deep into a customer, vendor, supplier or partner’s approach to ethics. This conversation seeks to convey and align the expectations that two companies must have at the beginning of a new engagement.

RFIs may ask a lot of questions about privacy compliance, information security and data ethics. But they’re no match for asking your prospective partner to hop on a Zoom to walk you through their broader approach. Unless you hear it first-hand, it can be hard to discern whether a partner is thinking strategically about privacy, whether they are truly committed to data ethics, and how compliance is woven into their organization’s culture.

Privacy-oriented search app Xayn raises $12M from Japanese backers to go into devices

Back in December 2020 we covered the launch of a new kind of smartphone app-based search engine, Xayn.

“A search engine?!” I hear you say? Well, yes, because despite the convenience of modern search engines’ ability to tailor their search results to the individual, this user-tracking comes at the expense of privacy. This mass surveillance might be what improves Google’s search engine and Facebook’s ad targeting, to name just two examples, but it’s not very good for our privacy.

Internet users are admittedly able to switch to the US-based DuckDuckGo, or perhaps France’s Qwant, but what they gain in privacy, they often lose in user experience and the relevance of search results, through this lack of tailoring.

What Berlin-based Xayn has come up with is personalized but privacy-safe web search on smartphones, replacing the cloud-based AI employed by Google et al. with the AI built into modern smartphones. The result is that no data about you is uploaded to Xayn’s servers.
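The broad idea, sketched below under assumptions of ours rather than anything Xayn has published, is that the network returns generic results while the personalization signal stays on the device:

```python
# Conceptual sketch of on-device re-ranking (not Xayn's actual implementation).
# The server returns unpersonalized results; interest weights learned from the
# user's swipes live only on the phone and are used to re-order results locally.

def rerank_locally(results, interest_weights):
    """Re-order search results using interest weights stored only on-device."""
    def score(result):
        return sum(interest_weights.get(tag, 0.0) for tag in result["tags"])
    return sorted(results, key=score, reverse=True)


results = [  # as delivered by the (personalization-blind) backend
    {"title": "Football scores", "tags": ["sports"]},
    {"title": "New programming language release", "tags": ["programming"]},
]
local_weights = {"programming": 0.9, "sports": 0.1}  # learned from swipes, never uploaded

for item in rerank_locally(results, local_weights):
    print(item["title"])
```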

And this approach is not just for ‘privacy freaks’. Businesses that need search but don’t need Google’s dominant market position are increasingly attracted by this model.

And the evidence comes today with the news that Xayn has raised almost $12 million in Series A funding led by Japanese investors Global Brain and KDDI (a telecommunications operator), with participation from previous backers, including Berlin’s Earlybird VC. Xayn’s total financing now comes to more than $23 million.

It would appear that Xayn’s fusion of a search engine, a discovery feed, and a mobile browser has appealed to these Asian market players, particularly because Xayn can be built into OEM devices.

The result of the investment is that Xayn will now also focus on the Asian market, starting with Japan, as well as Europe.

Leif-Nissen Lundbæk, co-founder and CEO of Xayn, said: “We proved with Xayn that you can have it all: great results through personalization, privacy by design through advanced technology, and a convenient user experience through clean design.”

He added: “In an industry in which selling data and delivering ads en masse are the norm, we choose to lead with privacy instead and put user satisfaction front and center.”

The funding comes as legislation such as the EU’s GDPR and California’s CCPA has raised public awareness about personal data online.

Since its launch, Xayn says its app has been downloaded around 215,000 times worldwide, and a web version of its app is expected soon.

Over a call, Lundbæk expanded on the KDDI aspect of the fundraising: “The partnership with KDDI means we will give users access to Xayn for free, while the corporate – such as KDDI – is the actual customer but gives our search engine away for free.”

The core features of Xayn include personalized search results; a personalized feed of the entire internet, which learns from users’ Tinder-like swipes without collecting or sharing personal data; and an ad-free experience.

Naoki Kamimeada, partner at Global Brain Corporation, said: “The market for private online search is growing, but Xayn is head and shoulders above everyone else because of the way they’re re-thinking how finding information online should be.”

Kazuhiko Chuman, Head of KDDI Open Innovation Fund, said: “This European discovery engine uniquely combines efficient AI with a privacy-protecting focus and a smooth user experience. At KDDI, we’re constantly on the lookout for companies that can shape the future with their expertise and technology. That’s why it was a perfect match for us.”

In addition to the three co-founders Leif-Nissen Lundbæk (Chief Executive Officer), Professor Michael Huth (Chief Research Officer), and Felix Hahmann (Chief Operations Officer), Dr Daniel von Heyl will come on board as Chief Financial Officer, Frank Pepermans will take on the role of Chief Technology Officer, and Michael Briggs will join as Chief Growth Officer.

Opioid addiction treatment apps found sharing sensitive data with third parties

Several widely used opioid treatment recovery apps are accessing and sharing sensitive user data with third parties, a new investigation has found.

As a result of the COVID-19 pandemic and efforts to reduce transmission in the U.S., telehealth services and apps offering opioid addiction treatment have surged in popularity. This rise of app-based services comes as addiction treatment facilities face budget cuts and closures, which has seen both investor and government interest turn to telehealth as a tool to combat the growing addiction crisis.

While people accessing these services may have a reasonable expectation of privacy of their healthcare data, a new report from ExpressVPN’s Digital Security Lab, compiled in conjunction with the Opioid Policy Institute and the Defensive Lab Agency, found that some of these apps collect and share sensitive information with third parties, raising questions about their privacy and security practices.

The report studied 10 opioid treatment apps available on Android: Bicycle Health, Boulder Care, Confidant Health, DynamiCare Health, Kaden Health, Loosid, Pear Reset-O, PursueCare, Sober Grid, and Workit Health. These apps have been installed at least 180,000 times and have received more than $300 million in funding from investment groups and the federal government.

Despite the vast reach and sensitive nature of these services, the research found that the majority of the apps accessed unique identifiers about the user’s device and, in some cases, shared that data with third parties.

Of the 10 apps studied, seven access the Android Advertising ID (AAID), a user-resettable identifier that can be linked to other information to provide insights into identifiable individuals. Five of the apps also access the device’s phone number; three access the device’s unique IMEI and IMSI numbers, which can also be used to uniquely identify a person’s device; and two access a user’s list of installed apps, which the researchers say can be used to build a “fingerprint” of a user to track their activities.
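To illustrate why this combination matters (a generic sketch, not code taken from any of the apps studied), a handful of stable identifiers is enough to derive a single fingerprint that can follow a user across services:

```python
# Generic illustration of device fingerprinting from stable identifiers
# (hypothetical values; not code from the apps in the report).

import hashlib


def device_fingerprint(aaid, imei, installed_apps):
    """Hash several device identifiers into one stable pseudonymous ID."""
    material = "|".join([aaid, imei, ",".join(sorted(installed_apps))])
    return hashlib.sha256(material.encode()).hexdigest()


print(device_fingerprint(
    aaid="38400000-8cf0-11bd-b23e-10b96e40000d",
    imei="356938035643809",
    installed_apps=["recovery-support", "maps", "banking"],
))
```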

Many of the apps examined also obtain location information in some form, which, when correlated with these unique identifiers, strengthens the capability for surveilling an individual person, as well as their daily habits, behaviors, and who they interact with. One of the ways the apps do this is through Bluetooth; seven of the apps request permission to make Bluetooth connections, which the researchers say is particularly worrying because it can be used to track users in real-world locations.

“Bluetooth can do what I call proximity tracking, so if you’re in the grocery store, it knows how long you’re in a certain aisle, or how close you are to someone else,” Sean O’Brien, principal researcher at ExpressVPN’s Digital Security Lab who led the investigation, told TechCrunch. “Bluetooth is an area that I’m pretty concerned about.”

Another major area of concern is the use of tracker SDKs in these apps, which O’Brien previously warned about in a recent investigation that revealed that hundreds of Android apps were sending granular user location data to X-Mode, a data broker known to sell location data to U.S. military contractors, and now banned from both Apple and Google’s app stores. SDKs, or software development kits, are bundles of code that are included with apps to make them work properly, such as collecting location data. Often, SDKs are provided for free in exchange for sending back the data that the apps collect.

“Confidentiality continues to be one of the major concerns that people cite for not entering treatment… existing privacy laws are totally not up to speed.” Jacqueline Seitz, Legal Action Center

While the researchers are keen to point out that they do not categorize all usage of trackers as malicious, particularly as many developers may not even be aware of their existence within their apps, they discovered a high prevalence of tracker SDKs in seven of the 10 apps that revealed potential data-sharing activity. Some SDKs are designed specifically to collect and aggregate user data; this is true even where the SDK’s core functionality is concerned.

But the researchers explain that an app that provides navigation to a recovery center, for example, may also be tracking a user’s movements throughout the day and sending that data back to the app’s developers and third parties.

In the case of Kaden Health, Stripe — which is used for payment services within the app — can read the list of installed apps on a user’s phone, their location, phone number, and carrier name, as well as their AAID, IP address, IMEI, IMSI, and SIM serial number.

“An entity as large as Stripe having an app share that information directly is pretty alarming. It’s worrisome to me because I know that information could be very useful for law enforcement,” O’Brien tells TechCrunch. “I also worry that people having information about who has been in treatment will eventually make its way into decisions about health insurance and people getting jobs.”

The data-sharing practices of these apps are likely a consequence of these services being developed in an environment of unclear U.S. federal guidance regarding the handling and disclosure of patient information, the researchers say, though O’Brien tells TechCrunch that the actions could be in breach of 42 CFR Part 2, a law that outlines strong controls over disclosure of patient information related to treatment for addiction.

Jacqueline Seitz, a senior staff attorney for health privacy at Legal Action Center, however, said this 40-year-old law hasn’t yet been updated to recognize apps.

“Confidentiality continues to be one of the major concerns that people cite for not entering treatment,” Seitz told TechCrunch. “While 42 CFR Part 2 recognizes the very sensitive nature of substance use disorder treatment, it doesn’t mention apps at all. Existing privacy laws are totally not up to speed.

“It would be great to see some leadership from the tech community to establish some basic standards and recognize that they’re collecting super-sensitive information so that patients aren’t left in the middle of a health crisis trying to navigate privacy policies,” said Seitz.

Another likely reason for these practices is a lack of security and data privacy staff, according to Jonathan Stoltman, director at Opioid Policy Institute, which contributed to the research. “If you look at a hospital’s website, you’ll see a chief information officer, a chief privacy officer, or a chief security officer that’s in charge of physical security and data security,” he tells TechCrunch. “None of these startups have that.”

“There’s no way you’re thinking about privacy if you’re collecting the AAID, and almost all of these apps are doing that from the get-go,” Stoltman added.

Google is aware of ExpressVPN’s findings but has yet to comment. However, the report has been released as the tech giant prepares to start limiting developer access to the Android Advertising ID, mirroring Apple’s recent efforts to enable users to opt out of ad tracking.

While ExpressVPN is keen to make patients aware that these apps may violate expectations of privacy, it also stresses the central role that addiction treatment and recovery apps may play in the lives of those with opioid addiction. It recommends that if you or a family member used one of these services and find the disclosure of this data problematic, you contact the Office for Civil Rights at the Department of Health and Human Services to file a formal complaint.

“The bottom line is this is a general problem with the app economy, and we’re watching telehealth become part of that, so we need to be very careful and cautious,” said O’Brien. “There needs to be disclosure, users need to be aware, and they need to demand better.”

Recovery from addiction is possible. For help, please call the free and confidential treatment referral hotline (1-800-662-HELP) or visit findtreatment.gov.
