Veteran digital health pros found virtual eating disorder care startup

For Amanda D’Ambra and Joan Zhang, co-founding an eating disorder care startup was personal: Both struggled with an eating disorder, along with other mental health issues, and received treatment they hoped more people would be able to access.

D’Ambra and Zhang previously worked in digital health before deciding to found Arise, a New York-based virtual eating disorder care company. Arise aims to provide education, care and long-term support for anyone experiencing disordered eating, pairing personalized care plans with licensed providers.

One thing the founders wish they saw more of at other companies is “seeing people as humans first and supporting them in whatever it is in life that they prioritize.”

Based on their personal experiences, Zhang and D’Ambra say other mental health factors shape a patient’s journey, which is why they are trying to personalize patient care.

“There is so much complexity towards what contributed to the eating disorder, and it’s just simply not about just the food, and it’s not just about the body,” Zhang said. “I think the other really big thing is starting to shift away from, ‘Oh, this is a need problem’, to seeing the broader systemic problem and how it contributes to this culture of disordered eating and eating disorders that has been created.”

According to the National Association of Anorexia Nervosa and Associated Disorders, eating disorders are the second-deadliest mental illness (after opioid use disorder), and 26% of people with an eating disorder attempt suicide.

Additionally, BIPOC (Black, Indigenous, and People of Color) are “significantly” less likely to receive treatment compared to white people, and close to 50% of LGBTQIA+ people reported disordered eating behaviors.

D’Ambra and Zhang said they hope Arise can be a welcoming, safe and open space for underserved populations by being “community focused.”

“What we aim to build is a more accessible and inclusive model that is going to serve a much broader pool of folks who really are experiencing eating disorders and disordered eating, but are not getting recognized or getting support,” D’Ambra said.

Arise has already garnered support: The company announced an oversubscribed $4 million seed round led by BBG Ventures (whose investments include Alula and Reside Health) and Greycroft (whose investments include Bumble and Boulder Care), with participation from Cityblock co-founder Iyah Romm and Sonder Health chairperson Sylvia Romm.

The company is slated to launch its pilot program later this summer, though the pilot will only serve up to 30 patients. According to the company, the pilot is likely to be “a short-term thing.”

After the beta trial, Arise hopes to serve around 100 active patients by the end of the year. The company will initially operate in New York, North Carolina and potentially Texas; however, since it plans to partner with insurance providers and Medicaid, expansion will depend on which markets it can break into.

The company is emerging at a time when mental and digital health companies have seen losses in personnel and support.

Cerebral lost various insurance contracts after the Department of Justice began investigating the company for potential violations of the Controlled Substances Act. Additionally, Talkspace and BetterHelp have been in the spotlight as the U.S. Senate reviews potential privacy rights violations.

The Senate is asking these mental health app providers to clarify their data collection and sharing policies after reports suggested the companies could be sharing data with Meta and Google.

“When it comes to mental health especially, we take member data very seriously and believe strongly in the need to ensure privacy is respected and protected,” D’Ambra told TechCrunch. “For us, that comes in our approach to care, in that we bring it right to people’s homes. Importantly, it also means ensuring data is protected and put back in the hands of our members to empower their healing, and not sold to third parties for advertising or profit.”

Typically, third-party digital health companies do not fall within HIPAA’s purview — despite handling sensitive patient information — and land in a regulatory gray space. It wasn’t until September 2021 that the Federal Trade Commission issued a policy stating health apps must comply with the Health Breach Notification Rule.

UK’s Online Safety Bill falls short on protecting speech and tackling harms, warns committee

Another UK parliamentary committee has weighed in on the government’s controversial plan to regulate Internet content with a broad-brush focus on ‘safety’.

The Digital, Culture, Media and Sport (DCMS) Committee warned in a detailed report today that it has “urgent concerns” the draft legislation “neither adequately protects freedom of expression nor is clear and robust enough to tackle the various types of illegal and harmful content on user-to-user and search services”.

Among the committee’s myriad worries is how fuzzily the bill defines different types of harm, such as illegal content and designations of harms. MPs call out the government’s failure to include more detail in the bill itself, which makes its impact harder to judge, since key components (like Codes of Practice) will follow via secondary legislation and so aren’t yet on the table.

That general vagueness, combined with the complexities of the chosen “duty of care” approach (which the report notes in fact breaks down into several specific duties: for illegal content; for content that poses a risk to children; and, for a subset of high-risk P2P services, for content that poses a risk to adults), means the proposed framework may not be able to achieve the sought-after “comprehensive safety regime”, in the committee’s view.

The bill also creates risks for freedom of expression, per the committee, which has recommended the government incorporate a balancing test for the regulator, Ofcom, to assess whether platforms have “duly balanced their freedom of expression obligations with their decision making”.

The risk that platforms respond to sudden, ill-defined liability around broad swathes of content by over-removing speech, chilling freedom of expression in the UK, is one of the many criticisms of the bill that the committee appears to be picking up on.

It suggests the government reframe definitions of harmful content and relevant safety duties to bring the bill in line with international human rights law, in order to safeguard against the risk of over-removal by providing “minimum standards against which a provider’s actions, systems and processes to tackle harm, including automated or algorithmic content moderation, should be judged”.

Even on child safety — a core issue UK ministers have repeatedly pinned to the legislation — the committee flags “weaknesses” in the bill that they assert mean the proposed regime “does not map adequately onto the reality of the problem”.

They have called for the government to go further in this area, urging that the bill be expanded to cover “technically legal” practices such as breadcrumbing (aka “where perpetrators deliberately subvert the thresholds of criminal activity and for content removal by a service provider”), citing witness testimony which suggests the practice, while not in fact illegal, “nonetheless forms part of the sequence for online CSEA [child sexual exploitation and abuse]”.

Similarly, the committee suggests the bill needs to go further to protect women and girls against types of online violence and abuse specifically directed at them (such as “tech-enabled ‘nudifying’ of women and deepfake pornography”).

On Ofcom’s powers of investigation of platforms, the committee argues they need to be further strengthened — urging amendments to give the regulator the power to “conduct confidential auditing or vetting of a service’s systems to assess the operation and outputs in practice”; and to “request generic information about how ‘content is disseminated by means of a service'”, with MPs further suggesting the bill should provide more specific detail about the types of data Ofcom can request from platforms (presumably to avoid the risk of platforms seeking to evade effective oversight).

However, on enforcement, the committee has concerns in the other direction: It is worried about a lack of clarity over how Ofcom’s very substantial incoming powers may be used against platforms.

It has recommended a series of tweaks, such as making clear these powers only apply to in-scope services.

MPs are also calling for a redrafting of so-called “technology notices”, which will enable the regulator to mandate the use of new technology (following “persistent and prevalent” failings of the duty of care). They say the scope and application of this power should be “more tightly” defined, with more practical information provided on the actions required to bring providers into compliance, as well as more detail on how Ofcom will test whether the use of such a power is proportionate.

Here the committee flags issues of potential business disruption. It also suggests the government take time to evaluate whether these powers are “appropriately future-proofed given the advent of technology like VPNs and DNS over HTTPS”.

Other recommendations in the report include a call for the bill to contain more clarity on the subject of redress and judicial review.

The committee also warns against the government creating a dedicated joint committee to oversee online safety and digital regulation, arguing that parliamentary scrutiny is “best serviced by the existing, independent, cross-party select committees and evidenced by the work we have done and will continue to do in this area”.

It remains to be seen how much notice the government takes of the committee’s recommendations, although the secretary of state for digital, Nadine Dorries, has previously suggested she is open to taking on board parliamentary feedback on the sweeping package of legislation.

The DCMS Committee’s report follows earlier recommendations, published in December, by a parliamentary joint committee focused on scrutinizing the bill, which also warned that the draft legislation risked falling short of the government’s safety aims.

The government published the draft Online Safety bill back in May 2021, setting out a long-trailed plan to impose a duty of care on Internet platforms with the aim of protecting users from a swathe of harms, ranging from already illegal content such as terrorist propaganda, child sexual abuse material and hate speech, to more broadly problematic but not necessarily illegal content such as bullying or the promotion of eating disorders or suicide (which may create disproportionate risks for younger users of social media platforms).

Speaking to the joint committee in November, Dorries predicted the legislation will usher in a systemic change to Internet culture — telling MPs and peers it will create “huge, huge” change to how Internet platforms operate.

The bill, which is still making its way through parliament, targets a broad range of Internet platforms and envisages enforcing safety-focused governance standards via regulated Codes of Practice, overseen by Ofcom in an expanded role, including incoming powers to issue substantial penalties for breaches.

The sweeping scope of the regulation — the intent for the law to target not just illegal content spreading online but material that falls into more of a gray area, where restrictions risk impinging on freedom of expression — means the proposal has attracted huge criticism from civil liberties and digital rights groups, as well as from businesses concerned about liability and the compliance burden.

In parallel, the government has been stepping up attacks on platforms’ use of end-to-end encryption, deploying rhetoric that seeks to imply robust security is a barrier to catching pedophiles (see, for example, the government’s recently unveiled NoPlaceToHide PR push to try to turn the public against E2E encryption). So critics are also concerned that ministers are trying to subvert Internet security and privacy by recasting good practices as barriers to a goal of imposing ‘child safety’ through mass digital surveillance.

On that front, in recent months, the Home Office has also been splashing a little taxpayer cash to try to foster the development of technologies which could be applied to E2EE systems to scan for child sexual abuse material — which it claims could offer a middle ground between robust security and law enforcement’s data access requirements.

Critics of the bill already argue that using a trumped up claim of child ‘protection’ as a populist lever to push for the removal of the strongest security and privacy protections from all Internet users — simultaneously encouraging a cottage industry of commercial providers to spring up and tout ‘child protection’ surveillance services for sale — is a lot closer to gaslighting than safeguarding, however.

Zooming back out, there is also plenty of concern over the risk of the UK over-regulating its digital economy.

And of the bill becoming a parliamentary “hobby horse” for every type of online grievance, as one former minister of state put it, with the potential for complex and poorly defined content regulation to end up as a disproportionate burden on UK startups vs tech giants like Facebook, whose self-serving algorithms and content moderation fuelled calls for Internet regulation in the first place, as well as being hugely harmful to UK Internet users’ human rights.

WW launches Kurbo, a hotly debated ‘healthy eating’ app aimed at kids

Kurbo Health, a mobile weight loss solution designed to tackle childhood obesity, which was acquired for $3 million by WW (the rebranded Weight Watchers), has now relaunched as Kurbo by WW — and not without some controversy. Pre-acquisition, the startup was focused on democratizing access to research, behavior modification techniques and other tools that were previously only available through expensive programs run by hospitals or other centers.

As a WW product, however, there are concerns that parents putting kids on “diets” will lead to increased anxiety, stress, and disordered eating — in other words, Kurbo will make the problem worse, rather than solving it.

The Kurbo app first launched at TechCrunch Disrupt NY 2014. Founder Joanna Strober, a venture investor and board member at Blue Nile and eToys, explained she was driven to develop Kurbo after struggling to help her own child. Mostly, she came across programs that cost money, were held at inconvenient times for working parents, or were dubbed “obesity centers,” which no child wanted to be associated with.

Her child found eventual success with the Stanford Pediatric Weight Loss Program, but this involved in-person visits and pen-and-paper documentation.

Together with Kurbo Health’s co-founder Thea Runyan, who has a Master’s in Public Health and had worked at the Stanford center for 12 years, Strober saw an opportunity to bring the research to more people by creating a mobile, data-driven program for kids and families.

They licensed Stanford’s program, which then became Kurbo Health.

The company raised funds from investors including Signia Ventures, Data Collective, Bessemer Venture Partners and Promus Ventures, as well as angels like Susan Wojcicki, CEO of YouTube; Greg Badros, former VP of Engineering and Product at Facebook; and Esther Dyson (EDventure), among others.

At launch, the app was designed to encourage healthier eating patterns without parents actually being able to see the child’s food diary. Instead, parents set a reward that was doled out simply for the child’s participation. That is, the parents couldn’t see what the child ate, specifically, which allowed them to stop playing “food police.”

Unlike adult-oriented apps like MyFitnessPal or Noom, Kurbo didn’t show kids metrics like calories, sugars, carbs and fat; instead, it categorized their food choices as “red,” “yellow,” and “green.” However, no foods were designated as “off limits”; the app simply encouraged fewer reds and more greens.

The program also included an option for virtual coaching.

As a WW product, the program has remained somewhat the same. There are still the color-coded food categorizations and optional live coaching, via a subscription. The app also now includes tools that teach meditation, recipe videos, and games that focus on healthy lifestyles. Subscribers gain access to one-on-one 15-minute virtual sessions with coaches whose professional backgrounds include counseling, fitness and other nutrition-related fields.

However, there are also things like a place to track measurements, goals like “lose weight,” and Snapchat-style “tracking streaks.”

While the original program was designed to be a solution for parents with children who would have otherwise had to seek expensive medical help for obesity issues, the association with parent company and acquirer WW has led to some backlash.

Today, body positivity and fat acceptance movements have gone mainstream, encouraging people to be confident in their own bodies and not hate themselves for being overweight. The general thinking is that when people respect themselves, they become more likely to care for themselves — and this will extend to making healthier food and lifestyle choices.

Meanwhile, food tracking and dieting programs often lead to failure and shame — especially when people start to think of some food as “bad” or a “cheat,” instead of just something to be eaten in moderation. And excessive tracking can even lead to disordered eating patterns for some people, studies have found.

In addition, WW has already been under fire for extending its weight loss program to teens aged 13-17 for free, and the launch of what’s seen as a “dieting app for kids” certainly isn’t quieting the backlash.

That said, when positive reinforcement is used correctly, it can work for weight loss. As TIME reported, an independent study by Massachusetts General Hospital found the red-yellow-green traffic light approach effective in adults, and another study, presented at the Biennial Childhood Obesity Conference, found it worked in children, with 84% reducing their BMI after 21 weeks.

“According to recent reports from the World Health Organization, childhood obesity is one of the most serious public health challenges of the 21st century. This is a global public health crisis that needs to be addressed at scale,” said Joanna Strober, co-founder of Kurbo, in a statement about the launch. “As a mom whose son struggled with his weight at a young age, I can personally attest to the importance and significance of having a solution like Kurbo by WW, which is inherently designed to be simple, fun and effective,” she said.

That said, it’s one thing for a parent to work in conjunction with a doctor to help a child with a health issue, but parents who foist a food tracking app on their kids may not get the same results. In fact, they may even cause the child to develop eating disorders that weren’t present before. (And no, just because a child is overweight, that doesn’t necessarily mean they’re suffering from an “eating disorder.”)

Many other factors could be causing a child’s unexpected weight gain beyond an interest in eating high-calorie foods, including health ailments, hormone or chemical imbalances, medication side effects, puberty and other growth spurts, genetics, and more.

Parents may also be part of the problem, by simply bringing unhealthy food into the house because it’s more affordable or because they aren’t aware of things like hidden sugars or how to avoid them. Or perhaps they’re putting money into a child’s school lunch account, without realizing the child is able to spend it on vending machine snacks, sodas, or off-menu items like pizza and chips.

The child may also suffer from underlying health problems, like asthma or allergies, that make it more difficult for them to be active.

In other words, a program like this is something that parents should approach with caution. And it’s certainly one where the child’s doctor should be involved at every stage — including whether or not it’s actually needed at all.

Social media firms agree to work with UK charities to set online harm boundaries

Social media giants, including Facebook-owned Instagram, have agreed to fund UK charities to make recommendations that the government hopes will speed up decisions about removing content that promotes suicide, self-harm or eating disorders on their platforms.

The development follows the latest intervention by health secretary Matt Hancock, who met with representatives from Facebook, Instagram, Twitter, Pinterest, Google and others yesterday to discuss what they’re doing to tackle a range of online harms.

“Social media companies have a duty of care to people on their sites. Just because they’re global doesn’t mean they can be irresponsible,” he said today.

“We must do everything we can to keep our children safe online so I’m pleased to update the house that as a result of yesterday’s summit, the leading global social media companies have agreed to work with experts… to speed up the identification and removal of suicide and self-harm content and create greater protections online.”

However, he failed to get any new commitments from the companies to do more to tackle anti-vaccination misinformation, despite saying last week that he would be leaning heavily on the tech giants to remove such content, warning it posed a serious risk to public health.

Giving an update in parliament this afternoon on his latest social media summit, Hancock said the companies had agreed to do more to address a range of online harms, while emphasizing there’s more for them to do, including addressing anti-vaccination misinformation.

“The rise of social media now makes it easier to spread lies about vaccination so there is a special responsibility on the social media companies to act,” he said, noting that coverage for the measles, mumps and rubella vaccination in England decreased for the fourth year in a row last year — dropping to 91%.

There has been a rise in confirmed measles cases from 259 to 966 over the same period, he added.

With no sign of an agreement from the companies to take tougher action on anti-vaccination misinformation, Hancock was left to repeat their preferred talking point to MPs, segueing into suggesting social media has the potential to be a “great force for good” on the vaccination front — i.e. if it “can help us to promote positive messages” about the public health value of vaccines.

For the two other online harm areas of focus, suicide/self-harm content and eating disorders, suicide support charity Samaritans and eating disorder charity Beat were named as the two UK organizations that would be working with the social media platforms to make recommendations for when content should and should not be taken down.

“[Social media firms will] not only financially support the Samaritans to do the work but crucially Samaritans’ suicide prevention experts will determine what is harmful and dangerous content, and the social media platforms committed to either remove it or prevent others from seeing it and help vulnerable people get the positive support they need,” said Hancock.

“This partnership marks for the first time globally a collective commitment to act, to build knowledge through research and insights — and to implement real changes that will ultimately save lives,” he added.

The Telegraph reports that the value of the financial contribution from the social media platforms to the Samaritans for the work will be “hundreds of thousands” of pounds. And during questions in parliament MPs pointed out the amount pledged is tiny vs the massive profits commanded by the companies. Hancock responded that it was what the Samaritans had asked for to do the work, adding: “Of course I’d be prepared to go and ask for more if more is needed.”

The minister was also pressed from the opposition benches on the timeline for results from the social media companies on tackling “the harm and dangerous fake news they host”.

“We’ve already seen some progress,” he responded — flagging a policy change announced by Instagram and Facebook back in February, following a public outcry after a report about a UK schoolgirl whose family said she killed herself after being exposed to graphic self-harm content on Instagram.

“It’s very important that we keep the pace up,” he added, saying he’ll be holding another meeting with the companies in two months to see what progress has been made.

“We’ll expect… that we’ll see further action from the social media companies. That we will have made progress in the Samaritans being able to define more clearly what the boundary is between harmful content and content which isn’t harmful.

“In each of these areas about removing harms online the challenge is to create the right boundary in the appropriate place… so that the social media companies don’t have to define what is and isn’t socially acceptable. But rather we as society do.”

In a statement following the meeting with Hancock, a spokesperson for Facebook and Instagram said: “We fully support the new initiative from the government and the Samaritans, and look forward to our ongoing work with industry to find more ways to keep people safe online.”

The company also noted that it’s been working with expert organisations, including the Samaritans, for “many years to find more ways to do that” — suggesting it’s quite comfortable playing the familiar political game of ‘more of the same’.

That said, the UK government has made tackling online harms a stated policy priority — publishing a proposal for a regulatory framework intended to address a range of content risks earlier this month, when it also kicked off a 12-week public consultation.

Though there’s clearly a long road ahead to agree a law that’s enforceable, let alone effective.

Hancock resisted providing MPs with any timeline for progress on the planned legislation — telling parliament “we want to genuinely consult widely”.

“This isn’t really an issue of party politics. It’s a matter of getting it right so that society decides on how we should govern the Internet, rather than the big Internet companies making those decisions for themselves,” he added.

The minister was also asked by the shadow health secretary, Jonathan Ashworth, to guarantee that the legislation will include provision for criminal sentences for executives for serious breaches of their duty of care. But Hancock failed to respond to the question.