Social media a factor in death of UK schoolgirl, inquest finds

The inquest into the death of British schoolgirl Molly Russell has concluded that social media was a factor in her death, the BBC reports.

The 14-year-old had viewed thousands of pieces of content about self-harm and suicide on online platforms, including Instagram and Pinterest, prior to her death in November 2017.

Reaching a conclusion on the North London Coroner’s Court inquest into Russell’s death today, coroner Andrew Walker said the “negative effects of online content” were a factor in her death and such content “shouldn’t have been available for a child to see”.

The tragedy has led to a number of high-level interventions by UK lawmakers, with Instagram boss Adam Mosseri being called in for talks with the then-health secretary, Matt Hancock, in 2019 to discuss the platform’s handling of content that promotes suicide and self-harm.

The government has also claimed to be prioritizing children’s safety by putting it at the core of incoming content moderation legislation (aka the Online Safety Bill, which was presented to parliament as a first draft in May 2021). An age-appropriate design code also came into force in the UK last year, requiring platforms to apply recommended account settings for minors to protect them from profiling and other online safety risks.

In further remarks today, the coroner said: “It’s likely the material viewed by Molly… affected her mental health in a negative way and contributed to her death in a more than minimal way.”

“It would not be safe to leave suicide as a conclusion — she died from an act of self harm while suffering from depression and the negative effects of online content,” he added.

The BBC reports that the coroner will now compile a “prevention of future deaths” report setting out his concerns.

He will also write to the two social media firms which were ordered to give evidence, it said.

Instagram and Pinterest

Executives from Meta, Instagram’s parent, and Pinterest were both ordered to testify at the inquest — which was shown material Russell had viewed on their platforms.

A child psychologist who gave evidence to the inquest earlier this month described content the schoolgirl had engaged with online as “very disturbing” — and said it would “certainly affect her and made her feel more hopeless”, per earlier BBC reporting.

In his own testimony, Molly’s father, Ian Russell, described what he saw looking through her web history after her death as “the bleakest of worlds”. He also told the inquest that much of the “dark, graphic, harmful material” she had been able to view online seemed to “normalise” self-harm and suicide.

The schoolgirl’s use of social media extended to having accounts on other services including Twitter and YouTube, the inquest also heard.

Meta’s representative who gave evidence, Elizabeth Lagone — the tech giant’s head of health & well-being policy — defended posts about suicide and depression that the schoolgirl had seen on Instagram prior to her death, describing them as “safe” during testimony earlier this week.

The BBC also reports that Lagone told the inquest she thought it was “safe for people to be able to express themselves”, and that content the 14-year-old had viewed was “nuanced and complicated”.

Also giving evidence to the inquest, Pinterest’s Judson Hoffman — the photo-sharing platform’s global head of community operations — apologized for content the schoolgirl had seen, saying Pinterest had not been safe when she used it.

The inquest heard that the platform had emailed images to the schoolgirl prior to her death under headings such as “10 depression pins you might like” and “depression recovery, depressed girl and more pins trending on Pinterest” — notification emails that were presumably generated automatically, with the content curation based on behavioral profiling of the schoolgirl’s activity on the platform.

In remarks following the coroner’s conclusion today, Meta said it would “carefully consider” his report when it sees it.

Here’s Meta’s statement:

“Our thoughts are with the Russell family and everyone who has been affected by this tragic death. We’re committed to ensuring that Instagram is a positive experience for everyone, particularly teenagers, and we will carefully consider the Coroner’s full report when he provides it. We’ll continue our work with the world’s leading independent experts to help ensure that the changes we make offer the best possible protection and support for teens.” 

Pinterest also sent us this statement:

“Our thoughts are with the Russell family. We’ve listened very carefully to everything that the Coroner and the family have said during the inquest. Pinterest is committed to making ongoing improvements to help ensure that the platform is safe for everyone and the Coroner’s report will be considered with care. Over the past few years, we’ve continued to strengthen our policies around self-harm content, we’ve provided routes to compassionate support for those in need and we’ve invested heavily in building new technologies that automatically identify and take action on self-harm content. Molly’s story has reinforced our commitment to creating a safe and positive space for our Pinners.”

This summer’s change of UK prime minister and (yet another) ministerial reshuffle has led to a pause on the Online Safety Bill’s passage through parliament and a partial rethink around elements of the bill touching ‘legal but harmful’ content. But today’s inquest conclusion is likely to apply further pressure on the government to get the legislation through, since the “distressing” material linked by the coroner to Russell’s death falls exactly in that greyer area.

The new secretary of state for digital, Michelle Donelan, stressed earlier this month that while the government does want changes to the bill around ‘legal but harmful’ content, there won’t be any changes to planned restrictions for children — claiming kids’ online safety remains a core priority for new prime minister Liz Truss’ government.

Social media a factor in death of UK schoolgirl, inquest finds by Natasha Lomas originally published on TechCrunch

House Democrats debut new facial recognition bill

A group of House Democrats has unveiled a new bill that aims to put limits on the use of facial recognition technologies by law enforcement agencies across the United States.

Dubbed the Facial Recognition Act, the bill would compel law enforcement to obtain a judge-authorized warrant before using facial recognition. By adding the warrant requirement, law enforcement would first have to show a court it has probable cause that a person has committed a serious crime, rather than allowing largely unrestricted use of facial recognition under the existing legal regime.

The bill also bars law enforcement from using facial recognition for certain purposes, such as immigration enforcement or monitoring peaceful protests, and prohibits using a facial recognition match as the sole basis for establishing probable cause for someone’s arrest.

If passed, the bill would also require law enforcement to annually test and audit their facial recognition systems, and to provide detailed reports of how facial recognition systems are used in prosecutions. It would also require police departments and agencies to purge databases of photos of children who were subsequently released without charge, whose charges were dismissed or who were acquitted.

Facial recognition broadly refers to a range of technologies that allow law enforcement, federal agencies and private and commercial customers to track people using a snapshot or photo of their faces. The use of facial recognition has grown in recent years despite fears that the technology is flawed, disproportionately misidentifies people of color (which has led to wrongful arrests) and harms civil liberties. It is still deployed against protesters, used to investigate minor crimes and relied on to justify arrests of individuals from a single face match.

Some cities, states and police departments have limited their use of facial recognition in recent years. San Francisco became the first city to ban the use of facial recognition by its own agencies, and Maine and Massachusetts have both passed laws curbing their powers — though all have carved out exemptions of varying degrees for law enforcement or prosecutorial purposes.

But the current patchwork of laws across the U.S. still leaves hundreds of millions of citizens without any protections at all.

“Protecting the privacy of Americans — especially against a flawed, unregulated, and at times discriminatory technology — is my chief goal with this legislation,” said Rep. Ted Lieu (D-CA, 33rd District) in a statement announcing the bill alongside colleagues Sheila Jackson Lee (D-TX, 18th District), Yvette Clarke (D-NY, 9th District) and Jimmy Gomez (D-CA, 34th District).

“Our bill is a workable solution that limits law enforcement use of [facial recognition technology] to situations where a warrant is obtained showing probable cause that an individual committed a serious violent felony,” Lieu added.

Gomez, who was one of 28 members of Congress misidentified as criminals in a mugshot database by Amazon’s facial recognition software in 2018, said that there is “no doubt that, left unchecked, the racial and gender biases which exist in FRT will endanger millions of Americans across our country and in particular, communities of color.”

The bill has so far received glowing support from privacy advocates, rights groups and law enforcement-adjacent groups and organizations alike. Woodrow Hartzog, a law professor at Boston University, praised the bill for strengthening baseline rules and protections across the U.S. “without preempting more stringent limitations elsewhere.”

House Democrats debut new facial recognition bill by Zack Whittaker originally published on TechCrunch

Google rolls out tool to request removal of personal info from search results, will later add proactive alerts

This spring, Google announced it would expand the types of personal information users could request to have removed from Google Search results to include contact information, like a phone number, address or email. At Google’s “Search On” event today, where the company unveiled a number of announcements related to its Search products and services, it said this feature would now be rolling out widely to users in the U.S. and would later expand to include alerts.

Originally, Google had said the feature would arrive in the Google App in the “coming months” without giving an exact launch date. Today, Google says the “Results About You” tool will become accessible to all English-language users in the U.S. within the next few weeks. It will also introduce a new, previously unreported feature that will allow users to receive proactive alerts about their contact information appearing in Search results.

When available, users will be able to find the tool in the Google App or by clicking on the three dots next to an individual Google Search result.

Of course, a few users already discovered a partial launch of the feature in earlier phases of this rollout. Last week, for example, some people reported already seeing the “Results About You” option appear in the Google app for Android.


When making a request, you can ask Google to remove a result because it shows your personal contact information, because it shows your contact information with an intent to harm you, because it shows other personal information, because it contains illegal information, or because the information is outdated.

After making a removal request, you can also follow its progress in the app where you can filter between the requests being processed and those that have been approved.

The new alerting feature, meanwhile, won’t arrive until early next year. However, when available, users will be able to receive notifications about new Google Search results that contain their contact information so they can quickly act to request its removal, if they choose.

The service arrives at a time when there’s been much discussion about the threats associated with doxing — a way to threaten or harass someone by revealing their personal information to the public without their permission. This is often done to silence someone because of their beliefs or opinions, and is considered a form of cyberbullying. But unlike traditional online trolling, where bad actors can simply be blocked and reported, doxing can invite real-world harm as people’s home addresses and contact information are exposed.


A 2017 Pew Internet study indicated there had been a slight increase in online harassment, including doxing, since its prior analysis in 2014, with some 41% of U.S. adults experiencing harassment. That number has stayed consistent over the years, as Pew noted last January that roughly four in ten Americans continue to experience online harassment, with many citing politics or religion as the reason why they were targeted. (Doxing-related harassment would represent a subset of these numbers, we should note.)

In recent years, online platforms have taken stronger positions on doxing, with Reddit enacting subreddit bans over the practice, and YouTube in 2019 releasing an updated harassment policy that took a stronger stance on threats and personal attacks, including those associated with doxing. This year, Meta’s Oversight Board also pushed the company to tighten its rules around doxing, noting the practice disproportionately affects groups such as women, children and LGBTQIA+ people. Meta later updated its policy as a result of that guidance.


Prior to the launch of “Results About You,” Google offered other removal options, allowing users whose banking and credit card details had been published online to request their removal. It also allowed people under 18 or their parents to request the deletion of their photos from search results, and had protections that allowed users to request that nonconsensual explicit imagery (aka “revenge porn”) be taken down. And in Europe, Google has to comply with local laws around takedown requests related to the “right to be forgotten.”

This change isn’t the U.S. equivalent of that, however, Google notes.

“Even though removing these results doesn’t scrub your contact information from the web overall, we’re doing everything we can to safeguard your information on Google Search,” said Danny Sullivan, Google’s public liaison for Search.

“That’s why we’re also making it easier for you to keep tabs on new results about you. We know that keeping track of your personal information online can sometimes feel like a game of whack-a-mole, so starting early next year, you’ll be able to opt into alerts if new results with your contact information appear so you can quickly request their removal. That way, you can have peace of mind that we’re helping your personal information stay personal,” he added.


Google rolls out tool to request removal of personal info from search results, will later add proactive alerts by Sarah Perez originally published on TechCrunch

Cloudflare wants to replace CAPTCHAs with Turnstile

Ahead of its Connect conference in October, Cloudflare this week announced an ambitious new project called Turnstile, which seeks to do away with the CAPTCHAs used throughout the web to verify that visitors are human. Available to site owners at no charge, Cloudflare customers or not, Turnstile chooses from a rotating suite of “browser challenges” to check that visitors to a webpage aren’t, in fact, bots.

CAPTCHAs, the challenge-response tests most of us have encountered when filling out forms, have been around for decades, and they’ve been relatively successful at keeping bot traffic at bay. But the rise of cheap labor, bugs in various CAPTCHA flavors and automated solvers have begun to poke holes in the system. Several websites offer human- and AI-backed CAPTCHA-solving services for as little as $0.50 per thousand solved CAPTCHAs, and some researchers claim AI-based attacks can successfully solve CAPTCHAs used by the world’s most popular websites.

Cloudflare itself was once a CAPTCHA user. But according to CTO John Graham-Cumming, the company was never quite satisfied with it — if Cloudflare’s public rallying cries hadn’t made that clear. In a conversation with TechCrunch, Graham-Cumming listed what he sees as the many downsides of CAPTCHA technology, including poor accessibility (visual disabilities can make it impossible to solve a CAPTCHA), cultural bias (CAPTCHAs assume familiarity with objects like U.S. taxis) and the strains that CAPTCHAs place on mobile data plans.


“The biggest issue with CAPTCHA is that user experience is terrible. As computers have gotten better at solving them, the user experience has only gotten worse,” Graham-Cumming said in an email interview.

Cloudflare at one point moved to a service called hCaptcha — to mixed reviews. One frequent challenge asked users to enter their name, say whether they prefer eggplants or carrots and click every one of 27 images showing a train. The blowback — and the fees imposed by some CAPTCHA services — is part of what spurred Cloudflare to develop its own alternative, according to Graham-Cumming.

“We’ve been working on a solution for several years and blogged a few months back about how we have decreased our own CAPTCHA usage by 91%. Since we’ve proven it worked for us, we wanted to give everyone the option of getting rid of CAPTCHA,” he added.

Turnstile automatically chooses a browser challenge based on “telemetry and client behavior exhibited during a session,” Cloudflare says, rather than factors like login cookies. After running non-interactive JavaScript challenges to gather signals about the visitor and browser environment and using AI models to detect features and visitors who’ve passed a challenge before, Turnstile fine-tunes the difficulty of the challenge to the specific request — avoiding having users solve a puzzle.

To deploy Turnstile, web admins create a Cloudflare account and obtain the necessary embed code, which they then paste into their website’s code. After adding a server-side call to Cloudflare’s Turnstile API, the service goes live. Any site can call the API.

“If you’re using an existing CAPTCHA service today, it’s just a find-and-replace on the code string,” Graham-Cumming said. “It’s compatible with any other network provider … You don’t have to use any other Cloudflare services, like our content delivery network, to use Turnstile.”
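To make the server-side step concrete, here’s a minimal TypeScript sketch of what validating a Turnstile token might look like, modeled on the “siteverify” pattern CAPTCHA-style services commonly document. Treat the endpoint URL, field names and response shape below as assumptions drawn from that pattern rather than a definitive integration guide:

```typescript
// Hypothetical server-side check of a Turnstile token.
// Endpoint and parameter names follow the "siteverify" pattern;
// treat exact details as assumptions, not gospel.

interface SiteverifyOutcome {
  success: boolean;
  "error-codes"?: string[];
}

async function verifyTurnstileToken(
  token: string,      // value the client-side widget posts with the form
  secretKey: string,  // per-site secret issued via the Cloudflare dashboard
  remoteIp?: string   // optional visitor IP, if you choose to forward it
): Promise<boolean> {
  const body = new URLSearchParams({ secret: secretKey, response: token });
  if (remoteIp) body.set("remoteip", remoteIp);

  const res = await fetch(
    "https://challenges.cloudflare.com/turnstile/v0/siteverify",
    { method: "POST", body }
  );
  const outcome = (await res.json()) as SiteverifyOutcome;
  return outcome.success;
}
```

The client side is just the embed snippet mentioned above: the widget runs its challenge invisibly, deposits a token into the form, and the server validates that token once per submission.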

A diagram showing how Cloudflare’s Turnstile system works. (Image Credits: Cloudflare)

Cloudflare claims that Turnstile is just as secure as CAPTCHA, taking advantage of features like private access tokens to minimize the amount of data that’s collected. Newly implemented in iOS 16 and macOS Ventura, private access tokens work by having a device send anonymized authentication information — tokens — to a compatible website without exposing any sensitive information about itself.
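To illustrate the mechanism, here’s a rough sketch of the HTTP exchange behind private access tokens, based on the IETF Privacy Pass auth scheme the feature builds on. The header syntax and placeholder values are simplified assumptions, and the cryptographic issuance and verification steps are omitted entirely:

```typescript
// Simplified sketch (not a real implementation) of the "PrivateToken"
// HTTP exchange. Header names follow the IETF Privacy Pass auth scheme
// drafts; exact syntax and all crypto are glossed over here.
import { createServer } from "node:http";

createServer((req, res) => {
  const auth = req.headers["authorization"];

  if (!auth || !auth.startsWith("PrivateToken ")) {
    // Step 1: no token yet, so challenge the client. A capable device
    // (e.g. iOS 16 / macOS Ventura) redeems the challenge with its
    // attester and silently retries the request with a token attached.
    res.writeHead(401, {
      "WWW-Authenticate":
        'PrivateToken challenge="<base64url>", token-key="<base64url>"',
    });
    res.end();
    return;
  }

  // Step 2: a token arrived. A real server verifies it against the
  // issuer's public key; the token proves "a trusted device vouched for
  // this client" without identifying the device or the user.
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Challenge passed without a CAPTCHA.\n");
}).listen(8080);
```

The design point is that the website only ever sees an anonymized token, never the device attributes that produced it.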

Cloudflare and rival service Fastly were among the first to announce support for private access tokens with Apple hardware.

The question is whether sites will be persuaded to deploy Turnstile over the incumbent CAPTCHA. By one measure, 97.7% of the top million websites by traffic that use a CAPTCHA use Google’s reCAPTCHA, currently the most popular CAPTCHA service on the market. Cloudflare says it’s working on plugins for major platforms like WordPress to make Turnstile easier to deploy, but it’ll likely take time to convince admins that it’s worth the effort — assuming they’re ever convinced.

Graham-Cumming seemed mostly indifferent, noting that Cloudflare doesn’t have an obvious business incentive to drive adoption.

“We built an alternative, proved it works well for us and opened it up to other sites about as soon as we possibly could,” he said. “Since we’ve proven it worked for us, we wanted to give everyone the option of getting rid of CAPTCHA. Helping make the internet better really is our mission. We think giving this away to any website is a way to do that.”

As far as next steps are concerned, Graham-Cumming says that private access tokens are the best indicator for where Cloudflare would like to move in the future. The company tested a USB-based security system in the past, but requiring hardware adds a high degree of friction, he conceded.

“Customers and networks both care more and more about privacy and data segmentation. The ability for us to abstract portions of the validation to other parties without having to collect data ourselves is likely to continue,” Graham-Cumming added. “For example, [people] mention biometric authentication. I think it’s more likely we partner with hardware makers to use private access tokens to do biometric validation for us and pass an encrypted token proving that validation to us rather than doing biometric authentication ourselves.”

Cloudflare wants to replace CAPTCHAs with Turnstile by Kyle Wiggers originally published on TechCrunch

FCC solicits feedback from the public on rules to stop robotexts

The U.S. Federal Communications Commission (FCC) today proposed new rules to fight back against so-called “robotext” campaigns — the barrages of scam texts sent by malicious actors and large criminal organizations. In a press release, the agency said it would solicit ideas to apply caller ID technologies to text messaging and explore mandating that cell providers block illegal texts before they reach consumers.

“The American people are fed up with scam texts, and we need to use every tool we have to do something about it,” FCC chairwoman Jessica Rosenworcel said in a statement. “Recently, scam text messaging has become a growing threat to consumers’ wallets and privacy. More can be done to address this growing problem and today we are formally starting an effort to take a serious, comprehensive, and fresh look at our policies for fighting unwanted robotexts.”

In a notice of proposed rulemaking published this morning, the FCC suggests requiring cell carriers to block, at the network level, texts that purport to be from invalid, unallocated or unused numbers, as well as from numbers on do-not-originate (DNO) lists. The notice also recommends increasing efforts to educate consumers, such as publicizing steps to opt out of policies that allow companies to sell or share their personal numbers.

Most FCC rules, including this one, are adopted by a process known as “notice and comment” rulemaking. The FCC gives public notice that it’s considering adopting or modifying rules on a particular subject — a notice of proposed rulemaking — and seeks the public’s comment, considering the comments received in developing the final rules.

Rosenworcel, then the FCC’s acting chairwoman, first proposed new rules requiring wireless carriers to block illegal texts last October. But the rules remained pending before the full Commission — the FCC’s other four members — until this week, when they moved to the next step in the notice-and-comment process.

Robotexts, which encompass scam texts about unpaid bills, package delivery snafus and other such deceptive scenarios, have been on the rise in recent years. Text and spam call blocker app RoboKiller estimated that consumers received over 12 billion robotexts in June, and complaints to the FCC about unwanted texts increased from 14,000 in 2020 to 15,300 last year.

The top scam texts in 2021 involved bogus delivery messages claiming to represent Amazon, the U.S. Postal Service and other organizations, according to a report from the Consumer Watchdog office of the nonprofit U.S. PIRG. Others included fake messages from banks and texts related to the COVID-19 pandemic.

FCC solicits feedback from the public on rules to stop robotexts by Kyle Wiggers originally published on TechCrunch

Italy fires Meta urgent request for info re: election interference measures

Days ahead of the Italian general election, the country’s privacy watchdog has sent Facebook’s parent (Meta) an urgent request for information, asking the social media giant to clarify measures it’s taking around Sunday’s election.

The risk of election interference via social media continues to be a major concern for regulators after years of rising awareness of how disinformation is seeded, spread and amplified on algorithmic platforms like Facebook, and with democratic processes continuing to be considered core targets for malicious influence ops.

Privacy regulators in the European Union are also watchful of how platforms are processing personal data — with data protection laws in place that regulate the processing of sensitive data such as political opinions.

In a press release about its request yesterday, the Garante points back to a previous $1.1M sanction it imposed on Facebook for the Cambridge Analytica scandal, and for the “Candidates” project Facebook launched for Italy’s 2018 general election, writing [in Italian; translated here using machine translation] that it’s “necessary to pay particular attention to the processing of data suitable for revealing the political opinions of the interested parties and to respect the free expression of thought”.

“Facebook will have to provide timely information on the initiative undertaken; on the nature and methods of data processing; on any agreements aimed at sending reminders and the publication of information ‘stickers’ (also published on Instagram — part of the Meta Group); and on the measures taken to ensure, as announced, that the initiative is brought to the attention only of persons of legal age,” the watchdog adds.

The move follows what the watchdog describes as an “information campaign” by Meta, targeted at Italian users, which is said to be aimed at countering interference and removing content that discourages voting — and which involves the use of a virtual Operations Center to identify potential threats in real time, as well as collaboration with independent fact-checking organizations.

The Garante said the existence of this campaign was made public by Meta publishing “promemoria” (memos). However, a page on Meta’s website which provides an overview of its preparations for upcoming elections currently only offers downloadable documents detailing its approach for the US midterms and for Brazil’s elections. There is no information there about Meta’s approach to Italy’s general election — nor about the information campaign it is (apparently) running locally.

A separate page on Meta’s website — entitled “election integrity” — includes a number of additional articles about its preparations for elections elsewhere, including Kenya’s 2022 general election, the 2022 Philippines general election and Ethiopia’s 2021 general election, plus earlier articles on state elections in India and an update on the Georgia runoff elections from the end of 2020, among others.

But, again, Meta does not appear to have provided any information here about its preparations for Italy’s General Election.

The reason for this oversight — which is presumably what it is — could be that the Italian election is a snap election, called following a government crisis and the resignation of prime minister Mario Draghi, rather than a long-programmed and timetabled general election.

However, the gap in Meta’s election integrity information hub around measures it’s taking to protect Italy’s general election from disinformation points to limits on its transparency in this crucial area — implying it’s unable to provide consistent disclosure in response to what can often be dynamically changing democratic timelines.

The Italian parliament was dissolved on July 21, which was when the president called for new elections — meaning that Meta, a company with a market cap of hundreds of billions of dollars, has had two months to upload details of the election integrity measures it’s taking in the country to the relevant hubs on its website. Yet it does not appear to have done so.

We reached out to Meta yesterday with questions about what it’s doing in Italy to protect the election from interference but at the time of writing the company had not responded.

It will of course have to respond to Italy’s watchdog’s request for information. We’ve reached out to the regulator with questions.

The Garante continues to be an active privacy watchdog in policing tech giants operating on its turf, despite not being the lead supervisor for such companies under the one-stop-shop (OSS) mechanism in the EU’s General Data Protection Regulation (GDPR), a mechanism that has otherwise led to bottlenecks around GDPR enforcement. But the regulation provides some wiggle room for concerned DPAs to act on pressing matters on their own turf without having to submit to the OSS.

Yesterday’s urgent request to Meta for information by Italy’s watchdog follows a number of other proactive interventions in recent years — including a warning to TikTok this summer over a controversial privacy policy switch (which TikTok ‘paused’ soon after); a warning to WhatsApp in January 2021 over another controversial privacy policy and T&Cs update (while stemming from a wider complaint, WhatsApp went on to be fined $267M later that year over GDPR transparency breaches); and a warning to TikTok over underage users, also in January 2021 (TikTok went on to remove over half a million accounts that it was unable to confirm did not belong to children, and to commit to other measures).

So a comprehensive answer to the question of whether the GDPR is working to regulate Big Tech requires a broader view than totting up fines or even fixing on final GDPR enforcement decisions.

Italy fires Meta urgent request for info re: election interference measures by Natasha Lomas originally published on TechCrunch

Facebook users sue Meta, accusing the company of tracking on iOS through a loophole

Apple’s major privacy update to iOS last year made it much more difficult for apps to track user behavior beyond their own borders, but a new lawsuit alleges that Facebook and Instagram parent company Meta kept snooping through a workaround.

The complaint, filed in the U.S. District Court for the Northern District of California and embedded below, alleges that Meta evaded Apple’s new restrictions by monitoring users through Facebook’s in-app browser, which opens links within the app. The proposed class-action lawsuit, first reported by Bloomberg, could allow anyone affected to sign on, which in Facebook’s case might mean hundreds of millions of U.S. users.

In the lawsuit, a pair of Facebook users allege that Meta is not only violating Apple’s policies, but breaking privacy laws at the state and federal level, including the Wiretap Act, which makes it illegal to intercept electronic communications without consent. Another similar complaint (Mitchell v. Meta Platforms Inc.) was filed last week.

The plaintiffs allege that Meta follows users’ online activity by funneling them into the web browser built into Facebook and injecting JavaScript into the sites they visit. That code makes it possible for the company to monitor “every single interaction with external websites,” including where they tap, and what passwords and other text they enter:

Now, even when users do not consent to being tracked, Meta tracks Facebook users’ online activity and communications with external third-party websites by injecting JavaScript code into those sites. When users click on a link within the Facebook app, Meta automatically directs them to the in-app browser it is monitoring instead of the smartphone’s default browser, without telling users that this is happening or they are being tracked.
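For a sense of what that JavaScript injection can do, here’s an illustrative sketch of the class of instrumentation the complaint describes: global event listeners that see every tap and keystroke on a page. To be clear, this is not Meta’s actual code; every name here is hypothetical.

```typescript
// Illustrative only — NOT Meta's code. Shows how a script injected into
// a third-party page by an in-app browser could observe user activity,
// since injected scripts run with the page's own privileges.
type Report = (event: string, detail: unknown) => void;

function instrumentPage(report: Report): void {
  // Capture-phase listener sees every tap/click, including the element hit.
  document.addEventListener(
    "click",
    (e) => {
      const target = e.target as HTMLElement;
      report("tap", { tag: target.tagName, x: e.clientX, y: e.clientY });
    },
    true
  );

  // Sees all typed text, password fields included.
  document.addEventListener(
    "input",
    (e) => {
      const field = e.target as HTMLInputElement;
      report("input", { name: field.name, type: field.type, value: field.value });
    },
    true
  );
}

// A host app could bridge reports back to native code, for example via a
// WebKit message handler (the channel name here is hypothetical):
// instrumentPage((event, detail) =>
//   (window as any).webkit?.messageHandlers?.tracker?.postMessage({ event, detail }));
```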

Apple introduced iOS 14.5 in April of last year, striking a massive blow to social media companies like Meta that relied on tracking users’ behavior for advertising purposes. The company cited the iOS changes specifically in its earnings calls as it prepped investors to adjust to the new normal for its ad-targeting business, describing Apple’s privacy changes as a “headwind” that it would need to overcome.

In the new iOS privacy prompt, Apple asks if a user consents to have their activity tracked “across other companies’ apps and websites.” Users who opt out might reasonably believe that they are on an external web browser when opening links within Facebook or Instagram, though the company would likely argue the opposite.

Security researcher Felix Krause surfaced concerns around Facebook and Instagram’s in-app browsers last month, and the lawsuit draws heavily on his report. He urged Meta to send users to Safari or another external browser to close the loophole.

“Do what Meta is already doing with WhatsApp: Stop modifying third party websites, and use Safari or SFSafariViewController for all third party websites,” Krause wrote in a blog post. “It’s what’s best for the user, and the right thing to do.”


Facebook users sue Meta, accusing the company of tracking on iOS through a loophole by Taylor Hatmaker originally published on TechCrunch

India proposes to regulate internet communication services

India has proposed to regulate internet-based communication services, requiring platforms to obtain a license to operate in the world’s second-largest wireless market.

The Department of Telecommunications’ new proposal, called the Draft Indian Telecommunication Bill, 2022, seeks to consolidate and update three old laws — the Indian Telegraph Act, 1885, the Indian Wireless Telegraphy Act, 1933, and the Telegraph Wires (Unlawful Protection) Act, 1950.

The 40-page draft proposes to grant the government the ability to intercept messages beaming through internet-powered communication services in the event of “any public emergency or in the interest of the public safety.” It also provides the government immunity against any lawsuit.

“No suit, prosecution or other legal proceeding shall lie against the Central Government, the State Government, the Government of a Union Territory, or any other authority under this Act or any person acting on their behalf as the case may be, for anything which is done in good faith, or intended to be done in pursuance of this Act or any rule, regulation or order made thereunder,” the draft said.

The draft also asks that individuals using these licensed communications apps should not “furnish any false particulars, suppress any material information or impersonate another person”.

Telecom operators in the country have long demanded regulation of apps such as WhatsApp and Telegram “to get a level-playing field” in the South Asian market. But while the proliferation of WhatsApp and other chat services in India and beyond killed the telecom industry’s costly texting tariffs, it did not hurt consumers.

The Department of Telecommunications said it reviewed similar legislation in Australia, Singapore, Japan, the European Union, the U.K. and the U.S. while preparing its draft.

The proposed guidelines, for which the ministry will seek public comments until October 20, additionally attempt to take broader steps to curb spam messages. India is one of the nations worst impacted by spam calls and texts, a fact that has allowed call-screening apps such as Truecaller to make deep inroads in the nation.

The draft says that “any message offering, advertising or promoting goods, services, interest in property, business opportunity, employment opportunity or investment opportunity” must only be sent after users’ prior consent. The draft also proposes a mechanism to enable users to report spam messages received and recommends one or more ‘Do Not Disturb’ registers to record users’ consent for receiving specific promotional messages.

The draft notably comes just over a month after India concluded its $19 billion 5G spectrum auction. The country is expected to get 5G networks later this year.

India proposes to regulate internet communication services by Jagmeet Singh originally published on TechCrunch
