Bipartisan Senate bill would ban social media algorithms for minors

In a bipartisan effort to “protect kids from harm,” an unlikely cohort of senators introduced a bill that would restrict minors’ access to social media, as well as ban companies from using algorithms to recommend content to minors. 

Senators Brian Schatz (D-Hawaii), Chris Murphy (D-Conn), Katie Britt (R-Ala) and Tom Cotton (R-Ark) introduced the Protecting Kids on Social Media Act on Wednesday. The bill would set a minimum age of 13 to use social media sites, and would require parental consent and age verification for users under 18. 

“The growing evidence is clear: social media is making kids more depressed and wreaking havoc on their mental health. While kids are suffering, social media companies are profiting. This needs to stop,” Schatz said in a press release. “Our bill will help us stop the growing social media health crisis among kids by setting a minimum age and preventing companies from using algorithms to automatically feed them addictive content based on their personal information.”

While some social media companies, like TikTok and YouTube, have launched kid-friendly versions of their platforms with content limits and parental controls, age verification is largely based on an honor system. 

If the bill is signed into law, social media companies will be forbidden from using the “personal data” of any user to recommend content “unless the platform knows or reasonably believes that the individual is age 18 or older according to the age verification process used by the platform,” the bill’s text reads. Advertising to minors will still be allowed, as long as it’s “solely based on context,” and isn’t “targeted or recommended based on the personal data” of the user.
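
In practice, the quoted restriction amounts to a gate on personalization: recommendations driven by personal data only for users the platform has verified as 18 or older, with context-only content and ads for everyone else. Below is a minimal Python sketch of that gate; the types and function names here are hypothetical illustrations, not anything drawn from the bill's text.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class User:
    user_id: str
    age_verified: bool           # passed the platform's age verification process
    verified_age: Optional[int]  # age established by that process, if any

def pick_feed(user: User,
              personalized: Callable[[User], List[str]],
              contextual: Callable[[], List[str]]) -> List[str]:
    # Personal-data-driven recommendations are served only when the platform
    # "knows or reasonably believes" the user is 18 or older; otherwise fall
    # back to context-only ranking, which the bill would still permit.
    if user.age_verified and user.verified_age is not None and user.verified_age >= 18:
        return personalized(user)
    return contextual()
```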

The bill’s language doesn’t outline how algorithms will be regulated. A representative for Schatz did not immediately respond to a request for comment. 

On Twitter, users have already raised concerns about the Protecting Kids on Social Media Act, and questioned whether the proposed regulations are even enforceable. 

“Broadly speaking I’d say this: yes, Big Tech companies are harming kids,” Evan Greer, director of the digital rights nonprofit Fight for the Future, said in a tweet responding to Murphy. “We stop that by forcing those companies to change their business practices, not by kicking kids off the internet or taking away kids rights.” 

Alejandra Caraballo, a civil rights attorney and clinical instructor at Harvard Law School’s Cyberlaw Clinic, also responded to Murphy’s tweet announcing the bill, in which he described prohibiting algorithms for kids. 

“With all due respect Senator, but that is a terribly misinformed statement about social media technology. You might as well try saying you’re banning javascript for teens,” she said. 

Murphy decried social media companies as “100% committed to addicting our children to their screens” in a press release. 

“The alarm bells about social media’s devastating impact on kids have been sounding for a long time, and yet time and time again, these companies have proven they care more about profit than preventing the well-documented harm they cause,” he said. “In particular, these algorithms are sending many down dangerous online rabbit holes, with little chance for parents to know what their kids are seeing online.”  

Most social media policies already require users to be at least 13 years old, but enforcement is flimsy at best. Minors can easily fly under the radar by submitting a fake date of birth and checking off a box attesting to their supposed age. The bill would require social media platforms to take “reasonable steps beyond merely requiring attestation,” instead employing “existing age verification technologies” to ensure that users are the age they claim to be. 

The bill’s language forbids companies from storing and using any information collected during the verification process “for any other purpose.” It instead proposes a free “Pilot Program,” regulated by the Secretary of Commerce, that would provide a “secure digital identification credential to individuals who are citizens and lawful residents of the United States.” 

The Pilot Program is supposed to “meet or exceed the highest cybersecurity standards” of consumer products, and the bill promises that only anonymized aggregate data will be stored. 

This isn’t the first bipartisan effort to try to curb kids’ internet use. Last year, Senators Richard Blumenthal (D-Conn) and Marsha Blackburn (R-Tenn) introduced the Kids Online Safety Act (KOSA), which would require sites to provide more parental control tools and limit the content that users under 16 can access. Dozens of civil liberties organizations, including the American Civil Liberties Union, the Electronic Frontier Foundation, Fight for the Future and GLAAD, opposed the bill.

The Protecting Kids on Social Media Act follows a larger nationwide push for age verification online. This year, Louisiana, Mississippi, Virginia and Utah passed laws requiring users to submit a government-issued ID in order to view porn sites. Eleven more states have proposed similar laws. But digital privacy advocates have expressed concerns over how age verification data is stored and used. 

In the joint letter opposing KOSA, civil liberties organizations warned against age verification requirements. 

“Age verification requirements may require users to provide platforms with personally identifiable information such as date of birth and government-issued identification documents, which can threaten users’ privacy, including through the risk of data breaches, and chill their willingness to access sensitive information online because they cannot do so anonymously,” the letter said. “Rather than age-gating privacy settings and safety tools to apply to only minors, Congress should focus on ensuring that all users, regardless of age, benefit from strong privacy protections by passing comprehensive privacy legislation.” 


Twitter officially bans third-party clients after cutting off prominent devs

After cutting off prominent app makers like Tweetbot and Twitterrific, Twitter today quietly updated its developer terms to ban third-party clients altogether.

Spotted by Engadget, the “restrictions” section of Twitter’s 5,000-some-word developer agreement was updated with a clause forbidding developers to “use or access the Licensed Materials to create or attempt to create a substitute or similar service or product to the Twitter Applications.” Earlier this week, Twitter said that it was “enforcing long-standing API rules” in disallowing clients access to its platform but didn’t cite which specific rules developers were violating. Now we know — retroactively.

As Engadget notes, Twitter clients are a part of Twitter history — Twitterrific was created before Twitter had a native iOS app of its own. And they’ve gained a larger following in recent years, thanks in part to their lack of ads.

Twitter’s attitude toward third-party clients has long been permissive and even supportive, with the company going so far as to remove a section from its developer terms that discouraged devs from replicating its core service. But that seems to have changed under CEO Elon Musk’s leadership.

[Image: Twitter dev terms. Image Credits: Twitter]

The decision seems unlikely to foster goodwill toward Twitter at a time when the platform faces challenges on a number of fronts. In a blog post, Twitterrific’s Sean Heber called Twitter “increasingly capricious” and a company he “no longer recognize[d] as trustworthy nor want to work with any longer.” Matteo Villa, the developer of Fenix, in an interview with Engadget called the lack of communication “insulting.” (Twitter has no communications department at present.)

Twitter is under immense pressure to turn a profit — or at least break even — as advertisers flee the platform, spurred by unpredictable, fast-changing content policies. The company, which has $12.5 billion in debt, is on the hook for $300 million in its first interest payment and has lost an estimated $4 billion in value since Musk acquired it at the end of October 2022. Fidelity recently slashed the value of its stake in Twitter by 56%.

Cutbacks at Twitter abound. Some employees are bringing their own toilet paper to work after the company reduced janitorial services, the New York Times reported, and Twitter has stopped paying rent for several of its offices. Musk has elsewhere attempted to save around $500 million in costs unrelated to labor, shutting down a data center and auctioning off office items in a bid to recoup costs.

Twitter’s also heavily pushing its Twitter Blue plan (now with an annual option), aiming to make it a profit driver. It plans to lift its ban on political ads, chasing after campaign dollars in the 2024 U.S. elections. And the company is reportedly considering selling usernames through online auctions.


Ticketmaster faces antitrust scrutiny after Taylor Swift ticket chaos

Ticketmaster is facing more scrutiny from politicians after its chaotic presales for tickets to Taylor Swift’s tour. Tennessee attorney general Jonathan Skrmetti said he is looking into whether Ticketmaster violated consumers’ rights and antitrust regulations. Skrmetti is the latest politician to call attention to Ticketmaster and Live Nation’s hold on the ticketing market.

This comes as Ticketmaster canceled its public sales for Swift’s tour, called Eras. In a tweet, Ticketmaster said the cancellation was due to “extraordinarily high demands on ticketing systems and insufficient remaining ticket inventory to meet that demand.”

The public sale would have been for tickets left over from the site’s troubled presales, which started on Tuesday for members of its Verified Fan program. Many fans experienced technical glitches and hours-long wait times, with many ultimately unable to buy a ticket.

According to the New York Times, Ticketmaster said in a now-deleted post that 3.5 million people registered for the Verified Fan program. Around 1.5 million were given a special access code, and the rest were put on a waiting list. “Never before has a Verified Fan on sale sparked so much attention—or uninvited volume,” Ticketmaster said.

Skrmetti said he had received complaints from fans who tried to purchase tickets for Eras. In a tweet on Thursday, the attorney general said that other state attorneys general are also looking into the matter: “Ticketmaster’s decision to cancel sales underscores the important need for accountability. Fans deserve a fair chance to buy a ticket. I’m encouraged by other state AGs who are taking this issue serious as well.”

The Washington Post reports that Skrmetti said Ticketmaster should have been better prepared for the high demand and questioned whether “because they have such a dominant market position, they felt like they didn’t need to worry about that.”

In another tweet before the sale was canceled, the attorney general’s office said Skrmetti “is concerned about consumer complaints related to @Ticketmaster’s pre-sale of @taylorswift13 concert tickets. He and his Consumer Protection team will use every available tool to ensure that no consumer protection laws were violated.”

TechCrunch has contacted Ticketmaster and Skrmetti’s office for comment.

Eras is Swift’s first tour in four years and comes after the release of her new album “Midnights.”

Other politicians who have raised concerns over the combined company of Ticketmaster and Live Nation, which merged in 2010, include Representatives Alexandria Ocasio-Cortez, David N. Cicilline and Bill Pascrell, Jr.

On Tuesday, Representative Ocasio-Cortez said in a tweet that “Ticketmaster is a monopoly, its merger with LiveNation should never have been approved, and they needed to be reigned in. Break them up.”

Representative Cicilline tweeted on Wednesday that Ticketmaster’s “excessive wait times and fees are completely unacceptable, as seen with today’s @taylorswift13 tickets, and are a symptom of a larger problem. It’s no secret that Live Nation-Ticketmaster is an unchecked monopoly.”

And Representative Pascrell, Jr., who was among the millions of fans put on a waitlist for Swift tickets, tweeted that “The Ticketmaster-Live Nation monopoly should never have been allowed to merge and must be broken up.”

Consumers are also pushing for a breakup of Ticketmaster and Live Nation. An alliance of consumer rights groups, including antitrust nonprofit American Economic Liberties Project, launched a campaign last month called Break Up Ticketmaster, saying that Ticketmaster’s “market power over live events is ripping off sports and music fans and undermining the vibrancy and independence of the music industry.”


Mozilla looks to its next chapter

Mozilla today released its annual “State of Mozilla” report and for the most part, the news here is positive. Mozilla Corporation, the for-profit side of the overall Mozilla organization, generated $585 million from its search partnerships, subscriptions and ad revenue in 2021 — up 25% from the year before. And while Mozilla continues to mostly rely on its search partnerships, revenue from its new products like the Mozilla VPN, Mozilla Developer Network (MDN) Plus, Pocket and others now accounts for $57 million of its revenue, up 125% compared to the previous year. For the most part, that’s driven by ads on the New Tab in Firefox and in Pocket, but the security products now also have an annual revenue of $4 million.

With the launch of this year’s report, the Mozilla leadership team is also taking some time to look ahead, because in many ways, this is an inflection point for Mozilla.

When Mozilla was founded, the internet was essentially the web and the browser was the way to access it. Since then, the way we experience the internet has changed dramatically and while the browser is still one of the most important tools around, it’s not the only one. With that, Mozilla, too, has to change. Its Firefox browser has gone from dominating the space to being something of a niche product, but the organization’s mission (“to ensure the internet is a global public resource, open and accessible to all”) is just as important today — and maybe more so — as it was almost 25 years ago when Mozilla was founded.

To talk about the state of the business and to look ahead to what’s next for Mozilla, I sat down with its executive chairwoman and CEO Mitchell Baker, Mozilla chief product officer Steve Teixeira (who recently joined Mozilla after leaving Twitter), and Mozilla Foundation executive director Mark Surman.

[Image: Mozilla’s executive chairperson Mitchell Baker]

In talking about the state of the business, Baker noted that given the pandemic, 2021 was obviously not a normal year. For the first half of 2022, with the war in Ukraine and the overall economic headwind, Mozilla’s financials looked somewhat similar, she noted. But more importantly, Mozilla’s attempts at diversification are starting to pay off. “There are some things in 2021 that I think are our doing and that we intend to continue — multi-product, different business models, engaging with consumers — […] that do represent the future and our very early steps of making this new chapter of Mozilla,” she explained.

With Mozilla’s three business models (privacy-preserving ads, subscriptions and search partnerships), the organization is now a bit more dependent on the overall ad market. But as Teixeira stressed, while the market may be softer, this has mostly manifested itself in slower growth, not a drop in the market. “For us, as a non-public company, that’s okay,” he said. “We can do great. Since we don’t have to manage a public market perception, all we need to have is a great business to run and that keeps us happy.” He also believes that while its security products are a small portion of Mozilla’s overall revenue so far (with the VPN driving virtually all of that), there is still a lot of upside there. Mozilla is looking at how it can add more features and products to its overall security suite going forward, both as part of Firefox and as stand-alone products.

Earlier this year, Mozilla introduced paid subscriptions for its Mozilla Developer Network (MDN). Teixeira said it was meeting the organization’s “modest expectations,” but also represented Mozilla’s willingness to experiment with these business models. With a product like MDN, which has a very long history, Mozilla has to be careful about how it manages an experiment like this, because it can easily backfire and tarnish its brand, too, something Baker acknowledged. “We were really determined to honor MDN for what it is and make sure that what we’re adding is truly a premium service,” she said.

Yet while Mozilla continues to expand its product offerings, for a lot of people, its brand remains synonymous with Firefox. Both Baker and Teixeira stressed that Mozilla will continue to invest in Firefox, with Teixeira seeing a lot of opportunity for growth on mobile in particular. But Baker also noted that she wants Mozilla to stand for something more universal than that. “What I find about the Mozilla brand is that it inspires hope, it represents aspiration,” she said. “I’ve been testing that the last few years, because we knew that Firefox was a global brand — which is hard to get, especially one that’s lasted 20 years — but I’ve been testing the Mozilla brand globally and found a couple things: it is a global brand, it’s not as well-known as Firefox, but it’s more well-known than you would expect.”

But people want more from Mozilla, she argued. It’s one thing, after all, to fix Firefox (which Mozilla has arguably done) and compete with Edge and Chrome; it’s another to compete against Microsoft and Google and return a sense of agency to users when it’s so hard to preserve your privacy and security online. “[The internet] is controlled by a handful of organizations aimed at me, designed to get me to do something, buy something, believe something, take some kind of action,” she said about the current state of the internet. “What we don’t currently have, and why I spend my life energy here at Mozilla — we don’t have that power working on my behalf, reaching out into the world, looking for what I’m looking for or influencing the world, or finding the things I want or representing me. It’s aimed at me and no individual human has the resources to really understand or respond or take care of what’s going on.” Yet without solving for privacy and security online, it’s hard to see what’s next. “Privacy and security are necessary. We need them right now. People are aware of them. But I think as we do that and can show the promise of what a different model of this technology looks like, then people can see it and choose it,” she added.

Today’s Mozilla, then, is one that’s looking well beyond the browser. That’s not necessarily new, but I don’t think it has ever been clearer. This means both the corporate side of Mozilla and the foundation are now shifting their focus to new topics. “It feels good and particularly good at this moment to be at an organization that said we need to do things differently from the beginning, and then to be asking ourselves what does it mean to do things differently now and over the course of the next decade,” Mozilla Foundation’s Surman said. “So we have this ‘next chapter’ or ‘next 25 years’ language we use in [our report] and that’s a kind of the poetry of what you say in a report like that. But it also does feel that way, with the shift in the Foundation’s focus to AI and the broader question of what it means to do today’s era and tomorrow’s era of tech differently.”

Most recently this has meant the launch of Mozilla Ventures, a $35 million venture fund that the organization plans to use to invest in products and founders who want to build a better, privacy-respecting internet. “We have this huge lever in being a public benefit product organization,” he said. “And that’s expanding into new surface areas. But on the other hand, one of the deepest things about Mozilla is this kind of community approach, this movement approach or ecosystem approach.” Yet while Mozilla was able to get Firefox going with a small team and establish its Mozilla Corporation/Foundation duality, that’s difficult for founders to replicate. “We were bumping into founders, people coming out of school, all kinds of folks who were saying: ‘yeah, we want to do something that helps shift this but how do we actually go in that direction and not just get vacuumed back into what’s going wrong in the tech industry,’” said Surman. Traditional venture firms, after all, want to see large returns on their investments — and the sooner the better. Surman stressed that Mozilla Ventures is also meant to generate returns, but it can be patient and invest in founders that share its values. Come next year, the Mozilla Foundation will launch a few new initiatives that will be aimed at the broader tech ecosystem.

And while Mozilla Ventures is one approach, Baker expects that we will see the organization do more in this area over the next few years, in part through cooperating with more organizations that share its vision.

“I come across so many people who share an aspiration for the richness of the internet — but better or less creepy — and want to connect with some organization that’s doing that,” she said.  “I have a hypothesis that there is a new form of what was open-source community that should be a part of Mozilla and our larger operating model. What is Mozilla? We have different organizations and employees but increasingly, we should also be affiliated with and connect and represent and identify as the larger set of people who want something tuned differently.”


White House proposes voluntary safety and transparency rules around AI

The White House this morning unveiled what it’s colloquially calling an “AI Bill of Rights,” which aims to establish tenets around the ways AI algorithms should be deployed as well as guardrails on their applications. In five bullet points crafted with feedback from the public, companies like Microsoft and Palantir and human rights and AI ethics groups, the document lays out safety, transparency and privacy principles that the Office of Science & Technology Policy (OSTP) — which drafted the AI Bill of Rights — argues will lead to better outcomes while mitigating harmful real-life consequences. 

The AI Bill of Rights says that AI systems should be proven safe and effective through testing and consultation with stakeholders, and continuously monitored in production. It explicitly calls out algorithmic discrimination, saying that AI systems should be designed to protect both communities and individuals from biased decision-making. And it strongly suggests that users should be able to opt out of interactions with an AI system if they choose, for example in the event of a system failure.

Beyond this, the White House’s proposed blueprint posits that users should have control over how their data is used — whether in an AI system’s decision-making or development — and be informed in plain language when an automated system is being used. 

To the OSTP’s points, recent history is filled with examples of algorithms gone haywire. Models used in hospitals to inform patient treatments have later been found to be discriminatory, while hiring tools designed to weed out candidates for jobs have been shown to predominantly reject women applicants in favor of men — owing to the data on which the systems were trained. However, as Axios and Wired note in their coverage of today’s presser, the White House is late to the party; a growing number of bodies have already weighed in on the subject of AI regulation, including the EU and even the Vatican.

It’s also completely voluntary. While the White House seeks to “lead by example” and have federal agencies fall in line with their own actions and derivative policies, private corporations aren’t beholden to the AI Bill of Rights.

Alongside the release of the AI Bill of Rights, the White House announced that certain agencies, including the Department of Health and Human Services and the Department of Education, will publish guidance in the coming months seeking to curtail the use of damaging or dangerous algorithmic technologies in specific settings. But these steps fall short of, for instance, the EU’s regulation under development, which prohibits and curtails certain categories of AI deemed to have harmful potential.

Still, experts like Oren Etzioni, a co-founder of the Allen Institute for AI, believe that the White House guidelines will have some influence. “If implemented properly, [a] bill could reduce AI misuse and yet support beneficial uses of AI in medicine, driving, enterprise productivity, and more,” he told The Wall Street Journal.


8 VCs discuss the overturning of Roe v. Wade, venture and the midterm elections

The overturning of Roe v. Wade sent a huge shockwave through the U.S., and while the nation recovers slowly, the venture community is already beginning to act. Founders are reassessing where they open their businesses, not wanting to lure employees to a state that doesn’t support reproductive rights, and investors are considering adding healthcare to environmental, social and governance criteria to help spur innovation in the space.

And as the midterm elections approach, the stakes are only getting higher for people who advocate for reproductive access, equality for the LGBTQ+ community, and, in some cases, just overall equality. It’s imperative to look at the role venture plays. Billions stand to be deployed throughout the year, and a show of economic prowess remains one of the few ways to capture the nation’s attention.

So, we decided to poll eight investors regarding the toppling of Roe, the Dobbs decision’s impact on the overall venture community and what they think about activism via investing.

Hessie Jones, a partner at MATR Ventures, said the right to abortion access, for example, strikes at the heart of human rights, privacy and poverty. As a result, it will impact how she conducts due diligence on companies in the future.

“What is clear is that apps that have been used to help women manage their menstrual cycles can be weaponized at the state level with warrants to identify those who may be seeking abortions,” she told TechCrunch. “Due diligence needs to expand past the point of founder ‘intentions’ and to look at the current customers using the technology.”

Like many investors we spoke to, McKeever Conwell, the founder of RareBreed Ventures, said his initial response to Roe’s reversal was a feeling of “utter disgust.” He worried it could set a precedent in terms of other cases that could be easily toppled.

“That is a very, very dangerous thing because now we have a group of lifetime appointees who have the ability to set a precedent that could basically overturn or set agendas that are not voted on by the public,” Conwell said.

However, he also noted that these political decisions have a tenuous relationship with the overall mantra of venture investing: “Our job is to make money for folks, and a lot of folks that we’re making money for are the folks who don’t care about these rights. That is the reality of the situation.”

Read the full survey here to learn how these VCs are thinking about investing in reproductive tech, which issues they are watching out for and the best way to pitch them.


OpenAI begins allowing users to edit faces with DALL-E 2

After initially disabling the capability, OpenAI today announced that customers with access to DALL-E 2 can upload people’s faces to edit them using the AI-powered image-generating system. Previously, OpenAI only allowed users to work with and share photorealistic faces generated by the system, and banned the uploading of any photo that might depict a real person, including photos of prominent celebrities and public figures.

OpenAI claims that improvements to its safety system made the face-editing feature possible by “minimizing the potential of harm” from deepfakes as well as attempts to create sexual, political and violent content. In an email to customers, the company wrote:

Many of you have told us that you miss using DALL-E to dream up outfits and hairstyles on yourselves and edit the backgrounds of family photos. A reconstructive surgeon told us that he’d been using DALL-E to help his patients visualize results. And filmmakers have told us that they want to be able to edit images of scenes with people to help speed up their creative processes … [We] built new detection and response techniques to stop misuse.

The change in policy isn’t opening the floodgates necessarily. OpenAI’s terms of service will continue to prohibit uploading pictures of people without their consent or images that users don’t have the rights to — although it’s not clear how consistent the company’s historically been about enforcing those policies.

In any case, it’ll be a true test of OpenAI’s filtering technology, which some customers in the past have complained about being overzealous and somewhat inaccurate. Deepfakes come in many flavors, from fake vacation photos to presidents of war-torn countries. Accounting for every emerging form of abuse will be a never-ending battle, in some cases with very high stakes.

No doubt, OpenAI — which has the backing of Microsoft and notable VC firms including Khosla Ventures — is eager to avoid the controversy associated with Stability AI’s Stable Diffusion, an image-generating system that’s available in an open source format without any restrictions. As TechCrunch recently wrote about, it didn’t take long before Stable Diffusion — which can also edit face images — was being used by some to create pornographic, nonconsensual deepfakes of celebrities like Emma Watson.

So far, OpenAI has positioned itself as a brand-friendly, buttoned-up alternative to the no-holds-barred Stability AI. And with the constraints around the new face editing feature for DALL-E 2, the company is maintaining the status quo.

DALL-E 2 remains in invite-only beta. In late August, OpenAI announced that over a million people are using the service.


CFPB signals that regulation is coming for BNPL

In a shot across the bow to the buy now, pay later (BNPL) industry, the U.S. Consumer Financial Protection Bureau (CFPB) today issued a report suggesting that companies like Klarna and Afterpay, which allow customers to pay for products and services in installments, must be subjected to stricter oversight.

The CFPB — in a step toward regulation — plans to issue guidance to oversee BNPL vendors and have them complete “supervisory” exams in line with credit card company reporting requirements, according to agency officials speaking at a presser this week.

The CFPB first announced that it would investigate the burgeoning (but rocky) BNPL industry in December 2021. While the agency has jurisdiction over banks, credit unions, securities firms, and other financial services firms based in the U.S., it didn’t previously regulate BNPL providers, who argued that they were exempted from many of the existing rules governing consumer lending.

BNPL services like Affirm and Apple’s forthcoming Apple Pay Later usually split up purchases into four or six equal installments over a fixed short-term period (e.g., a few months). Many don’t charge any interest or late fees, and don’t require a credit check for customers to qualify.
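
As a rough illustration of how such a plan divides a purchase, here is a short Python sketch of a pay-in-four schedule. Real providers each apply their own rounding, timing and eligibility rules, so treat this as an illustrative toy rather than any vendor's actual logic.

```python
from decimal import Decimal, ROUND_DOWN

def installment_schedule(total: Decimal, parts: int = 4) -> list:
    """Split a purchase into `parts` near-equal payments (illustrative only)."""
    base = (total / parts).quantize(Decimal("0.01"), rounding=ROUND_DOWN)
    payments = [base] * parts
    payments[0] += total - base * parts  # fold the rounding remainder into payment one
    return payments

# A $99.99 cart split four ways: 25.02 + 24.99 + 24.99 + 24.99 = 99.99
print(installment_schedule(Decimal("99.99")))
```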

In the course of its investigation, the CFPB said that it found BNPL vendors are approving more customers for loans — 73% in 2021 compared with 69% in 2020 — and that delinquencies on these services are rising sharply. Meanwhile, the BNPL industry’s charge-off rate, or the rate of uncollectible loans, was 2.39% in 2021 — up from 1.83% in 2020.

Late fees are also climbing. The CFPB found that 10.5% of customers were charged at least one BNPL late fee in 2021 versus 7.8% in 2020.

CFPB director Rohit Chopra outlined the other dangers of BNPL offerings during the call, including data harvesting and taking on multiple large loans at once. (Because BNPL firms typically don’t report to credit bureaus, it’s easier for consumers to take out loans from multiple vendors at once.) These will likely become more acute as people begin to use BNPL for routine expenses, the agency said; the CFPB found that BNPL customers are increasingly paying for purchases like groceries and gas, spurred by macroeconomic pressures, including inflation.

“[BNPL] firms are harvesting and leveraging data in ways we don’t see with other companies,” Chopra said, per CNBC’s reporting. “Through their proprietary interfaces, they can see which products we buy through product placement … We want to ensure [BNPL] firms are subjected to the appropriate examination just like regular credit card firms.”

The Financial Technology Association, an industry trade group, pushed back against the allegations that BNPL could harm consumers if left unregulated — arguing that BNPL as it exists today provides a valuable alternative to other lines of credit.

“With zero to low-interest, flexible payment terms, and transparent terms and conditions, BNPL helps consumers manage their cash flow responsibly and live healthier financial lives,” Financial Technology Association CEO Penny Lee told the Associated Press in a statement.

Some data would suggest otherwise. A DebtHammer poll showed that 32% of customers skip out on paying rent, utilities or child support to make their BNPL payments, and BNPL services can also lead to bigger purchases. In May, SFGate reported that the average Affirm customer spends $365 on a single purchase as opposed to the $100 average cart size recorded in 2020.

The BNPL industry has flirted with regulations for some time, with the U.K. last year announcing new regulatory policies for BNPL companies. California sued Afterpay after it initially refused to obtain a lender’s license from the state. Elsewhere, Massachusetts regulators entered into a consent agreement with Affirm after allegations that it engaged in loan servicing activity without a license.


The EU’s AI Act could have a chilling effect on open source efforts, experts warn

The nonpartisan think tank Brookings this week published a piece decrying the European Union’s proposed regulation of open source AI, arguing it would create legal liability for general-purpose AI systems while simultaneously undermining their development. Under the E.U.’s draft AI Act, open source developers would have to adhere to guidelines for risk management, data governance, technical documentation and transparency, as well as standards of accuracy and cybersecurity.

If a company were to deploy an open source AI system that led to some disastrous outcome, the author asserts, it’s not inconceivable the company could attempt to deflect responsibility by suing the open source developers on which they built their product.

“This could further concentrate power over the future of AI in large technology companies and prevent research that is critical to the public’s understanding of AI,” Alex Engler, the analyst at Brookings who published the piece, wrote. “In the end, the [E.U.’s] attempt to regulate open-source could create a convoluted set of requirements that endangers open-source AI contributors, likely without improving use of general-purpose AI.”

In 2021, the European Commission — the E.U.’s politically independent executive arm — released the text of the AI Act, which aims to promote “trustworthy AI” deployment in the E.U. As they solicit input from industry ahead of a vote this fall, E.U. institutions are seeking to make amendments to the regulations that attempt to balance innovation with accountability. But according to some experts, the AI Act as written would impose onerous requirements on open efforts to develop AI systems.

The legislation contains carve-outs for some categories of open source AI, like those exclusively used for research and with controls to prevent misuse. But as Engler notes, it’d be difficult — if not impossible — to prevent these projects from making their way into commercial systems, where they could be abused by malicious actors.

In a recent example, Stable Diffusion, an open source AI system that generates images from text prompts, was released with a license prohibiting certain types of content. But it quickly found an audience within communities that use such AI tools to create pornographic deepfakes of celebrities.

Oren Etzioni, the founding CEO of the Allen Institute for AI, agrees that the current draft of the AI Act is problematic. In an email interview with TechCrunch, Etzioni said that the burdens introduced by the rules could have a chilling effect on areas like the development of open text-generating systems, which he believes are enabling developers to “catch up” to big tech companies like Google and Meta.

“The road to regulation hell is paved with the E.U.’s good intentions,” Etzioni said. “Open source developers should not be subject to the same burden as those developing commercial software. It should always be the case that free software can be provided ‘as is’ — consider the case of a single student developing an AI capability; they cannot afford to comply with E.U. regulations and may be forced not to distribute their software, thereby having a chilling effect on academic progress and on reproducibility of scientific results.”

Instead of seeking to regulate AI technologies broadly, E.U. regulators should focus on specific applications of AI, Etzioni argues. “There is too much uncertainty and rapid change in AI for the slow-moving regulatory process to be effective,” he said. “Instead, AI applications such as autonomous vehicles, bots, or toys should be the subject of regulation.”

Not every practitioner believes the AI Act is in need of further amending. Mike Cook, an AI researcher who’s a part of the Knives and Paintbrushes collective, thinks it’s “perfectly fine” to regulate open source AI “a little more heavily” than needed. Setting any sort of standard can be a way to show leadership globally, he posits — hopefully encouraging others to follow suit.

“The fearmongering about ‘stifling innovation’ comes mostly from people who want to do away with all regulation and have free rein, and that’s generally not a view I put much stock into,” Cook said. “I think it’s okay to legislate in the name of a better world, rather than worrying about whether your neighbour is going to regulate less than you and somehow profit from it.”

To wit, as my colleague Natasha Lomas has previously noted, the E.U.’s risk-based approach lists several prohibited uses of AI (e.g., China-style state social credit scoring) while imposing restrictions on AI systems considered to be “high-risk” — like those having to do with law enforcement. If the regulations were to target product types as opposed to product categories (as Etzioni argues they should), it might require thousands of regulations — one for each product type — leading to conflict and even greater regulatory uncertainty.

An analysis written by Lilian Edwards, a law professor at Newcastle University and a part-time legal advisor at the Ada Lovelace Institute, questions whether the providers of systems like open source large language models (e.g., GPT-3) might be liable after all under the AI Act. Language in the legislation puts the onus on downstream deployers to manage an AI system’s uses and impacts, she says — not necessarily the initial developer.

“[T]he way downstream deployers use [AI] and adapt it may be as significant as how it is originally built,” she writes. “The AI Act takes some notice of this but not nearly enough, and therefore fails to appropriately regulate the many actors who get involved in various ways ‘downstream’ in the AI supply chain.”

At AI startup Hugging Face, CEO Clément Delangue, counsel Carlos Muñoz Ferrandis and policy expert Irene Solaiman say that they welcome regulations that protect consumers, but that the AI Act as proposed is too vague. For instance, they say, it’s unclear whether the legislation would apply to the “pre-trained” machine learning models at the heart of AI-powered software or only to the software itself.

“This lack of clarity, coupled with the non-observance of ongoing community governance initiatives such as open and responsible AI licenses, might hinder upstream innovation at the very top of the AI value chain, which is a big focus for us at Hugging Face,” Delangue, Ferrandis and Solaiman said in a joint statement. “From a competition and innovation perspective, if you already place overly heavy burdens on openly released features at the top of the AI innovation stream you risk hindering incremental innovation, product differentiation and dynamic competition, this latter being core in emergent technology markets such as AI-related ones … The regulation should take into account the innovation dynamics of AI markets and thus clearly identify and protect core sources of innovation in these markets.”

As for Hugging Face, the company advocates for improved AI governance tools regardless of the AI Act’s final language, like “responsible” AI licenses and model cards that include information like the intended use of an AI system and how it works. Delangue, Ferrandis and Solaiman point out that responsible licensing is starting to become a common practice for major AI releases, such as Meta’s OPT-175 language model.

“Open innovation and responsible innovation in the AI realm are not mutually exclusive ends, but rather complementary ones,” Delangue, Ferrandis and Solaiman said. “The intersection between both should be a core target for ongoing regulatory efforts, as it is being right now for the AI community.”

That may well be achievable. Given the many moving parts involved in E.U. rulemaking (not to mention the stakeholders affected by it), it’ll likely be years before AI regulation in the bloc starts to take shape.


California EV owners asked to curb charging ahead of travel holiday

As a brutal heat wave cooks the West in the run-up to Labor Day, California’s power grid manager is calling on electric vehicle owners to avoid charging at peak times. The request is part of a broader effort to keep the state’s grid up and running, while locals crank their air conditioners to outlast a streak of blazing-hot days.

Through at least September 2, the California Independent System Operator (CAISO) is asking residents to conserve energy by “setting thermostats to 78 degrees or higher, if health permits, avoiding use of major appliances and turning off unnecessary lights” from 4 to 9 p.m. Pacific. “They should also avoid charging electric vehicles” during that time frame, added the nonprofit, which oversees California’s grid and energy market.
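
For a home charger set up to honor the request automatically, the check is just a time-window test. Here is a minimal sketch, assuming a hypothetical `ok_to_charge` helper rather than any real CAISO or charger API:

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo

# The conservation window CAISO asked residents to observe: 4 to 9 p.m. Pacific.
PEAK_START, PEAK_END = time(16, 0), time(21, 0)

def ok_to_charge(now: datetime = None) -> bool:
    """True when starting a charge would fall outside the requested peak window."""
    now = now or datetime.now(ZoneInfo("America/Los_Angeles"))
    return not (PEAK_START <= now.time() < PEAK_END)
```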

CAISO cautioned in a separate note that it may issue further calls to safeguard electricity “through the Labor Day weekend,” in response to triple-digit forecasts. The warning came as Gov. Gavin Newsom issued an emergency proclamation to increase energy production in the state.

The soaring temps and conservation requests come as California’s Air Resources Board clears the way to ban the sale of new gasoline-powered passenger cars. The graduated regulation won’t fully kick in for more than 12 years, but it sparked questions as to whether the state’s grid can reliably power millions of additional EVs by then, given California’s recent history of summer blackouts. Across the U.S., the rise of EVs demands serious investments from utilities and grid operators to boost capacity.

The clock is ticking, but the regulation is seen by climate experts as a crucial step for California, and the other states that may follow its lead, to slash the greenhouse gas emissions that are making heat waves ever worse and more frequent. Gas-powered passenger vehicles and light-duty trucks make up more than half of U.S. transportation emissions, according to the Environmental Protection Agency. 

“For the fifth largest economy to declare such a thing by 2035 is properly aggressive,” Dr. William Collins, the director of Berkeley Lab’s Climate and Ecosystem Sciences Division and Carbon Negative Initiative, told TechCrunch after the board approved the regulation.

Dr. Anne Lusk, a researcher and teacher at Harvard’s School of Public Health, also said the timing was right in a call this week with TechCrunch.

“For the issue of mobile source air pollution, we need the policy immediately,” she said. Yet, because of other issues like range anxiety and income inequality, “I think 2035 is right,” she clarified, citing the time needed for automakers to release more affordable EVs, for more used EVs to hit secondary markets and for the U.S. to shore up its charging infrastructure. To that point: A recent J.D. Power survey spotlighted poorly maintained chargers and high prices as two key obstacles to EV adoption. 

Crucially, the 2035 ban includes an exception for new plug-in hybrids. It also does not prohibit the sale of used gas-powered vehicles, nor does it forbid them from roads.

In the meantime, it’s hot as hell and only getting hotter. California maintains a list of cooling centers and tips for residents who are suffering from extreme heat, which is the deadliest form of extreme weather in the U.S., per the National Weather Service.