CFPB signals that regulation is coming for BNPL

In a shot across the bow of the buy now, pay later (BNPL) industry, the U.S. Consumer Financial Protection Bureau (CFPB) today issued a report suggesting that companies like Klarna and Afterpay, which let customers pay for products and services in installments, should be subject to stricter oversight.

The CFPB — in a step toward regulation — plans to issue guidance to oversee BNPL vendors and have them complete “supervisory” exams in line with credit card company reporting requirements, according to agency officials speaking at a presser this week.

The CFPB first announced that it would investigate the burgeoning (but rocky) BNPL industry in December 2021. While the agency has jurisdiction over banks, credit unions, securities firms, and other financial services firms based in the U.S., it didn’t previously regulate BNPL providers, which argued that they were exempt from many of the existing rules governing consumer lending.

BNPL services like Affirm and Apple’s forthcoming Apple Pay Later usually split up purchases into four or six equal installments over a fixed short-term period (e.g., a few months). Many don’t charge any interest or late fees, and don’t require a credit check for customers to qualify.

In the course of its investigation, the CFPB said that it found BNPL vendors are approving more customers for loans — 73% in 2021 compared with 69% in 2020 — and that delinquencies on these services are rising sharply. Meanwhile, the BNPL industry’s charge-off rate, or the rate of uncollectible loans, was 2.39% in 2021 — up from 1.83% in 2020.

Late fees are also climbing. The CFPB found that 10.5% of customers were charged at least one BNPL late fee in 2021 versus 7.8% in 2020.

CFPB director Rohit Chopra outlined the other dangers of BNPL offerings during the call, including data harvesting and taking on multiple large loans at once. (Because BNPL firms typically don’t report to credit bureaus, it’s easier for consumers to take out loans from multiple vendors at once.) These will likely become more acute as people begin to use BNPL for routine expenses, the agency said; the CFPB found that BNPL customers are increasingly paying for purchases like groceries and gas, spurred by macroeconomic pressures, including inflation.

“[BNPL] firms are harvesting and leveraging data in ways we don’t see with other companies,” Chopra said, per CNBC’s reporting. “Through their proprietary interfaces, they can see which products we buy through product placement. … We want to ensure [BNPL] firms are subjected to the appropriate examination just like regular credit card firms.”

The Financial Technology Association, an industry trade group, pushed back against the allegations that BNPL could harm consumers if left unregulated — arguing that BNPL as it exists today provides a valuable alternative to other lines of credit.

“With zero to low-interest, flexible payment terms, and transparent terms and conditions, BNPL helps consumers manage their cash flow responsibly and live healthier financial lives,” Financial Technology Association CEO Penny Lee told the Associated Press in a statement.

Some data would suggest otherwise. A DebtHammer poll showed that 32% of customers skip out on paying rent, utilities or child support to make their BNPL payments, and BNPL services can also lead to bigger purchases. In May, SFGate reported that the average Affirm customer spends $365 on a single purchase as opposed to the $100 average cart size recorded in 2020.

The BNPL industry has flirted with regulations for some time, with the U.K. last year announcing new regulatory policies for BNPL companies. California sued Afterpay after it initially refused to obtain a lender’s license from the state. Elsewhere, Massachusetts regulators entered into a consent agreement with Affirm after allegations that it engaged in loan servicing activity without a license.


The EU’s AI Act could have a chilling effect on open source efforts, experts warn

The nonpartisan think tank Brookings this week published a piece decrying the bloc’s proposed regulation of open source AI, arguing that it would create legal liability for general-purpose AI systems while simultaneously undermining their development. Under the E.U.’s draft AI Act, open source developers would have to adhere to guidelines for risk management, data governance, technical documentation and transparency, as well as standards of accuracy and cybersecurity.

If a company were to deploy an open source AI system that led to some disastrous outcome, the author asserts, it’s not inconceivable the company could attempt to deflect responsibility by suing the open source developers on which they built their product.

“This could further concentrate power over the future of AI in large technology companies and prevent research that is critical to the public’s understanding of AI,” Alex Engler, the analyst at Brookings who published the piece, wrote. “In the end, the [E.U.’s] attempt to regulate open-source could create a convoluted set of requirements that endangers open-source AI contributors, likely without improving use of general-purpose AI.”

In 2021, the European Commission — the E.U.’s politically independent executive arm — released the text of the AI Act, which aims to promote “trustworthy AI” deployment in the E.U. As they solicit input from industry ahead of a vote this fall, E.U. institutions are seeking to make amendments to the regulations that attempt to balance innovation with accountability. But according to some experts, the AI Act as written would impose onerous requirements on open efforts to develop AI systems.

The legislation contains carve-outs for some categories of open source AI, like those exclusively used for research and with controls to prevent misuse. But as Engler notes, it’d be difficult — if not impossible — to prevent these projects from making their way into commercial systems, where they could be abused by malicious actors.

In a recent example, Stable Diffusion, an open source AI system that generates images from text prompts, was released with a license prohibiting certain types of content. But it quickly found an audience within communities that use such AI tools to create pornographic deepfakes of celebrities.

Oren Etzioni, the founding CEO of the Allen Institute for AI, agrees that the current draft of the AI Act is problematic. In an email interview with TechCrunch, Etzioni said that the burdens introduced by the rules could have a chilling effect on areas like the development of open text-generating systems, which he believes are enabling developers to “catch up” to big tech companies like Google and Meta.

“The road to regulation hell is paved with the E.U.’s good intentions,” Etzioni said. “Open source developers should not be subject to the same burden as those developing commercial software. It should always be the case that free software can be provided ‘as is’ — consider the case of a single student developing an AI capability; they cannot afford to comply with E.U. regulations and may be forced not to distribute their software, thereby having a chilling effect on academic progress and on reproducibility of scientific results.”

Instead of seeking to regulate AI technologies broadly, E.U. regulators should focus on specific applications of AI, Etzioni argues. “There is too much uncertainty and rapid change in AI for the slow-moving regulatory process to be effective,” he said. “Instead, AI applications such as autonomous vehicles, bots, or toys should be the subject of regulation.”

Not every practitioner believes the AI Act is in need of further amending. Mike Cook, an AI researcher who’s a part of the Knives and Paintbrushes collective, thinks it’s “perfectly fine” to regulate open source AI “a little more heavily” than needed. Setting any sort of standard can be a way to show leadership globally, he posits — hopefully encouraging others to follow suit.

“The fearmongering about ‘stifling innovation’ comes mostly from people who want to do away with all regulation and have free rein, and that’s generally not a view I put much stock into,” Cook said. “I think it’s okay to legislate in the name of a better world, rather than worrying about whether your neighbour is going to regulate less than you and somehow profit from it.”

To wit, as my colleague Natasha Lomas has previously noted, the E.U.’s risk-based approach lists several prohibited uses of AI (e.g., China-style state social credit scoring) while imposing restrictions on AI systems considered to be “high-risk” — like those having to do with law enforcement. If the regulations were to target product types as opposed to product categories (as Etzioni argues they should), it might require thousands of regulations — one for each product type — leading to conflict and even greater regulatory uncertainty.

An analysis written by Lilian Edwards, a law professor at Newcastle University and a part-time legal advisor at the Ada Lovelace Institute, questions whether the providers of systems like open source large language models (e.g., GPT-3) might be liable after all under the AI Act. Language in the legislation puts the onus on downstream deployers to manage an AI system’s uses and impacts, she says — not necessarily on the initial developer.

“[T]he way downstream deployers use [AI] and adapt it may be as significant as how it is originally built,” she writes. “The AI Act takes some notice of this but not nearly enough, and therefore fails to appropriately regulate the many actors who get involved in various ways ‘downstream’ in the AI supply chain.”

At AI startup Hugging Face, CEO Clément Delangue, counsel Carlos Muñoz Ferrandis and policy expert Irene Solaiman say that they welcome regulation that protects consumers, but that the AI Act as proposed is too vague. For instance, they say, it’s unclear whether the legislation would apply to the “pre-trained” machine learning models at the heart of AI-powered software or only to the software itself.

“This lack of clarity, coupled with the non-observance of ongoing community governance initiatives such as open and responsible AI licenses, might hinder upstream innovation at the very top of the AI value chain, which is a big focus for us at Hugging Face,” Delangue, Ferrandis and Solaiman said in a joint statement. “From a competition and innovation perspective, if you already place overly heavy burdens on openly released features at the top of the AI innovation stream you risk hindering incremental innovation, product differentiation and dynamic competition, this latter being core in emergent technology markets such as AI-related ones … The regulation should take into account the innovation dynamics of AI markets and thus clearly identify and protect core sources of innovation in these markets.”

As for Hugging Face, the company advocates for improved AI governance tools regardless of the AI Act’s final language, like “responsible” AI licenses and model cards that include information such as the intended use of an AI system and how it works. Delangue, Ferrandis and Solaiman point out that responsible licensing is starting to become a common practice for major AI releases, such as Meta’s OPT-175B language model.

“Open innovation and responsible innovation in the AI realm are not mutually exclusive ends, but rather complementary ones,” Delangue, Ferrandis and Solaiman said. “The intersection between both should be a core target for ongoing regulatory efforts, as it is being right now for the AI community.”

That may well be achievable. Given the many moving parts involved in E.U. rulemaking (not to mention the stakeholders affected by it), it’ll likely be years before AI regulation in the bloc starts to take shape.


California EV owners asked to curb charging ahead of travel holiday

As a brutal heat wave cooks the West in the run-up to Labor Day, California’s power grid manager is calling on electric vehicle owners to avoid charging at peak times. The request is part of a broader effort to keep the state’s grid up and running, while locals crank their air conditioners to outlast a streak of blazing-hot days.

Through at least September 2, the California Independent System Operator (CAISO) is asking residents to conserve energy by “setting thermostats to 78 degrees or higher, if health permits, avoiding use of major appliances and turning off unnecessary lights” from 4 to 9 p.m. Pacific. “They should also avoid charging electric vehicles” during that time frame, added the nonprofit, which oversees California’s grid and energy market.

CAISO cautioned in a separate note that it may issue further calls to safeguard electricity “through the Labor Day weekend,” in response to triple-digit forecasts. The warning came as Gov. Gavin Newsom issued an emergency proclamation to increase energy production in the state.

The soaring temps and conservation requests come as California’s Air Resources Board clears the way to ban the sale of new gasoline-powered passenger cars. The graduated regulation won’t fully kick in for more than 12 years, but it sparked questions as to whether the state’s grid can reliably power millions of additional EVs by then, given California’s recent history of summer blackouts. Across the U.S., the rise of EVs demands serious investments from utilities and grid operators to boost capacity.

The clock is ticking, but the regulation is seen by climate experts as a crucial step for California, and the other states that may follow its lead, to slash the greenhouse gas emissions that are making heat waves ever worse and more frequent. Gas-powered passenger vehicles and light-duty trucks make up more than half of U.S. transportation emissions, according to the Environmental Protection Agency. 

“For the fifth largest economy to declare such a thing by 2035 is properly aggressive,” Dr. William Collins, the director of Berkeley Lab’s Climate and Ecosystem Sciences Division and Carbon Negative Initiative, told TechCrunch after the board approved the regulation.

Dr. Anne Lusk, a researcher and teacher at Harvard’s School of Public Health, also said the timing was right in a call this week with TechCrunch.

“For the issue of mobile source air pollution, we need the policy immediately,” she said. Yet, because of other issues like range anxiety and income inequality, “I think 2035 is right,” she clarified, citing the time needed for automakers to release more affordable EVs, for more used EVs to hit secondary markets and for the U.S. to shore up its charging infrastructure. To that point: A recent J.D. Power survey spotlighted poorly maintained chargers and high prices as two key obstacles to EV adoption. 

Crucially, the 2035 ban includes an exception for new plug-in hybrids. It also does not prohibit the sale of used gas-powered vehicles, nor does it forbid them from roads.

In the meantime, it’s hot as hell and only getting hotter. California maintains a list of cooling centers and tips for residents who are suffering from extreme heat, which is the deadliest form of extreme weather in the U.S., per the National Weather Service.

How tech giants are responding to the growing green card backlog

In early August, Amazon’s SVP of human resources, Beth Galetti, penned a blog post urging the U.S. Department of Homeland Security to expedite the processing of employment-based green cards.

The plea was, of course, self-serving — Amazon topped the list of companies applying for green cards in 2019 with 1,500 applications, according to U.S. Department of Labor data. But it did serve to spotlight that U.S. Citizenship and Immigration Services (USCIS) — the agency responsible for issuing green cards — is barreling toward a failure to adjudicate tens of thousands of applications before a September 30 deadline.

Green cards are highly sought after. Unlike temporary work visas (e.g., H-1Bs), they allow workers to freely switch jobs without losing their immigration status. In response to demand (and political pressure), Congress allotted 281,000 employment-based green cards in 2022, up from 262,000 in 2021. (The typical cap is 140,000.) But the pandemic — and restrictive laws under the Trump administration — threw an additional wrench in the largely manual, paper-driven process.

U.S. embassies and consular offices were temporarily closed, creating a long queue of appointments to collect fingerprints and photographs. Annual per-country caps didn’t help matters — according to the Cato Institute, about 875,000 approved petitions for green cards were waitlisted in 2021 because of the limits.

This year, the USCIS claims to have taken steps to expedite adjudication, telling Bloomberg Law in July that it shifted staff resources to prioritize processing green cards and adopted a “risk-based approach” to waive interview requirements. But it’s unclear whether this will be sufficient to prevent tens of thousands of green cards from going unused. As of July 31, the USCIS reported that it had adjudicated around 210,000 applications, leaving over 70,000 to be processed.

California clears the way to ban new gas car sales

The California Air Resources Board voted Thursday afternoon to phase out the sale of new gasoline-powered cars in the Golden State.

The full effect of the binding decision will kick in more than 12 years from now, in 2035, when it will require most new cars and trucks sold in California to run exclusively on electricity or hydrogen.

The regulation will be phased in gradually until then, and it includes an exception for plug-in hybrids, allowing them to make up as much as 20% of new car sales even after 2035.

The regulation is the result of an executive order issued by Gavin Newsom back in 2020, as part of the California governor’s efforts to fight climate change. The requirements will not ban combustion engine vehicles from roads, nor will they halt any used car sales in the state, but they are expected to accelerate the transition to zero-emission vehicles. Crucially, 16 other states and Washington, D.C. look to California to set stricter emissions standards and could follow with similar rules.

In the meantime, the landmark policy lays out graduated requirements for California, mandating that zero-emission vehicles make up 35% of new car sales by 2026, and 68% by 2030.

The state still has a long way to go before ridding its roads of heavily polluting vehicles: As of 2021, 12.4% of new cars sold in California were either battery-electric vehicles or plug-in hybrids, per the state’s Energy Commission.

Though California leads the U.S. on the EV front, its rate of new plug-in sales trails Europe, which hit 22% in April. The EU is pursuing similar restrictions for 2035, and some countries, including Norway, aim to move faster.

Some legacy automakers, such as GM and Volvo, have already pledged to eventually go all electric, more or less aligning with younger firms like Tesla, Rivian, Lucid and Canoo. Other legacy businesses, like Toyota, have primarily focused on hybrids and are also teeing up future hydrogen fuel cell vehicles.

A ban by another name?

Gas-powered cars will still be welcomed on California roads even after all the sales requirements go into effect, the state’s Air Resources Board emphasized in an email to TechCrunch. “Not actually a ban,” said a spokesperson for the regulator. “The Advanced Clean Cars II regulation requires that all NEW vehicle sales be electric by 2035. Drivers with internal combustion vehicles at that time can keep them until the end of the vehicle life.”

Along similar lines, the Sierra Club, an environmental advocacy group, cast the new requirements as an essential “transition” rather than a ban. Regardless, the point is virtually the same — to restrict the sale of new gas vehicles because they’re wreaking havoc on the climate and our health.

“This is very significant, because it sends a strong signal that we need to shift to zero-emission vehicles,” said Katherine García, director of the Sierra Club’s Clean Transportation for All campaign. “This is common-sense action for protecting public health and for taking climate action.”

Building on this regulation will save lives, according to Dr. Lisa Patel, a clinical assistant professor of pediatrics at Stanford. “It is a health imperative that all states eventually ban gas-powered cars,” she said in an email to TechCrunch, pointing to a report from the American Lung Association. The lung health group estimated in March that a nationwide switch to EVs could save more than 100,000 lives over the next three decades.

“No child’s health today should be ruined by breathing in the toxic fumes generated by our gas-powered fleets,” Dr. Patel added, “particularly when we have the technological solutions to avoid it.”

Is the future of the microchip industry going to be Made in America?

Welcome to The TechCrunch Exchange, a weekly startups-and-markets newsletter. It’s inspired by the daily TechCrunch+ column where it gets its name.

With all eyes on Taiwan and worries mounting around semiconductor supply, the U.S. CHIPS Act is particularly timely. But it is not unique: Other countries similarly aspire to reduce their reliance on imported chips. Let’s explore.

Anna

From cheap as chips to billion-dollar incentives

U.S. president Joe Biden signed the CHIPS and Science Act of 2022 into law earlier this week after the bill received broad bipartisan support in the House and Senate.

The H and I in CHIPS stand for “Helpful Incentives,” hinting at the main component of the initiative: $52.7 billion in public subsidies.

Biden described the new bill on Twitter as “a once-in-a-generation law that invests in America by supercharging our efforts to make semiconductors here at home.”

How digital health startups are navigating the post-Roe legal landscape

With the overturning of Roe v. Wade earlier this year, digital health and reproductive care startups bolstered their efforts to make abortion pills and emergency contraceptives more accessible. Now, as state laws shift and abortion bans go into effect across the United States, companies are still trying to find ways to provide care while reimagining what healthcare should include.

Following the decision by the U.S. Supreme Court that overturned Roe, many leading organizations focused on reproductive medicine have spoken out against the ruling. “Decisions about healthcare, particularly reproductive healthcare, should be made by patients and physicians, not by interest groups, religious organizations, politicians, pundits, or Supreme Court justices,” said Marcelle Cedars, president of the American Society for Reproductive Medicine.

Nationally, the situation is proving tricky to navigate as each state can now implement its own abortion laws. By November, 26 states are expected to face near-total abortion bans.

For that reason, TechCrunch checked in with digital health startups to learn how they intend to continue to offer reproductive care despite an increasingly hostile legal environment.

What do demand and restrictions look like?

In the wake of the decision, there has been a national surge in demand for emergency contraception like Plan B, also known as the “morning-after pill.”

In Arizona, Arkansas, Colorado, Florida, Georgia, Idaho, Illinois, Kansas, Maine, Mississippi, South Dakota and Tennessee, a pharmacist can refuse to dispense an emergency contraceptive if it conflicts with their religious beliefs. In some states, the medication is also excluded from what is considered mandatory contraceptive coverage, according to the Guttmacher Institute.

But digital health companies, which provide care virtually, claim they are better able to avoid these limitations.

Facebook avoids a service shutdown in Europe, for now

Facebook has avoided the risk of being forced to shut its service down in Europe this summer as a result of the latest twist in a long-running data protection complaint saga that relates to a clash between EU privacy and US surveillance law.

The delay — in what’s still widely expected to be a suspension order requiring Meta, Facebook’s parent company, to stop illegal data exports — follows objections from other regional DPAs to a draft decision by the company’s lead data protection authority, which they have been reviewing.

The Irish Business Post picked up on the development earlier.

Under the bloc’s General Data Protection Regulation (GDPR), cross-border complaints typically require cooperation and at least consensus among DPAs in affected regions, so the regulation gives interested authorities a right to weigh in on draft decisions by a lead data supervisor.

“We have received some objections from a small number of Data Protection Authorities in this case,” confirmed the Irish Data Protection Commission (DPC)’s deputy commissioner, Graham Doyle. “We are currently assessing the objections and will engage with the relevant authorities to try and resolve the issues raised.”

Doyle declined to provide details of specific objections received.

The development means that a final decision in the (seemingly) never-ending saga over the legality of Facebook’s data transfers — and the fate of its service in Europe — will be kicked down the road for several more months at least.

In a previous cross-border GDPR complaint, related to WhatsApp, where objections were similarly raised to Ireland’s proposed enforcement, it took a total of around nine months before a final decision (and hefty fine) was issued.

Meta will also very likely challenge a suspension order in the Irish courts — and could also seek a stay, as it did previously, to try to keep operating as is in the meantime.

Back in September 2020, the DPC sent Facebook a preliminary suspension order over the data transfers issue — triggering a legal challenge. Facebook won a stay, but its bid to roll back the regulator’s decision via judicial review, challenging its procedure, was eventually dismissed in May 2021, reviving the enforcement process — which has been grinding on ever since.

The DPC would not comment on an expected timeframe for a final decision to be issued in light of the objections to its draft.

That will, in any case, depend on whether the DPAs’ differing views on enforcement can be settled without triggering the GDPR’s formal dispute resolution mechanism, which can require the European Data Protection Board to step in (as happened in the WhatsApp case).

If the DPAs can’t come to an agreement among themselves and the EDPB has to get involved, it’s not beyond the bounds of possibility that a final decision gets pushed into 2023.

Max Schrems, the privacy campaigner and lawyer who originally raised the Facebook data transfers complaint (all the way back in 2013!), has said he expects considerable further delays in enforcement of any suspension order — including by Meta lodging appeals — as we reported previously.

The tech giant has a specific incentive to delay enforcement as long as possible, as it may be banking on (or, well, hoping for) a fresh data transfer deal between the EU and the US landing in time to save Facebook’s service in Europe.

A preliminary agreement on a new high-level EU-US accord on data transfers — replacing the defunct Privacy Shield (which is one very tangible casualty of this Facebook data transfers complaint saga thus far; its predecessor Safe Harbor is another) — was reached back in March. And earlier this year the European Commission was suggesting it could be finalized by the end of the year.

Since then, some reports have suggested progress towards agreeing a final text may not be going as smoothly as hoped, so a replacement deal may not arrive so quickly — which would complicate Meta’s ‘strategy’ (if we can call it that) of banking on further delays to enforcement buying it enough time to switch its European data transfers onto a fresh, unchallenged legal basis.

The latter outcome would of course reset the whole game of legal and regulatory whack-a-mole yet again. So, well, it’s possible this saga could still have years, plural, to run…

Dear Sophie: Which immigration options are best for a decentralized team in the US?

Here’s another edition of “Dear Sophie,” the advice column that answers immigration-related questions about working at technology companies.

“Your questions are vital to the spread of knowledge that allows people all over the world to rise above borders and pursue their dreams,” says Sophie Alcorn, a Silicon Valley immigration attorney. “Whether you’re in people ops, a founder or seeking a job in Silicon Valley, I would love to answer your questions in my next column.”



Dear Sophie,

We just raised a $20 million Series A, and we need to hire more engineers to fully develop our product.

In addition, we’d like to bring our overseas PEO contractors to the States to join us more locally and in-timezone.

We’re excited about being decentralized — which immigration options are best for us?

— Elated Entrepreneur

Dear Elated,

Congrats on your latest funding milestone! Your funding comes at an opportune time: The recent spate of layoffs and hiring freezes at many tech companies presents a great opportunity for you to scoop up top global engineering talent. Plus, I’m not surprised you raised even in this “environment” — we’re seeing a lot of that going on despite much of the news!

A few suggestions

Before I dive into your question, let me make a few suggestions: Consult both a corporate attorney and an immigration attorney.

A corporate attorney can help you set up a distributed work policy if your company doesn’t already have one. Corporate, employment and privacy laws differ in each state, so given that you are looking to expand your distributed workforce, you should know what your company can — and can’t — do in each state. You should also discuss whether to transfer the individuals you want to bring to the U.S. from your PEO (professional employer organization) to your company, particularly if you are only using the PEO for the individuals abroad. The specific immigration options you’re considering can be a factor in this as well.


When you consult your immigration attorney, you’ll want to discuss visa sponsorship for any individuals you want to bring to the U.S. who are currently on your global PEO’s payroll, as well as options for folks you haven’t started working with yet, whether they are already in the States or abroad. If your PEO has only a minimal employment relationship with those individuals and cannot hire and fire them, the PEO would probably not qualify as an employer for many types of visa sponsorship processes in the U.S., but there are ways to work with this.

You should also discuss whether to set up an immigration policy that spells out:

  • When your company will sponsor an individual for a green card.
  • Immigration strategies based on your company’s hiring goals and timing.
  • Options for specific individuals based on their situation and qualifications.

Startups differ on whether to have these policies at all and, if so, what they should include. I always recommend being strategic and deliberate about offering immigration support as a benefit to recruit and retain international talent, whether they are in the United States or abroad. Offering green card sponsorship after a year or two is a tremendous benefit! It also helps shape your company culture, demonstrating that you value diversity, inclusion and the well-being of your team.

The DOJ is reportedly prepping an antitrust suit against Google over its ad business

The Department of Justice is preparing a second major antitrust suit against Google, according to new reporting by Bloomberg. The DOJ could sue Google “as soon as next month,” according to the report, which details that the lawsuit will be filed in federal court in either Washington or New York.

Unlike the first major Google antitrust case the federal government initiated during the Trump administration, the new lawsuit would focus on the company’s command of the digital ad market. Bloomberg reports that DOJ antitrust lawyers are in the process of wrapping up interviews with publishers after “years of work” that will ultimately culminate in the coming lawsuit.

In 2020, the DOJ sued the tech titan over its dominance in the online search market, accusing the company of “unlawfully maintaining monopolies in the markets for general search services, search advertising, and general search text advertising in the United States.”

At the time, Google pushed back against the suit, arguing that consumers use its product because it is superior, not because they don’t have alternatives.

The Biden administration went out of its way to name a prominent Google critic, Jonathan Kanter, to lead the DOJ’s antitrust division. In a 2016 NYT op-ed, Kanter argued that Google is notorious for using an anti-competitive “playbook” to cut off the oxygen supply to its competitors.

The first Google antitrust suit was filed during Trump’s tenure, but the Biden administration inherited that framework — a rare bit of policy continuity between the Trump and Biden White Houses — and is still working to hold the tech giant accountable for the anti-competitive behavior that cemented its dominance over the last decade.