Twitter locked the Trump campaign out of its account for sharing COVID-19 misinformation

Twitter took action against the official Trump campaign Twitter account Wednesday, freezing @TeamTrump’s ability to tweet until it removed a video in which the president made misleading claims about the coronavirus. In the video clip, taken from a Wednesday morning Fox News interview, President Trump makes the unfounded assertion that children are “almost immune” from COVID-19.

“If you look at children, children are almost — and I would almost say definitely — but almost immune from this disease,” Trump said. “They don’t have a problem. They just don’t have a problem.”

While Trump’s main account, @realDonaldTrump, linked out to the offending @TeamTrump tweet, it did not directly share it. Despite some mistaken reports to the contrary, Trump’s own account was not subject to the same enforcement action as the Trump campaign account, which appears to have regained its ability to tweet around 6 PM PT.

“The @TeamTrump Tweet you referenced is in violation of the Twitter Rules on COVID-19 misinformation,” Twitter spokesperson Aly Pavela said in a statement provided to TechCrunch. “The account owner will be required to remove the Tweet before they can Tweet again.”

Facebook also took its own unprecedented action against President Trump’s account late Wednesday, removing the post for violating its rules against harmful false claims that any group is immune to the virus.

The president’s false claims were made in service of his belief that schools should reopen their classrooms in the fall. In June, Education Secretary Betsy DeVos made similar unscientific claims, arguing that children are “stoppers of the disease.”

In reality, the relationship between children and the virus is not yet well understood. While young children seem less prone to severe cases of COVID-19, the extent to which they contract and spread the virus isn’t yet known. In a new report examining transmission rates at a Georgia youth camp, the CDC observed that “children of all ages are susceptible to SARS-CoV-2 infection and, contrary to early reports, might play an important role in transmission.”

YouTube bans thousands of Chinese accounts to combat ‘coordinated influence operations’

YouTube has banned a large number of Chinese accounts it said were engaging in “coordinated influence operations” on political issues, the company announced today. From April to June, 2,596 accounts linked to China were taken down, compared with 277 in the first three months of 2020.

“These channels mostly uploaded spammy, non-political content, but a small subset posted political content primarily in Chinese similar to the findings in a recent Graphika report, including content related to the U.S. response to COVID-19,” Google posted in its Threat Analysis Group bulletin for Q2.

The Graphika report, entitled “Return of the (Spamouflage) Dragon: Pro Chinese Spam Network Tries Again,” can be read here. It details a large set of accounts on YouTube, Facebook, Twitter, and other social media that began to be activated early this year and appeared to be part of a global propaganda push:

The network made heavy use of video footage taken from pro-Chinese government channels, together with memes and lengthy texts in both Chinese and English. It interspersed its political content with spam posts, typically of scenery, basketball, models, and TikTok videos. These appeared designed to camouflage the operation’s political content, hence the name.

It’s the “return” of this particular spam dragon because it showed up last fall in a similar form, and whoever is pulling the strings appears undeterred by detection. New, sleeper, and stolen accounts were amassed again and deployed for similar purposes, though now — as Google notes — with a COVID-19 twist.

When June rolled around, content was also being pushed related to the ongoing protests regarding the killings of George Floyd and Breonna Taylor and other racial justice matters.

The Google post notes that the Chinese campaign, as well as others from Russia and Iran, were multi-platform, as similar findings were reported by Facebook, Twitter, and cybersecurity outfits like FireEye.

With 186 channels taken down in April, 1,098 in May, and 1,312 in June, we may be in for a bumper crop in the summer as well. Watch with care.

Technologists: Consider Canada

America’s technology industry, radiating brilliance and profitability from its Silicon Valley home base, was until recently a shining beacon of what made America great: Science, progress, entrepreneurship. But public opinion has swung against big tech amazingly fast and far; negative views doubled between 2015 and 2019 from 17% to 34%. The list of concerns is long and includes privacy, treatment of workers, marketplace fairness, the carnage among ad-supported publications and the poisoning of public discourse.

But there’s one big issue behind all of these: An industry ravenous for growth, profit and power, that has failed at treating its employees, its customers and the inhabitants of society at large as human beings. Bear in mind that products, companies and ecosystems are built by people, for people. They reflect the values of the society around them, and right now, America’s values are in a troubled state.

We both have a lot of respect and affection for the United States, birthplace of the microprocessor and the electric guitar. We could have pursued our tech careers there, but we’ve declined repeated invitations and chosen to stay at home here in Canada. If you want to build technology to be harnessed for equity, diversity and social advancement of the many, rather than wealth and freedom for the few, we think Canada is a good place to do it.

U.S. big tech is correctly seen as having too much money, too much power and too little accountability. Those at the top clearly see the best effects of their innovations, but rarely the social costs. They make great things — but they also disrupt lives, invade privacy and abuse their platforms.

We both came of age at a time when tech aspired to something better, and so did some of today’s tech giants. Four big tech CEOs recently testified in front of Congress. They were grilled about alleged antitrust abuses, although many of us watching were thinking about other ills associated with some of these companies: tax avoidance, privacy breaches, data mining, surveillance, censorship, the spread of false news, toxic byproducts, disregard for employee welfare.

But the industry’s problem isn’t really the products themselves — or the people who build them. Tech workers tend to be dramatically more progressive than the companies they work for, as Facebook staff showed in their recent walkout over President Donald Trump’s posts.

Big tech’s problem is that it amplifies the issues Americans are struggling with more broadly. That includes economic polarization, which is echoed in big-tech financial statements, and the race politics that prevent tech (among other industries) from being more inclusive to minorities and talented immigrants.

We’re particularly struck by the Trump administration’s recent moves to deny opportunities to H-1B visa holders. Coming after several years of family separations, visa bans and anti-immigrant rhetoric, it seems almost calculated to send IT experts, engineers, programmers, researchers, doctors, entrepreneurs and future leaders from around the world — the kind of talented newcomers who built America’s current prosperity — fleeing to more receptive shores.

One of those shores is Canada’s; that’s where we live and work. Our country has long courted immigration, but it’s turned around its longstanding brain-drain problem in recent years with policies designed to scoop up talented people who feel uncomfortable or unwanted in America. We have an immigration program, the Global Talent Stream, that helps innovative companies fast-track foreign workers with specialized skills. Cities like Toronto, Montreal, Waterloo and Vancouver have been leading North America in tech job creation during the Trump years, fuelled by outposts of the big international tech companies but also by scaled-up domestic firms that do things the Canadian way, such as enterprise software developer OpenText (one of us is a co-founder) and e-commerce giant Shopify.

“Canada is awesome. Give it a try,” Shopify CEO Tobi Lütke told disaffected U.S. tech workers on Twitter recently.

But it’s not just about policy; it’s about underlying values. Canada is exceptionally comfortable with diversity, in theory (as expressed in immigration policy) and practice (just walk down a street in Vancouver or Toronto). We’re not perfect, but we have been competently led and reasonably successful in recognizing the issues we need to deal with. And our social contract is more cooperative and inclusive.

Yes, that means public health care with no copays, but it also means more emphasis on sustainability, corporate responsibility and a more collaborative strain of capitalism. Our federal and provincial governments have mostly been applauded for their gusher of stimulative wage subsidies and grants meant to sustain small businesses and tech talent during the pandemic, whereas Washington’s response now appears to have been formulated in part to funnel public money to elites.

American big tech today feels morally adrift, which leads to losing out on talented people who want to live the values Silicon Valley used to stand for — not just wealth, freedom and the few, but inclusivity, diversity and the many. Canada is just one alternative to the U.S. model, but it’s the alternative we know best and the one just across the border, with loads of technology job openings.

It wouldn’t surprise us if more tech refugees find themselves voting with their feet.

Dear Sophie: Can I bypass H-1B and sponsor a grad for a green card?

Here’s another edition of “Dear Sophie,” the advice column that answers immigration-related questions about working at technology companies.

“Your questions are vital to the spread of knowledge that allows people all over the world to rise above borders and pursue their dreams,” says Sophie Alcorn, a Silicon Valley immigration attorney. “Whether you’re in people ops, a founder or seeking a job in Silicon Valley, I would love to answer your questions in my next column.”

“Dear Sophie” columns are accessible for Extra Crunch subscribers; use promo code ALCORN to purchase a one- or two-year subscription for 50% off.


Dear Sophie:

A very bright and promising foreign national who graduated from a U.S. university has been working for our firm and just received a STEM OPT extension. We would like to keep her on after her STEM OPT ends. We registered her in this year’s H-1B lottery, but unfortunately, she wasn’t selected.

Given the challenges of getting an H-1B through the lottery and the #h1bvisaban, how can we bypass the H-1B and potentially sponsor her for a green card?

— Eager in Emeryville

Dear Eager,

Happy to hear you’re willing to sponsor a promising graduate from an American university for a green card. Sounds like you’re interested in exploring the EB-2 or EB-3 green card with the PERM process. For additional resources, feel free to check out my recent podcast on PERM.

Just because U.S. immigration policy often runs counter to retaining the best and the brightest college graduates in the U.S. doesn’t mean there isn’t hope. Some options exist for these talented folks and the companies that want to hire them, even though many employment-based green cards require candidates who are outstanding in their field. Recent graduates often haven’t yet built up their work experience and credentials, but there can be paths forward.

Although it may present some immigration risks to the candidate that should be weighed carefully in collaboration with an experienced business immigration attorney, many employers have been doing as you suggested: sidestepping the H-1B visa and directly pursuing a green card. This is often due to the extremely competitive H-1B lottery and high denial rates for initial H-1B petitions and extensions. Also, a moratorium on all green cards, H-1B, H-2B, J and L visas for individuals currently outside the U.S. is in effect until the end of this year. This now makes it nearly impossible for most employers to sponsor individuals to come to the U.S. unless their work is in the national interest or essential to the U.S. food supply chain.

So, many people are seeking solutions. First, the basics: Because your STEM OPT employee is already in the U.S., and the H-1B lottery now only costs $10 to register a candidate, I suggest that your company continue to enter her in the lottery as a backup option in case her F-1 STEM OPT status ends before you can secure her a green card.

The green cards for which most recent graduates would be eligible require the sponsoring employer to go through the PERM labor certification process before filing a green card petition. Separately, there are other green cards for individuals of extraordinary ability, which I’ve also written about.

PERM, which stands for Program Electronic Review Management, is the system used for applying for labor certification from the U.S. Department of Labor. Please speak with an attorney about the timing of this process and consider any risks to your employee’s personal immigration situation given her current F-1 nonimmigrant status.

Labor certification must be submitted to U.S. Citizenship and Immigration Services (USCIS) with EB-2 and EB-3 green card petitions. Labor certification confirms that no U.S. workers are qualified and available to accept the job offered to the green card candidate, and that employing the candidate won’t adversely affect the wages and working conditions of American workers.

Without knowing more about your STEM OPT employee’s background and qualifications, I would surmise that she might be able to qualify for one of these employment-based green cards:

  • EB-2, for members of the professions holding an advanced degree (or a bachelor’s degree plus five years of progressive experience)
  • EB-3, for skilled workers and professionals

Both of these green card categories require the employer sponsor to go through the PERM labor certification process. Because PERM is a complex process and will determine if you can proceed with sponsoring your employee for a green card, I recommend that you work with an experienced immigration attorney.

In general, PERM requires employers to take these steps:

  • Determine in detail the duties and minimum requirements of the position
  • File a prevailing wage request
  • Go through an extensive recruitment process
  • Get a certification

The duties and requirements of the position should be detailed and typical for your company — not tailored to the green card candidate. These duties and requirements will be used for job posting during the recruitment process.

In more detail, employers must file a prevailing wage request to the National Prevailing Wage Center of the Labor Department. The prevailing wage is determined based on the position, the geographical location of the position and economic conditions. The employer must pay the prevailing wage or higher for the position to ensure that hiring a foreign national would not adversely affect the wages of U.S. workers in similar positions. This process can take a few months.

The most time-consuming of these steps is the recruitment process to determine whether qualified U.S. workers are available for the position. To do that, an employer must advertise the job in two Sunday editions of a local newspaper, place a job order with the state workforce agency (CalJOBS in California) and post an internal company notice of the filing. Plan ahead with your legal team to consider running some things in parallel to decrease the overall time.

For professional positions, employers need to use three additional recruitment methods, such as using a job recruiting website, an employment firm, a job fair, a posting at a career placement center at a local university or college, or incentives for employee referrals.

The job order with the state workforce agency must run for at least 30 consecutive days. The internal job posting must be up for 10 consecutive business days. Employers must allow 30 days for candidates to apply and must interview the U.S. workers who do.

Generally, if there are no qualified applicants, employers then file ETA Form 9089 with the Labor Department. No supporting documents need to be submitted with the form, but the documents must be maintained for five years in case of an audit. The Labor Department will send a verification email to the employer along with a sponsorship questionnaire, which the employer should fill out within a week of receiving it. It’s important not to miss this email!

The PERM process can take anywhere from three to eight months as long as the Labor Department does not audit your case. The Labor Department conducts two types of audit: random audits and targeted audits. Random audits are done to make sure employers are following the PERM procedure.

Some common reasons for targeted audits could include:

  • The employer recently laid off employees
  • The candidate appears unqualified for the position
  • The job does not require a bachelor’s degree
  • A company executive is related to the candidate

The Labor Department usually issues an audit notice within six months of receiving the labor certification application, and the employer must respond within 30 days. An audit does not mean an employer’s PERM will not be approved. However, it can add nine to 18 months to the process. If an employer does not respond to the audit notice, the Labor Department will deem the case abandoned, and for any future PERM applications, the employer may be required to conduct supervised recruitment.

Once the Labor Department approves the PERM Labor Certification for that position, you must file the green card petition to USCIS within 180 days. If your employee was born in any country other than China or India and you are sponsoring her for an EB-2 green card, you can file the I-140 green card petition and the I-485 adjustment of status from F-1 STEM OPT to EB-2 at the same time, assuming the “priority date” is still current.
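
For a rough sense of how these stages stack up, here is a back-of-the-envelope sketch. The month figures are illustrative estimates drawn from the ranges discussed in this column, not official processing times, and any individual case should be scoped with an attorney.

```python
# Back-of-the-envelope PERM timeline. All durations are illustrative
# estimates based on the ranges mentioned above, not official
# Department of Labor processing times.

steps = {
    "Prevailing wage determination":   (2, 4),   # "a few months"
    "Recruitment and waiting periods": (2, 3),   # job order, postings, 30-day window
    "PERM adjudication (no audit)":    (3, 8),   # "three to eight months"
}
audit_delay = (9, 18)  # extra months if the Labor Department audits the case

best = sum(low for low, _ in steps.values())
worst = sum(high for _, high in steps.values())

print(f"Typical path to an approved PERM: roughly {best}-{worst} months")
print(f"If audited: roughly {best + audit_delay[0]}-{worst + audit_delay[1]} months")
print("After approval, the green card petition must be filed within 180 days.")
```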

If eligible, your STEM OPT employee could also enter the diversity green card lottery in the fall to increase her chances of getting a green card. Each year, 50,000 green cards are reserved for individuals born in countries that have low rates of immigration to the U.S.

Let me know if you have any other questions. Good luck!

— Sophie


Have a question? Ask it here. We reserve the right to edit your submission for clarity and/or space. The information provided in “Dear Sophie” is general information and not legal advice. For more information on the limitations of “Dear Sophie,” please view our full disclaimer here. You can contact Sophie directly at Alcorn Immigration Law.

Sophie’s podcast, Immigration Law for Tech Startups, is available on all major podcast platforms. If you’d like to be a guest, she’s accepting applications!

US tech needs a pivot to survive

Last month, American tech companies were dealt two of the most consequential legal decisions they have ever faced. Both of these decisions came from thousands of miles away, in Europe. While companies are spending time and money scrambling to understand how to comply with a single decision, they shouldn’t miss the broader ramification: Europe has different operating principles from the U.S., and is no longer passively accepting American rules of engagement on tech.

In the first decision, Apple objected to and was spared a $15 billion tax bill the EU said was due to Ireland, while the European Commission’s most vocal anti-tech crusader Margrethe Vestager was dealt a stinging defeat. In the second, and much more far-reaching decision, Europe’s courts struck a blow at a central tenet of American tech’s business model: data storage and flows.

American companies have spent decades bundling stores of user data and convincing investors of its worth as an asset. In Schrems, Europe’s highest court ruled that masses of free-flowing user data are, instead, an enormous liability, sowing doubt about the future of the main method that companies use to transfer data across the Atlantic.

On the surface, this decision appears to be about data protection. But there is a choppier undertow of sentiment swirling in legislative and regulatory circles across Europe. Namely that American companies have amassed significant fortunes from Europeans and their data, and governments want their share of the revenue.

What’s more, the fact that European courts handed victory to an individual citizen while also handing defeat to one of the commission’s senior leaders shows European institutions are even more interested in protecting individual rights than they are in propping up commission positions. This particular dynamic bodes poorly for the lobbying and influence strategies that many American companies have pursued in their European expansion.

After the Schrems ruling, companies will scramble to build legal teams and data centers that can comply with the court’s decision. They will spend large sums of money on pre-built solutions or cloud providers that can deliver a quick and seamless transition to the new legal reality. What companies should be doing, however, is building a comprehensive understanding of the political, judicial and social realities of the European countries where they do business — because this is just the tip of the iceberg.

American companies need to show Europeans — regularly and seriously — that they do not take their business for granted.

Europe is an afterthought no more

For many years, American tech companies have treated Europe as a market that required minimal, if any, meaningful adaptations for success. If an early-stage company wanted to gain market share in Germany, it would translate its website, add a notice about cookies and find a convenient way to transact in euros. Larger companies wouldn’t add many more layers of complexity to this strategy; perhaps they would establish a local sales office with a European from HQ, hire a German with experience in U.S. companies or sign a local partnership that could help distribute or deliver their product. Europe, for many small and medium-sized tech firms, was little more than a bigger Canada in a tougher time zone.

Only the largest companies would go to the effort of setting up public policy offices in Brussels, or meaningfully try to understand the noncommercial issues that could affect their license to operate in Europe. The Schrems ruling shows how this strategy isn’t feasible anymore.

American tech companies must invest in understanding European political realities the same way they do in markets like India, Russia or China, where U.S. tech companies go to great lengths to adapt products to local laws or pull out where they cannot comply. Europe is not just the European Commission, but rather 27 different countries that vote and act on different interests at home and in Brussels.

Governments in Beijing or Moscow refused to accept a reality of U.S. companies setting conditions for them from the outset. After underestimating Europe for years, American companies now need to dedicate headspace to considering how business is materially affected by Europe’s different views on data protection, commerce, taxation and other issues.

This is not to say that American and European values on the internet differ as dramatically as they do with China’s values, for instance. But Europe, from national governments to the EU and to courts, is making it clear that it will not accept a reality where U.S. companies assume that they have license to operate the same way they do at home. Where U.S. companies expect light taxation, European governments expect revenue for economic activity. Where U.S. companies expect a clear line between state and federal legislation, Europe offers a messy patchwork of national and international regulation. Where U.S. companies expect that their popularity alone is proof that consumers consent to looser privacy or data protection, Europe reminds them that (across the pond) the state has the last word on the matter.

Many American tech companies understand their commercial risks inside and out but are not prepared for managing the risks that are out of their control. From reputation risk to regulatory risk, they can no longer treat Europe as a like-for-like market with the U.S., and the winners will be those companies that can navigate the legal and political changes afoot. Having a Brussels strategy isn’t enough. Instead American companies will need to build deeper influence in the member states where they operate. Specifically, they will need to communicate their side of the argument early and often to a wider range of potential allies, from local and national governments in markets where they operate, to civil society activists like Max Schrems.

The world’s offline differences are obvious, and the time when we could pretend that the internet erased them rather than magnified them is quickly ending.

Autonomous vehicle reporting data is driving AV innovation right off the road

At the end of every calendar year, the complaints from autonomous vehicle companies start piling up. This annual tradition is the result of a requirement by the California Department of Motor Vehicles that AV companies deliver “disengagement reports” by January 1 of each year showing the number of times an AV operator had to disengage the vehicle’s autonomous driving function while testing the vehicle.

However, all disengagement reports have one thing in common: their usefulness is universally criticized by those who have to submit them. The CEO and founder of a San Francisco-based self-driving car company publicly stated that disengagement reporting is “woefully inadequate … to give a meaningful signal about whether an AV is ready for commercial deployment.” The CEO of a self-driving technology startup called the metrics “misguided.” Waymo stated in a tweet that the metric “does not provide relevant insights” into its self-driving technology or “distinguish its performance from others in the self-driving space.”

Why do AV companies object so strongly to California’s disengagement reports? They argue the metric is misleading because it lacks the context of each company’s varied testing strategy. I would argue that a lack of guidance regarding the language used to describe the disengagements also makes the data misleading. Furthermore, the metric incentivizes testing in less difficult circumstances and favors real-world testing over more insightful virtual testing.

Understanding California reporting metrics

To test an autonomous vehicle on public roads in California, an AV company must obtain an AV Testing Permit. As of June 22, 2020, there were 66 Autonomous Vehicle Testing Permit holders in California and 36 of those companies reported autonomous vehicle testing in California in 2019. Only five of those companies have permits to transport passengers.

To operate on California public roads, each permitted company must report any collision that results in property damage, bodily injury, or death within 10 days of the incident.

There have been 24 autonomous vehicle collision reports in 2020 thus far. Though the majority of those incidents occurred in autonomous mode, the accidents were almost exclusively the result of the autonomous vehicle being rear-ended. In California, rear-end collisions are almost always deemed the fault of the rear-ending driver.

The usefulness of collision data is evident — consumers and regulators are most concerned with the safety of autonomous vehicles for pedestrians and passengers. If an AV company reports even one accident resulting in substantial damage to the vehicle or harm to a pedestrian or passenger while the vehicle operates in autonomous mode, the implications and repercussions for the company (and potentially the entire AV industry) are substantial.

However, the usefulness of disengagement reporting data is much more questionable. The California DMV requires AV operators to report the number and details of disengagements while testing on California public roads by January 1 of each year. The DMV defines this as “how often their vehicles disengaged from autonomous mode during tests (whether because of technical failure or situations requiring the test driver/operator to take manual control of the vehicle to operate safely).”

Operators must also track how often their vehicles disengaged from autonomous mode, and whether that disengagement was the result of a software malfunction or human error, or was made at the option of the vehicle operator.

AV companies have kept a tight lid on measurable metrics, often only sharing limited footage of demonstrations performed under controlled settings and very little data, if any. Some companies have shared the occasional “annual safety report,” which reads more like a promotional deck than a source of data on AV performance. Furthermore, there are almost no reporting requirements for companies doing public testing in any other state. California’s disengagement reports are the exception.

This AV information desert means that disengagement reporting in California has often been treated as our only source of information on AVs. The public is forced to judge AV readiness and relative performance based on this disengagement data, which is incomplete at best and misleading at worst.

Disengagement reporting data offers no context

Most AV companies claim that disengagement reporting data is a poor metric for judging advancement in the AV industry due to a lack of context for the numbers: knowing where those miles were driven and the purpose of those trips is essential to understanding the data in disengagement reports.

Some in the AV industry have complained that miles driven in sparsely populated areas with arid climates and few intersections are nothing like miles driven in a city such as San Francisco, Pittsburgh or Atlanta. As a result, the disengagement numbers reported by companies that test in the former versus the latter geography are not comparable.
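
To see why, consider a toy comparison with entirely invented numbers: even after normalizing disengagements per 1,000 miles, a fleet logging easy highway miles can look dramatically “better” than one tackling dense urban driving.

```python
# Toy illustration with invented numbers: a per-mile disengagement rate
# still says nothing about how hard the driven miles actually were.

fleets = [
    # (name, miles driven, disengagements, test environment)
    ("Desert Tester", 500_000, 50, "empty highways, clear weather"),
    ("City Tester",    40_000, 80, "dense urban streets, pedestrians, cyclists"),
]

for name, miles, disengagements, environment in fleets:
    rate = disengagements / miles * 1_000  # disengagements per 1,000 miles
    print(f"{name}: {rate:.2f} disengagements per 1,000 miles ({environment})")

# The desert fleet reports a rate 20x lower (0.10 vs. 2.00), yet the city
# fleet may be the one making real progress on the hard problems.
```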

It’s also important to understand that disengagement reporting requirements influence AV companies’ decisions on where and how to test. A test that requires substantial disengagements, even while safe, would be discouraged, as it would make the company look less ready for commercial deployment than its competitors. In reality, such testing may result in the most commercially ready vehicle. Indeed, some in the AV industry have accused competitors of manipulating disengagement reporting metrics by easing the difficulty of miles driven over time to look like real progress.

Furthermore, while data can look particularly good when manipulated by easy drives and clear roads, data can look particularly bad when it’s being used strategically to improve AV software.

Let’s consider an example provided by Jack Stewart, a reporter for NPR’s Marketplace covering transportation:

“Say a company rolls out a brand-new build of their software, and they’re testing that in California because it’s near their headquarters. That software could be extra buggy at the beginning, and you could see a bunch of disengagements, but that same company could be running a commercial service somewhere like Arizona, where they don’t have to collect these reports.

That service could be running super smoothly. You don’t really get a picture of a company’s overall performance just by looking at this one really tight little metric. It was a nice idea of California some years ago to start collecting some information, but it’s not really doing what it was originally intended to do nowadays.”

Disengagement reports lack prescriptive language

The disengagement reports are also misleading due to a lack of guidance and uniformity in the language used to describe the disengagements. For example, while AV companies used a variety of language, “perception discrepancies” was the most common term used to describe the reason for a disengagement — however, it’s not clear that the term “perception discrepancies” has a set meaning.

Several operators used the phrase “perception discrepancy” to describe a failure to detect an object correctly. Valeo North America described a similar error as “false detection of object.” Toyota Research Institute almost exclusively described its disengagements vaguely as “Safety Driver proactive disengagement,” the meaning of which is “any kind of disengagement.” Pony.ai, by contrast, described each instance of disengagement with particularity.

Many other operators reported disengagements that were “planned testing disengagements” or that were described with such insufficient particularity as to be virtually meaningless.

For example, “planned disengagements” could mean the testing of intentionally created malfunctions, or it could simply mean the software is so nascent and unsophisticated that the company expected the disengagement. Similarly, “perception discrepancy” could mean anything from precautionary disengagements to disengagements due to extremely hazardous software malfunctions. “Perception discrepancy,” “planned disengagement” or any number of other vague descriptions of disengagements make comparisons across AV operators virtually impossible.

So, for example, while it appears that a San Francisco-based AV company’s disengagements were exclusively precautionary, the lack of guidance on how to describe disengagements and the many vague descriptions provided by AV companies have cast a shadow over disengagement descriptions, calling them all into question.

Regulations discourage virtual testing

Today, the software of AV companies is the real product. The hardware and physical components — lidar, sensors, etc. — of AV vehicles have become so uniform, they’re practically off-the-shelf. The real component that is being tested is software. It’s well known that software bugs are best found by running the software as often as possible; road testing simply can’t reach the sheer numbers necessary to find all the bugs. What can reach those numbers is virtual testing.

However, the regulations discourage virtual testing as the lower reported road miles would seem to imply that a company is not road-ready.

Jack Stewart of NPR’s Marketplace expressed a similar point of view:

“There are things that can be relatively bought off the shelf and, more so these days, there are just a few companies that you can go to and pick up the hardware that you need. It’s the software, and it’s how many miles that software has driven both in simulation and on the real roads without any incident.”

So, where can we find the real data we need to compare AV companies? One company runs over 30,000 instances daily through its end-to-end, three-dimensional simulation environment. Another company runs millions of off-road tests a day through its internal simulation tool, running driving models that include scenarios that it can’t test on roads involving pedestrians, lane merging, and parked cars. Waymo drives 20 million miles a day in its Carcraft simulation platform — the equivalent of over 100 years of real-world driving on public roads.

One CEO estimated that a single virtual mile can be just as insightful as 1,000 miles collected on the open road.

Jonathan Karmel, Waymo’s product lead for simulation and automation, similarly explained that Carcraft provides “the most interesting miles and useful information.”

Where we go from here

Clearly there are issues with disengagement reports — both in relying on the data therein and in the negative incentives they create for AV companies. However, there are voluntary steps that the AV industry can take to combat some of these issues:

  1. Prioritize and invest in virtual testing. Developing and operating a robust system of virtual testing may present a high expense to AV companies, but it also presents the opportunity to dramatically shorten the pathway to commercial deployment through the ability to test more complex, higher risk, and higher number scenarios.
  2. Share data from virtual testing. Voluntary disclosure of virtual testing data will reduce the public’s reliance on disengagement reports. Claims of commercial readiness will carry little weight unless AV companies have provided the public with reliable data on AV readiness for a sustained period.
  3. Seek the greatest value from on-road miles. AV companies should continue using on-road testing in California, but they should use those miles to fill in the gaps from virtual testing. They should seek the greatest value possible out of those slower miles, accept the higher percentage of disengagements they will be required to report, and when reporting on those miles, describe their context in particularity.

With these steps, AV companies can lessen the pain of California’s disengagement reporting data and advance more quickly to an AV-ready future.

UK commits to redesign visa streaming algorithm after challenge to ‘racist’ tool

The UK government is suspending the use of an algorithm used to stream visa applications after concerns were raised that the technology bakes in unconscious bias and racism.

The tool had been the target of a legal challenge. The Joint Council for the Welfare of Immigrants (JCWI) and campaigning law firm Foxglove had asked a court to declare the visa application streaming algorithm unlawful and order a halt to its use, pending a judicial review.

The legal action had not run its full course but appears to have forced the Home Office’s hand as it has committed to a redesign of the system.

A Home Office spokesperson confirmed to us that from August 7 the algorithm’s use will be suspended, sending us this statement via email: “We have been reviewing how the visa application streaming tool operates and will be redesigning our processes to make them even more streamlined and secure.”

The government has not accepted the allegations of bias, however, writing in a letter to the law firm: “The fact of the redesign does not mean that the [Secretary of State] accepts the allegations in your claim form [i.e. around unconscious bias and the use of nationality as a criteria in the streaming process].”

The Home Office letter also claims the department had already moved away from use of the streaming tool “in many application types”. But it adds that it will approach the redesign “with an open mind in considering the concerns you have raised”.

The redesign is slated to be completed by the autumn, and the Home Office says an interim process will be put in place in the meantime, excluding the use of nationality as a sorting criterion.

The JCWI has claimed a win against what it describes as a “shadowy, computer-driven” people sifting system — writing on its website: “Today’s win represents the UK’s first successful court challenge to an algorithmic decision system. We had asked the Court to declare the streaming algorithm unlawful, and to order a halt to its use to assess visa applications, pending a review. The Home Office’s decision effectively concedes the claim.”

The department did not respond to a number of questions we put to it regarding the algorithm and its design processes — including whether or not it sought legal advice ahead of implementing the technology in order to determine whether it complied with the UK’s Equality Act.

“We do not accept the allegations Joint Council for the Welfare of Immigrants made in their Judicial Review claim and whilst litigation is still on-going it would not be appropriate for the Department to comment any further,” the Home Office statement added.

The JCWI’s complaint centered on the use, since 2015, of an algorithm with a “traffic-light system” to grade every entry visa application to the UK.

“The tool, which the Home Office described as a digital ‘streaming tool’, assigns a Red, Amber or Green risk rating to applicants. Once assigned by the algorithm, this rating plays a major role in determining the outcome of the visa application,” it writes, dubbing the technology “racist” and discriminatory by design, given its treatment of certain nationalities.

“The visa algorithm discriminated on the basis of nationality — by design. Applications made by people holding ‘suspect’ nationalities received a higher risk score. Their applications received intensive scrutiny by Home Office officials, were approached with more scepticism, took longer to determine, and were much more likely to be refused.

“We argued this was racial discrimination and breached the Equality Act 2010,” it adds. “The streaming tool was opaque. Aside from admitting the existence of a secret list of suspect nationalities, the Home Office refused to provide meaningful information about the algorithm. It remains unclear what other factors were used to grade applications.”
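
The Home Office has never published the tool’s actual inputs or weights, so any reconstruction is guesswork. But a purely hypothetical sketch like the one below shows how simply feeding nationality into a risk score, as the JCWI alleges, produces discrimination by design: two otherwise identical applications diverge on nationality alone.

```python
# Purely hypothetical sketch of a nationality-weighted "traffic light"
# streaming score. The real Home Office tool's factors and weights are
# secret; this only illustrates the design flaw the JCWI describes.

SUSPECT_NATIONALITIES = {"Country A", "Country B"}  # stand-in for the secret list

def stream_application(nationality: str, other_risk_points: int = 0) -> str:
    score = other_risk_points
    if nationality in SUSPECT_NATIONALITIES:
        score += 50  # arbitrary penalty applied purely for nationality
    if score >= 50:
        return "Red"    # intensive scrutiny, slower decisions, more refusals
    if score >= 20:
        return "Amber"
    return "Green"

# Two applications identical in every respect except nationality:
print(stream_application("Country A"))  # Red
print(stream_application("Country C"))  # Green
```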

Since 2012 the Home Office has openly operated an immigration policy known as the ‘hostile environment’ — applying administrative and legislative processes that are intended to make it as hard as possible for people to stay in the UK.

The policy has led to a number of human rights scandals. (We also covered the impact on the local tech sector by telling the story of one UK startup’s visa nightmare last year.) So applying automation atop an already highly problematic policy does look like a formula for being taken to court.

The JCWI’s concern around the streaming tool was exactly that it was being used to automate the racism and discrimination many argue underpin the Home Office’s ‘hostile environment’ policy. In other words, if the policy itself is racist any algorithm is going to pick up and reflect that.

“The Home Office’s own independent review of the Windrush scandal found that it was oblivious to the racist assumptions and systems it operates,” said Chai Patel, legal policy director of the JCWI, in a statement. “This streaming tool took decades of institutionally racist practices, such as targeting particular nationalities for immigration raids, and turned them into software. The immigration system needs to be rebuilt from the ground up to monitor for such bias and to root it out.”

“We’re delighted the Home Office has seen sense and scrapped the streaming tool. Racist feedback loops meant that what should have been a fair migration process was, in practice, just ‘speedy boarding for white people.’ What we need is democracy, not government by algorithm,” added Cori Crider, founder and director of Foxglove. “Before any further systems get rolled out, let’s ask experts and the public whether automation is appropriate at all, and how historic biases can be spotted and dug out at the roots.”

In its letter to Foxglove, the government has committed to undertaking Equality Impact Assessments and Data Protection Impact Assessments for the interim process it will switch to from August 7 — when it writes that it will use “person-centric attributes (such as evidence of previous travel)” to help sift some visa applications, further committing that “nationality will not be used”.

Some types of applications will be removed from the sifting process altogether, during this period.

“The intent is that the redesign will be completed as quickly as possible and at the latest by October 30, 2020,” it adds.

Asked for thoughts on what a legally acceptable visa streaming algorithm might look like, Internet law expert Lilian Edwards told TechCrunch: “It’s a tough one… I am not enough of an immigration lawyer to know if the original criteria applied re suspect nationalities would have been illegal by judicial review standard anyway even if not implemented in a sorting algorithm. If yes then clearly a next generation algorithm should aspire only to discriminate on legally acceptable grounds.

“The problem as we all know is that machine learning can reconstruct illegal criteria — though there are now well known techniques for evading that.”

“You could say the algorithmic system did us a favour by confronting illegal criteria being used which could have remained buried at individual immigration officer informal level. And indeed one argument for such systems used to be their ‘consistent and non-arbitrary’ nature. It’s a tough one,” she added.
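
Edwards’ point about machine learning reconstructing illegal criteria is easy to demonstrate on synthetic data. The sketch below uses entirely made-up data and assumes numpy and scikit-learn are installed; it trains a model that never sees nationality, yet reproduces the old bias through a correlated proxy feature.

```python
# Synthetic demonstration: drop the protected attribute and a model can
# still rebuild the bias from a correlated proxy. Data is entirely invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Protected attribute (1 = "suspect" nationality); never given to the model.
nationality = rng.integers(0, 2, size=n)

# Hypothetical proxy that agrees with nationality ~90% of the time, e.g.
# which processing centre or document type accompanied the application.
proxy = np.where(rng.random(n) < 0.9, nationality, 1 - nationality)

# Historical decisions skewed against the "suspect" group.
refused = (0.5 * nationality + rng.random(n)) > 0.8

# Train on the proxy alone; nationality itself is excluded from the features.
model = LogisticRegression().fit(proxy.reshape(-1, 1), refused)

for value in (0, 1):
    prob = model.predict_proba(np.array([[value]]))[0, 1]
    print(f"Predicted refusal probability when proxy = {value}: {prob:.0%}")

# Prints roughly 25% vs. 65%: the historical skew survives, even though
# nationality never appeared in the training features.
```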

Earlier this year the Dutch government was ordered to halt use of an algorithmic risk scoring system for predicting the likelihood social security claimants would commit benefits or tax fraud — after a local court found it breached human rights law.

In another interesting case, a group of UK Uber drivers are challenging the legality of the gig platform’s algorithmic management of them under Europe’s data protection framework — which bakes in data access rights, including provisions attached to legally significant automated decisions.

FCC invites public comment on Trump’s attempt to nerf Section 230

FCC Chairman Ajit Pai has decided to ask the public for its thoughts on an attempt initiated by Trump in May to water down certain protections that arguably led to the creation of the modern internet economy. The nakedly retaliatory order seems to be, legally speaking, laughable, and could be resolved without public input — but the FCC wants your opinion, so you may as well give it to them.

You can submit your comment here at the FCC’s long-suffering electronic comment filing system, but before you do so, perhaps acquaint yourself with a few facts.

Section 230 essentially prevents companies like Facebook and Google from being liable for content they merely host, as long as they work to take down illegal content quickly. Some feel these protections have given the companies the opportunity to manipulate speech on their platforms — Trump felt targeted by a fact-check warning placed by Twitter on his unsupported claims of fraud in mail-in voting.

To understand the order itself and see commentary from the companies that would be affected, as well as Senator Ron Wyden (D-OR), who co-authored the law in the first place, read our story from the day Trump signed the order. (Wyden called it “plainly illegal.”)

For a bipartisan legislative approach that actually addresses shortcomings in Section 230, check out the PACT Act announced in June. (Sen. Brian Schatz (D-HI) says they’re approaching the law “with a scalpel rather than a jackhammer.”)

More relevant to the FCC’s proceedings, however, are the comments of sitting commissioner Geoffrey Starks, who questioned the order’s legality and ethics, likening it to a personal vendetta intended to intimidate certain companies. As he explained:

The broader debate about Section 230 long predates President Trump’s conflict with Twitter in particular, and there are so many smart people who believe the law here should be updated. But ultimately that debate belongs to Congress. That the president may find it more expedient to influence a five-member commission than a 535-member Congress is not a sufficient reason, much less a good one, to circumvent the constitutional function of our democratically elected representatives.

Incidentally, Starks may be who Pai is referring to in a memo announcing the comment period. “I strongly disagree with those who demand that we ignore the law and deny the public and all stakeholders the opportunity to weigh in on this important issue. We should welcome vigorous debate—not foreclose it,” Pai wrote.

This may be a reference to Commissioner Starks’s suggestion that the FCC address the order quickly and authoritatively: “If, as I suspect it ultimately will, the petition fails at a legal question of authority, I think we should say it loud and clear, and close the book on this unfortunate detour,” he said. After all, public opinion doesn’t count for much if the order has no legal effect to begin with and the FCC doesn’t even have to consider how it might revisit Section 230.

Whatever the case, the proposal is ready for you to comment on it. To do so, visit this page and click, in the box on the left, “+New Filing” or “+Express” — the first is if you would like to submit a document or evidence in support of your opinion, and the second is if you just want to explain your position in plain text. Remember, this information will be filed publicly, so anything you put in those fields — name, address, and everything — will be visible online.

To be clear, you’re commenting on the NTIA proposal that the FCC draw up new rules regarding Section 230, which the executive order compelled that organization to send, not the executive order itself.

As with the net neutrality debacle, the FCC does not have to take your opinion into account, or reality for that matter. The comment period lasts 45 days, after which the item will likely go to internal deliberations at the Commission.

What Microsoft should demand in exchange for its “payment” to the U.S. government for TikTok

In one of the crazier news stories (and in 2020, that is saying something), President Donald Trump said today during a media availability that in order for the U.S. government to sign off on a potential Microsoft/TikTok deal, “a very substantial portion of that price is going to have to come into the Treasury of the United States,” based on my colleague Alex Wilhelm’s rough transcript.

That seems nearly impossible to actually execute in reality (corporations don’t just quote-unquote bribe the U.S. government to get their docs signed), but let’s actually take it at face value: should Microsoft pay, and if so, what should they demand in any bargain with the U.S. government?

First and foremost, some context. ByteDance, TikTok’s parent company, has been valued at over $100 billion. ByteDance owns a suite of apps, including TikTok’s China-focused and extraordinarily popular sister app Douyin as well as Toutiao, an extremely successful news reader, so teasing out TikTok’s valuation by itself is difficult. Adding to the ambiguity is the regulatory chaos of the deal, and the fact that many big-pocketed buyers like Facebook are out of the running on straight antitrust grounds.

So let’s say for illustration that the price is at least $10 billion, if not tens of billions of dollars. How should Microsoft be thinking about a negotiation with the government here?

The overriding objective should be reducing Microsoft’s post-acquisition regulatory headaches. TikTok has well-documented privacy problems, which also involve teens — an area where regulations are acutely sensitive. When Facebook faced privacy problems on its own platform, it finally agreed to a settlement of $5 billion last year with the Federal Trade Commission to unify all the different cases and bring them to a conclusion. It also agreed to a set of restrictions as well as a monitoring mechanism to ensure compliance. TikTok (formerly Musical.ly) actually agreed to an FTC privacy settlement of $5.7 million last year.

On top of privacy, you have the export licensing issues from Treasury, data protection concerns on Capitol Hill due to the app’s China provenance, and potential antitrust issues from Justice.

So, it’s time to cut a deal. Offer the U.S. government a beefy sum — perhaps even a few billion depending on the final purchase price — as a “settlement fine” in exchange for immunity to all claims regarding privacy, trade, and antitrust regulations prior to TikTok’s acquisition. Perhaps have a setup where Microsoft has 180 days post-acquisition to clear up privacy issues, move data to presumably its own Azure cloud in the United States, and put in even better parental controls than TikTok has already introduced in the past few months.

Far from being an atrocious setup, this could massively limit Microsoft’s long-term liabilities, and also allow the company to avoid a lot of the escrow and holdbacks typical of large M&A deals, where an acquirer will not pay out the full acquisition price upfront in case future lawsuits bring significant costs.

It’s terrible for the President himself to get involved in such a matter in such a direct and indelicate way. But now that President Trump has opened the door, it’s actually perhaps not as bad a path forward as it looks at first glance. He has the power to push for an inter-agency process, line up all the government stakeholders, and accept a level of immunity in exchange for a “fine.”

A settlement can’t solve every problem. TikTok, like all internet apps in the United States, is not just governed by federal law but also by state laws around privacy, such as the California Consumer Privacy Act. A settlement with the federal government may still conflict with relevant state laws. In addition, agreeing to a large payment in the heart of election season would be deeply controversial, possibly on both sides of the aisle.

Nonetheless, this deal is by no means typical, and no one should think it will have a typical M&A process. While few lawyers would recommend engaging with the federal government over what is effectively a strange form of highway robbery, there are decent fiduciary reasons to just pay the toll, acquire some liability protection and move on.

Trump calls TikTok a hot brand, demands a chunk of its sale price

Today the president appeared to bless the budding Microsoft-TikTok deal, continuing his evolution on a possible transaction. After stating last Friday that he’d rather see TikTok banned than sold to a U.S.-based company, Trump changed his tune over the weekend. TikTok is owned by China-based company ByteDance, which owns a portfolio of apps and services.

A weekend phone call between Satya Nadella, the CEO of Microsoft, and the American president appeared to change his mind, leading to the software company sharing publicly on Sunday that it was pursuing a deal.

Then today the president, endorsing a deal between an American company and ByteDance over TikTok, also said that he expects a chunk of the sale price to wind up in the accounts of the American government.

The American president has long struggled with basic economic concepts. For example, who pays tariffs. But to see Trump state that he expects to receive a chunk of a deal between two private companies that he is effectively forcing to the altar is surreal.

To fully grok his take, we’ve roughly transcribed the pertinent few minutes of his explanation from this morning, when asked about the weekend call with Microsoft’s Nadella. It’s worth a read (bold highlights are TechCrunch’s):

We had a great conversation, uh, he called me, to see whether or not, uh, how I felt about it. And I said look, it can’t be controlled, for security reasons, by China. Too big, too invasive. And it can’t be. And here’s the deal. I don’t mind if, whether it’s Microsoft or somebody else — a big company, a secure company, a very American company — buy it.

It’s probably easier to buy the whole thing than to buy 30% of it. ‘Cause I say how do you do 30%? Who’s going to get the name? The name is hot, the brand is hot. And who’s going to get the name? How do you do that if it’s owned by two different companies? So, my personal opinion was, you are probably better off buying the whole thing rather than buying 30% of it. I think buying 30% is complicated.

And, uh, I suggested that he can go ahead, he can try. We set a date, I set a date, of around September 15th, at which point it’s going to be out of business in the United States. But if somebody, whether it’s Microsoft or somebody else, buys it, that’ll be interesting.

I did say that if you buy it, whatever the price is, that goes to whoever owns it, because I guess it’s China, essentially, but more than anything else, I said a very substantial portion of that price is going to have to come into the Treasury of the United States. Because we’re making it possible for this deal to happen. Right now they don’t have any rights, unless we give it to ’em. So if we’re going to give them the rights, then it has to come into, it has to come into this country.

It’s a little bit like the landlord-tenant [relationship]. Uh, without a lease, the tenant has nothing. So they pay what is called “key money” or they pay something. But the United States should be reimbursed, or should be paid a substantial amount of money because without the United States they don’t have anything, at least having to do with the 30%.

So, uh, I told him that. I think we are going to have, uh, maybe a deal is going to be made, it’s a great asset, it’s a great asset. But it’s not a great asset in the United States unless they have the approval of the United States.

So it’ll close down on September 15th, unless Microsoft or somebody else is able to buy it, and work out a deal, an appropriate deal, so the Treasury of the — really the Treasury, I suppose you would say, of the United States, gets a lot of money. A lot of money.