Let’s talk about killer robots

Looking for a Thanksgiving dinner table conversation that isn’t politics or professional sports? Okay, let’s talk about killer robots. It’s a concept that long ago leapt from the pages of science fiction to reality, depending on how loose a definition you use for “robot.” Military drones abandoned Asimov’s First Law of Robotics — “A robot may not injure a human being or, through inaction, allow a human being to come to harm” — decades ago.

The topic has been simmering again of late due to the increasing prospect of killer robots in domestic law enforcement. One of the era’s best-known robot makers, Boston Dynamics, raised some public policy red flags when it showcased footage of its Spot robot being deployed in Massachusetts State Police training exercises on our stage back in 2019.

The robots were not armed and instead were part of an exercise designed to determine how they might help keep officers out of harm’s way during a hostage or terrorist situation. But the prospect of deploying robots in scenarios where people’s lives are at immediate risk was enough to prompt an inquiry from the ACLU, which told TechCrunch:

We urgently need more transparency from government agencies, who should be upfront with the public about their plans to test and deploy new technologies. We also need statewide regulations to protect civil liberties, civil rights, and racial justice in the age of artificial intelligence.

Last year, meanwhile, the NYPD cut short a deal with Boston Dynamics following a strong public backlash, after images surfaced of Spot being deployed in response to a home invasion in the Bronx.

For its part, Boston Dynamics has been very vocal in its opposition to the weaponization of its robots. Last month, it signed an open letter, along with fellow robotics firms Agility, ANYbotics, Clearpath Robotics and Open Robotics, condemning the practice. It notes:

We believe that adding weapons to robots that are remotely or autonomously operated, widely available to the public, and capable of navigating to previously inaccessible locations where people live and work, raises new risks of harm and serious ethical issues. Weaponized applications of these newly-capable robots will also harm public trust in the technology in ways that damage the tremendous benefits they will bring to society.

The letter was believed to have been, in part, a response to Ghost Robotics’ work with the U.S. military. When images of one of its own robotic dogs showed up on Twitter sporting an autonomous rifle, the Philadelphia firm told TechCrunch that it took an agnostic stance with regard to how the systems are employed by its military partners:

We don’t make the payloads. Are we going to promote and advertise any of these weapon systems? Probably not. That’s a tough one to answer. Because we’re selling to the military, we don’t know what they do with them. We’re not going to dictate to our government customers how they use the robots.

We do draw the line on where they’re sold. We only sell to U.S. and allied governments. We don’t even sell our robots to enterprise customers in adversarial markets. We get lots of inquiries about our robots in Russia and China. We don’t ship there, even for our enterprise customers.

Boston Dynamics and Ghost Robotics are currently embroiled in a lawsuit involving several patents.

This week, local news site Mission Local surfaced renewed concern around killer robots – this time in San Francisco. The site notes that a policy proposal being reviewed by the city’s Board of Supervisors next week includes language about killer robots. The “Law Enforcement Equipment Policy” begins with an inventory of robots currently in the San Francisco Police Department’s possession.

There are 17 in all – 12 of which are functioning. They’re largely designed for bomb detection and disposal – which is to say that none are designed specifically for killing.

“The robots listed in this section shall not be utilized outside of training and simulations, criminal apprehensions, critical incidents, exigent circumstances, executing a warrant or during suspicious device assessments,” the policy notes. It then adds, more troublingly, “Robots will only be used as a deadly force option when risk of loss of life to members of the public or officers is imminent and outweighs any other force option available to SFPD.”

Effectively, according to the language, the robots can be used to kill in order to potentially save the lives of officers or the public. It seems innocuous enough in that context, perhaps. At the very least, it seems to fall within the legal definition of “justified” deadly force. But new concerns arise in what would appear to be a profound change to policy.

For starters, the use of a bomb disposal robot to kill a suspect is not without precedent. In July 2016, Dallas police officers did just that for what was believed to be the first time in U.S. history. “We saw no other option but to use our bomb robot and place a device on its extension for it to detonate where the suspect was,” police chief David Brown said at the time.

Second, it is easy to see how new precedent could be used in a CYA scenario if a robot is intentionally or accidentally deployed in this manner. Third, and perhaps most alarmingly, one could imagine the language applying to the acquisition of a future robotic system not purely designed for explosive discovery and disposal.

Mission Local adds that SF’s Board of Supervisors Rules Committee chair Aaron Peskin attempted to insert the more Asimov-friendly line, “Robots shall not be used as a Use of Force against any person.” The SFPD apparently crossed out Peskin’s change and updated it to its current language.

The renewed conversation around killer robots in California comes, in part, due to Assembly Bill 481. Signed into law by Gov. Gavin Newsom in September of last year, the law is designed to make police action more transparent. That includes an inventory of military equipment utilized by law enforcement.

The 17 robots included in the San Francisco document are part of a longer list that also includes the Lenco BearCat armored vehicle, flash-bangs and 15 submachine guns.

Last month, Oakland Police said it would not be seeking approval for armed remote robots. The department said in a statement:

The Oakland Police Department (OPD) is not adding armed remote vehicles to the department. OPD did take part in ad hoc committee discussions with the Oakland Police Commission and community members to explore all possible uses for the vehicle. However, after further discussions with the Chief and the Executive Team, the department decided it no longer wanted to explore that particular option.
The statement followed public backlash.

The toothpaste is already out of the tube on Asimov’s first law. The killer robots are here. As for the second law — “A robot must obey the orders given it by human beings” — this is still mostly within our grasp. It’s up to society to determine how its robots behave.


Clearview AI banned from selling its facial recognition software to most U.S. companies

A company that gained notoriety for selling access to billions of facial photos, many culled from social media without the knowledge of the individuals depicted, faces major new restrictions to its controversial business model.

On Monday, Clearview AI agreed to settle a 2020 lawsuit from the ACLU that accused the company of running afoul of an Illinois law banning the use of individuals’ biometric data without consent.

That law, the Biometric Information Privacy Act (BIPA), protects the privacy of Illinois residents, but the Clearview settlement is a clear blueprint for how the law can be leveraged to bolster consumer protections on the national stage.

“By requiring Clearview to comply with Illinois’ pathbreaking biometric privacy law not just in the state, but across the country, this settlement demonstrates that strong privacy laws can provide real protections against abuse,” Deputy Director of ACLU’s Speech, Privacy, and Technology Project Nathan Freed Wessler said.

“Clearview can no longer treat people’s unique biometric identifiers as an unrestricted source of profit. Other companies would be wise to take note, and other states should follow Illinois’ lead in enacting strong biometric privacy laws.”

Clearview isn’t the only company to get tangled up in the trailblazing Illinois privacy law. Last year, Facebook was ordered to pay $650 million for violating BIPA by automatically tagging people in photos with the use of facial recognition tech.

According to the terms of the Clearview settlement, which is still in the process of being finalized by the court, the company will be nationally banned from selling or giving away access to its facial recognition database to private companies and individuals.

While there is an exception made for government contractors — Clearview works with government agencies, including Homeland Security and the FBI in the U.S. — the company can’t provide its software to any government contractors or state or local government entities in Illinois for five years.

Clearview will also be forced to maintain an opt-out system to allow any Illinois residents to block their likeness from the company’s facial search results, a mechanism it must spend $50,000 to publicize online. The company must also end its controversial practice of providing free trials to police officers if those individuals don’t get approval through their departments to test the software.
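It’s easy to picture how such an opt-out works at the data layer. Below is a minimal sketch, assuming a hypothetical registry of opted-out identifiers that gets checked before search results are returned; the names and structure here are mine, not Clearview’s.

```python
# Hypothetical sketch of a settlement-style opt-out filter for face-search
# results. Names and structure are invented for illustration and do not
# reflect Clearview's actual systems.

from dataclasses import dataclass

@dataclass
class SearchHit:
    person_id: str   # internal identifier for the matched face
    score: float     # similarity score from the matcher

# Registry of identifiers whose owners have opted out (e.g., verified
# Illinois residents under the settlement terms).
OPTED_OUT: set[str] = {"il-resident-123"}

def filter_results(hits: list[SearchHit]) -> list[SearchHit]:
    """Drop any hit whose subject has opted out, before results are returned."""
    return [h for h in hits if h.person_id not in OPTED_OUT]

hits = [SearchHit("il-resident-123", 0.94), SearchHit("person-456", 0.88)]
print(filter_results(hits))  # only person-456 survives the filter
```

The hard parts in practice, which this sketch glosses over, are verifying a requester’s residency and linking an opt-out to every stored photo and embedding of that person.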

The sweeping restrictions will have a huge impact on Clearview’s ability to do business in the U.S., but the company is also facing privacy headwinds in its business abroad. Last November, Britain’s Information Commissioner’s Office hit Clearview with a $22.6 million fine for failing to obtain consent from British residents before sweeping their photos into its massive database. Clearview has also run afoul of privacy laws in Canada, France and Australia, with some countries ordering the company to delete all data that was obtained without their residents’ consent.

Robotics roundup

I’m excited for any opportunity to talk about soft robotics. There’s something other-worldly about the world of inflatable robotic bladders. Human beings are wont to develop robotics in their own image. The world of soft robots is something akin to what the technology would look like if it had been developed by a sea creature.

Certainly cephalopods like octopuses and squid have been a major inspiration for the category, along with other invertebrates. The benefits of the technology are clear. These far less rigid structures are more compliant, able to squeeze into more places and conform to different shapes. We have seen a number of models deployed for different tasks, including picking and placing fragile products like foodstuffs.

Image Credits: MIT CSAIL

Another major benefit is safety. As human-robot collaborations increase, companies are looking for ways to ensure that their big, hulking machines don’t accidentally hurt their workers. That appears to be a big part of the inspiration behind this MIT project designed to create robots that can alternate between hard and soft structures.

It’s still very much in the early stages, but the research presents a compelling idea, with a series of cables that help the soft structure become more rigid. The team likens the tech to the muscles in a human arm. When flexed, it becomes much more difficult to move.

New research published by researchers at the National University of Singapore and the University of Lincoln (U.K.) in the journal Nature shows ocean-inspired soft robots in a seemingly more natural environment: the Mariana Trench. The soft structure (this time drawing more direct inspiration from rays) was able to withstand the extreme pressure that comes with operating nearly 11,000 meters below the ocean’s surface.

Per the team:

This self-powered robot eliminates the requirement for any rigid vessel. To reduce shear stress at the interfaces between electronic components, we decentralize the electronics by increasing the distance between components or separating them from the printed circuit board.

Another entirely different — but equally fascinating — swimming robot comes out of UC San Diego. The 2 cm long fish-shaped robots have platinum tails, which propel them via bubbles created by a reaction with the hydrogen peroxide in their petri dishes. After researchers cut the little robots in half or in thirds, magnetic interaction makes the pieces “heal” back together. The hope is that such an approach could be put to use in larger swimming robots, which are frequently made of fragile materials.

Image Credits: Skydio

The big robotics investment of the week is Skydio. Not the first time that sentiment has been uttered, of course. The Series D brings the drone maker’s total funding to $340 million as it expands into additional commercial applications. There’s also the fact that the company is U.S.-based, which likely makes it all the more appealing to investors following DJI’s addition to the DoC’s “entity list” in December.

That was one of those initiatives the Trump administration took on its way out the door. Like a number of recent entity list additions, this will hopefully be scrutinized under the new administration. Thus far, however, it hasn’t appeared to impact DJI’s ability to sell drones in the States, including the new FPV.

Image Credits: Brian Heater

I got my hands on the new model this week. It’s an interesting new category the drone giant has only dipped its toes in thus far. The DJI FPV bundles goggles for a first-person flight experience that has largely been the domain of racers and high-end hobbyist models. Given that DJI currently controls roughly 70% of the global market, this marks an important moment for the burgeoning category.

Animated image of a drone floating over rebar and tying it together at intersections.

Image Credits: SkyMul

SkyMul, meanwhile, is one of the more interesting applications I’ve seen in the drone space for a bit. The startup is one of a deluge of robotics companies in the construction space. Specifically, it uses quadcopters for the extremely thankless job of tying rebar.

Image Credits: MIT

On the research side of the category, we’ve got these adorable little buggers out of MIT. At 0.6 grams, they weigh roughly as much as a large bumblebee. And unlike previous models, these “cassette tapes with wings” are designed to survive mid-air collisions. Turns out bumblebees tend to crash into each other a bunch while flying. That’s one of those little pieces of nature trivia that make perfect sense when you think about it for a second.

The drones “are built with soft actuators, made from carbon nanotube-coated rubber cylinders. The actuators elongate when electricity is applied at a rate up to 500 times a second. Doing this causes the wings to beat and the drones to take flight.”
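For a rough sense of scale (my own back-of-the-envelope sketch, not anything from the MIT paper), a sinusoidal drive at 500 Hz means a complete wingbeat every 2 milliseconds:

```python
# A rough, hypothetical model of the actuation described above: a sinusoidal
# drive at up to 500 Hz producing wing oscillation. The amplitude and
# waveform are invented for illustration; they are not from the MIT paper.

import math

DRIVE_HZ = 500.0            # upper bound on actuation rate cited in the article
PERIOD_S = 1.0 / DRIVE_HZ   # one full wingbeat every 2 ms at 500 Hz
AMPLITUDE = 1.0             # normalized peak actuator elongation (hypothetical)

def elongation(t: float) -> float:
    """Normalized actuator elongation at time t (in seconds)."""
    return AMPLITUDE * math.sin(2 * math.pi * DRIVE_HZ * t)

# Sample one wingbeat at quarter-period intervals:
for i in range(5):
    t = i * PERIOD_S / 4
    print(f"t = {t * 1e3:.2f} ms, elongation = {elongation(t):+.2f}")
```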

After I posted that article, someone asked if I knew about a “kids’ book about a boy who becomes an insect drone and solves mysteries?” I did not. But I asked the occasionally useful social media platform and it nearly instantly responded with the extremely ’70s kids book title, “Danny Dunn, Invisible Boy.” The thirteenth book in a long-running series, the book offered a glimpse into the future of drones. Per a 2014 Medium article:

You control the drone using a keyboard box, a thoroughly funky virtual-reality helmet, and what look like a pair of souped up Nintendo Power Gloves. With head inside the helmet, the pilot sees what the dragonfly sees, and even feels what the dragonfly feels via haptic feedback in the gloves.

In less fun “science fiction predicts the future” news, a follow-up to all of the hubbub surrounding Boston Dynamics last week. Once again, the ACLU is weighing in on the matter, as it did when the company debuted footage of a Spot unit in the field with the Massachusetts State Police on our stage a while back.

The org reiterates questions it and others have raised before, including some of the generally ominous images and ideas around weaponized robotics that have been floating around for a long time. Here, however, it combines these with existing conversations around policing, AI and bias:

Viewed narrowly, there’s nothing wrong with using a robot to scout a dangerous location or deliver food to hostages. But communities should take a hard look at expensive, rare-use technologies at a time when the nation is increasingly recognizing the need to invest in solving our social problems in better ways than just empowering police.

There’s a lot to untangle here, but as I said last week, it’s a net positive that we’re discussing these things now, while they’re still largely hypothetical. If drones have taught us anything, it’s that technology comes at you fast. I can appreciate that last week’s art installation was not the framing Boston Dynamics wanted to enter into this conversation — and I do think the company’s an easy target due to its profile and the framing of fiction like Black Mirror.

But having this ongoing conversation is a net positive for robotics, going forward.

Instagram CEO, ACLU slam TikTok and WeChat app bans for putting US freedoms into the balance

As people begin to process the announcement from the U.S. Department of Commerce detailing how it plans, on grounds of national security, to shut down TikTok and WeChat — starting with app downloads and updates for both, plus all of WeChat’s services, on September 20, with TikTok following with a shutdown of servers and services on November 12 — the CEO of Instagram and the ACLU are among those speaking out against the move.

The CEO of Instagram, Adam Mosseri, wasted little time in taking to Twitter to criticize the announcement. His particular beef is the implications the move will have for US companies — like his — that have also built their businesses around operating across national boundaries.

In essence, if the U.S. starts to ban international companies from operating in the U.S., then it opens the door for other countries to take the same approach with U.S. companies.

Meanwhile, the ACLU has been outspoken in criticizing the announcement on the grounds of free speech.

“This order violates the First Amendment rights of people in the United States by restricting their ability to communicate and conduct important transactions on the two social media platforms,” said Hina Shamsi, director of the American Civil Liberties Union’s National Security Project, in a statement today.

Shamsi added that ironically, while the U.S. government might be crying foul over national security, blocking app updates poses a security threat in itself.

“The order also harms the privacy and security of millions of existing TikTok and WeChat users in the United States by blocking software updates, which can fix vulnerabilities and make the apps more secure. In implementing President Trump’s abuse of emergency powers, Secretary Ross is undermining our rights and our security. To truly address privacy concerns raised by social media platforms, Congress should enact comprehensive surveillance reform and strong consumer data privacy legislation.”

Vanessa Pappas, who is the acting CEO of TikTok, also stepped in to endorse Mosseri’s words and publicly asked Facebook to join TikTok’s litigation against the U.S. over its moves.

“We agree that this type of ban would be bad for the industry. We invite Facebook and Instagram to publicly join our challenge and support our litigation,” she said in her own tweet responding to Mosseri, while also retweeting the ACLU. (Interesting how Twitter becomes Switzerland in these stories, huh?) “This is a moment to put aside our competition and focus on core principles like freedom of expression and due process of law.”

The move to shutter these apps has been wrapped in an increasingly complex set of issues, and these two dissenting voices highlight not just some of the conflict between those issues, but the potential consequences and detriment of acting based on one issue over another.

The Trump administration has stated that the main reason it has pinpointed the apps has been to “safeguard the national security of the United States” in the face of nefarious activity out of China, where the owners of WeChat and TikTok, respectively Tencent and ByteDance, are based:

“The Chinese Communist Party (CCP) has demonstrated the means and motives to use these apps to threaten the national security, foreign policy, and the economy of the U.S.,” today’s statement from the U.S. Department of Commerce noted. “Today’s announced prohibitions, when combined, protect users in the U.S. by eliminating access to these applications and significantly reducing their functionality.”

In reality, it’s hard to know where the truth actually lies.

In the case of the ACLU and Mosseri’s comments, they are highlighting issues of principles but not necessarily precedent.

It’s not as if the US would be the first country to take a nationalist approach to how it permits the operation of apps. Facebook and its stable of apps, as of right now, are unable to operate in China without a VPN (and even with a VPN things can get tricky). And free speech is regularly ignored in a range of countries today.

But the US has always positioned itself as a standard-bearer in both of these areas, and so apart from the self-interest that Instagram might have in advocating for more free-market policies, it points to a wider market and business position that’s being eroded.

The issue, of course, is a little like an onion (a stinking onion, I’d say), with well more than just a couple of layers around it, and with the ramifications bigger than TikTok (with 100 million users in the U.S. and huge in pop culture beyond even that) or WeChat (much smaller in the U.S. but huge elsewhere and valued by those who do use it).

The Trump administration has been carefully selecting issues to tackle to give voters reassurance of Trump’s commitment to “Make America Great Again,” building examples of how it’s helping to promote U.S. interests and demote those that stand in its way. China has been a huge part of that image building, positioned as an adversary in industrial, defense and other arenas. Pinpointing specific apps and how they might pose a security threat by sucking up our data fits neatly into that strategy.

But are they really security threats, or are they just doing the same kind of nefarious data ingesting that every social app does in order to work? Will the US banning them really mean that other countries, up to now more in favor of a free market, will fall in line and take a similar approach? Will people really stop being able to express themselves?

Those are the questions that Trump has forced into the balance with his actions, and even if they were not issues before, they have very much become so now.

Another US court says police cannot force suspects to turn over their passwords

The highest court in Pennsylvania has ruled that the state’s law enforcement cannot force suspects to turn over the passwords that would unlock their devices.

The state’s Supreme Court said compelling a password from a suspect is a violation of the Fifth Amendment, the constitutional protection against self-incrimination.

It’s not a surprising ruling, given that other state and federal courts have almost always come to the same conclusion. The Fifth Amendment grants anyone in the U.S. the right to remain silent, which includes the right not to turn over information that could incriminate them in a crime. These days, those protections extend to the passcodes that only a device owner knows.

But the ruling is not expected to affect the ability of police to force suspects to use their biometrics — like their face or fingerprints — to unlock their phone or computer.

Because your passcode is stored in your head and your biometrics are not, prosecutors have long argued that police can compel a suspect into unlocking a device with their biometrics, which they say are not constitutionally protected. The court also did not address biometrics. In a footnote of the ruling, the court said it “need not address” the issue, blaming the U.S. Supreme Court for creating “the dichotomy between physical and mental communication.”

Peter Goldberger, president of the ACLU of Pennsylvania, who presented the arguments before the court, said it was “fundamental” that suspects have the right “to avoid self-incrimination.”

Despite the spate of rulings in recent years, law enforcement has still tried to find its way around compelling passwords from suspects. The now-infamous Apple-FBI case saw the federal agency try to force the tech giant to rewrite its iPhone software in an effort to beat the password on the handset of the terrorist Syed Rizwan Farook, who with his wife killed 14 people in his San Bernardino workplace in 2015. Apple said the FBI’s use of the 200-year-old All Writs Act would be “unduly burdensome” by putting potentially every other iPhone at risk if the rewritten software leaked or was stolen.

The FBI eventually dropped the case without Apple’s help after the agency paid hackers to break into the phone.

Brett Max Kaufman, a senior staff attorney at the ACLU’s Center for Democracy, said the Pennsylvania ruling sends a message to other courts to follow in its footsteps.

“The court rightly rejects the government’s effort to create a giant, digital-age loophole undermining our time-tested Fifth Amendment right against self-incrimination,” he said. “The government has never been permitted to force a person to assist in their own prosecution, and the courts should not start permitting it to do so now simply because encrypted passwords have replaced the combination lock.”

“We applaud the court’s decision and look forward to more courts to follow in the many pending cases to be decided next,” he added.

Amazon under greater shareholder pressure to limit sale of facial recognition tech to the government

This week could mark a significant setback for Amazon’s facial recognition business if privacy and civil liberties advocates — and some shareholders — get their way.

Months earlier, shareholders tabled a resolution to limit the sale of Amazon’s facial recognition tech, which the company calls Rekognition, to law enforcement and government agencies. It followed accusations of bias and inaccuracies with the technology, which critics say can be used to racially discriminate against minorities. Rekognition, which runs image and video analysis of faces, has been sold to two states so far, and Amazon has pitched it to Immigration and Customs Enforcement. A second resolution would require an independent human and civil rights review of the technology.

Now the ACLU is backing the measures and calling on shareholders to pass the resolutions.

“Amazon has stayed the course,” said Shankar Narayan, director of the Technology and Liberty Project at the ACLU of Washington, in a call Friday. “Amazon has heard repeatedly about the dangers to our democracy and vulnerable communities about this technology but they have refused to acknowledge those dangers let alone address them,” he said.

“Amazon has been so non-responsive to these concerns,” said Narayan, “even Amazon’s own shareholders have been forced to resort to putting these proposals addressing those concerns on the ballot.”

It’s the latest move in a concerted effort by dozens of shareholders and investment firms, tech experts and academics, and privacy and rights groups and organizations who have decried the use of the technology.

Critics say Amazon Rekognition has accuracy and bias issues. (Image: TechCrunch)

In a letter to be presented at Amazon’s annual shareholder meeting Wednesday, the ACLU will accuse Amazon of “failing to act responsibly” by refusing to stop the sale of the technology to the government.

“This technology fundamentally alters the balance of power between government and individuals, arming governments with unprecedented power to track, control, and harm people,” said the letter, shared with TechCrunch. “It would enable police to instantaneously and automatically determine the identities and locations of people going about their daily lives, allowing government agencies to routinely track their own residents. Associated software may even display dangerous and likely inaccurate information to police about a person’s emotions or state of mind.”

“As shown by a long history of other surveillance technologies, face surveillance is certain to be disproportionately aimed at immigrants, religious minorities, people of color, activists, and other vulnerable communities,” the letter added.

“Without shareholder action, Amazon may soon become known more for its role in facilitating pervasive government surveillance than for its consumer retail operations,” it read.

Facial recognition has become one of the most hot-button privacy topics in years. Amazon Rekognition, the company’s cloud-based facial recognition system, remains in its infancy, yet it is one of the most prominent systems available. But critics say the technology is flawed. Exactly a year prior to this week’s shareholder meeting, the ACLU first raised “profound” concerns with Rekognition and its installation at airports, in public places and by police. Since then, the technology has been shown to struggle to detect people of color. In the ACLU’s tests, the system falsely matched 28 members of Congress with mugshots of people who had been arrested.
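To make that failure mode concrete: face recognition systems typically reduce each face to an embedding vector and declare a match when the similarity between two embeddings clears a threshold, so a permissive threshold produces exactly this kind of false match. Here is a minimal, purely illustrative sketch; the vectors and the 0.80 cutoff are invented for demonstration and are not Rekognition’s internals.

```python
# Illustrative only: how a similarity threshold yields false matches.
# Embeddings and threshold are invented; this is not Rekognition code.

import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

probe = [0.9, 0.1, 0.4]    # hypothetical embedding of the person searched
mugshot = [0.8, 0.2, 0.5]  # hypothetical embedding of an unrelated arrestee

THRESHOLD = 0.80  # a permissive cutoff invites false positives

score = cosine_similarity(probe, mugshot)
if score >= THRESHOLD:
    print(f"MATCH at {score:.2f} -- a false positive against an unrelated person")
```

Raising the threshold trades false positives for missed matches, and those error rates have been shown to vary across demographics, which is the bias critics keep pointing to.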

But there has been pushback — even from government. Several municipalities have rolled out surveillance-curtailing laws and ordinances in the past year. San Francisco last week became the first major U.S. city government to ban the use of facial recognition.

“Amazon leadership has failed to recognize these issues,” said the ACLU’s letter to be presented Wednesday. “This failure will lead to real-life harm.”

The ACLU said shareholders “have the power to protect Amazon from its own failed judgment.”

Amazon has pushed back against the claims by arguing that the technology is accurate — largely by criticizing how the ACLU conducted its tests using Rekognition.

Amazon did not comment when reached prior to publication.

Read more:

Facebook settles ACLU job advertisement discrimination suit

Facebook and the ACLU issued a joint statement this morning, noting that they have settled a class action job discrimination suit. The ACLU filed the suit in September, along with Outten & Golden LLC and the Communications Workers of America, alleging that Facebook allowed employers to target ads based on categories like race, national origin, age and gender.

The initial charges were filed on behalf of female workers who alleged they were not served up employment opportunities based on gender. Obviously all of that’s against all sorts of federal, state and local laws, including, notably, Title VII of the Civil Rights Act of 1964.

Today’s announcement finds Facebook implementing “sweeping changes” to its advertising platform in order to address these substantial concerns. The company outlined a laundry list of “far-reaching changes and steps,” including the development of a separate ad portal to handle topics like housing, employment and credit (HEC) for Facebook, Instagram and Facebook Messenger.

Targeting based on gender, age and race will not be allowed within the confines of the new system. Ditto for the company’s Lookalike Audience tool, which is similarly designed to target customers based on things like gender, age, religious views and the like.
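Conceptually, that restriction is a validation problem: any campaign flagged as housing, employment or credit should be rejected if its targeting spec touches a protected attribute. Here is a minimal sketch of such a check, with hypothetical field names that do not come from Facebook’s actual ads API.

```python
# Hypothetical sketch of the kind of check an HEC ad portal implies.
# Field names are invented; this is not Facebook's real ads API.

PROTECTED_ATTRIBUTES = {"gender", "age", "race", "religion"}
HEC_CATEGORIES = {"housing", "employment", "credit"}

def validate_hec_campaign(campaign: dict) -> None:
    """Reject housing/employment/credit ads that target protected attributes."""
    if campaign.get("category") not in HEC_CATEGORIES:
        return  # ordinary campaigns fall outside this check
    used = PROTECTED_ATTRIBUTES & set(campaign.get("targeting", {}))
    if used:
        raise ValueError(f"HEC ads may not target: {sorted(used)}")

# An employment ad targeting by age and gender should be refused:
try:
    validate_hec_campaign({
        "category": "employment",
        "targeting": {"age": "25-34", "gender": "female", "interests": "tech"},
    })
except ValueError as err:
    print(err)  # HEC ads may not target: ['age', 'gender']
```

The same gate would presumably have to apply to lookalike seeds, since a lookalike audience can reintroduce a protected attribute indirectly even when it is never named.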

“Civil rights leaders and experts – including members of the Congressional Black Caucus, the Congressional Hispanic Caucus, the Congressional Asian Pacific American Caucus, and Laura Murphy, the highly respected civil rights leader who is overseeing the Facebook civil rights audit – have also raised valid concerns about this issue,” Sheryl Sandberg wrote in a blog post tied to the announcement. “We take their concerns seriously and, as part of our civil rights audit, engaged the noted civil rights law firm Relman, Dane & Colfax to review our ads tools and help us understand what more we could do to guard against misuse.”

In addition to the above portal, Facebook will be creating a one-stop site where users can search among all job listings, independent of how ads are served up. The company has also promised to offer up “educational materials to advertisers about these new anti-discrimination measures.” Facebook will also be meeting regularly with the suit’s plaintiffs to ensure that it is continuing to meet all of the parameters of the settlement.

“As the internet — and platforms like Facebook — play an increasing role in connecting us all to information related to economic opportunities, it’s crucial that micro-targeting not be used to exclude groups that already face discrimination,” ACLU senior staff attorney Galen Sherwin said in the joint statement. “We are pleased Facebook has agreed to take meaningful steps to ensure that discriminatory advertising practices are not given new life in the digital era, and we expect other tech companies to follow Facebook’s lead.”

Further details of the settlement haven’t been disclosed by either party, but the update is clearly a bit of a conciliatory move from a company that’s landed itself on the wrong side of a large lawsuit. Even so, it ought to be regarded as a positive outcome for a problematic product offering.