Wandelbots raises $6.8M to make programming a robot as easy as putting on a jacket

Industrial robotics is on track to be worth around $20 billion by 2020, but while it may have something in common with other categories of cutting-edge tech — innovative use of artificial intelligence, autonomous machines pushing the boundaries of pre-existing technology — there is one key area where it differs: each robotics firm uses its own proprietary software and operating systems to run its machines, making programming the robots complicated, time-consuming and expensive.

A startup out of Germany called Wandelbots (a portmanteau of “change” and “robots” in German) has come up with an innovative way around that challenge: it has built a bridge connecting the operating systems of the 12 most popular industrial robotics makers with what a business wants the machines to do, so that the robots can now be trained by a person wearing a jacket kitted out with dozens of sensors.

“We are providing a universal language to teach those robots in the same way, independent of the technology stack,” CEO Christian Piechnick said in an interview. Essentially reverse engineering the process of how a lot of software is built, Wandelbots is creating a Linux-like underpinning for all of it.

With some very big deals under its belt with the likes of Volkswagen, Infineon and Midea, the startup out of Dresden has now raised €6 million ($6.8 million), a Series A to take it to its next level of growth and, specifically, to open an office in China. The funding comes from Paua Ventures, EQT Ventures and other unnamed previous investors. (It had previously raised a seed round around the time it was a finalist in our Disrupt Battlefield last year, pre-launch.)

Notably, Paua has a bit of a history of backing transformational software companies (it also invests in Stripe), and EQT, being connected to a private equity firm, is treating this as a strategic investment that might be deployed across its own assets.

Piechnick — who co-founded Wandelbots with Georg Püschel, Maria Piechnick, Sebastian Werner, Jan Falkenberg and Giang Nguyen on the back of research they did at university — said that programming an industrial robot to perform a task has typically taken three months, required specialist systems integrators and, of course, added an extra cost on top of the machines themselves.

Someone with no technical knowledge, wearing one of Wandelbots’ jackets, can bring that process down to 10 minutes, with costs reduced by a factor of ten.
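As a rough illustration of how demonstration-based teaching can work (a sketch under my own assumptions, not Wandelbots’ actual software), the jacket’s sensor stream can be thinned into a compact, vendor-neutral waypoint list that any robot controller could then replay:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """One demonstrated arm pose: a timestamp plus joint angles in radians."""
    t: float
    joints: tuple

def downsample(demo, min_step=0.05):
    """Keep only poses whose joints moved more than min_step (radians, summed)
    since the last kept pose -- a crude way to turn a noisy sensor stream
    into a short waypoint list."""
    if not demo:
        return []
    kept = [demo[0]]
    for pose in demo[1:]:
        drift = sum(abs(a - b) for a, b in zip(pose.joints, kept[-1].joints))
        if drift > min_step:
            kept.append(pose)
    return kept

# A toy demonstration: two poses of sensor jitter, then one genuine move.
demo = [
    Pose(0.0, (0.00, 0.50)),
    Pose(0.1, (0.01, 0.50)),   # jitter, dropped
    Pose(0.2, (0.00, 0.51)),   # jitter, dropped
    Pose(0.3, (0.40, 0.90)),   # real motion, kept
]
waypoints = downsample(demo)
print(len(waypoints))  # → 2
```

The interesting (and unshown) part of a real system is translating those waypoints into each vendor’s proprietary motion commands, which is the bridge the company describes.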

“In order to offer competitive products in the face of the rapid changes within the automotive industry, we need more cost savings and greater speed in the areas of production and automation of manufacturing processes,” said Marco Weiß, Head of New Mobility & Innovations at Volkswagen Sachsen GmbH, in a statement. “Wandelbots’ technology opens up significant opportunities for automation. Using Wandelbots’ offering, the installation and setup of robotic solutions can be implemented incredibly quickly by teams with limited programming skills.”

Wandelbots’ focus at the moment is on programming robotic arms rather than the mobile machines that you may have seen Amazon and others using to move goods around warehouses. For now, this means that there is not a strong crossover in terms of competition between these two branches of enterprise robotics.

However, Amazon has been expanding and working on new areas beyond warehouse movements: it has, for example, been working on ways of using computer vision and robotic arms to identify and pick the most optimal fruits and vegetables out of boxes to put into grocery orders.

Innovations like that from Amazon and others could put more pressure on robotics makers to innovate, though Piechnick notes that up to now we’ve seen very little movement, and there may never be much (creating more opportunity for companies like his that build usability on top).

“Attempts to build robotics operating systems have been tried over and over again, and each time it’s failed,” he said. “But robotics has completely different requirements, such as real time computing, safety issues and many other different factors. A robot in operation is much more complicated than a phone.” He also added that Wandelbots itself has a number of innovations of its own currently going through the patent process, which will widen its own functionality too in terms of what and how its software can train a robot to do. (This may see more than jackets enter the mix.)

As with companies in the area of robotic process automation — which uses AI to take over more mundane back-office features — Piechnick maintains that what he has built, and the rise of robotics overall, is not going to replace workers, but put them on to other roles, while allowing businesses to expand the scope of what they can do that a human might never have been able to execute.

“No company we work with has ever replaced a human worker with a robot,” he said, explaining that generally the upgrade is from machine to better machine. “It makes you more efficient and cost reductive, and it allows you to put your good people on more complicated tasks.”

Currently, Wandelbots is working with large-scale enterprises, although ultimately, it’s smaller businesses that are its target customer, he said.

“Previously the ROI on robots was too difficult for SMEs,” he said. “With our tech this changes.”

“Wandelbots will be one of the key companies enabling the mass-adoption of industrial robotics by revolutionizing how robots are trained and used,” said Georg Stockinger, Partner at Paua Ventures, in a statement. “Over the last few years, we’ve seen a steep decline in robotic hardware costs. Now, Wandelbots resolves the remaining hurdle to disruptive growth in industrial automation – the ease and speed of implementation and teaching. Both factors together will create a perfect storm, driving the next wave of industrial revolution.”

Lift Aircraft’s Hexa may be your first multirotor drone ride

We were promised jetpacks, but let’s be honest, they’re just plain unsafe. So a nice drone ride is probably all we should reasonably expect. Lift Aircraft is the latest to make a play for the passenger multirotor market, theoretical as it is, and its craft is a sleek little thing with some interesting design choices to make it suitable for laypeople to “pilot.”

The Austin-based company just took the wraps off the Hexa, the 18-rotor craft it intends to make available for short recreational flights. It just flew for the first time last month, and could be taking passengers aloft as early as next year.

The Hexa is considerably more lightweight than the aircraft that seemed to be getting announced every month or two earlier this year. Lift’s focus isn’t on transport, which is a phenomenally complicated problem both in terms of regulation and engineering. Instead, it wants to simply make the experience of flying in a giant drone available for thrill-seekers with a bit of pocket money.

This reduced scope means the craft can get away with being just 432 pounds and capable of 10-15 minutes of sustained flight with a single passenger. Compared with Lilium’s VTOL engines or Volocopter’s 36-foot wingspan, this thing looks like a toy. And that’s essentially what it is, for now. But there’s something to be said for proving your design in a comparatively easily accessed market and moving up, rather than trying to invent an air taxi business from scratch.

“Multi-seat eVTOL air taxis, especially those that are designed to transition to wing-borne flight, are probably 10 years away and will require new regulations and significant advances in battery technology to be practical and safe. We didn’t want to wait for major technology or regulatory breakthroughs to start flying,” said Lift CEO Matt Chasen in a news release. “We’ll be flying years before anyone else.”

The Hexa is flown with a single joystick and an iPad; direct movements and attitude control are done with the former, while destination-based movement, takeoff and landing take place on the latter. This way people can go from walking in the front door to flying one of these things — or rather riding in one and suggesting some directions to go — in an hour or so.

It’s small enough that it doesn’t even count as a “real” aircraft; it’s a “powered ultralight,” which is a plus and a minus: no pilot’s license necessary, but you can’t go past a few hundred feet of altitude or fly over populated areas. No doubt there’s still a good deal of fun you can have flying around a sort of drone theme park, though. The whole area will have been 3D mapped prior to flight, of course.

Lifting the Hexa are 18 rotors, each of which is powered by its own battery, which spreads the risk out considerably and makes it simple to swap them out. As far as safety is concerned, it can run with up to 6 engines down, has pontoons in case of a water landing, and an emergency parachute should the unthinkable happen.

The team is looking to roll out its drone-riding experience soon, but it has yet to select its first city. Finding a good location, checking with the community, getting the proper permits — not simple. Chasen told New Atlas the craft is “not very loud, but they’re also not whisper-quiet, either.” I’m thinking “not very loud” is in comparison to jets — every drone I’ve ever come across, from palm-sized to cargo-bearing, has made an incredible racket, and if someone wanted to start a drone preserve next door I’d fight it tooth and nail. (Apparently Seattle is high on the list, too, so this may come to pass.)

In a sense, engineering a working autonomous multirotor aircraft was the easy part of building this business. Chasen told GeekWire that the company has raised a “typical-size seed round,” and is preparing for a Series A — probably once it has a launch city in its sights.

We’ll likely hear more at SXSW in March, where the Hexa is expected to fly its first passengers.

Honda reportedly retires the iconic Asimo

Honda is ceasing development of Asimo, the humanoid robot that has delighted audiences at trade shows for years but never really matured into anything more than that, the Nikkei reports. But while the venerable bot itself is bowing out, the technology that made it so impressive will live on in other products, robotic and otherwise.

Asimo (named, of course, after science fiction pioneer Isaac Asimov) is older than you might guess: although it was revealed in 2000 as the first credibly bipedal walking robot, it had at that point been under development for more than a decade. The idea of a robot helper that could navigate a human-centric environment and interact with it in the same way we do was, of course, attractive.

But the problem proved, and still proves, harder than anyone guessed. Even the latest humanoid robots fail spectacularly at the most ordinary tasks that humans do without thinking. Asimo, which operated in a sort of semi-pre-programmed manner, was far behind even these limited capabilities.

That said, Asimo was an innovative, advanced and ambitious platform: its gait was remarkably smooth, and it climbed ramps and stairs confidently on battery power. It could recognize people’s faces and avoid obstacles, and generally do all the things in a minute-long demo that made people excited for the robot future to come.

Alas, that future seems as far off today as it did in 2000; outside of factories, few robots other than the occasional Roomba have made it past the demonstration stage. We’ll get there eventually.

And the research that went into Asimo will help. It may not be the actual robot we have in our homes, but this kind of project tends to create all kinds of useful technology. The efficient actuators in its legs can be repurposed for exoskeletons and mobility aids, its sensor pathways and the software behind them can inform self-driving cars and so on. I’ve asked Honda for confirmation and more details.

Robotics is still a major focus at Honda, but this particular robot is going to take a well-deserved rest. Farewell, Asimo — you may not have done much, but you helped us see that there is much that could be done.

Disney Imagineering has created autonomous robot stunt doubles

For over 50 years, Disneyland and its sister parks have been a showcase for increasingly technically proficient versions of its “animatronic” characters. First pneumatic and hydraulic and more recently fully electronic — these figures create a feeling of life and emotion inside rides and attractions, in shows and, increasingly, in interactive ways throughout the parks.

The machines they’re creating are becoming more active and mobile in order to better represent the wildly physical nature of the characters they portray within the expanding Disney universe. And a recent addition to the pantheon could change the way that characters move throughout the parks and influence how we think about mobile robots at large.

I wrote recently about the new tack Disney was taking with self-contained characters that felt more flexible, interactive and, well, alive than ‘static’, pre-programmed animatronics. That has done a lot to add to the convincing nature of what is essentially a very limited robot.

Traditionally, most animatronic figures cannot move from where they sit or stand, and are pre-built to exacting show specifications. The design and programming phases of the show are closely related, so that the hero characters are efficient and durable enough to run hundreds of times a day, every day, for years.

The Na’vi Shaman from Pandora: The World of Avatar, at Walt Disney World, represents the state of the art of this kind of figure.

However, with the expanded universe of Disney properties including more and more dynamic and heroic figures by the year, it makes sense that they’d want to explore ways of making the robots that represent those properties in the parks more believable and active.

That’s where the Stuntronics project comes in. Built out of a research experiment called Stickman, which we covered a few months ago, Stuntronics are autonomous, self-correcting aerial performers that make on-the-go corrections to nail high-flying stunts every time. Basically robotic stuntpeople, hence the name.

I spoke to Tony Dohi, Principal R&D Imagineer, and Morgan Pope, Associate Research Scientist at Disney, about the project.

“So what this is about is the realization we came to after seeing where our characters are going on screen,” says Dohi, “whether they be Star Wars characters, or Pixar characters, or Marvel characters or our own animation characters, is that they’re doing all these things that are really, really active. And so that becomes the expectation our park guests have that our characters are doing all these things on screen — but when it comes to our attractions, what are our animatronic figures doing? We realized we have kind of a disconnect here.”

So they came up with the concept of a stunt double for the ‘hero’ animatronic figures that could take their place within a show or scene to perform more aggressive maneuvering, much in the same way a double replaces a valuable and delicate actor in a dangerous scene.

The Stuntronics robot features on-board accelerometer and gyroscope arrays supported by laser range finding. In its current form, it’s humanoid, taking on the size and shape of a performer that could easily be imagined clothed in the costume of, say, one of The Incredibles, or someone on the Marvel roster. The bot is able to be slung from the end of a wire to fly through the air, controlling its pose, rotation and center of mass to not only land aerial tricks correctly but to do them on target while holding heroic poses in midair.
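The underlying physics is plain ballistics: once the performer leaves the wire, its time aloft is fixed by the release velocity and height, so the controller’s job reduces to hitting a target angular rate with its mass shifts. A back-of-the-envelope sketch with made-up numbers (this is illustrative, not Disney’s control code):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def flight_time(v_up, release_h, land_h=0.0):
    """Time aloft for a body released at height release_h with upward speed
    v_up, landing at land_h. Pure drag-free ballistics: solve
    land_h = release_h + v_up*t - 0.5*G*t^2 for the positive root."""
    a, b, c = -0.5 * G, v_up, release_h - land_h
    return (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)

def spin_rate_for_flips(n_flips, t):
    """Constant angular rate (rad/s) needed to complete n_flips full
    rotations in t seconds."""
    return n_flips * 2 * math.pi / t

# Example: released 5 m up, moving 6 m/s upward, aiming for a double flip.
t = flight_time(v_up=6.0, release_h=5.0)
print(round(t, 2), round(spin_rate_for_flips(2, t), 1))  # → 1.79 7.0
```

The hard part the Stuntronics hardware actually solves is the feedback side: re-measuring rotation mid-air with the IMU and correcting the pose so the landing stays on target despite imperfect release conditions.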

One use of this could be mid-show in an attraction. For relatively static shots, hero animatronics like the Shaman or new figures Imagineering is constantly working on could provide nuanced performances of face and figure. Then comes a transition to a scene that requires dramatic, unfettered action and, boom, a Stuntronics double could fly across the space on its own, calculating trajectories and striking poses with its on-board hardware, hitting a target dead on every time. Cue the re-set for the next audience.

This focus on creating scenarios where animatronics feel more ‘real’ and dynamic is at work in other areas of Imagineering as well, with autonomous rolling robots and — some day — the holy grail of bipedal walking robots. But Stuntronics fills one specific gap in the repertoire of a standard Animatronic figure — the ability to convince you it can be a being of action and dynamism.

“So often our robots are in the uncanny valley where you got a lot of function, but it still doesn’t look quite right. And I think here the opposite is true,” says Pope. “When you’re flying through the air, you can have a little bit of function and you can produce a lot of stuff that looks pretty good, because of this really neat physics opportunity — you’ve got these beautiful kinds of parabolas and sine waves that just kind of fall out of rotating and spinning through the air in ways that are hard for people to predict, but that look fantastic.”

The original BRICK

Like many of the solutions Imagineering comes up with for its problems, Stuntronics started out as a research project without a real purpose. In this case, it was called BRICK (Binary Robotic Inertially Controlled bricK). Basically, a metal brick with sensors and the ability to change its center of mass to control its spin to hit a precise orientation at a precise height – to ‘stick the landing’ every time.

From the initial BRICK, Disney moved on to Stickman, an articulated version that could control its rotation and orientation more aggressively. Combined with some laser rangefinders, you had the bones of something that, if you squint, could emulate a ‘human’ acrobat.

“Morgan and I got together and said, maybe there’s something here, we’re not really sure. But let’s poke at it in a bunch of different directions and see what comes out of it,” says Dohi.

But the Stickman didn’t stick for long.

“When we did the BRICK, I thought that was pretty cool,” says Pope. “And then by the time I was presenting the BRICK at a conference, Tony [Dohi] had helped us make stick man. And I was like, well, this isn’t cool anymore. The Stickman is what’s really cool. And then I was down in Australia presenting Stickman and I knew we were doing the full Stuntronic back at R&D. And I was like, well, this isn’t cool anymore,” he jokes.

“But it has been so much fun. Every step of the way I think, oh, this is blowing my mind. But they just keep pushing, so it’s nice to have that challenge.”

This process has always been one of the fascinating things to me about the way that Imagineering works as a whole. You have people who are enabled by management and internal structure to spool out the threads of a problem, even though you’re not really sure what’s going to come out of it. The biggest companies on the planet have similar R&D departments in place — though the ones that make a habit of disconnecting them from a balance sheet, like Apple, are few and far between, in my experience. Typically, so much of R&D is tied so tightly to a profit/loss spreadsheet that it’s really, really difficult to incubate something long enough to see what comes of it.

Imagineering’s ability to bring vastly different specialties like math, physics, art and design to the table lets it sift through ideas and say: we have this storytelling problem on one hand and this research project on the other. If we drill down on this a bit more, would it serve the purpose? As long as the storytelling always remains the North Star, you have a guiding light to drag you through the pile, and you come out the other end holding a couple of things that could be coupled to solve a problem.

“We’re set up to do the really high risk stuff that you don’t know is going to be successful or not, because you don’t know if there’s going to be a direct application of what you’re doing,” says Dohi. “But you just have a hunch that there might be something there, and they give us a long leash, and they let us explore the possibilities and the space around just an idea, which is really quite a privilege. It’s one of the reasons why I love this place.”

This process of play and iteration and pursuit of a goal of storytelling pops up again and again with Imagineering. It’s really a cluster of very smart people across a broad spectrum of disciplines that are governed by a central nervous system of leaders like Jon Snoddy, the head of R&D at the studios, who help to connect the dots between the research side and the other areas of Imagineering that deal with the Parks or interactive projects or the digital division.

There’s an economy and lack of ego to the organization that enables exploration without wastefulness and organically curtails the pursuit of things not in service to the story. In my time exploring the workings of Imagineering I’ve often found that there is a significant disconnect between how fascinating the process is and how well the organization communicates the cleverness of its solutions.

The Disney Research white papers are certainly infinitely fascinating to people interested in emerging tech, but the points of integration between the research and the practical applications in the parks often remain unexplored. Still, they’re getting better at understanding when they’ve really got something they feel is killer and thinking about better ways to communicate that to the world.

Indeed, near the end of our conversation, Dohi says he’s come up with a solid sound bite, and I have him give me his best pitch.

“One of our goals of Stuntronics is to see if we can leap across the uncanny valley.”

Not bad.

Pepper the robot gets a job at HSBC Bank in New York

Starting today, customers at HSBC’s New York City flagship will be greeted by a friendly humanoid face. The bank’s Fifth Avenue location is employing Pepper, the customer service ‘bot that has become the face of Softbank’s growing robotics wing in recent years.

In the few years it’s been available, Pepper has been a veritable factotum, holding a variety of gigs from airport greeter to shopping mall info desk staffer. The robot has held even more roles in its native Japan, finally grabbing some temp work in the U.S. back in 2016.

As far as what such a product offers that you won’t get with the standard flesh and blood bank employee, the answer still seems to come down to novelty, as HSBC looks to offer Manhattanites a glimpse at “the branch of the future.”

“We are offering the approximately two million people who live or work within a half mile radius of our flagship branch, and the millions more who walk Fifth Avenue daily, an experience in retail banking like never before,” HSBC’s Pablo Sanchez said in a press release tied to the news. “We’re focused on developing the ‘branch of the future,’ and our use of Pepper will streamline branch operations and delight our customers, allowing bank staff to have deeper, more high-value customer engagements.”

The robot will be tasked with informing patrons of self-service banking options and answering some basic questions. Oh, and it will also take selfies. No specific details on when additional units might be hitting more U.S. locations, but the company says, “Pepper is part of a larger vision being rolled out in the coming months that will transform HSBC’s branch banking experience.”

Pepper is, of course, one prong of Softbank’s robotics strategy, which now also includes Boston Dynamics, which it purchased from Alphabet.

DARPA design shifts round wheels to triangular tracks in a moving vehicle

As part of its Ground X-Vehicle Technologies program, DARPA is showcasing some new defense vehicle tech that’s as futuristic as it is practical. One of the innovations, a reconfigurable wheel-track, comes out of Carnegie Mellon University’s National Robotics Engineering Center in partnership with DARPA. The wheel-track is just one of a handful of designs meant to improve survivability of combat vehicles beyond just up-armoring them.

As you can see in the video, the reconfigurable wheel-track demonstrates a seamless transition between a round wheel shape and a triangular track in about two seconds; the shift between its two modes can be executed while the vehicle is in motion, without cutting speed. Round wheels are optimal for hard terrain, while track-style treads allow an armored vehicle to move freely on softer ground.

According to Ground X-Vehicle Program Manager Major Amber Walker, the tech offers “instant improvements to tactical mobility and maneuverability on diverse terrains” — an advantage you can see on display in the GIF below.

While wheel technology doesn’t sound that exciting, the result is visually impressive and smooth enough to prompt a double-take.

The other designs featured in the video are noteworthy as well, with one offering a windowless navigation technology called Virtual Perspectives Augmenting Natural Experiences (V-PANE) that integrates video from an array of mounted LIDAR and video cameras to recreate a realtime model of a windowless vehicle’s surroundings. Another windowless cockpit design creates “virtual windows” for a driver, with 3D goggles for depth enhancement, head-tracking and wraparound window display screens displaying data outside the all-terrain vehicle in realtime.

This smart prosthetic ankle adjusts to rough terrain

Prosthetic limbs are getting better and more personalized, but useful as they are, they’re still a far cry from the real thing. This new prosthetic ankle is a little closer than others, though: it moves on its own, adapting to its user’s gait and the surface on which it lands.

Your ankle does a lot of work when you walk: lifting your toe out of the way so you don’t scuff it on the ground, controlling the tilt of your foot to minimize the shock when it lands or as you adjust your weight, all while conforming to bumps and other irregularities it encounters. Few prostheses attempt to replicate these motions, meaning all that work is done in a more basic way, like the bending of a spring or compression of padding.

But this prototype ankle from Michael Goldfarb, a mechanical engineering professor at Vanderbilt, goes much further than passive shock absorption. Inside the joint are a motor and actuator, controlled by a chip that senses and classifies motion and determines how each step should look.

“This device first and foremost adapts to what’s around it,” Goldfarb said in a video documenting the prosthesis.

“You can walk up slopes, down slopes, up stairs and down stairs, and the device figures out what you’re doing and functions the way it should,” he added in a news release from the university.

When it senses that the foot has lifted up for a step, it can lift the toe up to keep it clear, also exposing the heel so that when the limb comes down, it can roll into the next step. And by reading the pressure both from above (indicating how the person is using that foot) and below (indicating the slope and irregularities of the surface) it can make that step feel much more like a natural one.
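That sense-and-classify loop can be caricatured in a few lines. The categories, sensor inputs and thresholds below are illustrative guesses on my part, not the logic of the Vanderbilt device:

```python
def classify_step(heel_pressure, toe_pressure, tilt_deg):
    """Toy gait classifier: two normalized pressure readings (0..1) from
    below the foot, plus a surface tilt estimate in degrees.
    Thresholds are made up for illustration."""
    if heel_pressure < 0.1 and toe_pressure < 0.1:
        return "swing"       # foot is off the ground: lift the toe clear
    if tilt_deg > 5:
        return "uphill"      # adjust ankle angle for an ascending slope
    if tilt_deg < -5:
        return "downhill"
    return "level"

print(classify_step(0.0, 0.05, 0.0))   # → swing
print(classify_step(0.8, 0.9, 8.0))    # → uphill
```

A real controller would run something like this at a high rate and feed the result to the motor and actuator in the joint, which is where the engineering difficulty actually lives.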

One veteran of many prostheses, Mike Sasser, tested the device and had good things to say: “I’ve tried hydraulic ankles that had no sort of microprocessors, and they’ve been clunky, heavy and unforgiving for an active person. This isn’t that.”

Right now the device is still very lab-bound, and it runs on wired power — not exactly convenient if someone wants to go for a walk. But if the joint works as designed, as it certainly seems to, then powering it is a secondary issue. The plan is to commercialize the prosthesis in the next couple of years once all that is figured out. You can learn a bit more about Goldfarb’s research at the Center for Intelligent Mechatronics.

In Army of None, a field guide to the coming world of autonomous warfare

The Silicon Valley-military industrial complex is increasingly in the crosshairs of artificial intelligence engineers. A few weeks ago, Google was reported to be backing out of a Pentagon contract around Project Maven, which would use image recognition to automatically evaluate photos. Earlier this year, AI researchers around the world joined petitions calling for a boycott of any research that could be used in autonomous warfare.

For Paul Scharre, though, such petitions barely touch the deep complexity, nuance, and ambiguity that will make evaluating autonomous weapons a major concern for defense planners this century. In Army of None, Scharre argues that the challenges around just the definitions of these machines will take enormous effort to work out between nations, let alone handling their effects. It’s a sobering, thoughtful, if at times protracted look at this critical topic.

Scharre should know. A former Army Ranger, he joined the Pentagon, working in the Office of the Secretary of Defense, where he developed some of the Defense Department's first policies around autonomy. Leaving in 2013, he joined the DC-based think tank Center for a New American Security, where he directs a center on technology and national security. In short, he has spent about a decade on this emerging tech, and his expertise clearly shows throughout the book.

The first challenge that undercuts these petitions on autonomous weapons is that such systems already exist and are already deployed in the field. Technologies like the Aegis Combat System, High-speed Anti-Radiation Missile (HARM), and the Harpy already include sophisticated autonomous features. As Scharre writes, “The human launching the Harpy decides to destroy any enemy radars within a general area in space and time, but the Harpy itself chooses the specific radar it destroys.” The weapon can loiter for 2.5 hours while it determines a target with its sensors — is it autonomous?

Scharre repeatedly uses the military’s OODA loop (for observe, orient, decide, and act) as a framework to determine the level of autonomy for a given machine. Humans can be “in the loop,” where they determine the actions of the machine, “on the loop” where they have control but the machine is mostly working independently, and “out of the loop” when machines are entirely independent of human decision-making.

The framework helps clear some of the confusion between different systems, but it is not sufficient. When machines fight machines, for instance, the speed of the battle can become so great that humans may well do more harm than good by intervening. Millions of cycles of the OODA loop could be processed by a drone before a human even registers what is happening on the battlefield. A human out of the loop, therefore, could well lead to safer outcomes. It’s exactly these kinds of paradoxes that make the subject so difficult to analyze.

In addition to paradoxes, constraints are a huge theme in the book as well. Speed is one — and the price of military equipment is another. Dumb missiles are cheap, and adding automation has consistently added to the price of hardware. As Scharre notes, “Modern missiles can cost upwards of a million dollars apiece. As a practical matter, militaries will want to know that there is, in fact, a valid enemy target in the area before using an expensive weapon.”

Another constraint is simply culture. The author writes, “There is intense cultural resistance within the U.S. military to handing over jobs to uninhabited systems.” Not unlike automation in the civilian workforce, people in power want to place flesh-and-blood humans in the most complex assignments. These constraints matter, because Scharre foresees a classic arms race around these weapons as dozens of countries pursue these machines.

Humans “in the loop” may be the default today, but for how long?

At a higher level, about a third of the book is devoted to the history of automation, (generalized) AI, and the potential for autonomy, topics which should be familiar to any regular reader of TechCrunch. Another third of the book or so is a meditation on the challenges of the technology from a dual use and strategic perspective, as well as the dubious path toward an international ban.

Yet, what I found most valuable in the book was the chapter on ethics, lodged fairly late in the book’s narrative. Scharre does a superb job covering the ground of the various schools of thought around the ethics of autonomous warfare, and how they intersect and compete. He extensively analyzes and quotes Ron Arkin, a roboticist who has spent significant time thinking about autonomy in warfare. Arkin tells Scharre that “We put way too much faith in human warfighters,” and argues that autonomous weapons, unlike humans, could theoretically be programmed never to commit a war crime. Other activists, like Jody Williams, believe that only a comprehensive ban can ensure that such weapons are never developed in the first place.

Scharre regrets that more of these conversations don’t take into account the strategic positions of the military. He notes that international discussions on bans are led by NGOs and not by nation states, whereas all examples of successful bans have been the other way around.

Another challenge is simply that antiwar activism and anti-autonomous weapons activism are increasingly being conflated. Scharre writes, “One of the challenges in weighing the ethics of autonomous weapons is untangling which criticisms are about autonomous weapons and which are really about war.” Citing Sherman, whose march through the U.S. South during the Civil War was an exercise in aggressive pillage, the author reminds the reader that “war is hell,” and that militaries don’t choose weapons in a vacuum, but relative to the other tools in their own and their competitors’ arsenals.

The book is a compendium of the various issues around autonomous weapons, although it suffers a bit from the classic problem of being too lengthy on some subjects (drone swarms) while offering limited information on others (arms control negotiations). The book is also marred at times by errors, such as “news rules of engagement,” that detract from an otherwise direct and active text. Tighter editing would have helped in both cases. Given the inchoate nature of the subject, the book works as an overview, although it fails to present an opinionated narrative on where autonomy and the military should go in the future — an unsatisfying gap, given the author’s extensive and unique background on the subject.

All that said, Army of None is a one-stop guide book to the debates, the challenges, and yes, the opportunities that can come from autonomous warfare. Scharre ends on exactly the right note, reminding us that ultimately, all of these machines are owned by us, and what we choose to build is within our control. “The world we are creating is one that will have intelligent machines in it, but it is not for them. It is a world for us.” We should continue to engage, and petition, and debate, but always with a vision for the future we want to realize.