Tesla vaunts creation of ‘the best chip in the world’ for self-driving

At its “Autonomy Day” today, Tesla detailed the new custom chip that will be running the self-driving software in its vehicles. Elon Musk rather peremptorily called it “the best chip in the world…objectively.” That might be a stretch, but it certainly should get the job done.

Called for now the “full self-driving computer,” or FSD Computer, it is a high-performance, special-purpose chip built (by Samsung, in Texas) solely with autonomy and safety in mind. Whether and how it actually outperforms its competitors is not a simple question and we will have to wait for more data and closer analysis to say more.

Former Apple chip engineer Pete Bannon went over the FSDC’s specs, and while the numbers may be important to software engineers working with the platform, what’s more important at a higher level is meeting various requirements specific to self-driving tasks.

Perhaps the most obvious feature catering to AVs is redundancy. The FSDC consists of two duplicate systems right next to each other on one board. This is a significant choice, though hardly unprecedented, simply because splitting the system in two naturally divides its power as well, so if performance were the only metric (if this were a server, for instance) you'd never do it.

Here, however, redundancy means that should an error or damage creep in somehow, it will be isolated to one of the two systems, and reconciliation software will detect and flag it. Meanwhile the other chip, on its own power and storage systems, should be unaffected. And if something happens that breaks both at the same time, the system architecture is the least of your worries.
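
To make the idea concrete, here is a rough sketch of what a reconciliation check between two redundant compute channels could look like. It's a purely illustrative Python example with made-up names and tolerances, not a description of Tesla's actual software.

```python
# Illustrative sketch of dual-channel reconciliation (not Tesla's actual code).
# Each compute channel independently produces a driving plan from the same
# sensor frame; a supervisor compares the two and flags any disagreement.
from dataclasses import dataclass


@dataclass
class Plan:
    steering_angle: float  # degrees
    acceleration: float    # m/s^2


def plans_agree(a: Plan, b: Plan, angle_tol: float = 0.5, accel_tol: float = 0.1) -> bool:
    """Return True if the two channels' outputs match within tolerance."""
    return (abs(a.steering_angle - b.steering_angle) <= angle_tol
            and abs(a.acceleration - b.acceleration) <= accel_tol)


def reconcile(plan_a: Plan, plan_b: Plan) -> Plan:
    """Accept the plan only if both channels agree; otherwise flag a fault."""
    if plans_agree(plan_a, plan_b):
        return plan_a  # either channel's output is acceptable
    # Disagreement means one channel is faulty; isolate it and fall back.
    raise RuntimeError("channel mismatch detected; entering safe fallback mode")
```

In a real vehicle the fallback would be a carefully engineered degraded driving mode rather than an exception, but comparing the two channels' outputs and flagging any mismatch is the essence of what reconciliation software does.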

Redundancy is a natural choice for AV systems, but it’s made more palatable by the extreme levels of acceleration and specialization that are possible nowadays for neural network-based computing. A regular general-purpose CPU like you have in your laptop will get schooled by a GPU when it comes to graphics-related calculations, and similarly a special compute unit for neural networks will beat even a GPU. As Bannon notes, the vast majority of calculations are a specific math operation and catering to that yields enormous performance benefits.

Pair that with high-speed RAM and storage and you have very little in the way of bottlenecks when it comes to running the most complex parts of the self-driving systems. The resulting performance is impressive, enough to make a proud Musk chime in during the presentation:

“How could it be that Tesla, who has never designed a chip before, would design the best chip in the world? But that is objectively what has occurred. Not best by a small margin, best by a big margin.”

Let’s take this with a grain of salt, as surely engineers from Nvidia, Mobileye, and other self-driving concerns would take issue with the statement on some grounds or another. And even if it is the best chip in the world, there will be a better one in a few months — and regardless, hardware is only as good as the software that runs on it. (Fortunately Tesla has some amazing talent on that side as well.)

(One quick note for a piece of terminology you might not be familiar with: OPs. This is short for operations per second, and it's measured in the billions and trillions these days. FLOPs is another common term, which means floating-point operations per second; these pertain to the higher-precision math often used by supercomputers for scientific calculations. One isn't better or worse than the other, but they shouldn't be compared directly or considered interchangeable.)

High-performance computing tasks tend to drain the battery: think of transcoding or editing HD video on your laptop, only to have it bite the dust after 45 minutes. If your car did that you'd be mad, and rightly so. Fortunately, a side effect of acceleration tends to be efficiency.

The whole FSDC runs on about 100 watts (or 50 per compute unit), which is pretty low — it's not cell-phone-chip low, but it's well below what a desktop or high-performance laptop would pull, less even than many single GPUs. Some AV-oriented chips draw more, some draw less, but Tesla's claim is that it's getting more performance per watt than the competition. Again, these claims are difficult to vet immediately considering the closed nature of AV hardware development, but it's clear that Tesla is at least competitive and may very well beat its competitors on some important metrics.

Two more AV-specific features found on the chip, though not in duplicate (the compute pathways converge at some point), are CPU lockstep and a security layer. Lockstep means the timing on these chips is carefully enforced to be identical, ensuring that they are processing the exact same data at the same time. It would be disastrous if they got out of sync either with each other or with other systems. Everything in AVs depends on very precise timing while minimizing delay, so robust lockstep measures are put in place to keep that straight.

The security section of the chip vets commands and data cryptographically to watch for, essentially, hacking attempts. Like all AV systems, this is a finely tuned machine and interference must not be allowed for any reason — lives are on the line. So the security piece monitors the input and output data carefully for anything suspicious, from spoofed visual data (to trick the car into thinking there's a pedestrian, for instance) to tweaked output data (say, to prevent it from taking proper precautions if it does detect a pedestrian).
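
For a sense of what cryptographic vetting of commands and data can look like in general, here is a minimal sketch that authenticates each message with an HMAC before it is acted on. This is a generic pattern with hypothetical names, assumed purely for illustration; it is not a description of Tesla's actual security layer.

```python
# Generic message-authentication sketch (illustrative only, not Tesla's design).
import hashlib
import hmac

SHARED_KEY = b"provisioned-secret-key"  # hypothetical key provisioned at the factory


def sign(payload: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag for an outgoing message."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()


def verify(payload: bytes, tag: bytes) -> bool:
    """Accept a message only if its tag checks out; reject spoofed or tweaked data."""
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)


# A command is only executed if verification succeeds.
command = b"steering_angle=1.5"
tag = sign(command)
assert verify(command, tag)                      # legitimate message passes
assert not verify(b"steering_angle=45.0", tag)   # tampered message is rejected
```

The point is simply that a message without a valid tag, whether spoofed input or tampered output, never makes it to the rest of the system.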

The most impressive part of all might be that this whole custom chip is backwards-compatible with existing Teslas, able to be dropped right in, and it won’t even cost that much. Exactly how much the system itself costs Tesla, and how much you’ll be charged as a customer — well, that will probably vary. But despite being the “best chip in the world,” this one is relatively affordable.

Part of that might be from going with a 14nm fabrication process rather than the sub-10nm process others have chosen (and to which Tesla may eventually have to migrate). For power savings, the smaller the better, and as we've established, efficiency is the name of the game here.

We’ll know more once there’s a bit more objective — truly objective, apologies to Musk — testing on this chip and its competition. For now just know that Tesla isn’t slacking and the FSD Computer should be more than enough to keep your Model 3 on the road.

Boston Dynamics showcases new uses for SpotMini ahead of commercial production

Last year at our TC Sessions: Robotics event, Boston Dynamics announced its intention to commercialize SpotMini. It was a big step for the secretive company. After a quarter century of building some of the world's most sophisticated robots, it was finally taking a step into the commercial realm, making the quadrupedal robot available to anyone with the need and financial resources for the device.

CEO Marc Raibert made a return appearance at our event this week to discuss the progress Boston Dynamics has made in the intervening 12 months, both with regard to SpotMini and the company’s broader intentions to take a more market-based approach to a number of its creations.

The appearance came hot on the heels of a key acquisition for the company. In fact, Kinema was the first major acquisition in the company's history — no doubt helped along by the very deep coffers of its parent company, SoftBank. The Bay Area-based startup's imaging technology forms a key component of Boston Dynamics' revamped version of its wheeled Handle robot, which now sports a vision system and has had its dual arms replaced with a multi-suction-cup gripper.

A recent video from the company demonstrated the efficiency and speed with which the system can be deployed to move boxes from shelf to conveyor belt. As Raibert noted onstage, Handle is the closest Boston Dynamics has come to a “purpose-built robot” — i.e. a robot designed from the ground up to perform a specific task. It marks a new focus for a company that, after its earliest days of DARPA-funded projects, appears to primarily be driven by the desire to create the world’s most sophisticated robots.

“We estimate that there’s about a trillion cubic foot boxes moved around the world every year,” says Raibert. “And most of it’s not automated. There’s really a huge opportunity there. And of course this robot is great for us, because it includes the DNA of a balancing robot and moving dynamically and having counterweights that let it reach a long way. So it’s not different, in some respects, from the robots we’ve been building for years. On the other hand, some of it is very focused on grasping, being able to see boxes and do tasks like stack them neatly together.”

The company will maintain a foothold on that side of things, as well. Robots like the humanoid Atlas will still form an important piece of its work, even when no commercial applications are immediately apparent.

But once again, it was SpotMini who was the real star of the show. This time, however, the company debuted the version of the robot that will go into production. At first glance, the robot looked remarkably similar to the version we had onstage last year.

“We’ve redesigned many of the components to make it more reliable, to make the skins work better and to protect it if it does fall,” says Raibert. “It has two sets [of cameras] on the front, and one on each side and one on the back. So we can see in all directions.”

I had the opportunity to pilot the robot — making me one of a relatively small group of people outside of the Boston Dynamics offices who’ve had the chance to do so. While SpotMini has all of the necessary technology for autonomous movement, user control is possible and preferred in certain situations (some of which we’ll get to shortly).

The controller is an OEMed design that looks something like an Xbox controller with an elongated touchscreen in the middle. The robot can be controlled directly with the touchscreen, but I opted for a pair of joysticks. Moving Spot around is a lot like piloting a drone. One joystick moves the robot forward and back, the other turns it left and right.

Like a drone, it takes some getting used to, particularly with regard to the orientation of the robot. One direction is always forward for the robot, but not necessarily for the pilot. Tapping a button on the screen switches the joystick functionality to the arm (or “neck,” depending on your perspective). This can be moved around like a standard robotic arm/grasper. The grasper can also be held stationary, while the rest of the robot moves around it in a kind of shimmying fashion.

Once you get the hang of it, it’s actually pretty simple. In fact, my mother, whose video game experience peaked at Tetris, was backstage at the event and happily took the controller from Boston Dynamics, controlling the robot with little issue.

Boston Dynamics is peeling back the curtain more than ever. During our conversation, Raibert debuted behind-the-scenes footage of component testing. It’s a sight to behold, with various pieces of the robot splayed out on a lab bench. It’s a side of Boston Dynamics we’ve not really seen before. Ditto for the images of large SpotMini testing corrals, where several of the robots patrol around autonomously.

Boston Dynamics also has a few more ideas of what the future could look like for the robot. Raibert shared footage of Massachusetts State Police utilizing Spot in different testing scenarios, where the robot’s ability to open doors could potentially get human officers out of harm’s way during a hostage or terrorist situation.

Another unit was programmed to autonomously patrol a construction site in Tokyo, outfitted with a Street View-style 360 camera, so it can monitor progress. “This lets the construction company get an assessment of progress at their site,” he explains. “You might think that that’s a low-end task. But these companies have thousands of sites. And they have to patrol them at least a couple of times a week to know where they are in progress. And they’re anticipating using Spot for that. So we have over a dozen construction companies lined up to do tests at various stages of testing and proof of concept in their scenarios.”

Raibert says SpotMini is still on track for a July release. The company plans to manufacture around 100 in its initial run, though it’s still not ready to talk about pricing.

Boston Dynamics debuts the production version of SpotMini

Last year at our TC Sessions: Robotics conference, Boston Dynamics announced that SpotMini will be its first commercially available product. A revamped version of the product would use the company’s decades of quadrupedal robotics learnings as a basis for a robot designed to patrol office spaces.

At today’s event, founder and CEO Marc Raibert took to the stage to debut the production version of the electric robot. As noted last year, the company plans to produce around 100 models this year. Raibert said that the company is aiming to start production in July or August. There are robots coming off the assembly line now, but they are all betas being used for testing, and the company is still doing redesigns. Pricing details will be announced this summer.

New things about the SpotMini as it moves closer to production include redesigned components to make it more reliable, skins that work better to protect the robot if it falls, and a set of cameras (two on the front, one on each side and one on the back) so it can see in all directions.

The SpotMini also has an arm (with a hand that’s often mistaken for its head) that is stabilized in space, so it stays in the same place as the rest of the robot moves, making it more flexible for different applications.

Raibert says he hopes the SpotMini becomes the “Android of robots” (or Android of androids), with navigation software and developers eventually writing apps that can run on and interact with the controls of the robot.

SpotMini is the first commercial robot Boston Dynamics is set to release, but as we learned earlier this year, it certainly won’t be the last. The company is looking to its wheeled Handle robot in an effort to push into the logistics space. It’s a super hot category for robotics right now. Notably, Amazon recently acquired Colorado-based startup Canvas to add to its own arm of fulfillment center robots.

Boston Dynamics made its own acquisition earlier this month — a first for the company. The addition of Kinema will bring advanced vision systems to the company’s robots — a key part in implementing these sorts of systems in the field.

Breeze Automation is building soft robots for the Navy and NASA

San Francisco soft robotics startup Breeze Automation made its debut today onstage at TechCrunch’s TC Sessions: Robotics + AI event at UC Berkeley. Co-founder and CEO Gui Cavalcanti joined us onstage at the event to showcase the contract work the company has been doing for organizations like NASA and the U.S. Navy.

Cavalcanti last joined TechCrunch onstage in September 2016, decked out in aviator sunglasses and full American flag regalia as a co-founder of fighting robot league MegaBots. These days, however, the Boston Dynamics alum’s work is a lot more serious and subdued, solving problems in dangerous settings like under water and outer space.

Developed as part of San Francisco R&D facility Otherlab, Breeze leverages the concept of highly adaptable soft robotics. The company’s robotic arms are air-filled fabric structures.

“The concept Otherlab has been developing for around seven years has been this idea of fluidic robots, hydraulic and pneumatic robots that are very cheap,” Cavalcanti told TechCrunch in a conversation ahead of today’s event. “Very robust to the environment and made with very lightweight materials. The original concept was, what is the simplest possible robot you can make, and what is the lightest robot you can make? What that idea turned into was these robots made of fabric and air.”

Breeze sets itself apart from much of the competition in the soft robotics space by applying these principles to the entire structure, instead of just, say, a gripper on the end of a more traditional robotic arm.

“All of that breaks down the second you get out of those large factories, and the question of how robots interact with the real world becomes a lot more pressing,” Cavalcanti says. “What we’re trying to do is take a lot more of the research around soft robotics and the advantages of being fully sealed systems that are moved with really compliant sources of actuation like air. It turns out that when you’re trying to interact with an environment that’s unpredictable or unstructured, and you’re going to bump into things and you’re going to not get it right because you don’t have full sensing of the state of the world, there’s a lot of advantages to having entire manipulators and arms be soft instead of just the end effector.”

Breeze showcased several works in progress, including a system developed for the Navy that uses an HTC Vive headset for remote operation. The company’s work with NASA, meanwhile, involves the creation of a robotic system that doesn’t require a central drive shaft, marking a departure from more traditional robotic systems.

“You’re now looking at robot joints that can handle significant loads, that could be entirely injection molded,” explains Cavalcanti. “You don’t need a metal shaft, you don’t need a set of bearings or whatever. You can just have a bunch of injection-molded plastic pieces that are put together, and there’s your robot.”

Most of the company’s funding is currently coming from federal contracts from places like the Navy and NASA, but going forward, Breeze is shifting more toward commercial contracts. “Our mission right now is to harden our technology and prepare for real-world application, and that is pretty much 100 percent our focus,” he says. “Once we do harden it, there are a variety of options for going commercial that we’d like to explore.”

Industrial robotics giant Fanuc is using AI to make automation even more automated

Industrial automation is already streamlining the manufacturing process, but first those machines must be painstakingly trained by skilled engineers. Industrial robotics giant Fanuc wants to make robots easier to train, therefore making automation more accessible to a wider range of industries, including pharmaceuticals. The company announced a new artificial intelligence-based tool at TechCrunch’s Robotics/AI Sessions event today that teaches robots how to pick the right objects out of a bin with simple annotations and sensor technology, reducing the training process by hours.

Bin-picking is exactly what it sounds like: a robot arm is trained to pick items out of bins, and it's used for tedious, time-consuming tasks like sorting bulk orders of parts. Images of example parts are taken with a camera for the robot to match with its vision sensors. Then the conventional process of training bin-picking robots means teaching the robot many rules so it knows which parts to pick up.

“Making these rules in the past meant having to go through a lot of iterations and trial and error. It took time and was very cumbersome,” said Dr. Kiyonori Inaba, the head of Fanuc Corporation’s Robot Business Division, during a conversation ahead of the event.

These rules include details like how to locate the parts on the top of the pile or which ones are the most visible. Then after that, human operators need to tell it when it makes an error in order to refine its training. In industries that are relatively new to automation, finding enough engineers and skilled human operators to train robots can be challenging.

This is where Fanuc’s new AI-based tool comes in. It simplifies the training process so the human operator just needs to look at a photo of parts jumbled in a bin on a screen and tap a few examples of what needs to be picked up, like showing a small child how to sort toys. This is significantly less training than what typical AI-based vision sensors need and can also be used to train several robots at once.

“It is really difficult for the human operator to show the robot how to move in the same way the operator moves things,” said Inaba. “But by utilizing AI technology, the operator can teach the robot more intuitively than conventional methods.” He adds that the technology is still in its early stages and it remains to be seen if it can be used in assembly as well.

Nvidia launches its Isaac SDK to help democratize AI-powered robot development

Today at TechCrunch’s TC Sessions: Robotics + AI event at UC Berkeley, Nvidia VP of Engineering Claire Delaunay announced that the company’s Isaac SDK is available for download. Announced last month, the software development kit is part of the chipmaker’s ongoing push to help make robotics development more accessible for a wider range of users.

The system is designed to improve accessibility to key features of robotics AI and ML, including obstacle detection, speech recognition and stereo depth estimation, each of which will prove to be a key component of even basic robotic systems going forward.

According to the company:

Using computational graphs and an entity component system, the Isaac Robot Engine allows developers to break down complex robotic tasks into a network of smaller, simpler steps. Developing a complex system is made easy using Gems, which are modular capabilities for sensing, planning, and actuation that can be easily plugged into a robotics application.
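
That description boils down to composing a robot application out of small, reusable steps. The sketch below illustrates the pattern in the abstract, using plain Python and hypothetical names; it is not the Isaac SDK’s actual API.

```python
# Abstract illustration of building a robot task from small modular steps
# (hypothetical names; not the Isaac SDK API).
from typing import Callable, Dict, List

State = dict


class Graph:
    def __init__(self) -> None:
        self.steps: Dict[str, Callable[[State], State]] = {}
        self.order: List[str] = []

    def add(self, name: str, step: Callable[[State], State]) -> None:
        """Register a named step, analogous to a modular capability."""
        self.steps[name] = step
        self.order.append(name)

    def run(self, state: State) -> State:
        """Execute the steps in order, passing shared state between them."""
        for name in self.order:
            state = self.steps[name](state)
        return state


# Compose a toy "navigate to goal" task out of sensing, planning and acting steps.
graph = Graph()
graph.add("sense", lambda s: {**s, "obstacles": ["crate"]})
graph.add("plan", lambda s: {**s, "path": ["forward", "left"] if s["obstacles"] else ["forward"]})
graph.add("act", lambda s: {**s, "executed": s["path"]})
print(graph.run({"goal": (3, 4)}))
```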

And, of course, the system will play nicely with Nvidia’s own robotics hardware components like the Jetson Nano and Jetson AGX Xavier. Delaunay demonstrated some of the system’s functionality onstage at today’s event, using Nvidia’s dual in-house reference platforms, the two-wheeled Carter and four-wheeled Kaya.

Aptiv takes its self-driving car ambitions (and tech) to China

Aptiv, the U.S. auto supplier and self-driving software company, is opening an autonomous mobility center in Shanghai to focus on the development and eventual deployment of its technology on public roads.

The expansion marks the fifth market where Aptiv has set up R&D, testing or operational facilities. Aptiv has autonomous driving operations in Boston, Las Vegas, Pittsburgh and Singapore. But China is perhaps its most ambitious endeavor yet.

Aptiv has never had any AV operations in China, but it does have a long history in the country, including manufacturing and engineering facilities. The company, in its earlier forms as Delphi and Delco, has been in China since 1993 — experience that will be invaluable as it tries to bring its autonomous vehicle efforts into a new market, Aptiv Autonomous Mobility President Karl Iagnemma told TechCrunch in a recent interview.

“The long-term opportunity in China is off the charts,” Iagnemma said, noting a recent McKinsey study that claims the country will host two-thirds of the world’s autonomously driven miles by 2040 and be a trillion-dollar mobility service opportunity.

“For Aptiv, it’s always been a question of not ‘if’, but when we’re going to enter the Chinese market,” he added.

Aptiv will have self-driving cars testing on public roads by the second half of 2019.

“Our experience in other markets has shown that in this industry, you learn by doing,” Iagnemma explained.

And it’s a remark that Iagnemma can stand by. Iagnemma is the co-founder of self-driving car startup nuTonomy, one of the first to launch a robotaxi service, in Singapore in 2016, that the public — along with human safety drivers — could use.

NuTonomy was acquired by Delphi in 2017 for $450 million. NuTonomy became part of Aptiv after its spinoff from Delphi was complete.

Aptiv is also in discussions with potential partners for mapping and commercial deployment of Aptiv’s vehicles in China.

Some of those partnerships will likely mimic the types of relationships Aptiv has created here in the U.S., notably with Lyft. Aptiv’s self-driving vehicles operate on Lyft’s ride-hailing platform in Las Vegas and have provided more than 40,000 paid autonomous rides via the Lyft app.

Aptiv will also have to create new kinds of partnerships unlike those it has in the U.S., due to restrictions and rules in China around data collection, intellectual property and the creation of high-resolution map data.

Talk all things robotics and AI with TechCrunch writers

This Thursday, we’ll be hosting our third annual Robotics + AI TechCrunch Sessions event at UC Berkeley’s Zellerbach Hall. The day is packed start-to-finish with intimate discussions on the state of robotics and deep learning with key founders, investors, researchers and technologists.

The event will dig into recent developments in robotics and AI, which startups and companies are driving the market’s growth, and how the evolution of these technologies may ultimately play out. In preparation for our event, TechCrunch’s Brian Heater spent time over the last several months visiting some of the top robotics companies in the country. Brian will be on the ground at the event, alongside Lucas Matney. On Friday at 11:00 am PT, Brian and Lucas will share what they saw and what excited them most with Extra Crunch members on a conference call.

Tune in to find out what you might have missed and to ask Brian and Lucas anything else about robotics, AI or hardware. And want to attend the event in Berkeley this week? It’s not too late to get tickets.

To listen to this and all future conference calls, become a member of Extra Crunch. Learn more and try it for free.

Disney/Lucasfilm donates $1.5 million to FIRST

A day after the big Episode IX reveal, Disney and subsidiary Lucasfilm announced that they will be donating $1.5 million to FIRST. The non-profit group was founded by Dean Kamen in 1989 to help teach STEM through initiatives like robotics competitions.

Disney’s money will go to provide education and outreach to the underserved communities on which FIRST focuses. Details are pretty thin on precisely what the partnership will entail, but Disney’s certainly got a lot to gain from this sort of outreach — and Lucasfilm knows a thing or two about robots.

The Star Wars: Force for Change announcement was made in conjunction with Lucasfilm’s annual Star Wars Celebration in Chicago. Yesterday the event hosted a panel with the cast of the upcoming film that included a teaser trailer and title reveal.

“Star Wars has always inspired young people to look past what is and imagine a world beyond,” Lucasfilm president Kathleen Kennedy said in a release tied to the news. “It is crucial that we pass on the importance of science and technology to young people—they will be the ones who will have to confront the global challenges that lie ahead. To support this effort, Lucasfilm and Disney are teaming up with FIRST to bring learning opportunities and mentorship to the next generation of innovators.”

It’s been a good week for FIRST investments. Just yesterday Amazon announced its own commitment to the group’s robotics offerings.

IAM Robotics puts a unique spin on warehouse automation

Before robots get to do the fun stuff, they’re going to be tasked with all of the things humans don’t want to do. It’s a driving tenet of automation — developing robotics and AI designed to replace dull, dirty and dangerous tasks. It’s no surprise, then, that warehouses and fulfillment centers have been major drivers in the field.

Earlier this week, we reported that Amazon would be acquiring Canvas, adding another piece to its portfolio and joining the 100,000 or so robots it currently deploys across 25 or so fulfillment centers. Even Boston Dynamics has been getting into the game, acquiring a vision systems company in order to outfit its Handle robot for warehouse life.

As with so much of the robotics world, Pittsburgh is a key player in automation. IAM Robotics is one of the more compelling local entrants in the space. We paid the company a visit on a recent trip to town. Located in a small office outside of the city, the startup offers a unique take on increasingly important pick-and-place robotics, combining a robotic arm with a mobile system.

“What’s unique about IAM Robotics is we’re the only ones with a mobile robot that is also capable of manipulating objects and moving things around the warehouse by itself,” CEO Joel Reed told TechCrunch. “It doesn’t require a person in the loop to actually physically handle things. And what’s unique about that is we’re empowering the machine with AI and computer vision technologies to make those decisions by itself. So it’s fully autonomous, it’s driving around, using its own ability to see.”

The startup has mostly operated quietly, in spite of a $20 million venture round led by KCK late last year. After a quick demo in the office, it’s easier to see how early investors have found promise in the company. Still, the demo marks a pretty stark contrast from the Bossa Nova warehouse where we spent the previous day.

There are a couple of small rows of groceries in a corner of the office space, a few feet away from where the rest of IAM’s staff is at work. A pair of the company’s Swift robots go to work, traveling up and down the small, makeshift aisle. When the robot locates the desired product on a shelf, a long, multi-segmented arm drops down, positioning itself in front of a box. The suction cup tip attaches to the product, then the arm swivels back around to release it into a bin.

Used correctly, the Swift could help companies staff difficult-to-fill positions, while adding a layer of efficiency in the warehouse. “Our customers or prospective customers are looking to automate to both reduce costs, but also to alleviate this manual labor shortage,” says Reed. “So we have a younger generation that’s just more interested in doing jobs like gig economy jobs, drive for Uber, Lyft, those kinds of things, because they can make more money than they could in working at a warehouse.”