Robots learn to grab and scramble with new levels of agility

Robots are amazing things, but outside of their specific domains they are incredibly limited. So flexibility — not physical, but mental — is a constant area of research. A trio of new robotic setups demonstrate ways they can evolve to accommodate novel situations: using both “hands,” getting up after a fall, and understanding visual instructions they’ve never seen before.

The robots, all developed independently, are gathered together today in a special issue of the journal Science Robotics dedicated to learning. Each shows an interesting new way in which robots can improve their interactions with the real world.

On the other hand…

First there is the question of using the right tool for a job. As humans with multi-purpose grippers on the ends of our arms, we’re pretty experienced with this. We understand from a lifetime of touching stuff that we need to use this grip to pick this up, we need to use tools for that, this will be light, that heavy, and so on.

Robots, of course, have no inherent knowledge of this, which can make things difficult; a robot may not understand that it can’t pick up something of a given size, shape, or texture. A new system from Berkeley roboticists, called Dex-Net 4.0, acts as a rudimentary decision-making process, classifying objects as graspable either with an ordinary pincer grip or with a suction cup.

A robot, wielding both simultaneously, decides on the fly (using depth-based imagery) what items to grab and with which tool; the result is extremely high reliability even on piles of objects it’s never seen before.

It’s done with a neural network that consumed millions of data points on items, arrangements, and attempts to grab them. If you attempted to pick up a teddy bear with a suction cup and it didn’t work the first ten thousand times, would you keep on trying? This system learned to make that kind of determination, and as you can imagine such a thing is potentially very important for tasks like warehouse picking for which robots are being groomed.

Interestingly, because of the “black box” nature of complex neural networks, it’s difficult to tell exactly what Dex-Net 4.0 is basing its choices on, although there are some obvious preferences, explained Berkeley’s Ken Goldberg in an email.

“We can try to infer some intuition but the two networks are inscrutable in that we can’t extract understandable ‘policies,’ ” he wrote. “We empirically find that smooth planar surfaces away from edges generally score well on the suction model and pairs of antipodal points generally score well for the gripper.”
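
To make that concrete, here is a minimal, hypothetical sketch of the kind of two-policy selection Goldberg describes: sample candidate grasp points in a depth image, score each candidate with both a suction model and a pincer model, then act on whichever scores highest. The Grasp class, the stand-in scoring callables and the 0.5 cutoff are all invented for illustration; this is not Dex-Net 4.0’s actual code or API.

```python
"""Toy sketch of two-policy grasp selection: score candidates with a
suction model and a pincer model, pick the best.  Not Dex-Net code."""
import random
from dataclasses import dataclass

@dataclass
class Grasp:
    tool: str          # "suction" or "pincer"
    pixel: tuple       # (row, col) in the depth image
    score: float       # predicted probability of a successful grasp

def plan_grasp(depth_image, score_suction, score_pincer, n_candidates=100):
    h, w = len(depth_image), len(depth_image[0])
    candidates = []
    for _ in range(n_candidates):
        px = (random.randrange(h), random.randrange(w))   # naive uniform sampling
        candidates.append(Grasp("suction", px, score_suction(depth_image, px)))
        candidates.append(Grasp("pincer", px, score_pincer(depth_image, px)))
    best = max(candidates, key=lambda g: g.score)
    return best if best.score > 0.5 else None             # skip hopeless picks

if __name__ == "__main__":
    depth = [[1.0] * 64 for _ in range(64)]               # flat dummy depth map
    flat_score = lambda img, px: 0.9                      # pretend suction loves flat surfaces
    edge_score = lambda img, px: 0.3                      # pretend no good antipodal pairs
    print(plan_grasp(depth, flat_score, edge_score))      # -> a suction grasp
```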

Now that reliability and versatility are high, the next step is speed; Goldberg said that the team is “working on an exciting new approach” to reduce computation time for the network, to be documented, no doubt, in a future paper.

ANYmal’s new tricks

Quadrupedal robots are already flexible in that they can handle all kinds of terrain confidently, even recovering from slips (and of course cruel kicks). But when they fall, they fall hard. And generally speaking they don’t get up.

The way these robots have their legs configured makes it difficult to do things in anything other than an upright position. But ANYmal, a robot developed by ETH Zurich (and which you may recall from its little trip to the sewer recently), has a more versatile setup that gives its legs extra degrees of freedom.

What could you do with that extra movement? All kinds of things. But it’s incredibly difficult to figure out the exact best way for the robot to move in order to maximize speed or stability. So why not use a simulation to test thousands of ANYmals trying different things at once, and use the results from that in the real world?

This simulation-based learning doesn’t always work, because it isn’t possible right now to accurately simulate all the physics involved. But it can produce extremely novel behaviors or streamline ones humans thought were already optimal.

At any rate, that’s what the researchers did here, and not only did they arrive at a faster trot for the bot (above), but they also taught it an amazing new trick: getting up from a fall. Any fall. Watch this:

It’s extraordinary that the robot has come up with essentially a single technique to get on its feet from nearly any likely fall position, as long as it has room and the use of all its legs. Remember, people didn’t design this — the simulation and evolutionary algorithms came up with it by trying thousands of different behaviors over and over and keeping the ones that worked.
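
For the curious, here is a toy version of that keep-what-works loop: generate a population of parameterized behaviors, score each in a stand-in simulator, keep the best performers, mutate them and repeat. The fitness function and mutation scheme below are made up for illustration; the actual ANYmal work relies on a full physics simulator and far more sophisticated learning.

```python
"""Toy evolutionary search over gait parameters: try many behaviors in a
(stand-in) simulator and keep the ones that work.  Illustration only."""
import random

def simulate(gait_params):
    """Stand-in for a physics rollout returning a score (e.g. distance
    walked before falling).  Here it is just a dummy function."""
    return -sum((p - 0.7) ** 2 for p in gait_params) + random.gauss(0, 0.01)

def evolve(pop_size=64, n_params=8, generations=200, keep=8, noise=0.05):
    population = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=simulate, reverse=True)
        elites = scored[:keep]                             # keep the behaviors that worked
        population = [
            [p + random.gauss(0, noise) for p in random.choice(elites)]  # mutate survivors
            for _ in range(pop_size)
        ]
    return max(population, key=simulate)

print(evolve())   # converges toward parameters the dummy simulator rewards
```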

Ikea assembly is the killer app

Let’s say you were given three bowls, with red and green balls in the center one. Then you’re given this on a sheet of paper:

As a human with a brain, you take this paper as instructions, and you understand that the green and red circles represent balls of those colors, and that red ones need to go to the left, while green ones go to the right.

This is one of those things where humans apply vast amounts of knowledge and intuitive understanding without even realizing it. How did you decide that the circles represent the balls? Because of the shape? Then why don’t the arrows refer to “real” arrows? How do you know how far to go to the right or left? How do you know the paper even refers to these items at all? All questions you would resolve in a fraction of a second, and any of which might stump a robot.

Researchers have taken some baby steps toward connecting abstract representations like the one above with the real world, a task that requires a significant amount of what amounts to machine creativity or imagination.

Making the connection between a green dot on a white background in a diagram and a greenish roundish thing on a black background in the real world isn’t obvious, but the “visual cognitive computer” created by Miguel Lázaro-Gredilla and his colleagues at Vicarious AI seems to be doing pretty well at it.

It’s still very primitive, of course, but in theory it’s the same toolset that one uses to, for example, assemble a piece of Ikea furniture: look at an abstract representation, connect it to real-world objects, then manipulate those objects according to the instructions. We’re years away from that, but it wasn’t long ago that we were years away from a robot getting up from a fall or deciding a suction cup or pincer would work better to pick something up.
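
As a rough illustration of what that grounding step even means, here is a hand-coded toy that maps colored symbols in a diagram to the closest-colored objects in a scene. The symbol names, RGB values and matching rule are invented; the point of Vicarious’ system is that it has to learn this kind of mapping from examples rather than being handed the rule.

```python
"""Toy illustration of grounding diagram symbols to scene objects by
color similarity.  Hand-coded for illustration, not Vicarious' VCC."""

def color_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def ground_symbols(diagram_symbols, scene_objects):
    """Map each diagram symbol to the closest-colored real object."""
    return {
        name: min(scene_objects, key=lambda obj: color_distance(rgb, obj["rgb"]))
        for name, rgb in diagram_symbols.items()
    }

# Pure red/green circles in the diagram vs. dimmer, off-hue balls in the scene.
diagram = {"red_circle": (255, 0, 0), "green_circle": (0, 255, 0)}
scene = [{"id": "ball_1", "rgb": (180, 40, 35)}, {"id": "ball_2", "rgb": (30, 160, 60)}]
print(ground_symbols(diagram, scene))
# {'red_circle': {'id': 'ball_1', 'rgb': (180, 40, 35)},
#  'green_circle': {'id': 'ball_2', 'rgb': (30, 160, 60)}}
```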

The papers and videos demonstrating all the concepts above should be available at the Science Robotics site.

Get a TechCrunch Sessions: Robotics + AI demo table while you can

Innovation’s leading edge lives at the intersection of robotics and AI. What could possibly be more exciting than attending TechCrunch Sessions: Robotics + AI on April 18 — where you’ll spend a full day immersed in these world-changing technologies?

Well, you could showcase your early-stage startup to nearly 1,000 of the best minds in robotics and AI. Yeah, that’s pretty awesome, too. All you need to do is buy a demo table before they sell out, so don’t delay. Oh, and the $1,500 price tag also includes three attendee tickets, so you can bring your tribe.

We’re not hyperbolizing about the titanic talent at TC Sessions: Robotics + AI. Speakers at our previous events at Berkeley and MIT have included technologists, founders and investors, among them Ayanna Howard (Georgia Tech), Rob Coneybeer (Shasta Ventures), Helen Greiner (CyPhyWorks), Tye Brady (Amazon Robotics), Ken Goldberg (UC Berkeley) and so many others.

These are just some of the doers, movers and shakers that can make an early-stage startup founder’s dream come true. This event provides an exceptional opportunity to demo your product in front of a very smart, very large and very targeted audience. This year’s lineup (a work in progress) will not disappoint.

Here’s what else you can expect at TC Sessions: Robotics + AI. TechCrunch editors will host a full day of interviews and demos (like this one) on the main stage. And we’ll have workshops and other demos running in parallel. Want to know more? Check out the full coverage from last year. And, as always, there will be plenty of opportunity for world-class networking.

Here are essential housekeeping details you need to know. TechCrunch Sessions: Robotics + AI takes place at UC Berkeley’s Zellerbach Hall on April 18, 2019. Right now, early-bird tickets cost $249, and if you’re a student, you get in for $45.

Don’t miss a spectacular day-long event focused exclusively on robotics and AI. Come learn, teach, demo and network. And buy your tickets and a demo table now before it’s too late. We can’t wait to see you there!

Daily Crunch: Bing has a child porn problem

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here:

1. Microsoft Bing not only shows child pornography, it suggests it

A TechCrunch-commissioned report has found damning evidence on Microsoft’s search engine. Our findings show a massive failure on Microsoft’s part to adequately police its Bing search engine and to prevent its suggested searches and images from assisting pedophiles.

2. Unity pulls nuclear option on cloud gaming startup Improbable, terminating game engine license

Unity, the widely popular gaming engine, has pulled the rug out from under U.K.-based cloud gaming startup Improbable and revoked its license — effectively shutting it out from a top customer source. The conflict arose after Unity claimed Improbable broke the company’s Terms of Service and distributed Unity software on the cloud.

3. Improbable and Epic Games establish $25M fund to help devs move to ‘more open engines’ after Unity debacle

Just when you thought things were going south for Improbable, the company inked a late-night deal with Unity competitor Epic Games to establish a fund geared toward open gaming engines. This raises the question of how Unity and Improbable’s relationship managed to sour so quickly and so publicly.

4. The next phase of WeChat 

WeChat boasts more than 1 billion daily active users, but user growth is starting to hit a plateau. That’s been expected for some time, but it is forcing the Chinese juggernaut to build new features that generate more time spent in the app in order to maintain growth.

5. Bungie takes back its Destiny and departs from Activision 

The creator behind games like Halo and Destiny is splitting from its publisher Activision to go its own way. This is good news for gamers, as Bungie will no longer be under the strict publisher deadlines that plagued the launch of Destiny and its sequel.

6. Another server security lapse at NASA exposed staff and project data

The leaking server was — ironically — a bug-reporting server, running the popular Jira bug triaging and tracking software. In NASA’s case, the software wasn’t properly configured, allowing anyone to access the server without a password.

7. Is Samsung getting serious about robotics? 

This week Samsung made a surprise announcement during its CES press conference and unveiled three new consumer and retail robots and a wearable exoskeleton. It was a pretty massive reveal, but the company’s look-but-don’t-touch approach raised far more questions than it answered.

World’s most valuable AI startup SenseTime unveils self-driving center in Japan

The world’s highest-valued artificial intelligence startup SenseTime has set foot in Japan. The Beijing-based firm announced on Friday that it just opened a self-driving facility in Joso, a historic city 50 kilometers away from Tokyo where it plans to conduct R&D and road test driverless vehicles.

The initiative follows its agreement with Japanese auto giant Honda in 2017 to jointly work on autonomous driving technology. SenseTime, which is backed by Alibaba and was last valued at more than $4.5 billion, is best known for object recognition technologies that have been deployed widely across retail, healthcare and public security in China. Bloomberg reported this week that the AI upstart is raising $2 billion in fresh funding.

Four-year-old SenseTime isn’t the only Chinese AI company finding opportunities in Japan. China’s biggest search engine provider Baidu is also bringing autonomous vehicles to its neighboring country, a move made possible through a partnership with SoftBank’s smart bus project SB Drive and Chinese automaker King Long.

Japan has in recent years made a big investment push in AI and autonomous driving, which could help it cope with an aging and declining workforce. The government aims to put driverless cars on Tokyo’s public roads by 2020, when the Olympics take place. The capital city said it already successfully trialled autonomous taxis last August.

SenseTime’s test park, which is situated near Japan’s famed innovation hub Tsukuba Science City, will be open to local residents, who can check out the vehicles slated to transport them in a few years.

“We are glad to have the company setting up an R&D center for autonomous driving in our city,” said Mayor of Joso Takeshi Kandatsu in a statement. “I believe autonomous driving vehicles will bring not only revolutionary changes to our traffic system, but also solutions to regional traffic problems. With the help of SenseTime, I look forward to seeing autonomous cars running on the roads of Joso. We will give full support to make it happen.”

Taking a stroll with Samsung’s robotic exoskeleton

Samsung’s look-but-don’t-touch policy left many wondering precisely how committed the company is to its new robots. On the other hand, the company was more than happy to let me take the GEMS (Gait Enhancing and Motivation System) for a spin.

The line includes a trio of wearable exoskeletons, the A (ankle), H (hip) and K (knee). Each serves a different set of needs and muscles, but ultimately provides the same functions: walking assistance and resistance to help wearers improve strength and balance.

Samsung’s far from the first to tackle the market, of course. There are a number of companies with exoskeleton solutions aimed at walking support/rehabilitation and/or field assistance for physically demanding jobs. Rewalk, Ekso and SuitX have all introduced compelling solutions, and a number of automotive companies have also invested in the space.

At this stage, it’s hard to say precisely what Samsung can offer that others can’t, though certainly the company’s got plenty of money, know-how and super smart employees. As with the robots, if it truly commits and invests, it could produce some really remarkable work in this space.

Having taken the hip system for a bit of a spin at Samsung’s booth, I can at least say that the assistive and resistance modes do work. A rep described the resistance as feeling something akin to walking under water, and I’m hard-pressed to come up with a better analogy. The assistive mode is a bit hard to pick up on at first, but is much more noticeable when walking up stairs after trying out the other mode.

Like the robots, it’s hard to know how these products will ultimately fit into the broader portfolio of a company best known for smartphones, TVs and chips. Hopefully we won’t have to wait until the next CES to find out.

Is Samsung getting serious about robotics?

A funny thing happened at Samsung’s CES press conference. After the PC news, 8K TVs and Bixby-sporting washing machines, the company announced “one more thing,” handing over a few brief moments to a new robotics division, three new consumer and retail robots and a wearable exoskeleton.

It was a pretty massive reveal in an extremely short space, and, quite frankly, raised far more questions than it answered. Within the broader context of a press conference, it’s often difficult to determine where the hype ends and the serious commitment to a new category begins.

This goes double for a company like Samsung, which has been taking extra care in recent months to demonstrate its commitment to the future, as the mobile industry undergoes its first major slowdown since the birth of the smartphone. It follows a similar play by LG, which has offered a glimpse into its own robotics plans for back-to-back years, including allowing a ’bot to copilot this year’s keynote.

We all walked away from the press conference unsure of what to make of it all, with little more to show for things than a brief onstage demo. Naturally, I jumped at the opportunity to spend some quality time with the new robots behind the scenes the following day. There were some caveats, however.

First, the company insisted we watch a kind of in-person orientation, wherein a trio of mic’d-up spokespeople walked us through the new robots. There’s Bot Care, a healthcare robot designed to assist with elder care, which features medication reminders, health briefings and the ability to check vitals with a finger scan. There are also yoga lessons and an emergency system that will dial 911 if a user falls.

There’s also Bot Air, an adorable little trash can-style robot that zooms around monitoring air quality and cleaning it accordingly. Bot Retail rounds out the bunch, with a touchscreen for ordering and trays in the rear for delivering food and other purchases.

The other major caveat was look, but don’t touch. You can get as close as you want, but you can’t interact with the robot beyond that.

The demos were impressive. The robots’ motions are extremely lifelike, with subtle touches that imbue each with a sense of personality rarely seen outside of movie robots like Wall-E. The response time was quick and they showed a wide range of genuinely useful tasks. If the robots are capable of performing as well in person as they do in these brief, choreographed demos, Samsung may have truly cracked the code of personal care and retail robotics.

That, of course, is a big if. Samsung wouldn’t answer the question of how much these demos are being orchestrated behind the scenes, but given how closely the company kept to the script, one suspects we’re largely looking at approximations of how such a human/robot interaction could ultimately play out somewhere down the road. And a Samsung spokesperson I spoke to admitted that everything is very early stages.

Really, it looks to be more akin to a proof of concept. Like, hey, we’re Samsung. We have a lot of money, incredibly smart people and know how to build components better than just about anyone. This is what it would look like if we went all-in on robotics. The company also wouldn’t answer questions about how seriously it’s ultimately taking robotics as a category.

You can’t expect to succeed in building incredibly complex AI/robotics/healthcare systems by simply dipping your toe in the water. I would love to see Samsung all-in on this. These sorts of things have the potential to profoundly impact the way we interact with technology, and Samsung is one of a few companies in a prime position to successfully explore this category. But doing so is going to require a true commitment of time, money and human resources.


Meet Caper, the AI self-checkout shopping cart

The Amazon boogeyman has every retailer scrambling for ways to fight back. But the cost and effort of installing cameras all over the ceiling or into every shelf could block stores from entering the autonomous shopping era. Caper Labs wants to make eliminating checkout lines as easy as replacing their shopping carts, while offering a more familiar experience for customers.

The startup makes a shopping cart with a built-in barcode scanner and credit card swiper, but it’s finalizing the technology to automatically scan items you drop in thanks to three image recognition cameras and a weight sensor. The company claims people already buy 18 percent more per visit after stores are equipped with its carts.

Caper’s cart

Today, Caper is revealing that it’s raised a total of $3 million including a $2.15 million seed round led by prestigious First Round Capital and joined by food-focused angels like Instacart co-founder Max Mullen, Plated co-founder Nick Taranto, Jet’s Jetblack shopping concierge co-founder Jenny Fleiss, plus Y Combinator. Caper is now in two retailers in the NYC area, though it plans to use the cash to expand to more and develop a smart shopping basket for smaller stores.

“If you walked into a grocery store 100 years ago versus today, nothing has really changed,” says Caper co-founder and CEO Lindon Gao. “It doesn’t make sense that you can order a cab with your phone or go book a hotel with your phone, but you can’t use your phone to make a payment and leave the store. You still have to stand in line.”

Autonomous retail is going to be a race. $50 million-funded Standard Cognition, ex-Pandora CTO Will Glaser’s Grabango, and scrappier startups like Zippin and Inokyo are all building ceiling and shelf-based camera systems to help merchants keep up with Amazon Go’s expanding empire of cashierless stores. But Caper’s plug-and-play cart-based system might be able to leapfrog its competitors if it’s easier for shops to set up.

Caper combines image recognition and a weight sensor to identify items without a barcode scan

Inventing The Smart Cart

“I don’t have an altruistic reason to care about retail, but I really want to put a dent in the universe and I think retail is severely under-innovated,” Gao candidly remarked. Most founders try to spin a “super hero origin story” about why they’re the right person for the job. For Gao, chasing autonomous retail is just good business. He built his first startup in gaming commerce at age 14. The jewelry company he launched at 19 still operates. He went on to become an investment banker at Goldman Sachs and JP Morgan, but “I always felt like I was more of a startup guy.”

Caper was actually a pivot from his previous entry in the space, QueueHop, which made cashierless apparel security tags that unlocked when you paid. But during Y Combinator, he discovered how tough it would be to scale a product that requires a complete rethinking of a merchant’s operations flow. So Gao hoofed it around NYC to talk to 150 merchants and discover what they really wanted. The cart was the answer.

Caper co-founder and CEO Lindon Gao

V1 of Caper’s cart lets people scan their items’ barcodes and pay on the cart with a credit card swipe or Apple/Android Pay tap; their receipt is emailed to them. But each time they scan, the cart is actually taking 120 photos and precisely weighing the items to train Caper’s machine vision algorithms, in what Gao likens to how Tesla is inching toward self-driving.

Soon, Caper wants to go entirely scanless, and sections of its two pilot stores already use the technology. The cameras on the cart employ image recognition matched with a weight sensor to identify what you toss in your cart. You shop just like normal but then pay and leave with no line. Caper pulls in a store’s existing security feed to help detect shoplifting, which could be a bigger risk than with ceiling and shelf camera systems, but Gao says it hasn’t been a problem yet. He wouldn’t reveal the price of the carts but said “they’re not that much more expensive than a standard shopping cart. To outfit a store it should be comparable to the price of implementing traditional self-checkout.” Shops buy the carts outright and pay a technology subscription, but get free hardware upgrades. They’ll have to hope Caper stays alive.
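
To illustrate how a camera and a load cell might complement each other in a setup like this, here is a rough, hypothetical sketch: filter the vision system’s candidate items by whether their catalog weight matches the measured change in cart weight, then pick the most confident remaining match. The catalog, tolerance and scoring below are invented; Caper hasn’t published how its system actually fuses the two signals.

```python
"""Hypothetical sketch of fusing image recognition with a weight sensor
to identify an item dropped in a cart.  Catalog and numbers are invented."""

CATALOG = {
    "avocado":        {"weight_g": 200},
    "tortilla chips": {"weight_g": 310},
    "salsa jar":      {"weight_g": 470},
}

def identify_item(vision_scores, weight_delta_g, tolerance_g=40):
    """vision_scores: {item_name: confidence} from the cart's cameras.
    weight_delta_g: change measured by the cart's load cell.
    Keep only items whose catalog weight matches the measured change,
    then pick the one the cameras were most confident about."""
    plausible = {
        name: score for name, score in vision_scores.items()
        if abs(CATALOG[name]["weight_g"] - weight_delta_g) <= tolerance_g
    }
    return max(plausible, key=plausible.get) if plausible else None

# The cameras can't quite tell chips from salsa, but the scale can.
print(identify_item({"tortilla chips": 0.55, "salsa jar": 0.45}, weight_delta_g=455))
# -> "salsa jar"
```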

“Do you want guacamole with those chips?”

Caper hopes to deliver three big benefits to merchants. First, they’ll be able to repurpose cashier labor to assist customers so they buy more and to keep shelves stocked, though eventually this technology is likely to eliminate a lot of jobs. Second, the ease and affordable cost of transitioning means businesses will be able to recoup their investment and grow revenues as shoppers buy more. And third, Caper wants to share data that its carts collect on routes through the store, shelves customers hover in front of, and more with its retail partners so they can optimize their layouts.

Caper’s screen tracks items you add to the cart and can surface discounts and recommendations

One big advantage over its ceiling and shelf camera competitors is that Caper’s cart can promote deals on nearby or related items. In the future, it plans to add recommendations based on what’s in your cart to help you fill out recipes. ‘Threw some chips in the cart? Here’s where to find the guacamole that’s on sale.’ A smaller hand-held smart basket could broaden Caper’s appeal beyond grocers to smaller shops, though making it light enough to carry will be a challenge.

Gao says that with merchants already seeing sales growth from the carts, what keeps him up at night is handling Caper’s supply chain, since the product requires a ton of different component manufacturers. The startup has to move fast if it wants to be what introduces Main Street to autonomous retail. But no matter what gadgets it builds in, Caper must keep sight of the real-world stress its tech will undergo. Gao concludes: “We’re basically building a robot here. The carts need to be durable. They need to resist heat, vibration, rain, people slamming them around. We’re building our shopping cart like a tank.”

This hole-digging drone parachutes in to get the job done

A new drone from the NIMBUS group at the University of Nebraska can fall out of a plane, parachute down, fly to a certain place, dig a hole, hide sensors inside it, and then fly away like some crazy wasp. Robots are weird.

The goal of the project is to allow drones to place sensors in distant and hostile environments. The system starts on a plane or helicopter, which ejects the whole package inside a cylindrical canister. The canister falls for a while, then slows down with a parachute. Once it’s close enough to the ground, the drone pops out, lands, drills a massive hole with its screw drill, deposits its sensor payload and leaves the heavy parts behind so it can fly home.

Drones can only fly for so long while carrying heavy gear, so this approach ensures that the drone can get there without draining its battery and escape without running down to empty.

“Battery powered drones have very short flight times, especially when flying with a heavy load, which we are since we have our digging apparatus and sensor system. So to get to distant locations, we need to hitch a ride on another vehicle,” NIMBUS co-director Carrick Detweiler told Spectrum. “This allows it to save energy for return trips. In this video we used a much larger gas powered UAS with multiple hours of flight time, but our same system could be deployed from manned aircraft or other systems.”

The drone can even sense if the ground is too hard for digging and choose another spot, allowing for quite a bit of flexibility. Given that these things can land silently in far-off locations, you can imagine some interesting military uses for this technology. I’m sure it will be fine for us humans, though. I mean, what could go wrong with a robot that can hide things underground in distant, unpopulated places and escape undetected?
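
For a sense of the overall mission flow, here is a conceptual sketch of the sequence as a simple state machine, following the article’s description: ejection, parachute, fly to the site, dig (or pick another spot if the ground is too hard), place the sensor, fly home light. The altitude thresholds and status flags are invented for illustration; this is not NIMBUS’s actual control logic.

```python
"""Conceptual state machine for the sensor-placement mission described
above.  Phases follow the article; trigger conditions are invented."""
from enum import Enum, auto

class Phase(Enum):
    IN_CANISTER = auto()      # ejected from the aircraft, free-falling
    UNDER_PARACHUTE = auto()  # canister slowed by its parachute
    FLYING_TO_SITE = auto()   # drone has popped out and is flying to the dig site
    DIGGING = auto()
    PLACING_SENSOR = auto()
    RETURNING = auto()        # heavy digging gear left behind, flying home light

def next_phase(phase, altitude_m, at_site, ground_ok, hole_done, sensor_placed):
    if phase is Phase.IN_CANISTER and altitude_m < 300:
        return Phase.UNDER_PARACHUTE
    if phase is Phase.UNDER_PARACHUTE and altitude_m < 10:
        return Phase.FLYING_TO_SITE
    if phase is Phase.FLYING_TO_SITE and at_site:
        # if the ground is too hard, keep looking for a diggable spot
        return Phase.DIGGING if ground_ok else Phase.FLYING_TO_SITE
    if phase is Phase.DIGGING and hole_done:
        return Phase.PLACING_SENSOR
    if phase is Phase.PLACING_SENSOR and sensor_placed:
        return Phase.RETURNING
    return phase

# A drone at the site over hard ground stays in FLYING_TO_SITE and re-sites:
print(next_phase(Phase.FLYING_TO_SITE, 2, True, False, False, False))
```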

Misty’s adorable robotics platform ships in April for $2,399

The road to consumer robots is littered with the remains of failed startups. Jibo and Kuri mark two recent examples of just how hard it is bringing such a device to market. In fact, with the exception of the single-minded Roomba line, you’d be hard-pressed to name a product that has truly hit mainstream acceptance.

It’s with that in mind that Misty has been given its substantial runway. The startup has long-term goals of bringing a truly accessible mainstream robot to market — but it’s going to take a few years and a lot of baby steps.

Things started with last year’s Misty I, a handmade version of the company’s modular robotics platform. CEO Tim Enwall tells me the company ultimately sold “dozens” of the machines, with the express plan to eventually phase out the product in favor of the more polished Misty II. The second robot is set to arrive in April, following a successful crowdfunding campaign in which the company raised just short of $1 million.

At $2,399, the new Misty isn’t cheap (thanks, in part, to the current administration’s trade tariffs). But, then, mainstream accessibility was never really the point. Misty II may be reasonably adorable, but it’s a platform first. The company is currently courting software and hardware developers and the maker community in an attempt to build a robust catalog of skills. Think of it as something akin to the app store approach to creating robots.

The plan here is to have a full selection of skills in place before the company targets consumers, while having third-party developers do much of the heavy software lifting. Developers, meanwhile, get a reasonably accessible hardware platform on which to test their programs. By the time the company eventually comes to market, the theory goes, Misty will have a robust feature set that’s been lacking in just about every consumer robot that has preceded it.

That means that Misty II is less personality-driven than, say, Cozmo. The on-board sensors and data collection are far more important to the product’s appeal than Pixar-animated eyes.

Of course, the product’s success will hinge entirely on that adoption, and it’s hard to say how large the potential market is, especially at that price point. Misty II is reasonably sophisticated and could have appeal for educators, among others, but it’s not really the same class of product as, say, those developed by the now-defunct Willow Garage.
