Volvo teams up with Nvidia to develop self-driving commercial and industrial trucks

Volvo and Nvidia announced a new partnership today aimed at developing the next-generation decision-making engine for Volvo Group’s fully autonomous commercial trucks and industrial service vehicles. The partnership will use Nvidia’s Drive artificial intelligence platform, which covers sensor data processing, perception, localization, mapping, and path prediction and planning.
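
To make the scope of such a platform concrete, here is a minimal, purely illustrative sketch of the stages it is described as covering – sensing, perception, localization and mapping, and planning. Every class and function name below is invented for this example and is not part of Nvidia’s Drive SDK.

```python
# Illustrative only: a toy decision cycle mirroring the stages an autonomous
# driving stack is described as covering. Nothing here comes from Nvidia's SDK.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Detection:
    label: str                      # e.g. "pedestrian", "truck"
    position: Tuple[float, float]   # (x, y) in the vehicle frame, meters


def perceive(sensor_frame: dict) -> List[Detection]:
    """Turn raw sensor data into a list of detected objects (stubbed)."""
    return [Detection("pedestrian", (12.0, -1.5))]


def localize(sensor_frame: dict) -> Tuple[float, float, float]:
    """Estimate the vehicle's pose (x, y, heading) against a map (stubbed)."""
    return (105.2, 48.7, 0.03)


def plan_path(pose, detections) -> List[Tuple[float, float]]:
    """Produce a short list of waypoints that respects the detections (stubbed)."""
    x, y, heading = pose
    return [(x + 5.0, y), (x + 10.0, y)]


def drive_tick(sensor_frame: dict) -> List[Tuple[float, float]]:
    """One decision cycle: perception -> localization -> planning."""
    detections = perceive(sensor_frame)
    pose = localize(sensor_frame)
    return plan_path(pose, detections)


if __name__ == "__main__":
    print(drive_tick({"camera": None, "lidar": None}))
```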

Volvo already has some freight vehicles with autonomous technology on board in early service, but these are deployed in tightly controlled environments and operate under supervision, as at the Swedish port of Gothenburg. The partnership between Nvidia and Volvo Group is intended not only to help test and deploy a range of autonomous vehicles with AI decision-making capabilities on board, but also to eventually ensure these commercial vehicles can operate on their own on public roads and highways.

Freight transport is only one target for the new joint effort – Nvidia and Volvo will also seek to build autonomous systems and vehicles that can handle garbage and recycling pickup, and operate on construction sites, at mines and in the forestry industry. Nvidia notes on its blog that its solution will help address soaring demand for global shipping, driven by increased demand for consumer package delivery. It’ll also cover smaller-scale use cases such as on-site port freight management.

The agreement between the two companies will span multiple years, and will involve teams from both companies sharing space in Volvo’s home city of Gothenburg and in Nvidia’s hometown of Santa Clara, California.

Nvidia has done plenty with autonomous trucking in the past, including an investment in Chinese self-driving trucking startup TuSimple, powering the intelligence of the fully driverless Einride transport vehicle and working with Uber on its ATG-driven truck business.

CapitalG co-founder introduces $175M early-stage venture fund

Valo Ventures, a new firm focused on social, economic and environmental megatrends, has closed on $175 million for its debut venture capital fund.

The effort is led by Scott Tierney, a co-founder of Alphabet’s growth investing unit CapitalG, as well as Mona ElNaggar, a former managing director of TIFF Investment Management, and Julia Brady, who previously worked as a director at The Via Agency, a communications firm.

“Google is like being a kid in a candy store,” Tierney tells TechCrunch. “It’s a great place to be. For me, I thought, ‘alright, I’ve been here for seven years, I have this opportunity to create my own fund and be more entrepreneurial and take all the learnings I was fortunate to have inside of Google and apply them.’ ”

Tierney joined Google in 2011 as a director of corporate development after five years as a managing director at Steelpoint Capital Partners. In 2013, he co-founded CapitalG, where he served as a partner for the next two years. He completed his Google stint as a director of corporate development and strategic partnerships at Nest Labs, a title he held until mid-2018.

The Valo Ventures partners plan to participate in Series A, B and C deals for startups located in North America and Europe. Specifically, Valo is looking for businesses solving problems within climate change, urbanization, autonomy and mobility. 

The goal is to bring an ESG (environmental, social and corporate governance) perspective to venture capital, where investors infrequently take a mission-driven approach to deal-making. To date, Valo Ventures has deployed capital to Landit, a career pathing platform for women, and a stealth startup developing an AI platform for electricity demand and supply forecasting.

Is your product’s AI annoying people?

Artificial intelligence (AI) is allowing us all to consider surprising new ways to simplify the lives of our customers. As a product developer, your central focus is always on the customer. But new problems can arise when the specific solution under development helps one customer while alienating others.

We tend to think of AI as an incredible dream assistant to our lives and business operations, when that’s not always the case. Designers of new AI services should consider in what ways, and for whom, these services might be annoying, burdensome or problematic, and whether those affected are the direct customer or others who are intertwined with the customer. When we apply AI services that make tasks easier for our customers but end up making things more difficult for others, that outcome can ultimately cause real harm to our brand perception.

Let’s consider one personal example taken from my own use of Amy.ai, a service from x.ai that provides AI assistants named Amy and Andrew Ingram, which help schedule meetings for up to four people. This service solves the very relatable problem of scheduling meetings over email, at least for the person who is trying to do the scheduling.

After all, who doesn’t want a personal assistant to whom you can simply say, “Amy, please find the time next week to meet with Tom, Mary, Anushya and Shiveesh.” In this way, you don’t have to arrange a meeting room, send the email, and go back and forth managing everyone’s replies. My own experience showed that while it was easier for me to use Amy to find a good time to meet with my four colleagues, it soon became a headache for those other four people. They resented me for it after being bombarded by countless emails trying to find some mutually agreeable time and place for everyone involved.
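To see why the scheduling side of this is genuinely hard, here is a minimal sketch of the core problem an assistant like Amy has to solve: intersect every attendee’s availability and pick the earliest common slot. The data and function names are invented for illustration; x.ai’s real system negotiates over email and is far more involved.

```python
# A toy scheduler: find the earliest slot that every attendee has free.
from typing import Dict, List, Optional

Slot = str  # e.g. "Tue 10:00" -- a real system would use datetimes and durations


def first_common_slot(availability: Dict[str, List[Slot]]) -> Optional[Slot]:
    """Return the earliest slot every attendee has free, or None."""
    people = list(availability)
    common = set(availability[people[0]])
    for person in people[1:]:
        common &= set(availability[person])
    # preserve the ordering of the first person's calendar
    for slot in availability[people[0]]:
        if slot in common:
            return slot
    return None


calendars = {
    "Tom":      ["Mon 09:00", "Tue 10:00", "Wed 14:00"],
    "Mary":     ["Tue 10:00", "Wed 14:00"],
    "Anushya":  ["Tue 10:00", "Thu 11:00"],
    "Shiveesh": ["Mon 09:00", "Tue 10:00"],
}
print(first_common_slot(calendars))  # -> "Tue 10:00"
```

When no common slot exists, the only fallback is more email back-and-forth – which is exactly the burden my four colleagues ended up carrying.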

Automotive designers are another group that’s incorporating all kinds of new AI systems to enhance the driving experience. For instance, Tesla recently updated its autopilot software to allow a car to change lanes automatically when it sees fit, presumably when the system interprets that the next lane’s traffic is going faster.

In concept, this idea seems advantageous to the driver, who can make a safe entrance into faster traffic while being relieved of the cognitive burden of changing lanes manually. Furthermore, letting the Tesla system change lanes takes away the urge to play Speed Racer, or the edge of competitiveness one may feel on the highway.

However, drivers in other lanes who are forced to react to the Tesla autopilot may be annoyed if the Tesla jerks, slows down or behaves outside the normal realm of what people expect on the freeway. Moreover, if another driver is moving very fast and the autopilot fails to account for that speed when it decides to make the lane change, that driver is likely to get annoyed. We can all relate to driving 75 mph in the fast lane, only to have someone suddenly pull in front of us at 70 as if they were clueless that the lane was moving at 75.

For two-lane traffic highways that are not busy, the Tesla software might work reasonably well. However, in my experience of driving around the congested freeways of the Bay Area, the system performed horribly whenever I changed crowded lanes, and I knew that it was angering other drivers most of the time. Even without knowing those irate drivers personally, I care enough about driving etiquette to politely change lanes without getting the finger from them for doing so.

Another example from the Internet world involves Google Duplex, a clever feature for Android phone users that allows AI to make restaurant reservations. From the consumer point of view, having an automated system to make a dinner reservation on one’s behalf sounds excellent. It is advantageous to the person making the reservation because, theoretically, it will save the burden of calling when the restaurant is open and the hassle of dealing with busy signals and callbacks.

However, this tool is also potentially problematic for the restaurant worker who answers the phone. Even though the system may introduce itself as artificial, the burden shifts to the restaurant employee to adapt and master a new and more limited interaction to achieve the same goal – making a simple reservation.

On the one hand, Duplex is bringing customers to the restaurant, but on the other hand, the system is narrowing the scope of interaction between the restaurant and its customer. The restaurant may have other tables on different days, or it may be able to squeeze you in if you leave early, but the system might not handle exceptions like this. Even the idea of an AI bot bothering the host who answers the phone doesn’t seem quite right.

As you think about making the lives of your customers easier, consider how the assistance you are dreaming about might be more of a nightmare for everyone else associated with your primary customer. If there is any question about the negative experience of anyone connected to your AI product, explore that experience further to determine whether there is a better way to delight your customer without angering their neighbors.

From a user experience perspective, developing a customer journey map can be a helpful way to explore the actions, thoughts, and emotional experiences of your primary customer or “buyer persona.” Identify the touchpoints in which your system interacts with innocent bystanders who are not your direct customers. For those people unaware of your product, explore their interaction with your buyer persona, specifically their emotional experience.

An aspirational goal should be to delight this adjacent group of people enough that they move toward being prospects and, eventually, become your customers as well. You can also use participant ethnography to analyze the innocent bystander in relation to your product – a research method that combines observation of people as they interact with the product and the processes around it.

A guiding design inspiration for this research could be, “How can our AI system behave in such a way that everyone who might come into contact with our product is enchanted and wants to know more?”

That’s just human intelligence, and it’s not artificial.

Intel is doing the hard work necessary to make sure robots can operate your microwave

Training computers and robots to understand and recognize objects (an oven, for instance, as distinct from a dishwasher) is pretty crucial to getting them to a point where they can manage the relatively simple tasks that humans do every day. But even once you have an artificial intelligence trained to the point where it can tell your fridge from your furnace, you also need to make sure it can operate those things if you want it to be truly functional.

That’s where new work from Intel AI researchers, working in collaboration with UCSD and Stanford, comes in – in a paper presented at the Conference on Computer Vision and Pattern Recognition, the assembled research team details how it created ‘PartNet,’ a large dataset of 3D objects with highly detailed, hierarchically organized and fully annotated part information for each object.

The data set is unique, and already in high demand among robotics companies, because it organizes objects into their segmented parts in a way that has terrific applications for building learning models for artificial intelligence applications designed to recognize and manipulate these objects in the real world. So, for instance, if you’re hoping to have a robot arm turn on a microwave to reheat some leftovers, the robot needs to know about ‘buttons’ and their relation to the appliance as a whole.
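
To give a feel for what ‘hierarchically organized’ part annotations mean in practice, here is a toy example. The field names are invented and PartNet’s actual schema differs, but the idea is the same: an object decomposes into named, nested parts that a learning system can query, such as finding every ‘button’ on a microwave.

```python
# A toy hierarchical part annotation: an object as a tree of named parts.
microwave = {
    "label": "microwave",
    "children": [
        {"label": "body", "children": [
            {"label": "door", "children": [
                {"label": "handle", "children": []},
                {"label": "window", "children": []},
            ]},
            {"label": "control_panel", "children": [
                {"label": "button", "children": []},
                {"label": "display", "children": []},
            ]},
        ]},
    ],
}


def find_parts(node, target):
    """Recursively collect every part in the hierarchy with the given label."""
    found = [node] if node["label"] == target else []
    for child in node["children"]:
        found += find_parts(child, target)
    return found


print([p["label"] for p in find_parts(microwave, "button")])  # -> ['button']
```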

Robots trained using PartNet and evolutions of this data set won’t be limited to operating computer-generated microwaves that look like someone found them on a curb with a ‘free’ sign taped to the front. The data set includes over 570,000 parts across more than 26,000 individual objects, and parts that are common to objects across categories are all marked as corresponding to one another – so if an AI is trained to recognize a chair back on one variety of chair, it should be able to recognize it on another.

That’s handy if you want to redecorate your dining room, but still want your home helper bot to be able to pull out your new chairs for guests, just like it did with the old ones.

Admittedly, my examples are all drawn from a far-flung, as-yet hypothetical future. There are plenty of near-term applications of detailed object recognition that are more useful, and part identification can likely help reinforce decision-making about general object recognition, too. But the implications for in-home robotics are definitely more interesting to ponder, and it’s an area of focus for a lot of the commercialization efforts focused around advanced robotics today.

NASA taps CMU to develop robots to help turn pits on the Moon into potential habitats

Lunar rovers are cool – but imagine how much cooler they’d be if they could also rappel. Researchers at Carnegie Mellon University will try to make rappelling robots a reality, after having been selected by NASA as the recipient of a new $2 million research grant aimed at coming up with new technology to help robots explore ‘pits’ on the Moon.

Yes, pits, as distinct from craters, which are essentially surface features caused by meteorite impacts. These pits are more akin to sinkholes or caves on earth, with surface access but also with large underground hollow caverns and spaces that might provide easier access to minerals and water ice – and that might even serve as ready-made shelter for future Lunar explorers.

CMU Robotics Institute Professor Red Whittaker put forward a potential mission design that would aim to use intelligent, agile and fast robots to study these pits up close. The pits have been spotted by lunar orbital observers, but those images don’t provide the kind of detail needed to actually discover whether the sinkholes will be useful to future Moon missions, or how.

Whittaker’s draft plan, which is codenamed ‘Skylight,’ would use robots that have a degree of autonomy to self-select where to look in their surface investigations, and they’d also need to act quickly: Once lunar night sets in, they’d be offline permanently, so they’d get about one week of active use time per the mission parameters.

NASA’s ambitious mission to send astronauts back to the lunar surface by 2024, and to establish a base on the Moon by 2028, will benefit from the kind of scouting provided by missions like ‘Skylight,’ but timing will be tight – current projections estimate 2023 as the target for when such a mission might happen.

Habana Labs launches its Gaudi AI training processor

Habana Labs, a Tel Aviv-based AI processor startup, today announced its Gaudi AI training processor, which promises to easily beat GPU-based systems by a factor of four. While the individual Gaudi chips beat GPUs in raw performance, it’s the company’s networking technology that gives it the extra boost to reach its full potential.

Gaudi will be available as a standard PCIe card that supports eight ports of 100Gb Ethernet, as well as a mezzanine card that is compliant with the relatively new Open Compute Project accelerator module specs. This card supports either ten 100Gb Ethernet ports or 20 ports of 50Gb Ethernet. The company is also launching a system with eight of these mezzanine cards.

Last year, Habana Labs launched its Goya inferencing solution. With Gaudi, it now offers a complete solution for businesses that want to use its hardware instead of GPUs from the likes of Nvidia. Thanks to its specialized hardware, Gaudi easily beats an Nvidia T4 accelerator on most standard benchmarks — all while using less power.

“The CPU and GPU architecture started from solving a very different problem than deep learning,” Habana CBO Eitan Medina told me.  “The GPU, almost by accident, happened to be just better because it has a higher degree of parallelism. However, if you start from a clean sheet of paper and analyze what a neural network looks like, you can, if you put really smart people in the same room […] come up with a better architecture.” That’s what Habana did for its Goya processor and it is now taking what it learned from this to Gaudi.

For developers, the fact that Habana Labs supports all of the standard AI/ML frameworks, as well as the ONNX format, should make the switch from one processor to another pretty painless.
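
As a rough illustration of that portability argument, the sketch below shows the handoff point: a model built in a standard framework (PyTorch here) is exported to the framework-neutral ONNX format, which any ONNX-aware backend can then consume. The compile-and-deploy step for Habana’s hardware is not shown and would rely on the vendor’s own toolchain.

```python
# Export a small PyTorch model to ONNX so any ONNX-aware runtime can load it.
import torch
import torch.nn as nn
import onnx

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
dummy_input = torch.randn(1, 784)

# Serialize the graph in the framework-neutral ONNX format.
torch.onnx.export(model, dummy_input, "classifier.onnx",
                  input_names=["pixels"], output_names=["logits"])

# Any ONNX-aware backend or compiler can now consume the same file.
graph = onnx.load("classifier.onnx")
onnx.checker.check_model(graph)
print("exported ops:", [node.op_type for node in graph.graph.node])
```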

“Training AI models require exponentially higher compute every year, so it’s essential to address the urgent needs of the data center and cloud for radically improved productivity and scalability. With Gaudi’s innovative architecture, Habana delivers the industry’s highest performance while integrating standards-based Ethernet connectivity, enabling unlimited scale,” said David Dahan, CEO of Habana Labs. “Gaudi will disrupt the status quo of the AI Training processor landscape.”

As the company told me, the secret here isn’t just the processor itself but also how it connects to the rest of the system and other processors (using standard RDMA RoCE, if that’s something you really care about).

Habana Labs argues that scaling a GPU-based training system beyond 16 GPUs quickly hits a number of bottlenecks, and for a number of larger models that kind of scale is becoming a necessity. With Gaudi, scaling becomes simply a question of adding standard Ethernet networking switches, so you could easily scale to a system with 128 Gaudis.

“With its new products, Habana has quickly extended from inference into training, covering the full range of neural-network functions,” said Linley Gwennap, principal analyst of The Linley Group. “Gaudi offers strong performance and industry-leading power efficiency among AI training accelerators. As the first AI processor to integrate 100G Ethernet links with RoCE support, it enables large clusters of accelerators built using industry-standard components.”

MIT develops a system to give robots more human senses

Researchers at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) have developed a new system that could equip robots with something we take for granted: the ability to link multiple senses together.

The new system created by CSAIL involves a predictive AI that’s able to learn how to see using its ‘sense’ of touch, and vice versa. That might sound confusing, but it’s really mimicking something people do every day: looking at a surface, object or material and anticipating what that thing will feel like once touched, i.e. whether it’ll be soft, rough, squishy, etc.

The system can also take tactile, touch-based input and translate it into a prediction of what the object looks like – kind of like those kids’ discovery museums where you put your hands into closed boxes and try to identify the objects inside.

These examples probably don’t articulate why this is actually useful to build, but an example provided by CSAIL should make it more apparent. The research team used their system with a robot arm to help it anticipate where an object would be without sight of the object, and then recognize it based on touch. You can imagine this being useful for a robot appendage reaching for a switch, a lever or a part it’s looking to pick up, verifying that it has the right thing – and not, for example, the human operator it’s working with.

This type of AI could also be used to help robots operate more efficiently and effectively in low-light environments without requiring advanced sensors, for instance, and as components of more general systems when used in combination with other sensory simulation technologies.
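
For readers curious what ‘learning to see by touch’ might look like in code, here is a highly simplified sketch of the cross-modal idea – emphatically not CSAIL’s actual model. One small network encodes an image patch and another decodes that representation into a predicted tactile reading; paired (image, touch) data, faked below, would supervise the mapping.

```python
# A simplified cross-modal sketch: predict a tactile signature from an image patch.
import torch
import torch.nn as nn


class SightToTouch(nn.Module):
    def __init__(self, touch_dim: int = 32):
        super().__init__()
        # encode a 3x64x64 image patch into a compact feature vector
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
        )
        # decode the feature vector into a predicted tactile reading
        self.decoder = nn.Linear(128, touch_dim)

    def forward(self, image):
        return self.decoder(self.encoder(image))


model = SightToTouch()
images = torch.randn(8, 3, 64, 64)      # a batch of image patches (fake data)
touch_targets = torch.randn(8, 32)      # paired tactile readings (fake data)
loss = nn.functional.mse_loss(model(images), touch_targets)
loss.backward()
print("training loss:", float(loss))
```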

Clockwise nabs $11M Series A to make your calendar smarter

Almost every organization, regardless of size, is inundated with meetings, so much so that it’s often hard to find dedicated time to do actual work. Clockwise wants to change that by bringing machine learning to the calendar to help employees free up time. Today, it announced an $11 million Series A investment and made the product, which had been in beta, generally available.

The round was co-led by Greylock and Accel. Other investors included Slack Fund, Michael Ovitz, Ellen Levy, George Hu, Soraya Darabi, SV Angel and Jay Simons. The company has raised a total of $13 million.

Matt Martin, CEO and co-founder at Clockwise, says the company’s mission is to help employees make time for what matters, and it is doing that by applying machine learning to the calendar to free up blocks of time for focused work. Calendars have tended to be pretty static, and Clockwise provides a way to bring a level of intelligence to them, automatically shifting meetings to a better time when it makes sense.

You download Clockwise and then set parameters for which meetings can be moved, which are set in stone, and other preferences. As Martin wrote in a blog post announcing the new tool, this gives employees “uninterrupted blocks of time to focus, think and innovate.” For now, it’s available for G Suite users.
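
For a sense of what “automatically shifting meetings” can mean, here is a toy version of the idea – not Clockwise’s actual algorithm. Movable meetings are packed back-to-back around the fixed ones so that the rest of the day becomes one uninterrupted focus block; hours are simplified to integers.

```python
# A toy rescheduler: pack movable meetings early so the rest of the day is free.
from dataclasses import dataclass
from typing import List


@dataclass
class Meeting:
    title: str
    start: int      # hour of day, e.g. 9 means 9:00
    length: int     # in hours
    movable: bool


def compact_day(meetings: List[Meeting], day_start: int = 9) -> List[Meeting]:
    fixed = [m for m in meetings if not m.movable]
    movable = sorted((m for m in meetings if m.movable), key=lambda m: m.start)
    busy = {h for m in fixed for h in range(m.start, m.start + m.length)}
    cursor = day_start
    for m in movable:
        # slide each movable meeting into the earliest gap that doesn't
        # collide with a fixed meeting
        while any(h in busy for h in range(cursor, cursor + m.length)):
            cursor += 1
        m.start = cursor
        busy.update(range(cursor, cursor + m.length))
        cursor += m.length
    return sorted(meetings, key=lambda m: m.start)


day = [
    Meeting("1:1 with manager", 11, 1, movable=True),
    Meeting("Team standup", 9, 1, movable=False),
    Meeting("Design review", 14, 1, movable=True),
]
for m in compact_day(day):
    print(f"{m.start}:00  {m.title}")
# The movable meetings cluster around the fixed standup, leaving the afternoon free.
```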

You may think that this is a one-trick pony that will be hard to scale, but Martin says that in the past few months Clockwise has recovered thousands of hours, and as it gains more data, the tool will get even more intelligent about shifting meetings.

Certainly his investors see the potential. John Lilly, who is leading the investment at Greylock, believes Clockwise is filling a huge unmet need inside organizations. “Clockwise is focused on helping individuals and teams retake ownership of their time. This is not an easy feat — building the Clockwise product requires a sophisticated understanding of machine learning, user interaction, and systems design breakthrough,” Lilly said.

Clockwise’s founders were part of the team at RelateIQ, a company Salesforce bought for $390 million in 2014. Since leaving RelateIQ, they decided to put that experience to work on making the calendar more efficient.