Andrew Ng to talk about how AI will transform business at TC Sessions: Enterprise

When it comes to applying AI to the world around us, Andrew Ng has few if any peers. We are delighted to announce that the renowned founder, investor, AI expert and Stanford professor will join us on stage at the TechCrunch Sessions: Enterprise show on Sept. 5 at the Yerba Buena Center in San Francisco. 

AI promises to transform the $500 billion enterprise world like nothing since the cloud and SaaS. Hundreds of startups are already seizing the AI moment in areas like recruiting, marketing and communications, and customer experience. The oceans of data required to power AI are becoming dramatically more valuable, which in turn is fueling the rise of new data platforms, another big topic of the show.

Last year, Ng launched the $175 million AI Fund, backed by big names like Sequoia, NEA, Greylock and SoftBank. The fund’s goal is to develop new AI businesses in a studio model and spin them out when they are ready for prime time. The first company in the fund’s cohort is Landing AI, which also launched last year and aims to “empower companies to jumpstart AI and realize practical value.” It’s a wave businesses will want to catch if Ng is anywhere near right in his conviction that AI will generate $13 trillion in GDP growth globally in the next 20 years. You heard that right.

At TC Sessions: Enterprise, TechCrunch’s editors will ask Ng to detail how he believes AI will unfold in the enterprise world and bring big productivity gains to business. 

As the former Chief Scientist at Baidu and the founding lead of Google Brain, Ng led the AI transformation of two of the world’s leading technology companies. Dr. Ng is the Co-founder of Coursera, an online learning platform, and founder of deeplearning.ai, an AI education platform. Dr. Ng is also an Adjunct Professor at Stanford University’s Computer Science Department and holds degrees from Carnegie Mellon University, MIT and the University of California, Berkeley.

Early Bird tickets to see Andrew at TC Sessions: Enterprise are on sale for just $249 when you book here, but hurry: prices go up by $100 soon! Students, grab your discounted tickets for just $75 here.

Matterport acquires AI special effects startup Arraiy

Matterport, the startup known for its real estate computer vision platform, is set to acquire Arraiy, an AI startup aiming to automate special effects processing in film.

Arraiy raised $13.9 million, according to Crunchbase, most recently a $10 million Series A in March 2018. Lux Capital and SoftBank Ventures Asia led that round. Lux Capital notably also led Matterport’s Series A back in 2013. By comparison, Matterport has raised about $114 million to date.

Arraiy used AI tech to more seamlessly overlay digital content on physically captured spaces. The company had been firmly focused on changing the way digital effects houses in Hollywood made films. While plenty of computer vision startups were aiming to use AI and AR technologies to bring live Snapchat-like AR functionality to different corners of the web, Arraiy was banking on the high-fidelity world of film where special effects production is an expensive, time-intensive process.

Arraiy’s founders previously started Industrial Perception, a robotics startup that Google acquired in 2013.

A startup tackling Hollywood special effects and a startup best known for digitizing real estate properties to give potential buyers 3D tours might not seem like the most natural pairing, but the acquisition could allow Matterport to expand its ambitions beyond its real estate customer base.

Luminar eyes production vehicles with $100M round and new Iris lidar platform

Luminar is one of the major players in the new crop of lidar companies that have sprung up all over the world, and it’s moving fast to outpace its peers. Today the company announced a new $100M funding round, bringing its total raised to over $250M — as well as a perception platform and a new, compact lidar unit aimed at inclusion in actual cars. Big day!

The new hardware, called Iris, looks to be about a third of the size of the test unit Luminar has been sticking on vehicles thus far. That one was about the size of a couple of hardbacks stacked up; Iris is more like a really thick sandwich.

Size is very important, of course, since few cars just have caverns of unused space hidden away in prime surfaces like the corners and windshield area. Other lidar makers have lowered the profiles of their hardware in various ways; Luminar seems to have compactified in a fairly straightforward fashion, getting everything into a package smaller in every dimension.

Test model, left, Iris on the right.

Photos of Iris put it in various positions: below the headlights on one car, attached to the rear-view mirror in another, and high up atop the cabin on a semi truck. It’s small enough that it won’t have to displace other components too much, although of course competitors are aiming to make theirs even more easy to integrate. That won’t matter, Luminar founder and CEO Austin Russell told me recently, if they can’t get it out of the lab.

“The development stage is a huge undertaking — to actually move it towards real-world adoption and into true
series production vehicles,” he said (among many other things). The company that gets there first will lead the industry, and naturally he plans to make Luminar that company.

Part of that is of course the production process, which has been vastly improved over the last couple years. These units can be made quickly enough that they can be supplied by the thousands rather than dozens, and the cost has dropped precipitously — by design.

Iris will cost under $1,000 per unit for production vehicles seeking serious autonomy, while a $500 version handles narrower purposes like driver assistance, or ADAS. Luminar says Iris is “slated to launch commercially on production vehicles beginning in 2022,” but that doesn’t necessarily mean they’re shipping to customers right now. The company is negotiating more than a billion dollars in contracts at present, a representative told me, and 2022 would be the earliest that vehicles with Iris could be made available.

The Iris units are about a foot below the center of the headlight units here. Note that this is not a production vehicle, just a test one.

Another part of integration is software. The signal from the sensor has to go somewhere, and while some lidar companies have indicated they plan to let the carmaker or whoever deal with it their own way, others have opted to build up the tech stack and create “perception” software on top of the lidar. Perception software can be a range of things: something as simple as drawing boxes around objects identified as people would count, as would a much richer process that flags intentions and gaze directions, characterizes motions and predicts likely next actions.
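
To make that concrete, here's a minimal sketch of the simple end of that spectrum: clustering raw lidar returns into objects and drawing axis-aligned 3D boxes around them. This is purely illustrative Python, not Luminar's stack; the DBSCAN clustering step and all parameters are assumptions, and a production system would layer classification, tracking and the richer intent prediction described above on top of something like this.

```python
# Illustrative only: a toy "perception" step that groups lidar returns
# into objects and computes axis-aligned 3D bounding boxes for them.
import numpy as np
from sklearn.cluster import DBSCAN

def bounding_boxes(points: np.ndarray, eps: float = 0.5, min_points: int = 10):
    """points: (N, 3) array of x, y, z lidar returns in meters."""
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(points)
    boxes = []
    for label in set(labels):
        if label == -1:  # -1 is DBSCAN's label for noise points
            continue
        cluster = points[labels == label]
        boxes.append((cluster.min(axis=0), cluster.max(axis=0)))  # (min_xyz, max_xyz)
    return boxes

# A fake frame: two clumps of points standing in for a pedestrian and a car.
frame = np.vstack([
    np.random.normal([2.0, 1.0, 0.9], 0.2, (50, 3)),    # "pedestrian"
    np.random.normal([10.0, -2.0, 0.7], 0.5, (200, 3)),  # "car"
])
for lo, hi in bounding_boxes(frame):
    print("object from", lo.round(2), "to", hi.round(2))
```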

Luminar has opted to build into perception, or rather has revealed that it has been working on it for some time. It now has 60 people on the task, split between Palo Alto and Orlando, and has hired a new VP of software: Christoph Schroder, formerly head of Daimler’s robo-taxi program.

What exactly will be the nature and limitations of Luminar’s perception stack? There are dangers waiting if you decide to take it too far, since at some point you begin to compete with your customers, carmakers who have their own perception and control stacks that may or may not overlap with yours. The company gave very few details as to what specifically would be covered by its platform, but no doubt that will become clearer as the product itself matures.

Last and certainly not least is the matter of the $100 million in additional funding. This brings Luminar to a total of over a quarter of a billion dollars in the last few years, matching its competitor Innoviz, which has made similar decisions regarding commercialization and development.

The list of investors has gotten quite long, so I’ll just quote Luminar here:

G2VP, Moore Strategic Ventures, LLC, Nick Woodman, The Westly Group, 1517 Fund / Peter Thiel, Canvas Ventures, along with strategic investors Corning Inc, Cornes, and Volvo Cars Tech Fund.

The board has also grown, with former Broadcom exec Scott McGregor and G2VP’s Ben Kortlang joining the table.

We may have already passed “peak lidar” as far as the sheer number of deals and startups in the space, but that doesn’t mean things are going to cool down. If anything the opposite, as established companies battle over lucrative partnerships and begin eating one another to stay competitive. It seems Luminar has no plans to become a meal.

Udelv partners with HEB on Texas autonomous grocery delivery pilot

Autonomous delivery company Udelv has signed yet another partner to launch a new pilot of its self-driving goods delivery service: Texas-based supermarket chain HEB Group. The pilot will provide service to customers in Olmos Park, just outside downtown San Antonio, where the grocery retailer is based.

California-based Udelv will provide HEB with one of its second-generation Newton autonomous delivery vehicles, which are already running trials in the Bay Area, Arizona and Houston, providing deliveries on behalf of Udelv’s other clients, including Walmart.

Udelv CEO and founder Daniel Laury explained in an interview that the company is excited to be partnering with HEB because of its reach in Texas, where it’s the largest grocery chain, with approximately 400 stores. This initial phase covers just one car and one store, and during it the vehicle will have a safety driver on board. But the plan includes the option to expand the partnership to more vehicles and, eventually, full driverless operation.

“They’re really at the forefront of technology, in the areas where they need to be,” Laury said. “It’s a very impressive company.”

For its part, HEB Group has been in discussions with a number of potential partners for autonomous delivery trials, according to Paul Tepfenhart, SVP of omnichannel and emerging technologies at HEB. But it liked Udelv specifically because of the company’s safety record, and because Udelv didn’t just come in with a set plan and a fully formed, off-the-shelf offering; it truly partnered with HEB on what the final deployment of the pilot would look like.

Both Tepfenhart and Laury emphasized the importance of customer experience in providing autonomous solutions, and Laury noted that he thinks Udelv’s unique advantage in the increasingly competitive autonomous curbside delivery business is its attention to the robotics of the actual delivery and storage components of its custom vehicle.

“The reason I think we’ve been so successful is because we focused a lot on the delivery robotics,” Laury explained. “If you think about it, there’s no autonomous delivery business that works if you don’t have the robotics aspect of it figured out also. You can have an autonomous vehicle, but if you don’t have an automated cargo space where merchants can load [their goods] and consumers can unload the vehicle by themselves, you have no business.”

Udelv also thinks it has an advantage when it comes to its business model, which aims to generate revenue now in exchange for providing real value to paying customers, rather than counting on being supported entirely by a wealthy investor or deep-pocketed corporate partners. Laury likens it to Tesla’s approach: the company has over 500,000 vehicles on the road helping it build its autonomous technology, but all of those are operated by paying customers who get the benefits of owning their cars today.

“We want to be the Tesla of autonomous delivery,” Laury said. “If you think about it, Tesla has got 500,000 vehicles on the road […] of all the cars in the world that have some level of automated driver assistance (ADAS) or autonomy, I think Tesla’s 90% of them – and they get the customers to pay a ridiculous amount of money for that. Everybody else in the business is getting funding from something else. Waymo is getting funding from search; Cruise is getting funding from GM and SoftBank and others; Nuro is getting funding from SoftBank. So pretty much everybody else is getting funding from a source that’s different from the actual business they’re supposed to be in.”

Laury says Udelv’s unique strength is its ability to provide value to partners like HEB today. Its focus on solving problems like the robotics of loading and customer pick-up puts it in a unique position: it can fund its own research through revenue-generating services offered in-market now, rather than ten years from now.

Techstars Detroit announces first class after major refocus

At the beginning of 2019, Techstars Mobility turned into Techstars Detroit. At the time of the announcement, Managing Director Ted Serbinski wrote that “the word mobility was becoming too limiting. We knew we needed to reach a broader audience of entrepreneurs who may not label themselves as mobility but are great candidates for the program.”

I always called it Techstars Detroit anyway.

With Techstars Detroit, the program is looking for startups that are transforming the intersection of the physical and digital worlds and that can leverage the strengths of Detroit to succeed. It’s a mouthful, but it makes sense. Mobility is baked into Detroit, but Detroit is more than mobility.

Today the program took the wraps off the first class of startups under the new direction.

Techstars has operated in Detroit since 2015 and has been a critical partner in helping the city rebuild. Since the program’s launch, Serbinski and the Techstars Mobility (now Detroit) mentors have helped bring talented engineers and founders to the city, even if only for a few months.

Serbinski summed up Detroit nicely for me, saying, “No longer is Detroit telling the world how to move. The world is telling Detroit how it wants to move.” He added that the incoming class represents the new Detroit, with 60% international and 40% female founders.


Airspace Link (Detroit, MI)
Providing highways in the sky for safer drone operations.

Alpha Drive (New York, NY)
Platform for the validation of autonomous vehicle AI.

Le Car (Novi, MI)
An AI-powered personal car concierge that matches you to your perfect vehicle fit.

Octane (Fremont, CA)
Octane is a mobile app that connects car enthusiasts to automotive events and to each other out on the road.

PPAP Manager (Chihuahua, Mexico)
A platform to streamline the approval of packets of documents required in the automotive industry, known as PPAP, to validate production parts.

Ruksack (Toronto, Canada)
Connecting travellers with local travel experts to help them plan a perfect trip.

Soundtrack AI (Tel Aviv, Israel)
An acoustics-based, AI-enabled predictive maintenance platform.

Teporto (Tel Aviv, Israel)
Teporto is enabling a new commute modality with its one-click smart platform for transportation companies that seamlessly adapts commuter service to commuters’ needs.

Unlimited Engineering (Barcelona, Spain)
Unlimited develops modular light electric vehicles as a fun, cheap and convenient solution for last-mile trips that are overserved by cars and public transportation.

Zown (Toronto, Canada)
Opens up your real estate property to the new mobility marketplace.

Amazon expands Transparency anti-counterfeit codes to Europe, India and Canada

Amazon is no stranger to the nefarious forces of e-commerce: fake reviews, counterfeit goods and scams have all reared their heads on its marketplace in one place or another, with some even accusing the company of turning a blind eye to them since, technically, Amazon profits from any transaction, not just the legit ones. The company has been working to fight that image, though, and today it announced the latest development in that mission: Transparency, a program that serializes products sold on its platform with a T-shaped, QR-style code used to identify counterfeits, is expanding to Europe, India and Canada. (More detail on how it actually works below.)

“Counterfeiting is an industry-wide concern – both online and offline. We find the most effective solutions to prevent counterfeit are based on partnerships that combine Amazon’s technology innovation with the sophisticated knowledge and capabilities of brands,” said Dharmesh Mehta, vice president, Amazon Customer Trust and Partner Support, in a statement. “We created Transparency to provide brands with a simple, scalable solution that empowers brands and Amazon to authenticate products within the supply chain, stopping counterfeit before it reaches a customer.”

The growth of Transparency has been quite slow so far: it has taken more than two years for Amazon to offer the service outside of the US market, where it launched first with Amazon’s own products in March 2017 and then expanded to third-party items. Even today, while Transparency is launching to sellers in more markets, the app for consumers to scan the items themselves is still only available in the US, according to Amazon’s FAQ.

In that time, take-up has been okay but not massive. Amazon says that some 4,000 brands have enrolled in the program, covering 300 million unique codes, leading to Amazon halting more than 250,000 counterfeit sales (these would have been fake versions of legit items and brands enrolled in the Transparency program).

There is some evidence that all this works. Amazon says that in 2019, for products fully onboarded into the Transparency service, there have been zero reports of counterfeits from brands or customers who purchased those products on Amazon.

But how wide-ranging that is compared to the bigger problem is not quite clear. While it’s not an apples-to-apples comparison — Amazon doesn’t disclose how many brands are sold on its platform in total, although Amazon itself accounts for some 450 brands — there are some 2.5 million sellers on its platform globally, and my guess is that 4,000 is just a small fraction of Amazon’s branded universe.

Recent developments have put increased focus on what role Amazon has been playing in keeping rampant counterfeiting and other illegal activity in check.

The NYT published a damning exposé in June that highlighted how one medical publisher found rampant counterfeiting of one of its books, a guide for doctors prescribing medications to help them determine dosages of drugs, an alarming situation considering the subject matter. Regulators like the FCC have also asked Amazon (among others, like eBay) to make a better effort to curb the sale of products in specific categories, such as fake pay-TV boxes.

Faced with other kinds of dodgy activity on the platform, like fake reviews, Amazon has been making more moves of late to get a grip and to create more channels for brands and sellers to help themselves, from product launches and expansions to legal measures against bad actors.

Transparency is part of the former category, and it sits alongside another of the company’s recent big initiatives, Project Zero, an AI-based system launched four months ago that continuously monitors products and activity to proactively identify counterfeit sellers and items on the platform.

Transparency works by way of a unique code — which looks a bit like a “T” — printed on each manufactured unit. When a customer orders the product, Amazon scans the code to verify that the product it’s shipping is legit. Customers can also scan the code after receiving the item to verify its authenticity. The code can also encode the manufacturing date, the manufacturing place and other product information, like ingredients.
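
Amazon hasn't published the actual format of these codes, but as a rough mental model, here's a minimal sketch of per-unit serialization using an HMAC-signed payload. Every field name and the signing scheme here are hypothetical illustrations, not Amazon's implementation.

```python
# A minimal sketch of per-unit serialization, assuming an HMAC-signed
# payload. The fields and signing scheme are hypothetical.
import hashlib
import hmac
import json
import secrets

BRAND_KEY = secrets.token_bytes(32)  # in reality, managed per enrolled brand

def issue_code(sku: str, mfg_date: str, mfg_place: str) -> str:
    payload = json.dumps({
        "serial": secrets.token_hex(8),  # unique per manufactured unit
        "sku": sku,
        "mfg_date": mfg_date,
        "mfg_place": mfg_place,
    }, sort_keys=True)
    sig = hmac.new(BRAND_KEY, payload.encode(), hashlib.sha256).hexdigest()[:16]
    return payload + "." + sig  # this string would be printed as the 2D code

def verify_code(code: str) -> bool:
    payload, _, sig = code.rpartition(".")
    expected = hmac.new(BRAND_KEY, payload.encode(), hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(sig, expected)

code = issue_code("JACKET-XL-BLK", "2019-07-01", "Hanoi, VN")
assert verify_code(code)                 # scan at fulfillment: ship only if True
assert not verify_code(code[:-1] + "0")  # a forged or mistyped code fails
```

The key design point such a scheme illustrates: because each unit carries its own verifiable identity, a counterfeit can be caught at the warehouse or by the end customer, regardless of which reseller handled the item.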

This system also throws some light on some of the strange workings of e-commerce, supply chains, and how marketplaces operate.

On Amazon, an item you buy that might be branded — say, a North Face jacket — may not actually be sold by North Face itself, but by a reseller. And those resellers may just as likely never even touch the item: they are working off stock that is distributed from another place altogether, or perhaps manufactured and sent in bulk to Amazon or another fulfillment provider that ships the item when the order is made. All of these handoffs within the supply chain create an environment where counterfeit goods can creep in.

Amazon’s system, by working directly with brands and not sellers, is trying to provide an over-arching level of monitoring and control into the mix, and it notes in its announcement that its Transparency codes are trackable “regardless of where customers purchased their units.”

Ironically for a service called “Transparency,” Amazon doesn’t seem to list the price sellers pay to use it. But four months ago, when Amazon launched Project Zero, we reported that the serialization service charges between $0.01 and $0.05 per unit, based on volume. It’s a price that smaller brands especially, which are even more vulnerable to copycats than well-capitalized big brands, are willing to pay:

“Amazon’s proactive approach and investment in tools like Transparency have allowed us to grow consumer confidence in our products and prevent inauthentic product from ending up in the hands of our customers,” said Matt Petersen, Chief Executive Officer at Neato Robotics, a maker of smart robotic vacuum cleaners, in a statement.

“Blocking counterfeits from the source has always been a tough task for us – it’s something all brand owners face through nearly all channels around the world,” said Bill Mei, Chief Executive Officer at Cowin, a manufacturer of noise cancelling audio devices, in his own statement. “After we joined Transparency, our counterfeit problem just disappeared for products protected by the program.”

Spoiler warning! This neural network spots dangerous reviews before you read them

It’s hard to avoid spoilers on the internet these days — even if you’re careful, a random tweet or recommended news item could lay waste to your plan to watch that season finale a day late or catch a movie after the crowds have subsided. But soon an AI agent may do the spoiler-spotting for you, flagging spoilerific reviews and content before you even have a chance to look.

SpoilerNet is the creation of a team at UC San Diego, composed perhaps of people who tried waiting a week to see Infinity War and got snapped for their troubles. Never again!

They assembled a database of more than a million reviews from Amazon-owned reading community Goodreads, where it is the convention to note spoilers in any reviews, essentially line by line. As a user of the site I’m thankful for this capability, and the researchers were too — because nowhere else is there a corpus of written reviews in which whatever constitutes a “spoiler” has been meticulously labeled by a conscientious community.

(Well, sort of conscientious. As the researchers note: “we observe that in reality only a few users utilize this feature.”)

At any rate, such labeled data is these days basically food for what are generally referred to as AI systems: neural networks of various types that “learn” the qualities that define a specific image, an object or, in this case, spoilers. The team fed the 1.3 million Goodreads reviews into the system, letting it observe and record the differences between ordinary sentences and ones with spoilers in them.

Perhaps writers of reviews tend to begin sentences with plot details in a certain way — “Later it is revealed…” — or maybe spoilery sentences tend to lack evaluative words like “great” or “complex.” Who knows? Only the network.
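
For a sense of what that training setup looks like, here's a deliberately simplified sketch in Python. It is not SpoilerNet itself (the real model is a neural network that also encodes context across sentences, as discussed below); it's just a bag-of-words classifier trained on the same kind of sentence-level spoiler labels, and the toy sentences here are invented for illustration.

```python
# Toy baseline: learn to separate spoiler from non-spoiler sentences
# using word/bigram frequencies. SpoilerNet itself is far richer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stand-ins for Goodreads' sentence-level spoiler tags.
sentences = [
    "Later it is revealed that the narrator is the killer.",
    "The prose is great and the pacing is complex.",
    "She dies in the final chapter, which broke my heart.",
    "A wonderful read for fans of slow-burn mysteries.",
]
labels = [1, 0, 1, 0]  # 1 = spoiler, 0 = non-spoiler

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(sentences, labels)

# With a real corpus of ~1.3M labeled reviews, this should flag the
# sentence below as a spoiler (1); four toy examples won't get you far.
print(model.predict(["It turns out the mentor was the villain all along."]))
```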

Once its training was complete, the agent was set loose on a separate set of sentences (from both Goodreads and mind-boggling timesink TV Tropes), which it was able to label as “spoiler” or “non-spoiler” with up to 92 percent accuracy. Earlier attempts to computationally predict whether a sentence contains spoilers haven’t fared so well; one paper by Chiang et al. last year broke new ground, but is limited by its dataset and approach, which allow it to consider only the sentence in front of it.

“We also model the dependency and coherence among sentences within the same review document, so that the high-level semantics can be incorporated,” lead author of the SpoilerNet paper, Mengting Wan, told TechCrunch in an email. This allows for a more complete understanding of a paragraph or review, though of course it is also necessarily a more complex problem.

But the more complex model is a natural result of richer data, he wrote:

Such a model design indeed benefits from the new large-scale review dataset we collected for this work, which includes complete review documents, sentence-level spoiler tags, and other meta-data. To our knowledge, the public dataset (released in 2013) before this work only involves a few thousand single-sentence comments rather than complete review documents. For research communities, such a dataset also facilitates the possibility of analyzing real-world review spoilers in detail as well as developing modern ‘data-hungry’ deep learning models in this domain.

This approach is still new, and the more complex model has its drawbacks. For instance, it occasionally mistakes a sentence as having spoilers if other spoiler-ish sentences are adjacent, and its grasp of individual sentences is not quite fine-grained enough to tell when certain words really indicate spoilers and when they don’t. You and I know that “this kills Darth Vader” is a spoiler, while “this kills the suspense” isn’t, but a computer model may have trouble telling the difference.

Wan told me that the system should be able to run in real time on a user’s computer, though of course training it would be a much bigger job. That opens up the possibility of a browser plugin or app that reads reviews ahead of you and hides anything it deems risky. Though Amazon is indirectly associated with the research (co-author Rishabh Misra works there), Wan said there was no plan as yet to commercialize or otherwise apply the tech.

No doubt it would be a useful tool for Amazon and its subsidiaries and sub-businesses to be able to automatically mark spoilers in reviews and other content. But until the new model is implemented (and really until it is a bit better) we’ll have to stick to the old-fashioned method of avoiding all contact with the world until we’ve seen the movie or show in question.

The team from UCSD will be presenting their work at the Association for Computational Linguistics conference in Italy later this month; you can read the full paper here — but beware of spoilers. Seriously.

TC Sessions: Mobility: Three live onstage demos that shouldn’t be missed

TechCrunch Sessions is heading to San Jose on July 10 — just a few days from now — to dig into the future (and present) of transportation.

The agenda at TC Sessions: Mobility is packed with startups and giants of the tech industry. TechCrunch has brought together some of the best and brightest minds working on autonomous vehicle technology, micromobility and electric vehicles, including Waymo’s Dmitri Dolgov, Aptiv’s Karl Iagnemma, Seleta Reynolds of the Los Angeles Department of Transportation, Ford Motor CTO Ken Washington, Katie DeWitt of Scoot and Argo AI chief security officer Summer Craze Fowler.

It wouldn’t be a TechCrunch Sessions without an up-close look and demonstration of the tech. Alongside the speakers, TC Sessions: Mobility will have several demos, including the unveiling of one startup currently in stealth.

The demos will begin with Holoride, the startup that spun out of Audi that aims to bring a VR experience to the backseat of every car, no matter if it’s a Ford, Mercedes or Chrysler Pacifica minivan. Later in the day, check out Damon X Labs, a company aiming to make motorcycles safer with a system that anticipates accidents and warns the rider.

Finally, the day will wrap up with a Michigan-based startup coming out of stealth. We can’t say much yet, but this startup will show off its approach to getting things to people — even in winter.

Tickets are on sale now for $349. Prices go up at the door, so book today! And students, get a super-discounted ticket for just $45 when you book here.

Lyft, Aptiv and the National Federation of the Blind partner on self-driving for low vision riders

Lyft and Aptiv are already running autonomous driving trials in Las Vegas, and now they’re expanding that limited pilot to include low vision and blind riders in a new partnership with the National Federation of the Blind. In a blog post detailing the news, Lyft notes that this is a key part of its overall strategy of providing mobility for all, alongside its other offerings, including shared rides, electric bikes and scooters.

The company has already been working with the National Federation of the Blind to ensure that its core ride-hailing product is accessible to riders who are blind or have low vision. This extension of that work means its pilot fully autonomous ride-hailing network is now available to those passengers, with more accessible features including Braille guides for riders of the self-driving vehicles. The guides provide detailed information about the route the autonomous test cars run in Las Vegas and about the Aptiv self-driving vehicles themselves, detailing the sensors and technology positioned across the car, so riders can be fully informed as they participate.

One of autonomous driving’s biggest potential benefits is providing access to car transportation for people who wouldn’t otherwise be able to use that mode, including people with low vision, people with epilepsy and older drivers who can no longer legally maintain a license and operate a vehicle, among many others.

Aside from the still-immense challenge of getting autonomous driving technology to a place where it’s ready for broad deployment and consumer use, there are challenges around how the user interface will work for both hailing and accessing the vehicles, and around ensuring that people using the service are properly informed about what’s going on. Lyft’s plan to work on all these aspects of its service with advocacy and empowerment groups that take this as their central mission seems like a smart one.

Google debuts ‘Code with Google’ coding education resource for teachers

Google is offering a new coding resource for educators via ‘Code with Google,’ which collects Google’s own free curriculum on teaching computer science along with a variety of programs to help students learn to code or build on their existing skills, with material for people at all levels of ability.

The ‘Code with Google’ resources extend beyond just learning, however, and include potential scholarships, for instance, as well as summer programs, internships and residencies.

In a blog post, Google VP of Education and University Relations Maggie Johnson noted that while recognition of the importance of computer science across all levels of education is relatively high, the actual availability of courses that include hands-on programming for students is surprisingly low, and generally only accessible to students in more affluent districts with access to more resources.

All of Google’s ‘Code with Google’ resources are free, in keeping with many of its other educational offerings, as the company continues to build on its education tech leadership position, paired with affordable Chromebooks for schools. Google also announced a $1 million grant to the Computer Science Teachers Association alongside the unveiling of the new resource.

Google is smart to continue to approach its education strategy through free resources and easy-to-use, cloud-based software that is accessible to a broad range of both educators and students at all skill and expertise levels.