Connected audio was a bad choice

The past week, I’ve spent ample time looking to revamp my home audio setup. My only real requirement is that my next setup be as dumb as possible.

In the past five years, my setup has gone from a fairly middling wired 2.1 speaker setup to a confusing menagerie of connected smart speakers. I’ve likely gone through at least five Google Assistant-laden speakers including the Google Home Max, a couple connected Sonos speakers, three HomePods, a Facebook Portal+, non-smart speakers connected via Chromecast Audio and god knows how many Alexa-integrated speakers. All in all, I can firmly say I have made some very bad audio decisions in my recent life.

I’ve had a lot of frustrations with my current setup, but they’re really issues with the entire smart speaker market:

  • Good audio hardware should be timeless, and devices that need frequent firmware updates, have proprietary support for a certain operating system or can lose integration support quickly fly in the face of that.
  • Home entertainment integrations with these speakers are just awful, even among products built by the same company. Repeatedly connecting my stereo HomePods to my Apple TV has been maddening.
  • Smart assistants are much less ambitious than they were years ago and the ceiling of innovation already seems to have come down significantly. Third party integrations have sunk far below expectations and it’s pretty uncertain that these voice interfaces have as bright a future as these tech companies once hoped.
  • These assistants were once going to be the operating systems of the home, but the smart home experiment largely feels like a failure and it’s growing clearer that the dream of a Jarvis-like system that plays nicely with all of your internet-connected devices was totally naive.

All in all, it’s time for me to move on and invest some cash in a setup that will sound good for decades.

Now, many of you will say that my true error was a lack of commitment to one ecosystem. That’s undoubtedly spot-on, and yet I don’t think any of the players had precisely what I wanted, hence the wildly piecemeal approach. Dumping more funds into a robust Sonos setup probably would have been the wisest commitment, but I have commitment issues, and part of it was a desire to see what was out there.

In quarantine, I’ve gotten ample time to spend with my home audio system, and the destructive weave of incompatible hardware is all too much. I don’t want my speakers to have their own operating systems, or for one speaker to play nicely with my music streaming platform of choice while another doesn’t. I want something that can last.

After doing half-commits to several ecosystems, I feel I’ve seen and heard it all, and now I’m shopping for some good old-fashioned dumb wired surround sound speakers to integrate with a slightly smarter AV receiver. God willing, I will have the strength not to buy whatever cool audio gadgets come out next year. If you have some good tips on a nice setup, please help me out.

Google One now offers free phone backups up to 15GB on Android and iOS

Google One, Google’s subscription program for buying additional storage and live support, is getting an update today that will bring free phone backups for Android and iOS devices to anybody who installs the app — even if they don’t have a paid membership. The catch: while the feature is free, the backups count against your free Google storage allowance of 15GB. If you need more, you’ll need — you guessed it — a Google One membership to buy more storage, or you’ll have to delete data you no longer need. Paid memberships start at $1.99/month for 100GB.


Last year, paid members already got access to this feature on Android, which stores your texts, contacts, apps, photos and videos in Google’s cloud. The “free” backups are now available to Android users. iOS users will get access to it once the Google One app rolls out on iOS in the near future.


With this update, Google is also introducing a new storage manager tool in Google One, which is available in the app and on the web, and which allows you to delete files and backups as needed. The tool works across Google properties and lets you find emails with very large attachments or large files in your Google Drive storage, for example.

With this free backup feature, Google is clearly trying to get more people onto Google One. The free 15GB storage limit is pretty easy to hit, after all (and that’s for your overall storage on Google, including Gmail and other services), and paying $1.99 for 100GB isn’t exactly a major expense, especially if you are already part of the Google ecosystem and use apps like Google Photos.

Vicariously mimics another person’s Twitter feed using lists, but it violates Twitter rules

That Vicariously app you might have seen pop up in your Twitter feed via a little viral growth hacking has run aground on Twitter’s automation rules. We reached out about it after it started spamming my feed with ‘so and so has added you to a list’ notifications, and Twitter says that the app is not in compliance.

To be fair, they did also say they ‘love’ it — but that it will have to find a different way to do what it does.

“We love that Vicariously uses Lists to help people find new accounts to follow and get new perspectives. However, the way the app is currently doing this is in violation of Twitter’s automation rules,” Twitter said in a statement. “We’ve reached out to them to find a way to bring the app into compliance with our rules.”

The app was made by Jake Harding, an entrepreneur who built it as a side project.

The app, which you can find here, enumerates the accounts that a target account follows and builds a list out of them. This enables you to create lists that are snapshots of the exact (minus algorithmic tweaks) feed that any given user sees when they open their app. Intriguing, right?

Well, it turns out Twitter has done this itself twice before: once in 2011, and originally waaaay back in 2009. The product had a built-in feature that let you click through and view someone’s follower graph as a feed with a tap.

I was there in 2009 when it was a thing, and I can tell you that it was just flat-out cool to see someone else’s graph going by. In those early growing days it was very interesting to see who was following who or what. It sort of taught you how to ‘do’ Twitter when everyone was learning it together. I can see why Harding wanted to duplicate this in order to re-create that feeling of ‘snapshotting’ someone else’s info apparatus.

Unfortunately, one big side effect of the way Vicariously duplicates this feature with an automated ‘list builder’ is that it spams every person it adds to a list: Twitter always notifies you when someone adds you to a list, and there is currently no way to alter that behavior.

So you see a lot of ‘added to their list’ tweets and notis.
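To make the mechanics concrete, here’s a rough sketch of that list-building flow. To be clear, this is not Vicariously’s actual code: it assumes Twitter’s v1.1 REST endpoints (current as of this writing) and assumes OAuth 1.0a request signing is handled elsewhere and passed in as a ready-made Authorization header.

```typescript
// Rough sketch: enumerate who a target follows, create a list, then add members in batches.
// Assumes Node 18+ (global fetch) and a pre-signed OAuth 1.0a Authorization header.
const API = "https://api.twitter.com/1.1";

async function buildFollowingList(target: string, authHeader: string): Promise<void> {
  const headers = { Authorization: authHeader };

  // 1. Enumerate the accounts the target follows ("friends" in Twitter API terms).
  //    A real implementation would page through the cursored results.
  const friends = (await fetch(
    `${API}/friends/ids.json?screen_name=${encodeURIComponent(target)}&stringify_ids=true`,
    { headers }
  ).then((r) => r.json())) as { ids: string[] };

  // 2. Create a list to hold them.
  const list = (await fetch(
    `${API}/lists/create.json?name=${encodeURIComponent(target + " feed")}&mode=public`,
    { method: "POST", headers }
  ).then((r) => r.json())) as { id_str: string };

  // 3. Add members 100 at a time (the documented per-request cap). Every account added
  //    here receives an "added you to a list" notification, which is the spam problem
  //    described above.
  for (let i = 0; i < friends.ids.length; i += 100) {
    const batch = friends.ids.slice(i, i + 100).join(",");
    await fetch(
      `${API}/lists/members/create_all.json?list_id=${list.id_str}&user_id=${batch}`,
      { method: "POST", headers }
    );
  }
}
```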

There are also other issues with the way that Vicariously works to build public lists of people’s follower graphs. There is potential for abuse here in that it could be used to target the people that a targeted account follows. One of the major reasons Twitter killed this feature twice is that the whole thing feels hyper personal. Your Twitter follower graph is something that you, theoretically, curate, though a lot of people have become more performative with follows and, ironically, instead add the people they actually want to ‘follow’ to lists.

Having your graph public is something that felt exciting and connective at one point in Twitter’s life. But the world may be too big and too nasty now for something like this to feel really comfortable if it ever spreads beyond the technorati/Twitter power user crowd. We’ll see, I guess.

Oh, and Twitter, it is about time you built in a ‘cannot be added to lists’ feature. Otherwise, as someone reminded me via DM, you run the risk of making all of the same mistakes as Facebook.

Garmin confirms ransomware attack took down services

Sport and fitness tech giant Garmin has confirmed its five-day outage was caused by a ransomware attack.

In a brief statement on Monday, the company said it was hit by a cyberattack on July 23 that “encrypted some of our systems.”

“As a result, many of our online services were interrupted including website functions, customer support, customer facing applications, and company communications,” the statement read. “We immediately began to assess the nature of the attack and started remediation.”

Garmin said it had “no indication” that customer data was accessed, lost, or stolen. The company said its services are being restored.

TechCrunch previously reported that the attack was caused by the WastedLocker ransomware, citing a source with direct knowledge of the incident. WastedLocker is known to be used by a Russian hacking group, known as Evil Corp., which was sanctioned by the U.S. Treasury last year.

Garmin is expected to report earnings on Wednesday.

More soon…

Cloudflare launches Workers Unbound, the next evolution of its serverless platform

Cloudflare today announced the private beta launch of Workers Unbound, the latest step in its efforts to offer a serverless platform that can compete with the likes of AWS Lambda.

The company first launched its Workers edge computing platform in late 2017. Today it has “hundreds of thousands of developers” who use it, and in the last quarter alone, more than 20,000 developers built applications based on the service, according to the company. Cloudflare also uses Workers to power many of its own services, but the first iteration of the platform had quite a few limitations. The idea behind Workers Unbound is to do away with most of those and turn it into a platform that can compete with the likes of AWS, Microsoft and Google.

“The original motivation for us building Cloudflare Workers was not to sell it as a product but because we were using it as our own internal platform to build applications,” Cloudflare co-founder and CEO Matthew Prince told me ahead of today’s announcement. “Today, Cloudflare Teams, which is our fastest-growing product line, is all running on top of Cloudflare Workers, and it’s allowed us to innovate as fast as we have and stay nimble and stay agile and all those things that get harder as you become a larger and larger company.”


Prince noted that Cloudflare aims to expose all of the services it builds for its internal consumption to third-party developers as well. “The fact that we’ve been able to roll out a whole Zscaler competitor in almost no time is because of the fact that we had this platform and we could build on it ourselves,” he said.

The original Workers service will continue to operate (but under the Workers Bundled moniker) and essentially become Cloudflare’s serverless platform for basic workloads that only run for a very short time. Workers Unbound — as the name implies — is meant for more complex and longer-running processes.
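For readers who haven’t used the platform, a Worker at the time was written in the Service Worker style: you register a fetch handler and return a Response. The sketch below is illustrative rather than Unbound-specific (it assumes the @cloudflare/workers-types type definitions); the same programming model applies to both tiers, with Unbound mainly lifting the limits on how long the heavier paths can run.

```typescript
// Minimal Worker in the Service Worker style the platform used at the time.
// Type annotations assume the @cloudflare/workers-types package.
addEventListener("fetch", (event: FetchEvent) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request: Request): Promise<Response> {
  const url = new URL(request.url);

  if (url.pathname === "/hello") {
    // Short, bursty work like this fits comfortably within the original (Bundled) limits.
    return new Response("Hello from the edge!", {
      headers: { "content-type": "text/plain" },
    });
  }

  // Longer-running, CPU-heavier branches are the workloads Unbound's relaxed limits target.
  return new Response("Not found", { status: 404 });
}
```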

When it first launched Workers, the company said that its killer feature was speed. Today, Prince argues that speed obviously remains an important feature — and Cloudflare Workers Unbound promises that it essentially does away with cold start latencies. But developers also adopted the platform because of its ability to scale and its price.

Indeed, Workers Unbound, Cloudflare argues, is now significantly more affordable than similar offerings. “For the same workload, Cloudflare Workers Unbound can be 75 percent less expensive than AWS Lambda, 24 percent less expensive than Microsoft Azure Functions, and 52 percent less expensive than Google Cloud Functions,” the company says in today’s press release.

As it turned out, the fact that Workers was also an edge computing platform was basically a bonus but not necessarily why developers adopted it.

Another feature Prince highlighted is regulatory compliance. “I think the thing we’re realizing as we talk to our largest enterprise customers is that for real companies — not just the individual developer hacking away at home — but for real businesses in financial services or anyone who has to deal with a regulated industry, the only thing that trumps ease of use is regulatory compliance, which is not sexy or interesting or anything else but like if your GC says you can’t use XYZ platform, then you don’t use XYZ platform and that’s the end of the story,” Prince noted.

Speed, though, is of course something developers will always care about. Prince stressed that the team was quite happy with the 5ms cold start times of the original Workers platform. “But we wanted to be better,” he said. “We wanted to be the clearly fastest serverless platform forever — and the only number that we know no one else can beat is zero — unless they invent a time machine.”

The way the team engineered this is by queuing up the process while the client and Cloudflare’s edge server are still negotiating their TLS handshake. “We’re excited to be the first cloud computing platform that [offers], for no additional costs, out of the box, zero millisecond cold start times, which then also means less variability in the performance.”

Cloudflare also argues that developers can update their code and have it go live globally within 15 seconds.

Another area the team worked on was making it easier to use the service in general. Among the key new features here is support for languages like Python and a new SDK that will allow developers to add support for their favorite languages, too.

Prince credits Cloudflare’s ability to roll out this platform, which is obviously heavy on compute resources — and to keep it affordable — to the fact that it always thought of itself as a security platform first (the team has often said that the CDN functionality was more or less incidental). Because it performed deep packet inspection, for example, the company’s servers always featured relatively high-powered CPUs. “Our network has been optimized for CPU usage from the beginning and as a result, it’s actually made it much more natural for us to extend our network that way,” he explained. “To this day, the same machines that are running our firewall products are the same machines that are running our edge computing platform.”

Looking ahead, Prince noted that while Workers and Workers Unbound feature a distributed key-value store, the team is looking at adding a more robust database infrastructure and distributed storage.

The team is also looking at how to decompose applications to put them closest to where they will be running. “You could imagine that in the future, it might be that you write an application and we say, ‘listen, the parts of the application that are sensitive to the user of the database might run in Portland, where you are — but if the database is in Ashburn, Virginia, then the parts that are sensitive to latency in the database might run there,’” he said.


Four steps for drafting an ethical data practices blueprint

In 2019, UnitedHealthcare’s health-services arm, Optum, rolled out a machine learning algorithm to 50 healthcare organizations. With the aid of the software, doctors and nurses were able to monitor patients with diabetes, heart disease and other chronic ailments, as well as help them manage their prescriptions and arrange doctor visits. Optum is now under investigation after research revealed that the algorithm (allegedly) recommends paying more attention to white patients than to sicker Black patients.

Today’s data and analytics leaders are charged with creating value with data. Given their skill set and purview, they are also in the organizationally unique position to be responsible for spearheading ethical data practices. Lacking an operationalizable, scalable and sustainable data ethics framework raises the risk of bad business practices, violations of stakeholder trust, damage to a brand’s reputation, regulatory investigation and lawsuits.

Here are four key practices that chief data officers/scientists and chief analytics officers (CDAOs) should employ when creating their own ethical data and business practice framework.

Identify an existing expert body within your organization to handle data risks

The CDAO must identify and execute on the economic opportunity for analytics, and with opportunity comes risk. Whether the use of data is internal — for instance, increasing customer retention or supply chain efficiencies — or built into customer-facing products and services, these leaders need to explicitly identify and mitigate risk of harm associated with the use of data.

A great way to begin building ethical data practices is to look to an existing group, such as a data governance board, that already tackles questions of privacy, compliance and cyber-risk, and to have it build the data ethics framework. Dovetailing an ethics framework with existing infrastructure increases the probability of successful and efficient adoption. Alternatively, if no such body exists, a new one should be created with relevant experts from within the organization. The data ethics governing body should be responsible for formalizing data ethics principles and operationalizing those principles for products or processes in development or already deployed.

Ensure that data collection and analysis are appropriately transparent and protect privacy

All analytics and AI projects require a data collection and analysis strategy. Ethical data collection must, at a minimum, include: securing informed consent when collecting data from people; ensuring legal compliance, such as adhering to GDPR; anonymizing personally identifiable information so that it cannot reasonably be reverse-engineered to reveal identities; and protecting privacy.

Some of these standards, like privacy protection, do not necessarily have a hard and fast level that must be met. CDAOs need to assess the right balance between what is ethically wise and how their choices affect business outcomes. These standards must then be translated to the responsibilities of product managers who, in turn, must ensure that the front-line data collectors act according to those standards.

CDAOs also must take a stance on algorithmic ethics and transparency. For instance, should an AI-driven search function or recommender system strive for maximum predictive accuracy, providing a best guess as to what the user really wants? Is it ethical to micro-segment, limiting the results or recommendations to what other “similar people” have clicked on in the past? And is it ethical to include results or recommendations that are not, in fact, predictive, but profit-maximizing to some third party? How much algorithmic transparency is appropriate, and how much do users care? A strong ethical blueprint requires tackling these issues systematically and deliberately, rather than pushing these decisions down to individual data scientists and tech developers that lack the training and experience to make these decisions.

Anticipate – and avoid – inequitable outcomes

Division and product managers need guidance on how to anticipate inequitable and biased outcomes. Inequalities and biases can arise simply from data collection imbalances — for instance, a facial recognition tool that has been trained on 100,000 male faces and 5,000 female faces will likely perform differently across genders. CDAOs must help ensure balanced and representative data sets.

Other biases are less obvious, but just as important. In 2019, Apple Card and Goldman Sachs were accused of gender bias when extending higher credit lines to men than women. Though Goldman Sachs maintained that creditworthiness — not gender — was the driving factor in credit decisions, the fact that women have historically had fewer opportunities to build credit likely meant that the algorithm favored men.

To mitigate inequities, CDAOs must help tech developers and product managers alike navigate what it means to be fair. While computer science literature offers myriad metrics and definitions of fairness, developers cannot reasonably choose one in the absence of collaborations with the business managers and external experts who can offer deep contextual understanding of how data will eventually be used. Once standards for fairness are chosen, they must also be effectively communicated to data collectors to ensure adherence.
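As a concrete, deliberately simplified illustration of what one of those metrics looks like in code, the sketch below checks demographic parity, comparing positive-outcome rates across two groups. Both the metric and the 0.8 threshold (a nod to the common four-fifths rule of thumb) are illustrative choices, not recommendations; picking them is exactly the kind of decision that should involve the collaborations described above.

```typescript
// Deliberately simplified fairness check: demographic parity across two groups.
// The protected attribute, group names and 0.8 threshold are illustrative only.
interface Decision {
  group: string;     // e.g. a protected attribute such as self-reported gender
  approved: boolean; // the model's (or process's) positive outcome
}

function positiveRate(decisions: Decision[], group: string): number {
  const members = decisions.filter((d) => d.group === group);
  return members.length === 0
    ? NaN
    : members.filter((d) => d.approved).length / members.length;
}

// Ratio of the lower approval rate to the higher one; 1.0 means perfect parity.
function demographicParityRatio(decisions: Decision[], a: string, b: string): number {
  const rateA = positiveRate(decisions, a);
  const rateB = positiveRate(decisions, b);
  return Math.min(rateA, rateB) / Math.max(rateA, rateB);
}

// Example: flag a credit model whose approval rates diverge too far by gender.
const sample: Decision[] = [
  { group: "women", approved: true },
  { group: "women", approved: false },
  { group: "men", approved: true },
  { group: "men", approved: true },
];
if (demographicParityRatio(sample, "women", "men") < 0.8) {
  console.log("Potential disparate impact: escalate to the data ethics body.");
}
```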

Align organizational structure with the process for identifying ethical risk

CDAOs often build analytics capacity in one of two ways: via a center of excellence, in service to an entire organization, or a more distributed model, with data scientists and analytics investments committed to specific functional areas, such as marketing, finance or operations. Regardless of organizational structure, the processes and rubrics for identifying ethical risk must be clearly communicated and appropriately incentivized.

Key steps include:

  • Clearly establishing accountability by creating linkages from the data ethics body to departments and teams. This can be done by having each department or team designate its own “ethics champion” to monitor ethics issues. Champions need to be able to elevate concerns to the data ethics body, which can advise on mitigation strategies, such as augmenting existing data, improving transparency or creating a new objective function.
  • Ensuring consistent definitions and processes across teams through education and training around data and AI ethics.
  • Broadening teams’ perspectives on how to identify and remediate ethical problems by facilitating collaborations across internal teams and sharing examples and research from other domains.
  • Creating incentives — financial or other recognitions — to build a culture that values the identification and mitigation of ethical risk.

CDAOs are charged with the strategic use and deployment of data to drive revenue with new products and to create greater internal consistencies. Too many business and data leaders today attempt to “be ethical” by simply weighing the pros and cons of decisions as they arise. This short-sighted view creates unnecessary reputational, financial and organizational risk. Just as a strategic approach to data requires a data governance program, good data governance requires an ethics program. Simply put, good data governance is ethical data governance.

Hear how three startups are approaching quantum computing differently at TC Disrupt 2020

Quantum computing is at an interesting point. It’s at the cusp of being mature enough to solve real problems. But as in the early days of personal computers, there are lots of different companies trying different approaches to solving the fundamental physics problems that underlie the technology, all while another set of startups is looking ahead and thinking about how to integrate these machines with classical computers — and how to write software for them. At Disrupt 2020 on September 14-18, we will have a panel with D-Wave CEO Alan Baratz, Quantum Machines co-founder and CEO Itamar Sivan and IonQ president and CEO Peter Chapman. The leaders of these three companies are all approaching quantum computing from different angles, yet all with the same goal of making this novel technology mainstream.

D-Wave may just be the best-known quantum computing company thanks to an early start and smart marketing in its early days. Alan Baratz took over as CEO earlier this year after a few years as chief product officer and executive VP of R&D at the company. Under Baratz, D-Wave has continued to build out its technology — and especially its D-Wave quantum cloud service. Leap 2, the latest version of its efforts, launched earlier this year. D-Wave’s technology is also very different from that of many other efforts thanks to its focus on quantum annealing. That drew a lot of skepticism in its early days but it’s now a proven technology and the company is now advancing both its hardware and software platform.

Like Baratz, IonQ’s Peter Chapman isn’t a founder either. Instead, he was the engineering director for Amazon Prime before joining IonQ in 2019. Under his leadership, the company raised a $55 million funding round in late 2019, which it extended by another $7 million last month. He is also continuing IonQ’s bet on its trapped ion technology, which makes it relatively easy to create qubits and which, the company argues, allows it to focus its efforts on controlling them. This approach also has the advantage that IonQ’s machines are able to run at room temperature, while many of its competitors have to cool their machines to as close to zero kelvin as possible, which is an engineering challenge in itself, especially as these companies aim to miniaturize their quantum processors.

Quantum Machines plays in a slightly different part of the ecosystem from D-Wave and IonQ. The company, which recently raised $17.5 million in a Series A round, is building a quantum orchestration platform that combines novel custom hardware for controlling quantum processors — because once quantum machines reach a bit more maturity, a standard PC won’t be fast enough to control them — with a matching software platform and its own QUA language for programming quantum algorithms. Quantum Machines is Itamar Sivan’s first startup, which he launched with his co-founders after getting his Ph.D. in condensed matter and material physics at the Weizmann Institute of Science.

Come to Disrupt 2020 and hear from these companies and others on September 14-18. Get a front-row seat with your Digital Pro Pass for just $245 or with a Digital Startup Alley Exhibitor Package for $445. Prices are increasing next week, so grab yours today to save up to $300.

Apple starts giving ‘hacker-friendly’ iPhones to top bug hunters

For the past decade Apple has tried to make the iPhone one of the most secure devices on the market. By locking down its software, Apple keeps its two billion iPhone owners safe. But security researchers say that makes it impossible to look under the hood to figure out what happened when things go wrong.

Once the company that claimed its computers don’t get viruses, Apple has in recent years begun to embrace security researchers and hackers in a way it hadn’t before.

Last year at the Black Hat security conference, Apple’s head of security Ivan Krstic told a crowd of security researchers that it would give its most-trusted researchers a “special” iPhone with unprecedented access to the device’s underbelly, making it easier to find and report security vulnerabilities that Apple can fix, in what it called the iOS Security Research Device program.

Starting today, the company will loan these special research iPhones to skilled and vetted researchers who meet the program’s eligibility requirements.

These research iPhones will come with specific, custom-built iOS software with features that ordinary iPhones don’t have, like SSH access and a root shell to run custom commands with the highest access to the software, and debugging tools that make it easier for security researchers to run their code and better understand what’s going on under the surface.

Apple told TechCrunch it wants the program to be more of a collaboration rather than shipping out a device and calling it a day. Hackers in the research device program will also have access to extensive documentation and a dedicated forum with Apple engineers to answer questions and get feedback.

These research devices are not new per se, but they have never before been made directly available to researchers. Some researchers are known to have sought out these internal, so-called “dev-fused” devices, which have found their way onto underground marketplaces, in order to test their exploits. Those out of luck had to rely on “jailbreaking” an ordinary iPhone first to get access to the device’s internals. But these jailbreaks are rarely available for the most recent iPhones, making it more difficult for hackers to know if the vulnerabilities they find can be exploited or have been fixed.

By giving its best hackers effectively an up-to-date and pre-jailbroken iPhone with some of its normal security restrictions removed, Apple wants to make it easier for trusted security researchers and hackers to find vulnerabilities deep inside the software that haven’t been found before.

But as much as these research phones are more open to hackers, Apple said that the devices don’t pose a risk to the security of any other iPhone if they are lost or stolen.

The new program is a huge leap for the company that only a year ago opened its once-private bug bounty program to everyone, a move seen as long overdue and far later than most other tech companies. For a time, some well-known hackers would publish their bug findings online without first alerting Apple — which hackers call a “zero-day” as they give no time for companies to patch — out of frustration with Apple’s once-restrictive bug bounty terms.

Now under its bounty program, Apple asks hackers to privately submit bugs and security issues for its engineers to fix, to help make its iPhones stronger to protect against nation-state attacks and jailbreaks. In return, hackers get paid on a sliding scale based on the severity of their vulnerability.

Apple said the research device program will run parallel to its bug bounty program. Hackers in the program can still file security bug reports with Apple and receive payouts of up to $1 million — and up to a 50% bonus on top of that for the most serious vulnerabilities found in the company’s pre-release software.

The new program shows Apple is less cautious and more embracing of the hacker community than it once was — even if it’s better late than never.

Amazon launches new Alexa developer tools

Amazon today announced a slew of new features for developers who want to write Alexa skills. In total, the team released 31 new features at its Alexa Live event. Unsurprisingly, some of these are relatively minor but a few significantly change the Alexa experience for the over 700,000 developers who have built skills for the platform so far.

“This year, given all our momentum, we really wanted to pay attention to what developers truly required to take us to the next level of what engaging [with Alexa] really means,” Nedim Fresko, the company’s VP of Alexa Devices & Developer Technologies, told me.

Maybe it’s no surprise then that one of the highlights of this release is the beta launch of Alexa Conversations, which the company first demonstrated at its re:Mars summit last year. The overall idea here is, as the name implies, to make it easier for users to have a natural conversation with their Alexa devices. That, as Fresko noted, is a very hard technical challenge.


“We’re observing that consumers really want to speak in a natural way with Alexa,” said Fresko. “But using traditional techniques, implementing naturalness is very difficult. Being prepared with random turns of phrase, remembering context, carrying over the context, dealing with oversupply or undersupply of information — it’s incredibly hard. And if you put it in a way and create a state diagram, you get bogged down and you have to stop. And then, instead of doing all of that, people just settle for ‘okay, fine, I’ll just do robotic commands instead.’ The only way to break that cycle is to have a quantum leap in the technology required for this so skilled developers can really focus on what’s important to them.”

For developers, this means they can use the service to create sample phrases, annotate them and provide access to APIs for Alexa to call into. The service then extrapolates all the paths the conversation can take and makes them work, without the developer having to specify all of the possible turns a conversation with their skill could take. In many respects, this makes it similar to Google’s Dialogflow tool, though Google Cloud’s focus is a bit more on enterprise use cases.
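For contrast with that traditional approach, here is a minimal sketch of the kind of explicit, turn-by-turn handler a custom skill needs when written against the ASK SDK for Node.js. The intent name, slot and phrasing are illustrative rather than taken from Amazon’s documentation; Alexa Conversations aims to generate much of this dialog-management scaffolding from sample phrases instead.

```typescript
// Minimal custom-skill handler against the ASK SDK for Node.js (ask-sdk-core).
// "PlanTripIntent" and its "city" slot are hypothetical names for illustration.
import * as Alexa from "ask-sdk-core";

const PlanTripIntentHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return (
      Alexa.getRequestType(handlerInput.requestEnvelope) === "IntentRequest" &&
      Alexa.getIntentName(handlerInput.requestEnvelope) === "PlanTripIntent"
    );
  },
  handle(handlerInput) {
    // In the traditional model, every slot, reprompt and mid-conversation change of
    // mind has to be anticipated and handled explicitly in code like this.
    const city = Alexa.getSlotValue(handlerInput.requestEnvelope, "city");
    const speech = city
      ? `Okay, looking at trips to ${city}.`
      : "Where would you like to go?";
    return handlerInput.responseBuilder.speak(speech).reprompt(speech).getResponse();
  },
};

// Wire the handler into a Lambda-compatible skill entry point.
export const handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(PlanTripIntentHandler)
  .lambda();
```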

“Alexa Conversations promises to be a breakthrough for developers, and will create great new experiences for customers,” said Steven Arkonovich, founder of Philosophical Creations, in today’s announcement. “We updated the Big Sky skill with Alexa Conversations, and now users can speak more naturally, and change their minds mid-conversation. Alexa’s AI keeps track of it, all with very little input from my skill code.”

For a subset of developers — around 400 for now, according to Fresko — the team will also enable a new deep neural network to improve Alexa’s natural language understanding. The company says this will lead to about a 15 percent improvement in accuracy for the skills that will get access to this.

“The idea is to allow developers to get an accuracy benefit with no action on their part. By just changing the underlying technology and making our models more sophisticated, we’re able to provide a lift in accuracy for all skills,” explained Fresko.


Another new feature that will likely get a lot of attention from developers is Alexa for Apps. The idea here is to enable mobile developers to take their users from their skill on Alexa to their mobile apps. For Twitter, this could mean saying something like “Alexa, ask Twitter to search for #BLM,” and the Twitter skill could then open the mobile app. For some searches, after all, seeing the results on a screen and in a mobile app makes a lot more sense than hearing them read aloud. This feature is now in preview and developers can apply for it here.

Another new feature is Skill Resumption, now available in preview for U.S. English, which basically allows developers to have their skill sit in the background and then provide updates as needed. That’s useful for a ridesharing app, for example, that can then provide users with updates on when their car will arrive. These kinds of proactive notifications are something that all assistant platforms are starting to experiment with, though most users have probably only seen a few of those in their daily usage so far.

The team is also launching two new features that should help developers with getting their skills discovered by potential users. This remains a major problem with all voice platforms and is probably one of the reasons why most people only use a fraction of the skills currently available to them.

The first of these is Quick Links for Alexa, now in beta for U.S. English and U.S. Spanish, which allows developers to create links from their mobile apps, websites or ads to a new user interface that lets users launch their skills on a device. “We think that’s going to really help folks become more reachable and more recognized,” said Fresko.

The second new feature in this bucket is the name-free interactions toolkit, now in preview. Alexa already had the capability to launch third-party skills whenever the system thought that a given skill could provide the best answer to a given question. Now, with this new system, developers can specify up to five suggested launch phrases (think “Alexa, when is the next train to Penn Station?”). Amazon says some of the early preview users have seen interactions with their skills increase by about 15 percent after adopting this tool, though the company is quick to point out that this will be different for every skill.

Among the other updates are new features for developers who want to build games and other more interactive experiences. These include the APL for Audio beta, which provides tools for mixing speech, sound effects and music at runtime; the Alexa Web API for Games, which helps developers use web technologies like HTML5, WebGL and Web Audio to build games for Alexa devices with screens; and APL 1.4, which adds editable text boxes, drag-and-drop UI controls and more to the company’s markup language for building visual skills.


Adding an external GPU to your Mac is probably a better upgrade option than getting a new one

Apple recently announced that they would be transitioning their Mac line from Intel processors to their own ARM-based Apple Silicon. That process is meant to begin with hardware to be announced later this year and to last two years, according to Apple’s stated expectations. While new Intel-powered Macs will be released and sold leading up to that time, the writing is nonetheless on the wall for Intel-based Apple hardware. Existing Macs with Intel chips will still be useful long after the transition is complete, however, and software porting means they might even support more of your existing favorite applications for the foreseeable future, which is why adding an external GPU (eGPU) likely makes more sense now than ever.

Apple added support for eGPUs a few years ago, made possible by the addition of Thunderbolt 3 ports on Macs. These have very high throughput, making it possible for a GPU in an external enclosure to offer almost as much graphics processing capability as one connected internally. But while Apple has directly sold a few eGPUs, and natively supports AMD graphics cards without any special driver gymnastics required, it’s still mostly a niche category. Still, for anybody looking to extend the life of their existing Mac for a few more years to wait and see how the Apple Silicon transition shakes out, updates from Apple and key software partners make an eGPU a great choice.

Here are a couple of Thunderbolt 3 eGPU enclosure options out there for those considering this upgrade path, and the relative merits of each. Keep in mind that for each of these, the pricing is for the enclosure alone – you’ll have to add your own graphics card to make it work, but the good news is that you can continually upgrade and replace these graphics cards to give your Mac even more of a boost as graphics tech improves.

Razer Core X Chroma ($399)

The Razer Core X Chroma is Razer’s top-of-the-line GPU enclosure, and it supports full-sized PCIe graphics cards up to three slots wide, up to a maximum of 500 watts. The integrated power supply provides 700W of power, which enables 100W output for charging any connected laptop, and on the back of the eGPU you’ll find four extra high-speed USB ports, as well as a Gigabit Ethernet port for networking. The Chroma version also comes with tunable LED lighting for additional user customization options. Razer provided me with a Core X Chroma, an AMD Radeon RX 5700 XT and an Nvidia GeForce RTX 2080 Ti for the purposes of testing across both Mac and PC systems.

This isn’t the smallest enclosure out there, but that’s in part because it supports 3-slot cards, which is over and above a lot of the competition. It’s also relatively short and long, making it a great option to tuck away under a desk, or potentially even held in an under-desk mount (with enough clearance for the fan exhaust to work properly). It’s quiet in operation, and only really makes any audible noise when the GPU held within is actually working for compatible software.

Most of my testing focused on using the Razer Core X Chroma with a Mac, and for that use you’ll need to stick with AMD’s GPUs, since Apple doesn’t natively support Nvidia graphics cards in macOS. The AMD Radeon RX 5700 XT is a beast, however, and delivers plenty of horsepower for improving activities like photo and video editing, as well as giving you additional display output options and just generally providing additional resources for the system to take advantage of.

Thanks to Adobe’s work on adding eGPU support to its Lightroom, Photoshop and Premiere products, you can get a lot of improvement in overall rendering and output in all those applications, particularly if you’re on a Mac that only has an integrated GPU. Likewise with Apple’s own applications, including Final Cut Pro X.

In my experience, using the eGPU greatly improved the export function of both Adobe and Apple’s pro video editing software, cutting export times by at least half. And working in Lightroom was in general much faster and more responsive, with significantly reduced rendering times for thumbnails and previews, which ordinarily take quite a while on my 2018 Mac mini.

Apple also uses eGPUs to accelerate the performance of any apps that use Metal, OpenGL and OpenCL, which is why you may notice a subtle general improvement in system performance when you plug one in. It’s hard to quantify this effect, but overall system performance felt less sluggish and more responsive, especially when running a large number of apps simultaneously.

The Razer Core X Chroma’s extra expansion slots, quiet operation and max power delivery all make it the top choice if you’re looking for an enclosure to handle everything you need, and it can provide big bumps to Macs and Windows PCs alike – and to both interchangeably, if you happen to use both platforms.

Akitio Node Titan ($329)

If you’re looking to spend a little less money, and get an enclosure that’s a bit more barebones but that still offers excellent performance, check out the Akitio Node Titan. Enclosure maker Akitio was acquired by OWC, a popular Mac peripheral maker and seller that has provided third-party RAM, docks, drives and more for decades. The Node Titan is their high-end eGPU enclosure.

The case for the Node Titan is a bit smaller than that of the Razer Core X, and it’s finished in a space gray-like color that will match Apple’s Mac notebooks more closely. The trade-off for the smaller size is that it only supports 2-slot graphics cards, but it also features an integrated pop-out handle that, combined with its lighter, more compact design, makes it much more convenient to carry from place to place.

Akitio’s Node Titan packs in a 650W power supply, which is good for high-consumption graphics cards, but another compromise for this case vs. the Core X Chroma is that the Titan supplies only 85W of output to any connected laptop. That’s under the 96W required for full-speed charging on the latest 16-inch MacBook Pro, though it’s still enough to keep your notebook powered up and provide full-speed charging to the rest of Apple’s Mac notebook lineup.

The Node Titan also provides only one port on the enclosure itself – a Thunderbolt output for connecting to your computer. Graphics cards you use with it will offer their own display connections, however, for attaching external displays.

In terms of performance, the Akitio Node Titan offers the same potential gains with the AMD Radeon RX 5700 XT for your Mac (and both AMD and Nvidia cards for PCs) when connected, since the GPU specs are what matter most when working with an enclosure. It operates a little more noisily, producing a quiet but still detectable constant hum even when the GPU is not being taxed.

The Node Titan is still an excellent choice, however, and potentially a better one for those looking for more portability and a bit more affordability at the expense of max notebook power output and a host of handy port expansions.

Bottom line

Back when more Macs had the option for user-expandable RAM, that was a great way to squeeze a little more life out of existing machines and make a slowing machine feel much faster. Now, only a few Macs in Apple’s lineup make it easy or even possible to upgrade your memory. Adding an eGPU can have a similar effect, especially if you spend a lot of time in creative editing apps, including Adobe’s suite, Apple’s Pro apps, or various other third-party apps like DaVinci Resolve.

The total price of an eGPU setup, including the card, can approach or even match the price of a new Mac, but even less expensive cards offer significant benefit, and you can always swap that out later depending on your needs. It is important to note that the future of eGPU support on Apple Silicon Macs isn’t certain, even though Apple has said they’ll support Thunderbolt. Still, an eGPU can stave off the need for an upgrade for years, making it easier to wait and watch to see what the processor transition really means for Mac users.