What does a pandemic say about the tech we’ve built?

There’s a joke* being reshared on chat apps that takes the form of a multiple choice question — asking who’s the leading force in workplace digital transformation? The red-lined punchline is not the CEO or CTO but: C) COVID-19.

There’s likely more than a grain of truth underpinning the quip. The novel coronavirus is pushing a lot of metaphorical buttons right now. ‘Pause’ buttons for people and industries, as large swathes of the world’s population face quarantine conditions that can resemble house arrest. The majority of offline social and economic activities are suddenly off limits.

Such major pauses in our modern lifestyle may even turn into a full reset, over time. The world as it was, where mobility of people has been all but taken for granted — regardless of the environmental costs of so much commuting and indulged wanderlust — may never return to ‘business as usual’.

If global leadership rises to the occasion, then the coronavirus crisis offers an opportunity to rethink how we structure our societies and economies — to make a shift towards lower carbon alternatives. After all, how many physical meetings do you really need when digital connectivity is accessible and reliable? As millions more office workers log onto the day job from home, that number suddenly seems vanishingly small.

COVID-19 is clearly strengthening the case for broadband to be a utility — as so much more activity is pushed online. Even social media seems to have a genuine community purpose during a moment of national crisis when many people can only connect remotely, even with their nearest neighbours.

Hence the reports of people stuck at home flocking back to Facebook to sound off in the digital town square. Now that the actual high street is off limits, the vintage social network is experiencing a late second wind.

Facebook understands this sort of higher societal purpose already, of course. Which is why it’s been so proactive about building features that nudge users to ‘mark yourself safe’ during extraordinary events like natural disasters, major accidents and terrorist attacks. (Or indeed why it encouraged politicians to get into bed with its data platform in the first place — no matter the cost to democracy.)

In less fraught times, Facebook’s ‘purpose’ can be loosely summed up as ‘killing time’. But with ever more sinkholes being drilled by the attention economy, that’s a function under ferocious and sustained attack.

Over the years the tech giant has responded by engineering ways to rise back to the top of the social heap — including spying on and buying up competition, or directly cloning rival products. It’s been pulling off this trick, by hook or by crook, for over a decade. This time, though, Facebook can’t take any credit for the traffic uptick; a pandemic is nature’s dark pattern design.

What’s most interesting about this virally disrupted moment is how much of the digital technology that’s been built out online over the past two decades could very well have been designed for living through just such a dystopia.

Seen through this lens, VR should be having a major moment. A face computer that swaps out the stuff your eyes can actually see with a choose-your-own-digital-adventure of virtual worlds to explore, all from the comfort of your living room? What problem are you fixing, VR? Well, the conceptual limits of human lockdown in the face of a pandemic quarantine right now, actually…

Virtual reality has never been a compelling proposition vs the rich and textured opportunity of real life, except within very narrow and niche bounds. Yet all of a sudden here we all are — with our horizons drastically narrowed and real-life news that’s ceaselessly harrowing. So it might yet end up as the wry punchline to another multiple choice joke: ‘My next vacation will be: A) Staycation, B) The spare room, C) VR escapism.’

It’s videoconferencing that’s actually having the big moment, though. Turns out even a pandemic can’t make VR go viral. Instead, long lapsed friendships are being rekindled over Zoom group chats or Google Hangouts. And Houseparty — a video chat app — has seen surging downloads as barflies seek out alternative night life with their usual watering-holes shuttered.

Bored celebs are TikToking. Impromptu concerts are being livestreamed from living rooms via Instagram and Facebook Live. All sorts of folks are managing social distancing and the stress of being stuck at home alone (or with family) by distant socializing — signing up to remote book clubs and discos; joining virtual dance parties and exercise sessions from bedrooms. Taking a few classes together. The quiet pub night with friends has morphed seamlessly into a bring-your-own-bottle group video chat.

This is not normal — but nor is it surprising. We’re living in the most extraordinary time. And it seems a very human response to mass disruption and physical separation (not to mention the trauma of an ongoing public health emergency that’s killing thousands of people a day) to reach for even a moving pixel of human comfort. Contactless human contact is better than none at all.

Yet the fact all these tools are already out there, ready and waiting for us to log on and start streaming, should send a dehumanizing chill down society’s backbone.

It underlines quite how much consumer technology is being designed to reprogram how we connect with each other, individually and in groups, in order that uninvited third parties can cut a profit.

Back in the pre-COVID-19 era, a key concern attached to social media was its ability to hook users and encourage passive feed consumption — replacing genuine human contact with voyeuristic screening of friends’ lives. Studies have linked the tech to loneliness and depression. Now that we’re literally unable to go out and meet friends, the loss of human contact is real and stark. So being popular online in a pandemic really isn’t any kind of success metric.

Houseparty, for example, self-describes as a “face to face social network” — yet it’s quite literally the opposite: you’re forgoing face-to-face contact if you’re getting together virtually, in app-wrapped form.

Meanwhile, the implication of Facebook’s COVID-19 traffic bump is that the company’s business model thrives on societal disruption and mainstream misery. Which, frankly, we knew already. Data-driven adtech is another way of saying it’s been engineered to spray you with ad-flavored dissatisfaction by spying on what you get up to. The coronavirus just hammers the point home.

The fact we have so many high-tech tools on tap for forging digital connections might feel like amazing serendipity in this crisis — a freemium bonanza for coping with terrible global trauma. But such bounty points to a horrible flip side: It’s the attention economy that’s infectious and insidious. Before ‘normal life’ plunged off a cliff, all this sticky tech was labelled ‘everyday use’, not ‘break out in a global emergency’.

It’s never been clearer how these attention-hogging apps and services are designed to disrupt and monetize us; to embed themselves in our friendships and relationships in a way that’s subtly dehumanizing; re-routing emotion and connections; nudging us to swap in-person socializing for virtualized fuzz that’s designed to be data-mined and monetized by the same middlemen who’ve inserted themselves unasked into our private and social lives.

Captured and recompiled in this way, human connection is reduced to a series of dilute and/or meaningless transactions. The platforms deploy armies of engineers to knob-twiddle and pull strings to maximize ad opportunities, no matter the personal cost.

It’s also no accident that we’re seeing more of the vast and intrusive underpinnings of surveillance capitalism emerge, as the COVID-19 emergency rolls back some of the obfuscation that’s used to shield these business models from mainstream view in more normal times. The trackers are rushing to seize and colonize an opportunistic purpose.

Tech and ad giants are falling over themselves to get involved with offering data or apps for COVID-19 tracking. They’re already in the mass surveillance business, so there has likely never seemed a better moment than the present pandemic for the big data lobby to press the lie that individuals don’t care about privacy, as governments cry out for tools and resources to help save lives.

First the people-tracking platforms dressed up attacks on human agency as ‘relevant ads’. Now the data industrial complex is spinning police-state levels of mass surveillance as pandemic-busting corporate social responsibility. How quick the wheel turns.

But platforms should be careful what they wish for. Populations that find themselves under house arrest with their phones playing snitch might be just as quick to round on high tech gaolers as they’ve been to sign up for a friendly video chat in these strange and unprecedented times.

Oh and Zoom (and others) — more people might actually read your ‘privacy policy‘ now they’ve got so much time to mess about online. And that really is a risk.

*Source is a private Twitter account called @MBA_ish

Maybe we shouldn’t use Zoom after all

Now that we’re all stuck at home thanks to the coronavirus pandemic, video calls have gone from a novelty to a necessity. Zoom, the popular videoconferencing service, seems to be doing better than most and has quickly become one of the most popular options going — if not the most popular.

But should it be?

Zoom’s recent popularity has also shone a spotlight on the company’s security protections and privacy promises. Just today, The Intercept reported that Zoom video calls are not end-to-end encrypted, despite the company’s claims that they are.

And Motherboard reports that Zoom is leaking the email addresses of “at least a few thousand” people because personal addresses are treated as if they belong to the same company.

These are just the latest examples of issues Zoom has spent the past year mopping up, following a barrage of headlines examining the company’s practices and misleading marketing. To wit:

  • Apple was forced to step in to secure millions of Macs after a security researcher found Zoom failed to disclose that it installed a secret web server on users’ Macs, which Zoom failed to remove when the client was uninstalled. The researcher, Jonathan Leitschuh, said the web server meant any malicious website could activate the webcam of any Mac with Zoom installed, without the user’s permission. Leitschuh declined a bug bounty payout because Zoom wanted him to sign a non-disclosure agreement, which would have prevented him from disclosing details of the bug.
  • Zoom was quietly sending data to Facebook about users’ Zoom habits — even when the user did not have a Facebook account. Motherboard reported that the iOS app was notifying Facebook when users opened the app, along with their device model, phone carrier and more. Zoom removed the code in response, but not fast enough to prevent a class action lawsuit or New York’s attorney general from launching an investigation.
  • Zoom came under fire again for its “attendee tracking” feature, which, when enabled, lets a host check if participants are clicking away from the main Zoom window during a call.
  • A security researcher found that Zoom uses a “shady” technique to install its Mac app without user interaction. “The same tricks that are being used by macOS malware,” the researcher said.
  • On the bright side and to some users’ relief, we reported that it is in fact possible to join a Zoom video call without having to download or use the app. But Zoom’s “dark patterns” don’t make it easy to start a video call using just your browser.
  • Zoom has faced questions over its lack of transparency on law enforcement requests it receives. Access Now, a privacy and rights group, called on Zoom to release the number of requests it receives, just as Amazon, Google, Microsoft and many more tech giants report on a semi-annual basis.
  • Then there’s Zoombombing, where trolls take advantage of open or unprotected meetings and poor default settings to take over screen-sharing and broadcast porn or other explicit material. The FBI this week warned users to adjust their settings to avoid trolls hijacking video calls.
  • And Zoom tightened its privacy policy this week after it was criticized for allowing Zoom to collect information about users’ meetings — like videos, transcripts and shared notes — for advertising.

There are many more privacy-focused alternatives to Zoom. Motherboard noted several options, but they all have their pitfalls. FaceTime and WhatsApp are end-to-end encrypted, but FaceTime works only on Apple devices and WhatsApp is limited to just four video callers at a time. A lesser known video calling platform, Jitsi, is not end-to-end encrypted but it’s open source — so you can look at the code to make sure there are no backdoors — and it works across all devices and browsers. You can run Jitsi on a server you control for greater privacy.
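As a sketch of that self-hosting route, the steps below follow the jitsi/docker-jitsi-meet project's quick-start. They assume Docker and Docker Compose are installed, and should be treated as illustrative rather than current, authoritative instructions (check the project's README for the real steps):

```shell
# Grab the official Docker setup for Jitsi Meet.
git clone https://github.com/jitsi/docker-jitsi-meet && cd docker-jitsi-meet

# Create the environment file from the shipped template, then
# generate strong internal service passwords.
cp env.example .env
./gen-passwords.sh

# Create the config directories the containers expect.
mkdir -p ~/.jitsi-meet-cfg/{web,transcripts,prosody/config,prosody/prosody-plugins-custom,jicofo,jvb,jigasi,jibri}

# Bring up the web, prosody, jicofo and jvb services in the background.
docker compose up -d
```

Running your own instance means call metadata and media flow through a server you control, rather than a third party's.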

In fairness, Zoom is not inherently bad and there are many reasons why Zoom is so popular. It’s easy to use, reliable and for the vast majority it’s incredibly convenient.

But Zoom’s misleading claims give users a false sense of security and privacy. Whether it’s hosting a virtual happy hour or a yoga class, or using Zoom for therapy or government cabinet meetings, everyone deserves privacy.

Now more than ever Zoom has a responsibility to its users. For now, Zoom at your own risk.

Security lapse exposed Republican voter firm’s internal app code

A voter contact and canvassing company, used exclusively by Republican political campaigns, mistakenly left an unprotected copy of its app’s code on its website for anyone to find.

The company, Campaign Sidekick, helps Republican campaigns canvass their districts using its iOS and Android apps, which pull in names and addresses from voter registration rolls. Campaign Sidekick says it has helped campaigns in Arizona, Montana, and Ohio — and contributed to the Brian Kemp campaign, which saw him narrowly win against Democratic rival Stacey Abrams in the Georgia gubernatorial campaign in 2018.

For the past two decades, political campaigns have ramped up their use of data to identify swing voters. This growing political data business has opened up a whole economy of startups and tech companies using data to help campaigns better understand their electorate. But that has led to voter records spilling out of unprotected servers and other privacy-related controversies — like the case of Cambridge Analytica obtaining private data from social media sites.

Chris Vickery, director of cyber risk research at security firm UpGuard, said he found the cache of Campaign Sidekick’s code by chance.

In his review of the code, Vickery found several instances of credentials and other app-related secrets, he said in a blog post on Monday, which he shared exclusively with TechCrunch. These secrets, such as keys and tokens, can typically be used to gain access to systems or data without a username or password. But Vickery did not test the credentials, as doing so would be unlawful. Vickery also found a sampling of personally identifiable information, he said, amounting to dozens of spreadsheets packed with voter names and addresses.

Fearing the exposed credentials could be abused if accessed by a malicious actor, Vickery informed the company of the issue in mid-February. Campaign Sidekick quickly pulled the exposed cache of code offline.

One of the Campaign Sidekick mockups, using dummy data, collates a voter’s data in one place. (Image: supplied)

One of the screenshots provided by Vickery showed a mockup of a voter profile compiled by the app, containing basic information about the voter and their past voting and donor history, which can be obtained from public and voter records. The mockup also lists the voter’s “friends.”

Vickery told TechCrunch he found “clear evidence” that the app’s code was designed to pull in data from its now-defunct Facebook app, which allowed users to sign in and pull their list of friends — a feature that Facebook supported at the time, before limits were put on third-party developers’ access to friends’ data.

“There is clear evidence that Campaign Sidekick and related entities had and have used access to Facebook user data and APIs to query that data,” Vickery said.

Drew Ryun, founder of Campaign Sidekick, told TechCrunch that its Facebook project was from eight years prior, that Facebook had since deprecated access to developers, and that the screenshot was a “digital artifact of a mockup.” (TechCrunch confirmed that the data in the mockup did not match public records.)

Ryun said that after he learned of the exposed data, the company “immediately changed sensitive credentials for our current systems,” but that the credentials in the exposed code could have been used to access its databases storing user and voter data.

Social Bluebook was hacked, exposing 217,000 influencers’ accounts

A social media platform used to match advertisers with thousands of influencers has been hacked.

Social Bluebook, a Los Angeles-based company, allows advertisers to pay social media “influencers” for posts that promote their products and services. The company claims it has some 300,000 influencers on its books.

But in October 2019, the company’s entire backend database was stolen in a data breach.

TechCrunch obtained the database, which contains some 217,000 user accounts — including influencer names, email addresses, and hashed passwords, which had been scrambled using the strong SHA-2 hashing algorithm.
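For illustration, here is a minimal Python sketch of what SHA-2 hashing produces. The report doesn't say whether Social Bluebook salted or iterated its hashes, so this assumes a bare SHA-256 digest (one member of the SHA-2 family); the function name is hypothetical:

```python
import hashlib

def sha256_hash(password: str) -> str:
    # Produce a hex-encoded SHA-256 digest of the password.
    # NOTE: a bare, unsalted hash like this is an assumption for
    # illustration only; real password storage should add a per-user
    # salt and key stretching (e.g. bcrypt or scrypt).
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

print(sha256_hash("password"))
# → 5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8
```

A bare digest like this is fast to compute, which is exactly why stolen hash databases are attractive to crackers and why salting and stretching matter.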

It’s not known how the database was exfiltrated from the company’s systems or who was behind the breach.

We contacted several users who, when presented with their information, confirmed it as accurate. We also provided a portion of the data to Social Bluebook co-founder Sam Michie for verification.

“We have just now become aware of this data breach that occurred in October 2019,” he told TechCrunch in an email Thursday.

He said affected users will be informed of the breach by email. The company also informed the California attorney general’s office of the breach, per state law.

Social media influencers are a constant target for hackers, who often try to hijack accounts with popular handles or high follower counts. Some influencers have relied on white-hat hackers to get their hijacked accounts back.

Last year, an Indian social media firm left a database of Instagram influencers online, which included phone numbers and email addresses scraped from their profiles.


Got a tip? You can send tips securely over Signal and WhatsApp to +1 646-755–8849. 

It’s still easy to find coronavirus mask ads on Facebook

Ads for face masks are still appearing on Facebook, Instagram and Google, according to a review of the platforms carried out by the Tech Transparency Project (TTP). That’s despite pledges by the platforms that they would stamp out ads that seek to profit from the coronavirus pandemic.

Facebook said on March 6 that it would temporarily ban commerce listings and advertisements for medical face masks, in an effort to combat price-gouging and misinformation during the COVID-19 crisis.

Google followed suit a few days later, saying it would temporarily ban all medical face mask ads “out of an abundance of caution”.

The risk of online misinformation exacerbating a global public health crisis has been front of mind for policymakers in many Western markets. Meanwhile front line medical staff continue to face shortages of vital personal protective equipment, such as N95 masks, as they battle rising rates of infection.

There has also been concern that online sellers are attempting to cash in on a public health crisis by price gouging and/or targeting Internet users with ads for substandard masks.

Early last week, two Democratic senators urged the U.S. Federal Trade Commission to act, blasting Google for continuing to allow ads for face masks to be shown to Internet users.

A week later, ads are still circulating.

The TTP — a research project by the nonprofit Campaign for Accountability, a group which focuses on exposing misconduct and malfeasance in public life — reported finding web users still being targeted with face mask ads on Google this week.

It also conducted a review of Facebook and Instagram, and was able to find more than 130 pages on Facebook listing masks for sale, including some using the platform’s ecommerce tools. 

“One Facebook Page called ‘CoronaVirus Mask’ offers a ‘respiratory mask collection,’ with prices ranging from $32 to $37, and uses Facebook’s ‘Shop’ feature to display its merchandise and allow people to add purchases to their cart,” it writes in a blog post. “Facebook’s ‘check out on website’ button then directs users to complete the purchase on the seller’s website.”

“Facebook pages that use WhatsApp to establish contact with buyers are employing a tactic commonly used by wildlife and other traffickers, who often display goods on Facebook and then arrange the actual purchase through WhatsApp encrypted messages. The Facebook Page ‘Surgical Face Mask For Sale,’ for example, has a video showing boxes of medical masks and the seller’s WhatsApp number scrawled on a piece of paper,” it added.

“A visit to one of these Facebook pages often triggers recommendations for other pages selling face masks, a sign that the platform’s algorithms are actually amplifying the reach of these sketchy sellers. TTP, without logging into Facebook, went to the page for ‘Corona Mask Shop’ and was served up ‘Related Pages’ for ‘Corona Mask 247’ and ‘Corona MASK on sale.'”

TechCrunch conducted our own searches on Facebook today and, while some obvious search terms returned no results, a little tweaking of keyword choice quickly turned up additional pages hawking face masks — such as the below example, grabbed from a Facebook page calling itself ‘Face Mask Manufacturer’.

From this page Facebook’s algorithm then recommended more pages — with names like ‘Medical Masks’ and ‘Dispo mask for sale’ — which also appeared to be selling masks.

The TTP’s review also found mask ads circulating on Facebook-owned Instagram.

“One Instagram account for @coronavsmask reads, ‘Act now before it’s too late! GET your N95 Respiratory Face Mask NOW!’ It only has a single post but already counts over 6,300 followers,” it wrote. “An account created on March 14 called @handsanitizers_and_coronamask includes over a dozen posts offering such products.”

It also found “several” Instagram accounts that sell drugs had begun to incorporate medical face masks into their offerings.

At the time of writing Facebook had not responded to our request for comment on the findings.

In further searches, the group reproduced examples of Google’s third-party display advertising network serving ads for face masks alongside news stories related to the coronavirus — an issue highlighted by Sen. Mark Warner in a tweet last week when he blasted the company for “still running ads for facemasks and other coronavirus scams”.

“The Facebook mask pages were searched and collected on March 17-18 using the terms “corona mask,” “N95,” and “surgical mask” in Facebook’s search function,” a TTP spokesman told us when asked for more info about its review. “Of the more than 130 pages identified, 43 were created in the month of March, more than a dozen of those just days before TTP ran the searches.”

“We don’t have the same level of data from Instagram/Google. Instagram’s search function does not lend itself to the same search ability; it doesn’t bring up a list of accounts based on a single term like Facebook’s search function does. With Google, our goal was to show examples of Google-served ads; those were identified in news stories on March 18,” he added.

We reached out to Google for comment on the findings and a spokesman told us the company has a dedicated task force that has removed “millions” of ads in the past week alone — which he said had already led to a sharp decrease in face mask ads. But Google said “opportunistic advertisers” had been trying to run “an unprecedented number” of these ads on its platforms.

Here’s Google’s statement:

Since January, we’ve blocked ads for products that aim to capitalise on coronavirus, including a temporary ban on face mask ads. In the past few weeks, we’ve seen opportunistic advertisers try to run an unprecedented number of these ads on our platforms. We have a dedicated task force working to combat this issue and have removed millions of ads in the past week alone. We’re monitoring the situation closely and continue to make real-time adjustments to protect our users.

Google declined to specify how many people it has working to identify and remove mask ads, saying only that the task force is made up of members from its product, engineering, enforcement and policy teams — and that it’s been set up with coverage across time zones.

It also said the examples highlighted by TTP are already over a week old and do not reflect the impact of its newest enforcement measures.

The company told us it’s analysing both ad content and how ads are served, in order to enhance its takedown capacity.

People who mostly get news from social networks have some COVID-19 misconceptions

A new survey conducted by the Pew Research Center shows a COVID-19 information divide between people who mostly get their news from social networks and those who rely on more traditional news sources.

Pew surveyed 8,914 adults in the U.S. during the week of March 10, dividing survey respondents by the main means they use to consume political and election news. In the group of users that reported getting most of their news from social media, only 37% of respondents said that they expected a COVID-19 vaccine to be available in a year or more — an answer aligned with the current scientific consensus. In every other sample, with the exception of the local TV group, at least 50% of those surveyed answered the question correctly. A third of social media news consumers also reported that they weren’t sure about vaccine availability.

Among people who get most of their news from social media, 57% reported that they had seen at least some COVID-19 information that “seemed completely made up.” For people who consume most of their news via print media, that number was 37%.

Most alarmingly, people who primarily get their news via social media perceived the threat of COVID-19 to be exaggerated. Of the social media news consumers surveyed, 45% answered that the media “greatly exaggerated the risks” posed by the novel coronavirus. Radio news consumers were close behind, with 44% believing the media greatly exaggerated the threat of the virus, while only 26% of print consumers — those more likely to be paying for their news — believed the same.

The full results were part of Pew’s Election News Pathways project, which explores how people in the U.S. consume election news.

UK turns to WhatsApp to share coronavirus information

Three years ago, the U.K. government chastised WhatsApp for enabling end-to-end encryption by default. Today, it’s relying on the encrypted messaging app as a vital service for sharing information about the coronavirus pandemic.

The new chatbot, set up by the U.K. government, will let anyone subscribe to official advice about the disease, known as COVID-19, in the hope of reducing the burden on the country’s national health service.

Send “hi” to 07860 064422 (or +44 7860 064422 for international users) over WhatsApp to start receiving updates.

The U.K. government’s official WhatsApp account, which it’s using to share information about the coronavirus pandemic. (Image: TechCrunch)

The U.K. government said the service will also allow it to send messages to all opted-in users if required. Unlike the U.S., the U.K. currently has no national emergency alert system to notify citizens en masse about incidents or emergencies. South Korea has been praised for sending up-to-date emergency alerts to its citizens, which experts say has helped to “flatten the curve” of infections, a reference to slowing the rate of infection to ease the burden on hospitals.
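The opt-in flow described above can be sketched as a minimal in-memory service. The “hi” keyword comes from the article; the class and method names, reply text and storage are illustrative assumptions, since a production bot would sit behind the WhatsApp Business API rather than this toy model.

```python
# Hypothetical sketch of a keyword opt-in broadcast service, modeled on the
# subscribe-then-broadcast flow described in the article. Not the actual
# U.K. government implementation.

class OptInBroadcast:
    def __init__(self, keyword="hi"):
        self.keyword = keyword
        self.subscribers = set()  # phone numbers of opted-in users

    def handle_inbound(self, sender, text):
        """Process one inbound message and return the reply to send."""
        if text.strip().lower() == self.keyword:
            self.subscribers.add(sender)
            return "You are now subscribed to official COVID-19 updates."
        return "Send 'hi' to subscribe to updates."

    def broadcast(self, message):
        """Return (recipient, message) pairs for every opted-in user."""
        return [(user, message) for user in sorted(self.subscribers)]


service = OptInBroadcast()
service.handle_inbound("+447700900001", "hi")
service.handle_inbound("+447700900002", "Hi ")  # keyword match is case-insensitive
print(len(service.broadcast("Stay at home.")))  # prints 2
```

Keeping an explicit opt-in set is what makes a later government-initiated push possible without messaging users who never asked for updates.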

British Prime Minister Boris Johnson declared a national lockdown on Tuesday, ordering citizens and residents to stay at home except for essential trips, in an effort to fight the spread of the virus.

U.K. authorities had faced criticism for failing to issue the stay-at-home order sooner. Several other countries and cities with spiking infection rates, including Italy and New York, had ordered their citizens to remain at home.

As of Wednesday, there were more than 438,000 confirmed global cases of COVID-19, with 19,000 deaths recorded.

Fox Sports to broadcast the full season of NASCAR’s virtual race series

Esports racing, helped by record-setting viewership, is hitting the big time.

Fox Sports said Tuesday it will broadcast the rest of the eNASCAR Pro Invitational iRacing Series, following Sunday’s virtual race, which attracted 903,000 viewers, according to Nielsen Media Research.

While those numbers are far below the millions of viewers who watch NASCAR’s official races — the last one at Phoenix Raceway reached 4.6 million — it still hit a number of firsts that Fox Sports found notable enough to commit to broadcasting the virtual racing series for the remainder of the season, beginning March 29.

The races will be simulcast on the FOX broadcast network, Fox Sports iRacing and the FOX Sports app. Races will be available in Canada through FOX Sports Racing.

Virtual racing, which lets competitors race using a system that includes a computer, steering wheel and pedals, has been around for years. But it’s garnered more attention as the spread of COVID-19, the disease caused by coronavirus, has prompted sports organizers to cancel or postpone live events, including the NCAA March Madness basketball tournament, NBA, NHL and MLB seasons as well as Formula 1 and NASCAR racing series.

NASCAR ran its first virtual race in the series on Sunday in lieu of its planned race at the Homestead-Miami Speedway, which was canceled due to COVID-19. Not only was it the most-watched esports event in U.S. television history, it was also the most-watched sports telecast on cable that day.

“This rapid-fire collaboration between FOX Sports, NASCAR and iRacing obviously has resonated with race fans, gamers and television viewers across the country in a very positive way,” Brad Zager, FOX Sports executive producer said in a statement. “We have learned so much in a relatively short period of time, and we are excited to expand coverage of this brand-new NASCAR esports series to an even wider audience.”

Granted, there aren’t any live sports to watch in this COVID-19 era. Still, it bodes well for the future of esports, perhaps even after the COVID-19 pandemic ends.

“The response on social media to last Sunday’s race has been incredible,” said four-time NASCAR Cup Series champion Jeff Gordon, who is announcer for Fox NASCAR. “We were able to broadcast a virtual race that was exciting and entertaining. It brought a little bit of ‘normalcy’ back to the weekend, and I can’t wait to call the action Sunday at Texas.”

NASCAR isn’t the only racing series to turn to esports. Formula 1 announced last week that it would host an esports series, the F1 Esports Virtual Grand Prix series, with a number of current F1 drivers alongside a number of other stars.

The virtual Formula 1 races will use Codemasters’ official Formula 1 2019 PC game, and fans can follow along on YouTube, Twitch and Facebook, as well as on F1.com. The races will be about half as long as regular races, at 28 laps. The first race took place March 22. The first-ever virtual round of the Nürburgring Endurance Series kicked off on March 21.

Monitoring is critical to successful AI

As the world becomes more deeply connected through IoT devices and networks, meeting consumer and business needs and expectations will soon be sustainable only through automation.

Recognizing this, artificial intelligence and machine learning are being rapidly adopted by critical industries such as finance, retail, healthcare, transportation and manufacturing to help them compete in an always-on and on-demand global culture. However, even as AI and ML provide endless benefits — such as increasing productivity while decreasing costs, reducing waste, improving efficiency and fostering innovation in outdated business models — there is tremendous potential for errors that produce unintended, biased outcomes and, worse, for abuse by bad actors.

The market for advanced technologies including AI and ML will continue its exponential growth, with market research firm IDC projecting that spending on AI systems will reach $98 billion in 2023, more than two and one-half times the $37.5 billion that was projected to be spent in 2019. Additionally, IDC foresees that retail and banking will drive much of this spending, as the industries invested more than $5 billion in 2019.

These findings underscore the importance for companies that are leveraging or plan to deploy advanced technologies in their business operations to understand how and why these systems make the decisions they do. Moreover, a fundamental understanding of how AI and ML operate is even more crucial for conducting proper oversight in order to minimize the risk of undesired results.

Companies often realize AI and ML performance issues after the damage has been done, which in some cases has made headlines. Such instances of AI driving unintentional bias include the Apple Card allowing lower credit limits for women and Google’s AI algorithm for monitoring hate speech on social media being racially biased against African Americans. And there have been far worse examples of AI and ML being used to spread misinformation online through deepfakes, bots and more.

Through real-time monitoring, companies will be given visibility into the “black box” to see exactly how their AI and ML models operate. In other words, explainability will enable data scientists and engineers to know what to look for (a.k.a. transparency) so they can make the right decisions (a.k.a. insight) to improve their models and reduce potential risks (a.k.a. building trust).

But there are complex operational challenges that must first be addressed in order to achieve risk-free and reliable, or trustworthy, outcomes.

5 key operational challenges in AI and ML models