Using AI responsibly to fight the coronavirus pandemic

The emergence of the novel coronavirus has left the world in turmoil. COVID-19, the disease caused by the virus, has reached virtually every corner of the world, with the number of cases exceeding a million and the number of deaths surpassing 50,000 worldwide. It is a situation that will affect us all in one way or another.

With the imposition of lockdowns, limitations of movement, the closure of borders and other measures to contain the virus, the operating environment of law enforcement agencies and those security services tasked with protecting the public from harm has suddenly become ever more complex. They find themselves thrust into the middle of an unparalleled situation, playing a critical role in halting the spread of the virus and preserving public safety and social order in the process. In response to this growing crisis, many of these agencies and entities are turning to AI and related technologies for support in unique and innovative ways. Enhancing surveillance, monitoring and detection capabilities is high on the priority list.

For instance, early in the outbreak, Reuters reported a case in China wherein the authorities relied on facial recognition cameras to track a man from Hangzhou who had traveled in an affected area. Upon his return home, the local police were there to instruct him to self-quarantine or face repercussions. Police in China and Spain have also started to use technology to enforce quarantine, with drones being used to patrol and broadcast audio messages to the public, encouraging them to stay at home. People flying to Hong Kong airport receive monitoring bracelets that alert the authorities if they breach the quarantine by leaving their home.

In the United States, a surveillance company announced that its AI-enhanced thermal cameras can detect fevers, while in Thailand, border officers at airports are already piloting a biometric screening system using fever-detecting cameras.

Isolated cases or the new norm?

With the number of cases, deaths and countries on lockdown increasing at an alarming rate, we can assume that these will not be isolated examples of technological innovation in response to this global crisis. In the coming days, weeks and months of this outbreak, we will most likely see more and more AI use cases come to the fore.

While the application of AI can play an important role in seizing the reins in this crisis, and even safeguard officers and officials from infection, we must not forget that its use can raise very real and serious human rights concerns that undermine the trust communities place in government. Human rights, civil liberties and the fundamental principles of law may be eroded if we do not tread this path with great caution. There may be no turning back if Pandora’s box is opened.

On March 19, the monitors for freedom of expression and freedom of the media for the United Nations, the Inter-American Commission on Human Rights and the Representative on Freedom of the Media of the Organization for Security and Co-operation in Europe issued a joint statement on promoting and protecting access to and the free flow of information during the pandemic, specifically taking note of the growing use of surveillance technology to track the spread of the coronavirus. They acknowledged the need for active efforts to confront the pandemic, but stressed that “it is also crucial that such tools be limited in use, both in terms of purpose and time, and that individual rights to privacy, non-discrimination, the protection of journalistic sources and other freedoms be rigorously protected.”

This is not an easy task, but a necessary one. So what can we do?

Ways to responsibly use AI to fight the coronavirus pandemic

  1. Data anonymization: While some countries are tracking individual suspected patients and their contacts, Austria, Belgium, Italy and the U.K. are collecting anonymized data to study the movement of people in a more general manner. This option still provides governments with the ability to track the movement of large groups, but minimizes the risk of infringing data privacy rights.
  2. Purpose limitation: Personal data that is collected and processed to track the spread of the coronavirus should not be reused for another purpose. National authorities should seek to ensure that the large amounts of personal and medical data are used exclusively for public health reasons. This is a concept already in force in Europe, within the context of the European Union’s General Data Protection Regulation (GDPR), but it’s time for it to become a global principle for AI.
  3. Knowledge-sharing and open access data: António Guterres, the United Nations Secretary-General, has insisted that “global action and solidarity are crucial,” and that we will not win this fight alone. This is applicable on many levels, even for the use of AI by law enforcement and security services in the fight against COVID-19. These agencies and entities must collaborate with one another and with other key stakeholders in the community, including the public and civil society organizations. AI use cases and data should be shared and transparency promoted.
  4. Time limitation: Although the end of this pandemic seems rather far away at this point in time, it will come. When it does, national authorities will need to scale back their newly acquired monitoring capabilities. As Yuval Noah Harari observed in his recent article, “temporary measures have a nasty habit of outlasting emergencies, especially as there is always a new emergency lurking on the horizon.” We must ensure that these exceptional capabilities are indeed scaled back and do not become the new norm.
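The anonymization-and-aggregation approach described in point 1 can be sketched in a few lines of code. This is an illustrative toy, not any government's actual pipeline: the data fields, region labels and the minimum group size of five are all invented assumptions. The core idea is simply to drop individual identifiers and publish only group-level counts large enough that no single person stands out.

```python
from collections import Counter

# Illustrative location "pings": (user_id, coarse_region). In a real pipeline
# the region would come from rounding GPS coordinates or from cell-tower zones.
pings = [
    ("u1", "region_a"), ("u2", "region_a"), ("u3", "region_a"),
    ("u4", "region_a"), ("u5", "region_a"), ("u6", "region_b"),
]

K = 5  # minimum group size: suppress counts small enough to single people out


def aggregate_movement(pings, k=K):
    """Drop user identifiers and keep only region counts of at least k people."""
    counts = Counter(region for _uid, region in pings)
    return {region: n for region, n in counts.items() if n >= k}


print(aggregate_movement(pings))  # region_b has only one person, so it is suppressed
```

The suppression threshold is what lets authorities still see large-scale movement trends while discarding anything that could pinpoint an individual.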

Within the United Nations system, the United Nations Interregional Crime and Justice Research Institute (UNICRI) is working to advance approaches to AI such as these. It has established a specialized Centre for AI and Robotics in The Hague and is one of the few international actors dedicated to specifically looking at AI vis-à-vis crime prevention and control, criminal justice, rule of law and security. It assists national authorities, in particular law enforcement agencies, to understand the opportunities presented by these technologies and, at the same time, to navigate the potential pitfalls associated with these technologies.

Working closely with the International Criminal Police Organization (INTERPOL), UNICRI has set up a global platform for law enforcement, fostering discussion on AI, identifying practical use cases and defining principles for responsible use. Much work has been done through this forum, but it is still early days, and the path ahead is long.

While the COVID-19 pandemic has illustrated several innovative use cases, as well as the urgency for governments to do their utmost to stop the spread of the virus, it is important not to let consideration of fundamental principles, rights and respect for the rule of law be set aside. The positive power and potential of AI is real. It can help those embroiled in fighting this battle to slow the spread of this debilitating disease. It can help save lives. But we must stay vigilant and commit to the safe, ethical and responsible use of AI.

It is essential that, even in times of great crisis, we remain conscious of the duality of AI and strive to advance AI for good.

A bug bounty alone won’t save your startup — here’s why

In this world, there is no such thing as perfect security.

Every app or service you use — even the websites you visit — has security bugs. Companies go through repeated rounds of testing, code reviews and audits — sometimes even bringing in third parties. Bugs get missed — that’s life, and it happens — but when they are uncovered, companies can get hacked.

That’s where a bug bounty comes into play. A bug bounty is an open-door policy to anyone who finds a bug or a security flaw; they are critical for channeling those vulnerabilities back to your development team so they can be fixed before bad actors can exploit them.

Bug bounties are an extension of your internal testing process and incentivize hackers to report bugs and issues and get paid for their work rather than dropping details of a vulnerability out of the blue (aka a “zero-day”) for anyone else to take advantage of.

Bug bounties are a win-win, but paying hackers for bugs is only one part of the process. As is usually the case where security meets startup culture, getting the right system in place early is best.

Why you need a vulnerability disclosure program

A bug bounty is just a small part of the overall bug-hunting and remediating process.

A former chaos engineer offers 5 tips for handling online disasters remotely

I recently had a scheduled video conference call with a Fortune 100 company.

Everything on my end was ready to go; my presentation was prepared and well-practiced. I was set to talk to 30 business leaders who were ready to learn more about how they could become more resilient to major outages.

Unfortunately, their side hadn’t set up the proper permissions in Zoom to add new people to a trusted domain, so I wasn’t able to share my slides. We scrambled to find a workaround at the last minute while the assembled VPs and CTOs sat around waiting. I ended up emailing my presentation to their coordinator, calling in from my mobile and verbally indicating to the coordinator when the next slide needed to be brought up. Needless to say, it wasted a lot of time and wasn’t the most effective way to present.

At the end of the meeting, I said pointedly that if there was one thing they should walk away with, it’s that they had a vital need to run an online fire drill with their engineering team as soon as possible. Because if a team is used to working together in an office — with access to tools and proper permissions in place — it can be quite a shock to find out in the middle of a major outage that they can’t respond quickly and adequately. Issues like these can turn a brief outage into one that lasts for hours.

Quick context about me: I carried a pager for a decade at Amazon and Netflix, and what I can tell you is that when either of these services went down, a lot of people were unhappy. There were many nights where I had to spring out of bed at 2 a.m., rub the sleep from my eyes and work with my team to quickly identify the problem. I can also tell you that working remotely makes the entire process more complicated if teams are not accustomed to it.

There are many articles about best practices aimed at a general audience, but engineering teams have specific challenges as the ones responsible for keeping online services up and running. And while leading tech companies already have sophisticated IT teams and operations in place, what about financial institutions and hospitals and other industries where IT is a tool, but not a primary focus? It’s often the small things that can make all the difference when working remotely; things that seem obvious in the moment, but may have been overlooked.

So here are some tips for managing incidents remotely:

There were many nights where I had to spring out of bed at 2 a.m., rub the sleep from my eyes and work with my team to quickly identify the problem… working remotely makes the entire process more complicated if teams are not accustomed to it.

Daily Crunch: Zoom faces security scrutiny

Researchers reveal a number of security issues with videoconferencing app Zoom, investors warn Indian startups of tough times ahead and Uber Eats expands its grocery options internationally. Here’s your Daily Crunch for April 1, 2020.

1. Maybe we shouldn’t use Zoom after all

Zoom’s recent popularity has shone a spotlight on the company’s security protections and privacy promises. Yesterday, The Intercept reported that Zoom video calls are not end-to-end encrypted, despite the company’s claims that they are.

In addition, two security researchers found a Zoom bug that can be abused to steal Windows passwords, while another researcher found two new bugs that can be used to take over a Zoom user’s Mac, including tapping into the webcam and microphone.

2. Investors tell Indian startups to ‘prepare for the worst’ as COVID-19 uncertainty continues

In an open letter to startup founders in India, 10 global and local private equity and venture capital firms — including Accel, Lightspeed, Sequoia Capital and Matrix Partners — cautioned that the current changes to the macro environment could make it difficult for a startup to close its next fundraising deal.

3. Uber Eats beefs up its grocery delivery offer as COVID-19 lockdowns continue

Uber’s food delivery division has inked a partnership with supermarket giant Carrefour in France to provide Parisians with 30-minute home delivery on a range of grocery products. In Spain, it’s partnered with the Galp service station brand to offer a grocery delivery service that consists of basic foods, over the counter medicines, beverages and cleaning products. And in Brazil, the company said it’s partnering with a range of pharmacies, convenience stores and pet shops in São Paulo to offer home delivery on basic supplies.

4. Grab hires Peter Oey as its chief financial officer

Prior to joining Grab, Oey was the chief financial officer at LegalZoom, an online legal services company based near Los Angeles. Before that, he served the same role at Mylife.com.

5. How to value a startup in a downturn

What’s been happening in public markets is going to trickle down into the private markets — in other words, startups are going to take a hit. To understand that dynamic, we spoke with Mary D’Onofrio, an investor with Bessemer Venture Partners. (Extra Crunch membership required.)

6. No proof of a Houseparty breach, but its privacy policy is still gatecrashing your data

Houseparty was swift to deny the reports, even going so far as to claim — without evidence — that the “breach” was a “paid commercial smear to harm Houseparty,” offering a $1 million reward to whoever could prove its theory.

7. YouTube sellers found touting bogus coronavirus vaccines and masks

Researchers working for the Digital Citizens Alliance and the Coalition for a Safer Web — two online safety advocacy groups in the U.S. — undertook an 18-day investigation of YouTube in March, finding what they say were “dozens” of examples of dubious videos, including videos touting bogus vaccines the sellers claimed would protect buyers from COVID-19.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

Ex-NSA hacker drops new zero-day doom for Zoom

Zoom’s troubled year just got worse.

Now that a large portion of the world is working from home to ride out the coronavirus pandemic, Zoom’s popularity has rocketed, but also has led to an increased focus on the company’s security practices and privacy promises. Hot on the heels of two security researchers finding a Zoom bug that can be abused to steal Windows passwords, another security researcher found two new bugs that can be used to take over a Zoom user’s Mac, including tapping into the webcam and microphone.

Patrick Wardle, a former NSA hacker and now principal security researcher at Jamf, dropped the two previously undisclosed flaws on his blog Wednesday, which he shared with TechCrunch.

The two bugs, Wardle said, can be launched by a local attacker — that is, someone with physical control of a vulnerable computer. Once exploited, the attacker can gain and maintain persistent access to the innards of a victim’s computer, allowing them to install malware or spyware.

Wardle’s first bug piggybacks off a previous finding. Zoom uses a “shady” technique — one that’s also used by Mac malware — to install the Mac app without user interaction. Wardle found that a local attacker with low-level user privileges can inject the Zoom installer with malicious code to obtain the highest level of user privileges, known as “root.”

Those root-level user privileges mean the attacker can access the underlying macOS operating system, which is typically off-limits to most users, making it easier to run malware or spyware without the user noticing.

The second bug exploits a flaw in how Zoom handles the webcam and microphone on Macs. Zoom, like any app that needs the webcam and microphone, first requires consent from the user. But Wardle said an attacker can inject malicious code into Zoom to trick it into giving the attacker the same access to the webcam and microphone that Zoom already has. Once Wardle tricked Zoom into loading his malicious code, the code would “automatically inherit” any or all of Zoom’s access rights, he said — and that includes Zoom’s access to the webcam and microphone.

“No additional prompts will be displayed, and the injected code was able to arbitrarily record audio and video,” wrote Wardle.

Because Wardle dropped details of the vulnerabilities on his blog, Zoom has not yet provided a fix. Zoom also did not respond to TechCrunch’s request for comment.

In the meantime, Wardle said, “if you care about your security and privacy, perhaps stop using Zoom.”

Marriott says 5.2 million guest records stolen in another data breach

Marriott has confirmed a second data breach in three years — this time involving the personal information on 5.2 million guests.

The hotel giant said Tuesday it discovered the breach of an unspecified property system at a franchise hotel in late February. According to a hotel statement, the hackers had broken in weeks earlier, in mid-January, using the login details of two employees.

Marriott said it has “no reason” to believe payment data was stolen, but warned that names, addresses, phone numbers, loyalty member data, dates of birth and other travel information — such as linked airline loyalty numbers and room preferences — were taken in the breach.

Starwood, a subsidiary of Marriott, said in 2018 its central reservation system was hacked, exposing the personal data and guest records of 383 million guests. The data included five million unencrypted passport numbers and eight million credit card records.

It prompted a swift response from European authorities, which fined Marriott $123 million in the wake of the breach.

Maybe we shouldn’t use Zoom after all

Now that we’re all stuck at home thanks to the coronavirus pandemic, video calls have gone from a novelty to a necessity. Zoom, the popular videoconferencing service, seems to be doing better than most and has quickly become one of the most popular options going, if not the most popular.

But should it be?

Zoom’s recent popularity has also shone a spotlight on the company’s security protections and privacy promises. Just today, The Intercept reported that Zoom video calls are not end-to-end encrypted, despite the company’s claims that they are.

And Motherboard reports that Zoom is leaking the email addresses of “at least a few thousand” people because personal addresses are treated as if they belong to the same company.

They’re the latest examples from a company that has spent the last year mopping up after a barrage of headlines examining its practices and misleading marketing. To wit:

  • Apple was forced to step in to secure millions of Macs after a security researcher found that Zoom failed to disclose it installed a secret web server on users’ Macs, one that Zoom also failed to remove when the client was uninstalled. The researcher, Jonathan Leitschuh, said the web server meant any malicious website could activate the webcam on a Mac with Zoom installed, without the user’s permission. Leitschuh declined a bug bounty payout because Zoom wanted him to sign a non-disclosure agreement, which would have prevented him from disclosing details of the bug.
  • Zoom was quietly sending data to Facebook about users’ Zoom habits — even when the user did not have a Facebook account. Motherboard reported that the iOS app was notifying Facebook when the user opened the app, along with their device model, their phone carrier and more. Zoom removed the code in response, but not fast enough to prevent a class action lawsuit or New York’s attorney general from launching an investigation.
  • Zoom came under fire again for its “attendee tracking” feature, which, when enabled, lets a host check if participants are clicking away from the main Zoom window during a call.
  • A security researcher found that Zoom uses a “shady” technique to install its Mac app without user interaction. “The same tricks that are being used by macOS malware,” the researcher said.
  • On the bright side, and to some users’ relief, we reported that it is in fact possible to join a Zoom video call without having to download or use the app. But Zoom’s “dark patterns” don’t make it easy to start a video call using just your browser.
  • Zoom has faced questions over its lack of transparency on law enforcement requests it receives. Access Now, a privacy and rights group, called on Zoom to release the number of requests it receives, just as Amazon, Google, Microsoft and many more tech giants report on a semi-annual basis.
  • Then there’s Zoombombing, where trolls take advantage of open or unprotected meetings and poor default settings to take over screen-sharing and broadcast porn or other explicit material. The FBI this week warned users to adjust their settings to avoid trolls hijacking video calls.
  • And Zoom tightened its privacy policy this week after it was criticized for allowing Zoom to collect information about users’ meetings — like videos, transcripts and shared notes — for advertising.

There are many more privacy-focused alternatives to Zoom. Motherboard noted several options, but they all have their pitfalls. FaceTime and WhatsApp are end-to-end encrypted, but FaceTime works only on Apple devices and WhatsApp is limited to just four video callers at a time. A lesser-known video calling platform, Jitsi, is not end-to-end encrypted, but it’s open source — so you can look at the code to make sure there are no backdoors — and it works across all devices and browsers. You can run Jitsi on a server you control for greater privacy.

In fairness, Zoom is not inherently bad and there are many reasons why Zoom is so popular. It’s easy to use, reliable and for the vast majority it’s incredibly convenient.

But Zoom’s misleading claims give users a false sense of security and privacy. Whether it’s hosting a virtual happy hour or a yoga class, or using Zoom for therapy or government cabinet meetings, everyone deserves privacy.

Now more than ever Zoom has a responsibility to its users. For now, Zoom at your own risk.

No proof of a Houseparty breach, but its privacy policy is still gatecrashing your data

Houseparty has been a smashing success with people staying home during the coronavirus pandemic who still want to connect with friends.

The group video chat app’s games and other bells and whistles set it apart from the more mundane Zooms and Hangouts (fun only in their names, otherwise pretty serious tools used by companies, schools and others who just need to work) when it comes to creating engaged leisure time, at a moment when all of these services are seeing a huge surge in growth.

All that looked like it could possibly fall apart for Houseparty and its new owner Epic Games when a series of reports appeared Monday claiming Houseparty was breached, and that malicious hackers were using users’ data to access their accounts on other apps such as Spotify and Netflix.

Houseparty was swift to deny the reports, even going so far as to claim — without evidence — that the “breach” was a “paid commercial smear to harm Houseparty,” offering a $1 million reward to whoever could prove its theory.

For now, there is no proof that there was a breach, nor proof that there was a paid smear campaign, and when we reached out to ask Houseparty and Epic about this investigation, a spokesperson said: “We don’t have anything to add here at the moment.”

But that doesn’t mean that Houseparty doesn’t have privacy issues.

As the old saying goes, “if the product is free, you are the product.” In the case of the free app Houseparty, the publishers detail a 12,000+ word privacy policy that covers any and all uses of data that it might collect by way of you logging on to or using its service, laying out the many ways that it might use data for promotional or commercial purposes.

There are some clear lines in the policy about what it won’t use. For example, while phone numbers might get shared for tech support, with partnerships that you opt into, to link up contacts to talk with and to authenticate you, “we will never share your phone number or the phone numbers of third parties in your contacts with anyone else.”

But beyond that, there are provisions in there that could see Houseparty selling anonymized and other data, leading Ray Walsh of research firm ProPrivacy to describe it as a “privacy nightmare.”

“Anybody who decides to use the Houseparty application to stay in contact during quarantine needs to be aware that the app collects a worrying amount of personal information,” he said. “This includes geolocation data, which could, in theory, be used to map the location of each user. A closer look at Houseparty’s privacy policy reveals that the firm promises to anonymize and aggregate data before it is shared with the third-party affiliates and partners it works with. However, time and time again, researchers have proven that previously anonymized data can be re-identified.”
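The re-identification risk Walsh describes can be illustrated with a toy linkage attack. Everything here is invented for illustration — the field names, records and "public directory" are assumptions, not Houseparty's actual data. The point is that records stripped of names can still carry quasi-identifiers (a zip code plus a birthdate, say) that join cleanly against some other dataset where names are present.

```python
# "Anonymized" records: no names, but quasi-identifiers remain (invented data).
anonymized = [
    {"zip": "90210", "birthdate": "1990-05-01", "app_usage_hours": 42},
    {"zip": "10001", "birthdate": "1985-11-23", "app_usage_hours": 7},
]

# A public dataset, e.g. scraped profiles, with the same quasi-identifiers.
public_directory = [
    {"name": "Alice Example", "zip": "90210", "birthdate": "1990-05-01"},
]


def reidentify(anon_rows, directory):
    """Join on (zip, birthdate) to attach names back onto 'anonymous' rows."""
    index = {(p["zip"], p["birthdate"]): p["name"] for p in directory}
    return [
        {**row, "name": index[(row["zip"], row["birthdate"])]}
        for row in anon_rows
        if (row["zip"], row["birthdate"]) in index
    ]


print(reidentify(anonymized, public_directory))
```

One matching row is enough to re-attach a name to a supposedly anonymous usage record, which is why researchers argue that anonymization alone is a weak guarantee.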

There are ways around this for the proactive. Walsh notes that users can go into the settings to select “private mode” to “lock” rooms they use to stop people from joining unannounced or uninvited; switch locations off; use fake names and birthdates; disconnect all other social apps; and launch the app on iOS with a long press to “sneak into the house” without notifying all your contacts.

But with a consumer app, it’s a long shot to assume that most people, especially the younger users drawn to Houseparty, will go through all of these extra steps to secure their information.

Palo Alto Networks to acquire CloudGenix for $420M

Palo Alto Networks announced today that it has an agreement in place to acquire CloudGenix for $420 million.

CloudGenix delivers a software-defined wide area network (SD-WAN) that helps customers stay secure by setting policies to enforce compliance with company security protocols across distributed locations. This is especially useful for companies with a lot of branch offices or a generally distributed workforce, something just about everyone is dealing with at the moment as we find millions suddenly working from home.

Nikesh Arora, chairman and CEO at Palo Alto Networks, says the acquisition should bolster Palo Alto’s “secure access service edge,” or SASE, offerings, as the category is known in industry parlance.

“As the enterprise becomes more distributed, customers want agile solutions that just work, and that applies to both security and networking. Upon the close of the transaction, the combined platform will provide customers with a complete SASE offering that is best-in-class, easy to deploy, cloud-managed, and delivered as a service,” Arora said in a statement.

CloudGenix was founded in 2013 by Kumar Ramachandran, Mani Ramasamy and Venkataraman Anand, all of whom will join Palo Alto Networks as part of the deal. It has 250 customers across a variety of verticals. The company has raised almost $100 million, according to PitchBook data.

Palo Alto Networks has been on an acquisitive streak: going back to February 2019, CloudGenix is the sixth company it has acquired, for a combined total of more than $1.6 billion.

The acquisition is expected to close in the fourth quarter, subject to customary regulatory approvals.