Google to face shareholders on censored ‘Dragonfly’ search in China

Another day, another shareholder suit. This time it’s Google in the spotlight.

Shareholders have tabled a resolution which, if passed, would demand Google put the brakes on its controversial search engine efforts in China. The program, internally dubbed “Dragonfly,” is said to be a censorship-friendly search engine with the capability to hide results at the behest of Beijing, which administers one of the most restrictive internets in the world.

The project remains largely secret amid internal upheaval and political pressure from the Trump administration to scrap the effort, though Google chief Sundar Pichai later acknowledged it, describing China as an “important” market.

The resolution, set to be voted on at the company’s annual shareholder meeting Wednesday, would instruct Google to conduct and publish a human rights impact assessment examining the effects of a censored search engine in China.

Open Mic, a nonprofit representing shareholders with $3 billion in Google assets, said Google should examine the human rights impact during Dragonfly’s development, not after.

“The Chinese government already employs invasive, data-driven surveillance to track its citizens,” said Joshua Brockwell, an investment communications director at Azzad Asset Management, which supports the resolution. “The potential for it to weaponize data from Google searches could allow the government to expand its human rights abuses, including mass detentions of the Uighur minority.”

Among recent crackdowns, China has come under international pressure in the past year for targeting Uighur Muslims and holding more than a million in detention.

Google opposes the resolution, saying in its proxy statement: “Google has been open about its desire to increase its ability to serve users in China and other countries. We have considered a variety of options for how to offer services in China in a way that is consistent with our mission and have gradually expanded our offerings to consumers in China, including Google Translate.”

A spokesperson for Google told TechCrunch it had “nothing more to add” beyond its proxy statement.

It’s unclear how the vote will go, given the pressure on Google to evaluate the introduction of search into China. For context, the smallest of the company’s top 10 shareholders still holds $3.9 billion in stock.

TextIQ, a machine learning platform for parsing sensitive corporate data, raises $12.6M

TextIQ, a machine learning system that parses and understands sensitive corporate data, has raised $12.6 million in Series A funding led by FirstMark Capital, with participation from Sierra Ventures.

TextIQ started as cofounder Apoorv Agarwal’s Columbia thesis project, titled “Social Network Extraction From Text.” The algorithm he built could read a novel, such as Jane Austen’s Emma, and understand the social hierarchy and interactions between characters.
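Agarwal’s production models aren’t public, but the basic idea of extracting a social network from text can be sketched with a simple co-occurrence heuristic: characters who appear in the same sentence are assumed to interact. Everything below (the character list, the sentences, the sentence-level heuristic) is a toy illustration, not TextIQ’s actual approach:

```python
from collections import Counter
from itertools import combinations

# Toy character list for the illustration (a real system would
# detect and disambiguate names automatically).
CHARACTERS = {"Emma", "Harriet", "Knightley", "Elton"}

def extract_network(sentences):
    """Count pairwise co-occurrences of known character names per sentence."""
    edges = Counter()
    for sentence in sentences:
        present = sorted(CHARACTERS & set(sentence.split()))
        for pair in combinations(present, 2):
            edges[pair] += 1
    return edges

sentences = [
    "Emma invited Harriet to Hartfield",
    "Knightley argued with Emma about Harriet",
    "Elton admired Emma",
]
print(extract_network(sentences))
# Emma and Harriet co-occur twice, so that edge is the strongest.
```

Edge weights in the resulting graph give a crude picture of who interacts with whom and how often, which is the kind of structure a person-centric system can then use to organize unstructured data.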

This people-centric approach to parsing unstructured data eventually became the kernel of TextIQ, which helps corporations find what they’re looking for in a sea of unstructured, and highly sensitive, data.

The platform started out as a tool for corporate legal teams. Lawyers often have to manually comb through troves of documents and conversations (text messages, emails, Slack, etc.) to find specific evidence or information. Even with search, these teams spend enormous time and resources reviewing the results, which are often less accurate than they should be.

“The status quo for this is to use search terms and hire hundreds of humans, if not thousands, to look for things that match their search terms,” said Agarwal. “It’s super expensive, and it can take months to go through millions of documents. And it’s still risky, because they could be missing sensitive information. Compared to the status quo, TextIQ is not only cheaper and faster but, most interestingly, it’s much more accurate.”

Following success with legal teams, TextIQ expanded into HR/compliance, giving companies the ability to retrieve sensitive information about internal compliance issues without a manual search. Because TextIQ understands who a person is relative to the rest of the organization, and learns that organization’s ‘language’, it can more thoroughly extract what’s relevant to the inquiry from all that unstructured data in Slack, email, etc.

More recently, in the wake of GDPR, TextIQ has expanded its product suite to work in the privacy realm. When a company is asked by a customer to get access to all their data, or to be forgotten, the process can take an enormous amount of resources. Even then, bits of data might fall through the cracks.

For example, if a customer emailed Customer Service years ago, that might not come up in the company’s manual search efforts to find all of that customer’s data. But since TextIQ understands this unstructured data with a person-centric approach, that email wouldn’t slip by its system, according to Agarwal.

Given the sensitivity of the data, TextIQ functions behind a corporation’s firewall, meaning that TextIQ simply provides the software to parse the data rather than taking on any liability for the data itself. In other words, the technology comes to the data, and not the other way around.

TextIQ operates on a tiered subscription model and prices the product at a fraction of the savings clients realize when they switch over from manual search. The company declined to share further details on pricing.

Former Apple and Oracle General Counsel Dan Cooperman, former Verizon General Counsel Randal Milch, former Baxter International Global General Counsel Marla Persky, and former Nationwide Insurance Chief Legal and Governance Officer Patricia Hatler are on the advisory board for TextIQ.

The company has plans to go on a hiring spree following the new funding, looking to fill positions in R&D, engineering, product development, finance, and sales. Cofounder and COO Omar Haroun added that the company achieved profitability in its first quarter entering the market and has been profitable for eight consecutive quarters.

For pen testing firm IOActive, security is cultural not transactional

IOActive may not be a household name but you almost certainly know its work.

The Seattle-headquartered company has been behind some of the most breathtaking hacks in the past decade. Its researchers have broken into in-flight airplanes from the ground and reverse engineered an ATM to spit out gobs of cash. One of the company’s most revered hackers discovered a way to remotely shock a pacemaker out of rhythm. And remember that now-infamous hack that remotely killed the engine of a Jeep? That was IOActive, too.

If it’s connected, they will bet that they can hack it.

IOActive has made a name for itself with its publicly reported findings, but its bread and butter is helping its corporate customers better understand how they approach security.

Founded more than two decades ago, the penetration testing and ethical hacking company now serves customers, mostly among the Global 1000, helping them assess and test their security posture.

“You can have the absolute most sophisticated alarm in the entire world, and I guarantee our team can break in,” said Jennifer Steffens, IOActive’s chief executive, in a call with TechCrunch. “But if you left your front door unlocked, hackers are going to walk right through.”

“Don’t pay us to show you how to break into the alarm before someone learns how to lock the door,” she said.

House lawmakers demand end to warrantless collection of Americans’ data

Two House lawmakers are pushing an amendment that would effectively defund a massive data collection program run by the National Security Agency unless the government promises to not intentionally collect data of Americans.

The bipartisan amendment — just 15 lines in length — would compel the government to not knowingly collect communications — like emails, messages and browsing data — on Americans without a warrant.

Reps. Justin Amash (R-MI, 3rd) and Zoe Lofgren (D-CA, 19th) have already garnered the support from some of the largest civil liberties and rights groups, including the ACLU, the EFF, FreedomWorks, New America, and the Sunlight Foundation.

The Amash-Lofgren amendment. (Image: supplied)

Under the current statute, the NSA can use its Section 702 powers to collect and store the communications of foreign targets located outside the U.S. by tapping into the fiber cables owned and run by U.S. telecom giants. But this massive collection effort also inadvertently vacuums up the data of Americans, who are typically protected from unwarranted searches under the Fourth Amendment.

The government has consistently declined to say how many Americans are caught up in the NSA’s data collection. For the 2018 calendar year, it said it made over 9,600 warrantless searches of Americans’ communications, up 28 percent year over year.

In a letter to lawmakers, the groups said the amendment — if passed into law — would “significantly advance the privacy rights of people within the United States.”

Last year, Section 702 was reauthorized with almost no changes, despite a rash of complaints and concerns raised by lawmakers following the Edward Snowden disclosures of mass surveillance.

The EFF said in a blog post Tuesday that lawmakers “must vote yes in order to make this important corrective.”

Leaving for a competitor? Onboarding new employees? Avoid accusations of trade secret theft

When a company hires talent away from a competitor, onboarding the new employee can pose significant legal risks for both the company and the new employee. A fundamental aspect of Silicon Valley is that employees are generally free to move between competitors.

This unrestricted movement of talent facilitates the robust competition that helps drive the Silicon Valley economy. While this is no doubt positive, unfettered employment mobility also creates unique challenges when it comes to protecting a company’s trade secrets, which are the lifeblood of many Silicon Valley companies.

Because of California’s policies regarding free employment mobility, unlike in most other states, California companies cannot protect their trade secrets with non-compete contracts. So, they instead rely heavily on trade secret laws for protection.

And, of course, when trade secret theft occurs, it is often when an employee transitions from one company to another. Thus, when a key employee gives notice that he or she is leaving for a competitor, it sets off alarm bells for the soon-to-be former company.

Unfortunately, because of the hypersensitivity to protecting trade secrets, many departing employees who have no interest in actually taking their former company’s trade secrets get accused of theft. This allegation can trigger a long, stressful, expensive legal process for both the employee and the new company, and sometimes cost the employee his or her reputation and new job.

This article explains how this situation arises and provides some practical considerations for how the employee transitioning jobs, and the onboarding company, can avoid an unnecessary legal fight.

1. California companies’ aggressive protection of trade secrets.

Tim Cook says Silicon Valley has created too much chaos

Apple CEO Tim Cook gave a speech to Stanford graduates this weekend. In addition to the usual motivational stuff, he attacked other big tech companies in a not-so-subtle way. He painted a stark picture of Silicon Valley, saying it’s responsible for too many mistakes.

“Today we gather at a moment that demands some reflections. Fueled by caffeine and code, optimism and idealism, conviction and creativity, generations of Stanford graduates and dropouts have used technology to remake our society,” Cook said.

“But I think you would agree that lately the results haven’t been neat or straightforward. In just the four years that you’ve been here, things feel like they’ve taken a sharp turn. Crisis has tempered optimism. Consequences have challenged idealism. And reality has shaken blind faith.”

In order to counterbalance that statement, Cook named a few great inventions that were born in Silicon Valley, from Hewlett Packard’s oscillator to the iPhone, social media, shareable videos and Snapchat stories — YouTube and Facebook weren’t named directly.

“But lately it seems this industry is becoming better known for a less noble innovation — the belief that you can claim credit without accepting responsibility. We see it every day now with every data breach, every privacy violation, every blind eye turned to hate speech, fake news poisoning our national conversation, the false miracles in exchange for a single drop of your blood,” he continued.

“It feels a bit crazy that anyone has to say this, but if you built a chaos factory, you can’t dodge responsibility for the chaos. Taking responsibility means having the courage to think things through,” he added later.

Cook then focused on privacy, a topic that is dear to his heart. “If we accept as normal and unavoidable that anything in our lives can be aggregated, sold or even leaked in the event of a hack, then we lose so much more than data. We lose the freedom to be human,” he said. “The chilling effect of digital surveillance is profound and it touches everything.”

Sure, it’s easier to say that as the CEO of Apple. The company still generates the vast majority of its revenue from hardware. And Apple can be criticized for many business practices as well. But it’s hard to disagree with Cook on this topic.

Millions of Venmo transactions scraped in warning over privacy settings

A computer science student has scraped seven million Venmo transactions to prove that users’ public activity can still be easily obtained, a year after a privacy researcher downloaded hundreds of millions of Venmo transactions in a similar feat.

Dan Salmon said he scraped the transactions over a cumulative six months to raise awareness and warn users to set their Venmo payments to private.

The peer-to-peer mobile payments service faced criticism last year after Hang Do Thi Duc, a former Mozilla fellow, downloaded 207 million transactions. The scraping effort was possible because Venmo payments between users are public by default. The scrapable data inspired several new projects — including a bot that tweeted out every time someone bought drugs.

A year on, Salmon showed little has changed and that it’s still easy to download millions of transactions through the company’s developer API without obtaining user permission or needing the app.

Using that data, anyone can view a user’s entire public transaction history: who they shared money with, when, and in some cases for what reason, including illicit goods and substances.

“There’s truly no reason to have this API open to unauthenticated requests,” he told TechCrunch. “The API only exists to provide like a scrolling feed of public transactions for the home page of the app, but if that’s your goal then you should require a token with each request to verify that the user is logged in.”
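The fix Salmon describes, verifying a session token on every request instead of serving the feed to anonymous callers, can be sketched server-side in a few lines. The endpoint shape, token scheme, and field names below are hypothetical illustrations, not Venmo’s actual API:

```python
import hashlib
import hmac

# Hypothetical server-side secret; a real service would manage this securely.
SERVER_SECRET = b"replace-with-a-real-server-side-secret"

def make_token(user_id: str) -> str:
    """Issue a session token at login, derived from the user id."""
    return hmac.new(SERVER_SECRET, user_id.encode(), hashlib.sha256).hexdigest()

def handle_feed_request(user_id: str, token: str) -> dict:
    """Serve the public-transactions feed only to authenticated callers."""
    if not hmac.compare_digest(token, make_token(user_id)):
        # Reject unauthenticated scrapers rather than serving the feed openly.
        return {"status": 401, "error": "authentication required"}
    return {"status": 200, "feed": ["...recent public transactions..."]}
```

A check like this wouldn’t make public-by-default payments private, but it would stop the bulk, unauthenticated downloads Salmon demonstrated.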

He published the scraped data on his GitHub page.

Venmo has done little to curb the privacy issue for its 40 million users since the scraping effort blew up a year ago. It reacted by changing its privacy guide and later updated its app to remove a warning shown when users switched their default privacy setting from public to private.


Instead, Venmo has focused its efforts on making the data harder to scrape rather than addressing the underlying privacy issues.

When Dan Gorelick first sounded the alarm on Venmo’s public data in 2016, few limits on the API meant anyone could scrape data in bulk and at speed. Other researchers, like Johnny Xmas, have since said that Venmo restricted its API to limit what historical data can be collected. But Venmo’s most recent limits still allowed Salmon to pull 40 transactions per minute. That amounts to about 57,600 scraped transactions each day, he said.
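The arithmetic behind that daily figure is straightforward:

```python
# Back-of-the-envelope check of the scraping rate Salmon reported.
per_minute = 40
per_day = per_minute * 60 * 24  # minutes per hour * hours per day
print(per_day)  # 57600, matching the roughly 57,600 transactions per day cited
```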

Last year, PayPal — which owns Venmo — settled with the Federal Trade Commission over privacy and security violations. The company was criticized for misleading users over its privacy settings. The FTC said users weren’t properly informed that some transactions would be shared publicly, and that Venmo misrepresented the app’s security by saying it was “bank-grade,” which the FTC disputed.

Juliet Niczewicz, a spokesperson for PayPal, did not return a request for comment.