Google extends its BeyondCorp security model to G Suite

BeyondCorp is Google’s model for securing networks not just through VPNs and other endpoint security techniques, but through context-aware access policies based on the user’s identity, their hardware and the context of each request. That has been Google’s internal security model for a while now, and over the last few months the company has started bringing it to its own customers, too, starting with its Cloud Identity-Aware Proxy, which is now generally available, and its VPC Service Controls.

Today, the company is extending these context-aware access capabilities to its Cloud Identity user and device management service, as well as to G Suite, its productivity suite. While earlier implementations centered on protecting a company’s technical cloud infrastructure, this release focuses on devices and cloud-based apps like Gmail, Drive, Docs, Sheets and Calendar.

In this context, some devices may be more highly trusted than others because they have been enrolled in the Cloud Identity service and have a number of security policies in place. That’s a different kind of security posture from one that simply trusts users because they come through a specific VPN.

Context-aware access for G Suite apps is now in beta, but only for customers who subscribe to Cloud Identity Premium, G Suite Enterprise or G Suite Enterprise for Education.

With today’s release, Google also announced the BeyondCorp Alliance, which brings together a number of security and management partners. These include Check Point, Lookout, Palo Alto Networks, Symantec and VMware. According to Google, these companies are all working to bring device posture data to Google’s context-aware access engine.

A powerful malware that tried to blow up a Saudi plant strikes again

A highly capable malware reportedly used in a failed plot to blow up a Saudi petrochemical plant has now been linked to a second compromised facility.

FireEye researchers say the unnamed “critical infrastructure” facility was the latest victim of the powerful Triton malware, the umbrella term for a series of malicious custom components used to launch targeted attacks.

Triton, previously linked to the Russian government, is designed to burrow into a target’s network and sabotage its industrial control systems, which are used in power plants and oil refineries to control the operations of the facility. By compromising these controls, a successful attack can cause significant disruption — even destruction.

According to the security company’s latest findings out Wednesday, the hackers waited almost a year after their initial compromise of the facility’s network before they launched a deeper assault, taking the time to prioritize learning what the network looked like and how to pivot from one system to another. The hackers’ goal was to quietly gain access to the facility’s safety instrumented system, an autonomous monitor that ensures physical systems don’t operate outside of their normal operational state. These critical systems are strictly segmented from the rest of the network to prevent any damage in the event of a cyberattack.

But the hackers were able to gain access to the critical safety system, and focused on finding a way to effectively deploy Triton’s payloads to carry out their mission without causing the systems to enter into a safe fail-over state.

In the case of the August 2017 attack in which Triton was deployed, the Saudi facility would have been destroyed had it not been for a bug in the code.

“These attacks are also often carried out by nation states that may be interested in preparing for contingency operations rather than conducting an immediate attack,” FireEye’s report said. “During this time, the attacker must ensure continued access to the target environment or risk losing years of effort and potentially expensive custom [industrial control system] malware. This attack was no exception.”

FireEye would not comment on the type of facility, its location or even the year of the attack, but said the intrusion was likely intended to cause damage.

“We assess the group was attempting to build the capability to cause physical damage at the facility when they accidentally caused a process shutdown that led to the Mandiant investigation,” said Nathan Brubaker, senior manager of analysis at FireEye, in an email to TechCrunch describing the first incident. He would not comment on the motives behind the attack on the second facility.

But the security firm warned that the attackers’ slow and steady approach — moving carefully and precisely so as not to trigger any alarms — showed a deep focus on not getting caught. That, it said, suggests there may be other targets beyond the second facility “where the [hacker group] was or still is present.”

The security company published lists of hashes unique to the files found in the second facility’s attack, in the hope that IT staff at other at-risk industries and facilities can check for any compromise.

“Not only can these [tactics, techniques and procedures] be used to find evidence of intrusions, but identification of activity that has strong overlaps with the actor’s favored techniques can lead to stronger assessments of actor association, further bolstering incident response efforts,” the company said.
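In practice, checking for these indicators of compromise can be as simple as hashing files and comparing them against the published list. Here is a minimal sketch in Python — the hash value and scan path are hypothetical placeholders, and it assumes the published indicators are SHA-256 file hashes of the Triton components:

```python
import hashlib
from pathlib import Path

# Hypothetical indicator list -- in practice, populate this with the
# file hashes FireEye published for the second Triton intrusion.
KNOWN_BAD_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of(path: Path) -> str:
    """Hash a file in 1 MB chunks so large binaries don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(root: Path) -> None:
    """Walk a directory tree and flag any file matching a known-bad hash."""
    for path in root.rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_BAD_SHA256:
            print(f"Possible Triton component: {path}")

if __name__ == "__main__":
    scan(Path("/opt/engineering-workstation"))  # hypothetical scan root
```

Real-world IOC scanning tools layer on caching, YARA rules and network indicators, but the core comparison is this simple.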

One-hour terrorist takedowns backed by EU parliament’s civil liberties committee

The European Parliament’s civil liberties committee (Libe) voted yesterday to back proposed legislation for a one-hour takedown rule for online terrorist content, which critics argue will force websites to filter uploads.

MEPs on the committee also backed big penalties for service providers that systematically and persistently fail to abide by the law — agreeing they could be sanctioned with up to 4% of their global turnover, per the Commission’s original proposal.

However, the committee rejected a push by the EU’s executive for the law to include a so-called ‘duty of care’ obligation, under which Internet firms would have had to take proactive measures, including the use of automated detection tools. Critics have suggested this would create a general obligation on platforms to monitor content and filter uploads.

The Libe voted against a general obligation on hosts to monitor the information they transmit or store, and against them having to actively seek facts indicating illegal activity.

“If a company has been subject to a substantial number of removal orders, the authorities may request that it implements additional specific measures (e.g. regularly reporting to the authorities, or increasing human resources). The Civil Liberties Committee voted to exclude from these measures the obligation to monitor uploaded content and the use of automated tools,” it noted in a press release following the vote — which was carried by 35 votes to 1 (with 8 abstentions).

“Moreover, any decision in this regard should take into account the size and economic capacity of the enterprise and “the freedom to receive and impart information and ideas in an open and democratic society”,” the committee added.

Nonetheless, critics argue that a one-hour rule for terrorist takedowns will bring in filters by the backdoor and/or result in smaller websites being forced to operate on larger platforms to avoid having to comply with a stringent, one-size-fits-all deadline.

The Commission set out its proposal for new rules on online terrorist content removals last fall, though social media platforms have operated under an informal one-hour rule for taking down illegal content across the region for more than a year.

The draft law seeks to turn that earlier informal rule into formal legislation. But it would apply to any Internet company hosting content that receives a takedown notice about terrorist material from a competent national authority — regardless of the company’s size — hence attracting criticism for the burden it could place on smaller website operators.

The Libe committee did make some changes to the proposals aimed at helping smaller websites.

Specifically, it decided that the competent authority should contact companies that have never received a removal order to provide them with information on procedures and deadlines — and do so at least 12 hours before issuing the first order to remove content they are hosting.

Commenting in a statement, Daniel Dalton (ECR, UK), EP rapporteur for the proposal, said: “Any new legislation must be practical and proportionate if we are to safeguard free speech. Without a fair process we risk the over-removal of content as businesses would understandably take a safety first approach to defend themselves. It also absolutely cannot lead to a general monitoring of content by the back door.”

However, tweeting after the Libe vote, one vocal critic of the draft legislation — Pirate Party member and MEP Julia Reda — argued the Libe’s 12-hour rule will do little to help website owners.

“That’s not even enough time to [be] able to switch off your phone over the weekend,” she wrote, dubbing the proposal “a catastrophe for work-life balance of small business owners and hobbyist websites”.

There is also the question of how online terrorist content is defined.

The Commission proposal says it refers to material and information posted online that “incites, encourages or advocates terrorist offences, provides instructions on how to commit such crimes or promotes participation in activities of a terrorist group”.

“When assessing whether online content constitutes terrorist content, the authorities responsible as well as hosting service providers should take into account factors such as the nature and wording of the statements, the context in which the statements were made, including whether it is disseminated for educational, journalistic or research purposes, and the potential to lead to harmful consequences,” runs a Commission Q&A on the draft law from September.

The committee backed protections for terrorist content disseminated for educational, journalistic or research purposes, and agreed with the earlier Commission caveat that the expression of polemic or controversial views on sensitive political questions should not be considered terrorist content.

Though, again, critics aren’t convinced the legislation won’t result in chilled speech across the bloc as platforms and websites seek to shrink their compliance risk.

The European Parliament as a whole will vote on the draft law next week, after which a new parliament — chosen in next month’s elections — will be in charge of negotiating with Member State representatives in the Council of Ministers, a process that will determine the final form of the legislation.

No one, not even the Secret Service, should randomly plug in a strange USB stick

If you’ve been on Twitter today, you’ve probably seen one story making the rounds.

The case follows a Chinese national, Yujing Zhang, who is accused of trying to sneak into President Trump’s private Florida resort, Mar-a-Lago, last month. The Secret Service caught her with four cellphones, a laptop, cash, an external hard drive, a thumb drive and a signals detector used to spot hidden cameras.

The arrest sparked new concerns about the president’s security amid concerns that foreign governments have tried to infiltrate the resort.

Allegations aside, what set alarm bells ringing was how the Secret Service apparently handled the USB drive. To put it mildly, it was not good.

From the Miami Herald:

Secret Service agent Samuel Ivanovich, who interviewed Zhang on the day of her arrest, testified at the hearing. He stated that when another agent put Zhang’s thumb-drive into his computer, it immediately began to install files, a “very out-of-the-ordinary” event that he had never seen happen before during this kind of analysis. The agent had to immediately stop the analysis to halt any further corruption of his computer, Ivanovich said. The analysis is ongoing but still inconclusive, he testified.

What’s the big deal, you might think? USB keys are a surprisingly easy and effective way to install malware — or even to destroy computers. In 2016, security researcher Elie Bursztein found that dropping malware-laden USB sticks was an “effective” way of tricking someone into plugging one into their computer. As soon as the drive is plugged in, it can begin dropping malware that remotely surveils and controls the affected device — and spreads throughout a network. Some USB drives can even fry the innards of certain computers.

It didn’t take long for security folks to seize on the security snafu.

Jake Williams, founder of Rendition Infosec and a former NSA hacker, said the agent’s actions “threatened his own computing system and possibly the rest of the Secret Service network.”

“It’s entirely possible that the sensitivities over determining whether Zhang was targeting Mar-a-Lago or the president — or whether she was a legitimate guest or member — may have contributed to the agent’s actions on the ground,” he said. “Never before has the Secret Service had to deal with this type of scenario and they’re probably still working out the playbook.”

The big question is whether or not the agent’s computer was airgapped — that is, completely isolated from the internet and any other computer. We’ve asked the Secret Service for comment.

Williams said the best way to forensically examine a suspect USB drive is by plugging the device into an isolated Linux-based computer that doesn’t automatically mount the drive to the operating system.

“We would then create a forensic image of the USB and extract any malware for analysis in the lab,” he said. “While there is still a very small risk that the malware targets Linux, that’s not the normal case.”
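As a rough sketch of what Williams describes — assuming automounting is already disabled on the isolated analysis machine, and with the device node below a purely hypothetical example — imaging a suspect drive boils down to a raw copy of the block device, hashed as it is captured so the evidence can be verified later. Practitioners would typically also use a hardware write blocker and dedicated forensic tooling:

```python
import hashlib

# Hypothetical device node -- confirm with `lsblk` before imaging, and make
# sure the isolated analysis box has automounting disabled so nothing on
# the stick is executed or mounted.
DEVICE = "/dev/sdb"
IMAGE = "suspect-usb.dd"

def image_device(device: str, image: str, chunk: int = 1 << 20) -> str:
    """Copy the raw device into an image file, hashing as we go so the
    evidence copy can be verified against the original later."""
    digest = hashlib.sha256()
    with open(device, "rb") as src, open(image, "wb") as dst:
        while True:
            block = src.read(chunk)
            if not block:
                break
            dst.write(block)
            digest.update(block)
    return digest.hexdigest()

if __name__ == "__main__":
    print("sha256:", image_device(DEVICE, IMAGE))
```

Any malware extraction and analysis then happens against the image, never the original stick.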

But based on the agent’s description, it doesn’t sound like many — or any — such precautions were taken.

A powerful spyware app now targets iPhone owners

Security researchers have discovered that a powerful surveillance app first designed for Android devices can now target victims with iPhones.

Researchers at mobile security firm Lookout, who found the spy app, said its developer abused Apple-issued enterprise certificates to bypass the tech giant’s app store and infect unsuspecting victims.

Once installed, the app — disguised as a carrier assistance app — can silently grab a victim’s contacts, audio recordings, photos, videos and other device information, including their real-time location data. It can be remotely triggered to listen in on people’s conversations, the researchers found. Although there was no data to show who might have been targeted, the researchers noted that the malicious app was served from fake sites purporting to belong to cell carriers in Italy and Turkmenistan.

The app is one of several under the so-called “stalkerware” umbrella, apps that can be surreptitiously installed on a victim’s phone to spy on their activity, location and messages in real time.

Researchers linked the app to a previously discovered Android app developed by the same Italian surveillance app maker, Connexxa.

The Android app, dubbed Exodus, ensnared hundreds of victims — either by installing it themselves or by having it installed on their devices. Exodus had a larger feature set, expanding its spying capabilities by downloading an additional exploit designed to gain root access to the device and give the app near-complete access to a device’s data, including emails, cellular data, Wi-Fi passwords and more, according to Security Without Borders.

Screenshots of the ordinary-looking iPhone app, which was silently uploading a victim’s private data and real-time location to the spyware company’s servers (Image: supplied)

Both apps used the same backend infrastructure, and the iOS app employed several techniques — like certificate pinning — to make its network traffic difficult to analyze, Adam Bauer, Lookout’s senior staff security intelligence engineer, told TechCrunch.

“This is one of the indicators that a professional group was responsible for the software,” he said.
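Certificate pinning means the app will only talk to a server presenting a certificate (or key) it already knows, which is what defeats the interception proxies analysts normally use to inspect traffic. A minimal sketch of the idea in Python — the host and pinned fingerprint below are hypothetical, and real apps typically pin the public key rather than the whole leaf certificate:

```python
import hashlib
import socket
import ssl

HOST = "command-server.example.com"   # hypothetical backend host
EXPECTED_FINGERPRINT = "0" * 64       # placeholder SHA-256 pin shipped in the app

def leaf_cert_fingerprint(host: str, port: int = 443) -> str:
    """Return the SHA-256 fingerprint of the server's leaf certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

# An interception proxy can present a certificate the OS trusts, but it
# cannot match the pinned fingerprint -- so the client refuses to talk.
if leaf_cert_fingerprint(HOST) != EXPECTED_FINGERPRINT:
    raise ConnectionError("certificate fingerprint mismatch: refusing connection")
```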

Although the Android version was downloadable directly from Google’s app store, the iOS version was not widely distributed. Instead, Connexxa signed the app with an enterprise certificate issued to the developer by Apple, said Bauer, allowing the surveillance app maker to bypass Apple’s strict app store checks.

Apple says that’s a violation of its rules, which prohibit these certificates — designed strictly for internal apps — from being distributed to consumers.

The pattern is similar to that of several app makers, discovered by TechCrunch earlier this year, which abused their enterprise certificates to distribute mobile apps that evaded the scrutiny of Apple’s app store. Every app served through the app store has to be certified by Apple or it won’t run. But several companies, including Facebook and Google, used their enterprise-only certificates to sign apps given to consumers. Apple said this violated its rules and banned the apps by revoking the enterprise certificates used by Facebook and Google, knocking both of their illicit apps offline — along with every other internal app signed with the same certificates.

Facebook was unable to operate at full capacity for an entire working day until Apple issued a new certificate.

The certificate Apple issued to Connexxa (Image: supplied)

But Facebook and Google weren’t the only companies abusing their enterprise certificates. TechCrunch found dozens of porn and gambling apps — not permitted on Apple’s app store — signed with an enterprise certificate, circumventing the tech giant’s rules.

After the researchers disclosed their findings, Apple revoked the app maker’s enterprise certificate, knocking every installed copy of the app offline and leaving it unable to run.

The researchers said they did not know how many Apple users were affected.

Connexxa did not respond to a request for comment. Apple did not comment.

UK sets out safety-focused plan to regulate Internet firms

The UK government has laid out proposals to regulate online and social media platforms, setting out the substance of its long-awaited White Paper on online harms today — and kicking off a public consultation.

The Online Harms White Paper is a joint proposal from the Department for Digital, Culture, Media and Sport (DCMS) and Home Office.

It follows the government’s announcement of its policy intent last May, and a string of domestic calls for greater regulation of the Internet as politicians have responded to rising concern about the mental health impacts of online content.

The government is now proposing to put a mandatory duty of care on platforms to take reasonable steps to protect their users from a range of harms — including, but not limited to, illegal material such as terrorist content and child sexual exploitation and abuse, which will be covered by further, more stringent requirements under the plan.

The approach is also intended to address a range of content and activity that’s deemed harmful.

Examples provided by the government of the sorts of broader harms it’s targeting include inciting violence and violent content; encouraging suicide; disinformation; cyberbullying; and inappropriate material being accessed by children.

Content promoting suicide has been thrown into the public spotlight in the UK in recent months, following media reports about a schoolgirl whose family found out, after she killed herself, that she had been viewing pro-suicide content on Instagram.

The Facebook-owned platform subsequently agreed to change its policies on suicide content, saying it would start censoring graphic images of self-harm, after pressure from ministers.

Commenting on the publication of the White Paper today, digital secretary Jeremy Wright said: “The era of self-regulation for online companies is over. Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough. Tech can be an incredible force for good and we want the sector to be part of the solution in protecting their users. However those that fail to do this will face tough action.

“We want the UK to be the safest place in the world to go online, and the best place to start and grow a digital business, and our proposals for new laws will help make sure everyone in our country can enjoy the Internet safely.”

In another supporting statement Home Secretary Sajid Javid added: “The tech giants and social media companies have a moral duty to protect the young people they profit from. Despite our repeated calls to action, harmful and illegal content – including child abuse and terrorism – is still too readily available online.

“That is why we are forcing these firms to clean up their act once and for all. I made it my mission to protect our young people – and we are now delivering on that promise.”

Children’s charity, the NSPCC, was among the sector bodies welcoming the proposal.

“This is a hugely significant commitment by the Government that once enacted, can make the UK a world pioneer in protecting children online,” wrote CEO Peter Wanless in a statement.

“For too long social networks have failed to prioritise children’s safety and left them exposed to grooming, abuse, and harmful content. So it’s high time they were forced to act through this legally binding duty to protect children, backed up with hefty punishments if they fail to do so.”

However, the Internet Watch Foundation, which works to stop the spread of child exploitation imagery online, warned against unintended consequences from badly planned legislation — and urged the government to take a “balanced approach”.

The proposed laws would apply to any company that allows users to share or discover user generated content or interact with each other online — meaning companies both big and small.

Nor is it just social media platforms: file hosting sites, public discussion forums, messaging services and search engines are among those falling under the planned law’s remit.

The government says a new independent regulator will be introduced to ensure Internet companies meet their responsibilities, with ministers consulting on whether this should be a new or existing body.

Telecoms regulator Ofcom has been rumored as one possible contender, though the UK’s data watchdog, the ICO, has previously suggested it should be involved in any Internet oversight given its responsibility for data protection and privacy. (According to the FT a hybrid entity combining the two is another possibility — although it reports that the government remains genuinely undecided on who the regulator will be.)

The future Internet watchdog will be funded by industry in the medium term, with the government saying it’s exploring options such as an industry levy to put it on a sustainable footing.

On the enforcement front, the watchdog will be armed with a range of tools — with the government consulting on powers for it to issue substantial fines; block access to sites; and potentially to impose liability on individual members of senior management.

So there’s at least the prospect of a high-profile social media CEO being threatened with UK jail time in future if they don’t do enough to remove harmful content.

On the financial penalties front, Wright suggested in an interview on Sky News that the government is entertaining GDPR-level fines of as much as 4% of a company’s annual global turnover.

Other elements of the proposed framework include giving the regulator the power to force tech companies to publish annual transparency reports on the amount of harmful content on their platforms and what they are doing to address it; to compel companies to respond to users’ complaints and act to address them quickly; and to comply with codes of practice issued by the regulator, such as requirements to minimise the spread of misleading and harmful disinformation with dedicated fact checkers, particularly during election periods.

A long-running enquiry by a DCMS parliamentary committee into online disinformation, which was repeatedly frustrated in its attempts to get Facebook founder Mark Zuckerberg to testify before it, concluded last year with a laundry list of recommendations for tightening regulations around digital campaigning.

The committee also recommended clear legal liabilities for tech companies to act against “harmful or illegal content”, and suggested a levy on tech firms to support enhanced regulation.

Responding to the government’s White Paper in a statement today, DCMS committee chair Damian Collins broadly welcomed the government’s proposals — though he also pressed for the future regulator to have the power to conduct its own investigations, rather than relying on self-reporting by tech firms.

“We need a clear definition of how quickly social media companies should be required to take down harmful content, and this should include not only when it is referred to them by users, but also when it is easily within their power to discover this content for themselves,” Collins wrote.

“The regulator should also give guidance on the responsibilities of social media companies to ensure that their algorithms are not consistently directing users to harmful content.”

Another element of the government’s proposal is a “Safety by Design” framework that’s intended to help companies incorporate online safety features in new apps and platforms from the start.

The government also wants the regulator to head up a media literacy strategy that’s intended to equip people with the knowledge to recognise and deal with a range of deceptive and malicious behaviours online, such as catfishing, grooming and extremism.

It writes that the UK is committed to a free, open and secure Internet — and makes a point of noting that the watchdog will have a legal duty to pay “due regard” to innovation, and also to protect users’ rights online, being particularly mindful not to infringe privacy and freedom of expression.

It therefore suggests technology will be an integral part of any solution, saying the proposals are designed to promote a culture of continuous improvement among companies — and highlighting technologies such as Google’s “Family Link” and Apple’s Screen Time app as examples of the sorts of developments it wants the policy framework to encourage.

Such caveats are unlikely to do much to reassure those concerned the approach will chill online speech, and/or place an impossible burden on smaller firms with fewer resources to monitor what their users are doing.

“The government’s proposals would create state regulation of the speech of millions of British citizens,” warns digital and civil rights group the Open Rights Group, in a statement by its executive director, Jim Killock. “We have to expect that the duty of care will end up widely drawn, with serious implications for legal content that is deemed potentially risky, whether it really is or not.

“The government refused to create a state regulator for the press because it didn’t want to be seen to be controlling free expression. We are skeptical that state regulation is the right approach.”

UK startup policy advocacy group Coadec was also quick to voice concerns — warning that the government’s plans will “entrench the tech giants, not punish them”.

“The vast scope of the proposals means they cover not just social media but virtually the entire internet – from file sharing to newspaper comment sections. Those most impacted will not be the tech giants the Government claims they are targeting, but everyone else. It will benefit the largest platforms with the resources and legal might to comply – and restrict the ability of British startups to compete fairly,” said Coadec executive director Dom Hallas in a statement. 

“There is a reason that Mark Zuckerberg has called for more regulation. It is in Facebook’s business interest.”

UK tech industry association techUK also put out a response statement, warning about the need to avoid disproportionate impacts.

“Some of the key pillars of the Government’s approach remain too vague,” said Vinous Ali, head of policy at techUK. “It is vital that the new framework is effective, proportionate and predictable. Clear legal definitions that allow companies in scope to understand the law and therefore act quickly and with confidence will be key to the success of the new system.

“Not all of the legitimate concerns about online harms can be addressed through regulation. The new framework must be complemented by renewed efforts to ensure children, young people and adults alike have the skills and awareness to navigate the digital world safely and securely.”

The government has launched a 12-week consultation on the proposals, after which it says it will set out the action it will take in developing its final proposals for legislation.

Last month a House of Lords committee recommended an overarching super regulator be established to plug any gaps and/or handle overlaps in rules on Internet platforms, arguing that “a new framework for regulatory action” is needed to handle the digital world.

Though the government appears confident at this stage that an Internet regulator will be able to navigate any legislative patchwork and keep tech firms in line on its own.

The House of Lords committee was another that came down in support of a statutory duty of care for online services hosting user-generated content, suggesting it should have a special focus on children and “the vulnerable in society”. And there’s no doubt the concept of regulating Internet platforms has broad consensus among UK politicians — on both sides of the aisle.

But how to do that effectively and proportionately is another matter.

We reached out to Facebook and Google for a response to the White Paper.

Commenting on the Online Harms White Paper in a statement, Rebecca Stimson, Facebook’s head of UK public policy, said: “New rules for the internet should protect society from harm while also supporting innovation, the digital economy and freedom of speech. These are complex issues to get right and we look forward to working with the Government and Parliament to ensure new regulations are effective.”

Stimson also reiterated that Facebook has expanded the number of staff it has working on trust and safety issues to 30,000 in recent years, and claimed it has invested heavily in technology to help prevent abuse — while conceding that “we know there is much more to do”.

Last month the company revealed shortcomings in its safety measures around livestreaming, after it emerged that a massacre in Christchurch, New Zealand, which was livestreamed to Facebook’s platform, had not been flagged for accelerated review by moderators because it was not tagged as suicide-related content.

Facebook said it would be “learning” from the incident and “re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review”.

In its response to the UK government White Paper today, Stimson added: “The internet has transformed how billions of people live, work and connect with each other, but new forms of communication also bring huge challenges. We have responsibilities to keep people safe on our services and we share the government’s commitment to tackling harmful content online. As Mark Zuckerberg said last month, new regulations are needed so that we have a standardised approach across platforms and private companies aren’t making so many important decisions alone.”

TrickBot malware attacks are ramping up ahead of Tax Day

A powerful data-stealing malware campaign with a tax theme is on the rise, targeting unsuspecting filers ahead of Tax Day.

TrickBot, a financially motivated trojan, infects Windows computers through a malicious Excel document sent via a specially crafted email. Once a machine is infected, the malware targets vulnerable devices on the network and combs for passwords and banking information to send back to the attacker. The collected information can then be used to steal funds and commit fraud. The ever-expanding malware is continually developed to collect as many credentials as possible.
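Because the lure documents are macro-enabled Office files, one simple defensive triage step is to check whether an attachment contains a VBA macro at all. Modern Office documents are zip archives, with macros stored in a vbaProject.bin entry; here is a minimal sketch (the attachment name is hypothetical):

```python
import zipfile

def has_vba_macros(path: str) -> bool:
    """Return True if a modern Office file (.xlsm, .docm, etc.) embeds VBA macros."""
    try:
        with zipfile.ZipFile(path) as doc:
            return any(name.endswith("vbaProject.bin") for name in doc.namelist())
    except zipfile.BadZipFile:
        # Legacy binary formats (.xls, .doc) are not zip archives and need
        # a dedicated parser such as oletools' olevba.
        return False

print(has_vba_macros("tax_invoice_2019.xlsm"))  # hypothetical attachment
```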

By stealing tax documents, the scammers can also file fraudulent end-of-year tax forms to reap the refunds. The Internal Revenue Service said fraudsters scammed the agency out of more than $1.6 million in fraudulent returns during the 2016 tax year.

IBM X-Force researchers say the attackers have begun impersonating emails from three of the largest accounting and payroll providers, including ADP and Paychex, by registering similar-looking domains — known as domain squatting.
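Domain squatting works because the fraudulent name sits only an edit or two away from the brand it impersonates. A toy illustration of the idea — the flagged domain below is made up for the example:

```python
LEGITIMATE = {"adp.com", "paychex.com"}

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def looks_squatted(domain: str, max_edits: int = 2) -> bool:
    """Flag domains within a couple of edits of a known brand, excluding exact matches."""
    return any(0 < edit_distance(domain, legit) <= max_edits for legit in LEGITIMATE)

print(looks_squatted("paychexx.com"))  # True: one inserted character
```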

One of the spoofed emails impersonating a payroll provider. (Image: supplied)

“We believe this campaign to be highly targeted in its efforts to infiltrate US organizations, with the hallmarks of the TrickBot Trojan gang,” said Limor Kessem, global executive security advisor at IBM. “Since it emerged in 2016, we’ve seen that TrickBot’s operators focus their efforts on businesses and, therefore, manage distribution in ways that would look benign to enterprise users: through booby-trapped productivity files and fake bank websites.”

Where TrickBot traditionally focused on business banking and high-value accounts with private banking and wealth management firms, the malware in recent years has expanded to hit cryptocurrency sites and owners.

“This is not a threat of the past,” said Kessem. “Based on our research, not only is TrickBot one of the most prominent organized crime gangs in the bank fraud arena, we also expect to see it maintain its position on the global malware chart, unless it is interrupted by law enforcement in 2019.”

The malware continues to grow, IBM said. Its backend infrastructure spans at least 2,400 command-and-control servers with hundreds of configurations and versions, and infections are most common in the U.S. and U.K. — seen as high-value regions.

“As cybercriminal gangs of this level continue to gain steam, it’s increasingly important for businesses and consumers to be more aware of their own activity online, even when they’re doing something as simple as clicking on a link in an email,” said Kessem. “Email is an incredibly easy way for an attacker to interact with potential victims, posing as a trusted brand to infiltrate devices and eventually your networks,” she said.

Tax Day is April 15.

Researchers find thriving Facebook cybercrime groups with 385,000 total members

You might be surprised what you can buy on Facebook, if you know where to look. Researchers with Cisco’s Talos security research team have uncovered a wave of Facebook groups dedicated to making money from a variety of illicit and otherwise sketchy online behaviors, including phishing schemes, trading hacked credentials and spamming. The 74 groups the researchers detected boasted a cumulative 385,000 members.

Remarkably, the groups weren’t even really trying to conceal their activities. For example, Talos found posts openly selling credit card numbers with three-digit CVV codes, some with accompanying photos of the card’s owner. According to the research group:

“The majority of these groups use fairly obvious group names, including “Spam Professional,” “Spammer & Hacker Professional,” “Buy Cvv On THIS SHOP PAYMENT BY BTC 💰💵,” and “Facebook hack (Phishing).” Despite the fairly obvious names, some of these groups have managed to remain on Facebook for up to eight years, and in the process acquire tens of thousands of group members.”

Beyond the sale of stolen credentials, Talos documented users selling shell accounts for governments and organizations, promoting their expertise in moving large sums of money and offering to create fake passports and other identifying documents.

The new research isn’t the first time Facebook users have been busted for dealing in cybercrime. In 2018, Brian Krebs reported on 120 groups with a cumulative 300,000-plus members engaged in similar activities, including phishing schemes, spamming, botnets and on-demand DDoS attacks.

As Talos researchers explain in their blog post, “Months later, though the specific groups identified by Krebs had been permanently disabled, Talos discovered a new set of groups, some having names remarkably similar, if not identical, to the groups reported on by Krebs.”

Cybercrime groups are yet another example of the game of enforcement whack-a-mole that Facebook continues to play on its massive platform. At the social network’s scale — and without the company dedicating sufficient resources to more comprehensive detection methods — it’s difficult for Facebook to track the kinds of illicit or potentially harmful behaviors that flourish in unmonitored corners of its sprawling platform.

“While some groups were removed immediately, other groups only had specific posts removed,” Talos researcher Jaeson Schultz wrote. “Eventually, through contact with Facebook’s security team, the majority of malicious groups was quickly taken down, however new groups continue to pop up, and some are still active as of the date of publishing.”

AeroGarden maker says hackers stole months of credit card data

Bad news for home gardeners: criminals might have your credit card data.

AeroGrow, the maker of the at-home garden kit AeroGarden, said in a letter to customers that its website had been infected with credit card scraping malware for more than four months.

The company said anyone who bought something through its website between October 29, 2018 and March 4, 2019 had their credit card number, expiration date and card verification value — also known as a security code — stolen by the malware. In most cases, that’s all someone would need to make fraudulent purchases.

A letter to customers, as submitted to the California attorney general’s office. (Screenshot: TechCrunch)

It’s the latest in a string of high-profile malware attacks targeting websites in the past year. Attackers find a vulnerability, often in the software running a company’s shopping cart, and inject code that scrapes credit card data as it is entered into the form on the site. That data gets siphoned off and sent to a server controlled by the attacker. Because the code runs on the page itself, there’s no discernible or obvious way to tell that a website is affected.
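Defenses do exist: site owners can use Subresource Integrity attributes on script tags, and they can monitor checkout pages from the outside for unexpected script changes. A toy external monitor might look like the sketch below — the URL and baseline hash are hypothetical placeholders:

```python
import hashlib
import urllib.request

# Hypothetical baseline: script URL -> SHA-256 of the content the site
# owner expects to serve. A skimmer injected into a file changes its hash.
BASELINE = {
    "https://shop.example.com/js/checkout.js": "0" * 64,  # placeholder
}

def current_hash(url: str) -> str:
    """Fetch a script and return the SHA-256 of its current contents."""
    with urllib.request.urlopen(url) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

for url, expected in BASELINE.items():
    if current_hash(url) != expected:
        print(f"Script changed -- inspect for injected skimmer code: {url}")
```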

One of the more well-known hacker groups is Magecart, a collective of different hackers of varying skill sets that attacks websites large and small. In the past year, Magecart hackers have targeted Ticketmaster, British Airways and consumer electronics giant Newegg — among many others.

AeroGrow didn’t say how many customers were affected. We’ve reached out and will update if we hear back.

Startup Law A to Z: Regulatory Compliance

Startups are but one species in a complex regulatory and public policy ecosystem. This ecosystem is larger and more powerfully dynamic than many founders appreciate, with distinct yet overlapping laws at the federal, state and local/city levels, all set against a vast array of public and private interests. Where startup founders see opportunity for disruption in regulated markets, lawyers counsel prudence: regulations exist to promote certain strongly-held public policy objectives which (unlike your startup’s business model) carry the force of law.

Snapshot of the regulatory and public policy ecosystem. Image via Law Office of Daniel McKenzie

Although the canonical “ask forgiveness, not permission” approach taken by Airbnb and Uber circa 2009 might lead founders to conclude it is strategically acceptable to “move fast and break things” (including the law), don’t lose sight of the resulting lawsuits and enforcement actions. If you look closely at Airbnb and Uber today, each has devoted immense resources to building regulatory and policy teams, lobbying, public relations and defending lawsuits, while increasingly looking to work within the law rather than outside it – not to mention, in the case of Uber, a change in leadership as well.

Indeed, more recently, examples of founders and startups running into serious regulatory issues are commonplace: in healthcare, where co-founder and CEO Parker Conrad was forced to resign from Zenefits and was later fined approximately $500K; in the securities registration arena, where cryptocurrency startups Airfox and Paragon were each fined $250K and could further be required to return to investors the millions raised through their respective ICOs; in the social media and privacy realm, where TikTok was recently fined $5.7 million for violating COPPA; and in the antitrust context, where tech giant Google is facing billions in fines from the EU.

Suffice it to say, regulation is not a low-stakes table game. In 2017 alone, according to Duff and Phelps, US financial regulators levied $24.4 billion in penalties against companies and another $621.3 million against individuals. Particularly in today’s highly competitive business landscape, even if your startup can financially absorb the fines for non-compliance, the additional stress and distraction for your team may still inflict serious injury, if not an outright death-blow.

The best way to avoid regulatory setbacks is to first understand relevant regulations and work to develop compliant policies and business practices from the beginning. This article represents a step in that direction, the fifth and final installment in Extra Crunch’s exclusive “Startup Law A to Z” series, following previous articles on corporate matters, intellectual property (IP), customer contracts and employment law.

Given the breadth of activities subject to regulation, however, and the many corresponding regulations across federal, state, and municipal levels, no analysis of any particular regulatory framework would be sufficiently complete here. Instead, the purpose of this article is to provide founders a 30,000-foot view across several dozen applicable laws in key regulatory areas, providing a “lay of the land” such that with some additional navigation and guidance, an optimal course may be charted.

The regulatory areas highlighted here include: (a) Taxes; (b) Securities; (c) Employment; (d) Privacy; (e) Antitrust; (f) Advertising, Commerce and Telecommunications; (g) Intellectual Property; (h) Financial Services and Insurance; and finally (i) Transportation, Health and Safety.