US senators aim to amend cybersecurity bill to include crypto

As regulators around the world try to provide frameworks for the digital asset industry, two U.S. senators have introduced a bill to help crypto companies report cybersecurity threats.

U.S. Senators Marsha Blackburn, Republican of Tennessee, and Cynthia Lummis, Republican of Wyoming, exclusively shared with TechCrunch the proposed legislation, the Cryptocurrency Cybersecurity Information Sharing Act, which would amend the Cybersecurity Information Sharing Act of 2015 to include cryptocurrency firms. The bill is endorsed by the Electronic Transactions Association.

“Some bad actors have used cryptocurrency as a way to hide their illegal practices and avoid accountability,” Blackburn said in a statement to TechCrunch. “The Cryptocurrency Cybersecurity Information Sharing Act will update existing regulations to address this misuse directly. It will provide a voluntary mechanism for crypto companies to report bad actors and protect cryptocurrency from dangerous practices.”

The bill aims to mitigate losses from a number of cyber-related incidents, including data breaches, ransomware attacks, business interruption and network damage, it stated.

During the second quarter of this year, there was a significant rise in crypto-focused phishing attacks, according to a report by CertiK. In the first half of this year, over $2 billion was lost to hacks and exploits — more than was lost in all of 2021, in half the time, the report stated.

In general, Lummis has been a vocal supporter of the crypto industry and has sponsored and proposed several bills focused on digital assets in recent months.

In June, Lummis proposed a bipartisan crypto bill alongside Senator Kirsten Gillibrand, Democrat of New York, with a goal of installing guide rails around the digital asset sector. The 69-page bill covered a broad range of crypto market subsectors from how to tax crypto transactions to guidelines for backing stablecoins.

While some see regulation as a threat to innovation and to the decentralized nature of crypto, others disagree. As the crypto industry continues to grow in the public eye, many market players and regulators say there’s a need for greater transparency and clearer frameworks for how digital assets should be monitored.


Amazon Labor Union president tells Senate that workers’ rights aren’t a ‘Democrat or Republican’ issue

Back in February, Amazon had labor organizer Christian Smalls arrested for bringing food to warehouse employees during a union drive. One unfathomably monumental labor victory later, the New Yorker is now speaking before the Senate and visiting President Joe Biden at the White House.

Smalls, the Amazon Labor Union president who led the JFK8 warehouse’s historic union win, testified today in a hearing for the Senate Committee on the Budget. Chaired by Senator Bernie Sanders (I-VT), the hearing posed the question of whether tax dollars should support companies that violate labor laws. Representatives from other groups like Good Jobs First, the Teamsters and the Heritage Foundation joined the hearing as well.

“The types of things Amazon is doing… Breaking the law, intimidation… These are real things that traumatize workers in this country,” Smalls said in his opening statement. “We want to feel that we have protections. We want to feel that the government is allowing us to use our constitutional rights to organize.”

Across the country, Amazon workers have accused the company of trying to quash labor organizing. Last year, Amazonians United co-founder Jonathan Bailey filed a complaint with the National Labor Relations Board (NLRB), stating that the company violated labor laws by retaliating against him for organizing. He said he was detained and interrogated by a manager for 90 minutes after organizing a walkout. The NLRB found merit to these allegations and filed a federal complaint against Amazon. The company settled, and as part of the settlement agreement, was required to remind employees via emails and on physical bulletin boards that they have the right to organize.

Bailey’s complaint to the NLRB was one of 37 against Amazon between February 2020 and March 2021, according to NBC News. But just months after this settlement, Amazon was found to have unlawfully prevented a Staten Island employee from distributing pro-union literature in the break room. Amazon filings with the Department of Labor revealed that the company spent $4.3 million on anti-union consultants last year alone.

Senator Lindsey Graham (R-SC) defended Amazon, accusing Senator Sanders of unfairly targeting the company.

“You’re singling out a single company because of your political agenda to socialize this country,” Senator Graham said. “Every time I turn around, you’re having a hearing about [how] anybody who makes money is bad.”

Graham noted that the NLRB has a process in place for workers to file complaints if they feel they are being treated unfairly, saying that he disagreed with a Senate hearing taking place at all.

“You can have oversight hearings all you like, but you’ve determined Amazon is a piece of crap company. That’s your political bias,” Graham told Sanders. “[Amazon is] subject to laws in the United States, they shouldn’t be subjected to this.”

Christian Smalls, founder of the Amazon Labor Union (ALU), speaks during an ALU rally in the Staten Island borough of New York on Sunday, April 24, 2022. Senator Bernie Sanders visited Staten Island to meet with workers who this month successfully organized the first union in the country at an Amazon facility, and with workers at a separate facility who will vote next week on whether to join a union. Image Credits: Victor J. Blue/Bloomberg via Getty Images

In response, Smalls addressed Senator Graham directly.

“I think that it’s in your best interest to realize that it’s not a left or right thing. It’s not a Democrat or Republican thing. It’s a workers’ issue,” Smalls told the senator. “We are the ones that are suffering in the corporations that you’re talking about, […] in the warehouses that you’re talking about. So that’s the reason why I think I was invited today to speak on that behalf, and you should listen, because we do represent your constituents as well.”

He continued, “The people are the ones that make these corporations go, it’s not the other way around.”

At Senator Sanders’ urging, Smalls explained the working conditions of the now-unionizing fulfillment center where he used to work. He said that workers commuted from all boroughs of New York, as well as parts of New Jersey, which meant that they would commute for about two and a half hours each way, work a 10- to 12-hour shift, and receive minimal break time. He testified that hundreds of union busters came in from across the country, as well as from overseas. These representatives would host “captive audience” anti-union meetings every 20 minutes with groups of 50 to 60 workers. Smalls said that these captive audience presentations happened four times per week.

“Imagine being a new hire at Amazon. Your second day, you don’t even know your job assignment, and the first thing they do is march you into an anti-union propaganda class,” Smalls said. He added that the facility was plastered with anti-union signs, telling workers to vote no to unionization and emphasizing that unions require a dues expense on the workers’ part.

The hearing also addressed the Protecting the Right to Organize (PRO) Act, which recently passed in the House. Currently, in twenty-seven “right-to-work” states, employees cannot be forced to join a union or pay dues, but if the PRO Act passed, it would override “right-to-work” laws. Union organizers believe, though, that “right-to-work” laws exist to discourage unionization, since it’s already federally illegal to force someone to join a union. If the PRO Act passes in the Senate (which isn’t expected, since Democrats don’t have enough seats to overcome the filibuster), it would be one of the biggest reforms of labor legislation since the National Labor Relations Act of 1935, which protects the rights of employees to organize.

After the hearing, Smalls and a number of other labor organizers visited President Biden at the White House.

“Just met the President lol he said I got him in trouble,” Smalls tweeted, likely referring to the backlash Biden experienced after expressing support for the Amazon union. “Gooooooooooood.”

There are no laws protecting kids from being exploited on YouTube — one teen wants to change that

At just 17, Chris McCarty is taking matters into their own hands to protect children from being exploited for cash in family vlogs.

As part of their project for the Girl Scouts Gold Award, the highest honor in the program, the Seattle teenager spent months researching child influencers: kids who rake in serious cash for their appearances in YouTube vlogs, which are often run by their parents. They were so fascinated and appalled by the lack of regulation around child labor and social media that they utilized their high school’s senior independent study program to phone-bank their neighbors to gauge community interest in the issue.

In January, McCarty cold-emailed a number of local lawmakers, including Washington State Representative Emily Wicks, who serves on the Children, Youth & Families Committee. McCarty presented their research and convinced the representative to work with a teenager to draft a new bill at the very end of the legislative session.

“I randomly got an email from Chris, and they said, ‘Here’s the problem, and here’s the potential solution,’” Wicks told TechCrunch. “I really wanted to provide the opportunity to help them understand exactly how the legislative process works, no matter how far we were able to get with the bill.”

With a 20-year age difference, McCarty and Wicks have distinctly different experiences when it comes to growing up on social media, but they worked together to propose House Bill 2023.

“Children are generating interest in and revenue for the content, but receive no financial compensation for their participation,” the bill reads. “Unlike in child acting, these children are not playing a part, and lack legal protections.”

If passed by the Washington state legislature, the bill would apply to content that generates at least 10 cents per view and features an individual minor over 30% of the time. In that case, a percentage of the vlog’s income would be set aside in a trust to be given to the child when they turn 18. At that time, the individual could also request that the content they appear in be removed from the tech platform.
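The reported thresholds are concrete enough to express as a short, purely illustrative sketch. The set-aside rate used below is an assumption, since the bill as described here does not specify a percentage, and the function names are hypothetical:

```python
# Illustrative sketch only: it models the thresholds described in HB 2023 as
# reported above. The set-aside rate is a placeholder (the article does not
# give one); 0.15 simply mirrors the Coogan-account figure mentioned later.

def qualifies(earnings_per_view_usd: float, minor_screen_share: float) -> bool:
    """True if content earns at least $0.10 per view and an individual minor
    is featured in over 30% of the content."""
    return earnings_per_view_usd >= 0.10 and minor_screen_share > 0.30

def trust_set_aside(vlog_income_usd: float, set_aside_rate: float = 0.15) -> float:
    """Hypothetical share of vlog income reserved in trust for the child
    until they turn 18."""
    return vlog_income_usd * set_aside_rate

# Example: a video earning $0.12 per view with a child on screen 40% of the time
if qualifies(0.12, 0.40):
    print(trust_set_aside(5_000))  # 750.0 of $5,000 in income set aside
```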

The bill was introduced late in the legislative session, so it has not yet been heard on the House floor.

“I’m actually not running for re-election … but I want to hand off this bill to the right people to continue working on this,” Wicks said. She highlighted the value of intergenerational collaboration on legislation, especially when it comes to topics like the impact of social media on children. “This is what being in the legislature is all about — teaching people, and helping and being the director of these great ideas people have.”

At any level of government, the journey from a proposed bill to a law is a tough one, especially when it comes to legislation like HB2023, which would require direct cooperation from tech companies like YouTube or Instagram. But as social media stars continue to edge their way into the entertainment industry, the lack of regulation for children in this line of work is becoming more concerning.

The problem with family vlogging

Life was good for Myka Stauffer, a family vlogger who amassed a combined 1 million YouTube subscribers across two channels. She rose to prominence after producing a series of 27 videos that chronicled her family’s journey as they went through the emotional and arduous process of adopting a child from China, Huxley. Stauffer was swimming in sponsorships from brands like Glossier and Fabletics, and she was high-profile enough that People Magazine announced the birth of her next son, Onyx.

But in 2020, three years after the adoption of 5-year-old Huxley, fans realized that he suddenly no longer appeared in any of her videos. At their audience’s urging, Stauffer and her husband revealed that they had rehomed Huxley because they felt that they couldn’t properly care for a child with autism.

Of course, YouTube commenters and armchair Twitter experts alike exploded into an uproar of discourse: Was this just a couple in over their head, helping a child secure caretakers that could better nurture him? Or did they build their fame and riches off of the story of an oblivious toddler, making his life story public to millions of viewers, only to discard him when things became too hard?

McCarty became interested in the ethics of family vlogging when they learned about Huxley’s story.

“If Myka Stauffer is doing it, there are probably a lot of other families doing it too,” McCarty told TechCrunch. “These children, often they’re not old enough to even understand what’s going on, much less fully consent to it.”

As early as 2010, amateur YouTubers realized that “cute kid does stuff” is a genre prone to virality. David DeVore, then 7, became an internet sensation when his father posted a YouTube video of his reaction to anesthesia called “David After Dentist.” David’s father turned the public’s interest in his son into a small business, earning around $150,000 within five months through ad revenue, merch sales and a licensing deal with Vizio. He told The Wall Street Journal at the time that he would save the money for his children’s college costs, as well as charitable donations. Meanwhile, the family behind the “Charlie bit my finger” video made enough money to buy a new house.

Over a decade later, some of YouTube’s biggest stars are children who are too young to understand the life-changing responsibility of being an internet celebrity with millions of subscribers. Seven-year-old Nastya, whose parents run her YouTube channel, was the sixth-highest-earning YouTube creator in 2022, earning $28 million. Ryan Kaji, a 10-year-old who has been playing with toys on YouTube since he was 4, earned $27 million from a variety of licensing and brand deals.

Just days ago, the controversial longtime YouTuber Trisha Paytas rebranded their channel to the “Paytas-Hacmon Family Channel,” which documents “the lives of Trish, Moses, and Baby Paytas-Hacmon (coming sept 2022).” Already, Paytas has posted vlogs about ultrasounds, maternity clothing and what they eat while pregnant.

That’s not to say that influencer parents are inherently money-hungry and short-sighted. Kaji’s parents, for instance, said they set aside 100% of his income from his Nickelodeon series “Ryan’s Mystery Playdate” for when he comes of age. But there are almost no guardrails to prevent these children from experiencing exploitation.

Huxley was adopted by another family, and an investigation by local authorities concluded that he is “very happy and well taken care of.” But when he gets old enough, he will discover that when he was only a toddler, private details about his intercontinental adoption and disability were the subject of widespread online debate and news coverage. The Stauffer family profited off of Huxley’s appearance in their vlogs (though they no longer work as social media personalities, for clear reasons), but Huxley will never be compensated for his unwitting appearances in them.

“I really think the biggest issue here is a lack of advocacy, because so many people don’t see this as a problem, or they don’t think about it,” McCarty said. In addition to working on HB2023, they started a website called Quit Clicking Kids to raise awareness about problems facing child influencers. “But then once you get those really personal stories from specific instances of [child] influencers, I think that’s the thing that changes minds the most.”

A lack of existing legislation

In Hollywood, child actors are protected by the Coogan law, which requires parents to put 15% of a minor’s gross wages into a trust account, called a Coogan account. This protection, enacted in 1939 and updated in 1999, inspired a similar clause in the bill that Wicks introduced in Washington state.

“A lot of kids on social media accounts, there’s no protection for them right now,” McCarty told TechCrunch. “They don’t have any kind of regulated hours the way child actors do now, or Coogan laws to protect them. So I really think that we need to offer the same considerations from child actors to child stars in vlogs.”

In 2018, California State Assembly member Kansen Chu attempted to pass legislation that would classify “social media advertising” as a form of child labor. This would have required minors to obtain work permits, which would offer them greater financial protections, since children in California can only get a work permit if they have a Coogan account. But before it was written into law, the legislation was considerably weakened. Now, these child labor classifications don’t apply if a minor’s online performance is unpaid and shorter than an hour, which exempts much of kid influencers’ work.

In France, a law took effect in 2021 that required any earnings of a child influencer to be put into a bank account that they can only access once they turn 16. Like the bill proposed in Washington state, this law enforces a “right to be forgotten,” which allows the child to request that the content they appear in be taken off the internet.

Since Frances Haugen’s landmark leaks of internal Facebook documents detailing the negative effect of social media on minors, several bills have been introduced (or reintroduced) in the U.S. Senate designed to protect the safety of ordinary kids using social media.

In October, representatives from YouTube, TikTok and Snap all agreed at a hearing that parents should have the ability to erase online data for their children or teens, a concept that appears in proposed updates to the historic Children’s Online Privacy Protection Act (COPPA).

Another way to regulate the monetization of family vlog and child influencer content would be for activists like McCarty to wage pressure campaigns that urge leaders at companies like YouTube or Instagram to put guardrails up themselves. But Wicks and McCarty believe that the legislative route is more likely to be effective.

“I say, work with [tech companies] to see if they can be good players,” Wicks said. “How could we potentially work with them to say, ‘This is what our intent is, this is what we’re trying to do. … How, based on what you have today, can we make this happen?’”

Even if lawmakers mandate that tech companies wipe a child’s digital footprint, for example, there’s no guarantee that the platforms themselves have the technology to honor these requests. U.S. senators have grappled with these challenges, questioning reticent executives from Snapchat, TikTok, YouTube and Meta about how the seemingly opposing forces of Big Tech and government might cooperate.

“There have been hearings, but there hasn’t really been any kind of meaningful, concrete action,” McCarty said. “So to me, going through legislation was a better way to get more people talking about it, to make concrete change.”

Senators propose the Kids Online Safety Act after five hearings with tech execs

Last year, former Facebook employee Frances Haugen leaked a trove of internal company documents, which illuminated just how harmful apps like Instagram can be for teens. These bombshell revelations sparked five Senate subcommittee hearings on children’s internet safety, featuring testimony from executives at TikTok, Snap, YouTube, Instagram and Facebook.

As a result of these hearings, Senator Richard Blumenthal (D-CT) and Senator Marsha Blackburn (R-TN) introduced the Kids Online Safety Act (KOSA) today. The bill would require social media companies to provide users under 16 with the option to protect their information, disable addictive product features, and opt out of algorithmic recommendations; give parents more control over their child’s social media usage; require social media platforms to conduct a yearly independent audit to assess their risk to minors; and allow academics and public interest organizations to use company data to inform their research on children’s internet safety.

“In hearings over the last year, Senator Blumenthal and I have heard countless stories of physical and emotional damage affecting young users, and Big Tech’s unwillingness to change,” Sen. Blackburn said in a press release. “The Kids Online Safety Act will address those harms by setting necessary safety guide rails for online platforms to follow that will require transparency and give parents more peace of mind.”

There is a lot of overlap between KOSA and other legislation floated over the last year by members of the Senate Subcommittee on Consumer Protection, on which Blackburn and Blumenthal both serve.

Back in October, representatives from YouTube, TikTok and Snap all agreed at a hearing that parents should have the ability to erase online data for their children or teens, an idea that appears in Senator Ed Markey (D-MA) and Senator Bill Cassidy (R-LA)’s proposed updates to the historic Children’s Online Privacy Protection Act (COPPA). Plus, Blumenthal and Markey reintroduced their Kids Internet Design and Safety (KIDS) Act in September, which would protect online users under 16 from attention-grabbing features like autoplay and push alerts, ban influencer marketing targeted toward children and young teens, and even prohibit interface features that quantify popularity, like follower counts and like buttons.

Meanwhile, the bipartisan Filter Bubble Transparency Act, introduced in both the House and the Senate, addresses concerns about the secrecy around algorithms and how they influence users. That bill would require social networks to allow users to choose to use a standard reverse-chronological feed instead of using the platform’s opaque, sometimes proprietary algorithm. The newly introduced KOSA bill doesn’t require this toggle, but it would force tech companies to turn over “critical datasets from social media platforms” to the government, the bill’s summary says. Then, academics and non-profit employees could apply to gain access to the data for research purposes.

In California, bipartisan state lawmakers plan to introduce a bill tomorrow that’s modeled on the UK’s Age Appropriate Design Code. This bill would require companies like Meta and YouTube, which are headquartered in the state, to limit data collection from children on their platforms. Given the prevalence of Big Tech in California, even state laws could force these companies to make their platforms safer for young users.

It’s too soon to say which of these proposals, if any, will gain enough momentum to change the way that social media platforms operate. But lawmakers have proven committed to asserting more control over Big Tech.

YouTube and Snapchat were asked to defend their apps’ age ratings in Senate hearing

YouTube and Snapchat were asked by lawmakers to defend their apps’ age ratings on the U.S. app stores, given the nature of the content they host. In a line of questioning led by Senator Mike Lee (R-UT), Snap was given specific examples of the types of content found on its app that seemed inappropriate for younger teens. Both companies were also asked to explain why their apps are rated for different age groups on different app stores.

For example, the YouTube app on the Google Play Store is rated “Teen,” meaning its content is appropriate for ages 13 and up, while the same app on Apple’s App Store is rated “17+,” meaning the content is only appropriate for users ages 17 and up, the senator pointed out.

Leslie Miller, YouTube’s Vice President of Government Affairs and Public Policy, who attended the hearing on behalf of the company, responded that she was “unfamiliar with the differences” the senator had just outlined.

Snapchat’s Vice President of Global Public Policy, Jennifer Stout, also wasn’t sure about the discrepancy when asked the same question.

In Snapchat’s case, the app is rated 12 and up on the Apple App Store but is rated “Teen” (13 and up) on the Google Play Store. She said she believed it was because Snapchat’s content was deemed “appropriate for an age group of 13 and above.”

The inconsistency with app ratings can make it hard for a parent to decide if an app is, in fact, appropriate for their kids, based on how strict their own household rules are on this sort of thing. But the issue ultimately comes down to each platform having different policies around app ratings. And because the app stores’ ratings aren’t necessarily consistent with similar ratings across other industries, like film and TV, parents today will often look to third-party resources, like Common Sense Media, news articles, online guides, or even anecdotal advice, when deciding whether or not to let their kids use a certain app.

Lee pushed on this point with Snapchat, in particular. He said his staff entered a name, birth date, and email to create a test account for a 15-year-old child. They didn’t add any further content preferences for the account.

“When they opened the Discover page on Snapchat with its default settings, they were immediately bombarded with content that I can most politely describe as wildly inappropriate for a child, including recommendations for, among other things, an invite to play an online sexualized video game that’s marketed itself to people who are 18 and up; tips on quote, ‘why you shouldn’t go to bars alone;’ notices for video games that are rated for ages 17 and up; and articles about porn stars,” said Lee.

He wanted to know how Snap determined this content was appropriate for younger teens, as per the app’s rating on the app stores.

Stout responded by saying that Snapchat’s Discover section was a closed platform where the company chooses the publishers.

“We do select — and hand-select — partners that we work with. And that kind of content is designed to appear on Discover and resonate with an audience that is 13 and above,” she explained. Stout also seemed surprised that Discover would include the sort of inappropriate content the senator described. “I want to make clear that content and community guidelines suggest that any online sexual video games should be age-gated to 18 and above,” Stout said, adding that she was “unclear” why they would display for a younger user by default.

She went on to note that Snap’s publisher guidelines indicate the content must be accurate and fact-checked.

Lee’s response indicated he thought she was missing the point.

“I’m sure the articles about the porn stars were accurate and fact-checked,” he barked back. “And I’m sure that the tips on why you shouldn’t go to bars alone are accurate and fact-checked, but that’s not my question. It’s about whether it’s appropriate for children ages 13 and up as you certified,” he said.

The senator’s questions harken back to the ongoing moral panics from parents over teens’ use of social media — panics that have particularly impacted Snapchat with its ephemerality primed for teenage sexting.

However, the backlash over inappropriate Discover content actually spurred Snapchat to take action in the past — ahead of its IPO, of course.

The company in 2017 said it would begin to take a harder line on the sort of risqué and misleading images that had overrun the Discover section by implementing an updated set of publisher guidelines, which Stout also referenced during the hearing. But user reviews of the app in the years that followed have pointed out that Snapchat’s news section is filled with, as some young people described it: “cringe-worthy content” and “nonsense,” or “gossip, sex, and drugs.”

“The page is driven by clickbait, reality television news, and influencer fodder, with a smattering of TV networks and reputable news publishers, including ESPN, the Washington Post, and The Wall Street Journal,” a Vox report stated in June 2021. “Users complain that they’re frequently served tabloid-like content about influencers they don’t care about, or nonsensical clickbait that is best described as ‘internet garbage,'” it said.

This is likely the sort of content the senator’s team stumbled upon, too.

Of course, Snap has the right to curate whatever sort of publisher content it wants. But there is a question here as to how well it’s actually curating and age-gating that content for its younger teenage users, and whether that’s something that should be regulated by law rather than left to the companies themselves to police.

“What kind of oversight do you conduct on this?” Lee wanted to know about Snap’s curation of Discover.

“We use a variety of human review as well as automated review,” said Stout, adding that Snap would be interested in talking to his staff to find out what sort of content they saw that violated its guidelines.

(As an aside, reporters know well how this is often the response a big tech company will give when confronted with a direct example of something that contradicts their guidelines. They want the journalists to do the moderation work for them by itemizing the specific issues. They’ll then follow through and block or remove the violating pieces in question while downplaying the examples as a one-off they just missed by accident — instead of acknowledging this could be an example of a systemic problem. That’s not to say this is the situation with Snap, per se, but the tactic of acting shocked and surprised then demanding to know the examples is a familiar one.)

“While I would agree with you tastes vary when it comes to the kind of content that is promoted on Discover, there is no content there that is illegal. There is no content there that is hurtful,” Stout stated.

But Lee, whose questions mirror those from parents who believe age-inappropriate content is, in fact, hurtful, would likely disagree with that assessment.

“These app ratings are inappropriate,” Lee concluded. “We all know there’s content on Snapchat and on YouTube, among many other places, that’s not appropriate for children ages 12 or 13 and up.”

In a separate line of questioning led by Senator Maria Cantwell (D-WA), Snap was also pressed on whether or not advertisers in Snapchat Discover understood what sort of content they’re being placed next to.

Stout said they did.

TikTok was not questioned by Lee on the nature of its content or app store age rating, but later in the hearing did note, in response to Senator Ben Ray Luján (D-NM), that its content for younger users was curated with help from Common Sense Networks and is “an age-appropriate experience.”

TikTok dodges questions about biometric data collection in Senate hearing

In its first-ever Congressional hearing, TikTok successfully dodged questions about what it plans to do with the biometric data its privacy policy permits it to collect on the app’s U.S. users. In an update to the company’s U.S. privacy policy in June, TikTok added a new section that noted the app “may collect biometric identifiers and biometric information” from its users’ content, including things like “faceprints and voiceprints.”

The company was questioned by multiple lawmakers on this matter today during a hearing conducted by the Senate Subcommittee on Consumer Protection, Product Safety, and Data Security. The hearing was meant to be focused on social media’s detrimental impacts on children and teens, but it often expanded into broader areas of concern as the lawmakers dug into the business models, internal research, and policies at Snap, TikTok and YouTube.

Sen. Marsha Blackburn (R-TN) asked specifically why TikTok needed to collect a wide variety of biometric data, “such as faceprints, voiceprints, geolocation information, browsing and search history,” as well as “keystroke patterns and rhythms.”

Instead of directly answering the question, TikTok’s VP and Head of Public Policy Michael Beckerman responded by pointing out that many outside researchers and experts have looked at its policy and found that TikTok actually collects less data than many of its social media peers. (He also later clarified in another round of questioning that keystroke patterns were collected in order to prevent spam bots from infiltrating the service.)

Blackburn pressed on to ask if TikTok was putting together a comprehensive profile — or “virtual dossier” — on each of its users, including younger kids and teens, which included their biometric data combined with their interests and search history.

Beckerman deferred answering this question as well, saying that: “TikTok is an entertainment platform where people watch and enjoy and create short-form videos. It’s about uplifting, entertaining content.”

While the senator’s line of questioning was a bit confusing at times — she once referred to this dossier as a “virtual you,” for example — it’s worth noting that we don’t have a full picture today as to what TikTok is doing with the data it collects from its users outside of what’s outlined in its privacy policy and, per TikTok’s 2020 blog post, how some of that data plays a role in its recommendation algorithms. And given the chance to set the record straight over its plans to collect biometric data with regard to minor users, TikTok’s policy head skirted the questions.

In a line of follow-ups on its data collection practices led by Senator Cynthia Lummis (R-WY), Beckerman was asked if this sort of “mass data collection” was necessary to deliver a high-quality experience to TikTok’s users. She noted the company’s policy allowed for the collection of a person’s location, the model of their phone, their browsing history inside and outside TikTok, all the messages sent on TikTok, their IP address, and biometric data.

In response, Beckerman said “some of those items that you listed off are things that we’re not currently collecting.”

He also said that the privacy policy states TikTok would get user consent if it were to begin collecting those items in the future.

Though not immediately clear, his statements were likely in reference to the clause about biometric data collection. In June, TikTok declined to detail the product developments that necessitated the addition of biometric data to its list of disclosures about the information it automatically collects from users. But at the time, the company told TechCrunch it would ask for user consent in the case such data collection practices began.

The senator said the committee would follow up with TikTok on this question.

Both senators were concerned about TikTok’s connection to China, given its parent company is Beijing-based ByteDance. But the issue of over-collection of user data — particularly with regard to children and minors — isn’t just a geopolitical concern or, as Trump believed, a national security threat. It’s a matter of transparency.

Privacy and security experts generally think that users should understand why a company needs the data it collects, what is done with it, and they should have the right to refuse to share that data. Today, users can somewhat limit data collection by disallowing access to their smartphone’s sensors and other features. Apple, for example, implements opt-outs as part of its mobile operating system, iOS, which pops up consent boxes when an app wants to access your location, your microphone, your camera, or your contacts.

But there is much more data that apps can track, even when these items are blocked.

Following the questions about data collection practices, Lummis also wanted to know if TikTok had been built with the goal of keeping users engaged for as long as possible. After all, having a treasure trove of user data could greatly boost this sort of metric.

In reply, Beckerman pointed to the app’s “take a break” reminders and parental controls for screen time management.

Lummis clarified that she wanted to know if “length of engagement” was a metric the company used in order to define success.

Again, Beckerman skirted the question, noting “there’s multiple definitions of success” and that “it’s not just based on how much time somebody’s spending [on the app].”

Lummis then restated the question a couple more times as Beckerman continued to dodge answering directly, saying only that “overall engagement is more important than the amount of time that’s being spent.”

“But is it one of the metrics?” Lummis pushed.

“It’s a metric that I think many platforms check on how much time people are spending on the app,” Beckerman said.

GSA blocks senator from reviewing documents used to approve Zoom for government use

The General Services Administration has denied a senator’s request to review documents Zoom submitted to have its software approved for use in the federal government.

The denial was in response to a letter sent by Democratic senator Ron Wyden to the GSA in May, expressing concern that the agency cleared Zoom for use by federal agencies just weeks before a major security vulnerability was discovered in the app.

Wyden said the discovery of the bug raises “serious questions about the quality of FedRAMP’s audits.”

Zoom was approved to operate in government in April 2019 after receiving authorization through FedRAMP, a program operated by the GSA that ensures cloud services comply with a standardized set of security requirements designed to harden them against some of the most common threats. Federal agencies cannot use cloud products or technologies that have not been cleared through this process.

Months later, Zoom was forced to patch its Mac app after a security researcher found a flaw that could be abused to remotely switch on a user’s webcam without their permission. Apple was forced to intervene since users were still affected by the vulnerabilities even after uninstalling Zoom. As the pandemic spread and lockdowns were enforced, Zoom’s popularity skyrocketed — as did the scrutiny — including a technical analysis by reporters that found Zoom was not truly end-to-end encrypted as the company long claimed.

Wyden wrote to the GSA to say he found it “extremely concerning” that the security bugs were discovered after Zoom’s clearance. In the letter, the senator requested the documents known as the “security package,” which Zoom submitted as part of the FedRAMP authorization process, to understand how and why the app was cleared by GSA.

The GSA declined Wyden’s first request in July 2020 on the grounds that he was not a committee chair. In the new Biden administration, Wyden was named chair of the Senate Finance Committee and requested Zoom’s security package again.

But in a new letter sent to Wyden’s office late last month, GSA declined the request for the second time, citing security concerns.


“The security package you have requested contains highly sensitive proprietary and other confidential information relating to the security associated with the Zoom for Government product. Safeguarding this information is critical to maintaining the integrity of the offering and any government data it hosts,” said the GSA letter. “Based on our review, GSA believes that disclosure of the Zoom security package would create significant security risks.”

In response to the GSA’s letter, Wyden told TechCrunch that he was concerned that other flawed software may have been approved for use across the government.

“The intent of GSA’s FedRAMP program is good — to eliminate red tape so that multiple federal agencies don’t have to review the security of the same software. But it’s vitally important that whichever agency conducts the review do so thoroughly,” said Wyden. “I’m concerned that the government’s audit of Zoom missed serious cybersecurity flaws that were subsequently uncovered and exposed by security researchers. GSA’s refusal to share the Zoom audit with Congress calls into question the security of the other software products that GSA has approved for federal use.”

The people we spoke with who have first-hand knowledge of the FedRAMP process, either as government employees or at companies that have gone through the certification, described it as a comprehensive but by no means exhaustive set of checks that companies must pass to satisfy the federal government’s security requirements.

Others said that the process had its limits and would benefit from reform. One person with knowledge of how FedRAMP works said the process was not a complete audit of a product’s source code but akin to a checklist of best practices and meeting compliance requirements. Much of it relies on trusting the vendor, said the person, describing it as “an honor system.” Another person said the FedRAMP process cannot catch every bug, as evidenced by executive action taken by President Biden this week aimed at modernizing and improving the FedRAMP process.

Most of the people we spoke to weren’t surprised that Wyden’s office was denied the request, citing the sensitivity of a company’s FedRAMP security package.

The people said that companies going through the certification process have to provide highly technical details about the security of their product, which if exposed would almost certainly be damaging to the company. Knowing where security weaknesses might be could tip off cyber-criminals, one of the people said. Companies often spend millions on improving their security ahead of a FedRAMP audit but companies wouldn’t risk going through the certification if they thought their trade secrets would get leaked, they added.

When asked by GSA why it objected to Wyden’s request, Zoom’s head of U.S. government relations Lauren Belive argued that handing over the security package “would set a dangerous precedent that would undermine the special trust and confidence” that companies place in the FedRAMP process.

GSA puts strict controls on who can access a FedRAMP security package. You need a federal government or military email address, which the senator’s office has. But the reason for GSA denying Wyden’s request still isn’t clear, and when reached, a GSA spokesperson would not explain how a member of Congress would obtain a company’s FedRAMP security package.

“GSA values its relationship with Congress and will continue to work with Senator Wyden and our committees of jurisdiction to provide appropriate information regarding our programs and operations,” said GSA spokesperson Christina Wilkes, adding:

“GSA works closely with private sector partners to provide a standardized approach to security authorizations for cloud services through the [FedRAMP]. Zoom’s FedRAMP security package and related documents provide detailed information regarding the security measures associated with the Zoom for Government product. GSA’s consistent practice with regard to sensitive security and trade secret information is to withhold the material absent an official written request of a congressional committee with jurisdiction, and pursuant to controls on further dissemination or publication of the information.”

GSA wouldn’t say which congressional committee had jurisdiction or whether Wyden’s role as chair of the Senate Finance Committee suffices, nor would the agency answer questions about the efficacy of the FedRAMP process raised by Wyden.

Zoom spokesperson Kelsey Knight said that cloud companies like Zoom “provide proprietary and confidential information to GSA as part of the FedRAMP authorization process with the understanding that it will be used only for their use in making authorization decisions. While we do not believe Zoom’s FedRAMP security package should be disclosed outside of this narrow purpose, we welcome conversations with lawmakers and other stakeholders about the security of Zoom for Government.”

Zoom said it has “engaged in security enhancements to continually improve its products,” and received FedRAMP reauthorization in 2020 and 2021 as part of its annual renewal. The company declined to say to what extent the Zoom app was audited as part of the FedRAMP process.

Over two dozen federal agencies use Zoom, including the Defense Department, Homeland Security, U.S. Customs and Border Protection, and the Executive Office of the President.

At social media hearing, lawmakers circle algorithm-focused Section 230 reform

Rather than a CEO-slamming sound bite free-for-all, Tuesday’s big tech hearing on algorithms aimed for more of a listening session vibe — and in that sense it mostly succeeded.

The hearing centered on testimony from the policy leads at Facebook, YouTube and Twitter rather than the chief executives of those companies for a change. The resulting few hours didn’t offer any massive revelations but was still probably more productive than squeezing some of the world’s most powerful men for their commitments to “get back to you on that.”

In the hearing, lawmakers bemoaned social media echo chambers and the ways that the algorithms pumping content through platforms are capable of completely reshaping human behavior.

“… This advanced technology is harnessed into algorithms designed to attract our time and attention on social media, and the results can be harmful to our kids’ attention spans, to the quality of our public discourse, to our public health, and even to our democracy itself,” said Chris Coons (D-DE), chair of the Senate Judiciary’s subcommittee on privacy and tech, which held the hearing.

Coons struck a cooperative note, observing that algorithms drive innovation but that their dark side comes with considerable costs.

None of this is new, of course. But Congress is crawling closer to solutions, one repetitive tech hearing at a time. The Tuesday hearing highlighted some zones of bipartisan agreement that could determine the chances of a tech reform bill passing the Senate, which is narrowly controlled by Democrats. Coons expressed optimism that a “broadly bipartisan solution” could be reached.

What would that look like? Probably changes to Section 230 of the Communications Decency Act, which we’ve written about extensively over the years. That law protects social media companies from liability for user-created content and it’s been a major nexus of tech regulation talk, both in the newly Democratic Senate under Biden and the previous Republican-led Senate that took its cues from Trump.

Lauren Culbertson, head of U.S. public policy at Twitter Inc., speaks remotely during a Senate Judiciary Subcommittee hearing in Washington, D.C., on Tuesday, April 27, 2021. Image Credits: Al Drago/Bloomberg via Getty Images

A broken business model

In the hearing, lawmakers pointed to flaws inherent to how major social media companies make money as the heart of the problem. Rather than criticizing companies for specific failings, they mostly focused on the core business model from which social media’s many ills spring forth.

“I think it’s very important for us to push back on the idea that really complicated, qualitative problems have easy quantitative solutions,” Sen. Ben Sasse (R-NE) said. He argued that because social media companies make money by keeping users hooked to their products, any real solution would have to upend that business model altogether.

“The business model of these companies is addiction,” Josh Hawley (R-MO) echoed, calling social media an “attention treadmill” by design.

Ex-Googler and frequent tech critic Tristan Harris didn’t mince words about how tech companies talk around that central design tenet in his own testimony. “It’s almost like listening to a hostage in a hostage video,” Harris said, likening the engagement-seeking business model to a gun just offstage.

Spotlight on Section 230

One big way lawmakers propose to disrupt those deeply entrenched incentives? Adding algorithm-focused exceptions to the Section 230 protections that social media companies enjoy. A few bills floating around take that approach.

One bill from Sen. John Kennedy (R-LA) and Reps. Paul Gosar (R-AZ) and Tulsi Gabbard (D-HI) would require platforms with 10 million or more users to obtain consent before serving users content based on their behavior or demographic data if they want to keep Section 230 protections. The idea is to revoke 230 immunity from platforms that boost engagement by “funneling information to users that polarizes their views” unless a user specifically opts in.

In another bill, the Protecting Americans from Dangerous Algorithms Act, Reps. Anna Eshoo (D-CA) and Tom Malinowski (D-NJ) propose suspending Section 230 protections and making companies liable “if their algorithms amplify misinformation that leads to offline violence.” That bill would amend Section 230 to reference existing civil rights laws.

Section 230’s defenders argue that any insufficiently targeted changes to the law could disrupt the modern internet as we know it, resulting in cascading negative impacts well beyond the intended scope of reform efforts. An outright repeal of the law is almost certainly off the table, but even small tweaks could completely realign internet businesses, for better or worse.

During the hearing, Hawley made a broader suggestion for companies that use algorithms to chase profits. “Why shouldn’t we just remove section 230 protection from any platform that engages in behavioral advertising or algorithmic amplification?” he asked, adding that he wasn’t opposed to an outright repeal of the law.

Sen. Amy Klobuchar (D-MN), who leads the Senate’s antitrust subcommittee, connected the algorithmic concerns to anti-competitive behavior in the tech industry. “If you have a company that buys out everyone from under them… we’re never going to know if they could have developed the bells and whistles to help us with misinformation because there is no competition,” Klobuchar said.

Subcommittee members Klobuchar and Sen. Mazie Hirono (D-HI) have their own major Section 230 reform bill, the Safe Tech Act, but that legislation is less concerned with algorithms than ads and paid content.

At least one more major bill looking at Section 230 through the lens of algorithms is still on the way. Prominent big tech critic House Rep. David Cicilline (D-RI) is due out soon with a Section 230 bill that could suspend liability protections for companies that rely on algorithms to boost engagement and line their pockets.

“That’s a very complicated algorithm that is designed to maximize engagement to drive up advertising prices to produce greater profits for the company,” Cicilline told Axios last month. “…That’s a set of business decisions for which, it might be quite easy to argue, that a company should be liable for.”

Apple and Google pressed in antitrust hearing on whether app stores share data with product development teams

In today’s antitrust hearing in the U.S. Senate, Apple and Google representatives were questioned on whether they have a “strict firewall” or other internal policies in place that prevent them from leveraging the data from third-party businesses operating on their app stores to inform the development of their own competitive products. Apple, in particular, was called out for the practice of copying other apps by Senator Richard Blumenthal (D-CT), who said the practice had become so common that it earned a nickname with Apple’s developer community: “sherlocking.”

The nickname comes from Sherlock, Apple’s search tool from the early 2000s, which has its own Wikipedia entry under software. A third-party developer, Karelia Software, created an alternative tool called Watson. Following the success of Karelia’s product, Apple built Watson’s functionality into its own search tool, and Watson was effectively put out of business. “Sherlocking” later became shorthand for any time Apple copies an idea from a third-party developer in a way that threatens, or even destroys, their business.

Over the years, developers claimed Apple has “sherlocked” a number of apps, including Konfabulator (desktop widgets), iPodderX (podcast manager), Sandvox (app for building websites) and Growl (a notification system for Mac OS X) and, in more recent years, F.lux (blue light reduction tool for screens), Duet and Luna (apps that make iPad a secondary display), as well as various screen-time-management tools. Now Tile claims Apple has also unfairly entered its market with AirTag.

During his questioning, Blumenthal asked Apple and Google’s representatives at the hearing — Kyle Andeer, Apple’s chief compliance officer, and Wilson White, Google’s senior director of Public Policy & Government Relations, respectively — if they employed any sort of “firewall” between their app stores and their business strategy.

Andeer somewhat dodged the question, saying, “Senator, if I understand the question correctly, we have separate teams that manage the App Store and that are engaged in product development strategy here at Apple.”

Blumenthal then clarified what he meant by “firewall.” He explained that it doesn’t mean whether or not there are separate teams in place, but whether there’s an internal prohibition on sharing data between the App Store and the people who run Apple’s other businesses.

Andeer then answered, “Senator, we have controls in place.”

He went on to note that over the past 12 years, Apple has only introduced “a handful of applications and services,” and in every instance, there are “dozens of alternatives” on the App Store. And, sometimes, the alternatives are more popular than Apple’s own product, he noted.

“We don’t copy. We don’t kill. What we do is offer up a new choice and a new innovation,” Andeer stated.

His argument may hold true when there are strong rivalries, like Spotify versus Apple Music, or Netflix versus Apple TV+, or Kindle versus Apple Books. But it’s harder to stretch it to areas where Apple makes smaller enhancements — like when Apple introduced Sidecar, a feature that allowed users to make their iPad a secondary display. Sidecar ended the need for a third-party app, after apps like Duet and Luna first proved the market.

Another example was when Apple built screen-time controls into its iOS software, but didn’t provide the makers of third-party screen-time apps with an API so consumers could use their preferred apps to configure Apple’s Screen Time settings via the third-party’s specialized interface or take advantage of other unique features.

Blumenthal said he interpreted Andeer’s response as to whether Apple has a “data firewall” as a “no.”

Posed the same question, Google’s representative, White, said his understanding was that Google had “data access controls in place that govern how data from our third-party services are used.”

Blumenthal pressed him to clarify if this was a “firewall,” meaning, he clarified again, “do you have a prohibition against access?”

“We have a prohibition against using our third-party services to compete directly with our first-party services,” White said, adding that Google has “internal policies that govern that.”

The senator said he would follow up on this matter with written questions, as his time expired.

Elizabeth Warren for President open-sources its 2020 campaign tech

Democratic senator Elizabeth Warren may have ended her 2020 presidential run, but the tech used to drive her campaign will live on.

Members of her staff announced they would make public the top apps and digital tools developed in Warren’s bid to become the Democratic nominee for president.

“In our work, we leaned heavily on open source technology — and want to contribute back to that community…[by] open-sourcing some of the most important projects of the Elizabeth Warren campaign for anyone to use,” the Warren for President Tech Team said.

In a Medium post, members of the team — including chief technology strategist Mike Conlow and chief technology officer Nikki Sutton — previewed what would be available and why.

“Our hope is that other Democratic candidates and progressive causes will use the ideas and code we developed to run stronger campaigns and help Democrats win,” the post said.

Warren’s tech team listed several of the tools they’ve turned over to the open source universe via GitHub.

One of those tools, Spoke, is a peer-to-peer texting app originally developed by MoveOn, which offered the Warren campaign high-volume messaging at a fraction of the cost of other vendor options. The team used it to send four million SMS messages on Super Tuesday alone.

Pollaris is a location lookup tool with an API developed to interface directly with Warren’s official campaign website and quickly direct supporters to their correct polling stations.
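As a rough idea of what such a lookup might look like from the campaign website’s side, here is a minimal sketch. The endpoint URL, query parameters and response fields below are assumptions for illustration, not Pollaris’s actual API, which lives in the open-sourced repository:

```python
# Hypothetical sketch of calling a Pollaris-style polling-place lookup service.
# The URL, parameters and response fields are illustrative assumptions, not the
# real Pollaris API.
import requests

def find_polling_place(street: str, city: str, state: str, zip_code: str) -> dict:
    """Send a supporter's address to the lookup service and return the match."""
    resp = requests.get(
        "https://pollaris.example.org/v1/lookup",  # placeholder endpoint
        params={"street": street, "city": city, "state": state, "zip": zip_code},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()

# Example usage (hypothetical response fields):
# place = find_polling_place("123 Main St", "Des Moines", "IA", "50309")
# print(place.get("location_name"), place.get("address"))
```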

One of the Elizabeth Warren presidential campaign’s apps, Caucus, designed for calculating delegates. (Image: supplied)

Warren’s tech team will also open-source Switchboard (both its front end and back end) — which recruited and connected volunteers in primary states — and Caucus App, a delegate calculating and reporting tool.

The campaign’s Redhook tool took in webhook data in real time and experienced zero downtime.

“Our intention in open sourcing it is to demonstrate that some problems campaigns face do not require vendor tools and are solved…efficiently with a tiny bit of code,” said the Tech Team.
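The point about solving some problems “with a tiny bit of code” is easy to picture: a bare-bones webhook receiver in the spirit of Redhook can be only a few lines. The sketch below is a generic illustration of the pattern, not the campaign’s actual implementation, and uses Flask purely as an example framework:

```python
# Generic illustration of a small webhook intake service, in the spirit of
# Redhook; this is not the campaign's code. Flask is used only as an example.
from queue import Queue
from flask import Flask, jsonify, request

app = Flask(__name__)
events: Queue = Queue()  # stand-in for a durable queue or database

@app.route("/webhook", methods=["POST"])
def receive_webhook():
    payload = request.get_json(silent=True) or {}
    events.put(payload)  # hand off quickly so the endpoint stays responsive
    return jsonify({"ok": True}), 200

if __name__ == "__main__":
    app.run(port=8080)
```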

Elizabeth Warren ended her 2020 presidential bid on March 4 after failing to win a primary. Among her many policy proposals, the Massachusetts senator had proposed breaking up big tech companies, such as Google, Facebook and Amazon.

Her campaign will continue to share the tech tools they used on open source channels.

“We’ll have more to say in the coming weeks on all that we did with technology on our campaign,” the team said.