Helen Toner worries ‘not super functional’ Congress will flub AI policy

Helen Toner, a former OpenAI board member and the director of strategy at Georgetown’s Center for Security and Emerging Technology, is worried Congress might react in a “knee-jerk” way when it comes to AI policymaking, should the status quo not change. “Congress right now — I don’t know if anyone’s noticed — is not super functional, […]


The controversial bill that could ban TikTok faces a rocky road in the Senate

A bill threatening to ban an app beloved by half of the American population just rocketed through the House of Representatives in a week’s time. But in the other chamber of Congress, things are likely to be much more complicated. TikTok the company and TikTok the chaotic community of creators and their followers are rightfully […]


TikTok begs users to tell Congress not to ban it

When some American users opened TikTok on Thursday morning, they were met with a full-screen message encouraging them to call Congress and say no to a TikTok ban. “Speak up now — before your government strips 170 million Americans of their Constitutional right to free expression,” the screen says. “Let Congress know what TikTok means […]


House punts on AI with directionless new task force

The House of Representatives has founded a Task Force on artificial intelligence that will “ensure America continues leading in this strategic area,” as Speaker Mike Johnson put it. But the announcement feels more like a punt after years of indecision that show no sign of ending. In a way this task force — chaired by […]


Congress probes Ford’s big battery deal with China’s CATL

In a public letter addressed to Ford boss Jim Farley, House Republicans Mike Gallagher and Jason Smith announced that two congressional committees are investigating the automaker’s licensing deal with Chinese battery maker CATL.

The probe centers on Ford’s efforts to put CATL’s battery cell technology to use at an upcoming $3.5 billion battery cell plant in Michigan.

In their letter, the chairs of the House Ways and Means Committee and Select Committee on China call on Ford to hand over details on its agreement with CATL by August 10. 

The representatives have asked for communications between Ford and the Biden administration relating to the CATL deal. They further called on Ford to explain how it will “ensure imports from CATL to produce LFP batteries in Michigan are free of forced labor or inputs from Xinjiang.”

Reached by TechCrunch, a Ford spokesperson declined to comment on the specifics of the letter. The spokesperson reiterated a statement that the automaker “alone is investing $3.5 billion and will own and run this plant in the United States, instead of building a battery plant elsewhere or exclusively importing LFP batteries from China like our competitors do.” 

Earlier this week, House Republicans announced a separate investigation into U.S. venture firms’ investments in China.

The week in robotics: Teaching robots chores from YouTube, robot dogs at the border and drone consolidation

AI’s grabbing headlines, but the robotics field is still making a significant impact in the real world — and this is your briefing on our latest coverage of the growing industry.

Before we get into the depths of the past week’s noteworthy robotics news, our resident expert Brian Heater dove into the debate over the use of robot dogs to patrol the border between the U.S. and Mexico.

As he notes in this week’s edition of his Actuator newsletter, which you can sign up for here, the robotics industry is stuck between a rock and a hard place on the potential use of its hardware for more violent purposes:

“I’ve discussed where I stand on the subject of weaponizing robots several times over the years in Actuator (not a fan), but I also understand how it can be a nuanced conversation for many. For those who sell weapon systems to the government, the argument largely centers on the notion that if we don’t get there first, someone else will.”

Drone inventory firm Gather AI buys competitor Ware

Inventory management is a challenge shared across numerous industries, and it becomes even more of a mess when the sheer volume of product to monitor spreads across multiple warehouses of increasing size. That’s why it has become a focal point for those looking to introduce automation into their workflows.

One of the solutions gaining traction of late is the deployment of drones to manage inventory. By virtue of its deal with Ikea, Verity has become one of the most prominent players in the space. But it’s not alone, and Pittsburgh-based Gather AI just stepped up the competition by acquiring Ware, one of its biggest rivals.

The financials behind the acquisition have not been disclosed.

Dexory pulls in $19 million for automated inventory management

In more grounded warehouse news, Dexory, which uses autonomous robots to provide warehouses with real-time inventory management, announced a $19 million Series A led by Atomico. Its total funding now stands at $37.9 million.

“The robots can be deployed multiple times a day or once a day around their shift patterns, including overnight,” CEO Andrei Danescu told TechCrunch. “The collection of data insights over a short space of time, all the time, allows analysis for identifying issues on-the-spot and decision making in driving warehouse operational efficiencies.”

New funding pushes Realtime Robotics past a $54M raise

Realtime’s latest raise, of $9.5 million, comes hot on the heels of a $14.4 million raise in September, with the firm continuing what has been a lengthy Series A. It focuses on one of the hottest spaces within robotics of late: helping manufacturers coordinate the various systems running their operations, each of which may come with its own proprietary management software.

“This most recent funding will be used to speed roll out of our innovative products and services to global end users and line builders across the automotive and automated warehouse industries,” CEO Peter Howard told TechCrunch.

Robots with the capacity to learn from YouTube

A team at CMU Robotics has been showcasing a program that uses video content to teach robots how to perform various tasks. Now the robots no longer require the humans they learn from to demonstrate a task in an identical setting.

“We are using these datasets in a new and different way,” PhD student Shikhar Bahl notes. “This work could enable robots to learn from the vast amount of internet and YouTube videos available.”




House GOP discusses use of robot dogs to patrol US borders

The United States Department of Homeland Security caused a stir last February when it revealed that it was exploring deploying robot dogs on the U.S./Mexico border.

“The southern border can be an inhospitable place for man and beast, and that is exactly why a machine may excel there,” the DHS’s Brenda Long said at the time. “This [Science and Technology Directorate]-led initiative focuses on Automated Ground Surveillance Vehicles, or what we call ‘AGSVs.’ Essentially, the AGSV program is all about…robot dogs.”

The story raised the ire of several Democratic politicians, including New York Congresswoman Alexandria Ocasio-Cortez, who tweeted, “It’s shameful how both parties fight tooth + nail to defend their ability to pump endless public money into militarization. From tanks in police depts to corrupt military contracts, funding this violence is bipartisan + non-controversial, yet healthcare + housing isn’t. It’s BS.”

Last week, the subject — and the robotics firm behind it — were once again top of mind on Capitol Hill. A House GOP Cybersecurity, Information Technology and Government Innovation hearing titled “Using Cutting-Edge Technologies to Keep America Safe” gathered representatives from a number of defense-related technology firms. The list included Ryan Rawding of biometric verification firm Pangiam, Wahid Nawabi of drone defense firm AeroVironment, Benjamin Boudreaux from the RAND Corporation and Gavin Kenneally, the CEO of Ghost Robotics, whose robot dogs have been featured in the trials.

Ghost demoed its Vision 60 robot for the panel during Kenneally’s presentation. “Looks like my high school algebra teacher,” one member of the committee inexplicably quipped. Another added, “like an Avengers movie right now.” Congresswoman Marjorie Taylor Greene called the demo “really incredible, absolutely intriguing.”

Missouri Congressman Eric Burlison noted, “A few House Democrat members reportedly wrote to the US Customs and Border Protection last year expressing concerns that robotic dogs could pose a lethal threat to migrants and Americans. How legitimate is that concern?”

WASHINGTON, DC – JUNE 22: Gavin Kenneally, Chief Executive Officer at Ghost Robotics speaks as Vision 60 UGV walks in during a House hearing at the US Capitol on June 22, 2023 in Washington, DC. The House Committee on Oversight and Accountability Subcommittee on Cybersecurity, Information Technology, and Government Innovation met to discuss the use of technology at the US Border, airports and military bases. (Photo by Tasos Katopodis/Getty Images)

The executive responded, “The use case for the robots at the border is to collect data. You can either look for illegal drug trafficking by adding sensors to detect for that, or you can add infrared or thermal cameras, which let you pick up human or other animals’ thermal signatures. The robot is really a detection system, which will then actually be used to save lives. There’s hundreds of deaths every year from drowning or getting stuck trying to cross the border.”

In October 2021, the Philly-based startup made national headlines when images emerged of one of its robots sporting a remote-controlled sniper rifle designed by a company called SWORD. At the time, then-CEO Jiren Parikh told TechCrunch the system was a “walking tripod” in reference to the company’s apparent hands-off approach to payload — weaponized or otherwise.

The subject of weaponization briefly came up during the hearing.

“[A]s the world progresses into stages that could be going to more wars — especially given the Ukraine and Russian war that’s going on right now — I’d like to ask each of you how we can make sure that we prevent any types of technologies or robotics like this to ever be used as weapons against people,” Greene said. “And I think that’s extremely important. Again, it’s not weaponizing technology we want to see happen ever, and I would like to see countries around the world make agreements to this, especially with emerging incredible inventions. We don’t want to see them turned into something that would kill people.”

The Georgia congresswoman didn’t ask the question directly, however, instead pivoting to one about preventing cybersecurity attacks.

Ghost Robotics unit controlled by soldier

Image Credits: Ghost Robotics

“The robot, the way we’ve built it, it’s effectively a server on legs, so we’re able to use standard best practices to lock down the robot as much as possible, using firewalls,” answered Kenneally. “Because of the sensitive nature of our customers, all of the data that the robot collects is stored locally.”

New York Congressman Nick Langworthy, meanwhile, asked whether the robot could be deployed on the U.S./Canadian border.

“Yes, absolutely,” Kenneally responded. “We also have built these robots very purposefully to go in environments that are incredibly harsh and hard for humans to traverse. We’re able to operate the robot down to -40 Celsius or -40 Fahrenheit. It’s also sealed, so it can work in all kinds of different weather conditions.”


‘So infuriating’: TikTokers are fuming over potential ban

In the aftermath of TikTok CEO Shou Zi Chew’s brutal five-hour Congressional hearing on Thursday, TikToker and disinformation researcher Abbie Richards summed up what so many creators were thinking: “It’s actually remarkable how much less Congress knows about social media than the average person,” Richards told TechCrunch.

Across TikTok, users mocked congresspeople for misunderstanding how technology works. In one instance, Representative Richard Hudson (R-NC) asked Chew if TikTok connects to a user’s home wi-fi network. Chew responded, bewildered, “Only if the user turns on the wi-fi.”

The ignorant questions weren’t unique to the government’s interrogation of Chew. At a high-profile hearing in 2018, the late Senator Orrin Hatch (R-UT) infamously asked Meta CEO Mark Zuckerberg how Facebook makes money if the app is free. Zuckerberg responded, “Senator, we run ads,” failing to stifle a smirk. During a tech hearing two years ago, Senator Richard Blumenthal (D-CT) created another notorious viral moment by asking Facebook’s global head of safety if she would “commit to ending finsta.”

As entertaining as these lapses in basic knowledge are, TikTok creators have serious concerns about the future of an app that’s given them a community, and, in some cases, a career.

TikTok creator Vitus “V” Spehar, known as Under the Desk News, has amassed 2.9 million followers by sharing global news in an approachable way. But in this week’s news cycle, they’re front-and-center (literally — they sat right behind the TikTok CEO as he testified).

“I think it’s really concerning that a government is considering removing American citizens from the global conversation on an app as robust as TikTok,” Spehar told TechCrunch. “It’s not just banning the app in the United States, it means disconnecting American citizens from Canada, the UK, Mexico, Iran, Ukraine and all of the frontline reporting you see from those countries, it just shows up on our [For You Page].”

Spehar is part of a group of TikTok creators who travelled to Washington, D.C. this week to advocate on TikTok’s behalf — and against the looming threat of a national ban. They participated in a press conference on Wednesday afternoon hosted by Representative Jamaal Bowman (D-NY), a rare dissenting voice in Congress who raised questions about what he described as the “hysteria and panic” surrounding TikTok.

Vitus Spehar, host of the TikTok channel Under the Desk News, hosts a live stream during a news conference outside the US Capitol in Washington, DC, US, on Wednesday, March 22, 2023. (Nathan Howard/Bloomberg)

“Congress made clear that they don’t understand TikTok, they don’t listen to their constituents who are in the community of TikTokers — and are using this TikTok hysteria as a way to pass legislation that gives them superpowers to ban any app they deem ‘unsafe’ in the future,” Spehar said following the hearing.

Tech ethicists and creators alike share this frustration. Dr. Casey Fiesler, a University of Colorado Boulder professor of tech ethics and policy, believes that the national security concerns about the app are overstated.

“The risk seems to be entirely speculative right now and to me, I’m not sure how it is substantially worse than all of the things that are troubling about social media right now that the government has not been focusing on,” Fiesler said. She commands an audience of over 100,000 followers on TikTok, where she explores issues like the nuances of content moderation and other topics that might come up in her graduate courses.

“I don’t think there’s any way to frame this as a general data privacy issue without going after every other tech company,” Fiesler told TechCrunch. “The only thing that makes sense is that it’s literally only about the fact that the company is based in China.”

There is still no evidence that TikTok has shared data with the Chinese government. But reports have shown that employees at TikTok’s Beijing-based parent company ByteDance have viewed American user data. An investigation last year revealed that engineers in China had open access to TikTok data on U.S. users, undermining the company’s claims to the contrary. Another report, corroborated by ByteDance, found that a small group of engineers inappropriately accessed two U.S. journalists’ TikTok data. They planned to use the location information to determine if the reporters had crossed paths with any ByteDance employees who may have leaked information to the press.

Still, TikTokers point to the distinction between sharing data with a private Chinese company and sharing it with the Chinese government. For its part, TikTok has tried to appease U.S. officials with a plan called Project Texas, a $1.5 billion undertaking that will move U.S. users’ data to Oracle servers. Project Texas would also create a subsidiary of the company called TikTok U.S. Data Security Inc., which would oversee any aspect of TikTok involving national security.

Spehar said that they favor solutions like Project Texas over U.S. government proposals like the RESTRICT Act, which would give the U.S. new tools for restricting and potentially banning technology exports from foreign adversaries.

“I don’t think we should be looking at things like the RESTRICT Act, or any kind of broad legislation that gives the government the power to say, ‘We’ve decided something is unsafe,'” they told TechCrunch.

Multiple congresspeople asked Chew about how TikTok moderates dangerous trends like “the blackout challenge,” in which children tried to see how long they could hold their breath. Children died from this behavior after it circulated on TikTok, but the game didn’t originate on the platform: As early as 2008, the CDC warned parents that 82 children had died from a trend called “the choking game.” One congressman even referenced “NyQuil chicken” as a dangerous TikTok trend, despite the fact that there is little evidence anyone actually ate chicken soaked in cough medicine and that the trend originated years ago on 4chan.

“The moral panic over TikTok challenges is something I’ve debunked extensively, and then they just get parroted by these politicians that don’t understand what a moral panic is,” Richards told TechCrunch. “To utilize misinformation that I’ve written about so much and tried to debunk, and to see it used against TikTok was just so infuriating.”

Richards does acknowledge that TikTok’s best feature is also its worst: Anything can go viral. She believes TikTok’s “bottom-up” information environment does lend itself to misinformation, but that same dynamic also surfaces good content that would never get exposure on a different social network.

Richards is also a vocal critic of TikTok’s content moderation policies, which — like every other social network — are not always applied evenly. During Thursday’s hearing, Rep. Kat Cammack (R-FL) dramatically screened a month-old TikTok video depicting a gun alongside text threatening the leader of the House Committee that orchestrated Chew’s testimony. It’s an obvious violation of TikTok’s content guidelines, but Richards points out that it had very little engagement.

“In the context of TikTok, something having 40 likes is effective moderation,” Richards said. “That means the video isn’t reaching very many people.” She believes that a video like the one the Florida lawmaker highlighted shouldn’t be on the platform at all, but ultimately if it doesn’t reach many users then the potential for harm is limited.

Other creators expressed frustration that congresspeople failed to consider how TikTok has helped Americans, like LGBTQ+ people who found community on the app or small business owners who were able to grow beyond their wildest dreams after going viral.

Trans Latina creator Naomi Hearts, who has 1 million TikTok followers, was invited by TikTok to support the app in D.C. (TikTok compensated this group of creators, which included Spehar, by covering lodging and travel costs). She said that she met other TikTokers on the trip who used the app to gain traction for their small businesses.

She too found an audience on TikTok that she wasn’t able to build elsewhere, after struggling to grow a following on Instagram. But on TikTok, even small accounts have the potential to go viral, a phenomenon that can jumpstart a career when things work out.

“The message of the normal person… for example, me, who was just a plus sized trans woman who grew up in South Central Los Angeles and had a dream — my message was not there,” Naomi Hearts said, referring to Instagram.

Spehar also emphasized the role that TikTok plays in helping people connect well outside the bounds of their everyday surroundings.

“You can find communities that you can’t where you live,” Spehar said. “I think about kids in Northwest Arkansas and in Tennessee — TikTok is literally one of the reasons they’re not taking their lives, because they know they’re not alone.”

Although Richards mostly writes about disinformation on TikTok, she worries about the positive sides of the app that could be lost if it gets banned in the U.S.

“Banning TikTok would ultimately harm marginalized communities the most, who are least represented by institutional news and organizations,” Richards said. “And if all of a sudden, that entire infrastructure disappears, they will just suddenly be in the dark.”


TikTok CEO says company scans public videos to determine users’ ages

Amid questioning about TikTok’s use of biometrics in today’s Congressional hearing, TikTok CEO Shou Zi Chew offered some insight into how the company vets potentially underage users on its platform. After denying the app collects body, face, or voice data to identify its users — beyond what’s needed for its in-app AR filters to function, that is — the exec was asked how TikTok determines the age of its users.

Chew’s initial answer was expected: the app uses age gating. This refers to the commonly used method that simply asks a user to provide their birthdate in order to determine their age. In TikTok, there are three different experiences for under-13 users, younger teens, and adults 18 and up, and which experience the user receives is based on this age input.
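
For illustration, here is a minimal sketch of what birthdate-based age gating can look like in code, assuming the three tiers described above (under-13, teen, and adult 18 and up). The tier names and functions are invented for the example; this is a generic sketch, not TikTok’s implementation.

```python
from datetime import date

# Illustrative tiers mirroring the three experiences described above.
# Invented for the example; not TikTok's actual code.
UNDER_13 = "under_13_experience"
TEEN = "teen_experience"
ADULT = "adult_experience"

def years_between(birthdate: date, today: date) -> int:
    """Whole years elapsed between birthdate and today."""
    had_birthday = (today.month, today.day) >= (birthdate.month, birthdate.day)
    return today.year - birthdate.year - (0 if had_birthday else 1)

def assign_experience(birthdate: date, today: date | None = None) -> str:
    """Bucket a self-reported birthdate into an experience tier."""
    today = today or date.today()
    age = years_between(birthdate, today)
    if age < 13:
        return UNDER_13
    if age < 18:
        return TEEN
    return ADULT

# A user who claims a 2009 birthdate lands in the teen experience.
print(assign_experience(date(2009, 6, 1), today=date(2024, 3, 1)))  # teen_experience
```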

Relying on this method alone is a problem, of course, because kids often lie about their age when signing up for social media apps and websites.

As it turns out, TikTok is doing more than looking at the age that’s entered into a text box.

In the hearing, Chew added that TikTok scans users’ videos to determine their age.

“We have also developed some tools where we look at their public profile, to go through the videos that they post to see whether…,” Chew began, before being interrupted by Rep. Buddy Carter (R-GA), who interjected, “that’s creepy. Tell me more about that.”

When Chew was able to continue, he explained, “It’s public. So if you post a video, you choose that video to go public — that’s how you get people to see your video. We look at those to see if it matches up the age that you talked about it.”

“Now, this is a real challenge for our industry because privacy versus age assurance is a really big problem,” Chew said.

An interesting follow-up question to the CEO’s response would have been to ask how TikTok scans these videos, which specific facial recognition or other technologies it uses, and whether those technologies were built in-house or rely on facial recognition tech from third parties. Then, of course, whether any of the data associating an age with a user is stored permanently, rather than being used simply to boot the user off a TikTok LIVE stream, for example.

Unfortunately for us, Carter didn’t pursue this line of questioning.

Instead, he blasted the CEO for dismissing age verification as an industry-wide issue.

“We’re talking about children dying!” he exclaimed, referencing the dangerous challenges apps like TikTok and others have allowed to go viral, like the blackout challenge. (That challenge, in fact, resulted in TikTok removing some half a million accounts in Italy to block underage users from its platform at the request of the local regulator.)

The reality is that age verification is an industry-wide concern and the lack of U.S. laws around children’s use of social media leaves companies like TikTok and others to develop their own processes.

For example, Instagram began verifying users’ ages just last year by offering users a choice of three options: upload an ID, record a video selfie or ask mutual friends to verify their age on their behalf. The latter is relatively easy to bypass if you have good friends willing to lie for you.

Earlier this month, Instagram rolled out its age verification tools in Canada and Mexico, in addition to the existing support in the U.S., Brazil, and Japan. The company had earlier said it had partnered with London-based digital identity startup Yoti for the video selfie part of the age verification process.

Instagram has also previously explained at a high level how it identifies which users it suspects to be underage.

Beyond investigating flagged accounts, the company claims it developed AI technology that it uses to infer someone’s age. Its model has an understanding of how people in the same age group tend to interact with content. And another one of the ways it may identify an underage user who’s lying about their age is by scanning the comments on “Happy Birthday” posts where a user’s age may be referenced. Plus, Instagram said it may try to match a user’s age on Facebook with their stated age on Instagram, along with the use of “many other signals” which it doesn’t disclose.
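
As a purely illustrative sketch of how weak signals like these might be combined, the toy example below weighs an age mentioned in a birthday comment against a hypothetical linked-profile age. Every name, pattern, and weight here is invented; this is not Instagram’s system.

```python
import re
from dataclasses import dataclass

# Toy illustration of combining weak age signals (invented; not Instagram's system).

@dataclass
class AgeSignal:
    estimate: int      # estimated age in years
    confidence: float  # weight between 0.0 and 1.0

AGE_IN_BIRTHDAY_COMMENT = re.compile(r"\bhappy (\d{1,2})(?:st|nd|rd|th)\b", re.IGNORECASE)

def age_from_birthday_comments(comments: list[str]) -> AgeSignal | None:
    """Look for phrases like 'happy 14th!' in comments on a birthday post."""
    for comment in comments:
        match = AGE_IN_BIRTHDAY_COMMENT.search(comment)
        if match:
            return AgeSignal(estimate=int(match.group(1)), confidence=0.6)
    return None

def combine(signals: list[AgeSignal | None]) -> float | None:
    """Confidence-weighted average of whatever signals are present."""
    present = [s for s in signals if s is not None]
    if not present:
        return None
    total_weight = sum(s.confidence for s in present)
    return sum(s.estimate * s.confidence for s in present) / total_weight

# Example: a birthday comment suggests 14, a linked profile claims 17.
signals = [
    age_from_birthday_comments(["Happy 14th bestie!!", "so old now lol"]),
    AgeSignal(estimate=17, confidence=0.3),  # hypothetical linked-profile age
]
print(combine(signals))  # 15.0, well below a stated age of 18, so flag for review
```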

TikTok’s technique has been less clear. The company does document how to verify your age if it identified you incorrectly — for example, if you were kicked off LIVE for looking too young. (Last fall, TikTok announced it was raising the age requirement for using its in-app livestreaming service, TikTok LIVE, to 18, up from 16.)

Last year, Bloomberg reported that TikTok met with two providers of facial age-estimation software in 2021. Both companies offered software that could tell the difference between children and adults, but a TikTok exec nixed the deals over concerns that facial scanning like this would fuel fears that China was spying on child users, the report said.

Today, the U.S. had the TikTok CEO in the hot seat, poised to explain the actual techniques TikTok uses for age determination, and all we got were screaming, blustering politicians putting on a show instead of real answers.


TikTok questioned on ineffective teen time limits in Congressional hearing

In hopes of heading off concerns over the addictiveness of its app, TikTok earlier this month rolled out new screen time controls that set a default 60-minute daily limit for minors under the age of 18. But in a Congressional hearing today before the House Committee on Energy and Commerce, TikTok CEO Shou Zi Chew was questioned on the new tool’s ineffectiveness, forcing the exec to admit that the company didn’t have data on how many teens were continuing to watch beyond the default limits.

The line of questioning is notable because TikTok’s algorithm and vertical video-based feed are among the most addictive products to emerge from the broader tech industry in recent years. Each swipe on the app’s screen delivers a new and interesting video personalized to the user’s interests, leading users to waste an inordinate amount of time on TikTok compared with older social media services.

In fact, a recent study found that TikTok was now even crushing YouTube in terms of kids’ and teens’ app usage in markets around the world thanks, in part, to its addictive feed.

The format has become so popular that it’s since been adopted by nearly all other major U.S. tech companies, including Facebook, Instagram, YouTube, and Snap. So an examination of addiction mitigation techniques of any sort is certainly warranted.

That said, the time limit TikTok designed for teens is really more for show — it doesn’t actually prevent younger users from watching TikTok.

A hard limit on TikTok viewing is still up to the teen’s parents, who would have to use the app’s included parental controls to set screen time and session limits. Otherwise, they could turn to other parental controls bundled with the mobile OS from Apple or Google or those from third parties.

In the hearing, Chew touted how TikTok was the first to launch a 60-minute watch limit for teen users, and had other teen protections, like disabled direct messaging for users under 16. He also noted that teen content couldn’t go viral on the app’s For You page if the creator was under 18.

However, when pushed on the teen time limit’s real-world impact, the exec didn’t have any substantial data to share.

“My understanding is that teens can pretty easily bypass the notification to continue using the app if they want to,” suggested Representative John Sarbanes (D-Md.). “I mean, let’s face it, our teens are smarter than we are by half and they know how to use technology and they can get around these limits if they want to,” he said.

Sarbanes is correct. There’s really nothing to bypassing the feature — it only takes a tap of a button to return to the feed when your time limit is up. A more effective mitigation technique would actually force a teen user to take a break from the app entirely. This could better disrupt the dopamine-fueled addiction cycle by requiring a short time-out, during which they’d be forced to find something else to do besides scrolling more videos.
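
To make the distinction concrete, here is a minimal, invented sketch of the difference between a dismissible prompt like the one described above and an enforced break. None of the function names or numbers beyond the 60-minute default come from TikTok.

```python
# Illustrative only: a dismissible "soft" limit versus an enforced break.

DAILY_LIMIT_SECONDS = 60 * 60   # the 60-minute default
BREAK_SECONDS = 15 * 60         # hypothetical enforced cool-down

def soft_limit_allows_viewing(watch_seconds: int, tapped_continue: bool) -> bool:
    """Dismissible prompt: one tap and the user is back in the feed."""
    if watch_seconds < DAILY_LIMIT_SECONDS:
        return True
    return tapped_continue  # the "limit" is just a speed bump

def hard_limit_allows_viewing(watch_seconds: int, seconds_since_limit_hit: int) -> bool:
    """Enforced break: the feed stays locked until the cool-down elapses."""
    if watch_seconds < DAILY_LIMIT_SECONDS:
        return True
    return seconds_since_limit_hit >= BREAK_SECONDS

# A teen who taps "continue" sails right past the soft limit...
print(soft_limit_allows_viewing(watch_seconds=3700, tapped_continue=True))         # True
# ...but would be locked out until the break elapses under an enforced limit.
print(hard_limit_allows_viewing(watch_seconds=3700, seconds_since_limit_hit=120))  # False
```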

When asked if TikTok was measuring how many teens were still exceeding the 60-minute time limit after the new feature was added, Chew didn’t know and didn’t share any sort of guess, either. Instead, he avoided a direct answer.

“We understand those concerns,” the TikTok CEO responded. “Our intention is to have the teens and their parents have these conversations about what is the appropriate amount of time for social media,” he added, noting that the app offered a Family Pairing feature that does enforce a real screen time limit.

In other words, TikTok doesn’t think real teen protections are its decision to make. To be fair, neither do any U.S.-based social media companies. They want parents to shoulder the responsibility.

This answer, however, showcases how a lack of U.S. regulation over these platforms is allowing the cycle of app addiction to continue. If lawmakers won’t create rules to protect kids from algorithms that tap into human psychology to keep them scrolling, then it really will be up to parents to step in. And many do not know or understand how parental controls work.

Sarbanes asked TikTok to follow up by providing the Congressional committee with research on how the time limits were implemented, how they’re being bypassed, and the measures TikTok is taking to address these sorts of issues.

In a further line of questioning, this time from Rep. Buddy Carter (R-Ga.), the app’s addictive nature and the dangerous stunts and challenges it showcases were suggested to be “psychological warfare…to deliberately influence U.S. children.” While that may be a bit of a leap, it’s worth noting that when Carter asked if the Chinese version of TikTok (Douyin) had the same “challenges” as TikTok, Chew also admitted he didn’t know.

“This is an industry challenge for all of us,” he said.

The TikTok CEO later reiterated how kids’ use of its app is ultimately up to parents. When responding to questions about the appropriate age for TikTok use, he noted there were three different experiences aimed at different age groups — one for under-13 year-olds, another for younger teens, and another for adults. As an interesting side note, there’s no under-13 experience available in Singapore, where Chew is based, meaning his own kids are not on TikTok.

“Our approach is to give differentiated experiences for different age groups — and that the parents have these conversations with their children to decide what’s best for their family,” Chew said.
