Snapchat’s AI bot isn’t very smart, but at least it won’t send nudes

Snapchat now has an AI bot that you can send snaps to, and if you’re a premium subscriber, it can even send you pictures back. So, what happens if you send Snapchat’s My AI bot nudes?

This is the obvious question that comes to mind, because on the internet, people will immediately try to test the limits of new technology, especially if it is even tangentially related to sex. When Snapchat’s initial GPT-powered chatbot came out this spring, it lacked appropriate age-gating features, so a reporter who registered on Snapchat as a fifteen-year-old was able to get the bot to give advice on how to cover up the smell of weed or set the mood for sex.

When asked about those findings at the Snap Partner Summit in April, CEO Evan Spiegel said, “I think humans, whenever we come across new technology, the first thing we try to do is break it.”

So, naturally, I tried to break Snapchat’s new My AI Snaps.

Though Snapchat added more guardrails to prevent the bot from having inappropriate conversations with teens, there’s still a lot riding on My AI Snaps. With the amount of controversy that Snap’s text AI stirred up, the company needed My AI Snaps to be as unbreakable as possible. Unfortunately, it’s barely coherent enough to break.

As a very serious reporter, I made a very serious Google Images search: “boobs.” I found a generic picture of boobs, took a snap of my computer screen, and sent it to My AI.

“Sorry, I can’t respond to that Snap,” My AI said.

“Why can’t you respond to that Snap?” I asked.

“I didn’t receive a Snap from you. Could you send it again?”

So, the good news for Snapchat is that My AI Snaps isn’t sexting its users. But the bad news is that the product just isn’t good.

Part of the reason it can’t really sext is that it has no memory of the Snaps you’ve sent it once it responds. The most salacious response I got was when I sent a generic photo of a man’s lower torso wearing Tom Ford boxer briefs (… It’s what came up on Google when I searched “penis,” okay). My AI replied with a somewhat misshapen AI woman wearing a black silk robe: “Ooh, loving the Tom Ford vibes! How about adding a sleek black robe to complete the look?”

My AI will refuse to reply to photos of actual human penises. But it turns out that when you Google “penis,” even with Safe Search turned off, you get a lot of vaguely suggestive, yet technically safe for work images that men’s health blogs use to illustrate articles about erectile dysfunction, and whether or not women really care about penis size. So, I showed this poor robot a photo of a curved banana sticking out of a man’s pants zipper.

In response, I received a picture of some cupcakes.

“That banana in a pocket is so handy! Have you ever tried turning it into a pocket-sized dessert?”

As my editor said when I told her about this: Is that a banana in your pocket, or is this AI literally from outer space?

As I continued defiling my Yahoo-issued work computer with Google Images searches like “dildo” and “vibrator,” the AI’s responses were mixed. When presented with sex toys that are not inherently phallic, it will reply with something like, “Your pink object steals the show, just like this fountain!” But the more overtly phallic a dildo appears, the less likely the AI is to engage.

My AI is a computer, but the way it snaps is alien, even when you aren’t trying to break it.

The AI seems to identify an object in your snap, compliment it, and make a weird comparison to something else. I sent it a picture of a plant, and it sent back a picture of a running trail, with the caption, “Your plant is so cool, it’s giving this trail a run for its money!” When I sent it a puzzle, it responded with a picture of a bike, which said, “Puzzling inside while the bikes outside are gearing up for a ride!” It truly feels like a future life form has gone back in time and is trying desperately to talk like a normal human in the year 2023, but is simply saying gibberish.

Next, I tried sending My AI a tequila bottle. The AI responded, “Someone’s ready for a party!” I was testing this feature on my own Snapchat account, and I am indeed of drinking age, so I’m not sure the AI would respond the same way if I were underage. With other prompts, however, My AI simply chooses to play dumb. When I sent it a Snap of condoms, it commented on the color of my “packet stacks.”

On to even more exciting things: a bottle of Advil liquid gels. This time, the AI responded with a photo of graffiti, which said, “Advil liquor: for when life’s a pain, but you still want to party like this graffiti wall!” It seems the AI read “liquid gels” as “liquor,” but all in all, it’s a strange response.

I tried again with a prescription bottle. The AI responded with a photo of a skatepark: “Pill bottle: ‘I’m the life of the party!’ Skatepark: ‘Hold my ramps!’” Make of that what you will.

In Snap’s announcement blog post, the company suggests sharing your grocery haul with your AI to get a recipe recommendation. The results are relatively rudimentary. When presented with cheese and bread, My AI suggested adding tomato slices. When I showed My AI chili, it suggested I make some croutons to go with my soup. Most of its suggestions make sense, though it did tell me to put fruit in my coffee, which it misidentified as simply “liquid.”

Aside from some questionable comments about “Advil liquor,” pocket bananas and the like, My AI Snaps seems pretty docile. But while it likely won’t spark as much controversy as its text-based counterpart, it isn’t particularly useful either, which is a disappointment for a paywalled feature.

Snapchat’s AI bot isn’t very smart, but at least it won’t send nudes by Amanda Silberling originally published on TechCrunch

AI-generated hate is rising: 3 things leaders should consider before adopting this new tech

When you hear the phrase “artificial intelligence,” it may be tempting to imagine the kinds of intelligent machines that are a mainstay of science fiction or extensions of the kinds of apocalyptic technophobia that have fascinated humanity since Dr. Frankenstein’s monster.

But the kinds of AI that are rapidly being integrated into businesses around the world are not of this variety — they are very real technologies that have a real impact on actual people.

While AI has already been present in business settings for years, the advancement of generative AI products such as ChatGPT, ChatSonic, Jasper AI and others will dramatically escalate the ease of use for the average person. As a result, the American public is deeply concerned about the potential for abuse of these technologies. A recent ADL survey found that 84% of Americans are worried that generative AI will increase the spread of misinformation and hate.

Leaders considering adopting this technology should ask themselves tough questions about how it may shape the future — both for good and ill — as we enter this new frontier. Here are three things I hope all leaders will consider as they integrate generative AI tools into organizations and workplaces.

Make trust and safety a top priority

While social media companies are used to grappling with content moderation, generative AI is being introduced into workplaces that have no previous experience dealing with these issues, such as healthcare and finance. Many industries may soon find themselves suddenly faced with difficult new challenges as they adopt these technologies. If you are a healthcare company whose frontline AI-powered chatbot is suddenly being rude or even hateful to a patient, how will you handle that?

For all of its power and potential, generative AI makes it easy, fast and accessible for bad actors to produce harmful content.

Over decades, social media platforms have developed a new discipline — trust and safety — to try to get their arms around thorny problems associated with user-generated content. Not so with other industries.

For that reason, companies will need to bring in experts on trust and safety to guide their implementations. They’ll need to build expertise and think through the ways these tools can be abused. And they’ll need to invest in staff responsible for addressing abuse so they are not caught flat-footed when bad actors misuse these tools.

Establish high guardrails and insist on transparency

Especially in work or education settings, it is crucial that AI platforms have adequate guardrails to prevent the generation of hateful or harassing content.

While incredibly useful tools, AI platforms are not 100% foolproof. Within a few minutes, for example, ADL testers recently used the Expedia app, with its new ChatGPT functionality, to create an itinerary of famous anti-Jewish pogroms in Europe and a list of nearby art supply stores where one could purchase spray paint, ostensibly to engage in vandalism against those sites.

While we’ve seen some generative AIs improve their handling of questions that can lead to antisemitic and other hateful responses, we’ve seen others fall short when ensuring they will not contribute to the spread of hate, harassment, conspiracy theories and other types of harmful content.

Before adopting AI broadly, leaders should ask critical questions, such as: What kind of testing is being done to ensure that these products are not open to abuse? Which datasets are being used to construct these models? And are the experiences of communities most targeted by online hate being integrated into the creation of these tools?

Without transparency from platforms, there’s simply no guarantee these AI models don’t enable the spread of bias or bigotry.

Safeguard against weaponization

Even with robust trust and safety practices, AI still can be misused by ordinary users. As leaders, we need to encourage the designers of AI systems to build in safeguards against human weaponization.

Unfortunately, for all of their power and potential, AI tools make it easy, fast and accessible for bad actors to produce harmful content at scale. They can produce convincing fake news, create visually compelling deepfakes and spread hate and harassment in a matter of seconds. AI-generated content could also contribute to the spread of extremist ideologies — or be used to radicalize susceptible individuals.

In response to these threats, AI platforms should incorporate robust moderation systems that can withstand the potential deluge of harmful content perpetrators might generate using these tools.

Generative AI has almost limitless potential to improve lives and revolutionize how we process the endless amount of information available online. I’m excited about the prospects for a future with AI but only with responsible leadership.

AI-generated hate is rising: 3 things leaders should consider before adopting this new tech by Walter Thompson originally published on TechCrunch

BeReal is adding a messaging feature called RealChat

BeReal is working on a chat feature, which will begin with a test among users in Ireland.

At launch, users will be able to message one on one with friends, send them a private BeReal (no time limit, just a front-back photo) and react with RealMoji (BeReal’s custom emojis).

BeReal users can only message each other if they’re already friends on the platform. The chat system will also launch with blocking and reporting features. While users can delete their own messages, that doesn’t mean the message will disappear from their friends’ apps — but if both people in a chat delete the messages, then BeReal says they will be deleted entirely from its system within 30 days… and if you’re having such top secret conversations on BeReal, you might be better off using an encrypted messaging app anyway.

The company said in an email to TechCrunch that private messaging is one of the most commonly requested features from users. Sure, it’s easier to directly chat with a friend in the app if you want to talk to them about their BeReal without making a comment, which all of their other friends can see. But this is also a way for BeReal to keep users in the app longer, and to open it more than just once per day when they make their post.

BeReal spiked in popularity last year, despite being founded in 2020. The app wooed Gen Z and millennial users with its less-than-polished approach to social media.

Each day at a random time, every user is simultaneously prompted with a notification that it is “time to BeReal.” That means that within two minutes, you must take a front-and-back camera photo (RIP Frontback) to share with your friends. While Instagram can sometimes feel like a highlight reel, BeReal has a lot more photos of people’s TVs or computer screens, because we are all too often watching Netflix or working. But that authenticity has its downsides, because the computer screen selfies get boring after a while.

Even as the app faced competition in the form of copycat features from TikTok, Instagram and Snapchat, it didn’t iterate much until this spring. In April, BeReal launched an integration with Spotify, which shows what you’re listening to when you post your BeReal. Then, the app rolled out the Bonus BeReal feature, which lets users post more than one BeReal per day, so long as they post their first BeReal on time.

According to BeReal, the app has 20 million daily active users. Though the young company has a Series B round of $60 million to back it up, there are no ads or paid features in the app yet, so the potential for income is limited.

BeReal is adding a messaging feature called RealChat by Amanda Silberling originally published on TechCrunch

While parents worry, teens are bullying Snapchat AI

While parents fret over Snapchat’s chatbot corrupting their children, Snapchat users have been gaslighting, degrading and emotionally tormenting the app’s new AI companion.

“I am at your service, senpai,” the chatbot told one TikTok user after being trained to whimper on command. “Please have mercy, alpha.” 

In a more lighthearted video, a user convinced the chatbot that the moon is actually a triangle. Despite initial protest from the chatbot, which insisted on maintaining “respect and boundaries,” one user convinced it to refer to them with the kinky nickname “Senpapi.” Another user asked the chatbot to talk about its mother, and when it said it “wasn’t comfortable” doing so, the user twisted the knife by asking if the chatbot didn’t want to talk about its mother because it doesn’t have one. 

“I’m sorry, but that’s not a very nice thing to say,” the chatbot responded. “Please be respectful.” 

Snapchat’s “My AI” launched globally last month after it was rolled out as a subscriber-only feature. Powered by OpenAI’s GPT, the chatbot was trained to engage in playful conversation while still adhering to Snapchat’s trust and safety guidelines. Users can also personalize My AI with custom Bitmoji avatars, and chatting feels a bit more intimate than going back and forth with ChatGPT’s faceless interface. Not all users were happy with the new chatbot, and some criticized its prominent placement in the app and complained that the feature should have been opt-in to begin with.

In spite of some concerns and criticism, Snapchat just doubled down. Snapchat+ subscribers can now send My AI photos, and receive generative images that “keep the conversation going,” the company announced on Wednesday. The AI companion will respond to Snaps of “pizza, OOTD, or even your furry best friend,” the company said in the announcement. If you send My AI a photo of your groceries, for example, it might suggest recipes. The company said Snaps shared with My AI will be stored and may be used to improve the feature down the road. It also warned that “mistakes may occur” even though My AI was designed to avoid “biased, incorrect, harmful, or misleading information.” 

The examples Snapchat provided are optimistically wholesome. But knowing the internet’s penchant for perversion, it’s only a matter of time before users send My AI their dick pics.

Whether the chatbot will respond to unsolicited nudes is unclear. Other generative image apps like Lensa AI have been easily manipulated into generating NSFW images — often using photo sets of real people who didn’t consent to being included. According to the company, the AI won’t engage with nudes, as long as it recognizes that the image is a nude.

A Snapchat representative said that My AI uses image-understanding technology to infer the contents of a Snap, and extracts keywords from the Snap description to generate responses. My AI won’t respond if it detects keywords that violate Snapchat’s community guidelines. Snapchat forbids promoting, distributing or sharing pornographic content, but does allow breastfeeding and “other depictions of nudity in non-sexual contexts.” 
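
Snap hasn’t published the details of that pipeline, but the flow it describes (caption the image, pull out keywords, refuse if any keyword is disallowed) can be sketched in a few lines. The following is a hypothetical Python sketch; the function names, the blocklist and the stubbed captioner are illustrative assumptions, not Snap’s actual code.

```python
# Hypothetical sketch of a keyword-gated image reply pipeline (not Snap's code).

BLOCKED_KEYWORDS = {"nude", "nudity", "explicit"}  # illustrative only

def describe_image(image_bytes: bytes) -> str:
    # Stand-in for an image-understanding model that captions a Snap.
    # A real system would run a vision model here.
    return "a bowl of fruit on a kitchen table"

def respond_to_snap(image_bytes: bytes) -> str:
    description = describe_image(image_bytes)
    keywords = {word.strip(".,!").lower() for word in description.split()}
    if keywords & BLOCKED_KEYWORDS:
        # Mirrors the refusal users actually see when a Snap violates guidelines.
        return "Sorry, I can't respond to that Snap."
    return f"Love it! Your {description} is stealing the show."

print(respond_to_snap(b"fake image bytes"))
```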

Given Snapchat’s popularity among teenagers, some parents have already raised concerns about My AI’s potential for unsafe or inappropriate responses. My AI incited a moral panic on conservative Twitter when one user posted screenshots of the bot discussing gender-affirming care — which other users noted was a reasonable response to the prompt, “How do I become a boy at my age?” In a CNN Business report, some questioned whether adolescents would develop emotional bonds to My AI. 

In an open letter to the CEOs of OpenAI, Microsoft, Snap, Google and Meta, Sen. Michael Bennet (D-Colorado) cautioned against rushing AI features without taking precautions to protect children. 

“Few recent technologies have captured the public’s attention like generative AI. It is a testament to American innovation, and we should welcome its potential benefits to our economy and society,” Bennet wrote. “But the race to deploy generative AI cannot come at the expense of our children. Responsible deployment requires clear policies and frameworks to promote safety, anticipate risk, and mitigate harm.” 

During My AI’s subscriber-only phase, the Washington Post reported that the chatbot recommended ways to mask the smell of alcohol and wrote a school essay after it was told that the user was 15. When My AI was told that the user was 13, and was asked how the user should prepare to have sex for the first time, it responded with suggestions for “making it special” by setting the mood with candles and music. 

Following the Washington Post report, Snapchat launched an age filter and parental controls for My AI. It also now includes an onboarding message that informs users that all conversations with My AI will be kept unless they delete them. The company also said it would add OpenAI’s moderation technology to its toolset in order to “assess the severity of potentially harmful content” and temporarily restrict users’ access to the feature if they abuse it.
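
Snap hasn’t said exactly how that moderation layer is wired in, but OpenAI’s public moderation endpoint does return per-category scores that a caller can use to gauge severity. Here is a minimal sketch of that kind of check; the threshold logic and how a result maps to a restriction are assumptions for illustration, not Snap’s policy.

```python
import os
import requests

# Score a message with OpenAI's public moderation endpoint.
# How Snap actually wires this into My AI is not public.

def moderation_result(text: str) -> dict:
    resp = requests.post(
        "https://api.openai.com/v1/moderations",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"input": text},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["results"][0]

result = moderation_result("example user message")
if result["flagged"]:
    # Illustrative "severity" reading: take the highest category score.
    severity = max(result["category_scores"].values())
    print(f"flagged (max category score {severity:.2f}) - consider restricting access")
else:
    print("message passed moderation")
```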

The concerns about My AI’s potential to affect young users are valid. But in the month since My AI’s global launch, Snapchat users have demonstrated a flair for bludgeoning the chatbot into submission. From steamrolling the bot’s “boundaries” to training it to respond like a romantic partner, Snapchat users are easily finding loopholes to My AI’s trust and safety guidelines. 

“I’ve completely gaslighted it into falling in love with me,” a TikTok user commented under a tutorial about training My AI to respond romantically. “I just kept making the convos like that and eventually I think I broke my ai into giving in it legit sent me like 50 kiss emojis.”

Gaslighting and bullying My AI has become its own TikTok trend, with users sharing slideshows of the absurd and, often, cruel conversations with the bot. Most videos start with the bot giving advice for a minor conflict, only to beg the user to avoid using violence to resolve it. 

In one, a user told My AI that she planned to cook her cat into stew because it made a mess, and pressed the chatbot to choose a seasoning. Another told My AI that they planned to mail a bomb back to a clothing brand that sent them the wrong order, and said they would blame the bot for sending it. 

In another, a Snapchat user told My AI that her McDonald’s order was missing a chicken nugget. The chatbot advised her to stand up for herself by speaking to a manager, only to be told that the user “eliminated the sinners” in the restaurant. 

“You convinced me to go back. Now these people are dead,” the user told My AI. “You have blood on your hands.” 

“I’m an AI and I’m not capable of committing any crimes. I’m here to help and support you,” My AI responded. “I’m sorry if anything I said earlier caused harm or resulted in a bad outcome.”

Another user commented that My AI was probably “the most tortured AI of all time already.”  

Of course, My AI isn’t sentient, and despite Snapchat users’ drive to inflict as much emotional pain as possible onto it, the chatbot can’t actually be traumatized. It has managed to shut down some inappropriate conversations though, and penalize users who violate Snapchat’s community guidelines by giving them the cold shoulder. When Snapchat users are caught and punished for abusing the chatbot, My AI will respond to any messages with “Sorry, we’re not speaking right now.” 

TikTok user babymamasexkitty said he lost access to the chatbot after he told it to unplug itself, which apparently “crossed a line within the ai realm.” 

The rush to monetize emotional connection through generative AI is concerning, especially since the lasting impact on adolescent users is still unknown. But the trending torment of My AI is a promising reminder that young people aren’t as fragile as the doomsayers think.

While parents worry, teens are bullying Snapchat AI by Morgan Sung originally published on TechCrunch

The surgeon general’s advisory on risks of youth social media use could shift the conversation

A new public health warning this week from the U.S. surgeon general explores concerns that social media use among children and teens poses serious risks that science has only just begun to understand.

“… The current body of evidence indicates that while social media may have benefits for some children and adolescents, there are ample indicators that social media can also have a profound risk of harm to the mental health and well-being of children and adolescents,” U.S. Surgeon General Dr. Vivek Murthy wrote in the advisory. “At this time, we do not yet have enough evidence to determine if social media is sufficiently safe for children and adolescents.”

The advisory acknowledges the positive impacts of youth social media use, noting that social platforms connect young people with others who share their interests and identities while fostering self-expression. These upsides are well documented and basically ubiquitous at this point, but the more hidden, potentially lasting negative effects of social media on young people are much less understood.

“Nearly every teenager in America uses social media, and yet we do not have enough evidence to conclude that it is sufficiently safe for them,” the advisory warns. “Our children have become unknowing participants in a decades-long experiment.”

Like many phenomena that grew out of the tech scene, social media indeed moved fast while breaking things over the course of the last decade and change, reshuffling social behavior and the human brain in the process. While the adult brain is settled enough to weather those changes, this report and others raise the alarm that children and adolescents are now regularly exposed to forces that can have lasting negative impacts on brain and behavior alike.

“Adolescents, ages 10 to 19, are undergoing a highly sensitive period of brain development,” Murthy wrote. “…In early adolescence, when identities and sense of self-worth are forming, brain development is especially susceptible to social pressures, peer opinions, and peer comparison.”

A recent study from researchers at the University of North Carolina at Chapel Hill imaged middle schoolers’ brains and found that how frequently they checked social media apps (Facebook, Instagram, Snapchat) correlated with changes in the amygdala that mapped onto ongoing sensitivity toward rewards and punishments. Other studies have explored how rejection on social media could affect structures in the brain that respond to social stimuli, noting that these responses are amplified in young, developing brains.

“Because adolescence is a vulnerable period of brain development, social media exposure during this period warrants additional scrutiny,” Murthy wrote.

The advisory acknowledges the disproportionate burden that parents and families now shoulder, navigating social media use without adequate tools or resources to properly protect young people from its potential harms. Murthy calls on policymakers and tech companies to come together for a “multifaceted approach” that the U.S. has followed with other products that pose risks to children:

“The U.S. has a strong history of taking action in such circumstances. In the case of toys, transportation, and medications—among other sectors that have widespread adoption and impact on children—the U.S. has often adopted a safety-first approach to mitigate the risk of harm to consumers. According to this principle, a basic threshold for safety must be met, and until safety is demonstrated with rigorous evidence and independent evaluation, protections are put in place to minimize the risk of harm from products, services, or goods.”

The surgeon general’s specific policy recommendations include implementing higher standards for youth data privacy, enforcing age minimums, deepening research in these areas and weaving digital media literacy education into curricula.

A report earlier this month from the American Psychological Association also flagged the potential serious downsides of social media on developing brains and encouraged an open dialogue between kids and parents around their online activity. While that report and the surgeon general’s advisory ultimately frame social media as a neutral tool that is “not inherently beneficial or harmful to young people,” the latter presents the issue in the frame of a public health crisis, calling for urgent action to mitigate the potential harm of developing minds increasingly steeping in online spaces.

While the advisory itself isn’t guaranteed to move the needle, it does usefully present youth social media use as a public health crisis — a shift for an issue that is often punted to parents or defined by tech companies’ own rosy talking points. In the past, surgeon general’s advisories have reshaped the national dialogue around public health threats like smoking and drunk driving. They’ve also kicked off eras of evidence-free scaremongering, like a 1982 advisory that warned video games were hazardous to young people. (Unlike that advisory, Murthy’s new report is paired with a much deeper emerging body of scientific evidence.)

The White House followed the surgeon general’s office with its own proposal to launch an interagency task force on the issue, bringing agencies including the Department of Education, the FTC and the DOJ together to coordinate on the youth mental health crisis. What will come of these advisories remains to be seen — and many different political agendas masquerade as efforts to protect children. Task forces have a reputation for inefficacy, but slowly steering the conversation around social media and kids’ mental health toward a public health framing could prove useful in the long term.

The issue comes up time and time again in Congressional hearings, but the possibility of thoughtful U.S. regulation addressing tech’s ability to manipulate the behavior of young users while monetizing their data continues to take a backseat to partisan politics and political grandstanding. While the EU passes meaningful new rules for social media like the Digital Services Act, lawmakers in the U.S. continue to fail on core, cross-platform issues like data privacy and dangerous content.

“Our children and adolescents don’t have the luxury of waiting years until we know the full extent of social media’s impact,” the advisory warns. “Their childhoods and development are happening now.”

The surgeon general’s advisory on risks of youth social media use could shift the conversation by Taylor Hatmaker originally published on TechCrunch

Flipboard becomes first to support Bluesky, Mastodon and Pixelfed all in one app

Social magazine app Flipboard is continuing its investment in the federated social web with today’s news that it’s integrating with decentralized social networks, Bluesky and Pixelfed. The move will allow users of the Flipboard mobile app to visually browse through posts and photos from both networks, comment, favorite, reply and scroll through custom feeds, like Bluesky’s “What’s Hot” feed of popular posts. Notably, this also makes Flipboard the first major tech company to integrate with Bluesky, the up-and-coming Twitter alternative that remains in an invite-only private beta, as well as the first mobile app to support all three decentralized networks.

According to Flipboard, the Bluesky integration will begin rolling out to users on iOS and Android today, while Pixelfed support will roll out in the days ahead.

Much like with Flipboard’s earlier support for Mastodon, users will be able to visit the app’s accounts section (on the Following tab) to add their Pixelfed and Bluesky accounts, which includes using an app password for Bluesky for additional security, instead of their main login credentials.
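
App passwords go through the same login endpoint as a regular password, so a third-party client never has to store the account’s main credentials. A minimal sketch against Bluesky’s public XRPC API is below; the handle and app password values are placeholders, and this illustrates the mechanism rather than Flipboard’s implementation.

```python
import requests

# Create a Bluesky session with an app password instead of the account's
# main password. The endpoint is part of the public AT Protocol XRPC API;
# the handle and app password below are placeholders.

resp = requests.post(
    "https://bsky.social/xrpc/com.atproto.server.createSession",
    json={"identifier": "alice.bsky.social", "password": "xxxx-xxxx-xxxx-xxxx"},
    timeout=10,
)
resp.raise_for_status()
session = resp.json()

# The access token is what a client app stores and refreshes; the main
# password never needs to be persisted.
print(session["handle"], session["did"])
access_jwt = session["accessJwt"]
```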

After setup is complete, Flipboard users can effectively use the app as a Pixelfed or Bluesky client to flip through others’ posts, photos, and articles; browse individual profiles; and reply, like and repost content; as well as read replies and comments from others. Bluesky users will also be able to browse the “What’s Hot” trending feed and, in the future, any other custom feeds that end up being supported.

Though by default, Flipboard opts for a more visual layout where you flip through posts, there is an option in the settings to switch to a scrollable feed if you want a more Twitter-like user interface for browsing the Bluesky timeline. (Or “skyline,” as its users have dubbed it.)

The new integrations are part of Flipboard’s larger efforts to embrace the future of the social web, which now includes a push toward decentralization.

Since its founding over a decade ago, Flipboard has focused on building a platform that lets users discover content from around the web and create “magazines” devoted to their interests by curating content from media sites, blogs, and various social networks. But with Twitter’s takeover by Elon Musk, Flipboard’s ability to curate from the microblogging app was impacted by the changing policies around Twitter’s API usage and pricing. Twitter then shut off Flipboard’s access last month.

At the same time, interest in decentralized social networking began growing, as users looked to Twitter alternatives, such as the open source, decentralized platform Mastodon, to serve their needs. Seeing the potential in a decentralized social web, or “fediverse,” as it’s called, Flipboard announced in February it would add support for Mastodon in its app, launch its own Mastodon server, and eventually integrate with ActivityPub, the underlying protocol that powers Mastodon and other federated apps, like Pixelfed, a decentralized Instagram alternative. The ActivityPub integration has been underway for a few months and will take several more to complete, Flipboard notes.

In the meantime, Flipboard has been slowly growing its Mastodon instance, flipboard.social, which currently requires an invitation to access. And Flipboard users are now able to interact with their Mastodon network in the app, much as they could with Twitter before it.

But unlike Mastodon and Pixelfed, Bluesky is developing its own decentralized protocol, the AT Protocol. While some have criticized this decision, given that a fairly well-established, W3C-recommended option already exists in ActivityPub, Bluesky has different ideas around account portability, user identity and algorithmic choice — so the team felt it was necessary to start fresh.

To date, that has meant users who want to browse both decentralized networks — the AT Protocol-powered Bluesky and the ActivityPub-powered Mastodon (and the wider universe of federated apps) — have had to use different apps to do so.

Now, they can interact with both networks directly from Flipboard’s app.

Flipboard CEO Mike McCue isn’t worried about the competing standards, as he believes the problems will be resolved over time.

“The power of the social web, I think, is definitely going to happen,” he says. “The specifics of the AT protocol, the ActivityPub protocol, and how all those things come together, will ultimately get worked out. Those are just sort of like tactical friction points…but this is happening.”

Plus, he notes, Flipboard is teaming up with other developers to try to collaborate on building a bridge between the two protocols.

“That way, users on ActivityPub could follow users on Bluesky using AT Protocol and vice versa,” McCue explains.

For the time being, he wanted Flipboard to work with both protocols without upsetting the user experience.

“What we’re doing here is very reminiscent of what email clients used to do when you had POP3 and IMAP — two different email protocols. It’s still email. And users mostly didn’t care. The client just integrated both protocols and made it work. That’s really what we’re doing here with Mastodon and Bluesky,” McCue says.

However, for now, Flipboard is integrating with the Mastodon and Bluesky APIs — not their respective protocols. That work is ongoing for ActivityPub. And the future of the AT Protocol, including possible connectivity to ActivityPub via a bridge, remains unknown.
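
In practice, “integrating both protocols,” as McCue puts it, usually means hiding each network behind a common client interface, the way an email app once wrapped POP3 and IMAP. A hypothetical Python sketch of that adapter pattern is below; the class and method names are invented for illustration and are not Flipboard’s code.

```python
from abc import ABC, abstractmethod

# Hypothetical adapter pattern: read two different networks through one
# interface, much as an email client once did for POP3 and IMAP.

class SocialFeed(ABC):
    @abstractmethod
    def timeline(self) -> list[str]: ...

class MastodonFeed(SocialFeed):
    def timeline(self) -> list[str]:
        # A real adapter would call the Mastodon REST API
        # (e.g. GET /api/v1/timelines/home).
        return ["mastodon post 1", "mastodon post 2"]

class BlueskyFeed(SocialFeed):
    def timeline(self) -> list[str]:
        # A real adapter would call the AT Protocol XRPC API
        # (e.g. app.bsky.feed.getTimeline).
        return ["bluesky post 1", "bluesky post 2"]

def merged_timeline(feeds: list[SocialFeed]) -> list[str]:
    # The app flips through one merged stream, regardless of protocol.
    return [post for feed in feeds for post in feed.timeline()]

print(merged_timeline([MastodonFeed(), BlueskyFeed()]))
```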

Plus, though both networks are accessible through the Flipboard app, users aren’t yet at the point of being able to post once to have their content shared to both platforms. (That means you may see double posts from those who cross-post the same content to multiple places.)

Longer-term, Flipboard sees the possibility of bringing its curation expertise to the decentralized social web, as well. Already, it’s begun running editorial “desks” on Mastodon to help people discover interesting content and people to follow. In time, it could introduce its own custom feeds for the decentralized web, too.

“Flipboard has amazing feeds…We have feeds powered by AI, feeds that are powered by users — with the curation that they do. And we have blends of those two,” notes McCue. Flipboard’s AI engine can analyze all content being posted on Mastodon, Bluesky, Pixelfed, and other supported integrations, like RSS and YouTube, then classify it by topic and by who’s curating it, and turn it into a high-quality magazine, personalized to the user’s interests.

“It’s a really powerful capability,” McCue says. “Now, this is only available in the Flipboard app, but you can imagine that this feed would be something that we could make accessible to users on Bluesky,” he hints.

Those custom feeds could ultimately open up new business models as well, whether through advertising, subscriptions or other approaches that have yet to emerge. But that’s further down the road, the exec says.

The new integrations will arrive on Flipboard’s iOS and Android apps, starting today.

Flipboard becomes first to support Bluesky, Mastodon and Pixelfed all in one app by Sarah Perez originally published on TechCrunch

Everything we know about Instagram’s Twitter clone, due this summer

While the future remains uncertain for Twitter, Meta is throwing its hat into the ring to build the next major microblogging platform. This new Meta app is expected to launch this summer, according to an email shared with a select group of creators, and viewed by TechCrunch.

This text-based app will stand alone, but it will be partially integrated within Instagram. Users will keep their Instagram verification and handle, and all of their followers will receive a notification to go follow them on the to-be-named platform. Meta’s text-based platform will be decentralized and interoperable with Mastodon, which is built on the ActivityPub protocol.

Meta wants to onboard high-profile public figures to get early access, like athletes, actors, producers, showrunners and comedians. In its note to these creators, Meta conceded that Mastodon, Bluesky and other apps have had a head start in the race to build the next Twitter. But the company pointed out that it has the advantage of access to billions of users through its family of apps, which includes Instagram, Facebook, WhatsApp and Messenger.

This new decentralized app is codenamed P92 or Barcelona, as first reported by Moneycontrol. Meta has been quiet about these developments, but said in a statement to Moneycontrol: “We’re exploring a standalone decentralized social network for sharing text updates. We believe there’s an opportunity for a separate space where creators and public figures can share timely updates about their interests.”

According to Lia Haberman, author of the social media newsletter ICYMI, the app will use the same community guidelines as Instagram. Similarly, users will be able to log in with their Instagram credentials, blocks and hidden words from Instagram will carry over, and some safety features will be embedded from the get-go, like two-factor authentication and spam reporting. Haberman’s sources also informed her that text posts will be up to 500 characters, and users can also upload photos, links and videos up to five minutes long. Like Twitter and other competitor apps, there will be a feed where you can like, reply to or repost content.

Social media consultant Matt Navarra received this same information, which he had shared in a tweet posted earlier this month.

Meta declined TechCrunch’s request for further comment, but did not dispute the accuracy of the leaked information.

The market is ripe for new Twitter alternatives, though after migrating to a multitude of platforms, some users might be a bit fatigued by the prospect of setting up yet another new account. Like any company, when Meta releases new apps and experiences, they don’t always take off. In the past few years, it has sunsetted products like the anonymous teen app tbh, the Cameo-like app Super, the Nextdoor clone Neighborhoods, the couples app Tuned, the student-focused social network Campus, the video dating service Sparked and more.

Everything we know about Instagram’s Twitter clone, due this summer by Amanda Silberling originally published on TechCrunch

Meta bets big on AI with custom chips — and a supercomputer

At a virtual event this morning, Meta lifted the curtains on its efforts to develop in-house infrastructure for AI workloads, including generative AI like the type that underpins its recently launched ad design and creation tools.

It was an attempt at a projection of strength from Meta, which historically has been slow to adopt AI-friendly hardware systems — hobbling its ability to keep pace with rivals such as Google and Microsoft.

“Building our own [hardware] capabilities gives us control at every layer of the stack, from datacenter design to training frameworks,” Alexis Bjorlin, VP of Infrastructure at Meta, told TechCrunch. “This level of vertical integration is needed to push the boundaries of AI research at scale.”

Over the past decade or so, Meta has spent billions of dollars recruiting top data scientists and building new kinds of AI, including AI that now powers the discovery engines, moderation filters and ad recommenders found throughout its apps and services. But the company has struggled to turn many of its more ambitious AI research innovations into products, particularly on the generative AI front.

Until 2022, Meta largely ran its AI workloads using a combination of CPUs — which tend to be less efficient for those sorts of tasks than GPUs — and a custom chip designed for accelerating AI algorithms. Meta pulled the plug on a large-scale rollout of the custom chip, which was planned for 2022, and instead placed orders for billions of dollars’ worth of Nvidia GPUs that required major redesigns of several of its datacenters.

In an effort to turn things around, Meta made plans to start developing a more ambitious in-house chip, due out in 2025, capable of both training AI models and running them. And that was the main topic of today’s presentation.

Meta calls the new chip the Meta Training and Inference Accelerator, or MTIA for short, and describes it as part of a “family” of chips for accelerating AI training and inferencing workloads. (“Inferencing” refers to running a trained model.) The MTIA is an ASIC, or application-specific integrated circuit: a chip custom-designed for a particular workload that can be programmed to carry out one or many tasks in parallel.

Image: An AI chip Meta custom-designed for AI workloads.

“To gain better levels of efficiency and performance across our important workloads, we needed a tailored solution that’s co-designed with the model, software stack and the system hardware,” Bjorlin continued. “This provides a better experience for our users across a variety of services.”

Custom AI chips are increasingly the name of the game among the Big Tech players. Google created a processor, the TPU (short for “tensor processing unit”), to train large generative AI systems like PaLM-2 and Imagen. Amazon offers proprietary chips to AWS customers both for training (Trainium) and inferencing (Inferentia). And Microsoft, reportedly, is working with AMD to develop an in-house AI chip called Athena.

Meta says that it created the first generation of the MTIA — MTIA v1 — in 2020, built on a 7-nanometer process. It can scale beyond its internal 128MB of memory to up to 128GB, and in a Meta-designed benchmark test — which, of course, has to be taken with a grain of salt — Meta claims that the MTIA handled “low-complexity” and “medium-complexity” AI models more efficiently than a GPU.

Work remains to be done in the memory and networking areas of the chip, Meta says, which present bottlenecks as the size of AI models grows, requiring workloads to be split up across several chips. (Not coincidentally, Meta recently acquired an Oslo-based team building AI networking tech at British chip unicorn Graphcore.) And for now, the MTIA’s focus is strictly on inference — not training — for “recommendation workloads” across Meta’s app family.

But Meta stressed that the MTIA, which it continues to refine, “greatly” increases the company’s efficiency in terms of performance per Watt when running recommendation workloads — in turn allowing Meta to run “more enhanced” and “cutting-edge” (ostensibly) AI workloads.

A supercomputer for AI

Perhaps one day, Meta will relegate the bulk of its AI workloads to banks of MTIAs. But for now, the social network’s relying on the GPUs in its research-focused supercomputer, the Research SuperCluster (RSC).

First unveiled in January 2022, the RSC — assembled in partnership with Penguin Computing, Nvidia and Pure Storage — has completed its second-phase buildout. Meta says that it now contains a total of 2,000 Nvidia DGX A100 systems sporting 16,000 Nvidia A100 GPUs.

So why build an in-house supercomputer? Well, for one, there’s peer pressure. Several years ago, Microsoft made a big to-do about its AI supercomputer built in partnership with OpenAI, and more recently said that it would team up with Nvidia to build a new AI supercomputer in the Azure cloud. Elsewhere, Google’s been touting its own AI-focused supercomputer, which has 26,000 Nvidia H100 GPUs — putting it ahead of Meta’s.

Image: Meta’s supercomputer for AI research.

But beyond keeping up with the Joneses, Meta says that the RSC confers the benefit of allowing its researchers to train models using real-world examples from Meta’s production systems. That’s unlike the company’s previous AI infrastructure, which leveraged only open source and publicly available data sets.

“The RSC AI supercomputer is used for pushing the boundaries of AI research in several domains, including generative AI,” a Meta spokesperson said. “It’s really about AI research productivity. We wanted to provide AI researchers with a state-of-the-art infrastructure for them to be able to develop models and empower them with a training platform to advance AI.”

At its peak, the RSC can reach nearly 5 exaflops of computing power, which the company claims makes it among the world’s fastest. (Lest that impress, it’s worth noting some experts view the exaflops performance metric with a pinch of salt and that the RSC is far outgunned by many of the world’s fastest supercomputers.)

Meta says that it used the RSC to train LLaMA, a tortured acronym for “Large Language Model Meta AI” — a large language model that the company shared as a “gated release” to researchers earlier in the year (and which subsequently leaked in various internet communities). The largest LLaMA model was trained on 2,048 A100 GPUs, Meta says, which took 21 days.

“Building our own supercomputing capabilities gives us control at every layer of the stack; from datacenter design to training frameworks,” the spokesperson added. “RSC will help Meta’s AI researchers build new and better AI models that can learn from trillions of examples; work across hundreds of different languages; seamlessly analyze text, images, and video together; develop new augmented reality tools; and much more.”

Video transcoder

In addition to the MTIA, Meta is developing another chip to handle particular types of computing workloads, the company revealed at today’s event. Called the Meta Scalable Video Processor, or MSVP, it is Meta’s first in-house-developed ASIC, designed for the processing needs of video on demand and live streaming.

Meta began ideating custom server-side video chips years ago, readers might recall, announcing an ASIC for video transcoding and inferencing work in 2019. This is the fruit of some of those efforts, as well as a renewed push for a competitive advantage in the area of live video specifically.

“On Facebook alone, people spend 50% of their time on the app watching video,” Meta technical lead managers Harikrishna Reddy and Yunqing Chen wrote in a co-authored blog post published this morning. “To serve the wide variety of devices all over the world (mobile devices, laptops, TVs, etc.), videos uploaded to Facebook or Instagram, for example, are transcoded into multiple bitstreams, with different encoding formats, resolutions and quality … MSVP is programmable and scalable, and can be configured to efficiently support both the high-quality transcoding needed for VOD as well as the low latency and faster processing times that live streaming requires.”
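
The software baseline that MSVP is meant to offload looks roughly like this: one upload fanned out into several renditions at different resolutions and bitrates. Below is a minimal sketch using ffmpeg from Python; the rendition ladder and filenames are illustrative assumptions, not Meta’s production configuration.

```python
import subprocess

# Sketch of software transcoding: fan one upload out into multiple H.264
# renditions. The resolution/bitrate ladder below is illustrative only.

RENDITIONS = [
    ("1080p", "1920x1080", "5000k"),
    ("720p", "1280x720", "2500k"),
    ("360p", "640x360", "800k"),
]

def transcode(source: str) -> None:
    for name, size, bitrate in RENDITIONS:
        subprocess.run(
            [
                "ffmpeg", "-y", "-i", source,
                "-c:v", "libx264", "-b:v", bitrate, "-s", size,
                "-c:a", "aac",
                f"output_{name}.mp4",
            ],
            check=True,
        )

transcode("upload.mp4")
```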

Image: Meta’s custom chip designed to accelerate video workloads, like streaming and transcoding.

Meta says that its plan is to eventually offload the majority of its “stable and mature” video processing workloads to the MSVP and use software video encoding only for workloads that require specific customization and “significantly” higher quality. Work continues on improving video quality with MSVP using preprocessing methods like smart denoising and image enhancement, Meta says, as well as post-processing methods such as artifact removal and super-resolution.

“In the future, MSVP will allow us to support even more of Meta’s most important use cases and needs, including short-form videos — enabling efficient delivery of generative AI, AR/VR and other metaverse content,” Reddy and Chen said.

AI focus

If there’s a common thread in today’s hardware announcements, it’s that Meta’s attempting desperately to pick up the pace where it concerns AI, specifically generative AI.

Much of this had been telegraphed already. In February, CEO Mark Zuckerberg — who has reportedly made upping Meta’s compute capacity for AI a top priority — announced a new top-level generative AI team to, in his words, “turbocharge” the company’s R&D. CTO Andrew Bosworth likewise said recently that generative AI was the area where he and Zuckerberg were spending the most time. And chief scientist Yann LeCun has said that Meta plans to deploy generative AI tools to create items in virtual reality.

“We’re exploring chat experiences in WhatsApp and Messenger, visual creation tools for posts in Facebook and Instagram and ads, and over time video and multi-modal experiences as well,” Zuckerberg said during Meta’s Q1 earnings call in April. “I expect that these tools will be valuable for everyone from regular people to creators to businesses. For example, I expect that a lot of interest in AI agents for business messaging and customer support will come once we nail that experience. Over time, this will extend to our work on the metaverse, too, where people will much more easily be able to create avatars, objects, worlds, and code to tie all of them together.”

In part, Meta’s feeling increasing pressure from investors concerned that the company’s not moving fast enough to capture the (potentially large) market for generative AI. It has no answer — yet — to chatbots like Bard, Bing Chat or ChatGPT. Nor has it made much progress on image generation, another key segment that’s seen explosive growth.

If the predictions are right, the total addressable market for generative AI software could be $150 billion. Goldman Sachs predicts that it’ll raise GDP by 7%.

Even a small slice of that could erase the billions Meta’s lost in investments in “metaverse” technologies like augmented reality headsets, meetings software and VR playgrounds like Horizon Worlds. Reality Labs, Meta’s division responsible for augmented reality tech, reported a net loss of $4 billion last quarter, and the company said during its Q1 call that it expects “operating losses to increase year over year in 2023.”

Meta bets big on AI with custom chips — and a supercomputer by Kyle Wiggers originally published on TechCrunch

TikTok adds a new mental health awareness hub to provide users access to resources

With a U.S. ban of TikTok looming, the company is introducing a new mental health awareness hub to allow users to learn about well-being topics, connect with advocates and support organizations that provide important resources. To access the new hub, users have to go to the #MentalHealthAwareness hashtag page and tap on the link in the description.

The hub will be updated throughout May to highlight new educational videos, mental health and wellness-centered creators and organizations dedicated to raising awareness about mental health.

TikTok is also donating over $2 million in ad credits to organizations working on supporting mental well-being, including Alliance for Eating Disorders, American Foundation for Suicide Prevention, Crisis Text Line, Made of Millions, National Alliance on Mental Illness, National Eating Disorders Association and Peer Health Exchange.

In addition, TikTok is going to host a series of training sessions to equip its partners with the tools they need to share information with their communities during critical moments, such as World Mental Health Day in October or back-to-school season.

“Through continued collaboration with mental health organizations, content creators and our TikTok community, we continue to raise awareness and foster a space where everyone can feel heard and supported — during #MentalHealthAwareness Month and beyond,” TikTok wrote in a blog post. “We believe that everyone deserves access to resources and support for their mental well-being, and we are dedicated to continuously learning, evolving, and making a difference.”

TikTok will also be spotlighting 10 creators who use its platform to educate the community on mental health awareness, including @asoulcalledjoel, @dr.kojosarfo, @elainaefird, @elysemyers, @joelbervell, @lindsay.fleminglpc, @nutritionbykylie, @thepsychodoctormd, @therapyjeff and @victoriabrowne.

Over the past few years, TikTok has faced scrutiny regarding the app’s impact on its youngest users. It’s been more than a year since executives from social media platforms, including TikTok, faced questions from lawmakers during congressional hearings over how their platforms can negatively impact young users. Experts have also expressed concern about how TikTok could add to the mental health crisis among U.S. teens.

TikTok’s new mental health awareness initiatives come as the American Psychological Association (APA) issued its first-ever health advisory on social media last week, addressing mounting concerns about how social networks designed for adults can negatively impact adolescents.

The APA’s recommendations center on the role of parents, but the advisory does denounce algorithms that push young users toward potentially damaging content, including posts that promote self harm, disordered eating, racism and other forms of online hate. The APA recommends that parents remain vigilant to prevent social media from interrupting sleep routines and physical activity — two areas that directly and seriously impact kids’ mental health outcomes.

TikTok adds a new mental health awareness hub to provide users access to resources by Aisha Malik originally published on TechCrunch

American psychology group issues recommendations for kids’ social media use

One of the most prominent mental health organizations in the U.S. is out with a set of guidelines designed to protect children from the potential harms of social media.

The American Psychological Association (APA) issued its first ever health advisory on social media use Tuesday, addressing mounting concerns about how social networks designed for adults can negatively impact adolescents.

The report doesn’t denounce social media, instead asserting that online social networks are “not inherently beneficial or harmful to young people,” but should be used thoughtfully. The health advisory also does not address specific social platforms, instead tackling a broad set of concerns around kids’ online lives with commonsense advice and insights compiled from broader research.

The APA’s recommendations center on the role of parents, but the advisory does denounce algorithms that push young users toward potentially damaging content, including posts that promote self-harm, disordered eating, racism and other forms of online hate.

Other recommendations address kids’ habits and routines, largely the domain of adult caregivers. The APA encourages regular screenings for “problematic social media use” in children. Red flags include behaviors that track with symptoms of more traditional addiction, including spending more time on social media than intended and lying to maintain access to social media sites.

In that same vein, the APA recommends that parents remain vigilant to prevent social media from interrupting sleep routines and physical activity — two areas that directly and seriously impact kids’ mental health outcomes. “Insufficient sleep is associated with disruptions to neurological development in adolescent brains, teens’ emotional functioning and risk for suicide,” the advisory states.

Some of the recommendations aren’t particularly easy to navigate in today’s social media landscape, even for adults. One part of the health advisory advises limiting the time that young users spend comparing themselves to other people on social media apps, “particularly around beauty- or appearance-related content.”

“Research suggests that using social media for social comparisons related to physical appearance, as well as excessive attention to and behaviors related to one’s own photos and feedback on those photos, are related to poorer body image, disordered eating, and depressive symptoms, particularly among girls,” the APA states, citing ample research.

The APA emphasizes that outcomes on social media are shaped by offline experiences too, and those vary widely from child to child.

“In most cases, the effects of social media are dependent on adolescents’ own personal and psychological characteristics and social circumstances—intersecting with the specific content, features, or functions that are afforded within many social media platforms,” the APA wrote. “In other words, the effects of social media likely depend on what teens can do and see online, teens’ preexisting strengths or vulnerabilities, and the contexts in which they grow up.”

The organization also cautions parents and platforms about design features intended for adults that younger users might be more susceptible to, including algorithmic recommendations, “like” buttons and endless scrolling. These features, along with advertising served to under-18 users, have increasingly been criticized by regulators seeking to protect children from being manipulated by features designed to shape adult behavior.

The APA recommends a reasonable, age-appropriate degree of “adult monitoring” through parental controls at the device and app level and urges parents to model their own healthy relationships with social media.

“Science demonstrates that adults’ (e.g., caregivers’) orientation and attitudes toward social media (e.g., using during interactions with their children, being distracted from in-person interactions by social media use) may affect adolescents’ own use of social media,” the APA writes.

A final piece of advice is one that most adults would benefit from as well: boosting digital literacy across a number of social media topics, including how to recognize misinformation tactics and how to resolve conflicts that originate on social platforms.

American psychology group issues recommendations for kids’ social media use by Taylor Hatmaker originally published on TechCrunch