Diving into TED2019, the state of social media, and internet behavior

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. Last week, TechCrunch’s Anthony Ha gave us his recap of the TED2019 conference and offered key takeaways from the most interesting talks and provocative ideas shared at the event.

Under the theme ‘Bigger Than Us,’ the conference featured talks, Q&As, and presentations from a wide array of high-profile speakers, including an appearance from Twitter CEO Jack Dorsey that was the talk of the week. Anthony dives deeper into the questions raised in Dorsey's onstage interview that kept popping up all week: How has social media warped our democracy? How can the big online platforms fight back against abuse and misinformation? And what is the Internet good for, anyway?

“…So I would suggest that probably five years ago, the way that we wrote about a lot of these tech companies was too positive and they weren’t as good as we made them sound. Now the pendulum has swung all the way in the other direction, where they’re probably not as bad as we make them sound…

…At TED, you’d see the more traditional TED talks about, “Let’s talk about the magic of finding community on the internet.” There were several versions of that talk this year. Some of them very good, but now you have to have that conversation with the acknowledgement that there’s much that is terrible on the internet.”

Ivan Poupyrev

Image via Ryan Lash / TED

Anthony also digs into what really differentiates the TED conference from other tech events, what types of people did and should attend the event, and even how he managed to get kicked out of the theater for typing too loud.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 

Sending severed heads, and even more PR DON’Ts

This week, I published a piece called “The master list of PR DON’Ts (or how not to piss off the writer covering your startup).” The problem, of course, with writing a “master list” is that as soon as you publish it, everyone takes the opportunity to point out all the (hopefully) long-tail stories that you didn’t include the first time.

And wow, startup founders and PR folks find some funky ways to pitch journalists.

That original master list had 16 entries, ranging from not using pressure tactics to force a story to not changing your company’s name capitalization multiple times.

Now, here is a list of 12 more PR DON’Ts from the TechCrunch staff, who have turned our Slack thread on this subject into a form of work therapy.

DON’T send severed heads of the writer you want to cover your story

TechCrunch writer Anthony Ha holds his future (and his head) in his own hands.

Heads up! It’s weird to send someone’s cranium to them.

This is an odd one, but believe it or not, severed heads seem to roll into our office every couple of months thanks to the advent of 3D printing. Several of us in the New York TechCrunch office received these “gifts” in the past few days (see the next DON’T on gifts), and apparently, I now have a severed head resting on my desk that I get to dispose of on Monday.

Let’s think linearly on this one. Most journalists are writers and presumably understand metaphors. Heads were placed on pikes in the Middle Ages (and sadly, sometimes recently) as a warning to other group members about the risk of challenging whoever did the decapitation. Yes, it might get the attention of the person you are sending their head to, in the same way that burning them in effigy right in front of them can attract eyeballs.

Now, I get it — it’s a demo of something, and maybe it might even be funny for some. But, why take the risk that the recipient is going to see the reasonably obvious metaphorical connection? Use your noggin — no severed heads.

DON’T send gifts

Journalists have a job to do: we cover the most interesting stories and write them up for our readers. That’s what we are paid to do, after all.

Twitter makes ‘likes’ easier to use in its twttr prototype app. (Nobody tell Jack.)

On the one hand, you’ve got Twitter CEO Jack Dorsey lamenting the “like” button’s existence, and threatening to just kill the thing off entirely for incentivizing the wrong kind of behavior. On the other hand, you have twttr — Twitter’s prototype app where the company is testing new concepts including, most recently, a way to make liking tweets even easier than before.

Confused about Twitter’s product direction? Apparently, so is the company.

In the latest version of the twttr prototype, released on Thursday, users are now able to swipe right to left on any tweet in order to “like” it. Previously, this gesture only worked on tweets in conversation threads, where the engagement buttons had been hidden. With the change, however, the swipe works anywhere — including the Home timeline, the Notifications tab, your Profile page, or even within Twitter Search results. In other words, it becomes a more universal gesture.

This makes sense because once you got used to the swipe gesture, it was confusing that it worked in some places but not in others. Still, it’s odd to see the company doubling down on making “likes” easier to use, and even rolling out a feature that could increase user engagement with the “Like” button, given Jack Dorsey’s repeated comments about his distaste for “likes” and the conversations around the button’s removal.

Of course, twttr is not supposed to be Dorsey’s vision. Instead, it’s meant to be a new experiment in product development, where users and Twitter’s product teams work together, in the open, to develop, test, and then one day officially launch new features for Twitter.

For the time being, the app is largely focused on redesigning conversation threads. On Twitter today, these get long and unwieldy, and it’s not always clear who’s talking to whom. On twttr, however, threads are nested, with a thin line connecting the various posts.

The app is also rolling out other, smaller tweaks like labels on tweets within conversations that highlight the original “Author’s” replies, or if a post comes from someone you’re “following.”

And, of course, twttr introduced the “swipe to like” gesture.

While it’s one thing to want to collaborate more directly with the community, it seems strange that twttr is rolling out a feature designed to increase — not decrease — engagement with “likes” at this point in time.

Last August, for example, Dorsey said he wanted to redesign key elements of the social network, including the “like” button and the way Twitter displays follower counts.

“The most important thing that we can do is we look at the incentives that we’re building into our product,” Dorsey had said at the time. “Because they do express a point of view of what we want people to do — and I don’t think they are correct anymore.”

Soon after, at an industry event in October 2018, Dorsey again noted how the “like” button sends the wrong kind of message.

“Right now we have a big ‘like’ button with a heart on it, and we’re incentivizing people to want to drive that up,” said Dorsey. “We have a follower count that was bolded because it felt good twelve years ago, but that’s what people see us saying: that should go up. Is that the right thing?” he wondered.

While these comments may have seemed like a little navel-gazing over Twitter’s past, a Telegraph report about the “like” button’s removal quickly caught fire. It claimed Dorsey had said the “like” button was going to go away entirely, which caused so much user backlash that Twitter comms had to respond. The company said the idea had been discussed, but it wasn’t something happening “soon.”

Arguably, the “like” button is appreciated by Twitter’s user base, so it’s not surprising that a gesture that could increase its usage would become something that gets tried out in the community-led twttr prototype app. It’s worth noting, however, how remarkably different the development process is when it’s about what Twitter’s users want, not the CEO.

Hmmm.

Hey, twttr team? Maybe we can get that “edit” button now?

Pew: U.S. adult Twitter users tend to be younger, more Democratic; 10% create 80% of tweets

A new report out this morning from Pew Research Center offers insight into the U.S. adult Twitter population. The firm’s research indicates the Twitterverse tends to skew younger and more Democratic than the general public. It also notes that the activity on Twitter is dominated by a small percentage — most users rarely tweet, while the most prolific 10 percent are responsible for 80 percent of tweets from U.S. adults.

Pew says only around 22 percent of American adults today use Twitter, and they are representative of the broader population in some ways, but not in others.

For starters, Twitter’s U.S. adult users tend to be younger.

The study found the median age of Twitter users is 40, compared with the median age of U.S. adults, which is 47. Though less pronounced than the age differences, Twitter users also tend to have higher levels of household income and educational attainment, compared with the general population.

42 percent of adult Twitter users in the U.S. have at least a bachelor’s degree, which is 11 percentage points higher than the share of the public with this level of education (31%). Likely related to this is a higher income level. 41 percent of Twitter users have a household income above $75,000, which is 9 points higher than the same figure in the general population (32%).

A major difference — and a notable one, given yesterday’s sit-down between Twitter CEO Jack Dorsey and President Trump — is Pew’s discovery that 36 percent of Twitter U.S. adult users identify with the Democratic Party, versus 30 percent of U.S. adults (the latter, as per a November 2018 survey). Meanwhile, 21 percent of Twitter users identify as Republicans, versus 26 percent of U.S. adults. Political independents make up 29 percent of Twitter users, and a similar 27 percent of the general population.

Despite these differences, there are areas where Twitter users are more like the general U.S. adult population — specifically, in terms of the gender and racial makeup, Pew says.

In addition to the makeup of the adult population on Twitter, Pew also researched the activity on the platform, and found that the median user only tweets twice per month.

That means the conversation on Twitter is dominated by its most active (or, in Pew’s parlance, “extremely online”) users: a large majority of Twitter’s content is created by a small number of accounts, with 10 percent of users responsible for 80 percent of all tweets from U.S. adults.

The median user in this top 10 percent creates 138 tweets per month, favorites 70 posts per month, follows 456 accounts and has 387 followers. They tend to be women (65% are), and tend to tweet about politics (69% say they do). They also more often use automated methods to tweet (25% do).

Meanwhile, the median user in the bottom 90 percent creates 2 tweets per month, favorites 1 post per month, follows 74 accounts, and has 19 followers. 48 percent are women, and 39 percent say they tweet about politics. Only 13 percent say they tweeted about politics in the last 30 days, compared with 42 percent of the top 10 percent of users. They are also less likely to use automated methods of tweeting, as only 15 percent do.

These differences also mean that how the Twitterverse feels about key issues, like equality or immigration, differs from the general public, with viewpoints that lean more Democratic.

It’s worth noting, too, how the small amount of activity from a large group of Twitter users also speaks to Twitter’s inability to grow its monthly active user (MAU) base.

This week, Twitter reported its first quarter earnings and noted that its MAUs were 330 million in Q1, down by 6 million users from a year ago. Twitter now prefers to report on its monetizable daily active users — a metric that favors the app’s heavier users.

Pew’s research was conducted Nov. 21, 2018 through Dec. 17, 2018, among 2,791 U.S. adult Twitter users. The full report is available from Pew’s website.

Twitter to offer report option for misleading election tweets

Twitter is adding a dedicated report option that enables users to tell it about misleading tweets related to voting — starting with elections taking place in India and the European Union.

From tomorrow, users in India can report tweets they believe are trying to mislead voters — such as disinformation about the date or location of polling stations, or fake claims about identity requirements for voting — by tapping the arrow menu on the suspicious tweet, selecting the ‘report tweet’ option and then choosing ‘It’s misleading about voting.’

Twitter says the tool will go live for the Indian Lok Sabha elections from tomorrow, and will launch in all European Union member states on April 29 — ahead of elections for the EU parliament next month.

The ‘misleading about voting’ option will persist in the list of available choices for reporting tweets for seven days after each election ends, Twitter said in a blog post announcing the feature.

It also said it intends for the vote-focused feature to be rolled out to “other elections globally throughout the rest of the year”, without providing further detail on which elections and markets it will prioritize for getting the tool.

“Our teams have been trained and we recently enhanced our appeals process in the event that we make the wrong call,” Twitter added.

In recent months the European Commission has been ramping up pressure on tech platforms to scrub disinformation ahead of elections to the EU parliament — issuing monthly reports on progress, or, well, the lack of it.

This follows a Commission initiative last year which saw major tech and ad platforms — including Facebook, Google and Twitter — sign up to a voluntary Code of Practice on disinformation, committing themselves to take some non-prescribed actions to disrupt the ad revenues of disinformation agents and make political ads more transparent on their platforms.

Another strand of the Code looks to have directly contributed to the development of Twitter’s new ‘misleading about voting’ report option — with signatories committing to:

  • Empower consumers to report disinformation and access different news sources, while improving the visibility and findability of authoritative content;

In the latest progress report on the Code, which was published by the Commission yesterday but covers steps taken by the platforms in March 2019, it noted some progress made — but said it’s still not enough.

“Further technical improvements as well as sharing of methodology and data sets for fake accounts are necessary to allow third-party experts, fact-checkers and researchers to carry out independent evaluation,” EC commissioners warned in a joint statement.

In Twitter’s case, the company was commended for having made its political ad libraries publicly accessible, but was criticized (along with Google) for not doing more to improve transparency around issue-based advertising.

“It is regrettable that Google and Twitter have not yet reported further progress regarding transparency of issue-based advertising, meaning issues that are sources of important debate during elections,” the Commission said. 

It also reported that Twitter had provided figures on actions undertaken against spam and fake accounts but had failed to explain how these actions relate to activity in the EU.

“Twitter did not report on any actions to improve the scrutiny of ad placements or provide any metrics with respect to its commitments in this area,” it also noted.

The EC says it will assess the Code’s initial 12-month period by the end of 2019 — and take a view on whether it needs to step in and propose regulation to control online disinformation. (Something which some individual EU Member States are already doing.)

Jack Dorsey just met with Trump to talk about the health of Twitter’s public discourse

Twitter’s co-founder and CEO historically doesn’t have the most discerning taste in who he decides to engage with. Fresh off the podcast circuit, today a thoroughly beardy Jack Dorsey sat down with President Trump for his most high-profile tête-à-tête yet.

Unlike his recent amble onto the Joe Rogan show, Dorsey’s 30-minute meeting with Trump happened behind closed doors. Motherboard reported the meeting just before Trump tweeted about it.

Unless either of the men decides to share more about what they discussed, we won’t know exactly how things went down, though it’s probably easy enough to guess. According to the Motherboard report, the initial internal Twitter email named “the health of the public conversation on Twitter” as the topic of the day.

Given that, we’d guess that Trump probably took the chance to bring up recent unfounded gripes about conservative censorship on the platform while Dorsey likely offered reassurances, active listening and other assorted gestures of noncommittal mildness.

According to the internal memo, Dorsey preemptively defended his decision to accept an invite from Trump. “Some of you will be very supportive of our meeting [with] the president, and some of you might feel we shouldn’t take this meeting at all,” Dorsey wrote in an email. “In the end, I believe it’s important to meet heads of state in order to listen, share our principles and our ideas.”

Talk media and TED2019 key takeaways with TechCrunch’s Anthony Ha

Anthony just returned from Vancouver, where he was covering the TED2019 conference — a much-parodied gathering where VCs, executives and other bigwigs gather to exchange ideas.

This year, Twitter CEO Jack Dorsey got the biggest headlines, but the questions raised in his onstage interview kept popping up throughout the week: How has social media warped our democracy? How can the big online platforms fight back against abuse and misinformation? And what is the Internet good for, anyway? Wednesday at 11:00 am PT, Anthony will recap the five-day event’s most interesting talks and provocative ideas with Extra Crunch members on a conference call.

Tune in to dig into what happened onstage and off and ask Anthony any and all things media.

To listen to this and all future conference calls, become a member of Extra Crunch. Learn more and try it for free.

Jack Dorsey says it’s time to rethink the fundamental dynamics of Twitter

Twitter CEO Jack Dorsey took the stage today at the TED conference. But instead of giving the standard talk, he answered questions from TED’s Chris Anderson and Whitney Pennington Rodgers.

For most of the interview, Dorsey outlined steps that Twitter has taken to combat abuse and misinformation, but Anderson explained why the company’s critics sometimes find those steps so insufficient and unsatisfying. He compared Twitter to the Titanic, and Dorsey to the captain, listening to passengers’ concerns about the iceberg up ahead — then going back to the bridge and showing “this extraordinary calm.”

“It’s democracy at stake, it’s our culture at stake,” Anderson said, echoing points made yesterday in a talk by journalist Carole Cadwalladr. So why isn’t Twitter addressing these issues with more urgency?

“We are working as quickly as we can, but quickness will not get the job done,” Dorsey replied. “It’s focus, it’s prioritization, it’s understanding the fundamentals of the network.”

He also argued that while Twitter could “do a bunch of superficial things to address the things you’re talking about,” that isn’t the real solution.

“We want the changes to last, and that means going really, really deep,” Dorsey said.

In his view, that means rethinking how Twitter incentivizes user behavior. He suggested that the service works best as an “interest-based network,” where you log in and see content relevant to your interests, no matter who posted it — rather than a network where everyone feels like they need to follow a bunch of other accounts, and then grow their follower numbers in turn.

Dorsey recalled that when the team was first building the service, it decided to make follower count “big and bold,” which naturally made people focus on it.

“Was that the right decision at the time? Probably not,” he said. “If I had to start the service again, I would not emphasize the follower count as much … I don’t think I would create ‘likes’ in the first place.”

Since he isn’t starting from scratch, Dorsey suggested that he’s trying to find ways to redesign Twitter to shift the “bias” away from accounts and towards interests.

More specifically, Rodgers asked about the frequent criticism that Twitter hasn’t found a way to consistently ban Nazis from the service.

“We have a situation right now where that term is used fairly loosely,” Dorsey said. “We just cannot take any one mention of that word accusing someone else as a factual indication of whether someone can be removed from the platform.”

He added that Twitter does remove users who are connected to hate groups like the Ku Klux Klan and the American Nazi Party, as well as those who post hateful imagery or who are otherwise guilty of conduct that violates Twitter’s terms and conditions — terms that Dorsey said the company is rewriting to make them “human readable,” and to emphasize that fighting abuse and hateful content is the top priority.

“Our focus is on removing the burden of work from the victims,” Dorsey said.

He also pointed to efforts that Twitter has already announced to measure (and then improve) conversational health and to use machine learning to automatically detect abusive content. (The company said today that 38 percent of abusive content that Twitter takes action against is found proactively.)

And while Dorsey said he’s less interested in maximizing time spent on Twitter and more in maximizing “what people take away from it and what they want to learn from it,” Anderson suggested that Twitter may struggle with that goal since it’s a public company, with a business model based on advertising. Would Dorsey really be willing to see time spent on the service decrease, even if that means improving the conversation?

“More relevance means less time on the service, and that’s perfectly fine,” Dorsey said, adding that Twitter can still serve ads against relevant content.

In terms of how the company is currently measuring its success, Dorsey said it focuses primarily on daily active users, and secondly on “conversation chains — we want to incentivize healthy contributions back to the network.”

Getting back to Dorsey himself, Rodgers wondered whether serving as the CEO of two public companies (the other is Square) gives him enough time to solve these problems.

“My goal is to build a company that is not dependent upon me and outlives me,” he said. “The situation between the two companies and how my time is spent forces me immediately to create frameworks that are scalable, that are decentralized, that don’t require me being in every single detail … That is true of any organization that scales beyond the original founding moment.”

Twitter to launch a ‘hide replies’ feature, plus other changes to its reporting process

In February, Twitter confirmed its plans to launch a feature that would allow users to hide replies that they felt didn’t contribute to a conversation. Today, alongside news of other changes to the reporting process and its documentation, Twitter announced the new “Hide Replies” feature is set to launch in June.

Twitter says the feature will be an “experiment” — which means it could be changed or even scrapped, based on user feedback.

The feature is likely to spark some controversy, as it puts the original poster in control of which tweets appear in a conversation thread. This, potentially, could silence dissenting opinions or even fact-checked clarifications. But, on the flip side, the feature also means that people who enter conversations with plans to troll or make hateful remarks are more likely to see their posts tucked away out of view.

This, Twitter believes, could help encourage people to present their thoughts and opinions in a more polite and less abusive fashion, and shifts the balance of power back to the poster without an overcorrection. (For what it’s worth, Facebook and Instagram give users far more control over their posts, as you can delete trolls’ comments entirely.)

“We already see people trying to keep their conversations healthy by using block, mute, and report, but these tools don’t always address the issue. Block and mute only change the experience of the blocker, and report only works for the content that violates our policies,” explained Twitter’s PM of Health Michelle Yasmeen Haq earlier this year. “With this feature, the person who started a conversation could choose to hide replies to their tweets. The hidden replies would be viewable by others through a menu option.”

In other words, hidden responses aren’t being entirely silenced — just made more difficult to view, as displaying them would require an extra click.

Twitter unveiled its plans to launch the “Hide Replies” feature alongside a host of other changes it has in store for its platform, some of which it had previously announced.

It says, for example, it will add more notices within Twitter for clarity around tweets that break its rules but are allowed to remain on the site. This is, in part, a response to some users’ complaints about President Trump’s apparently rule-breaking tweets that aren’t taken down. Twitter’s head of legal, policy and trust, Vijaya Gadde, recently mentioned this change was in the works in an interview with The Washington Post.

Twitter also says it will update its documentation around its Rules to be simpler to understand. And it will make it easier for people to share specifics when reporting tweets so Twitter can act more swiftly when user safety is a concern.

This latter change follows a recent controversy over how Twitter handled death threats against Rep. Ilhan Omar. Twitter left the death threats online so law enforcement could investigate, according to a BuzzFeed News report. But it raised questions as to how Twitter should handle threats against a user’s life.

More vaguely, Twitter states it’s improving its technology to help it proactively review rule-breaking content before it’s reported — specifically doxxing (tweeting someone’s private information), threats and other online abuse. The company didn’t clarify in depth how it’s approaching these problems, but it did acquire anti-abuse technology provider Smyte last year, with the goal of better addressing abuse on its platform.

Donald Hicks, Twitter’s VP of Twitter Services, hints in a company blog post that Twitter is using its existing technology in new ways to address abuse:

The same technology we use to track spam, platform manipulation and other rule violations is helping us flag abusive Tweets to our team for review. With our focus on reviewing this type of content, we’ve also expanded our teams in key areas and geographies so we can stay ahead and work quickly to keep people safe. Reports give us valuable context and a strong signal that we should review content, but we’ve needed to do more and though still early on, this work is showing promise.

Twitter also today shared a handful of self-reported metrics that paint a picture of progress.

This includes the following:

  • 38 percent of the abusive content Twitter takes action on is now handled proactively (note: much abusive content still sees no enforcement action);

  • 16 percent fewer abuse reports after an interaction from an account the reporter doesn’t follow;

  • 100,000 accounts suspended for returning to create new accounts during January through March 2019, a 45 percent increase from the same period last year;

  • 60 percent faster responses to appeals requests made through the in-app appeal process;

  • 3x more abusive accounts suspended within 24 hours, compared to the same time last year;

  • 2.5x more private information removed with the new reporting process.

Despite Twitter’s attempts to solve issues around online abuse (an area some now wonder may never be fully solved), it still drops the ball in handling what should be straightforward decisions.

Twitter admits it still has more to do, and will continue to share its progress in the future.

Get ready for a new era of personalized entertainment

New machine learning technologies, user interfaces and automated content creation techniques are going to expand the personalization of storytelling beyond algorithmically generated news feeds and content recommendation.

The next wave will be software-generated narratives that are tailored to the tastes and sentiments of a consumer.

Concretely, it means that your digital footprint, personal preferences and context unlock alternative features in the content itself, be it a news article, live video or a hit series on your streaming service.

The same title contains different experiences for different people.

From smart recommendations to smarter content

When you use YouTube, Facebook, Google, Amazon, Twitter, Netflix or Spotify, algorithms select what gets recommended to you. The current mainstream services and their user interfaces and recommendation engines have been optimized to serve you content you might be interested in.

Your data, other people’s data, content-related data and machine learning methods are used to match people and content, thus improving the relevance of content recommendations and efficiency of content distribution.

However, so far the content experience itself has mostly been the same for everyone. If the same news article, live video or TV series episode gets recommended to you and me, we both read and watch the same thing, experiencing the same content.

That’s about to change. Soon we’ll be seeing new forms of smart content, in which user interface, machine learning technologies and content itself are combined in a seamless manner to create a personalized content experience.

What is smart content?

Smart content means that the content experience itself is affected by who is seeing, watching, reading or listening to the content. The content itself changes based on who you are.

We are already seeing the first forerunners in this space. TikTok’s whole content experience is driven by very short videos, audiovisual content sequences if you will, ordered and woven together by algorithms. Every user sees a different, personalized, “whole” based on her viewing history and user profile.

At the same time, Netflix has recently started testing new forms of interactive content (TV series episodes, e.g. Black Mirror: Bandersnatch) in which the user’s own choices directly affect the content experience, including dialogue and storyline. And more is on its way. With the Love, Death & Robots series, Netflix is experimenting with episode order within a series, serving the episodes in a different order to different users.

Some earlier predecessors of interactive audio-visual content include sports event streaming, in which the user can decide which particular stream she follows and how she interacts with the live content, for example rewinding the stream and spotting the key moments based on her own interest.

Simultaneously, we’re seeing how machine learning technologies can be used to create photo-like images of imaginary people, creatures and places. Current systems can recreate and alter entire videos, for example by changing the style, scenery, lighting, environment or central character’s face. Additionally, AI solutions are able to generate music in different genres.

Now imagine that TikTok’s individual short videos were automatically personalized with effects chosen by an AI system, so the whole video would be customized for you. Or that the choices in Netflix’s interactive content, affecting the plot twists, dialogue and even the soundtrack, were made automatically by algorithms based on your profile.

Personalized smart content is coming to news as well. Automated systems using today’s state-of-the-art NLP technologies can generate concise, comprehensible and even inventive long-form text at scale. Media houses already use automated content-creation systems, or “robot journalists”, to produce news material ranging from complete articles to audiovisual clips and visualizations. Through content atomization (breaking content into small modular chunks of information) and machine learning, content production can be scaled up massively to support smart content creation.
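To make “robot journalism” and content atomization concrete, here is a minimal sketch in Python. It is purely illustrative, not any media house’s actual system: structured facts (the atomized content) are rendered into a short news item through a sentence template.

```python
from dataclasses import dataclass

# Hypothetical structured data for a sports result; the field names
# are invented for this sketch.
@dataclass
class MatchResult:
    home: str
    away: str
    home_goals: int
    away_goals: int

def generate_report(result: MatchResult) -> str:
    # Map the structured facts onto a sentence template. This is content
    # atomization in miniature: the facts are modular, the phrasing is generated.
    if result.home_goals > result.away_goals:
        verdict = f"{result.home} beat {result.away}"
    elif result.home_goals < result.away_goals:
        verdict = f"{result.away} won away against {result.home}"
    else:
        verdict = f"{result.home} and {result.away} drew"
    return f"{verdict} {result.home_goals}-{result.away_goals}."

print(generate_report(MatchResult("HJK", "Inter Turku", 2, 1)))
# → HJK beat Inter Turku 2-1.
```

Real systems replace the hand-written template with NLP models, but the pipeline shape is the same: structured data in, readable prose out, at whatever volume the data supports.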

Say that a news article you read or listen to covers a political topic that is unfamiliar to you. Your version of the story might use different concepts and offer a different angle than the version served to a friend who is deep into politics. A beginner’s smart-content news experience would differ from that of a topic enthusiast.

Content itself will become a software-like fluid and personalized experience, where your digital footprint and preferences affect not just how the content is recommended and served to you, but what the content actually contains.

Automated storytelling?

How is it possible to create smart content that contains different experiences for different people?

Content needs to be thought of and treated as an iterative, configurable process rather than a ready-made static whole that is finished once it has been pushed into the distribution pipeline.

Importantly, the core building blocks of the content experience change: smart content consists of atomized modular elements that can be modified, updated, remixed, replaced, omitted and activated based on varying rules. In addition, content modules made in the past can be reused where applicable. Content is designed and developed more like software.

Currently, a significant amount of human effort and computing resources is spent preparing content for machine-powered distribution and recommendation systems, from smart news apps to on-demand streaming services. With smart content, content creation and its preparation for publication and distribution channels would no longer be separate processes. Instead, metadata and the other invisible features that describe and define the content would be an integral part of the creation process from the very beginning.

Turning Donald Glover into Jay Gatsby

With smart content, the narrative or image itself becomes an integral part of an iterative feedback loop, in which the user’s actions, emotions and other signals as well as the visible and invisible features of the content itself affect the whole content consumption cycle from the content creation and recommendation to the content experience. With smart content features, a news article or a movie activates different elements of the content for different people.

It’s very likely that smart content for entertainment will have different features and functions than news-media content. Moreover, people expect a frictionless, effortless content experience, which sets smart content apart from games: it doesn’t necessarily require direct actions from the user. If the user wants, personalization happens proactively and automatically, without explicit interaction.

Creating smart content requires both human curation and machine intelligence. Humans focus on the work that demands creativity and deep analysis, while AI systems generate, assemble and iterate the content, which becomes dynamic and adaptive just like software.

Sustainable smart content

Smart content has different configurations and representations for different users, user interfaces, devices, languages and environments. The same piece of content can contain elements accessed through a voice user interface, presented in an augmented-reality application, or expanded into a fully immersive virtual-reality experience.
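One content model, many representations: the sketch below renders a single hypothetical content module differently for a voice assistant, an AR overlay and a default web view. The module fields and interface names are invented for illustration.

```python
# A single content module; field names are illustrative only.
MODULE = {
    "title": "Election results",
    "summary": "Party A leads with 24 percent of the vote.",
    "body": "Full breakdown by district follows.",
}

def render(module: dict, interface: str) -> str:
    """Pick the representation suited to the interface."""
    if interface == "voice":
        # A voice assistant reads out only the short summary.
        return module["summary"]
    if interface == "ar":
        # An AR overlay shows just the headline.
        return module["title"]
    # The default web view gets the full article.
    return "\n".join([module["title"], module["summary"], module["body"]])
```

The key design choice is that the content itself stays one object; only the renderer changes, so adding a new device or environment means adding a representation, not rewriting the content.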

As with personalized user interfaces and smart devices, smart content can be used for good and bad. It can enlighten and empower, as well as trick and mislead. Thus it’s critical that a human-centered approach and sustainable values are built into the very core of smart content creation. Personalization needs to be transparent, and users need to be able to choose whether they want the content personalized at all. And of course, not all content will be smart in the same way, if at all.

If used in a sustainable manner, smart content can break filter bubbles and echo chambers, because it can make a wide variety of information more accessible to diverse audiences. Through personalization, challenging topics can be presented to people according to their abilities and preferences, regardless of their background or level of education. For example, a beginner’s version of an article on vaccination or digital media literacy might use gamification elements, while a more experienced reader gets a thorough, fact-packed account of recent developments and research results.

Smart content also supports the fight against today’s information operations, such as fake news and its variants like “deepfakes” (http://www.niemanlab.org/2018/11/how-the-wall-street-journal-is-preparing-its-journalists-to-detect-deepfakes). If content is like software, then legitimate software runs on your devices and interfaces without a problem, while machine-generated, realistic-looking but suspicious content, like a deepfake, can be detected and filtered out based on its signature and other machine-readable qualities.


Smart content is the ultimate combination of user experience design, AI technologies and storytelling.

News media should be among the first to start experimenting with smart content. When intelligent content starts eating the world, one should be creating one’s own intelligent content.

The first players to master smart content will be among tomorrow’s reigning digital giants. That’s one of the main reasons today’s tech titans are moving seriously into the content game. Smart content is coming.