Voter manipulation on social media now a global problem, report finds

New research by the Oxford Internet Institute has found that social media manipulation is getting worse, with rising numbers of governments and political parties making cynical use of social media algorithms, automation and big data to manipulate public opinion at scale — with hugely worrying implications for democracy.

The report found that computational propaganda and social media manipulation have proliferated massively in recent years, and are now prevalent in 70 countries, up from 28 two years ago: an increase of 150%.

The research suggests that the spreading of fake news and toxic narratives has become the dysfunctional new ‘normal’ for political actors across the globe, thanks to social media’s global reach.

“Although propaganda has always been a part of political discourse, the deep and wide-ranging scope of these campaigns raise critical public interest concerns,” the report warns.

The researchers go on to dub the global uptake of computational propaganda tools and techniques a “critical threat” to democracies.

“The use of computational propaganda to shape public attitudes via social media has become mainstream, extending far beyond the actions of a few bad actors,” they add. “In an information environment characterized by high volumes of information and limited levels of user attention and trust, the tools and techniques of computational propaganda are becoming a common – and arguably essential – part of digital campaigning and public diplomacy.”

Techniques the researchers found being deployed by governments and political parties to spread political propaganda include the use of bots to amplify hate speech or other forms of manipulated content; the illegal harvesting of data or micro-targeting; and the use of armies of ‘trolls’ to bully or harass political dissidents or journalists online.

The researchers looked at computational propaganda activity in 70 countries around the world — including the US, the UK, Germany, China, Russia, India, Pakistan, Kenya, Rwanda, South Africa, Argentina, Brazil and Australia (see the end of this article for the full list) — finding organized social media manipulation in all of them.

So next time Facebook puts out another press release detailing a bit of “coordinated inauthentic behavior” it claims to have found and removed from its platform, it’s important to put it in the context of the bigger picture. And the picture painted by this report suggests that such small-scale, selective disclosures of propaganda-quashing successes amount to misleading Facebook PR when set against the sheer scale of the problem.

The problem is massive, global and largely taking place through Facebook’s funnel, per the report.

Facebook remains the platform of choice for social media manipulation — with researchers finding evidence of formally organized political disinformation campaigns taking place on its platform in 56 countries.

We reached out to Facebook for a response to the report and the company sent us a laundry list of steps it says it’s been taking to combat election interference and coordinated inauthentic activity — including in areas such as voter suppression, political ad transparency and industry-civil society partnerships.

But it did not offer any explanation of why all this apparent effort (just its summary of what it’s been doing exceeds 1,600 words) has so spectacularly failed to stem the rising tide of political fakes being amplified via Facebook.

Instead it sent us this statement: “Helping show people accurate information and protecting against harm is a major priority for us. We’ve developed smarter tools, greater transparency, and stronger partnerships to better identify emerging threats, stop bad actors, and reduce the spread of misinformation on Facebook, Instagram and WhatsApp. We also know that this work is never finished and we can’t do this alone. That’s why we are working with policymakers, academics, and outside experts to make sure we continue to improve.”

We followed up to ask why all its efforts have so far failed to reduce fake activity on its platform and will update this report with any response.

Returning to the report, the researchers say China has entered the global disinformation fray in a big way — using social media platforms to target international audiences with disinformation, something the country has long directed at its domestic population, of course.

The report describes China as “a major player in the global disinformation order”.

It also warns that the use of computational propaganda techniques combined with tech-enabled surveillance is providing authoritarian regimes around the world with the means to extend their control of citizens’ lives.

“The co-option of social media technologies provides authoritarian regimes with a powerful tool to shape public discussions and spread propaganda online, while simultaneously surveilling, censoring, and restricting digital public spaces,” the researchers write.

Other key findings from the report include that both democracies and authoritarian states are making (il)liberal use of computational propaganda tools and techniques.

Per the report:

  • In 45 democracies, politicians and political parties “have used computational propaganda tools by amassing fake followers or spreading manipulated media to garner voter support”
  • In 26 authoritarian states, government entities “have used computational propaganda as a tool of information control to suppress public opinion and press freedom, discredit criticism and oppositional voices, and drown out political dissent”

The report also identifies seven “sophisticated state actors” — China, India, Iran, Pakistan, Russia, Saudi Arabia and Venezuela — using what it calls “cyber troops” (aka dedicated online workers whose job is to use computational propaganda tools to manipulate public opinion) to run foreign influence campaigns.

Foreign influence operations — which include election interference — were found by the researchers to be taking place primarily on Facebook and Twitter.

We’ve reached out to Twitter for comment and will update this article with any response.

A year ago, when Twitter CEO Jack Dorsey was questioned by the Senate Intelligence Committee, he said the company was considering labelling bot accounts on its platform — agreeing that “more context” around tweets and accounts would be a good thing, while also arguing that identifying automation that’s scripted to look like a human is difficult.

Instead of adding a ‘bot or not’ label, Twitter has just launched a ‘hide replies’ feature — which lets users screen individual replies to their tweets (requiring an affirmative action from viewers to unhide and view any hidden replies). Twitter says the feature is intended to increase civility on the platform. But there have been concerns it could be abused to help propaganda spreaders — i.e. by allowing them to suppress replies that debunk their junk.

The Oxford Internet Institute researchers found bot accounts are very widely used to spread political propaganda (80% of countries studied used them). However, the use of human agents was even more prevalent (87% of countries).

Bot-human blended accounts, which combine automation with human curation in an attempt to fly under the BS detector radar, were much rarer: identified in 11% of countries.

Hacked or stolen accounts, meanwhile, were found being used in just 7% of countries.

In another key finding from the report, the researchers identified 25 countries working with private companies or strategic communications firms offering computational propaganda as a service, noting that: “In some cases, like in Azerbaijan, Israel, Russia, Tajikistan, Uzbekistan, student or youth groups are hired by government agencies to use computational propaganda.”

Commenting on the report in a statement, professor Philip Howard, director of the Oxford Internet Institute, said: “The manipulation of public opinion over social media remains a critical threat to democracy, as computational propaganda becomes a pervasive part of everyday life. Government agencies and political parties around the world are using social media to spread disinformation and other forms of manipulated media. Although propaganda has always been a part of politics, the wide-ranging scope of these campaigns raises critical concerns for modern democracy.”

Samantha Bradshaw, researcher and lead author of the report, added: “The affordances of social networking technologies — algorithms, automation and big data — vastly changes the scale, scope, and precision of how information is transmitted in the digital age. Although social media was once heralded as a force for freedom and democracy, it has increasingly come under scrutiny for its role in amplifying disinformation, inciting violence, and lowering trust in the media and democratic institutions.”

Other findings from the report include that:

  • 52 countries used “disinformation and media manipulation” to mislead users
  • 47 countries used state sponsored trolls to attack political opponents or activists, up from 27 last year

Which backs up the widespread sense in some Western democracies that political discourse has been getting less truthful and more toxic for a number of years — given that tactics which amplify disinformation and target harassment at political opponents are indeed thriving on social media, per the report.

Despite finding an alarming rise in the number of government actors across the globe who are misappropriating powerful social media platforms and other tech tools to influence public attitudes and try to disrupt elections, Howard said the researchers remain optimistic that social media can be “a force for good” — by “creating a space for public deliberation and democracy to flourish”.

“A strong democracy requires access to high quality information and an ability for citizens to come together to debate, discuss, deliberate, empathise and make concessions,” he said.

Clearly, though, there’s a stark risk of high quality information being drowned out by the tsunami of BS that’s being paid for by self-interested political actors. It’s also of course much cheaper to produce BS political propaganda than carry out investigative journalism.

Democracy needs a free press to function but the press itself is also under assault from online ad giants that have disrupted its business model by being able to spread and monetize any old junk content. If you want a perfect storm hammering democracy this most certainly is it.

It’s therefore imperative for democratic states to arm their citizens with education and awareness to enable them to think critically about the junk being pushed at them online. But as we’ve said before, there are no shortcuts to universal education.

Meanwhile regulation of social media platforms and/or the use of powerful computational tools and techniques for political purposes simply isn’t there. So there’s no hard check on voter manipulation.

Lawmakers have failed to keep up with the tech-fuelled times. Perhaps unsurprisingly, given how many political parties have their own hands in the data and ad-targeting cookie jar. (Concerned citizens are advised to practise good digital privacy hygiene to fight back against undemocratic attempts to hack public opinion.)

The researchers say their 2019 report, which is based on research carried out between 2018 and 2019, draws on a four-step methodology to identify evidence of globally organised manipulation campaigns: a systematic content analysis of news articles on cyber troop activity; a secondary literature review of public archives and scientific reports; the generation of country-specific case studies; and expert consultations.

Here’s the full list of countries studied:

Angola, Argentina, Armenia, Australia, Austria, Azerbaijan, Bahrain, Bosnia & Herzegovina, Brazil, Cambodia, China, Colombia, Croatia, Cuba, Czech Republic, Ecuador, Egypt, Eritrea, Ethiopia, Georgia, Germany, Greece, Honduras, Guatemala, Hungary, India, Indonesia, Iran, Israel, Italy, Kazakhstan, Kenya, Kyrgyzstan, Macedonia, Malaysia, Malta, Mexico, Moldova, Myanmar, Netherlands, Nigeria, North Korea, Pakistan, Philippines, Poland, Qatar, Russia, Rwanda, Saudi Arabia, Serbia, South Africa, South Korea, Spain, Sri Lanka, Sweden, Syria, Taiwan, Tajikistan, Thailand, Tunisia, Turkey, Ukraine, United Arab Emirates, United Kingdom, United States, Uzbekistan, Venezuela, Vietnam, and Zimbabwe.

Tibetans hit by the same mobile malware targeting Uyghurs

A recently revealed mobile malware campaign targeting Uyghur Muslims also ensnared a number of senior Tibetan officials and activists, according to new research.

Security researchers at the University of Toronto’s Citizen Lab say some of the Tibetan targets were sent specifically tailored malicious web links over WhatsApp, which, when opened, stealthily gained full access to their phone, installed spyware and silently stole private and sensitive information.

The exploits shared “technical overlaps” with a recently disclosed campaign targeting Uyghur Muslims, an oppressed minority in China’s Xinjiang region. Google last month disclosed the details of the campaign, which targeted iPhone users, but did not say who was targeted or who was behind the attack. Sources told TechCrunch that Beijing was to blame. Apple, which patched the vulnerabilities, later confirmed the exploits targeted Uyghurs.

Although Citizen Lab would not specify who was behind the latest round of attacks, the researchers said the same group targeting both Uyghurs and Tibetans also utilized Android exploits. Those exploits, recently disclosed and detailed by security firm Volexity, were used to steal text messages, contact lists and call logs, as well as watch and listen through the device’s camera and microphone.

It’s the latest move in a marked escalation of attacks on ethnic minority groups under surveillance and subjection by Beijing. China has long claimed rights to Tibet, but many Tibetans hold allegiance to the country’s spiritual leader, the Dalai Lama. Rights groups say China continues to oppress the Tibetan people, just as it does with Uyghurs.

A spokesperson for the Chinese consulate in New York did not return an email requesting comment, but China has long denied state-backed hacking efforts, despite a consistent stream of evidence to the contrary. Although China has acknowledged taking action against Uyghurs on the mainland, it categorizes its mass forced detentions of more than a million Chinese citizens as “re-education” efforts, a characterization widely disputed in the West.

The hacking group, which Citizen Lab calls “Poison Carp,” uses the same exploits, spyware and infrastructure to target Tibetans as well as Uyghurs, including officials in the Dalai Lama’s office, parliamentarians and human rights groups.

Bill Marczak, a research fellow at Citizen Lab, said the campaign was a “major escalation” in efforts to access and sabotage these Tibetan groups.

In its new research out Tuesday and shared with TechCrunch, Citizen Lab said a number of Tibetan victims were targeted with malicious links sent in WhatsApp messages by individuals purporting to work for Amnesty International and The New York Times. The researchers obtained some of those WhatsApp messages from TibCERT, a Tibetan coalition for sharing threat intelligence, and found each message was designed to trick each target into clicking the link containing the exploit. The links were disguised using a link-shortening service, allowing the attackers to mask the full web address but also gain insight into how many people clicked on a link and when.

“The ruse was persuasive,” the researchers wrote. During a week-long period in November 2018, the targeted victims opened more than half of the malicious links sent to them. Not all were infected, however; some of the targets were running iPhone software that was not vulnerable to the exploits.

One of the specific social engineering messages, pretending to be an Amnesty International aid worker, targeting Tibetan officials (Image: Citizen Lab/supplied)

The researchers said tapping on a malicious link targeting iPhones would trigger a chain of exploits designed to target a number of vulnerabilities, one after the other, in order to gain access to the underlying, typically off-limits, iPhone software.

The chain “ultimately executed a spyware payload designed to steal data from a range of applications and services,” said the report.

Once the exploitation had been achieved, a spyware implant would be installed, allowing the attackers to collect data from the device, including locations, contacts, call history and text messages, and send it to their command and control server. The implant would also exfiltrate data, such as messages and content, from a hardcoded list of apps — most of which, like QQMail and Viber, are popular with Asian users.

Apple had fixed the vulnerabilities months earlier (in July 2018); they were later confirmed as the same flaws found by Google earlier this month.

“Our customers’ data security is one of Apple’s highest priorities and we greatly value our collaboration with security researchers like Citizen Lab,” an Apple spokesperson told TechCrunch. “The iOS issue detailed in the report had already been discovered and patched by the security team at Apple. We always encourage customers to download the latest version of iOS for the best and most current security enhancements.”

Meanwhile, the researchers found that the Android-based attacks would detect which version of Chrome was running on the device and would serve a matching exploit. Those exploits had been disclosed and were “obviously copied” from previously released proof-of-concept code published by their finders on bug trackers, said Marczak. A successful exploitation would trick the device into opening Facebook’s in-app Chrome browser, which gives the spyware implant access to device data by taking advantage of Facebook’s vast number of device permissions.

The researchers said the code suggests the implant could be installed in a similar way via Facebook Messenger and the messaging apps WeChat and QQ, but these routes failed to work in the researchers’ testing.

Once installed, the implant downloads plugins from the attacker’s server in order to collect contacts, messages, locations and access to the device’s camera and microphone.

When reached, Google did not comment. Facebook, which received Citizen Lab’s report on the exploit activity in November 2018, did not comment at the time of publication.

“From an adversary perspective what makes mobile an attractive spying target is obvious,” the researchers wrote. “It’s on mobile devices that we consolidate our online lives and for civil society that also means organizing and mobilizing social movements that a government may view as threatening.”

“A view inside a phone can give a view inside these movements,” they said.

The researchers also found another wave of links trying to trick a Tibetan parliamentarian into allowing a malicious app access to their Gmail account.

Citizen Lab said the threat from the mobile malware campaign was a “game changer.”

“These campaigns are the first documented cases of iOS exploits and spyware being used against these communities,” the researchers wrote. But attacks like Poison Carp demonstrate that mobile threats “are not expected by the community,” as shown by the high click rates on the exploit links.

Gyatso Sither, TibCERT’s secretary, said the highly targeted nature of these attacks presents a “huge challenge” for the security of Tibetans.

“The only way to mitigate these threats is through collaborative sharing and awareness,” he said.

Meet Facebook’s latest fake

Facebook CEO Mark Zuckerberg, a 35-year-old billionaire who keeps refusing to sit in front of international parliamentarians to answer questions about his ad business’ impact on democracy and human rights around the world, has a new piece of accountability theatre to sell you: an “Oversight Board”.

Not of Facebook’s business itself. Though you’d be forgiven for thinking that’s what Facebook’s blog post is trumpeting, with the grand claim that it’s “Establishing Structure and Governance for an Independent Oversight Board”.

Referred to as a sort of ‘Supreme Court of Facebook’ during last year’s seeding stage, when Zuckerberg gave select face-time to podcast and TV hosts he felt comfortable would spread his conceptual gospel with a straight face, this supplementary content decision-making body has since been outfitted in the company’s customary (for difficult topics) bloodless ‘Facebookese’ (see also “inauthentic behavior”, its choice euphemism for fake activity on its platform).

The Oversight Board is intended as a more visible mechanism for resolving, and thus (Facebook hopes) quelling, speech-related disputes. It will sit atop the daily grind of Facebook content moderation, which takes place behind closed doors and signed NDAs, where outsourced armies of contractors are paid to eyeball the running sewer of hate, abuse and violence so actual users don’t have to.

Facebook’s one-size-fits-all content moderation policy doesn’t fit all, and can’t. There’s no such thing as a 2.2BN+ “community” — as the company prefers to refer to its globe-spanning user-base. So quite how the massive diversity of Facebook users can be meaningfully represented by the views of a last-resort case review body with as few as 11 members has not yet been made clear.

“When it is fully staffed, the board is likely to be forty members. The board will increase or decrease in size as appropriate,” Facebook writes vaguely this week.

Even if it were proposing one board member per market of operation (and it’s not), that would require a single individual to meaningfully represent the diverse views of an entire country. Which would be ludicrous, as well as risking the usual political divides stymying good faith efforts.

It seems most likely Facebook will seek to ensure the initial make-up of the board reflects its corporate ideology — as a US company committed to upholding freedom of expression. (It’s clearly no accident the first three words in the Oversight Board’s charter are: “Freedom of expression”.)

Anything less US-focused might risk the charter’s other clearly stated introductory position — that “free expression is paramount”.

But where will that leave international markets which have suffered the worst kinds of individual and societal harms as a consequence of Facebook’s failure to moderate hate speech, dangerous disinformation and political violence, to name a few of the myriad content scandals that dog the company wherever it goes?

Facebook needs international markets for its business to turn a profit. But you sure wouldn’t know it from its distribution of resources. Not for nothing has the company been accused of digital colonialism.

The level of harm flowing from Facebook decisions to take down or leave up certain pieces of content can be excruciatingly high. Such as in Myanmar where its platform became a conduit for hate speech-fuelled ethnic violence towards the Rohingya people and other ethnic minorities.

It’s reputational-denting failures like Myanmar — which last year led the UN to dub Facebook’s platform “a beast” — that are motivating this latest self-regulation effort. Having made its customary claim that it will do a better job of decision-making in future, Facebook is now making a show of enlisting outsiders for help.

The wider problem is Facebook has scaled so big its business is faced with a steady pipeline of tricky, controversial and at times life-threatening content moderation decisions. Decisions it claims it’s not comfortable making as a private company. Though Facebook hasn’t expressed discomfort at monetizing all this stuff. (Even though its platform has literally been used to target ads at nazis.)

Facebook’s size is humanity’s problem but of course Facebook isn’t putting it like that. Instead — coming sometime in 2020 — the company will augment its moderation processes with a lottery-level chance of a final appeal via a case referral to the Oversight Board.

The level of additional oversight here will of course be exceptionally select. This is a last resort, cherry-picked appeal layer that will only touch a fantastically tiny proportion of the content choices Facebook moderators make every second of every day — and from which real world impacts ripple out and rain down. 

“We expect the board will only hear a small number of cases at first, but over time we hope it will expand its scope and potentially include more companies across the industry as well,” Zuckerberg writes this week, managing output expectations still many months ahead of the slated kick off — before shifting focus onto the ‘future hopes’ he’s always much more comfortable talking about. 

Case selection will be guided by Facebook’s business interests, meaning the push, even here, is still for scale of impact. Facebook says cases will be selected from a pool of complaints and referrals that “have the greatest potential to guide future decisions and policies”.

The company is also giving itself the power to leapfrog general submissions by sending expedited cases directly to the board to ask for a speedy opinion. So its content questions will be prioritized. 

Incredibly, Facebook is also trying to sell this self-styled “oversight” layer as independent from Facebook.

The Oversight Board’s overtly bureaucratic branding is pepped up in Facebook’s headline spin as “an Independent Oversight Board”. Although the adjective is curiously absent from other headings in Facebook’s already sprawling literature about the OB, including the newly released charter, published this week, which specifies the board’s authority, scope and procedures.

The nine-page document was accompanied by a letter from Zuckerberg in which he opines on “Facebook’s commitment to the Oversight Board”, as his header puts it — also dropping the word ‘independent’ in favor of slipping into a comfortably familiar case. Funny, that.

The body text of Zuckerberg’s letter goes on to make several references to the board as “independent”; an “independent organization”; exercising “its independent judgement”. But here that’s essentially just Mark’s opinion.

The elephant in the room — which, if we continue the metaphor, is in the process of being dressed by Facebook in a fancy costume that attempts to make it look like, well, a board room table — is the supreme leader’s ongoing failure to submit himself and his decisions to any meaningful oversight.

Supreme leader is an accurate descriptor for Zuckerberg as Facebook CEO, given the share structure and voting rights he has afforded himself mean no one other than Zuckerberg can sack Zuckerberg. (Asked last year, during a podcast interview with recode’s Kara Swisher if he was going to fire himself, in light of myriad speech scandals on his platform, Zuckerberg laughed and then declined.)

It’s a corporate governance dictatorship that has allowed Facebook’s boy king to wield vast power around the world without any internal checks. Power without moral responsibility if you will.

Throughout Zuckerberg’s (now) 15-year turn as Facebook CEO, one long rolling apology tour, neither the claims he’ll do things differently next time nor the cool expansionist ambition have wavered. He’s still at it, of course; with a plan for a global digital currency (Libra), while bullishly colonizing literal hook-ups (Facebook Dating). Anything to keep the data and ad dollars flowing.

Recently Facebook also paid a $5BN FTC fine to avoid its senior executives having to face questions about their data governance and policy enforcement fuck-ups — leaving Zuckerberg & co free to get back to lucrative privacy-screwing business as usual. (To put the fine in context, Facebook’s 2018 full year revenue clocked in at $55.8BN.)

All of which is to say that an ‘independent’ Facebook-devised “Oversight Board” is just a high gloss sticking plaster to cover the lack of actual regulation — internal and external — of Zuckerberg’s empire.

It is also an attempt by Facebook to paper over its continued evasion of democratic accountability. To distract from the fact its ad platform is playing fast and loose with people’s rights and lives; reshaping democracies and communities while Facebook’s founder refuses to answer parliamentarians’ questions or account for scandal-hit business decisions. Privacy is never dead for Mark Zuckerberg.

Evasion is actually a little tame a term. How Facebook operates is far more actively hostile than that. Its platform is reshaping us without accountability or oversight, even as it ploughs profits into spinning and shape-shifting its business in a bid to prevent our democratically elected representatives from being able to reshape it.

Zuckerberg appropriating the language of civic oversight and jurisprudence for this “project”, as his letter calls the Oversight Board — committing to abide by the terms of a content decision-making review vehicle entirely of his own devising, whose Facebook-written charter stipulates it will “review and decide on content in accordance with Facebook’s content policies and values” — is hardly news. Even though Facebook is spinning at the very highest level to try to make it so.

What would constitute a newsworthy shock is Facebook’s CEO agreeing to take questions from the democratically elected representatives of the billions of users of his products who live outside the US.

Zuckerberg agreeing to meet with parliamentarians around the world so they can put to him questions and concerns on a rolling and regular basis would be a truly incredible news flash.

Instead it’s fiction. That’s not how the empire functions.

The Facebook CEO has instead ducked as much democratic scrutiny as a billionaire in charge of a historically unprecedented disinformation machine possibly can — submitting himself to an awkward question-dodging turn in Congress last year; and one fixed-format meeting of the EU parliament’s conference of presidents, initially set to take place behind closed doors (until MEPs protested), where he was heckled for failing to answer questions.

He has also, most recently, pressed US president Donald Trump’s flesh. We can only speculate on how that meeting of minds went. Power meet irresponsibility — or was it vice versa?


International parliamentarians trying on behalf of the vast majority of the world’s Facebook users to scrutinize Zuckerberg and hold his advertising business to democratic account have, meanwhile, been roundly snubbed.

Just this month Zuckerberg declined a third invitation to speak in front of the International Grand Committee on Disinformation which will convene in Dublin this November.

At a second meeting in Canada earlier this year Zuckerberg and COO Sheryl Sandberg both refused to appear — leading the Canadian parliament’s ethics committee to vote to subpoena the pair.

While, last year, the UK parliament got so frustrated with Facebook’s evasive behavior during a timely enquiry into online disinformation, which saw its questions fobbed off by a parade of Zuckerberg stand-ins armed with spin and misdirection, that a sort of intergovernmental alchemy occurred — and the International Grand Committee on Disinformation was formed in an eye-blink, bringing multiple parliaments together to apply democratic pressure to Facebook. 

The UK Digital, Culture, Media and Sport committee’s frustration at Facebook’s evasive behavior also led it to deploy arcane parliamentary powers to seize a cache of internal Facebook documents from a US lawsuit in a creative attempt to get at the world-view locked inside Zuckerberg’s blue box.

The unvarnished glimpse of Facebook’s business that these papers afforded certainly isn’t pretty… 

US legal discovery appears to be the only reliable external force capable of extracting data from inside the belly of the nation-sized beast. That’s a problem for democracies. 

So Facebook instructing an ‘oversight board’ of its own making to do anything other than smooth publicity bumps in the road, and pave the way for more Facebook business as usual, is like asking a Koch brothers funded ‘stink tank’ to be independent of fossil fuel interests. The OB is just Facebook’s latest crisis PR tool. More fool anyone who signs up to ink their name to its democratically void rubberstamp.

Dig into the detail of the charter and cracks in the claimed “independence” soon appear.

Aside from the obvious overriding existential points that the board only exists because Facebook exists, making it a dependent function of Facebook whose purpose is to enable its spawning parental system to continue operating; and that it’s funded and charged with chartered purpose by the very same blue-veined god it’s simultaneously supposed to be overseeing (quite the conflict of interest), the charter states that Facebook itself will choose the initial board members. Who will then choose the rest of the first cohort of members.

“To support the initial formation of the board, Facebook will select a group of co-chairs. The co-chairs and Facebook will then jointly select candidates for the remainder of the board seats,” it writes in pale grey Facebookese with a tone set to ‘smooth reassurance’ — when the substance of what’s being said should really make you go ‘wtf, how is that even slightly independent?!’

Because the inaugural (Facebook-approved) member cohort will be responsible for the formative case selections — which means they’ll be laying down the foundational ‘case law’ that the board is also bound, per Facebook’s charter, to follow thereafter.

“For each decision, any prior board decisions will have precedential value and should be viewed as highly persuasive when the facts, applicable policies, or other factors are substantially similar,” runs an instructive section on the “basis of decision-making”.

The problem here hardly needs spelling out. This isn’t Facebook changing, this is more of the same ‘Facebook first’ ethos which has always driven its content moderation decisions — just now with a highly polished ‘overseen’ sheen.

This isn’t accountability either. It’s Facebook trying to protect its business from actual regulation by creating a blame-shifting firewall to shield its transparency-phobic execs from democratic (and moral) scrutiny. And indeed to shield Zuckerberg & his inner circle from future content scandals that might threaten to rock the throne, a la Cambridge Analytica.

(Judging by other events this week that mission may not be going so well… )

Given the lengths this company is going to to eschew democratic scrutiny — ducking and diving even as it weaves its own faux oversight structure to manage negative PR on its behalf (yep, more fakes!) — you really have to wonder what Facebook is trying to hide.

A moral vacuum the size of a black hole? Or perhaps it’s just trying to buy time to complete its corporate takeover of the democratic world order…

Because of course the Oversight Board can’t set actual Facebook policy. Don’t be ridiculous! It can merely issue policy recommendations — which Facebook can just choose to ignore.

So even if we imagine the OB running years in the future, when it might theoretically be possible its membership has drifted out of Facebook’s comfortable set-up “support” zone, the charter has baked in another firewall that lets Zuckerberg ignore any policy pressure he doesn’t like. Just, y’know, on the off-chance the board gets too independently minded. Truly, there’s nothing to see here.

Entities structured by corporate interests to role-play ‘neutral’ advice or ensure ‘transparent’ oversight — or indeed to promulgate self-interested propaganda dressed in the garb of intellectual expertise — are almost always a stacked trick.

This is why it’s preferable to live in a democracy. And be governed by democratically accountable institutions that are bound by legally enforceable standards of transparency. Though Facebook hopes you’ll be persuaded to vote for manipulation by corporate interest instead.

So while Facebook’s claim that the Oversight Board will operate “transparently” sure sounds good, it’s also entirely meaningless. These are not legal standards of transparency. Facebook is a business, not a democracy. There are no legal binds here. It’s self-regulation. Ergo, a pantomime.

You can see why Facebook avoided actually calling the OB its ‘Supreme Court’; that would have been trolling a little too close to the bone.

Without legal standards of transparency (or indeed democratic accountability) being applied, there are endless opportunities for Facebook’s self interest to infiltrate the claimed separation between oversight board, oversight trust and the rest of its business; to shape and influence case selections, decisions and policy recommendations; and to seed and steer narrative-shaping discussion around hot button speech issues which could help move the angry chatter along — all under the carefully spun cover of ‘independent external oversight’.

No one should be fooled into thinking a Facebook-shaped and funded entity can meaningfully hold Facebook to account on anything. Especially not when, as in this case, it’s been devised to absorb the flak on irreconcilable speech conflicts so Facebook doesn’t have to.

It’s highly doubtful that even a truly independent board cohort slotted into this Zuckerberg PR vehicle could meaningfully influence Facebook’s policy in a more humanitarian direction. Not while its business model is based on mass-scale attention harvesting and privacy-hostile people profiling. The board’s policy recommendations would have to demand a new business model. (To which we already know Facebook’s response: ‘LOL! No.’)

The Oversight Board is just the latest blame-shifting publicity exercise from a company with a user-base as big as a country that gifts it massive resource to throw at its ‘PR problem’ (as Facebook sees it); i.e. how to seem like a good corporate citizen whilst doing everything possible to evade democratic scrutiny and outrun the leash of government regulation. tl;dr: You can’t fix anything if you don’t believe there’s an underlying problem in the first place.

For an example of how the views of a few hand-picked independent experts can be channeled to further a particular corporate agenda look no further than the panel of outsiders Google assembled in Europe in 2014 in response to the European Court of Justice ‘right to be forgotten’ ruling — an unappealable legal decision that ran counter to its business interests.

Google used what it billed as an “advisory committee” of outsiders mostly as a publicity vehicle, holding a large number of public ‘hearings’ where it got to frame a debate and lobby loudly against the law. In such a context Google’s nakedly self-interested critique of EU privacy rights was lent a learned, regionally seasoned dressing of nuanced academic concern, thanks to the outsiders doing time on its platform.

Google also claimed the panel would steer its decision-making process on how to implement the ruling. And in their final report the committee ended up aligning with Google’s preference to only carry out search de-indexing at the European (rather than .com global) domain level. Their full report did contain some dissent. But Google’s preferred policy position won out. (And, yes, there were good people on that Google-devised panel.)

Facebook’s Oversight Board is another such self-interested tech giant stunt. One where Facebook gets to choose whether or not to outsource a few tricky content decisions while making a big show of seeming outward-looking, even as it works to shift and defuse public and political attention from its ongoing lack of democratic accountability.

What’s perhaps most egregious about this latest Facebook charade is it seems intended to shift attention off of the thousands of people Facebook pays to labor daily at the raw coal face of its content business. An outsourced army of voiceless workers who are tasked with moderating at high speed the very worst stuff that’s uploaded to Facebook — exposing themselves to psychological stress, emotional trauma and worse, per multiple media reports.

Why isn’t Facebook announcing a committee to provide that existing expert workforce with a public voice on where its content lines should lie, as well as the power to issue policy recommendations?

It’s impossible to imagine Facebook actively supporting Oversight Board members being selected from among the pool of content moderation contractors it already pays to stop humanity shutting its business down in sheer horror at what’s bubbling up the pipe.

On member qualifications, the Oversight Board charter states: “Members must have demonstrated experience at deliberating thoughtfully and as an open-minded contributor on a team; be skilled at making and explaining decisions based on a set of policies or standards; and have familiarity with matters relating to digital content and governance, including free expression, civic discourse, safety, privacy and technology.”

There’s surely not a Facebook moderator in the whole wide world who couldn’t already lay claim to that skill-set. So perhaps it’s no wonder the company’s ‘Oversight Board’ isn’t taking applications.

Tinder’s interactive series ‘Swipe Night’ could bring a needed boost to user engagement

On Sundays in October, Tinder is launching an “interactive adventure” in its dating app called “Swipe Night” that will present a narrative where users make a series of choices in order to proceed. This sort of choose-your-own-adventure format has recently been popularized by Netflix and others as a new way to engage with digital media. In Tinder’s case, its larger goal may not be a dramatic entry into scripted, streaming video, as has been reported, but rather a creative way to juice some lagging user engagement metrics.

For example, based on analysis of Android data in the U.S. from SimilarWeb, Tinder’s sessions per user, meaning the number of times the average user opens the app per day, have declined. From the period of January–August 2018 to the same period in 2019, sessions declined 10.8%, from 4.5 to 4.1.

The open rate, meaning the percentage of the Tinder install base that opens the app on a daily basis, also declined 5.9 percentage points during this time, going from 28% to 22.1%.

These sorts of declines are hidden behind what would otherwise appear to be steady growth. Tinder’s daily active users, for example, grew 3.1% year-over-year from 1.114 million to 1.149 million. And its install penetration on Android devices grew by 1%, the firm found. (See below.)

[Image: Tinder install penetration chart]
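It’s worth being careful with the two kinds of change quoted above: the open-rate drop is an absolute change measured in percentage points, while the daily-active-user figure is a relative, year-over-year change. A quick sketch using the article’s rounded figures (so results are approximate):

```python
def pct_points(old, new):
    # Absolute change, in percentage points, between two rates.
    return new - old

def pct_change(old, new):
    # Relative change, in percent, between two values.
    return (new - old) / old * 100

# Open rate: 28% -> 22.1% (SimilarWeb estimate)
print(round(pct_points(28.0, 22.1), 1))    # -5.9 percentage points...
print(round(pct_change(28.0, 22.1), 1))    # ...but roughly a 21% relative drop

# Daily active users: 1.114M -> 1.149M
print(round(pct_change(1.114, 1.149), 1))  # +3.1% year-over-year
```

The same 5.9-point fall reads as a much steeper ~21% decline when expressed relatively, which is why it matters how such metrics are framed.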

Drops in user engagement are worth tracking, given the potential revenue impact.

App store intelligence firm Sensor Tower found Tinder experienced its first-ever quarter-over-quarter decline in combined revenue from both the App Store and Google Play in Q2 2019.

Spending was down 8.8% from $260 million in Q1 to $237 million in Q2, the firm says. This was largely before Tinder shifted in-app spending out of Google Play, which was in late Q2 to early Q3. Tinder revenue was still solidly up 46% year-over-year, the company itself reported in Q2, due to things like pricing changes, product optimizations, better “Tinder Gold” merchandising and more.

There are many reasons as to why users could be less engaged with Tinder’s app. Maybe they’re just not having as much fun — something “Swipe Night” could help to address. Sensor Tower also noted that negative sentiment in Tinder’s user ratings on the U.S. App Store was at 79% last quarter, up from 68% in Q2 2018. That’s a number you don’t want to see climbing.

Of course, all these figures are estimates from third parties, not directly reported — so take them with the proverbial grain of salt. But they help to paint a picture as to why Tinder may want to try some weird, experimental “mini-series”-styled event like this.

It wouldn’t be the first gimmick Tinder has used to boost engagement, either. It also recently launched engagement boosters like Spring Break mode and Festival mode, for example. But this would be the most expensive to produce and far more demanding from a technical standpoint.

[Image: Swipe Night intro]

In “Swipe Night,” Tinder users will participate by launching the app on Sundays in October, anytime from 6 PM to midnight. The 5-minute story will follow a group of friends in an “apocalyptic adventure” where users will face both moral dilemmas and practical choices.

You’ll have 7 seconds to make a decision and proceed with the narrative, Tinder says. These decisions will then be added to your user profile, so people can see what decisions others made at those same points. You’ll make your choice using the swipe mechanism, hence the series’ name.

Every Sunday, a new part of the series will arrive. Tinder shot over 2 hours worth of video for the effort, but you’ll only see the portions relevant to your own choices.

The series stars Angela Wong Carbone (“Chinatown Horror Story”), Jordan Christian Hearn (“Inherent Vice”) and Shea Gabor, and was directed by Karena Evans, a music video director who has worked with Drake. Writers include Nicole Delaney (Netflix’s “Big Mouth”) and Brandon Zuck (HBO’s “Insecure”).

[Image: Swipe Night choice screen]

 

Tinder touts the event as a new way to match users and encourage conversations.

“More than half of Tinder members are Gen Z, and we want to meet the needs of our ever-evolving community. We know Gen Z speaks in content, so we intentionally built an experience that is native to how they interact,” said Ravi Mehta, Tinder’s Chief Product Officer. “Dating is all about connection and conversation, and Swipe Night felt like a way to take that to the next level. Our hope is that it will encourage new, organic conversations based on a shared content experience,” he said.

How someone chooses to play through a game doesn’t necessarily translate into some sort of criteria as to whether they’d be a good match, however. Which is why it’s concerning that Tinder plans to feed this data to its algorithm, according to Variety.

At best, a series like this could give you something to talk about — but it’s probably not as much fun as chatting about a shared interest in a popular TV show or movie.

Variety also said the company is considering whether to air the series on another streaming platform in the future.

Tinder declined to say if it plans to launch more of these experiences over time.

Despite the user engagement drop, which crazy stunts like “Swipe Night” could quickly — if temporarily — correct, the dating app doesn’t have much to worry about at this time. Tinder still accounts for the majority of spending (59%) in the top 10 dating apps globally as of last quarter, Sensor Tower noted. This has not changed significantly from Q2 2018 when Tinder accounted for 60% of spending in the top 10 dating apps, it said.

 

Facebook has suspended ‘tens of thousands’ of apps suspected of hoarding data

Facebook has suspended “tens of thousands” of apps connected to its platform which it suspects may be collecting large amounts of user profile data.

That’s a sharp rise from the 400 apps flagged a year ago by the company’s investigation in the wake of Cambridge Analytica, a scandal that saw tens of millions of Facebook profiles scraped to help swing undecided voters in favor of the Trump campaign during the U.S. presidential election in 2016.

Facebook did not provide a more specific number in its blog post but said the apps were built by 400 developers.

Many of the apps were banned for reasons like siphoning off Facebook user profile data or making data public without protecting users’ identities, or for other violations of the company’s policies.

Despite the bans, the social media giant said it has “not confirmed” other instances of misused user data beyond those it has already notified the public about. Those previously disclosed include South Korean analytics firm Rankwave, accused of abusing the developer platform and refusing an audit; and myPersonality, a personality quiz that collected data on more than four million users.

The action comes in the wake of the scandal around the since-defunct Cambridge Analytica and other serious privacy and security breaches. Federal authorities and lawmakers have launched investigations and issued fines over everything from the company’s Libra cryptocurrency project to how it handles users’ privacy.

Facebook said its investigation will continue.

Twitter launches its controversial ‘Hide Replies’ feature in the U.S. and Japan

Twitter’s controversial “Hide Replies” feature, aimed at civilizing conversations on its platform, is launching today in the U.S. and Japan after earlier tests in Canada. The addition is one of the more radical changes to Twitter to date. It puts people back in control of a conversation they’ve started by giving them the ability to hide those contributions they think are unworthy.

These replies, which may range from the irrelevant to the outright offensive, aren’t actually deleted from Twitter. They’re just put behind an extra click.

That means people who come into a conversation to cause drama, make inappropriate remarks, or bully and abuse others won’t have their voices heard by the majority of the conversation’s participants. Only those who choose to view the hidden replies will see those posts.

[Image: hidden replies, author view]

Other social media platforms don’t give so much power to commenters to disrupt conversations. On Facebook and Instagram, for example, you can delete any replies to your own posts.

But Twitter has a different vibe. It’s meant to be a public town square, where everyone has a right to speak (within reason).

Unfortunately, Twitter’s open nature has also led to bullying and abuse. Before today, the only options Twitter offered were to mute, block and report users. Blocking and muting, however, only impact your own Twitter experience. You may no longer see posts from those users, but others still can. Reporting a tweet is also a complicated process that takes time. It’s not an immediate solution for a conversation rapidly spinning out of control.

While “Hide Replies” will help to address these problems, it ships with challenges of its own, too. It could be used as a way to silence dissenting opinions, including those expressed thoughtfully, or even fact-checked clarifications.

Twitter believes the feature will ultimately encourage people to behave better when posting to its platform.

“We already see people trying to keep their conversations healthy by using block, mute, and report, but these tools don’t always address the issue. Block and mute only change the experience of the blocker, and report only works for the content that violates our policies,” explained Twitter’s PM of Health Michelle Yasmeen Haq earlier this year.

[Image: hidden replies, consumer view]

Since launching in Canada in July, Twitter said, people have mostly used the feature to hide replies they found irrelevant, abusive or unintelligible. User feedback was positive as well: those who used the tool said they found it a helpful way to control what they saw, similar to keyword muting.

In a survey, 27% of those who had their tweets hidden said they would reconsider how they interact with others in the future, Twitter said. That’s not a large proportion, but it’s enough to make a dent. However, it’s unclear how representative this survey was. Twitter declined to say how many people used the feature or how many were surveyed about its impacts.

The system will now also ask users who hide replies if they also want to block the account, as a means of clarifying that “hiding” is a different function.

“These are positive and heartening results: the feature helped people have better conversations, and was a useful tool against replies that deterred from the person’s original intent,” explained Twitter in a blog post, shared today. “We’re interested to see if these trends continue, and if new ones emerge, as we expand our test to Japan and the U.S. People in these markets use Twitter in many unique ways, and we’re excited to see how they might use this new tool,” the post read.

Despite the expansion, Twitter says “Hide Replies” is still considered a test as the company is continuing to evaluate the system, and it’s not available to Twitter’s global user base.

The new feature will start rolling out at 2 PM PT in both the U.S. and Japan and will be available across mobile and web clients.

YouTube overhauls its problematic verification program

YouTube’s verification program is getting a massive overhaul, the company announced today, which will likely result in a number of less prominent creators losing their verification status. Previously, YouTube allowed any channel that reached 100,000 subscribers to request verification. That limit is being removed, with a change to the verification program that rolls out in October. Going forward, YouTube will focus its efforts on verifying channels that have more of a need to prove their authenticity — like those belonging to a brand, public figure, artist or another creator who might be subject to impersonation, for example.

YouTube says the earlier verification system was established when the site was smaller, but its ecosystem has since grown and “become more complex.”

Instead of looking at a number of subscribers — a metric that can be gamed by bots — the new system will have murkier requirements. YouTube says it’s about “prominence,” which it defines in a number of ways.

For starters, YouTube will determine if the channel represents a “well-known or highly searched for creator, artist, public figure or company.” It will also consider if the channel is widely recognized outside of YouTube and has a strong online presence, or if it’s a channel that has a very similar name to many other channels.

We understand YouTube will use a combination of human curation and algorithmic signals to make these determinations. When asked, the company declined to discuss the specifics, however.

[Image: new verified treatment for creator channels]

There were several reasons YouTube wanted to change its system, beyond raising the threshold for verification.

The company had run into a similar problem that Twitter once faced — people mistook the verification badge as an endorsement. On Twitter, that issue reached a tipping point when it was discovered that Twitter had verified the Charlottesville rally organizer. Twitter stopped verifying accounts shortly afterward. Its system today is still being fixed, but the project has been put on the back burner.

Similarly, YouTube’s research found that over 30% of users misunderstood the verification badge’s meaning, believing the checkmark indicated “endorsement of content,” not “identity.”

This is problematic for YouTube for a number of reasons, but mainly because the company wants to distance itself from the content on its platform — content that is often racist, vile, false, dangerous, conspiracy-filled and extremist. YouTube wants to be an open site, with all the troubles that entails, but doesn’t want to be held accountable for the terrible things posted there — like the 14-year-old girl who grew to online fame by posting racist, anti-Muslim, anti-LGBTQ videos, or the high-profile star who made repeated racist comments, then got honored by YouTube with special creator rewards.

There were other issues with the prior system, as well.

Some creators would fake their verification status, for instance. Before the changes, a verified channel would display a checkmark next to its channel name. This could be easily forged by simply adding a checkmark to the end of a channel name.

Plus, the checkmark itself only really worked when people viewed the channel’s main watch page on desktop or mobile. It didn’t translate as well to interactions in live chats, on community posts or in stories.

[Image: new verified treatment for artist channels]

By revamping the verification system, YouTube is clarifying that verification isn’t an endorsement — it’s a neutral statement of fact. The new treatment is also more difficult to forge, and it works everywhere the creator interacts with fans.

The updated verification system drops the checkmark in favor of a gray stripe across the channel name (see above).

This applies to both channels and artists; for the latter, the new treatment will replace the music note.

The system will roll out in late October, YouTube said, and the new criteria will apply to all channels.

Those who meet the new requirements won’t need to apply — they’ll automatically receive the new verified treatment. Those who don’t qualify for re-verification will be notified today and will have the option to appeal the decision before the changes take place.

Information on the appeals process will be available in YouTube’s Help Center.

Update, 9/19/19, 1:26 PM ET: Here’s the letter YouTube creators are receiving. Note it refers to a timeframe of “early” instead of “late” October for the changes.

[Image: letter to YouTube creators]

‘The best VC on Instagram’ is now VC-backed

About 18 months ago, Jenny Gyllander created an Instagram account by the name @thingtesting.

The premise was simple. Gyllander, who was at the center of the London startup ecosystem as an investor with the British seed fund Backed.VC, would upload photos of interesting direct-to-consumer products with a caption that served as a bite-sized review. The experiment began with Birchbox, a provider of curated boxes of beauty products that rose to prominence amid the subscription box hype of yesteryear. In her short review, tailored perfectly for the Instagram generation, Gyllander admitted to being “like 10 years late to this much hyped subscription-everything party,” adding that “after two boxes and ten products, only three products were relevant to me.” Her honesty, and perhaps more importantly, her brevity, garnered her a small following of venture capitalists, founders and consumer brand enthusiasts.

 


Since that first post, Gyllander has featured and reviewed more than 100 products on her Instagram account — which today counts 32,800 followers — quit her day job and begun building an Instagram-inspired, full-fledged review business.

“I found something I am very, very passionate about,” Gyllander tells TechCrunch. “Finding the D2C niche was for me a little bit of a holy grail. It’s where brands and startups align for the first time in a concrete way.”

With a $300,000 pre-seed investment from angel investor and Homebrew co-founder Hunter Walk, who previously called Thingtesting “The best VC on Instagram,” early Spotify investor Shakil Khan and more, Gyllander wants to create a full-scale D2C review platform with a team of reviewers and content creators, and a portal for her loyal followers to write and submit their own reviews. She compares what she envisions for Thingtesting to that of Rotten Tomatoes. Akin to the popular website for movie and television reviews, each product review on her future website will include a Thingtesting score and an audience score. The goal is to help consumers shop smarter and filter through the D2C noise.

“People are confused right now by the sheer amount of products launching,” Gyllander said. “I want Thingtesting to be a filter for people to consume better … It’s a role department stores used to have back in the day but nobody has really filled that role in the online world.”


Gyllander, already making money from what was once a side project, has plans in store to generate significantly more revenue. Currently, she’s capitalizing on Instagram’s Close Friends list, which the social media hub launched last year to allow users to share content with fewer people. Gyllander, like a slew of other Instagram influencers, however, quickly realized an opportunity to monetize content using the feature, a trend explained in detail in a recent report from The Atlantic.

Gyllander charges a lifetime fee of $100 to her followers hoping for a spot on her Close Friends list. Those followers are then provided exclusive content, including behind-the-scenes looks at her product review journeys. So far, 300 people have been granted access to the exclusive group as others sit on the waitlist. Gyllander explains she hasn’t green-lit every request to enter the coveted group because she wants to maintain a sense of community as the account grows in popularity. Early next year, she hopes, she will have launched a Thingtesting website and a new subscription-based membership tier targeting D2C connoisseurs, investors and anyone interested in a front seat view of the booming D2C industry.

As Thingtesting morphs into a digital review platform and expands from the bounds of Instagram, Gyllander will have to work harder to differentiate what she’s built from other review sites and D2C blogs. Her secret weapon, she believes, is her authenticity.

“It’s my honesty,” Gyllander said. “And it’s the fact that there’s no payment involved from the brands and that I’m not being paid to review products. That’s something quite rare in the Instagram world today. There aren’t that many accounts that are just talking about new products with non-monetary incentives.”

 



Since launching with a review of Birchbox, Gyllander has shared her thoughts on Magic Spoon, a D2C cereal company: “one bowl kept me full for hours,” she wrote, ultimately concluding she wouldn’t continue eating the cereal. More recently, she referred to the D2C aperitif brand Haus as “stunning;” wrote a lukewarm review of the blue light-protecting eyewear brand Felix Gray; and posted a glowing summary of Dripkit, a D2C coffee brand.

To secure a spot on Gyllander’s grid, a product must bring something new to the market, as well as boast killer branding and packaging. The former VC says she tries out about 20 products a month and shares official reviews of four or five.

“The majority of people today, when it comes to modern brands, they have their first interaction through an ad or an influencer telling them about the product,” Gyllander explained. “Discovery is in a weird place right now when it comes to the general consumer.”

It’s difficult to imagine a venture-scale business within Gyllander’s vision for Thingtesting. But one should never underestimate the value of an exclusive and hyper-focused network. Gyllander, in a short time, has created a meeting place for D2C aficionados and venture capitalists and, as she’s proven, her thoughts are worth paying for.

The portrait of an avatar as a young artist

In this episode of Flux I talk with LaTurbo Avedon, an online avatar who has been active as an artist and curator since 2008. Recently we’ve seen a wave of next-gen virtual stars rise up, from Lil Miquela in the west to pop stars like Kizuna AI in the east. As face and body tracking make real-time avatar representation accessible, what emergent behaviors will we see? How will our virtual relationships evolve? How will these behaviors translate into the physical world when augmented reality is widespread?
LaTurbo was early to exploring these questions of identity and experimenting with telepresence. She has shape-shifted across media types, spending time in everything from AOL and chat rooms, to MMOs, virtual worlds and social media platforms. In this conversation she shares her thoughts on how social networks have breached our trust, why a breakup is likely, and how users should take control of their data. We get into the rise of battle royale gaming, why multiplicity of self is important, and how we can better express agency and identity online.

An excerpt of our conversation is published below. Full podcast on iTunes and transcript on Medium.

***

ALG: Welcome to the latest episode of Flux. I am excited to introduce LaTurbo Avedon. LaTurbo is an avatar and artist originating in virtual space, per her website and online statement. Her works can be described as research into dimensions, deconstructions, and explosion of forms exploring topics of virtual authorship and the physicality of the Internet. LaTurbo has exhibited all over the world from Peru to Korea to the Whitney in New York. I’m thrilled to have her on the show. Metaphorically of course. It’s just me here in the studio. LaTurbo is remote. 

When we got the demo file earlier I was excited to hear the slight Irish lilt in your robotic voice. As a Brit I feel like we have a bond there.

LaTurbo: Thank you for the patience. It is like a jigsaw puzzle, our voices together.

ALG: Of course it’s all about being patient as we try out new things on the frontier. And you represent that frontier. This show is about people that are pushing the boundaries in their fields. A lot of them are building companies, some are scientists. Recently we’ve had a few more artists on and that’s something I believe is important in all of these fields. Because you’re taking the time to do the hard work and think about technology and its impact and how we can stretch it and use it in different ways and broaden our thinking. You play an important role.

LaTurbo: We will get things smoothed out eventually as my vocalization gets easier and more natural with better tools. Alice, I appreciate you trekking out here with me and trying this format out.

ALG: I love a good trek. Maybe you can give a brief intro on who is LaTurbo. I believe you started in Second Life. I’d love to hear about those origins. Phil Rosedale was one of the first people I interviewed on this podcast, the founder of Second Life. Shout out to Phil. I’d love to hear what’s been your journey since then. Oh and also happy 10th birthday.

“I’ve spent decades inside of virtual environments; in many ways I came of age alongside the Internet. I spent my early years, my adolescence, in role-playing games. From those early years I was enamored by cyberspace”

LaTurbo: I know that it is circuitous at times but this process has made me work hard to explore what it takes to be here like this. Well I started out early on in the shapes of America Online, intranets, and private message boards. Second Life opened this up incredibly, taking things away from the closed worlds of video games. We had to work even harder to be individuals in early virtual worlds using character editors, roleplaying games, and other platforms in shared network spaces. This often took the shape of default characters — letting Final Fantasy, Goldeneye, or other early game titles be the space where we performed alternative identities.

ALG: If you’re referring to Goldeneye on N64 I spent considerable time on it growing up. So I might have seen you running around there.

LaTurbo: It was a pleasure to listen to your conversation with Philip Rosedale as he continues to explore what comes next, afterwards, in new sandboxes. What was your first avatar?

ALG: I did play a lot of video games growing up. I was born in Hong Kong and was exposed mostly to the Nintendo and Sega side of things, so maybe one of those Mario Kart characters — Princess Peach or really I went for Yoshi if those count as avatars. I’d love to get into your experience in gaming. You said you started off exploring more closed world games and then you discovered Second Life. You’ve spent a lot of time in MMORPGs and obviously that’s one of the main ways that people have engaged with avatars. I’d love to hear how your experiences have been in different games and any commentary on the worlds you’re spending time in now.

LaTurbo: I think that even if they weren’t signature unique identities or your own avatar, those forms of early video games were a first key to understanding more about facets of yourself through them. For me gaming is like water being added to the creative sandbox. There is fusion inside of game worlds — narrative, music, performance, design, problem solving, communication, so many different factors of life and creativity that converge within a pliable file. Some of the most Final Fantasies of games are now realities. Users move place to place using many maps and system menus on their devices. The physical world is now so closely bonded to these spaces by users like me, who brought bits of the game out with us. Recently I spent several months wandering around inside of Red Dead Redemption 2. I enjoyed the narrative of the main storyline though I was far more interested in having quiet moments away from all of the violence. I named my horse Sontag and went out exploring, taking photographs and using slow motion game exploits to make videos. Several months as the weary cowboy named Arthur, and then I carried on my way. I take bits and pieces with me on the way.

LaTurbo’s Overwatch avatar

ALG: As you’ve gone across different games and platforms like Red Dead Redemption 2 are there specific people you’ve made friends with? How have your friendships formed in these different communities and do they travel between games? 

LaTurbo: I have had many gaming friends. Virtual friends overlap between all of these worlds. My Facebook friends are not very different from those I fight with in Overwatch or the ones I challenge scores with in Tetris Effect.

ALG: One thing you’ve said about gaming and I’ll read the quote straight out:

“I love the MMO or massively multiplayer online experience for a lot of reasons but primarily because I want to create works collaboratively with my network, because we are in this moment together. For a long time virtual worlds were partitioned from the public because you either had to be invested in gaming or a chat room/ BBS user to get into them.”

I want to explore that. Gaming has come a long way in the 10 years since you were created. It’s more widespread now. Things like Fortnite. I saw that Red Dead Redemption is introducing a Fortnite-like feature where they’re going to have a battle royale mode, tossing people into a battle zone and forcing them to search for weapons to survive. I think a lot of people are looking at the success of Fortnite and replicating elements of it. Can you comment on how gaming has become more widespread, or more in the public mind, and what you think of the rise of Fortnite?

LaTurbo: Our histories are fluid, intersecting and changing depending on the world we choose to inhabit. Sometimes we are discussing art on Instagram. Other times we are discussing game lore or customization of ourselves. This variety is so important to me. There is a lot exchanged between worlds like Fortnite and the general physical day to day. Expectations are real and high. The battle royale model has pushed people to a sort of edge at all times. A constant pressure of chance and risk, it crosses between games but also into general attention. Video apps like TikTok have a similar model — always needing to have the drop on the creators around you.

ALG: It’s interesting, that tension. These games are driven to create competition. They are businesses, so they’re supposed to build in loops and mechanics that keep people engaging. But as you describe of your experience in Red Dead Redemption 2, you’ve also found quiet moments of exploration, being alone and not necessarily fiercely competing.

LaTurbo: Red Dead could be a hundred games in one. Yet for some reason we come back to the royale again. It is a maximal experience in a lot of ways. One that uses failure and frustration to keep users trying again perpetually. This is a telling sign as you’ve said about the business of games. The loop. I worry that this is a risky model because it doesn’t encourage a level of introspection very often.

ALG: I love video games but have never been a fan of first person shooters. I don’t enjoy the violence. But I’ve always loved strategy and exploration games. To your point about exploring, I would spend hours wandering on Epona [the horse] in Zelda, running across the fields. But I didn’t feel that a lot of those games were designed for women or people who weren’t interested in the violence or the GTA type approach. I’m excited to see more of that happening now and gaming CEOs realizing there’s a huge untapped market of people who want to play in different modes and experience gaming in different ways. It feels like we are moving towards that future. I do want to get into how you have expanded beyond gaming. I’ll read some of your quotes from when you started out:

“I’ve been making work in digital environments since 2008 or 2009, though I’ve only been using social media for about a year now. Since I can’t go out and mingle with people, it’s been quite nice to use social platforms to share my work. This way I can be in real life, IRL, as much as people allow me to be.”

I want to get to the question of how you’ve expanded from gaming to social media, building your Twitter and Instagram presence and how you think about your engagement on those platforms.

LaTurbo: I celebrate the multiplicity of self. Walt Whitman spoke of his contradictions years ago, accepting himself in the sense that he contained multitudes. As I wandered the fields of fictitious Admiral Grant in Red Dead Redemption 2, it occurred to me that I was wandering inside of Leaves of Grass. It made sense that I too was wandering around out in the fields and trees. Virtual life in poetry, song, or simulation gives us a different sort of armor where our forms can forget about borders, rules and expectations that have yet to change outside.

It has been quite a decade. Events of the past 10 years could easily be the plot of a William Gibson novel. A cyber drama and all its actors. With and without consent, users have watched their personal data slip away from their control, quick to release it in the terms of service. Quick to be public, to have more followers and visibility. Is it real without the Instagram proof? I chose to socialize away from game worlds for a few different purposes. To imbue my virtual identity with the moment of social media. But also to create a symbol of a general virtual self. A question mark or a mirror, to encourage reflection before people fully drown themselves in the stream.

ALG: One of the reasons it’s fascinating to talk to you now is that you’ve come of age as the Internet has come of age. You’ve navigated and shape-shifted across these platforms. And so much has happened since 2008. You’ve been on everything from Tumblr to Pinterest to Vine to Snapchat to Instagram. I’m curious where you think we are in the life cycle of these social media platforms?

LaTurbo: It has been quite a journey, seeing these services pop up, new fields, new places. But it is clear that not many of these things will remain very long. A new Wild West of sorts. They are more like ingredients in a greater solution as we try to make virtual relationships that are comfortable for both mind and body.

ALG: Speaking of these services popping up I want to get to something you tweeted out, your commentary on Facebook:

“If it wasn’t bad already just imagine how toxic Facebook will be when we collectively decide to break up with them. Anticipate a paid web and an underweb. We just start spinning them out on our own, smaller and away from all these analytics moneymakers. The changeover from MySpace era networks to Facebook felt minimal because it hadn’t become such a market-oriented utility. But this impending social network breakup is going to be felt in all sorts of online sectors.”

That’s an interesting opinion. The delete Facebook movement is strong right now. But I wonder how far it will go and how many people will really follow through?

LaTurbo: Business complicates this as companies extend too far and make use of this data for personal gain or manipulation. In the same way that Google Glass failed because of a camera, these services destroy themselves as they breach the trust of those who use them. These companies know that these are toxic relationships, whether in a game economy or on a social network. They know that the leverage over your personal data is valuable. Losing this, our friends, and our histories is frightening. We need to find some way to siphon ourselves and our data back so we can learn to express agency over who we are online. Your data is more valuable than the services that you give it to. The idea that people feel it is fair to let their accounts be inherently bound to a single service is disturbing. Our virtual lives exceed us and will continue to do so onward into time. Long after us this data may still linger somewhere.

ALG: I’m going to throw in a Twitter poll you did a few months ago. “If you had the choice to join some sort of afterlife simulation that would keep you around forever at the expense of having your data used for miscellaneous third party purposes would you?” 35% said yes and 65% said no in this poll. I bet if you ask that every two years, over time the answers will continue to change as we get more comfortable with our digital identities and what that really means. You’re pushing us to ask these questions.

LaTurbo: We see in museums now torn parchments, scrolls, ancient wrappings of lives and histories. As we become more virtual these documents will inherently change too. Markup and data take their place, however we consent to let it be represented. If we leave this to the Facebooks and Twitters of this period, our histories are in many ways contingent on the survival of these platforms. If not, we face a dark ages of our own, a moment that we will lose forever.

ALG: I’m curious what you think of the different movements to export your personal data, own it, have it travel with you across platforms and build a new pact with the companies. Are you following any of the movements to take back personal data and rewrite the social technological contract?

LaTurbo: It would be sad to have less of a record of this period of innovation and self-discovery because we didn’t back things up or control our data appropriately. Where do you keep it? Who protects it? Who is the steward of your records? All of this needs to begin with the user and end with the user. An album, a solid-state tablet of your life, something you can take charge of without concern that it is marketing fodder or part of some large shared database. As online as we are as a society, I recommend people have an island. Not a cloud but a private place, plugged in when you request it. A drive of your own where you have a private order. Oddly enough, in an older-world sense, you can find solitude in solid states, when you can retreat to files that are not connected to the Internet.

ALG: And have it backed up and air-gapped from the internet for safety and possibly in a Faraday cage in case you get EMP’ed. One thing that leads on from that — Facebook has capitalized on using our real data, our personal data. I have the statement on authentic identity from their original S-1 here:

“We believe that using your real name, connecting to your real friends and sharing your genuine interests online creates more engaging and meaningful experiences. Representing yourself with your authentic identity online encourages you to behave with the same norms that foster trust and respect in your daily life offline. Authentic identity is core to the Facebook experience and we believe that it is central to the future of the Web. Our terms of service require you to use your real name. And we encourage you to be your true self online enabling us and platform developers to provide you with more personalized experiences.”

LaTurbo: The use of a real name, authenticity, and Facebook’s message of truth. It is peculiar that Facebook used this angle, because it was such a gloved gesture for them to access our accurate records. The verification is primarily to make businesses comfortable with their investment in marketing. I wish it came to celebrate personal expression, not to tune business instruments.

ALG: Over the last 5 to 10 years we’ve seen a movement towards Facebook and being our real selves. Now there’s kind of a backlash, both to the usage of Facebook but perhaps also to the idea that your real identity, the true self you have offline, is what you should be representing online. You are an anonymous artist and there’s precedent for that. There have been many writers with noms de plume over the centuries, and in the present day we’ve got Daft Punk, Banksy, Elena Ferrante, fascinating creators. I’m curious about your thoughts as we move away from real selves being represented online to expressing our other selves online. We’ve been living in an age of shameless self-promotion. Do you think that the rise of people representing themselves with digital avatars is a backlash to that? Society usually goes through a back and forth, a struggle for balance. Do you think people are getting disenchanted with the unrelenting narcissism of social media, the celebrity worship culture? Do you think this is a bigger movement that’s going to stick?

LaTurbo: I see this as an opportunity and I am wary of this chance being usurped by business. If I had the chance to see all of my friends in the avatar forms of their wishes and dreams I believe I’d be seeing them for the first time. A different sort of wholeness against the sky, where they had the chance to say and be exactly what they wished others to find. If you haven’t created an avatar before please do. Explore yourself in many facets before these virtual spaces get twisted into stratified arenas of business.

A full talk from LaTurbo Avedon is available here

I don’t seek to be anonymous but to represent myself in this strand of experiences, fully. That’s who I have become. As an artist I will continue to change with what surrounds me. Each step forward. Each new means of making and learning. I celebrate this and who I will become, even if I continue to find definition over a period of time that I right now cannot fully comprehend.

I am often in the company of crude avatars of the past. As I read journals and view sketches and works from artists past, I wonder if they understood their avatar identities and how they would be here now in 2019. I wonder what they would have done differently. What would they think of their graphic design and exhibitions? How their work is shown in other mediums? How their work is sold?

ALG: Taking that with your earlier point, you said if you had the chance you would love to see all your friends in their avatar forms, expressing all their wishes and dreams. It fascinates me, the idea that we persistently remain one to one with our offline/online identities. It doesn’t make sense. I feel like everyone has multiple selves and multiple things to express. Do you feel that most people should have a digital identity or abstraction? Do you think it’s healthy to have an extension of something that’s inside of you, especially since, as you say, some of these avatars are pretty crude? How do you feel about most people creating a digital avatar? People have been doing this for a while without realizing it, through things like a Tinder bio or Instagram stories. They’re already putting out ideas of themselves. But creating true anonymous digital avatars, is that something people should pursue?

LaTurbo: Avatars remain in places that we often don’t even intend them to. Symbols of self. For those who pass, or those we never had the chance to meet, there seems to be importance here. We need to take this seriously so that it isn’t misunderstood. The most beautiful experiences I’ve had online are when I feel I am interacting with a user how they wish to be seen. Whether this is in the present or for people later, finding this inward representation feels essential, especially for those exposed to oppressive societies. Whether it’s toxic masculinity, cultural restrictions, or other hindrances that prevent people from showing deeper parts of their identity.

I have four essential asks of users creating avatars. Though these apply well outside of just this topic. 1/ Be sweet. 2/ Encourage others to explore themselves and all of their differences. 3/ Learn about the history of virtual identities, now, then, and long before. This means going back. Read about identities before the internet, pen names, mythologies. 4/ Celebrate your ownership of self. You, not your services, subscriptions, or products, are the one to decide your way. Don’t become billboards. I’ve been asked by many companies over the years to promote their products, to drop the branded text on my clothing or to push a new service. These are exciting times but brands know this too. Be wary of exploitation. Protect yourself and your heart.

ALG: That’s really beautiful and important. We’re rushing into this future fast and I don’t think people are stopping to pause and think about some of the ideas you’ve spent a long time thinking about. It’s probably a good place to end. I have a million more things I want to ask, hopefully we can continue this chat over Discord, Twitter, Instagram, Second Life or wherever it is. I’m in VR a lot so I’d love to meet you in there. If there’s anything you want to end on, any final comments or projects you’re working on?

LaTurbo: Yes I agree with you very much. Technology moves quickly but we need to take the time to consider ourselves as we move inside this space. We have so much potential to be inside and out simultaneously. I am excited for this new year. I hope it brings positivity to everyone. I am showing a new piece called “Afterlife Beta” in London at the Arebyte Gallery. After this I will be working on my first monograph. I am excited to make something printed that might stick around in the physical world for a while.

ALG: That’s awesome. Love a good physical piece. And congratulations on “Afterlife Beta.” I appreciate your patience with my jumping in at all times in this conversation. I’ve been following your work and hope everyone else will too. You’re a fascinating, critical thinker and artist at this current point in history. Thanks LaTurbo.

LaTurbo: Thank you for your patience with my format. As time goes on I hope it is easier for us to be here together.

In a social media world, here’s what you need to know about UGC and privacy

In today’s brand landscape, consumers are rejecting traditional advertising in favor of transparent, personalized and most importantly, authentic communications. In fact, 86% of consumers say that authenticity is important when deciding which brands they support. Driven by this growing emphasis on brand sincerity, marketers are increasingly leveraging user-generated content (UGC) in their marketing and e-commerce strategies.

Correlated with the rise in the use of UGC is an increase in privacy-focused regulation, such as the European Union’s industry-defining General Data Protection Regulation (GDPR), along with others that will go into effect in the coming years, like the California Consumer Privacy Act (CCPA) and several other state-specific laws. Quite naturally, brands are asking themselves two questions:

  • Is it worth the effort to incorporate UGC into our marketing strategy?
  • And if so, how do we do it within the rules, and more importantly, in adherence with the expectations of consumers?

Consumers seek to be active participants in their favorite companies’ brand identity journey, rather than passive recipients of brand-created messages. Consumers trust images by other consumers on social media seven times more than advertising.

Additionally, 56% are more likely to buy a product after seeing it featured in a positive or relatable user-generated image. The research and results clearly show that the average consumer perceives content from a peer to be more trustworthy than brand-driven content.

With that in mind, we must help brands leverage UGC with approaches that comply with privacy regulations while also engaging customers in an authentic way.

Influencer vs user: Navigating privacy considerations in an online world