Facebook and PayPal pull pages of far-right British activist filmed intimidating public figures

Facebook has confirmed it has removed the pages and profiles of a far-right political activist in the UK after concerns were raised in parliament about aggressive intimidation of politicians and journalists trying to go about their business in and around Westminster.

PayPal has also closed an account that was being used to solicit donations for “political activism”.

The intimidation is being conducted by a small group of extreme Brexit supporters who have — ironically enough — lifted the ‘yellow vest’ dress code from French anti-government protesters, and are also making use of mainstream social media and crowdfunding platforms to fund and amplify attacks on public figures in an attempt to squash debate and drive an extreme ‘no deal’ Brexit. (Context: The clock is ticking down to March 29, the date when the UK is due to leave the European Union, with or without a withdrawal deal.)

In incidents widely shared on social media this week, individuals from the group were filmed live streaming their harassment of Remain-supporting Conservative MP Anna Soubry, who was mobbed and shouted at as she walked back to parliament after being interviewed live on TV in front of the Palace of Westminster, where the group had heckled her with repeated chants of “Nazi”.

Members of the same group were also filmed, smartphones in fist, chasing and hurling abuse at left-wing commentator Owen Jones as he walked down a London street.

In another video one of the individuals leading the verbal attacks, who has been identified in the press and online as a man called James Goddard, can be seen swearing viciously at Met Police officers and threatening to bring “war”.

The Speaker of the House of Commons said today that he had written to the head of the Met Police to urge action against the “aggressive, threatening and intimidating behaviour towards MPs and journalists” around Westminster.

The Guardian reports that at least 115 MPs have written to police requesting extra protection.

Contacted today about Goddard’s presence on its platform, Facebook later confirmed to us that it had pulled the plug. “We have removed James Goddard’s Facebook Pages and Groups for violating our policies on hate speech,” a spokesperson told us. “We will not tolerate hate speech on Facebook which creates an environment of intimidation and which may provoke real-world violence.”

Earlier today one of his pages was still live on Facebook, and in a post from December 14 Goddard can be seen soliciting donations via PayPal so he can continue “confronting” people.

We also asked PayPal about Goddard’s use of its tools, pointing to the company’s terms of use which prohibit the use of the platform for promoting “hate, violence, racial and other forms of intolerance that is discriminatory”.

PayPal declined to comment on “any specific customer’s account”, citing its privacy policy, but a spokesperson told us: “We do review accounts that have been flagged to us for possible breaches of our policies, and we will take action if appropriate.”

A few hours later PayPal also appeared to have pulled the plug on Goddard’s account.

A Patreon page he had seemingly been using to solicit donations for “political content, activism” is also now listed as ‘under review’ at the time of writing.

But Goddard remains on Twitter, where he is (currently) complaining about being de-platformed by Facebook and PayPal to his ~4k followers, and calling other people “fascists”.

How should mainstream tech platforms respond to people who use their tools for targeted harassment? Read companies’ terms and conditions and you’ll find most prohibit abusive and intimidating conduct. In practice, though, plenty flows until it is flagged and reviewed. (And even then takedowns frequently fail to follow.)

For all the claims from platforms that they’re getting better at enforcing their claimed community standards, there are countless examples of continued and very abject failure.

Facebook’s 2.2BN+ users especially make for an awful lot of content to wrangle. But none of these platforms is renowned for being proactive about weeding out violent types of speech they claim to forbid. And when intimidation is dressed up as political speech, and public figures are involved, they appear especially paralyzed.

Social media-savvy far-right groups grokked this loophole long ago (see: Gamergate for a rough start date) and continue to exploit default inaction to get on with the violent business of megaphoning hate in the meantime.

You could say platforms are being gamed, but the money they make off accelerated outrage makes them rather more complicit in the problem.

The irony is that it’s free speech that suffers in such a thuggish and febrile atmosphere. Yet platforms remain complicit in its undoing, doing nothing to stop hatemongers turning hugely powerful high-tech soapboxes into abuse funnels.

They do this by choosing to allow groups with fascist ideologies to operate freely, stepping in to act only once enough reports are filed or high-level political attention frowns down on particular individuals.

Facebook’s community standards claim it aims to prevent “real-world harm”. But with such a narrow prescription it’s failing spectacularly to prevent deliberate, malicious and co-ordinated harassment campaigns that are designed to sow social division and upend constructive conversation, replacing the hard-won social convention of robust political debate with mindless jeering and threats. This is not progress.

There’s nothing healthy for society or speech if mainstream platforms sit on their hands while abusive users bludgeon, bully and bend public debate into a peculiarly intolerant shape.

But we’re still waiting for the tech giants to have that revelation. And in the meantime they’re happy to let you watch a live streamed glimpse of mob rule.

Why Are Product Managers Moving Away From Focus Groups?

Focus groups are the way that product managers used to find out how customers felt about their product
Image Credit: RSNY

When a product manager is put in charge of a product, one of the first questions that they would like to be able to answer is just exactly what do potential customers think about the product? There are a number of different ways to go about answering this question and testing our product development definition, but one that we’ve all been using for a long time is the venerable focus group. However, there has always been a bit of a problem with this product management tool – what customers say and what they do can be two completely different things. What’s a product manager to do?

What’s Better Than A Focus Group?

Here in the 21st century, what every product manager who is responsible for a consumer product would like to be able to do is monitor what his or her customers are saying about that product on the countless different social media platforms that are out there. Tools that allow this kind of monitoring are exactly what is starting to displace the old standby – focus groups. Let’s all agree on just exactly what a focus group is. You collect a diverse group of people who represent the types of customers that you believe are most likely to purchase your product. In a carefully controlled environment, using a professional moderator, you ask them questions to find out what they think about your product, how they would go about using your product, and what would cause them to select your product over the competition. If we did this well, then we’d have something to put on our product manager resume.

However, the biggest problem with focus groups is that they are probably the most misused tool in a product manager’s toolbox. Now that tools are available to monitor social media, product managers have a new way of finding out what their customers really think about their products. The end result of collecting all of this social media data is that product managers can now perform better research. Better research means the chances of making a big mistake with your product should shrink.

There are many different tools that are starting to give product managers the same information that they used to get from focus groups. One example of this is the eye-tracking technology that some firms are starting to use in order to pinpoint which packaging details most attract a customer’s attention. Additionally, product teams are starting to be built using people from a wider variety of backgrounds. These backgrounds can include social work and jury consulting. The key to making both these new tools and staff successful is to make sure that the company collects enough information on its customers to determine what their buying habits are.

What Product Managers Will Be Using In The Future

The monitoring of social media in order to determine how customers are perceiving a brand has been given a name. This is now being called “social listening”. Companies want to collect the information and then make it available to all of the employees who need it. The goal of all of this is to provide product managers with new ways to study their customers and understand how they both live and shop. What product managers want to know is what products their customers are most likely to buy.

One of the biggest challenges that product managers are currently facing has to do with their millennial customers. These customers are unlike any that have come before them. Millennial customers are much more likely to jump from brand to brand and so it becomes hard for product managers to try to predict what they are going to end up buying. Additionally, millennial customers appear to be unmoved by so-called traditional advertising. This means that it can be very difficult to reach them with information about a given product.

One of the newest techniques that product managers are using to find out what their customers really want is to create online groups where their customers can mingle. Such groups give customers an opportunity to sign up to learn more about the company’s products and interact with each other. Since product managers can monitor the conversations that occur on the site between members, this provides them with more insights than any focus group ever could. These types of inputs can lead to ideas for new packaging and innovative brand extensions.

What All Of This Means For You

Traditionally, when product managers wanted to know more about what their customers were thinking, they would do what their product manager job description told them to do and go to the effort of creating a focus group. This required assembling a collection of customers whom the product manager felt best represented their customer base and then asking them questions about the product. The problem with this is that often the information that they got was either wrong or at least misleading. Product managers needed something better.

In our modern times, product managers can now collect social media data in order to find out what customers really think about their product. Product managers are starting to use new tools such as eye-tracking technology and building more diverse product teams in order to gain better insights into what their customers might be thinking. Using “social listening,” product managers are starting to hear what their customers are saying about their products. This is good news, because product managers have been struggling to deal with millennial customers who don’t act like other customers. Product managers are starting to set up online groups that they can monitor in order to hear what their customers think about their products.

Times are changing and so product managers have to change with them. It used to be easy to try to read the minds of our customers – we’d just create a focus group and ask them a lot of questions. This never worked out all that well, but it was the best that we had. Today’s new social media tools provide us with a different way of collecting the same information. However, now we can be confident that the information is reliable. Today’s product managers have to become good listeners if they want their product to become a success!

– Dr. Jim Anderson
Blue Elephant Consulting –
Your Source For Real World Product Management Skills™

Question For You: Can you think of a situation where a real focus group would still be valuable?


What We’ll Be Talking About Next Time

As product managers we spend a great deal of our time trying to figure out how to get more people to buy our products. This is all fine, but it turns out that some of our most profitable customers may be the people who have already bought our product. Since they have already bought into our product development definition and agreed to buy one of our products, they may be more willing to buy more products, add-ons, and upgrades. Additionally, if they can remember that they have bought our product then they may be willing to recommend it to others. As product managers what we need to do is to start to send a newsletter to our customers in order to make sure that they remember us.


Mark Zuckerberg is ‘proud’ of how Facebook handled its scandals this year

After the year Mark Zuckerberg’s had, you’d think he’d struggle to appear so chipper.

“I’m proud of the progress we’ve made,” he said in an end-of-year note posted on his Facebook page for everyone to see. Acknowledging that the social network played its part in the spread of hate speech, election interference and misinformation, Zuckerberg’s note seemed more upbeat about his response to the hurricane of hurt caused by the company’s laissez-faire attitude to world affairs. It seemed less concerned with showing contrition and empathy for the harm Facebook caused in the past year, including its inability to keep its users’ data safe and, above all else, its failure to prevent its site from being used to incite ethnic violence and genocide.

Zuckerberg’s tone-deaf remarks read like 1,000 words of patting himself on the back.

But where the Facebook co-founder pledged to “focus on addressing some of the most important issues facing our community,” he conveniently ignored some of the most damaging, ongoing problems that the company has shown little desire to solve, opting instead for quick fixes or simply pretending they don’t exist.

“More than 30,000 people working on safety…” isn’t enough to police the platform

A decade ago, Facebook had just 12 people moderating its entire site — some 120 million users. Now, the company relies largely on an army of underpaid contractors spread out across the world to moderate millions of potentially rule-breaking posts on the site each week.

Zuckerberg said the company has this year increased the number of people working on safety to “more than 30,000.” That’s on top of the 33,600 full-time employees that Facebook had as of the end of September. But policing Facebook’s 2.27 billion monthly active users is a massive task: those 30,000 safety staffers equate to roughly one moderator for every 75,700 users (2.27 billion divided by 30,000).

Facebook’s contractors have long complained about long hours and low pay, and that’s not even taking into account the thousands of gruesome posts — from beheadings to child abuse and exploitation — they have to review each day. Turnover is understandably high. No other social network in the world has as many users as Facebook, and it’s impossible to know what the “right number” of moderators is.

But the numbers don’t add up. Facebook’s army of 30,000 safety staffers isn’t enough to combat the onslaught of vitriol and violence, let alone an advanced adversary like the nation-state actors it’s constantly blaming.

Facebook lost its chief security officer this year — and hasn’t found a replacement

Zuckerberg made no mention of the photo data exposure and account breaches that the company had to contend with this year, even if he couldn’t avoid mentioning Cambridge Analytica, the voter research firm that misused 87 million Facebook users’ information, just the once.

Yet Zuckerberg made no commitment to doubling down on the company’s efforts to secure the platform, despite years of its “move fast and break things” mentality. Since the departure of former chief security officer Alex Stamos in August, the company hasn’t hired his replacement. All signs point to nobody filling the position at all. While many see a chief security officer as a figurehead-type position, the role still provides executive-level insight into the threats a company faces and the issues it has to handle — now more than ever after a string of embarrassing and damaging security incidents.

Zuckerberg said that the company invests “billions of dollars in security yearly.” That may be true. But without an executive overseeing that budget, it’s hardly reassuring that nobody with the years of experience needed to oversee a company’s security posture is in control of where those billions go.

There was no acknowledgement of Facebook’s role in Myanmar’s genocide

Fake news, misinformation and election meddling is one thing, but Zuckerberg refused to acknowledge the direct impact Facebook had on Myanmar’s ethnic violence — which the United Nations is calling genocide.

It can’t be much of a surprise to Zuckerberg. The UN said Facebook had a “determining role” in inciting genocide in the country. He faced questions directly from U.S. lawmakers when he testified before senators in April. Journalists are regularly arrested and murdered for reporting on the military-backed government’s activities. The Facebook boss apologized — which human rights groups on the ground called “grossly insufficient.”

Facebook said last week that it has purged hundreds of accounts, pages and groups associated with inciting violence in Myanmar, but it continues to refuse to set up an office in the country — despite groups on the ground saying one would be necessary to show it’s serious about the region.

“That doesn’t mean… people won’t find more examples of past mistakes before we improved our systems.”

Zuckerberg said in his note that the company “didn’t focus as much on these issues as we needed to, but we’re now much more proactive.”

“That doesn’t mean we’ll catch every bad actor or piece of bad content, or that people won’t find more examples of past mistakes before we improved our systems,” he said. Some have seen that as a hint that some of the worst revelations are yet to come. Perhaps it’s just Zuckerberg hedging his bets as a way to indemnify his remarks from criticism when the next inevitable bad news story hits the wires.

In his 1,000-word post, Zuckerberg said he was “proud” three times, he talked of the company’s “focus” four times and how much “progress” was being made five times. But there wasn’t a single “sorry” to be seen. Then again, he’s spent most of his Facebook career apologizing for the company’s fails. Any more at this point would probably come across as trite.

Zuckerberg ended on as cheery a note as he began, looking to the new year as an opportunity for “building community and bringing people together,” adding: “Here’s to a great new year to come.”

Well, it can’t be much worse than this year. Or can it?

Indonesia unblocks Tumblr following its ban on adult content

Indonesia, the world’s fourth-largest country by population, has unblocked Tumblr nine months after it blocked the social networking site over pornographic content.

Tumblr — which, disclaimer, is owned by Verizon Media Group (formerly Oath), just like TechCrunch — announced earlier this month that it would remove all “adult content” from its platform. That decision, which angered many in the adult entertainment industry who valued the platform as an increasingly rare outlet that supported erotica, was a response to Apple removing Tumblr’s app from the iOS Store after child pornography was found within the service.

The impact of this new policy has made its way to Indonesia, where KrAsia reports that the service was unblocked earlier this week. The service had been blocked in March after falling foul of the country’s anti-pornography laws.

“Tumblr sent an official statement regarding the commitment to clean the platform from pornographic content,” Ferdinandus Setu, Acting Head of the Ministry of Communication and Informatics Bureau, is reported to have said in a press statement.

Messaging apps WhatsApp and Line are among the other services that have been forced to comply with the government’s ban on ‘unsuitable’ content in order to keep their services open in the country. Telegram, meanwhile, removed suspected terrorist content last year after its service was partially blocked.

While perhaps not widely acknowledged in the West, Indonesia is a huge market with a population of over 260 million people. The world’s largest Muslim-majority country, it is the largest economy in Southeast Asia and its growth is tipped to help triple the region’s digital economy to $240 billion by 2025.

In other words, Indonesia is a huge market for internet companies.

The country’s anti-porn laws have been used to block as many as 800,000 websites as of 2017 (so potentially over a million by now), but they have also been used to take aim at gay dating apps, some of which have been removed from the Google Play Store. As Vice notes, “while homosexuality is not illegal in Indonesia, it’s no secret that the country has become a hostile place for the LGBTQ community.”

Facebook is not equipped to stop the spread of authoritarianism

After the driver of a speeding bus ran over and killed two college students in Dhaka in July, student protesters took to the streets. They forced the ordinarily disorganized local traffic to drive in strict lanes and stopped vehicles to inspect license and registration papers. They even halted the vehicle of the Chief of Bangladesh Police Bureau of Investigation and found that his license was expired. And they posted videos and information about the protests on Facebook.

The fatal road accident that led to these protests was hardly an isolated incident. Dhaka, Bangladesh’s capital, which was ranked the second least livable city in the world in the Economist Intelligence Unit’s 2018 global liveability index, scored 26.8 out of 100 in the rating’s infrastructure category. But the regional government chose to stifle the highway safety protests anyway. It went so far as to raid residential areas adjacent to universities to check social media activity, leading to the arrest of 20 students. Although there were many images of Bangladesh Chhatra League (BCL) men committing acts of violence against students, none of them were arrested. (The BCL is the student wing of the ruling Awami League, one of the major political parties of Bangladesh.)

Students were forced to log into their Facebook profiles and were arrested or beaten for their posts, photographs, and videos. In one instance, BCL men called three students into the dorm’s guestroom, quizzed them over Facebook posts, beat them, and then handed them over to police. They were reportedly tortured in custody.

A pregnant school teacher was arrested and jailed for just over two weeks for “spreading rumors” due to sharing a Facebook post about student protests. A photographer and social justice activist spent more than 100 days in jail for describing police violence during these protests; he told reporters he was beaten in custody. And a university professor was jailed for 37 days for his Facebook posts.

A Dhaka resident who spoke on the condition of anonymity out of fear for their safety said that the crackdown on social media posts essentially silenced student protesters, many of whom removed photos, videos, and status updates about the protests from their profiles entirely. While the person thought that students were continuing to be arrested, they said, “nobody is talking about it anymore — at least in my network — because everyone kind of ‘got the memo’ if you know what I mean.”

This isn’t the first time Bangladeshi citizens have been arrested for Facebook posts. As just one example, in April 2017, a rubber plantation worker in southern Bangladesh was arrested and detained for three months for liking and sharing a Facebook post that criticized the prime minister’s visit to India, according to Human Rights Watch.

Bangladesh is far from alone. Government harassment to silence dissent on social media has occurred across the region and in other regions as well — and it often comes hand-in-hand with governments filing takedown requests with Facebook and requesting data on users.

Facebook has removed posts critical of the prime minister in Cambodia and reportedly “agreed to coordinate in the monitoring and removal of content” in Vietnam. Facebook was criticized for not stopping the repression of Rohingya Muslims in Myanmar, where military personnel created fake accounts to spread propaganda which human rights groups say fueled violence and forced displacement. Facebook has since undertaken a human rights impact assessment in Myanmar, and it has also taken down coordinated inauthentic accounts in the country.

Facebook CEO Mark Zuckerberg arrives to testify before a joint hearing of the US Senate Commerce, Science and Transportation Committee and Senate Judiciary Committee on Capitol Hill, April 10, 2018 in Washington, DC. (Photo: Jim Watson/AFP/Getty Images)

Protesters scrubbing Facebook data for fear of repercussions isn’t uncommon. Over and over again, authoritarian-leaning regimes have utilized low-tech strategies to quell dissent. And aside from providing resources related to online privacy and security, Facebook still has little in place to protect its most vulnerable users from these pernicious efforts. As various countries pass laws calling for a local presence and increased regulation, it is possible that the social media conglomerate doesn’t always even want to.

“In many situations, the platforms are under pressure,” said Raman Jit Singh Chima, policy director at Access Now. “Tech companies are being directly sent takedown orders, user data requests. The danger of that is that companies will potentially be overcomplying or responding far too quickly to government demands when they are able to push back on those requests,” he said.

Elections are often a critical moment for oppressive behavior from governments; Uganda, Chad, and Vietnam have specifically targeted citizens, and candidates, during election time. Facebook announced just last Thursday that it had taken down nine Facebook pages and six Facebook accounts for engaging in coordinated inauthentic behavior in Bangladesh. These pages, which Facebook believes were linked to people associated with the Bangladesh government, were “designed to look like independent news outlets and posted pro-government and anti-opposition content.” The fake sites included BBC Bengali, BDSNews24 and Bangla Tribune lookalikes, as well as news pages with photoshopped blue checkmarks, according to the Atlantic Council’s Digital Forensic Research Lab.

Still, the imminent election in Bangladesh doesn’t bode well for anyone who might wish to express dissent. In October, a digital security bill that regulates some types of controversial speech was passed in the country, signaling to companies that as the regulatory environment tightens, they too could become targets.

More restrictive regulation is part of a greater trend around the world, said Naman M. Aggarwal, Asia policy associate at Access Now. Some countries, like Brazil and India, have passed “fake news” laws. (A similar law was proposed in Malaysia, but it was blocked in the Senate.) These types of laws are frequently followed by content takedowns. (In Bangladesh, the government warned broadcasters not to air footage that could create panic or disorder, essentially halting news programming on the protests.)

Other governments in the Middle East and North Africa — such as Egypt, Algeria, United Arab Emirates, Saudi Arabia, and Bahrain — clamp down on free expression on social media under the threat of fines or prison time. And countries like Vietnam have passed laws requiring social media companies to localize their storage and have a presence in the country — typically an indication of greater content regulation and pressure on the companies from local governments. In India, WhatsApp and other financial tech services were told to open offices in the country.

And crackdowns on posts about protests on social media come hand-in-hand with government requests for data. Facebook’s biannual transparency report details the percentage of government requests the company complies with in each country, but most people don’t know until long after the fact. Between January and June, the company received 134 emergency requests and 18 legal processes from Bangladeshi authorities for 205 users or accounts. Facebook turned over at least some data in 61 percent of emergency requests and 28 percent of legal processes.

Facebook said in a statement that it “believes people deserve to have a voice, and that everyone has the right to express themselves in a safe environment,” and that it handles requests for user data “extremely carefully.”

The company pointed to its Facebook for Journalists resources and said it is “saddened by governments using broad and vague regulation or other practices to silence, criminalize or imprison journalists, activists, and others who speak out against them,” but the company said it also helps journalists, activists, and other people around the world to “tell their stories in more innovative ways, reach global audiences, and connect directly with people.”

But there are policies that Facebook could enact that would help people in these vulnerable positions, like allowing users to post anonymously.

“Facebook’s real names policy doesn’t exactly protect anonymity, and has created issues for people in countries like Vietnam,” said Aggarwal. “If platforms provide leeway, or enough space for anonymous posting, and anonymous interactions, that is really helpful to people on ground.”

BERLIN, GERMANY – SEPTEMBER 12: A visitor uses a mobile phone in front of the Facebook logo at the #CDUdigital conference on September 12, 2015 in Berlin, Germany. (Photo by Adam Berry/Getty Images)

A German court found the policy illegal under its decade-old privacy law in February. Facebook said it plans to appeal the decision.

“I’m not sure if Facebook even has an effective strategy or understanding of strategy in the long term,” said Sean O’Brien, lead researcher at Yale Privacy Lab. “In some cases, Facebook is taking a very proactive role… but in other cases, it won’t.” In any case, these decisions require a nuanced understanding of the population, culture, and political spectrum in various regions — something it’s not clear Facebook has.

Facebook isn’t responsible for government decisions to clamp down on free expression. But the question remains: How can companies stop assisting authoritarian governments, inadvertently or otherwise?

“If Facebook knows about this kind of repression, they should probably have… some sort of mechanism to at the very least heavily try to convince people not to post things publicly that they think they could get in trouble for,” said O’Brien. “It would have a chilling effect on speech, of course, which is a whole other issue, but at least it would allow people to make that decision for themselves.”

This could be an opt-in feature, but O’Brien acknowledges that it could create legal liabilities for Facebook, leading the social media giant to create lists of “dangerous speech” or profiles on “dissidents,” and could theoretically shut them down or report them to the police. Still, Facebook could consider rolling out a “speech alert” feature to an entire city or country if that area becomes politically volatile and dangerous for speech, he said.

O’Brien says that social media companies could consider responding to situations where a person is being detained illegally and potentially coerced into giving their passwords in a way that could protect them, perhaps by triggering a temporary account reset or freeze to prevent anyone from accessing the account without proper legal process. Actions that might trigger the reset or freeze could include news of an individual’s arrest (if Facebook is alerted to it), contact from the authorities, or contact from friends and loved ones, as evaluated by humans. There could even be a “panic button” type trigger, like Guardian Project’s PanicKit, but for Facebook — allowing users to wipe or freeze their own accounts or posts tagged preemptively with a codeword only the owner knows.
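As a thought experiment, here is a minimal Python sketch of that codeword idea, assuming the platform stores only a hash of a pre-registered codeword. The Account class, the hide_post call and the whole flow are invented for illustration; none of this is a real Facebook mechanism.

```python
import hashlib
from dataclasses import dataclass, field

# Hypothetical sketch of the PanicKit-style trigger described above:
# posts are pre-tagged, and a secret codeword freezes the account and
# hides the tagged posts. Everything here is illustrative.

def digest(codeword: str) -> str:
    # Store only a hash of the codeword, never the codeword itself.
    return hashlib.sha256(codeword.encode()).hexdigest()

def hide_post(post_id: str) -> None:
    # Stand-in for a platform call that makes a post invisible.
    print(f"post {post_id} hidden pending proper legal process")

@dataclass
class Account:
    user_id: str
    codeword_digest: str
    frozen: bool = False
    tagged_posts: list = field(default_factory=list)

    def tag_post(self, post_id: str) -> None:
        """Preemptively mark a post for removal if the codeword is used."""
        self.tagged_posts.append(post_id)

    def panic(self, codeword: str) -> bool:
        """Freeze the account and hide tagged posts on a codeword match."""
        if digest(codeword) != self.codeword_digest:
            return False
        self.frozen = True
        for post_id in self.tagged_posts:
            hide_post(post_id)
        return True

# Usage: the owner tags risky posts ahead of time, then triggers the
# freeze under duress; a wrong codeword simply fails.
account = Account("u123", digest("winter-rooftop"))
account.tag_post("p456")
assert not account.panic("wrong-word")
assert account.panic("winter-rooftop")
```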

“One of the issues with computer interfaces is that when people log into a site, they get a false sense of privacy even when the things they’re posting in that site are widely available to the public,” said O’Brien. Case in point: this year, women anonymously shared their experiences of abusive coworkers in a shared Google Doc (the so-called “Shitty Media Men” list), likely without realizing that a lawsuit could unmask them. That’s exactly what is happening.

Instead, activists and journalists often need to tap into resources and gain assistance from groups like Access Now, which runs a digital security helpline, and the Committee to Protect Journalists. These organizations can provide personal advice tailored to a person’s specific country and situation. At-risk users can access Facebook over the Tor anonymity network, and use VPNs, end-to-end encrypted messaging tools, and non-phone-based two-factor authentication methods. But many may not realize what the threat is until it’s too late.

The violent crackdown on free speech in Bangladesh accompanied government-imposed Internet restrictions, including the throttling of Internet access around the country. Users at home with a broadband connection did not feel the effects of this, but “it was the students on the streets who couldn’t go live or publish any photos of what was going on,” the Dhaka resident said.

Elections will take place in Bangladesh on December 30.

In the months leading up to the election, Access Now says it has noticed an increase in Bangladeshi residents expressing concern that their data has been compromised and seeking assistance from its digital security helpline.

Other rights groups have also found an uptick in malicious activity.

Meenakshi Ganguly, South Asia director at Human Rights Watch, said in an email that the organization is “extremely concerned about the ongoing crackdown on the political opposition and on freedom of expression, which has created a climate of fear ahead of national elections.”

Ganguly cited politically motivated cases against thousands of opposition supporters, many of whom have been arrested, as well as candidates who have been attacked.

Human Rights Watch issued a statement about the situation, warning that the Rapid Action Battalion, a “paramilitary force implicated in serious human rights violations including extrajudicial killings and enforced disappearances,” has been “tasked with monitoring social media for ‘anti-state propaganda, rumors, fake news, and provocations.’” This is in addition to a nine-member monitoring cell and around 100 police teams dedicated to quashing so-called “rumors” on social media, amid the looming threat of news website shutdowns.

“The security forces continue to arrest people for any criticism of the government, including on social media,” Ganguly said. “We hope that the international community will urge the Awami League government to create conditions that will uphold the rights of all Bangladeshis to participate in a free and fair vote.”

The year social networks were no longer social

The term “social network” has become a meaningless association of words. Pair those two words and it becomes a tech category, the equivalent of a single term to define a group of products.

But are social networks even social anymore? If you have a feeling of tech fatigue when you open the Facebook app, you’re not alone. Watching distant cousins fight about politics in a comment thread is no longer fun.

Chances are you have dozens, hundreds or maybe thousands of friends and followers across multiple platforms. But those crowded places have never felt so empty.

It doesn’t mean that you should move to the woods and talk with animals. And Facebook, Twitter or LinkedIn won’t collapse overnight. They have intrinsic value with other features — social graphs, digital CVs, organizing events…

But the concept of wide networks of social ties with an element of broadcasting is dead.

From interest-based communities to your lousy neighbor

If you’ve been active on the web for long enough, you may have fond memories of internet forums. Maybe you were a fan of video games, Harry Potter or painting.

Fragmentation was key. You could be active on multiple forums and you didn’t have to mention your other passions. Over time, you’d see the same names come up again and again on your favorite forum. You’d create your own running jokes, discover things together, laugh, cry and feel something.

When I was a teenager, I was active on multiple forums. I remember posting thousands of messages a year and getting to know new people. It felt like hanging out with a welcoming group of friends because you shared the same passions.

It wasn’t just fake internet relationships. I met up “IRL” with fellow internet friends quite a few times. One day, I remember browsing the list of threads and learning about someone’s passing. Their significant other had posted a short message because the forum meant a lot to this person.

Most of the time, I didn’t know the identities of the people talking with me. We were all using nicknames and put tidbits of information in our bios — “Stuttgart, Germany” or “train ticket inspector.”

And then, Facebook happened. At first, it was also all about interest-based communities — attending the same college is a shared interest, after all. Then, Facebook opened it up to everyone to scale beyond universities.

When you look at your list of friends, they are your Facebook friends not because you share a hobby, but because you’ve known them for a while.

Facebook constantly pushes you to add more friends with the infamous “People you may know” feature. Knowing someone is one thing, but having things to talk about is another.

So here we are, with your lousy neighbor sharing a sexist joke in your Facebook feed.


Facebook’s social graph is broken by design. Putting names and faces on people made friend requests emotionally charged. You can’t say no to your high school best friend, even if you haven’t seen her in five years.

It used to be okay to leave friends behind. It used to be okay to forget about people. But the fact that it’s possible to stay in touch via social networks has made those things socially unacceptable.

Too big to succeed

One of the key pillars of social networks is the broadcasting feature. You can write a message, share a photo, make a story and broadcast them to your friends and followers.

But broadcasting isn’t scalable.

Most social networks are now publicly traded companies — they’re always chasing growth. Growth means more revenue and revenue means that users need to see more ads.

The best way to shove more ads down your throat is to make you spend more time on a service. If you watch multiple YouTube videos, you’re going to see more pre-roll ads. And there are two ways to make you spend more time on a social network — making you come back more often and making you stay longer each time you visit.

And 2018 has been the year of cheap tricks and dark pattern design. In order to make you come back more often, companies now send you FOMO-driven notifications with incomplete, disproportionate information.

This isn’t just about opening an app. Social networks now want to direct you to other parts of the service. Why don’t you click on this bright orange banner to open IGTV? Look at this shiny button! Look! Look!

And then, there’s all the gamification, algorithm-driven recommendations and other Skinner box mechanisms. That tiny spike of adrenaline you get when you refresh your feed, even if it only happens once per week, is what’s going to make you come back again and again.

Don’t forget that Netflix wanted to give kids digital badges if they completed a season. The company has since realized that it was going too far. Still, U.S. adults now spend nearly six hours per day consuming digital media — and phones represent more than half of that usage.

Given that social networks need to give you something new every time, they want you to follow as many people as possible and subscribe to every YouTube channel you can. This way, every time you come back, there’s something new.

Algorithms recommend some content based on engagement, and guess what? The most outrageous, polarizing content always ends up at the top of the pile.
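To make that mechanism concrete, here is a toy sketch of engagement-based ranking; the posts, weights and scoring are all invented, and production recommendation systems are vastly more complex:

```python
from dataclasses import dataclass

# Illustrative only: a feed ordered purely by predicted engagement, so
# whatever provokes the most reactions, outrage included, rises to the top.

@dataclass
class Post:
    text: str
    clicks: int
    comments: int
    shares: int

    def engagement_score(self) -> float:
        # A naive engagement proxy with made-up weights.
        return self.clicks + 2 * self.comments + 3 * self.shares

feed = [
    Post("calm nature photo", clicks=120, comments=4, shares=2),
    Post("polarizing hot take", clicks=300, comments=180, shares=90),
]

# The polarizing post wins the top slot on engagement alone.
for post in sorted(feed, key=Post.engagement_score, reverse=True):
    print(post.text, post.engagement_score())
```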

I’m not going to talk about fake news or the fact that YouTubers now all write titles in ALL CAPS to grab your attention. That’s a topic for another article. But YouTube shouldn’t be surprised that Logan Paul filmed a suicide victim in Japan to drive engagement and trick the algorithm.

In other words, as social networks become bigger, content becomes garbage.

Private communities

Centralization is always followed by decentralization. Now that we’ve reached a social network dead end, it’s time to build our own digital house.

Group messaging has been key when it comes to staying in touch with long-distance family members. But you can create your own interest-based groups and talk about things you’re passionate about with people who care about those things.

Social networks that haven’t become too big still have an opportunity to pivot. It’s time to make them more about close relationships and add useful features to talk with your best friends and close ones.

And if you have interesting things to say, do it on your own terms. Create a blog instead of signing up to Medium. This way, Medium won’t force your readers to sign up when they want to read your words.

If you spend your vacation crafting the perfect Instagram story, you should be more cynical about it. Either you want to make a career out of it and become an Instagram star, or you should consider sending photos and videos to your communities directly. Otherwise, you’re just participating in a rotten system.

If you want to comment on politics and life in general, you should consider talking about those topics with people surrounding you, not your friends on Facebook.

Put your phone back in your pocket and start a conversation. You might end up talking for hours without even thinking about the red dots on all your app icons.

Twitter’s newest feature is reigniting the ‘iPhone vs Android’ war

Twitter’s newest feature is reigniting the flame war between iOS and Android owners.

The U.S. social media company’s latest addition is a subtle piece of information that shows the client each tweet was sent from. In doing so, the company now displays whether a user tweets from the web or mobile and, if they are on a phone, whether they used Twitter’s iOS or Android apps, or a third-party service.

The feature — which was quietly enabled on Twitter’s mobile clients earlier this month; it has long been part of the TweetDeck app — has received a mixed response from users since CEO Jack Dorsey spotlighted it.

Some are happy to have additional details to dig into for context, for example, whether a person is on mobile or using third-party apps, but others believe it is an unnecessary addition that is stoking the rivalry between iOS and Android fans.

Interestingly, the app detail isn’t actually new. Way back in 2012 — some six years ago — Twitter stripped out the information as part of a series of changes to unify users across devices, focus on the service’s reading experience and push people to its official apps, where it could maximize advertising reach.

That was a long time ago — so long that TechCrunch editor-in-chief Matthew Panzarino was still a reporter when he wrote about it; he and I were at another publication altogether — and much has changed at Twitter, which has grown massively in popularity to reach 330 million users.

Back in 2012, Twitter was trying to rein in the mass of third-party apps that were popular with users in order to centralize its advertising and get itself, and its finances, together before going public. Twitter’s IPO happened in 2013 and it did migrate most users to its own apps, but it did a terrible job handling developers and thus, today, there are precious few third-party apps. That’s still a sore point with many users, since the independent apps were traditionally superior, with better design and more functions. Most are dead now and Twitter’s official apps reign supreme.

Many Twitter users may not be aware of the back story, so it is pretty fascinating to see some express uncertainty at displaying details of their phone. Indeed, a number of Android users lamented that the new detail is ‘exposing’ their devices.


I could go on — you can see more here — but it seems like, for many, iPhone is still the ultimate status symbol over Android despite the progress made by the likes of Samsung, Huawei and newer Android players Xiaomi and Oppo.

While it may increase arguments between mobile’s two tribes, the feature has already called out brands and ambassadors using the ‘wrong’ device. Notable examples include a Korean boyband sponsored by LG using iPhones, and the Apple Music team sending a tweet via an Android device. Suddenly, spotting these mismatches is a whole lot easier.

Industries must adopt ethics along with technology

A recent New York Times investigation into how smartphone-resident apps collect location data exposes why it’s important for industry to admit that the ethics of individuals who code and commercialize technology is as important as the technology’s code itself.

For the benefit of technology users, companies building technologies must make efforts to raise awareness of their potential human risks – and be honest about how people’s data is used by their innovations. People developing innovations must demand that the C-suites – and boardrooms – of global technology companies commit to ethical technology. Specifically, the business world needs to instill workforce ethics champions throughout company ranks, develop corporate transparency frameworks and hire diverse teams to interact with, create and improve upon these technologies.


Responsible handling of data is no longer a question

Our data is a valuable asset and the commercial insight it brings to marketers is priceless. Data has become a commodity akin to oil or gold, but user privacy should be the priority – and endgame – for companies across industries benefiting from data. As companies grow and shift, there needs to be an emphasis placed on user consent, clearly establishing what and how data is being used, tracking collected data, placing privacy at the forefront and informing users where AI is making sensitive decisions.

On the flip side, people are beginning to realize that seemingly harmless data they enter into personal profiles, apps and platforms can be taken out of context, commercialized and potentially sold without user consent. The bottom line: consumers are now holding big data and big tech accountable for data privacy – and the public scrutiny of companies operating inside and outside of tech will only grow from here.

Whether or not regulators in the United States, United Kingdom, European Union and elsewhere act, the onus is on Big Tech and private industry to step up by addressing public scrutiny head-on. In practice, this involves Board and C-Suite level acknowledgement of the issues and working-level efforts to address them comprehensively. Companies should clearly communicate steps being taken to improve data security, privacy, ethics and general practices.


People working with data need to be more diverse and ethical

Efforts to harvest personal data submitted to technology platforms reinvigorate the need for ethics training for people in all positions at companies that handle sensitive data. The use of social media and third-party platforms raises the importance of building the backend technologies that distribute and analyze human data, such as AI, to be ethical and transparent. We also need the teams actually creating these technologies to be more diverse, as diverse as the community that will eventually use them. Digital equality should be a human right that encompasses fairness in algorithms, access to digital tools and the opportunity for anyone to develop digital skills.

Many companies boast reactive and retrospective improvements to boost ethics and transparency in products already on the market. The reality is that it’s much harder to retrofit ethics into technology after the fact. Companies need to have the courage to make the difficult decision, at both the working and corporate levels, not to launch biased or unfair systems in some cases.

In practice, organizations must establish guidelines that people creating technologies can work within throughout a product’s development cycle. It’s established and common practice for developers and researchers to test usability, potential flaws and security prior to a product hitting the market. That’s why technology developers should also be testing for fairness, potential biases and ethical implementation before a product hits the market or deploys into the enterprise.
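As a purely illustrative sketch of what such a pre-launch test could look like, the snippet below compares a hypothetical model’s approval rates across two demographic groups, a simple demographic-parity style check; the threshold, data and gate are all invented for illustration:

```python
# Minimal sketch of a pre-launch fairness gate, illustrative only:
# compare a model's positive-prediction rate across demographic groups
# before shipping. Real fairness auditing uses richer metrics and data.

def positive_rate(predictions: list[int]) -> float:
    return sum(predictions) / len(predictions)

# Hypothetical model outputs (1 = approved), split by a protected attribute.
preds_group_a = [1, 1, 0, 1, 0, 1, 1, 0]
preds_group_b = [1, 0, 0, 0, 1, 0, 0, 0]

disparity = abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))
MAX_DISPARITY = 0.2  # a team-chosen threshold, not an industry standard

# With this sample data the gate deliberately fails (disparity = 0.375),
# modeling the "difficult decision not to launch" described above.
if disparity > MAX_DISPARITY:
    raise SystemExit(f"fairness gate failed: disparity={disparity:.2f}")
print(f"fairness gate passed: disparity={disparity:.2f}")
```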


The future of technology will be all about transparency

Recent events confirm that the business world’s approach to building and deploying data-consuming technologies, like AI, needs to focus squarely on ethics and accountability. In the process, organizations building technologies and supporting applications need to fundamentally incorporate both principles into their engineering. A single company that’s not careful, and breaks the trust of its users, can cause a domino effect in which consumers lose trust in the greater technology and any company leveraging it.

Enterprises need to develop internal principles and processes that hold people from the Board to the newest hire accountable. These frameworks should govern corporate practices and transparently showcase companies’ commitment to ethical AI and data practices. That’s why my company introduced The Ethics of Code: to address critical ethics issues before AI products launch, and our customers’ questions around accountability.

Moving into 2019 with purpose

Ultimately, there’s now a full-blown workforce, public and political movement toward ethical data practices that was already in motion within some corners of the tech community. Ideally, the result will be change in the form of more ethical technology created, improved and managed transparently by highly accountable people – from company developers to CEOs to Boards of Directors. Something the world has needed since way before ethical questions sparked media headlines, entered living rooms and showed up on government agendas.

Amnesty International used machine-learning to quantify the scale of abuse against women on Twitter

A new study by Amnesty International and Element AI puts numbers to a problem many women already know about: that Twitter is a cesspool of harassment and abuse. Conducted with the help of 6,500 volunteers, the study, billed by Amnesty International as “the largest ever” into online abuse against women, used machine-learning software from Element AI to analyze tweets sent to a sample of 778 women politicians and journalists during 2017. It found that 7.1%, or 1.1 million, of those tweets were either “problematic” or “abusive,” which Amnesty International said amounts to one abusive tweet sent every 30 seconds.

On an interactive website breaking down the study’s methodology and results, Amnesty International said many women either censor what they post, limit their interactions on Twitter, or just quit the platform altogether. “At a watershed moment when women around the world are using their collective power to amplify their voices through social media platforms, Twitter’s failure to consistently and transparently enforce its own community standards to tackle violence and abuse means that women are being pushed backwards towards a culture of silence,” stated the human rights advocacy organization.

Amnesty International, which has been researching abuse against women on Twitter for the past two years, signed up 6,500 volunteers for what it refers to as the “Troll Patrol” after releasing another study in March 2018 that described Twitter as a “toxic” place for women. The Troll Patrol’s volunteers, who come from 150 countries and range in age from 18 to 70 years old, received training about what constitutes a problematic or abusive tweet. Then they were shown anonymized tweets mentioning one of the 778 women and asked whether or not the tweets were problematic or abusive. Each tweet was shown to several volunteers. In addition, Amnesty International said “three experts on violence and abuse against women” also categorized a sample of 1,000 tweets to “ensure we were able to assess the quality of the tweets labelled by our digital volunteers.”

The study defined “problematic” as tweets “that contain hurtful or hostile content, especially if repeated to an individual on multiple occasions, but do not necessarily meet the threshold of abuse,” while “abusive” meant tweets “that violate Twitter’s own rules and include content that promotes violence against or threats of people based on their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.”

In total, the volunteers analyzed 288,000 tweets sent between January and December 2017 to the 778 women studied, who included politicians and journalists across the political spectrum from the United Kingdom and United States. Politicians included members of the U.K. Parliament and the U.S. Congress, while the journalists represented a diverse group of publications, including The Daily Mail, The New York Times, the Guardian, The Sun, gal-dem, Pink News, and Breitbart.

Then a subset of the labelled tweets was processed using Element AI’s machine-learning software to extrapolate the analysis to the total of 14.5 million tweets that mentioned the 778 women during 2017. (Since tweets weren’t collected for the study until March 2018, Amnesty International notes that the scale of abuse was likely even higher, because some abusive tweets may have been deleted or made by accounts that were suspended or disabled.) Element AI’s extrapolation produced the finding that 7.1% of tweets sent to the women were problematic or abusive, amounting to 1.1 million tweets in 2017.
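The study’s two-stage method, hand-labelled sample first and model-based extrapolation second, is a standard supervised-learning pattern. Here is a minimal sketch of that pattern in Python, assuming a TF-IDF bag-of-words classifier; Element AI’s actual model and features are not public, so the pipeline, data and helper names below are illustrative only:

```python
# Minimal sketch of label-then-extrapolate, using scikit-learn.
# All tweets here are invented; only the 7.1%-of-14.5M arithmetic
# comes from the study itself.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stage 1: volunteer-labelled sample (1 = problematic/abusive, 0 = neither).
labelled_tweets = [
    "you are a disgrace and deserve to be hurt",   # abusive
    "great interview on the news this morning",    # fine
    "go back to where you came from",              # abusive
    "thank you for standing up for us in court",   # fine
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(labelled_tweets, labels)

# Stage 2: apply the trained model to the full unlabelled corpus and
# extrapolate prevalence to all 14.5M tweets mentioning the 778 women.
unlabelled_corpus = [
    "another tweet mentioning one of the women studied",
    "yet another mention pulled from the 2017 firehose",
]
abuse_rate = model.predict(unlabelled_corpus).mean()
print(f"estimated abusive/problematic tweets: {abuse_rate * 14_500_000:,.0f}")
# The study's reported rate of 7.1% implies roughly 1.03M, i.e. ~1.1 million.
```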

Black, Asian, Latinx, and mixed race women were 34% more likely to be mentioned in problematic or abusive tweets than white women. Black women in particular were especially vulnerable: they were 84% more likely than white women to be mentioned in problematic or abusive tweets. One in 10 tweets mentioning black women in the study sample was problematic or abusive, compared to one in 15 for white women.

“We found that, although abuse is targeted at women across the political spectrum, women of color were much more likely to be impacted, and black women are disproportionately targeted. Twitter’s failure to crack down on this problem means it is contributing to the silencing of already marginalized voices,” said Milena Marin, Amnesty International’s senior advisor for tactical research, in the statement.

Breaking down the results by profession, the study found that 7% of tweets that mentioned the 454 journalists in the study were either problematic or abusive. The 324 politicians surveyed were targeted at a similar rate, with 7.12% of tweets that mentioned them being problematic or abusive.

Of course, findings from a sample of 778 journalists and politicians in the U.K. and U.S. are difficult to extrapolate to other professions, countries, or the general population. The study’s findings are important, however, because many politicians and journalists need to use social media in order to do their jobs effectively. Women, and especially women of color, are underrepresented in both professions, and many stay on Twitter simply to make a statement about visibility, even though it means dealing with constant harassment and abuse. Furthermore, Twitter’s API changes mean many third-party anti-bullying tools no longer work, as technology journalist Sarah Jeong noted on her own Twitter profile, and the platform has yet to come up with tools that replicate their functionality.

Amnesty International’s other research about abusive behavior towards women on Twitter includes a 2017 online poll of women in eight countries and an analysis of abuse faced by female members of Parliament before the UK’s 2017 snap election. The organization said the Troll Patrol isn’t about “policing Twitter or forcing it to remove content.” Instead, the organization wants the platform to be more transparent, especially about how the machine-learning algorithms it uses to detect abuse work.

Because the largest social media platforms now rely on machine learning to scale their anti-abuse monitoring, Element AI also used the study’s data to develop a machine-learning model that automatically detects abusive tweets. For the next three weeks, the model will be available to test on Amnesty International’s website in order to “demonstrate the potential and current limitations of AI technology.” These limitations mean social media platforms need to fine-tune their algorithms very carefully in order to detect abusive content without also flagging legitimate speech.

“These trade-offs are value-based judgements with serious implications for freedom of expression and other human rights online,” the organization said, adding that “as it stands, automation may have a useful role to play in assessing trends or flagging content for human review, but it should, at best, be used to assist trained moderators, and certainly should not replace them.”

TechCrunch has contacted Twitter for comment.

Facebook’s got 99 problems but Trump’s latest “bias” tweet ain’t one

By any measure Facebook hasn’t had the best of years in 2018.

But while toxic problems keep piling up and, well, raining acidly down on the social networking giant — from election interference, to fake accounts, faulty metrics, security flaws, ethics failures, privacy outrages and much more besides — the silver lining of having a core business now widely perceived as hostile to democratic processes and civilized sentiment, and the tool of choice for shitposters agitating for hate and societal division, well, everywhere in the world, is that Facebook has frankly far more important things to worry about than the latest anti-tech-industry salvo from President Trump.

In an early morning tweet today, Trump (again) attacked what he dubbed anti-conservative “bias” in the digital social sphere — hitting out at not just Facebook but tech’s holy trinity of social giants, with a claim that “Facebook, Twitter and Google are so biased towards the Dems it is ridiculous!”

Time was when Facebook was so sensitive to accusations of internal anti-conservative bias that it fired a bunch of journalists it had contracted and replaced them with algorithms — which almost immediately pumped up a bunch of fake news. RIP irony.

Not today, though.

When asked if it had a response to Trump’s accusation of bias, a Facebook spokesperson told us: “We don’t have anything to add here.”

The brevity and alacrity of the response suggested the spokesperson had a really cheerful expression on their face when they typed it.

The relief of Facebook not having to give a shit this time was kinda palpable, even in pixel form.

It was also a far cry from the screeds the company routinely dispenses these days to try to muffle journalistic — and indeed political — enquiry.

Trump evidently doesn’t factor ‘bigly’ on Facebook’s oversubscribed risk-list.

Even though Facebook was the first name on the president’s (non-alphabetical) tech giant hit-list.

Still, Twitter appeared to have irked Trump more, as his tweet singled out the short-form platform — with an accusation that Twitter has made it “much more difficult for people to join [sic] @realDonaldTrump”. (We think by “join” he means follow. But we’re speculating wildly.)

This is perhaps why Twitter felt moved to provide a response to the claim of bias, albeit also without wasting a lot of words.

Here’s its statement:

Our focus is on the health of the service, and that includes work to remove fake accounts to prevent malicious behavior. Many prominent accounts have seen follower counts drop, but the result is higher confidence that the followers they have are real, engaged people.

Presumably the president failed to read our report, from July, when we trailed Twitter’s forthcoming spam purge, warning it would result in users with lots of followers taking a noticeable hit in the coming days. In a word: Sad.

Of course we also asked Google for a response to Trump’s bias claim. But just got radio silence.

In similar “bias” tweets from August, the company got a bigger Trump-lashing. And in a response statement then, it told us: “We never rank search results to manipulate political sentiment.”

Google CEO Sundar Pichai has also just had to sit through some three hours of questions from Republicans in Congress on this very theme.

So the company probably feels it’s exhausted the political bias canard.

Even while, as the claims drone on and on, it might truly come to understand what it feels like to be stuck inside a filter bubble.

In any case there are far more pressing things to accuse Google’s algorithms of than being ‘anti-Trump’.

So it’s just as well it didn’t waste time on another presidential sideshow intended to distract from problems of Trump’s own making.