Facebook is testing pop-up messages telling people to read a link before they share it

Years after popping open a Pandora’s box of bad behavior, social media companies are trying to figure out subtle ways to reshape how people use their platforms.

Following Twitter’s lead, Facebook is trying out a new feature designed to encourage users to read a link before sharing it. The test will reach 6 percent of Facebook’s Android users globally in a gradual rollout that aims to encourage “informed sharing” of news stories on the platform.

Users can still easily click through to share a given story, but the idea is that by adding friction to the experience, people might rethink their original impulses to share the kind of inflammatory content that currently dominates on the platform.

Twitter introduced prompts urging users to read a link before retweeting it last June, and the company quickly found the test feature successful, expanding it to more users.

Facebook began trying out more prompts like this last year. Last June, the company rolled out pop-up messages to warn users before they share any content that’s more than 90 days old, in an effort to cut down on misleading stories taken out of their original context.

At the time, Facebook said it was looking at other pop-up prompts to cut down on some kinds of misinformation. A few months later, Facebook rolled out similar pop-up messages noting the date and source of any COVID-19-related links users share.

The strategy demonstrates Facebook’s preference for passively nudging people away from misinformation and toward its own verified resources on hot-button issues like COVID-19 and the 2020 election.

While the jury is still out on how much of an impact this kind of gentle behavioral shaping can make on the misinformation epidemic, both Twitter and Facebook have also explored prompts that discourage users from posting abusive comments.

Pop-up messages that give users a sense that their bad behavior is being observed might be where more automated moderation is headed on social platforms. While users would probably be far better served by social media companies scrapping their misinformation and abuse-ridden existing platforms and rebuilding them more thoughtfully from the ground up, small behavioral nudges will have to do.

If 12% is the new 30%, 4% is the new 12%

Hello and welcome back to Equity, TechCrunch’s venture capital-focused podcast, where we unpack the numbers behind the headlines.

The whole team was aboard for this recording, with Grace and Chris behind the scenes, and Danny, Alex, and Natasha on the mics. We had to cut more than we included this week, which should give you a good idea of how busy the startup and VC worlds are of late.

Make sure that you are following the podcast on Twitter, where we post all sorts of memes and cuts and, perhaps, the occasional video here and there. That aside, here’s the rundown:

  • Investing legend David Swensen passed away.
  • Twitter is buying Scroll (neat, very cool) as part of its subscription push, but also killing Nuzzel in the process (bad, very uncool). Natasha and Danny fill us in on why Nuzzel will be missed. Alex has thoughts on why Twitter-Scroll is good.
  • Epic bought ArtStation and cut its marketplace take rate. This is the future, says Danny, who throws his own estimates in, too.
  • Sony and Discord are tying up after the Microsoft-Discord deal fell apart.
  • Edtech is doing the edtech thing in which it raises money and consolidates, as shown by Kahoot’s latest scoop.
  • A friend of the pod, Jomayra Herrera, is joining Reach Capital as its first ever outside-partner hire.
  • Uber is teaming up with Arrival on electric vehicles designed for ride-hailing. We’re pretty bullish on the idea. Also, Alex likes to say “microfactories.”
  • IVF startups are raising venture capital, and this time it’s Alife Health that we’re talking about.
  • WorkBoard raised again. Alex once again made us talk about OKR-focused startups. He needs to get a life, and so does the rest of the Equity team which fought to do the transition into this segment.
  • To end, we spoke about Leda Health, a new startup focused on at-home rape kits for sexual assault survivors. It’s a controversial company, and we discuss critiques and opportunities.

And that’s our show! No private equity deal can slow the Equity team down, so we’ll see you Monday!

Equity drops every Monday at 7:00 a.m. PST and Wednesday and Friday at 6:00 a.m. PST, so subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts!

Twitter Tip Jar lets you pay people for good tweetin’

Twitter today confirmed earlier reports that it’s testing a new Tip Jar feature. The new addition utilizes a number of different payment platforms, including PayPal, Venmo, Patreon, Cash App and Bandcamp (all region-dependent).

“Tip Jar is an easy way to support the incredible voices that make up the conversation on Twitter,” the company wrote in a blog post confirming the news. “This is a first step in our work to create new ways for people to receive and show support on Twitter — with money.”

Currently available on both iOS and Android, the feature is designed to give users a way to quickly tip creators with a few taps. Tip Jar is beginning to roll out to select groups of users, including nonprofits, journalists, experts and creators. The company has further plans to roll it out to additional groups and languages.

For now, those using Twitter in English will be able to send a tip. Those profiles that have enabled it will show the Tip Jar icon on their profile page to the left of the Follow button. Hitting that will show a list of the aforementioned third-party money transfer apps. The opt-in feature will pop up in the mobile app, letting qualified users choose which payment platforms they’ll accept.

In addition to the above, Android users will be able to send money via Twitter’s Clubhouse competitor, Spaces. The company says it won’t be taking a percentage of those transactions.

The feature comes as the service looks to become a more well-rounded content-creation platform. In addition to the audio feature, Spaces (which recently saw a much wider rollout), Twitter has also been looking to take on the likes of Substack with its own newsletter-style offering.

Twitter rolls out bigger images and cropping control on iOS and Android

Twitter just made a change to the way it displays images that has visual artists on the social network celebrating.

In March, Twitter rolled out a limited test of uncropped, larger images in users’ feeds. Now, it’s declared those tests a success and improved the image sharing experience for everybody.

On Twitter for Android or iOS, standard aspect ratio images (16:9 and 4:3) will now display in full without any cropping. Instead of gambling on how an image will show up in the timeline — and potentially ruining an otherwise great joke — images will look just like they did when you shot them.

Twitter’s new system will show anyone sharing an image a preview of what it will look like before it goes live in the timeline, resolving past concerns that Twitter’s algorithmic cropping was biased toward highlighting white faces.

“Today’s launch is a direct result of the feedback people shared with us last year that the way our algorithm cropped images wasn’t equitable,” Twitter spokesperson Lauren Alexander said. The new way of presenting images decreases the platform’s reliance on automatic, machine learning-based image cropping.

Super tall or wide images will still get a centered crop, but Twitter says it’s working to make that better too, along with other aspects of how visual media gets displayed in the timeline.

For visual artists like photographers and cartoonists who promote their work on Twitter, this is actually a pretty big deal. Not only will photos and other kinds of art score more real estate on the timeline, but artists can be sure that they’re putting their best tweet forward without awkward crops messing stuff up.

Twitter’s Chief Design Officer Dantley Davis celebrated by tweeting a requisite dramatic image of the Utah desert (Dead Horse Point — great spot!).

We regret to inform you that the brands are also aware of the changes.

The days of “open for a surprise” tweets might be numbered, but the long duck can finally have his day.

Facebook’s Oversight Board throws the company a Trump-shaped curveball

Facebook’s controversial policy-setting supergroup issued its verdict on Trump’s fate Wednesday, and it wasn’t quite what most of us were expecting.

We’ll dig into the decision to tease out what it really means, not just for Trump, but also for Facebook’s broader experiment in outsourcing difficult content moderation decisions and for just how independent the board really is.

What did the Facebook Oversight Board decide?

The Oversight Board backed Facebook’s determination that Trump violated its policies on “Dangerous Individuals and Organizations,” which prohibit anything that praises or otherwise supports violence. The full decision and accompanying policy recommendations are online for anyone to read.

Specifically, the Oversight Board ruled that two Trump posts, one telling Capitol rioters “We love you. You’re very special” and another calling them “great patriots” and telling them to “remember this day forever,” broke Facebook’s rules. In fact, the board went as far as saying the pair of posts “severely” violated the rules in question, and that the risk of real-world harm in Trump’s words was crystal clear:

The Board found that, in maintaining an unfounded narrative of electoral fraud and persistent calls to action, Mr. Trump created an environment where a serious risk of violence was possible. At the time of Mr. Trump’s posts, there was a clear, immediate risk of harm and his words of support for those involved in the riots legitimized their violent actions. As president, Mr. Trump had a high level of influence. The reach of his posts was large, with 35 million followers on Facebook and 24 million on Instagram.

While the Oversight Board praised Facebook’s decision to suspend Trump, it disagreed with the way the platform implemented the suspension. The group argued that Facebook’s decision to issue an “indefinite” suspension was an arbitrary punishment that wasn’t really supported by the company’s stated policies:

It is not permissible for Facebook to keep a user off the platform for an undefined period, with no criteria for when or whether the account will be restored.

In applying this penalty, Facebook did not follow a clear, published procedure. ‘Indefinite’ suspensions are not described in the company’s content policies. Facebook’s normal penalties include removing the violating content, imposing a time-bound period of suspension, or permanently disabling the page and account.

The Oversight Board didn’t mince words on this point, going on to say that by putting a “vague, standardless” punishment in place and then kicking the ultimate decision to the Oversight Board, “Facebook seeks to avoid its responsibilities.” Turning things around, the board asserted that it’s actually Facebook’s responsibility to come up with an appropriate penalty for Trump that fits its set of content moderation rules.

Is this a surprise outcome?

If you’d asked me yesterday, I would have said that the Oversight Board was more likely to overturn Facebook’s Trump decision. I also called Wednesday’s big decision a win-win for Facebook, because whatever the outcome, it wouldn’t ultimately be criticized a second time for either letting Trump back onto the platform or kicking him off for good. So much for that!

Facebook likely saw a more clear-cut decision on the Trump situation in the cards. This is a relatively challenging outcome for a company that’s probably ready to move on from its (many, many) missteps during the Trump era. But there’s definitely an argument that if the board had declared that Facebook made the wrong call and reinstated Trump, that would have been a much bigger headache.

A lot of us didn’t see the “straight up toss the ball back into Facebook’s court” option as a possible outcome. It’s ironic and a bit surprising that the Oversight Board’s decision to give Facebook the final say actually makes the board look more independent, not less.

But: It’s worth remembering that at the end of the day, Facebook could undermine the whole thing by just refusing to do what the board says. The board only has as much power as Facebook grants it and the company could call off the deal at any second, if it chose to.

What does it mean that the Oversight Board sent the decision back to Facebook?

Ultimately the Oversight Board is asking Facebook to either a) give Trump’s suspension an end date or b) delete his account. In a less severe case, the normal course of action would be for Facebook to remove whatever broke the rules, but given the ramifications here and the fact that Trump is a repeat Facebook rule-breaker, this is obviously all well past that option.

What will Facebook do?

We’re in for a wait. The board called for Facebook to evaluate the Trump situation and reach a final decision within six months, calling for a “proportionate” response that is justified by its platform rules. Since Facebook and other social media companies are re-writing their rules all the time and making big calls on the fly, that gives the company a bit of time to build out policies that align with the actions it plans to take.

In the months following the violence at the U.S. Capitol, Facebook repeatedly defended its Trump call as “necessary and right.” It’s hard to imagine the company deciding that Trump will get reinstated six months from now, but in theory Facebook could decide that length of time was an appropriate punishment and write that into its rules. The fact that Twitter permanently banned Trump means that Facebook could comfortably follow suit at this point.

In direct response to the decision, Facebook’s Nick Clegg wrote only: “We will now consider the board’s decision and determine an action that is clear and proportionate.” Clegg says Trump will stay suspended until then but didn’t offer further hints at what comes next. See you again on November 5.

If Trump had won reelection, this whole thing probably would have gone down very differently. As much as Facebook likes to say its decisions are aligned with lofty ideals — absolute free speech, connecting people — the company is ultimately very attuned to its regulatory and political environment.

Trump’s actions on January 6 were dangerous and flagrant, but Biden’s looming inauguration two weeks later probably influenced the company’s decision just as much. Circumventing regulatory scrutiny is also arguably the raison d’être for the Oversight Board to begin with.

Did the board actually change anything?

Potentially. In its decision, the Oversight Board said that Facebook asked for “observations or recommendations from the Board about suspensions when the user is a political leader.” The board’s policy recommendations aren’t binding like its decisions are, but since Facebook asked, it’s likely to listen.

If it does, the Oversight Board’s recommendations could reshape how Facebook handles high profile accounts in the future:

The Board stated that it is not always useful to draw a firm distinction between political leaders and other influential users, recognizing that other users with large audiences can also contribute to serious risks of harm.

While the same rules should apply to all users, context matters when assessing the probability and imminence of harm. When posts by influential users pose a high probability of imminent harm, Facebook should act quickly to enforce its rules. Although Facebook explained that it did not apply its ‘newsworthiness’ allowance in this case, the Board called on Facebook to address widespread confusion about how decisions relating to influential users are made. The Board stressed that considerations of newsworthiness should not take priority when urgent action is needed to prevent significant harm.

Facebook and other social networks have hidden behind newsworthiness exemptions for years instead of making difficult policy calls that would upset half their users. Here, the board not only says that political leaders don’t really deserve special consideration while enforcing the rules, but that it’s much more important to take down content that could cause harm than it is to keep it online because it’s newsworthy.

So… we’re back to square one?

Yes and no. Trump’s suspension may still be up in the air, but the Oversight Board is modeled after a legal body and its real power is in setting precedents. The board kicked this case back to Facebook because the company picked a punishment for Trump that wasn’t even on the menu, not because it thought anything about his behavior fell in a gray area.

The Oversight Board clearly believed that Trump’s words of praise for rioters at the Capitol created a high stakes, dangerous threat on the platform. It’s easy to imagine the board reaching the same conclusion on Trump’s infamous “when the looting starts, the shooting starts” statement during the George Floyd protests, even though Facebook did nothing at the time. Still, the board stops short of saying that behavior like Trump’s merits a perma-ban — that much is up to Facebook.

Twitter rolls out improved ‘reply prompts’ to cut down on harmful tweets

A year ago, Twitter began testing a feature that would prompt users to pause and reconsider before they replied to a tweet using “harmful” language — meaning language that was abusive, trolling, or otherwise offensive in nature. Today, the company says it’s rolling out improved versions of these prompts to English-language users on iOS and, soon, Android, after adjusting the systems that determine when to send the reminders to better understand when the language being used in a reply is actually harmful.

The idea behind these forced slowdowns, or nudges, is to leverage psychological tricks to help people make better decisions about what they post. Studies have indicated that introducing a nudge like this can lead people to edit and cancel posts they would have otherwise regretted.

Twitter’s own tests found that to be true, too. It said that 34% of people revised their initial reply after seeing the prompt, or chose not to send the reply at all. And, after being prompted once, people then composed 11% fewer offensive replies in the future, on average. That indicates that the prompt, for some small group at least, had a lasting impact on user behavior. (Twitter also found that users who were prompted were less likely to receive harmful replies back, but didn’t further quantify this metric.)

Image Credits: Twitter

However, Twitter’s early tests ran into some problems. It found its systems and algorithms sometimes struggled to understand the nuance of many conversations. For example, they couldn’t always differentiate between offensive replies and sarcasm or, sometimes, even friendly banter. The systems also struggled to account for situations in which language is being reclaimed by underrepresented communities and then used in non-harmful ways.

The improvements rolling out starting today aim to address these problems. Twitter says it’s made adjustments to the technology across these areas, and others. Now, it will take the relationship between the author and replier into consideration. That is, if both follow and reply to each other often, it’s more likely they have a better understanding of the preferred tone of communication than someone else who doesn’t.
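As a rough illustration of that kind of signal-weighting, here is a hypothetical sketch of how a harmful-language score might be combined with the relationship between author and replier. Twitter has not published its actual logic; the function name, signals, and thresholds below are invented for illustration only.

```python
# Hypothetical sketch of combining a toxicity score with relationship
# signals. Twitter's real system is not public; every value here is
# invented for illustration.

def should_prompt(harm_score: float,
                  mutual_follow: bool,
                  prior_replies: int,
                  threshold: float = 0.7) -> bool:
    """Return True if a "reconsider your reply" prompt should be shown.

    harm_score:    0..1 output of a (hypothetical) language classifier.
    mutual_follow: whether the two accounts follow each other.
    prior_replies: how often the replier has replied to this author before.
    """
    # Accounts that interact frequently likely understand each other's
    # tone (banter, reclaimed language), so raise the bar before
    # interrupting them with a prompt.
    if mutual_follow and prior_replies >= 3:
        threshold += 0.2
    return harm_score >= threshold
```

The point of the sketch is only that the same borderline reply can trigger a prompt between strangers but pass silently between frequent mutuals, which is exactly the nuance the early tests were missing.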

Twitter says it has also improved the technology to more accurately detect strong language, including profanity.

And it’s made it easier for those who see the prompts to let Twitter know if the prompt was helpful or relevant — data that can help to improve the systems further.

How well this all works remains to be seen, of course.

Image Credits: Twitter

While any feature that can help dial down some of the toxicity on Twitter may be useful, this only addresses one aspect of the larger problem — people who get into heated exchanges that they could later regret. There are other issues across Twitter regarding abusive and toxic content that this solution alone can’t address.

These “reply prompts” aren’t the only time Twitter has used the concept of nudges to impact user behavior. It also reminds users to read an article before they retweet and amplify it, in an effort to promote more informed discussions on its platform.

Twitter says the improved prompts are rolling out to all English-language users on iOS starting today, and will reach Android over the next few days.

Twitter acquires distraction-free reading service Scroll to beef up its subscription product

Twitter this morning announced it’s acquiring Scroll, a subscription service that offers readers a better way to read through long-form content on the web, by removing ads and other website clutter that can slow down the experience. The service will become a part of Twitter’s larger plans to invest in subscriptions, the company says, and will later be offered as one of the premium features Twitter will provide to subscribers.

Premium subscribers will be able to use Scroll to easily read their articles from news outlets and from Twitter’s own newsletters product, Revue, another recent acquisition that’s already been integrated into Twitter’s service. When subscribers use Scroll through Twitter, a portion of their subscription revenue would go to support the publishers and the writers creating the content, explains Twitter in an announcement.

Scroll’s service today works across hundreds of sites, including The Atlantic, The Verge, USA Today, The Sacramento Bee, The Philadelphia Inquirer and The Daily Beast, among others. For readers, the experience of using Scroll is similar to that of a “reader view” — ads, trackers, and other website junk is stripped so readers can focus on the content.

Image Credits: Twitter

Scroll’s pitch to publishers has been that it can end up delivering cleaner content that can make them more money than advertising alone.

Deal terms were not disclosed, but Twitter will be bringing on the entire Scroll team, totalling 13 people.

For the time being, Scroll will pause new customer sign-ups so it can focus on integrating its product into Twitter’s subscriptions work and prepare for the expected growth. It will, however, continue to onboard new publishers who want to participate in Scroll’s network, following the deal’s closure.

And Scroll itself will be headed back into private beta as the team works to integrate the product into Twitter.

Image Credits: Twitter

Twitter says it will also be winding down Scroll’s news aggregator Nuzzel product, but will work to bring some of Nuzzel’s core elements to Twitter over time.

“Twitter exists to serve the public conversation. Journalism is the mitochondria of that conversation. It initiates, energizes and informs. It converts and confounds perspectives. At its best it helps us stand in one another’s shoes and understand each other’s common humanity,” said Tony Haile, Scroll CEO, in the company’s post about Scroll’s acquisition.

“The mission we’ve been given by Jack and the Twitter team is simple: take the model and platform that Scroll has built and scale it so that everyone who uses Twitter has the opportunity to experience an internet without friction and frustration, a great gathering of people who love the news and pay to sustainably support it,” he added.

Twitter earlier this year detailed its plans to head into subscriptions, as a way to diversify beyond ad revenue for its own business. The company unveiled what it’s calling “Super Follow,” a creator-focused subscription that would give paid subscribers access to an expanded array of perks, like exclusive content, subscriber-only newsletters, deals, badges, paywalled media, and more. The company is aiming to use this new product to help it achieve its goal of doubling company revenue from $3.7 billion in 2020 to $7.5 billion or more in 2023, it said.

Twitter expands Spaces to anyone with 600+ followers, details plans for tickets, reminders and more

Twitter Spaces, the company’s new live audio rooms feature, is opening up more broadly. The company announced today it’s making Twitter Spaces available to any account with 600 followers or more, including both iOS and Android users. It also officially unveiled some of the features it’s preparing to launch, like Ticketed Spaces, scheduling features, reminders, support for co-hosting, accessibility improvements, and more.

Along with the expansion, Twitter is making Spaces more visible on its platform, too. The company notes it has begun testing the ability to find and join a Space from a purple bubble around someone’s profile picture right from the Home timeline.

Image Credits: Twitter

Twitter says it decided on the 600 follower figure as being the minimum to gain access to Twitter Spaces based on its earlier testing. Accounts with 600 or more followers tend to have “a good experience” hosting live conversations because they have a larger existing audience who can tune in. However, Twitter says it’s still planning to bring Spaces to all users in the future.

In the meantime, it’s speeding ahead with new features and developments. Twitter has been building Spaces in public, taking into consideration user feedback as it prioritizes features and updates. Already, it has built out an expanded set of audience management controls, as users requested, introduced a way for hosts to mute all speakers at once, and added the laughing emoji to its set of reactions, after users requested it.

Now, its focus is turning towards creators. Twitter Spaces will soon support multiple co-hosts, and creators will be able to better market and even charge for access to their live events on Twitter Spaces. One feature, arriving in the next few weeks, will allow users to schedule and set reminders about Spaces they don’t want to miss. This can also help creators who are marketing their event in advance, as part of the RSVP process could involve pushing users to “set a reminder” about the upcoming show.

Twitter Spaces’ rival, Clubhouse, also just announced a reminders feature during its Townhall event on Sunday, as well as the start of its external Android testing. The two platforms, it seems, could soon be neck-and-neck in terms of feature set.

Image Credits: Twitter

But while Clubhouse recently launched an in-app donations feature as a means of supporting favorite creators, Twitter will soon introduce a more traditional means of generating revenue from live events: selling tickets. The company says it’s working on a feature that will allow hosts to set ticket prices and how many tickets are available for a given event, in order to give them a way of earning revenue from their Twitter Spaces.

A limited group of testers will gain access to Ticketed Spaces in the coming months, Twitter says. Unlike Clubhouse, which has yet to tap into creator revenue streams, Twitter will take a small cut from these ticket sales. However, it notes that the “majority” of the revenue will go to the creators themselves.

Image Credits: Twitter

Twitter also noted that it’s improving its accessibility feature, live captions, so they can be paused and customized, and is working to make them more accurate.

The company will be hosting a Twitter Space of its own today around 1 PM PT to further discuss these announcements in more detail.

What3Words sends legal threat to a security researcher for sharing an open-source alternative

The U.K. company behind the digital addressing system What3Words has sent a legal threat to a security researcher for offering to share an open-source software project with other researchers, which What3Words claims violates its copyright.

Aaron Toponce, a systems administrator at XMission, received a letter on Thursday from a law firm representing What3Words, requesting that he delete tweets related to the open source alternative, WhatFreeWords. The letter also demands that he disclose to the law firm the identity of the person or people with whom he had shared a copy of the software, agree that he would not make any further copies of the software, and to delete any copies of the software he had in his possession.

The letter gave him until May 7 to agree, after which What3Words would “waive any entitlement it may have to pursue related claims against you,” a thinly-veiled threat of legal action.

“This is not a battle worth fighting,” he said in a tweet. Toponce told TechCrunch that he has complied with the demands, fearing legal repercussions if he didn’t. He has also asked the law firm twice for links to the tweets they want deleted but has not heard back. “Depending on the tweet, I may or may not comply. Depends on its content,” he said.

The legal threat sent to Aaron Toponce. (Image: supplied)

U.K.-based What3Words divides the entire world into three-meter squares and labels each with a unique three-word phrase. The idea is that three words are easier to read out over the phone in an emergency than precise geographic coordinates.
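The general scheme is easy to illustrate. Below is a deliberately tiny, hypothetical sketch of a word-triplet geocoder; it is not What3Words’ proprietary algorithm, and the eight-entry word list is far too small for real use.

```python
# Hypothetical word-triplet geocoder, for illustration only; NOT
# What3Words' actual (proprietary) algorithm.
# Idea: snap a coordinate to a ~3m grid cell, fold the cell's
# row/column into one integer, then write that integer in base
# len(WORDS), rendering each "digit" as a word.

WORDS = ["apple", "brick", "cloud", "delta", "ember", "flint", "grove", "haze"]

CELL_METERS = 3.0
METERS_PER_DEG = 111_320.0                      # rough meters per degree of latitude
COLS = int(360 * METERS_PER_DEG / CELL_METERS)  # grid cells per row

def three_words(lat: float, lon: float) -> str:
    """Encode a lat/lon as a dot-separated three-word cell name."""
    row = int((lat + 90.0) * METERS_PER_DEG / CELL_METERS)
    col = int((lon + 180.0) * METERS_PER_DEG / CELL_METERS)
    n = row * COLS + col
    base = len(WORDS)
    # Three words can only distinguish base**3 cells (512 here), so a
    # tiny word list guarantees nearby cells with identical or similar
    # names, the same confusion risk raised by researchers below.
    return ".".join(WORDS[(n // base**i) % base] for i in (2, 1, 0))
```

A production-scale word list of tens of thousands of entries makes the triple space large enough to name every three-meter square on Earth, but nothing about the approach prevents similar-sounding names from landing close together, which is what the research described next examined.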

But security researcher Andrew Tierney recently discovered that What3Words would sometimes have two similarly-named squares less than a mile apart, potentially causing confusion about a person’s true whereabouts. In a later write-up, Tierney said What3Words was not adequate for use in safety-critical cases.

It’s not the only downside. Critics have long argued that What3Words’ proprietary geocoding technology, which it bills as “life-saving,” makes the system harder to examine for problems or security vulnerabilities.

Concerns about its lack of openness in part led to the creation of WhatFreeWords. A copy of the project’s website, which does not contain the code itself, said the open-source alternative was developed by reverse-engineering What3Words. “Once we found out how it worked, we coded implementations for it for JavaScript and Go,” the website said. “To ensure that we did not violate the What3Words company’s copyright, we did not include any of their code, and we only included the bare minimum data required for interoperability.”

But the project’s website was nevertheless subjected to a copyright takedown request filed by What3Words’ counsel. Even tweets that pointed to cached or backup copies of the code were removed by Twitter at the lawyers’ requests.

Toponce — a security researcher on the side — contributed to the research of Tierney, who was tweeting out his findings as he went. Toponce said that he offered to share a copy of the WhatFreeWords code with other researchers to help Tierney with his ongoing research into What3Words. Toponce told TechCrunch that the legal threat may have been prompted by a combination of offering to share the code and finding problems with What3Words.

In its letter to Toponce, What3Words argues that WhatFreeWords contains its intellectual property and that the company “cannot permit the dissemination” of the software.

Regardless, several websites still retain copies of the code and are easily searchable through Google, and TechCrunch has seen several tweets linking to the WhatFreeWords code since Toponce went public with the legal threat. Tierney, who did not use WhatFreeWords as part of his research, said in a tweet that What3Words’ reaction was “totally unreasonable given the ease with which you can find versions online.”

We asked What3Words if the company could point to a case where a judicial court has asserted that WhatFreeWords has violated its copyright. What3Words spokesperson Miriam Frank did not respond to multiple requests for comment.

An Oracle EVP took a brass-knuckled approach with a reporter today; now he’s suspended from Twitter

Companies and the reporters who cover them routinely find themselves at odds, particularly when the stories being chased are unflattering, bring unwanted attention to a business’s dealings, or are, in the company’s estimation, simply inaccurate.

Many companies fight back, which is why crisis communications is a very big and lucrative business. Still, how a company fights back matters. And according to crisis communications pros with whom TechCrunch spoke this afternoon, a new post on Oracle’s corporate blog misses the mark, as did the company’s related follow-up on social media.

In fact, the author of the post, an Oracle executive named Ken Glueck, a 25-year veteran of the company, has been temporarily suspended by Twitter after encouraging his followers to harass a female reporter, Twitter told Gizmodo this afternoon.

The trouble ties to a series of pieces by the news site The Intercept about how a “network of local resellers helps funnel Oracle technology to the police and military in China,” and Oracle’s response to the pieces.

While it isn’t uncommon for companies to post responses to media stories on their own platforms (as well as to take out ads in mainstream media outlets), the crisis execs with whom we spoke — they asked not to be named as they work with companies like Oracle — had some observations that might be helpful to Oracle in the future.

Rule number one: Don’t draw attention unnecessarily to work that you might prefer didn’t exist. Oracle’s newest post doesn’t link back to the new Intercept story that Glueck works to dismantle, but in an earlier post about the first Intercept story, which ran in February, Glueck hyperlinks to the story on Oracle’s blog. It’s hard to know which Oracle wants its audience to read more — Glueck’s blog post or that Intercept story, particularly given the story’s intriguing title (“How Oracle Sells Repression in China”).

“How many of Oracle’s customers or employees saw [The Intercept piece] and didn’t give a damn and now he’s drawing attention to it?” noted one exec we’d interviewed today.

Rule number two: Don’t attack reporters; attack (if you must) the outlet. In Glueck’s first diatribe against The Intercept over its February piece, he mentions the outlet 26 times and the author of the piece once. In Glueck’s newest salvo against The Intercept, he refers to its author, reporter Mara Hvistendahl, 22 times — mostly by her first name — and even invites readers of Oracle’s blog to reach out to him, writing in boldface: “If you have any information about Mara or her reporting, write me securely at kglueck AT protonmail.com.”

Though Glueck has since said the call-out was a tongue-in-cheek gesture, it was subsequently removed from the post, possibly owing to its “sinister tone,” as one of our experts observed. “No one likes a bully,” said this comms pro, adding that “bullying conveys weakness.”

(Screenshots: Glueck’s post before and after the call-out for information was removed.)

Rule number three: Know your purpose. By lashing out at The Intercept’s piece in a plainly derisive tone, and by continuing to double down on his attack against Hvistendahl on social media afterward, Glueck made his strategy less and less clear, says one of the crisis specialists we spoke with.

“You can do what Ken did and mock” the reporter, said this person, “but is that going to stop The Intercept from continuing to do stories about Oracle? And what is the reaction of other media? Are they scared off by [what happened today] or are they going to circle the wagons?” (Below: a note from an L.A. Times reporter to Glueck today in response to his call for information about Hvistendahl.)

Rule number four: Keep it short. Two of the pros we spoke with today commended Glueck’s writing style, calling it both fluid and funny. Both also observed that his response was far too long. “I just couldn’t get through it,” said one.

Last rule: Find another way if possible. The crisis experts we spoke with said it’s ideal to first work with a reporter, then the reporter’s editor if necessary, and if it comes to it, involve lawyers, of which Oracle surely has plenty. “That’s the chain of appeal if a reporter has gotten a story blatantly wrong,” said one source.

Very possibly, Glueck decided to throw out this rulebook by design. Oracle tends to do things its own way, and Glueck is very much a product of that culture. Indeed, the WSJ wrote a 1,300-word profile about Glueck last year, calling him a “potent weapon” for Oracle.

As for Hvistendahl, she suggests there is another reason Oracle took the route that it did.

In a statement sent to us earlier, she writes that “Ken Glueck has published two lengthy blog posts attacking me and my editor, Ryan Tate. But Oracle has not refuted my central finding, which is that the company marketed its analytics software for use by police in China. Oracle also hasn’t refuted our reporting on Oracle’s sale and marketing of its analytics software to police elsewhere in the world. We found evidence of Oracle selling or marketing analytics software to police in Mexico, Pakistan, Turkey, and the UAE. In Brazil, my colleague Tatiana Dias uncovered police contracts between Oracle and Rio de Janeiro’s notoriously corrupt Civil Police.”