Social Media Blog

Thoughts and rants on social networking.

Moderating Twitter

Twitter has a troll problem.

If you’re white, male, not a celebrity and don’t tend to say anything much that’s controversial, then blocking the occasional drive-by troll works perfectly well. If at least one of those things doesn’t apply to you there’s plenty of evidence that Twitter is a little bit broken and better blocking and moderation functionality is needed.

Twitter does have a function to report abuse, but I’m seeing complaints that it’s far too cumbersome, which has the (possibly deliberate) effect of limiting its use. At least one person has noted that it takes more effort to report an account for abuse than it does for a troll to create yet another throwaway sock-puppet account, a recipe for a perpetual game of whack-a-mole.

In contrast, here’s the Report Abuse form from The Guardian’s online community. There is no real reason why reporting abuse on Twitter needs to be any more complicated than this.

Grainiad Abuse Report
And here’s the dropdown listing the reasons. Not all of those would be appropriate for Twitter; “Spam” and “Personal Abuse” certainly are, the others less so.

Grainiad Abuse Report 2
While I approve of Twitter taking a far tougher line against one-to-one harassment, I am not at all convinced that more generalised speech codes are appropriate for a site on the scale of Twitter. Such things are perfectly acceptable and even expected for smaller community sites where it’s part of the deal when you sign up and reflects the ethos behind the site. Indeed, most such community sites are only as good as their moderation, and there are as many where it’s done badly as those where it’s done well. We can all name sites where either lack of moderation or overly partisan moderation creates a toxic environment.

But for a global site with millions of users the idea of speech codes opens a lot of cans of worms which ultimately boil down to power. Who decides what is and isn’t acceptable speech? Whose community values should they reflect? Who gets to shut down speech they don’t like and who doesn’t? I can’t imagine radical feminists taking kindly to conservative Christians telling them what they can or cannot say on Twitter. Or vice versa.

Better to make it easier for groups of people whose values clash so badly that they cannot coexist in the same space to avoid one another more effectively. Yes, there is a danger of creating echo-chambers; as I’ve said before, if you spend too much time in an echo-chamber, your bullshit detectors cease to function effectively. But Twitter’s current failure mode is in the other direction: pitchfork-wielding mobs who pile on to anyone who dares to say something they don’t like, overwhelming their conversations.

At the moment, the only moderation tool available to individual users is the block function, which is a bit of a blunt instrument, and is only available retrospectively, once the troll has already invaded your space.

There are other things Twitter could implement if they wanted to:

For a start, now that Twitter has threaded conversations, how about adding the ability to moderate responses to your own posts? Facebook and Google+ both allow you to delete other people’s comments below your own status updates. The equivalent in Twitter would be to allow you to delete other people’s tweets that were @replies to your own. If that’s too much against the spirit of Twitter, which it may well be, at least give users the power to sever the link so the offending tweet doesn’t appear as part of the threaded conversation.
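Purely as an illustration of the difference between deleting a reply and merely severing it from the thread, here’s a minimal sketch. The Tweet structure and field names are invented for the example; they are not Twitter’s actual data model.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Tweet:
    tweet_id: int
    author: str
    text: str
    in_reply_to_id: Optional[int] = None   # the link that builds the thread

def sever_from_thread(reply: Tweet) -> None:
    """Detach a reply from the conversation without deleting it."""
    reply.in_reply_to_id = None

def visible_replies(all_tweets: List[Tweet], root_id: int) -> List[Tweet]:
    """Only replies still attached to the root tweet appear in the thread."""
    return [t for t in all_tweets if t.in_reply_to_id == root_id]
```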

Then perhaps there ought to be some limits on who can @reply to you in the first place. I’ve seen one suggestion for a setting that prevents accounts whose age is below a user-specified number of days from appearing in your replies tab, which would filter out newly-created sock-puppet accounts. A filter on follower count would have a similar effect; sock-puppets won’t have many friends.

Another idea would be to filter on the number of people you follow who have blocked the account. This won’t be as much use against sock-puppets, but will be effective against persistent trolls who have proved sufficiently annoying or abusive to other people in your network.
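Here’s a minimal sketch of how those filters might look if you rolled them together. The field names (created_at, follower_count, user_id) and the thresholds are illustrative assumptions, not anything Twitter actually exposes.

```python
from datetime import datetime, timezone

MIN_ACCOUNT_AGE_DAYS = 7   # user-specified minimum account age
MIN_FOLLOWERS = 10         # user-specified minimum follower count

def should_show_reply(sender, blocks_by_my_follows, now=None) -> bool:
    """Decide whether an @reply from `sender` appears in the replies tab.

    `sender.created_at` is assumed to be a timezone-aware datetime;
    `blocks_by_my_follows` maps a user id to the number of accounts I
    follow who have blocked that user.
    """
    now = now or datetime.now(timezone.utc)

    if (now - sender.created_at).days < MIN_ACCOUNT_AGE_DAYS:
        return False   # newly-created accounts are likely sock-puppets
    if sender.follower_count < MIN_FOLLOWERS:
        return False   # sock-puppets won't have many friends
    if blocks_by_my_follows.get(sender.user_id, 0) > 0:
        return False   # already blocked by people in my network
    return True
```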

All of these are things which Twitter could implement quite easily if the will was there. But instead they seem more interested in spending their development effort on Facebook-style algorithmic feeds.

Posted in Social Media | Comments Off

Violet Blue on Facebook

Tech commentator Violet Blue writes about Facebook’s “emotional contagion” experiments, and does not mince words, calling them “unethical, untrustworthy, and now downright harmful”:

Everyone except the people who worked on “Experimental evidence” agree that what Facebook did was unethical. In fact, it’s gone from toxic pit of ethical bankruptcy to unmitigated disaster in just a matter of days….

…. Intentionally doing things to make people unhappy in their intimate networks isn’t something to screw around with — especially with outdated and unsuitable tools.

It’s dangerous, and Facebook has no way of knowing it didn’t inflict real harm on its users.

We knew we couldn’t trust Facebook, but this is something else entirely.

Time will tell, but I wonder whether this will turn out to be a tipping point when significant numbers of people conclude that Zuckerberg and co cannot be trusted and seek other ways of keeping in touch online with those they really care about.

It may just be a bizarre coincidence, but I’ve noticed a lot of people I used to know on Facebook showing up as “People I may know” on Google+. Not that Google is much less creepy and intrusive than Facebook.

Posted in Social Media | Comments Off

Facebook: As Creepy As Hell

The media have now picked up on the story of Facebook tinkering with users’ feeds for a massive psychology experiment.

Even if this is technically legal under the small print of Facebook’s Terms of Service, there is no way in hell what they did can be remotely ethical. Although it’s difficult to describe it as a “betrayal of trust” since nobody in their right mind should be trusting this creepy organisation as far as they can throw them.

I really hope this revelation encourages more people to log off from Facebook and find other, better ways of keeping in touch with the people they care about.

Posted in Social Media | Comments Off

Things Twitter could do

I think few people would deny that Twitter has a troll problem. For us regular users with a few hundred followers it’s easy enough to block the occasional drive-by troll, especially if we’re male. But it’s a different story for public figures, especially women, who can find themselves bombarded with hundreds of abusive messages.

Technical solutions for social problems aren’t ideal, but trying to re-educate the sections of the population who live in the bottom half of the internet is at best a very long-term project. In the meantime there are things Twitter could do to make it harder for trolls to ruin people’s Twitter experience.

One would be to give users the ability to filter the Notifications tab. At the moment, anyone you haven’t blocked will be visible in that tab if they @message your username. It’s not technically difficult to filter that by degrees of separation, so what you see in your Notifications tab can take into account things like:

  • The number of people you’re following who follow them
  • The number of people you follow who have blocked them
  • The total number of people who have blocked them relative to their number of followers.

Of course it would need to be refined to prevent the trolls themselves from gaming the system. For example, perhaps blocks from those who are very block-happy but have themselves collected a lot of blocks could be disregarded.
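As a rough sketch, under invented names, this is how the three signals above might be combined into a single score, including that refinement of discounting blocks from block-happy accounts that have themselves collected a lot of blocks. The weights and thresholds are placeholders, not a real algorithm.

```python
def block_weight(blocker) -> float:
    """Discount blocks from accounts that block heavily and are widely
    blocked themselves, to make the system harder to game."""
    if blocker.blocks_issued > 1000 and blocker.blocks_received > 100:
        return 0.0
    return 1.0

def reply_score(sender, my_follows) -> float:
    """Higher is better; a reply might be hidden when the score is negative.

    Each account in `my_follows` is assumed to expose `follows` and
    `blocks` as sets of user ids, plus the block counters used above.
    """
    followed_by_my_network = sum(
        1 for f in my_follows if sender.user_id in f.follows)
    weighted_blocks_in_my_network = sum(
        block_weight(f) for f in my_follows if sender.user_id in f.blocks)
    global_block_ratio = sender.blocks_received / max(sender.follower_count, 1)

    return (1.0 * followed_by_my_network
            - 2.0 * weighted_blocks_in_my_network
            - 5.0 * global_block_ratio)
```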

Twitter could also crack down on abuse of multiple accounts. There are plenty of legitimate reasons why people need multiple accounts, but it’s well known that trolls often churn through multiple throwaway accounts as each one gets blocked by their targets. Surely it’s not impossible for some kind of pattern-matching on IP addresses and word use to identify which accounts are being used by the same person, and deal with them accordingly when any one of them is suspended for abuse.
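As a deliberately simple illustration of that pattern-matching idea, here’s a sketch that just groups accounts seen logging in from the same IP address, so a suspension for abuse could trigger a look at the whole cluster. The input format is an assumption, and a real system would also weigh wording similarity, device fingerprints, timing and so on.

```python
from collections import defaultdict

def accounts_sharing_ips(login_records):
    """login_records: iterable of (account_id, ip_address) pairs.

    Returns a mapping of IP address -> set of account ids seen on it,
    keeping only IPs shared by more than one account.
    """
    by_ip = defaultdict(set)
    for account_id, ip in login_records:
        by_ip[ip].add(account_id)
    return {ip: ids for ip, ids in by_ip.items() if len(ids) > 1}

# If any account in a shared group is suspended for abuse, the rest of the
# group could be flagged for human review rather than suspended outright.
```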

Twitter is very efficient at nuking spam accounts, and they’re pretty easy to identify algorithmically. Dealing with trolls is harder, and will require more human intervention, but that’s no excuse for Twitter to do nothing. As I’ve pointed out, there are plenty of things they could do if the will was there.

Posted in Social Media | Comments Off

Bloom.fm bites the dust?

Bloom Gameover

Sad news on Bloom.fm’s blog:

We’ll keep this short because we’re pretty shell-shocked.

It’s game over for Bloom.fm.

Our investor, who’s been along for the ride since day one, has unexpectedly pulled our funding.

It’s come so out of the blue that we don’t have time to find new investment. So, with enormous regret, we have to shut up shop.

This is a poetically crappy turn of events as our young business was showing real promise. Our apps and web player are looking super-nice and we had 1,158,914 registered users in a little over a year. Yep.

A massive thanks to everyone that helped us get this far. We’re absolutely gutted. But it’s been a real pleasure.

A later blog post states that the application will remain running for a few days while they make last-ditch attempts to find a buyer.

Coming so soon after the demise of last.fm’s streaming radio, it does make you question the viability of legal online streaming services. Are the labels and collection agencies being too greedy when it comes to licensing? Or do they want startups like Bloom to fail so as not to cannibalise download sales?

Update: In an interview today, Bloom’s Oleg Formenko suggests that all may not be lost, and that there are a number of potential buyers in the frame.

Posted in Music News, Social Media | 2 Comments

Context Collapse

Interesting post on the Software Testing Club on the subject of Context Collapse.

I recently heard the term “context collapse” in a podcast discussing the possible flight of the younger audience from some social media applications. It is unclear who originally coined the term in the early 2000′s, which initially referred generically to the overlapping circles on social media leading to a poster’s inability to focus on a single audience. In the podcast, the meaning was more specifically defined to identify the clash of incompatible social circles: college acquaintances, close friends, family, and work connections (especially management). That incompatibility leads to an abandonment of the media or couching postings in coded terms that are (supposedly) only understood within a specific circle.

Yes, that’s exactly why I decided to leave Facebook. I didn’t realise there was actually a term for it. The post on STC goes on to describe another case of Context Collapse involving accessibility testing, which the team eventually dealt with by getting actual disabled people to test the product. It’s a very interesting read.

Posted in Social Media, Testing & Software | Comments Off

WTF Twitter?

Yesterday I was horrified to see a Promoted Tweet for a PUA (Pick Up Artist) promoting their hideous rapey misogynistic subculture. It’s not often I swear on Twitter, but I’m told my reaction was more than justified.

Yes, I immediately reported it as abusive, since it must be in violation of Twitter’s policy on ads for “Sexual services”. But it raises the question of how the Hell such a promoted tweet got into my timeline in the first place. This is almost certainly not unconnected to the fact that Twitter employ virtually no women.

Either Promoted Tweets are not screened at all, and Twitter relies on users reporting offensive ads. Or somebody at Twitter reviewed it, and thought it was OK.

I don’t know which is worse.

Posted in Social Media | Comments Off

Woman becomes first person to be jailed for ‘trolling herself’ is today’s bizarre headline. The actual story isn’t quite as bizarre as the headline, but it is more evidence that many persistent trolls aren’t rational people with unpleasant agendas, but troubled individuals with mental health problems or substance abuse issues.

Posted by Tim Hall | Comments Off

Epic Bureaucracy Fail from Canada.

In Canada, Government tweets are sanitized through ‘super-rigid process’. This is just head-explodingly ridiculous.

Newly disclosed documents from Industry Canada show how teams of bureaucrats often work for weeks to sanitize each lowly tweet, in a medium that’s supposed to thrive on spontaneity and informality.

Most 140-character tweets issued by the department are planned weeks in advance; edited by dozens of public servants; reviewed and revised by the minister’s staff; and sanitized through a 12-step protocol, the documents indicate.

Insiders and experts say the result is about as far from the spirit of Twitter as you can get — and from a department that’s supposed to be on the leading edge of new communications technologies.

Some things are the very epitome of unwieldy, top-heavy bureaucracy. It reads like something straight out of Parkinson’s Law, except that C. Northcote Parkinson would have rejected it as too unbelievably surreal.

Posted in Social Media | 3 Comments

Every time I see an Upworthy-style link-bait headline beginning with “You won’t believe…”, I have to fight the urge to kick a “Viral Content Editor” in the bollocks. This probably makes me a Bad Person….

Posted by Tim Hall | 6 Comments