I write to you today to raise your awareness of an issue that's having an increasing impact on your platform, and especially on a certain demographic that has increasingly come under organised attack in a manner that directly impacts safety.
While we're perfectly cognisant of the fact that you wish Twitter to be a safe space free of bullying, this can sometimes be taken too far, and Twitter is losing value and becoming less safe for some. I have my own issues with the entire 'safe space' culture, but that's beyond the scope of this missive for the moment, though I'm more than happy to enter into discussion about the shoddy logic underlying the whole concept.
Please note that, while I myself have fallen victim to this, I'm also fully aware that I'm responsible for my own behaviour, and that I can be more aggressive than I need to be. This is something that I need to work on, and I'm doing so. I'm not motivated by a favourable outcome for myself, although I'll happily accept one if it's forthcoming.
The problem is this:
It seems that there is an increasing effort by some who, offended at the mere challenge to their worldview, target their reports at those who offer such challenges. Several accounts that I'm aware of have been suspended, whether temporarily or permanently (one wonders why you don't call permanent suspension what it actually is: banishment), ostensibly for 'targeted abuse'. Upon appealing these suspensions, we get a generic response containing no useful information. Your sanctions system, if I'm not mistaken, is entirely algorithmic, filtering out certain words as if words were magical and had some hidden power beyond their utility in conveying ideas.
Now, there's certainly a case for dealing with abusive behaviour, and I have no problem with this, but having an algorithm do the work for you is actually having a chilling effect on discourse, rather than protecting and encouraging it, the latter of which should be the aim.
Interestingly, while your algorithm seems to leap upon certain words and act upon them, it utterly fails to spot the abuses levelled at certain demographics. When looking at the discursive behaviour of some users, it's fairly easy to see what the problem is.
For example, while I can get repeatedly sanctioned for, say, calling a provable liar a liar, there seems to be no effort whatsoever to deal with genuinely abusive behaviour levelled at some demographics. I have several black, Jewish and female friends, for instance, who suffer abuse that any thinking moral agent should find unconscionable. When racist, misogynistic or anti-Semitic tweets are reported, it appears that all is hunky dory. Suggest that somebody whose grasp on reality is tenuous - at best - is an idiot or a moron or some other generic epithet, and all the algorithmic guns are broken out.
I've seen reported tweets that are absolutely beyond the pale, sexually-oriented guff aimed at female sci-commers and sceptics, properly reported in volume, with the response that no breach of Twitter rules can be found.
Komrade Trichindova, the cheeto-in-chief, regularly posts abusive content, threatening violence against entire nations, yet this passes with a conspicuous lack of sanction. Bear in mind that this unhinged megalomaniac who lacks the ability to plan five seconds into the future is not only perfectly capable of the most heinous behaviour, as his personal history has shown, but also has access to weapons that can do genuine harm to millions, as opposed to the milquetoast epithets levelled at the residents of fantasy island.
Several of my female friends have been actively stalked on Twitter, one of whom I'd thought had disappeared until I was put in touch with her stealth account. I know others who keep their identities hidden simply because doing otherwise puts them at risk.
Add to all this the fact that, when you actually do take action, that action seems to be taken by a bot with no ability to understand nuance or context, and an appeals process that amounts to nothing more than a button that triggers a generic response such as we see on the right.
As you can see, this has less information in it than the brain of the average creationist, and that's a truly stupendous achievement. Further attempts at appeal generate exactly the same response, no matter how many times the appeal is repeated.
All attempts to contact a human - who might be able to look at it and determine that there's mitigating context and nuance - are futile. While I'm perfectly happy to accept the permanent suspension of my account, I at least think that I should be able to talk to somebody about it. It isn't even that I don't feel responsible, but the accusation of targeted abuse is asinine in the extreme. Yes, I've called people idiots and morons, but targeting? Seriously?
While penning this epistle, a wonderful example of this inability to deal with context and nuance has been brought to my attention, and it beautifully highlights the problem with not being able to talk to a human. The picture on the left is a bot response to a tweet. I don't know if this was a reported tweet, but it's almost entirely irrelevant.
In this tweet, a user is explaining hate speech to somebody. As a result of his using some words that your algorithm has determined are 'bad words' (I won't enter into the logic of that, not least because it's been covered elsewhere on this blog), a line of code has sanctioned the user. Because the user is unwilling to delete a tweet that your moderation algorithm has deemed unfit for publication, his account is currently suspended.
Now, there are those who will insist that certain words may not ever be used, or may only be used by certain demographics such as those to whom such words apply, and that's certainly a useful topic for discussion, but in this case, it's trivial to see that context is everything. I myself have used the first of those words on this blog to highlight something very much like the same thing. Is it really appropriate to allow a keyword-driven algorithm to adjudicate this matter? Given that your appeals system also appears to be moderated by code, is the user likely to get a different response to his appeal?
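To make the point concrete, a naive keyword filter of the kind your responses suggest might look something like the following sketch. To be clear, this is my own illustrative guess at the approach, not your actual code; the function name and the word list are invented for the purpose:

```python
# A minimal sketch of naive keyword-based moderation.
# The word list and function name are hypothetical, for illustration only.

BLOCKED_WORDS = {"idiot", "moron"}  # placeholder keyword list

def naive_filter(tweet: str) -> bool:
    """Flag a tweet if it contains any blocked word, regardless of context."""
    words = {w.strip(".,!?'\"").lower() for w in tweet.split()}
    return bool(words & BLOCKED_WORDS)

# A genuinely abusive tweet is flagged...
assert naive_filter("You are an idiot.")
# ...but so is a tweet merely *mentioning* the word to explain why it's abusive...
assert naive_filter("Calling someone an idiot to silence them is abuse.")
# ...while targeted abuse that avoids the keywords sails straight through.
assert not naive_filter("People like you have no morals and deserve what's coming.")
```

The second and third cases are precisely the failure modes I've described: the use/mention distinction is invisible to keyword matching, and abuse phrased without the trigger words is invisible to it entirely.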
I know this user well, and I have no hesitation in saying that racism, indeed any form of prejudice, is about as far from being a reasonable accusation as it's possible to get without relocating to the Kalium galaxy. This brings the issue of algorithmic moderation into stark relief.
Coming back to my own circumstance, I can understand why an algorithm might assess my actions as targeted abuse, because it can't actually understand the context of those comments. Because mine is primarily a sci-comm channel and because, as a result, I routinely butt heads with those who wish to undermine or otherwise denigrate valid science for their own ideological ends, it can certainly seem that my depredations are targeted, but a quick look from a human would show that there's more to it than that. I don't only deal with Christian deniers; I'm an equal-opportunity debunker, regularly dealing with anti-vaxxers, flat-earthers, geocentrists, homoeopaths and purveyors of all manner of superstitious, dangerous and scientifically illiterate tripe. Indeed, I have several religious people among my followers who support my account and my blog, because they have no trouble with valid science. I even have one creationist, a man from my own town (who knew that there were creationists in Manchester?) who, while a denier in the fullest sense, recognises that my motivation is always that of edification and of undermining the creeping, pernicious trend to keep children stupid.
Moving on, a brief word about my identity, because it's been suggested that the fact that I don't use my real name or photo on my account may be a factor. I don't think this stacks up, because some of the victims of this egregious abuse I'm talking about use their own names and photos.
My nickname is my real-life nickname, given to me by a colleague in music production owing to my approach to the craft, wherein I cut away all the cheesy, unnecessary bullshit to expose the pure musical joy beneath. My avatar is a picture done by a friend on an internet forum.
My identity is trivial to discover for anybody who wants to know it, and I offer it on request or on being challenged with the asinine assertion that I'm somehow hiding behind anonymity, and that I wouldn't employ such robust challenges if my identity were known. Not only have I been using the same nick over many platforms for well over a decade with my true identity attached, my real name and photographs of me can be found on my media page. In case you're wondering, my name is Tony Murphy, I live in Manchester, and my photo can be seen on the left.
This blog is mostly about how we think about things, and this is a perfect topic, because how we think about offence is among the most ubiquitous cases of shoddy thinking in human experience. I'm extremely passionate about correct thinking and the education of our children and, while I always try to avoid letting my passion get the better of me, this is an important issue for the future of our species, and said passion can occasionally spill over into epithets. Again, this is my responsibility, and I make no excuses for it, not least because no excuses are apposite.
Offence is generally something taken, not something given. There are exceptions, of course, and they highlight the genuine issues faced by real people and their lived experiences: racism, misogyny, anti-Semitism, abuse of those with non-binary gender identities or of supporters of basic rights such as marriage equality; the list goes on. All that is, of course, entirely aside from the practice of sending unsolicited dick-pics to female users. Interestingly, as described above, the majority of such exceptions routinely go entirely unsanctioned on your platform, despite their ubiquity. Meanwhile, people engaged in what is, at bottom, no more than slightly robust discourse are sanctioned without restraint and with no recourse to appeal, despite what your T&Cs say on the subject of the latter.
It's also worth noting that there are groups of users, notably those subscribed to the account going by the moniker @/MaxKolbeGroup and its sister account @EscapingAtheism, as well as a known targeted campaign by one person under many guises, the artist formerly known as @Sacerdotus, who engage in all sorts of abuse of sceptics and atheists. Because they don't use the specific keywords recognised by your algorithms, however, they remain free of sanction. They question our integrity, our morality, accuse us of all sorts of things that no right-thinking moral agent would countenance, equate us with mass-murderers and state-sanctioned genocide, and all manner of disgusting insinuations. If this isn't targeted abuse, then nothing is.
I've been tweeting more or less continuously since the suspension of my primary account, tagging in Jack, support and your head of safety, to no avail. When you're dealing with the actions and discourse of humans, you really need some recourse to humans. I fully understand that, given the number of users you have, it's problematic to have a human look at every case but, at the very least, a human should look at all appeals, or users should have some means of contacting a human so that discourse can be engaged in. With the current system, this appears to be impossible. All attempts to enter into discourse fail, and one gets some insight into what it must have been like for the crew of NCC-1701-D when confronted with Locutus.
Twitter, like all venues geared toward discourse, should be a place wherein robust discussion is encouraged and allowed freedom to breathe. This is not happening currently, because the nature and scope of your sanction system is unfairly weighted against those who would honestly speak their minds. Certainly, insults should be kept to an absolute minimum, as they can serve to curtail discourse, but a true statement is not an insult, no matter how offended somebody might be by it. It's also worth noting that the abuses themselves are far more indicative of intent than the language with which we choose to express them. Moreover, insults do not, in and of themselves, curtail discourse nearly as much as your current draconian and Kafkaesque reporting and sanction system is doing.
The reference to Kafka is apposite here. Christopher Hitchens, still much missed by the sceptic community, often related an amusing anecdote about a visit to the then Czechoslovakia in the days of the communist regime:
Journalists hate cliché. I know it doesn’t always seem like that when you read the papers, but we try and avoid them. I went to Prague once under the old days of the communist regime. I thought whatever happens to me here, I’m not going to mention Franz Kafka in my essay. I’m going to be the first journalist not to do it. I went to a meeting of the opposition underground, somebody betrayed us because the secret police came in and, suddenly, wham like this just broke down the door, dogs, torches, rubber truncheons, the lot. They slammed me against the wall, "You’re under arrest." "Well, I demand to see the British ambassador." "Blah, blah, you’re under arrest." "What’s the charge?" "We don't have to tell you that." I thought, fuck, I’ve got to mention Kafka after all.

The reference is, of course, to Kafka's novel The Trial, in which Joseph K., a man about town, is arrested. The entire town knows about his case, and it's the subject of much discussion, but there's no mention of precisely what the charge is or what it is he's supposed to have done. I won't spoil the novel in case anybody reading this wishes to check it out (or the excellent and sinfully-overlooked film adaptation starring Kyle MacLachlan, with screenplay by Harold Pinter), but it's beautifully reflected in the non-response given to my appeal pictured above. In my case, I can only make some guesses based on the timing, but even that is purely a guess. I know of one recent case in which somebody was sanctioned for a tweet well over a year old.
Ultimately, Twitter's strength is its users. It's we who give value to the platform and make it a desirable place for advertisers, those who actually make it profitable, to punt their wares. I don't expect that our tiny demographic's concerns are likely to undermine Twitter's ability to do business in any way, but it's worthy of consideration that your reporting system, ostensibly geared toward safety and reduction of abuse, is actually having the opposite effect. Any outlet aimed at discourse that fails to enter discourse in matters of concern to users - any users - is undermining its utility and attraction. It may only be our demographic that's massively affected by this at the moment, but the lack of consistency in dealing with abuse as detailed above - especially the ignoring of those instances of abuse that go unpunished, and the overlooking of high-follower accounts when they engage in abusive and demeaning comments aimed at large demographics - is making Twitter a less and less desirable platform.
As somebody who comes here from the forum culture as user, moderator and administrator, and having engaged in internet activity prolifically for several decades, I'm more than aware that there's no simple solution to this problem, but I'm also aware that failing to enter into discussion with those voicing concerns is no solution at all.
In closing, your reporting system, far from making Twitter a 'safe space', is actually having the opposite effect. Please get it sorted. I know there are many users who not only feel these concerns but are more than willing to engage with your representatives to aid you in resolving these issues. I myself have some suggestions in this regard, garnered over many years of moderation, engagement and customer service, all of which would come with minimal disruption to your service and, ultimately, to your bottom line.
Regards,
Tony Murphy
Concerned Tweep and science communicator
Manchester, UK
P.S. I'm going to begin collating a list of affected accounts. I'm happy to provide this if I receive any response from Twitter regarding this matter.