Dear Ms Harvey,
I hope you are very well.
Thank you for your assistance in restoring my account.
Today, I'm writing to you about something that's causing a lot of problems for users on your platform. Ordinarily, I'd take this through support channels, but it's extremely difficult to get through to anybody, let alone somebody who can help. This also has safety implications that I suspect are not being considered.
There's been a recent increase in users being 'shadow banned', and it's becoming a massive problem. In case you're unaware of this (I'm sure you are, but others reading this may not be), this is the practice of placing a restriction on a user's account so that they don't appear in searches and other people don't receive notifications from them. It's only when another user visits their timeline and sees that they've been tagged in a tweet they never received that we get any indication there's a problem.
There have been instances, in fact, of users who have been shadow banned not appearing in threads in which other users are conversing with them directly. I attach an image of one such instance, which I had to visit the user's timeline to obtain.
As you can see, this looks like an orphaned tweet, despite it being a reply to a genuine human.
In just the last couple of weeks, many people who use Twitter for social interaction and debate have fallen victim to this pernicious practice. One might think that calling it pernicious is somewhat hyperbolic, except that the knock-on effects can actually be quite dangerous.
Firstly, this happens without any communication to the user, which means that the user is left entirely oblivious to the fact that what they're saying isn't getting out. It's only when somebody notices something strange that some investigation shows what the problem is.
It's been suggested by some that this is to do with filter settings, but I know that it isn't, because I have all filters switched off on my account.
Here's the danger, or one of them. There have also been cases in which direct messages sent by shadow-banned users were never received.
There are those, like myself, who leave their direct messages open for people who need an ear. One of the best things about Twitter is that one can find somebody to talk to at any time; there's always somebody ready to listen and to send a virtual hug or two. This is really important to a lot of people. If they reach out and get no response, that can be a huge problem, especially for the most vulnerable.
Yesterday, one of my friends was shadow banned. As a child, this person was kept isolated for long periods of time, and as a result isolation has become a trigger for her. What's even worse is that somebody in this situation has no idea that the isolation is the result of a decision made by an algorithm; they simply feel that they're being ignored. As somebody whose primary concern is safety, I know you grasp how risky this is for those who are vulnerable and have nobody to turn to.
Just to head off one line of objection: while it's certainly the case that people whose issues leave them vulnerable should seek professional help wherever possible, it's not the case that such help can always be found when the need for it is greatest, and that's setting aside the very real problems for those who simply can't talk to professionals for one reason or another. I know of at least one person who finds it impossible because of very bad experiences in the past.
While we all recognise the need to cut down on 'bots' and particular forms of spam, I feel it's critical to the safety of the platform that this is done in a manner that doesn't impact genuine users, or the cure becomes worse than the disease. I don't know what the solution is, but I'm more than happy to engage in dialogue to work toward one that cuts down problematic traffic while ensuring that the effect on genuine users is minimal. One thing that shouldn't be a factor is posting the same tweet multiple times, especially in private chat groups: users will often want to bring news items to the attention of several groups of people, so this is a very poor criterion for weeding out bots without catching human users.
This issue has been flagged to support by many people over a fair period of time, yet rather than anything being done to rectify it, the problem appears of late to have been escalating, and it's beyond time something was done to protect users from this practice.
At the time of writing, the affected users that I know personally to be genuine, having interacted with them both on and off Twitter, are:
@Geek_0nline @DonMichaud1 @NotJahWitness @CAPBrony @Fenrirdgodeater @mirandadied4u @oohglobbits5 @Clara_Resists @LavenderLady0 @DiamondLynne1
@WycheNick
I'd still really love to have some discussion about the issues raised in my previous email, especially with regard to having some human intervention in the moderation process. I'm sure that, between us, we can come up with something that will be of great benefit not only to Twitter but also to its users.
In the interest of full disclosure, I will be publishing this email on my blog so that we can have a ready venue for gathering data about how widespread this problem is. I will encourage users to leave usernames in the comments of anybody they know who's affected by this issue.
Thanks again for the work you do in making Twitter a safe environment, and I look forward to any discussion you feel might benefit from the inclusion of your users. Help us to help you.
Kind regards,
Tony Murphy
@hackenslash2