Twitch: More transparency planned for channel bans

Those responsible at Twitch plan to become somewhat more transparent in the future when it comes to banning a channel on the streaming platform. This has repeatedly drawn criticism in the past, because affected streamers sometimes received only sparse details about their ban. With the new system, the notification will list the title of the broadcast, the date, and the length of the suspension. An example of what this message will look like for streamers was recently published on Twitter.

However, Twitch is still not completely transparent: the company continues to withhold the exact reason for a channel ban. With the new details, streamers can at least narrow down the time period and possibly identify the cause themselves. One of the most famous bans in Twitch history is the end of Dr Disrespect in 2020. To this day, it has not been clarified why the long-standing Twitch partner was removed from the platform with immediate effect. At the moment, the Doc is active on YouTube Gaming; a return to Twitch is considered impossible.

Instead, Guy "Dr Disrespect" Beahm is now apparently even planning to found his own development studio. Twitch is currently grappling with criticism on many fronts. Most recently, several Twitch partners on Twitter have asked for more money for their subscriptions, from which they receive 50% of the revenue so far. Only the platform's elite have negotiated individual contracts with the streaming platform.

Twitch responds to ‘Twitch Do Better’ movement with improved chat filters

Today, Twitch has issued a statement announcing the steps it’s taking to protect its marginalized streamers.

“We’ve seen a lot of conversation about botting, hate raids, and other forms of harassment targeting marginalized creators,” Twitch writes. “You’re asking us to do better, and we know we need to do more to address these issues.”

Twitch says it’s identified “a vulnerability in our proactive filters, and have rolled out an update to close this gap and better detect hate speech in chat.” It also says it will implement more safety features in the coming weeks, including improvements to the account verification process and ban evasion detection tools.

This statement is in response to the hashtag #twitchdobetter, which was an effort started by Twitch creator RekItRaven in order to bring awareness to harassment issues that Black creators were experiencing on the streaming platform.

“I was hate raided for the 2nd time in a week and I shared both the first and second occurrences [on Twitter] because they were very pointed rather than the normal, ‘You’re fat, black, gay stuff,’” Raven tells The Verge via direct messaging.

(Content warning: racism)

Raiding is a popular Twitch feature that allows a streamer to send viewers to another streamer at the end of their broadcast. It’s a tool used to boost viewership, grow communities, and foster connections between streamers and their audiences. Hate raids are the polar, toxic opposite. In hate raids, a streamer directs their viewers to another creator — who is often Black, queer, female, or has an intersection of marginalized identities — in order to bombard that streamer with hate speech and harassment.

Raven believes they became a target for hate raids because they stream using the Black tag, a new Twitch feature that allows users to classify their streams with different markers. The tags are ostensibly used for creators to categorize their streams so interested users can better find the content they’re looking for, but they also create a beacon trolls use to zero in on vulnerable, marginalized streamers. After their experience with hate raids, Raven noticed other marginalized streamers in their community were having the same experiences. And with no word from Twitch on what was being done to protect its users from that kind of targeted, violent harassment, Raven decided to restart the conversation.

“I started #TwitchDoBetter because I’m tired of having to fight to exist on a platform that says they’re diverse and inclusive but remained silent to the pleas of marginalized creators asking for more protections from hate raids,” Raven says.

Twitch struggles with keeping toxicity off its platform. Last year, streamer CriticalBard was subjected to a wave of racist trolls when he became the temporary face of the “PogChamp” emote. Twitch also removed its TwitchCop emote amid concerns it might be used to harass creators talking about police violence after George Floyd’s murder. In these situations and now, Twitch has been reactive to the needs of its users rather than proactive, resulting in creator frustration. Better, more proactive moderation tools have been a perennial ask from Twitch’s marginalized creators.

The tools Twitch is implementing in today’s safety rollout will seemingly only address trolls using non-Latin characters to circumvent chat filters. Streamers are asking for more.
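To illustrate what that kind of filter gap can look like (Twitch has not published its filter internals, so this is only a guess at the general mechanism), the sketch below shows how swapping in non-Latin look-alike characters slips past a naive keyword check, and how Unicode normalization plus a small confusables map catches the same message. The blocklist term and the confusables table are placeholders.

```python
# Illustrative sketch only; not Twitch's actual filter.
import unicodedata

BLOCKLIST = {"slur"}  # placeholder standing in for real blocked words

# Hypothetical look-alike map; real systems use far larger confusables
# tables (e.g. the Unicode TR39 data).
CONFUSABLES = {"а": "a", "е": "e", "і": "i", "о": "o", "ѕ": "s"}

def naive_filter(message: str) -> bool:
    """Return True if the raw message contains a blocked word."""
    return any(word in message.lower() for word in BLOCKLIST)

def normalized_filter(message: str) -> bool:
    """Normalize Unicode and map look-alikes before checking."""
    text = unicodedata.normalize("NFKC", message)
    text = "".join(CONFUSABLES.get(ch, ch) for ch in text).lower()
    return any(word in text for word in BLOCKLIST)

evasion = "ѕlur"  # leading character is Cyrillic 's', not Latin
print(naive_filter(evasion))       # False - slips past the naive check
print(normalized_filter(evasion))  # True  - caught after normalization
```

A filter update like the one Twitch describes would presumably close this class of evasion, though the company has not said exactly how.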

“I’d love to see creators having more tools to control their experience like allowing creators to block [recently created] accounts from chatting, [and] allowing mods to approve or decline raids,” Raven says.
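As a rough illustration of the first of those requests, here is a minimal sketch of how a third-party moderation bot might gate chat on account age using Twitch’s public Helix users endpoint. This is not a Twitch feature; the client ID, token, and seven-day threshold are placeholder assumptions.

```python
# A minimal sketch of an account-age gate a third-party bot could apply.
from datetime import datetime, timezone
import requests

CLIENT_ID = "your-client-id"       # placeholder
OAUTH_TOKEN = "your-oauth-token"   # placeholder
MIN_ACCOUNT_AGE_DAYS = 7           # arbitrary threshold for illustration

def account_age_days(login: str) -> float:
    """Look up a user's creation date and return the account age in days."""
    resp = requests.get(
        "https://api.twitch.tv/helix/users",
        params={"login": login},
        headers={
            "Client-Id": CLIENT_ID,
            "Authorization": f"Bearer {OAUTH_TOKEN}",
        },
        timeout=10,
    )
    resp.raise_for_status()
    created_at = resp.json()["data"][0]["created_at"]  # e.g. "2016-12-14T20:32:28Z"
    created = datetime.fromisoformat(created_at.replace("Z", "+00:00"))
    return (datetime.now(timezone.utc) - created).total_seconds() / 86400

def allow_chat(login: str) -> bool:
    """Only let accounts older than the threshold send messages."""
    return account_age_days(login) >= MIN_ACCOUNT_AGE_DAYS
```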