The live video game streaming giant has struggled for months to stem a wave of racist and homophobic harassment: outbursts of hateful comments aimed at certain content creators, notably people from marginalized communities.
Insults and shocking images flood their victims' chat windows, and if the victims ban the malicious users, the bullying simply continues under a new account.
Artificial intelligence intervenes
The new tool, called Suspicious User Detection,
"is there to help you identify these users based on certain signals [...] so you can take action," Twitch said in a statement.
This program, based on so-called machine learning technologies, will flag likely or possible fraudsters.
In the first case, the user's messages will not appear in public; only the streamer and their moderators will see them.
These people will then decide the fate of the bullies, that is, whether to monitor them or ban them.
"No machine learning system is 100% reliable," Twitch cautions, however. "That's why [the tool] does not automatically ban all potential fraudsters."
Twitch in the crosshairs
The platform, which is owned by tech giant Amazon, largely dominates the live streaming market worldwide. It claims more than 30 million unique visitors per day.
Last August, streamers called for a day without Twitch on September 1 to push the company to respond to the outbursts of hate.
Twitch launched new tools and also filed a complaint against two users who, it claims, operate multiple accounts on the platform from Europe under different identities and are able to generate thousands of robots (bots) in a few minutes for the purpose of harassing their victims.