Many have wondered whether their gaming community is one of the most toxic, and a recent study now gives insight into which games trigger some of the most offensive comments. A study by Clutch used an AI to analyze over 1.3 million comments from one hundred different gaming communities in an attempt to track toxicity within those groups. While games themselves attempt to filter the language players use, online communities give the clearest insight into what is frequently said.

Major game companies have formed alliances to combat toxicity, yet the issue persists. While the AI could not fully contextualize the comments (some profanities, for example, were used as jokes or compliments), it still caught a great deal of offensive language and slurs. The technology that made the study possible was created by IBM, built after people assigned toxicity ratings to thousands of words.

RELATED: Phil Spencer Talks Toxic Gaming Environments And Early Xbox One Failures

Games can provoke negative reactions through sheer difficulty, and the study attributes much of this to PvP games and MOBAs. It should also be noted that player-versus-player interactions give frustration a human target, directing it at another person rather than at the game itself.

The study divides toxicity into four categories: "Obscene, Threatening, Insulting, and Identity Hate." While results varied across these categories, many won't be surprised to see titles with previous toxicity issues, such as League of Legends, ranking in the top ten for overall toxicity.

AI study of toxicity

Professional players have complained about the toxic behavior of these communities, and many of the titles included won't come as too much of a surprise to gamers. In terms of "Identity Hate," The Binding of Isaac is at the top of the list, while Payday: The Heist had the worst subreddits for "Insulting" and "Obscene" comments. The most "Threatening" threads belonged to Mount & Blade. These gaming communities have an abundance of offensive language, even if the AI could not identify every contextual phrase. It should be noted, however, that these toxicity ratings come from Reddit comments rather than in-game incidents, so the actual gaming experience may differ from these findings.

The study isn't entirely reliable, because the language gamers commonly use would often trigger the AI without reflecting any genuine toxicity. While toxicity remains a problem in games, this study is not the most accurate way to home in on it. That said, identifying communities that trip the study's parameters is not only interesting but may help some players avoid potential harm.

MORE: Priest Creates Minecraft Server to Combat Toxicity

Source: Clutch