Ubisoft and Riot Games are partnering on research into AI-driven moderation tools to combat toxicity in video game communities. Toxicity in online games is an issue that gaming studios continue to struggle to address. It's not for lack of trying: major online games already rely on both automated systems and customer service-based solutions. But automation can't catch every issue and humans can't oversee everything, which leaves room for innovation. Ubisoft and Riot are partnering for just that.

Riot's interest in such a partnership is self-evident. League of Legends is one of the most popular online multiplayer games in the world, and a 2019 report found that 75% of League of Legends players had experienced harassment in the game. Tom Clancy's Rainbow Six Siege, Trackmania, and The Division are just a few of Ubisoft's successful multiplayer releases. Countering toxicity is in both companies' interests.


Introducing Zero Harm in Comms, what Riot Games and Ubisoft describe as the first step in a cross-industry project to benefit everyone who plays video games. Zero Harm in Comms is a research project centered on building a database of in-game data. This database will "train AI-based preemptive moderation tools." The goal is to improve the automatic detection of harmful behavior and ultimately foster more positive communities. As Ubisoft says, "With more data, these systems can theoretically gain an understanding of nuance and context beyond words."
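Neither company has published technical details of how these preemptive moderation tools will work, but conceptually they resemble text classifiers trained on labeled chat logs. The sketch below is purely illustrative, with invented example messages and a deliberately simple model; the actual Zero Harm in Comms tooling is not public.

```python
# Illustrative only: a toy toxicity classifier trained on labeled chat lines.
# The real Zero Harm in Comms systems are unpublished; all data here is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical training data: chat messages labeled 1 (harmful) or 0 (benign).
messages = [
    "gg well played everyone",
    "nice ult, that won us the fight",
    "uninstall the game, you are worthless",
    "report this idiot, worst player ever",
]
labels = [0, 0, 1, 1]

# A simple bag-of-words pipeline. Real systems would need far larger datasets
# and context-aware models to capture the nuance "beyond words" Ubisoft describes.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression()),
])
model.fit(messages, labels)

print(model.predict(["you are all worthless, uninstall"]))  # likely [1]
```

The point of pooling data across two publishers is visible even in this toy: the more varied the labeled chat logs, the better a model can learn which phrasings are harmful in context rather than flagging isolated keywords.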

Anyone reading that Ubisoft and Riot are putting together a database of player information would understandably have immediate privacy concerns. Riot explains that any data that could be used to identify an individual will be "removed before sharing." The data, which is said to be primarily chat logs from Riot and Ubisoft games including League of Legends, will be "scrubbed clean" of any personal information. There is notably no word on whether there will be oversight of this process, which is certain to invite skepticism regardless of what's said now.
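Neither company has described how the scrubbing pipeline actually works. As a rough idea of what stripping identifiers from chat logs can look like, the sketch below uses made-up patterns and player names; it is not Riot's or Ubisoft's process.

```python
# Illustrative only: a minimal sketch of scrubbing obvious identifiers from chat logs.
# Patterns, placeholder tokens, and the mention format are all assumptions.
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PLAYER_TAG = re.compile(r"@\w+")  # hypothetical in-game mention format

def scrub(line: str, known_names: set[str]) -> str:
    """Replace emails, mentions, and known player names with placeholders."""
    line = EMAIL.sub("[EMAIL]", line)
    line = PLAYER_TAG.sub("[PLAYER]", line)
    for name in known_names:
        line = re.sub(re.escape(name), "[PLAYER]", line, flags=re.IGNORECASE)
    return line

print(scrub("nice shot @Shadow77, add me at smurf@example.com", {"Shadow77"}))
# -> "nice shot [PLAYER], add me at [EMAIL]"
```

Even a simple pass like this shows why oversight matters: anonymization is only as good as the patterns and name lists it relies on, and anything it misses ends up in the shared dataset.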

Work on the Zero Harm in Comms project has been ongoing for around six months now, with Yves Jacquier, director of Ubisoft's La Forge R&D department, and Riot head of tech research Wesley Kerr already collaborating. The pair plan to share the findings of the Zero Harm in Comms project with the industry at large in 2023.

Few online video game fans would disagree that moderation efforts need to be improved. However, many players would argue that automated systems are less of a priority than human customer service. Ubisoft and Riot have a lot to prove to gain player trust on this subject, particularly when player privacy is at stake, too. Two of the largest online gaming companies in the world are creating a database of player chats with the goal of improving automated moderation, for better or worse.
