Call of Duty, one of the largest online games, is taking steps to address toxic behavior in its voice chat. The game will implement new AI moderation software called ToxMod, developed by Modulate, which is designed to detect and act on toxic speech, including hate speech, discriminatory language, and harassment.
ToxMod is a proactive voice chat moderation solution built specifically for gaming. It has been used by smaller online games before, but Call of Duty marks its first deployment at this scale. The software works by identifying toxic speech in real time and flagging it for enforcement.
It’s important to note that ToxMod will not issue punishments to players directly. Instead, it will submit reports of toxic behavior to Activision, the publisher of Call of Duty, which will then decide what action to take against violators of its voice chat moderation policy. Flagged abuse will be evaluated not only on the specific words used but also on the context in which they are used.
For example, while the n-word is generally considered offensive, it can be used affirmatively within certain communities. In such cases, context is taken into account when determining the severity of an offense. The aim is to prevent false positives and keep moderation fair.
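To make the general shape of this flag-and-report flow concrete, here is a minimal sketch in Python. This is purely hypothetical: the term lists, function names, and data structures are invented for illustration and do not reflect ToxMod's actual models or Activision's systems, which rely on machine learning rather than keyword matching.

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    LOW = 1   # ambiguous context; routed to a human reviewer
    HIGH = 2  # flagged terms in a clearly hostile context


@dataclass
class VoiceChatReport:
    player_id: str
    transcript: str
    severity: Severity
    context_notes: str


# Hypothetical term lists, stand-ins for a learned model's signals.
FLAGGED_TERMS = {"slur_a", "slur_b"}
AGGRESSION_MARKERS = {"kys", "trash"}


def evaluate_clip(player_id: str, transcript: str) -> VoiceChatReport | None:
    """Flag a transcribed clip, weighting context rather than bare keywords.

    Returns a report for the publisher to review, or None if nothing was
    flagged. Note that this function never punishes anyone itself.
    """
    words = set(transcript.lower().split())
    if not words & FLAGGED_TERMS:
        return None

    # Crude context signal: a flagged term alongside aggression markers is
    # treated as likely harassment; otherwise severity is downgraded so a
    # human reviewer makes the call, reducing false positives.
    hostile = bool(words & AGGRESSION_MARKERS)
    severity = Severity.HIGH if hostile else Severity.LOW
    notes = "hostile context" if hostile else "ambiguous context; needs human review"
    return VoiceChatReport(player_id, transcript, severity, notes)


def submit_to_publisher(report: VoiceChatReport) -> None:
    # Stand-in for a call into the publisher's enforcement queue;
    # the actual enforcement decision happens downstream, not here.
    print(f"Queued report for {report.player_id}: "
          f"{report.severity.name} ({report.context_notes})")


if __name__ == "__main__":
    report = evaluate_clip("player_123", "you are trash slur_a")
    if report:
        submit_to_publisher(report)
```

The key design point the sketch captures is the separation of duties: the detector only produces context-annotated reports, and the publisher retains the final say on penalties.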
While the introduction of this AI software is a positive step toward curbing toxic behavior, some concerns remain. Moderation is a complex problem, and relying on technology alone to make judgment calls is not foolproof. Platforms like Instagram have faced criticism for questionable moderation rules that often produce unintended consequences.
The new moderation tool, ToxMod, will roll out in beta across North America starting August 31. Activision will share updates on when it becomes available to all players. In the meantime, players are encouraged to follow the rules, play nicely, and steer clear of toxic behavior.