The internet can feel like a bottomless pit of the worst aspects of humanity. So far, there’s little indication that the metaverse—an envisioned virtual digital world where we work, play, and live—will be much better. As I reported last month, a beta tester in Meta’s virtual social platform, Horizon Worlds, has already complained of being groped.
Tiffany Xingyu Wang feels she has a solution. In August 2020—more than a year before Facebook announced it would change its name to Meta and shift its focus from its flagship social media platform to plans for its own metaverse—Wang launched the nonprofit Oasis Consortium, a group of game firms and online companies that envisions “an ethical internet where future generations trust they can interact, co-create, and exist free from online hate and toxicity.”
How? Wang thinks Oasis can foster a safer, better metaverse by helping tech companies self-regulate.
Earlier this month, Oasis released its User Safety Standards, a set of guidelines that include hiring a trust and safety officer, employing content moderation, and integrating the latest research in fighting toxicity. Companies that join the consortium pledge to work toward these goals.
Source: This group of tech firms just signed up to a safer metaverse | MIT Technology Review