“Using large language models to moderate content is a step forward, not because it will be perfect, but because it will be more consistent and less prone to human emotion and cultural differences.”
– Reed Albergotti
OpenAI is using an AI model to moderate content on its own platform. OpenAI’s head of safety systems described the practice in an interview with Semafor.
The breakthrough is having sufficient contextual language awareness for the model to identify problematic content.
The OpenAI safety chief suggested the system could also suit other sites, such as social media and e-commerce platforms. It is said to perform on par with a human moderator who has received only “light” training, though it does not yet match highly experienced human oversight.
Automated moderation offers several benefits, including scale, speed, and safety. It brings a tireless, neutral, and dispassionate eye to reviewing material. Automated systems could also spare human reviewers the traumatic effects some have reported after prolonged exposure to offensive content.
Can ChatGPT become a content moderator? | Semafor | August 15, 2023 | by Reed Albergotti