Upgrading the Moderation API with our new multimodal moderation model

Today we are introducing a new moderation model, `omni-moderation-latest`, in the Moderation API. Built on GPT‑4o, the new model supports both text and image inputs and is more accurate than our previous model, especially in non-English languages. Like its predecessor, it uses OpenAI's GPT‑based classifiers to assess whether content should be flagged across categories such as hate, violence, and self-harm, and it adds the ability to detect additional harm categories. It also provides more granular control over moderation decisions by calibrating probability scores to reflect the likelihood that content matches a detected category. The new moderation model is free to use for all developers through the Moderation API.
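As a rough sketch of what a multimodal request might look like (the exact `/v1/moderations` input shape and the `OPENAI_API_KEY` environment variable are assumptions, not taken from this announcement):

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/moderations"  # assumed endpoint


def build_moderation_payload(text: str, image_url: str) -> dict:
    """Build a request body mixing a text input and an image input
    for omni-moderation-latest (input shape is an assumption)."""
    return {
        "model": "omni-moderation-latest",
        "input": [
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }


def moderate(text: str, image_url: str) -> dict:
    """POST the payload; requires a valid OPENAI_API_KEY in the environment."""
    payload = build_moderation_payload(text, image_url)
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

See the official Moderation API guide for the authoritative request and response schema.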

Since we first launched the Moderation API in 2022, the volume and variety of content that automated moderation systems need to handle has increased, especially as more AI apps have reached massive scale in production. We hope today's upgrades help more developers benefit from the latest research and investments in our safety systems.

Companies across various sectors—from social media platforms and productivity tools to generative AI platforms—are using the Moderation API to build safer products for their users. For instance, Grammarly uses the Moderation API as part of the safety guardrails in its AI communication assistance to ensure its products' outputs are safe and fair. Similarly, ElevenLabs uses the Moderation API alongside in-house solutions to scan content generated by its audio AI products, flagging and preventing outputs that violate its policies.

The updated moderation model includes a number of major improvements:

* Multilingual performance: [Figure: text-moderation-007 vs. omni-moderation-latest multilingual performance. A higher AUPRC indicates better model performance in distinguishing between safe and unsafe examples on the hard multilingual eval set.]
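AUPRC (area under the precision–recall curve) summarizes a classifier's precision/recall trade-off across all score thresholds. As a minimal illustration of the metric (this is not the eval code used for the chart), average precision can be computed from scores and labels like this:

```python
def average_precision(scores, labels):
    """Average precision: mean of the precision values at each positive
    example, with examples ranked by descending score. A common
    approximation of AUPRC."""
    ranked = sorted(zip(scores, labels), key=lambda t: -t[0])
    hits = 0
    total_pos = sum(labels)
    ap = 0.0
    for rank, (_, label) in enumerate(ranked, start=1):
        if label == 1:
            hits += 1
            ap += hits / rank
    return ap / total_pos if total_pos else 0.0


# A perfect ranking puts all unsafe (label 1) examples first -> AP = 1.0
print(average_precision([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))
```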

* Calibrated scores: the new model's scores now more accurately represent the probability that a piece of content violates the relevant policies, and they will be significantly more consistent across future moderation models.
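Because the scores are calibrated probabilities, developers can apply per-category thresholds directly to the returned category scores. A minimal sketch (the category names mirror the API's categories, but the threshold values are illustrative, not recommendations):

```python
# Illustrative per-category thresholds; tune these for your own policies.
THRESHOLDS = {"hate": 0.4, "violence": 0.5, "self-harm": 0.2}


def flagged_categories(category_scores: dict) -> list:
    """Return the categories whose calibrated score meets its threshold."""
    return [
        cat
        for cat, score in category_scores.items()
        if score >= THRESHOLDS.get(cat, 0.5)  # assumed default threshold
    ]


print(flagged_categories({"hate": 0.05, "violence": 0.71, "self-harm": 0.01}))
# -> ['violence']
```

Calibration is what makes fixed thresholds like these meaningful: a score of 0.7 should correspond to roughly a 70% chance of a violation, now and under future model versions.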

AI content moderation systems help enforce platform policies and ease the workload on human moderators, crucially sustaining the health of digital platforms. That's why, just like our previous model, we're making the new moderation model free to use for all developers through the Moderation API, with rate limits depending on usage tier. To get started, see our Moderation API guide.
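Since free access is rate-limited by usage tier, callers should be prepared to retry when the API signals a rate limit (HTTP 429). A generic exponential-backoff sketch (the exception type and retry parameters are illustrative):

```python
import time


class RateLimitError(Exception):
    """Stand-in for an HTTP 429 rate-limit response from the API."""


def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Retry fn() on rate-limit errors, doubling the delay each attempt."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * (2 ** attempt))
```

In practice, prefer the `Retry-After` header when the server provides one instead of a fixed schedule.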

Ian Kivlichan, Justyn Harriman, Cameron Raymond, Meghan Shah, Shraman Ray Chaudhuri, Keren Gu-Lemberg

Flo Leoni, Jieqi Yu, Madelaine Boyd, Mingxuan Wang, Nithanth Kudige, Yao Zhou, Andrea Vallone, Alec Helyar, Edmund Wong, Francis Zhang, Hadi Salman, Henrique Ponde de Oliveira Pinto, Joyce Lee, Nick Preston, Raul Puri, Shibani Santurkar, Lindsay McCallum, Leher Pathak, Edwin Arbus, Kevin Whinnery, Beth Hoover, Freddie Sulit, Filippo Raso, Cary Hudson, Dev Valladares, Pranav Deshpande, Sam Toizer, Lilian Weng, Owen Campbell-Moore

Originally published on OpenAI News.