
OpenAI’s New gpt-oss Models Aim to Shift the AI Landscape
For the first time in over five years, OpenAI is back in the open-source spotlight. On Tuesday, the company launched two open-weight AI reasoning models — gpt-oss-120b and gpt-oss-20b — freely available under the Apache 2.0 license. With this release, OpenAI aims to regain its position in the rapidly growing open AI ecosystem.
Relevance of This Development
OpenAI has largely followed a closed-source strategy in recent years. Its API-based business model and proprietary models like GPT-4o have dominated the market. But the rising influence of Chinese AI labs like DeepSeek and Moonshot AI — and U.S. political pressure — seems to have nudged OpenAI toward a more open stance.
CEO Sam Altman admitted earlier this year that OpenAI may have been on the “wrong side of history” regarding open-source AI. Now, with gpt-oss, the company is responding to both developer demand and geopolitical urgency.
Two Models, Two Use Cases
The new open-weight models are built for accessibility:
- gpt-oss-120b: A high-performance model that can run on a single 80GB Nvidia GPU.
- gpt-oss-20b: A lightweight version that can run on a consumer laptop with 16GB of RAM.
While these models are text-only, developers can link them with OpenAI’s closed models to enable advanced tasks like image processing. This hybrid approach keeps the base open while offering optional power-ups.
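To see why the smaller model fits on a 16GB laptop, it helps to estimate the memory footprint of the weights from parameter count and numeric precision. The sketch below is back-of-the-envelope only: the ~21 billion parameter figure for gpt-oss-20b is an assumption for illustration, and real deployments add runtime overhead on top of the weights themselves.

```python
def weight_memory_gb(num_params: float, bits_per_param: int) -> float:
    """Approximate memory (decimal GB) needed to hold model weights alone."""
    bytes_total = num_params * bits_per_param / 8
    return bytes_total / 1e9

# Assumed ~21B parameters for gpt-oss-20b (illustrative figure)
print(weight_memory_gb(21e9, 4))   # 4-bit quantized weights -> 10.5 GB
print(weight_memory_gb(21e9, 16))  # full 16-bit weights     -> 42.0 GB
```

At 4-bit quantization the weights fit comfortably in 16GB of memory, while the unquantized 16-bit version would not, which is why aggressive quantization is what makes laptop inference plausible.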
How Do They Perform?
OpenAI says both models beat many of their open-source rivals on reasoning tasks. On Codeforces, a competitive-programming benchmark, the models scored 2622 and 2516, outperforming DeepSeek's R1. On Humanity's Last Exam, a broad knowledge and reasoning test, they scored 19% and 17.3%, ahead of comparable open models from Qwen and DeepSeek.
However, they lag behind OpenAI's own o-series models and hallucinate more often, with rates as high as 53% on the PersonQA benchmark. OpenAI attributes that trade-off to the models' smaller size and more limited world knowledge.
Built with Safety and Efficiency in Mind
OpenAI used a mixture-of-experts (MoE) architecture, which routes each token through only a small subset of the network: gpt-oss-120b activates just 5.1 billion of its 117 billion parameters per token. This keeps inference efficient relative to the model's total size.
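The routing idea behind MoE can be sketched in a few lines: a router scores every expert for each token, only the top-k experts actually run, and the rest are skipped entirely. The dimensions, expert count, and k below are toy values for illustration, not gpt-oss's real configuration.

```python
import numpy as np

def moe_layer(x, router_w, experts, k=2):
    """Toy MoE layer: each token runs through only its top-k experts."""
    scores = x @ router_w                        # (tokens, num_experts)
    top_k = np.argsort(scores, axis=-1)[:, -k:]  # indices of the k best experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = scores[t, top_k[t]]
        w = np.exp(sel - sel.max())              # softmax over selected experts only
        w /= w.sum()
        for weight, e in zip(w, top_k[t]):
            out[t] += weight * experts[e](x[t])  # only k of the experts ever execute
    return out

rng = np.random.default_rng(0)
d, n_experts = 8, 16
# Each "expert" is just a small linear map in this sketch
experts = [(lambda W: (lambda v: v @ W))(rng.standard_normal((d, d)))
           for _ in range(n_experts)]
router_w = rng.standard_normal((d, n_experts))
tokens = rng.standard_normal((4, d))
y = moe_layer(tokens, router_w, experts, k=2)    # each token touches 2 of 16 experts
```

With 16 experts and k=2, only an eighth of the expert parameters participate in any token's forward pass; scaled up, that is the same principle that lets gpt-oss-120b activate 5.1 billion of 117 billion parameters.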
They also underwent reinforcement learning (RL) to strengthen reasoning. As a result, both models can power AI agents, call external tools such as web search or a Python interpreter, and follow a structured chain-of-thought.
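At its simplest, tool calling is a dispatch loop on the host side: the model emits a structured request naming a tool and its arguments, the host executes the tool, and the result is fed back into the conversation. The tool name and JSON shape below are invented for illustration and are not OpenAI's actual schema.

```python
import json

# Hypothetical tool registry; real deployments would expose web search, Python, etc.
TOOLS = {
    # Demo only: never eval untrusted input in production code.
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def handle_model_output(raw: str) -> str:
    """If the model emitted a JSON tool call, execute it; else pass text through."""
    try:
        msg = json.loads(raw)
    except json.JSONDecodeError:
        return raw  # plain text answer, no tool needed
    result = TOOLS[msg["tool"]](msg["arguments"])
    return f"tool {msg['tool']} returned: {result}"

# A simulated model turn asking for arithmetic
print(handle_model_output('{"tool": "calculator", "arguments": "6 * 7"}'))
# prints: tool calculator returned: 42
```

The agent frameworks built on these models wrap exactly this loop in retries, schemas, and safety checks, but the core contract between model and host is this simple.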
Open but Not Fully Transparent
Despite the “open” tag, OpenAI has not released training data — likely due to ongoing copyright lawsuits. But developers can freely use and monetize the models under Apache 2.0, without needing permission or paying OpenAI.
The company conducted safety audits before launch, testing whether the models could be fine-tuned into more dangerous systems. Its findings show a marginal increase in biological capabilities, but not enough to halt the release.
What Comes Next?
While the gpt-oss models push OpenAI into the open AI field again, the race isn’t over. Developers are already eyeing DeepSeek’s R2 and Meta’s next release from its Superintelligence Lab.
Still, this launch signals OpenAI’s intention to keep open AI development aligned with democratic values, while encouraging global adoption and collaborative innovation.