OpenAI’s GPT-OSS is Another Deepseek Moment (But Not the Way You Think)

OpenAI released its first open models. (#irony) And the whole AI space is going crazy. The GPT-OSS models are off the charts in the benchmarks, matching or even beating some of the leading models out there, including leading OpenAI models (at least for a few more days, until they release something big). And yes, we do need confidence intervals around benchmarks.
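To make that concrete, here is a back-of-the-envelope sketch (the pass counts and eval size are made up for illustration) of how wide a 95% Wilson interval is on a typical 500-item benchmark. Two models a point apart are statistically indistinguishable:

```python
import math

def wilson_ci(passes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a pass rate on an n-item benchmark."""
    p = passes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical scores: two models "beating" each other on a 500-item eval.
lo_a, hi_a = wilson_ci(passes=441, n=500)  # model A: 88.2%
lo_b, hi_b = wilson_ci(passes=436, n=500)  # model B: 87.2%
print(f"A: {lo_a:.1%}-{hi_a:.1%}, B: {lo_b:.1%}-{hi_b:.1%}")  # the intervals overlap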

And it is great to have a very good Western open model. (The discussion about how open an open model really is, I'll save for another day.) I mean, we have had open models from Meta for a long while now; there were Mistral and Gemma, not to mention all the open models out of China. Sure, Meta has hinted that, now that it has bought up all the talent it can poach from other companies, it may not release open models anymore. Mistral has been slowly closing its models, and Gemma (Google's open model) is not that great without mixture-of-experts or reasoning capabilities. So gpt-oss is kind of a big deal. But I have been using pretty amazing open models out of China for a while now. Qwen3 has done pretty well with just about everything we throw at it, and while the big one is, indeed, massive, we can run it Q4-quantized on our 144GB-VRAM machine (three RTX 8000s), i.e., on prosumer-grade equipment.
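For the curious, here is roughly what that looks like in practice: a minimal sketch using llama-cpp-python with a Q4 GGUF of Qwen3-235B-A22B spread across the three cards. The file path, split ratios, and context size are illustrative, not a recipe:

```python
# pip install llama-cpp-python (built with CUDA support)
from llama_cpp import Llama

# Hypothetical local path to a Q4-quantized GGUF of Qwen3-235B-A22B.
llm = Llama(
    model_path="models/qwen3-235b-a22b-q4_k_m.gguf",
    n_gpu_layers=-1,                # offload every layer to the GPUs
    tensor_split=[1.0, 1.0, 1.0],   # spread weights evenly across three cards
    n_ctx=8192,                     # modest context to leave room for the KV cache
)

out = llm("Explain mixture-of-experts routing in two sentences.", max_tokens=128)
print(out["choices"][0]["text"])
```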

This really feels like the big Deepseek moment of January 2025, when the (admittedly very cool) model I had been using since late November 2024, Deepseek R1-Lite-Preview, did a minor update and everyone lost their minds. Even industry professionals were saying that the V3 model, released a few weeks before Deepseek R1, was much more interesting in terms of innovation. Now I am sitting here just blinking and wondering: what's so great about these gpt-oss open models?

Let's give credit where it's due. It's a good model. Both the 20B and the 120B versions are great. They perform amazingly on the benchmarks. (More on this later.) Both are super fast. They are just the right size. Qwen3 235B-A22B is OK if you have 140GB of VRAM (so three 48GB, four 40GB, or two 80GB GPUs), but who has that? It won't even run on a single Nvidia DGX Spark machine, if Nvidia ever decides to ship those. (They announced it in early January, and it's August.) But Qwen3 30B-A3B is good. You can easily run it, and run it very fast, on a Mac with 24GB of unified memory or an RTX 3090. Sure, Deepseek is massive; you need eight 80GB GPUs to run the full model if you don't use a distilled version. But mixture-of-experts open models are not that new. (Mistral had one in 2023 already.)
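If you want to sanity-check those VRAM numbers yourself, the weight-only arithmetic is simple. A sketch, assuming a Q4_K_M-style quantization averages roughly 4.5-4.8 bits per weight (KV cache and activations come on top of this):

```python
def weight_footprint_gb(params_b: float, bits_per_weight: float) -> float:
    """Rough weight-only footprint in GiB; KV cache and activations come on top."""
    return params_b * 1e9 * bits_per_weight / 8 / 1024**3

# Assumed average of ~4.7 bits per weight for a Q4_K_M-style quant.
print(f"{weight_footprint_gb(235, 4.7):.0f} GB")  # ~129 GB -> three 48GB cards
print(f"{weight_footprint_gb(30, 4.7):.0f} GB")   # ~16 GB  -> one 24GB card
```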

But how do these models benchmark against gpt-oss? Who cares? It is becoming blatantly clear that models today are trained to beat benchmarks. Coding capabilities are certainly important, but a lot of us use these models for more than coding. Follow The Nerdy Novelist on YouTube if you want to see how these models do in various real-life applications (in his case, fiction writing). Apparently, gpt-oss starts to crap out quickly when you throw languages other than English at it? (And the reality is that, with the exception of Mistral models, I have not had much luck asking an LLM to write in Hungarian. Claude is an amazing writer… in English, that is. Just don't ask it to help you with grant writing in Hungarian.)

My team immediately started testing gpt-oss for our research applications. You know what? It's not that great. The closed OpenAI (#moreirony) models are better. And I was just told that, of the open models, Facebook's much-pooh-poohed Llama 4 (with its poor benchmarks) does great for our use case.

So there you have it. I am actually super excited to see more open models. I am excited to see the conversations around AI safety with regard to gpt-oss. These models are pretty impressive at the narrow thing they focused on: beating the benchmarks. Beating benchmarks certainly has positive externalities, like excellent programming performance. It's great to have such strong models running on small hardware. But these gpt-oss models are not as great as they are made out to be once you really hit them with real-life usage. It takes 10 hours of usage to really get to know an LLM. I would not be surprised if models that didn't benchmark amazingly worked better for your use cases. The GPT-OSS models will get better. (All models will get better.) I can't wait for gpt-oss2, where they gather all the feedback and really improve these models. They can't come fast enough.

But until then, remember: all models are only as good as they are useful for your use cases. And nothing will tell you how good they are other than your own experience with that use case. When you look at things this way, there's really not a whole lot to see here. I wouldn't be surprised if Qwen3 30B-A3B (a model admittedly slightly bigger than gpt-oss 20B, but definitely comparable in size) beat these new open OpenAI models (#Ishouldstop) at most things, that is, anything not asking about Xi Jinping or Tiananmen Square.
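And checking that for yourself is cheap. Here is a minimal side-by-side sketch, assuming both models are served locally behind an OpenAI-compatible API (Ollama shown; the model tags and prompts are illustrative, swap in whatever your server actually hosts):

```python
# pip install openai -- any OpenAI-compatible local server works (Ollama shown).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

# Hypothetical model tags and prompts; use your own real-life tasks here.
MODELS = ["gpt-oss:20b", "qwen3:30b-a3b"]
PROMPTS = [
    "Summarize this grant abstract in Hungarian: ...",
    "Draft a polite rejection email to a vendor.",
]

for prompt in PROMPTS:
    for model in MODELS:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            max_tokens=200,
        )
        print(f"--- {model} ---\n{resp.choices[0].message.content}\n")
```

Ten hours of this on your own prompts will tell you more than any leaderboard.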

If you want to actually hear me babble (more) about AI in Hungarian, check out this podcast.