OpenAI rolls out o3 and o4-mini: From coding and maths to visuals, how ChatGPT’s new models handle it all
OpenAI on Wednesday released its most advanced AI models yet, o3 and o4-mini. These models are a real step up in how artificial intelligence can reason, solve problems, and even use tools to get things done, claims the San Francisco-based company.
“These new models are part of OpenAI’s o-series, and they’re designed to think more deeply before answering, helping them tackle tougher, more complex questions in less time,” said the company in a blog post.
What is OpenAI o3?
The headline model, OpenAI o3, is now the most powerful reasoning model OpenAI has built. It is claimed to perform exceptionally well across subjects like programming, maths, science, and even visual analysis, setting new standards on well-known academic benchmarks like Codeforces, SWE-bench, and MMMU. According to the company, o3 makes 20 per cent fewer major errors than its predecessor, o1, especially in high-skill areas such as business consulting and technical innovation.
What is OpenAI o4-mini?
For those looking for speed and efficiency, o4-mini is a smaller but mighty model, built for fast, cost-effective reasoning. Despite its size, it is turning heads, topping the charts on maths-heavy exams like the 2024 and 2025 AIME and outperforming earlier models across a range of STEM and non-STEM tasks, claims OpenAI.
Functions of OpenAI o3 and o4-mini
These models are now much better at deciding when and how to use tools. They can search the web, run Python code, analyse images, generate charts, and explain their findings, all without needing much hand-holding. So, if you ask something like, “How will California’s summer energy use compare to last year?”, the model can look up the data, build a forecast, generate a graph, and walk you through the reasoning behind the prediction.
Another standout feature is their ability to work with visuals in a much more intelligent way. You can upload a photo of a whiteboard, a messy sketch, or even a blurry textbook diagram, and the model can interpret and reason through it, sometimes even manipulating the image as part of its thinking. This kind of visual reasoning is something earlier models could not really do well.
Both o3 and o4-mini now come with full access to ChatGPT tools—including file analysis, web search, code interpretation, and image generation.
What’s new and updated?
These models have been trained to know when to use each tool, which helps them handle more complicated tasks with ease and flexibility. OpenAI has also beefed up its safety protocols. These models were trained with updated safety data and rigorously tested across risk areas like cybersecurity, bio-threats, and even AI self-improvement, claims the company.
Notably, the company says both models passed their most demanding safety tests yet and remain well below any high-risk thresholds.
Alongside these upgrades, OpenAI has introduced Codex CLI, a simple but powerful coding agent you can run directly from your computer’s terminal. It brings the reasoning capabilities of o3 and o4-mini straight to your local environment, supporting tasks like reading screenshots or working with your own codebase. It is open-source and already live on GitHub, and OpenAI is also launching a $1 million grant programme to support innovative projects using Codex CLI.
Starting 16 April 2025, ChatGPT Plus, Pro, and Team users can access o3, o4-mini, and the new o4-mini-high, replacing older versions like o1 and o3-mini. Enterprise and education users will get access within the next week. Free users can also try o4-mini by selecting the new “Think” option when typing a prompt.