
OpenAI today announced the availability of o3-mini, its latest affordable reasoning model, which can match the performance of OpenAI's o1 model in math, coding, and science. The new o3-mini model is available through OpenAI's APIs with full support for function calling, Structured Outputs, streaming, and developer messages across the Chat Completions API, Assistants API, and Batch API.
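For illustration, a minimal sketch of calling o3-mini through the Chat Completions API with the official openai Python SDK might look like the following; the prompt contents are placeholders, and the developer message shown is the role reasoning models accept in place of a system message:

```python
from openai import OpenAI

# The client reads the OPENAI_API_KEY environment variable by default.
client = OpenAI()

# Reasoning models accept "developer" messages in place of "system" messages.
stream = client.chat.completions.create(
    model="o3-mini",
    messages=[
        {"role": "developer", "content": "Answer concisely and show your working."},
        {"role": "user", "content": "What is the derivative of x^3 * ln(x)?"},
    ],
    stream=True,  # stream tokens back as they are generated
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```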
Alongside OpenAI's announcement, Microsoft confirmed that o3-mini is now available in the Microsoft Azure OpenAI Service. Interested developers can sign up for Azure AI Foundry to access o3-mini.
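For developers going through Azure instead, a sketch of calling an o3-mini deployment via the Azure OpenAI Service with the same SDK could look like this; the endpoint, API version, and deployment name are placeholders that depend on your own Azure OpenAI resource:

```python
import os
from openai import AzureOpenAI

# The endpoint, key, API version, and deployment name are placeholders;
# they come from the Azure OpenAI resource you create in Azure AI Foundry.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-12-01-preview",
)

response = client.chat.completions.create(
    model="<your-o3-mini-deployment>",  # the deployment name you chose
    messages=[{"role": "user", "content": "Summarize the quicksort algorithm."}],
)
print(response.choices[0].message.content)
```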
Yina Arenas, Vice President of Product, Core AI at Microsoft, wrote the following regarding the o3-mini launch:
o3-mini adds significant cost efficiencies compared with o1-mini with enhanced reasoning, with new features like reasoning effort control and tools, while providing comparable or better responsiveness. o3-mini’s advanced capabilities, combined with its efficiency gains, make it a powerful tool for developers and enterprises looking to optimize their AI applications.
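The "reasoning effort control" mentioned above surfaces in the API as a reasoning_effort parameter, which lets developers trade response speed and cost against reasoning depth. A minimal sketch, assuming the same Chat Completions setup as above:

```python
from openai import OpenAI

client = OpenAI()

# reasoning_effort accepts "low", "medium", or "high"; lower effort generally
# means faster, cheaper responses with less deliberation before answering.
response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="high",
    messages=[
        {"role": "user", "content": "Prove that the sum of two even integers is even."},
    ],
)
print(response.choices[0].message.content)
```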
Microsoft-owned GitHub also announced the availability of o3-mini in GitHub Copilot and GitHub Models. Compared with o1-mini, developers can expect higher-quality outputs from the new o3-mini model. The model is available to GitHub Copilot Pro, Business, and Enterprise users via the model picker in Visual Studio Code and in chat on github.com. GitHub will bring o3-mini support to Visual Studio and JetBrains IDEs in the coming weeks.
GitHub Copilot subscribers will get up to 50 o3-mini messages every 12 hours. Copilot Business and Enterprise admins can also enable o3-mini access for their organization's members through the admin settings pages.
GitHub is also bringing o3-mini to the GitHub Models playground, where developers can explore the model's capabilities and compare it with other models from Cohere, DeepSeek, Meta, and Mistral.
The o3-mini model's availability across multiple platforms will enable developers to integrate its new reasoning capabilities into their applications and services.