Yesterday, OpenAI launched its most cost-efficient small model, GPT-4o mini. GPT-4o mini surpasses both Gemini 1.5 Flash and Claude 3 Haiku on several AI benchmarks covering textual intelligence and multimodal reasoning.
As expected, Microsoft has announced that GPT-4o mini is now available on Azure OpenAI Service. You can try the model for free in the Azure OpenAI Studio Playground. Along with the GPT-4o mini model's availability, Microsoft also announced several updates to Azure OpenAI Service, detailed below.
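For developers, calling GPT-4o mini works the same way as any other Azure OpenAI deployment: create a deployment for the model, then call the chat completions API. Here is a minimal sketch using the openai Python SDK's AzureOpenAI client; the endpoint, key, API version, and deployment name are placeholder values for illustration, not details from the announcement.

```python
import os
from openai import AzureOpenAI

# Placeholder configuration; substitute your own resource's values.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",  # example API version; check your resource
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # the name you gave your Azure deployment
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize GPT-4o mini in one sentence."},
    ],
)

print(response.choices[0].message.content)
```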
AI Safety for GPT-4o mini:
All existing Azure AI Content Safety features, including prompt shields and protected material detection, will be "on by default" for GPT-4o mini on Azure OpenAI Service. Microsoft is also improving the performance of Azure AI Content Safety with a new asynchronous filter feature. Additionally, Microsoft's Customer Copyright Commitment will apply to GPT-4o mini, meaning Microsoft will defend customers who use the model on Azure OpenAI Service against third-party intellectual property claims over its output content.
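In practice, those content safety checks surface as annotations on API responses. The snippet below, which reuses the response object from the earlier sketch, shows one way to read them; the field names follow Azure's documented content filtering annotations and should be treated as an assumption rather than something confirmed in this announcement.

```python
# Reuses the `response` from the earlier chat completions sketch.
# Field names ("content_filter_results", "prompt_filter_results") follow
# Azure's documented annotation format and are an assumption here.
raw = response.model_dump()

# Filtering verdicts for the generated output, per choice and category.
for choice in raw.get("choices", []):
    for category, verdict in choice.get("content_filter_results", {}).items():
        print(category, "severity:", verdict.get("severity"),
              "filtered:", verdict.get("filtered"))

# Filtering verdicts for the prompt itself.
for item in raw.get("prompt_filter_results", []):
    print("prompt", item.get("prompt_index"), item.get("content_filter_results"))
```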
Data Residency:
Azure OpenAI Service now offers data residency in all 27 of its regions, allowing customers to control where their AI-related data is stored and processed. Offerings such as regional pay-as-you-go and Provisioned Throughput Units (PTUs) give customers additional control over data processing and storage.
Global Pay-as-you-go:
GPT-4o mini is now available on Azure OpenAI Service through global pay-as-you-go deployment, priced at 15 cents per million input tokens and 60 cents per million output tokens, with throughput of up to 15 million tokens per minute (TPM). Global pay-as-you-go lets customers pay only for the resources they consume. Customers can also upgrade existing deployments to the latest models within the same region. Azure OpenAI Service offers GPT-4o mini with 99.99% availability.
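At those rates, estimating a workload's bill is straightforward arithmetic. In the quick sketch below, the prices are the ones quoted above, while the token counts are made-up illustrative numbers.

```python
INPUT_PRICE_PER_M = 0.15   # USD per million input tokens (rate quoted above)
OUTPUT_PRICE_PER_M = 0.60  # USD per million output tokens (rate quoted above)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the pay-as-you-go cost in USD for a given token volume."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: 10 million input tokens and 2 million output tokens
# -> 10 * $0.15 + 2 * $0.60 = $2.70
print(f"${estimate_cost(10_000_000, 2_000_000):.2f}")
```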
Efficient Deployment:
GPT-4o mini is coming to Azure Batch Service this month. Through the batch service, customers can submit high-throughput jobs with a 24-hour turnaround at a 50% discount by using off-peak capacity. Microsoft is also bringing fine-tuning for GPT-4o mini this month, letting customers customize the model for their specific use cases. On a related note, Azure OpenAI Service has switched fine-tuning to token-based billing for training and reduced hosting charges by up to 43%.
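Once batch support lands, submitting an off-peak job will presumably look something like the sketch below, which assumes Azure's batch offering mirrors the OpenAI-style batch API (upload a JSONL file of requests, then create a job with a 24-hour completion window). The API version, endpoint string, and file format here are assumptions, not details from the announcement.

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-07-01-preview",  # placeholder; batch may require a newer version
)

# requests.jsonl holds one chat-completions request per line (assumed format).
batch_file = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")

job = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/chat/completions",   # assumed endpoint string
    completion_window="24h",        # matches the 24-hour turnaround described above
)

print(job.id, job.status)
```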
With over 53,000 customers already leveraging Azure AI and more than half of the Fortune 500 building applications on Azure OpenAI Service, the release of GPT-4o mini and these accompanying enhancements further solidify Microsoft's leadership in AI.
Source: Microsoft