GPT-4o is the most advanced multimodal model from OpenAI. It is faster and cheaper than GPT-4 Turbo, with stronger vision capabilities. OpenAI has announced that the new gpt-4o-2024-08-06 model is now available for $2.50 per 1 million input tokens and $10.00 per 1 million output tokens. This new price is 50% cheaper for input tokens and 33% cheaper for output tokens compared to gpt-4o-2024-05-13. For comparison, the Gemini 1.5 Pro model costs $3.50 per 1 million input tokens and $10.50 per 1 million output tokens.
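To get a rough sense of what the cut means in practice, here is a quick back-of-the-envelope calculation. Only the per-million-token prices come from the announcement; the request size is invented for illustration:

```python
# Illustrative cost comparison: gpt-4o-2024-08-06 vs gpt-4o-2024-05-13.
# Prices are USD per 1 million tokens; the workload below is hypothetical.

OLD_INPUT, OLD_OUTPUT = 5.00, 15.00   # gpt-4o-2024-05-13
NEW_INPUT, NEW_OUTPUT = 2.50, 10.00   # gpt-4o-2024-08-06

def cost(input_tokens: int, output_tokens: int, in_price: float, out_price: float) -> float:
    """Cost in USD for a single request at the given per-million-token prices."""
    return input_tokens / 1_000_000 * in_price + output_tokens / 1_000_000 * out_price

# Hypothetical request: 10,000 input tokens, 2,000 output tokens.
old = cost(10_000, 2_000, OLD_INPUT, OLD_OUTPUT)  # $0.080
new = cost(10_000, 2_000, NEW_INPUT, NEW_OUTPUT)  # $0.045
print(f"old: ${old:.3f}, new: ${new:.3f}, saving: {1 - new / old:.0%}")  # ~44% cheaper
```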
At DevDay 2023, OpenAI first announced JSON mode, which ensures the model generates valid JSON but does not guarantee that the response conforms to a particular schema. Yesterday, OpenAI announced Structured Outputs in the API, which ensures that model-generated outputs exactly match JSON Schemas provided by developers. This Structured Outputs capability is now generally available in the API.
OpenAI has updated its Python and Node SDKs with native support for Structured Outputs. Structured Outputs with function calling is now available on all OpenAI models that support function calling in the API. This includes gpt-4o, gpt-4o-mini, gpt-4-0613, gpt-3.5-turbo-0613, and any fine-tuned models that support function calling. This capability can be used on the Chat Completions API, Assistants API, and Batch API, including with vision inputs, as sketched below.
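As a rough sketch of what this looks like with the Python SDK, a tool definition opts into Structured Outputs by setting "strict": true on the function; the weather-lookup tool and its schema here are invented for illustration:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical weather-lookup tool. "strict": True asks the API to guarantee
# that the model's arguments conform exactly to the JSON Schema below.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "strict": True,
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city", "unit"],
            "additionalProperties": False,
        },
    },
}]

completion = client.chat.completions.create(
    model="gpt-4o-2024-08-06",
    messages=[{"role": "user", "content": "What's the weather in Paris in celsius?"}],
    tools=tools,
)

# The arguments string is guaranteed to parse as JSON matching the schema.
print(completion.choices[0].message.tool_calls[0].function.arguments)
```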
Structured Outputs with response formats is available on gpt-4o-mini and gpt-4o-2024-08-06, as well as any fine-tuned models based on these models, and can likewise be used on the Chat Completions API, Assistants API, and Batch API, including with vision inputs.
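With response formats, the schema is passed in the response_format parameter instead of a tool definition. A minimal sketch, where the calendar-event schema is made up for illustration:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical event-extraction schema. With "strict": True, the response
# content is guaranteed to be valid JSON matching this schema exactly.
completion = client.chat.completions.create(
    model="gpt-4o-2024-08-06",
    messages=[
        {"role": "system", "content": "Extract the event from the user's message."},
        {"role": "user", "content": "Team standup on Friday at 9am with Alice and Bob."},
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "calendar_event",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "date": {"type": "string"},
                    "participants": {"type": "array", "items": {"type": "string"}},
                },
                "required": ["name", "date", "participants"],
                "additionalProperties": False,
            },
        },
    },
)

print(completion.choices[0].message.content)  # JSON conforming to the schema
```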
It is important to note that JSON Schemas supplied with Structured Outputs are not eligible for Zero Data Retention. There are also a few limitations when using Structured Outputs; you can read more about them via the source link below.
The ongoing price war between OpenAI and Google, marked by recent significant price reductions from both companies, is a promising development for developers. This increased competition is expected to drive innovation, leading to even more powerful and accessible large language models in the future.
Source: OpenAI