Microsoft may have partnered with OpenAI, using its GPT large language models to help create generative AI services like Copilot (formerly known as Bing Chat). However, the company is also working on language models of its own. This week, Microsoft Research announced the release of Orca 2, the second version of its Orca language model.
In a blog post, Microsoft stated that Orca 2 was designed specifically to be a smaller language model, but one that can still answer complex questions much like larger LLMs. Orca 2 comes in two sizes (7 billion and 13 billion parameters), and both were built in part on the Llama 2 LLM, which Microsoft helped Meta launch earlier this year. The company fine-tuned the Llama 2-based model "on tailored, high-quality synthetic data."
Microsoft stated that this allowed the Orca 2 models to match the performance of other language models that are "5-10 times larger." It stated:
Orca 2 is trained with an expanded, highly tailored synthetic dataset. The training data was generated such that it teaches Orca 2 various reasoning techniques, such as step-by-step processing, recall then generate, recall-reason-generate, extract-generate, and direct answer methods, while also teaching it to choose different solution strategies for different tasks.
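As a rough illustration of what "choosing different solution strategies" can look like at inference time, the sketch below formats two single-turn prompts: one that nudges the model to reason step by step and one that asks for a direct answer. The ChatML-style template reflects the format documented on the Orca 2 model card, but the exact system-message wording here is a hypothetical example, not Microsoft's actual training data.

```python
# Illustrative only: two system prompts that steer Orca 2 toward
# different solution strategies. The system-message text is hypothetical.
step_by_step_system = (
    "You are Orca, an AI assistant. Think through the problem step by step, "
    "then give the final answer."
)
direct_answer_system = (
    "You are Orca, an AI assistant. Give only the final answer, with no "
    "intermediate reasoning."
)

def build_prompt(system_message: str, user_message: str) -> str:
    """Format a single-turn prompt in the ChatML style the Orca 2
    model card documents."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(build_prompt(step_by_step_system, "What is 17 * 24?"))
print(build_prompt(direct_answer_system, "What is 17 * 24?"))
```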
The Orca 2 models were tested against a number of larger language models, including Llama 2 and WizardLM, using a series of benchmarks covering topics like "language understanding, common-sense reasoning, multi-step reasoning, math problem solving, reading comprehension" and more. The blog stated:
Our preliminary results indicate that Orca 2’s performance significantly surpasses models of similar size. It also attains performance levels similar or better than those of models at least 10 times larger, showcasing the potential of equipping smaller models with better reasoning capabilities.
While Microsoft admitted that Orca 2 has its limitations, it said the testing so far shows "potential for future advancements." Microsoft is releasing Orca 2 as an open-source project so others can build on it as well.
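For those who want to try the released weights, a minimal sketch of loading and querying the model with the Hugging Face Transformers library is shown below. It assumes the "microsoft/Orca-2-13b" model ID from the public Hugging Face release (the smaller variant is "microsoft/Orca-2-7b") and enough GPU memory to hold the model in half precision; check the model card for the authoritative usage details.

```python
# A hedged sketch of running Orca 2 via Hugging Face Transformers.
# Requires the transformers and accelerate packages plus a suitable GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Orca-2-13b"  # or "microsoft/Orca-2-7b" for the 7B variant

# The model card recommends the slow (SentencePiece) tokenizer.
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Hypothetical system/user messages in the ChatML-style format from the model card.
system = "You are Orca, an AI assistant. Reason step by step before answering."
user = "A train travels 60 miles in 1.5 hours. What is its average speed?"
prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    f"<|im_start|>user\n{user}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(
    output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
))
```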