Red Hat has announced the general availability of its Red Hat Enterprise Linux (RHEL) AI platform for server deployments across the hybrid cloud. RHEL AI is the company's foundation model platform that lets organizations develop, test, and run generative AI (gen AI) models that power enterprise applications.
RHEL AI comes with the Granite large language model (LLM) family and the InstructLab model alignment tools, and it is available as a bootable image for servers. Benchmarks published by IBM Research show the Granite models to be competitive with, or better than, rival models of a similar size.
Red Hat said that it developed RHEL AI to make generative AI more accessible, efficient, and flexible for enterprise IT organizations and Chief Information Officers (CIOs). According to the company, the solution helps in the following ways:
- Empower gen AI innovation with enterprise-grade, open source-licensed Granite models aligned with a wide variety of gen AI use cases.
- Streamline the alignment of gen AI models to business requirements with InstructLab tooling, making it possible for domain experts and developers within an organization to contribute unique skills and knowledge to their models, even without extensive data science expertise.
- Train and deploy gen AI anywhere across the hybrid cloud by providing all of the tools needed to tune and deploy models on production servers wherever the associated data lives. RHEL AI also provides a ready on-ramp to Red Hat OpenShift AI for training, tuning, and serving these models at scale while using the same tooling and concepts. (A brief sketch of querying a served model follows this list.)
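RHEL AI's model serving is built on vLLM, which exposes an OpenAI-compatible inference API. Assuming such an endpoint is running on a RHEL AI host, a client application could query a deployed Granite model roughly as sketched below; the hostname, port, and model identifier are hypothetical placeholders, not values from Red Hat's announcement.

```python
import requests

# Hypothetical RHEL AI host serving a Granite model via an OpenAI-compatible
# endpoint; replace the host, port, and model name with your deployment's values.
ENDPOINT = "http://rhel-ai.example.com:8000/v1/chat/completions"
MODEL = "granite-7b-instruct"  # assumed model identifier

payload = {
    "model": MODEL,
    "messages": [
        {"role": "user", "content": "Summarize our refund policy in two sentences."}
    ],
    "temperature": 0.2,
}

# Send a standard chat-completions request and print the model's reply.
response = requests.post(ENDPOINT, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Because the API follows the OpenAI chat-completions convention, existing client libraries and applications written against that interface can generally point at the RHEL AI endpoint without code changes.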
RHEL AI is backed by Red Hat subscription benefits such as 24/7 production support, extended model lifecycle support, and Open Source Assurance legal protections. It is available now from the Red Hat Customer Portal to run on-premises, or on AWS and IBM Cloud as a "bring your own subscription" (BYOS) offering. It is also expected to become available on Azure, Google Cloud, and IBM Cloud later this year.
Source: Red Hat