Nvidia has entered into a definitive agreement to buy Run:ai, a provider of Kubernetes-based workload management and orchestration software. According to Nvidia, AI deployments are becoming more complex, with workloads distributed across cloud, edge, and on-premises data centre infrastructure — a complexity Run:ai's solutions are designed to address.
Commenting on the acquisition, Run:ai's CEO and co-founder said:
“Run:ai has been a close collaborator with NVIDIA since 2020 and we share a passion for helping our customers make the most of their infrastructure. We’re thrilled to join NVIDIA and look forward to continuing our journey together.”
Explaining the benefits customers can get from Run:ai's platform, Nvidia listed:
- A centralised interface to manage shared compute infrastructure, enabling easier and faster access for complex AI workloads.
- Functionality to add users, curate them under teams, provide access to cluster resources, control quotas, priorities and pools, and monitor and report on resource use.
- The ability to pool GPUs and share computing power — from fractions of GPUs to multiple GPUs or multiple nodes of GPUs running on different clusters — for separate tasks.
- Efficient GPU cluster resource utilisation, enabling customers to gain more from their compute investments.
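To make the fractional-GPU sharing described above concrete, a Kubernetes pod might request part of a GPU through scheduler annotations. The following is a hypothetical sketch only: the annotation key, resource name, and scheduler name are illustrative placeholders, not Run:ai's documented API.

```yaml
# Hypothetical pod spec: request half a GPU from a shared pool.
# Annotation key, resource name, and scheduler name are illustrative,
# not Run:ai's actual API.
apiVersion: v1
kind: Pod
metadata:
  name: training-job
  annotations:
    example.com/gpu-fraction: "0.5"     # ask for 50% of one GPU
spec:
  schedulerName: example-gpu-scheduler  # custom scheduler aware of fractions
  containers:
    - name: trainer
      image: my-training-image:latest
      resources:
        limits:
          example.com/shared-gpu: 1     # placeholder name for a GPU slice
```

In a scheme like this, several pods with fractional requests can be packed onto the same physical GPU, which is how orchestration platforms of this kind raise overall utilisation.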
For customers that use Run:ai's platform, Nvidia said it will continue to offer the same products under the same business model for the immediate future. It will also continue to invest in Run:ai's product roadmap, while integrating these features into Nvidia DGX Cloud.
Through the acquisition, Nvidia expects customers to benefit from better GPU utilisation, improved management of GPU infrastructure, and greater flexibility from the open architecture. Neither Nvidia nor Run:ai disclosed details of the deal, such as the purchase price or the expected closing date. The move should help Nvidia strengthen its position as a leader in AI infrastructure.
Source: Nvidia