OpenAI has shelved plans to build its own network of chip foundries and instead plans to partner with semiconductor giant Broadcom to develop its first custom AI chips, according to new reports. The company has reportedly assembled a team of about 20 engineers, led by former Google TPU engineers, to work with Broadcom and TSMC on an AI inference chip.
OpenAI"s chip is focusing first on inference, running already-trained machine learning models on new data. Training AI requires enormous computing resources, but analysts think the market for inference chips might eclipse training within a few years as more AI applications hit the market. Custom silicon will not be produced before 2026, and so, in the meantime, OpenAI will depend on AMD"s MI300X chips in the systems that are sitting on Microsoft Azure.
AMD is looking to take market share from the dominant player Nvidia, whose GPUs currently hold over 80% of the AI chip market. Shortages and rising costs, however, have pushed large customers such as Microsoft, Meta, and now OpenAI to develop in-house or custom silicon.
Demand for AI capabilities keeps rising across industries, but building sufficient compute capacity in-house has proved nearly impossible even for the largest tech firms. By partnering with Broadcom, OpenAI expects to gain access to chip design and manufacturing expertise while reducing costs compared with developing everything on its own.
Other leading AI players such as Google, Microsoft, and Amazon have already designed several generations of custom chips thanks to their earlier moves into chip design. OpenAI believes that partnering with established players such as Broadcom will help it scale hardware tailored to its workloads more quickly. The company is likely to add more partners and is still deciding whether to design or outsource other chip components.
Source: Reuters