Foxconn is building the world's largest facility to manufacture Nvidia's Blackwell GPUs

Foxconn, the world's largest contract electronics manufacturer, has announced plans to build the world's largest manufacturing facility for Nvidia's GB200 chips. Benjamin Ting, Foxconn's senior vice president, revealed the plan during the company's annual tech day in Taipei, Taiwan, but did not disclose where the facility will be built.

The move comes amid huge demand for Nvidia's upcoming Blackwell platform, which is built for compute-intensive workloads such as training AI models.

Foxconn, best known as Apple's main supplier, is expanding its operations to manufacture other electronics, including GPUs, for which demand has soared as AI startups continue to train new models. Training these models requires significant computational power, and many companies are already expanding their data centers to accommodate it.

Once an AI model is trained and deployed, it still requires substantial computing resources to process the growing amounts of data generated by various applications. To meet this demand, Nvidia announced its Blackwell platform and the GB200, and companies such as Microsoft have already placed advance orders for them.

Nvidia's GB200 is a superchip that combines two Blackwell B200 GPUs with one Grace CPU. Each GPU is equipped with 192GB of HBM3E memory, while the CPU connects to 512GB of LPDDR5 memory, providing a total of 896GB of unified memory. These components are interconnected via NVLink, which allows high-speed communication between the GPUs and the CPU. For even more performance, GB200 superchips can be deployed as a rack-scale system in the GB200 NVL72 platform, which can scale up to 512 GPUs within a single NVLink domain. This setup significantly boosts performance for large-scale AI training and inference tasks.
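As a quick sanity check, the unified memory figure follows directly from the per-component numbers quoted above; the short Python sketch below (an illustration only, not Nvidia code) simply adds them up.

```python
# Rough sanity check of the GB200 superchip memory figures quoted above:
# two B200 GPUs with 192GB of HBM3E each, plus 512GB of LPDDR5 on the Grace CPU.
gpu_count = 2
hbm3e_per_gpu_gb = 192
cpu_lpddr5_gb = 512

total_unified_memory_gb = gpu_count * hbm3e_per_gpu_gb + cpu_lpddr5_gb
print(total_unified_memory_gb)  # 896 GB, matching the total cited above
```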

GPUs like the GB200 are part of high-performance computing (HPC) systems, which are designed to handle complex calculations and large datasets efficiently. These systems rely on many interconnected processors working in parallel, making them ideal for data-intensive tasks.
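To make the idea of parallel processing concrete, here is a deliberately simple Python sketch (a generic illustration, not GB200- or HPC-specific code) that splits one data-intensive computation across several worker processes; real HPC workloads run on GPUs and specialized frameworks, but the principle of dividing work among interconnected processors is the same.

```python
# Toy illustration of parallelism: split a data-intensive task across workers.
from multiprocessing import Pool

def partial_sum(chunk):
    """Compute-heavy work on one slice of the data."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]  # split the dataset four ways
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))  # workers run in parallel
    print(total)
```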

Foxconn's chairman Young Liu stated at the event that the company's supply chain is ready for the AI revolution. He highlighted Foxconn's advanced manufacturing capabilities, including key technologies such as liquid cooling and heat dissipation systems, which are necessary for manufacturing Nvidia's GB200 systems.

Foxconn and Nvidia are also working together to build Taiwan's largest supercomputer, the Hon Hai Kaohsiung Super Computing Center. The supercomputer will be built on the same Blackwell architecture and will feature the GB200 NVL72 platform with 64 racks and 4,608 Tensor Core GPUs. Its total performance is expected to exceed 90 exaflops.
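The GPU count follows from the rack configuration: each GB200 NVL72 rack integrates 72 Blackwell GPUs, so 64 racks yield the 4,608 GPUs mentioned above, as this small Python sketch (an illustration, not vendor tooling) shows.

```python
# Each GB200 NVL72 rack integrates 72 Blackwell GPUs.
racks = 64
gpus_per_nvl72_rack = 72

total_gpus = racks * gpus_per_nvl72_rack
print(total_gpus)  # 4608 Tensor Core GPUs, matching the figure above
```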

Construction of the supercomputer has already begun in Kaohsiung, Taiwan. The first phase is expected to be operational by mid-2025, with full deployment targeted for 2026.

Via Reuters
