Nvidia Holds Back on Power-Saving Optical Chip Tech

Nvidia, the leading chipmaker for artificial intelligence (AI), is holding back on using power-saving optical technology in its flagship graphics processing units (GPUs), CEO Jensen Huang said at the company's annual developer conference. While the technology, known as co-packaged optics, offers significant energy efficiency gains, Huang cited reliability concerns as the primary reason for the delay.

"Copper is far better," Huang told journalists after his keynote address, emphasizing the need for a reliable product roadmap for Nvidia's customers, including major AI players like OpenAI and Oracle.

Co-packaged optics uses laser light to transmit data through fiber optic cables, promising faster connections and superior energy efficiency compared to traditional copper cables. Nvidia will, however, adopt the technology in two new networking chips for switches that sit atop its servers, a move Huang described as a "small but significant step." These chips are slated for release later this year and into 2026.

Despite its potential, Huang expressed concerns about the current reliability of co-packaged optics for direct GPU connections, stating that copper cables are "orders of magnitude" more reliable. This reliability gap, he argued, outweighs the potential energy savings.

"That's not worth it," Huang said, referencing the significant capital expenditure involved in building AI infrastructure. "We keep playing with that equation. Copper is far better."

The technology has garnered significant interest from Silicon Valley startups, with companies like Ayar Labs, Lightmatter, and Celestial AI raising hundreds of millions of dollars in venture capital, including some funding from Nvidia itself. These startups are focused on integrating co-packaged optics directly onto AI chips, aiming to overcome the limitations of copper connections, which are relatively cheap and fast but can only carry signals a few meters.

Nvidia's current flagship product, containing 72 chips in a single server, already consumes 120 kilowatts of electricity and generates substantial heat, necessitating a liquid cooling system. The company's upcoming flagship server, scheduled for release in 2027, will pack hundreds of its Vera Rubin Ultra chips into a single rack, consuming 600 kilowatts of power.

This dramatic increase in chip density underscores the challenges of relying solely on copper connections, which have a limited reach. As AI workloads demand ever-increasing amounts of data transfer between chips, the need for more efficient connectivity becomes paramount.

Mark Wade, CEO of Ayar Labs, a company backed by Nvidia, acknowledged the ongoing challenge of manufacturing co-packaged optics at scale with both low cost and high reliability. While he expects the industry's transition to occur sometime after 2028, he argued that optics are ultimately the only viable path for scaling up AI servers.

"Just look at the power consumption going up and up on racks with electrical connections," Wade told Reuters. "Optics is the only technology that gets you off of that train."