To meet the bandwidth demands of rapidly advancing automotive, artificial intelligence (AI), and machine learning (ML) workloads, companies such as Rambus are rethinking where Peripheral Component Interconnect Express (PCIe) interconnects are used, effectively joining the PCIe and CXL data planes to optimize interconnect performance.
There are probably few surprises to be found in the latest iteration of PCIe as many industry players contributed to its development — the PCIe special interest group (SIG) now boasts 900 members. PCIe has become somewhat ubiquitous in computing over the past two decades, enabling other mature and emerging standards such as Non-Volatile Memory Express (NVMe) and Compute Express Link (CXL).
Similar to its predecessors, PCIe 6.0 is aimed at data-intensive environments such as data centers, high-performance computing (HPC), AI and ML. But as the modern vehicle continues its evolution into a server on wheels (nay, a data center on wheels), many storage technologies are making the trip to automotive applications, including solid-state drives (SSDs) that use both NVMe and PCIe.
PCIe SIG president and board chair Al Yanes said automotive is a strategic focal point, given that PCIe is synonymous with storage and NVMe. Automotive is also trending toward higher bandwidth needs, much as smartphones have since their introduction.
It’s the accelerators and AI and ML applications, however, that will need the bandwidth provided by PCIe 6.0 the most, which Yanes described as a “revolutionary jump” in an interview with EE Times. PCIe 6.0 doubles the bandwidth of its predecessor to deliver a raw data rate of 64 GT/s and up to 256 GB/s in an x16 configuration.
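The figures above can be sanity-checked with a little arithmetic. The sketch below is illustrative only (function names and the unidirectional/bidirectional accounting are my assumptions, not from the spec): 64 GT/s per lane across 16 lanes yields 128 GB/s in each direction, or 256 GB/s counting both directions, double the PCIe 5.0 numbers.

```python
# Back-of-the-envelope PCIe raw-bandwidth math (illustrative sketch).
# Figures from the article: 64 GT/s, x16 link, ~256 GB/s total.

def pcie_raw_bandwidth_gbs(rate_gt_per_s: float, lanes: int) -> float:
    """Raw unidirectional bandwidth in GB/s for a PCIe link.

    The GT/s figure already reflects the per-lane bit rate, so a lane
    moves rate_gt_per_s gigabits per second in each direction.
    """
    gigabits_per_second = rate_gt_per_s * lanes  # Gb/s across the link
    return gigabits_per_second / 8               # convert to GB/s

gen6_x16 = pcie_raw_bandwidth_gbs(64, 16)  # PCIe 6.0: 128 GB/s each way
gen5_x16 = pcie_raw_bandwidth_gbs(32, 16)  # PCIe 5.0: 64 GB/s each way

assert gen6_x16 == 2 * gen5_x16            # PCIe 6.0 doubles PCIe 5.0
print(gen6_x16 * 2)                        # 256.0 GB/s, both directions
```

Note that these are raw link-rate numbers; effective throughput is slightly lower once Flit framing and error-correction overhead are accounted for.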
PCIe 6.0 also implements Pulse Amplitude Modulation 4-level (PAM4) signaling and flow control unit (Flit)-based encoding, which supports PAM4 modulation and works in conjunction with newly added forward error correction (FEC) and cyclic redundancy check (CRC) mechanisms to enable the bandwidth doubling. Yanes explained that all this has been accomplished without sacrificing latency, while remaining backward compatible with PCIe 5.0.
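The key idea behind PAM4 is packing two bits into each transmitted symbol by using four voltage levels instead of two, doubling bit rate at the same symbol rate. The toy encoder below sketches that mapping (the specific level values and helper names are mine for illustration; PCIe 6.0 Gray-codes the levels so a one-level slicer error corrupts only a single bit, which is part of why FEC is needed):

```python
# Illustrative PAM4 mapping: 2 bits per symbol, 4 levels, Gray-coded so
# adjacent levels differ in exactly one bit. Level values are arbitrary.

GRAY_PAM4 = {0b00: -3, 0b01: -1, 0b11: +1, 0b10: +3}
LEVEL_TO_BITS = {lvl: bits for bits, lvl in GRAY_PAM4.items()}

def modulate(bits):
    """Pack a bit list (even length, MSB first per pair) into PAM4 levels."""
    assert len(bits) % 2 == 0
    return [GRAY_PAM4[(bits[i] << 1) | bits[i + 1]]
            for i in range(0, len(bits), 2)]

def demodulate(levels):
    """Recover the original bit stream from PAM4 levels."""
    out = []
    for lvl in levels:
        pair = LEVEL_TO_BITS[lvl]
        out += [(pair >> 1) & 1, pair & 1]
    return out

data = [1, 0, 1, 1, 0, 0, 1, 0]
levels = modulate(data)            # half as many symbols as bits
assert len(levels) == len(data) // 2
assert demodulate(levels) == data  # lossless round trip
```

The trade-off is that four levels squeezed into the same voltage swing leave less margin between them, raising the raw error rate, which is exactly what the new FEC and CRC layers are there to correct.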
Not everyone is going to require PCIe 6.0, but it does offer the option of reducing pin count while maintaining bandwidth, Yanes said. It’s also helpful for PCIe SIG members to know the roadmap is there so they can make informed decisions while planning their products.
Helping customers incorporate the latest iteration of PCIe when the market is ready is the impetus behind Rambus announcing the availability of its PCIe 6.0 controller, which is aimed at meeting the expected demands of rapidly advancing AI/ML and data-intensive workloads. Its PCIe 6.0 controller also offers security features, including an Integrity and Data Encryption (IDE) engine that monitors and protects PCIe links against physical attacks.
One of the key considerations in the development of the controller was that PCIe 6.0 and CXL 3.0 will be sharing the same electrical interface going forward, said Matt Jones, general manager of IP cores at Rambus. Latency becomes especially critical with the joining of the PCIe and CXL data planes.
“We’ve done some extremely clever things to basically make this a zero-latency hit to add our IDE,” Jones said. Rambus took advantage of the Flit capabilities, for example, to allow the logic in the controller to be much more optimized.
Getting ready to support technologies that won’t be adopted for months or even years isn’t uncommon for Rambus. The High Bandwidth Memory 3 (HBM3) specification has only just been formally introduced, but the company is already helping its customers ready designs that might not be in products for another 18 months with its HBM3-ready memory interface, consisting of a fully integrated physical layer and digital memory controller.
Other companies already announcing PCIe 6.0 solutions include Tektronix, with what it claims is the industry’s first PCI Express 6.0 base transmitter test solution. In the meantime, PCIe 6.0 adoption remains a little way off, as PCIe 4.0 is only now gaining widespread traction. Micron Technology’s latest SSD, for example, pairs PCIe 4.0 with its 176-layer 3D NAND.
Gary Hilson is a general contributing editor with a focus on memory and flash technologies for EE Times.