Tech giants create interconnect that lets processors and accelerators share data


AMD, ARM, Huawei, IBM, Mellanox, Qualcomm and Xilinx have joined forces to develop a new interconnect. The Cache Coherent Interconnect for Accelerators is intended to connect processors and accelerators from different manufacturers.

The Cache Coherent Interconnect for Accelerators, or CCIX, should let hardware accelerators access data “wherever it resides in the system”, according to the companies. The partnership argues that hardware acceleration has become a necessity in data center applications because of its power-consumption and component-size benefits. Accelerators can deliver significant speed-ups in big data analytics, machine learning and in-memory database applications.

For example, Google announced last week that it had developed its own Tensor Processing Units. The problem is that components built on different instruction set architectures cannot coherently access the same pool of memory.

Details about the upcoming interconnect are not yet available, but the companies are pushing for a standard that offers higher bandwidth, lower latency and support for cache coherence. The specification aims to ensure that processors based on different architectures can easily share data with accelerators such as GPUs and FPGAs. Some proprietary memory-sharing standards already exist, such as IBM’s Coherent Accelerator Processor Interface (CAPI) and Nvidia’s NVLink, but there is no open, universal interconnect yet.

Intel is notably absent from the list of manufacturers working on CCIX. The data center chip leader has its own Omni-Path interconnect and last year acquired FPGA manufacturer Altera, which may be why its competitors have teamed up.
