Nvidia's top GPU has a formidable new AI rival and it's not who you think it is

Tachyum, the company that plans to kill the CPU and GPU with its universal Prodigy processor, this week shared additional details regarding its upcoming chip.  

As it turns out, the new processor will support industry-standard open-source development frameworks for AI applications, as well as a rather whopping amount of RAM. 

The Tachyum Prodigy, a universal homogeneous processor with up to 128 cores, can run x86, Arm, and RISC-V binaries using software emulation without performance degradation, according to the company.

Meanwhile, when running native code, Prodigy can outperform Intel’s Xeon processors by up to 10 times at lower power, the company claims, while also beating Nvidia’s A100 GPUs in HPC, AI training, and inference workloads.

8TB of RAM

According to Tachyum, its proprietary compiler will be required to get the most out of the chip, but software makers need not worry: they will be able to use the open-source TensorFlow and PyTorch frameworks to develop artificial intelligence and machine learning applications.
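In practice, that claim means ordinary framework code should run unchanged. As a minimal sketch (this is generic PyTorch, nothing Prodigy-specific; the model and shapes here are illustrative assumptions), the kind of code developers would carry over looks like this:

```python
# A tiny, self-contained PyTorch inference example. Per Tachyum's claim,
# standard framework code like this would not need rewriting for Prodigy.
import torch
import torch.nn as nn

# Illustrative two-layer model; sizes chosen arbitrarily for the example.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

x = torch.randn(1, 4)          # one sample with 4 input features
with torch.no_grad():          # inference only, no gradient tracking
    y = model(x)

print(y.shape)                 # torch.Size([1, 2])
```

The point is portability: the hardware-specific work happens below the framework, in the compiler and runtime, not in application code.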

Each Tachyum Prodigy will support up to 8TB of memory per chip, which is in line with what is expected from upcoming CPUs from AMD and Intel (modern EPYCs support up to 4TB), but considerably more than is supported by modern GPUs, such as Nvidia’s A100.

Tachyum has not disclosed which type of memory Prodigy supports, though it is probably a regular RAM type, such as DDR4 or DDR5, rather than high-end HBM-class DRAM. 

Tachyum claims that different versions of Prodigy will be able to serve edge and data center applications, which indicates that the chip will be rather scalable in terms of performance and power consumption. 

Prodigy is slated to enter volume production sometime in 2021, so expect it to hit the market in the next year or two.

Source: Tachyum
