Neurxcore, the French AI processor startup, has launched a Neural Processor Unit (NPU) line for AI inference applications.
It is built on an enhanced and extended version of NVIDIA’s open-source Deep Learning Accelerator (Open NVDLA) technology, combined with patented in-house architectures.
The primary focus of the SNVDLA IP from Neurxcore is on image processing, including classification and object detection.
SNVDLA also offers versatility for generative AI applications. It has already been silicon-proven on TSMC's 22nm process and showcased on a demonstration board running a variety of applications.
The IP package includes the Heracium SDK, built by Neurxcore on the open-source Apache TVM (Tensor Virtual Machine) framework, to configure, optimise and compile neural network applications for SNVDLA products.
Neurxcore’s product line targets sensors and IoT, wearables, smartphones, smart homes, surveillance, Set-Top Box and Digital TV (STB/DTV), smart TV, robotics, edge computing, AR/VR, ADAS, servers and more.
In addition, Neurxcore offers a complete package for developing customised NPU offerings, including new operators, AI-enabled optimised subsystem design, and optimised model development covering training and quantisation.
“80% of AI computational tasks involve inference. Achieving energy and cost reduction while maintaining performance is crucial,” says CEO Virgile Javerliac.
The product line’s fine-grained tunable capabilities, such as the number of MAC operations per core, are claimed to enable versatile applications.