
Edge AI processor slashes inference latency

GrAI Matter Labs (GML) has unveiled the GrAI VIP, a sparsity-driven AI SoC optimized for ultra-low-latency, low-power processing at the endpoint. According to the company, the vision inference processor drastically reduces application latency: it can cut end-to-end latencies for deep learning networks such as ResNet-50 to the order of a millisecond.

A near-sensor AI solution, the GrAI VIP offers 16-bit floating-point capability to achieve best-in-class performance within a low-power envelope. The edge AI processor is based on GML's NeuronFlow technology, which combines the dynamic dataflow paradigm with sparse computing to deliver massively parallel in-network processing. Aimed at applications that depend on understanding and transforming signals from a multitude of sensors at the edge, the GrAI VIP targets industrial automation, robotics, AR/VR, smart homes, and automotive infotainment.
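The payoff of sparse, change-driven computing can be illustrated with a minimal sketch. This is not GML's NeuronFlow implementation; it is a generic toy example (the `sparse_update` helper and its threshold are hypothetical) showing why propagating only the inputs that changed between frames, rather than recomputing a full layer, saves most of the arithmetic when the scene is largely static:

```python
import numpy as np

def dense_linear(W, x):
    # Conventional approach: full matrix-vector product every frame,
    # regardless of how little the input actually changed.
    return W @ x

def sparse_update(W, y_prev, x_prev, x_new, threshold=1e-6):
    # Change-driven update: only the columns of W whose inputs moved
    # beyond the threshold are touched. y_new = y_prev + W[:, a] @ dx[a].
    delta = x_new - x_prev
    active = np.nonzero(np.abs(delta) > threshold)[0]
    y_new = y_prev + W[:, active] @ delta[active]
    return y_new, active.size

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 256))
x0 = rng.standard_normal(256)
y0 = dense_linear(W, x0)

# Next "frame": only 8 of 256 inputs change, as in a mostly static scene.
x1 = x0.copy()
x1[:8] += 0.5
y1, n_active = sparse_update(W, y0, x0, x1)

# The incremental result matches a full recompute, at a fraction of the work.
assert np.allclose(y1, dense_linear(W, x1))
print(n_active)  # → 8
```

Here only 8 of 256 input columns are processed, roughly a 32x reduction in multiply-accumulates for that frame; event-driven architectures exploit exactly this temporal redundancy in sensor streams.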

GML demonstrated its Life-Ready AI SoC at this month's Global Industrie exhibition. AI application developers seeking high-fidelity, low-latency responses for their edge algorithms can now gain early access to the full-stack GrAI VIP platform, including hardware and software development kits.

GrAI VIP product page

GrAI Matter Labs

