
Embedded Vision: Four Trends to Watch


The Embedded Vision Summit is coming up May 16–19 in Santa Clara, California. It’s a conference uniquely focused on practical computer vision and visual artificial intelligence (AI), aimed squarely at innovators incorporating vision capabilities in products. One of the great things about being part of the Summit team is seeing trends emerge in the embedded vision space, and the editors at EE Times asked me to share some of the things we’re seeing in 2022.

Phil Lapsley (Source: Embedded Vision Summit)

The first trend that jumped out at me is the tremendous increase in performance and efficiency for embedded vision applications. Interestingly, these gains aren’t coming just from processors. Certainly, processors are getting faster, often thanks to a diversity of architectural approaches (a “Cambrian Explosion,” as my colleague Jeff Bier wrote about recently). But algorithms and tools are driving the increase as well. A great example of practical algorithmic innovation is “Faster Objects, More Objects” (FOMO!), which Edge Impulse CTO Jan Jongboom will present. Similarly, Felix Baum, director of product management at Qualcomm, will discuss the company’s latest tools for helping developers get the best possible machine learning performance out of their embedded processors.

(By the way, the great thing about this trend is that these performance gains are multiplicative: when you combine efficiency increases in algorithms, tools, and processors — any one of which might be significant on its own — you quickly realize you’re looking at a fantastic year-over-year improvement.)
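The multiplicative effect is easy to see with some back-of-the-envelope numbers. The speedup figures below are purely illustrative assumptions, not measurements from any of the talks mentioned:

```python
# Hypothetical illustration: modest individual gains compound multiplicatively.
# None of these figures come from the Summit talks; they are made-up examples.
algorithm_speedup = 1.5   # e.g., a more efficient model architecture
tools_speedup     = 1.3   # e.g., better compilation/quantization tooling
processor_speedup = 1.8   # e.g., a faster processor generation

# Because each gain applies on top of the others, the combined
# improvement is the product, not the sum, of the individual gains.
combined = algorithm_speedup * tools_speedup * processor_speedup
print(f"Combined speedup: {combined:.2f}x")  # prints "Combined speedup: 3.51x"
```

Three improvements of 1.3x to 1.8x — each respectable but unremarkable on its own — combine into a roughly 3.5x year-over-year gain.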

A second trend is the democratization of edge AI through simplified development. For edge and vision AI to become mainstream, system developers without deep AI expertise must be able to master the technology. That means more use of off-the-shelf models, like the 270+ models available in the OpenVINO Open Model Zoo, featured in the talk by Intel’s Ansley Dunn and Ryan Loney. And it means raising the level of abstraction for developers with low-code/no-code tools, such as those presented by NVIDIA’s Alvin Clark.

A third trend is deployment at scale. How do you get from proof of concept to deployment at scale? Emerging MLOps techniques and tools mean that product developers are no longer on their own to figure out thorny problems like version control for training data, as we’ll see in Nicolás Eiris’ talk on AI reproducibility and continuous updates and in Rakshit Agrawal’s talk on Kubernetes and containerization for edge vision applications.

The fourth trend concerns the reliability and trustworthiness of AI. As AI-enabled systems are deployed more widely, there are more opportunities for mistakes with serious consequences. Industry veterans will share their perspectives on how to make AI more trustworthy; notable examples here are Krishnaram Kenthapadi’s talk on responsible AI and model ops and Robert Laganiere’s talk on sensor fusion. There are also important questions to consider about privacy, bias, and ethics in AI. Professor Susan Kennedy of Santa Clara University will present “Privacy: A Surmountable Challenge for Computer Vision,” followed by an extended audience Q&A session, “Ask the Ethicist: Your Questions about AI Privacy, Bias, and Ethics Answered.”

This is such an exciting time to be involved in edge AI and vision. What trends will you spot at the Summit?

— Phil Lapsley is a co-founder of consulting firm BDTI and one of the organizers of the Embedded Vision Summit, which will be held in Santa Clara, California, May 16–19.

