Neural networks can be made faster, cheaper, and smaller, enabling higher-performing, lower-cost operation of complex AI applications at the edge of the network, wherever that edge might be: a factory, vehicle, ship, or other remote location. In this episode, hear Jags Kandasamy explain how Latent AI’s development platform helps customers and suppliers in every industry compress and adapt neural networks to run “at the edge,” and how this speeds up application development and delivery while improving the performance of the AI application itself. We’ll also touch on the relationship between Edge AI and 5G.

Watch on YouTube

Listen on Simplecast


-- TIMING --

00:00 Introduction
00:47 Genesis story
03:30 What is Edge computing? And Edge AI?
07:02 Wearables and smart watches as edge devices
08:04 Video cameras as edge devices
11:38 Edge AI can assist with maintaining privacy
12:43 Deep learning is too compute heavy for the edge
15:07 Automotive production example: predictive maintenance
20:59 LEIP Compress shrinks models to 1/10th their size while reducing predictive accuracy by only a few percent
24:08 LEIP Compile targets various end hardware devices so developers don’t have to keep track of them all
27:40 AI accelerator chips and hardware are exploding
29:49 The telco use case: AI at the edge of the Telco network and Content delivery network
34:04 Where to focus when there are so many opportunities in so many sectors?
38:19 Partners and system integrators are required in order to scale
40:26 What types of customers are a fit for Latent AI?
43:40 Wrap-up!

-- LINKS --

If you found this podcast episode helpful, don’t forget to subscribe at

DISCLOSURE: To support the channel, we use referral links wherever possible, which means if you click one of the links in this video or description and make a purchase, we may receive a small commission or other compensation.