"tinyML as-a-Service - Bringing ML inference to the deepest IoT Edge"
Senior researcher, Ericsson
TinyML refers to running ML inference on the ultra-low-power microcontrollers found in IoT devices. Yet today, various challenges still limit the effective execution of TinyML in the embedded IoT world, and TinyML, as both a concept and a community, is still under development. Here at Ericsson, our TinyML as-a-Service activity focuses on democratizing TinyML, enabling manufacturers to more easily build AI businesses on top of it. Our goal is to make the execution of ML tasks possible and easy on a specific class of devices: sensor and actuator nodes based on these microcontrollers, characterized by severely constrained hardware and software resources. We will present how the “as-a-service” model can be bound to TinyML, give a high-level technical overview of our concept, and introduce the design requirements and building blocks that characterize this emerging paradigm.
"Speech Recognition on low power devices"
Founder & CTO, Fluent.ai Inc.
Lead ML Developer, Fluent.ai Inc.
In this talk, we will cover how we at Fluent.ai go from training models in high-level libraries such as PyTorch to running those models on low-power MCUs, such as the Arm Cortex-M series of microcontrollers or DSPG digital signal processors. We will discuss the gains achieved through low-level programming optimizations as well as neural network optimizations such as 8-bit quantization, unique model architectures, network compression, and layer selection.
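To give a flavor of one of the optimizations mentioned above, the sketch below shows generic affine (asymmetric) 8-bit quantization: mapping floating-point weights or activations to 8-bit integer codes plus a scale and zero point, so that inference can run in integer arithmetic on an MCU. This is an illustrative example of the standard technique, not Fluent.ai's implementation; the function names and the uint8 range are choices made here for clarity.

```python
def quantize_int8(values):
    """Affine 8-bit quantization: map floats to uint8 codes in [0, 255].

    Returns the integer codes together with the (scale, zero_point)
    pair needed to map them back to real values. The representable
    range is widened to include 0.0 so that exact zeros stay exact,
    a common convention in integer inference kernels.
    """
    lo = min(min(values), 0.0)
    hi = max(max(values), 0.0)
    scale = (hi - lo) / 255 or 1.0          # avoid div-by-zero for constants
    zero_point = round(-lo / scale)          # the code that represents 0.0
    codes = [max(0, min(255, round(v / scale) + zero_point)) for v in values]
    return codes, scale, zero_point

def dequantize(codes, scale, zero_point):
    """Map uint8 codes back to approximate float values."""
    return [(c - zero_point) * scale for c in codes]
```

With 8 bits spread over the value range, the round-trip error per element is bounded by the quantization step `scale`, which is why 8-bit weights typically cost little accuracy while cutting model size by about 4x versus float32.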