"Benchmarking and Improving NN Execution on Digital Signal Processor vs. Custom Accelerator for Hearing Instruments"
Hearing instruments are supported by multi-core processor platforms that include several digital signal processors (DSPs). These resources can be used to implement neural networks (NNs); however, execution time and energy consumption make this prohibitive. In this presentation, we will talk about benchmarking neural network workloads relevant for hearing aids on Demant’s DSP-based platform. We will also introduce a custom NN processing engine (NNE) that was developed to achieve further power optimizations by exploiting several techniques (reduced word length, multiple MAC units operating in parallel, two-step scaling, etc.).
A pre-trained, fully connected feed-forward NN (from "Hello Edge: Keyword Spotting on Microcontrollers") was used as a benchmark model to run a keyword-spotting application on the Google Speech Commands dataset on both the DSP and the NNE. We will compare the performance of the two implementations, in which the NNE significantly outperforms the DSP solution.
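The abstract does not describe the NNE datapath in detail, but the named techniques (reduced word length, wide-accumulator parallel MACs, two-step scaling) follow the usual integer-quantization pattern for a fully connected layer. The sketch below is a generic NumPy illustration of that pattern, not Demant's actual implementation; the word lengths, scales, and layer sizes are illustrative assumptions.

```python
import numpy as np

def quantize(x, scale, bits=8):
    """Map float values to signed integers of the given (reduced) word length."""
    qmax = 2 ** (bits - 1) - 1
    return np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int32)

def dense_int8(x_q, w_q, x_scale, w_scale, out_scale):
    """Fully connected layer on reduced-word-length operands.

    Products are accumulated in wide integers (as parallel MAC units would),
    then rescaled in two steps: first by the combined input*weight scale,
    then requantized to the next layer's scale.
    """
    acc = w_q @ x_q                          # wide integer accumulation
    return quantize(acc * (x_scale * w_scale), out_scale)

# Toy example: one dense layer of a feed-forward KWS-style network.
rng = np.random.default_rng(0)
x = rng.standard_normal(40)                  # e.g. one frame of MFCC features
w = rng.standard_normal((64, 40)) * 0.1      # hypothetical layer weights

x_scale, w_scale, out_scale = 0.05, 0.005, 0.05
x_q = quantize(x, x_scale)
w_q = quantize(w, w_scale)

y_q = dense_int8(x_q, w_q, x_scale, w_scale, out_scale)
y_ref = w @ x                                # float reference for comparison
err = np.max(np.abs(y_q * out_scale - y_ref))
```

The point of the two-step scheme is that the inner loop touches only narrow integers; the single float multiply per output happens once, after accumulation, which is what makes reduced word lengths pay off in energy.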
"How to train and deploy tinyML models for three common sensor types"
TinyML is incredibly exciting, but if you're hoping to train your own model, it can be difficult to know where to start. In this talk, Dan walks through his workflow and best practices for training models for three very different types of data: time series from sensors, audio, and vision. We'll be using Edge Impulse, a free online studio for training embedded machine learning models.