"AI/ML SoC for Ultra-Low-Power Mobile and IoT Devices"
This talk will present how AI models, run on a best-fit hardware and software (HW/SW) architecture and combined with compression and optimization methods, can relieve many AI core bottlenecks and expand a device's capabilities. The AI compression approach scales megabyte (MB) models down to kilobyte (KB) models, increases memory utilization efficiency for reduced power consumption (and potentially a smaller memory), and enables very complex models to run on a very small AI system-on-chip (SoC), resulting in an ultra-low-power AI device that consumes only microwatts (µW) of power.
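The abstract does not name the specific compression methods; as one illustration of how an MB-scale model moves toward KB scale, a minimal sketch of symmetric int8 post-training quantization (a common technique, assumed here, not taken from the talk):

```python
# Hypothetical sketch: quantizing float32 weights (4 bytes each) to int8
# (1 byte each) gives a ~4x size reduction; real pipelines combine this
# with pruning, weight sharing, and architecture search for larger gains.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 plus a per-tensor scale factor."""
    scale = float(np.abs(weights).max()) / 127.0  # largest magnitude -> +/-127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(250_000).astype(np.float32)  # ~1 MB of float32 weights
q, scale = quantize_int8(w)

print(f"float32 size: {w.nbytes / 1024:.0f} KB")  # 976 KB
print(f"int8 size:    {q.nbytes / 1024:.0f} KB")  # 244 KB
```

Per-channel scales and quantization-aware training typically recover most of the accuracy lost by this simple per-tensor scheme.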
"Pushing the Limits of Ultra-low Power Computer Vision for tinyML Applications"
Staff Engineer, Qualcomm AI Research
Qualcomm Technologies, Inc.
Achieving always-on computer vision in a battery-constrained device for tinyML applications is a challenging feat. To meet the requirements of computer vision at <1 mW, innovation and end-to-end optimization are necessary across the sensor, custom ASIC components, architecture, algorithms, software, and custom trainable models. Qualcomm Technologies developed an always-on computer vision module that comprises a low-power monochrome QVGA CMOS image sensor and an ultra-low-power custom SoC with dedicated hardware for computer vision algorithms. By challenging long-held assumptions in traditional computer vision, we are enabling new applications in mobile phones, wearables, and IoT. We also introduce always-on computer vision system training tools, which facilitate rapid training, tuning, and deployment of custom object detection models. This talk presents the Qualcomm QCC112 chip, use cases enabled by this device, and an overview of the training tools.