Webinar registration page

tinyML Talks webcast: 1) Towards Software-Defined Imaging: Adaptive Video Subsampling for Energy-Efficient Object Tracking 2) The Akida Neural Processor: Low Power CNN Inference and Learning at the Edge
"Towards Software-Defined Imaging: Adaptive Video Subsampling for Energy-Efficient Object Tracking"
Suren Jayasuriya
Assistant Professor
Arizona State University

CMOS image sensors have become more computational in nature, offering region-of-interest (ROI) readout, high dynamic range (HDR) functionality, and burst photography capabilities. Software-defined imaging is an emerging paradigm, modeled on similar advances in software-defined radio, in which image sensors are increasingly programmable and configurable to meet application-specific needs. In this talk, we present a suite of software-defined imaging algorithms that leverage CMOS sensors' ROI capabilities for energy-efficient object tracking. In particular, we discuss how adaptive video subsampling can learn to jointly track objects and subsample future image frames in an online fashion. We present software results as well as FPGA-accelerated algorithms whose latency achieves video-rate performance.
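To illustrate the idea behind ROI-driven subsampling, here is a minimal sketch (not the speaker's actual algorithm): the tracker's bounding box from the previous frame, expanded by a margin, determines the ROI read out on the next frame, so pixels outside that window are never digitized. The function names and margin parameter are hypothetical.

```python
import numpy as np

def roi_from_track(bbox, frame_shape, margin=16):
    """Expand a tracked bounding box (x, y, w, h) by a margin and clip
    it to the sensor bounds, giving the ROI to read out next frame."""
    x, y, w, h = bbox
    H, W = frame_shape
    x0 = max(0, x - margin)
    y0 = max(0, y - margin)
    x1 = min(W, x + w + margin)
    y1 = min(H, y + h + margin)
    return x0, y0, x1, y1

def subsample(frame, roi):
    """Read out only the ROI window; on an ROI-capable CMOS sensor the
    pixels outside it are skipped, which is where the energy saving lies."""
    x0, y0, x1, y1 = roi
    return frame[y0:y1, x0:x1]

# A 640x480 frame with the tracker reporting a 40x40 box at (300, 200).
frame = np.zeros((480, 640), dtype=np.uint8)
roi = roi_from_track((300, 200, 40, 40), frame.shape, margin=16)
patch = subsample(frame, roi)
print(patch.shape)  # (72, 72)
```

In a real pipeline the tracker would also update the box from the subsampled patch each frame, closing the track-then-subsample loop the abstract describes.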

"The Akida Neural Processor: Low Power CNN Inference and Learning at the Edge"
Kristofor Carlson
Senior Research Scientist, BrainChip Inc.

The Akida event-based neural processor is a high-performance, low-power SoC targeting edge applications. In this session, we discuss the key distinguishing factors of Akida’s computing architecture which include aggressive 1 to 4-bit weight and activation quantization, event-based implementation of machine-learning operations, and the distribution of computation across many small neural processing units (NPUs). We show how these architectural changes result in a 50% reduction of MACs, parameter memory usage, and peak bandwidth requirements when compared with non-event-based 8-bit machine learning accelerators. Finally, we describe how Akida performs on-chip learning with a proprietary bio-inspired learning algorithm. We present state-of-the-art few-shot learning in both visual (MobileNet on mini-imagenet) and auditory (6-layer CNN on Google Speech Commands) domains.
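As a rough intuition for the low-bit quantization mentioned above, the sketch below shows uniform symmetric 4-bit weight quantization in NumPy. This is a generic textbook scheme for illustration only, not BrainChip's quantization method; all names are hypothetical.

```python
import numpy as np

def quantize(w, bits=4):
    """Uniformly quantize float weights to signed `bits`-bit integers.
    Returns the integer codes and the scale needed to dequantize."""
    qmax = 2 ** (bits - 1) - 1            # 7 for 4-bit signed values
    scale = np.abs(w).max() / qmax        # map the largest weight to qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)
q, scale = quantize(w, bits=4)
w_hat = q * scale  # dequantized approximation of the original weights

# 4-bit codes need half the storage of 8-bit ones and 1/8 of float32,
# at the cost of a bounded rounding error of at most scale / 2.
print(int(q.min()), int(q.max()))
```

Storing each weight in 4 bits rather than 8 directly halves parameter memory and the bandwidth needed to stream weights, which is the kind of saving the 50% figure above refers to.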

Sep 1, 2020 08:00 AM in Pacific Time (US and Canada)

The webinar is over and registration is closed. If you have any questions, please contact the webinar host: Olga Goremichina.