Expanding on its mission to further the knowledge and skills of professionals involved in audio development, the Audio Product Education Institute (APEI) presents a new webinar on Automotive Audio. This session will discuss the features that are common to all kinds of processing and those that are unique to audio processing. The most distinctive requirement of automotive audio applications is deterministic processing for predictive control, which becomes harder to guarantee with standard computer hardware techniques, such as caching and virtual memory, that introduce timing variability.
Auto manufacturers are always pushing the limits of existing processing platforms to meet the increasing complexity of automotive systems. While they are building features like fully autonomous driving, they are simultaneously innovating on the in-cabin experience, a huge component of which is audio and voice. Immersive sound and personal audio zones, active and road noise cancellation (ANC/RNC), voice-based user interfaces and in-car communications, engine sound synthesis (ESS), and electric vehicle warning sound systems (EVWSS/AVAS) are just some of the current areas of focus for automotive audio developers.
Digital signal processing (DSP) core technologies need to offer deterministic, very low processing latency with best-in-class MIPS/mW performance. Some processors feature hardware accelerators that offload common signal processing algorithms from the core, making them an ideal choice for real-time audio applications. Integrating complex peripherals such as Ethernet and USB brings yet another level of demands.
This event will be presented by Roger Shively (JJR Acoustics, LLC), APEI’s Automotive Pillar Chair. Following opening remarks, the event will feature two presentations, from Paul Beckmann (CTO, DSP Concepts) and John Redford (DSP Architect, Analog Devices), offering a valuable perspective on these platforms.