In this session, we'll examine the rapidly expanding mental health monitoring tech industry. Amid social distancing and the stresses of the COVID-19 pandemic, many Americans have turned to "Emotion AI" wellness apps that track our movements, facial expressions, speech, and eye blinks to assess our emotional wellbeing. While these apps may help with day-to-day symptom management, they also raise concerns about surveillance and bias.
Joined by privacy and mental health advocates Amelia Vance (Future of Privacy Forum) and Hannah Zeavin (UC Berkeley), we'll discuss the lack of government oversight of these technologies, their connection to other surveillance tech like facial recognition, and the racist, ableist biases often informing digital emotional and behavioral assessments.
Moderated by S.T.O.P.’s Albert Fox Cahn.
Amelia Vance is Vice President of Youth and Education Privacy at the Future of Privacy Forum. She advises policymakers, academics, companies, and schools on child and student privacy laws and best practices; oversees the website Student Privacy Compass; and convenes stakeholders to ensure the responsible use of child and student data. She is a regular speaker at education and privacy conferences in the U.S. and abroad. This year, she coauthored "The Privacy and Equity Implications of Using Self-Harm Monitoring Technologies: Recommendations for Schools."
Hannah Zeavin is a Lecturer in the Departments of English and History at the University of California, Berkeley, a faculty affiliate of the Berkeley Center for Science, Technology, Medicine, and Society, and a member of the Berkeley Center for New Media's Executive Committee. Zeavin's first book, The Distance Cure: A History of Teletherapy, traces teletherapy from its origins to the present, including the surveillance and weakened ethical standards we've seen in remote and digital therapy.