Artificial intelligence is not neutral. It is created by humans, and how it is built, trained, and applied greatly influences its outcomes. How an algorithm interacts with people of different cultures, genders, sexualities, and races depends on the team of AI experts who built the system and on the training data they used as inputs. AI systems do not learn bad habits on their own; humans build those bad habits into them.

At a time when many companies and governments are looking to deploy AI systems across their operations, being acutely aware of these risks and working to reduce them is an urgent priority. In this lesson we will look at discrimination already being observed in AI-managed systems and discuss some suggested tactics to combat it in future developments. What does it mean to carefully consider every angle of making, iterating on, and designing AI? Every step of this process needs to be thoroughly re-examined through different lenses.
What you will learn:
* What is bias in technology?
* How can AIs be developed to serve a diverse set of users?
* What does it mean to design a data set as a form of protest?
Caroline Sinders, machine-learning-design researcher and artist / US
Lorena Jaume-Palasí, Executive Director, The Ethical Tech Society / Germany
Gunay Kazimzade, Weizenbaum Institute for the Networked Society / Germany