The ViC team has been invited to deliver a workshop on Human Values & AI at CIRN, Prato, Italy, October 2018.
This workshop takes a deep dive into human values, examines how they work, and what structures they may exhibit. It uses some of the new tools developed for the AI & Ethics seminar series just held at Alpbach and covers some of the questions explored in our previous post on AI & Ethics.
AI has seen massive and rapid development over the past twenty years. With such accelerating advances, concerns are mounting about the undesirable and unpredictable impacts AI may have on society. In response, leading AI thinkers and practitioners have drafted a set of principles for Beneficial AI – the Asilomar AI Principles – aimed at ensuring that AI benefits humanity instead of causing it harm.
Underpinning these principles is the perceived importance of aligning AI with human values and promoting the ‘common good’. We argue that efforts from leading AI thinkers must be supported by constructive critique, dialogue and informed scrutiny from different constituencies asking questions such as: which values, and whose? What does ‘common good’ mean, and to whom?
The aim of this workshop is to take that deep dive into human values: to examine how they work and what structures they may exhibit. Specifically, our objective is twofold: to capture the diversity of meanings each value can carry, and to map the interrelationships between values in the context of AI. We will do so both systematically and creatively, using tools and techniques developed as part of the Values in Computing (ViC) research.
In practice, we will engage in a small set of facilitated group activities designed to explore the Asilomar AI Principles within the broader theoretical framework of values briefly outlined in this paper.