ViC was excited to join the first Not-Equal summer school, which took place in Swansea from 27th to 30th August 2019 and explored the intersections between Algorithmic Social Justice and Digital Security through a programme of talks, workshops and panel sessions. ViC hosted a session on ‘Values-First and Responsible Computing’, abstract below:
The software industry, computing research, and wider society have yet to fully grasp the potentially devastating consequences of unleashing, at scale, software that has been built without putting societal concerns and people’s lives first. This is even more critical with the emergence of new trends, especially in AI, where software is already making autonomous decisions, e.g. in the financial sector. Following an increasing number of high-profile software scandals and malpractices (e.g. the VW emissions-cheating software, the Facebook/Cambridge Analytica illicit personal data harvesting, and the Boeing 737 Max anti-stall software disasters), calls have been made for radical changes in business models and tough policies to strongly regulate against pervasive software industry malpractices. Although much needed, laws and regulations can be broken or circumvented. What cannot be so easily broken is a values-informed, diverse, and well-connected community of computing practice; one that understands what human values are, what social responsibility means, and the way that values and responsibility are written into code. This talk will share research approaches, tools, findings, and future directions from ongoing work at Lancaster University.
C121 Collaborative Learning Space, Building C, Monash Caulfield Campus
Thursday 24 January 2019 2-4pm
With the rapid advances of AI, concerns around the undesirable and unpredictable impact that AI may have on society are mounting. In response to such concerns, leading AI thinkers and practitioners have started drafting principles and guidelines to envision an AI that would benefit humanity instead of causing harm. Underpinning these principles is the perceived importance of aligning AI with human values and promoting the ‘common good’. We argue that efforts from leading AI thinkers and practitioners must be supported by constructive critique, dialogue and informed scrutiny from different constituencies, asking questions such as: what values, and whose? What does ‘common good’ mean, and to whom?
The aim of this workshop is to take a deep dive into human values, examine how they work, and what structures they may exhibit. Specifically, our objective is twofold: to capture the diversity of meanings for each value, and to map the interrelationships between values, in the context of AI. We will do so by using some of the tools and techniques developed as part of the Values in Computing (ViC) research.
Workshop structure and high-level outcomes available on request (send MAF an email).
Workshop @ PACTMAN: Trust, Privacy and Consent in Future Pervasive Environments Symposium 10-11 December 2018. Event Page.
“An Alien Intelligence has been discovered. As it happens, it is an Artificial form of Alien Intelligence that emerged as a colony of spawn-like creatures. It is commonly accepted that entities exhibiting Artificial Intelligence (AI) characteristics can be described as intelligent agents, that is, systems that perceive and act in some environment. However, being an Alien form of AI, the exact criterion for intelligence is difficult to establish and the environment of origin unknown. Our goal is to explore and specify this very criterion of intelligence and, from that, explore the potential attitudes and behaviour of Alien AI in a range of environments.”
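The notion of an intelligent agent quoted above — a system that perceives and acts in some environment — can be sketched in a few lines of code. This is a minimal, hypothetical illustration of the textbook perceive–act loop, not part of the ViC workshop materials; the percepts, actions, and thresholds are invented for the example.

```python
# Minimal sketch of the classic "intelligent agent" loop:
# perceive the environment, then choose an action.
# All names and values here are illustrative assumptions.

def reflex_agent(percept: int) -> str:
    """Map a single percept (e.g. a temperature reading)
    to an action: approach warmth, retreat from cold."""
    return "approach" if percept > 20 else "retreat"

def run(percepts: list) -> list:
    """Feed a sequence of percepts to the agent and
    collect the resulting actions."""
    return [reflex_agent(p) for p in percepts]

if __name__ == "__main__":
    # Three percepts in, three actions out.
    print(run([25, 10, 30]))
```

Even this trivially simple agent makes the workshop's point concrete: the "criterion of intelligence" is entirely encoded in the mapping from percepts to actions, and deciding what that mapping *should* be is where values enter.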
This workshop is part of the Values in Computing (ViC) research programme and is facilitated by Marie & Steve from Team ViC. ViC receives support from the EPSRC.
The ViC team has been invited to deliver a workshop on Human Values & AI at CIRN, Prato, Italy, October 2018.
This workshop takes a deep dive into human values, examines how they work, and what structures they may exhibit. It uses some of the new tools developed for the AI & Ethics seminar series just held at Alpbach and covers some of the questions explored in our previous post on AI & Ethics.
AI has seen massive and rapid development in the past twenty years. With such accelerating advances, concerns around the undesirable and unpredictable impact that AI may have on society are mounting. In response to such concerns, leading AI thinkers and practitioners have drafted a set of principles – the Asilomar AI Principles – for Beneficial AI, one that would benefit humanity instead of causing it harm.