ViC at Dagstuhl

ViC was at Dagstuhl!

Seminar number: 19291

Values in Computing


Christoph Becker (University of Toronto, CA)
Gregor Engels (Universität Paderborn, DE)
Andrew Feenberg (Simon Fraser University – Burnaby, CA)
Maria Angela Ferrario (Lancaster University, UK)
Geraldine Fitzpatrick (TU Wien, AT)

Aim: to examine the relations between human values, computing technologies, and society. It does so by bringing together practitioners and researchers from several areas within and beyond computer science, including human-computer interaction, software engineering, computer ethics, moral philosophy, philosophy of technology, investigative data science, and critical data studies.

Outcomes: a research agenda to be included in the Dagstuhl Report; and a jointly designed ‘Values in Computing’ teaching module, to be piloted across a selection of participating universities.

Values in Computing Dagstuhl micro site

Dagstuhl seminars are a fantastic opportunity for academics and practitioners to come together, exchange experiences, explore ideas, and put research to work. The seminar took place on 15-19 July 2019.


AI & Human Values – Workshop @CIRN, Italy

AI hatchlings in 3DP production. Photo by sf@vic

The ViC team has been invited to deliver a workshop on Human Values & AI at CIRN, Prato, Italy, in October 2018.

This workshop takes a deep dive into human values, examining how they work and what structures they may exhibit. It uses some of the new tools developed for the AI & Ethics seminar series just held at Alpbach and covers some of the questions explored in our previous post on AI & Ethics.

Full Workshop Proposal


AI has seen a massive and rapid development in the past twenty years. With such accelerating advances, concerns around the undesirable and unpredictable impact that AI may have on society are mounting. In response to such concerns, leading AI thinkers and practitioners have drafted a set of principles – the Asilomar AI Principles – for Beneficial AI, one that would benefit humanity instead of causing it harm.

Continue reading “AI & Human Values – Workshop @CIRN, Italy”

AI & the Media

This post was drafted in response to journalists’ questions (e.g. from Wiener Zeitung and Forbes) on the occasion of a week-long AI & Ethics seminar and of the ‘State of AI’ panel discussion I was invited to. Shame they asked me before James Mickens gave his phenomenal USENIX 2018 keynote on ML, IoT, et al. I would have simply directed them to it.

  • Q1: In addition to a Universal Declaration of Human Rights, do you need a Universal Declaration of cyborg / robot rights?
  • Q2: Who has to be protected from whom?
  • Q3: What importance do programmers have in the future?
  • Q4: How can one ensure that there is a universal catalog of ethical behavior in artificial intelligence?
  • Q5: Do man and machine merge?
  • Q6. What are the future challenges in coping with artificial intelligence and transhumanism?
  • Q7: Have we arrived in the first phase of transhumanism?
  • Q8: Is it time that, alongside the IT industry, more and more humanities scientists are involved in the questions of the technological future?

Original list of questions


First, let’s clarify what we mean by Artificial Intelligence. AI is an umbrella term for a number of related technologies and fields of study. Suchman‘s description of AI is a good starting point: “AI is the field of study devoted to developing computational technologies that automate aspects of human activity conventionally understood to require intelligence”. The words ‘conventionally understood’ are key.

In the last 20 years, this field of study has been focused on the “construction of intelligent agents — systems that perceive and act in some environment. In this context, the criterion for intelligence is related to statistical and economic notions of rationality —  the ability to make good decisions, plans, or inferences” (AI open letter  2015).

The combination of access to big data, increasing processing power, and market promises has vastly accelerated AI development in the last two decades. The ‘criterion for intelligence’ is here linked to rational thinking as defined by statistics and economics, e.g. in terms of utility and performance.

This means that, once humans have designed the problem space in which machine intelligence operates, it will act rationally. Rationally does not necessarily mean fairly and considerately.


To better answer the questions, I have drawn a distinction between ‘intelligent agents type A’ and ‘intelligent agents type B’. This distinction is fictional and broadly overlaps with the distinction between ‘narrow’ and ‘general’ AI. Put simply, it focuses on what is available now and what we do not have yet, but may have in the future.

  1. Intelligent agents A – this is what we have now: systems that use AI computational techniques (e.g. machine learning) for a variety of tasks with some specific goal (e.g. speech recognition, image classification, machine translation). These component tasks can be used in isolation or combined. Examples include image recognition for cancer detection, and data models for diabetes prediction.
  2. Intelligent agents B – this is what we do not have yet, but it is at the centre of much media attention: an engineered non-biological sentient entity (i.e. synthetic, hybrid) equipped with the unbounded capabilities of general AI. In other words, an entity engineered to successfully perform – and surpass – any human intellectual task.

I would not say that it is impossible to build a ‘type B’ intelligence, but I’d question the values underpinning such a desire.


The journalists’ questions were many and complex. I thought it best to reason about them with ‘team ViC’. Below are my personal reflections, informed by team discussions. As an experiment, each of us also gave a “one-word” answer to each question; I have summarized these in italics at the start of each answer.

Q1: In addition to a Universal Declaration of Human Rights, do you need a Universal Declaration of cyborg / robot rights?

No. (not until)

Given our human rights track record, we should not build ‘type B’ machines that need to be granted rights until the rights of every single human being are respected. Also, granting an ‘electronic person’ status to a machine may be just another way of relieving the tech industry of its responsibilities.

The Universal Declaration of Human Rights (UDHR) is a testament both to human ‘kindness’ and to our struggle to honor our very own rights. We, as a human species, are currently in breach of every single UDHR article. Furthermore, old problems not only seem to stay, they seem to morph and grow in scale, for example:

  • Slavery is rife and on the rise. There are now an estimated 40 million people in slavery (vs 30 million in 2013). Slavery has morphed into more subtle, hidden, and deeply pervasive new forms.
  • Wars still ravage many of our nations, and the trend is upwards. AI development and the military are historically intertwined, and problematically so (e.g. the Project Maven walkout by 4,000 Google employees).
  • Inequalities, both social and economic, are widening, and it is happening at our doorstep. Research has found that 1 in 3 children in Britain lives in poverty and that this is a rising trend.

How can we be capable of respecting entities that are not ‘us’ when we are not good at respecting our own rights? Or, in reverse, are we sure that we can build machines that will respect humans? Some may dream of Sophia, but it is Mary Shelley who seems to be currently stealing the show.

Many top AI researchers argue that we should not grant machines the status of an electronic person – to make machines responsible for their actions (good or bad) would mean that their designers and manufacturers could be relieved of their responsibilities. What would you do if a machine did something wrong? Fine it? Put it in jail?

Q2: Who has to be protected from whom?

Humans from Humans

(See Q1 above)

Q3: What importance do programmers have in the future?

Much and Hardly Any

Perhaps too much focus is placed on individual developers’ responsibilities, whereas these responsibilities are often distributed. More emphasis should be placed, firstly, on the importance of investigating the changes that computationally intensive technologies are bringing to society, starting by reflecting on our own individual lives; secondly, on questioning whether these changes are desirable or not and, most importantly, desirable to whom. Desirability is a difficult nut to crack; Russell et al. try to explain why.

Q4: How can one ensure that there is a universal catalog of ethical behavior in artificial intelligence?

One Can Not

Philosophically, this assumes the possibility (and desirability?) of a Universal Ethics to be used by machines. Ethics are codified principles of what a society, organisation, or any other human constituency considers right or wrong; behaviour is situated, contextual, and salient. A formalised Universal Ethics does not guarantee ethical behaviour, because behaviour is context-dependent, particular, and volatile.

Technically, we can improve the transparency and externalization of machine reasoning. For example, research in autonomous systems is looking into externalizing the reasoning underpinning their behavior; this paper describes the technical challenges involved.

Q5: Do man and machine merge?


Man and machine have long merged; the biggest merge is with the Internet, to which we are constantly and collectively connected. Individually, we also have pacemakers, defibrillators, and insulin pumps – complexity and scale will increase, but fundamentally the merge has already happened. Cybernetics is about technology-driven systems control. I have seen ‘dying’ ravaged by a simple pacemaker; what could the implications be – for both the living and the dying – of intelligence augmentation?

Q6. What are the future challenges in coping with artificial intelligence and transhumanism?

Batteries, jobs, children.

Firstly, the intelligent machines of which we speak need power, energy, ‘food’. How will they be fed? Who will feed them? Bio-fuel crops already compete with human foodstuff production, and predictions indicate that the communication industry will consume more than 20% of the world’s electricity by 2020.

Secondly, much focus is placed on human/machine competitiveness in the job market. I’d also keep a close eye on the roles that humans have and should have in society, and how those may get eroded (e.g. the joy of, and opportunities for, exercising creativity and problem solving, caring for children, caring for the elderly).

These are roles that, I’d argue, define the very essence of being human. Is trans-humanism really what most of us want for our children? Is that what our older selves want?

Q7: Have we arrived in the first phase of transhumanism?


There is certainly an increased desire for trans-humanism, and where there is a will, there is a way. Human history has been shaped by desire and reverence for ‘entities’ that transcend us. When such entities cannot be found, we imagine, evoke, and eventually build them.

Q8: Is it time that, alongside the IT industry, more and more humanities scientists are involved in the questions of the technological future?

Maybe too late

The humanities have long been involved in these questions. Historians and philosophers have been working on pattern recognition for centuries, and have noted that at times it works and at other times it doesn’t.

  1. In addition to a Universal Declaration of Human Rights, do you need a Universal Declaration of cyborg / robot rights?
  2. Who has to be protected from whom?
  3. What importance do programmers have in the future?
  4. How can one ensure that there is a universal catalog of ethical behavior in artificial intelligence?
  5. Isn’t it easier to teach artificial intelligence an ethical framework along which decisions are made?
  6. Do man and machine merge?
  7. Which social, ethical, cultural and political consequences arise from this?
  8. Are there technical or biological limits to the merger, and where do they currently lie?
  9. Why do we no longer talk about cyborgs?
  10. How human has AI to be? Is human society ready for AI?
  11. What are the future challenges in coping with artificial intelligence and transhumanism?
  12. Have we arrived in the first phase of transhumanism?
  13. Is it time that, alongside the IT industry, more and more humanities scientists are involved in the questions of the technological future?

“Fear sells, and articles using out-of-context quotes to proclaim imminent doom can generate more clicks than nuanced and balanced ones.” {AI Q&As}. The media should support mature and informed conversations on this topic. This blog post was drafted as a basis for conversation with the media.

(My) research context

I work at the intersection of human-computer interaction (HCI) and software engineering (SE). My research focuses on human values in computing, particularly values in software production, but not specifically in AI.

My expertise is in applied digital innovation (i.e. in health, the environment, and social change). I examine the role and impact of digital innovation on society (and vice versa), often through rapidly prototyped technologies. I have also put some narrative about my background below to further contextualize my answers.


I did Philosophy and Social Psychology as my first degree/masters in Italy (Università Cattolica, Milano). Two weeks after my viva, I left Italy for Ireland to learn English. I only planned to stay for three months. A year later, I was awarded a place on an MSc in Multimedia Systems Design at the Engineering Department of Trinity College Dublin, Ireland; after that, I was offered a PhD at the Computer Science Department, University College Dublin.

My PhD was in a branch of AI (Case-Based Reasoning); it was 15 years ago. Immediately after my PhD, I left academia and the field precisely because I felt uncomfortable with the research domain and its applications (i.e. user profiling for commercial purposes). After that, I worked as a project manager for an EU agency. The focus was on peace building and reconciliation through technology and the economic development of EU cross-border regions.

Fast forward several years, and I am back in academia as a lecturer in computer science. Underpinning my research is a passion for understanding the interplay between human values and computing. This comes from years of working on digital innovation research partnerships with vulnerable parts of our society.

Measuring Values in SE

Figure 1: Values as mental representations to be studied on three levels: system (L1), personal (L2), and instantiation (L3) level.

“Measuring Values in Software Engineering” is our latest peer-reviewed work. It has been accepted for presentation at the 12th International Symposium on Empirical Software Engineering and Measurement (ESEM 2018), 11-12 October 2018, Oulu, Finland. Accepted on 13th August 2018. Pre-print version.


Background: Human values, such as prestige, personal security, social justice, and financial success, influence software production decision-making processes. Whether held by developers, clients or institutions, values are both highly subjective and deeply impactful on software outcomes. While their subjectivity makes some values difficult to measure, their impact on software motivates our research.

Aim: To contribute to the scientific understanding and the empirical investigation of human values in Software Engineering (SE).

Approach: Drawing from experimental psychology, we consider values as mental representations to be investigated on three levels: at a system (universal, L1), personal (abstract, L2), and instantiation level (concrete, L3).

Method: We design and develop a selection of tools for the investigation of values at each level. As an example, we focus on the design, development, and use of a Values Q-Sort built by mapping Schwartz’s universal values model onto the ACM Code of Ethics.

Results: Q-statistic sorts work with smaller samples than R-statistic surveys; from our study with 12 software practitioners, it is possible to extract 3 values ‘prototypes’ indicative of an emergent typology of values considerations in SE.

Conclusions: The Values Q-Sort combines the extraction of quantitative values prototypes that indicate the connections between values (L1) with rich personal narratives (L2) reflective of specific software practices (L3), and as such, it supports a systematic, empirically-based approach to capturing values in SE.
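For readers unfamiliar with Q methodology, the core analytical move can be sketched in a few lines of code. This is a minimal, illustrative sketch only – the data is randomly generated, not the study’s, and the paper’s actual analysis pipeline is not reproduced here. The key idea it shows is that Q-statistic analysis correlates *people* rather than items, which is why it works with small samples such as 12 practitioners:

```python
import numpy as np

# Illustrative Q-sort data (NOT the paper's): 12 practitioners each rank
# 9 value statements on a forced quasi-normal grid from -2 to +2.
rng = np.random.default_rng(0)
grid = np.array([-2, -1, -1, 0, 0, 0, 1, 1, 2])
sorts = np.array([rng.permutation(grid) for _ in range(12)])  # 12 x 9

# Q methodology correlates people (rows), not statements: the 'variables'
# are the 12 participants, so the sample of statements can stay small.
corr = np.corrcoef(sorts)  # 12 x 12 person-by-person correlation matrix

# Principal components of the person correlations; each retained factor
# is a candidate values 'prototype' shared by a group of participants.
eigvals, eigvecs = np.linalg.eigh(corr)   # eigh returns ascending order
order = np.argsort(eigvals)[::-1]          # re-sort to descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_factors = 3  # the paper reports 3 prototypes from 12 practitioners
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])

print(corr.shape)      # person-by-person matrix: (12, 12)
print(loadings.shape)  # how strongly each person loads on each prototype
```

In a real Q study the retained factors would then be rotated and interpreted against the participants’ narratives; the sketch stops at extraction.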

Values & Workplace

How can we bring into the open, and address, the personal, institutional, and political values tensions manifesting in our workplaces?

The ViC team writes about the values tensions observed in academia. Below is a re-post of the original article posted on the ACM Interactions magazine blog on 25th June 2018.

Values Tensions in Academia: an Exploration Within the HCI Community

Figure 1. Wish you were here – by @_JPhelps

February and March 2018 saw the largest ever industrial action in the UK’s higher education sector. Whilst the cause of the strike was changes to the USS pension scheme, the picket lines were sites for conversations about many other issues within academia. Whether it was dissatisfaction with the corporatisation of universities, the precarious working conditions of early career researchers, or over-work, there was a clear sense that the values held by those striking were in sharp contrast with the realities of university life. The ‘depth of feeling’ was often bitter and angry, and the frustration with today’s higher education system palpable.

Whilst many reported a loss of trust in the system and in their own institutions, fresh hope and renewed energy came from activities such as the teach-outs: open teaching and discussion sessions held off campus. These initiatives offered concrete examples of different ways of engaging with learning and research across disciplines and roles; ideas were proliferating like a “thousand butterflies”. Many now feel that very broad bridges are needed to start filling the values gap that has manifested itself so clearly during the strikes; as Prof Stephen Toope, Cambridge University Vice Chancellor, puts it, “the focus should be on what values our society expects to see reflected in our universities, not just value for money”.

Figure 2. Schwartz’s values model. Adapted from (Schwartz 2012).
From the HCI community standpoint, a similar values tension was captured by a survey carried out as part of the ‘Values in Computing’ (ViC) workshop at CHI2017. With just over 150 respondents, the survey explored views about the values driving HCI research at a personal and institutional level. The survey was designed around Schwartz’s values model (Figure 2) and tried to capture relationships (i.e. lines of friction) within and between the personal and institutional values held by the HCI community. Although the survey was exploratory and the sample may not be representative of the whole HCI community, the numbers did show tensions within the community.

Overall, most respondents felt their values matched their institution’s to some extent (57% of the respondents). However, almost a third reported that their values either did not match (26.5%) or did not match at all (6%) those of their Institution. The survey also asked respondents to rank a list of options according to which were most highly valued in their work; this is where the values tensions became manifest. As the bar charts in Figure 3 show, the top three most highly ranked options were ‘making the world a better place’; ‘competence and intellectual independence’; and ‘relationships with colleagues, students and partners’. These statements were designed to represent Universalism, Self-Direction, and Benevolence in Schwartz’s values model and followed previous research guidelines. Positive societal impact, autonomy of thought and meaningful relationships were thus the things that these computing professionals most valued about their work. By contrast, ‘financial recognition’ (Power) was the least valued.

Figure 3. Personal and organizational values ranking.

Respondents felt that their institution most highly valued ‘financial success’, ‘international prestige’ and ‘league tables/rankings’. All three options belong to the Power values group. By contrast, the bottom three options were ‘making the world a better place through work, research and teaching’, ‘staff relationships with colleagues, students and research/work partners’, and ‘supporting the well-being of staff, students and partners’. Thus, the things that the respondents most valued – with the exception of intellectual autonomy – were seen not to be highly valued by their institutions.

The implications of this friction between personal and institutional values cannot be ignored and deserve further attention. Even if this tension may be, to a certain extent, ‘perceived’ or ‘inevitable’ or both, the widening of the values gap may have problematic consequences. For example, recent research and extensive media coverage worldwide suggest high levels of stress and mental health problems within academia. However, the emphasis of these studies is often on the temporal and mental burdens created by the demands of the workplace, and the need for raising awareness and promoting self-care (i.e. through apps and physical activity).

Something that isn’t often talked about is whether the values tensions may have health and well-being implications, and the need for digging deep into the root causes of these tensions before defaulting to self-care coping mechanisms. This may be particularly the case in the HCI community, as many of us not only grapple with personal challenges, but also with the challenges of a much deeper and broader “existential crisis”. This is especially important because much of HCI research focuses on designing and developing digital technologies that can change people’s lives rather than examining how digital technologies come to life. We need to look into values tensions not only for the end-users and broader stakeholders, but for us – researchers, educators, designers, and developers.

To this end, we argue that a better understanding of values is needed, especially when it comes to computing technologies. From a research and practice perspective, this means to build on, but also go beyond, the substantial corpus of research in Ethics and the well-established research field of values sensitive design (VSD).

Our question for the HCI and broader computing community is how to bring to the open the personal, institutional, and political values tensions manifesting in our workplaces (i.e. academia, research). In other words, how can we support the next generation of computing professionals with the deliberative, technical, and critical skills necessary to tell the difference between what is worth pursuing from what is potentially harmful to self and society? And how can we create and support institutions where this civic purpose can flourish?

Thank you!

This work is part-funded by the Engineering and Physical Sciences Research Council UK (Grant number: EP/R009600/1). Warm thanks go to our project partners and to the CHI2017 ViC workshop participants, who have jointly shaped the vision and direction of this research. A special mention also goes to the thousands of conversations had with colleagues and students within our School and across campus. More information about ViC and related work can be found at

ViC at UofT

Bluebug, Toronto, photo by m@ViC

Marie from team ViC has a visiting position at the DCI, University of Toronto, Canada (as a DCI Fellow in Digital Sustainability). This week she will be giving a talk on ‘Values in Computing, Connecting the Bits’.

This talk introduces some of the tools and emerging findings from ViC’s latest work and from activities carried out at UofT as part of her fellowship. A broader perspective on the issues connecting the tech industry, academic research, and governance will be thrown into the mix, as well as provocations from the current state of affairs and the metaphysical roots of the binary system.