The European Commission is taking a different approach to protecting its citizens from unethical applications of artificial intelligence than that taken by United States legislators. Rather than legislating to force companies to audit their systems for problems such as bias and to put mitigation steps in place, the Commission is taking the moral high ground. It has appointed a team of specialists, the High-Level Expert Group on Artificial Intelligence (AI HLEG), comprising academics, industry and legal professionals, and representatives from NGOs. This team has developed a set of guidelines, grounded in fundamental rights, for companies building AI systems to adhere to.

In effect, the Commission is proposing a voluntary self-regulation scheme, similar to schemes that guide the media in their day-to-day work, such as the Council of Europe’s Guidelines on Safeguarding Privacy in the Media.

According to the guidelines, trustworthy AI systems should:

  1. respect all applicable laws and regulations
  2. respect ethical principles and values
  3. be technically and socially robust

The guidelines identify 4 fundamental ethical principles that should be adhered to:

  1. respect for human autonomy
  2. prevention of harm
  3. fairness
  4. explicability

The guidelines put forward a set of 7 key requirements that AI systems should meet in order to uphold the ethical principles.

AI systems:

  1. should empower human beings
  2. must be resilient and secure
  3. should ensure full respect for privacy and data protection
  4. should be transparent
  5. must avoid unfair bias
  6. should benefit all human beings, including future generations
  7. should have mechanisms that ensure responsibility and accountability

The guidelines also offer a pilot assessment list intended to guide AI practitioners as they develop systems. It includes 146 questions to be asked throughout the development life cycle, such as:

Did you consider the appropriate level of human control for the particular AI system and use case?

or

Did you put in place measures to ensure that the data used is comprehensive and up to date?

or

Did you assess to what extent the decisions and hence the outcome made by the AI system can be understood?

There is a lot of value in these guidelines. They have clearly been created by a group of people who deeply understand AI and its implications for society, and who have thought hard about how to construct a logical, coherent approach to preventing harm from AI in the wild.

However, as an active practitioner in the field, I find it hard not to read them with a sinking heart. The likelihood that any company, even the biggest, will work through a checklist of 146 complex questions, necessitating a lengthy self-evaluation process, to guarantee compliance with guidelines that are barely in the public eye seems very low. For the foreseeable future, applications built on top of AI are a battleground, and companies remain under pressure to “move fast and break things”, as the famous Mark Zuckerberg phrase puts it.

As with so many things, real change will only come from one of two directions: pressure from legislation or pressure from consumers, i.e. how the users of AI applications react to them. As mentioned above, legislation is the path the USA seems to have chosen.

Consumer pressure will require educated consumers. The more educated users become about the problems with AI, the more they reject systems that show obvious bias and vote with their feet, and the more likely it is that companies will take these issues seriously. Here, famous cases like Amazon discarding its gender-biased recruitment-filtering AI will be important milestones that help people conceptualise the problem.

Fiction has an incredibly important role to play in this. Stories, written and filmed, help people to see behind complex systems and understand the possible consequences of the unchecked use of AI.

