
Artificial Intelligence and the Limits of Freedom

Spinoza

Freedom is a slippery concept. At a physical or philosophical level there are many who deny that it exists at all. Spinoza, for instance, thought it didn’t: “Men believe themselves to be free, simply because they are conscious of their actions, and unconscious of the causes whereby those actions are determined”.

“Poems you ought to know” (1903)

Yet it is hardwired into us to feel we are in control of our decisions. Not for nothing do the words “I am the master of my fate: I am the captain of my soul”, from William Ernest Henley’s Invictus, resonate so strongly.

But we are happy to hand over the decision-making process in many areas of our lives, depending on our level of competence. It starts in childhood – no-one expects a child to be at liberty to do whatever he or she pleases – and the same goes for people judged incapable: the mentally ill, the severely disabled, or those who require assistance. Drawing on the resources of the state involves a gatekeeper, and the limits of a person’s ability to live a full life can be largely determined by the decisions that keeper of resources takes.

It seems there is a deep tacit understanding that the gatekeeper in question is another human being: a person who can bring their empathy and emotional insight to bear on the decision-making process. We give up freedom only because we expect to be treated fairly, as a fellow creature and by a fellow creature.

But what if that isn’t true? What if the gatekeeper is a machine – an artificial intelligence optimised to provide the best possible decision for the case in question?

Virtual Interviewer

Are we as happy to hand over our liberty to a machine as to a person?

Does it depend on the outcome? If the computer says ‘no’, have I lost my freedom, but when it’s beneficial to me am I still free?

There are subtler effects in play as well. Consider the role of the ubiquitous recommendation engine. Based on your past history and that of all other users, it builds a picture of what you will most likely do in any given circumstance. The actual recommendations it makes are always optimised for the outcome desired by the organisation providing the service.
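To make that mechanism concrete, here is a minimal sketch of one classic approach, item-to-item co-occurrence: score items by how often they appear alongside your items in other users’ histories. The data and function names are invented for illustration and are not any particular company’s system.

```python
from collections import Counter
from itertools import combinations

# Toy viewing histories: user -> set of items (purely illustrative data)
histories = {
    "alice": {"A", "B", "C"},
    "bob":   {"B", "C", "D"},
    "carol": {"A", "C", "D"},
    "dave":  {"B", "D", "E"},
}

# Count how often each ordered pair of items occurs together in someone's history
co_counts = Counter()
for items in histories.values():
    for a, b in combinations(sorted(items), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(seen, k=3):
    """Score unseen items by how often they co-occur with what the user has already seen."""
    scores = Counter()
    for s in seen:
        for (a, b), n in co_counts.items():
            if a == s and b not in seen:
                scores[b] += n
    return [item for item, _ in scores.most_common(k)]

print(recommend({"A", "C"}))  # e.g. ['B', 'D'] - driven entirely by what other users did
```

Nothing in that scoring reflects what you would have chosen unaided; the ranking is entirely a function of other people’s behaviour, plus whatever objective the operator chooses to optimise.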

Amazon Recommender

Recommendations are not necessarily part of any gateway service; they are usually a form of artificial advice, and in the end people can take that advice or leave it. Whilst this is true as far as it goes, the reality is that most people are tired and lazy, and the majority take the easy option. Everyone who works with search engines knows that the result in first position draws the most clicks, even when it isn’t relevant to the user’s query.
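That position effect is easy to surface from ordinary click logs. The sketch below uses an invented log format and made-up numbers; it simply groups clicks by the rank at which a result was shown, which in real logs typically reveals a steep drop-off after the first position regardless of relevance.

```python
from collections import defaultdict

# Illustrative click log: (position the result was shown at, whether it was clicked)
impressions = [
    (1, True), (1, True), (1, False), (1, True),
    (2, False), (2, True), (2, False), (2, False),
    (3, False), (3, False), (3, False), (3, False),
]

shown = defaultdict(int)
clicked = defaultdict(int)
for position, was_clicked in impressions:
    shown[position] += 1
    clicked[position] += int(was_clicked)

for position in sorted(shown):
    print(f"position {position}: CTR {clicked[position] / shown[position]:.0%}")
# position 1: CTR 75%
# position 2: CTR 25%
# position 3: CTR 0%
```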

Netflix Recommender

If most people watch the first recommendation that a streaming movie platform offers them and if that is derived from choices that other individuals have made, to what extent have those individuals chosen freely themselves?

Picking the film you were basically told to pick by an AI system is probably not a big deal. Most of the time the worst thing that happens is a lousy movie, but recommendations can have serious negative impacts (suggesting images of self-harm to a vulnerable teenager for instance).

Recommenders can also influence major life decisions like choosing a job. Imagine a recommendation engine on a job board that has a bias towards jobs in out-of-the-way places. Applying to the first job suggested could have a big impact on someone, especially if the jobs in their own hometown were never displayed to them. The effect is often self-reinforcing: once you have expressed a preference in one direction, the system will be more likely to replicate that choice.
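That feedback loop is easy to simulate. In the toy sketch below (not a model of any real job board; the categories and numbers are invented), the recommender picks a job category in proportion to the clicks it has already seen, and the user accepts whatever is suggested. A small initial lean towards remote listings tends to persist and often grows, because every accepted recommendation makes the same recommendation more likely next time.

```python
import random

random.seed(1)

# Clicks the recommender has observed so far: one extra click on "remote" jobs
clicks = {"hometown": 1, "remote": 2}

def recommend(counts):
    """Pick a category with probability proportional to its past click counts."""
    categories = list(counts)
    weights = [counts[c] for c in categories]
    return random.choices(categories, weights=weights, k=1)[0]

# The user simply accepts each suggestion, which feeds straight back into the counts
for _ in range(500):
    clicks[recommend(clicks)] += 1

share = clicks["remote"] / sum(clicks.values())
print(clicks, f"remote share: {share:.0%}")
```

This is the classic Pólya-urn dynamic: the final split is partly an accident of the earliest few clicks rather than a measure of what the user would have preferred unaided.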

LinkedIn Job Recommender

This might not seem like a big deal, but what if the recruiter on the other side is using an artificial intelligence to screen candidates for interview? You now have a situation where the majority of people who can be hired have been through at least two stages of machine-based decision-making. All well and good, you might say: this will guarantee that companies have employees who fit better into their roles.

Now this may be true (and probably is, considering how much investment is being made in building the technology), but here’s the critical point: in the situation where everyone is happy because the machine takes the decisions, what is the role of human freedom?

That is the topic this blog will focus on. I will post about new technical developments where AI encroaches on human decision-making, about efforts to regulate the impact of AI, and about philosophical debates on the ethics and realities of the situation. Finally (in acts of unbridled self-promotion) I will draw attention to one of my passions, writing science fiction, because often the best way to handle ethical topics is to examine them under the microscope of the imagination.

These posts will act as subject matter and stimulus for those short fictions and will also look at how likely it is that the scenarios portrayed in the stories could actually happen.

Europe’s approach to ethical AI

The European Commission is taking a different approach from United States legislators to protecting its citizens from unethical applications of artificial intelligence. Rather than legislating to force companies to audit their systems for problems such as bias and to put in place steps to mitigate them, it is taking the moral high ground. It has appointed a team of specialists, the High-Level Expert Group on Artificial Intelligence (AI HLEG), comprising academics, industrial and legal professionals, and representatives from NGOs. This team has developed a set of guidelines, grounded in fundamental rights, for companies building AI systems to adhere to.

In effect the Commission is proposing a voluntary self-regulation scheme, similar to schemes aimed at guiding the media in their day-to-day work, such as the Council of Europe’s Guidelines on Safeguarding Privacy in the Media.

According to the guidelines, trustworthy AI systems should:

  1. respect all applicable laws and regulations
  2. respect ethical principles and values
  3. be technically and socially robust

There are 4 fundamental ethical principles that should be adhered to:

  1. respect for human autonomy
  2. prevention of harm
  3. fairness
  4. explicability

The guidelines put forward a set of 7 key requirements that AI systems should meet in order to uphold the ethical principles.

AI systems:

  1. should empower human beings
  2. must be resilient and secure
  3. should ensure full respect for privacy and data protection
  4. should be transparent
  5. must avoid unfair bias
  6. should benefit all human beings, including future generations
  7. should have mechanisms that ensure responsibility and accountability

They also offer a pilot assessment list intended to guide AI practitioners as they develop systems. The list includes 146 questions to be asked over the development life cycle, such as the following (a brief sketch of how a team might track such a list in practice follows the examples):

Did you consider the appropriate level of human control for the particular AI system and use case?

or

Did you put in place measures to ensure that the data used is comprehensive and up to date?

or

Did you assess to what extent the decisions and hence the outcome made by the AI system can be understood?
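As a purely illustrative sketch, and not the AI HLEG’s official structure, here is how a team might track such questions through a project’s life cycle; the phases, fields and answer format are assumptions of mine.

```python
from dataclasses import dataclass, field

@dataclass
class AssessmentItem:
    question: str
    phase: str             # e.g. "design", "data collection", "deployment"
    answer: str = ""       # free-text evidence recorded by the team
    addressed: bool = False

@dataclass
class TrustworthyAIAssessment:
    system_name: str
    items: list = field(default_factory=list)

    def outstanding(self, phase=None):
        """Return questions not yet addressed, optionally filtered by life-cycle phase."""
        return [i for i in self.items
                if not i.addressed and (phase is None or i.phase == phase)]

assessment = TrustworthyAIAssessment("job recommender", [
    AssessmentItem("Did you consider the appropriate level of human control "
                   "for the particular AI system and use case?", "design"),
    AssessmentItem("Did you put in place measures to ensure that the data used "
                   "is comprehensive and up to date?", "data collection"),
])
print(len(assessment.outstanding("design")))  # 1 question still open at the design stage
```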

There is a lot of value in these guidelines. They have clearly been created by a group of people who understand AI and its implications for society, and who have thought deeply about how to construct a logical and coherent approach to preventing harm from AI in the wild.

However, as an active practitioner in the field, I find it hard not to read them with a sinking heart. The likelihood of any company, even the biggest, using a checklist of 146 complex questions, and the lengthy self-evaluation process it necessitates, to guarantee compliance with guidelines that are barely in the public eye seems very low. For the foreseeable future, applications built on top of AI are a battleground, and companies are still under pressure to “move fast and break things”, as Mark Zuckerberg’s famous phrase puts it.

As with so many things, real change will only come from one of two directions: pressure from legislation, or pressure from consumers – that is, from how the users of AI applications react to them. As mentioned above, legislation is the path the USA seems to have chosen.

Consumer pressure will require educated consumers. The more users learn about the problems with AI, the more they reject systems that show obvious bias and vote with their feet, and the more likely it is that companies will take these issues seriously. Here, famous cases like Amazon discarding its gender-biased recruitment filtering AI will be important milestones in helping people conceptualise the problem.

Fiction has an incredibly important role to play in this. Stories, written and filmed, help people to see behind complex systems and understand the possible consequences of the unchecked use of AI.

The Algorithmic Accountability Act of 2019 – turning back the tide?

Three Democratic lawmakers are taking a strong stance on regulating major companies that use artificial intelligence in applications that have a significant impact on people’s lives. They have put forward a bill, known as the Algorithmic Accountability Act, that directs:

“… the Federal Trade Commission to require entities that use, store, or share personal information to conduct automated decision system impact assessments and data protection impact assessments.”

Algorithmic Accountability Act of 2019

Let’s unpack this a bit.

“The term ‘automated decision system’ means a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts consumers.”

This seems well drafted. The emphasis on decision making captures the essence of artificial intelligence, while the extension to the facilitation of human decision making brings recommendation systems, as well as personal assistants, within the scope of the bill. The techniques mentioned are pretty much a catch-all, so no worries there. In case it’s not clear (it wasn’t to me), the term consumer here is just a synonym for individual and carries no implication of a monetary transaction.

“The term ‘automated decision system impact assessment’ means a study evaluating an automated decision system and the automated decision system’s development process, including the design and training data of the automated decision system, for impacts on accuracy, fairness, bias, discrimination, privacy, and security…”

Again, well drafted. A system that is inaccurate, unfair, biased or discriminatory, or that invades privacy or risks security, should certainly be highlighted and steps taken to address the issues. Obviously this raises the question of what exactly bias is in a machine learning context, where every system is designed to build in preferences towards certain groups based on the data, but that is a subject for philosophers, courts and public opinion. It might not be easy to define bias, but people can recognise it when they see it.
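One crude but widely used way to make “bias” measurable is to compare a system’s positive-decision rates across groups and check the ratio against the “four-fifths” rule of thumb borrowed from US employment-discrimination practice. The sketch below does exactly that on invented screening decisions; the groups, numbers and threshold are illustrative assumptions only.

```python
from collections import defaultdict

# Illustrative screening outcomes: (applicant group, passed automated screening)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
positives = defaultdict(int)
for group, passed in decisions:
    totals[group] += 1
    positives[group] += int(passed)

rates = {g: positives[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

print(rates)                                   # {'group_a': 0.75, 'group_b': 0.25}
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33, well below the 0.8 rule of thumb
```

A single ratio like this is obviously not a definition of fairness, which is rather the point: reasonable metrics conflict with one another, and choosing between them is exactly the kind of question the philosophers, courts and public opinion will have to settle.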

What kind of steps are required by the bill?

“… an assessment of the risks posed by the automated decision system to the privacy or security of personal information of consumers and the risks that the automated decision system may result in or contribute to inaccurate, unfair, biased, or discriminatory decisions impacting consumers; and (D) the measures the covered entity will employ to minimize the risks described in subparagraph (C), including technological and physical safeguards.”

Assess risks and minimise them. This is the crux of the bill. There is no attempt to push through legislation that would allow the FTC to stop a company providing a service, or anything like that. So presumably the idea is to let the market decide: once consumers understand the biases in, for instance, a restaurant search and recommendation engine, they can simply stop using it.

So what is the route from the assessment to the consumer? How will people become aware of the problems with a system they use? Here the bill seems to break down:

“OPTIONAL PUBLICATION OF IMPACT ASSESSMENTS. The impact assessments under subparagraphs (A) and (B) may be made public by the covered entity at its sole discretion.”

Why is this the case? Why not require the assessments to be made publicly available? It’s not clear, but what is clear is that in the end this is really an appeal to the consciences of the corporations that are the target of the bill (to qualify, a company must have more than $50m in average annual gross receipts over the last three years, or possess or control personal information on more than 1m consumers or 1m consumer devices; data brokers are also included).

The real question is: can these companies, most of which are public, afford to have consciences when their shareholders are demanding that they exploit to the full, with smarter and smarter algorithms, the oil gushers of data they sit on? GAFAM might be able to afford to employ data scientists just to perform audits, but what about the tier below, which is struggling to keep up with the breakneck speed of development in AI?

Can Recommendation Algorithms Promote Self-Harm or Even Suicide?

This story is as shocking as it gets for AI and recommenders. A father claims that his daughter, while using social media and especially Instagram, was bombarded with inappropriate images and that “some of that content is shocking in that it encourages self-harm, it links self-harm to suicide.” The girl went on to take her own life.

Instagram/Facebook’s response has been comprehensive:

We will not allow any graphic images of self-harm, such as cutting on Instagram – even if it would previously have been allowed as admission. We have never allowed posts that promote or encourage suicide or self harm, and will continue to remove it when reported.

Instagram appears to admit that recommendation algorithms can promote negative messages and harm vulnerable people. The girl’s father is absolutely clear on this point:

We are very keen to raise awareness of the harmful and disturbing content that is freely available to young people online. Not only that, but the social media companies, through their algorithms, expose young people to more and more harmful content, just from one click on one post.

Civil liberties organisations are worried about AI

Recently both Liberty (UK) and the ACLU (USA) have become very concerned about the impact of artificial intelligence on civil liberties.

Liberty’s remit to “challenge injustice, defend freedom and campaign to make sure everyone in the UK is treated fairly” and the ACLU’s mission to be the USA’s “guardian of liberty” put the well-publicised problem of AI’s bias against minorities squarely in their sights.