Recently, both Liberty (UK) and the ACLU (USA) have become deeply concerned about the impact of artificial intelligence on civil liberties.
Liberty’s remit to “challenge injustice, defend freedom and campaign to make sure everyone in the UK is treated fairly” and the ACLU’s mission to be the USA’s “guardian of liberty” put the well-publicised problem of AI’s bias against minorities squarely in their sights.
The two organisations are broadly in agreement, but their positions differ on how much human oversight is necessary. “AI must never be the sole basis for a decision which affects someone’s human rights,” states Liberty’s Hannah Couchman, while the ACLU says “…safeguards are necessary to give human beings a role in their creation, oversight, and deployment. Without democratic participation, we have no way to ensure that artificial intelligence isn’t exacerbating inequality and obstructing human agency — possibly, without our ever knowing.”
Liberty would prefer to see humans directly regulating the output of AI, whereas the ACLU wants oversight built into the AI development process. There are pros and cons to both approaches: directly regulating output is expensive and unlikely to be practical in most cases, but offers the most safety; overseeing the development process is liable to miss serious problems, but is far more likely to be put into practice.
In the end the better approach will depend on how far you agree with Charles Sumner’s assertion that “It is by compromise that human rights have been abandoned.”
What’s clear is that, for civil liberties advocates, there’s a new game in town, and it’s one with far-reaching consequences. As Ronald Reagan pithily put it, “Freedom is never more than one generation away from extinction.”
