Three Democratic Senators are taking a strong stance on regulating major companies that use artificial intelligence in applications that have a significant impact on people’s lives. They have put forward a bill, known as the Algorithmic Accountability Act, that directs:
“… the Federal Trade Commission to require entities that use, store, or share personal information to conduct automated decision system impact assessments and data protection impact assessments.”
Algorithmic Accountability Act of 2019
Let’s unpack this a bit.
“The term ‘automated decision system’ means a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts consumers.”
This seems well drafted. The emphasis on decision making captures the essence of artificial intelligence, while the extension to the facilitation of human decision making brings recommendation systems, as well as personal assistants, within the scope of the bill. The techniques mentioned are pretty much a catch-all, so no worries there. In case it’s not clear (it wasn’t to me), the term consumer here is simply a synonym for individual and carries no implication of being party to a monetary transaction.
“The term ‘automated decision system impact assessment’ means a study evaluating an automated decision system and the automated decision system’s development process, including the design and training data of the automated decision system, for impacts on accuracy, fairness, bias, discrimination, privacy, and security…”
Again, well drafted. A system that is inaccurate, unfair, biased, or discriminatory, or that invades privacy or compromises security, should certainly be flagged and steps taken to address the issues. Obviously this raises the question of what exactly bias means in a machine learning context, where every system is designed to build in preferences towards certain groups based on the data, but that is a subject for philosophers, courts and public opinion. It might not be easy to define bias, but people can recognise it when they see it.
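To make the bias point slightly more concrete, here is a minimal sketch (in Python, with entirely made-up data) of one simple metric an auditor might compute: the gap in favourable-outcome rates between groups, sometimes called the demographic parity difference. This is just one possible lens on bias, not anything the bill itself prescribes.

```python
# A minimal sketch of one way an auditor might quantify "bias" in an
# automated decision system: compare favourable-outcome rates across
# groups (demographic parity). The audit log below is made up.

from collections import defaultdict

# Hypothetical audit log: (group label, decision) pairs, where decision
# is 1 if the system produced the favourable outcome (e.g. loan approved).
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)
favourable = defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    favourable[group] += decision

rates = {g: favourable[g] / totals[g] for g in totals}
print("Favourable-outcome rate per group:", rates)

# Demographic parity difference: the gap between the highest and lowest
# group rates. A large gap is a signal worth investigating, not proof of
# discrimination on its own.
gap = max(rates.values()) - min(rates.values())
print("Demographic parity difference:", gap)
```

Even this toy example shows why defining bias is hard: a non-zero gap may reflect discrimination, or it may simply reflect genuine differences in the underlying data, which is exactly the judgement an impact assessment would have to make.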
What kind of steps are required by the bill?
“… an assessment of the risks posed by the automated decision system to the privacy or security of personal information of consumers and the risks that the automated decision system may result in or contribute to inaccurate, unfair, biased, or discriminatory decisions impacting consumers; and (D) the measures the covered entity will employ to minimize the risks described in subparagraph (C), including technological and physical safeguards.”
Assess risks and minimise them. This is the crux of the bill. There is no attempt to push through legislation that would allow the FTC to stop a company from providing a service, or anything like that. So presumably the idea is to let the market decide: once consumers understand the biases in, for instance, a restaurant search and recommendation engine, they can simply stop using it.
So what is the route from the assessment to the consumer? How will people become aware of the problems with a system they use? Here the bill seems to break down:
“OPTIONAL PUBLICATION OF IMPACT ASSESSMENTS. The impact assessments under subparagraphs (A) and (B) may be made public by the covered entity at its sole discretion.”
Why is this the case? Why not require the assessments to be made publicly available? It’s not clear, but what is clear is that in the end this is really an appeal to the consciences of the corporations the bill targets (to qualify, a company must have more than $50m in average annual gross receipts over the last three years, or possess or control personal information on more than 1m consumers or 1m consumer devices; data brokers are also included).
The real question is: can these companies, most of which are publicly traded, afford to have consciences when their shareholders are demanding that they fully exploit their gushers of data with ever smarter algorithms? GAFAM might be able to afford to employ data scientists just to perform audits, but what about the tier below, companies already struggling to keep up with the breakneck pace of development in AI?
