Post by bear on Dec 17, 2019 12:17:05 GMT 7
People should be held accountable for AI and algorithm errors, rights commissioner says.
After the robodebt scandal, Ed Santow says it’s time to set rules to govern how such new technologies are used. People need to be held accountable for the mistakes AI and algorithms make on their behalf, such as those seen in the government’s robodebt scandal, according to Australian human rights commissioner Ed Santow.
The proposal comes in a new discussion paper on the impact of new technologies on human rights in Australia, released by the commission on Tuesday.
After the Australian government backed down on the use of automatic debt notices based on income averaging, and had legislation for its facial recognition system rejected by a government-dominated parliamentary committee, Santow said it was time to set some rules to govern how these new technologies are used.
“Robodebt is just a prominent example of data science and government AI being used in decision-making,” he said.
All government use of AI should be enshrined in legislation, he said.
“It should prompt a process of making sure people’s basic human rights are protected,” he said.
“That’s the whole point of having statements of compatibility with human rights accompanying any new bill – it throws up any problem before they come to pass.”
The paper proposes each use of AI by government should be accompanied by a cost-benefit analysis and public consultation before it is brought in. Once a system is in place, people should be able to have an AI-led decision explained to them in a non-technical way.
“When you have black-box AI, particularly where the stakes are really high, then you can have an individual left with this unsettling feeling that, say, they’ve been denied a bank loan, and maybe it’s because of their gender, maybe it’s not, but if you can never scratch the surface and find out why the decision was made then you’ll never be able to enforce your basic equality rights,” Santow said.
The commission has also proposed that legislation should be introduced establishing that a person is ultimately responsible, and legally liable, for the decisions made by AI.
“Responsibility is critical because there has to be a chain of legal liability and then having really meaningful human oversight and intervention,” he said.
“You need to make sure the human is properly empowered to identify when things might be going wrong and intervene to correct them.”
Ultimately, he said, government should be the model user of AI.
“It is simultaneously the regulator and rule setter on the one hand, and on the other hand it is one of the most enthusiastic adopters of AI,” he said.
“What it needs to do is set and enforce really clear rules that protect our communities based on human rights. It also needs to lead the way in adopting good practices itself and showing how to protect human rights when you use AI.”
The commission has also called for a moratorium on the use of facial recognition technology in areas such as law enforcement – as envisioned by the failed legislation – until a framework that includes human rights protections has been developed in consultation with the Australian Human Rights Commission and the information commissioner.
The parliamentary joint committee on intelligence and security took the rare step of rejecting the government’s proposal for its facial recognition system, which links together passport photos with driver’s licence photos and other forms of face IDs, on the grounds that the privacy protections weren’t sufficiently outlined in the legislation.
Santow said facial recognition technology needed to be used carefully in areas such as law enforcement because of the impact it would have on human rights when issues such as false positives arise.
It could lead to people being falsely arrested and detained, for example. Santow said the planned uses for facial recognition technology should also be enshrined in the legislation, so people know exactly how their personal information will be used.
“When government is using a piece of technology that has a significant impingement on our basic human rights, it should do so transparently and democratically,” he said.
“What we are seeing is a growing community distrust of these new technologies, and frankly there are some strong reasons to be concerned about the misuse of AI and other new tech.”
The discussion paper makes 29 proposals, including a complete review of AI use in government, and the proposal for an AI commissioner that would help guide government agencies on how to implement AI effectively with human rights in mind.
The commission is accepting submissions on the consultation paper until March 10, 2020.
www.theguardian.com/law/2019/dec/17/people-should-be-held-accountable-for-ai-and-algorithm-errors-rights-commissioner-says