The guidance, ‘Explaining decisions made with AI’, was launched last week by the Information Commissioner’s Office (ICO) and the Alan Turing Institute. It aims to tackle the legal and ethical pitfalls of AI decision-making by setting out four key principles: be transparent, be accountable, consider context and reflect on impacts.
Simon McDougall, the ICO’s executive director for technology policy and innovation, said: ‘Our draft guidance goes into detail about different types of explanations, how to extract explanations of the logic used by the system to make a decision, and how to deliver explanations to the people they are about. It also outlines different types of explanation and emphasises the importance of using inherently explainable AI systems.’
The consultation is open on the ICO website until 24 January, with a final version of the guidance due to be published later in 2020.