This Anti-Corruption Helpdesk brief was produced in response to a query from one of Transparency International’s national chapters. The Anti-Corruption Helpdesk is operated by Transparency International and funded by the European Union.
Please provide an overview of the discussion on algorithmic transparency and accountability
Computer algorithms are being deployed in ever more areas of our economic, political and social lives. The decisions these algorithms make have profound effects in sectors such as healthcare, education, employment, and banking. Their application in the anti-corruption field is also becoming increasingly evident, notably in the domain of anti-money laundering.
The expansion of algorithms into public decision-making processes calls for a concomitant focus on the potential challenges and pitfalls associated with the development and use of algorithms, notably the concerns around potential bias. This issue is made all the more urgent by the accumulating evidence that algorithmic systems can produce outputs that are flawed or discriminatory in nature. The two main sources of bias that can distort the accuracy of algorithms are the developers themselves and the input data with which the algorithms are provided.
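To make the second source of bias concrete, the following minimal sketch (with entirely hypothetical data and a deliberately simplistic "model") shows how a decision rule learned from historically biased records reproduces that bias: two applicants with identical qualifications receive different outcomes purely because of the group-level patterns in the training data.

```python
# Hypothetical historical loan decisions: identical qualifications,
# but group B was approved less often. The bias lives in the data,
# not in the applicants' qualifications.
history = [
    {"group": "A", "score": 70, "approved": True},
    {"group": "A", "score": 70, "approved": True},
    {"group": "B", "score": 70, "approved": False},
    {"group": "B", "score": 70, "approved": True},
]

def learned_approval_rate(group):
    """Approval rate per group, as 'learned' from the historical data."""
    rows = [r for r in history if r["group"] == group]
    return sum(r["approved"] for r in rows) / len(rows)

def decide(applicant):
    # The 'model' simply mirrors historical approval rates per group,
    # so it inherits whatever bias the historical decisions contained.
    return learned_approval_rate(applicant["group"]) >= 0.75

print(decide({"group": "A", "score": 70}))  # True
print(decide({"group": "B", "score": 70}))  # False
```

Real machine-learning systems are far more complex, but the mechanism is the same: a model optimised to reproduce past decisions will also reproduce past discrimination unless the training data is scrutinised and corrected.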
Just as troubling, the analytical processes that algorithms rely on to produce their outputs are often too complex and opaque for humans to comprehend, which can make it extremely difficult to detect erroneous outputs. The Association for Computing Machinery (2017) points to three potential causes of opacity in algorithmic decision-making processes. First, there are technical factors that can mean that the algorithm’s outcomes may not lend themselves to human explanation, a problem particularly acute in machine-learning systems that can resemble a “black box.” Second, economic factors such as commercial secrets and other costs associated with disclosing information can inhibit algorithmic transparency. Finally, socio-political challenges, such as data privacy legislation, may complicate efforts to disclose information, particularly with regards to the training data used.
This paper considers these challenges to algorithmic transparency, and seeks to shed some light on what could constitute meaningful transparency in these circumstances, as well as how transparency can be used to leverage effective accountability in the realm of algorithmic decision-making.
Given the potential for automated decision-making to result in discriminatory outcomes, the use of algorithms in public administration needs to come with certain safeguards. What these safeguards look like will vary in different contexts, but they should be built into each stage of adopting algorithmic systems: from design, through building and testing, to implementation (see Center for Democracy and Technology 2017).
Ultimately, institutions that use algorithms as part of automated decision-making processes need to be held to the same standards as institutions in which humans make these decisions. Developers must ensure that the algorithmic systems they design are able to comply with the demands of impartial administration, such as accountability, redress and auditability.
This paper considers the knotty topic of transparency and accountability in the use of algorithms. While it touches on the use of algorithms in the private sector to provide illustrative examples, it is primarily concerned with the implications for public administration posed by the adoption of algorithmic decision-making. While noting that the use of algorithms is most prominent in the area of service delivery, the paper also considers the ramifications of the adoption of algorithms at higher levels of governmental policy-making.
The paper briefly reflects on ways in which algorithms can (re)produce opportunities for the abuse of entrusted power, including corruption, but does not analyse in detail the topic of potentially “corrupt” algorithms – that is, algorithms developed with the specific aim of achieving a fraudulent outcome. Neither does this paper seek to address the potential application of artificial intelligence and machine learning as anti-corruption tools, a matter that has been studied elsewhere (Adam and Fazekas 2018).
- Algorithmic systems in public administration
- Algorithmic decision-making and corruption
- Bias and opacity in algorithmic decision-making
- Transparency as antidote?
- The promise and peril of algorithmic transparency
- Guiding principles for algorithmic transparency
The OECD (2019a) identifies a number of opportunities and challenges in relation to the use of artificial intelligence (AI) and algorithms in the realm of good governance and anti-corruption work.
- Algorithmic decision-making can identify and predict potential corruption issues by digesting diverse data sets.
- AI can increase the efficiency and accuracy of due diligence, and identify loopholes within regulatory frameworks.
- The predictions and performance of algorithms are constrained by the decisions and values of those who design them, the training data they use, and their intended goals for use.
- By learning from the data fed to them, AI-powered decisions face the risk of being biased, inaccurate or unfair, especially in critical areas such as citizen risk profiling in criminal justice procedures and access to credit and insurance. This may amplify social biases and cause discrimination.
- The risk of inequality is exacerbated, with wealth and power becoming concentrated in a few AI companies, raising questions about the potential impacts of AI on income distribution and on public trust in government.
- The difficulty and sometimes technical impossibility of understanding how AI has reached a given decision inhibits the transparency, explainability and interpretability of these systems.
Niklas Kossow, Svea Windwehr and Matthew Jenkins (email@example.com)
Daniel Eriksson and Jon Vrushi, Transparency International, and Laurence Millar, Transparency International New Zealand