For a Stronger Right to Explanation

Should automated decisions come with a stronger right to explanation? This question has become central to debates about algorithmic governance, fairness, and democratic accountability. In an era where machine learning models determine creditworthiness, employment prospects, bail eligibility, and even medical diagnoses, the opacity of these systems poses a profound challenge to individual autonomy and institutional legitimacy. On balance, the answer should be yes: a stronger right to explanation is not only desirable but necessary for a just society. The issue is not merely whether the proposal sounds attractive, but whether it improves public reasoning, accountability, and fair institutional design.

First, people deserve reasons when systems affect jobs, loans, or opportunities. This matters because public systems lose legitimacy when power operates without sufficient transparency or scrutiny. A serious argument therefore begins with the conditions of trust, not only with convenience. When a bank denies a loan using a proprietary algorithm, the applicant is left in the dark about why they were rejected. They cannot challenge the decision, learn from it, or hold the institution accountable. This asymmetry of information erodes trust in the very systems that govern modern life. Without explanation, individuals are reduced to passive subjects of automated authority, stripped of the ability to contest or understand the forces shaping their lives. The right to explanation restores a measure of equality between the individual and the institution, ensuring that power is exercised with justification rather than by fiat.

Second, explanation rights support accountability and contestability. The reasoning here concerns structure as much as outcome: incentives, information flows, and institutional habits all shape what follows. That makes the issue larger than one isolated case. When decision-makers know they must explain their reasoning, they are incentivised to design fairer, more robust systems. They are more likely to audit their algorithms for bias, test for edge cases, and ensure that outcomes align with stated goals. Contestability—the ability to challenge a decision—requires a meaningful explanation. Without it, appeals become empty rituals. For example, if a predictive policing algorithm flags a neighbourhood for increased surveillance, residents need to understand the factors behind that classification to contest potential discrimination. Explanation rights thus serve as a structural safeguard, embedding accountability into the fabric of automated decision-making.

Third, without explanation, power becomes harder to question. This point is persuasive because it connects principle with implementation rather than pretending the two can be separated. Public policy improves when strong values are translated into workable expectations. Consider the case of automated hiring systems. Studies have shown that such systems can perpetuate racial and gender biases if trained on historical data. Without a right to explanation, a rejected candidate cannot know whether the decision was based on legitimate qualifications or discriminatory patterns. The lack of transparency shields the system from scrutiny, allowing bias to persist unchecked. Explanation rights force organisations to confront the inner workings of their algorithms, promoting a culture of reflection and improvement. They also empower civil society, researchers, and regulators to identify systemic issues and advocate for change.
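To make such scrutiny concrete, one widely used audit is the "four-fifths rule" from US employment law, which flags adverse impact when one group's selection rate falls below 80% of another's. The sketch below is a minimal illustration with made-up numbers; the function names and data are hypothetical, not drawn from any real hiring system.

```python
# A minimal sketch of a four-fifths (80%) rule audit for adverse impact.
# All data here is hypothetical, purely for illustration.

def selection_rate(outcomes: list[int]) -> float:
    """Fraction of candidates in a group who advanced (1 = advanced)."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are conventionally treated as evidence of adverse impact."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical outcomes from an automated screening tool.
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # selection rate 0.7
group_b = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0]  # selection rate 0.3

print(f"Adverse impact ratio: {adverse_impact_ratio(group_a, group_b):.2f}")
# -> 0.43, well below the 0.8 threshold, so the tool would warrant review
```

An audit this simple is only possible when outcomes, and ideally the factors behind them, are visible to someone outside the vendor.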

A substantial counterargument is that complex systems are not always easy to explain accurately. This objection has force. Modern machine learning models, particularly deep neural networks, operate in ways that defy simple human interpretation. Requiring explanations could lead to oversimplified or misleading accounts, or force developers to use less accurate but more interpretable models. However, incomplete solutions are not necessarily bad solutions; the better question is whether the proposal improves the baseline of accountability and informed judgment. Even a partial explanation, such as identifying the most influential features in a decision, can empower individuals and regulators. Moreover, the field of explainable AI is advancing rapidly: post-hoc techniques such as LIME, SHAP, and counterfactual explanations approximate which inputs drove a trained model's output, offering meaningful insight without forcing a switch to simpler, less accurate models. The challenge of complexity does not justify abandoning the principle; it calls for thoughtful implementation and ongoing research.
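As an illustration of what such a partial explanation can look like, the sketch below uses SHAP to rank the features behind a single prediction. The credit model, training data, and feature names are hypothetical stand-ins, assuming a scikit-learn style classifier; a real deployment would explain a production model against real applicant records.

```python
# A minimal sketch of a feature-attribution explanation using SHAP.
# The model, data, and feature names are hypothetical.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

feature_names = ["income", "debt_ratio", "history_years", "open_accounts"]

# Synthetic stand-in for historical lending data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Explain one applicant's predicted approval probability: which
# features pushed the score up, and which pushed it down?
explainer = shap.Explainer(model.predict_proba, X[:100])
explanation = explainer(X[:1])

# Rank features by the magnitude of their contribution to the
# "approved" class (index 1); the sign shows direction of influence.
contributions = explanation.values[0, :, 1]
for name, value in sorted(zip(feature_names, contributions),
                          key=lambda pair: abs(pair[1]), reverse=True):
    print(f"{name}: {value:+.3f}")
```

Even this coarse ranking gives a rejected applicant something contestable: a concrete list of factors to dispute, correct, or improve.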

Another objection is that explanation rights could impose excessive costs on businesses and slow innovation. Yet this concern overlooks the long-term benefits of trust and fairness. Companies that embrace transparency often enjoy stronger customer loyalty and reduced regulatory risk. Furthermore, the right to explanation need not require full disclosure of proprietary algorithms; it can be limited to the key factors influencing a decision, protecting trade secrets while still providing meaningful accountability. Proportionality is key: the scope and detail of explanations can be calibrated to the stakes of the decision. High-stakes decisions like medical diagnoses or parole hearings warrant more detailed explanations than low-stakes ones like product recommendations.

For these reasons, the affirmative position remains stronger. The issue ultimately turns on how a democratic society protects trust, responsibility, and informed choice. A stronger right to explanation is not a panacea, but it is a vital tool for ensuring that automated systems serve human ends rather than subordinating people to opaque processes. As algorithms become more pervasive, the demand for explanation will only grow. Societies that embrace this principle will be better equipped to harness the benefits of AI while mitigating its risks. The path forward requires legislation that mandates explanation rights, investment in explainable AI research, and a cultural shift towards valuing transparency as a cornerstone of justice. The alternative, a world where automated decisions shape our lives without justification, is neither efficient nor fair. It is a recipe for resentment, alienation, and the erosion of democratic norms. We must choose explanation.