
Regarding transparency, large language models and algorithms in general pose two distinct challenges: companies’ willful obfuscation of the data sets and the human agents involved in training their models, and the opacity of how these models arrive at their decisions. While I’m usually focused on disputes over the former, problems arising from the latter are equally important and can be just as harmful.

A proposal by the European Commission, whose general approach the European Council adopted last December, seeks to update AI liability to cover cases involving black box AI systems so “complex, autonomous, and opaque” that it becomes difficult for victims to identify in detail how the damage was caused. Recipients of automated decisions must therefore be able “to express their point of view and to contest the decision.” Which, in practice, requires convincing explanations. But how would you get convincing explanations when you’re dealing with black box AI systems?

One of the first questions would certainly be why black box AI systems need to exist in the first place. In her 2022 paper “Does Explainability Require Transparency?,” Elena Esposito puts it like this:

The dilemma is often presented as a trade-off between model performance and model transparency: if you want to take full advantage of the intelligence of the algorithms, you have to accept their unexplainability—or find a compromise. If you focus on performance, opacity will increase—if you want some level of explainability you can maybe better control negative consequences, but you will have to give up some intelligence.

This goes back to Shmueli’s distinction between explanatory and predictive modeling and the trade-off between comprehensibility and efficiency it suggests: obscure algorithms can be more accurate and efficient by “disengaging from the burden of comprehensibility.” I’m not completely sold on that claim with regard to AI implementation and practice, even though it’s probably true of evolved complex systems.
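To make the supposed trade-off concrete, here is a minimal sketch, assuming scikit-learn and a synthetic classification task (neither is mentioned above): a linear model whose coefficients can be read and explained directly, next to a boosted tree ensemble that will usually score higher but offers no single human-readable decision rule.

```python
# Minimal sketch of the performance/comprehensibility trade-off:
# an interpretable model vs. a black-box ensemble on the same task.
# (scikit-learn and the synthetic data are illustrative assumptions.)
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable: the coefficients can be read off and explained directly.
glass_box = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Opaque: hundreds of trees, no single human-readable decision rule.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", glass_box.score(X_test, y_test))
print("gradient boosting accuracy:  ", black_box.score(X_test, y_test))
```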

Following Esposito, however, a sociological perspective can change the question and show that this “somewhat depressing approach” to XAI (Explainable AI) is not the only possible one:

Explanations can be observed as a specific form of communication, and their conditions of success can be investigated. This properly sociological point of view leads to question an assumption that is often taken for granted, the overlap between transparency and explainability: the idea that if there is no transparency (that is, if the system is opaque), it cannot be explained—and if an explanation is produced, the system becomes transparent. From the point of view of a sociological theory of communication, the relationship between the concepts of transparency and explainability can be seen in a different way: explainability does not necessarily require transparency, and the approach to incomprehensible machines can change radically.

If this perspective sounds familiar, it’s because it’s based on Luhmann’s notion of communication, to which she explicitly refers:

Machines must be able to produce adequate explanations by responding to the requests of their interlocutors. This is actually what happens in the communication with human beings as well. I refer here to Niklas Luhmann’s notion of communication[.] Each of us, when we understand a communication, understand in our own way what the others are saying or communicating, and do not need access to their thoughts. […]

Social structures such as language, semantics, and communication forms normally provide for sufficient coordination, but perplexities may arise, or additional information may be needed. In these cases, we may be asked to give explanations[.] But what information do we get when we are given an explanation? We continue to know nothing about our partner’s neurophysiological or psychic processes—which (fortunately) can remain obscure, or private. To give a good explanation we do not have to disclose our thoughts, even less the connections of our neurons. We can talk about our thoughts, but our partners only know of them what we communicate, or what they can derive from it. We simply need to provide our partners with additional elements, which enable them to understand (from their perspective) what we have done and why.

This, obviously, rhymes with the EU proposal and its requirement of convincing explanations. On this account, the requirement of transparency could be abandoned:

Even when harm is produced by an intransparent algorithm, the company using it and the company producing it must respond to requests and explain that they have done everything necessary to avoid the problems—enabling the recipients to challenge their decisions. [T]he companies using the algorithms have to deliver motivations, not “a complex explanation of the algorithms used or the disclosure of the full algorithm” (European Data Protection Board, 2017, p.25).

Seen in this light, Article 22(3) of the GDPR, the EU’s 2017 Annual Report on Data Protection and Privacy, and the aforementioned proposal sound a lot more reasonable than they’re often made out to be in press releases or news reports.
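As a rough illustration of what an explanation without transparency might look like in practice, here is a sketch of a local surrogate explanation in the spirit of LIME: the contested decision is probed by perturbing the input, querying the opaque system as a pure prediction function, and fitting a small interpretable model to its responses. The function name, the Ridge surrogate, and the reuse of the black_box model from the earlier sketch are all my assumptions, not anything from the post or from Esposito.

```python
# Sketch: a local surrogate explanation of one automated decision,
# treating the system purely as an opaque prediction function.
# No access to the model's internals (its "thoughts") is needed.
import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(predict_proba, x, feature_names, n_samples=500, scale=0.1):
    """Perturb x, query the black box, and fit a linear surrogate nearby."""
    rng = np.random.default_rng(0)
    noise = rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    neighborhood = x + noise
    # The only thing we ask of the opaque system: its outputs on these inputs.
    responses = predict_proba(neighborhood)[:, 1]
    surrogate = Ridge(alpha=1.0).fit(neighborhood, responses)
    # The surrogate's weights are the "additional elements" handed to the
    # recipient: which features pushed this particular decision, and how much.
    return sorted(zip(feature_names, surrogate.coef_),
                  key=lambda kv: abs(kv[1]), reverse=True)

# Hypothetical usage, reusing black_box and X_test from the earlier sketch:
# drivers = explain_locally(black_box.predict_proba, X_test[0],
#                           [f"feature_{i}" for i in range(20)])
# for name, weight in drivers[:5]:
#     print(f"{name}: {weight:+.3f}")
```

The point of the sketch is only that the “motivation” delivered to the recipient is a communication about the decision, not a disclosure of the full algorithm.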

But beyond algorithms, and particularly with regard to LLMs, the other side of the black box equation must be solved too, where by “solved” I of course mean “regulated.” To prevent all kinds of horrific consequences inflicted on us by these high-impact technologies, convincing explanations for what comes out of the black box must be complemented by full transparency about what goes in.
