
The German Ethics Council’s Statement on “Humans and Machines”

Regarding the German ethics council’s published statement (PDF) on “Humans and Machines: Challenges Posed by Artificial Intelligence,” the first thing to notice is that it has been referenced and commented on widely on social media, especially by those who haven’t read it. The second thing to notice is that most of the commentary consists of personal opinions (or interpretations, I’ll come to that) by high-ranking authors of that statement, voiced in interviews and even on their own website, that don’t quite square with what they actually wrote.

For example, in this Süddeutsche Zeitung article, the ethics council’s chairperson and deputy chairperson are quoted as saying, respectively, “AI must not replace humans” and “AI applications cannot replace human intelligence, responsibility, and evaluation.” But what their published statement actually does is discuss, in much more nuanced ways, “the social dimension of the relationship between delegating, expanding, and reducing [and how] people can be affected differently by processes of delegating or replacing” (p. 255). Furthermore, the same newspaper article proclaims that “The German ethics council has now also dealt with questions relating to the relationship between humans and machines and has spoken out in favor of strict limitations on the use of AI,” and mdr aktuell quotes the council’s chairperson as speaking out for “pausing the further development of artificial intelligence.” All that’s quite curious, because none of it appears in the ethics council’s published statement itself.

Here are two detailed quotes from the published statement (the original German text is so horribly stilted that even Google Translate had its problems; I tried to smooth it out a bit, but don’t expect miracles). They paint a different and, as mentioned, more nuanced picture:

For such a context-specific perspective, the German ethics council looked into representative applications in medicine, K–12 (school) education, public communication, and public administration. Fields were deliberately selected where the penetration of AI-based technologies is very different, and where different extents of the replacement of previously human actions by AI can be illustrated. In all four fields, deployment scenarios are characterized by sometimes significant relationship and power asymmetries, which makes the responsible use of AI and the consideration of the interests and well-being of particularly vulnerable groups of people all the more important. Considering the different ways in which AI is used, and their respective degrees of delegation to machines, allows nuanced ethical considerations to be made. (p. 45)

And:

In order to prevent the diffusion of responsibility, the use of AI-supported digital technologies must be designed in the sense of decision support and not decision replacement. It must not come at the expense of effective control options. Those affected by algorithmically supported decisions must be given the opportunity to access the basis for the decision, particularly in areas with a high degree of intervention. This presupposes that, at the end of the technical procedures, decision-making people remain visible who are able and obliged to take responsibility. (p. 264)

If anything, the news media quotes from the council’s chairpersons are interpretations of their published statement, which is odd and raises questions. However, let’s proceed to an overview of what the ethics council’s published statement actually contains.

The first part is a historical summary and definition of “AI,” which includes the challenges in defining “intelligence” and “reason” and touches upon topics like authorship, intention, responsibility, and the relationship between humans and technology in general. All in all, it’s pretty solid, and I’m not going to nitpick.

The second part dives into four representative fields of application: medicine, K–12 (school) education, public communication, and public administration.

  1. The discussion of the first field, medicine, is solid and specific, and even includes psychotherapy. It’s a surprisingly positive view, provided further advances are conducted in a careful and risk-conscious manner, with adequate controls and protocols in place.

  2. The discussion of the second field, K–12 education, is excessively vague and unspecific and doesn’t come up with anything resembling actionable advice. Not only does it become clear right away that the responsible council members are hopelessly stuck in the learning model of cognitivism; they also insert a gratuitous swipe at behaviorism to broadcast their blissful obliviousness to conceptual innovations and applications in that field (up to and including game-based learning). The best they can come up with are unspecified advantages of “tutor systems” and “classroom analytics,” and the risks of “attention monitoring” and “affect recognition,” which, like, yeah sure. However, I suspect that the complete uselessness of this chapter merely reflects how utterly obsolete and unsalvageable K–12-based educational systems are in the Information Age.

  3. The discussion of the third field, public communication and how people form opinions, can be called pretty solid again, but sadly for the wrong reasons. On the one hand, their severe criticism is certainly valid—ranging from algorithmic timelines to sensationalism and clickbait to the challenges of content moderation to chilling effects to undesirable shifts in public discourse (basically the Overton Window effect, even if they don’t refer to it explicitly). Many of these arguments I presented myself, right before COVID-19 hit, as a consultant on right-wing extremism and discrimination in gaming communities. But here’s the hitch: this chapter’s relation to current advances in AI technologies is tenuous at best, and much of that criticism should be leveled as ferociously at good old traditional media as well, from newspapers to news channels and other TV formats. (Which I do in my presentations and recommendations.) So yes, the council’s basically correct here, and their case for regulation is reasonable. But the whole chapter strikes me as both too tangential and too narrow in scope.

  4. The discussion of the fourth and last field, public administration, is again solid and presents well-argued points and recommendations with regard to automatic and algorithmic decision-making systems, automation bias and algorithmic bias, and data-related systemic distortions and discrimination. There’s nothing to complain about here.

To sum it up, the ethics council’s published statement is neither as terrible as some have made it out to be, nor as useful in certain areas as it could be, and probably should be. For all its weaknesses, however, it doesn’t fall for AI hype, addresses actual rather than illusory risks, and presents views and recommendations not altogether different from what other experts and scientists say, including the authors of the “Stochastic Parrots” paper.

Which, I fear, is too nuanced for politicians to comprehend and act upon. And that high-ranking members of the ethics council themselves undermine their published statement by front-loading the public discussion with urgent opinion soundbites doesn’t bode well for rational discussions or a positive impact either.

----------
*As yet, no English-language version of the council’s published statement exists; all the quotes in this post are translated by yours truly with some help from Google Translate. (For this post’s preview image, by the way, click here.)
