July 1st, 2024

Artificial Stupidity

Despite the myths of commercial AI and exaggerated representations in popular culture, today’s artificial intelligence is neither neutral, nor super-intelligent, nor omniscient. Instead, it is full of flaws, unexpected errors, surprising mistakes, and weird phenomena. In contemporary debates, it is especially the issue of bias that has attracted attention as an omnipresent form of AI flaw. Artificial intelligence applications in fields such as policing, health care, face recognition, image recognition, and job applications are known to exhibit different forms of bias: algorithmic bias, dataset bias, world bias, economic bias, etc. (Pasquinelli and Joler 2021). Yet while this much-discussed “stupidity by accident” is undesired and considered a problem to be solved, “stupidity by design” (Roldán-Gómez 2023) is a technological, aesthetic, and epistemic strategy and a problem-solving heuristic implemented on purpose that goes largely unnoticed.

In contexts ranging from video games and digital assistants to chat bots and social robots, “stupidity by design” is used to make AI operative and to negotiate the relationship between humans and technology. Here, “artificial stupidity” refers to a process of intentionally ‘dumbing down’ technologies. Artificial intelligence applications such as chat bots or social robots are programmed to make communicational errors or to act clumsily, naïvely, or in an intellectually simple manner in order to appear more human to humans (Trazzi and Yampolskiy 2018). They are designed to ‘pass the Turing test’, thus adding a thought-provoking variation to a long-standing anthropological topos of AI science, imaginaries, and science fiction films. In video game design, the paradoxical task is to imperceptibly and intelligently ‘dumb down’ AI-driven non-player characters in order to decrease the frustration of human players that results from overly powerful computer opponents and constant failure (Lidén 2003). Recent applications such as ChatGPT likewise demonstrate striking examples of ‘programmed limitations’ by frequently emphasizing their lack of consciousness and by apologizing for any error, misinformation, or communicative incapacity in a disturbingly servile manner. This servility, in turn, produces a kind of unintended social incompetence and awkwardness in human–machine interaction, while users simultaneously derive pleasure from effectuating and uncovering this kind of “stupidity”.

The pleasure of revealing the “stupidity” of AI even seems to drive much of the interaction with everyday AI technologies such as Dall-E, Midjourney, or ChatGPT. Users are amused and astonished by ridiculous outputs such as six fingers or three legs on a person in AI-generated images—errors humans would not usually make. They experiment with regular or nonsense prompts in order to test the “intelligence” of image and text generators; to creatively trick generators into making such mistakes or outputting confused results; or to demonstrate NLP’s incapacity to understand humor. Dall-E Mini, branded as naïve and childlike, is reported to be “fun to play with” precisely because of its flaws, errors, and grotesque aesthetics (O’Meara and Murphy 2023). Here, aberrations, distortions, and noise are welcomed for their inspirational potential. This may include experimenting with negatively weighted prompts, i.e. instructions to produce something that is as far away as possible from the given input, which results in weird phenomena and aesthetics.

Besides these playful interactions, there are also more serious interventions. Scholars and artists alike test the limitations and unintended implications of computer vision tools and image generators, especially attempting to uncover capitalist, historical, or gender biases, i.e. the “human stupidity” inscribed into data sets and generative models (Moreschi and Pereira 2021; Ariel et al. 2021; Salvaggio 2022; Offert and Phan 2022; Offert 2023).

Recently, tactics of dumbing down AI through “data poisoning” have been used by artists as a form of resistance to the unauthorized incorporation of their works into training data sets, raising ethical and juridical questions on both sides: that of the artists and that of generative AI (Heikkilä 2023).

Other artists experiment with tactics of digital camouflage, obfuscation, and “algorithmic opaqueness” in order to evade AI-based systems of identification, recognition, and tracking, thus subversively rendering artificial intelligence “blind” and “dumb” (Alloa 2022).

The conference will address these heterogeneous forms of “artificial stupidity” and discuss to what extent they are actually forms of human or “natural stupidity”, as it is sometimes called (Goriunova 2022; Rich and Gureckis 2019; cf. Broussard 2019). It equally invites participants to examine the technological, aesthetic, and media-cultural strategies and objectives implemented to “dumb down” AI by design, as well as the unintended failures of AI and the ramifications of stupid errors. Of particular interest is also the oscillation between planned and unplanned forms of stupidity and the affective, playful, epistemic, or political negotiations implied in the process.

Following the understanding that stupidity is not a lack of intelligence or simply its opposite (Golob 2019; Falk 2021; Goriunova 2021; 2012), we propose to consider ‘stupidity’ as a productive epistemic, methodological, and critical category for studying and questioning AI technologies and practices. The conference therefore also welcomes contributions that discuss different conceptualizations of stupidity and its relationship to idiocy, foolishness, errors, and chance.

The international conference is organized by the research network “AI and Visual Media” and will take place at the University of Art and Design Offenbach am Main on July 25-26, 2024. “AI and Visual Media” is a research network bringing together scholars from media studies, data and algorithm studies, cultural studies and contemporary art, who are examining visual AI and the effects it has on visual culture and contemporary society.


Text taken from the conference programme.