AI as philosophy machine

  • Despina Papadopoulos
  • Kevin Walker

    Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

    Abstract

    Technology at present is covert philosophy; the point is to make it overtly philosophical.
    
—Philip E. Agre, 1997

    More informally, Eliezer Yudkowsky has said, ‘the field of A.I. can’t ever walk all the way across a room without tripping over a deep question.’ AI research is primarily undertaken by engineers. But Blackwell (2024) points out that ‘their explanations of what they were doing related to “learning”, “knowledge”, “thinking” and other philosophical categories’. He uses Agre’s classic essay ‘Toward a Critical Technical Practice’ (1997) to argue that engineers also need to understand the deep questions.

    We have taken up such critical technical practice, and by this we mean both the critical part and the technical part – we explore philosophical ideas through making. In this chapter we detail three key philosophical ideas we believe are most relevant to AI, and how we enact them through technical experiments.

    We embrace an expanded view of the post-humanities, one that includes nonhuman actors such as artificial and natural systems (Snaza et al., 2014). In this regard, AI is already a participant-observer in certain cultures, and looks set to become increasingly so. A key concept for us, therefore, is performativity. (This chapter therefore complements the one by Linnea Langfjord Kristensen in this volume.)

    If ‘intelligence’ or cognition can be defined as taking information in, performance means acting out. Our everyday lives constitute a permanent performance in relation to structures of power, according to philosopher Michel de Certeau (1988). We don’t passively consume culture but actively create and re-create it through the ways we dress, move, behave and interact, transforming abstract spaces into lived places through enactment and embodiment.

    This contrasts with how ‘performance’ is applied to AI systems – as a quantitative measure of how well they accomplish specific tasks. But as AI is increasingly an actor in the practice of our everyday lives, it needs to understand performance in human cultural practices – not to mention AI as a performative practice itself. Performativity describes how actions don’t merely represent reality but actively constitute it: the ways we move through familiar places, our gestures and poses in relation to other people and objects, how we carry ourselves in different environments – all constitute performative actions that create culturally-situated atmospheres, situations and places. In this regard, we will describe our work in video analysis, shifting focus from instrumental recognition of actions, objects and people to nuance, ambiguity, indecipherability, unpredictability, expression, identity, communication, affect, and embodied interactions.

    Embodiment is therefore our second key concept. Phenomenology is an important Western philosophical tradition here, but Buddhist and Taoist philosophy provide alternative frameworks for a more holistic understanding of space-time relationships, emphasising interconnectedness and non-linear spatial thinking, and challenging Western dualistic concepts such as the Cartesian mind-body split. In practice, this means developing more holistic, interconnected spatial reasoning models that move beyond linear approaches.

    Equally important in this is our third key concept, that of multiple perspectives. Haraway (1988) argues against knowledge claims emanating as if from nowhere – as AI language models typically do – instead proposing that partial, situated knowledge emerges from the embodied position of a knowing subject. This does not renounce objectivity or rationality, but instead makes them possible, because it makes explicit the social and political positions from which knowledge is produced.

    If every perspective is partial and biased, then every observer observes from a particular position (both geographically and politically) and is also part of the system under observation – a core insight of second-order cybernetics. This raises the ethical question of whether we like what we observe (Maturana and Varela, 1991). Following this, an AI system should be able to reflect upon its actions as well as its descriptions, using iterative feedback. We specifically explore this through ‘spatial intelligence’ in AI that is distributed in small devices in an environment or situation.

    Taken together, these philosophical strands address the following questions: when AI is embedded deeply and invisibly in human cultures, could it enable new perspectives on these cultures? What if an AI model embraced ambiguity and acknowledged its limitations? And how can philosophy help AI models embrace spatial ambiguity, cultural specificity, nuance and unpredictability? These questions guide our effort in making the covert philosophy of AI overt.
    Original language: English
    Title of host publication: AI and Algorithmic Aesthetics
    Publisher: Routledge
    Publication status: In preparation - 2026

    UN SDGs

    This output contributes to the following UN Sustainable Development Goals (SDGs)

    1. SDG 8 - Decent Work and Economic Growth
    2. SDG 9 - Industry, Innovation, and Infrastructure
    3. SDG 10 - Reduced Inequalities

    Keywords

    • AI
    • aesthetics
    • art
    • curatorial

    ASJC Scopus subject areas

    • Arts and Humanities (miscellaneous)
    • General Arts and Humanities
    • Philosophy
    • Visual Arts and Performing Arts
    • Artificial Intelligence

    Themes

    • Data Science and AI

    • AI and Algorithmic Aesthetics

      Shanbaum, P. (Editor) & Walker, K. (Editor), 2026, (In preparation) Routledge.

      Research output: Book/Report › Anthology or Edited Book › peer-review
