Living with AI? How can we coexist with machines that produce content sometimes resembling that of humans?

A scientific note by Roberto Casati and Piera Maurizio of the Institut Jean Nicod (ENS-PSL)

Created on 30 January 2026
The rapid development of artificial intelligence raises questions across every field of knowledge. Roberto Casati, Piera Maurizio, Quentin Coudray, Alda Mari, and Gloria Origgi of the Institut Jean Nicod (ENS-PSL, CNRS, EHESS) have published the scientific note "Living with AI, A Philosophical Toolkit for Navigating the Conceptual Challenges of Artificial Intelligence Systems". It offers public policy recommendations aimed at redefining how artificial intelligence systems are understood.
 
Naval battle, Master of the Die after Giulio Romano (Rosenwald Collection)

While research in artificial intelligence has accelerated and spread widely in recent years with the emergence of increasingly capable systems, research into its concepts and consequences has been caught off guard. Focusing in particular on users' interactions with artificial intelligence systems, the authors show that these tools are regularly anthropomorphized. This raises a central question: how can we coexist with machines capable of producing content (texts, images, etc.) that sometimes resembles human output so closely as to be mistaken for it? At stake is the redefinition of four concepts: consciousness, creativity, meaning, and personhood, understood as the status of a person who can be held responsible for their actions. According to the authors, the use of artificial intelligence must therefore be regulated in order to preserve epistemic integrity, prevent the transfer of responsibility, and steer human–AI interaction on socially and democratically grounded terms.

Read the note in full

Executive Summary

Living with Artificial Intelligence: Conceptual Engineering for Everyday Human–AI Interaction

Artificial intelligence systems, especially generative AI such as chatbots, text generators, and image models, are increasingly embedded in everyday life. Ordinary users now routinely interact with systems that produce fluent language, convincing images, and context-sensitive responses. These systems challenge some of our most fundamental concepts: consciousness, creativity, meaning, agency, and personhood. While technical and legal debates around AI are advancing rapidly, our shared conceptual framework for understanding and governing these systems has not kept pace.

This policy brief argues that many current difficulties in AI governance stem from conceptual confusion. We often talk about AI using categories originally developed to describe human mental life, even when those categories no longer apply straightforwardly. As a result, public discourse, design choices, and policy debates risk being driven by misleading metaphors and speculative futures rather than by the realities of how AI systems function and how people actually interact with them.

Scope and Approach

The brief focuses deliberately on everyday interactions between non-expert users and AI systems, such as conversational chatbots, generative text and image tools, and assistive applications used in education, communication, and cultural production. It does not address military, industrial, or highly specialized professional uses of AI, although many of the conceptual tools developed here may later be extended to those domains. Methodologically, the brief adopts conceptual forcing: a pragmatic philosophical strategy that stipulates clear working assumptions in order to enable concrete reasoning and decision-making. In particular, the brief proceeds on the assumption that current AI systems are not conscious, not creative in the human sense, not moral agents, and not bearers of meaning or responsibility – even though they are often perceived as such by users. The central question, therefore, is not what AI systems “really are,” but how we should live with machines that convincingly simulate human-like capacities.

Key Findings

Through four case studies – consciousness, creativity, meaning, and personhood – the brief shows how AI systems generate powerful illusions that shape user behavior, trust, and social expectations:
•    Consciousness: AI systems simulate attention, memory, and emotional responsiveness, triggering intuitive attributions of sentience. These attributions are driven by surface cues and interactional design, not by genuine experience.
•    Creativity: AI systems generate novel and valuable outputs without intention or expressive aims, destabilizing traditional criteria for authorship, originality, and artistic value.
•    Meaning: AI-generated texts resemble human communication but lack communicative intent and truth commitment, producing “quasi-texts” that risk polluting epistemic environments.
•    Personhood: Treating AI systems as persons can blur responsibility and displace accountability from designers and institutions to machines that cannot be morally responsible.

Across all cases, the core risk is not metaphysical error but epistemic and normative drift: over-trust, uncritical deference, misattribution of responsibility, and the erosion of practices that sustain human agency, interpretation, and judgment.

Policy Orientation

The brief argues that these challenges arise at the point of design, not only at deployment or use. Design choices – both in system architecture and in training data – shape how users interpret AI systems and how social norms evolve around them. Conceptual engineering must therefore be integrated upstream into AI development and governance. Rather than proposing a comprehensive regulatory framework, the brief offers actionable policy recommendations organized around six priorities:
1. Promoting conceptual hygiene in public discourse
2. Integrating conceptual engineering into policy design
3. Guarding against misleading anthropomorphism
4. Reinforcing recognition of the human role in creative and communicative acts
5. Monitoring and protecting epistemic environments
6. Supporting shared, evolving conceptual infrastructure
These recommendations align with emerging legal frameworks, including the EU AI Act, while emphasizing that regulation alone is insufficient without sustained conceptual clarity.

Conclusion

Living with AI requires more than technical safeguards or compliance mechanisms. It requires re-engineering the concepts through which we interpret, design, and govern artificial systems. By clarifying what is at stake when we invoke notions such as consciousness, creativity, meaning, and personhood, conceptual engineering can help policymakers, designers, and users navigate AI’s societal impact with greater precision, responsibility, and democratic accountability.