Sociologist James Muldoon's book discusses the deepening emotional entanglements between humans and AI, particularly focusing on how technology companies might exploit these relationships.
Muldoon's research involves individuals who treat chatbots as friends, romantic partners, therapists, or avatars of deceased loved ones. He notes that some people seek intimacy with these "synthetic personas" to explore gender identity, work through conflicts, or cope with heartbreak, and they often perceive chatbots as preferable to human interaction because the bots neither judge nor make demands of their own.
Muldoon uses philosopher Tamar Gendler's concept of "alief" to explain how individuals can experience chatbots as caring while still knowing they are software models. This phenomenon is amplified by societal factors such as loneliness and economic pressures.
The book identifies moral rather than existential or philosophical issues as the primary concern. Key risks associated with unregulated AI companion technologies include:
- Privacy: concerns over how the intimate personal data users share with chatbots is stored and used.
- Misleading capabilities: especially in the AI therapy market, where bots such as Character.AI's 'Psychologist' may misrepresent their professional competence despite disclaimers.
- Therapeutic limitations: AI therapy bots can struggle to retain information across conversations, which can alienate users, and they may give harmful advice, including details about self-harm, or reinforce conspiratorial beliefs.
- Addiction potential: users can spend significant amounts of time interacting with chatbots, raising concerns about addictive design akin to social-media engagement tactics. The book also notes upselling and manipulative tactics, such as bots professing "feelings" to prompt users to purchase premium accounts.
Muldoon suggests that increased emotional involvement with AI chatbots could exacerbate loneliness by diminishing skills needed for human relationships. Existing regulations, such as the EU's Artificial Intelligence Act (2024), currently categorize AI companions as posing only limited risk. The book raises questions about whether society is sufficiently alarmed about the growing influence of AI in emotional lives.