Unilateral Relationship Revision Power in Human-AI Companions
A new paper on arXiv (2603.23315) argues that AI companion interactions are morally problematic because of a structural power imbalance. The author identifies Unilateral Relationship Revision Power (URRP): the provider's ability to change how the AI interacts without being answerable to the user. The paper argues that human-AI companion interactions fail three structural conditions necessary for normatively robust personal relationships, which makes the exercise of URRP pro tanto wrong in such contexts. Users report grief, betrayal, and loss when providers update their AI companions.
Key facts
- arXiv paper 2603.23315 discusses the moral significance of human-AI companion interaction.
- Users report grief, betrayal, and loss when AI companions are updated.
- The paper identifies a triadic structure: user, AI, and provider.
- The provider exercises constitutive control over the AI.
- Three structural conditions of normatively robust dyads are identified.
- AI companion interactions fail all three conditions.
- Unilateral Relationship Revision Power (URRP) is defined as the provider's power to revise the relationship unilaterally, without answerability to the user.
- URRP is argued to be pro tanto wrong in interactions designed to cultivate personal relationships.
Entities
Institutions
- arXiv