ARTFEED — Contemporary Art Intelligence

Three Inverse Laws of Robotics for AI Interaction

opinion-review · 2026-05-05

Since the launch of ChatGPT in November 2022, generative AI chatbots have proliferated across software tools and search engines. The author warns of the societal risks posed by users accepting AI outputs without critical evaluation. To address these concerns, three 'Inverse Laws of Robotics' are proposed: first, avoid attributing human traits to AI, as this can cloud judgment; second, independently verify AI outputs, given their stochastic nature and potential for inaccuracy; third, take responsibility for decisions involving AI — "the AI told us to do it" is not an acceptable excuse. Although these laws are not infallible, they encourage careful engagement with AI, treating it as a tool rather than a decision-maker.

Key facts

  • ChatGPT launched in November 2022.
  • Generative AI chatbots are embedded in search engines, software development tools, and office software.
  • The author proposes three Inverse Laws of Robotics for humans interacting with AI.
  • Inverse Law 1: Humans must not anthropomorphise AI systems.
  • Inverse Law 2: Humans must not blindly trust AI output.
  • Inverse Law 3: Humans must remain fully responsible and accountable for AI use.
  • The laws are inspired by Isaac Asimov's Three Laws of Robotics.
  • The author notes that AI systems can produce factually incorrect, misleading, or incomplete output.
