AI's Role in Art: Four Scientific Studies on Emotional Intelligence, Attribution, Accessibility, and Classification
A collection of four scientific studies demonstrates how artificial intelligence is reshaping art creation, perception, and classification.

Researchers at Stanford, led by professor Leonidas Guibas, developed ArtEmis, an algorithm trained on 81,000 paintings from WikiArt and 440,000 emotional responses from over 6,500 participants. ArtEmis classifies paintings into eight emotional categories and can distinguish multiple emotions within a single work, as in Rembrandt's depiction of the beheading of John the Baptist.

In October 2018, Christie's auctioned Edmond de Belamy, an AI-generated portrait by the collective Obvious, for $432,500. A study by MIT and the Max Planck Institute for Human Development found that attribution of AI art depends on how information is presented to viewers: when no creator information was given, participants humanized the AI and credited it as the author.

Platforms like Artbreeder (formerly Ganbreeder) let anyone create images using Generative Adversarial Networks and BigGAN, while Google's Deep Dream Generator uses convolutional neural networks to produce hallucinogenic, dream-like effects. Finally, researchers at Zhejiang University of Technology tested seven neural network models on three art datasets, classifying works by artist, genre, and style and achieving state-of-the-art results, particularly on smaller datasets.
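How a classifier can attribute multiple emotions to a single painting, as ArtEmis does, can be sketched as thresholding a probability distribution over emotion categories. This is a minimal illustration, not the ArtEmis implementation: the category names follow ArtEmis's published taxonomy, but the scores and the threshold are invented for the example.

```python
# Hedged sketch: reading several emotions out of one painting from a
# model's probability distribution over eight emotion categories.
# The scores below are invented; a real model would produce them.

EMOTIONS = ["amusement", "awe", "contentment", "excitement",
            "anger", "disgust", "fear", "sadness"]

def dominant_emotions(scores, threshold=0.2):
    """Return every emotion whose probability clears the threshold,
    strongest first -- a single work can carry more than one."""
    paired = sorted(zip(EMOTIONS, scores), key=lambda p: -p[1])
    return [(emotion, s) for emotion, s in paired if s >= threshold]

# Invented scores for a scene mixing dread and sorrow:
scores = [0.01, 0.05, 0.02, 0.03, 0.10, 0.08, 0.41, 0.30]
print(dominant_emotions(scores))  # fear and sadness both clear the bar
```

Keeping every category above a threshold, rather than only the single argmax, is what lets such a model report that one work evokes, say, both fear and sadness.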
Key facts
- Stanford researchers developed ArtEmis, an algorithm trained on 81,000 WikiArt paintings and 440,000 emotional responses from over 6,500 participants.
- ArtEmis classifies paintings into eight emotional categories and can distinguish multiple emotions within a single work.
- In October 2018, Christie's auctioned Edmond de Belamy, an AI-generated portrait by Obvious, for $432,500.
- A study by MIT and the Max Planck Institute for Human Development found that attribution of AI art depends on how information is presented.
- Artbreeder (formerly Ganbreeder) is a platform for creating images using Generative Adversarial Networks and BigGAN.
- Google's Deep Dream Generator uses convolutional neural networks to enhance patterns in images, creating dream-like effects.
- Researchers at Zhejiang University of Technology tested seven neural network models on three art datasets for classification.
- The neural network models achieved state-of-the-art results, especially with smaller datasets.
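The Deep Dream technique mentioned above amounts to gradient ascent on the input image so that a chosen layer's activations grow stronger, which exaggerates whatever patterns that layer detects. The sketch below is a toy stand-in, not Google's implementation: the "detector" is a hand-written peak filter over a 1-D signal rather than a CNN layer, and the kernel and step size are invented for illustration.

```python
# Hedged sketch of the Deep Dream idea: nudge the input along the
# gradient of a pattern detector's mean response, amplifying the
# patterns it responds to. The kernel here is an invented toy filter.

KERNEL = [-1.0, 2.0, -1.0]  # responds to local peaks in the signal

def response(signal):
    """Mean filter response over all valid positions."""
    k = len(KERNEL)
    outs = [sum(KERNEL[j] * signal[i + j] for j in range(k))
            for i in range(len(signal) - k + 1)]
    return sum(outs) / len(outs)

def dream_step(signal, lr=0.1):
    """One gradient-ascent step on the input. Because the response is
    linear in the signal, d(response)/d(sample i) is just the sum of
    the kernel taps that touch position i, over the output count."""
    k, n = len(KERNEL), len(signal)
    m = n - k + 1
    grad = [0.0] * n
    for i in range(m):
        for j in range(k):
            grad[i + j] += KERNEL[j] / m
    return [x + lr * g for x, g in zip(signal, grad)]

signal = [0.0, 0.2, 1.0, 0.2, 0.0]
boosted = dream_step(signal)
assert response(boosted) > response(signal)  # the pattern got amplified
```

Deep Dream Generator applies the same ascent to the activations of real convolutional layers in an image network, and iterating the step many times is what yields the hallucinogenic textures.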
Entities
People and collectives
- Leonidas Guibas (researcher)
- Rembrandt (artist)
- Obvious (art collective)
Institutions
- Stanford University
- WikiArt
- Christie's
- Massachusetts Institute of Technology
- Max Planck Institute for Human Development
- Zhejiang University of Technology
Locations
- Stanford
- United States
- China