LLMs Show Perceptual Alignment with Human Touch in Textile Task
A study posted to arXiv (2406.06587v2) investigates how well large language models (LLMs) align with human tactile perception. In a 'Guess What Textile' interaction, participants handled two textile samples (a target and a reference) without seeing them and described the differences to an LLM. The LLM then attempted to identify the target textile by measuring similarity in its high-dimensional embedding space. The results indicate some degree of perceptual alignment, but it varies significantly across textile samples. The study highlights the difficulty of aligning AI with nuanced sensory modalities such as touch, which the authors argue is more multifaceted than vision.
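A minimal sketch of the embedding-similarity identification step. The paper reports that the LLM compares descriptions in its high-dimensional embedding space; the sketch below is an assumption-laden stand-in, not the authors' pipeline. It assumes (hypothetically) that each candidate textile has a canonical text description, uses a sentence-embedding model in place of an LLM's embedding space, and invents the sample names and descriptions for illustration.

```python
# Hedged sketch: rank candidate textiles by cosine similarity between a
# participant's verbal report and hypothetical canonical descriptions.
# A sentence-embedding model stands in for the LLM embedding space used
# in the study; all names and descriptions below are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in embedding model

# Hypothetical canonical descriptions of the candidate textiles.
candidates = {
    "denim": "stiff, heavy, coarse woven cotton with a dry hand",
    "silk": "smooth, cool, lightweight fabric with a slippery drape",
    "fleece": "soft, warm, fuzzy knit with a plush surface",
}

# A participant's description of how the target differs from the reference.
report = "the target feels much smoother and cooler than the other sample"

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

report_vec = model.encode(report)
scores = {name: cosine(report_vec, model.encode(desc))
          for name, desc in candidates.items()}

# The guess is the candidate whose description lies closest to the
# participant's report in embedding space.
guess = max(scores, key=scores.get)
print(guess, scores)
```

Under this framing, per-textile variation in alignment would show up as some candidates being consistently ranked first from participants' reports while others are not.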
Key facts
- Study published on arXiv as preprint 2406.06587v2
- Investigates human-AI perceptual alignment in touch
- Uses a 'textile hand' task with a 'Guess What Textile' interaction
- Participants handled two textile samples without seeing them
- The LLM identified the target textile via embedding-space similarity (sketched above)
- Perceptual alignment varies significantly across textile samples
- The authors argue touch is more multifaceted than vision for AI alignment
- The work sits within broader research on aligning LLM behavior with human intent
Entities
Platforms
- arXiv (preprint repository; the authors' institution is not named in this summary)