Form-Persona Dilemma
The more humanlike a robot’s appearance, the higher the expectations users project onto it, and the harder those expectations become to meet.
This is why non-humanoid robots (pet-like companions, abstract assistants) sometimes outperform humanoid designs in user satisfaction: they set appropriate expectations. A robot dog that acts like a dog exceeds expectations. A humanoid robot that can't match human conversational nuance disappoints.
The pattern: form creates implicit promises. Humanoid form promises human-level capability. Abstract or zoomorphic form promises something different, and often something easier to deliver.
Sony’s Aibo (robotic dog) succeeds partly because it sidesteps the uncanny valley entirely. Users don’t expect human social intelligence from a dog. They expect pet-like behavior, which the system delivers.
Amazon’s Astro (wheeled home assistant) takes a “utility-first, persona-second” approach. It’s explicitly not trying to be a social companion; it’s a mobile Alexa. The reduced expectations match the actual capability.
For embodied AI designers: the form factor is a design decision with significant downstream consequences for user perception, trust calibration, and satisfaction.
Related: [None yet]