Action Similarity Modeling
Overview
Understanding Human Action Similarity through Kinematics and Computational Models
📅 Jan 2019 – June 2021
🏛️ University of Skövde, Sweden; Istituto Italiano di Tecnologia, Italy; Università di Genova, Italy
👥 Collaborators: Paul Hemeren, Erik Billing, Alessia Vignolo, Elena Nicora, Nicoletta Noceti, Alessandra Sciutti, Francesco Rea, Giulio Sandini
This project addressed a core question in visual cognition:
How do we perceive the similarity between everyday human actions based on motion alone?
Aim
To determine which motion features (e.g., speed, trajectory, or change over time) inform judgments of whether two hand actions are similar, and whether computational models can emulate this perceptual ability.
Methodology
- Stimuli: Common hand actions (e.g., stirring, chopping) presented as:
  - Point-light displays (for human observers)
  - Optical flow (2D motion input for models; a flow-computation sketch follows this list)
  - Motion capture (3D joint data for models)
- Participants: Human observers judged the similarity of actions shown as point-light displays.
- Models: Computational systems analyzed the same actions using the following feature sets (a feature-extraction sketch also follows this list):
  - Spatial features
  - Velocity profiles
  - Combined feature sets
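
To make the 2D model input concrete, here is a minimal sketch of computing dense optical flow between consecutive video frames with OpenCV's Farnebäck method. The video file name and all parameter values are illustrative assumptions, not the project's actual pipeline:

```python
import cv2

# Minimal sketch: dense optical flow from consecutive frames of an action
# video. The file name is a placeholder and the Farneback parameters are
# illustrative defaults.
cap = cv2.VideoCapture("stirring.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

flows = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Arguments: prev, next, flow, pyr_scale, levels, winsize,
    # iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    flows.append(flow)  # (H, W, 2) array of per-pixel (dx, dy) motion
    prev_gray = gray
cap.release()
```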
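
For the 3D input, spatial and velocity features can be read off the joint trajectories directly. A minimal sketch, assuming positions sampled at a fixed frame rate; the specific features and how they are combined are illustrative, not the project's exact feature set:

```python
import numpy as np

def action_features(joints: np.ndarray, fps: float = 100.0):
    """Spatial, velocity, and combined features from motion capture.

    joints: (frames, n_joints, 3) array of 3D joint positions.
    """
    # Spatial features: joint positions relative to the starting pose,
    # flattened to one vector per frame.
    spatial = (joints - joints[0]).reshape(len(joints), -1)

    # Velocity features: per-joint speed from finite differences.
    vel = np.gradient(joints, 1.0 / fps, axis=0)  # (frames, n_joints, 3)
    speed = np.linalg.norm(vel, axis=-1)          # (frames, n_joints)

    # Combined feature set: both cue types concatenated per frame.
    combined = np.concatenate([spatial, speed], axis=1)
    return spatial, speed, combined
```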
Key Concepts
Motion Cues in Similarity Judgments – Humans naturally assess action similarity using rich motion information:
- Velocity: The speed and direction of the movement
- Position: The path or spatial layout of the action
- Acceleration: How the velocity of the movement changes over time
We tested which of these cues are essential for similarity judgments in both people and machines.
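
These cues form a simple derivative chain: velocity is the time derivative of position, and acceleration the time derivative of velocity. A small worked example with a hypothetical stirring-like wrist trajectory:

```python
import numpy as np

dt = 0.01                                   # sampling interval in seconds
t = np.arange(0.0, 2.0, dt)
# Hypothetical wrist path: a circular, stirring-like motion in 2D.
position = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=1)

velocity = np.gradient(position, dt, axis=0)      # change of position over time
acceleration = np.gradient(velocity, dt, axis=0)  # change of velocity over time
speed = np.linalg.norm(velocity, axis=1)          # scalar "how fast" profile
```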
Kinematic Primitives – The project introduced and validated kinematic primitives (basic units of motion) as a shared representational tool for both human cognition and machine perception.
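
One common way to operationalize such primitives, used here purely as an illustrative assumption rather than the project's exact definition, is to cut a movement at local minima of its speed profile, so each primitive spans a single velocity peak:

```python
import numpy as np
from scipy.signal import find_peaks

def segment_primitives(speed: np.ndarray, min_gap: int = 10):
    """Split a per-frame speed profile into primitives at speed minima.

    min_gap: minimum number of frames between successive cut points.
    Returns (start, end) frame indices, one pair per primitive.
    """
    # Local minima of the speed profile are peaks of its negation.
    minima, _ = find_peaks(-speed, distance=min_gap)
    bounds = [0, *minima.tolist(), len(speed)]
    return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]
```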
Findings
- Models using both spatial and velocity information closely mimicked human judgments.
- Velocity-only models often failed on nuanced actions.
- Human perception is highly sensitive to motion detail, requiring more than just speed-based analysis.
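
The model–human comparison behind these findings can be framed as correlating two pairwise similarity structures, in the spirit of representational similarity analysis. The sketch below rests on illustrative assumptions (Euclidean distances, Spearman correlation), not the project's exact evaluation:

```python
import numpy as np
from scipy.stats import spearmanr

def model_human_agreement(model_feats, human_sim):
    """Correlate model feature distances with human similarity ratings.

    model_feats: one fixed-length feature vector per action
                 (e.g., time-averaged kinematic features).
    human_sim:   (n_actions, n_actions) human similarity matrix.
    """
    X = np.stack(model_feats)
    # Pairwise Euclidean distances between action representations.
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Use the upper triangle so each action pair is counted once;
    # distance should anti-correlate with rated similarity.
    iu = np.triu_indices(len(X), k=1)
    rho, p = spearmanr(dists[iu], human_sim[iu])
    return rho, p
```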
Applications
- AI & Robotics: Creating perceptually grounded action recognition systems
- Neuroscience: Understanding how humans abstract motion information
- HCI: Improving gesture and action-based user interfaces
Project Outcomes
📚 See Journal article
📚 See Conference article
Collaboration Opportunities
Open to collaboration or discussion on methodology, data, or future directions. Happy to exchange ideas and explore new perspectives.