HRI: Attention & Intention
Overview
How Robots Communicate Intent: Visuoauditory Cues in HRI
📅 Mar 2022 – present
🏛️ University of Skövde, Sweden
👥 Collaborators: Erik Lagerstedt
This pilot study investigates how specific robot behaviors (gaze, gesture, speech timing) guide human attention and the understanding of intention during collaborative tasks. Using the Pepper robot, we examine how multimodal cues affect task performance and event comprehension, laying the groundwork for a larger HRI cognition project.
Research Focus Areas
- Timing effects in robot-human turn-taking
- Joint attention initiation strategies
- Gesture-speech integration for intention conveyance
- Relationship between attitudes toward robots and cue interpretation
Methodology
We employ mixed methods across two study versions:
Version 1: Real-time Interaction
- Participants (n=8) collaborate with Pepper on guided tasks
- Measures:
- Eye-tracking (Pupil Labs glasses; gaze analysis sketched after this list)
- Multi-angle video (robot & participant POVs, side- and top-views)
- Post-task interviews (event recollection)
- Negative Attitudes toward Robots Scale (NARS) questionnaire
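As a rough illustration of how the eye-tracking measure could be reduced to a single attention index, the sketch below computes the share of gaze samples falling inside a robot-head area of interest. It assumes a Pupil Labs-style export with `norm_pos_x`, `norm_pos_y`, and `confidence` columns and a fixed, hypothetical AOI; column names, file layout, and AOI bounds would need to be checked against the actual recordings.

```python
import pandas as pd

# Hypothetical area of interest (AOI) around the robot's head, in normalized
# scene-camera coordinates (0-1); real bounds depend on camera placement.
ROBOT_HEAD_AOI = {"x_min": 0.35, "x_max": 0.65, "y_min": 0.45, "y_max": 0.85}

def gaze_on_robot_proportion(csv_path, aoi=ROBOT_HEAD_AOI, min_confidence=0.6):
    """Share of sufficiently confident gaze samples that land inside the AOI."""
    gaze = pd.read_csv(csv_path)                       # e.g. an exported gaze_positions.csv
    gaze = gaze[gaze["confidence"] >= min_confidence]  # drop low-quality samples
    in_aoi = (
        gaze["norm_pos_x"].between(aoi["x_min"], aoi["x_max"])
        & gaze["norm_pos_y"].between(aoi["y_min"], aoi["y_max"])
    )
    return float(in_aoi.mean())

# Example (hypothetical path):
# print(gaze_on_robot_proportion("P01/exports/000/gaze_positions.csv"))
```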
Version 2: Observational Study (in progress)
- Participants (TBD) watch manipulated HRI and human-human interaction (HHI) clips
- Measures:
- Screen-based eye-tracking
- Event segmentation task (button-press during viewing; logging sketched after this list)
- NARS questionnaire
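For the event segmentation task, something like the minimal logger below could record boundary presses as seconds from clip onset. It is only a sketch: it assumes the clip starts when the function is called and that a press is registered by hitting Enter, whereas the actual study would use stimulus-presentation software synchronized to the video and the screen-based eye tracker.

```python
import csv
import time

def record_segmentation(participant_id, clip_id, out_path):
    """Log event-boundary presses (seconds from clip onset) to a CSV file.

    Press Enter at each perceived event boundary; type 'q' then Enter
    when the clip ends.
    """
    onset = time.monotonic()
    boundaries = []
    while True:
        key = input()                       # blocking read; Enter marks a boundary
        t = time.monotonic() - onset
        if key.strip().lower() == "q":
            break
        boundaries.append(t)

    with open(out_path, "a", newline="") as f:
        writer = csv.writer(f)
        for t in boundaries:
            writer.writerow([participant_id, clip_id, f"{t:.3f}"])
    return boundaries
```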
Experimental Setup
- Environment: Controlled lab with Pepper robot
- Robot: SoftBank Pepper with custom interaction protocols
- Interacting participants (real-time HRI): 8
- Observing participants (HRI/HHI clips): TBD
- Tasks:
- Collaborative puzzle solving
- Instruction-following with intentional errors
- Joint attention navigation challenges
- Manipulated Variables (crossed as sketched after this list):
- Turn-taking timing (0s vs. 3s gaps)
- Joint attention initiation (head-only vs. head+hand)
- Intention conveyance (gesture+speech vs speech-only)
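Enumerating the manipulated variables makes the design explicit. The sketch below crosses the three factors listed above into their eight cells; it assumes a full factorial crossing, which the actual protocol may counterbalance or nest differently.

```python
from itertools import product

# Factor levels taken from the manipulated variables listed above.
FACTORS = {
    "turn_gap_s": [0, 3],                                # turn-taking timing
    "attention_initiation": ["head_only", "head_hand"],  # joint attention cue
    "intention_cue": ["gesture_speech", "speech_only"],  # intention conveyance
}

# Full crossing: 2 x 2 x 2 = 8 experimental cells.
CONDITIONS = [dict(zip(FACTORS, levels)) for levels in product(*FACTORS.values())]

for i, condition in enumerate(CONDITIONS, start=1):
    print(i, condition)
```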
Data Collected
Note: Data marked as ‘in progress’ is undergoing processing, annotation, and cleanup.
- Multimodal behavioral data (in progress):
- 90+ minutes of annotated HRI interactions
- Pupil response and gaze patterns during task execution
- Subjective measures (in progress):
- Audio transcripts of post-task interviews
- NARS attitude scores (scoring sketched after this list)
- Event segmentation responses
- Stimulus library:
- 10 manipulated HRI/HHI video clips
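The NARS responses reduce to three subscale sums. The sketch below shows the general scoring pattern; the item-to-subscale key and reverse-keyed items shown here are illustrative placeholders and should be replaced with the published key (Nomura et al.) before scoring real data.

```python
# Illustrative item-to-subscale key ONLY; swap in the published NARS key
# (Nomura et al.) before using this on real responses.
SUBSCALES = {
    "S1_interaction_situations": [4, 7, 8, 9, 10, 12],
    "S2_social_influence": [1, 2, 11, 13, 14],
    "S3_emotions_in_interaction": [3, 5, 6],
}
REVERSE_KEYED = {3, 5, 6}   # placeholder set of reverse-scored items
LIKERT_MAX = 5              # items answered on a 1-5 scale

def score_nars(responses):
    """responses: dict mapping item number (1-14) to a 1-5 rating."""
    def item(i):
        r = responses[i]
        return (LIKERT_MAX + 1 - r) if i in REVERSE_KEYED else r
    return {name: sum(item(i) for i in items) for name, items in SUBSCALES.items()}
```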
Applications
- Social robotics: Designing more intuitive communication systems
- Assistive technologies: Improving assistive robot transparency
- Human-AI collaboration: Cross-domain principles for intention signaling
- Robot-enhanced autism therapy: Robot-mediated social cue training
Future Directions
- Expand the participant cohort
- Develop a real-time attention-adaptation system
Expected Outcomes
- Quantifiable attention patterns during turn-taking gaps (comparison sketched after this list)
- Performance differences in intention understanding based on cue modality
- Framework for robot behavior design optimizing:
- Joint attention initiation
- Turn-taking rhythms
- Multimodal (e.g., speech, gaze) integration
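As an example of how the first outcome could be quantified, the sketch below compares per-participant on-robot gaze proportions between the 0 s and 3 s gap conditions. It assumes each participant contributes one value per condition (a within-participant comparison); the function name and inputs are illustrative, not part of the actual analysis pipeline.

```python
from scipy import stats

def compare_gap_conditions(gaze_gap_0s, gaze_gap_3s):
    """Paired comparison of per-participant on-robot gaze proportions.

    Assumes a within-participant design with one value per participant per
    condition; use an independent-samples test instead if conditions are
    between participants.
    """
    result = stats.ttest_rel(gaze_gap_0s, gaze_gap_3s)
    return result.statistic, result.pvalue

# Usage (with real per-participant proportions, e.g. from the gaze sketch above):
# t, p = compare_gap_conditions(props_0s, props_3s)
```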
Collaboration Opportunities
Open to collaboration or discussion on methodology, data, or future directions. Happy to exchange ideas and explore new perspectives.