Develop safer, context-aware, and human-centered autonomous and assisted driving systems through research on driver attentiveness, pedestrian behavior, and multimodal streetscape cues.
Decode audience behavior, attention, and interpretation of media through advanced visuospatial and cognitive analysis methods.
Design immersive and behaviorally realistic virtual environments for gaming, simulation, training, and human–AI avatar interaction.
Design, deploy, and optimize systems that automatically recognize, track, and interpret human actions and events from video and multimodal data — enabling context-aware computing.
Design, test, and deploy humanoid and social robots by leveraging human-centered, cognitive, and multimodal behavioral insights.
Preprocess, annotate, and analyze human behavioral data (video, sensors, multimodal streams) to generate actionable insights for design, performance, and decision-making.
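As an illustration of the preprocess-annotate-analyze workflow above, the sketch below windows a synthetic per-frame behavioral stream into fixed annotation units and extracts simple features (attention ratio, mean motion). All names and fields here (`FrameSample`, `gaze_on_task`, `motion`) are hypothetical placeholders, not a real schema; a production pipeline would ingest actual video or sensor streams.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical minimal record: one row per video frame with a gaze flag and a
# motion reading; field names are illustrative, not a real data schema.
@dataclass
class FrameSample:
    t: float           # timestamp in seconds
    gaze_on_task: bool  # whether gaze was on the task region in this frame
    motion: float       # e.g. head-motion magnitude from an IMU

def window(samples, size):
    """Split a frame-ordered stream into fixed-size windows (annotation units)."""
    for i in range(0, len(samples) - size + 1, size):
        yield samples[i:i + size]

def summarize(win):
    """Per-window behavioral features: attention ratio and mean motion."""
    return {
        "t_start": win[0].t,
        "attention": sum(s.gaze_on_task for s in win) / len(win),
        "mean_motion": mean(s.motion for s in win),
    }

# 3 seconds of synthetic 30 fps data: gaze drops every 5th frame,
# motion cycles through three levels.
samples = [FrameSample(t=i / 30, gaze_on_task=(i % 5 != 0), motion=0.1 * (i % 3))
           for i in range(90)]
features = [summarize(w) for w in window(samples, 30)]  # one window per second
```

Windowed features like these can then feed downstream annotation review or decision-support dashboards.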