Perception Cues For Social Platforms
About this talk
A Google TechTalk, 2018-11-12, presented by Eakta Jain

ABSTRACT: As eye-tracking becomes a built-in service for virtual and augmented reality headsets, I am interested in converting gaze data into usable information. In this talk, I will give an overview of three projects from my lab at the University of Florida that use perception cues toward different goals. (I) In the attention economy, what is the relative importance of pictorial and text elements on a website? Simply put, we use eye-tracking to quantify how many words a picture is worth. (II) As eye-tracking becomes a built-in service for virtual and augmented reality, can we extract more than gaze positions from it? Specifically, we infer user engagement from changes in pupil diameter. (III) Looking ahead to social virtual reality, what would it take to create avatars for child users? We investigate how users perceive adult and child motion capture data.

About the speaker: Prof. Eakta Jain is an Assistant Professor of Computer and Information Science and Engineering at the University of Florida. She received her PhD and MS degrees in Robotics from Carnegie Mellon University and her B.Tech. degree from IIT Kanpur. She has worked in industrial research at Texas Instruments R&D labs, Disney Research Pittsburgh, and Walt Disney Animation Studios. Her research group at the University of Florida is funded through faculty research awards from Facebook/Oculus and Google/YouTube, federal funding from the National Science Foundation, and state funding from the Florida Department of Transportation.