PERCxR Workshop Invited Speakers
Haley Adams
PhD Candidate, Vanderbilt University
Haley Adams is a PhD candidate in Vanderbilt University’s Learning in Virtual Environments (LiVE) Lab. She quantifies perceptual differences between virtual, augmented, and real environments to better understand how those differences affect the way people interact with immersive technology. The results of her research can be used to improve software and hardware design for mixed reality displays. Her dissertation work extends this investigation to the domain of accessibility by evaluating the impact of different rendering techniques on spatial perception in mixed reality for people with and without simulated vision impairments.
Depth perception and stylized graphics in AR HMDs
Although contemporary augmented reality (AR) displays can render compelling virtual overlays, the integration between real and virtual visual information remains imperfect. In fact, completely consistent depth cue integration is impossible with current AR technology, given inherent engineering and optics limitations. It is possible, then, that conflicting depth information between real and virtual cues contributes to depth misperception in AR. In the current work, we investigate this idea by evaluating the impact of realistic and non-realistic graphics on people's absolute distance judgments to virtual targets in an optical see-through display, the Microsoft HoloLens 2, and a video see-through display, the Varjo XR-3. Our results provide evidence that the presence of depth cue information may be more important than the realism of its appearance.
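As a concrete illustration only (not the authors' analysis or data), the short Python sketch below shows one common way absolute distance judgments of this kind are summarized: the ratio of judged to actual distance per display and rendering condition, where ratios below 1 indicate the depth compression typically reported for head-mounted displays. All column names and values are hypothetical.

# Illustrative sketch with hypothetical data; not the authors' analysis.
import pandas as pd

trials = pd.DataFrame({
    "display":   ["HoloLens 2", "HoloLens 2", "Varjo XR-3", "Varjo XR-3"],
    "rendering": ["realistic",  "stylized",   "realistic",  "stylized"],
    "actual_m":  [4.0, 4.0, 4.0, 4.0],   # true egocentric distance to the virtual target
    "judged_m":  [3.2, 3.1, 3.6, 3.5],   # participant's absolute distance judgment
})

# Ratio < 1 indicates underestimation (depth compression).
trials["judgment_ratio"] = trials["judged_m"] / trials["actual_m"]

print(trials.groupby(["display", "rendering"])["judgment_ratio"].mean())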
Shakiba Davari
PhD Student, Virginia Tech
Shakiba Davari is currently a PhD student in the 3-D Interaction Lab at Virginia Tech, under the supervision of Doug A. Bowman. Prior to her PhD, she was involved in multiple research projects on applications of machine learning and computer vision at the University of Toronto and at Stanford. She earned her Bachelor of Science degree in Computer Engineering from Beheshti University in Iran.
Context-Aware Inference and Adaptation in Intelligent Augmented Reality Interfaces
Recent developments in Augmented Reality (AR) devices raise the potential for more efficient and reliable information access and reflect a widespread belief that AR glasses are the next generation of ubiquitous personal computing devices. To realize this potential, the AR interface must both present information optimally and address the challenges caused by the constant, pervasive presence of virtual content. As the user switches context, an optimal all-day interface, that is, one that is maximally efficient yet minimally intrusive, must adapt its interaction techniques and the display of virtual content. This work proposes a research agenda to 1) design context-aware AR interfaces for multiple contexts and validate their benefits, 2) identify the adaptive design choices for presenting and interacting with information in AR, and 3) introduce a taxonomy of context and a framework for the design of such Intelligent AR interfaces.
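To make the idea of context-driven adaptation concrete, the Python sketch below shows one simple way an interface could map sensed context attributes to display and interaction choices. It is a toy illustration under assumed context attributes (mobility, social setting, task load) and assumed adaptation rules; it is not the taxonomy or framework proposed in this talk.

# Illustrative sketch only; attributes, rules, and choices are hypothetical.
from dataclasses import dataclass

@dataclass
class Context:
    mobility: str   # "stationary" | "walking"
    social: str     # "alone" | "in_conversation"
    task_load: str  # "low" | "high"

def adapt_interface(ctx: Context) -> dict:
    # Default: a full, world-fixed layout with hand interaction.
    layout, placement, input_mode = "full", "world_fixed", "hands"

    if ctx.mobility == "walking":
        # Keep content minimal and body-fixed so it travels with the user.
        layout, placement = "minimal", "body_fixed"
    if ctx.social == "in_conversation":
        # Avoid occluding faces and avoid conspicuous mid-air gestures.
        placement, input_mode = "peripheral", "gaze_dwell"
    if ctx.task_load == "high":
        layout = "minimal"

    return {"layout": layout, "placement": placement, "input": input_mode}

print(adapt_interface(Context(mobility="walking", social="in_conversation", task_load="low")))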
Aysun Duyar
PhD Candidate, New York University
Aysun Duyar is a PhD candidate in Cognition and Perception at New York University. Her research focuses on quantifying how attention alters perception by reshaping basic sensory processes. She is interested in studying perception in a variety of contexts, ranging from simple graphical elements to extended reality displays.
Proposing an objective measure of embodiment
Extended reality enables vast possibilities for experiencing virtual presence and identifying with virtual avatars, which can impact perceptual, cognitive, and social processes, a phenomenon described as the embodiment illusion. Because there is substantial variability in how virtual avatars are experienced, it is critical to quantify the strength of the embodiment illusion. We propose a perceptual phenomenon, sensory attenuation of action outcomes, as a potential objective and practical measure of the embodiment illusion. We propose an experiment in which normally hearing listeners trigger sounds in a virtual environment by pushing a virtual button and report the perceived sound intensity. We propose to manipulate the embodiment experience systematically by introducing varying levels of lag between the avatar hand's movement and the onset of the physical action. At each lag, we will measure perceptual thresholds for the perceived loudness of auditory tones and collect subjective questionnaire reports of body ownership, body location, and sense of agency. Establishing this correlational link would support sensory attenuation as a candidate for a quick and easy objective measure of the embodiment illusion in virtual environments.
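For illustration only, the Python sketch below shows how a simple 1-up/1-down staircase could estimate, at each avatar-hand lag, the intensity at which a self-triggered tone is judged as loud as a fixed reference tone; stronger sensory attenuation at short lags would predict higher matched intensities. The lag values, simulated observer, and attenuation model are hypothetical assumptions, not the authors' protocol.

# Hypothetical sketch of the measurement logic; not the proposed experiment's code.
import random

LAGS_MS = [0, 100, 300, 600]   # lag between physical press and avatar hand motion
REFERENCE_DB = 70.0            # loudness of the externally triggered reference tone

def observer_says_louder(test_db: float, lag_ms: int) -> bool:
    # Stand-in for a participant: assumes attenuation shrinks as lag grows, plus noise.
    attenuation_db = max(0.0, 4.0 - lag_ms / 200.0)
    perceived = test_db - attenuation_db + random.gauss(0, 1)
    return perceived > REFERENCE_DB

def staircase(lag_ms: int, start_db: float = 70.0, step_db: float = 1.0, trials: int = 40) -> float:
    level = start_db
    history = []
    for _ in range(trials):
        if observer_says_louder(level, lag_ms):
            level -= step_db   # judged louder than reference -> lower the test tone
        else:
            level += step_db
        history.append(level)
    return sum(history[-10:]) / 10   # rough threshold: mean of the final levels

for lag in LAGS_MS:
    print(f"lag {lag:3d} ms -> matched intensity ~ {staircase(lag):.1f} dB")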