PERCxR Workshop Invited Speakers

Holly Gagnon

PhD Candidate, University of Utah

Holly Gagnon is a PhD candidate in cognitive psychology at the University of Utah. She studies spatial perception and cognition in the real world and in mixed reality. Her current research focus is the perception of affordances (action capabilities) in augmented and virtual reality.

Information for the Calibration of Perceived Action Capabilities: The Roles of Feedback and Virtual Environments

Successful completion of an action depends on the accurate perception of one’s capabilities in relation to the environment (i.e., affordance perception). Affordance perception can be calibrated following perturbations to the body or environment, and this calibration is facilitated by feedback that allows for the detection of information relevant to the adjusted relationship between the body and the environment. Understanding what information is necessary for calibrating affordance perception expands theoretical knowledge of the relationship between perception and action and can enhance training procedures and other applications. This knowledge is particularly relevant for applications involving mixed reality because perceptual information is often limited in these contexts. With the growing accessibility of virtual and augmented reality technology, it is important to understand how users will perceive their ability to act in these novel environments. Mixed reality can also be used as a tool for affordance calibration research, as it allows for manipulating properties that would be impossible or impractical to implement in the real world. Here, we review the literature on affordance perception calibration in the real world and in mixed reality, focusing on the effects of different forms of feedback, in order to propose mechanisms underlying perception calibration and to emphasize the utility of mixed reality for studying affordance perception.
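
One hypothetical way to make the calibration idea concrete (an illustrative model for this summary, not one proposed in the talk) is an error-driven update in which feedback determines how quickly the perceived action boundary converges on the actual one:

```python
# Illustrative sketch: affordance calibration as an error-driven update.
# Assumption (not from the talk): richer feedback (e.g., combined visual and
# haptic information) corresponds to a higher gain, so calibration converges
# in fewer trials.

def calibrate(perceived: float, actual: float, gain: float, trials: int) -> float:
    """Return the perceived action boundary after `trials` feedback trials."""
    for _ in range(trials):
        error = actual - perceived      # feedback makes this error detectable
        perceived += gain * error       # partial correction on each trial
    return perceived

# Example: after a perturbation, perceived maximum reach (60 cm) recalibrates
# toward actual reach (75 cm) much faster when feedback is informative.
print(round(calibrate(60.0, 75.0, gain=0.30, trials=10), 1))  # ~74.6
print(round(calibrate(60.0, 75.0, gain=0.05, trials=10), 1))  # ~66.0
```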

Yu Zhao

PhD Student, Vanderbilt University

Yu Zhao is a PhD student in computer science at Vanderbilt University in Nashville, TN, USA. Her research focuses on understanding how mixed reality technology impacts human spatial cognition and how to effectively leverage the technology to enhance people's spatial learning.

Investigating Augmented Reality Spatial Perception via Handheld Mobile Devices in a Distributed Experimental Setup

Augmented Reality (AR) technology is now widely available at the consumer level through modern smartphones, and mobile handheld AR applications for entertainment, education, training, and communication have been developed. As these applications become more widespread, a better understanding of user interaction with AR on mobile devices in realistic, everyday contexts is needed. However, AR experiments are typically implemented under tightly controlled conditions guided by experimenters. Our work provides a preliminary investigation of AR spatial perception via handheld devices in an unsupervised, distributed experimental setup. In the context of two experiments, we discuss the challenges and limitations of mobile AR for studying spatial perception. Experiment 1 tested people’s affordance judgments, which required collecting participants’ body dimensions remotely. In Experiment 2, we evaluated how social cues affected egocentric distance estimation of a virtual human avatar in the context of the pandemic. The initial findings suggest that the use of mobile AR with smartphones for assessing and training perception is feasible and that data collection with smartphones is reliable. Our work provides a framework for conducting spatial perception experiments with mobile AR in the wild.
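
To illustrate the distributed setup, one trial of the distance-estimation experiment could be reduced to a self-describing record before upload; the field names and units below are assumptions made for this sketch, not the authors' actual schema:

```python
# Hypothetical sketch of one unsupervised mobile-AR trial record.
import json
import time
import uuid

def distance_trial_record(participant_id: str, avatar_cue: str,
                          true_distance_m: float,
                          estimated_distance_m: float) -> str:
    """Serialize one egocentric distance-estimation trial for remote upload."""
    return json.dumps({
        "trial_id": str(uuid.uuid4()),
        "participant_id": participant_id,      # assigned at remote onboarding
        "timestamp_s": time.time(),
        "avatar_cue": avatar_cue,              # social-cue condition of the avatar
        "true_distance_m": true_distance_m,    # placement distance of the avatar
        "estimated_distance_m": estimated_distance_m,
        "signed_error_m": estimated_distance_m - true_distance_m,
    })

print(distance_trial_record("p07", "facing", 3.0, 2.6))
```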

John Power

Artist and Lecturer, Royal Melbourne Institute of Technology

John uses both traditional and digital techniques to create still and time-based images. He has worked in scenic art and design across TV, film, museum exhibitions, opera, ballet, theatre, and private commissions. He has created digital FX and animation for TV, worked as an art director and director, and toured nationally and internationally as a VJ. Currently, John provides concept, design, and technical leadership across a range of screen media. He recently completed his PhD in the School of Media and Communication at RMIT University, Melbourne.

Designing Extended Reality to Support the Periphery of Attention in Shared Public Spaces and Health Contexts

Generative VR was used to create ambient public screens that applied Calm Technology principles and biophilic patterns in ways that supported positive emotion and calm states of attention in public placemaking. Two XR installations (in a busy metropolitan public library and a cancer research hospital, respectively) designed to support the periphery of attention were studied through semi-structured interviews and remote observation of public participants. Analysis of these on-site public encounters yielded ways of using generative VR applications to support orientation, calm dwelling, and attention restoration. This presentation will discuss ways to develop XR installations as an attentional public amenity.

Billy Vann

PhD Student, University of Florida

Billy Vann is a PhD student in the Informatics, Cobots, and Intelligent Construction (ICIC) Lab at the University of Florida in Gainesville, FL, USA. A scientist at heart, Billy started his professional life as a biological scientist designing medical devices and conducting clinical research in pulmonary medicine and neurophysiology. He is currently investigating human-centered design approaches for assistive robots and virtual reality training methods. This research applies human factors testing, usability engineering, and fNIRS neuroimaging to measure cognitive load while a user interacts with autonomous technology.

Sensory Manipulation as a Countermeasure to Robot Teleoperation Delays

Currently, most interactions with robots in space exploration are achieved through teleoperation, i.e., operating a robot from a distance. During future space teleoperation, communication time delays associated with long distances may negatively affect performance if operators do not calibrate to them. Natural afferent mechanisms, such as real-time visual feedback and haptic sensation, are critical for individuals performing upper-extremity motor tasks, especially during teleoperation. Any time delay between an operator’s action and the resulting sensory feedback disrupts the perceptual-motor system. Existing techniques for mitigating time delays in space robot teleoperation can be categorized into supervisory control and predictive feedback. We ask whether sensory manipulation, i.e., providing additional sensory modalities as reinforcement cues, can help mitigate the negative influence of teleoperation delays, as measured by perceived presence, neural efficiency, and task performance. The literature has already shown that sensory manipulation can modulate the effectiveness of motor learning and rehabilitation. The rationale of the proposed approach is that, by simulating the virtual force of physical interactions on the operator’s end, the delayed visual cues of teleoperation are reinforced by multimodal sensory feedback, mitigating the perception of time delays and improving performance.
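
A minimal sketch of that rationale, under the assumption that contact can be predicted from a local simulation on the operator's side (an illustration, not the study's implementation): the haptic cue fires immediately on a predicted contact, while the matching visual frame is released only after the communication delay.

```python
# Hypothetical sketch: immediate haptic reinforcement of delayed visual feedback.
from collections import deque

class SensoryReinforcer:
    def __init__(self, visual_delay_ticks: int):
        # Buffer holding visual frames until the simulated delay has elapsed.
        self.visual_buffer = deque([None] * visual_delay_ticks)

    def tick(self, visual_frame, contact_predicted: bool) -> None:
        """One control tick: haptic cue now, visual frame shown after the delay."""
        if contact_predicted:
            self.render_haptic_pulse()           # simulated virtual force, no delay
        self.visual_buffer.append(visual_frame)  # newest frame enters the queue
        delayed_frame = self.visual_buffer.popleft()
        if delayed_frame is not None:
            self.render_visual(delayed_frame)    # shown visual_delay_ticks later

    def render_haptic_pulse(self) -> None:
        print("haptic: contact cue")

    def render_visual(self, frame) -> None:
        print(f"visual: {frame}")
```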

There is a lack of existing frameworks that consider expedited human adaptation as a remedy for the cognitive impacts of teleoperation delays. Here, we explore three cases of sensory manipulation to capture and understand how humans naturally perceive and adapt to haptic and visual delays during a virtual teleoperated pick-and-place task: 1) haptic and visual feedback are provided with equal delay (Synchronized); 2) haptic feedback is provided in real time, while visual feedback is provided under three delay scenarios (250 ms, 500 ms, or 750 ms) following an operator-initiated action (Anchoring); and 3) haptic feedback is provided with a constant 250 ms delay, while visual feedback is delayed an additional 250 ms, 500 ms, or 750 ms (Asynchronized). A virtual reality (VR) platform driving the robotic manipulation through the Robot Operating System (ROS) was created as the experiment testbed. In all three cases, participants were asked to move six objects from a starting location to destinations of varying distance and difficulty. Obstacles were placed in the middle of the routes to increase the difficulty of manipulation. During the experiments, participants were asked to report a temporal label for their perceived visual and haptic time delays at the beginning of each trial and immediately after completing it. At the end of each trial, participants also responded to the NASA-TLX and a motion sickness questionnaire. Throughout the entire experiment, we collected eye-tracking data, neural activity via functional near-infrared spectroscopy (fNIRS), and motion-tracking data. At the psychological level, we describe the cases and conditions in which some people generate an illusion in their perceived haptic and visual experiences. These insights can be used to improve a teleoperator’s situational awareness, confidence, and long-term health by alleviating the sense of time delays.
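
The three cases above can be summarized as per-modality delay pairs; the encoding is a hypothetical sketch, but the millisecond values come directly from the description:

```python
# Hypothetical encoding of the three cases as (haptic_ms, visual_ms) delay pairs.
DELAYS_MS = (250, 500, 750)

CONDITIONS = {
    "synchronized":  [(d, d) for d in DELAYS_MS],          # equal haptic/visual delay
    "anchoring":     [(0, d) for d in DELAYS_MS],          # real-time haptic anchor
    "asynchronized": [(250, 250 + d) for d in DELAYS_MS],  # visual lags haptic further
}

for name, pairs in CONDITIONS.items():
    print(f"{name}: {pairs}")
```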

This research addresses HARI-02: “We need to develop design guidelines for effective human-robotic systems in operational environments that may include distributed non-collocated adaptive mixed-agent teams with variable transmission latencies.”
