Dr. Gerd Bruder – Publications
NOTICE: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author’s copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
2021
Ryan Schubert; Gerd Bruder; Alyssa Tanaka; Francisco Guido-Sanz; Gregory F. Welch
Mixed Reality Technology Capabilities for Combat-Casualty Handoff Training (Inproceedings, Forthcoming)
In: HCII VAMR, Springer, Forthcoming.
Abstract: Patient handoffs are a common, yet frequently error-prone occurrence, particularly in complex or challenging battlefield situations. Specific protocols exist to help simplify and reinforce the conveying of necessary information during a combat-casualty handoff, and training can both reinforce correct behavior and protocol usage and provide relatively safe initial exposure to many of the complexities and variabilities of real handoff situations, before a patient's life is at stake. Here we discuss a variety of mixed reality capabilities and training contexts that can manipulate many of these handoff complexities in a controlled manner. Finally, we discuss some future human-subject user study design considerations, including aspects of handoff training, evaluation or improvement of a specific handoff protocol, and how the same technology could be leveraged for operational use.

2020
Nahal Norouzi; Kangsoo Kim; Gerd Bruder; Greg Welch
[Demo] Towards Interactive Virtual Dogs as a Pervasive Social Companion in Augmented Reality (Inproceedings)
In: Proceedings of the combined International Conference on Artificial Reality & Telexistence and Eurographics Symposium on Virtual Environments (ICAT-EGVE), pp. 29-30, 2020. (Best Demo Audience Choice Award)
Links: https://sreal.ucf.edu/wp-content/uploads/2020/12/029-030.pdf | DOI: 10.2312/egve.20201283
Abstract: Pets and animal-assisted intervention sessions have been shown to be beneficial for humans' mental, social, and physical health. However, for specific populations, factors such as hygiene restrictions, allergies, and care and resource limitations reduce interaction opportunities. In parallel, understanding the capabilities of animals' technological representations, such as robotic and digital forms, has received considerable attention and has fueled the utilization of many of these technological representations. Additionally, recent advances in augmented reality technology have allowed for the realization of virtual animals with flexible appearances and behaviors to exist in the real world. In this demo, we present a companion virtual dog in augmented reality that aims to facilitate a range of interactions with populations such as children and older adults. We discuss the potential benefits and limitations of such a companion and propose future use cases and research directions.

Austin Erickson; Kangsoo Kim; Gerd Bruder; Greg Welch
A Review of Visual Perception Research in Optical See-Through Augmented Reality (Inproceedings)
In: Proceedings of the International Conference on Artificial Reality and Telexistence & Eurographics Symposium on Virtual Environments, pp. 9, The Eurographics Association, 2020, ISBN: 978-3-03868-111-3.
Links: https://sreal.ucf.edu/wp-content/uploads/2020/10/DarkModeSurvey_ICAT_EGVE_2020-2.pdf | DOI: 10.2312/egve.20201256
Abstract: In the field of augmented reality (AR), many applications involve user interfaces (UIs) that overlay visual information over the user's view of their physical environment, e.g., as text, images, or three-dimensional scene elements. In this scope, optical see-through head-mounted displays (OST-HMDs) are particularly interesting as they typically use an additive light model, which means that the perception of the displayed virtual imagery is a composite of the lighting conditions of one's environment, the coloration of the objects that make up the virtual imagery, and the coloration of physical objects that lie behind them. While a large body of literature has focused on investigating the visual perception of UI elements in immersive and flat panel displays, comparatively less effort has been spent on OST-HMDs. Due to the unique visual effects with OST-HMDs, we believe that it is important to review the field to understand the perceptual challenges, research trends, and future directions. In this paper, we present a systematic survey of literature based on the IEEE and ACM digital libraries, which explores users' perception of text-based information displayed on an OST-HMD, and we aim to provide relevant design suggestions based on the meta-analysis results. We carefully review 14 key papers relevant to visual perception research in OST-HMDs with UI elements, and present the current state of the research field, associated trends, noticeable research gaps in the literature, and recommendations for potential future research in this domain.

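For readers less familiar with the additive light model referenced in this abstract, a common simplified way to write it is to model the light reaching the eye at a point as the sum of the display's emitted light and the attenuated environment light behind it; the notation below is our own illustration and is not taken from the paper:

$L_{\mathrm{eye}}(x) = L_{\mathrm{display}}(x) + t \cdot L_{\mathrm{env}}(x)$, with $t \in [0, 1]$,

where $L_{\mathrm{display}}$ is the virtual imagery emitted by the OST-HMD at pixel $x$, $L_{\mathrm{env}}$ is the light arriving from the physical scene behind it, and $t$ is the transmittance of the (typically tinted) combiner. Because the display can only add light, rendered black remains transparent and bright environments wash out the virtual imagery.
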
Nahal Norouzi; Kangsoo Kim; Gerd Bruder; Austin Erickson; Zubin Choudhary; Yifan Li; Greg Welch
A Systematic Literature Review of Embodied Augmented Reality Agents in Head-Mounted Display Environments (Inproceedings)
In: Proceedings of the International Conference on Artificial Reality and Telexistence & Eurographics Symposium on Virtual Environments, pp. 11, 2020.
Links: https://sreal.ucf.edu/wp-content/uploads/2020/11/IVC_ICAT_EGVE2020.pdf
Abstract: Embodied agents, i.e., computer-controlled characters, have proven useful for various applications across a multitude of display setups and modalities. While most traditional work focused on embodied agents presented on a screen or projector, and a growing number of works are focusing on agents in virtual reality, a comparatively small number of publications looked at such agents in augmented reality (AR). Such AR agents, specifically when using see-through head-mounted displays (HMDs) as the display medium, show multiple critical differences to other forms of agents, including their appearances, behaviors, and physical-virtual interactivity. Due to the unique challenges in this specific field, and due to the comparatively limited attention by the research community so far, we believe that it is important to map the field to understand the current trends, challenges, and future research. In this paper, we present a systematic review of the research performed on interactive, embodied AR agents using HMDs. Starting with 1261 broadly related papers, we conducted an in-depth review of 50 directly related papers from 2000 to 2020, focusing on papers that reported on user studies aiming to improve our understanding of interactive agents in AR HMD environments or their utilization in specific applications. We identified common research and application areas of AR agents through a structured iterative process, present research trends and gaps, and share insights on future directions.

Austin Erickson; Kangsoo Kim; Gerd Bruder; Gregory F. Welch
[Demo] Dark/Light Mode Adaptation for Graphical User Interfaces on Near-Eye Displays (Inproceedings)
In: Proceedings of the International Conference on Artificial Reality and Telexistence & Eurographics Symposium on Virtual Environments, pp. 23-24, The Eurographics Association, 2020.
Links: https://sreal.ucf.edu/wp-content/uploads/2020/12/DarkmodeDEMO_ICAT_EGVE_2020-2.pdf | https://www.youtube.com/watch?v=VJQTaYyofCw&t=61s | DOI: 10.2312/egve.20201280

Sharare Zehtabian; Siavash Khodadadeh; Kangsoo Kim; Gerd Bruder; Greg Welch; Ladislau Bölöni; Damla Turgut
[Poster] An Automated Virtual Receptionist for Recognizing Visitors and Assuring Mask Wearing (Inproceedings)
In: Proceedings of the International Conference on Artificial Reality and Telexistence & Eurographics Symposium on Virtual Environments, pp. 9-10, 2020.
Links: https://sreal.ucf.edu/wp-content/uploads/2020/12/VirtualReceptionist_Poster_ICAT_EGVE2020.pdf | https://www.youtube.com/watch?v=r6bXNPn3lWU&feature=emb_logo | DOI: 10.2312/egve.20201273

Celso M. de Melo; Kangsoo Kim; Nahal Norouzi; Gerd Bruder; Gregory Welch
Reducing Cognitive Load and Improving Warfighter Problem Solving with Intelligent Virtual Assistants (Journal Article)
In: Frontiers in Psychology, 11 (554706), pp. 1-12, 2020.
Links: https://sreal.ucf.edu/wp-content/uploads/2020/11/Melo2020aa-2.pdf | DOI: 10.3389/fpsyg.2020.554706

Austin Erickson; Kangsoo Kim; Gerd Bruder; Gregory F. Welch
Exploring the Limitations of Environment Lighting on Optical See-Through Head-Mounted Displays (Inproceedings)
In: Proceedings of the ACM Symposium on Spatial User Interaction, pp. 1-8, Association for Computing Machinery (ACM), 2020.
Links: https://sreal.ucf.edu/wp-content/uploads/2020/09/sui20a-sub1047-cam-i26-1.pdf | https://youtu.be/3jJ-j35oO1I
Abstract: Due to the additive light model employed by most optical see-through head-mounted displays (OST-HMDs), they provide the best augmented reality (AR) views in dark environments, where the added AR light does not have to compete against existing real-world lighting. AR imagery displayed on such devices loses a significant amount of contrast in well-lit environments such as outdoors in direct sunlight. To compensate for this, OST-HMDs often use a tinted visor to reduce the amount of environment light that reaches the user's eyes, which in turn results in a loss of contrast in the user's physical environment. While these effects are well known and grounded in existing literature, formal measurements of the illuminance and contrast of modern OST-HMDs are currently missing. In this paper, we provide illuminance measurements for both the Microsoft HoloLens 1 and its successor, the HoloLens 2, under varying environment lighting conditions ranging from 0 to 20,000 lux. We evaluate how environment lighting impacts the user by calculating contrast ratios between rendered black (transparent) and white imagery displayed under these conditions, and evaluate how the intensity of environment lighting is impacted by donning and using the HMD. Our results indicate the further need for refinement in the design of future OST-HMDs to optimize contrast in environments with illuminance values greater than or equal to those found in indoor working environments.

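As a rough, self-contained illustration of how contrast ratios can be computed under the additive light model described above, consider the sketch below. This is not the authors' measurement pipeline: the paper reports illuminance measurements in lux, whereas this sketch works with hypothetical luminance values, and the function names, visor transmittance, and numbers are assumptions made for illustration only.

```python
# Illustrative sketch (not from the paper): contrast ratio of an optical see-through
# HMD under an additive light model. All values below are hypothetical placeholders.

def perceived_luminance(display_cd_m2: float, environment_cd_m2: float,
                        visor_transmittance: float) -> float:
    """What reaches the eye: display light plus environment light attenuated by the visor."""
    return display_cd_m2 + visor_transmittance * environment_cd_m2

def contrast_ratio(white_display_cd_m2: float, environment_cd_m2: float,
                   visor_transmittance: float) -> float:
    """Contrast between rendered white and rendered black (transparent) imagery.
    An additive display emits (approximately) no light for black pixels, so the
    denominator is just the attenuated environment light."""
    white = perceived_luminance(white_display_cd_m2, environment_cd_m2, visor_transmittance)
    black = perceived_luminance(0.0, environment_cd_m2, visor_transmittance)
    return white / black if black > 0 else float("inf")

if __name__ == "__main__":
    # Hypothetical 500 cd/m^2 peak display with a 40% transmittance visor,
    # evaluated in a dim room vs. a brightly lit environment.
    for environment in (10.0, 2000.0):
        print(f"env={environment} cd/m^2 -> contrast {contrast_ratio(500.0, environment, 0.4):.1f}:1")
```

The qualitative trend this toy model produces (contrast collapsing toward 1:1 as environment light grows) is the effect the paper quantifies with real measurements.
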
Gregory F. Welch; Ryan Schubert; Gerd Bruder; Derrick P. Stockdreher; Adam Casebolt
Augmented Reality Promises Mentally and Physically Stressful Training in Real Places (Journal Article)
In: IACLEA Campus Law Enforcement Journal, 50 (5), pp. 47-50, 2020.
Links: https://sreal.ucf.edu/wp-content/uploads/2020/10/Welch2020aa.pdf

Alexis Lambert; Nahal Norouzi; Gerd Bruder; Greg Welch
A Systematic Review of Ten Years of Research on Human Interaction with Social Robots (Journal Article)
In: International Journal of Human-Computer Interaction, pp. 10, 2020.
Links: https://sreal.ucf.edu/wp-content/uploads/2020/08/8_25_2020_A-Systemat.pdf | DOI: 10.1080/10447318.2020.1801172
Abstract: While research and development related to robotics has been going on for decades, the past decade in particular has seen a marked increase in related efforts, in part due to technological advances, increased technological accessibility and reliability, and increased commercial availability. What have come to be known as social robots are now being used to explore novel forms of human-robot interaction, to understand social norms, and to test expectations and human responses. To capture the contributions of these research efforts and identify current trends and future directions, we systematically review ten years of research in the field of social robotics between 2008 and 2018, which includes 86 publications with 70 user studies. We classify the past work based on research topics and application areas, and provide information about the publications, their user studies, and the capabilities of the social robots utilized. We also discuss selected papers in detail and outline overall trends. Based on these findings, we identify some areas of potential future research.

Austin Erickson; Nahal Norouzi; Kangsoo Kim; Ryan Schubert; Jonathan Jules; Joseph J. LaViola Jr.; Gerd Bruder; Gregory F. Welch
Sharing gaze rays for visual target identification tasks in collaborative augmented reality (Journal Article)
In: Journal on Multimodal User Interfaces: Special Issue on Multimodal Interfaces and Communication Cues for Remote Collaboration, 14 (4), pp. 353-371, 2020, ISSN: 1783-8738.
Links: https://sreal.ucf.edu/wp-content/uploads/2020/07/Erickson2020_Article_SharingGazeRaysForVisualTarget.pdf | DOI: 10.1007/s12193-020-00330-2
Abstract: Augmented reality (AR) technologies provide a shared platform for users to collaborate in a physical context involving both real and virtual content. To enhance the quality of interaction between AR users, researchers have proposed augmenting users' interpersonal space with embodied cues such as their gaze direction. While beneficial in achieving improved interpersonal spatial communication, such shared gaze environments suffer from multiple types of errors related to eye tracking and networking that can reduce objective performance and subjective experience. In this paper, we present a human-subjects study to understand the impact of accuracy, precision, latency, and dropout-based errors on users' performance when using shared gaze cues to identify a target among a crowd of people. We simulated varying amounts of error and target distances and measured participants' objective performance through their response time and error rate, and their subjective experience and cognitive load through questionnaires. We found significant differences suggesting that the simulated error levels had stronger effects on participants' performance than target distance, with accuracy and latency having a high impact on participants' error rate. We also observed that participants assessed their own performance as lower than it objectively was. We discuss implications for practical shared gaze applications and present a multi-user prototype system.

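For readers curious how such error types can be injected in software, the sketch below degrades a stream of gaze directions with the four error categories named in the abstract: accuracy (a constant angular bias), precision (zero-mean jitter), latency (delayed samples), and dropout (lost samples). It is an illustrative approximation under our own assumptions, not the authors' implementation; the function name, parameter values, and the simple per-frame latency model are all hypothetical.

```python
# Illustrative sketch (not the study's code): simulating accuracy, precision,
# latency, and dropout errors on per-frame gaze directions given in degrees.
import collections
import math
import random

def degrade_gaze(samples, accuracy_deg=1.0, precision_deg=0.5,
                 latency_frames=6, dropout_prob=0.05, seed=42):
    """samples: iterable of (yaw_deg, pitch_deg) per frame.
    Yields a degraded (yaw_deg, pitch_deg) per frame, or None when the cue is unavailable."""
    rng = random.Random(seed)
    # Accuracy error: a constant angular offset in a random but fixed direction.
    bias_angle = rng.uniform(0.0, 2.0 * math.pi)
    bias = (accuracy_deg * math.cos(bias_angle), accuracy_deg * math.sin(bias_angle))
    delayed = collections.deque()  # holds samples not yet delivered (models latency)
    for yaw, pitch in samples:
        # Precision error: zero-mean Gaussian jitter around the biased direction.
        jitter_yaw = rng.gauss(0.0, precision_deg)
        jitter_pitch = rng.gauss(0.0, precision_deg)
        delayed.append((yaw + bias[0] + jitter_yaw, pitch + bias[1] + jitter_pitch))
        if len(delayed) <= latency_frames:
            yield None                    # nothing delivered yet: start-up latency
        elif rng.random() < dropout_prob:
            delayed.popleft()
            yield None                    # dropout: this sample is lost in transit
        else:
            yield delayed.popleft()       # latency: deliver a sample from the past

if __name__ == "__main__":
    # A stationary gaze at (0, 0) degrees for 20 frames, degraded.
    for frame, cue in enumerate(degrade_gaze([(0.0, 0.0)] * 20)):
        print(frame, cue)
```
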
Austin Erickson; Kangsoo Kim; Gerd Bruder; Greg Welch
Effects of Dark Mode Graphics on Visual Acuity and Fatigue with Virtual Reality Head-Mounted Displays (Inproceedings)
In: Proceedings of IEEE International Conference on Virtual Reality and 3D User Interfaces (IEEE VR), pp. 434-442, Atlanta, Georgia, 2020.
Links: https://sreal.ucf.edu/wp-content/uploads/2020/02/VR2020_DarkMode2_0.pdf | https://www.youtube.com/watch?v=wePUk0xTLA0&t=5s (YouTube presentation) | DOI: 10.1109/VR46266.2020.00-40
Abstract: Current virtual reality (VR) head-mounted displays (HMDs) are characterized by a low angular resolution that makes it difficult to make out details, leading to reduced legibility of text and increased visual fatigue. Light-on-dark graphics modes, so-called "dark mode" graphics, are becoming more and more popular over a wide range of display technologies, and have been correlated with increased visual comfort and acuity, specifically when working in low-light environments, which suggests that they might provide significant advantages for VR HMDs. In this paper, we present a human-subject study investigating the correlations between the color mode and the ambient lighting with respect to visual acuity and fatigue on VR HMDs. We compare two color schemes, characterized by light letters on a dark background (dark mode), or dark letters on a light background (light mode), and show that the dark background in dark mode provides a significant advantage in terms of reduced visual fatigue and increased visual acuity in dim virtual environments on current HMDs. Based on our results, we discuss guidelines for user interfaces and applications.

Austin Erickson; Gerd Bruder; Pamela J. Wisniewski; Greg Welch
Examining Whether Secondary Effects of Temperature-Associated Virtual Stimuli Influence Subjective Perception of Duration (Inproceedings)
In: Proceedings of IEEE International Conference on Virtual Reality and 3D User Interfaces (IEEE VR), pp. 493-499, Atlanta, Georgia, 2020.
Links: https://sreal.ucf.edu/wp-content/uploads/2020/02/TimePerception_VR2020.pdf | https://www.youtube.com/watch?v=kG2M-cbjS3s&t=1s (YouTube presentation) | DOI: 10.1109/VR46266.2020.00-34
Abstract: Past work in augmented reality has shown that temperature-associated AR stimuli can induce warming and cooling sensations in the user, and prior work in psychology suggests that a person's body temperature can influence that person's subjective perception of duration. In this paper, we present a user study to evaluate the relationship between temperature-associated virtual stimuli presented on an AR-HMD and the user's subjective perception of duration and temperature. In particular, we investigate two independent variables: the apparent temperature of the virtual stimuli presented to the participant, which could be hot or cold, and the location of the stimuli, which could be in direct contact with the user, in indirect contact with the user, or both in direct and indirect contact simultaneously. We investigate how these variables affect the users' perception of duration and perception of body and environment temperature by having participants make prospective time estimations while observing the virtual stimulus and answering subjective questions regarding their body and environment temperatures. Our work confirms that temperature-associated virtual stimuli are capable of having significant effects on the users' perception of temperature, and highlights a possible limitation in current augmented reality technology in that no secondary effects on the users' perception of duration were observed.

Kangsoo Kim; Celso M. de Melo; Nahal Norouzi; Gerd Bruder; Gregory F. Welch
Reducing Task Load with an Embodied Intelligent Virtual Assistant for Improved Performance in Collaborative Decision Making (Inproceedings)
In: Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR), pp. 529-538, Atlanta, Georgia, 2020.
Links: https://sreal.ucf.edu/wp-content/uploads/2020/02/IEEEVR2020_ARDesertSurvival.pdf | https://www.youtube.com/watch?v=G_iZ_asjp3I&t=6s (YouTube presentation) | DOI: 10.1109/VR46266.2020.00-30

Zubin Choudhary; Kangsoo Kim; Ryan Schubert; Gerd Bruder; Gregory F. Welch
Virtual Big Heads: Analysis of Human Perception and Comfort of Head Scales in Social Virtual Reality (Inproceedings)
In: Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR), pp. 425-433, Atlanta, Georgia, 2020.
Links: https://sreal.ucf.edu/wp-content/uploads/2020/02/IEEEVR2020_BigHead.pdf | https://www.youtube.com/watch?v=14289nufYf0 (YouTube presentation) | DOI: 10.1109/VR46266.2020.00-41

Austin Erickson; Nahal Norouzi; Kangsoo Kim; Joseph J. LaViola Jr.; Gerd Bruder; Gregory F. Welch
Effects of Depth Information on Visual Target Identification Task Performance in Shared Gaze Environments (Journal Article)
In: IEEE Transactions on Visualization and Computer Graphics, 26 (5), pp. 1934-1944, 2020, ISSN: 1077-2626. (Presented at IEEE VR 2020)
Links: https://sreal.ucf.edu/wp-content/uploads/2020/02/shared_gaze_2_FINAL.pdf | https://www.youtube.com/watch?v=JQO_iosY62Y&t=6s (YouTube presentation) | DOI: 10.1109/TVCG.2020.2973054
Abstract: Human gaze awareness is important for social and collaborative interactions. Recent technological advances in augmented reality (AR) displays and sensors provide us with the means to extend collaborative spaces with real-time dynamic AR indicators of one's gaze, for example via three-dimensional cursors or rays emanating from a partner's head. However, such gaze cues are only as useful as the quality of the underlying gaze estimation and the accuracy of the display mechanism. Depending on the type of the visualization and the characteristics of the errors, AR gaze cues could either enhance or interfere with collaborations. In this paper, we present two human-subject studies in which we investigate the influence of angular and depth errors, target distance, and the type of gaze visualization on participants' performance and subjective evaluation during a collaborative task with a virtual human partner, where participants identified targets within a dynamically walking crowd. First, our results show that there is a significant difference in performance for the two gaze visualizations, ray and cursor, in conditions with simulated angular and depth errors: the ray visualization provided significantly faster response times and fewer errors compared to the cursor visualization. Second, our results show that under optimal conditions, among four different gaze visualization methods, a ray without depth information provides the worst performance and is rated lowest, while a combination of a ray and cursor with depth information is rated highest. We discuss the subjective and objective performance thresholds and provide guidelines for practitioners in this field.

2019
Myungho Lee; Nahal Norouzi; Gerd Bruder; Pamela J. Wisniewski; Gregory F. Welch
Mixed Reality Tabletop Gameplay: Social Interaction with a Virtual Human Capable of Physical Influence (Journal Article)
In: IEEE Transactions on Visualization and Computer Graphics, 24 (8), pp. 1-12, 2019, ISSN: 1077-2626.
Links: https://sreal.ucf.edu/wp-content/uploads/2019/12/TVCG_Physical_Virtual_Table_2019.pdf | DOI: 10.1109/TVCG.2019.2959575
Abstract: In this paper, we investigate the effects of the physical influence of a virtual human (VH) in the context of face-to-face interaction in a mixed reality environment. In Experiment 1, participants played a tabletop game with a VH, in which each player takes a turn and moves their own token along the designated spots on the shared table. We compared two conditions: the VH in the virtual condition moves a virtual token that can only be seen through augmented reality (AR) glasses, while the VH in the physical condition moves a physical token as the participants do; therefore, the VH's token can be seen even in the periphery of the AR glasses. For the physical condition, we designed an actuator system underneath the table. The actuator moves a magnet under the table, which then moves the VH's physical token over the surface of the table. Our results indicate that participants felt higher co-presence with the VH in the physical condition, and participants assessed the VH as a more physical entity compared to the VH in the virtual condition. We further observed transference effects when participants attributed the VH's ability to move physical objects to other elements in the real world. Also, the VH's physical influence improved participants' overall experience with the VH. In Experiment 2, we further looked into the question of how the physical-virtual latency in movements affected the perceived plausibility of the VH's interaction with the real world. Our results indicate that a slight temporal delay between the virtual hand's movement and the physical token's reaction increased the perceived realism and causality of the mixed reality interaction. We discuss potential explanations for the findings and implications for future shared mixed reality tabletop setups.

Kangsoo Kim; Nahal Norouzi; Tiffany Losekamp; Gerd Bruder; Mindi Anderson; Gregory Welch
Effects of Patient Care Assistant Embodiment and Computer Mediation on User Experience (Inproceedings)
In: Proceedings of the IEEE International Conference on Artificial Intelligence & Virtual Reality (AIVR), pp. 17-24, IEEE, 2019.
Links: https://sreal.ucf.edu/wp-content/uploads/2019/11/AIVR2019_Caregiver.pdf | DOI: 10.1109/AIVR46125.2019.00013
Abstract: Providers of patient care environments are facing an increasing demand for technological solutions that can facilitate increased patient satisfaction while being cost-effective and practically feasible. Recent developments with respect to smart hospital room setups and smart home care environments have immense potential to leverage advances in technologies such as Intelligent Virtual Agents, Internet of Things devices, and Augmented Reality to enable novel forms of patient interaction with caregivers and their environment. In this paper, we present a human-subjects study in which we compared four types of simulated patient care environments for a range of typical tasks. In particular, we tested two forms of caregiver mediation, with a real person or a virtual agent, and we compared two forms of caregiver embodiment, with disembodied verbal or embodied interaction. Our results show that, as expected, a real caregiver provides the optimal user experience, but an embodied virtual assistant is also a viable option for patient care environments, providing significantly higher social presence and engagement than voice-only interaction. We discuss the implications in the fields of patient care and digital assistants.

Kangsoo Kim; Austin Erickson; Alexis Lambert; Gerd Bruder; Gregory F. Welch
Effects of Dark Mode on Visual Fatigue and Acuity in Optical See-Through Head-Mounted Displays (Inproceedings)
In: Proceedings of the ACM Symposium on Spatial User Interaction (SUI), pp. 9:1-9:9, ACM, 2019, ISBN: 978-1-4503-6975-6/19/10.
Links: https://sreal.ucf.edu/wp-content/uploads/2019/10/Kim2019edm.pdf | DOI: 10.1145/3357251.3357584
Abstract: Light-on-dark color schemes, so-called "Dark Mode," are becoming more and more popular over a wide range of display technologies and application fields. Many people who have to look at computer screens for hours at a time, such as computer programmers and computer graphics artists, indicate a preference for switching colors on a computer screen from dark text on a light background to light text on a dark background due to perceived advantages related to visual comfort and acuity, specifically when working in low-light environments. In this paper, we investigate the effects of dark mode color schemes in the field of optical see-through head-mounted displays (OST-HMDs), where the characteristic "additive" light model implies that bright graphics are visible but dark graphics are transparent. We describe a human-subject study in which we evaluated a normal and inverted color mode in front of different physical backgrounds and among different lighting conditions. Our results show that dark mode graphics on OST-HMDs have significant benefits for visual acuity, fatigue, and usability, while user preferences depend largely on the lighting in the physical environment. We discuss the implications of these effects on user interfaces and applications.

Nahal Norouzi; Austin Erickson; Kangsoo Kim; Ryan Schubert; Joseph J. LaViola Jr.; Gerd Bruder; Gregory F. Welch
Effects of Shared Gaze Parameters on Visual Target Identification Task Performance in Augmented Reality (Inproceedings)
In: Proceedings of the ACM Symposium on Spatial User Interaction (SUI), pp. 12:1-12:11, ACM, 2019, ISBN: 978-1-4503-6975-6/19/10. (Best Paper Award)
Links: https://sreal.ucf.edu/wp-content/uploads/2019/10/a12-norouzi.pdf | DOI: 10.1145/3357251.3357587
Abstract: Augmented reality (AR) technologies provide a shared platform for users to collaborate in a physical context involving both real and virtual content. To enhance the quality of interaction between AR users, researchers have proposed augmenting users' interpersonal space with embodied cues such as their gaze direction. While beneficial in achieving improved interpersonal spatial communication, such shared gaze environments suffer from multiple types of errors related to eye tracking and networking that can reduce objective performance and subjective experience. In this paper, we conducted a human-subject study to understand the impact of accuracy, precision, latency, and dropout-based errors on users' performance when using shared gaze cues to identify a target among a crowd of people. We simulated varying amounts of error and target distances and measured participants' objective performance through their response time and error rate, and their subjective experience and cognitive load through questionnaires. We found some significant differences suggesting that the simulated error levels had stronger effects on participants' performance than target distance, with accuracy and latency having a high impact on participants' error rate. We also observed that participants assessed their own performance as lower than it objectively was, and we discuss implications for practical shared gaze applications.
