2021
@misc{Anderson2021b,
title = {Poster: Using XR Technology to Innovate Healthcare Education},
author = {Mindi Anderson and Frank Guido-Sanz and Desiree A. Díaz and Gregory F. Welch and Laura Gonzalez},
url = {https://www.inacsl.org/education/future-conferences/ https://sreal.ucf.edu/wp-content/uploads/2021/11/1559212-1621968939.pdf},
year = {2021},
date = {2021-06-20},
urldate = {2021-06-20},
keywords = {},
pubstate = {published},
tppubtype = {presentation}
}
@article{Erickson2021,
title = {An Extended Analysis on the Benefits of Dark Mode User Interfaces in Optical See-Through Head-Mounted Displays},
author = {Austin Erickson and Kangsoo Kim and Alexis Lambert and Gerd Bruder and Michael P. Browne and Greg Welch},
editor = {Victoria Interrante and Martin Giese},
url = {https://sreal.ucf.edu/wp-content/uploads/2021/03/ACM_TAP2020_DarkMode1_5.pdf},
doi = {10.1145/3456874},
year = {2021},
date = {2021-05-20},
journal = {ACM Transactions on Applied Perception},
volume = {18},
number = {3},
pages = {22},
abstract = {Light-on-dark color schemes, so-called “Dark Mode,” are becoming more and more popular over a wide range of display technologies and application fields. Many people who have to look at computer screens for hours at a time, such as computer programmers and computer graphics artists, indicate a preference for switching colors on a computer screen from dark text on a light background to light text on a dark background due to perceived advantages related to visual comfort and acuity, specifically when working in low-light environments.
In this paper, we investigate the effects of dark mode color schemes in the field of optical see-through head-mounted displays (OST-HMDs), where the characteristic “additive” light model implies that bright graphics are visible but dark graphics are transparent. We describe two human-subject studies in which we evaluated a normal and inverted color mode in front of different physical backgrounds and different lighting conditions. Our results indicate that dark mode graphics displayed on the HoloLens have significant benefits for visual acuity, and usability, while user preferences depend largely on the lighting in the physical environment. We discuss the implications of these effects on user interfaces and applications.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@inproceedings{Furuya2021,
title = {Autonomous Vehicle Visual Embodiment for Pedestrian Interactions in Crossing Scenarios: Virtual Drivers in AVs for Pedestrian Crossing},
author = {Hiroshi Furuya and Kangsoo Kim and Gerd Bruder and Pamela J. Wisniewski and Gregory F. Welch},
url = {https://sreal.ucf.edu/wp-content/uploads/2021/06/Furuya2021.pdf},
doi = {10.1145/3411763.3451626},
isbn = {9781450380959},
year = {2021},
date = {2021-05-08},
booktitle = {Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems},
number = {304},
pages = {7},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {CHI EA'21},
abstract = {This work presents a novel prototype autonomous vehicle (AV) human-machine interface (HMI) in virtual reality (VR) that utilizes a human-like visual embodiment in the driver's seat of an AV to communicate AV intent to pedestrians in a crosswalk scenario. There is currently a gap in understanding the use of virtual humans in AV HMIs for pedestrian crossing despite the demonstrated efficacy of human-like interfaces in improving human-machine relationships. We conduct a 3x2 within-subjects experiment in VR using our prototype to assess the effects of a virtual human visual embodiment AV HMI on pedestrian crossing behavior and experience. In the experiment participants walk across a virtual crosswalk in front of an AV. How long they took to decide to cross and how long it took for them to reach the other side were collected, in addition to their subjective preferences and feelings of safety. Of 26 participants, 25 preferred the condition with the most anthropomorphic features. An intermediate condition where a human-like virtual driver was present but did not exhibit any behaviors was least preferred and also had a significant effect on time to decide. This work contributes the first empirical work on using human-like visual embodiments for AV HMIs.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@article{Anderson2021,
title = {Augmented Reality in Nurse Practitioner Education: Using a Triage Scenario to Pilot Technology Usability and Effectiveness},
author = {Mindi Anderson and Frank Guido-Sanz and Desiree A. Díaz and Benjamin Lok and Jacob Stuart and Ilerioluwa Akinnola and Gregory Welch},
url = {https://www.sciencedirect.com/science/article/pii/S1876139921000098 https://sreal.ucf.edu/wp-content/uploads/2022/08/MindiAnderson2021qd.pdf},
issn = {1876-1399},
year = {2021},
date = {2021-05-01},
urldate = {2021-05-01},
journal = {Clinical Simulation in Nursing},
volume = {54},
pages = {105-112},
abstract = {Background
Before implementation, simulations and new technologies should be piloted for usability and effectiveness. Simulationists and augmented reality (AR) researchers developed an augmented reality (AR) triage scenario for Nurse Practitioner (NP) students.
Methods
A mixed-method, exploratory, pilot study was carried out with NP students and other volunteers. Participants completed several tools to appraise the usability of the AR modality and the effectiveness of the scenario for learning. Open-ended questions were asked, and qualitative themes were obtained via content analysis.
Results
Mixed results were received by the twelve participants (8 students, 4 other volunteers). There were some issues with usability, and technical challenges occurred. The debriefing was found to be effective, and favorable comments were made on simulation realism. Further preparation for the content and technology, along with more practice, was inferred. Those with reported previous AR experience found the experience more effective.
Conclusions
Further improvements are needed with usability of the AR modality. Debriefing can be effective and the simulation realistic. Participants need further preparation in triaging and use of the technology, and more practice is needed. AR simulations have promise for use in NP education.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@inproceedings{Choudhary2021,
title = {Revisiting Distance Perception with Scaled Embodied Cues in Social Virtual Reality},
author = {Zubin Choudhary and Matt Gottsacker and Kangsoo Kim and Ryan Schubert and Jeanine Stefanucci and Gerd Bruder and Greg Welch},
url = {https://sreal.ucf.edu/wp-content/uploads/2021/04/C2593-Revisiting-Distance-Perception-with-Scaled-Embodied-Cues-in-Social-Virtual-Reality-7.pdf},
year = {2021},
date = {2021-04-01},
booktitle = {IEEE Virtual Reality (VR)},
abstract = {Previous research on distance estimation in virtual reality (VR) has well established that even for geometrically accurate virtual objects and environments users tend to systematically misestimate distances. This has implications for Social VR, where it introduces variables in personal space and proxemics behavior that change social behaviors compared to the real world. One yet unexplored factor is related to the trend that avatars’ embodied cues in Social VR are often scaled, e.g., by making one’s head bigger or one’s voice louder, to make social cues more pronounced over longer distances.
In this paper we investigate how the perception of avatar distance is changed based on two means for scaling embodied social cues: visual head scale and verbal volume scale. We conducted a human subject study employing a mixed factorial design with two Social VR avatar representations (full-body, head-only) as a between factor as well as three visual head scales and three verbal volume scales (up-scaled, accurate, down-scaled) as within factors. For three distances from social to far-public space, we found that visual head scale had a significant effect on distance judgments and should be tuned for Social VR, while conflicting verbal volume scales did not, indicating that voices can be scaled in Social VR without immediate repercussions on spatial estimates. We discuss the interactions between the factors and implications for Social VR.
},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
2020
@inproceedings{Norouzi2020d,
title = {[Demo] Towards Interactive Virtual Dogs as a Pervasive Social Companion in Augmented Reality},
author = {Nahal Norouzi and Kangsoo Kim and Gerd Bruder and Greg Welch},
url = {https://sreal.ucf.edu/wp-content/uploads/2020/12/029-030.pdf},
doi = {10.2312/egve.20201283},
year = {2020},
date = {2020-12-04},
booktitle = {Proceedings of the combined International Conference on Artificial Reality \& Telexistence and Eurographics Symposium on Virtual Environments (ICAT-EGVE)},
pages = {29-30},
abstract = {Pets and animal-assisted intervention sessions have shown to be beneficial for humans' mental, social, and physical health. However, for specific populations, factors such as hygiene restrictions, allergies, and care and resource limitations reduce interaction opportunities. In parallel, understanding the capabilities of animals' technological representations, such as robotic and digital forms, have received considerable attention and has fueled the utilization of many of these technological representations. Additionally, recent advances in augmented reality technology have allowed for the realization of virtual animals with flexible appearances and behaviors to exist in the real world. In this demo, we present a companion virtual dog in augmented reality that aims to facilitate a range of interactions with populations, such as children and older adults. We discuss the potential benefits and limitations of such a companion and propose future use cases and research directions.},
note = {Best Demo Audience Choice Award},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{Mostofa2020aa,
title = {[POSTER] Tactile Telepresence for Isolated Patients},
author = {Nafisa Mostofa and Indira Avendano and Ryan P. McMahan and Greg Welch},
editor = {Kulik, Alexander and Sra, Misha and Kim, Kangsoo and Seo, Byung-Kuk},
url = {https://sreal.ucf.edu/wp-content/uploads/2020/12/ICAT-EGVE2020_PosterExtendedAbstract-TTIP.pdf https://www.youtube.com/watch?v=5Dmxzd58rOk&feature=emb_logo},
doi = {10.2312/egve.20201272},
isbn = {978-3-03868-112-0},
year = {2020},
date = {2020-12-02},
booktitle = {ICAT-EGVE 2020 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments - Posters and Demos},
pages = {7--8},
publisher = {The Eurographics Association},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{Erickson2020f,
title = {[Demo] Dark/Light Mode Adaptation for Graphical User Interfaces on Near-Eye Displays},
author = {Austin Erickson and Kangsoo Kim and Gerd Bruder and Gregory F. Welch},
editor = {Kulik, Alexander and Sra, Misha and Kim, Kangsoo and Seo, Byung-Kuk},
url = {https://sreal.ucf.edu/wp-content/uploads/2020/12/DarkmodeDEMO_ICAT_EGVE_2020-2.pdf https://www.youtube.com/watch?v=VJQTaYyofCw&t=61s},
doi = {10.2312/egve.20201280},
isbn = {978-3-03868-112-0},
year = {2020},
date = {2020-12-02},
booktitle = {Proceedings of the International Conference on Artificial Reality and Telexistence \& Eurographics Symposium on Virtual Environments},
pages = {23-24},
publisher = {The Eurographics Association},
organization = {The Eurographics Association},
abstract = {In the fields of augmented reality (AR) and virtual reality (VR), many applications involve user interfaces (UIs) to display various types of information to users. Such UIs are an important component that influences user experience and human factors in AR/VR because the users are directly facing and interacting with them to absorb the visualized information and manipulate the content. While consumer’s interests in different forms of near-eye displays, such as AR/VR head-mounted displays (HMDs), are increasing, research on the design standard for AR/VR UIs and human factors becomes more and more interesting and timely important. Although UI configurations, such as dark mode and light mode, have increased in popularity on other display types over the last several years, they have yet to make their way into AR/VR devices as built in features. This demo showcases several use cases of dark mode and light mode UIs on AR/VR HMDs, and provides general guidelines for when they should be used to provide perceptual benefits to the user},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{Norouzi2020c,
title = {A Systematic Literature Review of Embodied Augmented Reality Agents in Head-Mounted Display Environments},
author = {Nahal Norouzi and Kangsoo Kim and Gerd Bruder and Austin Erickson and Zubin Choudhary and Yifan Li and Greg Welch},
url = {https://sreal.ucf.edu/wp-content/uploads/2020/11/IVC_ICAT_EGVE2020.pdf https://www.youtube.com/watch?v=IsX5q86pH4M},
year = {2020},
date = {2020-12-02},
urldate = {2020-12-02},
booktitle = {Proceedings of the International Conference on Artificial Reality and Telexistence \& Eurographics Symposium on Virtual Environments},
pages = {11},
abstract = {Embodied agents, i.e., computer-controlled characters, have proven useful for various applications across a multitude of display setups and modalities. While most traditional work focused on embodied agents presented on a screen or projector, and a growing number of works are focusing on agents in virtual reality, a comparatively small number of publications looked at such agents in augmented reality (AR). Such AR agents, specifically when using see-through head-mounted displays (HMDs) as the display medium, show multiple critical differences to other forms of agents, including their appearances, behaviors, and physical-virtual interactivity. Due to the unique challenges in this specific field, and due to the comparatively limited attention by the research community so far, we believe that it is important to map the field to understand the current trends, challenges, and future research. In this paper, we present a systematic review of the research performed on interactive, embodied AR agents using HMDs. Starting with 1261 broadly related papers, we conducted an in-depth review of 50 directly related papers from 2000 to 2020, focusing on papers that reported on user studies aiming to improve our understanding of interactive agents in AR HMD environments or their utilization in specific applications. We identified common research and application areas of AR agents through a structured iterative process, present research trends, and gaps, and share insights on future directions.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{Zehtabian2020aav,
title = {[Poster] An Automated Virtual Receptionist for Recognizing Visitors and Assuring Mask Wearing},
author = {Sharare Zehtabian and Siavash Khodadadeh and Kangsoo Kim and Gerd Bruder and Greg Welch and Ladislau Bölöni and Damla Turgut},
url = {https://sreal.ucf.edu/wp-content/uploads/2020/12/VirtualReceptionist_Poster_ICAT_EGVE2020.pdf https://www.youtube.com/watch?v=r6bXNPn3lWU&feature=emb_logo},
doi = {10.2312/egve.20201273},
year = {2020},
date = {2020-12-02},
booktitle = {Proceedings of the International Conference on Artificial Reality and Telexistence \& Eurographics Symposium on Virtual Environments},
pages = {9-10},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{Erickson2020e,
title = {A Review of Visual Perception Research in Optical See-Through Augmented Reality},
author = {Austin Erickson and Kangsoo Kim and Gerd Bruder and Greg Welch},
url = {https://sreal.ucf.edu/wp-content/uploads/2021/05/DarkModeSurvey_ICAT_EGVE_2020.pdf},
doi = {10.2312/egve.20201256},
isbn = {978-3-03868-111-3},
year = {2020},
date = {2020-12-02},
booktitle = {Proceedings of the International Conference on Artificial Reality and Telexistence \& Eurographics Symposium on Virtual Environments},
pages = {8},
publisher = {The Eurographics Association},
organization = {The Eurographics Association},
abstract = {In the field of augmented reality (AR), many applications involve user interfaces (UIs) that overlay visual information over the user's view of their physical environment, e.g., as text, images, or three-dimensional scene elements. In this scope, optical see-through head-mounted displays (OST-HMDs) are particularly interesting as they typically use an additive light model, which denotes that the perception of the displayed virtual imagery is a composite of the lighting conditions of one's environment, the coloration of the objects that make up the virtual imagery, and the coloration of physical objects that lay behind them. While a large body of literature focused on investigating the visual perception of UI elements in immersive and flat panel displays, comparatively less effort has been spent on OST-HMDs. Due to the unique visual effects with OST-HMDs, we believe that it is important to review the field to understand the perceptual challenges, research trends, and future directions. In this paper, we present a systematic survey of literature based on the IEEE and ACM digital libraries, which explores users' perception of displaying text-based information on an OST-HMD, and aim to provide relevant design suggestions based on the meta-analysis results. We carefully review 14 key papers relevant to the visual perception research in OST-HMDs with UI elements, and present the current state of the research field, associated trends, noticeable research gaps in the literature, and recommendations for potential future research in this domain. },
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
| Gregory F. Welch Kalman Filter Book Chapter In: Rehg, Jim (Ed.): Computer Vision: A Reference Guide, pp. 1–3, Springer International Publishing, Cham, Switzerland, 2020, ISBN: 978-3-030-03243-2. @inbook{Welch2020ab,
title = {Kalman Filter},
author = {Gregory F. Welch},
editor = {Jim Rehg},
url = {https://sreal.ucf.edu/wp-content/uploads/2020/12/Welch2020ae.pdf},
doi = {10.1007/978-3-030-03243-2_716-1},
isbn = {978-3-030-03243-2},
year = {2020},
date = {2020-12-01},
booktitle = {Computer Vision: A Reference Guide},
pages = {1--3},
publisher = {Springer International Publishing},
address = {Cham, Switzerland},
keywords = {},
pubstate = {published},
tppubtype = {inbook}
}
|
| Celso M. de Melo; Kangsoo Kim; Nahal Norouzi; Gerd Bruder; Gregory Welch Reducing Cognitive Load and Improving Warfighter Problem Solving with Intelligent Virtual Assistants Journal Article In: Frontiers in Psychology, vol. 11, no. 554706, pp. 1-12, 2020. @article{DeMelo2020rcl,
title = {Reducing Cognitive Load and Improving Warfighter Problem Solving with Intelligent Virtual Assistants},
author = {Celso M. de Melo and Kangsoo Kim and Nahal Norouzi and Gerd Bruder and Gregory Welch},
url = {https://sreal.ucf.edu/wp-content/uploads/2020/11/Melo2020aa-2.pdf},
doi = {10.3389/fpsyg.2020.554706},
year = {2020},
date = {2020-11-17},
journal = {Frontiers in Psychology},
volume = {11},
number = {554706},
pages = {1-12},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
|
| Austin Erickson; Kangsoo Kim; Gerd Bruder; Gregory F. Welch Exploring the Limitations of Environment Lighting on Optical See-Through Head-Mounted Displays Proceedings Article In: Proceedings of the ACM Symposium on Spatial User Interaction, pp. 1-8, Association for Computing Machinery (ACM), New York, NY, USA, 2020, ISBN: 9781450379434. @inproceedings{Erickson2020d,
title = {Exploring the Limitations of Environment Lighting on Optical See-Through Head-Mounted Displays},
author = {Austin Erickson and Kangsoo Kim and Gerd Bruder and Gregory F. Welch},
url = {https://sreal.ucf.edu/wp-content/uploads/2020/09/sui20a-sub1047-cam-i26-1.pdf
https://youtu.be/3jJ-j35oO1I},
doi = {10.1145/3385959.3418445},
isbn = {9781450379434},
year = {2020},
date = {2020-10-31},
booktitle = {Proceedings of the ACM Symposium on Spatial User Interaction},
pages = {1-8},
publisher = {ACM},
address = {New York, NY, USA},
organization = {Association for Computing Machinery},
series = {SUI '20},
abstract = {Due to the additive light model employed by most optical see-through head-mounted displays (OST-HMDs), they provide the best augmented reality (AR) views in dark environments, where the added AR light does not have to compete against existing real-world lighting. AR imagery displayed on such devices loses a significant amount of contrast in well-lit environments such as outdoors in direct sunlight. To compensate for this, OST-HMDs often use a tinted visor to reduce the amount of environment light that reaches the user’s eyes, which in turn results in a loss of contrast in the user’s physical environment. While these effects are well known and grounded in existing literature, formal measurements of the illuminance and contrast of modern OST-HMDs are currently missing. In this paper, we provide illuminance measurements for both the Microsoft HoloLens 1 and its successor the HoloLens 2 under varying environment lighting conditions ranging from 0 to 20,000 lux. We evaluate how environment lighting impacts the user by calculating contrast ratios between rendered black (transparent) and white imagery displayed under these conditions, and evaluate how the intensity of environment lighting is impacted by donning and using the HMD. Our results indicate the further need for refinement in the design of future OST-HMDs to optimize contrast in environments with illuminance values greater than or equal to those found in indoor working environments.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
|
| Gregory F Welch; Ryan Schubert; Gerd Bruder; Derrick P Stockdreher; Adam Casebolt Augmented Reality Promises Mentally and Physically Stressful Training in Real Places Journal Article In: IACLEA Campus Law Enforcement Journal, vol. 50, no. 5, pp. 47–50, 2020. @article{Welch2020aa,
title = {Augmented Reality Promises Mentally and Physically Stressful Training in Real Places},
author = {Gregory F Welch and Ryan Schubert and Gerd Bruder and Derrick P Stockdreher and Adam Casebolt},
url = {https://sreal.ucf.edu/wp-content/uploads/2020/10/Welch2020aa.pdf},
year = {2020},
date = {2020-10-05},
journal = {IACLEA Campus Law Enforcement Journal},
volume = {50},
number = {5},
pages = {47--50},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
|
| Seungwon Kim; Mark Billinghurst; Kangsoo Kim Multimodal interfaces and communication cues for remote collaboration Journal Article In: Journal on Multimodal User Interfaces, vol. 14, no. 4, pp. 313-319, 2020, ISSN: 1783-7677, (Special Issue Editorial). @article{Kim2020mia,
title = {Multimodal interfaces and communication cues for remote collaboration},
author = {Seungwon Kim and Mark Billinghurst and Kangsoo Kim},
url = {https://sreal.ucf.edu/wp-content/uploads/2020/10/Kim2020mia_Submission.pdf},
doi = {10.1007/s12193-020-00346-8},
issn = {1783-7677},
year = {2020},
date = {2020-10-03},
journal = {Journal on Multimodal User Interfaces},
volume = {14},
number = {4},
pages = {313-319},
note = {Special Issue Editorial},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
|
| Alexis Lambert; Nahal Norouzi; Gerd Bruder; Greg Welch A Systematic Review of Ten Years of Research on Human Interaction with Social Robots Journal Article In: International Journal of Human-Computer Interaction, pp. 10, 2020. @article{Lambert2020,
title = {A Systematic Review of Ten Years of Research on Human Interaction with Social Robots},
author = {Alexis Lambert and Nahal Norouzi and Gerd Bruder and Greg Welch},
editor = {Constantine Stephanidis},
url = {https://sreal.ucf.edu/wp-content/uploads/2020/08/8_25_2020_A-Systemat.pdf},
doi = {10.1080/10447318.2020.1801172},
year = {2020},
date = {2020-08-25},
journal = {International Journal of Human-Computer Interaction},
pages = {10},
abstract = {While research and development related to robotics has been going on for decades, the past decade in particular has seen a marked increase in related efforts, in part due to technological advances, increased technological accessibility and reliability, and increased commercial availability. What have come to be known as social robots are now being used to explore novel forms of human-robot interaction, to understand social norms, and to test expectations and human responses. To capture the contributions of these research efforts, identify the current trends, and future directions, we systematically review ten years of research in the field of social robotics between 2008 and 2018, which includes 86 publications with 70 user studies. We classify the past work based on the research topics and application areas, and provide information about the publications, their user studies, and the capabilities of the social robots utilized. We also discuss selected papers in detail and outline overall trends. Based on these findings, we identify some areas of potential future research.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
|
| Laura Gonzalez; Salam Daher; Greg Welch Neurological Assessment Using a Physical-Virtual Patient (PVP) Journal Article In: Simulation & Gaming, pp. 1–17, 2020. @article{Gonzalez2020aa,
title = {Neurological Assessment Using a Physical-Virtual Patient (PVP)},
author = {Laura Gonzalez and Salam Daher and Greg Welch},
url = {https://sreal.ucf.edu/wp-content/uploads/2020/08/Gonzalez2020aa.pdf},
year = {2020},
date = {2020-08-12},
journal = {Simulation & Gaming},
pages = {1--17},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
|
| Austin Erickson; Nahal Norouzi; Kangsoo Kim; Ryan Schubert; Jonathan Jules; Joseph J. LaViola Jr.; Gerd Bruder; Gregory F. Welch Sharing gaze rays for visual target identification tasks in collaborative augmented reality Journal Article In: Journal on Multimodal User Interfaces: Special Issue on Multimodal Interfaces and Communication Cues for Remote Collaboration, vol. 14, no. 4, pp. 353-371, 2020, ISSN: 1783-8738. @article{EricksonNorouzi2020,
title = {Sharing gaze rays for visual target identification tasks in collaborative augmented reality},
author = {Austin Erickson and Nahal Norouzi and Kangsoo Kim and Ryan Schubert and Jonathan Jules and Joseph J. LaViola Jr. and Gerd Bruder and Gregory F. Welch},
url = {https://sreal.ucf.edu/wp-content/uploads/2020/07/Erickson2020_Article_SharingGazeRaysForVisualTarget.pdf},
doi = {10.1007/s12193-020-00330-2},
issn = {1783-8738},
year = {2020},
date = {2020-07-09},
urldate = {2020-07-09},
journal = {Journal on Multimodal User Interfaces: Special Issue on Multimodal Interfaces and Communication Cues for Remote Collaboration},
volume = {14},
number = {4},
pages = {353-371},
abstract = {Augmented reality (AR) technologies provide a shared platform for users to collaborate in a physical context involving both real and virtual content. To enhance the quality of interaction between AR users, researchers have proposed augmenting users’ interpersonal space with embodied cues such as their gaze direction. While beneficial in achieving improved interpersonal spatial communication, such shared gaze environments suffer from multiple types of errors related to eye tracking and networking that can reduce objective performance and subjective experience. In this paper, we present a human-subjects study to understand the impact of accuracy, precision, latency, and dropout-based errors on users’ performance when using shared gaze cues to identify a target among a crowd of people. We simulated varying amounts of errors and target distances and measured participants’ objective performance through their response time and error rate, and their subjective experience and cognitive load through questionnaires. We found significant differences suggesting that the simulated error levels had stronger effects on participants’ performance than target distance, with accuracy and latency having a high impact on participants’ error rate. We also observed that participants assessed their own performance as lower than it objectively was. We discuss implications for practical shared gaze applications and we present a multi-user prototype system.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
|
| Arup Kumar Ghosh; Charles E. Hughes; Pamela J. Wisniewski Circle of Trust: A New Approach to Mobile Online Safety for Teens and Parents Proceedings Article In: Proceedings of CHI Conference on Human Factors in Computing Systems, pp. 618:1-14, 2020. @inproceedings{Ghosh2020cot,
title = {Circle of Trust: A New Approach to Mobile Online Safety for Teens and Parents},
author = {Arup Kumar Ghosh and Charles E. Hughes and Pamela J. Wisniewski},
doi = {10.1145/3313831.3376747},
year = {2020},
date = {2020-04-25},
booktitle = {Proceedings of CHI Conference on Human Factors in Computing Systems},
pages = {618:1-14},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
|