2019
Susanne Schmidt; Gerd Bruder; Frank Steinicke Effects of Virtual Agent and Object Representation on Experiencing Exhibited Artifacts Journal Article In: Computers & Graphics, vol. 83, pp. 1-10, 2019. @article{Schmidt2019,
title = {Effects of Virtual Agent and Object Representation on Experiencing Exhibited Artifacts},
author = {Susanne Schmidt and Gerd Bruder and Frank Steinicke},
url = {https://sreal.ucf.edu/wp-content/uploads/2019/07/Schmidt2019.pdf},
doi = {10.1016/j.cag.2019.06.002},
year = {2019},
date = {2019-10-01},
journal = {Computers \& Graphics},
volume = {83},
pages = {1-10},
abstract = {With the emergence of speech-controlled virtual agents (VAs) in consumer devices such as Amazon’s Echo or Apple’s HomePod, we have seen a large public interest in related technologies. While most of the current interactive conversational VAs appear in the form of voice-only assistants, other representations showing, for example, a contextually related or generic humanoid body are possible. In our previous work, we analyzed the effectiveness of different forms of VAs in the context of a virtual reality (VR) exhibition space. We found positive evidence that agent embodiment induces a higher sense of spatial and social presence. The results also suggest that both embodied and thematically related audio-visual representations of VAs positively affect the overall user experience. We extend this work by further analyzing the effects of the physicality of the agent’s environment (i.e., virtual vs. real). The results of the follow-up study indicate some benefits of virtual environments, e.g., regarding user engagement and learning of visual facts. We also evaluate some interaction effects between the representations of the virtual agent and its surrounding and discuss implications on the design of exhibition spaces.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Gregory F. Welch; Gerd Bruder; Peter Squire; Ryan Schubert Anticipating Widespread Augmented Reality: Insights from the 2018 AR Visioning Workshop Technical Report University of Central Florida and Office of Naval Research no. 786, 2019. @techreport{Welch2019b,
title = {Anticipating Widespread Augmented Reality: Insights from the 2018 AR Visioning Workshop},
author = {Gregory F. Welch and Gerd Bruder and Peter Squire and Ryan Schubert},
url = {https://stars.library.ucf.edu/ucfscholar/786/
https://sreal.ucf.edu/wp-content/uploads/2019/08/Welch2019b-1.pdf},
year = {2019},
date = {2019-08-06},
issuetitle = {Faculty Scholarship and Creative Works},
number = {786},
institution = {University of Central Florida and Office of Naval Research},
abstract = {In August of 2018 a group of academic, government, and industry experts in the field of Augmented Reality gathered for four days to consider potential technological and societal issues and opportunities that could accompany a future where AR is pervasive in location and duration of use. This report is intended to summarize some of the most novel and potentially impactful insights and opportunities identified by the group.
Our target audience includes AR researchers, government leaders, and thought leaders in general. It is our intent to share some compelling technological and societal questions that we believe are unique to AR, and to engender new thinking about the potentially impactful synergies associated with the convergence of AR and some other conventionally distinct areas of research.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Kangsoo Kim; Ryan Schubert; Jason Hochreiter; Gerd Bruder; Gregory Welch Blowing in the Wind: Increasing Social Presence with a Virtual Human via Environmental Airflow Interaction in Mixed Reality Journal Article In: Computers & Graphics, vol. 83, no. October 2019, pp. 23-32, 2019. @article{Kim2019blow,
title = {Blowing in the Wind: Increasing Social Presence with a Virtual Human via Environmental Airflow Interaction in Mixed Reality},
author = {Kangsoo Kim and Ryan Schubert and Jason Hochreiter and Gerd Bruder and Gregory Welch},
url = {https://sreal.ucf.edu/wp-content/uploads/2019/06/ELSEVIER_C_G2019_Special_BlowWindinMR_ICAT_EGVE2018_20190606_reduced.pdf},
doi = {10.1016/j.cag.2019.06.006},
year = {2019},
date = {2019-07-05},
journal = {Computers \& Graphics},
volume = {83},
number = {October 2019},
pages = {23-32},
abstract = {In this paper, we describe two human-subject studies in which we explored and investigated the effects of subtle multimodal interaction on social presence with a virtual human (VH) in mixed reality (MR). In the studies, participants interacted with a VH, which was co-located with them across a table, with two different platforms: a projection based MR environment and an optical see-through head-mounted display (OST-HMD) based MR environment. While the two studies were not intended to be directly comparable, the second study with an OST-HMD was carefully designed based on the insights and lessons learned from the first projection-based study. For both studies, we compared two levels of gradually increased multimodal interaction: (i) virtual objects being affected by real airflow (e.g., as commonly experienced with fans during warm weather), and (ii) a VH showing awareness of this airflow. We hypothesized that our two levels of treatment would increase the sense of being together with the VH gradually, i.e., participants would report higher social presence with airflow influence than without it, and the social presence would be even higher when the VH showed awareness of the airflow. We observed an increased social presence in the second study when both physical–virtual interaction via airflow and VH awareness behaviors were present, but we observed no clear difference in participant-reported social presence with the VH in the first study. As the considered environmental factors are incidental to the direct interaction with the real human, i.e., they are not significant or necessary for the interaction task, they can provide a reasonably generalizable approach to increase social presence in HMD-based MR environments beyond the specific scenario and environment described here.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Nahal Norouzi; Luke Bölling; Gerd Bruder; Gregory F. Welch Augmented Rotations in Virtual Reality for Users with a Reduced Range of Head Movement Journal Article In: Journal of Rehabilitation and Assistive Technologies Engineering, vol. 6, pp. 1-9, 2019. @article{Norouzi2019c,
title = {Augmented Rotations in Virtual Reality for Users with a Reduced Range of Head Movement},
author = {Nahal Norouzi and Luke Bölling and Gerd Bruder and Gregory F. Welch},
url = {https://sreal.ucf.edu/wp-content/uploads/2019/05/RATE2019_AugmentedRotations.pdf},
doi = {10.1177/2055668319841309},
year = {2019},
date = {2019-05-21},
journal = {Journal of Rehabilitation and Assistive Technologies Engineering},
volume = {6},
pages = {1-9},
abstract = {Introduction: A large body of research in the field of virtual reality (VR) is focused on making user interfaces more natural and intuitive by leveraging natural body movements to explore a virtual environment. For example, head-tracked user interfaces allow users to naturally look around a virtual space by moving their head. However, such approaches may not be appropriate for users with temporary or permanent limitations of their head movement.
Methods: In this paper, we present techniques that allow these users to get virtual benefits from a reduced range of physical movements. Specifically, we describe two techniques that augment virtual rotations relative to physical movement thresholds.
Results: We describe how each of the two techniques can be implemented with either a head tracker or an eye tracker, e.g., in cases when no physical head rotations are possible.
Conclusions: We discuss their differences and limitations and we provide guidelines for the practical use of such augmented user interfaces.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Salam Daher; Jason Hochreiter; Nahal Norouzi; Ryan Schubert; Gerd Bruder; Laura Gonzalez; Mindi Anderson; Desiree Diaz; Juan Cendan; Greg Welch [POSTER] Matching vs. Non-Matching Visuals and Shape for Embodied Virtual Healthcare Agents Proceedings Article In: Proceedings of IEEE Virtual Reality (VR), 2019. @inproceedings{daher2019matching,
title = {[POSTER] Matching vs. Non-Matching Visuals and Shape for Embodied Virtual Healthcare Agents},
author = {Salam Daher and Jason Hochreiter and Nahal Norouzi and Ryan Schubert and Gerd Bruder and Laura Gonzalez and Mindi Anderson and Desiree Diaz and Juan Cendan and Greg Welch},
url = {https://sreal.ucf.edu/wp-content/uploads/2019/03/IEEEVR2019_Poster_PVChildStudy.pdf},
year = {2019},
date = {2019-03-27},
booktitle = {Proceedings of IEEE Virtual Reality (VR)},
abstract = {Embodied virtual agents serving as patient simulators are widely used in medical training scenarios, ranging from physical patients to virtual patients presented via virtual and augmented reality technologies. Physical-virtual patients are a hybrid solution that combines the benefits of dynamic visuals integrated into a human-shaped physical form that can also present other cues, such as pulse, breathing sounds, and temperature. Sometimes in simulation the visuals and shape do not match. We carried out a human-participant study employing graduate nursing students in pediatric patient simulations comprising conditions associated with matching/non-matching of the visuals and shape.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Myungho Lee; Gerd Bruder; Greg Welch The Virtual Pole: Exploring Human Responses to Fear of Heights in Immersive Virtual Environments Journal Article In: Journal of Virtual Reality and Broadcasting, vol. 14(2017), no. 6, 2019, ISSN: 1860-2037. @article{Lee2018b,
title = {The Virtual Pole: Exploring Human Responses to Fear of Heights in Immersive Virtual Environments},
author = {Myungho Lee and Gerd Bruder and Greg Welch},
url = {https://sreal.ucf.edu/wp-content/uploads/2019/06/Lee2019b.pdf},
doi = {10.20385/1860-2037/14.2017.6},
issn = {1860-2037},
year = {2019},
date = {2019-02-04},
journal = {Journal of Virtual Reality and Broadcasting},
volume = {14(2017)},
number = {6},
abstract = {Measuring how effective immersive virtual environments (IVEs) are in reproducing sensations as in similar situations in the real world is an important task for many application fields. In this paper, we present an experimental setup which we call the virtual pole, where we evaluated human responses to fear of heights. We conducted experiments where we analyzed correlations between subjective and physiological anxiety measures and the participant’s view direction. Our results show that the view direction plays an important role in subjective and physiological anxiety in an IVE due to the limited field of view, and that the subjective and physiological anxiety measures monotonically increase with the increasing height. In addition, we also found that participants recollected the virtual content they saw at the top more accurately compared to that at the medium height. We discuss the results and provide guidelines for simulations aimed at evoking fear of heights responses in IVEs.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Nahal Norouzi; Gerd Bruder; Brandon Belna; Stefanie Mutter; Damla Turgut; Greg Welch A Systematic Review of the Convergence of Augmented Reality, Intelligent Virtual Agents, and the Internet of Things Book Chapter In: Artificial Intelligence in IoT, pp. 37, Springer, 2019, ISBN: 978-3-030-04109-0. @inbook{Norouzi2019,
title = {A Systematic Review of the Convergence of Augmented Reality, Intelligent Virtual Agents, and the Internet of Things},
author = {Nahal Norouzi and Gerd Bruder and Brandon Belna and Stefanie Mutter and Damla Turgut and Greg Welch},
url = {https://sreal.ucf.edu/wp-content/uploads/2019/05/Norouzi-2019-IoT-AR-Final.pdf},
doi = {10.1007/978-3-030-04110-6_1},
isbn = {978-3-030-04109-0},
year = {2019},
date = {2019-01-10},
booktitle = {Artificial Intelligence in IoT},
pages = {37},
publisher = {Springer},
abstract = {In recent years we are beginning to see the convergence of three distinct research fields: Augmented Reality (AR), Intelligent Virtual Agents (IVAs), and the Internet of Things (IoT). Each of these has been classified as a disruptive technology for our society. Since their inception, the advancement of knowledge and development of technologies and systems in these fields was traditionally performed with limited input from each other. However, over the last years, we have seen research prototypes and commercial products being developed that cross the boundaries between these distinct fields to leverage their collective strengths. In this review paper, we survey the body of literature published at the intersections between each two of these fields, and we discuss a vision for the nexus of all three technologies.},
keywords = {},
pubstate = {published},
tppubtype = {inbook}
}
2018
Omar Janeh; Gerd Bruder; Frank Steinicke; Alessandro Gulberti; Monika Poetter-Nerger Analyses of Gait Parameters of Younger and Older Adults during (Non-)Isometric Virtual Walking Journal Article In: IEEE Transactions on Visualization and Computer Graphics (TVCG), vol. 24, no. 10, pp. 2663-2674, 2018. @article{Janeh2018a,
title = {Analyses of Gait Parameters of Younger and Older Adults during (Non-)Isometric Virtual Walking},
author = {Omar Janeh and Gerd Bruder and Frank Steinicke and Alessandro Gulberti and Monika Poetter-Nerger},
url = {https://sreal.ucf.edu/wp-content/uploads/2019/01/Janeh2018a.pdf},
doi = {10.1109/TVCG.2017.2771520},
year = {2018},
date = {2018-12-01},
journal = {IEEE Transactions on Visualization and Computer Graphics (TVCG)},
volume = {24},
number = {10},
pages = {2663-2674},
abstract = {Understanding real walking in virtual environments (VEs) is important for immersive experiences, allowing users to move through VEs in the most natural way. Previous studies have shown that basic implementations of real walking in virtual spaces, in which head-tracked movements are mapped isometrically to a VE, are not estimated as entirely natural. Instead, users estimate a virtual walking velocity as more natural when it is slightly increased compared to the user's physical locomotion. However, these findings have been reported in most cases only for young persons, e.g., students, whereas older adults are clearly underrepresented in such studies. Recently, virtual reality (VR) has received significant public and media attention. Therefore, it appears reasonable to assume that people at different ages will have access to VR, and might use this technology more and more in application scenarios such as rehabilitation or training. To better understand how people at different ages walk and perceive locomotion in VR, we have performed a study to investigate the effects of (non-)isometric mappings between physical movements and virtual motions in the VE on the walking biomechanics across generations, i.e., younger and older adults. Three primary domains (pace, base of support and phase) of spatio-temporal parameters were identified to evaluate gait performance. The results show that the older adults walked very similar in the real and VE in the pace and phasic domains, which differs from results found in younger adults. In contrast, the results indicate differences in terms of base of support domain parameters for both groups while walking within a VE and the real world. For non-isometric mappings, we found in both younger and older adults an increased divergence of gait parameters in all domains correlating with the up- or down-scaled velocity of visual self-motion feedback. 
The results provide important insights into the design of future VR applications for older adults in domains ranging from medicine and psychology to rehabilitation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Niels Christian Nilsson; Tabitha Peck; Gerd Bruder; Eric Hodgson; Stefania Serafin; Mary Whitton; Frank Steinicke; Evan Suma Rosenberg 15 Years of Research on Redirected Walking in Immersive Virtual Environments Journal Article In: IEEE Computer Graphics and Applications, vol. 38, no. 2, pp. 44-56, 2018. @article{Nilsson2018a,
title = {15 Years of Research on Redirected Walking in Immersive Virtual Environments},
author = {Niels Christian Nilsson and Tabitha Peck and Gerd Bruder and Eric Hodgson and Stefania Serafin and Mary Whitton and Frank Steinicke and Evan Suma Rosenberg},
year = {2018},
date = {2018-12-01},
journal = {IEEE Computer Graphics and Applications},
volume = {38},
number = {2},
pages = {44-56},
abstract = {Virtual reality users wearing head-mounted displays can experience the illusion of walking in any direction for infinite distance while, in reality, they are walking a curvilinear path in physical space. This is accomplished by introducing unnoticeable rotations to the virtual environment-a technique called redirected walking. This paper gives an overview of the research that has been performed since redirected walking was first practically demonstrated 15 years ago.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Myungho Lee; Nahal Norouzi; Gerd Bruder; Pamela J. Wisniewski; Gregory F. Welch The Physical-virtual Table: Exploring the Effects of a Virtual Human's Physical Influence on Social Interaction Proceedings Article In: Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, pp. 25:1–25:11, ACM, New York, NY, USA, 2018, ISBN: 978-1-4503-6086-9, (Best Paper Award). @inproceedings{Lee2018ac,
title = {The Physical-virtual Table: Exploring the Effects of a Virtual Human's Physical Influence on Social Interaction},
author = {Myungho Lee and Nahal Norouzi and Gerd Bruder and Pamela J. Wisniewski and Gregory F. Welch},
url = {https://sreal.ucf.edu/wp-content/uploads/2018/11/Lee2018ab.pdf},
doi = {10.1145/3281505.3281533},
isbn = {978-1-4503-6086-9},
year = {2018},
date = {2018-11-28},
booktitle = {Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology},
journal = {Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology},
pages = {25:1--25:11},
publisher = {ACM},
address = {New York, NY, USA},
series = {VRST '18},
abstract = {In this paper, we investigate the effects of the physical influence of a virtual human (VH) in the context of face-to-face interaction in augmented reality (AR). In our study, participants played a tabletop game with a VH, in which each player takes a turn and moves their own token along the designated spots on the shared table. We compared two conditions as follows: the VH in the virtual condition moves a virtual token that can only be seen through AR glasses, while the VH in the physical condition moves a physical token as the participants do; therefore the VH’s token can be seen even in the periphery of the AR glasses. For the physical condition, we designed an actuator system underneath the table. The actuator moves a magnet under the table which then moves the VH’s physical token over the surface of the table. Our results indicate that participants felt higher co-presence with the VH in the physical condition, and participants assessed the VH as a more physical entity compared to the VH in the virtual condition. We further observed transference effects when participants attributed the VH’s ability to move physical objects to other elements in the real world. Also, the VH’s physical influence improved participants’ overall experience with the VH. We discuss potential explanations for the findings and implications for future shared AR tabletop setups.},
note = {Best Paper Award},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Kangsoo Kim; Gerd Bruder; Gregory F. Welch Blowing in the Wind: Increasing Copresence with a Virtual Human via Airflow Influence in Augmented Reality Proceedings Article In: Proceedings of the International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments (ICAT-EGVE 2018), Limassol, Cyprus, November 7–9, 2018, pp. 183-190, 2018, (Honorable Mention Award). @inproceedings{Kim2018c,
title = {Blowing in the Wind: Increasing Copresence with a Virtual Human via Airflow Influence in Augmented Reality},
author = {Kangsoo Kim and Gerd Bruder and Gregory F. Welch},
url = {https://sreal.ucf.edu/wp-content/uploads/2018/10/Kim_Airflow_ICAT_EGVE2018.pdf},
doi = {10.2312/egve.20181332},
year = {2018},
date = {2018-11-07},
booktitle = {Proceedings of the International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments (ICAT-EGVE 2018), Limassol, Cyprus, November 7–9, 2018},
pages = {183-190},
abstract = {In a social context where two or more interlocutors interact with each other in the same space, one's sense of copresence with the others is an important factor for the quality of communication and engagement in the interaction. Although augmented reality (AR) technology enables the superposition of virtual humans (VHs) as interlocutors in the real world, the resulting sense of copresence is usually far lower than with a real human interlocutor.
In this paper, we describe a human-subject study in which we explored and investigated the effects that subtle multi-modal interaction between the virtual environment and the real world, where a VH and human participants were co-located, can have on copresence. We compared two levels of gradually increased multi-modal interaction: (i) virtual objects being affected by real airflow as commonly experienced with fans in summer, and (ii) a VH showing awareness of this airflow. We chose airflow as one example of an environmental factor that can noticeably affect both the real and virtual worlds, and also cause subtle responses in interlocutors. We hypothesized that our two levels of treatment would increase the sense of being together with the VH gradually, i.e., participants would report higher copresence with airflow influence than without it, and the copresence would be even higher when the VH shows awareness of the airflow. The statistical analysis with the participant-reported copresence scores showed that there was an improvement of the perceived copresence with the VH when both the physical–virtual interactivity via airflow and the VH's awareness behaviors were present together. As the considered environmental factors are directed at the VH, i.e., they are not part of the direct interaction with the real human, they can provide a reasonably generalizable approach to support copresence in AR beyond the particular use case in the present experiment.},
note = {Honorable Mention Award},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
In a social context where two or more interlocutors interact with each other in the same space, one's sense of copresence with the others is an important factor for the quality of communication and engagement in the interaction. Although augmented reality (AR) technology enables the superposition of virtual humans (VHs) as interlocutors in the real world, the resulting sense of copresence is usually far lower than with a real human interlocutor.
In this paper, we describe a human-subject study in which we explored and investigated the effects that subtle multi-modal interaction between the virtual environment and the real world, where a VH and human participants were co-located, can have on copresence. We compared two levels of gradually increased multi-modal interaction: (i) virtual objects being affected by real airflow as commonly experienced with fans in summer, and (ii) a VH showing awareness of this airflow. We chose airflow as one example of an environmental factor that can noticeably affect both the real and virtual worlds, and also cause subtle responses in interlocutors. We hypothesized that our two levels of treatment would increase the sense of being together with the VH gradually, i.e., participants would report higher copresence with airflow influence than without it, and the copresence would be even higher when the VH shows awareness of the airflow. The statistical analysis with the participant-reported copresence scores showed that there was an improvement of the perceived copresence with the VH when both the physical–virtual interactivity via airflow and the VH's awareness behaviors were present together. As the considered environmental factors are directed at the VH, i.e., they are not part of the direct interaction with the real human, they can provide a reasonably generalizable approach to support copresence in AR beyond the particular use case in the present experiment. |
| Greg Welch; Tianren Wang; Gary Bishop; Gerd Bruder A Novel Approach for Cooperative Motion Capture (COMOCAP) Proceedings Article In: Bruder, G.; Cobb, S.; Yoshimoto, S. (Ed.): ICAT-EGVE 2018 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, The Eurographics Association, Limassol, Cyprus, 2018. @inproceedings{Welch2018ab,
title = {A Novel Approach for Cooperative Motion Capture (COMOCAP)},
author = {Greg Welch and Tianren Wang and Gary Bishop and Gerd Bruder},
editor = {G. Bruder and S. Cobb and S. Yoshimoto},
url = {https://sreal.ucf.edu/wp-content/uploads/2018/11/Welch2018ab.pdf},
year = {2018},
date = {2018-11-07},
booktitle = {ICAT-EGVE 2018 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
publisher = {The Eurographics Association},
address = {Limassol, Cyprus},
abstract = {Conventional motion capture (MOCAP) systems, e.g., optical systems, typically perform well for one person, but less so for multiple people in close proximity. Measurement quality can decline with distance, and even drop out as source/sensor components are occluded by nearby people. Furthermore, conventional optical MOCAP systems estimate body posture using a global estimation approach employing cameras that are fixed in the environment, typically at a distance such that one person or object can easily occlude another, and the relative error between tracked objects in the scene can increase as they move farther from the cameras and/or closer to each other. Body-relative tracking approaches use body-worn sensors and/or sources to track limbs with respect to the head or torso, for example, taking advantage of the proximity of limbs to the body. We present a novel approach to MOCAP that combines and extends conventional global and body-relative approaches by distributing both sensing and active signaling over each person’s body to facilitate body-relative (intra-user) MOCAP for one person and body-body (inter-user) MOCAP for multiple people, in an approach we call cooperative motion capture (COMOCAP). We support the validity of the approach with simulation results from a system comprised of acoustic transceivers (receiver-transmitter units) that provide inter-transceiver range measurements. Optical, magnetic, and other types of transceivers could also be used. Our simulations demonstrate the advantages of this approach to effectively improve accuracy and robustness to occlusions in situations of close proximity between multiple persons.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Conventional motion capture (MOCAP) systems, e.g., optical systems, typically perform well for one person, but less so for multiple people in close proximity. Measurement quality can decline with distance, and even drop out as source/sensor components are occluded by nearby people. Furthermore, conventional optical MOCAP systems estimate body posture using a global estimation approach employing cameras that are fixed in the environment, typically at a distance such that one person or object can easily occlude another, and the relative error between tracked objects in the scene can increase as they move farther from the cameras and/or closer to each other. Body-relative tracking approaches use body-worn sensors and/or sources to track limbs with respect to the head or torso, for example, taking advantage of the proximity of limbs to the body. We present a novel approach to MOCAP that combines and extends conventional global and body-relative approaches by distributing both sensing and active signaling over each person’s body to facilitate body-relative (intra-user) MOCAP for one person and body-body (inter-user) MOCAP for multiple people, in an approach we call cooperative motion capture (COMOCAP). We support the validity of the approach with simulation results from a system comprised of acoustic transceivers (receiver-transmitter units) that provide inter-transceiver range measurements. Optical, magnetic, and other types of transceivers could also be used. Our simulations demonstrate the advantages of this approach to effectively improve accuracy and robustness to occlusions in situations of close proximity between multiple persons. |
| Susanne Schmidt; Gerd Bruder; Frank Steinicke Effects of Embodiment on Generic and Content-Specific Intelligent Virtual Agents as Exhibition Guides Proceedings Article In: Proceedings of the International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments (ICAT-EGVE 2018), Limassol, Cyprus, November 7–9, 2018, pp. 13-20, 2018, (Best Paper Award). @inproceedings{Schmidt2018a,
title = {Effects of Embodiment on Generic and Content-Specific Intelligent Virtual Agents as Exhibition Guides},
author = {Susanne Schmidt and Gerd Bruder and Frank Steinicke},
url = {https://sreal.ucf.edu/wp-content/uploads/2019/01/Schmidt2018a.pdf},
doi = {10.2312/egve.20181309},
year = {2018},
date = {2018-11-07},
booktitle = {Proceedings of the International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments (ICAT-EGVE 2018), Limassol, Cyprus, November 7–9, 2018},
pages = {13-20},
abstract = {Intelligent Virtual Agents (IVAs) have received enormous attention in recent years due to significant improvements in voice communication technologies and the convergence of different research fields such as Machine Learning, Internet of Things, and Virtual Reality (VR). Interactive conversational IVAs can appear in different forms such as voice-only or with embodied audio-visual representations showing, for example, human-like contextually related or generic three-dimensional bodies. In this paper, we analyzed the benefits of different forms of virtual agents in the context of a VR exhibition space. Our results suggest positive evidence showing large benefits of both embodied and thematically related audio-visual representations of IVAs. We discuss implications and suggestions for content developers to design believable virtual agents in the context of such installations.},
note = {Best Paper Award},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Intelligent Virtual Agents (IVAs) have received enormous attention in recent years due to significant improvements in voice communication technologies and the convergence of different research fields such as Machine Learning, Internet of Things, and Virtual Reality (VR). Interactive conversational IVAs can appear in different forms such as voice-only or with embodied audio-visual representations showing, for example, human-like contextually related or generic three-dimensional bodies. In this paper, we analyzed the benefits of different forms of virtual agents in the context of a VR exhibition space. Our results suggest positive evidence showing large benefits of both embodied and thematically related audio-visual representations of IVAs. We discuss implications and suggestions for content developers to design believable virtual agents in the context of such installations. |
| Ryan Schubert; Gerd Bruder; Greg Welch Adaptive filtering of physical-virtual artifacts for synthetic animatronics Proceedings Article In: Bruder, G.; Cobb, S.; Yoshimoto, S. (Ed.): ICAT-EGVE 2018 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, Limassol, Cyprus, November 7-9 2018, 2018. @inproceedings{Schubert2018,
title = {Adaptive filtering of physical-virtual artifacts for synthetic animatronics},
author = {Ryan Schubert and Gerd Bruder and Greg Welch},
editor = {G. Bruder and S. Cobb and S. Yoshimoto},
url = {https://sreal.ucf.edu/wp-content/uploads/2019/01/Schubert2018.pdf},
year = {2018},
date = {2018-11-07},
booktitle = {ICAT-EGVE 2018 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, Limassol, Cyprus, November 7-9 2018},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
|
| Nahal Norouzi; Kangsoo Kim; Jason Hochreiter; Myungho Lee; Salam Daher; Gerd Bruder; Gregory Welch A Systematic Survey of 15 Years of User Studies Published in the Intelligent Virtual Agents Conference Proceedings Article In: IVA '18 Proceedings of the 18th International Conference on Intelligent Virtual Agents, pp. 17-22, ACM ACM, 2018, ISBN: 978-1-4503-6013-5/18/11. @inproceedings{Norouzi2018c,
title = {A Systematic Survey of 15 Years of User Studies Published in the Intelligent Virtual Agents Conference},
author = {Nahal Norouzi and Kangsoo Kim and Jason Hochreiter and Myungho Lee and Salam Daher and Gerd Bruder and Gregory Welch },
url = {https://sreal.ucf.edu/wp-content/uploads/2018/11/p17-norouzi-2.pdf},
doi = {10.1145/3267851.3267901},
isbn = {978-1-4503-6013-5/18/11},
year = {2018},
date = {2018-11-05},
booktitle = {IVA '18 Proceedings of the 18th International Conference on Intelligent Virtual Agents},
pages = {17-22},
publisher = {ACM},
organization = {ACM},
abstract = {The field of intelligent virtual agents (IVAs) has evolved immensely over the past 15 years, introducing new application opportunities in areas such as training, health care, and virtual assistants. In this survey paper, we provide a systematic review of the most influential user studies published in the IVA conference from 2001 to 2015 focusing on IVA development, human perception, and interactions. A total of 247 papers with 276 user studies have been classified and reviewed based on their contributions and impact. We identify the different areas of research and provide a summary of the papers with the highest impact. With the trends of past user studies and the current state of technology, we provide insights into future trends and research challenges.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
The field of intelligent virtual agents (IVAs) has evolved immensely over the past 15 years, introducing new application opportunities in areas such as training, health care, and virtual assistants. In this survey paper, we provide a systematic review of the most influential user studies published in the IVA conference from 2001 to 2015 focusing on IVA development, human perception, and interactions. A total of 247 papers with 276 user studies have been classified and reviewed based on their contributions and impact. We identify the different areas of research and provide a summary of the papers with the highest impact. With the trends of past user studies and the current state of technology, we provide insights into future trends and research challenges. |
| Salam Daher; Jason Hochreiter; Nahal Norouzi; Laura Gonzalez; Gerd Bruder; Greg Welch Physical-Virtual Agents for Healthcare Simulation Proceedings Article In: Proceedings of IVA 2018, November 5-8, 2018, Sydney, NSW, Australia, ACM, 2018. @inproceedings{daher2018physical,
title = {Physical-Virtual Agents for Healthcare Simulation},
author = {Salam Daher and Jason Hochreiter and Nahal Norouzi and Laura Gonzalez and Gerd Bruder and Greg Welch},
url = {https://sreal.ucf.edu/wp-content/uploads/2018/10/IVA2018_StrokeStudy_CameraReady_Editor_20180911_1608.pdf},
year = {2018},
date = {2018-11-04},
booktitle = {Proceedings of IVA 2018, November 5-8, 2018, Sydney, NSW, Australia},
publisher = {ACM},
abstract = {Conventional Intelligent Virtual Agents (IVAs) focus primarily on the visual and auditory channels for both the agent and the interacting human: the agent displays a visual appearance and speech as output, while processing the human’s verbal and non-verbal behavior as input. However, some interactions, particularly those between a patient and healthcare provider, inherently include tactile components. We introduce an Intelligent Physical-Virtual Agent (IPVA) head that occupies an appropriate physical volume; can be touched; and via human-in-the-loop control can change appearance, listen, speak, and react physiologically in response to human behavior. Compared to a traditional IVA, it provides a physical affordance, allowing for more realistic and compelling human-agent interactions. In a user study focusing on neurological assessment of a simulated patient showing stroke symptoms, we compared the IPVA head with a high-fidelity touch-aware mannequin that has a static appearance. Various measures of the human subjects indicated greater attention, affinity for, and presence with the IPVA patient, all factors that can improve healthcare training.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Conventional Intelligent Virtual Agents (IVAs) focus primarily on the visual and auditory channels for both the agent and the interacting human: the agent displays a visual appearance and speech as output, while processing the human’s verbal and non-verbal behavior as input. However, some interactions, particularly those between a patient and healthcare provider, inherently include tactile components. We introduce an Intelligent Physical-Virtual Agent (IPVA) head that occupies an appropriate physical volume; can be touched; and via human-in-the-loop control can change appearance, listen, speak, and react physiologically in response to human behavior. Compared to a traditional IVA, it provides a physical affordance, allowing for more realistic and compelling human-agent interactions. In a user study focusing on neurological assessment of a simulated patient showing stroke symptoms, we compared the IPVA head with a high-fidelity touch-aware mannequin that has a static appearance. Various measures of the human subjects indicated greater attention, affinity for, and presence with the IPVA patient, all factors that can improve healthcare training. |
| Steffen Haesler; Kangsoo Kim; Gerd Bruder; Gregory F. Welch [POSTER] Seeing is Believing: Improving the Perceived Trust in Visually Embodied Alexa in Augmented Reality Proceedings Article In: Proceedings of the 17th IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2018), Munich, Germany, October 16–20, 2018, 2018. @inproceedings{Haesler2018,
title = {[POSTER] Seeing is Believing: Improving the Perceived Trust in Visually Embodied Alexa in Augmented Reality},
author = {Steffen Haesler and Kangsoo Kim and Gerd Bruder and Gregory F. Welch},
url = {https://sreal.ucf.edu/wp-content/uploads/2018/08/Haesler2018.pdf},
doi = {10.1109/ISMAR-Adjunct.2018.00067},
year = {2018},
date = {2018-10-16},
booktitle = {Proceedings of the 17th IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2018), Munich, Germany, October 16–20, 2018},
abstract = {Voice-activated Intelligent Virtual Assistants (IVAs) such as Amazon Alexa offer a natural and realistic form of interaction that approaches the level of social interaction among real humans. The user experience with such technologies depends to a large degree on the perceived trust in and reliability of the IVA. In this poster, we explore the effects of a three-dimensional embodied representation of Amazon Alexa in Augmented Reality (AR) on the user’s perceived trust in her being able to control Internet of Things (IoT) devices in a smart home environment. We present a preliminary study and discuss the potential of positive effects in perceived trust due to the embodied representation compared to a voice-only condition.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Voice-activated Intelligent Virtual Assistants (IVAs) such as Amazon Alexa offer a natural and realistic form of interaction that approaches the level of social interaction among real humans. The user experience with such technologies depends to a large degree on the perceived trust in and reliability of the IVA. In this poster, we explore the effects of a three-dimensional embodied representation of Amazon Alexa in Augmented Reality (AR) on the user’s perceived trust in her being able to control Internet of Things (IoT) devices in a smart home environment. We present a preliminary study and discuss the potential of positive effects in perceived trust due to the embodied representation compared to a voice-only condition. |
| Kangsoo Kim; Luke Boelling; Steffen Haesler; Jeremy N. Bailenson; Gerd Bruder; Gregory F. Welch Does a Digital Assistant Need a Body? The Influence of Visual Embodiment and Social Behavior on the Perception of Intelligent Virtual Agents in AR Proceedings Article In: Proceedings of the 17th IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2018), Munich, Germany, October 16–20, 2018, 2018. @inproceedings{Kim2018a,
title = {Does a Digital Assistant Need a Body? The Influence of Visual Embodiment and Social Behavior on the Perception of Intelligent Virtual Agents in AR},
author = {Kangsoo Kim and Luke Boelling and Steffen Haesler and Jeremy N. Bailenson and Gerd Bruder and Gregory F. Welch},
url = {https://sreal.ucf.edu/wp-content/uploads/2018/08/Kim2018a.pdf},
doi = {10.1109/ISMAR.2018.00039},
year = {2018},
date = {2018-10-16},
booktitle = {Proceedings of the 17th IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2018), Munich, Germany, October 16–20, 2018},
abstract = {Intelligent Virtual Agents (IVAs) are becoming part of our everyday life, thanks to artificial intelligence technology and Internet of Things devices. For example, users can control their connected home appliances through natural voice commands to the IVA. However, most current commercial IVAs, such as Amazon Alexa, mainly focus on voice commands and voice feedback, and lack the ability to provide non-verbal cues, which are an important part of social interaction. Augmented Reality (AR) has the potential to overcome this challenge by providing a visual embodiment of the IVA.
In this paper we investigate how visual embodiment and social behaviors influence the perception of the IVA. We hypothesize that a user's confidence in an IVA's ability to perform tasks is improved when imbuing the agent with a human body and social behaviors compared to the agent solely depending on voice feedback. In other words, an agent's embodied gesture and locomotion behavior exhibiting awareness of the surrounding real world or exerting influence over the environment can improve the perceived social presence with and confidence in the agent. We present a human-subject study, in which we evaluated the hypothesis and compared different forms of IVAs with speech, gesturing, and locomotion behaviors in an interactive AR scenario. The results show support for the hypothesis with measures of confidence, trust, and social presence. We discuss implications for future developments in the field of IVAs.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Intelligent Virtual Agents (IVAs) are becoming part of our everyday life, thanks to artificial intelligence technology and Internet of Things devices. For example, users can control their connected home appliances through natural voice commands to the IVA. However, most current commercial IVAs, such as Amazon Alexa, mainly focus on voice commands and voice feedback, and lack the ability to provide non-verbal cues, which are an important part of social interaction. Augmented Reality (AR) has the potential to overcome this challenge by providing a visual embodiment of the IVA.
In this paper we investigate how visual embodiment and social behaviors influence the perception of the IVA. We hypothesize that a user's confidence in an IVA's ability to perform tasks is improved when imbuing the agent with a human body and social behaviors compared to the agent solely depending on voice feedback. In other words, an agent's embodied gesture and locomotion behavior exhibiting awareness of the surrounding real world or exerting influence over the environment can improve the perceived social presence with and confidence in the agent. We present a human-subject study, in which we evaluated the hypothesis and compared different forms of IVAs with speech, gesturing, and locomotion behaviors in an interactive AR scenario. The results show support for the hypothesis with measures of confidence, trust, and social presence. We discuss implications for future developments in the field of IVAs. |
| Sungchul Jung; Gerd Bruder; Pamela Wisniewski; Chistian Sandor; Charles E. Hughes Over My Hand: Using a Personalized Hand in VR to Improve Object Size Estimation, Body Ownership, and Presence Proceedings Article In: Proceedings of the 6th ACM Symposium on Spatial User Interaction (SUI 2018), Berlin, Germany, October 13-14, 2018, 2018, (Best Paper Award). @inproceedings{Jung2018b,
title = {Over My Hand: Using a Personalized Hand in VR to Improve Object Size Estimation, Body Ownership, and Presence},
author = {Sungchul Jung and Gerd Bruder and Pamela Wisniewski and Chistian Sandor and Charles E. Hughes},
year = {2018},
date = {2018-10-13},
booktitle = {Proceedings of the 6th ACM Symposium on Spatial User Interaction (SUI 2018), Berlin, Germany, October 13-14, 2018},
note = {Best Paper Award},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
|
| Kangsoo Kim; Mark Billinghurst; Gerd Bruder; Henry Been-Lirn Duh; Gregory F. Welch Revisiting Trends in Augmented Reality Research: A Review of the 2nd Decade of ISMAR (2008–2017) Journal Article In: IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 11, pp. 2947-2962, 2018, ISSN: 1077-2626. @article{Kim2018b,
title = {Revisiting Trends in Augmented Reality Research: A Review of the 2nd Decade of ISMAR (2008–2017)},
author = {Kangsoo Kim and Mark Billinghurst and Gerd Bruder and Henry Been-Lirn Duh and Gregory F. Welch},
url = {https://sreal.ucf.edu/wp-content/uploads/2018/08/Kim2018b.pdf},
doi = {10.1109/TVCG.2018.2868591},
issn = {1077-2626},
year = {2018},
date = {2018-09-06},
journal = {IEEE Transactions on Visualization and Computer Graphics},
volume = {24},
number = {11},
pages = {2947-2962},
abstract = {In 2008, Zhou et al. presented a survey paper summarizing the previous ten years of ISMAR publications, which provided invaluable insights into the research challenges and trends associated with that time period. Ten years later, we review the research that has been presented at ISMAR conferences since the survey of Zhou et al., at a time when both academia and the AR industry are enjoying dramatic technological changes. Here we consider the research results and trends of the last decade of ISMAR by carefully reviewing the ISMAR publications from the period of 2008–2017, in the context of the first ten years. The numbers of papers for different research topics and their impacts by citations were analyzed while reviewing them—which reveals that there is a sharp increase in AR evaluation and rendering research. Based on this review we offer some observations related to potential future research areas or trends, which could be helpful to AR researchers and industry members looking ahead.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
In 2008, Zhou et al. presented a survey paper summarizing the previous ten years of ISMAR publications, which provided invaluable insights into the research challenges and trends associated with that time period. Ten years later, we review the research that has been presented at ISMAR conferences since the survey of Zhou et al., at a time when both academia and the AR industry are enjoying dramatic technological changes. Here we consider the research results and trends of the last decade of ISMAR by carefully reviewing the ISMAR publications from the period of 2008–2017, in the context of the first ten years. The numbers of papers for different research topics and their impacts by citations were analyzed while reviewing them—which reveals that there is a sharp increase in AR evaluation and rendering research. Based on this review we offer some observations related to potential future research areas or trends, which could be helpful to AR researchers and industry members looking ahead. |