2013
Gerd Bruder; Frank Steinicke; Wolfgang Stürzlinger To Touch or not to Touch? Comparing 2D Touch and 3D Mid-Air Interaction on Stereoscopic Tabletop Surfaces Proceedings Article In: Proceedings of the ACM Symposium on Spatial User Interaction (SUI), pp. 1–8, 2013.
@inproceedings{BSS13a,
title = {To Touch or not to Touch? Comparing 2D Touch and 3D Mid-Air Interaction on Stereoscopic Tabletop Surfaces},
author = { Gerd Bruder and Frank Steinicke and Wolfgang Stürzlinger},
url = {https://sreal.ucf.edu/wp-content/uploads/2017/02/BSS13a-optimized.pdf},
year = {2013},
date = {2013-01-01},
booktitle = {Proceedings of the ACM Symposium on Spatial User Interaction (SUI)},
pages = {1--8},
abstract = {Recent developments in touch and display technologies have laid the groundwork to combine touch-sensitive display systems with stereoscopic three-dimensional (3D) display. Although this combination provides a compelling user experience, interaction with objects stereoscopically displayed in front of the screen poses some fundamental challenges: Traditionally, touch-sensitive surfaces capture only direct contacts such that the user has to penetrate the visually perceived object to touch the 2D surface behind the object. Conversely, recent technologies support capturing finger positions in front of the display, enabling users to interact with intangible objects in mid-air 3D space. In this paper we perform a comparison between such 2D touch and 3D mid-air interactions in a Fitts' Law experiment for objects with varying stereoscopic parallax. The results show that the 2D touch technique is more efficient close to the screen, whereas for targets further away from the screen, 3D selection outperforms 2D touch. Based on the results, we present implications for the design and development of future touch-sensitive interfaces for stereoscopic displays.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
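The Fitts' Law methodology behind this and several of the later selection studies in this list reduces to two standard formulas. Below is a minimal sketch following the usual ISO 9241-9 conventions (variable names and trial data are illustrative, not the authors' analysis code):

```python
import math
import statistics

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty (bits)."""
    return math.log2(distance / width + 1)

def effective_throughput(distance, endpoint_offsets, movement_times):
    """Effective throughput (bits/s) per ISO 9241-9.

    endpoint_offsets: per-trial selection offsets along the task axis (m),
    movement_times: per-trial movement times (s).
    """
    width_e = 4.133 * statistics.stdev(endpoint_offsets)  # effective width
    id_e = math.log2(distance / width_e + 1)              # effective ID
    return id_e / statistics.mean(movement_times)

# Hypothetical trials for one target-distance condition (0.30 m).
print(effective_throughput(0.30,
                           [0.004, -0.006, 0.002, 0.005, -0.003],
                           [0.62, 0.71, 0.58, 0.66, 0.69]))
```

Comparing effective throughput across the 2D touch and 3D mid-air conditions is what allows efficiency claims like the ones above to be made independently of each technique's speed-accuracy tradeoff.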
Rüdiger Beimler; Gerd Bruder; Frank Steinicke SmurVEbox: A Smart Multi-User Real-Time Virtual Environment for Generating Character Animations Proceedings Article In: Proceedings of the Virtual Reality International Conference (VRIC), pp. 1–7, 2013.
@inproceedings{BBS13a,
title = {SmurVEbox: A Smart Multi-User Real-Time Virtual Environment for Generating Character Animations},
author = { Rüdiger Beimler and Gerd Bruder and Frank Steinicke},
url = {https://sreal.ucf.edu/wp-content/uploads/2017/02/BBS13a.pdf},
year = {2013},
date = {2013-01-01},
booktitle = {Proceedings of the Virtual Reality International Conference (VRIC)},
pages = {1--7},
abstract = {Animating virtual characters is a complex task that requires professional animators and performers, expensive motion capture systems, or considerable amounts of time to generate convincing results. In this paper we introduce the SmurVEbox, which is a cost-effective animation system that encompasses many important aspects of animating virtual characters by providing a novel shared user experience. SmurVEbox is a collaborative environment for generating character animations in real time, which has the potential to enhance the computer animation process. Our setup allows animators and performers to cooperate on the same virtual animation sequence in real time. Performers are able to communicate with the animator in the real space while simultaneously perceiving the effects of their actions on the virtual character in the virtual space. The animator can refine actions of a performer in real time so that both collaborate on the same animation of a virtual character. We describe the setup and present a simple application.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Gerd Bruder; Frank Steinicke Implementing Walking in Virtual Environments Proceedings Article In: Human Walking in Virtual Environments: Perception, Technology, and Applications, pp. 221–240, Springer, 2013.
@inproceedings{BS13a,
title = {Implementing Walking in Virtual Environments},
author = { Gerd Bruder and Frank Steinicke},
url = {https://sreal.ucf.edu/wp-content/uploads/2017/02/BS13a-optimized.pdf},
year = {2013},
date = {2013-01-01},
booktitle = {Human Walking in Virtual Environments: Perception, Technology, and Applications},
pages = {221--240},
publisher = {Springer},
abstract = {The previous chapter described locomotion devices that prevent displacements in the real world while a user is walking. In this chapter we explain different strategies that allow users to actually move through the real world while these physical displacements are mapped to motions of the camera in the virtual environment (VE) in order to support unlimited omnidirectional walking. Transferring a user's head movements from a physical workspace to a virtual scene is an essential component of any immersive VE. This chapter describes the pipeline of transformations from tracked real-world coordinates to coordinates of the VE. The chapter starts with an overview of different approaches for virtual walking, and gives an introduction to tracking volumes, coordinate systems and transformations required to set up a workspace for implementing virtual walking. The chapter continues with the traditional isometric mapping found in most immersive VEs, with special emphasis on combining walking in a restricted interaction volume via reference coordinates with virtual traveling metaphors (e.g., flying). Advanced mappings are then introduced with user-centric coordinates, which provide a basis to guide users on different paths in the physical workspace than what they experience in the virtual world.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
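The transformation pipeline described in this chapter is, at its core, a composition of homogeneous 4x4 transforms. A minimal sketch of the isometric mapping with a movable reference frame (names and numbers are illustrative, not taken from the chapter):

```python
import numpy as np

def translation(x, y, z):
    """Homogeneous 4x4 translation matrix."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Tracked head pose in lab (tracker) coordinates; in practice a full 4x4
# rotation+translation matrix delivered by the tracking system.
head_in_lab = translation(1.2, 1.7, 0.4)

# Registration of the tracked workspace in virtual-world coordinates.
# A virtual traveling metaphor (e.g., flying) moves this reference frame
# rather than the user, combining real walking with virtual travel.
lab_in_world = translation(50.0, 0.0, 10.0)

# Isometric mapping: the virtual camera pose is the tracked pose expressed
# in world coordinates via the reference frame.
camera_in_world = lab_in_world @ head_in_lab
print(camera_in_world[:3, 3])  # virtual camera position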
Rüdiger Beimler; Gerd Bruder; Frank Steinicke Immersive Guided Tours for Virtual Tourism through 3D City Models Proceedings Article In: Proceedings of the GI Workshop on Virtual and Augmented Reality (GI VR/AR), pp. 69–75, 2013.
@inproceedings{BBS13,
title = {Immersive Guided Tours for Virtual Tourism through 3D City Models},
author = { Rüdiger Beimler and Gerd Bruder and Frank Steinicke},
url = {https://sreal.ucf.edu/wp-content/uploads/2017/02/BBS13.pdf},
year = {2013},
date = {2013-01-01},
booktitle = {Proceedings of the GI Workshop on Virtual and Augmented Reality (GI VR/AR)},
pages = {69--75},
abstract = {For decades, computer-mediated realities such as virtual reality (VR) or augmented reality (AR) have been used to visualize and explore virtual city models. The inherent three-dimensional (3D) nature as well as our natural understanding of urban areas and city models makes them suitable for immersive or semi-immersive installations, which support natural exploration of such complex datasets. In this paper, we present a novel VR approach to leverage immersive guided virtual tours through 3D city models. To this end, we combine an immersive head-mounted display (HMD) setup, which is used by one or more tourists, with a touch-enabled tabletop, which is used by the guide. While the guide overviews the entire virtual 3D city model and the virtual representations of each tourist inside the model, tourists perceive an immersive view from an egocentric perspective of regions of the city model, which can be pointed out by the guide. We describe the implementation of the setup and discuss interactive virtual tours through a 3D city model.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Gerd Bruder; Phil Wieland; Benjamin Bolte; Markus Lappe; Frank Steinicke Going With the Flow: Modifying Self-Motion Perception with Computer-Mediated Optic Flow Proceedings Article In: Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR), pp. 67–74, 2013.
@inproceedings{BWBLS13,
title = {Going With the Flow: Modifying Self-Motion Perception with Computer-Mediated Optic Flow},
author = { Gerd Bruder and Phil Wieland and Benjamin Bolte and Markus Lappe and Frank Steinicke},
url = {https://sreal.ucf.edu/wp-content/uploads/2017/02/BWBLS13.pdf},
year = {2013},
date = {2013-01-01},
booktitle = {Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR)},
pages = {67--74},
abstract = {One major benefit of wearable computers is that users can naturally move and explore computer-mediated realities. However, researchers often observe that users' space and motion perception severely differ in such environments compared to the real world, an effect that is often attributed to slight discrepancies in sensory cues, for instance, caused by tracking inaccuracy or system latency. This is particularly true for virtual reality (VR), but such conflicts are also inherent to augmented reality (AR) technologies. Although head-worn displays will soon become widely available, the effects on motion perception have rarely been studied, and techniques to modify self-motion in AR environments have not been leveraged so far. In this paper we introduce the concept of computer-mediated optic flow, and analyze its effects on self-motion perception in AR environments. First, we introduce different techniques to modify optic flow patterns and velocity. We present a psychophysical experiment which reveals differences in self-motion perception with a video see-through head-worn display compared to the real-world viewing condition. We show that computer-mediated optic flow has the potential to make a user perceive self-motion as faster or slower than it actually is, and we discuss its potential for future AR setups.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
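As an illustration of how optic-flow speed could be modulated in a video see-through pipeline, the sketch below warps each camera frame an extra fraction of its dense flow field. This is a hypothetical stand-in for intuition only, not the technique evaluated in the paper:

```python
import cv2
import numpy as np

def amplify_optic_flow(prev_gray, gray, frame, gain=1.5):
    """Exaggerate (gain > 1) or attenuate (gain < 1) apparent optic-flow
    speed by warping the current frame along its dense flow field."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Sampling each output pixel from "behind" its flow vector shifts image
    # content an extra (gain - 1) steps along its motion direction.
    map_x = (grid_x - (gain - 1.0) * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - (gain - 1.0) * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)
```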
Björn Janich; Monique Dittrich; Milena Schlosser; Gerd Bruder; Frank Steinicke Evaluation der ber Proceedings Article In: Proceedings of the GI Workshop on Virtual and Augmented Reality (GI VR/AR), pp. 15–26, 2013.
@inproceedings{JDSBS13,
title = {Evaluation der ber},
author = {Björn Janich and Monique Dittrich and Milena Schlosser and Gerd Bruder and Frank Steinicke},
url = {https://sreal.ucf.edu/wp-content/uploads/2017/02/JDSBS13.pdf},
year = {2013},
date = {2013-01-01},
booktitle = {Proceedings of the GI Workshop on Virtual and Augmented Reality (GI VR/AR)},
pages = {15--26},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Gerd Bruder; Frank Steinicke; Wolfgang Stürzlinger Effects of Visual Conflicts on 3D Selection Task Performance in Stereoscopic Display Environments Proceedings Article In: Proceedings of the IEEE Symposium on 3D User Interfaces (3DUI), pp. 115–118, 2013.
@inproceedings{BSS13,
title = {Effects of Visual Conflicts on 3D Selection Task Performance in Stereoscopic Display Environments},
author = {Gerd Bruder and Frank Steinicke and Wolfgang Stürzlinger},
url = {https://sreal.ucf.edu/wp-content/uploads/2017/02/BSS13.pdf},
year = {2013},
date = {2013-01-01},
booktitle = {Proceedings of the IEEE Symposium on 3D User Interfaces (3DUI)},
pages = {115--118},
abstract = {Mid-air direct-touch interaction in stereoscopic display environments poses challenges to the design of 3D user interfaces. Not only is passive haptic feedback usually absent when selecting a virtual object displayed with positive or negative parallax relative to a display surface, but such setups also suffer from inherent visual conflicts, such as vergence/accommodation mismatches and double vision. In particular, if the user tries to select a virtual object with a finger or input device, either the virtual object or the user's finger will appear blurred, resulting in an ambiguity for selections that may significantly impact the user's performance. In this paper we evaluate the effect of visual conflicts on mid-air 3D selection performance within arm's reach on a stereoscopic table with a Fitts' Law experiment. We compare three different techniques with different levels of visual conflicts for selecting a virtual object: real hand, virtual offset cursor, and virtual offset hand. Our results show that the error rate is highest for the real hand condition and lower for the virtual offset-based techniques. However, our results indicate that selections with the real hand resulted in the highest effective throughput of all conditions. This suggests that virtual offset-based techniques do not improve overall performance.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
David Zilch; Gerd Bruder; Frank Steinicke; Frank Lamack Design and Evaluation of 3D GUI Widgets for Stereoscopic Touch-Displays Proceedings Article In: Proceedings of the GI Workshop on Virtual and Augmented Reality (GI VR/AR), pp. 37–48, 2013.
@inproceedings{ZBSL13,
title = {Design and Evaluation of 3D GUI Widgets for Stereoscopic Touch-Displays},
author = {David Zilch and Gerd Bruder and Frank Steinicke and Frank Lamack},
url = {https://sreal.ucf.edu/wp-content/uploads/2017/02/ZBSL13.pdf},
year = {2013},
date = {2013-01-01},
booktitle = {Proceedings of the GI Workshop on Virtual and Augmented Reality (GI VR/AR)},
pages = {37--48},
abstract = {Recent developments in the area of interactive entertainment have suggested combining stereoscopic visualization with multi-touch displays, which has the potential to open up new vistas for natural interaction with interactive three-dimensional applications. However, the question arises how user interfaces for such setups should be designed in order to provide an effective user experience. In this paper we introduce 3D GUI widgets for interaction with stereoscopic touch displays. We have designed the widgets according to skeuomorphic features and affordances. We evaluated the developed widgets in the context of an example application in order to analyze the usability of and user behavior with this 3D user interface. The results reveal differences in user behavior with and without stereoscopic display during touch interaction, and show that the developed 3D GUI widgets can be used effectively in different applications.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Marina Hofmann; Ronja Bürger; Ninja Frost; Julia Karremann; Jule Keller-Bacher; Stefanie Kraft; Gerd Bruder; Frank Steinicke Comparing 3D Interaction Performance in Comfortable and Uncomfortable Regions Proceedings Article In: Proceedings of the GI Workshop on Virtual and Augmented Reality (GI VR/AR), pp. 3–14, 2013.
@inproceedings{HBFKKKBS13,
title = {Comparing 3D Interaction Performance in Comfortable and Uncomfortable Regions},
author = {Marina Hofmann and Ronja Bürger and Ninja Frost and Julia Karremann and Jule Keller-Bacher and Stefanie Kraft and Gerd Bruder and Frank Steinicke},
url = {https://sreal.ucf.edu/wp-content/uploads/2017/02/HBFKKKBS13.pdf},
year = {2013},
date = {2013-01-01},
booktitle = {Proceedings of the GI Workshop on Virtual and Augmented Reality (GI VR/AR)},
pages = {3--14},
abstract = {Immersive virtual environments (IVEs) have the potential to afford natural interaction in the three-dimensional (3D) space around a user. While the available physical workspace can differ between IVEs, only a small region is located within arm's reach at any given moment. This interaction space is solely defined by the shape and posture of the user's body. Interaction performance in this space depends on a variety of ergonomic factors, such as the user's endurance, muscular strength, and fitness. In this paper we investigate differences in selection task performance when users interact with their hands in a comfortable or uncomfortable region around their body. In a pilot study we identified comfortable and uncomfortable interaction regions for users who are standing upright. We conducted a Fitts' Law experiment to evaluate selection performance in these different regions over a duration of about thirty minutes. Although we could not find any significant differences in interaction performance between the two regions, we observed a trend that users' physical fitness affects performance: athletic users performed better than unathletic users. We discuss implications for natural interaction in IVEs.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Gerd Bruder; Frank Steinicke 2.5D Touch Interaction on Stereoscopic Tabletop Surfaces Proceedings Article In: Proceedings of Interactive Surfaces for Interaction with Stereoscopic 3D (ISIS3D), pp. 1–4, 2013.
@inproceedings{BS13,
title = {2.5D Touch Interaction on Stereoscopic Tabletop Surfaces},
author = {Gerd Bruder and Frank Steinicke},
url = {https://sreal.ucf.edu/wp-content/uploads/2017/02/BS13.pdf},
year = {2013},
date = {2013-01-01},
booktitle = {Proceedings of Interactive Surfaces for Interaction with Stereoscopic 3D (ISIS3D)},
pages = {1--4},
abstract = {Recent developments in touch and display technologies have laid the groundwork to combine touch-sensitive display systems with stereoscopic three-dimensional (3D) display. Traditionally, touch-sensitive surfaces capture only direct contacts such that the user has to penetrate a visually perceived object with negative parallax to touch the 2D surface behind the object. Conversely, recent technologies support capturing finger positions in front of the display, enabling users to interact with intangible objects in mid-air 3D space. In previous work we compared such 2D touch and 3D mid-air interactions in a Fitts' Law experiment for objects with varying stereoscopic parallax. The results showed that within a small range above the surface 2D interaction is beneficial whereas for objects farther away 3D interaction is beneficial. For these reasons, we discuss the concept of 2.5D interaction for such setups and introduce corresponding widgets for interaction with stereoscopic touch displays by means of an example application.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
David Cyborra; Moritz Albert; Frank Steinicke; Gerd Bruder [POSTER] Touch & Move: A Portable Stereoscopic Multi-Touch Table Proceedings Article In: Proceedings of IEEE Virtual Reality (VR), pp. 97–98, 2013.
@inproceedings{CASB13,
title = {[POSTER] Touch & Move: A Portable Stereoscopic Multi-Touch Table},
author = { David Cyborra and Moritz Albert and Frank Steinicke and Gerd Bruder},
url = {https://sreal.ucf.edu/wp-content/uploads/2017/02/CASB13.pdf},
year = {2013},
date = {2013-01-01},
booktitle = {Proceedings of IEEE Virtual Reality (VR)},
pages = {97--98},
abstract = {Recent developments in the fields of display technology provide new possibilities for engaging users in interactive exploration of three-dimensional (3D) virtual environments (VEs). Tracking technologies such as the Microsoft Kinect and emerging multi-touch interfaces enable inexpensive and low-maintenance interactive setups while providing portable solutions for engaging presentations and exhibitions. In this poster we describe an extension of the smARTbox, which is a responsive touch-enabled stereoscopic out-of-the-box technology for interactive setups. We extended the smARTbox by making the entire setup portable, which provides a new interaction experience when exploring 3D data sets. The portable tracked multi-touch interface supports two different interaction paradigms: exploration by multi-touch gestures as well as exploration by lateral movements of the entire setup. Hence, typical gestures supporting rotation and panning can be implemented via multi-touch gestures, but also via actual movements of the setup.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
2012
Frank Steinicke; Gerd Bruder Visual Perception of Perspective Distortions Proceedings Article In: Proceedings of the IEEE Virtual Reality Workshop on Perceptual Illusions in Virtual Environments (PIVE), pp. 37–40, 2012.
@inproceedings{SB12,
title = {Visual Perception of Perspective Distortions},
author = {Frank Steinicke and Gerd Bruder},
year = {2012},
date = {2012-01-01},
booktitle = {Proceedings of the IEEE Virtual Reality Workshop on Perceptual Illusions in Virtual Environments (PIVE)},
pages = {37--40},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Gerd Bruder; Frank Steinicke; Phil Wieland; Markus Lappe Tuning Self-Motion Perception in Virtual Reality with Visual Illusions Journal Article In: IEEE Transactions on Visualization and Computer Graphics (TVCG), vol. 18, no. 7, pp. 1068–1078, 2012.
@article{BSWL12,
title = {Tuning Self-Motion Perception in Virtual Reality with Visual Illusions},
author = {Gerd Bruder and Frank Steinicke and Phil Wieland and Markus Lappe},
url = {https://sreal.ucf.edu/wp-content/uploads/2017/02/BSWL12.pdf},
year = {2012},
date = {2012-01-01},
journal = {IEEE Transactions on Visualization and Computer Graphics (TVCG)},
volume = {18},
number = {7},
pages = {1068--1078},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Martin Fischbach; Marc E. Latoschik; Gerd Bruder; Frank Steinicke smARTbox: Out-of-the-box Technologies for Interactive Art and Exhibition Proceedings Article In: Proceedings of the Virtual Reality International Conference (VRIC), pp. 1–7, 2012.
@inproceedings{FLBS12,
title = {smARTbox: Out-of-the-box Technologies for Interactive Art and Exhibition},
author = {Martin Fischbach and Marc E. Latoschik and Gerd Bruder and Frank Steinicke},
year = {2012},
date = {2012-01-01},
booktitle = {Proceedings of the Virtual Reality International Conference (VRIC)},
pages = {1--7},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Martin Fischbach; Dennis Wiebusch; Marc E. Latoschik; Gerd Bruder; Frank Steinicke smARTbox: A Portable Setup for Intelligent Interactive Applications Proceedings Article In: Mensch & Computer 2012 - Workshopband: interaktiv informiert - allgegenwärtig und allumfassend!?, pp. 521–524, 2012.
@inproceedings{FWLBS12a,
title = {smARTbox: A Portable Setup for Intelligent Interactive Applications},
author = { Martin Fischbach and Dennis Wiebusch and Marc E. Latoschik and Gerd Bruder and Frank Steinicke},
year = {2012},
date = {2012-01-01},
booktitle = {Mensch & Computer 2012 - Workshopband: interaktiv informiert - allgegenwärtig und allumfassend!?},
pages = {521--524},
abstract = {This paper presents a semi-immersive, multimodal fish tank simulation realized using the smARTbox, an out-of-the-box platform for intelligent interactive applications. The smARTbox provides portability, stereoscopic visualization, marker-less user tracking and direct interscopic touch input. Off-the-shelf hardware is combined with a state-of-the-art simulation platform to provide a powerful system environment. The environment combines direct (touch) and indirect (movement) interaction.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Gerd Bruder; Victoria Interrante; Lane Phillips; Frank Steinicke Redirecting Walking and Driving for Natural Navigation in Immersive Virtual Environments Journal Article In: IEEE Transactions on Visualization and Computer Graphics (TVCG), vol. 18, no. 4, pp. 538–545, 2012.
@article{BIPS12,
title = {Redirecting Walking and Driving for Natural Navigation in Immersive Virtual Environments},
author = {Gerd Bruder and Victoria Interrante and Lane Phillips and Frank Steinicke},
url = {https://sreal.ucf.edu/wp-content/uploads/2017/02/BIPS12-optimized.pdf},
year = {2012},
date = {2012-01-01},
journal = {IEEE Transactions on Visualization and Computer Graphics (TVCG)},
volume = {18},
number = {4},
pages = {538--545},
abstract = {Walking is the most natural form of locomotion for humans, and real walking interfaces have demonstrated their benefits for several navigation tasks. With recently proposed redirection techniques it becomes possible to overcome space limitations imposed by tracking sensors or laboratory setups, and, theoretically, it is now possible to walk through arbitrarily large virtual environments. However, walking as the sole locomotion technique has drawbacks, in particular, for long distances, such that even in the real world we tend to support walking with passive or active transportation for longer-distance travel. In this article we show that concepts from the field of redirected walking can be applied to movements with transportation devices. We conducted psychophysical experiments to determine perceptual detection thresholds for redirected driving, and set these in relation to results from redirected walking. We show that redirected walking-and-driving approaches can easily be realized in immersive virtual reality laboratories, e.g., with electric wheelchairs, and show that such systems can combine advantages of real walking in confined spaces with benefits of using vehicle-based self-motion for longer-distance travel.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
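Detection thresholds of the kind reported here are commonly estimated by fitting a sigmoid psychometric function to two-alternative forced-choice responses pooled over the tested gains. A hedged sketch with hypothetical data (the fitted form matches common practice in this literature, not necessarily this paper's exact analysis):

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, a, b):
    """Sigmoid psychometric function f(x) = 1 / (1 + e^(a*x + b))."""
    return 1.0 / (1.0 + np.exp(a * x + b))

# Hypothetical data: tested gains and the proportion of trials on which
# subjects judged the virtual motion to be greater than the physical one.
gains = np.array([0.6, 0.8, 1.0, 1.2, 1.4])
p_greater = np.array([0.05, 0.20, 0.45, 0.80, 0.95])

(a, b), _ = curve_fit(psychometric, gains, p_greater, p0=(-1.0, 0.0))

def gain_at(p):
    """Invert the fitted sigmoid: gain at which it crosses probability p."""
    return (np.log(1.0 / p - 1.0) - b) / a

print(gain_at(0.50))                 # point of subjective equality
print(gain_at(0.25), gain_at(0.75))  # lower / upper detection thresholds
```

Gains between the lower and upper threshold crossings are the ones users cannot reliably detect, which defines the usable manipulation range.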
Loren P. Fiore; Lane Phillips; Gerd Bruder; Victoria Interrante; Frank Steinicke Redirected Steering for Virtual Self-Motion Control with a Motorized Electric Wheelchair Proceedings Article In: Proceedings of the Joint Virtual Reality Conference (JVRC), pp. 45–48, 2012.
@inproceedings{FPBIS12,
title = {Redirected Steering for Virtual Self-Motion Control with a Motorized Electric Wheelchair},
author = {Loren P. Fiore and Lane Phillips and Gerd Bruder and Victoria Interrante and Frank Steinicke},
url = {https://sreal.ucf.edu/wp-content/uploads/2017/02/FPBIS12.pdf},
year = {2012},
date = {2012-01-01},
booktitle = {Proceedings of the Joint Virtual Reality Conference (JVRC)},
pages = {45--48},
abstract = {Redirection techniques have shown great potential for enabling users to travel in large-scale virtual environments while their physical movements have been limited to a much smaller laboratory space. Traditional redirection approaches introduce a subliminal discrepancy between real and virtual motions of the user by subtle manipulations, which are thus highly dependent on the user and on the virtual scene. In the worst case, such approaches may result in failure cases that have to be resolved by obvious interventions, e.g., when a user faces a physical obstacle and tries to move forward. In this paper we introduce a remote steering method for redirection techniques that are used for physical transportation in an immersive virtual environment. We present a redirection controller for turning a legacy wheelchair device into a remote-controlled vehicle. In a psychophysical experiment we analyze automatic angular motion redirection with our proposed controller with respect to the detectability of discrepancies between real and virtual motions. Finally, we discuss this redirection method with its novel affordances for virtual traveling.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Falko Kellner; Benjamin Bolte; Gerd Bruder; Ulrich Rautenberg; Frank Steinicke; Markus Lappe; Reinhard Koch Geometric Calibration of Head-Mounted Displays and its Effects on Distance Estimation Journal Article In: IEEE Transactions on Visualization and Computer Graphics (TVCG), vol. 18, no. 4, pp. 589–596, 2012.
@article{KBBRSLK12,
title = {Geometric Calibration of Head-Mounted Displays and its Effects on Distance Estimation},
author = {Falko Kellner and Benjamin Bolte and Gerd Bruder and Ulrich Rautenberg and Frank Steinicke and Markus Lappe and Reinhard Koch},
url = {https://sreal.ucf.edu/wp-content/uploads/2017/02/KBBRSLK12.pdf},
year = {2012},
date = {2012-01-01},
journal = {IEEE Transactions on Visualization and Computer Graphics (TVCG)},
volume = {18},
number = {4},
pages = {589--596},
abstract = {Head-mounted displays (HMDs) allow users to observe virtual environments (VEs) from an egocentric perspective. However, several experiments have provided evidence that egocentric distances are perceived as compressed in VEs relative to the real world. Recent experiments suggest that the virtual view frustum set for rendering the VE has an essential impact on the user's estimation of distances. In this article we analyze whether distance estimation can be improved by calibrating the view frustum for a given HMD and user. Unfortunately, in an immersive virtual reality (VR) environment, a full per-user calibration is not trivial, and manual per-user adjustment often leads to minification or magnification of the scene. Therefore, we propose a novel per-user calibration approach with optical see-through displays commonly used in augmented reality (AR). This calibration takes advantage of a geometric scheme based on 2D point–3D line correspondences, which can be used intuitively by inexperienced users and requires less than a minute to complete. The required user interaction is based on taking aim at a distant target marker with a close marker, which ensures non-planar measurements covering a large area of the interaction space while also reducing the number of required measurements to five. We found the tendency that a calibrated view frustum reduced the average distance underestimation of users in an immersive VR environment, but even the correctly calibrated view frustum could not entirely compensate for the distance underestimation effects.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
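For intuition about how such a calibration can be solved, the sketch below runs a plain direct linear transform (DLT) on 2D-3D point pairs; the paper's actual scheme uses 2D point-3D line correspondences, so treat this as a simplified stand-in:

```python
import numpy as np

def dlt_projection(points_3d, points_2d):
    """Estimate a 3x4 projection matrix P (up to scale) from at least six
    2D-3D point correspondences via the direct linear transform."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(rows, dtype=float)
    # Least-squares null space: the right singular vector of the smallest
    # singular value minimizes ||A p|| subject to ||p|| = 1.
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)
```

The recovered projection then replaces the default view frustum used for rendering, which is what the calibration's effect on distance estimation is measured against.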
Gerd Bruder; Frank Steinicke; Benjamin Bolte; Phil Wieland; Harald Frenz; Markus Lappe Exploiting Perceptual Limitations and Illusions to Support Walking through Virtual Environments in Confined Physical Spaces Journal Article In: Elsevier Displays, vol. 34, no. 2, pp. 132–141, 2012.
@article{BSBWFL12,
title = {Exploiting Perceptual Limitations and Illusions to Support Walking through Virtual Environments in Confined Physical Spaces},
author = {Gerd Bruder and Frank Steinicke and Benjamin Bolte and Phil Wieland and Harald Frenz and Markus Lappe},
url = {https://sreal.ucf.edu/wp-content/uploads/2017/02/BSBWFL12.pdf},
year = {2012},
date = {2012-01-01},
journal = {Elsevier Displays},
volume = {34},
number = {2},
pages = {132--141},
abstract = {Head-mounted displays (HMDs) allow users to immerse themselves in a virtual environment (VE) in which the user's viewpoint can be changed according to the tracked movements in real space. Because the size of the virtual world often differs from the size of the tracked lab space, a straightforward implementation of omni-directional and unlimited walking is not generally possible. In this article we review and discuss a set of techniques that use known perceptual limitations and illusions to support seemingly natural walking through a large virtual environment in a confined lab space. The concept behind these techniques is called redirected walking. With redirected walking, users are guided unnoticeably on a physical path that differs from the path the user perceives in the virtual world by manipulating the transformations from real to virtual movements. For example, virtually rotating the view in the HMD to one side with every step causes the user to unknowingly compensate by walking a circular arc in the opposite direction, while having the illusion of walking on a straight trajectory. We describe a number of perceptual illusions that exploit perceptual limitations of motion detectors to manipulate the user's perception of the speed and direction of their motion. We describe how gains of locomotor speed, rotation, and curvature can gradually alter the physical trajectory without the users observing any discrepancy, and discuss studies that investigated perceptual thresholds for these manipulations. We discuss the potential of self-motion illusions to shift or widen the applicable ranges for gain manipulations and to compensate for over- or underestimations of speed or travel distance in VEs. Finally, we identify a number of key issues for future research on this topic.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
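The gain manipulations reviewed in this article amount to a per-frame remapping of tracked motion onto virtual motion. A minimal sketch (illustrative, not the authors' implementation):

```python
import math

def redirect(d_walk, d_yaw, rotation_gain=1.0, translation_gain=1.0,
             curvature_radius=None):
    """Map one frame of real motion to virtual motion.

    d_walk: real translation this frame (m), d_yaw: real rotation (rad).
    """
    v_walk = translation_gain * d_walk  # locomotor speed gain g_T
    v_yaw = rotation_gain * d_yaw       # rotation gain g_R
    if curvature_radius:
        # Curvature gain: inject rotation proportional to distance walked,
        # so that compensating users physically walk a circular arc while
        # perceiving a straight virtual trajectory.
        v_yaw += d_walk / curvature_radius
    return v_walk, v_yaw

# Example: 1 cm of walking on a 22 m radius (a curvature bound reported in
# the redirected-walking threshold literature) injects ~0.026 degrees of
# rotation per frame, well below what users notice.
_, dyaw = redirect(0.01, 0.0, curvature_radius=22.0)
print(math.degrees(dyaw))
```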
Dennis Wiebusch; Martin Fischbach; Alexander Strehler; Marc E. Latoschik; Gerd Bruder; Frank Steinicke Evaluation von Headtracking in interaktiven virtuellen Umgebungen auf Basis der Kinect Proceedings Article In: Proceedings of the GI Workshop on Virtual and Augmented Reality (GI VR/AR), pp. 189–200, 2012.
@inproceedings{WFSLBS12a,
title = {Evaluation von Headtracking in interaktiven virtuellen Umgebungen auf Basis der Kinect},
author = {Dennis Wiebusch and Martin Fischbach and Alexander Strehler and Marc E. Latoschik and Gerd Bruder and Frank Steinicke},
year = {2012},
date = {2012-01-01},
booktitle = {Proceedings of the GI Workshop on Virtual and Augmented Reality (GI VR/AR)},
pages = {189--200},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}