Publication List

  • Article
  • Book
  • Dissertation
  • Student research project / Final thesis
  • Conference proceedings
  • Other
  • An mHealth Intervention Promoting Physical Activity and Healthy Eating in a Family Setting (SMARTFAMILY) : Randomized Controlled Trial

    Background: Numerous smartphone apps are targeting physical activity (PA) and healthy eating (HE), but empirical evidence on their effectiveness for the initialization and maintenance of behavior change, especially in children and adolescents, is still limited. Social settings influence individual behavior; therefore, core settings such as the family need to be considered when designing mobile health (mHealth) apps.

    Objective: The purpose of this study was to evaluate the effectiveness of a theory- and evidence-based mHealth intervention (called SMARTFAMILY [SF]) targeting PA and HE in a collective family-based setting.

    Methods: A smartphone app based on behavior change theories and techniques was developed, implemented, and evaluated with a cluster randomized controlled trial in a collective family setting. Baseline (t0) and postintervention (t1) measurements included PA (self-reported and accelerometry) and HE measurements (self-reported fruit and vegetable intake) as primary outcomes. Secondary outcomes (self-reported) were intrinsic motivation, behavior-specific self-efficacy, and the family health climate. Between t0 and t1, families of the intervention group (IG) used the SF app individually and collaboratively for 3 consecutive weeks, whereas families in the control group (CG) received no treatment. Four weeks following t1, a follow-up assessment (t2) consisting of all questionnaire items was completed by participants to assess the stability of the intervention effects. Multilevel analyses were implemented in R (R Foundation for Statistical Computing) to acknowledge the hierarchical structure of persons (level 1) clustered in families (level 2).

    Results: Overall, 48 families (CG: n=22, 46%, with 68 participants; IG: n=26, 54%, with 88 participants) were recruited for the study. Two families (CG: n=1, 2%, with 4 participants; IG: n=1, 2%, with 4 participants) chose to drop out of the study owing to personal reasons before t0. Overall, no evidence for meaningful and statistically significant increases in PA and HE levels of the intervention was observed in our physically active study participants (all P>.30).

    Conclusions: Despite incorporating behavior change techniques rooted in family life and psychological theories, the SF intervention did not yield significant increases in PA and HE levels among the participants. The results of the study were mainly limited by the physically active participants and the large age range of the children and adolescents. Enhancing intervention effectiveness may involve incorporating health literacy, just-in-time adaptive interventions, and more advanced features in future app development. Further research is needed to better understand intervention engagement and tailor mHealth interventions to individuals for enhanced effectiveness in primary prevention efforts.

    Trial Registration: German Clinical Trials Register DRKS00010415; drks.de/search/en/trial/DRKS00010415

  • Skovhus Lunding, Rasmus; Skovhus Lunding, Mille; Feuchtner, Tiare; Graves Petersen, Marianne; Grønbæk, Kaj; Suzuki, Ryo (2024): RoboVisAR : Immersive Authoring of Condition-based AR Robot Visualisations GROLLMAN, Dan, ed., Elizabeth BROADBENT, ed.. HRI '24 : Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction. New York, NY: ACM, 2024, pp. 462-471. ISBN 979-8-4007-0322-5. Available under: doi: 10.1145/3610977.3634972

    We introduce RoboVisAR, an immersive augmented reality (AR) authoring tool for in-situ robot visualisations. AR robot visualisations, such as the robot's movement path, status, and safety zones, have been shown to benefit human-robot collaboration. However, their creation requires extensive skills in both robotics and AR programming. To address this, RoboVisAR allows users to create custom AR robot visualisations without programming. By recording an example robot behaviour, users can design, combine, and test visualisations in-situ within a mixed reality environment. RoboVisAR currently supports six types of visualisations (Path, Point of Interest, Safety Zone, Robot State, Message, Force/Torque) and four types of conditions for when they are displayed (Robot State, Proximity, Box, Force/Torque). With this tool, users can easily present different visualisations on demand and make them context-aware to avoid visual clutter. An expert user study with three participants suggests that users appreciate the customizability of the visualisations, which could easily be authored in less than ten minutes.

  • Mueller, Florian ‘Floyd’; Obrist, Marianna; Bertran, Ferran Altarriba; Makam, Neharika; Kim, Soh; Dawes, Christopher; Marti, Patrizia; Reiterer, Harald; Wang, Hongyue; Wang, Yan (2024): Grand Challenges in Human-Food Interaction International Journal of Human-Computer Studies. Elsevier. 2024, 183, 103197. ISSN 1071-5819. eISSN 1095-9300. Available under: doi: 10.1016/j.ijhcs.2023.103197

    There is an increasing interest in combining interactive technology with food, leading to a new research area called human-food interaction. While food experiences are increasingly benefiting from interactive technology, for example in the form of food tracking apps, 3D-printed food and projections on dining tables, a more systematic advancement of the field is hindered because, so far, there is no comprehensive articulation of the grand challenges the field is facing. To further and consolidate conversations around this topic, we invited 21 HFI experts to a 5-day seminar. The goal was to review our own and prior work to identify the grand challenges in human-food interaction. The result is an articulation of 10 grand challenges in human-food interaction across 4 categories (technology, users, design and ethics). By presenting these grand challenges, we aim to help researchers move the human-food interaction research field forward.

  • Hauser, Stefan R.; Heinrich, Eva-Maria; Reiterer, Harald; Schlag, Eberhard; Schreiber, Falk (Hrsg.) (2024): MAG Mediale Ausstellungsgestaltung : Eine Kooperation zwischen der Universität Konstanz und HTWG Konstanz

    In the degree program Mediale Ausstellungsgestaltung (media-based exhibition design), which received the 2021 Landeslehrpreis (state teaching award) of Baden-Württemberg, students design innovative, interactive exhibition concepts. The course is part of a collaboration between different universities and disciplines that is unique in Germany: architecture and communication design (HTWG Konstanz), computer science and history (Universität Konstanz), and music design (Staatliche Hochschule für Musik Trossingen). Over the four-semester program, students learn the fundamentals of modern exhibition design and then develop novel exhibition concepts in interdisciplinary teams, which are realized in exhibitions of their own.

  • Adolf, Jindřich; Kán, Peter; Feuchtner, Tiare; Adolfová, Barbora; Doležal, Jaromír; Lhotská, Lenka (2024): Offistretch : camera-based real-time feedback for daily stretching exercises The Visual Computer. Springer. ISSN 0178-2789. eISSN 1432-2315. Available under: doi: 10.1007/s00371-024-03450-y

    In this paper, we present OffiStretch, a camera-based system for optimal stretching guidance at home or in the workplace. It consists of a vision-based method for real-time assessment of the user’s body pose to provide visual feedback as interactive guidance during stretching exercises. Our method compares the users’ actual pose with a pre-trained target pose to assess the quality of stretching for a number of different exercises. We utilize angular and spatial pose features to perform this comparison for each individual exercise. The result of this pose assessment is presented to the user as real-time visual feedback on an "augmented mirror" display. As our method relies simply on a single RGB camera, it can be easily utilized in everyday training scenarios. We validate our method in a user study, comparing users’ performance and motivation in stretching when receiving audio-visual guidance on a TV screen both with and without our live feedback. While participants performed equally well in both conditions, feedback boosted their motivation to perform the exercises, highlighting its potential for increasing users’ well-being. Moreover, our results suggest that participants preferred stretching exercises with our live feedback over the condition without the feedback. Finally, an expert evaluation with professional physiotherapists reveals that further work must target improvements of the feedback to ensure correct guidance during stretching.

  • Wieland, Jonathan (2023): Designing and Evaluating Interactions for Handheld AR BIEHL, Jacob, ed., Scott CARTER, ed., Andrés LUCERO, ed., Ville MÄKELÄ, ed., Florian ALT, ed.. Companion Proceedings of the 2023 Conference on Interactive Surfaces and Spaces. New York, USA: ACM, 2023, pp. 100-103. ISBN 979-8-4007-0425-3. Available under: doi: 10.1145/3626485.3626555

    Mobile devices such as phones and tablets have become the most prevalent AR devices and are applied in various domains, including architecture, entertainment, and education. However, due to device-specific characteristics, designing and evaluating handheld AR interactions can be especially challenging as handheld AR displays provide 1) limited input options, 2) a narrow camera field of view, and 3) restricted context awareness. To address these challenges with design recommendations and research implications, the dissertation follows a mixed-methods approach with three research questions (RQs): On the one hand, specific aspects of fundamental 3D object manipulation tasks (RQ1) and awareness and discoverability (RQ2) are explored and evaluated in controlled lab studies. For increased ecological validity, the developed interaction concepts are also investigated in public interactive exhibitions. These studies then inform the creation of a framework for designing and evaluating handheld AR experiences using VR simulations of the interaction context (RQ3).

  • Auer, Stefan; Anthes, Christoph; Reiterer, Harald; Jetter, Hans-Christian (2023): Aircraft Cockpit Interaction in Virtual Reality with Visual, Auditive, and Vibrotactile Feedback Proceedings of the ACM on Human-Computer Interaction. ACM. 2023, 7(ISS), pp. 420-443. eISSN 2573-0142. Available under: doi: 10.1145/3626481

    Safety-critical interactive spaces for supervision and time-critical control tasks are usually characterized by many small displays and physical controls, typically found in control rooms or automotive, railway, and aviation cockpits. Using Virtual Reality (VR) simulations instead of a physical system can significantly reduce the training costs of these interactive spaces without risking real-world accidents or occupying expensive physical simulators. However, the user's physical interactions and feedback methods must be technologically mediated.


    Therefore, we conducted a within-subjects study with 24 participants and compared performance, task load, and simulator sickness during training of authentic aircraft cockpit manipulation tasks.


    The participants were asked to perform these tasks inside a VR flight simulator (VRFS) for three feedback methods (acoustic, haptic, and acoustic+haptic) and inside a physical flight simulator (PFS) of a commercial airplane cockpit. The study revealed a partial equivalence of VRFS and PFS, control-specific differences in input elements, the irrelevance of rudimentary vibrotactile feedback, slower movements in VR, as well as a preference for the PFS.

  • Hybrid User Interfaces : Complementary Interfaces for Mixed Reality Interaction


    Authors: Dachselt, Raimund; Elmqvist, Niklas; Feiner, Steven; Lee, Benjamin; Schmalstieg, Dieter

  • Hubenschmid, Sebastian; Fink, Daniel I.; Zagermann, Johannes; Wieland, Jonathan; Reiterer, Harald; Feuchtner, Tiare (2023): Colibri : A Toolkit for Rapid Prototyping of Networking Across Realities 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). Piscataway, NJ: IEEE, 2023, pp. 9-13. ISBN 979-8-3503-2891-2. Available under: doi: 10.1109/ismar-adjunct60411.2023.00010

    Project : SFB TRR 161 TP C 01 Quantitative Messung von Interaktion

    We present Colibri, an open source networking toolkit for data exchange, model synchronization, and voice transmission to support rapid development of distributed cross reality research prototypes. Development of such prototypes often involves multiple heterogeneous components, which necessitates data exchange across a network. However, existing networking solutions are often unsuitable for research prototypes as they require significant development resources and may be lacking in terms of data privacy, logging capabilities, latency requirements, or supporting heterogeneous devices. In contrast, Colibri is specifically designed for networking in interactive research prototypes: Colibri facilitates the most common tasks for establishing communication between cross reality components with little to no code necessary. We describe the usage and implementation of Colibri and report on its application in three cross reality prototypes to demonstrate the toolkit’s capabilities. Lastly, we discuss open challenges to better support the creation of cross reality prototypes.

  • Reinschlüssel, Anke; Zagermann, Johannes (2023): Exploring Hybrid User Interfaces for Surgery Planning 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). Piscataway, NJ: IEEE, 2023, pp. 208-210. ISBN 979-8-3503-2891-2. Available under: doi: 10.1109/ismar-adjunct60411.2023.00048

    Project : SFB TRR 161 TP C 01 Quantitative Messung von Interaktion

    Hybrid user interfaces are a great opportunity to combine complementary interfaces to make use of the best interface for specific steps in a workflow. This position paper outlines one diverse application field: surgery planning. Planning a surgery is a complex task as the surgical team has to get an overview and understanding of a patient's medical history and the internal anatomical structures of the organ or region of interest. In this position paper, we outline how different hardware (e.g., mixed reality head-worn devices and physical objects) and interaction concepts (e.g., gesture-based interaction or keyboard and mouse) can create an optimal workflow for surgery planning.

  • Zagermann, Johannes; Hubenschmid, Sebastian; Fink, Daniel I.; Wieland, Jonathan; Reiterer, Harald; Feuchtner, Tiare (2023): Challenges and Opportunities for Collaborative Immersive Analytics with Hybrid User Interfaces 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). Piscataway, NJ: IEEE, 2023, pp. 191-195. ISBN 979-8-3503-2891-2. Available under: doi: 10.1109/ismar-adjunct60411.2023.00044

    Project : SFB TRR 161 TP C 01 Quantitative Messung von Interaktion

    Over the past years, we have seen an increase in the number of user studies involving mixed reality interfaces. As these environments usually exceed standardized user study settings that only measure time and error, we developed, designed, and evaluated a mixed-immersion evaluation framework called RELIVE. Its combination of in-situ and ex-situ analysis approaches allows for the holistic and malleable analysis and exploration of mixed reality user study data of an individual analyst in a step-by-step approach that we previously described as an asynchronous hybrid user interface. Yet, collaboration was coined as a key aspect for visual and immersive analytics – potentially allowing multiple analysts to synchronously explore mixed reality user study data from different but complementary angles of evaluation using hybrid user interfaces. This leads to a variety of fundamental challenges and opportunities for research and design of hybrid user interfaces regarding e.g., allocation of tasks, the interplay between views, user representations, and collaborative coupling that are outlined in this position paper.

  • Zaky, Abdelrahman; Zagermann, Johannes; Reiterer, Harald; Feuchtner, Tiare (2023): Opportunities and Challenges of Hybrid User Interfaces for Optimization of Mixed Reality Interfaces 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). Piscataway, NJ: IEEE, 2023. ISBN 979-8-3503-2891-2. Available under: doi: 10.1109/ismar-adjunct60411.2023.00050

    Project : SFB-TRR 161 TP C07 Real-Time Optimization of XR User Interfaces

    Current research highlights the importance of adaptive mixed reality interfaces, as increased adoption leads to increasingly diverse, complex and unconstrained interaction scenarios. An interesting approach for adaptation is the optimization of interface layout and behaviour. We thereby consider three distinct types of context to which the interface adapts: the user, the activity, and the environment. The latter of these includes a myriad of interactive devices surrounding the user, the capabilities of which we propose to take advantage of by integrating them in a hybrid user interface. Hybrid user interfaces offer many opportunities to address distinct usability issues, such as visibility, reachability, and ergonomics. However, considering additional interactive devices for optimizing mixed reality interfaces introduces a number of additional challenges, such as detecting available and suitable devices and modeling the respective interaction costs. Moreover, using different devices potentially introduces a switching cost, e.g., in terms of cognitive load and time. In this paper, we aim to discuss different opportunities and challenges of using hybrid user interfaces for the optimization of mixed reality interfaces and thereby highlight directions for future work.

  • Lunding, Rasmus S.; Lystbæk, Mathias N.; Feuchtner, Tiare; Grønbæk, Kaj (2023): AR-supported Human-Robot Collaboration : Facilitating Workspace Awareness and Parallelized Assembly Tasks BRUDER, Gerd, ed., Anne-Hélène OLIVIER, ed., Andrew CUNNINGHAM, ed. and others. 2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). Piscataway, NJ: IEEE, 2023, pp. 1064-1073. ISBN 979-8-3503-2838-7. Available under: doi: 10.1109/ismar59233.2023.00123

    While technologies for human-robot collaboration are rapidly advancing, plenty of aspects still need further investigation, such as ensuring workspace awareness, enabling the operator to reschedule tasks on the fly, and how users prefer to coordinate and collaborate with robots. To address these, we propose an Augmented Reality interface that supports human-robot collaboration in an assembly task by (1) enabling the inspection of planned and ongoing robot processes through dynamic task lists and a path visualization, (2) allowing the operator to also delegate tasks to the robot, and (3) presenting step-by-step assembly instructions. We evaluate our AR interface in comparison to a state-of-the-art tablet interface in a user study, where participants collaborated with a robot arm in a shared workspace to complete an assembly task. Our findings confirm the feasibility and potential of AR-assisted human-robot collaboration, while pointing to some central challenges that require further work.

  • Kosch, Thomas; Karolus, Jakob; Zagermann, Johannes; Reiterer, Harald; Schmidt, Albrecht; Woźniak, Paweł W. (2023): A Survey on Measuring Cognitive Workload in Human-Computer Interaction ACM Computing Surveys. ACM. 2023, 55(13s), 283. ISSN 0360-0300. eISSN 1557-7341. Available under: doi: 10.1145/3582272

    Project : SFB TRR 161 TP C 01 Quantitative Messung von Interaktion

    The ever-increasing number of computing devices around us results in more and more systems competing for our attention, making cognitive workload a crucial factor for the user experience of human-computer interfaces. Research in Human-Computer Interaction (HCI) has used various metrics to determine users’ mental demands. However, there is no systematic way to choose an appropriate and effective measure for cognitive workload in experimental setups, which poses a challenge to their reproducibility. We present a literature survey of past and current metrics for cognitive workload used throughout HCI literature to address this challenge. By initially exploring what cognitive workload resembles in the HCI context, we derive a categorization supporting researchers and practitioners in selecting cognitive workload metrics for system design and evaluation. We conclude with the following three research gaps: (1) defining and interpreting cognitive workload in HCI, (2) the hidden cost of the NASA-TLX, and (3) HCI research as a catalyst for workload-aware systems, highlighting that HCI research has to deepen and conceptualize the understanding of cognitive workload in the context of interactive computing systems.

  • Cromjongh, Robin; Van Reenen, Quentin; König, Laura M.; Kanning, Martina; Reips, Ulf-Dietrich; Feuchtner, Tiare; Hauptmann, Hanna (2023): Group Adapted Avatar Recommendations for Exergames Adjunct Proceedings of the 31st ACM Conference on User Modeling, Adaptation and Personalization. New York, NY, USA: ACM, 2023, pp. 283-290. ISBN 978-1-4503-9891-6. Available under: doi: 10.1145/3563359.3597389

    Exergames are a promising way to encourage physical activity in the population. Especially competitive gaming has been shown to boost physical activity during gameplay. However, differences in physical abilities and fitness can lead to anxiety, fear of failure, or frustration. One way to mitigate these inhibitors is to balance the exergaming difficulty between competing players. This paper investigates the expectations and attitudes towards adaptivity in sports games, both in real life and with digital support. To that end, we present a survey with 421 participants investigating the general reaction to group adaptivity in sports games as well as a focus group discussing the reactions to group adaptive avatar recommendations within the game Mario Tennis Aces. Our results show that there is potential for group adaptive exergames to increase engagement, especially for non-sporty and female users, and that the first prototypical implementation was perceived positively regarding fairness and expected physical activity.

  • Relaxed forced choice improves performance of visual quality assessment methods

    In image quality assessment, a collective visual quality score for an image or video is obtained from the individual ratings of many subjects. One commonly used format for these experiments is the two-alternative forced choice method. Two stimuli with the same content but differing visual quality are presented sequentially or side-by-side. Subjects are asked to select the one of better quality, and when uncertain, they are required to guess. The relaxed alternative forced choice format aims to reduce the cognitive load and the noise in the responses due to the guessing by providing a third response option, namely, "not sure". This work presents a large and comprehensive crowdsourcing experiment to compare these two response formats: the one with the "not sure" option and the one without it. To provide unambiguous ground truth for quality evaluation, subjects were shown pairs of images with differing numbers of dots and asked each time to choose the one with more dots. Our crowdsourcing study involved 254 participants and was conducted using a within-subject design. Each participant was asked to respond to 40 pair comparisons with and without the "not sure" response option and completed a questionnaire to evaluate their cognitive load for each testing condition. The experimental results show that the inclusion of the "not sure" response option in the forced choice method reduced mental load and led to models with better data fit and correspondence to ground truth. We also tested for the equivalence of the models and found that they were different. The dataset is available at database.mmsp-kn.de/cogvqa-database.html.

  • Hubenschmid, Sebastian; Zagermann, Johannes; Leicht, Daniel; Reiterer, Harald; Feuchtner, Tiare (2023): ARound the Smartphone : Investigating the Effects of Virtually-Extended Display Size on Spatial Memory Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23). New York, NY, USA: ACM, 2023. ISBN 978-1-4503-9421-5. Available under: doi: 10.1145/3544548.3581438

    Project : SFB TRR 161 TP C 01 Quantitative Messung von Interaktion

    Smartphones conveniently place large information spaces in the palms of our hands. While research has shown that larger screens positively affect spatial memory, workload, and user experience, smartphones remain fairly compact for the sake of device ergonomics and portability. Thus, we investigate the use of hybrid user interfaces to virtually increase the available display size by complementing the smartphone with an augmented reality head-worn display. We thereby combine the benefits of familiar touch interaction with the near-infinite visual display space afforded by augmented reality. To better understand the potential of virtually-extended displays and the possible issues of splitting the user’s visual attention between two screens (real and virtual), we conducted a within-subjects experiment with 24 participants completing navigation tasks using different virtually-augmented display sizes. Our findings reveal that a desktop monitor size represents a “sweet spot” for extending smartphones with augmented reality, informing the design of hybrid user interfaces.

  • Assländer, Lorenz; Albrecht, Matthias; Diehl, Moritz; Missen, Kyle J.; Carpenter, Mark G.; Streuber, Stephan (2023): Estimation of the visual contribution to standing balance using virtual reality Scientific Reports. Springer. 2023, 13(1), 2594. eISSN 2045-2322. Available under: doi: 10.1038/s41598-023-29713-7

    Sensory perturbations are a valuable tool to assess sensory integration mechanisms underlying balance. Implemented as systems-identification approaches, they can be used to quantitatively assess balance deficits and separate underlying causes. However, the experiments require controlled perturbations and sophisticated modeling and optimization techniques. Here we propose and validate a virtual reality implementation of moving visual scene experiments together with model-based interpretations of the results. The approach simplifies the experimental implementation and offers a platform to implement standardized analysis routines. Sway of 14 healthy young subjects wearing a virtual reality head-mounted display was measured. Subjects viewed a virtual room or a screen inside the room, which were both moved during a series of sinusoidal or pseudo-random room or screen tilt sequences recorded on two days. In a between-subject comparison of 10 × 6 min long pseudo-random sequences, each applied at 5 amplitudes, our results showed no difference to a real-world moving screen experiment from the literature. We used the independent-channel model to interpret our data, which provides a direct estimate of the visual contribution to balance, together with parameters characterizing the dynamics of the feedback system. Reliability estimates of single subject parameters from six repetitions of a 6 × 20-s pseudo-random sequence showed poor test–retest agreement. Estimated parameters show excellent reliability when averaging across three repetitions within each day and comparing across days (Intra-class correlation; ICC 0.7–0.9 for visual weight, time delay and feedback gain). Sway responses strongly depended on the visual scene, where the high-contrast, abstract screen evoked larger sway as compared to the photo-realistic room. In conclusion, our proposed virtual reality approach allows researchers to reliably assess balance control dynamics including the visual contribution to balance with minimal implementation effort.

  • Albrecht, Matthias; Assländer, Lorenz; Reiterer, Harald; Streuber, Stephan (2023): MoPeDT : A Modular Head-Mounted Display Toolkit to Conduct Peripheral Vision Research 2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR). Piscataway, NJ: IEEE, 2023, pp. 691-701. ISBN 979-8-3503-4815-6. Available under: doi: 10.1109/VR55154.2023.00084

    Peripheral vision plays a significant role in human perception and orientation. However, its relevance for human-computer interaction, especially head-mounted displays, has not been fully explored yet. In the past, a few specialized appliances were developed to display visual cues in the periphery, each designed for a single specific use case only. A multi-purpose headset to exclusively augment peripheral vision did not exist yet. We introduce MoPeDT: Modular Peripheral Display Toolkit, a freely available, flexible, reconfigurable, and extendable headset to conduct peripheral vision research. MoPeDT can be built with a 3D printer and off-the-shelf components. It features multiple spatially configurable near-eye display modules and full 3D tracking inside and outside the lab. With our system, researchers and designers may easily develop and prototype novel peripheral vision interaction and visualization techniques. We demonstrate the versatility of our headset with several possible applications for spatial awareness, balance, interaction, feedback, and notifications. We conducted a small study to evaluate the usability of the system. We found that participants were largely not irritated by the peripheral cues, but the headset's comfort could be further improved. We also evaluated our system based on established heuristics for human-computer interaction toolkits to show how MoPeDT adapts to changing requirements, lowers the entry barrier for peripheral vision research, and facilitates expressive power in the combination of modular building blocks.

  • Evangelista Belo, João Marcelo; Wissing, Jon; Feuchtner, Tiare; Grønbæk, Kaj (2023): CADTrack : Instructions and Support for Orientation Disambiguation of Near-Symmetrical Objects Proceedings of the ACM on Human-Computer Interaction. ACM. 2023, 7(ISS), 426. eISSN 2573-0142. Available under: doi: 10.1145/3626462

    Determining the correct orientation of objects can be critical to succeed in tasks like assembly and quality assurance. In particular, near-symmetrical objects may require careful inspection of small visual features to disambiguate their orientation. We propose CADTrack, a digital assistant for providing instructions and support for tasks where the object orientation matters but may be hard to disambiguate with the naked eye. Additionally, we present a deep learning pipeline for tracking the orientation of near-symmetrical objects. In contrast to existing approaches, which require labeled datasets involving laborious data acquisition and annotation processes, CADTrack uses a digital model of the object to generate synthetic data and train a convolutional neural network. Furthermore, we extend the architecture of Mask R-CNN with a confidence prediction branch to avoid errors caused by misleading orientation guidance. We evaluate CADTrack in a user study, comparing our tracking-based instructions to other methods to confirm the benefits of our approach in terms of preference and required effort.

  • Babic, Teo; Reiterer, Harald; Haller, Michael (2022): Understanding and Creating Spatial Interactions with Distant Displays Enabled by Unmodified Off-The-Shelf Smartphones Multimodal Technologies and Interaction. MDPI. 2022, 6(10), 94. eISSN 2414-4088. Available under: doi: 10.3390/mti6100094

    Over decades, many researchers developed complex in-lab systems with the overall goal to track multiple body parts of the user for a richer and more powerful 2D/3D interaction with a distant display. In this work, we introduce a novel smartphone-based tracking approach that eliminates the need for complex tracking systems. Relying on simultaneous usage of the front and rear smartphone cameras, our solution enables rich spatial interactions with distant displays by combining touch input with hand-gesture input, body and head motion, as well as eye-gaze input. In this paper, we first present a taxonomy for classifying distant display interactions, providing an overview of enabling technologies, input modalities, and interaction techniques, spanning from 2D to 3D interactions. Further, we provide more details about our implementation using off-the-shelf smartphones. Finally, we validate our system in a user study with a variety of 2D and 3D multimodal interaction techniques, including input refinement.

  • Chiossi, Francesco; Zagermann, Johannes; Karolus, Jakob; Rodrigues, Nils; Balestrucci, Priscilla; Weiskopf, Daniel; Ehinger, Benedikt; Feuchtner, Tiare; Reiterer, Harald; Chuang, Lewis L. (2022): Adapting visualizations and interfaces to the user it - Information Technology. De Gruyter Oldenbourg. 2022, 64(4-5), pp. 133-143. ISSN 1611-2776. eISSN 2196-7032. Available under: doi: 10.1515/itit-2022-0035

    Project : SFB TRR 161 TP C 01 Quantitative Messung von Interaktion

    Adaptive visualization and interfaces pervade our everyday tasks to improve interaction from the point of view of user performance and experience. This approach allows using several user inputs, whether physiological, behavioral, qualitative, or multimodal combinations, to enhance the interaction. Due to the multitude of approaches, we outline the current research trends of inputs used to adapt visualizations and user interfaces. Moreover, we discuss methodological approaches used in mixed reality, physiological computing, visual analytics, and proficiency-aware systems. With this work, we provide an overview of the current research in adaptive systems.

  • Fleck, Philipp; Sousa Calepso, Aimée; Hubenschmid, Sebastian; Sedlmair, Michael; Schmalstieg, Dieter (2022): RagRug : A Toolkit for Situated Analytics IEEE Transactions on Visualization and Computer Graphics. IEEE. ISSN 1077-2626. eISSN 1941-0506. Available under: doi: 10.1109/TVCG.2022.3157058

    Project : SFB TRR 161 TP C 01 Quantitative Messung von Interaktion

    We present RagRug, an open-source toolkit for situated analytics. The abilities of RagRug go beyond previous immersive analytics toolkits by focusing on specific requirements emerging when using augmented reality (AR) rather than virtual reality. RagRug combines state of the art visual encoding capabilities with a comprehensive physical-virtual model, which lets application developers systematically describe the physical objects in the real world and their role in AR. We connect AR visualization with data streams from the Internet of Things using distributed dataflow. To this aim, we use reactive programming patterns so that visualizations become context-aware, i.e., they adapt to events coming in from the environment. The resulting authoring system is low-code; it emphasises describing the physical and the virtual world and the dataflow between the elements contained therein. We describe the technical design and implementation of RagRug, and report on five example applications illustrating the toolkit's abilities.

  • Seinfeld, Sofia; Feuchtner, Tiare; Pinzek, Johannes; Müller, Jörg (2022): Impact of Information Placement and User Representations in VR on Performance and Embodiment IEEE Transactions on Visualization and Computer Graphics (T-VCG). IEEE. 2022, 28(3), pp. 1545-1556. ISSN 1077-2626. eISSN 1941-0506. Available under: doi: 10.1109/TVCG.2020.3021342

    Human sensory processing is sensitive to the proximity of stimuli to the body. It is therefore plausible that these perceptual mechanisms also modulate the detectability of content in VR, depending on its location. We evaluate this in a user study and further explore the impact of the user's representation during interaction. We also analyze how embodiment and motor performance are influenced by these factors. In a dual-task paradigm, participants executed a motor task, either through virtual hands, virtual controllers, or a keyboard. Simultaneously, they detected visual stimuli appearing in different locations. We found that, while actively performing a motor task in the virtual environment, performance in detecting additional visual stimuli is higher when presented near the user's body. This effect is independent of how the user is represented and only occurs when the user is also engaged in a secondary task. We further found improved motor performance and increased embodiment when interacting through virtual tools and hands in VR, compared to interacting with a keyboard. This study contributes to better understanding the detectability of visual content in VR, depending on its location in the virtual environment, as well as the impact of different user representations on information processing, embodiment, and motor performance.

  • Fink, Daniel I.; Zagermann, Johannes; Reiterer, Harald; Jetter, Hans-Christian (2022): Re-locations : Augmenting Personal and Shared Workspaces to Support Remote Collaboration in Incongruent Spaces WALLACE, James, ed. and others. Proceedings of the ACM on Human-Computer Interaction ; 6. New York, NY: ACM, 2022, 556. Available under: doi: 10.1145/3567709

    Project : SFB TRR 161 TP C 01 Quantitative Messung von Interaktion

    Augmented reality (AR) can create the illusion of being virtually co-located during remote collaboration, e.g., by visualizing remote co-workers as avatars. However, spatial awareness of each other’s activities is limited as physical spaces, including the position of physical devices, are often incongruent. Therefore, alignment methods are needed to support activities on physical devices. In this paper, we present the concept of Re-locations, a method for enabling remote collaboration with augmented reality in incongruent spaces. The idea of the concept is to enrich remote collaboration activities on multiple physical devices with attributes of co-located collaboration such as spatial awareness and spatial referencing by locally relocating remote user representations to user-defined workspaces. We evaluated the Re-locations concept in an explorative user study with dyads using an authentic, collaborative task. Our findings indicate that Re-locations introduce attributes of co-located collaboration like spatial awareness and social presence. Based on our findings, we provide implications for future research and design of remote collaboration systems using AR.

  • Design and Evaluation of ‘Post-WIMP' Systems to Promote the Ergonomic Transfer of Patients


  • Hubenschmid, Sebastian; Wieland, Jonathan; Fink, Daniel I.; Batch, Andrea; Zagermann, Johannes; Elmqvist, Niklas; Reiterer, Harald (2022): ReLive : Bridging In-Situ and Ex-Situ Visual Analytics for Analyzing Mixed Reality User Studies CHI '22 : Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. New York, NY: ACM, 2022, 24. ISBN 978-1-4503-9157-3. Available under: doi: 10.1145/3491102.3517550

    The nascent field of mixed reality is seeing an ever-increasing need for user studies and field evaluation, which are particularly challenging given device heterogeneity, diversity of use, and mobile deployment. Immersive analytics tools have recently emerged to support such analysis in situ, yet the complexity of the data also warrants an ex-situ analysis using more traditional non-immersive visual analytics setups. To bridge the gap between both approaches, we introduce ReLive: a mixed-immersion visual analytics framework for exploring and analyzing mixed reality user studies. ReLive combines an in-situ virtual reality view with a complementary ex-situ desktop view. While the virtual reality view allows users to relive interactive spatial recordings replicating the original study, the synchronized desktop view provides a familiar interface for analyzing aggregated data. We validated our concepts in a two-step evaluation consisting of a design walkthrough and an empirical expert user study.

  • Wieland, Jonathan; Hegemann Garcia, Rudolf C.; Reiterer, Harald; Feuchtner, Tiare (2022): Arrow, Bézier Curve, or Halos? : Comparing 3D Out-of-View Object Visualization Techniques for Handheld Augmented Reality 2022 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). Piscataway, NJ: IEEE, 2022, pp. 797-806. ISBN 978-1-66545-325-7. Available under: doi: 10.1109/ISMAR55827.2022.00098

    Project : SFB TRR 161 TP C 01 Quantitative Messung von Interaktion

    Handheld augmented reality (AR) applications allow users to interact with their virtually augmented environment on the screen of their tablet or smartphone by simply pointing its camera at nearby objects or “points of interest” (POIs). However, this often requires users to carefully scan their surroundings in search of POIs that are out of view. Proposed 2D guides for out-of-view POIs can, unfortunately, be ambiguous due to the projection of a 3D position to 2D screen space. We address this by using 3D visualizations that directly encode the POI’s 3D direction and distance. Based on related work, we implemented three such visualization techniques: (1) 3D Arrow, (2) 3D Bézier Curve, and (3) 3D Halos. We confirmed the applicability of these three techniques in a case study and then compared them in a user study, evaluating performance, workload, and user experience. Participants performed best using 3D Arrow, while surprisingly, 3D Halos led to poor results. We discuss the design implications of these results that can inform future 3D out-of-view object visualization techniques.

  • Zagermann, Johannes; Hubenschmid, Sebastian; Balestrucci, Priscilla; Feuchtner, Tiare; Mayer, Sven; Ernst, Marc O.; Schmidt, Albrecht; Reiterer, Harald (2022): Complementary interfaces for visual computing it - Information Technology. De Gruyter Oldenbourg. 2022, 64(4-5). ISSN 1611-2776. eISSN 2196-7032. Available under: doi: 10.1515/itit-2022-0031

    Project : SFB TRR 161 TP C 01 Quantitative Messung von Interaktion

    With increasing complexity in visual computing tasks, a single device may not be sufficient to adequately support the user’s workflow. Here, we can employ multi-device ecologies such as cross-device interaction, where a workflow can be split across multiple devices, each dedicated to a specific role. But what makes these multi-device ecologies compelling? Based on insights from our research, each device or interface component must contribute a complementary characteristic to increase the quality of interaction and further support users in their current activity. We establish the term complementary interfaces for such meaningful combinations of devices and modalities and provide an initial set of challenges. In addition, we demonstrate the value of complementarity with examples from within our own research.

  • Rasmussen, Troels; Feuchtner, Tiare; Huang, Weidong; Grønbæk, Kaj (2022): Supporting workspace awareness in remote assistance through a flexible multi-camera system and Augmented Reality awareness cues Journal of Visual Communication and Image Representation. Elsevier. 2022, 89, 103655. ISSN 1047-3203. eISSN 1095-9076. Available under: doi: 10.1016/j.jvcir.2022.103655

    Workspace awareness is critical for remote assistance with physical tasks, yet it remains difficult to facilitate. For example, if the remote helper is limited to the single viewpoint provided by the worker’s hand-held or head-mounted camera, she lacks the ability to gain an overview of the workspace. This may be addressed by granting the helper view-independence, e.g., through a multi-camera system. However, it can be cumbersome to set up and calibrate multiple cameras, and it can be challenging for the local worker to identify the current viewpoint of the remote helper. We present CueCam, a multi-camera remote assistance system that supports mutual workspace awareness through a flexible ad-hoc camera calibration and various Augmented Reality cues that communicate the helper’s viewpoint and focus. In particular, we propose visual cues presented through a head-mounted Augmented Reality display (Virtual Hand, Color Cue), and sound cues emitted from the cameras’ physical locations (Spatial Sound). Findings from a lab study indicate that all proposed cues effectively support the worker’s awareness of helper’s location and focus, while the Color Cue demonstrated superiority in task performance and preference ratings during a search task.
