SFB TRR 161 TP C01: Quantitative Measurement of Interaction

Description

The main goal of this project is the definition and validation of new quantitative measurements for evaluating interaction with visual computing systems. This includes the development of new quantitative models, metrics, and tasks to unify usability test settings, as well as the integration of the theoretical and methodological findings into a framework for evaluating interaction. To this end, we will collect and develop a set of tools to ease the planning, conduct, analysis, and comparison of replicable quantitative experiments in visual computing systems.

Institutions
  • WG Reiterer (Human-Computer Interaction)
Publications
    Hubenschmid, Sebastian; Zagermann, Johannes; Dachselt, Raimund; Elmqvist, Niklas; Feiner, Steven; Feuchtner, Tiare; Lee, Benjamin; Reiterer, Harald; Schmalstieg, Dieter (2023): Hybrid User Interfaces: Complementary Interfaces for Mixed Reality Interaction


    Hubenschmid, Sebastian; Fink, Daniel I.; Zagermann, Johannes; Wieland, Jonathan; Reiterer, Harald; Feuchtner, Tiare (2023): Colibri: A Toolkit for Rapid Prototyping of Networking Across Realities. 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). Piscataway, NJ: IEEE, 2023, pp. 9-13. ISBN 979-8-3503-2891-2. Available under: doi: 10.1109/ismar-adjunct60411.2023.00010

We present Colibri, an open source networking toolkit for data exchange, model synchronization, and voice transmission to support rapid development of distributed cross reality research prototypes. Development of such prototypes often involves multiple heterogeneous components, which necessitates data exchange across a network. However, existing networking solutions are often unsuitable for research prototypes as they require significant development resources and may be lacking in terms of data privacy, logging capabilities, latency requirements, or supporting heterogeneous devices. In contrast, Colibri is specifically designed for networking in interactive research prototypes: Colibri facilitates the most common tasks for establishing communication between cross reality components with little to no code necessary. We describe the usage and implementation of Colibri and report on its application in three cross reality prototypes to demonstrate the toolkit’s capabilities. Lastly, we discuss open challenges to better support the creation of cross reality prototypes.

    Zagermann, Johannes; Hubenschmid, Sebastian; Fink, Daniel I.; Wieland, Jonathan; Reiterer, Harald; Feuchtner, Tiare (2023): Challenges and Opportunities for Collaborative Immersive Analytics with Hybrid User Interfaces. 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). Piscataway, NJ: IEEE, 2023, pp. 191-195. ISBN 979-8-3503-2891-2. Available under: doi: 10.1109/ismar-adjunct60411.2023.00044

Over the past years, we have seen an increase in the number of user studies involving mixed reality interfaces. As these environments usually exceed standardized user study settings that only measure time and error, we developed, designed, and evaluated a mixed-immersion evaluation framework called RELIVE. Its combination of in-situ and ex-situ analysis approaches allows for the holistic and malleable analysis and exploration of mixed reality user study data by an individual analyst in a step-by-step approach that we previously described as an asynchronous hybrid user interface. Yet, collaboration was coined as a key aspect for visual and immersive analytics, potentially allowing multiple analysts to synchronously explore mixed reality user study data from different but complementary angles of evaluation using hybrid user interfaces. This leads to a variety of fundamental challenges and opportunities for the research and design of hybrid user interfaces regarding, e.g., the allocation of tasks, the interplay between views, user representations, and collaborative coupling, which are outlined in this position paper.

    Reinschlüssel, Anke; Zagermann, Johannes (2023): Exploring Hybrid User Interfaces for Surgery Planning. 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). Piscataway, NJ: IEEE, 2023, pp. 208-210. ISBN 979-8-3503-2891-2. Available under: doi: 10.1109/ismar-adjunct60411.2023.00048

Hybrid user interfaces are a great opportunity to combine complementary interfaces and make use of the best-suited interface for each step in a workflow. This position paper outlines one diverse application field: surgery planning. Planning a surgery is a complex task, as the surgical team has to gain an overview and understanding of a patient's medical history and of the internal anatomical structures of the organ or region of interest. In this position paper, we outline how different hardware (e.g., mixed reality head-worn devices and physical objects) and interaction concepts (e.g., gesture-based interaction or keyboard and mouse) can create an optimal workflow for surgery planning.

    Kosch, Thomas; Karolus, Jakob; Zagermann, Johannes; Reiterer, Harald; Schmidt, Albrecht; Woźniak, Paweł W. (2023): A Survey on Measuring Cognitive Workload in Human-Computer Interaction. ACM Computing Surveys. ACM. 2023, 55(13s), 283. ISSN 0360-0300. eISSN 1557-7341. Available under: doi: 10.1145/3582272

The ever-increasing number of computing devices around us results in more and more systems competing for our attention, making cognitive workload a crucial factor for the user experience of human-computer interfaces. Research in Human-Computer Interaction (HCI) has used various metrics to determine users' mental demands. However, a systematic way to choose an appropriate and effective measure of cognitive workload in experimental setups is still lacking, posing a challenge to reproducibility. We present a literature survey of past and current metrics for cognitive workload used throughout the HCI literature to address this challenge. By first exploring what cognitive workload resembles in the HCI context, we derive a categorization that supports researchers and practitioners in selecting cognitive workload metrics for system design and evaluation. We conclude with three research gaps: (1) defining and interpreting cognitive workload in HCI, (2) the hidden cost of the NASA-TLX, and (3) HCI research as a catalyst for workload-aware systems, highlighting that HCI research has to deepen and conceptualize the understanding of cognitive workload in the context of interactive computing systems.

    Jenadeleh, Mohsen; Zagermann, Johannes; Reiterer, Harald; Reips, Ulf-Dietrich; Hamzaoui, Raouf; Saupe, Dietmar (2023): Relaxed forced choice improves performance of visual quality assessment methods

In image quality assessment, a collective visual quality score for an image or video is obtained from the individual ratings of many subjects. One commonly used format for these experiments is the two-alternative forced choice method. Two stimuli with the same content but differing visual quality are presented sequentially or side-by-side. Subjects are asked to select the one of better quality, and when uncertain, they are required to guess. The relaxed alternative forced choice format aims to reduce the cognitive load and the noise in the responses due to guessing by providing a third response option, namely, "not sure". This work presents a large and comprehensive crowdsourcing experiment to compare these two response formats: the one with the "not sure" option and the one without it. To provide unambiguous ground truth for quality evaluation, subjects were shown pairs of images with differing numbers of dots and asked each time to choose the one with more dots. Our crowdsourcing study involved 254 participants and was conducted using a within-subject design. Each participant was asked to respond to 40 pair comparisons with and without the "not sure" response option and completed a questionnaire to evaluate their cognitive load for each testing condition. The experimental results show that the inclusion of the "not sure" response option in the forced choice method reduced mental load and led to models with better data fit and correspondence to ground truth. We also tested for the equivalence of the models and found that they were different. The dataset is available at database.mmsp-kn.de/cogvqa-database.html.

    Hubenschmid, Sebastian; Zagermann, Johannes; Leicht, Daniel; Reiterer, Harald; Feuchtner, Tiare (2023): ARound the Smartphone: Investigating the Effects of Virtually-Extended Display Size on Spatial Memory. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23). New York, NY, USA: ACM, 2023. ISBN 978-1-4503-9421-5. Available under: doi: 10.1145/3544548.3581438

Smartphones conveniently place large information spaces in the palms of our hands. While research has shown that larger screens positively affect spatial memory, workload, and user experience, smartphones remain fairly compact for the sake of device ergonomics and portability. Thus, we investigate the use of hybrid user interfaces to virtually increase the available display size by complementing the smartphone with an augmented reality head-worn display. We thereby combine the benefits of familiar touch interaction with the near-infinite visual display space afforded by augmented reality. To better understand the potential of virtually-extended displays and the possible issues of splitting the user’s visual attention between two screens (real and virtual), we conducted a within-subjects experiment with 24 participants completing navigation tasks using different virtually-augmented display sizes. Our findings reveal that a desktop monitor size represents a “sweet spot” for extending smartphones with augmented reality, informing the design of hybrid user interfaces.

    Chiossi, Francesco; Zagermann, Johannes; Karolus, Jakob; Rodrigues, Nils; Balestrucci, Priscilla; Weiskopf, Daniel; Ehinger, Benedikt; Feuchtner, Tiare; Reiterer, Harald; Chuang, Lewis L. (2022): Adapting visualizations and interfaces to the user. it - Information Technology. De Gruyter Oldenbourg. 2022, 64(4-5), pp. 133-143. ISSN 1611-2776. eISSN 2196-7032. Available under: doi: 10.1515/itit-2022-0035

Adaptive visualization and interfaces pervade our everyday tasks to improve interaction from the point of view of user performance and experience. This approach allows using several user inputs, whether physiological, behavioral, qualitative, or multimodal combinations, to enhance the interaction. Due to the multitude of approaches, we outline the current research trends of inputs used to adapt visualizations and user interfaces. Moreover, we discuss methodological approaches used in mixed reality, physiological computing, visual analytics, and proficiency-aware systems. With this work, we provide an overview of the current research in adaptive systems.

    Fleck, Philipp; Sousa Calepso, Aimée; Hubenschmid, Sebastian; Sedlmair, Michael; Schmalstieg, Dieter (2022): RagRug: A Toolkit for Situated Analytics. IEEE Transactions on Visualization and Computer Graphics. IEEE. ISSN 1077-2626. eISSN 1941-0506. Available under: doi: 10.1109/TVCG.2022.3157058

We present RagRug, an open-source toolkit for situated analytics. The abilities of RagRug go beyond previous immersive analytics toolkits by focusing on the specific requirements that emerge when using augmented reality (AR) rather than virtual reality. RagRug combines state-of-the-art visual encoding capabilities with a comprehensive physical-virtual model, which lets application developers systematically describe the physical objects in the real world and their role in AR. We connect AR visualization with data streams from the Internet of Things using distributed dataflow. To this aim, we use reactive programming patterns so that visualizations become context-aware, i.e., they adapt to events coming in from the environment. The resulting authoring system is low-code; it emphasises describing the physical and the virtual world and the dataflow between the elements contained therein. We describe the technical design and implementation of RagRug, and report on five example applications illustrating the toolkit's abilities.

    Hubenschmid, Sebastian; Wieland, Jonathan; Fink, Daniel I.; Batch, Andrea; Zagermann, Johannes; Elmqvist, Niklas; Reiterer, Harald (2022): ReLive: Bridging In-Situ and Ex-Situ Visual Analytics for Analyzing Mixed Reality User Studies. CHI '22: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. New York, NY: ACM, 2022, 24. ISBN 978-1-4503-9157-3. Available under: doi: 10.1145/3491102.3517550

The nascent field of mixed reality is seeing an ever-increasing need for user studies and field evaluation, which are particularly challenging given device heterogeneity, diversity of use, and mobile deployment. Immersive analytics tools have recently emerged to support such analysis in situ, yet the complexity of the data also warrants an ex-situ analysis using more traditional non-immersive visual analytics setups. To bridge the gap between both approaches, we introduce ReLive: a mixed-immersion visual analytics framework for exploring and analyzing mixed reality user studies. ReLive combines an in-situ virtual reality view with a complementary ex-situ desktop view. While the virtual reality view allows users to relive interactive spatial recordings replicating the original study, the synchronized desktop view provides a familiar interface for analyzing aggregated data. We validated our concepts in a two-step evaluation consisting of a design walkthrough and an empirical expert user study.

    Wieland, Jonathan; Hegemann Garcia, Rudolf C.; Reiterer, Harald; Feuchtner, Tiare (2022): Arrow, Bézier Curve, or Halos?: Comparing 3D Out-of-View Object Visualization Techniques for Handheld Augmented Reality. 2022 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). Piscataway, NJ: IEEE, 2022, pp. 797-806. ISBN 978-1-66545-325-7. Available under: doi: 10.1109/ISMAR55827.2022.00098

Handheld augmented reality (AR) applications allow users to interact with their virtually augmented environment on the screen of their tablet or smartphone by simply pointing its camera at nearby objects or “points of interest” (POIs). However, this often requires users to carefully scan their surroundings in search of POIs that are out of view. Proposed 2D guides for out-of-view POIs can, unfortunately, be ambiguous due to the projection of a 3D position to 2D screen space. We address this by using 3D visualizations that directly encode the POI’s 3D direction and distance. Based on related work, we implemented three such visualization techniques: (1) 3D Arrow, (2) 3D Bézier Curve, and (3) 3D Halos. We confirmed the applicability of these three techniques in a case study and then compared them in a user study, evaluating performance, workload, and user experience. Participants performed best using 3D Arrow, while surprisingly, 3D Halos led to poor results. We discuss the design implications of these results that can inform future 3D out-of-view object visualization techniques.
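
The core idea of directly encoding a POI's 3D direction and distance can be illustrated with a few lines of vector math. This is only a minimal sketch of the underlying geometry, not the paper's implementation; the function name, the sample field-of-view check, and all parameters are our own assumptions:

```python
import math

def poi_guide(camera_pos, camera_forward, poi_pos, fov_deg=60.0):
    """Return (direction, distance, in_view) for a point of interest (POI).

    direction: unit vector from the camera to the POI (what a 3D arrow
    or curve would point along); distance: Euclidean distance (what the
    visualization would encode, e.g., via length or halo size); in_view:
    whether the POI falls inside the assumed camera field of view.
    """
    # Vector from camera to POI and its length.
    d = [p - c for p, c in zip(poi_pos, camera_pos)]
    dist = math.sqrt(sum(x * x for x in d))
    direction = [x / dist for x in d]
    # Angle between viewing direction and POI direction decides visibility.
    f_norm = math.sqrt(sum(x * x for x in camera_forward))
    cos_angle = sum(a * b for a, b in zip(direction, camera_forward)) / f_norm
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    in_view = angle <= fov_deg / 2.0
    return direction, dist, in_view
```

A POI straight ahead is reported as in view, while one off to the side at the same distance is flagged as out of view and would receive a 3D guide.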

    Fink, Daniel I.; Zagermann, Johannes; Reiterer, Harald; Jetter, Hans-Christian (2022): Re-locations: Augmenting Personal and Shared Workspaces to Support Remote Collaboration in Incongruent Spaces. WALLACE, James, ed., and others. Proceedings of the ACM on Human-Computer Interaction; 6. New York, NY: ACM, 2022, 556. Available under: doi: 10.1145/3567709

Augmented reality (AR) can create the illusion of being virtually co-located during remote collaboration, e.g., by visualizing remote co-workers as avatars. However, spatial awareness of each other’s activities is limited as physical spaces, including the position of physical devices, are often incongruent. Therefore, alignment methods are needed to support activities on physical devices. In this paper, we present the concept of Re-locations, a method for enabling remote collaboration with augmented reality in incongruent spaces. The idea of the concept is to enrich remote collaboration activities on multiple physical devices with attributes of co-located collaboration such as spatial awareness and spatial referencing by locally relocating remote user representations to user-defined workspaces. We evaluated the Re-locations concept in an explorative user study with dyads using an authentic, collaborative task. Our findings indicate that Re-locations introduce attributes of co-located collaboration like spatial awareness and social presence. Based on our findings, we provide implications for future research and design of remote collaboration systems using AR.

    Zagermann, Johannes; Hubenschmid, Sebastian; Balestrucci, Priscilla; Feuchtner, Tiare; Mayer, Sven; Ernst, Marc O.; Schmidt, Albrecht; Reiterer, Harald (2022): Complementary interfaces for visual computing. it - Information Technology. De Gruyter Oldenbourg. 2022, 64(4-5). ISSN 1611-2776. eISSN 2196-7032. Available under: doi: 10.1515/itit-2022-0031

With increasing complexity in visual computing tasks, a single device may not be sufficient to adequately support the user’s workflow. Here, we can employ multi-device ecologies such as cross-device interaction, where a workflow can be split across multiple devices, each dedicated to a specific role. But what makes these multi-device ecologies compelling? Based on insights from our research, each device or interface component must contribute a complementary characteristic to increase the quality of interaction and further support users in their current activity. We establish the term complementary interfaces for such meaningful combinations of devices and modalities and provide an initial set of challenges. In addition, we demonstrate the value of complementarity with examples from within our own research.

    Hubenschmid, Sebastian; Zagermann, Johannes; Butscher, Simon; Reiterer, Harald (2021): STREAM: Exploring the Combination of Spatially-Aware Tablets with Augmented Reality Head-Mounted Displays for Immersive Analytics. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2021). New York: ACM, 2021. ISBN 978-1-4503-8096-6. Available under: doi: 10.1145/3411764.3445298

Recent research in the area of immersive analytics demonstrated the utility of head-mounted augmented reality devices for visual data analysis. However, it can be challenging to use the mid-air gestures supported by default to interact with visualizations in augmented reality (e.g., due to limited precision). Touch-based interaction (e.g., via mobile devices) can compensate for these drawbacks, but is limited to two-dimensional input. In this work we present STREAM: Spatially-aware Tablets combined with Augmented Reality Head-Mounted Displays for the multimodal interaction with 3D visualizations. We developed a novel eyes-free interaction concept for the seamless transition between the tablet and the augmented reality environment. A user study reveals that participants appreciated the novel interaction concept, indicating the potential for spatially-aware tablets in augmented reality. Based on our findings, we provide design insights to foster the application of spatially-aware touch devices in augmented reality and research implications indicating areas that need further investigation.

    Vock, Katja; Hubenschmid, Sebastian; Zagermann, Johannes; Butscher, Simon; Reiterer, Harald (2021): IDIAR: Augmented Reality Dashboards to Supervise Mobile Intervention Studies. Mensch und Computer 2021 (MuC '21). New York, NY: ACM, 2021, pp. 248-259. ISBN 978-1-4503-8645-6. Available under: doi: 10.1145/3473856.3473876

Mobile intervention studies employ mobile devices to observe participants’ behavior change over several weeks. Researchers regularly monitor high-dimensional data streams to ensure data quality and prevent data loss (e.g., missing engagement or malfunctions). The multitude of problem sources hampers possible automated detection of such irregularities – providing a use case for interactive dashboards. With the advent of untethered head-mounted AR devices, these dashboards can be placed anywhere in the user's physical environment, leveraging the available space and allowing for flexible information arrangement and natural navigation. In this work, we present the user-centered design and the evaluation of IDIAR: Interactive Dashboards in AR, combining a head-mounted display with the familiar interaction of a smartphone. A user study with 15 domain experts for mobile intervention studies shows that participants appreciated the multimodal interaction approach. Based on our findings, we provide implications for research and design of interactive dashboards in AR.

    Wieland, Jonathan; Zagermann, Johannes; Müller, Jens; Reiterer, Harald (2021): Separation, Composition, or Hybrid?: Comparing Collaborative 3D Object Manipulation Techniques for Handheld Augmented Reality. 2021 IEEE International Symposium on Mixed and Augmented Reality. Piscataway, NJ: IEEE, 2021, pp. 403-412. ISBN 978-1-66540-158-6. Available under: doi: 10.1109/ISMAR52148.2021.00057

Augmented Reality (AR) supported collaboration is a popular topic in HCI research. Previous work has shown the benefits of collaborative 3D object manipulation and identified two possibilities: either separate or compose users’ inputs. However, their experimental comparison using handheld AR displays is still missing. We, therefore, conducted an experiment in which we tasked 24 dyads with collaboratively positioning virtual objects in handheld AR using three manipulation techniques: 1) Separation – performing only different manipulation tasks (i.e., translation or rotation) simultaneously, 2) Composition – performing only the same manipulation tasks simultaneously and combining individual inputs using a merge policy, and 3) Hybrid – performing any manipulation tasks simultaneously, enabling dynamic transitions between Separation and Composition. While all techniques were similarly effective, Composition was least efficient, with higher subjective workload and worse user experience. Preferences were polarized between clear work division (Separation) and freedom of action (Hybrid). Based on our findings, we offer research and design implications.

    Zagermann, Johannes; Pfeil, Ulrike; von Bauer, Philipp; Fink, Daniel; Reiterer, Harald (2020): "It’s in my other hand!": Studying the Interplay of Interaction Techniques and Multi-Tablet Activities. CHI '20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. New York, NY: ACM, 2020, 413. ISBN 978-1-4503-6708-0. Available under: doi: 10.1145/3313831.3376540

"It’s in my other hand!" : Studying the Interplay of Interaction Techniques and Multi-Tablet Activities

×

Cross-device interaction with tablets is a popular topic in HCI research. Recent work has shown the benefits of including multiple devices into users’ workflows while various interaction techniques allow transferring content across devices. However, users are only reluctantly using multiple devices in combination. At the same time, research on cross-device interaction struggles to find a frame of reference to compare techniques or systems. In this paper, we try to address these challenges by studying the interplay of interaction techniques, device utilization, and task-specific activities in a user study with 24 participants from different but complementary angles of evaluation using an abstract task, a sensemaking task, and three interaction techniques. We found that different interaction techniques have a lower influence than expected, that work behaviors and device utilization depend on the task at hand, and that participants value specific aspects of cross-device interaction.

    Bishop, Fearn; Zagermann, Johannes; Pfeil, Ulrike; Sanderson, Gemma; Reiterer, Harald; Hinrichs, Uta (2020): Construct-A-Vis: exploring the free-form visualization processes of children. IEEE Transactions on Visualization and Computer Graphics. Institute of Electrical and Electronics Engineers (IEEE). 2020, 26(1), pp. 451-460. ISSN 1077-2626. eISSN 1941-0506. Available under: doi: 10.1109/TVCG.2019.2934804

Building data analysis skills is part of modern elementary school curricula. Recent research has explored how to facilitate children's understanding of visual data representations through completion exercises which highlight links between concrete and abstract mappings. This approach scaffolds visualization activities by presenting a target visualization to children. But how can we engage children in more free-form visual data mapping exercises that are driven by their own mapping ideas? How can we scaffold a creative exploration of visualization techniques and mapping possibilities? We present Construct-A-Vis, a tablet-based tool designed to explore the feasibility of free-form and constructive visualization activities with elementary school children. Construct-A-Vis provides adjustable levels of scaffolding visual mapping processes. It can be used by children individually or as part of collaborative activities. Findings from a study with elementary school children using Construct-A-Vis individually and in pairs highlight the potential of this free-form constructive approach, as visible in children's diverse visualization outcomes and their critical engagement with the data and mapping processes. Based on our study findings we contribute insights into the design of free-form visualization tools for children, including the role of tool-based scaffolding mechanisms and shared interactions to guide visualization activities with children.

    Borowski, Marcel; Zagermann, Johannes; Klokmose, Clemens N.; Reiterer, Harald; Rädle, Roman (2020): Exploring the Benefits and Barriers of Using Computational Notebooks for Collaborative Programming Assignments. SIGCSE '20: Proceedings of the 51st ACM Technical Symposium on Computer Science Education. New York, NY: ACM, 2020, pp. 468-474. ISBN 978-1-4503-6793-6. Available under: doi: 10.1145/3328778.3366887

Programming assignments in computer science courses are often processed in pairs or groups of students. While working together, students face several shortcomings in today's software: the lack of real-time collaboration capabilities, the setup time of the development environment, and the use of different devices or operating systems can hamper students when working together on assignments. Text processing platforms like Google Docs solve these problems for the writing process of prose text, and computational notebooks like Google Colaboratory for data analysis tasks. However, none of these platforms allows users to implement interactive applications. We deployed a web-based literate programming system for three months during an introductory course on application development to explore how collaborative programming practices unfold and how the structure of computational notebooks affects the development. During the course, pairs of students solved weekly programming assignments. We analyzed data from weekly questionnaires, three focus groups with students and teaching assistants, and keystroke-level log data to facilitate the understanding of the subtleties of collaborative programming with computational notebooks. Findings reveal that there are distinct collaboration patterns; the preferred collaboration pattern varied between pairs and even within pairs over the course of three months. Recognizing these distinct collaboration patterns can help to design future computational notebooks for collaborative programming assignments.

    Blumenschein, Michael; Behrisch, Michael; Schmid, Stefanie; Butscher, Simon; Wahl, Deborah R.; Villinger, Karoline; Renner, Britta; Reiterer, Harald; Keim, Daniel A. (2019): SMARTexplore: Simplifying High-Dimensional Data Analysis through a Table-Based Visual Analytics Approach. IEEE Conference on Visual Analytics Science and Technology (VAST) 2018. Piscataway, NJ: IEEE, 2019. ISBN 978-1-5386-6861-0. Available under: doi: 10.1109/VAST.2018.8802486

We present SMARTexplore, a novel visual analytics technique that simplifies the identification and understanding of clusters, correlations, and complex patterns in high-dimensional data. The analysis is integrated into an interactive table-based visualization that maintains a consistent and familiar representation throughout the analysis. The visualization is tightly coupled with pattern matching, subspace analysis, reordering, and layout algorithms. To increase the analyst’s trust in the revealed patterns, SMARTexplore automatically selects and computes statistical measures based on dimension and data properties. While existing approaches to analyzing high-dimensional data (e.g., planar projections and parallel coordinates) have proven effective, they typically have steep learning curves for non-visualization experts. Our evaluation, based on three expert case studies, confirms that non-visualization experts successfully reveal patterns in high-dimensional data when using SMARTexplore.

    Müller, Jens; Zagermann, Johannes; Wieland, Jonathan; Pfeil, Ulrike; Reiterer, Harald (2019): A Qualitative Comparison Between Augmented and Virtual Reality Collaboration with Handheld Devices. ALT, Florian, ed., Andreas BULLING, ed., Tanja DÖRING, ed. MuC'19: Proceedings of Mensch und Computer 2019. New York, NY: ACM, 2019, pp. 399-410. ISBN 978-1-4503-7198-8. Available under: doi: 10.1145/3340764.3340773

Handheld Augmented Reality (AR) displays offer a see-through option to create the illusion of virtual objects being integrated into the viewer’s physical environment. Some AR display technologies also allow for the deactivation of the see-through option, turning AR tablets into Virtual Reality (VR) devices that integrate the virtual objects into an exclusively virtual environment. Both display configurations are typically available on handheld devices, raising the question of their influence on users’ experience during collaborative activities. In two experiments, we studied how the different display configurations influence user experience, workload, and team performance of co-located and distributed collaborators during a spatial referencing task. A mixed-methods approach revealed that participants’ opinions were polarized towards the two display configurations, regardless of the spatial distribution of collaboration. Based on our findings, we identify critical aspects to be addressed in future research to better understand and support co-located and distributed collaboration using AR and VR displays.

    Zagermann, Johannes; Pfeil, Ulrike; Reiterer, Harald (2018): Studying Eye Movements as a Basis for Measuring Cognitive Load Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. New York, NY: ACM Press, 2018, LBW095. ISBN 978-1-4503-5621-3. Available under: doi: 10.1145/3170427.3188628

Users' cognitive load while interacting with a system is a valuable metric for evaluations in HCI. We encourage the analysis of eye movements as an unobtrusive and widely available way to measure cognitive load. In this paper, we report initial findings from a user study with 26 participants working on three visual search tasks that represent different levels of difficulty. In addition, we linearly increased the cognitive demand while participants solved the tasks. This allowed us to analyze the reaction of individual eye movements to different levels of task difficulty. Our results show how pupil dilation, blink rate, and the number of fixations and saccades per second individually react to changes in cognitive activity. We discuss how these measurements could be combined in future work to allow for a comprehensive investigation of cognitive load in interactive settings.

    Hubenschmid, Sebastian; Zagermann, Johannes; Butscher, Simon; Reiterer, Harald (2018): Employing Tangible Visualisations in Augmented Reality with Mobile Devices MultimodalVis ’18 Workshop at AVI 2018. 2018

Recent research has demonstrated the benefits of mixed realities for information visualisation. Often the focus lies on the visualisation itself, leaving interaction opportunities through different modalities largely unexplored. Yet, mixed reality in particular can benefit from a combination of different modalities. This work examines an existing mixed reality visualisation which is combined with a large tabletop for touch interaction. Although this allows for familiar operation, the approach comes with some limitations which we address by employing mobile devices, thus adding tangibility and proxemics as input modalities.

    Chuang, Lewis L.; Pfeil, Ulrike (2018): Transparency and Openness Promotion Guidelines for HCI Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. New York, NY: ACM Press, 2018, SIG04. ISBN 978-1-4503-5621-3. Available under: doi: 10.1145/3170427.3185377

This special interest group addresses the status quo of HCI research with regards to research practices of transparency and openness. Specifically, it discusses whether current practices are in line with the standards applied to other fields (e.g., psychology, economics, medicine). It seeks to identify current practices that are more progressive and worth communicating to other disciplines, while evaluating whether practices in other disciplines are likely to apply to HCI research constructively. Potential outcomes include: (1) a review of current HCI research policies, (2) a report on recommended practices, and (3) a replication project of key findings in HCI research.

    Jäckle, Dominik; Stoffel, Florian; Mittelstädt, Sebastian; Keim, Daniel A.; Reiterer, Harald (2017): Interpretation of Dimensionally-reduced Crime Data : A Study with Untrained Domain Experts Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. Setúbal, Portugal: SCITEPRESS, 2017, pp. 164-175. ISBN 978-989-758-228-8. Available under: doi: 10.5220/0006265101640175

Dimensionality reduction (DR) techniques aim to reduce the amount of considered dimensions, yet preserving as much information as possible. According to many visualization researchers, DR results lack interpretability, in particular for domain experts not familiar with machine learning or advanced statistics. Thus, interactive visual methods have been extensively researched for their ability to improve transparency and ease the interpretation of results. However, these methods have primarily been evaluated using case studies and interviews with experts trained in DR. In this paper, we describe a phenomenological analysis investigating if researchers with no or only limited training in machine learning or advanced statistics can interpret the depiction of a data projection and what their incentives are during interaction. We, therefore, developed an interactive system for DR, which unifies mixed data types as they appear in real-world data. Based on this system, we provided data analysts of a Law Enforcement Agency (LEA) with dimensionally-reduced crime data and let them explore and analyze domain-relevant tasks without providing further conceptual information. Results of our study reveal that these untrained experts encounter few difficulties in interpreting the results and drawing conclusions given a domain-relevant use case and their experience. We further discuss the results based on collected informal feedback and observations.

    Zagermann, Johannes; Pfeil, Ulrike; Acevedo, Carmela; Reiterer, Harald (2017): Studying the Benefits and Challenges of Spatial Distribution and Physical Affordances in a Multi-Device Workspace Proceedings of the 16th International Conference on Mobile and Ubiquitous Multimedia. New York, NY: ACM, 2017. ISBN 978-1-4503-5378-6. Available under: doi: 10.1145/3152832.3152855

In recent years, research on cross-device interaction has become a popular topic in HCI leading to novel interaction techniques mutually interfering with new evolving theoretical paradigms. Building on previous research, we implemented an individual multi-device work environment for creative activities. In a study with 20 participants, we compared a traditional toolbar-based condition with two conditions facilitating spatially distributed tools on digital panels and on physical devices. We analyze participants’ interactions with the tools, encountered problems and corresponding solutions, as well as subjective task load and user experience. Our findings show that the spatial distribution of tools indeed offers advantages, but also elicits new problems, that can partly be leveraged by the physical affordances of mobile devices.

    Zagermann, Johannes; Pfeil, Ulrike; Fink, Daniel I.; von Bauer, Philipp; Reiterer, Harald (2017): Memory in Motion : The Influence of Gesture- and Touch-Based Input Modalities on Spatial Memory CHI'17 : Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2017, pp. 1899-1910. ISBN 978-1-4503-4655-9. Available under: doi: 10.1145/3025453.3026001

People's ability to remember and recall spatial information can be harnessed to improve navigation and search performances in interactive systems. In this paper, we investigate how display size and input modality influence spatial memory, especially in relation to efficiency and user satisfaction. Based on an experiment with 28 participants, we analyze the effect of three input modalities (trackpad, direct touch, and gesture-based motion controller) and two display sizes (10.6" and 55") on people's ability to navigate to spatially spread items and recall their positions. Our findings show that the impact of input modality and display size on spatial memory is not straightforward, but characterized by trade-offs between spatial memory, efficiency, and user satisfaction.

    Zagermann, Johannes; Pfeil, Ulrike; Reiterer, Harald (2016): Measuring Cognitive Load using Eye Tracking Technology in Visual Computing SEDLMAIR, Michael, ed. and others. Proceedings of the Sixth Workshop on Beyond Time and Errors on Novel Evaluation Methods for Visualization, BELIV '16. New York, NY: ACM Press, 2016, pp. 78-85. ISBN 978-1-4503-4818-8. Available under: doi: 10.1145/2993901.2993908

In this position paper we encourage the use of eye tracking measurements to investigate users' cognitive load while interacting with a system. We start with an overview of how eye movements can be interpreted to provide insight about cognitive processes and present a descriptive model representing the relations of eye movements and cognitive load. Then, we discuss how specific characteristics of human-computer interaction (HCI) interfere with the model and impede the application of eye tracking data to measure cognitive load in visual computing. As a result, we present a refined model, embedding the characteristics of HCI into the relation of eye tracking data and cognitive load. Based on this, we argue that eye tracking should be considered as a valuable instrument to analyze cognitive processes in visual computing and suggest future research directions to tackle outstanding issues.

    Zagermann, Johannes; Pfeil, Ulrike; Rädle, Roman; Jetter, Hans-Christian; Klokmose, Clemens; Reiterer, Harald (2016): When Tablets meet Tabletops : The Effect of Tabletop Size on Around-the-Table Collaboration with Personal Tablets KAYE, Jofish, ed. and others. CHI'16 : Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. New York, NY: ACM Press, 2016, pp. 5470-5481. ISBN 978-1-4503-3362-7. Available under: doi: 10.1145/2858036.2858224

Cross-device collaboration with tablets is an increasingly popular topic in HCI. Previous work has shown that tablet-only collaboration can be improved by an additional shared workspace on an interactive tabletop. However, large tabletops are costly and need space, raising the question to what extent the physical size of shared horizontal surfaces really pays off. In order to analyse the suitability of smaller-than-tabletop devices (e.g. tablets) as a low-cost alternative, we studied the effect of the size of a shared horizontal interactive workspace on users' attention, awareness, and efficiency during cross-device collaboration. In our study, 15 groups of two users executed a sensemaking task with two personal tablets (9.7") and a horizontal shared display of varying sizes (10.6", 27", and 55"). Our findings show that different sizes lead to differences in participants' interaction with the tabletop and in the groups' communication styles. To our surprise, we found that larger tabletops do not necessarily improve collaboration or sensemaking results, because they can divert users' attention away from their collaborators and towards the shared display.

    Müller, Jens; Rädle, Roman; Reiterer, Harald (2016): Virtual Objects as Spatial Cues in Collaborative Mixed Reality Environments : How They Shape Communication Behavior and User Task Load KAYE, Jofish, ed. and others. CHI'16 : Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. New York, NY: ACM Press, 2016, pp. 1245-1249. ISBN 978-1-4503-3362-7. Available under: doi: 10.1145/2858036.2858043

In collaborative activities, collaborators can use physical objects in their shared environment as spatial cues to guide each other's attention. Collaborative mixed reality environments (MREs) include both physical and digital objects. To study how virtual objects influence collaboration and whether they are used as spatial cues, we conducted a controlled lab experiment with 16 dyads. Results of our study show that collaborators favored the digital objects as spatial cues over the physical environment and the physical objects: Collaborators used significantly fewer deictic gestures in favor of more disambiguous verbal references and reported a decreased subjective workload when virtual objects were present. This suggests adding additional virtual objects as spatial cues to MREs to improve user experience during collaborative mixed reality tasks.

    Lischke, Lars; Mayer, Sven; Wolf, Katrin; Henze, Niels; Reiterer, Harald; Schmidt, Albrecht (2016): Screen arrangements and interaction areas for large display work places OJALA, Timo, ed. and others. PerDis '16 : Proceedings of the 5th ACM International Symposium on Pervasive Displays. New York, NY: ACM Press, 2016, pp. 228-234. ISBN 978-1-4503-4366-4. Available under: doi: 10.1145/2914920.2915027

Size and resolution of computer screens are constantly increasing. Individual screens can easily be combined to wall-sized displays. This enables computer displays that are folded, straight, bow-shaped or even spread. As possibilities for arranging the screens are manifold, it is unclear what arrangements are appropriate. Moreover, it is unclear how content and applications should be arranged on such large displays. To determine guidelines for the arrangement of multiple screens and for content and application layouts, we conducted a design study. In the study, we asked 16 participants to arrange a large screen setup as well as to create layouts of multiple common application windows. Based on the results we provide a classification for screen arrangements and interaction areas. We identified that screen space should be divided into a central area for interactive applications and peripheral areas, mainly for displaying additional content.

    Butscher, Simon; Reiterer, Harald (2016): Applying Guidelines for the Design of Distortions on Focus+Context Interfaces BUONO, Paolo, ed. and others. AVI '16 : Proceedings of the International Working Conference on Advanced Visual Interfaces. New York, NY: ACM Press, 2016, pp. 244-247. ISBN 978-1-4503-4131-8. Available under: doi: 10.1145/2909132.2909284

Distortion-based visualization techniques allow users to examine focused regions of a multiscale space at high scales while preserving their contextual information. However, the distortion can come at the cost of confusion, disorientation and impairment of the users' spatial memory. Yet, how distortions influence users' ability to build up spatial memory, while taking into account human skills of perception, interpretation and comprehension, remains underexplored. This note reports findings of an experimental comparison between a distortion-based focus+context interface and an undistorted overview+detail interface. The focus+context technique follows guidelines for the design of comprehensible distortions: make use of real-world metaphors, visual clues like shading, smooth transitions and scaled-only focus regions. The results show that a focus+context technique designed following these guidelines helps users keep track of their position within the multiscale space and does not impair their spatial memory.

    Zagermann, Johannes; Pfeil, Ulrike; Schreiner, Mario; Rädle, Roman; Jetter, Hans-Christian; Reiterer, Harald (2015): Reporting Experiences on Group Activities in Cross-Device Settings Accepted Paper for Surface 2015 : Workshop on Interacting with Multi-Device Ecologies in the Wild. 2015

Even though mobile devices are ubiquitous and users often own several of them, using them in concert to achieve a common goal is not well supported and remains a challenge for HCI. In this paper, we report on our observations of cross-device usage within groups when they engaged in a dyadic collaborative sensemaking task. Based on our findings, we discuss limitations of a state-of-the-art cross-device setting and present a set of design recommendations. We then propose an alternative design that aims for greater flexibility when using mobile devices to enable a free configuration of workspaces depending on users’ current activity.

    Lischke, Lars; Mayer, Sven; Wolf, Katrin; Henze, Niels; Schmidt, Albrecht; Leifert, Svenja; Reiterer, Harald (2015): Using Space : Effect of Display Size on Users' Search Performance BEGOLE, Bo, ed. and others. CHI EA '15 Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems. New York: ACM, 2015, pp. 1845-1850. ISBN 978-1-4503-3146-3. Available under: doi: 10.1145/2702613.2732845

Due to advances in technology, large displays with very high resolution have become affordable for daily work. Today it is possible to build display walls with a pixel density that is comparable to standard office screens. Previous work indicates that physical navigation enables a deeper engagement with the data set. In particular, the visibility of detailed data subsets on large screens supports the user's work and understanding of large data. In contrast to previous work, we explore how users' performance scales with an increasing amount of large display space when working with text documents. In a controlled experiment, we determined participants' performance when searching for titles and images in large text documents using one to six 50" 4K monitors. Our results show that users' visual search performance does not increase linearly with an increasing amount of display space.

Funding sources
Name: SFB
Funding type: third-party funds
Category: research funding program
Project no.: 634/15
Further information
Period: 01.07.2015 – 30.06.2019