SFB TRR 161 Subproject C01: Quantitative Measurement of Interaction

Description

The main goal of this project is the definition and validation of new quantitative measures for evaluating interaction with visual computing systems. This includes the development of new quantitative models, metrics, and tasks that unify usability test settings, as well as the integration of the theoretical and methodological findings into a framework for evaluating interaction. To this end, we will collect and develop a set of tools that ease the planning, execution, analysis, and comparison of replicable quantitative experiments in visual computing systems.
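The project description does not prescribe a concrete data format for such experiment tooling. Purely as a hypothetical sketch of what a shared tool for comparable, replicable studies might standardize, the following Python snippet records per-trial measurements and aggregates them per condition (all names — `Trial`, `summarize`, the condition labels — are illustrative assumptions, not part of the project):

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Trial:
    """One trial of one participant under one experimental condition."""
    participant: str
    condition: str
    completion_time_s: float
    errors: int

def summarize(trials, condition):
    """Aggregate completion times for a condition, for cross-study comparison."""
    times = [t.completion_time_s for t in trials if t.condition == condition]
    return {
        "n": len(times),
        "mean_s": mean(times),
        "sd_s": stdev(times) if len(times) > 1 else 0.0,
    }

trials = [
    Trial("p1", "touch", 12.4, 0),
    Trial("p2", "touch", 10.8, 1),
    Trial("p1", "gesture", 15.1, 2),
]
summary = summarize(trials, "touch")  # aggregate over the two "touch" trials
```

A common schema like this is the kind of building block that would let results from separate usability experiments be pooled and compared.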

Institutions
  • Department of Computer and Information Science
  • Collaborative Research Centres
  • Research Institutions
  • SFB TRR 161 Quantitative Methods for Visual Computing
Publications
    Fleck, Philipp; Sousa Calepso, Aimée; Hubenschmid, Sebastian; Sedlmair, Michael; Schmalstieg, Dieter (2022): RagRug : A Toolkit for Situated Analytics IEEE Transactions on Visualization and Computer Graphics ; 2022. - IEEE. - ISSN 1077-2626. - eISSN 1941-0506

RagRug : A Toolkit for Situated Analytics


We present RagRug, an open-source toolkit for situated analytics. The abilities of RagRug go beyond previous immersive analytics toolkits by focusing on specific requirements emerging when using augmented reality (AR) rather than virtual reality. RagRug combines state-of-the-art visual encoding capabilities with a comprehensive physical-virtual model, which lets application developers systematically describe the physical objects in the real world and their role in AR. We connect AR visualization with data streams from the Internet of Things using distributed dataflow. To this end, we use reactive programming patterns so that visualizations become context-aware, i.e., they adapt to events coming in from the environment. The resulting authoring system is low-code; it emphasises describing the physical and the virtual world and the dataflow between the elements contained therein. We describe the technical design and implementation of RagRug, and report on five example applications illustrating the toolkit's abilities.
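RagRug itself is built on IoT dataflow tooling and AR rendering; the reactive pattern the abstract describes — visualizations that adapt to incoming environment events — can be illustrated with a deliberately simplified Python sketch that is unrelated to RagRug's actual codebase (the sensor, label, and threshold here are invented for illustration):

```python
class Observable:
    """Minimal event stream: subscribers react whenever a value is emitted."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, fn):
        self._subscribers.append(fn)

    def emit(self, value):
        for fn in self._subscribers:
            fn(value)

temperature = Observable()          # stand-in for an IoT data stream
visualization_state = {}            # stand-in for an AR visualization

def update_label(reading):
    # The visualization adapts to events coming in from the environment:
    # the label text and an alert flag are recomputed on every reading.
    visualization_state["label"] = f"{reading['value']:.1f} °C"
    visualization_state["alert"] = reading["value"] > 30.0

temperature.subscribe(update_label)
temperature.emit({"sensor": "kitchen", "value": 31.5})
```

The point of the pattern is that the visualization never polls: state changes are pushed through the dataflow, which is what makes situated AR views context-aware.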

Research context (projects)

    Hubenschmid, Sebastian; Wieland, Jonathan; Fink, Daniel Immanuel; Batch, Andrea; Zagermann, Johannes; Elmqvist, Niklas; Reiterer, Harald (2022): ReLive : Bridging In-Situ and Ex-Situ Visual Analytics for Analyzing Mixed Reality User Studies CHI Conference on Human Factors in Computing Systems (CHI ’22). - New York, NY : ACM, 2022. - ISBN 978-1-4503-9157-3

ReLive : Bridging In-Situ and Ex-Situ Visual Analytics for Analyzing Mixed Reality User Studies


The nascent field of mixed reality is seeing an ever-increasing need for user studies and field evaluation, which are particularly challenging given device heterogeneity, diversity of use, and mobile deployment. Immersive analytics tools have recently emerged to support such analysis in situ, yet the complexity of the data also warrants an ex-situ analysis using more traditional non-immersive visual analytics setups. To bridge the gap between both approaches, we introduce ReLive: a mixed-immersion visual analytics framework for exploring and analyzing mixed reality user studies. ReLive combines an in-situ virtual reality view with a complementary ex-situ desktop view. While the virtual reality view allows users to relive interactive spatial recordings replicating the original study, the synchronized desktop view provides a familiar interface for analyzing aggregated data. We validated our concepts in a two-step evaluation consisting of a design walkthrough and an empirical expert user study.


    Vock, Katja; Hubenschmid, Sebastian; Zagermann, Johannes; Butscher, Simon; Reiterer, Harald (2021): IDIAR : Augmented Reality Dashboards to Supervise Mobile Intervention Studies Mensch und Computer 2021 (MuC '21). - New York, NY : ACM, 2021. - S. 248-259. - ISBN 978-1-4503-8645-6

IDIAR : Augmented Reality Dashboards to Supervise Mobile Intervention Studies


Mobile intervention studies employ mobile devices to observe participants’ behavior change over several weeks. Researchers regularly monitor high-dimensional data streams to ensure data quality and prevent data loss (e.g., missing engagement or malfunctions). The multitude of problem sources hampers possible automated detection of such irregularities – providing a use case for interactive dashboards. With the advent of untethered head-mounted AR devices, these dashboards can be placed anywhere in the user's physical environment, leveraging the available space and allowing for flexible information arrangement and natural navigation. In this work, we present the user-centered design and the evaluation of IDIAR: Interactive Dashboards in AR, combining a head-mounted display with the familiar interaction of a smartphone. A user study with 15 domain experts for mobile intervention studies shows that participants appreciated the multimodal interaction approach. Based on our findings, we provide implications for research and design of interactive dashboards in AR.


    Wieland, Jonathan; Zagermann, Johannes; Müller, Jens; Reiterer, Harald (2021): Separation, Composition, or Hybrid? : Comparing Collaborative 3D Object Manipulation Techniques for Handheld Augmented Reality 2021 IEEE International Symposium on Mixed and Augmented Reality. - Piscataway, NJ : IEEE, 2021. - S. 403-412. - ISBN 978-1-66540-158-6

Separation, Composition, or Hybrid? : Comparing Collaborative 3D Object Manipulation Techniques for Handheld Augmented Reality


Augmented Reality (AR)-supported collaboration is a popular topic in HCI research. Previous work has shown the benefits of collaborative 3D object manipulation and identified two possibilities: Either separate or compose users’ inputs. However, their experimental comparison using handheld AR displays is still missing. We, therefore, conducted an experiment in which we tasked 24 dyads with collaboratively positioning virtual objects in handheld AR using three manipulation techniques: 1) Separation – performing only different manipulation tasks (i.e., translation or rotation) simultaneously, 2) Composition – performing only the same manipulation tasks simultaneously and combining individual inputs using a merge policy, and 3) Hybrid – performing any manipulation tasks simultaneously, enabling dynamic transitions between Separation and Composition. While all techniques were similarly effective, Composition was least efficient, with higher subjective workload and worse user experience. Preferences were polarized between clear work division (Separation) and freedom of action (Hybrid). Based on our findings, we offer research and design implications.


    Hubenschmid, Sebastian; Zagermann, Johannes; Butscher, Simon; Reiterer, Harald (2021): STREAM : Exploring the Combination of Spatially-Aware Tablets with Augmented Reality Head-Mounted Displays for Immersive Analytics Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2021). - New York : ACM, 2021. - ISBN 978-1-4503-8096-6

STREAM : Exploring the Combination of Spatially-Aware Tablets with Augmented Reality Head-Mounted Displays for Immersive Analytics


Recent research in the area of immersive analytics demonstrated the utility of head-mounted augmented reality devices for visual data analysis. However, it can be challenging to use the mid-air gestures supported by default to interact with visualizations in augmented reality (e.g., due to limited precision). Touch-based interaction (e.g., via mobile devices) can compensate for these drawbacks, but is limited to two-dimensional input. In this work we present STREAM: Spatially-aware Tablets combined with Augmented Reality Head-Mounted Displays for the multimodal interaction with 3D visualizations. We developed a novel eyes-free interaction concept for the seamless transition between the tablet and the augmented reality environment. A user study reveals that participants appreciated the novel interaction concept, indicating the potential for spatially-aware tablets in augmented reality. Based on our findings, we provide design insights to foster the application of spatially-aware touch devices in augmented reality and research implications indicating areas that need further investigation.


    Bishop, Fearn; Zagermann, Johannes; Pfeil, Ulrike; Sanderson, Gemma; Reiterer, Harald; Hinrichs, Uta (2020): Construct-A-Vis : exploring the free-form visualization processes of children IEEE Transactions on Visualization and Computer Graphics ; 26 (2020), 1. - S. 451-460. - Institute of Electrical and Electronics Engineers (IEEE). - ISSN 1077-2626. - eISSN 1941-0506

Construct-A-Vis : exploring the free-form visualization processes of children


Building data analysis skills is part of modern elementary school curricula. Recent research has explored how to facilitate children's understanding of visual data representations through completion exercises which highlight links between concrete and abstract mappings. This approach scaffolds visualization activities by presenting a target visualization to children. But how can we engage children in more free-form visual data mapping exercises that are driven by their own mapping ideas? How can we scaffold a creative exploration of visualization techniques and mapping possibilities? We present Construct-A-Vis, a tablet-based tool designed to explore the feasibility of free-form and constructive visualization activities with elementary school children. Construct-A-Vis provides adjustable levels of scaffolding visual mapping processes. It can be used by children individually or as part of collaborative activities. Findings from a study with elementary school children using Construct-A-Vis individually and in pairs highlight the potential of this free-form constructive approach, as visible in children's diverse visualization outcomes and their critical engagement with the data and mapping processes. Based on our study findings we contribute insights into the design of free-form visualization tools for children, including the role of tool-based scaffolding mechanisms and shared interactions to guide visualization activities with children.


    Zagermann, Johannes; Pfeil, Ulrike; von Bauer, Philipp; Fink, Daniel; Reiterer, Harald (2020): "It’s in my other hand!" : Studying the Interplay of Interaction Techniques and Multi-Tablet Activities CHI '20 : Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. - New York, NY : ACM, 2020. - 413. - ISBN 978-1-4503-6708-0

"It’s in my other hand!" : Studying the Interplay of Interaction Techniques and Multi-Tablet Activities


Cross-device interaction with tablets is a popular topic in HCI research. Recent work has shown the benefits of including multiple devices into users’ workflows while various interaction techniques allow transferring content across devices. However, users are only reluctantly using multiple devices in combination. At the same time, research on cross-device interaction struggles to find a frame of reference to compare techniques or systems. In this paper, we try to address these challenges by studying the interplay of interaction techniques, device utilization, and task-specific activities in a user study with 24 participants from different but complementary angles of evaluation using an abstract task, a sensemaking task, and three interaction techniques. We found that different interaction techniques have a lower influence than expected, that work behaviors and device utilization depend on the task at hand, and that participants value specific aspects of cross-device interaction.


    Borowski, Marcel; Zagermann, Johannes; Klokmose, Clemens N.; Reiterer, Harald; Rädle, Roman (2020): Exploring the Benefits and Barriers of Using Computational Notebooks for Collaborative Programming Assignments SIGCSE '20 : Proceedings of the 51st ACM Technical Symposium on Computer Science Education. - New York, NY : ACM, 2020. - S. 468-474. - ISBN 978-1-4503-6793-6

Exploring the Benefits and Barriers of Using Computational Notebooks for Collaborative Programming Assignments


Programming assignments in computer science courses are often processed in pairs or groups of students. While working together, students face several shortcomings in today's software: The lack of real-time collaboration capabilities, the setup time of the development environment, and the use of different devices or operating systems can hamper students when working together on assignments. Text processing platforms like Google Docs solve these problems for the writing process of prose text, and computational notebooks like Google Colaboratory for data analysis tasks. However, none of these platforms allows users to implement interactive applications. We deployed a web-based literate programming system for three months during an introductory course on application development to explore how collaborative programming practices unfold and how the structure of computational notebooks affect the development. During the course, pairs of students solved weekly programming assignments. We analyzed data from weekly questionnaires, three focus groups with students and teaching assistants, and keystroke-level log data to facilitate the understanding of the subtleties of collaborative programming with computational notebooks. Findings reveal that there are distinct collaboration patterns; the preferred collaboration pattern varied between pairs and even varied within pairs over the course of three months. Recognizing these distinct collaboration patterns can help to design future computational notebooks for collaborative programming assignments.


    Blumenschein, Michael; Behrisch, Michael; Schmid, Stefanie; Butscher, Simon; Wahl, Deborah R.; Villinger, Karoline; Renner, Britta; Reiterer, Harald; Keim, Daniel A. (2019): SMARTexplore : Simplifying High-Dimensional Data Analysis through a Table-Based Visual Analytics Approach IEEE Conference on Visual Analytics Science and Technology (VAST) 2018. - Piscataway, NJ : IEEE, 2019. - ISBN 978-1-5386-6861-0

SMARTexplore : Simplifying High-Dimensional Data Analysis through a Table-Based Visual Analytics Approach


We present SMARTEXPLORE, a novel visual analytics technique that simplifies the identification and understanding of clusters, correlations, and complex patterns in high-dimensional data. The analysis is integrated into an interactive table-based visualization that maintains a consistent and familiar representation throughout the analysis. The visualization is tightly coupled with pattern matching, subspace analysis, reordering, and layout algorithms. To increase the analyst’s trust in the revealed patterns, SMARTEXPLORE automatically selects and computes statistical measures based on dimension and data properties. While existing approaches to analyzing high-dimensional data (e.g., planar projections and parallel coordinates) have proven effective, they typically have steep learning curves for non-visualization experts. Our evaluation, based on three expert case studies, confirms that non-visualization experts successfully reveal patterns in high-dimensional data when using SMARTEXPLORE.


    Müller, Jens; Zagermann, Johannes; Wieland, Jonathan; Pfeil, Ulrike; Reiterer, Harald (2019): A Qualitative Comparison Between Augmented and Virtual Reality Collaboration with Handheld Devices MuC'19 : Proceedings of Mensch und Computer 2019 / Alt, Florian; Bulling, Andreas; Döring, Tanja (Hrsg.). - New York, NY : ACM, 2019. - S. 399-410. - ISBN 978-1-4503-7198-8

A Qualitative Comparison Between Augmented and Virtual Reality Collaboration with Handheld Devices


Handheld Augmented Reality (AR) displays offer a see-through option to create the illusion of virtual objects being integrated into the viewer’s physical environment. Some AR display technologies also allow for the deactivation of the see-through option, turning AR tablets into Virtual Reality (VR) devices that integrate the virtual objects into an exclusively virtual environment. Both display configurations are typically available on handheld devices, raising the question of their influence on users’ experience during collaborative activities. In two experiments, we studied how the different display configurations influence user experience, workload, and team performance of co-located and distributed collaborators during a spatial referencing task. A mixed-methods approach revealed that participants’ opinions were polarized towards the two display configurations, regardless of the spatial distribution of collaboration. Based on our findings, we identify critical aspects to be addressed in future research to better understand and support co-located and distributed collaboration using AR and VR displays.


  Hubenschmid, Sebastian; Zagermann, Johannes; Butscher, Simon; Reiterer, Harald (2018): Employing Tangible Visualisations in Augmented Reality with Mobile Devices MultimodalVis ’18 Workshop at AVI 2018

Employing Tangible Visualisations in Augmented Reality with Mobile Devices


Recent research has demonstrated the benefits of mixed realities for information visualisation. Often the focus lies on the visualisation itself, leaving interaction opportunities through different modalities largely unexplored. Yet, mixed reality in particular can benefit from a combination of different modalities. This work examines an existing mixed reality visualisation which is combined with a large tabletop for touch interaction. Although this allows for familiar operation, the approach comes with some limitations which we address by employing mobile devices, thus adding tangibility and proxemics as input modalities.


    Zagermann, Johannes; Pfeil, Ulrike; Reiterer, Harald (2018): Studying Eye Movements as a Basis for Measuring Cognitive Load Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. - New York, NY : ACM Press, 2018. - LBW095. - ISBN 978-1-4503-5621-3

Studying Eye Movements as a Basis for Measuring Cognitive Load


Users' cognitive load while interacting with a system is a valuable metric for evaluations in HCI. We encourage the analysis of eye movements as an unobtrusive and widely available way to measure cognitive load. In this paper, we report initial findings from a user study with 26 participants working on three visual search tasks that represent different levels of difficulty. Also, we linearly increased the cognitive demand while solving the tasks. This allowed us to analyze the reaction of individual eye movements to different levels of task difficulty. Our results show how pupil dilation, blink rate, and the number of fixations and saccades per second individually react to changes in cognitive activity. We discuss how these measurements could be combined in future work to allow for a comprehensive investigation of cognitive load in interactive settings.
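The individual measures named in the abstract (blink rate, fixations and saccades per second, pupil dilation) are simple aggregates over an event-labelled gaze stream. The following Python sketch shows one hypothetical way to compute them — the sample format and event labels are assumptions for illustration, not the study's actual analysis pipeline:

```python
def eye_metrics(samples, duration_s):
    """Aggregate load-related eye metrics from labelled gaze samples.

    Each sample is a (event_type, pupil_diameter_mm) pair; the labels
    'fixation', 'saccade', and 'blink' are a simplified scheme — real
    eye trackers report richer, timestamped event streams.
    """
    blinks = sum(1 for e, _ in samples if e == "blink")
    fixations = sum(1 for e, _ in samples if e == "fixation")
    saccades = sum(1 for e, _ in samples if e == "saccade")
    # Pupil diameter is unreliable during blinks, so exclude those samples.
    pupils = [p for e, p in samples if e != "blink"]
    return {
        "blink_rate_hz": blinks / duration_s,
        "fixations_per_s": fixations / duration_s,
        "saccades_per_s": saccades / duration_s,
        "mean_pupil_mm": sum(pupils) / len(pupils),
    }

samples = [("fixation", 3.1), ("fixation", 3.3), ("saccade", 3.2),
           ("blink", 0.0), ("fixation", 3.4)]
m = eye_metrics(samples, duration_s=2.0)
```

How such per-measure aggregates react to task difficulty — and whether they can be combined into a single load estimate — is exactly the question the paper investigates.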


    Chuang, Lewis L.; Pfeil, Ulrike (2018): Transparency and Openness Promotion Guidelines for HCI Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. - New York, NY : ACM Press, 2018. - SIG04. - ISBN 978-1-4503-5621-3

Transparency and Openness Promotion Guidelines for HCI


This special interest group addresses the status quo of HCI research with regards to research practices of transparency and openness. Specifically, it discusses whether current practices are in line with the standards applied to other fields (e.g., psychology, economics, medicine). It seeks to identify current practices that are more progressive and worth communicating to other disciplines, while evaluating whether practices in other disciplines are likely to apply to HCI research constructively. Potential outcomes include: (1) a review of current HCI research policies, (2) a report on recommended practices, and (3) a replication project of key findings in HCI research.


    Jäckle, Dominik; Stoffel, Florian; Mittelstädt, Sebastian; Keim, Daniel A.; Reiterer, Harald (2017): Interpretation of Dimensionally-reduced Crime Data : A Study with Untrained Domain Experts Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. - Setúbal, Portugal : SCITEPRESS, 2017. - S. 164-175. - ISBN 978-989-758-228-8

Interpretation of Dimensionally-reduced Crime Data : A Study with Untrained Domain Experts


Dimensionality reduction (DR) techniques aim to reduce the amount of considered dimensions, yet preserving as much information as possible. According to many visualization researchers, DR results lack interpretability, in particular for domain experts not familiar with machine learning or advanced statistics. Thus, interactive visual methods have been extensively researched for their ability to improve transparency and ease the interpretation of results. However, these methods have primarily been evaluated using case studies and interviews with experts trained in DR. In this paper, we describe a phenomenological analysis investigating if researchers with no or only limited training in machine learning or advanced statistics can interpret the depiction of a data projection and what their incentives are during interaction. We, therefore, developed an interactive system for DR, which unifies mixed data types as they appear in real-world data. Based on this system, we provided data analysts of a Law Enforcement Agency (LEA) with dimensionally-reduced crime data and let them explore and analyze domain-relevant tasks without providing further conceptual information. Results of our study reveal that these untrained experts encounter few difficulties in interpreting the results and drawing conclusions given a domain-relevant use case and their experience. We further discuss the results based on collected informal feedback and observations.


    Zagermann, Johannes; Pfeil, Ulrike; Fink, Daniel Immanuel; von Bauer, Philipp; Reiterer, Harald (2017): Memory in Motion : The Influence of Gesture- and Touch-Based Input Modalities on Spatial Memory CHI'17 : Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. - New York, NY, USA : ACM, 2017. - S. 1899-1910. - ISBN 978-1-4503-4655-9

Memory in Motion : The Influence of Gesture- and Touch-Based Input Modalities on Spatial Memory


People's ability to remember and recall spatial information can be harnessed to improve navigation and search performances in interactive systems. In this paper, we investigate how display size and input modality influence spatial memory, especially in relation to efficiency and user satisfaction. Based on an experiment with 28 participants, we analyze the effect of three input modalities (trackpad, direct touch, and gesture-based motion controller) and two display sizes (10.6" and 55") on people's ability to navigate to spatially spread items and recall their positions. Our findings show that the impact of input modality and display size on spatial memory is not straightforward, but characterized by trade-offs between spatial memory, efficiency, and user satisfaction.


    Zagermann, Johannes; Pfeil, Ulrike; Acevedo, Carmela; Reiterer, Harald (2017): Studying the Benefits and Challenges of Spatial Distribution and Physical Affordances in a Multi-Device Workspace Proceedings of the 16th International Conference on Mobile and Ubiquitous Multimedia. - New York, NY : ACM, 2017. - ISBN 978-1-4503-5378-6

Studying the Benefits and Challenges of Spatial Distribution and Physical Affordances in a Multi-Device Workspace


In recent years, research on cross-device interaction has become a popular topic in HCI leading to novel interaction techniques mutually interfering with new evolving theoretical paradigms. Building on previous research, we implemented an individual multi-device work environment for creative activities. In a study with 20 participants, we compared a traditional toolbar-based condition with two conditions facilitating spatially distributed tools on digital panels and on physical devices. We analyze participants’ interactions with the tools, encountered problems and corresponding solutions, as well as subjective task load and user experience. Our findings show that the spatial distribution of tools indeed offers advantages, but also elicits new problems, that can partly be leveraged by the physical affordances of mobile devices.


    Zagermann, Johannes; Pfeil, Ulrike; Rädle, Roman; Jetter, Hans-Christian; Klokmose, Clemens; Reiterer, Harald (2016): When Tablets meet Tabletops : The Effect of Tabletop Size on Around-the-Table Collaboration with Personal Tablets CHI'16 : Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems / Kaye, Jofish et al. (Hrsg.). - New York, NY : ACM Press, 2016. - S. 5470-5481. - ISBN 978-1-4503-3362-7

When Tablets meet Tabletops : The Effect of Tabletop Size on Around-the-Table Collaboration with Personal Tablets


Cross-device collaboration with tablets is an increasingly popular topic in HCI. Previous work has shown that tablet-only collaboration can be improved by an additional shared workspace on an interactive tabletop. However, large tabletops are costly and need space, raising the question to what extent the physical size of shared horizontal surfaces really pays off. In order to analyse the suitability of smaller-than-tabletop devices (e.g. tablets) as a low-cost alternative, we studied the effect of the size of a shared horizontal interactive workspace on users' attention, awareness, and efficiency during cross-device collaboration. In our study, 15 groups of two users executed a sensemaking task with two personal tablets (9.7") and a horizontal shared display of varying sizes (10.6", 27", and 55"). Our findings show that different sizes lead to differences in participants' interaction with the tabletop and in the groups' communication styles. To our own surprise we found that larger tabletops do not necessarily improve collaboration or sensemaking results, because they can divert users' attention away from their collaborators and towards the shared display.


    Lischke, Lars; Mayer, Sven; Wolf, Katrin; Henze, Niels; Reiterer, Harald; Schmidt, Albrecht (2016): Screen arrangements and interaction areas for large display work places PerDis '16 : Proceedings of the 5th ACM International Symposium on Pervasive Displays / Ojala, Timo et al. (Hrsg.). - New York, NY : ACM Press, 2016. - S. 228-234. - ISBN 978-1-4503-4366-4

Screen arrangements and interaction areas for large display work places


Size and resolution of computer screens are constantly increasing. Individual screens can easily be combined to wall-sized displays. This enables computer displays that are folded, straight, bow shaped or even spread. As possibilities for arranging the screens are manifold, it is unclear what arrangements are appropriate. Moreover, it is unclear how content and applications should be arranged on such large displays. To determine guidelines for the arrangement of multiple screens and for content and application layouts, we conducted a design study. In the study, we asked 16 participants to arrange a large screen setup as well as to create layouts of multiple common application windows. Based on the results we provide a classification for screen arrangements and interaction areas. We identified that screen space should be divided into a central area for interactive applications and peripheral areas, mainly for displaying additional content.


    Müller, Jens; Rädle, Roman; Reiterer, Harald (2016): Virtual Objects as Spatial Cues in Collaborative Mixed Reality Environments : How They Shape Communication Behavior and User Task Load CHI'16 : Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems / Kaye, Jofish et al. (Hrsg.). - New York, NY : ACM Press, 2016. - S. 1245-1249. - ISBN 978-1-4503-3362-7

Virtual Objects as Spatial Cues in Collaborative Mixed Reality Environments : How They Shape Communication Behavior and User Task Load


In collaborative activities, collaborators can use physical objects in their shared environment as spatial cues to guide each other's attention. Collaborative mixed reality environments (MREs) include both physical and digital objects. To study how virtual objects influence collaboration and whether they are used as spatial cues, we conducted a controlled lab experiment with 16 dyads. Results of our study show that collaborators favored the digital objects as spatial cues over the physical environment and the physical objects: Collaborators used significantly less deictic gestures in favor of more disambiguous verbal references and a decreased subjective workload when virtual objects were present. This suggests adding additional virtual objects as spatial cues to MREs to improve user experience during collaborative mixed reality tasks.


    Butscher, Simon; Reiterer, Harald (2016): Applying Guidelines for the Design of Distortions on Focus+Context Interfaces AVI '16 : Proceedings of the International Working Conference on Advanced Visual Interfaces / Buono, Paolo et al. (Hrsg.). - New York, NY : ACM Press, 2016. - S. 244-247. - ISBN 978-1-4503-4131-8

Applying Guidelines for the Design of Distortions on Focus+Context Interfaces


Distortion-based visualization techniques allow users to examine focused regions of a multiscale space at high scales but preserve their contextual information. However, the distortion can come at the cost of confusion, disorientation and impairment of the users' spatial memory. Yet, how distortions influence users' ability to build up spatial memory, while taking into account human skills of perception, interpretation and comprehension, remains underexplored. This note reports findings of an experimental comparison between a distortion-based focus+context interface and an undistorted overview+detail interface. The focus+context technique follows guidelines for the design of comprehensible distortions: make use of real-world metaphors, visual clues like shading, smooth transitions and scaled-only focus regions. The results show that the focus+context technique designed following these guidelines helps users keep track of their position within the multiscale space and does not impair users' spatial memory.


    Zagermann, Johannes; Pfeil, Ulrike; Reiterer, Harald (2016): Measuring Cognitive Load using Eye Tracking Technology in Visual Computing Proceedings of the Sixth Workshop on Beyond Time and Errors on Novel Evaluation Methods for Visualization, BELIV '16 / Sedlmair, Michael et al. (Hrsg.). - New York, NY : ACM Press, 2016. - S. 78-85. - ISBN 978-1-4503-4818-8

Measuring Cognitive Load using Eye Tracking Technology in Visual Computing


In this position paper we encourage the use of eye tracking measurements to investigate users' cognitive load while interacting with a system. We start with an overview of how eye movements can be interpreted to provide insight about cognitive processes and present a descriptive model representing the relations of eye movements and cognitive load. Then, we discuss how specific characteristics of human-computer interaction (HCI) interfere with the model and impede the application of eye tracking data to measure cognitive load in visual computing. As a result, we present a refined model, embedding the characteristics of HCI into the relation of eye tracking data and cognitive load. Based on this, we argue that eye tracking should be considered as a valuable instrument to analyze cognitive processes in visual computing and suggest future research directions to tackle outstanding issues.


    Lischke, Lars; Mayer, Sven; Wolf, Katrin; Henze, Niels; Schmidt, Albrecht; Leifert, Svenja; Reiterer, Harald (2015): Using Space : Effect of Display Size on Users' Search Performance CHI EA '15 Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems / Bo Begole et al. (Hrsg.). - New York : ACM, 2015. - S. 1845-1850. - ISBN 978-1-4503-3146-3

Using Space : Effect of Display Size on Users' Search Performance


Due to advances in technology, large displays with very high resolution have started to become affordable for daily work. Today it is possible to build display walls with a pixel density that is comparable to standard office screens. Previous work indicates that physical navigation enables a deeper engagement with the data set. In particular, the visibility of detailed data subsets on large screens supports the user's work and understanding of large data. In contrast to previous work we explore how users' performance scales with an increasing amount of large display space when working with text documents. In a controlled experiment, we determine participants' performance when searching for titles and images in large text documents using one to six 50" 4K monitors. Our results show that the users' visual search performance does not linearly increase with an increasing amount of display space.


    Zagermann, Johannes; Pfeil, Ulrike; Schreiner, Mario; Rädle, Roman; Jetter, Hans-Christian; Reiterer, Harald (2015): Reporting Experiences on Group Activities in Cross-Device Settings Accepted Paper for Surface 2015 : Workshop on Interacting with Multi-Device Ecologies in the Wild

Reporting Experiences on Group Activities in Cross-Device Settings


Even though mobile devices are ubiquitous and users often own several of them, using them in concert to achieve a common goal is not well supported and remains a challenge for HCI. In this paper, we report on our observations of cross-device usage within groups when they engaged in a dyadic collaborative sensemaking task. Based on our findings, we discuss limitations of a state-of-the-art cross-device setting and present a set of design recommendations. We then propose an alternative design that aims for greater flexibility when using mobile devices to enable a free configuration of workspaces depending on users’ current activity.


Funding body
  • SFB634/15
Further information
Duration: 01.07.2015 – 30.06.2019
Link: Project homepage