Family Property and Family Consciousness: Patrician Houses and Familial Identity in Late Medieval Nuremberg

Institutions
  • Department of History
Publications
    Stein, Manuel; Janetzko, Halldor; Lamprecht, Andreas; Breitkreutz, Thorsten; Zimmermann, Philip; Goldlücke, Bastian; Schreck, Tobias; Andrienko, Gennady; Grossniklaus, Michael; Keim, Daniel A. (2018): Bring It to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis. IEEE Transactions on Visualization and Computer Graphics. 2018, 24(1), pp. 13-22. ISSN 1077-2626. eISSN 1941-0506. Available under: doi: 10.1109/TVCG.2017.2745181


Analysts in professional team sports regularly perform analyses to gain strategic and tactical insights into player and team behavior. Goals of team sport analysis commonly include identifying weaknesses of opposing teams or assessing the performance and improvement potential of a coached team. Current analysis workflows are typically based on manual review of team videos. Analysts can also rely on techniques from information visualization to depict, e.g., player or ball trajectories. However, video analysis is typically a time-consuming process in which the analyst needs to memorize and annotate scenes. Visualization, in contrast, typically relies on an abstract data model, often using abstract visual mappings, and is no longer directly linked to the observed movement context. We propose a visual analytics system that tightly integrates team sport video recordings with abstract visualizations of the underlying trajectory data. We apply appropriate computer vision techniques to extract trajectory data from video input. Furthermore, we apply advanced trajectory and movement analysis techniques to derive relevant team sport analytic measures for region, event, and player analysis in the case of soccer. Our system seamlessly integrates the video and visualization modalities, enabling analysts to draw on the advantages of both analysis forms. Several expert studies conducted with team sport analysts indicate the effectiveness of our integrated approach.
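As an illustration only (not code from the paper), the simplest kind of trajectory-derived measure mentioned in the abstract can be sketched as follows; the function names and the sample track are hypothetical:

```python
import math

def distance_covered(samples):
    """Total path length of a trajectory given as (t, x, y) samples."""
    total = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        total += math.hypot(x1 - x0, y1 - y0)
    return total

def average_speed(samples):
    """Mean speed over the whole trajectory (path length / elapsed time)."""
    duration = samples[-1][0] - samples[0][0]
    return distance_covered(samples) / duration

# A short synthetic trajectory: (time in s, x in m, y in m).
track = [(0.0, 0.0, 0.0), (0.5, 3.0, 0.0), (1.0, 3.0, 4.0)]
print(distance_covered(track))  # 7.0 m covered
print(average_speed(track))     # 7.0 m/s on average
```

Real analytic measures for region, event, and player analysis build on exactly this kind of per-player position stream, once it has been extracted from video.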

Research context (projects)

  Iseringhausen, Julian; Goldlücke, Bastian; Pesheva, Nina; Iliev, Stanimir; Wender, Alexander; Fuchs, Martin; Hullin, Matthias B. (2017): 4D imaging through spray-on optics. ACM Transactions on Graphics. 2017, 36(4), 35. ISSN 0730-0301. eISSN 1557-7368. Available under: doi: 10.1145/3072959.3073589


Light fields are a powerful concept in computational imaging and a mainstay in image-based rendering; however, so far their acquisition required either carefully designed and calibrated optical systems (micro-lens arrays), or multi-camera/multi-shot settings. Here, we show that fully calibrated light field data can be obtained from a single ordinary photograph taken through a partially wetted window. Each drop of water produces a distorted view on the scene, and the challenge of recovering the unknown mapping from pixel coordinates to refracted rays in space is a severely underconstrained problem. The key idea behind our solution is to combine ray tracing and low-level image analysis techniques (extraction of 2D drop contours and locations of scene features seen through drops) with state-of-the-art drop shape simulation and an iterative refinement scheme to enforce photo-consistency across features that are seen in multiple views. This novel approach not only recovers a dense pixel-to-ray mapping, but also the refractive geometry through which the scene is observed, to high accuracy. We therefore anticipate that our inherently self-calibrating scheme might also find applications in other fields, for instance in materials science where the wetting properties of liquids on surfaces are investigated.
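The ray tracing mentioned above relies on refraction at the air-water interface of each drop. As a hedged sketch (not the paper's code), Snell's law in vector form can be written as follows; the function name and test values are hypothetical:

```python
import math

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n (pointing
    against d), with eta = n_incident / n_transmitted.
    Returns the refracted unit direction, or None on total internal reflection."""
    cos_i = -sum(di * ni for di, ni in zip(d, n))
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection
    return tuple(eta * di + (eta * cos_i - math.sqrt(k)) * ni
                 for di, ni in zip(d, n))

# A ray entering water (refractive index ~1.33) at normal incidence
# passes through undeviated:
d = (0.0, 0.0, -1.0)
n = (0.0, 0.0, 1.0)
r = refract(d, n, 1.0 / 1.33)
print(r)  # (0.0, 0.0, -1.0)
```

Tracing such refracted rays through a simulated drop shape, and enforcing photo-consistency across drops, is what yields the pixel-to-ray mapping described in the abstract.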


  Johannsen, Ole; Sulc, Antonin; Marniok, Nico; Goldlücke, Bastian (2017): Layered Scene Reconstruction from Multiple Light Field Camera Views. LAI, Shang-Hong, ed. and others. Computer Vision – ACCV 2016: 13th Asian Conference on Computer Vision, Taipei, Taiwan, November 20-24, 2016, Revised Selected Papers, Part III. Cham: Springer, 2017, pp. 3-18. Lecture Notes in Computer Science. 10113. ISSN 0302-9743. eISSN 1611-3349. ISBN 978-3-319-54186-0. Available under: doi: 10.1007/978-3-319-54187-7_1


We propose a framework to infer complete geometry of a scene with strong reflections or hidden by partially transparent occluders from a set of 4D light fields captured with a hand-held light field camera. For this, we first introduce a variant of bundle adjustment specifically tailored to 4D light fields to obtain improved pose parameters. Geometry is recovered in a global framework based on convex optimization for a weighted minimal surface. To allow for non-Lambertian materials and semi-transparent occluders, the point-wise costs are not based on the principle of photo-consistency. Instead, we perform a layer analysis of the light field obtained by finding superimposed oriented patterns in epipolar plane image space to obtain a set of depth hypotheses and confidence scores, which are integrated into a single functional.
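The oriented patterns in epipolar plane image (EPI) space mentioned above encode depth: a scene point traces a line across views whose slope equals its disparity. A minimal sketch of that relationship, with hypothetical names and synthetic data (not the paper's implementation):

```python
def disparity_from_epi_track(positions):
    """Least-squares slope of image position x over view index s;
    under the Lambertian line model this slope is the disparity."""
    n = len(positions)
    s_mean = sum(s for s, _ in positions) / n
    x_mean = sum(x for _, x in positions) / n
    num = sum((s - s_mean) * (x - x_mean) for s, x in positions)
    den = sum((s - s_mean) ** 2 for s, _ in positions)
    return num / den

# A scene point traced across 9 views with disparity 0.5 pixels per view:
track = [(s, 10.0 + 0.5 * s) for s in range(9)]
print(disparity_from_epi_track(track))  # 0.5
```

The paper's layer analysis generalizes this idea to superimposed orientations, so that several depth hypotheses with confidence scores can coexist per EPI location.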


  Honauer, Katrin; Johannsen, Ole; Kondermann, Daniel; Goldlücke, Bastian (2017): A Dataset and Evaluation Methodology for Depth Estimation on 4D Light Fields. LAI, Shang-Hong, ed. and others. Computer Vision – ACCV 2016: 13th Asian Conference on Computer Vision, Taipei, Taiwan, November 20-24, 2016, Revised Selected Papers, Part III. Cham: Springer, 2017, pp. 19-34. Lecture Notes in Computer Science. 10113. ISSN 0302-9743. eISSN 1611-3349. ISBN 978-3-319-54186-0. Available under: doi: 10.1007/978-3-319-54187-7_2


In computer vision communities such as stereo, optical flow, or visual tracking, commonly accepted and widely used benchmarks have enabled objective comparison and boosted scientific progress. In the emerging light field community, a comparable benchmark and evaluation methodology is still missing. The performance of newly proposed methods is often demonstrated qualitatively on a handful of images, making quantitative comparison and targeted progress very difficult. To overcome these difficulties, we propose a novel light field benchmark. We provide 24 carefully designed synthetic, densely sampled 4D light fields with highly accurate disparity ground truth. We thoroughly evaluate four state-of-the-art light field algorithms and one multi-view stereo algorithm using existing and novel error measures. This consolidated state of the art may serve as a baseline to stimulate and guide further scientific progress. We publish the benchmark website www.lightfield-analysis.net, an evaluation toolkit, and our rendering setup to encourage submissions of both algorithms and further datasets.
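A typical error measure for disparity benchmarks of this kind is a "BadPix"-style score: the fraction of pixels whose estimated disparity deviates from ground truth by more than a threshold. The sketch below is illustrative only; the function name, threshold default, and sample values are hypothetical:

```python
def bad_pix(est, gt, threshold=0.07):
    """BadPix-style score: the fraction of pixels whose absolute disparity
    error exceeds the threshold (lower is better)."""
    errors = [abs(e - g) for e, g in zip(est, gt)]
    return sum(1 for err in errors if err > threshold) / len(errors)

# Four pixels, two of them off by more than 0.07 disparity units:
gt  = [1.0, 2.0, 3.0, 4.0]
est = [1.05, 2.0, 3.2, 3.0]
print(bad_pix(est, gt))  # 0.5
```

Pixel-fraction scores of this kind complement mean-squared-error measures because they are insensitive to a few large outliers dominating the average.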


  Alperovich, Anna; Goldlücke, Bastian (2017): A Variational Model for Intrinsic Light Field Decomposition. LAI, Shang-Hong, ed., Vincent LEPETIT, ed., Ko NISHINO, ed., Yoichi SATO, ed. Computer Vision – ACCV 2016: 13th Asian Conference on Computer Vision, Taipei, Taiwan, November 20-24, 2016, Revised Selected Papers, Part III. Cham: Springer International Publishing, 2017, pp. 66-82. Lecture Notes in Computer Science. 10113. ISSN 0302-9743. eISSN 1611-3349. ISBN 978-3-319-54186-0. Available under: doi: 10.1007/978-3-319-54187-7_5


We present a novel variational model for intrinsic light field decomposition, which is performed on four-dimensional ray space instead of a traditional 2D image. As most existing intrinsic image algorithms are designed for Lambertian objects, their performance suffers when considering scenes which exhibit glossy surfaces. In contrast, the rich structure of the light field with many densely sampled views allows us to cope with non-Lambertian objects by introducing an additional decomposition term that models specularity. Regularization along the epipolar plane images further encourages albedo and shading consistency across views. In evaluations of our method on real-world data sets captured with a Lytro Illum plenoptic camera, we demonstrate the advantages of our approach with respect to intrinsic image decomposition and specular removal.
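The decomposition described above can be pictured per pixel as a diffuse albedo-shading product plus an additive specular term. The following sketch assumes that simple additive model (an assumption for illustration, not the paper's exact formulation); all names and values are hypothetical:

```python
def compose(A, S, H):
    """Per-pixel model assumed here: image = albedo * shading + specular."""
    return [a * s + h for a, s, h in zip(A, S, H)]

def recover_albedo(I, S, H):
    """Invert the model per pixel when shading and specularity are known."""
    return [(i - h) / s for i, s, h in zip(I, S, H)]

A = [0.8, 0.4, 0.6]   # albedo
S = [1.0, 0.5, 0.25]  # shading
H = [0.0, 0.1, 0.0]   # additive specular term
I = compose(A, S, H)
print(recover_albedo(I, S, H))  # recovers the albedo up to rounding
```

The hard part, which the variational model addresses, is that only I is observed; the many views of the light field provide the redundancy needed to constrain A, S, and H jointly.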


  Johannsen, Ole; Sulc, Antonin; Goldlücke, Bastian (2016): Occlusion-Aware Depth Estimation Using Sparse Light Field Coding. ROSENHAHN, Bodo, ed. and others. Pattern Recognition: 38th International Conference, GCPR 2016, Hannover, Germany, September 12-15, 2016, Proceedings. Cham: Springer, 2016, pp. 207-218. Lecture Notes in Computer Science. 9796. ISSN 0302-9743. eISSN 1611-3349. ISBN 978-3-319-45885-4. Available under: doi: 10.1007/978-3-319-45886-1_17


Disparity estimation for multi-layered light fields can robustly be performed with a statistical analysis of sparse light field coding coefficients [7]. The key idea is to explain each epipolar plane image patch with a dictionary composed of atoms with known disparity values. We significantly improve upon this approach in two ways. First, we reduce the number of necessary dictionary atoms, improving the descriptive quality of each and reducing time complexity by an order of magnitude. Second, we introduce a way to explicitly handle occlusions, which is the main drawback of the previous work. Experiments demonstrate that we thus achieve substantially better results on both Lambertian and multi-layered scenes.
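The core of explaining a patch with disparity-labeled atoms can be sketched as a single matching-pursuit step: pick the atom most correlated with the patch and read off its disparity label. This toy example is illustrative only (not the paper's algorithm); the dictionary and patch values are hypothetical:

```python
import math

def best_atom(patch, dictionary):
    """One matching-pursuit step: return the disparity label of the
    dictionary atom most correlated with the patch."""
    def corr(atom):
        return abs(sum(p * a for p, a in zip(patch, atom)))
    return max(dictionary, key=lambda label: corr(dictionary[label]))

# A toy dictionary of two unit-norm atoms labeled by disparity:
inv = 1.0 / math.sqrt(2.0)
dictionary = {
    0.0: [inv, inv, 0.0, 0.0],
    1.0: [0.0, 0.0, inv, inv],
}
patch = [0.1, 0.2, 0.9, 1.1]  # energy concentrated on the second atom
print(best_atom(patch, dictionary))  # 1.0
```

Full sparse coding solves for a small set of such atoms jointly, so that a patch covering two superimposed layers is explained by atoms with two different disparity labels.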


  Johannsen, Ole; Sulc, Antonin; Goldlücke, Bastian (2015): On Linear Structure from Motion for Light Field Cameras. 2015 IEEE International Conference on Computer Vision: ICCV 2015: proceedings: 7.-13. Dec. 2015, Santiago, Chile. Los Alamitos, California: IEEE, 2015, pp. 720-728. ISBN 978-1-4673-8391-2. Available under: doi: 10.1109/ICCV.2015.89


We present a novel approach to relative pose estimation which is tailored to 4D light field cameras. From the relationships between scene geometry and light field structure and an analysis of the light field projection in terms of Pluecker ray coordinates, we deduce a set of linear constraints on ray space correspondences between a light field camera pair. These can be applied to infer relative pose of the light field cameras and thus obtain a point cloud reconstruction of the scene. While the proposed method has interesting relationships to pose estimation for generalized cameras based on ray-to-ray correspondence, our experiments demonstrate that our approach is both more accurate and computationally more efficient. It also compares favourably to direct linear pose estimation based on aligning the 3D point clouds obtained by reconstructing depth for each individual light field. To further validate the method, we employ the pose estimates to merge light fields captured with hand-held consumer light field cameras into refocusable panoramas.
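Plücker ray coordinates, on which the analysis above is based, represent a line by its direction and its moment about the origin. A minimal sketch (hypothetical function name, not the paper's code):

```python
def pluecker(p, q):
    """Pluecker coordinates (d, m) of the line through points p and q:
    direction d = q - p, moment m = p x d."""
    d = tuple(qi - pi for pi, qi in zip(p, q))
    m = (p[1] * d[2] - p[2] * d[1],
         p[2] * d[0] - p[0] * d[2],
         p[0] * d[1] - p[1] * d[0])
    return d, m

d, m = pluecker((1.0, 0.0, 0.0), (1.0, 1.0, 0.0))
print(d, m)  # (0.0, 1.0, 0.0) (0.0, 0.0, 1.0)
# Any valid line satisfies the Pluecker constraint d . m = 0:
print(sum(di * mi for di, mi in zip(d, m)))  # 0.0
```

Because a rigid motion acts linearly on these six coordinates, correspondences between rays of two light field cameras yield the linear constraints on relative pose that the paper exploits.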


Goldlücke, Bastian; Klehm, Oliver; Wanner, Sven; Eisemann, Elmar (2015): Plenoptic Cameras. MAGNOR, Marcus A., ed. and others. Digital representations of the real world: how to capture, model, and render visual reality. Boca Raton: CRC Press, 2015, pp. 65-77. ISBN 978-1-4822-4381-9



  Wender, Alexander; Iseringhausen, Julian; Goldlücke, Bastian; Fuchs, Martin; Hullin, Matthias B. (2015): Light Field Imaging through Household Optics. BOMMES, David, ed. and others. VMV 2015: Vision, Modeling & Visualization. Goslar: The Eurographics Association, 2015, pp. 159-166. ISBN 978-3-905674-95-8. Available under: doi: 10.2312/vmv.20151271


Although light fields are well-established as a tool in image-based rendering and computer vision, their capture is still at a relatively early stage. In this article, we search for imaging situations similar to uncalibrated integral optics, noticing that they are common in everyday life. We investigate light field capturing scenarios which are provided by commonly available items like cutlery as optical building blocks. Using a generic calibration approach based on structured light, we reconstruct the light path providing an unorthodox light field capturing setup. As the resulting data is unstructured and poorly sampled and thus unsuited for standard image-based rendering pipelines, we propose techniques for the processing of such light fields. Additionally, we have implemented a novel depth estimation scheme to guide the rendering process. We demonstrate the potential of these techniques on different scenes, both static and dynamic, recorded by combining a DSLR camera with household items.


  Johannsen, Ole; Sulc, Antonin; Goldlücke, Bastian (2015): Variational Separation of Light Field Layers. BOMMES, David, ed. and others. VMV 2015: Vision, Modeling & Visualization. Goslar: The Eurographics Association, 2015, pp. 135-142. ISBN 978-3-905674-95-8. Available under: doi: 10.2312/vmv.20151268


Images of scenes which contain reflective or transparent surfaces are composed of different layers which are observed at different depths. Analyzing such a scene requires separating the image into its individual layers, which remains a challenging and important problem. While the problem is severely ill-posed when only a single image is considered, recent work has shown that depth estimation for two layers becomes quite tractable when one instead captures a 4D light field of the scene. In this paper, we propose a novel variational approach to layer separation which is based on these ideas. We formulate a linear generative model to reconstruct the light field from disparity and luminance information for the individual layers on the center view. Comparing the model with the observed data yields a convex variational problem for layer reconstruction, which can be solved to global optimality with a primal-dual scheme. Layer disparity is estimated in a first step, for which we improve upon a model based on second-order structure tensors on the epipolar plane images. In contrast to previous work, the resulting approach is robust enough to deal with light fields from the Lytro Illum camera, for which we obtain a compelling separation of the reflectance layer in real-world scenes.
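The linear generative model can be pictured in 1D: each layer contributes a copy of itself shifted proportionally to its disparity, and the observed view is the sum of the shifted layers. The sketch below is illustrative only (integer, cyclic shifts; not the paper's continuous formulation), and all names and values are hypothetical:

```python
def render_view(layer1, d1, layer2, d2, s):
    """Additive two-layer model assumed here: in view s, layer i appears
    shifted by s * d_i pixels (integer shifts, cyclic for this sketch)."""
    def sample(layer, shift):
        return [layer[(x - shift) % len(layer)] for x in range(len(layer))]
    shifted1 = sample(layer1, s * d1)
    shifted2 = sample(layer2, s * d2)
    return [a + b for a, b in zip(shifted1, shifted2)]

layer1 = [0, 0, 1, 0]  # e.g. the transmitted scene, disparity 1
layer2 = [2, 0, 0, 0]  # e.g. a reflection at a different depth, disparity 0
print(render_view(layer1, 1, layer2, 0, 0))  # [2, 0, 1, 0]
print(render_view(layer1, 1, layer2, 0, 1))  # [2, 0, 0, 1]
```

Inverting this forward model over all views, with known per-layer disparities, is exactly the convex reconstruction problem the abstract describes solving with a primal-dual scheme.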


Funding bodies
  Name: Deutsche Forschungsgemeinschaft
  Funding type: Third-party funding
  Category: Research funding programme
Further information
  Duration: 20.10.2006 – 31.10.2011