Scalar Topology in Visual Data Analysis
Organizers: Gunther Weber, Peer-Timo Bremer, Hamish Carr, Attila Gyulassy
As scientific datasets continue to increase in size and complexity, topological tools have been developed to capture significant features of the data at an abstract level that enables and facilitates understanding by the researcher. In particular, three topological structures (the Morse-Smale complex, the Reeb graph, and the contour tree, a special case of the Reeb graph) have been demonstrated to be effective at capturing and recognizing significant features in a disciplined fashion that allows abstract tracking, manipulation and presentation. These techniques are not yet, however, widely disseminated, and it is the intention of this tutorial to remedy this by presenting a systematic overview of current topological methods for the benefit of experienced visualization researchers who are unfamiliar with them. While general fluency in the field of visualization is assumed, the intention is to present these tools from the ground up.
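To give a concrete sense of the building blocks these structures organize, the sketch below (a minimal illustration in Python with NumPy, not part of the tutorial materials) classifies interior vertices of a 2D scalar field as local minima or maxima by comparing them with their eight neighbours; contour trees, Reeb graphs, and Morse-Smale complexes are assembled from exactly such critical points and the connectivity between them.

```python
import numpy as np

def classify_critical_points(f):
    """Classify interior vertices of a 2D scalar field as local minima or
    local maxima by comparison with the 8-neighbourhood.
    (Saddles require a more careful sign-change count and are omitted here.)"""
    minima, maxima = [], []
    rows, cols = f.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            nbrs = f[i-1:i+2, j-1:j+2].ravel()
            centre = f[i, j]
            others = np.delete(nbrs, 4)          # drop the centre value
            if np.all(centre < others):
                minima.append((i, j))
            elif np.all(centre > others):
                maxima.append((i, j))
    return minima, maxima

# Example: a single Gaussian bump has one interior maximum and no interior minima.
x, y = np.meshgrid(np.linspace(-2, 2, 65), np.linspace(-2, 2, 65))
field = np.exp(-(x**2 + y**2))
mins, maxs = classify_critical_points(field)
print(len(mins), len(maxs))
```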
» Monday, All day
Provenance-Enabled Data Exploration and Visualization
Organizers: Emanuele Santos, Claudio Silva, Juliana Freire, Erik Anderson
Scientists are now faced with an incredible volume of data to analyze. To explore and understand the data, they need to assemble complex workflows (pipelines) to manipulate the data and create insightful visual representations. Provenance is essential in this process. The provenance of a digital artifact contains information about the process and data used to derive the artifact. This information is essential for preserving the data, for determining the data's quality and authorship, and for reproducing and validating results, all of which are important elements of the scientific process. Provenance has been shown to be particularly useful for enabling comparative visualization and data analysis. This tutorial will inform computational and visualization scientists, users and developers about different approaches to provenance and the trade-offs among them. Using the VisTrails project as a basis, we will cover different approaches to acquiring and reusing provenance, including techniques that attendees can use for provenance-enabling their own tools. The tutorial will also discuss uses of provenance that go beyond the ability to reproduce and share results.
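The basic idea behind provenance capture can be illustrated independently of any particular tool. The hypothetical sketch below (Python; every name in it is ours for illustration and is not VisTrails' API) wraps pipeline steps so that each invocation is logged with its parameters, yielding a record from which a result can later be reproduced or compared.

```python
import json, time
from functools import wraps

provenance_log = []   # ordered record of every pipeline step that was run

def provenance(step):
    """Decorator that records the step name, parameters, and timestamp
    each time a pipeline operation is executed."""
    @wraps(step)
    def wrapper(*args, **kwargs):
        provenance_log.append({
            "step": step.__name__,
            "args": [repr(a) for a in args],
            "kwargs": {k: repr(v) for k, v in kwargs.items()},
            "time": time.time(),
        })
        return step(*args, **kwargs)
    return wrapper

@provenance
def isosurface(data, isovalue):
    # placeholder for an actual isosurfacing operation
    return {"surface_of": data, "isovalue": isovalue}

@provenance
def render(surface, colormap="viridis"):
    # placeholder for an actual rendering operation
    return {"image_of": surface, "colormap": colormap}

result = render(isosurface("pressure.vtk", isovalue=0.5), colormap="hot")
print(json.dumps(provenance_log, indent=2))   # replayable record of the run
```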
» Monday, Morning
Interactive Massive Model Rendering
Organizers: Sung-Eui Yoon, Dinesh Manocha, David Kasik, Enrico Gobbetti, Renato Pajarola, Philipp Slusallek
Users have consistently tried to manage and visualize more data than any computing system allows. 3D data used in scientific visualization, medical imaging, seismic exploration, information visualization, film, games, CAD systems, and related areas are the most problematic. This course covers fundamental techniques that effectively overcome system constraints to allow real-time interaction with massive models.
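As one small illustration of this class of techniques, the sketch below (Python; the error metric, field of view, and tolerances are assumptions made for the example, not taken from the course) performs view-dependent level-of-detail selection: a coarser representation is chosen whenever its projected geometric error falls below a pixel tolerance.

```python
import math

def select_lod(geometric_error_per_level, distance, fov_y=math.radians(60),
               viewport_height=1080, pixel_tolerance=1.0):
    """Pick the coarsest level of detail whose projected geometric error
    stays below a pixel tolerance (illustrative screen-space error metric)."""
    # pixels per world unit at this distance for a perspective projection
    pixels_per_unit = viewport_height / (2.0 * distance * math.tan(fov_y / 2.0))
    for level, error in enumerate(geometric_error_per_level):   # coarse -> fine
        if error * pixels_per_unit <= pixel_tolerance:
            return level
    return len(geometric_error_per_level) - 1   # fall back to the finest level

# Example: four LODs with decreasing geometric error (in world units).
errors = [1.0, 0.25, 0.05, 0.01]
for d in (5.0, 50.0, 500.0):
    print(d, "->", select_lod(errors, d))
```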
» Monday, Afternoon
Visualization and Analysis Using VisIt
Organizer: Hank Childs
This tutorial will focus on VisIt, an open source visualization and analysis tool designed for processing large data. The tool is built around five primary use cases: data exploration, quantitative analysis, comparative analysis, visual debugging, and communication of results. VisIt has a client-server design for remote visualization, with a distributed memory parallel server. VisIt won an R&D 100 award in 2005, has been downloaded over 100,000 times, and is being developed by a large community. VisIt is currently being used to visualize and analyze the results of hero runs on six of the top eight machines on top500.org. The tutorial will introduce VisIt concepts, demonstrate basic usage of VisIt, and discuss how to perform advanced analyses and visualizations. The last portion of the tutorial will cover VisIt development, including writing new plugin database readers, new operators, and new plot types.
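For attendees curious what basic scripted usage looks like, the fragment below is intended for VisIt's Python command-line interface ("visit -cli"); the dataset and variable names are placeholders, and the same operations are available interactively through the GUI.

```python
# Run inside VisIt's Python CLI ("visit -cli"); the dataset and variable
# names below are placeholders for your own data.
OpenDatabase("example.silo")

AddPlot("Pseudocolor", "pressure")   # colour-map a scalar variable
AddOperator("Slice")                 # restrict the plot to a slice plane
DrawPlots()

# Save the rendered image to disk.
s = SaveWindowAttributes()
s.fileName = "pressure_slice"
s.format = s.PNG
SetSaveWindowAttributes(s)
SaveWindow()
```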
» Tuesday, All day
Advanced ParaView Visualization
Organizers: Kenneth Moreland, James Ahrens, Dave DeMarle, David Thompson, Philippe Pébay, Fabian Nathan
ParaView is a powerful open-source turnkey application for analyzing and visualizing scientific data sets ranging from small desktop-sized problems to the world's largest simulations, and is used by numerous government, educational, and commercial institutions throughout the world. Designed to be configurable, extensible, and scalable, ParaView is built upon the Visualization Toolkit (VTK) to allow rapid deployment of visualization components. This tutorial brings together several of those who helped design and build ParaView to give visualization researchers and developers detailed guidance on the behavior and abilities of the ParaView application. This knowledge will allow the tutorial participants to solve their unique visualization problems, to modify the ParaView application to their specific problem domains, or to leverage the design into their own applications. A variety of topics will be discussed during the tutorial. A large focus of this year's tutorial will be on customizing ParaView. We will discuss using Python scripting for automated visualization and rapid prototyping, and we will discuss using ParaView's plugin mechanism for the simplified deployment of visualization and rendering algorithms and application customizations. Other topics include petascale distance visualization, in situ visualization with simulations, and advanced statistics.
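As a small taste of the Python scripting mentioned above, the script below uses ParaView's paraview.simple module (the file and variable names are placeholders) to load a dataset, extract an isosurface, and save a screenshot; equivalent scripts can also be recorded automatically with ParaView's Python trace facility.

```python
# Run with ParaView's pvpython; file and variable names are placeholders.
from paraview.simple import *

reader = OpenDataFile("example.vtk")

# Extract an isosurface of a scalar variable and display it.
contour = Contour(Input=reader, ContourBy=["POINTS", "pressure"],
                  Isosurfaces=[0.5])
Show(contour)
Render()

SaveScreenshot("contour.png")
```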
» Tuesday, Morning
Multivariate Temporal Features in Scientific Data
Organizers: Jian Huang, Chaoli Wang, Heike Jänicke, Jonathan Woodring
Scientific applications now produce large datasets at terascale and beyond on a daily basis. The ensuing challenge of managing and making sense of these data demands systematic breakthroughs in several areas of computer science. In this tutorial, we survey recent progress in addressing a difficulty of pressing concern to users: the gap between users' conceptual domain knowledge and the way features must be specified in traditional scientific visualizations. This gap is particularly acute at terascale and beyond, where a large amount of parallel automation is necessary to study the data at full scale; interactive techniques alone cannot solve the whole problem, so algorithmic methods must be studied. Recent research on visualizing time-varying multivariate data has brought several key technical hurdles into focus. When visualization is used for exploratory purposes, even the application scientists themselves may not fully understand the phenomena and instead use visualization to start and refine their research. In other cases a user has a clear concept of a pattern but the knowledge lacks specifics, as exemplified by "the start of a growing season", which is commonly studied in climate modeling (a concrete sketch of this example follows the topic list below). Even when a feature can be defined rigorously, the data often cannot be viewed in their entirety, and straightforward ways to generate hierarchical representations of the features do not lead to satisfactory results. To survey recent progress in addressing this diverse set of challenges, our tutorial covers the following topics:
- Programming language interfaces for time-varying multivariate visualization;
- Purely mathematical ways to specify features for visualization;
- Importance-driven data analysis and visualization;
- Chronovolumes, comparative visualization, and time-varying transfer functions.
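To make the growing-season example above concrete, the sketch below (Python with NumPy; the temperature threshold and persistence criterion are illustrative assumptions, not a climatological definition) expresses such a partially specified feature as a simple algorithmic query over a time series: the first time step at which the value stays above a threshold for a minimum number of consecutive steps.

```python
import numpy as np

def season_start(temperature, threshold=5.0, min_run=10):
    """Return the index of the first time step at which `temperature`
    stays above `threshold` for at least `min_run` consecutive steps,
    or -1 if no such run exists. (Threshold and run length are
    illustrative, not an authoritative climatological definition.)"""
    above = temperature > threshold
    run = 0
    for t, flag in enumerate(above):
        run = run + 1 if flag else 0
        if run == min_run:
            return t - min_run + 1
    return -1

# Example: a noisy yearly temperature curve sampled daily.
days = np.arange(365)
temps = 12.0 * np.sin((days - 80) * 2 * np.pi / 365) + np.random.normal(0, 1, 365)
print("growing season starts on day", season_start(temps))
```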
» Tuesday, Afternoon
Visualization of Time-Varying Vector Fields
Organizers: Christoph Garth, Filip Sadlo, Jens Krueger, Daniel Weiskopf, Hank Childs
The study of vector fields resulting from simulation and measurement has a rich tradition in the Scientific Visualization community, and vector fields are at the heart of many application domains. While vector field visualization has in the past often focused on vector fields representing fluid flows, other application domains such as astrophysics, geodynamics, life sciences, and fusion are rapidly catching up in their demand for high-quality visualization. In all these domains, state-of-the-art simulations produce time-varying vector fields that add additional complexity to the visualization process.
While the four-dimensional nature of such fields creates issues of representation and perception, the strongly increased data sizes present challenges in data handling and increased computational effort. Moreover, time-varying vector fields admit new phenomena and structures that do not exist in stationary fields. For these reasons, visualization techniques designed for stationary vector fields are often not applicable in a time-varying setting, do not generate adequate visualizations, or are prohibitively expensive from a computational perspective. Selecting appropriate visualization techniques for a given application problem is a daunting task under these constraints. The tutorial we propose here presents a cross-section of modern approaches to time-varying vector field visualization and aims to provide the audience with a comprehensive guide to proven methods and algorithms that can help solve real-world visualization problems.
The last IEEE Visualization tutorial on "Feature- and Texture-based Flow Visualization" took place in 2006, and our proposal is partially motivated by the many novel visualization techniques that have been developed since. We have selected three major classes of techniques that we regard as most important, useful, and forward-looking in this context; these form the first part of the proposed tutorial.
First, the class of geometric visualization methods offers tools that draw on the direct depiction of and intuition about lines, surfaces, and volumes derived from the trajectories of particles. Spanning a broad range of both algorithmic and visualization complexity, these techniques offer satisfactory solutions to many visualization problems, and geometric methods have recently seen many contributions centering specifically on time-varying flows.
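To make the notion of a particle trajectory concrete, the sketch below (Python with NumPy; the analytic velocity field is a stand-in for simulation data) integrates a single pathline through a time-dependent 2D vector field with a fixed-step fourth-order Runge-Kutta scheme; geometric methods build their lines, surfaces, and volumes from many such trajectories.

```python
import numpy as np

def velocity(p, t):
    """Toy time-dependent 2D velocity field (stand-in for simulation data)."""
    x, y = p
    return np.array([-y + 0.3 * np.sin(t), x])

def pathline(p0, t0, t1, dt=0.01):
    """Trace a pathline from p0 over [t0, t1] with fixed-step RK4."""
    points, p, t = [np.array(p0, float)], np.array(p0, float), t0
    while t < t1:
        k1 = velocity(p, t)
        k2 = velocity(p + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = velocity(p + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = velocity(p + dt * k3, t + dt)
        p = p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        points.append(p.copy())
    return np.array(points)

line = pathline([1.0, 0.0], t0=0.0, t1=10.0)
print(line.shape)   # (n_steps + 1, 2) polyline ready for rendering
```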
We will discuss such methods in a broad context and illustrate how they can be applied successfully to visualize key aspects of application datasets. The class of Lagrangian visualization methods is aimed at a more abstract analysis. By identifying and analyzing the simultaneous behavior of many particles traversing a vector field, key vector field constituents such as Lagrangian coherent structures can be identified reliably. Since their relatively recent introduction to the research community, Lagrangian methods have generated significant interest due to their ability to faithfully capture the structure of time-varying vector fields. Furthermore, Lagrangian approaches lend themselves well to extending methods developed for stationary vector fields, such as vortex extraction in fluid flow.
As a drawback, Lagrangian techniques often require immense computational effort due to the complexity of tracing massive numbers of particles. We will introduce the ideas and concepts behind Lagrangian analysis and survey relevant visualization techniques, provide examples of their use in visualization and analysis, and discuss their computational characteristics, with the aim of providing the audience with a thorough understanding of the visualization capabilities and drawbacks of Lagrangian methods.
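One widely used Lagrangian diagnostic of this kind is the finite-time Lyapunov exponent (FTLE), whose ridges are commonly interpreted as indicators of Lagrangian coherent structures. The sketch below (Python with NumPy; it uses a toy analytic field and a simple Euler flow map, so it is an illustration rather than a production implementation) computes an FTLE field on a 2D grid from finite differences of the flow map.

```python
import numpy as np

def velocity(x, y, t):
    """Toy time-dependent velocity field (a stand-in for simulation data)."""
    u = -np.pi * np.sin(np.pi * x) * np.cos(np.pi * y)
    v =  np.pi * np.cos(np.pi * x) * np.sin(np.pi * y) * (1.0 + 0.1 * np.sin(t))
    return u, v

def flow_map(X, Y, t0, T, steps=200):
    """Advect every grid point from t0 to t0 + T with fixed-step forward Euler."""
    x, y, dt = X.copy(), Y.copy(), T / steps
    for s in range(steps):
        u, v = velocity(x, y, t0 + s * dt)
        x, y = x + dt * u, y + dt * v
    return x, y

def ftle(X, Y, xs, ys, t0=0.0, T=5.0):
    """FTLE from finite differences of the flow map (largest eigenvalue of
    the right Cauchy-Green deformation tensor)."""
    px, py = flow_map(X, Y, t0, T)
    dpx_dy, dpx_dx = np.gradient(px, ys, xs)
    dpy_dy, dpy_dx = np.gradient(py, ys, xs)
    c11 = dpx_dx**2 + dpy_dx**2
    c12 = dpx_dx * dpx_dy + dpy_dx * dpy_dy
    c22 = dpx_dy**2 + dpy_dy**2
    lam_max = 0.5 * (c11 + c22) + np.sqrt(0.25 * (c11 - c22)**2 + c12**2)
    return np.log(np.sqrt(np.maximum(lam_max, 1e-12))) / abs(T)

xs, ys = np.linspace(0.0, 2.0, 101), np.linspace(0.0, 1.0, 51)
X, Y = np.meshgrid(xs, ys)           # X varies along axis 1, Y along axis 0
field = ftle(X, Y, xs, ys)
print(field.shape, field.max())      # ridges of high FTLE indicate LCS candidates
```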
The third class of methods is centered around texture-based visualization of unsteady vector fields. Such methods provide dense visualization capabilities, naturally generalize from stationary to time-dependent settings, and, much like geometric techniques, can provide intuitive visualizations of vector fields.
We will present an overview of such methods for both two- and three-dimensional visualization and discuss theoretical and perceptual aspects of such methods, augmented by application examples.
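As a baseline for this family, the sketch below implements a minimal line integral convolution (LIC) for a steady 2D field (Python with NumPy; fixed-step Euler tracing and nearest-neighbour lookup are simplifications chosen for brevity): each pixel averages a white-noise texture along a short streamline through that pixel. The unsteady variants covered in the tutorial extend this idea to pathlines and evolving textures.

```python
import numpy as np

def lic(vx, vy, noise, length=20, step=0.5):
    """Minimal line integral convolution for a steady 2D vector field.
    vx, vy, noise are 2D arrays of the same shape; each output pixel is
    the average of the noise texture along a streamline traced forward and
    backward from that pixel (fixed-step Euler, nearest-neighbour lookup)."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    mag = np.hypot(vx, vy) + 1e-9          # normalise to unit speed
    ux, uy = vx / mag, vy / mag
    for i in range(h):
        for j in range(w):
            total, count = 0.0, 0
            for direction in (1.0, -1.0):
                x, y = float(j), float(i)
                for _ in range(length):
                    ii, jj = int(round(y)), int(round(x))
                    if not (0 <= ii < h and 0 <= jj < w):
                        break
                    total += noise[ii, jj]
                    count += 1
                    x += direction * step * ux[ii, jj]
                    y += direction * step * uy[ii, jj]
            out[i, j] = total / max(count, 1)
    return out

# Example: circular flow around the image centre.
ys, xs = np.mgrid[0:128, 0:128]
vx, vy = -(ys - 64.0), (xs - 64.0)
image = lic(vx, vy, np.random.rand(128, 128))
print(image.shape)
```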
The second part of the proposed tutorial is concerned with the implementation of a subset of the techniques presented in the first part in specific settings. First, we will describe GPU-accelerated implementations of geometric and texture-based methods. Such implementations are able to offer new levels of interactivity, and we discuss how this can be leveraged to enhance and extend visualization for time-varying vector fields. Second, for the very large amounts of data produced by state-of-the-art simulations, we examine the prospect of parallel visualization on clusters and supercomputers that allows adequate visualization of such data.
After discussing difficulties of applying the previously described methods to large time-varying vector fields, such as the non-local, data-dependent nature of integral curves, we present parallelization strategies that enable the use of these methods. Throughout the tutorial, we will devote significant time to illustrating the presented material with examples from many different application domains, such as fluid flow, astrophysics, magnetically confined fusion, electrodynamics, geodynamics, and earthquake modeling; thereby, we will provide the audience with a visualization perspective that extends beyond the specific characteristics of fluid flow.
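As a shared-memory illustration of one such strategy, parallelizing over seed points, the sketch below (Python; the velocity field and integration scheme are toy stand-ins, and real large-data settings additionally require distributing the data itself) hands each worker process its own set of pathlines to integrate.

```python
import numpy as np
from multiprocessing import Pool

def advect_seed(seed, t0=0.0, t1=10.0, dt=0.01):
    """Trace one pathline with forward Euler through a toy velocity field
    (a stand-in for simulation data that would normally be read per block)."""
    p, t, pts = np.array(seed, float), t0, []
    pts.append(p.copy())
    while t < t1:
        x, y = p
        v = np.array([-y, x])          # toy steady field for illustration
        p = p + dt * v
        t += dt
        pts.append(p.copy())
    return np.array(pts)

if __name__ == "__main__":
    # Parallelize over seeds: each worker integrates its own set of curves.
    seeds = [(np.cos(a), np.sin(a)) for a in np.linspace(0, 2 * np.pi, 64)]
    with Pool() as pool:
        curves = pool.map(advect_seed, seeds)
    print(len(curves), curves[0].shape)
```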
We have furthermore purposefully omitted a number of visualization methods that, while working well for stationary vector fields, have failed to demonstrate significant visualization capabilities in the unsteady case, such as topological methods. We will, however, take care to briefly point out some of these possibilities during the individual presentations. This allows us to limit the proposed tutorial to the half-day format; we aim to provide a concise and thoroughly modern perspective on time-varying vector field visualization.
We will compile supplementary materials to provide further information, references and software on all covered topics, as well as all materials shown during the tutorial.
» Tuesday, Afternoon
Exploring Design Decisions for Effective Information Visualization
Organizers: Jo Wood, Jason Dykes, Aidan Slingsby
This tutorial provides an opportunity for participants to design their own information visualization of some sample datasets. Using interactive software and data provided by the instructors, issues of color, layout, symbolization and animation are explored in turn. Participants are challenged to find patterns in the data using the visualizations they have designed themselves. Results from participants' visualizations are compared with those from the presenters, allowing insights into the data and good practice in information visualization design to be gained. Participants should be equipped with their own laptop capable of running Java applications. Prior to the session, participants are strongly encouraged to download the free software and data that will be used in the tutorial. The tutorial is suitable for anyone working with complex datasets who wishes to improve their data visualization design skills, in particular designing visualization solutions that match the research questions asked and the data to be analysed.