April 5-6, 2012

 

Modeling Initiation and Growth of an Atherosclerotic Lesion 

 Jay R. Walton (Texas A&M University)

 Abstract

Atherogenesis, or the initial phase in the development of an atherosclerotic lesion, is viewed as an inflammatory instability driven by oxidative modification of low-density lipoproteins, fueled by reactive oxygen species occurring as a byproduct of normal metabolic processes in the wall of muscular arteries. In this setting, atherogenesis is modeled as a Turing instability of a nonlinear reaction/diffusion/chemotaxis system of partial differential equations modeling inflammation occurring at a nascent lesion that leads to an inflammatory spiral. It is shown that antioxidant concentration at a nascent lesion can act as a control mechanism inhibiting the Turing instability that gives rise to atherogenesis. However, once the inflammatory spiral is initiated, arterial wall mechanics is coupled with the reaction/diffusion/chemotaxis equations modeling inflammation in order to predict lesion growth and evolution. It is conjectured that even after the inflammatory spiral is engaged and a lesion grows, the antioxidant concentration can act as a control mechanism inhibiting lesion growth and enabling the normal immune response to cause lesion regression. Proving this conjecture remains one of many open problems and questions for the coupled nonlinear system of partial differential equations described in this lecture as a model of atherosclerotic lesion initiation and growth.
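
To make the setting concrete, a generic reaction/diffusion/chemotaxis system of the Keller-Segel type can be written as below; the particular species, kinetics and coefficients of the model discussed in the lecture are not reproduced here, so this is only an illustrative form.

```latex
% Generic reaction/diffusion/chemotaxis system (Keller--Segel form).
% u = immune-cell density, c = chemoattractant (e.g. oxidized LDL) concentration;
% D_u, D_c are diffusivities, \chi is the chemotactic sensitivity, and
% f, g are model-specific reaction kinetics.
\frac{\partial u}{\partial t} \;=\; D_u\,\Delta u
      \;-\; \chi\,\nabla\!\cdot\!\bigl(u\,\nabla c\bigr) \;+\; f(u,c),
\qquad
\frac{\partial c}{\partial t} \;=\; D_c\,\Delta c \;+\; g(u,c).
```

A Turing-type instability of such a system occurs when a spatially uniform steady state that is stable to homogeneous perturbations becomes unstable once diffusion and chemotaxis are taken into account.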

Petascale Visualization for Neuroscience

 Johanna Beyer (KAUST)

 Abstract

Recent advances in high-resolution data acquisition such as electron microscopy (EM) result in volume data of extremely large size. In neuroscience, EM volumes of brain tissue have pixel resolutions of 3-5nm and slice distances of 25-50nm, which even for sub-millimeter tissue blocks result in hundreds of gigabytes to terabytes of raw data. The capability to interactively explore these volumes in 3D is crucial for analysis, for example to trace neural connections in the field of Connectomics. However, this poses significant challenges that require the development of novel, scalable systems for visualization and interactive analysis. This talk will give an overview of the research that we are doing in this area, with the goal of processing and visualizing data only on demand, driven by actual on-screen visibility.

In particular, this talk will focus on a new demand-driven volume rendering approach that allows terabyte-sized EM volumes to be viewed interactively without pre-computing a 3D multi-resolution representation, and that is based on direct virtual memory access rather than standard octree traversal.
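
As a rough illustration of the virtual-memory idea (not the authors' GPU implementation), the sketch below resolves volume samples through a page table and loads bricks from backing storage only when they are first touched; the class, loader and brick size are assumed for illustration.

```python
import numpy as np

BRICK = 32  # brick edge length in voxels (illustrative)

class VirtualVolume:
    """Demand-driven, page-table style access to a bricked volume.
    Bricks are fetched from backing storage only when a sample touches them."""

    def __init__(self, loader, shape):
        self.loader = loader      # callable: brick index -> ndarray of shape (BRICK,)*3
        self.shape = shape        # full volume size in voxels
        self.page_table = {}      # resident bricks, keyed by brick index

    def sample(self, x, y, z):
        key = (x // BRICK, y // BRICK, z // BRICK)
        brick = self.page_table.get(key)
        if brick is None:         # "page fault": fetch the brick on demand
            brick = self.loader(key)
            self.page_table[key] = brick
        return brick[x % BRICK, y % BRICK, z % BRICK]

# usage: a synthetic loader stands in for out-of-core EM data
vol = VirtualVolume(lambda k: np.random.rand(BRICK, BRICK, BRICK), (1024, 1024, 1024))
print(vol.sample(100, 200, 300))
```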

 

Collaborative Digital Pathology for Multi-Touch Mobile Platforms

Jens Schneider (KAUST)

Abstract

Collaborative slide image viewing systems are becoming increasingly important in pathology applications such as telepathology and E-learning. Despite rapid advances in computing and imaging technology, current digital pathology systems have limited performance with respect to remote viewing of whole slide images on desktop or mobile computing devices. In this talk, we present a digital pathology client-server system that allows multiple users to remotely collaborate over standard networks while providing interactive visualization of large histopathology slide images.

Large Data Management, Analysis and Visualization for Scientific Discovery


Valerio Pascucci (University of Utah)

Abstract

Advanced techniques for understanding large-scale data models are a crucial ingredient for the success of any supercomputing center and any data-intensive scientific investigation. Developing such techniques involves a number of major challenges, such as the real-time management of massive data and the quantitative analysis of scientific features of unprecedented complexity. In this talk, I will present the application of a discrete topological framework for the representation and analysis of large-scale scientific data. Due to the combinatorial nature of this framework, we can implement the core constructs of Morse theory without the approximations and instabilities of classical numerical techniques. The inherent robustness of the combinatorial algorithms allows us to address the high complexity of the feature extraction problem for high-resolution scientific data. To deal with massive amounts of information, we adopt a scalable approach for processing information with high-performance selective queries on multiple terabytes of raw data. The use of progressive streaming techniques allows us to achieve interactive processing rates on a variety of computing devices, ranging from handheld devices like an iPhone/iPad, to desktop workstations, to the I/O of parallel computers. Our system has enabled successful quantitative analysis for several massively parallel simulations, including the study of turbulent hydrodynamic instabilities, porous material under stress and failure, and lifted flames that lead to clean energy production.
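
For readers unfamiliar with the combinatorial flavor of such methods, the following toy sketch classifies the vertices of a 2D regular grid as minima, maxima or saddles purely by comparing neighboring values, in the spirit of piecewise-linear Morse theory; it is a simplified illustration, not the framework used in the talk.

```python
import numpy as np

def classify_critical_points(f):
    """Label interior grid vertices by counting sign changes of
    f(neighbor) - f(vertex) around the 8-neighborhood (a combinatorial test):
    0 changes -> extremum, 4 or more -> saddle, otherwise regular."""
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    labels = {}
    for i in range(1, f.shape[0] - 1):
        for j in range(1, f.shape[1] - 1):
            signs = [np.sign(f[i + di, j + dj] - f[i, j]) for di, dj in ring]
            changes = sum(signs[k] != signs[(k + 1) % 8] for k in range(8))
            if changes == 0:
                labels[(i, j)] = 'max' if signs[0] < 0 else 'min'
            elif changes >= 4:
                labels[(i, j)] = 'saddle'
    return labels  # vertices not listed are regular points

print(len(classify_critical_points(np.random.rand(16, 16))))
```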

A Parametric Volume with Applications to Subsurface Modeling

Alyn Rockwood (KAUST)

Abstract

A generalized, transfinite Shepard's interpolant has been developed and will be presented for terrain modeling.  It allows a user to model ridges, valleys, peaks and other features; and then it passes a surface through these features with given slopes.  For this talk, we extend the methodology to volumes passing through given surfaces, curves and points in three dimensions.  In particular, we can apply the methods to create a 3D parametric volume which interpolates seismic horizons, faults and well log data.  It is a parametric model, in which the preimages of these features are simple planes and lines in parametric space.  It results in highly compressed and highly editable models for representing and computing subsurface data.

 This is joint work with Ali Charara.
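
For background, the classical point-based Shepard (inverse-distance-weighted) interpolant that the transfinite construction generalizes can be sketched as follows; the weighting exponent and the sample data are illustrative only.

```python
import numpy as np

def shepard(points, values, q, p=2, eps=1e-12):
    """Classical Shepard interpolation of scattered point data:
    a weighted average whose weights blow up near the data sites,
    forcing the interpolant to pass through the given values."""
    points, values, q = np.asarray(points), np.asarray(values), np.asarray(q)
    d = np.linalg.norm(points - q, axis=1)
    if np.any(d < eps):                    # query coincides with a data site
        return values[np.argmin(d)]
    w = 1.0 / d**p
    return np.sum(w * values) / np.sum(w)

# usage: interpolate a value at a query location from three scattered samples
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(shepard(pts, [10.0, 20.0, 30.0], (0.25, 0.25)))
```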

Roadmap-based Techniques for Modeling Group Behaviors in Multi-Agent Systems

Sam Rodriguez (Texas A&M University)

 Abstract

In our study of multi-agent group behavior, we look at simulations for groups of agents performing different actions. The system that we have developed is highly tunable and allows us to study a variety of behaviors and scenarios. The system is tunable in the kinds of agents that can exist and in the parameters that describe the agents. The agents can have any number of behaviors, which dictate how they react throughout a simulation. An aspect that is unique to our approach to multi-agent group behavior is the environmental encoding that the agents use when navigating. Our roadmap-based approach can be utilized to encode both basic and very complex environments.  A roadmap is a convenient representation in that it is a lightweight abstraction. Along with its usefulness in navigation, we show other benefits of using a roadmap in our behavioral strategies, which include allowing implicit communication and tracking between agents, and encoding classical problems.

In this work, we use our multi-agent system to focus on two applications: pursuit-evasion and evacuation planning. One aspect that has been unique in the study of these two applications is our ability to handle complex environments, which is mainly due to our usage of the roadmap to encode them. Because we are able to handle more complex environments, we have been able to study these applications in more depth and in more complex scenarios than most previous approaches to both of these applications.
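
To illustrate why a roadmap is a convenient lightweight abstraction for navigation, the sketch below plans a path for an agent by running Dijkstra's algorithm over a roadmap graph; the graph structure and costs are assumed for illustration, and this is not the authors' system.

```python
import heapq

def roadmap_path(roadmap, start, goal):
    """Dijkstra search over a roadmap given as {node: [(neighbor, cost), ...]}.
    Returns (total cost, node sequence) or None if the goal is unreachable."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, w in roadmap.get(node, []):
            if nbr not in visited:
                heapq.heappush(frontier, (cost + w, nbr, path + [nbr]))
    return None

# usage: a tiny roadmap with two rooms ('A', 'B') joined by a corridor node 'C'
g = {'A': [('C', 1.0)], 'B': [('C', 2.0)], 'C': [('A', 1.0), ('B', 2.0)]}
print(roadmap_path(g, 'A', 'B'))
```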

 

Rethinking Regular Grids

Paul Rosen (University of Utah)

Abstract

Regular grids are one of the most commonly used platforms for computation and visualization because of properties such as memory and computational efficiency, uniform sampling of the data domain, and natural mappings to the computer monitor. However, when data is no longer uniformly distributed, hot spots in the data can be undersampled, and regions of low importance can waste space in visualizations along with the computation and memory resources required to process them. For simulations, AMR-type approaches are often used to address these issues, though they have their own limitations, such as complex memory layouts, workloads that are difficult to distribute, and unnatural mappings to visualizations. In my talk, I will discuss recent advancements we have made towards generalizing regular grids, taking advantage of the beneficial qualities of regular grids while allowing increased flexibility. The approach retains the regular grid but applies sophisticated data domain transformations, giving the flexibility to sample in a non-uniform manner. The result is a computationally efficient data structure with flexible sampling capabilities and a natural mapping to the screen. My talk will focus on the application of these approaches to computer graphics and visualization, though many of the approaches have direct analogues in other scientific computing domains such as simulation and data storage.
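
The sketch below is a one-dimensional toy version of the grid-generalization idea: a regular grid in parameter space is pushed through a domain transformation so that samples concentrate where they are most needed; the warp function is an assumption chosen purely for illustration.

```python
import numpy as np

def warped_grid_samples(n, warp):
    """Regular grid sampling composed with a domain transformation:
    the grid stays regular in parameter space, but the samples land
    non-uniformly in data space."""
    u = np.linspace(0.0, 1.0, n)   # regular parameter-space grid
    return warp(u)                 # non-uniform samples in data space

# usage: the quadratic warp clusters samples near x = 0 (an assumed "hot spot")
print(warped_grid_samples(8, lambda u: u**2))
```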

Exact Fast Computation of Band Depth for Visualization of Large Functional Datasets

Ying Sun (SAMSI)

Abstract

 Band depth is an important nonparametric measure that generalizes order statistics and makes univariate methods based on order statistics possible for functional data. However, the computational burden of band depth limits its applicability when large functional or image datasets are considered. We propose an exact fast method to speed up the band depth computation. Remarkable computational gains are demonstrated through simulation studies comparing our proposal with the original computation and one existing approximate method. We illustrate the use of our procedure on functional boxplots and 3D surface boxplots for visualization of large space-time datasets. This talk is based on joint work with Marc G. Genton and Douglas W. Nychka.
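
For reference, the brute-force definition that the exact fast method accelerates can be sketched as follows (band depth with bands formed by pairs of curves, J = 2); the sketch is quadratic in the number of curves and is intended only to fix the definition.

```python
import numpy as np
from itertools import combinations

def band_depth(curves):
    """Naive band depth (J = 2): for each curve, the fraction of curve pairs
    whose pointwise band [min, max] completely contains it."""
    n = curves.shape[0]
    pairs = list(combinations(range(n), 2))
    depth = np.zeros(n)
    for i in range(n):
        inside = 0
        for j, k in pairs:
            lo = np.minimum(curves[j], curves[k])
            hi = np.maximum(curves[j], curves[k])
            if np.all((lo <= curves[i]) & (curves[i] <= hi)):
                inside += 1
        depth[i] = inside / len(pairs)
    return depth

# usage: the deepest curve is a candidate functional median
print(band_depth(np.random.rand(20, 50)).argmax())
```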

 

Functional Median Polish, with Climate Applications

Marc G. Genton (Texas A&M University)

 Abstract

We propose functional median polish, an extension of univariate median polish, for one-way and two-way functional analysis of variance (ANOVA). The functional median polish estimates the functional grand effect and the functional main factor effects based on functional medians in an additive functional ANOVA model assuming no interaction among factors. A functional rank test is used to assess whether the functional main factor effects are significant. The robustness of the functional median polish is demonstrated by comparing its performance with that of the traditional functional ANOVA fitted by means under different outlier models in simulation studies. The functional median polish is illustrated on various applications in climate science, including one-way and two-way ANOVA when the functional data are either curves or images; specifically, we analyze Canadian temperature data, U.S. precipitation observations, and outputs of global and regional climate models. This talk is based on joint work with Ying Sun.
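
As a concrete illustration, a pointwise version of Tukey's median polish for a one-way functional layout can be sketched as follows; the iteration scheme and data layout are simplified assumptions, not the exact estimator of the talk.

```python
import numpy as np

def functional_median_polish(data, n_iter=10):
    """Pointwise median polish for a one-way functional layout.
    data has shape (levels, replicates, time); medians are alternately swept
    into a grand effect curve and per-level effect curves, leaving residuals."""
    resid = data.astype(float).copy()
    grand = np.zeros(data.shape[2])
    effects = np.zeros((data.shape[0], data.shape[2]))
    for _ in range(n_iter):
        level_med = np.median(resid, axis=1)      # per-level median curve
        effects += level_med
        resid -= level_med[:, None, :]
        common = np.median(effects, axis=0)       # sweep the common part out
        grand += common
        effects -= common
    return grand, effects, resid

g, eff, r = functional_median_polish(np.random.rand(3, 5, 24))
print(g.shape, eff.shape, r.shape)
```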

Uncertainty Visualization: Beyond Mean and Standard Deviation

Kristi Potter (University of Utah)

Abstract

As data sets become larger and more complex, information about the uncertainties they contain becomes increasingly apparent. Due to the difficulty of visually expressing all of this information at once, statistics such as the mean and standard deviation are often used to summarize uncertainty, thus encoding a general idea of its amount and location using only one or two numbers. While the mean and standard deviation are, in general, effective summary statistics, they are not always appropriate or even feasible for particular types of data. In this talk, I will present an overview of uncertainty visualization using summarization, discuss instances where the mean and standard deviation fail, and present exemplary solutions for such cases.

Superresolution Seismic Imaging

 Gerard Schuster (KAUST)

Abstract 

Conventional seismic imaging is Abbe-limited to half the wavelength associated with the minimum source frequency. Using scattering Green’s functions, we show theoretically that seismic imaging can go beyond the Abbe resolution limit by exploiting evanescent fields. We confirm this prediction with both simulated and field seismic data examples. A crucial feature of far-field superresolution imaging is the use of multiple scattering.
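
For context, the half-wavelength (Abbe) resolution limit of conventional imaging referred to above is usually written as follows, where v is the medium velocity and f the source frequency; this is a standard textbook form stated for orientation rather than a result from the talk.

```latex
% Half-wavelength (Abbe) diffraction limit of conventional imaging:
\Delta x \;\gtrsim\; \frac{\lambda}{2} \;=\; \frac{v}{2 f}
```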

Modern Numerical Methods for Modeling Convection in the Earth’s Mantle

Timo Heister (Texas A&M University)

Abstract

We present the new open source code ASPECT for modeling convection in the Earth's mantle; see [1,2]. ASPECT uses modern numerical methods and provides very good parallel scalability, achieved by building on the open source finite element library deal.II (see [3,4]). At the same time, we are striving to make the code easy to use and extend. In this talk we will highlight the main features, demonstrate the parallel scalability, and show numerical examples.

This is joint work with Wolfgang Bangerth, Thomas Geenen, and Martin Kronbichler.
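
For context, a common Boussinesq formulation of mantle convection couples an incompressible Stokes system to temperature advection-diffusion, as written below; ASPECT's actual formulation is more general, so this is only an indicative form.

```latex
% Boussinesq mantle convection: Stokes flow driven by thermal buoyancy,
% coupled to an advection--diffusion equation for temperature T.
-\nabla\cdot\bigl(2\eta\,\varepsilon(\mathbf{u})\bigr) + \nabla p
   \;=\; \rho_0\bigl(1-\alpha\,(T-T_0)\bigr)\,\mathbf{g},
\qquad
\nabla\cdot\mathbf{u} \;=\; 0,
\qquad
\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T \;=\; \kappa\,\Delta T .
```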

Bayesian Uncertainty Quantification for Subsurface Inversion Using Multiscale Hierarchical Model

 Bani Mallick (Texas A&M University)

 Abstract

We consider a Bayesian approach to nonlinear inverse problems in which the unknown quantity is a random field (spatial or temporal). The Bayesian approach contains a natural mechanism for regularization in the form of prior information, can incorporate information from heterogeneous sources, and provides a quantitative assessment of uncertainty in the inverse solution. The Bayesian setting casts the inverse solution as a posterior probability distribution over the model parameters. A Karhunen-Loève expansion is used for dimension reduction of the random field. Furthermore, we use a hierarchical Bayes model to inject multiscale data into the modeling framework. In this Bayesian framework, we have shown that this inverse problem is well-posed by proving that the posterior measure is Lipschitz continuous with respect to the data in the total variation norm. Computational challenges in this construction arise from the need for repeated evaluations of the forward model (e.g. in the context of MCMC) and are compounded by the high dimensionality of the posterior. We develop a two-stage reversible jump MCMC algorithm that screens out bad proposals in an inexpensive first stage. Numerical results are presented for the estimation of two-dimensional permeability fields obtained from petroleum reservoirs.

This is joint work with Anirban Mondal, Yalchin Efendiev and Akhil Datta-Gupta.
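
To illustrate the screening idea, the sketch below implements a generic two-stage Metropolis-Hastings step with a symmetric random-walk proposal, in which a cheap coarse-scale posterior filters proposals before the expensive forward model is evaluated; the talk's sampler is a two-stage reversible jump algorithm, so this is only an analogue of the mechanism.

```python
import numpy as np

def two_stage_mh(logpost, logpost_coarse, theta0, n_steps, step=0.5, rng=None):
    """Two-stage Metropolis-Hastings with a symmetric random-walk proposal:
    stage 1 accepts/rejects with the inexpensive surrogate posterior,
    stage 2 corrects with the full posterior so the chain targets it exactly."""
    rng = rng or np.random.default_rng()
    theta, chain = theta0, [theta0]
    for _ in range(n_steps):
        cand = theta + rng.normal(0.0, step)
        a1 = min(1.0, np.exp(logpost_coarse(cand) - logpost_coarse(theta)))
        if rng.random() < a1:                       # passed the cheap screen
            a2 = min(1.0, np.exp(logpost(cand) - logpost(theta)
                                 + logpost_coarse(theta) - logpost_coarse(cand)))
            if rng.random() < a2:                   # full-model correction
                theta = cand
        chain.append(theta)
    return np.array(chain)

# usage: toy 1D posterior with a broadened surrogate standing in for the coarse model
samples = two_stage_mh(lambda x: -0.5 * x**2, lambda x: -0.25 * x**2, 0.0, 2000)
print(samples.mean(), samples.std())
```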

 Exploratory Models for High Dimensional Data

Sam Gerber (University of Utah)

Abstract

High dimensional data arises in a variety of applications. In neurological studies, large data sets of diffusion tensor and magnetic resonance images, consisting of millions of measurements, are acquired. In climate science there is a growing demand to quantify the uncertainty in simulations, which are controlled by up to a hundred parameters. For scientists working with such data, it is often very difficult to gain a qualitative understanding to reason and hypothesize about the underlying process. Thus, data models that convey insights into structures present in the data are exceedingly important and are the focus of this talk.

Bayesian Subset Modeling for High Dimensional Generalized Linear Models

Qifan Song (Texas A&M University)

Abstract

In this talk, we propose a new prior setting for high dimensional generalized linear models (GLMs), which leads to a Bayesian subset regression (BSR) whose maximum a posteriori model coincides with the minimum extended BIC model. We establish the consistency of the posterior under mild conditions and propose a variable screening procedure based on marginal posterior inclusion probabilities. We show that this procedure shares the same sure screening and consistency properties as the existing sure independence screening (SIS) procedure. Our numerical results indicate that BSR generally outperforms penalized likelihood methods such as the Lasso, the elastic net, SIS and ISIS. The models selected by BSR tend to be sparser and, more importantly, to have higher generalization ability.
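
For reference, the extended BIC that the maximum a posteriori model is said to coincide with is, in Chen and Chen's form, written below for a subset s of the p predictors, with sample size n and tuning parameter gamma in [0, 1].

```latex
% Extended BIC for a variable subset s (|s| predictors out of p, n observations):
\mathrm{EBIC}_{\gamma}(s) \;=\; -2\,\log L\bigl(\hat{\beta}_{s}\bigr)
   \;+\; |s|\,\log n \;+\; 2\gamma \log\binom{p}{|s|}
```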

Dynamic Particle System for Mesh Extraction on the GPU

Mark Kim (University of Utah)

Abstract

Extracting isosurfaces represented as high quality meshes from three-dimensional scalar fields is needed for many important applications, particularly visualization and numerical simulations. One recent advance for extracting high quality meshes for isosurface computation is based on a dynamic particle system. Unfortunately, this state-of-the-art particle placement technique requires a significant amount of time to produce a satisfactory mesh. To address this issue, we study the parallelism property of the particle placement and make use of CUDA, a parallel programming technique on the GPU, to significantly improve the performance of particle placement. This paper describes the curvature dependent sampling method used to extract high quality meshes and describes its implementation using CUDA on the GPU.
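
As a rough sketch of the class of particle systems involved (ignoring the constraint to the isosurface and the curvature-dependent sampling density), the update below pushes each particle away from its neighbors under a Gaussian inter-particle energy; because every particle's update is independent, the loop maps naturally onto a per-particle GPU kernel. All parameters are illustrative.

```python
import numpy as np

def repel_particles(pos, sigma=0.1, n_iter=50, step=0.1):
    """Gaussian-energy particle repulsion: each particle moves away from its
    neighbors, spreading the samples evenly (per-particle updates are
    independent, hence easy to parallelize)."""
    pos = pos.copy()
    for _ in range(n_iter):
        diff = pos[:, None, :] - pos[None, :, :]       # pairwise offsets
        d2 = np.sum(diff**2, axis=-1)
        w = np.exp(-d2 / (2.0 * sigma**2))             # Gaussian energy weights
        np.fill_diagonal(w, 0.0)                       # ignore self-interaction
        force = np.sum(w[:, :, None] * diff, axis=1)   # net push away from neighbors
        pos += step * force
    return pos

print(repel_particles(np.random.rand(100, 3)).shape)
```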