May 6-8, 2012
A Parametric Volume with Applications to Subsurface Modeling
Alyn Rockwood
Abstract
A generalized, transfinite Shepard interpolant has been developed and will be presented for terrain modeling. It allows a user to model ridges, valleys, peaks and other features, and then passes a surface through these features with given slopes. For this talk, we extend the methodology to volumes passing through given surfaces, curves and points in three dimensions. In particular, we can apply the methods to create a 3D parametric volume which interpolates seismic horizons, faults and well log data. It is a parametric model in which the preimages of these features are simple planes and lines in parametric space. The result is highly compressed and highly editable models for representing and computing with subsurface data.
This is joint work with Ali Charara.
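For readers unfamiliar with the classical (pointwise) Shepard interpolant that the generalized, transfinite version extends, the following minimal Python sketch illustrates inverse-distance weighting through scattered data; the sample points and power parameter are illustrative and are not taken from the talk.

```python
import numpy as np

def shepard(x, nodes, values, p=2.0):
    """Classical Shepard interpolation: a weighted average of the data
    values with inverse-distance weights w_i = 1 / |x - x_i|^p."""
    d = np.linalg.norm(nodes - x, axis=1)
    if np.any(d == 0.0):               # query point coincides with a data point
        return values[np.argmin(d)]
    w = 1.0 / d**p
    return np.dot(w, values) / np.sum(w)

# Illustrative terrain-like data: (x, y) locations and heights.
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
heights = np.array([0.0, 1.0, 1.0, 0.0, 2.0])   # a "peak" at the center
print(shepard(np.array([0.4, 0.6]), nodes, heights))
```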
Spline-based Emulators for Radiative Shock Experiments with Measurement Error
Bani K. Mallick
Abstract
Radiation hydrodynamics and radiative shocks are of fundamental interest in high-energy-density physics research due to their importance in understanding astrophysical phenomena such as supernovae. Shock waves with similar features can also be reproduced in controlled laboratory experiments. However, the cost and time constraints of the experiments necessitate the use of a computer code to generate a reasonable number of outputs for making valid inferences. We focus on modeling emulators that can efficiently assimilate these two sources of information while accounting for their intrinsic differences. The goal is to learn how to predict the breakout time of the shock given information on associated parameters such as pressure and energy. Under the framework of the Kennedy-O’Hagan model, we introduced an emulator based on adaptive splines. Depending on whether an interpolator of the computer-code output or a computationally fast model is preferred, a couple of different variants were proposed. These choices are shown to perform better than the conventional Gaussian-process-based emulator. For the shock experiment dataset, a number of features related to computer model validation, such as the use of an interpolator, the necessity of a discrepancy function, and accounting for experimental heterogeneity, were discussed, implemented and validated for the current dataset. In addition to the typical Gaussian measurement error for real data, we considered alternative specifications suitable for incorporating noninformativeness in the error distributions, more in agreement with the current experiment. Comparative diagnostics, highlighting the effect of the measurement-error model on predictive uncertainty, will also be presented.
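As a schematic illustration of the Kennedy-O’Hagan decomposition (field data = emulator of the computer code + discrepancy + measurement error), the toy Python sketch below fits a smoothing-spline emulator to simulator runs and a second spline to the residual discrepancy. The one-dimensional test functions and noise levels are hypothetical, and the adaptive-spline and Bayesian machinery of the talk is not reproduced.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)

# Hypothetical computer model eta(x) and "true" physical process zeta(x).
eta  = lambda x: np.sin(2 * np.pi * x)            # simulator output
zeta = lambda x: np.sin(2 * np.pi * x) + 0.3 * x  # reality = simulator + drift

# Simulator runs (cheap, noise-free) and field observations (noisy).
x_sim = np.linspace(0, 1, 40); y_sim = eta(x_sim)
x_obs = rng.uniform(0, 1, 15);  y_obs = zeta(x_obs) + rng.normal(0, 0.05, 15)

# Spline emulator of the computer code.
emulator = UnivariateSpline(x_sim, y_sim, s=0.0)

# Discrepancy spline fitted to the field residuals y_obs - emulator(x_obs).
order = np.argsort(x_obs)
discrepancy = UnivariateSpline(x_obs[order],
                               (y_obs - emulator(x_obs))[order], s=0.1)

# Calibrated prediction at new inputs: emulator + discrepancy.
x_new = np.linspace(0, 1, 5)
print(emulator(x_new) + discrepancy(x_new))
```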
Dagger: A Tool for Design and Analysis of Decision Trees and Rules
Igor Chikalov and Mikhail Moshkov
Abstract
Dagger allows: (i) sequential optimization of exact and approximate decision trees relative to depth, average depth, number of nodes, etc.; (ii) study of relationships between two cost functions, or between a cost function and an uncertainty measure (when approximate trees are studied); (iii) counting the number of optimal trees containing a given attribute; (iv) use of various greedy heuristics for decision tree construction. The same functionality is available for decision rules, where we concentrate on length, coverage and the number of misclassifications as cost functions. We will discuss the functionality of Dagger and problems connected with its parallelization.
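The tree cost functions mentioned above (number of nodes, depth, average depth) can be made concrete with a small scikit-learn sketch. This is only an illustration of the quantities Dagger optimizes, not of Dagger itself or its algorithms, and the iris dataset is just a stand-in.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
t = clf.tree_

# Depth of every node, found by walking down from the root.
depth = np.zeros(t.node_count, dtype=int)
stack = [(0, 0)]                       # (node index, its depth)
while stack:
    node, d = stack.pop()
    depth[node] = d
    if t.children_left[node] != -1:    # internal node: visit both children
        stack.append((t.children_left[node], d + 1))
        stack.append((t.children_right[node], d + 1))

leaves = t.children_left == -1
tree_depth = depth[leaves].max()
# Average depth weighted by how many training rows reach each leaf.
avg_depth = np.average(depth[leaves], weights=t.n_node_samples[leaves])
print(t.node_count, tree_depth, avg_depth)
```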
Petascale Hydrologic Modeling: Needs and Challenges
Craig Douglas
Abstract
Hydrologic modeling has remained in the GigaFlop, single-CPU era well into the TeraFlop and PetaFlop eras. It is now moving into the PetaFlop era by combining new models with at least a 100-fold reduction in average mesh spacing. Our target is the upper Colorado River, which includes mountains, valleys, snow melt, deserts, and swamps. We will use unprecedented amounts of LiDAR-based data. In this talk, I will describe the NSF-funded project involving three universities in Utah, one in Wyoming, NCAR, and the Army Corps of Engineers ERDC. A goal of this talk is to attract members of KAUST, IAMCS and partners, and the NumPor SRI to cooperate on trying to model all of the Kingdom’s water channels. This would lead to an ExaFlop-capable model.
Efficient Time Integration for Computational Fluid Dynamics
David Ketcheson
Abstract
Computational efficiency is an increasingly important concern in CFD, as codes are applied on ever-larger computers to increasingly challenging problems with diminishing error tolerances. In many CFD codes, the simplest way to improve efficiency is through use of improved time integration schemes. We demonstrate the use of numerical optimization to design improved time integration schemes for a variety of circumstances, including improvement of the stable step size, improvement of accuracy, and reduction of memory requirements.
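As one concrete instance of the "stable step size" criterion, the sketch below compares the largest stable step sizes of forward Euler and the classical four-stage RK4 method for the linear test problem y' = λy with λ on the negative real axis. The stability polynomials are standard, the bisection tolerance is arbitrary, and this is not the optimization procedure used in the talk.

```python
import numpy as np

def stable_step(stability_poly, lam=-1.0, hi=10.0, tol=1e-10):
    """Largest h with |R(h*lam)| <= 1, found by bisection on the real axis
    (assumes the stability condition holds for all smaller step sizes)."""
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if abs(stability_poly(mid * lam)) <= 1.0:
            lo = mid
        else:
            hi = mid
    return lo

# Stability functions R(z) for forward Euler and the classical RK4 method.
euler = lambda z: 1 + z
rk4   = lambda z: 1 + z + z**2/2 + z**3/6 + z**4/24

print(stable_step(euler))   # about 2.0
print(stable_step(rk4))     # about 2.785
```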
Numerical simulation of lifted n-heptane tribrachial laminar flames with detailed chemistry and transport
Fabrizio Bisetti
Abstract
Tribrachial (or triple) flames occur in the presence of composition gradients. Triple flames display a characteristic balance between chemistry and transport and may help explain laminar and turbulent flame stabilization. In this talk I will present results from our recent investigation of the physical behavior of lifted, laminar jet flames of the triple flame type. The computational setup mimics a laboratory jet flame for which data is available. For the first time in the literature, real transportation fuels are considered. The simulations employ a low Mach number formulation of the variable density, reactive Navier-Stokes equations. A semi-implicit Crank-Nicolson time advancement scheme is used and a pressure correction step ensures mass conservation. Spatial resolution and time stepping requirements will be discussed together with computational costs on parallel architectures.
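To make the time-advancement strategy concrete, here is a minimal one-dimensional sketch of a semi-implicit scheme in the same spirit: diffusion advanced with Crank-Nicolson (implicitly) and advection treated explicitly. The periodic grid, velocity, and diffusivity are made up for illustration, and the low-Mach, variable-density formulation and pressure-correction step of the actual solver are not included.

```python
import numpy as np

n, L = 64, 1.0
dx = L / n
x = np.arange(n) * dx
u, D, dt = 1.0, 5e-3, 2e-3             # advection speed, diffusivity, time step
phi = np.exp(-100 * (x - 0.5) ** 2)    # initial scalar profile

# Periodic second-difference (diffusion) and centered first-difference matrices.
I = np.eye(n)
Dxx = (np.roll(I, 1, 0) - 2 * I + np.roll(I, -1, 0)) / dx**2
Dx  = (np.roll(I, -1, 0) - np.roll(I, 1, 0)) / (2 * dx)

A = I - 0.5 * dt * D * Dxx             # implicit (Crank-Nicolson) diffusion part
for step in range(200):
    # Explicit advection, Crank-Nicolson diffusion.
    rhs = phi + 0.5 * dt * D * (Dxx @ phi) - dt * u * (Dx @ phi)
    phi = np.linalg.solve(A, rhs)
print(phi.min(), phi.max())
```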
Robust Numerical Methods for Flows in Highly Heterogeneous Porous Media
Raytcho Lazarov
Abstract
We shall present an overview of some approximation strategies for the numerical solution of flows in highly heterogeneous porous media and robust solution methods for the resulting algebraic systems. Our main goal is the derivation of numerical methods that work well for both Darcy and Brinkman equations and could be used either (1) as a stand-alone numerical upscaling procedure or (2) as robust (with respect to the high contrast of the porous media) iterative methods for solving the finite element systems on a fine spatial scale. The preconditioner is based on a domain decomposition technique, and the robustness is achieved via a stable decomposition over a coarse space formed by the coarse-grid finite element space augmented with patched-together eigenfunctions corresponding to the smallest eigenvalues of properly weighted local problems. This approach has a natural abstract framework, which we shall discuss as well. Applications include numerical upscaling of highly heterogeneous media modeled by Brinkman, Darcy, and steady-state Richards’ equation (including Haverkamp and van Genuchten models for the relative permeability).
This was a joint project with Yalchin Efendiev, Juan Galvis, and J. Willems.
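For orientation, a generic form of such a two-level overlapping Schwarz preconditioner with a spectral coarse space can be written as follows. The precise weighting of the local eigenvalue problems used in the work discussed here may differ, so this should be read as a schematic rather than the exact construction.

```latex
% Two-level additive Schwarz preconditioner: one coarse solve A_0 plus
% local subdomain solves A_i on the overlapping subdomains \omega_i.
M^{-1} = R_0^{T} A_0^{-1} R_0 \;+\; \sum_{i=1}^{N} R_i^{T} A_i^{-1} R_i,
\qquad A_0 = R_0 A R_0^{T}.
% On each subdomain \omega_i, solve a weighted local eigenproblem
a_{\omega_i}(\phi, v) \;=\; \lambda \, m_{\omega_i}(\phi, v)
\quad \text{for all test functions } v,
% where a_{\omega_i} carries the heterogeneous coefficient \kappa and
% m_{\omega_i} is a suitably \kappa-weighted mass form.  The coarse space is
% spanned by partition-of-unity functions \chi_i times the eigenfunctions
% with the smallest eigenvalues:
V_0 = \operatorname{span}\{\, \chi_i \phi_{i,\ell} \;:\; \lambda_{i,\ell} \le \lambda_{\mathrm{tol}} \,\}.
```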
Extracting 200 Hz information from 50 Hz Data: A Seismic Scanning Tunneling Macroscope
Gerard Schuster
Abstract
We propose a seismic scanning tunneling macroscope (SSTM) that can detect the presence of sub-wavelength scatterers in the near-field of either the source or the receivers. Analytic formulas for the time reverse mirror (TRM) profile associated with a single scatterer model show the spatial resolution limit to be, unlike the Abbe limit of λ/2, independent of wavelength and linearly proportional to the source-scatterer separation as long as the point scatterer is in the near-field region; if the sub-wavelength scatterer is a spherical impedance discontinuity, then the resolution will be limited by the radius of the sphere. This means that, as the scatterer approaches the source, spatial imaging with super-resolution can be achieved. This is analogous to an optical scanning tunneling microscope that has sub-wavelength resolution. Scaled to seismic frequencies, it is theoretically possible to extract 100 Hz information from 20 Hz data by imaging of near-field seismic energy.
Efficient Collocation Methods for Stochastic Characterization of EMC/EMI Phenomena on Electrically Large Platforms
Hakan Bağcı
Abstract
Stochastic methods have been used extensively to quantify effects due to uncertainty in system parameters (e.g. material properties, geometry, and electrical constants) and/or excitation on observables (e.g. voltages across mission-critical circuit elements) pertinent to electromagnetic compatibility and interference (EMC/EMI) analysis. In recent years, probabilistic/stochastic collocation (SC) methods, especially those leveraging generalized polynomial chaos (gPC) expansions, have received significant attention. SC-gPC methods probe surrogate models (i.e. compact polynomial input-output representations) to statistically characterize observables. They are nonintrusive, that is they use readily available deterministic simulators, and often cost only a fraction of direct Monte-Carlo (MC) methods. Unfortunately, SC-gPC-generated surrogate models often lack accuracy (i) when the number of random system variables is large and/or (ii) when the observables exhibit rapid variations.
The focus of this talk will be on extensions of SC-gPC methods that permit accurate construction of efficient-to-probe surrogate models for EMC/EMI observables by addressing the above shortcomings. The methods that will be described leverage iteratively constructed high-dimensional model representation (HDMR) expansions, which express observables in terms of only the most significant contributions from random variables [addressing concern (i)]. The contributions that feature in the HDMR expansion are approximated via an h-adaptive stochastic collocation method [addressing concern (ii)]. Adaptively and iteratively generated HDMR-based surrogate models enable the efficient and accurate stochastic characterization of wave interactions with electronic systems subject to many more manufacturing uncertainties than can be addressed using “classical” SC-gPC methods.
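As a reminder of how a basic SC-gPC surrogate is built in the simplest setting (one standard normal random input, probabilists' Hermite polynomials, Gauss-Hermite collocation), the sketch below computes the gPC coefficients by quadrature and recovers the mean and variance of an observable. The test observable is made up, and the HDMR and h-adaptive machinery of the talk is not included.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

g = lambda xi: np.exp(0.3 * xi) + 0.1 * xi**2   # hypothetical observable g(xi)

order, nq = 6, 20
nodes, weights = He.hermegauss(nq)              # quadrature for weight exp(-x^2/2)
weights = weights / sqrt(2 * pi)                # normalize to the N(0,1) density

# gPC coefficients c_k = E[g(xi) He_k(xi)] / k!  via Gauss-Hermite quadrature.
coeffs = []
for k in range(order + 1):
    Hk = He.hermeval(nodes, [0] * k + [1])      # He_k evaluated at the nodes
    coeffs.append(np.sum(weights * g(nodes) * Hk) / factorial(k))

# Surrogate statistics: mean = c_0, variance = sum_{k>=1} c_k^2 * k!.
mean = coeffs[0]
var = sum(c**2 * factorial(k) for k, c in enumerate(coeffs) if k >= 1)
print(mean, var)

# Monte Carlo check of the surrogate construction.
xi = np.random.default_rng(0).standard_normal(200000)
print(g(xi).mean(), g(xi).var())
```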
Bayesian Filtering in Large-Scale Geophysical Systems and Uncertainty Quantification
Ibrahim Hoteit
Abstract
Bayesian filters compute the probability distribution of the system state and thus readily provide a framework for uncertainty quantification and reduction. In geophysical applications, ensemble filtering techniques are designed to produce a small sample of state estimates as a way to reduce the computational burden. This, coupled with poorly known model and observation deficiencies, leads to distribution estimates that are far from optimal yet still meaningful. We present this problem and discuss possible approaches to produce improved uncertainty estimates.
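The kind of ensemble update underlying such techniques can be summarized with a minimal stochastic ensemble Kalman filter analysis step. The state dimension, observation operator, and error statistics below are arbitrary toy choices rather than anything from the geophysical systems discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, N = 10, 4, 30           # state size, number of observations, ensemble size

Xf = rng.normal(size=(n, N))              # forecast ensemble (columns = members)
H = np.zeros((m, n)); H[np.arange(m), np.arange(m)] = 1.0   # observe first m states
R = 0.25 * np.eye(m)                      # observation-error covariance
y = rng.normal(size=m)                    # observation vector

# Sample forecast covariance and Kalman gain.
A = Xf - Xf.mean(axis=1, keepdims=True)
Pf = A @ A.T / (N - 1)
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)

# Stochastic EnKF: update each member against a perturbed observation.
Yp = y[:, None] + rng.multivariate_normal(np.zeros(m), R, size=N).T
Xa = Xf + K @ (Yp - H @ Xf)
print(Xa.mean(axis=1))                    # analysis ensemble mean
```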
Approximation of the MHD equations in heterogeneous domains and applications to the geodynamo problem
Jean-Luc Guermond
Abstract
The geodynamo mechanism is introduced and key related mathematical problems are identified. A finite element technique for solving some of these problems is described and illustrated numerically. The talk is mainly focused on Lagrange finite elements.
Challenges in Computational Fracture Mechanics of Shape Memory Alloys
Dimitris Lagoudas and Theocharis Baxevanis
Abstract
Numerous computational challenges emerge in the simulation of fracture in active materials such as Shape Memory Alloys (SMAs). The highly nonlinear material response, combined with mesh discretization requirements near the crack tip and/or along the crack path, renders such simulations costly even for traditional continuum-based theories of fracture mechanics. Parallel solution strategies for the generation and assembly of element matrices and for the solution of the resulting systems of linear equations have been implemented in our finite element analyses to reduce computational cost. Multiscale fracture descriptions and new scale-bridging formulations are even more computationally demanding. Both “top-down” approaches, which couple continuum mechanics descriptions to phenomenology and experimental calibration at the smallest scales, and “bottom-up” approaches, which link the atomistic scale to the macroscopic aspects of deformation, necessitate the use of high-performance parallel computing. Large-scale structural analysis is another natural framework for parallel computations. Challenges encountered in achieving cost-effective simulations of the various fracture processes related to SMAs are presented and parallel solution strategies are proposed.
The STAPL Parallel Graph Library
Lawrence Rauchwerger
Abstract
The STAPL Graph Library includes a customizable distributed graph container (pGraph) and a collection of commonly used parallel graph algorithms (pAlgorithms). It provides an expressive and flexible high-level framework that allows users to concentrate on parallel graph algorithm development, together with a shared-object view that relieves them from the details of the underlying distributed environment while providing transparent support for locality-related optimizations and load balancing. Algorithms are expressed using pViews, which allow the separation of algorithm design from the container implementation. The library supports common algorithmic paradigms and introduces a novel k-level asynchronous paradigm that unifies the traditional level-synchronous and asynchronous paradigms by allowing the level of asynchrony to range from 0 (level-synchronous) to full (asynchronous). We describe results that demonstrate improved scalability in performance and data size over previous libraries on standard benchmarks.
Combined Uncertainty and History Matching Study of a Deepwater Turbidite Reservoir
Michael J. King
Abstract
History matching is a process wherein changes are made to an initial geologic model of a reservoir so that the predicted reservoir performance matches the known production history. Changes are made to the model parameters, including rock and fluid parameters or properties within the geologic model. Assisted History Matching (AHM) provides an algorithmic framework to minimize the mismatch in simulation and helps accelerate this process. The changes made by AHM techniques, however, cannot ensure a geologically consistent reservoir model; the performance of these techniques depends strongly on the initial starting model. In order to understand the impact of the initial model, this project explored the performance of the AHM approach using a specific field case, but working with multiple distinct geologic scenarios. The project involved an integrated seismic-to-simulation study, wherein we interpreted the seismic data, assembled the geological information, and performed petrophysical log evaluation along with well test data calibration. The ensemble of static models obtained was carried through the AHM methodology. The most important dynamic parameters govern the large-scale changes in the reservoir description and are optimized using an Evolutionary Strategy algorithm. Finally, streamline-based techniques were used for local modifications to match the water cut well by well. The following general conclusions were drawn from this study: (a) The use of multiple simple geologic models is extremely useful in screening possible geologic scenarios and especially for discarding unreasonable alternative models; this was especially true for the large-scale architecture of the reservoir. (b) The AHM methodology was very effective in exploring a large number of parameters, running the simulation cases, and generating the calibrated reservoir models. The calibration step consistently worked better if the models had more spatial detail, instead of the simple models used for screening. (c) The AHM methodology implemented a sequence of pressure and water cut history matching; an examination of specific models indicated that a better geologic description minimized the conflict between these two match criteria.
Mizan: Optimizing Graph Mining in Large Parallel Systems
Panos Kalnis
Abstract
Extracting information from graphs, from finding shortest paths to complex graph mining, is essential for many applications. Due to the sheer size of modern graphs (e.g., social networks), processing must be done on large parallel computing infrastructures (e.g., the cloud). Earlier approaches relied on the MapReduce framework, which proved inadequate for graph algorithms. Recently, the message-passing model (e.g., Pregel) has emerged. Although the Pregel model has many advantages, it is agnostic to the graph properties and the architecture of the underlying computing infrastructure, leading to suboptimal performance. In this talk, I will present Mizan, a layer between the users’ code and the computing infrastructure. Mizan considers the structure of the input graph and the architecture of the infrastructure in order to: (i) decide whether it is beneficial to generate a near-optimal partitioning of the graph in a pre-processing step, and (ii) choose between typical point-to-point message passing and a novel approach that puts computing nodes in a virtual overlay ring. We deployed Mizan on a small local Linux cluster, on the cloud (256 virtual machines in Amazon EC2), and on an IBM BlueGene/P supercomputer (1024 CPUs). Mizan executes common algorithms on very large graphs up to one order of magnitude faster than MapReduce-based implementations and up to 4 times faster than implementations relying on Pregel-like hash-based graph partitioning.
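To make the vertex-centric, message-passing model concrete, here is a tiny sequential emulation of a Pregel-style superstep loop running single-source shortest paths. The graph and the emulator are hypothetical teaching devices and have nothing to do with Mizan's actual implementation or optimizations.

```python
import math

# Weighted directed graph: vertex -> list of (neighbor, edge weight).
graph = {'a': [('b', 1.0), ('c', 4.0)],
         'b': [('c', 1.5), ('d', 3.0)],
         'c': [('d', 1.0)],
         'd': []}

value = {v: math.inf for v in graph}      # current shortest distance per vertex
inbox = {v: [] for v in graph}
inbox['a'].append(0.0)                    # kick off the source vertex

# Level-synchronous supersteps: every vertex processes its messages, and
# messages sent in superstep t are delivered in superstep t + 1.
while any(inbox.values()):
    outbox = {v: [] for v in graph}
    for v, messages in inbox.items():
        if messages and min(messages) < value[v]:
            value[v] = min(messages)
            for u, w in graph[v]:         # propagate the improved distance
                outbox[u].append(value[v] + w)
    inbox = outbox

print(value)   # {'a': 0.0, 'b': 1.0, 'c': 2.5, 'd': 3.5}
```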
Adaptive Multilevel Monte Carlo
Raul Tempone
Abstract
We will review our recent results on adaptive multilevel Monte Carlo methods for the approximation of expected values depending on the solution to an Ito stochastic differential equation. Giles proposed and analyzed a Forward Euler Multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single level, Forward Euler Monte Carlo method. Here we introduce and analyze an adaptive hierarchy of non-uniform time discretizations, based on the single level adaptive algorithms and a posteriori error expansions previously introduced by us. Under sufficient regularity conditions, both our analysis and numerical results, which include one case with singular drift and one with stopped diffusion, exhibit savings in the computational cost to achieve an accuracy of O(TOL): from O(TOL^{-3}) for the single level adaptive algorithm to O((TOL^{-1} log(TOL))^2); for these test problems, single level uniform Euler time stepping has complexity O(TOL^{-4}).
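A bare-bones version of the uniform-time-step multilevel Monte Carlo estimator (the Giles setting that the adaptive method builds on) is sketched below for E[g(X_T)] with geometric Brownian motion and Euler-Maruyama. The drift, volatility, payoff, and sample allocation are illustrative, and the adaptive, non-uniform time stepping of the talk is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, X0, T = 0.05, 0.2, 1.0, 1.0
g = lambda x: np.maximum(x - 1.0, 0.0)          # quantity of interest at time T

def level_estimator(level, n_samples, M=2):
    """Mean of g(X_T^fine) - g(X_T^coarse) using the SAME Brownian increments
    on the fine (M^level steps) and coarse (M^(level-1) steps) grids."""
    nf = M ** level
    dt = T / nf
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_samples, nf))
    Xf = np.full(n_samples, X0)
    for k in range(nf):                          # fine Euler-Maruyama path
        Xf = Xf + mu * Xf * dt + sigma * Xf * dW[:, k]
    if level == 0:
        return np.mean(g(Xf))
    Xc = np.full(n_samples, X0)
    dWc = dW.reshape(n_samples, nf // M, M).sum(axis=2)   # coarse increments
    for k in range(nf // M):                     # coarse Euler-Maruyama path
        Xc = Xc + mu * Xc * (M * dt) + sigma * Xc * dWc[:, k]
    return np.mean(g(Xf) - g(Xc))

# MLMC estimate: telescoping sum of level corrections (sample sizes ad hoc here).
levels, samples = 5, [40000, 20000, 10000, 5000, 2500, 1250]
print(sum(level_estimator(l, samples[l]) for l in range(levels + 1)))
```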
Fast Estimation of expected information gain for Bayesian experimental design based on Laplace approximation
Suojin Wang
Abstract
Shannon-type expected information gain is an important utility in evaluating the usefulness of a proposed experiment that involves uncertainty. Its estimation, however, cannot rely solely on Monte Carlo sampling methods, which are generally too computationally expensive for realistic physical models, especially those involving the solution of stochastic partial differential equations. In this work we present a new methodology, based on the Laplace approximation of the posterior probability density function, to accelerate the estimation of the expected information gain in the model parameters and predictive quantities of interest. Furthermore, in order to deal with the issue of dimensionality in complex problems, we use sparse quadratures for the integration over the prior. We show the accuracy and efficiency of the proposed method via several numerical examples, including a single-parameter design for a one-dimensional cubic polynomial function, optimization of the resolution width and measurement time for a blurred single-peak spectrum, and optimization of the current pattern for impedance tomography.
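For context, the standard nested (double-loop) Monte Carlo estimator of the expected information gain, the expensive baseline that Laplace-based acceleration is meant to avoid, is sketched below for a linear-Gaussian toy design problem where the exact gain is known in closed form. The model and sample sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_theta, sigma_eps = 1.0, 0.5      # prior std of theta, measurement noise std
d = 2.0                                # a scalar "design" entering y = d*theta + eps

def log_like(y, theta):
    return -0.5 * ((y - d * theta) / sigma_eps) ** 2 \
           - np.log(sigma_eps * np.sqrt(2 * np.pi))

# Nested Monte Carlo: EIG ~ (1/N) sum_i [ log p(y_i | theta_i)
#                                         - log (1/M) sum_j p(y_i | theta_ij) ].
N, M = 2000, 2000
theta = rng.normal(0.0, sigma_theta, N)
y = d * theta + rng.normal(0.0, sigma_eps, N)
outer = log_like(y, theta)
theta_inner = rng.normal(0.0, sigma_theta, (N, M))
inner = np.log(np.mean(np.exp(log_like(y[:, None], theta_inner)), axis=1))
print("nested MC EIG:", np.mean(outer - inner))

# Exact value for this linear-Gaussian model (a Laplace-type approximation is
# also exact here, because the posterior is Gaussian).
print("exact EIG    :", 0.5 * np.log(1.0 + (d * sigma_theta / sigma_eps) ** 2))
```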
Efficient Estimation of Dynamic Density Functions with an Application to Outlier Detection
Xiangliang Zhang
Abstract
Estimating the density of data streams characterizes the distribution of streaming data, which can be utilized for online clustering and outlier detection. However, it is a challenging task to estimate the underlying density function of streaming data, as it changes over time in an unpredictable fashion. In this work, we propose a new method to estimate the dynamic density over data streams, named KDE-Track, as it is based on the conventional and widely used Kernel Density Estimation (KDE) method. KDE-Track can efficiently estimate the density with linear complexity by using interpolation on a kernel model, which is incrementally updated upon the arrival of streaming data. Both theoretical analysis and experimental validation show that KDE-Track outperforms traditional KDE and a baseline method, Cluster-Kernels, on estimation accuracy, computing time and memory usage. KDE-Track is also demonstrated to capture the dynamic density of synthetic and real-world data in a timely manner and to detect outliers accurately.
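The basic idea of maintaining a kernel density estimate at grid points and answering queries by interpolation can be sketched as follows. This toy keeps an incrementally updated Gaussian KDE on a fixed 1D grid with a fixed bandwidth; KDE-Track's actual resampling, adaptive bandwidth, and handling of evolving distributions are not reproduced.

```python
import numpy as np

class GridKDE:
    """Gaussian KDE maintained at fixed grid points, updated one sample at a
    time; densities between grid points come from linear interpolation."""
    def __init__(self, lo, hi, n_grid=200, bandwidth=0.2):
        self.grid = np.linspace(lo, hi, n_grid)
        self.h = bandwidth
        self.kernel_sum = np.zeros(n_grid)
        self.count = 0

    def update(self, x):                       # O(n_grid) per arriving sample
        z = (self.grid - x) / self.h
        self.kernel_sum += np.exp(-0.5 * z**2) / (self.h * np.sqrt(2 * np.pi))
        self.count += 1

    def density(self, x):
        return np.interp(x, self.grid, self.kernel_sum / self.count)

# Feed a synthetic "stream" and query the estimated density.
rng = np.random.default_rng(0)
kde = GridKDE(-4, 4)
for sample in rng.normal(0.0, 1.0, 5000):
    kde.update(sample)
print(kde.density(0.0))     # close to the standard normal density, about 0.399
```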
Parallel Finite Element Earthquake Rupture Simulations on Large-Scale Multicore Supercomputers
Dr. Xingfu Wu
Abstract
Earthquakes are among the most destructive natural hazards on our planet. In this talk, we use the 2008 Ms 8.0 Wenchuan earthquake, which occurred in Wenchuan County, Sichuan Province, China on May 12, 2008, as an example to present our earthquake rupture simulations. We integrate a 3D mesh generator into the simulation and use MPI to parallelize it, illustrate an element-based partitioning scheme for explicit finite element methods, and implement our hybrid MPI/OpenMP finite element earthquake simulation code in order not only to achieve multiple levels of parallelism but also to reduce the MPI communication overhead within a multicore node by taking advantage of the shared address space and the on-chip high inter-core bandwidth and low inter-core latency. We evaluate the hybrid MPI/OpenMP finite element earthquake rupture simulations on quad- and hex-core Cray XT4/XT5 systems from Oak Ridge National Laboratory using the Southern California Earthquake Center (SCEC) benchmark TPV210. Our experimental results indicate that the parallel finite element earthquake rupture simulation produces accurate output results and has good scalability on these Cray XT systems. At the end of this talk, we will mention our NSF-funded project MuMMI (Multiple Metrics Modeling Infrastructure) on power-performance tradeoffs and modeling. This talk reports on joint work with Benchun Duan (TAMU) and Valerie Taylor (TAMU).
Bayesian uncertainty quantification for channelized subsurface characterization
Yalchin Efendiev
Abstract
Uncertainties in the spatial distribution of subsurface properties play a significant role in predicting fluid flow behavior in heterogeneous media. To quantify the uncertainties in flow and transport processes in heterogeneous porous formations, complex dynamic and static data need to be reconciled with a stochastic description of subsurface properties. The dynamic data measure the flow and transport responses that are largely affected by the spatial distribution of distinct geologic facies with sharp contrasts in properties across facies boundaries. Subsurface channel systems represent a challenging example in which the orientation and geometric shape of the channels dominate the field-scale flow behavior. Most existing approaches have been limited to modeling the channel boundaries with simplified (e.g., sinusoidal) functions. In subsurface characterization, facies features need to be properly accounted for when constructing prior models for subsurface properties. Furthermore, stochastic conditioning of facies distributions to nonlinear dynamic flow data presents a major challenge in underground fluid flow prediction. In this talk, hierarchical Bayesian approaches will be discussed that use level set methods for facies deformation to model facies boundaries. These prior models can be used with fast multiscale forward simulation tools and parallel multi-stage sampling techniques to explore the uncertainties in the geologic facies description and the resulting flow predictions.
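A minimal illustration of the level-set idea, describing facies boundaries implicitly as the zero level set of a smooth function and mapping facies to contrasting properties, is given below. The smoothed random field, threshold, and permeability values are hypothetical, and no Bayesian conditioning or flow solve is performed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)

# Level-set function: a smoothed Gaussian random field on a 2D grid.
phi = gaussian_filter(rng.standard_normal((64, 64)), sigma=6.0)
phi -= phi.mean()

# Facies indicator: channel facies where phi > 0, background elsewhere.
channel = phi > 0.0

# High-contrast permeability field assigned facies by facies (values in mD).
perm = np.where(channel, 1000.0, 1.0)
print("channel fraction:", channel.mean(),
      "permeability contrast:", perm.max() / perm.min())

# Perturbing phi (e.g., within an MCMC proposal) deforms the facies boundary.
phi_new = phi + 0.1 * gaussian_filter(rng.standard_normal((64, 64)), sigma=6.0)
print("cells that changed facies:", np.mean((phi_new > 0) != channel))
```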
Contributions of mathematical and statistical methods to fishery management: Potential collaborative research
Masami Fujiwara
Abstract
Fishery management is inherently quantitative, often relying on complex mathematical models and statistical methods. However, clear gaps exist between the available data and the information needed for actual management, which reduces the effectiveness of quantitative tools in fishery management. Based on my research experience with Pacific salmon and Gulf of Mexico shrimp, I suggest four major areas that require research advances. First, although fishery management is based on asymptotic population dynamics, observed data mostly contain information on transient dynamics. This mismatch prevents effective use of fishery data. Second, although in fishery models individual fish are often assumed to interact freely with each other within a population, in reality spatial structure exists in almost all fish populations. Consequently, information on biological connectivity is very important. Third, although traditional fishery management deals with each stock separately, in reality populations of different species interact with each other. Competition and predator-prey interactions are ubiquitous features of natural systems. Although models for such population interactions are common in population biology, they are often over-simplified, ignoring differences among populations. Finally, studies of how to translate scientific information into management decisions are needed. Sound management decisions have to incorporate scientific information and appropriately balance the risks and benefits associated with decisions. I will discuss possible future collaborative research in these areas.
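The contrast between asymptotic and transient population dynamics mentioned above can be seen in a three-stage Leslie-matrix toy model: the dominant eigenvalue gives the asymptotic growth rate, while short-term growth starting from a non-stable stage structure can differ markedly. The vital rates below are invented purely for illustration.

```python
import numpy as np

# Hypothetical 3-stage Leslie matrix (fecundities on top row, survivals below).
L = np.array([[0.0, 1.5, 3.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.4, 0.0]])

eigvals, eigvecs = np.linalg.eig(L)
lam = eigvals[np.argmax(eigvals.real)].real
print("asymptotic growth rate:", lam)

# Transient dynamics: start far from the stable stage distribution.
n = np.array([0.0, 0.0, 100.0])          # only old individuals present
for t in range(6):
    growth = L @ n
    print(f"year {t}: total = {n.sum():8.1f}, "
          f"one-step growth = {growth.sum() / n.sum():.3f}")
    n = growth
```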
Interactive Volume Visualization of General Polyhedral Grids
Markus Hadwiger
Abstract
Most volume visualization methods for unstructured grids assume purely tetrahedral meshes, or at least assume a fixed (small) number of different cell types that can occur. However, state-of-the-art CFD packages are increasingly using general polyhedral grids, where each cell can be an arbitrary polyhedron with no limits on the number of faces and vertices. Representing such grids for efficient volume visualization is therefore important, and straightforward tetrahedralization is often undesired. In this talk, I will describe a very compact, face-based data structure for representing general polyhedral grids, called two-sided face sequence lists (TSFSL), as well as an algorithm for direct GPU-based ray-casting using this representation. The TSFSL data structure is able to represent the entire mesh topology in a 1D texture that is accessed during visualization. In order to scale to large data sizes, we employ a mesh decomposition into bricks that can be handled independently, where each brick has its own TSFSL array. This bricking enables memory savings and performance improvements for large meshes. Overall, this approach shows that general polyhedral grids can be visualized interactively using a direct approach that does not require decomposing the original cells.
Coupling water-column bio-optics and coral reef ecology to predict impacts of climate change and coastal zone development
Thomas Lacher
Abstract
Over the last decade, considerable attention has been devoted to the modeling of development to project the likely future impact of human activities on biological diversity. This trend builds on the Millennium Ecosystem Assessment, which in turn was heavily influenced by the scenario modeling themes of the Intergovernmental Panel on Climate Change. This work is already gaining considerable policy traction, with the Integrated Model to Assess the Global Environment (IMAGE) and the Global Biodiversity (GLOBIO) model comprising, for example, an important component of the Global Environment Outlook 4 report and The Economics of Ecosystems and Biodiversity reports, and proposed to be central to the nascent Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES). However, nearly all modeling work to date has focused on impacts at the population and ecosystem levels of ecological organization, represented in particular by projected changes in the extents of habitats and land cover. By contrast, the use of scenario models to project impacts at the species level, as reflected in global extinctions, has yet to be explored in detail. Moreover, nearly all modeling studies have projected future changes, with surprisingly little attention paid to ground-truthing the biological impact projected by the models. We propose to fill these gaps by testing the core hypothesis that land use and climate change models can robustly project changes in species extinction risk. This will involve (a) developing mechanistic retrospective projections of extinction risk driven by land use and climate change over the last 30 years, based on the outputs of IMAGE model runs with 1980 as a starting point, (b) capitalizing on existing primary biodiversity datasets on changing extinction risk over the last 30 years to validate these projections, and (c) with this validation in hand, projecting extinction risk over the coming 40 years resulting from four land use and climate change scenarios produced by the IMAGE model. This project structure will also allow us to test two additional hypotheses. For the first of these, we will work with high spatial resolution and great taxonomic breadth (spanning plant, invertebrate, and vertebrate taxa), whereas at the global scale spatial resolution is coarse and the taxonomic scope of comprehensive data is restricted to mammals, birds, and amphibians. For the second, building on this validation over the past 30 years and across spatial scales and taxonomic extents, our application of the IMAGE model to project extinction risk over the coming four decades will allow us to test the hypothesis that land use change, not climate change, will remain the dominant driver of extinction to 2050.
Hydrogeology of Wadi Aquifers: Artificial Recharge with Wastewater
Thomas M. Missimer
Abstract
Wadi aquifers have been a source of freshwater for thousands of years in western Saudi Arabia. As the population has grown and agricultural water use has risen, wadi aquifers have become depleted in terms of water quantity, and the quality of the water has also been degraded. In addition, urban growth has intruded into wadi systems, causing flooding during storms that results in severe damage to property and loss of life. The hydraulic properties and general characteristics of wadi aquifers are not well known. Initial geologic and geophysical investigations are being conducted in Wadi Qidayd to begin to obtain the information necessary to manage groundwater within these critical natural systems. Wadi Qidayd formed within the Precambrian cratonic shield rocks, likely hundreds of millions of years ago. The primary channel of the drainage system was displaced southward as a result of younger Tertiary lava flows. The current geometry and locations of the main channel and the tributary channels were also influenced by the occurrence of listric faults that parallel the Red Sea rift axis. Erosion-derived alluvium filled the channels within the wadi to form the modern aquifer system. The sediments are a complex mix of layered boulders, cobbles, pebbles, sand, and clay. Investigation of the characteristics of these sediments shows that vertical recharge of the aquifer is poor during freshwater floods, which will influence the methods that can be used to artificially recharge the aquifer to restore water levels and allow continued use of freshwater. It is planned to investigate the transport of treated municipal wastewater into the middle reach of the wadi to recharge the aquifer. The sediments within the aquifer will provide additional treatment to remove any remaining pathogens and trace organics before the water is used for unrestricted irrigation or indirect potable use.
Coupling water-column bio-optics and coral reef ecology to predict impacts of climate change and coastal zone development
Christian Voolstra, Daniel Roelke et al
Abstract
Coral reefs of the Red Sea are among the few remaining pristine reef environments on Earth, and many of these are located in the coastal waters of western Saudi Arabia. It is unknown how coastal zone development in this region might influence reefs, or how its effects might be confounded by climate change. Both processes may influence water salinity and temperature, thereby influencing nutrient dynamics and vertical mixing. These directly influence corals, and indirectly affect them through shifts in plankton biomass and assemblage composition and the resulting changes to the underwater light field. The long-term objective of this developing project is to evaluate an approach to research that would enable estimation of how these future conditions might influence the dependence of coral reefs on their endosymbiotic algae for energy, compared to the alternative nutritional strategies corals employ (i.e., heterotrophic processes), and how this might eventually influence coral reef health.
Feynman graph representations of the solutions of SPDEs driven by Lévy noise
Boubaker Smii
Abstract
Stochastic differential equations driven by Lévy noise are intensively studied, but so far there seems to be no recipe for determining which kind of noise is appropriate given the general structure of the equation. In fact, such information can be obtained from a graphical representation of the solution of the SPDE driven by Lévy noise. The graphs introduced in this talk are rooted trees with two types of leaves, and a numerical value is assigned to each graph. Our graph formalism can be applied to different stochastic differential equations, such as the Burgers equation and the KPZ equation, among others.
Finite Element Modeling of Hydrology
Shuyu Sun
Abstract
Hydrology modeling has important applications in environmental sciences. One important application is the injection of CO2 into subsurface reservoirs for carbon sequestration, which is becoming increasingly attractive due to issues related to global warming. The modeling equation system for such multiphase flow and transport can generally be split into (1) an elliptic partial differential equation (PDE) for the pressure and (2) one or more convection-dominated convection-diffusion PDEs for the saturation or for the chemical composition. Accurate simulation of these phenomena not only requires local mass conservation to be retained in the discretization, but also demands that steep gradients be preserved with minimal oscillation and numerical diffusion. The heterogeneous permeability of the media often comes with spatially varying capillary pressure functions, both of which impose additional difficulties on numerical algorithms. To address these issues, we solve the saturation equation (or species transport equation) by the discontinuous Galerkin (DG) method, a specialized finite element method that utilizes discontinuous spaces to approximate solutions. Among other advantages, DG possesses local mass conservation, small numerical diffusion, and little oscillation. The pressure equation is solved by either a mixed finite element (MFE) scheme, a DG scheme, or a Galerkin finite element method with locally conservative postprocessing. In this talk, we will present the theory and numerical examples of this combined finite element approach for simulating subsurface multiphase flow. We will also discuss our ongoing work in which Darcy-scale simulations are coupled with pore-scale and molecular-scale simulations.
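The operator splitting described above, an elliptic pressure solve followed by a transport update, can be illustrated with a 1D incompressible single-phase tracer sketch. For brevity it uses a cell-centered finite-volume pressure solve and first-order explicit upwind transport instead of the MFE/DG discretizations of the talk, and the permeability field and boundary data are fabricated.

```python
import numpy as np

n = 100
dx = 1.0 / n
rng = np.random.default_rng(0)
K = np.exp(rng.normal(0.0, 1.0, n))      # heterogeneous permeability per cell

# Interface transmissibilities (harmonic averages), plus the two boundary faces.
Kf = np.zeros(n + 1)
Kf[1:-1] = 2 * K[:-1] * K[1:] / (K[:-1] + K[1:])
Kf[0], Kf[-1] = K[0], K[-1]

# Step 1: elliptic pressure solve  -d/dx( K dp/dx ) = 0, p(0) = 1, p(1) = 0.
A = np.zeros((n, n)); b = np.zeros(n)
for i in range(n):
    if i > 0:
        A[i, i] += Kf[i] / dx**2;      A[i, i - 1] -= Kf[i] / dx**2
    else:                              # Dirichlet p = 1 at the left boundary
        A[i, i] += 2 * Kf[0] / dx**2;  b[i] += 2 * Kf[0] / dx**2 * 1.0
    if i < n - 1:
        A[i, i] += Kf[i + 1] / dx**2;  A[i, i + 1] -= Kf[i + 1] / dx**2
    else:                              # Dirichlet p = 0 at the right boundary
        A[i, i] += 2 * Kf[-1] / dx**2
p = np.linalg.solve(A, b)

# Darcy flux at the inlet (constant across the domain for incompressible 1D flow).
u = 2 * Kf[0] * (1.0 - p[0]) / dx

# Step 2: explicit first-order upwind transport of an injected concentration.
c = np.zeros(n)
dt = 0.5 * dx / u                        # CFL-limited time step
for step in range(n):                    # n steps move the front to about x = 0.5
    c_up = np.concatenate(([1.0], c[:-1]))   # inflow concentration = 1 at x = 0
    c = c - u * dt / dx * (c - c_up)
print(c[:10])
```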
Control of Invasive Species through the Trojan Y Chromosome Strategy
Rana Parshad and Jay Walton
Abstract
An invasive species is a non-native species, usually introduced into an environment by external means. Such species have been known to cause widespread ecological and economic damage. In this talk we describe recent work on a model for the control of certain invasive species via the introduction of genetically modified organisms into a target population. The approach is eco-friendly and relevant to invasive species such as Nile tilapia. We present analysis and numerics, and discuss future directions.
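A hypothetical (not the authors') ordinary-differential-equation caricature of the Trojan Y chromosome idea is sketched below: continuously stocked YY "supermales" sire only male offspring, so sufficiently aggressive, sustained stocking drives the female class, and hence the wild population, toward extinction. All rates and the model structure are invented solely to illustrate the mechanism.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, delta, K, mu = 4.0, 1.0, 1000.0, 100.0   # birth, death, capacity, stocking

def rhs(t, z):
    f, m, s = z                                # females, wild males, YY supermales
    total = f + m + s
    logistic = max(1.0 - total / K, 0.0)
    males = m + s
    frac_wild = m / males if males > 0 else 0.0
    births = beta * f * logistic
    # Matings with XY males give 1:1 offspring; matings with YY males give all males.
    df = 0.5 * births * frac_wild - delta * f
    dm = 0.5 * births * frac_wild + births * (1.0 - frac_wild) - delta * m
    ds = mu - delta * s                        # constant stocking of YY supermales
    return [df, dm, ds]

sol = solve_ivp(rhs, (0.0, 60.0), [300.0, 300.0, 0.0], dense_output=True)
for t in (0, 20, 40, 60):
    f, m, s = sol.sol(t)
    print(f"t = {t:2d}: females = {f:7.1f}, wild males = {m:7.1f}, "
          f"supermales = {s:6.1f}")
```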