May 12-13, 2013
Measuring, Modeling and Analyzing Variability in the Red Sea Marine System
Burt H. Jones (KAUST)
Abstract
The Red Sea ecosystem, like all other ecosystems on earth, is experiencing changes that extend from local development and the multiple uses of the ocean system to global changes that may be mediated through exchanges with the global ocean and with the atmosphere. At the same time, the fundamental processes within the Red Sea are poorly understood due to a lack of extended, coherent studies of the basin. As part of our basic research effort, and in collaboration with the newly formed Saudi Aramco Marine Environmental Research Center, we are establishing baseline marine ecological data sets and a fundamental understanding of the physical and biogeochemical dynamics of the Red Sea that provide a basis for evaluating long-term patterns and trends within the basin. The planned observations include a baseline ecological assessment of the Saudi Arabian Red Sea followed by long-term monitoring and long-term oceanographic observations using moored sensors, remote sensing, and autonomous robotic vehicles, coupled with real-time and retrospective numerical models. The combined data sets will be organized into an archival database, with GIS used to bring together the various types of data for visualization and analysis. The planned observations pose distinct challenges in designing the observations, analyzing the resulting data sets, and interpreting the processes so that long-term processes can be resolved and, if possible, sources of variability and change within the Red Sea can be differentiated. These are challenges for all types of environmental research worldwide, but given the rapid development of the Kingdom of Saudi Arabia, the scale of the basin, and modern observing capabilities, we believe that the Red Sea provides a unique opportunity to study these changes, with a distinct possibility of understanding the contributions of various natural and anthropogenic processes to the marine ecosystem.
Deep Red Sea Brine Pools Affect the Adjacent Pelagic Fauna
Stein Kaartvedt (KAUST)
Abstract
The deep brine pools of the Red Sea comprise unique, complex and inhospitable habitats, yet they house a high biodiversity of microbial communities. We searched for evidence of coupling between the productive brine interface and the adjacent pelagic macrofauna at two brines. The fauna appeared to be enriched at the Kebrit pool. Video footage revealed a suspended layer of biofilm-like consistency as well as plankton and fish at the brine interface, and submerged echosounders documented individuals exploring the brine surface. In contrast, waters just above the Atlantis II pool appeared depleted of macrofauna. We suggest that the microbial production at Kebrit contributes to fuelling the fauna in the water above, whereas the harsh environment of Atlantis II has a repellent effect, at least at close range. The ~25 brines along the Red Sea evidently affect adjacent organisms, yet the nature of the effects of these highly diverse environments will vary.
Potential Changes in Spectral Quality of Light in the Red Sea with Shifts in Phytoplankton Biomass and Composition
Frances Withrow (TAMU)
Abstract
Coral reefs of the Red Sea are among the few remaining pristine reef systems on earth. However, human development of coastal zones neighboring the Red Sea and potential changes in global and local climate may influence water quality. This could impact phytoplankton communities, which in turn may influence the quality of the underwater light field. Because the health and pigmentation of corals are affected by the spectral quality of light, coral reefs in this region may be affected indirectly by coastal zone development and climate change. In this research, we are investigating, through in vivo coral pigmentation measurements and computational simulation, how changes in phytoplankton biomass and composition influence the spectral quality of light incident upon the sea floor and how that in turn relates to the pigment profiles of two common Red Sea corals. To achieve this we are following a three-step process. First, a mathematical phytoplankton model within a one-dimensional water column framework, with spectral light of diminishing intensity with depth and a nutrient source from advection, is being developed and calibrated. Nutrient loading to the system and stratification are controlled parameters in this model, and phytoplankton biomass and composition are state variables. Taxon-specific absorption spectra are then applied to the model output to predict underwater light fields. Then, using locations in the Red Sea to which the model is tailored, spectral data of the corals Pocillopora verrucosa and Porites lutea and their endosymbiotic dinoflagellates are compared to the spectral light field at their respective sampling depths. Using an in-situ dataset of coral and light data, an algorithm (which we term a coral-light dissimilarity index, DI) is developed and applied that quantifies the fitness of the coral to the ambient light spectrum. Finally, the phytoplankton-derived scenarios of the underwater light field are used as input variables for the DI to investigate the potential future impact of changes in light on the pigment profiles of the corals and to determine possible thresholds that could affect their distributional ranges.
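As a hedged illustration of how the predicted light fields can be derived from the model output, the sketch below applies a simple Beer-Lambert attenuation in which phytoplankton biomass adds a taxon-specific absorption term to that of water; the wavelength grid, absorption coefficients, and chlorophyll concentrations are illustrative placeholders, not values from this study.

```python
import numpy as np

# Wavelength grid (nm) and an idealized, normalized surface irradiance spectrum
wavelengths = np.arange(400, 701, 10)
surface_irradiance = np.ones_like(wavelengths, dtype=float)   # E(0, lambda)

# Spectral absorption by water and a hypothetical taxon-specific coefficient
# (peaks near 440 and 675 nm, typical of chlorophyll a); illustrative numbers only
a_water = 0.01 + 0.5 * ((wavelengths - 400) / 300.0) ** 4
a_phyto_specific = 0.03 * np.exp(-((wavelengths - 440.0) / 40.0) ** 2) \
                 + 0.02 * np.exp(-((wavelengths - 675.0) / 15.0) ** 2)

def irradiance_at_depth(depth_m, chl_mg_m3):
    """Beer-Lambert attenuation: E(z, lambda) = E(0, lambda) * exp(-K(lambda) * z)."""
    k_total = a_water + chl_mg_m3 * a_phyto_specific   # diffuse attenuation proxy
    return surface_irradiance * np.exp(-k_total * depth_m)

# Spectrum reaching a 15 m reef under low vs. elevated phytoplankton biomass
low_chl = irradiance_at_depth(15.0, chl_mg_m3=0.1)
high_chl = irradiance_at_depth(15.0, chl_mg_m3=1.0)
print("blue/red ratio, low chl: ", low_chl[4] / low_chl[-3])    # 440 nm vs 680 nm
print("blue/red ratio, high chl:", high_chl[4] / high_chl[-3])
```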
Acclimation Event Horizon of the Coral Pocillopora verrucosa from the Central Red Sea
Maren Ziegler (University of Frankfurt)
Abstract
The coral–dinoflagellate symbiosis that provides the framework of coral reef ecosystems is highly resilient towards environmental changes. Both host and symbiont are equipped with mechanisms to endure environmental conditions differing from those regarded as optimal. For instance, with decreasing incident light the coral host may supplement its mainly photoautotrophic diet through heterotrophic feeding, while at the same time the symbiont cells increase the chlorophyll content of their chloroplasts to increase photosynthetic efficiency. This acclimation is known as heterotrophic plasticity and differs among coral species. Here, we investigate the influence of changes in the light field on Pocillopora verrucosa and its dinoflagellate symbionts sampled at different depths from the Central Red Sea. Our data reveal some of the photosynthetic acclimatization strategies employed. All coral colonies adjusted to lower light levels displayed an increase in photosynthetic light-harvesting pigments, which resulted in higher photosynthetic efficiency. In contrast, an adjustment of symbiont cell density was not observed. Total protein content was significantly decreased after 30 days under low-light conditions compared to medium- and high-light samples. Stable isotope data suggest that heterotrophic input of carbon was not increased under low light; consequently, decreasing protein levels were symptomatic of decreasing photosynthetic rates that could not be compensated for through higher light-harvesting efficiency. Our results provide insights into the acclimation event horizon of P. verrucosa and suggest that, despite its high abundance in shallow reef waters, it possesses limited heterotrophic plasticity. We conclude that P. verrucosa will be a species vulnerable to sudden changes in the underwater light field resulting from processes such as increased turbidity caused by coastal development along the Saudi Arabian Red Sea coast.
Analysis, Modeling and Simulation of a Three Species Food Chain Model
Rana Parshad (KAUST)
Abstract
We investigate a three-species food chain model consisting of a generalist top predator, a specialist middle predator, and a prey species. The model is applicable to real food chains, such as a mollusc-phytoplankton-zooplankton chain. We present analysis and modeling concerning existence of solutions, finite-time blow-up, and the global attractor for the model. We next investigate Turing instability in the model numerically. Lastly, we reconstruct the attractor for the model via nonlinear time series analysis.
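As a hedged illustration of this class of models, the sketch below integrates a representative chain with a logistic prey, a specialist middle predator, and a generalist top predator with a modified Leslie-Gower term (in the spirit of the Upadhyay-Rai model); the parameter values are illustrative and this is not necessarily the exact system analyzed in the talk.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Representative three-species chain: prey u, specialist middle predator v,
# generalist top predator w (modified Leslie-Gower growth). Illustrative parameters.
a1, b1, w0, d0 = 1.0, 0.05, 1.0, 10.0
a2, w1, d1, w2, d2 = 1.0, 2.0, 10.0, 0.405, 10.0
c, w3, d3 = 0.03, 1.0, 20.0

def food_chain(t, y):
    u, v, w = y
    du = a1 * u - b1 * u**2 - w0 * u * v / (u + d0)
    dv = -a2 * v + w1 * u * v / (u + d1) - w2 * v * w / (v + d2)
    dw = c * w**2 - w3 * w**2 / (v + d3)
    return [du, dv, dw]

sol = solve_ivp(food_chain, (0.0, 500.0), [10.0, 5.0, 8.0], rtol=1e-8)
print("state at t = 500:", sol.y[:, -1])
```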
The Trojan Y-Chromosome Technique for Eradication of an Invasive Species
Jay Walton (TAMU)
Abstract
The Trojan Y-Chromosome strategy for eradication of an invasive species that uses an X,Y-chromosome reproductive system is based upon both genotypic and phenotypic manipulation of the species. Genotypic manipulation is used to produce a YY-variant (super-male), which through developmental phenotypic manipulation is transformed into a feminized YY-variant, the so-called “Trojan Variant”. When introduced into a wild-type population, the Trojan variant, over time, reduces the ratio of wild-type females to males to the point that the species goes extinct due to an underproduction of reproductive females. Our first modeling of this process was deterministic, but more recently we have developed a stochastic analysis that sheds additional light on the efficacy of this eradication strategy. We have also undertaken a preliminary study of a reduced strategy that involves introducing into the wild-type population only the YY super-male variant, omitting the additional phenotypic manipulation required to produce the feminized super-male variant. In this talk, I discuss these recent developments.
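As a hedged sketch of the deterministic setting, the code below integrates one common formulation of the TYC system (in the spirit of Gutierrez and Teem), with wild-type females f, wild-type males m, YY super-males s, and feminized Trojan variants r introduced at a constant rate mu; the speaker's exact equations and parameter values may differ.

```python
from scipy.integrate import solve_ivp

# Illustrative parameters: birth rate beta, death rate delta, carrying capacity K,
# and constant introduction rate mu of Trojan (feminized YY) individuals.
beta, delta, K, mu = 0.002, 0.05, 1000.0, 10.0

def tyc(t, y):
    f, m, s, r = y                      # XX females, XY males, YY super-males, feminized YY
    L = 1.0 - (f + m + s + r) / K       # logistic limitation on births
    df = 0.5 * f * m * beta * L - delta * f
    dm = (0.5 * f * m + f * s + 0.5 * r * m) * beta * L - delta * m
    ds = (0.5 * r * m + r * s) * beta * L - delta * s
    dr = mu - delta * r
    return [df, dm, ds, dr]

sol = solve_ivp(tyc, (0.0, 400.0), [400.0, 400.0, 0.0, 0.0], rtol=1e-8)
print("wild-type females remaining:", sol.y[0, -1])
```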
A New Study of Divergence Metrics for Detecting Changes in Time Series
Abdulhakim Qahtan (KAUST)
Abstract
Streaming time series are dynamic in nature, with frequent changes of the underlying distribution. To detect such changes, most methods measure the difference between the data distributions in a current time window and a reference window. Our study shows that Kullback-Leibler (KL) divergence, the most popular metric for comparing distributions, fails to detect certain changes due to its asymmetry and its dependence on the variance of the testing data. We therefore study two metrics for detecting changes in univariate time series: a symmetric KL-divergence, and a divergence metric measuring the intersection area of two distributions. Both metrics are employed under our proposed framework, which has the advantage of automating the parameter settings and thus demands less user effort. Evaluation results on both synthetic and real data show that our framework with the new metrics outperforms two baseline methods in detection accuracy.
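A minimal sketch of the two divergence measures, computed on histogram estimates of a reference window and a current window, is given below; the bin count, smoothing constant, and toy stream are placeholders rather than the settings of the proposed framework.

```python
import numpy as np

def _histogram(window, bins, rng):
    counts, _ = np.histogram(window, bins=bins, range=rng)
    p = counts.astype(float) + 1e-9          # small constant avoids log(0)
    return p / p.sum()

def symmetric_kl(ref, cur, bins=30):
    rng = (min(ref.min(), cur.min()), max(ref.max(), cur.max()))
    p, q = _histogram(ref, bins, rng), _histogram(cur, bins, rng)
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def intersection_distance(ref, cur, bins=30):
    """1 minus the overlapping area of the two estimated densities; 0 means identical."""
    rng = (min(ref.min(), cur.min()), max(ref.max(), cur.max()))
    p, q = _histogram(ref, bins, rng), _histogram(cur, bins, rng)
    return float(1.0 - np.sum(np.minimum(p, q)))

# Toy stream with a mean shift halfway through: compare reference and current windows.
gen = np.random.default_rng(0)
stream = np.concatenate([gen.normal(0, 1, 2000), gen.normal(0.8, 1, 2000)])
reference, current = stream[:500], stream[-500:]
print("symmetric KL:", symmetric_kl(reference, current))
print("intersection distance:", intersection_distance(reference, current))
```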
A Framework for Scalable Parameter Estimation of Gene Circuit Models Using Structural Information
Suojin Wang (TAMU)
Abstract
Systematic and scalable parameter estimation is key to constructing complex gene regulatory models and to ultimately facilitating an integrative systems biology approach to quantitatively understanding the molecular mechanisms underpinning gene regulation. In this talk, we present a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic data sets and one time-series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher-quality parameter solutions efficiently. While many general-purpose parameter estimation methods have been applied to modeling of gene circuits, our results suggest that the use of more tailored approaches employing domain-specific information may be key to reverse-engineering complex biological systems.
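The decomposition idea can be illustrated, in a heavily simplified form, by fitting the rate equation of a single gene product while treating the measured time course of its regulator as a known input, so that the coupled system splits into independent one-dimensional fitting problems. The toy circuit, Hill-type kinetics, and optimizer below are illustrative assumptions and do not reproduce the authors' framework.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import interp1d
from scipy.optimize import minimize

# Toy two-gene circuit: x activates y. Only y's rate equation is fitted; the
# measured x(t) time course is treated as a known input to decouple the system.
t_obs = np.linspace(0.0, 10.0, 25)
x_obs = 1.0 - np.exp(-0.5 * t_obs)                       # pretend measurement of x(t)
x_interp = interp1d(t_obs, x_obs, fill_value="extrapolate")

def rhs_y(t, y, k_syn, K, k_deg, x_of_t):
    x = x_of_t(t)
    return k_syn * x / (K + x) - k_deg * y               # Hill-type synthesis, linear decay

# Synthetic "measured" y(t) generated with known parameters plus noise
true = (2.0, 0.4, 0.7)
y_obs = solve_ivp(rhs_y, (0, 10), [0.0], t_eval=t_obs, args=(*true, x_interp)).y[0]
y_obs = y_obs + np.random.default_rng(1).normal(0, 0.02, y_obs.size)

def loss(theta):
    k_syn, K, k_deg = np.exp(theta)                      # positivity via log-parameters
    sim = solve_ivp(rhs_y, (0, 10), [0.0], t_eval=t_obs,
                    args=(k_syn, K, k_deg, x_interp)).y[0]
    return np.sum((sim - y_obs) ** 2)

fit = minimize(loss, x0=np.log([1.0, 1.0, 1.0]), method="Nelder-Mead")
print("estimated (k_syn, K, k_deg):", np.exp(fit.x))
```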
Fracture Calibration Test Analysis and Design
Christine Ehlig-Economides (TAMU)
Abstract
The fracture calibration test is performed by injecting fracturing fluid into a well at a pressure high enough to cause tensile failure in the rock. Continued injection propagates a crack in the rock that is called a hydraulic fracture. At the end of pumping, the pressure drops as the fracture closes. The original intent of this test was to determine the pressure at which the fracture closes, which is called the closure pressure and which corresponds to the minimum stress in the formation. Additionally, this test provides a measure of the leakoff coefficient, which is a property of the rock and the injected fluid. More recently, these tests have been used to estimate the permeability of very low permeability formations where conventional tests do not work. This talk will show recent work on developing a combined global model for both before- and after-closure behavior as seen in the injection falloff response. The model is intended both for analysis and for design of these tests.
Local-Global Model Reduction for Large-Scale Models Integrating Systems-Theoretical Properties
Eduardo Gildin (TAMU)
Abstract
The development of algorithms for modeling and simulation of heterogeneous porous media poses challenges related to the variability of scales and requires efficient solution methodologies due to the large-scale nature of the models. This is especially daunting when simulating unconventional reservoirs that are embedded in fractured media. In this talk, I will describe a local-global model reduction method that combines multiscale techniques with reduced-order methods in a seamless fashion. The aim is to reduce the degrees of freedom in the state space by computing global reduced-order models written on a coarser space. The trade-offs between the local and global approaches will be demonstrated using an oil reservoir simulation model.
Complexity Reduction of Nonlinear Flows in Heterogeneous Porous Media
Mehdi Ghommem (KAUST)
Abstract
Mode decomposition methods have recently been examined in the context of reservoir simulation and optimization in order to overcome the computational burden associated with the large-scale nature of reservoir models while preserving the relevant physics. For instance, proper orthogonal decomposition (POD) techniques have been applied successfully in simulation and optimization of nonlinear flow models. However, their applicability to highly heterogeneous porous media can be limited by their inability to capture the relevant dynamic information associated with flows in high-contrast media. Furthermore, POD may not yield significant computational savings because evaluating the nonlinear term remains costly: it still depends on the full dimension of the original system. As such, we consider a coupled approach that combines the discrete empirical interpolation method (DEIM) and the dynamic mode decomposition (DMD). The DMD is used as a decomposition method to extract the coherent and dynamically relevant structures. The DEIM is based on selecting a small set of points and forming an approximation of the full nonlinear function by interpolation through the selected points. We present numerical results to show the capability of our approach to achieve a significant reduction in system size, and thus speed up numerical simulations, while preserving the main flow dynamics.
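For orientation, a compact sketch of the exact-DMD step on a generic snapshot matrix is shown below; the DEIM coupling and all reservoir-specific details are omitted, and the toy data set is a placeholder.

```python
import numpy as np

def dmd(snapshots, rank):
    """Exact dynamic mode decomposition of a snapshot matrix [x_0 ... x_m]."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    Ur, Sr, Vr = U[:, :rank], np.diag(s[:rank]), Vh[:rank].conj().T
    A_tilde = Ur.conj().T @ Y @ Vr @ np.linalg.inv(Sr)     # low-rank evolution operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vr @ np.linalg.inv(Sr) @ W                 # exact DMD modes
    return eigvals, modes

# Toy data: two decaying oscillatory structures on a 1D grid observed over time.
x = np.linspace(0, 1, 200)[:, None]
t = np.linspace(0, 4, 100)[None, :]
data = np.sin(6 * x) * np.exp((-0.3 + 2j) * t) + 0.5 * np.cos(11 * x) * np.exp((-0.1 + 5j) * t)
eigvals, modes = dmd(data, rank=2)
dt = t[0, 1] - t[0, 0]
print("continuous-time eigenvalues:", np.log(eigvals) / dt)   # should recover -0.3+2j, -0.1+5j
```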
Multiscale LBM Simulations of Flows in Porous Media
Jun Li (KAUST)
Abstract
Large-scale flows inside porous media are modeled using the Darcy or Brinkman approximations, in which the permeability parameter depends on the pore-scale geometry. The permeability can be computed by pore-scale simulations using the lattice Boltzmann method (LBM), based on the Navier-Stokes equations, inside a representative elementary volume (REV). A modified LBM algorithm is then used to solve the Darcy or Brinkman equations using the computed permeability. For most problems of interest, the whole computational domain is much larger than each REV, and the number of fine-grid cells required to represent the distribution of permeability becomes so large that the computational effort is unaffordable. We propose a multiscale LBM algorithm that reduces the computational requirements by using coarse resolutions with effective permeabilities. Relevant flow features of the fine scales are preserved at the coarse resolution by the proposed multiscale LBM methodology.
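For reference, the macroscopic models mentioned can be written (in one common form) as

\[
\mathbf{u} = -\frac{k}{\mu}\,\nabla p \;\;\text{(Darcy)}, \qquad
\frac{\mu}{k}\,\mathbf{u} + \nabla p = \mu_{e}\,\nabla^{2}\mathbf{u} \;\;\text{(Brinkman)}, \qquad
\nabla\cdot\mathbf{u} = 0,
\]

where the permeability k is obtained from the pore-scale (REV) LBM simulations and \(\mu_{e}\) is an effective viscosity; the specific non-dimensionalization used in the talk may differ.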
Two-Phase Flow in Porous Media Associated with Nanoparticle Injection
Mohamed El Amin (KAUST)
Abstract
In recent years, applications of nanoparticles have been reported in many disciplines, including the petroleum industry. Nanoparticles can be used in oilfields to enhance water injection by changing the wettability of the reservoir rock through their adsorption on pore walls. Moreover, nanoparticles can modify the rheology, mobility, wettability, and other properties of the fluids. Solid properties such as porosity and permeability are also changed due to nanoparticle precipitation on the walls and throats of the pores. In this talk, a mathematical model to describe nanoparticle transport carried by two-phase flow in porous media is presented. Buoyancy and capillary forces as well as Brownian diffusion are considered in the model. A numerical example of countercurrent water-oil imbibition is considered. The model includes negative capillary pressure and mixed relative permeability correlations to fit the mixed-wet system. Another example presents nanoparticle transport carried by injected CO2 into a water-CO2 two-phase system in porous media. Finally, we introduce a nonlinear iterative scheme for two-phase flow in porous media associated with nanoparticle injection, in which a linear approximation of the capillary pressure function is used to couple the implicit saturation equation into the pressure equation, which is also solved implicitly. A convergence theorem for our scheme is established under certain conditions.
Visualization of Statistical Uncertainty
Kristi Potter (SCI Institute, Univ. of Utah)
Abstract
Statistics are commonly used to quantify and visualize uncertainty in data sets. In this talk, I will explore the use of statistics in understanding complex problems and describe what typical datasets created in this way look like. From there, I will discuss the challenges of displaying these complex data sets and the statistical measures specific to expressing uncertainty within visualization. The remainder of the talk will focus on visualization methods, including a recounting of historical methods from the field of graphical data analysis, such as the boxplot, as well as an overview of methods from scientific and information visualization. Examples of current state-of-the-art methods will be presented, together with a discussion of open challenges in need of further exploration.
Visualization of Shared and Global Memory Behavior for Dynamic Performance Analysis of CUDA Kernels
Paul Rosen (SCI Institute, Univ. of Utah)
Abstract
We present an approach to investigating the memory behavior of a parallel kernel executing on thousands of threads simultaneously within the CUDA architecture. Our top-down approach allows for quickly identifying any significant differences between the execution of the many blocks and warps. As interesting warps are identified, we allow further investigation of memory behavior by visualizing the shared memory bank conflicts and global memory coalescence, first with an overview of a single warp with many operations and, subsequently, with a detailed view of a single warp and a single operation. We demonstrate the strength of our approach in the context of a parallel matrix transpose kernel and a parallel 1D Haar Wavelet transform kernel.
A Bayesian Spatio-Temporal Geostatistical Model with an Auxiliary Lattice for Large Datasets
Ganggang Xu (TAMU)
Abstract
When spatio-temporal datasets are massive, the aggravated computational burden can often lead to failures in the implementation of traditional geostatistical tools. In this paper, we propose a computationally efficient Bayesian hierarchical spatio-temporal model in which the spatial dependence is approximated by a Gaussian Markov random field while the temporal correlation is described by a vector autoregressive model. By introducing an auxiliary lattice on the spatial region of interest, the proposed method is not only able to handle irregularly spaced observations in the spatial domain, but is also able to bypass the missing-data problem in a spatio-temporal process. Because the computational complexity of the proposed Markov chain Monte Carlo algorithm is of order O(n), with n being the total number of observations in space and time, our method can handle very large to massive spatio-temporal datasets with reasonable CPU times. The performance of the proposed model is illustrated using simulation studies and a real-world precipitation dataset from the coterminous United States.
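One way to write the type of hierarchy described (the paper's exact specification may differ) is

\[
\mathbf{y}_{t} = \mathbf{H}\,\mathbf{x}_{t} + \boldsymbol{\varepsilon}_{t}, \quad \boldsymbol{\varepsilon}_{t}\sim N(\mathbf{0},\tau^{2}\mathbf{I}), \qquad
\mathbf{x}_{t} = \mathbf{A}\,\mathbf{x}_{t-1} + \boldsymbol{\eta}_{t}, \quad \boldsymbol{\eta}_{t}\sim N(\mathbf{0},\mathbf{Q}^{-1}),
\]

where \(\mathbf{x}_{t}\) lives on the auxiliary lattice, \(\mathbf{H}\) maps the lattice to the irregularly spaced observation locations, \(\mathbf{A}\) encodes the vector-autoregressive temporal dependence, and \(\mathbf{Q}\) is the sparse precision matrix of the Gaussian Markov random field.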
A Projection Method for Underdetermined Optimal Experimental Designs
Quan Long (KAUST)
Abstract
In this work, we extend a Laplace approximation method for Bayesian experimental design to the general case in which the model parameters cannot be determined completely by the data from the proposed experiments. We carry out the Laplace approximations in the directions orthogonal to the null space of the corresponding Jacobian matrix, so that the information gain (Kullback-Leibler divergence) can be reduced to an integration against the marginal density of the transformed parameters that are not determined by the experiments. Furthermore, the expected information gain can be approximated by an integration over the prior, where the integrand is a function of the projected posterior covariance matrix. To deal with the issue of dimensionality in a complex problem, we use Monte Carlo sampling or sparse quadratures for the integration over the prior probability density function, depending on the regularity of the integrand. We demonstrate the accuracy, efficiency, and robustness of the proposed method via several nonlinear underdetermined numerical examples. These include the design of the scalar design variable in a one-dimensional cubic polynomial model with two indistinguishable parameters forming a linear manifold, and the design of boundary source locations for impedance tomography in a square domain, where the parameters are treated as a piecewise linear continuous random field.
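For context, the expected information gain being approximated is the prior expectation of the Kullback-Leibler divergence between posterior and prior,

\[
I(\xi) = \int_{\mathcal{Y}}\int_{\Theta} \log\!\left(\frac{p(\theta\mid y,\xi)}{p(\theta)}\right) p(\theta\mid y,\xi)\,\mathrm{d}\theta\; p(y\mid\xi)\,\mathrm{d}y,
\]

where \(\xi\) denotes the experimental design; in the underdetermined case the Laplace approximation of the inner integral is carried out only in the directions orthogonal to the null space of the Jacobian (notation here is generic, not necessarily that of the talk).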
The Solution of the Low Mach Number Reactive Navier-Stokes Equations in Laboratory Scale Flames via an Operator-Split, Fractional-Step Method
Fabrizio Bisetti (KAUST)
Abstract
In this talk I will present the details of a fractional step, semi-implicit method for the solution of the reactive Navier-Stokes equations in the low Mach limit and its application to the simulation of laboratory flames for which experimental data is available. Exploiting an implicit treatment of the diffusive fluxes and an operator splitting for the stiff kinetics, relatively large stable time steps may be taken towards steady state. The implications of operator splitting errors on the flame propagation speed for one-dimensional model problems with detailed n-heptane chemistry are addressed via numerical experiments.
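Schematically, denoting by \(\mathcal{T}_{\Delta t}\) the (semi-implicit) transport and diffusion update and by \(\mathcal{S}_{\Delta t}\) the stiff chemistry integration, one standard symmetric (Strang-type) splitting advances the solution as

\[
\phi^{\,n+1} = \mathcal{S}_{\Delta t/2}\circ\mathcal{T}_{\Delta t}\circ\mathcal{S}_{\Delta t/2}\left(\phi^{\,n}\right),
\]

which is formally second-order accurate; the splitting error of such schemes is what is quantified in the flame-speed experiments (the talk's exact splitting may differ).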
The Role of Advanced Algorithms and Simulations in Plasma Science
Ravi Samtaney (KAUST)
Abstract
In this talk, we explore the challenges of simulating physical processes in plasma physics, which span several decades in spatial and temporal scales. It is well known that the fundamental kinetic equations (Vlasov or Fokker-Planck) suffer from the so-called curse of dimensionality. Numerical solutions of these equations in 6D phase space with full fidelity are simply beyond the capability of modern supercomputers. Simplifications such as the 5D gyrokinetic or 4D drift kinetic models still pose a significant challenge. While the looming exascale capability may be a step towards meeting some of these challenges, it is clear that brute-force computing has to be coupled with clever algorithms to enable discoveries. We discuss some of the intellectual scientific questions arising in plasma turbulence as a motivating example and present results from recent drift kinetic simulations in 4D on Shaheen (the IBM Blue Gene/P at KAUST) to shed light on some of these issues. On the other end of the spectrum of mathematical models describing a plasma is magnetohydrodynamics (MHD), arguably the most popular mathematical model for the macroscopic simulation of plasmas. Even “simple” fluid models such as resistive single-fluid MHD (which does not distinguish between electrons and ions) are governed by nonlinear partial differential equations that are notoriously difficult to simulate because the solutions can exhibit near-singular layers (or even discontinuities in the absence of diffusion terms). We will discuss how advanced algorithmic techniques, such as locally adaptive structured mesh refinement and fully implicit methods, are employed to overcome the tyranny of spatial and temporal scales in MHD, and present results from recent simulations of magnetic reconnection on Shaheen as an illustrative example.
Detonation in Radial Outflow
Aslan Kasimov and Svyatoslav Korneev (KAUST)
Abstract
Detonations are self-sustained shock waves propagating in chemically reactive media. We consider the dynamics of two-dimensional detonation in a radial supersonic outflow of a detonable gas emanating from a source. It is shown that there exists a steady solution wherein a circular detonation stands at some distance from the source and that for given flow conditions the solution can be double-valued. We analyze the structure of these solutions and calculate their dynamics by means of numerical simulations based on the reactive Euler equations.
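For completeness, with a single reaction-progress variable \(\lambda\) the reactive Euler equations take the standard form

\[
\partial_t \rho + \nabla\cdot(\rho\mathbf{u}) = 0, \qquad
\partial_t(\rho\mathbf{u}) + \nabla\cdot(\rho\mathbf{u}\otimes\mathbf{u} + p\,\mathbf{I}) = 0,
\]
\[
\partial_t(\rho E) + \nabla\cdot\big[(\rho E + p)\,\mathbf{u}\big] = 0, \qquad
\partial_t(\rho\lambda) + \nabla\cdot(\rho\lambda\,\mathbf{u}) = \rho\,\omega,
\]

where the total energy E includes the chemical heat release and \(\omega\) is the reaction rate (written here for a generic single-step kinetics model; the simulations discussed in the talk are based on the reactive Euler equations in two dimensions).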
Entropy Stability and High-Order Approximation of the Compressible Euler Equations
Jean-Luc Guermond (TAMU)
Abstract
This talk will discuss questions regarding parabolic regularization of the Euler equations and entropy stability. A sub-class of parabolic regularizations is identified that yields a minimum entropy principle and various entropy inequalities independently of the equation of state, provided a convex entropy exists. It is shown in particular that the Navier-Stokes regularization is not an appropriate regularization of the Euler equations. The consequences of this property will be illustrated numerically using continuous Lagrange elements and a Galerkin technique that does not use any slope limiters.
Entropy Viscosity Method for Lagrangian Hydrodynamics
Vladimir Tomov (TAMU)
Abstract
We use curvilinear finite elements to solve the Euler equations of compressible gas dynamics in a moving Lagrangian frame. The equations are first regularized in a way that guarantees positivity of the density, a correct entropy inequality, and a minimum principle on the specific entropy. The amount of added diffusion is controlled by the entropy production, which is large in shocks and almost zero in smooth regions. We then derive a Lagrangian form of the regularized system and propose a weak formulation. All variables (position, density, velocity, and internal energy) are discretized by continuous basis functions of arbitrary polynomial degree. This requires the use of high-order mappings from a standard 2D/3D quadrilateral/hexahedral reference element. Finally, we use Runge-Kutta time stepping to derive the fully discrete algorithm. We will show 2D and 3D numerical results for standard Lagrangian hydro test cases. Our code is developed using the MFEM finite element library.
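In generic form (following the entropy viscosity methodology of Guermond and co-workers; the coefficients actually used in the talk may differ), the added diffusion in each cell K is driven by the entropy residual and capped by a first-order viscosity:

\[
R_h = \partial_t E(u_h) + \nabla\cdot F(u_h), \qquad
\nu\big|_{K} = \min\left( c_{\max}\, h_K\, \lambda_{\max,K},\; c_{E}\, h_K^{2}\,\frac{\max_{K}|R_h|}{\|E(u_h)-\bar{E}\|_{\infty,\Omega}} \right),
\]

where (E, F) is an entropy pair, \(\lambda_{\max,K}\) the local maximum wave speed, \(h_K\) the cell size, and \(c_{\max}\), \(c_{E}\) tunable constants; the residual \(R_h\) is large in shocks and nearly zero in smooth regions, which is what localizes the dissipation.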
Entropy Viscosity for High-Order Finite Elements
Murtazo Nazarov (TAMU)
Abstract
A great challenge of computational fluid dynamics is to design and implement efficient, high-order accurate numerical methods for approximating nonlinear hyperbolic systems of conservation laws, in particular the compressible Euler equations. Since high-order discretizations produce spurious oscillations in regions of shocks and sharp discontinuities, nonlinear stabilization techniques are needed to avoid or control these oscillations, see e.g. [1, 2]. This talk will discuss questions regarding parabolic regularization of the Euler equations and entropy stability. In particular, a sub-class of parabolic regularizations is identified that yields a minimum entropy principle. The consequences of this property will be illustrated with a high-order finite element method.
Quantifying Dust Deposition and Radiative Impact on the Red Sea and Arabian Peninsula
Georgiy Stenchikov (KAUST)
Abstract
Dust is a major source of primary aerosols in the Middle East and North Africa region. Key to their influence on current and future climate is their role in perturbing the Earth’s radiation balance at the top of the atmosphere, at the surface, and within the atmosphere via their impact on both the reflected and absorbed shortwave and the emitted longwave radiation. In this study we have combined ground-based observations, satellite aerosol retrievals, and modeling to better quantify the dust-induced radiative forcing and deposition patterns. We used WRF-Chem, a regional meteorological model coupled with an aerosol-chemistry component, to simulate various aspects of the dust phenomena over the Arabian Peninsula and Red Sea during typical dust events. We utilized SEVIRI, MODIS, MISR, and AERONET measurements of the aerosol optical depth to evaluate the radiative impact of aerosols. Our results clearly indicate that the presence of dust particles in the atmosphere caused a significant reduction in the amount of solar radiation reaching the surface during the dust event. We also found that dust aerosols have a significant impact on the energy and nutrient balances of the Red Sea. Our results showed that the simulated cooling under the dust plume reaches 100 W m−2, which could have profound effects on both the sea surface temperature and circulation. Further analysis of dust generation and its spatial and temporal variability is extremely important for future climate projections and for a better understanding of the regional climate and ecological history of the Red Sea.
Morphological Control and Characterization of Porous Membranes
Suzana Nunes (KAUST)
Abstract
A large diversity of membranes and processes is needed for water applications, but morphology control is one of the most important issues to guarantee performance with the right flux and selectivity. The lecture will be an overview of membranes available or under development for nano- and ultrafiltration and membrane distillation. Challenges include the achievement of very regular pores, high porosity, control of hydrophobicity, and added functionality such as catalytic activity, resistance to fouling, and stimuli response. For developing the right membrane, advanced methods of characterization are essential. We have been applying different techniques of electron microscopy and small-angle X-ray scattering, combined with modeling and simulation, to reveal new mechanisms of membrane formation and to enable separation tasks that were hardly possible before. New polymers with improved stability are also now available, considerably enlarging the range of membrane applications.
Exploring the Connectome: Query-Driven Visualization
Ali Awami (KAUST)
Abstract
Reconstructing the human connectome is one of the major scientific endeavors of the 21st century. By deciphering the brain’s neural circuits and their properties, scientists hope to gain an understanding of how the brain functions. However, the immense complexity of the mammalian connectome and the huge amount of imaging data that need to be acquired, stored, processed, and, most importantly, analyzed present a big challenge for neuroscientists. In this talk we present ConnectomeExplorer, an application for the interactive exploration and query-guided visual analysis of large volumetric electron microscopy (EM) datasets in connectomics research. Our system incorporates a knowledge-based query algebra that supports the interactive specification of dynamically evaluated queries, enabling neuroscientists to pose and answer domain-specific questions in an intuitive manner.
Optimizing Dense/Sparse Linear Algebra Operations on Manycore Systems
Hatem Ltaief (KAUST)
Abstract
The design of high-performance dense and sparse linear algebra algorithms is critical to ensuring that tomorrow’s high-end exascale systems, containing millions of cores, are well exploited. These numerical operations are located at the bottom of the food chain for many applications. Kernel low-level optimizations, auto-tuning, fine-grained parallelism, data motion reduction, asynchronous execution, and user productivity represent key concepts that must be considered not only for maximizing performance but also for power efficiency. Performance results will be reported for solvers of dense systems of linear equations and symmetric eigenvalue problems, the sparse matrix-vector multiplication kernel, and stencil computations. Performance comparisons against existing state-of-the-art high-performance libraries will illustrate the impact of these techniques on various architectures.
Don’t Parallelize – Think Parallel and Write Clean Code!
Lawrence Rauchwerger (TAMU)
Abstract
Parallel computers have come of age and need parallel software to justify their usefulness. We recognize two major avenues to get programs to run in parallel: parallelizing compilers and parallel languages and/or libraries. In this talk we present our latest results using both approaches and draw some conclusions about their relative effectiveness and future. In the first part we introduce the Hybrid Analysis (HA) compiler framework, which can seamlessly integrate all static and run-time analysis of memory references into a single framework capable of fully automatic loop-level parallelization. Experimental results on 26 benchmarks from the PERFECT-CLUB and SPEC suites show full-program speedups superior to those obtained by the Intel and IBM Fortran compilers. In the second part of this talk we present the Standard Template Adaptive Parallel Library (STAPL) approach to parallelizing code. STAPL is a collection of generic data structures and algorithms that provides a high-productivity parallel programming infrastructure analogous to the C++ Standard Template Library (STL). We provide an overview of the major STAPL components and its programming model, and then present scalability results of real codes on petascale machines such as the IBM BG/Q and Cray.
iFrag: Interference-Aware Frame Fragmentation Scheme for Wireless Sensor Networks
Basem Shihada (KAUST)
Abstract
In wireless sensor networks, data transmission reliability is a fundamental challenge due to several physical constraints such as interference, power consumption, and environmental effects. In current wireless sensor implementations, a single bit error requires retransmitting the entire frame, which incurs extra processing overhead and power consumption, especially for large frames. Fragmenting a frame into small blocks with individual error detection codes can avoid unnecessary retransmission of the correctly received blocks. The optimal block size, however, varies with the wireless channel conditions. In this talk, I will introduce a novel interference-aware frame fragmentation scheme called iFrag, which effectively addresses the challenges associated with dynamic partitioning of blocks. iFrag achieves up to a 3x improvement in throughput when the channel is noisy, while reducing the delay to 12% compared with other static fragmentation approaches. On average, it shows a 13% gain in throughput across all channel conditions.
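To make the basic idea concrete, the sketch below fragments a frame into fixed-size blocks, each carrying its own CRC-32 checksum, so that only blocks failing the checksum need to be retransmitted. This is only an illustration of per-block error detection; it does not reproduce iFrag's interference-aware block sizing.

```python
import zlib

def fragment(frame: bytes, block_size: int):
    """Split a frame into blocks, each carrying its own 4-byte CRC-32 checksum."""
    blocks = []
    for i in range(0, len(frame), block_size):
        payload = frame[i:i + block_size]
        blocks.append(payload + zlib.crc32(payload).to_bytes(4, "big"))
    return blocks

def blocks_to_retransmit(received_blocks):
    """Only blocks whose checksum fails need to be retransmitted."""
    bad = []
    for idx, blk in enumerate(received_blocks):
        payload, crc = blk[:-4], int.from_bytes(blk[-4:], "big")
        if zlib.crc32(payload) != crc:
            bad.append(idx)
    return bad

frame = bytes(range(256)) * 4                 # a 1024-byte frame
tx = fragment(frame, block_size=64)
rx = list(tx)
rx[3] = b"\x00" + rx[3][1:]                   # corrupt one byte in block 3
print("retransmit blocks:", blocks_to_retransmit(rx))
```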
On Modeling the Coexistence of WiFi and Wireless Sensor Networks
Radu Stoleru (TAMU)
Abstract
The explosion in the number of WiFi and Wireless Sensor Network (WSN) deployments is exacerbating the coexistence problem, observed and reported in the literature as significant performance degradation in co-located networks employing the two different wireless standards. The wireless coexistence problem has thus far been studied primarily using hardware, due to the lack of analytical results and of good wireless coexistence models in network simulators; progress on addressing wireless coexistence issues has therefore been slow. In this talk, I will present the first analytical model and the first protocol coexistence simulator for coexisting WiFi and WSNs. We derive analytically, using Markov chains, the normalized saturation throughput of coexisting WiFi and WSNs, simulate the protocol coexistence using Monte Carlo methods, and validate our results through extensive experiments on real hardware. These tools can be used for convenient and accurate performance estimation of medium- to large-scale deployments. I will conclude this talk by briefly presenting several other research directions in the Laboratory for Embedded & Networked Sensor Systems at Texas A&M University.
PHD-Store: An Adaptive SPARQL Engine with Dynamic Partitioning for Distributed RDF Repositories
Panos Kalnis (KAUST)
Abstract
A variety of repositories use the versatile RDF model to publish their data. Repositories are typically distributed and geographically remote, but the data are interconnected (e.g., the Semantic Web) and queried globally with a language such as SPARQL. Due to the network cost and the nature of the queries, the execution cost can be prohibitively large. Current solutions attempt to minimize the network cost by redistributing all data in a preprocessing phase that uses a smart partitioning algorithm, but there are two drawbacks: (i) partitioning is based on heuristics that may not benefit many future queries; and (ii) the preprocessing phase is very expensive even for moderate-size datasets. In this talk we describe PHD-Store, a SPARQL engine for distributed RDF repositories. Our system does not assume any particular initial data placement and does not require pre-partitioning; hence, it minimizes the data-to-query time. PHD-Store starts answering queries using a potentially slow distributed semi-join algorithm, but adapts dynamically to the query load by incrementally indexing frequently accessed data. Indexing is done in such a way that future queries can benefit from a fast hash-based parallel execution plan. Our experiments with synthetic and real data verify that PHD-Store scales to very large datasets and many repositories, converges to comparable or better partitioning quality than existing methods, and executes large query loads one to two orders of magnitude faster than its competitors.
Parallel Motion Planning with Application to Crowd Simulation and Protein Folding
Nancy Amato (TAMU)
Abstract
Motion planning arises in many application domains such as computer animation (digital actors), mixed reality systems and intelligent CAD (virtual prototyping and training), and even computational biology and chemistry (protein folding and drug design). Although sampling-based planners have proven effective on problems from all these domains, there has been surprisingly little work on developing parallel methods. In this talk, we describe scalable strategies for parallelizing the two main classes of sampling-based planners: the graph-based probabilistic roadmap methods (PRMs) and the tree-based rapidly-exploring random trees (RRTs). We also describe our application of PRMs to crowd simulation, including evacuation planning and architectural design. Finally, we describe our application of PRMs to simulating molecular motions, such as protein and RNA folding.
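A compact two-dimensional sketch of the basic (sequential) PRM construction is shown below for orientation: sample free configurations, connect each to its k nearest neighbors with a straight-line local planner, and answer a query by graph search. The obstacle layout, sample count, and connection parameters are placeholders, and none of the parallelization or application-specific machinery from the talk is reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
obstacles = [((0.3, 0.3), 0.15), ((0.7, 0.6), 0.2)]       # circular obstacles: (center, radius)

def collision_free(p, q, steps=20):
    """Local planner: check sample points along the straight segment p -> q."""
    for t in np.linspace(0.0, 1.0, steps):
        x = (1 - t) * p + t * q
        if any(np.linalg.norm(x - np.array(c)) < r for c, r in obstacles):
            return False
    return True

# 1) Sample free configurations, 2) connect each node to its k nearest neighbors.
samples = [s for s in rng.random((200, 2)) if collision_free(s, s, steps=1)]
k = 8
edges = {i: [] for i in range(len(samples))}
for i, p in enumerate(samples):
    dists = np.linalg.norm(np.array(samples) - p, axis=1)
    for j in np.argsort(dists)[1:k + 1]:
        if collision_free(p, samples[j]):
            edges[i].append(int(j))
            edges[int(j)].append(i)

# 3) Query: breadth-first search between the roadmap nodes closest to start and goal.
def nearest(q):
    return int(np.argmin(np.linalg.norm(np.array(samples) - q, axis=1)))

start, goal = nearest(np.array([0.05, 0.05])), nearest(np.array([0.95, 0.95]))
frontier, parents = [start], {start: None}
while frontier:
    node = frontier.pop(0)
    if node == goal:
        break
    for nb in edges[node]:
        if nb not in parents:
            parents[nb] = node
            frontier.append(nb)
print("path found:", goal in parents)
```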