February 24-25, 2011

The Use of Ensemble Methods for History Matching

Geir Evensen  (Statoil Research Centre)

Abstract

This paper compares two ensemble-based data-assimilation methods for solving the history-matching problem in reservoir-simulation models: the Ensemble Kalman Filter (EnKF) and the Ensemble Smoother (ES). EnKF has been studied extensively in petroleum applications, while ES is here used for the first time for history matching. ES differs from EnKF by computing a global update in the space-time domain, rather than recursive updates in time as in EnKF; the sequential updating of the realizations, with its associated restarts, is thereby avoided. EnKF and ES provide identical solutions for linear dynamical models. For nonlinear dynamical models, however, and in particular models with chaotic dynamics, EnKF is superior to ES because the recursive updates keep the model on track and close to the true solution. Thus, ES has seen little use, and EnKF has been the method of choice in most data-assimilation studies that employ ensemble methods. On the other hand, reservoir-simulation models are rather diffusive systems compared to the chaotic dynamical models previously used to test ES. If the model solution can be assumed to be stable with respect to small perturbations in the initial conditions and the history-matching parameters, then ES should give results similar to EnKF, while being a more efficient and much simpler method to implement and apply. In this paper we compare EnKF and ES and show that ES indeed provides an efficient ensemble-based method for history matching.

Authors: Geir Evensen, Jan-Arild Skjervheim, Joakim Hove, and Jon Gustav Vabø
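To make the structural difference concrete, here is a minimal sketch (in Python, not the authors' implementation) of the perturbed-observation ensemble update that both methods share; EnKF applies it recursively at each report time with simulator restarts, while ES applies it once with all observations stacked in the space-time domain. The ensemble, observations, and error covariance below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_update(E, Y, d, R):
    """Perturbed-observation ensemble analysis.
    E: (n_state, n_ens) ensemble of states/parameters,
    Y: (n_obs, n_ens) predicted observations, d: observed data,
    R: observation-error covariance."""
    n_ens = E.shape[1]
    A = E - E.mean(axis=1, keepdims=True)            # state anomalies
    S = Y - Y.mean(axis=1, keepdims=True)            # predicted-data anomalies
    C = S @ S.T / (n_ens - 1) + R                    # innovation covariance
    D = d[:, None] + rng.multivariate_normal(np.zeros(len(d)), R, n_ens).T
    K = (A @ S.T / (n_ens - 1)) @ np.linalg.inv(C)   # ensemble Kalman gain
    return E + K @ (D - Y)

# EnKF: call ensemble_update at every assimilation time, restarting the simulator
# from the updated ensemble. ES: one call with all observation times stacked into
# a single d, Y, and R, so no restarts are needed.
```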

 

Model Error Identification Using Sparsity-Based Inversion Techniques for Subsurface Characterization

Behnam Jafarpour (Texas A&M University)

Abstract

Insufficient data and imperfect modeling assumptions are two main contributors to uncertainty in subsurface characterization and predictive flow and transport modeling. Failure to account for these sources of uncertainty can lead to biased inverse modeling solutions and inaccurate predictions. Data scarcity necessitates the incorporation of direct or indirect prior information for well-posed inverse problem formulation and stable solution algorithms. Traditionally, in constraining predictive models, structural prior assumptions (e.g., a covariance model) are treated with certainty. However, prior continuity models are typically derived from incomplete information and can be subject to significant uncertainty that, if ignored, can lead to biased solutions with little predictive power. I will discuss a new algorithm for incorporating prior information while protecting against errors in it. To do this, the prior information is included in the model calibration process by assuming a wide range of variability (uncertainty) in it. Incorporating a wide range of prior structural variability implies that a significant portion of the prior features has negligible contribution to reconstruction of the final solution. I will present a sparse model representation and inversion approach that provides an effective framework for posing and solving the resulting inverse problem.
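As a hedged illustration of the kind of sparse inversion alluded to above (a generic sketch, not the speaker's algorithm), the following solves a linear inverse problem with an ℓ1 penalty on coefficients in a prior feature basis by iterative soft thresholding; the forward operator G, the basis, and the data are all hypothetical.

```python
import numpy as np

def ista(G, d, lam=0.1, n_iter=500):
    """Minimize 0.5*||G v - d||^2 + lam*||v||_1 by iterative soft thresholding.
    v holds coefficients in a sparsifying (prior-feature) basis."""
    step = 1.0 / np.linalg.norm(G, 2) ** 2                        # 1 / Lipschitz constant
    v = np.zeros(G.shape[1])
    for _ in range(n_iter):
        z = v - step * (G.T @ (G @ v - d))                        # gradient step on the misfit
        v = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return v

# Hypothetical example: 30 measurements, 100 candidate prior features, 5 active.
rng = np.random.default_rng(1)
G = rng.standard_normal((30, 100))
v_true = np.zeros(100)
v_true[rng.choice(100, 5, replace=False)] = 1.0
d = G @ v_true + 0.01 * rng.standard_normal(30)
v_hat = ista(G, d, lam=0.05)
```

The point of the ℓ1 penalty is that most candidate prior features end up with near-zero coefficients, mirroring the observation above that only a small portion of the prior contributes to the final solution.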

Uncertainty Quantification & Dynamic State Estimation of Power Grid System

Guang Lin (Pacific Northwest National Laboratory)

Abstract

Experience suggests that uncertainties often play an important role in controlling the stability of power systems. Therefore, uncertainty needs to be treated as a core element in the simulation and dynamic state estimation of power systems. In this talk, a probabilistic collocation method (PCM) will be employed to conduct uncertainty quantification of component-level power system models, which can provide an error bar and confidence interval on component-level modeling of power systems. Numerical results demonstrate that the PCM approach provides accurate error bars at much lower computational cost than classic Monte Carlo (MC) simulations. Additionally, a PCM-based ensemble Kalman filter (EKF) will be discussed for real-time fast dynamic state estimation of power systems. Compared with an MC-based EKF approach, the proposed PCM-based EKF implementation solves the system of stochastic state equations much more efficiently. Moreover, the PCM-EKF approach can sample the generalized polynomial chaos approximation of the stochastic solution with an arbitrarily large number of samples at virtually no additional computational cost. Hence, the PCM-EKF approach can drastically reduce the sampling errors and achieve high accuracy at reduced computational cost compared to the classical MC implementation of the EKF. The PCM-EKF-based dynamic state estimation is tested on a multi-machine system with various random disturbances. Our numerical results demonstrate the validity and performance of the PCM-EKF approach and also indicate that it can include the full dynamics of the power systems and ensure an accurate representation of the changing states in the power systems.
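To make the collocation idea concrete, here is a minimal sketch (with a toy scalar model, not the component-level power-system models of the talk) that propagates a single Gaussian uncertain parameter through the model using a few Gauss-Hermite collocation points and compares the resulting mean and variance with plain Monte Carlo.

```python
import numpy as np

def model(xi):
    """Toy nonlinear response to an uncertain parameter xi ~ N(0, 1)."""
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

# Probabilistic collocation: evaluate the model only at a handful of quadrature nodes.
nodes, weights = np.polynomial.hermite_e.hermegauss(5)   # 5 collocation points
w = weights / np.sqrt(2.0 * np.pi)                       # normalize to the Gaussian pdf
vals = model(nodes)
pcm_mean = np.sum(w * vals)
pcm_var = np.sum(w * vals ** 2) - pcm_mean ** 2

# Monte Carlo reference: orders of magnitude more model evaluations.
xi = np.random.default_rng(2).standard_normal(100_000)
mc = model(xi)
print(pcm_mean, mc.mean(), pcm_var, mc.var())
```

For this smooth toy model, five collocation runs already reproduce the Monte Carlo statistics to within sampling error, which is the kind of cost advantage the abstract describes.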

Multilevel Methods for Uncertainty Quantification – Opportunities and Challenges

Xiao-Hui Wu (Exxon Mobil)

Abstract

Multilevel methods are becoming popular for uncertainty quantification and for solving large-scale inverse problems. In this talk, we analyze the need for such methods and discuss strategies related to reservoir model evaluations. Successful application of these methods in practice needs to overcome several significant challenges, which at the same time provide abundant and interesting research opportunities. These challenges and opportunities will be discussed.

Dealing with Uncertainties and the Curse of Dimensionality in Closed-Loop Reservoir Management

Hector Klie (ConocoPhillips Company)

Abstract

Real-time reservoir management and optimization poses multiple challenges due to the limited information content of the data, the nonlinearity of the multiple physics involved, and measurement errors arising at multiple scales. In this presentation we highlight some of the main avenues that have been explored in both industry and academia to reliably quantify uncertainty and cope with the large number of decision parameters involved in history matching, production optimization, and closed-loop reservoir studies. The discussion will include the latest advances in reduced/surrogate modeling, parameterization methods, and high-performance computing.

A Fast Algorithm for the Inverse Medium Problem with Multiple Sources

George Biros (Georgia Institute of Technology)

Abstract

We consider the inverse medium problem for the time-harmonic wave equation with broadband and multi-point illumination in the low frequency regime. Such a problem finds many applications in geosciences (e.g. ground penetrating radar), non-destructive evaluation (acoustics), and medicine (optical tomography). We use an integral-equation (Lippmann-Schwinger) formulation, which we discretize using a quadrature method. We consider only small perturbations (Born approximation). To solve this inverse problem, we use a least-squares formulation. We present a new fast algorithm for the efficient solution of this particular least-squares problem.

If N_ω is the number of excitation frequencies, N_s the number of different source locations for the point illuminations, N_d the number of detectors, and N the size of the parameterization of the scatterer, a dense singular value decomposition of the overall input-output map costs [min(N_s N_ω N_d, N)]^2 × max(N_s N_ω N_d, N). We have developed a fast SVD-based preconditioner that brings the cost down to O(N_s N_ω N_d N), thus providing orders-of-magnitude improvements over a black-box dense SVD and an unpreconditioned linear iterative solver.
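To make the scaling concrete, the comparison can be written out as follows; the numerical sizes in the comment are purely hypothetical and only illustrate the gap.

```latex
% Dense SVD of the overall input-output map vs. the preconditioned solver:
\[
  C_{\mathrm{dense}} \sim \bigl[\min(N_s N_\omega N_d,\, N)\bigr]^{2}\,
                          \max(N_s N_\omega N_d,\, N),
  \qquad
  C_{\mathrm{fast}} = \mathcal{O}(N_s N_\omega N_d\, N).
\]
% Hypothetical sizes N_s = N_d = 10^2, N_omega = 10, N = 10^4 give
% N_s N_omega N_d = 10^5, hence C_dense ~ (10^4)^2 * 10^5 = 10^{13},
% while C_fast ~ 10^9: roughly four orders of magnitude fewer operations.
```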

Authors: George Biros and Stephanie Chaillat

Model Reduction of Stochastic Systems in Random Heterogeneous Media

Nicholas Zabaras (Cornell University)

Abstract

Predictive modeling of physical processes in heterogeneous media requires innovations in mathematical and computational thinking. While recent multiscale approaches have been successful in modeling the effects of fine scales on macroscopic response, a significant grand challenge remains in understanding the effects of topological uncertainties in the characterization of properties and in the predictive modeling of processes in heterogeneous media. To address these problems, we need a paradigm shift in the predictive modeling of complex systems in the presence of uncertainties, in order to address two major limitations in modeling stochastic PDEs: (1) the stochastic inputs are mostly based on ad hoc models, and (2) the number of independent stochastic parameters is typically very high. To address the former, we are developing nonlinear data-driven model reduction strategies to utilize experimentally available information based on low-order realistic models of input uncertainties. To address the latter, we are developing low-complexity surrogate models of the high-dimensional stochastic multiscale system under consideration. A number of examples will be discussed in the data-driven representation of random heterogeneous media and in the modeling of physical processes in such media.

Local POD-Based Multiscale Mixed FEMs for Model Reduction of Multiphase Compressible Flow

Stein Krogstad (SINTEF)

Abstract

Model-based decision support for reservoir management can involve a high number of reservoir simulations, e.g., in optimization loops and uncertainty sampling. From one simulation to the next, changes in model parameters and input are typically limited, and accordingly the potential to “reuse” computations through model reduction is large. In this talk we present a local basis model-order reduction technique for approximation of flux/pressure fields based on local proper orthogonal decompositions (PODs) “glued” together using a Multiscale Mixed Finite Element Method (MsMFEM) framework on a coarse grid. Based on snapshots from one or more simulation runs, we perform SVDs for the flux distribution over coarse grid interfaces and use the singular vectors corresponding to the largest singular values as boundary conditions for the multiscale flux basis functions. The span of these basis functions matches (to prescribed accuracy) the span of the snapshots over coarse grid faces. Accordingly, the complementary span (what’s left) can be approximated by local PODs on each coarse block giving a second set of local/sparse basis functions.

One natural property of such a model reduction technique is that it should reproduce the tuning simulations to a given accuracy. In addition, to keep the storage requirements low, we wish to build a reduced basis for the velocity (or flux) only, and use piecewise constant basis functions for pressure (as in the original MsMFEM) when solving the coarse systems. We show that this is straightforward for incompressible flow, while when compressibility is present, a difficulty arises due to certain orthogonality requirements between the velocity-basis source functions and the fine-scale pressure variations. We discuss and compare a few formulations to get around this problem, and present results for two-phase test problems including compressibility and gravity.

The numerical experiments suggest that the localized basis approach gives good results “further away” from the tuning run(s) than the global version. Another advantage of the local version is that local changes in the model can be accounted for by adding a few extra local basis functions.
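A minimal sketch of the snapshot-POD step in isolation (generic Python, not the MsMFEM implementation described above): collect fine-scale flux snapshots over one coarse interface, take an SVD, and keep the leading singular vectors as the local basis; the snapshot matrix here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic snapshots: fluxes on the fine faces of one coarse interface,
# collected over n_snap report steps of one or more tuning simulations.
n_fine_faces, n_snap = 200, 40
snapshots = rng.standard_normal((n_fine_faces, 5)) @ rng.standard_normal((5, n_snap))

# POD via SVD: keep enough singular vectors to reach a prescribed accuracy.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
k = int(np.searchsorted(energy, 0.999)) + 1   # basis size for 99.9% of the energy
basis = U[:, :k]                              # boundary data for the multiscale basis functions

# The span of the basis reproduces the snapshots to the prescribed accuracy:
recon = basis @ (basis.T @ snapshots)
print(k, np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots))
```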

Towards Geology-Guided Quantitative Uncertainty Management in Dynamic Model Inversion

Marko Maucec (Halliburton/Landmark Graphics Corporation)

Abstract

Optimal Improved Oil Recovery (IOR) depends to a large extent on the ability to estimate the volumes and locations of bypassed oil from available historical data. Assisted simulation history-matching techniques are being used to estimate the volumes and locations of remaining reserves. The talk will address an approach to history matching that more accurately captures model uncertainty. The novelty lies in direct interfacing between the DecisionSpace Desktop Earth Modeling software and a forward simulator, with the rapid generation of model updates in the wave-number domain. A workflow based on multi-step Bayesian Markov chain Monte Carlo (MCMC) inversion is described, outlining a method where the proxy model is guided by streamline-based sensitivities, dispensing with the need to run a forward simulation for every model realization. The method generates an ensemble of sufficiently diverse static model realizations at the high-resolution geological scale by obeying known geostatistics (variograms) and well constraints. Efficient model parameterization and updating, based on the Discrete Cosine Transform (DCT), is described for rapid characterization of the main features of the geologic uncertainty space: structural framework, stratigraphic layering, facies distribution, and petrophysical properties. The application of the history-matching workflow to a case study combining a geological model with ~1M cells, four different depositional environments, and 30 wells with a 10-year water-flood history is presented. Finally, ongoing work to develop technology for rapid dynamic ranking of history-matched models to optimize business decisions will be presented. Its main features include dynamic characterization of geological uncertainty by calculating pattern dissimilarity, deployment of very fast streamline simulations in evaluating dissimilarity, application of multi-dimensional scaling pattern-recognition techniques, and selection of a few representative realizations for full-physics simulation.
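As a hedged illustration of DCT-based parameterization in the wave-number domain (a generic sketch, not the DecisionSpace workflow), the following keeps only a low-wave-number block of DCT coefficients of a synthetic smooth field, which is the kind of compact parameter space in which the MCMC updates operate.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(4)

# Synthetic smooth field on a 64 x 64 grid (a stand-in for log-permeability).
field = gaussian_filter(rng.standard_normal((64, 64)), sigma=6)

# Parameterize in the wave-number domain: keep only an 8 x 8 block of low-frequency
# DCT coefficients, so updates perturb 64 parameters instead of 4096 cells.
coeffs = dctn(field, norm="ortho")
reduced = np.zeros_like(coeffs)
reduced[:8, :8] = coeffs[:8, :8]
approx = idctn(reduced, norm="ortho")   # realization rebuilt from the reduced parameters

print(np.linalg.norm(field - approx) / np.linalg.norm(field))  # relative truncation error
```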

Uncertainty Quantification for Subsurface Flow Problems Using Coarse-Scale Models

Yuguang Chen (Chevron Energy Technology Company)

Abstract

The multiscale nature of geological formations can have a strong impact on subsurface flow processes. In an attempt to characterize these formations at all relevant length scales, highly resolved property models are typically constructed. This high degree of detail greatly complicates flow simulations and uncertainty quantification. To address this issue, a variety of computational upscaling (numerical homogenization) procedures have been developed. In this talk, we present a number of existing approaches, including the upscaling of single-phase and multiphase flow parameters. Emphasis is placed on the performance of these techniques for uncertainty quantification, where many realizations of the geological model are considered. Along these lines, an ensemble-level upscaling approach is described, in which the goal is to provide coarse models that capture flow statistics (such as the cumulative distribution function for oil production) consistent with those of the underlying fine-scale models, rather than to achieve agreement on a realization-by-realization basis. Numerical results highlighting the relative advantages and limitations of the various methods are presented. In particular, the ensemble-level upscaling approach is shown to provide accurate statistical predictions at an acceptable computational cost.

Emulating the Nonlinear Matter Power Spectrum for the Universe

David Higdon (Los Alamos National Laboratory)

Abstract

Very accurate simulations of the large-scale structure of the universe are required to estimate and constrain cosmological parameters governing a cold dark matter model of the universe. This application makes use of a suite of nearly 1000 N-body simulations at different force and mass resolutions to explore 38 different cosmologies spanning a 5-dimensional parameter space. The statistical challenge of this application is to combine information from these computational simulations, which are run at different resolutions, to build an emulator that predicts model output at untried parameter settings. With this emulator, we can then use physical observations to estimate and constrain the cosmological parameters governing the model.
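A minimal sketch of the emulation step in general terms (not the multi-resolution emulator of the talk): fit a Gaussian-process surrogate to simulator outputs at a modest number of parameter settings and predict, with uncertainty, at untried settings. The toy "simulator", the design, and the kernel choice are all assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(5)

def simulator(theta):
    """Toy stand-in for an expensive simulation over a 2-D parameter space."""
    return np.sin(3.0 * theta[:, 0]) + 0.5 * theta[:, 1] ** 2

# Design: a modest number of simulator runs at scattered parameter settings.
theta_train = rng.uniform(0.0, 1.0, size=(40, 2))
y_train = simulator(theta_train)

# Emulator: GP regression predicts the output at untried settings with error bars.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.2, 0.2]),
                              normalize_y=True)
gp.fit(theta_train, y_train)
theta_new = rng.uniform(0.0, 1.0, size=(5, 2))
mean, std = gp.predict(theta_new, return_std=True)
print(mean, std)
```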

Data Analysis Tools for Uncertainty Quantification of Inverse Problems

Luis Tenorio (Colorado School of Mines)

Abstract

We present exploratory data analysis methods to assess inversion estimates using examples based on ℓ2- and ℓ1-regularization. These methods can be used to reveal the presence of systematic errors such as bias and discretization effects, or to validate assumptions made on the statistical model used in the analysis. The methods include: confidence intervals and bounds for the bias, resampling methods for model validation, and construction of training sets of functions with controlled local regularity.
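As a hedged illustration of the setting (a generic ℓ2-regularized toy problem, not the methods of the talk), the sketch below computes a Tikhonov estimate and the standardized data residuals that one might inspect for systematic error:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy linear inverse problem d = G m + noise, with known noise level sigma.
n_data, n_model, sigma = 60, 120, 0.05
G = rng.standard_normal((n_data, n_model)) / np.sqrt(n_model)
m_true = np.sin(np.linspace(0.0, 3.0 * np.pi, n_model))
d = G @ m_true + sigma * rng.standard_normal(n_data)

# l2 (Tikhonov) regularized estimate: minimize ||G m - d||^2 + alpha * ||m||^2.
alpha = 0.1
m_hat = np.linalg.solve(G.T @ G + alpha * np.eye(n_model), G.T @ d)

# Standardized residuals: visible structure, or a variance far from 1, would hint
# at bias, discretization effects, or a misspecified noise model.
r = (d - G @ m_hat) / sigma
print(r.mean(), r.var())
```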

Calibration and Prediction in a Radiative Shock Experiment

Bruce Fryxell  (University of Michigan)

Abstract

The CRASH experiment uses a laser to generate a strong shock in a Be disk μm thick. The shock breaks out of the disk after about 400 ps into a Xe-filled tube and produces sufficient radiation to modify the shock structure. The shock location is predicted using two simulation codes, Hyades and CRASH. Hyades models the laser-plasma interaction at times less than 1.1 ns and can predict the shock breakout time. The CRASH code is initialized at 1.1 ns and is used to predict the shock location at later times for comparison to experiment.

We use the simulation tools and experiments conducted in one region of input space to predict in a new region where no prior experiments exist. Two data sets exist on which to base predictions: shock breakout time data, and shock location data at 13, 14, and 16 ns; we wish to predict shock locations at 20 and 26 ns for comparison to subsequent experiments. We use two models of the Kennedy-O'Hagan form to combine experiments with simulations, using one to inform the other, and we interpret the discrepancy in these models in a way that allows us to gain some understanding of model error separately from parameter tuning.

Shock breakout times are modeled by constructing a model of the form t ≈ η_BO(x, θ) + δ_BO(x) + ϵ_BO that jointly fits the field measurements T of the shock breakout time t and a set of 1024 Hyades simulations over a 6-dimensional input space (4 experimental variables x and 2 calibration parameters θ). This model provides posterior distributions for the calibration parameters, π(θ | T), as well as for the parameters in the Gaussian Process (GP) models of the emulator η_BO, the discrepancy function δ_BO, and the replication error ϵ_BO. If the discrepancy function is significant compared to measurement uncertainty, we would call this process “tuning,” but if the discrepancy is small (as in our case), we refer to this as calibration.

Next, we use the shock location field data at 13-16 ns, along with shock locations from 1024 CRASH simulations, to construct a model of the form z ≈ η_SL(x, θ) + δ_SL(x) + ϵ_SL, with θ now treated as an experimental rather than a calibration parameter, drawn from the posterior constructed in the previous step, so θ ~ π(θ | T). The x are drawn from distributions representing uncertainties in the experimental parameters. This second model is used to construct the emulator η_SL, its discrepancy δ_SL, and a best estimate of the replication error ϵ_SL. The discrepancy can be studied to understand the defects of the physics model. The result shows that our model tends to underpredict the shock location.
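Restating the abstract's two models in display form for readability (no new content beyond the text above):

```latex
% Stage 1: shock breakout time, calibrating theta against the breakout data T.
\[
  t \;\approx\; \eta_{\mathrm{BO}}(x,\theta) + \delta_{\mathrm{BO}}(x) + \epsilon_{\mathrm{BO}},
  \qquad \theta \mid T \;\sim\; \pi(\theta \mid T).
\]
% Stage 2: shock location at 13-16 ns, with theta drawn from the stage-1 posterior.
\[
  z \;\approx\; \eta_{\mathrm{SL}}(x,\theta) + \delta_{\mathrm{SL}}(x) + \epsilon_{\mathrm{SL}},
  \qquad \theta \;\sim\; \pi(\theta \mid T).
\]
```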

Finally, we can use η_SL(x, θ) + δ_SL(x) + ϵ_SL to predict the shock location at 20 and 26 ns, times at which we had simulations but no previous measurements. In doing so we can separate the code prediction η_SL(x, θ) and the uncertainty in this prediction (caused by uncertainty in θ, x, and the GP modeling parameters) from the uncertainty due to the discrepancy δ_SL(x). The uncertainty in the discrepancy is of course large, because we are extrapolating the discrepancy to a new region of input space. The uncertainty in the emulator η_SL(x, θ) is significantly smaller because there were simulation data in this region. Finally, comparison of the predictions with the field measurements at 20 and 26 ns shows that even the smaller predictive interval from the emulator alone contains the actual field measurements.

Authors: Bruce Fryxell and Members of the CRASH Team

Scalable Algorithms for Large-Scale Inverse Problems Under Uncertainty

Carsten Burstedde (University of Texas at Austin)

Abstract

We consider algorithms for inverse problems in seismic wave propagation with the goal of achieving large-scale parallel scalability. We present inexact Newton-Krylov iterative methods, where the Hessian is applied via the solution of forward and adjoint problems. These are solved in parallel using a discontinuous Galerkin method, where mesh adaptivity is applied to both the state and parameter fields. Finally, we link deterministic inversion to the Bayesian framework to quantify the uncertainty of the inversion, at the cost of a manageable number of forward and adjoint solves.
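A minimal sketch of the inexact Newton-Krylov idea in isolation (generic Python, not the seismic code): the Hessian is never formed, only applied to vectors, here by a placeholder routine standing in for one forward plus one adjoint solve, and the Newton step is solved approximately with matrix-free CG.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(9)

# Placeholder Hessian action: in the full problem each application costs one
# forward and one adjoint wave solve, and no matrix is ever assembled.
n = 200
L = rng.standard_normal((n, n)) / np.sqrt(n)
def hess_vec(v):
    return L.T @ (L @ v) + 0.1 * v           # Gauss-Newton term + regularization

g = rng.standard_normal(n)                    # gradient of the misfit at the current iterate

# Inexact Newton step: solve H p = -g only approximately with matrix-free CG;
# the capped iteration count is what makes the Newton step "inexact".
H = LinearOperator((n, n), matvec=hess_vec)
p, info = cg(H, -g, maxiter=50)
print(info, np.linalg.norm(hess_vec(p) + g))
```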

Model Order Reduction in Porous Media Flow

Eduardo Gildin (Texas A&M University)

Abstract

The development of efficient numerical reservoir simulation is an essential step in devising advanced production optimization strategies and uncertainty quantification methods applied to porous media flow. In this case, a highly accurate and detailed description of the underlying models leads to a solution of a set of partial differential equations which, after discretization, induce dynamical systems of very large dimensions. In order to overcome the computational costs associated with these large-scale models, several forms of model-order reduction have been proposed in the literature. In porous media flow, two different approaches are used: (1) a “coarsening” of the discretization grid in a process called upscaling and multiscale methods; and (2) a reduction in the number of state variables (i.e., pressure and saturations) directly in a process called approximation of dynamical systems. Recently, the idea of combining both approaches has been proposed using the multiscale formulation combined with balanced truncation.

In this talk, I will describe the model reduction methods in a systems framework and will show their applicability to mitigate the computational cost in optimization and uncertainty quantification. Several methods will be discussed in the linear and nonlinear settings and the connections to multiscale methods will be proposed.
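As a hedged illustration of one of the system-theoretic reduction methods mentioned above, here is a sketch of square-root balanced truncation for a small toy linear state-space model (not a reservoir simulator); the system matrices are random placeholders.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(7)

# Toy stable LTI system x' = A x + B u, y = C x (a stand-in for a linearized
# reservoir model whose state dimension would be far larger in practice).
n, k = 8, 3
A = rng.standard_normal((n, n))
A -= (np.abs(np.linalg.eigvals(A)).max() + 1.0) * np.eye(n)   # shift to ensure stability
B = rng.standard_normal((n, 2))
C = rng.standard_normal((2, n))

# Controllability and observability Gramians: A Wc + Wc A^T + B B^T = 0, etc.
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# Square-root balanced truncation: keep the k largest Hankel singular values.
Lc = np.linalg.cholesky((Wc + Wc.T) / 2.0)
Lo = np.linalg.cholesky((Wo + Wo.T) / 2.0)
U, s, Vt = np.linalg.svd(Lo.T @ Lc)
T = Lc @ Vt[:k].T / np.sqrt(s[:k])     # maps the reduced state to the full state
W = Lo @ U[:, :k] / np.sqrt(s[:k])     # left projector, with W.T @ T = I_k
Ar, Br, Cr = W.T @ A @ T, W.T @ B, C @ T
print(s)                               # Hankel singular values guide the choice of k
```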

A Dynamically Weighted Particle Filter for Sea Surface Temperature Prediction

Faming Liang  (Texas A&M University)

Abstract

In the climate system, the sea surface temperature (SST) is an important factor. An accurate understanding of the pattern of SST is essential for climate monitoring and prediction. We apply the dynamically weighted particle filter, which combines the radial basis function network and the dynamically weighted importance sampling algorithm, to analyze the SST in the Caribbean Islands area after a hurricane. The radial basis function network models the nonlinearity of SST, and dynamically weighted importance sampling prevents the particles from degenerating during the computation. In this study, we found that the hurricane disturbs the pattern of SST by mixing the ocean layers, while the dynamically weighted particle filter shows good prediction performance for SST at low computational cost. This is joint work with Duchwan Ryu and Bani Mallick.
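As a generic illustration of the filtering step only (a bootstrap particle filter on a toy scalar model, not the dynamically weighted filter or the radial basis function network of the study), the sketch below propagates particles, weights them by the observation likelihood, and resamples:

```python
import numpy as np

rng = np.random.default_rng(8)

def propagate(x):
    """Toy nonlinear state transition (a stand-in for the SST dynamics)."""
    return 0.9 * x + 2.0 * np.sin(0.5 * x) + 0.3 * rng.standard_normal(np.shape(x))

# Synthetic truth and noisy observations.
T, n_part, obs_std = 50, 500, 0.5
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = propagate(x_true[t - 1])
y = x_true + obs_std * rng.standard_normal(T)

# Bootstrap particle filter: propagate, weight by the likelihood, resample.
particles = rng.standard_normal(n_part)
estimates = np.zeros(T)
for t in range(T):
    particles = propagate(particles)
    w = np.exp(-0.5 * ((y[t] - particles) / obs_std) ** 2)
    w /= w.sum()
    estimates[t] = np.sum(w * particles)
    particles = rng.choice(particles, size=n_part, p=w)   # multinomial resampling

print(np.sqrt(np.mean((estimates - x_true) ** 2)))        # filter RMSE
```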