IRTG Modern Inverse Problems (MIP)

Category: 'Research Topics'

Model-Controlled Bayesian Inversion for Geophysical Inverse Problems

October 9, 2018 | by

Prof. Florian Wellmann, Ph.D.

This sub-project will be advised jointly by Prof. Florian Wellmann at RWTH Aachen and Prof. Omar Ghattas at UT Austin. In the context of inverse problems, the field of geosciences provides formidable challenges: the parameter space is large, the applied mathematical models are highly complex, and the available data are of diverse quality. We seek an outstanding candidate to investigate novel geological modeling approaches to address challenging geophysical inverse problems in subsurface applications, for example groundwater and geothermal exploration.

The specific aims of this project are:
• Develop novel schemes for combined geological modeling and geophysical inversion
• Address challenging geophysical calibration and inverse problems
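
As a caricature of Bayesian inversion, the Python sketch below samples the posterior of a single subsurface parameter with random-walk Metropolis; the forward model, data, and noise level are illustrative stand-ins for a real geophysical simulator.

    import numpy as np

    # Toy forward model standing in for a full geophysical simulator
    # (hypothetical): maps a subsurface parameter m to two predicted data values.
    def forward(m):
        return np.array([np.exp(-m), m ** 2])

    data = np.array([0.6, 0.3])   # synthetic "observed" data
    sigma = 0.05                  # assumed noise standard deviation

    def log_posterior(m):
        misfit = (forward(m) - data) / sigma
        # Gaussian likelihood plus a standard normal prior on m.
        return -0.5 * misfit @ misfit - 0.5 * m ** 2

    # Random-walk Metropolis: the simplest posterior sampler.
    rng = np.random.default_rng(0)
    m, samples = 0.0, []
    for _ in range(5000):
        m_new = m + 0.1 * rng.standard_normal()
        if np.log(rng.uniform()) < log_posterior(m_new) - log_posterior(m):
            m = m_new
        samples.append(m)
    print("posterior mean:", np.mean(samples[1000:]))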

Model Order Reduction for Goal-Oriented Bayesian Inversion

October 9, 2018 | by

Prof. Karen Veroy-Grepl, Ph.D.

In this project, we focus on goal-oriented Bayesian inversion of problems governed by partial differential equations, particularly for applications with high-dimensional parameter spaces. The solution of the discretized inverse problem is often prohibitively expensive due to the need to solve the forward problem numerous times. We thus build on our expertise in Bayesian inversion for large-scale systems and in model order reduction to investigate the use of model order reduction methods to accelerate the solution of Bayesian inverse problems. We intend to use the reduced-basis method and trust region methods to reduce the computational cost in problems with high-dimensional parameter spaces.

The aims of this project are:
• Reduction of the computational cost to solve Bayesian inverse problems governed by partial differential equations. 
• Quantification of the uncertainty caused by the use of surrogate reduced-order models. 
• Application to large-scale inverse problems with high-dimensional parameter spaces, for instance in the geosciences.
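
To illustrate the surrogate idea, the sketch below is a generic proper orthogonal decomposition (POD)/reduced-basis example rather than the project's trust-region methodology: snapshots of a parameterized forward problem are compressed by an SVD, and subsequent solves are projected onto the reduced space.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    A0 = np.diag(np.linspace(1.0, 10.0, n))      # toy parameterized operator
    A1 = rng.standard_normal((n, n))
    A1 = 0.01 * (A1 + A1.T)                      # small symmetric perturbation
    b = np.ones(n)

    def solve_full(mu):
        return np.linalg.solve(A0 + mu * A1, b)

    # Offline stage: snapshots at training parameters, compressed by SVD (POD).
    snapshots = np.column_stack([solve_full(mu) for mu in np.linspace(0, 1, 20)])
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    V = U[:, :5]                                 # 5 reduced-basis vectors

    def solve_reduced(mu):
        # Online stage: Galerkin projection, reducing an n x n solve to 5 x 5.
        Ar = V.T @ (A0 + mu * A1) @ V
        return V @ np.linalg.solve(Ar, V.T @ b)

    mu = 0.37
    err = np.linalg.norm(solve_full(mu) - solve_reduced(mu))
    print("relative error of reduced solve:", err / np.linalg.norm(solve_full(mu)))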

Numerical Reconstruction Techniques for the Boltzmann Equation

October 9, 2018 | by

Prof. Dr. Manuel Torrilhon

Rarefied gas flows, or equivalently flows in microscopic settings, are one of the most prominent examples of flow processes where classical models of thermodynamics fail and enhanced non-equilibrium models become mandatory for simulations. This requires considering the probability density of the particle velocities in the context of kinetic gas theory and the Boltzmann equation. This project has several aims.
There is a need to develop a hybrid reconstruction technique for the probability density that combines moment models, discrete velocity schemes, and direct simulation Monte Carlo, switching between the methods wherever each is optimal. Another related challenge is the efficient evaluation of the Boltzmann collision operator. These aspects must be combined in numerical experiments based on the new technique for near-vacuum and supersonic flows.
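
As a minimal illustration of the kinetic description, the sketch below evaluates the first moments of a discrete-velocity distribution; the 1D Maxwellian and its parameters are chosen purely for demonstration.

    import numpy as np

    # 1D discrete velocity grid and a Maxwellian test density (illustrative).
    v = np.linspace(-8.0, 8.0, 401)
    dv = v[1] - v[0]
    rho, u, theta = 1.0, 0.5, 1.2   # density, bulk velocity, temperature
    f = rho / np.sqrt(2 * np.pi * theta) * np.exp(-(v - u) ** 2 / (2 * theta))

    # Moment models evolve weighted integrals of f instead of f itself.
    m0 = np.sum(f) * dv                        # recovers rho
    m1 = np.sum(v * f) * dv / m0               # recovers u
    m2 = np.sum((v - m1) ** 2 * f) * dv / m0   # recovers theta (1D)
    print(m0, m1, m2)   # ~ (1.0, 0.5, 1.2)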

Constitutive Reconstruction for Evolving Surfaces

October 9, 2018 | by

Prof. Roger A. Sauer, Ph.D.

The aim of this research project is to develop a computational framework for the reconstruction of material models for deforming and evolving surfaces.
This is done in the framework of nonlinear finite elements and then applied to challenging problems from science and technology.
The work will build on recent results on the reconstruction of loads acting on deforming shells.
The collaboration partners at UT Austin are Chad Landis and Thomas J.R. Hughes.

Methods for Demand-Side-Management in Process and Chemical Industry

October 9, 2018 | by

Prof. Alexander Mitsos, Ph.D.

The shift from fossil-based to renewable energy sources brings with it fluctuations in energy supply. At the same time, energy demand also varies strongly in time. As a consequence, it is not sufficient to provide energy; rather, it is necessary to match the times of supply and demand. Options to do so include dispatchable power sources, energy storage, and demand-side management. The process industry consumes large amounts of energy, and demand-side management in that industry can thus contribute substantially to managing this time variability.
However, enabling such a shift in the process industry requires new computational methods. In particular, it has been recognized that design & operation on the one hand and scheduling & control on the other must be merged. These subdisciplines of process systems engineering (PSE) are traditionally considered separately. Moreover, handling uncertainty in both demand and supply adds a further challenge.
Both AVT.SVT at RWTH and the labs of Baldea and Edgar at UT Austin have been among the first groups to consider this challenge and to propose both methodologies and novel technological options. The purpose of the joint project is to combine our methodologies and, in collaboration with other CES groups, to overcome the associated computational challenges.
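
The following toy linear program, with purely illustrative numbers and none of the design/scheduling/control coupling targeted by the project, shows the core demand-side-management mechanism: shifting a flexible load toward hours of cheap (renewable-rich) energy.

    import numpy as np
    from scipy.optimize import linprog

    # Toy demand-side management: schedule a flexible process load over 24 h so
    # that a required daily energy is met at minimum cost under a time-varying
    # price that mimics fluctuating renewable supply. All numbers illustrative.
    hours = 24
    price = 30 + 20 * np.sin(np.linspace(0, 2 * np.pi, hours))   # EUR/MWh
    total_energy = 120.0                                          # MWh per day
    p_min, p_max = 2.0, 10.0                                      # MW load bounds

    res = linprog(
        c=price,                                     # minimize sum(price * load)
        A_eq=np.ones((1, hours)), b_eq=[total_energy],
        bounds=[(p_min, p_max)] * hours,
    )
    print("optimal hourly load [MW]:", np.round(res.x, 2))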

Metric-Based Anisotropic Adaptation for Optimal Petrov-Galerkin Methods

October 9, 2018 | by

Prof. Georg May, Ph.D.

Certain Petrov-Galerkin methods (optimal PG schemes) deliver inherently stable formulations of variational problems on a given mesh by selecting appropriate pairs of trial and test spaces. Depending on the choice of norms, one may obtain quasi-optimal and robust schemes, e.g. for singularly perturbed problems, with stability constants independent of the perturbation parameter.

Optimal PG schemes can be interpreted as minimal residual methods, minimizing the residual in the dual norm (of the test space). In fact, some schemes compute the Riesz representation of the dual residual as part of the solution methodology. In this sense, these optimal Petrov-Galerkin schemes come with a built-in error estimate. Adaptation has therefore been traditionally strongly linked with optimal PG schemes.
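
In discrete terms, this minimal-residual structure fits in a few lines of Python; the operator, load, and test-space Gram matrix below are random stand-ins rather than an actual Petrov-Galerkin discretization.

    import numpy as np

    rng = np.random.default_rng(2)
    m, n = 40, 10                     # enriched test space, smaller trial space
    B = rng.standard_normal((m, n))   # stand-in for the discretized operator
    f = rng.standard_normal(m)        # stand-in for the load functional
    C = rng.standard_normal((m, m))
    G = C @ C.T + np.eye(m)           # SPD Gram matrix of the test inner product

    # Minimal residual in the dual (test) norm:
    #   u = argmin (B u - f)^T G^{-1} (B u - f)
    Ginv_B = np.linalg.solve(G, B)
    u = np.linalg.solve(B.T @ Ginv_B, Ginv_B.T @ f)

    # The Riesz representation of the residual doubles as an error estimate.
    r = np.linalg.solve(G, B @ u - f)
    print("built-in error estimate:", np.sqrt(r @ G @ r))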

Conventionally, adaptation means locally refining the given discrete mesh. A step forward would be the global optimization of the mesh with respect to the inherent error information coming from the optimal Petrov-Galerkin framework. In this context, metric-based continuous mesh models are ideal candidates. A metric-conforming mesh is a triangulation whose elements are (nearly) equilateral under the Riemannian metric induced by the continuous mesh model. (Analytic) optimization techniques, based on calculus of variations, may be applied to the metric field, rather than the discrete mesh. The task of producing a mesh is thus deferred until an optimized metric is available.
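
A small sketch of the metric viewpoint, with an illustrative anisotropic metric: edge lengths are measured in the Riemannian metric, and a metric-conforming mesh is one whose edges have (near-)unit metric length.

    import numpy as np

    # Illustrative anisotropic metric (uniform here): requests target element
    # sizes of 0.5 along x and 0.05 along y.
    def metric(x, y):
        return np.diag([1.0 / 0.5 ** 2, 1.0 / 0.05 ** 2])

    def metric_length(p, q, quad=8):
        # Edge length under the Riemannian metric, by midpoint quadrature.
        p, e = np.asarray(p, float), np.asarray(q, float) - np.asarray(p, float)
        ts = (np.arange(quad) + 0.5) / quad
        return sum(np.sqrt(e @ metric(*(p + t * e)) @ e) for t in ts) / quad

    # A metric-conforming mesh has edges of (near-)unit metric length:
    print(metric_length((0, 0), (0.5, 0)))   # 1.0: conforming along x
    print(metric_length((0, 0), (0, 0.5)))   # 10.0: must be refined along y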

The error models driving previous continuous-mesh optimization methods need to be adapted to the error representation coming directly from the optimal PG methodology. The goal is to formulate the correct continuous-mesh error models for the optimal Petrov-Galerkin methodology. Nontrivial applications, such as highly convection-dominated problems, will be considered to validate the concept.

Boundary Conforming Smooth Spline Spaces for Isogeometric Analysis

October 9, 2018 | by

Prof. Dr. Leif Kobbelt

The generation, adaptation, and modification of digital 3D models is an essential prerequisite for many steps in a simulation task. Hence it is of fundamental importance to have efficient algorithms that automatically optimize geometry representations for a given set of requirements. The typical simulation workflow is based on a number of different geometry representations that are considered most suitable for the various stages, e.g. NURBS B-rep representations for design or polygon meshes for numerical analysis. Consequently, data conversion steps are necessary, which often make the overall process inefficient and error-prone.

In this project, we will investigate to what extent design representations can be used for analysis ("isogeometric analysis") or, vice versa, analysis representations can be used for design ("geometry processing").
Towards this goal, we need to develop new techniques for the analysis of the geometric structure of a 3D object such that we can (automatically) derive a layout on which smooth spline spaces can be defined. This is a (very costly) step that is typically done manually by CAD designers today. Algorithmic solutions enable the automation of this step and its integration into numerics-driven (shape) optimization processes.
For this project, it would be helpful to have some background in differential geometry and numerical optimization, but also some experience in C++ programming.
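
As a small, generic taste of the spline machinery (a plain Cox-de Boor evaluation, sketched in Python for brevity rather than in the project's C++ setting):

    def bspline_basis(i, p, knots, x):
        # Cox-de Boor recursion for the i-th B-spline basis function of degree p.
        if p == 0:
            return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
        val = 0.0
        if knots[i + p] > knots[i]:
            val += (x - knots[i]) / (knots[i + p] - knots[i]) \
                   * bspline_basis(i, p - 1, knots, x)
        if knots[i + p + 1] > knots[i + 1]:
            val += (knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1]) \
                   * bspline_basis(i + 1, p - 1, knots, x)
        return val

    # Cubic basis on an open knot vector: C^2-smooth across interior knots.
    knots = [0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4]
    vals = [bspline_basis(i, 3, knots, 1.5) for i in range(7)]
    print(vals, "sum =", sum(vals))   # partition of unity: the values sum to 1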

Modeling and Simulation of Solidification with Isogeometric Interface Tracking Methods

October 9, 2018 | by

Prof. Dr. Stefanie Elgeti

The proposed project originates in the field of production engineering—or more specifically—primary manufacturing processes. The underlying principle of these processes is that material is brought into a liquid state in order to make it shapable. This shape then needs to be maintained throughout a subsequent solidification process, which returns the material to a solid state and makes the product usable under standard environmental conditions. The state of the material—here liquid or solid—depends on process conditions, such as temperature, pressure, etc. The transition from a liquid to a solid phase—the solidification—is a spontaneous process initiated by heat extraction (cooling). The process is locally initiated by nucleation and proceeds by movement of phase boundaries. During solidification, shrinkage and warpage may occur, leading to negative effects on the final product’s quality. In order to gain a better understanding of these effects, this project focuses on modeling and simulation of the solidification process. It covers the following aims:

Aim 1: Derivation of a nucleation model as well as definition of a tracking method for the phase interface within an isogeometric context;

Aim 2: Evaluation of properties resulting from solidification—examples are shrinkage, warpage, residual stresses—that are relevant to engineering applications.

Within the project, semi-crystalline polymers in an injection molding process will serve as a working example.
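
As a strongly simplified illustration of tracking a moving phase boundary, the sketch below advances a 1D solidification front under a quasi-steady Stefan condition; the material parameters are illustrative, and none of the project's isogeometric machinery is represented.

    import numpy as np

    # Quasi-steady one-phase Stefan problem (a strong simplification): a solid
    # layer grows from a cooled wall into melt held at melting temperature.
    k, rho, L = 0.2, 1000.0, 2.0e5   # conductivity, density, latent heat
    dT = 30.0                        # T_melt - T_wall (illustrative)

    dt, t_end = 1.0, 3600.0
    s = 1e-4                         # initial front position [m]
    for _ in range(int(t_end / dt)):
        # Stefan condition: front speed is set by the heat flux conducted
        # through the solid layer, q = k * dT / s (linear temperature profile).
        s += dt * k * dT / (rho * L * s)

    print(f"front after 1 h: {s:.4f} m")
    print(f"analytic:        {np.sqrt(2 * k * dT * t_end / (rho * L)):.4f} m")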

Model-Based Generation of Linear Algebra Software

October 9, 2018 | by

Prof. Paolo Bientinesi, Ph.D.

This project tackles the automatic optimization of linear algebra operations that represent the computational bottleneck in scientific simulations and in data analyses.

The PIs have extensive experience in parallel computing and in the development of linear algebra kernels, as well as in performance models and automation; the combined expertise will make it possible to streamline the generation of high-performance algorithms and code, tailored towards specific applications and targeting existing and upcoming computing architectures. The methodology builds on stochastic performance models and formal derivation methods, and will be applied to selected operations, such as those arising in global optimization.
Concisely, the project has three intertwined aims.

Aim 1: Identification and development of a set of high-performance low-level kernels, sufficient to support the set of target operations.

Aim 2: Derivation of performance and scalability models for the building blocks, with quantification of the associated uncertainties.

Aim 3: “Performance model”-based decomposition of high-level operations in terms of the available building blocks.

In sharp contrast to typical library development, our approach follows a reverse order, akin to an inverse problem: given a known target functionality, the objective is to identify the composition of kernels that minimizes a cost function.
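
A minimal instance of this inverse view is selecting the parenthesization of a matrix chain with a FLOP cost model: given the target product, choose the composition of GEMM kernel calls that minimizes predicted cost. The shapes below are illustrative.

    from functools import lru_cache

    # Target operation: A1 @ A2 @ A3 @ A4 with illustrative shapes;
    # Ai has shape dims[i-1] x dims[i].
    dims = [1000, 10, 500, 5, 2000]

    @lru_cache(maxsize=None)
    def best(i, j):
        # Minimal GEMM cost (2*m*k*n flops) to multiply matrices i..j, plus plan.
        if i == j:
            return 0, f"A{i}"
        return min(
            (best(i, k)[0] + best(k + 1, j)[0]
             + 2 * dims[i - 1] * dims[k] * dims[j],
             f"({best(i, k)[1]} x {best(k + 1, j)[1]})")
            for k in range(i, j)
        )

    cost, plan = best(1, 4)
    print(plan, f"costs {cost:.2e} flops")   # cost-optimal kernel composition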

Computational Tools for Chemical Imaging

October 9, 2018 | by

Prof. Benjamin Berkels

The project will advance the state of the art in segmentation and unmixing of data from chemical imaging, which is an umbrella term for image acquisition techniques that record a full spectral band of data at each pixel. A major challenge for the analysis of hyperspectral data is its huge size and high dimensionality.

Segmentation of hyperspectral data is the task of partitioning the image domain into disjoint sub-regions based on a suitable notion of spectral homogeneity, i.e. assigning a class label to each pixel based on its spectral vector data. For instance, for hyperspectral satellite images, such classes could be soil, water, grass, street, etc. A more general task than segmentation is called unmixing. Instead of belonging to just one class, unmixing considers each pixel to be a mixture of a number of constituents. Unmixing then needs to determine both the mixture ratios and the constituents. Nowadays, numerous hyperspectral image modalities are available, which makes this research relevant for a wide range of applications. One considered application will be cancer treatment, where Fourier transform infrared spectroscopy data of human tissue samples needs to be classified into classes like “highly cancerous”, “mildly cancerous” or “normal”.
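
As a minimal sketch of unmixing (plain, not yet hierarchical, matrix factorization), the example below factorizes synthetic pixel spectra into nonnegative abundances and endmember spectra with Lee-Seung multiplicative updates; all data are synthetic.

    import numpy as np

    # Unmixing as nonnegative matrix factorization: pixel spectra X (pixels x
    # bands) are approximated by abundances A (pixels x r) times endmember
    # spectra S (r x bands).
    rng = np.random.default_rng(3)
    true_S = rng.random((3, 50))                   # 3 constituent spectra
    true_A = rng.dirichlet(np.ones(3), size=400)   # mixture ratios per pixel
    X = true_A @ true_S + 0.01 * rng.random((400, 50))

    r = 3
    A, S = rng.random((400, r)), rng.random((r, 50))
    for _ in range(300):
        # Lee-Seung multiplicative updates for the Frobenius-norm objective;
        # they preserve nonnegativity of both factors.
        A *= (X @ S.T) / (A @ S @ S.T + 1e-12)
        S *= (A.T @ X) / (A.T @ A @ S + 1e-12)

    print("relative error:", np.linalg.norm(X - A @ S) / np.linalg.norm(X))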

The main aims of this project are:
• Robust segmentation of hyperspectral data with large intra-class variation;
• Unmixing of mixture-of-mixtures data using hierarchical matrix factorization.