## Department of Mathematics (Fachbereich Mathematik)

### Refine

#### Document Type

- Preprint (526)
- Doctoral Thesis (156)
- Report (35)
- Article (23)
- Diploma Thesis (20)
- Lecture (6)
- Study Thesis (2)
- Working Paper (2)
- Bachelor Thesis (1)
- Periodical (1)

#### Language

- English (772)

#### Keywords

- Wavelet (12)
- Inverse Problem (10)
- Multiscale Analysis (8)
- Boltzmann Equation (7)
- Location Theory (7)
- Approximation (6)
- Navier-Stokes Equation (6)
- Elastoplasticity (5)
- Numerical Simulation (5)
- Algebraic Geometry (4)

- A Bicriteria Approach to Robust Optimization (2014)
- The classic approach in robust optimization is to optimize the solution with respect to the worst-case scenario. This pessimistic approach yields solutions that perform best if the worst scenario happens, but usually perform badly on average. A solution that optimizes the average performance, on the other hand, lacks a worst-case performance guarantee. In practice it is important to find a good compromise between these two solutions. We propose to deal with this problem by considering it from a bicriteria perspective. The Pareto curve of the bicriteria problem visualizes exactly how costly it is to ensure robustness and helps to choose the solution with the best balance between expected and guaranteed performance. Building upon a theoretical observation on the structure of Pareto solutions for problems with polyhedral feasible sets, we present a column generation approach that requires no direct solution of the computationally expensive worst-case problem. In computational experiments we demonstrate the effectiveness of both the proposed algorithm and the bicriteria perspective in general.
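
To make the average/worst-case trade-off concrete, here is a minimal sketch on a hypothetical toy instance (made-up data and brute-force enumeration, not the paper's column generation method): it evaluates each solution under several scenarios and keeps the Pareto-optimal trade-offs.

```python
# Toy illustration of the bicriteria view of robust optimization:
# pick 2 of 4 items, evaluate average and worst-case cost over scenarios,
# and keep the non-dominated (Pareto-optimal) solutions.
import itertools

# costs[s][j]: cost of item j under scenario s (hypothetical numbers)
costs = [
    [1, 1, 3, 3],  # scenario 0
    [4, 4, 3, 3],  # scenario 1
]

def evaluate(solution):
    """Return (average cost, worst-case cost) of a subset of items."""
    totals = [sum(row[j] for j in solution) for row in costs]
    return sum(totals) / len(totals), max(totals)

candidates = list(itertools.combinations(range(4), 2))
points = [(evaluate(sol), sol) for sol in candidates]

# a point is Pareto-optimal if no other point is at least as good in
# both criteria and strictly different
pareto = [
    (p, sol) for p, sol in points
    if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q, _ in points)
]
for (avg, worst), sol in sorted(pareto):
    print(f"items {sol}: average {avg:.1f}, worst case {worst}")
```

On this toy instance the Pareto curve runs from a solution that is cheap on average but fragile, through compromises, to a fully robust one.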

- Robust Geometric Programming is co-NP hard (2014)
- Geometric Programming is a useful tool with a wide range of applications in engineering. Since input data in real-world problems is likely to be affected by uncertainty, Hsiung, Kim, and Boyd introduced robust geometric programming to incorporate the uncertainty into the optimization process. They also developed a tractable approximation method to tackle this problem. Further, they posed the question of whether there exists a tractable reformulation of their robust geometric programming model instead of only an approximation method. We give a negative answer to this question by showing that robust geometric programming is co-NP hard in its natural posynomial form.
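
For reference, the following sketch recalls the standard posynomial form of a geometric program and its robust counterpart (a textbook-style formulation, not reproduced from the paper):

```latex
% A geometric program in posynomial form:
%   minimize f_0(x)  subject to  f_i(x) <= 1 (i = 1,...,m),  x > 0,
% where each f_i is a posynomial with positive coefficients:
\[
  f_i(x) \;=\; \sum_{k=1}^{K_i} c_{ik}\,
    x_1^{a_{ik1}} x_2^{a_{ik2}} \cdots x_n^{a_{ikn}},
  \qquad c_{ik} > 0 .
\]
% The robust version asks each constraint to hold for every parameter
% realization u in an uncertainty set U:
\[
  \sup_{u \in \mathcal{U}} f_i(x; u) \;\le\; 1,
  \qquad i = 1, \dots, m .
\]
```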

- Numerical solution of a nonstandard Darcy flow model (1999)
- We consider a Darcy flow model in which the saturation-pressure relation is extended by a dynamic term, namely the time derivative of the saturation. This model was proposed in works of J. Hulshof and J. R. King (1998), S. M. Hassanizadeh and W. G. Gray (1993), and F. Stauffer (1978). We restrict ourselves to one spatial dimension and strictly positive initial saturation. For this case we transform the initial-boundary value problem into a combination of an elliptic boundary value problem and an initial value problem for an abstract ordinary differential equation. This splitting is helpful both for the theoretical analysis and for numerical methods.
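
The structure of the model and of the splitting can be sketched as follows (a schematic one-dimensional form with a generic mobility \(k\) and dynamic coefficient \(\tau\); sign conventions and coefficients vary in the literature and are not taken from the paper):

```latex
% Darcy flow with a dynamic (rate-dependent) saturation-pressure relation:
\[
  \partial_t S \;=\; \partial_x \bigl( k(S)\, \partial_x p \bigr),
  \qquad
  p \;=\; p_c(S) + \tau\, \partial_t S ,
\]
% where S is the saturation, p the pressure, p_c the static capillary
% pressure curve, and tau > 0 weights the dynamic term.
% Writing u := \partial_t S and substituting the pressure relation gives,
% for fixed S, the elliptic boundary value problem
\[
  u \;=\; \partial_x \Bigl( k(S)\, \partial_x \bigl( p_c(S) + \tau u \bigr) \Bigr),
\]
% and the saturation then evolves by the abstract ODE  dS/dt = u(S).
```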

- Sink Location to Find Optimal Shelters in Evacuation Planning (2014)
- The sink location problem is a combination of network flow and location problems: from a given set of nodes in a flow network, a minimum-cost subset \(W\) has to be selected such that given supplies can be transported to the nodes in \(W\). In contrast to its counterpart, the source location problem, which has already been studied in the literature, sinks in general have a limited capacity. Sink location has a decisive application in evacuation planning, where the supplies correspond to the number of evacuees and the sinks to emergency shelters. We classify sink location problems according to capacities on shelter nodes, simultaneous or non-simultaneous flows, and single or multiple assignments of evacuee groups to shelters. The resulting combinations are interpreted in the evacuation context and analyzed with respect to their worst-case complexity status. There are several approaches to tackle these problems: generic solution methods for uncapacitated problems are based on source location and modifications of the network. In the capacitated case, for which source location cannot be applied, we suggest alternative approaches that work in the original network. It turns out that the latter class of algorithms is superior to the former. This is established in numerical tests including random data as well as real-world data from the city of Kaiserslautern, Germany.
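
The feasibility core of the capacitated problem can be illustrated with a small brute-force sketch (hypothetical network data; the paper's algorithms are far more refined): a shelter subset \(W\) is feasible if a maximum flow can route all supplies into \(W\).

```python
# Brute-force sketch of capacitated sink location: find the cheapest
# shelter subset W such that all evacuees can reach W simultaneously,
# checked with a max-flow computation on an augmented network.
import itertools
import networkx as nx

# hypothetical evacuation network: directed arcs with capacities
arcs = [("a", "b", 8), ("a", "c", 5), ("b", "c", 3), ("b", "d", 6), ("c", "d", 4)]
supply = {"a": 9}                      # evacuees starting at each node
shelters = {"c": (4, 1), "d": (7, 2)}  # shelter node -> (capacity, cost)

def feasible(W):
    """Can all evacuees reach the shelters in W simultaneously?"""
    G = nx.DiGraph()
    for u, v, cap in arcs:
        G.add_edge(u, v, capacity=cap)
    for v, s in supply.items():
        G.add_edge("SRC", v, capacity=s)               # super-source feeds supplies
    for w in W:
        G.add_edge(w, "SNK", capacity=shelters[w][0])  # shelter capacity
    value, _ = nx.maximum_flow(G, "SRC", "SNK")
    return value == sum(supply.values())

# cheapest shelter subset that admits a feasible flow
best = min(
    (W for r in range(1, len(shelters) + 1)
     for W in itertools.combinations(shelters, r)
     if feasible(W)),
    key=lambda W: sum(shelters[w][1] for w in W),
    default=None,
)
print("cheapest feasible shelter set:", best)
```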

- A coverage-based Box-Algorithm to compute a representation for optimization problems with three objective functions (2014)
- A new algorithm for optimization problems with three objective functions is presented which computes a representation of the set of nondominated points. This representation is guaranteed to have a desired coverage error, and a bound on the number of iterations the algorithm needs to meet this coverage error is derived. Since the representation need not contain only nondominated points, ideas for calculating bounds on the representation error are given. Moreover, the incorporation of domination during the algorithm and other quality measures are discussed.
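
The coverage-error notion can be stated in a few lines (toy data; the paper's contribution is an algorithm that constructs the representation with a guaranteed bound, which is not reproduced here):

```python
# Coverage error of a representation R with respect to a set N of
# nondominated points: the largest distance from any point of N to its
# nearest representative in R.

def dist(p, q):
    """Euclidean distance in objective space."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def coverage_error(N, R):
    return max(min(dist(n, r) for r in R) for n in N)

# toy tri-objective nondominated points N and a candidate representation R
N = [(1.0, 2.0, 3.0), (2.0, 1.5, 2.5), (3.0, 1.0, 2.0), (2.5, 2.5, 1.0)]
R = [(1.0, 2.0, 3.0), (3.0, 1.0, 2.0)]
print(f"coverage error: {coverage_error(N, R):.3f}")
```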

- Alternative Formulations for the Ordered Weighted Averaging Objective (2014)
- The ordered weighted averaging objective (OWA) is an aggregate function over multiple optimization criteria which has received increasing attention from the research community over the last decade. In contrast to an ordinary weighted sum, the weights are attached to the ordered objective values (i.e., one weight for the largest value, one for the second-largest value, and so on). As this contains max-min or worst-case optimization as a special case, OWA can also be considered an alternative approach to robust optimization. For linear programs with an OWA objective, compact reformulations exist, which result in extended linear programs. We present new reformulation models of reduced size. A computational comparison indicates that these formulations improve solution times.
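
The aggregation itself is easy to state in code (a minimal sketch; the paper's contribution, compact LP reformulations, is not reproduced here):

```python
def owa(values, weights):
    """Ordered weighted average: weights are applied to the values sorted
    in non-increasing order (weights[0] hits the largest value)."""
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

objectives = [4.0, 9.0, 6.0]
print(owa(objectives, [1.0, 0.0, 0.0]))  # 9.0   -> pure worst-case (max)
print(owa(objectives, [1/3, 1/3, 1/3]))  # ~6.33 -> plain average
print(owa(objectives, [0.5, 0.3, 0.2]))  # 7.1   -> a compromise in between
```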

- On The Recoverable Robust Traveling Salesman Problem (2014)
- We consider an uncertain traveling salesman problem, where distances between nodes are not known exactly but may stem from an uncertainty set of possible scenarios. This uncertainty set is given as intervals with an additional bound on the number of distances that may deviate from their expected, nominal values. A recoverable robust model is proposed that allows a tour to change a bounded number of edges once a scenario becomes known. As the model contains an exponential number of constraints and variables, an iterative algorithm is proposed in which tours and scenarios are computed alternately. While this approach is able to find a provably optimal solution to the robust model, it needs to solve increasingly complex subproblems. Therefore, we also consider heuristic solution procedures based on local search moves using a heuristic estimate of the actual objective function. In computational experiments these approaches are compared. Finally, an alternative recovery model is discussed, where a second-stage recovery tour is not required to visit all nodes of the graph. We show that the previously NP-hard evaluation of a fixed solution now becomes solvable in polynomial time.
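
For intuition, the adversary's problem for a *fixed* tour under this interval-plus-budget uncertainty reduces to picking the largest deviations along the tour. A minimal sketch (hypothetical instance, and without the recovery step that distinguishes the paper's model):

```python
def worst_case_length(tour, nominal, deviation, gamma):
    """Worst-case length of a fixed tour when an adversary may raise at
    most `gamma` edges from their nominal to their maximal distance."""
    edges = list(zip(tour, tour[1:] + tour[:1]))
    base = sum(nominal[e] for e in edges)
    worst = sorted((deviation[e] for e in edges), reverse=True)[:gamma]
    return base + sum(worst)

# hypothetical symmetric 4-node instance: (nominal distance, max deviation)
data = {("a", "b"): (3, 2), ("b", "c"): (4, 1), ("c", "d"): (2, 3), ("d", "a"): (5, 1)}
nominal, deviation = {}, {}
for (u, v), (nom, dev) in data.items():
    nominal[(u, v)] = nominal[(v, u)] = nom
    deviation[(u, v)] = deviation[(v, u)] = dev

print(worst_case_length(["a", "b", "c", "d"], nominal, deviation, gamma=2))
# nominal 3+4+2+5 = 14, two largest deviations 3+2 = 5  ->  19
```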

- Optimization Models to Enhance Resilience in Evacuation Planning (2014)
- We argue that the concepts of resilience in engineering science and robustness in mathematical optimization are strongly related. Using evacuation planning as an example application, we demonstrate optimization techniques to improve solution resilience. These include a direct modelling of the uncertainty for stochastic or robust optimization, as well as taking multiple objective functions into account.

- Effective equations for anisotropic glioma spread with proliferation: a multiscale approach (2014)
- Glioma is a common type of primary brain tumor with strong invasive potential, often exhibiting nonuniform, highly irregular growth. This makes it difficult to assess the extent of the tumor and poses an additional challenge for treatment. It is therefore necessary to understand the migratory behavior of glioma in greater detail. In this paper we propose a multiscale model for glioma growth and migration. Our model couples the microscale dynamics (reduced to the binding of surface receptors to the surrounding tissue) with a kinetic transport equation for the cell density on the mesoscopic level of individual cells. On the latter scale we also include the proliferation of tumor cells via effects of interaction with the tissue. An adequate parabolic scaling yields a convection-diffusion-reaction equation whose coefficients can be explicitly determined from the information about the tissue obtained by diffusion tensor imaging (DTI). Numerical simulations relying on DTI measurements confirm the biological finding that glioma spreads along white matter tracts.
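
Schematically, such a macroscopic limit has the following generic shape (the precise coefficients in the paper are derived from the mesoscopic model and DTI data; this is only the standard form of a convection-diffusion-reaction equation, not copied from the paper):

```latex
% Generic convection-diffusion-reaction equation for the macroscopic
% tumor cell density c(t,x):
\[
  \partial_t c \;=\; \nabla\!\cdot\!\bigl( \mathbb{D}(x)\,\nabla c \bigr)
  \;-\; \nabla\!\cdot\!\bigl( \mathbf{a}(x)\, c \bigr)
  \;+\; \mu(x)\, c ,
\]
% with D(x) the tumor diffusion tensor and a(x) the drift (both computable
% from the water diffusion tensor measured by DTI), and mu(x) the net
% proliferation rate.
```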

- Variance Reduction Procedures for Market Risk Estimation (2014)
- Monte Carlo simulation is one of the commonly used methods for risk estimation on financial markets, especially for option portfolios, where any analytical approximation is usually too inaccurate. However, the usually high computational effort for complex portfolios with a large number of underlying assets motivates the application of variance reduction procedures. Variance reduction for estimating the probability of high portfolio losses has been extensively studied by Glasserman et al. A great variance reduction is achieved by applying an exponential twisting importance sampling algorithm together with stratification. The popular and much faster Delta-Gamma approximation replaces the portfolio loss function in order to guide the choice of the importance sampling density and it plays the role of the stratification variable. The main disadvantage of the proposed algorithm is that it is derived only in the case of Gaussian and some heavy-tailed changes in risk factors. Hence, our main goal is to keep the main advantage of the Monte Carlo simulation, namely its ability to perform a simulation under alternative assumptions on the distribution of the changes in risk factors, also in the variance reduction algorithms. Step by step, we construct new variance reduction techniques for estimating the probability of high portfolio losses. They are based on the idea of the Cross-Entropy importance sampling procedure. More precisely, the importance sampling density is chosen as the closest one to the optimal importance sampling density (zero variance estimator) out of some parametric family of densities with respect to Kullback - Leibler cross-entropy. Our algorithms are based on the special choices of the parametric family and can now use any approximation of the portfolio loss function. A special stratification is developed, so that any approximation of the portfolio loss function under any assumption of the distribution of the risk factors can be used. The constructed algorithms can easily be applied for any distribution of risk factors, no matter if light- or heavy-tailed. The numerical study exhibits a greater variance reduction than of the algorithm from Glasserman et al. The use of a better approximation may improve the performance of our algorithms significantly, as it is shown in the numerical study. The literature on the estimation of the popular market risk measures, namely VaR and CVaR, often refers to the algorithms for estimating the probability of high portfolio losses, describing the corresponding transition process only briefly. Hence, we give a consecutive discussion of this problem. Results necessary to construct confidence intervals for both measures under the mentioned variance reduction procedures are also given.