SDE-driven modeling of phenotypically heterogeneous tumors: The influence of cancer cell stemness
(2018)

We deduce cell population models describing the evolution of a tumor (possibly interacting with its
environment of healthy cells) with the aid of differential equations. Different subpopulations
of cancer cells account for the tumor heterogeneity. In our setting these include cancer
stem cells, known to be less sensitive to treatment, and differentiated cancer cells, which have a higher
sensitivity towards chemo- and radiotherapy. Our approach relies on stochastic differential equations
in order to capture randomness in the system, arising e.g. from the therapy-induced decrease in the
number of clonogens, which renders a purely deterministic model questionable. The equations are deduced
relying on transition probabilities characterizing innovations of the two cancer cell subpopulations,
and similarly extended to also account for the evolution of normal tissue. Several therapy approaches
are introduced and compared by way of tumor control probability (TCP) and uncomplicated tumor
control probability (UTCP). A PDE approach makes it possible to assess the evolution of tumor and normal
tissue with respect to time and to cell population densities which can vary continuously in a given set
of states. Analytical approximations of solutions to the obtained PDE system are provided as well.
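As a rough illustration of the stochastic ingredient, the following sketch integrates a two-compartment SDE system (stem cells s and differentiated cells d) with the Euler-Maruyama method. All rate constants, the noise form, and the coupling are illustrative placeholders, not the paper's calibrated model.

```python
import random

def euler_maruyama_tumor(s0, d0, growth_s=0.02, growth_d=0.05,
                         diff_rate=0.01, kill_d=0.04, sigma=0.1,
                         dt=0.01, steps=1000, seed=0):
    """Euler-Maruyama sketch for two interacting cancer-cell
    subpopulations: stem cells s (less treatment-sensitive) and
    differentiated cells d (treatment-sensitive, killed at rate
    kill_d). All parameters are illustrative placeholders."""
    rng = random.Random(seed)
    s, d = s0, d0
    for _ in range(steps):
        # independent Brownian increments for each compartment
        dW_s = rng.gauss(0.0, dt ** 0.5)
        dW_d = rng.gauss(0.0, dt ** 0.5)
        ds = (growth_s * s - diff_rate * s) * dt + sigma * s * dW_s
        dd = (growth_d * d + diff_rate * s - kill_d * d) * dt + sigma * d * dW_d
        # populations cannot become negative
        s = max(s + ds, 0.0)
        d = max(d + dd, 0.0)
    return s, d
```

With a fixed seed the trajectory is reproducible, which is convenient when comparing therapy schedules on the same noise realization.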

Multifacility location problems arise in many real-world applications. Often, the facilities can only be placed in feasible regions such as development or industrial areas. In this paper we show the existence of a finite dominating set (FDS) for the planar multifacility location problem with polyhedral gauges as distance functions and polyhedral feasible regions, provided the interacting facilities form a tree. As an application we show how to solve the planar 2-hub location problem in polynomial time. This approach also yields an ε-approximation for the Euclidean norm case that is polynomial in the input data and 1/ε.

In this article a new numerical solver for simulations of district heating networks is presented. The numerical method applies the local time stepping introduced in [11] to networks of linear advection equations. In combination with the high order approach of [4] an accurate and very efficient scheme is developed. In several numerical test cases the advantages for simulations of district heating networks are shown.
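The basic building block of such network solvers, a first-order upwind step for linear advection on a single pipe, can be sketched as follows; the high-order reconstruction of [4] and the local time stepping of [11] are deliberately omitted here.

```python
def upwind_advect(u, a, dx, dt, inflow):
    """One explicit first-order upwind step for u_t + a*u_x = 0 on a
    single pipe (a > 0), with a prescribed inflow value at the left
    boundary. A minimal building block only; the cited solver couples
    such pipes into networks with a high-order scheme and local time
    stepping."""
    assert a > 0 and a * dt / dx <= 1.0, "CFL condition violated"
    nu = a * dt / dx
    new = u[:]
    new[0] = u[0] - nu * (u[0] - inflow)      # boundary cell uses inflow
    for i in range(1, len(u)):
        new[i] = u[i] - nu * (u[i] - u[i - 1])  # upwind difference
    return new
```

For CFL number 1 this scheme transports the profile exactly by one cell per step, which is a handy sanity check.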

We continue in this paper the study of k-adaptable robust solutions for combinatorial optimization problems with bounded uncertainty sets. In this concept, instead of committing to a single solution to hedge against the uncertainty, one is allowed to choose a set of k different solutions, one of which is picked after the uncertain scenario has been revealed. We first show how the problem can be decomposed into polynomially many subproblems if k is fixed. In the remaining part of the paper we consider the special case k=2, i.e., one is allowed to choose two different solutions to hedge against the uncertainty. We decompose this problem into so-called coordination problems. The study of these coordination problems turns out to be interesting in its own right. We prove positive results for the unconstrained combinatorial optimization problem, the matroid maximization problem, the selection problem, and the shortest path problem on series-parallel graphs. The shortest path problem on general graphs turns out to be NP-complete. Further, for minimization problems we show how to transform approximation algorithms for the coordination problem into approximation algorithms for the original problem. We study the knapsack problem to show that this relation does not hold for maximization problems in general. We present a PTAS for the corresponding coordination problem and prove that the 2-adaptable knapsack problem is not approximable at all.
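The 2-adaptable idea can be made concrete by brute force on a toy selection problem (choose p of n items under discrete cost scenarios): commit to two solutions up front and, once the scenario is revealed, use the cheaper one. This exponential enumeration is purely illustrative and is not the decomposition proposed in the paper.

```python
from itertools import combinations

def selection_cost(sol, costs):
    return sum(costs[i] for i in sol)

def two_adaptable_selection(n, p, scenarios):
    """Brute-force value of the 2-adaptable selection problem:
    min over solution pairs of the worst-case (over scenarios)
    cost of the better of the two solutions. Illustration only."""
    sols = list(combinations(range(n), p))
    # the degenerate choice x1 == x2 recovers classic min-max robustness
    best = min(max(selection_cost(x, c) for c in scenarios) for x in sols)
    for x1, x2 in combinations(sols, 2):
        worst = max(min(selection_cost(x1, c), selection_cost(x2, c))
                    for c in scenarios)
        best = min(best, worst)
    return best
```

On two "mirrored" scenarios the 2-adaptable value can be strictly better than the classic min-max value, which is exactly the benefit of hedging with a pair of solutions.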

This paper presents a case study of duty rostering for physicians at a department of orthopedics and trauma surgery. We provide a detailed description of the rostering problem faced and present an integer programming model that has been used in practice for creating duty rosters at the department for more than a year. Using real world data, we compare the model output to a manually generated roster as used previously by the department and analyze the quality of the rosters generated by the model over a longer time span. Moreover, we demonstrate how unforeseen events such as absences of scheduled physicians are handled.

We extend the standard concept of robust optimization by the introduction of an alternative solution. In contrast to the classic concept, one is allowed to choose two solutions, from which the better one can be picked after the uncertain scenario has been revealed. We focus in this paper on the resulting robust problem for combinatorial problems with bounded uncertainty sets. We present a reformulation of the robust problem which decomposes it into polynomially many subproblems. In each subproblem one needs to find two solutions coupled by a cost function that penalizes elements contained in both solutions. Using this reformulation, we show how the robust problem can be solved efficiently for the unconstrained combinatorial problem, the selection problem, and the minimum spanning tree problem. The robust problem corresponding to the shortest path problem turns out to be NP-complete on general graphs. However, for series-parallel graphs, the robust shortest path problem can be solved efficiently. Further, we show how approximation algorithms for the subproblem can be used to compute approximate solutions for the original problem.

We propose a multiscale model for tumor cell migration in a tissue network. The system of equations involves a structured population model for the tumor cell density, which besides time and
position depends on a further variable characterizing the cellular state with respect to the amount
of receptors bound to soluble and insoluble ligands. Moreover, this equation features pH-taxis and
adhesion, along with an integral term describing proliferation conditioned by receptor binding. The
interaction of tumor cells with their surroundings calls for two more equations for the evolution of
tissue fibers and acidity (expressed via concentration of extracellular protons), respectively. The
resulting ODE-PDE system is highly nonlinear. We prove the global existence of a solution and
perform numerical simulations to illustrate its behavior, paying particular attention to the influence
of the supplementary structure and of the adhesion.

We propose and analyze a multiscale model for acid-mediated tumor invasion
accounting for stochastic effects on the subcellular level.
The setting involves a PDE of reaction-diffusion-taxis type describing the evolution of the tumor cell density,
the movement being directed towards pH gradients in the local microenvironment,
which is coupled to a PDE-SDE system characterizing the
dynamics of extracellular and intracellular proton concentrations, respectively.
The global well-posedness of the model is shown and
numerical simulations are performed in order to illustrate the solution behavior.

This work presents a framework for the computation of complex geometries containing intersections of multiple patches with Reissner-Mindlin shell elements. The main objective is to provide an isogeometric finite element implementation which requires neither drilling rotation stabilization nor user interaction to quantify the number of rotational degrees of freedom for every node. For this purpose, the following set of methods is presented. Control points with coinciding physical location are assigned to one common node for the finite element solution. A nodal basis system is defined in every control point, which ensures an exact interpolation of the director vector throughout the whole domain. A distinction criterion for the automatic quantification of rotational degrees of freedom for every node is presented. An isogeometric Reissner-Mindlin shell formulation is enhanced to handle geometries with kinks and to allow for arbitrary intersections of patches. The parametrization of adjacent patches along the interface has to be conforming. The shell formulation is derived from the continuum theory and uses a rotational update scheme for the current director vector. The nonlinear kinematics allows the computation of large deformations and large rotations. Two concepts for the description of rotations are presented. The first uses an interpolation which is common in standard Lagrange-based shell element formulations. The second uses a more elaborate concept proposed by the authors in prior work, which increases the accuracy for arbitrarily curved geometries. Numerical examples show the high accuracy and robustness of both concepts. The applicability of the proposed framework is demonstrated.

An isogeometric Reissner-Mindlin shell formulation derived from the continuum theory is presented. The geometry is described by NURBS surfaces. The kinematic description of the employed shell theory requires the interpolation of the director vector and of a local basis system. Hence, the definition of nodal basis systems at the control points is necessary for the proposed formulation. The control points are in general not located on the shell reference surface and thus several choices for the nodal values are possible. The proposed new method uses the higher continuity of the geometrical description to calculate nodal basis systems and director vectors which lead to geometrically exact interpolated values thereof. Thus, the initial director vector coincides with the normal vector even for the coarsest mesh. In addition, a more accurate interpolation of the current director and its variation is proposed. Instead of interpolating nodal director vectors, the new approach interpolates nodal rotations. The discrepancy between interpolated basis systems and the individual nodal basis systems is accounted for by an additional transformation. The exact evaluation of the initial director vector, along with the interpolation of the nodal rotations, leads to a shell formulation which yields precise results even for coarse meshes. The convergence behavior is shown to be correct for k-refinement, allowing the use of coarse meshes with high orders of NURBS basis functions. This is potentially advantageous for applications with high numerical effort per integration point. The geometrically nonlinear formulation accounts for large rotations. The consistent tangent matrix is derived. Various standard benchmark examples show the superior accuracy of the presented shell formulation. A new benchmark designed to test the convergence behavior for free-form surfaces is presented.
Despite the higher numerical effort per integration point, the improved accuracy yields considerable savings in computation cost for a predefined error bound.

In this contribution a mortar-type method for the coupling of non-conforming NURBS surface patches is proposed. The connection of non-conforming patches with shared degrees of freedom requires mutual refinement, which propagates throughout the whole patch due to the tensor-product structure of NURBS surfaces. Thus, methods to handle non-conforming meshes are essential in NURBS-based isogeometric analysis. The main objective of this work is to provide a simple and efficient way to couple the individual patches of complex geometrical models without altering the variational formulation. The deformations of the interface control points of adjacent patches are interrelated via a master-slave relation. This relation is established numerically using the weak form of the equality of mutual deformations along the interface. With the help of this relation, the interface degrees of freedom of the slave patch can be condensed out of the system. A natural connection of the patches is attained without additional terms in the weak form. The proposed method is also applicable to nonlinear computations without further measures. Linear and geometrically nonlinear examples show the high accuracy and robustness of the new method. A comparison to reference results and to computations with the Lagrange multiplier method is given.

We propose and study a strongly coupled PDE-ODE-ODE system modeling cancer cell invasion through a tissue network
under the go-or-grow hypothesis asserting that cancer cells can either move or proliferate. Hence our setting features
two interacting cell populations with their mutual transitions and involves tissue-dependent degenerate diffusion and
haptotaxis for the moving subpopulation. The proliferating cells and the tissue evolution are characterized by way of ODEs
for the respective densities. We prove the global existence of weak solutions and illustrate the model behaviour by
numerical simulations in a two-dimensional setting.

In this paper, we discuss the problem of approximating ellipsoid uncertainty sets with bounded (gamma) uncertainty sets. Robust linear programs with ellipsoid uncertainty lead to quadratically constrained programs, whereas robust linear programs with bounded uncertainty sets remain linear programs which are generally easier to solve.
We call a bounded uncertainty set an inner approximation of an ellipsoid if it is contained in it. We consider two different inner approximation problems. The first is to find a bounded uncertainty set which stays close to the ellipsoid in the sense that a shrunken version of the ellipsoid is contained in it; the approximation is optimal if the required shrinking is minimal. In the second problem, we search for the bounded uncertainty set of maximum volume contained in the ellipsoid. We show that both problems can be solved analytically by stating explicit formulas for their optimal solutions.
Further, in a computational experiment we show how the derived approximation techniques can be used to approximate shortest path and network flow problems affected by ellipsoidal uncertainty.
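For a fixed solution vector x (nonnegative entries assumed), the worst-case cost over the two kinds of sets can be evaluated directly. This toy comparison, illustrative only and not the paper's construction, shows why the ellipsoid introduces a quadratic term while the bounded (Gamma) set stays piecewise linear.

```python
from math import sqrt

def worst_case_ellipsoid(x, c0, r):
    """Worst case of c^T x over the ball {c0 + d : ||d||_2 <= r}:
    c0^T x + r * ||x||_2. The norm term is what makes robust LPs
    with ellipsoidal uncertainty quadratically constrained."""
    return sum(ci * xi for ci, xi in zip(c0, x)) + \
        r * sqrt(sum(xi * xi for xi in x))

def worst_case_budgeted(x, c0, dev, gamma):
    """Worst case under a bounded (Gamma) set: at most `gamma`
    coefficients deviate upward by dev[i]. The adversary bumps the
    gamma coordinates with the largest impact dev[i] * x[i]; this
    stays linear after dualization, hence easier to optimize over."""
    bumps = sorted((d * xi for d, xi in zip(dev, x)), reverse=True)
    return sum(ci * xi for ci, xi in zip(c0, x)) + sum(bumps[:gamma])
```

Evaluating both for the same x makes the approximation quality of a chosen bounded set against a given ellipsoid directly measurable.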

For the prediction of digging forces from a granular material simulation, the
Nonsmooth Contact Dynamics Method is examined. First, the equations of motion
for nonsmooth mechanical systems are laid out. They take the form of a differential
variational inequality with the same structure as classical discrete algebraic equations. Using a Galerkin projection in time, it becomes possible to derive
nonsmooth versions of the classical SHAKE and RATTLE integrators.
A matrix-free Interior Point Method is used for the complementarity
problems that need to be solved in every time step. It is shown that this method
outperforms the Projected Gauss-Jacobi method by several orders of magnitude
and produces the same digging force result as the Discrete Element Method in comparable computing time.

We consider the problem of evacuating several regions threatened by river flooding, assuming sufficient time is given to plan ahead. To ensure a smooth evacuation procedure, our model includes the decision which regions to assign to which shelter and when evacuation orders should be issued, such that roads do not become congested.
Due to uncertainty in the weather forecast, several possible scenarios are considered simultaneously in a robust optimization framework. To solve the resulting integer program, we apply a tabu search algorithm based on decomposing the problem into more tractable subproblems. Computational experiments on random instances and an instance based on data from Kulmbach, Germany, show considerable improvement compared to an MIP solver provided with a strong starting solution.

We present a new approach to handle uncertain combinatorial optimization problems that uses solution ranking procedures to determine the degree of robustness of a solution. Unlike classic concepts for robust optimization, our approach is not purely based on absolute quantitative performance, but also includes qualitative aspects that are of major importance for the decision maker.
We discuss the two variants, solution ranking and objective ranking robustness, in more detail, presenting problem complexities and solution approaches. Using an uncertain shortest path problem as a computational example, the potential of our approach is demonstrated in the context of evacuation planning due to river flooding.

We investigate a PDE-ODE system describing cancer cell invasion in a tissue network. The model is an extension of the multiscale setting in [28,40], by considering two subpopulations of tumor cells interacting mutually and with the surrounding tissue. According to the go-or-grow hypothesis, these subpopulations consist of moving and proliferating cells, respectively. The mathematical setting also accommodates the effects of some therapy approaches. We prove the global existence of weak solutions to this model and perform numerical simulations to illustrate its behavior for different therapy strategies.

We propose and study a strongly coupled PDE-ODE system with tissue-dependent degenerate diffusion and haptotaxis that can serve as a model prototype for cancer cell invasion through the
extracellular matrix. We prove the global existence of weak solutions and illustrate the model behaviour by numerical simulations for a two-dimensional setting.

We discuss the problem of evaluating a robust solution.
To this end, we first give a short primer on how to apply robustification approaches to uncertain optimization problems using the assignment problem and the knapsack problem as illustrative examples.
As it is not immediately clear in practice which such robustness approach is suitable for the problem at hand,
we present current approaches for evaluating and comparing robustness from the literature, and introduce the new concept of a scenario curve. Using the methods presented in this paper, the decision maker obtains an accessible guide for finding, solving, and comparing the robust optimization methods best suited to their purposes.

In this paper we consider the problem of decomposing a given integer matrix A into
a positive integer linear combination of consecutive-ones matrices with a bound on the
number of columns per matrix. This problem is of relevance in the realization stage
of intensity modulated radiation therapy (IMRT) using linear accelerators and multileaf
collimators with limited width. Constrained and unconstrained versions of the problem
with the objectives of minimizing beam-on time and decomposition cardinality are considered.
We introduce a new approach which can be used to find the minimum beam-on
time for both constrained and unconstrained versions of the problem. The decomposition
cardinality problem is shown to be NP-hard and an approach is proposed to solve the
lexicographic decomposition problem of minimizing the decomposition cardinality subject
to optimal beam-on time.
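For a single matrix row and without the width constraint, the minimum beam-on time has a simple closed form: the sum of the positive jumps when scanning the row left to right. The sketch below computes this single-row quantity only; the paper treats full matrices and the constrained (limited-width) case.

```python
def min_beam_on_time_row(row):
    """Minimum beam-on time for decomposing one row of nonnegative
    intensities into consecutive-ones (interval) vectors with positive
    integer coefficients: sum of positive increments. Single-row,
    unconstrained-width case only (illustrative)."""
    total, prev = 0, 0
    for v in row:
        if v > prev:       # a new interval must open to raise the level
            total += v - prev
        prev = v
    return total
```

For example, the row (2, 3, 1, 2) needs beam-on time 2 + 1 + 1 = 4, since the level rises by 2, then by 1, and rises again by 1 after the dip.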

We consider storage loading problems where items with uncertain weights have
to be loaded into a storage area, taking into account stacking and
payload constraints. Following the robust optimization paradigm, we propose
strict and adjustable optimization models for finite and interval-based
uncertainties. To solve these problems, exact decomposition and heuristic
solution algorithms are developed.
For strict robustness, we also present a compact formulation based
on a characterization of worst-case scenarios.
Computational results show that computation times and algorithm
gaps are reasonable for practical applications.
Furthermore, we find that the robustness concepts show different
potential depending on the type of data being used.

Scheduling-Location (ScheLoc) problems integrate the separate fields of scheduling and location problems. In ScheLoc problems the objective is to find locations for the machines and a schedule for each machine, subject to production and location constraints, such that some scheduling objective is minimized. In this paper we consider the Discrete Parallel Machine Makespan (DPMM) ScheLoc problem, where the set of possible machine locations is discrete and a set of n jobs has to be taken to the machines and processed such that the makespan is minimized. Since the separate location and scheduling problems are both NP-hard, so is the corresponding ScheLoc problem. Therefore, we propose an integer programming formulation and different versions of clustering heuristics, where jobs are split into clusters and each cluster is assigned to one of the possible machine locations. Since the IP formulation can only be solved for small-scale instances, we propose several lower bounds to measure the quality of the clustering heuristics. Extensive computational tests show the efficiency of the heuristics.

A new solution approach for solving the 2-facility location problem in the plane with block norms
(2015)

Motivated by the time-dependent location problem over T time-periods introduced in
Maier and Hamacher (2015) we consider the special case of two time-steps, which was shown
to be equivalent to the static 2-facility location problem in the plane. Geometric optimality
conditions are stated for the median objective. When using block norms, these conditions
are used to derive a polygon grid inducing a subdivision of the plane based on normal cones,
yielding a new approach to solve the 2-facility location problem in polynomial time. Combinatorial algorithms for the 2-facility location problem based on geometric properties are
deduced and their complexities are analyzed. These methods differ from others in that they
work entirely on geometric objects to derive the optimal solution set.

We study an online flow shop scheduling problem where each job consists of several tasks that have to be completed in t different stages and the goal is to maximize the total weight of accepted jobs.
The set of tasks of a job contains one task for each stage, and each stage has a dedicated set of identical parallel machines that can only process tasks of this stage. In order to gain the weight (profit) associated with a job j, each of its tasks has to be executed between a task-specific release date and deadline, subject to the constraint that all tasks of job j from stages 1, …, i-1 have to be completed before the task of the ith stage can be started. In the online version, jobs arrive over time and all information about the tasks of a job becomes available at the release date of its first task. This model can be used to describe production processes in supply chains when customer orders arrive online.
We show that even the basic version of the offline problem with a single machine in each stage, unit weights, unit processing times, and fixed execution times for all tasks (i.e., deadline minus release date equals processing time) is APX-hard. Moreover, we show that the approximation ratio of any polynomial-time approximation algorithm for this basic version of the problem must depend on the number t of stages.
For the online version of the basic problem, we provide a (2t-1)-competitive deterministic online algorithm and a matching lower bound. Moreover, we provide several (sometimes tight) upper and lower bounds on the competitive ratio of online algorithms for several generalizations of the basic problem involving different weights, arbitrary release dates and deadlines, different processing times of tasks, and several identical machines per stage.

To write about the history of a subject is a challenge that grows with the number of pages, as the original goal of completeness turns more and more into an impossibility. With this in mind, the present article takes a very narrow approach and uses personal side trips and memories of conferences,
workshops, and summer schools as the stage for some of the most important protagonists and their contributions to the field of Differential-Algebraic Equations (DAEs).

A nonlocal stochastic model for intra- and extracellular proton dynamics in a tumor is proposed.
The intracellular dynamics is governed by an SDE coupled to a reaction-diffusion
equation for the extracellular proton concentration on the macroscale. In a more general context
the existence and uniqueness of solutions for local and nonlocal
SDE-PDE systems are established, allowing, in particular, an analysis of the proton dynamics model
both in its local version and in the case with nonlocal path dependence.
Numerical simulations are performed
to illustrate the behavior of solutions, providing some insights into the effects of randomness on tumor acidity.

In this paper we propose a phenomenological model for the formation of an interstitial gap between the tumor and the stroma. The gap
is mainly filled with acid produced by the progressing edge of the tumor front. Our setting extends existing models for acid-induced tumor invasion to incorporate
several features of local invasion like formation of gaps, spikes, buds, islands, and cavities. These behaviors are obtained mainly due to the random dynamics at the intracellular
level, the go-or-grow-or-recede dynamics on the population scale, together with the nonlinear coupling between the microscopic (intracellular) and macroscopic (population)
levels. The well-posedness of the model is proved using semigroup techniques, and 1D and 2D numerical simulations are performed to illustrate model predictions and draw
conclusions based on the observed behavior.

We consider the multiscale model for glioma growth introduced in a previous work and extend it to account
for therapy effects. Thereby, three treatment strategies involving surgical resection, radio-, and
chemotherapy are compared for their efficiency. The chemotherapy relies on inhibiting the binding
of cell surface receptors to the surrounding tissue, which impairs both migration and proliferation.

A single facility problem in the plane is considered, where an optimal location has to be
identified for each of finitely many time-steps with respect to time-dependent weights and
demand points. It is shown that the median objective can be reduced to a special case of the
static multifacility median problem such that results from the latter can be used to tackle the
dynamic location problem. When using block norms as distance measure between facilities,
a Finite Dominating Set (FDS) is derived. For the special case with only two time-steps, the
resulting algorithm is analyzed with respect to its worst-case complexity. Due to the relation
between dynamic location problems for T time periods and T-facility problems, this algorithm
can also be applied to the static 2-facility location problem.

We consider the problem of finding efficient locations of surveillance cameras, where we distinguish
between two different problems. In the first, the whole area must be monitored and the number of cameras
should be as small as possible. In the second, the goal is to maximize the monitored area for a fixed number of
cameras. In both of these problems, restrictions on the ability of the cameras, like limited depth of view or range
of vision, are taken into account. We present solution approaches for these problems and report on results of
their implementations applied to an authentic problem. We also consider a bicriteria problem with two objectives:
maximizing the monitored area and minimizing the number of cameras, and solve it for our case study.
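The first (full-coverage) variant is essentially a set cover problem, so a greedy heuristic gives a quick baseline: repeatedly place the candidate camera that monitors the most still-uncovered cells. The sketch below is illustrative and not the solution approach of the paper; `candidate_views` is a hypothetical precomputed map from camera position to the set of cells it can see, already accounting for limited depth and range of vision.

```python
def greedy_camera_cover(area_cells, candidate_views):
    """Greedy set-cover sketch for the 'monitor everything with as few
    cameras as possible' variant. candidate_views: dict mapping each
    candidate camera position to the set of cells it covers. Greedy
    yields a logarithmic approximation, not the exact optimum."""
    uncovered = set(area_cells)
    chosen = []
    while uncovered:
        best = max(candidate_views,
                   key=lambda c: len(candidate_views[c] & uncovered))
        gain = candidate_views[best] & uncovered
        if not gain:
            raise ValueError("remaining cells cannot be monitored")
        chosen.append(best)
        uncovered -= gain
    return chosen
```

The budget-constrained second variant (maximize coverage with a fixed number of cameras) follows the same greedy pattern, stopping after the allowed number of placements.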

We consider a network flow problem, where the outgoing flow is reduced by a certain percentage in each node. Given a maximum amount of flow that can leave the source node, the aim is to find a solution that maximizes the amount of flow which arrives at the sink.
Starting from this basic model, we include two additional aspects: on the one hand, we are able to reduce the loss at some of the nodes; on the other hand, the exact loss values are not known but may come from a discrete uncertainty set of exponential size.
Applications for problems of this type can be found in evacuation planning, where one would like to improve the safety of nodes such that the number of evacuees reaching safety is maximized.
We formulate the resulting robust flow problem with losses and improvability as a mixed-integer program for finitely many scenarios, and present an iterative scenario-generation procedure that avoids including all scenarios from the beginning. In a computational study using both randomly generated instances and realistic data based on the city of Nice, France, we compare our solution algorithms.

The classic approach in robust optimization is to optimize the solution with respect to the worst-case scenario. This pessimistic approach yields solutions that perform best if the worst scenario happens, but usually perform badly on average. A solution that optimizes the average performance, on the other hand, lacks a worst-case performance guarantee.
In practice it is important to find a good compromise between these two solutions. We propose to deal with this problem by considering it from a bicriteria perspective. The Pareto curve of the bicriteria problem visualizes exactly how costly it is to ensure robustness and helps to choose the solution with the best balance between expected and guaranteed performance.
Building upon a theoretical observation on the structure of Pareto solutions for problems with polyhedral feasible sets, we present a column generation approach that requires no direct solution of the computationally expensive worst-case problem. In computational experiments we demonstrate the effectiveness both of the proposed algorithm and of the bicriteria perspective in general.
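On a toy selection problem the trade-off can be made explicit by enumerating all solutions and filtering the nondominated (average cost, worst-case cost) pairs. This is brute force for illustration only; the paper's column generation approach avoids exactly this kind of enumeration.

```python
from itertools import combinations
from statistics import mean

def pareto_average_vs_worst(n, k, scenarios):
    """Enumerate all k-out-of-n selections and keep the ones that are
    Pareto-optimal for (average cost, worst-case cost) over the given
    scenarios. Exponential; illustrative only."""
    points = []
    for sol in combinations(range(n), k):
        costs = [sum(c[i] for i in sol) for c in scenarios]
        points.append((sol, mean(costs), max(costs)))

    def dominated(p, q):
        # q dominates p: no worse in both criteria, strictly better in one
        return q[1] <= p[1] and q[2] <= p[2] and (q[1] < p[1] or q[2] < p[2])

    return [p for p in points if not any(dominated(p, q) for q in points)]
```

Plotting the returned pairs gives exactly the Pareto curve described above: it shows how much average performance must be sacrificed for each gain in the worst-case guarantee.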

Geometric Programming is a useful tool with a wide range of applications in engineering. As in real-world problems input data is likely to be affected by uncertainty, Hsiung, Kim, and Boyd introduced robust geometric programming to include the uncertainty in the optimization process. They also developed a tractable approximation method to tackle this problem. Further, they pose the question whether there exists a tractable reformulation of their robust geometric programming model instead of only an approximation method. We give a negative answer to this question by showing that robust geometric programming is co-NP hard in its natural posynomial form.

The sink location problem is a combination of network flow and location problems: From a given set of nodes in a flow network a minimum cost subset \(W\) has to be selected such that given supplies can be transported to the nodes in \(W\). In contrast to its counterpart, the source location problem which has already been studied in the literature, sinks have, in general, a limited capacity. Sink location has a decisive application in evacuation planning, where the supplies correspond to the number of evacuees and the sinks to emergency shelters.
We classify sink location problems according to capacities on shelter nodes, simultaneous or non-simultaneous flows, and single or multiple assignments of evacuee groups to shelters. Resulting combinations are interpreted in the evacuation context and analyzed with respect to their worst-case complexity status.
There are several approaches to tackle these problems: generic solution methods for uncapacitated problems are based on source location and modifications of the network. In the capacitated case, for which source location cannot be applied, we suggest alternative approaches that work in the original network. It turns out that the algorithms of the latter class are superior to the former ones. This is established in numerical tests including random data as well as real-world data from the city of Kaiserslautern, Germany.

Building interoperation among separately developed software units requires checking their conceptual assumptions and constraints. However, eliciting such assumptions and constraints is time-consuming and challenging, as it requires analyzing each of the interoperating software units. To address this issue, we proposed a new conceptual interoperability analysis approach which aims at decreasing the analysis cost and the conceptual mismatches between the interoperating software units. In this report we present the design of a planned controlled experiment for evaluating the effectiveness, efficiency, and acceptance of our proposed conceptual interoperability analysis approach. The design includes the study objectives, research questions, statistical hypotheses, and experimental design. It also provides the materials that will be used in the execution phase of the planned experiment.

A new algorithm for optimization problems with three objective functions is presented which computes a representation of the set of nondominated points. This representation is guaranteed to meet a desired coverage error, and a bound on the number of iterations the algorithm needs to achieve this coverage error is derived. Since the representation does not necessarily contain only nondominated points, ideas for calculating bounds on the representation error are given. Moreover, the incorporation of domination checks during the algorithm and other quality measures are discussed.

The ordered weighted averaging (OWA) objective is an aggregate function over multiple optimization criteria which has received increasing attention from the research community over the last decade. In contrast to an ordinary weighted sum, weights are attached to the ordered objective values (i.e., a weight for the largest value, a weight for the second-largest value, and so on). As this contains max-min or worst-case optimization as a special case, OWA can also be considered an alternative approach to robust optimization.
For linear programs with an OWA objective, compact reformulations exist which result in extended linear programs. We present new such reformulation models of reduced size. A computational comparison indicates that these formulations improve solution times.
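As a quick illustration of the aggregation itself, the following sketch computes an OWA value; the weights and objective values are made up, and the max (worst-case) aggregation is recovered with the weight vector (1, 0, ..., 0):

```python
# OWA aggregation: sort the objective values in non-increasing order, then
# take the weighted sum, the first weight applying to the largest value.
# Weights and objective values below are illustrative only.

def owa(values, weights):
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

costs = [4.0, 7.0, 5.0]
owa(costs, [1.0, 0.0, 0.0])   # worst-case (max) aggregation: 7.0
owa(costs, [0.5, 0.3, 0.2])   # 0.5*7 + 0.3*5 + 0.2*4 = 5.8
```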

We consider an uncertain traveling salesman problem in which the distances between nodes are not known exactly but stem from an uncertainty set of possible scenarios. This uncertainty set is given by intervals, together with an additional bound on the number of distances that may deviate from their expected, nominal values.
We propose a recoverable robust model that allows a tour to change a bounded number of edges once a scenario becomes known. As the model contains an exponential number of constraints and variables, we develop an iterative algorithm in which tours and scenarios are computed alternately.
While this approach is able to find a provably optimal solution to the robust model, it also needs to solve increasingly complex subproblems. Therefore, we also consider heuristic solution procedures based on local search moves using a heuristic estimate of the actual objective function. In computational experiments, these approaches are compared.
Finally, an alternative recovery model is discussed, where a second-stage recovery tour is not required to visit all nodes of the graph. We show that the previously NP-hard evaluation of a fixed solution now becomes solvable in polynomial time.

We argue that the concepts of resilience in engineering science and robustness in mathematical optimization are strongly related. Using evacuation planning as an example application, we demonstrate optimization techniques to improve solution resilience. These include a direct modelling of the uncertainty for stochastic or robust optimization, as well as taking multiple objective functions into account.

Glioma is a common type of primary brain tumor with a strongly invasive potential, often exhibiting nonuniform, highly irregular growth. This makes it difficult to assess the extent of the tumor, bringing about a supplementary challenge for treatment. It is therefore necessary to understand the migratory behavior of glioma in greater detail.
In this paper we propose a multiscale model for glioma growth and migration. Our model couples the microscale dynamics (reduced to the binding of surface receptors to the surrounding tissue) with a kinetic transport equation for the cell density on the mesoscopic level of individual cells. On the latter scale we also include the proliferation of tumor cells via effects of interaction with the tissue. An adequate parabolic scaling yields a convection-diffusion-reaction equation whose coefficients can be explicitly determined from information about the tissue obtained by diffusion tensor imaging (DTI). Numerical simulations relying on DTI measurements confirm the biological finding that glioma spreads along white matter tracts.

In this paper we propose a procedure to extend classical numerical schemes for hyperbolic conservation laws to networks of hyperbolic conservation laws. At the junctions of the network we solve the given coupling conditions and minimize the contributions of the outgoing numerical waves. This flexible procedure also allows us to use central schemes at the junctions. Several numerical examples are considered to investigate the performance of this new approach compared to the common Godunov solver and to exact solutions.
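To make the setting concrete, the following minimal sketch implements a classical first-order Godunov scheme for Burgers' equation on a single edge; the junction treatment, which is the paper's actual contribution, is omitted, and the grid and Riemann data are chosen purely for illustration:

```python
import numpy as np

# First-order Godunov scheme for Burgers' equation u_t + (u^2/2)_x = 0 on a
# single edge; the network coupling at junctions is not shown here.

def godunov_flux(ul, ur):
    """Exact Riemann flux for the convex flux f(u) = u^2 / 2."""
    f = lambda u: 0.5 * u * u
    if ul <= ur:                              # rarefaction: minimize f on [ul, ur]
        return 0.0 if ul < 0.0 < ur else min(f(ul), f(ur))
    return max(f(ul), f(ur))                  # shock: maximize f on [ur, ul]

def step(u, dx, dt):
    """Conservative update u_i -= dt/dx * (F_{i+1/2} - F_{i-1/2})."""
    F = np.array([godunov_flux(u[i], u[i + 1]) for i in range(len(u) - 1)])
    unew = u.copy()
    unew[1:-1] -= dt / dx * (F[1:] - F[:-1])  # boundary cells kept fixed
    return unew

x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
u = np.where(x < 0.5, 1.0, 0.0)               # Riemann data: shock of speed 1/2
for _ in range(100):                          # advance to t = 100 * 0.8 * dx = 0.4
    u = step(u, dx, 0.8 * dx)                 # CFL number 0.8, since |f'(u)| <= 1
# the discrete shock should now sit near x = 0.5 + 0.5 * 0.4 = 0.7
```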

Starting from the two-scale model for pH-taxis of cancer cells introduced in [1], we consider here an extension accounting for tumor heterogeneity w.r.t. treatment sensitivity, together with a treatment approach combining chemo- and radiotherapy. The effect of peritumoral region alkalinization on this therapeutic combination is investigated with the aid of numerical simulations.

Variational methods in imaging are nowadays developing towards a quite universal and flexible tool, allowing for highly successful approaches to tasks like denoising, deblurring, inpainting, segmentation, super-resolution, disparity, and optical flow estimation. The overall structure of such approaches is of the form
\[ D(Ku) + \alpha R(u) \to \min_u, \]
where the functional \(D\) is a data fidelity term, depending on some input data \(f\) and measuring the deviation of \(Ku\) from it, and \(R\) is a regularization functional. Moreover, \(K\) is a (often linear) forward operator modeling the dependence of the data on an underlying image, and \(\alpha\) is a positive regularization parameter. While \(D\) is often smooth and (strictly) convex, current practice almost exclusively uses nonsmooth regularization functionals. The majority of successful techniques use nonsmooth and convex functionals like the total variation and generalizations thereof, cf. [28, 31, 40], or \(l_1\)-norms of coefficients arising from scalar products with some frame system, cf. [73] and references therein.
The efficient solution of such variational problems in imaging demands appropriate algorithms. Taking into account the specific structure as a sum of two very different terms to be minimized, splitting algorithms are a quite canonical choice. Consequently, this field has revived interest in techniques like operator splittings or augmented Lagrangians. In this chapter we provide an overview of currently developed methods and recent results, as well as some computational studies comparing different methods and illustrating their success in applications.
We start from a very general viewpoint in the first sections, discussing basic notation and properties of proximal maps, firmly nonexpansive operators, and averaged operators, which form the basis of the subsequent convergence arguments. We then proceed to a discussion of several state-of-the-art algorithms and their (theoretical) convergence properties. After a section discussing issues related to the use of analogous iterative schemes for ill-posed problems, we present some practical convergence studies in numerical examples related to PET and spectral CT reconstruction.
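As a minimal concrete instance of such a splitting method, the following sketch applies proximal gradient (forward-backward splitting) to a least-squares data term with an \(l_1\) regularizer; the operator \(K\), the data \(f\), and all parameter values are synthetic and not taken from the chapter:

```python
import numpy as np

# Proximal-gradient (forward-backward splitting) sketch for the model class
# min_u D(Ku) + alpha * R(u) with D(v) = 0.5 * ||v - f||^2 and R = ||.||_1.
# K, f, and all parameters below are synthetic, purely for illustration.

def soft_threshold(v, t):
    """Proximal map of t * ||.||_1: componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(K, f, alpha, iters=500):
    tau = 1.0 / np.linalg.norm(K, 2) ** 2          # step 1/L with L = ||K||_2^2
    u = np.zeros(K.shape[1])
    for _ in range(iters):
        grad = K.T @ (K @ u - f)                   # forward step on the smooth D
        u = soft_threshold(u - tau * grad, tau * alpha)  # backward (prox) step on R
    return u

rng = np.random.default_rng(0)
K = rng.standard_normal((40, 100))
u_true = np.zeros(100)
u_true[[5, 37, 80]] = [2.0, -1.5, 1.0]             # sparse ground truth
f = K @ u_true                                     # noiseless synthetic data
u_hat = ista(K, f, alpha=0.05)
```

The backward step uses the closed-form proximal map of the \(l_1\)-norm (soft-thresholding), which is what makes the nonsmooth term tractable within the splitting.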

We develop a framework for shape optimization problems under state equation constraints where both state and control are discretized by B-splines or NURBS. In other words, we use isogeometric analysis (IGA) for solving the partial differential equation, together with a nodal approach to changing domains in which control points take the place of nodes and thus a quite general class of functions for representing optimal shapes and their boundaries becomes available. The minimization problem is solved by a gradient descent method where the shape gradient is defined in isogeometric terms. This gradient is obtained following two schemes, optimize first–discretize then and, reversely, discretize first–optimize then. We show that for isogeometric analysis the two schemes yield the same discrete system. Moreover, we also formulate shape optimization with respect to NURBS in the optimize-first ansatz, which amounts to finding optimal control points and weights simultaneously. Numerical tests illustrate the theory.

In this paper we give an overview of the system of rehabilitation clinics in Germany in general and of the literature on patient scheduling applied to rehabilitation facilities in particular.
We adapt a class-teacher model to this environment and then generalize it to meet some of the specific constraints of inpatient rehabilitation clinics. To this end we introduce a restricted edge coloring on undirected bipartite graphs which we call group-wise balanced. The resulting problem is called the patient-therapist timetable problem with group-wise balanced constraints (PTTPgb). In order to further specify weekly schedules such that they produce a reasonable allocation to mornings/afternoons (second-level decision) and to single periods (third-level decision), we introduce the hierarchical PTTPgb. For the corresponding model, the hierarchical edge coloring problem, we present some first feasibility results.

Cancer research is not only a fast-growing field involving many branches of science, but also an intricate and diversified field rife with anomalies. One such anomaly is the consistent reliance of cancer cells on glucose metabolism for energy production even in a normoxic environment. Glycolysis is an inefficient pathway for energy production and is normally used under hypoxic conditions. Since cancer cells have a high demand for energy (e.g., for proliferation), it is seemingly paradoxical for them to rely on such a mechanism. An emerging conjecture aiming to explain this behavior is that cancer cells preserve this aerobic glycolytic phenotype for use in invasion and metastasis. We follow this hypothesis and, building upon existing models, propose a new model for cancer invasion depending on the dynamics of extra- and intracellular protons. We incorporate random perturbations in the intracellular proton dynamics to account for uncertainties affecting the cellular machinery. Finally, we address the well-posedness of our setting and use numerical simulations to illustrate the model predictions.

We consider the problem of evacuating an urban area in the wake of a natural or man-made disaster. Several planning aspects need to be taken into account in such a scenario, and they are usually treated separately due to their computational complexity. These aspects include: Which shelters are used to accommodate evacuees? How should public transport be scheduled for transit-dependent evacuees? And how do public and individual traffic interact? Furthermore, besides evacuation time, the risk of the evacuation also needs to be considered.
We propose a macroscopic multi-criteria optimization model that includes all of these questions simultaneously. As a mixed-integer programming formulation cannot handle instances of real-world size, we develop a genetic algorithm of NSGA-II type that is able to generate feasible solutions of good quality in reasonable computation times.
We extend the applicability of these methods by also considering how to aggregate instance data, and how to generate solutions for the original instance starting from a reduced solution.
In computational experiments using real-world data modelling the cities of Nice in France and Kaiserslautern in Germany, we demonstrate the effectiveness of our approach and compare the trade-off between different levels of data aggregation.

Minmax regret optimization aims at finding robust solutions that perform best in the worst case, compared to the respective optimal objective value in each scenario. Even for simple uncertainty sets like boxes, most polynomially solvable optimization problems have strongly NP-hard minmax regret counterparts. Thus, heuristics with performance guarantees can potentially be of great value, but only few such guarantees exist.
A very simple but effective approximation technique is to compute the midpoint solution of the original optimization problem, which optimizes the average regret as well as the average nominal objective. It is a well-known result that the regret of the midpoint solution is at most twice the optimal regret. Apart from some academic instances showing that this bound is tight, most instances reveal a much better approximation ratio.
We introduce a new lower bound on the optimal value of the minmax regret problem. Using this lower bound, we state an algorithm that gives an instance-dependent performance guarantee of at most 2 for the midpoint solution of combinatorial problems. The computational complexity of the algorithm depends on the minmax regret problem under consideration; we show that the sharpened guarantee can be computed in strongly polynomial time for several classes of combinatorial optimization problems.
To illustrate the quality of the proposed bound, we use it within a branch-and-bound framework for the robust shortest path problem. In an experimental study comparing this approach with a bound from the literature, we find a considerable improvement in computation times.
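The midpoint heuristic and the regret evaluation it is compared against can be sketched on a toy shortest path instance; all node names and interval costs below are made up for illustration:

```python
# Midpoint heuristic for interval minmax-regret shortest path on a toy DAG.
# For interval uncertainty, the worst case of a fixed path sets each of its
# edges to the upper bound and every other edge to the lower bound.

edges = {('s', 'a'): (1, 3), ('s', 'b'): (2, 2),
         ('a', 't'): (1, 4), ('b', 't'): (1, 3)}
paths = [[('s', 'a'), ('a', 't')], [('s', 'b'), ('b', 't')]]  # all s-t paths

def cost(path, scenario):
    return sum(scenario[e] for e in path)

def regret(path):
    """Max regret of `path`: its worst-case cost minus the optimum there."""
    worst = {e: (hi if e in path else lo) for e, (lo, hi) in edges.items()}
    return cost(path, worst) - min(cost(p, worst) for p in paths)

midpoint = {e: (lo + hi) / 2 for e, (lo, hi) in edges.items()}
mid_path = min(paths, key=lambda p: cost(p, midpoint))
# The classical guarantee: regret(mid_path) <= 2 * min_p regret(p).
assert regret(mid_path) <= 2 * min(regret(p) for p in paths)
```

On larger instances the path enumeration would of course be replaced by a shortest path algorithm; the brute force here only keeps the sketch self-contained.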

The hypervolume subset selection problem consists of finding a subset, with a given cardinality \(k\), of a set of nondominated points that maximizes the hypervolume indicator. This problem arises in selection procedures of evolutionary algorithms for multiobjective optimization, for which practically efficient algorithms are required. In this article, two new formulations are provided for the two-dimensional variant of this problem.
The first is a (linear) integer programming formulation that can be solved by solving its linear programming relaxation. The second formulation is a \(k\)-link shortest path formulation on a special digraph with the Monge property that can be solved by dynamic programming in \(\mathcal{O}(n(k + \log n))\) time. This improves upon the \(\mathcal{O}(n^2k)\) result of Bader (2009), and matches the recent result of Bringmann et al. (2014), which was developed independently from this work using different techniques. Moreover, it is shown that these bounds may be further improved under mild conditions on \(k\).
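For intuition, the two-dimensional hypervolume indicator and a brute-force variant of subset selection (far slower than the article's dynamic program) can be sketched as follows, on a made-up front:

```python
from itertools import combinations

def hypervolume_2d(points, ref):
    """2D hypervolume (maximization) dominated by mutually nondominated
    `points` with respect to the reference point `ref`: sort by the first
    objective and sum the rectangular slices."""
    pts = sorted(points)                  # ascending f1 => descending f2
    next_y = [p[1] for p in pts[1:]] + [ref[1]]
    return sum((x - ref[0]) * (y - ny) for (x, y), ny in zip(pts, next_y))

def best_subset(points, k, ref):
    """Brute-force hypervolume subset selection, exponential in k; the
    article's k-link shortest path DP runs in O(n(k + log n)) instead."""
    return max(combinations(points, k), key=lambda s: hypervolume_2d(s, ref))

front = [(1, 5), (2, 4), (3, 3), (4, 2), (5, 1)]
chosen = best_subset(front, 2, (0, 0))    # -> ((2, 4), (4, 2))
```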