KLUEDO RSS Feed — KLUEDO Dokumente/documents
https://kluedo.ub.uni-kl.de/index/index/
Feed date: Fri, 20 Feb 2015 13:54:12 +0100

Bicriteria approach to the optimal location of surveillance cameras
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3997
We consider the problem of finding efficient locations of surveillance cameras, where we distinguish
between two different problems. In the first, the whole area must be monitored and the number of cameras
should be as small as possible. In the second, the goal is to maximize the monitored area for a fixed number of
cameras. In both of these problems, restrictions on the capabilities of the cameras, such as a limited depth of view or range
of vision, are taken into account. We present solution approaches for these problems and report on results of
their implementations applied to an authentic problem. We also consider a bicriteria problem with two objectives:
maximizing the monitored area and minimizing the number of cameras, and solve it for our study case.

Authors: Horst W. Hamacher; Aleksandra Gross
Type: preprint
Link: https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3997
Published: Fri, 20 Feb 2015 13:54:12 +0100

Bicriteria approach to the optimal location of surveillance cameras
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3979
We consider the problem of finding efficient locations of surveillance cameras, where we distinguish
between two different problems. In the first, the whole area must be monitored and the number of cameras
should be as small as possible. In the second, the goal is to maximize the monitored area for a fixed number of
cameras. In both of these problems, restrictions on the capabilities of the cameras, such as a limited depth of view or range
of vision, are taken into account. We present solution approaches for these problems and report on results of
their implementations applied to an authentic problem. We also consider a bicriteria problem with two objectives:
maximizing the monitored area and minimizing the number of cameras, and solve it for our study case.

Authors: Aleksandra Gross; Horst W. Hamacher
Type: preprint
Link: https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3979
Published: Thu, 29 Jan 2015 08:18:53 +0100

Robust Flows with Losses and Improvability in Evacuation Planning
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3947
We consider a network flow problem, where the outgoing flow is reduced by a certain percentage in each node. Given a maximum amount of flow that can leave the source node, the aim is to find a solution that maximizes the amount of flow which arrives at the sink.
Starting from this basic model, we include two new, additional aspects: On the one hand, we are able to reduce the loss at some of the nodes; on the other hand, the exact loss values are not known, but may come from a discrete uncertainty set of exponential size.
Applications for problems of this type can be found in evacuation planning, where one would like to improve the safety of nodes such that the number of evacuees reaching safety is maximized.
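The basic lossy-flow model can be illustrated on a toy instance. The network, the retention factors, and the budget below are all hypothetical, and a grid search over the flow split stands in for the paper's mixed-integer program, which this sketch does not implement:

```python
# Toy lossy-flow instance: source budget B = 10, and two disjoint routes
# s -> a -> t and s -> b -> t, each with edge capacity 10 (illustrative data).
# Node retention factors: node a passes on 90% of its incoming flow, b only 50%.
B, cap = 10.0, 10.0
gamma = {"a": 0.9, "b": 0.5}

def arrived(split):
    """Flow reaching the sink when `split` units leave via a and the rest via b."""
    via_a = min(split, cap)
    via_b = min(B - split, cap)
    return gamma["a"] * via_a + gamma["b"] * via_b

# The objective is linear in the split, so the optimum lies at a boundary;
# a fine grid search recovers it for this toy instance.
best = max(arrived(s) for s in [i * B / 100 for i in range(101)])  # -> 9.0
```

Routing everything through the high-retention node dominates here; "improving" a node in the paper's sense would amount to raising one of the `gamma` values.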
We formulate the resulting robust flow problem with losses and improvability as a mixed-integer program for finitely many scenarios, and present an iterative scenario-generation procedure that avoids the inclusion of all scenarios from the beginning. In a computational study using both randomly generated instances and realistic data based on the city of Nice, France, we compare our solution algorithms.

Authors: Marc Goerigk; Ismaila Abderhamane
Type: preprint
Link: https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3947
Published: Wed, 10 Dec 2014 15:24:43 +0100

A Bicriteria Approach to Robust Optimization
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3940
The classic approach in robust optimization is to optimize the solution with respect to the worst-case scenario. This pessimistic approach yields solutions that perform best if the worst scenario happens, but usually perform badly on average. A solution that optimizes the average performance, on the other hand, lacks a worst-case performance guarantee.
In practice it is important to find a good compromise between these two solutions. We propose to deal with this problem by considering it from a bicriteria perspective. The Pareto curve of the bicriteria problem visualizes exactly how costly it is to ensure robustness and helps to choose the solution with the best balance between expected and guaranteed performance.
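The bicriteria view can be sketched on a discrete toy example: map every candidate solution to the point (average cost, worst-case cost) and keep the nondominated ones. The solutions and scenario costs below are invented for illustration; the paper's column generation approach is not reproduced here:

```python
# Each candidate solution has a cost in each of three scenarios (illustrative data).
costs = {
    "x1": [10, 10, 10],   # perfectly balanced
    "x2": [4, 9, 14],     # good on average, weak worst case
    "x3": [6, 8, 13],
    "x4": [12, 12, 9],
}

# Map every solution to the bicriteria point (average cost, worst-case cost).
points = {s: (sum(c) / len(c), max(c)) for s, c in costs.items()}

def dominates(p, q):
    """p dominates q if it is no worse in both criteria and strictly differs."""
    return p[0] <= q[0] and p[1] <= q[1] and p != q

pareto = {s for s, p in points.items()
          if not any(dominates(points[t], p) for t in points if t != s)}
```

The resulting Pareto set makes the price of robustness visible: moving along it trades expected cost against the worst-case guarantee.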
Building upon a theoretical observation on the structure of Pareto solutions for problems with polyhedral feasible sets, we present a column generation approach that requires no direct solution of the computationally expensive worst-case problem. In computational experiments we demonstrate the effectiveness of both the proposed algorithm and the bicriteria perspective in general.

Authors: André Chassein; Marc Goerigk
Type: preprint
Link: https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3940
Published: Mon, 08 Dec 2014 16:08:38 +0100

Robust Geometric Programming is co-NP hard
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3938
Geometric Programming is a useful tool with a wide range of applications in engineering. As input data in real-world problems is likely to be affected by uncertainty, Hsiung, Kim, and Boyd introduced robust geometric programming to include the uncertainty in the optimization process. They also developed a tractable approximation method to tackle this problem. Further, they pose the question whether there exists a tractable reformulation of their robust geometric programming model instead of only an approximation method. We give a negative answer to this question by showing that robust geometric programming is co-NP hard in its natural posynomial form.

Authors: André Chassein; Marc Goerigk
Type: preprint
Link: https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3938
Published: Fri, 05 Dec 2014 14:17:25 +0100

Sink Location to Find Optimal Shelters in Evacuation Planning
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3934
The sink location problem is a combination of network flow and location problems: From a given set of nodes in a flow network a minimum cost subset \(W\) has to be selected such that given supplies can be transported to the nodes in \(W\). In contrast to its counterpart, the source location problem, which has already been studied in the literature, sinks in general have a limited capacity. Sink location has a decisive application in evacuation planning, where the supplies correspond to the number of evacuees and the sinks to emergency shelters.
We classify sink location problems according to capacities on shelter nodes, simultaneous or non-simultaneous flows, and single or multiple assignments of evacuee groups to shelters. Resulting combinations are interpreted in the evacuation context and analyzed with respect to their worst-case complexity status.
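Stripped of the network-routing aspect, the core selection step resembles a covering problem: pick a cheapest shelter set whose total capacity covers the evacuees. The shelters, costs, and capacities below are hypothetical, and brute-force enumeration replaces the paper's algorithms:

```python
from itertools import combinations

# Candidate shelter nodes: (opening cost, capacity) — illustrative data.
shelters = {"A": (5, 40), "B": (4, 30), "C": (7, 60), "D": (3, 20)}
supply = 70  # total number of evacuees that must be accommodated

def cheapest_cover(shelters, supply):
    """Enumerate subsets and return the min-cost set whose capacity covers the supply."""
    best, best_cost = None, float("inf")
    names = list(shelters)
    for r in range(1, len(names) + 1):
        for subset in combinations(names, r):
            cap = sum(shelters[s][1] for s in subset)
            cost = sum(shelters[s][0] for s in subset)
            if cap >= supply and cost < best_cost:
                best, best_cost = set(subset), cost
    return best, best_cost

best, best_cost = cheapest_cover(shelters, supply)  # {'A', 'B'} at cost 9
```

In the actual sink location problem the chosen set must additionally admit a feasible flow of evacuees through the network, which this toy version ignores.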
There are several approaches to tackle these problems: Generic solution methods for uncapacitated problems are based on source location and modifications of the network. In the capacitated case, for which source location cannot be applied, we suggest alternative approaches which work in the original network. It turns out that the latter class of algorithms is superior to the former. This is established in numerical tests including random data as well as real-world data from the city of Kaiserslautern, Germany.

Authors: Philipp Heßler; Horst W. Hamacher
Type: preprint
Link: https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3934
Published: Mon, 24 Nov 2014 11:46:38 +0100

Empirical Evaluation for the Conceptual Interoperability Analysis Approach: A Controlled Experiment Design
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3915
Building interoperation among separately developed software units requires checking their conceptual assumptions and constraints. However, eliciting such assumptions and constraints is a time-consuming and challenging task, as it requires analyzing each of the interoperating software units. To address this issue we proposed a new conceptual interoperability analysis approach which aims at decreasing the analysis cost and the conceptual mismatches between the interoperating software units. In this report we present the design of a planned controlled experiment for evaluating the effectiveness, efficiency, and acceptance of our proposed conceptual interoperability analysis approach. The design includes the study objectives, research questions, statistical hypotheses, and experimental design. It also provides the materials that will be used in the execution phase of the planned experiment.

Authors: Hadil Abukwaik
Type: preprint
Link: https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3915
Published: Tue, 11 Nov 2014 11:13:20 +0100

A coverage-based Box-Algorithm to compute a representation for optimization problems with three objective functions
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3911
A new algorithm for optimization problems with three objective functions is presented which computes a representation for the set of nondominated points. This representation is guaranteed to have a desired coverage error, and a bound on the number of iterations needed by the algorithm to meet this coverage error is derived. Since the representation does not necessarily contain nondominated points only, ideas to calculate bounds for the representation error are given. Moreover, the incorporation of domination during the algorithm and other quality measures are discussed.

Authors: Tobias Kuhn; Stefan Ruzika
Type: preprint
Link: https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3911
Published: Fri, 07 Nov 2014 10:55:37 +0100

Alternative Formulations for the Ordered Weighted Averaging Objective
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3899
The ordered weighted averaging objective (OWA) is an aggregate function over multiple optimization criteria which has received increasing attention from the research community over the last decade. In contrast to an ordinary weighted sum, weights are attached to the ordered objective function values (i.e., one weight for the largest value, one for the second-largest value, and so on). As this contains max-min or worst-case optimization as a special case, OWA can also be considered as an alternative approach to robust optimization.
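The OWA aggregation itself (not the paper's reformulations) is a one-liner: sort the objective values, then take a weighted sum. The cost vector and weights below are illustrative:

```python
def owa(values, weights):
    """Ordered weighted average: weights apply to the sorted (descending) values."""
    assert len(values) == len(weights)
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# With weights (1, 0, 0) OWA reduces to the worst case (the maximum of a cost
# vector); uniform weights recover the plain average.
costs = [3.0, 7.0, 5.0]
worst = owa(costs, [1.0, 0.0, 0.0])         # 7.0
avg   = owa(costs, [1/3, 1/3, 1/3])         # 5.0
```

The difficulty for linear programming is that the sorting makes the objective piecewise linear, which is what the compact reformulations address.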
For linear programs with OWA objective, compact reformulations exist, which result in extended linear programs. We present new such reformulation models with reduced size. A computational comparison indicates that these formulations improve solution times.

Authors: Andre Chassein; Marc Goerigk
Type: preprint
Link: https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3899
Published: Thu, 16 Oct 2014 10:45:12 +0200

On The Recoverable Robust Traveling Salesman Problem
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3898
We consider an uncertain traveling salesman problem, where distances between nodes are not known exactly, but may stem from an uncertainty set of possible scenarios. This uncertainty set is given as intervals with an additional bound on the number of distances that may deviate from their expected, nominal value.
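For a fixed tour, the worst case under this budgeted interval uncertainty is easy to state: the adversary raises the costs of the tour edges with the largest deviations, up to the budget. The graph data below is hypothetical, and this sketch evaluates a first-stage tour only, without the recovery step of the paper's model:

```python
def worst_case_tour_cost(tour_edges, nominal, deviation, gamma):
    """Worst-case cost of a fixed tour when at most `gamma` edge costs may rise
    from their nominal value to the upper end of their interval. The adversary
    picks the gamma largest deviations among the tour's edges."""
    base = sum(nominal[e] for e in tour_edges)
    worst_devs = sorted((deviation[e] for e in tour_edges), reverse=True)[:gamma]
    return base + sum(worst_devs)

# Illustrative three-city instance.
nominal   = {("a", "b"): 2, ("b", "c"): 3, ("c", "a"): 4}
deviation = {("a", "b"): 5, ("b", "c"): 1, ("c", "a"): 2}
tour = [("a", "b"), ("b", "c"), ("c", "a")]
cost = worst_case_tour_cost(tour, nominal, deviation, gamma=2)  # 9 + 5 + 2 = 16
```

Allowing the tour to be repaired after the scenario is revealed, as in the recoverable robust model, makes this evaluation substantially harder.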
A recoverable robust model is proposed that allows a tour to change a bounded number of edges once a scenario becomes known. As the model contains an exponential number of constraints and variables, an iterative algorithm is proposed in which tours and scenarios are computed alternately.
While this approach is able to find a provably optimal solution to the robust model, it also needs to solve increasingly complex subproblems. Therefore, we also consider heuristic solution procedures based on local search moves using a heuristic estimate of the actual objective function. In computational experiments, these approaches are compared.
Finally, an alternative recovery model is discussed, where a second-stage recovery tour is not required to visit all nodes of the graph. We show that the previously NP-hard evaluation of a fixed solution now becomes solvable in polynomial time.

Authors: Andre Chassein; Marc Goerigk
Type: preprint
Link: https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3898
Published: Thu, 16 Oct 2014 10:41:06 +0200

Optimization Models to Enhance Resilience in Evacuation Planning
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3897
We argue that the concepts of resilience in engineering science and robustness in mathematical optimization are strongly related. Using evacuation planning as an example application, we demonstrate optimization techniques to improve solution resilience. These include a direct modelling of the uncertainty for stochastic or robust optimization, as well as taking multiple objective functions into account.

Authors: Marc Goerigk; Horst Hamacher
Type: preprint
Link: https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3897
Published: Thu, 16 Oct 2014 10:22:16 +0200

Effective equations for anisotropic glioma spread with proliferation: a multiscale approach
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3893
Glioma is a common type of primary brain tumor with a strongly invasive potential, often exhibiting nonuniform, highly irregular growth. This makes it difficult to assess the extent of the tumor, which poses an additional challenge for its treatment. It is therefore necessary to understand the migratory behavior of glioma in greater detail.
In this paper we propose a multiscale model for glioma growth and migration. Our model couples the microscale dynamics (reduced to the binding of surface receptors to the
surrounding tissue) with a kinetic transport equation for the cell density on the mesoscopic level of individual cells. On the latter scale we also include the
proliferation of tumor cells via effects of interaction with the tissue. An adequate parabolic scaling yields a convection-diffusion-reaction equation, for which the coefficients
can be explicitly determined from the information about the tissue obtained by diffusion tensor imaging. Numerical simulations relying on DTI measurements confirm the biological
findings that glioma spreads
along white matter tracts.

Authors: Christian Engwer; Alexander Hunt; Christina Surulescu
Type: preprint
Link: https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3893
Published: Tue, 14 Oct 2014 14:08:25 +0200

Numerical schemes for networks of hyperbolic conservation laws
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3884
In this paper we propose a procedure to extend classical numerical schemes for
hyperbolic conservation laws to networks of hyperbolic conservation laws. At the
junctions of the network we solve the given coupling conditions and minimize the
contributions of the outgoing numerical waves. This flexible procedure allows
us to also use central schemes at the junctions. Several numerical examples are
considered to investigate the performance of this new approach compared to the
common Godunov solver and exact solutions.

Authors: Raul Borsche
Type: preprint
Link: https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3884
Published: Wed, 01 Oct 2014 09:37:55 +0200

A multiscale model for acid-mediated tumor invasion: therapy approaches
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3870
Starting from the two-scale model for pH-taxis of cancer cells introduced in [1], we consider here an extension accounting for tumor heterogeneity with respect to treatment sensitivity, and a treatment approach including chemo- and radiotherapy. The effect of peritumoral region alkalinization on such a therapeutic combination is investigated with the aid of numerical simulations.

Authors: Gülnihal Meral; Christian Stinner; Christina Surulescu
Type: preprint
Link: https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3870
Published: Fri, 12 Sep 2014 14:09:53 +0200

First Order Algorithms in Variational Image Processing
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3852
Variational methods in imaging are nowadays developing towards a quite universal and flexible tool, allowing for highly successful approaches on tasks like denoising, deblurring, inpainting, segmentation, super-resolution, disparity, and optical flow estimation. The overall structure of such approaches is of the form

\(\min_u \; D(Ku) + \alpha R(u),\)

where the functional \(D\) is a data fidelity term, also depending on some input data \(f\) and measuring the deviation of \(Ku\) from such, and \(R\) is a regularization functional. Moreover, \(K\) is an (often linear) forward operator modeling the dependence of data on an underlying image, and \(\alpha\) is a positive regularization parameter. While \(D\) is often smooth and (strictly) convex, the current practice almost exclusively uses nonsmooth regularization functionals. The majority of successful techniques uses nonsmooth and convex functionals like the total variation and generalizations thereof, cf. [28, 31, 40], or \(\ell_1\)-norms of coefficients arising from scalar products with some frame system, cf. [73] and references therein.
The efficient solution of such variational problems in imaging demands appropriate algorithms. Taking into account the specific structure as a sum of two very different terms to be minimized, splitting algorithms are a quite canonical choice. Consequently, this field has revived the interest in techniques like operator splittings or augmented Lagrangians. In this chapter we shall provide an overview of methods currently developed and recent results, as well as some computational studies providing a comparison of different methods and also illustrating their success in applications.
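The prototypical splitting scheme for this structure is proximal gradient descent (ISTA): a gradient step on the smooth data term followed by the proximal map of the regularizer. The sketch below uses \(R = \|\cdot\|_1\), whose proximal map is componentwise soft shrinkage, and a tiny made-up problem in place of an imaging operator:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal map of t * ||.||_1 (componentwise soft shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, f, alpha, steps=500):
    """Proximal gradient (ISTA) for min_u 0.5*||A u - f||^2 + alpha*||u||_1."""
    u = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(steps):
        grad = A.T @ (A @ u - f)           # gradient of the smooth data term
        u = soft_threshold(u - grad / L, alpha / L)
    return u

# Toy problem: with A = I the minimizer is the soft-thresholded data.
A = np.eye(3)
f = np.array([2.0, 0.5, -3.0])
u = ista(A, f, alpha=1.0)                  # each entry of f shrunk toward 0 by 1
```

For total variation or frame-based regularizers only the proximal map changes; the overall splitting structure stays the same.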
We start with a very general viewpoint in the first sections, discussing basic notation and properties of proximal maps, firmly nonexpansive operators, and averaged operators, which form the basis of further convergence arguments. Then we proceed to a discussion of several state-of-the-art algorithms and their (theoretical) convergence properties. After a section discussing issues related to the use of analogous iterative schemes for ill-posed problems, we present some practical convergence studies in numerical examples related to PET and spectral CT reconstruction.

Authors: Martin Burger; Alexander Sawatzky; Gabriele Steidl
Type: preprint
Link: https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3852
Published: Mon, 18 Aug 2014 08:34:49 +0200

A Framework for Shape Optimization in the Context of Isogeometric Analysis
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3833
We develop a framework for shape optimization problems under state equation constraints where both state and control are discretized by B-splines or NURBS. In other words, we use isogeometric analysis (IGA) for solving the partial differential equation, and a nodal approach to change domains, where control points take the place of nodes and where thus a quite general class of functions for representing optimal shapes and their boundaries becomes available. The minimization problem is solved by a gradient descent method where the shape gradient is defined in isogeometric terms. This gradient is obtained following two schemes, optimize first, discretize then and, reversely, discretize first, optimize then. We show that for isogeometric analysis, the two schemes yield the same discrete system. Moreover, we also formulate shape optimization with respect to NURBS in the optimize-first ansatz, which amounts to finding optimal control points and weights simultaneously. Numerical tests illustrate the theory.

Authors: Daniela Fußeder; Bernd Simeon; Anh-Vu Vuong
Type: preprint
Link: https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3833
Published: Thu, 24 Jul 2014 08:25:17 +0200

Hierarchical Edge Colorings and Rehabilitation Therapy Planning in Germany
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3832
In this paper we give an overview on the system of rehabilitation clinics in Germany in general and the literature on patient scheduling applied to rehabilitation facilities in particular.
We apply a class-teacher model to this environment and then generalize it to meet some of the specific constraints of inpatient rehabilitation clinics. To this end we introduce a restricted edge coloring on undirected bipartite graphs which is called group-wise balanced. The problem considered is called the patient-therapist-timetable problem with group-wise balanced constraints (PTTPgb). In order to specify weekly schedules further, such that they produce a reasonable allocation to morning/afternoon (second-level decision) and to the single periods (third-level decision), we introduce the hierarchical PTTPgb. For the corresponding model, the hierarchical edge coloring problem, we present some first feasibility results.

Authors: Ines M. Raschendorfer; Horst W. Hamacher
Type: preprint
Link: https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3832
Published: Tue, 22 Jul 2014 11:16:14 +0200

A stochastic multiscale model for acid mediated cancer invasion
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3819
Cancer research is not only a fast-growing field involving many branches of science, but also an intricate and diversified field rife with anomalies. One such anomaly is the consistent reliance of cancer cells on glucose metabolism for energy production even in a normoxic environment. Glycolysis is an inefficient pathway for energy production and is normally used during hypoxic conditions. Since cancer cells have a high demand for energy (e.g., for proliferation), it is somewhat paradoxical for them to rely on such a mechanism. An emerging conjecture aiming to explain this behavior is that cancer cells
preserve this aerobic glycolytic phenotype for its use in invasion and metastasis. We follow this hypothesis and propose a new model
for cancer invasion, depending on the dynamics of extra- and intracellular protons, by building upon the existing ones. We incorporate random perturbations in the intracellular proton dynamics to account
for uncertainties affecting the cellular machinery. Finally, we address the well-posedness of our setting and use numerical simulations to illustrate the model predictions.

Authors: Sandesh Hiremath; Christina Surulescu
Type: preprint
Link: https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3819
Published: Wed, 02 Jul 2014 08:07:19 +0200

A Comprehensive Evacuation Planning Model and Genetic Solution Algorithm
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3804
We consider the problem of evacuating an urban area in the wake of a natural or man-made disaster. Several planning aspects need to be considered in such a scenario, which are usually treated separately due to their computational complexity. These aspects include: Which shelters are used to accommodate evacuees? How to schedule public transport for transit-dependent evacuees? And how do public and individual traffic interact? Furthermore, besides the evacuation time, the risk of the evacuation also needs to be considered.
We propose a macroscopic multi-criteria optimization model that includes all of these questions simultaneously. As a mixed-integer programming formulation cannot handle instances of real-world size, we develop a genetic algorithm of NSGA-II type that is able to generate feasible solutions of good quality in reasonable computation times.
We extend the applicability of these methods by also considering how to aggregate instance data, and how to generate solutions for the original instance starting from a reduced solution.
In computational experiments using real-world data modelling the cities of Nice in France and Kaiserslautern in Germany, we demonstrate the effectiveness of our approach and compare the trade-off between different levels of data aggregation.

Authors: Marc Goerigk; Kaouthar Deghdak; Philipp Heßler
Type: preprint
Link: https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3804
Published: Fri, 23 May 2014 10:39:02 +0200

A New Bound for the Midpoint Solution in Minmax Regret Optimization with an Application to the Robust Shortest Path Problem
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3802
Minmax regret optimization aims at finding robust solutions that perform best in the worst-case, compared to the respective optimum objective value in each scenario. Even for simple uncertainty sets like boxes, most polynomially solvable optimization problems have strongly NP-hard minmax regret counterparts. Thus, heuristics with performance guarantees can potentially be of great value, but only few such guarantees exist.
A very easy but effective approximation technique is to compute the midpoint solution of the original optimization problem, which aims at optimizing the average regret as well as the average nominal objective. It is a well-known result that the regret of the midpoint solution is at most 2 times the optimal regret. Apart from some academic instances showing that this bound is tight, most instances exhibit a much better approximation ratio.
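For the robust shortest path problem with interval costs, the midpoint heuristic simply solves a nominal shortest path problem with each edge cost set to the center of its interval. The graph below is invented for illustration, and the regret bound itself is not computed here:

```python
import heapq

def dijkstra(adj, source, target):
    """Standard Dijkstra, returning the cost of a shortest source-target path."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, v = heapq.heappop(heap)
        if v == target:
            return d
        if d > dist.get(v, float("inf")):
            continue
        for w, c in adj.get(v, []):
            nd = d + c
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(heap, (nd, w))
    return float("inf")

# Interval costs [lower, upper] per edge — illustrative data.
intervals = {("s", "a"): (1, 5), ("a", "t"): (1, 5), ("s", "t"): (4, 4)}

# Midpoint heuristic: solve the nominal problem with costs (lower + upper) / 2.
adj = {}
for (v, w), (lo, hi) in intervals.items():
    adj.setdefault(v, []).append((w, (lo + hi) / 2))

midpoint_cost = dijkstra(adj, "s", "t")   # the certain direct edge wins here
```

Here the midpoint solution prefers the certain direct edge over the cheap-but-risky two-edge route, which is exactly the balancing behavior that makes the heuristic effective in practice.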
We introduce a new lower bound for the optimal value of the minmax regret problem. Using this lower bound we state an algorithm that gives an instance dependent performance guarantee of the midpoint solution for combinatorial problems that is at most 2. The computational complexity of the algorithm depends on the minmax regret problem under consideration; we show that the sharpened guarantee can be computed in strongly polynomial time for several classes of combinatorial optimization problems.
To illustrate the quality of the proposed bound, we use it within a branch and bound framework for the robust shortest path problem. In an experimental study comparing this approach with a bound from the literature, we find a considerable improvement in computation times.

Authors: André Chassein; Marc Goerigk
Type: preprint
Link: https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3802
Published: Fri, 16 May 2014 08:57:32 +0200

Hypervolume Subset Selection in Two Dimensions: Formulations and Algorithms
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3798
The hypervolume subset selection problem consists of finding a subset, with a given cardinality \(k\), of a set of nondominated points that maximizes the hypervolume indicator. This problem arises in selection procedures of evolutionary algorithms for multiobjective optimization, for which practically efficient algorithms are required. In this article, two new formulations are provided for the two-dimensional variant of this problem.
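For reference, the quantity being maximized is easy to compute in two dimensions: for a nondominated set (minimization in both objectives), sort by the first objective and sum the rectangular slices up to a reference point. The points below are illustrative, and brute-force enumeration stands in for the formulations developed in the paper:

```python
from itertools import combinations

def hypervolume_2d(points, ref):
    """Hypervolume (area) dominated by a 2D nondominated point set,
    minimized in both objectives, w.r.t. a reference point `ref`."""
    pts = sorted(points)                      # ascending in the first objective
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:                          # y values descend along the front
        hv += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return hv

def best_subset(points, k, ref):
    """Brute-force hypervolume subset selection: a size-k subset of maximum area."""
    return max(combinations(points, k), key=lambda s: hypervolume_2d(s, ref))

front = [(1, 4), (2, 2), (4, 1)]
chosen = best_subset(front, 2, ref=(5, 5))
```

The exhaustive search above is exponential in \(k\); the point of the paper's \(k\)-link shortest path formulation is to replace it with an efficient dynamic program.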
The first is a (linear) integer programming formulation that can be solved by solving its linear programming relaxation. The second formulation is a \(k\)-link shortest path formulation on a special digraph with the Monge property that can be solved by dynamic programming in \(\mathcal{O}(n(k + \log n))\) time. This improves upon the \(\mathcal{O}(n^2k)\) result of Bader (2009), and matches the recent result of Bringmann et al. (2014), which was developed independently from this work using different techniques. Moreover, it is shown that these bounds may be further improved under mild conditions on \(k\).

Authors: Tobias Kuhn; Carlos M. Fonseca; Luís Paquete; Stefan Ruzika; José Rui Figueira
Type: preprint
Link: https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3798
Published: Wed, 14 May 2014 15:38:00 +0200

Nonlinear frequency response analysis of structural vibrations
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3788
In this paper we present a method for nonlinear frequency response analysis of mechanical vibrations of 3-dimensional solid structures.
For computing nonlinear frequency response to periodic excitations, we employ the well-established harmonic balance method.
A fundamental aspect for allowing a large-scale application of the method is model order reduction of the discretized equation of motion. Therefore we propose the utilization of a modal projection method enhanced with modal derivatives, providing second-order information.
For an efficient spatial discretization of the nonlinear partial differential equations of continuum mechanics, including large deformations and hyperelastic material laws, we use the isogeometric finite element method, which has already been shown to possess advantages over classical finite element discretizations in terms of the accuracy of numerical approximations in linear vibration and static large deformation analysis.
With several computational examples, we demonstrate the applicability and accuracy of the modal derivative reduction method for nonlinear static computations and vibration analysis.
Thus, the presented method opens a promising perspective on application of nonlinear frequency analysis to large-scale industrial problems.
Authors: Oliver Weeger; Utz Wever; Bernd Simeon
Type: preprint
Link: https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3788
Published: Thu, 08 May 2014 11:04:50 +0200

Hypervolume Subset Selection in Two Dimensions: Formulations and Algorithms
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3770
The hypervolume subset selection problem consists of finding a subset, with a given cardinality, of a nondominated set of points that maximizes the hypervolume indicator. This problem arises in selection procedures of population-based heuristics for multiobjective optimization, for which practically efficient algorithms are strongly required. In this article, we provide two new formulations for the two-dimensional variant of this problem.
The first is an integer programming formulation that can be solved by solving its linear relaxation. The second formulation is a \(k\)-link shortest path formulation on a special digraph with the Monge property that can be solved by dynamic programming in \(\mathcal{O}(n^2)\) time. This improves upon the existing \(\mathcal{O}(n^3)\) result of Bader.

Authors: Tobias Kuhn; Carlos M. Fonseca; Luís Paquete; Stefan Ruzika; José Rui Figueira
Type: preprint
Link: https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3770
Published: Mon, 31 Mar 2014 07:47:17 +0200

Edgeworth expansions for lattice triangular arrays
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3765
Edgeworth expansions have been introduced as a generalization of the central limit theorem and allow one to investigate the convergence properties of sums of i.i.d. random variables. We consider triangular arrays of lattice random vectors and obtain a valid Edgeworth expansion for this case. The presented results can be used, for example, to study the convergence behavior of lattice models.

Authors: Alona Bock
Type: preprint
Link: https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3765
Published: Wed, 26 Mar 2014 12:14:40 +0100

A multiscale model for pH-tactic invasion with time-varying carrying capacities
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3757
We propose a model for acid-mediated tumor invasion involving two different scales: the microscopic one, for the dynamics of intracellular protons and their exchange with their extracellular counterparts, and the macroscopic scale of interactions between tumor cell and normal cell populations, along with the evolution of extracellular protons. We also account for the tactic behavior of cancer cells, which are assumed to bias their motion according to a gradient of extracellular protons (following [2,31] we call this pH taxis). A time-dependent (and also time-delayed) carrying capacity for the tumor cells in response to the effects of acidity is considered as well. The global well-posedness of the resulting multiscale model is proved with a regularization and fixed-point argument. Numerical simulations are performed in order to illustrate the behavior of the model.

Authors: Christina Surulescu; Gülnihal Meral; Christian Stinner
Type: preprint
Link: https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3757
Published: Tue, 18 Mar 2014 14:06:43 +0100