## Fachbereich Mathematik

### Refine

#### Year of publication

- 2008 (28)

#### Document Type

- Doctoral Thesis (15)
- Preprint (8)
- Report (4)
- Study Thesis (1)

#### Keywords

- Level-Set-Methode (2)
- domain decomposition (2)
- mesh generation (2)
- Abgeschlossenheit (1)
- Adjoint system (1)
- Alter (1)
- Annulus (1)
- Automatische Spracherkennung (1)
- Bayes-Entscheidungstheorie (1)
- Behinderter (1)
- Berechnungskomplexität (1)
- Bernstein Kern (1)
- Biot Poroelastizitätgleichung (1)
- CDS (1)
- CPDO (1)
- Center Location (1)
- Circle Location (1)
- Combinatorial Optimization (1)
- Core (1)
- Credit Risk (1)
- Curvature (1)
- Defaultable Options (1)
- Delaunay (1)
- Delaunay triangulation (1)
- Delaunay triangulierung (1)
- Eigenschwingung (1)
- Entscheidungsbaum (1)
- FEM (1)
- Fiber spinning (1)
- Fiber suspension flow (1)
- Filtration (1)
- Finanzmathematik (1)
- First-order optimality system (1)
- Fluid-Struktur-Wechselwirkung (1)
- Gebietszerlegung (1)
- Gebietszerlegungsmethode (1)
- Gittererzeugung (1)
- Gravimetrie (1)
- Harmonische Funktion (1)
- Hub Location Problem (1)
- Hysterese (1)
- Inverses Problem (1)
- Kaktusgraph (1)
- Kopplungsproblem (1)
- Krümmung (1)
- Kugelflächenfunktion (1)
- Level set methods (1)
- Location (1)
- MBS (1)
- MKS (1)
- Markov-Ketten-Monte-Carlo-Verfahren (1)
- Massendichte (1)
- Mathematical Finance (1)
- Mixed integer programming (1)
- Multicriteria optimization (1)
- Multiperiod planning (1)
- Multiskalenapproximation (1)
- Neumann Wavelets (1)
- Neumann wavelets (1)
- Newtonsches Potenzial (1)
- Nichtlineare Approximation (1)
- Niederschlag (1)
- Nonlinear Optimization (1)
- Optimal control (1)
- Orthonormalbasis (1)
- Punktprozess (1)
- Regularisierung (1)
- Semantik (1)
- Shapley value (1)
- Shapleywert (1)
- Signalanalyse (1)
- Sphäre (1)
- Spieltheorie (1)
- Spline (1)
- Standortprobleme (1)
- Stochastic Control (1)
- Stochastische optimale Kontrolle (1)
- Stokes Wavelets (1)
- Stokes wavelets (1)
- Stokes-Gleichung (1)
- Stoßdämpfer (1)
- Systemidentifikation (1)
- Tensorfeld (1)
- Topologieoptimierung (1)
- Topology optimization (1)
- Tropische Geometrie (1)
- Vektorfeld (1)
- Vollständigkeit (1)
- Wellengeschwindigkeit (1)
- Zeitreihe (1)
- Zentrenprobleme (1)
- benders decomposition (1)
- body wave velocity (1)
- cactus graph (1)
- cancer radiation therapy (1)
- change point (1)
- closure approximation (1)
- complexity (1)
- cooperative game (1)
- core (1)
- cuts (1)
- decision support systems (1)
- estimation (1)
- extreme equilibria (1)
- film casting (1)
- filtration (1)
- finite volume method (1)
- fluid structure interaction (1)
- free boundary (1)
- free surface (1)
- freie Oberfläche (1)
- gebietszerlegung (1)
- geometric ergodicity (1)
- gitter (1)
- harmonic density (1)
- harmonische Dichte (1)
- heuristic (1)
- interface problem (1)
- inverse problems (1)
- kooperative Spieltheorie (1)
- level set method (1)
- logical analysis (1)
- logische Analyse (1)
- mathematical modelling (1)
- matrix decomposition (1)
- minimaler Schnittbaum (1)
- minimum cut tree (1)
- monotropic programming (1)
- multileaf collimator sequencing (1)
- multiscale approximation (1)
- network congestion game (1)
- netzgenerierung (1)
- nichtlineare Modellreduktion (1)
- nonlinear model reduction (1)
- nonwovens (1)
- normal mode (1)
- optimal control (1)
- porous media (1)
- poröse Medien (1)
- reproducing kernel (1)
- reproduzierender Kern (1)
- rheology (1)
- schlecht gestellt (1)
- splitting function (1)
- stationarity (1)
- stochastic optimal control (1)
- tension problems (1)
- total latency (1)
- transmission conditions (1)
- well-posedness (1)
- Übergangsbedingungen (1)

The desire to model geometrical and physical features in ever increasing detail has led to a steady increase in the number of points used in field solvers. While many solvers have been ported to parallel machines, grid generators have been left behind. Sequential generation of large meshes is extremely problematic in terms of both time and memory requirements, so the need to develop parallel mesh generation techniques is well justified. In this work a novel algorithm is presented for the automatic parallel generation of tetrahedral computational meshes based on geometric domain decomposition; it has the potential to remove this bottleneck. Different domain decomposition approaches and criteria have been investigated, and questions regarding time and memory consumption, computational efficiency, and the quality of the generated surface and volume meshes have been considered. As a result of this work, the parTgen software package (a partitioner and parallel tetrahedral mesh generator) based on the developed algorithm has been created. Several real-life examples of relatively complex structures involving large meshes (on the order of 10^7-10^8 elements) are given. It is shown that high mesh quality is achieved, memory and time consumption are reduced significantly, and the parallel algorithm is efficient.
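The overall scheme — decompose the domain geometrically, mesh each subdomain independently, and merge along matching interfaces — can be illustrated with a deliberately simplified 2D analogue (structured triangles instead of tetrahedra, a thread pool instead of distributed processes; this is a sketch of the idea, not parTgen's actual algorithm):

```python
from concurrent.futures import ThreadPoolExecutor

def mesh_strip(strip):
    """Triangulate one rectangular strip with a structured grid.

    Stand-in for a real mesher run on one subdomain; returns the
    triangles as triples of vertex coordinates.
    """
    x0, x1, nx, ny = strip
    hx, hy = (x1 - x0) / nx, 1.0 / ny
    tris = []
    for i in range(nx):
        for j in range(ny):
            a = (x0 + i * hx,       j * hy)
            b = (x0 + (i + 1) * hx, j * hy)
            c = (x0 + (i + 1) * hx, (j + 1) * hy)
            d = (x0 + i * hx,       (j + 1) * hy)
            tris += [(a, b, c), (a, c, d)]
    return tris

def parallel_mesh(num_subdomains=4, nx_per_strip=8, ny=8):
    # Geometric domain decomposition: cut the unit square into vertical strips.
    w = 1.0 / num_subdomains
    strips = [(k * w, (k + 1) * w, nx_per_strip, ny) for k in range(num_subdomains)]
    # Each subdomain is meshed independently (in practice: one process/rank each).
    with ThreadPoolExecutor() as ex:
        parts = list(ex.map(mesh_strip, strips))
    # Merge: interface nodes coincide because strips share boundary coordinates.
    return [t for part in parts for t in part]

mesh = parallel_mesh()
print(len(mesh))  # 4 strips * 8*8 cells * 2 triangles = 512
```

The decomposition criterion (here: equal-width strips) is where load balancing and interface quality enter in the real algorithm.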

We present an optimal control approach for the isothermal film casting process with free surfaces, described by averaged Navier-Stokes equations. We control the thickness of the film at the take-up point using the shape of the nozzle; the control goal is to find an even thickness profile. To achieve this goal, we minimize an appropriate cost functional. The resulting minimization problem is solved numerically by a steepest descent method, in which the gradient of the cost functional is approximated using the adjoint variables of the problem with fixed film width. Numerical simulations show the applicability of the proposed method.
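The optimization loop described above — steepest descent with an adjoint-based gradient — can be sketched on a toy model. Here the forward model mapping nozzle profile to film thickness is replaced by a simple symmetric averaging operator, so its adjoint is the same averaging applied to the residual; the actual film casting model and its adjoint are of course far more involved:

```python
def smooth(v):
    """Toy forward model S: nozzle profile -> film thickness (local averaging)."""
    n = len(v)
    return [(v[max(i - 1, 0)] + 2 * v[i] + v[min(i + 1, n - 1)]) / 4
            for i in range(n)]

def cost(u, target):
    """Cost functional J(u) = || S(u) - target ||^2."""
    return sum((s - t) ** 2 for s, t in zip(smooth(u), target))

def grad(u, target):
    # Adjoint-based gradient: S is symmetric here, so the adjoint of the
    # forward model is the same averaging, applied to the residual.
    r = [2 * (s - t) for s, t in zip(smooth(u), target)]
    return smooth(r)

target = [1.0] * 8            # desired even thickness profile
u = [0.0] * 8                 # initial nozzle profile
for _ in range(200):          # steepest descent with a fixed step size
    g = grad(u, target)
    u = [ui - 0.4 * gi for ui, gi in zip(u, g)]
# u converges to the flat profile that produces the target thickness
```

A line search (rather than the fixed step used here) is the usual refinement in practice.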

An optimal control problem for a mathematical model of a melt spinning process is considered. Newtonian and non-Newtonian models are used to describe the rheology of the polymeric material the fiber is made of. The extrusion velocity of the polymer at the spinneret as well as the velocity and temperature of the quench air serve as control variables. A constrained optimization problem is derived, and the first-order optimality system is set up to obtain the adjoint equations. Numerical solutions are computed using a steepest descent algorithm.
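Schematically, such a first-order optimality system has the standard form for a constrained problem, minimize J(y,u) subject to e(y,u) = 0 with state y, control u, and adjoint p (a generic sketch, not the specific melt spinning equations):

```latex
\begin{aligned}
e(y,u) &= 0 && \text{(state equation)}\\
e_y(y,u)^{*}\, p &= -J_y(y,u) && \text{(adjoint equation)}\\
J_u(y,u) + e_u(y,u)^{*}\, p &= 0 && \text{(optimality condition)}
\end{aligned}
```

The steepest descent direction is the negative left-hand side of the optimality condition, evaluated at the current control after solving the state and adjoint equations.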

Finding a delivery plan for cancer radiation treatment using multileaf collimators operating in "step-and-shoot" mode can be formulated mathematically as the problem of decomposing an integer matrix into a weighted sum of binary matrices having the consecutive-ones property, and sometimes other properties related to the collimator technology. The efficiency of the delivery plan is measured both by the sum of weights in the decomposition, known as the total beam-on time, and by the number of different binary matrices appearing in it, referred to as the cardinality, the latter being closely related to the set-up time of the treatment. In practice, the total beam-on time is usually restricted to its minimum possible value (which is easy to find), and a decomposition that minimises cardinality subject to this restriction is sought.
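The minimum possible total beam-on time admits a simple closed form that is standard in this literature: per row, sum the positive jumps encountered when scanning left to right (the leading entry counts as a jump from 0), then take the maximum over the rows. A short sketch of this bound (additional collimator constraints, which the paper also considers, can raise it):

```python
def min_beam_on_time(matrix):
    """Minimum total beam-on time for decomposing a nonnegative integer
    matrix into weighted binary matrices with the consecutive-ones
    property in each row (no extra collimator constraints assumed)."""
    def row_complexity(row):
        prev, total = 0, 0
        for a in row:
            if a > prev:          # a new "rise" must be opened by some shape
                total += a - prev
            prev = a
        return total
    return max(row_complexity(r) for r in matrix)

print(min_beam_on_time([[2, 3, 1],
                        [1, 4, 2]]))  # -> 4  (row 2: jumps 1 and 3)
```

Minimising cardinality subject to this beam-on time, by contrast, is the hard combinatorial part of the problem.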

The purpose of this paper is the canonical connection of classical global gravity field determination following the concept of Stokes (1849), Bruns (1878), and Neumann (1887) on the one hand and modern locally oriented multiscale computation by use of adaptive locally supported wavelets on the other hand. Essential tools are regularization methods of the Green, Neumann, and Stokes integral representations. The multiscale approximation is obtained simply as a linear difference scheme by use of Green, Neumann, and Stokes wavelets, respectively. As an application, gravity anomalies caused by plumes are investigated for the Hawaiian and Iceland areas.
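The linear difference scheme underlying such a multiscale approximation is, schematically, a telescoping sum over scales (a generic sketch of the construction, with F_j denoting the scale-j approximation of the target field F):

```latex
F_J \;=\; F_{j_0} \;+\; \sum_{j=j_0}^{J-1}\bigl(F_{j+1}-F_j\bigr)
```

where each detail F_{j+1} - F_j is evaluated against the corresponding (Green, Neumann, or Stokes) wavelets, and local support of the wavelets makes the sum adaptive to the region of interest.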

In many applications of statistics, e.g. in medicine, finance, and industry, the model parameters may undergo changes at an unknown moment in time. In this thesis, we consider change point analysis in a regression setting for dichotomous responses, i.e. responses that can be modeled as Bernoulli or 0-1 variables. Applications are widespread, including credit scoring in financial statistics and dose-response relations in biometry. The model parameters are estimated using a neural network method. We show that the parameter estimates are identifiable up to a given family of transformations, and we derive the consistency and asymptotic normality of the network parameter estimates using the results of Franke and Neumann (2000). We use a neural network based likelihood ratio test statistic to detect a change point in a given set of data and derive the limit distribution of the estimator using the results of Gombay and Horvath (1994, 1996) under the assumption that the model is properly specified. For the misspecified case, we develop a scaled test statistic for a one-dimensional parameter. Through simulation, we show that the sample size, the change point location, and the size of the change influence change point detection. Once a change point has been detected, it is estimated by maximum likelihood; simulations show that change point estimation is likewise influenced by the sample size, the change point location, and the size of the change. We present two methods for determining change point confidence intervals: the profile log-likelihood ratio method and the percentile bootstrap method. Through simulation, the percentile bootstrap method is shown to be superior to the profile log-likelihood ratio method.
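The basic likelihood ratio scan for a single change point can be illustrated on the simplest dichotomous model: a Bernoulli sequence with a constant success rate before and after the change (a deliberate simplification of the thesis's neural network regression setting):

```python
from math import log

def loglik(k, n):
    """Bernoulli log-likelihood of k successes in n trials at the MLE
    p = k/n, with the convention 0*log(0) = 0."""
    if k == 0 or k == n:
        return 0.0
    p = k / n
    return k * log(p) + (n - k) * log(1 - p)

def change_point_lrt(y):
    """Scan all split points of a 0-1 sequence; return the split with the
    largest likelihood ratio statistic and the statistic itself."""
    n, total = len(y), sum(y)
    base = loglik(total, n)          # no-change model
    best_t, best_stat = None, -1.0
    for t in range(1, n):            # change after position t
        left = sum(y[:t])
        stat = 2 * (loglik(left, t) + loglik(total - left, n - t) - base)
        if stat > best_stat:
            best_t, best_stat = t, stat
    return best_t, best_stat

y = [0] * 20 + [1] * 20              # success rate jumps at t = 20
print(change_point_lrt(y)[0])        # -> 20
```

In the thesis setting the constant rates are replaced by regression functions fitted by neural networks, and the statistic's limit distribution (rather than a fixed threshold) calibrates the test.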

We study the complexity of finding extreme pure Nash equilibria in symmetric network congestion games and analyse how it depends on the graph topology and the number of users. In our context, best and worst equilibria are those with minimum and maximum total latency, respectively. We establish that on parallel links both problems can be solved by a greedy algorithm with a suitable tie-breaking rule. On series-parallel graphs, finding a worst Nash equilibrium is NP-hard for two or more users, while finding a best one is solvable in polynomial time for two users and NP-hard for three or more. Additionally, we establish NP-hardness in the strong sense for the problem of finding a worst Nash equilibrium on a general acyclic graph.
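On parallel links, the greedy construction admits a compact sketch: users join one at a time, each choosing a link that is cheapest after joining, and the tie-breaking rule selects among equally cheap links (illustrative only; the paper's tie-breaking rules for best versus worst equilibria may differ):

```python
def greedy_equilibrium(latency_fns, num_users):
    """Assign users one at a time to the link that is cheapest after they
    join. With nondecreasing latencies on parallel links, the result is a
    pure Nash equilibrium; the tie-breaking rule decides which one."""
    loads = [0] * len(latency_fns)
    for _ in range(num_users):
        # latency each user would face on each link if it joined
        costs = [f(x + 1) for f, x in zip(latency_fns, loads)]
        loads[costs.index(min(costs))] += 1   # ties: lowest index
    return loads

def total_latency(latency_fns, loads):
    """Total latency = sum over links of load * latency at that load."""
    return sum(x * f(x) for f, x in zip(latency_fns, loads))

l1 = lambda x: x        # link 1: latency x
l2 = lambda x: 2 * x    # link 2: latency 2x
loads = greedy_equilibrium([l1, l2], 3)
print(loads, total_latency([l1, l2], loads))  # -> [2, 1] 6
```

Checking the result: the two users on link 1 each pay 2 and would pay 4 after moving; the user on link 2 pays 2 and would pay 3 after moving, so no one deviates.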

In this work we study the minimum width annulus problem (MWAP), the circle center location or circle location problem (CLP), and the point center location or point location problem (PLP) in the rectilinear and Chebyshev planes as well as in networks. The relations between these problems have served as a basis for finding elegant solution algorithms for both new and well-known problems. MWAP was formulated and investigated in the rectilinear plane. In contrast to the Euclidean metric, MWAP and PLP there have at least one common optimal point; therefore, MWAP in the rectilinear plane was solved in linear time with the help of PLP, giving the solution sequence PLP-->MWAP. It was shown that MWAP and CLP are equivalent, so CLP can also be solved in linear time. The obtained results were analysed and transferred to the Chebyshev metric. After that, the notions of a circle, a sphere, and an annulus in networks were introduced; it should be noted that the notion of a circle in a network is different from that of a cycle. An O(mn) time algorithm for the solution of MWAP was constructed and implemented. The algorithm is based on the fact that the middle point of an edge represents an optimal solution of a local minimum width annulus on this edge. The resulting complexity is better than the complexity O(mn + n^2 log n), in the unweighted case, of the fastest known algorithm for minimizing the range function, which is mathematically equivalent to MWAP. MWAP in unweighted undirected networks was extended to MWAP on subsets and to the restricted MWAP; the resulting problems were analysed and solved. The p-minimum width annulus problem was also formulated and explored. This problem is NP-hard; however, p-MWAP has been solved in polynomial O(m^2n^3p) time under the natural assumption that each minimum width annulus covers all vertices of the network whose distances to the central point of the annulus are less than or equal to the radius of its outer circle.
In contrast to the planar case, MWAP in undirected unweighted networks has turned out to be the root problem among the problems considered. During the investigation of the properties of circles in networks it was shown that the difference between planar and network circles is significant. This leads to the nonequivalence of CLP and MWAP in the general case. However, MWAP was effectively used in solution procedures for CLP, giving the sequence MWAP-->CLP. The complexity of the developed and implemented algorithm is of order O(m^2n^2). It is important to mention that CLP in networks has been formulated for the first time in this work and differs from the well-studied location of cycles in networks. We have constructed an O(mn+n^2logn) algorithm for the well-known PLP. The complexity of this algorithm is not worse than that of the currently best algorithms, but the concept of the solution procedure is new: we use MWAP in order to solve PLP, building the solution sequence MWAP-->PLP, opposite to the planar case. This method has the following advantages: first, the lower bounds obtained in the solution procedure are provably better in every case than the strongest of Halpern's lower bounds; second, the developed algorithm is so simple that it can easily be applied to complex networks manually; third, the empirical complexity of the algorithm is O(mn). MWAP was also extended to and explored in directed unweighted and weighted networks. The complexity bound O(n^2) of the developed algorithm for finding the center of a minimum width annulus in the unweighted case does not depend on the number of edges in the network, because the problems can be solved in the order PLP-->MWAP. In the weighted case, the computational time is of order O(mn^2).
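The planar PLP-->MWAP sequence can be illustrated in the rectilinear plane: the 1-center is found in linear time via the 45-degree rotation that turns the l1 metric into l-infinity, and since MWAP and PLP share an optimal point there, the annulus width can then be read off at that center (a toy sketch, not the thesis's full treatment):

```python
def l1(p, q):
    """Rectilinear (Manhattan) distance."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def rectilinear_center(points):
    """Rectilinear 1-center in linear time: the rotation
    (x, y) -> (x+y, x-y) turns l1 into l-infinity, where the center is
    simply the midpoint of the bounding box."""
    us = [x + y for x, y in points]
    vs = [x - y for x, y in points]
    uc = (min(us) + max(us)) / 2
    vc = (min(vs) + max(vs)) / 2
    return ((uc + vc) / 2, (uc - vc) / 2)   # rotate back

def annulus_width_at(c, points):
    """Width of the narrowest annulus centered at c covering all points."""
    d = [l1(c, p) for p in points]
    return max(d) - min(d)

pts = [(0, 0), (4, 0), (0, 4), (4, 4)]
c = rectilinear_center(pts)
print(c, annulus_width_at(c, pts))  # -> (2.0, 2.0) 0.0
```

For these four symmetric points all distances from the center coincide, so the minimum width annulus degenerates to a circle of width zero.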

This paper provides a brief overview of two linear inverse problems concerned with the determination of the Earth’s interior: inverse gravimetry and normal mode tomography. Moreover, a vector spline method is proposed for a combined solution of both problems. This method uses localised basis functions, which are based on reproducing kernels, and is related to approaches which have been successfully applied to the inverse gravimetric problem and the seismic traveltime tomography separately.

The subject of this work is the canonical connection between classical global gravity field modelling in the conception of Stokes (1849) and Neumann (1887) and modern local multiscale computation by means of adaptive, locally compactly supported wavelets. A particular concern is the "zoom-in" determination of geoid heights from locally given gravity anomalies and gravity disturbances, respectively.