## Fachbereich Mathematik

### Refine

#### Year of publication

- 2008 (28)

#### Document Type

- Doctoral Thesis (15)
- Preprint (8)
- Report (4)
- Study Thesis (1)

#### Keywords

- Level-Set-Methode (2)
- domain decomposition (2)
- mesh generation (2)
- Abgeschlossenheit (1)
- Adjoint system (1)
- Alter (1)
- Annulus (1)
- Automatische Spracherkennung (1)
- Bayes-Entscheidungstheorie (1)
- Behinderter (1)
- Berechnungskomplexität (1)
- Bernstein Kern (1)
- Biot Poroelastizitätgleichung (1)
- CDS (1)
- CPDO (1)
- Center Location (1)
- Circle Location (1)
- Combinatorial Optimization (1)
- Core (1)
- Credit Risk (1)
- Curvature (1)
- Defaultable Options (1)
- Delaunay (1)
- Delaunay triangulation (1)
- Delaunay triangulierung (1)
- Eigenschwingung (1)
- Entscheidungsbaum (1)
- FEM (1)
- Fiber spinning (1)
- Fiber suspension flow (1)
- Filtration (1)
- Finanzmathematik (1)
- First--order optimality system (1)
- Fluid-Struktur-Wechselwirkung (1)
- Gebietszerlegung (1)
- Gebietszerlegungsmethode (1)
- Gittererzeugung (1)
- Gravimetrie (1)
- Harmonische Funktion (1)
- Hub Location Problem (1)
- Hysterese (1)
- Inverses Problem (1)
- Kaktusgraph (1)
- Kopplungsproblem (1)
- Krümmung (1)
- Kugelflächenfunktion (1)
- Level set methods (1)
- Location (1)
- MBS (1)
- MKS (1)
- Markov-Ketten-Monte-Carlo-Verfahren (1)
- Massendichte (1)
- Mathematical Finance (1)
- Mixed integer programming (1)
- Multicriteria optimization (1)
- Multiperiod planning (1)
- Multiskalenapproximation (1)
- Neumann Wavelets (1)
- Neumann wavelets (1)
- Newtonsches Potenzial (1)
- Nichtlineare Approximation (1)
- Niederschlag (1)
- Nonlinear Optimization (1)
- Optimal control (1)
- Orthonormalbasis (1)
- Punktprozess (1)
- Regularisierung (1)
- Semantik (1)
- Shapley value (1)
- Shapleywert (1)
- Signalanalyse (1)
- Sphäre (1)
- Spieltheorie (1)
- Spline (1)
- Standortprobleme (1)
- Stochastic Control (1)
- Stochastische optimale Kontrolle (1)
- Stokes Wavelets (1)
- Stokes wavelets (1)
- Stokes-Gleichung (1)
- Stoßdämpfer (1)
- Systemidentifikation (1)
- Tensorfeld (1)
- Topologieoptimierung (1)
- Topology optimization (1)
- Tropische Geometrie (1)
- Vektorfeld (1)
- Vollständigkeit (1)
- Wellengeschwindigkeit (1)
- Zeitreihe (1)
- Zentrenprobleme (1)
- benders decomposition (1)
- body wave velocity (1)
- cactus graph (1)
- cancer radiation therapy (1)
- change point (1)
- closure approximation (1)
- complexity (1)
- cooperative game (1)
- core (1)
- cuts (1)
- decision support systems (1)
- estimation (1)
- extreme equilibria (1)
- film casting (1)
- filtration (1)
- finite volume method (1)
- fluid structure interaction (1)
- free boundary (1)
- free surface (1)
- freie Oberfläche (1)
- gebietszerlegung (1)
- geometric ergodicity (1)
- gitter (1)
- harmonic density (1)
- harmonische Dichte (1)
- heuristic (1)
- interface problem (1)
- inverse problems (1)
- kooperative Spieltheorie (1)
- level set method (1)
- logical analysis (1)
- logische Analyse (1)
- mathematical modelling (1)
- matrix decomposition (1)
- minimaler Schnittbaum (1)
- minimum cut tree (1)
- monotropic programming (1)
- multileaf collimator sequencing (1)
- multiscale approximation (1)
- network congestion game (1)
- netzgenerierung (1)
- nichtlineare Modellreduktion (1)
- nonlinear model reduction (1)
- nonwovens (1)
- normal mode (1)
- optimal control (1)
- porous media (1)
- poröse Medien (1)
- reproducing kernel (1)
- reproduzierender Kern (1)
- rheology (1)
- schlecht gestellt (1)
- splitting function (1)
- stationarity (1)
- stochastic optimal control (1)
- tension problems (1)
- total latency (1)
- transmission conditions (1)
- well-posedness (1)
- Übergangsbedingungen (1)

Finding a delivery plan for cancer radiation treatment using multileaf collimators operating in "step-and-shoot" mode can be formulated mathematically as the problem of decomposing an integer matrix into a weighted sum of binary matrices having the consecutive-ones property, and sometimes other properties related to the collimator technology. The efficiency of the delivery plan is measured both by the sum of weights in the decomposition, known as the total beam-on time, and by the number of different binary matrices appearing in it, referred to as the cardinality; the latter is closely related to the set-up time of the treatment. In practice, the total beam-on time is usually restricted to its minimum possible value (which is easy to find), and a decomposition that minimises cardinality subject to this restriction is sought.
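As an illustration of why the minimum total beam-on time is "easy to find" (a standard result for unconstrained consecutive-ones decompositions, not the thesis' own algorithm): it equals the maximum over rows of the sum of positive increments along the row. A minimal sketch:

```python
def min_beam_on_time(matrix):
    # Minimum total beam-on time for decomposing an integer matrix into
    # weighted binary matrices with the consecutive-ones property:
    # max over rows of the sum of positive left-to-right increments.
    best = 0
    for row in matrix:
        prev, total = 0, 0
        for a in row:
            if a > prev:
                total += a - prev  # a new "leaf opening" must contribute here
            prev = a
        best = max(best, total)
    return best
```

Minimising the cardinality subject to this value is the hard part that the thesis addresses.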

This thesis covers two important fields in financial mathematics, namely continuous-time portfolio optimisation and credit risk modelling. We analyse optimisation problems for portfolios of call and put options on the stock and/or the zero-coupon bond issued by a firm with default risk, using the martingale approach for the dynamic optimisation problems. Our findings show that the riskier the option, the smaller the proportion of wealth the investor allocates to the risky asset. Further, we analyse Credit Default Swap (CDS) market quotes on the Eurobonds issued by the Turkish sovereign in order to build the term structure of sovereign credit risk. Two methods are introduced and compared for bootstrapping the risk-neutral probabilities of default (PD) in an intensity-based (or reduced-form) credit risk modelling approach. We compare the market-implied PDs with the actual PDs reported by credit rating agencies on the basis of historical experience. Our results highlight how the market price of sovereign credit risk depends on the assigned rating category in the sampling period. Finally, we find an optimal leverage strategy for delivering the payments promised by a Constant Proportion Debt Obligation (CPDO). The problem is solved via the introduction and explicit solution of a stochastic control problem, transforming the associated Hamilton-Jacobi-Bellman equation into its dual. Contrary to industry practice, the optimal leverage function we derive is a non-linear function of the CPDO asset value. Simulations show promising behaviour of the optimal leverage function compared with the one popular among practitioners.
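The flavour of intensity-based PD bootstrapping can be sketched with the textbook "credit triangle" simplification (piecewise-constant hazard rate, lambda roughly equal to spread divided by loss given default). This is a deliberately crude stand-in, not either of the two methods compared in the thesis, and all function names here are illustrative:

```python
import math

def survival_curve(tenors, spreads, recovery=0.4):
    # Bootstrap a piecewise-constant-hazard survival curve from CDS spreads.
    # Credit triangle approximation: lambda = spread / (1 - recovery).
    surv, t_prev, curve = 1.0, 0.0, []
    for t, s in zip(tenors, spreads):
        lam = s / (1.0 - recovery)
        surv *= math.exp(-lam * (t - t_prev))  # survive the interval (t_prev, t]
        curve.append((t, surv))
        t_prev = t
    return curve

def default_probability(curve, t):
    # Risk-neutral PD up to tenor t from the bootstrapped survival curve.
    return 1.0 - dict(curve)[t]
```

A flat 60 bp spread with 40% recovery corresponds to a hazard rate of about 1% per year, so the one-year PD is roughly 1 - exp(-0.01).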

This paper provides a brief overview of two linear inverse problems concerned with the determination of the Earth’s interior: inverse gravimetry and normal mode tomography. Moreover, a vector spline method is proposed for a combined solution of both problems. This method uses localised basis functions, which are based on reproducing kernels, and is related to approaches which have been successfully applied to the inverse gravimetric problem and the seismic traveltime tomography separately.
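The mechanics of a spline method built on reproducing kernels can be sketched in a scalar toy setting (the paper's vector splines on the sphere are considerably more involved; the kernel and names below are illustrative assumptions): the coefficients are obtained by collocation against the kernel matrix.

```python
import numpy as np

def rk_spline_coeffs(nodes, values, kernel, reg=0.0):
    # Collocation with a reproducing kernel: solve sum_j c_j K(x_i, x_j) = y_i.
    # A small regularisation term can be added for noisy or ill-posed data.
    K = np.array([[kernel(p, q) for q in nodes] for p in nodes])
    return np.linalg.solve(K + reg * np.eye(len(nodes)), np.asarray(values))

def rk_spline_eval(x, nodes, coeffs, kernel):
    # Evaluate the spline s(x) = sum_j c_j K(x, x_j).
    return sum(c * kernel(x, p) for c, p in zip(coeffs, nodes))
```

With a positive definite kernel (here a Gaussian) the spline interpolates the data at the nodes; localised kernels give the locally supported basis functions mentioned above.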

This dissertation deals with the optimization of web formation in a spunbond process for the production of artificial fabrics. A mathematical model of the process is presented. Based on the model, two kinds of attributes to be optimized are considered: those related to the quality of the fabric and those describing the stability of the production process. The problem falls within the multicriteria optimization and decision making framework. The functions involved in the model of the process are nonlinear, nonconvex, and nondifferentiable. A two-step strategy, exploration and continuation, is proposed to approximate the Pareto frontier numerically, and alternative methods are proposed to navigate that set and support the decision making process. The proposed strategy is applied to a particular production process, and numerical results are presented.
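The Pareto frontier approximated in the exploration step consists of the non-dominated points among the evaluated designs. A minimal filter for that set (assuming all criteria are minimised; this is the generic definition, not the thesis' continuation method):

```python
def pareto_front(points):
    # Keep exactly the non-dominated points: q dominates p if q is at least
    # as good in every criterion (<=) and different in at least one.
    def dominated(p):
        return any(q != p and all(qi <= pi for qi, pi in zip(q, p))
                   for q in points)
    return [p for p in points if not dominated(p)]
```

This quadratic scan is fine for the modest point sets produced by an exploration phase; navigating the frontier then amounts to moving between neighbouring non-dominated points.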

The purpose of this paper is the canonical connection between classical global gravity field determination following the concepts of Stokes (1849), Bruns (1878), and Neumann (1887) on the one hand, and modern, locally oriented multiscale computation using adaptive, locally supported wavelets on the other. Essential tools are regularization methods for the Green, Neumann, and Stokes integral representations. The multiscale approximation is realized as a simple linear difference scheme using Green, Neumann, and Stokes wavelets, respectively. As an application, gravity anomalies caused by plumes are investigated for the Hawaiian and Icelandic areas.
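The linear difference scheme can be illustrated in a simplified 1-D analogue (moving averages standing in for the Green, Neumann, or Stokes scaling kernels; the window widths are purely illustrative): the detail at each scale is the difference of two successive smoothings, and the details telescope back to the finest approximation.

```python
import numpy as np

def smooth(signal, width):
    # Crude 1-D "scaling function": moving average with the given window width.
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

def multiscale(signal, widths):
    # widths ordered from coarse to fine; each wavelet detail is a
    # difference of two successive smoothings (a linear difference scheme).
    approx = smooth(signal, widths[0])
    details = [smooth(signal, w_fine) - smooth(signal, w_coarse)
               for w_coarse, w_fine in zip(widths, widths[1:])]
    return approx, details
```

By construction the sum of the coarse approximation and all details equals the finest-scale smoothing, which is the telescoping property the paper exploits.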

This thesis addresses the canonical connection between classical global gravity field modelling in the conception of Stokes (1849) and Neumann (1887) and modern local multiscale computation by means of adaptive, locally compactly supported wavelets. A particular concern is the "zoom-in" determination of geoid heights from locally given gravity anomalies and gravity disturbances, respectively.

The dissertation deals with the application of hub location models in public transport planning. The author proposes new mathematical models along with different solution approaches for the resulting instances. Moreover, a novel multi-period formulation is proposed as an extension of the general model. Due to its high complexity, heuristic approaches are formulated to find good solutions within a reasonable amount of time.

In many medical, financial, industrial, etc. applications of statistics, the model parameters may undergo changes at an unknown moment in time. In this thesis, we consider change point analysis in a regression setting for dichotomous responses, i.e. responses that can be modelled as Bernoulli or 0-1 variables. Applications are widespread, including credit scoring in financial statistics and dose-response relations in biometry. The model parameters are estimated using a neural network approach. We show that the parameter estimates are identifiable up to a given family of transformations, and we derive the consistency and asymptotic normality of the network parameter estimates using the results in Franke and Neumann (2000). We use a neural-network-based likelihood ratio test statistic to detect a change point in a given set of data and derive its limit distribution using the results in Gombay and Horvath (1994, 1996), under the assumption that the model is properly specified. For the misspecified case, we develop a scaled test statistic for a one-dimensional parameter. Through simulation, we show that the sample size, the change point location, and the size of the change influence change point detection. Once a change point has been detected, it is estimated by maximum likelihood; simulations show that change point estimation is influenced by the same three factors. We present two methods for determining change point confidence intervals: the profile log-likelihood ratio method and the percentile bootstrap method. In our simulations, the percentile bootstrap method proves superior to the profile log-likelihood ratio method.
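The likelihood-ratio idea behind the detection and estimation steps can be sketched in a far simpler setting than the thesis' neural network model: for a plain Bernoulli sequence, scan every split point, fit separate rates before and after, and take the split maximising the profile log-likelihood.

```python
import math

def bernoulli_changepoint(data):
    # Profile-likelihood estimate of a single change point in a 0-1 sequence.
    def loglik(xs):
        # Log-likelihood of a Bernoulli sample at its MLE rate.
        n, s = len(xs), sum(xs)
        p = s / n
        if p in (0.0, 1.0):
            return 0.0  # degenerate segment: likelihood is 1
        return s * math.log(p) + (n - s) * math.log(1 - p)

    best_k, best_ll = None, -float("inf")
    for k in range(1, len(data)):
        ll = loglik(data[:k]) + loglik(data[k:])
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k
```

Comparing the maximised split likelihood against the no-change likelihood gives the likelihood ratio statistic whose limit distribution the thesis studies in the (much harder) regression setting.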

In this work we study the minimum width annulus problem (MWAP), the circle center location problem (CLP), and the point center location problem (PLP) in the rectilinear and Chebyshev planes as well as in networks. The relations between these problems serve as the basis for elegant solution algorithms for both new and well-known problems. The MWAP is first formulated and investigated in the rectilinear plane. In contrast to the Euclidean metric, MWAP and PLP there have at least one common optimal point; therefore, the MWAP in the rectilinear plane can be solved in linear time with the help of the PLP, giving the solution sequence PLP-->MWAP. It is shown that MWAP and CLP are equivalent, so the CLP can also be solved in linear time. These results are then analysed and transferred to the Chebyshev metric. Next, the notions of circle, sphere, and annulus in networks are introduced; note that a circle in a network is different from a cycle. An O(mn) time algorithm for the MWAP is constructed and implemented, based on the fact that the midpoint of an edge is an optimal solution of a local minimum width annulus problem on that edge. The resulting complexity is better than the O(mn + n^2 log n) complexity, in the unweighted case, of the fastest known algorithm for minimising the range function, which is mathematically equivalent to the MWAP. The MWAP in unweighted undirected networks is extended to the MWAP on subsets and to the restricted MWAP, and the resulting problems are analysed and solved. The p-minimum width annulus problem is also formulated and explored. This problem is NP-hard; however, it can be solved in polynomial O(m^2 n^3 p) time under the natural assumption that each minimum width annulus covers all vertices of the network whose distances to the central point of the annulus are at most the radius of its outer circle.

In contrast to the planar case, the MWAP in undirected unweighted networks turns out to be the root problem among the problems considered. An investigation of the properties of circles in networks shows that the difference between planar and network circles is significant, which leads to the non-equivalence of CLP and MWAP in the general case. Nevertheless, the MWAP is used effectively in solution procedures for the CLP, giving the sequence MWAP-->CLP; the complexity of the developed and implemented algorithm is of order O(m^2 n^2). It is worth noting that the CLP in networks is formulated for the first time in this work and differs from the well-studied location of cycles in networks. For the well-known PLP we construct an O(mn + n^2 log n) algorithm, whose complexity is no worse than that of the currently best algorithms. The concept of the solution procedure, however, is new: we use the MWAP in order to solve the PLP, building the solution sequence MWAP-->PLP, opposite to the planar case. This method has the following advantages. First, the lower bounds obtained in the solution procedure are provably better than the strongest of Halpern's lower bounds. Second, the algorithm is so simple that it can easily be applied to complex networks by hand. Third, the empirical complexity of the algorithm is O(mn). Finally, the MWAP is extended to and explored in directed unweighted and weighted networks. The O(n^2) complexity bound of the developed algorithm for finding the center of a minimum width annulus in the unweighted case does not depend on the number of edges, because the problems can be solved in the order PLP-->MWAP; in the weighted case the computation time is of order O(mn^2).
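The planar solution sequence PLP-->MWAP can be sketched for the rectilinear metric, using the property stated in the abstract that MWAP and PLP there share an optimal point: the rectilinear 1-center is found via the standard rotation to Chebyshev coordinates (u, v) = (x + y, x - y), where the L1 distance becomes L-infinity, and the annulus width is then read off at that center. A minimal illustration, not the thesis' full algorithm:

```python
def rect_annulus_via_center(points):
    # Rectilinear 1-center via the Chebyshev rotation: in (u, v) coordinates
    # the L-infinity 1-center is the center of the smallest enclosing square.
    us = [x + y for x, y in points]
    vs = [x - y for x, y in points]
    cu = (min(us) + max(us)) / 2
    cv = (min(vs) + max(vs)) / 2
    cx, cy = (cu + cv) / 2, (cu - cv) / 2  # rotate back to (x, y)
    # Annulus width at that center: max minus min L1 distance to the points.
    dists = [abs(x - cx) + abs(y - cy) for x, y in points]
    return (cx, cy), max(dists) - min(dists)
```

For points that are L1-equidistant from some location, the annulus degenerates to a single circle of width zero, which this sequence finds in linear time.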