## Fachbereich Mathematik

### Refine

#### Year of publication

- 2002 (24)

#### Document Type

- Preprint (13)
- Doctoral Thesis (8)
- Diploma Thesis (1)
- Lecture (1)
- Report (1)

#### Language

- English (24)

#### Keywords

- Anti-diffusion (1)
- Antidiffusion (1)
- Cauchy-Navier equation (1)
- Cauchy-Navier scaling function and wavelet (1)
- Combinatorial optimization (1)
- Consistency analysis (1)
- Diffusion (1)
- Dissertation (1)
- Doppelbarriereoption (1)
- Double Barrier Option (1)
- Education (1)
- Efficiency (1)
- Effizienz (1)
- Elektromagnetische Streuung (1)
- Erdmagnetismus (1)
- Exponential Utility (1)
- Exponentieller Nutzen (1)
- Fokker-Planck-Gleichung (1)
- Fredholmsche Integralgleichung (1)
- Galerkin-Methode (1)
- Hierarchische Matrix (1)
- ITSM (1)
- Immiscible lattice BGK (1)
- Inverses Problem (1)
- K-best solution (1)
- Lattice Boltzmann (1)
- Lattice-BGK (1)
- Lattice-Boltzmann (1)
- Level sets (1)
- Location problems (1)
- Locational Planning (1)
- Mathematikunterricht (1)
- Matrixkompression (1)
- Mehrskalenanalyse (1)
- Mie- and Helmholtz-Representation (1)
- Mie- und Helmholtz-Darstellung (1)
- Modellierung (1)
- Monte Carlo (1)
- Multiobjective programming (1)
- Multiresolution analysis (1)
- Multiskalen-Entrauschen (1)
- Neural Networks (1)
- Neuronales Netz (1)
- Nichtlineare Diffusion (1)
- Nonlinear time series analysis (1)
- Numerische Mathematik (1)
- Numerische Mathematik / Algorithmus (1)
- Oberflächenspannung (1)
- Partikel Methoden (1)
- Portfolio Optimization (1)
- Portfolio-Optimierung (1)
- QMC (1)
- Schwache Formulierung (1)
- Standortplanung (1)
- Stochastic Volatility (1)
- Stochastische Volatilität (1)
- Strahlungstransport (1)
- Time-delay-Netz (1)
- Transaction costs (1)
- Transaktionskosten (1)
- Two-phase flow (1)
- Vektorwavelets (1)
- Viskose Transportschemata (1)
- Wavelet (1)
- Zweiphasenströmung (1)
- control theory (1)
- discrepancy (1)
- displacement problem (1)
- domain decomposition methods (1)
- downward continuation (1)
- geomagnetism (1)
- harmonic wavelets (1)
- hierarchical matrix (1)
- limit and jump relations (1)
- low-rank approximation (1)
- modelling (1)
- multiscale denoising (1)
- praxis orientated (1)
- praxisorientiert (1)
- pyramid schemes (1)
- quasi-Monte Carlo (1)
- raum-zeitliche Analyse (1)
- reconstruction formula (1)
- scalar and vectorial wavelets (1)
- scheduling theory (1)
- systems (1)
- time delays (1)

The immiscible lattice BGK method for solving the two-phase incompressible Navier-Stokes equations is analysed in great detail. Equivalent moment analysis and local differential geometry are applied to examine how interface motion is determined and how surface tension effects can be included such that consistency with the two-phase incompressible Navier-Stokes equations can be expected. The results obtained from the theoretical analysis are verified by numerical experiments. Since the intrinsic interface tracking scheme of immiscible lattice BGK is found to produce unsatisfactory results in two-dimensional simulations, several approaches to improving it are discussed, but none of them yields a substantial improvement. Furthermore, the intrinsic interface tracking scheme of immiscible lattice BGK is found to be closely connected to the well-known conservative volume tracking method. This result suggests coupling the conservative volume tracking method for determining interface motion with the Navier-Stokes solver of immiscible lattice BGK. Applied to simple flow fields, this coupled method yields much better results than plain immiscible lattice BGK.
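
As background for readers unfamiliar with the method, the following is a minimal single-phase D2Q9 lattice BGK step (collision plus streaming), the building block that the immiscible two-phase scheme extends; grid size, relaxation time, and initial data are illustrative assumptions, not values from the thesis.

```python
# Minimal single-phase D2Q9 lattice BGK sketch (assumed parameters).
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.8                                     # relaxation time (assumed)
nx, ny = 64, 64
f = np.ones((9, nx, ny)) * w[:, None, None]   # fluid initially at rest

def equilibrium(rho, u):
    """Second-order Maxwellian equilibrium used in the BGK collision."""
    cu = np.einsum('qd,dxy->qxy', c, u)
    usq = (u**2).sum(axis=0)
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

for step in range(100):
    rho = f.sum(axis=0)                       # density moment
    u = np.einsum('qd,qxy->dxy', c, f) / rho  # velocity moment
    f += -(f - equilibrium(rho, u)) / tau     # BGK collision
    for q in range(9):                        # streaming (periodic boundaries)
        f[q] = np.roll(np.roll(f[q], c[q, 0], axis=0), c[q, 1], axis=1)
```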

While closed-form solutions for vanilla options in the presence of stochastic volatility have existed for nearly a decade, practitioners still depend on numerical methods - in particular the Finite Difference and Monte Carlo methods - in the case of double barrier options. It was only recently that Lipton proposed (semi-)analytical solutions for this special class of path-dependent options. Although he presents two different approaches to derive these solutions, he restricts himself in both cases to a less general model, namely one where the correlation and the interest rate differential are assumed to be zero. Naturally the question arises whether these methods are still applicable for the general stochastic volatility model without these restrictions. In this paper we show that such a generalization fails for both methods. We explain why this is the case and discuss the consequences of our results.
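
To illustrate the numerical fallback the paper refers to, here is a hedged Monte Carlo sketch for a double barrier knock-out call under a Heston-type stochastic volatility model with nonzero correlation; all model parameters and barrier levels are assumptions chosen for illustration, and this is not Lipton's (semi-)analytical method.

```python
# Monte Carlo pricing of a double barrier knock-out call (assumed Heston-type model).
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, T = 100.0, 100.0, 0.03, 1.0
L, U = 80.0, 125.0                              # knock-out barriers (assumed)
kappa, theta, xi, rho, v0 = 2.0, 0.04, 0.3, -0.5, 0.04
n_paths, n_steps = 50_000, 250
dt = T / n_steps

S = np.full(n_paths, S0)
v = np.full(n_paths, v0)
alive = np.ones(n_paths, dtype=bool)            # paths not yet knocked out

for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
    v_pos = np.maximum(v, 0.0)                  # full truncation Euler scheme
    S *= np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
    v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
    alive &= (S > L) & (S < U)                  # discrete barrier monitoring

payoff = np.where(alive, np.maximum(S - K, 0.0), 0.0)
print("knock-out call price:", np.exp(-r * T) * payoff.mean())
```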

One crucial assumption of continuous financial mathematics is that the portfolio can be rebalanced continuously and that there are no transaction costs. In reality, neither holds: continuous rebalancing is impossible, and each transaction causes costs which have to be subtracted from the wealth. We therefore focus on trading strategies which are based on discrete rebalancing - at random or equidistant times - and which take transaction costs into account. These strategies are considered for various utility functions and are compared with the optimal strategies of continuous trading.
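
A minimal sketch of such a strategy, assuming geometric Brownian motion, a constant target stock fraction, and proportional transaction costs (all illustrative choices, not the paper's exact setup):

```python
# Discrete rebalancing to a constant stock fraction with proportional costs.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, r = 0.08, 0.2, 0.02        # drift, volatility, interest rate (assumed)
pi = (mu - r) / sigma**2              # Merton fraction for logarithmic utility
cost = 0.005                          # proportional transaction cost (assumed)
T, n = 1.0, 252
dt = T / n
rebalance_every = 21                  # roughly monthly rebalancing

stock, bond = pi * 1.0, (1 - pi) * 1.0    # initial wealth 1, split at target
for k in range(n):
    z = rng.standard_normal()
    stock *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
    bond *= np.exp(r * dt)
    if (k + 1) % rebalance_every == 0:
        wealth = stock + bond
        trade = pi * wealth - stock       # stock to buy (+) or sell (-)
        wealth -= cost * abs(trade)       # transaction costs reduce wealth
        stock, bond = pi * wealth, (1 - pi) * wealth

print("terminal wealth:", stock + bond)
```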

Matrix Compression Methods for the Numerical Solution of Radiative Transfer in Scattering Media
(2002)

Radiative transfer in scattering media is usually described by the radiative transfer equation, an integro-differential equation which describes the propagation of the radiative intensity along a ray. The high dimensionality of the equation leads to a very large number of unknowns when the equation is discretized; this is the major difficulty in its numerical solution. In the case of isotropic scattering and diffuse boundaries, the radiative transfer equation can be reformulated as a system of integral equations of the second kind, in which the position is the only independent variable. By employing the so-called momentum equation, we derive an integral equation which is also valid in the case of linear anisotropic scattering. This equation is very similar to the equation for the isotropic case: no additional unknowns are introduced, and the integral operators involved have very similar mapping properties. The discretization of an integral operator leads to a full matrix. Therefore, due to the large dimension of the matrix in practical applications, it is not feasible to assemble and store the entire matrix. The so-called matrix compression methods circumvent the assembly of the matrix. Instead, the matrix-vector multiplications needed by iterative solvers are performed only approximately, thus reducing the computational complexity tremendously. The kernels of the integral equation describing the radiative transfer are very similar to the kernels of the integral equations occurring in the boundary element method. Therefore, with only slight modifications, the matrix compression methods developed for the latter are readily applicable to the former. As opposed to the boundary element method, the integral kernels for radiative transfer in absorbing and scattering media involve an exponential decay term. We examine how this decay influences the efficiency of the matrix compression methods. Further, a comparison with the discrete ordinate method shows that discretizing the integral equation may lead to reductions in CPU time and to improved accuracy, especially in the case of small absorption and scattering coefficients or if local sources are present.
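
The compression idea can be illustrated in a few lines: a matrix generated by a smooth kernel with exponential decay admits an accurate low-rank approximation, so the matrix-vector products required by iterative solvers can be performed approximately at much lower cost. In this hedged sketch a global truncated SVD stands in for the hierarchical compression used in practice, and the kernel is an assumption.

```python
# Low-rank compression of a smooth kernel matrix for fast approximate matvecs.
import numpy as np

n = 400
x = np.linspace(0.0, 1.0, n)
d = np.abs(x[:, None] - x[None, :])
K = np.exp(-5.0 * d) / (1.0 + d)          # kernel with exponential decay (assumed)

U, s, Vt = np.linalg.svd(K)
rank = int((s > 1e-8 * s[0]).sum())       # truncate at a relative tolerance
Ur, sr, Vr = U[:, :rank], s[:rank], Vt[:rank]

v = np.random.default_rng(2).standard_normal(n)
exact = K @ v                             # costs O(n^2) per product
fast = Ur @ (sr * (Vr @ v))               # costs O(n * rank) per product
print(rank, np.linalg.norm(exact - fast) / np.linalg.norm(exact))
```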

Multiobjective combinatorial optimization problems have received increasing attention in recent years. Nevertheless, many algorithms are still restricted to the bicriteria case. In this paper we propose a new algorithm for computing all Pareto optimal solutions. Our algorithm is based on the notion of level sets and level curves and contains as a subproblem the determination of the K best solutions of a single objective combinatorial optimization problem. We apply the method to the Multiobjective Quadratic Assignment Problem (MOQAP). We present two algorithms for ranking QAP solutions and finally give computational results comparing the methods.
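
A hedged sketch of the underlying idea: a K-best enumeration supplies solutions in order of one objective, and a dominance filter retains exactly the Pareto optimal ones. The toy bicriteria assignment instance below is an assumption for illustration and is solved by brute-force ranking instead of a genuine K-best algorithm.

```python
# Pareto filtering of solutions ranked by one objective (toy bicriteria instance).
from itertools import permutations

C1 = [[4, 2, 6], [1, 5, 3], [7, 2, 2]]   # first cost matrix (assumed)
C2 = [[1, 6, 2], [5, 2, 4], [2, 6, 5]]   # second cost matrix (assumed)

def costs(perm):
    return (sum(C1[i][perm[i]] for i in range(3)),
            sum(C2[i][perm[i]] for i in range(3)))

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

# "K-best" stream: all solutions sorted by the first objective
stream = sorted(permutations(range(3)), key=lambda p: costs(p)[0])

pareto = []
for p in stream:
    c = costs(p)
    if not any(dominates(costs(q), c) for q in pareto):
        pareto = [q for q in pareto if not dominates(c, costs(q))] + [p]

for p in pareto:
    print(p, costs(p))
```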

In this work we present and estimate an explanatory model with a predefined system of explanatory equations, a so-called lag-dependent model. We present a locally optimal lag estimator based on blocked neural networks, together with consistency theorems. We define change points in the context of the lag-dependent model and present a powerful algorithm for change point detection in high-dimensional, highly dynamical systems. We also present a special kind of bootstrap for approximating the distribution of statistics of interest in dependent processes.
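
A hedged sketch of one standard bootstrap for dependent data, the moving block bootstrap, which resamples whole blocks to preserve serial dependence; the AR(1) toy series, block length, and statistic are assumptions, not necessarily the variant used in this work.

```python
# Moving block bootstrap for a statistic of a dependent time series.
import numpy as np

rng = np.random.default_rng(3)
n, block_len, n_boot = 500, 25, 1000

# toy dependent process: AR(1)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()

def moving_block_bootstrap(series, block_len, rng):
    """Resample overlapping blocks and concatenate them to the original length."""
    n = len(series)
    starts = rng.integers(0, n - block_len + 1, size=int(np.ceil(n / block_len)))
    blocks = [series[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:n]

boot_means = np.array([moving_block_bootstrap(x, block_len, rng).mean()
                       for _ in range(n_boot)])
print("bootstrap std of the mean:", boot_means.std())
```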

A Topology Primer
(2002)

These lecture notes give a completely self-contained introduction to the control theory of linear time-invariant systems. No prior knowledge is required apart from linear algebra and some basic familiarity with ordinary differential equations. Thus, the course is suited for students of mathematics in their second or third year, and for theoretically inclined engineering students. Because of its appealing simplicity and elegance, the behavioral approach has been adopted to a large extent. A short list of recommended textbooks on the subject has been added as a suggestion for further reading.

Spline functions that approximate (geostrophic) wind field or ocean circulation data are developed in a weighted Sobolev space setting on the (unit) sphere. Two problems are discussed in more detail: the modelling of the (geostrophic) wind field from (i) discrete scalar air pressure data and (ii) discrete vectorial velocity data. Domain decomposition methods based on the Schwarz alternating algorithm for positive definite symmetric matrices are described for solving the large linear systems occurring in vectorial spline interpolation or in the smoothing of geostrophic flow.
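
A hedged sketch of the multiplicative Schwarz alternating idea for a symmetric positive definite system: sweep over overlapping index blocks and solve the restricted subsystem exactly on each. The tridiagonal model matrix, the two subdomains, and the sweep count are illustrative assumptions.

```python
# Multiplicative Schwarz alternating iteration on an SPD model system.
import numpy as np

n = 100
A = np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1) \
    + np.diag(np.full(n - 1, -1.0), -1)         # SPD model matrix (assumed)
b = np.ones(n)

overlap = 5
subdomains = [np.arange(0, 60), np.arange(60 - overlap, n)]

x = np.zeros(n)
for sweep in range(200):
    for idx in subdomains:
        r = b - A @ x                            # current residual
        x[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])

print("residual norm:", np.linalg.norm(b - A @ x))
```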

This thesis builds a bridge between singularity theory and computer algebra. To an isolated hypersurface singularity one can associate a regular meromorphic connection, the Gauß-Manin connection, containing a lattice, the Brieskorn lattice. The leading terms of the Brieskorn lattice with respect to the weight and V-filtration of the Gauß-Manin connection define the spectral pairs. They correspond to the Hodge numbers of the mixed Hodge structure on the cohomology of the Milnor fibre and belong to the finest known invariants of isolated hypersurface singularities. The differential structure of the Brieskorn lattice can be described by two complex endomorphisms A0 and A1 containing even more information than the spectral pairs. In this thesis, an algorithmic approach to the Brieskorn lattice in the Gauß-Manin connection is presented. It leads to algorithms to compute the complex monodromy, the spectral pairs, and the differential structure of the Brieskorn lattice. These algorithms are implemented in the computer algebra system Singular.

In the present work, we investigate how to correct the questionable normality, linearity, and quadraticity assumptions underlying existing Value-at-Risk methodologies. In order to also take into account the skewness, the heavy-tailedness, and the stochastic feature of the volatility of the market values of financial instruments, the constant volatility hypothesis widely used by existing Value-at-Risk approaches is investigated and corrected, and the tails of the financial return distributions are handled via Generalized Pareto or Extreme Value Distributions. Artificial Neural Networks are combined with Extreme Value Theory in order to build consistent and nonparametric Value-at-Risk measures without the need to make any of the questionable assumptions specified above. For this, either autoregressive models (AR-GARCH) are used or the direct characterization of conditional quantiles due to Bassett and Koenker [1978] and Smith [1987]. In order to build consistent and nonparametric Value-at-Risk estimates, we prove some new results extending White's Artificial Neural Network denseness results to unbounded random variables and provide a generalization of the Bernstein inequality, which is needed to establish the consistency of our new Value-at-Risk estimates. For an accurate estimation of the quantile of the unexpected returns, Generalized Pareto and Extreme Value Distributions are used. The new Artificial Neural Network denseness results make it possible to build consistent, asymptotically normal, and nonparametric estimates of conditional means and stochastic volatilities. The denseness results use the Sobolev metric space L^m(mu) for some m >= 1 and some probability measure mu, and hold for a certain subclass of square integrable functions. The Fourier transform and the new extension of the Bernstein inequality for unbounded random variables from stationary alpha-mixing processes, combined with the new generalization of a result of White and Wooldridge [1990], are the main tools used to establish the extension of White's neural network denseness results. To illustrate the accuracy of the new denseness results, we demonstrate the applicability of the new Value-at-Risk approaches by means of three examples with real financial data, mainly from the banking sector, traded on the Frankfurt Stock Exchange.
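
A hedged sketch of the peaks-over-threshold step mentioned above: fit a Generalized Pareto Distribution to the losses exceeding a high threshold and read off a tail quantile as Value-at-Risk. The simulated heavy-tailed losses and the threshold choice are assumptions for illustration; the neural-network estimation stage is not reproduced here.

```python
# Peaks-over-threshold Value-at-Risk with a Generalized Pareto tail fit.
import numpy as np
from scipy.stats import genpareto, t as student_t

rng = np.random.default_rng(4)
losses = student_t.rvs(df=4, size=10_000, random_state=rng)  # heavy-tailed toy data

u = np.quantile(losses, 0.95)                  # high threshold (assumed choice)
exceed = losses[losses > u] - u
xi, loc, beta = genpareto.fit(exceed, floc=0)  # fit GPD to the excesses

p, n, n_u = 0.99, len(losses), len(exceed)
# POT quantile formula: VaR_p = u + (beta/xi) * (((1-p) * n / n_u)**(-xi) - 1)
var_99 = u + (beta / xi) * (((1 - p) * n / n_u) ** (-xi) - 1)
print("99% VaR (GPD tail):", var_99, " empirical:", np.quantile(losses, p))
```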

In this paper we study linear ill-posed problems Ax = y in a Hilbert space setting where, instead of exact data y, noisy data y^delta are given satisfying ||y - y^delta|| <= delta with known noise level delta. Regularized approximations are obtained by a general regularization scheme in which the regularization parameter is chosen according to Morozov's discrepancy principle. Assuming the unknown solution belongs to some general source set M, we prove that the regularized approximation provides order optimal error bounds on the set M. Our results cover the special case of finitely smoothing operators A and extend recent results for infinitely smoothing operators.
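
A hedged finite-dimensional sketch, with Tikhonov regularization standing in for the general regularization scheme: starting from a large regularization parameter, decrease it until the residual of the regularized solution first falls below a constant times the known noise level delta. The smoothing operator and noise model are assumptions for illustration.

```python
# Morozov's discrepancy principle with Tikhonov regularization (toy problem).
import numpy as np

rng = np.random.default_rng(5)
n = 200
t = np.linspace(0.0, 1.0, n)
A = np.exp(-50.0 * (t[:, None] - t[None, :])**2) / n   # smoothing operator (assumed)
x_true = np.sin(2 * np.pi * t)
delta = 1e-3                                           # known noise level
noise = rng.standard_normal(n)
y_delta = A @ x_true + delta * noise / np.linalg.norm(noise)

def x_alpha(alpha):
    """Tikhonov-regularized solution for parameter alpha."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y_delta)

alpha = 1.0
while np.linalg.norm(A @ x_alpha(alpha) - y_delta) > 1.5 * delta and alpha > 1e-14:
    alpha /= 2.0        # accept the first (largest) alpha meeting the bound
print("alpha:", alpha, "error:", np.linalg.norm(x_alpha(alpha) - x_true))
```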

This survey paper deals with multiresolution analysis of geodetically relevant data and its numerical realization for functions harmonic outside a (Bjerhammar) sphere inside the Earth. Harmonic wavelets are introduced within a suitable framework of a Sobolev-like Hilbert space. Scaling functions and wavelets are defined by means of convolutions. A pyramid scheme provides efficient implementation and economical computation. Essential tools are the multiplicative Schwarz alternating algorithm (providing domain decomposition procedures) and fast multipole techniques (accelerating iterative solvers of linear systems).
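
A hedged one-dimensional analogue of a pyramid scheme: coarse scaling coefficients are obtained by recursive low-pass filtering and downsampling, and the wavelet (detail) coefficients at each scale are exactly what reconstruction needs to undo a step. The filter and signal below are toy assumptions; the paper's scheme operates on harmonic functions via convolutions on the sphere.

```python
# Toy pyramid scheme: recursive low-pass decomposition with exact reconstruction.
import numpy as np

def pyramid(signal, levels):
    """Return per-level detail coefficients and the coarsest approximation."""
    kernel = np.array([0.25, 0.5, 0.25])            # simple low-pass filter
    details, current = [], signal.astype(float)
    for _ in range(levels):
        smooth = np.convolve(current, kernel, mode='same')
        coarse = smooth[::2]                        # next (coarser) scale
        up = np.repeat(coarse, 2)[:len(current)]    # crude upsampling
        details.append(current - up)                # wavelet/detail part
        current = coarse
    return details, current

def reconstruct(details, coarse):
    current = coarse
    for d in reversed(details):
        current = np.repeat(current, 2)[:len(d)] + d
    return current

t = np.linspace(0, 1, 256)
signal = np.sin(2 * np.pi * 3 * t) + 0.1 * np.sin(2 * np.pi * 40 * t)
details, coarse = pyramid(signal, 3)
print(np.allclose(reconstruct(details, coarse), signal))  # exact by construction
```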

Dealing with problems from locational planning in schools can enrich mathematical education. In this report we describe planar location problems which can be used in mathematics lessons. The problems - the production of a semiconductor plate, the design of a fire brigade building, and the warehouse problem - are taken from the real world. They are worked out in enough detail that they can be used directly in school lessons.

Strict order relations are defined as asymmetric and transitive binary relations. For classes of so-called levelled strict orders it is analyzed under which conditions the endomorphism monoids of two relations coincide; in particular, the case of direct sums of strict antichains is studied. Further, it is shown that these orders differ in their sets of binary order preserving functions.
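
A hedged toy illustration of the central notion: the endomorphisms of a strict order are the self-maps preserving the relation, and different orders can have very different endomorphism monoids. The two small relations below (a three-element chain versus a three-element antichain) are assumptions for illustration.

```python
# Enumerate the endomorphisms of two small strict orders and compare their sizes.
from itertools import product

def endomorphisms(n, rel):
    """All self-maps f of {0,...,n-1} with (a,b) in rel => (f[a],f[b]) in rel."""
    return [f for f in product(range(n), repeat=n)
            if all((f[a], f[b]) in rel for (a, b) in rel)]

chain = {(0, 1), (1, 2), (0, 2)}   # strict total order 0 < 1 < 2
antichain = set()                  # empty strict order: a 3-element antichain
print(len(endomorphisms(3, chain)), len(endomorphisms(3, antichain)))
# the chain admits only 1 endomorphism, the antichain all 27 self-maps
```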

Scheduling and location models are often used to tackle problems in production, logistics, and supply chain management. Instead of treating these models independently of each other, as is usually done in the literature, we consider in this paper an integrated model in which the locations of machines define release times for jobs. Polynomial solution algorithms are presented for single machine problems in which the scheduling part can be solved by the earliest release time rule.
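
A hedged sketch of the earliest release time (ERT) rule for the scheduling part: process jobs in nondecreasing order of their release times, starting each as early as feasible. The instance data are assumptions; in the integrated model the release times would be induced by the machine locations.

```python
# Earliest release time rule for a single machine (toy instance).
def ert_schedule(release_times, processing_times):
    """Sequence jobs by release time; start each at the earliest feasible time."""
    order = sorted(range(len(release_times)), key=lambda j: release_times[j])
    t, schedule = 0, []
    for j in order:
        start = max(t, release_times[j])
        schedule.append((j, start, start + processing_times[j]))
        t = start + processing_times[j]
    return schedule

# release times as they might result from travel distances (assumed data)
print(ert_schedule([3, 0, 5, 2], [2, 4, 1, 3]))
```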

The dissertation is concerned with the numerical solution of Fokker-Planck equations in high dimensions arising in the study of the dynamics of polymeric liquids. Traditional methods based on tensor product structure are not applicable in high dimensions, for the number of nodes required to yield a fixed accuracy increases exponentially with the dimension, a phenomenon often referred to as the curse of dimension. Particle methods, or finite point set methods, are known to break the curse of dimension. The Monte Carlo method (MCM) applied to such problems is 1/sqrt(N) accurate, where N is the cardinality of the point set considered, independent of the dimension. Deterministic versions of the Monte Carlo method, called quasi-Monte Carlo (QMC) methods, are quite effective in integration problems, where accuracy of the order of 1/N can be achieved up to a logarithmic factor. However, such a replacement cannot be carried over directly to particle simulations due to the correlation among the quasi-random points. The method proposed by Lecot (C. Lecot and F. E. Khettabi, Quasi-Monte Carlo simulation of diffusion, Journal of Complexity, 15 (1999), pp. 342-359) is the only known QMC approach, but it not only leads to large particle numbers, its proven order of convergence is also 1/N^(2s) in dimension s. We modify the method presented there in such a way that the new method works with reasonable particle numbers even in high dimensions and has a better order of convergence. Though the provable order of convergence is 1/sqrt(N), the results show less variance, and thus the proposed method still slightly outperforms standard MCM.
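
A hedged illustration of the accuracy gap the abstract refers to, on a plain integration problem rather than a particle simulation: Monte Carlo error decays like 1/sqrt(N), while scrambled Sobol (QMC) points reach roughly 1/N up to logarithmic factors. The integrand and dimension are assumptions.

```python
# Monte Carlo versus quasi-Monte Carlo on a smooth toy integrand.
import numpy as np
from scipy.stats import qmc

dim = 5
f = lambda u: np.prod(1.0 + 0.5 * (u - 0.5), axis=1)   # exact integral is 1

rng = np.random.default_rng(6)
for m in (10, 12, 14):                                  # N = 2**m points
    n = 2**m
    mc = f(rng.random((n, dim))).mean()
    q = f(qmc.Sobol(d=dim, scramble=True, seed=6).random_base2(m)).mean()
    print(f"N=2**{m}: MC error {abs(mc - 1):.2e}, QMC error {abs(q - 1):.2e}")
```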

A geoscientifically relevant wavelet approach is established for the classical (inner) displacement problem corresponding to a regular surface (such as the sphere, the ellipsoid, or the actual Earth's surface). Basic tools are the limit and jump relations of (linear) elastostatics. Scaling functions and wavelets are formulated within the framework of the vectorial Cauchy-Navier equation. Based on appropriate numerical integration rules, a pyramid scheme is developed that provides a fast wavelet transform (FWT). Finally, multiscale deformation analysis is investigated numerically for the case of a spherical boundary.

We consider the problem of locating a line with respect to some existing facilities in 3-dimensional space, such that the sum of weighted distances between the line and the facilities is minimized. Measuring distance using the l_p norm is discussed, along with the special cases of Euclidean and rectangular norms. Heuristic solution procedures for finding a local minimum are outlined.
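
A hedged sketch of such a heuristic for the Euclidean case: parametrize the line by a point and a direction and minimize the weighted sum of point-to-line distances with a derivative-free local method, which yields a local minimum only. The facility coordinates and weights are randomly generated assumptions.

```python
# Local-descent heuristic for weighted line location in 3D (Euclidean norm).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
pts = rng.random((20, 3)) * 10       # existing facilities (assumed)
w = rng.random(20) + 0.5             # positive weights (assumed)

def total_cost(params):
    p, d = params[:3], params[3:]
    d = d / np.linalg.norm(d)                       # unit direction of the line
    diff = pts - p
    proj = diff - np.outer(diff @ d, d)             # component orthogonal to line
    return float(w @ np.linalg.norm(proj, axis=1))  # weighted sum of distances

x0 = np.concatenate([pts.mean(axis=0), [1.0, 0.0, 0.0]])
res = minimize(total_cost, x0, method='Nelder-Mead',
               options={'xatol': 1e-8, 'fatol': 1e-8, 'maxiter': 5000})
print("local minimum cost:", res.fun)
```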

In this paper we consider the location of stops along the edges of an already existing public transportation network. This can be the introduction of bus stops along some given bus routes, or of railway stations along the tracks in a railway network. The positive effect of new stops is the better access of potential customers to their closest station, while the increase in travel time caused by the additional stopping activities of the trains is a negative effect. The goal is to cover all given demand points with a minimal amount of additional traveling time, where covering may be defined with respect to an arbitrary norm (or even a gauge). Unfortunately, this problem is NP-hard, even if only the Euclidean distance is used. In this paper, we give a reduction to a finite candidate set, leading to a discrete set covering problem. Moreover, we identify network structures in which the coefficient matrix of the resulting set covering problem is totally unimodular, and use this result to derive efficient solution approaches. Various extensions of the problem are also discussed.
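
A hedged sketch of the discrete problem the reduction arrives at, solved here by the classical greedy heuristic (the paper itself exploits total unimodularity for exact approaches); the candidate stops and coverage sets are assumed toy data.

```python
# Greedy heuristic for the set covering problem arising from stop location.
def greedy_set_cover(demand_points, candidates):
    """candidates: dict mapping candidate stop -> set of demand points it covers."""
    uncovered, chosen = set(demand_points), []
    while uncovered:
        best = max(candidates, key=lambda s: len(candidates[s] & uncovered))
        if not candidates[best] & uncovered:
            raise ValueError("some demand points cannot be covered")
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

demand = range(6)
cands = {'s1': {0, 1, 2}, 's2': {2, 3}, 's3': {3, 4, 5}, 's4': {0, 5}}
print(greedy_set_cover(demand, cands))   # e.g. ['s1', 's3']
```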

Different aspects of geomagnetic field modelling from satellite data are examined in the framework of modern multiscale approximation. The thesis is mostly concerned with wavelet techniques, i.e. multiscale methods based on certain classes of kernel functions which are able to realize a multiscale analysis of the function (data) space under consideration. It is thus possible to break up complicated functions like the geomagnetic field, electric current densities, or geopotentials into different pieces and study these pieces separately. Based on a general approach to scalar and vectorial multiscale methods, topics include multiscale denoising, crustal field approximation and downward continuation, wavelet parametrizations of the magnetic field in the Mie representation, as well as multiscale methods for the analysis of time-dependent spherical vector fields. For each subject the necessary theoretical framework is established, and numerical applications examine and illustrate the practical aspects.