## Fachbereich Mathematik

### Refine

#### Year of publication

- 2006 (31)

#### Document Type

- Doctoral Thesis (13)
- Preprint (13)
- Report (3)
- Diploma Thesis (2)

#### Keywords

- Approximation (2)
- Elastic BVP (2)
- Elastisches RWP (2)
- Elastoplastisches RWP (2)
- Hysterese (2)
- IMRT (2)
- Lokalisation (2)
- Multivariate Approximation (2)
- Optimization (2)
- Sphäre (2)
- Spline (2)
- approximate identity (2)
- integer programming (2)
- "Slender-Body"-Theorie (1)
- Algebraic dependence of commuting elements (1)
- Algebraische Abhängigkeit der kommutierenden Elemente (1)
- Approximative Identität (1)
- Asymptotik (1)
- Beam orientation (1)
- Beschichtungsprozess (1)
- Bewertung (1)
- Biorthogonalisation (1)
- CDSwaption (1)
- CHAMP <Satellitenmission> (1)
- Cauchy-Navier equation (1)
- Cauchy-Navier-Gleichung (1)
- Charakter <Gruppentheorie> (1)
- Coarse graining (1)
- Computer Algebra System (1)
- Computeralgebra System (1)
- Convex sets (1)
- Curved viscous fibers (1)
- Decision support (1)
- Decomposition and Reconstruction Schemes (1)
- Differenzmenge (1)
- Discriminatory power (1)
- Dispersionsrelation (1)
- EGM96 (1)
- Elastizität (1)
- Elastoplastic BVP (1)
- Elastoplasticity (1)
- Elastoplastizität (1)
- Endliche Geometrie (1)
- Endliche Lie-Gruppe (1)
- Entscheidungsunterstützung (1)
- Faden (1)
- Faltung (1)
- Faltung <Mathematik> (1)
- Fast Wavelet Transform (1)
- Finite-Elemente-Methode (1)
- Finite-Volumen-Methode (1)
- Fluid dynamics (1)
- Frequency Averaging (1)
- GOCE <Satellitenmission> (1)
- GRACE <Satellitenmission> (1)
- Gebogener viskoser Faden (1)
- Gleichmäßige Approximation (1)
- Gravimetrie (1)
- Gromov Witten (1)
- Harmonische Dichte (1)
- Hedging (1)
- Homotopie (1)
- Homotopy (1)
- Hysteresis (1)
- Inflation (1)
- Intensity modulated radiation therapy (1)
- Inverses Problem (1)
- Kompakter Träger <Mathematik> (1)
- Konvergenz (1)
- Konvexe Mengen (1)
- Konvexe Optimierung (1)
- Kopplungsmethoden (1)
- Kreditrisiko (1)
- Kugel (1)
- L2-Approximation (1)
- LIBOR (1)
- Laplace transform (1)
- Linear kinematic hardening (1)
- Linear kinematische Verfestigung (1)
- MOCO (1)
- Mathematisches Modell (1)
- McKay-Conjecture (1)
- McKay-Vermutung (1)
- Mehrkriterielle Optimierung (1)
- Mehrskalen (1)
- Mehrskalenanalyse (1)
- Molekulardynamik (1)
- Multicriteria optimization (1)
- Multiple objective combinatorial optimization (1)
- Networks (1)
- Nicht-Desarguessche Ebene (1)
- Numerische Mathematik (1)
- Numerisches Verfahren (1)
- Optimal semiconductor design (1)
- Optimierung (1)
- Order of printed copy (1)
- Planares Polynom (1)
- Polynomapproximation (1)
- Project prioritization (1)
- Project selection (1)
- Projektionsoperator (1)
- Radiative Heat Transfer (1)
- Radiotherapy (1)
- Rate-independency (1)
- Ratenunabhängigkeit (1)
- Regressionsanalyse (1)
- Representation (1)
- Satellitendaten (1)
- Seismische Welle (1)
- Slender body theory (1)
- Spherical Multiresolution Analysis (1)
- Standorttheorie (1)
- Stop- and Play-Operators (1)
- Stop- und Play-Operator (1)
- Stop- und Play-Operator (1)
- Strahlentherapie (1)
- Strukturiertes Finanzprodukt (1)
- Strömungsdynamik (1)
- Traffic flow (1)
- Trennschärfe <Statistik> (1)
- Unreinheitsfunktion (1)
- Variational inequalities (1)
- Variationsungleichungen (1)
- Variationsungleichungen (1)
- Verkehrsplanung (1)
- Viskoelastische Flüssigkeiten (1)
- Vorkonditionierer (1)
- Zeitabhängigkeit (1)
- Zeitintegrale Modelle (1)
- Zonal Kernel Functions (1)
- Zopfgruppe (1)
- adjacency (1)
- adjoints (1)
- aggressive space mapping (1)
- anisotropes Viskositätsmodell (1)
- anisotropic viscosity (1)
- approximative Identität (1)
- best basis (1)
- biorthogonal bases of L^2 (1)
- connectedness (1)
- consecutive ones matrix (1)
- convergence (1)
- convex optimization (1)
- coupling methods (1)
- credit risk (1)
- decision support (1)
- descent algorithm (1)
- double exponential distribution (1)
- drift diffusion (1)
- elastoplastic BVP (1)
- energy transport (1)
- entropy (1)
- enumerative geometry (1)
- explicit representation (1)
- explizite Darstellung (1)
- facets (1)
- fast approximation (1)
- first hitting time (1)
- formants (1)
- forward-shooting grid (1)
- hedging (1)
- hub covering (1)
- hub location (1)
- hysteresis (1)
- impurity functions (1)
- inflation-linked product (1)
- integral constitutive equations (1)
- intensity map segmentation (1)
- jump-diffusion process (1)
- linear programming (1)
- local support (1)
- local trigonometric packets (1)
- locally compact (1)
- location theory (1)
- lokal kompakt (1)
- lokaler Träger (1)
- longevity bonds (1)
- macro derivative (1)
- multi scale (1)
- multileaf collimator (1)
- multiobjective optimization (1)
- neighborhood search (1)
- network flows (1)
- non-desarguesian plane (1)
- numerical methods (1)
- numerics (1)
- optimal capital structure (1)
- optimal investment (1)
- optimales Investment (1)
- optimal stopping (1)
- optimization (1)
- path-dependent options (1)
- penalty methods (1)
- planar polynomial (1)
- preconditioners (1)
- quadrinomial tree (1)
- quasiregular group (1)
- quasireguläre Gruppe (1)
- radiotherapy (1)
- reflectionless boundary condition (1)
- reflexionslose Randbedingung (1)
- regression analysis (1)
- reproducing kernel (1)
- reproduzierender Kern (1)
- schnelle Approximation (1)
- seismic wave (1)
- spectrogram (1)
- speech recognition (1)
- sphere (1)
- spline (1)
- sputtering process (1)
- stop- and play-operators (1)
- traffic planning (1)
- tropical geometry (1)
- valid inequalities (1)
- variational inequalities (1)
- viscoelastic fluids (1)
- wavelet packets (1)
- wavelets (1)

* naive examples which show drawbacks of the discrete wavelet transform and the windowed Fourier transform;
* adaptive partitioning (with a "best basis" approach) of speech-like signals by means of local trigonometric bases with orthonormal windows;
* extraction of formant-like features from the cosine transform;
* further steps towards the classification of vowels or voiced speech, suggested at the end.

In this paper a known orthonormal system of time- and space-dependent functions, which were derived from the Cauchy-Navier equation for elastodynamic phenomena, is used to construct reproducing kernel Hilbert spaces. After choosing one of the spaces, the corresponding kernel is used to define a function system that serves as a basis for a spline space. We show that under certain conditions there exists a unique interpolating or approximating spline, respectively, in this space with respect to given samples of an unknown function. The name "spline" here refers to its property of minimising a norm among all interpolating functions. Moreover, a convergence theorem and an error estimate relative to the point grid density are derived. As a numerical example we investigate the propagation of seismic waves.
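The interpolation step described above can be sketched generically: given a reproducing kernel and samples of an unknown function, the spline coefficients solve a linear system with the kernel Gram matrix. The sketch below uses a Gaussian kernel on the real line as a stand-in for the far more involved Cauchy-Navier kernel; all function names are illustrative.

```python
import numpy as np

def kernel_interpolate(nodes, samples, kernel):
    """Solve K c = y for the spline coefficients c, where
    K[i, j] = kernel(nodes[i], nodes[j]) is the Gram matrix."""
    K = np.array([[kernel(x, y) for y in nodes] for x in nodes])
    coeffs = np.linalg.solve(K, samples)
    # The interpolating spline is s(x) = sum_j c_j * kernel(x, nodes[j]).
    return lambda x: sum(c * kernel(x, xj) for c, xj in zip(coeffs, nodes))

# Stand-in kernel: a Gaussian, which is positive definite on R.
gauss = lambda x, y: np.exp(-(x - y) ** 2)

nodes = np.array([0.0, 0.5, 1.0, 1.5])
samples = np.sin(nodes)          # samples of the "unknown" function
s = kernel_interpolate(nodes, samples, gauss)
```

By construction, s reproduces the given samples exactly (up to solver precision), which is the interpolation property the abstract refers to.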

We consider optimal design problems for semiconductor devices which are simulated using the energy transport model. We develop a descent algorithm based on the adjoint calculus and present numerical results for a ballistic diode. Further, we compare the optimal doping profile with results computed on the basis of the drift diffusion model. Finally, we exploit the model hierarchy and test the space mapping approach, especially the aggressive space mapping algorithm, for the design problem. This yields a significant reduction of numerical costs and programming effort.
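A minimal sketch of an adjoint-based descent loop, with a toy quadratic functional standing in for the actual doping-profile design problem (in the real application each gradient evaluation needs one forward state solve and one adjoint solve; all names here are illustrative):

```python
import numpy as np

# Toy stand-in: minimize J(u) = 0.5 * ||A u - b||^2 over the design u.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

def grad_J(u):
    p = A @ u - b          # "adjoint" variable of the toy problem
    return A.T @ p         # gradient assembled from the adjoint

u = np.zeros(2)            # initial design
step = 0.05                # fixed step size for simplicity
for _ in range(500):
    u = u - step * grad_J(u)
# u approaches the minimizer A^{-1} b = (0.2, 0.6)
```

In practice the step size would be chosen by a line search, and the space mapping idea mentioned above would replace most fine-model gradient evaluations by cheap coarse-model ones.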

Tropical geometry is a rather new field of algebraic geometry. The main idea is to replace algebraic varieties by certain piecewise linear objects in R^n, which can be studied with the aid of combinatorics. There is hope that many algebraically difficult operations become easier in the tropical setting, as the structure of the objects seems to be simpler. In particular, tropical geometry shows promise for application in enumerative geometry. Enumerative geometry deals with the counting of geometric objects that are determined by certain incidence conditions. Until around 1990, not many enumerative questions had been answered and there was not much prospect of solving more. But then Kontsevich introduced the moduli space of stable maps, which turned out to be a very useful concept for the study of enumerative geometry. A well-known problem of enumerative geometry is to determine the numbers N_cplx(d,g) of complex genus g plane curves of degree d passing through 3d+g-1 points in general position. Mikhalkin has defined the analogous number N_trop(d,g) for tropical curves and shown that these two numbers coincide (Mikhalkin's Correspondence Theorem). Tropical geometry supplies many new ideas and concepts that could be helpful to answer enumerative problems. However, as a rather new field, tropical geometry has to be studied more thoroughly. This thesis is concerned with the "translation" of well-known facts of enumerative geometry to tropical geometry. More precisely, the main results of this thesis are:

- a tropical proof that N_trop(d,g) is independent of the position of the 3d+g-1 points,
- a tropical proof of Kontsevich's recursive formula for computing N_trop(d,0), and
- a tropical proof of Caporaso's and Harris' algorithm for computing N_trop(d,g).

All results were derived in joint work with my advisor Andreas Gathmann.
(Note that tropical research is not restricted to the translation of classically well-known facts; there are actually new results, shown by means of tropical geometry, that were not known before. For example, Mikhalkin gave a tropical algorithm to compute the Welschinger invariant for real curves. This shows that tropical geometry can indeed be a tool for a better understanding of classical geometry.)
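Kontsevich's recursion mentioned above is explicit enough to compute directly: with N_1 = 1, the number N_d of rational plane curves of degree d through 3d-1 general points satisfies N_d = sum over d_A + d_B = d of N_{d_A} N_{d_B} (d_A^2 d_B^2 C(3d-4, 3d_A-2) - d_A^3 d_B C(3d-4, 3d_A-1)). A direct implementation:

```python
from math import comb

def kontsevich(dmax):
    """N[d] = number of rational plane curves of degree d through
    3d-1 points in general position, via Kontsevich's recursion."""
    N = {1: 1}
    for d in range(2, dmax + 1):
        total = 0
        for dA in range(1, d):
            dB = d - dA
            total += N[dA] * N[dB] * (
                dA**2 * dB**2 * comb(3*d - 4, 3*dA - 2)
                - dA**3 * dB * comb(3*d - 4, 3*dA - 1)
            )
        N[d] = total
    return N

N = kontsevich(5)
# N[1] = N[2] = 1, N[3] = 12, N[4] = 620, N[5] = 87304
```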

This work deals with the mathematical modeling and numerical simulation of the dynamics of a curved inertial viscous Newtonian fiber, which is practically applicable to the description of centrifugal spinning processes of glass wool. Neglecting surface tension and temperature dependence, the fiber flow is modeled as a three-dimensional free boundary value problem via instationary incompressible Navier-Stokes equations. From regular asymptotic expansions in powers of the slenderness parameter, leading-order balance laws for mass (cross-section) and momentum are derived that combine the unrestricted motion of the fiber center-line with the inner viscous transport. The physically reasonable form of the one-dimensional fiber model results thereby from the introduction of the intrinsic velocity that characterizes the convective terms. For the numerical simulation of the derived model a finite volume code is developed. The results of the numerical scheme for high Reynolds numbers are validated by comparing them with the analytical solution of the inviscid problem. Moreover, the influence of parameters such as viscosity and rotation on the fiber dynamics is investigated. Finally, an application based on industrial data is performed.

Stop Location Design in Public Transportation Networks: Covering and Accessibility Objectives
(2006)

In StopLoc we consider the location of new stops along the edges of an existing public transportation network. Examples of StopLoc include the location of bus stops along some given bus routes or of railway stations along the tracks in a railway system. In order to measure the "convenience" of the location decision for potential customers in given demand facilities, two objectives are proposed. In the first one, we impose an upper bound on the distance for reaching a closest station from any of the demand facilities and minimize the number of stations. In the second one, we fix the number of new stations and minimize the sum of the distances between demand facilities and stations. The resulting two problems, CovStopLoc and AccessStopLoc, are solved by a reduction to a classical set covering problem and a restricted location problem, respectively. We implement the general ideas in two different environments: the plane, where demand facilities are represented by coordinates, and networks, where they are nodes of a graph.
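The set covering reduction for the first objective can be illustrated with the classical greedy heuristic (a common approximation; the paper's exact reduction and solution method are not reproduced here, and all names below are illustrative):

```python
def greedy_set_cover(universe, candidates):
    """candidates: dict station -> set of demand facilities within the
    distance bound. Greedily pick the station covering the largest
    number of still-uncovered facilities."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda s: len(candidates[s] & uncovered))
        if not candidates[best] & uncovered:
            raise ValueError("some facilities cannot be covered")
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

facilities = {"A", "B", "C", "D"}
stations = {"s1": {"A", "B"}, "s2": {"B", "C"}, "s3": {"C", "D"}}
picked = greedy_set_cover(facilities, stations)   # covers all facilities
```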

The new international capital standard for credit institutions ("Basel II") allows banks to use internal rating systems in order to determine the risk weights that are relevant for the calculation of the capital charge. Therefore, it is necessary to develop a system that encompasses the main practices and methods existing in the context of credit rating. The aim of this thesis is to make a suggestion for setting up a credit rating system: the main techniques used in practice are analyzed, some alternatives are presented, and the problems that can arise from a statistical point of view are considered. Finally, we set up some guidelines on how to accomplish the challenge of credit scoring. The judgement of the quality of a credit with respect to the probability of default is called credit rating. A method based on a multi-dimensional criterion seems natural, due to the numerous effects that can influence this rating. However, owing to governmental rules, the tendency is that typically one-dimensional criteria will be required in the future as a measure for the creditworthiness or for the quality of a credit. The problem described above can be resolved via transformation of a multi-dimensional data set into a one-dimensional one while keeping some monotonicity properties and keeping the loss of information (due to the loss of dimensionality) at a minimum level.

Linear and integer programs are considered whose coefficient matrices can be partitioned into K consecutive ones matrices. Mimicking the special case K=1, which is well known to be equivalent to a network flow problem, we show that these programs can be transformed into a generalized network flow problem which we call the semi-simultaneous (se-sim) network flow problem. Feasibility conditions for se-sim flows are established and methods for finding initial feasible se-sim flows are derived. Optimal se-sim flows are characterized by a generalization of the negative cycle theorem for the minimum cost flow problem. The issue of improving a given flow is addressed both from a theoretical and a practical point of view. The paper concludes with a summary and some suggestions for possible future work in this area.
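The negative cycle characterization can be illustrated for the classical K=1 case: a feasible flow has minimum cost if and only if its residual network contains no negative-cost cycle, which a Bellman-Ford pass detects. A generic sketch (not the paper's se-sim generalization):

```python
def has_negative_cycle(n, edges):
    """Bellman-Ford negative-cycle test on a residual network with
    nodes 0..n-1. edges: list of (u, v, cost)."""
    dist = [0.0] * n          # start from all nodes simultaneously
    for _ in range(n - 1):
        for u, v, c in edges:
            if dist[u] + c < dist[v]:
                dist[v] = dist[u] + c
    # one more relaxation round: any improvement implies a negative cycle
    return any(dist[u] + c < dist[v] for u, v, c in edges)

# residual network with cycle 0 -> 1 -> 2 -> 0 of total cost -1,
# so the underlying flow would not be optimal
edges = [(0, 1, 2), (1, 2, -1), (2, 0, -2)]
```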

This thesis discusses methods for the classification of finite projective planes via exhaustive search. In the main part the author classifies all projective planes of order 16 admitting a large quasiregular group of collineations. This is done by a complete search using the computer algebra system GAP. Computational methods for the construction of relative difference sets are discussed. These methods are implemented in a GAP package, which is available separately. As another result, found in cooperation with U. Dempwolff, the projective planes defined by planar monomials are classified. Furthermore, the full automorphism groups of the non-translation planes defined by planar monomials are determined.

This thesis introduces so-called cone scalarising functions. They are by construction compatible with a partial order on the outcome space given by a cone. The quality of the parametrisations of the efficient set given by the cone scalarising functions is then investigated. Here, the focus lies on the (weak) efficiency of the generated solutions, the reachability of efficient points, and the continuity of the solution set. Based on cone scalarising functions, Pareto Navigation, a novel interactive multiobjective optimisation method, is proposed. It changes the ordering cone to realise bounds on partial tradeoffs. Besides, its use of an equality constraint for the changing component of the reference point is a new feature. The efficiency of its solutions, the reachability of efficient solutions and continuity are then analysed. Potential problems are demonstrated using a critical example. Furthermore, the use of Pareto Navigation in a two-phase approach and for nonconvex problems is discussed. Finally, its application to intensity-modulated radiotherapy planning is described, and its realisation in a graphical user interface is shown.

For the last decade, optimization of beam orientations in intensity-modulated radiation therapy (IMRT) has been shown to be successful in improving the treatment plan. Unfortunately, the quality of a set of beam orientations depends heavily on its corresponding beam intensity profiles. Usually, a stochastic selector is used for optimizing beam orientations, and then a single-objective inverse treatment planning algorithm is used for the optimization of beam intensity profiles. The overall time needed to solve the inverse planning for every random selection of beam orientations becomes excessive. Recently, considerable improvement has been made in optimizing beam intensity profiles by using multiple-objective inverse treatment planning. Such an approach results in a variety of beam intensity profiles for every selection of beam orientations, making the dependence between beam orientations and their intensity profiles less important. This thesis takes advantage of this property to accelerate the optimization process through an approximation of the intensity profiles that are used for multiple selections of beam orientations, saving a considerable amount of calculation time. A dynamic algorithm (DA) and an evolutionary algorithm (EA) for beam orientation optimization in IMRT planning are presented. The DA automatically mimics the methods of beam's-eye view and observer's view, which are recognized in conventional conformal radiation therapy. The EA is based on a dose-volume histogram evaluation function, introduced as an attempt to minimize the deviation between the mathematical and clinical optima. To illustrate the efficiency of the algorithms, they have been applied to different clinical examples. In comparison to the standard equally spaced beam plans, improvements are reported for both algorithms in all the clinical examples, even when, for some cases, fewer beams are used. A smaller number of beams is always desirable without compromising the quality of the treatment plan.
It results in a shorter treatment delivery time, which reduces potential errors in terms of patient movements and decreases discomfort.
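The dose-volume histogram evaluation underlying the EA can be illustrated in minimal form: a cumulative DVH records, for each dose threshold, the fraction of a structure's volume receiving at least that dose. A sketch with illustrative names, assuming equal-volume voxels:

```python
import numpy as np

def cumulative_dvh(doses, thresholds):
    """Fraction of the structure's volume receiving at least each
    threshold dose (one dose value per equal-volume voxel)."""
    doses = np.asarray(doses, dtype=float)
    return np.array([(doses >= t).mean() for t in thresholds])

voxel_doses = [10, 20, 30, 40, 50]          # Gy, one value per voxel
dvh = cumulative_dvh(voxel_doses, [0, 25, 45])
# dvh == [1.0, 0.6, 0.2]
```

An evaluation function of the kind mentioned in the abstract would then score a plan by comparing such curves against clinically prescribed dose-volume constraints.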

Traffic flow on road networks has been a continuous source of challenging mathematical problems. Mathematical modelling can provide an understanding of the dynamics of traffic flow and hence help in organizing the flow through the network. In this dissertation macroscopic models for traffic flow in road networks are presented. The primary interest is the extension of the existing macroscopic road network models based on partial differential equations (PDE models). In order to overcome the high computational costs of the PDE model, an ODE model has been introduced. In addition, a steady-state traffic flow model on road networks, called the RSA model, has been discussed. To obtain the optimal flow through the network, cost functionals and corresponding optimal control problems are defined. The solution of these optimization problems provides information about the shortest path through the network subject to road conditions. The resulting constrained optimization problem is solved approximately by solving an unconstrained problem involving exact penalty functions and a penalty parameter. A good estimate of the threshold of the penalty parameter is given. A well-defined algorithm for solving a nonlinear, nonconvex equality and bound constrained optimization problem is introduced. The numerical results on the convergence history of the algorithm support the theoretical results. In addition, bottleneck situations in the traffic flow have been treated using a domain decomposition method (DDM). In particular, this method could also be used to solve scalar conservation laws with discontinuous flux functions corresponding to other physical problems. The method is effective even when the flux function presents more than one discontinuity within the same spatial domain. It is found in the numerical results that the DDM is superior to other schemes and demonstrates good shock resolution.
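The exact penalty idea can be illustrated on a toy problem: for a sufficiently large penalty parameter mu, minimizers of the nonsmooth penalty P(x) = f(x) + mu * |g(x)| coincide with minimizers of the constrained problem min f(x) subject to g(x) = 0. A subgradient-descent sketch (illustrative only, not the dissertation's algorithm):

```python
import numpy as np

def exact_penalty_min(grad_f, g, grad_g, x0, mu, step=1e-3, iters=20000):
    """Subgradient descent on P(x) = f(x) + mu * |g(x)|.
    Above a finite threshold for mu, minimizers of P solve the
    equality-constrained problem exactly."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        sub = grad_f(x) + mu * np.sign(g(x)) * grad_g(x)
        x = x - step * sub
    return x

# toy problem: min x1^2 + x2^2  subject to  x1 + x2 = 2  ->  x* = (1, 1)
grad_f = lambda x: 2.0 * x
g = lambda x: x[0] + x[1] - 2.0
grad_g = lambda x: np.array([1.0, 1.0])

x_star = exact_penalty_min(grad_f, g, grad_g, [0.0, 0.0], mu=5.0)
```

With the constant step size the iterates chatter in a small band around the constrained minimizer; a diminishing step rule would give convergence in the limit.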

This thesis deals with modeling aspects of generalized Newtonian and of non-Newtonian fluids, as well as with the development and validation of algorithms used in the simulation of such fluids. The main contribution in the modeling part is the introduction and analysis of a new model for generalized Newtonian fluids, where the constitutive equation is of an algebraic form. The distinction between shear and extensional viscosities leads to an anisotropic viscosity model. It can be considered a natural extension of the well-known (isotropic viscosity) Carreau model, which deals only with the shear viscosity properties of the fluid. The proposed model additionally takes the extensional viscosity properties into account. Numerical results show that the anisotropic viscosity model gives much better agreement with experimental observations than the isotropic one. Another contribution of the thesis consists of the development and analysis of robust and reliable algorithms for the simulation of generalized Newtonian fluids. For such fluids the momentum equations are strongly coupled through mixed derivatives appearing in the viscous term (unlike the case of Newtonian fluids). It is shown in this thesis that a careful treatment of those derivatives is essential in deriving robust algorithms. A modification of a standard SIMPLE-like algorithm is given, where all the viscous terms from the momentum equations are discretized in an implicit manner. Moreover, it is shown that a block diagonal preconditioner for the viscous operator is good enough to be used in simulations. Furthermore, different solution techniques are compared, namely projection-type methods (which consist of solving the momentum equations and a pressure correction equation) and fully coupled methods (where the momentum and continuity equations are solved together). It is shown that explicit discretization of the mixed derivatives leads to stability problems.
Further, analytical estimates of the eigenvalue distribution for three different preconditioners, applied to the transformed system arising after discretization and linearization of the momentum and continuity equations, are provided. We propose to apply a block Gauss-Seidel preconditioner to the transformed system. The analysis shows that this preconditioner is able to cluster the eigenvalues around unity independently of the transformation step. This is not the case for the other preconditioners applied to the transformed system as discussed in the thesis. The block Gauss-Seidel preconditioner has also shown the best behavior (among all preconditioners discussed in the thesis) in numerical experiments. A further contribution consists of the comparison and validation of numerical algorithms applied in simulations of non-Newtonian fluids modeled by time integral constitutive equations. Numerical results from simulations of dilute polymer solutions, described by the integral Oldroyd B model, have shown very good quantitative agreement with the results obtained by the differential Oldroyd B counterpart in a 4:1 planar contraction domain at low Weissenberg numbers. In this case, the Weissenberg number is changed by changing the relaxation time. However, contrary to the differential Oldroyd B model, the integral one allows stable simulations also in the range of high Weissenberg numbers. Moreover, very good agreement with experimental observations has been achieved. Simulations of concentrated polymer solutions (polystyrene and polybutadiene solutions), modeled by the integral Doi Edwards model supplemented by chain length fluctuations, have shown very good qualitative agreement with the results obtained by its differential approximation in a 4:1:4 constriction domain. Again, much higher Weissenberg numbers can be reached when the integral model is used.
Moreover, very good quantitative agreement with experimental data of the polystyrene solution is achieved for the first normal stress difference and the shear viscosity, defined here as the quotient of shear stress and shear rate. Finally, a comparison of the two methods used for approximating the time integral constitutive equation, namely the Deformation Field Method (DFM) and the Backward Lagrangian Particle Method (BLPM), is performed. In the BLPM the particle paths are recalculated at every time step of the simulations, which has never been tried before. The results show that in the considered geometries both methods give similar results.

The fast development of the financial markets in the last decade has led to the creation of a variety of innovative interest rate related products that require advanced numerical pricing methods. Examples in this respect are products with a complicated strong path-dependence such as a Target Redemption Note, a Ratchet Cap, a Ladder Swap and others. On the other side, the one-factor Hull and White (1990) type of short rate models that are standard in the literature allow only for a perfect correlation between all continuously compounded spot rates or Libor rates and are thus not suited for pricing innovative products depending on several Libor rates, such as for example a "steepener" option. One possible solution to this problem is delivered by the two-factor short rate models, and in this thesis we consider a two-factor Hull and White (1990) type of short rate process derived from the Heath, Jarrow, Morton (1992) framework by limiting the volatility structure of the forward rate process to a deterministic one. In this thesis, we often choose to use a variety of modified (binomial, trinomial and quadrinomial) tree constructions as the main numerical pricing tool due to their flexibility and fast convergence and (when there is no closed-form solution) compare their results with fine grid Monte Carlo simulations. For the purpose of pricing the already mentioned innovative short-rate related products, we offer and examine two different lattice construction methods for the two-factor Hull-White type of short rate process which are able to deal easily both with the modeling of the mean-reversion of the underlying process and with the strong path-dependence of the priced options. Additionally, we prove that the so-called rotated lattice construction method overcomes the problem of obtaining negative "risk-neutral probabilities" that is typical for the existing two-factor tree constructions.
With a variety of numerical examples, we show that this leads to stability in the results, especially in cases of high volatility parameters and negative correlation between the base factors (which is typically the case in reality). Further, noticing that Chan et al (1992) and Ritchken and Sankarasubramanian (1995) showed that option prices are sensitive to the level of the short rate volatility, we examine the pricing of European and American options where the short rate process has a volatility structure of a Cheyette (1994) type. In this context, we examine the application of the two offered lattice construction methods and compare their results with the Monte Carlo simulation ones for a variety of examples. Additionally, for the pricing of American options with the Monte Carlo method we expand and implement the simulation algorithm of Longstaff and Schwartz (2000). With a variety of numerical examples we compare again the stability and the convergence of the different lattice construction methods. Dealing with the problems of pricing strongly path-dependent options, we come across the cumulative Parisian barrier option pricing problem. We notice that in their classical form, the cumulative Parisian barrier options have been priced both analytically (in a quasi closed form) and with a tree approximation (based on the Forward Shooting Grid algorithm; see e.g. Hull and White (1993), Kwok and Lau (2001) and others). However, we offer an additional tree construction method which can be seen as a direct binomial tree integration that uses the analytically calculated conditional survival probabilities. The advantage of the offered method is on the one side that the conditional survival probabilities are easier to calculate than the closed-form solution itself, and on the other side that this tree construction is very flexible in the sense that it allows easy incorporation of additional features such as, e.g., a forward-starting one.
The obtained results are better than the Forward Shooting Grid tree ones and are very close to the analytical quasi closed form solution. Finally, we turn our attention to pricing another type of innovative interest rate related product, namely the longevity bond, whose coupon payments depend on the survival function of a given cohort. Due to the lack of a market for mortality, for the pricing of longevity bonds we develop (following Korn, Natcheva and Zipperer (2006)) a framework that contains principles from both insurance and financial mathematics. Further on, we calibrate the existing models for the stochastic mortality dynamics to historical German data and additionally offer new stochastic extensions of the classical (deterministic) models of mortality such as the Gompertz and the Makeham one. Finally, we compare and analyze the results of the application of all considered models to the pricing of a longevity bond on the longevity of German males.
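As a baseline for the tree constructions discussed above, a plain Cox-Ross-Rubinstein binomial tree for a European call (one factor, constant volatility) can be sketched as follows; the two-factor lattices of the thesis are substantially more involved:

```python
from math import exp, sqrt

def crr_call(S0, K, r, sigma, T, n):
    """Cox-Ross-Rubinstein binomial tree price of a European call."""
    dt = T / n
    u = exp(sigma * sqrt(dt))
    d = 1 / u
    q = (exp(r * dt) - d) / (u - d)       # risk-neutral up-probability
    disc = exp(-r * dt)
    # terminal payoffs, then backward induction through the tree
    values = [max(S0 * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    for _ in range(n):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

price = crr_call(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=500)
# converges to the Black-Scholes value of about 10.45
```

Path-dependent features of the kind treated in the thesis (e.g. cumulative barriers) require augmenting each node with an auxiliary state, which is what the Forward Shooting Grid approach does.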

We show the numerical applicability of a multiresolution method based on harmonic splines on the 3-dimensional ball which allows the regularized recovery of the harmonic part of the Earth's mass density distribution out of different types of gravity data, e.g. different radial derivatives of the potential, at various positions which need not be located on a common sphere. This approximated harmonic density can be combined with its orthogonal anharmonic complement, e.g. determined out of the splitting function of free oscillations, to an approximation of the whole mass density function. The applicability of the presented tool is demonstrated by several test calculations based on simulated gravity values derived from EGM96. The method yields a multiresolution in the sense that the localization of the constructed spline basis functions can be increased, which, in combination with more data, yields a higher resolution of the resulting spline. Moreover, we show that a locally improved data situation allows a highly resolved recovery in this particular area in combination with a coarse approximation elsewhere, which is an essential advantage of this method, e.g. compared to polynomial approximation.

We study model reduction techniques for frequency averaging in radiative heat transfer. In particular, we employ proper orthogonal decomposition in combination with the method of snapshots to devise an automated a posteriori algorithm, which helps to significantly reduce the dimensionality for further simulations. The reliability of the surrogate models is tested, and we compare the results with two other reduced models, which are given by the approximation using the weighted sum of gray gases and by a frequency-averaged version of the so-called \(\mathrm{SP}_n\) model. We present several numerical results underlining the feasibility of our approach.
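The method of snapshots can be sketched via the SVD of the snapshot matrix: the left singular vectors are the POD modes, and truncating to the dominant ones gives the reduced basis. A generic sketch on synthetic data (not the radiative heat transfer setup itself; all names are illustrative):

```python
import numpy as np

def pod_basis(snapshots, k):
    """Columns of `snapshots` are state vectors; return the k dominant
    orthonormal POD modes via the singular value decomposition."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :k]

rng = np.random.default_rng(0)
# synthetic snapshot matrix of numerical rank 2 plus small noise
modes = rng.normal(size=(50, 2))
coeffs = rng.normal(size=(2, 30))
X = modes @ coeffs + 1e-6 * rng.normal(size=(50, 30))

Phi = pod_basis(X, 2)
# reduced-order reconstruction: project snapshots onto the POD modes
X_red = Phi @ (Phi.T @ X)
err = np.linalg.norm(X - X_red) / np.linalg.norm(X)   # near noise level
```

In a reduced-order simulation, the full system would then be Galerkin-projected onto the span of `Phi`, shrinking the state dimension from 50 to 2 in this toy example.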

We present a constructive theory for locally supported approximate identities on the unit ball in \(\mathbb{R}^3\). The uniform convergence of the convolutions of the derived kernels with an arbitrary continuous function \(f\) to \(f\), i.e. the defining property of an approximate identity, is proved. Moreover, an explicit representation for a class of such kernels is given. The original publication is available at www.springerlink.com

In this article, we give some generalisations of existing Lipschitz estimates for the stop and the play operator with respect to an arbitrary convex and closed characteristic in a separable Hilbert space. We are especially concerned with the dependence of their outputs on different scalar products.

In this thesis, diverse problems concerning inflation-linked products are dealt with. To start with, two models for inflation are presented: a geometric Brownian motion for the consumer price index itself and an extended Vasicek model for the inflation rate. For both suggested models, pricing formulas for inflation-linked products are derived using risk-neutral valuation techniques. As a result, Black-Scholes type closed-form solutions are obtained for a call option on the inflation index under the Brownian motion model and under the extended Vasicek model for the inflation evolution, as well as for an inflation-linked bond. These results have already been presented in Korn and Kruse (2004) [17]. In addition to these inflation-linked products, pricing formulas for a European put option on inflation, an inflation cap and floor, an inflation swap and an inflation swaption are derived for both inflation models. Subsequently, based on the derived pricing formulas and assuming a geometric Brownian motion process for the inflation index, different continuous-time portfolio problems as well as hedging problems are studied using martingale techniques as well as stochastic optimal control methods. These utility optimization problems are continuous-time portfolio problems in different financial market setups, additionally with a positive lower-bound constraint on the final wealth of the investor. Taken together, the optimization problems studied in this work give a complete picture of the inflation-linked market and of both types of market participants, sellers as well as buyers of inflation-linked financial products. One result worth mentioning here is the fact that a regular risk-averse investor would like to sell rather than buy inflation-linked products, due, for example, to the high price of inflation-linked bonds and their underperformance compared to conventional risk-free bonds.
The relevance of this observation is confirmed by investigating a simple optimization problem for the extended Vasicek process, where the inflation-linked bond still underperforms the conventional bond. This situation does not change when one switches to optimizing the expected utility of purchasing power, because this is in essence only a change of measure with a different deflator. The negativity of the optimal portfolio process for a normal investor is an interesting aspect in itself, but it does not affect the advantage of trading inflation-linked products over excluding them from the investment portfolio. Subsequently, hedging problems are considered in order to model the other half of the inflation market, namely the buyers of inflation-linked products. Natural buyers of these products are institutions with future payment obligations that are linked to inflation. We therefore consider problems of hedging inflation-indexed payment obligations with different financial assets. The importance of inflation-linked products in the hedging portfolio is shown by analyzing two alternative optimal hedging strategies: in the first, the investor is allowed to trade an inflation-linked bond, while in the second he is not allowed to include an inflation-linked bond in his hedging portfolio. Technically, this is done by restricting the original financial market, consisting of a conventional bond, the inflation index and a stock correlated with the inflation index, to one from which the inflation index is excluded. As a whole, this thesis presents a broad view of inflation-linked products: inflation modeling, pricing of inflation-linked products, various continuous-time portfolio problems with inflation-linked products, and the hedging of inflation-related payment obligations.
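For the geometric Brownian motion model, the call-option formula is of Black-Scholes type. A minimal sketch of such a pricer, assuming a constant nominal rate and ignoring the model-specific drift adjustments derived in Korn and Kruse (2004); all parameter values are illustrative:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def bs_call(spot, strike, r, sigma, T):
    """Black-Scholes style call price for a GBM-modeled index.

    Illustrative only: the formula for inflation-linked payoffs in
    Korn and Kruse (2004) differs in its drift/discounting terms.
    """
    N = NormalDist().cdf
    d1 = (log(spot / strike) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return spot * N(d1) - strike * exp(-r * T) * N(d2)

# Hypothetical inputs: at-the-money call on an index at 100, 3% rate,
# 2% volatility (inflation indices are typically low-volatility), 1 year.
price = bs_call(spot=100.0, strike=100.0, r=0.03, sigma=0.02, T=1.0)
```

The price stays above the discounted forward intrinsic value spot - strike * exp(-r*T) and increases with volatility, as a Black-Scholes type formula must.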

Using covering problems (CoP) combined with binary search is a well-known and successful approach for solving continuous center problems. In this paper, we show that this is also true for center hub location problems in networks. We introduce and compare various formulations for hub covering problems (HCoP) and analyse the feasibility polyhedron of the most promising one. Computational results using benchmark instances are presented; they show that the new solution approach performs better on most examples.
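The covering-plus-binary-search idea can be sketched on a toy one-dimensional p-center instance, where the covering subproblem is solvable greedily; the hub-center case treated in the paper instead requires integer programming formulations of the covering problems:

```python
# Binary search on the coverage radius; at each candidate radius,
# a covering problem decides whether p centers suffice.
def centers_needed(points, radius):
    """Greedy minimum number of centers covering sorted points within radius
    (greedy is exact on a line)."""
    count, i, n = 0, 0, len(points)
    while i < n:
        count += 1
        reach = points[i] + 2 * radius  # center placed at points[i] + radius
        while i < n and points[i] <= reach:
            i += 1
    return count

def p_center_radius(points, p, tol=1e-6):
    pts = sorted(points)
    lo, hi = 0.0, (pts[-1] - pts[0]) / 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if centers_needed(pts, mid) <= p:
            hi = mid   # radius mid is feasible -> try a smaller one
        else:
            lo = mid
    return hi

# Two clusters {0, 1, 2} and {10, 11}: the optimal 2-center radius is 1.
radius = p_center_radius([0.0, 1.0, 2.0, 10.0, 11.0], p=2)
```

The binary search needs only a feasibility oracle for the covering problem, which is exactly the role the (H)CoP formulations play in the network setting.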