Kaiserslautern - Fachbereich Mathematik
The thesis discusses discrete-time dynamic flows over a finite time horizon T. These flows take time, called travel time, to pass an arc of the network. Travel times, as well as other network attributes such as costs, arc and node capacities, and the supply at the source node, can be constant or time-dependent. Here we review results on discrete-time dynamic network flow problems (DTDNFPs) with constant attributes and develop new algorithms to solve several DTDNFPs with time-dependent attributes.

Several dynamic network flow problems are discussed: maximum dynamic flow, earliest arrival flow, and quickest flow problems. We generalize the hybrid capacity scaling and shortest augmenting path algorithm for the static network flow problem to take the time dependency of the network attributes into account. The result is used to solve the maximum dynamic flow problem with time-dependent travel times and capacities. We also develop a new algorithm to solve earliest arrival flow problems under the same assumptions on the network attributes. The possibility to wait (or park) at a node before departing on an outgoing arc is also taken into account. We prove that the complexity of the new algorithm is reduced when infinite waiting is allowed, and we report a computational analysis of this algorithm. The results are then used to solve quickest flow problems.

Additionally, we discuss time-dependent bicriteria shortest path problems. Here we generalize the classical shortest path problem in two ways: we consider two (in general conflicting) objective functions and introduce a time dependency of the cost, caused by a travel time on each arc. These problems have several interesting practical applications but have received little attention in the literature. We develop two new algorithms, one of which requires weaker assumptions than previous research on the subject. Numerical tests show the superiority of the new algorithms.

We then apply dynamic network flow models and their associated solution algorithms to determine lower bounds on the evacuation time, evacuation routes, and maximum capacities of inhabited areas with respect to safety requirements. As a macroscopic approach, our dynamic network flow models are mainly used to produce good lower bounds for the evacuation time and do not consider any individual behavior during the emergency situation. These bounds can be used to analyze existing buildings or to help in the design phase of planning a building.
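To make the model concrete, the following sketch builds the standard time-expanded network for a maximum dynamic flow instance with time-dependent travel times and capacities and solves it with an off-the-shelf max-flow routine. This is the textbook reduction, not the thesis's hybrid capacity scaling / shortest augmenting path algorithm; all names (`tau`, `u`, and so on) are illustrative.

```python
# Illustrative reduction: maximum dynamic flow with time-dependent
# travel times and capacities via the time-expanded network.
import networkx as nx

def max_dynamic_flow(nodes, arcs, tau, u, source, sink, T):
    """arcs: iterable of (v, w). tau(v, w, t): integer travel time when
    entering arc (v, w) at time t. u(v, w, t): its capacity at time t."""
    G = nx.DiGraph()
    for t in range(T + 1):
        for v in nodes:
            if t < T:  # holdover arcs: unlimited waiting at every node
                G.add_edge((v, t), (v, t + 1), capacity=float("inf"))
        for v, w in arcs:
            arrival = t + tau(v, w, t)
            if arrival <= T:  # copy of arc (v, w) entered at time t
                G.add_edge((v, t), (w, arrival), capacity=u(v, w, t))
        G.add_edge((sink, t), "super_sink", capacity=float("inf"))
    value, _ = nx.maximum_flow(G, (source, 0), "super_sink")
    return value

# Tiny example: one arc whose capacity drops after time 1.
val = max_dynamic_flow(
    nodes=["s", "d"], arcs=[("s", "d")],
    tau=lambda v, w, t: 1,
    u=lambda v, w, t: 3 if t <= 1 else 1,
    source="s", sink="d", T=4)
print(val)  # 3 + 3 + 1 + 1 = 8 units reach d by time T
```

The expansion has O(|V| T) nodes, which is exactly the blow-up that dedicated dynamic flow algorithms, such as those developed in the thesis, aim to avoid.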
We construct and study two surface measures on the space C([0,T],M) of paths in a compact Riemannian manifold M embedded into the Euclidean space R^n. The first one is induced by conditioning the usual Wiener measure on C([0,T],R^n) on the event that the Brownian particle does not leave the tubular epsilon-neighborhood of M up to time T, and passing to the limit as epsilon tends to zero. The second one is defined as the limit of the laws of reflected Brownian motions with reflection on the boundaries of the tubular epsilon-neighborhoods of M. We prove that both surface measures exist and compare them with the Wiener measure W_M on C([0,T],M). We show that the first one is equivalent to W_M and compute the corresponding density explicitly in terms of the scalar curvature and the mean curvature vector of M. Further, we show that the second surface measure coincides with W_M. Finally, we study the limit behavior of both surface measures as T tends to infinity.
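As a purely illustrative picture of the first construction, one can approximate the conditioned measure by brute-force Monte Carlo for M = S^2 in R^3: simulate discretized Brownian paths and keep only those that stay in the epsilon-tube around the sphere. A minimal sketch; all parameters are arbitrary choices, not from the paper, and the tube condition is only checked at the discretization times.

```python
import numpy as np

rng = np.random.default_rng(0)

def surviving_fraction(n_paths=5000, n_steps=100, T=0.2, eps=0.3):
    """Simulate Brownian paths started on the unit sphere S^2 in R^3 and
    return the fraction staying in the eps-tube around the sphere."""
    dt = T / n_steps
    x = np.tile([1.0, 0.0, 0.0], (n_paths, 1))   # start on the sphere
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(n_steps):
        x = x + np.sqrt(dt) * rng.standard_normal((n_paths, 3))
        alive &= np.abs(np.linalg.norm(x, axis=1) - 1.0) <= eps
    return alive.mean()

print(surviving_fraction())
```

As epsilon tends to zero (and the discretization is refined), the conditional law of the surviving paths approximates the first surface measure, which the paper identifies up to an explicit curvature-dependent density with respect to W_M.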
In this paper we consider the location of stops along the edges of an already existing public transportation network, as introduced in [SHLW02]. This can be the introduction of bus stops along some given bus routes, or of railway stations along the tracks in a railway network. The goal is to achieve a maximal covering of given demand points with a minimal number of stops. This bicriterial problem is in general NP-hard. We present a finite dominating set yielding an IP formulation as a bicriterial set covering problem. We use this formulation to observe that along one single straight line the bicriterial stop location problem can be solved in polynomial time and present an efficient solution approach for this case. It can be used as the basis of an algorithm tackling real-world instances.
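For the single-line case, each demand point within covering reach of the line contributes an interval of feasible stop positions, and full coverage with the fewest stops reduces to stabbing all intervals with the fewest points, solvable by a classical greedy sweep. A minimal sketch of this idea (an illustration of why the line case is tractable, not necessarily the paper's algorithm):

```python
# Greedy interval stabbing: place each stop as far right as possible.
def min_stops(intervals):
    """intervals: list of (lo, hi) feasible-position intervals, one per
    demand point. Returns stop positions covering all of them."""
    intervals = sorted(intervals, key=lambda iv: iv[1])  # by right end
    stops, last = [], float("-inf")
    for lo, hi in intervals:
        if lo > last:          # current stops miss this demand point
            stops.append(hi)   # rightmost feasible position
            last = hi
    return stops

# Demand points at 0, 1, 2.5, 2.8 with covering half-width 0.6:
print(min_stops([(-0.6, 0.6), (0.4, 1.6), (1.9, 3.1), (2.2, 3.4)]))
# -> [0.6, 3.1]: two stops suffice
```

The bicriterial version traces out the whole trade-off between covered demand and number of stops; this sketch handles only the full-coverage endpoint of that curve.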
In this paper we consider set covering problems with a coefficient matrix almost having the consecutive ones property, i.e., in many rows of the coefficient matrix the ones appear consecutively. If this property holds for all rows, it is well known that the set covering problem can be solved efficiently. For our case of almost consecutive ones we present a reformulation exploiting the consecutive ones structure to develop bounds and a branching scheme. Our approach has been tested on real-world data as well as on theoretical problem instances.
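For the pure consecutive ones case, the efficient solvability mentioned above can be seen via a shortest-path view: with columns ordered so that every row covers a contiguous block, the state "columns 1..i are covered" turns minimum-cost covering into a path problem. A minimal sketch (illustrative background, not the paper's reformulation for the almost-consecutive case):

```python
import heapq

def c1p_cover_cost(n, rows):
    """rows: list of (a, b, cost), each covering columns a..b (1-based).
    Returns the minimum total cost to cover columns 1..n."""
    dist = [float("inf")] * (n + 1)   # dist[i]: cheapest cover of 1..i
    dist[0] = 0.0
    pq = [(0.0, 0)]
    while pq:
        d, i = heapq.heappop(pq)
        if i == n:
            return d
        if d > dist[i]:
            continue
        for a, b, c in rows:
            # usable if it leaves no gap (a <= i+1) and makes progress
            if a <= i + 1 and b > i and d + c < dist[b]:
                dist[b] = d + c
                heapq.heappush(pq, (d + c, b))
    return float("inf")  # infeasible

print(c1p_cover_cost(5, [(1, 3, 2.0), (2, 5, 2.5), (4, 5, 1.0)]))  # 3.0
```

Rows violating the consecutive ones property destroy this structure, which is what the paper's bounds and branching scheme are designed to handle.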
The central theme of this thesis is the development of enhanced methods and algorithms for appraising market and credit risks, and their application within the context of standard and more advanced market models.

Generally, methods and algorithms for analysing the market risk of complex portfolios involve detailed knowledge of option sensitivities, the so-called "Greeks". Based on an analysis of symmetries in financial market models, relations between option sensitivities are obtained which can be used for the efficient valuation of the Greeks. The relations are mainly derived within the Black-Scholes model, but some remain valid for more general models, for instance the Heston model.

Portfolios are usually influenced by many underlyings, so it is necessary to characterise the dependencies of these basic instruments. Such dependencies are commonly described by correlation matrices. However, estimates of correlation matrices in practice are disturbed by statistical noise and typically suffer from rank deficiency due to missing data. A fast algorithm is presented which performs a generalized Cholesky decomposition of a perturbed correlation matrix. In contrast to the standard Cholesky algorithm, the generalized method has the advantage that it also works for positive semidefinite, rank-deficient matrices. Moreover, it yields an approximate decomposition when the input matrix is indefinite. A comparison with known algorithms with similar features is performed, and it turns out that the new algorithm can be recommended in situations where computation time is the critical issue.

The determination of a profit and loss distribution by Fourier inversion of its characteristic function is a powerful tool, but it can break down when the characteristic function is not integrable. In this thesis, methods for the Fourier inversion of non-integrable characteristic functions are studied. Two theorems are obtained which are based on a suitable approximation of the unknown distribution by one with known density and characteristic function. Further, it is shown that straightforward fast Fourier inversion works when the corresponding density lives on a bounded interval.

The above techniques are of crucial importance for determining the profit and loss (P&L) distribution of large portfolios efficiently. The so-called Delta-Gamma normal approach has become the industry standard for the estimation of market risk. It is shown that its performance can be improved substantially by applying the developed methods. The same optimization procedure also applies to the Delta-Gamma Student model.

A standard tool for computing the P&L distribution of a loan portfolio is the CreditRisk+ model. Basically, the CreditRisk+ distribution is a discrete distribution which can be computed from its probability generating function. For this, a numerically stable method is presented, and as an alternative a new algorithm based on Fourier inversion is proposed. Finally, an extension of the CreditRisk+ model to market risk is developed, whose distribution can likewise be obtained efficiently by the presented Fourier inversion methods.
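As one concrete piece of the above, here is a minimal sketch of a pivoted Cholesky factorization that copes with positive semidefinite, rank-deficient input, the situation described for noisy correlation matrices. It is a standard textbook variant, not the thesis's generalized algorithm, and it does not cover the indefinite case:

```python
import numpy as np

def pivoted_cholesky(A, tol=1e-12):
    """Returns (L, piv) with A[np.ix_(piv, piv)] ~= L @ L.T for a
    symmetric positive semidefinite matrix A."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    piv = np.arange(n)
    L = np.zeros((n, n))
    for k in range(n):
        j = k + int(np.argmax(np.diag(A)[k:]))  # largest remaining pivot
        if A[j, j] <= tol:                      # numerical rank reached
            break
        A[[k, j], :] = A[[j, k], :]             # swap rows/cols k and j
        A[:, [k, j]] = A[:, [j, k]]
        L[[k, j], :k] = L[[j, k], :k]
        piv[[k, j]] = piv[[j, k]]
        L[k, k] = np.sqrt(A[k, k])
        L[k + 1:, k] = A[k + 1:, k] / L[k, k]
        # Schur complement update of the trailing block.
        A[k + 1:, k + 1:] -= np.outer(L[k + 1:, k], L[k + 1:, k])
    return L, piv

# Rank-2 "correlation-like" matrix: the factorization still succeeds.
B = np.random.default_rng(1).standard_normal((5, 2))
C = B @ B.T
L, piv = pivoted_cholesky(C)
print(np.allclose(C[np.ix_(piv, piv)], L @ L.T))  # True
```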
In this thesis the combinatorial framework of toric geometry is extended to equivariant sheaves over toric varieties. The central questions are how to extract combinatorial information from the resulting description and whether equivariant sheaves can, like toric varieties, be considered as purely combinatorial objects. The thesis consists of three main parts. In the first part, by systematically extending the framework of toric geometry, a formalism is developed for describing equivariant sheaves by certain configurations of vector spaces. In the second part, homological properties of a certain class of equivariant sheaves are investigated, namely that of reflexive equivariant sheaves. Several kinds of resolutions for these sheaves are constructed which depend only on the configuration of their associated vector spaces; this gives a partially positive answer to the question of combinatorial representability. As a particular result, a new way of computing minimal resolutions for Z^n-graded modules over polynomial rings is obtained. In the third part a complete classification of the simplest nontrivial sheaves, equivariant vector bundles of rank two over smooth toric surfaces, is given: a combinatorial characterization is established and parameter spaces (moduli spaces) are constructed which depend only on this characterization. The appendices give an outlook on equivariant sheaves and the relation of Chern classes to their combinatorial classification, focusing particularly on the case of the projective plane, and a classification of equivariant vector bundles of rank three over the projective plane.
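The "configurations of vector spaces" in the first part are in the spirit of Klyachko's classical description of equivariant vector bundles, which the thesis generalizes to sheaves. For orientation, a statement of that standard background in one common normalization (not the thesis's own, more general formalism):

```latex
% Standard background (Klyachko 1990); the thesis extends such
% vector-space configurations from bundles to sheaves.
An equivariant vector bundle $\mathcal{E}$ on a toric variety $X(\Delta)$
corresponds to a finite-dimensional vector space $E$ with a decreasing
filtration $\{E^{\rho}(i)\}_{i\in\mathbb{Z}}$ for each ray
$\rho\in\Delta(1)$, such that for every cone $\sigma\in\Delta$ there is
a grading $E=\bigoplus_{u\in M}E_{u}$ by the character lattice $M$ with
\[
  E^{\rho}(i)=\sum_{\langle u,\,v_{\rho}\rangle \ge i} E_{u}
  \qquad\text{for all rays }\rho\text{ of }\sigma,
\]
where $v_{\rho}$ denotes the primitive lattice generator of $\rho$.
```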
We study the possibility of using the structure of the regularization error for an a posteriori choice of the regularization parameter. As a result, a rather general form of a selection criterion is proposed, and its relation to the heuristic quasi-optimality principle of Tikhonov and Glasko (1964), and to an adaptation scheme proposed in a statistical context by Lepskii (1990), is discussed. The advantages of the proposed criterion are illustrated by examples such as the self-regularization of the trapezoidal rule for noisy Abel-type integral equations, Lavrentiev regularization for non-linear ill-posed problems, and an inverse problem of two-dimensional profile reconstruction.
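For Tikhonov regularization, the quasi-optimality principle referred to above admits a very short implementation: compute regularized solutions on a geometric grid of parameters and pick the one where consecutive solutions change least. A minimal sketch (illustrative only; the paper's criterion is more general and also covers settings such as Lavrentiev regularization):

```python
import numpy as np

def quasi_optimality(A, y, lam0=1.0, q=0.8, n_grid=30):
    """Quasi-optimality rule for Tikhonov regularization
    x_lam = (A^T A + lam I)^{-1} A^T y on the grid lam_k = lam0 * q**k:
    pick the k minimizing ||x_{k+1} - x_k||."""
    AtA, Aty = A.T @ A, A.T @ y
    n = AtA.shape[0]
    xs = [np.linalg.solve(AtA + lam0 * q**k * np.eye(n), Aty)
          for k in range(n_grid)]
    diffs = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(n_grid - 1)]
    k_star = int(np.argmin(diffs))
    return lam0 * q**k_star, xs[k_star]

# Example: mildly ill-conditioned least-squares problem with noisy data.
rng = np.random.default_rng(3)
A = np.vander(np.linspace(0, 1, 40), 8, increasing=True)
x_true = rng.standard_normal(8)
y = A @ x_true + 1e-3 * rng.standard_normal(40)
lam, x_hat = quasi_optimality(A, y)
print(lam, np.linalg.norm(x_hat - x_true))
```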
Semiparametric estimation of conditional quantiles for time series, with applications in finance
(2003)
The estimation of conditional quantiles has become an increasingly important issue in insurance and financial risk management. The stylized facts of financial time series data have rendered direct applications of extreme value theory methodologies to the estimation of extreme conditional quantiles inappropriate. On the other hand, quantile regression based procedures work well in the nonextreme part of a given data set but break down at extreme probability levels. To solve this problem, we combine nonparametric regression for time series with extreme value theory approaches in the estimation of extreme conditional quantiles for financial time series. To this end, a class of time series models is introduced that is similar to nonparametric AR-(G)ARCH models but does not depend on distributional or moment assumptions. We discuss estimation procedures for the nonextreme levels using these models and consider the estimates obtained by inverting conditional distribution estimators as well as by direct estimation using a kernel version of the Koenker-Bassett (1978) approach. Under some regularity conditions, the asymptotic normality and uniform convergence, with rates, of the conditional quantile estimator for strongly mixing time series are established. We study the estimation of the scale function in the introduced models using similar procedures and show that, under some regularity conditions, the scale estimate is weakly consistent and asymptotically normal. The application of the introduced models to the estimation of extreme conditional quantiles is achieved by augmenting them with methods from extreme value theory. It is shown that the overall extreme conditional quantile estimator is consistent. A Monte Carlo study illustrates the good performance of the estimates, and real data are used to demonstrate the estimation of Value-at-Risk and conditional expected shortfall in financial risk management, together with their multiperiod predictions.
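As a minimal illustration of the nonextreme-level estimation step, one can estimate a conditional quantile by kernel-smoothing indicator variables in the lagged value and inverting the resulting conditional distribution estimate. This mirrors the "inverting conditional distribution estimators" route mentioned above; the kernel, bandwidth, and lag structure are chosen for illustration only.

```python
import numpy as np

def kernel_cond_quantile(x_past, y_next, x0, p, h):
    """Nadaraya-Watson estimate of F(y | X_t = x0) from the pairs
    (x_past[i], y_next[i]), inverted at level p. Gaussian kernel,
    bandwidth h."""
    w = np.exp(-0.5 * ((x_past - x0) / h) ** 2)
    w = w / w.sum()
    order = np.argsort(y_next)
    cdf = np.cumsum(w[order])                 # weighted empirical CDF
    idx = min(int(np.searchsorted(cdf, p)), len(y_next) - 1)
    return y_next[order][idx]

# Example: lag-1 conditional 0.95-quantile of an AR(1) series at x0 = 0
# (true value is about 1.645 for standard normal innovations).
rng = np.random.default_rng(2)
z = np.zeros(2000)
for t in range(1, 2000):
    z[t] = 0.6 * z[t - 1] + rng.standard_normal()
print(kernel_cond_quantile(z[:-1], z[1:], x0=0.0, p=0.95, h=0.3))
```

For extreme levels p, the thesis augments such estimates with extreme value theory rather than reading the quantile directly off the smoothed distribution, which is unreliable in the tails.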