### Refine

#### Year of publication

- 1999 (131)

#### Document Type

- Preprint (121)
- Article (4)
- Lecture (3)
- Study Thesis (2)
- Diploma Thesis (1)

#### Faculty / Organisational entity

- Fachbereich Mathematik (131)

Based on the data provided by the appraisal committee (Gutachterausschuss) of the city of Kaiserslautern, we investigate which factors influence the market value of a developed property. From these findings, a formula as simple as possible is to be derived which yields an estimate of the market value while taking into account the purchase prices achieved in the past. Multiple linear regression lends itself to this task. The theoretical foundations are not discussed in detail here; they can be found in any book on mathematical statistics, or in [1]. The analysis of the data by and large follows the approach described by Angelika Schwarz in [1]. Her results, however, cannot be transferred directly, since the properties considered there were undeveloped. Since the statistical evaluation of large data sets involves an immense computational effort, the use of professional statistical software is indispensable. The program S-Plus 2.0 (PC version for Windows) was available; all computations and all figures in this report were produced in S-Plus.
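
As a rough illustration of the kind of model described above (a minimal sketch with invented data and variables, not the report's actual S-Plus analysis), a multiple linear regression of sale price on a few property attributes can be fitted by least squares:

```python
import numpy as np

# Hypothetical data: each row is one sold property, the columns are
# illustrative explanatory variables (living area, plot size, year built).
# The actual report uses the appraisal committee's data and S-Plus 2.0.
X = np.array([
    [120.0, 450.0, 1965.0],
    [ 95.0, 300.0, 1978.0],
    [150.0, 600.0, 1990.0],
    [110.0, 380.0, 1955.0],
])
y = np.array([310_000.0, 265_000.0, 420_000.0, 280_000.0])  # observed sale prices

# Least-squares fit of y ~ b0 + b1*x1 + b2*x2 + b3*x3
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(features):
    """Estimate the market value of a property from its attributes."""
    return coef[0] + np.dot(coef[1:], features)

print(predict([130.0, 500.0, 1985.0]))
```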

Given a finite set of points in the plane and a forbidden region R, we want to find a point X not an element of int(R) such that the weighted sum of distances to all given points is minimized. This location problem is a variant of the well-known Weber problem, where we measure the distance by polyhedral gauges and allow each of the weights to be positive or negative. The unit ball of a polyhedral gauge may be any convex polyhedron containing the origin. This large class of distance functions allows very general (practical) settings, such as asymmetry, to be modelled. Each given point is allowed to have its own gauge, and the forbidden region R enables us to include negative information in the model. Additionally, the use of negative and positive weights allows us to include the level of attraction or dislike of a new facility. Polynomial algorithms and structural properties for this global optimization problem (d.c. objective function and a non-convex feasible set), based on combinatorial and geometrical methods, are presented.

We consider wavelet estimation of the time-dependent (evolutionary) power spectrum of a locally stationary time series. Allowing for departures from stationarity proves useful for modelling, e.g., transient phenomena, quasi-oscillating behaviour or spectrum modulation. In our work wavelets are used to provide an adaptive local smoothing of a short-time periodogram in the time-frequency plane. For this, in contrast to classical nonparametric (linear) approaches, we use nonlinear thresholding of the empirical wavelet coefficients of the evolutionary spectrum. We show how these techniques allow both for adaptively reconstructing the local structure in the time-frequency plane and for denoising the resulting estimates. To this end a threshold choice is derived which is motivated by minimax properties w.r.t. the integrated mean squared error. Our approach is based on a 2-d orthogonal wavelet transform modified by using a cardinal Lagrange interpolation function on the finest scale. As an example, we apply our procedure to a time-varying spectrum motivated by mobile radio propagation.

A multiscale method is introduced using spherical (vector) wavelets for the computation of the earth's magnetic field within source regions of ionospheric and magnetospheric currents. The considerations are essentially based on two geomathematical keystones, namely (i) the Mie representation of solenoidal vector fields in terms of toroidal and poloidal parts and (ii) the Helmholtz decomposition of spherical (tangential) vector fields. Vector wavelets are shown to provide adequate tools for multiscale geomagnetic modelling in the form of a multiresolution analysis, thereby completely circumventing the numerical obstacles caused by vector spherical harmonics. The applicability and efficiency of the multiresolution technique are tested with real satellite data.

Vigenere-Verschlüsselung
(1999)

The mathematical modelling of problems in science and engineering often leads to partial differential equations in time and space with boundary and initial conditions. The boundary value problems can be written as extremal problems (principle of minimal potential energy), as variational equations (principle of virtual power) or as classical boundary value problems. There are connections concerning existence and uniqueness results between these formulations, which will be investigated using the powerful tools of functional analysis. The first part of the lecture is devoted to the analysis of linear elliptic boundary value problems given in variational form. The second part deals with the numerical approximation of the solutions of the variational problems. Galerkin methods such as FEM and BEM are the main tools. The h-version will be discussed, and an error analysis will be carried out. Examples, especially from elasticity theory, demonstrate the methods.

Value Preserving Strategies and a General Framework for Local Approaches to Optimal Portfolios
(1999)

We present some new general results on the existence and form of value preserving portfolio strategies in a general semimartingale setting. The concept of value preservation will be derived via a mean-variance argument. It will also be embedded into a framework for local approaches to the problem of portfolio optimisation.

The following two norms for holomorphic functions \(F\), defined on the right complex half-plane \(\{z \in \mathbb{C}:\Re(z)\gt 0\}\) with values in a Banach space \(X\), are equivalent:
\[\begin{eqnarray*} \lVert F \rVert _{H_p(C_+)} &=& \sup_{a\gt0}\left( \int_{-\infty}^\infty \lVert F(a+ib) \rVert ^p \ db \right)^{1/p}
\mbox{, and} \\ \lVert F \rVert_{H_p(\Sigma_{\pi/2})} &=& \sup_{\lvert \theta \rvert \lt \pi/2}\left( \int_0^\infty \left \lVert F(re^{i \theta}) \right \rVert ^p\ dr \right)^{1/p}.\end{eqnarray*}\] As a consequence, we derive a description of boundary values of sectorial holomorphic functions, and a theorem of Paley-Wiener type for sectorial holomorphic functions.

In the paper we discuss the transition from kinetic theory to macroscopic fluid equations, where the macroscopic equations are defined as asymptotic limits of a kinetic equation. This relation can be used to derive computationally efficient domain decomposition schemes for the simulation of rarefied gas flows close to the continuum limit. Moreover, we present some basic ideas for the derivation of kinetic induced numerical schemes for macroscopic equations, namely kinetic schemes for general conservation laws as well as Lattice-Boltzmann methods for the incompressible Navier-Stokes equations.

This paper is concerned with numerical algorithms for the bipolar quantum drift diffusion model. For the thermal equilibrium case a quasi-gradient method minimizing the energy functional is introduced and strong convergence is proven. The computation of current-voltage characteristics is performed by means of an extended Gummel iteration. It is shown that the involved fixed point mapping is a contraction for small applied voltages. In this case the model equations are uniquely solvable and convergence of the proposed iteration scheme follows. Numerical simulations of a one-dimensional resonant tunneling diode are presented. The computed current-voltage characteristics are in good qualitative agreement with experimental measurements. The appearance of negative differential resistances is verified for the first time in a quantum drift diffusion model.

The thermal equilibrium state of a bipolar, isothermal quantum fluid confined to a bounded domain \(\Omega\subset \mathbb{R}^d\), \(d=1,2\) or \(d=3\), is the minimizer of the total energy \({\mathcal E}_{\epsilon\lambda}\); \({\mathcal E}_{\epsilon\lambda}\) involves the squares of the scaled Planck's constant \(\epsilon\) and the scaled minimal Debye length \(\lambda\). In applications one frequently has \(\lambda^2\ll 1\). In these cases the zero-space-charge approximation is rigorously justified. As \(\lambda \to 0 \), the particle densities converge to the minimizer of a limiting quantum zero-space-charge functional exactly in those cases where the doping profile satisfies some compatibility conditions. Under natural additional assumptions on the internal energies one gets a differential-algebraic system for the limiting \((\lambda=0)\) particle densities, namely the quantum zero-space-charge model. The analysis of the subsequent limit \(\epsilon \to 0\) exhibits the importance of quantum gaps. The semiclassical zero-space-charge model is, for small \(\epsilon\), a reasonable approximation of the quantum model if and only if the quantum gap vanishes. The simultaneous limit \(\epsilon =\lambda \to 0\) is analyzed.

Tangent measure distributions are a natural tool to describe the local geometry of arbitrary measures of any dimension. We show that for every measure on a Euclidean space and every s, at almost every point, all s-dimensional tangent measure distributions define statistically self-similar random measures. Consequently, the local geometry of general measures is not different from the local geometry of self-similar sets. We illustrate the strength of this result by showing how it can be used to improve recently proved relations between ordinary and average densities.

Spektralsequenzen
(1999)

Starting from the uniqueness question for mixtures of distributions this review centers around the question under which formally weaker assumptions one can prove the existence of SPLIFs, in other words perfect statistics and tests. We mention a couple of positive and negative results which complement the basic contribution of David Blackwell in 1980. Typically the answers depend on the choice of the set theoretic axioms and on the particular concepts of measurability.

We consider three applications of impulse control in financial mathematics, a cash management problem, optimal control of an exchange rate, and portfolio optimisation under transaction costs. We sketch the different ways of solving these problems with the help of quasi-variational inequalities. Further, some viscosity solution results are presented.

In line location problems the objective is to find a straight line which minimizes the sum of distances, or the maximum distance, respectively, to a given set of existing facilities in the plane. These problems are well solved. In this paper we deal with restricted line location problems, i.e. we are given a set in the plane through which the line is not allowed to pass. With the help of a geometric duality we solve such problems for the vertical distance and then extend these results to block norms and some of them even to arbitrary norms. For all norms we give a finite candidate set for the optimal line.

We compare different notions of differentiability of a measure along a vector field on a locally convex space. We consider in the L2-space of a differentiable measure the analogues of the classical concepts of gradient, divergence and Laplacian (which coincides with the Ornstein-Uhlenbeck operator in the Gaussian case). We use these operators for the extension of the basic results of Malliavin and Stroock on the smoothness of finite dimensional image measures under certain non-smooth mappings to the case of non-Gaussian measures. The proof of this extension is quite direct and does not use any chaos decomposition. Finally, the role of this Laplacian in the procedure of quantization of anharmonic oscillators is discussed.

In 1979, J.M. Bernardo argued heuristically that in the case of regular product experiments his information theoretic reference prior is equal to Jeffreys' prior. In this context, B.S. Clarke and A.R. Barron showed in 1994, that in the same class of experiments Jeffreys' prior is asymptotically optimal in the sense of Shannon, or, in Bayesian terms, Jeffreys' prior is asymptotically least favorable under Kullback Leibler risk. In the present paper, we prove, based on Clarke and Barron's results, that every sequence of Shannon optimal priors on a sequence of regular iid product experiments converges weakly to Jeffreys' prior. This means that for increasing sample size Kullback Leibler least favorable priors tend to Jeffreys' prior.

In this paper relationships between Pareto points and saddle points in multiple objective programming are investigated. Convex and nonconvex problems are considered and the equivalence between Pareto points and saddle points is proved in both cases. The results are based on scalarizations of multiple objective programs and related linear and augmented Lagrangian functions. Partitions of the index sets of objectives and constraints are introduced to reduce the size of the problems. The relevance of the results in the context of decision making is also discussed.

In this paper a new trend is introduced into the field of multicriteria location problems. We combine the robustness approach, based on the minmax regret criterion, with Pareto optimality. We consider the multicriteria Weber location problem, which consists of simultaneously minimizing a number of weighted sum-distance functions, and take the set of Pareto-optimal locations as its solution concept. For this problem, we characterize the Pareto-optimal solutions within the set of robust locations for the original weighted sum-distance functions. These locations have both the properties of stability and non-domination which are required in robust and multicriteria programming.

Ramsey Numbers of K_m versus (n,k)-graphs and the Local Density of Graphs not Containing a K_m
(1999)

In this paper generalized Ramsey numbers of complete graphs \(K_m\) versus the set \(\langle n,k \rangle\) of (n,k)-graphs are investigated. The value of \(r(K_m,\langle n,k \rangle)\) is given in general for values of k small compared to n, using a correlation with Turán numbers. These generalized Ramsey numbers can be used to determine the local densities of graphs not containing a subgraph \(K_m\).

Many interesting problems arise from the study of the behavior of fluids. From a theoretical point of view Fluid Dynamics works with a well defined set of equations for which it is expected to get a clear description of the solutions. Unfortunately, in general this is not easy, even if the many experiments performed in the field seem to indicate which path to follow. Some of the basic questions are still either partially or widely open. For example we would like to have a better understanding of: 1. Questions for both bounded and unbounded domains on regularity, uniqueness, and long time behavior of the solutions. 2. How well solutions to the fluid equations fit the real flow. Depending on the type of data, most of the answers to these questions are known when we work in two dimensions. For solutions in three dimensions, in general, we have only partial answers.

The Weber problem for a given finite set of existing facilities \({\cal E}x = \{Ex_1,Ex_2, \dots ,Ex_M\} \subset \mathbb{R}^2\) with positive weights \(w_m\) \((m = 1, \dots ,M)\) is to find a new facility \(X^*\) such that \(\sum_{m=1}^{M} w_{m}d(X,Ex_m)\) is minimized for some distance function \(d\). A variation of this problem is obtained if the existing facilities are situated on two sides of a linear barrier. Such barriers, like rivers, highways, borders or mountain ranges, are frequently encountered in practice. Structural results as well as algorithms for this non-convex optimization problem, depending on the distance function and on the number and location of passages through the barrier, are presented. A reduction to convex optimization problems is used to derive efficient algorithms.

The Weber problem for a given finite set of existing facilities \({\cal E}x = \{Ex_1,Ex_2, \dots ,Ex_M\} \subset \mathbb{R}^2\) with positive weights \(w_m\) \((m = 1, \dots ,M)\) is to find a new facility \(X^* \in \mathbb{R}^2\) such that \(\sum_{m=1}^{M} w_{m}d(X,Ex_m)\) is minimized for some distance function \(d\). In this paper we consider distances defined by polyhedral gauges. A variation of this problem is obtained if barriers are introduced which are convex polygonal subsets of the plane where neither the location of new facilities nor travelling is allowed. Such barriers, like lakes, military regions, national parks or mountains, are frequently encountered in practice. From a mathematical point of view barrier problems are difficult, since the presence of barriers destroys the convexity of the objective function. Nevertheless, this paper establishes a discretization result: one of the grid points in the grid defined by the existing facilities and the fundamental directions of the gauge distances can be proved to be an optimal location. Thus the barrier problem can be solved with a polynomial algorithm.
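
The discretization result suggests a simple enumeration scheme. The sketch below illustrates the candidate-grid idea for the rectilinear norm, whose fundamental directions are the coordinate axes, on invented data; for brevity the barrier distance is replaced by the plain norm distance, so this is only a schematic illustration, not the paper's algorithm:

```python
import numpy as np
from itertools import product

# Hypothetical existing facilities (rows) and positive weights.
facilities = np.array([[1.0, 4.0], [3.0, 1.0], [6.0, 5.0], [8.0, 2.0]])
weights = np.array([2.0, 1.0, 3.0, 1.0])

def l1(a, b):
    # Rectilinear (Manhattan) distance; its fundamental directions are the axes,
    # so the construction grid consists of all axis-parallel lines through facilities.
    return np.abs(a - b).sum()

# Candidate grid points: every combination of a facility x-coordinate
# with a facility y-coordinate.
candidates = [np.array(p) for p in product(facilities[:, 0], facilities[:, 1])]

def objective(x):
    # Weighted sum of distances; in the barrier problem d would be the
    # barrier distance (shortest path avoiding the barrier) instead of l1.
    return sum(w * l1(x, f) for w, f in zip(weights, facilities))

best = min(candidates, key=objective)
print(best, objective(best))
```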

Neural networks are now a well-established tool for solving classification and forecasting problems in financial applications (compare, e.g., Bol et al., 1996, Evans, 1997, Rehkugler and Zimmermann, 1994, Refenes 1995, and Refenes et al. 1996a), though many practitioners are still suspicious of too evident success stories. One reason may be that the construction of an appropriate network which provides a reasonable solution to a complex data-analytic problem is rarely made explicit in the literature. In this paper, we try to contribute to filling this gap by discussing in detail the problem of dynamically allocating capital to various components of a currency portfolio in such a manner that the average gain will be larger than for certain benchmark portfolios. We base our solution on feedforward neural networks which are constructed employing various statistical model selection procedures described in, e.g., (Anders, 1997, or Refenes et al., 1996b). Neural networks which are used as the basis of trading strategies in finance should be assessed differently than in technical applications. The task is not to construct a network which provides good forecasts with respect to the mean-square error of some quantities of interest or a good approximation of some given target values, but to achieve a good performance in economic terms. For portfolio allocation, the main goal is to achieve on average a large return combined with a small risk. Therefore, we do not consider forecasts of the foreign exchange (FX) rate time series using neural networks, but we try to obtain the allocation directly as the output of a network. Furthermore, we do not minimize some estimation or prediction error, but we try to maximize an economically meaningful performance measure, the risk-adjusted return, directly (compare also Heitkamp, 1996). In the subsequent chapter, we describe the details of the portfolio allocation problem. The following two chapters provide some technical information on how the networks were fitted to the available data and how the network inputs and outputs were selected. In chapter 5, finally, we discuss the promising results.

A class of regularization methods using unbounded regularizing operators is considered for obtaining stable approximate solutions of ill-posed operator equations. With an a posteriori as well as an a priori parameter choice strategy, it is shown that the method yields optimal order. Error estimates have also been obtained under stronger assumptions on the generalized solution. The results of the paper unify and simplify many of the results available in the literature. For example, the optimal results of the paper include, as particular cases for Tikhonov regularization, the main result of Mair (1994) with an a priori parameter choice and a result of Nair (1999) with an a posteriori parameter choice. Thus the observations of Mair (1994) on Tikhonov regularization of ill-posed problems involving finitely and infinitely smoothing operators are applicable to various other regularization procedures as well. Subsequent results on error estimates include, as special cases, an optimal result of Vainikko (1987) and also recent results of Tautenhahn (1996) in the setting of Hilbert scales.

In a discrete-time financial market setting, the paper relates various concepts introduced for dynamic portfolios (both in discrete and in continuous time). These concepts are: value preserving portfolios, numeraire portfolios, interest oriented portfolios, and growth optimal portfolios. It will turn out that these concepts are all associated with a unique martingale measure which agrees with the minimal martingale measure only for complete markets.

Let (Epsilon_k) be a sequence of experiments with the same finite parameter set. Suppose only that identification of the parameter is possible asymptotically. For large classes of information functionals we show that their exponential rates of convergence towards complete information coincide. As a special case we obtain the rate of the Shannon capacity of product experiments.

In this paper we prove a reduction result for the number of criteria in convex multiobjective optimization. This result states that to decide whether a point x in the decision space is Pareto optimal it suffices to consider at most n? criteria at a time, where n is the dimension of the decision space. The main theorem is based on a geometric characterization of Pareto, strict Pareto and weak Pareto solutions.

We consider the "representation type" of the classification problem of vector bundles on a projective curve. We prove that this problem is always either finite, or tame, or wild and we completely describe those curves which are of finite, resp. tame, vector bundle type. We also give a complete list of indecomposable vector bundles for the finite and tame cases.

Nonlinear dissipativity, asymptotical stability, and contractivity of (ordinary) stochastic differential equations (SDEs) with some dissipative structure and their discretizations are studied in terms of their moments in the spirit of Pliss (1977). For this purpose, we introduce the notions and discuss related concepts of dissipativity, growth-bounded and monotone coefficient systems, asymptotical stability and contractivity in the wide and narrow sense, and nonlinear A-stability, AN-stability, B-stability and BN-stability for stochastic dynamical systems, more or less as stochastic counterparts of deterministic concepts. The test class of dissipative SDEs, interpreted in a broad sense as the natural analogue of dissipative deterministic differential systems, is suggested for stochastic-numerical methods. Then, in particular, a kind of mean square calculus is developed, although most of the ideas and analysis can be carried over to the general "stochastic Lp-case" \((p \ge 1)\). By this natural restriction, the new stochastic concepts are theoretically meaningful, as in deterministic analysis. Since the choice of step sizes then plays no essential role in the related proofs, we even obtain nonlinear A-stability, AN-stability, B-stability and BN-stability in the mean square sense for this implicit method with respect to appropriate test classes of moment-dissipative SDEs.

Let P be a probability measure on the real line R such that each of the product measures \(P^{\otimes n}\) assigns the value 1/2 to every half space in \(R^{n}\) having the origin as a boundary point. Then P is symmetric. Example: A strictly stable law on R is symmetric iff it has median zero. The treated symmetry problem is related to the problem of characterizing the distribution of X_1 by the distribution of (X_2 + X_1, ... ,X_n + X_1), with X_1, ... ,X_n being independent and identically distributed random variables.

Let r_C and r_D be two convex distance functions in the plane with convex unit balls C and D. Given two points, p and q, we investigate the bisector, B(p,q), of p and q, where distance from p is measured by r_C and distance from q by r_D. We provide the following results. B(p,q) may consist of many connected components whose precise number can be derived from the intersection of the unit balls, C and D. The bisector can contain bounded or unbounded 2-dimensional areas. Even more surprising, pieces of the bisector may appear inside the region of all points closer to p than to q. If C and D are convex polygons with m and n vertices, respectively, the bisector B(p,q) can consist of at most min(m,n) connected components which contain at most 2(m+n) vertices altogether. The former bound is tight, the latter is tight up to an additive constant. We also present an optimal O(m+n) time algorithm for computing the bisector.

If \(A\) generates a bounded cosine function on a Banach space \(X\) then the negative square root \(B\) of \(A\) generates a holomorphic semigroup, and this semigroup is the conjugate potential transform of the cosine function. This connection is studied in detail, and it is used for a characterization of cosine function generators in terms of growth conditions on the semigroup generated by \(B\). This characterization relies on new results on the inversion of the vector-valued conjugate potential transform.

The interaction of particular slender bodies with low Reynolds-number flows is, in the limit 'slenderness to 0', described by a linear Fredholm integral equation of the second kind. The integral operator of this equation has a denumerable set of polynomial eigenfunctions whose corresponding eigenvalues are non-positive and of logarithmic growth. A theorem similar to a classical result of Plemelj-Privalov for integral operators with Cauchy kernels is proven. In contrast to Cauchy kernel operators, the integral operator maps no Hölder space into itself. A spectral analysis of the integral operator restricted to an appropriate class of analytic functions is performed. The spectral properties of this restricted integral operator suggest a collocation-like method to solve the integral equation numerically. For this numerical scheme, convergence is proven and several computations are presented.

The asymptotic analysis of IBVPs for the singularly perturbed parabolic PDE ... in the limit \(\epsilon \to 0\) motivates investigations of certain recursively defined approximative series ("ping-pong expansions"). The recursion formulae rely on operators assigning to a boundary condition at the left or the right boundary a solution of the parabolic PDE. Sufficient conditions for uniform convergence of ping-pong expansions are derived and a detailed analysis for the model problem ... is given.

We consider a Darcy flow model with the saturation-pressure relation extended by a dynamic term, namely the time derivative of the saturation. This model was proposed in works of J. Hulshof and J. R. King (1998), S. M. Hassanizadeh and W. G. Gray (1993), and F. Stauffer (1978). We restrict ourselves to one spatial dimension and strictly positive initial saturation. For this case we transform the initial-boundary value problem into a combination of an elliptic boundary value problem and an initial value problem for an abstract ordinary differential equation. This splitting is rather helpful both for theoretical aspects and for numerical methods.

In this paper we derive fluid dynamic equations by performing asymptotic analysis for the generalized Boltzmann equation for polyatomic gases. In particular, we consider the steady state, one-dimensional Boltzmann equation with one additional internal energy and different relaxation times. Moreover, we present a new approach to define coupling procedures for the Boltzmann equation and Navier-Stokes equations based on the 14-moments expansion of Levermore. These coupled models are validated by numerical simulations.

We consider nonparametric estimation of the coefficients a_i(.), i=1,...,p, of a time-varying autoregressive process. Choosing an orthonormal wavelet basis representation of the functions a_i(.), the empirical wavelet coefficients are derived from the time series data as the solution of a least squares minimization problem. In order to allow the a_i(.) to be functions of inhomogeneous regularity, we apply nonlinear thresholding to the empirical coefficients and obtain locally smoothed estimates of the a_i(.). We show that the resulting estimators attain the usual minimax L_2-rates up to a logarithmic factor, simultaneously in a large scale of Besov classes. The finite-sample behaviour of our procedure is demonstrated by application to two typical simulated examples.
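
To illustrate the generic idea of nonlinear wavelet thresholding (not the authors' estimator: the data, wavelet, and universal threshold below are assumptions, and the "empirical" curve is simulated rather than derived from a least squares fit), one could proceed as follows with the PyWavelets package:

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
n = 512
t = np.linspace(0, 1, n)

# Hypothetical "empirical" curve: a time-varying AR coefficient a_1(t) of
# inhomogeneous regularity, observed with noise.
a1_true = 0.8 * np.sin(2 * np.pi * t) * (t > 0.3)
noisy = a1_true + 0.2 * rng.standard_normal(n)

# Orthonormal wavelet decomposition, soft thresholding, reconstruction.
coeffs = pywt.wavedec(noisy, 'db4', level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise level from finest scale
thr = sigma * np.sqrt(2 * np.log(n))                  # universal threshold
denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
a1_hat = pywt.waverec(denoised, 'db4')[:n]
print(np.mean((a1_hat - a1_true) ** 2))
```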

The purpose of GPS-satellite-to-satellite tracking (GPS-SST) is to determine the gravitational potential at the earth's surface from measured ranges (geometrical distances) between a low-flying satellite and the high-flying satellites of the Global Positioning System (GPS). In this paper GPS-satellite-to-satellite tracking is reformulated as the problem of determining the gravitational potential of the earth from given gradients at satellite altitude. Uniqueness and stability of the solution are investigated. The essential tool is to split the gradient field into a normal part (i.e. the first order radial derivative) and a tangential part (i.e. the surface gradient). Uniqueness is proved for polar, circular orbits corresponding to both types of data (first radial derivative and/or surface gradient). In both cases gravity recovery based on satellite-to-satellite tracking turns out to be an exponentially ill-posed problem. As an appropriate solution method, regularization in terms of spherical wavelets is proposed, based on the knowledge of the singular system. Finally, the method is generalized to a non-spherical earth and a non-spherical orbital surface based on combined terrestrial and satellite data material.

An approach to generating all efficient solutions of multiple objective programs with piecewise linear objective functions and linear constraints is presented. The approach is based on the decomposition of the feasible set into subsets, referred to as cells, so that the original problem reduces to a series of linear multiple objective programs over the cells. The concepts of cell-efficiency and complex-efficiency are introduced and their relationship with efficiency is examined. A generic algorithm for finding efficient solutions is proposed. Applications in location theory as well as in worst case analysis are highlighted.

In this paper we deal with the determination of the whole set of Pareto-solutions of location problems with respect to Q general criteria. These criteria include median, center or cent-dian objective functions as particular instances. The paper characterizes the set of Pareto-solutions of these multicriteria problems. An efficient algorithm for the planar case is developed and its complexity is established. Extensions to higher dimensions as well as to the non-convex case are also considered. The proposed approach is more general than the previously published approaches to multi-criteria location problems and includes almost all of them as particular instances.

Multicriteria Optimization
(1999)

Life is about decisions. Decisions, no matter if taken by a group or an individual, involve several conflicting objectives. The observation that real world problems have to be solved optimally according to criteria which prohibit an "ideal" solution - optimal for each decision maker under each of the criteria considered - has led to the development of multicriteria optimization. From its first roots, which were laid by Pareto at the end of the 19th century, the discipline has prospered and grown, especially during the last three decades. Today, many decision support systems incorporate methods to deal with conflicting objectives. The foundation for such systems is a mathematical theory of optimization under multiple objectives. With this manuscript, which is based on lectures I taught in the winter semester 1998/99 at the University of Kaiserslautern, I intend to give an introduction to and overview of this fascinating field of mathematics. I tried to present theoretical questions such as the existence of solutions as well as methodological issues, and I hope the reader finds the balance not too heavily on one side. The interested reader should be able to find classical results as well as up-to-date research. The text is accompanied by exercises, which hopefully help to deepen students' understanding of the topic.

In this paper network location problems with several objectives are discussed, where every single objective is a classical median objective function. We will look at the problem of finding Pareto optimal locations and lexicographically optimal locations. It is shown that for Pareto optimal locations in undirected networks no node dominance result can be shown. Structural results as well as efficient algorithms for these multi-criteria problems are developed. In the special case of a tree network a generalization of Goldman's dominance algorithm for finding Pareto locations is presented.

Moment inequalities for the Boltzmann equation and applications to spatially homogeneous problems
(1999)

Some inequalities for the Boltzmann collision integral are proved. These inequalities can be considered as a generalization of the well-known Povzner inequality. The inequalities are used to obtain estimates of moments of solution to the spatially homogeneous Boltzmann equation for a wide class of intermolecular forces. We obtained simple necessary and sufficient conditions (on the potential) for the uniform boundedness of all moments. For potentials with compact support the following statement is proved. .....

Nonlinear stochastic dynamical systems as ordinary stochastic differential equations and stochastic difference methods are in the center of this presentation in view of the asymptotical behaviour of their moments. We study the exponential p-th mean growth behaviour of their solutions as integration time tends to infinity. For this purpose, the concepts of nonlinear contractivity and stability exponents for moments are introduced as generalizations of well-known moment Lyapunov exponents of linear systems. Under appropriate monotonicity assumptions we gain uniform estimates of these exponents from above and below. Eventually, these concepts are generalized to describe the exponential growth behaviour along certain Lyapunov-type functionals.

Presenting the decision maker's (DM) preferences in multicriteria decision problems as a partially ordered set is an effective method to capture the DM's purpose and avoid misleading results. Since our paper is focused on minimal path problems, we regard the ordered set of edges \((E,\le)\). Minimal paths are defined with respect to power-ordered sets, which provide an essential tool to solve such problems. An algorithm to detect minimal paths for a multicriteria minimal path problem is presented.

The notion of the balance number introduced in [3,page 139] through a certain set contraction procedure for nonscalarized multiobjective global optimization is represented via a min-max operation on the data of the problem. This representation yields a different computational procedure for the calculation of the balance number and allows us to generalize the approach for problems with countably many performance criteria.

In this survey we deal with the location of hyperplanes in n-dimensional normed spaces, i.e., we present all known results and a unifying approach to the so-called median hyperplane problem in Minkowski spaces. We describe how to find a hyperplane H minimizing the weighted sum f(H) of distances to a given, finite set of demand points. In robust statistics and operations research such an optimal hyperplane is called a median hyperplane. After summarizing the known results for the Euclidean and rectangular situation, we show that for all distance measures d derived from norms one of the hyperplanes minimizing f(H) is the affine hull of n of the demand points and, moreover, that each median hyperplane is a halving one (in a sense defined below) with respect to the given point set. Also an independence-of-norm result for finding optimal hyperplanes with fixed slope is given. Furthermore we discuss how these geometric criteria can be used for algorithmic approaches to median hyperplanes, with an extra discussion of the case of polyhedral norms. Finally a characterization of all smooth norms by a sharpened incidence criterion for median hyperplanes is mentioned.

In this paper we deal with the location of hyperplanes in n-dimensional normed spaces. If d is a distance measure, our objective is to find a hyperplane H which minimizes \(f(H) = \sum_{m=1}^{M} w_{m}d(x_m,H)\), where \(w_m \ge 0\) are non-negative weights, \(x_m \in \mathbb{R}^n\), \(m=1, \dots ,M\), are demand points and \(d(x_m,H)=\min_{z \in H} d(x_m,z)\) is the distance from \(x_m\) to the hyperplane H. In robust statistics and operations research such an optimal hyperplane is called a median hyperplane. We show that for all distance measures d derived from norms, one of the hyperplanes minimizing f(H) is the affine hull of n of the demand points and, moreover, that each median hyperplane is (in a certain sense) a halving one with respect to the given point set.
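
A minimal sketch of how the affine-hull property can be exploited computationally, here for the plane (n = 2) with the Euclidean distance and invented demand points: enumerate the hyperplanes spanned by pairs of demand points and keep the one with the smallest weighted sum of distances.

```python
import numpy as np
from itertools import combinations

# Hypothetical demand points in the plane (n = 2) with weights.
points = np.array([[0.0, 0.0], [4.0, 1.0], [2.0, 5.0], [6.0, 4.0], [1.0, 3.0]])
weights = np.array([1.0, 2.0, 1.0, 1.0, 3.0])

def line_through(p, q):
    # Hyperplane in R^2 written as {x : u.x = c} with unit normal u.
    d = q - p
    u = np.array([-d[1], d[0]])
    u = u / np.linalg.norm(u)
    return u, np.dot(u, p)

def total_distance(u, c):
    # f(H) = sum_m w_m d(x_m, H) for the Euclidean distance.
    return np.sum(weights * np.abs(points @ u - c))

# By the result above, it suffices to enumerate hyperplanes spanned by
# n (= 2) of the demand points.
best = min((line_through(points[i], points[j])
            for i, j in combinations(range(len(points)), 2)),
           key=lambda h: total_distance(*h))
print(best, total_distance(*best))
```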

The existence of maximum entropy solutions for a wide class of reduced moment problems on arbitrary open subsets of Rd is considered. In particular, new results for the case of unbounded domains are obtained. A precise condition is presented under which solvability of the moment problem implies existence of a maximum entropy solution.

The thirty-nine-year-old Immanuel Kant begins his attempt to introduce the concept of negative quantities into philosophy (Weltweisheit) with a fundamental discussion of the possible use that philosophy can make of mathematics. He puts forward the thesis that mathematics can, in principle, enter philosophy in only two ways. Kant sees a first possibility in the imitation of mathematical methods in the presentation of philosophy; the other possibility, for him, lies in the concrete application of mathematical theories in natural science (Naturlehre). Kant judges the first possibility decidedly negatively; his criticism of the programme, first formulated quite generally by Comenius and then favoured in particular for philosophy by Christian Wolff, of presenting philosophy on the mathematical model of an exposition more geometrico demonstrata is well known. Kant does view the use of mathematics in natural science quite positively; in the Metaphysische Anfangsgründe der Naturwissenschaft he will, a good two decades later, even add the famous claim that every particular doctrine of nature contains only as much genuine science as there is mathematics in it. Nevertheless, Kant points with all clarity to the narrow limits of the sphere of action of such applications of mathematics, since in his opinion only the insights belonging to natural science would profit from such mathematical treatment.

In this paper we deal with locating a line in the plane. If d is a distance measure, our objective is to find a straight line l which minimizes f(l) or g(l) (see the paper for the definition of these functions). We show that for all distance measures d derived from norms, one of the lines minimizing f(l) contains at least two of the existing facilities. For the center objective we always get an optimal line which is at maximum distance from at least three of the existing facilities. If all weights are equal, there is an optimal line which is parallel to one facet of the convex hull of the existing facilities.

Locally Maximal Clones II
(1999)

In this paper we deal with an NP-hard combinatorial optimization problem, the k-cardinality tree problem in node-weighted graphs. This problem has several applications, which justify the need for efficient methods to obtain good solutions. We review the existing literature on the problem. Then we prove that under the condition that the graph contains exactly one trough, the problem can be solved in polynomial time. For the general NP-hard problem we implemented several local search methods to obtain heuristic solutions, which are qualitatively better than solutions found by constructive heuristics and which require significantly less time than needed to obtain optimal solutions. We used the well-known concepts of genetic algorithms and tabu search with useful extensions. We show that all the methods find optimal solutions for the class of graphs containing exactly one trough. The general performance of our methods as compared to other heuristics is illustrated by numerical results.
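
As a small illustration of the constructive-heuristic baseline mentioned above (a toy sketch on invented data, not the paper's genetic algorithm or tabu search methods), a greedy procedure grows a connected set of k nodes from every possible start node and keeps the cheapest:

```python
# Minimal greedy heuristic sketch for the node-weighted k-cardinality tree
# problem: hypothetical node weights and adjacency lists.
weights = {'a': 4, 'b': 1, 'c': 2, 'd': 7, 'e': 1, 'f': 3}
adjacency = {
    'a': {'b', 'c'}, 'b': {'a', 'c', 'd'}, 'c': {'a', 'b', 'e'},
    'd': {'b', 'f'}, 'e': {'c', 'f'}, 'f': {'d', 'e'},
}

def greedy_k_tree(k):
    best_nodes, best_cost = None, float('inf')
    for start in weights:                       # try every start node
        tree = {start}
        while len(tree) < k:
            # candidate nodes adjacent to the current tree (keeps it connected)
            frontier = {v for u in tree for v in adjacency[u]} - tree
            if not frontier:
                break
            tree.add(min(frontier, key=weights.get))
        cost = sum(weights[v] for v in tree)
        if len(tree) == k and cost < best_cost:
            best_nodes, best_cost = tree, cost
    return best_nodes, best_cost

print(greedy_k_tree(3))
```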

Two possible substitutes for the Fourier transform in geopotential determination are the windowed Fourier transform (WFT) and the wavelet transform (WT). In this paper we introduce harmonic WFT and WT and show how they can be used to give information about the geopotential simultaneously in the space domain and the frequency (angular momentum) domain. The counterparts of the inverse Fourier transform are derived, which allow us to reconstruct the geopotential from its WFT and WT, respectively. Moreover, we derive a necessary and sufficient condition that an otherwise arbitrary function of space and frequency has to satisfy to be the WFT or WT of a potential. Finally, least-squares approximation and minimum norm (i.e. least-energy) representation, which will play a particular role in geodetic applications of both WFT and WT, are discussed in more detail.

We study a model for learning periodic signals in recurrent neural networks proposed by Doya and Yoshizawa [7] that can be considered as a model for temporal pattern memory in animal motor systems. A network receives an external oscillatory input and adjusts its weights so that this signal can be reproduced approximately as the network output after some time. We use tools from adaptive control theory to derive an algorithm for weight matrices with special structure. If the input is generated by a network of the same structure, the algorithm converges globally under a persistency of excitation condition and does not exhibit the deficiencies of the back-propagation based approach of Doya and Yoshizawa. This simple algorithm can also be used for open loop identification under quite restrictive assumptions. The persistency of excitation condition cannot be proven for the matrices with special structure in general, but only for a 3-dimensional system. For higher dimensional systems we give connections to the theory of linear time-varying systems, where this condition is generically true (under assumptions which are also needed in the time-invariant case). However, we cannot show that the linearized system related to the nonlinear neural network fulfills these generic assumptions.

We consider regularizing iterative procedures for ill-posed problems with random and nonrandom additive errors. The rate of mean-square convergence for iterative procedures with random errors is studied. A comparison theorem is established for the convergence of procedures with and without additive errors.

After the notion of Gröbner bases and an algorithm for constructing them were introduced by Buchberger [Bu1, Bu2], algebraic geometers have used Gröbner bases as the main computational tool for many years, either to prove a theorem, to disprove a conjecture, or just to experiment with examples in order to obtain a feeling for the structure of an algebraic variety. Nontrivial problems coming from logic, mathematics or applications usually lead to nontrivial Gröbner basis computations, which is the reason why several improvements have been provided by many people and have been implemented in general purpose systems like Axiom, Maple, Mathematica, Reduce, etc., and in systems specialized for use in algebraic geometry and commutative algebra like CoCoA, Macaulay and Singular. The present paper starts with an introduction to some concepts of algebraic geometry which should be understandable by people with (almost) no knowledge of this field. In the second chapter we introduce standard bases (the generalization of Gröbner bases to non-well-orderings), which are needed for applications to local algebraic geometry (singularity theory), and a method for computing syzygies and free resolutions. The last chapter describes a new algorithm for computing the normalization of a reduced affine ring and gives an elementary introduction to singularity theory. Then we describe algorithms, using standard bases, to compute infinitesimal deformations and obstructions, which are basic for the deformation theory of isolated singularities. It is impossible to list all papers where Gröbner bases have been used in local and global algebraic geometry, and even more impossible to give an overview of these contributions. We have, therefore, included only references to papers mentioned in this tutorial paper. The interested reader will find many more in the other contributions of this volume and in the literature cited there.
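
As a tiny, hedged illustration of the kind of computation involved (using SymPy rather than Singular or the other systems named above; the ideal is an invented example), a Gröbner basis and an ideal-membership test can be obtained as follows:

```python
from sympy import symbols, groebner

x, y, z = symbols('x y z')

# A small ideal; groebner() computes a Groebner basis with respect to the
# chosen monomial ordering (here lexicographic), the kind of computation
# that systems like Singular, CoCoA or Macaulay perform at large scale.
G = groebner([x**2 + y**2 + z**2 - 1, x - y, y - z], x, y, z, order='lex')
print(G.exprs)

# Ideal membership: modulo x - y and y - z we have x = y = z, so
# x**2 - y*z reduces to zero and lies in the ideal.
print(G.contains(x**2 - y*z))
```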

Let \(\mathbb{P}^2_r\) be the projective plane blown up at \(r\) generic points. Denote by \(E_0, E_1, \dots, E_r\) the strict transform of a generic straight line on \(\mathbb{P}^2\) and the exceptional divisors of the blown-up points on \(\mathbb{P}^2_r\), respectively. We consider the variety \(V_{irr}\) of all irreducible curves \(C\) with \(k\) nodes as the only singularities and give asymptotically nearly optimal sufficient conditions for its smoothness, irreducibility and non-emptiness. Moreover, we extend our conditions for smoothness and irreducibility to families of reducible curves. For \(r \le 9\) we give the complete answer concerning the existence of nodal curves in \(V_{irr}\).

Singular algebraic curves, their existence, deformation, and families (from the local and global point of view) have attracted the continuous attention of algebraic geometers since the last century. The aim of our paper is to give an account of results, new trends and bibliography related to the geometry of equisingular families of algebraic curves on smooth algebraic surfaces over an algebraically closed field of characteristic zero. This theory is founded on basic works of Plücker, Severi, Segre and Zariski, and has tight links to and finds important applications in singularity theory, the topology of complex algebraic curves and surfaces, and real algebraic geometry.

Location problems with Q (in general conflicting) criteria are considered. After reviewing previous results of the authors dealing with lexicographic and Pareto location, the main focus of the paper is on max-ordering locations. In these location problems the worst of the single objectives is minimized. After discussing some general results (including reductions to single criterion problems and the relation to lexicographic and Pareto locations), three solution techniques are introduced, each exemplified using one class of location problems: the direct approach, the decision space approach and the objective space approach. In the resulting solution algorithms the emphasis is on the representation of the underlying geometric idea without fully exploring the computational complexity issue. A further specialization of max-ordering locations is obtained by introducing lexicographic max-ordering locations, which can be found efficiently. The paper is concluded by some ideas about future research topics related to max-ordering location problems.

In the following, we discuss a procedure for interpolating a spatial-temporal stochastic process. We stick to a particular, moderately general model, but the approach can easily be transferred to other similar problems. The original data, which motivated this work, are measurements of gas concentrations (SO2, NO, O2) and several meteorological parameters (temperature, sun radiation, precipitation, wind speed etc.). These data have been, and are still being, recorded twice every hour at several irregularly located places in the forests of the state Rheinland-Pfalz as part of a program monitoring the air pollution in the forests.

In this paper we deal with the determination of the whole set of Pareto-solutions of location problems with respect to Q general criteria. These criteria include as particular instances median, center or cent-dian objective functions. The paper characterizes the set of Pareto-solutions of all these multicriteria problems. An efficient algorithm for the planar case is developed and its complexity is established. The proposed approach is more general than the previously published approaches to multicriteria location problems and includes almost all of them as particular instances.

In planar location problems with barriers one considers regions which are forbidden for the siting of new facilities as well as for trespassing. These problems are important since they reflect various real-world situations. The resulting mathematical models have a non-convex objective function and are therefore difficult to tackle using standard methods of location theory, even in the case of simple barrier shapes and distance functions. For the case of center objectives with barrier distances obtained from the rectilinear or Manhattan metric, it is shown that the problem can be solved by identifying a finite dominating set (FDS) whose cardinality is bounded by a polynomial in the size of the problem input. The resulting genuinely polynomial algorithm can be combined with bound computations which are derived from solving closely connected restricted location and network location problems. It is shown that the results can be extended to barrier center problems with respect to arbitrary block norms having four fundamental directions.

In this paper we discuss a special class of regularization methods for solving the satellite gravity gradiometry problem in a spherical framework based on band-limited spherical regularization wavelets. Considering such wavelets as a result of a combination of some regularization methods with a Galerkin discretization based on the spherical harmonic system, we obtain error estimates for the regularized solutions as well as estimates for the regularization parameters and the parameters of band-limitation.

Facility location problems in the plane play an important role in mathematical programming. When looking for new locations in modelling real-world problems, we are often confronted with forbidden regions that are not feasible for the placement of new locations. Furthermore, these forbidden regions may have complicated shapes. It may be more useful or even necessary to use approximations of such forbidden regions when trying to solve location problems. In this paper we develop error bounds for the approximate solution of restricted planar location problems using the so-called sandwich algorithm. The number of approximation steps required to achieve a specified error bound is analyzed. As examples of these approximation schemes, we discuss round norms and polyhedral norms. Computational tests are also included.

Chains of Recurrences (CRs) are a tool for expediting the evaluation of elementary expressions over regular grids. CR based evaluation of elementary expressions consists of three major stages: CR construction, simplification, and evaluation. This paper addresses CR simplifications. The goal of CR simplification is to manipulate a CR such that the resulting expression is more efficient to evaluate. We develop CR simplification strategies which take the computational context of CR evaluations into account. Realizing that it is infeasible to always optimally simplify a CR expression, we give heuristic strategies which, in most cases, result in optimal or close-to-optimal expressions. The motivations behind our proposed strategies are discussed and the results are illustrated by various examples.
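
To make the underlying CR idea concrete (a hypothetical example of CR construction and evaluation only; the paper's contribution, the simplification stage, is not reproduced here), a quadratic can be evaluated over a regular grid using two additions per grid point:

```python
# Evaluate f(x) = 3x^2 + 2x + 1 on the regular grid x_i = x0 + i*h
# via the chain of recurrences {phi0, +, phi1, +, phi2}.
x0, h, n = 0.0, 0.25, 8
f = lambda x: 3 * x**2 + 2 * x + 1

# Initial value and the first and second forward differences of f over the grid;
# for a quadratic the second difference is constant.
phi0 = f(x0)
phi1 = f(x0 + h) - f(x0)
phi2 = f(x0 + 2 * h) - 2 * f(x0 + h) + f(x0)

values = []
for _ in range(n):
    values.append(phi0)
    phi0 += phi1          # advance the chain: two additions per point
    phi1 += phi2

print(values)
print([f(x0 + i * h) for i in range(n)])   # direct evaluation for comparison
```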

Discretizations for the Incompressible Navier-Stokes Equations based on the Lattice Boltzmann Method
(1999)

A discrete velocity model with spatial and velocity discretization based on a lattice Boltzmann method is considered in the low Mach number limit. A uniform numerical scheme for this model is investigated. In the limit, the scheme reduces to a finite difference scheme for the incompressible Navier-Stokes equation which is a projection method with a second order spatial discretization on a regular grid. The discretization is analyzed and the method is compared to Chorin's original spatial discretization. Numerical results supporting the analytical statements are presented.

Discrete Decision Problems, Multiple Criteria Optimization Classes and Lexicographic Max-Ordering
(1999)

The topic of this paper is discrete decision problems with multiple criteria. We first define discrete multiple criteria decision problems and introduce a classification scheme for multiple criteria optimization problems. To do so we use multiple criteria optimization classes. The main result is a characterization of the class of lexicographic max-ordering problems by two very useful properties, reduction and regularity. Subsequently we discuss the assumptions under which the application of this specific MCO class is justified. Finally we provide (simple) solution methods to find optimal decisions in the case of discrete multiple criteria optimization problems.
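
A minimal sketch of the lexicographic max-ordering idea on an invented discrete minimization problem: each alternative's criteria vector is sorted in non-increasing order and the sorted vectors are compared lexicographically, so the worst outcome is decisive first, then the second worst, and so on.

```python
# Toy discrete decision problem: each alternative has a vector of
# criteria values to be minimized (assumed data).
alternatives = {
    'a': (4, 1, 3),
    'b': (3, 3, 2),
    'c': (4, 0, 2),
    'd': (2, 2, 4),
}

def lex_max_ordering_key(criteria):
    # Sort the outcome vector in non-increasing order; alternatives are then
    # compared lexicographically on these sorted vectors.
    return tuple(sorted(criteria, reverse=True))

ranking = sorted(alternatives, key=lambda k: lex_max_ordering_key(alternatives[k]))
print(ranking)        # best alternative first ('b' for this data)
```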

In a contribution on Plato's philosophy of descent, C. F. von Weizsäcker writes that he is "convinced that Greek philosophy, this work of art unique among all world cultures, would have been unthinkable without the mathematical paradigm". And in his famous Kant lectures of the winter semester 1935/36, M. Heidegger declared that it is "no accident that the Critique of Pure Reason ... is constantly accompanied by a reflection on the essence of the mathematical and of mathematics". What is said here about Plato and Kant applies to almost all Western philosophers of rank: explicitly or implicitly, mathematics plays a decisive role in each new philosophical conception. What reasons secure for mathematics such a high standing in the thought of the leading philosophers? With what intentions and aims have philosophers from Plato to Heidegger, from Aristotle to Bloch, again and again built statements about mathematics into their philosophy? Why, over the past two and a half millennia, has no other science been as "question-worthy" for philosophy as mathematics? Philosophy has, evidently, sought the dialogue with mathematics again and again. And how does it stand with mathematics' interest in a dialogue with philosophy? In an extremely substantial essay, Mathematik und Antike, still well worth reading today, the mathematician O. Toeplitz asked in 1925 "whether, at some point in the existence of mathematics, philosophy has intervened in it decisively and shaped its actual definitive form". Such an initiative from within mathematics toward a dialogue with philosophy is no isolated case. Cantor, Hilbert, Weyl, Gödel and Robinson (to recall only a few representatives of more recent mathematics) have repeatedly sought contact with philosophy.

We show that the intersection local times \(\mu_p\) on the intersection of \(p\) independent planar Brownian paths have an average density of order three with respect to the gauge function \(r^2\pi\cdot (\log(1/r)/\pi)^p\); more precisely, almost surely, \[ \lim\limits_{\varepsilon\downarrow 0} \frac{1}{\log |\log\ \varepsilon|} \int_\varepsilon^{1/e} \frac{\mu_p(B(x,r))}{r^2\pi\cdot (\log(1/r)/\pi)^p} \frac{dr}{r\ \log (1/r)} = 2^p \mbox{ at $\mu_p$-almost every $x$.} \] We also show that the lacunarity distributions of \(\mu_p\), at \(\mu_p\)-almost every point, are given as the distribution of the product of \(p\) independent gamma(2)-distributed random variables. The main tools of the proof are a Palm distribution associated with the intersection local time and an approximation theorem of Le Gall.

Convex Operators in Vector Optimization: Directional Derivatives and the Cone of Decrease Directions
(1999)

The paper is devoted to the investigation of directional derivatives and the cone of decrease directions for convex operators on Banach spaces. We prove a condition for the existence of directional derivatives which does not assume regularity of the ordering cone K. This result is then used to prove that for continuous convex operators the cone of decrease directions can be represented in terms of the directional derivatives. Decrease directions are those for which the directional derivative lies in the negative interior of the ordering cone K. Finally, we show that the continuity of the convex operator can be replaced by its K-boundedness.

Here we consider the Kohonen algorithm with a constant learning rate as a Markov process evolving in a topological space. It is shown that the process is an irreducible and aperiodic T-chain, regardless of the dimension of both data space and network and of the special shape of the neighborhood function. Moreover, the validity of Doeblin's condition is proved. These results imply the convergence in distribution of the process to a finite invariant measure with a uniform geometric rate. In addition we show that the process is positive Harris recurrent, which enables us to use statistical devices to measure its centrality and variability as time goes to infinity.
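
For readers unfamiliar with the process being analyzed, the following is a minimal sketch of a Kohonen update with constant learning rate (a toy one-dimensional lattice on invented data, not the general setting of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy configuration: a chain of 10 units with 2-dimensional weight vectors,
# trained on uniform data on the unit square; eta is the constant learning rate.
n_units, eta, radius = 10, 0.1, 1.0
weights = rng.random((n_units, 2))

def neighborhood(winner, unit):
    # Simple Gaussian neighborhood function on the 1-d lattice.
    return np.exp(-((winner - unit) ** 2) / (2 * radius ** 2))

for _ in range(5000):
    x = rng.random(2)                                        # random input
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
    for u in range(n_units):
        weights[u] += eta * neighborhood(winner, u) * (x - weights[u])

print(weights)
```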

This review article reports current activities and recent progress on constructive approximation and numerical analysis in physical geodesy. The paper focuses on two major topics of interest, namely trial systems for purposes of global and local approximation and methods for adequate geodetic application. A fundamental tool is an uncertainty principle, which gives appropriate bounds for the quantification of space and momentum localization of trial functions. The essential outcome is a better understanding of constructive approximation in terms of radial basis functions such as splines and wavelets.

Complete presentations provide a natural solution to the word problem in monoids and groups. Here we give a simple way to construct complete presentations for the direct product of groups, when such presentations are available for the factors. Actually, the construction we are referring to is just the classical construction for direct products of groups, which has been known for a long time, but whose completeness-preserving properties had not been detected. Using this result and some known facts about Coxeter groups, we sketch an algorithm to obtain the complete presentation of any finite Coxeter group. A similar application to Abelian and Hamiltonian groups is mentioned.

Comparison of kinetic theory and discrete element schemes for modelling granular Couette flows
(1999)

Discrete element based simulations of granular flow in a 2d velocity space are compared with a particle code that solves kinetic granular flow equations in two and three dimensions. The binary collisions of the latter are governed by the same forces as for the discrete elements. Both methods are applied to a granular shear flow of equally sized discs and spheres. The two dimensional implementation of the kinetic approach shows excellent agreement with the results of the discrete element simulations. When changing to a three dimensional velocity space, the qualitative features of the flow are maintained. However, some flow properties change quantitatively.

It is proved that if a finite non-trivial quasi-order is not a linear order then there exist continuum many clones which consist of functions preserving the quasi-order and contain all unary functions with this property. It is shown that, for a linear order on a three-element set, there are only 7 such clones.