### Refine

#### Year of publication

- 1999 (131)

#### Document Type

- Preprint (121)
- Article (4)
- Lecture (3)
- Study Thesis (2)
- Diploma Thesis (1)

#### Faculty / Organisational entity

- Fachbereich Mathematik (131)

In a contribution on Plato's philosophy of descent, C.F. v. Weizsäcker writes that he is "convinced that Greek philosophy, this work of art unique among all world cultures, would have been unthinkable without the mathematical paradigm". And in his famous Kant lecture course of the winter semester 1935/36, M. Heidegger declared that it is "no accident that the Critique of Pure Reason ... is constantly accompanied by a reflection on the essence of the mathematical and of mathematics". What is said here about Plato and Kant applies to almost all Western philosophers of rank: explicitly or implicitly, mathematics plays a decisive role in each new philosophical conception. What reasons secure mathematics such a high standing in the thought of the leading philosophers? With what intentions and aims have philosophers, from Plato to Heidegger, from Aristotle to Bloch, again and again built statements about mathematics into their philosophy? Why, over the past two and a half millennia, has no other science been as "question-worthy" (frag-würdig) for philosophy as mathematics? Philosophy has, this much is obvious, sought the dialogue with mathematics again and again. And what about mathematics' interest in a dialogue with philosophy? In an extremely substantial essay, Mathematik und Antike, still well worth reading today, the mathematician O. Toeplitz asked in 1925 "whether, at some point in the existence of mathematics, philosophy has intervened in it decisively and formed its actual definitive shape". Such an initiative from within mathematics towards a dialogue with philosophy is no isolated case. Cantor, Hilbert, Weyl, Gödel and Robinson, to recall only a few representatives of more recent mathematics, have repeatedly sought contact with philosophy.

Multicriteria Optimization
(1999)

Life is about decisions. Decisions, no matter if taken by a group or an individual, involve several conflicting objectives. The observation that real world problems have to be solved optimally according to criteria which prohibit an "ideal" solution - optimal for each decision maker under each of the criteria considered - has led to the development of multicriteria optimization. From its first roots, which were laid by Pareto at the end of the 19th century, the discipline has prospered and grown, especially during the last three decades. Today, many decision support systems incorporate methods to deal with conflicting objectives. The foundation for such systems is a mathematical theory of optimization under multiple objectives. With this manuscript, which is based on lectures I taught in the winter semester 1998/99 at the University of Kaiserslautern, I intend to give an introduction to and overview of this fascinating field of mathematics. I tried to present theoretical questions such as existence of solutions as well as methodological issues and hope the reader finds the balance not too heavily on one side. The interested reader should be able to find classical results as well as up to date research. The text is accompanied by exercises, which hopefully help to deepen students' understanding of the topic.
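The Pareto optimality that this manuscript builds on can be sketched in a few lines. The function names and sample data below are illustrative assumptions, not taken from the text, and minimization in every objective is assumed:

```python
def dominates(a, b):
    """a dominates b if a is <= b in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Two conflicting objectives: no single "ideal" point minimizes both at once.
candidates = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 2)]
print(pareto_front(candidates))  # → [(1, 5), (2, 3), (4, 1)]
```

Here (3, 4) and (5, 2) are dominated; the remaining points are the mutually incomparable trade-offs a decision maker must choose among.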

Locally Maximal Clones II
(1999)

We consider a scale discrete wavelet approach on the sphere based on spherical radial basis functions. If the generators of the wavelets have a compact support, the scale and detail spaces are finite-dimensional, so that the detail information of a function is determined by only finitely many wavelet coefficients for each scale. We describe a pyramid scheme for the recursive determination of the wavelet coefficients from level to level, starting from an initial approximation of a given function. Basic tools are integration formulas which are exact for functions up to a given polynomial degree and spherical convolutions.

Moment inequalities for the Boltzmann equation and applications to spatially homogeneous problems
(1999)

Some inequalities for the Boltzmann collision integral are proved. These inequalities can be considered as a generalization of the well-known Povzner inequality. The inequalities are used to obtain estimates of moments of the solution to the spatially homogeneous Boltzmann equation for a wide class of intermolecular forces. We obtain simple necessary and sufficient conditions (on the potential) for the uniform boundedness of all moments. For potentials with compact support the following statement is proved. .....

We consider nonparametric estimation of the coefficients a_i(.), i=1,...,p, of a time-varying autoregressive process. Choosing an orthonormal wavelet basis representation of the functions a_i(.), the empirical wavelet coefficients are derived from the time series data as the solution of a least squares minimization problem. In order to allow the a_i(.) to be functions of inhomogeneous regularity, we apply nonlinear thresholding to the empirical coefficients and obtain locally smoothed estimates of the a_i(.). We show that the resulting estimators attain the usual minimax L_2-rates up to a logarithmic factor, simultaneously in a large scale of Besov classes. The finite-sample behaviour of our procedure is demonstrated by application to two typical simulated examples.

We study families V of curves in P2(C) of degree d having exactly r singular points of given topological or analytic types. We derive new sufficient conditions for V to be T-smooth (smooth of the expected dimension), respectively to be irreducible. For T-smoothness these conditions involve new invariants of curve singularities and are conjectured to be asymptotically proper, i.e., optimal up to a constant factor. To obtain the results, we study the Castelnuovo function, prove the irreducibility of the Hilbert scheme of zero-dimensional schemes associated to a cluster of infinitely near points of the singularities and deduce new vanishing theorems for ideal sheaves of zero-dimensional schemes in P2. Moreover, we give a series of examples of cuspidal curves where the family V is reducible, but where \(\pi_1(\mathbb{P}^2\setminus C)\) coincides (and is abelian) for all \(C \in V\).

After the notion of Gröbner bases and an algorithm for constructing them were introduced by Buchberger [Bu1, Bu2], algebraic geometers have used Gröbner bases as the main computational tool for many years, either to prove a theorem or to disprove a conjecture or just to experiment with examples in order to obtain a feeling about the structure of an algebraic variety. Nontrivial problems coming either from logic, mathematics or applications usually lead to nontrivial Gröbner basis computations, which is the reason why several improvements have been provided by many people and have been implemented in general purpose systems like Axiom, Maple, Mathematica, Reduce, etc., and systems specialized for use in algebraic geometry and commutative algebra like CoCoA, Macaulay and Singular. The present paper starts with an introduction to some concepts of algebraic geometry which should be understood by people with (almost) no knowledge in this field. In the second chapter we introduce standard bases (the generalization of Gröbner bases to non-well-orderings), which are needed for applications to local algebraic geometry (singularity theory), and a method for computing syzygies and free resolutions. The last chapter describes a new algorithm for computing the normalization of a reduced affine ring and gives an elementary introduction to singularity theory. Then we describe algorithms, using standard bases, to compute infinitesimal deformations and obstructions, which are basic for the deformation theory of isolated singularities. It is impossible to list all papers where Gröbner bases have been used in local and global algebraic geometry, and even more impossible to give an overview about these contributions. We have, therefore, included only references to papers mentioned in this tutorial paper. The interested reader will find many more in the other contributions of this volume and in the literature cited there.

Algorithmic ideal theory
(1999)

Algebraic geometers have used Gröbner bases as the main computational tool for many years, either to prove a theorem or to disprove a conjecture or just to experiment with examples in order to obtain a feeling about the structure of an algebraic variety. Non-trivial mathematical problems usually lead to non-trivial Gröbner basis computations, which is the reason why several improvements and efficient implementations have been provided by algebraic geometers (for example, the systems CoCoA, Macaulay and SINGULAR). The present paper starts with an introduction to some concepts of algebraic geometry which should be understood by people with (almost) no knowledge in this field. In the second chapter we introduce standard bases (the generalization of Gröbner bases to non-well-orderings), which are needed for applications to local algebraic geometry (singularity theory), and a method for computing syzygies and free resolutions. In the third chapter several algorithms for primary decomposition of polynomial ideals are presented, together with a discussion of improvements and preferable choices. We also describe a newly invented algorithm for computing the normalization of a reduced affine ring. The last chapter gives an elementary introduction to singularity theory and then describes algorithms, using standard bases, to compute infinitesimal deformations and obstructions, which are basic for the deformation theory of isolated singularities. It is impossible to list all papers where Gröbner bases have been used in local and global algebraic geometry, and even more impossible to give an overview about these contributions. We have, therefore, included only a few references to papers which contain interesting applications and which are not mentioned in this tutorial paper. The interested reader will find many more in the other contributions of this volume and in the literature cited there.

Singular algebraic curves, their existence, deformation, and families (from the local and global point of view) have attracted the continuing attention of algebraic geometers since the last century. The aim of our paper is to give an account of results, new trends and bibliography related to the geometry of equisingular families of algebraic curves on smooth algebraic surfaces over an algebraically closed field of characteristic zero. This theory was founded in the basic works of Plücker, Severi, Segre, and Zariski, and it has tight links with, and finds important applications in, singularity theory, the topology of complex algebraic curves and surfaces, and real algebraic geometry.

If \(A\) generates a bounded cosine function on a Banach space \(X\) then the negative square root \(B\) of \(A\) generates a holomorphic semigroup, and this semigroup is the conjugate potential transform of the cosine function. This connection is studied in detail, and it is used for a characterization of cosine function generators in terms of growth conditions on the semigroup generated by \(B\). This characterization relies on new results on the inversion of the vector-valued conjugate potential transform.

Let \(X\) be a Banach lattice. Necessary and sufficient conditions for a linear operator \(A:D(A) \to X\), \(D(A)\subseteq X\), to be of positive \(C^0\)-scalar type are given. In addition, we discuss which conditions on the Banach lattice imply that every operator of positive \(C^0\)-scalar type is necessarily of positive scalar type.

We consider regularizing iterative procedures for ill-posed problems with random and nonrandom additive errors. The rate of square-mean convergence for iterative procedures with random errors is studied. A comparison theorem is established for the convergence of procedures with and without additive errors.

Vigenère Cipher (Vigenere-Verschlüsselung)
(1999)

Complete presentations provide a natural solution to the word problem in monoids and groups. Here we give a simple way to construct complete presentations for the direct product of groups, when such presentations are available for the factors. Actually, the construction we are referring to is just the classical construction for direct products of groups, which has been known for a long time, but whose completeness-preserving properties had not been detected. Using this result and some known facts about Coxeter groups, we sketch an algorithm to obtain the complete presentation of any finite Coxeter group. A similar application to Abelian and Hamiltonian groups is mentioned.
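How a complete (confluent and terminating) presentation solves the word problem can be illustrated with a toy example that is not taken from the paper: the direct product Z/2 x Z/2 with generators a, b admits the complete rewriting system aa → ε, bb → ε, ba → ab, and exhaustive rewriting computes a unique normal form for each group element:

```python
# Hypothetical complete presentation of Z/2 x Z/2 (not from the paper).
RULES = [("aa", ""), ("bb", ""), ("ba", "ab")]

def normal_form(word):
    """Rewrite until no left-hand side occurs; completeness guarantees
    termination and a normal form independent of the rewriting order."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            if lhs in word:
                word = word.replace(lhs, rhs, 1)
                changed = True
    return word

# Two words represent the same group element iff their normal forms agree.
print(normal_form("baba"))  # → "" (the identity)
print(normal_form("bab"))   # → "a"
```

The normal forms here are exactly "", "a", "b", "ab", one per element of the direct product, which is the sense in which a complete presentation of the factors yields one for the product.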

Compared to standard numerical methods for hyperbolic systems of conservation laws, Kinetic Schemes model propagation of information by particles instead of waves. In this article, the wave and the particle concept are shown to be closely related. Moreover, a general approach to the construction of Kinetic Schemes for hyperbolic conservation laws is given which summarizes several approaches discussed by other authors. The approach also demonstrates why Kinetic Schemes are particularly well suited for scalar conservation laws and why extensions to general systems are less natural.

Nonlinear dissipativity, asymptotical stability, and contractivity of (ordinary) stochastic differential equations (SDEs) with some dissipative structure and their discretizations are studied in terms of their moments in the spirit of Pliss (1977). For this purpose, we introduce the notions and discuss related concepts of dissipativity, growth-bounded and monotone coefficient systems, asymptotical stability and contractivity in the wide and narrow sense, and nonlinear A-stability, AN-stability, B-stability and BN-stability for stochastic dynamical systems, more or less as stochastic counterparts to deterministic concepts. The test class of dissipative SDEs, interpreted in a broad sense as the natural analogue of dissipative deterministic differential systems, is suggested for stochastic-numerical methods. Then, in particular, a kind of mean square calculus is developed, although most of the ideas and analysis can be carried over to the general "stochastic Lp-case" (p ≥ 1). By this natural restriction, the new stochastic concepts are theoretically meaningful, as in deterministic analysis. Since the choice of step sizes then plays no essential role in the related proofs, we even obtain nonlinear A-stability, AN-stability, B-stability and BN-stability in the mean square sense for this implicit method with respect to appropriate test classes of moment-dissipative SDEs.

Nonlinear stochastic dynamical systems as ordinary stochastic differential equations and stochastic difference methods are in the center of this presentation in view of the asymptotical behaviour of their moments. We study the exponential p-th mean growth behaviour of their solutions as integration time tends to infinity. For this purpose, the concepts of nonlinear contractivity and stability exponents for moments are introduced as generalizations of well-known moment Lyapunov exponents of linear systems. Under appropriate monotonicity assumptions we gain uniform estimates of these exponents from above and below. Eventually, these concepts are generalized to describe the exponential growth behaviour along certain Lyapunov-type functionals.

In this paper we discuss a special class of regularization methods for solving the satellite gravity gradiometry problem in a spherical framework based on band-limited spherical regularization wavelets. Considering such wavelets as a result of a combination of some regularization methods with a Galerkin discretization based on the spherical harmonic system, we obtain error estimates of regularized solutions as well as estimates for the regularization parameters and the parameters of band-limitation.

The thirty-nine-year-old Immanuel Kant begins his Attempt to Introduce the Concept of Negative Magnitudes into Philosophy with a fundamental discussion of the possible use that philosophy (Weltweisheit) can make of mathematics. He puts forward the thesis that mathematics can in principle enter philosophy in only two ways. Kant sees the first possibility in the imitation of mathematical methods in the presentation of philosophy; the other consists, for him, in the concrete application of mathematical theories within the doctrine of nature (Naturlehre). Kant judges the first possibility decidedly negatively; his critique of the programme, formulated at first quite generally by Comenius and then favoured by Christian Wolff especially for philosophy, of presenting philosophy after the mathematical model, more geometrico demonstrata, is well known. The use of mathematics in the doctrine of nature Kant views quite positively; in the Metaphysical Foundations of Natural Science, a good two decades later, he even adds the famous claim that in every particular doctrine of nature only as much genuine science can be found as there is mathematics in it. Nevertheless, Kant points with all clarity to the narrow limits of the scope of such applications of mathematics, for in his opinion only the insights belonging to the doctrine of nature would profit from such a mathematical approach.

Many discrepancy principles are known for choosing the parameter \(\alpha\) in the regularized operator equation \((T^*T+ \alpha I)x_\alpha^\delta = T^*y^\delta\), \(\|y-y^\delta\|\leq \delta\), in order to approximate the minimal norm least-squares solution of the operator equation \(Tx=y\). In this paper we consider a class of discrepancy principles for choosing the regularization parameter when \(T^*T\) and \(T^*y^\delta\) are approximated by \(A_n\) and \(z_n^\delta\) respectively, with \(A_n\) not necessarily self-adjoint. This procedure generalizes the work of Engl and Neubauer (1985), and particular cases of the results are applicable to the regularized projection method as well as to a degenerate kernel method considered by Groetsch (1990).

Spectral Sequences (Spektralsequenzen)
(1999)

In the scalar case one knows that a complex normalized function of bounded variation \(\phi\) on \([0,1]\) defines a unique complex regular Borel measure \(\mu\) on \([0,1]\). In this note we show that this is no longer true in general in the vector valued case, even if \(\phi\) is assumed to be continuous. Moreover, the functions \(\phi\) which determine a countably additive vector measure \(\mu\) are characterized.

A compact subset E of the complex plane is called removable if all bounded analytic functions on its complement are constant or, equivalently, if its analytic capacity vanishes. The problem of finding a geometric characterization of the removable sets is more than a hundred years old and still not completely solved.

We consider the "representation type" of the classification problem of vector bundles on a projective curve. We prove that this problem is always either finite, or tame, or wild and we completely describe those curves which are of finite, resp. tame, vector bundle type. We also give a complete list of indecomposable vector bundles for the finite and tame cases.

Let \(\mathbb{P}^2_r\) be the projective plane blown up at \(r\) generic points. Denote by \(E_0, E_1, \dots, E_r\) the strict transform of a generic straight line on \(\mathbb{P}^2\) and the exceptional divisors of the blown-up points on \(\mathbb{P}^2_r\), respectively. We consider the variety \(V^{irr}\) of all irreducible curves \(C\) with \(k\) nodes as the only singularities and give asymptotically nearly optimal sufficient conditions for its smoothness, irreducibility and non-emptiness. Moreover, we extend our conditions for the smoothness and the irreducibility to families of reducible curves. For \(r \leq 9\) we give the complete answer concerning the existence of nodal curves in \(V^{irr}\).

Let \((\mathcal{E}_k)\) be a sequence of experiments with the same finite parameter set. Suppose only that identification of the parameter is possible asymptotically. For large classes of information functionals we show that their exponential rates of convergence towards complete information coincide. As a special case we obtain the rate of the Shannon capacity of product experiments.

In 1979, J.M. Bernardo argued heuristically that in the case of regular product experiments his information theoretic reference prior is equal to Jeffreys' prior. In this context, B.S. Clarke and A.R. Barron showed in 1994, that in the same class of experiments Jeffreys' prior is asymptotically optimal in the sense of Shannon, or, in Bayesian terms, Jeffreys' prior is asymptotically least favorable under Kullback Leibler risk. In the present paper, we prove, based on Clarke and Barron's results, that every sequence of Shannon optimal priors on a sequence of regular iid product experiments converges weakly to Jeffreys' prior. This means that for increasing sample size Kullback Leibler least favorable priors tend to Jeffreys' prior.

The following two norms for holomorphic functions \(F\), defined on the right complex half-plane \(\{z \in C:\Re(z)\gt 0\}\) with values in a Banach space \(X\), are equivalent:
\[\begin{eqnarray*} \lVert F \rVert _{H_p(C_+)} &=& \sup_{a\gt0}\left( \int_{-\infty}^\infty \lVert F(a+ib) \rVert ^p \ db \right)^{1/p}
\mbox{, and} \\ \lVert F \rVert_{H_p(\Sigma_{\pi/2})} &=& \sup_{\lvert \theta \rvert \lt \pi/2}\left( \int_0^\infty \left \lVert F(re^{i \theta}) \right \rVert ^p\ dr \right)^{1/p}.\end{eqnarray*}\] As a consequence, we derive a description of boundary values of sectorial holomorphic functions, and a theorem of Paley-Wiener type for sectorial holomorphic functions.

We compare different notions of differentiability of a measure along a vector field on a locally convex space. We consider in the L2-space of a differentiable measure the analogues of the classical concepts of gradient, divergence and Laplacian (which coincides with the Ornstein-Uhlenbeck operator in the Gaussian case). We use these operators for the extension of the basic results of Malliavin and Stroock on the smoothness of finite dimensional image measures under certain nonsmooth mappings to the case of non-Gaussian measures. The proof of this extension is quite direct and does not use any chaos decomposition. Finally, the role of this Laplacian in the procedure of quantization of anharmonic oscillators is discussed.

Starting from the uniqueness question for mixtures of distributions this review centers around the question under which formally weaker assumptions one can prove the existence of SPLIFs, in other words perfect statistics and tests. We mention a couple of positive and negative results which complement the basic contribution of David Blackwell in 1980. Typically the answers depend on the choice of the set theoretic axioms and on the particular concepts of measurability.

An a posteriori stopping rule connected with monitoring the norm of the second residual is introduced for Brakhage's implicit nonstationary iteration method, applied to ill-posed problems involving linear operators with closed range. It is also shown that for some classes of equations with such operators the algorithm consisting in a combination of Brakhage's method with some new discretization scheme is order optimal in the sense of Information Complexity.

We show that the intersection local times \(\mu_p\) on the intersection of \(p\) independent planar Brownian paths have an average density of order three with respect to the gauge function \(r^2\pi\cdot (\log(1/r)/\pi)^p\), more precisely, almost surely, \[ \lim\limits_{\varepsilon\downarrow 0} \frac{1}{\log |\log\ \varepsilon|} \int_\varepsilon^{1/e} \frac{\mu_p(B(x,r))}{r^2\pi\cdot (\log(1/r)/\pi)^p} \frac{dr}{r\ \log (1/r)} = 2^p \mbox{ at $\mu_p$-almost every $x$.} \] We also show that the lacunarity distributions of \(\mu_p\), at \(\mu_p\)-almost every point, are given as the distribution of the product of \(p\) independent gamma(2)-distributed random variables. The main tools of the proof are a Palm distribution associated with the intersection local time and an approximation theorem of Le Gall.

It is proved that if a finite non-trivial quasi-order is not a linear order then there exist continuum many clones which consist of functions preserving the quasi-order and contain all unary functions with this property. It is shown that, for a linear order on a three-element set, there are only 7 such clones.

In this paper we show that for each prime p ≥ 7 there exists a translation plane of order p^2 of Mason-Ostrom type. These planes occur as 6-dimensional ovoids being projections of the 8-dimensional binary ovoids of Conway, Kleidman and Wilson. In order to verify the existence of such projections we prove certain properties of two particular quadratic forms using classical methods from number theory.

Two possible substitutes of the Fourier transform in geopotential determination are the windowed Fourier transform (WFT) and the wavelet transform (WT). In this paper we introduce the harmonic WFT and WT and show how they can be used to give information about the geopotential simultaneously in the space domain and the frequency (angular momentum) domain. The counterparts of the inverse Fourier transform are derived, which allow us to reconstruct the geopotential from its WFT and WT, respectively. Moreover, we derive a necessary and sufficient condition that an otherwise arbitrary function of space and frequency has to satisfy to be the WFT or WT of a potential. Finally, least-squares approximation and minimum norm (i.e. least-energy) representation, which will play a particular role in geodetic applications of both WFT and WT, are discussed in more detail.

A class of regularization methods using unbounded regularizing operators is considered for obtaining stable approximate solutions for ill-posed operator equations. With an a posteriori as well as an a priori parameter choice strategy, it is shown that the method yields optimal order. Error estimates have also been obtained under stronger assumptions on the generalized solution. The results of the paper unify and simplify many of the results available in the literature. For example, the optimal results of the paper include, as particular cases for Tikhonov regularization, the main result of Mair (1994) with an a priori parameter choice and a result of Nair (1999) with an a posteriori parameter choice. Thus the observations of Mair (1994) on Tikhonov regularization of ill-posed problems involving finitely and infinitely smoothing operators are applicable to various other regularization procedures as well. Subsequent results on error estimates include, as special cases, an optimal result of Vainikko (1987) and also recent results of Tautenhahn (1996) in the setting of Hilbert scales.

A multiscale method is introduced using spherical (vector) wavelets for the computation of the earth's magnetic field within source regions of ionospheric and magnetospheric currents. The considerations are essentially based on two geomathematical keystones, namely (i) the Mie representation of solenoidal vector fields in terms of toroidal and poloidal parts and (ii) the Helmholtz decomposition of spherical (tangential) vector fields. Vector wavelets are shown to provide adequate tools for multiscale geomagnetic modelling in form of a multiresolution analysis, thereby completely circumventing the numerical obstacles caused by vector spherical harmonics. The applicability and efficiency of the multiresolution technique is tested with real satellite data.

Tangent measure distributions are a natural tool to describe the local geometry of arbitrary measures of any dimension. We show that for every measure on a Euclidean space and every s, at almost every point, all s-dimensional tangent measure distributions define statistically self-similar random measures. Consequently, the local geometry of general measures is not different from the local geometry of self-similar sets. We illustrate the strength of this result by showing how it can be used to improve recently proved relations between ordinary and average densities.

Location problems with Q (in general conflicting) criteria are considered. After reviewing previous results of the authors dealing with lexicographic and Pareto location, the main focus of the paper is on max-ordering locations. In these location problems the worst of the single objectives is minimized. After discussing some general results (including reductions to single criterion problems and the relation to lexicographic and Pareto locations), three solution techniques are introduced, each exemplified using one class of location problems: the direct approach, the decision space approach and the objective space approach. In the resulting solution algorithms the emphasis is on the representation of the underlying geometric idea without fully exploring the computational complexity issue. A further specialization of max-ordering locations is obtained by introducing lexicographic max-ordering locations, which can be found efficiently. The paper is concluded by some ideas about future research topics related to max-ordering location problems.

In this paper we deal with locating a line in the plane. If d is a distance measure, our objective is to find a straight line l which minimizes f(l) or g(l) (see the paper for the definition of these functions). We show that for all distance measures d derived from norms, one of the lines minimizing f(l) contains at least two of the existing facilities. For the center objective we always get an optimal line which is at maximum distance from at least three of the existing facilities. If all weights are equal, there is an optimal line which is parallel to one facet of the convex hull of the existing facilities.
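The structural result that some optimal line passes through two of the existing facilities reduces the median line problem to a finite enumeration over pairs. The sketch below assumes Euclidean distance and unit weights (both assumptions, since the abstract treats general norm-derived distances), with illustrative names and data:

```python
from itertools import combinations
import math

def point_line_dist(p, a, b):
    """Euclidean distance from point p to the line through points a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    num = abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1)
    return num / math.hypot(x2 - x1, y2 - y1)

def best_median_line(points, weights=None):
    """Enumerate the lines through pairs of facilities and return the pair
    whose line minimizes the weighted sum of distances to all facilities."""
    weights = weights or [1.0] * len(points)
    def cost(a, b):
        return sum(w * point_line_dist(p, a, b) for p, w in zip(points, weights))
    return min(combinations(points, 2), key=lambda ab: cost(*ab))

facilities = [(0, 0), (1, 0.1), (2, -0.1), (3, 0), (0, 5)]
a, b = best_median_line(facilities)
print(a, b)  # an optimal line spanned by two of the facilities
```

The enumeration over O(n^2) candidate lines is exactly what the incidence result licenses; without it, the search space of all lines would be continuous.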

In this paper relationships between Pareto points and saddle points in multiple objective programming are investigated. Convex and nonconvex problems are considered and the equivalence between Pareto points and saddle points is proved in both cases. The results are based on scalarizations of multiple objective programs and related linear and augmented Lagrangian functions. Partitions of the index sets of objectives and constraints are introduced to reduce the size of the problems. The relevance of the results in the context of decision making is also discussed.

Discrete Decision Problems, Multiple Criteria Optimization Classes and Lexicographic Max-Ordering
(1999)

The topic of this paper is discrete decision problems with multiple criteria. We first define discrete multiple criteria decision problems and introduce a classification scheme for multiple criteria optimization problems. To do so we use multiple criteria optimization classes. The main result is a characterization of the class of lexicographic max-ordering problems by two very useful properties, reduction and regularity. Subsequently we discuss the assumptions under which the application of this specific MCO class is justified. Finally we provide (simple) solution methods to find optimal decisions in the case of discrete multiple criteria optimization problems.
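The lexicographic max-ordering rule characterized above can be sketched for a discrete problem: sort each outcome vector in non-increasing order and compare the sorted vectors lexicographically, so the worst criterion is compared first, then the second worst, and so on. The data and names below are hypothetical, not from the paper (minimization assumed):

```python
def lex_max_ordering_key(outcome):
    """Criteria values sorted in non-increasing order; comparing these tuples
    lexicographically realizes the lex-max-ordering preference (smaller = better)."""
    return tuple(sorted(outcome, reverse=True))

def lex_max_optimal(alternatives):
    """Return the alternatives that are lexicographic max-ordering optimal."""
    best = min(lex_max_ordering_key(o) for o in alternatives.values())
    return [a for a, o in alternatives.items() if lex_max_ordering_key(o) == best]

# Hypothetical discrete decision problem with three criteria to minimize.
alts = {"A": (4, 2, 1), "B": (3, 3, 1), "C": (3, 2, 2)}
print(lex_max_optimal(alts))  # → ['C']: worst values tie at 3, second worst decides
```

Note how C beats B even though both have worst value 3: after the worst criteria tie, the rule recurses on the remaining ones, which is the refinement of plain max-ordering the paper exploits.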

In line location problems the objective is to find a straight line which minimizes the sum of distances, or the maximum distance, respectively, to a given set of existing facilities in the plane. These problems are well solved. In this paper we deal with restricted line location problems, i.e., we are given a set in the plane through which the line is not allowed to pass. With the help of a geometric duality we solve such problems for the vertical distance and then extend these results to block norms and some of them even to arbitrary norms. For all norms we give a finite candidate set for the optimal line.

In this survey we deal with the location of hyperplanes in n-dimensional normed spaces, i.e., we present all known results and a unifying approach to the so-called median hyperplane problem in Minkowski spaces. We describe how to find a hyperplane H minimizing the weighted sum f(H) of distances to a given, finite set of demand points. In robust statistics and operations research such an optimal hyperplane is called a median hyperplane. After summarizing the known results for the Euclidean and rectangular situation, we show that for all distance measures d derived from norms one of the hyperplanes minimizing f(H) is the affine hull of n of the demand points and, moreover, that each median hyperplane is a halving one (in a sense defined below) with respect to the given point set. An independence-of-norm result for finding optimal hyperplanes with fixed slope is also given. Furthermore we discuss how these geometric criteria can be used for algorithmic approaches to median hyperplanes, with an extra discussion for the case of polyhedral norms. Finally a characterization of all smooth norms by a sharpened incidence criterion for median hyperplanes is mentioned.

In this paper we prove a reduction result for the number of criteria in convex multiobjective optimization. This result states that to decide whether a point x in the decision space is Pareto optimal it suffices to consider at most n criteria at a time, where n is the dimension of the decision space. The main theorem is based on a geometric characterization of Pareto, strict Pareto and weak Pareto solutions.

Ramsey Numbers of K_m versus (n,k)-graphs and the Local Density of Graphs not Containing a K_m
(1999)

In this paper generalized Ramsey numbers of complete graphs K_m versus the set langle n,k rangle of (n,k)-graphs are investigated. The value of r(K_m, langle n,k rangle) is given in general for values of k small relative to n, using a correlation with Turan numbers. These generalized Ramsey numbers can be used to determine the local densities of graphs not containing a subgraph K_m.

The Weber problem for a given finite set of existing facilities {cal E}x = {Ex_1, Ex_2, ... , Ex_M} subset R^2 with positive weights w_m (m = 1, ... ,M) is to find a new facility X* in R^2 such that sum_{m=1}^{M} w_{m}d(X,Ex_m) is minimized for some distance function d. In this paper we consider distances defined by polyhedral gauges. A variation of this problem is obtained if barriers are introduced, which are convex polygonal subsets of the plane where neither location of new facilities nor traveling is allowed. Such barriers, like lakes, military regions, national parks or mountains, are frequently encountered in practice. From a mathematical point of view barrier problems are difficult, since the presence of barriers destroys the convexity of the objective function. Nevertheless, this paper establishes a discretization result: one of the grid points in the grid defined by the existing facilities and the fundamental directions of the gauge distances can be proved to be an optimal location. Thus the barrier problem can be solved with a polynomial algorithm.
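
The discretization result can be illustrated for the rectilinear (l1) gauge, whose fundamental directions are the coordinate axes: the candidate grid consists of the intersections of horizontal and vertical lines through the existing facilities. The sketch below is a simplification under assumed data — it only discards candidates inside a rectangular barrier and evaluates plain l1 distances, whereas the paper's barrier distances account for detours around the barrier.

```python
facilities = [(0, 0), (4, 1), (2, 5)]   # assumed existing facilities
weights = [1, 2, 1]                     # assumed positive weights
barrier = (1.5, 0.5, 2.5, 2.0)          # axis-aligned rectangle (xmin, ymin, xmax, ymax)

def inside(p, rect):
    x, y = p
    xmin, ymin, xmax, ymax = rect
    return xmin < x < xmax and ymin < y < ymax

# Candidate grid: intersections of axis-parallel lines through the facilities
xs = sorted({x for x, _ in facilities})
ys = sorted({y for _, y in facilities})
candidates = [(x, y) for x in xs for y in ys if not inside((x, y), barrier)]

def l1_cost(p):
    # Weighted sum of Manhattan distances (barrier detours omitted here)
    return sum(w * (abs(p[0] - fx) + abs(p[1] - fy))
               for w, (fx, fy) in zip(weights, facilities))

best = min(candidates, key=l1_cost)
```

The grid has at most M^2 points, which is what makes the resulting algorithm polynomial.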

Kernel smoothing in nonparametric autoregressive schemes offers a powerful tool in modelling time series. In this paper it is shown that the bootstrap can be used for estimating the distribution of kernel smoothers. This can be done by mimicking the stochastic nature of the whole process in the bootstrap resampling or by generating a simple regression model. Consistency of these bootstrap procedures will be shown.
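
A toy residual-bootstrap sketch of the first idea, mimicking the stochastic nature of the whole process: fit a Nadaraya-Watson estimate of the autoregression function, then regenerate bootstrap series from resampled residuals and re-estimate. The AR(1)-type model, bandwidth, and sample sizes are illustrative assumptions.

```python
import math
import random

random.seed(0)

def nw_estimate(xs, ys, x, h):
    # Nadaraya-Watson kernel smoother with a Gaussian kernel and bandwidth h
    ws = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs]
    return sum(w * y for w, y in zip(ws, ys)) / sum(ws)

# Simulated series X_t = m(X_{t-1}) + e_t with m(x) = 0.6 x (assumed model)
x = [0.0]
for _ in range(100):
    x.append(0.6 * x[-1] + random.gauss(0.0, 1.0))
lag, resp = x[:-1], x[1:]

h = 0.5
fitted = [nw_estimate(lag, resp, xi, h) for xi in lag]
resid = [y - f for y, f in zip(resp, fitted)]

# Bootstrap: rebuild the whole process from resampled residuals and
# re-estimate m at a fixed point to approximate the smoother's distribution
boot = []
for _ in range(50):
    xb = [0.0]
    for _ in range(100):
        xb.append(nw_estimate(lag, resp, xb[-1], h) + random.choice(resid))
    boot.append(nw_estimate(xb[:-1], xb[1:], 1.0, h))
```

The empirical spread of `boot` then serves as an estimate of the distribution of the kernel smoother at the chosen point.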

In this paper we consider generalizations of multifacility location problems in which, as an additional constraint, the new facilities are not allowed to be located in a prespecified region. We propose several different solution schemes for this non-convex optimization problem. These include a linear programming type approach, penalty approaches and barrier approaches. Moreover, structural results as well as illustrative examples showing the difficulties of this problem are presented.

Presenting the decision maker's (DM) preferences in multicriteria decision problems as a partially ordered set is an effective method to capture the DM's purpose and avoid misleading results. Since our paper is focused on minimal path problems, we regard the ordered set of edges (E,=). Minimal paths are defined with respect to power-ordered sets, which provide an essential tool to solve such problems. An algorithm to detect minimal paths in a multicriteria minimal path problem is presented.

Let P be a probability measure on the real line R such that each of the product measures P^{otimes n} assigns the value 1/2 to every half space in R^{n} having the origin as a boundary point. Then P is symmetric. Example: a strictly stable law on R is symmetric iff it has median zero. The treated symmetry problem is related to the problem of characterizing the distribution of X_1 by the distribution of (X_2 + X_1, ... , X_n + X_1), with X_1, ... , X_n being independent and identically distributed random variables.

In continuous location problems we are given a set of existing facilities and we are looking for the location of one or several new facilities. In the classical approaches, weights are assigned to the existing facilities expressing the importance of the new facilities for the existing ones. In this paper we consider a pointwise defined objective function where the weights are assigned to the existing facilities depending on the location of the new facility. This approach is shown to be a generalization of the median, center and centdian objective functions. In addition, this approach allows us to formulate completely new location models. Efficient algorithms as well as structural results for this algebraic approach to location problems are presented. Extensions to the multifacility and restricted case are also considered.

In this paper we consider the problem of optimizing a piecewise-linear objective function over a non-convex domain. In particular, we do not allow the solution to lie in the interior of a prespecified region R. We discuss the geometrical properties of this problem and present algorithms based on combinatorial arguments. In addition we show how sets R of quite complicated shape can be constructed while the combinatorial properties are maintained.

In this paper we deal with the determination of the whole set of Pareto solutions of location problems with respect to Q general criteria. These criteria include as particular instances median, center or cent-dian objective functions. The paper characterizes the set of Pareto solutions of all these multicriteria problems. An efficient algorithm for the planar case is developed and its complexity is established. The proposed approach is more general than the previously published approaches to multicriteria location problems and includes almost all of them as particular instances.

The computational complexity of combinatorial multiple objective programming problems is investigated. NP-completeness and #P-completeness results are presented. Using two definitions of approximability, general results are presented, which outline limits for approximation algorithms. The performance of the well known tree and Christofides' heuristics for the TSP is investigated in the multicriteria case with respect to the two definitions of approximability.
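
For reference, a sketch of the (single-criterion) tree heuristic discussed above: build a minimum spanning tree with Prim's method and shortcut a depth-first traversal into a tour, a 2-approximation for metric TSP. The square instance is an assumed example, not from the paper.

```python
import math

def tree_heuristic(points):
    # "Double tree" heuristic: MST (Prim), then shortcut a DFS preorder
    n = len(points)
    d = lambda i, j: math.dist(points[i], points[j])
    in_tree, adj = {0}, {i: [] for i in range(n)}
    while len(in_tree) < n:
        i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: d(*e))
        adj[i].append(j)
        adj[j].append(i)
        in_tree.add(j)
    tour, seen, stack = [], set(), [0]
    while stack:
        v = stack.pop()
        if v not in seen:           # preorder visit = shortcutting repeated vertices
            seen.add(v)
            tour.append(v)
            stack.extend(adj[v])
    length = sum(d(tour[k], tour[(k + 1) % n]) for k in range(n))
    return tour, length

square = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
tour, length = tree_heuristic(square)   # length is at most twice the MST weight
```

In the multicriteria setting studied in the paper, the question is whether such guarantees carry over when every edge has a vector of weights.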

Convex Operators in Vector Optimization: Directional Derivatives and the Cone of Decrease Directions
(1999)

The paper is devoted to the investigation of directional derivatives and the cone of decrease directions for convex operators on Banach spaces. We prove a condition for the existence of directional derivatives which does not assume regularity of the ordering cone K. This result is then used to prove that for continuous convex operators the cone of decrease directions can be represented in terms of the directional derivatives. Decrease directions are those for which the directional derivative lies in the negative interior of the ordering cone K. Finally, we show that the continuity of the convex operator can be replaced by its K-boundedness.

The notion of the balance number introduced in [3,page 139] through a certain set contraction procedure for nonscalarized multiobjective global optimization is represented via a min-max operation on the data of the problem. This representation yields a different computational procedure for the calculation of the balance number and allows us to generalize the approach for problems with countably many performance criteria.

In this paper we consider the problem of locating one new facility in the plane with respect to a given set of existing facilities, where a set of polygonal barriers restricts traveling. This non-convex optimization problem can be reduced to a finite set of convex subproblems if the objective function is a convex function of the travel distances between the new and the existing facilities (as are, e.g., the median and center objective functions). An exact algorithm and a heuristic solution procedure based on this reduction result are developed.

Let r_C and r_D be two convex distance functions in the plane with convex unit balls C and D. Given two points p and q, we investigate the bisector B(p,q) of p and q, where distance from p is measured by r_C and distance from q by r_D. We provide the following results. B(p,q) may consist of many connected components whose precise number can be derived from the intersection of the unit balls C and D. The bisector can contain bounded or unbounded 2-dimensional areas. Even more surprising, pieces of the bisector may appear inside the region of all points closer to p than to q. If C and D are convex polygons with m and n vertices, respectively, the bisector B(p,q) can consist of at most min(m,n) connected components which contain at most 2(m+n) vertices altogether. The former bound is tight, the latter is tight up to an additive constant. We also present an optimal O(m+n) time algorithm for computing the bisector.

In planar location problems with barriers one considers regions which are forbidden for the siting of new facilities as well as for trespassing. These problems are important since they reflect various real-world situations. The resulting mathematical models have a non-convex objective function and are therefore difficult to tackle using standard methods of location theory, even in the case of simple barrier shapes and distance functions. For the case of center objectives with barrier distances obtained from the rectilinear or Manhattan metric, it is shown that the problem can be solved by identifying a finite dominating set (FDS) the cardinality of which is bounded by a polynomial in the size of the problem input. The resulting genuinely polynomial algorithm can be combined with bound computations which are derived from solving closely connected restricted location and network location problems. It is shown that the results can be extended to barrier center problems with respect to arbitrary block norms having four fundamental directions.

In this paper we deal with an NP-hard combinatorial optimization problem, the k-cardinality tree problem in node-weighted graphs. This problem has several applications, which justify the need for efficient methods to obtain good solutions. We review the existing literature on the problem. Then we prove that under the condition that the graph contains exactly one trough, the problem can be solved in polynomial time. For the general NP-hard problem we implemented several local search methods to obtain heuristic solutions, which are qualitatively better than solutions found by constructive heuristics and which require significantly less time than needed to obtain optimal solutions. We used the well-known concepts of genetic algorithms and tabu search with useful extensions. We show that all the methods find optimal solutions for the class of graphs containing exactly one trough. The general performance of our methods as compared to other heuristics is illustrated by numerical results.

Given a finite set of points in the plane and a forbidden region R, we want to find a point X not an element of int(R) such that the weighted sum of distances to all given points is minimized. This location problem is a variant of the well-known Weber problem, where we measure the distance by polyhedral gauges and allow each of the weights to be positive or negative. The unit ball of a polyhedral gauge may be any convex polyhedron containing the origin. This large class of distance functions allows very general (practical) settings - such as asymmetry - to be modeled. Each given point is allowed to have its own gauge, and the forbidden region R enables us to include negative information in the model. Additionally, the use of negative and positive weights allows us to model the degree of attraction or repulsion of a new facility. Polynomial algorithms and structural properties for this global optimization problem (d.c. objective function and a non-convex feasible set) based on combinatorial and geometrical methods are presented.

In this paper a new trend is introduced into the field of multicriteria location problems. We combine the robustness approach using the minmax regret criterion together with Pareto-optimality. We consider the multicriteria Weber location problem which consists of simultaneously minimizing a number of weighted sum-distance functions and the set of Pareto-optimal locations as its solution concept. For this problem, we characterize the Pareto-optimal solutions within the set of robust locations for the original weighted sum-distance functions. These locations have both the properties of stability and non-domination which are required in robust and multicriteria programming.

The problem of finding an optimal location X* minimizing the maximum Euclidean distance to existing facilities is well solved by, e.g., the Elzinga-Hearn algorithm. In practical situations X* will, however, often not be feasible. We therefore suggest in this note a polynomial algorithm which will find an optimal location X^F in a feasible subset F of the plane R^2.

In the following we discuss a procedure for interpolating a spatio-temporal stochastic process. We stick to a particular, moderately general model, but the approach can easily be transferred to other similar problems. The original data which motivated this work are measurements of gas concentrations (SO2, NO, O2) and several meteorological parameters (temperature, sun radiation, precipitation, wind speed etc.). These data have been and are still being recorded twice every hour at several irregularly located places in the forests of the state Rheinland-Pfalz as part of a program monitoring the air pollution in the forests.

Using the data provided by the appraisal committee (Gutachterausschuss) of the city of Kaiserslautern, we investigate which factors influence the market value of a developed property. Based on these findings, a formula as simple as possible is to be derived which yields an estimate of the market value while taking into account the purchase prices achieved in the past. Multiple linear regression suggests itself as the method for this task. The theoretical foundations are not discussed in detail here; they can be found in any book on mathematical statistics, or in [1]. In analyzing the data we largely followed the approach described by Angelika Schwarz in [1]. Her results cannot be transferred directly, however, since the properties considered there were undeveloped. Since the statistical evaluation of large amounts of data involves an immense computational effort, it is indispensable to use professional statistical software. The program S-Plus 2.0 (PC version for Windows) was available. All computations and all graphics in this report were produced with S-Plus.

We consider a multiple objective linear program (MOLP) max{Cx | Ax = b, x in N_{0}^{n}} where C = (c_ij) is the p x n matrix of p different objective functions z_i(x) = c_{i1}x_1 + ... + c_{in}x_n, i = 1,...,p, and A is the m x n matrix of a system of m linear equations a_{k1}x_1 + ... + a_{kn}x_n = b_k, k = 1,...,m, which form the set of constraints of the problem. All coefficients are assumed to be natural numbers or zero. The set M of admissible solutions consists of all x in N_{0}^{n} with Ax = b. An efficient solution {hat x} is an admissible solution such that there exists no other admissible solution x' with Cx' ge C{hat x} and Cx' ne C{hat x}. The efficient solutions play the role of optimal solutions for the MOLP, and it is our aim to determine the set of all efficient solutions.
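
For a finite set of admissible solutions, efficiency can be checked directly by pairwise dominance comparisons of the objective vectors Cx (maximization, as above). The matrix C and the candidate solutions below are illustrative assumptions.

```python
def objective_vector(C, x):
    # Compute Cx for a solution x
    return tuple(sum(c * xi for c, xi in zip(row, x)) for row in C)

def dominates(u, v):
    # u dominates v (maximization): u >= v componentwise and u != v
    return all(a >= b for a, b in zip(u, v)) and u != v

def efficient(solutions, C):
    # Keep exactly the solutions whose objective vector is not dominated
    vals = {x: objective_vector(C, x) for x in solutions}
    return [x for x in solutions
            if not any(dominates(vals[y], vals[x]) for y in solutions)]

C = [(1, 0), (0, 1)]                             # two objectives: z1 = x1, z2 = x2
sols = [(0, 3), (1, 2), (2, 2), (3, 0), (1, 1)]  # assumed admissible solutions
eff = efficient(sols, C)                         # (1,2) and (1,1) are dominated by (2,2)
```

This brute-force filter is quadratic in the number of solutions; the paper's concern is determining the efficient set without enumerating all admissible solutions.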

In this paper we give the definition of a solution concept in multicriteria combinatorial optimization. We show how Pareto, max-ordering and lexicographically optimal solutions can be incorporated in this framework. Furthermore we state some properties of lexicographic max-ordering solutions, which combine features of these three kinds of optimal solutions. Two of these properties, which are desirable from a decision maker's point of view, are satisfied if and only if the solution concept is that of lexicographic max-ordering.

The Weber problem for a given finite set of existing facilities {cal E}x = {Ex_1, Ex_2, ... , Ex_M} subset R^2 with positive weights w_m (m = 1, ... ,M) is to find a new facility X* such that sum_{m=1}^{M} w_{m}d(X,Ex_m) is minimized for some distance function d. A variation of this problem is obtained if the existing facilities are situated on two sides of a linear barrier. Such barriers, like rivers, highways, borders or mountain ranges, are frequently encountered in practice. Structural results as well as algorithms for this non-convex optimization problem, depending on the distance function and on the number and location of passages through the barrier, are presented. A reduction to convex optimization problems is used to derive efficient algorithms.

In this paper we introduce a new type of single facility location problems on networks which includes as special cases most of the classical criteria in the literature. Structural results as well as a finite dominating set for the optimal locations are developed. The extension to the multi-facility case is also discussed.

In this paper network location problems with several objectives are discussed, where every single objective is a classical median objective function. We look at the problem of finding Pareto optimal locations and lexicographically optimal locations. It is shown that for Pareto optimal locations in undirected networks no node dominance result can be shown. Structural results as well as efficient algorithms for these multi-criteria problems are developed. In the special case of a tree network a generalization of Goldman's dominance algorithm for finding Pareto locations is presented.

An approach to generating all efficient solutions of multiple objective programs with piecewise linear objective functions and linear constraints is presented. The approach is based on the decomposition of the feasible set into subsets, referred to as cells, so that the original problem reduces to a series of linear multiple objective programs over the cells. The concepts of cell-efficiency and complex-efficiency are introduced and their relationship with efficiency is examined. A generic algorithm for finding efficient solutions is proposed. Applications in location theory as well as in worst case analysis are highlighted.

Facility location problems in the plane play an important role in mathematical programming. When looking for new locations in modeling real-world problems, we are often confronted with forbidden regions that are not feasible for the placement of new locations. Furthermore, these forbidden regions may have complicated shapes. It may be more useful or even necessary to use approximations of such forbidden regions when trying to solve location problems. In this paper we develop error bounds for the approximate solution of restricted planar location problems using the so-called sandwich algorithm. The number of approximation steps required to achieve a specified error bound is analyzed. As examples of these approximation schemes, we discuss round norms and polyhedral norms. Computational tests are also included.

In this paper we deal with the location of hyperplanes in n-dimensional normed spaces. If d is a distance measure, our objective is to find a hyperplane H which minimizes f(H) = sum_{m=1}^{M} w_{m}d(x_m,H), where w_m ge 0 are non-negative weights, x_m in R^n, m = 1, ... ,M, are demand points and d(x_m,H) = min_{z in H} d(x_m,z) is the distance from x_m to the hyperplane H. In robust statistics and operations research such an optimal hyperplane is called a median hyperplane. We show that for all distance measures d derived from norms, one of the hyperplanes minimizing f(H) is the affine hull of n of the demand points and, moreover, that each median hyperplane is (in a certain sense) a halving one with respect to the given point set.

There are several good reasons to introduce classification schemes for optimization problems, including, for instance, the ability to state problems concisely as opposed to verbal, often ambiguous, descriptions, or simple data encoding and information retrieval in bibliographical information systems or software libraries. In some branches like scheduling and queuing theory classification is therefore a widely accepted and appreciated tool. The aim of this paper is to propose a 5-position classification which can be used to cover all location problems. We provide a list of currently available symbols and indicate its usefulness in a - necessarily non-comprehensive - list of classical location problems. The classification scheme has been in use since 1992 and has since proved to be useful in research, software development, the classroom, and for overview articles.

We consider wavelet estimation of the time-dependent (evolutionary) power spectrum of a locally stationary time series. Allowing for departures from stationarity proves useful for modelling, e.g., transient phenomena, quasi-oscillating behaviour or spectrum modulation. In our work wavelets are used to provide an adaptive local smoothing of a short-time periodogram in the time-frequency plane. For this, in contrast to classical nonparametric (linear) approaches, we use nonlinear thresholding of the empirical wavelet coefficients of the evolutionary spectrum. We show how these techniques allow for both adaptively reconstructing the local structure in the time-frequency plane and denoising the resulting estimates. To this end a threshold choice is derived which is motivated by minimax properties w.r.t. the integrated mean squared error. Our approach is based on a 2-d orthogonal wavelet transform modified by using a cardinal Lagrange interpolation function on the finest scale. As an example, we apply our procedure to a time-varying spectrum motivated from mobile radio propagation.

The problem of providing connectivity for a collection of applications is largely one of data integration: the communicating parties must agree on the semantics and syntax of the data being exchanged. In earlier papers [mp:jsc1, sg:BSG1], it was proposed that dictionaries of definitions for operators, functions, and symbolic constants can effectively address the problem of semantic data integration. In this paper we extend that earlier work to discuss the important issues in data integration at the syntactic level and propose a set of solutions that are both general, supporting a wide range of data objects with typing information, and efficient, supporting fast transmission and parsing.

Chains of Recurrences (CRs) are a tool for expediting the evaluation of elementary expressions over regular grids. CR-based evaluations of elementary expressions consist of three major stages: CR construction, simplification, and evaluation. This paper addresses CR simplifications. The goal of CR simplification is to manipulate a CR such that the resulting expression is more efficient to evaluate. We develop CR simplification strategies which take the computational context of CR evaluations into account. Realizing that it is infeasible to always optimally simplify a CR expression, we give heuristic strategies which, in most cases, result in optimal, or close-to-optimal, expressions. The motivations behind our proposed strategies are discussed and the results are illustrated by various examples.
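
A minimal sketch of the CR idea itself (construction and evaluation only; the paper's simplification strategies are not reproduced): a polynomial's values over a regular grid satisfy a chain of recurrences obtained from finite differences, so each further grid point costs only a few additions.

```python
def cr_construct(f, x0, h, order):
    # Build the CR {phi0, +, phi1, +, ..., phi_order} of f on the grid x0 + i*h
    # by repeated finite differences (exact for polynomials of degree <= order)
    vals = [f(x0 + i * h) for i in range(order + 1)]
    phis = []
    while vals:
        phis.append(vals[0])
        vals = [b - a for a, b in zip(vals, vals[1:])]
    return phis

def cr_evaluate(phis, n):
    # Evaluate f on n successive grid points using only additions per point
    phis = list(phis)
    out = []
    for _ in range(n):
        out.append(phis[0])
        for j in range(len(phis) - 1):
            phis[j] += phis[j + 1]
    return out

f = lambda x: x * x + 3 * x + 1
cr = cr_construct(f, 0.0, 1.0, 2)    # [1.0, 4.0, 2.0]
grid_vals = cr_evaluate(cr, 5)       # f(0), f(1), ..., f(4)
```

Simplification, in this picture, means rewriting a CR expression so that the per-point update above involves as few and as cheap operations as possible.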

Algorithms in Singular
(1999)

The purpose of GPS satellite-to-satellite tracking (GPS-SST) is to determine the gravitational potential at the earth's surface from measured ranges (geometrical distances) between a low-flying satellite and the high-flying satellites of the Global Positioning System (GPS). In this paper GPS satellite-to-satellite tracking is reformulated as the problem of determining the gravitational potential of the earth from given gradients at satellite altitude. Uniqueness and stability of the solution are investigated. The essential tool is to split the gradient field into a normal part (i.e. the first order radial derivative) and a tangential part (i.e. the surface gradient). Uniqueness is proved for polar, circular orbits corresponding to both types of data (first radial derivative and/or surface gradient). In both cases gravity recovery based on satellite-to-satellite tracking turns out to be an exponentially ill-posed problem. As an appropriate solution method, regularization in terms of spherical wavelets is proposed, based on the knowledge of the singular system. Finally, the method is generalized to a non-spherical earth and a non-spherical orbital surface, based on combined terrestrial and satellite data material.

Here we consider the Kohonen algorithm with a constant learning rate as a Markov process evolving in a topological space. It is shown that the process is an irreducible and aperiodic T-chain, regardless of the dimension of both data space and network and of the special shape of the neighborhood function. Moreover, the validity of Doeblin's condition is proved. These results imply the convergence in distribution of the process to a finite invariant measure with a uniform geometric rate. In addition we show that the process is positive Harris recurrent, which enables us to use statistical devices to measure its centrality and variability as time goes to infinity.
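
A one-dimensional sketch of the Kohonen update with constant learning rate, the Markov process studied above: at each step the winning unit and its direct neighbours move a fixed fraction toward the input. The network size, learning rate, and hard neighbourhood function are illustrative assumptions.

```python
import random

random.seed(1)

EPS = 0.1                       # constant learning rate
weights = [0.2, 0.5, 0.8]       # three units of a 1-d network on the line

def neighbourhood(winner, j):
    # Hard neighbourhood: the winner and its direct neighbours are updated
    return 1.0 if abs(winner - j) <= 1 else 0.0

def kohonen_step(weights, x):
    winner = min(range(len(weights)), key=lambda j: abs(weights[j] - x))
    return [w + EPS * neighbourhood(winner, j) * (x - w)
            for j, w in enumerate(weights)]

# Drive the chain with uniform inputs on [0, 1]; with a constant learning
# rate the weights keep fluctuating instead of converging pointwise
for _ in range(1000):
    weights = kohonen_step(weights, random.random())
```

The convergence-in-distribution results above concern exactly the long-run behaviour of this weight vector as a Markov chain, not pointwise convergence of the weights.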