## Fachbereich Mathematik

### Refine

#### Year of publication

- 1997 (36)

#### Document Type

- Preprint (31)
- Article (2)
- Diploma Thesis (1)
- Periodical (1)
- Report (1)

#### Keywords

- Anisotropic smoothness classes (1)
- Bayes risk (1)
- Brownian motion (1)
- Dense gas (1)
- Diffusion process (1)
- Elliptic-parabolic equation (1)
- Enskog equation (1)
- Function of bounded variation (1)
- Integral transform (1)
- Kohonen's SOM (1)

We derive minimax rates for estimation in anisotropic smoothness classes. This rate is attained by a coordinatewise thresholded wavelet estimator based on a tensor product basis with a separate scale parameter for every dimension. It is shown that this basis is superior to its one-scale multiresolution analogue if different degrees of smoothness are present in different directions. As an important application we introduce a new adaptive wavelet estimator of the time-dependent spectrum of a locally stationary time series. Using this model, which was recently developed by Dahlhaus, we show that the resulting estimator nearly attains the rate that is optimal in Gaussian white noise, simultaneously over a wide range of smoothness classes. Moreover, our new approach overcomes the difficulty of choosing the right amount of smoothing, i.e. of adapting to the appropriate resolution, for reconstructing the local structure of the evolutionary spectrum in the time-frequency plane.
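The coordinatewise thresholding step can be sketched in a few lines (a generic soft-threshold rule applied to an invented 2-D coefficient array; the noise level, grid, and universal threshold below are illustrative defaults, not the paper's anisotropic choice):

```python
import numpy as np

def soft_threshold(coeffs, threshold):
    """Coordinatewise soft thresholding: shrink each empirical
    coefficient towards zero and set the small ones to zero."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - threshold, 0.0)

# Hypothetical 2-D tensor-product coefficient array; the universal
# threshold sigma * sqrt(2 log n) is a standard default, used here
# only to make the sketch concrete.
rng = np.random.default_rng(0)
signal = np.outer([4.0, 0.0, 0.0, 3.0], [2.0, 0.0, 1.5, 0.0])
noisy = signal + 0.1 * rng.standard_normal(signal.shape)
denoised = soft_threshold(noisy, 0.1 * np.sqrt(2 * np.log(noisy.size)))
```

In the anisotropic setting of the paper, the threshold would additionally depend on the separate scale index of each dimension.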

Many problems arising in (geo)physics and technology can be formulated as compact operator equations of the first kind \(AF = G\). Due to the ill-posedness of the equation, a variety of regularization methods are in discussion for an approximate solution, where particular emphasis must be put on balancing the data error against the approximation error. In doing so one is interested in optimal parameter choice strategies. In this paper our interest lies in an efficient algorithmic realization of a special class of regularization methods. More precisely, we implement regularization methods based on filtered singular value decomposition as a wavelet analysis. This enables us to perform, e.g., Tikhonov-Phillips regularization as a multiresolution. In other words, we are able to pass from one regularized solution to another by adding or subtracting so-called detail information in terms of wavelets. It is shown that regularization wavelets as proposed here are efficiently applicable to a future problem in satellite geodesy, viz. satellite gravity gradiometry.
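The filtered singular value decomposition idea can be sketched as follows (a generic Tikhonov-type filter on an invented toy operator; the wavelet multiresolution structure of the paper is not reproduced here, and the matrix, data, and parameter are purely illustrative):

```python
import numpy as np

def filtered_svd_solve(A, g, alpha):
    """Regularized solution of A f = g via filtered SVD: each
    singular component is damped by the Tikhonov filter factor
    s**2 / (s**2 + alpha) before back-substitution."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filters = s**2 / (s**2 + alpha)    # Tikhonov-Phillips filter factors
    coeffs = filters * (U.T @ g) / s   # damped spectral coefficients
    return Vt.T @ coeffs

# Toy ill-conditioned operator: alpha -> 0 recovers the naive inverse,
# larger alpha trades data fidelity for stability.
A = np.array([[1.0, 0.0], [0.0, 1e-4]])
f_true = np.array([1.0, 2.0])
g = A @ f_true
f_reg = filtered_svd_solve(A, g, alpha=1e-6)
```

Passing from one regularized solution to another, as described above, corresponds to changing the filter factors and adding or subtracting the resulting detail components.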

An asymptotic-induced scheme for nonstationary transport equations with the diffusion scaling is developed. The scheme works uniformly for all ranges of mean free paths. It is based on the asymptotic analysis of the diffusion limit of the transport equation. A theoretical investigation of the behaviour of the scheme in the diffusion limit is given and an approximation property is proven. Moreover, numerical results for different physical situations are shown and the uniform convergence of the scheme is established numerically.

Metaharmonic wavelets are introduced for constructing the solution of the Helmholtz equation (reduced wave equation) corresponding to Dirichlet's or Neumann's boundary values on a closed surface. An approach leading to exact reconstruction formulas is considered in more detail. A scale discrete version of multiresolution is described for potential functions metaharmonic outside the closed surface and satisfying the radiation condition at infinity. Moreover, we discuss fully discrete wavelet representations of band-limited metaharmonic potentials. Finally, a decomposition and reconstruction (pyramid) scheme for economical numerical implementation is presented for Runge-Walsh wavelet approximation.

Liegruppen
(1997)

Sudakov's typical marginals, random linear functionals and a conditional central limit theorem
(1997)

V.N. Sudakov [Sud78] proved that the one-dimensional marginals of a high-dimensional second-order measure are close to each other in most directions. Extending this and a related result in the context of projection pursuit by P. Diaconis and D. Freedman [Dia84], we give, for a probability measure P and a random (a.s.) linear functional F on a Hilbert space, simple sufficient conditions under which most of the one-dimensional images of P under F are close to their canonical mixture, which turns out to be almost a mixed normal distribution. Using the concept of approximate conditioning, we deduce a conditional central limit theorem (Theorem 3) for random averages of triangular arrays of random variables which satisfy only fairly weak asymptotic orthogonality conditions.

Primary decomposition of an ideal in a polynomial ring over a field belongs to the indispensable theoretical tools in commutative algebra and algebraic geometry. Geometrically it corresponds to the decomposition of an affine variety into irreducible components and is, therefore, also an important geometric concept. The decomposition of a variety into irreducible components is, however, slightly weaker than the full primary decomposition, since the irreducible components correspond only to the minimal primes of the ideal of the variety, which is a radical ideal. The embedded components, although invisible in the decomposition of the variety itself, are responsible for many geometric properties, in particular if we deform the variety slightly. Therefore, they cannot be neglected, and knowledge of the full primary decomposition is important also in a geometric context.

In contrast to this theoretical importance, one finds only very few concrete examples of non-trivial primary decompositions in mathematical papers, because carrying out such a decomposition by hand is almost impossible. This experience corresponds to the fact that providing efficient algorithms for the primary decomposition of an ideal \(I \subseteq K[x_1, \dots, x_n]\), \(K\) a field, is also a difficult task and still one of the big challenges for computational algebra and computational algebraic geometry.

All known algorithms require Gröbner bases respectively characteristic sets and multivariate polynomial factorization over some (algebraic or transcendental) extension of the given field \(K\). The first practical algorithm for computing the minimal associated primes is based on characteristic sets and the Ritt-Wu process ([R1], [R2], [Wu], [W]); the first practical and general primary decomposition algorithm was given by Gianni, Trager and Zacharias [GTZ]. New ideas from homological algebra were introduced by Eisenbud, Huneke and Vasconcelos in [EHV]. Recently, Shimoyama and Yokoyama [SY] provided a new algorithm, using Gröbner bases, to obtain the primary decomposition from the given minimal associated primes.

In the present paper we present all four approaches together with some improvements and with detailed comparisons, based upon an analysis of 34 examples using the computer algebra system SINGULAR [GPS]. Since primary decomposition is a fairly complicated task, it is best explained by dividing it into several subtasks; moreover, sometimes only one of these subtasks is needed in practice. The paper is organized in such a way that we consider the subtasks separately and present the different approaches of the above-mentioned authors, with several tricks and improvements incorporated. Some of these improvements, and the combination of certain steps from the different algorithms, are essential for improving the practical performance.
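A minimal worked example (a textbook standard, not one of the paper's 34 test cases) shows the gap between the variety decomposition and the full primary decomposition:

```latex
% In K[x,y] the ideal (x^2, xy) decomposes as
\[
  (x^2,\, xy) \;=\; (x) \,\cap\, (x,y)^2,
  \qquad
  \operatorname{Ass}\bigl(K[x,y]/(x^2,xy)\bigr) \;=\; \{\,(x),\ (x,y)\,\}.
\]
% The variety V(x^2, xy) is just the line x = 0, i.e. V(x); the
% embedded prime (x, y) is invisible in the variety itself but
% records the extra structure concentrated at the origin.
```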

\(C^0\)-scalar-type spectrality criteria for operators \(A\) whose resolvent set contains the negative reals are provided. The criteria are given in terms of growth conditions on the resolvent of \(A\) and the semigroup generated by \(A\). These criteria characterize scalar-type operators on the Banach space \(X\) if and only if \(X\) has no subspace isomorphic to the space of complex null-sequences.

It is of basic interest to assess the quality of the decisions of a statistician, based on the data obtained from a statistical experiment, in the context of a given model class P of probability distributions. The statistician picks a particular distribution P, suffering a loss by not picking the 'true' distribution P'. There are several relevant loss functions, one being based on the relative entropy function or Kullback-Leibler information distance. In this paper we prove a general 'minimax risk equals maximin (Bayes) risk' theorem for the Kullback-Leibler loss under the hypothesis of a dominated and compact family of distributions over a Polish observation space with suitably integrable densities. We also find that there is always an optimal Bayes strategy (i.e. a suitable prior) achieving the minimax value. Further, we see that every such minimax optimal strategy leads to the same distribution P in the convex closure of the model class. Finally, we give some examples to illustrate the results and to indicate how the minimax result is reflected in the structure of least favorable priors. This paper is mainly based on parts of the author's doctoral thesis.
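The minimax-equals-maximin statement can be made concrete on a toy model class (a hypothetical pair of distributions on a two-point space; the grid search below merely stands in for the analytic theory, and by symmetry the minimax choice is the equal mixture):

```python
import math

def kl(p, q):
    """Kullback-Leibler information distance between finite distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def mix(w, p, q):
    """Convex mixture w*p + (1-w)*q, coordinatewise."""
    return tuple(w * a + (1 - w) * b for a, b in zip(p, q))

# Hypothetical two-element model class on a 2-point observation space.
P1, P2 = (0.9, 0.1), (0.1, 0.9)

def worst_risk(w):
    """Worst-case KL loss of announcing the mixture Q = w*P1 + (1-w)*P2."""
    q = mix(w, P1, P2)
    return max(kl(P1, q), kl(P2, q))

# Grid search for the minimax mixture weight.
best_w = min((w / 100 for w in range(1, 100)), key=worst_risk)
```

At the minimax mixture the two risks coincide, illustrating that the optimal strategy equalizes the loss over the model class, as a least favorable prior does.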

In this note, answering a question of N. Maslova, we give a two-dimensional elementary example of the phenomenon indicated in the title. Perhaps this simple example may serve as an object of comparison for more refined models like in the theory of kinetic differential equations where similar questions still seem to be unsettled.

The observation of an ergodic Markov chain asymptotically allows perfect identification of the transition matrix. In this paper we determine the rate of the information contained in the first n observations, provided the unknown transition matrix belongs to a known finite set. As an essential tool we prove new refinements of the large deviation theory of the empirical pair measure of finite Markov chains. Keywords: Markov Chain, Entropy, Bayes risk, Large Deviations.
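The empirical pair measure underlying these large deviation refinements can be sketched as follows (the state space and observed path are invented for illustration):

```python
from collections import Counter

def empirical_pair_measure(path):
    """Relative frequency of each consecutive state pair (x_k, x_{k+1})
    along an observed path of a finite Markov chain."""
    pairs = list(zip(path, path[1:]))
    counts = Counter(pairs)
    n = len(pairs)
    return {pair: c / n for pair, c in counts.items()}

# Hypothetical observation of a chain on the states {0, 1}.
path = [0, 1, 1, 0, 1, 0, 0, 1, 1]
pm = empirical_pair_measure(path)
```

Normalizing each row of this pair measure by the empirical frequency of the first coordinate gives the usual estimate of the transition matrix from the first n observations.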

In the Banach space \(c_0\) there exists a continuous function of bounded semivariation which does not correspond to a countably additive vector measure. This result is in contrast to the scalar case, and it has consequences for the characterization of scalar-type operators. Besides this negative result, we introduce the notion of functions of unconditionally bounded variation, which are exactly the generators of countably additive vector measures.

An analogue of the classical Riemann-Siegel integral formula for Dirichlet series associated to cusp forms is developed. As an application of the formula, we give a comparatively simple proof of the approximate functional equation for this type of Dirichlet series.

We show that the occupation measure on the path of a planar Brownian motion run for an arbitrary finite time interval has an average density of order three with respect to the gauge function \(t^2 \log(1/t)\). This is a surprising result, as it seems to be the first instance where gauge functions other than \(t^s\) and average densities of order higher than two appear naturally. We also show that the average density of order two fails to exist, and prove that the density distributions, or lacunarity distributions, of order three of the occupation measure of a planar Brownian motion are gamma distributions with parameter 2.

The Multiple Objective Median Problem involves locating a new facility so that a vector of performance criteria is optimized over a given set of existing facilities. A variation of this problem is obtained if the existing facilities are situated on the two sides of a linear barrier. Such barriers, like rivers, highways, borders, or mountain ranges, are frequently encountered in practice. In this paper, the theory of the Multiple Objective Median Problem with line barriers is developed. As this problem is nonconvex but specially structured, a reduction to a series of convex optimization problems is proposed. The general results lead to a polynomial algorithm for finding the set of efficient solutions. The algorithm is proposed for bi-criteria problems with different measures of distance.
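As background for the median objective itself (the classical barrier-free case, not the paper's barrier contribution; the facility coordinates and weights are invented), the rectilinear 1-median separates into coordinatewise weighted medians:

```python
def weighted_median(values, weights):
    """Smallest point at which the cumulative weight reaches half the
    total weight: a minimizer of sum_i w_i * |x - v_i|."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    half, acc = sum(weights) / 2.0, 0.0
    for i in order:
        acc += weights[i]
        if acc >= half:
            return values[i]

def rectilinear_median(points, weights):
    """1-median under the l1 (rectilinear) distance: take the weighted
    median in each coordinate separately."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (weighted_median(xs, weights), weighted_median(ys, weights))

# Hypothetical existing facilities with unit weights.
facilities = [(0.0, 0.0), (4.0, 1.0), (2.0, 5.0)]
site = rectilinear_median(facilities, [1.0, 1.0, 1.0])
```

With a line barrier, distances between facilities on opposite sides must pass through barrier gaps, which destroys this separability and motivates the reduction to a series of convex subproblems described above.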

In this paper a group of participants of the 12th European Summer Institute, which took place in Tenerife, Spain, in June 1995, present their views on the state of the art and future trends in Locational Analysis. The issues discussed include modelling aspects in discrete, network and continuous location, heuristic techniques, the state of technology, and undesirable facility location. Some general questions are stated regarding the applicability of location models, promising research directions, and the way technology affects the development of solution techniques.

MP Prototype Specification
(1997)

A first explicit connection between finitely presented commutative monoids and ideals in polynomial rings was used in 1958 by Emelichev, yielding a solution to the word problem in commutative monoids by deciding the ideal membership problem. The aim of this paper is to show in a similar fashion how congruences on monoids and groups can be characterized by ideals in the respective monoid and group rings. These characterizations make it possible to transfer well-known results from the theory of string rewriting systems for presenting monoids and groups to the algebraic setting of subalgebras and ideals in monoid respectively group rings. Moreover, natural one-sided congruences defined by subgroups of a group are connected to one-sided ideals in the respective group ring, and hence the subgroup problem and the ideal membership problem are directly related. For several classes of finitely presented groups we show explicitly how Gröbner basis methods are related to existing solutions of the subgroup problem by rewriting methods. For the case of general monoids and submonoids, weaker results are presented. In fact, it becomes clear that string rewriting methods for monoids and groups can be lifted in a natural fashion to define reduction relations in monoid and group rings.
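The string-rewriting side can be illustrated with a tiny normal-form computation (a hypothetical convergent system presenting the monoid with relations \(a^2 = b^2 = 1\), \(ba = ab\), chosen only for the sketch, not taken from the paper):

```python
def normal_form(word, rules):
    """Repeatedly rewrite 'word' with the given string rewriting rules
    until no left-hand side occurs; this terminates here because each
    rule either shortens the word or moves an 'a' leftward."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in word:
                word = word.replace(lhs, rhs, 1)
                changed = True
                break
    return word

# Hypothetical presentation: a^2 -> 1, b^2 -> 1, ba -> ab.
# Every word then reduces to a^i b^j with i, j in {0, 1}.
rules = [("aa", ""), ("bb", ""), ("ba", "ab")]
nf = normal_form("babab", rules)
```

Reduction relations of this kind are what the paper lifts from monoids and groups to the corresponding monoid and group rings.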