## Fachbereich Mathematik

### Refine

#### Document Type

- Preprint (604)

#### Keywords

- Mehrskalenanalyse (10)
- Wavelet (9)
- Approximation (8)
- Boltzmann Equation (7)
- Inverses Problem (7)
- Location Theory (7)
- Numerical Simulation (7)
- Gravitationsfeld (5)
- integer programming (5)
- wavelets (5)

Using data made available by the Gutachterausschuss (expert appraisal committee) of the city of Kaiserslautern, we investigate which factors influence the market value of a developed property. Based on these findings, a formula as simple as possible is to be derived that yields an estimate of the market value while taking into account the purchase prices achieved in the past. Multiple linear regression is a natural method for this task. The theoretical foundations are not discussed here; they can be found in any book on mathematical statistics, or in [1]. The analysis of the data largely follows the approach described by Angelika Schwarz in [1]. Her results cannot be transferred directly, however, since the properties considered there were undeveloped. Since the statistical evaluation of large data sets involves an immense computational effort, the use of professional statistical software is indispensable. The program S-Plus 2.0 (PC version for Windows) was available; all computations and all graphics in this report were produced with S-Plus.
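The core computation described above, a least-squares fit of a multiple linear regression model, can be sketched as follows; the single predictor and the toy data are hypothetical placeholders, not the report's actual variables:

```python
# Least-squares fit of a multiple linear regression model
# price ~ b0 + b1 * area (hypothetical predictor, toy data).
import numpy as np

def fit_linear_regression(X, y):
    """Return coefficients (intercept first) minimizing ||y - A b||^2."""
    A = np.column_stack([np.ones(len(y)), X])   # prepend an intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Toy data generated exactly as price = 100 + 2 * area, so the fit recovers it.
X = np.array([[50.0], [80.0], [120.0]])
y = 100.0 + 2.0 * X[:, 0]
b = fit_linear_regression(X, y)
print(np.round(b, 6))  # recovers intercept 100 and slope 2
```

In a real study the design matrix would contain one column per influencing factor, and residual diagnostics would guide variable selection.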

We consider the problem of evacuating several regions due to river flooding, where sufficient time is given to plan ahead. To ensure a smooth evacuation procedure, our model includes the decision of which regions to assign to which shelter and when evacuation orders should be issued, so that roads do not become congested.
Due to uncertainty in the weather forecast, several possible scenarios are considered simultaneously in a robust optimization framework. To solve the resulting integer program, we apply a tabu search algorithm based on decomposing the problem into more tractable subproblems. Computational experiments on random instances and on an instance based on data from Kulmbach, Germany, show considerable improvement compared to an MIP solver provided with a strong starting solution.
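The paper's tabu search is specific to the evacuation model; as a generic illustration only, a tabu search over binary vectors with a single-flip neighborhood and a short-term tabu list might look like this (the objective and tenure are arbitrary toy choices, not taken from the paper):

```python
# Generic tabu search sketch: minimize f over binary vectors by single-bit
# flips, forbidding recently flipped positions (short-term tabu list).
from collections import deque

def tabu_search(f, x0, iters=100, tenure=3):
    x, best = list(x0), list(x0)
    tabu = deque(maxlen=tenure)           # positions flipped recently
    for _ in range(iters):
        candidates = []
        for i in range(len(x)):
            y = x[:]
            y[i] ^= 1
            # aspiration criterion: allow a tabu move if it beats the best
            if i not in tabu or f(y) < f(best):
                candidates.append((f(y), i, y))
        if not candidates:
            break
        _, i, x = min(candidates)         # best admissible neighbor
        tabu.append(i)
        if f(x) < f(best):
            best = x[:]
    return best

# Toy objective: Hamming distance to a hidden optimum.
target = [1, 0, 1, 1]
f = lambda v: sum(a != b for a, b in zip(v, target))
print(tabu_search(f, [0, 0, 0, 0]))  # finds [1, 0, 1, 1]
```

The tabu list lets the search escape local minima by temporarily forbidding reversals of recent moves, which is the mechanism the decomposition-based algorithm above also relies on.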

Zeitreihen und Modalanalyse
(1987)

This thesis is to be understood as part of the large project at the University of Kaiserslautern that, under the name Technomathematik, works towards the urgently needed communication between engineering and mathematics. The main guide was the book by Natke, Einführung in Theorie und Praxis der Zeitreihen- und Modalanalyse; the thesis presents the essential ideas of indirect system identification used there, together with the probabilistic and physical-engineering background.

Using particle methods to solve the Boltzmann equation for rarefied gases numerically, huge differences in the total number of particles per cell arise in realistic streaming problems. In order to overcome the resulting numerical difficulties, a weighted particle concept is well suited. The underlying idea is to use different particle masses in different cells depending on the macroscopic density of the gas. Discrepancy estimates and numerical results are given.

Given a finite set of points in the plane and a forbidden region \(R\), we want to find a point \(X \notin \mathrm{int}(R)\) such that the weighted sum of distances to all given points is minimized. This location problem is a variant of the well-known Weber problem, where we measure distance by polyhedral gauges and allow each of the weights to be positive or negative. The unit ball of a polyhedral gauge may be any convex polyhedron containing the origin. This large class of distance functions allows very general, practical settings such as asymmetry to be modeled. Each given point is allowed to have its own gauge, and the forbidden region \(R\) enables us to include negative information in the model. Additionally, the use of negative and positive weights allows us to include the level of attraction or repulsion of a new facility. Polynomial algorithms and structural properties for this global optimization problem (d.c. objective function and a non-convex feasible set), based on combinatorial and geometrical methods, are presented.
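The objective described above, a weighted sum of polyhedral gauge distances, can be sketched as follows; the unit ball is given by facet inequalities \(a_j\cdot y\le 1\), and the points, weights and gauge below are illustrative only (the forbidden region and the optimization itself are not handled):

```python
# Weber objective with polyhedral gauges and signed weights (illustration).
import numpy as np

def polyhedral_gauge(a_rows, x):
    """Gauge whose unit ball is {y : a_j . y <= 1 for all j}, 0 in its interior."""
    return max(0.0, max(float(a @ x) for a in a_rows))

def weber_objective(points, weights, gauges, x):
    """Weighted sum of gauge distances from x to the given points."""
    return sum(w * polyhedral_gauge(A, x - p)
               for p, w, A in zip(points, weights, gauges))

# The l1 ball |y1| + |y2| <= 1 as a polyhedral unit ball (four facets).
l1 = [np.array(s, dtype=float) for s in [(1, 1), (1, -1), (-1, 1), (-1, -1)]]
pts = [np.array([0.0, 0.0]), np.array([2.0, 0.0])]
# One attracting (w > 0) and one repelling (w < 0) point:
print(weber_objective(pts, [1.0, -0.5], [l1, l1], np.array([1.0, 0.0])))  # 0.5
```

An asymmetric gauge is obtained simply by choosing a unit polyhedron that is not symmetric about the origin; the same evaluation code applies.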

We introduce a class of models for time series of counts which include INGARCH-type models as well as log linear models for conditionally Poisson distributed data. For those processes, we formulate simple conditions for stationarity and weak dependence with a geometric rate. The coupling argument used in the proof serves as a role model for a similar treatment of integer-valued time series models based on other types of thinning operations.

By means of the limit and jump relations of classical potential theory the framework of a wavelet approach on a regular surface is established. The properties of a multiresolution analysis are verified, and a tree algorithm for fast computation is developed based on numerical integration. As applications of the wavelet approach some numerical examples are presented, including the zoom-in property as well as the detection of high frequency perturbations. At the end we discuss a fast multiscale representation of the solution of (exterior) Dirichlet's or Neumann's boundary-value problem corresponding to regular surfaces.

This work is dedicated to the wavelet modelling of regional and temporal variations of the Earth's gravitational potential observed by GRACE. In the first part, all required mathematical tools and methods involving spherical wavelets are introduced. Then we apply our method to monthly GRACE gravity fields. A strong seasonal signal can be identified, which is restricted to areas where large-scale redistributions of continental water mass are expected. This assumption is analyzed and verified by comparing the time series of regionally obtained wavelet coefficients of the gravitational signal derived from hydrology models with that of the gravitational potential observed by GRACE. The results are in good agreement with previous studies and illustrate that wavelets are an appropriate tool to investigate regional time-variable effects in the gravitational field.

In this paper we introduce a multiscale technique for the analysis of deformation phenomena of the Earth. Classically, the basis functions in use are globally defined and show polynomial character. In consequence, only a global analysis of deformations is possible, so that, for example, the water load of an artificial reservoir can hardly be modelled in that way. Until now, a local analysis could only be realized by assuming the investigated region to be flat. In what follows we propose a local analysis based on tools (Navier scaling functions and wavelets) that take the (spherical) surface of the Earth into account. Our approach, in particular, enables us to perform a zooming-in procedure. In fact, the concept of Navier wavelets is formulated in such a way that subregions with larger or smaller data density can accordingly be modelled with a higher or lower resolution of the model, respectively.

Wavelets on closed surfaces in Euclidean space R3 are introduced starting from a scale discrete wavelet transform for potentials harmonic down to a spherical boundary. Essential tools for approximation are integration formulas relating an integral over the sphere to suitable linear combinations of functional values (resp. normal derivatives) on the closed surface under consideration. A scale discrete version of multiresolution is described for potential functions harmonic outside the closed surface and regular at infinity. Furthermore, an exact fully discrete wavelet approximation is developed in case of band-limited wavelets. Finally, the role of wavelets is discussed in three problems, namely (i) the representation of a function on a closed surface from discretely given data, (ii) the (discrete) solution of the exterior Dirichlet problem, and (iii) the (discrete) solution of the exterior Neumann problem.

A multiscale method is introduced using spherical (vector) wavelets for the computation of the Earth's magnetic field within source regions of ionospheric and magnetospheric currents. The considerations are essentially based on two geomathematical keystones, namely (i) the Mie representation of solenoidal vector fields in terms of toroidal and poloidal parts and (ii) the Helmholtz decomposition of spherical (tangential) vector fields. Vector wavelets are shown to provide adequate tools for multiscale geomagnetic modelling in the form of a multiresolution analysis, thereby completely circumventing the numerical obstacles caused by vector spherical harmonics. The applicability and efficiency of the multiresolution technique are tested with real satellite data.

In this paper, the reflection and refraction of a plane wave at an interface between two half-spaces composed of triclinic crystalline material is considered. It is shown that, due to the incidence of a plane wave, three types of waves, namely quasi-P (qP), quasi-SV (qSV) and quasi-SH (qSH), are generated, governed by the propagation condition involving the acoustic tensor. A simple procedure is presented for the calculation of all three phase velocities of the quasi waves. The direction of particle motion is assumed to be neither parallel nor perpendicular to the direction of propagation, and relations are established between the directions of motion and propagation. Expressions for the reflection and refraction coefficients of qP, qSV and qSH waves are obtained. Numerical results for the reflection and refraction coefficients are presented for different types of anisotropic media and different types of incident waves; graphical representations are given for incident qP waves, while for incident qSV and qSH waves the numerical data are presented in two tables.

Vigenere-Verschlüsselung
(1999)
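The entry above consists only of a title; for context, the Vigenère cipher it refers to shifts each plaintext letter by the alphabet position of the corresponding, cyclically repeated keyword letter. A minimal sketch:

```python
# Vigenère encryption: shift each letter by the position (A=0) of the
# corresponding keyword letter, with the keyword repeated cyclically.
from itertools import cycle

def vigenere_encrypt(plaintext, key):
    out = []
    shifts = cycle(ord(k) - ord('A') for k in key.upper())
    for ch in plaintext.upper():
        if ch.isalpha():
            # non-letters pass through without consuming a key letter
            out.append(chr((ord(ch) - ord('A') + next(shifts)) % 26 + ord('A')))
        else:
            out.append(ch)
    return ''.join(out)

print(vigenere_encrypt("ATTACKATDAWN", "LEMON"))  # LXFOPVEFRNHR
```

Decryption is the same loop with the shift subtracted instead of added.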

The mathematical modelling of problems in science and engineering often leads to partial differential equations in time and space with boundary and initial conditions. The boundary value problems can be written as extremal problems (principle of minimal potential energy), as variational equations (principle of virtual power) or as classical boundary value problems. There are connections concerning existence and uniqueness results between these formulations, which will be investigated using the powerful tools of functional analysis. The first part of the lecture is devoted to the analysis of linear elliptic boundary value problems given in variational form. The second part deals with the numerical approximation of the solutions of the variational problems. Galerkin methods such as FEM and BEM are the main tools. The h-version will be discussed, and an error analysis will be carried out. Examples, especially from elasticity theory, demonstrate the methods.

The shortest path problem, in which the \((s,t)\)-paths \(P\) of a given digraph \(G=(V,E)\) are compared with respect to the sum of their edge costs, is one of the best known problems in combinatorial optimization. The paper is concerned with a number of variations of this problem having different objective functions like bottleneck, balanced, minimum deviation, algebraic sum, \(k\)-sum and \(k\)-max objectives, \((k_1,k_2)\)-max, \((k_1,k_2)\)-balanced and several types of trimmed-mean objectives. We give a survey of existing algorithms and propose a general model for those problems not yet treated in the literature. The latter is based on the solution of resource constrained shortest path problems with equality constraints, which can be solved in pseudo-polynomial time if the given graph is acyclic and the number of resources is fixed. In our setting, however, these problems can be solved in strongly polynomial time. Combining this with known results on \(k\)-sum and \(k\)-max optimization for general combinatorial problems, we obtain strongly polynomial algorithms for a variety of path problems on acyclic and general digraphs.
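One of the surveyed objectives, the bottleneck objective (minimize the maximum edge cost along a path), admits a simple Dijkstra-like label-setting scheme; a minimal sketch on a hypothetical toy digraph, not an example from the paper:

```python
# Bottleneck shortest path: minimize the maximum edge cost on an (s,t)-path.
import heapq

def bottleneck_shortest_path(graph, s, t):
    """graph: dict node -> list of (neighbor, cost). Returns the optimal
    bottleneck value, i.e. the smallest achievable maximum edge cost."""
    best = {s: 0}                          # best known bottleneck per node
    heap = [(0, s)]
    while heap:
        b, u = heapq.heappop(heap)
        if u == t:
            return b
        if b > best.get(u, float('inf')):
            continue                        # stale heap entry
        for v, c in graph.get(u, []):
            nb = max(b, c)                  # bottleneck of the extended path
            if nb < best.get(v, float('inf')):
                best[v] = nb
                heapq.heappush(heap, (nb, v))
    return float('inf')

G = {'s': [('a', 5), ('b', 2)], 'a': [('t', 1)], 'b': [('t', 4)]}
print(bottleneck_shortest_path(G, 's', 't'))  # 4, via s-b-t (max(2, 4))
```

Replacing `max(b, c)` by `b + c` recovers the classical sum objective, which is the kind of structural similarity the survey exploits.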

Value Preserving Strategies and a General Framework for Local Approaches to Optimal Portfolios
(1999)

We present some new general results on the existence and form of value preserving portfolio strategies in a general semimartingale setting. The concept of value preservation will be derived via a mean-variance argument. It will also be embedded into a framework for local approaches to the problem of portfolio optimisation.

Universal Shortest Paths
(2010)

We introduce the universal shortest path problem (Univ-SPP), which generalizes both classical and new shortest path problems. Starting with the definition of the even more general universal combinatorial optimization problem (Univ-COP), we show that a variety of objective functions for general combinatorial problems can be modeled if all feasible solutions have the same cardinality. Since this assumption is, in general, not satisfied when considering shortest paths, we give two alternative definitions for Univ-SPP, one based on a sequence of cardinality constrained subproblems, the other using an auxiliary construction to establish uniform length for all paths between source and sink. Both alternatives are shown to be (strongly) NP-hard and they can be formulated as quadratic integer or mixed integer linear programs. On graphs with specific assumptions on edge costs and path lengths, the second version of Univ-SPP can be solved as a classical sum shortest path problem.

An asymptotic preserving numerical scheme (with respect to diffusion scalings) for a linear transport equation is investigated. The scheme is adopted from a class of recently developed schemes. Stability is proven uniformly in the mean free path under a CFL-type condition which turns into a parabolic CFL condition in the diffusion limit.

In this project, the formation of vortices in the flow of a gas around a corner is to be investigated numerically. Various numerical methods are to be tested and the results compared with experimental data. Furthermore, it is to be examined how well these methods can be vectorized, since complicated two-dimensional and even simple three-dimensional problems of fluid dynamics cannot be solved in reasonable time on today's general-purpose computers. The numerical computations are carried out on the CYBER 205 in Karlsruhe.

In this project, the formation of vortices in the flow of a gas around a corner is to be investigated numerically. Various numerical methods are to be tested and the results compared with experimental data. Furthermore, it is to be examined how well these methods can be vectorized, since more complicated two-dimensional and even simple three-dimensional problems of fluid dynamics cannot be solved in reasonable time on today's general-purpose computers, especially when, as at the University of Kaiserslautern, only a relatively slow machine (Siemens 7551/7561) is available. The numerical computations are carried out on the CYBER 205 in Karlsruhe.

In the paper we discuss the transition from kinetic theory to macroscopic fluid equations, where the macroscopic equations are defined as asymptotic limits of a kinetic equation. This relation can be used to derive computationally efficient domain decomposition schemes for the simulation of rarefied gas flows close to the continuum limit. Moreover, we present some basic ideas for the derivation of kinetically induced numerical schemes for macroscopic equations, namely kinetic schemes for general conservation laws as well as lattice Boltzmann methods for the incompressible Navier-Stokes equations.

Toying with Jordan matrices
(1996)

Several topological necessary conditions for smooth stabilization in the large have been obtained. In particular, if a smooth single-input nonlinear system is smoothly stabilizable in the large at some point of a connected component of its equilibrium set, then this connected component must be an unknotted, unbounded curve.

This paper presents a wavelet analysis of temporal and spatial variations of the Earth's gravitational potential based on tensor product wavelets. The time-space wavelet concept is realized by combining Legendre wavelets for the time domain with spherical wavelets for the space domain. In consequence, a multiresolution analysis for both temporal and spatial resolution is formulated within a unified concept. The method is then realized numerically, first on synthetically generated data and finally on several real data sets.

In this paper a known orthonormal system of time- and space-dependent functions, derived from the Cauchy-Navier equation for elastodynamic phenomena, is used to construct reproducing kernel Hilbert spaces. After choosing one of the spaces, the corresponding kernel is used to define a function system that serves as a basis for a spline space. We show that under certain conditions there exists a unique interpolating or approximating spline in this space with respect to given samples of an unknown function. The name "spline" here refers to its property of minimising a norm among all interpolating functions. Moreover, a convergence theorem and an error estimate relative to the point grid density are derived. As a numerical example we investigate the propagation of seismic waves.

The Trippstadt Problem
(1984)

Close to Kaiserslautern is the town of Trippstadt, which, together with five other small towns, forms a local administration unit (Verbandsgemeinde) called Kaiserslautern-Süd. Trippstadt has its own beautiful public swimming pool, which causes problems though; the cost for the upkeep of the pool is higher than the income and thus has to be divided among the towns belonging to the Verbandsgemeinde. Because of this problem the administration wanted to find out which fraction of the total number of pool visitors came from the different towns. They planned to ask each pool guest where he came from. They did this for only three days, though, because the waiting lines at the cashiers became unbearably long and it was clear that the total number of guests would decrease as a result. Then they wondered how to find a better method to get the same data, and that was when I was asked to help with the solution of the problem.

This paper is concerned with numerical algorithms for the bipolar quantum drift diffusion model. For the thermal equilibrium case a quasi-gradient method minimizing the energy functional is introduced and strong convergence is proven. The computation of current-voltage characteristics is performed by means of an extended Gummel iteration. It is shown that the involved fixed point mapping is a contraction for small applied voltages. In this case the model equations are uniquely solvable and convergence of the proposed iteration scheme follows. Numerical simulations of a one-dimensional resonant tunneling diode are presented. The computed current-voltage characteristics are in good qualitative agreement with experimental measurements. The appearance of negative differential resistances is verified for the first time in a quantum drift diffusion model.

In this work we introduce a new bandlimited spherical wavelet: the Bernstein wavelet. It possesses a couple of interesting properties; in particular, we are able to construct bandlimited wavelets free of oscillations. The scaling function of this wavelet is investigated with regard to the spherical uncertainty principle, i.e., its localization in the space domain as well as in the momentum domain is calculated and compared to the well-known Shannon scaling function. Surprisingly, they possess the same localization in space, although one is highly oscillating whereas the other shows no oscillatory behavior. Moreover, the Bernstein scaling function turns out to be the first bandlimited scaling function known in the literature whose uncertainty product tends to the minimal value 1.

In this paper we consider a certain class of geodetic linear inverse problems \(\Lambda F = G\) in a reproducing kernel Hilbert space setting to obtain a bounded generalized inverse of the operator \(\Lambda\). For a numerical realization we assume \(G\) to be given at a finite number of discrete points, to which we apply a spherical spline interpolation method adapted to the Hilbert spaces. By applying the generalized inverse to the obtained spline interpolant we get an approximation of the solution \(F\). Finally, our main task is to show some properties of the approximate solution and to prove convergence results as the data set increases.

The performance of a combustion engine is essentially determined by the charge cycle, i.e. by the inflow of fresh air through the inlet pipe into the cylinder after a combustion cycle. The amount of air, exchanged during this process, depends on many factors, e.g. the number of revolutions per minute, the temperature, the engine and valve geometry. In order to have a tool in designing the engine one is interested in calculating this amount. The proper calculation would involve the solution of three-dimensional hydrodynamical equations governing the gas flow including chemical reactions in a complicated geometry, consisting of the cylinder, valves, inlet and outlet pipe. Since this is clearly too ambitious, we consider a simplified model.

We consider optimal design problems for semiconductor devices which are simulated using the energy transport model. We develop a descent algorithm based on the adjoint calculus and present numerical results for a ballistic diode. Further, we compare the optimal doping profile with results computed on the basis of the drift diffusion model. Finally, we exploit the model hierarchy and test the space mapping approach, especially the aggressive space mapping algorithm, for the design problem. This yields a significant reduction of numerical costs and programming effort.

Due to natural or man-made disasters, the evacuation of a whole region or city may become necessary. Apart from private traffic, the evacuation from collection points to secure shelters outside the endangered region will be realized by a bus fleet made available by emergency relief services. The arising Bus Evacuation Problem (BEP) is a vehicle scheduling problem in which a given number of evacuees needs to be transported from a set of collection points to a set of capacitated shelters, minimizing the total evacuation time, i.e., the time needed until the last person is brought to safety.
In this paper we consider an extended version of the BEP, the Robust Bus Evacuation Problem (RBEP), in which the exact numbers of evacuees are not known but stem from a set of probable scenarios. After a given reckoning time, this uncertainty is eliminated and planners are given exact figures. The problem is to decide, for each bus, whether it is better to send it right away, using uncertain numbers of evacuees, or to wait until the numbers become known.
We present a mixed-integer linear programming formulation for the RBEP and discuss solution approaches; in particular, we present a tabu search framework for finding heuristic solutions of acceptable quality within short computation time. In computational experiments using both randomly generated instances and the real-world scenario of evacuating the city of Kaiserslautern, we compare our solution approaches.

In the present paper we investigate Rayleigh-Bénard convection in rarefied gases and demonstrate by numerical experiments the transition from pure thermal conduction to a natural convective flow for a large range of Knudsen numbers from 0.02 down to 0.001. We address the question of how the critical value of the Rayleigh number defined for incompressible viscous flows may be translated to rarefied gas flows. Moreover, the simulations obtained for a Knudsen number Kn=0.001 and Froude number Fr=1 show a further transition from regular Rayleigh-Bénard cells to purely unsteady behaviour with moving vortices.

The thermal equilibrium state of a bipolar, isothermal quantum fluid confined to a bounded domain \(\Omega\subset\mathbb{R}^d\), \(d=1,2\) or \(3\), is the minimizer of the total energy \({\mathcal E}_{\epsilon\lambda}\), which involves the squares of the scaled Planck constant \(\epsilon\) and the scaled minimal Debye length \(\lambda\). In applications one frequently has \(\lambda^2\ll 1\). In these cases the zero-space-charge approximation is rigorously justified. As \(\lambda \to 0\), the particle densities converge to the minimizer of a limiting quantum zero-space-charge functional exactly in those cases where the doping profile satisfies certain compatibility conditions. Under natural additional assumptions on the internal energies one gets a differential-algebraic system for the limiting \((\lambda=0)\) particle densities, namely the quantum zero-space-charge model. The analysis of the subsequent limit \(\epsilon \to 0\) exhibits the importance of quantum gaps. The semiclassical zero-space-charge model is, for small \(\epsilon\), a reasonable approximation of the quantum model if and only if the quantum gap vanishes. The simultaneous limit \(\epsilon=\lambda \to 0\) is also analyzed.

In this article we prove existence and uniqueness results for solutions to the outer oblique boundary problem for the Poisson equation under very weak assumptions on the boundary, coefficients and inhomogeneities. The main tools are the Kelvin transformation and the solution operator for the regular inner problem provided in [1]. Moreover, we prove regularization results for the weak solutions of both the inner and the outer problem. We investigate the non-admissible direction for the oblique vector field, state results for stochastic inhomogeneities and provide a Ritz-Galerkin approximation. The results are applicable to problems from geomathematics, see e.g. [2] and [3].

Primary decomposition of an ideal in a polynomial ring over a field belongs to the indispensable theoretical tools in commutative algebra and algebraic geometry. Geometrically it corresponds to the decomposition of an affine variety into irreducible components and is, therefore, also an important geometric concept. The decomposition of a variety into irreducible components is, however, slightly weaker than the full primary decomposition, since the irreducible components correspond only to the minimal primes of the ideal of the variety, which is a radical ideal. The embedded components, although invisible in the decomposition of the variety itself, are responsible for many geometric properties, in particular if we deform the variety slightly. Therefore, they cannot be neglected, and knowledge of the full primary decomposition is important also in a geometric context. In contrast to its theoretical importance, one finds in mathematical papers only very few concrete examples of non-trivial primary decompositions, because carrying out such a decomposition by hand is almost impossible. This experience corresponds to the fact that providing efficient algorithms for the primary decomposition of an ideal \(I \subseteq K[x_1,\ldots,x_n]\), \(K\) a field, is also a difficult task and still one of the big challenges for computational algebra and computational algebraic geometry. All known algorithms require Gröbner bases respectively characteristic sets and multivariate polynomial factorization over some (algebraic or transcendental) extension of the given field \(K\). The first practical algorithm for computing the minimal associated primes is based on characteristic sets and the Ritt-Wu process ([R1], [R2], [Wu], [W]); the first practical and general primary decomposition algorithm was given by Gianni, Trager and Zacharias [GTZ]. New ideas from homological algebra were introduced by Eisenbud, Huneke and Vasconcelos in [EHV].
Recently, Shimoyama and Yokoyama [SY] provided a new algorithm, using Gröbner bases, to obtain the primary decomposition from the given minimal associated primes. In the present paper we present all four approaches together with some improvements and with detailed comparisons, based upon an analysis of 34 examples using the computer algebra system SINGULAR [GPS]. Since primary decomposition is a fairly complicated task, it is best explained by dividing it into several subtasks, in particular since sometimes only one of these subtasks is needed in practice. The paper is organized in such a way that we consider the subtasks separately and present the different approaches of the above-mentioned authors, with several tricks and improvements incorporated. Some of these improvements and the combination of certain steps from the different algorithms are essential for improving the practical performance.

In this report we treat an optimization task intended to speed up the choice of nonwovens for diaper production. A mathematical model for the liquid transport in nonwovens is developed. The main attention is focussed on the handling of fully and partially saturated zones, which leads to a parabolic-elliptic problem. Finite-difference schemes are proposed for the numerical solution of the differential problem. Parallel algorithms are considered and results of numerical experiments are given.

Fast reconstruction formulae in x-ray computerized tomography demand the directions, in which the measurements are taken, to be equally distributed over the whole circle. In many applications data can only be provided in a restricted range. Here the intrinsic difficulties are studied by giving a singular value decomposition of the Radon transform in a restricted range. Practical limitations are deduced.

Integral equations on the half line are commonly approximated by the finite-section approximation, in which the infinite upper limit is replaced by a positive number called the finite-section parameter. In this paper we consider the finite-section approximation for first-kind integral equations, which are typically ill-posed and call for regularization. For some classes of such equations, corresponding to inverse problems from optics and astronomy, we indicate the finite-section parameters that allow standard regularization techniques to be applied. Two discretization schemes for the finite-section equations are also proposed and their efficiency is studied.

In this paper we consider the location of stops along the edges of an already existing public transportation network. This can be the introduction of bus stops along some given bus routes, or of railway stations along the tracks in a railway network. The positive effect of new stops is the better access of potential customers to their closest station, while the increase in travel time caused by the additional stopping activities of the trains is a negative effect. The goal is to cover all given demand points with a minimal amount of additional traveling time, where covering may be defined with respect to an arbitrary norm (or even a gauge). Unfortunately, this problem is NP-hard, even if only the Euclidean distance is used. In this paper, we give a reduction to a finite candidate set leading to a discrete set covering problem. Moreover, we identify network structures in which the coefficient matrix of the resulting set covering problem is totally unimodular, and use this result to derive efficient solution approaches. Various extensions of the problem are also discussed.
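Once the reduction to a discrete set covering problem has been made, instances lacking the totally unimodular structure are often attacked heuristically; a minimal greedy set covering sketch, with hypothetical candidate stops and demand points:

```python
# Greedy set covering heuristic: repeatedly pick the candidate covering the
# most still-uncovered demand points.
def greedy_set_cover(universe, sets):
    """sets: dict name -> set of demand points that candidate covers."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        name, covered = max(sets.items(), key=lambda kv: len(kv[1] & uncovered))
        if not covered & uncovered:
            raise ValueError("universe not coverable by the given sets")
        chosen.append(name)
        uncovered -= covered
    return chosen

# Hypothetical candidate stops and the demand points each one would cover.
demand = {1, 2, 3, 4, 5}
stops = {'A': {1, 2, 3}, 'B': {3, 4}, 'C': {4, 5}}
print(greedy_set_cover(demand, stops))  # ['A', 'C']
```

In the weighted version of the paper's problem, the selection key would trade off covered demand against the additional traveling time each stop causes.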

The balance space approach (introduced by Galperin in 1990) provides a new view on multicriteria optimization. Looking at deviations from global optimality of the different objectives, balance points and balance numbers are defined, allowing either different or equal deviations for each objective. Apportioned balance numbers allow the specification of proportions among the deviations; through this concept the decision maker can be involved in the decision process. In this paper we prove that the apportioned balance number can be formulated by a min-max operator. Furthermore, we prove some relations between apportioned balance numbers and the balance set, and show how balance numbers are represented in the balance set. The main results are necessary and sufficient conditions for the balance set to be exhaustive, which means that by multiplying a vector of weights (proportions of deviation) with its corresponding apportioned balance number, a balance point is attained. The results are used to formulate an interactive procedure for multicriteria optimization. All results are illustrated by examples.

We show that the occupation measure on the path of a planar Brownian motion run for an arbitrary finite time interval has an average density of order three with respect to the gauge function t^2 log(1/t). This is a surprising result, as it seems to be the first instance where gauge functions other than t^s and average densities of order higher than two appear naturally. We also show that the average density of order two fails to exist, and prove that the density distributions, or lacunarity distributions, of order three of the occupation measure of a planar Brownian motion are gamma distributions with parameter 2.

In this paper we develop testing procedures for the detection of structural changes in nonlinear autoregressive processes. For the detection procedure we model the regression function by a single layer feedforward neural network. We show that CUSUM-type tests based on cumulative sums of estimated residuals, which have been intensively studied for linear regression, can be extended to this case. The limit distribution under the null hypothesis is obtained, which is needed to construct asymptotic tests. For a large class of alternatives it is shown that the tests have asymptotic power one. In this case, we obtain a consistent change-point estimator which is related to the test statistics. Power and size are further investigated in a small simulation study with a particular emphasis on situations where the model is misspecified, i.e. the data are not generated by a neural network but by some other regression function. As illustration, an application to the Nile data set as well as to S&P log-returns is given.
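As a rough sketch of the CUSUM idea (centered partial sums of residuals, with the maximum normalized excursion as test statistic; the variance estimate and data below are simplified assumptions, not the paper's exact construction):

```python
import math
import random

def cusum_statistic(residuals):
    """Maximum absolute normalized partial sum of centered residuals."""
    n = len(residuals)
    mean = sum(residuals) / n
    centered = [r - mean for r in residuals]
    sigma = math.sqrt(sum(r * r for r in centered) / (n - 1))
    s = max_abs = 0.0
    for r in centered:
        s += r
        max_abs = max(max_abs, abs(s))
    return max_abs / (sigma * math.sqrt(n))

random.seed(0)
no_change = [random.gauss(0.0, 1.0) for _ in range(200)]
with_change = ([random.gauss(0.0, 1.0) for _ in range(100)]
               + [random.gauss(2.0, 1.0) for _ in range(100)])
# a level shift inflates the partial sums, hence the statistic
print(cusum_statistic(no_change), cusum_statistic(with_change))
```

Under a structural change the partial sums drift, so the statistic becomes large; critical values come from the limit distribution under the null.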

In this paper, we deal with the problem of spherical interpolation of discretely given data of tensorial type. To this end, spherical tensor fields are investigated and a decomposition formula is described. Tensor spherical harmonics are introduced as eigenfunctions of a tensorial analogue of the Beltrami operator and discussed in detail. Based on these preliminaries, a spline interpolation process is described and error estimates are presented. Furthermore, some relations between the spline basis functions and the theory of radial basis functions are developed.

Tangent measure distributions were introduced by Bandt and Graf as a means to describe the local geometry of self-similar sets generated by iteration of contractive similitudes. In this paper we study the tangent measure distributions of hyperbolic Cantor sets generated by contractive mappings, which are not similitudes. We show that the tangent measure distributions of these sets equipped with either Hausdorff or Gibbs measure are unique almost everywhere and give an explicit formula describing them as probability distributions on the set of limit models of Bedford and Fisher.

Tangent measure distributions are a natural tool to describe the local geometry of arbitrary measures of any dimension. We show that for every measure on a Euclidean space and every s, at almost every point, all s-dimensional tangent measure distributions define statistically self-similar random measures. Consequently, the local geometry of general measures is not different from the local geometry of self-similar sets. We illustrate the strength of this result by showing how it can be used to improve recently proved relations between ordinary and average densities.

Symmetry properties of average densities and tangent measure distributions of measures on the line
(1995)

Answering a question by Bedford and Fisher we show that for every Radon measure on the line with positive and finite lower and upper densities the one-sided average densities always agree with one half of the circular average densities at almost every point. We infer this result from a more general formula, which involves the notion of a tangent measure distribution introduced by Bandt and Graf. This formula shows that the tangent measure distributions are Palm distributions and define self-similar random measures in the sense of U. Zähle.

This paper considers supervised multi-class image segmentation: from a labeled set of pixels in one image, we learn the segmentation and apply it to the rest of the image or to other similar images. We study approaches with p-Laplacians, (vector-valued) Reproducing Kernel Hilbert Spaces (RKHSs), and combinations of both. In all approaches we construct segment membership vectors. In the p-Laplacian model the segment membership vectors have to fulfill a certain probability simplex constraint. Interestingly, we can prove that in the case p=2 this is not really a constraint but is automatically fulfilled. While the 2-Laplacian model gives a good general segmentation, the 1-Laplacian tends to neglect smaller segments. The RKHS approach has the benefit of fast computation. This direction is motivated by image colorization, where a given dab of color is extended to a nearby region of similar features or to another image. The connection between colorization and multi-class segmentation is explored in this paper with an application to medical image segmentation. We further consider an improvement using a combined method. Each model is carefully validated with numerical experiments, followed by medical image segmentation at the end.
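The harmonic-extension flavor of a 2-Laplacian model can be illustrated on a toy graph (an assumed setup, not the paper's formulation): labeled nodes are fixed, and every unlabeled membership vector relaxes to the average of its neighbors, which minimizes the graph Dirichlet energy.

```python
def harmonic_segmentation(neighbors, labels, n_classes, sweeps=200):
    """Labeled nodes stay fixed; unlabeled membership vectors relax to the
    average of their neighbors (Gauss-Seidel on the 2-Laplacian)."""
    u = {v: [1.0 / n_classes] * n_classes for v in neighbors}
    for v, c in labels.items():
        u[v] = [1.0 if k == c else 0.0 for k in range(n_classes)]
    for _ in range(sweeps):
        for v in neighbors:
            if v in labels:
                continue
            nbrs = neighbors[v]
            u[v] = [sum(u[w][k] for w in nbrs) / len(nbrs)
                    for k in range(n_classes)]
    # assign each node to its strongest class
    return {v: max(range(n_classes), key=lambda k: u[v][k]) for v in neighbors}

# path graph 0-1-2-3-4 with the two endpoints labeled
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(harmonic_segmentation(nbrs, {0: 0, 4: 1}, 2))
```

Note that averaging preserves the row sums, which mirrors the observation that the probability simplex constraint is automatically fulfilled for p=2.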

We consider universal adaptive stabilization and tracking controllers for classes of linear systems. Under the technical assumption of linear scaling invariance necessary and sufficient conditions for adaptive stabilization are derived. For scalar systems sufficient conditions for adaptive tracking of finite dimensional reference signals are explored.

Sudakov's typical marginals, random linear functionals and a conditional central limit theorem
(1997)

V.N. Sudakov [Sud78] proved that the one-dimensional marginals of a high-dimensional second order measure are close to each other in most directions. Extending this and a related result in the context of projection pursuit of P. Diaconis and D. Freedman [Dia84], we give, for a probability measure P and a random (a.s.) linear functional F on a Hilbert space, simple sufficient conditions under which most of the one-dimensional images of P under F are close to their canonical mixture, which turns out to be almost a mixed normal distribution. Using the concept of approximate conditioning we deduce a conditional central limit theorem (Theorem 3) for random averages of triangular arrays of random variables which satisfy only fairly weak asymptotic orthogonality conditions.

This publication attempts to develop school mathematics from concrete problems. The present work focuses on business planning and decision problems of the kind that almost every company has to solve. In particular, the following optimization problems are discussed: computing the raw material demand for given orders, working off existing inventories, and the bill-of-materials problem.

The program system PROMO is used in industry to compute unsteady gas flows in multi-cylinder combustion engines. PROMO was developed at the Ruhr-Universität Bochum between 1970 and 1977. After the first user experiences, the program system was revised in 1979/80, resulting in the new version PROMO 2. It can compute unsteady, compressible, one-dimensional pipe flows, and it implements a wide variety of boundary and transition conditions between the individual pipe segments. On the one hand, PROMO has proven itself in practice; on the other hand, clear deviations of the computed results from measurements have also been observed. From the user side, the company Gillet (a manufacturer of exhaust systems) has raised the following questions: How must the spatial and temporal step sizes be chosen to guarantee a given accuracy of the numerical solution? How can the obtained results be assessed theoretically (error estimation)? For these reasons, an examination of the program system from a mathematical point of view appears worthwhile.

The concept of algebraic simplification is of great importance for the field of symbolic computation in computer algebra. In this paper we review some fundamental concepts concerning reduction rings in the spirit of Buchberger. The most important properties of reduction rings are presented. The techniques for presenting monoids or groups by string rewriting systems are used to define several types of reduction in monoid and group rings. Gröbner bases in this setting arise naturally as generalizations of the corresponding known notions in the commutative and some non-commutative cases. Several results on the connection of the word problem and the congruence problem are proven. The concepts of saturation and completion are introduced for monoid rings having a finite convergent presentation by a semi-Thue system. For certain presentations, including free groups and context-free groups, the existence of finite Gröbner bases for finitely generated right ideals is shown and a procedure to compute them is given.

We want to study solid objects in real three-dimensional space, aiming at two issues: (i1) modelling solids subject to Boolean set algebra, including wire models; (i2) determining the behaviour of moving solids, e.g. when they collide, and the resulting points of contact. This research has been initiated by the FORD Motor Company, Cologne. It is motivated by the intention to provide a model of an automatic car gear which gives a high-precision basis for the optimization of moving tolerances.

Stop Location Design in Public Transportation Networks: Covering and Accessibility Objectives
(2006)

In StopLoc we consider the location of new stops along the edges of an existing public transportation network. Examples of StopLoc include the location of bus stops along some given bus routes or of railway stations along the tracks in a railway system. In order to measure the "convenience" of the location decision for potential customers in given demand facilities, two objectives are proposed. In the first one, we impose an upper bound on the distance for reaching a closest station from any of the demand facilities and minimize the number of stations. In the second objective, we fix the number of new stations and minimize the sum of the distances between demand facilities and stations. The resulting two problems, CovStopLoc and AccessStopLoc, are solved by a reduction to a classical set covering problem and a restricted location problem, respectively. We implement the general ideas in two different environments: in the plane, where demand facilities are represented by coordinates, and in networks, where they are nodes of a graph.

Questions of location planning can enrich school mathematics lessons, where they can be treated and solved. This work presents planar location problems suitable for the mathematics classroom. The problems of producing semiconductor boards, planning a fire station, and the central warehouse problem, all of which are real and not artificially constructed, are worked through in detail, so that teaching units can quickly be developed from them.

Elements of differential topology are used to prove necessary conditions for stabilizability in the large by a smooth feedback. Criteria are obtained for a smooth feedback, which stabilizes a smooth nonlinear system locally, to have a piecewise smooth extension that stabilizes the system over a given compact set.

Stability and Robustness Properties of Universal Adaptive Controllers for First Order Linear Systems
(1987)

The question "What is an adaptive controller?" is as old as the term adaptive control itself. In this paper we adopt a pragmatic viewpoint which identifies adaptive controllers with nonlinear feedback controllers designed for classes (families) of linear systems. In contrast to classical linear feedback controllers, which are designed for individual systems, these nonlinear controllers are required to achieve a specific design objective (such as stability, tracking or decoupling) for a whole prescribed family of linear systems.

The determination of the Earth's gravitational potential from the measurement data of the research satellite CHAMP can be formulated as an operator equation (the SST problem). This approach assumes that a geometric orbit of the satellite CHAMP is available. Via numerical differentiation, combined with a suitable denoising procedure, the gradient of the potential along the orbit can then be determined from the geometric orbit. In particular, the radial derivative (and the surface gradient) are then known on a point grid along the orbit. In an Earth-fixed frame this amounts to an almost complete coverage of the Earth (except for the polar gaps) with a rather dense data grid at satellite altitude. Solving the SST operator equation (determining the potential on the Earth's surface from knowledge of the radial derivative on a data grid at satellite altitude) is an ill-posed inverse problem that must be solved with a suitable regularization technique. In the present case such a regularization was implemented using non-bandlimited regularization scaling functions and regularization wavelets. These are strongly space-localizing and therefore lead to a potential model that is a linear combination of strongly space-localizing functions. Such a model can also be computed as a local model from purely local data and therefore offers considerable advantages over spherical harmonic models such as EGM96 for modern geopotential determination. The discretization and numerical computation of such a model is carried out with splines, which here are likewise linear combinations of strongly space-localizing functions.
The large linear systems that have to be solved to compute the smoothing or interpolating splines can be solved quickly and efficiently with the Schwarz alternating algorithm in combination with fast summation techniques (fast multipole methods). Combining the Schwarz alternating algorithm with such fast summation techniques allows a further substantial speed-up in solving these systems. The L-curve method can be used to determine smoothing parameters (spline smoothing) and regularization parameters.

In the field of gravity determination a special kind of boundary value problem respectively ill-posed satellite problem occurs; the data and hence side condition of our PDE are oblique second order derivatives of the gravitational potential. In mathematical terms this means that our gravitational potential \(v\) fulfills \(\Delta v = 0\) in the exterior space of the Earth and \(\mathscr D v = f\) on the discrete data location which is on the Earth's surface for terrestrial measurements and on a satellite track in the exterior for spaceborne measurement campaigns. \(\mathscr D\) is a first order derivative for methods like geometric astronomic levelling and satellite-to-satellite tracking (e.g. CHAMP); it is a second order derivative for other methods like terrestrial gradiometry and satellite gravity gradiometry (e.g. GOCE). Classically one can handle first order side conditions which are not tangential to the surface and second derivatives pointing in the radial direction employing integral and pseudo differential equation methods. We will present a different approach: We classify all first and purely second order operators \(\mathscr D\) which fulfill \(\Delta \mathscr D v = 0\) if \(\Delta v = 0\). This allows us to solve the problem with oblique side conditions as if we had ordinary i.e. non-derived side conditions. The only additional work which has to be done is an inversion of \(\mathscr D\), i.e. integration.

In this paper we construct spline functions based on a reproducing kernel Hilbert space to interpolate/approximate the velocity field of earthquake waves inside the Earth based on traveltime data for an inhomogeneous grid of sources (hypocenters) and receivers (seismic stations). Theoretical aspects including error estimates and convergence results as well as numerical results are demonstrated.

Spline functions that approximate (geostrophic) wind field or ocean circulation data are developed in a weighted Sobolev space setting on the (unit) sphere. Two problems are discussed in more detail: the modelling of the (geostrophic) wind field from (i) discrete scalar air pressure data and (ii) discrete vectorial velocity data. Domain decomposition methods based on the Schwarz alternating algorithm for positive definite symmetric matrices are described for solving the large linear systems occurring in vectorial spline interpolation or smoothing of geostrophic flow.
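The Schwarz alternating idea for a symmetric positive definite system can be sketched on a toy example (a small 1D Laplacian, not the spline matrices of the paper): solve exactly on overlapping index blocks in turn, holding the unknowns outside each block fixed.

```python
def gauss_solve(M, rhs):
    """Naive Gauss-Jordan elimination with partial pivoting (tiny systems)."""
    n = len(rhs)
    T = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(T[r][c]))
        T[c], T[p] = T[p], T[c]
        for r in range(n):
            if r != c:
                f = T[r][c] / T[c][c]
                T[r] = [a - f * b for a, b in zip(T[r], T[c])]
    return [T[i][n] / T[i][i] for i in range(n)]

def schwarz_alternating(A, b, blocks, sweeps=50):
    """Multiplicative Schwarz: solve exactly on each overlapping block in
    turn, with the unknowns outside the block frozen at their current values."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for idx in blocks:
            sub_A = [[A[i][j] for j in idx] for i in idx]
            sub_b = [b[i] - sum(A[i][j] * x[j]
                                for j in range(n) if j not in idx)
                     for i in idx]
            for i, v in zip(idx, gauss_solve(sub_A, sub_b)):
                x[i] = v
    return x

# 1D Laplacian system with two overlapping blocks {0,1,2} and {2,3,4}
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0 for j in range(5)]
     for i in range(5)]
b = [1.0] * 5
x = schwarz_alternating(A, b, [[0, 1, 2], [2, 3, 4]])
print(x)  # converges to the exact solution [2.5, 4.0, 4.5, 4.0, 2.5]
```

For positive definite symmetric matrices the iteration converges; in practice the subproblem solves would of course use specialized solvers rather than dense elimination.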

A continuous version of spherical multiresolution is described, starting from the continuous wavelet transform on the sphere. Scale discretization enables us to construct spherical counterparts to Daubechies wavelets and wavelet packets (known from Euclidean theory). An essential tool is the theory of singular integrals on the sphere. It is shown that singular integral operators forming a semigroup of contraction operators of class (C0) (like the Abel-Poisson or Gauß-Weierstraß operators) lead in a canonical way to (pyramidal) algorithms.

Spherical Tikhonov Regularization Wavelets in Satellite Gravity Gradiometry with Random Noise
(2000)

This paper considers a special class of regularization methods for satellite gravity gradiometry based on Tikhonov spherical regularization wavelets with particular emphasis on the case of data blurred by random noise. A convergence rate is proved for the regularized solution, and a method is discussed for choosing the regularization level a posteriori from the gradiometer data.
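The effect of Tikhonov regularization on an exponentially ill-posed problem can be sketched in the diagonal (singular value) picture, with a hypothetical spectrum and perturbation rather than gradiometer data: each coefficient is filtered by s/(s^2 + alpha) instead of being divided by s.

```python
def tikhonov_coeffs(data, singular_values, alpha):
    # spectral filter s/(s^2 + alpha) replaces the unstable division by s
    return [s * d / (s * s + alpha) for d, s in zip(data, singular_values)]

true_x = [1.0] * 12
s = [2.0 ** -k for k in range(12)]             # exponentially decaying spectrum
noise = [1e-2 * (-1) ** k for k in range(12)]  # fixed, made-up perturbation
data = [sk * xk + nk for sk, xk, nk in zip(s, true_x, noise)]

naive = [d / sk for d, sk in zip(data, s)]     # unregularized inversion
reg = tikhonov_coeffs(data, s, alpha=1e-3)

err = lambda est: max(abs(a - b) for a, b in zip(est, true_x))
print(err(naive), err(reg))  # the unregularized error is far larger
```

The regularized solution trades a bounded bias on the small singular values for stability against the data perturbation; choosing alpha from the data is the a posteriori parameter choice discussed in the paper.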

In modern approximation methods, linear combinations in terms of (space localizing) radial basis functions play an essential role. Areas of application are numerical integration formulas on the unit sphere Omega corresponding to prescribed nodes, spherical spline interpolation, and spherical wavelet approximation. The evaluation of such a linear combination is a time consuming task, since a certain number of summations, multiplications and calculations of scalar products are required. This paper presents a generalization of the panel clustering method in a spherical setup. The economy and efficiency of panel clustering is demonstrated for three fields of interest, namely upward continuation of the Earth's gravitational potential, geoid computation by spherical splines, and wavelet reconstruction of the gravitational potential.

Using a stereographical projection to the plane we construct an O(N log(N)) algorithm to approximate scattered data in N points by orthogonal, compactly supported wavelets on the surface of a 2-sphere or a local subset of it. In fact, the sphere is not treated all at once, but is split into subdomains whose results are combined afterwards. After choosing the center of the area of interest the scattered data points are mapped from the sphere to the tangential plane through that point. By combining a k-nearest neighbor search algorithm and the two dimensional fast wavelet transform a fast approximation of the data is computed and mapped back to the sphere. The algorithm is tested with nearly 1 million data points and yields an approximation with 0.35% relative errors in roughly 2 minutes on a standard computer using our MATLAB implementation. The method is very flexible and allows the application of the full range of two dimensional wavelets.
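The underlying projection step can be sketched with the standard stereographic formulas (here from the north pole onto the equatorial plane; the paper projects onto the tangent plane of a chosen center, but the principle is the same):

```python
import math

def to_plane(x, y, z):
    # stereographic projection from the north pole (0, 0, 1);
    # undefined at the pole itself
    return x / (1.0 - z), y / (1.0 - z)

def to_sphere(u, v):
    # inverse projection back onto the unit sphere
    d = u * u + v * v + 1.0
    return 2.0 * u / d, 2.0 * v / d, (d - 2.0) / d

# round trip for a point in the southern hemisphere (far from the pole)
p = (math.sqrt(0.5), 0.0, -math.sqrt(0.5))
u, v = to_plane(*p)
q = to_sphere(u, v)
print(max(abs(a - b) for a, b in zip(p, q)))  # ~0: the projection inverts
```

Between the two maps, the planar data can be handed to any two-dimensional fast wavelet transform, which is what makes the full range of planar wavelets available.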

Spektralsequenzen
(1999)

We present results and views about a project in assisted living. The scenario is a room in which an elderly and/or disabled person lives who is not able to perform certain actions due to restricted mobility. We enable the person to express commands verbally that will then be executed automatically. Several severe problems complicate the situation: the person may utter the command in a rather unexpected way, the person may make an error, or the action may not be executable for various reasons. In our approach we present an architecture with three components: the recognition component, which contains novel features in the signal processing; the analysis component, which logically analyzes the command; and the execution component, which performs the action automatically. All three components communicate with each other.

In these notes we will discuss some aspects of a problem arising in the car industry. For the sake of clarity we will set the problem in an extremely simplified scheme. Suppose that we have a body which is emitting sound, and that the sound is measured at a finite number of points around the body. We wish to determine the intensity of the sound at a moving observation point.

Starting from the uniqueness question for mixtures of distributions, this review centers on the question under which formally weaker assumptions one can prove the existence of SPLIFs, in other words perfect statistics and tests. We mention a couple of positive and negative results which complement the basic contribution of David Blackwell in 1980. Typically the answers depend on the choice of the set-theoretic axioms and on the particular concepts of measurability.

In this paper a group of participants of the 12th European Summer Institute, which took place in Tenerife, Spain, in June 1995, present their views on the state of the art and the future trends in Locational Analysis. The issues discussed include modelling aspects in discrete, network and continuous location, heuristic techniques, the state of technology, and undesirable facility location. Some general questions are stated regarding the applicability of location models, promising research directions, and the way technology affects the development of solution techniques.

Formulae containing logarithmic derivatives of (smooth) measures on infinite-dimensional spaces arise in quite different situations. In particular, logarithmic derivatives of a measure appear in the Schrödinger equation on the space of functions that are square integrable with respect to this measure, which allows us to describe very simply a procedure of (canonical) quantization of infinite-dimensional Hamiltonian systems with a linear phase space. Further, the problem of reconstructing a measure from its logarithmic derivative (which was posed in [1] independently of any applications) can be equivalent either to the problem of finding the "ground state" (considered as a measure) of an infinite-dimensional Schrödinger equation, or to the problem of finding an invariant measure for a stochastic differential equation (a central question of so-called stochastic quantization), or to the problem of reconstructing a "Gibbsian measure from its specification" (i.e. from a collection of finite-dimensional conditional distributions). Logarithmic derivatives of a measure also appear in the Cameron-Martin-Girsanov-Maruyama formulae and in their generalizations related to arbitrary smooth measures; they also allow one to connect these formulae with the Feynman-Kac formulae. This note discusses all these topics. Due to its shortness the presentation is in the main formal, and precise analytical assumptions are usually absent; essentially only a list of formulae with brief comments is given. Let us also mention that we do not consider at all the so-called Dirichlet forms, to which a great deal of literature is devoted (cf. [3] and the references therein to the works of S. Albeverio and others).

The paper presents some new estimates on the gain term of the Boltzmann collision operator. For Maxwellian molecules, it is shown that the L∞-norm of the gain term can be bounded in terms of the L1- and L∞-norms of the density function f. In the case of more general collision kernels, like the hard-sphere interaction potential, the gain term is estimated pointwise by the L∞-norm of the density function and the loss term of the Boltzmann collision operator.

Many polynomially solvable combinatorial optimization problems (COPs) become NP-hard when we require solutions to satisfy an additional cardinality constraint. This family of problems has been considered only recently. We study a new problem of this family: the k-cardinality minimum cut problem. Given an undirected edge-weighted graph, the k-cardinality minimum cut problem is to find a partition of the vertex set V into two sets V1, V2 such that the number of edges between V1 and V2 is exactly k and the sum of the weights of these edges is minimal. A variant of this problem is the k-cardinality minimum s-t cut problem, where s and t are fixed vertices and we have the additional requirement that s belongs to V1 and t belongs to V2. We also consider other variants where the number of edges of the cut is constrained to be either less or greater than k. For all these problems we show complexity results in the most significant graph classes.
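For toy instances the problem can be stated directly as a brute-force search over bipartitions (exponential, illustration only; the vertex names and weights below are made up):

```python
from itertools import combinations

def k_cardinality_min_cut(vertices, edges, k):
    """edges maps frozenset({u, v}) to a weight; returns (weight, V1) for a
    minimum-weight cut with exactly k edges, or None if no such cut exists."""
    best = None
    verts = sorted(vertices)
    for size in range(1, len(verts)):
        for part in combinations(verts, size):
            v1 = set(part)
            cut = [e for e in edges if len(e & v1) == 1]
            if len(cut) == k:
                w = sum(edges[e] for e in cut)
                if best is None or w < best[0]:
                    best = (w, v1)
    return best

V = {1, 2, 3, 4}  # a 4-cycle 1-2-3-4-1 with made-up weights
E = {frozenset(e): w for e, w in
     [((1, 2), 1.0), ((2, 3), 2.0), ((3, 4), 1.0), ((1, 4), 3.0)]}
print(k_cardinality_min_cut(V, E, 2))  # minimum-weight cut with 2 edges
```

On a cycle every cut has even cardinality, so odd values of k are infeasible here; this is exactly the kind of structural restriction the complexity results in special graph classes exploit.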

We derive some asymptotics for a new approach to curve estimation proposed by Mrázek et al. [MWB06] which combines localization and regularization. This methodology has been considered as the basis of a unified framework covering various different smoothing methods in the analogous two-dimensional problem of image denoising. As a first step towards understanding this approach theoretically, we restrict our discussion here to the least-squares distance, where we have explicit formulas for the function estimates and where we can derive a rather complete asymptotic theory from known results for the Priestley-Chao curve estimate. In this paper, we consider only the case where the bias dominates the mean-square error. Other situations are dealt with in subsequent papers.
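The Priestley-Chao estimate referred to above has the explicit fixed-design form mhat(x) = (nh)^{-1} sum_i K((x - x_i)/h) y_i; a minimal sketch with a Gaussian kernel and toy data (not the paper's setting) is:

```python
import math

def priestley_chao(x, xs, ys, h):
    """Fixed-design kernel estimate with a Gaussian kernel."""
    kern = lambda t: math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)
    return sum(kern((x - xi) / h) * yi
               for xi, yi in zip(xs, ys)) / (len(xs) * h)

n, h = 400, 0.05
xs = [(i + 1) / n for i in range(n)]               # equidistant design
ys = [math.sin(2.0 * math.pi * xi) for xi in xs]   # noiseless toy data
print(priestley_chao(0.25, xs, ys, h))  # near sin(pi/2) = 1, up to bias
```

In the noiseless example the deviation from the true value is pure kernel bias of order h^2, which is the regime studied in the paper where the bias dominates the mean-square error.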

Rewriting techniques have been applied successfully to various areas of symbolic computation. Here we consider the notion of prefix-rewriting and give a survey on its applications to the subgroup problem in combinatorial group theory. We will see that for certain classes of finitely presented groups finitely generated subgroups can be described through convergent prefix-rewriting systems, which can be obtained from a presentation of the group considered and a set of generators for the subgroup through a specialized Knuth-Bendix style completion procedure. In many instances a finite presentation for the subgroup considered can be constructed from such a convergent prefix-rewriting system, thus solving the subgroup presentation problem. Finally we will see that the classical procedures for computing Nielsen reduced sets of generators for a finitely generated subgroup of a free group and the Todd-Coxeter coset enumeration can be interpreted as particular instances of prefix-completion. Further, both procedures are closely related to the computation of prefix Gröbner bases for right ideals in free group rings.

We consider three applications of impulse control in financial mathematics, a cash management problem, optimal control of an exchange rate, and portfolio optimisation under transaction costs. We sketch the different ways of solving these problems with the help of quasi-variational inequalities. Further, some viscosity solution results are presented.

In line location problems the objective is to find a straight line which minimizes the sum of distances, or the maximum distance, respectively, to a given set of existing facilities in the plane. These problems are well solved. In this paper we deal with restricted line location problems, i.e. we are given a set in the plane through which the line is not allowed to pass. With the help of a geometric duality we solve such problems for the vertical distance and then extend these results to block norms, and some of them even to arbitrary norms. For all norms we give a finite candidate set for the optimal line.

SST (satellite-to-satellite tracking) and SGG (satellite gravity gradiometry) provide data that allows the determination of the first and second order radial derivative of the earth's gravitational potential on the satellite orbit, respectively. The modeling of the gravitational potential from such data is an exponentially ill-posed problem that demands regularization. In this paper, we present the numerical studies of an approach, investigated in [24] and [25], that reconstructs the potential with spline smoothing. In this case, spline smoothing is not just an approximation procedure but it solves the underlying compact operator equation of the SST-problem and the SGG-problem. The numerical studies in this paper are performed for a simplified geometrical scenario with simulated data, but the approach is designed to handle first or second order radial derivative data on a real satellite orbit.