### Refine

#### Year of publication

- 1995 (81)

#### Document Type

- Preprint (50)
- Report (17)
- Article (12)
- Doctoral Thesis (1)
- Working Paper (1)

#### Language

- English (81)

#### Keywords

- Boltzmann Equation (3)
- Numerical Simulation (3)
- Case Based Reasoning (2)
- Hysteresis (2)
- Boundary Value Problems (1)
- CAQ (1)
- Case-Based Reasoning (1)
- Case-Based Reasoning Systems (1)
- Energiespektrum (1)
- Energy spectra (1)

#### Faculty / Organisational entity

- Fachbereich Mathematik (40)
- Fachbereich Informatik (31)
- Fachbereich Physik (10)

In this paper, the complexity of the full solution of Fredholm integral equations of the second kind with data from the Sobolev class \(W^r_2\) is studied. The exact order of information complexity is derived. The lower bound is proved using a Gelfand number technique. The upper bound is shown by providing a concrete algorithm of optimal order, based on a specific hyperbolic cross approximation of the kernel function. Numerical experiments are included, comparing the optimal algorithm with the standard Galerkin method.

Let \(a_1,\dots,a_m\) be i.i.d. vectors uniformly distributed on the unit sphere in \(\mathbb{R}^n\), \(m\ge n\ge 3\), and let \(X := \{x \in \mathbb{R}^n \mid a_i^T x \le 1,\ i=1,\dots,m\}\) be the random polyhedron they generate. Furthermore, for linearly independent vectors \(u, \bar u\) in \(\mathbb{R}^n\), let \(S_{u, \bar u}(X)\) be the number of shadow vertices of \(X\) in \(\mathrm{span}(u, \bar u)\). The paper provides an asymptotic expansion of the expectation \(E(S_{u, \bar u})\) for fixed \(n\) and \(m\to\infty\); the first terms of the expansion are given explicitly. Our investigation of \(E(S_{u, \bar u})\) is closely connected to Borgwardt's probabilistic analysis of the shadow vertex algorithm, a parametric variant of the simplex algorithm. We obtain an improved asymptotic upper bound for the number of pivot steps required by the shadow vertex algorithm for data distributed uniformly on the sphere.

In this paper an analytic hidden surface removal algorithm is presented which uses a combination
of 2D and 3D BSP trees without involving point sampling or scan conversion. Errors such as aliasing, which result from sampling, do not occur with this technique. An application of this
algorithm is outlined which computes the energy locally reflected from a surface having an
arbitrary BRDF. A simplification for diffuse reflectors is described, which has been implemented
to compute analytic form factors from diffuse light sources to differential receivers as they are needed for shading and radiosity algorithms.

A new variance reduction technique for the Monte Carlo solution of integral equations is introduced. It is based on separation of the main part: a neighboring equation with exactly known solution is constructed with the help of a deterministic Galerkin scheme. The variance of the method is analyzed, and an application to the radiosity equation of computer graphics, together with numerical test results, is given.
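Separation of the main part is, in effect, a control-variate idea: Monte Carlo is applied only to the difference between the target problem and a neighboring problem whose solution is known exactly. A minimal sketch on a plain integral rather than an integral equation (the integrand `f`, the main part `g`, and the sample size are illustrative, not the paper's setting):

```python
import math
import random
import statistics

def mc_estimate(f, n, seed=0):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]."""
    rng = random.Random(seed)
    return statistics.fmean(f(rng.random()) for _ in range(n))

def mc_main_part(f, g, g_integral, n, seed=0):
    """Separation of the main part: apply Monte Carlo only to f - g and
    add the exactly known integral of the neighboring problem g."""
    rng = random.Random(seed)
    samples = (rng.random() for _ in range(n))
    return g_integral + statistics.fmean(f(x) - g(x) for x in samples)

f = lambda x: math.exp(x)   # target integrand; exact integral is e - 1
g = lambda x: 1.0 + x       # crude 'main part' of f; exact integral is 3/2
plain = mc_estimate(f, 4000)
reduced = mc_main_part(f, g, 1.5, 4000)
```

Because `f - g` has a much smaller variance than `f` itself, the second estimator is far more accurate at the same sample size.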

The \(L_2\)-discrepancy is a quantitative measure of precision for multivariate quadrature rules. It can be computed explicitly. Previously known algorithms needed \(O(m^2)\) operations, where \(m\) is the number of nodes. In this paper we present algorithms which require \(O(m(\log m)^d)\) operations.
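For reference, the classical \(O(m^2)\) computation can be done with Warnock's explicit formula for one common variant, the squared star \(L_2\)-discrepancy; the paper's faster algorithms replace the double sum below. A sketch (the point sets in the test are illustrative):

```python
def l2_star_discrepancy_sq(points):
    """Warnock's O(m^2 d) formula for the squared L2-star discrepancy
    of m points in [0, 1]^d."""
    m, d = len(points), len(points[0])
    term1 = 3.0 ** (-d)
    term2 = 0.0                       # single sum over the nodes
    for x in points:
        p = 1.0
        for xk in x:
            p *= (1.0 - xk * xk) / 2.0
        term2 += p
    term3 = 0.0                       # the O(m^2) double sum
    for x in points:
        for y in points:
            p = 1.0
            for xk, yk in zip(x, y):
                p *= 1.0 - max(xk, yk)
            term3 += p
    return term1 - 2.0 * term2 / m + term3 / (m * m)
```

For a single point \(x = 1/2\) in one dimension the formula gives \(1/3 - 3/4 + 1/2 = 1/12\), which is easy to verify by integrating the local discrepancy directly.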

Computer processing of free-form surfaces forms the basis of a closed construction process ranging from surface design to NC production. Numerical simulation and visualization allow quality analysis before manufacture. A new aspect in surface analysis is described: the stability of surfaces with respect to infinitesimal bendings. The stability concept is derived from the kinematic meaning of a special vector field which is given by the deformation. Algorithms to calculate this vector field, together with an appropriate visualization method, give a tool for analyzing surface stability.

The CAD/CAM-based design of free-form surfaces is the beginning of a chain of operations which ends with the numerically controlled (NC) production of the designed object. During this process, shape control is an important step for ensuring efficiency. Several surface interrogation methods already exist to analyze the curvature and continuity behaviour of the shape. This paper deals with a new aspect of shape control: the stability of surfaces with respect to infinitesimal bendings. Each infinitesimal bending of a surface determines a so-called instability surface, which is used for the stability investigations. The kinematic meaning of this instability surface is discussed, and we present algorithms to calculate it.

The local solution problem of multivariate Fredholm integral equations is studied. Recent research proved that for several function classes the complexity of this problem is closely related to the Gelfand numbers of some characterizing operators. The generalization of this approach to the situation of arbitrary Banach spaces is the subject of the present paper.
Furthermore, an iterative algorithm is described which, under some additional conditions, realizes the optimal error rate. The way these general theorems work is demonstrated by applying them to integral equations in a Sobolev space of periodic functions with dominating mixed derivative of various orders.

Optimal degree reductions, i.e. best approximations of \(n\)-th degree Bezier curves by Bezier curves of degree \(n-1\), with respect to different norms are studied. It is shown that for any \(L_p\)-norm the Euclidean degree reduction, where the norm is applied to the Euclidean distance function of the two curves, is identical to componentwise degree reduction. The Bezier points of the degree reductions are found to lie on parallel lines through the Bezier points of any Taylor expansion of degree \(n-1\) of the original curve. This geometric situation is shown to hold also in the case of constrained degree reduction. The Bezier points of the degree reduction are given explicitly in the unconstrained case for \(p=1\) and \(p=2\), and in the constrained case for \(p=2\).
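Componentwise \(L_2\) degree reduction can be approximated numerically by least squares in the Bernstein basis; the sketch below does this on a dense parameter grid rather than using the paper's closed-form Bezier points (the grid density and the test curve are illustrative):

```python
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[r][n] / M[r][r] for r in range(n)]

def degree_reduce_l2(ctrl, samples=200):
    """Discrete L2-best degree reduction n -> n-1, componentwise:
    least-squares fit of a degree n-1 Bernstein expansion to the curve."""
    n, dim = len(ctrl) - 1, len(ctrl[0])
    ts = [i / (samples - 1) for i in range(samples)]
    # design matrix of degree n-1 Bernstein values at the grid points
    A = [[bernstein(n - 1, j, t) for j in range(n)] for t in ts]
    # points of the original degree-n curve at the same parameters
    B = [[sum(bernstein(n, i, t) * ctrl[i][k] for i in range(n + 1))
          for k in range(dim)] for t in ts]
    AtA = [[sum(A[r][i] * A[r][j] for r in range(samples))
            for j in range(n)] for i in range(n)]
    new_ctrl = []
    for k in range(dim):  # normal equations, one coordinate at a time
        Atb = [sum(A[r][i] * B[r][k] for r in range(samples)) for i in range(n)]
        new_ctrl.append(solve(AtA, Atb))
    return [list(col) for col in zip(*new_ctrl)]
```

A quick sanity check: a quadratic whose middle control point is the midpoint of the endpoints is exactly a line, so its reduction recovers the two endpoints.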

Intellectual control over software development projects requires the existence of an integrated set of explicit models of the products to be developed, the processes used to develop them, the resources needed, and the productivity and quality aspects involved. In recent years the development of languages, methods and tools for modeling software processes, analyzing and enacting them has become a major emphasis of software engineering research. The majority of current process research concentrates on prescriptive modeling of small, completely formalizable processes and their execution entirely on computers. This research direction has produced process modeling languages suitable for machine rather than human consumption. The MVP project, launched at the University of Maryland and continued at Universität Kaiserslautern, emphasizes building descriptive models of large, real-world processes and their use by humans and computers for the purpose of understanding, analyzing, guiding and improving software development projects. The language MVP-L has been developed with these purposes in mind. In this paper, we
motivate the need for MVP-L, introduce the prototype language, and demonstrate its uses. We assume that further improvements to our language will be triggered by lessons learned from applications and experiments.

Experience gathered from applying the software process modeling language MVP-L in software development organizations has shown the need for graphical representations of process models. Project members (i.e„ non MVP-L specialists) review models much more easily by using graphical representations. Although several various graphical notations were developed for individual projects in which MVP-L was applied, there was previously no consistent definition of a mapping between textual MVP-L models and graphical representations. This report defines a graphical representation schema for MVP-L
descriptions and combines previous results in a unified form. A basic set of building blocks (i.e., graphical symbols and text fragments) is defined, but because we must first gain experience with the new symbols, only rudimentary guidelines are given for composing basic
symbols into a graphical representation of a model.

This paper introduces a new high-level programming language for a novel class of computational devices, namely data-procedural machines. These machines are up to several orders of magnitude more efficient than von Neumann computers, while remaining as flexible and as universal. Their efficiency and flexibility are achieved by using field-programmable logic as the essential technology platform. The paper briefly summarizes and illustrates the essential new features of this language by means of two example programs.

In this paper we investigate two optimization problems for matroids with multiple objective functions, namely finding the Pareto set and the max-ordering problem, which consists in finding a basis such that the largest objective value is minimal. We prove that the decision versions of both problems are NP-complete. A solution procedure for the max-ordering problem is presented, and a result on the relation between the solution sets of the two problems is given. The main results are a characterization of Pareto bases by a basis exchange property and a connectivity result for proper Pareto solutions.

In this paper we introduce the concept of lexicographic max-ordering solutions for multicriteria combinatorial optimization problems. Section 1 provides the basic notions of multicriteria combinatorial optimization and the definition of lexicographic max-ordering solutions. In Section 2 we show that lexicographic max-ordering solutions are both Pareto optimal and max-ordering optimal. Furthermore, lexicographic max-ordering solutions can be used to characterize the set of Pareto solutions, and further properties of them are given. Section 3 is devoted to algorithms. We give a polynomial time algorithm for the two-criteria case where one criterion is a sum and the other a bottleneck objective function, provided that the single-criterion sum problem is solvable in polynomial time. For bottleneck functions, an algorithm for the general case of Q criteria is presented.
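A lexicographic max-ordering solution compares objective vectors after sorting their components in non-increasing order, refining the plain max-ordering criterion (which looks only at the largest component). A minimal sketch of the comparison on an explicit candidate set (the candidates and the exhaustive search are illustrative, not the paper's algorithms):

```python
def lex_max_ordering_key(values):
    """Sort objective values in non-increasing order; lexicographic
    comparison of these sorted vectors is the lex-max-ordering relation."""
    return sorted(values, reverse=True)

def lex_max_ordering_optimal(candidates):
    """Pick the candidate whose sorted-descending objective vector is
    lexicographically smallest (brute force over an explicit set)."""
    return min(candidates, key=lex_max_ordering_key)

# three feasible solutions, each with Q = 3 objective values
cands = [(4, 1, 3), (3, 3, 2), (4, 2, 1)]
best = lex_max_ordering_optimal(cands)  # (3, 3, 2): its worst value 3 beats 4
```

Note that plain max-ordering cannot distinguish `(4, 1, 3)` from `(4, 2, 1)` (both have maximum 4), while the lexicographic refinement can.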

In multiple criteria optimization an important research topic is the topological structure of the set \( X_e \) of efficient solutions. Of major interest is the connectedness of \( X_e \), since it would allow the determination of \( X_e \) without considering non-efficient solutions in the
process. We review general results on the subject, including the connectedness result for efficient solutions in multiple criteria linear programming. This result can be used to derive a definition of connectedness for discrete optimization problems. We present a counterexample to a previously stated result in this area, namely that the set of efficient solutions of the shortest path problem is connected. We also show that connectedness does not hold for another important problem in discrete multiple criteria optimization: the spanning tree problem.

Ion energy spectra of a laser-produced Ta plasma have been investigated as a function of the flight distance from the focus. The laser (Nd:YAG, 20 ns, 210 mJ) is incident obliquely (45°) and focused to an intensity of about \(10^{11}\) W cm\(^{-2}\). The changes in the ion distributions have been analysed for the Ta\(^{+}\) to Ta\(^{6+}\) ions in an expansion range of 64-220 cm. With increasing distance from the target, a weak but monotonic decrease is observed for the total number of ions, which is essentially due to the decrease in the number of the more highly charged species. For the Ta\(^{+}\) and Ta\(^{2+}\) ions the net changes approximately cancel. A more sophisticated picture of the recombination dynamics is obtained, however, if the changes within individual groups of ions expanding with different velocities are compared. Here, in the same spectrum, both increasing and decreasing ion numbers can be observed. This can be interpreted as direct evidence of recombination and its dependence on temperature, density and charge.

Abstract: It is shown that nonvacuum pseudoparticles can account for quantum tunneling and metastability. In particular, the saddle-point nature of the pseudoparticles is demonstrated, as is the evaluation of path integrals in their neighbourhood. Finally, the relation between instantons and bounces is used to derive a result conjectured by Bogomolny and Fateyev.

This report is intended to provide an introduction to the method of Smoothed Particle Hydrodynamics, or SPH. SPH is a very versatile, fully Lagrangian, particle-based method for solving fluid dynamical problems. Many technical aspects of the method are explained, which can then be employed to extend the application of SPH to new problems.

Cloudy inhomogeneities in artificial fabrics are graded by a fast method based on a Laplacian pyramid decomposition of the fabric image. This band-pass representation takes into account the scale character of the cloudiness. A quality measure of the entire cloudiness is obtained as a weighted mean over the variances of all scales.
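The measure can be sketched in a few lines: build band-pass levels by blurring and downsampling, take the variance of each band, and average with per-scale weights. The blur kernel, the number of levels, the weights, and the test images below are illustrative stand-ins, not the paper's choices:

```python
def blur_and_downsample(img):
    """3x3 binomial blur followed by factor-2 downsampling (toy REDUCE)."""
    h, w = len(img), len(img[0])
    k = [1, 2, 1]
    out = []
    for y in range(0, h - 1, 2):
        row = []
        for x in range(0, w - 1, 2):
            acc = wsum = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    wgt = k[dy + 1] * k[dx + 1]
                    acc += wgt * img[yy][xx]
                    wsum += wgt
            row.append(acc / wsum)
        out.append(row)
    return out

def upsample(img, h, w):
    """Nearest-neighbour EXPAND back to size (h, w)."""
    return [[img[min(y // 2, len(img) - 1)][min(x // 2, len(img[0]) - 1)]
             for x in range(w)] for y in range(h)]

def variance(img):
    vals = [v for row in img for v in row]
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def cloudiness(img, levels=3, weights=(1.0, 1.0, 1.0)):
    """Weighted mean of band-pass (Laplacian) variances over all scales."""
    score, cur = 0.0, img
    for lvl in range(levels):
        small = blur_and_downsample(cur)
        lowpass = upsample(small, len(cur), len(cur[0]))
        band = [[a - b for a, b in zip(r1, r2)]
                for r1, r2 in zip(cur, lowpass)]
        score += weights[lvl] * variance(band)
        cur = small
    return score / sum(weights[:levels])
```

A perfectly uniform fabric image scores zero; any spatial structure contributes variance at the scales where it lives.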

By the use of locally supported basis functions for spherical spline interpolation, the applicability of this approximation method is extended, since the resulting interpolation matrix is sparse and thus efficient solvers can be used. In this paper we study locally supported kernels in detail. Investigations of the Legendre coefficients allow a characterization of the underlying Hilbert space structure. We show how spherical spline interpolation with polynomial precision can be managed with locally supported kernels, thus giving the possibility to combine approximation techniques based on spherical harmonic expansions with those based on locally supported kernels.

Recently, Xu and Cheney (1992) proved that if all the Legendre coefficients of a zonal function defined on a sphere are positive, then the function is strictly positive definite. It will be shown in this paper that even if finitely many of the Legendre coefficients are zero, strict positive definiteness can still be assured. The results are based on approximation properties of singular integrals, and also provide a completely different proof of the results of Xu and Cheney.
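For orientation, the property in question is the standard one: a zonal function \(f(\xi\cdot\eta)=\sum_{n} a_n P_n(\xi\cdot\eta)\) (with \(P_n\) the Legendre polynomials) is strictly positive definite if for every finite set of distinct points \(\xi_1,\dots,\xi_N\) on the sphere and real coefficients \(c_1,\dots,c_N\) not all zero,

```latex
\sum_{i=1}^{N}\sum_{j=1}^{N} c_i\, c_j\, f(\xi_i\cdot\xi_j) > 0 ,
```

i.e. every interpolation matrix \(\bigl(f(\xi_i\cdot\xi_j)\bigr)_{i,j}\) is positive definite, which is what guarantees unique solvability of the associated interpolation problems.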

The paper describes the concepts and background theory for the analysis of a neural-like network for learning and replication of periodic signals containing a finite number of distinct frequency components. The approach is based on the combination of ideas from dynamic neural networks and from systems and control theory, where concepts of dynamics, adaptive control and tracking of specified time signals are fundamental. The proposed procedure is a two-stage process consisting of a learning phase, when the network is driven by the required signal, followed by a replication phase, where the network operates in an autonomous feedback mode whilst continuing to generate the required signal to a desired accuracy for a specified time. The analysis draws on currently available control theory and, in particular, on concepts from model reference adaptive control.

In this paper we consider a certain class of geodetic linear inverse problems \(\Lambda F = G\) in a reproducing kernel Hilbert space setting to obtain a bounded generalized inverse of \(\Lambda\). For a numerical realization we assume \(G\) to be given at a finite number of discrete points, to which we apply a spherical spline interpolation method adapted to the Hilbert spaces. By applying the generalized inverse to the obtained spline interpolant we get an approximation of the solution \(F\). Finally, our main task is to show some properties of the approximated solution and to prove convergence results as the data set increases.

The paper describes the concepts and background theory of the analysis of a neural-like network for the learning and replication of periodic signals containing a finite number of distinct frequency components. The approach is based on a two-stage process consisting of a learning phase, when the network is driven by the required signal, followed by a replication phase, where the network operates in an autonomous feedback mode whilst continuing to generate the required signal to a desired accuracy for a specified time. The analysis focuses on stability properties of a model reference adaptive control based learning scheme via the averaging method. The averaging analysis provides fast adaptive algorithms with proven convergence properties.

Double Scaling Limits, Airy Functions and Multicritical Behaviour in O(N) Vector Sigma Models
(1995)

O(N) vector sigma models possessing catastrophes in their action are studied. Coupling the limit \(N \to \infty\) with an appropriate scaling behaviour of the coupling constants, the partition function develops a singular factor. This is a generalized Airy function in the case of spacetime dimension zero, and the partition function of a scalar field theory for positive spacetime dimension.

World models for mobile robots, as introduced in many projects, are mostly redundant regarding similar situations detected in different places. The present paper proposes a method for the dynamic generation of a minimal world model based on these redundancies. The technique is an extension of qualitative topological world modelling methods. As a central aspect, the reliability regarding error tolerance and stability is emphasized. The proposed technique places very low constraints on the kind and quality of the employed sensors as well as on the kinematic precision of the utilized mobile platform. Hard real-time constraints can be handled due to the low computational complexity. The principal discussions are supported by real-world experiments with the mobile robot.

We describe a hybrid case-based reasoning system supporting process planning for machining workpieces. It integrates specialized domain-dependent reasoners, a feature-based CAD system and domain-independent planning. The overall architecture is built on top of CAPlan, a partial-order nonlinear planner. To use episodic problem solving knowledge for both optimizing plan execution costs and minimizing search, the case-based control component CAPlan/CbC has been implemented, which allows incremental acquisition and reuse of strategic problem solving experience by storing solved problems as cases and reusing them in similar situations. For effective retrieval of cases, CAPlan/CbC combines domain-independent and domain-specific retrieval mechanisms that are based on the hierarchical domain model and problem representation.

Structured domains are characterized by the fact that there is an intrinsic dependency between certain key elements of the domain. Considering these dependencies leads to better performance of planning systems, and it is an important factor in determining the relevance of the cases stored in a case base. However, testing for cases that meet these dependencies decreases the performance of case-based planning, as other criteria also need to be considered when determining this relevance. We present a domain-independent architecture that explicitly represents these dependencies, so that retrieving relevant cases is ensured without negatively affecting the performance of the case-based planning process.

This dissertation generalizes the concept of Gröbner bases to finitely generated monoid and group rings. Reduction methods are used both to represent the monoid and group elements and to describe the right-ideal congruence in the corresponding monoid and group rings. Since monoids in general, and groups in particular, no longer admit admissible orderings, essential problems arise in defining a suitable reduction relation: on the one hand, it is difficult to guarantee termination of a reduction relation; on the other hand, reduction steps are no longer compatible with multiplication, so reductions no longer necessarily describe a right-ideal congruence. This thesis presents various ways of defining reduction relations and examines them with respect to the problems described. The concept of saturation, i.e. extending a set of polynomials so that the right-ideal congruence it generates can be captured by reduction, is used to characterize Gröbner bases with respect to the various reductions in terms of s-polynomials. Using these concepts, it was possible, for special classes of monoids, e.g. finite, commutative or free ones, and for various classes of groups, e.g. finite, free, plain, context-free or nilpotent ones, to exploit structural properties in order to define special reduction relations and to develop terminating algorithms for computing Gröbner bases with respect to these reduction relations.

The feature interaction problem in telecommunications systems increasingly obstructs the evolution of such systems. We develop formal detection criteria which render a necessary (but less than sufficient) condition for feature interactions. It can be checked mechanically and points out all potentially critical spots. These have to be analysed manually. The resulting resolution decisions are incorporated formally. Some prototype tool support is already available. A prerequisite for formal criteria is a formal definition of the problem. Since the notions of feature and feature interaction are often used in a rather fuzzy way, we first attempt a formal definition and discuss which aspects can be included in a formalization (and therefore in a detection method). This paper describes ongoing work.

We describe a hybrid architecture supporting planning for machining workpieces. The architecture is built around CAPlan, a partial-order nonlinear planner that represents the plan already generated and allows external control decisions made by special-purpose programs or by the user. To make planning more efficient, the domain is modelled hierarchically. Based on this hierarchical representation, a case-based control component has been realized that allows incremental acquisition of control knowledge by storing solved problems and reusing them in similar situations.

Evaluation is an important issue for every scientific field and a necessity for an emerging software technology like case-based reasoning. This paper supplements the review of industrial case-based reasoning tools by K.-D. Althoff, E. Auriol, R. Barletta and M. Manago, which describes the most detailed evaluation of commercial case-based reasoning tools currently available. The author focuses on some important aspects of the evaluation of case-based reasoning systems and gives links to ongoing research.

Some new approximation methods are described for harmonic functions corresponding to boundary values on the (unit) sphere. Starting from the usual Fourier (orthogonal) series approach, we propose here nonorthogonal expansions, i.e. series expansions in terms of overcomplete systems consisting of localizing functions. In detail, we are concerned with the so-called Gabor, Toeplitz, and wavelet expansions. Essential tools are modulations, rotations, and dilations of a mother wavelet. The Abel-Poisson kernel turns out to be the appropriate mother wavelet in approximation of harmonic functions from potential values on a spherical boundary.
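For concreteness, the Abel-Poisson kernel that serves as mother wavelet here has, for \(\xi,\eta\) on the unit sphere and scale parameter \(0<h<1\), the standard closed form and Legendre expansion (stated for orientation; this identity is classical, not specific to the paper):

```latex
Q_h(\xi\cdot\eta)
  = \frac{1}{4\pi}\,\frac{1-h^2}{\left(1+h^2-2h\,\xi\cdot\eta\right)^{3/2}}
  = \sum_{n=0}^{\infty}\frac{2n+1}{4\pi}\,h^{n}\,P_n(\xi\cdot\eta),
```

where \(P_n\) denotes the Legendre polynomial of degree \(n\). It is precisely the Poisson kernel of the unit ball evaluated at \(x = h\xi\), which is why it reproduces harmonic functions from their boundary values on the sphere.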

In this paper the autonomous mobile vehicle MOBOT-IV is presented, which is capable of exploring an indoor environment while building up an internal representation of its world. This internal model is used for the navigation of the vehicle during and after the exploration phase. In contrast to methods which use a grid-based or line-based environment representation, in the approach presented in this paper local sector maps are the basic data structure of the world model. This paper describes the method of view-point planning for map building, the use of this map for navigation, and the method of external position estimation, including the handling of a position error in a moving real-time system.

Self-localization in unknown environments, or the correlation of current and former impressions of the world, is an essential ability for most mobile robots. The method proposed in this article is the construction of a qualitative, topological world model as a basis for self-localization. As a central aspect, the reliability regarding error tolerance and stability is emphasized. The proposed techniques place very low constraints on the kind and quality of the employed sensors as well as on the kinematic precision of the utilized mobile platform. Hard real-time constraints can be handled due to the low computational complexity. The principal discussions are supported by real-world experiments with the mobile robot.

Correctness and runtime efficiency are essential properties of software in general and of high-speed protocols in particular. Establishing correctness requires the use of FDTs during protocol design, and proving the protocol code correct with respect to its formal specification. Another approach to boost confidence in the correctness of the implementation is to generate protocol code automatically from the specification. However, the runtime efficiency of this code is often insufficient, which has turned out to be a major obstacle to the use of FDTs in practice. One of the FDTs currently applied to communication protocols is Estelle. We show how runtime efficiency can be significantly improved by several measures carried out during the design, implementation and runtime of a protocol. Recent results on improvements in the efficiency of Estelle-based protocol implementations are extended and interpreted.

Case-Based Reasoning for Decision Support and Diagnostic Problem Solving: The INRECA Approach
(1995)

INRECA offers tools and methods for developing, validating, and maintaining decision support systems. INRECA's basic technologies are inductive and case-based reasoning, namely KATE-INDUCTION (cf., e.g., Manago, 1989; Manago, 1990) and S3-CASE, a software product based on PATDEX (cf., e.g., Wess, 1991; Richter & Wess, 1991; Althoff & Wess, 1991). Induction extracts decision knowledge from case databases. It brings to light patterns among cases and helps monitor trends over time. Case-based reasoning relates the engineer's current problem to past experiences.

With this article we first give a brief review of wavelet thresholding methods in non-Gaussian and non-i.i.d. situations, respectively. Many of these applications are based on Gaussian approximations of the empirical coefficients. For regression and density estimation with independent observations, we establish joint asymptotic normality of the empirical coefficients by means of strong approximations. Then we describe how one can prove asymptotic normality under mixing conditions on the observations by cumulant techniques. In the second part, we apply these non-linear adaptive shrinking schemes to spectral estimation problems for both a stationary and a non-stationary time series setup. For the latter, in a model of Dahlhaus for the evolutionary spectrum of a locally stationary time series, we present two different approaches. Moreover, we show that in classes of anisotropic function spaces an appropriately chosen wavelet basis automatically adapts to possibly different degrees of regularity in the different directions. The resulting fully-adaptive spectral estimator attains the rate that is optimal in the idealized Gaussian white noise model, up to a logarithmic factor.

This paper is devoted to the mathematical description of the solution of the so-called rainflow reconstruction problem, i.e. the problem of constructing a time series with an a priori given rainflow matrix. The algorithm we present is mathematically exact in the sense that no approximations or heuristics are involved. Furthermore, it generates a uniform distribution over all possible reconstructions and thus an optimal randomization of the reconstructed series. The algorithm is a genuine on-line scheme. It is easily adjustable to all variants of rainflow, such as symmetric and asymmetric versions and different residue techniques.

In the automotive industry, both the local strain approach and rainflow counting are well-known and approved tools for the numerical estimation of the lifetime of a newly developed part. This paper is devoted to the combination of both tools, and a new algorithm is given that takes advantage of the inner structure of the most commonly used damage parameters.

This paper deals with domain decomposition methods for kinetic and drift diffusion semiconductor equations. In particular accurate coupling conditions at the interface between the kinetic and drift diffusion domain are given. The cases of slight and strong nonequilibrium situations at the interface are considered and some numerical examples are shown.

A way to consistently derive kinetic models for vehicular traffic from microscopic follow-the-leader models is presented. The obtained class of kinetic equations is investigated. Explicit examples of kinetic models are developed, with particular emphasis on obtaining models that give realistic results. For space-homogeneous traffic flow situations, numerical examples are given, including stationary distributions and fundamental diagrams.

The ideas of texture analysis by means of the structure tensor are combined with the scale-space concept of anisotropic diffusion filtering. In contrast to many other nonlinear diffusion techniques, the proposed one uses a diffusion tensor instead of a scalar diffusivity. This allows true anisotropic behaviour. The preferred diffusion direction is determined according to the phase angle of the structure tensor. The diffusivity in this direction increases with the local coherence of the signal. This filter is constructed in such a way that it gives a mathematically well-founded scale-space representation of the original image. Experiments demonstrate its usefulness for the processing of interrupted one-dimensional structures such as fingerprint and fabric images.

Second Order Scheme for the Spatially Homogeneous Boltzmann Equation with Maxwellian Molecules
(1995)

In the standard approach, particle methods for the Boltzmann equation are obtained using an explicit time discretization of the spatially homogeneous Boltzmann equation. This kind of discretization leads to a restriction on the discretization parameter as well as on the differential cross section in the case of the general Boltzmann equation. Recently, it was shown how to construct an implicit particle scheme for the Boltzmann equation with Maxwellian molecules. The present paper combines both approaches using a linear combination of explicit and implicit discretizations. It is shown that the new method leads to a second order particle method when an equiweighting of the explicit and implicit discretizations is used.
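The gain from equiweighting explicit and implicit steps is the same mechanism that makes the trapezoidal rule second order for ODEs. A toy illustration on \(y' = -y\), not the Boltzmann particle scheme itself (the step sizes and the test equation are illustrative):

```python
import math

def theta_step(y, dt, theta):
    """One theta-scheme step for y' = -y:
    theta = 0 is explicit Euler, theta = 1 implicit Euler,
    theta = 0.5 the equiweighted (trapezoidal) combination.
    Solves (1 + theta*dt) * y_new = (1 - (1 - theta)*dt) * y_old."""
    return y * (1.0 - (1.0 - theta) * dt) / (1.0 + theta * dt)

def integrate(theta, dt, t_end=1.0):
    y, n = 1.0, round(t_end / dt)
    for _ in range(n):
        y = theta_step(y, dt, theta)
    return y

def observed_order(theta):
    """Convergence order estimated from halving the step size."""
    exact = math.exp(-1.0)
    e1 = abs(integrate(theta, 0.1) - exact)
    e2 = abs(integrate(theta, 0.05) - exact)
    return math.log(e1 / e2, 2)
```

Halving the step size cuts the error by about 4 for the equiweighted scheme (order 2) but only by about 2 for the purely explicit or implicit one (order 1).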

Numerical Simulation of the Stationary One-Dimensional Boltzmann Equation by Particle Methods
(1995)

The paper presents a numerical simulation technique, based on the well-known particle methods, for the stationary, one-dimensional Boltzmann equation for Maxwellian molecules. In contrast to the standard splitting methods, where one works with the non-stationary equation, the current approach simulates the direct solution of the stationary problem. The model problem investigated is the heat transfer between two parallel plates in the rarefied gas regime. An iteration process is introduced which leads to the stationary solution of the exact, space-discretized Boltzmann equation, in the sense of weak convergence.

Normalized Coprime Factorizations in Continuous and Discrete Time - A Joint State-Space Approach
(1995)

Based on state-space formulas for coprime factorizations over ... and an algebraic characterization of J-inner functions, normalized doubly-coprime factorizations for different classes of continuous- and discrete-time transfer functions are derived using a single general construction method. The parametrization of the factors is in terms of the stabilizing solutions of general degenerate continuous- and discrete-time Riccati equations, respectively, which are obtained by examining state-space representations of J-normalized factor matrices.

A concept of generalized discrepancy, which involves pseudodifferential operators to give a criterion for equidistributed point sets, is developed on the sphere. A simply structured formula in terms of elementary functions is established for the computation of the generalized discrepancy. With the help of this formula, five kinds of point systems on the sphere, namely lattices in polar coordinates, transformed two-dimensional sequences, rotations on the sphere, triangulations, and the sum-of-three-squares sequence, are investigated. Quantitative tests are performed, and the results are compared with each other. Our calculations exhibit different orders of convergence of the generalized discrepancy for the different types of point systems.
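The abstract does not reproduce the formula itself, but the flavour of such comparisons can be sketched with a crude Monte-Carlo spherical-cap statistic standing in for the generalized discrepancy; both helper names and all choices below are ours, not the paper's.

```python
import numpy as np

def polar_lattice(n_theta, n_phi):
    """Lattice in polar coordinates -- one of the point systems compared above."""
    theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta
    phi = np.arange(n_phi) * 2.0 * np.pi / n_phi
    t, p = np.meshgrid(theta, phi, indexing="ij")
    return np.stack([np.sin(t) * np.cos(p),
                     np.sin(t) * np.sin(p),
                     np.cos(t)], axis=-1).reshape(-1, 3)

def cap_discrepancy(points, n_caps=500, seed=0):
    """Worst deviation, over random spherical caps, between the fraction of
    points in the cap and the cap's normalized area (a crude stand-in for
    the pseudodifferential-operator criterion of the text)."""
    rng = np.random.default_rng(seed)
    centers = rng.normal(size=(n_caps, 3))
    centers /= np.linalg.norm(centers, axis=1, keepdims=True)
    heights = rng.uniform(-1.0, 1.0, size=n_caps)   # cap {x : <x, c> >= h}
    worst = 0.0
    for c, h in zip(centers, heights):
        frac = np.mean(points @ c >= h)
        area = (1.0 - h) / 2.0                      # normalized cap area
        worst = max(worst, abs(frac - area))
    return worst
```

A finer lattice should score a smaller cap discrepancy than a very coarse one, mirroring the convergence comparison described in the abstract.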

The basic theory of spherical singular integrals is recapitulated. Criteria are given for measuring the space-frequency localization of functions on the sphere. The trade-off between space localization on the sphere and frequency localization in terms of spherical harmonics is described in the form of an uncertainty principle. A continuous version of spherical multiresolution is introduced, starting from a continuous wavelet transform corresponding to spherical wavelets with vanishing moments up to a certain order. The wavelet transform is characterized by least-squares properties. Scale discretization enables us to construct spherical counterparts of wavelet packets and scale-discrete Daubechies wavelets. It is shown that singular integral operators forming a semigroup of contraction operators of class \((C_0)\) (like Abel-Poisson or Gauß-Weierstraß operators) lead in a canonical way to pyramid algorithms. Fully discretized wavelet transforms are obtained via approximate integration rules on the sphere. Finally, applications to (geo-)physical reality are discussed in more detail. A combined method is proposed for approximating the low-frequency parts of a physical quantity by spherical harmonics and the high-frequency parts by spherical wavelets. The particular significance of this combined concept is motivated by the situation in today's physical geodesy, viz. the determination of the high-frequency parts of the earth's gravitational potential under explicit knowledge of the lower-order part in terms of a spherical harmonic expansion.

The paper presents numerical results on the simulation of boundary value problems for the Boltzmann equation in one and two dimensions. In the one-dimensional case, we use prescribed fluxes at the left and diffusive conditions on the right end of a slab to study the resulting steady state solution. Moreover, we compute the numerical density function in velocity space and compare the result with the Chapman-Enskog distribution obtained in the limit for continuous media. The aim of the two-dimensional simulations is to investigate the possibility of a symmetry break in the numerical solution.

A survey of continuous, semidiscrete and discrete well-posedness and scale-space results for a class of nonlinear diffusion filters is presented. This class does not require any monotonicity assumption (comparison principle) and thus allows image restoration as well. The theoretical results include existence, uniqueness, continuous dependence on the initial image, maximum-minimum principles, average grey level invariance, smoothing Lyapunov functionals, and convergence to a constant steady state.
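As a minimal illustration of two of the listed properties (average grey level invariance and the maximum-minimum principle), here is one explicit 1-D step of a Perona-Malik-type filter in conservative flux form; the diffusivity and all parameters are generic textbook choices, not taken from the survey.

```python
import numpy as np

def perona_malik_step(u, tau=0.2, lam=0.1):
    """One explicit 1-D step of a Perona-Malik-type nonlinear diffusion
    (illustrative sketch of the filter class surveyed in the text).

    Written in flux form with zero flux across the boundary, so the
    average grey level is preserved exactly; for tau <= 0.5 and a
    diffusivity bounded by 1 the explicit scheme also satisfies the
    maximum-minimum principle.
    """
    du = np.diff(u)                               # forward differences on cell edges
    g = 1.0 / (1.0 + (du / lam) ** 2)             # diffusivity, decreasing in |du|
    flux = np.concatenate(([0.0], g * du, [0.0])) # reflecting boundary: no flux
    return u + tau * np.diff(flux)                # divergence of the flux
```

Because the update is a difference of edge fluxes, the sum of all grey values telescopes and only the (zero) boundary fluxes can change it.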

Spline functions that approximate data given on the sphere are developed in a weighted Sobolev space setting. The flexibility of the weights makes it possible to choose the approximating function in a way that emphasizes attributes desirable for the particular application. Examples show that certain choices of the weight sequences yield known methods. A convergence theorem containing explicit constants yields a usable error bound. Our survey ends with a discussion of spherical splines in geodetically relevant pseudodifferential equations.

This survey contains a description of different types of mathematical models used for the simulation of vehicular traffic. It includes models based on ordinary differential equations, fluid dynamic equations and on equations of kinetic type. Connections between the different types of models are mentioned. Particular emphasis is put on kinetic models and on simulation methods for these models.

The well-known and powerful proof principle of well-founded induction says that for verifying \(\forall x : P (x)\) for some property \(P\) it suffices to show \(\forall x : [[\forall y < x :P (y)] \Rightarrow P (x)] \), provided \(<\) is a well-founded partial ordering on the domain of interest. Here we investigate a more general formulation of this proof principle which allows for a kind of parameterized partial orderings \(<_x\) which naturally arises in some cases. More precisely, we develop conditions under which the parameterized proof principle \(\forall x : [[\forall y <_x x : P (y)] \Rightarrow P (x)]\) is sound in the sense that \(\forall x : [[\forall y <_x x : P (y)] \Rightarrow P (x)] \Rightarrow \forall x : P (x)\) holds, and give counterexamples demonstrating that these conditions are indeed essential.
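To see why such conditions are needed at all, one simple illustrative counterexample (our own, not necessarily one of the paper's) takes the domain \(\mathbb{N}\) with the parameterized relation

```latex
y <_x x \;:\Longleftrightarrow\; y = x + 1 \qquad (x, y \in \mathbb{N}).
```

For \(P \equiv \mathrm{false}\) the premise \(\forall y <_x x : P(y)\) reduces to \(P(x+1)\), which is false, so \(\forall x : [[\forall y <_x x : P(y)] \Rightarrow P(x)]\) holds vacuously while the conclusion \(\forall x : P(x)\) fails; the culprit is the infinite descending chain \(0, 1, 2, \dots\) induced by the relation.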

We study the combination of the following already known ideas for showing confluence of unconditional or conditional term rewriting systems into practically more useful confluence criteria for conditional systems: our syntactic separation into constructor and non-constructor symbols; Huet's introduction and Toyama's generalization of parallel closedness for non-noetherian unconditional systems; the use of shallow confluence for proving confluence of noetherian and non-noetherian conditional systems; the idea that certain kinds of limited confluence can be assumed when checking the fulfilledness or infeasibility of the conditions of conditional critical pairs; and the idea that (when termination is given) only prime superpositions have to be considered and certain normalization restrictions can be applied to the substitutions fulfilling the conditions of conditional critical pairs. Besides combining and improving already known methods, we present the following new ideas and results: we strengthen the criterion for overlay joinable noetherian systems, and, by using the expressiveness of our syntactic separation into constructor and non-constructor symbols, we are able to present criteria for level confluence that are not actually criteria for shallow confluence, and also to weaken the severe requirement of normality (stiffened with left-linearity) in the criteria for shallow confluence of noetherian and non-noetherian conditional systems to the easily satisfied requirement of quasi-normality. Finally, the paper also gives a practically useful overview of the syntactic means for showing confluence of conditional term rewriting systems.

Problems stemming from the study of logic calculi in connection with an inference rule called "condensed detachment" are widely acknowledged as prominent test sets for automated deduction systems and their search-guiding heuristics. It is in the light of these problems that we demonstrate, with numerous experiments, the power of heuristics that make use of past proof experience. We present two such heuristics. The first attempts to flexibly re-enact a proof of a problem solved in the past in order to find a proof of a similar problem. The second employs "features" in connection with past proof experience to prune the search space. Both heuristics not only allow for substantial speed-ups, but also make it possible to prove problems that were out of reach when using so-called basic heuristics. Moreover, a combination of these two heuristics can further increase performance. We compare our results with those the creators of Otter obtained with this renowned theorem prover, and in this way substantiate our achievements.

We present a method for learning heuristics employed by an automated prover to control its inference machine. The hub of the method is the adaptation of the parameters of a heuristic, accomplished by a genetic algorithm. The necessary guidance during the learning process is provided by a proof problem and a proof of it found in the past. The objective of learning is to find a parameter configuration that avoids redundant effort with respect to this problem and the particular proof of it. A heuristic learned (adapted) this way can then be applied profitably when searching for a proof of a similar problem, so our method can be used to train a proof heuristic for a class of similar problems. A number of experiments (with an automated prover for purely equational logic) show that adapted heuristics not only speed up enormously the search for the proof learned during adaptation; they also reduce redundancies in the search for proofs of similar theorems. This not only results in finding proofs faster, but also enables the prover to prove theorems it could not handle before.
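A minimal sketch of such an adaptation loop, assuming a hypothetical `fitness` function that measures the redundant search effort of a parameter setting (lower is better); the population size, genetic operators, and all constants below are illustrative stand-ins, not the paper's.

```python
import random

def adapt_parameters(fitness, dim, pop_size=20, generations=30, seed=0):
    """Genetic-algorithm sketch for tuning a heuristic's parameter vector.

    `fitness` stands in for a run of the prover on the training problem,
    scoring how much redundant effort a parameter setting causes.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 1.0) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)
        elite = scored[: pop_size // 4]             # keep the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, dim)             # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(dim)                  # point mutation, clamped
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0.0, 0.1)))
            children.append(child)
        pop = elite + children                      # elitism: best always survive
    return min(pop, key=fitness)
```

Because the elite is carried over unchanged, the best configuration found so far is never lost, so the measured effort is non-increasing over generations.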

Hyperbolic planes
(1995)

A new approach with BRST invariance is suggested to cure the degeneracy problem of ill-defined path integrals in the path-integral calculation of quantum mechanical tunneling effects, where the problem arises from the occurrence of zero modes. The Faddeev-Popov procedure is avoided, and the integral over the zero mode is transformed in a systematic way into a well-defined integral over instanton positions. No special procedure has to be adopted, as in the Faddeev-Popov method, for calculating the Jacobian of the transformation. Quantum mechanical tunneling in the sine-Gordon potential is used as a test of the method, and the width of the lowest energy band is obtained in exact agreement with that of WKB calculations.

Symmetry properties of average densities and tangent measure distributions of measures on the line
(1995)

Answering a question by Bedford and Fisher we show that for every Radon measure on the line with positive and finite lower and upper densities the one-sided average densities always agree with one half of the circular average densities at almost every point. We infer this result from a more general formula, which involves the notion of a tangent measure distribution introduced by Bandt and Graf. This formula shows that the tangent measure distributions are Palm distributions and define self-similar random measures in the sense of U. Zähle.

It is shown that nonvacuum pseudoparticles can account for quantum tunneling and metastability. In particular, the saddle-point nature of the pseudoparticles is demonstrated, as is the evaluation of path integrals in their neighbourhood. Finally, the relation between instantons and bounces is used to derive a result conjectured by Bogomolny and Fateyev.

2D quantum dilaton gravitational Hamiltonian, boundary terms and new definition for total energy
(1995)

The ADM and Bondi mass for the RST model are first discussed following Hawking and Horowitz's argument. Since there is a nonlocal term in the RST model, the RST Lagrangian has to be localized so that Hawking and Horowitz's proposal can be carried out. Expressing the localized RST action in terms of the ADM formulation, the RST Hamiltonian can be derived while keeping track of all boundary terms. The total boundary terms can then be taken as the total energy for the RST model. Our result shows that the previous expression for the ADM and Bondi mass actually needs to be modified at the quantum level; at the classical level, our mass formula reduces to that given by Bilal and Kogan [5] and de Alwis [6]. It is found that there is a new contribution to the ADM and Bondi mass from the RST boundary due to the existence of the hidden dynamical field. The ADM and Bondi mass with and without the RST boundary are discussed in detail for the static and dynamical solutions respectively, and some new properties are found. The thunderpop of the RST model is also encountered in our new Bondi mass formula.