In this paper, the complexity of the full solution of Fredholm integral equations of the second kind with data from the Sobolev class \(W^r_2\) is studied. The exact order of information complexity is derived. The lower bound is proved using a Gelfand number technique. The upper bound is shown by providing a concrete algorithm of optimal order, based on a specific hyperbolic cross approximation of the kernel function. Numerical experiments are included, comparing the optimal algorithm with the standard Galerkin method.
Let \(a_1,\dots,a_m\) be i.i.d. vectors uniformly distributed on the unit sphere in \(\mathbb{R}^n\), \(m\ge n\ge3\), and let \(X := \{x \in \mathbb{R}^n \mid a_i^T x \leq 1,\ i=1,\dots,m\}\) be the random polyhedron they generate. Furthermore, for linearly independent vectors \(u\), \(\bar u\) in \(\mathbb{R}^n\), let \(S_{u, \bar u}(X)\) be the number of shadow vertices of \(X\) in \(\mathrm{span}(u, \bar u)\). The paper provides an asymptotic expansion of the expectation \(E(S_{u, \bar u})\) for fixed \(n\) and \(m\to\infty\). The first terms of the expansion are given explicitly. Our investigation of \(E(S_{u, \bar u})\) is closely connected to Borgwardt's probabilistic analysis of the shadow vertex algorithm, a parametric variant of the simplex algorithm. We obtain an improved asymptotic upper bound for the number of pivot steps required by the shadow vertex algorithm for data distributed uniformly on the sphere.
In this paper an analytic hidden surface removal algorithm is presented which uses a combination
of 2D and 3D BSP trees without involving point sampling or scan conversion. Errors like aliasing
which result from sampling do not occur while using this technique. An application of this
algorithm is outlined which computes the energy locally reflected from a surface having an
arbitrary BRDF. A simplification for diffuse reflectors is described, which has been implemented
to compute analytic form factors from diffuse light sources to differential receivers as they are needed for shading and radiosity algorithms.
A new variance reduction technique for the Monte Carlo solution of integral
equations is introduced. It is based on separation of the main part: a neighboring equation with exactly known solution is constructed with the help of a deterministic Galerkin scheme. The variance of the method is analyzed, and an application to the radiosity equation of computer graphics, together with numerical test results, is given.
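The "separation of the main part" idea can be sketched on a plain one-dimensional integral (an illustrative stand-in for the integral-equation setting; the integrand, the main part and the sample size below are arbitrary choices, not from the paper):

```python
import numpy as np

# Separation of the main part (control-variate style variance reduction):
# estimate I = integral of f over [0,1] by writing f = g + (f - g), where
# g is a nearby "main part" whose integral is known exactly, and applying
# Monte Carlo only to the small remainder f - g.

f = np.exp                        # target integrand, integral = e - 1
g = lambda x: 1.0 + x             # main part (Taylor polynomial), integral = 1.5
exact_g = 1.5

rng = np.random.default_rng(0)
x = rng.random(10_000)

plain = f(x).mean()                          # crude Monte Carlo
separated = exact_g + (f(x) - g(x)).mean()   # main part handled exactly

truth = np.e - 1.0
print(abs(plain - truth), abs(separated - truth))
```

Because only the small remainder \(f-g\) is sampled, the estimator's variance shrinks roughly by the factor by which the main part captures the integrand.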
The \(L_2\)-discrepancy is a quantitative measure of precision for multivariate quadrature rules. It can be computed explicitly. Previously known algorithms needed \(O(m^2)\) operations, where \(m\) is the number of nodes. In this paper we present algorithms which require \(O(m(\log m)^d)\) operations, where \(d\) is the dimension.
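For reference, the naive \(O(m^2)\) evaluation can be written down directly from Warnock's explicit formula for the squared \(L_2\) star discrepancy (a generic sketch of the baseline, not the paper's fast algorithms):

```python
import numpy as np

# Warnock's explicit formula for the squared L2 star discrepancy of a
# point set in [0,1]^d, evaluated naively in O(m^2) operations.

def l2_star_discrepancy_sq(points):
    x = np.asarray(points, dtype=float)   # shape (m, d), points in [0,1]^d
    m, d = x.shape
    term1 = 3.0 ** (-d)
    term2 = -2.0 / m * np.sum(np.prod((1.0 - x ** 2) / 2.0, axis=1))
    # pairwise products over coordinates k of (1 - max(x_ik, x_jk))
    mx = np.maximum(x[:, None, :], x[None, :, :])
    term3 = np.sum(np.prod(1.0 - mx, axis=2)) / m ** 2
    return term1 + term2 + term3

# a single point at 1/2 in d = 1 has squared discrepancy exactly 1/12
print(l2_star_discrepancy_sq([[0.5]]))
```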
Computer processing of free-form surfaces forms the basis of a closed construction process reaching from surface design up to NC production.
Numerical simulation and visualization allow quality analysis before manufacture. A new aspect in surface analysis is described: the stability
of surfaces with respect to infinitesimal bendings. The stability concept is derived
from the kinematic meaning of a special vector field which is given by the deformation. Algorithms to calculate this vector field, together with an appropriate visualization method, give a tool able to analyze surface stability.
The CAD/CAM-based design of free-form surfaces is the beginning of a chain of operations which ends with the numerically controlled (NC) production of the designed object. During this process, shape control is an important step to ensure efficiency. Several surface interrogation methods already exist to analyze curvature and continuity behaviour of the shape. This paper deals with a new aspect of shape control: the stability of surfaces with respect to infinitesimal bendings. Each infinitesimal bending of a surface determines a so-called instability surface, which is used for the stability investigations. The kinematic meaning of this instability surface will be discussed, and we present algorithms to calculate it.
The local solution problem of multivariate Fredholm integral equations is studied. Recent research proved that for several function classes the complexity of this problem is closely related to the Gelfand numbers of some characterizing operators. The generalization of this approach to the situation of arbitrary Banach spaces is the subject of the present paper.
Furthermore, an iterative algorithm is described which, under some additional conditions, realizes the optimal error rate. How these general theorems work is demonstrated by applying them to integral equations in a Sobolev space of periodic functions with dominating mixed derivative of various orders.
Optimal degree reductions, i.e. best approximations of \(n\)-th degree Bezier curves by Bezier curves of degree \(n-1\), with respect to different norms are studied. It is shown that for any \(L_p\)-norm the Euclidean degree reduction, where the norm is applied to the Euclidean distance function of the two curves, is identical to componentwise degree reduction. The Bezier points of the degree reductions are found to lie on parallel lines through the Bezier points of any Taylor expansion of degree \(n-1\) of the original curve. This geometric situation is shown to hold also in the case of constrained degree reduction. The Bezier points of the degree reduction are given explicitly in the unconstrained case for \(p=1\) and \(p=2\), and in the constrained case for \(p=2\).
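A purely numerical sketch of the \(p=2\) case: the best degree-\((n-1)\) approximation can be computed by discretized least squares over the Bernstein basis (an illustration, not the paper's closed-form construction):

```python
import numpy as np
from math import comb

# L2 (p = 2) degree reduction, approximated numerically: find the degree
# n-1 Bezier control points whose curve best matches a degree-n curve in
# a discretized least-squares sense.

def bernstein_matrix(n, t):
    """Rows: parameter values t; columns: Bernstein polynomials B_i^n(t)."""
    return np.array([[comb(n, i) * s**i * (1 - s)**(n - i)
                      for i in range(n + 1)] for s in t])

def degree_reduce(ctrl, samples=200):
    ctrl = np.asarray(ctrl, dtype=float)
    n = len(ctrl) - 1
    t = np.linspace(0.0, 1.0, samples)
    target = bernstein_matrix(n, t) @ ctrl     # points on the original curve
    A = bernstein_matrix(n - 1, t)             # lower-degree basis
    reduced, *_ = np.linalg.lstsq(A, target, rcond=None)
    return reduced

# A degree-3 curve that is an elevated degree-2 curve is reduced exactly:
elevated = [0.0, 2/3, 2/3, 0.0]    # degree elevation of [0, 1, 0]
print(degree_reduce(elevated))     # → approximately [0, 1, 0]
```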
Intellectual control over software development projects requires the existence of an integrated set of explicit models of the products to be developed, the processes used to develop them, the resources needed, and the productivity and quality aspects involved. In recent years the development of languages, methods and tools for modeling software processes, analyzing and enacting them has become a major emphasis of software engineering research. The majority of current process research concentrates on prescriptive modeling of small, completely formalizable processes and their execution entirely on computers. This research direction has produced process modeling languages suitable for machine rather than human consumption. The MVP project, launched at the University of Maryland and continued at Universität Kaiserslautern, emphasizes building descriptive models of large, real-world processes and their use by humans and computers for the purpose of understanding, analyzing, guiding and improving software development projects. The language MVP-L has been developed with these purposes in mind. In this paper, we
motivate the need for MVP-L, introduce the prototype language, and demonstrate its uses. We assume that further improvements to our language will be triggered by lessons learned from applications and experiments.
Experience gathered from applying the software process modeling language MVP-L in software development organizations has shown the need for graphical representations of process models. Project members (i.e., non-MVP-L specialists) review models much more easily by using graphical representations. Although several graphical notations were developed for individual projects in which MVP-L was applied, there was previously no consistent definition of a mapping between textual MVP-L models and graphical representations. This report defines a graphical representation schema for MVP-L descriptions and combines previous results in a unified form. A basic set of building blocks (i.e., graphical symbols and text fragments) is defined, but because we must first gain experience with the new symbols, only rudimentary guidelines are given for composing basic symbols into a graphical representation of a model.
This paper introduces a new high-level programming language for a novel class of computational devices, namely data-procedural machines. These machines can be up to several orders of magnitude more efficient than computers following the von Neumann paradigm, while remaining as flexible and as universal as computers. Their efficiency and flexibility are achieved by using field-programmable logic as the essential technology platform. The paper briefly summarizes and illustrates the essential new features of this language by means of two example programs.
In this paper we investigate two optimization problems for matroids with multiple objective functions, namely finding the Pareto set and the max-ordering problem, which consists in finding a basis such that the largest objective value is minimal. We prove that the decision versions of both problems are NP-complete. A solution procedure for the max-ordering problem is presented, and a result on the relation of the solution sets of the two problems is given. The main results are a characterization of Pareto bases by a basis exchange property and, finally, a connectivity result for proper Pareto solutions.
In this paper we will introduce the concept of lexicographic max-ordering solutions for multicriteria combinatorial optimization problems. Section 1 provides the basic notions of multicriteria combinatorial optimization and the definition of lexicographic max-ordering solutions. In Section 2 we show that lexicographic max-ordering solutions are Pareto optimal as well as max-ordering optimal. Furthermore, lexicographic max-ordering solutions can be used to characterize the set of Pareto solutions. Further properties of lexicographic max-ordering solutions are given. Section 3 is devoted to algorithms. We give a polynomial time algorithm for the two-criteria case where one criterion is a sum and one is a bottleneck objective function, provided that the one-criterion sum problem is solvable in polynomial time. For bottleneck functions, an algorithm for the general case of \(Q\) criteria is presented.
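The selection rule behind lexicographic max-ordering can be sketched by brute force over an explicitly listed solution set (a toy illustration; the paper's algorithms avoid such enumeration):

```python
# Lexicographic max-ordering compares solutions by their objective
# vectors sorted in non-increasing order: first the worst component,
# then the second worst, and so on (objectives to be minimized).

def lex_max_ordering(solutions):
    """solutions: dict mapping solution name -> tuple of Q objective
    values.  Returns the lexicographic max-ordering optimum."""
    key = lambda name: sorted(solutions[name], reverse=True)
    return min(solutions, key=key)

candidates = {
    "a": (5, 1, 1),   # worst value 5
    "b": (3, 3, 2),   # worst value 3, then 3
    "c": (3, 2, 2),   # worst value 3, then 2 -> lex-max-ordering optimal
}
print(lex_max_ordering(candidates))   # → c
```

Note that "c" is also max-ordering optimal (its worst value, 3, is minimal) and Pareto optimal, consistent with the inclusion stated above.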
In multiple criteria optimization an important research topic is the topological structure of the set \( X_e \) of efficient solutions. Of major interest is the connectedness of \( X_e \), since it would allow the determination of \( X_e \) without considering non-efficient solutions in the process. We review general results on the subject, including the connectedness result for efficient solutions in multiple criteria linear programming. This result can be used to derive a definition of connectedness for discrete optimization problems. We present a counterexample to a previously stated result in this area, namely that the set of efficient solutions of the shortest path problem is connected. We also show that connectedness does not hold for another important problem in discrete multiple criteria optimization: the spanning tree problem.
Ion energy spectra of a laser-produced Ta plasma have been investigated as a function of the flight distance from the focus. The laser (Nd:YAG, 20 ns, 210 mJ) is incident obliquely (45°) and focused to an intensity of about \(10^{11}\) W cm\(^{-2}\). The changes in the ion distributions have been analysed for the Ta\(^{+}\) to Ta\(^{6+}\) ions in an expansion range of 64-220 cm. With increasing distance from the target, a weak but monotonic decrease is observed for the total number of ions, which is essentially due to the decrease in the number of the more highly charged species. For the Ta\(^{+}\) and Ta\(^{2+}\) ions the net changes approximately cancel. A more sophisticated picture of the recombination dynamics is obtained, however, if the changes within individual groups of ions expanding with different velocities are compared. Here, in the same spectrum, both increasing and decreasing ion numbers can be observed. This can be interpreted as direct evidence of recombination and its dependence on temperature, density and charge.
It is shown that nonvacuum pseudoparticles can account for quantum tunneling and metastability. In particular, the saddle-point nature of the pseudoparticles is demonstrated, as well as the evaluation of path integrals in their neighbourhood. Finally, the relation between instantons and bounces is used to derive a result conjectured by Bogomolny and Fateyev.
This report is intended to provide an introduction to the method of Smoothed Particle Hydrodynamics, or SPH. SPH is a very versatile, fully Lagrangian, particle-based code for solving fluid dynamical problems. Many technical aspects of the method are explained which can then be employed to extend the application of SPH to new problems.
The paper describes the concepts and background theory of the analysis of a neural-like network for the learning and replication of periodic signals containing a finite number of distinct frequency components. The approach is based on a two-stage process consisting of a learning phase, when the network is driven by the required signal, followed by a replication phase, where the network operates in an autonomous feedback mode whilst continuing to generate the required signal to a desired accuracy for a specified time. The analysis focuses on stability properties of a learning scheme based on model reference adaptive control, via the averaging method. The averaging analysis provides fast adaptive algorithms with proven convergence properties.
Cloudy inhomogeneities in artificial fabrics are graded by a fast method based on a Laplacian pyramid decomposition of the fabric image. This band-pass representation takes into account the scale character of the cloudiness. A quality measure of the entire cloudiness is obtained as a weighted mean over the variances of all scales.
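The grading scheme might be sketched as follows; the blur kernel, number of levels and weights are illustrative assumptions, not the values from the paper:

```python
import numpy as np

# Sketch of the cloudiness grading: decompose the image into a Laplacian
# pyramid (band-pass images at successive scales) and combine the
# per-scale variances into one weighted quality measure.

def blur_downsample(img):
    """Simple 2x2 box blur followed by factor-2 subsampling."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def upsample(img, shape):
    return np.kron(img, np.ones((2, 2)))[:shape[0], :shape[1]]

def cloudiness(img, levels=3, weights=(1.0, 1.0, 1.0)):
    img = np.asarray(img, dtype=float)
    variances = []
    for _ in range(levels):
        low = blur_downsample(img)
        band = img - upsample(low, img.shape)   # band-pass (Laplacian) level
        variances.append(band.var())
        img = low
    return float(np.dot(weights, variances) / np.sum(weights))

flat = np.ones((64, 64))                       # perfectly uniform fabric
rng = np.random.default_rng(1)
cloudy = flat + rng.normal(0, 0.2, size=(64, 64))
print(cloudiness(flat), cloudiness(cloudy))    # uniform gives 0.0
```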
By the use of locally supported basis functions for spherical spline interpolation, the applicability of this approximation method is extended, since the resulting interpolation matrix is sparse and thus efficient solvers can be used. In this paper we study locally supported kernels in detail. Investigations of the Legendre coefficients allow a characterization of the underlying Hilbert space structure. We show how spherical spline interpolation with polynomial precision can be managed with locally supported kernels, thus giving the possibility to combine approximation techniques based on spherical harmonic expansions with those based on locally supported kernels.
Recently, Xu and Cheney (1992) proved that if all the Legendre coefficients of a zonal function defined on a sphere are positive, then the function is strictly positive definite. It will be shown in this paper that even if finitely many of the Legendre coefficients are zero, strict positive definiteness can still be assured. The results are based on approximation properties of singular integrals, and also provide a completely different proof of the results of Xu and Cheney.
The paper describes the concepts and background theory for the analysis of a neural-like network for learning and replication of periodic signals containing a finite number of distinct frequency components. The approach is based on the combination of ideas from dynamic neural networks and systems and control theory, where concepts of dynamics, adaptive control and tracking of specified time signals are fundamental. The proposed procedure is a two-stage process consisting of a learning phase, when the network is driven by the required signal, followed by a replication phase, where the network operates in an autonomous feedback mode whilst continuing to generate the required signal to a desired accuracy for a specified time. The analysis draws on currently available control theory and, in particular, on concepts from model reference adaptive control.
In this paper we consider a certain class of geodetic linear inverse problems \(\Lambda F = G\) in a reproducing kernel Hilbert space setting to obtain a bounded generalized inverse of the operator \(\Lambda\). For a numerical realization we assume \(G\) to be given at a finite number of discrete points, to which we apply a spherical spline interpolation method adapted to the Hilbert spaces. By applying the generalized inverse to the obtained spline interpolant we get an approximation of the solution \(F\). Finally, our main task is to show some properties of the approximated solution and to prove convergence results as the data set increases.
Double Scaling Limits, Airy Functions and Multicritical Behaviour in O(N) Vector Sigma Models
(1995)
O(N) vector sigma models possessing catastrophes in their action are studied. Coupling the limit \(N \to \infty\) with an appropriate scaling behaviour of the coupling constants, the partition function develops a singular factor. This is a generalized Airy function in the case of spacetime dimension zero, and the partition function of a scalar field theory for positive spacetime dimension.
Hyperbolic planes
(1995)
A survey on continuous, semidiscrete and discrete well-posedness and scale-space results for a class of nonlinear diffusion filters is presented. This class does not require any monotonicity assumption (comparison principle) and thus allows image restoration as well. The theoretical results include existence, uniqueness, continuous dependence on the initial image, maximum-minimum principles, average grey level invariance, smoothing Lyapunov functionals, and convergence to a constant steady state.
Spline functions that approximate data given on the sphere are developed in a weighted Sobolev space setting. The flexibility of the weights makes possible the choice of the approximating function in a way which emphasizes attributes desirable for the particular application area. Examples show that certain choices of the weight sequences yield known methods. A convergence theorem containing explicit constants yields a usable error bound. Our survey ends with the discussion of spherical splines in geodetically relevant pseudodifferential equations.
This survey contains a description of different types of mathematical models used for the simulation of vehicular traffic. It includes models based on ordinary differential equations, fluid dynamic equations and on equations of kinetic type. Connections between the different types of models are mentioned. Particular emphasis is put on kinetic models and on simulation methods for these models.
With this article we would first like to give a brief review of wavelet thresholding methods in non-Gaussian and non-i.i.d. situations, respectively. Many of these applications are based on Gaussian approximations of the empirical coefficients. For regression and density estimation with independent observations, we establish joint asymptotic normality of the empirical coefficients by means of strong approximations. Then we describe how one can prove asymptotic normality under mixing conditions on the observations by cumulant techniques. In the second part, we apply these non-linear adaptive shrinking schemes to spectral estimation problems for both a stationary and a non-stationary time series setup. For the latter, in a model of Dahlhaus on the evolutionary spectrum of a locally stationary time series, we present two different approaches. Moreover, we show that in classes of anisotropic function spaces an appropriately chosen wavelet basis automatically adapts to possibly different degrees of regularity in the different directions. The resulting fully-adaptive spectral estimator attains the rate that is optimal in the idealized Gaussian white noise model, up to a logarithmic factor.
This paper is devoted to the mathematical description of the solution of the so-called rainflow reconstruction problem, i.e. the problem of constructing a time series with an a priori given rainflow matrix. The algorithm we present is mathematically exact in the sense that no approximations or heuristics are involved. Furthermore, it generates a uniform distribution over all possible reconstructions and thus an optimal randomization of the reconstructed series. The algorithm is a genuine on-line scheme. It is easily adjustable to all variants of rainflow, such as symmetric and asymmetric versions and different residue techniques.
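For orientation, the forward rainflow counting that the reconstruction problem inverts can be sketched with the classical four-point rule (a simplified illustration; residue handling and the symmetric/asymmetric variants mentioned above are omitted):

```python
# Simplified four-point rainflow counting: scan the turning points of a
# series, and whenever an inner range is enclosed by both of its
# neighbouring ranges, extract it as one full cycle.

def rainflow_cycles(turning_points):
    stack, cycles = [], []
    for p in turning_points:
        stack.append(p)
        while len(stack) >= 4:
            a, b, c, d = stack[-4:]
            if abs(c - b) <= abs(b - a) and abs(c - b) <= abs(d - c):
                cycles.append((min(b, c), max(b, c)))  # one full cycle
                del stack[-3:-1]                       # drop b and c
            else:
                break
    return cycles, stack     # full cycles and the remaining residue

cycles, residue = rainflow_cycles([0, 3, 1, 4])
print(cycles, residue)   # → [(1, 3)] [0, 4]
```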
Both the local strain approach and rainflow counting are well-known and approved tools for the numerical estimation of the lifetime of a newly developed part, especially in the automotive industry. This paper is devoted to the combination of both tools, and a new algorithm is given that takes advantage of the inner structure of the most commonly used damage parameters.
This paper deals with domain decomposition methods for kinetic and drift diffusion semiconductor equations. In particular accurate coupling conditions at the interface between the kinetic and drift diffusion domain are given. The cases of slight and strong nonequilibrium situations at the interface are considered and some numerical examples are shown.
A way to consistently derive kinetic models for vehicular traffic from microscopic follow-the-leader models is presented. The obtained class of kinetic equations is investigated. Explicit examples of kinetic models are developed, with particular emphasis on obtaining models that give realistic results. For space-homogeneous traffic flow situations, numerical examples are given, including stationary distributions and fundamental diagrams.
The ideas of texture analysis by means of the structure tensor are combined with the scale-space concept of anisotropic diffusion filtering. In contrast to many other nonlinear diffusion techniques, the proposed one uses a diffusion tensor instead of a scalar diffusivity. This allows truly anisotropic behaviour. The preferred diffusion direction is determined according to the phase angle of the structure tensor. The diffusivity in this direction increases with the local coherence of the signal. The filter is constructed in such a way that it gives a mathematically well-founded scale-space representation of the original image. Experiments demonstrate its usefulness for the processing of interrupted one-dimensional structures such as fingerprint and fabric images.
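The structure tensor driving such a filter can be sketched as follows; global averaging stands in here for the local Gaussian smoothing of the paper, so the code yields a single coherence value for the whole image:

```python
import numpy as np

# Sketch of the structure tensor: average the outer product of the image
# gradient, then read off the coherence from the tensor's eigenvalues.
# (mu1 - mu2)^2 / (mu1 + mu2)^2 is close to 1 for strongly oriented,
# one-dimensional structure and close to 0 for isotropic regions.

def structure_tensor_coherence(img):
    iy, ix = np.gradient(np.asarray(img, dtype=float))
    # entries of the averaged structure tensor J = grad(I) grad(I)^T
    jxx, jxy, jyy = (ix * ix).mean(), (ix * iy).mean(), (iy * iy).mean()
    tmp = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2)
    mu1, mu2 = (jxx + jyy + tmp) / 2.0, (jxx + jyy - tmp) / 2.0  # eigenvalues
    return ((mu1 - mu2) / (mu1 + mu2)) ** 2 if mu1 + mu2 > 0 else 0.0

# vertical stripes: one dominant gradient direction, coherence near 1
x = np.arange(64)
stripes = np.tile(np.sin(0.5 * x), (64, 1))
print(structure_tensor_coherence(stripes))   # close to 1.0
```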
Some new approximation methods are described for harmonic functions corresponding to boundary values on the (unit) sphere. Starting from the usual Fourier (orthogonal) series approach, we propose here nonorthogonal expansions, i.e. series expansions in terms of overcomplete systems consisting of localizing functions. In detail, we are concerned with the so-called Gabor, Toeplitz, and wavelet expansions. Essential tools are modulations, rotations, and dilations of a mother wavelet. The Abel-Poisson kernel turns out to be the appropriate mother wavelet in approximation of harmonic functions from potential values on a spherical boundary.
A concept of generalized discrepancy, which involves pseudodifferential operators to give a criterion for equidistributed point sets, is developed on the sphere. A simply structured formula in terms of elementary functions is established for the computation of the generalized discrepancy. With the help of this formula, five kinds of point systems on the sphere, namely lattices in polar coordinates, transformed 2-dimensional sequences, rotations on the sphere, triangulation, and the sum-of-three-squares sequence, are investigated. Quantitative tests are done, and the results are compared with each other. Our calculations exhibit different orders of convergence of the generalized discrepancy for different types of point systems.
The basic theory of spherical singular integrals is recapitulated. Criteria are given for measuring the space-frequency localization of functions on the sphere. The trade-off between space localization on the sphere and frequency localization in terms of spherical harmonics is described in the form of an uncertainty principle. A continuous version of spherical multiresolution is introduced, starting from a continuous wavelet transform corresponding to spherical wavelets with vanishing moments up to a certain order. The wavelet transform is characterized by least-squares properties. Scale discretization enables us to construct spherical counterparts of wavelet packets and scale discrete Daubechies wavelets. It is shown that singular integral operators forming a semigroup of contraction operators of class \((C_0)\) (like Abel-Poisson or Gauß-Weierstraß operators) lead in a canonical way to pyramid algorithms. Fully discretized wavelet transforms are obtained via approximate integration rules on the sphere. Finally, applications to (geo-)physical reality are discussed in more detail. A combined method is proposed for approximating the low frequency parts of a physical quantity by spherical harmonics and the high frequency parts by spherical wavelets. The particular significance of this combined concept is motivated by the situation of today's physical geodesy, viz. the determination of the high frequency parts of the earth's gravitational potential under explicit knowledge of the lower order part in terms of a spherical harmonic expansion.