Refine
Year of publication
- 1997 (97)
Document Type
- Preprint (66)
- Article (17)
- Report (9)
- Doctoral Thesis (2)
- Diploma Thesis (1)
- Master's Thesis (1)
- Periodical (1)
Keywords
- AG-RESY (7)
- PARO (7)
- SKALP (2)
- Anisotropic smoothness classes (1)
- Bayesrisiko (1)
- Bewegungsplanung (1)
- Brownian motion (1)
- C (1)
- CAx-Anwendungen (1)
- CODET (1)
- CoMo-Kit (1)
- Dense gas (1)
- Diffusionsprozess (1)
- Elliptic-parabolic equation (1)
- Enskog equation (1)
- Function of bounded variation (1)
- Integral transform (1)
- Intelligent Object Fusion (1)
- Internet knowledge base (1)
- Internet knowledge reuse (1)
- Jacobian (1)
- Java (1)
- Kohonen's SOM (1)
- Laplace transform (1)
- Locally stationary processes (1)
- Moment sequence (1)
- Netz-Architekturen (1)
- Netzwerkmanagement (1)
- Neural networks (1)
- PVM (1)
- Panel clustering (1)
- Parallel Virtual Machines (1)
- Robotik (1)
- Scalar-type operator (1)
- Software Agents (1)
- Stieltjes transform (1)
- Suchve (1)
- Tcl (1)
- Workstation-Cluster (1)
- adaptive estimation (1)
- asymptotic analysis (1)
- authentication (1)
- automated theorem proving (1)
- autonomous systems (1)
- average density (1)
- business process reengineering (1)
- byte code (1)
- compact operator equation (1)
- density distribution (1)
- drift-diffusion limit (1)
- dynamical systems (1)
- entropy (1)
- finite pointset method (1)
- finite-difference methods (1)
- higher-order calculi (1)
- interpreter (1)
- kinetic semiconductor equations (1)
- kinetic theory (1)
- lacunarity distribution (1)
- local stationarity (1)
- localization (1)
- logarithmic averages (1)
- migration (1)
- minimax estimation (1)
- motion planning (1)
- multi-language (1)
- multiresolution (1)
- non-linear wavelet thresholding (1)
- non-stationary time series (1)
- numerical integration (1)
- numerical methods for stiff equations (1)
- object-oriented software modeling (1)
- occupation measure (1)
- one-dimensional self-organization (1)
- optimal rate of convergence (1)
- order-three density (1)
- parallel algorithms (1)
- parallel numerical algorithms (1)
- parallel processing (1)
- parallelism and concurrency (1)
- particle method (1)
- persistence (1)
- phase-space (1)
- porous media (1)
- quantum chaos (1)
- quantum mechanics (1)
- quantum tunneling (1)
- regularization wavelets (1)
- review (1)
- robot control (1)
- robot kinematics (1)
- robotics (1)
- security domain (1)
- semiclassical quantisation (1)
- shock wave (1)
- software reuse (1)
- spline and wavelet based determination of the geoid and the gravitational potential (1)
- stationarity (1)
- tensor product basis (1)
- test (1)
- threshold choice (1)
- time series (1)
- time-frequency plane (1)
- time-varying covariance (1)
- wavelet thresholding (1)
- wavelets (1)
- winner definition (1)
Faculty / Organisational entity
- Kaiserslautern - Fachbereich Mathematik (36)
- Kaiserslautern - Fachbereich Informatik (35)
- Kaiserslautern - Fachbereich Physik (18)
- Kaiserslautern - Fachbereich Maschinenbau und Verfahrenstechnik (4)
- Kaiserslautern - Fachbereich Wirtschaftswissenschaften (3)
- Kaiserslautern - Fachbereich Elektrotechnik und Informationstechnik (1)
This diploma thesis gives a short introduction to the field of diffusion processes (described as solutions of stochastic differential equations) and of large deviations. Using methods from large deviations theory, the asymptotic behaviour of the Bayes risk for distinguishing between two diffusion processes is then investigated.
The Internet has fallen prey to its most successful service, the World-Wide Web. The networks do not keep up with the demands incurred by the huge number of Web surfers. Thus, it takes longer and longer to obtain the information one wants to access via the World-Wide Web. Many solutions to the problem of network congestion have been developed in distributed systems research in general and distributed file and database systems in particular. The introduction of caching and replication strategies has proven to help in many situations, and therefore these techniques are also applied to the WWW. Although most problems and associated solutions are known, some circumstances are different with the Web, forcing the adaptation of known strategies. This paper gives an overview of these differences and of currently deployed, developed, and evaluated solutions.
We derive minimax rates for estimation in anisotropic smoothness classes. This rate is attained by a coordinatewise thresholded wavelet estimator based on a tensor product basis with a separate scale parameter for every dimension. It is shown that this basis is superior to its one-scale multiresolution analogue if different degrees of smoothness in different directions are present. As an important application we introduce a new adaptive wavelet estimator of the time-dependent spectrum of a locally stationary time series. Using this model, which was recently developed by Dahlhaus, we show that the resulting estimator nearly attains the rate that is optimal in Gaussian white noise, simultaneously over a wide range of smoothness classes. Moreover, by our new approach we overcome the difficulty of how to choose the right amount of smoothing, i.e. how to adapt to the appropriate resolution, when reconstructing the local structure of the evolutionary spectrum in the time-frequency plane.
We present a distributed system, Dott, for approximately solving the Traveling Salesman Problem (TSP) based on the Teamwork method. So-called experts and specialists work independently and in parallel for given time periods. For TSP, specialists are tour construction algorithms, and experts use modified genetic algorithms in which, after each application of a genetic operator, the resulting tour is locally optimized before it is added to the population. After a given time period, the work of each expert and specialist is judged by a referee. A new start population, including selected individuals from each expert and specialist, is generated by the supervisor, based on the judgments of the referees. Our system is able to find better tours than each of the experts or specialists working alone. Also, results comparable to those of single runs can be found much faster by a team.
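As a rough illustration of the expert step described above (a genetic operator followed by local optimization of the offspring), consider the following sketch. It is a generic memetic-algorithm fragment with hypothetical names, not the Dott interfaces; it uses order crossover with 2-opt as the local optimizer.

import random

# Hypothetical sketch of one expert step: order crossover, then 2-opt
# local optimization of the offspring before it joins the population.
def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, dist) < tour_length(tour, dist):
                    tour, improved = cand, True
    return tour

def expert_step(population, dist):
    a, b = random.sample(population, 2)
    cut1, cut2 = sorted(random.sample(range(len(a)), 2))
    segment = a[cut1:cut2]
    child = segment + [c for c in b if c not in segment]   # order crossover
    child = two_opt(child, dist)        # local optimization before insertion
    population.append(child)
    population.sort(key=lambda t: tour_length(t, dist))
    return population[:-1]              # drop the worst tour

random.seed(0)
n = 25
pts = [(random.random(), random.random()) for _ in range(n)]
dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in pts]
        for ax, ay in pts]
pop = [random.sample(range(n), n) for _ in range(8)]
for _ in range(20):
    pop = expert_step(pop, dist)
print(tour_length(pop[0], dist))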
The intuitionistic calculus mj for sequents, in which no logical symbols other than those for implication and universal quantification occur, is introduced and analysed. It allows a simple backward application, called mj-reduction here, for searching for derivation trees. Terms needed in mj-reduction can be found with the unification algorithm. mj-Reduction with unification can be seen as a natural extension of SLD-resolution. mj-Derivability of the sequents considered here coincides with derivability in Johansson's minimal intuitionistic calculus LHM in [6]. Intuitionistic derivability of formulae with negation, and classical derivability of formulae with all usual logical symbols, can be expressed via mj-derivability and hence be verified by mj-reduction. mj-Derivations can easily be translated into LJ-derivations without "Schnitt" (cut), or into NJ-derivations in a slightly sharpened form of Prawitz' normal form. In the first three sections, the systematic use of mj-reduction for proving in predicate logic is emphasized. Although the fourth section, the last and largest, is exclusively devoted to the mathematical analysis of the calculus mj, the first three sections may be of interest to a wider readership, including readers looking for applications of symbolic logic. Unfortunately, the mathematical analysis of the calculus mj, like the study of Gentzen's calculi, demands a large amount of technical work that obscures the natural unfolding of the argumentation. To alleviate this, definitions and theorems are completely embedded in the text to provide a fluent and balanced mathematical discourse: new concepts are indicated in bold-face, and proofs of assertions are outlined, or omitted when it is assumed that the reader can provide them.
Primary decomposition of an ideal in a polynomial ring over a field belongs to the indispensable theoretical tools in commutative algebra and algebraic geometry. Geometrically it corresponds to the decomposition of an affine variety into irreducible components and is, therefore, also an important geometric concept. The decomposition of a variety into irreducible components is, however, slightly weaker than the full primary decomposition, since the irreducible components correspond only to the minimal primes of the ideal of the variety, which is a radical ideal. The embedded components, although invisible in the decomposition of the variety itself, are responsible for many geometric properties, in particular if we deform the variety slightly. Therefore, they cannot be neglected, and knowledge of the full primary decomposition is important also in a geometric context. In contrast to its theoretical importance, one finds in mathematical papers only very few concrete examples of non-trivial primary decompositions, because carrying out such a decomposition by hand is almost impossible. This experience corresponds to the fact that providing efficient algorithms for the primary decomposition of an ideal \(I \subseteq K[x_1, \dots, x_n]\), K a field, is also a difficult task and still one of the big challenges for computational algebra and computational algebraic geometry. All known algorithms require Gröbner bases or characteristic sets, respectively, and multivariate polynomial factorization over some (algebraic or transcendental) extension of the given field K. The first practical algorithm for computing the minimal associated primes is based on characteristic sets and the Ritt-Wu process ([R1], [R2], [Wu], [W]); the first practical and general primary decomposition algorithm was given by Gianni, Trager and Zacharias [GTZ]. New ideas from homological algebra were introduced by Eisenbud, Huneke and Vasconcelos in [EHV]. Recently, Shimoyama and Yokoyama [SY] provided a new algorithm, using Gröbner bases, to obtain the primary decomposition from given minimal associated primes. In the present paper we present all four approaches, together with some improvements and with detailed comparisons, based upon an analysis of 34 examples using the computer algebra system SINGULAR [GPS]. Since primary decomposition is a fairly complicated task, it is best explained by dividing it into several subtasks, in particular since sometimes only one of these subtasks is needed in practice. The paper is organized in such a way that we consider the subtasks separately and present the different approaches of the above-mentioned authors, with several tricks and improvements incorporated. Some of these improvements, and the combination of certain steps from the different algorithms, are essential for improving the practical performance.
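Readers who want to experiment can reach the algorithms discussed here through SINGULAR or systems built on it. The following sketch assumes a SageMath environment (whose primary_decomposition() delegates to SINGULAR) and decomposes a standard textbook ideal with one embedded component.

# Assumes a SageMath environment; primary_decomposition() delegates to SINGULAR.
from sage.all import PolynomialRing, QQ

R = PolynomialRing(QQ, ['x', 'y'])
x, y = R.gens()
I = R.ideal([x**2, x*y])                  # (x^2, xy) = (x) ∩ (x^2, y)
for Q in I.primary_decomposition():
    print(Q, "  associated prime:", Q.radical())

Here the component \((x^2, y)\) is embedded: its associated prime \((x, y)\) contains the minimal prime \((x)\), so it is invisible in the variety (the line x = 0) itself.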
In this report we treat an optimization task which should speed up the choice of nonwovens for diaper production. A mathematical model for the liquid transport in nonwoven is developed. The main attention is focused on the handling of fully and partially saturated zones, which leads to a parabolic-elliptic problem. Finite-difference schemes are proposed for the numerical solution of the differential problem. Parallel algorithms are considered and results of numerical experiments are given.
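A minimal sketch of the flavour of such a scheme is given below: an explicit finite-difference discretization of a nonlinear diffusion model \(s_t = (D(s)s_x)_x\) for the saturation s in one space dimension, with the saturation capped at 1. The diffusivity and all parameters are illustrative, not the model of the report.

import numpy as np

# Explicit finite-difference sketch for a nonlinear diffusion model of the
# liquid saturation s in a 1D strip of nonwoven. Illustrative parameters only.
def D(s):
    return 0.01 + s**2                     # saturation-dependent diffusivity

nx, nt, dx, dt = 101, 5000, 0.01, 2e-5     # dt below the limit dx^2 / (2 max D)
s = np.zeros(nx)
s[0] = 1.0                                 # fully saturated inlet (Dirichlet)
for _ in range(nt):
    flux = D(0.5 * (s[1:] + s[:-1])) * np.diff(s) / dx   # midpoint diffusivity
    s[1:-1] += dt * np.diff(flux) / dx
    s[-1] = s[-2]                          # zero-flux outlet
    np.clip(s, 0.0, 1.0, out=s)            # partially vs. fully saturated zones
print(s[::10])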
We show that the occupation measure on the path of a planar Brownian motion run for an arbitrary finite time interval has an average density of order three with respect to the gauge function t^2 log(1/t). This is a surprising result, as it seems to be the first instance where gauge functions other than t^s and average densities of order higher than two appear naturally. We also show that the average density of order two fails to exist, and prove that the density distributions, or lacunarity distributions, of order three of the occupation measure of a planar Brownian motion are gamma distributions with parameter 2.
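For orientation: in the Bedford-Fisher normalization (notation ours, not necessarily the paper's), the average densities of order two and three of a measure \(\mu\) at a point x with respect to a gauge function \(\varphi\) are the iterated logarithmic averages

\[ D^2_\varphi(\mu,x) = \lim_{\varepsilon \downarrow 0} \frac{1}{\log(1/\varepsilon)} \int_\varepsilon^1 \frac{\mu(B(x,t))}{\varphi(t)} \, \frac{dt}{t}, \qquad D^3_\varphi(\mu,x) = \lim_{\varepsilon \downarrow 0} \frac{1}{\log\log(1/\varepsilon)} \int_\varepsilon^{1/e} \frac{\mu(B(x,t))}{\varphi(t)} \, \frac{dt}{t \log(1/t)}, \]

so the result says that for the occupation measure with \(\varphi(t) = t^2 \log(1/t)\) the second limit exists while the first does not.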
We describe a platform for the portable and secure execution of mobile agents written in various interpreted languages on top of a common run-time core. Agents may migrate at any point in their execution, fully preserving their state, and may exchange messages with other agents. One system may contain many virtual places, each establishing a domain of logically related services under a common security policy governing all agents at this place. Agents are equipped with allowances limiting their resource accesses, both globally per agent lifetime and locally per place. We discuss aspects of this architecture and report on ongoing work.
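The allowance mechanism can be pictured as two nested budgets, one per agent lifetime and one per place. The following sketch is purely illustrative; all class and method names are ours, not the platform's API.

# Hypothetical sketch of the place/allowance idea.
class Allowance:
    def __init__(self, budget):
        self.budget = budget
    def charge(self, amount):
        if amount > self.budget:
            raise PermissionError("allowance exhausted")
        self.budget -= amount

class Place:
    """A virtual place: logically related services under one security policy."""
    def __init__(self, name, per_place_budget):
        self.name, self.per_place_budget, self.agents = name, per_place_budget, []
    def admit(self, agent):
        agent.local = Allowance(self.per_place_budget)   # reset on migration
        self.agents.append(agent)

class Agent:
    def __init__(self, lifetime_budget):
        self.lifetime = Allowance(lifetime_budget)       # global, per lifetime
        self.local = None                                # set by the current place
    def access(self, cost):
        self.lifetime.charge(cost)   # charged against the whole lifetime ...
        self.local.charge(cost)      # ... and against the current place

home = Place("home", per_place_budget=10)
a = Agent(lifetime_budget=100)
home.admit(a)
a.access(3)   # succeeds; a.access(8) would now exhaust the place allowance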
Sudakov's typical marginals, random linear functionals and a conditional central limit theorem
(1997)
V.N. Sudakov [Sud78] proved that the one-dimensional marginals of a high-dimensional second order measure are close to each other in most directions. Extending this and a related result in the context of projection pursuit of P. Diaconis and D. Freedman [Dia84], we give, for a probability measure P and a random (a.s.) linear functional F on a Hilbert space, simple sufficient conditions under which most of the one-dimensional images of P under F are close to their canonical mixture, which turns out to be almost a mixed normal distribution. Using the concept of approximate conditioning, we deduce a conditional central limit theorem (theorem 3) for random averages of triangular arrays of random variables which satisfy only fairly weak asymptotic orthogonality conditions.
In order to compute stationary or quasi-stationary Ohmic currents in conducting media, a unified hypercomplex field equation is derived from complexified Maxwell equations by means of the Clifford product. For complex conductivity fields that are translation-invariant along one axis, one dimension is separated off and the remaining two spatial dimensions are identified with the complex plane. This identification can be defined explicitly and completely canonically through the Clifford formalism, since both the complex numbers and position vectors are contained in the Clifford algebra. Since the spinor field equation is solved directly, gauge problems, as they are common for the corresponding potential equations, do not arise at all. Lifting the spinor field equation from \(\mathbb{R}^2 \to \mathbb{C}^2\) makes immediately apparent how important monogenic (holomorphic) functions are for the solution of this equation.
The associated boundary condition is in general neither purely of Neumann nor of Dirichlet type. Starting from elementary solutions for \(\delta\)-sources in domains of constant conductivity, field solutions for composite domains are constructed by continuing these solutions by means of the boundary condition.
In contrast to domains with only one boundary, it is much more difficult for multiply bounded domains to match the local solutions so that all boundary conditions are satisfied. Therefore a new solution method is presented which solves the local field equations and all boundary conditions by successive construction of mirror-pole series. This method is illustrated for several classes of geometric configurations whose topological differences directly affect the structure of the mirror-pole distributions.
In the discussion, the case of N circular anomalies in a circular disc is emphasized, since this class of problems is also of particular interest in medical physics, in the field of impedance tomography. The solutions allow the connectivity number to be varied via the relative conductivity differences. Studies of the potential distribution on the boundary, as they are essential for electrical impedance tomography, are carried out partly by numerical and partly by analytical computations. Complex potentials can easily be computed from the field solutions by replacing the typical pole terms \(\displaystyle{1 \over z-p}\) by the complex logarithms \(- \log(z-p)\).
The electric potential is obtained from the complex potential as its real part. The imaginary part is of great importance for the visualization of the vector fields. It is shown that the level lines of this imaginary part, which is also known from fluid mechanics as the stream function, yield precisely the field lines of the associated field.
For electrical impedance tomography, the resolving power is discussed using the example of a small, concentrically positioned anomaly, from which, among other things, an optimal position of the injection poles results. The analytical results clearly show that maximal potential changes on the boundary are obtained for diametrically arranged injection poles.
The studies of stream functions required for the visualization of the fields also yielded, among other things, a way of computing stream functions for fields in \(\mathbb{R}^3\). Furthermore, a possible choice of the cuts of this multi-sheeted function is given explicitly for the case of the disc with N anomalies, and the advantages of this particular choice are demonstrated by numerical studies. Typical plots of field and potential lines, of distributions of mirror poles, and of the potential and stream functions themselves illustrate the advantages of this solution method. For very many configurations that are important in practice, the high speed of convergence is a particular advantage, making it possible to produce field-line plots of these solutions in a short time on a PC.
The concept of algebraic simplification is of great importance for the field of symbolic computation in computer algebra. In this paper we review some fundamental concepts concerning reduction rings in the spirit of Buchberger. The most important properties of reduction rings are presented. The techniques for presenting monoids or groups by string rewriting systems are used to define several types of reduction in monoid and group rings. Gröbner bases in this setting arise naturally as generalizations of the corresponding known notions in the commutative and some non-commutative cases. Several results on the connection of the word problem and the congruence problem are proven. The concepts of saturation and completion are introduced for monoid rings having a finite convergent presentation by a semi-Thue system. For certain presentations, including free groups and context-free groups, the existence of finite Gröbner bases for finitely generated right ideals is shown and a procedure to compute them is given.
Static magnetic and spin wave properties of square lattices of permalloy micron dots with thicknesses of 500 Å and 1000 Å and with varying dot separations have been investigated. The spin wave frequencies can be well described taking into account the demagnetization factor of each single dot. A magnetic four-fold anisotropy was found for the lattice with dot diameters of 1 micrometer and a dot separation of 0.1 micrometer. The anisotropy is attributed to an anisotropic dipole-dipole interaction between magnetically unsaturated parts of the dots. The anisotropy strength (of the order of 100000 erg/cm^3) decreases with increasing in-plane applied magnetic field.
In modern approximation methods, linear combinations in terms of (space-localizing) radial basis functions play an essential role. Areas of application are numerical integration formulas on the unit sphere \(\Omega\) corresponding to prescribed nodes, spherical spline interpolation, and spherical wavelet approximation. The evaluation of such a linear combination is a time-consuming task, since a certain number of summations, multiplications, and calculations of scalar products are required. This paper presents a generalization of the panel clustering method in a spherical setup. The economy and efficiency of panel clustering is demonstrated for three fields of interest, namely upward continuation of the earth's gravitational potential, geoid computation by spherical splines, and wavelet reconstruction of the gravitational potential.
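The idea behind panel clustering can be conveyed by a toy one-level version for a generic smooth kernel: groups of nodes far from the evaluation point are collapsed to a single centroid evaluation. The sketch below is only this caricature (the spherical algorithm of the paper uses a hierarchical subdivision and controlled kernel expansions); all names and parameters are ours.

import numpy as np

# Toy one-level "clustering" of the sum F(x) = sum_i a_i K(x, y_i).
def K(x, y):
    return 1.0 / (1.0 + np.linalg.norm(x - y))       # illustrative kernel

def clustered_sum(x, nodes, coeffs, n_groups=10, eta=2.0):
    groups = np.array_split(np.argsort(nodes[:, 0]), n_groups)
    total = 0.0
    for g in groups:
        ys, a = nodes[g], coeffs[g]
        c = ys.mean(axis=0)
        diam = max(np.linalg.norm(y - c) for y in ys)
        if np.linalg.norm(x - c) > eta * diam:       # far field: one evaluation
            total += a.sum() * K(x, c)
        else:                                        # near field: exact sum
            total += sum(ai * K(x, y) for ai, y in zip(a, ys))
    return total

rng = np.random.default_rng(0)
nodes, coeffs = rng.normal(size=(1000, 3)), rng.normal(size=1000)
x = np.array([5.0, 0.0, 0.0])
print(sum(ai * K(x, y) for ai, y in zip(coeffs, nodes)),
      clustered_sum(x, nodes, coeffs))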
The Intelligent Network (IN) model is an abstraction of telephone switching systems and also describes their extensions. First, a simple base system is specified, which is then extended by additional service features, so-called features. In this work, we extended an existing base system specified in Estelle by six features. In doing so, we were able to examine different styles for specifying features in Estelle. We design principles for a behaviour-preserving transformation that can create suitable hooks for new features. For adding new phone numbers we developed a simple method. We point out two weaknesses of Estelle in extending systems. Finally, we report on our experiences with the principle of detection points used in the IN model.
Many development processes, such as those needed in the design of large software systems, are based primarily on the knowledge of the staff entrusted with the development. With growing complexity of the design tasks and a growing number of staff members in a project, the coordination and distribution of this knowledge becomes more and more problematic. For this reason, there are increasing attempts to store and manage the staff's knowledge in electronic form, i.e. in computers. Since the design of a complex system is likewise modelled on the computer, required knowledge is immediately available and can be used for decision support. Especially in the planning of large projects, however, decisions are often pending that can only be made later, during enactment. Since common workflow management systems usually require complete modelling before the enactment of a project model can begin, this approach has turned out to be rather unsuitable precisely for large projects.
In this paper a group of participants of the 12th European Summer Institute, which took place in Tenerife, Spain, in June 1995, present their views on the state of the art and the future trends in Locational Analysis. The issues discussed include modelling aspects in discrete, network, and continuous location, heuristic techniques, the state of technology, and undesirable facility location. Some general questions are stated regarding the applicability of location models, promising research directions, and the way technology affects the development of solution techniques.
Sokrates und das Nichtwissen
(1997)
Software Products As Objects
(1997)
This paper describes our experiences in modeling entire software products (trees of software files) as objects. Container pnodes (product nodes) have user-defined Internet-unique names, data types, and methods (operations). Pnodes can contain arbitrary collections of software files that represent programs, libraries, documents, or other software products. Pnodes can contain multiple software products, so that header files, libraries, and program products may all be stored within one pnode. Pnodes can contain views that list other pnodes in order to form large conceptual structures of pnodes. Typical pnode-object methods include: fetching and storing into version-controlled repositories; dynamic analysis of pnode contents to generate makefiles of arbitrary complexity; local automated build operations; Internet-scalable distributed repository synchronizations; Internet-scalable, multi-platform, distributed build operations; extraction and generation of online API documentation; spell checking of document pnodes; and so on. Since methods are user-defined, they can be arbitrarily complex. Modeling software products as objects provides a large amount of effort leverage, since one person can define the methods and many people can use them in extensively automated ways.
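The following sketch illustrates the pnode idea in a few lines: a named container of files carrying user-defined methods. All names and the toy makefile "method" are ours, not the actual system's interface.

# Hypothetical sketch of a pnode as a container of files with user-defined methods.
class Pnode:
    def __init__(self, name):
        self.name = name        # Internet-unique name, e.g. "org.example.libfoo"
        self.files = {}         # path -> contents
        self.views = []         # names of other pnodes (conceptual structures)
        self.methods = {}       # user-defined operations

    def define(self, op, func):
        self.methods[op] = func

    def invoke(self, op, *args):
        return self.methods[op](self, *args)

def generate_makefile(pnode):
    """Toy method: derive build rules from the .c files the pnode contains."""
    objs = [p.replace(".c", ".o") for p in pnode.files if p.endswith(".c")]
    return pnode.name + ": " + " ".join(objs)

lib = Pnode("org.example.libfoo")
lib.files = {"foo.c": "...", "foo.h": "..."}
lib.define("makefile", generate_makefile)
print(lib.invoke("makefile"))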
Techniques for modular software design using software agents are presented. The conceptual designs are domain-independent and make use of specific domain aspects by applying multiagent AI. The stages of conceptualization, design, and implementation are defined by new techniques coordinated by objects. Software systems are designed by knowledge acquisition, specification, and multiagent implementations.
Skyrme Sphalerons of an O(3)-σ Model and the Calculation of Transition Rates at Finite Temperature
(1997)
The reduced O(3)-σ model with an O(3) → O(2) symmetry-breaking potential is considered with an additional Skyrmionic term, i.e. a totally antisymmetric quartic term in the field derivatives. This Skyrme term does not affect the classical static equations of motion which, however, allow an unstable sphaleron solution. Quantum fluctuations around the static classical solution are considered for the determination of the rate of thermally induced transitions between topologically distinct vacua mediated by the sphaleron. The main technical effect of the Skyrme term is to produce an extra measure factor in one of the fluctuation path integrals, which is therefore evaluated using a measure-modified Fourier-Matsubara decomposition (this being one of the few cases permitting this explicit calculation). The resulting transition rate is valid in a temperature region different from that of the original Skyrme-less model, and the crossover from transitions dominated by thermal fluctuations to those dominated by tunneling at the lower limit of this range depends on the strength of the Skyrme coupling.
For periodically driven systems, quantum tunneling between classical resonant stability islands in phase space separated by invariant KAM curves or chaotic regions manifests itself by oscillatory motion of wave packets centered on such an island, by multiplet splittings of the quasienergy spectrum, and by phase-space localisation of the quasienergy states on symmetry-related flux tubes. Qualitatively different types of classical resonant island formation, due to discrete symmetries of the system, and their quantum implications are analysed by a (uniform) semiclassical theory. The results are illustrated by a numerical study of a driven non-harmonic oscillator.
Here the self-organization property of the one-dimensional Kohonen algorithm in its 2k-neighbour setting with a general type of stimulus distribution and a non-increasing learning rate is considered. We prove that the probability of self-organization for all initial values of the neurons is uniformly positive. For the special case of a constant learning rate, this implies that the algorithm self-organizes with probability one.
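A minimal sketch of the algorithm in this setting (our parameter choices, uniform stimuli, k = 1) is as follows; the final check tests whether the neuron values have become monotone, i.e. self-organized.

import numpy as np

# One-dimensional Kohonen algorithm, 2k-neighbour setting: the winner and its
# k neighbours on each side move toward the stimulus. Illustrative parameters.
def kohonen_1d(n_neurons=20, k=1, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.uniform(0.0, 1.0, n_neurons)         # random initial neuron values
    for t in range(steps):
        x = rng.uniform()                        # stimulus
        win = int(np.argmin(np.abs(w - x)))      # winner: closest neuron
        eps = 0.1 * (1.0 - t / steps)            # non-increasing learning rate
        lo, hi = max(0, win - k), min(n_neurons, win + k + 1)
        w[lo:hi] += eps * (x - w[lo:hi])         # update winner and 2k neighbours
    return w

w = kohonen_1d()
print(np.all(np.diff(w) > 0) or np.all(np.diff(w) < 0))   # monotone = organized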
Metaharmonic wavelets are introduced for constructing the solution of the Helmholtz equation (reduced wave equation) corresponding to Dirichlet's or Neumann's boundary values on a closed surface. An approach leading to exact reconstruction formulas is considered in more detail. A scale-discrete version of multiresolution is described for potential functions metaharmonic outside the closed surface and satisfying the radiation condition at infinity. Moreover, we discuss fully discrete wavelet representations of band-limited metaharmonic potentials. Finally, a decomposition and reconstruction (pyramid) scheme for economical numerical implementation is presented for Runge-Walsh wavelet approximation.
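For reference (our notation), the Helmholtz or reduced wave equation for a potential U with wave number k, together with the radiation condition at infinity in \(\mathbb{R}^3\), reads

\[ \Delta U + k^2 U = 0, \qquad \lim_{r \to \infty} r \left( \frac{\partial U}{\partial r} - ikU \right) = 0. \]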
Ist "Programmieren ganz ohne Code" auch im CAx-Bereich möglich? Die Vielzahl heterogener CAx-Anwendungen und die wachsende Komplexität der Entwicklungsprozesse bedarf neuer Lösun-gen in der CAx-Technik. Ziel dieses Beitrages ist es, die richtungsweisende Rolle der Komponenten-technologie im CAx-Bereich aufzuzeigen. Es werden die Grundlagen der Komponenten sowie die wichtigen Komponentenarchitekturen (ActiveX und Java Beans) vorgestellt. Die Erwartungen der Anwender und der Systemhersteller, die Potentiale und die Auswirkungen dieser Technologie auf die neuen Systeme werden analysiert. Die zur Zeit verfügbaren ersten Ansätze werden präsentiert. Die Rolle der internationalen Standards für die technische Umsetzung und für die Akzeptanz von CAx-Komponentensystemen wird aufgezeigt.
The Filter-Diagonalization Method is used to find the broad and even overlapping resonances of a 1D Hamiltonian used before as a test model for new resonance theories and computational methods. It is found that the use of several complex-scaled cross-correlation probability amplitudes from short-time propagation enables the calculation of broad overlapping resonances which cannot be resolved from the amplitude of a single complex-scaled autocorrelation calculation.
A first explicit connection between finitely presented commutative monoids and ideals in polynomial rings was used in 1958 by Emelichev, yielding a solution to the word problem in commutative monoids by deciding the ideal membership problem. The aim of this paper is to show in a similar fashion how congruences on monoids and groups can be characterized by ideals in respective monoid and group rings. These characterizations make it possible to transfer well-known results from the theory of string rewriting systems for presenting monoids and groups to the algebraic setting of subalgebras and ideals in monoid and group rings, respectively. Moreover, natural one-sided congruences defined by subgroups of a group are connected to one-sided ideals in the respective group ring, and hence the subgroup problem and the ideal membership problem are directly related. For several classes of finitely presented groups we show explicitly how Gröbner basis methods are related to existing solutions of the subgroup problem by rewriting methods. For the case of general monoids and submonoids weaker results are presented. In fact, it becomes clear that string rewriting methods for monoids and groups can be lifted in a natural fashion to define reduction relations in monoid and group rings.
Many problems arising in (geo)physics and technology can be formulated as compact operator equations of the first kind \(A F = G\). Due to the ill-posedness of the equation, a variety of regularization methods are under discussion for an approximate solution, where particular emphasis must be put on balancing the data and the approximation error. In doing so one is interested in optimal parameter choice strategies. In this paper our interest lies in an efficient algorithmic realization of a special class of regularization methods. More precisely, we implement regularization methods based on filtered singular value decomposition as a wavelet analysis. This enables us to perform, e.g., Tikhonov-Philips regularization as multiresolution. In other words, we are able to pass over from one regularized solution to another one by adding or subtracting so-called detail information in terms of wavelets. It is shown that regularization wavelets as proposed here are efficiently applicable to a future problem in satellite geodesy, viz. satellite gravity gradiometry.
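The core of such filtered-SVD regularization fits in a few lines. The sketch below shows only the Tikhonov-Philips filter \(\sigma/(\sigma^2+\alpha)\) on a generic ill-conditioned matrix; the wavelet multiresolution of the paper corresponds, roughly, to differences of such solutions across scales. All parameters are illustrative.

import numpy as np

# Regularization by filtered singular value decomposition:
# the Tikhonov-Philips filter sigma/(sigma^2 + alpha) replaces 1/sigma.
def tikhonov_solve(A, g, alpha):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s / (s**2 + alpha)                 # filtered inverse singular values
    return Vt.T @ (filt * (U.T @ g))

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 50)) @ np.diag(0.9 ** np.arange(50))  # ill-conditioned
f_true = rng.normal(size=50)
g = A @ f_true + 1e-3 * rng.normal(size=50)                    # noisy data
for alpha in (1e-2, 1e-4, 1e-6):
    # balance between data error and approximation error
    print(alpha, np.linalg.norm(tikhonov_solve(A, g, alpha) - f_true))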
Like other industries, the aircraft industry is under high pressure to meet drastically increased customer goals for market price and flexibility, while at the same time shareholders demand short-term profit guarantees. Daimler-Benz Aerospace Airbus has met this challenge using business process reengineering methods, which led to a total company restructuring from functional orientation to customer and product orientation. This paper will show how business process modelling techniques have been applied. In particular, concurrent engineering methods are used to integrate the various disciplines involved, from market analysts through design and commercial staff to industrialization staff.
This paper discusses the benefits and drawbacks of caching and replication strategies in the WWW with respect to the Internet infrastructure. Bandwidth consumption, latency, and overall error rates are considered to be most important from a network point of view. The dependencies of these values on input parameters like degree of replication, document popularity, actual cache hit rates, and error rates are highlighted. In order to determine the influence of different caching and replication strategies on the behavior of a single proxy server with respect to these values, trace-based simulations are used. Since the overall effects of such strategies can hardly be decided with this approach alone, a mathematical model has been developed to deal with their influence on the network as a whole. Together, this two-tiered approach permits us to propose quantitative assessments of the influence different caching and replication proposals (are going to) have on the Internet infrastructure.
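A trace-based simulation of the kind used here can be reduced to a toy core: replaying a request trace against an LRU cache and recording the hit rate. The sketch below, with a synthetic Zipf-like trace, is only meant to convey the mechanism; real traces carry document sizes, timestamps, and expiry information.

import random
from collections import OrderedDict

# Minimal trace-based proxy-cache simulation with LRU eviction.
def simulate_lru(trace, capacity):
    cache, hits = OrderedDict(), 0
    for url in trace:
        if url in cache:
            hits += 1
            cache.move_to_end(url)          # refresh recency
        else:
            cache[url] = True
            if len(cache) > capacity:
                cache.popitem(last=False)   # evict least recently used
    return hits / len(trace)

random.seed(0)
# Zipf-like synthetic trace: popular documents dominate the requests.
trace = [f"doc{int(random.paretovariate(1.1))}" for _ in range(100000)]
print(simulate_lru(trace, capacity=500))    # overall hit rate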
Process Chain in Automotive Industry - Present Day Demands versus Long Term Open CAD/CAM Strategies
(1997)
The automotive industry was a pioneer in using CAD/CAM technology. Today the car manufacturers' development process is almost completely carried out with this technology. Substantial initiative for the standardisation of CAD/CAM techniques comes from the automotive industry, e.g. for neutral CAD data interfaces. The R&D departments of German car manufacturers have founded a working group with the aim of developing a common long-term CAD/CAM strategy. One important result is the concept of a future CAx architecture based on the standard data structure STEP. The commitment of the car manufacturers to STEP and open system architectures is in contradiction to their attitude towards suppliers and subcontractors: recently, more and more contractors are contractually bound to use exactly the same CAD system as the ordering company. The German car industry tries to find a way out of this contradiction and to improve the co-operation between the companies in the short term. Therefore they proposed a "Dual CAD Strategy", i.e. to put improvements in CAD communication into practice which are possible today - even proprietary solutions - and in parallel to invest in strategic concepts to prepare tomorrow's open system landscape.
The problem of constructing a geometric model of an existing object from a set of boundary points arises in many areas of industry. In this paper we present a new solution to this problem which is an extension of Boissonnat's method [2]. Our approach uses the well-known Delaunay triangulation of the data points as an intermediate step. Starting with this structure, we eliminate tetrahedra until we get an appropriate approximation of the desired shape. The method proposed in this paper is capable of reconstructing objects with arbitrary genus and can cope with different point densities in different regions of the object. The problems which arise during the elimination process, i.e. which tetrahedra can be eliminated, which order has to be used to control the process and, finally, how to stop the elimination procedure at the right time, are discussed in detail. Several examples are given to show the validity of the method.
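For a rough impression of the pipeline (Delaunay triangulation, then elimination of tetrahedra), here is a toy sketch using an alpha-shape-like circumradius criterion; the paper's elimination order and stopping rule are more refined than this simple threshold, and all parameters are ours.

import numpy as np
from scipy.spatial import Delaunay

# Toy "sculpting": discard Delaunay tetrahedra with large circumradius.
def circumradius(p):
    a, b, c, d = p
    A = np.vstack([b - a, c - a, d - a])                  # (v - a) . x = rhs
    rhs = 0.5 * np.array([b @ b - a @ a, c @ c - a @ a, d @ d - a @ a])
    return np.linalg.norm(np.linalg.solve(A, rhs) - a)    # |center - a|

rng = np.random.default_rng(0)
points = rng.normal(size=(200, 3))
tri = Delaunay(points)
kept = [s for s in tri.simplices if circumradius(points[s]) < 1.0]
print(len(tri.simplices), "tetrahedra,", len(kept), "kept")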
This paper provides a description of PLATIN. With PLATIN we present an implemented system for planning inductive theorem proofs in equational theories that are based on rewrite methods. We provide a survey of the underlying architecture of PLATIN and then concentrate on details and experiences of the current implementation.
This paper presents the different possibilities for parallel processing in robot control architectures. We begin with a short review of the historic development of control architectures. Then, a list of requirements for control architectures is set up from a parallel processing point of view. As our main topic, we identify the levels of parallel processing in robot control architectures. For each level of parallelism, examples of typical robot control architectures are presented. Finally, a list of keywords is provided for each previous work we refer to.
The observation of an ergodic Markov chain asymptotically allows perfect identification of the transition matrix. In this paper we determine the rate of the information contained in the first n observations, provided the unknown transition matrix belongs to a known finite set. As an essential tool we prove new refinements of the large deviation theory of the empirical pair measure of finite Markov chains. Keywords: Markov Chain, Entropy, Bayes risk, Large Deviations.
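The first sentence can be made concrete in a few lines: the row-normalized empirical pair counts (the empirical pair measure) of an ergodic chain converge to the true transition matrix. A quick illustrative sketch:

import numpy as np

# Row-normalized transition counts converge to the true matrix (illustrative).
def estimate_transitions(chain, n_states):
    counts = np.zeros((n_states, n_states))
    for s, t in zip(chain[:-1], chain[1:]):
        counts[s, t] += 1
    return counts / counts.sum(axis=1, keepdims=True)

P = np.array([[0.9, 0.1], [0.2, 0.8]])   # illustrative true matrix
rng = np.random.default_rng(0)
x, chain = 0, []
for _ in range(100000):
    chain.append(x)
    x = rng.choice(2, p=P[x])
print(estimate_transitions(chain, 2))    # close to P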
In the modeling of biological phenomena in living organisms, whether the measurements are of blood pressure, enzyme levels, biomechanical movements or heartbeats, etc., one of the important aspects is time variation in the data. Thus, the recovery of a "smooth" regression or trend function from noisy time-varying sampled data becomes a problem of particular interest. Here we use non-linear wavelet thresholding to estimate a regression or trend function in the presence of additive noise which, in contrast to most existing models, does not need to be stationary. (Here, nonstationarity means that the spectral behaviour of the noise is allowed to change slowly over time.) We develop a procedure to adapt existing threshold rules to such situations, e.g., that of a time-varying variance in the errors. Moreover, in the model of curve estimation for functions belonging to a Besov class with locally stationary errors, we derive a near-optimal rate for the L2-risk between the unknown function and our soft or hard threshold estimator, which holds in the general case of an error distribution with bounded cumulants. In the case of Gaussian errors, a lower bound on the asymptotic minimax rate in the wavelet coefficient domain is also obtained. It is also argued that a stronger adaptivity result is possible by the use of a particular location- and level-dependent threshold obtained by minimizing Stein's unbiased estimate of the risk. In this respect, our work generalizes previous results, which cover the situation of correlated but stationary errors. A natural application of our approach is the estimation of the trend function of nonstationary time series under the model of local stationarity. The method is illustrated on both an interesting simulated example and a biostatistical data set, measurements of sheep luteinizing hormone, which exhibits a clear nonstationarity in its variance.
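To make the thresholding step concrete, here is a minimal sketch with a level-dependent threshold on a signal with time-varying noise variance. It uses the standard universal threshold with a per-level robust scale estimate, which is only in the spirit of, not identical to, the procedure developed in the paper; PyWavelets is assumed to be available, and all parameters are illustrative.

import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
signal = np.sin(8 * np.pi * t**2)
noise = (0.1 + 0.4 * t) * rng.normal(size=t.size)      # nonstationary variance
coeffs = pywt.wavedec(signal + noise, 'db4', level=6)
out = [coeffs[0]]                                      # keep the approximation
for c in coeffs[1:]:
    sigma = np.median(np.abs(c)) / 0.6745              # robust scale, per level
    thr = sigma * np.sqrt(2 * np.log(c.size))          # universal threshold
    out.append(pywt.threshold(c, thr, mode='soft'))
estimate = pywt.waverec(out, 'db4')
print(np.mean((estimate[:t.size] - signal) ** 2))      # empirical L2 risk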
MP Prototype Specification
(1997)
We study the problem of global solution of Fredholm integral equations. This means that we seek to approximate the full solution function (as opposed to the local problem, where only the value of the solution at a single point or a functional of the solution is sought). We analyze the Monte Carlo complexity, i.e. the complexity of stochastic solution of this problem. The framework for this analysis is provided by information-based complexity theory. Our investigations complement previous ones on the stochastic complexity of local solution and on the deterministic complexity of both local and global solution. The results show that even in the global case Monte Carlo algorithms can perform better than deterministic ones, although the difference is not as large as in the local case.
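A classical way to see how Monte Carlo attacks such problems is the random-walk estimator for the Neumann series of a Fredholm equation of the second kind. The sketch below (all choices illustrative, not from the paper) estimates the solution at one point; the global problem corresponds to estimating it at many points or in terms of basis coefficients.

import random

# Random-walk Monte Carlo for f(x) = g(x) + lam * integral_0^1 k(x,y) f(y) dy,
# via its Neumann series. Kernel, data, and parameters are illustrative.
lam = 0.5
def g(x): return 1.0
def k(x, y): return x * y

def mc_estimate(x, n_walks=100000, p_stop=0.5):
    total = 0.0
    for _ in range(n_walks):
        val, weight, cur = g(x), 1.0, x
        while random.random() > p_stop:               # geometric walk length
            y = random.random()                       # uniform transition density
            weight *= lam * k(cur, y) / (1.0 - p_stop)
            val += weight * g(y)                      # next Neumann-series term
            cur = y
        total += val
    return total / n_walks

random.seed(0)
print(mc_estimate(0.5))   # exact solution is f(x) = 1 + 0.3 x, so f(0.5) = 1.15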
In this paper we provide a semantical meta-theory that will support the development of higher-order calculi for automated theorem proving, in the way the corresponding methodology has in first-order logic. To reach this goal, we establish classes of models that adequately characterize the existing theorem-proving calculi, that is, such that the calculi are sound and complete with respect to these model classes, and a standard methodology of abstract consistency methods (by providing the necessary model existence theorems) needed to analyze the completeness of machine-oriented calculi.