As the previous chapters of this book have shown, case-based reasoning is a technology that has been successfully applied to a large range of different tasks. Across the many CBR projects, both basic research and industrial development projects, a great deal of knowledge and experience about how to build CBR applications has been collected. Today there is an increasing number of successful companies developing industrial CBR applications. In the early days, these companies could develop their pioneering CBR applications in an ad hoc manner: the company's highly skilled CBR expert was able to manage these projects and to provide the developers with the required expertise.
This paper presents a brief overview of the INRECA-II methodology for building and maintaining CBR applications. It is based on the experience factory and the software process modeling approach from software engineering. CBR development and maintenance experience is documented using software process models and stored in a three-layered experience packet.
Although several systematic analyses of existing approaches to adaptation have been published recently, a general formal adaptation framework is still missing. This paper presents a step toward developing such a formal model of transformational adaptation. The model is based on the notion of the quality of a solution to a problem, where quality is understood in a general sense and can also denote some kind of appropriateness, utility, or degree of correctness. Adaptation knowledge is then defined in terms of functions transforming one case into a successor case. The notion of quality provides a semantics for adaptation knowledge and allows us to define notions like soundness, correctness, and completeness. In this view, adaptation (and even the whole CBR process) appears to be a special instance of an optimization problem.
Taxonomies occur quite often when defining attribute types to be used in the case representation. The symbolic values at the nodes of the taxonomy tree are used as attribute values in a case or a query. A taxonomy type represents a relationship between the symbols through their position within the taxonomy tree, which expresses knowledge about the similarity between the symbols. This paper analyzes several situations in which taxonomies are used in different ways and proposes a systematic way of specifying local similarity measures for taxonomy types. The proposed similarity measures have a clear semantics and are easy to compute at run time.
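One common way to realize such a local measure, and a way to read the idea sketched here, is to score two taxonomy values by the depth of their deepest common ancestor. The following is only a minimal illustration of that idea, not the paper's concrete measures; all names in it are hypothetical.

```python
# Hypothetical sketch: similarity of two symbols in a taxonomy tree,
# scored by the depth of their deepest common ancestor (root depth = 0).
# Illustrates the general idea only, not the measures defined in the paper.

class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent

def path_to_root(node):
    path = []
    while node is not None:
        path.append(node)
        node = node.parent
    return path  # the node itself first, the root last

def taxonomy_similarity(a, b):
    """Similarity in [0, 1]: deeper common ancestors mean higher similarity."""
    ancestors_b = set(path_to_root(b))
    lca = next(n for n in path_to_root(a) if n in ancestors_b)
    def depth(n):
        return len(path_to_root(n)) - 1
    return depth(lca) / max(depth(a), depth(b), 1)

# Tiny example taxonomy: vehicle -> (car -> sedan, truck)
vehicle = Node("vehicle")
car = Node("car", vehicle); truck = Node("truck", vehicle)
sedan = Node("sedan", car)
print(taxonomy_similarity(sedan, truck))  # 0.0 (only the root in common)
print(taxonomy_similarity(sedan, car))    # 0.5 (common ancestor at depth 1)
```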
This paper motivates the need to support negotiation during sales support on the Internet within case-based reasoning solutions. Different negotiation approaches are discussed and a general model of the sales process is presented. Further, the traditional CBR cycle is modified in such a way that iterative retrieval during a CBR consulting session is covered by the new model. Several general characteristics of negotiation are described, and a case study is shown in which preliminary approaches are used to negotiate with a customer about their demands and the available products in a CBR-based electronic commerce solution.
Object-oriented case representations require approaches for similarity assessment that allow comparing two differently structured objects, in particular objects belonging to different object classes. Currently, such similarity measures are developed more or less ad hoc. It is largely unclear how the structure of an object-oriented case model, e.g., the class hierarchy, influences similarity assessment. Intuitively, it is obvious that the class hierarchy contains knowledge about the similarity of the objects. However, how this knowledge relates to the knowledge that could be represented in similarity measures is not obvious at all. This paper analyzes several situations in which class hierarchies are used in different ways for case modeling and proposes a systematic way of specifying similarity measures for comparing arbitrary objects from the hierarchy. The proposed similarity measures have a clear semantics and are inexpensive to compute at run time.
Contrary to symbolic learning approaches, which represent a learned concept explicitly, case-based approaches describe concepts implicitly by a pair (CB, sim), i.e., by a measure of similarity sim and a set CB of cases. This poses the question of whether there are any differences in the learning power of the two approaches. In this article we study the relationship between the case base, the measure of similarity, and the target concept of the learning process. To do so, we transform a simple symbolic learning algorithm (the version space algorithm) into an equivalent case-based variant. The results strengthen the hypothesis that symbolic and case-based methods have equivalent learning power and show the interdependency between the measure used by a case-based algorithm and the target concept.
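To make the pair (CB, sim) concrete: the simplest case-based classifier labels a query by its most similar stored case, so the concept is carried implicitly by the case base together with the measure. The sketch below is plain nearest-neighbour classification for illustration only, not the paper's case-based variant of the version space algorithm; all names are hypothetical.

```python
# Minimal illustration of a concept represented implicitly by (CB, sim):
# classify a query with the label of its most similar case.
# This is generic nearest-neighbour, not the paper's algorithm.

def sim(x, y):
    """Hypothetical similarity on binary feature vectors: fraction of matching features."""
    return sum(a == b for a, b in zip(x, y)) / len(x)

def classify(case_base, query):
    best_case, best_label = max(case_base, key=lambda c: sim(c[0], query))
    return best_label

CB = [((1, 1, 0), "positive"), ((0, 0, 1), "negative")]
print(classify(CB, (1, 0, 0)))  # "positive": 2/3 of features match
```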
We present a system based on a 3D scanner operating on the light-sectioning principle, with which a person can be captured three-dimensionally within 1.5 seconds. Evolutionary algorithms are used for a model-based interpretation of the measurement data, so that arbitrary body measurements can be determined. The result is an individualized CAD model of the person in the computer. Such a model can serve as a virtual tailor's dummy for the production of made-to-measure clothing.
A behavior-based approach to area-covering navigation in an a priori unknown environment
(1998)
This paper describes a method for area-covering navigation in an initially unknown environment, as required, for example, for cleaning applications in the home. While the cleaning task is carried out, the environment is explored and mapped in parallel. The behavior-based approach enables a robust, goal-directed, yet resource-efficient implementation and makes it easy to replace individual behaviors with improved or specifically learned versions. The method presented has been tested in simulation and will shortly be evaluated on a real robot.
In this thesis, compounds from different substance classes were examined for antineoplastic activity against human tumor cell lines. Several of the tested substances showed promise for potential later use in tumor therapy. The alkaloids lycorine and lycobetaine showed very good growth inhibition in the sulforhodamine B assay; the IC50 of both substances was below 3 µM for all human tumor cell lines tested. Despite the excellent growth inhibition, no mechanism of action could be identified for lycorine, although an arrest of the cells in the G2/M phase of the cell cycle was demonstrated. For lycobetaine, dual inhibition of topoisomerase I and II emerged as a possible mechanism of action. Topoisomerase II activity was completely inhibited by 100 µM lycobetaine, and already at 10 µM no topoisomerase I activity was detectable, which was shown to be due to stabilization of the DNA-topoisomerase complex. This resulted in the induction of DNA strand breaks, arrest of the cells in the G2/M phase of the cell cycle, and finally the induction of apoptosis. All of these properties pointed to topoisomerase inhibition as the mechanism of action of lycobetaine. Studies of five flavonoids from the Chinese medicinal plant Scutellaria baicalensis showed that baicalin, baicalein, skullcapflavone II, and wogonin inhibited the growth of various human tumor cell lines, whereas wogonoside showed no effect up to concentrations of 100 µM. Despite their close structural similarity, baicalein and baicalin did not share a common mechanism of action. Like lycobetaine, baicalein inhibits the activity of topoisomerases I and II; for topoisomerase I, stabilization of the binary intermediate of topoisomerase and DNA was demonstrated. In contrast to lycobetaine, baicalein does not intercalate into double-stranded DNA but competes with the Hoechst dye H33258 for binding to the minor groove of the DNA. The induction of DNA strand breaks, the arrest of the cells in the G2/M phase of the cell cycle, and the induction of apoptosis were also shown for baicalein. Measurements with baicalin gave no indication of inhibition of human topoisomerases. Cell-cycle analyses showed an arrest in the G0/G1 phase, which, in contrast to baicalein, points to a completely different and as yet unexplained mechanism. Skullcapflavone II inhibited topoisomerase I and II at concentrations of 100 µM and above; however, stabilization of the DNA-topoisomerase complex could not be demonstrated for this substance. Its structure suggests a different mechanism of topoisomerase inhibition, possibly binding to free topoisomerase I and the associated inhibition of complex formation. No mechanism of action could be found for wogonin in this work. Various indigoid bisindoles synthesized in our group [Hössel, 1996; Hössel, unpublished] were also examined for antineoplastic activity; the poor solubility of this substance class posed a major problem. A comparison of the substances in the sulforhodamine B assay on the LXFL529L cell line gave very heterogeneous results. Indirubin and 5-iodo-indirubin were the best growth inhibitors, with IC50 values below 10 µM.
The indigoid bisindoles were able to intercalate into double-stranded DNA and to bind to the minor groove of the DNA, whereas this could not be demonstrated for deoxytopsentin. In the tubulin polymerization assay, indirubin, 5-iodo-indirubin, and bisindolylindole inhibited the polymerization of tubulin monomers; compared with the reference substance colchicine, however, this inhibition was not relevant for the activity of the substances. Measurements of the inhibition of cyclin-dependent kinase 1 (cdk1) indicate that the indigoid substances have inhibitory potential for cell-cycle kinases. Besides cdk1, indirubin and 5-iodo-indirubin also inhibit cdk5, among others, a kinase that can occur associated with microtubule proteins. Western blotting showed that the tubulin preparation used contained cdk5; the slight inhibition of tubulin polymerization may therefore be explained by inhibition of tubulin-associated cdk5. Our group received 13 compounds for investigation from the laboratory of Stalina Melnik, Moscow, Russia. These were indolocarbazoles and bisindolylmaleimides differing in the sugar substitutions at R1 and in the residues at X and R2. In the sulforhodamine B assay, an IC50 below 10 µM was determined for all substances except substances 1 and 11, indicating a high growth-inhibitory potency. Because of the structural similarity to staurosporine, inhibition of protein kinase C was examined first. In isolated cytosolic extract, inhibition of PKC was demonstrated, again with the exception of substances 1 and 11, with IC50 values between 0.4 and 34 µM. In contrast, only two substances (4 and 10) were able to inhibit PKC activity in cell culture at low concentrations. For substances 2 and 5 an IC50 could be determined, but it was at least ten times higher than in the measurement on the isolated enzyme. A possible explanation for this phenomenon is that the substance cannot reach its target protein in the cell. The structural relationship to the topoisomerase inhibitors NB 506 and rebeccamycin pointed to human topoisomerases as a potential target of the compounds. Indeed, several substances proved to be possible inhibitors of topoisomerase I and/or II (see Table 13). A further phenomenon was the ability of all substances to bind to the minor groove of the DNA, while no intercalation was detectable. Induction of strand breaks was demonstrated only for substances 4 and 8 in the low micromolar range, whereas most substances induced DNA damage only from 50 µM upward. These substances, however, showed their strongest effects in studies on the inhibition of cyclin-dependent kinases; it must be taken into account that cdk1 was used as an isolated enzyme and the inhibition may not be measurable in the cellular system. Based on the data obtained in this work, a potential mechanism of action was found for three indolocarbazoles: for substance 8, inhibition of topoisomerases is a possible mechanism, substance 10 proved to be a possible protein kinase C inhibitor, and substance 4 appears to be a potential cdk1 inhibitor.
In this work, co-photolyses and co-thermolyses of a series of alkylated and silylated cyclopentadienylcobalt carbonyls with white phosphorus were investigated. For this purpose, several new complexes of the type [CpRCo(CO)2] with CpR = (C5(Me2-1,3)iPr3), (C5H3(Me3Si)2-1,3), (C5H2(Me3Si)3-1,2,4), as well as the dinuclear complex [{CpRCo(µ-CO)}2] with CpR = (C5(Me2-1,3)iPr3), were prepared. In addition to these complexes, the literature-known compounds [CpRCo(CO)2] with CpR = (C5Me5), (C5H4(Me3Si)) were used as starting materials. The preparative accessibility of tri- and tetranuclear cobalt complexes with unsubstituted P8, P10, and P12 ligands was considerably extended. The thermal reaction of dicarbonyl(trimethylsilylcyclopentadienyl)cobalt with white phosphorus leads in very good yields to the cluster [{CpRCo}4P4] (CpR = C5H4(Me3Si)). X-ray structure analysis of a threefold-twinned crystal allows only an approximate determination of the heavy-atom framework as a square antiprism. The reaction of dicarbonyl(1,3-bis(trimethylsilyl)cyclopentadienyl)cobalt with white phosphorus under suitable conditions (140 °C, 3 d) gives selectively and in very high yield the tetranuclear complex [{CpRCo}4P10] (I) (CpR = C5H3(Me3Si)2-1,3). With I, a cobalt complex with a P10 ligand was characterized by X-ray crystallography for the first time. The heavy-atom cage in I can be derived from the nortricyclane structure. An interesting feature is the coordination of a cobalt fragment to a P-P edge in a manner intermediate between side-on coordination to this sigma bond and insertion into this edge (d(P-P) = 2.47 Å). The photochemical reaction of dicarbonyl(1,3-bis(trimethylsilyl)cyclopentadienyl)cobalt with white phosphorus yields, depending on the stoichiometry, the complexes [{CpRCo}3P4(CO)] (II) and [{CpRCo}2P4] (III) (CpR = C5H3(Me3Si)2-1,3), which were characterized by X-ray crystallography. Complex II is an arachno cluster that can formally be derived from a doubly capped trigonal prism. The longest P-P distances in II, d(P-P, mean) = 2.51 Å, lie at the upper limit of known bonding P-P interactions. Compound III is a member of a series of [{CpRCo}2(µ,η2:2-P2)2] complexes, which exhibit a rectangularly distorted Co2P4 octahedron as the heavy-atom framework. X-ray structures of the complexes with CpR = (C5((CH3)2-1,3)iPr3), (C5H3(Me3Si)2-1,3), (C5H2(Me3Si)3-1,2,4) were determined. These compounds exhibit short P-P distances of d(P-P) = 2.054 to 2.064 Å as well as P...P contacts of d(P...P) = 2.679 to 2.713 Å.
In this work, the chiral induction of cholesteric phases by unbridged 1,1'-binaphthyls and by 1,1'-binaphthyls bridged via the 2,2'-position was investigated in order to develop a structure/effect relationship. To this end, enantiomerically pure 1,1'-binaphthyls (2 - 7) and their deuterated analogues (1, 3 - 7) were synthesized for 2H NMR spectroscopy, and the racemates (3 - 7) for UV spectroscopy. Compounds 6 and 7 have not previously been described in the literature. All compounds investigated consist of molecules with symmetry group C2, are inherently dissymmetric (class C), and can be regarded as sufficiently rigid that only the intermolecular chirality transfer needs to be discussed. It turns out that the chiral induction, expressed by the helical twisting power (HTP), is determined on the one hand by the chiral structural elements of the compounds and on the other hand by the orientation of the compounds in the phase. Bridged and unbridged 1,1'-binaphthyls behave so differently that their mechanisms must be discussed separately. 2H NMR spectroscopy and, in part, polarized UV spectroscopy showed for the bridged 1,1'-binaphthyls (4, 5, and 6) that the orientation axis x3* lies approximately along the naphthyl-naphthyl bond direction. Taking compound 4 as the basis and introducing a spiro-linked cyclohexyl ring at the bridging atom (compound 5) or a di-tert-butylsilicon group as the bridge (compound 6) lowers the order with respect to S* and increases the molecular biaxiality D*. A spiro-linked five-membered acetal ring in addition to the cyclohexyl ring of 5 (compound 7) leads, for 7, to a tilting of the orientation axis toward the C2 symmetry axis. According to I. Kiesewalter [dissertation, Universität Kaiserslautern, 1999] and E. Dorr [dissertation, Universität Kaiserslautern, 1999], the ordering behavior of 3 could not be determined unambiguously. The 2H NMR results for 3 indicate a rotation of the orientation axis, which then stands perpendicular to the C2 symmetry axis and to the naphthyl-naphthyl bond direction. With the order parameters obtained in this way, the attempt to verify for 3 the position of the principal axis system from 2H NMR against the results of anisotropic UV spectroscopy, which succeeds for all other compounds investigated, leads to contradictions that could not yet be resolved. The chirality element that determines the HTP of the bridged 1,1'-binaphthyls is formed by the twisting of the two naphthyl planes against each other by an angle theta; contrary to the description in the literature, no achiral zero point exists between theta = 0° and theta = 180°. Substitution at the bridging atom increases the HTP, from which it can be concluded that the cyclohexyl substituent introduces a third plane which, together with the planes of the naphthyls, forms one or two new chirality elements that contribute to the HTP. For compound 7, the changed order leads to a stronger induction effect, i.e., an enhanced intermolecular chirality transfer. For unbridged 1,1'-binaphthyls (1 to 3), which have far lower HTP values than 4 to 7, a sign reversal of the HTP is found for one compound (3) with sterically demanding substituents in the 2,2'-position. Gottarelli et al. interpreted this effect as a reversal of the helicity of the 1,1'-binaphthyls.
This interpretation proves to be inadmissible because it has no real physical basis. For unbridged 1,1'-binaphthyls, opening the dihedral angle theta can change the orientational behavior and rotate the orientation axis, which is confirmed by the model of Nordio. Since the Nordio model is based on a traceless helicity tensor, a change of the tensor coordinates of the order tensor can, under certain conditions, lead to a sign reversal of the HTP. Because the CD spectroscopy results exclude an on-average transoid conformation for the unbridged 1,1'-binaphthyls, Nordio's interpretation comes into question for the unbridged 1,1'-binaphthyls investigated here only under the assumption that a broad distribution over the dihedral angle theta exists (LAM behavior). The theoretical description of the HTP introduced by Nordio does not allow the HTP of a dopant to be described in its temperature dependence. If the tensor coordinates of the helicity tensor are calculated from the temperature-dependent HTP curves using experimentally determined order parameters on the basis of the equation of Nordio and Ferrarini, systematic deviations appear in the back-calculated HTP curves. In this work, a new approach to the quantitative description of the HTP was therefore introduced, based on the assumptions of a theory describing ACD spectroscopy (CD of anisotropic samples). A chirality interaction tensor was introduced whose coordinates were obtained by multiple regression from the temperature-dependent HTP curves together with the order parameters from 2H NMR. The new approach gives a very good description of the magnitude and temperature dependence of the experimental HTP values. The analysis of these data shows that the mean position of the HTP curves is determined by the trace of the chirality interaction tensor (i.e., by the term W/3 with W = sum(Wii*)), that is, by the contribution to the HTP that would be induced by a dopant molecule distributed isotropically in the anisotropic phase. The curvature of the HTP curves is caused by the D* contribution of the effect. At low order, the S* contribution leads only to a small shift of the HTP curve; at high order, the S* contribution predicts a helix inversion in the (theoretical) HTP curve for compounds 1, 3, 4, and 5. If the total effect is analyzed in terms of its contributions from the directions of the principal axes of the order tensor, i.e., the products gii33*Wii*, one finds for the bridged 1,1'-binaphthyls 4 to 7 that by far the largest contribution to the total effect lies along the C2 symmetry axis of the 1,1'-binaphthyls. This finding also explains why Nordio's description agrees well with the experimental HTP values, although according to Nordio an isotropically distributed dopant has no HTP. In Nordio's model, too, the effect is determined by the product of a tensor coordinate of the helicity tensor and a tensor coordinate of the order tensor that is assigned to the direction of the C2 symmetry axis. However, Nordio's model is applicable only for a "medium order" of the dopant, since for very low dopant order the HTP tends to zero.
The interpretation of the tensor coordinates of the chirality interaction tensor of the unbridged 1,1'-binaphthyls is more problematic than for the bridged 1,1'-binaphthyls because, for the unbridged compound 3, the contradiction between the results from anisotropic UV spectroscopy and from 2H NMR persists. According to the available data, as with 4 to 6, the term g2233*W22* dominates the effect, but here it is not assigned to the direction of the C2 symmetry axis. The tensor coordinates Wii* are significantly smaller than for the bridged 1,1'-binaphthyls. Possibly the LAM behavior leads to a distribution over the angle theta and thus to an orientational distribution that results in small values of the chirality interaction tensor and of the HTP.
Programs are linguistic structures which contain identifications of individuals: memory locations, data types, classes, objects, relations, functions, etc. must be identified either selectively or definingly. The first part of the essay, which deals with identification by showing and by designating, is rather short, whereas the remaining part, dealing with paraphrasing, is rather long. The reason is that identification by showing or designating requires no linguistic composition, in contrast to identification by paraphrasing. The different types of functional paraphrasing are covered here in great detail because the concept of functional paraphrasing is the foundation of functional programming. The author had to decide whether to cover this subject here or in his essay "Purpose versus Form of Programs", where the concept of functional programming is presented; he came to the conclusion that this essay on identification is the more appropriate place.
In system theory, state is a key concept. Here the word state refers to a condition, as in the sentence "Since he went into the hospital, his state of health worsened daily." This colloquial meaning was the starting point for defining the concept of state in system theory. System theory describes the relationship between input X and output Y, that is, between influence and reaction. In system theory, a system is something that shows an observable behavior that may be influenced. Therefore, apart from the system, there must be something else that influences the system and observes its reaction. This is called the environment of the system.
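Read operationally, state is whatever the system must remember so that its reaction to an input is determined. A minimal sketch of this reading (a hypothetical example, not taken from the text):

```python
# Hypothetical sketch: a system whose output depends on input AND state.
# The state summarizes the relevant input history; identical inputs can
# therefore produce different reactions.

class Accumulator:
    def __init__(self):
        self.state = 0          # the condition of the system

    def step(self, x):          # influence from the environment (input X)
        self.state += x
        return self.state       # observable reaction (output Y)

s = Accumulator()
print([s.step(2) for _ in range(3)])  # [2, 4, 6]: same input, state-dependent outputs
```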
This work deals with one way of improving efficiency, using the SNLP-based planning system CAPlan. New problems to be solved are first subjected to a preprocessing step in which certain properties are determined without actually solving the problem. The planning system is then given the new problem together with the additional knowledge in the form of the analyzed properties, and it uses this knowledge to find a solution more efficiently.
The paper presents a process-oriented view of knowledge management in software development. We describe requirements on knowledge management systems from a process-oriented perspective, introduce the process modeling language MILOS and its use for knowledge management, and then explain how a process-oriented knowledge management system can be implemented using advanced but readily available information technologies.
The term enterprise modelling, synonymous with enterprise engineering, refers to methodologies developed for modelling activities, states, time, and cost within an enterprise architecture. They serve as a vehicle for evaluating and modelling activities, resources, and so on. CIM-OSA (Computer Integrated Manufacturing Open Systems Architecture) is a methodology for modelling computer-integrated environments; its major objective is the appropriate integration of enterprise operations by means of efficient information exchange within the enterprise. PERA is another methodology for developing models of computer-integrated manufacturing environments. The Department of Industrial Engineering in Toronto proposed the development of ontologies as a vehicle for enterprise integration. The paper reviews the work carried out by various researchers and computing departments in the area of enterprise modelling and points out further modelling problems related to enterprise integration.
The term enterprise modeling, synonymous with enterprise engineering, often refers to methodologies developed for modeling activities, states, time, and cost within an enterprise architecture. They serve as a vehicle for evaluating and modeling activities, resources, and so on. CIM-OSA (Computer Integrated Manufacturing Open Systems Architecture) is a methodology for modeling computer-integrated environments; its major objective is the appropriate integration of enterprise operations by means of efficient information exchange within the enterprise. Although there are other methodologies in industry that serve the same purpose, most of them concentrate on the internal aspects of an enterprise. This paper is concerned with modeling the links between enterprises. The aim is to examine these relationships or links in detail and to suggest a method for modeling enterprise networks, drawing on the methodologies currently used in industry and extending them with the method proposed here.
The paper addresses two problems of comprehensible proof presentation, the hierarchically structured presentation at the level of proof methods and different presentation styles of construction proofs. It provides solutions for these problems that can make use of proof plans generated by an automated proof planner.
This article continues the series of contributions presenting the results of FEMEX, which began with the presentation of a general feature definition in [BWE-96]. FEMEX (Feature Modelling Experts) is an international and interdisciplinary group of researchers, developers, and users from universities, research institutes, and industry whose goal is to work out the foundations of feature-based product development. The user is at the center of these efforts: feature technology is meant to provide methods and tools with which the user can work efficiently in the different phases of a complex process chain. Four working groups were formed, each dealing with different aspects of feature technology. This contribution presents the results of working group II, "Feature Modelling Methods and Application Areas". Its task is to examine the modelling methods and application areas of feature technology in the context of the product development process. The starting point for this work, besides the user-specific requirements, is the feature definition of working group I [BWE-96]. It is worth emphasizing that, according to this definition, features are not physical elements and need not have physical counterparts; they exist only in the world of information-technology models. Furthermore, the properties of the processed objects that are relevant to the user, of whatever kind they may be (for example the function of the part), form the actual basis of the definition. No property is given a higher priority from the outset, whereby the part geometry loses its dominant role in modelling (in contrast, most CAD/CAM systems offered today assume that the product geometry processed in the system is the basis of the entire product model).
Aesthetic design, or styling, is increasingly a central factor in the success of automobiles on the world market. These properties of car bodies are worked out in complex processes according to company-specific conceptions. Computer Aided Styling (CAS) and Computer Aided Aesthetic Design (CAAD) are the tools for creating optimal body shapes. The workflows differ from company to company but have similar structures: the shape of the body is created, then the quality of the surfaces is assessed with suitable tools, and in a next step the surfaces are modified again according to this assessment. These loops are repeated until the result satisfies those responsible. The Brite-EuRam project FIORES, with 12 partners from 6 countries, including automobile manufacturers (BMW, Saab), design companies (Eiger, Formtech, Pininfarina, Taurus), system vendors, and research institutes, now attempts to develop methods that could improve the design workflow: the evaluation criteria for aesthetic surfaces are to be formalized and then used directly for the modification of the free-form surfaces, in the sense of goal-driven modelling (Engineering in Reverse, EiR). This article presents the results of the project's first year: the design process in various companies is analyzed, and the resulting evaluation criteria for aesthetic shapes are formalized and fed into goal-driven modelling. An outlook on further goals of the project is given. The work presented is the joint result of the project consortium.
On the one hand, in the world of Product Data Technology (PDT), the ISO standard STEP (STandard for the Exchange of Product model data) is gaining more and more importance. STEP includes the information model specification language EXPRESS and its graphical notation EXPRESS-G. On the other hand, in the Software Engineering world in general, mainly other modelling languages are in use; in particular the Unified Modeling Language (UML), recently adopted as a standard by the Object Management Group, will probably achieve broad acceptance. Despite a strong interconnection of PDT with the Software Engineering area, there is a lack of bridging elements at the modelling language level. This paper introduces a mapping between EXPRESS-G and UML in order to define a linking bridge and bring the best of both worlds together. The feasibility of the mapping is shown with representative examples; several problematic cases are discussed, and possible solutions are presented.
Interoperability between the different CAx systems involved in the development process of cars is presently one of the most critical issues in the automotive industry. None of the existing CAx systems meets all requirements of the very complex process network of the lifecycle of a car. Against this background, industrial engineers have to use various CAx systems to get optimal support for their daily work. Today, the communication between different CAx systems is done via data files, using special direct converters or neutral, system-independent standards like IGES, VDAFS, and recently STEP, the international standard for product data description. To reduce the dependency on individual CAx system vendors, the German automotive industry developed an open CAx system architecture based on STEP as a guiding principle for CAx system development. The central component of this architecture is a common, system-independent access interface to CAx functions and data of all involved CAx systems, which is under development in the project ANICA. Within this project, a CAx object bus has been developed based on a STEP data description, using CORBA as an integration platform. This new approach allows transparent access to data and functions of the integrated CAx systems without file-based data exchange. The product development process with various CAx systems concerns objects from different CAx systems. Thus, mechanisms are needed to handle the persistent storage of the CAx objects distributed over the CAx object bus, in order to give the developing engineers a consistent view of the data model of their product. The following paper discusses several possibilities to guarantee consistent data management and storage of distributed CAx models. One of the most promising approaches is the enhancement of the CAx object bus by a STEP-based object-oriented data server to realise central data management.
Functional Analysis
(1998)
The aim of this course is to give a very modest introduction to the extremely rich and well-developed theory of Hilbert spaces, an introduction that hopefully will provide the students with knowledge of some of the fundamental results of the theory and will make them familiar with everything needed in order to understand, believe, and apply the spectral theorem for selfadjoint operators in Hilbert space. This implies that the course will have to give answers to such questions as: What is a Hilbert space? What is a bounded operator in Hilbert space? What is a selfadjoint operator in Hilbert space? What is the spectrum of such an operator? What is meant by a spectral decomposition of such an operator?
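For orientation, the objects named in these questions have compact standard definitions (textbook forms, not quoted from the script). A linear operator \(T\) on a Hilbert space \(H\) is bounded, respectively selfadjoint, and a selfadjoint \(T\) has a spectral decomposition:

\[
\|T\| = \sup_{\|x\| \le 1} \|Tx\| < \infty, \qquad
\langle Tx, y \rangle = \langle x, Ty \rangle \ \ \text{for all } x, y \in H, \qquad
T = \int_{\sigma(T)} \lambda \, dE(\lambda),
\]

where \(\sigma(T)\) is the spectrum of \(T\) and \(E\) its projection-valued spectral measure.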
Convex Analysis
(1998)
Preface. Convex analysis is one of the mathematical tools which are used both explicitly and indirectly in many mathematical disciplines. However, there are not many courses which have convex analysis as their main topic. More often, parts of convex analysis are taught in courses like linear or nonlinear optimization, probability theory, geometry, location theory, etc. This manuscript gives a systematic introduction to the concepts of convex analysis. The focus is on the geometrical interpretation of convex analysis. This focus was one of the reasons why I decided to restrict myself to the finite-dimensional case. Another reason for this restriction is that in the infinite-dimensional case many proofs become more difficult and more technical. Therefore, it would not have been possible (for me) to cover all the topics I wanted to discuss in this introductory text in the infinite-dimensional case, too. Anyway, I am convinced that even for someone who is interested in the infinite-dimensional case this manuscript will be a good starting point. When I offered a course in convex analysis in the Wintersemester 1997/1998 (upon which this manuscript is based), a lot of students asked me how this course fits into their own studies. Because this manuscript will (hopefully) be used by some students in the future, I give here some of the possible answers to this very question. - Convex analysis can be seen as an extension of classical analysis, in which we still get many of the results, like a mean-value theorem, with weaker assumptions on the smoothness of the function. - Convex analysis can be seen as a foundation of linear and nonlinear optimization which provides many tools to handle concepts in optimization much more easily (for example the Lemma of Farkas). - Finally, convex analysis can be seen as a link between abstract geometry and the very algorithmically oriented field of computational geometry. As already explained, this manuscript is based on a one-semester course and therefore cannot cover all topics and discuss all aspects of convex analysis in detail. To guide the interested reader I have included a list of nice books about this subject at the end of the manuscript. It should be noted that the philosophy of this course follows [3], [4] and THE BOOK of modern convex analysis [6]. The geometrical emphasis, however, is also related to the intentions of [1].
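The defining inequality of the subject, on which the geometrical interpretation rests (a standard definition, not quoted from the manuscript): a function \(f:\mathbb{R}^n \to \mathbb{R}\) is convex if

\[
f(\lambda x + (1-\lambda) y) \le \lambda f(x) + (1-\lambda) f(y)
\qquad \text{for all } x, y \in \mathbb{R}^n,\ \lambda \in [0,1],
\]

i.e., the graph of \(f\) lies below each of its chords; likewise, a set \(C \subseteq \mathbb{R}^n\) is convex if it contains the segment between any two of its points.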
The Kallianpur-Robbins law describes the long-term asymptotic behaviour of the distribution of the occupation measure of a Brownian motion in the plane. In this paper we show that this behaviour can be seen at every typical Brownian path by choosing either a random time or a random scale according to the logarithmic laws of order three. We also prove a ratio ergodic theorem for small scales outside an exceptional set of vanishing logarithmic density of order three.
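For orientation, the Kallianpur-Robbins law can be stated in one standard textbook form (a general formulation, not quoted from the paper): for a planar Brownian motion \(B\) and a bounded set \(A\) with Lebesgue measure \(|A| > 0\),

\[
\frac{2\pi}{|A| \log t} \int_0^t \mathbf{1}_A(B_s)\, ds \;\xrightarrow{d}\; \mathrm{Exp}(1)
\qquad (t \to \infty),
\]

i.e., the suitably normalized occupation measure converges in distribution to a standard exponential law; the paper sharpens this distributional statement to pathwise statements along random times and scales.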
In the following, an introduction to the level set method is given so that one becomes aware of the arising problems, which lead to the need for reinitialization. The problems concerning reinitialization itself are analysed in more detail, and a solution for area loss is proposed. This solution consists in a combination of the commonly used PDE for reinitialization with extrapolation around the zero level set. Numerical experiments show rather satisfactory results as far as area loss and the computation of curvature are concerned.
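The reinitialization PDE referred to here is commonly taken to be the pseudo-time equation of Sussman, Smereka, and Osher (stated in its standard form for orientation; the paper combines it with extrapolation around the zero level set):

\[
\frac{\partial \phi}{\partial \tau} = \operatorname{sign}(\phi_0)\,\bigl(1 - |\nabla \phi|\bigr),
\qquad \phi(\cdot, 0) = \phi_0,
\]

whose steady state satisfies \(|\nabla \phi| = 1\), i.e., \(\phi\) becomes a signed distance function. Ideally the zero level set of \(\phi_0\) is preserved during this evolution; in practice it drifts slightly, which is the source of the area loss discussed above.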
The quasienergy spectrum of a periodically driven quantum system is constructed from classical dynamics by means of the semiclassical initial value representation using coherent states. For the first time, this method is applied to explicitly time-dependent systems. For an anharmonic oscillator system with mixed chaotic and regular classical dynamics, the entire quantum spectrum (both regular and chaotic states) is reproduced semiclassically with surprising accuracy. In particular, the method is capable of accounting for the very small tunneling splittings.
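The quasienergies appearing here and in the following abstracts are, in the standard Floquet sense (a general definition, not specific to this paper), the eigenphases of the propagator over one driving period \(T\):

\[
U(T)\, |u_n\rangle = e^{-i \varepsilon_n T/\hbar}\, |u_n\rangle ,
\]

so each quasienergy \(\varepsilon_n\) is defined only modulo \(2\pi\hbar/T\).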
The paper discusses the metastable states of a quantum particle in a periodic potential under a constant force (the model of a crystal electron in a homogeneous electric field), which are known as the Wannier-Stark ladder of resonances. An efficient procedure to find the positions and widths of resonances is suggested and illustrated by numerical calculation for a cosine potential.
The dispersions of dipolar (Damon-Eshbach modes) and exchange-dominated spin waves are calculated for in-plane magnetized thin and ultrathin cubic films with (111) crystal orientation, and the results are compared with those obtained for the other principal planes. The properties of these magnetic excitations are examined from the point of view of Brillouin light scattering experiments. Attention is paid to the variation of the spin-wave frequency as a function of the magnetization direction in the film plane for different film thicknesses. Interface anisotropies and the bulk magnetocrystalline anisotropy are considered in the calculation. A quantitative comparison between an analytical expression obtained in the limit of small film thickness and wave vector and the full numerical calculation is given.
A formalism is developed for calculating the quasienergy states and spectrum for time-periodic quantum systems when a time-periodic dynamical invariant operator with a nondegenerate spectrum is known. The method, which circumvents the integration of the Schrödinger equation, is applied to an integrable class of systems, where the global invariant operator is constructed. Furthermore, a local integrable approximation for more general non-integrable systems is developed. Numerical results are presented for the double-resonance model.
We consider N coupled linear oscillators with time-dependent coefficients. An exact complex-amplitude, real-phase decomposition of the oscillatory motion is constructed. This decomposition is further used to derive N exact constants of motion which generalise the so-called Ermakov-Lewis invariant of a single oscillator. In the Floquet problem of periodic oscillator coefficients we discuss the existence of periodic complex amplitude functions in terms of existing Floquet solutions.
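For a single oscillator \(\ddot q + \omega^2(t)\, q = 0\), the Ermakov-Lewis invariant mentioned here has the well-known form (a standard result, stated for orientation; the paper's N-oscillator constants generalize it):

\[
I = \frac{1}{2}\left[ \left(\frac{q}{\rho}\right)^{2} + \left(\rho \dot q - \dot\rho\, q\right)^{2} \right],
\qquad \ddot\rho + \omega^{2}(t)\,\rho = \frac{1}{\rho^{3}},
\]

where \(\rho\) is any solution of the auxiliary (Ermakov) equation; one checks directly that \(dI/dt = 0\) along solutions.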
We have computed ensembles of complete spectra of the staggered Dirac operator using four-dimensional SU(2) gauge fields, both in the quenched approximation and with dynamical fermions. To identify universal features in the Dirac spectrum, we compare the lattice data with predictions from chiral random matrix theory for the distribution of the low-lying eigenvalues. Good agreement is found up to some limiting energy, the so-called Thouless energy, above which random matrix theory no longer applies. We determine the dependence of the Thouless energy on the simulation parameters using the scalar susceptibility and the number variance.
The Wannier-Bloch resonance states are metastable states of a quantum particle in a space-periodic potential plus a homogeneous field. Here we analyze the states of a quantum particle in a space- and time-periodic potential. In this case the dynamics of the classical counterpart of the quantum system is either quasiregular or chaotic, depending on the driving frequency. It is shown that both the quasiregular and the chaotic motion can support quantum resonances. The relevance of the obtained result to the problem of a crystal electron under the simultaneous influence of d.c. and a.c. electric fields is briefly discussed. PACS: 73.20Dx, 73.40Gk, 05.45.+b
We study the statistics of the Wigner delay time and the resonance width for a Bloch particle in ac and dc fields in the regime of quantum chaos. It is shown that after appropriate rescaling the distributions of these quantities have the universal character predicted by the random matrix theory of chaotic scattering.
The tunneling splitting of the energy levels of a ferromagnetic particle in the presence of an applied magnetic field - previously derived only for the ground state with the path integral method - is obtained in a simple way from Schrödinger theory. The origin of the factors entering the result is clearly understood, in particular the effect of the asymmetry of the barriers of the potential. The method should appeal particularly to experimentalists searching for evidence of macroscopic spin tunneling.
Transitions from classical to quantum behaviour in a spin system with two degenerate ground states separated by twin energy barriers which are asymmetric due to an applied magnetic field are investigated. It is shown that these transitions can be interpreted as first- or second-order phase transitions depending on the anisotropy and magnetic parameters defining the system in an effective Lagrangian description.
The greybody factors in BTZ black holes are evaluated from 2D CFT in the spirit of the AdS3/CFT correspondence. The initial state of black holes in the usual calculation of greybody factors by effective CFT is described as the Poincaré vacuum state in 2D CFT. The normalization factor, which cannot be fixed in the effective CFT without appealing to string theory, is shown to be determined by the normalized bulk-to-boundary Green function. The relation among the greybody factors in black holes of different dimensions is exhibited. Two kinds of \((h, \bar h) = (1, 1)\) operators which couple with the boundary value of the massless scalar field are discussed.
The light-cone Hamiltonian approach is applied to the super D2-brane, and the equivalent area-preserving and U(1) gauge-invariant effective Lagrangian, which is quadratic in the U(1) gauge field, is derived. The latter is recognised to be that of the three-dimensional U(1) gauge theory, interacting with matter supermultiplets, in a special external induced supergravity metric and the gravitino field, depending on matter fields. The duality between this theory and 11d supermembrane theory is demonstrated in the light-cone gauge.
The pure-Skyrme limit of a scale-breaking Skyrmed O(3) sigma model in 1+1 dimensions is employed to study the effect of the Skyrme term on the semiclassical analysis of a field theory with instantons. The instantons of this model are self-dual and can be evaluated explicitly. They are also localised to an absolute scale, and their fluctuation action can be reduced to a scalar subsystem. This permits the explicit calculation of the fluctuation determinant and the shift in vacuum energy due to instantons. The model also illustrates the semiclassical quantisation of a Skyrmed field theory.
We derive a new class of particle methods for conservation laws, which are based on numerical flux functions to model the interactions between moving particles. The derivation is similar to that of classical finite-volume methods, except that the fixed grid structure of the finite-volume method is replaced by so-called mass packets of particles. We give some numerical results on a shock wave solution of Burgers' equation as well as the well-known one-dimensional shock tube problem.
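For comparison, the classical finite-volume scheme that the derivation parallels can be sketched in a few lines, here for Burgers' equation \(u_t + (u^2/2)_x = 0\) with a Lax-Friedrichs numerical flux. This is a generic textbook scheme on a fixed grid, shown only as the analogue that the paper replaces by moving mass packets; it is not the particle method itself.

```python
import numpy as np

# Classical finite-volume scheme for Burgers' equation u_t + (u^2/2)_x = 0
# with a Lax-Friedrichs numerical flux. The paper substitutes the fixed grid
# below with moving "mass packets"; this sketch is only the grid-based analogue.

def flux(u):
    return 0.5 * u * u

def lax_friedrichs(ul, ur, dx, dt):
    # Average flux plus numerical viscosity proportional to dx/dt.
    return 0.5 * (flux(ul) + flux(ur)) - 0.5 * dx / dt * (ur - ul)

nx = 400
dx, dt = 1.0 / nx, 0.5 / nx                      # CFL-safe for |u| <= 1
x = (np.arange(nx) + 0.5) * dx
u = np.where(x < 0.5, 1.0, 0.0)                  # Riemann data: right-moving shock

for _ in range(200):                             # advance to t = 0.25
    ue = np.concatenate(([u[0]], u, [u[-1]]))    # outflow boundaries
    f = lax_friedrichs(ue[:-1], ue[1:], dx, dt)  # fluxes at all cell interfaces
    u = u - dt / dx * (f[1:] - f[:-1])           # conservative update

print(u[240:260].round(2))                       # smeared shock near x = 0.625
```

The shock speed from the Rankine-Hugoniot condition is \(s = 1/2\), so at \(t = 0.25\) the shock sits near \(x = 0.625\), which the printed cells show (smeared by the scheme's numerical viscosity).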
The lowest resonant frequency of a cavity resonator is usually approximated by the classical Helmholtz formula. However, if the opening is rather large and the front wall is narrow, this formula is no longer valid. Here we present a correction which is of third order in the ratio of the diameters of aperture and cavity. In addition to its high accuracy, it allows one to estimate the damping due to radiation. The result is found by applying the method of matched asymptotic expansions. The correction contains form factors describing the shapes of the opening and the cavity. They are computed for a number of standard geometries. Results are compared with numerical computations.
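The classical Helmholtz formula referred to gives the resonance frequency of a cavity of volume \(V\) with a neck of cross-section \(S\) and effective length \(L_{\mathrm{eff}}\) (standard textbook form, stated for orientation; the paper's third-order correction refines it for large openings):

\[
f_0 = \frac{c}{2\pi} \sqrt{\frac{S}{V\, L_{\mathrm{eff}}}},
\]

with \(c\) the speed of sound; \(L_{\mathrm{eff}}\) is the geometric neck length augmented by end corrections proportional to the aperture radius.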
In this paper, a combined approach to damage diagnosis of rotors is proposed. The intention is to employ signal-based as well as model-based procedures for an improved detection of size and location of the damage. In a first step, Hilbert transform signal processing techniques allow for a computation of the signal envelope and the instantaneous frequency, so that various types of non-linearities due to a damage may be identified and classified based on measured response data. In a second step, a multi-hypothesis bank of Kalman Filters is employed for the detection of the size and location of the damage based on the information of the type of damage provided by the results of the Hilbert transform.
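The first, signal-based step can be illustrated with off-the-shelf tools: the analytic signal obtained from the Hilbert transform yields the envelope and the instantaneous frequency directly. A minimal sketch on a synthetic amplitude-modulated tone (not the paper's measured rotor data):

```python
import numpy as np
from scipy.signal import hilbert

# Envelope and instantaneous frequency via the analytic signal,
# illustrated on a synthetic amplitude-modulated tone, not measured rotor data.
fs = 1000.0                                    # sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)
x = (1 + 0.3 * np.sin(2 * np.pi * 2 * t)) * np.cos(2 * np.pi * 50 * t)

analytic = hilbert(x)                          # x + i * H[x]
envelope = np.abs(analytic)                    # slowly varying amplitude
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) / (2 * np.pi) * fs  # should hover around 50 Hz

print(envelope.max().round(2), inst_freq[100:900].mean().round(1))
```

Deviations of the envelope or the instantaneous frequency from their expected behaviour are exactly the kind of signature the paper uses to identify and classify damage-induced non-linearities.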
Wavelets on closed surfaces in Euclidean space R3 are introduced starting from a scale discrete wavelet transform for potentials harmonic down to a spherical boundary. Essential tools for approximation are integration formulas relating an integral over the sphere to suitable linear combinations of functional values (resp. normal derivatives) on the closed surface under consideration. A scale discrete version of multiresolution is described for potential functions harmonic outside the closed surface and regular at infinity. Furthermore, an exact fully discrete wavelet approximation is developed in case of band-limited wavelets. Finally, the role of wavelets is discussed in three problems, namely (i) the representation of a function on a closed surface from discretely given data, (ii) the (discrete) solution of the exterior Dirichlet problem, and (iii) the (discrete) solution of the exterior Neumann problem.
For the determination of the earth's gravity field many types of observations are available nowadays, e.g., terrestrial gravimetry, airborne gravimetry, satellite-to-satellite tracking, satellite gradiometry etc. The mathematical connection between these observables on the one hand and gravity field and shape of the earth on the other hand, is called the integrated concept of physical geodesy. In this paper harmonic wavelets are introduced by which the gravitational part of the gravity field can be approximated progressively better and better, reflecting an increasing flow of observations. An integrated concept of physical geodesy in terms of harmonic wavelets is presented. Essential tools for approximation are integration formulas relating an integral over an internal sphere to suitable linear combinations of observation functionals, i.e., linear functionals representing the geodetic observables. A scale discrete version of multiresolution is described for approximating the gravitational potential outside and on the earth's surface. Furthermore, an exact fully discrete wavelet approximation is developed for the case of band-limited wavelets. A method for combined global outer harmonic and local harmonic wavelet modelling is proposed corresponding to realistic earth models. As examples, the role of wavelets is discussed for the classical Stokes problem, the oblique derivative problem, satellite-to-satellite tracking, satellite gravity gradiometry, and combined satellite-to-satellite tracking and gradiometry.
Annual Report 1997
(1998)
The state of strategic controlling reporting and possibilities of transferring it to the university
(1998)
Rewriting techniques have been applied successfully to various areas of symbolic computation. Here we consider the notion of prefix-rewriting and give a survey on its applications to the subgroup problem in combinatorial group theory. We will see that for certain classes of finitely presented groups finitely generated subgroups can be described through convergent prefix-rewriting systems, which can be obtained from a presentation of the group considered and a set of generators for the subgroup through a specialized Knuth-Bendix style completion procedure. In many instances a finite presentation for the subgroup considered can be constructed from such a convergent prefix-rewriting system, thus solving the subgroup presentation problem. Finally we will see that the classical procedures for computing Nielsen reduced sets of generators for a finitely generated subgroup of a free group and the Todd-Coxeter coset enumeration can be interpreted as particular instances of prefix-completion. Further, both procedures are closely related to the computation of prefix Gröbner bases for right ideals in free group rings.
Todd and Coxeter's method for enumerating cosets of finitely generated subgroups in finitely presented groups (abbreviated by Tc here) is one famous method from combinatorial group theory for studying the subgroup problem. Since prefix string rewriting is also an appropriate method to study this problem, prefix string rewriting methods have been compared to Tc. We recall and compare two of them briefly, one by Kuhn and Madlener [4] and one by Sims [15]. A new approach using prefix string rewriting in free groups is derived from the algebraic method presented by Reinert, Mora and Madlener in [14] which directly emulates Tc. It is extended to free monoids and an algebraic characterization for the "cosets" enumerated in this setting is provided.
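A much smaller ingredient common to all these methods, free reduction of words in a free group, already shows the rewriting flavour. The sketch below implements only this elementary step, not prefix string rewriting, prefix completion, or Todd-Coxeter enumeration themselves; the encoding (lowercase generators, uppercase inverses) is an arbitrary illustrative choice.

```python
# Free reduction in a free group: repeatedly cancel adjacent inverse pairs.
# Generators are lowercase letters; their inverses are the uppercase letters.
# This is only the elementary rewriting step underlying the methods above.

def free_reduce(word):
    out = []
    for c in word:
        if out and c.isalpha() and out[-1] == c.swapcase():
            out.pop()            # cancel a pair like "aA" or "Bb"
        else:
            out.append(c)
    return "".join(out)

print(free_reduce("abBAba"))     # "ba": bB cancels, then aA cancels
```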
We prove that there exists a positive \(\alpha\) such that for any integer \(d\ge 3\) and any topological types \(S_1,\dots,S_n\) of plane curve singularities satisfying \(\mu(S_1)+\dots+\mu(S_n)\le\alpha d^2\), there exists a reduced irreducible plane curve of degree \(d\) with exactly \(n\) singular points of types \(S_1,\dots,S_n\), respectively. This estimate is optimal with respect to the exponent of \(d\). In particular, we prove that for any topological type \(S\) there exists an irreducible polynomial of degree \(d\le 14\sqrt{\mu(S)}\) having a singular point of type \(S\).
On a family F of probability measures on a measure space we consider the Hellinger and Kullback-Leibler distances. We show that under suitable regularity conditions Jeffreys' prior is proportional to the k-dimensional Hausdorff measure w.r.t. the Hellinger distance and to the k/2-dimensional Hausdorff measure w.r.t. the Kullback-Leibler distance. The proof is based on an area formula for the Hausdorff measure w.r.t. generalized distances.
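For orientation (standard facts added by us, not quoted from the abstract): for a smooth \(k\)-dimensional family both distances admit local quadratic expansions in the Fisher information \(I(\theta)\),
\[
H^2(P_\theta, P_{\theta+h}) = \tfrac{1}{8}\, h^{\top} I(\theta)\, h + o(|h|^2),
\qquad
KL(P_\theta \,\|\, P_{\theta+h}) = \tfrac{1}{2}\, h^{\top} I(\theta)\, h + o(|h|^2),
\]
so the Hellinger distance scales linearly in \(h\) while the Kullback-Leibler divergence scales quadratically, which halves the Hausdorff dimension when passing from the former to the latter; in both cases the density is governed by \(\sqrt{\det I(\theta)}\), i.e., Jeffreys' prior.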
As is well known, there is no satisfactory infinite-dimensional substitute for the Lebesgue measure. On the other hand, many techniques of classical analysis carry over to infinite-dimensional situations. One possibility is offered by the theory of differentiable measures: one defines directional derivatives for measures in a way similar to that for functions. One of the central examples is the Wiener measure. Stochastic integration with respect to Brownian motion, in particular the Skorokhod integral, arises naturally from this approach, and the basic ideas of the Malliavin calculus can also be explained easily in this framework. The lectures give most of the proofs.
The paper studies differential and related properties of functions of a real variable with values in the space of signed measures. In particular, the connections between different definitions of differentiability are described, corresponding to different topologies on the measures. Some conditions are given for the equivalence of the measures in the range of such a function. These conditions are in terms of so-called logarithmic derivatives and yield a generalization of the Cameron-Martin-Maruyama-Girsanov formula. Questions of this kind appear both in the theory of differentiable measures on infinite-dimensional spaces and in the theory of statistical experiments.
Robust Reliability of Diagnostic Multi-Hypothesis Algorithms: Application to Rotating Machinery
(1998)
Damage diagnosis based on a bank of Kalman filters, each one conditioned on a specific hypothesized system condition, is a well-recognized and powerful diagnostic tool. This multi-hypothesis approach can be applied to a wide range of damage conditions. In this paper, we will focus on the diagnosis of cracks in rotating machinery. The question we address is: how to optimize the multi-hypothesis algorithm with respect to the uncertainty of the spatial form and location of cracks and their resulting dynamic effects. First, we formulate a measure of the reliability of the diagnostic algorithm, and then we discuss modifications of the diagnostic algorithm for the maximization of the reliability. The reliability of a diagnostic algorithm is measured by the amount of uncertainty consistent with no failure of the diagnosis. Uncertainty is quantitatively represented with convex models.
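As a hedged illustration of the multi-hypothesis structure (our sketch, not the paper's implementation; all model matrices are hypothetical placeholders), each filter in the bank is conditioned on one hypothesized crack model, and the Gaussian innovation likelihood ranks the hypotheses after every measurement:

```python
import numpy as np

def filter_bank_step(banks, y, H, Q, R):
    """One predict/update step for each hypothesized model; returns log-likelihoods.

    banks: dict mapping hypothesis -> (A, x, P), where A is a hypothetical
    rotor model with a crack at a guessed location, x the state estimate,
    and P its covariance.
    """
    loglik = {}
    for h, (A, x, P) in banks.items():
        x = A @ x                        # predict state
        P = A @ P @ A.T + Q              # predict covariance
        v = y - H @ x                    # innovation
        S = H @ P @ H.T + R              # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x + K @ v                    # update state
        P = (np.eye(len(x)) - K @ H) @ P
        banks[h] = (A, x, P)
        # Gaussian innovation log-likelihood scores hypothesis h
        loglik[h] = -0.5 * (v @ np.linalg.solve(S, v)
                            + np.log(np.linalg.det(S))
                            + len(v) * np.log(2 * np.pi))
    return loglik
```

Running this over a measurement sequence and accumulating the log-likelihoods per hypothesis yields the diagnosis; the reliability question in the paper concerns how robust this ranking is against model uncertainty.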
For the numerical simulation of 3D radiative heat transfer in glasses and glass melts, practically applicable mathematical methods are needed that handle such problems optimally on workstation-class computers. Since the exact solution would require super-computer capabilities, we concentrate on approximate solutions with a high degree of accuracy. The following approaches are studied: 3D diffusion approximations and 3D ray-tracing methods.
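For orientation (a standard fact we add here; whether the paper uses exactly this variant is our assumption): the classical diffusion approximation for optically thick semi-transparent media is the Rosseland approximation, in which radiation contributes a nonlinear heat flux
\[
q_r \;=\; -\,\frac{16\, n^2 \sigma T^3}{3\,\kappa}\, \nabla T ,
\]
with refractive index \(n\), Stefan-Boltzmann constant \(\sigma\), and (Rosseland mean) extinction coefficient \(\kappa\), reducing the radiative transfer problem to a temperature equation that is tractable at workstation cost.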
In the present paper multilane models for vehicular traffic are considered. A microscopic multilane model based on reaction thresholds is developed. Based on this model, an Enskog-like kinetic model is developed; in particular, care is taken to incorporate the correlations between the vehicles. From the kinetic model a fluid dynamic model is derived, with the macroscopic coefficients deduced from the underlying kinetic model. Numerical simulations are presented for all three levels of description in [10], where a comparison of the results is also given.
In this paper the work presented in [6] is continued. The present paper contains detailed numerical investigations of the models developed there. A numerical method to treat the kinetic equations obtained in [6] is presented, and results of the simulations are shown. Moreover, the stochastic correlation model used in [6] is described and investigated in more detail.
In this paper domain decomposition methods for radiative transfer problems including conductive heat transfer are treated. The paper focuses on semi-transparent materials, like glass, and the associated conditions at the interface between the materials. Using asymptotic analysis we derive conditions for the coupling of the radiative transfer equations and a diffusion approximation. Several test cases are treated and a problem arising in glass manufacturing processes is solved. The results clearly show the advantages of a domain decomposition approach: accuracy equivalent to that of the global radiative transfer solution is achieved, while computation time is strongly reduced.
A new approach is proposed to model and numerically simulate heterogeneous catalysis in rarefied gas flows. It is developed to satisfy the following requirements simultaneously: i) describe the gas phase at the microscopic scale, as required in rarefied flows; ii) describe the wall at the macroscopic scale, to avoid prohibitive computational costs and to cover not only crystalline but also amorphous surfaces; iii) reproduce on average macroscopic laws correlated with experimental results; and iv) derive analytic models in a systematic and exact way. The problem is stated in the general framework of a non-static flow in the vicinity of a catalytic, non-porous surface (without ageing). It is shown that the exact and systematic resolution method based on the Laplace transform, introduced previously by the author to model collisions in the gas phase, can be extended to the present problem. The proposed approach is applied to the modelling of the Eley-Rideal and Langmuir-Hinshelwood recombinations, assuming that the coverage is locally at equilibrium. The models are developed for one atomic species and extended to the general case of several atomic species. Numerical calculations show that the models derived in this way accurately reproduce experimentally observed behaviours.
A new method of determining some characteristics of binary images is proposed based on a special linear filtering. This technique enables the estimation of the area fraction, the specific line length, and the specific integral of curvature. Furthermore, the specific length of the total projection is obtained, which gives detailed information about the texture of the image. The influence of lateral and directional resolution depending on the size of the applied filter mask is discussed in detail. The technique includes a method of increasing directional resolution for texture analysis while keeping lateral resolution as high as possible.
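A minimal sketch of the kind of estimator involved (ours; the paper's filter masks are more refined and include the directional and projection-length analysis): the area fraction is a pixel average, and a Crofton-type count of 0-1 transitions along rows and columns gives a crude estimate of the specific boundary line length.

```python
import numpy as np

def binary_image_characteristics(img, pixel_size=1.0):
    """Crude estimators for a 0/1 image; img is a 2D numpy array of 0s and 1s."""
    area_fraction = img.mean()
    # 0-1 transitions per unit test-line length, in two axis directions
    p_h = np.abs(np.diff(img, axis=1)).sum() / (img.size * pixel_size)
    p_v = np.abs(np.diff(img, axis=0)).sum() / (img.size * pixel_size)
    # Crofton-type relation L_A = (pi/2) * mean intersections per unit length,
    # here averaged over only two directions, hence approximate
    boundary_length_density = (np.pi / 4.0) * (p_h + p_v)
    return area_fraction, boundary_length_density

# toy usage: a centered disc of radius 20 in a 100x100 image
y, x = np.ogrid[:100, :100]
disc = ((x - 50) ** 2 + (y - 50) ** 2 <= 20 ** 2).astype(int)
print(binary_image_characteristics(disc))  # approx. (0.126, 0.0126)
```

Averaging over more directions, as the paper's directional-resolution discussion suggests, improves the line-length estimate for anisotropic textures.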
A multi-phase composite with periodically distributed inclusions with a smooth boundary is considered in this contribution. The composite component materials are supposed to be linear viscoelastic and aging (of the non-convolution integral type, for which the Laplace transform with respect to time is not effectively applicable) and are subjected to isotropic shrinkage. The free shrinkage deformation can be considered as a fictitious temperature deformation in the behavior law. The procedure presented in this paper proposes a way to determine average (effective homogenized) viscoelastic and shrinkage (temperature) composite properties and the homogenized stress field from known properties of the components. This is done by extending the asymptotic homogenization technique, known for purely elastic non-homogeneous bodies, to the non-homogeneous thermo-viscoelasticity of the integral non-convolution type. Up to now, homogenization theory has not covered viscoelasticity of the integral type. Sanchez-Palencia (1980) and Francfort & Suquet (1987) (see [2], [9]) have considered homogenization for viscoelasticity of the differential form and only up to the first derivative order. Integral-modeled viscoelasticity is more general than the differential one and includes almost all known differential models. The homogenization procedure is based on the construction of an asymptotic solution with respect to the period of the composite structure. This reduces the original problem to some auxiliary boundary value problems of elasticity and viscoelasticity on the unit periodic cell, of the same type as the original non-homogeneous problem. The existence and uniqueness results for such problems were obtained for kernels satisfying some constraint conditions. This is done by extending the Volterra integral operator theory to Volterra operators with respect to time whose kernels are spatial linear operators for any fixed time variables. Some ideas of such an approach were proposed in [11] and [12], where Volterra operators with kernels depending additionally on a parameter were considered. This manuscript delivers results of the same nature for the case of space-operator kernels.
We propose a new discretization scheme for solving ill-posed integral equations of the third kind. Combining this scheme with Morozov's discrepancy principle for Landweber iteration, we show that for some classes of equations this method requires a number of arithmetic operations of smaller order than the collocation method to solve an equation approximately with the same accuracy.
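For readers unfamiliar with the stopping rule, here is a generic sketch (ours; it does not reproduce the paper's discretization scheme) of Landweber iteration stopped by Morozov's discrepancy principle, with noise level \(\delta\) and safety factor \(\tau > 1\):

```python
import numpy as np

def landweber_discrepancy(A, y, delta, tau=1.5, max_iter=10000):
    """Landweber iteration x_{k+1} = x_k + beta * A^T (y - A x_k),
    stopped by Morozov's discrepancy principle ||A x_k - y|| <= tau * delta.
    A is an m x n matrix discretizing the integral operator (hypothetical)."""
    beta = 1.0 / np.linalg.norm(A, 2) ** 2   # step size below 2/||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        r = y - A @ x
        if np.linalg.norm(r) <= tau * delta:  # discrepancy level reached: stop
            break
        x = x + beta * (A.T @ r)
    return x
```

Stopping as soon as the residual matches the noise level is what regularizes the iteration; iterating further would start fitting the data noise.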
In this paper we study the space-time asymptotic behavior of the solutions and derivatives of the incompressible Navier-Stokes equations. Using moment estimates we obtain that strong solutions to the Navier-Stokes equations which decay in \(L^2\) at the rate \(\|u(t)\|_2 \leq C(t+1)^{-\mu}\) have the following pointwise space-time decay: \[|D^{\alpha}u(x,t)| \leq C_{k,m} \frac{1}{(t+1)^{\rho_0}(1+|x|^2)^{k/2}},\] where \(\rho_0 = (1-2k/n)(m/2 + \mu) + \frac{3}{4}(1-2k/n)\) and \(|\alpha| = m\). The dimension \(n\) satisfies \(2 \leq n \leq 5\), with \(0 \leq k \leq n\) and \(\mu \geq n/4\).
Finding "good" cycles in graphs is a problem of great interest in graph theory as well as in locational analysis. We show that the center and median problems are NP hard in general graphs. This result holds both for the variable cardinality case (i.e. all cycles of the graph are considered) and the fixed cardinality case (i.e. only cycles with a given cardinality p are feasible). Hence it is of interest to investigate special cases where the problem is solvable in polynomial time. In grid graphs, the variable cardinality case is, for instance, trivially solvable if the shape of the cycle can be chosen freely. If the shape is fixed to be a rectangle one can analyse rectangles in grid graphs with, in sequence, fixed dimension, fixed cardinality, and variable cardinality. In all cases a com plete characterization of the optimal cycles and closed form expressions of the optimal objective values are given, yielding polynomial time algorithms for all cases of center rectangle problems. Finally, it is shown that center cycles can be chosen as rectangles for small cardinalities such that the center cycle problem in grid graphs is in these cases completely solved.
In order to improve the distribution system for the Nordic countries, BASF AG considered 13 alternative scenarios to the existing system, involving the construction of warehouses at various locations. For every scenario the transportation, storage, and handling cost incurred was to be as low as possible, subject to given restrictions on the delivery time. The scenarios were evaluated according to (minimal) total cost and weighted average delivery time. The results led to a restriction to only three cases, each involving only one new warehouse. For these a more accurate cost model was developed and evaluated, yielding results similar to those of a simple linear model. Since there were no clear preferences between cost and delivery time, the final decision was chosen to represent a compromise between the two criteria.
Robust facility location
(1998)
Let \(A\) be a nonempty finite subset of \(R^2\) representing the geographical coordinates of a set of demand points (towns, ...) to be served by a facility whose location within a given region \(S\) is sought. Assuming that the unit cost for \(a \in A\), if the facility is located at \(x \in S\), is proportional to \(\mathrm{dist}(x,a)\) - the distance from \(x\) to \(a\) - and that the demand of point \(a\) is given by \(w_a\), minimizing the total transportation cost \(TC(w,x)\) amounts to solving the Weber problem. In practice it may be the case, however, that the demand vector \(w\) is not known, and only an estimator \(\hat{w}\) can be provided; moreover, the errors in such an estimation process may be non-negligible. We propose a new model for this situation: select a threshold value \(B > 0\) representing the highest admissible transportation cost. Define the robustness \(\rho\) of a location \(x\) as the minimum increase in demand needed to make \(x\) inadmissible, i.e., \(\rho(x) = \min\{\|w^*-\hat{w}\| : TC(w^*,x) \geq B,\ w^* \geq 0\}\), and then solve the optimization problem \(\max_{x \in S} \rho(x)\) to obtain the most robust location.
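Under the additional assumptions (ours) that the norm is Euclidean and that \(\hat{w}\) itself is admissible, the robustness even has a closed form: the nearest inadmissible demand vector is the projection of \(\hat{w}\) onto the hyperplane \(\{w : \sum_a w_a \,\mathrm{dist}(x,a) = B\}\), which stays nonnegative because distances and demands are nonnegative. A sketch:

```python
import numpy as np

def robustness(x, demand_pts, w_hat, B):
    """rho(x) = min ||w - w_hat|| s.t. sum_a w_a * dist(x,a) >= B, w >= 0,
    for the Euclidean norm. If w_hat is admissible, the minimizer is the
    projection of w_hat onto the hyperplane c.w = B (c = distance vector),
    so rho(x) = (B - c.w_hat) / ||c||."""
    c = np.linalg.norm(demand_pts - x, axis=1)   # dist(x, a) for all a in A
    slack = B - c @ w_hat                        # margin below the cost cap
    return max(slack, 0.0) / np.linalg.norm(c)

# toy usage with hypothetical data: pick the most robust point on a grid
pts = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
w_hat = np.array([1.0, 1.0, 1.0])
grid = [np.array([gx, gy]) for gx in np.linspace(0, 4, 21)
                           for gy in np.linspace(0, 3, 16)]
best = max(grid, key=lambda x: robustness(x, pts, w_hat, B=12.0))
print(best, robustness(best, pts, w_hat, B=12.0))
```

For other norms, or when \(\hat{w}\) is already inadmissible, the projection argument no longer applies and the inner minimization must be solved as a constrained optimization problem.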
Knowledge about the distribution of a statistical estimator is important for various purposes, for example the construction of confidence intervals for model parameters or the determination of critical values of tests. A widely used method to estimate this distribution is the so-called bootstrap, which is based on an imitation of the probabilistic structure of the data generating process on the basis of the information provided by a given set of random observations. In this paper we investigate this classical method in the context of artificial neural networks used for estimating a mapping from input to output space. We establish consistency results for bootstrap estimates of the distribution of parameter estimates.
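The resampling scheme itself is generic; in the sketch below (ours) we bootstrap a plain least-squares fit purely to keep the example short, whereas the paper studies neural network estimators:

```python
import numpy as np

def bootstrap_parameters(X, y, fit, n_boot=500, rng=None):
    """Bootstrap the distribution of a parameter estimate theta_hat = fit(X, y):
    resample (x_i, y_i) pairs with replacement and refit for each replicate."""
    rng = rng or np.random.default_rng(0)
    n = len(y)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample indices with replacement
        estimates.append(fit(X[idx], y[idx]))
    return np.array(estimates)             # rows: bootstrap replicates

# toy usage: least-squares slope/intercept standing in for network parameters
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 1))
y = 2.0 * X[:, 0] + 0.3 * rng.standard_normal(200)
fit = lambda X, y: np.linalg.lstsq(np.c_[X, np.ones(len(y))], y, rcond=None)[0]
thetas = bootstrap_parameters(X, y, fit)
print(thetas.mean(axis=0), thetas.std(axis=0))  # bootstrap mean and spread
```

Replacing `fit` by a neural network training routine gives the setting of the paper; the consistency question is whether the resulting bootstrap distribution converges to the true distribution of the parameter estimates.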
The notion of formal description techniques for timed systems (T-FDTs) has been introduced in [EDK98a] to provide a unifying framework for description techniques that are formal and that allow one to describe the ongoing behavior of systems. In this paper we show that three well-known temporal logics, MTL, MTL-R, and CTL*, can be embedded in this framework. Moreover, we provide evidence that a large number of different kinds of temporal logics can be considered as T-FDTs.
The problem of integrating heterogeneous software systems also arises in the field of CAx systems, which are used in many forms, for instance in the automotive industry for vehicle development. First, the solutions currently practiced in this area and the problems arising with them are briefly presented. Then the new standard for product data, STEP, and the standard for the interoperability of heterogeneous software systems, CORBA, as well as some CORBA design patterns, are explained. Next, a CAx integration architecture based on these two standards, developed in the ANICA project, is presented, and the basic approach to its realization is described. Subsequently, a first practical implementation of this architecture is reported on. Finally, the lessons learned are briefly discussed and an outlook on future developments is given.
Virtual product development in a distributed environment requires intensive communication between the CAx systems involved. So far this has taken the form of file-based data exchange using direct converters or neutral interfaces. The data exchange is usually carried out in several iteration loops and is often accompanied by data losses as well as interruptions of the development activities. In contrast, a new approach to interoperability between CAx systems is the concept of a CAx object bus based on CORBA and STEP. This approach enables a cross-platform online coupling of heterogeneous CAx systems. Unlike file-based data exchange, it provides transparent access both to data and to functions of the connected systems, so that the continuity of product data along the process chain can be increased considerably. To assess its practical suitability, this new approach is compared with file-based data exchange, using virtual installation studies as an example. For model sizes of varying practical relevance, the times required for the transfer of geometry and topology are analyzed and compared. Furthermore, the general advantages and disadvantages of the two solutions are presented. Finally, the potential of the new approach for use in other areas is discussed.
This paper describes the tableau-based higher-order theorem prover HOT and an application to natural language semantics. In this application, HOT is used to prove equivalences using world knowledge during higher-order unification (HOU). This extended form of HOU is used to compute the licensing conditions for corrections.
Simultaneous quantifier elimination in sequent calculus is an improvement over the well-known Skolemization. It allows a lazy handling of instantiations as well as of the order of certain reductions. We prove the soundness of a sequent calculus which incorporates a rule for simultaneous quantifier elimination. The proof is carried out by semantic arguments and provides some insights into the dependencies between various formulas in a sequent.
Monomial representations and operations for Gröbner bases computations are investigated from an implementation point of view. The technique of vectorized monomial operations is introduced and it is shown how it expedites computations of Gröbner bases. Furthermore, a rank-based monomial representation and comparison technique is examined and it is concluded that this technique does not yield an additional speedup over vectorized comparisons. Extensive benchmark tests with the Computer Algebra System SINGULAR are used to evaluate these concepts.
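The idea behind vectorized monomial operations can be sketched as follows (our schematic reconstruction, not SINGULAR's actual data layout): pack all exponents into one word so that a monomial multiplication becomes a single integer addition, a lex comparison a single integer comparison, and, with one guard bit per field, the divisibility test also becomes word-wise.

```python
FIELD_BITS = 8                           # bits per exponent field (hypothetical)
MAX_EXP = (1 << (FIELD_BITS - 1)) - 1    # top bit of each field is a guard bit

def pack(exps):
    """Pack an exponent vector, first variable most significant, so that
    comparing packed words is exactly the lexicographic monomial comparison."""
    word = 0
    for e in exps:
        assert 0 <= e <= MAX_EXP
        word = (word << FIELD_BITS) | e
    return word

def guard_mask(nvars):
    g = 0
    for _ in range(nvars):
        g = (g << FIELD_BITS) | (1 << (FIELD_BITS - 1))
    return g

def multiply(a, b):
    """Monomial product = fieldwise exponent sum = one integer addition
    (valid as long as each per-field sum stays <= MAX_EXP)."""
    return a + b

def divides(b, a, g):
    """b | a iff no exponent field of a - b underflows; guard bits absorb borrows."""
    return ((a | g) - b) & g == g

g = guard_mask(3)
x2y = pack([2, 1, 0])                        # x^2 y
xy = pack([1, 1, 0])                         # x y
print(divides(xy, x2y, g))                   # True:  xy divides x^2 y
print(divides(pack([0, 0, 1]), x2y, g))      # False: z does not divide x^2 y
print(multiply(xy, xy) == pack([2, 2, 0]))   # True:  (xy)^2 = x^2 y^2
```

On real hardware the same trick processes several exponents per machine word, which is the speedup the paper measures.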
Groups can be studied using methods from different fields such as combinatorial group theory or string rewriting. Recently, techniques from Gröbner basis theory for free monoid rings (non-commutative polynomial rings) and free group rings have been added to the set of methods, due to the fact that monoid and group presentations (in terms of string rewriting systems) can be linked to special polynomials called binomials. In the same spirit, the aim of this paper is to discuss the relation between Nielsen reduced sets of generators and the Todd-Coxeter coset enumeration procedure on the one side and Gröbner basis theory for free group rings on the other. While it is well known that there is a strong relationship between Buchberger's algorithm and the Knuth-Bendix completion procedure, and there are interpretations of the Todd-Coxeter coset enumeration procedure using the Knuth-Bendix procedure for special cases, our aim is to show how a verbatim interpretation of the Todd-Coxeter procedure can be obtained by linking recent Gröbner techniques such as prefix Gröbner bases and the FGLM algorithm as a tool to study the duality of ideals. As a side product, our procedure computes Nielsen reduced generating sets for subgroups of finitely generated free groups.
In this paper we study a particular class of \(n\)-node recurrent neural networks (RNNs). In the \(3\)-node case we use monotone dynamical systems theory to show, for a well-defined set of parameters, that, generically, every orbit of the RNN is asymptotic to a periodic orbit. Then, within the usual 'learning' context of neural networks, we investigate whether RNNs of this class can adapt their internal parameters so as to 'learn' and then replicate autonomously certain external periodic signals. Our learning algorithm is similar to identification algorithms in adaptive control theory. The main feature of the adaptation algorithm is that global exponential convergence of parameters is guaranteed. We also obtain partial convergence results in the \(n\)-node case.
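To make the object of study concrete, the following sketch (ours; the weight matrix is hypothetical, and the paper's parameter set and adaptation law are not reproduced) integrates a 3-node additive RNN \(\dot{x} = -x + W\tanh(x) + u\) with Euler steps and checks the trajectory for asymptotic periodicity via zero-crossing gaps:

```python
import numpy as np

def simulate_rnn(W, u, x0, dt=0.01, steps=20000):
    """Euler integration of the additive RNN  x' = -x + W @ tanh(x) + u."""
    x = np.array(x0, dtype=float)
    traj = np.empty((steps, len(x)))
    for t in range(steps):
        x = x + dt * (-x + W @ np.tanh(x) + u)
        traj[t] = x
    return traj

# hypothetical weights: self-excitation plus cyclic coupling; for other
# choices of W the orbit may instead converge to an equilibrium
W = np.array([[2.0, -2.5,  0.0],
              [2.5,  2.0, -2.5],
              [0.0,  2.5,  2.0]])
traj = simulate_rnn(W, u=np.zeros(3), x0=[0.1, 0.0, -0.1])

# crude periodicity check: times of upward zero crossings of the first node;
# roughly constant gaps would indicate an asymptotically periodic orbit
s = traj[:, 0]
crossings = np.nonzero((s[:-1] < 0) & (s[1:] >= 0))[0]
print(np.diff(crossings)[-5:])
```

The paper's result is much stronger than such a numerical check: for the identified parameter set, generic orbits are provably asymptotic to periodic orbits.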
We present a particle method for the numerical simulation of boundary value problems for the steady-state Boltzmann equation. Referring to some recent results concerning steady-state schemes, the current approach may be used for multi-dimensional problems, where the collision scattering kernel is not restricted to Maxwellian molecules. The efficiency of the new approach is demonstrated by numerical results obtained from simulations of the (two-dimensional) Bénard instability in a rarefied gas flow.
In this paper we present a domain decomposition approach for the coupling of Boltzmann and Euler equations. Particle methods are used for both equations. This leads to a simple implementation of the coupling procedure and to natural interface conditions between the two domains. Adaptive time and space discretizations and a direct coupling procedure lead to considerable gains in CPU time compared to a solution of the full Boltzmann equation. Several test cases involving a large range of Knudsen numbers are numerically investigated.