The Lagrangian field-antifield formalism of Batalin and Vilkovisky (BV) is used to investigate the application of the collective coordinate method to soliton quantisation. In field theories with soliton solutions, the Gaussian fluctuation operator has zero modes due to the breakdown of global symmetries of the Lagrangian in the soliton solutions. It is shown how Noether identities and local symmetries of the Lagrangian arise when collective coordinates are introduced in order to avoid divergences related to these zero modes. This transformation to collective and fluctuation degrees of freedom is interpreted as a canonical transformation in the symplectic field-antifield space which induces a time-local gauge symmetry. Separating the corresponding Lagrangian path integral of the BV scheme in lowest order into harmonic quantum fluctuations and a free motion of the collective coordinate with the classical mass of the soliton, we show how the BV approach clarifies the relation between zero modes, collective coordinates, gauge invariance and the center-of-mass motion of classical solutions in quantum fields. Finally, we apply the procedure to the reduced nonlinear O(3) σ-model.
Based on experiences from an autonomous mobile robot project called MOBOT-III, we identified hard realtime constraints for the operating-system design. ALBATROSS is "A flexible multi-tasking and realtime network-operating-system kernel"; it is not limited to mobile-robot projects, but may be useful wherever a high reliability of a realtime system must be guaranteed. The focus of this article is on a communication scheme that fulfils the demanded (hard realtime) assurances without introducing time delays or jitter on the critical information channels. The central chapters discuss a locking-free shared-buffer management that works without interrupts, and a way to arrange the communication architecture so as to produce minimal protocol overhead and short cycle times. Most of the remaining communication capacity (if there is any) is used for redundant transfers, increasing the reliability of the whole system. ALBATROSS is currently implemented on a multi-processor VMEbus system.
This paper addresses the problem of adaptability over an infinite period of time in dynamic networks. A never-ending flow of examples has to be clustered, based on a distance measure. The developed model is based on the self-organizing feature maps of Kohonen [6], [7] and some adaptations by Fritzke [3]. The problem of dynamic surface classification is embedded in the SPIN project, where sub-symbolic abstraction is performed on a 3-d scanned environment.
The problem discussed here is the use of neural network clustering techniques on a mobile robot in order to build qualitative topological environment maps. This has to be done in realtime, i.e. the internal world model has to be adapted to the flow of sensor samples without the possibility of stopping this data flow. Our experiments are carried out in a simulation environment as well as on a robot called ALICE.
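The online clustering described in the two abstracts above can be illustrated by a minimal winner-take-all update in the spirit of Kohonen's feature maps: each incoming sensor sample nudges the nearest reference vector toward itself, so the map adapts to a continuous stream without storing it. This is a simplified sketch, not the papers' implementation; all names and parameters are illustrative.

```python
import numpy as np

def online_cluster(samples, n_units=4, lr=0.2, seed=0):
    """Winner-take-all online clustering: each sample moves the closest
    reference vector a little toward itself, adapting to the stream."""
    rng = np.random.default_rng(seed)
    units = rng.normal(size=(n_units, samples.shape[1]))
    for x in samples:
        winner = np.argmin(np.linalg.norm(units - x, axis=1))
        units[winner] += lr * (x - units[winner])  # move winner toward sample
    return units
```

A full Kohonen map would additionally update a topological neighbourhood of the winner and shrink the learning rate over time; Fritzke-style variants also grow and delete units dynamically.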
Based on the experiences from an autonomous mobile robot project called MOBOT-III, we identified hard realtime constraints for the operating-system design. ALBATROSS is "A flexible multi-tasking and realtime network-operating-system kernel". The focus of this article is on a communication scheme fulfilling the previously demanded assurances. The central chapters discuss the shared-buffer management and the design of the communication architecture. Some further aspects beside the strict realtime requirements, such as the possibilities to control and monitor a running system, are mentioned. ALBATROSS is currently implemented on a multi-processor VMEbus system.
Based on the idea of using topological feature maps instead of geometric environment maps in practical mobile robot tasks, we show an applicable way to navigate on such topological maps. The main features of this kind of navigation are the handling of very inaccurate position (and orientation) information as well as the implicit modelling of complex kinematics during an adaptation phase. Due to the lack of proper a-priori knowledge, a reinforcement-based model is used for the translation of navigator commands into motor actions. Instead of employing a backpropagation network for the central associative memory module (attaching action probabilities to sensor situations resp. navigator commands), a much faster dynamic cell structure system based on dynamic feature maps is shown. Standard graph-search heuristics like A* are applied in the planning phase.
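Planning on a topological map with A*, as mentioned above, can be sketched as graph search with a straight-line heuristic over rough node positions. This is a generic textbook sketch under assumed data structures (`graph`, `coords` and the node names are hypothetical), not the paper's planner.

```python
import heapq

def a_star(graph, coords, start, goal):
    """A* over a topological map: `graph` maps node -> {neighbor: cost};
    `coords` gives rough node positions, used only for the heuristic."""
    def h(n):  # straight-line estimate to the goal (admissible if costs >= distance)
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    frontier = [(h(start), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nb, cost in graph[node].items():
            ng = g + cost
            if ng < best.get(nb, float("inf")):
                best[nb] = ng
                heapq.heappush(frontier, (ng + h(nb), ng, nb, path + [nb]))
    return None, float("inf")
```

The inaccurate-position aspect emphasized in the abstract matters here only through the heuristic: node coordinates may be rough estimates, since edge costs alone determine the returned path's optimality.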
SPIN-NFDS Learning and Preset Knowledge for Surface Fusion - A Neural Fuzzy Decision System -
(1993)
The problem discussed in this paper may be characterized in short by the question: "Do these two surface fragments belong together (i.e. belong to the same surface)?" The presented techniques try to benefit from predefined knowledge as well as from the possibility to refine and adapt this knowledge according to a (changing) real environment, resulting in a combination of fuzzy decision systems and neural networks. The results are encouraging (fast convergence, high accuracy), and the model might be used for a wide range of applications. The general frame surrounding the work in this paper is the SPIN project, where the emphasis is on sub-symbolic abstractions based on a 3-d scanned environment.
This article discusses a qualitative, topological, and robust world-modelling technique with special regard to navigation tasks for mobile robots operating in unknown environments. As a central aspect, reliability regarding error tolerance and stability is emphasized. Benefits and problems involved in exploration as well as in navigation tasks are discussed. The proposed method places very low demands on the kind and quality of the employed sensors as well as on the kinematic precision of the utilized mobile platform. Hard real-time constraints can be handled due to the low computational complexity. The principal discussions are supported by real-world experiments with the mobile robot.
Self-localization in unknown environments, i.e. the correlation of current and former impressions of the world, is an essential ability for most mobile robots. The method proposed in this article is the construction of a qualitative, topological world model as a basis for self-localization. As a central aspect, reliability regarding error tolerance and stability is emphasized. The proposed techniques place very low demands on the kind and quality of the employed sensors as well as on the kinematic precision of the utilized mobile platform. Hard real-time constraints can be handled due to the low computational complexity. The principal discussions are supported by real-world experiments with the mobile robot.
Visual search has been investigated by many researchers, inspired by the biological fact that the sensory elements on the mammalian retina are not evenly distributed. Therefore the focus of attention (the area of the retina with the highest density of sensory elements) has to be directed so as to efficiently gather data according to certain criteria. The work discussed in this article concentrates on applying a laser range finder instead of a silicon retina. The laser range finder is maximally focused at any time, but a low-resolution total-scene image, available from the start with camera-like devices, cannot be used here. By adapting a couple of algorithms, the edge-scanning module steering the laser range finder is able to trace a detected edge. Based on the data scanned so far, two questions have to be answered. First: "Should the current (edge-) scanning be interrupted in order to give another area of interest a chance of being investigated?" and second: "Where should a new edge-scanning start after an interruption?" These two decision problems might be solved by a range of decision systems. The correctness of the decisions depends widely on the actual environment, and the underlying rules may not be well initialized with a-priori knowledge. So we present a version of a reinforcement decision system together with an overall scheme for efficiently controlling highly focused devices.
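The interrupt/continue decision described above can be framed, in its simplest form, as a bandit-style reinforcement problem: keep running value estimates for each action and mostly pick the best one while occasionally exploring. The following is a generic epsilon-greedy sketch under that framing; the action names and rewards are illustrative, not taken from the paper.

```python
import random

class DecisionLearner:
    """Epsilon-greedy action selection with incremental value estimates:
    a minimal stand-in for a reinforcement decision system whose rules
    cannot be well initialized with a-priori knowledge."""
    def __init__(self, actions, epsilon=0.1, seed=0):
        self.q = {a: 0.0 for a in actions}   # running reward estimates
        self.n = {a: 0 for a in actions}     # visit counts
        self.epsilon = epsilon
        self.rng = random.Random(seed)

    def choose(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.q))  # explore
        return max(self.q, key=self.q.get)        # exploit best estimate

    def update(self, action, reward):
        # incremental mean: q += (r - q) / n
        self.n[action] += 1
        self.q[action] += (reward - self.q[action]) / self.n[action]
```

In the article's setting the reward would be some measure of scanning utility (e.g. how informative the traced edge turned out to be), which is exactly the environment-dependent part the abstract says cannot be fixed in advance.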
World models for mobile robots, as introduced in many projects, are mostly redundant regarding similar situations detected in different places. The present paper proposes a method for the dynamic generation of a minimal world model based on these redundancies. The technique is an extension of the qualitative topological world-modelling methods. As a central aspect, reliability regarding error tolerance and stability is emphasized. The proposed technique places very low demands on the kind and quality of the employed sensors as well as on the kinematic precision of the utilized mobile platform. Hard realtime constraints can be handled due to the low computational complexity. The principal discussions are supported by real-world experiments with the mobile robot.
ALICE
(1994)
Abstract: We calculate exact analytical expressions for O(alpha_s) 3-jet and O(alpha_s^2) 4-jet cross sections in polarized deep inelastic lepton-nucleon scattering. Introducing an invariant jet definition scheme, we present differential distributions of 3- and 4-jet cross sections in the basic kinematical variables x and W^2 as well as total jet cross sections, and show their dependence on the chosen spin-dependent (polarized) parton distributions. Noticeable differences in the predictions are found for the two extreme choices, i.e. a large negative sea-quark density or a large positive gluon density. Therefore, it may be possible to discriminate between different parametrizations of polarized parton densities, and hence between the different physical pictures of the proton spin underlying these parametrizations.
We propose and study a strongly coupled PDE-ODE system with tissue-dependent degenerate diffusion and haptotaxis that can serve as a model prototype for cancer cell invasion through the extracellular matrix. We prove the global existence of weak solutions and illustrate the model behaviour by numerical simulations for a two-dimensional setting.
We propose and study a strongly coupled PDE-ODE-ODE system modeling cancer cell invasion through a tissue network under the go-or-grow hypothesis asserting that cancer cells can either move or proliferate. Hence our setting features two interacting cell populations with their mutual transitions and involves tissue-dependent degenerate diffusion and haptotaxis for the moving subpopulation. The proliferating cells and the tissue evolution are characterized by way of ODEs for the respective densities. We prove the global existence of weak solutions and illustrate the model behaviour by numerical simulations in a two-dimensional setting.
A new method is used to investigate the tunneling between two weakly-linked Bose-Einstein condensates confined in double-well potential traps. The nonlinear interaction between the atoms in each well contributes to a finite chemical potential, which, with consideration of periodic instantons, leads to a remarkably high tunneling frequency. This result can be used to interpret the newly found Macroscopic Quantum Self Trapping (MQST) effect. Also a new kind of first-order crossover between different regions is predicted.
Conditional Compilation (CC) is frequently used as a variation mechanism in software product lines (SPLs). However, as an SPL evolves, the variable code realized by CC erodes in the sense that it becomes overly complex and difficult to understand and maintain. As a result, SPL productivity goes down and the expected advantages are put more and more at risk. To investigate this variability erosion and keep productivity at a sufficiently good level, in this paper we 1) investigate several erosion symptoms in an industrial SPL and 2) present a variability improvement process that includes two major improvement strategies. While one strategy is to optimize variable code within the scope of CC, the other is to transition from CC to a new variation mechanism called Parameterized Inclusion. Both improvement strategies can be conducted automatically, and the result of the CC optimization is provided. Related issues such as the applicability and cost of the improvement are also discussed.
As a Software Product Line (SPL) evolves with an increasing number of features and feature values, the feature correlations become extremely intricate, and the specifications of these correlations tend to be either incomplete or inconsistent with their realizations, causing misconfigurations in practice. In order to guide product configuration processes, we present a solution framework to recover complex feature correlations from existing product configurations. These correlations are further pruned automatically and validated by domain experts. During implementation, we use association mining techniques to automatically extract strong association rules as potential feature correlations. The approach is evaluated on a large-scale industrial SPL in the embedded-system domain, where we identify a large number of complex feature correlations.
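The association-mining step above can be sketched in a few lines: count how often features co-occur across configurations and keep rules whose support and confidence clear given thresholds. This is a pairwise, Apriori-flavoured simplification for illustration (real miners handle larger itemsets); thresholds and data are made up.

```python
from itertools import combinations
from collections import Counter

def mine_rules(configs, min_support=0.5, min_conf=0.9):
    """Extract strong pairwise association rules (lhs -> rhs) from product
    configurations, each given as a set of selected features.
    Returns tuples (lhs, rhs, support, confidence)."""
    n = len(configs)
    singles = Counter(f for c in configs for f in c)
    pairs = Counter(p for c in configs for p in combinations(sorted(c), 2))
    rules = []
    for (a, b), cnt in pairs.items():
        if cnt / n < min_support:  # pair too rare overall
            continue
        for lhs, rhs in ((a, b), (b, a)):
            conf = cnt / singles[lhs]  # P(rhs selected | lhs selected)
            if conf >= min_conf:
                rules.append((lhs, rhs, cnt / n, conf))
    return rules
```

A rule such as "X -> Y with confidence 1.0" would then be presented to domain experts as a candidate feature correlation, matching the pruning/validation step the abstract describes.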
These lecture notes give a completely self-contained introduction to the control theory of linear time-invariant systems. No prior knowledge is required apart from linear algebra and some basic familiarity with ordinary differential equations. Thus, the course is suited for students of mathematics in their second or third year, and for theoretically inclined engineering students. Because of its appealing simplicity and elegance, the behavioral approach has been adopted to a large extent. A short list of recommended textbooks on the subject has been added as a suggestion for further reading.
The theory of multidimensional systems is a relatively young field of research within systems theory; the first papers date from the 1970s. The main motivation for the study of multidimensional systems was the need to extend the theory of digital filters, which are applied in classical one-dimensional signal processing (time-dependent signals), to the field of image processing, i.e. to two-dimensional signals. The first part of the lecture therefore deals with scalar two-dimensional systems and is essentially restricted to the linear case. Two-dimensional filters are examined, along with their most important properties, causality and stability, as well as their state-space realizations, such as the models of Roesser and Fornasini-Marchesini. Parallels and differences to one-dimensional systems theory are emphasized. The second part of the lecture treats general higher-dimensional and multivariable systems. For these systems, the approach to systems theory founded by Jan Willems, the so-called behavioral approach, proves expedient. Fundamental ideas of this approach, as well as one of the most important methods for computing with polynomials in several variables, the theory of Gröbner bases, are presented.
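For readers unfamiliar with it, the Roesser model mentioned above splits the local state into a horizontal and a vertical component, each propagated along one spatial direction. The standard form (stated here from general systems-theory knowledge, not taken from the lecture notes themselves) is:

```latex
\begin{bmatrix} x^h(i+1,j) \\ x^v(i,j+1) \end{bmatrix}
=
\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}
\begin{bmatrix} x^h(i,j) \\ x^v(i,j) \end{bmatrix}
+
\begin{bmatrix} B_1 \\ B_2 \end{bmatrix} u(i,j),
\qquad
y(i,j) = \begin{bmatrix} C_1 & C_2 \end{bmatrix}
\begin{bmatrix} x^h(i,j) \\ x^v(i,j) \end{bmatrix} + D\,u(i,j)
```

Here $x^h$ advances with the row index $i$ and $x^v$ with the column index $j$, which is what makes causality and stability questions for 2-D filters genuinely different from the 1-D case.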
The paper focuses on the problem of trajectory planning of flexible redundant robot manipulators (FRM) in joint space. Compared to irredundant flexible manipulators, FRMs present additional possibilities in trajectory planning due to their kinematics redundancy. A trajectory planning method to minimize vibration of FRMs is presented based on Genetic Algorithms (GAs). Kinematics redundancy is integrated into the presented method as a planning variable. Quadrinomial and quintic polynomials are used to describe the segments which connect the initial, intermediate, and final points in joint space. The trajectory planning of FRMs is formulated as a problem of optimization with constraints. A planar FRM with three flexible links is used in simulation. A case study shows that the method is applicable.
Point-to-Point Trajectory Planning of Flexible Redundant Robot Manipulators Using Genetic Algorithms
(2001)
The paper focuses on the problem of point-to-point trajectory planning for flexible redundant robot manipulators (FRM) in joint space. Compared with irredundant flexible manipulators, an FRM possesses additional possibilities during point-to-point trajectory planning due to its kinematics redundancy. A trajectory planning method to minimize vibration and/or executing time of a point-to-point motion is presented for FRMs based on Genetic Algorithms (GAs). Kinematics redundancy is integrated into the presented method as planning variables. Quadrinomial and quintic polynomials are used to describe the segments that connect the initial, intermediate, and final points in joint space. The trajectory planning of FRMs is formulated as a problem of optimization with constraints. A planar FRM with three flexible links is used in simulation. Case studies show that the method is applicable.
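The polynomial segments used in the two abstracts above can be illustrated with a quintic joint-space segment that enforces zero velocity and acceleration at both endpoints — the kind of smooth primitive a GA-based planner strings between initial, intermediate, and final points. This is a generic minimum-jerk sketch, not the papers' specific parameterization.

```python
def quintic_segment(q0, q1, T):
    """Quintic joint-space segment from q0 to q1 over duration T, with
    zero velocity and acceleration at both ends (minimum-jerk shape)."""
    def q(t):
        s = t / T  # normalized time in [0, 1]
        # boundary conditions q(0)=q0, q(T)=q1, q'=q''=0 at both ends
        # give the classic shape 10 s^3 - 15 s^4 + 6 s^5
        return q0 + (q1 - q0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
    return q

seg = quintic_segment(0.0, 1.0, 2.0)  # one joint, 0 -> 1 rad in 2 s
```

In the GA formulation, the free quantities — intermediate joint points and segment durations, plus the redundant degrees of freedom — become the chromosome, and the fitness penalizes vibration and/or execution time.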
The vibration induced in a deformable object upon automatic handling by robot manipulators can often be bothersome. This paper presents a force/torque sensor-based method for handling deformable linear objects (DLOs) in a manner suitable to eliminate acute vibration. An adjustment-motion that can be attached to the end of an arbitrary end-effector trajectory is employed to eliminate vibration of deformable objects. Unlike model-based methods, the presented sensor-based method does not employ any information from previous motions. The adjustment-motion is generated automatically by analyzing data from a force/torque sensor mounted on the robot wrist. A template matching technique is used to find the matching point between the vibrational signal of the DLO and a template. Experiments are conducted to test the new method under various conditions. Results demonstrate the effectiveness of the sensor-based adjustment-motion.
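The template-matching step above amounts to sliding a short reference waveform along the measured force/torque signal and picking the best-fitting offset. A minimal sum-of-squared-differences sketch (the paper does not specify its matching criterion; SSD is one common choice):

```python
def match_template(signal, template):
    """Locate a template in a sampled vibration signal by minimizing the
    sum of squared differences over all alignments; returns best offset."""
    best_i, best_err = 0, float("inf")
    for i in range(len(signal) - len(template) + 1):
        err = sum((signal[i + j] - template[j]) ** 2
                  for j in range(len(template)))
        if err < best_err:
            best_i, best_err = i, err
    return best_i
```

The matched offset gives the phase of the DLO's oscillation, from which the timing of the counteracting adjustment-motion can be derived.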
Manipulating Deformable Linear Objects: Attachable Adjustment-Motions for Vibration Reduction
(2001)
This paper addresses the problem of handling deformable linear objects (DLOs) in a suitable way to avoid acute vibration. Different types of adjustment-motions that eliminate vibration of deformable objects and can be attached to the end of an arbitrary end-effector trajectory are presented. For describing the dynamics of deformable linear objects, the finite element method is used to derive the dynamic differential equations. A genetic algorithm is used to find the optimal adjustment-motion for each simulation example. Experiments are conducted to verify the presented manipulating method.
Manipulating Deformable Linear Objects: Model-Based Adjustment-Motion for Vibration Reduction
(2001)
This paper addresses the problem of handling deformable linear objects (DLOs) in a suitable way to avoid acute vibration. An adjustment-motion that eliminates vibration of DLOs and can be attached to the end of an arbitrary end-effector trajectory is presented, based on the concept of open-loop control. The presented adjustment-motion is a kind of agile end-effector motion with limited scope. To describe the dynamics of deformable linear objects, the finite element method is used to derive the dynamic differential equations. A genetic algorithm is used to find the optimal adjustment-motion for each simulation example. In contrast to previous approaches, the presented method can be treated as one of the manipulation skills and can be applied to different cases without major changes to the method.
Abstract: We present experimental and theoretical results of a detailed study of laser-induced continuum structures (LICS) in the photoionization continuum of helium out of the metastable state 2s^1 S_0. The continuum dressing with a 1064 nm laser couples the same region of the continuum to the 4s^1 S_0 state. The experimental data, presented for a range of intensities, show pronounced ionization suppression (by as much as 70% with respect to the far-from-resonance value) as well as enhancement, in a Beutler-Fano resonance profile. This ionization suppression is a clear indication of population trapping mediated by coupling to a continuum. We present experimental results demonstrating the effect of pulse delay upon the LICS, and the behavior of LICS for both weak and strong probe pulses. Simulations based upon numerical solution of the Schrödinger equation model the experimental results. The atomic parameters (Rabi frequencies and Stark shifts) are calculated using a simple model-potential method for the computation of the needed wavefunctions. The simulations of the LICS profiles are in excellent agreement with experiment. We also present an analytic formulation of pulsed LICS. We show that, in the case of a probe pulse shorter than the dressing one, the LICS profile is the convolution of the power spectrum of the probe pulse with the usual Fano profile of stationary LICS. We discuss some consequences of deviation from steady-state theory.
This article presents contributions in the field of path planning for industrial robots with 6 degrees of freedom. This work presents the results of our research in the last 4 years at the Institute for Process Control and Robotics at the University of Karlsruhe. The path planning approach we present works in an implicit and discretized C-space. Collisions are detected in the Cartesian workspace by a hierarchical distance computation. The method is based on the A* search algorithm and needs no essential off-line computation. A new optimal discretization method leads to smaller search spaces, thus speeding up the planning. For a further acceleration, the search was parallelized. With a static load distribution good speedups can be achieved. By extending the algorithm to a bidirectional search, the planner is able to automatically select the easier search direction. The new dynamic switching of start and goal leads finally to the multi-goal path planning, which is able to compute a collision-free path between a set of goal poses (e.g., spot welding points) while minimizing the total path length.
An interrupter for use in a daisy-chained VME bus interrupt system has been designed and implemented as an asynchronous sequential circuit. The concurrency of the processes posed a design problem that was solved by means of a systematic design procedure that uses Petri nets for specifying system and interrupter behaviour, and for deriving a primitive flow table. Classical design and additional measures to cope with non-fundamental-mode operation yielded a coded state-machine representation. This was implemented on a GAL 22V10, chosen for its hazard-preventing structure and for rapid prototyping in student laboratories.
Phase velocities of surface acoustic waves in several boron nitride films were investigated by Brillouin light scattering. In the case of films with predominantly hexagonal crystal structure, grown under conditions close to the nucleation threshold of cubic BN, four independent elastic constants have been determined from the dispersion of the Rayleigh and the first Sezawa mode. The large elastic anisotropy of up to c11/c33 = 0.1 is attributed to a pronounced texture with the c-axes of the crystallites parallel to the film plane. In the case of cubic BN films the dispersion of the Rayleigh wave provides evidence for the existence of a more compliant layer at the substrate-film interface. The observed broadening of the Rayleigh mode is identified to be caused by the film morphology.
Hexagonal BN films have been deposited by rf-magnetron sputtering with simultaneous ion plating. The elastic properties of the films grown on silicon substrates under identical coating conditions have been determined by Brillouin light scattering from thermally excited surface phonons. Four of the five independent elastic constants of the deposited material are found to be c11 = 65 GPa, c13 = 7 GPa, c33 = 92 GPa and c44 = 53 GPa, exhibiting an elastic anisotropy c11/c33 of 0.7. The Young's modulus determined with load indentation is distinctly larger than the corresponding value taken from Brillouin light scattering. This discrepancy is attributed to the specific morphology of the material with nanocrystallites embedded in an amorphous matrix.
We present a convenient notation for positive/negative-conditional equations. The idea is to merge rules specifying the same function by using case-, if-, match-, and let-expressions. Based on the presented macro-rule-construct, positive/negative-conditional equational specifications can be written on a higher level. A rewrite system translates the macro-rule-constructs into positive/negative-conditional equations.
We present an inference system for clausal theorem proving w.r.t. various kinds of inductive validity in theories specified by constructor-based positive/negative-conditional equations. The reduction relation defined by such equations has to be (ground) confluent, but need not be terminating. Our constructor-based approach is well-suited for inductive theorem proving in the presence of partially defined functions. The proposed inference system provides explicit induction hypotheses and can be instantiated with various wellfounded induction orderings. While emphasizing a well-structured, clear design of the inference system, our fundamental design goal is user-orientation and practical usefulness rather than theoretical elegance. The resulting inference system is comprehensive and relatively powerful, but requires a sophisticated concept of proof guidance, which is not treated in this paper. This research was supported by the Deutsche Forschungsgemeinschaft, SFB 314 (D4-Projekt).
We study the combination of the following already known ideas for showing confluence of unconditional or conditional term rewriting systems into practically more useful confluence criteria for conditional systems: our syntactic separation into constructor and non-constructor symbols; Huet's introduction and Toyama's generalization of parallel closedness for non-noetherian unconditional systems; the use of shallow confluence for proving confluence of noetherian and non-noetherian conditional systems; the idea that certain kinds of limited confluence can be assumed for checking the fulfilledness or infeasibility of the conditions of conditional critical pairs; and the idea that (when termination is given) only prime superpositions have to be considered and certain normalization restrictions can be applied for the substitutions fulfilling the conditions of conditional critical pairs. Besides combining and improving already known methods, we present the following new ideas and results: we strengthen the criterion for overlay joinable noetherian systems, and, by using the expressiveness of our syntactic separation into constructor and non-constructor symbols, we are able to present criteria for level confluence that are not actually criteria for shallow confluence, and also to weaken the severe requirement of normality (stiffened with left-linearity) in the criteria for shallow confluence of noetherian and non-noetherian conditional systems to the easily satisfied requirement of quasi-normality. Finally, the whole paper also gives a practically useful overview of the syntactic means for showing confluence of conditional term rewriting systems.
In this article we discuss requirements from credit-worthiness assessment and how they can be met with the technique of case-based reasoning. Within a general approach to case-based system development, a learning procedure for optimizing decision costs is described in detail. This procedure is evaluated empirically, on the basis of real customer data, with the case-based development tool INRECA. Finally, the preconditions for the use of case-based systems for credit-worthiness assessment are presented and their usefulness is discussed.
Plan abstraction is one way to reduce the effort of searching for a plan that solves a concrete problem. A concrete world with a problem statement is mapped onto an abstract world, and the abstract problem is then solved in the abstract world. By mapping the abstract solution back onto a concrete solution, a solution for the concrete problem is obtained. Since the number of operations needed to solve the abstract problem is smaller, and the abstract states and operators have a less complex description, the effort of searching for a concrete solution is reduced.
Case-based reasoning has gained increasing importance in recent years for practical use in real application domains. This paper first presents the general procedure and the various subtasks of case-based reasoning. It then discusses the characteristic properties of an application domain and describes, for the concrete task of credit-worthiness assessment, the realization of a case-based approach in the financial world.
Using existing planning approaches to solve real application problems usually leads quickly to the insight that a given problem is solvable in principle, but that the exponentially growing search space only permits the treatment of relatively small tasks. Human planning experts, however, are able to shrink the search space decisively for complex problems through abstraction and the use of known precedent cases as heuristics, and thus arrive at an acceptable solution even for difficult tasks. In this paper, using process planning as an example, we present a system that employs abstraction and case-based techniques to control the inference process of a nonlinear, hierarchical planning system, thereby reducing the complexity of the overall task to be solved.
We describe a hybrid architecture supporting planning for machining workpieces. The architecture is built around CAPlan, a partial-order nonlinear planner that represents the plan already generated and allows external control decisions made by special-purpose programs or by the user. To make planning more efficient, the domain is hierarchically modelled. Based on this hierarchical representation, a case-based control component has been realized that allows the incremental acquisition of control knowledge by storing solved problems and reusing them in similar situations.
We describe a hybrid case-based reasoning system supporting process planning for machining workpieces. It integrates specialized domain-dependent reasoners, a feature-based CAD system, and domain-independent planning. The overall architecture is built on top of CAPlan, a partial-order nonlinear planner. To use episodic problem-solving knowledge both for optimizing plan execution costs and for minimizing search, the case-based control component CAPlan/CbC has been realized, which allows the incremental acquisition and reuse of strategic problem-solving experience by storing solved problems as cases and reusing them in similar situations. For effective retrieval of cases, CAPlan/CbC combines domain-independent and domain-specific retrieval mechanisms that are based on the hierarchical domain model and problem representation.
While most approaches to similarity assessment are oblivious of knowledge and goals, there is ample evidence that these elements of problem solving play an important role in similarity judgements. This paper is concerned with an approach for integrating assessment of similarity into a framework of problem solving that embodies central notions of problem solving like goals, knowledge and learning.
Contrary to symbolic learning approaches, which represent a learned concept explicitly, case-based approaches describe concepts implicitly by a pair (CB; sim), i.e. by a measure of similarity sim and a set CB of cases. This poses the question if there are any differences concerning the learning power of the two approaches. In this article we will study the relationship between the case base, the measure of similarity, and the target concept of the learning process. To do so, we transform a simple symbolic learning algorithm (the version space algorithm) into an equivalent case- based variant. The achieved results strengthen the hypothesis of the equivalence of the learning power of symbolic and case-based methods and show the interdependency between the measure used by a case-based algorithm and the target concept.
Im Bereich der Expertensysteme ist das Problemlösen auf der Basis von bekannten Fallbeispielen ein derzeit sehr aktuelles Thema. Auch für Diagnoseaufgaben gewinnt der fallbasierte Ansatz immer mehr an Bedeutung. In diesem Papier soll der im Rahmen des Moltke -Projektes1 an der Universität Kaiserslautern entwickelte fallbasierte Problemlöser Patdex/22 vorgestellt werden. Ein erster Prototyp, Patdex/1, wurde bereits 1988 entwickelt.
Forschungsprojekte im Bereich des fallbasierten Schliessens in den USA, die Verfügbarkeit kommerzieller fallbasierter Shells, sowie erste Forschungsergebnisse initialer deutscher Projekte haben auch in Deutschland verstärkte Aktivitäten auf dem Gebiet des fallbasierten Schliessens ausgelöst. In diesem Artikel sollen daher Projekte, die sich als Schwerpunkt oder als Teilaspekt mit fallbasierten Aspekten beschäftigen, einer breiteren Öffentlichkeit kurz vorgestellt werden.
Patdex is an expert system which carries out case-based reasoning for the fault diagnosis of complex machines. It is integrated in the Moltke workbench for technical diagnosis, which was developed at the university of Kaiserslautern over the past years, Moltke contains other parts as well, in particular a model-based approach; in Patdex where essentially the heuristic features are located. The use of cases also plays an important role for knowledge acquisition. In this paper we describe Patdex from a principal point of view and embed its main concepts into a theoretical framework.
Formalismen und Anschauung
(1999)
In der Philosophie ist es selbstverständlich, daß Autoren, die Erkenntnisse früherer Philosophen weitergeben oder kommentieren, die Originalliteratur kennen und sich in ihrer Argumentation explizit auf bestimmte Stellen in den Originaldarstellungen beziehen. In der Technik dagegen ist es allgemein akzeptierte Praxis, daß Autoren von Lehrbüchern, in denen Erkenntnisse früherer Forscher dargestellt oder kommentiert werden, nicht die Originaldarstellungen zugrunde legen, sondern sich mit den Darstellungen in der Sekundärliteratur begnügen. Man denke an die Erkenntnisse von Boole oder Maxwell, die in sehr vielen Lehrbüchern der Digitaltechnik bzw. der theoretischen Elektrotechnik vermittelt werden, ohne daß die Autoren dieser Lehrbücher auf die Originalschriften von Boole oder Maxwell Bezug nehmen. Dagegen wird man wohl kaum ein Buch über Erkenntnisse von Aristoteles oder Kant finden, dessen Autor sich nicht explizit auf bestimmte Stellen in den Schriften dieser Philosophen bezieht.
Die systemtheoretische Begründung für die Einführung des Zustandsbegriffs findet man im Mosaik-stein "Der Zustandsbegriff in der Systemtheorie". Während sich die dortige Betrachtung sowohl mitkontinuierlichen als auch mit diskreten Systemen befaßt, wird hier die Betrachtung auf diskrete Sy-steme beschränkt.
Umgangssprachlich wurde das Wort Daten schon gebraucht, lange bevor der Computer erfundenwurdeund die AbkürzungEDV für "Elektronische Datenverarbeitung" in die Alltagssprache gelangte.So sagte beispielsweise der Steuerberater zu seinem Klienten: "Bevor ich Ihre Steuererklärung fertigmachen kann, brauche ich von Ihnen noch ein paar Daten." Oder der Straßenbaureferent einer Stadtschrieb an den Oberbürgermeister: "Für die Entscheidung, welche der beiden in Frage stehenden Stra-ßen vorrangig ausgebaut werden soll, müssen wir noch eine Datenerhebung durchführen." Bei diesenDaten ging es zwar oft um Zahlen - Geldbeträge, Anzahl der Kinder, Anzahl der Beschäftigungsmo-nate, gezählte Autos - , aber eine Gleichsetzung von Daten mit Zahlen wäre falsch. Zum einen wärenZahlen ohne mitgelieferte Wörter wie Monatseinkommen, Kinderzahl u.ä. für den Steuerberater nutz-los, zum anderen will das Finanzamt u.a. auch den Arbeitgeber des Steuerpflichtigen wissen, und dazumuß eine Adresse angegeben werden, aber keine Zahl.
Für die Systemtheorie ist der Begriff Zustand ein sehr zentraler Begriff. Das Wort "Zustand" wird um-gangssprachlich recht häufig verwendet, aber wenn man die Leute fragen würde, was sie denn meinen,wenn sie das Wort Zustand benützen, dann würde man sicher nicht die präzise Definition bekommen,die man für die Systemtheorie braucht.
Sokrates und das Nichtwissen
(1997)
Programs are linguistic structures which contain identifications of individuals: memory locations, data types, classes, objects, relations, functions etc. must be identified selectively or definingly. The first part of the essay which deals with identification by showing and designating is rather short, whereas the remaining part dealing with paraphrasing is rather long. The reason is that for an identification by showing or designating no linguistic compositions are needed, in contrast to the case of identification by paraphrasing. The different types of functional paraphrasing are covered here in great detail because the concept of functional paraphrasing is the foundation of functional programming. The author had to decide whether to cover this subject here or in his essay Purpose versus Form of Programs where the concept of functional programming is presented. Finally, the author came to the conclusion that this essay on identification is the more appropriate place.
In system theory, state is a key concept. Here, the word state refers to condition, as in the example Since he went into the hospital, his state of health worsened daily. This colloquial meaning was the starting point for defining the concept of state in system theory. System theory describes the relationship between input X and output Y, that is, between influence and reaction. In system theory, a system is something that shows an observable behavior that may be influenced. Therefore, apart from the system, there must be something else influencing and observing the reaction of the system. This is called the environment of the system.
In diesem Aufsatz geht es um eine Klassifikation von Programmen nach zwei orthogonalen Kriterien.Programm und Software werden dabei nicht als Synonyme angesehen; Programm sein wird hiergleichgesetzt mit ausführbar sein, d.h. etwas ist dann und nur dann ein Programm, wenn man die Fragebeantworten kann, was es denn heißen solle, dieses Etwas werde ausgeführt. Es gibt durchaus Softwa-regebilde, bezüglich derer diese Frage keinen Sinn hat und die demzufolge auch keine Programme sind - beispielsweise eine Funktions - oder eine Klassenbibliothek.Klassifikation ist von Nutzen, wenn sie Vielfalt überschaubarer macht - die Vielfalt der Schüler einergroßen Schule wird überschaubarer, wenn die Schüler "klassifiziert" sind, d.h. wenn sie in ihren Klas-senzimmern sitzen. Die im folgenden vorgestellte Klassifikation soll die Vielfalt von Programmenüberschaubarer machen.
Bei der Programmierung geht es in vielfältiger Form um Identifikation von Individuen: Speicherorte,Datentypen, Werte, Klassen, Objekte, Funktionen u.ä. müssen definierend oder selektierend identifiziert werden.Die Ausführungen zur Identifikation durch Zeigen oder Nennen sind verhältnismäßig kurz gehalten,wogegen der Identifikation durch Umschreiben sehr viel Raum gewidmet ist. Dies hat seinen Grunddarin, daß man zum Zeigen oder Nennen keine strukturierten Sprachformen benötigt, wohl aber zumUmschreiben. Daß die Betrachtungen der unterschiedlichen Formen funktionaler Umschreibungen soausführlich gehalten sind, geschah im Hinblick auf ihre Bedeutung für die Begriffswelt der funktionalen Programmierung. Man hätte zwar die Formen funktionaler Umschreibungen auch im Mosaikstein "Programmzweck versus Programmform" im Kontext des dort dargestellten Konzepts funktionaler Programme behandeln können, aber der Autor meint, daß der vorliegende Aufsatz der angemessenerePlatz dafür sei.
One of the problems of autonomous mobile systems is the continuous tracking of position and orientation. In most cases, this problem is solved by dead reckoning, based on measurement of wheel rotations or step counts and step width. Unfortunately dead reckoning leads to accumulation of drift errors and is very sensitive against slippery. In this paper an algorithm for tracking position and orientation is presented being nearly independent from odometry and its problems with slippery. To achieve this results, a rotating range-finder is used, delivering scans of the environmental structure. The properties of this structure are used to match the scans from different locations in order to find their translational and rotational displacement. For this purpose derivatives of range-finder scans are calculated which can be used to find position and orientation by crosscorrelation.
A map for an autonomous mobile robot (AMR) in an indoor environment for the purpose ofcontinuous position and orientation estimation is discussed. Unlike many other approaches, this map is not based on geometrical primitives like lines and polygons. An algorithm is shown , where the sensordata of a laser range finder can be used to establish this map without a geometrical interpretation of the data. This is done by converting single laser radar scans to statistical representations of the environ-ment, so that a crosscorrelation of an actu al converted scan and this representative results into the actual position and orientation in a global coordinate system. The map itsel f is build of representative scansfor the positions where the AMR has been, so that it is able to find its position and orientation by c omparing the actual scan with a scan stored in the map.
We tested the GYROSTAR ENV-05S. This device is a sensor for angular velocity. There- fore the orientation must be calculated by integration of the angular velocity over time. The devices output is a voltage proportional to the angular velocity and relative to a reference. The test where done to find out under which conditions it is possible to use this device for estimation of orientation.
Sudakov's typical marginals, random linear functionals and a conditional central limit theorem
(1997)
V.N. Sudakov [Sud78] proved that the one-dimensional marginals of a highdimensional second order measure are close to each other in most directions. Extending this and a related result in the context of projection pursuit of P. Diaconis and D. Freedman [Dia84], we give for a probability measure P and a random (a.s.) linear functional F on a Hilbert space simple sufficient conditions under which most of the one-dimensional images of P under F are close to their canonical mixture which turns out to be almost a mixed normal distribution. Using the concept of approximate conditioning we deduce a conditional central limit theorem (theorem 3) for random averages of triangular arrays of random variables which satisfy only fairly weak asymptotic orthogonality conditions.
Bekanntlich gibt es keinen befriedigenden unendlich dimensionalen Ersatz für das Lebesgue-Mass. Andererseits lassen sich viele Techniken klassischer Analysis auch auf unendlich dimensionale Situationen übertragen. Eine Möglichkeit hierzu gibt die Theorie differenzierbarer Masse. Man definiert Richtungsableitungen für Masse ähnlich wie für Funktionen. Eines der zentralen Beispiele ist das Wiener-Mass. Stochastische Integration bezüglich der Brownschen Bewegung, insbesondere das Skorokhod-Integral ergeben sich in natürlicher Weise durch diesen Ansatz und auch die Grundideen des MalliavinKalküls lassen sich in diesem Rahmen einfach erläutern. Die Vorträge geben die meisten Beweise.
Starting from the uniqueness question for mixtures of distributions this review centers around the question under which formally weaker assumptions one can prove the existence of SPLIFs, in other words perfect statistics and tests. We mention a couple of positive and negative results which complement the basic contribution of David Blackwell in 1980. Typically the answers depend on the choice of the set theoretic axioms and on the particular concepts of measurability.
We study a model for learning periodic signals in recurrent neural networks proposed by Doya and Yoshizawa [7] that can be considered as a model for temporal pattern memory in animal motoric systems. A network receives an external oscillatory input and adjusts its weights so that this signal can be reproduced approximately as the network output after some time. We use tools from adaptive control theory to derive an algorithm for weight matrices with special structure. If the input is generated by a network of the same structure the algorithm converges globally and does not exhibit the deficiencies of the back-propagation based approach of Doya and Yoshizawa under a persistency of excitation condition. This simple algorithm can also be used for open loop identification under quite restructive assumptions. The persistency of excitation condition cannot be proven even for the matrices with special structure but for a 3d system. For higher dimensional systems we give connections to the theory of linear time-varying systems where this condition is generically true (under assumption which are also needed in the time-invariant case). However, we cannot show that the linearized system related to the nonlinear neural network fulfills these generic assumptions.
The edge enhancement property of a nonlinear diffusion equation with a suitable expression for the diffusivity is an important feature for image processing. We present an algorithm to solve this equation in a wavelet basis and discuss its one dimensional version in some detail. Sample calculations demonstrate principle effects and treat in particular the case of highly noise perturbed signals. The results are discussed with respect to performance, efficiency, choice of parameters and are illustrated by a large number of figures. Finally, a comparison with a Fourier method and a finite volume method is performed.
In spite of its lack of theoretical justification, nonlinear diffusion filtering has become a powerful image enhancement tool in the recent years. The goal of the present paper is to provide a mathematical foundation for nonlinear diffusion filtering as a scale-space transformation which is flexible enough to simplify images without loosing the capability of enhancing edges. By stuying the Lyapunow functional, it is shown that nonlinear diffusion reduces Lp norms and central moments and increases the entropy of images. The proposed anisotropic class utilizes a diffusion tensor which may be adapted to the image structure. It permits existence, uniqueness and regularity results, the solution depends continuously on the initial image, and it fulfills an extremum principle. All considerations include linear and certain nonlinear isotropic models and apply to m-dimensional vector-valued images. The results are juxtaposed to linear and morphological scale-spaces.
A way to derive consistently kinetic models for vehicular traffic from microscopic follow the leader models is presented. The obtained class of kinetic equations is investigated. Explicit examples for kinetic models are developed with a particular emphasis on obtaining models, that give realistic results. For space homogeneous traffic flow situations numerical examples are given including stationary distributions and fundamental diagrams.
In this paper we analyze the vibrations of nonlinear structures by means of the novel approach of isogeometric finite elements. The fundamental idea of isogeometric finite elements is to apply the same functions, namely B-Splines and NURBS (Non-Uniform Rational B-Splines), for describing the geometry and for representing the numerical solution. In case of linear vibrational analysis, this approach has already been shown to possess substantial advantages over classical finite elements, and we extend it here to a nonlinear framework based on the harmonic balance principle.
As application, the straight nonlinear Euler-Bernoulli beam is used, and overall, it is demonstrated that isogeometric finite elements with B-Splines in combination with the harmonic balance method are a powerful means for the analysis of nonlinear structural vibrations. In particular, the smoother k-method provides higher accuracy than the p-method for isogeometric nonlinear vibration analysis.
In this paper we present a method for nonlinear frequency response analysis of mechanical vibrations of 3-dimensional solid structures.
For computing nonlinear frequency response to periodic excitations, we employ the well-established harmonic balance method.
A fundamental aspect for allowing a large-scale application of the method is model order reduction of the discretized equation of motion. Therefore we propose the utilization of a modal projection method enhanced with modal derivatives, providing second-order information.
For an efficient spatial discretization of continuum mechanics nonlinear partial differential equations, including large deformations and hyperelastic material laws, we use the isogeometric finite element method, which has already been shown to possess advantages over classical finite element discretizations in terms of higher accuracy of numerical approximations in the fields of linear vibration and static large deformation analysis.
With several computational examples, we demonstrate the applicability and accuracy of the modal derivative reduction method for nonlinear static computations and vibration analysis.
Thus, the presented method opens a promising perspective on application of nonlinear frequency analysis to large-scale industrial problems.
Im Rahmen dieser Arbeit beschreiben wir die wesentlichen Merkmale der CAPlan-Architektur, die die interaktive Bearbeitung von Planungsproblemen ermöglichen. Anhand des SNLP-Algorithmus, der der Architektur zugrunde liegt, werden die im Laufe eines Planungsprozesses auftretenden Entscheidungspunkte charakterisiert. Mit Hilfe von frei definierbaren Kontrollkomponenten kann das Verhalten an diesen Entscheidungspunkte festgelegt werden, wodurch eine flexible Steuerung des Planungsprozesses ermöglicht wird. Planungsziele und -entscheidungen werden in einem gerichteten azyklischen Graphen verwaltet, der ihre kausalen Abhängigkeiten widerspiegelt. Im Gegensatz zu einem Stack, der typischerweise zur Verwaltung von Entscheidungen eingesetzt wird, erlaubt die graphbasierte Repräsentation die flexible Rücknahme einer Entscheidung, ohne alle zeitlich danach getroffenen Entscheidungen ebenfalls zurücknehmen zu müssen.
Problem specifications for classical planners based on a STRIPS-like representation typically consist of an initial situation and a partially defined goal state. Hierarchical planning approaches, e.g., Hierarchical Task Network (HTN) Planning, have not only richer representations for actions but also for the representation of planning problems. The latter are defined by giving an initial state and an initial task network in which the goals can be ordered with respect to each other. However, studies with a specification of the domain of process planning for the plan-space planner CAPlan (an extension of SNLP) have shown that even without hierarchical domain representation typical properties called goal orderings can be identified in this domain that allow more efficient and correct case retrieval strategies for the case-based planner CAPlan/CbC. Motivated by that, this report describes an extension of the classical problem specifications for plan-space planners like SNLP and descendants. These extended problem specifications allow to define a partial order on the planning goals which can interpreted as an order in which the solution plan should achieve the goals. These goal ordering can theoretically and empirically be shown to improve planning performance not only for case-based but also for generative planning. As a second but different way we show how goal orderings can be used to address the control problem of partial order planners. These improvements can be best understood with a refinement of Barrett's and Weld's extended taxonomy of subgoal collections.
Real world planning tasks like manufacturing process planning often don't allow to formalize all of the relevant knowledge. Especially, preferences between alternatives are hard to acquire but have high influence on the efficiency of the planning process and the quality of the solution. We describe the essential features of the CAPlan planning architecture that supports cooperative problem solving to narrow the gap caused by absent preference and control knowledge. The architecture combines an SNLP-like base planner with mechanisms for explict representation and maintenance of dependencies between planning decisions. The flexible control interface of CAPlan allows a combination of autonomous and interactive planning in which a user can participate in the problem solving process. Especially, the rejection of arbitrary decisions by a user or dependency-directed backtracking mechanisms are supported by CAPlan.
The magnetic anisotropy of Co/Cu~001! films has been investigated by the magneto-optical Kerr effect, both in the pseudomorphic growth regime and above the critical thickness where strain relaxation sets in. A clear correlation between the onset of strain relaxation as measured by means of reflection high-energy electron diffraction and changes of the magnetic anisotropy has been found.
The efficient numerical treatment of the Boltzmann equation is a very important task in many fields of application. Most of the practically relevant numerical schemes are based on the simulation of large particle systems that approximate the evolution of the distribution function described by the Boltzmann equation. In particular, stochastic particle systems play an important role in the construction of various numerical algorithms.
Software development is becoming a more and more distributed process, which urgently needs supporting tools in the field of configuration management, software process/w orkflow management, communication and problem tracking. In this paper we present a new distributed software configuration management framework COMAND. It offers high availabilit y through replication and a mechanism to easily change and adapt the project structure to new business needs. To better understand and formally prove some properties of COMAND, we have modeled it in a formal technique based on distributed graph transformations. This formalism provides an intuitive rule-based description technique mainly for the dynamic behavior of the system on an abstract level. We use it here to model the replication subsystem.
If \(A\) generates a bounded cosine function on a Banach space \(X\) then the negative square root \(B\) of \(A\) generates a holomorphic semigroup, and this semigroup is the conjugate potential transform of the cosine function. This connection is studied in detail, and it is used for a characterization of cosine function generators in terms of growth conditions on the semigroup generated by \(B\). This characterization relies on new results on the inversion of the vector-valued conjugate potential transform.
\(C^0\)-scalar-type spectrality criterions for operators \(A\), whose resolvent set contains the negative reals, are provided. The criterions are given in terms of growth conditions on the resolvent of \(A\) and the semi-group generated by \(A\).These criterions characterize scalar-type operators on the Banach space \(X\), if and only if \(X\) has no subspace isomorphic to the space of complex null-sequences.
In the Banach space co there exists a continuous function of bounded semivariation which does not correspond to a countably additive vector measure. This result is in contrast to the scalar case, and it has consequences for the characterization of scalar-type operators. Besides this negative result we introduce the notion of functions of unconditionally bounded variation which are exactly the generators of countably additive vector measures.
The thermal equilibrium state of a bipolar, isothermal quantum fluid confined to a bounded domain \(\Omega\subset I\!\!R^d,d=1,2\) or \( d=3\) is the minimizer of the total energy \({\mathcal E}_{\epsilon\lambda}\); \({\mathcal E}_{\epsilon\lambda}\) involves the squares of the scaled Planck's constant \(\epsilon\) and the scaled minimal Debye length \(\lambda\). In applications one frequently has \(\lambda^2\ll 1\). In these cases the zero-space-charge approximation is rigorously justified. As \(\lambda \to 0 \), the particle densities converge to the minimizer of a limiting quantum zero-space-charge functional exactly in those cases where the doping profile satisfies some compatibility conditions. Under natural additional assumptions on the internal energies one gets an differential-algebraic system for the limiting \((\lambda=0)\) particle densities, namely the quantum zero-space-charge model. The analysis of the subsequent limit \(\epsilon \to 0\) exhibits the importance of quantum gaps. The semiclassical zero-space-charge model is, for small \(\epsilon\), a reasonable approximation of the quantum model if and only if the quantum gap vanishes. The simultaneous limit \(\epsilon =\lambda \to 0\) is analyzed.
Caloric Restriction (CR) is the only intervention proven to retard aging and extend maximum lifespan in mammalians. A possible mechanism for the beneficial effects of CR is that the mild metabolic stress associated with CR induces cells to express stress proteins that increase their resistance to disease processes. In this article we therefore model the retardation of aging by dietary restriction within a mathematical framework. The resulting model comprises food intake, stress proteins, body growth and survival. We successfully applied our model to growth and survival data of mice exposed to different food levels.
Universal Shortest Paths
(2010)
We introduce the universal shortest path problem (Univ-SPP) which generalizes both - classical and new - shortest path problems. Starting with the definition of the even more general universal combinatorial optimization problem (Univ-COP), we show that a variety of objective functions for general combinatorial problems can be modeled if all feasible solutions have the same cardinality. Since this assumption is, in general, not satisfied when considering shortest paths, we give two alternative definitions for Univ-SPP, one based on a sequence of cardinality contrained subproblems, the other using an auxiliary construction to establish uniform length for all paths between source and sink. Both alternatives are shown to be (strongly) NP-hard and they can be formulated as quadratic integer or mixed integer linear programs. On graphs with specific assumptions on edge costs and path lengths, the second version of Univ-SPP can be solved as classical sum shortest path problem.
It is well known that the greedy algorithm solves matroid base problems for all linear cost functions and is, in fact, correct if and only if the underlying combinatorial structure of the problem is a matroid. Moreover, the algorithm can be applied to problems with sum, bottleneck, algebraic sum or \(k\)-sum objective functions.
The shortest path problem in which the \((s,t)\)-paths \(P\) of a given digraph \(G =(V,E)\) are compared with respect to the sum of their edge costs is one of the best known problems in combinatorial optimization. The paper is concerned with a number of variations of this problem having different objective functions like bottleneck, balanced, minimum deviation, algebraic sum, \(k\)-sum and \(k\)-max objectives, \((k_1, k_2)-max, (k_1, k_2)\)-balanced and several types of trimmed-mean objectives. We give a survey on existing algorithms and propose a general model for those problems not yet treated in literature. The latter is based on the solution of resource constrained shortest path problems with equality constraints which can be solved in pseudo-polynomial time if the given graph is acyclic and the number of resources is fixed. In our setting, however, these problems can be solved in strongly polynomial time. Combining this with known results on \(k\)-sum and \(k\)-max optimization for general combinatorial problems, we obtain strongly polynomial algorithms for a variety of path problems on acyclic and general digraphs.
Laser-induced thermotherapy (LITT) is an established minimally invasive percutaneous technique of tumor ablation. Nevertheless, there is a need to predict the effect of laser applications and optimizing irradiation planning in LITT. Optical attributes (absorption, scattering) change due to thermal denaturation. The work presents the possibility to identify these temperature dependent parameters from given temperature measurements via an optimal control problem. The solvability of the optimal control problem is analyzed and results of successful implementations are shown.
In this paper we study a particular class of \(n\)-node recurrent neural networks (RNNs).In the \(3\)-node case we use monotone dynamical systems theory to show,for a well-defined set of parameters, that,generically, every orbit of the RNN is asymptotic to a periodic orbit.Then, within the usual 'learning' context of NeuralNetworks, we investigate whether RNNs of this class can adapt their internal parameters soas to 'learn' and then replicate autonomously certain external periodic signals.Our learning algorithm is similar to identification algorithms in adaptivecontrol theory. The main feature of the adaptation algorithm is that global exponential convergenceof parameters is guaranteed. We also obtain partial convergence results in the \(n\)-node case.
Convex Operators in Vector Optimization: Directional Derivatives and the Cone of Decrease Directions
(1999)
The paper is devoted to the investigation of directional derivatives and the cone of decrease directions for convex operators on Banach spaces. We prove a condition for the existence of directional derivatives which does not assume regularity of the ordering cone K. This result is then used to prove that for continuous convex operators the cone of decrease directions can be represented in terms of the directional derivatices . Decrease directions are those for which the directional derivative lies in the negative interior of the ordering cone K. Finally, we show that the continuity of the convex operator can be replaced by its K-boundedness.