Using the data provided by the Gutachterausschuss (valuation committee) of the city of Kaiserslautern, we investigate which factors influence the market value of a developed property. From these findings, a formula as simple as possible is to be derived that yields an estimate of the market value and takes into account the purchase prices realized in the past. Multiple linear regression lends itself to this task. The theoretical foundations are not discussed in detail here; they can be found in any book on mathematical statistics, or in [1]. The analysis of the data by and large follows the approach described by Angelika Schwarz in [1]. Her results cannot be transferred directly, however, since the properties considered there were undeveloped. Since the statistical evaluation of large data sets entails an immense computational effort, the use of professional statistical software is indispensable. The program S-Plus 2.0 (PC version for Windows) was available. All computations and all graphics in this report were produced with S-Plus.
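As a minimal sketch of the multiple linear regression approach described above (the actual analysis was carried out in S-Plus; the feature names and numbers below are hypothetical, not from the report), a least-squares fit of a linear price model could look like this:

```python
import numpy as np

# Hypothetical features of sold properties: living area (m^2),
# plot area (m^2), building age (years); target: realized price (EUR).
X = np.array([[120.0, 400.0, 10.0],
              [ 95.0, 350.0, 30.0],
              [150.0, 500.0,  5.0],
              [ 80.0, 300.0, 40.0],
              [130.0, 450.0, 15.0]])
y = np.array([320000.0, 210000.0, 410000.0, 160000.0, 340000.0])

# Add an intercept column and solve the least-squares problem.
A = np.column_stack([np.ones(len(X)), X])
coef, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)

def estimate(area, plot, age):
    """Estimated market value from the fitted linear model."""
    return coef @ np.array([1.0, area, plot, age])

print(estimate(110.0, 380.0, 20.0))
```

The fitted coefficients then play the role of the "simple formula" sought in the report: each coefficient quantifies the marginal effect of one factor on the estimated market value.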
We consider the problem of evacuating several regions due to river flooding, where sufficient time is given to plan ahead. To ensure a smooth evacuation procedure, our model includes the decision which regions to assign to which shelter, and when evacuation orders should be issued, such that roads do not become congested.
Due to uncertainty in weather forecasts, several possible scenarios are considered simultaneously in a robust optimization framework. To solve the resulting integer program, we apply a tabu search algorithm based on decomposing the problem into more tractable subproblems. Computational experiments on random instances and on an instance based on data from Kulmbach, Germany, show considerable improvement compared to an MIP solver provided with a strong starting solution.
Zeitreihen und Modalanalyse
(1987)
This work is to be understood as one part of the large project at the University of Kaiserslautern which, under the name Technomathematik, strives for the urgently needed dialogue between engineering and mathematics. The main guide was the book by Natke, "Einführung in Theorie und Praxis der Zeitreihen- und Modalanalyse"; the work describes the essential ideas of indirect system identification used there as well as the probability-theoretic and physical-technical background.
In programming, the identification of individuals arises in many forms: storage locations, data types, values, classes, objects, functions, and the like must be identified by definition or by selection. The remarks on identification by pointing or naming are kept relatively short, whereas identification by description is given a great deal of space. The reason is that no structured language forms are needed for pointing or naming, but they are needed for description. That the discussion of the various forms of functional descriptions is so extensive is due to their importance for the conceptual world of functional programming. The forms of functional descriptions could also have been treated in the mosaic piece "Programmzweck versus Programmform" in the context of the concept of functional programs presented there, but the author believes that the present essay is the more appropriate place for them.
We present a convenient notation for positive/negative-conditional equations. The idea is to merge rules specifying the same function by using case-, if-, match-, and let-expressions. Based on the presented macro-rule-construct, positive/negative-conditional equational specifications can be written on a higher level. A rewrite system translates the macro-rule-constructs into positive/negative-conditional equations.
The Internet has fallen prey to its most successful service, the World-Wide Web. The networks do not keep up with the demands incurred by the huge number of Web surfers. Thus, it takes longer and longer to obtain the information one wants to access via the World-Wide Web. Many solutions to the problem of network congestion have been developed in distributed systems research in general and distributed file and database systems in particular. The introduction of caching and replication strategies has proven to help in many situations, and therefore these techniques are also applied to the WWW. Although most problems and associated solutions are known, some circumstances are different with the Web, forcing the adaptation of known strategies. This paper gives an overview of these differences and of currently deployed, developed, and evaluated solutions.
We have developed a middleware framework for workgroup environments that can support distributed software development and a variety of other application domains requiring document management and change management for distributed projects. The framework enables hypermedia-based integration of arbitrary legacy and new information resources available via a range of protocols, not necessarily known in advance to us as the general framework developers nor even to the environment instance designers. The repositories in which such information resides may be dispersed across the Internet and/or an organizational intranet. The framework also permits a range of client models for user and tool interaction, and applies an extensible suite of collaboration services, including but not limited to multi-participant workflow and coordination, to their information retrievals and updates. That is, the framework is interposed between clients, services and repositories - thus "middleware". We explain how our framework makes it easy to realize a comprehensive collection of workgroup and workflow features we culled from a requirements survey conducted by NASA.
Abstract: Winding number transitions from quantum to classical behavior are studied in the case of the 1+1 dimensional Mottola-Wipf model with the space coordinate on a circle for exploring the possibility of obtaining transitions of second order. The model is also studied as a prototype theory which demonstrates the procedure of such investigations. In the model at hand we find that even on a circle the transitions remain those of first order.
Abstract: Following our earlier investigations we examine the quantum-classical winding number transition in the Abelian-Higgs system. It is demonstrated that the winding number transition in this system is of the smooth second order type in the full range of parameter space. Comparison of the action of classical vortices with that of the sphaleron supports our finding.
In recent years several computational systems and techniques for theorem proving by analogy have been developed. The obvious practical question, however, as to whether and when to use analogy has been badly neglected in these developments. This paper addresses this question, identifies situations where analogy is useful, and discusses the merits of theorem proving by analogy in these situations. The results can be generalized to other domains.
When particle methods are used to solve the Boltzmann equation for rarefied gases numerically, huge differences in the total number of particles per cell arise in realistic streaming problems. To overcome the resulting numerical difficulties, the application of a weighted particle concept is well suited. The underlying idea is to use different particle masses in different cells, depending on the macroscopic density of the gas. Discrepancy estimates and numerical results are given.
Given a finite set of points in the plane and a forbidden region R, we want to find a point X outside the interior of R such that the weighted sum of distances to all given points is minimized. This location problem is a variant of the well-known Weber problem, in which we measure distance by polyhedral gauges and allow each weight to be positive or negative. The unit ball of a polyhedral gauge may be any convex polyhedron containing the origin. This large class of distance functions allows very general (practical) settings, such as asymmetry, to be modeled. Each given point may have its own gauge, and the forbidden region R enables us to include negative information in the model. Additionally, the use of negative and positive weights allows us to model the level of attraction or dislike of a new facility. Polynomial algorithms and structural properties for this global optimization problem (d.c. objective function and a non-convex feasible set), based on combinatorial and geometrical methods, are presented.
We introduce a class of models for time series of counts which include INGARCH-type models as well as log linear models for conditionally Poisson distributed data. For those processes, we formulate simple conditions for stationarity and weak dependence with a geometric rate. The coupling argument used in the proof serves as a role model for a similar treatment of integer-valued time series models based on other types of thinning operations.
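As a small illustration of the INGARCH-type models mentioned above (the parameter values are hypothetical and chosen so that the stationarity condition alpha + beta < 1 holds), a conditionally Poisson INGARCH(1,1) process can be simulated as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

# INGARCH(1,1): lambda_t = omega + alpha * X_{t-1} + beta * lambda_{t-1},
# with X_t | past ~ Poisson(lambda_t).  Parameters are hypothetical;
# alpha + beta < 1 yields a stationary, weakly dependent process.
omega, alpha, beta = 1.0, 0.3, 0.4
n = 1000
lam = np.empty(n)
X = np.empty(n, dtype=int)
lam[0] = omega / (1 - alpha - beta)   # start at the stationary mean
X[0] = rng.poisson(lam[0])
for t in range(1, n):
    lam[t] = omega + alpha * X[t - 1] + beta * lam[t - 1]
    X[t] = rng.poisson(lam[t])

print(X.mean())  # close to the stationary mean omega / (1 - alpha - beta)
```

The recursion makes the conditional mean depend on both the last count and the last intensity, which is exactly the GARCH-like feedback structure the class of models above generalizes.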
By means of the limit and jump relations of classical potential theory the framework of a wavelet approach on a regular surface is established. The properties of a multiresolution analysis are verified, and a tree algorithm for fast computation is developed based on numerical integration. As applications of the wavelet approach some numerical examples are presented, including the zoom-in property as well as the detection of high frequency perturbations. At the end we discuss a fast multiscale representation of the solution of (exterior) Dirichlet's or Neumann's boundary-value problem corresponding to regular surfaces.
This work is dedicated to the wavelet modelling of regional and temporal variations of the Earth's gravitational potential observed by GRACE. In the first part, all required mathematical tools and methods involving spherical wavelets are introduced. We then apply our method to monthly GRACE gravity fields. A strong seasonal signal can be identified, which is restricted to areas where large-scale redistributions of continental water mass are expected. This assumption is analyzed and verified by comparing the time series of regionally obtained wavelet coefficients of the gravitational signal derived from hydrology models with that of the gravitational potential observed by GRACE. The results are in good agreement with previous studies and illustrate that wavelets are an appropriate tool to investigate regional time-variable effects in the gravitational field.
In this paper we introduce a multiscale technique for the analysis of deformation phenomena of the Earth. Classically, the basis functions in use are globally defined and of polynomial character. In consequence, only a global analysis of deformations is possible, so that, for example, the water load of an artificial reservoir can hardly be modelled in that way. Up to now, a local analysis could only be realized by assuming the investigated region to be flat. In what follows we propose a local analysis based on tools (Navier scaling functions and wavelets) that take the (spherical) surface of the Earth into account. Our approach, in particular, enables us to perform a zooming-in procedure. In fact, the concept of Navier wavelets is formulated in such a way that subregions with larger or smaller data density can be modelled with a correspondingly higher or lower resolution of the model.
Wavelets on closed surfaces in Euclidean space R3 are introduced starting from a scale discrete wavelet transform for potentials harmonic down to a spherical boundary. Essential tools for approximation are integration formulas relating an integral over the sphere to suitable linear combinations of functional values (resp. normal derivatives) on the closed surface under consideration. A scale discrete version of multiresolution is described for potential functions harmonic outside the closed surface and regular at infinity. Furthermore, an exact fully discrete wavelet approximation is developed in case of band-limited wavelets. Finally, the role of wavelets is discussed in three problems, namely (i) the representation of a function on a closed surface from discretely given data, (ii) the (discrete) solution of the exterior Dirichlet problem, and (iii) the (discrete) solution of the exterior Neumann problem.
A multiscale method is introduced using spherical (vector) wavelets for the computation of the earth's magnetic field within source regions of ionospheric and magnetospheric currents. The considerations are essentially based on two geomathematical keystones, namely (i) the Mie representation of solenoidal vector fields in terms of toroidal and poloidal parts and (ii) the Helmholtz decomposition of spherical (tangential) vector fields. Vector wavelets are shown to provide adequate tools for multiscale geomagnetic modelling in form of a multiresolution analysis, thereby completely circumventing the numerical obstacles caused by vector spherical harmonics. The applicability and efficiency of the multiresolution technique is tested with real satellite data.
In this paper, the reflection and refraction of a plane wave at an interface between two half-spaces composed of triclinic crystalline material is considered. It is shown that, due to the incidence of a plane wave, three types of waves, namely quasi-P (qP), quasi-SV (qSV) and quasi-SH (qSH), will be generated, governed by the propagation condition involving the acoustic tensor. A simple procedure is presented for the calculation of all three phase velocities of the quasi waves. It is taken into account that the direction of particle motion is neither parallel nor perpendicular to the direction of propagation. Relations are established between the directions of motion and propagation, respectively. The expressions for the reflection and refraction coefficients of qP, qSV and qSH waves are obtained. Numerical results for the reflection and refraction coefficients are presented for different types of anisotropic media and different types of incident waves. Graphical representations are given for incident qP waves; for incident qSV and qSH waves, numerical data are presented in two tables.
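The role of the acoustic (Christoffel) tensor in determining the three phase velocities can be sketched generically: for a propagation direction n, the eigenvalues of Gamma_ik = c_ijkl n_j n_l / rho are the squared phase velocities. The sketch below uses a hypothetical isotropic medium for checkability; for a triclinic crystal only the tensor entries change, not the eigenvalue computation:

```python
import numpy as np

# Hypothetical isotropic medium: Lamé constants (Pa) and density (kg/m^3).
lam, mu, rho = 60e9, 30e9, 2700.0
n = np.array([1.0, 0.0, 0.0])        # unit propagation direction

# Christoffel (acoustic) tensor Gamma_ik = c_ijkl n_j n_l / rho.
# For an isotropic medium this reduces to ((lam+mu) n⊗n + mu I) / rho.
Gamma = ((lam + mu) * np.outer(n, n) + mu * np.eye(3)) / rho

# The eigenvalues are the squared phase velocities of the three waves.
v = np.sqrt(np.linalg.eigvalsh(Gamma))
print(sorted(v))   # two equal shear speeds and one faster P speed
```

In an anisotropic (e.g. triclinic) medium the three eigenvalues are generally distinct, giving the qP, qSV and qSH speeds, and the eigenvectors give particle-motion directions that are neither parallel nor perpendicular to n.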
Wannier-Stark states for semiconductor superlattices in strong static fields, where interband Landau-Zener tunneling cannot be neglected, are rigorously calculated. The lifetime of these metastable states is found to show multiscale oscillations as a function of the static field, which is explained by an interaction with above-barrier resonances. An equation expressing the absorption spectrum of semiconductor superlattices in terms of the resonance Wannier-Stark states is obtained and used to calculate the absorption spectrum in the region of high static fields.
In this work, we discuss the resonance states of a quantum particle in a periodic potential plus a static force. Originally this problem was formulated for a crystalline electron subject to a static electric field and is known nowadays as the Wannier-Stark problem. We describe a novel approach to the Wannier-Stark problem developed in recent years. This approach allows one to compute the complex energy spectrum of a Wannier-Stark system as the poles of a rigorously constructed scattering matrix and, in this sense, solves the Wannier-Stark problem without any approximation. The suggested method is very efficient from the numerical point of view and has proven to be a powerful analytic tool for Wannier-Stark resonances appearing in different physical systems such as optical or semiconductor superlattices.
In this report we give an overview of the development of our new Waldmeister prover for equational theories. We elaborate a systematic stepwise design process, starting with the inference system for unfailing Knuth-Bendix completion and ending up with an implementation which avoids the main diseases today's provers suffer from: overindulgence in time and space. Our design process is based on a logical three-level system model consisting of basic operations for inference step execution, aggregated inference machine, and overall control strategy. Careful analysis of the inference system for unfailing completion has revealed the crucial points responsible for time and space consumption. For the low level of our model, we introduce specialized data structures and algorithms speeding up the running system and cutting it down in size - both by one order of magnitude compared with standard techniques. Flexible control of the mid-level aggregation inside the resulting prover is made possible by a corresponding set of parameters. Experimental analysis shows that this flexibility is a point of high importance. We go on with some implementation guidelines we have found valuable in the field of deduction. The resulting new prover shows that our design approach is promising. We compare our system's throughput with that of an established system and finally demonstrate how two very hard problems could be solved by Waldmeister.
With the rapid spread of CAx techniques in the German automotive industry, the need for better integration of CAx systems into the process chains and for control of the product information flows is growing. Against this background, a shift in CAx system architectures from closed, monolithic systems to openly integrated ones has become apparent in recent years. In the following, this process and its implications for users and for system vendors are analyzed. Initiated by the German automotive industry, the project ANICA (Analysis of Interfaces of various CAD/CAM-Systems) was started. In this project, the interfaces to the system kernels of several CAx vendors are examined, and a concept for cooperating CAx systems in the automotive industry is developed.
This paper presents the systematic synthesis of a fairly complex digital circuit and its CPLD implementation as an assemblage of communicating asynchronous sequential circuits. The example, a VMEbus controller, was chosen because it has to control concurrent processes and to arbitrate conflicting requests.
Vigenere-Verschlüsselung
(1999)
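The Vigenère cipher named in the title above can be sketched in a few lines; this is a generic illustration of the classical scheme, not code from the work itself:

```python
def vigenere(text, key, decrypt=False):
    """Vigenère cipher on A-Z; non-letters pass through unchanged."""
    result = []
    j = 0  # key index, advanced only on letters
    for ch in text.upper():
        if "A" <= ch <= "Z":
            shift = ord(key[j % len(key)].upper()) - ord("A")
            if decrypt:
                shift = -shift
            result.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
            j += 1
        else:
            result.append(ch)
    return "".join(result)

# Classic textbook example: plaintext ATTACKATDAWN with key LEMON.
c = vigenere("ATTACKATDAWN", "LEMON")
print(c)  # -> LXFOPVEFRNHR
assert vigenere(c, "LEMON", decrypt=True) == "ATTACKATDAWN"
```

Each plaintext letter is shifted by the corresponding key letter, so the cipher is a sequence of interleaved Caesar ciphers with period equal to the key length, which is exactly what classical attacks (Kasiski, Friedman) exploit.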
The methods of Inductive Logic Programming (ILP) [Mug93] have the task of learning, from a set of positive examples E+, a set of negative examples E-, and background knowledge B, a logic program P consisting of a set of definite clauses C : l0 ← l1, …, ln. Since the hypothesis space for Horn logic is infinite, many methods restrict the hypothesis language to a finite one. It is also often attempted to restrict the hypothesis language in such a way that only programs for which consistency is decidable can be learned. Another motivation for restricting the hypothesis language is that existing knowledge about the target program should be exploited. Thus, for certain applications function-free hypothesis clauses are sufficient, or it is known that the target program is functional.
Verbale Sacherschließung
(1998)
The script gives an introduction to the history, terminology, and methods of verbal subject indexing. Methods established in the German-speaking and English-speaking world, such as the "Regeln für den Schlagwortkatalog (RSWK)" and the "Library of Congress Subject Headings (LCSH)", are described in detail, and aspects of cooperation and of suitability for online catalogues are discussed. Characteristics as well as advantages and disadvantages of automatic indexing are presented using the method "Maschinelle Indexierung zur verbesserten Literaturerschließung in Online Systemen (MILOS)".
The mathematical modelling of problems in science and engineering often leads to partial differential equations in time and space with boundary and initial conditions. The boundary value problems can be written as extremal problems (principle of minimal potential energy), as variational equations (principle of virtual power), or as classical boundary value problems. There are connections concerning existence and uniqueness results between these formulations, which will be investigated using the powerful tools of functional analysis. The first part of the lecture is devoted to the analysis of linear elliptic boundary value problems given in a variational form. The second part deals with the numerical approximation of the solutions of the variational problems. Galerkin methods such as FEM and BEM are the main tools. The h-version will be discussed, and an error analysis will be done. Examples, especially from elasticity theory, demonstrate the methods.
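As a toy illustration of the Galerkin/FEM approach described in the second part, consider the 1D model problem -u'' = f on (0,1) with homogeneous Dirichlet conditions (a hypothetical example, not taken from the lecture), discretized with piecewise linear elements:

```python
import numpy as np

# 1D model problem -u'' = f on (0,1), u(0) = u(1) = 0, with f = 1,
# discretized by piecewise linear finite elements (the h-version).
n = 8                      # number of elements
h = 1.0 / n
nodes = np.linspace(0.0, 1.0, n + 1)

# Assemble the tridiagonal stiffness matrix and the load vector
# for the interior nodes (hat-function basis).
A = np.zeros((n - 1, n - 1))
b = np.full(n - 1, h)      # integral of f = 1 against each hat function
for i in range(n - 1):
    A[i, i] = 2.0 / h
    if i + 1 < n - 1:
        A[i, i + 1] = A[i + 1, i] = -1.0 / h

u = np.linalg.solve(A, b)  # Galerkin solution at the interior nodes

# Exact solution u(x) = x(1-x)/2; with exact load integration the
# linear FEM solution is nodally exact for this 1D problem.
exact = nodes[1:-1] * (1 - nodes[1:-1]) / 2
print(np.max(np.abs(u - exact)))
```

The same pattern (weak form, basis assembly, linear solve) carries over to the elliptic boundary value problems treated in the lecture; only the bilinear form and basis functions change.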
The shortest path problem, in which the \((s,t)\)-paths \(P\) of a given digraph \(G =(V,E)\) are compared with respect to the sum of their edge costs, is one of the best known problems in combinatorial optimization. The paper is concerned with a number of variations of this problem having different objective functions like bottleneck, balanced, minimum deviation, algebraic sum, \(k\)-sum and \(k\)-max objectives, \((k_1, k_2)\)-max, \((k_1, k_2)\)-balanced and several types of trimmed-mean objectives. We give a survey of existing algorithms and propose a general model for those problems not yet treated in the literature. The latter is based on the solution of resource constrained shortest path problems with equality constraints, which can be solved in pseudo-polynomial time if the given graph is acyclic and the number of resources is fixed. In our setting, however, these problems can be solved in strongly polynomial time. Combining this with known results on \(k\)-sum and \(k\)-max optimization for general combinatorial problems, we obtain strongly polynomial algorithms for a variety of path problems on acyclic and general digraphs.
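As a rough sketch of the kind of dynamic program the resource-constrained model above relies on, the following solves a shortest path problem on an acyclic digraph with a single resource and an equality constraint (the graph and all numbers are hypothetical):

```python
from math import inf

# Acyclic digraph: edge -> (cost, resource consumption).
edges = {
    ("s", "a"): (2, 1), ("s", "b"): (1, 2),
    ("a", "t"): (2, 2), ("b", "t"): (4, 1),
    ("a", "b"): (1, 1),
}
topo = ["s", "a", "b", "t"]   # a topological order of the nodes
R = 3                          # required total resource (equality)

# dp[v][r] = min cost of an s-v path consuming exactly r resource units.
dp = {v: [inf] * (R + 1) for v in topo}
dp["s"][0] = 0
for v in topo:                              # relax edges in topological order
    for (u, w), (c, r) in edges.items():
        if u == v:
            for used in range(R + 1 - r):
                if dp[v][used] + c < dp[w][used + r]:
                    dp[w][used + r] = dp[v][used] + c

print(dp["t"][R])  # cheapest s-t path using exactly R resource units
```

With a fixed number of resources the state space is pseudo-polynomial in the resource bound, matching the complexity statement in the abstract; here the answer is the path s-a-t with cost 4.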
Conditional compilation (CC) is frequently used as a variation mechanism in software product lines (SPLs). However, as an SPL evolves, the variable code realized by CC erodes, in the sense that it becomes overly complex and difficult to understand and maintain. As a result, SPL productivity goes down, putting the expected advantages more and more at risk. To investigate this variability erosion and keep productivity at a sufficiently good level, in this paper we 1) investigate several erosion symptoms in an industrial SPL and 2) present a variability improvement process that includes two major improvement strategies. While one strategy is to optimize variable code within the scope of CC, the other is to transition from CC to a new variation mechanism called Parameterized Inclusion. Both improvement strategies can be conducted automatically, and the result of the CC optimization is provided. Related issues such as the applicability and cost of the improvement are also discussed.
Value Preserving Strategies and a General Framework for Local Approaches to Optimal Portfolios
(1999)
We present some new general results on the existence and form of value preserving portfolio strategies in a general semimartingale setting. The concept of value preservation will be derived via a mean-variance argument. It will also be embedded into a framework for local approaches to the problem of portfolio optimisation.
We present a distributed system, Dott, for approximately solving the Traveling Salesman Problem (TSP) based on the Teamwork method. So-called experts and specialists work independently and in parallel for given time periods. For TSP, specialists are tour construction algorithms, and experts use modified genetic algorithms in which, after each application of a genetic operator, the resulting tour is locally optimized before it is added to the population. After a given time period the work of each expert and specialist is judged by a referee. A new start population, including selected individuals from each expert and specialist, is generated by the supervisor, based on the judgments of the referees. Our system is able to find better tours than each of the experts or specialists working alone. Also, results comparable to those of single runs can be found much faster by a team.
Rules are an important knowledge representation formalism in constructive problem solving. On the other hand, object orientation is an essential key technology for maintaining large knowledge bases as well as software applications. Trying to take advantage of the benefits of both paradigms, we integrated Prolog and Smalltalk to build a common base architecture for problem solving. This approach has proven to be useful in the development of two knowledge-based systems for planning and configuration design (CAPlan and Idax). Both applications use Prolog as an efficient computational source for the evaluation of knowledge represented as rules.
Retrieval of cases is one important step within the case-based reasoning paradigm. We propose an improvement of this stage in the process model for finding the most similar cases with an average effort of O(log2 n), where n is the number of cases. The basic idea of the algorithm is to use the heterogeneity of the search space for a density-based structuring and to employ this precomputed structure, a k-d tree, for efficient case retrieval according to a given similarity measure sim. In addition to illustrating the basic idea, we present the experimental results of a comparison of four different k-d tree generating strategies and introduce the new notion of virtual bounds, which significantly reduces the retrieval effort from a more pragmatic perspective. The presented approach is fully implemented within the Patdex system, a case-based reasoning system for diagnostic applications in engineering domains.
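To make the retrieval idea concrete, here is a generic k-d tree nearest-neighbour sketch (not the Patdex implementation; the similarity measure is assumed to be inverse Euclidean distance, and the median-split strategy is only one of the generation strategies compared in the paper):

```python
def build(points, depth=0):
    """Recursively build a k-d tree over points (lists of equal length)."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2                       # median split
    return {"point": points[mid], "axis": axis,
            "left": build(points[:mid], depth + 1),
            "right": build(points[mid + 1:], depth + 1)}

def nearest(node, query, best=None):
    """Return the stored point closest to query in Euclidean distance."""
    if node is None:
        return best
    def dist2(p):
        return sum((a - b) ** 2 for a, b in zip(p, query))
    if best is None or dist2(node["point"]) < dist2(best):
        best = node["point"]
    diff = query[node["axis"]] - node["point"][node["axis"]]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    best = nearest(near, query, best)
    # Visit the far subtree only if the splitting plane is closer than
    # the current best match (the bounds test the paper refines).
    if diff ** 2 < dist2(best):
        best = nearest(far, query, best)
    return best

tree = build([[2, 3], [5, 4], [9, 6], [4, 7], [8, 1], [7, 2]])
print(nearest(tree, [9, 2]))  # -> [8, 1]
```

The average O(log n) behaviour comes from the bounds test pruning most subtrees; the virtual bounds of the paper tighten exactly this test to prune even more aggressively.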
We present the adaptation process in a CBR application for decision support in the domain of industrial supervision. Our approach uses explanations to approximate relations between a problem description and its solution, and the adaptation process is guided by these explanations (a more detailed presentation is given in [4]).
The paper explores the role of artificial intelligence techniques in the development of an enhanced software project management tool, which takes account of the emerging requirement for support systems to address the increasing trend towards distributed multi-platform software development projects. In addressing these aims this research devised a novel architecture and framework for use as the basis of an intelligent assistance system for use by software project managers, in the planning and managing of a software project. This paper also describes the construction of a prototype system to implement this architecture and the results of a series of user trials on this prototype system.
Requirements engineering (RE) is a necessary part of the software development process, as it helps customers and designers identify necessary system requirements. If these stakeholders are separated by distance, we argue that a distributed groupware environment supporting a cooperative requirements engineering process must be supplied that allows them to negotiate software requirements. Such a groupware environment must support aspects of joint work relevant to requirements negotiation: synchronous and asynchronous collaboration, telepresence, and teledata. It should also add explicit support for a structured RE process, which includes the team's ability to discuss multiple perspectives during requirements acquisition and traceability. We chose the TeamWave software platform as an environment that supplied the basic collaboration capabilities, and tailored it to fit the specific needs of RE.
To prove difficult theorems in a mathematical field requires substantial knowledge of that field. In this paper a frame-based knowledge representation formalism is presented which supports a conceptual representation and to a large extent guarantees the consistency of the built-up knowledge bases. We define a semantics of the representation by giving a translation into the underlying logic.
We tested the GYROSTAR ENV-05S. This device is a sensor for angular velocity; the orientation must therefore be calculated by integrating the angular velocity over time. The device's output is a voltage proportional to the angular velocity and relative to a reference. The tests were done to find out under which conditions it is possible to use this device for the estimation of orientation.
Abstract: The calculation of absorption cross sections for minimal scalars in supergravity backgrounds is an important aspect of the investigation of AdS/CFT correspondence and requires a matching of appropriate wave functions. The low energy case has attracted particular attention. In the following the dependence of the cross section on the matching point is investigated. It is shown that the low energy limit is independent of the matching point and hence exhibits universality. In the high energy limit the independence is not maintained, but the result is believed to possess the correct energy dependence.
Universal Shortest Paths
(2010)
We introduce the universal shortest path problem (Univ-SPP), which generalizes both classical and new shortest path problems. Starting with the definition of the even more general universal combinatorial optimization problem (Univ-COP), we show that a variety of objective functions for general combinatorial problems can be modeled if all feasible solutions have the same cardinality. Since this assumption is, in general, not satisfied when considering shortest paths, we give two alternative definitions for Univ-SPP, one based on a sequence of cardinality-constrained subproblems, the other using an auxiliary construction to establish uniform length for all paths between source and sink. Both alternatives are shown to be (strongly) NP-hard, and they can be formulated as quadratic integer or mixed integer linear programs. On graphs with specific assumptions on edge costs and path lengths, the second version of Univ-SPP can be solved as a classical sum shortest path problem.
We have computed ensembles of complete spectra of the staggered Dirac operator using four-dimensional SU(2) gauge fields, both in the quenched approximation and with dynamical fermions. To identify universal features in the Dirac spectrum, we compare the lattice data with predictions from chiral random matrix theory for the distribution of the low-lying eigenvalues. Good agreement is found up to some limiting energy, the so-called Thouless energy, above which random matrix theory no longer applies. We determine the dependence of the Thouless energy on the simulation parameters using the scalar susceptibility and the number variance.
An asymptotic preserving numerical scheme (with respect to diffusion scalings) for a linear transport equation is investigated. The scheme is adopted from a class of recently developed schemes. Stability is proven uniformly in the mean free path under a CFL-type condition which turns into a parabolic CFL condition in the diffusion limit.
In this project the formation of vortices in the flow of a gas around a corner is to be investigated numerically. Various numerical methods are to be tested and the results compared with experimental data. Furthermore, it is to be examined how well these methods can be vectorized, since complicated two-dimensional and even simple three-dimensional problems of fluid dynamics cannot be solved with acceptable effort on today's common general-purpose computers. The numerical computations are carried out on the CYBER 205 in Karlsruhe.
In this project, the formation of vortices in a gas flow around a corner is to be investigated numerically. Various numerical methods are to be tested and the results compared with experimental data. Furthermore, it will be examined how well these methods can be vectorized, since more complicated two-dimensional and even simple three-dimensional problems of fluid dynamics cannot be solved with acceptable computing time on today's general-purpose computers, especially when, as at the University of Kaiserslautern, only a relatively slow machine (Siemens 7551/7561) is available. The numerical computations are carried out on the CYBER 205 in Karlsruhe.
This paper addresses two modes of analogical reasoning. The first mode is based on the explicit representation of the justification for the analogical inference. The second mode is based on the representation of typical instances by concept structures. The two kinds of analogical inferences rely on different forms of relevance knowledge that cause non-monotonicity. While the uncertainty and non-monotonicity of analogical inferences is not questioned, a semantic characterization of analogical reasoning has not been given yet. We introduce a minimal model semantics for analogical inference with typical instances.
Abstract: A Born-Infeld theory describing a D2-brane coupled to a 4-form RR field strength is considered, and the general solutions of the static and Euclidean time equations are derived and discussed. The period of the bounce solutions is shown to allow a consideration of tunneling and quantum-classical transitions in the sphaleron region. The order of such transitions, depending on the strength of the RR field strength, is determined. A criterion is then derived to confirm these findings.
This work presents a framework for the computation of complex geometries containing intersections of multiple patches with Reissner-Mindlin shell elements. The main objective is to provide an isogeometric finite element implementation which requires neither drilling rotation stabilization nor user interaction to quantify the number of rotational degrees of freedom for every node. For this purpose, the following set of methods is presented. Control points with corresponding physical location are assigned to one common node for the finite element solution. A nodal basis system in every control point is defined, which ensures an exact interpolation of the director vector throughout the whole domain. A distinction criterion for the automatic quantification of rotational degrees of freedom for every node is presented. An isogeometric Reissner-Mindlin shell formulation is enhanced to handle geometries with kinks and to allow for arbitrary intersections of patches. The parametrization of adjacent patches along the interface has to be conforming. The shell formulation is derived from the continuum theory and uses a rotational update scheme for the current director vector. The nonlinear kinematics allow the computation of large deformations and large rotations. Two concepts for the description of rotations are presented. The first one uses an interpolation which is commonly used in standard Lagrange-based shell element formulations. The second scheme uses a more elaborate concept proposed by the authors in prior work, which increases the accuracy for arbitrarily curved geometries. Numerical examples show the high accuracy and robustness of both concepts. The applicability of the proposed framework is demonstrated.
In the paper we discuss the transition from kinetic theory to macroscopic fluid equations, where the macroscopic equations are defined as asymptotic limits of a kinetic equation. This relation can be used to derive computationally efficient domain decomposition schemes for the simulation of rarefied gas flows close to the continuum limit. Moreover, we present some basic ideas for the derivation of kinetically induced numerical schemes for macroscopic equations, namely kinetic schemes for general conservation laws as well as Lattice-Boltzmann methods for the incompressible Navier-Stokes equations.
The paper focuses on the problem of trajectory planning of flexible redundant robot manipulators (FRMs) in joint space. Compared to non-redundant flexible manipulators, FRMs present additional possibilities in trajectory planning due to their kinematic redundancy. A trajectory planning method to minimize the vibration of FRMs is presented based on Genetic Algorithms (GAs). Kinematic redundancy is integrated into the presented method as a planning variable. Quadrinomial and quintic polynomials are used to describe the segments which connect the initial, intermediate, and final points in joint space. The trajectory planning of FRMs is formulated as a constrained optimization problem. A planar FRM with three flexible links is used in simulation. A case study shows that the method is applicable.
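As a rough illustration of the GA-based optimization step, a minimal real-coded genetic algorithm minimizing a one-dimensional stand-in cost function; the operators and parameters below are illustrative assumptions, not the paper's method:

```python
import random

def ga_minimize(cost, bounds, pop=30, gens=60, seed=1):
    # Minimal real-coded GA: tournament selection, midpoint crossover,
    # Gaussian mutation, elitist survivor selection. A sketch only.
    rng = random.Random(seed)
    lo, hi = bounds
    P = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        Q = []
        for _ in range(pop):
            a = min(rng.sample(P, 3), key=cost)   # tournament winner 1
            b = min(rng.sample(P, 3), key=cost)   # tournament winner 2
            child = 0.5 * (a + b) + rng.gauss(0.0, 0.1 * (hi - lo))
            Q.append(min(max(child, lo), hi))     # clamp to bounds
        P = sorted(P + Q, key=cost)[:pop]         # keep the best (elitism)
    return P[0]

# Hypothetical scalar stand-in for a vibration cost over one planning variable.
best = ga_minimize(lambda x: (x - 0.3) ** 2, (-1.0, 1.0))
print(best)
```

In the paper's setting the decision variables would instead be the coefficients of the joint-space polynomial segments, with the vibration measure as the cost and the kinematic redundancy as an additional planning variable.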
Toying with Jordan matrices
(1996)
We present first steps towards fully automated deduction that merely requires the user to submit proof problems and pick up results. Essentially, this necessitates the automation of the crucial step in the use of a deduction system, namely choosing and configuring an appropriate search-guiding heuristic. Furthermore, we motivate why learning capabilities are pivotal for satisfactory performance. The infrastructure for automating both the selection of a heuristic and the integration of learning is provided in the form of an environment embedding the "core" deduction system. We have conducted a case study in connection with a deduction system based on condensed detachment. Our experiments with the fully automated deduction system AutoCoDe have produced remarkable results. We substantiate AutoCoDe's encouraging achievements with a comparison with the renowned theorem prover Otter. AutoCoDe outperforms Otter even when assuming very favorable conditions for Otter.
In order to improve the quality of software systems and to set up a more effective process for their development, many attempts have been made in the field of software engineering. Reuse of existing knowledge is seen as a promising way to solve the outstanding problems in this field. In previous work we have integrated the design pattern concept with the formal design language SDL, resulting in a certain kind of pattern formalization. For the domain of communication systems we have also developed a pool of SDL patterns with an accompanying process model for pattern application. In this paper we present an extension that combines the SDL pattern approach with the experience base concept. This extension supports a systematic method for empirical evaluation and continuous improvement of the SDL pattern approach. Thereby the experience base serves as a repository necessary for effective reuse of the captured knowledge. A comprehensive usage scenario is described which shows the advantages of the combined approach. To demonstrate its feasibility, first results of a research case study are given.
Although several systematic analyses of existing approaches to adaptation have been published recently, a general formal adaptation framework is still missing. This paper presents a step in the direction of developing such a formal model of transformational adaptation. The model is based on the notion of the quality of a solution to a problem, where quality is meant in a more general sense and can also denote some kind of appropriateness, utility, or degree of correctness. Adaptation knowledge is then defined in terms of functions transforming one case into a successor case. The notion of quality provides us with a semantics for adaptation knowledge and allows us to define terms like soundness, correctness and completeness. In this view, adaptation (and even the whole CBR process) appears to be a special instance of an optimization problem.
This paper focuses on the issues involved when multiple mobile agents interact in multiagent systems. The application is an intelligent agent market place, where buyer and seller agents cooperate and compete to process sales transactions for their owners. The market place manager acts as a facilitator, giving necessary information to agents and managing communication between them, and also as a mediator, proposing solutions to agents or stopping them from getting into infinite loops of bargaining back and forth. The buyer and seller agents range from using hardcoded logic to rule-based inferencing in their negotiation strategies. However, these agents must support some communication skills using KQML or FIPA-ACL. Thus, in contrast with other approaches to multiagent negotiation, we introduce an explicit mediator (the market place manager) into the negotiation, and we propose a negotiation strategy based on dependence theory [1], implemented by our best buyers and best sellers.
Several topological necessary conditions for smooth stabilization in the large have been obtained. In particular, if a smooth single-input nonlinear system is smoothly stabilizable in the large at some point of a connected component of its equilibrium set, then this connected component must be an unknotted, unbounded curve.
This paper presents a wavelet analysis of temporal and spatial variations of the Earth's gravitational potential based on tensor product wavelets. The time-space wavelet concept is realized by combining Legendre wavelets for the time domain and spherical wavelets for the space domain. In consequence, a multiresolution analysis for both temporal and spatial resolution is formulated within a unified concept. The method is then numerically realized, first using synthetically generated data and finally several real data sets.
In this paper a known orthonormal system of time- and space-dependent functions, that were derived out of the Cauchy-Navier equation for elastodynamic phenomena, is used to construct reproducing kernel Hilbert spaces. After choosing one of the spaces the corresponding kernel is used to define a function system that serves as a basis for a spline space. We show that under certain conditions there exists a unique interpolating or approximating, respectively, spline in this space with respect to given samples of an unknown function. The name "spline" here refers to its property of minimising a norm among all interpolating functions. Moreover, a convergence theorem and an error estimate relative to the point grid density are derived. As numerical example we investigate the propagation of seismic waves.
Abstract: We analyze the above-threshold behavior of a mirrorless parametric oscillator based on resonantly enhanced four wave mixing in a coherently driven dense atomic vapor. It is shown that, in the ideal limit, an arbitrarily small flux of pump photons is sufficient to reach the oscillator threshold. We demonstrate that due to the large group velocity delays associated with coherent media, an extremely narrow oscillator linewidth is possible, making a narrow-band source of non-classical radiation feasible.
For the numerical simulation of 3D radiative heat transfer in glasses and glass melts, practically applicable mathematical methods are needed that handle such problems optimally on workstation-class computers. Since the exact solution would require supercomputer capabilities, we concentrate on approximate solutions with a high degree of accuracy. The following approaches are studied: 3D diffusion approximations and 3D ray-tracing methods.
Thermal Properties of Interacting Bose Fields and Imaginary-Time Stochastic Differential Equations
(1998)
Abstract: Matsubara Green's functions for interacting bosons are expressed as classical statistical averages corresponding to a linear imaginary-time stochastic differential equation. This makes direct numerical simulations applicable to the study of equilibrium quantum properties of bosons in the non-perturbative regime. To verify our results we discuss an oscillator with quartic anharmonicity as a prototype model for an interacting Bose gas. An analytic expression for the characteristic function in a thermal state is derived and a Higgs-type phase transition discussed, which occurs when the oscillator frequency becomes negative.
In situ condition monitoring of rotary shaft seals could significantly improve the reliability of future seals in numerous applications. A superficial application of strain gauges capturing the state of deformation could offer a cost-effective retrofit solution for indirect measurements of central operational parameters. Within a simulative investigation of the sealing system, possible sensor positions for determination of the preload as well as the friction torque prevailing in the sealing contact are therefore identified as two parameters directly related to the operating condition. Further investigations of the potential sensor signal with focus on its time-dependent behavior prove the theoretical feasibility of the measurement concepts developed and provide promising prospects for an initial technical implementation.
In this paper we are interested in an algebraic specification language that (1) allows for sufficient expressiveness, (2) admits a well-defined semantics, and (3) allows for formal proofs. To that end we study clausal specifications over built-in algebras. To keep things simple, we consider only built-in algebras that are given as the initial model of a Horn clause specification. On top of this Horn clause specification, new operators are (partially) defined by positive/negative conditional equations. In the first part of the paper we define three types of semantics for such a hierarchical specification: model-theoretic, operational, and rewrite-based semantics. We show that all these semantics coincide, provided some restrictions are met. We associate a distinguished algebra A_spec to a hierarchical specification spec. This algebra is initial in the class of all models of spec. In the second part of the paper we study how to prove a theorem (a clause) valid in the distinguished algebra A_spec. We first present an abstract framework for inductive theorem provers. Then we instantiate this framework for proving inductive validity. Finally we give some examples to show how concrete proofs are carried out. This report was supported by the Deutsche Forschungsgemeinschaft, SFB 314 (D4-Projekt).
This paper shows how a new approach to theorem proving by analogy is applicable to real maths problems. This approach works at the level of proof-plans and employs reformulation that goes beyond symbol mapping. The Heine-Borel theorem is a widely known result in mathematics. It is usually stated in R^1, and similar versions are also true in R^2, in topology, and in metric spaces. Its analogical transfer was proposed as a challenge example and could not be solved by previous approaches to theorem proving by analogy. We use a proof-plan of the Heine-Borel theorem in R^1 as a guide in automatically producing a proof-plan of the Heine-Borel theorem in R^2 by analogy-driven proof-plan construction.
In this contribution a mortar-type method for the coupling of non-conforming NURBS surface patches is proposed. The connection of non-conforming patches with shared degrees of freedom requires mutual refinement, which propagates throughout the whole patch due to the tensor-product structure of NURBS surfaces. Thus, methods to handle non-conforming meshes are essential in NURBS-based isogeometric analysis. The main objective of this work is to provide a simple and efficient way to couple the individual patches of complex geometrical models without altering the variational formulation. The deformations of the interface control points of adjacent patches are interrelated with a master-slave relation. This relation is established numerically using the weak form of the equality of mutual deformations along the interface. With the help of this relation, the interface degrees of freedom of the slave patch can be condensed out of the system. A natural connection of the patches is attained without additional terms in the weak form. The proposed method is also applicable to nonlinear computations without further measures. Linear and geometrically nonlinear examples show the high accuracy and robustness of the new method. A comparison to reference results and to computations with the Lagrange multiplier method is given.
The Trippstadt Problem
(1984)
Close to Kaiserslautern is the town of Trippstadt, which, together with five other small towns, forms a local administration unit (Verbandsgemeinde) called Kaiserslautern-Süd. Trippstadt has its own beautiful public swimming pool, which causes problems though; the cost of the upkeep of the pool is higher than the income, and the deficit has to be divided among the towns belonging to the Verbandsgemeinde. Because of this, the administration wanted to find out which fraction of the total number of pool visitors came from the different towns. They planned to ask each pool guest where he came from. They did this for only three days though, because the waiting lines at the cashiers became unbearably long, and they could see that because of this the total number of guests would decrease. Then they wondered how to find a better method to get the same data, and that was when I was asked to help with the solution of the problem.
This paper is concerned with numerical algorithms for the bipolar quantum drift diffusion model. For the thermal equilibrium case a quasi-gradient method minimizing the energy functional is introduced and strong convergence is proven. The computation of current-voltage characteristics is performed by means of an extended Gummel iteration. It is shown that the involved fixed point mapping is a contraction for small applied voltages. In this case the model equations are uniquely solvable and convergence of the proposed iteration scheme follows. Numerical simulations of a one-dimensional resonant tunneling diode are presented. The computed current-voltage characteristics are in good qualitative agreement with experimental measurements. The appearance of negative differential resistances is verified for the first time in a quantum drift diffusion model.
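The convergence argument above rests on the fixed point mapping being a contraction, in which case the Banach fixed-point theorem guarantees convergence of the iterates; a minimal sketch with a hypothetical scalar contraction standing in for the Gummel map:

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=200):
    # Picard iteration x_{k+1} = g(x_k); by the Banach fixed-point
    # theorem it converges whenever g is a contraction.
    x = x0
    for _ in range(max_iter):
        xn = g(x)
        if abs(xn - x) < tol:
            return xn
        x = xn
    raise RuntimeError("no convergence within max_iter")

# Hypothetical scalar contraction (|g'| <= 0.5) standing in for the Gummel map.
root = fixed_point(lambda x: 0.5 * math.cos(x), 1.0)
print(root)
```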
In this work we introduce a new bandlimited spherical wavelet: the Bernstein wavelet. It possesses a couple of interesting properties. In particular, we are able to construct bandlimited wavelets free of oscillations. The scaling function of this wavelet is investigated with regard to the spherical uncertainty principle, i.e., its localization in the space domain as well as in the momentum domain is calculated and compared to the well-known Shannon scaling function. Surprisingly, they possess the same localization in space, although one is highly oscillating whereas the other shows no oscillatory behavior. Moreover, the Bernstein scaling function turns out to be the first bandlimited scaling function known in the literature whose uncertainty product tends to the minimal value 1.
In this paper we consider a certain class of geodetic linear inverse problems \(\Lambda F=G\) in a reproducing kernel Hilbert space setting to obtain a bounded generalized inverse of the operator \(\Lambda\). For a numerical realization we assume G to be given at a finite number of discrete points, to which we employ a spherical spline interpolation method adapted to the Hilbert spaces. By applying \(\Lambda\) to the obtained spline interpolant we get an approximation of the solution F. Finally, our main task is to show some properties of the approximated solution and to prove convergence results if the data set increases.
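The spline interpolation step amounts to solving a Gram system for the kernel coefficients and evaluating the resulting kernel expansion; a minimal sketch on the real line, with a Gaussian kernel as a hypothetical stand-in for the paper's reproducing kernel:

```python
import math

def k(x, z):
    # Gaussian kernel: a hypothetical stand-in for the reproducing kernel.
    return math.exp(-(x - z) ** 2)

def gram_solve(K, y):
    # Solve K c = y by Gaussian elimination with partial pivoting.
    n = len(y)
    A = [row[:] + [y[i]] for i, row in enumerate(K)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for j in range(col, n + 1):
                A[r][j] -= f * A[col][j]
    c = [0.0] * n
    for r in range(n - 1, -1, -1):
        c[r] = (A[r][n] - sum(A[r][j] * c[j] for j in range(r + 1, n))) / A[r][r]
    return c

nodes = [0.0, 0.5, 1.0]
y = [1.0, 0.0, 2.0]   # samples of the unknown function
c = gram_solve([[k(a, b) for b in nodes] for a in nodes], y)

def s(x):
    # The interpolant s(x) = sum_i c_i k(x, x_i) reproduces the samples.
    return sum(ci * k(x, xi) for ci, xi in zip(c, nodes))

print([round(s(xi), 6) for xi in nodes])
```

Since the kernel is positive definite, the Gram matrix is invertible for distinct nodes, so the interpolant exists and is unique, mirroring the existence and uniqueness statement in the abstract.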
The performance of a combustion engine is essentially determined by the charge cycle, i.e. by the inflow of fresh air through the inlet pipe into the cylinder after a combustion cycle. The amount of air, exchanged during this process, depends on many factors, e.g. the number of revolutions per minute, the temperature, the engine and valve geometry. In order to have a tool in designing the engine one is interested in calculating this amount. The proper calculation would involve the solution of three-dimensional hydrodynamical equations governing the gas flow including chemical reactions in a complicated geometry, consisting of the cylinder, valves, inlet and outlet pipe. Since this is clearly too ambitious, we consider a simplified model.
We consider optimal design problems for semiconductor devices which are simulated using the energy transport model. We develop a descent algorithm based on the adjoint calculus and present numerical results for a ballistic diode. Further, we compare the optimal doping profile with results computed on the basis of the drift diffusion model. Finally, we exploit the model hierarchy and test the space mapping approach, especially the aggressive space mapping algorithm, for the design problem. This yields a significant reduction of numerical costs and programming effort.
Abstract: We study the roughening transition of an interface in an Ising system on a 3D simple cubic lattice using a finite-size scaling method. The particular method has recently been proposed and successfully tested for various solid-on-solid models. The basic idea is the matching of the renormalization-group flow of the interface with that of the exactly solvable body-centered cubic solid-on-solid model. We unambiguously confirm the Kosterlitz-Thouless nature of the roughening transition of the Ising interface. Our result for the inverse transition temperature K_R = 0.40754(5) is almost two orders of magnitude more accurate than the estimate of Mon, Landau and Stauffer [9].
Continuous and discrete superselection rules induced by the interaction with the environment are investigated for a class of exactly soluble Hamiltonian models. The environment is given by a Boson field. Stable superselection sectors can only emerge if the low frequencies dominate and the ground state of the Boson field disappears due to infrared divergence. The models allow uniform estimates of all transition matrix elements between different superselection sectors.
Natural or man-made disasters may make the evacuation of a whole region or city necessary. Apart from private traffic, the evacuation from collection points to secure shelters outside the endangered region will be realized by a bus fleet made available by emergency relief. The arising Bus Evacuation Problem (BEP) is a vehicle scheduling problem, in which a given number of evacuees needs to be transported from a set of collection points to a set of capacitated shelters, minimizing the total evacuation time, i.e., the time needed until the last person is brought to safety.
In this paper we consider an extended version of the BEP, the Robust Bus Evacuation Problem (RBEP), in which the exact numbers of evacuees are not known but may stem from a set of probable scenarios. However, after a given reckoning time, this uncertainty is eliminated and planners are given exact figures. The problem is to decide, for each bus, whether it is better to send it right away -- using uncertain numbers of evacuees -- or to wait until the numbers become known.
We present a mixed-integer linear programming formulation for the RBEP and discuss solution approaches; in particular, we present a tabu search framework for finding heuristic solutions of acceptable quality within short computation time. In computational experiments using both randomly generated instances and the real-world scenario of evacuating the city of Kaiserslautern, we compare our solution approaches.
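As an illustration of the tabu search framework, a generic skeleton on a hypothetical toy assignment of pickup points to two shelters; the move structure and penalty cost below are illustrative assumptions, not the RBEP formulation:

```python
def tabu_search(cost, neighbors, start, iters=50, tenure=5):
    # Generic tabu search: move to the best non-tabu neighbor, keep a
    # short memory of recent solutions to escape local minima.
    current = best = start
    tabu = []
    for _ in range(iters):
        cand = [s for s in neighbors(current) if s not in tabu]
        if not cand:
            break
        current = min(cand, key=cost)
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)
        if cost(current) < cost(best):
            best = current
    return best

# Hypothetical toy instance: four pickup points with the demands below,
# assigned to shelter 0 or 1; the cost penalizes imbalance between shelters.
demand = [3, 1, 2, 2]

def cost(assign):
    load = [0, 0]
    for d, s in zip(demand, assign):
        load[s] += d
    return abs(load[0] - 4) + abs(load[1] - 4)

def neighbors(assign):
    # Moves: reassign a single pickup point to the other shelter.
    return [tuple(1 - a if i == j else a for j, a in enumerate(assign))
            for i in range(len(assign))]

best = tabu_search(cost, neighbors, (0, 0, 0, 0))
print(best, cost(best))
```

The tabu memory is what lets the search accept the cost-2 intermediate assignment on the way to the balanced optimum instead of oscillating back to the start.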
In the present paper we investigate the Rayleigh-Benard convection in rarefied gases and demonstrate by numerical experiments the transition from purely thermal conduction to a natural convective flow for a large range of Knudsen numbers from 0.02 down to 0.001. We address the problem of how the critical value of the Rayleigh number defined for incompressible viscous flows may be translated to rarefied gas flows. Moreover, the simulations obtained for a Knudsen number Kn=0.001 and Froude number Fr=1 show a further transition from regular Rayleigh-Benard cells to purely unsteady behavior with moving vortices.
The thermal equilibrium state of a bipolar, isothermal quantum fluid confined to a bounded domain \(\Omega\subset I\!\!R^d,d=1,2\) or \( d=3\) is the minimizer of the total energy \({\mathcal E}_{\epsilon\lambda}\); \({\mathcal E}_{\epsilon\lambda}\) involves the squares of the scaled Planck's constant \(\epsilon\) and the scaled minimal Debye length \(\lambda\). In applications one frequently has \(\lambda^2\ll 1\). In these cases the zero-space-charge approximation is rigorously justified. As \(\lambda \to 0 \), the particle densities converge to the minimizer of a limiting quantum zero-space-charge functional exactly in those cases where the doping profile satisfies some compatibility conditions. Under natural additional assumptions on the internal energies one gets a differential-algebraic system for the limiting \((\lambda=0)\) particle densities, namely the quantum zero-space-charge model. The analysis of the subsequent limit \(\epsilon \to 0\) exhibits the importance of quantum gaps. The semiclassical zero-space-charge model is, for small \(\epsilon\), a reasonable approximation of the quantum model if and only if the quantum gap vanishes. The simultaneous limit \(\epsilon =\lambda \to 0\) is analyzed.
Most automated theorem provers suffer from the problem that they can produce proofs only in formalisms difficult to understand even for experienced mathematicians. Efforts have been made to transform such machine-generated proofs into natural deduction (ND) proofs. Although the single steps are now easy to understand, the entire proof is usually at a low level of abstraction, containing too many tedious steps. Therefore, it is not adequate as input to natural language generation systems. To overcome these problems, we propose a new intermediate representation, called ND style proofs at the assertion level. After illustrating the notion intuitively, we show that the assertion level steps can be justified by domain-specific inference rules, and that these rules can be represented compactly in a tree structure. Finally, we describe a procedure which substantially shortens ND proofs by abstracting them to the assertion level, and report our experience with further transformation into natural language.
In this article we prove existence and uniqueness results for solutions to the outer oblique boundary problem for the Poisson equation under very weak assumptions on boundary, coefficients and inhomogeneities. Main tools are the Kelvin transformation and the solution operator for the regular inner problem, provided in [1]. Moreover, we prove regularisation results for the weak solutions of both the inner and the outer problem. We investigate the non-admissible direction for the oblique vector field, state results with stochastic inhomogeneities and provide a Ritz-Galerkin approximation. The results are applicable to problems from Geomathematics, see e.g. [2] and [3].
Primary decomposition of an ideal in a polynomial ring over a field belongs to the indispensable theoretical tools in commutative algebra and algebraic geometry. Geometrically it corresponds to the decomposition of an affine variety into irreducible components and is, therefore, also an important geometric concept. The decomposition of a variety into irreducible components is, however, slightly weaker than the full primary decomposition, since the irreducible components correspond only to the minimal primes of the ideal of the variety, which is a radical ideal. The embedded components, although invisible in the decomposition of the variety itself, are, however, responsible for many geometric properties, in particular if we deform the variety slightly. Therefore, they cannot be neglected, and knowledge of the full primary decomposition is important also in a geometric context. In contrast to its theoretical importance, one finds in mathematical papers only very few concrete examples of non-trivial primary decompositions, because carrying out such a decomposition by hand is almost impossible. This experience corresponds to the fact that providing efficient algorithms for primary decomposition of an ideal \(I\subseteq K[x_1,\dots,x_n]\), K a field, is also a difficult task and still one of the big challenges for computational algebra and computational algebraic geometry. All known algorithms require Gröbner bases respectively characteristic sets and multivariate polynomial factorization over some (algebraic or transcendental) extension of the given field K. The first practical algorithm for computing the minimal associated primes is based on characteristic sets and the Ritt-Wu process ([R1], [R2], [Wu], [W]); the first practical and general primary decomposition algorithm was given by Gianni, Trager and Zacharias [GTZ]. New ideas from homological algebra were introduced by Eisenbud, Huneke and Vasconcelos in [EHV].
Recently, Shimoyama and Yokoyama [SY] provided a new algorithm, using Gröbner bases, to obtain the primary decomposition from the given minimal associated primes. In the present paper we present all four approaches together with some improvements and with detailed comparisons, based upon an analysis of 34 examples using the computer algebra system SINGULAR [GPS]. Since primary decomposition is a fairly complicated task, it is best explained by dividing it into several subtasks, in particular since sometimes only one of these subtasks is needed in practice. The paper is organized in such a way that we consider the subtasks separately and present the different approaches of the above-mentioned authors, with several tricks and improvements incorporated. Some of these improvements and the combination of certain steps from the different algorithms are essential for improving the practical performance.
We present an empirical study of mathematical proofs by diagonalization; the aim is their mechanization based on proof planning techniques. We show that these proofs can be constructed according to a strategy that (i) finds an indexing relation, (ii) constructs a diagonal element, and (iii) makes the implicit contradiction of the diagonal element explicit. Moreover, we suggest how diagonal elements can be represented.
In this report we treat an optimization task which should make the choice of nonwovens for the production of diapers faster. A mathematical model for the liquid transport in nonwoven is developed. The main attention is focused on the handling of fully and partially saturated zones, which leads to a parabolic-elliptic problem. Finite-difference schemes are proposed for the numerical solution of the differential problem. Parallel algorithms are considered and results of numerical experiments are given.