### Refine

#### Year of publication

- 1997 (92)

#### Document Type

- Preprint (62)
- Article (17)
- Report (9)
- Doctoral Thesis (2)
- Diploma Thesis (1)
- Master's Thesis (1)

#### Has Fulltext

- yes (92)

#### Keywords

- AG-RESY (7)
- PARO (7)
- SKALP (2)
- Anisotropic smoothness classes (1)
- Bayesrisiko (1)
- Bewegungsplanung (1)
- Brownian motion (1)
- C (1)
- CAx-Anwendungen (1)
- CODET (1)

In this paper a group of participants of the 12th European Summer Institute, which took place in Tenerife, Spain, in June 1995, present their views on the state of the art and the future trends in Locational Analysis. The issues discussed include modelling aspects in discrete, network and continuous location, heuristic techniques, the state of technology and undesirable facility location. Some general questions are stated regarding the applicability of location models, promising research directions and the way technology affects the development of solution techniques.

MP Prototype Specification
(1997)

This paper discusses the benefits and drawbacks of caching and replication strategies in the WWW with respect to the Internet infrastructure. Bandwidth consumption, latency, and overall error rates are considered to be most important from a network point of view. The dependencies of these values on input parameters like degree of replication, document popularity, actual cache hit rates, and error rates are highlighted. In order to determine the influence of different caching and replication strategies on the behavior of a single proxy server with respect to these values, trace-based simulations are used. Since the overall effects of such strategies can hardly be decided with this approach alone, a mathematical model has been developed to deal with their influence on the network as a whole. Together, this two-tiered approach permits us to propose quantitative assessments of the influence different caching and replication proposals have, or are going to have, on the Internet infrastructure.
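The trace-based part of such a study can be sketched minimally: replay a request trace through a single proxy cache and measure the hit rate. The LRU policy, the toy trace, and all names below are illustrative assumptions, not the paper's actual simulator.

```python
from collections import OrderedDict

def simulate_lru(trace, capacity):
    """Replay a request trace through an LRU proxy cache; return the hit rate."""
    cache = OrderedDict()  # document id -> present (all documents unit-sized here)
    hits = 0
    for doc in trace:
        if doc in cache:
            hits += 1
            cache.move_to_end(doc)         # mark as most recently used
        else:
            cache[doc] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict the least recently used entry
    return hits / len(trace)

# A short trace with Zipf-like popularity: a few documents dominate.
trace = ["a", "b", "a", "c", "a", "b", "d", "a", "e", "b"]
hit_rate = simulate_lru(trace, capacity=3)
```

Varying `capacity` and the popularity skew of the trace is exactly the kind of parameter sweep the abstract refers to.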

The first observation of self-focusing of dipolar spin waves in garnet film media is reported. In particular, we show that the quasi-stationary diffraction of a finite-aperture spin wave beam in a focusing medium leads to the concentration of the wave power in one focal point rather than along a certain line (channel). The obtained results demonstrate the wide applicability of non-linear spin wave media to study non-linear wave phenomena using an advanced combined microwave-Brillouin light scattering technique for a two-dimensional mapping of the spin wave amplitudes.

In this paper we provide a semantical meta-theory intended to support the development of higher-order calculi for automated theorem proving in the way the corresponding methodology has supported first-order logic. To reach this goal, we establish classes of models that adequately characterize the existing theorem-proving calculi, i.e., such that the calculi are sound and complete with respect to these classes, as well as a standard methodology of abstract consistency methods (by providing the necessary model existence theorems) needed to analyze the completeness of machine-oriented calculi.

Fabric reinforced thermoplastic composites, suitable for the production of thin-walled, high-strength structural parts, are available on the market today with various fibre/matrix combinations. However, further market penetration and series production are inhibited as long as the forming technologies are not well understood. In this thesis, the potential of different forming technologies for series production is evaluated. Stamp forming is an efficient way to produce parts in very short cycle times. A limiting factor for part complexity is undesired wrinkle formation as a consequence of insufficient fabric shear. Fabric shear and other important deformations of impregnated fabrics were examined by means of new test devices. Evidence was found that membrane tension is the crucial factor in avoiding wrinkle formation. New tool concepts and processing windows were developed to produce fabric reinforced thermoplastic parts free of wrinkles and distortions.

Liegruppen
(1997)

In this report we treat an optimization task intended to make the choice of nonwoven for diaper production faster. A mathematical model for the liquid transport in nonwoven is developed. The main attention is focussed on the handling of fully and partially saturated zones, which leads to a parabolic-elliptic problem. Finite-difference schemes are proposed for the numerical solution of the differential problem. Parallel algorithms are considered and results of numerical experiments are given.
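To illustrate what a finite-difference scheme for such a problem looks like, here is one explicit step for the generic 1-D model equation u_t = d·u_xx with fixed boundary values. This is a standard textbook scheme, not the report's parabolic-elliptic discretization; the function name and parameters are illustrative.

```python
def explicit_heat_step(u, dt, dx, d):
    """One explicit finite-difference step for u_t = d * u_xx on a uniform
    grid, with fixed (Dirichlet) boundary values kept unchanged."""
    r = d * dt / dx ** 2
    assert r <= 0.5, "stability condition of the explicit scheme violated"
    inner = [u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
             for i in range(1, len(u) - 1)]
    return [u[0]] + inner + [u[-1]]

# A point of saturation diffusing outward from the middle of the grid.
u1 = explicit_heat_step([0, 0, 1, 0, 0], dt=0.25, dx=1.0, d=1.0)
```

Handling the degenerate elliptic regime mentioned in the abstract would require an implicit or specially adapted scheme; the explicit step above only covers the well-behaved parabolic case.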

Process Chain in Automotive Industry - Present Day Demands versus Long Term Open CAD/CAM Strategies
(1997)

The automotive industry was a pioneer in using CAD/CAM technology, and today the car manufacturers' development process is almost completely carried out with this technology. Substantial initiative for the standardisation of CAD/CAM techniques comes from the automotive industry, e.g. for neutral CAD data interfaces. The R&D departments of German car manufacturers have founded a working group with the aim of developing a common long-term CAD/CAM strategy. One important result is the concept of a future CAx architecture based on the standard data structure STEP. The commitment of the car manufacturers to STEP and open system architectures is in contradiction to their attitude towards suppliers and subcontractors: recently, more and more contractors are contractually bound to use exactly the same CAD system as the orderer. The German car industry tries to find a way out of this contradiction and to improve the co-operation between the companies in the short term. They have therefore proposed a "Dual CAD Strategy", i.e. to put improvements in CAD communication into practice which are possible today - even proprietary solutions - and in parallel to invest in strategic concepts to prepare tomorrow's open system landscape.

This paper considers the problem of rapidly changing industrial CAx applications. With the introduction of feature technology, several problems of process parallelization, of simultaneous and concurrent engineering, and of outsourcing appear to become surmountable. However, feature technology has so far developed without sufficient reference to design practice, which has led to considerable deficits in industrial use. Studies in the automotive industry (AIFEM initiative) show that this can often be traced back to a lack of communication between designers and CAx experts. Given the current approach of feature technology, combined with the extreme time pressure in product development, there is a danger of optimizing the product definition processes solely according to the criteria of development time, cost, and product quality. Features then serve merely as specially adapted tools, and genuine product innovation is hindered. It is shown how feature technology must be extended in order to foster the creativity of designers and thus enable novel products. The aspects of user-defined features, data standardization, the processing of incomplete information, and dynamic process support are discussed in more detail.

Primary decomposition of an ideal in a polynomial ring over a field belongs to the indispensable theoretical tools in commutative algebra and algebraic geometry. Geometrically it corresponds to the decomposition of an affine variety into irreducible components and is, therefore, also an important geometric concept. The decomposition of a variety into irreducible components is, however, slightly weaker than the full primary decomposition, since the irreducible components correspond only to the minimal primes of the ideal of the variety, which is a radical ideal. The embedded components, although invisible in the decomposition of the variety itself, are responsible for many geometric properties, in particular if we deform the variety slightly. Therefore, they cannot be neglected, and knowledge of the full primary decomposition is important also in a geometric context.

In contrast to its theoretical importance, one finds in mathematical papers only very few concrete examples of non-trivial primary decompositions, because carrying out such a decomposition by hand is almost impossible. This experience corresponds to the fact that providing efficient algorithms for the primary decomposition of an ideal I ⊆ K[x1, ..., xn], K a field, is also a difficult task and still one of the big challenges for computational algebra and computational algebraic geometry.

All known algorithms require Gröbner bases respectively characteristic sets and multivariate polynomial factorization over some (algebraic or transcendental) extension of the given field K. The first practical algorithm for computing the minimal associated primes is based on characteristic sets and the Ritt-Wu process ([R1], [R2], [Wu], [W]); the first practical and general primary decomposition algorithm was given by Gianni, Trager and Zacharias [GTZ]. New ideas from homological algebra were introduced by Eisenbud, Huneke and Vasconcelos in [EHV]. Recently, Shimoyama and Yokoyama [SY] provided a new algorithm, using Gröbner bases, to obtain the primary decomposition from the given minimal associated primes.

In the present paper we present all four approaches together with some improvements and with detailed comparisons, based upon an analysis of 34 examples using the computer algebra system SINGULAR [GPS]. Since primary decomposition is a fairly complicated task, it is best explained by dividing it into several subtasks, especially since sometimes only one of these subtasks is needed in practice. The paper is organized in such a way that we consider the subtasks separately and present the different approaches of the above-mentioned authors, with several tricks and improvements incorporated. Some of these improvements and the combination of certain steps from the different algorithms are essential for improving the practical performance.
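A small textbook illustration of an embedded component (not one of the 34 test cases analyzed in the paper):

```latex
(x^2,\,xy) \;=\; (x) \,\cap\, (x^2,\,y) \;\subseteq\; K[x,y]
```

Here $(x)$ is the unique minimal prime, so the variety $V(x^2, xy)$ is just the line $x = 0$. The second component $(x^2, y)$ is primary with radical $(x, y)$, an embedded prime: the point at the origin is invisible in the variety itself but is detected by the full primary decomposition.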

An unusual interlayer coupling, recently discovered in layered magnetic systems, is analysed from the experimental and theoretical points of view. This coupling favours the 90° orientation of the magnetizations of the adjacent magnetic films. It can be phenomenologically described by a term in the energy expression which is biquadratic with respect to the magnetizations of the two films. The main experimental findings, as well as the theoretical models explaining the phenomenon, are discussed.
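The biquadratic term referred to above is commonly written in the following standard phenomenological form (the sign convention here is an assumption, not taken from the paper):

```latex
E \;=\; -\,J_1\,(\mathbf{m}_1\cdot\mathbf{m}_2)\;-\;J_2\,(\mathbf{m}_1\cdot\mathbf{m}_2)^2
```

where $\mathbf{m}_1, \mathbf{m}_2$ are the unit magnetization vectors of the two films. When the biquadratic constant $J_2 < 0$ dominates the bilinear constant $J_1$, the energy is minimized at $\mathbf{m}_1\cdot\mathbf{m}_2 = 0$, i.e. the 90° configuration.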

We present a distributed system, Dott, for approximately solving the Traveling Salesman Problem (TSP) based on the Teamwork method. So-called experts and specialists work independently and in parallel for given time periods. For TSP, specialists are tour construction algorithms and experts use modified genetic algorithms in which, after each application of a genetic operator, the resulting tour is locally optimized before it is added to the population. After a given time period the work of each expert and specialist is judged by a referee. A new start population, including selected individuals from each expert and specialist, is generated by the supervisor, based on the judgments of the referees. Our system is able to find better tours than each of the experts or specialists working alone. Also, results comparable to those of single runs can be found much faster by a team.
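The core "genetic operator followed by local optimization" idea of the experts can be sketched as follows. This is a minimal illustration with a swap mutation and a 2-opt pass; Dott's actual operators, referees, and supervisor are not reproduced here, and all names are illustrative.

```python
import random

def tour_length(tour, dist):
    """Length of a closed tour, given a symmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Local optimization: keep reversing sub-segments while that shortens the tour."""
    best = tour[:]
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 1):
            for j in range(i + 2, len(best) + 1):
                cand = best[:i] + best[i:j][::-1] + best[j:]
                if tour_length(cand, dist) < tour_length(best, dist):
                    best, improved = cand, True
    return best

def mutate_then_optimize(tour, dist, rng):
    """One 'expert' step: apply a genetic operator (here a simple swap
    mutation) and locally optimize the child before it would enter the
    population."""
    child = tour[:]
    i, j = rng.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return two_opt(child, dist)
```

Interleaving local optimization with the genetic operators in this way (a memetic algorithm) keeps the population on locally optimal tours, which is what makes the evolved tours competitive.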

This paper provides a description of PLATIN. With PLATIN we present an implemented system for planning inductive theorem proofs in equational theories that are based on rewrite methods. We provide a survey of the underlying architecture of PLATIN and then concentrate on details and experiences of the current implementation.

We present a general framework for developing search heuristics for automated theorem provers. This framework allows for the construction of heuristics that are on the one hand able to replay (parts of) a given proof found in the past but are on the other hand flexible enough to deviate from the given proof path in order to solve similar proof problems. We substantiate the abstract framework by the presentation of three distinct techniques for learning appropriate search heuristics based on so-called features. We demonstrate the usefulness of these techniques in the area of equational deduction. Comparisons with the renowned theorem prover Otter validate the applicability and strength of our approach.

We present a method for making use of past proof experience called flexible re-enactment (FR). FR is actually a search-guiding heuristic that uses past proof experience to create a search bias. Given a proof P of a problem solved previously that is assumed to be similar to the current problem A, FR searches for P and in the "neighborhood" of P in order to find a proof of A. This heuristic use of past experience has certain advantages that make FR quite profitable and give it a wide range of applicability. Experimental studies substantiate and illustrate this claim. This work was supported by the Deutsche Forschungsgemeinschaft (DFG).

We investigate to what extent interpolation mechanisms based on the nearest-neighbor rule (NNR) can support cancer research. The main objective is to use the NNR to predict the likelihood of tumorigenesis based on given risk factors. By using a genetic algorithm to optimize the parameters of the nearest-neighbor prediction, the performance of this interpolation method can be improved substantially. Furthermore, it is possible to detect risk factors which are hardly or not at all relevant to tumorigenesis. Our preliminary studies demonstrate that NNR-based interpolation is a simple tool that nevertheless has enough potential to be seriously considered for cancer research or related research.
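The prediction step described above can be sketched as a weighted k-nearest-neighbor vote. The per-feature weights are the parameters a genetic algorithm would tune; a weight driven toward zero effectively removes a risk factor that is irrelevant. All names and data below are illustrative assumptions, not taken from the study.

```python
import math

def weighted_nn_predict(query, examples, weights, k=3):
    """Predict a label for `query` by averaging the labels of its k nearest
    examples under a per-feature weighted Euclidean distance.

    `examples` is a list of (feature_vector, label) pairs; `weights` is one
    non-negative weight per feature (the GA-optimized parameters)."""
    def dist(a, b):
        return math.sqrt(sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b)))
    nearest = sorted(examples, key=lambda ex: dist(query, ex[0]))[:k]
    return sum(label for _, label in nearest) / k

# Feature 0 is predictive, feature 1 is noise; weight [1, 0] ignores the noise.
examples = [([0.0, 5.0], 0), ([0.1, -3.0], 0), ([1.0, 7.0], 1), ([0.9, 0.0], 1)]
risk = weighted_nn_predict([0.05, 100.0], examples, weights=[1.0, 0.0], k=2)
```

Inspecting which weights the optimization drives toward zero is one simple way to flag risk factors with little predictive value.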