### Refine

#### Year of publication

- 1997 (66)

#### Document Type

- Preprint (66)

#### Keywords

- AG-RESY (1)
- Brownian motion (1)
- CAx-Anwendungen (1)
- CoMo-Kit (1)
- Dense gas (1)
- Elliptic-parabolic equation (1)
- Enskog equation (1)
- Function of bounded variation (1)
- Integral transform (1)
- Intelligent Object Fusion (1)


We report on the observation of quantized surface spin waves in periodic arrays of magnetic Ni81Fe19 wires by means of Brillouin light scattering spectroscopy. At small wavevectors (q_1 = 0–0.9 × 10^5 cm^-1) several discrete, dispersionless modes with a frequency splitting of up to 0.9 GHz were observed for the wavevector oriented perpendicular to the wires. From the frequencies of the modes and the wavevector interval where each mode is observed, the modes are identified as dipole-exchange surface spin wave modes of the film with quantized wavevector values determined by the boundary conditions at the lateral edges of the wires. With increasing wavevector the separation of the modes becomes smaller, and the frequencies of the discrete modes converge to the dispersion of the dipole-exchange surface mode of a continuous film.

The Fock space of bosons and fermions and its underlying superalgebra are represented by algebras of functions on a superspace. We define Gaussian integration on infinite-dimensional superspaces, and construct superanalogs of the classical function spaces with a reproducing kernel - including the Bargmann-Fock representation - and of the Wiener-Segal representation. The latter representation requires the investigation of Wick ordering on Z_2-graded algebras. As an application we derive a Mehler formula for the Ornstein-Uhlenbeck semigroup on the Fock space.

An asymptotic-induced scheme for nonstationary transport equations with the diffusion scaling is developed. The scheme works uniformly for all ranges of mean free paths. It is based on the asymptotic analysis of the diffusion limit of the transport equation. A theoretical investigation of the behaviour of the scheme in the diffusion limit is given and an approximation property is proven. Moreover, numerical results for different physical situations are shown and the uniform convergence of the scheme is established numerically.

The Multiple Objective Median Problem involves locating a new facility so that a vector of performance criteria is optimized over a given set of existing facilities. A variation of this problem is obtained if the existing facilities are situated on two sides of a linear barrier. Such barriers, like rivers, highways, borders, or mountain ranges, are frequently encountered in practice. In this paper, the theory of the Multiple Objective Median Problem with line barriers is developed. As this problem is nonconvex but specially structured, a reduction to a series of convex optimization problems is proposed. The general results lead to a polynomial algorithm for finding the set of efficient solutions. The algorithm is proposed for bi-criteria problems with different measures of distance.

In this paper a group of participants of the 12th European Summer Institute, which took place in Tenerife, Spain, in June 1995, present their views on the state of the art and future trends in Locational Analysis. The issues discussed include modelling aspects in discrete, network and continuous location, heuristic techniques, the state of technology and undesirable facility location. Some general questions are stated regarding the applicability of location models, promising research directions and the way technology affects the development of solution techniques.

Retrieving multiple cases is supposed to be an adequate retrieval strategy for guiding partial-order planners because of the recognized flexibility of these planners to interleave steps in the plans. Cases are combined by merging them. In this paper, we examine two different ways of merging cases in the context of partial-order planning. We will see that merging cases can be very difficult if the cases are merged eagerly. On the other hand, if cases are merged by avoiding redundant steps, the guidance of the additional cases tends to decrease with the number of covered goals and retrieved cases in domains having a certain kind of interactions. Thus, in these domains, retrieving a single case covering many of the goals of the problem, or retrieving fewer cases covering many of the goals, is at least as effective as retrieving several cases covering all goals.

This paper shows an approach to profit from type information about planning objects in a partial-order planner. The approach turns out to combine representational and computational advantages. On the one hand, type hierarchies allow better structuring of domain specifications. On the other hand, operators contain type constraints which reduce the search space of the planner as they partially achieve the functionality of filter conditions.

In this paper we provide a semantical meta-theory to support the development of higher-order calculi for automated theorem proving, in the same way that the corresponding methodology supports first-order logic. To reach this goal, we establish classes of models that adequately characterize the existing theorem-proving calculi, i.e., with respect to which these calculi are sound and complete, and a standard methodology of abstract consistency methods (by providing the necessary model existence theorems) needed to analyze the completeness of machine-oriented calculi.

MP Prototype Specification
(1997)

A first explicit connection between finitely presented commutative monoids and ideals in polynomial rings was used in 1958 by Emelichev, yielding a solution to the word problem in commutative monoids by deciding the ideal membership problem. The aim of this paper is to show in a similar fashion how congruences on monoids and groups can be characterized by ideals in the respective monoid and group rings. These characterizations make it possible to transfer well-known results from the theory of string rewriting systems for presenting monoids and groups to the algebraic setting of subalgebras and ideals in monoid and group rings, respectively. Moreover, natural one-sided congruences defined by subgroups of a group are connected to one-sided ideals in the respective group ring, and hence the subgroup problem and the ideal membership problem are directly related. For several classes of finitely presented groups we show explicitly how Gröbner basis methods are related to existing solutions of the subgroup problem by rewriting methods. For the case of general monoids and submonoids weaker results are presented. In fact, it becomes clear that string rewriting methods for monoids and groups can be lifted in a natural fashion to define reduction relations in monoid and group rings.

The concept of algebraic simplification is of great importance for the field of symbolic computation in computer algebra. In this paper we review some fundamental concepts concerning reduction rings in the spirit of Buchberger. The most important properties of reduction rings are presented. The techniques for presenting monoids or groups by string rewriting systems are used to define several types of reduction in monoid and group rings. Gröbner bases in this setting arise naturally as generalizations of the corresponding known notions in the commutative and some non-commutative cases. Several results on the connection of the word problem and the congruence problem are proven. The concepts of saturation and completion are introduced for monoid rings having a finite convergent presentation by a semi-Thue system. For certain presentations, including free groups and context-free groups, the existence of finite Gröbner bases for finitely generated right ideals is shown and a procedure to compute them is given.

This paper is a continuation of a joint paper with B. Martin [MS] dealing with the problem of direct sum decompositions. The techniques of that paper are used to decide whether two modules are isomorphic or not. A positive answer to this question has many applications, for example for the classification of maximal Cohen-Macaulay modules over local algebras as well as for the study of projective modules. Up to now computer algebra normally deals with equality of ideals or modules, which depends on chosen embeddings. The present algorithm allows one to switch to isomorphism classes, which is more natural in the sense of commutative algebra and algebraic geometry.

This paper considers the problems posed by rapidly changing industrial CAx applications. The introduction of feature technology appears to make some problems of process parallelization, of simultaneous and concurrent engineering, and of outsourcing surmountable. However, feature technology has so far developed without sufficient reference to design practice, which has led to considerable deficits in industrial use. Studies in the automotive industry (AIFEM initiative) show that this can often be traced back to a lack of communication between designers and CAx experts. Given the current approach of feature technology, combined with the extreme time pressure in product development, there is a danger of optimizing product definition processes only according to the criteria of development time, cost and product quality. Features then serve merely as specially adapted tools, which hinders genuine product innovation. We show how feature technology must be extended to foster the creativity of designers and thus enable novel products. The aspects of user-defined features, data standardization, the processing of incomplete information and dynamic process support are discussed in more detail.

Is "programming entirely without code" also possible in the CAx domain? The multitude of heterogeneous CAx applications and the growing complexity of development processes call for new solutions in CAx technology. The aim of this contribution is to show the trend-setting role of component technology in the CAx domain. The fundamentals of components as well as the important component architectures (ActiveX and Java Beans) are presented. The expectations of users and system vendors, the potentials and the effects of this technology on new systems are analyzed. The first approaches currently available are presented. The role of international standards for the technical realization and for the acceptance of CAx component systems is shown.

Process Chain in Automotive Industry - Present Day Demands versus Long Term Open CAD/CAM Strategies
(1997)

The automotive industry was a pioneer in using CAD/CAM technology. Today the car manufacturers' development process is carried out almost completely with this technology. Substantial initiative for the standardisation of CAD/CAM techniques comes from the automotive industry, e.g. for neutral CAD data interfaces. The R&D departments of German car manufacturers have founded a working group with the aim of developing a common long-term CAD/CAM strategy. One important result is the concept of a future CAx architecture based on the standard data structure STEP. The commitment of the car manufacturers to STEP and open system architectures stands in contradiction to their attitude towards suppliers and subcontractors: recently, more and more contractors are contractually bound to use exactly the same CAD system as the orderer. The German car industry is trying to find a way out of this contradiction and to improve co-operation between the companies in the short term. They have therefore proposed a "Dual CAD Strategy", i.e. to put improvements in CAD communication into practice which are possible today, even proprietary solutions, and in parallel to invest in strategic concepts to prepare tomorrow's open system landscape.

In the modeling of biological phenomena in living organisms, whether the measurements are of blood pressure, enzyme levels, biomechanical movements or heartbeats, etc., one of the important aspects is time variation in the data. Thus, the recovery of a "smooth" regression or trend function from noisy time-varying sampled data becomes a problem of particular interest. Here we use non-linear wavelet thresholding to estimate a regression or trend function in the presence of additive noise which, in contrast to most existing models, does not need to be stationary. (Here, nonstationarity means that the spectral behaviour of the noise is allowed to change slowly over time.) We develop a procedure to adapt existing threshold rules to such situations, e.g., that of a time-varying variance in the errors. Moreover, in the model of curve estimation for functions belonging to a Besov class with locally stationary errors, we derive a near-optimal rate for the L2-risk between the unknown function and our soft or hard threshold estimator, which holds in the general case of an error distribution with bounded cumulants. In the case of Gaussian errors, a lower bound on the asymptotic minimax rate in the wavelet coefficient domain is also obtained. It is also argued that a stronger adaptivity result is possible by the use of a particular location- and level-dependent threshold obtained by minimizing Stein's unbiased estimate of the risk. In this respect, our work generalizes previous results, which cover the situation of correlated but stationary errors. A natural application of our approach is the estimation of the trend function of nonstationary time series under the model of local stationarity. The method is illustrated on both a simulated example and a biostatistical data set, measurements of sheep luteinizing hormone, which exhibits a clear nonstationarity in its variance.
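The soft and hard thresholding rules mentioned above can be sketched as follows. These are the standard textbook definitions, not the paper's adaptive, time-varying thresholds:

```python
import numpy as np

def soft_threshold(d, t):
    """Soft rule: shrink each wavelet coefficient toward zero by t."""
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

def hard_threshold(d, t):
    """Hard rule: keep coefficients whose magnitude exceeds t, zero the rest."""
    return np.where(np.abs(d) > t, d, 0.0)

coeffs = np.array([3.0, -0.4, 1.5, 0.2])
shrunk = soft_threshold(coeffs, 1.0)
kept = hard_threshold(coeffs, 1.0)
```

Soft thresholding both kills small coefficients and shrinks large ones, which is why it yields continuous estimators; hard thresholding only kills small coefficients.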

Starting from the mollified version of the Enskog equation for a hard-sphere fluid, a grid-free algorithm to obtain the solution is proposed. The algorithm is based on the finite pointset method. For illustration, it is applied to a Riemann problem. The shock-wave solution is compared to the results of Frezzotti and Sgarra, and good agreement is found.

Here the self-organization property of one-dimensional Kohonen's algorithm in its 2k-neighbour setting with a general type of stimuli distribution and non-increasing learning rate is considered. We prove that the probability of self-organization for all initial values of neurons is uniformly positive. For the special case of a constant learning rate, it implies that the algorithm self-organizes with probability one.
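A minimal sketch of the one-dimensional Kohonen update with a 2k-neighbour set and a constant learning rate. The parameter values and the uniform stimulus distribution are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def kohonen_1d(n_neurons=10, steps=5000, k=1, eps=0.1):
    """1-D Kohonen algorithm, 2k-neighbour setting: the winning neuron
    and its k neighbours on each side move toward the stimulus."""
    w = rng.random(n_neurons)                 # random initial neuron values
    for _ in range(steps):
        x = rng.random()                      # uniform stimulus on [0, 1]
        i = int(np.argmin(np.abs(w - x)))     # winning neuron
        lo, hi = max(0, i - k), min(n_neurons, i + k + 1)
        w[lo:hi] += eps * (x - w[lo:hi])      # constant learning rate
    return w

w = kohonen_1d()
```

Self-organization here means the neuron values `w` become monotone (increasing or decreasing) and stay so; the theorem cited above says this happens with probability one for a constant learning rate.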

We develop a test for stationarity of a time series against the alternative of a time-changing covariance structure. Using localized versions of the periodogram, we obtain empirical versions of a reasonable notion of a time-varying spectral density. Coefficients w.r.t. a Haar wavelet series expansion of such a time-varying periodogram are a possible indicator whether there is some deviation from covariance stationarity. We propose a test based on the limit distribution of these empirical coefficients.
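The test statistic rests on Haar wavelet coefficients of localized periodogram values. A minimal sketch of first-level Haar detail coefficients (the estimator's exact localization and normalization are not reproduced here):

```python
import numpy as np

def haar_detail(x):
    """First-level Haar detail coefficients: scaled differences of
    adjacent pairs; large values indicate a change over time."""
    x = np.asarray(x, dtype=float)
    n = len(x) - len(x) % 2          # drop a trailing odd sample
    return (x[0:n:2] - x[1:n:2]) / np.sqrt(2.0)
```

Under covariance stationarity, localized periodogram values fluctuate around a constant and the detail coefficients are near zero; a time-changing variance shows up as large coefficients.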

In this report we treat an optimization task which should make the choice of nonwoven for making diapers faster. A mathematical model for the liquid transport in nonwoven is developed. The main attention is focussed on the handling of fully and partially saturated zones, which leads to a parabolic-elliptic problem. Finite-difference schemes are proposed for the numerical solution of the differential problem. Parallel algorithms are considered and results of numerical experiments are given.

An asymptotic-induced scheme for kinetic semiconductor equations with the diffusion scaling is developed. The scheme is based on the asymptotic analysis of the kinetic semiconductor equation. It works uniformly for all ranges of mean free paths. The velocity discretization is done using quadrature points equivalent to a moment expansion method. Numerical results for different physical situations are presented.

This paper provides a description of PLATIN. With PLATIN we present an implemented system for planning inductive theorem proofs in equational theories that are based on rewrite methods. We provide a survey of the underlying architecture of PLATIN and then concentrate on details and experiences of the current implementation.

We present a distributed system, Dott, for approximately solving the Traveling Salesman Problem (TSP) based on the Teamwork method. So-called experts and specialists work independently and in parallel for given time periods. For TSP, specialists are tour construction algorithms and experts use modified genetic algorithms in which, after each application of a genetic operator, the resulting tour is locally optimized before it is added to the population. After a given time period the work of each expert and specialist is judged by a referee. A new start population, including selected individuals from each expert and specialist, is generated by the supervisor, based on the judgments of the referees. Our system is able to find better tours than each of the experts or specialists working alone. Also, results comparable to those of single runs can be found much faster by a team.
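The local optimization step applied by the experts can be illustrated with 2-opt, a classic tour-improvement move; the abstract does not specify which local optimizer Dott uses, so 2-opt here is an assumption:

```python
import math
import itertools

def tour_length(tour, pts):
    """Total length of a closed tour over points pts."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Repeatedly reverse tour segments while doing so shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(tour)), 2):
            if j - i < 2:
                continue
            cand = tour[:i] + tour[i:j][::-1] + tour[j:]
            if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                tour, improved = cand, True
    return tour

pts = [(0, 0), (1, 0), (1, 1), (0, 1)]     # unit square
best = two_opt([0, 2, 1, 3], pts)          # start from a crossing tour
```

For the unit square, 2-opt uncrosses the tour and reaches the optimal length 4.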

We investigate to what extent interpolation mechanisms based on the nearest-neighbour rule (NNR) can support cancer research. The main objective is to use the NNR to predict the likelihood of tumorigenesis based on given risk factors. By using a genetic algorithm to optimize the parameters of the nearest-neighbour prediction, the performance of this interpolation method can be improved substantially. Furthermore, it is possible to detect risk factors which are hardly or not at all relevant to tumorigenesis. Our preliminary studies demonstrate that NNR-based interpolation is a simple tool that nevertheless has enough potential to be seriously considered for cancer research or related research.
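The nearest-neighbour rule underlying the prediction can be sketched as a plain majority vote among the k closest training points; the toy data and the choice of Euclidean distance are illustrative, not taken from the paper:

```python
import math

def knn_predict(x, data, k=3):
    """Nearest-neighbour rule: predict the label of x as the majority
    label among its k nearest training points (Euclidean distance)."""
    neighbours = sorted(data, key=lambda p: math.dist(p[0], x))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

# hypothetical training set: (risk-factor vector, outcome label)
data = [((0, 0), 'low'), ((0, 1), 'low'), ((1, 0), 'low'),
        ((5, 5), 'high'), ((5, 6), 'high')]
```

A genetic algorithm, as in the paper, would then tune parameters such as k or per-feature distance weights to maximize predictive accuracy.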

We present a general framework for developing search heuristics for automated theorem provers. This framework allows for the construction of heuristics that are on the one hand able to replay (parts of) a given proof found in the past but are on the other hand flexible enough to deviate from the given proof path in order to solve similar proof problems. We substantiate the abstract framework by presenting three distinct techniques for learning appropriate search heuristics based on so-called features. We demonstrate the usefulness of these techniques in the area of equational deduction. Comparisons with the renowned theorem prover Otter validate the applicability and strength of our approach.

Software Products As Objects
(1997)

This paper describes our experiences in modeling entire software products (trees of software files) as objects. Container pnodes (product nodes) have user-defined, Internet-unique names, data types, and methods (operations). Pnodes can contain arbitrary collections of software files that represent programs, libraries, documents, or other software products. Pnodes can contain multiple software products, so that header files, libraries, and program products may all be stored within one pnode. Pnodes can contain views that list other pnodes in order to form large conceptual structures of pnodes. Typical pnode-object methods include: fetching and storing into version-controlled repositories; dynamic analysis of pnode contents to generate makefiles of arbitrary complexity; local automated build operations; Internet-scalable distributed repository synchronizations; Internet-scalable, multi-platform, distributed build operations; extraction and generation of online API documentation; spell checking of document pnodes; and so on. Since methods are user-defined, they can be arbitrarily complex. Modelling software products as objects provides a large amount of effort leverage, since one person can define the methods and many people can use them in extensively automated ways.

Techniques for modular software design based on software agents are presented. The conceptual designs are domain-independent and exploit specific domain aspects using multiagent AI. The stages of conceptualization, design and implementation are defined by new techniques coordinated by objects. Software systems are designed by knowledge acquisition, specification, and multiagent implementations.

Like other industries, the aircraft industry is under high pressure to meet drastically increased customer goals for market price and flexibility, while at the same time shareholders demand short-term profit guarantees. Daimler-Benz Aerospace Airbus has met this challenge using business process reengineering methods, which led to total company restructuring from functional orientation to customer and product orientation. This paper shows how business process modelling techniques have been applied. In particular, concurrent engineering methods are used to integrate the various disciplines involved, from market analysts through design and commercial staff to industrialization staff.

Many development processes, such as those needed in the design of large software systems, are based primarily on the knowledge of the employees entrusted with the development. With the growing complexity of design tasks and the growing number of employees in a project, coordinating and distributing this knowledge becomes ever more problematic. For this reason there are increasing attempts to store and manage the employees' knowledge in electronic form, i.e. in computers. Since the design of a complex system is also modeled on the computer, the required knowledge is immediately available and can be drawn upon for decision support. Especially in the planning of large projects, however, decisions are often pending that can only be made later, during execution. Since common workflow management systems usually require a complete model before the execution of a project model can begin, this approach has turned out to be rather unsuitable precisely for large projects.

Skyrme Sphalerons of an O(3)-σ Model and the Calculation of Transition Rates at Finite Temperature
(1997)

The reduced O(3)-σ model with an O(3) → O(2) symmetry-breaking potential is considered with an additional Skyrmionic term, i.e. a totally antisymmetric quartic term in the field derivatives. This Skyrme term does not affect the classical static equations of motion which, however, allow an unstable sphaleron solution. Quantum fluctuations around the static classical solution are considered for the determination of the rate of thermally induced transitions between topologically distinct vacua mediated by the sphaleron. The main technical effect of the Skyrme term is to produce an extra measure factor in one of the fluctuation path integrals which is therefore evaluated using a measure-modified Fourier-Matsubara decomposition (this being one of the few cases permitting this explicit calculation). The resulting transition rate is valid in a temperature region different from that of the original Skyrme-less model, and the crossover from transitions dominated by thermal fluctuations to those dominated by tunneling at the lower limit of this range depends on the strength of the Skyrme coupling.

Sudakov's typical marginals, random linear functionals and a conditional central limit theorem
(1997)

V.N. Sudakov [Sud78] proved that the one-dimensional marginals of a high-dimensional second order measure are close to each other in most directions. Extending this and a related result in the context of projection pursuit of P. Diaconis and D. Freedman [Dia84], we give for a probability measure P and a random (a.s.) linear functional F on a Hilbert space simple sufficient conditions under which most of the one-dimensional images of P under F are close to their canonical mixture, which turns out to be almost a mixed normal distribution. Using the concept of approximate conditioning we deduce a conditional central limit theorem (theorem 3) for random averages of triangular arrays of random variables which satisfy only fairly weak asymptotic orthogonality conditions.

Primary decomposition of an ideal in a polynomial ring over a field belongs to the indispensable theoretical tools in commutative algebra and algebraic geometry. Geometrically it corresponds to the decomposition of an affine variety into irreducible components and is, therefore, also an important geometric concept. The decomposition of a variety into irreducible components is, however, slightly weaker than the full primary decomposition, since the irreducible components correspond only to the minimal primes of the ideal of the variety, which is a radical ideal. The embedded components, although invisible in the decomposition of the variety itself, are responsible for many geometric properties, in particular if we deform the variety slightly. Therefore, they cannot be neglected and the knowledge of the full primary decomposition is important also in a geometric context. In contrast to its theoretical importance, one finds in mathematical papers only very few concrete examples of non-trivial primary decompositions, because carrying out such a decomposition by hand is almost impossible. This experience corresponds to the fact that providing efficient algorithms for primary decomposition of an ideal I ⊆ K[x_1, ..., x_n], K a field, is also a difficult task and still one of the big challenges for computational algebra and computational algebraic geometry. All known algorithms require Gröbner bases respectively characteristic sets and multivariate polynomial factorization over some (algebraic or transcendental) extension of the given field K. The first practical algorithm for computing the minimal associated primes is based on characteristic sets and the Ritt-Wu process ([R1], [R2], [Wu], [W]); the first practical and general primary decomposition algorithm was given by Gianni, Trager and Zacharias [GTZ]. New ideas from homological algebra were introduced by Eisenbud, Huneke and Vasconcelos in [EHV].
Recently, Shimoyama and Yokoyama [SY] provided a new algorithm, using Gröbner bases, to obtain the primary decomposition from the given minimal associated primes. In the present paper we present all four approaches together with some improvements and with detailed comparisons, based upon an analysis of 34 examples using the computer algebra system SINGULAR [GPS]. Since primary decomposition is a fairly complicated task, it is best explained by dividing it into several subtasks, in particular since sometimes only one of these subtasks is needed in practice. The paper is organized in such a way that we consider the subtasks separately and present the different approaches of the above-mentioned authors, with several tricks and improvements incorporated. Some of these improvements and the combination of certain steps from the different algorithms are essential for improving the practical performance.

\(C^0\)-scalar-type spectrality criteria for operators \(A\) whose resolvent set contains the negative reals are provided. The criteria are given in terms of growth conditions on the resolvent of \(A\) and the semigroup generated by \(A\). These criteria characterize scalar-type operators on the Banach space \(X\) if and only if \(X\) has no subspace isomorphic to the space of complex null sequences.

In the Banach space \(c_0\) there exists a continuous function of bounded semivariation which does not correspond to a countably additive vector measure. This result is in contrast to the scalar case, and it has consequences for the characterization of scalar-type operators. Besides this negative result, we introduce the notion of functions of unconditionally bounded variation, which are exactly the generators of countably additive vector measures.

A formula suitable for a quantitative evaluation of the tunneling effect in a ferromagnetic particle is derived with the help of the instanton method. The tunneling between the n-th degenerate states of neighboring wells is dominated by a periodic pseudoparticle configuration. The low-lying level splitting previously obtained with the LSZ method in field theory, in which the tunneling is viewed as the transition of n bosons induced by the usual (vacuum) instanton, is recovered. The observation made with our new result is that the tunneling effect increases for excited states. The results should be useful in analyzing results of experimental tests of macroscopic quantum coherence in ferromagnetic particles.

The tunneling splitting of the energy levels of a ferromagnetic particle in the presence of an applied magnetic field - previously derived only for the ground state with the path integral method - is obtained in a simple way from Schrödinger theory. The origin of the factors entering the result is clearly understood, in particular the effect of the asymmetry of the barriers of the potential. The method should appeal particularly to experimentalists searching for evidence of macroscopic spin tunneling.

Due to continuously increasing demands in the area of advanced robot control, it has become necessary to speed up the computation. One way to reduce the computation time is to distribute the computation onto several processing units. In this survey we present different approaches to the parallel computation of robot kinematics and Jacobians, discussing both the forward and the inverse problem. We introduce a classification scheme and classify the references according to this scheme.

It is of basic interest to assess the quality of the decisions of a statistician, based on the outcoming data of a statistical experiment, in the context of a given model class P of probability distributions. The statistician picks a particular distribution P, suffering a loss by not picking the 'true' distribution P'. There are several relevant loss functions, one being based on the relative entropy function or Kullback-Leibler information distance. In this paper we prove a general 'minimax risk equals maximin (Bayes) risk' theorem for the Kullback-Leibler loss under the hypothesis of a dominated and compact family of distributions over a Polish observation space with suitably integrable densities. We also find that there is always an optimal Bayes strategy (i.e. a suitable prior) achieving the minimax value. Further, we see that every such minimax optimal strategy leads to the same distribution P in the convex closure of the model class. Finally, we give some examples to illustrate the results and to indicate how the minimax result is reflected in the structure of least favorable priors. This paper is mainly based on parts of the author's doctoral thesis.

In this note, answering a question of N. Maslova, we give a two-dimensional elementary example of the phenomenon indicated in the title. Perhaps this simple example may serve as an object of comparison for more refined models like in the theory of kinetic differential equations where similar questions still seem to be unsettled.

The observation of an ergodic Markov chain asymptotically allows perfect identification of the transition matrix. In this paper we determine the rate of the information contained in the first n observations, provided the unknown transition matrix belongs to a known finite set. As an essential tool we prove new refinements of the large deviation theory of the empirical pair measure of finite Markov chains. Keywords: Markov Chain, Entropy, Bayes risk, Large Deviations.

An analogue of the classical Riemann-Siegel integral formula for Dirichlet series associated to cusp forms is developed. As an application of the formula, we give a comparatively simple proof of the approximate functional equation for this type of Dirichlet series.

We show that the occupation measure on the path of a planar Brownian motion run for an arbitrary finite time interval has an average density of order three with respect to the gauge function t^2 log(1/t). This is a surprising result as it seems to be the first instance where gauge functions other than t^s and average densities of order higher than two appear naturally. We also show that the average density of order two fails to exist, and prove that the density distributions, or lacunarity distributions, of order three of the occupation measure of a planar Brownian motion are gamma distributions with parameter 2.

We present a method for making use of past proof experience called flexible re-enactment (FR). FR is actually a search-guiding heuristic that uses past proof experience to create a search bias. Given a proof P of a problem solved previously that is assumed to be similar to the current problem A, FR searches for P and in the "neighborhood" of P in order to find a proof of A. This heuristic use of past experience has certain advantages that make FR quite profitable and give it a wide range of applicability. Experimental studies substantiate and illustrate this claim. This work was supported by the Deutsche Forschungsgemeinschaft (DFG).