### Refine

#### Document Type

- Doctoral Thesis (1288)
- Preprint (1179)
- Report (474)
- Article (240)
- Periodical Part (219)
- Master's Thesis (104)
- Working Paper (80)
- Conference Proceeding (30)
- Diploma Thesis (29)
- Study Thesis (20)

#### Language

- English (2328)
- German (1410)
- Spanish (4)
- Multiple languages (1)

#### Keywords

- AG-RESY (64)
- PARO (31)
- Stadtplanung (26)
- Modellierung (22)
- Case-Based Reasoning (20)
- Simulation (19)
- Visualisierung (18)
- SKALP (16)
- CoMo-Kit (15)
- Wavelet (14)

#### Faculty / Organisational entity

- Fachbereich Mathematik (1048)
- Fachbereich Informatik (809)
- Fachbereich Chemie (306)
- Fachbereich Physik (266)
- Fachbereich Maschinenbau und Verfahrenstechnik (262)
- Fraunhofer (ITWM) (222)
- Fachbereich Sozialwissenschaften (213)
- Fachbereich Elektrotechnik und Informationstechnik (124)
- Fachbereich Biologie (107)
- Fachbereich ARUBI (83)

A Case Study on Specification, Detection and Resolution of IN Feature Interactions with Estelle
(1994)

We present an approach for the treatment of feature interactions in Intelligent Networks. The approach is based on the formal description technique Estelle and consists of three steps. For the first step, a specification style supporting the integration of additional features into a basic service is introduced. As a result, feature integration is achieved by adding specification text, i.e. on a purely syntactical level. The second step is the detection of feature interactions resulting from the integration of additional features. A formal criterion is given that can be used for the automatic detection of a particular class of feature interactions. In the third step, previously detected feature interactions are resolved. An algorithm has been devised that allows the automatic incorporation of high-level design decisions into the formal specification. The presented approach is applied to the Basic Call Service and several supplementary interacting features.

A large set of criteria to evaluate formal methods for reactive systems is presented. To make this set more comprehensible, it is structured according to a Concept-Model of formal methods. It is made clear that it is necessary to make the catalogue more specific before applying it, and some of the steps needed to do so are explained. As an example, the catalogue is applied within the context of the application domain of building automation systems to three different formal methods: SDL, statecharts, and a temporal logic.

In this paper we give the definition of a solution concept in multicriteria combinatorial optimization. We show how Pareto, max-ordering and lexicographically optimal solutions can be incorporated in this framework. Furthermore we state some properties of lexicographic max-ordering solutions, which combine features of these three kinds of optimal solutions. Two of these properties, which are desirable from a decision maker's point of view, are satisfied if and only if the solution concept is that of lexicographic max-ordering.
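The lexicographic max-ordering idea from the abstract above can be illustrated with a small sketch: each solution's cost vector is sorted in non-increasing order and the sorted vectors are compared lexicographically. This assumes a minimization setting with illustrative data, not any instance from the paper.

```python
def sorted_desc(costs):
    """Sort a cost vector in non-increasing order (worst criterion first)."""
    return tuple(sorted(costs, reverse=True))

def lex_max_ordering(solutions):
    """Return the solutions whose descending-sorted cost vectors are
    lexicographically minimal (hypothetical minimization setting)."""
    best = min(sorted_desc(c) for c in solutions.values())
    return [s for s, c in solutions.items() if sorted_desc(c) == best]

# Three solutions with two criteria each (costs to be minimized).
solutions = {"A": (4, 1), "B": (3, 3), "C": (2, 4)}
# Sorted vectors: A -> (4, 1), B -> (3, 3), C -> (4, 2);
# the lexicographic minimum is (3, 3), so B is chosen.
print(lex_max_ordering(solutions))  # ['B']
```

Note that B is also max-ordering optimal (its worst criterion, 3, is smallest) and Pareto optimal, reflecting the combination of properties described above.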

In this paper we develop a data-driven mixture of vector autoregressive models with exogenous components. The process is assumed to change regimes according to an underlying Markov process. In contrast to the hidden Markov setup, we allow the transition probabilities of the underlying Markov process to depend on past time series values and exogenous variables. Such processes have potential applications to modeling brain signals. For example, brain activity at time t (measured by electroencephalograms) can be modeled as a function of both its past values and exogenous variables (such as visual or somatosensory stimuli). Furthermore, we establish stationarity, geometric ergodicity and the existence of moments for these processes under suitable conditions on the parameters of the model. Such properties are important for understanding the stability properties of the model as well as deriving the asymptotic behavior of various statistics and model parameter estimators.
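A minimal simulation sketch of such a regime-switching process, assuming a two-regime AR(1) with one exogenous input and a logistic link for the self-transition probability; all parameter values and the specific link function are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_msvarx(T, phi=(0.9, -0.5), beta=(0.2, 0.8), gamma=2.0):
    """Simulate a two-regime AR(1) with an exogenous input x_t.
    Unlike a hidden Markov model, the probability of staying in the
    current regime depends on the previous observation via a logistic
    link (illustrative choice)."""
    y = np.zeros(T)
    x = rng.normal(size=T)  # exogenous stimulus series
    s = 0                   # current regime
    for t in range(1, T):
        # self-transition probability driven by the past time series value
        p_stay = 1.0 / (1.0 + np.exp(-gamma * abs(y[t - 1])))
        if rng.random() > p_stay:
            s = 1 - s
        y[t] = phi[s] * y[t - 1] + beta[s] * x[t] + 0.1 * rng.normal()
    return y

y = simulate_msvarx(500)
print(y.shape)  # (500,)
```

The stationarity and ergodicity results described in the abstract concern exactly this kind of feedback, where past values of y influence the regime dynamics.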

A new approach for modelling time that does not rely on the concept of a clock is proposed. In order to establish a notion of time, system behaviour is represented as a joint progression of multiple threads of control, which satisfies a certain set of axioms. We show that the clock-independent time model is related to the well-known concept of a global clock and argue that both approaches establish the same notion of time.

Coloring terms (rippling) is a technique developed for inductive theorem proving which uses syntactic differences of terms to guide the proof search. Annotations (colors) to terms are used to maintain this information. This technique has several advantages, e.g. it is highly goal oriented and involves little search. In this paper we give a general formalization of coloring terms in a higher-order setting. We introduce a simply-typed lambda calculus with color annotations and present an appropriate (pre-)unification algorithm. Our work is a formal basis to the implementation of rippling in a higher-order setting which is required e.g. in case of middle-out reasoning. Another application is in the construction of natural language semantics, where the color annotations rule out linguistically invalid readings that are possible using standard higher-order unification.

This paper develops a sound and complete transformation-based algorithm for unification in an extensional order-sorted combinatory logic supporting constant overloading and a higher-order sort concept. Appropriate notions of order-sorted weak equality and extensionality - reflecting order-sorted βη-equality in the corresponding lambda calculus given by Johann and Kohlhase - are defined, and the typed combinator-based higher-order unification techniques of Dougherty are modified to accommodate unification with respect to the theory they generate. The algorithm presented here can thus be viewed as a combinatory logic counterpart to that of Johann and Kohlhase, as well as a refinement of that of Dougherty, and provides evidence that combinatory logic is well-suited to serve as a framework for incorporating order-sorted higher-order reasoning into deduction systems aiming to capitalize on both the expressiveness of extensional higher-order logic and the efficiency of order-sorted calculi.

In this work, we analyze two important and simple models of short rates, namely the Vasicek and CIR models. The models are described and the sensitivity of the models with respect to changes in the parameters is studied. Finally, we give the results for the estimation of the model parameters using two different approaches.
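The two short-rate dynamics can be sketched with a simple Euler-Maruyama discretization. The parameter values below are illustrative, and the paper's estimation procedures are not reproduced here; only the model equations, dr = κ(θ - r)dt + σdW (Vasicek) and dr = κ(θ - r)dt + σ√r dW (CIR), are used.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_short_rate(model, r0, kappa, theta, sigma, T=1.0, n=250):
    """Euler-Maruyama path of a short-rate model.
    Vasicek: dr = kappa*(theta - r) dt + sigma dW
    CIR:     dr = kappa*(theta - r) dt + sigma*sqrt(r) dW
    """
    dt = T / n
    r = np.empty(n + 1)
    r[0] = r0
    for i in range(n):
        dw = rng.normal(scale=np.sqrt(dt))
        # CIR diffusion scales with sqrt(r); clamp at 0 for numerical safety
        diffusion = sigma * (np.sqrt(max(r[i], 0.0)) if model == "CIR" else 1.0)
        r[i + 1] = r[i] + kappa * (theta - r[i]) * dt + diffusion * dw
    return r

path = simulate_short_rate("Vasicek", r0=0.03, kappa=0.5, theta=0.04, sigma=0.01)
print(len(path))  # 251
```

Varying κ, θ, or σ in this sketch gives a rough feel for the parameter sensitivity the abstract refers to: κ controls the speed of mean reversion toward θ, while σ controls the noise level.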

Let \(a_1,\dots,a_n\) be independent random points in \(\mathbb{R}^d\), spherically symmetrically but not necessarily identically distributed. Let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_n\), and for any \(k\)-dimensional subspace \(L\subseteq \mathbb{R}^d\) let \(\mathrm{Vol}_L(X) := \lambda_k(L\cap X)\) be the volume of \(X\cap L\) with respect to the \(k\)-dimensional Lebesgue measure \(\lambda_k\), \(k=1,\dots,d\). Furthermore, let \(F^{(i)}(t) := \mathbf{Pr}(\Vert a_i \Vert_2\leq t)\), \(t \in \mathbb{R}^+_0\), be the radial distribution function of \(a_i\). We prove that the expectation functional \(\Phi_L(F^{(1)}, F^{(2)},\dots, F^{(n)}) := E(\mathrm{Vol}_L(X))\) is strictly decreasing in each argument, i.e. if \(F^{(i)}(t) \le G^{(i)}(t)\), \(t \in \mathbb{R}^+_0\), but \(F^{(i)} \not\equiv G^{(i)}\), we show \(\Phi(\dots, F^{(i)}, \dots) > \Phi(\dots,G^{(i)},\dots)\). The proof is done in the more general framework of continuous and \(f\)-additive polytope functionals.

Treating polyatomic gases in kinetic gas theory requires an appropriate molecule model taking into account the additional internal structure of the gas particles. In this paper we describe two such models, each arising from quite different approaches to this problem. A simulation scheme for solving the corresponding kinetic equations is presented and some numerical results to 1D shockwaves are compared.

Simulation methods like DSMC are an efficient tool to compute rarefied gas flows. Using supercomputers it is possible to include various real gas effects like vibrational energies or chemical reactions in a gas mixture. Nevertheless, it is still necessary to improve the accuracy of current simulation methods in order to reduce the computational effort. To support this task the paper presents a comparison of the classical DSMC method with the so-called Finite Pointset Method. This new approach was developed over several years in the framework of the European space project HERMES. The comparison given in the paper is based on two different test cases: a spatially homogeneous relaxation problem and a 2-dimensional axisymmetric flow problem at high Mach numbers.

We consider the problem of evacuating an urban area in the event of a natural or man-made disaster. There are several planning aspects that need to be considered in such a scenario, which are usually treated separately due to their computational complexity. These aspects include: Which shelters are used to accommodate evacuees? How should public transport be scheduled for transit-dependent evacuees? And how do public and individual traffic interact? Furthermore, besides evacuation time, the risk of the evacuation also needs to be considered.
We propose a macroscopic multi-criteria optimization model that includes all of these questions simultaneously. As a mixed-integer programming formulation cannot handle instances of real-world size, we develop a genetic algorithm of NSGA-II type that is able to generate feasible solutions of good quality in reasonable computation times.
We extend the applicability of these methods by also considering how to aggregate instance data, and how to generate solutions for the original instance starting from a reduced solution.
In computational experiments using real-world data modelling the cities of Nice in France and Kaiserslautern in Germany, we demonstrate the effectiveness of our approach and compare the trade-off between different levels of data aggregation.
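The multi-criteria character of the model can be illustrated by the Pareto-dominance test at the heart of NSGA-II-type algorithms: a candidate plan survives the non-dominated sorting step only if no other plan is at least as good in every objective and strictly better in one. The (evacuation time, risk) values below are illustrative, not real instance data.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_front(points):
    """Return the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (evacuation time, risk) pairs for candidate evacuation plans
plans = [(10, 5), (8, 7), (12, 3), (9, 6), (11, 6)]
# (11, 6) is dominated by (9, 6): no worse in risk, strictly better in time.
print(nondominated_front(plans))  # [(10, 5), (8, 7), (12, 3), (9, 6)]
```

NSGA-II repeatedly applies this test to rank a population into fronts; the genetic operators then breed new candidate plans from the best fronts.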

This paper describes a system that supports software development processes in virtual software corporations. A virtual software corporation consists of a set of enterprises that cooperate in projects to fulfill customer needs. Contracts are negotiated throughout the whole lifecycle of a software development project. These negotiations strongly influence the performance of a company. Therefore, it is useful to support negotiations and planning decisions with software agents. Our approach integrates software agent approaches for negotiation support with flexible multi-server workflow engines.

In this article we give a sufficient condition that a simply connected flexible body does not penetrate itself, if it is subjected to a continuous deformation. It is shown that the deformation map is automatically injective, if it is just locally injective and injective on the boundary of the body. Thereby, it is very remarkable that no higher regularity assumption than continuity for the deformation map is required. The proof exclusively relies on homotopy methods and the Jordan-Brouwer separation theorem.

A Consistent Large Eddy Approach for Lattice Boltzmann Methods and its Application to Complex Flows
(2015)

Lattice Boltzmann Methods have been shown to be promising tools for solving fluid flow problems. This is related to the advantages of these methods, among them the simplicity of handling complex geometries and the high efficiency in calculating transient flows. Lattice Boltzmann Methods are mesoscopic methods, based on discrete particle dynamics. This is in contrast to conventional Computational Fluid Dynamics methods, which are based on the solution of the continuum equations. Calculations of turbulent flows in engineering generally depend on modeling, since resolving all turbulent scales is, and will remain in the near future, far beyond computational possibilities. One of the most promising modeling approaches is the large eddy simulation, in which the large, inhomogeneous turbulence structures are directly computed and the smaller, more homogeneous structures are modeled.
In this thesis, a consistent large eddy approach for the Lattice Boltzmann Method is introduced. This large eddy model includes, besides a subgrid scale model, appropriate boundary conditions for wall resolved and wall modeled calculations. It also provides conditions for turbulent domain inlets. For the case of wall modeled simulations, a two layer wall model is derived in the Lattice Boltzmann context. Turbulent inlet conditions are achieved by means of a synthetic turbulence technique within the Lattice Boltzmann Method.
The proposed approach is implemented in the Lattice Boltzmann based CFD package SAM-Lattice, which has been created in the course of this work. SAM-Lattice is capable of calculating incompressible or weakly compressible, isothermal flows of engineering interest in complex three-dimensional domains. Special design targets of SAM-Lattice are a high degree of automation and high performance.
Validation of the suggested large eddy Lattice Boltzmann scheme is performed for pump intake flows, which have not yet been treated by LBM, even though this numerical method is very well suited for this kind of vortical flow in complicated domains. In general, applications of LBM to hydrodynamic engineering problems are rare. The results of the pump intake validation cases reveal that the proposed numerical approach is able to represent the very complex flows in the intakes both qualitatively and quantitatively. The findings provided in this thesis can serve as the basis for a broader application of LBM in hydrodynamic engineering problems.
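As a rough illustration of how a Smagorinsky-type subgrid-scale model enters a BGK lattice Boltzmann scheme, the local relaxation time can be augmented by an eddy viscosity. This sketch is generic, not taken from the thesis; it only uses the standard lattice-unit relation ν = c_s²(τ - 1/2) with c_s² = 1/3.

```python
def smagorinsky_tau(nu0, strain_rate_mag, c_s=0.17, delta=1.0):
    """Effective BGK relaxation time with a Smagorinsky eddy viscosity.
    In lattice units nu = (tau - 1/2)/3, hence tau = 3*nu + 1/2.
    nu0:             molecular (laminar) viscosity
    strain_rate_mag: |S|, magnitude of the resolved strain-rate tensor
    c_s, delta:      Smagorinsky constant and filter width (illustrative)
    """
    nu_t = (c_s * delta) ** 2 * strain_rate_mag  # eddy viscosity nu_t = (C_s*d)^2 |S|
    return 3.0 * (nu0 + nu_t) + 0.5

tau = smagorinsky_tau(nu0=0.01, strain_rate_mag=0.2)
# the eddy viscosity raises tau above the laminar value 3*0.01 + 0.5 = 0.53
print(round(tau, 5))  # 0.54734
```

In a wall-modeled setup such as the one described above, this local adjustment of τ is what couples the subgrid model to the collision step of the scheme.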

We propose a constraint-based approach for the two-dimensional rectangular packing problem with orthogonal orientations. This problem is to arrange a set of rectangles that can be rotated by 90 degrees into a rectangle of minimal size such that no two rectangles overlap. It arises in the placement of electronic devices during the layout of 2.5D System-in-Package integrated electronic systems. Moffitt et al. [8] solve the packing problem without orientations with a branch and bound approach and use constraint propagation. We generalize their propagation techniques to allow orientations. We compare our approach to a mixed-integer program and provide results showing that it outperforms the latter.
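The non-overlap constraint with 90-degree rotations can be sketched as a simple feasibility check over a candidate placement. The helper names and data are hypothetical, and the actual branch-and-bound search with constraint propagation is not shown.

```python
from itertools import combinations

def overlaps(p1, d1, p2, d2):
    """Axis-aligned rectangles overlap iff their projections intersect on both axes."""
    (x1, y1), (w1, h1), (x2, y2), (w2, h2) = p1, d1, p2, d2
    return x1 < x2 + w2 and x2 < x1 + w1 and y1 < y2 + h2 and y2 < y1 + h1

def feasible(positions, sizes, orientations):
    """Check a placement in which each rectangle may be rotated by 90 degrees.
    orientations[i] == 1 swaps width and height of rectangle i."""
    dims = [(h, w) if o else (w, h) for (w, h), o in zip(sizes, orientations)]
    return not any(overlaps(positions[i], dims[i], positions[j], dims[j])
                   for i, j in combinations(range(len(sizes)), 2))

sizes = [(4, 2), (2, 4)]
# Rotating the second rectangle to 4x2 lets the two stack without overlap.
print(feasible([(0, 0), (0, 2)], sizes, [0, 1]))  # True
print(feasible([(0, 0), (1, 1)], sizes, [0, 0]))  # False
```

The orientation flag is exactly the extra decision variable that the generalized propagation techniques must branch on.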

The notion of Q-Gorenstein smoothings was introduced by Kollár ([Ko], 6.2.3). This notion is essential for formulating Kollár's conjectures on smoothing components for rational surface singularities. He conjectures, loosely speaking, that every smoothing of a rational surface singularity can be obtained by blowing down a deformation of a partial resolution, this partial resolution having the property (among others) that the singularities occurring on it all have qG-smoothings. (For more details and precise statements see [Ko], ch. 6.) It is therefore of interest to construct singularities having qG-smoothings.

A counter-based read circuit tolerant to process variation for low-voltage operating STT-MRAM
(2016)

The capacity of embedded memory on LSIs has kept increasing. It is important to reduce the leakage power of embedded memory for low-power LSIs. In fact, the ITRS predicts that the leakage power in embedded memory will account for 40% of all power consumption by 2024 [1]. A spin transfer torque magneto-resistance random access memory (STT-MRAM) is promising for use as non-volatile memory to reduce the leakage power. It is useful because it can function at low voltages and has a lifetime of over 10^16 write cycles [2]. In addition, the STT-MRAM technology has a smaller bit cell than an SRAM, which makes the STT-MRAM suitable for use in high-density products [3-7]. The STT-MRAM uses a magnetic tunnel junction (MTJ). The MTJ has two states: a parallel state and an anti-parallel state, in which the magnetization directions of the MTJ's layers are the same or different, respectively. This pair of directions determines the MTJ's magneto-resistance value: the MTJ resistance becomes low in the parallel state and high in the anti-parallel state. The state of the MTJ can be changed by the current flowing through it. The MTJ potentially operates at less than 0.4 V [8]. On the other hand, it is difficult to design peripheral circuitry for an STT-MRAM array at such a low voltage. In this paper, we propose a counter-based read circuit that functions at 0.4 V and is tolerant of process variation and temperature fluctuation.