
In this work, we analyze two important and simple models of short rates, namely the Vasicek and CIR models. The models are described, and the sensitivity of the models with respect to changes in the parameters is studied. Finally, we give the results for the estimation of the model parameters using two different approaches.
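The Vasicek dynamics dr = kappa*(theta - r)*dt + sigma*dW lend themselves to a compact simulation. As an illustration only (this is not the estimation code used in the work; the function name and all parameter values are made up), a minimal Euler-Maruyama sketch might look like:

```python
import numpy as np

def simulate_vasicek(r0, kappa, theta, sigma, T=1.0, n=1000, seed=0):
    """Euler-Maruyama discretization of dr = kappa*(theta - r) dt + sigma dW."""
    rng = np.random.default_rng(seed)
    dt = T / n
    r = np.empty(n + 1)
    r[0] = r0
    for i in range(n):
        # drift pulls the rate toward the long-run mean theta
        r[i + 1] = (r[i] + kappa * (theta - r[i]) * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal())
    return r

path = simulate_vasicek(r0=0.03, kappa=0.5, theta=0.04, sigma=0.01)
```

The same loop carries over to the CIR model by scaling the diffusion term with the square root of the current rate.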

Algebraic Systems Theory
(2004)

Control systems are usually described by differential equations, but their properties of interest are most naturally expressed in terms of the system trajectories, i.e., the set of all solutions to the equations. This is the central idea behind the so-called "behavioral approach" to systems and control theory. On the other hand, the manipulation of linear systems of differential equations can be formalized using algebra, more precisely, module theory and homological methods ("algebraic analysis"). The relationship between modules and systems is very rich, in fact, it is a categorical duality in many cases of practical interest. This leads to algebraic characterizations of structural systems properties such as autonomy, controllability, and observability. The aim of these lecture notes is to investigate this module-system correspondence. Particular emphasis is put on the application areas of one-dimensional rational systems (linear ODE with rational coefficients), and multi-dimensional constant systems (linear PDE with constant coefficients).

In this paper, mathematical models for liquid films generated by impinging jets are discussed. Attention is focused on the interaction of the liquid film with an obstacle. G. I. Taylor [Proc. R. Soc. London Ser. A 253, 313 (1959)] found that the liquid film generated by impinging jets is very sensitive to the properties of the wire used as an obstacle. The aim of this presentation is to propose a modification of Taylor's model that makes it possible to simulate the film shape in cases when the angle between the jets differs from 180°. Numerical results obtained with the discussed models give two different shapes of the liquid film, similar to those in Taylor's experiments. These two shapes depend on the regime: either droplets are produced close to the obstacle or they are not. The difference between the two regimes becomes larger as the angle between the jets decreases. The existence of these two regimes can be very important for some applications of impinging jets in which the generated liquid film can come into contact with obstacles.

Granular systems in a solid-like state exhibit properties such as stress-dependent stiffness, dilatancy, yield, and incremental non-linearity that can be described within the continuum-mechanical framework. Different constitutive models have been proposed in the literature, based either on relations between some components of the stress tensor or on a quasi-elastic description. After a brief description of these models, the hyperelastic law recently proposed by Jiang and Liu [1] is investigated. In this framework, the stress-strain relation is derived from an elastic strain energy density in which the stability properties are linked to a Drucker-Prager yield criterion. Further, a numerical method based on finite element discretization and Newton-Raphson iterations is presented to solve the force balance equation. The 2D numerical examples presented in this work show that the stress distributions can be computed not only for triangular domains, as previously done in the literature, but also for more complex geometries. If the slope of the heap is greater than a critical value, numerical instabilities appear and no elastic solution can be found, as predicted by the theory. As the main result, the dependence of the material parameter ξ on the maximum angle of repose is established.
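The force balance is solved with Newton-Raphson iterations. As a generic illustration of that building block only (this is not the paper's finite element solver; the function names and the toy residual are hypothetical), a minimal sketch might look like:

```python
import numpy as np

def newton_raphson(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Generic Newton-Raphson iteration: solve residual(x) = 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = np.atleast_1d(residual(x))
        if np.linalg.norm(r) < tol:
            return x
        J = np.atleast_2d(jacobian(x))
        # Newton update: x <- x - J(x)^{-1} r(x)
        x = x - np.linalg.solve(J, r)
    raise RuntimeError("Newton-Raphson did not converge")

# toy scalar "balance equation": x^3 - 2x - 5 = 0
root = newton_raphson(lambda x: x**3 - 2*x - 5,
                      lambda x: 3*x**2 - 2, x0=[2.0])
```

In the actual setting, `residual` would be the discretized force balance and `jacobian` its tangent stiffness matrix; the failure to converge past the critical slope mirrors the loss of an elastic solution noted above.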

Wireless LANs operating within unlicensed frequency bands require random access schemes such as CSMA/CA, so that wireless networks from different administrative domains (for example, wireless community networks) may co-exist without central coordination, even when they happen to operate on the same radio channel. Yet it is evident that this lack of coordination leads to an inevitable loss in efficiency due to contention on the MAC layer. The interesting question is which efficiency may be gained by adding coordination to existing, unrelated wireless networks, for example by self-organization. In this paper, we present a methodology based on a mathematical programming formulation to determine the parameters (assignment of stations to access points, signal strengths, and channel assignment of both access points and stations) for a scenario of co-existing CSMA/CA-based wireless networks, such that the contention between these networks is minimized. We demonstrate how it is possible to solve this discrete, non-linear optimization problem exactly for small problems. For larger scenarios, we present a genetic algorithm specifically tuned for finding near-optimal solutions, and compare its results to theoretical lower bounds. Overall, we provide a benchmark on the minimum contention problem for coordination mechanisms in CSMA/CA-based wireless networks.
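The paper's genetic algorithm is specifically tuned to the minimum contention problem; as a generic illustration of the approach only (the function name, encoding, and toy conflict graph below are hypothetical, not taken from the paper), a minimal GA for the channel-assignment part might be sketched as:

```python
import random

def ga_channel_assignment(n_aps, conflicts, n_channels=3,
                          pop_size=30, generations=200, seed=1):
    """Toy GA: assign channels to access points so that as few
    conflicting (interfering) AP pairs as possible share a channel."""
    rng = random.Random(seed)

    def contention(assign):
        # fitness = number of interfering pairs on the same channel
        return sum(assign[a] == assign[b] for a, b in conflicts)

    pop = [[rng.randrange(n_channels) for _ in range(n_aps)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=contention)              # elitist selection
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_aps)     # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:            # random mutation
                child[rng.randrange(n_aps)] = rng.randrange(n_channels)
            children.append(child)
        pop = survivors + children
    best = min(pop, key=contention)
    return best, contention(best)

# hypothetical 5-AP ring of mutual interference
conflicts = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
best, cost = ga_channel_assignment(5, conflicts)
```

The real problem additionally optimizes station-to-AP assignment and signal strengths, which enlarges the chromosome but leaves the evolutionary loop unchanged.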

Radiotherapy is one of the major forms of cancer treatment. The patient is irradiated with high-energy photons or charged particles with the primary goal of delivering sufficiently high doses to the tumor tissue while simultaneously sparing the surrounding healthy tissue. The inverse search for the treatment plan giving the desired dose distribution is done by means of numerical optimization [11, Chapters 3-5]. For this purpose, the aspects of dose quality in the tissue are modeled as criterion functions, whose mathematical properties also determine the type of the corresponding optimization problem. Clinical practice makes frequent use of criteria that incorporate volumetric and spatial information about the shape of the dose distribution. The resulting optimization problems are known empirically to be of global type and are typically solved with generic global solver concepts, see for example [16]. The development of good global solvers for radiotherapy optimization problems is an important topic of research in this application; however, the structural properties of the underlying criterion functions are typically not taken into account in this context.

On the Complexity of the Uncapacitated Single Allocation p-Hub Median Problem with Equal Weights
(2007)

The Super-Peer Selection Problem is an optimization problem in network topology construction. It may be cast as a special case of a Hub Location Problem, more exactly an Uncapacitated Single Allocation p-Hub Median Problem with equal weights. We show that this problem is still NP-hard by reduction from Max Clique.

This report reviews selected image binarization and segmentation methods that are suitable for the processing of volume images. The focus is on thresholding, region growing, and shape-based methods. Rather than trying to give a complete overview of the field, we review the original ideas and concepts of selected methods, because we believe this information to be important for judging when and under what circumstances a segmentation algorithm can be expected to work properly.
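Among the reviewed families, global thresholding is the simplest. As a sketch of one classic representative, Otsu's method, which picks the cut point maximizing the between-class variance of the gray-level histogram (the function name and the synthetic two-level image are made up for illustration):

```python
import numpy as np

def otsu_threshold(image):
    """Otsu's global threshold: maximize between-class variance
    over all candidate gray-level cut points in [0, 255]."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0   # class means
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2        # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# synthetic bimodal "image": dark and bright voxels
img = np.array([40] * 100 + [210] * 100)
t = otsu_threshold(img)
binary = img >= t
```

For volume images the same histogram computation applies unchanged, since the method ignores spatial arrangement, which is precisely the limitation that motivates the region-growing and shape-based methods also reviewed here.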

W-Lisp Language Description
(1993)

W-Lisp [Wippermann 91] is a language used in the field of implementing high-level programming languages. Its application is not restricted to this area. Good readability of the W-Lisp notation is achieved through numerous borrowings from well-known imperative languages. W-Lisp programs can be executed within a Common Lisp system. All Lisp functions (including MCS) can be used in W-Lisp notation, so that the expressive power of Common Lisp [Steele 90] is, in this respect, also available in W-Lisp.

We consider a volume maximization problem arising in the gemstone cutting industry. The problem is formulated as a general semi-infinite program (GSIP) and solved using an interior-point method developed by Stein. It is shown that the convexity assumption needed for the convergence of the algorithm can be satisfied by appropriate modelling. Clustering techniques are used to reduce the number of container constraints, which is necessary to make the subproblems practically tractable. An iterative process consisting of GSIP optimization and adaptive refinement steps is then employed to obtain an optimal solution that is also feasible for the original problem. Some numerical results based on real-world data are also presented.

Wireless sensor networks are the driving force behind many popular and interdisciplinary research areas, such as environmental monitoring, building automation, healthcare, and assisted-living applications. Requirements like compactness, high integration of sensors, flexibility, and power efficiency are often very different and cannot all be fulfilled at once by state-of-the-art node platforms. In this paper, we present and analyze AmICA: a flexible, compact, easy-to-program, and low-power node platform. Developed from scratch and comprising a node, a basic communication protocol, and a debugging toolkit, it supports user-friendly rapid application development. The general-purpose nature of AmICA was evaluated in two practical applications with diametrically opposed requirements. Our analysis shows that AmICA nodes are 67% smaller than BTnodes, have five times more sensors than the Mica2Dot, and consume 72% less energy than the state-of-the-art TelosB mote in sleep mode.

The stationary heat equation is solved with periodic boundary conditions in geometrically complex composite materials with high contrast in the thermal conductivities of the individual phases. This is achieved by harmonic averaging and by explicitly introducing the jumps across the material interfaces as additional variables. The continuity of the heat flux yields the extra equations needed for these variables. A Schur-complement formulation for the new variables is derived and solved using the FFT and BiCGStab methods. The EJ-HEAT solver is given as a 3-page Matlab program in the Appendix. The C++ implementation is used for material design studies. It solves 3-dimensional problems with around 190 million variables on a 64-bit AMD Opteron desktop system in less than 6 GB of memory and in minutes to hours, depending on the contrast and the required accuracy. The approach may also be used to compute effective electric conductivities, because they are governed by the stationary heat equation.

Four aspects are important in the design of hydraulic filters. We distinguish between two cost factors and two performance factors. Regarding performance, filter efficiency and filter capacity are of interest. Regarding cost, there are production considerations such as spatial restrictions, material cost, and the cost of manufacturing the filter. The second type of cost is the operating cost, namely the pressure drop. Although simulations should and will ultimately deal with all four aspects, for the moment our work is focused on cost. The PleatGeo module generates three-dimensional computer models of a single pleat of a hydraulic filter interactively. PleatDict computes the pressure drop that will result for a particular design by direct numerical simulation. The evaluation of a new pleat design takes only a few hours on a standard PC, compared to the days or weeks needed for manufacturing and testing a new prototype of a hydraulic filter. The design parameters are the shape of the pleat, the permeabilities of one or several layers of filter media, and the geometry of a supporting netting structure that is used to keep the outflow area open. Besides the underlying structure generation and CFD technology, we present some trends regarding the dependence of the pressure drop on the design parameters that can serve as guidelines for the design of hydraulic filters. Compared to earlier two-dimensional models, the three-dimensional models can include a support structure.

A fully automatic procedure is proposed to rapidly compute the permeability of porous materials from their binarized microstructure. The discretization is a simplified version of Peskin's Immersed Boundary Method, where the forces are applied at the no-slip grid points. As needed for the computation of permeability, steady flows at zero Reynolds number are considered. Short run-times are achieved by eliminating the pressure and velocity variables using a fast inversion approach, based on the Fast Fourier Transform and four Poisson problems, on rectangular parallelepipeds with periodic boundary conditions. In reference to it being a fast method using fictitious or artificial forces, the implementation is called FFF-Stokes. Large-scale computations on 3D images are quickly and automatically performed to estimate the permeability of some sample materials. A Matlab implementation is provided to allow readers to experience the automation and speed of the method for realistic three-dimensional models.
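The FFT-based fast inversion relies on the fact that a Poisson problem with periodic boundary conditions diagonalizes in Fourier space. As a generic sketch of that ingredient only (this is not the FFF-Stokes code; the function name, grid size, and manufactured right-hand side are made up), a periodic Poisson solve might look like:

```python
import numpy as np

def poisson_periodic_fft(f, L=1.0):
    """Solve -laplace(u) = f with periodic boundary conditions on [0,L)^3
    via the FFT; f must have zero mean for solvability."""
    n = f.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0            # avoid division by zero for the mean mode
    u_hat = np.fft.fftn(f) / k2  # division is exact inversion in Fourier space
    u_hat[0, 0, 0] = 0.0         # fix the free constant (zero-mean solution)
    return np.real(np.fft.ifftn(u_hat))

# manufactured solution u(x,y,z) = sin(2*pi*x) on a 16^3 grid
n = 16
x = np.arange(n) / n
f = (2 * np.pi) ** 2 * np.sin(2 * np.pi * x)[:, None, None] * np.ones((n, n, n))
u = poisson_periodic_fft(f)
```

Because the cost is dominated by a handful of FFTs, each of the four Poisson solves scales as O(N log N) in the number of voxels, which is what makes the overall inversion fast.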

We study the efficient computation of Nash and strong equilibria in weighted bottleneck games. In such a game, different players interact on a set of resources, with every player choosing a subset of the resources as her strategy. The cost of a single resource depends on the total weight of the players choosing it, and the personal cost every player tries to minimize is the cost of the most expensive resource in her strategy, the bottleneck value. To derive efficient algorithms for finding Nash equilibria in these games, we generalize a transformation of a bottleneck game into a special congestion game introduced by Caragiannis et al. [1]. While investigating the transformation, we introduce so-called lexicographic games, in which the aim of a player is not only to minimize her bottleneck value but to lexicographically minimize the ordered vector of costs of all resources in her strategy. For the special case of network bottleneck games, i.e., where the set of resources is the edge set of a graph and the strategies are paths, we analyse different Greedy-type methods and their limitations for extension-parallel and series-parallel graphs.

Neural networks are currently (once again) a topical subject. Despite the often rather buzzword-like use of this term, it encompasses a variety of ideas, highly diverse methodological approaches, and concrete application possibilities. The underlying concepts are not new; they have a sometimes rather long tradition in adjacent disciplines such as biology, cybernetics, mathematics, and physics. Promising recent research results have moved this subject back into the center of interest and have revealed a multitude of new connections to computer science and neurobiology, as well as to other fields that at first glance seem far removed. The subject of the research field of neural networks is the study and construction of information-processing systems composed of many, sometimes only very primitive, uniform units, whose essential processing principle is the communication between these units, i.e., the transmission of messages or signals. A further characteristic of these systems is the highly parallel processing of information within the system. Besides the modeling of cognitive processes and the interest in how the human brain accomplishes complex cognitive feats, there is, beyond purely scientific interest, also an increasing amount of concrete use of neural networks in various technical application areas. The present report contains the written contributions of the participants of the seminar "Theory and Practice of Neural Networks", which was held by the Richter group at the University of Kaiserslautern in the summer semester of 1993. Particular importance was attached not only to covering the theoretical foundations of neural networks but also to discussing their use in practice. The choice of topics reflects a part of the wide spectrum of work in this field; no claim of completeness can therefore be made. In particular, it should be noted that for an intensive, in-depth study of a topic, the respective original papers should be consulted. Without the cooperation of the participants of the seminar, this report would not have been possible. We therefore thank Frank Hauptmann, Peter Conrad, Christoph Keller, Martin Buch, Philip Ziegler, Frank Leidermann, Martin Kronenburg, Michael Dieterich, Ulrike Becker, Christoph Krome, Susanne Meyfarth, Markus Schmitz, Kenan Çarki, Oliver Schweikart, Michael Schick, and Ralf Comes.

This report presents a generalization of tensor-product B-spline surfaces. The new scheme permits knots whose endpoints lie in the interior of the domain rectangle of a surface. This allows local refinement of the knot structure for approximation purposes as well as modeling surfaces with local tangent or curvature discontinuities. The surfaces are represented in terms of B-spline basis functions, ensuring affine invariance, local control, the convex hull property, and evaluation by de Boor's algorithm. A dimension formula for a class of generalized tensor-product spline spaces is developed.
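Since the surfaces remain evaluable by de Boor's algorithm, a minimal univariate sketch of that algorithm may be helpful (this is the standard textbook form, not the paper's generalized tensor-product scheme; the knot vector and control points below are made up):

```python
def de_boor(k, x, t, c, p):
    """De Boor's algorithm: evaluate a degree-p B-spline curve with
    knot vector t and control points c at parameter x, where k is the
    knot interval index with t[k] <= x < t[k+1]."""
    # local control points affecting the value at x
    d = [c[j + k - p] for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            # convex combination with the classic de Boor weights
            alpha = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

# evaluate the quadratic Bezier segment (knots 0,0,0,1,1,1) at x = 0.5
val = de_boor(k=2, x=0.5, t=[0.0] * 3 + [1.0] * 3, c=[0.0, 1.0, 0.0], p=2)
```

A tensor-product surface is evaluated by applying the same recursion first along one parameter direction and then along the other.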