### Refine

#### Year of publication

- 2006 (67)

#### Document Type

- Report (25)
- Doctoral Thesis (24)
- Preprint (14)
- Diploma Thesis (2)
- Conference Proceeding (1)
- Periodical Part (1)

#### Language

- English (67)

#### Keywords

- Elastic BVP (3)
- Approximation (2)
- Elastisches RWP (2)
- Elastoplastisches RWP (2)
- Hysterese (2)
- IMRT (2)
- Kontinuumsmechanik (2)
- Lokalisation (2)
- Multivariate Approximation (2)
- Optimization (2)

In the Iranian public media, it was widely reported that by the end of 2004, 380 hectares at the easternmost end of the Mianqala Peninsula (in northern Iran, on the southeastern coast of the Caspian Sea) had been sold to an organisation, with the result that "Asurada" Island is to be turned into a so-called "Tourist Village". The decision has been made and civil works are to begin. The planned village is a new settlement intended to operate within Mianqala, which has been an international biosphere reserve since June 1976 and an Iranian protected nature area since 1969. Considering the special status of the region as a biosphere reserve, this paper introduces the current situation of the island Āŝūrāda and the programme suggested by the aforementioned organisation. Subsequently, it tries to find an optimal answer to the question of whether Āŝūrāda is appropriate for such a purpose and to what extent interference through this new settlement is permissible. For this development, the paper considers the settlement's urban and architectural concept and then analyses its spatial development in terms of its influence on ecological resources, the rural structure, and financial as well as social aspects. Such a study is particularly necessary because of the chain of tourism influences, which will certainly introduce a new pattern of urban character in terms of quality and quantity. Finally, with the help of the case presented, the paper poses the question of whether a new urban pattern like this can endanger a traditional and, above all, nature-protected context.

In this thesis we present the implementation of the libraries center.lib and perron.lib for the non-commutative extension Plural of the Computer Algebra System Singular. The library center.lib was designed for the computation of elements of the centralizer of a set of elements and of the center of a non-commutative polynomial algebra. It also provides solutions to related problems. The library perron.lib contains a procedure for the computation of relations between a set of pairwise commuting polynomials. The thesis comprises the theory behind the libraries, aspects of the implementation and some applications of the developed algorithms. Moreover, we provide extensive benchmarks for the computation of elements of the center. Some of our examples had never been computed before.

In recent years, multiobjective evolutionary algorithms have matured into a flexible optimization tool which can be used in various areas of real-life applications. Practical experience shows that the algorithms typically need substantial adaptation to the specific problem for a successful application. Considering these requirements, we discuss various issues of the design and application of multiobjective evolutionary algorithms to real-life optimization problems. In particular, questions of problem-specific data structures and evolutionary operators and the determination of method parameters are treated. As a major issue, the handling of infeasible intermediate solutions is pointed out. Three application examples in the areas of constrained global optimization (electronic circuit design), semi-infinite programming (design centering problems), and discrete optimization (project scheduling) are discussed.

This paper analyzes and solves a patient transportation problem arising in several large hospitals. The aim is to provide an efficient and timely transport service to patients between several locations on a hospital campus. Transportation requests arrive in a dynamic fashion, and the solution methodology must therefore be capable of quickly inserting new requests into the current vehicle routes. In contrast to standard dial-a-ride problems, the problem under study contains several complicating constraints which are specific to a hospital context. The paper provides a detailed description of the problem and proposes a two-phase heuristic procedure capable of handling its many features. In the first phase a simple insertion scheme is used to generate a feasible solution, which is improved in the second phase with a tabu search algorithm. The heuristic procedure was extensively tested on real data provided by a German hospital. Results show that the algorithm is capable of handling the dynamic aspect of the problem and of providing high-quality solutions. In particular, it succeeded in reducing waiting times for patients while using fewer vehicles.
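As a rough illustration of the first phase, the sketch below inserts a single new transport request at the cheapest feasible position over all current routes. The route representation, the distance table and the `feasible` placeholder are assumptions of this sketch; the paper's actual routes carry time windows, capacities and further hospital-specific constraints, and the second phase would then improve the solution with tabu-search moves.

```python
# Hypothetical sketch of a cheapest-insertion step for one new request (pickup, dropoff).
# Routes are lists of location indices; dist is a 2D table of travel distances.

def route_length(route, dist):
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

def insert_request(routes, pickup, dropoff, dist, feasible=lambda r: True):
    """Insert (pickup, dropoff) at the cheapest feasible position over all routes."""
    best = None  # (extra_cost, route_index, new_route)
    for k, route in enumerate(routes):
        base = route_length(route, dist)
        for i in range(len(route) + 1):            # pickup position
            for j in range(i, len(route) + 1):     # dropoff position, after the pickup
                cand = route[:i] + [pickup] + route[i:j] + [dropoff] + route[j:]
                if not feasible(cand):             # time windows etc. would be checked here
                    continue
                extra = route_length(cand, dist) - base
                if best is None or extra < best[0]:
                    best = (extra, k, cand)
    if best is None:
        return False                               # no feasible insertion found
    routes[best[1]] = best[2]
    return True
```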

It is commonly believed that not all degrees of freedom are needed to produce good solutions for the treatment planning problem in intensity modulated radiation therapy (IMRT). However, typical methods to exploit this fact have either increased the complexity of the optimization problem or were heuristic in nature. In this work we introduce a technique based on adaptively refining variable clusters to successively attain better treatment plans. The approach creates approximate solutions based on smaller models that may get arbitrarily close to the optimal solution. Although the method is illustrated using a specific treatment planning model, the components constituting the variable clustering and the adaptive refinement are independent of the particular optimization problem.
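To make the cluster-and-refine idea tangible, here is a toy numerical sketch on a least-squares surrogate of a fluence-map problem; the dose matrix, the residual-based splitting rule and the use of `scipy.optimize.nnls` are illustrative assumptions, not the treatment planning model of the work.

```python
import numpy as np
from scipy.optimize import nnls

# Toy surrogate: minimize ||D x - p|| with x >= 0, where columns of D are beamlet
# dose contributions. Beamlets in one cluster share a single intensity value.

def solve_clustered(D, p, clusters):
    """Solve the reduced problem with one intensity per cluster, expand to beamlets."""
    Dc = np.column_stack([D[:, c].sum(axis=1) for c in clusters])
    y, _ = nnls(Dc, p)
    x = np.zeros(D.shape[1])
    for val, c in zip(y, clusters):
        x[c] = val
    return x

def refine(D, p, clusters, x):
    """Split the cluster whose beamlets disagree most about the current residual."""
    g = D.T @ (D @ x - p)                     # beamlet-wise gradient of 0.5*||Dx - p||^2
    spread = [np.ptp(g[c]) if len(c) > 1 else 0.0 for c in clusters]
    k = int(np.argmax(spread))
    c = sorted(clusters[k], key=lambda j: g[j])
    half = len(c) // 2
    return clusters[:k] + [c[:half], c[half:]] + clusters[k + 1:]

rng = np.random.default_rng(0)
D, p = rng.random((60, 16)), rng.random(60)
clusters = [list(range(16))]                  # start with all beamlets in one cluster
for _ in range(6):
    x = solve_clustered(D, p, clusters)
    print(len(clusters), "clusters, residual", round(float(np.linalg.norm(D @ x - p)), 4))
    clusters = refine(D, p, clusters, x)
```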

This thesis introduces so-called cone scalarising functions. They are by construction compatible with a partial order on the outcome space given by a cone. The quality of the parametrisations of the efficient set given by the cone scalarising functions is then investigated. Here, the focus lies on the (weak) efficiency of the generated solutions, the reachability of efficient points and the continuity of the solution set. Based on cone scalarising functions, Pareto Navigation, a novel interactive multiobjective optimisation method, is proposed. It changes the ordering cone to realise bounds on partial tradeoffs. Besides, its use of an equality constraint for the changing component of the reference point is a new feature. The efficiency of its solutions, the reachability of efficient solutions and continuity are then analysed. Potential problems are demonstrated using a critical example. Furthermore, the use of Pareto Navigation in a two-phase approach and for nonconvex problems is discussed. Finally, its application to intensity-modulated radiotherapy planning is described, and its realisation in a graphical user interface is shown.

The topic of this thesis is the coupling of an atomistic and a coarse scale region in molecular dynamics simulations, with the focus on the reflection of waves at the interface between the two scales and on the velocity of waves in the coarse scale region for a non-equilibrium process. First, two models from the literature for such a coupling, the concurrent coupling of length scales and the bridging scales method, are investigated for a one-dimensional system with harmonic interaction. It turns out that the concurrent coupling of length scales method leads to the reflection of fine scale waves at the interface, while the bridging scales method gives an approximated system that is not energy conserving. The velocity of waves in the coarse scale region is not correct in either model. To circumvent these problems, we present a coupling based on the displacement splitting of the bridging scales method together with choosing appropriate variables in orthogonal subspaces. This coupling allows the derivation of evolution equations for fine and coarse scale degrees of freedom, together with a reflectionless boundary condition at the interface, directly from the Lagrangian of the system. This leads to an energy-conserving approximated system with a clear separation between modeling errors and errors due to the numerical solution. Possible approximations in the Lagrangian and the numerical computation of the memory integral and other numerical errors are discussed. We further present a method to choose the interpolation from the coarse to the atomistic scale in such a way that the fine scale degrees of freedom in the coarse scale region can be neglected. The interpolation weights are computed by comparing the dispersion relations of the coarse scale equations and the fully atomistic system. With these new interpolation weights, the number of degrees of freedom can be drastically reduced without creating an error in the velocity of the waves in the coarse scale region. We give an alternative derivation of the new coupling with the Mori-Zwanzig projection operator formalism, and explain how the method can be extended to non-zero temperature simulations. For the comparison of the results of the approximated system with those of the fully atomistic system, we use a local stress tensor and the energy in the atomistic region. Examples for the numerical solution of the approximated system for harmonic potentials are given in one and two dimensions.

We present a constructive theory for locally supported approximate identities on the unit ball in \(\mathbb{R}^3\). The uniform convergence of the convolutions of the derived kernels with an arbitrary continuous function \(f\) to \(f\), i.e. the defining property of an approximate identity, is proved. Moreover, an explicit representation for a class of such kernels is given. The original publication is available at www.springerlink.com
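For readers less familiar with the terminology, the defining property referred to above can be written out as follows; the symbol \(K_J\) for the kernel family and the convolution notation are assumptions made only for this illustration:

\[
\lim_{J\to\infty}\,\sup_{x\in\mathbb{B}}\Bigl|\,(K_J * f)(x)-f(x)\,\Bigr|
=\lim_{J\to\infty}\,\sup_{x\in\mathbb{B}}\Bigl|\int_{\mathbb{B}}K_J(x,y)\,f(y)\,\mathrm{d}y-f(x)\Bigr|=0
\qquad\text{for every } f\in C(\mathbb{B}),
\]

where \(\mathbb{B}\) denotes the unit ball in \(\mathbb{R}^3\).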

The fast development of the financial markets in the last decade has led to the creation of a variety of innovative interest rate related products that require advanced numerical pricing methods. Examples in this respect are products with a complicated strong path-dependence such as a Target Redemption Note, a Ratchet Cap, a Ladder Swap and others. On the other hand, the one-factor Hull and White (1990) type short rate models that are standard in the literature allow only for a perfect correlation between all continuously compounded spot rates or Libor rates and are thus not suited for pricing innovative products depending on several Libor rates, such as, for example, a "steepener" option. One possible solution to this problem is delivered by two-factor short rate models, and in this thesis we consider a two-factor Hull and White (1990) type of short rate process derived from the Heath, Jarrow, Morton (1992) framework by limiting the volatility structure of the forward rate process to a deterministic one. In this thesis, we often choose to use a variety of modified (binomial, trinomial and quadrinomial) tree constructions as the main numerical pricing tool due to their flexibility and fast convergence and (when there is no closed-form solution) compare their results with fine grid Monte Carlo simulations. For the purpose of pricing the already mentioned innovative short-rate related products, we offer and examine two different lattice construction methods for the two-factor Hull-White type of short rate process which are able to deal easily both with modeling the mean-reversion of the underlying process and with the strong path-dependence of the priced options. Additionally, we prove that the so-called rotated lattice construction method overcomes the problem of negative "risk-neutral probabilities" that is typical for the existing two-factor tree constructions. With a variety of numerical examples, we show that this leads to stable results, especially in cases of high volatility parameters and negative correlation between the base factors (which is typically the case in reality). Further, noticing that Chan et al (1992) and Ritchken and Sankarasubramanian (1995) showed that option prices are sensitive to the level of the short rate volatility, we examine the pricing of European and American options where the short rate process has a volatility structure of a Cheyette (1994) type. In this context, we examine the application of the two offered lattice construction methods and compare their results with the Monte Carlo simulation ones for a variety of examples. Additionally, for the pricing of American options with the Monte Carlo method we extend and implement the simulation algorithm of Longstaff and Schwartz (2000). With a variety of numerical examples we compare again the stability and the convergence of the different lattice construction methods. Dealing with the problems of pricing strongly path-dependent options, we come across the cumulative Parisian barrier option pricing problem. We notice that in their classical form, the cumulative Parisian barrier options have been priced both analytically (in a quasi closed form) and with a tree approximation (based on the Forward Shooting Grid algorithm, see e.g. Hull and White (1993), Kwok and Lau (2001) and others). However, we offer an additional tree construction method which can be seen as a direct binomial tree integration that uses the analytically calculated conditional survival probabilities. The advantage of the offered method is, on the one hand, that the conditional survival probabilities are easier to calculate than the closed-form solution itself and, on the other hand, that this tree construction is very flexible in the sense that it allows easy incorporation of additional features such as, e.g., a forward starting one. The obtained results are better than the Forward Shooting Grid tree ones and are very close to the analytical quasi closed form solution. Finally, we turn our attention to pricing another type of innovative interest rate related product, namely the Longevity bond, whose coupon payments depend on the survival function of a given cohort. Due to the lack of a market for mortality, for the pricing of Longevity bonds we develop (following Korn, Natcheva and Zipperer (2006)) a framework that combines principles from both insurance and financial mathematics. Further on, we calibrate the existing models for the stochastic mortality dynamics to historical German data and additionally offer new stochastic extensions of the classical (deterministic) models of mortality such as the Gompertz and the Makeham model. Finally, we compare and analyze the results of the application of all considered models to the pricing of a Longevity bond on the longevity of German males.

In this thesis, we have dealt with two modeling approaches to credit risk, namely the structural (firm value) approach and the reduced form approach. In the former, the firm value is modeled by a stochastic process, and the first hitting time of this stochastic process to a given boundary defines the default time of the firm. In the existing literature, the stochastic process driving the firm value has generally been chosen as a diffusion process. Therefore, on the one hand it is possible to obtain closed form solutions for the pricing problems of credit derivatives, and on the other hand the optimal capital structure of a firm can be analysed by obtaining closed form solutions for the firm's corporate securities such as equity value, debt value and total firm value; see Leland (1994). We have extended this approach by modeling the firm value as a jump-diffusion process. The choice of the jump-diffusion process was a crucial step in obtaining closed form solutions for corporate securities. As a result, we have chosen a jump-diffusion process with double exponentially distributed jump heights, which enabled us to analyse the effects of jumps on the optimal capital structure of a firm. In the second part of the thesis, following the reduced form models, we have assumed that the default is triggered by the first jump of a Cox process. Further, following Schönbucher (2005), we have modeled the forward default intensity of a firm as a geometric Brownian motion and derived pricing formulas for credit default swap options in a more general setup than the ones in Schönbucher (2005).

Matter-wave Optics of Dark-state Polaritons: Applications to Interferometry and Quantum Information
(2006)

The present work "Materwave Optics with Dark-state Polaritons: Applications to Interferometry and Quantum Information" deals in a broad sense with the subject of dark-states and in particular with the so-called dark-state polaritons introduced by M. Fleischhauer and M. D. Lukin. The dark-state polaritons can be regarded as a combined excitation of electromagnetic fields and spin/matter-waves. Within the framework of this thesis the special optical properties of the combined excitation are studied. On one hand a new procedure to spatially manipulate and to increase the excitation density of stored photons is described and on the other hand the properties are used to construct a new type of Sagnac Hybrid interferometer. The thesis is devided into four parts. In the introduction all notions necessary to understand the work are described, e.g.: electromagnetically induced transparency (EIT), dark-state polaritons and the Sagnac effect. The second chapter considers the method developed by A. Andre and M. D. Lukin to create stationary light pulses in specially dressed EIT-media. In a first step a set of field equations is derived and simplified by introducing a new set of normal modes. The absorption of one of the normal modes leads to the phenomenon of pulse-matching for the other mode and thereby to a diffusive spreading of its field envelope. All these considerations are based on a homogeneous field setup of the EIT preparation laser. If this restriction is dismissed one finds that a drift motion is superimposed to the diffusive spreading. By choosing a special laser configuration the drift motion can be tailored such that an effective force is created that counteracts the spreading. Moreover, the force can not only be strong enough to compensate the diffusive spreading but also to exceed this dynamics and hence to compress the field envelope of the excitation. The compression can be discribed using a Fokker-Planck equation of the Ornstein-Uhlenbeck type. The investigations show that the compression leads to an excitation of higher-order modes which decay very fast. In the last section of the chapter this exciation will be discussed in more detail and conditions will be given how the excitation of higher-order modes can be avoided or even suppressed. All results given in the chapter are supported by numerical simulatons. In the third chapter the matterwave optical properties of the dark-state polaritons will be studied. They will be used to construct a light-matterwave hybrid Sagnac interferometer. First the principle setup of such an interferometer will be sketched and the relevant equations of motion of light-matter interaction in a rotating frame will be derived. These form the basis of the following considerations of the dark-state polariton dynamics with and without the influence of external trapping potentials on the matterwave part of the polariton. It will be shown that a sensitivity enhancement compared to a passive laser gyroscope can be anticipated if the gaseous medium is initially in a superfluid quantum state in a ring-trap configuration. To achieve this enhancement a simultaneous coherence and momentum transfer is furthermore necessary. In the last part of the chapter the quantum sensitivity limit of the hybrid interferometer is derived using the one-particle density matrix equations incorporating the motion of the particles. 
To this end the Maxwell-Bloch equations are considered perturbatively in the rotation rate of the noninertial frame of reference and the susceptibility of the considered 3-level \(\Lambda\)-type system is derived in arbitrary order of the probe-field. This is done to determine the optimum operation point. With its help the anticipated quantum sensitivity of the light-matterwave hybrid Sagnac interferometer is calculated at the shot-noise limit and the results are compared to state-of-the-art laser and matterwave Sagnac interferometers. The last chapter of the thesis originates from a joint theoretical and experimental project with the AG Bergmann. This chapter does no longer consider the dark-state polaritons of the last two chapters but deals with the more general concept of dark states and in particular with the transient velocity selective dark states as introduced by E. Arimondo et al. In the experiment we could for the first time measure these states. The chapter starts with an introduction into the concept of velocity selective dark states as they occur in a \(\Lambda\)-configuration. Then we introduce the transient velocity selective dark-states as they occur in an particular extension of the \(\Lambda\)-system. For later use in the simulations the relevant equations of motion are derived in detail. The simulations are based on the solution of the generalized optical Bloch equations. Finally the experimental setup and procedure are explained and the theoretical and experimental results are compared.

We show the numerical applicability of a multiresolution method based on harmonic splines on the 3-dimensional ball which allows the regularized recovery of the harmonic part of the Earth's mass density distribution from different types of gravity data, e.g. different radial derivatives of the potential, at various positions which need not be located on a common sphere. This approximated harmonic density can be combined with its orthogonal anharmonic complement, e.g. determined from the splitting function of free oscillations, to obtain an approximation of the whole mass density function. The applicability of the presented tool is demonstrated by several test calculations based on simulated gravity values derived from EGM96. The method yields a multiresolution in the sense that the localization of the constructed spline basis functions can be increased, which, in combination with more data, yields a higher resolution of the resulting spline. Moreover, we show that a locally improved data situation allows a highly resolved recovery in this particular area in combination with a coarse approximation elsewhere, which is an essential advantage of this method, e.g. compared to polynomial approximation.

We introduce a method to construct approximate identities on the 2-sphere which have an optimal localization. This approach can be used to substantially accelerate the calculation of approximations on the 2-sphere with a comparably small increase of the error. The localization measure in the optimization problem includes a weight function which can be chosen under some constraints. For each choice of weight function, existence and uniqueness of the optimal kernel are proved, as well as the generation of an approximate identity in the bandlimited case. Moreover, the optimally localizing approximate identity for a certain weight function is calculated and numerically tested.

In this thesis diverse problems concerning inflation-linked products are dealt with. To start with, two models for inflation are presented: a geometric Brownian motion for the consumer price index itself and an extended Vasicek model for the inflation rate. For both suggested models the pricing formulas of inflation-linked products are derived using risk-neutral valuation techniques. As a result, Black and Scholes type closed form solutions are calculated for a call option on the inflation index under the Brownian motion model and on the inflation evolution under the extended Vasicek model, as well as for an inflation-linked bond. These results have already been presented in Korn and Kruse (2004) [17]. In addition to these inflation-linked products, for both inflation models the pricing formulas of a European put option on inflation, an inflation cap and floor, an inflation swap and an inflation swaption are derived. Consequently, based on the derived pricing formulas and assuming a geometric Brownian motion process for the inflation index, different continuous-time portfolio problems as well as hedging problems are studied using martingale techniques as well as stochastic optimal control methods. These utility optimization problems are continuous-time portfolio problems in different financial market setups, additionally with a positive lower bound constraint on the final wealth of the investor. Summarizing all the optimization problems studied in this work yields a complete picture of the inflation-linked market and of both counterparts of market participants, sellers as well as buyers of inflation-linked financial products. One of the interesting results worth mentioning here is the fact that a regular risk-averse investor would like to sell rather than buy inflation-linked products, due to, for example, the high price of inflation-linked bonds and an underperformance of inflation-linked bonds compared to conventional risk-free bonds. The relevance of this observation is confirmed by investigating a simple optimization problem for the extended Vasicek process, where as a result we still have an underperforming inflation-linked bond compared to the conventional bond. This situation does not change when one switches to an optimization of expected utility from purchasing power, because in its nature it is only a change of measure with a different deflator. The negativity of the optimal portfolio process for a normal investor is in itself an interesting aspect, but it does not affect the optimality of handling inflation-linked products compared to the situation where these products are not included in the investment portfolio. In the following, hedging problems are considered as a modeling of the other half of the inflation market, that is, the buyers of inflation-linked products. Natural buyers of these inflation-linked products are obviously institutions that have future payment obligations that are linked to inflation. That is why we consider problems of hedging inflation-indexed payment obligations with different financial assets. The role of inflation-linked products in the hedging portfolio is shown to be very important by analyzing two alternative optimal hedging strategies, where in the first one an investor is allowed to trade an inflation-linked bond and in the second one he is not allowed to include an inflation-linked bond in his hedging portfolio. Technically this is done by restricting our original financial market, which is made up of a conventional bond, the inflation index and a stock correlated with the inflation index, to one where the inflation index is excluded. As a whole, this thesis presents a wide view of inflation-linked products: inflation modeling, pricing aspects of inflation-linked products, various continuous-time portfolio problems with inflation-linked products, as well as hedging of inflation-related payment obligations.

In this study, 27 marine bacteria were screened for the production of bioactive metabolites. Two strains from the surface of the soft coral Sinularia polydactyla, collected from the Red Sea, and three strains from different habitats in the North Sea were selected as promising candidates for the isolation of antimicrobial substances. A total of 50 compounds were isolated from the selected bacterial strains. Of these metabolites, 25 substances were known from natural sources, 10 substances were known as synthetic chemicals and are reported here as new natural products, and 13 metabolites are new. Two substances are still under elucidation. All new compounds were chemically and biologically characterized. Pseudoalteromonas sp. T268 produced simple phenol and oxindole derivatives. Production of homogentisic acid and WZ 268S-6 by this bacterium was affected by salinity stress. WZ 268S-6 shows antimicrobial and cytotoxic activities; its target is still unclear. The isolation of isatin from this strain points to the possibility of using this substance as a chemotaxonomical marker for Alteromonas-like bacteria. A large number of nitro-substituted aromatic compounds were isolated from both Salegentibacter sp. T436 and Vibrio sp. WMBA1-4. They may be derived from the metabolism of phenylalanine or tyrosine. From Salegentibacter sp. T436, 24 compounds were isolated, of which four compounds are new and six compounds were known as synthetic chemicals. WZ 436S-16 (dinitro-β-styrene) is the most potent antimicrobial and cytotoxic compound. It inhibits the oxygen uptake by N. coryli and causes apoptosis in the human promyelocytic leukaemia cell line (HL-60 cells). From Vibrio sp. WMBA1-4, 13 new alkaloids were isolated, of which four were known as synthetic products and are reported here as new substances from natural sources. The majority of these compounds show antimicrobial and cytotoxic activities. The cytotoxic activity of WMB4S-11 against the mouse lymphocytic leukaemia cell line (L1210 cells) is due to the inhibition of protein biosynthesis, while the remaining cytotoxic alkaloids have no effect on the synthesis of macromolecules in this cell line. The antibacterial activity of WMB4S-2, -11, -12, -13 and the antifungal activity of WMB4S-9 are not due to the inhibition of macromolecule biosynthesis or of the oxygen uptake by the microorganisms. The biological activity of these nitro-aromatic compounds from Salegentibacter sp. T436 and Vibrio sp. WMBA1-4 is influenced by the presence of a nitro group and its position with respect to the hydroxyl group, the number of nitro groups, and the type of substitutions on the side chain. In diaryl-maleimide derivatives, the type and position of substitution on the aryl rings and on the maleimide moiety, and the hydrophobicity of the aryl ring itself, lead to variations in the extent of the bioactivity of these derivatives. This is the first time that vibrindole (WMB4S-14) and turbomycin B or its noncationic form (WMB4S-15), isolated from Vibrio sp., are reported as cytotoxic compounds. WMB4S-15 inhibits the biosynthesis of macromolecules in L1210 cells. The structural similarity between some of the metabolites in this study and previously reported compounds from sponges, ascidians, and bryozoans indicates that a microbial origin of these compounds must be considered.

In this paper we present and investigate a stochastic model for the lay-down of fibers on a conveyor belt in the production process of nonwovens. The model is based on a stochastic differential equation taking into account the motion of the fiber under the influence of turbulence. A reformulation as a stochastic Hamiltonian system and an application of the stochastic averaging theorem lead to further simplifications of the model. Finally, the model is used to compute the distribution of functionals of the process that might be helpful for the quality assessment of industrial fabrics.
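Purely for illustration, the toy Euler-Maruyama simulation below generates a lay-down path whose direction angle is pulled back toward the belt origin and perturbed by white noise, and then evaluates a simple functional of the deposited points; the drift term and all parameter values are invented for this sketch and are not the specific model of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n, sigma, kappa = 1e-3, 50_000, 2.0, 1.0   # step size, steps, noise and drift strength
x = np.zeros((n, 2))                            # deposited fiber points
alpha = 0.0                                     # current direction angle
for k in range(1, n):
    to_origin = np.arctan2(-x[k - 1, 1], -x[k - 1, 0])
    dphi = (to_origin - alpha + np.pi) % (2 * np.pi) - np.pi   # wrapped angle difference
    alpha += kappa * dphi * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    x[k] = x[k - 1] + dt * np.array([np.cos(alpha), np.sin(alpha)])

# Example functional: distribution of the distance of deposited points from the origin.
print(np.percentile(np.hypot(x[:, 0], x[:, 1]), [50, 90, 99]))
```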

Connectedness of efficient solutions is a powerful property in multiple objective combinatorial optimization since it allows the construction of the complete efficient set using neighborhood search techniques. In this paper we show, however, that most of the classical multiple objective combinatorial optimization problems do not possess the connectedness property in general, including, among others, knapsack problems (and even several special cases of knapsack problems) and linear assignment problems. We also extend already known non-connectedness results for several optimization problems on graphs such as shortest path, spanning tree and minimum cost flow problems. Different concepts of connectedness are discussed in a formal setting, and numerical tests are performed for different variants of the knapsack problem to analyze the likelihood with which non-connected adjacency graphs occur in randomly generated problem instances.
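In the spirit of the numerical tests mentioned above, the following brute-force experiment counts how often the adjacency graph of the efficient set of a small random biobjective knapsack instance is connected; the adjacency rule used here (solutions differing in at most two items) is only one possible definition and is an assumption of this sketch.

```python
import itertools, random
from collections import deque

def efficient_set(v1, v2, w, capacity):
    """All nondominated solutions of a small biobjective 0/1 knapsack (maximization)."""
    n = len(w)
    feas = [x for x in itertools.product((0, 1), repeat=n)
            if sum(wi for wi, xi in zip(w, x) if xi) <= capacity]
    obj = {x: (sum(a for a, xi in zip(v1, x) if xi),
               sum(b for b, xi in zip(v2, x) if xi)) for x in feas}
    def dominated(x):
        px = obj[x]
        return any(py[0] >= px[0] and py[1] >= px[1] and py != px for py in obj.values())
    return [x for x in feas if not dominated(x)]

def connected(eff):
    """Breadth-first search on the adjacency graph (solutions differing in <= 2 positions)."""
    adj = {x: [y for y in eff if x != y and sum(a != b for a, b in zip(x, y)) <= 2]
           for x in eff}
    seen, queue = {eff[0]}, deque([eff[0]])
    while queue:
        for y in adj[queue.popleft()]:
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return len(seen) == len(eff)

random.seed(0)
trials, hits, n = 200, 0, 8
for _ in range(trials):
    v1 = [random.randint(1, 20) for _ in range(n)]
    v2 = [random.randint(1, 20) for _ in range(n)]
    w = [random.randint(1, 10) for _ in range(n)]
    hits += connected(efficient_set(v1, v2, w, capacity=sum(w) // 2))
print(f"connected adjacency graph in {hits} of {trials} random instances")
```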

This thesis discusses methods for the classification of finite projective planes via exhaustive search. In the main part the author classifies all projective planes of order 16 admitting a large quasiregular group of collineations. This is done by a complete search using the computer algebra system GAP. Computational methods for the construction of relative difference sets are discussed. These methods are implemented in a GAP package, which is available separately. As another result --found in cooperation with U. Dempwolff-- the projective planes defined by planar monomials are classified. Furthermore, the full automorphism groups of the non-translation planes defined by planar monomials are determined.

A translation contract is a binary predicate corrTransl(S,T) for source programs S and target programs T. It precisely specifies when T is considered to be a correct translation of S. A certifying compiler generates --in addition to the target T-- a proof for corrTransl(S,T). Certifying compilers are important for the development of safety critical systems to establish the behavioral equivalence of high-level programs with their compiled assembler code. In this paper, we report on a certifying compiler, its proof techniques, and the underlying formal framework developed within the proof assistant Isabelle/HOL. The compiler uses a tiny C-like language as input, has an optimization phase, and generates MIPS code. The underlying translation contract is based on a trace semantics. We investigate design alternatives and discuss our experiences.

Web-based authentication is a popular mechanism implemented by Wireless Internet Service Providers (WISPs) because it allows a simple registration and authentication of customers, while avoiding the high resource requirements of the new IEEE 802.11i security standard and the backward compatibility issues of legacy devices. In this work we demonstrate two different and novel attacks against web-based authentication. One attack exploits operational anomalies of low- and middle-priced devices in order to hijack wireless clients, while the other exploits an already known vulnerability within wired networks, which in dynamic wireless environments turns out to be even harder to detect and protect against.

Multileaf Collimators (MLC) consist of (currently 20-100) pairs of movable metal leaves which are used to block radiation in Intensity Modulated Radiation Therapy (IMRT). The leaves modulate a uniform source of radiation to achieve given intensity profiles. The modulation process is modeled by the decomposition of a given non-negative integer matrix into a non-negative linear combination of matrices with the (strict) consecutive ones property.
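To make the decomposition model concrete, the naive sketch below produces a decomposition of a non-negative integer matrix into segment matrices whose rows are (possibly empty) runs of consecutive ones; it ignores the optimization criteria, such as minimal beam-on time or minimal number of segments, that are the actual subject of this line of work.

```python
import numpy as np

def decompose(A):
    """Greedy decomposition A = sum_k c_k * S_k with consecutive-ones rows in each S_k."""
    A = np.array(A, dtype=int).copy()
    segments = []
    while A.any():
        S = np.zeros_like(A)
        run_minima = []
        for r, row in enumerate(A):
            pos = np.flatnonzero(row > 0)
            if pos.size == 0:
                continue                              # this leaf pair stays closed in the segment
            start = end = pos[0]                      # leftmost maximal run of positive entries
            while end + 1 < row.size and row[end + 1] > 0:
                end += 1
            S[r, start:end + 1] = 1
            run_minima.append(row[start:end + 1].min())
        c = int(min(run_minima))                      # largest weight keeping A non-negative
        A -= c * S
        segments.append((c, S))
    return segments

A = [[4, 5, 2, 0],
     [0, 3, 3, 1],
     [2, 2, 0, 0]]
parts = decompose(A)
assert (sum(c * S for c, S in parts) == np.array(A)).all()
print(len(parts), "segments")
```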

Over a period of 30 years, ITU-T's Specification and Description Language (SDL) has matured into a sophisticated formal modelling language for distributed systems and communication protocols. The language definition of SDL-2000, the latest version of SDL, is complex and difficult to maintain. Full tool support for SDL is costly to implement. Therefore, only subsets of SDL are currently supported by tools. These SDL subsets - called SDL profiles - already cover a wide range of systems, and are often sufficient in practice. In this report, we present our approach for extracting the formal semantics for SDL profiles from the complete SDL semantics. We then formalise the approach, present our SDL-profile tool, and report on our experiences.

For the last decade, optimization of beam orientations in intensity-modulated radiation therapy (IMRT) has been shown to be successful in improving the treatment plan. Unfortunately, the quality of a set of beam orientations depends heavily on its corresponding beam intensity profiles. Usually, a stochastic selector is used for optimizing beam orientation, and then a single objective inverse treatment planning algorithm is used for the optimization of beam intensity profiles. The overall time needed to solve the inverse planning for every random selection of beam orientations becomes excessive. Recently, considerable improvement has been made in optimizing beam intensity profiles by using multiple objective inverse treatment planning. Such an approach results in a variety of beam intensity profiles for every selection of beam orientations, making the dependence between beam orientations and their intensity profiles less important. We take advantage of this property to present a dynamic algorithm for beam orientation in IMRT which is based on multicriteria inverse planning. The algorithm approximates beam intensity profiles iteratively instead of doing it for every selection of beam orientations, saving a considerable amount of calculation time. Every iteration goes from an N-beam plan to a plan with N + 1 beams. Beam selection criteria are based on a score function that minimizes the deviation from the prescribed dose, in addition to a reject-accept criterion. To illustrate the efficiency of the algorithm it has been applied to an artificial example where optimality is trivial and to three real clinical cases: a prostate carcinoma, a tumor in the head and neck region and a paraspinal tumor. In comparison to the standard equally spaced beam plans, improvements are reported in all three clinical examples, in some cases even with fewer beams.
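A stripped-down version of the N-to-(N+1) step might look as follows; the candidate angles, the random toy dose matrices and the least-squares score are placeholders and not the clinical score function, reject-accept criterion or multicriteria machinery of the paper.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_vox, n_blt = 80, 6                                  # voxels, beamlets per beam (toy sizes)
candidates = list(range(0, 360, 20))                  # candidate gantry angles
dose = {a: rng.random((n_vox, n_blt)) for a in candidates}   # toy dose-influence matrices
prescribed = rng.random(n_vox)

def plan_score(beams):
    """Deviation from the prescribed dose for the best non-negative beamlet intensities."""
    D = np.hstack([dose[a] for a in beams])
    _, residual = nnls(D, prescribed)
    return residual

plan = []
while len(plan) < 5:                                  # grow the plan one beam at a time
    best = min((a for a in candidates if a not in plan),
               key=lambda a: plan_score(plan + [a]))
    plan.append(best)
    print("beams:", plan, " score:", round(float(plan_score(plan)), 4))
```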

Testing a new suspension based on real load data is performed on elaborate multi-channel test rigs. Usually, wheel forces and moments measured during driving maneuvers are reproduced on the rig. Because of the complicated interaction between rig and suspension, each new rig configuration has to prove its efficiency with respect to the requirements, and the configuration might be subject to optimization. This paper deals with modeling a new rig concept based on two hexapods. The real physical rig has been designed and has meanwhile been built by MOOG-FCS for VOLKSWAGEN. The aim of the simulation project reported here was twofold: First, the simulation of the rig together with real VOLKSWAGEN suspension models, at a time when the design was not yet finalized, was used to verify and optimize the desired properties of the rig. Second, the simulation environment was set up in a way that it can be used to prepare real tests on the rig. The model contains the geometric configuration as well as the hydraulics and the controller. It is implemented as an ADAMS/Car template and can be combined with different suspension models to get a complete assembly representing the entire test rig. Using this model, all steps required for a real test run, such as controller adaptation, drive file iteration and simulation, can be performed. Geometric or hydraulic parameters can be modified easily to improve the setup and adapt the system to the suspension and the load data.

In this article, we consider the quasistatic boundary value problems of linear elasticity and nonlinear elastoplasticity, with linear Hooke’s law in the elastic regime for both problems and with the linear kinematic hardening law for the plastic regime in the latter problem. We derive expressions and estimates for the difference of the solutions of both models, i.e. for the stresses, the strains and the displacements. To this end, we use the stop and play operators of nonlinear functional analysis. Further, we give an explicit example of a homotopy between the solutions of both problems.

A unified approach to Credit Default Swaption and Constant Maturity Credit Default Swap valuation
(2006)

In this paper we examine the pricing of arbitrary credit derivatives with the Libor Market Model with Default Risk. We show how to set up the Monte Carlo simulation efficiently and investigate the accuracy of closed-form solutions for Credit Default Swaps, Credit Default Swaptions and Constant Maturity Credit Default Swaps. In addition we derive a new closed-form solution for Credit Default Swaptions which allows for time-dependent volatility and an arbitrary correlation structure of default intensities.

In this paper we propose a finite volume discretization for the three-dimensional Biot poroelasticity system in multilayered domains. For stability reasons, staggered grids are used. The discretization accounts for the discontinuity of the coefficients across the interfaces between layers with different physical properties. Numerical experiments based on the proposed discretization showed second order convergence in the maximum norm for the primary as well as the flux unknowns of the system. An application example is presented as well.

This thesis deals with modeling aspects of generalized Newtonian and of non-Newtonian fluids, as well as with the development and validation of algorithms used in the simulation of such fluids. The main contribution in the modeling part is the introduction and analysis of a new model for generalized Newtonian fluids, where the constitutive equation is of an algebraic form. The distinction between shear and extensional viscosities leads to an anisotropic viscosity model. It can be considered as a natural extension of the well-known (isotropic viscosity) Carreau model, which deals only with the shear viscosity properties of the fluid. The proposed model additionally takes extensional viscosity properties into account. Numerical results show that the anisotropic viscosity model gives much better agreement with experimental observations than the isotropic one. Another contribution of the thesis consists of the development and analysis of robust and reliable algorithms for the simulation of generalized Newtonian fluids. For such fluids the momentum equations are strongly coupled through mixed derivatives appearing in the viscous term (unlike the case of Newtonian fluids). It is shown in this thesis that a careful treatment of those derivatives is essential in deriving robust algorithms. A modification of a standard SIMPLE-like algorithm is given, where all the viscous terms from the momentum equations are discretized in an implicit manner. Moreover, it is shown that a block diagonal preconditioner for the viscous operator is good enough to be used in simulations. Furthermore, different solution techniques, namely projection type methods (consisting of solving momentum equations and a pressure correction equation) and fully coupled methods (momentum and continuity equations are solved together), are compared. It is shown that explicit discretization of the mixed derivatives leads to stability problems. Further, analytical estimates of the eigenvalue distribution for three different preconditioners, applied to the transformed system arising after discretization and linearization of the momentum and continuity equations, are provided. We propose to apply a block Gauss-Seidel preconditioner to the transformed system. The analysis shows that this preconditioner is able to cluster eigenvalues around unity independent of the transformation step. This is not the case for the other preconditioners applied to the transformed system as discussed in the thesis. The block Gauss-Seidel preconditioner has also shown the best behavior (among all preconditioners discussed in the thesis) in numerical experiments. A further contribution consists of the comparison and validation of numerical algorithms applied in simulations of non-Newtonian fluids modeled by time integral constitutive equations. Numerical results from simulations of dilute polymer solutions, described by the integral Oldroyd B model, have shown very good quantitative agreement with the results obtained by the differential Oldroyd B counterpart in a 4:1 planar contraction domain at low Weissenberg numbers. In this case, the Weissenberg number is changed by changing the relaxation time. However, contrary to the differential Oldroyd B model, the integral one allows stable simulations also in the range of high Weissenberg numbers. Moreover, very good agreement with experimental observations has been achieved.
Simulations of concentrated polymer solutions (polystyrene and polybutadiene solutions), modeled by the integral Doi Edwards model supplemented by chain length fluctuations, have shown very good qualitative agreement with the results obtained by its differential approximation in a 4:1:4 constriction domain. Again, much higher Weissenberg numbers can be achieved when the integral model is used. Moreover, very good quantitative agreement with experimental data for the polystyrene solution is obtained for the first normal stress difference and the shear viscosity, defined here as the quotient of shear stress and shear rate. Finally, a comparison of the two methods used for approximating the time integral constitutive equation, namely the Deformation Field Method (DFM) and the Backward Lagrangian Particle Method (BLPM), is performed. In BLPM the particle paths are recalculated at every time step of the simulation, which has never been tried before. The results have shown that in the considered geometries both methods give similar results.

Selection of new projects is one of the major decision making activities in any company. Given a set of potential projects to invest in, a subset which best matches the company's strategy and internal resources has to be selected. In this paper, we propose a multicriteria model for portfolio selection of projects, where we take into consideration that each of the potential projects has several - usually conflicting - values.

Katja is a tool generating order-sorted recursive data types as well as position types for Java from specifications written in an enhanced ML-like notation. Katja's main features are its conciseness of specifications, the rich interface provided by the generated code and the immutability of types, which is atypical for Java. After several stages of extending and maintaining the Katja project, it became apparent that many changes had to be made. The original design of Katja was not prepared for the introduction of several backends, the introduction of position sorts and constant feature enhancements and bug fixes. With this report Katja reaches release status for the first time.

For the last decade, optimization of beam orientations in intensity-modulated radiation therapy (IMRT) has been shown to be successful in improving the treatment plan. Unfortunately, the quality of a set of beam orientations depends heavily on its corresponding beam intensity profiles. Usually, a stochastic selector is used for optimizing beam orientation, and then a single objective inverse treatment planning algorithm is used for the optimization of beam intensity profiles. The overall time needed to solve the inverse planning for every random selection of beam orientations becomes excessive. Recently, considerable improvement has been made in optimizing beam intensity profiles by using multiple objective inverse treatment planning. Such an approach results in a variety of beam intensity profiles for every selection of beam orientations, making the dependence between beam orientations and their intensity profiles less important. This thesis takes advantage of this property to accelerate the optimization process through an approximation of the intensity profiles that are used for multiple selections of beam orientations, saving a considerable amount of calculation time. A dynamic algorithm (DA) and an evolutionary algorithm (EA) for beam orientation in IMRT planning are presented. The DA automatically mimics the methods of beam's eye view and observer's view which are recognized in conventional conformal radiation therapy. The EA is based on a dose-volume histogram evaluation function introduced as an attempt to minimize the deviation between the mathematical and clinical optima. To illustrate the efficiency of the algorithms, they have been applied to different clinical examples. In comparison to the standard equally spaced beam plans, improvements are reported for both algorithms in all the clinical examples, even when, in some cases, fewer beams are used. A smaller number of beams is always desirable without compromising the quality of the treatment plan. It results in a shorter treatment delivery time, which reduces potential errors in terms of patient movements and decreases discomfort.

With the UML 2.0 standard, the Unified Modeling Language took a big step towards SDL, incorporating many features of the language. SDL is a mature and complete language with formal semantics. The Z.109 standard defines a UML Profile for SDL, mapping UML constructs to corresponding counterparts in SDL, giving them a precise semantics. In this report, we present a case study for the formalisation of the Z.109 standard. The formal definition makes the mapping precise and can be used to derive tool support.

This paper presents a method for approximating spherical functions from discrete data of a block-wise grid structure. The essential ingredients of the approach are scaling and wavelet functions within a biorthogonalisation process generated by locally supported zonal kernel functions. In consequence, geophysically and geodetically relevant problems involving rotation-invariant pseudodifferential operators become attackable. A multiresolution analysis is formulated enabling a fast wavelet transform similar to the algorithms known from one-dimensional Euclidean theory.

The stationary heat equation is solved with periodic boundary conditions in geometrically complex composite materials with high contrast in the thermal conductivities of the individual phases. This is achieved by harmonic averaging and explicitly introducing the jumps across the material interfaces as additional variables. The continuity of the heat flux yields the needed extra equations for these variables. A Schur-complement formulation for the new variables is derived that is solved using the FFT and BiCGStab methods. The EJ-HEAT solver is given as a 3-page Matlab program in the Appendix. The C++ implementation is used for material design studies. It solves 3-dimensional problems with around 190 million variables on a 64-bit AMD Opteron desktop system in less than 6 GB of memory and in minutes to hours, depending on the contrast and required accuracy. The approach may also be used to compute effective electric conductivities because they are governed by the stationary heat equation.
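As a small illustration of the harmonic-averaging ingredient (not of the EJ-HEAT solver itself), the conductivity assigned to a face between two voxels is typically the harmonic mean of the two cell values, which correctly throttles the flux across high-contrast interfaces; the periodic wrap below mirrors the periodic boundary conditions mentioned above, and the grid and values are invented for this sketch.

```python
import numpy as np

def face_conductivities(k):
    """Harmonic means of neighbouring voxel conductivities on a periodic voxel grid."""
    faces = []
    for axis in range(3):
        k_shift = np.roll(k, -1, axis=axis)   # periodic neighbour in the +axis direction
        faces.append(2.0 * k * k_shift / (k + k_shift))
    return faces                               # faces[axis] couples each cell with its +axis neighbour

# Two-phase example with a conductivity contrast of 1:10000.
k = np.where(np.random.default_rng(3).random((8, 8, 8)) < 0.5, 1.0, 1e4)
fx, fy, fz = face_conductivities(k)
print(float(fx.min()), float(fx.max()))        # interface faces stay close to the weak phase
```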

- naive examples which show drawbacks of the discrete wavelet transform and the windowed Fourier transform;
- adaptive partition (with a 'best basis' approach) of speech-like signals by means of local trigonometric bases with orthonormal windows;
- extraction of formant-like features from the cosine transform;
- further proceedings for the classification of vowels or voiced speech are suggested at the end.

The new international capital standard for credit institutions (“Basel II”) allows banks to use internal rating systems in order to determine the risk weights that are relevant for the calculation of the capital charge. Therefore, it is necessary to develop a system that covers the main practices and methods existing in the context of credit rating. The aim of this thesis is to give a suggestion for setting up a credit rating system, where the main techniques used in practice are analyzed, presenting some alternatives and considering the problems that can arise from a statistical point of view. Finally, we set up some guidelines on how to accomplish the challenge of credit scoring. The judgement of the quality of a credit with respect to the probability of default is called credit rating. A method based on a multi-dimensional criterion seems natural, due to the numerous effects that can influence this rating. However, owing to governmental rules, the tendency is that typically one-dimensional criteria will be required in the future as a measure of the creditworthiness or the quality of a credit. The problem described above can be resolved via a transformation of a multi-dimensional data set into a one-dimensional one while keeping some monotonicity properties and also keeping the loss of information (due to the loss of dimensionality) at a minimum level.
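As a minimal illustration of the monotonicity requirement mentioned above, a score built as a nonnegative weighted sum of risk factors that are all oriented as "larger means riskier" preserves the componentwise ordering of obligors; the factors and weights below are invented for this sketch and are not a recommendation for an actual scorecard.

```python
import numpy as np

weights = np.array([0.5, 0.3, 0.2])        # invented non-negative weights

def score(x):
    """One-dimensional credit score from a multi-dimensional risk-factor vector."""
    return float(weights @ x)

a = np.array([0.2, 0.1, 0.4])              # obligor A, componentwise <= obligor B
b = np.array([0.3, 0.1, 0.5])
assert score(a) <= score(b)                # monotonicity: A cannot be rated riskier than B
print(score(a), score(b))
```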

We consider optimal design problems for semiconductor devices which are simulated using the energy transport model. We develop a descent algorithm based on the adjoint calculus and present numerical results for a ballistic diode. Further, we compare the optimal doping profile with results computed on the basis of the drift diffusion model. Finally, we exploit the model hierarchy and test the space mapping approach, especially the aggressive space mapping algorithm, for the design problem. This yields a significant reduction of numerical costs and programming effort.

In this article, we give an explicit homotopy between the solutions (i.e. stress, strain, displacement) of the quasistatic linear elastic and nonlinear elastoplastic boundary value problem, where we assume a linear kinematic hardening material law. We give error estimates with respect to the homotopy parameter.

Error estimates for quasistatic global elastic correction and linear kinematic hardening material
(2006)

We consider in this paper the quasistatic boundary value problems of linear elasticity and nonlinear elastoplasticity with a linear kinematic hardening material. We derive expressions and estimates for the difference of the solutions (i.e. stress, strain and displacement) of both models. Further, we study the error between the elastoplastic solution and the solution of a postprocessing method that corrects the solution of the linear elastic problem in order to approximate the elastoplastic model.

In this article, we give some generalisations of existing Lipschitz estimates for the stop and the play operator with respect to an arbitrary convex and closed characteristic in a separable Hilbert space. We are especially concerned with the dependence of their outputs on different scalar products.

Traffic flow on road networks has been a continuous source of challenging mathematical problems. Mathematical modelling can provide an understanding of the dynamics of traffic flow and hence is helpful in organizing the flow through the network. In this dissertation, macroscopic models for the traffic flow in road networks are presented. The primary interest is the extension of the existing macroscopic road network models based on partial differential equations (PDE model). In order to overcome the difficulty of the high computational costs of the PDE model, an ODE model has been introduced. In addition, a steady-state traffic flow model on road networks, named the RSA model, has been discussed. To obtain the optimal flow through the network, cost functionals and corresponding optimal control problems are defined. The solution of these optimization problems provides information on the shortest path through the network subject to road conditions. The resulting constrained optimization problem is solved approximately by solving an unconstrained problem involving exact penalty functions and a penalty parameter. A good estimate of the threshold of the penalty parameter is given. A well-defined algorithm for solving a nonlinear, nonconvex equality and bound constrained optimization problem is introduced. The numerical results on the convergence history of the algorithm support the theoretical results. In addition to this, bottleneck situations in the traffic flow have been treated using a domain decomposition method (DDM). In particular, this method could be used to solve scalar conservation laws with discontinuous flux functions corresponding to other physical problems as well. The method is effective even when the flux function presents more than one discontinuity within the same spatial domain. It is found in the numerical results that the DDM is superior to other schemes and demonstrates good shock resolution.
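For orientation, the standard \(\ell_1\) exact-penalty construction referred to above has the following form (notation assumed here; the precise functional used in the dissertation may differ):

\[
\min_{l \le x \le u}\; P_c(x) \;=\; f(x) \;+\; c\,\sum_{i}\bigl|h_i(x)\bigr|,
\]

where the \(h_i\) are the equality constraints, the bound constraints are kept explicitly, and for every penalty parameter \(c\) above a finite threshold (related to the Lagrange multipliers of the constrained problem) local minimizers of \(P_c\) coincide with local solutions of the original constrained problem.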

Wetting of a solid surface with liquids is an important parameter in the chemical engineering process such as distillation, absorption and desorption. The degree of wetting in packed columns mainly contributes in the generating of the effective interfacial area and then enhancing of the heat and mass transfer process. In this work the wetting of solid surfaces was studied in real experimental work and virtually through three dimensional CFD simulations using the multiphase flow VOF model implemented in the commercial software FLUENT. That can be used to simulate the stratified flows [1]. The liquid rivulet flow which is a special case of the film flow and mostly found in packed columns has been discussed. Wetting of a solid flat and wavy metal plate with rivulet liquid flow was simulated and experimentally validated. The local rivulet thickness was measured using an optically assisted mechanical sensor using a needle which is moved perpendicular to the plate surface with a step motor and in the other two directions using two micrometers. The measured and simulated rivulet profiles were compared to some selected theoretical models founded in the literature such as Duffy & Muffatt [2], Towell & Rothfeld [3] and Al-Khalil et al. [4]. The velocity field in a cross section of a rivulet flow and the non-dimensional maximum and mean velocity values for the vertical flat plate was also compared with models from Al-Khalil et al. [4] and Allen & Biggin [5]. Few CFD simulations for the wavy plate case were compared to the experimental findings, and the Towel model for a flat plate [3]. In the second stage of this work 3-D CFD simulations and experimental study has been performed for wetting of a structured packing element and packing sheet consisting of three elements from the type Rombopak 4M, which is a product of the company Kuhni, Switzerland. The hydrodynamics parameters of a packed column, e. i. the degree of wetting, the interfacial area and liquid hold-up have been depicted from the CFD simulations for different liquid systems and liquid loads. Flow patterns on the degree of wetting have been compared to that of the experiments, where the experimental values for the degree of wetting were estimated from the snap shooting of the flow on the packing sheet in a test rig. A new model to describe the hydrodynamics of packed columns equipped with Rombopak 4M was derived with help of the CFD–simulation results. The model predicts the degree of wetting, the specific or interfacial area and liquid hold-up at different flow conditions. This model was compared to Billet & Schultes [6], the SRP model Rocha et al. [7-9], to Shi & Mersmann [10] and others. Since the pressure drop is one of the most important parameter in packed columns especially for vacuum operating columns, few CFD simulations were performed to estimate the dry pressure drop in a structured and flat packing element and were compared to the experimental results. It was found a good agreement from one side, between the experimental and the CFD simulation results, and from the other side between the simulations and theoretical models for the rivulet flow on an inclined plate. The flow patterns and liquid spreading behaviour on the packing element agrees well with the experimental results. The VOF (Volume of Fluid) was found very sensitive to different liquid properties and can be used in optimization of the packing geometries and revealing critical details of wetting and film flow. 
An extension of this work to perform CFD simulations of the flow inside a block of the packing, in order to obtain a detailed picture of the interaction between the liquid and the packing surfaces, is recommended as a further perspective.

Uncoupling protein 1 (UCP1) in brown adipose tissue was discovered early on as the main uncoupling source of respiration. We describe the basic facts and a modest contribution of our group to research on mitochondrial uncoupling proteins. After defining the terms uncoupling, leak and proton-mediated uncoupling, we discuss the assumption that, due to its low abundance, uncoupling protein 2 (UCP2) can provide only mild uncoupling, i.e. can decrease the proton motive force by only a few mV. A fatty acid cycling mechanism is described as a plausible explanation for the protonophoretic function of all uncoupling proteins, together with our experiments supporting it. A speculation on the phylogenesis of the uncoupling proteins can be deduced from the estimated UCP2 content in several tissues, and details of its activation are explained on the basis of our experiments. In the present study, a solubilization and refolding method for UCP2 from inclusion bodies was developed and characterized. Since previous experiments on UCP1 had shown that fatty acids are substrates, we used the same procedure to study the function of UCP2. Using spin-labelled fatty acids (SLFA), we demonstrated the binding of fatty acids to UCP2; the competition of natural fatty acids such as oleic acid, palmitic acid, arachidonic acid and eicosatrienoic acid with the preformed complex emphasizes the presence of a fatty acid binding site on mitochondrial UCP2. The binding was followed by EPR spectroscopy, where the highly immobilized spectra observed in the presence of spin-labelled fatty acid eventually turn into free spin-label spectra once a sufficient concentration of the natural fatty acid is added to the UCP2-SLFA complex. This agrees with the earlier findings for UCP1 and suggests a functional explanation for the physiological relationship between the uncoupling proteins. The present study, in which representative and sensitive parameters for EPR spectroscopy were established, also describes the concentration effects of fatty acids on the SLFA-bound protein; these concentrations are physiologically relevant, lying in the micromolar (µM) range, compared with the millimolar (mM) range reported previously for UCP1. In appropriate examples, different fatty acids are used and compared with competitors such as alkylsulfonates, further emphasizing the function of the protein. Studies of the inhibitory effect of nucleotides demonstrate that there exists a putative binding site for fatty acids. Of particular significance are the spin-labelled ATP studies, in which the competition of ATP with the spin-labelled ATP bound to the protein explains the inhibitory effect of nucleotides on UCP2. The present study thus applies different methods for the functional characterization of UCP2. The studies of natural fatty acids and alkylsulfonates with UCP2 bound to spin-labelled fatty acid, and the study of nucleotide inhibition of UCP2, are closely related and give the long-awaited answer to the question of the functional similarities between UCP1 and UCP2. This supports the view of many groups which predict a functional similarity between these two proteins based on sequence homology.
Many attempts have also been reported in the literature to explain the physiological relevance of these proteins; based on the conclusions drawn from our experiments, we suppose that the present study can be added to them.

In this paper a known orthonormal system of time- and space-dependent functions, derived from the Cauchy-Navier equation for elastodynamic phenomena, is used to construct reproducing kernel Hilbert spaces. After choosing one of the spaces, the corresponding kernel is used to define a function system that serves as a basis for a spline space. We show that under certain conditions there exists a unique interpolating or approximating spline, respectively, in this space with respect to given samples of an unknown function. The name "spline" here refers to its property of minimising a norm among all interpolating functions. Moreover, a convergence theorem and an error estimate relative to the point grid density are derived. As a numerical example we investigate the propagation of seismic waves.
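As a rough illustration of the spline construction described above, an interpolant built from kernel translates whose coefficients solve the Gram system, here is a minimal Python sketch. The Gaussian kernel used below is only a stand-in; the Cauchy-Navier-based reproducing kernel of the paper is not reproduced here.

```python
import numpy as np

# stand-in reproducing kernel (Gaussian); chosen only for illustration
def kernel(x, y, scale=0.2):
    return np.exp(-((x - y) ** 2) / (2 * scale ** 2))

# samples of an "unknown" function on a point grid
nodes = np.linspace(0.0, 1.0, 8)
values = np.sin(2 * np.pi * nodes)

# interpolating spline s(x) = sum_i a_i K(x, x_i): solve the Gram system K a = y
gram = kernel(nodes[:, None], nodes[None, :])
coeff = np.linalg.solve(gram, values)

def spline(x):
    return kernel(np.atleast_1d(x)[:, None], nodes[None, :]) @ coeff

print(np.max(np.abs(spline(nodes) - values)))  # ~0: interpolation conditions hold
print(spline(np.array([0.3])))                  # evaluation between the nodes
```

Among all functions of the reproducing kernel Hilbert space matching the samples, this kernel interpolant is the one of minimal norm, which is the property behind the name "spline" used in the paper.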

Tropical geometry is a rather new field of algebraic geometry. The main idea is to replace algebraic varieties by certain piecewise linear objects in R^n, which can be studied with the aid of combinatorics. There is hope that many algebraically difficult operations become easier in the tropical setting, as the structure of the objects seems to be simpler. In particular, tropical geometry shows promise for applications in enumerative geometry. Enumerative geometry deals with the counting of geometric objects that are determined by certain incidence conditions. Until around 1990, not many enumerative questions had been answered and there was not much prospect of solving more. But then Kontsevich introduced the moduli space of stable maps, which turned out to be a very useful concept for the study of enumerative geometry. A well-known problem of enumerative geometry is to determine the numbers N_cplx(d,g) of complex genus g plane curves of degree d passing through 3d+g-1 points in general position. Mikhalkin has defined the analogous numbers N_trop(d,g) for tropical curves and shown that the two numbers coincide (Mikhalkin's Correspondence Theorem). Tropical geometry supplies many new ideas and concepts that could be helpful for answering enumerative problems. However, as a rather new field, tropical geometry has to be studied more thoroughly. This thesis is concerned with the ``translation'' of well-known facts of enumerative geometry to tropical geometry. More precisely, the main results of this thesis are: - a tropical proof that N_trop(d,g) does not depend on the position of the 3d+g-1 points, - a tropical proof of Kontsevich's recursive formula to compute N_trop(d,0), and - a tropical proof of Caporaso's and Harris' algorithm to compute N_trop(d,g). All results were derived in joint work with my advisor Andreas Gathmann. (Note that tropical research is not restricted to the translation of classically well-known facts; there are actually new results shown by means of tropical geometry that had not been known before. For example, Mikhalkin gave a tropical algorithm to compute the Welschinger invariant for real curves. This shows that tropical geometry can indeed be a tool for a better understanding of classical geometry.)
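Kontsevich's recursive formula mentioned above determines the numbers N(d) = N_cplx(d,0) = N_trop(d,0) of rational plane curves from the single seed N(1) = 1. The formula is classical and not specific to this thesis; a short Python sketch of the recursion:

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def N(d):
    """Kontsevich's recursion: number of rational plane curves of degree d through
    3d-1 points in general position; equals N_trop(d, 0) by the Correspondence Theorem."""
    if d == 1:
        return 1
    total = 0
    for d1 in range(1, d):
        d2 = d - d1
        total += N(d1) * N(d2) * (
            d1 ** 2 * d2 ** 2 * comb(3 * d - 4, 3 * d1 - 2)
            - d1 ** 3 * d2 * comb(3 * d - 4, 3 * d1 - 1)
        )
    return total

print([N(d) for d in range(1, 6)])  # [1, 1, 12, 620, 87304]
```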

The study provides insights into the dynamic processes of vascular epiphyte vegetation on two host tree species of lowland forest in Panama. Further, a novel approach is presented to examine the possible role of host tree identity in the structuring of vascular epiphyte communities: for three locally common host tree species (Socratea exorrhiza, Marila laxiflora, Perebea xanthochyma) we created null models of the expected epiphyte assemblages, assuming that epiphyte colonization reflects a random distribution of epiphytes in the forest. In all three tree species, abundances of the majority of epiphyte species (69-81 %) were indistinguishable from random, while the remaining species were about equally over- or underrepresented compared to their occurrence in the entire forest plot. Permutations based on the number of colonized trees (reflecting observed spatial patchiness) yielded similar results. Finally, a Canonical Correspondence Analysis also confirmed host-specific differences in epiphyte assemblages. In spite of pronounced preferences of some epiphytes for particular host trees, no epiphyte species was restricted to a single host. We conclude that the epiphytes on a given tree species are not simply a random sample of the local species pool, but there are no indications of host specificity either. To determine the qualitative and quantitative long-term changes in the vascular epiphyte assemblage of the host tree Socratea exorrhiza in the lowland forest of the San Lorenzo Crane Plot, we followed the fate of the vascular epiphyte assemblage on 99 individuals of this palm species in three censuses over the course of five years. The composition of the epiphyte assemblage changed little during the course of the study. While the similarity of the epiphyte vegetation on single palm individuals decreased through time, the similarity analyzed over all palms increased. Even well-established epiphyte individuals experienced high mortality, with only 46 % of the originally mapped individuals surviving the following five years. We found a positive correlation between host tree size and epiphyte richness and detected higher colonization rates of epiphytes per surface area on larger trees. Epiphyte assemblages on single Socratea exorrhiza trees were highly dynamic, while the overall composition of the epiphyte vegetation on this host tree species in the study plot was rather stable. We suggest that higher recruitment rates due to localized seed dispersal by already established epiphytes promote the colonization of epiphytes on larger palms. Given the known growth and mortality rates of the host tree species, the maximum time available for colonization and reproduction of epiphytes on a given Socratea exorrhiza tree is estimated to be about 60 years. Changes in the epiphyte vegetation of c. 1000 individuals of the host tree species Annona glabra on Barro Colorado Island over the course of eight years were documented by means of repeated censuses. The considerable increase in the abundance of the dominant epiphyte species and the ongoing colonization of the host tree species suggest that the epiphyte vegetation has not reached a steady state in the at most 80 years since the establishment of the host trees. Epiphyte species composition as a whole was rather stable.
We disentangled the relationship between epiphyte colonization and tree size/available time for colonization, finding that tree size explained only a low proportion of colonization, while other factors such as connectivity to dispersal sources and time may explain a larger part. Epiphyte populations are patchily distributed, and the examined species exhibit properties of a metapopulation: asynchronous local population growth, high local population turnover, a positive relationship between regional occurrence and patch population size, and a negative relationship between extinction and patch occupancy. The documented metapopulation processes highlight the importance of suitable but not yet colonized habitat for the conservation of epiphytes.
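The exact permutation scheme behind the null models described above is not given in the abstract. As a rough sketch of the general idea, drawing the observed number of epiphyte individuals at random from the forest-wide species pool and comparing per-species counts on the focal host against the resulting null distribution, the following Python snippet uses entirely hypothetical counts.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical forest-wide pool: individuals per epiphyte species (species A..E)
pool = np.array([120, 80, 40, 25, 10])
observed_on_host = np.array([30, 5, 12, 1, 4])   # counts found on the focal host species
n_on_host = observed_on_host.sum()

# null model: draw the same number of individuals at random (without replacement)
# from the forest-wide pool, many times
B = 10_000
null_counts = rng.multivariate_hypergeometric(pool, n_on_host, size=B)

# two-sided permutation "p-value" per species: how often a random draw is at least
# as extreme as the observed count
lower = (null_counts <= observed_on_host).mean(axis=0)
upper = (null_counts >= observed_on_host).mean(axis=0)
p = np.minimum(1.0, 2 * np.minimum(lower, upper))
print(p)  # species with small p are over- or underrepresented on the host
```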

Linear and integer programs are considered whose coefficient matrices can be partitioned into K consecutive ones matrices. Mimicking the special case K=1, which is well known to be equivalent to a network flow problem, we show that these programs can be transformed into a generalized network flow problem which we call the semi-simultaneous (se-sim) network flow problem. Feasibility conditions for se-sim flows are established and methods for finding initial feasible se-sim flows are derived. Optimal se-sim flows are characterized by a generalization of the negative cycle theorem for the minimum cost flow problem. The issue of improving a given flow is addressed from both a theoretical and a practical point of view. The paper concludes with a summary and some suggestions for possible future work in this area.
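For the classical special case K=1 (assuming the ones in every column are consecutive), the transformation to a network flow problem can be made concrete by differencing consecutive rows: each column of the differenced matrix contains exactly one +1 and one -1, i.e. it is a node-arc incidence matrix, and since the differencing operator is injective the transformed system is equivalent to the original one. A small numerical sketch with a hypothetical matrix (the paper's se-sim construction for general K is not reproduced here):

```python
import numpy as np

# hypothetical constraint matrix with the consecutive ones property in every column
A = np.array([
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 1, 0, 1],
])
b = np.array([3, 5, 6, 2])

# difference consecutive rows (with zero rows padded above and below):
# each column of B then has exactly one +1 and one -1, so B is the
# node-arc incidence matrix of a directed graph
pad = np.zeros((1, A.shape[1]), dtype=int)
B = np.diff(np.vstack([pad, A, pad]), axis=0)
d = np.diff(np.concatenate([[0], b, [0]]))   # transformed right-hand side

assert (np.count_nonzero(B == 1, axis=0) == 1).all()
assert (np.count_nonzero(B == -1, axis=0) == 1).all()
print(B)   # incidence matrix of the equivalent min-cost flow problem
print(d)   # node balances; they sum to 0
```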

The validity of formulas w.r.t. a specification over first-order logic with a semantics based on all models is semi-decidable. Therefore, we may implement a proof procedure which finds a proof for every valid formula fully automatically. But this semantics often lacks intuition: some pathological models, such as the trivial model, may produce unexpected results w.r.t. validity. Instead, we may consider just a class of special models, for instance the class of all data models. Proofs are then performed using induction. But inductive validity is not semi-decidable -- even for first-order logic. This theoretical drawback manifests itself in practical limitations: there are theorems that cannot be proved by induction directly; only generalizations of them can be proved, and for their definition we may have to extend the specification. Therefore, we cannot expect to prove interesting theorems fully automatically. Instead, we have to support user interaction in a suitable way. In this thesis, we aim at developing techniques that enhance the automatic proof control of (inductive) theorem provers and that enable user interaction in a suitable way. We integrate our new proof techniques into the inductive theorem prover QuodLibet and validate them with various case studies. Essentially, we introduce the following three proof techniques:
- We integrate a decision procedure for linear arithmetic closely into QuodLibet by defining new inference rules that perform the elementary steps of the decision procedure. This allows us to implement well-known heuristics for automatic proof control. Furthermore, we are able to provide special-purpose tactics that support the manual speculation of lemmas if a proof attempt gets stuck. The integration improves the theorem prover's ability to prove theorems automatically as well as its efficiency. Our approach is competitive with other approaches regarding efficiency and provides advantages regarding the speculation of lemmas.
- The automatic proof control searches for a proof by applying inference rules. The search space is not only infinite, but grows dramatically with the depth of the search. In contrast, checking and analyzing completed proofs is very efficient. As the search space is also highly redundant, it is reasonable to reuse subproofs found during proof search. We define new notions for the contribution of proof steps to a proof. These notions enable the derivation of pruned proofs and the identification of superfluous subformulas in theorems. A proof may be reused in two ways: upward propagation prunes a proof by eliminating superfluous proof steps; sideward reuse closes an open proof obligation by replaying an already found proof.
- For interactive theorem provers, it is essential not only to prove as many lemmas as possible automatically, but also to restrict proof search in such a way that the proof process stops within a reasonable amount of time. We introduce different markings in the goals to be proved and the lemmas to be applied in order to restrict proof search in a flexible way: with a forbidden marking, we can simulate well-known approaches for applying conditional lemmas; a mandatory marking provides a new heuristic inspired by the local contribution of proof steps; with obligatory and generous markings, we can fine-tune the degree of efficiency and the extent of proof search manually.
With an elaborate case study, we show the benefits of the different techniques, in particular the synergetic effect of their combination.
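The abstract does not state which decision procedure for linear arithmetic is integrated into QuodLibet. Purely to illustrate what such a procedure decides, here is a minimal Python sketch of one classical alternative, Fourier-Motzkin elimination over the rationals; it is not necessarily the procedure used in the thesis.

```python
from fractions import Fraction
from itertools import product

def feasible(ineqs, n):
    """Decide whether a conjunction of linear inequalities sum_j a[j]*x[j] <= b
    (each given as (a, b)) has a rational solution, by Fourier-Motzkin elimination."""
    ineqs = [([Fraction(c) for c in a], Fraction(b)) for a, b in ineqs]
    for k in range(n):                                   # eliminate variable x_k
        pos = [t for t in ineqs if t[0][k] > 0]
        neg = [t for t in ineqs if t[0][k] < 0]
        rest = [t for t in ineqs if t[0][k] == 0]
        new = []
        for (ap, bp), (an, bn) in product(pos, neg):
            lam, mu = -an[k], ap[k]                      # positive multipliers cancelling x_k
            a = [lam * ap[j] + mu * an[j] for j in range(n)]
            new.append((a, lam * bp + mu * bn))
        ineqs = rest + new
    # only constant constraints 0 <= b remain
    return all(b >= 0 for a, b in ineqs)

# x + y <= 4, x >= 1, y >= 2 is satisfiable ...
print(feasible([([1, 1], 4), ([-1, 0], -1), ([0, -1], -2)], 2))                 # True
# ... but adding y <= 1 makes it unsatisfiable
print(feasible([([1, 1], 4), ([-1, 0], -1), ([0, -1], -2), ([0, 1], 1)], 2))    # False
```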

The primary objective of this work is the development of a robust, accurate and efficient time integrator for the dynamics of flexible multibody systems. In particular, a unified framework for the computational dynamics of multibody systems consisting of mass points, rigid bodies and flexible beams forming open kinematic chains or closed-loop systems is developed. In addition, the work aims at presenting (i) a focused survey of the Lagrangian and Hamiltonian formalisms for dynamics, (ii) five different methods to enforce constraints together with their mutual relations, and (iii) three alternative ways for the temporal discretisation of the evolution equations. The relations between the different methods for constraint enforcement, in conjunction with one specific energy-momentum conserving temporal discretisation method, are proved, and their numerical performance is compared by means of theoretical considerations as well as numerical examples.

Using covering problems (CoP) combined with binary search is a well-known and successful solution approach for solving continuous center problems. In this paper, we show that this is also true for center hub location problems in networks. We introduce and compare various formulations for hub covering problems (HCoP) and analyse the feasibility polyhedron of the most promising one. Computational results using benchmark instances are presented. These results show that the new solution approach performs better in most examples.
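The covering-plus-binary-search idea rests on the monotonicity of the covering subproblem in the radius: if p facilities can cover all clients within radius r, they also can for every larger radius, so the optimal center radius can be found by binary search over the candidate radii. As a rough illustration, the following Python sketch solves a plain (non-hub) p-center instance with hypothetical data and a brute-force covering check; the paper's HCoP formulations are not reproduced here.

```python
from itertools import combinations

def covers(dist, sites, r):
    """True if every client is within radius r of some chosen site."""
    return all(min(dist[s][c] for s in sites) <= r for c in range(len(dist[0])))

def feasible(dist, p, r):
    """Covering subproblem: can p sites cover all clients within radius r?
    (brute force here; a covering formulation would be solved instead)"""
    return any(covers(dist, comb, r) for comb in combinations(range(len(dist)), p))

def p_center(dist, p):
    """Binary search over the candidate radii (the distinct distances)."""
    radii = sorted({d for row in dist for d in row})
    lo, hi = 0, len(radii) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(dist, p, radii[mid]):
            hi = mid
        else:
            lo = mid + 1
    return radii[lo]

# hypothetical distance matrix: 4 candidate sites (rows) x 5 clients (columns)
dist = [
    [0, 2, 9, 7, 4],
    [2, 0, 6, 8, 3],
    [9, 6, 0, 3, 5],
    [7, 8, 3, 0, 6],
]
print(p_center(dist, p=2))   # optimal covering radius, here 3
```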