### Refine

#### Year of publication

- 2006 (67)

#### Document Type

- Report (25)
- Doctoral Thesis (24)
- Preprint (14)
- Diploma Thesis (2)
- Conference Proceeding (1)
- Periodical Part (1)

#### Language

- English (67)

#### Keywords

- Elastic BVP (3)
- Approximation (2)
- Elastisches RWP (2)
- Elastoplastisches RWP (2)
- Hysterese (2)
- IMRT (2)
- Kontinuumsmechanik (2)
- Lokalisation (2)
- Multivariate Approximation (2)
- Optimization (2)

#### Faculty / Organisational entity

Matter-wave Optics of Dark-state Polaritons: Applications to Interferometry and Quantum Information
(2006)

The present work "Matter-wave Optics of Dark-state Polaritons: Applications to Interferometry and Quantum Information" deals in a broad sense with the subject of dark states and in particular with the so-called dark-state polaritons introduced by M. Fleischhauer and M. D. Lukin. The dark-state polaritons can be regarded as a combined excitation of electromagnetic fields and spin/matter-waves. Within the framework of this thesis the special optical properties of the combined excitation are studied. On the one hand a new procedure to spatially manipulate and to increase the excitation density of stored photons is described, and on the other hand the properties are used to construct a new type of Sagnac hybrid interferometer. The thesis is divided into four parts. The introduction describes all notions necessary to understand the work, e.g. electromagnetically induced transparency (EIT), dark-state polaritons and the Sagnac effect. The second chapter considers the method developed by A. Andre and M. D. Lukin to create stationary light pulses in specially dressed EIT media. In a first step a set of field equations is derived and simplified by introducing a new set of normal modes. The absorption of one of the normal modes leads to the phenomenon of pulse matching for the other mode and thereby to a diffusive spreading of its field envelope. All these considerations are based on a homogeneous field setup of the EIT preparation laser. If this restriction is dropped, one finds that a drift motion is superimposed on the diffusive spreading. By choosing a special laser configuration the drift motion can be tailored such that an effective force is created that counteracts the spreading. Moreover, the force can be strong enough not only to compensate the diffusive spreading but also to exceed it and hence to compress the field envelope of the excitation. The compression can be described using a Fokker-Planck equation of the Ornstein-Uhlenbeck type.
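The compression dynamics just described can be illustrated by the generic form of a Fokker-Planck equation of the Ornstein-Uhlenbeck type (generic symbols, not the thesis's own notation): a linear drift of rate \(\gamma\) pulls the envelope \(P\) toward the centre and competes with diffusive spreading of strength \(D\),

```latex
\partial_t P(z,t) \;=\; \partial_z\!\left[\gamma\, z\, P(z,t)\right] \;+\; D\,\partial_z^2 P(z,t).
```

In the stationary limit the envelope is Gaussian with variance \(D/\gamma\), so compression wins once the drift rate exceeds the diffusive spreading.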
The investigations show that the compression leads to an excitation of higher-order modes which decay very fast. In the last section of the chapter this excitation is discussed in more detail and conditions are given under which the excitation of higher-order modes can be avoided or suppressed. All results given in the chapter are supported by numerical simulations. In the third chapter the matter-wave optical properties of the dark-state polaritons are studied. They are used to construct a light-matter-wave hybrid Sagnac interferometer. First the basic setup of such an interferometer is sketched and the relevant equations of motion of light-matter interaction in a rotating frame are derived. These form the basis of the following considerations of the dark-state polariton dynamics with and without the influence of external trapping potentials on the matter-wave part of the polariton. It is shown that a sensitivity enhancement compared to a passive laser gyroscope can be anticipated if the gaseous medium is initially in a superfluid quantum state in a ring-trap configuration. To achieve this enhancement, a simultaneous coherence and momentum transfer is furthermore necessary. In the last part of the chapter the quantum sensitivity limit of the hybrid interferometer is derived using the one-particle density matrix equations incorporating the motion of the particles. To this end the Maxwell-Bloch equations are considered perturbatively in the rotation rate of the noninertial frame of reference, and the susceptibility of the considered 3-level \(\Lambda\)-type system is derived in arbitrary order of the probe field in order to determine the optimum operation point. With its help the anticipated quantum sensitivity of the light-matter-wave hybrid Sagnac interferometer is calculated at the shot-noise limit and the results are compared to state-of-the-art laser and matter-wave Sagnac interferometers.
The last chapter of the thesis originates from a joint theoretical and experimental project with the AG Bergmann. This chapter no longer considers the dark-state polaritons of the previous two chapters but deals with the more general concept of dark states, in particular with the transient velocity-selective dark states introduced by E. Arimondo et al. In the experiment we measured these states for the first time. The chapter starts with an introduction to the concept of velocity-selective dark states as they occur in a \(\Lambda\)-configuration. Then we introduce the transient velocity-selective dark states as they occur in a particular extension of the \(\Lambda\)-system. For later use in the simulations the relevant equations of motion are derived in detail. The simulations are based on the solution of the generalized optical Bloch equations. Finally the experimental setup and procedure are explained and the theoretical and experimental results are compared.

We consider a volume maximization problem arising in the gemstone cutting industry. The problem is formulated as a general semi-infinite program (GSIP) and solved using an interior-point method developed by Stein. It is shown that the convexity assumption needed for the convergence of the algorithm can be satisfied by appropriate modelling. Clustering techniques are used to reduce the number of container constraints, which is necessary to make the subproblems practically tractable. An iterative process consisting of GSIP optimization and adaptive refinement steps is then employed to obtain an optimal solution which is also feasible for the original problem. Some numerical results based on real-world data are also presented.
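The interplay of discretized container constraints and adaptive refinement can be seen in a deliberately tiny model (our own sketch, unrelated to the actual gemstone geometry or Stein's interior-point method): maximize the radius of a disk at a fixed centre inside a unit-circle container, where the container is represented by finitely many boundary points that are refined only near the active constraint.

```python
import numpy as np

def boundary(ts):
    """Sampled container boundary; a unit circle stands in for the rough stone."""
    return np.stack([np.cos(ts), np.sin(ts)], axis=1)

center = np.array([0.3, 0.1])                        # fixed centre of the "gem"
ts = np.linspace(0.0, 2 * np.pi, 8, endpoint=False)  # coarse constraint sample
for spread in (0.2, 0.1, 0.05, 0.02, 0.01):
    d = np.linalg.norm(boundary(ts) - center, axis=1)
    r = d.min()                      # optimal radius for the sampled constraints
    t_active = ts[np.argmin(d)]      # refine only near the active constraint
    ts = np.concatenate([ts, t_active + np.linspace(-spread, spread, 5)])
# r decreases toward the true inscribed radius 1 - sqrt(0.1) as the
# discretization is refined, so the final design becomes (nearly) feasible
```

The coarse model overestimates the feasible volume; each refinement adds constraints where they matter, which mirrors the alternation of GSIP optimization and adaptive refinement described above.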

The stationary heat equation is solved with periodic boundary conditions in geometrically complex composite materials with high contrast in the thermal conductivities of the individual phases. This is achieved by harmonic averaging and by explicitly introducing the jumps across the material interfaces as additional variables. The continuity of the heat flux yields the extra equations needed for these variables. A Schur-complement formulation for the new variables is derived and solved using the FFT and BiCGStab methods. The EJ-HEAT solver is given as a 3-page Matlab program in the Appendix. The C++ implementation is used for material design studies. It solves 3-dimensional problems with around 190 million variables on a 64-bit AMD Opteron desktop system in less than 6 GB of memory and in minutes to hours, depending on the contrast and required accuracy. The approach may also be used to compute effective electric conductivities, because they are governed by the stationary heat equation.
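A toy illustration of why harmonic averaging is the natural choice at material interfaces (a generic sketch, not taken from the EJ-HEAT code): for heat flowing perpendicular to equal-thickness layers the effective conductivity is the harmonic mean, which is dominated by the poorly conducting phase, while the arithmetic mean applies to flow parallel to the layers.

```python
import numpy as np

def harmonic_mean(k):
    """Effective conductivity of equal-thickness layers in series (flux across)."""
    k = np.asarray(k, dtype=float)
    return len(k) / np.sum(1.0 / k)

def arithmetic_mean(k):
    """Effective conductivity of the same layers in parallel (flux along)."""
    return float(np.mean(k))

# high contrast: the series response is throttled by the poor conductor
k = [1.0, 100.0]
k_series, k_parallel = harmonic_mean(k), arithmetic_mean(k)  # ~1.98 vs 50.5
```

The spread between the two means grows with the contrast, which is why high-contrast composites are the numerically hard case mentioned in the abstract.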

In this study, 27 marine bacteria were screened for production of bioactive metabolites. Two strains from the surface of the soft coral Sinularia polydactyla, collected from the Red Sea, and three strains from different habitats in the North Sea were selected as promising candidates for the isolation of antimicrobial substances. A total of 50 compounds were isolated from the selected bacterial strains. Of these metabolites, 25 substances were known from natural sources, 10 substances were known as synthetic chemicals and are reported here as new natural products, and 13 metabolites are new. Two substances are still under elucidation. All new compounds were chemically and biologically characterized. Pseudoalteromonas sp. T268 produced simple phenol and oxindole derivatives. Production of homogentisic acid and WZ 268S-6 by this bacterium was affected by salinity stress. WZ 268S-6 shows antimicrobial and cytotoxic activities; its target is still unclear. The isolation of isatin from this strain points to the possibility of using this substance as a chemotaxonomic marker for Alteromonas-like bacteria. A large number of nitro-substituted aromatic compounds were isolated from both Salegentibacter sp. T436 and Vibrio sp. WMBA1-4. They may be derived from the metabolism of phenylalanine or tyrosine. From Salegentibacter sp. T436, 24 compounds were isolated, of which four are new and six were known as synthetic chemicals. WZ 436S-16 (dinitro-β-styrene) is the most potent antimicrobial and cytotoxic compound. It inhibits oxygen uptake by N. coryli and causes apoptosis in human promyelocytic leukaemia (HL-60) cells. From Vibrio sp. WMBA1-4, 13 new alkaloids were isolated, of which four were known as synthetic products and are reported here as new substances from natural sources. The majority of these compounds show antimicrobial and cytotoxic activities.
The cytotoxic activity of WMB4S-11 against mouse lymphocytic leukaemia (L1210) cells is due to inhibition of protein biosynthesis, while the remaining cytotoxic alkaloids have no effect on the synthesis of macromolecules in this cell line. The antibacterial activity of WMB4S-2, -11, -12 and -13 and the antifungal activity of WMB4S-9 are not due to inhibition of macromolecule biosynthesis or of oxygen uptake by the microorganisms. The biological activity of these nitro-aromatic compounds from Salegentibacter sp. T436 and Vibrio sp. WMBA1-4 is influenced by the presence of a nitro group and its position with respect to the hydroxyl group, the number of nitro groups, and the type of substitution on the side chain. In diaryl-maleimide derivatives, the type and position of substitution on the aryl rings and on the maleimide moiety, and the hydrophobicity of the aryl ring itself, lead to variations in the extent of the bioactivity of these derivatives. This is the first time that vibrindole (WMB4S-14) and turbomycin B or its noncationic form (WMB4S-15), isolated from Vibrio sp., are reported as cytotoxic compounds. WMB4S-15 inhibits the biosynthesis of macromolecules in L1210 cells. The structural similarity between some of the metabolites in this study and previously reported compounds from sponges, ascidians and bryozoans indicates that a microbial origin of these compounds must be considered.

We present the application of a meshfree method for simulations of the interaction between fluids and flexible structures. As a flexible structure we consider a sheet of paper. In a two-dimensional framework this sheet can be modeled as a curve by the dynamical Kirchhoff-Love theory. The external forces taken into account are gravitation and the pressure difference between the upper and lower surfaces of the sheet. This pressure difference is computed using the Finite Pointset Method (FPM) for the incompressible Navier-Stokes equations. FPM is a meshfree, Lagrangian particle method. The dynamics of the sheet are computed by a finite difference method. We show the suitability of the meshfree method for simulations of fluid-structure interaction in several applications.
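The finite-difference treatment of the sheet involves the fourth spatial derivative from the Kirchhoff-Love bending term. A minimal sketch of the standard centered stencil (our own illustration, not the paper's code):

```python
import numpy as np

def fourth_derivative(w, h):
    """Centered 4th-derivative stencil on the interior points of a uniform grid."""
    d4 = np.zeros_like(w)
    d4[2:-2] = (w[:-4] - 4 * w[1:-3] + 6 * w[2:-2] - 4 * w[3:-1] + w[4:]) / h**4
    return d4

x = np.linspace(0.0, 1.0, 101)
d4 = fourth_derivative(x**4, x[1] - x[0])  # exact 4th derivative of x^4 is 24
```

In a bending equation of the form w_tt = -B w_xxxx this stencil would supply the elastic restoring force; explicit time stepping then requires a time step of order h^2 for stability.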

It is commonly believed that not all degrees of freedom are needed to produce good solutions for the treatment planning problem in intensity modulated radiotherapy treatment (IMRT). However, typical methods to exploit this fact have either increased the complexity of the optimization problem or were heuristic in nature. In this work we introduce a technique based on adaptively refining variable clusters to successively attain better treatment plans. The approach creates approximate solutions based on smaller models that may get arbitrarily close to the optimal solution. Although the method is illustrated using a specific treatment planning model, the components constituting the variable clustering and the adaptive refinement are independent of the particular optimization problem.
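A stripped-down sketch of the cluster-then-refine idea (a generic least-squares dose-matching toy; the influence matrix, names and split rule are our own illustration, not the paper's planning model): all beamlets in a cluster share one intensity, and splitting a cluster yields a strictly finer model whose optimum is never worse.

```python
import numpy as np

def solve_clustered(D, prescription, clusters):
    """Least-squares dose matching with one shared intensity per beamlet cluster."""
    A = np.stack([D[:, c].sum(axis=1) for c in clusters], axis=1)
    x, *_ = np.linalg.lstsq(A, prescription, rcond=None)
    return x, A @ x - prescription

def refine(clusters, worst):
    """Split one cluster in two, enlarging the reduced model's feasible set."""
    c = clusters[worst]
    return clusters[:worst] + [c[:len(c) // 2], c[len(c) // 2:]] + clusters[worst + 1:]

D = np.eye(2)                 # toy influence matrix: 2 voxels, 2 beamlets
p = np.array([1.0, 2.0])
x1, r1 = solve_clustered(D, p, [[0, 1]])            # one cluster: cannot match p
x2, r2 = solve_clustered(D, p, refine([[0, 1]], 0))  # refined model: exact match
```

Refining only the clusters responsible for the largest residual keeps the model small while the approximate solutions approach the optimum of the full problem.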

Over the last decade, optimization of beam orientations in intensity-modulated radiation therapy (IMRT) has been shown to improve treatment plans. Unfortunately, the quality of a set of beam orientations depends heavily on its corresponding beam intensity profiles. Usually, a stochastic selector is used for optimizing beam orientation, and then a single-objective inverse treatment planning algorithm is used for the optimization of beam intensity profiles. The overall time needed to solve the inverse planning problem for every random selection of beam orientations becomes excessive. Recently, considerable improvement has been made in optimizing beam intensity profiles by using multiple-objective inverse treatment planning. Such an approach results in a variety of beam intensity profiles for every selection of beam orientations, making the dependence between beam orientations and their intensity profiles less important. We take advantage of this property to present a dynamic algorithm for beam orientation in IMRT which is based on multicriteria inverse planning. The algorithm approximates beam intensity profiles iteratively instead of computing them for every selection of beam orientations, saving a considerable amount of calculation time. Every iteration goes from an N-beam plan to a plan with N + 1 beams. Beam selection criteria are based on a score function that minimizes the deviation from the prescribed dose, in addition to a reject-accept criterion. To illustrate the efficiency of the algorithm it has been applied to an artificial example where optimality is trivial and to three real clinical cases: a prostate carcinoma, a tumor in the head and neck region, and a paraspinal tumor. In comparison to the standard equally spaced beam plans, improvements are reported in all three clinical examples, in some cases even with fewer beams.
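The iterative N to N+1 beam build-up can be caricatured by a greedy loop over candidate orientations (a deliberately simplified sketch; the names, the score function, and the tiny dose model are our own illustration, not the paper's algorithm):

```python
import numpy as np

def greedy_beams(dose_per_beam, prescription, n_beams):
    """Add one beam at a time, each minimizing the deviation from the prescription."""
    selected, total = [], np.zeros_like(prescription, dtype=float)
    candidates = set(range(len(dose_per_beam)))
    for _ in range(n_beams):
        best = min(candidates,
                   key=lambda b: np.linalg.norm(total + dose_per_beam[b] - prescription))
        selected.append(best)
        total = total + dose_per_beam[best]
        candidates.remove(best)
    return selected, total

# three candidate orientations, a two-voxel "dose", prescription (1, 1)
beams = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.3, 0.5])]
sel, total = greedy_beams(beams, np.array([1.0, 1.0]), n_beams=2)
```

A real planner would also apply a reject-accept criterion before committing to the new beam; the point of the sketch is only that each iteration scores candidates against the current N-beam plan rather than re-solving the full inverse problem.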

The topic of this thesis is the coupling of an atomistic and a coarse-scale region in molecular dynamics simulations, with the focus on the reflection of waves at the interface between the two scales and the velocity of waves in the coarse-scale region for a non-equilibrium process. First, two models from the literature for such a coupling, the concurrent coupling of length scales and the bridging scales method, are investigated for a one-dimensional system with harmonic interaction. It turns out that the concurrent coupling of length scales method leads to the reflection of fine-scale waves at the interface, while the bridging scales method gives an approximated system that is not energy conserving. The velocity of waves in the coarse-scale region is not correct in either model. To circumvent these problems, we present a coupling based on the displacement splitting of the bridging scales method together with choosing appropriate variables in orthogonal subspaces. This coupling allows the derivation of evolution equations of fine- and coarse-scale degrees of freedom, together with a reflectionless boundary condition at the interface, directly from the Lagrangian of the system. This leads to an energy-conserving approximated system with a clear separation between modeling errors and errors due to the numerical solution. Possible approximations in the Lagrangian, the numerical computation of the memory integral and other numerical errors are discussed. We further present a method to choose the interpolation from the coarse to the atomistic scale in such a way that the fine-scale degrees of freedom in the coarse-scale region can be neglected. The interpolation weights are computed by comparing the dispersion relations of the coarse-scale equations and the fully atomistic system. With these new interpolation weights, the number of degrees of freedom can be drastically reduced without creating an error in the velocity of the waves in the coarse-scale region.
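The dispersion-relation matching mentioned above can be made concrete with the textbook dispersion relation of a monatomic harmonic chain (generic symbols: spring constant \(K\), mass \(m\), lattice spacing \(a\); this is the standard reference curve, not the thesis's specific coarse-scale model). The coarse model must reproduce this curve, and in particular the group velocity, for the wavelengths it is meant to carry:

```latex
\omega(k) \;=\; 2\sqrt{\frac{K}{m}}\,\left|\sin\!\left(\frac{ka}{2}\right)\right|,
\qquad
v_g(k) \;=\; \frac{\mathrm{d}\omega}{\mathrm{d}k}
\;=\; a\sqrt{\frac{K}{m}}\,\cos\!\left(\frac{ka}{2}\right)
\quad (0 \le k \le \pi/a).
```

Any mismatch between the coarse model's dispersion curve and this one shows up directly as a wrong wave velocity in the coarse-scale region.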
We give an alternative derivation of the new coupling with the Mori-Zwanzig projection operator formalism, and explain how the method can be extended to non-zero temperature simulations. For the comparison of the results of the approximated system with those of the fully atomistic system, we use a local stress tensor and the energy in the atomistic region. Examples for the numerical solution of the approximated system with harmonic potentials are given in one and two dimensions.

Testing a new suspension based on real load data is performed on elaborate multi-channel test rigs. Usually wheel forces and moments measured during driving maneuvers are reproduced on the rig. Because of the complicated interaction between rig and suspension, each new rig configuration has to prove its efficiency with respect to the requirements, and the configuration might be subject to optimization. This paper deals with modeling a new rig concept based on two hexapods. The real physical rig has been designed and meanwhile built by MOOG-FCS for VOLKSWAGEN. The aim of the simulation project reported here was twofold: first, the simulation of the rig together with real VOLKSWAGEN suspension models, at a time when the design was not yet finalized, was used to verify and optimize the desired properties of the rig. Second, the simulation environment was set up in a way that it can be used to prepare real tests on the rig. The model contains the geometric configuration as well as the hydraulics and the controller. It is implemented as an ADAMS/Car template and can be combined with different suspension models to get a complete assembly representing the entire test rig. Using this model, all steps required for a real test run, such as controller adaptation, drive file iteration and simulation, can be performed. Geometric or hydraulic parameters can be modified easily to improve the setup and adapt the system to the suspension and the load data.

Traffic flow on road networks has been a continuous source of challenging mathematical problems. Mathematical modelling can provide an understanding of the dynamics of traffic flow and is hence helpful in organizing the flow through the network. In this dissertation macroscopic models for traffic flow in road networks are presented. The primary interest is the extension of the existing macroscopic road network models based on partial differential equations (the PDE model). In order to overcome the high computational costs of the PDE model, an ODE model has been introduced. In addition, a steady-state traffic flow model on road networks, named the RSA model, has been discussed. To obtain the optimal flow through the network, cost functionals and corresponding optimal control problems are defined. The solution of these optimization problems provides information about the shortest path through the network subject to road conditions. The resulting constrained optimization problem is solved approximately by solving an unconstrained problem involving exact penalty functions and a penalty parameter. A good estimate of the threshold of the penalty parameter is given. A well-defined algorithm for solving a nonlinear, nonconvex equality and bound constrained optimization problem is introduced. The numerical results on the convergence history of the algorithm support the theoretical results. In addition, bottleneck situations in traffic flow have been treated using a domain decomposition method (DDM). In particular, this method could also be used to solve scalar conservation laws with discontinuous flux functions corresponding to other physical problems. The method is effective even when the flux function presents more than one discontinuity within the same spatial domain. The numerical results show that the DDM is superior to other schemes and demonstrates good shock resolution.
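The role of the exact-penalty threshold can be seen in a one-variable toy problem (a generic sketch, unrelated to the traffic model): for an l1 penalty, below the threshold given by the constraint multiplier the penalty minimizer is infeasible, while above it the unconstrained minimizer solves the constrained problem exactly.

```python
import numpy as np

def l1_penalty(f, h, mu):
    """Exact (l1) penalty for a single equality constraint h(x) = 0."""
    return lambda x: f(x) + mu * np.abs(h(x))

f = lambda x: x ** 2      # objective
h = lambda x: x - 1.0     # constraint x = 1; the multiplier is lambda* = -2

xs = np.linspace(-2.0, 2.0, 40001)
minimizers = {}
for mu in (1.0, 3.0):
    P = l1_penalty(f, h, mu)
    minimizers[mu] = xs[np.argmin(P(xs))]
# mu = 1 < |lambda*| = 2: the penalty minimizer x = 0.5 is infeasible;
# mu = 3 > |lambda*|: it coincides with the constrained solution x = 1
```

This is why a good estimate of the threshold matters: too small a penalty parameter yields infeasible solutions, while an unnecessarily large one makes the unconstrained problem ill-conditioned.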

- naive examples which show drawbacks of the discrete wavelet transform and the windowed Fourier transform;
- adaptive partition (with a 'best basis' approach) of speech-like signals by means of local trigonometric bases with orthonormal windows;
- extraction of formant-like features from the cosine transform;
- suggestions for further steps towards the classification of vowels or voiced speech.
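The cosine-transform step can be sketched with a naive DCT-II, the transform behind local cosine bases (our own illustration: a fast implementation would use an FFT-based routine, the pure-tone "frame" is a stand-in for voiced speech, and the Hann window is not one of the orthonormal windows of the text):

```python
import numpy as np

def dct2(x):
    """Naive O(N^2) DCT-II of a signal frame."""
    N = len(x)
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    return (np.cos(np.pi * (n + 0.5) * k / N) * x).sum(axis=1)

# toy frame: exactly the k = 16 cosine atom; after windowing, the transform
# concentrates the energy in a few coefficients around k = 16
t = np.arange(64)
frame = np.cos(np.pi * (t + 0.5) * 16 / 64)
coeffs = dct2(frame * np.hanning(64))
```

Locating such dominant coefficient bands per window is the kind of formant-like feature extraction the list above refers to.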

Ownership Domains generalize ownership types. They support programming patterns like iterators that are not possible with ordinary ownership types. However, they are still too restrictive for cases in which an object X wants to access the public domains of an arbitrary number of other objects, which often happens in observer scenarios. To overcome this restriction, we developed so-called loose domains which abstract over several precise domains. That is, similar to the relation between supertypes and subtypes we have a relation between loose and precise domains. In addition, we simplified ownership domains by reducing the number of domains per object to two and hard-wiring the access permissions between domains. We formalized the resulting type system for an OO core language and proved type soundness and a fundamental accessibility property.

Biological Soil Crusts (BSCs), composed of lichens, mosses, green algae, microfungi and cyanobacteria, are an ecologically important part of the perennial land cover of many arid and semiarid regions (Belnap et al. 2001a), (Büdel 2002). In many arid and hyperarid areas BSCs form the only perennial "vegetation cover", largely due to their extensive resistance to drought (Lange et al. 1975). For the Central Namib Desert (Namibia), BSCs consisting of extraordinarily vast lichen communities were recently mapped and classified into six morphological classes for a coastal area of 350 km x 60 km. Embedded in the project "BIOTA" (www.biota-africa.org), financed by the German Federal Ministry of Education and Research, the study was undertaken within the framework of the PhD thesis of Christoph Schultz. Some of these lichen communities, grouped together in so-called "lichen fields", have already been studied with regard to their ecology and diversity in the past (Lange et al. 1994), (Loris & Schieferstein 1992), (Loris et al. 2004), (Ullmann & Büdel 2001a), (Wessels 1989). Multispectral LANDSAT 7 ETM+ and LANDSAT 5 TM satellite imagery was utilized for a unitemporal supervised classification as well as for the establishment of a monitoring scheme based on a combined retrospective supervised classification and change detection approach (Bock 2003), (Weiers et al. 2003). Results comprise the analysis of the mapped distribution of lichen communities for the Central Namib Desert as of 2003 as well as reconstructed distributions for the years 2000, 1999, 1992 and 1991 derived from retrospective supervised classification. This allows a first monitoring of the disturbance, destruction and recovery of the lichen communities in these arid environments, including the analysis of the major abiotic processes involved.
Further analysis of these abiotic processes is key to understanding the influence of Namib lichen communities on overall aeolian and water-induced erosion rates, nutrient cycles, water balance and pedogenic processes (Belnap & Gillette 1998), (Belnap et al. 2001b), (Belnap 2001c), (Evans & Lange 2001), (McKenna Neumann & Maxwell 1999). In order to aid the understanding of these processes, SRTM digital elevation model data as well as climate data sets were used as reference. Good correlation could be observed between geomorphological form elements as well as the hydrological drainage system and the disturbance patterns derived from individual post-classification change comparisons between the timeframes. Combined with the climate data sets, sporadic foehn-like windstorms as well as extraordinary precipitation events were identified as largely affecting the distribution patterns of lichen communities. The analysis and monitoring of the diversity, distribution and spatiotemporal change of Central Namib BSCs by means of remote sensing and GIS applications therefore prove to be important tools for furthering the understanding of desertification and degradation processes in these arid regions.

In this paper we address the improvement of transfer quality in public mass transit networks. Generally there are several transit operators offering service, and our work is motivated by the question of how their timetables can be altered to yield optimized transfer possibilities in the overall network. To achieve this, only small changes to the timetables are allowed. This set-up makes it possible to use a quadratic semi-assignment model to solve the optimization problem. We apply this model, equipped with a new way to assess transfer quality, to the solution of four real-world examples. It turns out that improvements in overall transfer quality can be obtained by such optimization-based techniques. Therefore they can serve as a first step towards a decision support tool for planners of regional transit networks.
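The quadratic semi-assignment flavour of the problem can be seen in a toy brute-force version (entirely our own illustration: tiny lines, minute-scale shifts, and total transfer waiting time as a stand-in for the paper's transfer-quality measure):

```python
import itertools

def best_shifts(gaps, period, shifts):
    """gaps[(i, j)] = scheduled arrival(i)-to-departure(j) gap in minutes.
    Choose one shift per line minimizing the total transfer wait."""
    n = 1 + max(k for ij in gaps for k in ij)     # number of lines
    best, best_cost = None, float("inf")
    for assign in itertools.product(shifts, repeat=n):
        cost = sum((g + assign[j] - assign[i]) % period
                   for (i, j), g in gaps.items())
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost

# one transfer from line 0 to line 1 with a 7-minute wait on a 10-minute period
offsets, wait = best_shifts({(0, 1): 7}, period=10, shifts=range(10))
```

Real instances restrict the shift set to small timetable changes and need dedicated quadratic semi-assignment solvers instead of enumeration, since the search space grows exponentially in the number of lines.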

The validity of formulas w.r.t. a specification over first-order logic with a semantics based on all models is semi-decidable. Therefore, we may implement a proof procedure which finds a proof for every valid formula fully automatically. But this semantics often lacks intuition: some pathological models such as the trivial model may produce unexpected results w.r.t. validity. Instead, we may consider just a class of special models, for instance the class of all data models. Proofs are then performed using induction. But inductive validity is not semi-decidable -- even for first-order logic. This theoretical drawback manifests itself in practical limitations: there are theorems that cannot be proved by induction directly; only generalizations of them can be proved. For their definition, we may have to extend the specification. Therefore, we cannot expect to prove interesting theorems fully automatically. Instead, we have to support user interaction in a suitable way. In this thesis, we aim at developing techniques that enhance the automatic proof control of (inductive) theorem provers and that enable user interaction in a suitable way. We integrate our new proof techniques into the inductive theorem prover QuodLibet and validate them with various case studies. Essentially, we introduce the following three proof techniques:
- We integrate a decision procedure for linear arithmetic closely into QuodLibet by defining new inference rules that perform the elementary steps of the decision procedure. This allows us to implement well-known heuristics for automatic proof control. Furthermore, we are able to provide special-purpose tactics that support the manual speculation of lemmas if a proof attempt gets stuck. The integration improves the ability of the theorem prover to prove theorems automatically as well as its efficiency. Our approach is competitive with other approaches regarding efficiency; it provides advantages regarding the speculation of lemmas.
- The automatic proof control searches for a proof by applying inference rules. The search space is not only infinite, but grows dramatically with the depth of the search. In contrast, checking and analyzing performed proofs is very efficient. As the search space also has a high redundancy, it is reasonable to reuse subproofs found during proof search. We define new notions for the contribution of proof steps to a proof. These notions enable the derivation of pruned proofs and the identification of superfluous subformulas in theorems. A proof may be reused in two ways: upward propagation prunes a proof by eliminating superfluous proof steps; sideward reuse closes an open proof obligation by replaying an already found proof.
- For interactive theorem provers, it is essential not only to prove automatically as many lemmas as possible but also to restrict proof search in such a way that the proof process stops within a reasonable amount of time. We introduce different markings in the goals to be proved and the lemmas to be applied to restrict proof search in a flexible way: with a forbidden marking, we can simulate well-known approaches for applying conditional lemmas. A mandatory marking provides a new heuristic inspired by the local contribution of proof steps. With obligatory and generous markings, we can fine-tune the degree of efficiency and the extent of proof search manually. With an elaborate case study, we show the benefits of the different techniques, in particular the synergetic effect of their combination.
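The elementary step of a decision procedure for linear arithmetic can be sketched as follows (a generic Fourier-Motzkin variable-elimination step over the rationals; this illustrates the kind of procedure meant, not QuodLibet's actual inference rules):

```python
from fractions import Fraction

def eliminate_var(lowers, uppers):
    """Eliminate x from {x >= l for l in lowers} and {x <= u for u in uppers}:
    the system is satisfiable over the rationals iff every lower bound
    is at most every upper bound."""
    return all(l <= u for l in lowers for u in uppers)

# x >= 1 and x >= 2 and x <= 3  ->  satisfiable (e.g. x = 2)
sat = eliminate_var([Fraction(1), Fraction(2)], [Fraction(3)])
# x >= 4 and x <= 3             ->  unsatisfiable
unsat = eliminate_var([Fraction(4)], [Fraction(3)])
```

Exposing such elementary steps as individual inference rules, rather than calling the procedure as a black box, is what allows heuristics and tactics to intervene mid-decision.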

This thesis discusses methods for the classification of finite projective planes via exhaustive search. In the main part the author classifies all projective planes of order 16 admitting a large quasiregular group of collineations. This is done by a complete search using the computer algebra system GAP. Computational methods for the construction of relative difference sets are discussed. These methods are implemented in a GAP package, which is available separately. As another result, found in cooperation with U. Dempwolff, the projective planes defined by planar monomials are classified. Furthermore, the full automorphism groups of the non-translation planes defined by planar monomials are determined.
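The computational search for difference sets can be illustrated in miniature (an ordinary planar difference set in Z_7 rather than the relative difference sets of the thesis, and plain Python rather than GAP):

```python
from itertools import combinations

def is_planar_difference_set(D, n):
    """D, a subset of Z_n, is a planar difference set iff every nonzero residue
    occurs exactly once as a difference a - b (mod n) of distinct elements."""
    diffs = sorted((a - b) % n for a in D for b in D if a != b)
    return diffs == list(range(1, n))

# exhaustive search for k = 3, n = 7: these sets generate the Fano plane,
# the projective plane of order 2
found = [D for D in combinations(range(7), 3) if is_planar_difference_set(D, 7)]
```

The searches in the thesis follow the same pattern of exhaustive generation plus a combinatorial test, but over far larger groups, which is why pruning by group-theoretic structure is essential there.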

The new international capital standard for credit institutions (“Basel II”) allows banks to use internal rating systems in order to determine the risk weights that are relevant for the calculation of the capital charge. It is therefore necessary to develop a system that encompasses the main practices and methods existing in the context of credit rating. The aim of this thesis is to propose how to set up a credit rating system: the main techniques used in practice are analyzed, some alternatives are presented, and the problems that can arise from a statistical point of view are considered. Finally, we set up some guidelines on how to accomplish the challenge of credit scoring. The assessment of the quality of a credit with respect to the probability of default is called credit rating. A method based on a multi-dimensional criterion seems natural, owing to the numerous effects that can influence this rating. However, owing to governmental rules, the tendency is that typically one-dimensional criteria will be required in the future as a measure of the creditworthiness or of the quality of a credit. The problem described above can be resolved via a transformation of a multi-dimensional data set into a one-dimensional one while keeping some monotonicity properties and keeping the loss of information (due to the loss of dimensionality) at a minimum level.
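The dimension-reduction idea can be sketched with a toy monotone scorecard (our own illustration; real rating systems calibrate such scores to default probabilities): a weighted sum with nonnegative weights maps the multi-dimensional data to one dimension while preserving monotonicity in every risk factor.

```python
import numpy as np

def scorecard(features, weights):
    """One-dimensional credit score; nonnegative weights guarantee that
    improving any single feature never lowers the score."""
    w = np.asarray(weights, dtype=float)
    if (w < 0).any():
        raise ValueError("weights must be nonnegative to preserve monotonicity")
    return np.asarray(features, dtype=float) @ w

# obligor B dominates obligor A in both features, so B scores at least as high
scores = scorecard([[1.0, 2.0], [2.0, 2.0]], [0.7, 0.3])
```

The statistical work then lies in choosing the weights (or a more general monotone transformation) so that the one-dimensional score loses as little information about default risk as possible.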

Uncoupling protein 1 (UCP1) in brown adipose tissue was discovered early on as the main uncoupling source of respiration. We describe the basic facts and a modest contribution of our group to the area of research on mitochondrial uncoupling proteins. After defining the terms uncoupling, leak and proton-mediated uncoupling, we discuss the assumption that, due to its low abundance, uncoupling protein 2 (UCP2) can provide only mild uncoupling, i.e. can decrease the proton motive force by only several mV. A fatty acid cycling mechanism is described as a plausible explanation for the protonophoretic function of all uncoupling proteins, together with our experiments supporting it. A speculation on the phylogenesis of all uncoupling proteins can be deduced from the estimated UCP2 content in several tissues, and details of its activation are explained on the basis of our experiments. In the present study a solubilization and refolding method for UCP2 from inclusion bodies was developed and characterized. As it was known, and also demonstrated in previous experiments on UCP1, that fatty acids are substrates, we used the same procedure to study the function of UCP2. Utilizing spin-labelled fatty acids (SLFA) in our experiments, we demonstrated the binding of fatty acids to UCP2, and the competition of other natural fatty acids such as oleic acid, palmitic acid, arachidonic acid and eicosatrienoic acid with the preformed complex emphasizes the presence of a fatty acid binding site in mitochondrial UCP2. The findings were observed by EPR spectroscopy, where the highly immobilized spectra in the presence of spin-labelled fatty acid eventually turn into free spin label spectra at a particular concentration of the natural fatty acid added to the UCP2 bound with spin-labelled fatty acid. This is consistent with the earlier findings on UCP1 and also supports a functional explanation of the physiological relationship between the functions of the uncoupling proteins.
The present study, in which representative and sensitive parameters for EPR spectroscopy were established, at the same time describes the concentration effects of fatty acids on the protein bound with spin-labelled fatty acids; these concentrations are of particular importance in comparison to physiological levels, lying in the micromolar (µM) range, as compared with the millimolar (mM) range reported previously for UCP1. In appropriate examples, different fatty acids are used and compared with competitors such as alkylsulfonates, further emphasizing the function of the protein. The studies of nucleotide inhibition demonstrate that there exists a putative binding site for fatty acids. Of particular significance are the spin-labelled ATP studies, in which the competition of ATP with the protein bound to spin-labelled ATP explains the inhibitory effect of nucleotides on UCP2. The present study thus applies different methods to the functional characterization of UCP2. The studies of natural fatty acids and alkylsulfonates with UCP2 bound to spin-labelled fatty acid, and the study of nucleotide inhibition of UCP2, are closely related and give a long-awaited answer to the question of the functional similarities between UCP1 and UCP2. This supports the view of many groups that predict a functional similarity between these two proteins based upon sequence homology. Many attempts to explain the physiological relevance of these functions have been reported in the literature, and on the basis of the conclusions of our experiments the present study adds to this body of work.

Stop Location Design in Public Transportation Networks: Covering and Accessibility Objectives
(2006)

In StopLoc we consider the location of new stops along the edges of an existing public transportation network. Examples of StopLoc include the location of bus stops along some given bus routes or of railway stations along the tracks in a railway system. In order to measure the ''convenience'' of the location decision for potential customers in given demand facilities, two objectives are proposed. In the first, we impose an upper bound on the distance from any demand facility to its closest station and minimize the number of stations. In the second, we fix the number of new stations and minimize the sum of the distances between demand facilities and stations. The resulting two problems, CovStopLoc and AccessStopLoc, are solved by a reduction to a classical set covering problem and a restricted location problem, respectively. We implement the general ideas in two different environments: the plane, where demand facilities are represented by coordinates, and networks, where they are nodes of a graph.
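The reduction of CovStopLoc to set covering can be illustrated with a small sketch: each candidate stop is identified with the set of demand facilities within the distance bound, and a set-cover procedure then selects stops. The greedy heuristic and the data below are invented for illustration; the paper itself reduces to the classical (exact) set covering problem.

```python
def greedy_set_cover(universe, sets):
    """Greedy heuristic for set cover: repeatedly pick the candidate set
    covering the most yet-uncovered elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(sets, key=lambda s: len(sets[s] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("universe cannot be covered")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

# demand facilities 1..5; candidate stops A..C with the facilities each
# one covers within the given distance bound (invented coverage sets)
coverage = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}}
stops = greedy_set_cover({1, 2, 3, 4, 5}, coverage)
print(stops)
```

Stop A covers three new facilities and is chosen first, after which C covers the remaining two, so two stations suffice for this toy instance.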

We study model reduction techniques for frequency averaging in radiative heat transfer. In particular, we employ proper orthogonal decomposition in combination with the method of snapshots to devise an automated a posteriori algorithm, which helps to significantly reduce the dimensionality for further simulations. The reliability of the surrogate models is tested, and we compare the results with two other reduced models, given by the approximation using the weighted sum of gray gases and by a frequency-averaged version of the so-called \(\mathrm{SP}_n\) model. We present several numerical results underlining the feasibility of our approach.
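The snapshot-based POD step can be sketched as follows. The snapshot matrix here is synthetic (an exactly three-dimensional random data set standing in for solutions at different frequencies), so the basis size found by the rank criterion is known in advance; this is a generic illustration, not the algorithm of the paper.

```python
import numpy as np

# Method of snapshots via SVD (synthetic data, for illustration only):
# columns of S play the role of solution snapshots at different frequencies.
rng = np.random.default_rng(0)
modes_true = rng.standard_normal((200, 3))    # hidden three-dimensional structure
coeffs = rng.standard_normal((3, 40))
S = modes_true @ coeffs                       # 40 snapshots in R^200

U, sigma, _ = np.linalg.svd(S, full_matrices=False)
r = int(np.sum(sigma > 1e-10 * sigma[0]))     # numerical rank = number of POD modes
basis = U[:, :r]                              # reduced basis

# projecting a snapshot onto the POD basis reproduces it up to round-off
s = S[:, 0]
rel_err = np.linalg.norm(s - basis @ (basis.T @ s)) / np.linalg.norm(s)
print(r, rel_err)
```

The decay of the singular values is what an a posteriori criterion inspects: here they drop to round-off after three modes, so three basis vectors capture the snapshot set.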

This work deals with the mathematical modeling and numerical simulation of the dynamics of a curved inertial viscous Newtonian fiber, which is applicable in practice to the description of centrifugal spinning processes of glass wool. Neglecting surface tension and temperature dependence, the fiber flow is modeled as a three-dimensional free boundary value problem via the instationary incompressible Navier-Stokes equations. From regular asymptotic expansions in powers of the slenderness parameter, leading-order balance laws for mass (cross-section) and momentum are derived that combine the unrestricted motion of the fiber center-line with the inner viscous transport. The physically reasonable form of the one-dimensional fiber model thereby results from the introduction of the intrinsic velocity that characterizes the convective terms. For the numerical simulation of the derived model a finite volume code is developed. The results of the numerical scheme for high Reynolds numbers are validated by comparison with the analytical solution of the inviscid problem. Moreover, the influence of parameters such as viscosity and rotation on the fiber dynamics is investigated. Finally, an application based on industrial data is performed.

This thesis deals with modeling aspects of generalized Newtonian and of non-Newtonian fluids, as well as with the development and validation of algorithms used in the simulation of such fluids. The main contributions in the modeling part are the introduction and analysis of a new model for generalized Newtonian fluids, where the constitutive equation is of an algebraic form. The distinction between shear and extensional viscosities leads to an anisotropic viscosity model. It can be considered a natural extension of the well-known (isotropic viscosity) Carreau model, which deals only with the shear viscosity properties of the fluid; the proposed model additionally takes extensional viscosity properties into account. Numerical results show that the anisotropic viscosity model gives much better agreement with experimental observations than the isotropic one. Another contribution of the thesis consists of the development and analysis of robust and reliable algorithms for the simulation of generalized Newtonian fluids. For such fluids the momentum equations are strongly coupled through mixed derivatives appearing in the viscous term (unlike the case of Newtonian fluids). It is shown in this thesis that a careful treatment of those derivatives is essential in deriving robust algorithms. A modification of a standard SIMPLE-like algorithm is given, where all viscous terms in the momentum equations are discretized in an implicit manner. Moreover, it is shown that a block diagonal preconditioner for the viscous operator is good enough to be used in simulations. Furthermore, different solution techniques are compared, namely projection-type methods (which solve the momentum equations and a pressure correction equation) and fully coupled methods (where the momentum and continuity equations are solved together). It is shown that explicit discretization of the mixed derivatives leads to stability problems.
Further, analytical estimates of the eigenvalue distribution for three different preconditioners, applied to the transformed system arising after discretization and linearization of the momentum and continuity equations, are provided. We propose to apply a block Gauss-Seidel preconditioner to the transformed system. The analysis shows that this preconditioner is able to cluster the eigenvalues around unity independently of the transformation step, which is not the case for the other preconditioners applied to the transformed system, as discussed in the thesis. The block Gauss-Seidel preconditioner has also shown the best behavior among all preconditioners discussed in the thesis in numerical experiments. A further contribution consists of the comparison and validation of numerical algorithms applied in simulations of non-Newtonian fluids modeled by time-integral constitutive equations. Numerical results from simulations of dilute polymer solutions, described by the integral Oldroyd B model, show very good quantitative agreement with the results obtained by its differential Oldroyd B counterpart in a 4:1 planar contraction domain at low Weissenberg numbers. In this case, the Weissenberg number is changed by changing the relaxation time. However, contrary to the differential Oldroyd B model, the integral one allows stable simulations also in the range of high Weissenberg numbers. Moreover, very good agreement with experimental observations has been achieved. Simulations of concentrated polymer solutions (polystyrene and polybutadiene solutions), modeled by the integral Doi Edwards model supplemented by chain length fluctuations, show very good qualitative agreement with the results obtained by its differential approximation in a 4:1:4 constriction domain. Again, much higher Weissenberg numbers can be reached when the integral model is used.
Moreover, very good quantitative agreement with experimental data for the polystyrene solution is obtained for the first normal stress difference and for the shear viscosity, defined here as the quotient of shear stress and shear rate. Finally, a comparison of the two methods used for approximating the time-integral constitutive equation, namely the Deformation Field Method (DFM) and the Backward Lagrangian Particle Method (BLPM), is performed. In BLPM the particle paths are recalculated at every time step of the simulation, which has never been tried before. The results show that in the considered geometries both methods give similar results.
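For reference, the isotropic Carreau model mentioned above prescribes the shear viscosity as eta(gamma) = eta_inf + (eta0 - eta_inf)(1 + (lambda*gamma)^2)^((n-1)/2). A minimal sketch with illustrative parameter values (not those of the thesis):

```python
def carreau_viscosity(shear_rate, eta0=1.0, eta_inf=0.001, lam=1.0, n=0.5):
    """Shear viscosity of the Carreau model; parameters are illustrative.
    eta0: zero-shear viscosity, eta_inf: infinite-shear viscosity,
    lam: relaxation time, n: power-law index (n < 1 -> shear thinning)."""
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

# shear-thinning behaviour: the viscosity decreases with increasing shear rate
for gamma in (0.0, 1.0, 10.0, 100.0):
    print(gamma, carreau_viscosity(gamma))
```

The anisotropic model of the thesis extends exactly this kind of algebraic constitutive relation by treating extensional viscosity separately.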

On a multigrid solver for the threedimensional Biot poroelasticity system in multilayered domains
(2006)

In this paper, we present problem-dependent prolongation and problem-dependent restriction for a multigrid solver for the three-dimensional Biot poroelasticity system, which is solved in a multilayered domain. The system is discretized on a staggered grid using the finite volume method. During the discretization, special care is taken of the discontinuous coefficients. For an efficient multigrid solver, the need for operator-dependent restriction and/or prolongation arises. We derive these operators so that they are consistent with the discretization. They account for the discontinuities of the coefficients, as well as for the coupling of the unknowns within the Biot system. A set of numerical experiments shows the necessity of using operator-dependent restriction and prolongation in the multigrid solver for the considered class of problems.
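The idea behind operator-dependent prolongation can be illustrated by a one-dimensional analogue (a sketch with invented numbers, not the three-dimensional Biot operators of the paper): requiring continuity of the flux k·u' at a fine-grid point between two coarse-grid points yields coefficient-weighted interpolation instead of the arithmetic average.

```python
def operator_dependent_prolongation(u_left, u_right, k_left, k_right):
    """Interpolate a fine-grid value between two coarse-grid values.
    Flux continuity k_l*(u - u_l) = k_r*(u_r - u) at the fine point gives
    coefficient-weighted interpolation, which stays accurate when the
    coefficient k jumps between layers."""
    return (k_left * u_left + k_right * u_right) / (k_left + k_right)

# with a strong coefficient jump the interpolant follows the stiff side,
# whereas standard prolongation would simply return the average 0.5
u_jump = operator_dependent_prolongation(1.0, 0.0, k_left=1000.0, k_right=1.0)
u_smooth = operator_dependent_prolongation(1.0, 0.0, k_left=1.0, k_right=1.0)
print(u_jump, u_smooth)
```

For smooth coefficients the weighted formula reduces to the standard average, so nothing is lost away from the layer interfaces.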

In this paper we propose a finite volume discretization for the three-dimensional Biot poroelasticity system in multilayered domains. For stability reasons, staggered grids are used. The discretization accounts for the discontinuity of the coefficients across the interfaces between layers with different physical properties. Numerical experiments based on the proposed discretization show second-order convergence in the maximum norm for the primary as well as the flux unknowns of the system. An application example is presented as well.

The fast development of the financial markets in the last decade has led to the creation of a variety of innovative interest rate related products that require advanced numerical pricing methods. Examples in this respect are products with a complicated strong path dependence, such as a Target Redemption Note, a Ratchet Cap, a Ladder Swap and others. On the other hand, the one-factor Hull and White (1990) type of short rate models, standard in the literature, allows only for a perfect correlation between all continuously compounded spot rates or Libor rates and is thus not suited for pricing innovative products that depend on several Libor rates, such as, for example, a "steepener" option. One possible solution to this problem is delivered by two-factor short rate models, and in this thesis we consider a two-factor Hull and White (1990) type of short rate process derived from the Heath, Jarrow, Morton (1992) framework by limiting the volatility structure of the forward rate process to a deterministic one. In this thesis, we often choose to use a variety of modified (binomial, trinomial and quadrinomial) tree constructions as the main numerical pricing tool due to their flexibility and fast convergence, and (when there is no closed-form solution) compare their results with fine-grid Monte Carlo simulations. For the purpose of pricing the innovative short-rate related products mentioned above, we offer and examine two different lattice construction methods for the two-factor Hull-White type of short rate process which can deal easily both with the modeling of the mean reversion of the underlying process and with the strong path dependence of the priced options. Additionally, we prove that the so-called rotated lattice construction method overcomes the problem, typical of existing two-factor tree constructions, of obtaining negative "risk-neutral probabilities".
With a variety of numerical examples, we show that this leads to stable results, especially in cases of high volatility parameters and negative correlation between the base factors (which is typically the case in reality). Further, noting that Chan et al (1992) and Ritchken and Sankarasubramanian (1995) showed that option prices are sensitive to the level of the short rate volatility, we examine the pricing of European and American options where the short rate process has a volatility structure of the Cheyette (1994) type. In this context, we examine the application of the two offered lattice construction methods and compare their results with the Monte Carlo simulation results for a variety of examples. Additionally, for the pricing of American options with the Monte Carlo method we extend and implement the simulation algorithm of Longstaff and Schwartz (2000). With a variety of numerical examples we again compare the stability and the convergence of the different lattice construction methods. Dealing with the problems of pricing strongly path-dependent options, we come across the cumulative Parisian barrier option pricing problem. We note that in their classical form, the cumulative Parisian barrier options have been priced both analytically (in a quasi-closed form) and with a tree approximation (based on the Forward Shooting Grid algorithm, see e.g. Hull and White (1993), Kwok and Lau (2001) and others). However, we offer an additional tree construction method which can be seen as a direct binomial tree integration that uses the analytically calculated conditional survival probabilities. The advantage of the offered method is, on the one hand, that the conditional survival probabilities are easier to calculate than the closed-form solution itself and, on the other hand, that this tree construction is very flexible in the sense that it allows easy incorporation of additional features, such as a forward-starting one.
The obtained results are better than the Forward Shooting Grid tree ones and are very close to the analytical quasi-closed form solution. Finally, we turn our attention to pricing another type of innovative interest-rate-related product, namely the longevity bond, whose coupon payments depend on the survival function of a given cohort. Due to the lack of a market for mortality, for the pricing of longevity bonds we develop (following Korn, Natcheva and Zipperer (2006)) a framework that combines principles from both insurance and financial mathematics. Further on, we calibrate the existing models for the stochastic mortality dynamics to historical German data and additionally offer new stochastic extensions of the classical (deterministic) models of mortality, such as the Gompertz and the Makeham models. Finally, we compare and analyze the results of applying all considered models to the pricing of a longevity bond on the longevity of German males.
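As a generic illustration of the lattice pricing used throughout the thesis, the following is a standard textbook Cox-Ross-Rubinstein binomial tree for a European option; the thesis itself builds two-factor short-rate trees, which are substantially more involved, and all parameter values below are invented.

```python
import math

def crr_price(S0, K, r, sigma, T, steps, call=True):
    """European option price on a Cox-Ross-Rubinstein binomial tree
    (generic sketch, not the two-factor construction of the thesis)."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    q = (math.exp(r * dt) - d) / (u - d)     # risk-neutral up-probability
    disc = math.exp(-r * dt)
    # terminal payoffs at the leaves of the tree
    values = [max(S0 * u**j * d**(steps - j) - K, 0.0) if call
              else max(K - S0 * u**j * d**(steps - j), 0.0)
              for j in range(steps + 1)]
    # backward induction through the lattice
    for _ in range(steps):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

c = crr_price(100, 100, 0.05, 0.2, 1.0, 200, call=True)
p = crr_price(100, 100, 0.05, 0.2, 1.0, 200, call=False)
print(c, p)
```

Because the risk-neutral probability is chosen so the discounted stock price is a martingale, put-call parity C - P = S0 - K·exp(-rT) holds exactly on the tree, which makes a convenient consistency check.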

In this thesis we present the implementation of libraries center.lib and perron.lib for the non-commutative extension Plural of the Computer Algebra System Singular. The library center.lib was designed for the computation of elements of the centralizer of a set of elements and the center of a non-commutative polynomial algebra. It also provides solutions to related problems. The library perron.lib contains a procedure for the computation of relations between a set of pairwise commuting polynomials. The thesis comprises the theory behind the libraries, aspects of the implementation and some applications of the developed algorithms. Moreover, we provide extensive benchmarks for the computation of elements of the center. Some of our examples were never computed before.

This thesis introduces so-called cone scalarising functions. By construction, they are compatible with a partial order on the outcome space given by a cone. The quality of the parametrisations of the efficient set given by the cone scalarising functions is then investigated. Here, the focus lies on the (weak) efficiency of the generated solutions, the reachability of efficient points and the continuity of the solution set. Based on cone scalarising functions, Pareto Navigation, a novel interactive multiobjective optimisation method, is proposed. It changes the ordering cone to realise bounds on partial tradeoffs. Besides, its use of an equality constraint for the changing component of the reference point is a new feature. The efficiency of its solutions, the reachability of efficient solutions and continuity are then analysed. Potential problems are demonstrated using a critical example. Furthermore, the use of Pareto Navigation in a two-phase approach and for nonconvex problems is discussed. Finally, its application to intensity-modulated radiotherapy planning is described, and its realisation in a graphical user interface is shown.

We show the numerical applicability of a multiresolution method based on harmonic splines on the 3-dimensional ball which allows the regularized recovery of the harmonic part of the Earth's mass density distribution from different types of gravity data, e.g. different radial derivatives of the potential, at various positions which need not be located on a common sphere. This approximated harmonic density can be combined with its orthogonal anharmonic complement, e.g. determined from the splitting function of free oscillations, into an approximation of the whole mass density function. The applicability of the presented tool is demonstrated by several test calculations based on simulated gravity values derived from EGM96. The method yields a multiresolution in the sense that the localization of the constructed spline basis functions can be increased, which, in combination with more data, yields a higher resolution of the resulting spline. Moreover, we show that a locally improved data situation allows a highly resolved recovery in this particular area in combination with a coarse approximation elsewhere, which is an essential advantage of this method compared, for example, to polynomial approximation.

We introduce a method to construct approximate identities on the 2-sphere which have an optimal localization. This approach can be used to considerably accelerate the calculation of approximations on the 2-sphere at a comparably small increase of the error. The localization measure in the optimization problem includes a weight function which can be chosen under some constraints. For each choice of weight function, the existence and uniqueness of the optimal kernel are proved, as well as the generation of an approximate identity in the bandlimited case. Moreover, the optimally localizing approximate identity for a certain weight function is calculated and numerically tested.

Katja is a tool that generates order-sorted recursive data types as well as position types for Java from specifications in an enhanced ML-like notation. Katja's main features are the conciseness of its specifications, the rich interface provided by the generated code and the immutability of its types, which is atypical for Java. After several stages of extending and maintaining the Katja project, it became apparent that many changes had to be made: the original design of Katja was not prepared for the introduction of several backends, the introduction of position sorts, and constant feature enhancements and bug fixes. With this report, Katja reaches release status for the first time.

Discontinuities can appear in different fields of mechanics. Some sources of discontinuities, such as the formation of cracks, are obvious; others, such as interfaces between different materials, are less apparent. Furthermore, continuous fields with steep gradients can also be treated as discontinuous fields. This work aims at the inclusion of arbitrary discontinuities within the finite element method. Although the finite element method is the most sophisticated numerical tool in modern engineering, the inclusion of discontinuities is still a challenging task. Traditionally, within the framework of FE methods, discontinuities are modeled explicitly by the construction of the mesh. Thus, when a fixed mesh is used, the position of the discontinuity is prescribed by the location of interelement boundaries and not by the physical situation. The simulation of crack growth then requires a frequent adaption of the mesh, which can be a difficult and computationally expensive task. Thus a more flexible numerical approach is needed, which leads to the mesh-independent representation of the discontinuity. A challenging field where the accurate description of discontinuities is of vital importance is the modeling of failure in engineering materials. The load capacity of a structure is limited by the material strength. If the load limit is exceeded, failure zones arise and grow. Representative examples of failure mechanisms are cracks in brittle materials or shear bands in metals or soils. Failure processes are often accompanied by strain-softening material behaviour (decreasing load-carrying capacity with increasing strain at a material point). It is known that the inclusion of strain-softening material behaviour within a continuum description requires regularization techniques to preserve the well-posedness of the governing equations.
One possibility is the consideration of non-local or gradient terms in the constitutive equations, but these approaches require a sufficiently fine discretization in the localization zone, which leads to a high numerical effort. If the extent of the failure zone and the failure process up to the development of discrete cracks is considered, it seems reasonable to include strong discontinuities. In the framework of fracture mechanics the inclusion of displacement jumps is intuitively comprehensible. However, the modeling of localized failure processes demands the consideration of inelastic material behaviour. Cohesive zone models represent an approach which is especially suited for incorporation within the finite element framework. It is supposed that cohesive tractions are transmitted between the discontinuity surfaces. These tractions are constitutively prescribed by a phenomenological traction-separation law and thus allow for the modeling of different inelastic mechanisms, like micro-crack evolution, initiation of voids, plastic flow or crack bridging. The incorporation of a displacement discontinuity in combination with a cohesive traction-separation law leads to a sound model to describe failure processes and crack propagation. Another area where the existence of discontinuities is not as obvious is the occurrence of material interfaces, inclusions or holes. The accurate modeling of such internal interfaces is important to predict the mechanical behaviour of components. The present discontinuity is of a different nature: the displacement field is continuous but there is a jump in the strains, which is denoted by the expression weak discontinuity. Usually in FE methods material interfaces are taken into account by the mesh construction. But if the structure exhibits multiple inclusions of complex geometry, it can be advantageous if the interface does not have to be meshed. And when we look at problems where the interface moves with time, e.g.
phase transformation, the mesh-independent modeling of the weak discontinuities naturally holds major advantages. The greatest challenge in the modeling of discontinuities is their incorporation into numerical methods. The focus of the present work is the development, analysis and application of a finite element approach to model mesh-independent discontinuities. The method shall be robust and flexible enough to be applicable to both strong and weak discontinuities.
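A one-dimensional sketch shows how a mesh-independent strong discontinuity can enter the approximation (all values invented; the thesis treats the general finite element setting): the standard linear shape functions are enriched with a Heaviside function, so the jump can lie anywhere inside the element instead of being tied to an element boundary.

```python
def shape(x):
    """Standard linear shape functions N1, N2 on the element [0, 1]."""
    return (1.0 - x, x)

def enriched_u(x, u_nodes, a_nodes, x_c):
    """Displacement approximation: continuous part plus a Heaviside-enriched
    part that activates the jump on the right side of x_c."""
    n1, n2 = shape(x)
    heav = 1.0 if x > x_c else 0.0
    return (n1 * u_nodes[0] + n2 * u_nodes[1]
            + heav * (n1 * a_nodes[0] + n2 * a_nodes[1]))

x_c = 0.4                               # discontinuity inside the element
u_nodes, a_nodes = (0.0, 1.0), (0.5, 0.5)
eps = 1e-9
jump = (enriched_u(x_c + eps, u_nodes, a_nodes, x_c)
        - enriched_u(x_c - eps, u_nodes, a_nodes, x_c))
print(jump)
```

The displacement jump at x_c equals the enrichment part N1(x_c)·a1 + N2(x_c)·a2, which is exactly the quantity a cohesive traction-separation law would act on.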

In the Iranian public media, it was widely reported that by the end of 2004, 380 hectares of the farthest eastern end of the peninsula Mianqala (in the northern part of Iran, on the southeastern coast of the Caspian Sea) had been sold to an organisation, with the result that the island "Āŝūrāda" will be turned into a so-called "Tourist Village". The decision has been made and civil works are about to begin. The village, planned as a new settlement, is specifically intended to work with Mianqala, which since June 1976 has been an international biosphere reserve and, since 1969, an Iranian nature protected area. Considering the special status of the region as a biosphere reserve, this paper introduces the current situation of the island Āŝūrāda and the program suggested by the aforementioned organisation. Subsequently, it tries to find an optimal answer to the question of whether "Āŝūrāda" is appropriate for such a purpose and to what extent interference through this new settlement is permissible. For this development, the paper considers the settlement's urban and architectural concept; subsequently, the spatial development of the settlement is analyzed in terms of its influence on the ecological resources, the rural structure and the financial as well as social aspects. Such a study is required particularly because of the chain of tourist influences, which will certainly introduce a new pattern of urban character in terms of quality and quantity. Finally, with the assistance of the case presented, this paper poses the question of whether a new urban pattern like this can endanger a traditional and, above all, a nature-protected context or not.

Web-based authentication is a popular mechanism implemented by Wireless Internet Service Providers (WISPs) because it allows a simple registration and authentication of customers, while avoiding the high resource requirements of the new IEEE 802.11i security standard and the backward compatibility issues of legacy devices. In this work we demonstrate two different and novel attacks against web-based authentication. One attack exploits operational anomalies of low- and middle-priced devices in order to hijack wireless clients, while the other exploits an already known vulnerability within wired networks, which in dynamic wireless environments turns out to be even harder to detect and protect against.

Tropical geometry is a rather new field of algebraic geometry. The main idea is to replace algebraic varieties by certain piecewise linear objects in R^n, which can be studied with the aid of combinatorics. There is hope that many algebraically difficult operations become easier in the tropical setting, as the structure of the objects seems to be simpler. In particular, tropical geometry shows promise for applications in enumerative geometry. Enumerative geometry deals with the counting of geometric objects that are determined by certain incidence conditions. Until around 1990, not many enumerative questions had been answered and there was not much prospect of solving more. But then Kontsevich introduced the moduli space of stable maps, which turned out to be a very useful concept for the study of enumerative geometry. A well-known problem of enumerative geometry is to determine the numbers N_cplx(d,g) of complex genus g plane curves of degree d passing through 3d+g-1 points in general position. Mikhalkin has defined the analogous number N_trop(d,g) for tropical curves and shown that these two numbers coincide (Mikhalkin's Correspondence Theorem). Tropical geometry supplies many new ideas and concepts that could be helpful for answering enumerative problems. However, as a rather new field, tropical geometry still has to be studied more thoroughly. This thesis is concerned with the "translation" of well-known facts of enumerative geometry to tropical geometry. More precisely, the main results of this thesis are: - a tropical proof that N_trop(d,g) is independent of the position of the 3d+g-1 points, - a tropical proof of Kontsevich's recursive formula to compute N_trop(d,0), and - a tropical proof of Caporaso's and Harris' algorithm to compute N_trop(d,g). All results were derived in joint work with my advisor Andreas Gathmann.
(Note that tropical research is not restricted to the translation of classically well-known facts; there are actually new results shown by means of tropical geometry that were not known before. For example, Mikhalkin gave a tropical algorithm to compute the Welschinger invariant for real curves. This shows that tropical geometry can indeed be a tool for a better understanding of classical geometry.)
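To give a flavour of the piecewise linear objects involved (a toy sketch, not part of the thesis): in the min-plus semiring, addition is replaced by min and multiplication by +, so a univariate tropical polynomial is a minimum of affine functions and its graph is piecewise linear; its "roots" are the points where the minimum is attained at least twice.

```python
def tropical_eval(coeffs, x):
    """Evaluate the min-plus tropical polynomial
    min_i (coeffs[i] + i*x) at the point x."""
    return min(a + i * x for i, a in enumerate(coeffs))

# the tropical analogue of "0 + 0*x + 1*x^2" is min(0, x, 1 + 2*x);
# its graph has kinks (tropical roots) at x = -1 and x = 0
coeffs = [0.0, 0.0, 1.0]
print([tropical_eval(coeffs, x) for x in (-2.0, -1.0, 0.0, 2.0)])
```

For large x the constant term wins (smallest slope), for very negative x the quadratic term wins (largest slope), and the kinks in between are the tropical roots.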

The primary object of this work is the development of a robust, accurate and efficient time integrator for the dynamics of flexible multibody systems. In particular, a unified framework is developed for the computational dynamics of multibody systems consisting of mass points, rigid bodies and flexible beams forming open kinematic chains or closed-loop systems. In addition, the work aims at the presentation of (i) a focused survey of the Lagrangian and Hamiltonian formalisms for dynamics, (ii) five different methods to enforce constraints, with their respective relations, and (iii) three alternative ways for the temporal discretisation of the evolution equations. The relations between the different methods of constraint enforcement in conjunction with one specific energy-momentum conserving temporal discretisation method are proved, and their numerical performance is compared by means of theoretical considerations as well as with the help of numerical examples.

The study provides insights into the dynamic processes of vascular epiphyte vegetation in two host tree species of lowland forest in Panama. Further, a novel approach is presented to examine the possible role of host tree identity in the structuring of vascular epiphyte communities: For three locally common host tree species (Socratea exorrhiza, Marila laxiflora, Perebea xanthochyma) we created null models of the expected epiphyte assemblages assuming that epiphyte colonization reflected random distribution of epiphytes in the forest. In all three tree species, abundances of the majority of epiphyte species (69 – 81 %) were indistinguishable from random, while the remaining species were about equally over- or underrepresented compared to their occurrence in the entire forest plot. Permutations based on the number of colonized trees (reflecting observed spatial patchiness) yielded similar results. Finally, a Canonical Correspondence Analysis also confirmed host-specific differences in epiphyte assemblages. In spite of pronounced preferences of some epiphytes for particular host trees, no epiphyte species was restricted to a single host. We conclude that the epiphytes on a given tree species are not simply a random sample of the local species pool, but there are no indications of host specificity either. To determine the qualitative and quantitative long-term changes in the vascular epiphyte assemblage of the host tree Socratea exorrhiza, in the lowland forest of the San Lorenzo Crane Plot, we followed the fate of the vascular epiphyte assemblage on 99 individuals of this palm species, in three censuses over the course of five years. The composition of the epiphyte assemblage changed little during the course of the study. While the similarity of epiphyte vegetation decreased on single palm individuals through time, the similarity analyzed over all palms increased. 
Even well-established epiphyte individuals experienced high mortality, with only 46 % of the originally mapped individuals surviving the following five years. We found a positive correlation between host tree size and epiphyte richness and detected higher colonization rates of epiphytes per surface area on larger trees. Epiphyte assemblages on single Socratea exorrhiza trees were highly dynamic, while the overall composition of the epiphyte vegetation on the host tree species in the study plot was rather stable. We suggest that higher recruitment rates due to localized seed dispersal by already established epiphytes promote the colonization of epiphytes on larger palms. Given the known growth rates and mortality rates of the host tree species, the maximum time available for colonization and reproduction of epiphytes on a given Socratea exorrhiza tree is estimated to be about 60 years. Changes in the epiphyte vegetation of c. 1000 individuals of the host tree species Annona glabra at Barro Colorado Island over the course of eight years were documented by means of repeated censuses. A considerable increase in the abundance of the dominating epiphyte species and the ongoing colonization of the host tree species suggest that the epiphyte vegetation has not reached a steady state in the at most 80 years since the establishment of the host trees. Epiphyte species composition as a whole was rather stable. We disentangled the relationship between epiphyte colonization and tree size/available time for colonization, finding that tree size explained only a low proportion of colonization, while other factors, such as connectivity to dispersal sources and time, may explain a larger part.
Epiphyte populations are patchily distributed, and the examined species exhibited properties of a metapopulation: asynchronous local population growth, high local population turnover, a positive relationship between regional occurrence and patch population size, and a negative relationship between extinction and patch occupancy. The documented metapopulation processes highlight the importance of unoccupied suitable habitat for the conservation of epiphytes.
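The null-model approach described above, comparing observed epiphyte abundances on a focal host species against random placement across all trees, can be sketched as a simple permutation test. This is a minimal illustration with hypothetical data; the thesis uses more elaborate null models, e.g. permutations constrained by the observed number of colonized trees.

```python
import numpy as np

rng = np.random.default_rng(0)

def null_model_pvalue(counts, host_ids, focal_host, species_col, n_perm=2000):
    """Two-sided permutation p-value for one epiphyte species' abundance
    on a focal host species, against random placement across all trees.

    counts     : (n_trees, n_species) abundance matrix
    host_ids   : array of host-species labels, one per tree
    focal_host : label of the host species of interest
    """
    mask = host_ids == focal_host
    observed = counts[mask, species_col].sum()
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(mask)          # reassign host labels at random
        null[i] = counts[perm, species_col].sum()
    # fraction of null draws at least as far from the null mean as the observation
    p = (np.abs(null - null.mean()) >= abs(observed - null.mean())).mean()
    return observed, p
```

A species whose abundance on the focal host is indistinguishable from random yields a large p-value; over- or underrepresented species yield small ones.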

Error estimates for quasistatic global elastic correction and linear kinematic hardening material
(2006)

We consider in this paper the quasistatic boundary value problems of linear elasticity and of nonlinear elastoplasticity with a linear kinematic hardening material. We derive expressions and estimates for the difference of the solutions (i.e. stress, strain and displacement) of both models. Further, we study the error between the elastoplastic solution and the solution of a postprocessing method that corrects the solution of the linear elastic problem in order to approximate the elastoplastic model.

In this article, we give an explicit homotopy between the solutions (i.e. stress, strain, displacement) of the quasistatic linear elastic and nonlinear elastoplastic boundary value problem, where we assume a linear kinematic hardening material law. We give error estimates with respect to the homotopy parameter.

In this article, we give some generalisations of existing Lipschitz estimates for the stop and the play operator with respect to an arbitrary convex and closed characteristic in a separable Hilbert space. We are especially concerned with the dependence of their outputs on different scalar products.

In this article, we consider the quasistatic boundary value problems of linear elasticity and nonlinear elastoplasticity, with linear Hooke’s law in the elastic regime for both problems and with the linear kinematic hardening law for the plastic regime in the latter problem. We derive expressions and estimates for the difference of the solutions of both models, i.e. for the stresses, the strains and the displacements. To this end, we use the stop and play operators of nonlinear functional analysis. Further, we give an explicit example of a homotopy between the solutions of both problems.
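The stop and play operators used in these articles have a simple discrete-time form for piecewise-linear inputs. The following scalar sketch (notation and discretisation are mine, not taken from the papers) illustrates their complementarity: play plus stop reconstructs the input, and the stop output stays within the threshold.

```python
def play_stop(u, r, x0=0.0):
    """Discrete scalar play and stop operators with threshold r > 0.

    For input samples u[0..n], the play operator obeys the update
        x_k = min(u_k + r, max(u_k - r, x_{k-1})),
    and the stop operator is the complementary part u_k - x_k,
    which by construction satisfies |u_k - x_k| <= r.
    """
    x = x0
    play, stop = [], []
    for uk in u:
        x = min(uk + r, max(uk - r, x))   # clip the memory state to [u_k - r, u_k + r]
        play.append(x)
        stop.append(uk - x)
    return play, stop
```

The bounded stop output models the elastic (stress-like) part, while the play output carries the hysteresis memory, which is why these operators appear in the elastoplastic correction estimates.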

A unified approach to Credit Default Swaption and Constant Maturity Credit Default Swap valuation
(2006)

In this paper we examine the pricing of arbitrary credit derivatives with the Libor Market Model with Default Risk. We show how to set up the Monte Carlo simulation efficiently and investigate the accuracy of closed-form solutions for Credit Default Swaps, Credit Default Swaptions and Constant Maturity Credit Default Swaps. In addition, we derive a new closed-form solution for Credit Default Swaptions which allows for time-dependent volatility and an arbitrary correlation structure of default intensities.
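As a rough illustration of Monte Carlo CDS pricing (a toy reduced-form model with a constant default intensity and flat short rate, not the Libor Market Model with Default Risk of the paper), the par spread can be estimated as the ratio of the simulated default leg to the premium annuity:

```python
import math, random

def cds_par_spread_mc(lam, r, T, R=0.4, n_paths=200_000, seed=1):
    """Toy reduced-form CDS pricing: constant hazard rate lam, flat
    short rate r, recovery R, maturity T, continuous premium payment.
    Par spread = E[default leg] / E[premium annuity]."""
    rng = random.Random(seed)
    prot, annuity = 0.0, 0.0
    for _ in range(n_paths):
        tau = rng.expovariate(lam)                    # exponential default time
        if tau < T:
            prot += (1 - R) * math.exp(-r * tau)      # discounted loss at default
            annuity += (1 - math.exp(-r * tau)) / r   # premiums paid until default
        else:
            annuity += (1 - math.exp(-r * T)) / r     # premiums paid to maturity
    return prot / annuity
```

In this constant-intensity setting the closed-form par spread is (1 - R) * lam, which gives a benchmark for the Monte Carlo accuracy in the spirit of the paper's comparisons.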

In this paper a known orthonormal system of time- and space-dependent functions, derived from the Cauchy-Navier equation for elastodynamic phenomena, is used to construct reproducing kernel Hilbert spaces. After choosing one of the spaces, the corresponding kernel is used to define a function system that serves as a basis for a spline space. We show that under certain conditions there exists a unique interpolating or approximating spline, respectively, in this space with respect to given samples of an unknown function. The name "spline" here refers to its property of minimising a norm among all interpolating functions. Moreover, a convergence theorem and an error estimate relative to the point grid density are derived. As a numerical example, we investigate the propagation of seismic waves.
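The minimum-norm interpolation underlying such spline spaces can be sketched generically: the interpolant is a kernel expansion whose coefficients solve a linear system with the Gram matrix. The Gaussian kernel below is only a stand-in for the Cauchy-Navier-based kernel of the paper.

```python
import numpy as np

def kernel_interpolant(X, y, kernel):
    """Minimum-norm (spline) interpolant in the RKHS of `kernel`:
    solve K c = y for the Gram matrix K_ij = kernel(X_i, X_j), then
    s(x) = sum_j c_j kernel(x, X_j) reproduces y at the nodes."""
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])
    c = np.linalg.solve(K, y)
    return lambda x: sum(cj * kernel(x, xj) for cj, xj in zip(c, X))

# Gaussian kernel as a stand-in for the paper's reproducing kernel
gauss = lambda x, z, s=1.0: np.exp(-((x - z) ** 2) / (2 * s * s))
```

Because the kernel is positive definite, the Gram matrix is invertible and the interpolant is unique, mirroring the uniqueness statement in the abstract.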

The desire to simulate more and more geometrical and physical features of technical structures and the availability of parallel computers and parallel numerical solvers which can exploit the power of these machines have led to a steady increase in the number of grid elements used. Memory requirements and computational time are too large for typical serial PCs. An a priori partitioning algorithm for the parallel generation of 3D nonoverlapping compatible unstructured meshes based on a CAD surface description is presented in this paper. Emphasis is given to practical issues and implementation rather than to theoretical complexity. To achieve robustness of the algorithm with respect to the geometrical shape of the structure, we propose a series of many, but relatively simple, algorithmic steps. The geometrical domain decomposition approach has been applied. It allows us to use classic 2D and 3D high-quality Delaunay mesh generators for independent and simultaneous volume meshing. Different aspects of load balancing methods are also explored in the paper. The MPI library and SPMD model are used for the parallel grid generator implementation. Several 3D examples are shown.

Annual Report 2005
(2006)

Annual Report, Jahrbuch AG Magnetismus

During recent years, multiobjective evolutionary algorithms have matured into a flexible optimization tool that can be used in various areas of real-life applications. Practical experience has shown that the algorithms typically need substantial adaptation to the specific problem for a successful application. Considering these requirements, we discuss various issues of the design and application of multiobjective evolutionary algorithms to real-life optimization problems. In particular, questions of problem-specific data structures and evolutionary operators and the determination of method parameters are treated. As a major issue, the handling of infeasible intermediate solutions is pointed out. Three application examples in the areas of constrained global optimization (electronic circuit design), semi-infinite programming (design centering problems), and discrete optimization (project scheduling) are discussed.
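One common way to handle infeasible intermediate solutions in multiobjective evolutionary algorithms is constraint-domination (Deb's rule). Whether the paper adopts this particular rule is an assumption; the sketch below only illustrates the issue the abstract raises.

```python
def dominates(a, b):
    """Pareto dominance for minimization: a dominates b if it is no
    worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def constrained_dominates(a, b, viol_a, viol_b):
    """Constraint-domination: any feasible solution beats any infeasible
    one, smaller total constraint violation beats larger, and feasible
    pairs are compared by ordinary Pareto dominance."""
    if viol_a == 0 and viol_b > 0:
        return True
    if viol_a > 0 and viol_b == 0:
        return False
    if viol_a > 0 and viol_b > 0:
        return viol_a < viol_b
    return dominates(a, b)
```

Such a comparator slots directly into selection and archiving, so infeasible individuals can still carry information early in the run without ever outranking feasible ones.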

Selection of new projects is one of the major decision making activities in any company. Given a set of potential projects to invest in, a subset which best matches the company's strategy and internal resources has to be selected. In this paper, we propose a multicriteria model for portfolio selection of projects, taking into consideration that each of the potential projects has several - usually conflicting - values.

Using covering problems (CoP) combined with binary search is a well-known and successful solution approach for solving continuous center problems. In this paper, we show that this is also true for center hub location problems in networks. We introduce and compare various formulations for hub covering problems (HCoP) and analyse the feasibility polyhedron of the most promising one. Computational results using benchmark instances are presented. These results show that the new solution approach performs better in most examples.
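The combination of covering problems with binary search can be sketched on a tiny p-center instance: sort the candidate radii, binary-search over them, and at each radius test the covering problem of whether p centers suffice. The brute-force feasibility check below is only a sketch, not the HCoP formulations of the paper.

```python
from itertools import combinations

def p_center_via_covering(dist, p):
    """Solve a small p-center instance: find the smallest radius r such
    that p centers (chosen among the nodes) cover every node within
    distance r, by binary search over the sorted candidate radii."""
    n = len(dist)
    radii = sorted({dist[i][j] for i in range(n) for j in range(n)})

    def coverable(r):
        # covering problem: do some p centers reach all nodes within r?
        return any(all(min(dist[c][j] for c in centers) <= r for j in range(n))
                   for centers in combinations(range(n), p))

    lo, hi = 0, len(radii) - 1
    while lo < hi:                       # binary search on the radius
        mid = (lo + hi) // 2
        if coverable(radii[mid]):
            hi = mid
        else:
            lo = mid + 1
    return radii[lo]
```

Since coverable(r) is monotone in r, the binary search needs only O(log |radii|) covering-problem solves, which is the efficiency argument behind the CoP/HCoP approach.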

In this paper we present and investigate a stochastic model for the lay-down of fibers on a conveyor belt in the production process of nonwovens. The model is based on a stochastic differential equation taking into account the motion of the fiber under the influence of turbulence. A reformulation as a stochastic Hamiltonian system and an application of the stochastic averaging theorem lead to further simplifications of the model. Finally, the model is used to compute the distribution of functionals of the process that might be helpful for the quality assessment of industrial fabrics.
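A lay-down model of this type can be simulated with an Euler-Maruyama scheme. The coefficients below (quadratic potential, unit lay-down speed, angle-driven noise) are simplifying assumptions for illustration, not the paper's exact equation.

```python
import math, random

def simulate_laydown(T=10.0, dt=1e-3, A=1.0, seed=2):
    """Euler-Maruyama discretisation of a simplified fiber lay-down SDE:
        dx     = tau(alpha) dt,            tau(a) = (cos a, sin a)
        dalpha = -grad V(x) . tau_perp(a) dt + A dW,
    with quadratic potential V(x) = |x|^2 / 2 steering the fiber back
    toward the deposition point and white noise modelling turbulence."""
    rng = random.Random(seed)
    x = y = a = 0.0
    path = [(x, y)]
    sq = math.sqrt(dt)
    for _ in range(round(T / dt)):
        ca, sa = math.cos(a), math.sin(a)
        drift = x * sa - y * ca                    # -(x, y) . (-sin a, cos a)
        a += drift * dt + A * sq * rng.gauss(0.0, 1.0)
        x += ca * dt
        y += sa * dt
        path.append((x, y))
    return path
```

From the simulated path one can estimate functionals of the process, e.g. the distribution of the deposition radius, in the spirit of the quality measures mentioned in the abstract.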

Over a period of 30 years, ITU-T’s Specification and Description Language (SDL) has matured to a sophisticated formal modelling language for distributed systems and communication protocols. The language definition of SDL-2000, the latest version of SDL, is complex and difficult to maintain. Full tool support for SDL is costly to implement. Therefore, only subsets of SDL are currently supported by tools. These SDL subsets - called SDL profiles - already cover a wide range of systems, and are often sufficient in practice. In this report, we present our approach for extracting the formal semantics for SDL profiles from the complete SDL semantics. We then formalise the approach, present our SDL-profile tool, and report on our experiences.