
Direct entry into phosphorus chemistry is possible through the reaction of a crude solution of [Cp=Ru(CO)2H] (1) (Cp= = C5H3(SiMe3)2) with P4. This yields the complex [Cp=Ru(CO)2PH2] (6a) with a free phosphanido group; 6a itself cannot be isolated, but its existence is unambiguously established spectroscopically and through subsequent reactions. The lone pair of the phosphorus atom can be complexed both by [M(CO)5(thf)] (M = Cr, Mo, W) (7, 8, 9) and by [Cp*Re(CO)2(thf)] (14) after loss of their thf ligands. [{Cp*(OC)2Re}2(µ-CO)] (17), formed as a by-product in the reaction with 14, was examined by X-ray structure analysis for the first time. Cothermolysis of [{Cp=Ru(CO)2}2] (4) with P4 affords [Cp=Ru(η5-P5)] (18) and [{Cp=Ru}2(µ-η2:2-P2)2] (19), both of which were characterized by X-ray structure analysis. 18 proves to be the first pentaphosphametallocene with an almost ideally eclipsed conformation. The dinuclear complex 19 unambiguously establishes a group 8 complex with two separate P2 bridging ligands. Cothermolysis of 4 with [Cp*Fe(η5-P5)] affords, in addition to 18 and 19, the homo- and heterotrimetallic clusters [{Cp=Ru}n{Cp*Fe}3-nP5] (n = 1, 2, 3) with distorted triangulated-dodecahedral framework structures. The X-ray structure analyses of [{Cp=Ru}3P5] (24), [{Cp=Ru}2{Cp*Fe}P5] (25), and [{Cp*Fe}2{Cp=Ru}P5] (26) show that the metal fragments {Cp*Fe} and {Cp=Ru} can be exchanged with almost no effect on the framework structure. Comparison with the Cp derivatives shows that the compounds are nearly identical. In 26, the position of the Cp= ligand on the ruthenium is retained after incorporation of an iron fragment, whereas in [{Cp*Fe}2{CpRu}P5] (26a) the position of the Cp ligand changes. Photolysis of [Cp=Ru(CO)2H] (1) in THF gives the spectroscopically identified compounds [{Cp=Ru(CO)}2(µ-H)2] (28), [{Cp=Ru(CO)}2{Cp=Ru(CO)H}] (29), and [{Cp=Ru}4(µ3-CO)4] (30).
Compound 28, obtained in small amounts, was identified as a dimeric structure with two bridging hydrogen atoms. For complex 29, a triangular trimetallic ring structure closely related to compounds 33 and 34 was postulated. 29 is a 46 VE complex with a Ru-Ru double bond in the three-membered ring. Compound 30, with a tetrahedral Ru4 framework, can be prepared in high yield. 30 is air- and water-stable and extremely unreactive. If the photolysis of 1 is carried out in hexane as a non-coordinating solvent, the complexes [{Cp=Ru(µ-CO)}2{Cp=Ru(CO)H}] (33), [{{{Cp=Ru}2(µ-CO)}{Cp=RuH}}(µ3-CO)2] (34), and the unknown compound 35 are obtained in addition to 28 and 29. X-ray structure analyses of 33 and 34 reveal two electron-deficient (46 VE) triangular Ru3 clusters with an M-M double bond in the ring, both conforming to the magic-number rule. 33 reacts readily with P4 to give [{Cp=Ru}(µ-η4:1:1-P4){Ru(CO)Cp=}] (40). The X-ray structure analysis of 40 shows that Ru1 is bent 25.3° out of the plane of the four phosphorus atoms P1-P4, so that no planar tetraphospharuthenole is realized. The top view of the P4Ru plane nevertheless clearly reveals the tendency to adopt the constitution of a pentagonal pyramid.

Destructive diseases of the lung, like lung cancer or fibrosis, are still often lethal. Likewise, in the case of fibrosis in the liver, the only possible cure is transplantation.
In this thesis, we investigate 3D synchrotron radiation micro computed tomography (SR\( \mu \)CT) images of capillary blood vessels in mouse lungs and livers. The specimens show so-called compensatory lung growth as well as different stages of pulmonary and hepatic fibrosis.
During compensatory lung growth, after resection of part of the lung, the remaining part compensates for this loss by extending into the empty space. This process is accompanied by active vessel growth.
In general, the human lung cannot compensate for such a loss. Thus, understanding this process in mice is important to improve treatment options for diseases like lung cancer.
In case of fibrosis, the formation of scars within the organ's tissue forces the capillary vessels to grow to ensure blood supply.
Thus, the process of fibrosis as well as compensatory lung growth can be accessed by considering the capillary architecture.
As the preparation of 2D microscopic images is faster, easier, and cheaper than that of SR\( \mu \)CT images, they currently form the basis of medical investigation. Yet, characteristics like the direction and shape of objects can only properly be analyzed using 3D imaging techniques. Hence, analyzing SR\( \mu \)CT data provides valuable additional information.
For the fibrotic specimens, we apply image analysis methods well known from materials science. We measure the vessel diameter using the granulometry distribution function and describe the inter-vessel distance by the spherical contact distribution. Moreover, we estimate the directional distribution of the capillary structure. All features turn out to be useful for characterizing fibrosis based on the deformation of capillary vessels.
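The granulometry idea mentioned above can be sketched with standard morphological tools. The following is a minimal sketch, not the thesis' implementation: opening a binary image with disks of growing radius and recording how much foreground survives yields a size distribution of the structure widths. The toy image and all names are illustrative.

```python
# Granulometry by successive morphological openings: the foreground removed
# when opening with a disk of radius r tells us how much of the structure is
# "thinner" than r. Toy example, not the thesis' code.
import numpy as np
from scipy import ndimage


def granulometry(binary: np.ndarray, max_radius: int):
    """Fraction of foreground surviving an opening with radius r, for r = 1..max_radius."""
    total = binary.sum()
    surviving = []
    for r in range(1, max_radius + 1):
        # Disk-shaped structuring element of radius r.
        y, x = np.ogrid[-r:r + 1, -r:r + 1]
        disk = x * x + y * y <= r * r
        opened = ndimage.binary_opening(binary, structure=disk)
        surviving.append(opened.sum() / total)
    return surviving


# Toy "vessel" image: one thin and one thick horizontal bar.
img = np.zeros((40, 40), dtype=bool)
img[5:8, 5:35] = True    # bar of thickness 3
img[20:29, 5:35] = True  # bar of thickness 9

g = granulometry(img, 6)
# The drop between consecutive radii locates the dominant structure widths.
```

The steep drops of the surviving fraction mark the radii of the two bars, which is exactly how a granulometry distribution encodes vessel diameters.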
It is already known that the most efficient mechanism of vessel growth forms small torus-shaped holes within the capillary structure, so-called intussusceptive pillars. Analyzing their location and number contributes strongly to the characterization of vessel growth and is hence of great interest for all three applications. This thesis provides the first algorithm to detect intussusceptive pillars in SR\( \mu \)CT images. After segmentation of the raw image data, our algorithm works automatically and allows for a quantitative evaluation of large amounts of data.
The analysis of SR\( \mu \)CT data using our pillar algorithm as well as the granulometry, spherical contact distribution, and directional analysis extends the current state-of-the-art in medical studies. Although it is not possible to replace certain 3D features by 2D features without losing information, our results could be used to examine 2D features approximating the 3D findings reasonably well.

The various uses of fiber-reinforced composites, for example in the enclosures of planes, boats, and cars, generate the demand for a detailed analysis of these materials. The final goal is to optimize fibrous materials by means of “virtual material design”. New fibrous materials are virtually created as realizations of a stochastic model and evaluated with physical simulations. In that way, materials can be optimized for specific use cases without constructing expensive prototypes or performing mechanical experiments. In order to design a practically fabricable material, the stochastic model is first adapted to an existing material and then slightly modified. The virtual reconstruction of the existing material requires precise knowledge of the geometry of its microstructure. The first part of this thesis describes a fiber quantification method based on local measurements of the fiber radius and orientation. The combination of a sparse chord length transform and inertia moments leads to an efficient and precise new algorithm. It outperforms existing approaches in its ability to treat different fiber radii within one sample, with high precision in continuous space and comparably fast computing times. This local quantification method can be applied directly to gray-value images by adapting the directional distance transforms to gray values. In this work, several approaches of this kind are developed and evaluated. Further characterization of the fiber system requires a segmentation of each single fiber. Using basic morphological operators with specific structuring elements, it is possible to derive, for each pixel, a probability of belonging to a fiber core in a region without overlapping fibers. Tracking high probabilities leads to a partial reconstruction of the fiber cores in non-crossing regions.
These core parts are then reconnected across critical regions if they fulfill certain conditions ensuring that they belong to the same fiber. In the second part of this work, we develop a new stochastic model for dense systems of non-overlapping fibers with a controllable level of bending. Existing approaches in the literature have at least one weakness: in achieving high volume fractions, in producing non-overlapping fibers, or in controlling the bending or the orientation distribution. This gap is bridged by our stochastic model, which operates in two steps. First, a random walk with the multivariate von Mises-Fisher orientation distribution defines bent fibers. Second, a force-biased packing approach arranges them in a non-overlapping configuration. Furthermore, we provide estimators for all parameters needed to fit this model to a real microstructure. Finally, we simulate the macroscopic behavior of different microstructures to derive their mechanical and thermal properties. This part is mostly supported by existing software and serves as a summary of physical simulation applied to random fiber systems. The application to a glass-fiber-reinforced polymer proves the quality of the reconstruction by our stochastic model, as the effective properties match for both the real microstructure and the realizations of the fitted model. This thesis includes all steps needed to successfully perform virtual material design on various data sets. With novel and efficient algorithms, it contributes to the science of analysis and modeling of fiber-reinforced materials.
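The first of the two modeling steps above, a random walk whose step directions cluster around the previous direction, can be sketched in a few lines. For brevity, a simple Gaussian perturbation stands in for the multivariate von Mises-Fisher distribution used in the thesis, and the packing step is omitted; all names and parameters are illustrative.

```python
# Sketch of step (1) of the fiber model: a direction-correlated random walk
# produces a bent fiber polyline. A Gaussian perturbation replaces the
# multivariate von Mises-Fisher distribution of the thesis (assumption).
import numpy as np


def bent_fiber(n_steps: int, step: float, bending: float, seed: int = 0) -> np.ndarray:
    """Polyline of n_steps+1 points; `bending` controls the direction spread."""
    rng = np.random.default_rng(seed)
    points = [np.zeros(3)]
    direction = np.array([1.0, 0.0, 0.0])  # initial fiber orientation
    for _ in range(n_steps):
        # Perturb the current direction and renormalize: small `bending`
        # gives nearly straight fibers, large `bending` strongly curved ones.
        direction = direction + bending * rng.normal(size=3)
        direction /= np.linalg.norm(direction)
        points.append(points[-1] + step * direction)
    return np.array(points)


straight = bent_fiber(100, 1.0, bending=0.0)
curved = bent_fiber(100, 1.0, bending=0.5)

# End-to-end distance shrinks relative to contour length as bending grows.
contour = 100 * 1.0
print(np.linalg.norm(straight[-1] - straight[0]) / contour)  # 1.0
print(np.linalg.norm(curved[-1] - curved[0]) / contour)      # < 1
```

A force-biased packing pass would then iteratively push overlapping polylines apart until a non-overlapping configuration is reached.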

Photolysis of [{CpRRu(CO)2}2] (CpR = Cp", Cp*) (1a,b) in the presence of white phosphorus gives no isolable phosphorus-containing products. If [{Cp"Ru(CO)2}2] (1a) is reacted with white phosphorus at 190 °C in decalin, the only compounds that can be separated by column chromatography are the pentaphospharuthenocene derivative [Cp"Ru(η5-P5)] (3a, 6 % yield) and [Cp"2Ru2P4] (4a, 17 % yield). On the basis of its spectroscopic properties, a structure analogous to the X-ray-characterized pseudo-triple-decker complexes [{CpRFe}2(µ-η4:4-P4)] (CpR = Cp" [35], Cp"' [11]) with an s-cis-tetraphosphabutadienediyl "middle deck" is proposed for 4a; however, two µ-η2:2-P2 ligands are also conceivable. Cothermolysis of [{Cp"Ru(CO)2}2] (1a) and [Cp*Fe(η5-P5)] (2b) gives a broad product spectrum. While [Cp"Ru(η5-P5)] (3a), which could not be separated chromatographically from 2b, and [Cp"2Ru2P4] (4a, 6 % yield) are formed in comparatively small amounts, [{Cp"Ru}3P5] (6) can be isolated in 17 % yield, [{Cp"Ru}2{Cp*Fe}P5] (7) in 7 % yield, [{Cp*Fe}2{Cp"Ru}P5] (8) in 22 % yield, and [{Cp"Ru}3{Cp*Fe}(P2)2] (9) in 14 % yield. For 9, based on the NMR spectroscopic findings, a structure analogous to [{CpFe}4(P2)2] (10) [24] is proposed, with a distorted triangulated-dodecahedral framework whose four vertices of connectivity five are occupied by three ruthenium atoms and one iron atom and whose four vertices of connectivity four are occupied by two µ-η2:2:1:1-P2 units. Like 10, 9 is a cluster of the hypercloso type (n+1 = 8 SEP). [30,31] X-ray structure analyses show that 6, 7, and 8 likewise possess distorted triangulated-dodecahedral framework structures.
This also clarifies the structure of the previously synthesized and spectroscopically characterized complexes [{CpRFe}3P5] (CpR = Cp*, Cp*') (11b,c) [8], whose NMR data indicate a close relationship, in particular with 6. Compared with 9 and 10 [24], which have an M4P4 framework (M = a transition metal atom in general), in the clusters 6, 7, 8, and 11b,c [8] with an M3P5 framework a 13 VE metal-complex fragment (connectivity five; 1 skeletal electron) is formally replaced by a phosphorus atom (3 skeletal electrons), completing the transition to the closo structure type of the triangulated dodecahedron (n+1 = 9 SEP) [30,31]. The clusters 6, 7, 8, and 11b,c [8] contain a previously unknown coordination mode of the P5 unit. Thermal reaction of [{Cp*Ru(CO)2}2] (1b) with [Cp*Ru(η5-P5)] (3b) gives as the main product the trinuclear complex [{Cp*Ru}3(P4)(P)] (17b, 62 % yield). The only by-product that can be isolated by column chromatography is [Cp*2Ru2P4] (4b, 5 % yield). By analogy with the X-ray-characterized [{Cp*'Fe}3(P4)(P){Mo(CO)5}] (18c) [8], the NMR data of 17b point to a cubane-like structure in which the five phosphorus atoms are present as an isotetraphosphide unit and a single phosphorus atom. The total of 64 valence electrons is consistent with three metal-metal bonds [31]. For 4b, as for the Cp" derivative 4a discussed above, a pseudo-triple-decker structure with an s-cis-tetraphosphabutadienediyl "middle deck" or with two µ-η2:2-P2 ligands is to be considered. Thermolysis of [{Cp*'Ru(CO)2}2] (1c) with 3b gives the complexes [Cp*'nCp*2-nRu2P4] (n = 0, 1, 2) (4b,d,c) and [{Cp*'Ru}n{Cp*Ru}3-n(P4)(P)] (n = 1, 2, 3) (17e,d,c), each as inseparable mixtures of analogously constructed compounds that differ only in the ratio of the two cyclopentadienyl ligands Cp*' and Cp*.
To a small extent, a cyclo-P5 transfer is also observed, forming the literature-known [Cp*'Ru(η5-P5)] [7,10], which is obtained as a mixture with unreacted 3b. Reaction with [W(CO)5(thf)] allows complexation of the triangulated-dodecahedral cluster [{Cp*Fe}2{Cp"Ru}P5] (8) to the monoadduct [{Cp*Fe}2{Cp"Ru}P5{W(CO)5}] (19), whereas the attempted complexation of the likewise triangulated-dodecahedral cluster [{Cp*'Fe}3P5] (11c) with [Mo(CO)5(thf)] [8] leads to a framework rearrangement to the cubane-like compound [{Cp*'Fe}3(P4)(P){Mo(CO)5}] (18c). The structure proposed for 19 is based on its 31P NMR spectrum. Attempts to oxidize 8 with yellow sulfur at room temperature lead to unspecific decomposition or secondary reactions of 8. Exploratory experiments with gray selenium as a milder oxidant indicate that, under suitable reaction conditions, a single selenation occurs at the terminal phosphorus atom of the P5 ligand of 8. In the reaction of [{Cp"Ru}3{Cp*Fe}(P2)2] (9) with yellow sulfur, up to three phosphorus atoms can be oxidized, depending on the reaction conditions. Complete sulfurization, as in the case of [{CpFe}4(P2S2)2] [24], is not observed. The sulfurization is regioselective: for a given degree of sulfurization, only a single product is obtained in each case. The structures proposed for the clusters 21-23 are derived from their 31P NMR spectroscopic data.

A Consistent Large Eddy Approach for Lattice Boltzmann Methods and its Application to Complex Flows
(2015)

Lattice Boltzmann Methods have proven to be promising tools for solving fluid flow problems. This is related to the advantages of these methods, among them the simplicity of handling complex geometries and the high efficiency in calculating transient flows. Lattice Boltzmann Methods are mesoscopic methods based on discrete particle dynamics. This is in contrast to conventional Computational Fluid Dynamics methods, which are based on solving the continuum equations. Calculations of turbulent flows in engineering generally depend on modeling, since resolving all turbulent scales is, and in the near future will remain, far beyond computational capabilities. One of the most promising modeling approaches is large eddy simulation, in which the large, inhomogeneous turbulence structures are computed directly and the smaller, more homogeneous structures are modeled.
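The mesoscopic "collide and stream" dynamics referred to above can be illustrated with a minimal D2Q9 BGK step. This is a generic textbook sketch with no turbulence model and a fully periodic domain, not the scheme of the thesis; all parameter values are illustrative.

```python
# Minimal D2Q9 BGK lattice Boltzmann step: relax the particle populations
# toward a local equilibrium (collision), then shift them along the lattice
# velocities (streaming). Periodic boundaries; illustrative parameters only.
import numpy as np

# D2Q9 lattice: discrete velocities and weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.8  # BGK relaxation time (sets the viscosity)

def equilibrium(rho, u):
    cu = np.einsum('qd,xyd->qxy', c, u)
    usq = np.einsum('xyd,xyd->xy', u, u)
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f):
    rho = f.sum(axis=0)
    u = np.einsum('qxy,qd->xyd', f, c) / rho[..., None]
    f = f + (equilibrium(rho, u) - f) / tau              # collision
    for q in range(9):                                   # streaming (periodic)
        f[q] = np.roll(f[q], shift=tuple(c[q]), axis=(0, 1))
    return f

# Start from a small density bump at rest; mass is conserved by construction.
nx = ny = 32
rho0 = np.ones((nx, ny)); rho0[16, 16] += 0.1
f = equilibrium(rho0, np.zeros((nx, ny, 2)))
for _ in range(50):
    f = step(f)
```

A large eddy extension would, roughly speaking, make `tau` a local quantity that grows with the resolved strain rate, which is the role of the subgrid scale model discussed next.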
In this thesis, a consistent large eddy approach for the Lattice Boltzmann Method is introduced. This large eddy model includes, besides a subgrid scale model, appropriate boundary conditions for wall resolved and wall modeled calculations. It also provides conditions for turbulent domain inlets. For the case of wall modeled simulations, a two layer wall model is derived in the Lattice Boltzmann context. Turbulent inlet conditions are achieved by means of a synthetic turbulence technique within the Lattice Boltzmann Method.
The proposed approach is implemented in the Lattice Boltzmann based CFD package SAM-Lattice, which has been created in the course of this work. SAM-Lattice is capable of calculating incompressible or weakly compressible, isothermal flows of engineering interest in complex three-dimensional domains. Special design targets of SAM-Lattice are a high degree of automation and high performance.
Validation of the suggested large eddy Lattice Boltzmann scheme is performed for pump intake flows, which have not yet been treated by LBM even though this numerical method is very well suited to such vortical flows in complicated domains. In general, applications of LBM to hydrodynamic engineering problems are rare. The results of the pump intake validation cases reveal that the proposed numerical approach is able to represent the very complex flows in the intakes both qualitatively and quantitatively. The findings provided in this thesis can serve as the basis for a broader application of LBM to hydrodynamic engineering problems.

The growing computational power enables the establishment of the Population Balance Equation (PBE)
to model the steady-state and dynamic behavior of multiphase flow unit operations. The two-phase
flow behavior inside liquid-liquid extraction equipment is characterized by different factors. These
factors include interactions among droplets (breakage and coalescence), different time scales due to the
size distribution of the dispersed phase, and the micro time scales of the interphase diffusional mass
transfer process. As a result, the general PBE has no well-known analytical solution, and robust
numerical solution methods with low computational cost are therefore highly desirable.
In this work, the Sectional Quadrature Method of Moments (SQMOM) (Attarakih, M. M., Drumm, C.,
Bart, H.-J. (2009). Solution of the population balance equation using the Sectional Quadrature Method of
Moments (SQMOM). Chem. Eng. Sci. 64, 742-752) is extended to take into account the continuous flow
systems in spatial domain. In this regard, the SQMOM is extended to solve the spatially distributed
nonhomogeneous bivariate PBE to model the hydrodynamics and physical/reactive mass transfer
behavior of liquid-liquid extraction equipment. Based on the extended SQMOM, two different steady
state and dynamic simulation algorithms for hydrodynamics and mass transfer behavior of liquid-liquid
extraction equipment are developed and efficiently implemented. At the steady state modeling level, a
Spatially-Mixed SQMOM (SM-SQMOM) algorithm is developed and successfully implemented in a one-dimensional
physical spatial domain. The integral spatial numerical flux is closed using the mean mass
droplet diameter based on the One Primary and One Secondary Particle Method (OPOSPM which is the
simplest case of the SQMOM). On the other hand the hydrodynamics integral source terms are closed
using the analytical Two-Equal Weight Quadrature (TEqWQ). To avoid the numerical solution of the
droplet rise velocity, an analytical solution based on the algebraic velocity model is derived for the
particular case of unit velocity exponent appearing in the droplet swarm model. In addition to this, the
source term due to mass transport is closed using OPOSPM. The resulting system of ordinary differential
equations with respect to space is solved using the MATLAB adaptive Runge–Kutta method (ODE45). At
the dynamic modeling level, the SQMOM is extended to a one-dimensional physical spatial domain and
resolved using the finite volume method. To close the mathematical model, the required quadrature nodes
and weights are calculated using the analytical solution based on the Two Unequal Weights Quadrature
(TUEWQ) formula. By applying the finite volume method to the spatial domain, a semi-discrete ordinary
differential equation system is obtained and solved. Both steady state and dynamic algorithms are
extensively validated at analytical, numerical, and experimental levels. At the numerical level, the
predictions of both algorithms are validated using the extended fixed pivot technique as implemented in
PPBLab software (Attarakih, M., Alzyod, S., Abu-Khader, M., Bart, H.-J. (2012). PPBLAB: A new
multivariate population balance environment for particulate system modeling and simulation. Procedia
Eng. 42, pp. 144-562). At the experimental validation level, the extended SQMOM is successfully used
to model the steady state hydrodynamics and physical and reactive mass transfer behavior of agitated
liquid-liquid extraction columns under different operating conditions. In this regard, both models are
found efficient and able to follow liquid extraction column behavior during column scale-up, where three
column diameters were investigated (DN32, DN80, and DN150). To shed more light on the local
interactions among the contacted phases, a reduced coupled PBE and CFD framework is used to model
the hydrodynamic behavior of pulsed sieve plate columns. In this regard, OPOSPM is utilized and
implemented in FLUENT 18.2 commercial software as a special case of the SQMOM. The droplet-droplet
interactions (breakage and coalescence) are taken into account using OPOSPM, while the required
information about the velocity field and energy dissipation is calculated by the CFD model. In addition to this,
the proposed coupled OPOSPM-CFD framework is extended to include the mass transfer. The
proposed framework is numerically tested and the results are compared with the published experimental
data. The required breakage and coalescence parameters to perform the 2D-CFD simulation are estimated
using PPBLab software, where a 1D-CFD simulation using a multi-sectional grid is performed. A very
good agreement is obtained at the experimental and the numerical validation levels.
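The moment-closure idea behind OPOSPM, tracking only the total droplet number and the dispersed-phase volume and recovering a mean mass diameter from them, can be sketched for a deliberately simple case. A batch system with a constant binary breakage frequency is assumed here (illustrative, not the thesis' column model); scipy's `solve_ivp` plays the role that MATLAB's ODE45 plays in the abstract.

```python
# Minimal OPOSPM-style moment sketch: track only the total droplet number N
# and the dispersed-phase volume V, and recover a mean mass diameter d30.
# Constant binary breakage frequency kb in a batch system (assumption).
import numpy as np
from scipy.integrate import solve_ivp

kb = 0.5  # breakage frequency, 1/s (assumed constant)

def moments_rhs(t, y):
    N, V = y
    # Each binary breakage event replaces one droplet by two: dN/dt = kb*N.
    # Breakage conserves the total dispersed volume: dV/dt = 0.
    return [kb * N, 0.0]

N0, V0 = 1.0e5, 1.0e-3          # initial droplet number and total volume (m^3)
sol = solve_ivp(moments_rhs, (0.0, 4.0), [N0, V0], rtol=1e-8)

N, V = sol.y[:, -1]
d30 = (6.0 * V / (np.pi * N)) ** (1.0 / 3.0)  # mean mass diameter
# N grows as N0*exp(kb*t) while V stays constant, so d30 shrinks over time.
```

In the actual SM-SQMOM algorithm the analogous equations are transport equations in space, with additional source terms for coalescence and mass transfer, but the closure through a mean mass diameter is the same.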

A prime motivation for using XML to directly represent pieces of information is the ability to support ad-hoc or 'schema-later' settings. In such scenarios, modeling data under loose constraints is essential. Of course, the flexibility of XML comes at a price: the absence of a rigid, regular, and homogeneous structure makes many aspects of data management more challenging. Such malleable data formats can also lead to severe information quality problems, because the risk of storing inconsistent and incorrect data is greatly increased. A prominent example of such problems is the appearance of so-called fuzzy duplicates, i.e., multiple and non-identical representations of a real-world entity. Similarity joins correlating XML document fragments that are similar can be used as core operators to support the identification of fuzzy duplicates. However, similarity assessment is especially difficult on XML datasets, because structure, besides textual information, may exhibit variations in document fragments representing the same real-world entity. Moreover, similarity computation is substantially more expensive for tree-structured objects and is thus a serious performance concern. This thesis describes the design and implementation of an effective, flexible, and high-performance XML-based similarity join framework. As main contributions, we present novel structure-conscious similarity functions for XML trees, considering XML structure either in isolation or combined with textual information; mechanisms to select relevant information from XML trees and organize it into a suitable format for similarity calculation; and efficient algorithms for large-scale identification of similar, set-represented objects.
Finally, we validate the applicability of our techniques by integrating our framework into a native XML database management system; in this context we address several issues around the integration of similarity operations into traditional database architectures.
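The core of a set-based similarity join can be illustrated with a toy example: map each fragment's text to a token set (here character 3-grams) and compare the sets with Jaccard similarity. This is a generic sketch of the set-overlap idea, not the framework's actual similarity functions; names, strings, and the q-gram choice are illustrative.

```python
# Toy set-overlap similarity as used conceptually by similarity joins:
# fragments become token sets (character 3-grams) compared by Jaccard.
def qgrams(text: str, q: int = 3) -> set:
    """Set of character q-grams, padded so short strings still produce tokens."""
    padded = f"{'#' * (q - 1)}{text.lower()}{'#' * (q - 1)}"
    return {padded[i:i + q] for i in range(len(padded) - q + 1)}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 1.0

# Two near-duplicate name representations and one unrelated string.
s1 = jaccard(qgrams("John W. Smith"), qgrams("Jon W Smith"))
s2 = jaccard(qgrams("John W. Smith"), qgrams("Database Systems"))
assert s1 > s2  # the fuzzy duplicate scores clearly higher
```

Large-scale joins avoid comparing all pairs by exploiting that a high Jaccard threshold forces the two token sets to share at least one token from a small prefix of each set.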

This thesis presents a novel, generic framework for information segmentation in document images.
A document image contains different types of information, for instance, text (machine printed/handwritten), graphics, signatures, and stamps.
It is necessary to segment the information in documents so that, in automatic document processing workflows, each kind of segmented information is processed only when required.
The main contribution of this thesis is the conceptualization and implementation of an information segmentation framework that is based on part-based features.
The generic nature of the presented framework makes it applicable to a variety of documents (technical drawings, magazines, administrative, scientific, and academic documents) digitized using different methods (scanners, RGB cameras, and hyper-spectral imaging (HSI) devices).
A highlight of the presented framework is that it does not require large training sets; rather, a few training samples (for instance, four pages) lead to high performance, i.e., better than that of previously existing methods.
In addition, the presented framework is simple and can be adapted quickly to new problem domains.
This thesis is divided into three major parts on the basis of the document digitization method used (scanning, hyper-spectral imaging, and camera capture).
In the area of scanned document images, three specific contributions have been realized.
The first of them is in the domain of signature segmentation in administrative documents.
In some workflows, it is very important to check the document authenticity before processing the actual content.
This can be done based on the available seal of authenticity, e.g., signatures.
However, signature verification systems expect a pre-segmented signature image, while signatures are usually part of a document.
To use signature verification systems on document images, it is necessary to first segment signatures in documents.
This thesis shows that the presented framework can be used to segment signatures in administrative documents.
The system based on the presented framework is tested on a publicly available dataset, where it outperforms the state-of-the-art methods and successfully segments all signatures, with less than half of the detected signatures being false positives.
This shows that it can be applied for practical use.
The second contribution in the area of scanned document images is segmentation of stamps in administrative documents.
A stamp also serves as a seal of document authenticity.
However, the location of a stamp on a document can be more arbitrary than that of a signature, depending on the person sealing the document.
This thesis shows that a system based on our generic framework is able to extract stamps of any arbitrary shape and color.
The evaluation of the presented system on a publicly available dataset shows that it is also able to segment black stamps (that were not addressed in the past) with a recall and precision of 83% and 73%, respectively.
The third contribution in the area of scanned document images is in the domain of information segmentation in technical drawings (architectural floorplans, maps, circuit diagrams, etc.), which usually contain a large amount of graphics and comparatively few textual components. Furthermore, in technical drawings, text often overlaps with graphics.
Thus, automatic analysis of technical drawings uses text/graphics segmentation as a pre-processing step.
This thesis presents a method based on our generic information segmentation framework that is able to detect the text, which is touching graphical components in architectural floorplans and maps.
Evaluation of the method on a publicly available dataset of architectural floorplans shows that it is able to extract almost all touching text components with precision and recall of 71% and 95%, respectively.
This means that almost all of the touching text components are successfully extracted.
In the area of hyper-spectral document images, two contributions have been realized.
Unlike normal three-channel RGB images, hyper-spectral images usually have many channels that range from the ultraviolet to the infrared region, including the visible region.
First, this thesis presents a novel automatic method for signature segmentation from hyper-spectral document images (240 spectral bands between 400 and 900 nm).
The presented method is based on a part-based key point detection technique, which does not use any structural information, but relies only on the spectral response of the document regardless of ink color and intensity.
The presented method is capable of segmenting (overlapping and non-overlapping) signatures from varying backgrounds such as printed text, tables, stamps, and logos.
Importantly, the presented method can extract signature pixels and not just the bounding boxes.
This is essential when signatures overlap with text and/or other objects in the image. Second, this thesis presents a new dataset comprising 300 documents scanned using a high-resolution hyper-spectral scanner. Evaluation of the presented signature segmentation method on this hyper-spectral dataset shows that it is able to extract signature pixels with a precision and recall of 100% and 79%, respectively.
Further contributions have been made in the area of camera-captured document images. A major problem in the development of Optical Character Recognition (OCR) systems for camera-captured document images is the lack of labeled camera-captured document image datasets. First, this thesis presents a novel, generic method for automatic ground truth generation/labeling of document images. The presented method builds large-scale (i.e., millions of images) datasets of labeled camera-captured/scanned documents without any human intervention. The method is generic and can be used for automatic ground truth generation of (scanned and/or camera-captured) documents in any language, e.g., English, Russian, Arabic, or Urdu. The evaluation of the presented method on two different datasets, in English and Russian, shows that 99.98% of the images are correctly labeled in every case.
Another important contribution in the area of camera captured document images is the compilation of a large dataset comprising 1 million word images (10 million character images), captured in a real camera-based acquisition environment, along with the word and character level ground truth. The dataset can be used for training as well as testing of character recognition systems for camera-captured documents. Various benchmark tests are performed to analyze the behavior of different open source OCR systems on camera captured document images. Evaluation results show that the existing OCRs, which already get very high accuracies on scanned documents, fail on camera captured document images.
Using the presented camera-captured dataset, a novel character recognition system is developed based on a variant of recurrent neural networks, i.e., Long Short-Term Memory (LSTM), that outperforms all of the existing OCR engines on camera-captured document images with an accuracy of more than 95%.
Finally, this thesis provides details on various tasks that have been performed in areas closely related to information segmentation. These include automatic analysis and sketch-based retrieval of architectural floor plan images, a novel scheme for online signature verification, and a part-based approach to signature verification. With these contributions, it has been shown that part-based methods can be successfully applied to document image analysis.

For many years, real-time task models have expressed timing constraints solely as execution windows defined by earliest start times and deadlines for feasibility.
However, the utility of some applications may vary among scenarios that all yield correct behavior, and maximizing this utility improves resource utilization.
For example, target-sensitive applications have a target point at which execution yields maximum utility, and an execution window for feasibility.
Execution around this point and within the execution window is allowed, albeit at lower utility.
The steepness of the utility decay reflects the importance of the application.
Examples of such applications include multimedia and control: multimedia applications are very popular nowadays, and control applications are present in every automated system.
In this thesis, we present a novel real-time task model which provides easy abstractions to express the timing constraints of target-sensitive RT applications: the gravitational task model.
This model uses a simple gravity pendulum (or bob pendulum) system as a visualization of the trade-offs among target-sensitive RT applications.
We consider jobs as objects in a pendulum system, with the target points as the central (rest) point.
The equilibrium state of the physical system is then equivalent to the best compromise among jobs with conflicting targets.
Analogies with well-known systems help to bridge the gap between application requirements and the theoretical abstractions used in task models.
For instance, so-called nature-inspired algorithms use key elements of physical processes as the basis of optimization algorithms.
Examples include ant colony optimization and simulated annealing, which have been applied to problems such as the knapsack and traveling salesman problems.
We also present a few scheduling algorithms designed for the gravitational task model which fulfill the requirements for on-line adaptivity.
The scheduling of target-sensitive RT applications must account for timing constraints and for the trade-off among tasks with conflicting targets.
Our proposed scheduling algorithms use the equilibrium-state concept to order the execution sequence of jobs and compute the deviation of jobs from their target points for increased system utility.
The execution sequence of jobs in the schedule has a significant impact on the equilibrium of jobs and dominates the complexity of the problem --- finding the optimal solution is NP-hard.
We show the efficacy of our approach through simulation results and three target-sensitive RT applications enhanced with the gravitational task model.
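The equilibrium idea above can be made concrete with a small sketch. Under the (illustrative) assumption of a quadratic utility decay, the best compromise for a fixed execution order of back-to-back jobs has a closed form: a weighted average of the target points, with the job weights playing the role of gravity. All names and the utility shape here are our assumptions, not the thesis' exact formulation.

```python
from dataclasses import dataclass

@dataclass
class Job:
    wcet: float    # worst-case execution time
    target: float  # target point: completion time of maximum utility
    weight: float  # importance ("gravity")

def equilibrium_start(jobs):
    """Start time s of a back-to-back block (fixed order) minimizing
    sum_i w_i * (completion_i - target_i)^2, with completion_i = s + offset_i."""
    offsets, acc = [], 0.0
    for j in jobs:
        acc += j.wcet
        offsets.append(acc)
    total_w = sum(j.weight for j in jobs)
    # Setting the derivative w.r.t. s to zero gives a weighted average:
    return sum(j.weight * (j.target - o) for j, o in zip(jobs, offsets)) / total_w

# Two jobs with the same target cannot both complete there; the heavier
# job ends up closer to it, mirroring the pendulum analogy.
jobs = [Job(wcet=2.0, target=5.0, weight=1.0), Job(wcet=1.0, target=5.0, weight=2.0)]
s = equilibrium_start(jobs)   # job 1 completes at s + 2, job 2 at s + 3
```

The execution order is fixed here; as the abstract notes, choosing that order is the hard (NP-hard) part of the problem.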

Numerical Godeaux surfaces are minimal surfaces of general type with the smallest possible numerical invariants. It is known that the torsion group of a numerical Godeaux surface is cyclic of order \(m\leq 5\). A full classification has been given for the cases \(m=3,4,5\) by the work of Reid and Miyaoka. In each case, the corresponding moduli space is 8-dimensional and irreducible.
There exist explicit examples of numerical Godeaux surfaces for the orders \(m=1,2\), but a complete classification for these surfaces is still missing.
In this thesis we present a construction method for numerical Godeaux surfaces which is based on homological algebra and computer algebra and which arises from an experimental approach by Schreyer. The main idea is to consider the canonical ring \(R(X)\) of a numerical Godeaux surface \(X\) as a module over some graded polynomial ring \(S\). The ring \(S\) is chosen so that \(R(X)\) is finitely generated as an \(S\)-module and a Gorenstein \(S\)-algebra of codimension 3. We prove that the canonical ring of any numerical Godeaux surface, considered as an \(S\)-module, admits a minimal free resolution whose middle map is alternating. Moreover, we show that a partial converse of this statement is true under some additional conditions.
Afterwards, we use these results to construct (canonical rings of) numerical Godeaux surfaces. We restrict our study to surfaces whose bicanonical system has no fixed component but four distinct base points, referred to in the following as marked numerical Godeaux surfaces.
The particular interest of this thesis lies in marked numerical Godeaux surfaces whose torsion group is trivial. For these surfaces, we study the genus-4 fibration over \(\mathbb{P}^1\) induced by the bicanonical system. Catanese and Pignatelli showed that the general fibre is non-hyperelliptic and that the number \(\tilde{h}\) of hyperelliptic fibres is bounded by 3. The two explicit constructions of numerical Godeaux surfaces with a trivial torsion group, due to Barlow and Craighero-Gattazzo, respectively, satisfy \(\tilde{h} = 2\).
With the method from this thesis, we construct an 8-dimensional family of numerical Godeaux surfaces with a trivial torsion group whose general element satisfies \(\tilde{h}=0\).
Furthermore, we establish a criterion for the existence of hyperelliptic fibres in terms of a minimal free resolution of \(R(X)\). Using this criterion, we verify experimentally the existence of a numerical Godeaux surface with \(\tilde{h}=1\).

Embedded systems have become ubiquitous in everyday life, and especially in the automotive industry. New applications challenge their design by introducing a new class of problems that are based on a detailed analysis of the environmental situation. Situation analysis systems rely on models and algorithms from the domain of computational geometry. The basic model is usually a Euclidean plane containing polygons that represent the objects of the environment. Standard implementations of computational geometry algorithms cannot be used directly in safety-critical systems: first, a strict analysis of their correctness is indispensable, and second, nonfunctional requirements with respect to the limited resources must be considered. This thesis proposes a layered approach to a polygon-processing system. On top of rational numbers, a geometry kernel is formalised first. Subsequently, geometric primitives form a second layer of abstraction that is used for plane-sweep and polygon algorithms. These layers not only divide the whole system into manageable parts but also make it possible to model problems and reason about them at the appropriate level of abstraction. This structure is used for the verification as well as the implementation of the developed polygon-processing library.
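A minimal illustration of the lowest layer described above, a geometry kernel over the rational numbers, can be sketched in Python, whose `fractions.Fraction` stands in for the formalised rational-number layer (the predicate name is ours, not the library's):

```python
from fractions import Fraction

def orientation(p, q, r):
    """Sign of the cross product (q-p) x (r-p): +1 left turn, -1 right turn,
    0 collinear. Exact, because all arithmetic is carried out over the rationals."""
    px, py = map(Fraction, p)
    qx, qy = map(Fraction, q)
    rx, ry = map(Fraction, r)
    d = (qx - px) * (ry - py) - (qy - py) * (rx - px)
    return (d > 0) - (d < 0)

# Nearly collinear points, a classic failure mode of floating-point kernels,
# are decided exactly with rational coordinates:
assert orientation((0, 0), (Fraction(1, 3), Fraction(1, 3)), (1, 1)) == 0
```

Exact predicates like this one are the foundation on which the plane-sweep and polygon layers can be verified, since no rounding case analysis is needed.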

In the filling process of a car tank, the formation of foam plays an unwanted role, as it may prevent the tank from being completely filled or at least delay the filling. Therefore it is of interest to optimize the geometry of the tank using numerical simulation in such a way that the influence of the foam is minimized. In this dissertation, we analyze the behaviour of the foam mathematically on the mesoscopic scale, that is, for single lamellae. The most important goals are, on the one hand, to gain a deeper understanding of the interaction of the relevant physical effects and, on the other hand, to obtain a model for the simulation of the decay of a lamella which can be integrated into a global foam model. In the first part of this work, we give a short introduction to the physical properties of foam and find that the Marangoni effect is the main cause of its stability. We then develop a mathematical model for the simulation of the dynamical behaviour of a lamella based on an asymptotic analysis using the special geometry of the lamella. The result is a system of nonlinear partial differential equations (PDE) of third order in two spatial dimensions and one time dimension. In the second part, we analyze this system mathematically and prove an existence and uniqueness result for a simplified case. For some special parameter domains the system can be further simplified, and in some cases explicit solutions can be derived. In the last part of the dissertation, we solve the system using a finite element approach and discuss the results in detail.

The detection and characterisation of undesired lead structures on shaft surfaces is a concern in production and quality control of rotary shaft lip-type sealing systems. The potential lead structures are generally divided into macro and micro lead based on their characteristics and formation. Macro lead measurement methods exist and are widely applied. This work describes a method to characterise micro lead on ground shaft surfaces. Micro lead is defined as the deviation of the main orientation of the ground micro texture from the circumferential direction. Assessing the orientation of microscopic structures with arc-minute accuracy with regard to the circumferential direction requires exact knowledge of both the shaft's orientation and the direction of the surface texture. The shaft's circumferential direction is found by calibration. Measuring systems and calibration procedures capable of calibrating shaft axis orientation with high accuracy and low uncertainty are described. The measuring systems employ areal-topographic measuring instruments suited for evaluating texture orientation. A dedicated evaluation scheme for texture orientation is based on the Radon transform of these topographies and parametrised for the application. Combining the calibration of the circumferential direction with the evaluation of texture orientation, the method enables the measurement of micro lead on ground shaft surfaces.
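As a rough illustration of estimating texture orientation from an areal topography: the thesis parametrises a Radon-transform scheme, whereas the sketch below uses the simpler gradient structure tensor on a synthetic ground texture. The function name, the synthetic surface, and the structure-tensor substitution are our assumptions.

```python
import numpy as np

def texture_orientation_deg(z):
    """Dominant texture direction in degrees from the x (circumferential) axis."""
    gy, gx = np.gradient(z)                            # numpy returns d/dy, d/dx
    jxx, jyy, jxy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)     # direction of maximal variation
    return np.degrees(theta) + 90.0                    # grooves run perpendicular to it

# Synthetic grinding texture: grooves tilted 2 degrees ("micro lead") from
# the circumferential (x) direction, with an 8-pixel groove period.
y, x = np.mgrid[0:256, 0:256]
tilt = np.deg2rad(2.0)
z = np.sin(2.0 * np.pi * (y * np.cos(tilt) - x * np.sin(tilt)) / 8.0)
lead_angle = texture_orientation_deg(z)   # approximately 2 degrees
```

In the real measurement the angle must additionally be referenced to the calibrated shaft axis, which is the point of the calibration procedures described above.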

1,3-Diynes are frequently found as an important structural motif in natural products, pharmaceuticals and bioactive compounds, electronic and optical materials, and supramolecular structures. Copper and palladium complexes are widely used to prepare 1,3-diynes by homocoupling of terminal alkynes, whereas the potential of nickel complexes for this reaction is essentially unexplored. Although a detailed study of the reported nickel-acetylene chemistry has not been carried out, a generalized mechanism featuring a nickel(II)/nickel(0) catalytic cycle has been proposed. In the present work, the mechanism of the nickel-mediated homocoupling reaction of terminal alkynes is investigated in detail through the isolation and/or characterization of key intermediates from both the stoichiometric and the catalytic reactions. A nickel(II) complex [Ni(L-N4Me2)(MeCN)2](ClO4)2 (1) containing the tetradentate ligand N,N′-dimethyl-2,11-diaza[3.3](2,6)pyridinophane (L-N4Me2) was used as catalyst for the homocoupling of terminal alkynes, employing oxygen as oxidant at room temperature. A series of dinuclear nickel(I) complexes bridged by a 1,3-diyne ligand have been isolated from the stoichiometric reaction between [Ni(L-N4Me2)(MeCN)2](ClO4)2 (1) and lithium acetylides. The dinuclear nickel(I)-diyne complexes [{Ni(L-N4Me2)}2(RC4R)](ClO4)2 (2) were well characterized by X-ray crystal structures, various spectroscopic methods, SQUID magnetometry, and DFT calculations. The complexes not only represent a key intermediate in the aforesaid catalytic reaction but are also the first structurally characterized dinuclear nickel(I)-diyne complexes. In addition, radical trapping and low-temperature UV-Vis-NIR experiments on the formation of the dinuclear nickel(I)-diyne confirm that the reactions occurring during the reduction of nickel(II) to nickel(I) and the C-C bond formation of the 1,3-diyne follow a non-radical, concerted mechanism.
Furthermore, spectroscopic investigation of the reactivity of the dinuclear nickel(I)-diyne complex towards molecular oxygen confirmed the formation of a mononuclear nickel(I)-diyne species [Ni(L-N4Me2)(RC4R)]+ (4) and a mononuclear nickel(III)-peroxo species [Ni(L-N4Me2)(O2)]+ (5), which were converted to free 1,3-diyne and an unstable dinuclear nickel(II) species [{Ni(L-N4Me2)}2(O2)]2+ (6). A mononuclear nickel(I)-alkyne complex [Ni(L-N4Me2)(PhC2Ph)](ClO4).MeOH (3) and the mononuclear nickel(III)-peroxo species [Ni(L-N4Me2)(O2)]+ (5) were isolated/generated and characterized to confirm the formulation of the aforementioned mononuclear nickel(I)-diyne and mononuclear nickel(III)-peroxo species. Spectroscopic experiments on the catalytic reaction mixture also confirm the presence of the aforesaid intermediates. The results of both the stoichiometric and the catalytic reactions suggest an intriguing mechanism involving nickel(II)/nickel(I)/nickel(III) oxidation states, in contrast to the reported nickel(II)/nickel(0) catalytic cycle. These findings are expected to open a new paradigm in nickel-catalyzed organic transformations.

Nowadays, accounting for, charging, and billing users' network resource consumption is common practice, serving to encourage reasonable network usage, control congestion, allocate cost, generate revenue, etc. In traditional IP traffic accounting systems, IP addresses are used to identify the consumers of network resources. However, there are situations in which IP addresses cannot identify users uniquely, for example in multi-user systems. In these cases, network resource consumption can only be ascribed to the owners of the hosts instead of the real users who consumed the resources. Accurate accountability in such systems is therefore practically impossible; this is a flaw of the traditional IP-address-based IP traffic accounting technique. This dissertation proposes a user-based IP traffic accounting model which facilitates collecting network resource usage information on a per-user basis. With user-based IP traffic accounting, IP traffic can be distinguished not only by IP address but also by user. Three different schemes that realize the user-based IP traffic accounting mechanism are discussed in detail. The in-band scheme utilizes the IP header to convey the user information of the corresponding IP packet. An Accounting Agent residing in the measured host intercepts IP packets passing through it, identifies the users of these packets, and inserts user information into them. With this mechanism, a meter located at a key position in the network can intercept the IP packets tagged with user information and extract not only statistical information but also IP addresses and user information from the packets to generate accounting records with user information. The out-of-band scheme contrasts with the in-band scheme. It also uses an Accounting Agent to intercept IP packets and identify the users of IP traffic.
However, the user information is transferred through a separate channel, independent of the transmission of the corresponding IP packets. The Multi-IP scheme provides a different solution for identifying the users of IP traffic: it assigns each user on a measured host a unique IP address, so that an IP address identifies a user uniquely and without ambiguity. In this way, traditional IP-address-based accounting techniques can be applied to achieve user-based IP traffic accounting. This dissertation also introduces a prototype user-based IP traffic accounting system developed according to the out-of-band scheme, and discusses the application of the user-based IP traffic accounting model in distributed computing environments.
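The join at the heart of the out-of-band scheme can be sketched in a few lines: meter records and the Accounting Agent's user reports arrive on separate channels and are merged into per-user accounting records. All field names and the flow-id key below are illustrative assumptions, not the dissertation's record format.

```python
from collections import defaultdict

# Per-packet records seen by the meter: (src_ip, flow_id, bytes).
meter_records = [
    ("10.0.0.5", 1, 1200),
    ("10.0.0.5", 2, 800),
    ("10.0.0.5", 1, 400),
]

# Separate channel from the Accounting Agent on the measured host:
# which user owns which flow.
agent_reports = {1: "alice", 2: "bob"}

# Merge the two channels into user-based accounting records.
usage = defaultdict(int)
for ip, flow, nbytes in meter_records:
    user = agent_reports.get(flow, "unknown")
    usage[(ip, user)] += nbytes
# usage: {("10.0.0.5", "alice"): 1600, ("10.0.0.5", "bob"): 800}
```

The same host IP now yields two distinct accounting records, one per user, which is exactly what the pure IP-address-based technique cannot provide.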

The present situation of control engineering in the context of automated production can be described as a tension field between its desired outcome and its actual consideration. On the one hand, the share of control engineering compared to the other engineering domains has increased significantly within the last decades due to rising automation degrees of production processes and equipment. On the other hand, the control engineering domain is still underrepresented within the production engineering process. A further limiting factor is the lack of methods and tools to reduce software engineering effort and to enable the development of innovative automation applications that ideally support the business requirements.
This thesis addresses this challenging situation by means of the development of a new control engineering methodology. The foundation is built by concepts from computer science that promote structuring and abstraction mechanisms for software development. In this context, the key sources for this thesis are the paradigm of Service-oriented Architecture and concepts from Model-driven Engineering. To mold these concepts into an integrated engineering procedure, ideas from Systems Engineering are applied. The overall objective is to develop an engineering methodology that improves the efficiency of control engineering through a higher adaptability of control software and reduced programming effort by reuse.

A Multi-Phase Flow Model Incorporated with Population Balance Equation in a Meshfree Framework
(2011)

This study deals with the numerical solution of a meshfree coupled model of Computational Fluid Dynamics (CFD) and Population Balance Equation (PBE) for liquid-liquid extraction columns. In modeling the coupled hydrodynamics and mass transfer in liquid extraction columns, one encounters a multidimensional population balance equation that cannot be fully resolved numerically within a time reasonable for steady-state or dynamic simulations. For this reason, there is an obvious need for a new liquid extraction model that captures all the essential physical phenomena while remaining computationally tractable. This thesis discusses a new model which focuses on the discretization of the external (spatial) and internal coordinates such that the computational time is drastically reduced. For the internal coordinates, the concept of the multi-primary particle method, a special case of the Sectional Quadrature Method of Moments (SQMOM), is used to represent the droplet internal properties. This model is capable of conserving the most important integral properties of the distribution, namely the total number, solute, and volume concentrations, and reduces the computational time compared to classical finite difference methods, which require many grid points to conserve the desired physical quantities. On the other hand, due to the discrete nature of the dispersed phase, a meshfree Lagrangian particle method, the Finite Pointset Method (FPM), is used to discretize the spatial domain (the extraction column height). This method avoids the extremely difficult discretization of the convective term with classical finite volume methods, which require many grid points to capture the moving fronts propagating along the column height.

A Multi-Sensor Intelligent Assistance System for Driver Status Monitoring and Intention Prediction
(2017)

Advanced sensing systems, sophisticated algorithms, and increasing computational resources continuously enhance advanced driver assistance systems (ADAS). To date, although some vehicle-based approaches to driver fatigue/drowsiness detection have been realized and deployed, objectively and reliably detecting the fatigue/drowsiness state of a driver without compromising the driving experience remains challenging, and the choice of input sensorial information in the state of the art is limited. On the other hand, smart and safe driving, as a representative future trend in the automotive industry worldwide, increasingly demands new dimensions of human-vehicle interaction, as well as the associated perception of the driver's behavioral and physiological data. Thus, the goal of this research work is to investigate the employment of general and custom 3D-CMOS sensing concepts for driver status monitoring, and to explore the improvement gained by merging/fusing this information with other salient customized information sources for increased robustness and reliability. This thesis presents an effective multi-sensor approach with novel features to driver status monitoring and intention prediction, aimed at drowsiness detection, based on a multi-sensor intelligent assistance system, DeCaDrive, which is implemented on an integrated soft-computing system with multi-sensing interfaces in a simulated driving environment. Utilizing active illumination, the IR depth camera of the realized system provides rich facial and body features in 3D in a non-intrusive manner. In addition, a steering angle sensor, a pulse rate sensor, and an embedded impedance spectroscopy sensor are incorporated to aid in the detection/prediction of the driver's state and intention. A holistic design methodology for ADAS encompassing both driver- and vehicle-based approaches to driver assistance is discussed in the thesis as well.
Multi-sensor data fusion and hierarchical SVM techniques are used in DeCaDrive to facilitate the classification of driver drowsiness levels, based on which a warning can be issued to prevent possible traffic accidents. The realized DeCaDrive system achieves up to 99.66% classification accuracy on the defined drowsiness levels and exhibits promising features such as head/eye tracking, blink detection, and gaze estimation that can be utilized in human-vehicle interaction. However, the driver's state of "microsleep" can hardly be captured by the sensor features of the implemented system; general improvements in the sensitivity of the sensory components and in the system's computational power are required to address this issue. Possible new features and development considerations for DeCaDrive are discussed as well, with the aim of gaining market acceptance in the future.

The simulation of cutting processes challenges established methods due to large deformations and topological changes. In this work a particle finite element method (PFEM) is presented which combines the benefits of discrete modeling techniques and methods based on continuum mechanics. A crucial part of the PFEM is the detection of the boundary of a set of particles. The impact of this boundary detection method on the structural integrity is examined, and a relation between the key parameter of the method and the eigenvalues of the strain tensor is elaborated. The influence of important process parameters on the cutting force is studied, and a comparison to an empirical relation is presented.
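Boundary detection for a particle cloud, the crucial PFEM step mentioned above, is commonly realized with alpha shapes. The following generic 2D sketch (not the thesis' exact criterion, whose key parameter is related there to strain-tensor eigenvalues) keeps Delaunay triangles with circumradius below alpha and returns the edges used by exactly one kept triangle.

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_edges(points, alpha):
    """Edges of the alpha-shape boundary: keep Delaunay triangles with
    circumradius <= alpha; boundary edges belong to exactly one kept triangle."""
    tri = Delaunay(points)
    edge_count = {}
    for ia, ib, ic in tri.simplices:
        a, b, c = points[ia], points[ib], points[ic]
        la = np.linalg.norm(b - c)
        lb = np.linalg.norm(c - a)
        lc = np.linalg.norm(a - b)
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
        if area == 0.0 or la * lb * lc / (4.0 * area) > alpha:
            continue  # triangle too large: a cavity, not part of the body
        for e in ((ia, ib), (ib, ic), (ic, ia)):
            key = tuple(sorted(e))
            edge_count[key] = edge_count.get(key, 0) + 1
    return [e for e, n in edge_count.items() if n == 1]

# Particles on a 10 x 10 unit grid: the detected boundary is the outer square.
pts = np.array([[i, j] for i in range(10) for j in range(10)], dtype=float)
boundary = alpha_shape_edges(pts, alpha=1.0)
```

The choice of alpha decides which particle configurations count as connected material, which is why relating it to the local strain state, as the thesis does, matters for structural integrity.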

The dissertation is concerned with the numerical solution of Fokker-Planck equations in high dimensions arising in the study of the dynamics of polymeric liquids. Traditional methods based on a tensor product structure are not applicable in high dimensions, because the number of nodes required for a fixed accuracy increases exponentially with the dimension, a phenomenon often referred to as the curse of dimension. Particle methods, or finite point set methods, are known to break the curse of dimension. The Monte Carlo method (MCM) applied to such problems is 1/sqrt(N) accurate, where N is the cardinality of the point set considered, independent of the dimension. Deterministic versions of the Monte Carlo method, called quasi-Monte Carlo (QMC) methods, are quite effective for integration problems, where accuracy of the order of 1/N can be achieved, up to a logarithmic factor. However, such a replacement cannot be carried over directly to particle simulations due to the correlation among the quasi-random points. The method proposed by Lecot (C. Lecot and F. E. Khettabi, Quasi-Monte Carlo simulation of diffusion, Journal of Complexity, 15 (1999), pp. 342-359) is the only known QMC approach, but it not only leads to large particle numbers, its proven order of convergence is also only 1/N^(2s) in dimension s. We modify the method presented there in such a way that the new method works with reasonable particle numbers even in high dimensions and has a better order of convergence. Though the provable order of convergence is 1/sqrt(N), the results show less variance, and thus the proposed method still slightly outperforms the standard MCM.
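The MCM-versus-QMC accuracy gap for plain integration can be illustrated with a toy experiment, using scipy's scrambled Halton sequence as the quasi-random point set and an arbitrary smooth product integrand with known integral 1. This illustrates only the integration setting, not the correlated particle simulation treated in the dissertation.

```python
import numpy as np
from scipy.stats import qmc

def f(x):
    # Smooth test integrand; its integral over [0,1]^s is exactly 1.
    return np.prod(12.0 * (x - 0.5) ** 2, axis=1)

s, n = 5, 4096
rng = np.random.default_rng(0)
mc_est = f(rng.random((n, s))).mean()                    # pseudo-random: O(1/sqrt(N))
qmc_est = f(qmc.Halton(d=s, seed=0).random(n)).mean()    # low-discrepancy point set
```

With these settings the quasi-random estimate is typically much closer to the true value 1 than the pseudo-random one, mirroring the near-1/N versus 1/sqrt(N) behaviour quoted above.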

This thesis is concerned with a phase field model for martensitic transformations in metastable austenitic steels. Within the phase field approach, an order parameter is introduced to indicate whether the present phase is austenite or martensite. The evolving microstructure is described by the evolution of the order parameter, which is assumed to follow the time-dependent Ginzburg-Landau equation. The elastic phase field model is enhanced in two different ways to take further phenomena into account. First, dislocation movement is considered within a crystal plasticity setting. Second, the elastic model for martensitic transformations is combined with a phase field model for fracture. Finite element simulations are used to study separately the individual effects that contribute to microstructure formation.
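As a hedged illustration of the order-parameter evolution mentioned above, the following 1D sketch integrates a time-dependent Ginzburg-Landau (Allen-Cahn type) equation with a double-well potential separating austenite (phi = 0) from martensite (phi = 1). All parameters are illustrative, and the thesis' elastic coupling and finite element discretization are omitted.

```python
import numpy as np

def evolve(phi, steps, dx, dt=0.004, M=1.0, kappa=1.0):
    """Explicit Euler steps of  d(phi)/dt = M * (kappa * phi'' - W'(phi)),
    with the double well W(phi) = phi^2 (1 - phi)^2 (periodic boundaries)."""
    for _ in range(steps):
        lap = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) / dx**2
        dW = 2.0 * phi * (1.0 - phi) * (1.0 - 2.0 * phi)   # W'(phi)
        phi = phi + dt * M * (kappa * lap - dW)
    return phi

x = np.linspace(-10.0, 10.0, 200)
phi0 = (x > 0).astype(float)        # sharp austenite (0) / martensite (1) interface
phi = evolve(phi0, steps=2000, dx=x[1] - x[0])
```

The sharp step relaxes toward the stationary diffuse-interface (kink) profile while the bulk values stay pinned in the two wells, the basic mechanism by which such models track an evolving microstructure.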

In this thesis, we investigate a statistical model for precipitation time series recorded at a single site. The sequence of observations consists of rainfall amounts aggregated over time periods of fixed duration. As the properties of this sequence depend strongly on the length of the observation intervals, we follow the approach of Rodriguez-Iturbe et al. [1] and use an underlying model for rainfall intensity in continuous time. In this idealized representation, rainfall occurs in clusters of rectangular cells, and each observation is treated as the sum of cell contributions during a given time period. Unlike previous work, we use a multivariate lognormal distribution for the temporal structure of the cells and clusters. After formulating the model, we develop a Markov-Chain Monte-Carlo algorithm for fitting it to a given data set. A particular problem we have to deal with is the need to estimate the unobserved intensity process alongside the parameters of interest. The performance of the algorithm is tested on artificial data sets generated from the model. [1] I. Rodriguez-Iturbe, D. R. Cox, and Valerie Isham. Some models for rainfall based on stochastic point processes. Proc. R. Soc. Lond. A, 410:269-288, 1987.
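A generic random-walk Metropolis step of the kind used in such MCMC fitting can be sketched as follows, here for the log-scale parameter of lognormal cell intensities with a flat prior. The target density and all names are illustrative, not the thesis' full model with its unobserved intensity process.

```python
import numpy as np

def log_post(mu, log_data, sigma=1.0):
    # Log-posterior of mu under a flat prior: Gaussian likelihood on log(data).
    return -0.5 * np.sum((log_data - mu) ** 2) / sigma**2

rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.7, sigma=1.0, size=500)   # synthetic "cell intensities"
ld = np.log(data)

mu, chain = 0.0, []
for _ in range(5000):
    prop = mu + 0.1 * rng.normal()                    # random-walk proposal
    if np.log(rng.random()) < log_post(prop, ld) - log_post(mu, ld):
        mu = prop                                     # Metropolis accept
    chain.append(mu)
posterior_mean = np.mean(chain[1000:])                # discard burn-in
```

In the actual fitting problem, steps like this alternate with updates of the latent cluster/cell configuration, which is what makes estimating the unobserved intensity process alongside the parameters difficult.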

The main goal of this work is to model size effects as they occur in materials with an intrinsic microstructure when considering specimens that are not orders of magnitude larger than this microstructure. The micromorphic continuum theory, as a generalized continuum theory, is well suited to account for the occurring size effects. Additional degrees of freedom capture the independent deformations of the microstructure, while additional balance equations are provided. In this thesis, the deformational and configurational mechanics of the micromorphic continuum is exploited in a finite-deformation setting. A constitutive and numerical framework is developed, in which the material-force method is also advanced. Furthermore, the multiscale modelling of thin material layers with a heterogeneous substructure is of interest. To this end, a computational homogenization framework is developed which makes it possible to obtain the constitutive relation between traction and separation numerically, in a nested solution scheme, from the properties of the underlying micromorphic mesostructure. Within the context of micromorphic continuum mechanics, concepts of both gradient and micromorphic plasticity are developed by systematically varying key ingredients of the respective formulations.

Interest in the exploration of new hydrocarbon fields as well as deep geothermal reservoirs is growing steadily. The analysis of seismic data for such exploration projects is very complex and requires from interpreters deep knowledge in geology, geophysics, petrology, etc., as well as advanced tools able to recover particular properties. Wavelet techniques, in turn, have had huge success in signal processing, data compression, noise reduction, and related fields. They make it possible to break complicated functions into many simple pieces at different scales and positions, which makes the detection and interpretation of local events significantly easier.
In this thesis, mathematical methods and tools are presented which are applicable to seismic data postprocessing in regions with non-smooth boundaries. We provide wavelet techniques that relate to solutions of the Helmholtz equation, with seismic data analysis as the application. A similar idea, constructing wavelet functions from the limit and jump relations of the layer potentials, was first suggested by Freeden and his Geomathematics Group. The particular difficulty in such approaches is the formulation of limit and jump relations for the surfaces used in seismic data processing, i.e., non-smooth surfaces in various topologies (for example, uniform and quadratic). The essential idea is to replace the concept of parallel surfaces, known for a smooth regular surface, by appropriate substitutes for non-smooth surfaces. Using the jump and limit relations formulated for regular surfaces, Helmholtz wavelets can be introduced that recursively approximate functions on surfaces with edges and corners. A key point is that the construction of the wavelets allows an efficient implementation in the form of a tree algorithm for the fast numerical computation of functions on the boundary. To demonstrate the applicability of the Helmholtz FWT, we study a seismic image obtained by reverse time migration based on a finite-difference implementation. Regarding the requirements of such migration algorithms for filtering and denoising, the wavelet decomposition is successfully applied to this image for the attenuation of low-frequency artifacts and noise. An essential feature is the space localization property of the Helmholtz wavelets, which makes it possible to analyze the velocity field pointwise. Moreover, the multiscale analysis reveals additional geological information from optical features.

The present thesis describes the development and validation of a viscosity adaption method for the numerical simulation of non-Newtonian fluids on the basis of the Lattice Boltzmann Method (LBM), as well as the development and verification of the related software bundle SAM-Lattice.
By now, Lattice Boltzmann Methods are established as an alternative approach to classical computational fluid dynamics methods. The LBM has been shown to be an accurate and efficient tool for the numerical simulation of weakly compressible or incompressible fluids. Fields of application range from turbulent simulations through thermal problems to acoustic calculations, among others. The transient nature of the method and the need for a regular, grid-based, non-body-conformal discretization make the LBM ideally suited for simulations involving complex solids. Such geometries are common, for instance, in the food processing industry, where fluids are mixed by static mixers or agitators. These fluid flows are often laminar and non-Newtonian.
This work is motivated by the immense practical use of the Lattice Boltzmann Method, which is limited by stability issues. The stability of the method is mainly influenced by the discretization and the viscosity of the fluid. Thus, simulations of non-Newtonian fluids, whose kinematic viscosity depends on the shear rate, are problematic. Several authors have shown that the LBM is capable of simulating such fluids. However, the vast majority of the simulations in the literature are carried out for simple geometries and/or moderate shear rates, where the LBM is still stable. Special care has to be taken to keep practical non-Newtonian Lattice Boltzmann simulations stable. A straightforward way is to truncate the modeled viscosity range by numerical stability criteria. This is an effective approach, but from the physical point of view the viscosity bounds are chosen arbitrarily. Moreover, these bounds depend on and vary with the grid and time step size and, therefore, with the simulation Mach number, which is freely chosen at the start of the simulation. Consequently, the modeled viscosity range may not fit the actual range of the physical problem, because the correct simulation Mach number is unknown a priori. A way around this is to perform precursor simulations on a fixed grid to determine a feasible time step size and simulation Mach number, respectively. These precursor simulations can be time-consuming and expensive, especially for complex cases and a number of operating points. This makes the LBM unattractive for practical simulations of non-Newtonian fluids.
The essential novelty of the method developed in the course of this thesis is that the numerically modeled viscosity range is consistently adapted to the viscosity range actually exhibited by the physics, through a change of the simulation time step and the simulation Mach number, respectively, while the simulation is running. The algorithm is robust, independent of the Mach number the simulation was started with, and applicable to stationary as well as transient flows. The method for the viscosity adaption will be referred to as the "viscosity adaption method (VAM)", and its combination with the LBM leads to the "viscosity adaptive LBM (VALBM)".
Besides the introduction of the VALBM, a goal of this thesis is to offer assistance in the spirit of a theory guide to students and assistant researchers concerning the theory of the Lattice Boltzmann Method and its implementation in SAM-Lattice. In Chapter 2, the mathematical foundation of the LBM is given and the route from the BGK approximation of the Boltzmann equation to the Lattice Boltzmann (BGK) equation is delineated in detail.
The derivation is restricted to isothermal flows. Restrictions of the method, such as the limitation to low Mach number flows, are highlighted, and the accuracy of the method is discussed.
SAM-Lattice is a C++ software bundle developed by the author and his colleague Dipl.-Ing. Andreas Schneider. It is a highly automated package for the simulation of isothermal flows of incompressible or weakly compressible fluids in 3D on the basis of the Lattice Boltzmann Method. At the time of writing of this thesis, SAM-Lattice comprises five components. The main components are the highly automated lattice generator SamGenerator and the Lattice Boltzmann solver SamSolver. Postprocessing is done with ParaSam, which is our extension of the open source visualization software ParaView. Additionally, domain decomposition for MPI parallelism is done by SamDecomposer, which makes use of the graph partitioning library MeTiS. Finally, all mentioned components can be controlled through a user-friendly GUI (SamLattice), implemented by the author using Qt and including features to visually track output data.
In Chapter 3, some fundamental aspects of the implementation of the main components, including the corresponding flow charts, are discussed. Further details on the implementation are given in the comprehensive programmer's guides to SamGenerator and SamSolver.
In order to ensure the functionality of the implementation of SamSolver, the solver is verified in Chapter 4 for Stokes's first problem, the suddenly accelerated plate, and for Stokes's second problem, the oscillating plate, both for Newtonian fluids. Non-Newtonian fluids are modeled in SamSolver with the power-law model according to Ostwald-de Waele. The implementation for non-Newtonian fluids is verified for the Hagen-Poiseuille channel flow in conjunction with a convergence analysis of the method. At the same time, the local grid refinement, as implemented in SamSolver, is verified. Finally, higher-order boundary conditions are verified for the 3D Hagen-Poiseuille pipe flow for both Newtonian and non-Newtonian fluids.
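For such verifications, the analytical velocity profile of pressure-driven power-law channel flow serves as the reference solution. A sketch of the textbook formula (symbols G, K, n and the function name are illustrative, not taken from the thesis):

```python
def poiseuille_power_law(y, h, G, K, n):
    """Analytical velocity profile for pressure-driven flow of an
    Ostwald-de Waele (power-law) fluid between parallel plates at y = +-h.
    G: magnitude of the pressure gradient, K: consistency index,
    n: power-law exponent. For n = 1 this reduces to the Newtonian parabola
    u(y) = G/(2K) * (h^2 - y^2)."""
    m = (n + 1.0) / n
    return (n / (n + 1.0)) * (G / K) ** (1.0 / n) * (h ** m - abs(y) ** m)
```

Comparing the simulated profile against this closed form at several grid resolutions yields the convergence analysis mentioned above.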
In Chapter 5, the theory of the viscosity adaption method is introduced. For the adaption process, a target collision frequency or target simulation Mach number must be chosen, and the distributions must be rescaled according to the modified time step size. A convenient choice is one of the stability bounds. The time step size for the adaption step is deduced from the target collision frequency \(\Omega_t\) and the currently minimal or maximal shear rate in the system, while obeying auxiliary conditions for the simulation Mach number. The adaption is performed in the collision step of the Lattice Boltzmann algorithm. We use the transformation matrices of the MRT model to map from distribution space to moment space and vice versa. The actual scaling of the distributions is carried out during the back mapping, because the transformation matrix is applied on the basis of the new adaption time step size. This is followed by an additional rescaling of the non-equilibrium part of the distributions, owing to the form of the definition of the discrete stress tensor in the LBM context. Consequently, the VAM is applicable to the SRT model as well as the MRT model, with virtually no extra cost in the latter case. Chapter 5 also discusses the multi-level treatment.
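The dimensional reasoning behind the time step deduction can be sketched as follows (a simplified illustration, not the SAM-Lattice code; the function name is hypothetical, the standard lattice constant c_s^2 = 1/3 is assumed, and the Mach-number side conditions and distribution rescaling are omitted):

```python
CS2 = 1.0 / 3.0  # lattice speed of sound squared (standard DnQm lattices)

def adapted_time_step(omega_t, nu_phys_extreme, dx):
    """Choose the new time step so that the extreme physical viscosity in the
    system (obtained from the current minimal or maximal shear rate) maps
    exactly onto the target collision frequency omega_t.
    Lattice viscosity: nu_L = CS2 * (1/omega - 1/2);
    physical viscosity:  nu  = nu_L * dx^2 / dt  =>  dt = nu_L * dx^2 / nu."""
    nu_lattice_target = CS2 * (1.0 / omega_t - 0.5)
    return nu_lattice_target * dx * dx / nu_phys_extreme
```

With this time step, the most critical viscosity in the flow field sits exactly at the target collision frequency, so the whole physically occurring viscosity range stays inside the stable window.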
Depending on the target collision frequency and the target Mach number, the VAM can be used either to optimally exploit the viscosity range that can be modeled within the stability bounds or to drastically accelerate the simulation. This is shown in Chapter 6. The viscosity adaptive LBM is verified in the stationary case for the Hagen-Poiseuille channel flow and in the transient case for the Womersley flow, i.e., the pulsatile 3D Hagen-Poiseuille pipe flow. Although the VAM is used here for fluids that can be modeled with the power-law approach, the implementation of the VALBM is straightforward for other non-Newtonian models, e.g., the Carreau-Yasuda or Cross model. In the same chapter, the VALBM is validated for the case of a propeller viscosimeter developed at the chair SAM. To this end, experimental data of the torque on the impeller for three shear-thinning non-Newtonian liquids serve for the validation. The VALBM shows excellent agreement with the experimental data for all of the investigated fluids and at every operating point. For comparison, a series of standard LBM simulations is carried out with different simulation Mach numbers, some of which show errors of several hundred percent. Moreover, in Chapter 7, a sensitivity analysis of the parameters used within the VAM is conducted for the simulation of the propeller viscosimeter.
Finally, the accuracy of non-Newtonian Lattice Boltzmann simulations with the SRT and the MRT model is analyzed in detail. Previous work for Newtonian fluids indicates that, depending on the numerical value of the collision frequency \(\Omega\), additional artificial viscosity is introduced by the finite difference scheme, which negatively influences the accuracy. For the non-Newtonian case, an error estimate in the form of a functional is derived on the basis of a series expansion of the Lattice Boltzmann equation. This functional can be solved analytically for the case of the Hagen-Poiseuille channel flow of non-Newtonian fluids. The estimate of the error minimum is excellent in regions where the \(\Omega\) error is the dominant source of error, as opposed to the compressibility error.
The result of this dissertation is a verified and validated software bundle based on the viscosity adaptive Lattice Boltzmann Method. The work restricts itself to the simulation of isothermal, laminar flows with small Mach numbers. As further research goals, testing the VALBM with the minimal error estimate and investigating the VALBM for turbulent flows are suggested.

This dissertation is intended to transport the theory of Serre functors into the context of A-infinity-categories. We begin with an introduction to multicategories and closed multicategories, which form a framework in which the theory of A-infinity-categories is developed. We prove that (unital) A-infinity-categories constitute a closed symmetric multicategory. We define the notion of A-infinity-bimodule similarly to Tradler and show that it is equivalent to an A-infinity-functor of two arguments which takes values in the differential graded category of complexes of k-modules, where k is a commutative ground ring. Serre A-infinity-functors are defined via A-infinity-bimodules following ideas of Kontsevich and Soibelman. We prove that a unital A-infinity-category over a field that is closed under shifts admits a Serre A-infinity-functor if and only if its homotopy category admits an ordinary Serre functor. The proof uses categories and Serre functors enriched in the homotopy category of complexes of k-modules. Another important ingredient is an A-infinity-version of the Yoneda Lemma.

In this work, highly accurate potential energy surfaces for one or more electronic states of the molecules B3, B3- and C3+ were computed with the MR-CI method. All three molecules possess electronically degenerate Jahn-Teller states. In contrast to the previously studied alkali trimers, the conical intersection here lies so low that it must be taken into account in the vibrational analysis, and a diabatic treatment is therefore required. For the X<-1E' transition in B3, the agreement between the computed and the measured spectrum could be improved significantly once more, compared to the previously published results, by using the larger VQZ basis set. For the computed 0-0 transition, no transition is observed in the measured spectrum. Besides the good agreement of the other peaks, this assignment is also supported by the T00 energy. The simple progression of the experimental X<-2E' transition in B3 could likewise be reproduced in good agreement. The simple and short progression results from the fact that there is practically no Jahn-Teller distortion and both component surfaces are almost congruent. For the X<-1E' transition of B3-, a spectrum was also simulated, but no agreement with the measured transitions was found. Since the observed electron detachment energy lies only slightly above the electronic excitation energy, and in view of the strong X<-2E' absorptions of B3 in the same measurement, it remains open which structures are seen in the experiment. For C3+, a vibrational analysis was carried out for the E' ground state. Experimental reference values are lacking in this case. However, the isomerization energy between the bent and linear geometries, which has been under discussion for more than a decade, could now be determined very precisely to 6.8 +- 0.5 kcal/mol.
In a vibronic treatment including the zero-point energies, this value reduces to 4.8 kcal/mol. Furthermore, the existence of a linear minimum was confirmed. C3+ also provides a very nice example of the mixing of different local and global vibrational states, which leads to an irregular sequence of states. Regarding the reactivity of C3+, it was observed that it is highest below 50 K and decreases markedly above that temperature. The vibrational analysis offers no explanation for this behavior, since no thermal vibrational excitation can occur even up to room temperature.

The investigations focused on the bismuth-arene complexes of the series [(C6H6)BiCl3-n]n+ (n = 0 – 3) and [(MemC6H6-m)BiCl3] (m = 0 – 3). In addition, the lighter homologues [(C6H6)SbCl3] and [(C6H6)AsCl3] were studied in order to identify group trends. The lead-arene complexes isoelectronic with the complexes [(C6H6)BiCl3-n]n+ (n = 1 – 3) were also included in the considerations. Of principal interest is the complex [(C6H6)2Pb]2+, which can be regarded as the prototype of a bent-sandwich bis(arene) main-group-element complex. The structures of the neutral complexes and complex cations were optimized at the MP2(fc)/6-31+G(d)(C,H);SBKJC(d)(Bi,Cl) level. The interaction energy in [(C6H6)BiCl3] amounts to –23 kJ/mol (MP4). Considerations of the electron localization function and of the molecular orbitals contributed to the understanding of the bonding situation in the arene complexes studied. A crystal structure analysis confirmed the trends found in the calculations. The arene complexes with lighter central atoms are less stable. Furthermore, the stability increases slightly with the basicity of the arene ligand and strongly with the charge of the complexes. To investigate (B3LYP/6-311+G(d)) the bonding situation in the P4 ring of tetrakis(amino)-1λ5,3λ5-tetraphosphete and the P-P bond of its [2+2] cycloreversion product, the electron localization functions, important molecular orbitals, bond orders, calculated chemical shifts of the phosphorus nuclei, and nucleus-independent chemical shifts were considered. Accordingly, a considerable π-bond contribution can be assumed for the P-P bond in the P4 ring of the tetraphosphete. The P-P bond in the [2+2] cycloreversion product can be understood as an electronically unusual double bond shortened by Coulomb forces. In preparative work on functionalized aminoarsanes, a tetrakis(amino)diarsane with an exceptionally long As-As bond (2.673(3) Å) was obtained some time ago.
Quantum chemical calculations (B3LYP/6-31+G(d)) confirm and support this finding and remove any remaining doubts in connection with the crystallographic determination of the bond length.

This work belongs to algebraic geometry and representation theory and establishes a relation between the two fields. It deals with the derived categories on flat degenerations of projective lines and elliptic curves. The technique of matrix problems is used as a tool. The main result of this dissertation is the following theorem: THEOREM. Let X be a cycle of projective lines. Then there are three types of indecomposable objects in D^-(Coh_X): shifts of skyscraper sheaves at a regular point; bands B(w,m,lambda); and strings S(w). In a completely analogous way, one proves the tameness of the derived categories of many associative algebras.

Computer-based simulation and visualization of the acoustics of a virtual scene can aid the design process of concert halls, lecture rooms, theaters, or living rooms, because not only the visual aspect of a room is important, but also its acoustics. On factory floors, noise reduction is important since noise is hazardous to health. Despite the obvious dissimilarity between our aural and visual senses, many techniques required for the visualization of photo-realistic images and for the auralization of acoustic environments are quite similar. Both applications can be served by geometric methods such as particle and ray tracing if a number of less important effects are neglected. By means of the simulation of room acoustics, we want to predict the acoustic properties of a virtual model. For auralization, a pulse response filter needs to be assembled for each pair of source and listener positions. The convolution of this filter with an anechoic source signal yields the signal received at the listener position. Hence, the pulse response filter must contain all reverberations (echoes) of a unit pulse, including their frequency decompositions due to absorption at different surface materials. For the room acoustic simulation, a particle-based method named phonon tracing is developed. The approach computes the energy or pressure decomposition for each particle (phonon) sent out from a sound source and uses this in a second pass (phonon collection) to construct the response filters for different listeners. This step can be performed at different precision levels. During the tracing step, particle paths and additional information are stored in a so-called phonon map. Using this map, several sound visualization approaches were developed. From the visualization, the effect of different materials on the spectral energy / pressure distribution can be observed. The first few reflections already show whether certain frequency bands are rapidly absorbed.
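The auralization step described above amounts to a discrete convolution of the dry source signal with the assembled pulse response filter; a minimal sketch in plain Python (function name hypothetical):

```python
def auralize(anechoic, impulse_response):
    """Discrete convolution of a dry (anechoic) source signal with the room's
    pulse response filter, yielding the signal heard at the listener position.
    Each filter tap represents a (possibly frequency-weighted) reflection."""
    out = [0.0] * (len(anechoic) + len(impulse_response) - 1)
    for i, s in enumerate(anechoic):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out
```

Convolving a unit pulse through this routine returns the filter itself, which is exactly the property the text exploits: the filter must contain all echoes of a unit pulse.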
The absorbing materials can be identified and replaced in the virtual model, improving the overall acoustic quality of the simulated room. Furthermore, an insight into the pressure / energy received at the listener position is possible. The phonon tracing algorithm as well as several sound visualization approaches are integrated into a common system utilizing Virtual Reality technologies in order to facilitate immersion into the virtual scene. The system is a prototype developed within a project at the University of Kaiserslautern and is still subject to further improvements. It consists of a stereoscopic back-projection system for visual rendering as well as professional audio equipment for auralization purposes.

Three-dimensional (3d) point data is used in industry for measurement and reverse engineering. Precise point data is usually acquired with triangulating laser scanners or high-precision structured light scanners. Lower-precision point data is acquired by real-time structured light devices or by stereo matching with multiple cameras. The basic principle of all these methods is the so-called triangulation of 3d coordinates from two-dimensional (2d) camera images.
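The triangulation principle can be illustrated for the simplest, rectified two-camera case, where depth follows from pixel disparity as Z = f·B/d (a textbook sketch under those assumptions, not the four-camera method of this dissertation; names are illustrative):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified two-camera triangulation: a point seen at horizontal offset
    (disparity) d pixels between the two images lies at depth Z = f * B / d,
    with focal length f in pixels and camera baseline B in meters.
    Larger disparities correspond to nearby points and finer depth resolution,
    which motivates allowing large disparity ranges in stereo matching."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```
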
This dissertation contributes a method for multi-camera stereo matching that uses a system of four synchronized cameras. A GPU-based stereo matching method is presented to achieve a high-quality reconstruction at interactive frame rates. Good depth resolution is achieved by allowing large disparities between the images. A multi-level approach on the GPU allows fast processing of these large disparities. In reverse engineering, hand-held laser scanners are used for scanning complex-shaped objects. The operator of the scanner can scan complex regions more slowly, multiple times, or from multiple angles to achieve a higher point density. Traditionally, computer-aided design (CAD) geometry is reconstructed in a separate step after the scanning. Errors or missing parts in the scan prevent a successful reconstruction. The contribution of this dissertation is an on-line algorithm that allows the reconstruction during the scanning of an object. Scanned points are added to the reconstruction and improve it on-line. The operator can detect the areas in the scan where the reconstruction needs additional data.
First, the point data is thinned out using an octree based data structure. Local normals and principal curvatures are estimated for the reduced set of points. These local geometric values are used for segmentation using a region growing approach. Implicit quadrics are fitted to these segments. The canonical form of the quadrics provides the parameters of basic geometric primitives.
An improved approach uses so-called accumulated means of local geometric properties to perform segmentation and primitive reconstruction in a single step. Local geometric values can be added to and removed from these means on-line to obtain a stable estimate over a complete segment. By estimating the shape of the segment, it is decided which local areas are added to a segment. An accumulated score estimates the probability that a segment belongs to a certain type of geometric primitive. A boundary around the segment is reconstructed using a growing algorithm that ensures that the boundary is closed and avoids self-intersections.
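The core idea of an accumulated mean supporting both adding and removing samples can be sketched as follows (a simplified scalar illustration; the dissertation accumulates several geometric properties and scores per segment, and the class name is hypothetical):

```python
class AccumulatedMean:
    """On-line mean of a local geometric property (e.g. a principal curvature)
    over a growing segment. Samples can be added and removed in O(1), so the
    estimate stays consistent while the region-growing segmentation evolves."""

    def __init__(self):
        self.n = 0
        self.total = 0.0

    def add(self, value):
        self.n += 1
        self.total += value

    def remove(self, value):
        self.n -= 1
        self.total -= value

    def mean(self):
        return self.total / self.n if self.n else 0.0
```

Because the accumulator is updated incrementally, the segment-level estimate never has to be recomputed from all member points when the segmentation changes.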

Acrylamide and acrolein belong to the alpha,beta-unsaturated carbonyl compounds. Like other alpha,beta-unsaturated carbonyl compounds, they are characterized by high reactivity. On the one hand, they can readily react with proteins and DNA, which can cause cytotoxic and genotoxic effects; on the other hand, they can also be rapidly detoxified by glutathione conjugation.
Acrylamide is an industrial chemical produced on a large scale, used mainly in the manufacture of polyacrylamide products. Polymers and copolymers made from acrylamide are used in the cosmetics industry, as binders in paper manufacturing, as flocculants in wastewater treatment, and in biochemical laboratories. After acrylamide-hemoglobin adducts were also found in 2002 in persons not occupationally exposed to acrylamide, food was suspected as a possible source of exposure. Subsequent studies confirmed this and showed that acrylamide can be formed during the heating of food, especially at high temperatures, in the course of the Maillard reaction. The World Health Organization (WHO) estimates the worldwide average dietary exposure to acrylamide at 1-4 µg acrylamide/kg body weight (bw) per day. Acrylamide has shown neurotoxic, developmental and reproductive, genotoxic and carcinogenic effects in various studies. In 1994, acrylamide was classified by the International Agency for Research on Cancer (IARC) in Group 2A as probably carcinogenic to humans.
In the organism, acrylamide is bioactivated to the genotoxic metabolite glycidamide. Glycidamide forms DNA adducts primarily at the N7 position of guanine. In animal studies with rodents, glycidamide-DNA adducts were found in all investigated organs after administration of high doses of acrylamide. The main detoxification route of acrylamide and glycidamide is conjugation to glutathione (GSH) followed by degradation and excretion as mercapturic acids (MA) in urine. Owing to the oxidative metabolism of acrylamide, its biological effect depends essentially on the balance between the activating and deactivating metabolic pathways in the liver.
Acrolein has been produced industrially since 1940 for the manufacture of acrylic acid, the starting material for acrylate polymers. In addition, acrolein can be formed from amino acids, fats, or carbohydrates during the heating of food. During the preparation of carbohydrate-rich foods, acrolein, like acrylamide, can arise in the course of the Maillard reaction. As the simplest alpha,beta-unsaturated aldehyde, acrolein is highly reactive towards nucleophiles such as thiol or amino groups, forming Michael adducts. Because of the high reactivity and volatility of acrolein, only few reliable data on acrolein contents, especially in carbohydrate-rich foods, are currently available; where data exist, they are in the low µg/kg range. Moreover, it is still unclear to what extent acrolein contributes, alongside acrylamide, to total human exposure to heat-induced contaminants in food. The current data do not permit a definitive risk assessment. A constant exposure to acrolein is considered certain. Various studies have shown that, in contrast to acrylamide, the toxicological effects of acrolein are overall not based on an increased tumor incidence. Acrolein was therefore classified by the IARC in Group 3: it is regarded as possibly carcinogenic to humans, but the data are insufficient for a definitive assessment.
The aim of the present work was to investigate, in vitro and in vivo, the toxicokinetics and toxicodynamics of the contaminants acrylamide and acrolein formed during the heating of food. The focus was on measuring the dose-dependent genotoxicity of acrylamide, as well as the MA as the most important detoxification products, in animal experiments in the range of current consumer exposure. The results, especially on toxicokinetics, were to be corroborated by in vitro experiments in primary rat hepatocytes. In addition, the dietary exposure of consumers to acrylamide and acrolein, hitherto hardly investigated by means of biomarkers, was to be determined comparatively. A dose-response study in Sprague Dawley (SD) rats over the dose range from 0.1 to 10,000 µg/kg bw provided, for the first time, quantitative information on DNA adduct formation by the genotoxic acrylamide metabolite glycidamide down to the lowest exposure ranges. In this low-dose range (0.1 to 10 µg/kg bw), the N7-GA-Gua formation measured after a single dose lies at the lower end of human background tissue levels of DNA lesions of various origins. This finding could place the future risk assessment of exposure to such genotoxic carcinogens on a new and measurable basis. With the extremely sensitive instrumental analytics employed in this work, measurements of genotoxic events down to the range of consumer exposure have become possible for the first time. It must be kept in mind, however, that genotoxicity is a necessary but not a sufficient condition for mutagenicity and malignant transformation. The biological response following a genotoxic event must also be included in the risk assessment.
In primary rat hepatocytes incubated with acrylamide, GSH adducts were detectable considerably earlier and at lower acrylamide concentrations than glycidamide and N7-GA-Gua adducts. A direct comparison of the formation of glycidamide with that of the AA-GSH adducts indicated that the detoxification of acrylamide in primary rat hepatocytes proceeds up to three times faster than its bioactivation. In addition, it was shown for the first time that primary rat hepatocytes are capable not only of conjugating xenobiotics to GSH but also, at least to a small extent, of converting them into the corresponding MA.
To investigate the hazard potential of acrolein, its DNA adduct formation was studied in vitro. As a biomarker for the formation of a major DNA adduct, five-fold 15N-labeled hydroxypropanodeoxyguanosine (OH-[15N5]-PdG) was synthesized and characterized. DNA incubation experiments with acrolein showed a concentration- and time-dependent formation of the OH-PdG adducts. Acrolein formed these adducts only slightly more slowly than glycidamide.
To investigate the toxicokinetics of acrylamide and acrolein in vivo after consumption of highly contaminated or commercially available potato chips, two human studies were conducted and evaluated. The excretion kinetics of acrolein-associated MA in humans correlated clearly with the intake of potato chips. A comparison of the amounts of acrolein- and acrylamide-associated MA excreted in urine indicated a considerably higher (4- to 12-fold) dietary exposure to acrolein compared with acrylamide. Analytical measurements of the acrolein contents of the foods, however, revealed only a contamination that can explain only a small fraction of the exposure-related MA amounts recovered in urine. Whether acrolein is bound to the food matrix in a way that escapes analytical detection by the available methods, such as headspace GC/MS, and is released only after ingestion will be the subject of future investigations. In addition, the results of both human studies provide strong evidence of an endogenous formation of acrolein, since a relatively high proportion of acrolein-associated MA was also detected during the wash-out phases. Future studies should examine the endogenous exposure and the formation mechanisms of acrolein and other alkenals from various physiological sources in more detail and relate them to the exogenous, dietary exposure. Likewise, the effects of combined exposure to such heat-induced substances should be investigated more intensively in the future.

At present, the standardization of third generation (3G) mobile radio systems is the subject of worldwide research activities. These systems will cope with the market demand for high data rate services and the system requirement for flexibility concerning the offered services and the transmission qualities. However, there will be deficiencies with respect to high capacity if 3G mobile radio systems exclusively use single antennas. A very promising technique for increasing the capacity of 3G mobile radio systems is the application of adaptive antennas. In this thesis, the benefits of using adaptive antennas are investigated for 3G mobile radio systems based on Time Division CDMA (TD-CDMA), which forms part of the European 3G mobile radio air interface standard adopted by the ETSI and is intensively studied within the standardization activities towards a worldwide 3G air interface standard directed by the 3GPP (3rd Generation Partnership Project). One of the most important issues related to adaptive antennas is the analysis of their benefits compared to single antennas. In this thesis, these benefits are explained theoretically and illustrated by computer simulation results for both data detection, which is performed according to the joint detection principle, and channel estimation, which is applied according to the Steiner estimator, in the TD-CDMA uplink. The theoretical explanations are based on well-known solved mathematical problems. The simulation results illustrating the benefits of adaptive antennas are produced by employing a novel simulation concept, which offers a considerable reduction of the simulation time and complexity, as well as increased flexibility concerning the use of different system parameters, compared to the existing simulation concepts for TD-CDMA.
Furthermore, three novel techniques are presented that can be used in systems with adaptive antennas to further improve the system performance compared to single antennas. These techniques address the problems of code-channel mismatch, of user separation in the spatial domain, and of intercell interference, which, as shown in the thesis, play a critical role in the performance of TD-CDMA with adaptive antennas. Finally, a novel approach for illustrating the performance differences between the uplink and downlink of TD-CDMA based mobile radio systems in a straightforward manner is presented. Since a cellular mobile radio system with adaptive antennas is considered, the ultimate goal is the investigation of the overall system efficiency rather than the efficiency of a single link. In this thesis, the efficiency of TD-CDMA is evaluated through its spectrum efficiency and capacity, which are two closely related performance measures for cellular mobile radio systems. Compared to the use of single antennas, the use of adaptive antennas allows impressive improvements of both spectrum efficiency and capacity. Depending on the mobile radio channel model and the user velocity, improvement factors range from 6 to 10.7 for the spectrum efficiency, and from 6.7 to 12.6 for the spectrum capacity of TD-CDMA. Thus, adaptive antennas constitute a promising technique for the capacity increase of future mobile communications systems.

Adaptive Extraction and Representation of Geometric Structures from Unorganized 3D Point Sets
(2009)

The primary emphasis of this thesis is the extraction and representation of intrinsic properties of three-dimensional (3D) unorganized point clouds. The points establishing a point cloud, as it mainly emerges from LiDaR (Light Detection and Ranging) scan devices or from reconstruction from two-dimensional (2D) image series, represent discrete samples of real-world objects. Depending on the type of scenery the data is generated from, the resulting point cloud may exhibit a variety of different structures. Especially in the case of environmental LiDaR scans, the complexity of the corresponding point clouds is relatively high. Hence, finding new techniques allowing the efficient extraction and representation of the underlying structural entities has become an important research issue of recent interest. This thesis introduces new methods for the extraction and visualization of structural features like surfaces and curves (e.g., ridge lines, creases) from 3D (environmental) point clouds. One main part concerns the extraction of curve-like features from environmental point data sets. It provides a new method supporting stable feature extraction by incorporating a probability-based point classification scheme that characterizes individual points regarding their affiliation to surface-, curve- and volume-like structures. Another part is concerned with surface reconstruction from (environmental) point clouds exhibiting objects of varying complexity. A new method providing multi-resolutional surface representations from regular point clouds is discussed. Following the principles of this approach, a volumetric surface reconstruction method based on the proposed classification scheme is introduced. It allows the reconstruction of surfaces from highly unstructured and noisy point data sets. Furthermore, contributions in the field of reconstructing 3D point clouds from 2D image series are provided.
In addition, a discussion concerning the most important properties of (environmental) point clouds with respect to feature extraction is presented.
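
Classifying points by their affiliation to surface-, curve-, and volume-like structures is commonly approached via the eigenvalues of the local covariance matrix. The following hard-label sketch of such an eigenvalue-based scheme is only an illustration of the idea; the thesis' probabilistic classification is more elaborate, and all names here are illustrative:

```python
import numpy as np

def classify_point(neighbors):
    """Label a point as curve-, surface- or volume-like from the eigenvalue
    spread of its local covariance matrix.

    neighbors: (k, 3) array containing the point's local neighborhood.
    """
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)
    l3, l2, l1 = np.linalg.eigvalsh(cov)       # ascending: l3 <= l2 <= l1
    if l1 <= 0.0:
        return 'volume'                        # degenerate neighborhood
    # The three features sum to 1 and act as pseudo-probabilities.
    scores = {
        'curve':   (l1 - l2) / l1,   # one dominant direction   -> 1D
        'surface': (l2 - l3) / l1,   # two dominant directions  -> 2D
        'volume':  l3 / l1,          # isotropic spread         -> 3D
    }
    return max(scores, key=scores.get)
```

For samples of a straight line the linearity score dominates, for a planar patch the planarity score, and for an isotropically filled region the sphericity score.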

Real-time systems are systems that have to react correctly to stimuli from the environment within given timing constraints.
Today, real-time systems are employed everywhere in industry, not only in safety-critical systems but also in, e.g., communication, entertainment, and multimedia systems.
With the advent of multicore platforms, new challenges in the efficient exploitation of real-time systems have arisen:
First, there is the need for effective scheduling algorithms that feature low overheads to improve the use of the computational resources of real-time systems.
The goal of these algorithms is to ensure timely execution of tasks, i.e., to provide runtime guarantees.
Additionally, many systems require their scheduling algorithm to flexibly react to unforeseen events.
Second, the inherent parallelism of multicore systems leads to contention for shared hardware resources and complicates system analysis.
At any time, multiple applications run with varying resource requirements and compete for the scarce resources of the system.
As a result, there is a need for an adaptive resource management.
Achieving and implementing an effective and efficient resource management is a challenging task.
The main goal of resource management is to guarantee a minimum resource availability to real-time applications.
A further goal is to fulfill global optimization objectives, e.g., maximization of the global system performance, or the user perceived quality of service.
In this thesis, we derive methods based on the slot shifting algorithm.
Slot shifting provides flexible scheduling of time-constrained applications and can react to unforeseen events in time-triggered systems.
For this reason, we aim at designing slot shifting based algorithms targeted for multicore systems to tackle the aforementioned challenges.
The main contribution of this thesis is to present two global slot shifting algorithms targeted for multicore systems.
Additionally, we extend slot shifting algorithms to improve their runtime behavior, or to handle non-preemptive firm aperiodic tasks.
In a variety of experiments, the effectiveness and efficiency of the algorithms are evaluated and confirmed.
Finally, the thesis presents an implementation of a slot-shifting-based logic into a resource management framework for multicore systems.
Thus, the thesis closes the circle and successfully bridges the gap between real-time scheduling theory and real-world implementations.
We prove the applicability of the slot shifting algorithm to effectively and efficiently perform adaptive resource management on multicore systems.
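
The two core steps of slot shifting can be sketched briefly: offline, time is split into intervals ending at the task deadlines and each interval gets a spare capacity; online, a firm aperiodic task is accepted if enough spare capacity remains before its deadline. This simplified uniprocessor sketch (which ignores partial interval overlap in the acceptance test) illustrates the principle, not the thesis' multicore algorithms:

```python
def spare_capacities(tasks, horizon):
    """Offline slot-shifting step: split [0, horizon) into intervals ending at
    the distinct deadlines and compute each interval's spare capacity.

    tasks: list of (wcet, deadline) pairs of the offline-guaranteed set.
    A negative spare capacity means the interval borrows slots from earlier
    intervals, which the backwards min() term propagates.
    """
    ends = sorted({d for _, d in tasks} | {horizon})
    starts = [0] + ends[:-1]
    sc = [0] * len(ends)
    next_sc = 0
    for i in range(len(ends) - 1, -1, -1):
        length = ends[i] - starts[i]
        demand = sum(c for c, d in tasks if d == ends[i])
        sc[i] = length - demand + min(next_sc, 0)
        next_sc = sc[i]
    return list(zip(starts, ends, sc))

def can_accept(intervals, wcet, deadline):
    """Coarse online acceptance test for a firm aperiodic task: enough
    positive spare capacity before its deadline."""
    return sum(max(s, 0) for _, end, s in intervals if end <= deadline) >= wcet
```

For tasks `[(2, 4), (3, 8)]` on a horizon of 10 slots this yields intervals (0, 4), (4, 8), (8, 10) with spare capacities 2, 1, and 2, so a firm aperiodic task with WCET 3 and deadline 8 is accepted, while one with WCET 4 and deadline 4 is rejected.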

Adaptive Strukturoptimierung von Faserkunststoffverbunden unter Berücksichtigung bionischer Aspekte
(2006)

More and more fibre-reinforced composite materials are being used in structural building
components because with conventional materials, the target criteria, such as
defined strength, rigidity etc. can no longer be achieved with a sufficiently low weight
of the structural components, if at all. In view of the high costs, it is understandable
that fibre-reinforced plastic composites tend to be used in technical areas where the
optimization goals mentioned above have a high priority. The aviation and aerospace
industry deserves special mention here. The use of fibre composite materials is also
gaining significance in the automotive and mechanical engineering industry. Thanks
to increasing improvements in optimization methods and manufacturing technologies
and the reduction in costs that this brings with it, complex modules are being produced
even today. This in turn calls for load-compatible, material-specific construction. The subject of this work is the development of a topology optimization tool for the material-appropriate design of fibre-plastic composite (FPC) structures. The objective is to optimize FPC – a
class of high-performance materials the potential of which can only be exploited with
suitable models for the utilization of their anisotropic properties – under consideration
of their capability for technical realization. In doing so, natural growth principles are
implemented into an iterative process, thereby enabling computer simulation. The
goal of this algorithm can be either a targeted stiffness or a weight-optimal solution at sufficient strength, with as even a stress distribution as possible throughout the component. This is achieved by distributing the load from highly loaded to less loaded areas, thereby optimizing the material distribution.
The weight optimization of specific components is possible in this way. The basic
orientation of the base layer, the orientation of the individual laminate layers in a
manner appropriate to the power flux, as well as the topology of bonding layers
and/or the entire laminate are optimized in this design recommendation. Of particular
interest here is the adaptive structural optimization of FPC structures with localized
bonding to high-load bearing load introduction points or generally, in areas with high
stresses. As is further shown, the developed adaptive topology and fibre angle optimization is beneficial from a technological, material-mechanical, and economic point of view, and can be applied in everyday practice without any problems.
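
The growth principle described above, shifting load from highly stressed to less stressed regions until the stress distribution is even, can be illustrated by the classic fully-stressed-design iteration. This toy version for independent elements is a deliberately reduced stand-in for the thesis' FE-based procedure; all numbers are illustrative:

```python
def fully_stressed_design(forces, areas, sigma_ref, steps=60):
    """Iterative growth rule: each element's cross section grows where its
    stress f/a exceeds the reference stress and shrinks where it falls
    below, converging to a design in which every element is fully stressed."""
    areas = list(areas)
    for _ in range(steps):
        # damped update: scale by the square root of the stress ratio
        areas = [a * ((f / a) / sigma_ref) ** 0.5 for f, a in zip(forces, areas)]
    return areas

# Three elements carrying 100 N, 50 N and 10 N with a 5 N/mm^2 reference
# stress converge to cross sections of 20, 10 and 2 mm^2.
```

At the fixed point every element carries exactly the reference stress, i.e. the material distribution is optimal in the fully-stressed sense.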

This work describes the separation of short-chain alkane/alkene mixtures on nanostructured porous adsorbents. For this purpose, various metal-organic frameworks and zeolites were synthesized and characterized. To investigate the adsorption behavior of these adsorbents, adsorption isotherms of C2, C3, and C4 hydrocarbons were measured at different temperatures. The measurements on the pure hydrocarbons showed that the adsorbed amount correlates with the specific surface area of the adsorbent, depends on the critical temperature of the adsorptive, and increases in the order C2 < C3 < C4. An exception are flexible metal-organic frameworks, which exhibit breathing and gate-opening effects. The isotherms of these materials show steps, which depend on pressure, adsorptive, and temperature. The separation of alkane/alkene mixtures on the prepared adsorbents was investigated in a continuously operated fixed-bed adsorber. Different separation factors were observed depending on the pore opening and the framework structure of the adsorbents. The investigation of the desorption of the hydrocarbons from Cu\(_3\)(btc)\(_2\) showed that the desorption process proceeds only very slowly at room temperature. It was found that the temperature required for desorption increases with the carbon number of the hydrocarbon.

In recent years the field of polymer tribology experienced a tremendous development
leading to an increased demand for highly sophisticated in-situ measurement methods.
Therefore, advanced measurement techniques were developed and established
in this study. Innovative approaches based on dynamic thermocouple, resistive electrical conductivity, and confocal distance measurement methods were developed in order to characterize in situ the temperature at sliding interfaces, the real contact area, and the thickness of transfer films. Although dynamic thermocouple
and real contact area measurement techniques were already used in similar
applications for metallic sliding pairs, comprehensive modifications were necessary to
meet the specific demands and characteristics of polymers and composites since
they have significantly different thermal conductivities and contact kinematics. Using tribologically optimized PEEK compounds as a reference, a new measurement and calculation model for the dynamic thermocouple method was set up. This method
allows the determination of hot spot temperatures for PEEK compounds, and it was
found that they can reach up to 1000 °C in case of short carbon fibers present in the
polymer. With regard to the non-isotropic characteristics of the polymer compound,
the contact situation between short carbon fibers and steel counterbody could be
successfully monitored by applying a resistive measurement method for the real contact
area determination. Temperature compensation approaches were investigated
for the transfer film layer thickness determination, resulting in in-situ measurements
with a resolution of ~0.1 μm. In addition to a successful implementation of the measurement
systems, failure mechanism processes were clarified for the PEEK compound
used. For the first time in polymer tribology the behavior of the most interesting
system parameters could be monitored simultaneously under increasing load
conditions. It showed an increasing friction coefficient, wear rate, transfer film layer
thickness, and specimen overall temperature when frictional energy exceeded the
thermal transport capabilities of the specimen. In contrast, the real contact area between
short carbon fibers and steel decreased due to the separation effect caused by
the transfer film layer. Since the sliding contact was more and more matrix dominated,
the hot spot temperatures on the fibers dropped, too. The results of this failure
mechanism investigation already demonstrate the opportunities which the new
measurement techniques provide for a deeper understanding of tribological processes,
enabling improvements in material composition and application design.

If gradient based derivative algorithms are used to improve industrial products by reducing their target functions, the derivatives need to be exact.
The last percent of possible improvement, like the efficiency of a turbine, can only be gained if the derivatives are consistent with the solution process that is used in the simulation software.
It is problematic that the development of the simulation software is an ongoing process which leads to the use of approximated derivatives.
If a derivative computation is implemented manually, it will be inconsistent after some time if it is not updated.
This thesis presents a generalized approach which differentiates the whole simulation software with Algorithmic Differentiation (AD), and guarantees a correct and consistent derivative computation after each change to the software.
For this purpose, the variable tagging technique is developed.
The technique checks at run-time if all dependencies, which are used by the derivative algorithms, are correct.
Since it is also necessary to check the correctness of the implementation, a theorem is developed which describes how AD derivatives can be compared.
This theorem is used to develop further methods that can detect and correct errors.
All methods are designed such that they can be applied in real world applications and are used within industrial configurations.
The process described above yields consistent and correct derivatives but the efficiency can still be improved.
This is done by deriving new derivative algorithms.
A fixed-point iterator approach, with a consistent derivation, yields all state-of-the-art algorithms and produces two new algorithms.
These two new algorithms include all implementation details and therefore produce consistent derivative results.
For detecting hot spots in the application, the state-of-the-art techniques are presented and extended.
The data management is changed such that the performance of the software is affected only marginally when quantities, like the number of input and output variables or the memory consumption, are computed for the detection.
The hot spots can be treated with techniques like checkpointing or preaccumulation.
How these techniques change the time and memory consumption is analyzed and it is shown how they need to be used in selected AD tools.
As a last step, the used AD tools are analyzed in more detail.
The major implementation strategies for operator overloading AD tools are presented and implementation improvements for existing AD tools are discussed.
The discussion focuses on a minimal memory consumption and makes it possible to compare AD tools on a theoretical level.
The new AD tool CoDiPack is based on these findings and its design and concepts are presented.
The improvements and findings in this thesis make it possible to generate automatic, consistent, and correct derivatives in an efficient way for industrial applications.
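
The operator-overloading principle behind such AD tools can be sketched with forward-mode dual numbers. Note that CoDiPack itself is a C++ reverse-mode (taping) tool; this Python sketch only illustrates how overloading propagates derivatives alongside the original computation:

```python
import math

class Dual:
    """Forward-mode AD value: val carries the primal, dot the derivative.
    Each overloaded operator applies the chain rule alongside the original
    operation -- the principle operator-overloading AD tools build on."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def sin(x):
    # an elemental with its hand-coded derivative, as an AD tool provides
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# derivative of f(x) = x*x + sin(x) at x = 2, seeded with dot = 1:
x = Dual(2.0, 1.0)
f = x * x + sin(x)
# f.dot now equals 2*x + cos(x) evaluated at x = 2
```

A reverse-mode tool records the same elemental operations on a tape and replays them backwards, which is more efficient for many inputs and few outputs.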

Automated theorem proving is a search problem and, by its undecidability, a very difficult one. The challenge in the development of a practically successful prover is the mapping of the extensively developed theory into a program that runs efficiently on a computer. Starting from a level-based system model for automated theorem provers, in this work we present different techniques that are important for the development of powerful equational theorem provers. The contributions can be divided into three areas.

Architecture. We present a novel prover architecture that is based on a set-based compression scheme. With moderate additional computational costs we achieve a substantial reduction of the memory requirements. Further wins are architectural clarity, the easy provision of proof objects, and a new way to parallelize a prover which shows respectable speed-ups in practice. The compact representation paves the way to new applications of automated equational provers in the area of verification systems.

Algorithms. To improve the speed of a prover we need efficient solutions for the most time-consuming sub-tasks. We demonstrate improvements of several orders of magnitude for two of the most widely used term orderings, LPO and KBO. Other important contributions are a novel generic unsatisfiability test for ordering constraints and, based on that, a sufficient ground reducibility criterion with an excellent cost-benefit ratio.

Redundancy avoidance. The notion of redundancy is of central importance to justify simplifying inferences which are used to prune the search space. In our experience with unfailing completion, the usual notion of redundancy is not strong enough. In the presence of associativity and commutativity, the provers often get stuck enumerating equations that are permutations of each other. By extending and refining the proof ordering, many more equations can be shown redundant.
Furthermore, our refinement of the unfailing completion approach allows us to use redundant equations for simplification without the need to consider them for generating inferences. We describe the efficient implementation of several redundancy criteria and experimentally investigate their influence on the proof search. The combination of these techniques results in a considerable improvement of the practical performance of a prover, which we demonstrate with extensive experiments for the automated theorem prover Waldmeister. The progress achieved allows the prover to solve problems that were previously out of reach. This considerably enhances the potential of the prover and opens up the way for new applications.
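
To make the term-ordering sub-task concrete, a textbook-direct implementation of LPO (one of the two orderings whose implementation is accelerated in this work; Waldmeister's optimized version is far more elaborate) can be written with terms as nested tuples and variables as strings:

```python
def occurs(x, t):
    """Does variable x occur in term t? Terms are ('f', [args]) or variable strings."""
    return t == x if isinstance(t, str) else any(occurs(x, a) for a in t[1])

def lpo_gt(s, t, prec):
    """s >_lpo t for a strict precedence prec: symbol -> rank (higher = greater).
    Assumes each function symbol has a fixed arity."""
    if isinstance(t, str):                     # t is a variable:
        return s != t and occurs(t, s)         # s > x iff x occurs properly in s
    if isinstance(s, str):                     # a variable is never greater
        return False
    f, ss = s
    g, ts = t
    if any(si == t or lpo_gt(si, t, prec) for si in ss):
        return True                            # some argument of s dominates t
    if not all(lpo_gt(s, tj, prec) for tj in ts):
        return False                           # s must dominate every argument of t
    if prec[f] > prec[g]:
        return True
    if f == g:                                 # equal heads: compare arguments
        for si, ti in zip(ss, ts):             # lexicographically
            if si != ti:
                return lpo_gt(si, ti, prec)
    return False
```

With the group-theory precedence i > f > e, this check confirms, e.g., i(f(x, y)) >_lpo f(i(y), i(x)), the orientation used for the standard inverse-distribution rule.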

Stochastic Network Calculus (SNC) emerged from two branches in the late 90s:
the theory of effective bandwidths and its predecessor the Deterministic Network
Calculus (DNC). As such SNC’s goal is to analyze queueing networks and support
their design and control.
In contrast to queueing theory, which strives for similar goals, SNC uses inequalities to circumvent complex situations, such as stochastic dependencies or non-Poisson arrivals. Leaving the objective to compute exact distributions behind,
SNC derives stochastic performance bounds. Such a bound would, for example, guarantee a system's maximal queue length that is violated only with a known small probability.
This work includes several contributions towards the theory of SNC. They are sorted into five main contributions:
(1) The first chapters give a self-contained introduction to deterministic network calculus and its two branches of stochastic extensions. The focus lies on the notion of network operations, which allow one to derive performance bounds and to simplify complex scenarios.
(2) The author created the first open-source tool to automate the steps of calculating and optimizing MGF-based performance bounds. The tool automatically
calculates end-to-end performance bounds via a symbolic approach. In a second
step, this solution is numerically optimized. A modular design allows the user to
implement their own functions, like traffic models or analysis methods.
(3) The problem of the initial modeling step is addressed with the development
of a statistical network calculus. In many applications the properties of included
elements are mostly unknown. To that end, assumptions about the underlying
processes are made and backed by measurement-based statistical methods. This
thesis presents a way to integrate possible modeling errors into the bounds of SNC.
As a byproduct a dynamic view on the system is obtained that allows SNC to adapt
to non-stationarities.
(4) Probabilistic bounds are fundamentally different from deterministic bounds: While deterministic bounds hold for all times of the analyzed system, this is not true for probabilistic bounds. Stochastic bounds, although still valid for every time t, only hold for one time instance at a time. Sample-path bounds are then only achieved by using Boole’s inequality. This thesis presents an alternative method by adapting the theory of extreme values.
(5) A long-standing problem of SNC is the construction of stochastic bounds for a window flow controller. The corresponding problem for DNC had been solved over a decade ago, but it remained an open problem for SNC. This thesis presents two methods for a successful application of SNC to the window flow controller.
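
The Boole's-inequality step mentioned in (4) is simple enough to state as code: given per-instant violation probabilities, the union bound yields a sample-path bound, and its pessimism growing with the horizon is what motivates the extreme-value alternative. A minimal sketch:

```python
def sample_path_violation_bound(per_instant_eps):
    """Boole's inequality (union bound): if a backlog or delay bound is
    violated at time t with probability at most eps_t, then the probability
    that it is violated at ANY time in the horizon is at most sum(eps_t),
    capped at 1 to remain a probability."""
    return min(1.0, sum(per_instant_eps))

# A bound holding with eps = 1e-6 at each of 1000 slots degrades to about
# 1e-3 over the whole horizon -- pessimism that grows linearly with T.
horizon_eps = sample_path_violation_bound([1e-6] * 1000)
```

For long horizons the sum quickly saturates at 1, which is exactly where a sharper sample-path argument pays off.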

The recently established technologies in the areas of distributed measurement and intelligent
information processing systems, e.g., Cyber Physical Systems (CPS), Ambient
Intelligence/Ambient Assisted Living systems (AmI/AAL), the Internet of Things
(IoT), and Industry 4.0 have increased the demand for the development of intelligent integrated multi-sensory systems to serve rapidly growing markets [1, 2]. These trends increase the significance of complex measurement systems that incorporate numerous advanced methodological implementations, including electronic circuits, signal processing, and multi-sensory information fusion. In particular, in multi-sensory cognition applications, the design of such systems involves skill-intensive tasks, e.g., method selection, parameterization, model analysis, and processing chain construction, which conventionally are carried out manually by an expert designer with immense effort. Moreover, the
strong technological competition imposes even more complicated design problems with multiple constraints, e.g., cost, speed, power consumption, flexibility, and reliability. Thus, the conventional human-expert-based design approach may not be able to cope with the increasing demand in numbers, complexity, and diversity. To alleviate the issue,
the design automation approach has been the topic of numerous research works [3-14] and has been commercialized in several products [15-18]. Additionally, the dynamic adaptation of intelligent multi-sensor systems is a potential solution for developing dependable and robust systems. The intrinsic evolution approach and self-x properties [19],
which include self-monitoring, -calibrating/trimming, and -healing/repairing, are among
the best candidates for the issue. Motivated by the ongoing research trends and based on the background of our research work [12, 13], which is among the pioneers in this topic, the research work of this thesis contributes to the design automation of intelligent integrated multi-sensor systems.
In this research work, the Design Automation for Intelligent COgnitive systems with self-X properties (DAICOX) architecture is presented with the aim of reducing the design effort and providing high-quality, robust solutions for intelligent multi-sensor systems. The DAICOX architecture is conceived with the goals listed below:
- Perform front-to-back complete processing chain design with automated method selection and parameterization,
- Provide a rich choice of pattern recognition methods in the design method pool,
- Associate design information via an interactive user interface and visualization along with intuitive visual programming,
- Deliver high-quality solutions outperforming conventional approaches by using multi-objective optimization,
- Gain adaptability, reliability, and robustness of designed solutions with self-x properties.
Derived from the goals, several scientific methodological developments and implementations,
particularly in the areas of pattern recognition and computational intelligence,
will be pursued as part of the DAICOX architecture in the research work of this thesis.
The method pool is aimed to contain a rich choice of methods and algorithms covering
data acquisition and sensor configuration, signal processing and feature computation,
dimensionality reduction, and classification. These methods will be selected and parameterized
automatically by the DAICOX design optimization to construct a multi-sensory
cognition processing chain. A collection of non-parametric feature quality assessment functions for the Dimensionality Reduction (DR) process will be presented. In addition to standard DR methods, variations of the feature selection method, in particular feature weighting, will be proposed. Three different classification categories
shall be incorporated in the method pool. A hierarchical classification approach will be proposed and developed to serve as a multi-sensor fusion architecture at the decision level. Besides multi-class classification, one-class classification methods, e.g., One-Class SVM and NOVCLASS, will be presented to extend the functionality of the solutions, in particular for anomaly and novelty detection. DAICOX is conceived to effectively handle the
problem of method selection and parameter setting for a particular application yielding
high performance solutions. The processing chain construction tasks will be carried
out by meta-heuristic optimization methods, e.g., Genetic Algorithms (GA) and Particle
Swarm Optimization (PSO), with multi-objective optimization approach and model
analysis for robust solutions. In addition to the automated system design mechanisms, DAICOX will facilitate the design tasks with intuitive visual programming and various visualization options. The design database concept of DAICOX is aimed at allowing the reusability and extensibility of the designed solutions gained from previous knowledge.
Thus, the cooperative design of machine and knowledge from the design expert can also
be utilized for obtaining fully enhanced solutions. In particular, the integration of self-x
properties as well as intrinsic optimization into the system is proposed to gain enduring
reliability and robustness. Hence, DAICOX will allow the inclusion of dynamically reconfigurable hardware instances in the designed solutions in order to realize intrinsic optimization and self-x properties.
As a result of the research work in this thesis, a comprehensive intelligent multi-sensor system design architecture with automated method selection, parameterization, and model analysis has been developed in compliance with open-source multi-platform software. It is integrated with an intuitive design environment, which includes a visual programming concept and design information visualizations. Thus, the design effort is minimized, as
investigated in three case studies of different application background, e.g., food analysis
(LoX), driving assistance (DeCaDrive), and magnetic localization. Moreover, DAICOX
achieved better quality of the solutions compared to the manual approach in all cases,
where the classification rate was increased by 5.4%, 0.06%, and 11.4% in the LoX,
DeCaDrive, and magnetic localization case, respectively. The design time was reduced
by 81.87% compared to the conventional approach by using DAICOX in the LoX case
study. At the current state of development, a number of novel contributions of the thesis
are outlined below.
- Automated processing chain construction and parameterization for the design of signal processing and feature computation.
- Novel dimensionality reduction methods, e.g., GA- and PSO-based feature selection and feature weighting with multi-objective feature quality assessment.
- A modification of a non-parametric compactness measure for feature space quality assessment.
- A decision-level sensor fusion architecture based on the proposed hierarchical classification approach, i.e., H-SVM.
- A collection of one-class classification methods and a novel variation, i.e., NOVCLASS-R.
- Automated design toolboxes supporting front-to-back design with automated model selection and information visualization.
In this research work, due to the complexity of the task, not all of the identified goals have been comprehensively reached yet, nor has the complete architecture definition been fully implemented. Based on the currently implemented tools and frameworks, ongoing development of DAICOX is progressing towards the complete architecture. The potential
future improvements are the extension of the method pool with a richer choice of methods and algorithms, processing chain breeding via a graph-based evolution approach, incorporation of intrinsic optimization, and the integration of self-x properties. With these features, DAICOX will improve its aptness for designing advanced systems to serve the rapidly growing technologies of distributed intelligent measurement systems, in particular CPS and Industry 4.0.
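
The GA-driven method selection can be illustrated in miniature. The following toy wrapper-style feature selection (bit-mask individuals, elitist survival, one-point crossover, point mutation) uses an invented per-feature quality score as fitness; it sketches the principle only and is not DAICOX's multi-objective optimizer:

```python
import random

def ga_feature_selection(quality, penalty=0.3, pop=30, gens=40, seed=1):
    """Toy GA for feature selection: individuals are bit masks over the
    feature set; fitness rewards the (hypothetical) per-feature quality
    scores and penalizes subset size."""
    rng = random.Random(seed)
    n = len(quality)
    def fitness(mask):
        return sum(q for q, b in zip(quality, mask) if b) - penalty * sum(mask)
    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)              # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n)] ^= 1           # point mutation
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

# features with quality above the penalty threshold tend to be selected:
best = ga_feature_selection([0.9, 0.1, 0.8, 0.05, 0.6])
```

A real wrapper approach would evaluate each mask by training and validating a classifier on the selected features instead of using fixed quality scores.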

Advantage of Filtering for Portfolio Optimization in Financial Markets with Partial Information
(2016)

In a financial market we consider three types of investors trading with a finite time horizon with access to a bank account as well as multiple stocks: the fully informed investor, the partially informed investor whose only source of information are the stock prices, and an investor who does not use this information. The drift is modeled either as following linear Gaussian dynamics
or as being a continuous time Markov chain with finite state space. The
optimization problem is to maximize expected utility of terminal wealth.
The case of partial information is based on the use of filtering techniques.
Conditions to ensure boundedness of the expected value of the filters are developed, in the Markov case also for positivity. For the Markov-modulated drift, boundedness of the expected value of the filter relates strongly to portfolio optimization: effects are studied and quantified. The derivation of an equivalent, lower-dimensional market is presented next; it is a type of Mutual Fund Theorem that is shown here.
Gains and losses emanating from the use of filtering are then discussed in detail for different market parameters: For infrequent trading we find that both filters need to comply with the boundedness conditions to be an advantage for the investor. Losses are minimal in case the filters are advantageous.
At an increasing number of stocks, again boundedness conditions need to be
met. Losses in this case depend strongly on the added stocks. The relation
of boundedness and portfolio optimization in the Markov model leads here to
increasing losses for the investor if the boundedness condition is to hold for
all numbers of stocks. In the Markov case, the losses for different numbers
of states are negligible in case more states are assumed than were originally present. Assuming fewer states leads to high losses. Again for the Markov
model, a simplification of the complex optimal trading strategy for power
utility in the partial information setting is shown to cause only minor losses.
If the market parameters are such that short-selling and borrowing constraints are in effect, these constraints may lead to big losses depending on how much effect they have. They can, though, also be an advantage for the investor in case the expected value of the filters does not meet the conditions for boundedness.
All results are implemented and illustrated with the corresponding numerical
findings.
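
For the linear Gaussian drift model, the filter of the partially informed investor is a Kalman(-Bucy) filter. A discretized sketch, with illustrative parameter names rather than the thesis' continuous-time derivation, looks as follows:

```python
def kalman_drift_filter(returns, dt, sigma, kappa, mu_bar, nu, m0, p0):
    """Discrete-time Kalman filter estimating an unobserved Gaussian drift
    from observed returns r_t ~ mu_t * dt + sigma * dB_t, where the drift
    follows mean-reverting dynamics d mu = kappa (mu_bar - mu) dt + nu dW.
    m0, p0 are the prior mean and variance of the drift."""
    m, p = m0, p0
    estimates = []
    for r in returns:
        # predict step: propagate drift mean and variance
        m = m + kappa * (mu_bar - m) * dt
        p = (1.0 - kappa * dt) ** 2 * p + nu ** 2 * dt
        # update step: observation r has mean m*dt and variance sigma^2*dt
        gain = p * dt / (p * dt ** 2 + sigma ** 2 * dt)
        m = m + gain * (r - m * dt)
        p = (1.0 - gain * dt) * p
        estimates.append(m)
    return estimates
```

Because the observation noise variance sigma^2*dt is large relative to a single return, the estimate moves toward the true drift only slowly, which is precisely why the boundedness properties of the filter matter for the portfolio problem.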

Die vorliegende Arbeit stellt die Ergebnisse vor, die unter Einsatz von optischer Molekülspektroskopie und quantenchemischen Berechnungen von Merocyanin-Dimeraggregaten erzielt wurden. Mit Hilfe der UV/Vis-Spektroskopie konnten aus der Vielzahl der zur Verfügung stehenden Farbstoffe diejenigen mit ausgeprägter Aggregationsneigung identifiziert werden. Für neun positiv getestete Verbindungen wurden konzentrations- und temperaturabhängige UV/Vis-Spektren aufgenommen. Die Auswertung gelang dabei mit einem selbst entwickelten Algorithmus, der neben der Aggregationskonstante auch die reinen Spektren von Monomer und Dimer berechnet. Für eine Serie von acht neuen Merocyaninen wurde eine umfassende Charakterisierung der elektrooptischen Eigenschaften vorgestellt und die Ergebnisse im Hinblick auf deren Anwendung diskutiert. Für zwei weitere Farbstoffe konnte eine Beeinflussung der Dimerisierung durch ein äußeres elektrisches Feld frei von Diskrepanzen bestätigt werden. Implikationen der beobachteten Befunde auf das Design photonischer Materialien mit exzitonisch gekoppelten Dimeren wurden besprochen. Durch die stationären und dynamischen Fluoreszenzmessungen konnte das bislang nur für einen Farbstoff bekannte Phänomen der Emission von H-Typ Dimeren an drei weiteren Merocyaninen nachgewiesen werden. Es gelang eine präzise spektrale Trennung der Teilbeiträge von Monomer und Dimer in Absorption sowie Emission vorzunehmen und damit erstmals den von Kasha[1] 1965 vorhergesagten Relaxationskanal für H-Typ Aggregate in allen Details quantitativ zu belegen. Durch quantenchemische Berechnungen auf MP2-Niveau konnte die Geometrie von sechs Monomeren und Dimeren optimiert und mit Hilfe experimenteller Strukturinformationen verifiziert werden. Auf Basis dieser Geometrien wurden essentielle Eigenschaften vom elektronischen Grund- und Anregungszustand berechnet und damit Übereinstimmungen und Unterschiede zu den Experimenten aufgezeigt. 
Furthermore, a method was presented for predicting the aggregation tendency of a given structure type solely on the basis of quantum-chemical results. With regard to the fundamental driving forces of aggregation, the analysis showed that the dimerization rests essentially on electrostatic dipole-dipole and dispersion interactions, with an additional contribution from local interactions that depend on the topology of the chromophores.
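The evaluation algorithm itself is not reproduced in the abstract; a minimal sketch of the underlying monomer-dimer equilibrium model it rests on (assuming the simple dimerization 2M ⇌ D with constant K; all names and numbers below are illustrative, not the thesis's implementation) might look like this:

```python
import math

def monomer_fraction(c_total, K):
    """Solve 2M <=> D with K = [D]/[M]^2 and mass balance
    c_total = [M] + 2[D].  Substituting gives the quadratic
    2*K*[M]^2 + [M] - c_total = 0; take the positive root."""
    m = (-1 + math.sqrt(1 + 8 * K * c_total)) / (4 * K)
    return m / c_total  # fraction of dye present as monomer

def observed_absorbance(c_total, K, eps_M, eps_D, path=1.0):
    """Apparent absorbance as the concentration-weighted sum of the
    pure monomer and dimer spectra (Beer-Lambert law at one wavelength)."""
    m = monomer_fraction(c_total, K) * c_total
    d = (c_total - m) / 2.0
    return path * (eps_M * m + eps_D * d)

# Dilution shifts the equilibrium toward the monomer, which is what the
# concentration-dependent UV/Vis series exploits:
low = monomer_fraction(1e-6, K=1e5)   # dilute: mostly monomer
high = monomer_fraction(1e-3, K=1e5)  # concentrated: mostly dimer
assert low > high
```

Fitting K, eps_M, and eps_D jointly across the whole concentration series at every wavelength then yields both the aggregation constant and the pure component spectra, as described above.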

This thesis contains the mathematical treatment of a special class of analog microelectronic circuits called translinear circuits. The goal is to provide foundations of a new coherent synthesis approach for this class of circuits. The mathematical methods of the suggested synthesis approach come from graph theory, combinatorics, and from algebraic geometry, in particular symbolic methods from computer algebra. Translinear circuits form a very special class of analog circuits, because they rely on nonlinear device models, but still allow a very structured approach to network analysis and synthesis. Thus, translinear circuits play the role of a bridge between the "unknown space" of nonlinear circuit theory and the very well exploited domain of linear circuit theory. The nonlinear equations describing the behavior of translinear circuits possess a strong algebraic structure that is nonetheless flexible enough for a wide range of nonlinear functionality. Furthermore, translinear circuits offer several technical advantages like high functional density, low supply voltage and insensitivity to temperature. This unique profile is the reason that several authors consider translinear networks as the key to systematic synthesis methods for nonlinear circuits. The thesis proposes the usage of a computer-generated catalog of translinear network topologies as a synthesis tool. The idea to compile such a catalog has grown from the observation that on the one hand, the topology of a translinear network must satisfy strong constraints which severely limit the number of "admissible" topologies, in particular for networks with few transistors, and on the other hand, the topology of a translinear network already fixes its essential behavior, at least for static networks, because the so-called translinear principle requires the continuous parameters of all transistors to be the same. 
Even though the admissible topologies are heavily restricted, it is a highly nontrivial task to compile such a catalog. Combinatorial techniques have been adapted to undertake this task. In a catalog of translinear network topologies, prototype network equations can be stored along with each topology. When a circuit with a specified behavior is to be designed, one can search the catalog for a network whose equations can be matched with the desired behavior. In this context, two algebraic problems arise: To set up a meaningful equation for a network in the catalog, an elimination of variables must be performed, and to test whether a prototype equation from the catalog and a specified equation of desired behavior can be "matched", a complex system of polynomial equations must be solved, where the solutions are restricted to a finite set of integers. Sophisticated algorithms from computer algebra are applied in both cases to perform the symbolic computations. All mentioned algorithms have been implemented using C++, Singular, and Mathematica, and are successfully applied to actual design problems of humidity sensor circuitry at Analog Microelectronics GmbH, Mainz. As a result of the research conducted, an exhaustive catalog of all static formal translinear networks with at most eight transistors is available. The application to the humidity sensor system proves the applicability of the developed synthesis approach. The details and implementations of the algorithms are worked out only for static networks, but can easily be adapted to dynamic networks as well. While the implementation of the combinatorial algorithms is stand-alone software written "from scratch" in C++, the implementation of the algebraic algorithms, namely the symbolic treatment of the network equations and the match finding, heavily relies on the sophisticated Gröbner basis engine of Singular and thus on more than a decade of experience contained in a special-purpose computer algebra system.
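The elimination step can be pictured with a generic toy example (using sympy here rather than the Singular implementation the thesis describes): internal circuit variables play the role of the parameter to be eliminated, and the polynomials of a lexicographic Gröbner basis that are free of that variable generate the elimination ideal, i.e. the relation among the externally visible quantities.

```python
from sympy import symbols, groebner

t, x, y = symbols('t x y')

# Two relations involving an internal variable t; eliminating t yields
# the implicit relation between the externally visible x and y.
G = groebner([x - t**2, y - t**3], t, x, y, order='lex')

# Basis polynomials free of t generate the elimination ideal.
eliminated = [p for p in G.exprs if t not in p.free_symbols]
print(eliminated)  # contains x**3 - y**2
```

The match-finding problem is then to decide whether such a prototype relation can be made to coincide with a specified behavior by an integer choice of exponents, which is the polynomial system over a finite set of integers mentioned above.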
It should be pointed out that the thesis contains the new observation that the translinear loop equations of a translinear network are precisely represented by the toric ideal of the network's translinear digraph. Altogether, this thesis confirms and strengthens the key role of translinear circuits as systematically designable nonlinear circuits.
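The connection to toric ideals can be made concrete: the translinear principle states that around each closed loop of base-emitter junctions, the product of the clockwise collector currents equals the product of the counterclockwise ones, so every loop contributes one binomial relation, and such collections of binomials are exactly what toric ideals consist of. A small numerical check for a hypothetical four-junction squaring loop (the circuit and names are illustrative, not taken from the thesis):

```python
from math import prod, isclose

def loop_equation(cw, ccw):
    """Translinear principle: around a closed loop of base-emitter
    junctions, the product of clockwise junction currents equals the
    product of counterclockwise ones (identical saturation currents
    and temperatures assumed, as the principle requires)."""
    return prod(cw), prod(ccw)

# Hypothetical squarer: CW junctions carry (I_in, I_in), CCW junctions
# carry (I_ref, I_out).  The single loop binomial
#   I_in * I_in - I_ref * I_out
# fixes the static behavior I_out = I_in**2 / I_ref.
I_in, I_ref = 3e-6, 1e-6
I_out = I_in**2 / I_ref
lhs, rhs = loop_equation([I_in, I_in], [I_ref, I_out])
assert isclose(lhs, rhs)
```

This also illustrates why the topology alone fixes the essential static behavior: once the loop is drawn, the binomial, and hence the input-output relation, is determined.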

In the first part of this thesis we study algorithmic aspects of tropical intersection theory. We analyse how divisors and intersection products on tropical cycles can actually be computed using polyhedral geometry. The main focus is the study of moduli spaces, where the underlying combinatorics of the varieties involved allow a much more efficient way of computing certain tropical cycles. The algorithms discussed here have been implemented in an extension for polymake, a software for polyhedral computations.
In the second part we apply the algorithmic toolkit developed in the first part to the study of tropical double Hurwitz cycles. Hurwitz cycles are a higher-dimensional generalization of Hurwitz numbers, which count covers of \(\mathbb{P}^1\) by smooth curves of a given genus with a certain fixed ramification behaviour. Double Hurwitz numbers provide a strong connection between various mathematical disciplines, including algebraic geometry, representation theory and combinatorics. The tropical cycles have a rather complex combinatorial nature, so it is very difficult to study them purely "by hand". Being able to compute examples has been very helpful
in coming up with theoretical results. Our main result states that all marked and unmarked Hurwitz cycles are connected in codimension one and that for a generic choice of simple ramification points the marked cycle is a multiple of an irreducible cycle. In addition we provide computational examples to show that this is the strongest possible statement.
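For readers unfamiliar with the tropical setting, the basic objects can be illustrated in a few lines (a generic min-plus sketch, not the polymake extension itself): tropical arithmetic replaces addition by min and multiplication by +, a tropical polynomial is therefore piecewise linear, and its "vanishing locus" is the set of points where the minimum is attained at least twice.

```python
def trop_eval(terms, x, y):
    """Evaluate a tropical (min-plus) polynomial at (x, y).
    terms: list of (c, i, j) encoding the tropical monomial c + i*x + j*y."""
    values = [c + i * x + j * y for (c, i, j) in terms]
    return min(values), values

def on_tropical_curve(terms, x, y, tol=1e-9):
    """A point lies on the tropical curve iff the minimum is attained
    by at least two monomials (the piecewise-linear function breaks there)."""
    m, values = trop_eval(terms, x, y)
    return sum(1 for v in values if abs(v - m) <= tol) >= 2

# The tropical line min(0, x, y): three rays meeting at the origin.
line = [(0.0, 0, 0), (0.0, 1, 0), (0.0, 0, 1)]
assert on_tropical_curve(line, 0.0, 0.0)      # vertex: all three terms agree
assert on_tropical_curve(line, -1.0, -1.0)    # x- and y-terms tie below 0
assert not on_tropical_curve(line, 1.0, 2.0)  # constant term wins alone
```

Tropical cycles such as the Hurwitz cycles studied here are higher-dimensional polyhedral analogues of such curves, which is why polyhedral software is the natural computational tool.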

This thesis builds a bridge between singularity theory and computer algebra. To an isolated hypersurface singularity one can associate a regular meromorphic connection, the Gauß-Manin connection, containing a lattice, the Brieskorn lattice. The leading terms of the Brieskorn lattice with respect to the weight and V-filtration of the Gauß-Manin connection define the spectral pairs. They correspond to the Hodge numbers of the mixed Hodge structure on the cohomology of the Milnor fibre and belong to the finest known invariants of isolated hypersurface singularities. The differential structure of the Brieskorn lattice can be described by two complex endomorphisms \(A_0\) and \(A_1\) containing even more information than the spectral pairs. In this thesis, an algorithmic approach to the Brieskorn lattice in the Gauß-Manin connection is presented. It leads to algorithms to compute the complex monodromy, the spectral pairs, and the differential structure of the Brieskorn lattice. These algorithms are implemented in the computer algebra system Singular.
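A pseudocode-style sketch of how such a computation is invoked in Singular (assuming an interface along the lines of the gmssing.lib library for Gauß-Manin systems; the procedure names shown here may differ from the actual implementation) could look as follows:

```
LIB "gmssing.lib";          // Gauss-Manin system of isolated singularities
ring R = 0, (x,y), ds;      // local ordering at the origin
poly f = x5 + y5 + x2y2;    // an isolated hypersurface singularity
monodromy(f);               // complex monodromy
sppairs(f);                 // spectral pairs
```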

In modern algebraic geometry solutions of polynomial equations are studied from a qualitative point of view using highly sophisticated tools such as cohomology, \(D\)-modules and Hodge structures. The latter have been unified in Saito’s far-reaching theory of mixed Hodge modules, that has shown striking applications including vanishing theorems for cohomology. A mixed Hodge module can be seen as a special type of filtered \(D\)-module, which is an algebraic counterpart of a system of linear differential equations. We present the first algorithmic approach to Saito’s theory. To this end, we develop a Gröbner basis theory for a new class of algebras generalizing PBW-algebras.
The category of mixed Hodge modules satisfies Grothendieck’s six-functor formalism. In part these functors rely on an additional natural filtration, the so-called \(V\)-filtration. A key result of this thesis is an algorithm to compute the \(V\)-filtration in the filtered setting. We derive from this algorithm methods for the computation of (extraordinary) direct image functors under open embeddings of complements of pure codimension one subvarieties. As side results we show
how to compute vanishing and nearby cycle functors and a quasi-inverse of Kashiwara’s equivalence for mixed Hodge modules.
Describing these functors in terms of local coordinates and taking local sections, we reduce the corresponding computations to algorithms over certain bifiltered algebras. This leads us to introduce the class of so-called PBW-reduction-algebras, a generalization of the class of PBW-algebras. We establish a comprehensive Gröbner basis framework for this generalization, representing the involved filtrations by weight vectors.
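The role of weight vectors in such a Gröbner basis framework can be illustrated with a minimal sketch (generic, not the thesis's algorithms): a weight vector \(w\) assigns each monomial the degree \(\langle w, \text{exponent}\rangle\), and the induced leading form of a polynomial collects its terms of maximal weight, which is how a filtration level is read off.

```python
def weighted_leading_form(poly, w):
    """poly: dict mapping exponent tuples to coefficients.
    Return the sub-polynomial of terms whose weight <w, exponent>
    is maximal -- the leading form w.r.t. the weight vector w."""
    weight = lambda e: sum(wi * ei for wi, ei in zip(w, e))
    top = max(weight(e) for e in poly)
    return {e: c for e, c in poly.items() if weight(e) == top}

# f = x^2 + x*y^3 + y, encoded by its exponent vectors:
f = {(2, 0): 1, (1, 3): 1, (0, 1): 1}
# With w = (1, 2) the term x*y^3 (weight 7) dominates:
assert weighted_leading_form(f, (1, 2)) == {(1, 3): 1}
# A different weight vector selects a different filtration level:
assert weighted_leading_form(f, (3, 1)) == {(2, 0): 1, (1, 3): 1}
```

Gröbner basis computations with respect to such weight orders then allow the filtered structures above to be manipulated term by term.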

Ultrasound is one of the most frequently used imaging modalities in cardiology, owing to its low cost, its non-invasiveness, and its harmlessness to patients. A drawback of existing devices is that only two-dimensional images can be generated. In addition, anatomical constraints prevent these images from being acquired from arbitrary positions, which complicates the analysis of the data and consequently the diagnosis. This thesis addresses new algorithmic aspects of four-dimensional cardiac ultrasound, from the acquisition of the raw data, through their synchronization and reconstruction, to visualization. An additional chapter develops a new technique for further enhancing the visualization and for the visual editing of the ultrasound data. The methods developed here make it possible to remove, or at least mitigate, certain limitations of cardiac ultrasound, above all the restriction to two-dimensional slice images and the limited choice of viewing direction.
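The synchronization step mentioned above must assign each acquired 2D frame to a phase of the cardiac cycle. A minimal retrospective-gating sketch (assuming an ECG with detected R-peaks; the function and parameter names are illustrative, not the thesis's implementation) could look like this:

```python
from bisect import bisect_right

def phase_bin(t, r_peaks, n_bins):
    """Assign a frame timestamp t to one of n_bins cardiac phase bins,
    using the fraction of the enclosing R-R interval elapsed at t
    (retrospective ECG gating)."""
    i = bisect_right(r_peaks, t)
    if i == 0 or i == len(r_peaks):
        return None  # frame lies outside the recorded beats
    t0, t1 = r_peaks[i - 1], r_peaks[i]
    phase = (t - t0) / (t1 - t0)  # 0 at the R-peak, approaching 1 before the next
    return min(int(phase * n_bins), n_bins - 1)

# Two beats with R-peaks at 0.0 s, 0.8 s, 1.6 s; 8 phase bins.
peaks = [0.0, 0.8, 1.6]
assert phase_bin(0.35, peaks, 8) == 3   # mid-cycle frame of the first beat
assert phase_bin(1.15, peaks, 8) == 3   # same phase in the second beat
assert phase_bin(2.0, peaks, 8) is None # after the last detected beat
```

Frames collected into the same phase bin across many heartbeats can then be reconstructed into one 3D volume per phase, yielding the four-dimensional (3D + time) data set.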