
Induction welding can be used both for welding thermoplastic fiber-reinforced polymer composites and for joining metal to fiber-reinforced polymer composites. A review of the possibilities of such joints showed that the joint quality is determined by the surface pretreatment of the metallic and polymeric joining partners and by the process conditions.

Several new tools (e.g., special specimen holders, a temperature-controlled pressure stamp, a heating and consolidation roller) were developed and integrated into the induction welding system for producing metal/fiber-reinforced polymer joints. Topographic analyses by scanning electron microscopy and laser profilometry show a strong influence of the pretreatment methods on the surface roughness. In addition, the pretreatment changes the physical properties (surface energy) and the chemical properties (atomic concentration). The joint properties were first investigated by lap shear tests and, in parallel, by surface analyses. The results of these investigations show:

• The pretreatment methods corundum blasting and acid pickling of the metallic joining partner lead to the highest joint strengths. Atmospheric plasma cleaning of the polymeric joining partner yields an increase in lap shear strength of about 10 % as well as a narrower confidence interval.
• The lap shear strength depends on the process pressure and thus on the flow behavior of the polymer in the joint zone.
• The orientation of the test load relative to the fiber orientation has no influence on the lap shear strength of the fiber-reinforced materials used.
• The plain weave, with more polymer-rich zones, leads to a slight increase in lap shear strength compared to a 4-harness satin weave. Owing to fiber displacements, the non-crimp fabric yields strengths similar to those of the plain weave. This shows that the joint strength is determined by the polymer.
• The lap shear strength rises strongly when an additional polymer film is placed in the joint zone. Micrographs show a polymer interlayer thickness of 5 to 20 μm for AlMg3-CF/PA66.
• Through the targeted use of different pretreatment methods (corundum blasting with an additional polymer film), the lap shear strength can be doubled relative to the untreated state, reaching up to 14 MPa for AlMg3-CF/PA66 joints and 18 MPa for DC01-CF/PEEK joints.

Further investigations of the process parameters for DC01-CF/PEEK joints showed that the following settings raise the lap shear strength further, to 19 MPa:

• a pressure-stamp start temperature of 370 °C,
• a holding time of 7 minutes,
• a cooling rate of 6 °C/min.

For AlMg3-CF/PA66, a stamp temperature of 10 °C was found to yield a lap shear strength of 14.5 MPa. These two lap shear strengths are only 10-15 % lower than those of adhesive bonds produced under optimal conditions.

Initial investigations show that galvanic corrosion of metal/FRP joints leads to a rapid decrease in lap shear strength. For these tests, the specimens were stored in water for three weeks. With direct contact between carbon fiber and aluminum, the decrease is explained by corrosion in the joint zone; the lap shear strengths of these specimens drop to 5 MPa. Specimens with a glass-fiber ply as an insulating layer show no corrosion products, and their lap shear strength decreases by 30 %, to 8-9 MPa.

For specimens stored in salt water, the galvanic corrosion is much more pronounced. After only one week, the acetone-cleaned specimens with an additional polymer film retain a residual lap shear strength of only 3 to 4 MPa. The corundum-blasted specimens show corrosion products at the edge of and within the joint zone, but still exhibit a lap shear strength of about 10 MPa. The glass-fiber-insulated specimens show neither corrosion products nor a decrease in lap shear strength.

Dynamic thermogravimetric analyses were carried out in different ambient gases to determine the decomposition temperature of the fiber-reinforced polymer. For CF/PA66 this did not enlarge the processing window, since the decomposition is mainly thermal rather than thermo-oxidative. The decomposition temperature of CF/PEEK in air was found to be 550 °C. For CF/PA66, the enlargement of the processing window is small, and welding in nitrogen also showed no increase in lap shear strength. Nevertheless, induction welding under shielding gas has great potential for saturated hydrocarbons such as glass-fiber-reinforced polypropylene, whose decomposition temperature rose from 230 °C in air to 390 °C in nitrogen.

A demonstrator consisting of an aluminum profile and a CF/PA66 plate was produced, showing that the knowledge gained can also be applied industrially. The inductive heating was successfully reproduced by means of analytical models and finite element calculations.
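The analytical modeling of inductive heating mentioned above builds on standard electromagnetic relations. As a minimal illustrative sketch (not the model developed in the work), the skin depth sets how deeply the induced eddy currents, and hence the heat generation, penetrate a conductive laminate; all material values below are assumed round numbers, not measured data:

```python
import math

def skin_depth(resistivity, frequency, mu_r=1.0):
    """Electromagnetic skin depth: delta = sqrt(rho / (pi * f * mu0 * mu_r)).

    resistivity in ohm*m, frequency in Hz. Returns depth in meters.
    """
    mu0 = 4e-7 * math.pi  # vacuum permeability, H/m
    return math.sqrt(resistivity / (math.pi * frequency * mu0 * mu_r))

# Illustrative values: a CFRP laminate (rho ~ 1e-4 ohm*m along the fibers)
# versus aluminum (rho ~ 2.7e-8 ohm*m), both at an assumed 500 kHz field
delta_cfrp = skin_depth(1e-4, 500e3)
delta_al = skin_depth(2.7e-8, 500e3)
print(f"CFRP:     {delta_cfrp * 1000:.2f} mm")
print(f"Aluminum: {delta_al * 1000:.3f} mm")
```

The much larger skin depth of the poorly conducting CFRP is what allows complete through-thickness heating of the laminate, while the metallic partner is heated only in a thin surface layer.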

Nanotechnology is now recognized as one of the most promising areas for technological
development in the 21st century. In materials research, the development of
polymer nanocomposites is rapidly emerging as a multidisciplinary research activity
whose results could widen the applications of polymers to the benefit of many different
industries. Nanocomposites are a new class of composites that are particle-filled
polymers for which at least one dimension of the dispersed particle is in the nanometer
range. In this area, polymer/clay nanocomposites have attracted considerable interest because they often exhibit remarkable property improvements compared to the virgin polymer or to conventional micro- and macrocomposites.
The present work addresses the toughening and reinforcement of thermoplastics via
a novel method which allows us to achieve micro- and nanocomposites. In this work
two matrices are used: amorphous polystyrene (PS) and semi-crystalline polyoxymethylene
(POM). Polyurethane (PU) was selected as the toughening agent for POM
and used in its latex form. It is noteworthy that the mean particle size of rubber latices closely matches that of conventional toughening agents (impact modifiers).
Boehmite alumina and sodium fluorohectorite (FH) were used as reinforcements.
One of the criteria for selecting these fillers was that they are water swellable/
dispersible and thus their nanoscale dispersion can be achieved also in aqueous
polymer latex. A systematic study was performed on how to adapt discontinuous and continuous manufacturing techniques for the related nanocomposites.
The dispersion of nanofillers was characterized by transmission and scanning electron microscopy, atomic force microscopy (TEM, SEM and AFM, respectively), and X-ray diffraction (XRD) techniques, and discussed. The crystallization of POM was studied by means
of differential scanning calorimetry and polarized light optical microscopy (DSC and
PLM, respectively). The mechanical and thermomechanical properties of the composites
were determined in uniaxial tensile, dynamic-mechanical thermal analysis
(DMTA), short-time creep tests, and thermogravimetric analysis (TGA).
PS composites were produced first by a discontinuous manufacturing technique, whereby FH or alumina was incorporated into the PS matrix by melt blending, with and without precompounding of PS latex with the nanofiller. It was found that direct melt mixing (DM) of the nanofillers with PS resulted in microcomposites, whereas latex-mediated pre-compounding (masterbatch technique, MB) yielded nanocomposites. FH was
not intercalated by PS when prepared by DM. On the other hand, FH was well dispersed
(mostly intercalated) in PS via the PS latex-mediated predispersion of FH following
the MB route. The nanocomposites produced by MB outperformed the DM compounded microcomposites with respect to properties such as stiffness, strength and ductility, based on dynamic-mechanical and static tensile tests. The creep resistance of the nanocomposites (summarized in master curves) was improved compared to that of the microcomposites. Master curves (creep compliance vs. time), constructed from isothermal creep tests performed at different temperatures, showed that the nanofiller reinforcement mostly affects the initial creep compliance.
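Master-curve construction of this kind rests on time-temperature superposition: each isothermal creep curve is shifted horizontally on the log-time axis onto a reference temperature. The sketch below uses the WLF equation with its "universal" constants as placeholder shift factors (in practice the shifts are fitted to the measured isothermals); all data are invented for illustration:

```python
import math

def wlf_shift(T, T_ref, c1=17.44, c2=51.6):
    """log10 of the shift factor a_T from the WLF equation.

    c1, c2 are the 'universal' WLF constants, used here only as placeholders.
    """
    return -c1 * (T - T_ref) / (c2 + (T - T_ref))

def master_curve(curves, T_ref):
    """Shift isothermal creep data onto a master curve at T_ref.

    curves: {T: [(time_s, compliance), ...]} -- invented short-time creep data.
    Returns (log10(reduced time), compliance) pairs sorted by reduced time.
    """
    pts = []
    for T, data in curves.items():
        log_aT = wlf_shift(T, T_ref)
        for t, J in data:
            pts.append((math.log10(t) - log_aT, J))  # reduced time t / a_T
    return sorted(pts)

# Hypothetical isothermal short-time creep tests at three temperatures (degC)
curves = {
    100: [(1, 0.50), (10, 0.55), (100, 0.61)],
    120: [(1, 0.62), (10, 0.70), (100, 0.80)],
    140: [(1, 0.82), (10, 0.95), (100, 1.10)],
}
mc = master_curve(curves, T_ref=100)
print(f"master curve spans log10(t/s) = {mc[0][0]:.1f} .. {mc[-1][0]:.1f}")
```

Three two-decade tests thus combine into a master curve covering many decades of reduced time, which is what makes the long-term creep comparison between micro- and nanocomposites possible.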
Next, ternary composites composed of POM, PU and boehmite alumina were produced
by melt blending with and without latex precompounding. Latex precompounding
served for the predispersion of the alumina particles. The related MB was produced
by mixing the PU latex with water dispersible boehmite alumina. The composites
produced by the MB technique outperformed the DM compounded composites in
respect to most of the thermal and mechanical characteristics.
Toughened and/or reinforced PS- and POM-based composites were also successfully produced by a continuous extrusion technique. This technique resulted in good dispersion of both the nanofiller (boehmite) and the impact modifier (PU). Compared to the microcomposites obtained by conventional DM, the nanofiller dispersion became finer and more uniform when the water-mediated predispersion was used. The resulting
structure markedly affected the mechanical properties (stiffness and creep resistance)
of the corresponding composites. The impact resistance of POM was greatly enhanced by the addition of PU rubber when the composites were manufactured by the continuous extrusion technique. This was traced to the dispersed PU particle size lying in the size range of conventional impact modifiers.

In recent years the field of polymer tribology experienced a tremendous development
leading to an increased demand for highly sophisticated in-situ measurement methods.
Therefore, advanced measurement techniques were developed and established
in this study. Innovative approaches based on dynamic-thermocouple, resistive electrical-conductivity, and confocal distance measurement methods were developed to characterize in situ the temperature at the sliding interface and the real contact area, as well as the thickness of transfer films. Although dynamic thermocouple
and real contact area measurement techniques were already used in similar
applications for metallic sliding pairs, comprehensive modifications were necessary to
meet the specific demands and characteristics of polymers and composites since
they have significantly different thermal conductivities and contact kinematics. Using tribologically optimized PEEK compounds as a reference, a new measurement and calculation model for the dynamic thermocouple method was set up. This method
allows the determination of hot spot temperatures for PEEK compounds, and it was
found that they can reach up to 1000 °C in case of short carbon fibers present in the
polymer. With regard to the non-isotropic characteristics of the polymer compound,
the contact situation between short carbon fibers and steel counterbody could be
successfully monitored by applying a resistive measurement method for the real contact
area determination. Temperature compensation approaches were investigated
for the transfer film layer thickness determination, resulting in in-situ measurements
with a resolution of ~0.1 μm. In addition to the successful implementation of the measurement systems, the failure mechanisms of the PEEK compound used were clarified. For the first time in polymer tribology, the most relevant system parameters could be monitored simultaneously under increasing load. The measurements showed an increasing friction coefficient, wear rate, transfer film layer
thickness, and specimen overall temperature when frictional energy exceeded the
thermal transport capabilities of the specimen. In contrast, the real contact area between
short carbon fibers and steel decreased due to the separation effect caused by
the transfer film layer. Since the sliding contact was more and more matrix dominated,
the hot spot temperatures on the fibers dropped, too. The results of this failure
mechanism investigation already demonstrate the opportunities which the new
measurement techniques provide for a deeper understanding of tribological processes,
enabling improvements in material composition and application design.
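The resistive real-contact-area method can be pictured with a simple parallel-spot conductance model: each carbon-fiber/steel contact spot contributes conductance, so the measured contact resistance falls as more spots touch. This is an illustrative sketch, not the calibration used in the study, and every number in it is invented:

```python
def contact_area_fraction(R_measured, R_single_spot, n_spots_full, a_spot):
    """Estimate real contact area from a measured electrical resistance.

    Model assumption: identical contact spots in parallel, so the number of
    contacting spots is R_single_spot / R_measured, and the real contact
    area is that count times the area of one spot. All parameters are
    illustrative placeholders, not values from this work.
    """
    n_contacting = R_single_spot / R_measured
    frac = min(n_contacting / n_spots_full, 1.0)  # fraction of possible spots
    return frac, n_contacting * a_spot

# Hypothetical numbers: single-spot resistance 1 kOhm, 500 possible fiber
# contacts, each spot ~ 50 um^2; resistance rises as a transfer film
# progressively separates the fibers from the steel counterbody
for R in (5.0, 20.0, 100.0):  # measured contact resistance, Ohm
    frac, area = contact_area_fraction(R, 1000.0, 500, 50e-12)
    print(f"R = {R:6.1f} Ohm -> contact fraction {frac:.2f}, "
          f"A_real = {area * 1e12:.0f} um^2")
```

The inverse resistance-area relation is why the observed growth of the transfer film shows up in the measurement as a shrinking real contact area between fibers and steel.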

Sewn net-shape preform based composite manufacturing technology is widely
accepted in combination with liquid composite molding technologies for the
manufacturing of fiber reinforced polymer composites. The development of three-dimensional dry fibrous reinforcement structures with the desired fiber orientation and volume fraction before resin infusion is based on predefined preforming processes. Various preform manufacturing aspects influence the overall composite manufacturing process. The sewing technology used for preform manufacturing has a number of challenges to overcome, including consistency in preform quality, composite quality, and composite mechanical properties.
Experimental studies are undertaken to investigate the influence of various sewing
parameters on the preform manufacturing processes, preform quality, and the fiber
reinforced polymer composite quality and properties. Sewing thread, sewing machine parameters, shortcomings of the sewing process, and remedies are explained according to their importance during preforming and liquid composite molding. The stitches and the elliptical fiber-free zones generated in the thickness direction were investigated by evaluating laminate micrographs. A correlation between the ellipse formation phenomenon, the sewing thread, and the sewing machine parameters is established. A statistical tool, analysis of variance, was used to identify the major preform processing factors influencing preform imperfections.
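The analysis of variance mentioned above can be sketched in a few lines of textbook arithmetic; the factor (stitch density) and the response values (fiber-free ellipse width) below are invented for illustration:

```python
def one_way_anova(groups):
    """One-way ANOVA F statistic for a single preform processing factor.

    groups: lists of a response variable (e.g. fiber-free ellipse width, mm)
    measured at different factor levels. Pure-Python textbook formulas;
    the data used below are invented, not measurements from this work.
    """
    k = len(groups)                               # number of factor levels
    n = sum(len(g) for g in groups)               # total observations
    grand = sum(sum(g) for g in groups) / n       # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical ellipse widths (mm) at three stitch-density levels
low    = [1.1, 1.2, 1.0, 1.3]
medium = [1.6, 1.5, 1.7, 1.4]
high   = [2.1, 2.3, 2.0, 2.2]
F = one_way_anova([low, medium, high])
print(f"F = {F:.1f}")  # compare against the F-critical value for (2, 9) d.o.f.
```

A large F relative to the critical value flags the factor as a significant driver of the imperfection, which is how the major preform processing factors are ranked.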
To assess preform quality, the observations on sewing thread requirements for preform and structural sewing were documented in detail during the experimental studies and explained according to their significance for composite processing. Furthermore, selection criteria for sewing thread according to the end application are discussed in detail. Investigations of polyester sewing thread as a high-speed preform manufacturing element are also performed. The applicability of polyester sewing thread for preform sewing and the challenges to be overcome for its extensive
utilization in composite components are explained. Apart from this, the influence of the physical structure of the sewing thread on laminate quality and properties is explained, and the relationship between them is discussed briefly. Furthermore, challenges caused by applied spin finishes and sizings, and remedies for them, are discussed. Sewing threads made of high-performance fibers available on the market, e.g., carbon, glass, and Zylon, are studied for the effect of the thread material on through-the-thickness laminate properties. Threads made of carbon or glass fibers are very rigid and produce a number of defects, which is a major cause of concern. An optimized sewing procedure has been implemented to minimize in-plane and through-the-thickness imperfections and to improve the mechanical properties and surface characteristics of the composite laminate.
Preform sewing process and final ready to impregnate preforms were analyzed for
quality appearance. The sewing defects and their influence on composite structure
are monitored. The preform compressibility before and after the sewing operations was intensively studied, and a correlation with the sewing parameters was developed. The influence of the sewing process parameters on warpage and on the change in preform areal weight is also explained in detail. The results of these analytical experiments can help to improve the further exploitation of sewn preforms for composite manufacturing and the overall preform and laminate quality.

Induction welding is a technique for joining of thermoplastic composites. An alternating
electromagnetic field is used for contact-free and fast heating of the parts to be
welded. In case of a suitable reinforcement structure heat generation occurs directly
in the laminate with complete heating in thickness direction in the vicinity of the coil.
The resulting temperature field is influenced by the distance to the induction coil with
decreasing temperature for increasing distance. Consequently, the surface facing the inductor exhibits the highest temperature and the opposite surface the lowest. This temperature field significantly complicates the welding process. Due to
complete heating the laminate has to be loaded with pressure in order to prevent delamination,
which requires the usage of complex and expensive welding tools. Additionally,
the temperature difference between the inductor and the opposite side may
be greater than the processing window, which is determined by the properties of the
matrix polymer.
The induction welding process is influenced by numerous parameters. Due to this complexity, process development is mainly based on experimental studies. The investigation
of parameter influences and interactions is cumbersome and the measurement
of quality relevant parameters, especially in the bondline, is difficult. Process simulation
can reduce the effort of parameter studies and contribute to further analysis of
the induction welding process.
The objective of this work is the development of a process variant of induction welding
preventing complete heating of the laminate in thickness direction. For optimal
welding the bondline has to reach the welding temperature whereas the other domains
should remain below the melting temperature of the matrix polymer.
For control of the temperature distribution localized cooling by an impinging jet of
compressed air was implemented. The effect was assessed by static heating experiments
with carbon fiber reinforced polyetheretherketone (CF/PEEK) and polyphenylenesulfide
(CF/PPS).
The application of localized cooling influenced the temperature distribution in the thickness direction of the laminate according to the requirements of the welding process: the temperature maximum was shifted from the inductor side toward the opposite side. This enables heating of the bondline to welding temperature while preventing melting, and the effects connected with it, on the outer surface.
Inductive heating and the process variant with localized cooling were implemented in three-dimensional finite-element process models. For that purpose, the finite-element software Comsol Multiphysics 4.1 was used to develop fully coupled electromagnetic-thermal models, which were validated experimentally.
A sensitivity analysis of the processing parameters of inductive heating was conducted. The coil current, field frequency, and heat capacity were identified as significant parameters. The cooling effect of the impinging jets was represented by appropriate convection coefficients.
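The shift of the temperature maximum achieved by the impinging jet can be illustrated with a crude one-dimensional finite-difference model (far simpler than the validated Comsol models of this work): a volumetric source that decays away from the inductor-facing surface, plus lumped convective sinks at the two surfaces. All material and process values below are invented round numbers:

```python
import math

def through_thickness_profile(h_front, n=21, steps=40000):
    """Explicit 1-D finite-difference sketch of through-thickness induction
    heating. Node 0 faces the inductor; the volumetric source decays with
    depth; surface losses are crudely lumped as first-order sinks (1/s).
    All values are invented placeholders, not data from this work."""
    alpha, dx, dt, T_amb = 4e-7, 1e-4, 0.01, 20.0   # diffusivity, grid, time
    r = alpha * dt / dx ** 2                         # 0.4 -> scheme is stable
    q = [10.0 * math.exp(-i / 8.0) for i in range(n)]  # heating rate, K/s
    h_back = 0.2                                     # loss to fixture, 1/s
    T = [T_amb] * n
    for _ in range(steps):
        Tn = T[:]
        for i in range(1, n - 1):
            Tn[i] = T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1]) + q[i] * dt
        Tn[0] = (T[0] + 2 * r * (T[1] - T[0]) + q[0] * dt
                 - h_front * dt * (T[0] - T_amb))
        Tn[-1] = (T[-1] + 2 * r * (T[-2] - T[-1]) + q[-1] * dt
                  - h_back * dt * (T[-1] - T_amb))
        T = Tn
    return T

free = through_thickness_profile(h_front=0.02)   # no jet: mild free convection
cooled = through_thickness_profile(h_front=5.0)  # impinging jet switched on
print("hottest node without jet:", free.index(max(free)))
print("hottest node with jet:   ", cooled.index(max(cooled)))
```

Even in this toy model, the strong surface sink moves the hottest point away from the inductor-facing surface and into the laminate, which is the effect exploited to place the temperature maximum at the bondline.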
For transfer of the developed process variant to the continuous induction welding
process, a process model was created. It represents a single overlap joint with continuous
feed. With the help of process modeling a parameter set for welding of
CF/PEEK was determined and used for joining of specimens. In doing so, the desired
temperature field was achieved and melting of the outer layers could be prevented.

With the growing range of applications for fiber-reinforced polymer composites across the most diverse industrial sectors, the development and refinement of new, more effective processing techniques plays an important role.

At present, resin injection (liquid composite molding, LCM) is used exclusively for small to medium production volumes. Because of the very large production volumes in the automotive sector, the process is currently of limited interest there. Great efforts are therefore being made to make liquid composite molding more attractive, particularly for components that are currently produced by the prepreg process. Reducing the comparatively long cycle time plays a central role here. The duration of a cycle is determined mainly by the preparation and production of the reinforcement structure (preform) and by the loading of the mold. This so-called preform technology therefore offers very large development potential, with the goal of producing reinforcement structures that require no rework after injection. Such structures are also referred to as "net-shape, ready-to-impregnate" preforms. The techniques required for this originate mainly from the textile industry, e.g., direct preforming, sewing, or bonding (binder technology).
The objective of this dissertation is to investigate the possibilities of sewing technology for the production of preforms. To this end, the different seam and joint types are examined with regard to their use in preform technology: fixing and positioning seams, joining seams, and assembly seams.
Within this work, a stiffening structure was first developed and produced in a study on net-shape preform technology. This structure illustrates the possibilities and fields of application of sewing in preform technology and also demonstrates a multi-stage preform manufacturing process. Furthermore, the study shows that a highly automated process with continuous quality control could be realized.

As a further step, a process for producing a three-dimensional preform was developed that permits the use of various thermoplastic, low-melting sewing threads, thereby combining the advantages of sewing and binder technology. In addition, the dimensionally stable, ready-to-impregnate preform structure considerably simplified the loading of the mold. Quantitative measurement methods were developed to determine the mechanical properties of the preforms, from which the influences of orientation and stitch density could subsequently be determined. In addition, the following three fundamental properties were investigated for the different seam types: the specific bending stiffness, the so-called spring-back angle, and the restoring force after thermoforming.

To complement this, further investigations were carried out on the material properties of sewing threads that can be used in three-dimensional preform technology. Besides a low melting temperature, the complete dissolvability of the threads in unsaturated polyester and epoxy resins is particularly important: because the threads dissolve completely in the matrix, the stitch holes can close again completely, which reduces the influence of such stitch holes on the mechanical properties of the composite. Based on these investigations, two polymeric sewing threads were finally judged to be promising; they exhibit a melting temperature below 100 °C and good solubility, especially in the resin system RTM 6.
In preform technology, seams are used not only as positioning or assembly seams but can also serve as reinforcement elements within a structure, so-called reinforcement seams. The purpose of such a seam is the interlaminar reinforcement of monolithic or sandwich structures. They can also be used to fix metallic functional elements (inserts) in the composite. These possibilities were successfully investigated within this work. In static tensile tests, the sewn-in load introduction elements exhibited an approximately 200 % higher maximum tensile force than corresponding elements (BigHead®) that were not fixed by a seam.

Further investigations showed that a double seam does not proportionally double the maximum attainable tensile force. The reason lies in partial damage to the thread of the first seam, caused by the needle penetrating the already existing holes during repeated passes. The greatest reinforcement effect was finally achieved by combining interlaminar embedding with sewing of the insert. In this case, delamination, as occurs with merely interlaminarly embedded inserts, can be prevented.
In addition, static shear tests were carried out to investigate the failure mode under this load case as well. It turned out that the insert failed, not the seams. Because the insert fractured in both the tensile and the shear tests, an optimized insert was developed in a further step. Its base was modified in such a way that the maximum failure load of the sewing thread could be determined. It was found that glass, carbon, and aramid fibers are only conditionally suitable as reinforcement threads for fixing inserts. In contrast, polyester threads are well suited as a sufficient reinforcement; further advantages of polyester thread are its lower cost and good sewability.

Subsequently, such a joint between the insert and a fiber-reinforced composite was simulated using the finite element method (FEM). The simulation results agreed well with those of the static tensile tests on the further developed insert.
Owing to the electrical conductivity of carbon fibers, threads of this material can also be used as sensors for monitoring a structure or a joint. Investigations were carried out on this as well. Damage to the fibers could be inferred from the change in electrical resistance; thus, not only the existence of damage but also its approximate location can be determined. The investigations therefore showed that carbon fibers can serve not only as reinforcement but also as a monitoring sensor for an embedded insert.
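The resistance-based monitoring described above can be sketched as a simple threshold check over several sensor threads at known positions: a thread whose resistance has risen markedly relative to its baseline indicates fiber damage near that thread. The layout and all resistance values below are invented, not measurements from this work:

```python
def locate_damage(baseline, measured, threshold=0.05):
    """Flag damaged carbon-fiber sensor threads from resistance increase.

    baseline/measured: {thread_position_mm: resistance_ohm}. A thread whose
    resistance rose by more than `threshold` (fractional) is flagged; its
    known position approximates the damage location. All values invented.
    """
    damaged = []
    for pos, r0 in baseline.items():
        rel = (measured[pos] - r0) / r0  # relative resistance change
        if rel > threshold:
            damaged.append((pos, rel))
    return sorted(damaged)

# Hypothetical stitching threads at known positions (mm) around an insert
baseline = {0: 12.0, 10: 11.8, 20: 12.1, 30: 11.9}
measured = {0: 12.1, 10: 14.9, 20: 16.3, 30: 12.0}
for pos, rel in locate_damage(baseline, measured):
    print(f"thread at {pos} mm: +{rel * 100:.0f} % resistance -> damage nearby")
```

Because broken filaments remove parallel conduction paths, the resistance rise grows with the extent of fiber damage, so the pattern of flagged threads bounds both the existence and the approximate location of the damage.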
Across all these investigations, the great and promising potential of sewing technology for the production of preform components was demonstrated, and an insight was given into some of its many possible applications.

Unidirectional (UD) composites are the most competitive materials for the production of high-end structures. Their field of application ranges from aerospace to the automotive and general industry sectors. Typical examples of components made of unidirectionally reinforced composite materials are rocket motor cases, drive shafts, and pressure vessels for hydrogen storage. Filament winding, pultrusion, and tape placement are processes suitable for manufacturing with UD semi-finished products. The demand for parts made of UD composites has been increasing constantly in recent years. A key factor for the success of this technology is the improvement of the manufacturing procedure.
Impregnation is one of the most important steps in the manufacturing process. During
this step the dry continuous fibers are combined with the liquid matrix in order to create
a fully impregnated semi-finished product. The properties of the impregnated roving
have a major effect on the laminate quality, and the efficient processing of the
liquid matrix has a big influence on the manufacturing costs.
The present work is related to the development of a new method for the impregnation
of carbon fiber rovings with thermoset resin. The developed impregnation unit consists
of a sinusoidal cavity without any moving parts. The unit in combination with an
automated resin mixing-dosing system allows complete wet-out of the fibers, precise
calibration of the resin fraction, and stable processing conditions.
The thesis focuses on the modeling of the impregnation process. Mathematical expressions
for the fiber compaction, the gradual increase of the roving tension, the
static pressure, the capillarity inside the filaments of the roving, and the fiber permeation
are presented, discussed, and experimentally verified. These expressions were
implemented in a modeling algorithm. The model takes into account all the relevant
material and process parameters. An experimental set-up based on the filament
winding process was used for the validation of the model. Trials under different conditions were performed, and the results showed that the model can accurately simulate the impregnation process. The good degree of impregnation of the wound samples confirmed the efficiency of the developed impregnation unit. A techno-economic analysis showed that the developed system reduces manufacturing costs and increases productivity.
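The permeation part of such an impregnation model can be illustrated with the textbook one-dimensional Darcy estimate of the wet-out time. This is a simplified stand-in for the full model of the thesis (which additionally couples fiber compaction, roving tension, static pressure, and capillarity), and all numbers are invented round values:

```python
def impregnation_time(mu, h, K, p_applied, p_capillary, porosity):
    """Time for the resin front to cross a roving of thickness h.

    One-dimensional Darcy flow under a constant driving pressure:
        t = mu * porosity * h**2 / (2 * K * (p_applied + p_capillary))
    mu: resin viscosity (Pa*s), K: transverse permeability (m^2),
    pressures in Pa. A textbook estimate, not the thesis model itself.
    """
    return mu * porosity * h ** 2 / (2 * K * (p_applied + p_capillary))

# Illustrative round numbers: epoxy at 0.5 Pa*s, 0.3 mm roving thickness,
# transverse permeability 1e-13 m^2, 2 bar applied + 0.1 bar capillary pressure
t = impregnation_time(mu=0.5, h=0.3e-3, K=1e-13,
                      p_applied=2e5, p_capillary=1e4, porosity=0.4)
print(f"impregnation time ~ {t:.2f} s")
```

The quadratic dependence on thickness and the inverse dependence on pressure are what make the sinusoidal cavity attractive: by raising the local pressure on a thin, spread-out roving, full wet-out is reached within the short residence time of a continuous process.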

Epoxy resins have achieved acceptance as adhesives, coatings, and potting compounds, but their main application is as a matrix for reinforced composites. However, their usefulness in this field is still limited by their brittle nature. Several approaches have been studied to increase the toughness of epoxy composites, of which the most successful is the modification of the polymer matrix with a second, toughening phase.

Resin Transfer Molding (RTM) is one of the most important technologies for manufacturing fiber reinforced composites. In the last decade it has experienced new impetus, owing to its suitability for producing large-surface composites with good technical properties at relatively low cost.
This research work focuses on the development of novel modified epoxy matrices,
with enhanced mechanical and thermal properties, suitable to be processed by resin
transfer molding technology, to manufacture Glass Fiber Reinforced Composites
(GFRC’s) with improved performance in comparison to the commercially available
ones.
In the first stage of the project, a neat epoxy resin (EP) was modified using two different nano-sized ceramics, silicon dioxide (SiO2) and zirconium dioxide (ZrO2), and micro-sized particles of silicone rubber (SR) as a second filler. Series of nanocomposite and hybrid modified epoxy resins were obtained by systematic variation of the filler contents. The rheology and curing behavior of the modified epoxy resins were determined in order to assess their suitability for processing by RTM. The resulting matrices were extensively characterized, qualitatively and quantitatively, to pinpoint the effect of each filler on the polymer properties.
It was shown that the nanoparticles confer better mechanical properties on the epoxy resin, including modulus and toughness. It was possible to improve the tensile modulus and toughness of the epoxy matrix simultaneously by more than 30% and 50%, respectively, using only 8 vol.% nano-SiO2 as filler. A similar performance was obtained with nanocomposites containing zirconia: the epoxy matrix modified with 8 vol.% ZrO2 showed tensile modulus and toughness improvements of up to 36% and 45%, respectively, relative to EP.
On the other hand, the addition of silicone rubber to EP and to the nanocomposites results in superior toughness but has a slightly negative effect on modulus and strength. The addition of 3 vol.% SR to the neat epoxy and the nanocomposites increases their toughness by a factor of 1.5 to 2.5, but also reduces their tensile modulus and strength by 5-10%. Therefore, when the right proportions of nanoceramic and rubber were added to the epoxy resin, hybrid epoxy matrices were obtained whose fracture toughness was three times higher than that of EP, with a modulus improved by up to 20%.
Extensive investigations were carried out to identify the structural mechanisms responsible for these improvements. It was found that each type of filler induces specific energy-dissipating mechanisms during mechanical loading and fracture, which are closely related to its nature, its morphology and, of course, its bonding with the epoxy matrix. When both nanoceramic and silicone rubber are involved in the epoxy formulation, a superposition of their corresponding energy release mechanisms is generated, which provides the matrix with an unusual balance of properties.
Glass fiber reinforced RTM plates were produced from the modified matrices. The structure of the obtained composites was analyzed microscopically to determine their impregnation quality. In all cases, composites with no structural defects (i.e. voids, delaminations) and a good surface finish were achieved. The composites were also characterized thoroughly. As expected, the final performance of the GFRCs is strongly determined by the matrix properties. Thus, the enhancement achieved in the epoxy matrices translates into better macroscopic GFRC properties: composites with up to 15% enhanced strength and up to 50% improved toughness were obtained from the modified epoxy matrices.

The aim of this study is to describe consolidation in the thermoplastic tape placement process in order to obtain high-quality structures, making the process viable for automotive and aerospace applications. The major barrier in this technique is the very short residence time of the material under the consolidation roller, which limits complete polymer diffusion in the bonded region. Hence, an investigation was performed to find the optimal manufacturing parameters through extensive material, process and product testing and through process simulation.
The temperature distribution and convective heat transfer under the hot gas torch are mapped out experimentally. The bonding process inside the laminate is the combined effect of the development of intimate contact Dic between layers (tapes) and the resulting polymer diffusion Dh at the contacted sections. Three energy levels are identified based on the combinations of process velocity and hot gas flow. For the low-energy parameter combinations, the energy input to the incoming tape and substrate material is limited and results in incomplete intimate contact, which restricts the bonding process. On the other hand, a high energy input can increase the bonding degree Db up to 97%, but it also activates thermal degradation. It was found that the rates of polymer healing (diffusion) and polymer crosslinking follow Arrhenius laws with activation energies of 43 kJ/mol and 276 kJ/mol, respectively. Polymer crosslinking at high temperature exposure hinders the polymer diffusion process and reduces strength development. The parameter combinations at the intermediate energy level therefore provide the opportunity for continuous interlaminar strength improvement throughout the layup process.
The deformation of the tape edges is identified as the dominant factor for the laminate's transverse strength. Tape placement with a slight overlap reinforced the transverse joint by more than 10% compared to a pure matrix joint. Finally, the simulation tool developed in this research work is used to identify the existing limitations to achieving full consolidation. A parameter study shows that extended consolidation, either by means of an additional pass or by increasing the consolidation length, widens the high-strength (over 90%) bonding degree Db contour. Thus, a high lay-up velocity (up to 7 m/min) is viable for industrial production rates.
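The competition between healing and crosslinking described by the two Arrhenius laws can be illustrated with a short sketch. The abstract reports only the activation energies, so the pre-exponential factors below are placeholders and only the ratio trend with temperature is meaningful:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_rate(pre_factor, ea_j_per_mol, temp_k):
    """Arrhenius law: k = A * exp(-Ea / (R * T))."""
    return pre_factor * math.exp(-ea_j_per_mol / (R * temp_k))

EA_HEALING = 43e3     # activation energy of polymer healing (diffusion), J/mol
EA_CROSSLINK = 276e3  # activation energy of polymer crosslinking, J/mol

def crosslink_to_healing_ratio(temp_k):
    # Pre-exponential factors set to 1.0 for illustration only
    return arrhenius_rate(1.0, EA_CROSSLINK, temp_k) / arrhenius_rate(1.0, EA_HEALING, temp_k)

# Because Ea(crosslinking) >> Ea(healing), crosslinking accelerates far more
# steeply with temperature: overheating favors degradation over bonding.
ratio_low, ratio_high = crosslink_to_healing_ratio(600.0), crosslink_to_healing_ratio(700.0)
```

This reproduces the qualitative finding that intermediate energy levels bond well while high energy input triggers degradation.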

The broad engineering application of polymers and composites has become state of the art due to their numerous advantages over metals and alloys, such as light weight, easy processing and manufacturing, and acceptable mechanical properties. However, a general deficiency of thermoplastics is their relatively poor creep resistance, which impairs service durability and safety and is a significant barrier to their further application. In recent years, polymer nanocomposites have attracted increasing attention as a novel field in materials science, and many scientific questions concerning the optimal property combinations of these materials remain open. The major task of the current work is to study the improved creep resistance of thermoplastics filled with various nanoparticles and multi-walled carbon nanotubes.
A systematic study of three different nanocomposite systems was carried out by means of experimental observation, modeling, and prediction. In the first part, a nanoparticle/PA system was prepared and subjected to creep tests under different stress levels (20, 30, 40 MPa) at various temperatures (23, 50, 80 °C). The aim was
to understand the effect of different nanoparticles on creep performance. 1 vol. % of
300 nm and 21 nm TiO2 nanoparticles and nanoclay was considered. Surface
modified 21 nm TiO2 particles were also investigated. Static tensile tests were
conducted at those temperatures accordingly. It was found that creep resistance was
significantly enhanced to different degrees by the nanoparticles, without sacrificing
static tensile properties. Creep was characterized by isochronous stress-strain curves,
creep rate, and creep compliance under different temperatures and stress levels.
Orientational hardening, as well as thermally and stress-activated processes, were briefly introduced to further the understanding of the creep mechanisms of these nanocomposites. The second material system was PP filled with 1 vol.% of 300 nm and 21 nm TiO2
nanoparticles, which was used to obtain more information about the effect of particle
size on creep behavior based on another matrix material with a much lower Tg. It was found that especially the small nanoparticles could significantly improve creep resistance.
Additionally, creep lifetime under high stress levels was noticeably extended by
smaller nanoparticles. The improvement in creep resistance was attributed to a very
dense network formed by the small particles that effectively restricted the mobility of
polymer chains. Changes in the spherulite morphology and crystallinity in specimens
before and after creep tests confirmed this explanation.
In the third material system, the objective was to explore the creep behavior of PP reinforced with multi-walled carbon nanotubes. Nanotubes of short and long aspect ratio were used at 1 vol.%. It was found that the nanotubes markedly improved the creep resistance of the matrix, with reduced creep deformation and creep rate. In addition, the creep lifetime of the composites was dramatically extended, by 1,000 % at elevated temperatures. This enhancement was attributed to efficient load transfer between the carbon nanotubes and the surrounding polymer chains.
Finally, a modeling analysis and prediction of long-term creep behavior provided a comprehensive understanding of creep in the materials studied here. Both the Burgers model and the Findley power law were applied and simulated the experimental data satisfactorily. The parameter analysis based on the Burgers model provided an explanation of the structure-property relationships. Due to their intrinsic differences, the power law was more capable of predicting long-term behavior than the Burgers model. The time-temperature-stress superposition principle was adopted to predict long-term creep performance from the short-term experimental data, making it possible to forecast the future performance of the materials.
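A minimal sketch of how the Findley power law extrapolates short-term creep data; the parameter values are synthetic, purely for illustration, since the actual fitted constants are material- and condition-specific:

```python
import numpy as np

def findley_strain(t, eps0, a, n):
    """Findley power law: eps(t) = eps0 + A * t**n."""
    return eps0 + a * t**n

# Synthetic short-term creep curve with assumed parameters
eps0, a, n = 0.010, 0.002, 0.25
t_short = np.linspace(1.0, 1000.0, 200)            # seconds
eps_short = findley_strain(t_short, eps0, a, n)

# Fit A and n on log-log axes (eps0 taken as known here for simplicity)
n_fit, log_a_fit = np.polyfit(np.log(t_short), np.log(eps_short - eps0), 1)
a_fit = np.exp(log_a_fit)

# Extrapolate far beyond the measured window -- the practical appeal of
# the power law over the Burgers model for long-term prediction
eps_one_year = findley_strain(3600.0 * 24 * 365, eps0, a_fit, n_fit)
```

The same log-log fitting idea underlies the time-temperature-stress superposition used to shift short-term curves onto a long-term master curve.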

In recent years, nanofiller-reinforced polymer composites have attracted considerable
interest from numerous researchers, since they can offer unique mechanical,
electrical, optical and thermal properties compared to the conventional polymer
composites filled with micron-sized particles or short fibers. With this background, the
main objective of the present work was to investigate the various mechanical
properties of polymer matrices filled with different inorganic rigid nanofillers, including
SiO2, TiO2, Al2O3 and multi-walled carbon nanotubes (MWNT). Further, special
attention was paid to the fracture behaviours of the polymer nanocomposites. The
polymer matrices used in this work contained two types of epoxy resin (cycloaliphatic
and bisphenol-F) and two types of thermoplastic polymer (polyamide 66 and isotactic
polypropylene).
The epoxy-based nanocomposites (filled with nano-SiO2) were formed in situ by a
special sol-gel technique supplied by nanoresins AG. Excellent nanoparticle
dispersion was achieved even at rather high particle loading. The almost
homogeneously distributed nanoparticles can improve the elastic modulus and
fracture toughness (characterized by KIC and GIC) simultaneously. According to
dynamic mechanical and thermal analysis (DMTA), the nanosilica particles in epoxy
resins possessed considerable "effective volume fraction" in comparison with their
actual volume fraction, due to the presence of the interphase. Moreover, AFM and
high-resolution SEM observations also suggested that the nanosilica particles were
coated with a polymer layer and therefore a core-shell structure of particle-matrix was
expected. Furthermore, based on SEM fractography, several toughening
mechanisms were considered to be responsible for the improvement in toughness,
which included crack deflection, crack pinning/bowing and plastic deformation of
matrix induced by nanoparticles.
The PA66- or iPP-based nanocomposites were fabricated by a conventional melt-extrusion technique. Here, the nanofiller content was kept constant at 1 vol.%. Relatively good particle dispersion was found, though some small aggregates still
existed. The elastic modulus of both PA66 and iPP was moderately improved after
incorporation of the nanofillers. The fracture behaviours of these materials were
characterized by the essential work of fracture (EWF) approach. For the PA66 system, the EWF experiments were carried out over a broad temperature range
(23~120 °C). It was found that the EWF parameters exhibited high temperature
dependence. At most testing temperatures, a small amount of nanoparticles could
produce obvious toughening effects at the cost of reduction in plastic deformation of
the matrix. In light of SEM fractographs and crack opening displacement (COD) analysis, the crack blunting induced by the nanoparticles might be the major source of this toughening.
The fracture behaviours of PP filled with MWNTs were investigated over a broad temperature range (-196~80 °C) in terms of notched impact resistance. It was found that MWNTs could enhance the notched impact resistance of the PP matrix significantly once the testing temperature was higher than the glass transition temperature (Tg) of neat PP. In the relevant temperature range, the longer the MWNTs, the better the impact resistance. SEM observation revealed three failure modes of the nanotubes: nanotube bridging, debonding/pullout, and fracture. All of them contribute to the impact toughness to some degree, with nanotube fracture considered the major failure mode. In addition, the smaller spherulites induced by the nanotubes would also benefit toughness.

In recent years the consumption of polymer based composites in many engineering
fields where friction and wear are critical issues has increased enormously. Satisfying
the growing industrial needs can be successful only if the costly, labor-intensive and
time-consuming cycle of manufacturing, followed by testing, and additionally followed
by further trial-and-error compounding is reduced or even avoided. Therefore, the
objective is to get in advance as much fundamental understanding as possible of the
interaction between various composite components and that of the composite against
its counterface. Sliding wear of polymers and polymer composites involves very
complex and highly nonlinear processes. Consequently, to develop analytical models
for the simulation of the sliding wear behavior of these materials is extremely difficult
or even impossible. It necessitates simplifying hypotheses and thus compromising
accuracy. An alternative way, discussed in this work, is an artificial neural network
based modeling. The principal benefit of artificial neural networks (ANNs) is their ability
to learn patterns through a training experience from experimentally generated data
using self-organizing capabilities.
Initially, the potential of using ANNs for the prediction of friction and wear properties
of polymers and polymer composites was explored using already published friction
and wear data of 101 independent fretting wear tests of polyamide 46 (PA 46) composites.
For comparison, ANNs were also applied to model the mechanical properties
of polymer composites using a commercial data bank of 93 pairs of independent Izod
impact, tension and bending tests of polyamide 66 (PA 66) composites. Different
stages in the development of ANN models such as selection of optimum network
configuration, multi-dimensional modeling, training and testing of the network were
addressed at length. The results of neural network predictions appeared viable and
very promising for their application in the field of tribology.
A case example was subsequently presented to model the sliding friction and wear
properties of polymer composites by using newly measured datasets of polyphenylene
sulfide (PPS) matrix composites. The composites were prepared by twin-screw extrusion and injection molding. The dataset investigated was generated from
pin-on-disc testing in dry sliding conditions under various contact pressures and sliding speeds. Initially the focus was placed on exploring the possible synergistic effects
between traditional reinforcements and particulate fillers, with special emphasis on
sub-micro TiO2 particles (300 nm average diameter) and short carbon fibers (SCFs).
Subsequently, the lubricating contributions of graphite (Gr) and polytetrafluoroethylene
(PTFE) in these multiphase materials were also studied. ANNs were trained
using a conjugate gradient with Powell/Beale restarts (CGB) algorithm as well as a
variable learning rate backpropagation (GDX) algorithm in order to learn composition-property relationships between the inputs and outputs of the system. Likewise, the
influence of the operating parameters (contact pressure (p) and sliding speed (v))
was also examined. The incorporation of short carbon fibers and sub-micro TiO2
particles resulted in both a lower friction and a great improvement in the wear resistance
of the PPS composites within the low and medium pv-range. The mechanical
characterization and surface analysis after wear testing revealed that this beneficial
tribological performance could be explained by the following phenomena: (i)
enhanced mechanical properties through the inclusion of short carbon fibers, (ii)
favorable protection of the short carbon fibers by the sub-micro particles diminishing
fiber breakage and removal, (iii) self-repairing effects with the sub-micro particles, (iv)
formation of quasi-spherical transfer particles free to roll at the tribological contact.
Still, in the high pv-range stick-slip sliding motion was observed with these hybrid
materials. The adverse stick-slip behavior could be effectively eliminated through the
additional inclusion of solid lubricant reservoirs (Gr and PTFE), analogous to the
lubricants used in real ball bearings. Likewise, solid lubricants improved the wear resistance
of the multiphase system PPS/SCF/TiO2 in the high pv-range (≥ 9 MPa·m/s).
Yet, their positive effect, especially that of graphite, was limited to certain volume fractions and loading conditions. The optimum results were obtained by blending
comparatively low amounts of Gr and PTFE (≈ 5 vol.% from each additive). An introduction
of softer sub-micro particles did not bring the desired ball bearing effect and
fiber protection. The ANN prediction profiles for PPS tribo-compounds exhibited very
good or even perfect agreement with the measured results demonstrating that the
target of achieving a well trained network was reached. The results of employing a
validation test dataset indicated that the trained neural network acquired enough
generalization capability to extend what it has learned about the training patterns to
data that it has not seen before from the same knowledge domain. The optimal brain surgeon (OBS) algorithm was employed to prune the network topology by eliminating non-useful weights and biases, in order to determine whether the performance of the pruned network was better than that of the fully-connected network. Pruning resulted in accuracy gains over the fully-connected network, but induced a higher computational cost for coding the data in the required format. Within an importance
analysis, the sensitivity of the network response variable (frictional coefficient
or specific wear rate) to characteristic mechanical and thermo-mechanical input variables
was examined. The goal was to study the relationships between the diverse
input variables and the characteristic tribological parameters for a better understanding
of the sliding wear process with these materials. Finally, it was demonstrated that the well-trained networks can be applied to visualize what happens if a certain filler is introduced into a composite, or what impact the testing conditions have on the frictional coefficient and specific wear rate. In this way, they might be a helpful tool for design engineers and materials experts to explore materials and to make reasoned selection and substitution decisions early in the design phase, when they incur the least cost.
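The kind of composition-property mapping such networks learn can be sketched with a minimal feedforward net trained by plain backpropagation. The data, architecture and training loop below are illustrative stand-ins, not the thesis's CGB/GDX-trained networks or measured tribology data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: inputs mimic (filler volume fraction, contact pressure p,
# sliding speed v); the target is a synthetic "specific wear rate".
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = (0.5 - 0.3 * X[:, 0] + 0.4 * X[:, 1] * X[:, 2]).reshape(-1, 1)

# One hidden layer of tanh units, linear output
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.2

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    err = (h @ W2 + b2) - y           # prediction error
    # Gradients of the mean squared error, full batch
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)    # backpropagate through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float((((np.tanh(X @ W1 + b1) @ W2 + b2) - y) ** 2).mean())
```

Once trained, such a network can be queried over a grid of filler contents or pv-conditions to visualize predicted friction or wear trends, which is the use case described above.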

Thermoplastic polymer-polymer composites consist of a polymeric matrix and a
polymeric reinforcement. The combination of these materials offers outstanding
mechanical properties at lower weight than standard fiber reinforced materials.
Furthermore, when both polymeric components originate from the same polymer family or, ideally, from the same polymer, their degree of sustainability is higher than that of standard fiber reinforced composites.
A challenge of polymer-polymer composites is the subsequent processing of their semi-finished materials by heating techniques. Since the fibers are made of meltable thermoplastic, the reinforcing fiber structure may be lost during the heating process. The mechanical properties of an overheated polymer-polymer composite would therefore decline and could ultimately fall below those of the neat matrix. Lowering the process temperature to manage this heating challenge is not reasonable, since it would increase the cycle time. Therefore, this work pursues the adaptation of a fast and selective heating method for use with polymer-polymer composites. Inductively activatable particles, so-called susceptors, were distributed in the matrix to induce local heating in the matrix when exposed to an alternating magnetic field. In this way, the energy input to the fibers is limited.
The experimental series revealed that the induction particle heating effect is mainly governed by the susceptor material, susceptor fraction and susceptor distribution, as well as by the magnetic field strength, coupling distance and heating time. Proper heating was achieved with ferromagnetic particles at a filler content of only 5 wt.% in HDPE as well as in the corresponding polymer fiber reinforced composites. The study included an analysis of the susceptor impact on the mechanical and thermal matrix properties as well as a degradation evaluation. The susceptors were found to have only a marginal impact on the matrix properties. Furthermore, a semi-empirical simulation of the particle induction heating was applied, which served to investigate the intrinsic melting processes.
The results of both the experimental and the analytical study were successfully transferred to a thermoforming process with a polymer-polymer material that had been preheated by means of particle induction.

This thesis deals with the investigation of the absorption properties and ultrafast electronic dynamics of organic dye molecules and supramolecular photocatalysts in the gas phase. For the first time, a relatively little-known experimental method was employed intensively: time-resolved pump-probe photofragmentation spectroscopy. Combining a commercial quadrupole ion trap mass spectrometer with a femtosecond laser system makes it possible to map the intrinsic electronic properties of molecular ionic systems. In addition to the population dynamics of excited states, vibrational and rotational wave-packet dynamics were observed and documented with this method for the first time.
The first part of the thesis presents the results of investigations on selected fluorescein derivatives and a carbocyanine dye. Although these model systems initially served only to evaluate the capabilities of the experimental setup, the investigations also yielded deep insights into the electronic structure of isolated organic dyes that have not been documented in the literature to date.
The second part deals with the investigation of three supramolecular ionic systems for photocatalytic hydrogen generation. Two of these systems again served to evaluate the experimental setup. In addition to the electronic population dynamics, polarization-dependent measurements provided further insights into the electron transfer process, a central aspect of the mode of action of supramolecular catalysts. The newly gained insights were finally used to investigate a novel catalyst. It turned out that the lability of the ligand sphere at the catalytic metal center severely impairs investigations of the intact system in solution, so that meaningful results can only be obtained with a gas-phase method such as the one used here.
The experimental results are supported by quantum chemical calculations of energetic minimum structures and transition state structures, as well as by the calculation of vibrational and UV/Vis absorption spectra by means of (time-dependent) density functional theory (DFT & TD-DFT).

In this thesis we explicitly solve several portfolio optimization problems in a very realistic setting. The fundamental assumptions on the market setting are motivated by practical experience and the resulting optimal strategies are challenged in numerical simulations.
We consider an investor who wants to maximize expected utility of terminal wealth by trading in a high-dimensional financial market with one riskless asset and several stocks.
The stock returns are driven by a Brownian motion and their drift is modelled by a Gaussian random variable. We consider a partial information setting, where the drift is unknown to the investor and has to be estimated from the observable stock prices in addition to some analyst’s opinion as proposed in [CLMZ06]. The best estimate given these observations is the well known Kalman-Bucy-Filter. We then consider an innovations process to transform the partial information setting into a market with complete information and an observable Gaussian drift process.
The investor is restricted to portfolio strategies satisfying several convex constraints.
These constraints can be due to legal restrictions, fund design or clients' specifications. We cover in particular no-short-selling and no-borrowing constraints.
One popular approach to constrained portfolio optimization is the convex duality approach of Cvitanic and Karatzas. In [CK92] they introduce auxiliary stock markets with shifted market parameters and obtain a dual problem to the original portfolio optimization problem that can be easier to solve than the primal problem.
Hence we consider this duality approach and, using stochastic control methods, first solve the dual problems in the cases of logarithmic and power utility.
Here we apply a reverse separation approach in order to obtain areas where the corresponding Hamilton-Jacobi-Bellman differential equation can be solved. It turns out that these areas have a straightforward interpretation in terms of the resulting portfolio strategy. The areas differ between active and passive stocks, where active stocks are invested in, while passive stocks are not.
Afterwards we solve the auxiliary market given the optimal dual processes in a more general setting, allowing for various market settings and various dual processes.
We obtain explicit analytical formulas for the optimal portfolio policies and provide an algorithm that determines the correct formula for the optimal strategy in any case.
We also show optimality of our resulting portfolio strategies in different verification theorems.
Subsequently we challenge our theoretical results in a historical and an artificial simulation that are even closer to the real-world market than the setting we used to derive our theoretical results. Nevertheless, we obtain compelling results indicating that our optimal strategies can, in general, outperform any benchmark in a real market.
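For orientation, a worked baseline is the classical unconstrained, full-information case with logarithmic utility; this is the textbook Merton rule, not the thesis's constrained, partial-information solution. There the optimal fraction of wealth invested in the stocks is

\[
\pi_t^{*} = \left(\sigma\sigma^{\top}\right)^{-1}\left(\hat{\mu}_t - r\,\mathbf{1}\right),
\]

where \(\hat{\mu}_t\) is the (filtered) drift estimate, \(r\) the riskless rate, \(\sigma\) the volatility matrix and \(\mathbf{1}\) a vector of ones. Convex constraints and the dual shift of the auxiliary markets modify this expression on the regions of active and passive stocks.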

In this dissertation convergence of binomial trees for option pricing is investigated. The focus is on American and European put and call options. For that purpose variations of the binomial tree model are reviewed.
In the first part of the thesis we investigate the convergence behavior of the trees already known from the literature (CRR, RB, Tian and CP) for European options. The CRR and the RB tree suffer from irregular convergence, so our first aim is to find a way to obtain smooth convergence. We first show what causes these oscillations, which also helps us to improve the rate of convergence. As a result we introduce the Tian and the CP tree and prove that the order of convergence for these trees is \(O \left(\frac{1}{n} \right)\).
Afterwards we introduce the Split tree, explain its properties, prove its convergence and derive an explicit first-order error formula. In our setting, the splitting time \(t_{k} = k\Delta t\) is not fixed, i.e. it can be any time between 0 and the maturity time \(T\). This is the main difference compared to the model from the literature. Namely, we show that the good properties of the CRR tree when \(S_{0} = K\) can be preserved even without this condition (which is mainly the case). We achieve a convergence of \(O \left(n^{-\frac{3}{2}} \right)\) and typically get better results if we split the tree later.
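A plain CRR tree for a European put, as a minimal sketch of the kind of model compared in the thesis (parameter values are illustrative; the Black-Scholes price serves as the convergence reference):

```python
import math

def crr_european_put(s0, k, r, sigma, t, n):
    """Price a European put on an n-step Cox-Ross-Rubinstein binomial tree."""
    dt = t / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    q = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up-probability
    disc = math.exp(-r * dt)
    # Terminal payoffs, then backward induction through the tree
    values = [max(k - s0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    for step in range(n, 0, -1):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j]) for j in range(step)]
    return values[0]

def bs_put(s0, k, r, sigma, t):
    """Black-Scholes reference price for the European put."""
    d1 = (math.log(s0 / k) + (r + sigma**2 / 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return k * math.exp(-r * t) * phi(-d2) - s0 * phi(-d1)

# Track how the tree price approaches the continuous-time price as n grows
ref = bs_put(100, 100, 0.05, 0.2, 1.0)
errors = [abs(crr_european_put(100, 100, 0.05, 0.2, 1.0, n) - ref) for n in (50, 100, 200, 400)]
```

Plotting such errors against n for strikes off the node grid exposes the oscillations discussed above, which the Tian, CP and Split trees are designed to smooth out.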

Non–woven materials consist of many thousands of fibres laid down on a conveyor belt
under the influence of a turbulent air stream. To improve industrial processes for the
production of non–woven materials, we develop and explore novel mathematical fibre and
material models.
In Part I of this thesis we improve existing mathematical models describing the fibres on the
belt in the meltspinning process. In contrast to existing models, we include the fibre–fibre
interaction caused by the fibres’ thickness which prevents the intersection of the fibres and,
hence, results in a more accurate mathematical description. We start from a microscopic
characterisation, where each fibre is described by a stochastic functional differential
equation and include the interaction along the whole fibre path, which is described by a
delay term. As many fibres are required for the production of a non–woven material, we
consider the corresponding mean–field equation, which describes the evolution of the fibre
distribution with respect to fibre position and orientation. To analyse the particular case of
large turbulences in the air stream, we develop the diffusion approximation which yields a
distribution describing the fibre position. Considering the convergence to equilibrium on
an analytical level, as well as performing numerical experiments, gives an insight into the
influence of the novel interaction term in the equations.
In Part II of this thesis we model the industrial airlay process, which is a production method
whereby many short fibres build a three–dimensional non–woven material. We focus on
the development of a material model based on original fibre properties, machine data and
micro computer tomography. A possible linking of these models to other simulation tools,
for example virtual tensile tests, is discussed.
The models and methods presented in this thesis promise to advance the field of mathematical modelling and computational simulation of non-woven materials.

The detection and characterisation of undesired lead structures on shaft surfaces is a concern in production and quality control of rotary shaft lip-type sealing systems. The potential lead structures are generally divided into macro and micro lead based on their characteristics and formation. Macro lead measurement methods exist and are widely applied. This work describes a method to characterise micro lead on ground shaft surfaces. Micro lead is known as the deviation of main orientation of the ground micro texture from circumferential direction. Assessing the orientation of microscopic structures with arc minute accuracy with regard to circumferential direction requires exact knowledge of both the shaft’s orientation and the direction of surface texture. The shaft’s circumferential direction is found by calibration. Measuring systems and calibration procedures capable of calibrating shaft axis orientation with high accuracy and low uncertainty are described. The measuring systems employ areal-topographic measuring instruments suited for evaluating texture orientation. A dedicated evaluation scheme for texture orientation is based on the Radon transform of these topographies and parametrised for the application. Combining the calibration of circumferential direction with the evaluation of texture orientation the method enables the measurement of micro lead on ground shaft surfaces.

We introduce and investigate a product pricing model in social networks where the value a potential buyer assigns to a product is influenced by the previous buyers. The selling proceeds in discrete, synchronous rounds for some set price, and the individual values are additively altered. Whereas computing the revenue for a given price can be done in polynomial time, we show that the basic problem PPAI, i.e., whether there is a price generating a requested revenue, is weakly NP-complete. With the algorithm Frag we provide a pseudo-polynomial time algorithm checking the range of prices in intervals of common buying behavior that we call fragments. In some special cases, e.g., solely positive influences, graphs with bounded in-degree, or graphs with bounded path length, the number of fragments is polynomial. Since the run-time of Frag is polynomial in the number of fragments, the algorithm itself is polynomial for these special cases. For graphs with positive influences we show that every buyer also buys at lower prices, a property that does not hold for arbitrary graphs. Algorithm FixHighest improves the run-time on these graphs by using the above property.
Furthermore, we introduce variations on this basic model. The version of delaying the propagation of influences and the awareness of the product can be implemented in our basic model by substituting nodes and arcs with simple gadgets. In the chapter on Dynamic Product Pricing we allow price changes, thereby raising the complexity even for graphs with solely positive or negative influences. Concerning Perishable Product Pricing, i.e., the selling of products that are usable for some time and can be rebought afterward, the principal problem is computing the revenue that a given price can generate in some time horizon. In general, the problem is #P-hard and algorithm Break runs in pseudo-polynomial time. For polynomially computable revenue, we investigate once more the complexity to find the best price.
We conclude the thesis with short results in topics of Cooperative Pricing, Initial Value as Parameter, Two Product Pricing, and Bounded Additive Influence.
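
As a rough illustration of the basic model (not the Frag algorithm itself), the revenue for one fixed price can be computed by simulating the synchronous rounds directly; all names and the data layout here are illustrative assumptions.

```python
def revenue(initial_values, influence, price):
    """Simulate the synchronous selling rounds for one fixed price:
    whoever's current value reaches the price buys (once), and each
    purchase additively shifts the values of the buyer's out-neighbours."""
    values = dict(initial_values)
    bought = set()
    while True:
        new_buyers = [n for n in values
                      if n not in bought and values[n] >= price]
        if not new_buyers:
            break
        for buyer in new_buyers:
            bought.add(buyer)
            for target, delta in influence.get(buyer, []):
                values[target] += delta
    return price * len(bought)

# Toy instance: "b" only reaches the price after "a" has bought.
vals = {"a": 10, "b": 6}
infl = {"a": [("b", 3)]}
print(revenue(vals, infl, 8))  # 16: both buy at price 8
```

Checking a single price is easy; the hardness shown in the thesis lies in searching over all prices for a requested revenue.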

This thesis comprises several independent research studies on transition metal complexes as trapped ions in isolation. Electrospray Ionization (ESI) serves to transfer ions from solution into the gas phase for mass spectrometric investigations. Subsequently, a variety of experimental and theoretical methods provide fundamental insights into molecular properties of the isolated complexes: InfraRed (Multiple) Photon Dissociation (IR-(M)PD) spectroscopy provides information on binding motifs and molecular structures at cryogenic as well as room temperature. Collision Induced Dissociation (CID) serves to elucidate molecular fragmentation pathways as well as relative stabilities of the complexes at room temperature. Quantum chemical calculations via Density Functional Theory (DFT) substantiate the experimental results and deepen the fundamental insights into the molecular properties of the complexes. Magnetic couplings between metal centers in oligonuclear complexes are investigated by Broken Symmetry DFT modelling and X-Ray Magnetic Circular Dichroism (XMCD) spectroscopy.

This thesis brings together convex analysis and hyperspectral image processing.
Convex analysis is the study of convex functions and their properties.
Convex functions are important because they admit minimization by efficient algorithms
and the solution of many optimization problems, reaching far beyond the classical
image restoration problems of denoising, deblurring and inpainting,
can be formulated as the minimization of a convex objective function.
At the heart of convex analysis is the duality mapping induced within the
class of convex functions by the Fenchel transform.
In the last decades efficient optimization algorithms have been developed based
on the Fenchel transform and the concept of infimal convolution.
The infimal convolution is of similar importance in convex analysis as the
convolution in classical analysis. In particular, the infimal convolution with
scaled parabolas gives rise to the one parameter family of Moreau-Yosida envelopes,
which approximate a given function from below while preserving its minimum
value and minimizers.
The closely related proximal mapping replaces the gradient step
in a recently developed class of efficient first-order iterative minimization algorithms
for non-differentiable functions. For a finite convex function,
the proximal mapping coincides with a gradient step of its Moreau-Yosida envelope.
Efficient algorithms are needed in hyperspectral image processing,
where several hundred intensity values measured in each spatial point
give rise to large data volumes.
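
For a concrete one-dimensional example, the proximal mapping of the absolute value is soft thresholding, and the corresponding Moreau-Yosida envelope is the Huber function, which lies below \(|x|\) while sharing its minimum value and minimizer. This is a self-contained textbook sketch, not code from the thesis.

```python
def prox_abs(x, lam):
    """Proximal mapping of f(t) = |t|: soft thresholding with parameter lam."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def moreau_env_abs(x, lam):
    """Moreau-Yosida envelope of |t|: inf_y |y| + (x - y)^2 / (2 lam).
    The infimum is attained at y = prox_abs(x, lam); the result is the
    Huber function, smooth and below |x| with the same minimum at 0."""
    y = prox_abs(x, lam)
    return abs(y) + (x - y) ** 2 / (2 * lam)

print(moreau_env_abs(3.0, 1.0))  # 2.5, below |3| = 3, same minimizer at 0
```

The gradient step on this envelope coincides with the proximal step on \(|x|\), which is exactly the property exploited by the first-order algorithms mentioned above.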
In the \(\textbf{first part}\) of this thesis, we are concerned with
models and algorithms for hyperspectral unmixing.
As part of this thesis a hyperspectral imaging system was taken into operation
at the Fraunhofer ITWM Kaiserslautern to evaluate the developed algorithms on real data.
Motivated by missing-pixel defects common in current hyperspectral imaging systems,
we propose a
total variation regularized unmixing model for incomplete and noisy data
for the case when pure spectra are given.
We minimize the proposed model by a primal-dual algorithm based on the
proximal mapping and the Fenchel transform.
To solve the unmixing problem when only a library of pure spectra is provided,
we study a modification which incorporates a sparsity regularizer into the model.
We end the first part with the convergence analysis for a multiplicative
algorithm derived by optimization transfer.
The proposed algorithm extends well-known multiplicative update rules
for minimizing the Kullback-Leibler divergence
to solve a hyperspectral unmixing model in the case
when no prior knowledge of pure spectra is given.
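
The flavour of such multiplicative updates can be sketched with the classical Lee-Seung rule for the abundance factor H in V ≈ WH under the Kullback-Leibler divergence. This is a simplified illustration of the well-known rule the thesis builds on, not the extended algorithm itself; all names are illustrative.

```python
def matmul(A, B):
    """Plain nested-list matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def kl_update_H(V, W, H, eps=1e-12):
    """One multiplicative Lee-Seung step for H in V ~ W H under the
    Kullback-Leibler divergence; the update never increases the objective
    and preserves nonnegativity by construction."""
    WH = matmul(W, H)
    m, r, n = len(V), len(H), len(H[0])
    new_H = [[0.0] * n for _ in range(r)]
    for k in range(r):
        col_sum = sum(W[i][k] for i in range(m)) + eps
        for j in range(n):
            num = sum(W[i][k] * V[i][j] / (WH[i][j] + eps) for i in range(m))
            new_H[k][j] = H[k][j] * num / col_sum
    return new_H

# Tiny example: with W the identity, one step from a uniform H recovers V.
V = [[1.0, 0.0], [0.0, 1.0]]
W = [[1.0, 0.0], [0.0, 1.0]]
H = [[0.5, 0.5], [0.5, 0.5]]
H = kl_update_H(V, W, H)
```

Multiplicative updates like this one avoid explicit step-size tuning, which is part of their appeal for large hyperspectral data.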
In the \(\textbf{second part}\) of this thesis, we study the properties of Moreau-Yosida envelopes,
first for functions defined on Hadamard manifolds, which are (possibly) infinite-dimensional
Riemannian manifolds of nonpositive curvature,
and then for functions defined on Hadamard spaces.
In particular we extend to infinite-dimensional Riemannian manifolds an expression
for the gradient of the Moreau-Yosida envelope in terms of the proximal mapping.
With the help of this expression we show that a sequence of functions
converges to a given limit function in the sense of Mosco
if the corresponding Moreau-Yosida envelopes converge pointwise at all scales.
Finally we extend this result to the more general setting of Hadamard spaces.
As the reverse implication is already known, this unites two definitions of Mosco convergence
on Hadamard spaces, which have both been used in the literature,
and whose equivalence had not previously been established.

Divide-and-Conquer is a common strategy to manage the complexity of system design and verification. In the context of System-on-Chip (SoC) design verification, an SoC system is decomposed into several modules and every module is separately verified. Usually an SoC module is reactive: it interacts with its environmental modules. This interaction is normally modeled by environment constraints, which are applied to verify the SoC module. Environment constraints are assumed to be always true when verifying the individual modules of a system. Therefore the correctness of environment constraints is very important for module verification.
Environment constraints are also very important for coverage analysis. Coverage analysis in formal verification measures whether or not the property set fully describes the functional behavior of the design under verification (DuV). If a set of properties describes every functional behavior of a DuV, the set of properties is called complete. To verify the correctness of environment constraints, Assume-Guarantee Reasoning rules can be employed.
However, state-of-the-art assume-guarantee reasoning rules cannot be applied to environment constraints specified using an industrial standard property language such as SystemVerilog Assertions (SVA).
This thesis proposes a new assume-guarantee reasoning rule that can be applied to environment constraints specified by using a property language such as SVA. In addition, this thesis proposes two efficient plausibility checks for constraints that can be conducted without a concrete implementation of the considered environment.
Furthermore, this thesis provides a compositional reasoning framework determining that a system is completely verified if all modules are verified with Complete Interval Property Checking (C-IPC) under environment constraints.
At present, there is a trend that more of the functionality in SoCs is shifted from the hardware to the hardware-dependent software (HWDS), which is a crucial component in an SoC, since other software layers, such as the operating system, are built on it. Therefore there is an increasing need to apply formal verification to HWDS, especially for safety-critical systems.
The interactions between HW and HWDS are often reactive, and happen in a temporal order. This requires new property languages to specify the reactive behavior at the HW and SW interfaces.
This thesis introduces a new property language, called Reactive Software Property Language (RSPL), to specify the reactive interactions between the HW and the HWDS.
Furthermore, a method for checking the completeness of software properties, which are specified by using RSPL, is presented in this thesis. This method is motivated by the approach of checking the completeness of hardware properties.

In this thesis, we consider a problem from modular representation theory of finite groups. Lluís Puig asked the question whether the order of the defect groups of a block \( B \) of the group algebra of a given finite group \( G \) can always be bounded in terms of the order of the vertices of an arbitrary simple module lying in \( B \).
In characteristic \( 2 \), there are examples showing that this is not possible in general, whereas in odd characteristic, no such examples are known. For instance, it is known that the answer to Puig's question is positive in case that \( G \) is a symmetric group, by work of Danz, Külshammer, and Puig.
Motivated by this, we study the cases where \( G \) is a finite classical group in non-defining characteristic or one of the finite groups \( G_2(q) \) or \( ³D_4(q) \) of Lie type, again in non-defining characteristic. Here, we generalize Puig's original question by replacing the vertices occurring in his question by arbitrary self-centralizing subgroups of the defect groups. We derive positive and negative answers to this generalized question.
In addition to that, we determine the vertices of the unipotent simple \( GL_2(q) \)-module labeled by the partition \( (1,1) \) in characteristic \( 2 \). This is done using a method known as Brauer construction.

The development of autonomous mobile robots is a major topic of current research. As these robots must be able to react to changing environments and avoid collisions, including with moving obstacles, the fulfilment of safety requirements is an important aspect. Behaviour-based systems (BBS) have proven to meet several of the properties required for these kinds of robots, such as reactivity, extensibility and re-usability of individual components. BBS consist of a number of behavioural components that individually realise simple tasks. Their interconnection allows complex robot behaviour to be achieved, which implies that correct
connections are crucial. The resulting networks can get very large, making them difficult to verify. This dissertation presents a novel concept for the analysis and verification of complex autonomous robot systems controlled by behaviour-based software architectures, with special focus on the integration of environmental aspects into the processes.
Several analysis techniques have been investigated and adapted to the special requirements of BBS. These include a structural analysis, which is used to find constraint violations and faults in the network layout. Fault tree analysis is applied to identify root causes of hazards and the relationship of system events. For this, a technique to map the behaviour-based control network to the structure of a fault tree has been developed. Testing and data analysis are used for the detection of failures and their root causes. Here, a new concept that identifies patterns in data recorded during test runs has been introduced.
All of these methods cannot guarantee failure-free and safe robot behaviour and can never prove the absence of failures. Therefore, model checking, a formal verification technique that proves a property to be correct for the given system, has been chosen to complement the set of analysis techniques. A novel concept for the integration of environmental influences into the model checking process is proposed. Environmental situations and the sensor processing chain are represented as synchronised automata, similar to the modelling of the behavioural network. Tools supporting the whole verification process, including the creation of formal queries in its environment, have been developed.
During the verification of large behavioural networks, the scalability of the model checking approach emerges as a big problem. Several approaches that deal with this problem have been investigated, and the selection of slicing and abstraction methods has been justified. A concept for the application of these methods is provided that reduces the behavioural network to the relevant parts before the actual verification process.
All techniques have been applied to the behaviour-based control system of the autonomous outdoor robot RAVON. Its complex network with more than 400 components allows for demonstrating the soundness of the presented concepts. The set of different techniques provides a fundamental basis for a comprehensive analysis and verification of BBS acting in changing environments.

This thesis is concerned with different null-models that are used in network analysis. Whenever it is of interest whether a real-world graph is exceptional with regard to a particular measure, graphs drawn from a null-model can serve as a baseline for comparison. By analyzing an appropriate null-model, a researcher can find out whether the result of the measure on the real-world graph is exceptional or not.
Deciding which null-model to use is hard, and sometimes the difference between the null-models is not even considered. Several results are presented in this thesis: First, undirected graphs are analyzed based on simple global measures. The results for these measures indicate that it does not matter much which null-model is used; thus, the null-model with the fastest algorithm may be chosen. Next, local measures are investigated. The fastest algorithm proves to be the most complicated to analyze. The model includes multigraphs, which do not meet the conditions of all the measures; thus, the measures themselves have to be altered to handle multigraphs as well. After careful consideration, the conditions are met and the analysis shows that the fastest is not always the best.
The same applies for directed graphs, as is shown in the last part. There, another, more complex measure on graphs is introduced. I continue testing the applicability of several null-models; in the end, a set of equations proves to be fast and good enough as long as conditions regarding the degree sequence are met.
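
A standard way to sample from such a null-model while keeping the degree sequence fixed is the double-edge-swap (rewiring) chain; below is a minimal sketch for simple undirected graphs. It is illustrative of the general technique, not one of the specific null-models compared in the thesis.

```python
import random

def double_edge_swap(edges, n_swaps, seed=0):
    """Degree-preserving randomisation: repeatedly pick two edges (a,b),(c,d)
    and rewire them to (a,d),(c,b), skipping any swap that would create a
    self-loop or a multi-edge.  edges: list of 2-tuples, simple graph."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = set(frozenset(e) for e in edges)
    for _ in range(n_swaps):
        i, j = rng.randrange(len(edges)), rng.randrange(len(edges))
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:            # would create a loop
            continue
        if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
            continue                          # would create a multi-edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
    return edges

def degrees(edges):
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return deg

# A 6-cycle keeps its all-degree-2 sequence under any number of swaps.
cycle = [(i, (i + 1) % 6) for i in range(6)]
randomised = double_edge_swap(cycle, 100)
print(degrees(randomised) == degrees(cycle))  # True
```

Each accepted swap changes the edge set while leaving every vertex degree untouched, which is exactly the invariant these null-models require.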

The main theme of this thesis is about Graph Coloring Applications and Defining Sets in Graph Theory.
As in the case of block designs, finding defining sets seems to be a difficult problem, and there is no general conclusion. Hence we confine ourselves here to some special types of graphs like bipartite graphs, complete graphs, etc.
In this work, four new concepts of defining sets are introduced:
• Defining sets for perfect (maximum) matchings
• Defining sets for independent sets
• Defining sets for edge colorings
• Defining set for maximal (maximum) clique
Furthermore, some algorithms to find and construct the defining sets are introduced. A review of some known kinds of defining sets in graph theory is also incorporated. In Chapter 2, the basic definitions and some relevant notations used in this work are introduced.
Chapter 3 discusses maximum and perfect matchings and a new concept of a defining set for perfect matchings.
Different kinds of graph colorings and their applications are the subject of Chapter 4.
Chapter 5 deals with defining sets in graph coloring. New results are discussed along with already existing research results, and an algorithm is introduced that makes it possible to determine a defining set of a graph coloring.
In Chapter 6, cliques are discussed, and an algorithm for the determination of cliques using their defining sets is presented. Several examples are included.

We discuss the portfolio selection problem of an investor/portfolio manager in an arbitrage-free financial market where a money market account, coupon bonds and a stock are traded continuously. We allow for stochastic interest rates and in particular consider one- and two-factor Vasicek models for the instantaneous short rates. In both cases we consider a complete and an incomplete market setting by adding a suitable number of bonds.
The goal of an investor is to find a portfolio which maximizes expected utility
from terminal wealth under budget and present expected short-fall (PESF) risk
constraints. We analyze this portfolio optimization problem in both complete and
incomplete financial markets in three different cases: (a) when the PESF risk is
minimum, (b) when the PESF risk is between minimum and maximum and (c) without risk constraints. (a) corresponds to the portfolio insurer problem, in (b) the risk constraint is binding, i.e., it is satisfied with equality, and (c) corresponds
to the unconstrained Merton investment.
In all cases we find the optimal terminal wealth and portfolio process using the
martingale method and Malliavin calculus, respectively. In particular, in the incomplete market settings we solve the dual problem explicitly. We compare the
optimal terminal wealth in the cases mentioned using numerical examples. Without
risk constraints, we further compare the investment strategies for complete
and incomplete market numerically.
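
For orientation, in the unconstrained case (c) with a constant short rate \(r\), constant stock drift \(\mu\) and volatility \(\sigma\), and power utility \(U(x) = x^{\gamma}/\gamma\) with \(\gamma < 1\), the classical Merton solution invests the constant wealth fraction

```latex
\[
  \pi^{*} \;=\; \frac{\mu - r}{(1-\gamma)\,\sigma^{2}}
\]
```

in the stock. This textbook benchmark is a useful sanity check; the thesis treats the richer stochastic-interest-rate and risk-constrained settings, where the optimal strategy is no longer constant.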

In change-point analysis the point of interest is to decide if the observations follow one model
or if there is at least one time-point where the model has changed. This results in two subfields: the testing of a change and the estimation of the time of change. This thesis considers
both parts but with the restriction of testing and estimating for at most one change-point.
A well known example is based on independent observations having one change in the mean.
Based on the likelihood ratio test a test statistic with an asymptotic Gumbel distribution was
derived for this model. As it is well known that the corresponding convergence rate is
very slow, modifications of the test using a weight function were considered. These tests
have better performance. We focus on this class of test statistics.
The first part gives a detailed introduction to the techniques for analysing test statistics and
estimators. Therefore we consider the multivariate mean change model and focus on the effects
of the weight function. In the case of change-point estimators we can distinguish between
the assumption of a fixed size of change (fixed alternative) and the assumption that the size
of the change converging to 0 (local alternative). In particular, the fixed case is rarely
analysed in the literature. We show how to pass from the proof for the fixed alternative to the
proof for the local alternative. Finally, we give a simulation study for heavy-tailed multivariate
observations.
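
The weighted test statistics discussed here share a generic shape for a univariate mean change: partial sums are compared with their expected share of the total, normalised by a weight that boosts sensitivity near the sample boundaries. The sketch below, with weight \(w(t) = (t(1-t))^{\gamma}\), is a schematic illustration of the class, not the exact statistic analysed in the thesis.

```python
import math

def weighted_cusum(xs, gamma=0.0):
    """Max over k of |S_k - (k/n) S_n| / (sqrt(n) * w(k/n)), where
    w(t) = (t (1 - t))^gamma; gamma = 0 gives the unweighted CUSUM."""
    n = len(xs)
    total = sum(xs)
    best, s = 0.0, 0.0
    for k in range(1, n):
        s += xs[k - 1]
        t = k / n
        w = (t * (1 - t)) ** gamma
        stat = abs(s - t * total) / (math.sqrt(n) * w)
        best = max(best, stat)
    return best

# A clear mean shift in the middle produces a much larger statistic.
flat = [0.0] * 40
shift = [0.0] * 20 + [5.0] * 20
print(weighted_cusum(shift) > weighted_cusum(flat))  # True
```

Choosing \(\gamma > 0\) down-weights the sample boundaries less, improving power against changes occurring early or late in the observation period.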
The main part of this thesis focuses on two points. First, analysing test statistics and, secondly,
analysing the corresponding change-point estimators. In both cases, we first consider a
change in the mean for independent observations but relaxing the moment condition. Based on
a robust estimator for the mean, we derive a new type of change-point test having a randomized
weight function. Secondly, we analyse non-linear autoregressive models with unknown
regression function. Based on neural networks, test statistics and estimators are derived for
correctly specified as well as for misspecified situations. This part extends the literature as
we analyse test statistics and estimators not only based on the sample residuals. In both
sections, the one on tests and the one on change-point estimators, we end by giving
regularity conditions on the model as well as on the parameter estimator.
Finally, a simulation study for the neural-network-based test and estimator is
given. We discuss the behaviour under correct specification and misspecification, and apply the
neural-network-based test and estimator to two data sets.

In current practices of system-on-chip (SoC) design a trend can be observed to integrate more and more low-level software components into the system hardware at different levels of granularity. The implementation of important control functions and communication structures is frequently shifted from the SoC’s hardware into its firmware. As a result, the tight coupling of hardware and software at a low level of granularity raises substantial verification challenges since the conventional practice of verifying hardware and software independently is no longer sufficient. This calls for new methods for verification based on a joint analysis of hardware and software.
This thesis proposes hardware-dependent models of low-level software for performing formal verification. The proposed models are conceived to represent the software integrated with its hardware environment according to current SoC design practices. Two hardware/software integration scenarios are addressed in this thesis, namely, speed-independent communication of the processor with its hardware periphery and cycle-accurate integration of firmware into an SoC module. For speed-independent hardware/software integration, an approach for equivalence checking of hardware-dependent software is proposed and evaluated. For the case of cycle-accurate hardware/software integration, a model for hardware/software co-verification has been developed and experimentally evaluated by applying it to property checking.

The main goal of this work was to study the applicability of a polymer film heat exchanger concept to applications in the chemical industry, such as the condensation of organic solvents. The polymer film heat exchanger investigated is a plate heat exchanger with very thin (0.025 – 0.1 mm) plates or films, which separate the fluids and enable the heat transfer. After a successful application of this concept to seawater desalination in a previous work, a further step is its use in chemical engineering, where achieving good chemical resistance of the polymers in aggressive fluids is the challenge.
Two approaches were pursued in this work. The first one was experimental and included the study of the chemical and mechanical resistance of preselected films made of polymer materials such as polyimide (PI), polyethylene terephthalate (PET) and polytetrafluoroethylene (PTFE). To simulate realistic operating conditions in a heat exchanger, the films were exposed to combined thermal (up to 90°C) and mechanical pressure loads (4-6 bar) in permanent contact with the relevant organic solvents, such as toluene, hexane, heptane and tetrahydrofuran (THF). Furthermore, a lab-scale apparatus and a full-scale demonstrator were manufactured in cooperation with two industrial partners. These were used for the investigation of the heat transfer performance in operating modes with and without phase change.
In addition to the experimental work, a coupled finite element-computational fluid dynamics (FEM-CFD) model was developed, based on fluid-structure interaction (FSI). Two major tasks had to be solved here. The first one was the modelling of the condensation process, based on available mathematical models and energy balances. The second one was the consideration of the partially reversible deformation of the film during operation. This deformation changes the geometry of the fluid channels and thus also influences the overall performance of the apparatus, which is why the coupled FEM-CFD model was required.
During the experimental study of the chemical resistance of the films, the PTFE film showed the best performance and hence can be used with all four tested solvents. For the polyimide film, failures were observed during exposure to THF, and the PET film can only be used with water and hexane. With the lab-scale heat exchanger and the full-scale demonstrator, competitive overall heat transfer coefficients between 270 W/m²K and 700 W/m²K could be reached for the liquid-liquid (water-water, water-hexane) operation mode without phase change. For the condensation process, overall heat transfer coefficients of up to 1700 W/m²K could be obtained.
The numerical approach led to a well-functioning coupled model on a very small scale (1 cm²). An upscale, however, failed due to the enormous hardware resources required for the simulation of the entire full-scale demonstrator. The main reason for this is the very low thickness of the films, which leads to the tiny mesh element sizes (<0.05 mm) necessary to model the deformation of the film. The modelling of the liquid-liquid heat transfer provided acceptable accuracy (approx. 10%), but at very low flow rates the deviations were higher (over 30%). The results of the condensation modelling were ambivalent. On the one hand, a physically plausible model was developed that could map the entire condensation process. On the other hand, the corresponding energy balance revealed major inaccuracies, and hence the model could not be used for the determination of the overall heat transfer; this shows the current limits of the FEM-CFD approach.

This dissertation describes an indoor localization system based on oscillating magnetic fields and the underlying processing architecture. The system consists of several fixed anchor points, generating the magnetic fields (transmitter), and wearable magnetic field measurement units, whose position should be determined (receiver). The system is evaluated in different environments and application areas. Additionally, various fields of application are discussed and assessed in ubiquitous and pervasive computing and Ambient Assisted Living. The fusion of magnetic field-based distance information and positions derived from LIDAR distance measurements is described and evaluated.
The system architecture consists of three layers, a physical layer, a layer for position and distance estimation between a magnetic field transmitter and a receiver, and a layer which uses several measurements to different transmitters to estimate the overall position of a wearable measurement unit.
Each layer covers different aspects which have to be taken care of when magnetic field information is processed. Especially the properties of the generated magnetic field information are considered in the processing algorithms.
The physical layer covers the magnetic field generation and magnetic-field-based information transfer, the synchronization of a transmitter and the receivers, and the description of the locally measured magnetic fields on the receiver side. After a transfer of this information to a central processing unit, the hardware-specific signal levels are transformed to the levels of the theoretical magnetic field models. The values are then used to estimate candidate positions and distances. Due to symmetry effects of the magnetic fields, it is only possible to reduce the receiver position to 8 points around the transmitter (one position in each octant of the coordinate system). The determined positions have a mean error of 108 cm, and the average error of the distance is 40 cm.
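
The inversion from a field magnitude to a distance can be illustrated with the far-field dipole approximation, where the on-axis field magnitude falls off with the cube of the distance. This is a simplified sketch of the principle only; the actual system evaluates full vector field models together with a calibration.

```python
def dipole_distance(B_measured, B_ref, r_ref):
    """On-axis dipole field decays as 1/r^3, so a magnitude measured
    relative to a calibration point at distance r_ref inverts to
    r = r_ref * (B_ref / B_measured)^(1/3)."""
    return r_ref * (B_ref / B_measured) ** (1.0 / 3.0)

# The field drops to 1/8 of the calibration value at twice the distance.
print(dipole_distance(1.0, 8.0, 1.0))  # ~2.0
```

The cubic decay also explains why magnetic systems are short-ranged but largely immune to the multipath effects that plague RF-based localization.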
On top of this, the distance and position information with respect to different transmitters is fused; this covers clock synchronization of transmitters, triggering and scheduling sequences, and distance- and position-based localization and tracking algorithms. The magnetic-field-based indoor localization system has been evaluated in different applications and environments; the mean position error is 60 cm to 70 cm depending on the environment. A comparison against an RF-based indoor localization system shows the robustness of magnetic fields against RF shadows caused by big metal objects.
We additionally present algorithms for region-of-interest detection, working on raw magnetic field information as well as transformed position and distance information. Setups in larger areas can distinguish regions which are further than 50 cm apart; small-scale coil setups (3 transmitters in 2 m³) allow regions below 20 cm to be resolved.
In the end, we describe a fusion algorithm for a wearable localization system based on 4 LIDAR distance measurement units and magnetic field-based distance estimation. The magnetic field indoor localization system provides distance proximity information which is used to resolve ambiguous position estimates of the LIDAR system. In a room (8m × 10m), we achieve a mean error of 8 cm.

A Multi-Sensor Intelligent Assistance System for Driver Status Monitoring and Intention Prediction
(2017)

Advanced sensing systems, sophisticated algorithms, and increasing computational resources continuously enhance advanced driver assistance systems (ADAS). To date, although some vehicle-based approaches to driver fatigue/drowsiness detection have been realized and deployed, objectively and reliably detecting the fatigue/drowsiness state of the driver without compromising the driving experience still remains challenging. In general, the choice of input sensory information is limited in the state-of-the-art work. On the other hand, smart and safe driving, as representative future trends in the automotive industry worldwide, increasingly demands new dimensions of human-vehicle interaction, as well as the associated behavioral and bioinformatical data perception of the driver. Thus, the goal of this research work is to investigate the employment of general and custom 3D-CMOS sensing concepts for driver status monitoring, and to explore the improvement gained by merging/fusing this information with other salient customized information sources for robustness/reliability. This thesis presents an effective multi-sensor approach with novel features to driver status monitoring and intention prediction aimed at drowsiness detection, based on a multi-sensor intelligent assistance system -- DeCaDrive, which is implemented on an integrated soft-computing system with multi-sensing interfaces in a simulated driving environment. Utilizing active illumination, the IR depth camera of the realized system can provide rich facial and body features in 3D in a non-intrusive manner. In addition, a steering angle sensor, a pulse rate sensor, and an embedded impedance spectroscopy sensor are incorporated to aid in the detection/prediction of the driver's state and intention. A holistic design methodology for ADAS encompassing both driver- and vehicle-based approaches to driver assistance is discussed in the thesis as well.
Multi-sensor data fusion and hierarchical SVM techniques are used in DeCaDrive to facilitate the classification of driver drowsiness levels, based on which a warning can be issued in order to prevent possible traffic accidents. The realized DeCaDrive system achieves up to 99.66% classification accuracy on the defined drowsiness levels and exhibits promising features such as head/eye tracking, blink detection, and gaze estimation that can be utilized in human-vehicle interactions. However, the driver's state of "microsleep" can hardly be reflected in the sensor features of the implemented system. General improvements in the sensitivity of the sensory components and in the system computation power are required to address this issue. Possible new features and development considerations for DeCaDrive, aiming to gain market acceptance in the future, are discussed in the thesis as well.

How proteins can fold correctly within a few milliseconds is one of the fundamental questions in biochemistry. One transition state passed through during the folding process is the molten globule (MG) state, which can be stabilized and studied under certain conditions. In this state, the secondary structure resembles that of the native state, while the tertiary structure rather corresponds to the completely unfolded state. In this work, the MG state was investigated using the maltose-binding protein (MBP) as an example. To this end, MBP was stabilized in the MG state at pH 3.2, which was confirmed by fluorescence spectroscopy. The distances between defined amino acids in the MG state were measured by electron paramagnetic resonance (EPR) using spin labels attached to specifically mutated cysteine pairs and were compared with the distances between the same amino acids in the native state. Using seven different double mutants, the peripheral structure was analysed by pulsed EPR; two further double mutants served to investigate the structure of the molecular binding pocket of MBP by CW EPR. In the MG state, the presence of maltose led to a clear change in the distances of certain spin labels in the peripheral structure. This indicates that MBP can bind maltose even in the MG state. This assumption was confirmed by isothermal titration calorimetry (ITC): the results show, however, that the binding process between MBP and maltose in the MG state proceeds with an 11-fold lower binding enthalpy than in the native state. The distances of the spin label pairs next to the binding pocket of MBP did not differ between the MG state and the native state, either with or without maltose. These results indicate that MBP in the MG state already possesses a clearly formed tertiary structure around the binding pocket.
To confirm these findings, further investigations should be carried out using additional double mutants and more sensitive measurements such as DQC.

Redox-neutral decarboxylative coupling reactions have emerged as a powerful strategy for C-C bond formation. However, the existing reaction conditions suffer from limitations: the coupling of aryl halides was restricted to ortho-substituted benzoic acids, and alkenyl halides were not applicable in decarboxylative coupling reactions. Within this thesis, the development of Pd/Cu bimetallic catalyst systems to overcome these limitations is presented.
In the first part of the PhD work, a customized bimetallic Pd(II)/Cu(I) catalyst system was successfully developed to facilitate the decarboxylative cross-coupling of non-ortho-substituted aromatic carboxylates with aryl chlorides. The restriction of decarboxylative cross-coupling reactions to ortho-substituted or heterocyclic carboxylate substrates was overcome by holistic optimization of this bimetallic Cu/Pd catalyst system. All kinds of benzoic acids, regardless of their substitution pattern, can now be applied in decarboxylative cross-coupling reactions. This confirms the prediction by DFT studies that the previously observed limitation to certain activated carboxylates is not intrinsic. The catalyst system also shows higher performance in the coupling of ortho-substituted benzoates, giving much higher yields than those previously reported. ortho-Methyl benzoate and ortho-phenyl benzoate, which had never before been converted in decarboxylative coupling reactions, gave reasonable yields. Together, these results further confirm the superiority of the new protocol.
In the second part of the PhD work, syntheses of arylalkenes via two different Pd/Cu bimetallic-catalyzed decarboxylative couplings have been developed. This part consists of two projects: 2a) decarboxylative coupling of alkenyl halides; 2b) decarboxylative Mizoroki-Heck coupling of aryl halides with α,β-unsaturated carboxylic acids.
In project 2a, widely available, inexpensive, bench-stable aromatic carboxylic acids are used as nucleophile precursors instead of the expensive and sensitive organometallic reagents commonly used in previously reported transition-metal-catalyzed cross-couplings of alkenyl halides. With this protocol, alkenyl halides are used for the first time in decarboxylative coupling reactions, allowing the regiospecific synthesis of a broad range of (hetero)arylalkenes in high yields. Unwanted double-bond isomerization, a common side reaction in the alternative Heck reactions, especially in the coupling of cycloalkenes or aliphatic alkenes, did not take place in this decarboxylative coupling reaction. Polysubstituted alkenes that are hard to access with the Heck reaction are also produced in good yields. The reaction can easily be scaled up to gram scale. The synthetic utility of this reaction was also demonstrated by synthesizing an important intermediate of a fungicidal compound in high yield within two steps.
In project 2b, a Cu/Pd bimetallic-catalyzed decarboxylative Mizoroki-Heck coupling of aryl halides with α,β-unsaturated carboxylic acids was successfully developed, in which the carboxylate group directs the arylation into its β-position before being tracelessly removed via protodecarboxylation. It opens up a convenient synthesis of unsymmetrical 1,1-disubstituted alkenes from widely available precursors. This reaction features good regioselectivity, which is complementary to that of traditional Heck reactions, and also presents excellent functional group tolerance. Moreover, a one-pot, 3-step 1,1-diarylethylene synthesis from methyl acrylate was achieved, in which solvent changes or isolation of intermediates are not required. This subproject presents an example of the utility of carboxylic acids in synthesizing valuable compounds that are hard to access via conventional methodologies.

Requirements-Aware, Template-Based Protocol Graphs for Service-Oriented Network Architectures
(2016)

The rigidness of the Internet causes architectural design issues such as interdependencies among the layers, the absence of cross-layer information exchange, and the dependency of applications on the implementation of the underlying protocols.
G-Lab (http://www.german-lab.de/) is a research project on Future Internet Architecture (FIA) which focuses on problems of the Internet such as rigidness, mobility, and addressing, whereas the focus of ICSY (www.icsy) was on providing flexibility in future network architectures. An approach called Service-Oriented Network Architecture (SONATE) was proposed to compose protocols dynamically. SONATE is based on the principles of service-oriented architecture (SOA), where protocols are decomposed into software modules that are later put together on demand to provide the desired service.
This composition of functionalities can be performed at various time epochs (e.g., design time, deployment time, run time). However, these epochs involve a trade-off between time complexity (i.e., the required setup time) and the provided flexibility. Design time is the least time-critical phase, which makes it possible to utilize human analytical capability. However, design time lacks real-time knowledge of requirements and network conditions, which results in inflexible protocol graphs that cannot be changed at later stages when requirements change. Contrary to design time, run time is the most time-critical phase, where an application is waiting for a connection to be established, but at the same time it has the maximum information available to generate a protocol graph suited to the given requirements.
Considering the above limitations of the different time phases, this thesis presents a novel intermediate functional composition approach (Template-Based Composition) to generate requirements-aware protocol graphs. The template-based composition splits the composition process across the time phases to exploit the less time-critical nature and availability of human analysis at design time, the ability to instantaneously deploy new functionalities at deployment time, and the maximum information availability at run time. The approach has been successfully implemented, demonstrated, and evaluated with respect to its performance to understand the implications for practical use.
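The idea of filling design-time templates with run-time choices can be sketched in a few lines; all module and slot names below are hypothetical illustrations, not SONATE's actual vocabulary:

```python
# Illustrative sketch of template-based protocol composition.
# Design time: experts prepare partial protocol graphs ("templates")
# with open slots. Run time: slots are filled from the application's
# requirements, so only the final selection happens on the critical path.

TEMPLATES = {
    "reliable_transport": ["framing", "<error_control>", "flow_control"],
    "realtime_transport": ["framing", "<error_control>", "rate_control"],
}

# Run-time options for the open <error_control> slot.
SLOT_OPTIONS = {
    "error_control": {
        "lossless": "retransmission",
        "low_latency": "forward_error_correction",
    }
}

def compose(template_name, requirements):
    """Fill a template's open slots according to run-time requirements."""
    graph = []
    for block in TEMPLATES[template_name]:
        if block.startswith("<"):            # an open slot, e.g. <error_control>
            slot = block.strip("<>")
            graph.append(SLOT_OPTIONS[slot][requirements[slot]])
        else:                                # a fixed, design-time module
            graph.append(block)
    return graph

print(compose("reliable_transport", {"error_control": "lossless"}))
# -> ['framing', 'retransmission', 'flow_control']
```

The expensive search over all possible module combinations is thus done once at design time, while the run-time step reduces to a cheap table lookup.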

When designing autonomous mobile robotic systems, there usually is a trade-off between the three opposing goals of safety, low-cost and performance.
If one of these design goals is approached further, it usually leads to a recession of one or even both of the other goals.
If for example the performance of a mobile robot is increased by making use of higher vehicle speeds, then the safety of the system is usually decreased, as, under the same circumstances, faster robots are often also more dangerous robots.
This decrease of safety can be mitigated by installing better sensors on the robot, which ensure the safety of the system, even at high speeds.
However, this solution is accompanied by an increase of system cost.
In parallel to mobile robotics, there is a growing number of ambient and aware technology installations in today's environments - no matter whether in private homes, offices, or factory environments.
This technology includes sensors that are suitable for assessing the state of an environment.
For example, motion detectors that are used to automate lighting can be used to detect the presence of people.
This work constitutes a meeting point between the two fields of robotics and aware environment research.
It shows how data from aware environments can be used to approach the above-mentioned goal of establishing safe, performant, and additionally low-cost robotic systems.
Sensor data from aware technology, which is often unreliable due to its low-cost nature, is fed to probabilistic methods for estimating the environment's state.
Together with models, these methods cope with the uncertainty and unreliability associated with the sensor data, gathered from an aware environment.
The estimated state includes positions of people in the environment and is used as an input to the local and global path planners of a mobile robot, enabling safe, cost-efficient and performant mobile robot navigation during local obstacle avoidance as well as on a global scale, when planning paths between different locations.
The probabilistic algorithms enable graceful degradation of the whole system.
Even if, in the extreme case, all aware technology fails, the robots will continue to operate, by sacrificing performance while maintaining safety.
All the presented methods of this work have been validated using simulation experiments as well as using experiments with real hardware.
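The probabilistic estimation of, for example, a person's presence from an unreliable motion detector can be sketched as a discrete Bayes filter; the sensor probabilities below are illustrative assumptions, not values from this work:

```python
# Discrete Bayes filter for presence estimation from an unreliable
# motion detector -- a minimal sketch, not the thesis implementation.
# Assumed sensor model: P(detect | present) = 0.7, P(detect | absent) = 0.1.

def bayes_update(prior, detection, p_hit=0.7, p_false=0.1):
    """Return the posterior P(present) after one sensor reading."""
    like_present = p_hit if detection else (1.0 - p_hit)
    like_absent = p_false if detection else (1.0 - p_false)
    unnorm = like_present * prior
    return unnorm / (unnorm + like_absent * (1.0 - prior))

belief = 0.5                            # uninformed prior
for z in [True, True, False, True]:     # a stream of noisy detector readings
    belief = bayes_update(belief, z)
print(round(belief, 3))                 # -> 0.991
```

Even with a 10% false-alarm rate, repeated consistent readings drive the belief close to certainty, while a single contradicting reading only lowers it moderately - exactly the graceful handling of unreliable low-cost sensors described above.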

Synapses play a central role in the information propagation in the nervous system. A better understanding of synaptic structures and processes is vital for advancing nervous disease research. This work is part of an interdisciplinary project that aims at the quantitative examination of components of the neuromuscular junction, a synaptic connection between a neuron and a muscle cell.
The research project is based on image stacks picturing neuromuscular junctions captured by modern electron microscopes, which permit the rapid acquisition of huge amounts of image data at a high level of detail. The large amount and sheer size of such microscopic data makes a direct visual examination infeasible, though.
This thesis presents novel problem-oriented interactive visualization techniques that support the segmentation and examination of neuromuscular junctions.
First, I introduce a structured data model for segmented surfaces of neuromuscular junctions to enable the computational analysis of their properties. However, surface segmentation of neuromuscular junctions is a very challenging task due to the extremely intricate character of the objects of interest. Hence, such problematic segmentations are often performed manually by non-experts and thus require further inspection.
With NeuroMap, I develop a novel framework to support proofreading and correction of three-dimensional surface segmentations. To provide a clear overview and to ease navigation within the data, I propose the surface map, an abstracted two-dimensional representation using key features of the surface as landmarks. These visualizations are augmented with information about automated segmentation error estimates. The framework provides intuitive and interactive data correction mechanisms, which in turn permit the expeditious creation of high-quality segmentations.
While analyzing such segmented synapse data, the formulation of specific research questions is often impossible due to missing insight into the data. I address this problem by designing a generic parameter space for segmented structures from biological image data. Furthermore, I introduce a graphical interface to aid its exploration, combining both parameter selection as well as data representation.

This Ph.D. project, as a landscape research practice, focuses on the less widely studied aspects of the urban agriculture landscape and its application in recreation and leisure, as well as landscape beautification. I research edible landscape planning and design, its criteria, possibilities, and traditional roots for the particular situation of Iranian cities and landscapes. The primary objective is to prepare a conceptual and practical framework for Iranian professionals to integrate food landscaping into new greenery and open-space developments. Furthermore, finding possibilities for synthesizing traditional utilitarian gardening with contemporary pioneering viewpoints on agricultural landscapes is the other significant proposed achievement.
Finished tasks and list of achieved results:
• Recognition of the software and hardware principles of designing agricultural landscapes based on Persian gardens
• The multidimensional identity of the agricultural landscape in Persian gardens
• Principles of architectural integration and the characteristics of the integrative landscape in Persian gardens
• Distinctive characteristics of the agricultural landscape in Persian gardens
• Introduction of Persian and historical gardens as the starting point for reintroducing agricultural phenomena into Iranian cities and landscapes
• Assessment of the structure of Persian gardens based on new achievements and criteria for designing urban agriculture
• Investigation of the role of Persian gardens in envisioning urban agriculture in Iranian cities' landscape.

Reading as a cultural skill is acquired over a long period of training. This thesis supports the idea that reading is based on specific strategies that result from the modification and coordination of earlier developed object recognition strategies. The reading-specific processing strategies are considered to be more analytic compared to object recognition strategies, which are described as holistic. To enable proper reading skills, these strategies have to become automatized. Study 1 (Chapter 4) examined the temporal and visual constraints of letter recognition strategies. In the first experiment, two successively presented stimuli (letters or non-letters) had to be classified as same or different. The second stimulus could either be presented in isolation or surrounded by a shape, which was either similar (congruent) or different (incongruent) in its geometrical properties to the stimulus itself. The non-letter pairs were presented twice as often as the letter pairs. The results demonstrated a preference for the holistic strategy also for letters, even though the non-letter set was presented twice as often as the letter set, showing that the analytic strategy does not completely replace the holistic one, but that the usage of both strategies is task-sensitive. In Experiment 2, we compared the Global Precedence Effect (GPE) for letters and non-letters in central viewing, with the global stimulus size close to the functional visual field in whole word reading (6.5° of visual angle) and local stimuli close to the critical size for fluent reading of individual letters (0.5° of visual angle). Under these conditions, the GPE remained robust for non-letters. For letters, however, it disappeared: letters showed no overall response time advantage for the global level and symmetric congruence effects (local-to-global as well as global-to-local interference). These results indicate that reading is based on resident analytic visual processing strategies for letters.
In Study 2 (Chapter 5) we replicated the latter result with a large group of participants as part of a study in which pairwise associations of non-letters and phonological or non-phonological sounds were systematically trained. We investigated whether training would eliminate the GPE for non-letters as well. We observed, however, that the differentiation between letters and non-letter shapes persists after training. This result implies that pairwise association learning is not sufficient to overrule the process differentiation in adults. In addition, subtle effects arising in the letter condition (due to enhanced statistical power) enable us to further specify the differentiation in processing between letters and non-letter shapes. The influence of reading ability on the GPE was examined in Study 3 (Chapter 6). Children with normal reading skills and children with poor reading skills were instructed to detect a target in Latin or Hebrew Navon letters. Children with normal reading skills showed a GPE for Latin letters, but not for Hebrew letters. In contrast, the dyslexia group did not show a GPE for either kind of stimulus. These results suggest that dyslexic children are not able to apply the same automatized letter processing strategy as children with normal reading skills do. The difference between analytic letter processing and holistic non-letter processing was transferred to the context of whole word reading in Study 4 (Chapter 7). When participants were instructed to detect either a letter or a non-letter in a mixed character string, the reaction times and error rates for letters increased linearly from the left to the right terminal position in the string, whereas for non-letters a symmetrical U-shaped function was observed. These results suggest that the letter-specific processing strategies are triggered automatically also for more word-like material.
Thus, this thesis supports and expands prior results on letter-specific processing and provides new evidence for letter-specific processing strategies.

This work introduces a promising concept for the preparation of new nano-sized receptors. Mixed monolayer protected gold nanoparticles (AuNPs) for low molecular weight compounds were prepared featuring functional groups on their surfaces. It has been shown that these AuNPs can engage in interactions with peptides in aqueous media. Quantitative binding information was obtained from DOSY-NMR titrations, indicating that nanoparticles containing a combination of three orthogonal functional groups are more efficient in binding to dipeptides than mono- or difunctionalised analogues. The strategy is highly modular and easily allows adapting the receptor selectivity to a given substrate by varying the type, number, and ratio of binding sites on the nanoparticle surface.

This thesis presents a novel, generic framework for information segmentation in document images.
A document image contains different types of information, for instance, text (machine printed/handwritten), graphics, signatures, and stamps.
It is necessary to segment information in documents so that such segmented information can be processed only when required in automatic document processing workflows.
The main contribution of this thesis is the conceptualization and implementation of an information segmentation framework that is based on part-based features.
The generic nature of the presented framework makes it applicable to a variety of documents (technical drawings, magazines, administrative, scientific, and academic documents) digitized using different methods (scanners, RGB cameras, and hyper-spectral imaging (HSI) devices).
A highlight of the presented framework is that it does not require large training sets, rather a few training samples (for instance, four pages) lead to high performance, i.e., better than previously existing methods.
In addition, the presented framework is simple and can be adapted quickly to new problem domains.
This thesis is divided into three major parts on the basis of the document digitization method used (scanning, hyper-spectral imaging, and camera capture).
In the area of scanned document images, three specific contributions have been realized.
The first of them is in the domain of signature segmentation in administrative documents.
In some workflows, it is very important to check the document authenticity before processing the actual content.
This can be done based on the available seal of authenticity, e.g., signatures.
However, signature verification systems expect a pre-segmented signature image, while signatures are usually part of a document.
To use signature verification systems on document images, it is necessary to first segment signatures in documents.
This thesis shows that the presented framework can be used to segment signatures in administrative documents.
The system based on the presented framework is tested on a publicly available dataset, where it outperforms the state-of-the-art methods and successfully segments all signatures, while fewer than half of the found signatures are false positives.
This shows that it can be applied for practical use.
The second contribution in the area of scanned document images is segmentation of stamps in administrative documents.
A stamp also serves as a seal for documents authenticity.
However, the location of a stamp on a document can be more arbitrary than that of a signature, depending on the person sealing the document.
This thesis shows that a system based on our generic framework is able to extract stamps of any arbitrary shape and color.
The evaluation of the presented system on a publicly available dataset shows that it is also able to segment black stamps (that were not addressed in the past) with a recall and precision of 83% and 73%, respectively.
Furthermore, to segment colored stamps, this thesis presents a novel feature set based on the intensity gradient, which is able to extract unseen, colored, arbitrarily shaped, textual as well as graphical stamps, and outperforms the state-of-the-art methods.
The third contribution in the area of scanned document images is in the domain of information segmentation in technical drawings (architectural floorplans, maps, circuit diagrams, etc.), which usually contain a large amount of graphics and comparatively few textual components. Furthermore, in technical drawings, text often overlaps with graphics.
Thus, automatic analysis of technical drawings uses text/graphics segmentation as a pre-processing step.
This thesis presents a method based on our generic information segmentation framework that is able to detect the text, which is touching graphical components in architectural floorplans and maps.
Evaluation of the method on a publicly available dataset of architectural floorplans shows that it is able to extract almost all touching text components with precision and recall of 71% and 95%, respectively.
This means that almost all of the touching text components are successfully extracted.
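For reference, the precision and recall figures quoted throughout this abstract follow the usual definitions; a minimal sketch, where the true/false positive counts are hypothetical, merely chosen to reproduce the reported 71%/95%:

```python
# Precision and recall from raw detection counts -- a quick reference sketch.
def precision_recall(tp, fp, fn):
    """tp: correct detections, fp: spurious detections, fn: missed objects."""
    precision = tp / (tp + fp)   # fraction of detections that are correct
    recall = tp / (tp + fn)      # fraction of true objects that were found
    return precision, recall

# Hypothetical counts matching the reported 71% precision / 95% recall:
p, r = precision_recall(tp=95, fp=39, fn=5)
print(round(p, 2), round(r, 2))   # -> 0.71 0.95
```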
In the area of hyper-spectral document images, two contributions have been realized.
Unlike normal three-channel RGB images, hyper-spectral images usually have multiple channels that range from the ultraviolet to the infrared region, including the visible region.
First, this thesis presents a novel automatic method for signature segmentation from hyper-spectral document images (240 spectral bands between 400 and 900 nm).
The presented method is based on a part-based key point detection technique, which does not use any structural information, but relies only on the spectral response of the document regardless of ink color and intensity.
The presented method is capable of segmenting (overlapping and non-overlapping) signatures from varying backgrounds like, printed text, tables, stamps, logos, etc.
Importantly, the presented method can extract signature pixels and not just the bounding boxes.
This is substantial when signatures overlap with text and/or other objects in the image. Second, this thesis presents a new dataset comprising 300 documents scanned using a high-resolution hyper-spectral scanner. Evaluation of the presented signature segmentation method on this hyper-spectral dataset shows that it is able to extract signature pixels with a precision and recall of 100% and 79%, respectively.
Further contributions have been made in the area of camera-captured document images. A major problem in the development of Optical Character Recognition (OCR) systems for camera-captured document images is the lack of labeled camera-captured document image datasets. First, this thesis presents a novel, generic method for automatic ground truth generation/labeling of document images. The presented method builds large-scale (i.e., millions of images) datasets of labeled camera-captured/scanned documents without any human intervention. The method is generic and can be used for automatic ground truth generation of (scanned and/or camera-captured) documents in any language, e.g., English, Russian, Arabic, or Urdu. The evaluation of the presented method on two different datasets in English and Russian shows that 99.98% of the images are correctly labeled in every case.
Another important contribution in the area of camera-captured document images is the compilation of a large dataset comprising 1 million word images (10 million character images), captured in a real camera-based acquisition environment, along with word- and character-level ground truth. The dataset can be used for training as well as testing of character recognition systems for camera-captured documents. Various benchmark tests are performed to analyze the behavior of different open-source OCR systems on camera-captured document images. Evaluation results show that the existing OCR systems, which already achieve very high accuracies on scanned documents, fail on camera-captured document images.
Using the presented camera-captured dataset, a novel character recognition system is developed based on a variant of recurrent neural networks, i.e., Long Short-Term Memory (LSTM), which outperforms all existing OCR engines on camera-captured document images with an accuracy of more than 95%.
Finally, this thesis provides details on various tasks that have been performed in areas closely related to information segmentation. This includes automatic analysis and sketch-based retrieval of architectural floor plan images, a novel scheme for online signature verification, and a part-based approach for signature verification. With these contributions, it has been shown that part-based methods can be successfully applied to document image analysis.

Stochastic Network Calculus (SNC) emerged from two branches in the late 90s: the theory of effective bandwidths and its predecessor, the Deterministic Network Calculus (DNC). As such, SNC's goal is to analyze queueing networks and support their design and control.
In contrast to queueing theory, which strives for similar goals, SNC uses inequalities to circumvent complex situations, such as stochastic dependencies or non-Poisson arrivals. Leaving behind the objective of computing exact distributions, SNC derives stochastic performance bounds. Such a bound would, for example, guarantee a system's maximal queue length that is violated only with a known small probability.
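As a worked illustration of how such a probabilistic bound arises (a generic Chernoff-type argument, not a derivation taken from this thesis): for a queue length \(Q\) with finite moment generating function and any \(\theta > 0\),

```latex
% Markov/Chernoff bound underlying MGF-based performance bounds:
P(Q \geq b) \;\leq\; e^{-\theta b}\, \mathbb{E}\!\left[ e^{\theta Q} \right].
% Choosing b = \tfrac{1}{\theta} \ln\!\bigl( \mathbb{E}[e^{\theta Q}] / \varepsilon \bigr)
% therefore guarantees a violation probability of at most \varepsilon.
```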
This work includes several contributions towards the theory of SNC. They are sorted into five main contributions:
(1) The first chapters give a self-contained introduction to deterministic network calculus and its two branches of stochastic extensions. The focus lies on the notion of network operations, which allow deriving the performance bounds and simplifying complex scenarios.
(2) The author created the first open-source tool to automate the steps of calculating and optimizing MGF-based performance bounds. The tool automatically calculates end-to-end performance bounds via a symbolic approach. In a second step, this solution is numerically optimized. A modular design allows the user to implement their own functions, like traffic models or analysis methods.
(3) The problem of the initial modeling step is addressed with the development of a statistical network calculus. In many applications the properties of the included elements are mostly unknown. To that end, assumptions about the underlying processes are made and backed by measurement-based statistical methods. This thesis presents a way to integrate possible modeling errors into the bounds of SNC. As a byproduct, a dynamic view on the system is obtained that allows SNC to adapt to non-stationarities.
(4) Probabilistic bounds are fundamentally different from deterministic bounds: while deterministic bounds hold for all times of the analyzed system, this is not true for probabilistic bounds. Stochastic bounds, although valid for every time t, hold only for one time instance at a time. Sample path bounds are only achieved by using Boole's inequality. This thesis presents an alternative method by adapting the theory of extreme values.
(5) A long-standing problem of SNC is the construction of stochastic bounds for a window flow controller. The corresponding problem for DNC had been solved over a decade ago, but remained an open problem for SNC. This thesis presents two methods for a successful application of SNC to the window flow controller.

This thesis investigates the electromechanical coupling of dielectric elastomers for the static and dynamic case by numerical simulations. To this end, the fundamental equations of the coupled field problem are introduced and the discretisation procedure for the numerical implementation is described. Furthermore, a three-field formulation is proposed and implemented to treat the nearly incompressible behaviour of the elastomer. Because of the reduced electric permittivity of the material, very high electric fields are required for actuation purposes. To improve the electromechanical coupling, a heterogeneous microstructure consisting of an elastomer matrix with barium titanate inclusions is proposed and studied.

Mixed-signal systems combine analog circuits with digital hardware and software systems. A particular challenge is the sensitivity of analog parts to even small deviations in parameters or inputs. Parameters of circuits and systems such as process, voltage, and temperature are never accurate; we hence model them as uncertain values ('uncertainties'). Uncertain parameters and inputs can modify the dynamic behavior and lead to properties of the system that are outside specified ranges. For the verification of mixed-signal systems, the analysis of the impact of uncertainties on the dynamic behavior plays a central role.
Verification of mixed-signal systems is usually done by numerical simulation. A single numerical simulation run allows designers to verify single parameter values out of often large ranges of uncertain values. Multi-run simulation techniques such as Monte Carlo simulation, corner-case simulation, and enhanced techniques such as importance sampling or design-of-experiments allow verifying ranges – at the cost of a high number of simulation runs, and with the risk of not finding potential errors. Formal and symbolic approaches are an interesting alternative. Such methods allow a comprehensive verification. However, formal methods do not scale well with heterogeneity and complexity. Also, formal methods do not support existing and established modeling languages. This fact complicates their integration into industrial design flows.
In previous work on the verification of mixed-signal systems, Affine Arithmetic is used for symbolic simulation. This allows combining the high coverage of formal methods with the ease of use and applicability of simulation. Affine Arithmetic computes the propagation of uncertainties through mostly linear analog circuits and DSP methods in an accurate way. However, Affine Arithmetic is currently only able to compute with contiguous regions and does not permit the representation of and computation with discrete behavior, e.g. introduced by software. This is a serious limitation: in mixed-signal systems, uncertainties in the analog part are often compensated by embedded software; hence, verification of system properties must consider both analog circuits and embedded software.
The objective of this work is to provide an extension to Affine Arithmetic that allows symbolic computation also for digital hardware and software systems, and to demonstrate its applicability and scalability. Compared with related work and state of the art, this thesis provides the following achievements:
1. The thesis introduces extended Affine Arithmetic Forms (XAAF) for the representation of branch and merge operations.
2. The thesis describes arithmetic and relational operations on XAAF, and reduces over-approximation by using an LP solver.
3. The thesis shows and discusses ways to integrate this XAAF into existing modeling languages, in particular SystemC. This way, breaks in the design flow can be avoided.
The applicability and scalability of the approach is demonstrated by symbolic simulation of a Delta-Sigma Modulator and a PLL circuit of an IEEE 802.15.4 transceiver system.
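Plain affine arithmetic, which XAAF extends, can be sketched in a few lines. The example below is an illustrative toy (not the thesis implementation or XAAF itself), showing why affine forms track correlations that plain intervals lose:

```python
# Minimal affine-arithmetic sketch (plain AA, without the XAAF branch/merge
# extension described above). An affine form x0 + sum_i xi*eps_i represents
# an uncertain value; shared noise symbols eps_i capture correlations.

class Affine:
    def __init__(self, center, terms=None):
        self.center = center            # x0, the central value
        self.terms = dict(terms or {})  # noise symbol -> partial deviation

    def __add__(self, other):
        terms = dict(self.terms)
        for sym, dev in other.terms.items():
            terms[sym] = terms.get(sym, 0.0) + dev
        return Affine(self.center + other.center, terms)

    def scale(self, k):
        return Affine(k * self.center,
                      {s: k * d for s, d in self.terms.items()})

    def interval(self):
        """Enclosing interval: center +/- sum of absolute deviations."""
        rad = sum(abs(d) for d in self.terms.values())
        return (self.center - rad, self.center + rad)

# A value of 1.0 +/- 0.1 and its correlated copy: x - x is exactly 0,
# a conclusion that plain interval arithmetic cannot reach.
x = Affine(1.0, {"e1": 0.1})
diff = x + x.scale(-1.0)
print(diff.interval())   # -> (0.0, 0.0)
```

Because both operands share the noise symbol e1, the deviations cancel exactly; interval arithmetic would instead report the over-approximation (-0.2, 0.2).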

Combining ultracold atomic gases with the peculiar properties of Rydberg excited atoms gained a lot of theoretical and experimental attention in recent years. Embedded in the ultracold gas, an interaction between the Rydberg atom and the surrounding ground state atoms arises through the scattering of the Rydberg electron from an intruding perturber atom. This peculiar interaction gives rise to a plenitude of previously unobserved effects. Within the framework of the present thesis, this interaction is studied in detail for Rydberg \(P\)-states in rubidium.
Due to their long lifetime, atoms in Rydberg states are subject to scattering with the surrounding ground state atoms in the ultracold cloud. By measuring their lifetime as a function of the ground state atom flux, we are able to obtain the total inelastic scattering cross section as well as the partial cross section for associative ionisation. The fact that the latter is three orders of magnitude larger than the size of the formed molecular
ion indicates the presence of an efficient mass transport mechanism that is mediated by the Rydberg–ground state interaction. The immense acceleration of the collisional process shows a close analogy to a catalytic process. The increase of the scattering cross section renders associative ionisation an important process that has to be considered for experiments in dense ultracold systems.
The interaction of the Rydberg atom with a ground state perturber gives rise to a highly oscillatory potential that supports molecular bound states. These so-called ultralong-range Rydberg molecules are studied with high resolution time-of-flight spectroscopy, where we are able to determine the binding energies and lifetimes of the molecular states between the two fine structure split \(25P\)-states. Inside an electric field, we observe a broadening of the
molecular lines that indicates the presence of a permanent electric dipole moment, induced by the mixing with high angular momentum states. Due to the mixing of the ground state atom’s hyperfine states by the molecular interaction, we are able to observe a spin-flip of the perturber upon creation of a Rydberg molecule. Furthermore, an incidental near-degeneracy in the underlying level scheme of the \(25P\)-state gives rise to highly entangled states between the Rydberg fine structure state and the perturber’s hyperfine structure. These mechanisms can be used to manipulate the quantum state of a remote particle over distances that exceed by far the typical contact interaction range.
Apart from the ultralong-range Rydberg molecules that predominantly consist of only one low angular momentum state, a class of Rydberg molecules is predicted to exist that strongly mixes the high angular momentum states of the degenerate hydrogenic manifolds. These states, the so-called trilobite- and butterfly Rydberg molecules, show very peculiar properties that cannot be observed for conventional molecules. Here we present the first experimental observation of butterfly Rydberg molecules. In addition to an extensive spectroscopy that reveals the binding energy, we are also able to observe the rotational structure of these exotic molecules. The arising pendular states inside an electric field allow us, in comparison to the model of a dipolar rotor, to extract the precise bond
length and dipole moment of the molecule. With the information obtained in the present study, it is possible to photoassociate butterfly molecules with a selectable bond length, vibrational state, rotational state, and orientation inside an electric field.
By shedding light on various previously unrevealed aspects, the experiments presented in this thesis significantly deepen our knowledge of the Rydberg–ground state interaction and the peculiar effects arising from it. The obtained spectroscopic information on Rydberg molecules and the changed reaction dynamics for molecular ion creation will provide valuable data for quantum chemical simulations and for planning future experiments. Beyond that, our study reveals that the hyperfine interaction in Rydberg molecules and the peculiar properties of butterfly states provide very promising new ways to alter the short- and long-range interactions in ultracold many-body systems. In this sense, the investigated Rydberg–ground state interaction not only lies right at
the interface between quantum chemistry, quantum many-body systems, and Rydberg physics, but also creates many new and fascinating possibilities by combining these fields.

Knowing the extent to which we rely on technology, one may think that correct programs are nowadays the norm. Unfortunately, this is far from the truth. Luckily, the reasons why program correctness is difficult often come hand in hand with possible solutions. Consider concurrent program correctness under Sequential Consistency (SC). Under SC, the instructions of each of a program's concurrent components are executed atomically and in order. By using logic to represent correctness specifications, model checking provides a successful solution to concurrent program verification under SC. Alas, SC's atomicity assumptions do not reflect the reality of hardware architectures. Total Store Order (TSO) is a weaker memory model, implemented in SPARC and Intel x86 multiprocessors, that relaxes the SC constraints. While the architecturally de-atomized execution of stores under TSO speeds up program execution, it also complicates program verification. Due to TSO's unbounded store buffers, a program's semantics under TSO might be infinite. This, for example, turns reachability, a PSPACE-complete task under SC, into a non-primitive-recursive-complete problem under TSO. This thesis develops verification techniques targeting TSO-relaxed programs. More precisely, we present under- and over-approximating heuristics for checking reachability in TSO-relaxed programs, as well as state-reducing methods for speeding up such heuristics. In a first contribution, we propose an algorithm to check reachability of TSO-relaxed programs lazily. The under-approximating refinement algorithm uses auxiliary variables to simulate TSO's buffers along instruction sequences suggested by an oracle. The oracle's deciding characteristic is that if it returns the empty sequence, then the program's SC- and TSO-reachable states are the same. Secondly, we propose several approaches to over-approximate TSO buffers.
Combined in a refinement algorithm, these approaches can be used to determine safety with respect to TSO reachability for a large class of TSO-relaxed programs. On the more technical side, we prove that checking reachability is decidable when TSO buffers are approximated by multisets with tracked per-address last-added values. Finally, we analyze how the explored state space can be reduced when checking TSO and SC reachability. Intuitively, through the viewpoint of Shasha-and-Snir-like traces, we exploit the structure of program instructions to explain several state-space-reducing methods, including dynamic and cartesian partial order reduction.
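The store-buffering effect described above can be made concrete with the classic SB litmus test, explored exhaustively under both semantics. In this small sketch (the modeling choices are ours, not the thesis's), each thread writes one shared variable and then reads the other:

```python
PROG = [[('W', 'x'), ('R', 'y')],   # thread 0: x := 1; r0 := y
        [('W', 'y'), ('R', 'x')]]   # thread 1: y := 1; r1 := x

def reachable_outcomes(tso):
    """Enumerate the final (r0, r1) register values of the store-buffering
    litmus test; with tso=True, every store first enters its thread's FIFO
    store buffer and is flushed to memory nondeterministically."""
    outcomes, seen = set(), set()

    def step(pc, mem, bufs, regs):
        state = (pc, tuple(sorted(mem.items())), bufs, regs)
        if state in seen:
            return
        seen.add(state)
        if pc == (2, 2) and not any(bufs):   # both threads done, buffers empty
            outcomes.add(regs)
            return
        for t in (0, 1):
            if bufs[t]:  # nondeterministically flush the oldest buffered store
                var, val = bufs[t][0]
                nb = list(bufs); nb[t] = bufs[t][1:]
                step(pc, {**mem, var: val}, tuple(nb), regs)
            if pc[t] < 2:
                op, var = PROG[t][pc[t]]
                npc = (pc[0] + (t == 0), pc[1] + (t == 1))
                if op == 'W' and tso:        # store goes into the buffer
                    nb = list(bufs); nb[t] = bufs[t] + ((var, 1),)
                    step(npc, mem, tuple(nb), regs)
                elif op == 'W':              # SC: store hits memory at once
                    step(npc, {**mem, var: 1}, bufs, regs)
                else:  # read: forward the newest matching buffered store
                    own = [v for w, v in bufs[t] if w == var]
                    val = own[-1] if own else mem[var]
                    nr = list(regs); nr[t] = val
                    step(npc, mem, bufs, tuple(nr))

    step((0, 0), {'x': 0, 'y': 0}, ((), ()), (None, None))
    return outcomes
```

Under SC, the outcome r0 = r1 = 0 never appears; under TSO it does, because both stores can still sit in their buffers while the loads read stale memory. Every SC outcome remains TSO-reachable, since a buffered store may always be flushed immediately.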

A vehicle's fatigue damage is a highly relevant quantity in the complete vehicle design process.
Long-term observations and statistical experiments help to determine the influence of different parts of the vehicle, the driver, and the surrounding environment.
This work focuses on modeling one of the most important environmental influence factors: road roughness. The quality of the road depends strongly on several surrounding factors, which can be used to create mathematical models.
Such models can be used for the extrapolation of information and an estimation of the environment for statistical studies.
The target quantity we focus on in this work is the discrete International Roughness Index, or discrete IRI. The class of models we use and evaluate is a discriminative classification model called the Conditional Random Field.
We develop a suitable model specification and present new variants of stochastic optimization to train the model efficiently.
The model is also applied to simulated and real-world data to demonstrate the strengths of our approach.
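For a linear-chain CRF as used here, both training and inference hinge on the log partition function, computable by the forward recursion. A minimal, numerically naive sketch (the scores are illustrative, not the thesis's road-roughness model):

```python
import math

def crf_log_partition(unary, pairwise):
    """Log partition function log Z of a linear-chain CRF via the forward
    recursion. unary[t][k] scores label k at position t; pairwise[j][k]
    scores the transition j -> k. Naive sketch: no max-shift for stability."""
    K = len(unary[0])
    alpha = list(unary[0])                   # forward scores at position 0
    for t in range(1, len(unary)):
        alpha = [unary[t][k]
                 + math.log(sum(math.exp(alpha[j] + pairwise[j][k])
                                for j in range(K)))
                 for k in range(K)]
    return math.log(sum(math.exp(a) for a in alpha))
```

The recursion sums over all label sequences in O(T·K²) time instead of enumerating the K^T sequences explicitly; the gradient of log Z yields the expected feature counts needed by the stochastic optimizers mentioned above.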

Dual-Pivot Quicksort and Beyond: Analysis of Multiway Partitioning and Its Practical Potential
(2016)

Multiway Quicksort, i.e., partitioning the input in one step around several pivots, has received much attention since Java 7’s runtime library uses a new dual-pivot method that outperforms by far the old Quicksort implementation. The success of dual-pivot Quicksort is most likely due to more efficient usage of the memory hierarchy, which gives reason to believe that further improvements are possible with multiway Quicksort.
In this dissertation, I conduct a mathematical average-case analysis of multiway Quicksort including the important optimization to choose pivots from a sample of the input. I propose a parametric template algorithm that covers all practically relevant partitioning methods as special cases, and analyze this method in full generality. This allows me to analytically investigate in depth what effect the parameters of the generic Quicksort have on its performance. To model the memory-hierarchy costs, I also analyze the expected number of scanned elements, a measure for the amount of data transferred from memory that is known to also approximate the number of cache misses very well. The analysis unifies previous analyses of particular Quicksort variants under particular cost measures in one generic framework.
A main result is that multiway partitioning can reduce the number of scanned elements significantly, while it does not save many key comparisons; this explains why the earlier studies of multiway Quicksort did not find it promising. A highlight of this dissertation is the extension of the analysis to inputs with equal keys. I give the first analysis of Quicksort with pivot sampling and multiway partitioning on an input model with equal keys.
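The dual-pivot partitioning step analyzed here can be sketched as follows: a minimal Python rendition of a Yaroslavskiy-style scheme, without the pivot sampling discussed above and without the tuning of the Java runtime's implementation:

```python
def dual_pivot_quicksort(a, lo=0, hi=None):
    """In-place Quicksort partitioning around two pivots p <= q into three
    regions: < p, between p and q, and > q (Yaroslavskiy-style sketch)."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    if a[lo] > a[hi]:
        a[lo], a[hi] = a[hi], a[lo]
    p, q = a[lo], a[hi]                  # the two pivots
    lt, gt, i = lo + 1, hi - 1, lo + 1   # region boundaries and scan index
    while i <= gt:
        if a[i] < p:                     # element belongs left of p
            a[i], a[lt] = a[lt], a[i]; lt += 1
        elif a[i] > q:                   # element belongs right of q
            while a[gt] > q and i < gt:
                gt -= 1
            a[i], a[gt] = a[gt], a[i]; gt -= 1
            if a[i] < p:                 # swapped-in element may belong left
                a[i], a[lt] = a[lt], a[i]; lt += 1
        i += 1
    lt -= 1; gt += 1
    a[lo], a[lt] = a[lt], a[lo]          # move pivots into final positions
    a[hi], a[gt] = a[gt], a[hi]
    dual_pivot_quicksort(a, lo, lt - 1)
    dual_pivot_quicksort(a, lt + 1, gt - 1)
    dual_pivot_quicksort(a, gt + 1, hi)
    return a
```

Each element is compared against at most two pivots per partitioning pass, and each pass splits the input into three subproblems; the memory-hierarchy benefit discussed above stems from scanning fewer elements overall, not from saving comparisons.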

By using Gröbner bases of ideals of polynomial algebras over a field, many implemented algorithms manage to give exciting examples and counterexamples in Commutative Algebra and Algebraic Geometry. Part A of this thesis focuses on extending the concept of Gröbner bases and standard bases to polynomial algebras over the ring of integers and its factors \(\mathbb{Z}_m[x]\). Moreover, we implemented two algorithms for this case in Singular, which use different approaches to detecting useless computations: the classical Buchberger algorithm and an F5 signature-based algorithm. Part B includes two algorithms that compute the graded Hilbert depth of a graded module over a polynomial algebra \(R\) over a field, as well as the depth and the multigraded Stanley depth of a factor of monomial ideals of \(R\). The two algorithms provide faster computations and examples that led B. Ichim and A. Zarojanu to a counterexample of a question of J. Herzog. A. Duval, B. Goeckner, C. Klivans and J. Martin have recently discovered a counterexample for the Stanley Conjecture. We prove in this thesis that the Stanley Conjecture holds in some special cases. Part D explores the General Néron Desingularization in the setting of Noetherian local domains of dimension 1. We have constructed and implemented in Singular an algorithm that computes a strong Artin Approximation for Cohen-Macaulay local rings of dimension 1.

Integrating Security Concerns into Safety Analysis of Embedded Systems Using Component Fault Trees
(2016)

Nowadays, almost every newly developed system contains embedded systems for controlling system functions. An embedded system perceives its environment via sensors and interacts with it using actuators such as motors. For systems that might damage their environment through faulty behavior, a safety analysis is usually performed. Security properties of embedded systems are usually not analyzed at all. New developments in the areas of Industry 4.0 and the Internet of Things lead to more and more networking of embedded systems. Thereby, new causes for system failures emerge: vulnerabilities in software and communication components might be exploited by attackers to obtain control over a system. By targeted actions, a system may also be brought into a critical state in which it might harm itself or its environment. Examples of such vulnerabilities, and also of successful attacks, have become known over the last few years.
For this reason, in embedded systems safety as well as security has to be analyzed at least as far as it may cause safety critical failures of system components.
The goal of this thesis is to describe in one model how vulnerabilities from the security point of view might influence the safety of a system. The focus lies on the safety analysis of systems, so the safety analysis is extended to encompass security problems that may have an effect on the safety of a system. Component Fault Trees are very well suited to examining the causes of a failure and to finding failure scenarios composed of combinations of faults. A Component Fault Tree of an analyzed system is extended by additional Basic Events that may be caused by targeted attacks. Qualitative and quantitative analyses are extended to take the additional security events into account. Thereby, causes of failures that are based on safety as well as security problems may be found. Quantitative, or at least semi-quantitative, analyses allow security measures to be evaluated in more detail and justify the need for them.
The approach was applied to several example systems: The safety chain of the off-road robot RAVON, an adaptive cruise control, a smart farming scenario, and a model of a generic infusion pump were analyzed. The result of all example analyses was that additional failure causes were found which would not have been detected in traditional Component Fault Trees. In the analyses also failure scenarios were found that are caused solely by attacks, and that are not depending on failures of system components. These are especially critical scenarios which should not happen in this way, as they are not found in a classical safety analysis. Thus the approach shows its additional benefit to a safety analysis which is achieved by the application of established techniques with only little additional effort.

This thesis is concerned with a phase field model for martensitic transformations in metastable austenitic steels. Within the phase field approach an order parameter is introduced to indicate whether the present phase is austenite or martensite. The evolving microstructure is described by the evolution of the order parameter, which is assumed to follow the time-dependent Ginzburg-Landau equation. The elastic phase field model is enhanced in two different ways to take further phenomena into account. First, dislocation movement is considered by a crystal plasticity setting. Second, the elastic model for martensitic transformations is combined with a phase field model for fracture. Finite element simulations are used to study the single effects separately which contribute to the microstructure formation.
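Schematically, the evolution law referred to above has the standard time-dependent Ginzburg-Landau form (the notation here is ours, not necessarily the thesis's):

```latex
\frac{\partial \varphi}{\partial t} \;=\; -M\,\frac{\delta \Psi}{\delta \varphi}\,,
\qquad M > 0,
```

where \(\varphi\) is the order parameter distinguishing austenite from martensite, \(M\) a mobility constant, and \(\Psi\) the free energy functional, so that the microstructure evolves along the steepest descent of the free energy.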

The biodiversity of the cyanobacterial lichen flora of Vietnam is chronically understudied. Previous studies often neglected the lichens that inhabit lowlands, especially outcrops and sand dunes, which are common habitats in Vietnam.
A cyanolichen collection was gathered from the lowlands of central and southern Vietnam to study their diversity and distribution. At the same time, photobionts cultured from those lichens were used for a polyphasic taxonomic approach.
A total of 66 cyanolichens were recorded from lowland regions of central and southern Vietnam, doubling the number of cyanolichens known for Vietnam. 80% of them are new records for Vietnam, among which a new species, Pyrenopsis melanophthalma, and two new unidentified lichinacean taxa were described.
A notable floristic segregation by habitat was evident in the communities. Saxicolous Lichinales dominated coastal outcrops, accounting for 56% of lichen species richness. Lecanoralean cyanolichens and basidiolichens were found in the lowland forests. Precipitation correlated negatively with species richness in this study, indicating a competitive relationship.
Eleven cyanobacterial strains, including 8 baeocyte-forming members of the genus Chroococcidiopsis and 3 heterocyte-forming species of the genera Nostoc and Scytonema, were successfully isolated from lichens.
Phylogenetic and morphological analyses indicated that Chroococcidiopsis was the unique photobiont in Peltula. New morphological characters were found in two Chroococcidiopsis strains: (1) the purple content of cells in one photobiont strain that was isolated from a new lichinacean taxon, and (2) pseudofilamentous growth by binary division in a strain that was isolated from Porocyphus dimorphus.
With respect to heterocyte-forming cyanobionts, Scytonema was confirmed as the photobiont in the ascolichen Heppia lutosa by applying the polyphasic method. The genus Scytonema in the basidiolichen genus Cyphellostereum was morphologically examined in lichen thalli. For the first time, the intracellular haustorial system of the basidiolichen genus Cyphellostereum was noted and investigated.
Phylogenetic analysis of Nostoc photobiont strains from Pannaria tavaresii and Parmeliella brisbanensis indicated high photobiont selectivity in Parmeliella brisbanensis samples from different regions of the world, while low photobiont selectivity occurred among Pannaria tavaresii samples from different geographical regions.
The dissertation presented here is therefore an important contribution to the lichen flora of Vietnam and a significant improvement of the current knowledge about cyanolichens in this country.

The mechanical properties of semi-crystalline polymers depend extremely on their
morphology, which is dependent on the crystallization during processing. The aim of
this research is to determine the effect of various nanoparticles on morphology
formation and tensile mechanical properties of polypropylene under conditions
relevant in polymer processing and to contribute ultimately to the understanding of
this influence.
Based on the thermal analyses of samples during fast cooling, it is found that the
presence of nanoparticles enhances the overall crystallization process of PP. The results
suggest that an increase of the nucleation density/rate is a dominant process that
controls the crystallization process of PP in this work, which can help to reduce the
cycle time in the injection process. Moreover, the analysis of melting behaviors
obtained after each undercooling reveals that crystal perfection increases significantly
with the incorporation of TiO2 nanoparticles, while it is not influenced by the SiO2
nanoparticles.
This work also comprises an analysis of the influence of nanoparticles on the
microstructure of injection-molded parts. The results clearly show multi-layers along
the wall thickness. The spherulite size and the degree of crystallinity continuously
decrease from the center to the edge. Generally, both the spherulite size and the degree
of crystallinity decrease with higher SiO2 loading. In contrast, an increase in the
degree of crystallinity with an increasing TiO2 nanoparticle loading was detected.
The tensile strength exhibits a tendency to increase as the core
is reached. It decreases with the addition of nanoparticles, while the
elongation at break of nanoparticle-filled PP decreases from the skin to the core. With
increasing TiO2 loading, the elongation at break decreases.

Distributed systems are omnipresent nowadays and networking them is fundamental for the continuous dissemination and thus availability of data. Provision of data in real-time is one of the most important non-functional aspects that safety-critical networks must guarantee. Formal verification of data communication against worst-case deadline requirements is key to certification of emerging x-by-wire systems. Verification allows aircraft to take off, cars to steer by wire, and safety-critical industrial facilities to operate. Therefore, different methodologies for worst-case modeling and analysis of real-time systems have been established. Among them is deterministic Network Calculus (NC), a versatile technique that is applicable across multiple domains such as packet switching, task scheduling, system on chip, software-defined networking, data center networking and network virtualization. NC is a methodology to derive deterministic bounds on two crucial performance metrics of communication systems:
(a) the end-to-end delay data flows experience and
(b) the buffer space required by a server to queue all incoming data.
NC has already seen application in the industry, for instance, basic results have been used to certify the backbone network of the Airbus A380 aircraft.
The NC methodology for worst-case performance analysis of distributed real-time systems consists of two branches. Both share the NC network model but diverge regarding their respective derivation of performance bounds, i.e., their analysis principle. NC was created as a deterministic system theory for queueing analysis and its operations were later cast in a (min,+)-algebraic framework. This branch is known as algebraic Network Calculus (algNC). While algNC can efficiently compute bounds on delay and backlog, the algebraic manipulations do not allow NC to attain the most accurate bounds achievable for the given network model. These tight performance bounds can only be attained with the other, newly established branch of NC, the optimization-based analysis (optNC). However, the only optNC analysis that can currently derive tight bounds was proven to be computationally infeasible even for the analysis of moderately sized networks other than simple sequences of servers.
This thesis makes various contributions in the area of algNC: accuracy within the existing framework is improved, distributivity of the sensor network calculus analysis is established, and most significantly the algNC is extended with optimization principles. They allow algNC to derive performance bounds that are competitive with optNC. Moreover, the computational efficiency of the new NC approach is improved such that this thesis presents the first NC analysis that is both accurate and computationally feasible at the same time. It allows NC to scale to larger, more complex systems that require formal verification of their real-time capabilities.
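For the common case of a token-bucket arrival curve \(\alpha(t) = b + rt\) and a rate-latency service curve \(\beta(t) = R(t-T)^+\), the two NC bounds named above have closed forms. A small illustrative sketch (function names are ours; this is textbook NC, not the thesis's algNC optimization):

```python
def nc_delay_bound(b, r, R, T):
    """Worst-case delay for a token-bucket flow (burst b, sustained rate r)
    crossing a rate-latency server (rate R, latency T); requires r <= R."""
    assert r <= R, "stability requires the service rate to dominate the arrival rate"
    return T + b / R            # max horizontal deviation between the curves

def nc_backlog_bound(b, r, R, T):
    """Worst-case buffer occupancy for the same flow/server pair."""
    assert r <= R
    return b + r * T            # max vertical deviation between the curves

def concatenate(server1, server2):
    """End-to-end rate-latency server of two servers in tandem
    ('pay bursts only once'): rate min(R1, R2), latency T1 + T2."""
    (R1, T1), (R2, T2) = server1, server2
    return (min(R1, R2), T1 + T2)
```

Computing the delay bound once over the concatenated end-to-end server is tighter than summing per-hop delay bounds, since the burst term b/R is then paid only once.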

Gröbner bases are one of the most powerful tools in computer algebra and commutative algebra, with applications in algebraic geometry and singularity theory. From the theoretical point of view, these bases can be computed over any field using Buchberger's algorithm. In practice, however, the computational efficiency depends on the arithmetic of the coefficient field.
In this thesis, we consider Gröbner bases computations over two types of coefficient fields. First, consider a simple extension \(K=\mathbb{Q}(\alpha)\) of \(\mathbb{Q}\), where \(\alpha\) is an algebraic number, and let \(f\in \mathbb{Q}[t]\) be the minimal polynomial of \(\alpha\). Second, let \(K'\) be the algebraic function field over \(\mathbb{Q}\) with transcendental parameters \(t_1,\ldots,t_m\), that is, \(K' = \mathbb{Q}(t_1,\ldots,t_m)\). In particular, we present efficient algorithms for computing Gröbner bases over \(K\) and \(K'\). Moreover, we present an efficient method for computing syzygy modules over \(K\).
To compute Gröbner bases over \(K\), starting from the ideas of Noro [35], we proceed by joining \(f\) to the ideal to be considered, adding \(t\) as an extra variable. But instead of avoiding superfluous S-pair reductions by inverting algebraic numbers, we achieve the same goal by applying modular methods as in [2,4,27], that is, by inferring information in characteristic zero from information in characteristic \(p > 0\). For suitable primes \(p\), the minimal polynomial \(f\) is reducible over \(\mathbb{F}_p\). This allows us to apply modular methods once again, on a second level, with respect to the
modular factors of \(f\). The algorithm thus resembles a divide and conquer strategy and
is in particular easily parallelizable. Moreover, using a similar approach, we present an algorithm for computing syzygy modules over \(K\).
On the other hand, to compute Gröbner bases over \(K'\), our new algorithm first specializes the parameters \(t_1,\ldots,t_m\) to reduce the problem from \(K'[x_1,\ldots,x_n]\) to \(\mathbb{Q}[x_1,\ldots,x_n]\). The algorithm then computes a set of Gröbner bases of specialized ideals. From this set of Gröbner bases with coefficients in \(\mathbb{Q}\), it obtains a Gröbner basis of the input ideal using sparse multivariate rational interpolation.
At the current state, these algorithms are probabilistic in the sense that, as for other modular Gröbner basis computations, an effective final verification test is only known for homogeneous ideals or for local monomial orderings. The presented timings show that for most examples, our algorithms, which have been implemented in SINGULAR [17], are considerably faster than other known methods.
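A standard ingredient of such modular methods, used to lift results from \(\mathbb{Z}/m\) back to \(\mathbb{Q}\), is rational reconstruction via the half-extended Euclidean algorithm. A minimal sketch (ours, not the SINGULAR implementation):

```python
import math

def rational_reconstruction(a, m):
    """Given a = p/q mod m, recover the fraction (p, q); succeeds whenever
    |p|, q <= sqrt(m/2) and gcd(q, m) = 1. Returns None on failure."""
    bound = math.isqrt(m // 2)
    r0, r1 = m, a % m
    t0, t1 = 0, 1
    while r1 > bound:                  # run Euclid until the remainder is small
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    p, q = r1, t1
    if q < 0:                          # normalize to a positive denominator
        p, q = -p, -q
    if q == 0 or q > bound or (p - a * q) % m != 0:
        return None                    # no fraction within the bound exists
    return p, q
```

In a modular Gröbner basis computation, each rational coefficient of the characteristic-zero result is recovered this way from its image modulo a product of primes, and the reconstruction bound explains why enough primes must be accumulated before lifting.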

This thesis is concerned with interest rate modeling by means of the potential approach. The contribution of this work is twofold. First, by making use of the potential approach and the theory of affine Markov processes, we develop a general class of rational models to the term structure of interest rates which we refer to as "the affine rational potential model". These models feature positive interest rates and analytical pricing formulae for zero-coupon bonds, caps, swaptions, and European currency options. We present some concrete models to illustrate the scope of the affine rational potential model and calibrate a model specification to real-world market data. Second, we develop a general family of "multi-curve potential models" for post-crisis interest rates. Our models feature positive stochastic basis spreads, positive term structures, and analytic pricing formulae for interest rate derivatives. This modeling framework is also flexible enough to accommodate negative interest rates and positive basis spreads.
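In the potential approach underlying these models, zero-coupon bond prices arise from a positive supermartingale (the "potential") \(V\); schematically (notation ours, not necessarily the thesis's):

```latex
P(t,T) \;=\; \frac{\mathbb{E}\!\left[ V_T \,\middle|\, \mathcal{F}_t \right]}{V_t},
\qquad 0 \le t \le T .
```

Since \(V\) is a positive supermartingale, \(\mathbb{E}[V_T \mid \mathcal{F}_t] \le V_t\), so \(P(t,T) \le 1\) and the implied interest rates are nonnegative, which is the positivity property highlighted above.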

Human forest modification is among the largest global drivers of terrestrial degradation
of biodiversity, species interactions, and ecosystem functioning. One of the most
pertinent components, forest fragmentation, has a long history in ecological research
across the globe, particularly in lower latitudes. However, we still know little about how
fragmentation shapes temperate ecosystems, irrespective of the ancient status quo of
European deforestation. Furthermore, its interaction with another pivotal component
of European forests, silvicultural management, is practically unexplored. Hence,
answering the question how anthropogenic modification of temperate forests affects
fundamental components of forest ecosystems is essential basic research that has
been neglected thus far. Most basal ecosystem elements are plants and their insect
herbivores, as they form the energetic basis of the trophic pyramid. Furthermore, their
respective biodiversity, functional traits, and the networks of interactions they
establish are key for a multitude of ecosystem functions, not least ecosystem stability.
Hence, the thesis at hand aimed to disentangle this complex system of
interdependencies of human impacts, biodiversity, species traits and inter-species
interactions.
The first step lay in understanding how woody plant assemblages are shaped by
human forest modification. For this purpose, field investigations in 57 plots in the
hyperfragmented cultural landscape of the Northern Palatinate highlands (SW
Germany) were conducted, censusing > 4,000 tree/shrub individuals from 34 species.
Use of novel, integrative indices for different types of land-use allowed an accurate
quantification of biotic responses. Intriguingly, woody tree/shrub communities reacted
strikingly positively to forest fragmentation, with increases in alpha and beta diversity,
as well as proliferation of heat/drought/light adapted pioneer species. Contrarily,
managed interior forests were homogenized/constrained in biodiversity, with
dominance of shade/cold adapted commercial tree species. Comparisons with recently
unmanaged stands (> 40 a) revealed first indications of a nascent conversion to old-growth
conditions, with larger variability in light conditions and subsequent
community composition. Reactions to microclimatic conditions, the relationship
between associated species traits and the corresponding species pool, as well as
facilitative/constraining effects by foresters were discussed as underlying mechanisms.
Reactions of herbivore assemblages to forest fragmentation and the subsequent
changes in host plant communities were assessed by comprehensive sampling of >
1,000 live herbivores from 134 species in the forest understory. Diversity was,
similarly to plant communities, higher in fragmentation-affected habitats, particularly
in edges of continuous control forests. Furthermore, average trophic specialization
showed an identical pattern. Mechanistically, benefits from microclimatic conditions,
host availability, as well as pronounced niche differentiation are deemed responsible.
While communities were heterogeneous, with no segregation across habitats (small forest fragments, edges, and interior of control forests), vegetation diversity, herbivore
diversity, as well as trophic specialization were identified to shape community
composition. This probably reflects a gradient from generalist/species-poor to
specialist/species-rich herbivore assemblages.
Insect studies conducted in forest systems are doomed to incompleteness
without considering ‘the last biological frontier’, the tree canopies. To access their
biodiversity, relationship to edge effects, and their conservational value, the
arboricolous arthropod fauna of 24 beech (Fagus sylvatica) canopies was sampled via
insecticidal knockdown (‘fogging’). This resulted in an exhaustive collection of > 46,000
specimens from 24 major taxonomic/functional groups. Abundance distributions were
markedly negative exponential, indicating high abundance variability in tree crowns.
Individuals of six pertinent orders were identified to species level, returning > 3,100
individuals from 175 species and 52 families. This high diversity did marginally differ
across habitats, with slightly higher species richness in edge canopies. However,
communities in edge crowns were noticeably more heterogeneous than those in the
forest interior, possibly due to higher variability in environmental edge conditions. In
total, 49 species with protective value were identified, of which only one showed
habitat preferences (for near-natural interior forests). Among them, six species (all
beetles, Coleoptera) were classified as ‘priority species’ for conservation efforts. Hence,
beech canopies of the Northern Palatinate highlands can be considered strongholds of
insect biodiversity, incorporating many species of particular protective value.
The intricacy of plant-herbivore interaction networks and their relationship to
forest fragmentation is largely unexplored, particularly in Central Europe. Illumination
of this matter is all the more important, as ecological networks are highly relevant for
ecosystem stability, particularly in the face of additional anthropogenic disturbances,
such as climate change. Hence, plant-herbivore interaction networks (PHNs) were
constructed from woody plants and their associated herbivores, sampled alive in the
understory. Herbivory verification was achieved using no-choice-feeding assays, as well
as literature references. In total, networks across small forest fragments, edges, and
the forest interior consisted of 696 interactions. Network complexity and trophic niche
redundancy were compared across habitats using a rarefaction-like resampling
procedure. PHNs in fragmentation-affected forest habitats were significantly more
complex, as well as more redundant in their realized niches, despite being composed of
relatively more specialist species. Furthermore, network robustness to climate change
was quantified utilizing four different scenarios for climate change susceptibility of
involved plants. In this procedure, remaining herbivores in the network were measured
upon successive loss of their host plant species. Consistently, PHNs in edges (and to a
smaller degree in small fragments) withstood primary extinction of plant species
longer, making them more robust. This was attributed to the high prevalence of
heat/drought-adapted species, as well as to beneficial effects of network topography
(complexity and redundancy). Consequently, strong correlative relationships were
found between realized niche redundancy and climate change robustness of PHNs.
This was both the first time that biologically realistic extinctions (instead of, e.g., random extinctions) were used to measure network robustness, and that topographical
network parameters were identified as potential indicators for network robustness
against climate change.
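The extinction-sequence robustness measure described above can be sketched as follows. The network, species names, and susceptibility ordering below are hypothetical stand-ins for the sampled PHNs and the four climate-change scenarios:

```python
def robustness(network, extinction_order):
    """Area under the curve of the surviving herbivore fraction as host
    plants are removed one by one; higher values mean a more robust network."""
    total = len(set().union(*network.values()))
    remaining = dict(network)
    fractions = []
    for plant in extinction_order:
        remaining.pop(plant, None)
        # a herbivore survives while at least one of its host plants remains
        alive = set().union(*remaining.values()) if remaining else set()
        fractions.append(len(alive) / total)
    return sum(fractions) / len(fractions)

# hypothetical plant-herbivore network: host plant -> herbivores feeding on it
phn = {
    "Fagus":   {"h1", "h2", "h3"},
    "Quercus": {"h2", "h4"},
    "Corylus": {"h3", "h4", "h5"},
}
# scenario: the most climate-susceptible host goes extinct first
print(robustness(phn, ["Fagus", "Quercus", "Corylus"]))
```

Redundant networks, in which herbivores keep alternative hosts after each primary extinction, yield larger areas under this curve, which is the correlation the thesis reports.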
In synthesis, in the light of global biotic degradation due to human forest
modification, a differentiated view is necessary. Ecosystems react differently to
anthropogenic disturbances, and the particular features of Central European forests
(ancient deforestation, extensive management and, most importantly, a high richness
in open-forest plant species) appear to cause patterns partly opposed to those in
other biomes. Lenient microclimates and diverse plant communities facilitate
equally diverse herbivore assemblages, and hence complex and robust networks, in
contrast to the forest interior. Therefore, in the reality of extensively used cultural
landscapes, fragmentation-affected forest ecosystems, particularly forest edges, can be
perceived as reservoirs of biodiversity and ecosystem functionality. Nevertheless, as
practically all forest habitats considered in this thesis are under human cultivation,
recommendations for the ecological enhancement of all forest habitats are discussed.

The Context and Its Importance: In safety and reliability analysis, Minimal Cut Set (MCS) analysis generates a large amount of information.
The top-level event (TLE) at the root of the fault tree (FT) represents a hazardous state of the system being analyzed.
MCS analysis helps to analyze the fault tree qualitatively, and quantitatively when accompanied by quantitative measures.
The information reveals the bottlenecks in the fault tree design and thereby the weaknesses of the system being examined.
Safety analysis (which contains MCS analysis) is especially important for critical systems, whose use can harm the environment or humans, causing injuries or even death.
The computation of the minimal cut sets is performed by computers and generates a lot of information.
This phase is called MCS analysis I in this thesis.
This information is then analyzed by analysts to determine possible issues and to improve the safety of the system's design as early as possible.
This phase is called MCS analysis II in this thesis.
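To make the two phases concrete, the core computation of MCS analysis I can be sketched for a fault tree given as nested AND/OR gates over basic events; the tree below is a hypothetical example, not one from the thesis:

```python
from itertools import product

def cut_sets(node):
    """Enumerate cut sets of a fault tree given as nested tuples:
    ('OR', ...), ('AND', ...), or a basic-event name (str)."""
    if isinstance(node, str):
        return [frozenset([node])]
    op, *children = node
    child_sets = [cut_sets(c) for c in children]
    if op == 'OR':
        # any child's cut set already causes the gate to fire
        return [cs for sets in child_sets for cs in sets]
    # AND: combine one cut set from each child
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimal(sets):
    """Keep only cut sets with no strict subset in the collection (MCSs)."""
    return [s for s in sets if not any(t < s for t in sets)]

# hypothetical fault tree: TLE fires if (A and B) or C fail
tle = ('OR', ('AND', 'A', 'B'), 'C')
mcs = minimal(cut_sets(tle))
```

Real safety tools use far more scalable algorithms (e.g. on binary decision diagrams); the point here is only the qualitative output (the list of MCSs) that MCS analysis II must then let the analyst explore.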
The goal of my thesis was to develop interactive visualizations that support MCS analysis II of a single fault tree (FT).
The Methodology: Safety visualization (in this thesis, the visualization of Minimal Cut Set analysis II) is an emerging field, and no complete checklist of Minimal Cut Set analysis II requirements and gaps was available from the perspective of visualization and interaction capabilities.
I therefore conducted multiple studies using different methods and data sources (i.e., triangulation of methods and data) to determine these requirements and gaps before developing and evaluating visualizations and interactions supporting Minimal Cut Set analysis II.
Thus, the following approach was taken in my thesis:
1- First, a triangulation of mixed methods and data sources was conducted.
2- Then, four novel interactive visualizations and one novel interaction widget were developed.
3- Finally, these interactive visualizations were evaluated both objectively and subjectively (compared to multiple safety tools),
from the point of view of users and developers of the safety tools that perform MCS analysis I, with respect to their degree of support for MCS analysis II, and from the point of view of non-domain people, using empirical strategies.
The Spiral tool supports analysts with different types of vision, i.e., full vision and the color deficiencies protanopia, deuteranopia, and tritanopia. It supports 100 out of 103 (97%) requirements obtained from the triangulation and fills 37 out of 39 (95%) gaps. Its usability was rated high (better than their best currently used tools) by the users of the safety and reliability tools RiskSpectrum, ESSaRel, FaultTree+, and a self-developed tool, and at least similar to the best currently used tools from the point of view of the CAFTA tool developers. Its quality regarding its degree of support for MCS analysis II was higher than that of the FaultTree+ tool. The time spent discovering the critical MCSs in a problem of 540 MCSs (with a worst case of all MCSs having equal order) was less than a minute, at 99.5% accuracy. The scalability of the Spiral visualization was above 4000 MCSs for a comparison task. The Dynamic Slider reduces the interaction movements of previous sliders by up to 85.71% and solves their overlapping-thumb issues. In addition, the tool:
- provides the 3D model view of the system being analyzed,
- provides the ability to change the coloring of MCSs according to the color vision of the user,
- provides selection of a BE (i.e., multi-selection of MCSs), so that the analyst can observe the BEs' NoO and their quality,
- provides two interaction speeds for panning and zooming in the MCS, BE, and model views,
- provides an MCS, a BE, and a physical tab for starting the analysis from the MCSs, the BEs, or the physical parts.
It combines MCS analysis results with the model of an embedded system, enabling analysts to directly relate safety information to the corresponding parts of the system being analyzed, and provides an interactive mapping between the textual information of the BEs and MCSs and the parts related to the BEs.
Verifications and Assessments: I have evaluated all visualizations and the interaction widget both objectively and subjectively, and finally evaluated the resulting Spiral visualization tool, likewise both objectively and subjectively, regarding its perceived quality and its degree of support for MCS analysis II.

Functional data analysis is a branch of statistics that deals with observations \(X_1, \dots, X_n\) which are curves. We are interested in particular in time series of dependent curves and, specifically, consider the functional autoregressive process of order one (FAR(1)), which is defined as \(X_{n+1}=\Psi(X_{n})+\epsilon_{n+1}\) with independent innovations \(\epsilon_t\). Estimates \(\hat{\Psi}\) for the autoregressive operator \(\Psi\) have been investigated extensively during the last two decades, and their asymptotic properties are well understood. Particularly difficult, and different from scalar- or vector-valued autoregressions, are the weak convergence properties, which also form the basis of the bootstrap theory.
Although the asymptotics for \(\hat{\Psi}(X_{n})\) are still tractable, they are only useful for large enough samples. In applications, however, frequently only small samples of data are available, so that an alternative method for approximating the distribution of \(\hat{\Psi}(X_{n})\) is welcome. As a motivation, we discuss a real-data example in which we investigate a changepoint detection problem for a stimulus-response dataset obtained from the animal physiology group at the Technical University of Kaiserslautern.
To get an alternative for asymptotic approximations, we employ the naive or residual-based bootstrap procedure. In this thesis, we prove theoretically and show via simulations that the bootstrap provides asymptotically valid and practically useful approximations of the distributions of certain functions of the data. Such results may be used to calculate approximate confidence bands or critical bounds for tests.
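The residual-based bootstrap can be illustrated in the scalar special case, where a FAR(1) with one-dimensional "curves" reduces to an ordinary AR(1); the sample below is synthetic and all parameter values are hypothetical:

```python
import random

def fit_ar1(x):
    """Least-squares estimate of psi in X_{n+1} = psi * X_n + eps."""
    num = sum(a * b for a, b in zip(x[:-1], x[1:]))
    den = sum(a * a for a in x[:-1])
    return num / den

def residual_bootstrap(x, reps=500, seed=0):
    """Refit psi on series rebuilt from resampled, centred residuals."""
    rng = random.Random(seed)
    psi_hat = fit_ar1(x)
    resid = [b - psi_hat * a for a, b in zip(x[:-1], x[1:])]
    centred = [r - sum(resid) / len(resid) for r in resid]
    estimates = []
    for _ in range(reps):
        xb = [x[0]]
        for _ in range(len(x) - 1):
            xb.append(psi_hat * xb[-1] + rng.choice(centred))
        estimates.append(fit_ar1(xb))
    return psi_hat, estimates

# hypothetical sample: AR(1) with true psi = 0.5
data_rng = random.Random(1)
x = [0.0]
for _ in range(200):
    x.append(0.5 * x[-1] + data_rng.gauss(0.0, 1.0))

psi_hat, boot = residual_bootstrap(x, reps=200)
```

The empirical quantiles of `boot` then yield approximate confidence bounds for \(\psi\); the functional case replaces the scalar estimate by the operator estimate, but the resample-and-refit loop is the same.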

Since the early days of representation theory of finite groups in the 19th century, it was known that complex linear representations of finite groups live over number fields, that is, over finite extensions of the field of rational numbers.
While the related question of integrality of representations was answered negatively by the work of Cliff, Ritter and Weiss as well as by Serre and Feit, it was not known how to decide integrality of a given representation.
In this thesis we show that there exists an algorithm that, given a representation of a finite group over a number field, decides whether this representation can be made integral.
Moreover, we provide theoretical and numerical evidence for a conjecture, which predicts the existence of splitting fields of irreducible characters with integrality properties.
In the first part, we describe two algorithms for the pseudo-Hermite normal form, which is crucial when handling modules over rings of integers.
Using a newly developed computational model for ideal and element arithmetic in number fields, we show that our pseudo-Hermite normal form algorithms have polynomial running time.
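The pseudo-Hermite normal form generalizes the classical Hermite normal form from \(\mathbb{Z}\) to rings of integers. For orientation, a sketch of the classical integer case (not the pseudo-HNF algorithm of the thesis, which must track coefficient ideals):

```python
def hnf(rows):
    """Row-style Hermite normal form of an integer matrix, computed with
    unimodular row operations (Euclidean gcd steps on each column)."""
    m = [list(r) for r in rows]
    if not m:
        return m
    p = 0                                    # index of the next pivot row
    for col in range(len(m[0])):
        while True:
            nz = [r for r in range(p, len(m)) if m[r][col] != 0]
            if len(nz) <= 1:
                break
            nz.sort(key=lambda r: abs(m[r][col]))
            small, big = nz[0], nz[1]        # gcd step: reduce big by small
            q = m[big][col] // m[small][col]
            m[big] = [a - q * b for a, b in zip(m[big], m[small])]
        nz = [r for r in range(p, len(m)) if m[r][col] != 0]
        if not nz:
            continue
        m[p], m[nz[0]] = m[nz[0]], m[p]      # move pivot into place
        if m[p][col] < 0:
            m[p] = [-a for a in m[p]]        # normalize pivot sign
        for r in range(p):                   # reduce entries above pivot
            q = m[r][col] // m[p][col]
            m[r] = [a - q * b for a, b in zip(m[r], m[p])]
        p += 1
    return m

print(hnf([[2, 4], [3, 5]]))  # -> [[1, 1], [0, 2]]
```

Over a Dedekind domain with nontrivial class group, module generators cannot always be brought into this triangular shape by unimodular operations alone, which is why the pseudo-HNF attaches an ideal to each row.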
Furthermore, we address a range of algorithmic questions related to orders and lattices over Dedekind domains, including computation of genera, testing local isomorphism, computation of various homomorphism rings and computation of Solomon zeta functions.
In the second part we turn to the integrality of representations of finite groups and show that an important ingredient is a thorough understanding of the reduction of lattices at almost all prime ideals.
By employing class field theory and tools from representation theory we solve this problem and eventually describe an algorithm for testing integrality.
After running the algorithm on a large set of examples we are led to a conjecture on the existence of integral and nonintegral splitting fields of characters.
By extending techniques of Serre we prove the conjecture for characters with rational character field and Schur index two.

Thermoplastic composite materials are widely used in the automotive and aerospace industries. Due to limitations on shape complexity, different components
need to be joined. They can be joined by mechanical fasteners, adhesive bonding, or
both. However, these methods have several limitations. Components can also be joined
by fusion bonding thanks to a property of thermoplastics: they melt on heating and regain their shape on cooling. This property makes them ideal for
joining through fusion bonding by induction heating. Joining non-conducting or
non-magnetic thermoplastic composites requires an additional material that can generate heat under induction heating.
Polymers are neither conductive nor magnetic, so they have no inherent potential for induction heating. A susceptor sheet containing conductive materials (e.g. carbon fiber) or magnetic materials (e.g. nickel) can generate heat during induction. The
main issues related to induction heating are non-homogeneous and uncontrolled
heating.
In this work, it was observed that the heat generated by a susceptor sheet depends
on its filler, the filler's concentration, and its dispersion. It also depends on the coil, the magnetic
field strength, and the coupling distance. The combination of different fillers not only increased the heating rate but also changed the heating mechanism. A heating rate of 40 °C/s
was achieved with 15 wt.-% nickel-coated short carbon fibers and 3 wt.-% multiwalled carbon nanotubes, whereas nickel-coated short carbon fibers alone (15 wt.-%)
attained a heating rate of only 24 °C/s. In this study, electrical conductivity, thermal
conductivity, and magnetic properties were also tested. The results further
showed that electrical percolation was achieved at around 15 wt.-% with fibers and at (13-6) wt.-% with hybrid fillers. Induction heating tests were also performed with the
susceptor sheet oriented parallel and perpendicular, as the fibers were unidirectionally aligned.
The susceptor sheet was also tested with perforations.
The susceptor sheet showed homogeneous and fast heating, and can be used for
joining of non-conductive or non-magnetic thermoplastic composites.

A wide range of methods and techniques have been developed over the years to manage the increasing
complexity of automotive Electrical/Electronic systems. Standardization is an example
of such complexity managing techniques that aims to minimize the costs, avoid compatibility
problems and improve the efficiency of development processes.
A well-known and widely practiced standard in the automotive industry is AUTOSAR (Automotive
Open System Architecture). AUTOSAR is a common standard among OEMs (Original Equipment
Manufacturers), suppliers, and other involved companies. It was originally developed with
the goal of simplifying the overall development and integration process of Electrical/Electronic
artifacts from different functional domains, such as hardware, software, and vehicle communication.
However, the AUTOSAR standard, in its current state, cannot manage the problems
in some areas of system development. The validation and optimization of system configurations,
handled in this thesis, are examples of such areas, in which the AUTOSAR standard
so far offers no mature solutions.
Generally, systems developed on the basis of AUTOSAR must be configured in a way that all
defined requirements are met. In most cases, the number of configuration parameters and their
possible settings in AUTOSAR systems are large, especially if the developed system is complex
with modules from various knowledge domains. The verification process can consume a
lot of resources when testing all possible combinations of configuration settings to, ideally,
find the optimal configuration variant, since the number of test cases can be very high. This problem is
referred to in the literature as the combinatorial explosion problem.
Combinatorial testing is an active and promising area of functional testing that offers ideas
to solve the combinatorial explosion problem. Thereby, the focus is to cover the interaction
errors by selecting a sample of system input parameters or configuration settings for test case
generation. However, the industrial acceptance of combinatorial testing is still weak because of
the lack of real industrial examples.
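The core idea of combinatorial (here pairwise, i.e. 2-way) testing can be sketched as a greedy covering procedure; the brute-force candidate scan below is exponential and meant only for illustration on a toy parameter space, not for real AUTOSAR configurations:

```python
from itertools import combinations, product

def pairwise_suite(domains):
    """Greedily pick test cases until every pair of parameter values
    is covered at least once (2-way coverage)."""
    uncovered = set()
    for (i, vi), (j, vj) in combinations(enumerate(domains), 2):
        for a, b in product(vi, vj):
            uncovered.add((i, a, j, b))
    suite = []
    while uncovered:
        best, best_gain = None, -1
        for cand in product(*domains):       # exhaustive scan: demo only
            gain = sum(1 for (i, a, j, b) in uncovered
                       if cand[i] == a and cand[j] == b)
            if gain > best_gain:
                best, best_gain = cand, gain
        suite.append(best)
        uncovered = {(i, a, j, b) for (i, a, j, b) in uncovered
                     if not (best[i] == a and best[j] == b)}
    return suite

# three hypothetical binary configuration parameters
domains = [[0, 1], [0, 1], [0, 1]]
suite = pairwise_suite(domains)
# covers all 12 value pairs with fewer than the 8 exhaustive tests
```

The saving grows quickly: interaction errors between any two parameter settings are still guaranteed to be triggered, while the suite size grows roughly logarithmically in the number of parameters instead of exponentially.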
This thesis attempts to fill this gap between industry and academia in the area
of combinatorial testing and to emphasize the effectiveness of combinatorial testing in verifying
complex configurable systems.
The particular intention of the thesis is to provide a new, applicable approach to combinatorial
testing to fight the combinatorial explosion problem that emerged during the verification and
performance measurement of transport protocol parallel routing of an AUTOSAR gateway. The
proposed approach has been validated and evaluated by means of two real industrial examples
of AUTOSAR gateways with multiple communication buses and two different degrees of complexity
to illustrate its applicability.

Accurate path tracking control of tractors became a key technology for automation in agriculture. Increasingly sophisticated solutions, however, revealed that accurate path tracking control of implements is at least equally important. Therefore, this work focuses on accurate path tracking control of both tractors and implements. The latter, as a prerequisite for improved control, are equipped with steering actuators like steerable wheels or a steerable drawbar, i.e. the implements are actively steered. This work contributes both new plant models and new control approaches for those kinds of tractor-implement combinations. Plant models comprise dynamic vehicle models accounting for forces and moments causing the vehicle motion as well as simplified kinematic descriptions. All models have been derived in a systematic and automated manner to allow for variants of implements and actuator combinations. Path tracking controller design begins with a comprehensive overview and discussion of existing approaches in related domains. Two new approaches have been proposed combining the systematic setup and tuning of a Linear-Quadratic-Regulator with the simplicity of a static output feedback approximation. The first approach ensures accurate path tracking on slopes and curves by including integral control for a selection of controlled variables. The second approach, instead, ensures this by adding disturbance feedforward control based on side-slip estimation using a non-linear kinematic plant model and an Extended Kalman Filter. For both approaches a feedforward control approach for curved path tracking has been newly derived. In addition, a straightforward extension of control accounting for the implement orientation has been developed. All control approaches have been validated in simulations and experiments carried out with a mid-size tractor and a custom built demonstrator implement.
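The Linear-Quadratic-Regulator design combined with a static feedback gain can be illustrated in the scalar case, where state and output coincide; the thesis works with full multivariable tractor-implement models, and the numbers below are hypothetical:

```python
def dlqr_gain(a, b, q, r, iters=200):
    """Scalar discrete-time LQR: iterate the Riccati recursion
    P <- q + a^2 P - (a*b*P)^2 / (r + b^2 P) to its fixed point and
    return the static feedback gain k, so that u = -k x."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

# hypothetical lateral-error model x_{t+1} = a x_t + b u_t
k = dlqr_gain(a=1.0, b=0.1, q=1.0, r=0.01)
# closed-loop dynamics x_{t+1} = (a - b k) x_t drive the offset to zero
```

Weighting matrices q and r play the tuning role described above: increasing r penalizes steering effort and yields a gentler gain, while integral or feedforward terms (as in the two proposed approaches) are added on top of this state feedback to reject slope and curvature disturbances.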

In this thesis we developed a desynchronization design flow with the goal of easing the development effort of distributed embedded systems. The starting point of this design flow is a network of synchronous components. By transforming this synchronous network into a dataflow process network (DPN), we ensure that important properties that are difficult or theoretically impossible to analyze directly on DPNs are preserved by construction. In particular, both deadlock-freeness and buffer boundedness can be preserved after desynchronization. For the correctness of desynchronization, we developed a criterion consisting of two properties: a global property that demands the correctness of the synchronous network, and a local property that requires the latency-insensitivity of each local synchronous component. As the global property is also a correctness requirement of synchronous systems in general, we take this property as an assumption of our desynchronization. The local property, however, is in general not satisfied by all synchronous components and therefore needs to be verified before desynchronization. In this thesis we developed a novel technique for the verification of the local property that can be carried out very efficiently. Finally, we developed a model transformation method that translates a set of synchronous guarded actions (an intermediate format for synchronous systems) to an asynchronous actor description language (CAL). Our theorem ensures that, once the correctness verification has passed, the generated DPN of asynchronous processes (or actors) preserves the functional behavior of the original synchronous network. Moreover, by the correctness of the synchronous network, our theorem guarantees that the derived DPN is deadlock-free and can be implemented with only finitely bounded buffers.
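The flavor of the resulting DPN execution can be illustrated with a toy two-actor pipeline over a finitely bounded buffer (this is not the guarded-actions-to-CAL translation itself; the actor functions are hypothetical):

```python
from collections import deque

def run_dpn(source_tokens, capacity=2):
    """Two actors connected by a bounded FIFO: a producer that forwards
    input tokens and a consumer that doubles them. An actor fires only
    when its input tokens are available and its output buffer has room,
    so the schedule never exceeds the buffer bound."""
    inbox = deque(source_tokens)
    channel = deque()                           # bounded buffer
    output = []
    while inbox or channel:
        fired = False
        if inbox and len(channel) < capacity:   # producer enabled
            channel.append(inbox.popleft())
            fired = True
        if channel:                             # consumer enabled
            output.append(2 * channel.popleft())
            fired = True
        if not fired:                           # no actor enabled:
            break                               # a deadlock would show here
    return output

print(run_dpn([1, 2, 3]))  # -> [2, 4, 6]
```

The thesis's contribution is precisely that, for networks derived from latency-insensitive synchronous components, such schedules are guaranteed to exist with bounded `capacity` and to reproduce the synchronous behavior.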

Automata theory has given rise to a variety of automata models that consist
of a finite-state control and an infinite-state storage mechanism. The aim
of this work is to provide insights into how the structure of the storage
mechanism influences the expressiveness and the analyzability of the
resulting model. To this end, it presents generalizations of results about
individual storage mechanisms to larger classes. These generalizations
characterize those storage mechanisms for which the given result remains
true and for which it fails.
In order to speak of classes of storage mechanisms, we need an overarching
framework that accommodates each of the concrete storage mechanisms we wish
to address. Such a framework is provided by the model of valence automata,
in which the storage mechanism is represented by a monoid. Since the monoid
serves as a parameter to specifying the storage mechanism, our aim
translates into the question: For which monoids does the given
(automata-theoretic) result hold?
As a first result, we present an algebraic characterization of those monoids
over which valence automata accept only regular languages. In addition, it
turns out that for each monoid, this is the case if and only if valence
grammars, an analogous grammar model, can generate only context-free
languages.
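For intuition, a valence automaton over the monoid \((\mathbb{Z}, +)\) (a blind counter) can be simulated directly: each transition multiplies the stored monoid element by its effect, and a run is accepting iff it ends in a final state with the identity element 0. The concrete states and transitions below are a hypothetical example accepting the non-regular language \(\{a^n b^n\}\):

```python
def valence_accepts(transitions, start, finals, word):
    """Valence automaton over (Z, +): each transition carries a delta
    added to the stored monoid element; a word is accepted iff some
    nondeterministic run ends in a final state with stored value 0."""
    frontier = {(start, 0)}                  # reachable (state, value) pairs
    for sym in word:
        frontier = {(q2, v + d)
                    for (q, v) in frontier
                    for (q2, d) in transitions.get((q, sym), [])}
    return any(q in finals and v == 0 for (q, v) in frontier)

# a^n b^n: add 1 per 'a', subtract 1 per 'b', accept at value 0
trans = {
    ('p', 'a'): [('p', +1)],
    ('p', 'b'): [('q', -1)],
    ('q', 'b'): [('q', -1)],
}
print(valence_accepts(trans, 'p', {'p', 'q'}, 'aabb'))  # -> True
```

Swapping the monoid swaps the storage mechanism: a free group element instead of an integer yields a pushdown-like device, a direct product of copies of \(\mathbb{Z}\) yields several blind counters, and so on, which is exactly the parametrization studied in this work.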
Furthermore, we are concerned with closure properties: We study which
monoids result in a Boolean closed language class. For every language class
that is closed under rational transductions (in particular, those induced by
valence automata), we show: If the class is Boolean closed and contains any
non-regular language, then it already includes the whole arithmetical
hierarchy.
This work also introduces the class of graph monoids, which are defined by
finite graphs. By choosing appropriate graphs, one can realize a number of
prominent storage mechanisms, but also combinations and variants thereof.
Examples are pushdowns, counters, and Turing tapes. We can therefore relate
the structure of the graphs to computational properties of the resulting
storage mechanisms.
In the case of graph monoids, we study (i) the decidability of the emptiness
problem, (ii) which storage mechanisms guarantee semilinear Parikh images,
(iii) when silent transitions (i.e. those that read no input) can be
avoided, and (iv) which storage mechanisms permit the computation of
downward closures.

The thesis consists of two parts. In the first part we consider the stable Auslander--Reiten quiver of a block \(B\) of a Hecke algebra of the symmetric group at a root of unity in characteristic zero. The main theorem states that if the ground field is algebraically closed and \(B\) is of wild representation type, then the tree class of every connected component of the stable Auslander--Reiten quiver \(\Gamma_{s}(B)\) of \(B\) is \(A_{\infty}\). The main ingredient of the proof is a skew group algebra construction over a quantum complete intersection. Also, for these algebras the stable Auslander--Reiten quiver is computed in the case where the defining parameters are roots of unity. As a result, the tree class of every connected component of the stable Auslander--Reiten quiver is \(A_{\infty}\).
In the second part of the thesis we are concerned with branching rules for Hecke algebras of the symmetric group at a root of unity. We give a detailed survey of the theory initiated by I. Grojnowski and A. Kleshchev, describing the Lie-theoretic structure that the Grothendieck group of finite-dimensional modules over a cyclotomic Hecke algebra carries. A decisive role in this approach is played by various functors that give branching rules for cyclotomic Hecke algebras that are independent of the underlying field. We give a thorough definition of divided power functors that will enable us to reformulate the Scopes equivalence of a Scopes pair of blocks of Hecke algebras of the symmetric group. As a consequence we prove that two indecomposable modules that correspond under this equivalence have a common vertex. In particular, we verify the Dipper--Du Conjecture in the case where the blocks under consideration have finite representation type.

The present study investigated the effects of two methods of shared book reading on children's emergent literacy skills, such as language skills (expressive vocabulary and semantic skills) and grapheme awareness, i.e., before the alphabetic phase of reading acquisition (Lachmann & van Leeuwen, 2014), in home and kindergarten contexts. The two following shared book reading methods were investigated: Method I - literacy enrichment: 200 extra children's books were distributed in kindergartens and children were encouraged every week to borrow a book to take home and read with their parents. Further, a written letter was sent to the parents encouraging them to frequently read the books with their children at home. Method II - teacher training: kindergarten teachers participated in structured training which included formal instruction on how to promote child language development through shared book reading. The training was an adaptation of the Heidelberger Interaktionstraining für pädagogisches Fachpersonal zur Förderung ein- und mehrsprachiger Kinder - HIT (Buschmann & Jooss, 2011). In addition, the effects of the two methods in combination were investigated. Three questions were addressed in the present study: (1) What effect does Method I (literacy enrichment), Method II (teacher training), and the combination of both methods have on children's expressive vocabulary? (2) What effect does Method I (literacy enrichment), Method II (teacher training), and the combination of both methods have on children's semantic skills? (3) What effect does Method I (literacy enrichment), Method II (teacher training), and the combination of both methods have on children's grapheme awareness? Accordingly, 69 children, ranging in age from 3;0 to 4;8 years, were recruited from four kindergartens in the city of Kaiserslautern, Germany.
The kindergartens were divided into: kindergarten 1 - Method I (N = 13); kindergarten 2 - Method II (N = 18); kindergarten 3 - combination of both methods (N = 17); kindergarten 4 - control group (N = 21). Half of the participants (N = 35) reported having a migration background. All groups were similar with regard to socioeconomic status and literacy activities at home. In a pre-/posttest design, children performed three tests: expressive vocabulary (AWST-R 3-5; Kiese-Himmel, 2005), semantic skills (SETK 3-5, subtest ESR; Grimm, 2001), and grapheme awareness, a task developed with the purpose of testing children's familiarity with grapheme forms. The intervention period lasted six months. The data analysis was performed using IBM SPSS Statistics version 22. Regarding language skills, Method I showed no significant effects on children's expressive vocabulary and semantic skills. Method II showed significant effects on children's expressive vocabulary; in addition, children with a migration background benefited more from the method. Regarding semantic skills, no significant effects were found. No significant effects of the combination of both methods on children's language skills were found. For grapheme awareness, however, results showed positive effects for Method I and Method II, as well as for the combination of both methods. The combination group, as indicated by a large effect size, proved more effective than Method I and Method II alone. Moreover, the results indicated that in grapheme awareness, all children (with regard to age, gender, and migration background) benefited equally in all three intervention groups. Overall, it can be concluded from the results of the present study that, by providing access to good books, Method I may help parents involve themselves in the active process of their child's literacy development.
However, in order to improve language skills, access to books alone proved not to be enough. It is therefore recommended to combine access with additional support for parents on how to improve their language interactions with their children. With respect to Method II, the present study suggests that shared book reading supported by professional training is an important tool for children's language development. For grapheme awareness, it is concluded that, with the combination of the two methods, high exposure to shared book reading helps children to informally learn about the surface characteristics of print, acquire some familiarity with the visual characteristics of the letters, and learn to differentiate them from other visual patterns. Finally, the importance of having more programs that offer children different possibilities for adequate language interaction, as well as more experiences with print through shared book reading as shown in the present study, is suggested to organizations and institutions as well as to future research.

Inflation modeling is a very important tool for conducting an efficient monetary policy. This doctoral thesis reviewed inflation models, in particular the Phillips curve models of inflation dynamics. We focused on a well known and widely used model, the so-called three equation new Keynesian model which is a system of equations consisting of a new Keynesian Phillips curve (NKPC), an investment and saving (IS) curve and an interest rate rule.
We gave a detailed derivation of these equations. The interest rate rule used in this model is normally determined by using a Lagrangian method to solve an optimal control problem constrained by a standard discrete-time NKPC, which describes the inflation dynamics, and an IS curve, which represents the output gap dynamics. In contrast to the real world, this method assumes that the policy makers intervene continuously. This means that the costs resulting from changes in the interest rate are ignored. We also showed that approximation errors are made when one log-linearizes nonlinear equations in the derivation of the standard discrete-time NKPC.
We agreed with other researchers, as mentioned in this thesis, that ignoring such log-linear approximation errors and the costs of altering interest rates when determining the interest rate rule can lead to a suboptimal interest rate rule and hence to non-optimal paths of output gaps and the inflation rate.
To overcome such a problem, we proposed a stochastic optimal impulse control method. We formulated the problem as a stochastic optimal impulse control problem by considering the costs of changes in interest rates and the approximation error terms. In order to formulate this problem, we first transform the standard discrete-time NKPC and the IS curve into their high-frequency versions and hence into their continuous-time versions, where the error terms are described by a zero-mean Gaussian white noise with a finite and constant variance. After formulating this problem, we use the quasi-variational inequality approach to solve analytically a special case of the central bank problem, in which the inflation rate is supposed to be on target and the central bank has to optimally control the output gap dynamics. This method gives an optimal control band in which the output gap process has to be maintained, and an optimal control strategy, including the optimal size of intervention and the optimal intervention time, that can be used to keep the process within the optimal control band.
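A discretized toy simulation conveys the band policy (all parameters are hypothetical; the thesis derives the band and the restart point from the quasi-variational inequalities rather than fixing them ad hoc):

```python
import random

def simulate_band_policy(lower, upper, restart, sigma, steps, seed=0):
    """The output gap follows a discretized driftless diffusion; whenever
    it leaves the control band [lower, upper], the central bank intervenes
    and resets it to the restart point. Returns the final state and the
    number of interventions (each intervention would incur fixed and
    proportional costs)."""
    rng = random.Random(seed)
    x, interventions = 0.0, 0
    for _ in range(steps):
        x += rng.gauss(0.0, sigma)
        if x < lower or x > upper:
            x = restart
            interventions += 1
    return x, interventions

x, n = simulate_band_policy(lower=-1.0, upper=1.0, restart=0.0,
                            sigma=0.1, steps=5000)
```

Widening the band trades fewer (costly) interventions against larger excursions of the output gap, which is the qualitative effect of higher volatility and intervention costs reported in the numerical example.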
Finally, using a numerical example, we examined the impact of some model parameters on optimal control strategy. The results show that an increase in the output gap volatility as well as in the fixed and proportional costs of the change in interest rate lead to an increase in the width of the optimal control band. In this case, the optimal intervention requires the central bank to wait longer before undertaking another control action.

In this thesis, mathematical research questions related to recursive utility and stochastic differential utility (SDU) are explored.
First, a class of backward equations under nonlinear expectations is investigated: Existence and uniqueness of solutions are established, and the issues of stability and discrete-time approximation are addressed. It is then shown that backward equations of this class naturally appear as a continuous-time limit in the context of recursive utility with nonlinear expectations.
Then, the Epstein-Zin parametrization of SDU is studied. The focus is on specifications with both relative risk aversion and elasticity of intertemporal substitution greater than one. A concave utility functional is constructed and a utility gradient inequality is established.
Finally, consumption-portfolio problems with recursive preferences and unspanned risk are investigated. The investor's optimal strategies are characterized by a specific semilinear partial differential equation. The solution of this equation is constructed by a fixed point argument, and a corresponding efficient and accurate method to calculate optimal strategies numerically is given.

This thesis deals with risk measures based on utility functions and time consistency of dynamic risk measures. It is therefore aimed at readers interested in both the theory of static and dynamic financial risk measures in the sense of Artzner, Delbaen, Eber and Heath [7], [8], and the theory of preferences in the tradition of von Neumann and Morgenstern [134].
A main contribution of this thesis is the introduction of optimal expected utility (OEU) risk measures as a new class of utility-based risk measures. We introduce OEU, investigate its main properties, and its applicability to risk measurement and put it in perspective to alternative risk measures and notions of certainty equivalents. To the best of our knowledge, OEU is the only existing utility-based risk measure that is (non-trivial and) coherent if the utility function u has constant relative risk aversion. We present several different risk measures that can be derived with special choices of u and illustrate that OEU reacts in a more sensitive way to slight changes of the probability of a financial loss than value at risk (V@R) and average value at risk.
Further, we propose implied risk aversion as a coherent rating methodology for retail structured products (RSPs). Implied risk aversion is based on optimal expected utility risk measures and, in contrast to standard V@R-based ratings, takes into account both the upside potential and the downside risks of such products. In addition, implied risk aversion is easily interpreted in terms of an individual investor's risk aversion: A product is attractive (unattractive) for an investor if its implied risk aversion is higher (lower) than his individual risk aversion. We illustrate this approach in a case study with more than 15,000 warrants on the DAX® and find that implied risk aversion is able to identify favorable products; in particular, implied risk aversion is not necessarily increasing with respect to the strikes of call warrants.
Another main focus of this thesis is on consistency of dynamic risk measures. To this end, we study risk measures on the space of distributions, discuss concavity on the level of distributions and slightly generalize Weber's [137] findings on the relation of time consistent dynamic risk measures to static risk measures to the case of dynamic risk measures with time-dependent parameters. Finally, this thesis investigates how recursively composed dynamic risk measures in discrete time, which are time consistent by construction, can be related to corresponding dynamic risk measures in continuous time. We present different approaches to establish this link and outline the theoretical basis and the practical benefits of this relation. The thesis concludes with a numerical implementation of this theory.
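The greater sensitivity of utility-based measures compared with V@R can be illustrated on a two-point distribution; the plain CRRA certainty equivalent below is an illustrative stand-in, not the OEU measure itself:

```python
def value_at_risk(outcomes, probs, alpha=0.95):
    """V@R at level alpha of a discrete P&L distribution: the negative of
    the (1-alpha)-quantile of outcomes (negative V@R means no loss at
    that confidence level)."""
    pairs = sorted(zip(outcomes, probs))
    cum = 0.0
    for x, p in pairs:
        cum += p
        if cum >= 1.0 - alpha:
            return -x
    return -pairs[-1][0]

def crra_certainty_equivalent(outcomes, probs, theta=2.0):
    """Certainty equivalent under CRRA utility u(x) = x^(1-theta)/(1-theta)
    (theta != 1, positive outcomes); it moves continuously with any shift
    in probabilities."""
    eu = sum(p * x ** (1.0 - theta) / (1.0 - theta)
             for x, p in zip(outcomes, probs))
    return ((1.0 - theta) * eu) ** (1.0 / (1.0 - theta))
```

Raising the probability of the bad outcome from 1% to 4% leaves the 95% V@R unchanged, while the certainty equivalent drops noticeably; only once the probability crosses the 5% threshold does V@R jump.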

Towards A Non-tracking Web
(2016)

Today, many publishers (e.g., websites, mobile application developers) commonly use third-party analytics services and social widgets. Unfortunately, this scheme allows these third parties to track individual users across the web, creating privacy concerns and leading to reactions to prevent tracking via blocking, legislation and standards. While improving user privacy, these efforts do not consider the functionality third-party tracking enables publishers to use: to obtain aggregate statistics about their users and increase their exposure to other users via online social networks. Simply preventing third-party tracking without replacing the functionality it provides cannot be a viable solution; leaving publishers without essential services will hurt the sustainability of the entire ecosystem.
In this thesis, we present alternative approaches to bridge this gap between privacy for users and functionality for publishers and other entities. We first propose a general and interaction-based third-party cookie policy that prevents third-party tracking via cookies, yet enables social networking features for users when wanted, and does not interfere with non-tracking services for analytics and advertisements. We then present a system that enables publishers to obtain rich web analytics information (e.g., user demographics, other sites visited) without tracking the users across the web. While this system requires no new organizational players and is practical to deploy, it requires publishers to pre-define answer values for the queries, which may not be feasible for many analytics scenarios (e.g., search phrases used, free-text photo labels). Our second system complements the first system by enabling publishers to discover previously unknown string values to be used as potential answers in a privacy-preserving fashion and with low computation overhead for clients as well as servers. These systems suggest that it is possible to provide non-tracking services with (at least) the same functionality as today’s tracking services.
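The interaction-based cookie policy can be sketched as a simple decision rule; the rule below is an illustrative reduction of the idea, not the thesis's exact policy:

```python
def allow_cookies(request_origin, top_level_origin, interacted_origins):
    """Sketch of an interaction-based third-party cookie policy. First-party
    requests always carry cookies; a third-party request carries cookies
    only if the user has previously interacted with that origin as a first
    party (e.g. by logging in to the social network's own site)."""
    if request_origin == top_level_origin:
        return True   # first-party context: cookies allowed
    # third-party context: allowed only after explicit first-party interaction
    return request_origin in interacted_origins
```

Under such a rule, analytics trackers the user never visited receive no cookies, while social widgets keep working for users of that network.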

Membrane proteins are generally soluble only in the presence of detergent micelles or other membrane-mimetic systems, which renders the determination of the protein’s molar mass or oligomeric state difficult. Moreover, the amount of bound detergent varies drastically among different proteins and detergents. However, the type of detergent and its concentration have a great influence on the protein’s structure, stability, and functionality and the success of structural and functional investigations and crystallographic trials. Size-exclusion chromatography, which is commonly used to determine the molar mass of water-soluble proteins, is not suitable for detergent-solubilised proteins because
the protein–detergent complex has a different conformation and, thus, commonly exhibits
a different migration behaviour than globular standard proteins. Thus, calibration curves obtained with standard proteins are not useful for membrane-protein analysis. However,
the combination of size-exclusion chromatography with ultraviolet absorbance, static light scattering, and refractive index detection provides a tool to determine the molar mass of protein–detergent complexes in an absolute manner and allows for distinguishing the contributions of detergent and protein to the complex.
The goal of this thesis was to refine the standard triple-detection size-exclusion chromatography measurement and data analysis procedure for challenging membrane-protein samples, non-standard detergents, and difficult solvents such as concentrated denaturant solutions that were thought to elude routine approaches. To this end, the influence of urea on the performance of the method beyond direct influences on detergents and proteins was investigated with the help of the water-soluble bovine serum albumin. On the basis of
the obtained results, measurement and data analysis procedures were refined for different detergents and protein–detergent complexes comprising the membrane proteins OmpLA and Mistic from Escherichia coli and Bacillus subtilis, respectively.
The investigations on mass and shape of different detergent micelles and the compositions of protein–detergent complexes in aqueous buffer and concentrated urea solutions
showed that triple-detection size-exclusion chromatography provides valuable information
about micelle masses and shapes under various conditions. Moreover, it is perfectly suited for the straightforward analysis of detergent-suspended proteins in terms of composition and oligomeric state not only under native but, more importantly, also under denaturing conditions.
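The separation of protein and detergent contributions rests on their different refractive index increments; a minimal sketch of this conjugate analysis, with illustrative literature dn/dc values rather than values measured in this work:

```python
def protein_weight_fraction(dndc_complex, dndc_protein=0.187,
                            dndc_detergent=0.133):
    """Protein weight fraction w of a protein-detergent complex, assuming
    the complex refractive index increment is the weight-fraction-weighted
    average of the two components (dn/dc values in mL/g are typical
    literature figures)."""
    return ((dndc_complex - dndc_detergent)
            / (dndc_protein - dndc_detergent))

def detergent_per_protein(w_protein):
    """Grams of detergent bound per gram of protein in the complex."""
    return (1.0 - w_protein) / w_protein
```

Combined with the absolute complex mass from static light scattering, the weight fraction yields the amount of bound detergent, which, as noted above, varies drastically among proteins and detergents.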

Software is becoming increasingly concurrent: parallelization, decentralization, and reactivity necessitate asynchronous programming in which processes communicate by posting messages/tasks to others’ message/task buffers. Asynchronous programming has been widely used to build fast servers and routers, embedded systems and sensor networks, and is the basis of Web programming using Javascript. Languages such as Erlang and Scala have adopted asynchronous programming as a fundamental concept with which highly scalable and highly reliable distributed systems are built.
Asynchronous programs are challenging to implement correctly: the loose coupling between asynchronously executed tasks makes the control and data dependencies difficult to follow. Even subtle design and programming mistakes can introduce erroneous or divergent behaviors. As asynchronous programs are typically written to provide reliable, high-performance infrastructure, there is a critical need for analysis techniques to guarantee their correctness.
In this dissertation, I provide scalable verification and testing tools to make asynchronous programs more reliable. I show that the combination of counter abstraction and partial order reduction is an effective approach for the verification of asynchronous systems by presenting PROVKEEPER and KUAI, two scalable verifiers for two types of asynchronous systems. I also provide a theoretical result proving that a counter-abstraction-based algorithm called expand-enlarge-check is an asymptotically optimal algorithm for the coverability problem of branching vector addition systems, as which many asynchronous programs can be modeled. In addition, I present BBS and LLSPLAT, two testing tools for asynchronous programs that efficiently uncover many subtle memory violation bugs.
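Counter abstraction, one ingredient of the verification approach, can be illustrated in a few lines: identical, anonymous processes are counted per local state, so symmetric configurations collapse into one abstract state:

```python
from collections import Counter

def counter_abstract(configuration):
    """Counter abstraction of a configuration of identical, anonymous
    processes: only the number of processes in each local state is kept,
    so interleavings that merely permute processes map to the same
    abstract state."""
    return frozenset(Counter(configuration).items())
```

Two configurations that are permutations of each other become indistinguishable, which is what makes the abstract state space tractable for verifiers.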

The task of printed Optical Character Recognition (OCR), though considered ``solved'' by many, still poses several challenges. The complex grapheme structure of many scripts, such as Devanagari and Urdu Nastaleeq, greatly lowers the performance of state-of-the-art OCR systems.
Moreover, the digitization of historical and multilingual documents still requires much investigation. The lack of benchmark datasets further complicates the development of reliable OCR systems. This thesis aims to answer some of these challenges using contemporary machine learning technologies. Specifically, Long Short-Term Memory (LSTM) networks have been employed to OCR modern as well as historical monolingual documents. The excellent OCR results obtained on these have led us to extend their application to multilingual documents.
The first major contribution of this thesis is to demonstrate the usability of LSTM networks for monolingual documents. The LSTM networks yield very good OCR results on various modern and historical scripts, without using sophisticated features and post-processing techniques. The set of modern scripts includes modern English, Urdu Nastaleeq and Devanagari. To address the challenge of OCR of historical documents, this thesis focuses on Old German Fraktur script, medieval Latin script of the 15th century, and Polytonic Greek script. LSTM-based systems outperform the contemporary OCR systems on all of these scripts. To cater for the lack of ground-truth data, this thesis proposes a new methodology, combining segmentation-based and segmentation-free OCR approaches, to OCR scripts for which no transcribed training data is available.
Another major contribution of this thesis is the development of a novel multilingual OCR system. A unified framework for dealing with different types of multilingual documents has been proposed. The core motivation behind this generalized framework is the human reading ability to process multilingual documents, where no script identification takes place.
In this design, the LSTM networks recognize multiple scripts simultaneously without the need to identify different scripts. The first step in building this framework is the realization of a language-independent OCR system which recognizes multilingual text in a single step. This language-independent approach is then extended to script-independent OCR that can recognize multiscript documents using a single OCR model. The proposed generalized approach yields low error rate (1.2%) on a test corpus of English-Greek bilingual documents.
In summary, this thesis aims to extend the research in document recognition, from modern Latin scripts to Old Latin, to Greek and to other ``under-privileged'' scripts such as Devanagari and Urdu Nastaleeq.
It also attempts to add a different perspective in dealing with multilingual documents.
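Segmentation-free LSTM recognisers of the kind described typically emit one label per input frame and collapse this sequence during decoding; a minimal sketch of such greedy collapsing (the exact decoding used in the thesis may differ):

```python
def ctc_greedy_decode(frame_labels, blank="_"):
    """Greedy CTC-style decoding: collapse runs of repeated per-frame
    labels, then drop the blank symbol. A segmentation-free line
    recogniser may emit 'hello' as the frame sequence '__hhe_ll__llo_'."""
    out, prev = [], None
    for lab in frame_labels:
        if lab != prev and lab != blank:
            out.append(lab)
        prev = lab
    return "".join(out)
```

Because decoding never needs to segment the line into characters, the same mechanism works for connected scripts such as Urdu Nastaleeq and for multiscript models.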

Cells and organelles are enclosed by membranes that consist of a lipid bilayer harboring highly
diverse membrane proteins (MPs). These carry out vital functions, and α-helical MPs, in
particular, are of outstanding pharmacological importance, as they comprise more than half of
all drug targets. However, knowledge from MP research is limited, as MPs require membrane-mimetic environments to retain their native structures and functions and, thus, are not readily
amenable to in vitro studies. To gain insight into vectorial functions, as in the case of channels
and transporters, and into topology, which describes MP conformation and orientation in the
context of a membrane, purified MPs need to be reconstituted, that is, transferred from detergent
micelles into a lipid-bilayer system.
The ultimate goal of this thesis was to elucidate the membrane topology of Mistic, which is
an essential regulator of biofilm formation in Bacillus subtilis consisting of four α-helices. The
conformational stability of Mistic has been shown to depend on the presence of a hydrophobic
environment. However, Mistic is characterized by an uncommonly hydrophilic surface, and
its helices are significantly shorter than transmembrane helices of canonical integral MPs.
Therefore, the means by which its association with the hydrophobic interior of a lipid bilayer
is accomplished is a subject of much debate. To tackle this issue, Mistic was produced and
purified, reconstituted, and subjected to topological studies.
Reconstitution of Mistic in the presence of lipids was performed by lowering the detergent
concentration to subsolubilizing concentrations via addition of cyclodextrin. To fully exploit
the advantages offered by cyclodextrin-mediated detergent removal, a quantitative model was
established that describes the supramolecular state of the reconstitution mixture and allows
for the prediction of reconstitution trajectories and their cross points with phase boundaries.
Automated titrations enabled spectroscopic monitoring of Mistic reconstitutions in real time.
On the basis of the established reconstitution protocol, the membrane topology of Mistic was
investigated with the aid of fluorescence quenching experiments and oriented circular dichroism
spectroscopy. The results of these experiments reveal that Mistic appears to be an exception to the commonly observed transmembrane orientation of α-helical MPs, since it exhibits
a highly unusual in-plane topology, which goes in line with recent coarse-grained molecular
dynamics simulations.

Computer Vision (CV) problems, such as image classification and segmentation, have traditionally been solved by manual construction of feature hierarchies or incorporation of other prior knowledge. However, noisy images, varying viewpoints and lighting conditions, and clutter in real-world images make the problem challenging. Such tasks cannot be efficiently solved without learning from data. Therefore, many Deep Learning (DL) approaches have recently been successful for various CV tasks, for instance, image classification, object recognition and detection, action recognition, video classification, and scene labeling. The main focus of this thesis is to investigate a purely learning-based approach, particularly Multi-Dimensional LSTM (MD-LSTM) recurrent neural networks, to tackle challenging CV tasks, namely classification and segmentation on 2D and 3D image data. Due to the structural nature of MD-LSTM, the network learns directly from raw pixel values and takes the complex spatial dependencies of each pixel into account. This thesis provides several key contributions in the field of CV and DL.
Several MD-LSTM network architectural options are suggested based on the type of input and output, as well as the required task. In addition to the main layers, namely an input layer, a hidden layer, and an output layer, further layers such as a collapse layer and a fully connected layer can be added. First, a single Two-Dimensional LSTM (2D-LSTM) is applied directly to texture images for segmentation and shows improvement over other texture segmentation methods. Second, a 2D-LSTM layer with a collapse layer is applied for image classification on texture and scene images and provides accurate classification results. Furthermore, a deeper model with a fully connected layer is introduced to deal with more complex images for scene labeling and outperforms other state-of-the-art methods, including deep Convolutional Neural Networks (CNN). Here, several input and output representation techniques are introduced to achieve robust classification. Randomly sampled windows are transformed in scaling and rotation, and their predictions are integrated to obtain the final classification. To achieve multi-class image classification on scene images, several pruning techniques are introduced. This framework provides good results in automatic web-image tagging. The next contribution is an investigation of 3D data with MD-LSTM. The traditional cuboid order of computations in MD-LSTM is re-arranged in a pyramidal fashion. The resulting Pyramidal Multi-Dimensional LSTM (PyraMiD-LSTM) is easy to parallelize, especially for 3D data such as stacks of brain slice images. PyraMiD-LSTM was tested on 3D biomedical volumetric images and achieved the best known pixel-wise brain image segmentation results as well as competitive results on Electron Microscopy (EM) data for membrane segmentation.
To validate the framework, several challenging databases for classification and segmentation are proposed to overcome the limitations of current databases. First, scene images randomly collected from the web are used for scene understanding, i.e., a web-scene image dataset for multi-class image classification. For this task, the training and testing images are generated in different settings: for training, each image belongs to a single pre-defined category and is trained as in regular single-class image classification; for testing, images containing multiple classes are randomly collected via a web-image search engine by querying the categories. All scene images include noise, background clutter, and unrelated content, and are diverse in quality and resolution. This setting makes it possible to evaluate the database against real-world applications. Secondly, an automated blob-mosaics texture dataset generator is introduced for segmentation. Random 2D Gaussian blobs are generated and filled with random material textures containing diverse changes in illumination, scale, rotation, and viewpoint. The generated images are very challenging, since the related regions are hard to separate even visually.
Overall, the contributions in this thesis are major advancements in the direction of solving image analysis problems with Long Short-Term Memory (LSTM) without the need of any extra processing or manually designed steps. We aim at improving the presented framework to achieve the ultimate goal of accurate fine-grained image analysis and human-like understanding of images by machines.
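The pyramidal re-arrangement exploits the independence of grid cells along anti-diagonals; a sketch of the corresponding 2D scan order:

```python
def antidiagonal_groups(h, w):
    """Scan order for a 2D-LSTM over an h x w grid: cells on the same
    anti-diagonal d = i + j depend only on earlier diagonals (their top
    and left neighbours), so each group can be computed in parallel.
    This is the re-arrangement that pyramidal variants exploit,
    generalised to 3D for volumetric data."""
    return [[(i, d - i) for i in range(min(h, d + 1)) if d - i < w]
            for d in range(h + w - 1)]
```

Every cell appears in exactly one group, and the number of sequential steps drops from h*w to h+w-1, which is why the scheme parallelises well on 3D stacks.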

Most of today’s wireless communication devices operate on unlicensed bands with uncoordinated spectrum access, with the consequence that RF interference and collisions impair the overall performance of wireless networks. In the classical design of network protocols, both packets in a collision are considered lost, such that channel access mechanisms attempt to avoid collisions proactively. However, with the current proliferation of wireless applications, e.g., WLANs, car-to-car networks, or the Internet of Things, this conservative approach is increasingly limiting the achievable network performance in practice. Instead of shunning interference, this thesis questions the notion of "harmful" interference and argues that interference can, when generated in a controlled manner, be used to increase the performance and security of wireless systems. Using results from information theory and communications engineering, we identify the causes for reception or loss of packets and apply these insights to design system architectures that benefit from interference. Because the effects of signal propagation and channel fading, receiver design and implementation, and higher-layer interactions on reception performance are complex and hard to reproduce in simulations, we design and implement an experimental platform for controlled interference generation to strengthen our theoretical findings with experimental results. Following this philosophy, we introduce and evaluate system architectures that leverage interference.
First, we identify the conditions for successful reception of concurrent transmissions in wireless networks. We focus on the inherent ability of angular modulation receivers to reject interference when the power difference of the colliding signals is sufficiently large, the so-called capture effect. Because signal power fades over distance, the capture effect enables two or more sender–receiver pairs to transmit concurrently if they are positioned appropriately, in turn boosting network performance. Second, we show how to increase the security of wireless networks with a centralized network access control system (called WiFire) that selectively interferes with packets that violate a local security policy, thus effectively protecting legitimate devices from receiving such packets. WiFire’s working principle is as follows: a small number of specialized infrastructure devices, the guardians, are distributed alongside a network and continuously monitor all packet transmissions in the proximity, demodulating them iteratively. This enables the guardians to access the packet’s content before the packet fully arrives at the receiver. Using this knowledge the guardians classify the packet according to a programmable security policy. If a packet is deemed malicious, e.g., because its header fields indicate an unknown client, one or more guardians emit a limited burst of interference targeting the end of the packet, with the objective to introduce bit errors into it. Established communication standards use frame check sequences to ensure that packets are received correctly; WiFire leverages this built-in behavior to prevent a receiver from processing a harmful packet at all. 
This paradigm of "over-the-air" protection, which requires no prior modification of client devices, enables novel security services such as the protection of devices that cannot defend themselves because performance limitations prohibit the use of complex cryptographic protocols, or of devices that cannot be altered after deployment.
This thesis makes several contributions. We introduce the first software-defined radio based experimental platform that is able to generate selective interference with the timing precision needed to evaluate the novel architectures developed in this thesis. It implements a real-time receiver for IEEE 802.15.4, giving it the ability to react to packets in a channel-aware way. Extending this system design and implementation, we introduce a security architecture that enables a remote protection of wireless clients, the wireless firewall. We augment our system with a rule checker (similar in design to Netfilter) to enable rule-based selective interference. We analyze the security properties of this architecture using physical layer modeling and validate our analysis with experiments in diverse environmental settings. Finally, we perform an analysis of concurrent transmissions. We introduce a new model that captures the physical properties correctly and show its validity with experiments, improving the state of the art in the design and analysis of cross-layer protocols for wireless networks.
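The capture condition can be sketched with a log-distance path-loss model; the exponent and threshold below are illustrative values, not measurements from the experiments:

```python
import math

def received_power_dbm(p_tx_dbm, distance_m, exponent=2.7):
    """Log-distance path-loss model relative to 1 m (illustrative
    exponent; real environments vary)."""
    return p_tx_dbm - 10.0 * exponent * math.log10(distance_m)

def captured(p_signal_dbm, p_interferer_dbm, threshold_db=3.0):
    """Capture effect: an angular-modulation receiver can still decode
    the signal of interest despite a collision if it exceeds the
    interferer by at least the capture threshold (PHY-dependent)."""
    return p_signal_dbm - p_interferer_dbm >= threshold_db

# Two concurrent senders with equal transmit power: the nearby one is
# captured at the receiver, the far one is suppressed.
near = received_power_dbm(0.0, 5.0)
far = received_power_dbm(0.0, 50.0)
```

Because power fades with distance, appropriately positioned sender-receiver pairs can transmit concurrently, which is the basis of the throughput gains discussed above.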

Safety-related Systems (SRS) protect from the unacceptable risk resulting from failures of technical systems. The average probability of dangerous failure on demand (PFD) of these SRS in low demand mode is limited by standards. Probabilistic models are applied to determine the average PFD and verify the specified limits. In this thesis, an effective framework for probabilistic modeling of complex SRS is provided. This framework enables the computation of the average, instantaneous, and maximum PFD. In SRS, preventive maintenance (PM) is essential to achieve an average PFD in compliance with the specified limits. PM aims to reveal dangerous undetected failures and provides repair if necessary. The introduced framework pays special attention to the precise and detailed modeling of PM. Multiple previously neglected degrees of freedom of the PM are considered, such as two types of element-wise PM at arbitrarily variable times. As analyses show, these degrees of freedom have a significant impact on the average, instantaneous, and maximum PFD. The PM is optimized to improve the average or maximum PFD or both. A well-known heuristic nonlinear optimization method (the Nelder-Mead method) is applied to minimize the average PFD, the maximum PFD, or a weighted trade-off. A significant improvement of the objectives and improved protection are achieved. These improvements are achieved via the available degrees of freedom of the PM and without additional effort. Moreover, a set of rules is presented to decide, for a given SRS, whether significant improvements can be achieved by optimization of the PM. These rules are based on well-known characteristics of the SRS, e.g. redundancy or no redundancy, complete or incomplete coverage of PM. The presented rules support the decision whether the optimization is advantageous for a given SRS and should be applied.
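For a single channel with periodic proof testing, the instantaneous and average PFD can be sketched as follows; this is a textbook single-channel model, far simpler than the framework developed in the thesis:

```python
import math

def pfd_instantaneous(lam_du, t, proof_test_interval):
    """Instantaneous PFD of a single channel with dangerous-undetected
    failure rate lam_du [1/h] and perfect periodic proof tests: the PFD
    grows from zero after each test as 1 - exp(-lam_du * tau), with tau
    the time since the last test (all values illustrative)."""
    tau = t % proof_test_interval
    return 1.0 - math.exp(-lam_du * tau)

def pfd_average(lam_du, proof_test_interval, n=100_000):
    """Numerical time-average over one proof-test interval; for
    lam_du * T << 1 this approaches the textbook value lam_du * T / 2."""
    dt = proof_test_interval / n
    return sum(pfd_instantaneous(lam_du, (k + 0.5) * dt, proof_test_interval)
               for k in range(n)) / n
```

The saw-tooth shape of the instantaneous PFD is exactly why the timing of PM actions, the degree of freedom exploited above, affects the average and maximum PFD.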

We investigate the long-term behaviour of diffusions on the non-negative real numbers under killing at some random time. Killing can occur at zero as well as in the interior of the state space. The diffusion follows a stochastic differential equation driven by a Brownian motion. The diffusions we are working with will almost surely be killed. In large parts of this thesis we only assume the drift coefficient to be continuous. Further, we suppose that zero is regular and that infinity is natural. We condition the diffusion on survival up to time t and let t tend to infinity, looking for a limiting behaviour.
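The limiting behaviour sought above can be written in standard notation (with \(X\) the diffusion, \(\zeta\) its killing time, and \(\mathbb{P}_x\) the law of the process started at \(x > 0\)):

```latex
\lim_{t \to \infty} \mathbb{P}_x\left( X_t \in A \,\middle|\, \zeta > t \right)
```

When this limit exists, it defines a quasi-limiting distribution, closely related to the notion of a quasi-stationary distribution.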

In DS-CDMA, spreading sequences are allocated to users to separate different
links, namely base station to user in the downlink or user to base station in the uplink. These sequences are designed for optimum periodic correlation properties. Sequences with good periodic auto-correlation properties help in frame synchronisation at the receiver, while sequences with good periodic cross-correlation properties reduce cross-talk among users and hence the interference among them. In addition, they are designed for reduced implementation complexity so that they are easy to generate. In current systems, spreading sequences are allocated to users irrespective of their channel condition. In this thesis,
the method of allocating spreading sequences based on users’ channel condition
is investigated in order to improve the performance of the downlink. Different
methods of dynamically allocating the sequences are investigated, including optimum allocation through a simulation model, fast sub-optimum allocation through a mathematical model, and a proof-of-concept model using real-world channel measurements. Each model is evaluated to validate the improvement in gain achieved per link, the computational complexity of the allocation scheme, and its impact on the capacity of the network.
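The periodic correlation properties in question can be illustrated with a short m-sequence; the sequence and its two-valued autocorrelation are standard textbook material, not specific to the allocation schemes studied here:

```python
def m_sequence(length=7):
    """Length-7 m-sequence from the recurrence a[n] = a[n-2] XOR a[n-3]
    (primitive polynomial x^3 + x + 1), mapped from {0,1} to {-1,+1}."""
    a = [1, 0, 0]
    while len(a) < length:
        a.append(a[-2] ^ a[-3])
    return [2 * bit - 1 for bit in a[:length]]

def periodic_correlation(a, b):
    """Periodic (cyclic) correlation R[k] = sum_n a[n] * b[(n+k) mod N]
    of two equal-length ±1 sequences."""
    n = len(a)
    return [sum(a[i] * b[(i + k) % n] for i in range(n)) for k in range(n)]
```

An m-sequence shows the ideal two-valued periodic autocorrelation (N at zero shift, -1 elsewhere) that aids frame synchronisation; cross-correlations between allocated sequences are evaluated the same way.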
In cryptography, secret keys are used to ensure confidentiality of communication between the legitimate nodes of a network. In a wireless ad-hoc network, the
broadcast nature of the channel necessitates robust key management systems for
secure functioning of the network. Physical layer security is a novel method of
profitably utilising the random and reciprocal variations of the wireless channel to
extract secret key. By measuring the characteristics of the wireless channel within
its coherence time, reciprocal variations of the channel can be observed between
a pair of nodes. Using these reciprocal characteristics of the channel, a common shared secret key is extracted between a pair of nodes. The process of key extraction consists of four steps, namely channel measurement, quantisation, information reconciliation, and privacy amplification. The reciprocal channel variations are measured and quantised to obtain a preliminary key vector of bits (0, 1). Due to errors in measurement, quantisation, and additive Gaussian noise,
disagreement in the bits of the preliminary keys exists. These errors are corrected using error detection and correction methods to obtain a synchronised key at
both the nodes. Further, by the method of secure hashing, the entropy of the key
is enhanced in the privacy amplification stage. The efficiency of the key generation process depends on the method of channel measurement and quantisation.
Instead of quantising the channel measurements directly, enhancing their reciprocity first and then quantising appropriately makes the key generation process more efficient and faster. In this thesis, four methods of enhancing reciprocity are presented, namely l1-norm minimisation, hierarchical clustering, Kalman filtering, and polynomial regression. The enhanced measurements are quantised by binary and adaptive quantisation. Then, the entire process of key generation, from measuring the channel profile to obtaining a secure key, is validated using real-world channel measurements. Performance is evaluated in terms of bit disagreement rate, key generation rate, tests of randomness,
robustness test, and eavesdropper test. An architecture, KeyBunch, for effectively
deploying the physical layer security in mobile and vehicular ad-hoc networks is
also proposed. Finally, as a use-case, KeyBunch is deployed in a secure vehicular communication architecture to highlight the advantages offered by physical layer security.
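The first two steps of the key extraction pipeline, channel measurement and quantisation, together with the bit disagreement rate used for evaluation, can be sketched as follows (mean-threshold quantisation is the simplest variant, not necessarily the one used in the thesis):

```python
def quantise(samples):
    """Binary quantisation of channel measurements against their mean:
    the simplest quantiser applied after reciprocity enhancement."""
    mean = sum(samples) / len(samples)
    return [1 if s > mean else 0 for s in samples]

def bit_disagreement_rate(key_a, key_b):
    """Fraction of positions where the two nodes' preliminary keys differ;
    this is what information reconciliation must correct."""
    return sum(a != b for a, b in zip(key_a, key_b)) / len(key_a)
```

When the two nodes' measurements are well reciprocal, the preliminary keys agree despite small noise, which is exactly the effect the reciprocity enhancement methods aim for.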

Typically software engineers implement their software according to the design of the software
structure. Relations between classes and interfaces such as method-call relations and inheritance
relations are essential parts of a software structure. Accordingly, analyzing several types of
relations will benefit the static analysis of the software structure. The tasks of this analysis include, but are not limited to, understanding (legacy) software, checking guidelines, improving product lines, finding structure, and re-engineering existing software. Graphs with multi-type edges are a possible representation for these relations, considering the relations as edges while nodes represent the classes and interfaces of the software. This multi-type-edge graph can then be mapped to visualizations. However, the visualizations should cope with the multiplicity of relation types and with scalability, while at the same time enabling software engineers to recognize visual patterns.
To advance the usage of visualizations for analyzing the static structure of software systems,
I tracked different development phases of the interactive multi-matrix visualization (IMMV), concluding with an extended user study. In this study, visual structures were determined and classified systematically using IMMV in comparison to PNLV, yielding four categories: high degree, within-package edges, cross-package edges, and no edges. In addition to the structures found with these tools, other structures that are interesting for software engineers, such as cycles and hierarchical structures, need additional visualizations to display
them and to investigate them. Therefore, an extended approach for graph layout was presented
that improves the quality of the decomposition and the drawing of directed graphs
according to their topology based on rigorous definitions. The extension involves describing
and analyzing the algorithms for decomposition and drawing in detail giving polynomial time
complexity and space complexity. Finally, I handled visualizing graphs with multi-type edges
using small-multiples, where each tile is dedicated to one edge-type utilizing the topological
graph layout to highlight non-trivial cycles, trees, and DAGs for showing and analyzing the
static structure of software. Finally, I applied this approach to four software systems to show
its usefulness.

Advantage of Filtering for Portfolio Optimization in Financial Markets with Partial Information
(2016)

In a financial market we consider three types of investors trading with a finite
time horizon and with access to a bank account as well as multiple stocks: the
fully informed investor, the partially informed investor, whose only source of
information is the stock prices, and an investor who does not use this infor-
mation. The drift is modeled either as following linear Gaussian dynamics
or as a continuous-time Markov chain with finite state space. The
optimization problem is to maximize expected utility of terminal wealth.
The case of partial information is based on the use of filtering techniques.
Conditions to ensure boundedness of the expected value of the filters are
developed, in the Markov case also for positivity. For the Markov-modulated
drift, boundedness of the expected value of the filter relates strongly to port-
folio optimization: its effects are studied and quantified. The derivation of an
equivalent, lower-dimensional market is presented next. It is a type of Mutual
Fund Theorem that is shown here.
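For the linear Gaussian drift model, the partially informed investor's filter is a Kalman filter driven by the observed returns. A minimal sketch, assuming a constant drift and hypothetical parameter values (not taken from the thesis):

```python
import numpy as np

# Hypothetical setup: a constant drift mu observed through noisy daily
# log-returns r_k = mu*dt + sigma*sqrt(dt)*noise.
rng = np.random.default_rng(0)
dt, sigma = 1 / 250, 0.2
mu_true, n = 0.08, 250
returns = mu_true * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)

# Kalman filter: conditional mean m and variance v of mu given the returns.
m, v = 0.0, 0.04                  # assumed prior mean and variance
for r in returns:
    obs_var = sigma**2 * dt       # observation noise variance
    gain = v * dt / (v * dt**2 + obs_var)
    m = m + gain * (r - m * dt)   # update with the innovation
    v = v * (1 - gain * dt)       # variance shrinks with each observation

print(round(m, 3))                # filter estimate of the drift
```

The conditional variance v shrinks as observations accumulate; with a mean-reverting drift, a prediction step for m and v would be added before each update.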
Gains and losses emanating from the use of filtering are then discussed in
detail for different market parameters: For infrequent trading we find that
both filters need to comply with the boundedness conditions to be an advan-
tage for the investor. Losses are minimal in case the filters are advantageous.
For an increasing number of stocks, again the boundedness conditions need to be
met. Losses in this case depend strongly on the added stocks. The relation
between boundedness and portfolio optimization in the Markov model leads here to
increasing losses for the investor if the boundedness condition is to hold for
all numbers of stocks. In the Markov case, the losses for different numbers
of states are negligible in case more states are assumed than were originally
present. Assuming fewer states leads to high losses. Again for the Markov
model, a simplification of the complex optimal trading strategy for power
utility in the partial information setting is shown to cause only minor losses.
If the market parameters are such that short-selling and borrowing constraints
are in effect, these constraints may lead to large losses depending on how much
effect the constraints have. They can, though, also be an advantage for the
investor in case the expected value of the filters does not meet the conditions
for boundedness.
All results are implemented and illustrated with the corresponding numerical
findings.

Whole-body vibrations (WBV) have adverse effects on ride comfort and human health. Suspension seats have an important influence on the WBV severity. In this study, WBV were measured on a medium-sized compact wheel loader (CWL) during its typical operations. The effect of short-term exposure to the WBV on ride comfort was evaluated according to ISO 2631-1:1985 and ISO 2631-1:1997. ISO 2631-1:1997 and ISO 2631-5:2004 were adopted to evaluate the effect of long-term exposure to the WBV on human health. Reasons for the different evaluation results obtained according to ISO 2631-1:1997 and ISO 2631-5:2004 are explained in this study. The WBV measurements were carried out in cases where the driver wore a lap belt or a four-point seat harness and in the case where the driver did not wear any safety belt. The seat effective amplitude transmissibility (SEAT) and the seat transmissibility in the frequency domain in these three cases were analyzed to investigate the effect of a safety belt on the seat transmissibility. Seat tests were performed on a multi-axis shaking table in the laboratory to study the dynamic behavior of a suspension seat under the vibration excitations measured on the CWL. The WBV intensity was reduced by optimizing the vertical and the longitudinal seat suspension systems with the help of computational simulations. For the optimization, multi-body models of the seat-dummy system in the laboratory seat tests and of the seat-driver system in the field vibration measurements were built and validated.
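The SEAT value mentioned above compares the vibration at the seat surface with that at the seat base. A minimal sketch with synthetic, hypothetical signals (assuming both inputs are already frequency-weighted per ISO 2631-1):

```python
import numpy as np

# Hypothetical, already frequency-weighted acceleration records (m/s^2).
def rms(a):
    return np.sqrt(np.mean(np.square(a)))

def seat_value(seat_acc, platform_acc):
    """SEAT: ratio of weighted r.m.s. at the seat to that at the platform."""
    return rms(seat_acc) / rms(platform_acc)

t = np.linspace(0, 10, 5000)
platform = np.sin(2 * np.pi * 3 * t)        # 3 Hz base excitation
seat = 0.8 * np.sin(2 * np.pi * 3 * t)      # suspension attenuates it
print(round(seat_value(seat, platform), 2))  # → 0.8
```

A SEAT value below 1 means the suspension seat attenuates the weighted vibration; above 1 it amplifies it.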

The recently established technologies in the areas of distributed measurement and intelligent
information processing systems, e.g., Cyber Physical Systems (CPS), Ambient
Intelligence/Ambient Assisted Living systems (AmI/AAL), the Internet of Things
(IoT), and Industry 4.0, have increased the demand for the development of intelligent
integrated multi-sensory systems to serve rapidly growing markets [1, 2]. These trends increase
the significance of complex measurement systems that incorporate numerous advanced
methodological implementations, including electronic circuits, signal processing,
and multi-sensory information fusion. Designing such systems, in particular for multi-sensory
cognition applications, involves skill-demanding tasks, e.g., method selection, parameterization,
model analysis, and processing chain construction, which require immense
effort and are conventionally carried out manually by an expert designer. Moreover, the
strong technological competition imposes even more complicated design problems with
multiple constraints, e.g., cost, speed, power consumption, flexibility, and reliability.
Thus, the conventional human-expert-based design approach may not be able to cope
with the increasing demand in numbers, complexity, and diversity. To alleviate this issue,
the design automation approach has been the topic of numerous research works [3-14]
and has been commercialized in several products [15-18]. Additionally, the dynamic
adaptation of intelligent multi-sensor systems is a potential solution for developing
dependable and robust systems. The intrinsic evolution approach and self-x properties [19],
which include self-monitoring, -calibrating/trimming, and -healing/repairing, are among
the best candidates for this purpose. Motivated by the ongoing research trends and based
on the background of our research work [12, 13], which is among the pioneering efforts on this
topic, the research work of this thesis contributes to the design automation of intelligent
integrated multi-sensor systems.
In this research work, the Design Automation for Intelligent COgnitive systems with self-
X properties (DAICOX) architecture is presented with the aim of reducing the design
effort and providing high-quality, robust solutions for intelligent multi-sensor
systems. The DAICOX architecture is therefore conceived with the goals
listed below.
• Perform front-to-back complete processing chain design with automated method
selection and parameterization.
• Provide a rich choice of pattern recognition methods in the design method pool.
• Associate design information via an interactive user interface and visualization,
along with intuitive visual programming.
• Deliver high-quality solutions outperforming conventional approaches by using
multi-objective optimization.
• Gain adaptability, reliability, and robustness of designed solutions with self-x
properties.
Derived from the goals, several scientific methodological developments and implementations,
particularly in the areas of pattern recognition and computational intelligence,
will be pursued as part of the DAICOX architecture in the research work of this thesis.
The method pool is aimed to contain a rich choice of methods and algorithms covering
data acquisition and sensor configuration, signal processing and feature computation,
dimensionality reduction, and classification. These methods will be selected and parameterized
automatically by the DAICOX design optimization to construct a multi-sensory
cognition processing chain. A collection of non-parametric feature quality assessment
functions for the Dimensionality Reduction (DR) process will be presented.
In addition to standard DR methods, variations of the feature selection method, in
particular feature weighting, will be proposed. Three different classification categories
shall be incorporated in the method pool. Hierarchical classification approach will be
proposed and developed to serve as a multi-sensor fusion architecture at the decision
level. Besides multi-class classification, one-class classification methods, e.g., One-Class
SVM and NOVCLASS, will be presented to extend the functionality of the solutions, in particular
towards anomaly and novelty detection. DAICOX is conceived to effectively handle the
problem of method selection and parameter setting for a particular application yielding
high performance solutions. The processing chain construction tasks will be carried
out by meta-heuristic optimization methods, e.g., Genetic Algorithms (GA) and Particle
Swarm Optimization (PSO), with multi-objective optimization approach and model
analysis for robust solutions. In addition to the automated system design mechanisms,
DAICOX will facilitate the design tasks with intuitive visual programming and various
options of visualization. The design database concept of DAICOX aims to allow the
reusability and extensibility of the designed solutions gained from previous knowledge.
Thus, cooperative design between the machine and the knowledge of the design expert can also
be utilized to obtain fully enhanced solutions. In particular, the integration of self-x
properties as well as intrinsic optimization into the system is proposed to gain enduring
reliability and robustness. Hence, DAICOX will allow the inclusion of dynamically
reconfigurable hardware instances to the designed solutions in order to realize intrinsic
optimization and self-x properties.
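As an illustration of the kind of meta-heuristic search mentioned above (a toy sketch, not the DAICOX implementation), the following runs a genetic algorithm over binary feature masks, with a hypothetical fitness that trades classification quality against subset size:

```python
import random

random.seed(1)

# Toy GA feature selection: the assumed fitness rewards two "informative"
# features (indices 0 and 3) and penalizes subset size, mimicking a
# multi-objective trade-off via a weighted sum.
N_FEATURES = 8

def fitness(mask):
    informative = mask[0] + mask[3]       # hypothetical useful features
    return informative - 0.1 * sum(mask)  # accuracy proxy minus cost

def mutate(mask, p=0.1):
    return [b ^ (random.random() < p) for b in mask]

def crossover(a, b):
    cut = random.randrange(1, N_FEATURES)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(20)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)   # elitist selection
    parents = pop[:10]
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(10)]

best = max(pop, key=fitness)
print(best[0], best[3])  # the informative features should be selected
```

A real wrapper approach would replace the toy fitness with a cross-validated classifier score, and a Pareto-based method (e.g., NSGA-II-style ranking) would keep accuracy and subset size as separate objectives instead of a weighted sum.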
As a result of the research work in this thesis, a comprehensive intelligent multi-sensor
system design architecture with automated method selection, parameterization,
and model analysis is developed in compliance with open-source multi-platform software.
It is integrated with an intuitive design environment, which includes a visual programming
concept and design information visualizations. Thus, the design effort is minimized, as
investigated in three case studies with different application backgrounds, i.e., food analysis
(LoX), driving assistance (DeCaDrive), and magnetic localization. Moreover, DAICOX
achieved better solution quality than the manual approach in all cases:
the classification rate was increased by 5.4%, 0.06%, and 11.4% in the LoX,
DeCaDrive, and magnetic localization cases, respectively. The design time was reduced
by 81.87% compared to the conventional approach by using DAICOX in the LoX case
study. At the current state of development, a number of novel contributions of the thesis
are outlined below.
• Automated processing chain construction and parameterization for the design of
signal processing and feature computation.
• Novel dimensionality reduction methods, e.g., GA- and PSO-based feature selection
and feature weighting with multi-objective feature quality assessment.
• A modification of the non-parametric compactness measure for feature space quality
assessment.
• A decision-level sensor fusion architecture based on the proposed hierarchical
classification approach, i.e., H-SVM.
• A collection of one-class classification methods and a novel variation, i.e.,
NOVCLASS-R.
• Automated design toolboxes supporting front-to-back design with automated
model selection and information visualization.
In this research work, due to the complexity of the task, not all of the identified goals
have been comprehensively reached yet, nor has the complete architecture definition been
fully implemented. Based on the currently implemented tools and frameworks, ongoing
development of DAICOX is progressing towards the complete architecture. Potential
future improvements are the extension of the method pool with a richer choice of methods
and algorithms, processing chain breeding via a graph-based evolution approach, the
incorporation of intrinsic optimization, and the integration of self-x properties. With
these features, DAICOX will improve its aptness in designing advanced systems to serve
the rapidly growing technologies of distributed intelligent measurement systems, in
particular CPS and Industry 4.0.

In this thesis we develop a shape optimization framework for isogeometric analysis in the optimize-first, discretize-then setting. For the discretization we use
isogeometric analysis (iga) to solve the state equation, and we search for optimal designs in a space of admissible b-spline or nurbs combinations. Thus a quite
general class of functions for representing optimal shapes is available. For the
gradient-descent method, the shape derivatives indicate both stopping criteria and search directions and are determined isogeometrically. The numerical treatment requires solvers for partial differential equations and optimization methods, which introduces numerical errors. The tight connection between iga and geometry representation offers new ways of refining the geometry and the analysis discretization by the same means. Therefore, our main concern is to develop the optimize-first framework for isogeometric shape optimization as groundwork for both implementation and an error analysis. Numerical examples show that this ansatz is practical, and case studies indicate that it allows local refinement.

This thesis deals with the development of a tractor front loader scale which measures the payload continuously, independently of the center of gravity of the payload, and unaffected by the position and movements of the loader. To achieve this, a mathematical model of a common front loader is simplified, which makes it possible to identify its parameters by a repeatable and automatic procedure. By measuring accelerations as well as cylinder forces, the payload is determined continuously during the working process. Finally, a prototype was built and the scale was tested on a tractor.

In this thesis, collision-induced dissociation (CID) studies serve to elucidate relative stabilities and to determine bond strengths within a given structure type of transition metal complexes. Infrared multi-photon dissociation (IRMPD) spectroscopy combined with density functional theory (DFT) allows for structural analysis and provides insights into the coordination sphere of transition metal centers. The combination of CID and IRMPD experiments used here is a powerful tool to obtain a detailed and comprehensive characterization and understanding of the interactions between transition metals and organic ligands. The compounds' spectrum comprises mono- or oligonuclear transition metal complexes containing iron, palladium, and ruthenium as well as lanthanide-containing single molecule magnets (SMM). The presented investigations on the different transition metal complexes reveal manifold effects for each species, leading to valuable results. A fundamental understanding of metal-ligand interactions is mandatory for the development of new and better organometallic complexes with catalytic, optical, or magnetic properties.

The main goal of this thesis is twofold. First, the thesis aims at bridging the gap between existing Pattern Recognition (PR) methods of automatic signature verification and the requirements for their application in forensic science. This gap, attributable to various factors ranging from system definition to evaluation, prevents automatic methods from being used by Forensic Handwriting Examiners (FHEs). Second, the thesis presents novel signature verification methods developed particularly considering the implications of forensic casework, and outperforming the state-of-the-art PR methods.
The first goal of the thesis is motivated by four important factors, i.e., data, terminology, output reporting, and how the evaluation of automatic systems is carried out today. It is argued that the signature data traditionally used in PR are not close representatives of real-world data (especially the data available in forensic cases). The systems trained on such data are, therefore, not suitable for forensic environments. This situation can be tackled by providing more realistic data to PR researchers. To this end, various signature and handwriting datasets were gathered in collaboration with FHEs and are made publicly available through the course of this thesis. Special attention is given to disguised signatures, where authentic authors purposefully make their signatures look like a forgery. This genre was largely neglected in previous PR research.
The terminology used, in the two communities - PR and FHEs, differ greatly. In fact, even in PR, there is no standard terminology and people often differ in the usage of various terms particularly related to various types of forged signatures/handwriting. The thesis presents a new terminology that is equally useful for both forensic scientists and PR researchers. The proposed terminology is hoped to increase the general acceptability of automatic signature analysis systems in forensic science.
The outputs reported by general signature verification systems are not acceptable for FHEs and courts as they are either binary (yes/no) or score (raw evidence) based on similarity/difference. The thesis describes that automatic systems should rather report the probability of observing the evidence (e.g., a certain similarity/difference score) given the signature belongs to the acclaimed identity, and the probability of observing the same evidence given the signature does not belong to the acclaimed identity. This will take automatic systems from hard decisions to soft decisions, thereby enabling them to report likelihood ratios that actually represent the evidential value of the score rather than the raw score (evidence).
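The likelihood-ratio reporting described above can be sketched as follows; the two Gaussian score models are hypothetical stand-ins for the distributions a real system would estimate from data:

```python
from statistics import NormalDist

# Hypothetical score models: similarity scores of same-writer and
# different-writer comparisons assumed normally distributed.
genuine = NormalDist(mu=0.8, sigma=0.1)   # assumed same-writer model
forged = NormalDist(mu=0.4, sigma=0.15)   # assumed different-writer model

def likelihood_ratio(score):
    """LR = P(score | same writer) / P(score | different writer)."""
    return genuine.pdf(score) / forged.pdf(score)

# LR > 1 supports the same-writer hypothesis; LR < 1 supports forgery.
print(likelihood_ratio(0.75) > 1)   # → True
print(likelihood_ratio(0.30) < 1)   # → True
```

In this formulation the system reports the evidential value of the observed score rather than the raw score or a hard accept/reject decision.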
When automatic systems report soft decisions (as in the form of likelihood ratios), the thesis argues that there must be some methods to evaluate such systems. This thesis presents one such adaptation. The thesis argues that the state-of-the-art evaluation methods, like equal error rate and area under curve, do not address the needs of forensic science. These needs require an assessment of the evidential value of signature verification, rather than a hard/pure classification (accept/reject binary decision). The thesis demonstrates and validates a relatively simple adaptation of the current verification methods based on the Bayesian inference dependent calibration of continuous scores rather than hard classifications (binary and/or score based classification).
The second goal of this thesis is to introduce various local-features-based techniques which are capable of performing signature verification in forensic cases and of reporting results as anticipated by FHEs and courts. This is an important contribution of the thesis for the following two reasons. First, to the best of the author's knowledge, local feature descriptors are used here for the first time for the development of signature verification systems for forensic environments (particularly considering disguised signatures). Previously, such methods have been heavily used for recognition tasks, such as character and digit recognition, rather than for the verification of writing behaviors. Second, the proposed methods not only report the more traditional decisions (like scores, as usually reported in PR) but also Bayesian-inference-based likelihood ratios (suitable for courts and forensic cases).
Furthermore, the thesis also provides a detailed man vs. machine comparison for signature verification tasks. The men, in this comparison, are forensic scientists serving as forensic handwriting examiners with varying years of experience. The machines are the local-features-based methods proposed in this thesis, along with various other state-of-the-art signature verification systems. The proposed methods clearly outperform the state-of-the-art systems, and sometimes the human experts.
Finally, the thesis details various tasks that have been performed in the areas closely related to signature verification and its application in forensic casework. These include, developing novel local feature based methods for extraction of signatures/handwritten text from document images, hyper-spectral image analysis for extraction of signatures from forensic documents, and analysis of on-line signatures acquired through specialized pens equipped with Accelerometer and Gyroscope. These tasks are important as they enable the thesis to take PR systems one step further close to direct application in forensic cases.

This thesis treats the application of configurational forces for the evaluation of fracture processes in Antarctic ice shelves. FE simulations are used to analyze the influence of geometric scales, material parameters and boundary conditions on single surface cracks. A break-up event at the Wilkins Ice Shelf that coincided with a major temperature drop motivates the consideration of frost wedging as a mechanism for ice shelf disintegration. An algorithm for the evaluation of the crack propagation direction is used to analyze the horizontal growth of rifts. Using equilibrium considerations for a viscoelastic fluid, a method is introduced to compute viscous volume forces from measured velocity fields as loads for a linear elastic fracture mechanical analysis.

Attention-awareness is a key topic for the upcoming generation of computer-human interaction. A human moves his or her eyes to visually attend to a particular region in a scene. Consequently, he or she can process visual information rapidly and efficiently without being overwhelmed by the vast amount of information from the environment. This physiological function, called visual attention, provides a computer system with valuable information about the user from which to infer his or her activity and the surrounding environment. For example, a computer can infer whether the user is reading text or not by analyzing his or her eye movements. Furthermore, it can infer with which object he or she is interacting by recognizing the object the user is looking at. Recent developments in mobile eye tracking technologies enable us
to capture human visual attention in ubiquitous everyday environments. There are various types of applications into which attention-aware systems may be effectively incorporated. Typical examples are augmented reality (AR) applications such as Wikitude, which overlay virtual information onto physical objects. This type of AR application presents augmentative information about recognized objects to the user. However, if it presents the information of all recognized objects at once, the overflow of information could be obtrusive to the user. As a solution to this problem, attention-awareness can be integrated into a system. If a
system knows which object the user is attending to, it can present only the information of
relevant objects to the user.
Towards attention-aware systems in everyday environments, this thesis presents approaches
for analysis of user attention to visual content. Using a state-of-the-art wearable eye tracking device, one can measure the user's eye movements in a mobile scenario. By capturing the user's eye gaze position in a scene and analyzing the image where the eyes focus, a computer can recognize the visual content the user is currently attending to. I propose several image analysis methods to recognize the user-attended visual content in a scene image. For example, I present an application called Museum Guide 2.0. In Museum Guide 2.0, image-based object recognition and eye gaze analysis are combined together to recognize user-attended objects in a museum scenario. Similarly, optical character recognition
(OCR), face recognition, and document image retrieval are also combined with eye gaze analysis to identify the user-attended visual content in respective scenarios. In addition to Museum Guide 2.0, I present other applications in which these combined frameworks are effectively used. The proposed applications show that the user can benefit from active information presentation which augments the attended content in a virtual environment with
a see-through head-mounted display (HMD).
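The core of such gaze-based content recognition, reduced to its simplest form, maps the measured gaze point to a recognized region of the scene image. A minimal sketch with hypothetical bounding boxes (a real system like Museum Guide 2.0 combines this with image-based object recognition and fixation detection):

```python
# Hypothetical data: gaze point in scene-image coordinates, and bounding
# boxes assumed to come from an object recognizer.
def attended_object(gaze, boxes):
    """gaze: (x, y); boxes: {name: (x0, y0, x1, y1)} -> name or None."""
    x, y = gaze
    for name, (x0, y0, x1, y1) in boxes.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

boxes = {"painting": (100, 50, 300, 250), "statue": (400, 80, 520, 300)}
print(attended_object((150, 120), boxes))  # → painting
print(attended_object((10, 10), boxes))    # → None
```

In practice the gaze signal is noisy, so fixations are typically aggregated over a short time window before deciding which object is attended.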
In addition to the individual attention-aware applications mentioned above, this thesis
presents a comprehensive framework that combines all recognition modules to recognize the user-attended visual content when various types of visual information resources such as text, objects, and human faces are present in one scene. In particular, two processing strategies are proposed. The first one selects an appropriate image analysis module according to the user's current cognitive state. The second one runs all image analysis modules simultaneously and merges the analytic results later. I compare these two processing strategies in terms of user-attended visual content recognition when multiple visual information resources are present in the same scene.
Furthermore, I present novel interaction methodologies for a see-through HMD using eye gaze input. A see-through HMD is a suitable device for a wearable attention-aware system for everyday environments because the user can also view his or her physical environment
through the display. I propose methods for the user's attention engagement estimation with the display, eye gaze-driven proactive user assistance functions, and a method for interacting
with a multi-focal see-through display.
Contributions of this thesis include:
• An overview of the state-of-the-art in attention-aware computer-human interaction
and attention-integrated image analysis.
• Methods for the analysis of user-attended visual content in various scenarios.
• Demonstration of the feasibilities and the benefits of the proposed user-attended visual content analysis methods with practical user-supportive applications.
• Methods for interaction with a see-through HMD using eye gaze.
• A comprehensive framework for recognition of user-attended visual content in a complex
scene where multiple visual information resources are present.
This thesis opens a novel field of wearable computer systems in which computers can understand the user's attention in everyday environments and provide what the user wants. I will show the potential of such wearable attention-aware systems in everyday
environments for the next generation of pervasive computer-human interaction.

The central topic of this thesis is Alperin's weight conjecture, a problem concerning the representation theory of finite groups.
This conjecture, which was first proposed by J. L. Alperin in 1986, asserts that for any finite group the number of its irreducible Brauer characters coincides with the number of conjugacy classes of its weights. The blockwise version of Alperin's conjecture partitions this problem into a question concerning the number of irreducible Brauer characters and weights belonging to the blocks of finite groups.
A proof for this conjecture has not (yet) been found. However, the problem has been reduced to a question on non-abelian finite (quasi-) simple groups in the sense that there is a set of conditions, the so-called inductive blockwise Alperin weight condition, whose verification for all non-abelian finite simple groups implies the blockwise Alperin weight conjecture. Now the objective is to prove this condition for all non-abelian finite simple groups, all of which are known via the classification of finite simple groups.
In this thesis we establish the inductive blockwise Alperin weight condition for three infinite series of finite groups of Lie type: the special linear groups \(SL_3(q)\) in the case \(q>2\) and \(q \not\equiv 1 \bmod 3\), the Chevalley groups \(G_2(q)\) for \(q \geqslant 5\), and Steinberg's triality groups \(^3D_4(q)\).

In this thesis, we investigate several upcoming issues occurring in the context of conceiving and building a decision support system. We elaborate new algorithms for computing representative systems with special quality guarantees, provide concepts for supporting the decision makers after a representative system was computed, and consider a methodology of combining two optimization problems.
We review the original Box-Algorithm for two objectives by Hamacher et al. (2007) and discuss several extensions regarding coverage, uniformity, the enumeration of the whole nondominated set, and necessary modifications if the underlying scalarization problem cannot be solved to optimality. In a next step, the original Box-Algorithm is extended to the case of three objective functions to compute a representative system with desired coverage error. Besides the investigation of several theoretical properties, we prove the correctness of the algorithm, derive a bound on the number of iterations needed by the algorithm to meet the desired coverage error, and propose some ideas for possible extensions.
Furthermore, we investigate the problem of selecting a subset with desired cardinality from the computed representative system, the Hypervolume Subset Selection Problem (HSSP). We provide two new formulations for the bicriteria HSSP, a linear programming formulation and a \(k\)-link shortest path formulation. For the latter formulation, we propose an algorithm for which we obtain the currently best known complexity bound for solving the bicriteria HSSP. For the tricriteria HSSP, we propose an integer programming formulation with a corresponding branch-and-bound scheme.
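To make the HSSP concrete, the following sketch (a toy illustration, not the algorithms of the thesis) computes the two-dimensional hypervolume of a nondominated point set and solves a tiny HSSP instance by brute force:

```python
from itertools import combinations

def hypervolume_2d(points, ref):
    """Hypervolume of a nondominated set (minimization) w.r.t. reference
    point ref, which is assumed dominated by every point."""
    pts = sorted(points)          # ascending in f1 => descending in f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)  # strip of new dominated area
        prev_f2 = f2
    return hv

def hssp_bruteforce(points, k, ref):
    """Pick the k-subset with maximum hypervolume (exponential; demo only)."""
    return max(combinations(points, k),
               key=lambda s: hypervolume_2d(list(s), ref))

pts = [(1, 4), (2, 2), (3, 1)]
print(hypervolume_2d(pts, ref=(5, 5)))          # → 12.0
print(sorted(hssp_bruteforce(pts, 2, (5, 5))))  # → [(2, 2), (3, 1)]
```

The \(k\)-link shortest path formulation mentioned above exploits exactly this sorted structure of the bicriteria case to avoid enumerating all subsets.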
Moreover, we address the issue of how to present the whole set of computed representative points to the decision makers. Based on common illustration methods, we elaborate an algorithm guiding the decision makers in choosing their preferred solution.
Finally, we step back and look from a meta-level on the issue of how to combine two given optimization problems and how the resulting combinations can be related to each other. We come up with several different combined formulations and give some ideas for the practical approach.

The Event Segmentation Theory (Kurby & Zacks, 2008; Zacks, Speer, Swallow, Braver, & Reynolds, 2007) explains the perceptual organization of an ongoing activity into meaningful events. The classical event segmentation task (Newtson, 1973) involves watching an online video and indicating with key presses the event boundaries, i.e., when one event ends and the next one begins. The resulting hierarchical organization of object-based coarse events and action-based fine events gives insight into various cognitive processes. I used the Event Segmentation Theory to develop assistance and training systems for assembly workers in industrial settings at various levels - experts, new hires, and intellectually disabled people. Therefore, the first scientific question I asked was whether online and offline event segmentation result in the same event boundaries. This is important because assembly work requires not only watching activities online but processing the information offline, e.g., while performing the assembly task. By developing a special software tool that enables assessment of offline event boundaries, I established that online perception and offline elaboration lead to similar event boundaries. This study supports prior work suggesting that instructions should be structured around event boundaries.
Secondly, I investigated the importance of fine versus coarse event boundaries when learning the sequence of steps in virtual training, both for novices and experts in car door assembly. I found memory, tested by the ability to predict the next frame, to be enhanced for object-based coarse events from the nearest fine event boundary. However, virtual training did not improve memory for action-based fine events from the nearest coarse event boundary. I conjecture that trainees primarily acquire the sequence of object-based coarse events in an initial training. Based on the differences found in memory performance between experts and novices, I conclude that memory for action-based fine events depends on expertise.
Thirdly, I used the Event Segmentation Theory to investigate whether the simple and repetitive assembly tasks offered at workshops for intellectually disabled persons utilize their full cognitive potential. I analyzed the event segmentation performance of 32 intellectually disabled persons compared to 30 controls using a variety of event segmentation measures. I found specific deficits in event boundary detection and in the hierarchical organization of events for the intellectually disabled group. However, the results suggest that hierarchical organization is task-dependent. Because the event segmentation task accounted for differences in general cognitive ability, I propose the event segmentation task as a diagnostic method for assessing the need for support in executing assembly tasks.
Based on these three studies, I argue that the Event Segmentation Theory offers a framework for assessment and assistance of important attentional, perceptual, and memory processes related to assembly tasks. I demonstrate how practical applications can make use of this framework for the development of new computer-based assistance and training systems that are tailored to the users’ need for support and improve their quality of life.

Industrial design has a long history. With the introduction of Computer-Aided Engineering, industrial design was revolutionised. This new support changed the design workflow, and with the introduction of virtual prototyping, new challenges arose. These new engineering problems have triggered new basic research questions in computer science.
In this dissertation, I present a range of methods which support different components of the virtual design cycle, from modifications of a virtual prototype and optimisation of said prototype, to analysis of simulation results.
Starting with a virtual prototype, I support engineers by supplying intuitive discrete normal vectors which can be used to interactively deform the control mesh of a surface. I provide and compare a variety of different normal definitions which have different strengths and weaknesses. The best choice depends on the specific model and on an engineer’s priorities. Some methods have higher accuracy, whereas other methods are faster.
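To make the trade-off concrete, one widely used family of discrete normal definitions averages the normals of the triangles incident to a vertex, weighted by the triangle's interior angle at that vertex. The sketch below is illustrative only (plain Python on tuples); it is one common definition from the literature, not the thesis's specific set of definitions:

```python
import math

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def norm(a):
    n = math.sqrt(dot(a, a))
    return (a[0]/n, a[1]/n, a[2]/n)

def vertex_normal(vertices, triangles, vi):
    """Angle-weighted average of the incident triangle normals at vertex vi."""
    acc = [0.0, 0.0, 0.0]
    for tri in triangles:
        if vi not in tri:
            continue
        i = tri.index(vi)
        p = vertices[tri[i]]
        q = vertices[tri[(i + 1) % 3]]
        r = vertices[tri[(i + 2) % 3]]
        e1, e2 = sub(q, p), sub(r, p)
        face_n = norm(cross(e1, e2))
        # weight the face normal by the triangle's interior angle at vi
        angle = math.acos(max(-1.0, min(1.0, dot(norm(e1), norm(e2)))))
        for k in range(3):
            acc[k] += angle * face_n[k]
    return norm(tuple(acc))
```

For a flat patch, e.g. a unit square split into two triangles, the definition reproduces the exact normal (0, 0, 1) at every vertex; on curved meshes the weighting schemes differ in accuracy and cost.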
I further provide an automatic means of surface optimisation in the form of minimising total curvature. This minimisation reduces surface bending, and therefore, it reduces material expenses. The best results can be obtained for analytic surfaces, however, the technique can also be applied to real-world examples.
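The idea of reducing bending by minimising a curvature energy can be illustrated with a one-dimensional analogue: for a polyline with fixed endpoints, the sum of squared second differences plays the role of total curvature, and gradient descent flattens the interior. This is a toy sketch under that analogy, not the thesis's surface formulation:

```python
def bending_energy(y):
    """Discrete bending energy of a polyline: sum of squared second differences."""
    return sum((y[i-1] - 2*y[i] + y[i+1])**2 for i in range(1, len(y) - 1))

def relax(y, steps=2000, lr=0.05):
    """Gradient descent on the bending energy; the two endpoints stay fixed."""
    y = list(y)
    for _ in range(steps):
        grad = [0.0] * len(y)
        for i in range(1, len(y) - 1):
            d = y[i-1] - 2*y[i] + y[i+1]
            grad[i-1] += 2*d
            grad[i]   -= 4*d
            grad[i+1] += 2*d
        for i in range(1, len(y) - 1):   # interior vertices only
            y[i] -= lr * grad[i]
    return y
```

With both endpoints at height zero, the minimiser is the straight (flat) polyline, mirroring how minimising total curvature reduces surface bending.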
Moreover, I provide engineers with a curvature-aware technique to optimise mesh quality. This helps to avoid degenerated triangles which can cause numerical issues. It can be applied to any component of the virtual design cycle: as a direct modification of the virtual prototype (depending on the surface definition), during optimisation, or dynamically during simulation.
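Degenerate (needle- or cap-shaped) triangles are commonly detected with a quality measure such as the minimum interior angle; triangles below a threshold are flagged for repair. A minimal sketch (the 5-degree threshold is an illustrative choice, not taken from the thesis):

```python
import math

def min_angle_deg(a, b, c):
    """Smallest interior angle of the 2-D triangle with vertices a, b, c."""
    la, lb, lc = math.dist(b, c), math.dist(a, c), math.dist(a, b)
    angles = []
    for opp, s1, s2 in ((la, lb, lc), (lb, la, lc), (lc, la, lb)):
        # law of cosines; clamp to guard against rounding outside [-1, 1]
        cosv = (s1*s1 + s2*s2 - opp*opp) / (2*s1*s2)
        angles.append(math.degrees(math.acos(max(-1.0, min(1.0, cosv)))))
    return min(angles)

def is_degenerate(a, b, c, threshold_deg=5.0):
    """Flag triangles whose smallest interior angle falls below the threshold."""
    return min_angle_deg(a, b, c) < threshold_deg
```

An equilateral triangle (minimum angle 60 degrees) passes, while a nearly collinear needle triangle is flagged; such flagged triangles are the ones that tend to cause numerical issues in simulation.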
Finally, I have developed two different particle relaxation techniques that both support two components of the virtual design cycle. The first component for which they can be used is discretisation. To run computer simulations on a model, it has to be discretised. Particle relaxation starts from an initial sampling and improves it with the goal of uniform distances or curvature-awareness. The second component for which they can be used is the analysis of simulation results. Flow visualisation is a powerful tool in supporting the analysis of flow fields through the insertion of particles into the flow and the tracing of their movements. The particle seeding is usually uniform, e.g. for an integral surface, one could seed on a square. Integral surfaces, however, undergo strong deformations, and they can have highly varying curvature. Particle relaxation redistributes the seeds on the surface depending on surface properties like local deformation or curvature.
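The uniform-distance variant of particle relaxation can be illustrated in one dimension: each interior particle repeatedly moves toward the midpoint of its neighbours until the spacing equalises. This is a toy sketch; the thesis's techniques operate on curved surfaces and may use different update rules:

```python
def relax_1d(xs, iterations=500, step=0.2):
    """Relax particle positions on a line toward uniform spacing.
    Each interior particle moves a fraction `step` toward the midpoint
    of its two neighbours; the boundary particles stay fixed."""
    xs = sorted(xs)
    for _ in range(iterations):
        new = xs[:]
        for i in range(1, len(xs) - 1):
            mid = 0.5 * (xs[i-1] + xs[i+1])
            new[i] = xs[i] + step * (mid - xs[i])
        xs = new
    return xs
```

Starting from a badly clustered sampling, the gaps between neighbouring particles converge to a common value, which is exactly the uniform-distance goal described above.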

Today’s pervasive availability of computing devices with wireless communication and location- or inertial-sensing capabilities is unprecedented. The number of smartphones sold worldwide is still growing, and increasing numbers of sensor-enabled accessories are available which a user can wear in a shoe or at the wrist for fitness tracking, or temporarily put on to measure vital signs. Despite this availability of computing and sensing hardware, applications exploit only a small part of the information inherent in such sensor deployments. Most applications are built upon a vertical design which encloses a narrowly defined sensor setup and algorithms specifically tailored to the application’s purpose. Successful technologies, however, such as the OSI model that serves as the base for internet communication, have used a horizontal design that allows high-level communication protocols to run independently of the actual lower-level protocols and physical medium access. This thesis contributes to a more horizontal design of human activity recognition systems at two stages. First, it introduces an integrated toolchain to facilitate the entire process of building activity recognition systems and to foster sharing and reusing of individual components. At the second stage, a novel method for automatically integrating new sensors to increase a system’s performance is presented and discussed in detail.
The integrated toolchain is built around an efficient toolbox of parametrizable components for interfacing sensor hardware, synchronization and arrangement of data streams, filtering and extraction of features, classification of feature vectors, and interfacing output devices and applications. The toolbox emerged as open-source project through several research projects and is actively used by research groups. Furthermore, the toolchain supports recording, monitoring, annotation, and sharing of large multi-modal data sets for activity recognition through a set of integrated software tools and a web-enabled database.
The method for automatically integrating a new sensor into an existing system is, at its core, a variation of well-established principles of semi-supervised learning: (1) unsupervised clustering to discover structure in the data, (2) the assumption that cluster membership is correlated with class membership, and (3) obtaining a small number of labeled data points for each cluster, from which the cluster labels are inferred. In most semi-supervised approaches, however, the labels are the ground truth provided by the user. By contrast, the approach presented in this thesis uses a classifier trained on an N-dimensional feature space (the old classifier) to provide labels for a few points in an (N+1)-dimensional feature space, which are used to generate a new, (N+1)-dimensional classifier. The different factors that make a distribution difficult to handle are discussed, a detailed description of heuristics designed to mitigate the influence of such factors is provided, and a detailed evaluation is presented on a set of over 3000 sensor combinations from 3 multi-user experiments that have been used by a variety of previous studies of different activity recognition methods.
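The label-transfer step can be sketched with a toy example in which the "old" classifier sees only the feature x, while clustering runs in the extended (x, y) space; each cluster then inherits the majority label the old classifier assigns to a few of its members. All names, the threshold classifier, and the two-cluster k-means are illustrative, and none of the thesis's heuristics for difficult distributions appear here:

```python
def kmeans2(points, iters=50):
    """Two-cluster k-means on 2-D points, deterministically seeded with the
    first and last point (sufficient for this sketch)."""
    centers = [points[0], points[-1]]
    assign = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            d0 = (p[0]-centers[0][0])**2 + (p[1]-centers[0][1])**2
            d1 = (p[0]-centers[1][0])**2 + (p[1]-centers[1][1])**2
            assign[i] = 0 if d0 <= d1 else 1
        for c in (0, 1):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centers[c] = (sum(m[0] for m in members) / len(members),
                              sum(m[1] for m in members) / len(members))
    return assign

def old_classifier(x):
    """Stand-in for the existing N-dimensional classifier: a threshold on x."""
    return 1 if x > 0.5 else 0

def transfer_labels(points):
    """Cluster in the (N+1)-dimensional space, then give every cluster the
    majority label the old classifier assigns to a few of its members."""
    assign = kmeans2(points)
    labels = {}
    for c in (0, 1):
        votes = [old_classifier(points[i][0])
                 for i in range(len(points)) if assign[i] == c][:5]
        labels[c] = max(set(votes), key=votes.count)
    return [labels[a] for a in assign]
```

The labeled (x, y) points produced this way would then serve as training data for the new (N+1)-dimensional classifier; in practice the clusters overlap the class boundaries, which is exactly what the thesis's heuristics address.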

The overall goal of this work is to simulate rarefied flows inside geometries with moving boundaries. The behavior of a rarefied flow is characterized by the Knudsen number \(Kn\), which can be very small (\(Kn < 0.01\), continuum flow) or large (\(Kn > 1\), molecular flow). The intermediate region (\(0.01 < Kn < 1\)) is referred to as the transition flow regime.
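The regime thresholds above translate directly into a classification rule, sketched here for illustration (the boundary values themselves are assigned to the transition regime for simplicity):

```python
def flow_regime(kn):
    """Classify a rarefied gas flow by its Knudsen number, using the
    thresholds Kn < 0.01 (continuum) and Kn > 1 (molecular)."""
    if kn < 0.01:
        return "continuum"
    elif kn <= 1:
        return "transition"
    return "molecular"
```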
Continuum flows are mainly simulated with commercial CFD codes, which solve the Euler equations. For molecular flows, one uses statistical methods such as the Direct Simulation Monte Carlo (DSMC) method. In the transition region, the Euler equations are not adequate to model gas flows, and because of the rapid increase in particle collisions, the DSMC method tends to fail as well.
Therefore, we develop a deterministic method that is suitable for simulating rarefied gases at any Knudsen number and appropriate for flows inside geometries with moving boundaries. The method we use is the Finite Pointset Method (FPM), a mesh-free numerical method developed at the ITWM Kaiserslautern and mainly used to solve fluid dynamics problems.
More precisely, we develop a method in the FPM framework to solve the BGK model equation, which is a simplification of the Boltzmann equation. This equation is mainly used to describe rarefied flows.
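In the space-homogeneous case the BGK model reduces to relaxation of the distribution \(f\) toward a local equilibrium \(f_{eq}\) at rate \(1/\tau\). The toy sketch below shows one explicit time step and that it conserves mass whenever \(f_{eq}\) carries the same density; the uniform equilibrium used here is a placeholder for illustration, not a Maxwellian:

```python
def bgk_relax(f, f_eq, dt, tau):
    """One explicit step of the space-homogeneous BGK relaxation on a
    discrete velocity grid: f^{n+1} = f^n + (dt/tau) * (f_eq - f^n)."""
    return [fi + (dt / tau) * (ei - fi) for fi, ei in zip(f, f_eq)]

def mass(f, dv=1.0):
    """Zeroth velocity moment (density) on a uniform velocity grid."""
    return dv * sum(f)
```

Since the step is a convex combination of \(f\) and \(f_{eq}\) for \(dt \le \tau\), repeated application drives \(f\) toward \(f_{eq}\) while the density stays fixed, which is the behavior the full FPM-based solver must reproduce alongside transport.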
The FPM-based method is implemented for one- and two-dimensional physical and velocity spaces and for different ranges of the Knudsen number. Numerical examples are shown for problems with moving boundaries. It is seen that our method is superior to regular grid methods with respect to the implementation of boundary conditions. Furthermore, our results are comparable to reference solutions obtained with CFD and DSMC methods, respectively.