The demand for sustainability is continuously increasing. Thermoplastic composites have therefore become a focus of research due to their good weight-to-performance ratio. Nevertheless, the limiting factor for their use in some processes is the loss of consolidation during re-melting (deconsolidation), which reduces the part quality. Several studies dealing with deconsolidation are available. These studies each investigate a single material and process, which limits their usefulness for general interpretation as well as their comparability to other studies. There are two main approaches: the first identifies the internal void pressure as the main cause of deconsolidation, and the second identifies the fiber reinforcement network as the main cause. Due to their contradictory results and the limited variety of materials and processes, there is a strong need for a more comprehensive investigation covering several materials and processes.
This study investigates the deconsolidation behavior of 17 different materials and material configurations, considering commodity, engineering, and performance polymers as well as a carbon and two glass fiber fabrics. Based on the first law of thermodynamics, a deconsolidation model is proposed and verified by experiments. Universally applicable input parameters are proposed for the prediction of deconsolidation to minimize the required input measurements. The study revealed that the fiber reinforcement network is the main cause of deconsolidation, especially for fiber volume fractions higher than 48 %. The internal void pressure can promote deconsolidation when the specimen has been manufactured recently; in other cases, the internal void pressure as well as the surface tension prevents deconsolidation. During deconsolidation the polymer is displaced by the volume increase of the void. The polymer flow damps the progress of deconsolidation because of the internal friction of the polymer. The crystallinity and the thermal expansion lead to a reversible thickness increase during deconsolidation. Moisture can strongly accelerate deconsolidation and can increase the thickness several times over because of the vaporization of water. The model is also capable of predicting reconsolidation under the defined boundary conditions of pressure, time, and specimen size. At high pressures, matrix squeeze-out occurs, which reduces the accuracy of the model.
The proposed model was applied to thermoforming, induction welding, and thermoplastic tape placement. It is demonstrated that the load rate during thermoforming is the critical factor for achieving complete reconsolidation. The required load rate can be determined by the model and depends on the cooling rate, the forming length, the extent of deconsolidation, the processing temperature, and the final pressure. During induction welding, severe deconsolidation can occur because of the moisture remaining in the polymer in the molten state; the moisture cannot fully diffuse out of the specimen during the rapid heating. Therefore, more pressure is needed for complete reconsolidation than would be required for a dry specimen. Deconsolidation is an issue for thermoplastic tape placement, too. It limits the placement velocity because of insufficient cooling after compaction. If the specimen is locally in a molten state after compaction, it deconsolidates and causes residual stresses in the bond line, which decrease the interlaminar shear strength. It can be concluded that the study provides new knowledge and helps to optimize these processes by means of the developed model without requiring a large number of measurements.
Owing to its good specific strength and stiffness, continuous fiber-reinforced thermoplastic is an excellent lightweight construction material. However, deconsolidation during re-melting can lead to a loss of these good mechanical properties, which is why deconsolidation is undesirable. Deconsolidation has been investigated in many studies with differing results, usually considering a single material and a single process. A general interpretation and the comparability between the studies are therefore only possible to a limited extent. Two approaches are known from the literature. The first approach attributes deconsolidation mainly to the pressure difference between the internal void pressure and the ambient pressure. The second approach identifies the fiber reinforcement as the main cause. Because of the contradictory results and the limited number of materials and processing methods, there is a need for a comprehensive investigation covering several materials and processes. This study comprises three polymers (polypropylene, polycarbonate, and polyphenylene sulfide), three fabrics (twill weave, satin weave, and unidirectional), and two processes (autoclave and hot pressing) at different fiber volume fractions.
The influence of the void content on the interlaminar shear strength was investigated. It is known from the literature that the interlaminar shear strength decreases linearly with increasing void content. This could be confirmed for deconsolidation. The reduction of the interlaminar shear strength is smaller for thermoplastic matrices than for thermoset matrices and lies in the range of 0.5 % to 1.5 % per percent void content. Moreover, the decrease depends significantly on the matrix polymer.
In the case of thermally induced deconsolidation, the void content increases proportionally to the thickness of the specimen and is a measure of deconsolidation. The voids expand due to thermal gas expansion and can additionally be forced to expand by external forces, which leads to a pressure below ambient inside the void. The fiber reinforcement is the main cause of the thickness increase and hence of deconsolidation. The energy stored during compaction is released during deconsolidation. The decompaction pressure ranges from 0.02 MPa to 0.15 MPa for the investigated fabrics and fiber volume fractions. The surface tension impedes void expansion because the surface has to be enlarged, which requires additional energy. When neighboring voids come into contact, the surface tension causes them to merge, and energy is released due to the improved volume-to-surface ratio. The polymer flow slows down the development of the thickness increase because of the energy required for the viscous flow (internal friction). The higher the temperature, the lower the viscosity of the polymer, so that less energy is needed for further void growth. The reversible influence of the crystallinity and of the thermal expansion of the composite increases the thickness during heating and decreases it again during cooling. Moisture can have an enormous influence on deconsolidation: if moisture is still present in the composite above the melting temperature, it evaporates and can increase the thickness to a multiple of the original thickness.
The deconsolidation model is able to predict reconsolidation. However, the reconsolidation pressure must remain below a limit value (0.15 MPa for 50 x 50 mm² and 1.5 MPa for 500 x 500 mm² specimens), since otherwise more than 2 % of the polymer flows out of the specimen. Reconsolidation is an inverse deconsolidation and exhibits the same mechanisms in the opposite direction.
The developed model is based on the first law of thermodynamics and can predict the thickness during deconsolidation and reconsolidation. A homogeneous void distribution and a uniform, spherical void size were assumed, as well as conservation of mass. To reduce the effort for determining the input quantities, universally valid input parameters were determined that apply to a large number of configurations. The material behavior simulated with these universal input parameters agreed well with the actual material behavior within the defined restrictions. Only for configurations with a viscosity difference of more than 30 % between the melting temperature and the processing temperature are the universal input parameters not applicable. To demonstrate the relevance for industry, the effects of deconsolidation were simulated for three further processes. It was shown that the load rate during thermoforming is a key factor for complete reconsolidation. If the load is applied too slowly or if the final load is too low, the specimen solidifies before complete consolidation can be achieved. Deconsolidation can also occur during induction welding. In particular, moisture can lead to a strong increase in deconsolidation, caused by the very fast heating rates of more than 100 K/min. The moisture cannot fully diffuse out of the polymer during the short heating phase, so that it evaporates in the specimen when the melting temperature is reached. In thermoplastic tape placement, the placement velocity is limited by deconsolidation. After an apparently complete consolidation under the roller, the specimen can deconsolidate locally if the polymer below the surface is still molten. The resulting voids drastically reduce the interlaminar shear strength, by 5.8 % per percent void content in the investigated case. The cause is the crystallization in the bond zone, which generates residual stresses of the same order of magnitude as the actual shear strength.
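In schematic terms, the energy balance behind the model can be summarized as follows (the notation is illustrative and not that of the thesis): the work released by the decompacting fiber network and, if the internal void pressure exceeds the ambient pressure, by the expanding void gas is consumed by the creation of additional void surface and by the viscous dissipation of the displaced polymer,
\[
\mathrm{d}W_{\mathrm{fiber}} + \mathrm{d}W_{\mathrm{gas}} \;=\; \sigma\,\mathrm{d}A_{\mathrm{void}} + \mathrm{d}W_{\mathrm{visc}},
\]
so the thickness increases only as long as the driving terms on the left outweigh the resisting terms on the right.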
The focus of this work is to provide and evaluate a novel method for multifield topology-based analysis and visualization. Through this concept, called Pareto sets, one can identify critical regions in a multifield with arbitrarily many individual fields. It uses ideas found in graph optimization to find common behavior and areas of divergence between multiple optimization objectives. The connections between the latter areas can be reduced into a graph structure, allowing for an abstract visualization of the multifield to support data exploration and understanding.
The research question answered in this dissertation concerns the general capability and expandability of the Pareto set concept in the context of visualization and application, as well as its relations, drawbacks, and advantages with respect to other topology-based approaches. This question is answered in several steps, including consideration of and comparison with related work, a thorough introduction of the Pareto set itself, a framework for efficient implementation, and a discussion of the limitations of the concept and their implications for run time, suitable data, and possible improvements.
Furthermore, this work considers possible simplification approaches, such as integrated single-field simplification methods, but also the use of common structures identified through the Pareto set concept to smooth all individual fields at once. These considerations are especially important for real-world scenarios, where highly complex data must be visualized by removing small local structures without destroying information about larger, global trends.
To further emphasize possible improvements and expandability of the Pareto set concept, the thesis studies a variety of real-world applications. For each scenario, this work shows how the definition and visualization of the Pareto set is used and improved for data exploration and analysis.
In summary, this dissertation provides a complete and sound summary of the Pareto set concept as groundwork for future applications of multifield data analysis. Possible scenarios include those presented in the application section, but are also found in a wide range of research and industrial areas relying on uncertainty analysis, time-varying data, and ensembles of data sets in general.
Novel image processing techniques have been in development for decades, but most
of these techniques are barely used in real-world applications. This results in a gap
between image processing research and real-world applications; this thesis aims to
close this gap. In an initial study, the quantification, propagation, and communication
of uncertainty were determined to be key features in gaining acceptance for
new image processing techniques in applications.
This thesis presents a holistic approach based on a novel image processing pipeline,
capable of quantifying, propagating, and communicating image uncertainty. This
work provides an improved image data transformation paradigm, extending image
data using a flexible, high-dimensional uncertainty model. Based on this, a completely
redesigned image processing pipeline is presented. In this pipeline, each
step respects and preserves the underlying image uncertainty, allowing image uncertainty
quantification, image pre-processing, image segmentation, and geometry
extraction. The resulting uncertainty is communicated by utilizing meaningful visualization methodologies throughout each computational step.
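To make the idea of an uncertainty-preserving pipeline step concrete, the following sketch propagates a per-pixel variance image through a linear smoothing filter using first-order error propagation under an independence assumption between pixels; it is a minimal illustration and not the uncertainty model or pipeline developed in this thesis.
```python
import numpy as np
from scipy.ndimage import convolve

def smooth_with_uncertainty(values, variances, kernel):
    """Apply a linear filter and propagate the per-pixel variance.

    For a linear filter y = sum_i w_i * x_i with independent pixel errors,
    Var(y) = sum_i w_i**2 * Var(x_i); correlations between neighbouring
    pixels are ignored, which is a simplifying assumption."""
    smoothed = convolve(values, kernel, mode="reflect")
    propagated_var = convolve(variances, kernel ** 2, mode="reflect")
    return smoothed, propagated_var

# Toy image with a spatially varying noise level as its uncertainty.
rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[24:40, 24:40] = 1.0
noise_std = 0.05 + 0.15 * rng.random(truth.shape)     # per-pixel standard deviation
image = truth + rng.normal(0.0, noise_std)

kernel = np.full((3, 3), 1.0 / 9.0)                   # simple box filter
smoothed, variance = smooth_with_uncertainty(image, noise_std ** 2, kernel)
print(smoothed.shape, float(variance.mean()))
```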
The presented methods are examined qualitatively by comparison to the state of the art, in addition to user evaluations in different domains. To show the applicability of the presented approach to real-world scenarios, this thesis demonstrates domain-specific problems and the successful implementation of the presented techniques in these domains.
The focus of the present work is on continuous fiber- and long fiber-reinforced thermoplastic materials. For this purpose, the "multilayered hybrid (MLH)" concept was developed and applied to two semi-finished products, the MLH-roving and the MLH-mat. The MLH-roving is a roving (consisting of continuous fibers) that is divided into several layers by thermoplastic films. It is produced by a novel spreading method followed by thermal fixation and final multiple folding, which allows different fiber-matrix configurations to be realized. The MLH-mat is a glass mat-reinforced thermoplastic material that is suitable for high fiber contents of up to 45 vol.% and various matrix polymers, e.g. polypropylene (PP) and polyamide 6 (PA6). It is characterized by a high homogeneity in areal density and fiber orientation. The crash behavior and performance were investigated by dynamic crash tests on specimens based on the MLH-roving and the MLH-mat. The results of the crash specimens based on the long fiber-reinforced material (MLH-mat) and the continuous fiber-reinforced material (MLH-roving) were comparable, and the PA6 grades showed a better crash performance than the PP grades.
The present work deals with continuous fiber- and long fiber-reinforced thermoplastic materials. The multilayered hybrid (MLH) concept was developed and applied to the so-called MLH-roving and MLH-mat. The MLH-roving is a continuous fiber roving separated evenly into several sublayers by thermoplastic films through the sequential processes of spreading with a newly derived equation, thermal fixing, and folding. It is designed to allow a variety of material configurations as well as a variety of intermediate products. The MLH-mat is a glass mat reinforced thermoplastic (GMT)-like material that is suitable for high fiber contents up to 45 vol.% and various matrix polymers, e.g. polypropylene (PP) and polyamide 6 (PA6). It showed the homogeneity in areal density, random fiber orientation distribution, and reheating stability required for the molding process. The crash behavior and performance of the MLH-roving and MLH-mat materials were investigated by dynamic crash tests. Long fiber-reinforced materials (MLH-mat) were equivalent to continuous fiber-reinforced materials (MLH-roving), and PA6 grades showed higher crash performance than PP grades.
The gas-phase infrared and fragmentation spectra of a systematic group of trimetallic oxo-centered transition metal complexes are shown and discussed, with formate and acetate as bridging ligands and pyridine and water as axial ligands.
The stability of the complexes, as predicted by appropriate ab initio simulations, is demonstrated to agree with collision-induced dissociation (CID) measurements.
A broad range of DFT calculations is presented. They are used to simulate the geometry, the bonding situation, and the relative stability and flexibility of the discussed complexes, and to rationalize the observed trends. These simulations correctly predict the trends in the band splitting of the symmetric and asymmetric carboxylate stretch modes, but fail to account for anharmonic effects observed specifically in the mid-IR range.
The infrared spectra of the different ligands are introduced in a brief literature review. Their changes
in different environments or different bonding situations are discussed and visualized, especially the
interplay between fundamental, overtone, and combination bands, as well as Fermi resonances
between them.
A new variation of the infrared multiple photon dissociation (IRMPD) spectroscopy method is proposed and evaluated. In addition to the commonly considered total fragment yield, the cumulative fragment yield can be used to plot the wavelength-dependent relative abundance of the different fragmentation products. This is shown to provide valuable additional information on the excited chromophores and their coupling to specific fragmentation channels.
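As an illustration of the difference between the two quantities, the sketch below computes a total fragment yield and wavelength-resolved per-channel (cumulative) fragment yields from fragment ion intensities; the data layout and the normalization to the total ion count are assumptions made for the example, not the exact evaluation procedure of this work.
```python
import numpy as np

def fragment_yields(parent, fragments):
    """parent: (n_points,) parent ion intensity per wavelength point.
    fragments: (n_points, n_channels) intensity of each fragment channel.

    Returns the total fragment yield and the per-channel (cumulative)
    yields, both normalized to the total ion count at each point."""
    parent = np.asarray(parent, dtype=float)
    fragments = np.asarray(fragments, dtype=float)
    total_ions = parent + fragments.sum(axis=1)
    total_yield = fragments.sum(axis=1) / total_ions
    per_channel = fragments / total_ions[:, None]
    return total_yield, per_channel

# Toy data: five wavelength points, two fragmentation channels.
parent = np.array([900, 700, 400, 650, 880])
fragments = np.array([[10, 5], [40, 20], [120, 90], [60, 30], [15, 8]])
total, channels = fragment_yields(parent, fragments)
print(total)      # total fragment yield per wavelength point
print(channels)   # relative abundance of each fragmentation product
```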
High-quality homo- and heterometallic IRMPD spectra of oxo-centered carboxylate complexes of chromium and iron show the impact of the influencing factors: the metal centers, the bridging ligands with their carboxylate stretch and CH bend modes, and the terminal ligands.
In all four formate spectra, anharmonic effects are necessary to explain the observations: a combination band of the two carboxylate stretch modes and a Fermi resonance with the fundamental of the CH stretch mode, as well as a combination band of the asymmetric carboxylate stretch mode with the CH bend mode of the formate bridging ligand.
For the water adduct species, partial hydrolysis is proposed to account for the changes in the observed carboxylate stretch modes.
Appropriate experiments are suggested to verify the mode assignments that are not directly explained
by the ab initio calculations, the available experimental results or other means like deuteration
experiments.
Destructive diseases of the lung such as lung cancer or fibrosis are still often lethal. In the case of fibrosis of the liver, too, the only possible cure is transplantation.
In this thesis, we investigate 3D synchrotron radiation micro computed tomography (SR\( \mu \)CT) images of capillary blood vessels in mouse lungs and livers. The specimens show so-called compensatory lung growth as well as different states of pulmonary and hepatic fibrosis.
During compensatory lung growth, after resecting part of the lung, the remaining part compensates for this loss by extending into the empty space. This process is accompanied by active vessel growth.
In general, the human lung cannot compensate for such a loss. Thus, understanding this process in mice is important to improve treatment options in case of diseases like lung cancer.
In the case of fibrosis, the formation of scars within the organ's tissue forces the capillary vessels to grow to ensure blood supply.
Thus, the process of fibrosis as well as compensatory lung growth can be assessed by considering the capillary architecture.
As preparation of 2D microscopic images is faster, easier, and cheaper compared to SR\( \mu \)CT images, they currently form the basis of medical investigation. Yet, characteristics like direction and shape of objects can only properly be analyzed using 3D imaging techniques. Hence, analyzing SR\( \mu \)CT data provides valuable additional information.
For the fibrotic specimens, we apply image analysis methods well known from materials science. We measure the vessel diameter using the granulometry distribution function and describe the inter-vessel distance by the spherical contact distribution. Moreover, we estimate the directional distribution of the capillary structure. All features turn out to be useful for characterizing fibrosis based on the deformation of capillary vessels.
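To make these descriptors concrete, the sketch below computes a granulometry curve via morphological openings of increasing size and an empirical spherical contact distribution from a Euclidean distance transform, on a small synthetic 2D binary image (the thesis works on 3D data); the structuring element sizes and the toy image are illustrative assumptions.
```python
import numpy as np
from scipy import ndimage
from skimage.morphology import disk, binary_opening

def granulometry(binary, radii):
    """Fraction of the vessel phase surviving an opening with a disk of each radius."""
    total = binary.sum()
    return np.array([binary_opening(binary, disk(r)).sum() / total for r in radii])

def spherical_contact_cdf(binary, radii):
    """Empirical CDF of the distance from a background point to the nearest vessel point."""
    dist = ndimage.distance_transform_edt(~binary)   # distance to the vessel phase
    dist = dist[~binary]                             # evaluated at background points only
    return np.array([(dist <= r).mean() for r in radii])

# Toy binary image with a few disk-shaped 'vessels'.
img = np.zeros((128, 128), dtype=bool)
yy, xx = np.ogrid[:128, :128]
for cy, cx, r in [(30, 30, 6), (80, 50, 10), (60, 100, 4)]:
    img |= (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2

radii = np.arange(1, 15)
print(granulometry(img, radii))            # drops where openings remove thin structures
print(spherical_contact_cdf(img, radii))   # inter-vessel distance distribution
```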
It is already known that the most efficient mechanism of vessel growth forms small torus-shaped holes within the capillary structure, so-called intussusceptive pillars. Analyzing their location and number strongly contributes to the characterization of vessel growth. Hence, for all three applications, this is of great interest. This thesis provides the first algorithm to detect intussusceptive pillars in SR\( \mu \)CT images. After segmentation of the raw image data, our algorithm works automatically and allows for a quantitative evaluation of a large amount of data.
The analysis of SR\( \mu \)CT data using our pillar algorithm as well as the granulometry, spherical contact distribution, and directional analysis extends the current state-of-the-art in medical studies. Although it is not possible to replace certain 3D features by 2D features without losing information, our results could be used to examine 2D features approximating the 3D findings reasonably well.
Numerical Godeaux surfaces are minimal surfaces of general type with the smallest possible numerical invariants. It is known that the torsion group of a numerical Godeaux surface is cyclic of order \(m\leq 5\). A full classification has been given for the cases \(m=3,4,5\) by the work of Reid and Miyaoka. In each case, the corresponding moduli space is 8-dimensional and irreducible.
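Concretely, having the smallest possible numerical invariants means that a numerical Godeaux surface \(X\) is a minimal surface of general type with
\[
K_X^2 = 1 \quad\text{and}\quad p_g(X) = q(X) = 0,
\]
so in particular \(\chi(\mathcal{O}_X) = 1\).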
There exist explicit examples of numerical Godeaux surfaces for the orders \(m=1,2\), but a complete classification for these surfaces is still missing.
In this thesis we present a construction method for numerical Godeaux surfaces which is based on homological algebra and computer algebra and which arises from an experimental approach by Schreyer. The main idea is to consider the canonical ring \(R(X)\) of a numerical Godeaux surface \(X\) as a module over some graded polynomial ring \(S\). The ring \(S\) is chosen so that \(R(X)\) is finitely generated as an \(S\)-module and a Gorenstein \(S\)-algebra of codimension 3. We prove that the canonical ring of any numerical Godeaux surface, considered as an \(S\)-module, admits a minimal free resolution whose middle map is alternating. Moreover, we show that a partial converse of this statement is true under some additional conditions.
Afterwards we use these results to construct (canonical rings of) numerical Godeaux surfaces. Hereby, we restrict our study to surfaces whose bicanonical system has no fixed component but 4 distinct base points, in the following referred to as marked numerical Godeaux surfaces.
The particular interest of this thesis lies in marked numerical Godeaux surfaces whose torsion group is trivial. For these surfaces we study the fibration of genus 4 over \(\mathbb{P}^1\) induced by the bicanonical system. Catanese and Pignatelli showed that the general fibre is non-hyperelliptic and that the number \(\tilde{h}\) of hyperelliptic fibres is bounded by 3. The two explicit constructions of numerical Godeaux surfaces with a trivial torsion group due to Barlow and Craighero-Gattazzo, respectively, satisfy \(\tilde{h} = 2\).
With the method from this thesis, we construct an 8-dimensional family of numerical Godeaux surfaces with a trivial torsion group whose general element satisfies \(\tilde{h}=0\).
Furthermore, we establish a criterion for the existence of hyperelliptic fibres in terms of a minimal free resolution of \(R(X)\). Using this criterion, we verify experimentally the
existence of a numerical Godeaux surface with \(\tilde{h}=1\).
The growing computational power enables the use of the Population Balance Equation (PBE) to model the steady state and dynamic behavior of multiphase flow unit operations. The two-phase flow behavior inside liquid-liquid extraction equipment is characterized by different factors. These factors include: interactions among droplets (breakage and coalescence), different time scales due to the size distribution of the dispersed phase, and micro time scales of the interphase diffusional mass transfer process. As a result, the general PBE has no well-known analytical solution, and robust numerical solution methods with low computational cost are therefore highly desirable.
In this work, the Sectional Quadrature Method of Moments (SQMOM) (Attarakih, M. M., Drumm, C.,
Bart, H.-J. (2009). Solution of the population balance equation using the Sectional Quadrature Method of
Moments (SQMOM). Chem. Eng. Sci. 64, 742-752) is extended to take into account the continuous flow
systems in spatial domain. In this regard, the SQMOM is extended to solve the spatially distributed
nonhomogeneous bivariate PBE to model the hydrodynamics and physical/reactive mass transfer
behavior of liquid-liquid extraction equipment. Based on the extended SQMOM, two different steady
state and dynamic simulation algorithms for hydrodynamics and mass transfer behavior of liquid-liquid
extraction equipment are developed and efficiently implemented. At the steady state modeling level, a
Spatially-Mixed SQMOM (SM-SQMOM) algorithm is developed and successfully implemented in a one-dimensional physical spatial domain. The integral spatial numerical flux is closed using the mean mass droplet diameter based on the One Primary and One Secondary Particle Method (OPOSPM, the simplest case of the SQMOM). On the other hand, the hydrodynamic integral source terms are closed
using the analytical Two-Equal Weight Quadrature (TEqWQ). To avoid the numerical solution of the
droplet rise velocity, an analytical solution based on the algebraic velocity model is derived for the
particular case of unit velocity exponent appearing in the droplet swarm model. In addition to this, the
source term due to mass transport is closed using OPOSPM. The resulting system of ordinary differential
equations with respect to space is solved using the MATLAB adaptive Runge–Kutta method (ODE45). At
the dynamic modeling level, the SQMOM is extended to a one-dimensional physical spatial domain and
resolved using the finite volume method. To close the mathematical model, the required quadrature nodes
and weights are calculated using the analytical solution based on the Two Unequal Weights Quadrature
(TUEWQ) formula. By applying the finite volume method to the spatial domain, a semi-discrete ordinary
differential equation system is obtained and solved. Both steady state and dynamic algorithms are
extensively validated at analytical, numerical, and experimental levels. At the numerical level, the
predictions of both algorithms are validated using the extended fixed pivot technique as implemented in
PPBLab software (Attarakih, M., Alzyod, S., Abu-Khader, M., Bart, H.-J. (2012). PPBLAB: A new
multivariate population balance environment for particulate system modeling and simulation. Procedia
Eng. 42, pp. 144-562). At the experimental validation level, the extended SQMOM is successfully used
to model the steady state hydrodynamics and physical and reactive mass transfer behavior of agitated
liquid-liquid extraction columns under different operating conditions. In this regard, both models are
found efficient and able to follow liquid extraction column behavior during column scale-up, where three
column diameters were investigated (DN32, DN80, and DN150). To shed more light on the local
interactions among the contacted phases, a reduced coupled PBE and CFD framework is used to model
the hydrodynamic behavior of pulsed sieve plate columns. In this regard, OPOSPM is utilized and
implemented in FLUENT 18.2 commercial software as a special case of the SQMOM. The droplet-droplet interactions (breakage and coalescence) are taken into account using OPOSPM, while the required information about the velocity field and energy dissipation is calculated by the CFD model. In addition to this,
the proposed coupled OPOSPM-CFD framework is extended to include the mass transfer. The
proposed framework is numerically tested and the results are compared with the published experimental
data. The required breakage and coalescence parameters to perform the 2D-CFD simulation are estimated
using PPBLab software, where a 1D-CFD simulation using a multi-sectional grid is performed. A very
good agreement is obtained at the experimental and the numerical validation levels.
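To give a flavor of the OPOSPM idea used above (tracking only the total number and volume concentration of the dispersed phase and reconstructing a mean droplet diameter from them), the sketch below integrates a spatially homogeneous population balance with constant, binary-breakage kernels using SciPy's adaptive Runge-Kutta solver, the counterpart of MATLAB's ODE45; the kernel values, volume fraction, and initial condition are illustrative assumptions and do not reproduce the models developed in this work.
```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constant kernels and binary breakage (all values assumed).
BREAKAGE_FREQ = 0.05        # breakage frequency Gamma(d30) in 1/s
COALESCENCE_FREQ = 1.0e-9   # coalescence frequency omega(d30, d30) in m^3/s
DAUGHTERS = 2.0             # mean number of daughter droplets per breakage event
ALPHA = 0.05                # dispersed-phase volume fraction, conserved here

def mean_diameter(n):
    """Mean mass diameter d30 reconstructed from number and volume concentration."""
    return (6.0 * ALPHA / (np.pi * n)) ** (1.0 / 3.0)

def opospm_rhs(t, y):
    """Spatially homogeneous OPOSPM:
    dN/dt = (nu - 1) * Gamma * N - 0.5 * omega * N**2."""
    n = y[0]
    birth = (DAUGHTERS - 1.0) * BREAKAGE_FREQ * n
    death = 0.5 * COALESCENCE_FREQ * n ** 2
    return [birth - death]

n0 = 1.0e7                  # initial droplet number concentration in 1/m^3
sol = solve_ivp(opospm_rhs, (0.0, 600.0), [n0], method="RK45", rtol=1e-8)
print("final number concentration [1/m^3]:", sol.y[0, -1])
print("final mean mass diameter d30 [m]:", mean_diameter(sol.y[0, -1]))
```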
The Symbol Grounding Problem (SGP) is one of the first attempts to propose a hypothesis about the mapping between abstract concepts and the real world. For example, the concept "ball" can be represented by an object with a round shape (visual modality) and the phonemes /b/ /a/ /l/ (audio modality).
This thesis is inspired by the association learning observed in infant development.
Newborns can associate the visual and audio modalities of the same concept when both are presented at the same time, a mechanism relevant to the vocabulary acquisition task.
The goal of this thesis is to develop a novel framework that combines the constraints of the Symbol Grounding Problem and Neural Networks in a simplified scenario of association learning in infants. The first motivation is that the network output can be considered as numerical symbolic features because the attributes of the input samples are already embedded. The second motivation is that the association between two samples is usually predefined before training via the same vectorial representation; this thesis instead proposes to learn the association of two samples and the vectorial representation during training. Two scenarios are considered: sample pair association and sequence pair association.
Three main contributions are presented in this work.
The first contribution is a novel Symbolic Association Model based on two parallel MLPs.
The association task is defined as learning that two instances represent the same concept.
Moreover, a novel training algorithm is defined that matches the output vectors of the MLPs against a statistical distribution in order to obtain the relationship between concepts and vectorial representations.
The second contribution is a novel Symbolic Association Model based on two parallel LSTM networks that are trained on weakly labeled sequences.
The definition of the association task is extended to learning that two sequences represent the same series of concepts.
This model uses a training algorithm similar to that of the MLP-based approach.
The last contribution is a Classless Association.
The association task is defined as learning the relationship between two samples that represent the same unknown concept.
In summary, the contributions of this thesis extend Artificial Intelligence and Cognitive Computation research with a new, cognitively motivated constraint. Moreover, two training algorithms with this new constraint are proposed for two cases: single and sequence associations. In addition, a new label-free training rule with promising results is proposed.
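To make the setting more tangible, the sketch below implements a strongly simplified version of the sample pair association idea with two parallel MLPs in PyTorch: paired inputs from two modalities should be mapped to the same output code, while the batch-averaged output is pushed towards a uniform distribution as a crude stand-in for matching the outputs to a target statistical distribution. The input dimensions, network sizes, and loss terms are assumptions for illustration and do not reproduce the training algorithms proposed in this thesis.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))

n_symbols = 10                       # size of the shared symbolic output space (assumed)
net_visual = mlp(64, n_symbols)      # one MLP per modality; input sizes are assumed
net_audio = mlp(32, n_symbols)
opt = torch.optim.Adam(list(net_visual.parameters()) + list(net_audio.parameters()), lr=1e-3)

def association_step(x_visual, x_audio):
    """One training step: paired inputs should produce the same output code,
    while the batch-averaged output stays close to a uniform distribution
    (a crude surrogate for matching the outputs to a target statistical
    distribution and for avoiding the trivial constant solution)."""
    p_v = F.softmax(net_visual(x_visual), dim=1)
    p_a = F.softmax(net_audio(x_audio), dim=1)
    agree = F.mse_loss(p_v, p_a)                         # association of the pair
    mean_p = 0.5 * (p_v.mean(0) + p_a.mean(0))
    uniform = torch.full_like(mean_p, 1.0 / n_symbols)
    spread = (mean_p * (mean_p.clamp_min(1e-8).log() - uniform.log())).sum()  # KL to uniform
    loss = agree + spread
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy paired batch: random stand-ins for visual and audio feature vectors.
x_v, x_a = torch.randn(16, 64), torch.randn(16, 32)
for _ in range(5):
    print(association_step(x_v, x_a))
```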
In recent years, enormous progress has been made in the field of Artificial Intelligence (AI). In particular, the introduction of Deep Learning and end-to-end learning, the availability of large datasets, and the necessary computational power in the form of specialised hardware have allowed researchers to build systems with previously unseen performance in areas such as computer vision, machine translation and machine gaming. In parallel, the Semantic Web and its Linked Data movement have published many interlinked RDF datasets, forming the world’s largest, decentralised and publicly available knowledge base.
Despite these scientific successes, all current systems are still narrow AI systems. Each of them is specialised to a specific task and cannot easily be adapted to all other human intelligence tasks, as would be necessary for Artificial General Intelligence (AGI). Furthermore, most of the currently developed systems are not able to learn by making use of freely available knowledge such as provided by the Semantic Web. Autonomous incorporation of new knowledge is however one of the pre-conditions for human-like problem solving.
This work provides a small step towards teaching machines such human-like reasoning on freely available knowledge from the Semantic Web. We investigate how human associations, one of the building blocks of our thinking, can be simulated with Linked Data. The two main results of these investigations are a ground truth dataset of semantic associations and a machine learning algorithm that is able to identify patterns for them in huge knowledge bases.
The ground truth dataset of semantic associations consists of DBpedia entities that are known to be strongly associated by humans. The dataset is published as RDF and can be used for future research.
The developed machine learning algorithm is an evolutionary algorithm that can learn SPARQL queries from a given SPARQL endpoint based on a given list of exemplary source-target entity pairs. The algorithm operates in an end-to-end learning fashion, extracting features in form of graph patterns without the need for human intervention. The learned patterns form a feature space adapted to the given list of examples and can be used to predict target candidates from the SPARQL endpoint for new source nodes. On our semantic association ground truth dataset, our evolutionary graph pattern learner reaches a Recall@10 of > 63 % and an MRR (& MAP) > 43 %, outperforming all baselines. With an achieved Recall@1 of > 34% it even reaches average human top response prediction performance. We also demonstrate how the graph pattern learner can be applied to other interesting areas without modification.
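For reference, the evaluation measures quoted above can be computed as in the following sketch, which assumes that the learner returns a ranked candidate list per source entity and that the ground truth is a set of associated target entities; the entity names are made up for the example.
```python
def recall_at_k(ranked, relevant, k):
    """Fraction of queries with at least one relevant target in the top-k candidates."""
    hits = sum(1 for cands, rel in zip(ranked, relevant)
               if any(c in rel for c in cands[:k]))
    return hits / len(ranked)

def mean_reciprocal_rank(ranked, relevant):
    """Average of 1/rank of the first relevant target (0 if none is returned)."""
    total = 0.0
    for cands, rel in zip(ranked, relevant):
        for i, c in enumerate(cands, start=1):
            if c in rel:
                total += 1.0 / i
                break
    return total / len(ranked)

# Toy example: two source entities with predicted target rankings (names made up).
ranked = [["dbr:Moon", "dbr:Sun", "dbr:Star"], ["dbr:Dog", "dbr:Cat"]]
relevant = [{"dbr:Sun"}, {"dbr:Wolf"}]
print(recall_at_k(ranked, relevant, 10))       # 0.5
print(mean_reciprocal_rank(ranked, relevant))  # (1/2 + 0) / 2 = 0.25
```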
Though environmental inequality research has gained extensive interest in the United States, it has received far less attention in Europe and Germany. The main objective of this book is to extend the research on environmental inequality in Germany. This book aims to shed more light on the question of whether minorities in Germany are affected by a disproportionately high burden of environmental pollution, and to increase the general knowledge about the causal mechanisms, which contribute to the unequal distribution of environmental hazards across the population.
To improve our knowledge about environmental inequality in Germany, this book extends previous research in several ways. First, to evaluate the extent of environmental inequality, this book relies on two different data sources. On the one hand, it uses household-level survey data and self-reports about the impairment caused by air pollution. On the other hand, it combines aggregated census data and objective register-based measures of industrial air pollution by using geographic information systems (GIS). Consequently, this book offers the first analysis of environmental inequality on the national level that uses objective measures of air pollution in Germany. Second, to evaluate the causes of environmental inequality, this book applies a panel data analysis on the household level, thereby offering the first longitudinal analysis of selective migration processes outside the United States. Third, it compares the level of environmental inequality between German metropolitan areas and evaluates the extent to which the theoretical arguments of environmental inequality can explain differing levels of environmental inequality across the country. By doing so, this book not only investigates the impact of indicators derived from the standard strand of theoretical reasoning but also includes structural characteristics of the urban space.
All studies presented in this book confirm the disproportionate exposure of minorities to environmental pollution. Minorities live in more polluted areas in Germany but also in more polluted parts of the communities, and this disadvantage is most severe in metropolitan regions. Though this book finds evidence for selective migration processes contributing to the disproportionate exposure of minorities to environmental pollution, it also stresses the importance of urban conditions. Especially cities with centrally located industrial facilities yield a high level of environmental inequality. This poses the question of whether environmental inequality might be the result of two independent processes: 1) urban infrastructure confines residential choices of minorities to the urban core, and 2) urban infrastructure facilitates centrally located industries. In combination, both processes lead to a disproportionate burden of minority households.
Tables or ranked lists summarize facts about a group of entities in a concise and structured fashion. They are found in all kinds of domains and are easily comprehensible by humans. Some globally prominent examples of such rankings are the tallest buildings in the world, the richest people in Germany, or the most powerful cars. The availability of vast amounts of tables and rankings from the open domain allows data to be explored in different ways. Computing the similarity between ranked lists, in order to find those lists where entities are presented in a similar order, carries important analytical insights. This thesis presents a novel query-driven Locality Sensitive Hashing (LSH) method to efficiently find similar top-k rankings for a given input ranking. Experiments show that the proposed method performs far better than inverted-index-based approaches; in particular, it is able to outperform the popular prefix-filtering method. Additionally, an LSH-based probabilistic pruning approach is proposed that optimizes the space utilization of inverted indices while still maintaining a user-provided recall requirement for the results of the similarity search. Further, this thesis addresses the problem of automatically identifying interesting categorical attributes in order to explore entity-centric data by organizing it into meaningful categories. Our approach proposes novel statistical measures, beyond known concepts like information entropy, to capture the distribution of the data and to train a classifier that can predict which categorical attribute will be perceived as suitable by humans for data categorization. We further discuss how the information about useful categories can be applied in PANTHEON and PALEO, two data exploration frameworks developed in our group.
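To make the setting concrete, the sketch below shows one generic way (not the query-driven scheme proposed in this thesis) to apply LSH to top-k rankings: each ranking is represented as a set of (item, coarse position) pairs and indexed with MinHash and banding, so that rankings sharing many items at similar positions tend to collide in at least one band. All parameters and the toy rankings are assumptions for illustration.
```python
import random
from collections import defaultdict

random.seed(1)
N_HASHES, BANDS = 24, 6                       # 6 bands of 4 rows each
ROWS = N_HASHES // BANDS
SALTS = [random.getrandbits(32) for _ in range(N_HASHES)]

def ranking_to_set(ranking, bucket=2):
    """Represent a top-k ranking as a set of (item, coarse position) pairs."""
    return {(item, pos // bucket) for pos, item in enumerate(ranking)}

def minhash(elements):
    """One MinHash value per salted hash function (consistent within one run)."""
    return [min(hash((salt, e)) & 0xFFFFFFFF for e in elements) for salt in SALTS]

def band_keys(signature):
    """Split the signature into bands; each band acts as one LSH bucket key."""
    return [tuple(signature[b * ROWS:(b + 1) * ROWS]) for b in range(BANDS)]

# Index a few toy rankings, then probe with a query ranking.
rankings = {
    "tallest_buildings_2019": ["BurjKhalifa", "ShanghaiTower", "Makkah", "PingAn", "LotteWorld"],
    "tallest_buildings_2020": ["BurjKhalifa", "ShanghaiTower", "Makkah", "PingAn", "OneWTC"],
    "fastest_cars": ["Koenigsegg", "Bugatti", "Hennessey", "SSC", "McLaren"],
}
index = defaultdict(set)
for name, ranking in rankings.items():
    for band, key in enumerate(band_keys(minhash(ranking_to_set(ranking)))):
        index[(band, key)].add(name)

query = ["BurjKhalifa", "ShanghaiTower", "Makkah", "PingAn", "Taipei101"]
candidates = set()
for band, key in enumerate(band_keys(minhash(ranking_to_set(query)))):
    candidates |= index[(band, key)]
print(candidates)   # likely (not guaranteed) to contain only the two building rankings
```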
Computational problems that involve dynamic data, such as physics simulations and program development environments, have been an important
subject of study in programming languages. Recent advances in self-adjusting
computation made progress towards achieving efficient incremental computation by providing algorithmic language abstractions to express computations that respond automatically to dynamic changes in their inputs. Self-adjusting programs have been shown to be efficient for a broad range of problems via an explicit programming style, where the programmer uses specific
primitives to identify, create and operate on data that can change over time.
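The following sketch conveys, in Python rather than the ML dialects used in this line of work and in a greatly simplified form, the flavor of the explicit programming style mentioned above: changeable input cells are read through a primitive that records dependencies, and a derived computation is re-evaluated only when one of the cells it read has changed.
```python
class Cell:
    """A changeable input value; derived Thunks register themselves as readers."""
    def __init__(self, value):
        self.value = value
        self.readers = set()

    def write(self, value):
        if value != self.value:
            self.value = value
            for thunk in self.readers:
                thunk.dirty = True        # change propagation: mark dependents stale

class Thunk:
    """A derived computation that records which cells it reads."""
    def __init__(self, fn):
        self.fn = fn
        self.dirty = True
        self.cached = None

    def read(self, cell):
        cell.readers.add(self)            # record the dependency
        return cell.value

    def force(self):
        if self.dirty:                    # recompute only when some input changed
            self.cached = self.fn(self)
            self.dirty = False
        return self.cached

# Example: an incremental sum over two changeable inputs.
a, b = Cell(1), Cell(2)
total = Thunk(lambda t: t.read(a) + t.read(b))
print(total.force())    # 3 (computed)
print(total.force())    # 3 (cached, nothing recomputed)
a.write(10)
print(total.force())    # 12 (recomputed because 'a' changed)
```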
This dissertation presents implicit self-adjusting computation, a type-directed technique for translating purely functional programs into self-adjusting programs. In this implicit approach, the programmer annotates the (top-level) input types of the programs to be translated. Type inference finds
all other types, and a type-directed translation rewrites the source program
into an explicitly self-adjusting target program. The type system is related to
information-flow type systems and enjoys decidable type inference via constraint solving. We prove that the translation outputs well-typed self-adjusting
programs and preserves the source program’s input-output behavior, guaranteeing that translated programs respond correctly to all changes to their
data. Using a cost semantics, we also prove that the translation preserves the
asymptotic complexity of the source program.
As a second contribution, we present two techniques to facilitate the processing of large and dynamic data in self-adjusting computation. First, we
present a type system for precise dependency tracking that minimizes the
time and space for storing dependency metadata. The type system improves
the scalability of self-adjusting computation by eliminating an important assumption of prior work that can lead to recording spurious dependencies.
We present a type-directed translation algorithm that generates correct self-adjusting programs without relying on this assumption. Second, we show a
probabilistic-chunking technique to further decrease space usage by controlling the fundamental space-time tradeoff in self-adjusting computation.
We implement implicit self-adjusting computation as an extension to Standard ML with compiler and runtime support. Using the compiler, we are able
to incrementalize an interesting set of applications, including standard list
and matrix benchmarks, ray tracer, PageRank, sparse graph connectivity, and
social circle counts. Our experiments show that our compiler incrementalizes existing code with only trivial amounts of annotation, and the resulting
programs bring asymptotic improvements to large datasets from real-world
applications, leading to orders of magnitude speedups in practice.
The transfer of substrates between two enzymes within a biosynthesis pathway is an effective way to synthesize a specific product and a good way to avoid metabolic interference. This process is called metabolic channeling, and it describes the (in)direct transfer of an intermediate molecule between the active sites of two enzymes. By forming multi-enzyme cascades, the efficiency of product formation and the flux are elevated, and intermediate products are transferred and converted correctly by the enzymes.
During tetrapyrrole biosynthesis, several substrate transfer events occur and are a prerequisite for optimal pigment synthesis. In this project, the metabolic channeling process during the synthesis of the pink pigment phycoerythrobilin (PEB) was investigated. The ferredoxin-dependent bilin reductases (FDBRs) responsible for PEB formation are PebA and PebB. During pigment synthesis, the intermediate molecule 15,16-dihydrobiliverdin (DHBV) is formed and transferred from PebA to PebB. While earlier studies postulated a metabolic channeling of DHBV, this work revealed new insights into the requirements of this protein-protein interaction. It became clear that the most important requirement for the PebA/PebB interaction is based on the affinity to their substrate/product DHBV. The already high affinity of both enzymes to each other is enhanced in the presence of DHBV in the binding pocket of PebA, which leads to a rapid transfer to the subsequent enzyme PebB. DHBV is a labile molecule and needs to be rapidly channeled in order to be correctly further reduced to PEB. Fluorescence titration experiments and transfer assays confirmed the enhancing effect of DHBV on its own transfer.
Further insights were gained by creating an active fusion protein of PebA and PebB and comparing its reaction mechanism with that of standard FDBRs. This fusion protein was able to convert biliverdin IXα (BV IXα) to PEB, similar to PebS, which can also convert BV IXα via DHBV to PEB as a single enzyme. The product and the intermediate of the reaction were identified via HPLC and UV-Vis spectroscopy.
The results of this work revealed that PebA and PebB interact via a proximity channeling process in which the intermediate DHBV plays an important role for the interaction. It also highlights the importance of substrate channeling in the synthesis of PEB to optimize the flux of intermediates through this metabolic pathway.
This thesis consists of five chapters. Chapter one elaborates on the principle of cognitive consistency and provides an overview of what extant research refers to as cognitive consistency theories (e.g., Abelson et al., 1968; Harmon-Jones & Harmon-Jones, 2007; Simon, Stenstrom, & Read, 2015). Moreover, it describes the most prominent theoretical representatives in this context, namely balance theory (Heider, 1946, 1958), congruity theory (Osgood & Tannenbaum, 1955), and cognitive dissonance theory (Festinger, 1957). Chapter one further outlines the role of individuals’ preference for cognitive consistency in the context of financial resource acquisition, the recruitment of employees and the acquisition of customers in the entrepreneurial context.
Chapter two is co-authored by Prof. Dr. Matthias Baum and presents two separate studies in which we empirically investigate the hypothesis that social entrepreneurs face a systematic disadvantage, compared to for-profit entrepreneurs, when seeking to acquire financial resources. Further, our work goes beyond existing research by introducing biased perceptions as a factor that may constrain social enterprise resource acquisition and therefore possibly stall the process of social value creation. On the foundation of role congruity theory (Eagly & Karau, 2002), we focus on the question of whether social entrepreneurs provide signals which are less congruent with the stereotype of successful entrepreneurs and, as such, are perceived as less competent. We further test whether such biased competency perceptions feed forward into a lower probability of receiving funding.
Chapter three is also co-authored by Prof. Dr. Matthias Baum as well as by Eva Henrich. The aim of this chapter is to further our understanding of the early recruitment phase and to contribute to the current debate about how firms should orchestrate their recruitment channels in order to enhance the creation of employer knowledge. We introduce the concept of integrated marketing communication into the recruitment field and examine how the level of consistency regarding job or organization information affects the recall and the recognition of that information. We additionally test whether information consistency among multiple recruitment channels influences the information recognition failure rate. Answering this question is important because, by failing to remember the source of recruitment information, job seekers may attribute job information to the wrong firm and thus form incorrect employer knowledge.
Chapter four, which is co-authored by Prof. Dr. Matthias Baum, introduces customer congruity perceptions between a brand and a reward in the context of customer referral programs as an essential driver of the effectiveness of such programs. More precisely, we posit and empirically test a model according to which the decision-making process of the customer recommending a firm involves multiple mental steps and assumes reward perceptions to be an immediate antecedent of brand evaluation, which then ultimately shapes the likelihood of recommendation. The level of congruity/incongruity is set up as an antecedent state and affects the perceived attractiveness of the reward. Our work contributes to the discussion on the optimal level of congruity between a prevailing schema in the mind of the customer and a stimulus presented. In addition, chapter four introduces customer referral programs as a strategic tool for brand managers. Chapter four has also been published in Psychology & Marketing.
Chapter five first proposes that marketing strategies specifically designed to induce word-of-mouth (WOM) behavior are particularly relevant for new ventures. Against the background that previous research suggests that customer perceptions of young firm age may influence customer behavior and the degree to which customers support new ventures (e.g., Choi & Shepherd, 2005; Stinchcombe, 1965), we then conduct an experiment to examine the causal mechanisms linking firm age and customer WOM. Chapter five, too, is co-authored by Prof. Dr. Matthias Baum.
Increasing costs due to the rising attrition of drug candidates in late developmental phases alongside post-marketing withdrawal of drugs challenge the pharmaceutical industry to further improve its current preclinical safety assessment strategies. One of the most common reasons for the termination of drug candidates is drug-induced hepatotoxicity, which more often than not remains undetected in early developmental stages, thus emphasizing the necessity for improved and more predictive preclinical test systems. One reason for the very limited value of currently applied in vitro test systems for the detection of potential hepatotoxic liabilities is the lack of organotypic and tissue-specific physiology of hepatocytes cultured in ordinary monolayer culture formats.
The thesis at hand primarily deals with the evaluation of both two- and three-dimensional cell culture approaches with respect to their relative ability to predict the hepatotoxic potential of drug candidates in early developmental phases. First, different hepatic cell models, which are routinely used in the pharmaceutical industry (primary human hepatocytes as well as the three cell lines HepG2, HepaRG and Upcyte hepatocytes), were investigated in conventional 2D monolayer culture with respect to their ability to detect hepatotoxic effects in simple cytotoxicity studies. Moreover, it could be shown that the global protein expression levels of all cell lines substantially differ from those of primary human hepatocytes, with the least pronounced difference in HepaRG cells.
The introduction of a third dimension through the cultivation of spheroids enables hepatocytes to recapitulate their typical native polarity and furthermore dramatically increases the contact surface of adjacent cells. These differences in cellular architecture have a positive influence on hepatocyte longevity and the expression of drug metabolizing enzymes and transporters, which could be proven via immunofluorescent (IF) staining for at least 14 days in PHH and at least 28 days in HepaRG spheroids, respectively. Additionally, the IF staining of three different phase III transporters (MDR1, MRP2 and BSEP) indicated a bile canalicular network in spheroids of both cell models. A dose-dependent inducibility of important cytochrome P450 isoenzymes in HepaRG spheroids could be shown on the protein level via IF for at least 14 days. CYP inducibility of HepaRG cells cultured in 2D and 3D was compared on the mRNA level for up to 14 days, and inducibility was generally lower in 3D compared to 2D under the conditions of this study. In a comparative cytotoxicity study, both PHH and HepaRG spheroids as well as HepaRG monolayers were treated with five hepatotoxic drugs for up to 14 days, and viability was measured at three time points (days 3, 7 and 14). A clear time- and dose-dependent onset of the drug-induced hepatotoxic effects was observable in all conditions tested, indicated by a shift of the respective EC50 value towards lower doses with increasing exposure time. The observed effects were most pronounced in PHH spheroids, indicating that they are the most sensitive cell model in this study. Moreover, HepaRG cells were more sensitive in spheroid culture compared to monolayers, which suggests a potential application of spheroids as a long-term test system for the detection of hepatotoxicities with slow onset. Finally, the basal protein expression levels of three antigens (CYP1A2, CYP3A4 and NAT 1/2) were analyzed via Western Blotting in HepaRG cells cultured in three different cell culture formats (2D, 3D and QV) in order to estimate the impact of the cell culture conditions on protein expression levels. The QV system enables a pump-driven flow of cell culture media, which introduces both mechanical stimuli through shear and molecular stimuli through dynamic circulation to the monolayer. Those stimuli had a clearly positive effect on the expression of the selected antigens, which showed increased levels in comparison to both 2D and 3D. In contrast, HepaRG spheroids showed time-dependent differences with the overall highest levels at day 7.
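The dose-dependent shift described above is typically quantified by fitting a sigmoidal concentration-response curve to the viability data; the sketch below fits a four-parameter log-logistic (Hill) model with SciPy to synthetic data for two exposure times and is only meant to illustrate how an EC50 and its shift can be estimated, not to reproduce the results of this study.
```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, log_ec50, slope):
    """Four-parameter log-logistic concentration-response model."""
    return bottom + (top - bottom) / (1.0 + (conc / 10.0 ** log_ec50) ** slope)

# Synthetic viability data (% of control) for two exposure times.
conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100], dtype=float)   # concentration in µM
viability = {
    "day 3": np.array([99, 98, 95, 85, 60, 25, 8], dtype=float),
    "day 14": np.array([97, 92, 78, 50, 22, 9, 4], dtype=float),  # shifted to lower doses
}

for label, viab in viability.items():
    params, _ = curve_fit(hill, conc, viab, p0=[5.0, 100.0, np.log10(5.0), 1.0], maxfev=10000)
    print(label, "EC50 ≈ %.2f µM" % (10.0 ** params[2]))
```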
The studies presented in this thesis delivered valuable information on the increased physiological relevance depending on the cell culture format: both three-dimensionality and the circulation of media lead to a more differentiated phenotype in hepatic cell models. Those cell culture formats are applicable in preclinical drug development in order to obtain more relevant information at early developmental stages and thus help to create a more efficient drug development process. Nonetheless, further studies are necessary to thoroughly characterize, validate and standardize such novel cell culture approaches prior to their routine application in industry.
Road accidents remain one of the major causes of death and injuries globally. Several million people die every year due to road accidents all over the world. Although the number of accidents in the European region has decreased in the past years, road safety still remains a major challenge. Especially in the case of commercial trucks, due to the size and load of the vehicle, even minor collisions with other road users can lead to serious injuries or death. In order to reduce the number of accidents, the automotive industry is rapidly developing advanced driver assistance systems (ADAS) and automated driving technologies. Efficient and reliable solutions are required for these systems to sense, perceive, and react to different environmental conditions. For vehicle safety applications such as collision avoidance with vulnerable road users (VRUs), it is not only important for the system to efficiently detect and track the objects in the vicinity of the vehicle, it must also function robustly.
An environment perception solution for application in commercial truck safety systems and for future automated driving is developed in this work. To this end, a method for integrated tracking and classification of road users in the near vicinity of the vehicle is formulated. The drawbacks of conventional multi-object tracking algorithms with respect to state, measurement, and data association uncertainties are addressed with the recent advancements in the field of unified multi-object tracking solutions based on random finite sets (RFS). A Gaussian mixture implementation of the recently developed labeled multi-Bernoulli (LMB) filter [RSD15] is used as the basis for multi-object tracking in this work. Measurements from a high-resolution radar sensor are used as the main input for detecting and tracking objects.
On the one hand, the focus of this work is on tracking VRUs in the near vicinity of the truck. As it is beneficial for most vehicle safety systems to also know the category that an object belongs to, the focus on the other hand is also on classifying the road users. All radar detections believed to originate from a single object are clustered together with the help of the density-based spatial clustering of applications with noise (DBSCAN) algorithm. Each cluster of detections has different properties based on the respective object characteristics. Sixteen distinct features based on the radar detections that are suitable for separating the pedestrian, bicyclist, and passenger car categories are selected and extracted for each cluster. A machine learning based classifier is constructed, trained, and parameterised to distinguish the road users based on the extracted features.
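The clustering and classification chain can be illustrated with off-the-shelf components as in the sketch below; the synthetic detections, the DBSCAN parameters, and the three per-cluster features are simple placeholders for the radar data and the sixteen features used in this work.
```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def simulate_object(center, spread, doppler, n):
    """Synthetic radar detections: x, y position and radial (Doppler) velocity."""
    xy = rng.normal(center, spread, size=(n, 2))
    v = rng.normal(doppler, 0.3, size=(n, 1))
    return np.hstack([xy, v])

detections = np.vstack([
    simulate_object([5, 2], 0.3, 1.2, 12),    # pedestrian-like
    simulate_object([15, -3], 0.8, 4.0, 15),  # bicyclist-like
    simulate_object([30, 1], 1.8, 12.0, 25),  # passenger-car-like
])

# Cluster detections believed to originate from the same object.
labels = DBSCAN(eps=2.5, min_samples=3).fit_predict(detections[:, :2])

def cluster_features(points):
    """Placeholder features: spatial extent, mean Doppler velocity, number of detections."""
    extent = points[:, :2].max(axis=0) - points[:, :2].min(axis=0)
    return [extent[0] * extent[1], points[:, 2].mean(), len(points)]

features = [cluster_features(detections[labels == c]) for c in sorted(set(labels)) if c != -1]

# Train a classifier on (here: hand-assigned) class labels for the clusters.
classes = ["pedestrian", "bicyclist", "car"][: len(features)]
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(features, classes)
print(clf.predict([cluster_features(simulate_object([40, 0], 1.7, 11.0, 20))]))
```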
The class information derived from the radar detections can further be used by the tracking algorithm to adapt the model parameters used for precisely predicting the object motion according to the category of the object. A multiple model labeled multi-Bernoulli (MMLMB) filter is used for modelling the different object motions. Apart from the detection level, the estimated state of an object on the tracking level also provides information about the object class. Both pieces of information are fused using the Dempster-Shafer theory (DST) of evidence, based on the respective class probabilities. Thereby, the output of the integrated tracking and classification with the MMLMB filter are classified tracks that can be used by truck safety applications with better reliability.
The developed environment perception method is further implemented as a real-time prototype system on a commercial truck. The performance of the tracking and classification approaches is evaluated with the help of simulations and multiple test scenarios. A comparison of the developed approaches to a conventional converted measurements Kalman filter with global nearest neighbour association (CMKF-GNN) shows significant advantages in overall accuracy and performance.
Mobility has become an integral feature of many wireless networks. Along with this mobility comes the need for location awareness. A prime example of this development is today’s and future transportation systems. They increasingly rely on wireless communications to exchange location and velocity information for a multitude of functions and applications. At the same time, the technological progress facilitates the widespread availability of sophisticated radio technology such as software-defined radios. The result is a variety of new attack vectors threatening the integrity of location information in mobile networks.
Although such attacks can have severe consequences in safety-critical environments such as transportation, the combination of mobility and integrity of spatial information has not received much attention in security research in the past. In this thesis we aim to fill this gap by providing adequate methods to protect the integrity of location and velocity information in the presence of mobility. Based on physical effects of mobility on wireless communications, we develop new methods to securely verify locations, sequences of locations, and velocity information provided by untrusted nodes. The results of our analyses show that mobility can in fact be exploited to provide robust security at low cost.
To further investigate the applicability of our schemes to real-world transportation systems, we have built the OpenSky Network, a sensor network which collects air traffic control communication data for scientific applications. The network uses crowdsourcing and has already achieved coverage in most parts of the world with more than 1000 sensors.
Based on the data provided by the network and on measurements with commercial off-the-shelf hardware, we demonstrate the technical feasibility and security of our schemes in the air traffic scenario. Moreover, the experience and data provided by the OpenSky Network allow us to investigate the challenges for our schemes in the real-world air traffic communication environment. We show that our verification methods meet all requirements to help secure the next-generation air traffic system.
This research explores the development of web-based reference software for the characterisation of surface roughness for two-dimensional surface data. The reference software used for the verification of surface characteristics makes the evaluation methods easier for clients to apply and check. The algorithms used in this software are based on international ISO standards. Software used in industrial measuring instruments may give variations in the calculated parameters due to numerical differences in the calculation. Such variations can be verified using the proposed reference software.
The evaluation of surface roughness is carried out in four major steps: data capture, data alignment, data filtering, and parameter calculation. This work walks through each of these steps, explaining how surface profiles are evaluated by the pre-processing steps called fitting and filtering. The analysis process is then followed by parameter evaluation according to the DIN EN ISO 4287 and DIN EN ISO 13565-2 standards to extract important information from the profile to characterise surface roughness.
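As a minimal sketch of the filtering and parameter-calculation steps, the example below separates roughness from waviness with a Gaussian low-pass (a simplified stand-in for the ISO 16610-21 profile filter) and evaluates the ISO 4287 amplitude parameters Ra, Rq, Rp, Rv, and Rz on a synthetic profile; the cutoff, the sampling step, and the evaluation over the full profile instead of averaging over five sampling lengths are assumptions made for brevity.
```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Synthetic primary profile: waviness + roughness + noise (heights in µm).
dx = 0.5e-3                          # sampling step in mm (0.5 µm)
x = np.arange(0.0, 4.0, dx)          # 4 mm evaluation length
primary = (2.0 * np.sin(2 * np.pi * x / 2.5)       # long-wave waviness
           + 0.4 * np.sin(2 * np.pi * x / 0.05)    # short-wave roughness
           + 0.05 * np.random.default_rng(0).normal(size=x.size))

# Gaussian low-pass as a simplified stand-in for the ISO 16610-21 profile filter
# with cutoff lambda_c = 0.8 mm (50 % transmission at the cutoff wavelength).
lambda_c = 0.8
sigma = lambda_c * np.sqrt(np.log(2.0) / (2.0 * np.pi ** 2)) / dx
waviness = gaussian_filter1d(primary, sigma)
roughness = primary - waviness

# DIN EN ISO 4287 amplitude parameters, computed here over the full
# evaluation length instead of averaging over five sampling lengths.
Ra = np.mean(np.abs(roughness))        # arithmetic mean deviation
Rq = np.sqrt(np.mean(roughness ** 2))  # root mean square deviation
Rp = roughness.max()                   # maximum peak height
Rv = -roughness.min()                  # maximum valley depth
Rz = Rp + Rv                           # maximum height of the profile
print(f"Ra = {Ra:.3f} µm, Rq = {Rq:.3f} µm, Rz = {Rz:.3f} µm")
```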