The goal of this work is to develop statistical natural language models and processing techniques based on Recurrent Neural Networks (RNN), especially the recently introduced Long Short-Term Memory (LSTM). Due to their adapting and predicting abilities, these methods are more robust and easier to train than traditional methods such as word lists and rule-based models. They improve the output of recognition systems and make it more accessible to users for browsing and reading. These techniques are required especially for historical books, which would otherwise take years of effort and huge costs to transcribe manually.
The contributions of this thesis are several new methods that combine high-performance computing with high accuracy. First, an error model for improving recognition results is designed. As a second contribution, a hyphenation model for difficult transcriptions, used for alignment purposes, is proposed. Third, a dehyphenation model is used to classify the hyphens in noisy transcriptions. The fourth contribution is using LSTM networks for normalizing historical orthography. A size-normalization alignment is implemented to equalize the lengths of strings before the training phase. Using LSTM networks as a language model to improve the recognition results is the fifth contribution. Finally, the sixth contribution is a combination of Weighted Finite-State Transducers (WFSTs) and LSTM applied to multiple recognition systems. These contributions are elaborated in more detail below.
Context-dependent confusion rules are a new technique to build an error model for Optical Character Recognition (OCR) corrections. The rules are extracted from the OCR confusions which appear in the recognition outputs and are translated into edit operations, i.e., insertions, deletions, and substitutions, using the Levenshtein edit distance algorithm. The edit operations are extracted in the form of rules with respect to the context of the incorrect string to build an error model using WFSTs. The context-dependent rules assist the language model in finding the best candidate corrections. They reduce the calculations that occur in searching the language model and also enable the language model to correct incorrect words. The context-dependent error model is applied to the University of Washington (UWIII) dataset and the Urdu Nastaleeq script dataset. It improves the OCR results from an error rate of 1.14% to an error rate of 0.68%. It performs better than the state-of-the-art single-rule-based approach, which returns an error rate of 1.0%.
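As an illustration of the rule extraction step (the thesis uses a full Levenshtein alignment and WFSTs; the sketch below substitutes Python's difflib alignment and invented example strings), context-dependent confusion rules could be derived roughly as follows:

```python
from difflib import SequenceMatcher

# Illustrative sketch, not the thesis' exact implementation: the OCR output
# and the ground truth are aligned, each difference is recorded as an edit
# operation, and the surrounding characters are kept as the rule's context.
def confusion_rules(ocr_line, truth_line, context=1):
    """Extract (left_context, wrong, right_context, correct) rules."""
    rules = []
    for op, i1, i2, j1, j2 in SequenceMatcher(None, ocr_line, truth_line).get_opcodes():
        if op == "equal":
            continue  # no confusion here
        left = ocr_line[max(0, i1 - context):i1]
        right = ocr_line[i2:i2 + context]
        # 'replace' covers substitutions, 'delete' deletions, 'insert' insertions
        rules.append((left, ocr_line[i1:i2], right, truth_line[j1:j2]))
    return rules

# usage: the extracted rule records that 'm' between 'o' and 'i' should be
# corrected to 'rn' (a typical OCR confusion)
print(confusion_rules("moming", "morning"))
```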
This thesis describes a new, simple, fast, and accurate system for generating correspondences between real scanned historical books and their transcriptions. The alignment faces several challenges: first, the transcriptions may contain modifications and layout variations compared to the original book. Second, the recognition of historical books suffers from misrecognition and segmentation errors, which make the alignment more difficult, especially since line breaks and pages will not have the same correspondences. Adapted WFSTs are designed to represent the transcription. The WFSTs process Fraktur ligatures and adapt the transcription with a hyphenation model that allows alignment with respect to the variants of hyphenated words at line breaks in the OCR documents. In this work, several approaches are implemented for the alignment: text-segment, page-wise, and book-wise approaches. The approaches are evaluated on a dataset of German calligraphic (Fraktur) script historical documents from the “Wanderungen durch die Mark Brandenburg” volumes (1862-1889). The text-segmentation approach returns an error rate of 2.33% without a hyphenation model and an error rate of 2.0% with a hyphenation model. Dehyphenation methods are presented to remove hyphens from the transcription. They provide the transcription in a readable and reflowable format to be used for alignment purposes. We treat the task as a classification problem and classify the hyphens of the given patterns as line-break hyphens, compound-word hyphens, or noise. The methods are applied to clean and noisy transcriptions in different languages. The decision tree classifier gives the better performance, with an accuracy of 98% on the UWIII dataset and 97% on the Fraktur script.
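A minimal sketch of the hyphen classification idea (the feature set, labels, and tiny training data below are illustrative assumptions, not the thesis' actual setup):

```python
from sklearn.tree import DecisionTreeClassifier

# Each hyphen occurrence is described by a few contextual features and
# classified as a line-break hyphen, a compound-word hyphen, or noise.
def hyphen_features(left_token, right_token, at_line_end):
    return [
        1 if at_line_end else 0,                    # hyphen directly before a line break?
        len(left_token),
        len(right_token),
        1 if right_token[:1].islower() else 0,      # continuations usually start lowercase
    ]

X = [
    hyphen_features("Wande", "rungen", True),       # broken word at a line break
    hyphen_features("Baden", "Württemberg", False), # genuine compound word
    hyphen_features("xx", "", True),                # OCR noise
]
y = ["line_break", "compound", "noise"]

clf = DecisionTreeClassifier().fit(X, y)

# usage: classify a new hyphen occurrence
print(clf.predict([hyphen_features("Bran", "denburg", True)]))
```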
A new method for normalizing historical OCRed text using LSTM is implemented for different texts, ranging from Early New High German of the 14th-16th centuries to modern forms in New High German, and applied to the Luther Bible. It performs better than rule-based and word-list approaches. It provides a transcription suitable for various purposes such as part-of-speech tagging and n-grams. Two new techniques are also presented for aligning the OCR results and normalizing string lengths, by adding character epsilons or appending epsilons. They allow deletion and insertion at the appropriate position in the string. In normalizing historical wordforms to modern wordforms, the accuracy of the LSTM on seen data is around 94%, while the state-of-the-art combined rule-based method returns 93%. On unseen data, the LSTM returns 88% and the combined rule-based method returns 76%. In normalizing modern wordforms to historical wordforms, the LSTM delivers the best performance, returning 93.4% on seen data and 89.17% on unseen data.
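The following sketch shows one plausible reading of the two size-normalization schemes (the epsilon symbol and the exact padding rules are assumptions for illustration):

```python
# Before training a character-level LSTM, the historical and the modern
# wordform are brought to equal length, either by appending epsilon symbols
# at the end or by reserving slots per character, so that insertions and
# deletions can happen at the appropriate positions.
EPS = "ε"  # placeholder symbol standing for "no character"

def append_epsilons(source, target):
    """Pad the shorter string with epsilons at its end."""
    n = max(len(source), len(target))
    return source.ljust(n, EPS), target.ljust(n, EPS)

def character_epsilons(source, target, slots_per_char=2):
    """Reserve a fixed number of slots per source character so the network
    can emit extra characters (insertions) or epsilons (deletions)."""
    padded_src = "".join(c + EPS * (slots_per_char - 1) for c in source)
    padded_tgt = target.ljust(len(padded_src), EPS)
    return padded_src, padded_tgt

# usage: align an Early New High German form with its modern normalization
print(append_epsilons("vnnd", "und"))
print(character_epsilons("vnnd", "und"))
```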
In this thesis, a thorough investigation of constructing high-performance language models for improving recognition systems has been carried out. A new method to construct a language model using LSTM is designed to correct OCR results. The method is applied to the UWIII dataset and Urdu script. The LSTM approach outperforms the state of the art, especially for tokens unseen during training. On the UWIII dataset, the LSTM reduces the OCR error rate from 1.14% to 0.48%. On the Urdu Nastaleeq script dataset, the LSTM reduces the error rate from 6.9% to 1.58%.
Finally, the integration of multiple recognition outputs can give higher performance than a single recognition system. Therefore, a new method for combining the results of OCR systems is explored using WFSTs and LSTM. It uses multiple OCR outputs and votes for the best output to improve the OCR results. It performs better than the ISRI voting tool and pairwise multiple-sequence alignment. The purpose is to provide correct transcriptions that can be used for digitizing books and for linguistic purposes such as n-grams and part-of-speech tagging. The method consists of two alignment steps. First, two recognition systems are aligned using WFSTs. The transducers are designed to be flexible and compatible with the different symbols at line and page breaks in order to avoid segmentation and misrecognition errors. The LSTM model is then used to vote for the best candidate correction from the two systems and to improve the incorrect tokens produced during the first alignment. The approaches are evaluated on OCR outputs of the English UWIII and historical German Fraktur datasets, obtained from state-of-the-art OCR systems. The experiments show that the error rate of ISRI voting is 1.45%, the error rate of the pairwise multiple-sequence alignment is 1.32%, the error rate of the line-to-page alignment is 1.26%, and the LSTM approach has the best performance with an error rate of 0.40%.
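A toy sketch of the voting idea (the thesis uses WFSTs for the alignment and an LSTM for the voting; here difflib and a trivial scoring function stand in for both):

```python
from difflib import SequenceMatcher

# Two OCR hypotheses are aligned, and for each disagreement a scoring
# function decides which variant to keep. A real system would plug in
# language-model probabilities instead of the toy scorer below.
def vote(ocr_a, ocr_b, score):
    """Align two OCR hypotheses and keep the better-scoring variant."""
    result = []
    for op, i1, i2, j1, j2 in SequenceMatcher(None, ocr_a, ocr_b).get_opcodes():
        if op == "equal":
            result.append(ocr_a[i1:i2])
        else:
            cand_a, cand_b = ocr_a[i1:i2], ocr_b[j1:j2]
            result.append(cand_a if score(cand_a) >= score(cand_b) else cand_b)
    return "".join(result)

# toy scorer: prefer candidates made of ordinary letters and spaces
def toy_score(s):
    return sum(c.isalpha() or c.isspace() for c in s) - s.count("#")

print(vote("the qu#ck brown fox", "the quick br0wn fox", toy_score))
```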
The purpose of this thesis is to contribute methods providing correct transcriptions corresponding to the original book. This is considered to be the first step towards an accurate and
more effective use of the documents in digital libraries.
Information Visualization (InfoVis) and Human-Computer Interaction (HCI) have strong ties with each other. Visualization supports the human cognitive system by providing interactive and meaningful images of the underlying data. On the other hand, the HCI domain is concerned with the usability of the designed visualization from the human perspective. Thus, designing a visualization system requires considering many factors in order to achieve the desired functionality and system usability. Achieving these goals helps users understand the internal behavior of complex data sets in less time.
Graphs are widely used data structures to represent the relations between data elements in complex applications. Due to the diversity of this data type, graphs have been applied in numerous information visualization applications (e.g., state transition diagrams, social networks, etc.). Therefore, many graph layout algorithms have been proposed in the literature to help in visualizing this rich data type. Some of these algorithms are used to visualize large graphs, while others handle medium-sized graphs. Regardless of the graph size, the resulting layout should be understandable from the users’ perspective and at the same time fulfill a list of aesthetic criteria to increase the readability of the representation. Respecting these two principles leads to graph visualizations that help users understand and explore the complex behavior of critical systems.
In this thesis, we utilize the graph visualization techniques in modeling the structural and behavioral aspects of embedded systems. Furthermore, we focus on evaluating the resulting representations from the users’ perspectives.
The core contribution of this thesis is a framework, called ESSAVis (Embedded Systems Safety Aspect Visualizer). This framework not only visualizes some of the safety aspects (e.g., CFT models) of embedded systems, but also helps engineers and experts in analyzing safety-critical situations of the system. For this, the framework provides a 2Dplus3D environment in which the 2D part shows the graph representation of the abstract data about the safety aspects of the underlying embedded system, while the 3D part shows the system's 3D model. Both views are integrated smoothly within the 3D world. In order to check the effectiveness and feasibility of the framework and its sub-components, we conducted several studies with real end users as well as with general users. Results of the main study, which targeted the overall ESSAVis framework, show a high acceptance ratio as well as higher accuracy and better performance when the visual support of the framework is used.
The ESSAVis framework has been designed to be compatible with different 3D technologies. This enabled us to use the stereoscopic depth of such technologies to encode node attributes in node-link diagrams. In this regard, we conducted an evaluation study to measure the usability of the stereoscopic depth cue approach, called the stereoscopic highlighting technique, against other selected visual cues (i.e., color, shape, and size). Based on the results, the thesis proposes the Reflection Layer extension to the stereoscopic highlighting technique, which was also evaluated from the users’ perspective. Additionally, we present a new technique, called ExpanD (Expand in Depth), that utilizes the depth cue to show the structural relations between different levels of detail in node-link diagrams. The results of this part open a promising research direction in which visualization designers can benefit from the richness of 3D technologies in visualizing abstract data in the information visualization domain.
Finally, this thesis proposes the application of the ESSAVis framework as a visual tool in the educational training of engineers for understanding complex concepts. In this regard, we conducted an evaluation study with computer engineering students in which we used the visual representations produced by ESSAVis to teach the principles of fault detection and failure scenarios in embedded systems. Our work opens directions for investigating many challenges in the design of visualizations for educational purposes.
In DS-CDMA, spreading sequences are allocated to users to separate the different links, namely base station to user in the downlink and user to base station in the uplink. These sequences are designed for optimum periodic correlation properties. Sequences with good periodic auto-correlation properties help in frame synchronisation at the receiver, while sequences with good periodic cross-correlation properties reduce cross-talk among users and hence the interference among them. In addition, they are designed to have reduced implementation complexity so that they are easy to generate. In current systems, spreading sequences are allocated to users irrespective of their channel condition. In this thesis, the method of allocating spreading sequences based on the users’ channel condition is investigated in order to improve the performance of the downlink. Different methods of dynamically allocating the sequences are investigated, including optimum allocation through a simulation model, fast sub-optimum allocation through a mathematical model, and a proof-of-concept model using real-world channel measurements. Each model is evaluated to validate the improvement in gain achieved per link, the computational complexity of the allocation scheme, and its impact on the capacity of the network.
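For illustration, the periodic correlation properties that the sequence design targets can be computed as follows (the example sequences are arbitrary, not those used in the thesis):

```python
import numpy as np

# Periodic auto- and cross-correlation of binary (+1/-1) spreading sequences.
def periodic_correlation(a, b):
    """R_ab(k) = sum_n a[n] * b[(n+k) mod N] for all shifts k."""
    a, b = np.asarray(a), np.asarray(b)
    return np.array([np.dot(a, np.roll(b, -k)) for k in range(len(a))])

# two length-8 example sequences (entries in {+1, -1})
s1 = np.array([+1, +1, +1, -1, +1, -1, -1, -1])
s2 = np.array([+1, -1, +1, +1, -1, -1, +1, -1])

print(periodic_correlation(s1, s1))  # auto-correlation: large peak at shift 0
print(periodic_correlation(s1, s2))  # cross-correlation: ideally small everywhere
```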
In cryptography, secret keys are used to ensure confidentiality of communication between the legitimate nodes of a network. In a wireless ad-hoc network, the broadcast nature of the channel necessitates robust key management systems for the secure functioning of the network. Physical layer security is a novel method of profitably utilising the random and reciprocal variations of the wireless channel to extract secret keys. By measuring the characteristics of the wireless channel within its coherence time, reciprocal variations of the channel can be observed between a pair of nodes. Using these reciprocal characteristics of the channel, a common shared secret key is extracted between a pair of nodes. The process of key extraction consists of four steps, namely channel measurement, quantisation, information reconciliation, and privacy amplification. The reciprocal channel variations are measured and quantised to obtain a preliminary key in the form of a bit vector (0, 1). Due to errors in measurement and quantisation and to additive Gaussian noise, disagreements exist between the bits of the preliminary keys. These errors are corrected by using error detection and correction methods to obtain a synchronised key at both nodes. Further, by means of secure hashing, the entropy of the key is enhanced in the privacy amplification stage. The efficiency of the key generation process depends on the method of channel measurement and quantisation. If, instead of quantising the channel measurements directly, their reciprocity is first enhanced and they are then quantised appropriately, the key generation process can be made efficient and fast. In this thesis, four methods of enhancing reciprocity are presented, namely l1-norm minimisation, hierarchical clustering, Kalman filtering, and polynomial regression. They are appropriately quantised by binary and adaptive quantisation. Then, the entire process of key generation, from measuring the channel profile to obtaining a secure key, is validated by using real-world channel measurements. The performance evaluation compares the methods in terms of bit disagreement rate, key generation rate, tests of randomness, robustness tests, and eavesdropper tests. An architecture, KeyBunch, for effectively deploying physical layer security in mobile and vehicular ad-hoc networks is also proposed. Finally, as a use case, KeyBunch is deployed in a secure vehicular communication architecture to highlight the advantages offered by physical layer security.
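A minimal sketch of the measurement and quantisation step (synthetic data and a simple mean threshold are used here for illustration; reconciliation and privacy amplification are omitted):

```python
import numpy as np

# Reciprocal channel measurements (e.g. RSSI) at the two nodes are quantised
# against a per-node mean threshold to obtain the preliminary key bits.
def quantise(measurements):
    """Binary quantisation: 1 if the sample is above the mean, else 0."""
    m = np.asarray(measurements, dtype=float)
    return (m > m.mean()).astype(int)

rng = np.random.default_rng(0)
channel = rng.normal(0.0, 1.0, 64)           # common reciprocal channel profile
node_a = channel + rng.normal(0, 0.1, 64)    # node A's noisy measurement
node_b = channel + rng.normal(0, 0.1, 64)    # node B's noisy measurement

key_a, key_b = quantise(node_a), quantise(node_b)
print("bit disagreement rate:", np.mean(key_a != key_b))
```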
Since their invention in the 1980s, behaviour-based systems have become very popular among roboticists. Their component-based nature facilitates the distributed implementation of systems, fosters reuse, and allows for early testing and integration. However, the distributed approach necessitates the interconnection of many components into a network in order to realise complex functionalities. This network is crucial to the correct operation of the robotic system. There are few sound design techniques for behaviour networks, especially if the systems shall realise task sequences. Therefore, the quality of the resulting behaviour-based systems is often highly dependent on the experience of their developers.
This dissertation presents a novel integrated concept for the design and verification of behaviour-based systems that realise task sequences. Part of this concept is a technique for encoding task sequences in behaviour networks. Furthermore, the concept provides guidance to developers of such networks. Based on a thorough analysis of methods for defining sequences, Moore machines have been selected for representing complex tasks. With the help of the structured workflow proposed in this work and the developed accompanying tool support, Moore machines defining task sequences can be transferred automatically into corresponding behaviour networks, resulting in less work for the developer and a lower risk of failure.
Due to the common integration of automatically and manually created behaviour-based components, a formal analysis of the final behaviour network is reasonable. For this purpose, the dissertation at hand presents two verification techniques and justifies the selection of model checking. A novel concept for applying model checking to behaviour-based systems is proposed according to which behaviour networks are modelled as synchronised automata. Based on such automata, properties of behaviour networks that realise task sequences can be verified or falsified. Extensive graphical tool support has been developed in order to assist the developer during the verification process.
Several examples are provided in order to illustrate the soundness of the presented design and verification techniques. The applicability of the integrated overall concept to real-world tasks is demonstrated using the control system of an autonomous bucket excavator. It can be shown that the proposed design concept is suitable for developing complex sophisticated behaviour networks and that the presented verification technique allows for verifying real-world behaviour-based systems.
This thesis deals with the development of a tractor front loader scale which measures the payload continuously, independent of the center of gravity of the payload, and unaffected by the position and movements of the loader. To achieve this, a mathematical model of a common front loader is simplified, which makes it possible to identify its parameters by a repeatable and automatic procedure. By measuring accelerations as well as cylinder forces, the payload is determined continuously during the working process. Finally, a prototype was built and the scale was tested on a tractor.
Today’s pervasive availability of computing devices enabled with wireless communication and location or inertial sensing capabilities is unprecedented. The number of smartphones sold worldwide is still growing, and increasing numbers of sensor-enabled accessories are available which a user can wear in a shoe or at the wrist for fitness tracking, or just put on temporarily to measure vital signs. Despite this availability of computing and sensing hardware, applications exploit only a limited part of the full potential of the information inherent to such sensor deployments. Most applications build upon a vertical design which encloses a narrowly defined sensor setup and algorithms specifically tailored to suit the application’s purpose. Successful technologies, however, such as the OSI model, which serves as the basis for Internet communication, have used a horizontal design that allows high-level communication protocols to be run independently of the actual lower-level protocols and physical medium access. This thesis contributes to a more horizontal design of human activity recognition systems at two stages. First, it introduces an integrated toolchain to facilitate the entire process of building activity recognition systems and to foster sharing and reusing of individual components. At the second stage, a novel method for the automatic integration of new sensors to increase a system’s performance is presented and discussed in detail.
The integrated toolchain is built around an efficient toolbox of parametrizable components for interfacing sensor hardware, synchronization and arrangement of data streams, filtering and extraction of features, classification of feature vectors, and interfacing output devices and applications. The toolbox emerged as an open-source project through several research projects and is actively used by research groups. Furthermore, the toolchain supports recording, monitoring, annotation, and sharing of large multi-modal data sets for activity recognition through a set of integrated software tools and a web-enabled database.
The method for automatically integrating a new sensor into an existing system is, at its core, a variation of well-established principles of semi-supervised learning: (1) unsupervised clustering to discover structure in data, (2) the assumption that cluster membership is correlated with class membership, and (3) obtaining a small number of labeled data points for each cluster, from which the cluster labels are inferred. In most semi-supervised approaches, however, the labels are the ground truth provided by the user. By contrast, the approach presented in this thesis uses a classifier trained on an N-dimensional feature space (the old classifier) to provide labels for a few points in an (N+1)-dimensional feature space, which are used to generate a new, (N+1)-dimensional classifier. The different factors that make a distribution difficult to handle are discussed, a detailed description of heuristics designed to mitigate the influence of such factors is provided, and a detailed evaluation is presented on a set of over 3000 sensor combinations from 3 multi-user experiments that have been used by a variety of previous studies of different activity recognition methods.
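A simplified sketch of this label-transfer idea (cluster count, classifiers, and sampling are illustrative assumptions, not the heuristics developed in the thesis):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def integrate_new_sensor(old_clf, X_old, x_new, points_per_cluster=5, n_clusters=8):
    """Train an (N+1)-dimensional classifier using labels transferred from the old one."""
    X_ext = np.column_stack([X_old, x_new])              # (N+1)-dimensional feature space
    clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X_ext)
    y_ext = np.empty(len(X_ext), dtype=object)
    for c in range(n_clusters):
        members = np.where(clusters == c)[0]
        sample = members[:points_per_cluster]             # a few points per cluster
        # the old N-dimensional classifier labels those points; the majority
        # label is assigned to the whole cluster
        labels, counts = np.unique(old_clf.predict(X_old[sample]), return_counts=True)
        y_ext[members] = labels[np.argmax(counts)]
    return KNeighborsClassifier().fit(X_ext, y_ext)

# synthetic demo: an "old" 2-feature classifier and a new, correlated third sensor channel
rng = np.random.default_rng(1)
X_old = rng.normal(size=(300, 2))
y_true = (X_old[:, 0] + X_old[:, 1] > 0).astype(str)      # two activity classes
old_clf = KNeighborsClassifier().fit(X_old, y_true)
x_new = X_old[:, 0] + rng.normal(0, 0.3, 300)
new_clf = integrate_new_sensor(old_clf, X_old, x_new)
```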
Large displays become more and more popular, due to dropping prices. Their size and high resolution support collaboration, and they are capable of displaying even large datasets in one view. This becomes even more interesting as the number of big-data applications increases. The increased screen size and other properties of large displays pose new challenges to human-computer interaction with these screens. This includes issues such as limited scalability with the number of users and a diversity of input devices in general, leading to increased learning effort for users, and more.
Using smart phones and tablets as interaction devices for large displays can solve many of these issues. Since they are almost ubiquitous today, users can bring their own device. This approach scales well with the number of users. These mobile devices are easy and intuitive to use and allow for new interaction metaphors, as they feature a wide array of input and output capabilities, such as touch screens, cameras, accelerometers, microphones, speakers, Near-Field Communication, WiFi, etc.
This thesis presents a concept to solve the issues posed by large displays. We show proofs of concept, with specialized approaches demonstrating the viability of the concept. A generalized, eyes-free technique using smartphones or tablets to interact with any kind of large display, regardless of hardware or software, then overcomes the limitations of the specialized approaches. This is implemented in a large display application that is designed to run in a multitude of environments, including both 2D and 3D display setups. A special visualization method is used to combine 2D and 3D data in a single visualization.
Additionally, the thesis presents several approaches to solve common issues with large-display interaction, such as target sizes on large displays becoming too small, expensive tracking hardware, and eyes-free interaction through virtual buttons. These methods provide alternatives and context for the main contribution.
In this thesis we extend the worst-case modeling approach as first introduced by Hua and Wilmott (1997) (option pricing in discrete time) and Korn and Wilmott (2002) (portfolio optimization in continuous time) in various directions.
In the continuous-time worst-case portfolio optimization model (as first introduced by Korn and Wilmott (2002)), the financial market is assumed to be under the threat of a crash in the sense that the stock price may crash by an unknown fraction at an unknown time. It is assumed that only an upper bound on the size of the crash is known and that the investor prepares for the worst-possible crash scenario. That is, the investor aims to find the strategy maximizing her objective function in the worst-case crash scenario.
In the first part of this thesis, we consider the model of Korn and Wilmott (2002) in the presence of proportional transaction costs. First, we treat the problem without crashes and show that the value function is the unique viscosity solution of a dynamic programming equation (DPE) and then construct the optimal strategies. We then consider the problem in the presence of crash threats, derive the corresponding DPE and characterize the value function as the unique viscosity solution of this DPE.
In the last part, we consider the worst-case problem with a random number of crashes by proposing a regime switching model in which each state corresponds to a different crash regime. We interpret each of the crash-threatened regimes of the market as states in which a financial bubble has formed which may lead to a crash. In this model, we prove that the value function is a classical solution of a system of DPEs and derive the optimal strategies.
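In a common formulation following Korn and Wilmott (2002) (the notation in the thesis may differ), the worst-case problem reads
\[
  \sup_{\pi} \; \inf_{(\tau, k):\, 0 \le k \le k^*} \mathbb{E}\!\left[ U\big( X_T^{\pi, \tau, k} \big) \right],
\]
where \(\pi\) is the investor's portfolio strategy, \(\tau\) the crash time, \(k\) the crash size with known upper bound \(k^*\), \(U\) the utility function, and \(X_T^{\pi,\tau,k}\) the terminal wealth.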
Specification of asynchronous circuit behaviour becomes more complex as the complexity of today's System-On-a-Chip (SOC) designs increases. This also causes the Signal Transition Graphs (STGs) – interpreted Petri nets for the specification of asynchronous circuit behaviour – to become bigger and more complex, which makes it more difficult, sometimes even impossible, to synthesize an asynchronous circuit from an STG with a tool like petrify [CKK+96] or CASCADE [BEW00]. It has, therefore, been suggested to decompose the STG as a first step; this leads to a modular implementation [KWVB03] [KVWB05], which can reduce synthesis effort by possibly avoiding state explosion or by allowing the use of library elements. Decomposition approaches for STGs were presented in [VW02], [KKT93], and [Chu87a]. The decomposition algorithm by Vogler and Wollowski [VW02] is based on that of Chu [Chu87a] but is much more generally applicable than the ones in [KKT93] [Chu87a], and its correctness has been proved formally in [VW02].
This dissertation begins with Petri net background, described in chapter 2. It starts with a class of Petri nets called place/transition (P/T) nets. Then STGs, a subclass of P/T nets, are reviewed. Background on net decomposition is presented in chapter 3. It begins with the structural decomposition of P/T nets for analysis purposes – liveness and boundedness of the net. Then STG decomposition for synthesis from [VW02] is described.
The decomposition method from [VW02] could still be improved to deal with STGs from real applications and to give better decomposition results. Some improvements of [VW02] that improve the decomposition results and increase the algorithm's efficiency are discussed in chapter 4. These improvement ideas were suggested in [KVWB04], and some of them have been proved formally in [VK04].
The decomposition method from [VW02] is based on net reduction to find an output block component. A large amount of work has to be done to reduce an initial specification until the final component is found. This reduction is not always possible, which causes inputs initially classified as irrelevant to become relevant inputs for the component. But under certain conditions (e.g. if structural auto-conflicts turn out to be non-dynamic) some of them can be reclassified as irrelevant. If this is not done, the specifications become unnecessarily large, which in turn leads to unnecessarily large implemented circuits. Instead of reduction, a new approach, presented in chapter 5, first decomposes the original net into structural components. An initial output block component is found by composing the structural components. Then, a final output block component is obtained by net reduction.
As we deal with the structure of a net most of the time, it is useful to have a structural abstraction of the net. A structural abstraction algorithm [Kan03] is presented in chapter 6. It can improve the performance in finding an output block component in most cases [War05] [Taw04]. Also, the structure graph is in most cases smaller than the net itself. This increases the efficiency of the decomposition algorithm, because it allows the transitions contained in a node of the structure graph to be contracted at the same time if the structure graph is used as the internal representation of the net.
Chapter 7 discusses the application of STG decomposition in asynchronous circuit design. Application to speed-independent circuits is discussed first. After that, 3D circuits synthesized from extended burst mode (XBM) specifications are discussed. An algorithm for translating STG specifications to XBM specifications was first suggested in [BEW99]. This algorithm first derives the state machine from the STG specification and then translates the state machine into an XBM specification. An XBM specification, though it is a state machine, allows some concurrency. This concurrency can be translated directly, without deriving all possible states. An algorithm which directly translates STGs to XBM specifications is presented in chapter 7.3.1. Finally, DESI, a tool to decompose STGs, and its decomposition results are presented.
Industrial design has a long history. With the introduction of Computer-Aided Engineering, industrial design was revolutionised. Due to the newly found support, the design workflow changed, and with the introduction of virtual prototyping, new challenges arose. These new engineering problems have triggered
new basic research questions in computer science.
In this dissertation, I present a range of methods which support different components of the virtual design cycle, from modifications of a virtual prototype and optimisation of said prototype, to analysis of simulation results.
Starting with a virtual prototype, I support engineers by supplying intuitive discrete normal vectors which can be used to interactively deform the control mesh of a surface. I provide and compare a variety of different normal definitions which have different strengths and weaknesses. The best choice depends on
the specific model and on an engineer’s priorities. Some methods have higher accuracy, whereas other methods are faster.
I further provide an automatic means of surface optimisation in the form of minimising total curvature. This minimisation reduces surface bending, and therefore, it reduces material expenses. The best results can be obtained for analytic surfaces; however, the technique can also be applied to real-world examples.
Moreover, I provide engineers with a curvature-aware technique to optimise mesh quality. This helps to avoid degenerate triangles which can cause numerical issues. It can be applied to any component of the virtual design cycle: as a direct modification of the virtual prototype (depending on the surface definition), during optimisation, or dynamically during simulation.
Finally, I have developed two different particle relaxation techniques that both support two components of the virtual design cycle. The first component for which they can be used is discretisation. To run computer simulations on a model, it has to be discretised. Particle relaxation uses an initial sampling,
and it improves it with the goal of uniform distances or curvature-awareness. The second component for which they can be used is the analysis of simulation results. Flow visualisation is a powerful tool in supporting the analysis of flow fields through the insertion of particles into the flow, and through tracing their movements. The particle seeding is usually uniform, e.g. for an integral surface, one could seed on a square. Integral surfaces undergo strong deformations, and they can have highly varying curvature. Particle relaxation redistributes the seeds on the surface depending on surface properties like local deformation or curvature.
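A much-simplified sketch of a particle relaxation step towards uniform spacing (curvature awareness and surface constraints, which the thesis handles, are omitted):

```python
import numpy as np

def relax_particles(points, iterations=50, step=0.05, k=6):
    """Push each particle away from its k nearest neighbours so that the
    spacing becomes more uniform (plain 2D version, no surface constraint)."""
    pts = np.asarray(points, dtype=float).copy()
    for _ in range(iterations):
        diff = pts[:, None, :] - pts[None, :, :]      # pairwise difference vectors
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, np.inf)
        nn = np.argsort(dist, axis=1)[:, :k]          # k nearest neighbours
        move = np.zeros_like(pts)
        for i in range(len(pts)):
            for j in nn[i]:
                d = max(dist[i, j], 1e-3)
                move[i] += diff[i, j] / (d * d)        # short-range repulsion
        pts = np.clip(pts + step * move / k, 0.0, 1.0)  # keep particles in the unit square
    return pts

# usage: relax a random seeding in the unit square towards uniform spacing
seeds = np.random.default_rng(2).random((200, 2))
relaxed = relax_particles(seeds)
```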
In Rheinland-Pfalz, rural regions are of great importance as spaces for living, business, nature, and recreation. Regarding their development, it can be observed that these regions exhibit shrinkage tendencies, such as a decline in population, but in part also positive development dynamics, such as favourable demographic and socio-economic developments. In those regions showing positive development dynamics, the question arises in particular as to "the success factors" that have favoured this development.
Federal spatial planning has recognized the potential of these areas to contribute to overall economic growth. In 2010 and 2013, the Conference of Ministers for Spatial Planning resolved to concretise and further develop the guiding principles for spatial development in Germany so that the role and importance of rural areas as "independent economic, cultural and living spaces, which can bring their potentials to bear better by developing their own strengths, but also through interlinking and networking" is worked out even more strongly. The current draft of the guiding principles and action strategies for spatial development in Germany 2013 shows that the envisaged guiding principle "strengthening competitiveness" locates both rural and urbanized economic growth areas outside the metropolitan regions, which are to be strengthened as business, innovation and technology locations within the framework of a spatial development strategy.
Thus, for rural regions the question arises in particular which factors can be decisive in fostering a positive development dynamic of their own. This dissertation identifies, in the federal state of Rheinland-Pfalz, for which rural regions are of exemplary importance, districts that show positive development dynamics and examines them with regard to their success factors. These are the Donnersbergkreis, the Rhein-Hunsrück-Kreis, and the Landkreis Südliche Weinstraße including the city of Landau in der Pfalz.
To approach the topic of success factors theoretically, approaches for explaining different regional development dynamics are first considered, and criteria are derived that serve to evaluate existing approaches, programmes and projects for promoting such developments in rural regions. The study is furthermore based on empirical surveys, such as guided interviews with experts, regional actors from administration and business, and regional and municipal decision-makers, as well as a written survey of the population in all three districts, in order to gain further insights into (specific) success factors.
It becomes apparent that the positive development dynamics are significantly influenced by the different framework conditions, such as the spatial location of the districts, transport infrastructure and natural endowment, historical development paths, as well as by particular company settlements and the strategic action of regional and municipal actors. In connection with the topic of the European metropolitan regions, it should be noted that the examined areas also benefit from their proximity to these regions. Above all, a well-developed transport infrastructure, jobs, and universities play an important role here.
Furthermore, it is evident that, for example, a business-friendly administration and the establishment of networks between politics, business and administration have a supporting effect. Another main focus in connection with positive development dynamics is the existence of highly innovative, often small and medium-sized enterprises. The strong commitment of individual persons, who often act as the driving forces of regional development, must also be regarded as decisive for the development of an area. In addition, in the examined rural regions, which are strongly shaped by the manufacturing sector, in-company education and training, early career orientation in schools, and the placement of internships play an important role and contribute to securing the skilled labour base in the regions. The household survey conducted shows that the areas offer a very high quality of housing and life, which binds the population more strongly to the area.
Building on the identified success factors, approaches for action in spatial planning, state and regional planning, regional development and relevant policy fields are developed. The central question here is how existing development dynamics can be supported, expanded, and initiated in other regions. With regard to transferability to other regions, it should be mentioned that this is indeed possible, although the identified success factors often depend on the specific framework conditions of the area. What is essential is that the mentioned approaches and strategies for action provide orientation. When pursuing the presented strategies, however, it must be taken into account that the developments in the study areas have taken place over a long period of time. Thus, a "strategy of small steps" should also be adopted.
Today's ubiquity of visual content, driven by the availability of broadband Internet, low-priced storage, and the omnipresence of camera-equipped mobile devices, conveys much of our thinking and feeling as individuals and as a society. As a result, video repositories are growing at enormous rates, with content now being embedded and shared through social media. To make use of this new form of social multimedia, concept detection – the automatic mapping between semantic concepts and video content – has to be extended such that concept vocabularies are synchronized with current real-world events, systems can perform scalable concept learning with thousands of concepts, and high-level information such as sentiment can be extracted from visual content. To catch up with these demands, the following three contributions are made in this thesis: (i) concept detection is linked to trending topics, (ii) visual learning from web videos is presented, including the proper treatment of tags as concept labels, and (iii) the extension of concept detection with adjective-noun pairs for sentiment analysis is proposed.
In order for concept detection to satisfy users' current information needs, the notion of fixed concept vocabularies has to be reconsidered. This thesis presents a novel concept learning approach built upon dynamic vocabularies, which are automatically augmented with trending topics mined from social media. Once discovered, trending topics are evaluated by forecasting their future progression to predict high impact topics, which are then either mapped to an available static concept vocabulary or trained as individual concept detectors on demand. It is demonstrated in experiments on YouTube video clips that by a visual learning of trending topics, improvements of over 100% in concept detection accuracy can be achieved over static vocabularies (n=78,000).
To remove manual efforts related to training data retrieval from YouTube and noise caused by tags being coarse, subjective and context-dependent, this thesis suggests an automatic concept-to-query mapping for the retrieval of relevant training video material, and active relevance filtering to generate reliable annotations from web video tags. Here, the relevance of web tags is modeled as a latent variable, which is combined with an active learning label refinement. In experiments on YouTube, active relevance filtering is found to outperform both automatic filtering and active learning approaches, leading to a reduction of required label inspections by 75% as compared to an expert-annotated training dataset (n=100,000).
Finally, it is demonstrated that concept detection can serve as a key component to infer the sentiment reflected in visual content. To extend concept detection for sentiment analysis, adjective-noun pairs (ANPs) are proposed in this thesis as novel entities for concept learning. First, a large-scale visual sentiment ontology consisting of 3,000 ANPs is automatically constructed by mining the web. From this ontology a mid-level representation of visual content – SentiBank – is trained to encode the visual presence of 1,200 ANPs. This novel approach to visual learning is validated in three independent experiments on sentiment prediction (n=2,000), emotion detection (n=807) and pornographic filtering (n=40,000). SentiBank is shown to outperform known low-level feature representations (sentiment prediction, pornography detection) or to perform comparably to state-of-the-art methods (emotion detection).
Altogether, these contributions extend state-of-the-art concept detection approaches such that concept learning can be done autonomously from web videos on a large-scale, and can cope with novel semantic structures such as trending topics or adjective noun pairs, adding a new dimension to the understanding of video content.
With the discovery of acrylamide in foods and of its carcinogenic effect, attention was drawn to process-induced contaminants. Further heat-induced carcinogenic substances were discovered in a large number of foods, one of them being furan. Studies in rats and mice clearly showed its carcinogenicity, and further toxicological investigations support this finding. Nevertheless, the pathway of furan-induced carcinogenesis has not yet been elucidated. It is therefore still under debate whether the mechanism is directly genotoxic or indirect.
As part of the European Furan-RA project, this work was intended to contribute to answering this question. Specifically in the low-dose range below 2 mg/kg body weight, tissue changes and cytotoxic effects were investigated.
For histological examinations of the liver, rats in three dose groups of 0.1, 0.5, and 2.0 mg/kg body weight were treated for 28 days each. In addition to a control group for comparison, a further group with a subsequent two-week recovery period was examined. The paraffin sections of the five liver lobes were stained with haematoxylin-eosin and a PCNA antibody.
The randomised examination under the microscope revealed no dose-related tissue changes, and no indications of cancer-promoting proliferation could be found.
To gain insight at the cellular level, rat hepatoma cells and primary rat hepatocytes were incubated with various furan concentrations. Because of the high vapour pressure of furan, this was done in a closed vessel developed for this purpose, in which an equilibrium could be established between the medium and a sufficiently dimensioned headspace. The effective concentrations were monitored by means of suitable headspace gas chromatography. In primary hepatocytes, furan showed a concentration-dependent cytotoxicity with a determined EC50 of 0.0188 mM.
The further metabolites were also tested for their effect on cells. The most important phase I metabolite showed an EC50 value of 1.64 mM in primary hepatocytes and 0.55 mM in H4IIE cells. The very high reactivity of this cis-1,2-butenedial suggests that a large part of it already reacts in the medium before it can act on the cells. This explains the, compared to furan, enormously high effective concentration.
Upon further metabolisation with glutathione, the measured cytotoxicity increased again. The product mixture of these two reactants already showed significant effects from a total concentration of 0.025 mM. The physiologically intended detoxification therefore does not take place. How this effect arises could unfortunately not be clarified exactly. With an increasing butenedial fraction, however, the damaging effect increased markedly.
This shows, among other things, that a depletion of glutathione increases the effect of furan and that detoxification is not yet completed with this step. The results indicate that furan itself and at least some of its metabolites act toxically in the liver. This also applies to concentrations in a range that does not leave a sufficient safety margin to the possible daily human intake.
Even though no indications of tumours were apparent in the histological examinations, the in vitro data clearly point to a high potential, above all of the furan metabolite butenedial. To what extent it contributes to the carcinogenesis observed in rats should be the subject of further investigations.
The aim of this work was to synthesize and characterize new bidentate N,N,P-ligands and their corresponding heterobimetallic complexes. These bidentate pyridylpyrimidine aminophosphine ligands were synthesized by ring closure of two different enaminones (3-(dimethylamino)-1-(pyridine-2-yl)-prop-2-en-1-one or 3-(dimethylamino)-1-(pyridine-2-yl)-but-2-en-1-one) with an excess of guanidinium salts in the presence of base. The novel phosphine-functionalized guanidinium salts were prepared from 2-(diphenylphosphinyl)ethylamine or 3-(diphenyl-phosphinyl)propylamine. These bidentate N,N,P-ligands contain hard and soft donor sites, which allows the coordination of two different metal centers and the formation of bimetallic complexes. Such bimetallic complexes can exhibit unique behavior as a result of cooperation between the two metal atoms. First, the gold(I) complexes of all four ligands were synthesized. The gold metal coordinates only to the phosphorus atom, as proved by X-ray crystallography and 31P NMR spectroscopy. In addition to the gold(I) monometallic complexes, a trans-coordinated rhodium complex of the (2-amino)pyridylpyrimidine aminophosphine ligand was successfully prepared. The characterization of this complex was achieved by NMR and IR spectroscopy. Reacting the mono gold(I) complexes with different metal salts such as Pd(PhCN)2Cl2, ZnCl2, and the [Ru(p-cymene)Cl2] dimer gave the target heterobimetallic complexes. The second metal center coordinates to the N,N donor site, which was proved with the help of NMR spectroscopy and ESI-MS measurements. The Au(I) and Au-Zn complexes of the N,N,P-ligands were examined as catalysts for the hydroamidation reaction of cyclohexene with p-toluenesulfonamide. They did not show any activity under the tested conditions. Further studies are necessary to understand the catalytic activities and the cooperativity between the two metal atoms. In addition, bi- and trimetallic complexes with the rhodium compound could be synthesized and tested in different organic transformations. Furthermore, the synthesis of the chiral hydroxyl[2.2]paracyclophane substituted with five different aminopyrimidines was accomplished. These aminopyrimidine ligands were synthesized by a cyclization reaction of the hydroxyl[2.2]paracyclophane-substituted enaminone with an excess of the corresponding guanidinium salts under basic conditions. In the last part of this work, kinetic studies of the cyclopalladation reaction of the 2-(arylaminopyrimidin-4-yl)pyridine ligands with Pd(PhCN)2Cl2 were performed. These measurements were carried out using UV-Vis spectroscopy. The spectral studies of the cyclometallation step showed that the reaction follows second-order kinetics. In addition, a full kinetic investigation was performed at different temperatures and the activation parameters of complex formation were calculated.
The last couple of years have marked the entire field of information technology with the introduction of a new global resource, called data. Certainly, one can argue that large amounts of information and highly interconnected and complex datasets have been available since the dawn of the computer and even centuries before. However, it has been only a few years since digital data has exponentially expanded, diversified and interconnected into an overwhelming range of domains, generating an entire universe of zeros and ones. This universe represents a source of information with the potential of advancing a multitude of fields and sparking valuable insights. In order to obtain this information, the data needs to be explored, analyzed and interpreted.
While a large set of problems can be addressed through automatic techniques from fields like artificial intelligence, machine learning or computer vision, there are various datasets and domains that still rely on human intuition and experience in order to parse and discover hidden information. In such instances, the data is usually structured and represented in the form of an interactive visual representation that allows users to efficiently explore the data space and reach valuable insights. However, the experience, knowledge and intuition of a single person also have their limits. To address this, collaborative visualizations allow multiple users to communicate, interact and explore a visual representation by building on the different views and knowledge blocks contributed by each person.
In this dissertation, we explore the potential of subjective measurements and user emotional awareness in collaborative scenarios, and support flexible and user-centered collaboration in information visualization systems running on tabletop displays. We commence by introducing the concept of user-centered collaborative visualization (UCCV) and highlighting the context in which it applies. We continue with a thorough overview of the state of the art in the areas of collaborative information visualization, subjectivity measurement and emotion visualization, combinable tabletop tangibles, as well as browsing history visualizations. Based on a new web browser history visualization for exploring users’ parallel browsing behavior, we introduce two novel user-centered techniques for supporting collaboration in co-located visualization systems. To begin with, we inspect the particularities of detecting user subjectivity through brain-computer interfaces, and present two emotion visualization techniques for touch and desktop interfaces. These visualizations offer real-time or post-task feedback about the users’ affective states, both in single-user and collaborative settings, thus increasing emotional self-awareness and the awareness of other users’ emotions. For supporting collaborative interaction, a novel design for tabletop tangibles is described, together with a set of specifically developed interactions for supporting tabletop collaboration. These ring-shaped tangibles minimize occlusion, support touch interaction, can act as interaction lenses, and describe logical operations through nesting. The visualization and the two UCCV techniques are each evaluated individually, capturing a set of advantages and limitations of each approach. Additionally, the collaborative visualization supported by the two UCCV techniques is collectively evaluated in three user studies that offer insight into the specifics of interpersonal interaction and task transition in collaborative visualization. The results show that the proposed collaboration support techniques not only improve the efficiency of the visualization, but also help maintain the collaboration process and support a balanced social interaction.
Maltose binding protein (MBP) is a monomeric, two-domain protein containing 370 amino acids. Seven double-cysteine mutants of MBP were generated, with one cysteine each in the active cleft at position 298 and the second cysteine distributed over both domains of the protein. These cysteines were spin labeled, and the distances between the labels in biradical pairs were determined by pulsed double electron-electron resonance (DEER) measurements. The values were compared with theoretical predictions of the distances between the labels in biradicals constructed by molecular modeling from the crystal structure of MBP without maltose and were found to be in excellent agreement.
MBP is in a molten globule state at pH 3.3 and is known to still bind its substrate maltose.
The ligand-binding affinity of the molten globule and the native state of MBP was studied by isothermal titration calorimetry. The ligand-binding affinity measured for the native state of MBP was found to be comparable to that reported in the literature.
Simultaneous measurements to investigate the molten globule state of MBP were implemented, including far- and near-UV CD and 8-anilino-1-naphthalene sulfonate (ANS) binding employing fluorescence techniques. Guanidine hydrochloride, urea, and thermal denaturation studies were carried out to compare the stability of the two states of maltose binding protein.
In cw experiments, X-band EPR measurements at low temperature indirectly confirm that all distances in the biradicals are above 20 Å, since no evidence of dipolar interactions was observed in the immobilized spectra.
DEER measurements of MBP in the molten globule state yielded a broad distance distribution, as is to be expected if there is no defined tertiary structure and the individual helices point in all possible directions.
In a networked system, the communication system is indispensable but often the weakest link with respect to performance and reliability. This particularly holds for wireless communication systems, where the error- and interference-prone medium and the characteristics of network topologies pose special challenges. However, there are many wireless network scenarios in which a certain quality of service has to be provided despite these conditions. In this regard, distributed real-time systems, whose realization by wireless multi-hop networks is becoming increasingly popular, are a particular challenge. For such systems, it is of crucial importance that communication protocols are deterministic and provide the required amount of efficiency and predictability, while additionally considering the scarce hardware resources that are a major limiting factor of wireless sensor nodes. This, in turn, places demands not only on the behavior of a protocol but also on its implementation, which has to comply with timing and resource constraints.
The first part of this thesis presents a deterministic protocol for wireless multi-hop networks with time-critical behavior. The protocol is referred to as Arbitrating and Cooperative Transfer Protocol (ACTP), and is an instance of a binary countdown protocol. It enables the reliable transfer of bit sequences of adjustable length and deterministically resolves contention among nodes based on a flexible priority assignment, with constant delays, and within configurable arbitration radii. The protocol's key requirement is the collision-resistant encoding of bits, which is achieved by the incorporation of black bursts. Besides revisiting black bursts and proposing measures to optimize their detection, robustness, and implementation on wireless sensor nodes, the first part of this thesis presents the mode of operation and time behavior of ACTP. In addition, possible applications of ACTP are illustrated, presenting solutions to well-known problems of distributed systems like leader election and data dissemination. Furthermore, results of experimental evaluations with customary wireless transceivers are outlined to provide evidence of the protocol's implementability and benefits.
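For illustration, the arbitration principle of a binary countdown protocol can be sketched as follows (black-burst encoding, timing, and arbitration radii, which ACTP adds, are omitted):

```python
# Nodes transmit their priority bits most-significant-first; a node that sends
# a recessive bit (0) while observing a dominant bit (1) on the medium
# withdraws, so the highest priority wins deterministically.
def binary_countdown(priorities, bits=8):
    """Return the winning priority among the contending nodes."""
    contenders = set(priorities)
    for position in reversed(range(bits)):          # most significant bit first
        # dominant bit observed on the medium in this slot (logical OR of all senders)
        medium = any((p >> position) & 1 for p in contenders)
        if medium:
            # nodes that sent a recessive bit while a dominant bit was present withdraw
            contenders = {p for p in contenders if (p >> position) & 1}
    return max(contenders)

# usage: three nodes contend with different priorities; 0b1011 (11) wins
print(binary_countdown([0b0101, 0b1011, 0b1001]))
```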
In the second part of this thesis, the focus is shifted from concrete deterministic protocols to their model-driven development with the Specification and Description Language (SDL). Though SDL is well established in the domain of telecommunication and distributed systems, the predictability of its implementations is often insufficient, as previous projects have shown. To increase this predictability and to improve SDL's applicability to time-critical systems, real-time tasks, an established concept in the design of real-time systems, are transferred to SDL and extended to cover node-spanning system tasks. In this regard, a priority-based execution and suspension model is introduced in SDL, which enables task-specific priority assignments in the SDL specification that are orthogonal to the static structure of SDL systems and control transition execution orders at design level as well as at implementation level. Both the formal incorporation of real-time tasks into SDL and their implementation in a novel scheduling strategy are discussed in this context. By means of evaluations on wireless sensor nodes, evidence is provided that these extensions reduce worst-case execution times substantially, and improve the predictability of SDL implementations and the language's applicability to real-time systems.
Many tasks in image processing can be tackled by modeling an appropriate data fidelity term \(\Phi: \mathbb{R}^n \rightarrow \mathbb{R} \cup \{+\infty\}\) and then solving one of the regularized minimization problems \begin{align*} (P_{1,\tau}) \qquad & \mathop{\rm argmin}_{x \in \mathbb{R}^n} \big\{ \Phi(x) \;{\rm s.t.}\; \Psi(x) \leq \tau \big\}, \\ (P_{2,\lambda}) \qquad & \mathop{\rm argmin}_{x \in \mathbb{R}^n} \big\{ \Phi(x) + \lambda \Psi(x) \big\}, \quad \lambda > 0, \end{align*} with some function \(\Psi: \mathbb{R}^n \rightarrow \mathbb{R} \cup \{+\infty\}\) and a good choice of the parameter(s). Two tasks arise naturally here:
1. Study the solver sets \({\rm SOL}(P_{1,\tau})\) and \({\rm SOL}(P_{2,\lambda})\) of the minimization problems.
2. Ensure that the minimization problems have solutions.
This thesis provides contributions to both tasks. Regarding the first task, for a more special setting we prove that there are intervals \((0,c)\) and \((0,d)\) such that the set-valued curves \begin{align*} \tau &\mapsto {\rm SOL}(P_{1,\tau}), \quad \tau \in (0,c), \\ \lambda &\mapsto {\rm SOL}(P_{2,\lambda}), \quad \lambda \in (0,d) \end{align*} are the same, up to an order-reversing parameter change \(g: (0,c) \rightarrow (0,d)\). Moreover, we show that the solver sets change all the time while \(\tau\) runs from \(0\) to \(c\) and \(\lambda\) runs from \(d\) to \(0\).
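The following small Python experiment is a sketch for the special case \(\Phi(x) = \frac{1}{2}\|Ax-b\|_2^2\) and \(\Psi(x) = \|x\|_1\); the problem sizes and the ISTA solver are illustrative choices and not taken from the thesis. Recording \(\tau = \Psi(x_\lambda)\) for the minimizers of \((P_{2,\lambda})\) makes the order-reversing nature of the parameter change visible.

    # Solve (P_{2,lambda}) by ISTA and record tau = Psi(x_lambda) for several lambda.
    import numpy as np

    def ista(A, b, lam, steps=2000):
        L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(steps):
            g = A.T @ (A @ x - b)
            x = x - g / L
            x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)   # soft threshold
        return x

    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((20, 40)), rng.standard_normal(20)
    for lam in (0.1, 0.5, 1.0, 2.0):
        tau = np.abs(ista(A, b, lam)).sum()
        print(f"lambda = {lam:4.1f}  ->  tau = Psi(x_lambda) = {tau:.3f}")
    # larger lambda yields smaller tau, i.e. the parameter change is order reversing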
In the presence of lower semicontinuity, the second task is accomplished if we additionally have coercivity. We regard lower semicontinuity and coercivity from a topological point of view and develop a new technique for proving lower semicontinuity plus coercivity.
Dropping any lower semicontinuity assumption, we also prove a theorem on the coercivity of a sum of functions.
The present thesis describes the development and validation of a viscosity adaption method for the numerical simulation of non-Newtonian fluids on the basis of the Lattice Boltzmann Method (LBM), as well as the development and verification of the related software bundle SAM-Lattice.
By now, Lattice Boltzmann Methods are established as an alternative approach to classical computational fluid dynamics
methods. The LBM has been shown to be an accurate and efficient tool for the numerical simulation of weakly compressible or incompressible fluids. Fields of application range from turbulent simulations through thermal problems to acoustic calculations, among others. The transient nature of the method and the need for a regular-grid-based, non-body-conformal discretization make the LBM ideally suited for simulations involving complex solids. Such geometries are common, for instance, in the food processing industry, where fluids are mixed by static mixers or agitators. Those fluid flows are often laminar and non-Newtonian.
This work is motivated by the immense practical use of the Lattice Boltzmann Method, which is limited due to stability issues. The stability of the method is mainly influenced by the discretization and the viscosity of the fluid. Thus, simulations of non-Newtonian fluids, whose kinematic viscosity depends on the shear rate, are problematic. Several authors have shown that the LBM is capable of simulating these fluids. However, the vast majority of the simulations in the literature are carried out for simple geometries and/or moderate shear rates, where the LBM is still stable. Special care has to be taken for practical non-Newtonian Lattice Boltzmann simulations in order to keep them stable. A straightforward way is to truncate the modeled viscosity range by numerical stability criteria. This is an effective approach, but from the physical point of view the viscosity bounds are chosen arbitrarily. Moreover, these bounds depend on and vary with the grid and time step size and, therefore, with the simulation Mach number, which is freely chosen at the start of the simulation. Consequently, the modeled viscosity range may not fit the actual range of the physical problem, because the correct simulation Mach number is unknown a priori. A way around this is to perform precursor simulations on a fixed grid to determine a suitable time step size and simulation Mach number, respectively. These precursor simulations can be time consuming and expensive, especially for complex cases and a number of operating points. This makes the LBM unattractive for practical simulations of non-Newtonian fluids.
The essential novelty of the method, developed in the course of this thesis, is that the numerically modeled viscosity range is consistently adapted to the actual physically exhibited viscosity range through change of the simulation time step and the simulation Mach number, respectively, while the simulation is running. The algorithm is robust, independent of the Mach number the simulation was started with, and applicable for stationary flows as well as transient flows. The method for the viscosity adaption will be referred to as the "viscosity adaption method (VAM)" and the combination with LBM leads to the "viscosity adaptive LBM (VALBM)".
Besides the introduction of the VALBM, a goal of this thesis is to offer assistance in the spirit of a theory guide to students and research assistants concerning the theory of the Lattice Boltzmann Method and its implementation in SAM-Lattice. In Chapter 2, the mathematical foundation of the LBM is given and the route from the BGK approximation of the Boltzmann equation to the Lattice Boltzmann (BGK) equation is delineated in detail.
The derivation is restricted to isothermal flows only. Restrictions of the method, such as the limitation to low Mach number flows, are highlighted, and the accuracy of the method is discussed.
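As a rough illustration of the scheme derived in Chapter 2, the following one-dimensional (D1Q3) Python sketch performs the BGK collision and streaming steps; the lattice, weights, and parameters are textbook choices for an isothermal, low-Mach setting and not the discretization used in SAM-Lattice.

    import numpy as np

    w = np.array([2/3, 1/6, 1/6])        # D1Q3 weights
    c = np.array([0, 1, -1])             # lattice velocities
    cs2 = 1.0 / 3.0                      # lattice speed of sound squared

    def equilibrium(rho, u):
        # standard second-order isothermal equilibrium distributions
        return np.array([w[i] * rho * (1 + c[i]*u/cs2 + (c[i]*u)**2/(2*cs2**2) - u*u/(2*cs2))
                         for i in range(3)])

    def bgk_step(f, omega):
        rho = f.sum(axis=0)                          # density
        u = (c[:, None] * f).sum(axis=0) / rho       # velocity
        f = f + omega * (equilibrium(rho, u) - f)    # BGK collision
        for i in range(3):                           # periodic streaming
            f[i] = np.roll(f[i], c[i])
        return f

    f = equilibrium(np.ones(32), 0.05 * np.sin(2*np.pi*np.arange(32)/32))
    for _ in range(100):
        f = bgk_step(f, omega=1.2)
    print(f.sum(axis=0)[:4])   # density stays close to 1 (mass conservation)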
SAM-Lattice is a C++ software bundle developed by the author and his colleague Dipl.-Ing. Andreas Schneider. It is a highly automated package for the simulation of isothermal flows of incompressible or weakly compressible fluids in 3D on the basis of the Lattice Boltzmann Method. At the time of writing of this thesis, SAM-Lattice comprises five components. The main components are the highly automated lattice generator SamGenerator and the Lattice Boltzmann solver SamSolver. Postprocessing is done with ParaSam, which is our extension of the open source visualization software ParaView. Additionally, domain decomposition for MPI parallelism is done by SamDecomposer, which makes use of the graph partitioning library MeTiS. Finally, all mentioned components can be controlled through a user-friendly GUI (SamLattice), implemented by the author using Qt, which includes features to visually track output data.
In Chapter 3, some fundamental aspects of the implementation of the main components, including the corresponding flow charts, are discussed. Further details on the implementation are given in the comprehensive programmer's guides to SamGenerator and SamSolver.
In order to ensure the functionality of the implementation of SamSolver, the solver is verified in Chapter 4 for Stokes's First Problem, the suddenly accelerated plate, and for Stokes's Second Problem, the oscillating plate, both for Newtonian fluids. Non-Newtonian fluids are modeled in SamSolver with the power-law model according to Ostwald de Waele. The implementation for non-Newtonian fluids is verified for the Hagen-Poiseuille channel flow in conjunction with a convergence analysis of the method. At the same time, the local grid refinement as it is implemented in SamSolver, is verified. Finally, the verification of higher order boundary conditions is done for the 3D Hagen-Poiseuille pipe flow for both Newtonian and non-Newtonian fluids.
In Chapter 5, the theory of the viscosity adaption method is introduced. For the adaption process, a target collision frequency or target simulation Mach number must be chosen and the distributions must be rescaled according to the modified time step size. A convenient choice is one of the stability bounds. The time step size for the adaption step is deduced from the target collision frequency \(\Omega_t\) and the currently minimal or maximal shear rate in the system, while obeying auxiliary conditions for the simulation Mach number. The adaption is done in the collision step of the Lattice Boltzmann algorithm. We use the transformation matrices of the MRT model to map from distribution space to moment space and vice versa. The actual scaling of the distributions is conducted on the back mapping, because we use the transformation matrix on the basis of the new adaption time step size. This is followed by an additional rescaling of the non-equilibrium part of the distributions, owing to the form of the definition of the discrete stress tensor in the LBM context. For that reason it is clear that the VAM is applicable to the SRT model as well as the MRT model, with virtually no extra cost in the latter case. Also in Chapter 5, the multi-level treatment is discussed.
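A strongly simplified sketch of the time-step adaption idea follows, assuming a power-law (Ostwald-de Waele) fluid and the standard lattice-unit relations between viscosity, collision frequency, and Mach number; the actual VAM additionally rescales the distributions in moment space, which is omitted here, and all names and values are illustrative.

    import math

    def adapt_time_step(K, n, shear_rate_extreme, dx, omega_target,
                        u_phys_max, mach_max=0.1):
        """Pick a new time step so that the extreme physical viscosity maps onto the
        target collision frequency, while respecting the Mach number bound."""
        nu_phys = K * shear_rate_extreme ** (n - 1.0)       # power-law viscosity [m^2/s]
        nu_lat_target = (1.0 / omega_target - 0.5) / 3.0    # lattice viscosity for omega_t
        dt = nu_lat_target * dx * dx / nu_phys              # from nu_lat = nu_phys*dt/dx^2
        dt_mach = mach_max * dx / (math.sqrt(3.0) * u_phys_max)
        return min(dt, dt_mach)                             # obey the Mach constraint

    # Example: shear-thinning fluid (n < 1), adapting towards a target close to a stability bound
    print(adapt_time_step(K=0.5, n=0.6, shear_rate_extreme=200.0,
                          dx=1e-3, omega_target=1.9, u_phys_max=0.5))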
Depending on the target collision frequency and the target Mach number, the VAM can be used to optimally exploit the viscosity range that can be modeled within the stability bounds, or it can be used to drastically accelerate the simulation. This is shown in Chapter 6. The viscosity adaptive LBM is verified in the stationary case for the Hagen-Poiseuille channel flow and in the transient case for the Womersley flow, i.e., the pulsatile 3D Hagen-Poiseuille pipe flow. Although the VAM is used here for fluids that can be modeled with the power-law approach, the implementation of the VALBM is straightforward for other non-Newtonian models, e.g., the Carreau-Yasuda or Cross model. In the same chapter, the VALBM is validated for the case of a propeller viscosimeter developed at the chair SAM. To this end, the experimental data of the torque on the impeller for three shear-thinning non-Newtonian liquids serve for the validation. The VALBM shows excellent agreement with the experimental data for all of the investigated fluids and at every operating point. For reasons of comparison, a series of standard LBM simulations is carried out with different simulation Mach numbers, which partly show errors of several hundred percent. Moreover, in Chapter 7, a sensitivity analysis of the parameters used within the VAM is conducted for the simulation of the propeller viscosimeter.
Finally, the accuracy of non-Newtonian Lattice Boltzmann simulations with the SRT and the MRT model is analyzed in detail. Previous work for Newtonian fluids indicates that, depending on the numerical value of the collision frequency \(\Omega\), additional artificial viscosity is introduced due to the finite difference scheme, which negatively influences the accuracy. For the non-Newtonian case, an error estimate in the form of a functional is derived on the basis of a series expansion of the Lattice Boltzmann equation. This functional can be solved analytically for the case of the Hagen-Poiseuille channel flow of non-Newtonian fluids. The estimation of the error minimum is excellent in regions where the \(\Omega\) error is the dominant source of error as opposed to the compressibility error.
The result of this dissertation is a verified and validated software bundle based on the viscosity adaptive Lattice Boltzmann Method. The work restricts itself to the simulation of isothermal, laminar flows with small Mach numbers. As further research goals, the testing of the VALBM with the minimal error estimate and the investigation of the VALBM in the case of turbulent flows are suggested.
Adhesive bonding technology today plays a very important role in realizing composite joints between the most diverse materials. The mechanical and structural properties of adhesive bonds under service conditions are of particular interest here. These characteristics are strongly influenced by the nature of the adhesive constituents and their interactions.
In this work, different test methods are employed to investigate the mechanical and structural properties of filled adhesives. Adhesives based on epoxy and polyurethane systems serve as base materials. To study the influence of the fillers' internal surfaces on the properties mentioned above, two groups of fillers based on calcium carbonate and silica are used. The fracture surfaces are evaluated by digital microscopy and scanning electron microscopy (SEM).
The findings obtained from these investigations show that the mechanical and structural characteristics of the polymer-metal composite, in particular the Young's modulus, the tensile strength, the mean and maximum peel resistance, and the fracture toughness, can be influenced considerably by the interaction between the internal surfaces of the fillers used and the polymer matrix.
Optimal Multilevel Monte Carlo Algorithms for Parametric Integration and Initial Value Problems
(2015)
We intend to find optimal deterministic and randomized algorithms for three related problems: multivariate integration, parametric multivariate integration, and parametric initial value problems. The main interest is concentrated on the question to what extent randomization affects the precision of an approximation. We want to understand when and to what extent randomized algorithms are superior to deterministic ones.
All problems are studied for Banach space valued input functions. The analysis of Banach space valued problems is motivated by the investigation of scalar parametric problems; these can be understood as particular cases of Banach space valued problems. The gain achieved by randomization depends on the underlying Banach space.
For each problem, we introduce deterministic and randomized algorithms and provide the corresponding convergence analysis.
Moreover, we also provide lower bounds for the general Banach space valued settings and thus determine the complexity of the problems. It turns out that the obtained algorithms are order optimal in the deterministic setting. In the randomized setting, they are order optimal for certain classes of Banach spaces, which include the L_p spaces and any finite dimensional Banach space. For general Banach spaces, they are optimal up to an arbitrarily small gap in the order of convergence.
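For orientation, the following Python sketch shows the generic structure of a multilevel Monte Carlo estimator, i.e. the telescoping sum over level differences with level-dependent sample sizes; the integrand, the levels, and the sample numbers are purely illustrative and do not reproduce the Banach space valued algorithms analyzed in the thesis.

    import numpy as np

    rng = np.random.default_rng(1)

    def P(level, omega):
        """Level-l approximation of the quantity of interest for random input omega:
        midpoint rule with 2**level cells applied to f(x) = exp(omega * x) on [0, 1]."""
        x = (np.arange(2 ** level) + 0.5) / 2 ** level
        return np.mean(np.exp(omega * x))

    def mlmc(L, samples_per_level):
        est = 0.0
        for l in range(L + 1):
            N = samples_per_level[l]
            omegas = rng.standard_normal(N)
            diffs = [P(l, w) - (P(l - 1, w) if l > 0 else 0.0) for w in omegas]
            est += np.mean(diffs)            # telescoping sum E[P_0] + sum E[P_l - P_{l-1}]
        return est

    # geometrically decaying sample sizes on the finer (more expensive) levels
    print(mlmc(L=5, samples_per_level=[4096, 1024, 256, 64, 16, 4]))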
Sequential Consistency (SC) is the memory model traditionally applied by programmers and verification tools for the analysis of multithreaded programs.
SC guarantees that instructions of each thread are executed atomically and in program order.
Modern CPUs implement memory models that relax the SC guarantees: threads can execute instructions out of order, and stores to memory can be observed by different threads in different orders.
As a result of these relaxations, multithreaded programs can show unexpected, potentially undesired behaviors, when run on real hardware.
The robustness problem asks if a program has the same behaviors under SC and under a relaxed memory model.
Behaviors are formalized in terms of happens-before relations — dataflow and control-flow relations between executed instructions.
Programs that are robust against a memory model produce the same results under this memory model and under SC.
This means, they only need to be verified under SC, and the verification results will carry over to the relaxed setting.
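The classic store-buffering litmus test illustrates a violation of robustness; the small Python enumeration below is an illustration and not part of the thesis. It lists all SC outcomes of the program and notes the additional outcome that TSO admits.

    from itertools import permutations

    # thread 0: x = 1; r0 = y        thread 1: y = 1; r1 = x
    ops = [("T0", "w", "x"), ("T0", "r", "y"),
           ("T1", "w", "y"), ("T1", "r", "x")]

    def sc_outcomes():
        results = set()
        for order in permutations(range(4)):
            # keep per-thread program order
            if order.index(0) > order.index(1) or order.index(2) > order.index(3):
                continue
            mem, regs = {"x": 0, "y": 0}, {}
            for i in order:
                tid, kind, var = ops[i]
                if kind == "w":
                    mem[var] = 1
                else:
                    regs[tid] = mem[var]
            results.add((regs["T0"], regs["T1"]))
        return results

    print(sc_outcomes())    # {(0, 1), (1, 0), (1, 1)} -- (0, 0) is impossible under SC
    # under TSO the stores may still sit in the store buffers when the loads execute,
    # so (r0, r1) = (0, 0) is additionally observable; hence the program is not robust.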
Interestingly, robustness is a suitable correctness criterion not only for multithreaded programs, but also for parallel programs running on computer clusters.
Parallel programs written in the Partitioned Global Address Space (PGAS) programming model, when executed on a cluster, consist of multiple processes, each running on its own cluster node.
These processes can directly access each other's memories over the network, without the need for explicit synchronization.
Reorderings and delays introduced on the network level, just like the reorderings done by the CPUs, may result in unexpected behaviors that are hard to reproduce and fix.
Our first contribution is a generic approach for solving robustness against relaxed memory models.
The approach involves two steps: combinatorial analysis, followed by an algorithmic development.
The aim of combinatorial analysis is to show that among program computations violating robustness there is always a computation in a certain normal form, where reorderings are applied in a restricted way.
In the algorithmic development we work out a decision procedure for checking whether a program has violating normal-form computations.
Our second contribution is an application of the generic approach to widely implemented memory models, including Total Store Order used in Intel x86 and Sun SPARC architectures, the memory model of Power architecture, and the PGAS memory model.
We reduce robustness against TSO to SC state reachability for a modified input program.
Robustness against Power and PGAS is reduced to language emptiness for a novel class of automata — multiheaded automata.
The reductions lead to new decidability results.
In particular, robustness is PSPACE-complete for all the considered memory models.
Transport associations (Verkehrsverbünde) in Germany today are highly relevant for shaping public transport. Currently there are 58 associations nationwide, covering about 70% of the federal territory. The range of associations is broad, reaching from single-district associations up to those covering entire federal states. Given the heterogeneity of the association landscape, it must be noted that THE prototypical association does not exist. The success model "Verkehrsverbund" also has negative aspects, however. Associations are not instruments for eliminating deficits. The deficits that regularly have to be compensated are borne by the various public-sector institutions. Although the evident circumstance of administering scarcity practically endows the associations with an efficiency mandate from the outset, association practice so far lacks a uniform instrument that systematically records the associations' activities, enables an assessment, and, where appropriate, allows a comparison with other associations.
The aim of this thesis is therefore to establish an instrument for recording, assessing, and comparing the work of transport association organizations. Several research questions can be formulated to this end:
1. What are the goals and tasks of associations?
2. How can the goals and tasks of associations be recorded and assessed?
3. How should an instrument for the uniform recording and assessment of the work of transport association organizations be designed?
4. How can transport associations be compared?
5. Are an association assessment and an association comparison applicable in practice?
With regard to the goals and tasks of association organizations, the results of the investigation show that in practice the terms overarching goals, goals, and tasks are not used in a clearly distinguished way. Overarching goals often lie far outside the sphere of influence of transport associations and thus cannot be influenced by them. The Zurich transport association (Züricher Verkehrsverbund, ZVV) has therefore introduced the term "controllable quantity" ("lenkbare Größe"), which is also used within this thesis.
The further investigations show that an assessment cannot rely on business indicators alone, since the mandate of a transport association organization, unlike that of a commercial enterprise, is not only oriented towards market-based success and efficiency criteria but also includes non-profit-oriented public-sector tasks such as the provision of basic services (Daseinsvorsorge).
In a subsequent step, the findings from the preliminary investigations are brought together in the construction of a dedicated instrument for recording and assessing association work. To this end, four core topics and four extended topics of association work are defined, to which individual tasks are then assigned. Indicators and measurands are determined for these tasks.
A complex and demanding assessment instrument requires a systematic introduction and application. Institutionalization provides essential support here. The examination of the various options for institutionalizing the assessment instrument developed here suggests either the Association of German Transport Companies (Verband Deutscher Verkehrsunternehmen, VDV) as the industry association or an independent institution organized as a registered association.
As a first stage of application, the developed assessment approach should be used as an internal controlling instrument. A transport association organization can thereby record its internal tasks and processes and use the target-performance comparison as a steering instrument for its work. In a second stage, benchmarking with other associations beyond the individual association becomes possible. Such a comparison should, however, take place within a suitable (homogeneous) peer group.
Within this thesis, a classification scheme for transport associations is developed in order to form suitable homogeneous peer groups. For this purpose, the four characteristics area, population, number of transport companies, and number of responsible transit authorities, as well as the organizational form of the associations, are used for the classification. In this way, every association can be assigned to one of nine characteristic groups. For the present investigation, typical representatives from the largest groups are selected for further examination.
Feedback from selected associations showed that the basic structure of the assessment instrument was confirmed by all associations. Regarding its application as a controlling instrument, the large associations have often built up their own schemes, which can only be adapted with difficulty to a newly developed instrument. Small associations, as a rule, do not use internal assessment instruments, so the preconditions for an application are rather given there. Owing to their often very limited (personnel) resources, however, they consider themselves overburdened by its introduction and application. Supporting institutionalization is therefore of great importance for future use. Approaches to this were examined and solutions outlined within this thesis.
All associations surveyed show great interest in the second application stage in the form of benchmarking. The large associations envisage not so much a comprehensive comparison as selected, specific comparisons on particular topics, also against the background that, in their view, there are no true peer groups in their size class. The small associations are interested both in a comparative application and in the methodology for forming (homogeneous) peer groups. Overall, a more intensive engagement of the associations with the application of an assessment instrument is recommended, since the foundations of an internal and an external comparison lie close together: internal use constitutes an essential preliminary stage for an external comparison. Here, too, institutionalization can positively support this forthcoming engagement.
The thesis shows that the different types of associations require a flexible assessment toolkit, both in terms of content and of application. The foundations and instruments developed in this thesis for recording, assessing, and classifying transport associations can provide this and should see further practical application.
Stadtumbau und Kultur
(2015)
Particularly in shrinking municipalities it appears especially important to make the reality of shrinkage, the associated experiences of loss, and the necessary urban restructuring processes better understandable by other means and through new approaches, and to activate those affected. Recently, artistic forms of expression have increasingly been used for this purpose. Based on the author's own experience, the central concern of this thesis is to establish such cultural processes in urban restructuring as a subject of research and to examine them more closely.
A Germany-wide survey of urban restructuring municipalities provides, for the first time, statistical material on the topic, which is then illustrated in four in-depth case studies. For planning practice, possible fields of action are catalogued and principles of application are derived. In the end it becomes clear that art and culture need not be mere decorative accessories of planning but can be an independent component of the urban restructuring process.
Already today, more than four out of five chemical products pass through a catalytic cycle during their manufacture. In addition to synthetic chemistry, catalytic applications are increasingly found in the life sciences, in climate and environmental protection, and in energy supply. Through targeted ligand design, known catalyst systems are continuously being optimized and their scope broadened. For bidentate, pyrimidine-containing ligand systems, other work in the Thiel group has established an intramolecular C-H activation in the pyrimidine ring that leads to carbanion coordination at the transition metal center. In this thesis, this reactivity was combined with the stabilizing effect of an N-heterocyclic carbene (NHC) ligand into a new ligand system. Various imidazolium precursors of new NHC ligands carrying a pyrimidine ring amino-substituted in the 2-position as the N-substituent were prepared via two newly developed synthetic routes and reacted with various transition metal precursors. In palladium(II) complexes of pyrimidinyl- and mesityl-substituted NHC ligands, different coordination modes were observed depending on the synthesis method used. Via silver-carbene complexes as carbene transfer reagents, the non-C-H-activated, i.e. C,N-coordinated, palladium complexes could be prepared for various tertiary amino- and mesityl-substituted ligands. Direct reaction of the ionic imidazolium compounds with palladium precursors such as PdCl2 in pyridine and pyridine derivatives as solvents led, at the reaction temperatures used, directly to C-H activation in the pyrimidine ring of the ligand. The slightly basic pyridine ligand stabilizes the highly reactive, C-H-activated species during complex formation and thus prevents side and decomposition reactions. By removing the labile pyridine ligand through heating in weakly coordinating solvents, the dinuclear, insoluble, pyridine-free palladium complexes were obtained and characterized by solid-state NMR spectroscopy. This reaction is fully reversible and was used to introduce various pyridine derivatives as labile ligands. In weakly coordinating solvents with a boiling point below 80 °C, such as THF, direct reaction of the ionic ligand precursors with PdCl2 gave a further type of Pd(II) complex, for which the structural formula of an N-coordinated palladate was postulated. NMR spectroscopic experiments demonstrated the reversibility of the C-H activation in the pyrimidine ring of the Pd(II) complexes as a function of pH and temperature. Here, too, the stabilizing pyridine ligand proved necessary for the C-H activation and HCl elimination. The reverse reaction was observed under weakly acidic conditions at room temperature via an NHC-bound, pyridine-containing species structurally analogous to the PEPPSI complexes known from the literature.
For the strongly Lewis acidic transition metal centers iridium(III) and ruthenium(II), the corresponding ionic ligand precursors, converted via in situ generated silver-carbene complexes, gave exclusively the C-H-activated, C,N-coordinated half-sandwich complexes of the new 2-amino-4-(imidazolylidenyl)pyrimidine ligands, despite varied reaction conditions. For these transition metal centers, the C-H activation with subsequent HCl elimination already occurred irreversibly at room temperature.
In the course of this work it was furthermore observed that a sterically demanding, stabilizing mesityl group on the NHC ligand is necessary for stable and isolable C-H-activated complexes. With other, sterically less demanding groups at this position of the ligand, only decomposition products were obtained under the reaction conditions for potential C-H activations. For each complex type of the new C-H-activated transition metal complexes, crystals suitable for crystal structure analysis were obtained, affording deeper insights into the bonding situation of the new ligands.
The C-H-activated transition metal complexes of the new ligands show very good activities in various catalytic applications. Besides the stabilizing effect of the NHC with its strong σ-donor character, the high electron density at the transition metal center is further increased by the coordination of the carbanion. Under optimized conditions in the Suzuki-Miyaura coupling, a broad range of sterically and electronically hindered aryl chlorides was successfully converted into biaryls with various boronic acid derivatives at low concentrations of the C-H-activated Pd(II) complexes. With the C-H-activated Ru(II) and Ir(III) half-sandwich complexes of the new ligands, very high yields were obtained in the catalytic transfer hydrogenation of acetophenone even at catalyst concentrations as low as 0.15 mol%. The highly active complexes were also distinguished by high stability under the optimized reaction conditions. The C-H activation shows no dependence on the steric demand of the varied tertiary amino substituents, but was not observed for the other groups in the 2-position of the pyrimidine ring.
The classical processes for producing light olefins, such as steam cracking and fluid catalytic cracking, are no longer able to meet the growing demand for propene. To counter the imbalance between supply and demand, new strategies and technologies have been developed that allow propene to be produced independently. One of these options is the catalytic conversion of ethene to propene, which is currently being studied on the laboratory scale. Various catalyst systems have already been presented in the literature, including supported metal catalysts, mesoporous materials, and microporous materials. Microporous zeolites in particular, which are already used successfully in many technical processes, show high potential in the ethene-to-propene reaction owing to their catalytic properties.
In the present work, key factors were investigated that enable a selective catalytic conversion of ethene to propene and butenes over 10-ring zeolites. The investigations focused on the influence of different pore architectures, the acid strength distribution, and the crystallite size on the activity and stability of the catalysts as well as on the selectivity to propene and the butenes. The prepared 10-ring zeolites were characterized by powder X-ray diffraction, solid-state NMR spectroscopy, nitrogen physisorption, thermogravimetry, particle size analysis, and scanning electron microscopy. To test the catalytic properties of the prepared materials, an atmospheric-pressure flow apparatus was set up. The time on stream, the reaction temperature, the modified residence time, and the ethene partial pressure were varied.
Initially, several 10-ring zeolites with different pore architectures were prepared and characterized by physicochemical methods. Prerequisites for comparing the different pore architectures were high crystallinities as well as similar crystallite sizes and aluminum contents. The catalytic experiments showed that the steric restrictions of the different pore architectures have a significant influence on the selectivities to the light olefins. For the one-dimensional 10-ring zeolites, a high combined selectivity towards propene and the butenes of about 70% was found. The three-dimensional 10-ring pore structures, by contrast, show markedly lower combined selectivities to propene and the butenes of about 30%. The cause of the lower selectivities to these olefins was identified as side and secondary reactions, which are presumably catalyzed at the channel intersections of the three-dimensional pore system. These side and secondary reactions mainly comprise hydrogen transfer reactions and cyclizations, which lead to the formation of alkanes and aromatics. By deliberately choosing one-dimensional pore structures, the relatively large transition states of the hydrogen transfer reactions and cyclizations could thus be suppressed. Compared with the three-dimensional pore structures, this results in lower activities of the one-dimensional pore structures at comparable yields of propene and butenes. Furthermore, besides structural influences of the different pore architectures, considerable influences of the reaction conditions on the occurrence of side and secondary reactions could be demonstrated, particularly for three-dimensional pore structures. According to the experiments, the two known cracking mechanisms (monomolecular / bimolecular) compete with each other depending on the reaction conditions. High reaction temperatures, short modified residence times, and low ethene partial pressures favor monomolecular cracking and thus the formation of propene and butenes. Bimolecular cracking, which occurs more strongly at lower reaction temperatures, long modified residence times, and high ethene partial pressures, promotes hydrogen transfer reactions. The influence of the reaction conditions is less pronounced for one-dimensional pore structures, since their shape-selective properties dominate in the ETP reaction.
In addition to the key factors already mentioned, the effects of different aluminum contents of the zeolites in the acid-catalyzed ethene-to-propene reaction were also investigated. The zeolites ZSM-22 and ZSM-5 were used as catalysts, representing a one-dimensional and a three-dimensional pore structure, respectively. The catalytic properties were compared under identical reaction conditions. Depending on the dimensionality of the pore system (1-D vs. 3-D), it was observed that the selectivities to the short-chain olefins decrease with the aluminum content in one case (3-D, HZSM-5) and increase in the other (1-D, HZSM-22). Here, too, the shape-selective properties of the one-dimensional pore structures dominate in the ethene-to-propene reaction, so that activity and selectivity to the light olefins also increase with a growing number of acid sites. It was found, however, that high aluminum contents contribute to increased catalyst deactivation and additionally strongly affect the mass transport of the reactants. Zeolite ZSM-5 likewise showed strong catalyst deactivation with increasing aluminum content, whereas the mass transport of the reactants was not affected. This was evident from the linear relationship between activity and aluminum content in the zeolite framework. The product selectivities were clearly influenced by the aluminum content, particularly for zeolite ZSM-5. High aluminum contents favor hydrogen transfer reactions and cyclizations, whereas low aluminum contents increase the selectivity to the light olefins. One explanation is based on the gas-phase mechanisms operating on heterogeneous catalysts: according to the experiments, the ethene-to-propene reaction over zeolite ZSM-5 presumably follows the Eley-Rideal mechanism, whereas the competing hydrogen transfer reactions proceed via the Langmuir-Hinshelwood mechanism. These results are consistent with other studies known from the literature.
The systematic variation of the crystallite size was carried out with zeolite ZSM-5 with the aim of gaining more detailed knowledge about the deactivation behavior of the catalyst and the mass transport of the reactants in the pores. The crystallite size of zeolite ZSM-5 was controlled on the one hand via the crystallization temperature and on the other hand by adding triethanolamine as an inhibitor of nucleation. In this way, mean crystallite sizes in the range of 6 - 69 µm could be prepared. With increasing crystallite size of zeolite ZSM-5, faster catalyst deactivation was observed at similar coke contents. Furthermore, decreasing activities with increasing crystallite size were observed. It could be shown that mass transport limitations occur above a crystallite size of about 27 µm. It was also evident that unselective reactions on the outer surface of the crystallites are reduced with increasing crystallite size. Along with this, increasing selectivities to the light olefins were observed, accompanied by decreasing selectivities to the C1 - C4 alkanes and the aromatics. This could be attributed to a reduction of unselective side and secondary reactions on the outer crystallite surface.
In the course of this dissertation it could be shown that increased expression of the tonoplast dicarboxylate transporter leads to an increased malate content with a simultaneously reduced citrate content in the overexpression plants. Thus, similar to the knockout plants, a reciprocal behavior of citrate and malate could be demonstrated.
Electrophysiological analyses on X. laevis oocytes together with uptake experiments on proteoliposomes further showed that the transport of citrate is also catalyzed by the TDT. A negative inward current in oocytes demonstrated that this citrate transport is electrogenic. It was further shown that citrate2-H represents the transported form of citrate, which is presumably transported together with three protons.
The dianions malate and succinate, and most likely also fumarate, are likewise transported via the TDT. Under standard conditions these are imported into the vacuole, while citrate is exported from the vacuole in exchange. The trans-stimulating effect of malate, succinate, and fumarate on citrate transport, and vice versa, supports the antiport of the respective carboxylates across the tonoplast postulated in this work. This antiport is, however, not obligatory, as shown by the reduced transport of citrate across the membrane in the absence of a counter-substrate.
Under drought stress and osmotic stress it could likewise be shown that the increased expression of the TDT contributes substantially to the accumulation of malate and the mobilization of citrate under these stress conditions.
Finally, acid stress experiments demonstrated that malate accumulation and the simultaneous degradation of citrate are not necessarily coupled; under acid stress, further regulatory effects on malate import and citrate export must therefore prevail.
Since the advent of semiconductor technology there has been a trend towards the miniaturization of electronic systems. This, together with rising requirements and the increasing integration of various sensors for interacting with the environment, makes such embedded systems, as found for example in mobile devices or vehicles, ever more complex. The consequences are longer development times and an ever higher component count, while reductions in size and energy demand are demanded at the same time. The design of multi-sensor systems in particular calls for dedicated sensor electronics for each sensor type used and thus runs counter to the demands for miniaturization and low power consumption.
This research work addresses the problem described above and discusses the development of a universal sensor interface for precisely such multi-sensor systems. As a single integrated device, this interface can serve as the sensor electronics for up to nine different sensors of different types. The measurable quantities comprise voltage, current, resistance, capacitance, inductance, and impedance.
Dynamic reconfigurability and application-specific programming allow a variable configuration according to the respective requirements. Both the development effort and the component count can be reduced considerably thanks to this interface, which additionally includes a power-saving mode.
The flexible structure enables the construction of intelligent systems with so-called self-x characteristics. These concern capabilities for autonomous system monitoring, calibration, or repair and thus contribute to increased robustness and fault tolerance. As a further innovation, the universal interface contains novel circuit and sensor concepts, for example for measuring the chip temperature or compensating thermal influences on the sensor system.
Two different applications demonstrate the functionality of the fabricated prototypes. The realized applications concern food analysis and three-dimensional magnetic localization.
In this work, DFT calculations were used for the mechanistic understanding and rational development of homogeneously catalyzed reactions.
In the first project, DFT calculations made it possible to identify more efficient catalyst systems for protodecarboxylation reactions and decarboxylative cross-couplings by rational catalyst design. For this purpose, the decarboxylation of 2- and 4-fluorobenzoic acid was investigated by DFT calculations. Initially, the calculations did not predict significantly increased reaction rates for catalyst systems consisting of copper(I) and various 4,7-disubstituted 1,10-phenanthroline ligands. Further calculations, however, predicted strongly increased efficiency for silver-based catalysts in the decarboxylation of ortho-substituted benzoic acids. Indeed, for these carboxylic acids a catalyst system consisting of AgOAc and K2CO3 in NMP could subsequently be developed that enables the protodecarboxylation already at 120 °C, 50 °C lower than the copper-based system.
These findings could furthermore be transferred to the decarboxylative cross-coupling within the Gooßen group. An Ag/Pd-based catalyst system was developed for the synthesis of biaryls from benzoic acids and aryl triflates at reaction temperatures of only 130 °C.
Subsequently, it was possible to elucidate the reaction mechanism of the decarboxylative cross-coupling by means of DFT calculations and to make predictions for a more efficient Cu/Pd-based catalyst system. After experimental observations made clear that the decarboxylation step is not necessarily rate-determining, the complete catalytic cycle of the decarboxylative cross-coupling was investigated in detail using DFT calculations. Depending on the benzoate, either the decarboxylation or the transmetalation was identified as rate-determining. Since the transmetalation first requires the formation of a bimetallic Cu-Pd adduct, it was concluded that the use of bridging, bidentate ligands should favor the reaction. Indeed, by employing a P,N-ligand, a Cu/Pd-catalyzed decarboxylative cross-coupling of aromatic carboxylates with aryl triflates at only 100 °C could be developed, corresponding to a lowering of the reaction temperature by 50 °C.
Future developments of the Cu/Pd-catalyzed decarboxylative cross-coupling aim at overcoming the restriction to ortho-substituted benzoates and at replacing the expensive aryl triflates with cheaper aryl halides. Work on this is already in progress.
In the second project, the reaction mechanism of the ruthenium-catalyzed hydroamidation of terminal alkynes was investigated in detail. After isotope labeling experiments, determinations of kinetic isotope effects by in situ IR spectroscopy, and various in situ NMR and ESI-MS experiments had ruled out three of five potential reaction mechanisms, the experimental results allowed the field to be narrowed down to one of the remaining catalytic cycles.
DFT calculations then confirmed that the postulated intermediates are stable minima. The occurrence of a Ru-hydride-vinylidene species provided the explanation why the hydroamidation is restricted to terminal alkynes. The nucleophilic attack of the amide ligand at the vinylidene carbon atom explains the anti-Markovnikov selectivity of the reaction. After Gooßen and Koley et al. clarified the influence of the ligands on the stereoselectivity of the hydroamidation in a further study, the foundation has now been laid for the future rational development of more efficient hydroamidation catalysts.
In the third project, insights into the reaction mechanism of the palladium-catalyzed isomerization of allyl esters to enol esters and indications of the catalytically active species of the reaction were obtained. First, with the homodinuclear palladium catalyst [Pd(µ-Br)(PtBu3)]2, an efficient synthesis providing access to a broad range of diverse enol esters was developed. Enol esters branched in the 1-position subsequently served as substrates for enantioselective hydrogenations towards the synthesis of enantiomerically pure chiral esters.
Based on experimental observations suggesting that a palladium hydride complex is the catalytically active species, the formation of various palladium hydride species from the homodinuclear palladium catalyst [Pd(µ-Br)(PtBu3)]2 was investigated by means of DFT calculations. The palladium hydride complex [Pd(Br)(H)(PtBu3)] was thereby identified as the presumed catalytically active species. Owing to its high reactivity, only an oxidized dimer and a trapping product with excess tri-tert-butylphosphine could be detected in in situ NMR experiments.
In future work, the reaction order of the isomerization is to be determined by kinetic investigations. This should help to clarify whether a monometallic or a bimetallic complex actually constitutes the catalytically active species.
In this thesis we develop a shape optimization framework for isogeometric analysis in the optimize first–discretize then setting. For the discretization we use
isogeometric analysis (iga) to solve the state equation, and search optimal designs in a space of admissible b-spline or nurbs combinations. Thus a quite
general class of functions for representing optimal shapes is available. For the
gradient-descent method, the shape derivatives indicate both stopping criteria and search directions and are determined isogeometrically. The numerical treatment requires solvers for partial differential equations and optimization methods, which introduces numerical errors. The tight connection between iga and the geometry representation offers new ways of refining the geometry and the analysis discretization by the same means. Therefore, our main concern is to develop the optimize first framework for isogeometric shape optimization as groundwork for both the implementation and an error analysis. Numerical examples show that this ansatz is practical, and case studies indicate that it allows local refinement.
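For readers unfamiliar with the underlying geometry description, the following Python sketch evaluates a point on a B-spline curve via the Cox-de Boor recursion; the degree, knot vector, and control points are illustrative, and in a shape optimization the control points would play the role of design variables.

    import numpy as np

    def bspline_basis(i, p, u, knots):
        """Cox-de Boor recursion for the i-th B-spline basis function of degree p."""
        if p == 0:
            return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
        left = right = 0.0
        if knots[i + p] > knots[i]:
            left = (u - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, u, knots)
        if knots[i + p + 1] > knots[i + 1]:
            right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, u, knots)
        return left + right

    degree = 2
    knots = [0, 0, 0, 0.5, 1, 1, 1]                        # open knot vector
    ctrl = np.array([[0, 0], [0.5, 1], [1.5, 1], [2, 0]])  # control (design) points
    u = 0.3
    point = sum(bspline_basis(i, degree, u, knots) * ctrl[i] for i in range(len(ctrl)))
    print(point)    # point on the curve at parameter u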
In this thesis, collision-induced dissociation (CID) studies serve to elucidate relative stabilities and to determine bond strengths within a given structure type of transition metal complexes. Infrared multiphoton dissociation (IRMPD) spectroscopy combined with density functional theory (DFT) allows for structural analysis and provides insights into the coordination sphere of transition metal centers. The combination of CID and IRMPD experiments is a powerful tool to obtain a detailed and comprehensive characterization and understanding of interactions between transition metals and organic ligands. The compounds' spectrum comprises mono- or oligonuclear transition metal complexes containing iron, palladium, and ruthenium as well as lanthanide-containing single molecule magnets (SMM). The presented investigations on the different transition metal complexes reveal manifold effects for each species, leading to valuable results. A fundamental understanding of metal-to-ligand interactions is mandatory for the development of new and better organometallic complexes with catalytic, optical, or magnetic properties.
The work presented here focused on vacuolar ribonucleic acid (RNA) degradation in Arabidopsis thaliana (A. thaliana) and its integration into nucleotide metabolism, taking nucleoside transport processes into account. In particular, the physiological significance of the loss of RNS2 activity for vacuolar RNA degradation processes was to be investigated. It could be shown that RNS2 accounts for the largest share of the vacuolar ribonuclease (RNase) activity, with the residual activity of about 30 percent pointing to at least one further vacuolar RNase. The vacuolar adenylate contents, degradation products of RNA, in RNS2 T-DNA insertion lines showed that RNS2, similar to the intracellular RNases in L. esculentum, produces 2',3'-cyclic nucleotide monophosphates (2',3'-cNMPs). It could furthermore be shown that in these lines the vacuolar enzymes degrade both RNA and 2',3'-cNMPs more slowly than vacuolar enzymes from wild-type plants. The accumulation of this cyclic intermediate of RNA degradation suggests that the transphosphorylation proceeds faster than the hydrolysis (Abel & Glund, 1987; Löffler et al., 1992; Nürnberger et al., 1990). It can be assumed that a further enzyme, such as a cyclic phosphodiesterase or another ribonuclease, is involved in the hydrolysis. A further part of this work addresses the important question of the quality of vacuole isolations. Protoplast contaminations could be excluded microscopically. The chloroplast contamination was low at about 5 percent, whereas the cytosolic contamination, depending on the isolation method, amounted to up to 30 percent compared with protoplasts. Moreover, fluorescence microscopy showed for the first time that vacuoles contain RNA. These oligonucleotides are predominantly small fragments in the size range up to 50 nt.
Next-generation sequencing enabled a detailed analysis of cDNA libraries obtained from vacuolar RNA. This technique was applied, among other things, to examine the reliability of the experiment. A large variance in the distribution of the counts across the different RNA loci was observed within biological replicates and between different vacuole isolation methods. For the first time, however, it could be shown that about 70 percent of the RNA fragments in the vacuole originate from mRNA. In addition, there are indications that the degradation of the few rRNA transcripts proceeds at an increased rate in this organelle.
In A. thaliana, ENT7 is the only representative that enables an export of RNA degradation products from the cell. Since it structurally and functionally resembles the eponymous representatives from the mammalian kingdom, ENT7 is a suitable ENT protein for future crystallization and structural analyses. In this work, ENT7-eGFP could be synthesized in Pichia pastoris in high yield (2 mg protein per liter of yeast culture) and purified in stable form. It could be shown that ENT7 without eGFP is likewise stable and present as a dimer. Binding studies demonstrated successful binding to known substrates. In addition, it emerged that, besides nucleosides, nucleobases are also bound, but not ATP.
This work aims at including nonlinear elastic shell models in a multibody framework. We focus our attention to Kirchhoff-Love shells and explore the benefits of an isogeometric approach, the latest development in finite element methods, within a multibody system. Isogeometric analysis extends isoparametric finite elements to more general functions such as B-Splines and Non-Uniform Rational B-Splines (NURBS) and works on exact geometry representations even at the coarsest level of discretizations. Using NURBS as basis functions, high regularity requirements of the shell model, which are difficult to achieve with standard finite elements, are easily fulfilled. A particular advantage is the promise of simplifying the mesh generation step, and mesh refinement is easily performed by eliminating the need for communication with the geometry representation in a Computer-Aided Design (CAD) tool.
Quite often the domain consists of several patches where each patch is parametrized by means of NURBS, and these patches are then glued together by means of continuity conditions. Although the techniques known from domain decomposition can be carried over to this situation, the analysis of shell structures is substantially more involved as additional angle preservation constraints between the patches might arise. In this work, we address this issue in the stationary and transient case and make use of the analogy to constrained mechanical systems with joints and springs as interconnection elements. Starting point of our work is the bending strip method which is a penalty approach that adds extra stiffness to the interface between adjacent patches and which is found to lead to a so-called stiff mechanical system that might suffer from ill-conditioning and severe stepsize restrictions during time integration. As a remedy, an alternative formulation is developed that improves the condition number of the system and removes the penalty parameter dependence. Moreover, we study another alternative formulation with continuity constraints applied to triples of control points at the interface. The approach presented here to tackle stiff systems is quite general and can be applied to all penalty problems fulfilling some regularity requirements.
The numerical examples demonstrate an impressive convergence behavior of the isogeometric approach even for a coarse mesh, while offering substantial savings with respect to the number of degrees of freedom. We show a comparison between the different multipatch approaches and observe that the alternative formulations are well conditioned, independent of any penalty parameter and give the correct results. We also present a technique to couple the isogeometric shells with multibody systems using a pointwise interaction.
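The conditioning issue motivating the alternative formulations can already be seen in a toy problem; the Python sketch below is illustrative only and not the thesis' shell model. It couples two unit springs either by a penalty term of stiffness beta or by an explicit constraint and compares the resulting condition numbers.

    import numpy as np

    def penalty_condition(beta):
        K = np.array([[1.0 + beta, -beta],
                      [-beta, 1.0 + beta]])      # two unit springs plus penalty coupling
        return np.linalg.cond(K)

    def multiplier_condition():
        # saddle-point system [K  B^T; B  0] enforcing u1 - u2 = 0 exactly
        K = np.eye(2)
        B = np.array([[1.0, -1.0]])
        S = np.block([[K, B.T], [B, np.zeros((1, 1))]])
        return np.linalg.cond(S)

    for beta in (1e2, 1e4, 1e6):
        print(f"penalty beta = {beta:8.0e}:  cond = {penalty_condition(beta):.2e}")
    print(f"Lagrange multiplier:        cond = {multiplier_condition():.2e}")
    # the penalty conditioning grows with beta, the constrained system stays bounded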
The development of sustainable methods for C-C and C-heteroatom bond formation is one of the main goals of modern synthetic organic chemistry. Transition-metal-catalyzed coupling reactions are particularly efficient and versatile tools for constructing complex molecular structures. Within this dissertation, new concepts for regioselective bond formation were developed that avoid preformed organometallic reagents as well as ecologically problematic organohalides. Carboxylic acid derivatives serve as substrates; they are accessible via a preceding, reversible (trans)esterification from ubiquitous, shelf-stable carboxylic acids or their esters. Insertion of a metal catalyst into the C-O bond of the ester functionality yields the metal carboxylate, which is irreversibly decarboxylated and coupled to the product. The only by-products released in these coupling reactions are CO2 and water or CO2 and volatile alcohols. The utility of this concept was demonstrated by the synthesis of numerous aryl ketones, allylbenzenes, and phenylacetic acid esters. Surprisingly, the use of the palladium(I) dimer [Pd(µ-Br)(PtBu3)]2 did not lead to decarboxylative functionalization of the substrates but to rapid double-bond isomerization and thus to the synthesis of enol esters. Optimization of the reaction conditions produced a highly active catalyst system that is far superior even to the best isomerization catalysts known in the literature. Further sub-projects covered the development of Sandmeyer-type trifluoromethylations and trifluoromethylthiolations, in which readily accessible aryldiazonium salts are trifluoromethylated with in situ generated Cu-CF3 species already at room temperature. In a cooperation with Umicore, an application-oriented optimization of a cross-coupling process for the highly selective monoarylation of primary amines with equimolar amounts of aryl bromide in concentrated solution was also carried out. In this context, the preformed catalyst Pd(dippf)maleimid and the catalyst solution Pd(dippf)(vs)tol were developed.
For all organisms it is important to defend themselves against the invasion of exogenous DNA or RNA, such as viruses or transposable elements, in order to preserve the integrity of their own genome. In addition, entire gene families within an organism often need to be regulated. RNA interference provides an optimal means both for the defense against exogenous nucleic acids and for the regulation of endogenous genes. At the heart of RNAi are small regulatory siRNAs, which can trigger homology-dependent reactions in a cell, such as transcriptional or post-transcriptional silencing. Several components are involved in the RNAi mechanism to synthesize and stabilize these siRNAs and to deliver them to their target site, where they mediate silencing. The enzymes Dicer and RNA-dependent RNA polymerases play an important role in their synthesis. Argonautes, or a subclass of them, the Piwi proteins, are essential for the actual silencing of the target gene and, like the 2´-O-methyltransferase Hen1, play a role in stabilizing the siRNAs.
In Paramecium tetraurelia it is known that endogenous gene families, such as the surface antigens, are regulated in an RNAi-mediated manner. It is also known that RNAi mechanisms resembling this endogenous mechanism can be induced artificially by introducing a double-stranded RNA. This can be done either by feeding bacteria that are induced to synthesize a dsRNA inside them and to accumulate it, or by injecting a transgene into the macronucleus, whose transcript is likewise converted into a dsRNA.
The focus of this work was the exogenous RNAi mechanism induced by an injected transgene in Paramecium tetraurelia and its more detailed characterization. It could be shown that this RNAi mechanism is temperature-dependent, as has also been described for RNAi mechanisms in other organisms. However, the cause of this temperature dependence could not be elucidated within this work.
It could, however, be shown that two classes of siRNAs are involved in this mechanism. In addition to the primary siRNAs already described in the literature, secondary siRNAs were detected whose synthesis depends on an RdRP. It was concluded that the RdRP responsible for the synthesis of the secondary siRNAs is the homolog Rdr2. Furthermore, it was shown that these secondary siRNAs induce transitivity, that is, the amplification of siRNAs beyond the original trigger molecule. It could be demonstrated that the secondary siRNAs are not synthesized from the original transgene but rather derive from a homologous endogenous transcript, an mRNA, and can therefore be regarded as transitive.
Furthermore, it could be shown that the nucleotidyltransferase Cid2 is also involved in the accumulation of these secondary siRNAs. It was concluded that Cid2 exists in a complex with Rdr2 and stabilizes the template for the generation of the secondary siRNAs, thereby making it accessible to Rdr2.
A further focus of this work was a more detailed investigation of the specific stabilization of both siRNA classes. It could be shown that several Piwi proteins are involved in the transgene-induced mechanism; the Paramecium-specific Piwis Ptiwi 8, Ptiwi 13, and Ptiwi 14 play a role here. The analyses showed that Ptiwi 8 and Ptiwi 14 are involved in the accumulation and thus in the stabilization of both siRNA classes, although this effect appears to rest mainly on Ptiwi 14. For Ptiwi 13, it could be presumed that it is involved rather in the accumulation and specific stabilization of the secondary siRNAs. It could also be shown that both classes of transgene-induced siRNAs carry a methyl group at their 3´ end, which depends on the 2´-O-methyltransferase Hen1 and likewise serves to stabilize the siRNAs. Moreover, it could be presumed that this methylation takes place before the siRNAs bind to one of the Ptiwis and is independent of this binding. This allowed conclusions to be drawn about the temporal course of the transgene-induced RNAi mechanism.
Localization of the Hen1 protein further showed that it is found in or at the germline-associated micronuclei and the vegetative macronucleus. The methylation of the siRNAs thus takes place in the nuclei. This suggests that the transgene-induced RNAi mechanism can mediate not only post-transcriptional regulation but also transcriptional gene regulation directly at the chromatin.
Continuous fiber-reinforced thermoplastics (organo sheets) offer great potential for use in large-volume visible applications. However, several material- and process-related obstacles stand in the way of realizing this potential. This work aims to provide the in-depth understanding required for the material- and process-related design of organo sheet components with high-quality optical surfaces. The work comprises:
- Investigations of material and process parameters
- Analytical and FE modeling of the surface formation, including verification
- The development of a mold concept to improve the isothermal processing route
The investigation of the influence of the textile fabric parameters fiber diameter and mesh width on the surface waviness of organo sheets shows an increasing waviness with increasing fiber diameter and mesh width. A limit waviness of Wz25 = 0.5 μm was determined, below which no waviness is subjectively perceived any more. In a process comparison between isothermal and variothermal processing, variothermally processed organo sheets show a 40–50 % lower waviness. This effect is attributed to the changed thermal process control during the cooling phase. These findings were captured in an analytical process model that accounts for the rheological behavior of the matrix in addition to the thermal properties. Building on the developed model, an FE process simulation was developed and verified against experimental data. The model enables the prediction of the surface waviness of organo sheets of variable laminate configuration under variothermal processing and additionally describes the behavior of organo sheets under in-plane shear.
To make the surface-improving properties of variothermal processing usable in the isothermal process as well, a novel mold concept was developed that can deliberately adjust the process windows via adapted thermal mold properties. In addition to an improved component surface, an optimized process design can shorten the overall process time and reduce the energy demand.
This dissertation focuses on the visualization of urban microclimate data sets,
which describe the atmospheric impact of individual urban features. The application
and adaptation of visualization and analysis concepts to enhance the
insight into observational data sets used in this specialized area are explored, motivated
through application problems encountered during active involvement
in urban microclimate research at the Arizona State University in Tempe, Arizona.
Besides two smaller projects dealing with the analysis of thermographs
recorded with a hand-held device and visualization techniques used for building
performance simulation results, the main focus of the work described in
this document is the development of a prototypic tool for the visualization
and analysis of mobile transect measurements. This observation technique involves
a sensor platform mounted to a vehicle, which is then used to traverse
a heterogeneous neighborhood to investigate the relationships between urban
form and microclimate. The resulting data sets are among the most complex
modes of in-situ observations due to their spatio-temporal dependence, their
multivariate nature, but also due to the various error sources associated with
moving platform observations.
The prototype enables urban climate researchers to preprocess their data,
to explore a single transect in detail, and to aggregate observations from multiple
traverses conducted over diverse routes for a visual delineation of climatic
microenvironments. Extending traditional analysis methods, the suggested visualization
tool provides techniques to relate the measured attributes to each
other and to the surrounding land cover structure. In addition to that, an
improved method for sensor lag correction is described, which shows the potential
to increase the spatial resolution of measurements conducted with slow
air temperature sensors.
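One widely used first-order correction for slow sensors, given here only as a hedged illustration and not as the exact method developed in this thesis, models the sensor as a low-pass filter with time constant \(\tau\) and reconstructs the signal as \(T_{true}(t) \approx T_{meas}(t) + \tau\, dT_{meas}/dt\). A minimal Python sketch with made-up sample values:

    # Minimal sketch of a first-order sensor lag correction (illustrative only).
    import numpy as np

    def correct_sensor_lag(t, temp_measured, tau, smooth_window=5):
        """Approximate the true temperature as T_meas + tau * dT_meas/dt."""
        kernel = np.ones(smooth_window) / smooth_window
        smoothed = np.convolve(temp_measured, kernel, mode="same")  # damp noise
        return smoothed + tau * np.gradient(smoothed, t)

    t = np.arange(0.0, 60.0, 1.0)               # one sample per second (made-up)
    true_temp = 25.0 + 2.0 * (t > 30.0)         # step change at t = 30 s
    tau = 10.0                                  # assumed sensor time constant [s]
    measured = np.empty_like(t)
    measured[0] = true_temp[0]
    for k in range(1, len(t)):                  # simulate the sluggish sensor
        measured[k] = measured[k-1] + (true_temp[k] - measured[k-1]) / tau
    corrected = correct_sensor_lag(t, measured, tau)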
In summary, the interdisciplinary approach followed in this thesis triggers
contributions to geospatial visualization and visual analytics, as well as to urban
climatology. The solutions developed in the course of this dissertation are
meant to support domain experts in their research tasks, providing means to
gain a qualitative overview over their specific data sets and to detect patterns,
which can then be further analyzed using domain-specific tools and methods.
Mobile Partizipation
(2015)
Smartphones bring computing applications into public space. The mobile web, geolocation, and integrated sensors enable collaborative data collection (Urban Sensing), spontaneous communication (Smart Mobs), and new forms of planning communication (Mobile Augmented Reality). Participation under changed auspices can be diagnosed: more transparent procedures, earlier involvement of the public, and more opportunities to have a say are increasingly being demanded. At the same time, a multitude of new bottom-up movements is developing that regard the Internet as a place of participation and constructive involvement in the city and its planning and that contribute in many different ways. Crowdsourcing, civic hacking, and urban interventions exemplify this change and foster these new forms of self-initiated participation. After defining the phenomenon of mobile participation and presenting numerous examples, new developments, possibilities, and opportunities, but also challenges and obstacles for urban planning are described, and a look is taken at the fields of work that will emerge in the age of smart cities.
In some processes for spinning synthetic fibers the filaments are exposed to highly turbulent air flows to achieve a high degree of stretching (elongation). The quality of the resulting filaments, namely thickness and uniformity, is thus determined essentially by the aerodynamic force coming from the turbulent flow. Up to now, there is a gap between the elongation measured in experiments and the elongation obtained by numerical simulations available in the literature.
The main focus of this thesis is the development of an efficient and sufficiently accurate simulation algorithm for the velocity of a turbulent air flow and the application in turbulent spinning processes.
In stochastic turbulence models the velocity is described by an \(\mathbb{R}^3\)-valued random field. Based on an appropriate description of the random field by Marheineke, we have developed an algorithm that fulfills our requirements of efficiency and accuracy. Applying the resulting stochastic aerodynamic drag force to the fibers then allows the simulation of the fiber dynamics modeled by a random partial differential algebraic equation system as well as a quantification of the elongation in a simplified random ordinary differential equation model for turbulent spinning. The numerical results are very promising: whereas the numerical results available in the literature can only predict elongations up to order \(10^4\), we obtain an order of \(10^5\), which is closer to the elongations of order \(10^6\) measured in experiments.
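As a rough illustration of how velocity fluctuations can be sampled from a prescribed covariance (a generic Gaussian-field sketch, not the specific algorithm developed in the thesis), one component of the fluctuation at discrete positions along the filament could be drawn as follows; the exponential covariance and all parameter values are assumptions made for the example.

    # Illustrative only: sampling one velocity component of a zero-mean Gaussian
    # random field with an assumed exponential covariance via a Cholesky factor.
    import numpy as np

    def sample_fluctuation(x, sigma=1.0, corr_length=0.1, rng=None):
        """Draw u'(x) ~ N(0, C) with C_ij = sigma^2 exp(-|x_i - x_j| / corr_length)."""
        rng = np.random.default_rng() if rng is None else rng
        cov = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_length)
        L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(x)))  # jitter for stability
        return L @ rng.standard_normal(len(x))

    x = np.linspace(0.0, 1.0, 200)        # positions along the filament (made-up)
    u_prime = sample_fluctuation(x)       # one realization of the fluctuation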
The Wilkie model is a stochastic asset model developed by A.D. Wilkie in 1984 with the purpose of exploring the behaviour of investment factors of insurers within the United Kingdom. Even so, there is still no analysis that studies the Wilkie model in a portfolio optimization framework. Originally, the Wilkie model considers a discrete-time horizon, and we apply the concept of the Wilkie model to develop a suitable ARIMA model for Malaysian data using the Box-Jenkins methodology. We obtain the estimated parameters for each sub-model within the Wilkie model that suit the case of Malaysia, which permits us to analyse the results from both a statistical and an economic point of view. We then review the continuous-time case, which was initially introduced by Terence Chan in 1998. A continuous-time model inspired by the Wilkie model is then employed to develop the wealth equation of a portfolio that consists of a bond and a stock. We are interested in building portfolios based on three well-known trading strategies: a self-financing strategy, a constant growth optimal strategy, and a buy-and-hold strategy. In dealing with the portfolio optimization problems, we use the stochastic control technique consisting of the maximization problem itself, the Hamilton-Jacobi equation, the solution to the Hamilton-Jacobi equation, and finally the verification theorem. In finding the optimal portfolio, we obtain the specific solution of the Hamilton-Jacobi equation and prove the solution via the verification theorem. For the simple buy-and-hold strategy, we use mean-variance analysis to solve the portfolio optimization problem.
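For orientation only, the classical constant-coefficient Merton problem shows the structure of such a stochastic control argument (the thesis uses Wilkie-type dynamics instead): with wealth \(X_t\), risky fraction \(\pi_t\), and value function \(V(t,x)=\sup_\pi \mathbb{E}[U(X_T)\mid X_t=x]\), one has
\[
dX_t = X_t\bigl[(r + \pi_t(\mu - r))\,dt + \pi_t \sigma\, dW_t\bigr], \qquad
0 = V_t + \sup_{\pi}\Bigl\{ x\bigl(r + \pi(\mu - r)\bigr)V_x + \tfrac{1}{2}\pi^2\sigma^2 x^2 V_{xx} \Bigr\},
\]
with pointwise maximizer \(\pi^{*} = -\frac{(\mu - r)\,V_x}{\sigma^2 x\, V_{xx}}\); the verification theorem then confirms that a sufficiently smooth solution of this equation is indeed the value function.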
Motivated by the results of infinite dimensional Gaussian analysis and especially white noise analysis, we construct a Mittag-Leffler analysis. This is an infinite dimensional analysis with respect to non-Gaussian measures of Mittag-Leffler type which we call Mittag-Leffler measures. Our results indicate that the Wick ordered polynomials, which play a key role in Gaussian analysis, cannot be generalized to this non-Gaussian case. We provide evidence that a system of biorthogonal polynomials, called a generalized Appell system, is applicable to the Mittag-Leffler measures instead of Wick ordered polynomials. With the help of an Appell system, we introduce a test function space and a distribution space. Furthermore, we give characterizations of the distribution space and we characterize the weakly integrable functions and the convergent sequences within the distribution space. As an application, we construct Donsker's delta in a non-Gaussian setting.
In the second part, we develop a grey noise analysis. This is a special application of the Mittag-Leffler analysis. In this framework, we introduce generalized grey Brownian motion and prove differentiability in a distributional sense and the existence of generalized grey Brownian motion local times. Grey noise analysis is then applied to the time-fractional heat equation and the time-fractional Schrödinger equation. We prove a generalization of the fractional Feynman-Kac formula for distributional initial values. In this way, we find a Green's function for the time-fractional heat equation which coincides with the solutions given in the literature.
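For reference, the Mittag-Leffler function underlying these measures is, for \(0 < \beta \le 1\),
\[
E_\beta(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\beta k + 1)},
\]
and in the grey noise literature the Mittag-Leffler measure \(\mu_\beta\) is commonly characterized by \(\int e^{i\langle \omega, \varphi\rangle}\, d\mu_\beta(\omega) = E_\beta\bigl(-\tfrac{1}{2}\langle \varphi, \varphi\rangle\bigr)\); for \(\beta = 1\) this reduces to the Gaussian (white noise) case.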
The advances in sensor technology have introduced smart electronic products with
high integration of multi-sensor elements, sensor electronics and sophisticated signal
processing algorithms, resulting in intelligent sensor systems with a significant level
of complexity. This complexity leads to higher vulnerability in performing their
respective functions in a dynamic environment. The system dependability can be
improved via the implementation of self-x features in reconfigurable systems. The
reconfiguration capability requires capable switching elements, typically in the form
of a CMOS switch or miniaturized electromagnetic relay. The emerging DC-MEMS
switch has the potential to complement the CMOS switch in System-in-Package as
well as integrated circuit solutions. The aim of this thesis is to study the feasibility
of using DC-MEMS switches to enable the self-x functionality at system level.
The self-x implementation is also extended to the component level, in which the
ISE-DC-MEMS switch is equipped with self-monitoring and self-repairing features.
The MEMS electrical behavioural model generated by the design tool is inadequate,
so additional electrical models have been proposed, simulated and validated. The
simplification of the mechanical MEMS model has produced inaccurate simulation
results that led to the occurrence of stiction in the actual device. A stiction conformity
test has been proposed, implemented, and successfully validated to compensate for
the inaccurate mechanical model. Four different system simulations of representative
applications were carried out using the improved behavioural MEMS model, to
show the aptness and the performance of the ISE-DC-MEMS switch in sensitive
reconfiguration tasks in the application and to compare it with transmission gates.
The current design of the ISE-DC-MEMS switch needs further optimization in terms
of size, driving voltage, and the robustness of the design to guarantee high output
yield in order to match the performance of commercial DC MEMS switches.
In this thesis, an approach is presented that turns the currently unstructured process of automotive hazard analysis and risk assessments (HRA), which relies on creativity techniques, into a structured, model-based approach that makes the HRA results less dependent on experts' experience, more consistent, and gives them higher quality. The challenge can be subdivided into two steps. The first step is to improve the HRA as it is performed in current practice. The second step is to go beyond the current practice and consider not only single service failures as relevant hazards, but also multiple service failures. For the first step, the most important aspect is to formalize the operational situation of the system and to determine its likelihood. Current approaches use natural-language textual descriptions, which makes it hard to ensure consistency and increase efficiency through reuse. Furthermore, due to ambiguity in natural language, it is difficult to ensure consistent likelihood estimates for situations.
The main aspect of the second step is that considering multiple service failures as hazards implies that one needs to analyze an exponential number of hazards. Due to the fact that hazard assessments are currently done purely manually, considering multiple service failures is not possible. The only way to approach this challenge is to formalize the HRA and make extensive use of automation support.
In SAHARA we handle these challenges by first introducing a model-based representation of an HRA with GOBI. Based on this, we formalized the representation of operational situations and their likelihood assessment in OASIS and HEAT, respectively. We show that more consistent situation assessments are possible and that situations (including their likelihood) can be efficiently reused. The second aspect, coping with multiple service failures, is addressed in ARID. We show that using our tool-supported HRA approach, 100% coverage of all possible hazards (including multiple service failures) can be achieved by relying on very limited manual effort. We furthermore show that not considering multiple service failures results in insufficient safety goals.
Computational Homogenization of Piezoelectric Materials using FE² Methods and Configurational Forces
(2015)
Piezoelectric materials are electro-mechanically coupled materials. In these materials it is possible to produce an electric field by applying a mechanical load. This phenomenon is known as the piezoelectric effect. These materials also exhibit a mechanical deformation in response to an external electric loading, which is known as the inverse piezoelectric effect. These smart properties make piezoelectric materials suitable for applications in sensors and actuators. Ferroelectric or piezoelectric materials show switching behavior of the polarization in the material under an external loading. Due to this property, these materials are used to produce random access memory (RAM) for the non-volatile storage of data in computing devices. It is essential to understand the material responses of piezoelectric materials properly in order to use them in engineering applications in innovative ways. Due to the growing interest in determining the material responses of smart materials (e.g., piezoelectric materials), computational methods are becoming increasingly important.
Many engineering materials possess inhomogeneities on the micro level. These inhomogeneities in the materials cause some difficulties in the determination of the material responses, both computationally and experimentally. On the other hand, these inhomogeneities sometimes give the materials favorable physical properties; glass or carbon fiber reinforced composites, for example, are lightweight but show high strength. Piezoelectric materials also exhibit pronounced inhomogeneities on the micro level. These inhomogeneities originate from the presence of domains, domain walls, grains, grain boundaries, micro cracks, etc. in the material. In order to capture the effects of the underlying microstructures on the macro quantities, it is essential to homogenize the material parameters and the physical responses. There are several approaches to perform the homogenization. A two-scale classical (first-order) homogenization of electro-mechanically coupled materials using an FE² approach is discussed in this work. The main objective of this work is to investigate the influences of the underlying microstructures on the macro Eshelby stress tensor and on the macro configurational forces. The configurational forces are determined in certain defect situations. These defect situations include the crack tip of a sharp crack in the macro specimen.
A literature review shows that the macro strain tensor is used to determine the micro boundary conditions for FE²-based homogenization in a small strain setting. This approach is capable of determining consistent homogenized physical quantities (e.g., stress, strain) and homogenized material quantities (e.g., the stiffness tensor). However, applying this type of micro boundary condition for the homogenization does not generate a physically consistent macro Eshelby stress tensor or physically consistent macro configurational forces. Even in the absence of micro volume configurational forces, this approach to the homogenization of piezoelectric materials produces unphysical volume configurational forces on the macro level. After a thorough investigation of the boundary conditions on the representative volume elements (RVEs), it is found that displacement-gradient-driven micro boundary conditions remedy this issue. The use of displacement-gradient-driven micro boundary conditions also satisfies the Hill-Mandel condition. The macro Eshelby stress tensor of a purely mechanical problem in a small deformation setting can be determined in two possible ways: by using the homogenized mechanical quantities (displacement gradient and stress tensor), or by homogenizing the Eshelby stress tensor on the micro level by volume averaging. The first approach does not satisfy the Hill-Mandel condition incorporating the Eshelby stress tensor in the energy term, whereas the second approach does. In the case of the homogenized Eshelby stress tensor determined from the homogenized physical quantities, the Hill-Mandel condition yields an additional energy term. A body in a small deformation setting is deformed according to the displacement gradient. If the homogenization is done using strain-driven micro boundary conditions, the micro domain is deformed according to the macro strain, but the small vicinity around the corresponding Gauß point is deformed according to the macro displacement gradient. This means that some restrictions are imposed at every Gauß point on the macro level, which leads the macro system to produce nonphysical volume configurational forces.
An FE²-based computational homogenization technique is also considered for the homogenization of piezoelectric materials. In this technique, a representative volume element, which comprises the microstructural features of the material, is assigned to every Gauß point of the macro domain. The macro displacement gradient and the macro electric field, or the macro stress tensor and the macro electric displacement, are passed to the RVEs at every macro Gauß point. After determining the boundary conditions on the RVEs, the homogenization process is performed. The homogenized physical quantities and the homogenized material parameters are passed back to the macro Gauß points. In this work, numerical investigations are carried out for two distinct situations regarding the evolution of the microstructures of the piezoelectric materials on the micro level: a) homogenization using stationary microstructures, and b) homogenization using evolving microstructures.
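In such a scheme, the macro quantities are typically obtained by volume averaging over the RVE, and the (generalized) Hill-Mandel condition requires the energy densities on both scales to be consistent. One common electro-mechanical form, stated here only as a generic illustration, is
\[
\bar{\boldsymbol{\sigma}} = \frac{1}{V}\int_{V}\boldsymbol{\sigma}\,\mathrm{d}V, \qquad
\bar{\mathbf{D}} = \frac{1}{V}\int_{V}\mathbf{D}\,\mathrm{d}V, \qquad
\bar{\boldsymbol{\sigma}} : \delta\bar{\boldsymbol{\varepsilon}} - \bar{\mathbf{D}}\cdot\delta\bar{\mathbf{E}}
= \frac{1}{V}\int_{V}\bigl(\boldsymbol{\sigma} : \delta\boldsymbol{\varepsilon} - \mathbf{D}\cdot\delta\mathbf{E}\bigr)\,\mathrm{d}V ,
\]
where overbars denote macro quantities and \(V\) is the RVE volume.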
In the first case, the domain walls remain at fixed positions throughout the simulations for the homogenization of piezoelectric materials. For a considerably large external loading, the real situation is different, but to understand the effects of the underlying microstructures on the macro configurational forces, it is to some extent sufficient to perform the homogenization with fixed or stationary microstructures. The homogenization process is carried out for different microstructures and for different loading conditions. If the mechanical load is applied in the direction of the polarization, a smaller crack tip configurational force is observed in comparison to the configurational force determined for a mechanical loading perpendicular to the polarization. If the polarizations in the microstructures are parallel or perpendicular to the applied electric field and the applied displacement, only configurational forces parallel to the crack ligament of the macro crack are observed. In the case of inclined polarizations in the microstructures, configurational forces inclined to the crack ligament are obtained. The simulation results also reveal that applying an external electric field to the material reduces the value of the nodal configurational forces at the crack tip.
In the second case, the interfaces of the microstructures are allowed to move from their initial positions at every step of the applied incremental external loading. Thus, at every step of the application of the external loading, the microstructures change when the external loading is larger than the coercive field. The movement of the interfaces is realized through the nodal configurational forces on the micro level. At every step of the application of the external loading, the nodal configurational forces per unit length on the domain walls are determined in the post-processing of the FE simulation on the micro domain. With the help of the domain wall kinetics, the new positions of the domain walls are determined. Numerical results show that the crack tip region is the most affected area in the macro domain. For that reason, a very different distribution of the macro electric displacement is observed compared to the one produced by using fixed microstructures. Due to the movement of the domain walls, energy is dissipated in the system. As a result, a smaller configurational force appears at the crack tip on the macro level in the case of homogenization using evolving microstructures. By using the homogenization technique involving the evolution of the microstructures, it is possible to produce the electric displacement vs. electric field hysteresis loop on the macro level. The shape of the hysteresis loop depends on the rate at which the external electric loading is applied. A faster application of the external electric field widens the hysteresis loop.
Maintaining complex software systems tends to be a costly activity where software engineers spend a significant amount of time trying to understand the system's structure and behavior. As early as the 1980s, operation and maintenance costs were already twice as high as the initial development costs. Since then, these costs have steadily increased. The focus of this thesis is to reduce these costs through novel interactive exploratory visualization concepts and to apply these modern techniques in the context of services offered by software quality analysis.
Costs associated with the understanding of software are governed by specific features of the system in terms of different domains, including re-engineering, maintenance, and evolution. These features are reflected in software measurements or inner qualities such as extensibility, reusability, modifiability, testability, compatibility, or adaptability. The presence or absence of these qualities determines how easily a software system can conform or be customized to meet new requirements. Consequently, the need arises to monitor and evaluate the qualitative state of a software system in terms of these qualities. Using metrics-based analysis, production costs and quality defects of the software can be recorded objectively and analyzed.
In practice, there exist a number of free and commercial tools that analyze the inner quality of a software system through the use of software metrics. However, most of these tools focus on software data mining and metrics (computational analysis) and only a few support visual analytical reasoning. Typically, computational analysis tools generate data and software visualization tools facilitate the exploration and explanation of this data through static or interactive visual representations. Tools that combine these two approaches focus only on well-known metrics and lack the ability to examine user defined metrics. Further, they are often confined to simple visualization methods and metaphors, including charts, histograms, scatter plots, and node-link diagrams.
The goal of this thesis is to develop methodologies that combine computational analysis methods with sophisticated visualization methods and metaphors through an interactive visual analysis approach. This approach promotes an iterative knowledge discovery process through multiple views of the data, where analysts select features of interest in one of the views and inspect data items of the selected subset in all of the views. On the one hand, we introduce a novel approach for the visual analysis of software measurement data that captures complete facts of the system, employs a flow-based visual paradigm for the specification of software measurement queries, and presents measurement results through integrated software visualizations. This approach facilitates the on-demand computation of desired features and supports interactive knowledge discovery - the analyst can gain more insight into the data through activities that involve: building a mental model of the system; exploring expected and unexpected features and relations; and generating, verifying, or rejecting hypotheses with visual tools. On the other hand, we have also extended existing tools with additional views of the data for the presentation and interactive exploration of system artifacts and their inter-relations.
Contributions of this thesis have been integrated into two different prototype tools. First evaluations of these tools show that they can indeed improve the understanding of large and complex software systems.
The present research combines different paradigms in the area of visual perception of letters and words. The experiments aimed to understand the deficits underlying faulty visual processing of letters and words. The present work summarizes findings from two different populations: (1) dyslexics (reading-disabled children) and (2) illiterates (adults who cannot read). To compare the results, comparisons were made between the literate and illiterate groups as well as between dyslexics and a control group (normally reading children). Differences in event-related potentials (ERPs) between dyslexics and control children were examined using a mental rotation task for letters. In the ERPs, the mental rotation of letters appeared as a delayed positive component, and this component becomes less positive as the task becomes more difficult (rotation-related negativity, RRN). The component was absent for dyslexics and present for controls. Dyslexics also showed some late effects in comparison to control children, which could be interpreted as problems at the decision stage, where they are confused as to whether the letter is normal or mirrored. Dyslexics also have problems in responding to letters with visual or phonological similarities (e.g., b vs d, p vs q). Visually similar letters were used to compare dyslexics and controls on a symmetry generalization task in two different contrast conditions (low and high). Dyslexics showed a similar pattern of response but were overall slower in responding to the task compared to controls. The results were interpreted within the framework of the Functional Coordination Deficit (Lachmann, 2002). Dyslexics also showed delayed responses in a word recognition task during motion. Using a red background decreases magnocellular pathway (M-pathway) activity, making it more difficult to identify letters, and this effect was worse for dyslexics because their M-pathway is weaker. Accordingly, responses in the lexical task during motion were worse with a red background than with a green one, and reaction times with red were longer than those with green. Furthermore, illiterates showed an analytic approach in responding to letters as well as to shapes. This analytic approach does not result from an individual's ability to read but is a primary basis of visual organization and perception.
The heterogeneity of today's access possibilities to wireless networks imposes challenges for efficient mobility support and resource management across different Radio Access Technologies (RATs). The current situation is characterized by the coexistence of various wireless communication systems, such as GSM, HSPA, LTE, WiMAX, and WLAN. These RATs greatly differ with respect to coverage, spectrum, data rates, Quality of Service (QoS), and mobility support.
In real systems, mobility-related events, such as Handover (HO) procedures, directly affect resource efficiency and End-To-End (E2E) performance, in particular with respect to signaling efforts and users' QoS. In order to lay a basis for realistic multi-radio network evaluation, a novel evaluation methodology is introduced in this thesis.
A central hypothesis of this thesis is that the consideration and exploitation of additional information characterizing user, network, and environment context, is beneficial for enhancing Heterogeneous Access Management (HAM) and Self-Optimizing Networks (SONs). Further, Mobile Network Operator (MNO) revenues are maximized by tightly integrating bandwidth adaptation and admission control mechanisms as well as simultaneously accounting for user profiles and service characteristics. In addition, mobility robustness is optimized by enabling network nodes to tune HO parameters according to locally observed conditions.
For establishing all these facets of context awareness, various schemes and algorithms are developed and evaluated in this thesis. System-level simulation results demonstrate the potential of context information exploitation for enhancing resource utilization, mobility support, self-tuning network operations, and users' E2E performance.
In essence, the conducted research activities and presented results motivate and substantiate the consideration of context awareness as key enabler for cognitive and autonomous network management. Further, the performed investigations and aspects evaluated in the scope of this thesis are highly relevant for future 5G wireless systems and current discussions in the 5G infrastructure Public Private Partnership (PPP).
The overall goal of the work is to simulate rarefied flows inside geometries with moving boundaries. The behavior of a rarefied flow is characterized through the Knudsen number \(Kn\), which can be very small (\(Kn < 0.01\) continuum flow) or larger (\(Kn > 1\) molecular flow). The transition region (\(0.01 < Kn < 1\)) is referred to as the transition flow regime.
Continuum flows are mainly simulated using commercial CFD methods, which solve the Euler equations. In the case of molecular flows, one uses statistical methods such as the Direct Simulation Monte Carlo (DSMC) method. In the transition region, the Euler equations are not adequate to model gas flows, and because of the rapid increase in particle collisions the DSMC method tends to fail as well.
Therefore, we develop a deterministic method which is suitable for simulating rarefied gas problems for any Knudsen number and is appropriate for simulating flows inside geometries with moving boundaries. The method we use is the Finite Pointset Method (FPM), a mesh-free numerical method developed at the ITWM Kaiserslautern that is mainly used to solve fluid dynamical problems.
More precisely, we develop a method in the FPM framework to solve the BGK model equation, which is a simplification of the Boltzmann equation. This equation is mainly used to describe rarefied flows.
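For reference, the BGK model replaces the Boltzmann collision integral by a relaxation of the distribution function \(f\) towards a local Maxwellian \(M[f]\),
\[
\partial_t f(t,x,v) + v \cdot \nabla_x f(t,x,v) = \frac{1}{\tau}\bigl(M[f](t,x,v) - f(t,x,v)\bigr),
\]
where the relaxation time \(\tau\) is linked to the Knudsen number.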
The FPM-based method is implemented for one- and two-dimensional physical and velocity spaces and for different ranges of the Knudsen number. Numerical examples are shown for problems with moving boundaries. It is seen that our method is superior to regular grid methods with respect to the implementation of boundary conditions. Furthermore, our results are comparable to reference solutions obtained with CFD and DSMC methods, respectively.
The current procedures for achieving industrial process surveillance, waste reduction, and prognosis of critical process states are still insufficient in some parts of the manufacturing industry. Increasing competitive pressure, falling margins, increasing cost, just-in-time production, environmental protection requirements, and guidelines concerning energy savings pose new challenges to manufacturing companies, from the semiconductor to the pharmaceutical industry.
New, more intelligent technologies adapted to the current technical standards provide companies with improved options to tackle these situations. Here, knowledge-based approaches open up pathways that have not yet been exploited to their full extent. The Knowledge-Discovery-Process for knowledge generation describes such a concept. Based on an understanding of the problems arising during production, it derives conclusions from real data, processes these data, transfers them into evaluated models and, by this open-loop approach, reiteratively reflects the results in order to resolve the production problems. Here, the generation of data through control units, their transfer via field bus for storage in database systems, their formatting, and the immediate querying of these data, their analysis and their subsequent presentation with its ensuing benefits play a decisive role.
The aims of this work result from the lack of systematic approaches to the above-mentioned issues, such as process visualization, the generation of recommendations, the prediction of unknown sensor and production states, and statements on energy cost.
Both science and commerce offer mature statistical tools for data preprocessing, analysis and modeling, and for the final reporting step. Since their creation, the insurance business, the world of banking, market analysis, and marketing have been the application fields of these software types; they are now expanding to the production environment.
Appropriate modeling can be achieved via specific machine learning procedures, which have been established in various industrial areas, e.g., in process surveillance by optical control systems. Here, State-of-the-art classification methods are used, with multiple applications comprising sensor technology, process areas, and production site data. Manufacturing companies now intend to establish a more holistic surveillance of process data, such as, e.g., sensor failures or process deviations, to identify dependencies. The causes of quality problems must be recognized and selected in real time from about 500 attributes of a highly complex production machine. Based on these identified causes, recommendations for improvement must then be generated for the operator at the machine, in order to enable timely measures to avoid these quality deviations.
Unfortunately, the ability to meet the required increases in efficiency – with simultaneous consumption and waste minimization – still depends on data that are, for the most part, not available. There is an overrepresentation of positive examples whereas the number of definite negative examples is too low.
The acquired information can be influenced by sensor drift effects and the occurrence of quality degradation may not be adequately recognized. Sensorless diagnostic procedures with dual use of actuators can be of help here.
Moreover, in the course of a process, critical states with sometimes unexplained behavior can occur. Also in these cases, deviations could be reduced by early countermeasures.
The generation of data models using appropriate statistical methods is of advantage here.
Conventional classification methods sometimes reach their limits. Supervised learning methods are mostly used in areas of high information density with sufficient data available for the classes under examination. However, there is a growing trend (e.g., spam filtering) to apply supervised learning methods to underrepresented classes, the datasets of which are, at best, outliers or not at all existent.
The application field of One-Class Classification (OCC) deals with this issue. Standard classification procedures (e.g., the k-nearest-neighbor classifier, support vector machines) can be adapted to such problems. Thereby, a control system is able to derive statements on changing process states or sensor deviations. The above-described knowledge discovery process was employed in a case study from the polymer film industry at Mondi Gronau GmbH, taken as an example, and accomplished by a real-data survey at the production site and subsequent data preprocessing, modeling, evaluation, and deployment as a system for the generation of recommendations. To this end, questions regarding the following topics had to be clarified: data sources, datasets and their formatting, transfer pathways, storage media, query sequences, the employed classification methods, their adjustment to the problems at hand, evaluation of the results, construction of a dynamic cycle, and the final implementation in the production process, along with its surplus value for the company.
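As a minimal illustration of the one-class idea (a generic sketch, not the tailored classifiers developed in this work), a model can be fitted on records of normal operation only and then used to flag deviating records; the feature values below are made-up, and the scikit-learn OneClassSVM stands in for the modified procedures mentioned above.

    # Illustrative one-class classification for process surveillance: train on
    # "normal" records only, then flag deviations. All data here are synthetic.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    normal_train = rng.normal(0.0, 1.0, size=(1000, 6))          # 6 reduced features
    new_batch = np.vstack([rng.normal(0.0, 1.0, size=(50, 6)),   # normal records
                           rng.normal(4.0, 1.0, size=(5, 6))])   # deviating records

    scaler = StandardScaler().fit(normal_train)
    model = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale")
    model.fit(scaler.transform(normal_train))

    pred = model.predict(scaler.transform(new_batch))            # +1 normal, -1 abnormal
    print("flagged records:", np.where(pred == -1)[0])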
Pivotal options for optimization with respect to ecological and economical aspects can be found here. Capacity for improvement is given in the reduction of energy consumption, CO\(_2\) emissions, and waste at all machines. At this one site, savings of several million euros per month can be achieved.
One major difficulty so far has been poorly accessible process data which, distributed across various unconnected data sources, in some areas led to increased analysis effort and a lack of holistic real-time quality surveillance. As a result, the monitoring of specifications and the operator support derived from it were clearly at a disadvantage with regard to cost minimization.
The data of the case study, captured according to their purposes and in coordination with process experts, amounted to 21,900 process datasets from cast film extrusion during 2 years’ time, including sensor data from dosing facilities and 300 site-specific energy datasets from the years 2002–2014.
In the following, the investigation sequence is displayed:
1. In the first step, industrial approaches according to Industrie 4.0 and related to Big Data were investigated. The applied statistical software suites and their functions were compared with a focus on real-time data acquisition from database systems, different data formats, their sensor locations at the machines, and the data processing part. The linkage of datasets from various data sources for, e.g., labeling and downstream exploration according to the knowledge discovery process is of high importance for polymer manufacturing applications.
2. In the second step, the aims were defined according to the industrial requirements, i.e., with the critical production problem called “cut-off” as the main selection, and with regard to their investigation with machine learning methods. Therefore, a system architecture corresponding to the polymer industry was developed, containing the following processing steps: data acquisition, monitoring & recommendation, and self-configuration.
3. The novel sensor datasets, with 160–2,500 real and synthetic attributes, were acquired within 1-min intervals via PLC and field bus from an Oracle database. The 160 features were reduced to 6 dimensions with feature reduction methods. Due to underrepresentation of the critical class, the learning approaches had to be modified and optimized for one-class classification, which achieved 99% accuracy after training, testing and evaluation with real datasets.
4. In the next step, the 6-dimensional dataset was scaled into lower 1-, 2-, or 3-dimensional space with classical and non-classical mapping approaches for downstream visualization. The mapped view was separated into zones of normal and abnormal process conditions by threshold setting.
5. Afterwards, the boundary zone was investigated and an approach for trajectory extraction, consisting of sequences of condition points, was developed in order to optimize the prediction behavior of the model. Classifiers were trained, tested, and evaluated on the extracted trajectories using state-of-the-art classification methods, achieving a 99% recognition ratio.
6. In the last step, the best methods and processing parts were converted into a specifically developed domain-specific graphical user interface for real-time visualization of process condition changes. The requirements of such an interface were discussed with the operators with regard to intuitive handling, interactive visualization and recommendations (as e.g., messaging and traffic lights), and implemented.
The software prototype was tested on a laboratory machine. Correct recognition of abnormal process problems was achieved at a 90% ratio. The software was afterwards transferred to a group of on-line production machines.
As demonstrated, the monthly amount of waste arising at machine M150 could be decreased from 20.96% to 12.44% during the application time. The frequency of occurrence of the specific problem was reduced by 30%, corresponding to monthly savings of 50,000 EUR.
In the approach pertaining to the energy prognosis of load profiles, monthly energy data from 2002 to 2014 (about 36 trajectories with three to eight real parameters each) were used as the basis, analyzed and modeled systematically. The prognosis quality increased with approaching target date. Thereby, the site-specific load profile for 2014 could be predicted with an accuracy of 99%.
Sustained cost reductions of several hundred thousand euros, combined with additional savings of EUR 2.8 million, could be demonstrated.
The process improvements achieved while pursuing scientific targets could be successfully and permanently integrated at the case study plant. The increase in methodical and experimental knowledge was reflected in first economic results and could be verified numerically. The expectations of the company were more than fulfilled, and further developments based on the new findings were initiated. Among the new findings are the transfer of the scientific results to more machines and the initiation of further studies expanding into the diagnostics area.
Considering the size of the enterprise, future enhanced success should also be possible for other locations. In the course of the grid charge exemption according to EEG, the energy savings at further German locations can amount to 4–11% on a monetary basis and at least 5% based on energy. Up to 10% of materials and cost can be saved with regard to waste reduction related to specific problems. According to projections, material savings of 5–10 t per month and time savings of up to 50 person-hours are achievable. Important synergy effects can be created by the knowledge transfer.