Synapses are the fundamental structures that regulate the functionality of neural circuits. The ability of synapses to rapidly modulate their structure and function in response to sensory input enables the nervous system to incorporate new adaptations and behaviors in the animal. Synapses remain highly dynamic throughout the life of the animal, beginning in early development. Continuous synapse formation and elimination, as well as activation and inhibition of synaptic function, are observed at almost all synapses. These processes occur at high speed and require tightly controlled cellular mechanisms. An imbalance in these processes results in a defective nervous system and has been reported in many neurological disorders. It is therefore important to understand the mechanisms that regulate synapse development, maintenance, and function.
Kinases and phosphatases are key regulators of cellular mechanisms, and understanding their function in the neuron will shed light on the molecular mechanisms of synaptic plasticity. Using the Drosophila melanogaster larval neuromuscular junction as a model, Bulat et al. (2014) performed a large RNAi-based screen targeting the kinome and phosphatome of Drosophila to identify essential kinases and phosphatases, and found Myeloid leukemia factor-1 adaptor molecule (Madm) and Protein phosphatase 4 (PP4) to be novel regulators of synapse development and maintenance. The function of these molecules in the nervous system had not been reported, and hence I investigated the role of Madm and PP4 in the regulation of synapse development, maintenance, and function.
Myeloid leukemia factor-1 adaptor molecule (Madm), a ubiquitously expressed pseudokinase, is essential for regulating synaptic growth, stability, and function. Using a combination of genetics and high-throughput imaging, I could demonstrate that Madm regulates synaptic growth and stability from the presynapse and synaptic organization from the postsynapse. I could also demonstrate that Madm acts in association with the mTOR pathway, downstream of 4E-BP, to regulate synaptic growth. In addition, using electrophysiology, we could demonstrate that Madm is essential for basal synaptic transmission and additionally functions in retrograde synaptic potentiation. In summary, I could demonstrate that Madm is a novel regulator of synaptic development, maintenance, and function.
Protein phosphatase 4 (PP4), a ubiquitously expressed protein phosphatase, is involved in regulating multiple aspects of the nervous system. I could demonstrate that PP4 is essential for nervous system development and for metamorphosis. Using genetic and imaging analyses, I could demonstrate that loss of PP4 results in abnormal morphology of cell organelles. In addition, I could show that loss of PP4 results in defective brain development with poorly developed structures.
Altogether, in this study, I could demonstrate the importance of two novel regulators, the pseudokinase Madm and the protein phosphatase PP4, in controlling distinct aspects of the neuron.
The number of sensors used in modern devices is rapidly increasing, and interacting with sensors demands analog-to-digital conversion (ADC). Conventional ADCs in leading-edge technologies face many issues due to signal swings, manufacturing deviations, noise, etc. ADC designers are moving to the time domain and to digital design techniques to deal with these issues. This work pursues a novel self-adaptive spiking neural ADC (SN-ADC) design with promising features, e.g., tolerance of technology scaling issues, low-voltage operation, low power, and noise-robust conditioning. The SN-ADC uses spike time to carry the information. Therefore, it can be effectively translated to aggressive new technologies to implement reliable advanced sensory electronic systems. The SN-ADC supports self-x (self-calibration, self-optimization, and self-healing) and the machine learning required for the Internet of Things (IoT) and Industry 4.0. We have designed the main part of the SN-ADC, an adaptive spike-to-digital converter (ASDC). The ASDC is based on a self-adaptive complementary metal-oxide-semiconductor (CMOS) memristor. It mimics the functionality of biological synapses: long-term plasticity and short-term plasticity. The key advantage of our design is the entirely local unsupervised adaptation scheme. The adaptation scheme consists of two hierarchical layers; the first layer is self-adapted, and the second layer is treated manually in this work. In our previous work, the adaptation process was based on 96 variables and therefore required considerable adaptation time to correct the synapses' weights. This paper proposes a novel self-adaptive scheme that reduces the number of variables to only four and has better adaptation capability with less delay than our previous implementation: the maximum adaptation time is reduced from 15 h 27 min in our previous work to 1 min 47.3 s in this work. Current winner-take-all (WTA) circuits suffer from costly designs and an inability to distinguish closely spaced spikes. Therefore, a novel WTA circuit with memory is proposed. It uses 352 transistors for 16 inputs and can process spikes with a minimum time difference of 3 ns. The ASDC has been tested under static and dynamic variations. The nominal values of the SN-ADC parameters, the number of missing codes (NOMC), the integral non-linearity (INL), and the differential non-linearity (DNL), are no missing codes, 0.4 LSB, and 0.22 LSB, respectively, where LSB stands for least significant bit. Under dynamic and static deviations, however, these values degrade, with maximum simulated changes of 0.88 LSB for DNL, 4 LSB for INL, and 6 missing codes for NOMC. The adaptation resets the SN-ADC parameters to their nominal values. The proposed ASDC is designed using X-FAB 0.35 µm CMOS technology and Cadence tools.
Potential fucoidan-based PEGylated PLGA nanoparticles (NPs) offering proper delivery of N-methyl anthranilic acid (MA, a model hydrophobic anti-inflammatory drug) have been developed via the formation of a fucoidan aqueous coating surrounding PEGylated PLGA NPs. The optimum formulation (FuP2), composed of fucoidan:m-PEG-PLGA (1:0.5 w/w) with particle size (365 ± 20.76 nm), zeta potential (-22.30 ± 2.56 mV), % entrapment efficiency (85.45 ± 7.41), drug loading (51.36 ± 4.75 µg/mg of NPs), % initial burst (47.91 ± 5.89), and % cumulative release (102.79 ± 6.89), was further investigated in an in vivo anti-inflammatory study. The effect of FuP2 was assessed in a carrageenan-induced acute inflammation model in rats. The average weight of the paw edema was significantly lowered (p ≤ 0.05) by treatment with FuP2. Moreover, cyclooxygenase-2 and tumor necrosis factor-alpha immunostaining were decreased in the FuP2-treated group compared to the other groups. The levels of prostaglandin E2, nitric oxide, and malondialdehyde were significantly reduced (p ≤ 0.05) in the FuP2-treated group. A significant reduction (p ≤ 0.05) in the expression of interleukins (IL-1β and IL-6), together with an improvement of the histological findings of the paw tissues, was observed in the FuP2-treated group. Thus, fucoidan-based PEGylated PLGA-MA NPs are a promising anti-inflammatory delivery system that can be applied to other similar drugs, potentiating their pharmacological and pharmacokinetic properties.
With burgeoning computing power available, multiscale modelling and simulation has become increasingly capable of capturing the details of physical processes on different scales. The mechanical behavior of solids is often the result of interactions across multiple spatial and temporal scales, and hence it is a typical phenomenon of interest exhibiting multiscale characteristics. At the most basic level, properties of solids can be attributed to atomic interactions and crystal structure, described on the nano scale. Mechanical properties at the macro scale are modeled using continuum mechanics, in terms of stresses and strains. Although continuum models offer an efficient way of studying material properties, they are not accurate enough and lack the microstructural information behind the microscopic mechanisms that cause the material to behave the way it does. Atomistic models are concerned with phenomena at the lattice level, allowing investigation of detailed crystalline and defect structures, and yet the length scales of interest are inevitably far beyond the reach of full atomistic computation, which is prohibitively expensive. This makes multiscale models necessary. A possible avenue to this end is coupling different length scales, the continuum and the atomistic, in accordance with standard procedures. This is done by recourse to the Cauchy-Born rule; in so doing, we aim at a model that is efficient and reasonably accurate in mimicking physical behaviors observed in nature or in the laboratory. In this work, we focus on concurrent coupling based on energetic formulations that link the continuum to the atomistics. At the atomic scale, we describe the deformation of the solid by the displaced positions of the atoms that make up the solid; at the continuum level, the deformation of the solid is described by the displacement field that minimizes the total energy. In the coupled continuum-atomistic model, a continuum formulation is retained as the overall framework of the problem, and the atomistic feature is introduced by way of the constitutive description, with the Cauchy-Born rule establishing the point of contact. The entire formulation is made in the framework of nonlinear elasticity, and all simulations are carried out in quasistatic settings. The model gives a direct account of measurable features of microstructures developed by crystals through sequential lamination.
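For orientation, the Cauchy-Born rule referred to here can be stated in its standard form (the notation below is mine, not the thesis's): the continuum strain energy density at deformation gradient \(\mathbf{F}\) is taken to be the energy per reference volume of a representative atomistic cell whose lattice vectors \(\mathbf{A}_i\) are deformed affinely by \(\mathbf{F}\),

\[
W(\mathbf{F}) = \frac{1}{\Omega_0}\, E_{\mathrm{atom}}\bigl(\mathbf{F}\mathbf{A}_1, \dots, \mathbf{F}\mathbf{A}_n\bigr), \qquad \mathbf{P} = \frac{\partial W}{\partial \mathbf{F}},
\]

where \(\Omega_0\) is the reference cell volume and \(\mathbf{P}\) the resulting first Piola-Kirchhoff stress.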
Understanding human crowd behaviour has been an intriguing topic of interdisciplinary research in recent decades. Modelling crowd dynamics using differential equations is an indispensable approach to unraveling the various complex dynamics involved in such interacting particle systems. Numerical simulation of pedestrian crowds via these mathematical models allows us to study realistic scenarios beyond the limitations of controlled experiments.
In this thesis, the main objective is to understand and analyse the dynamics in a domain shared by both pedestrians and moving obstacles. We model pedestrian motion by combining the social force concept with the idea of optimal path computation. This leads to a system of ordinary differential equations governing the dynamics of individual pedestrians via the interaction forces (social forces) between them. Additionally, a non-local force term involving the optimal path and desired velocity governs the pedestrian trajectory. The optimal path computation involves solving a time-independent Eikonal equation, which is coupled to the system of ODEs. A hydrodynamic model is developed from this microscopic model via the mean-field limit.
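A generic social force system of this kind reads as follows (schematic; the notation is assumed here and not taken verbatim from the thesis):

\[
\dot{x}_i = v_i, \qquad m_i\,\dot{v}_i = m_i\,\frac{v^{\mathrm{des}}_i(x_i) - v_i}{\tau} + \sum_{j \neq i} F_{ij} + \sum_{W} F_{iW},
\]

where \(F_{ij}\) are pairwise social forces, \(F_{iW}\) are wall and obstacle forces, and the desired velocity \(v^{\mathrm{des}}_i\) points along the negative gradient of the optimal-path potential \(\phi\), obtained from an Eikonal equation of the form \(\lvert \nabla \phi(x) \rvert = 1/f(x)\) for a given speed function \(f\).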
To consider the interaction with moving obstacles in the domain, we model a set of kinematic equations for the obstacle motion. Two kinds of obstacles are considered: "passive" obstacles, which move along predefined trajectories and have only a one-way interaction with pedestrians, and "dynamic" obstacles, which have a feedback interaction with pedestrians and whose trajectories change dynamically. The coupled model of pedestrians and obstacles is used to discern pedestrian collision avoidance behaviour in different computational scenarios in a long rectangular domain. We observe that pedestrians avoid collisions through route choice strategies that involve changes in speed and path. We extend this model to consider the interaction between pedestrians and vehicular traffic. We model the interactions of vehicles following lane traffic based on the car-following approach. We observe how the deceleration and braking mechanism of vehicles is executed at pedestrian crossings depending on the right of way on the roads.
As a second objective, we study disease contagion in moving crowds. We consider the influence of crowd motion in a complex dynamic environment on the course of infection of pedestrians. A hydrodynamic model for multi-group pedestrian flow is derived from kinetic equations based on a social force model. It is coupled, along with an Eikonal equation, to a non-local SEIS contagion model for disease spread. Here, apart from the description of local contacts, the influence of contact times has also been modelled. We observe that the nature of the flow and the geometry of the domain lead to changes in density, which affect the contact time and, consequently, the rate of spread of infection.
Finally, the social force model is compared to a variable-speed, rational-behaviour pedestrian model. We derive a hierarchy of the heuristics-based model from microscopic to macroscopic scales and numerically investigate these models in different density scenarios. Various numerical test cases are considered, including uni- and bi-directional flows and scenarios with and without obstacles. We observe that in low-density scenarios, collision avoidance forces arising from the behavioural heuristics give valid results, whereas in high-density scenarios repulsive force terms are essential.
The numerical simulations of all the models are carried out using a mesh-free particle method based on least squares approximations. The mesh-free numerical framework provides an efficient and elegant way to handle complex geometric situations involving boundaries and stationary or moving obstacles.
This dissertation provides insights into the influences of individual and contextual factors on Technical and Vocational Education and Training (TVET) teachers' learning and professional development in Ethiopia. Specifically, this research focuses on identifying and determining the influences of teachers' self-perception as learners and professionals, and investigates the impact of the context, process, and content of their learning and experiences on their professional development. Knowledge of these factors and their impacts helps improve the learning and professional development of TVET teachers and their professionalization. The research provides answers to the following five research questions: (1) How do TVET teachers perceive themselves as active learners and as professionals, and what are the implications of their perceptions for their learning and development? (2) How do TVET teachers engage in learning and professional development activities? (3) What contextual factors facilitated or hindered the TVET teachers' learning and professional development? (4) Which competencies are critical for the TVET teachers' learning and professional development? (5) What actions need to be considered to enhance and sustain TVET teachers' learning and professional development in their context? The research results are significant not only to TVET teachers, but also to school leaders, TVET teacher training institutions, education experts and policy makers, researchers, and other stakeholders in the TVET sector. The theoretical perspectives adopted in this research are based on the systemic constructivist approach to professional development. An integrated approach to professional development requires that teachers' learning and development activities be treated as adult education based on the principles of constructivism. Professional development is considered a context-specific and long-term process in which teachers are trusted, respected, and empowered as professionals. Teachers' development activities are seen as largely collaborative, reflecting the social nature of learning. Schools that facilitate the learning and development of teachers exhibit the characteristics of a learning-organisation culture in which professional collaboration, collegiality, and shared leadership are practised. This research also draws on relevant viewpoints from studies and reports on vocational education and TVET teacher education programs and practices at international, continental, and national levels. The research objectives and the types of research questions in this study implied the use of a qualitative, inductive research approach as the research strategy. Primary data were collected from TVET teachers in four schools using one-on-one qualitative in-depth interviews. These data were analyzed using qualitative content analysis based on the inductive category development procedure; ATLAS.ti software was used to support the coding and categorization process. The findings show that most of the TVET teachers perceive themselves neither as professionals nor as active learners. These perceptions are found to be among the major barriers to their learning and development. Professional collaboration in the schools is minimal, and teaching is seen as an isolated individual activity, a secluded task for the teacher.
Self-directed learning initiatives and individual learning projects are not strongly evident. The predominantly teacher-centered approach used in TVET teacher education and professional development programs puts emphasis mainly on the development of technical competences and has limited the development of a range of competences essential to teachers' professional development. Moreover, factors such as the TVET school culture, society's perception of the teaching profession, economic conditions, and weak links with industry and business are among the major contextual factors hindering the TVET teachers' learning and professional development. A number of recommendations are put forward to improve the professional development of the TVET teachers. These include changes in TVET school culture, a paradigm shift in the approach to and practice of TVET teacher education, and the development of educational policies that support the professionalization of TVET teachers. Areas for further theoretical research and empirical enquiry are also suggested to support the learning and professional development of TVET teachers in Ethiopia.
Manipulating deformable linear objects - Vision-based recognition of contact state transitions
(1999)
A new and systematic approach to machine vision-based robot manipulation of deformable (non-rigid) linear objects is introduced. This approach reduces the computational needs by using a simple state-oriented model of the objects. These states describe the relation of the object with respect to an obstacle and are derived from the object image and its features. To this end, the object is segmented from a standard video frame using a fast segmentation algorithm. Several object features are presented which allow recognition of the object's state while it is being manipulated by the robot.
A new and systematic basic approach to force- and vision-based robot manipulation of deformable (non-rigid) linear objects is introduced. This approach reduces the computational needs by using a simple state-oriented model of the objects. These states describe the relation between the deformable object and rigid obstacles, and are derived from the object image and its features. We give an enumeration of possible contact states and discuss the main characteristics of each state. We investigate the performance of robust transitions between the contact states and derive criteria and conditions for each of the states and for two sensor systems, i.e., a vision sensor and a force/torque sensor. This results in a new and task-independent approach to the handling of deformable objects and in a sensor-based implementation of manipulation primitives for industrial robots; sensor processing thus proves an appropriate solution for our problem. Finally, we apply the concept of contact states and state transitions to the description of a typical assembly task. Experimental results show the feasibility of our approach: a robot performs several contact state transitions which can be combined to solve a more complex task.
A geoscientifically relevant wavelet approach is established for the classical (inner) displacement problem corresponding to a regular surface (such as the sphere, an ellipsoid, or the actual Earth's surface). Basic tools are the limit and jump relations of (linear) elastostatics. Scaling functions and wavelets are formulated within the framework of the vectorial Cauchy-Navier equation. Based on appropriate numerical integration rules, a pyramid scheme is developed providing a fast wavelet transform (FWT). Finally, multiscale deformation analysis is investigated numerically for the case of a spherical boundary.
The focus of this work has been to develop two families of wavelet solvers for the inner displacement boundary-value problem of elastostatics. Our methods are particularly suitable for deformation analysis corresponding to geoscientifically relevant (regular) boundaries like the sphere, an ellipsoid, or the actual Earth's surface. The first method, a spatial approach to wavelets on a regular (boundary) surface, is established for the classical (inner) displacement problem. Starting from the limit and jump relations of elastostatics, we formulate scaling functions and wavelets within the framework of the Cauchy-Navier equation. Based on numerical integration rules, a tree algorithm is constructed for fast wavelet computation. This method can be viewed as a first attempt at "short-wavelength modelling", i.e., high resolution of the fine structure of displacement fields. The second technique aims at a suitable wavelet approximation associated with Green's integral representation for the displacement boundary-value problem of elastostatics. The starting points are tensor product kernels defined on Cauchy-Navier vector fields. We arrive at scaling functions and a spectral approach to wavelets for the boundary-value problems of elastostatics associated with spherical boundaries. Again, a tree algorithm which uses a numerical integration rule on bandlimited functions is established to reduce the computational effort. For the numerical realization of both methods, multiscale deformation analysis is investigated for the geoscientifically relevant case of a spherical boundary using test examples. Finally, the applicability of our wavelet concepts is shown by considering the deformation analysis of a particular region of the Earth, viz. Nevada, using surface displacements provided by satellite observations. This represents a first step towards practical applications.
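For reference, the (homogeneous, isotropic) Cauchy-Navier equation underlying both methods reads, in standard notation with Lamé parameters \(\mu\) and \(\lambda\),

\[
\mu\,\Delta \mathbf{u} + (\lambda + \mu)\,\nabla(\nabla \cdot \mathbf{u}) = \mathbf{0},
\]

for the displacement field \(\mathbf{u}\) in the source-free region.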
In this work, a (Ti,Al,Si)N system was investigated. The main point was to study the possibility of obtaining nanocomposite coating structures by depositing multilayer films of TiN and AlSiN, in order to understand the relation between the mechanical properties (hardness, Young's modulus) and the microstructure (nanocrystalline with individual phases). Special attention was given to the effects of annealing at 600 °C on microstructural changes in the coatings. Surface hardness, elastic modulus, and the diffusion and composition of the multilayers were the test tools for comparing the different coated samples with and without annealing at 600 °C. To achieve this objective, a rectangular aluminum vacuum chamber with three unbalanced sputtering magnetrons for the deposition of thin film coatings from different materials was constructed. The chamber consists mainly of two chambers: the pre-vacuum chamber to load the workpiece, and the main vacuum chamber where the sputtering deposition of the thin film coatings takes place. The workpiece moves on a carriage travelling on rails between the two chambers to the magnetron positions, driven by stepper motors. The chambers are separated by a self-constructed rectangular gate operated manually from outside the chamber. The chamber was sealed for vacuum use with glue and screws. Therefore, different types of glue were tested, not only for their ability to form a uniform thin layer in the gap between the aluminum plates to seal the chamber for vacuum use, but also for the low outgassing rates that make them suitable for vacuum use. An epoxy was able to fulfil these tasks. The evacuation characteristics of the constructed chamber were improved by minimizing the outgassing rate of the inner surface. To this end, the throughput outgassing rate test method was used to compare the short-term (one hour) outgassing rates of samples of the two selected aluminum materials (A2017 and A5353). Different machining methods and treatments for the inner surface of the vacuum chamber were tested. Machining the surface of material A (A2017) with ethanol as coolant fluid reduced its outgassing rate by a factor of 6 compared with a non-machined surface of the same material. The reduction of the porous oxide layer on top of the aluminum surface by pickling with HNO3 acid, and its protection by producing another passive, non-porous oxide layer using an anodizing process, will protect the surface for a longer time and minimize the outgassing rates even under a humid atmosphere. The residual gas analyzer (RGA) test shows that more than 85% of the gases inside the test chamber were water vapour (H2O) and the rest were N2, H2, and CO, so a liquid-nitrogen water vapour trap can enhance the chamber pump-down process. As a result, it was possible to construct a chamber that can be pumped down using a turbomolecular pump (450 L/s) to the range of 1×10^-6 mbar within one hour of evacuation, where the chamber volume is 160 litres and the inner surface area is 1.6 m². This is a good base pressure for the sputtering deposition of hard thin film coatings. Multilayer thin film coatings were deposited to demonstrate that nanostructured thin films within the (Ti,Al,Si)N system could be prepared by reactive magnetron sputtering of multiple thin film layers of TiN and AlSiN.
Secondary neutral mass spectrometry (SNMS) of the test samples shows that complete diffusion between the different deposited thin film coating layers takes place in each sample, even at low substrate deposition temperature. The high magnetic flux of the unbalanced magnetrons and the high sputtering power produced a high ion-to-atom flux, which gives high mobility to the deposited atoms. The interactions between the highly mobile deposited atoms and the ion-to-atom flux were sufficient to enhance the diffusion between the different deposited thin layers. The XRD patterns for this system show that the structure of the formed mixture consists of two phases: one phase identified as bulk TiN, and another, unidentified amorphous phase, which can be SiNx or AlN or a combination of Ti-Al-Si-N. As a result, we were able to deposit nanocomposite coatings by the deposition of multilayers of TiN and AlSiN thin films using the constructed vacuum chamber.
Self-adaptation allows software systems to autonomously adjust their behavior at run-time by handling all possible operating states that violate the requirements of the managed system. This requires an adaptation engine that receives adaptation requests during the monitoring process of the managed system and responds with an automated and appropriate adaptation response. During the last decade, several engineering methods have been introduced to enable self-adaptation in software systems. However, these methods fail to address (1) run-time uncertainty, which hinders the adaptation process, and (2) the performance impact resulting from the complexity and large size of the adaptation space. This paper presents CRATER, a framework that builds an external adaptation engine for self-adaptive software systems. The adaptation engine, which is built on case-based reasoning, handles both of these challenges together. The paper is supported by an experiment illustrating the benefits of the framework. The experimental results show the potential of CRATER with respect to handling run-time uncertainty and adaptation "remembrance", which enhances performance for large adaptation spaces.
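As a rough illustration of the retrieve-and-reuse idea behind case-based reasoning in an adaptation engine, consider the sketch below. It is purely illustrative; the data structures, attribute names, and adaptation labels are my assumptions, not CRATER's actual API.

```python
from dataclasses import dataclass

@dataclass
class Case:
    state: dict       # monitored operating state, e.g. {"latency_ms": 120, "error_rate": 0.02}
    adaptation: str   # adaptation response that worked for this state (hypothetical labels)

def similarity(a: dict, b: dict) -> float:
    """Simple inverse-distance similarity over the shared numeric attributes."""
    keys = a.keys() & b.keys()
    dist = sum((a[k] - b[k]) ** 2 for k in keys) ** 0.5
    return 1.0 / (1.0 + dist)

def retrieve(case_base: list[Case], query: dict) -> Case:
    """Retrieve step of the CBR cycle: find the most similar remembered case."""
    return max(case_base, key=lambda c: similarity(c.state, query))

case_base = [
    Case({"latency_ms": 300, "error_rate": 0.10}, "add_replica"),
    Case({"latency_ms": 80,  "error_rate": 0.30}, "restart_component"),
]
# Reuse: apply the adaptation of the closest remembered case to a new request.
print(retrieve(case_base, {"latency_ms": 280, "error_rate": 0.12}).adaptation)
```

Remembering solved cases in this way is what avoids re-searching a large adaptation space for every new adaptation request.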
Postmortem Analysis of Decayed Online Social Communities: Cascade Pattern Analysis and Prediction
(2018)
Recently, many online social networks, such as MySpace, Orkut, and Friendster, have faced inactivity decay of their members, which contributed to the collapse of these networks. The reasons, mechanics, and prevention mechanisms of such inactivity decay are not fully understood. In this work, we analyze decayed and alive subwebsites from the Stack Exchange platform. The analysis mainly focuses on the inactivity cascades that occur among the members of these communities. We provide measures to understand the decay process and statistical analysis to extract the patterns that accompany the inactivity decay. Additionally, we predict cascade size and cascade virality using machine learning. The results of this work include a statistically significant difference in decay patterns between the decayed and the alive subwebsites. These patterns mainly concern cascade size, cascade virality, cascade duration, and cascade similarity. Additionally, the contributed prediction framework showed satisfactory prediction results compared to a baseline predictor. Supported by empirical evidence, the main findings of this work are: (1) there are significantly different decay patterns in the alive and the decayed subwebsites of Stack Exchange; (2) the cascades' node degrees contribute more to the decay process than the cascades' virality, which indicates that the expert members of the Stack Exchange subwebsites were mainly responsible for the activity or inactivity of those subwebsites; (3) the Statistics subwebsite is going through decay dynamics that may lead to it becoming fully decayed; (4) the decay process is not governed by only one network measure and is better described using multiple measures; (5) decayed subwebsites were originally less resilient to inactivity decay, unlike the alive subwebsites; and (6) a network's structure in the early stages of its evolution dictates the activity/inactivity characteristics of the network.
Learning From Networked-data: Methods and Models for Understanding Online Social Networks Dynamics
(2020)
Nowadays, people and systems created by people are generating an unprecedented amount of data. This data has brought us data-driven services with a variety of applications that affect people's behavior. One of these applications is the emergent online social networks as a means of communicating with each other, getting and sharing information, looking for jobs, and many other things. However, the tremendous growth of these online social networks has also led to many new challenges that need to be addressed. In this context, the goal of this thesis is to better understand the dynamics between the members of online social networks from two perspectives. The first perspective is to better understand the process and the motives underlying link formation in online social networks. We utilize external information to predict whether two members of an online social network are friends or not. Also, we contribute a framework for assessing the strength of friendship ties. The second perspective is to better understand the decay dynamics of online social networks resulting from the inactivity of their members. Hence, we contribute a model, methods, and frameworks for understanding the decay mechanics among the members, for predicting members' inactivity, and for understanding and analyzing inactivity cascades occurring during the decay. The results of this thesis are: (1) the link formation process is at least partly driven by interactions among members that take place outside the social network itself; (2) external interactions might help reduce the noise in social networks and help rank the strength of the ties in these networks; (3) inactivity dynamics can be modeled, predicted, and controlled using the models contributed in this thesis, which are based on network measures. The contributions and the results of this thesis can be beneficial in many respects. For example, improving the quality of a social network by introducing new meaningful links and removing noisy ones helps to improve the quality of the services provided by the social network, which, e.g., enables better friend recommendations and helps to eliminate fake accounts. Moreover, understanding the decay processes involved in the interaction among the members of a social network can help to prolong the engagement of these members. This is useful in designing more resilient social networks and can assist in finding influential members whose inactivity may trigger an inactivity cascade resulting in the potential decay of a network.
Building interoperation among separately developed software units requires checking their conceptual assumptions and constraints. However, eliciting such assumptions and constraints is time-consuming and challenging, as it requires analyzing each of the interoperating software units. To address this issue, we proposed a new conceptual interoperability analysis approach which aims at decreasing the analysis cost and the conceptual mismatches between the interoperating software units. In this report, we present the design of a planned controlled experiment for evaluating the effectiveness, efficiency, and acceptance of our proposed conceptual interoperability analysis approach. The design includes the study objectives, research questions, statistical hypotheses, and experimental design. It also provides the materials that will be used in the execution phase of the planned experiment.
Typically, software engineers implement their software according to the design of the software structure. Relations between classes and interfaces, such as method-call relations and inheritance relations, are essential parts of a software structure. Accordingly, analyzing several types of relations benefits the static analysis of the software structure. The tasks of this analysis include, but are not limited to: understanding (legacy) software, checking guidelines, improving product lines, finding structure, or re-engineering existing software. Graphs with multi-type edges are a possible representation for these relations, treating the relations as edges while nodes represent the classes and interfaces of the software. This multi-type edge graph can then be mapped to visualizations. However, the visualizations have to deal with the multiplicity of relation types and with scalability, while at the same time enabling software engineers to recognize visual patterns.
To advance the usage of visualizations for analyzing the static structure of software systems, I tracked different development phases of the interactive multi-matrix visualization (IMMV), presenting an extended user study at the end. In the extended user study, visual structures identified using IMMV compared to PNLV were determined and systematically classified into four categories: high degree, within-package edges, cross-package edges, and no edges. In addition to the structures found with these handy tools, other structures that are interesting for software engineers, such as cycles and hierarchical structures, need additional visualizations to display and investigate them. Therefore, an extended approach for graph layout was presented that improves the quality of the decomposition and the drawing of directed graphs according to their topology, based on rigorous definitions. The extension involves describing and analyzing the algorithms for decomposition and drawing in detail, giving polynomial time and space complexity. Finally, I handled the visualization of graphs with multi-type edges using small multiples, where each tile is dedicated to one edge type, utilizing the topological graph layout to highlight non-trivial cycles, trees, and DAGs for showing and analyzing the static structure of software. I applied this approach to four software systems to show its usefulness.
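To make the underlying data structure concrete, the following minimal sketch represents method-call and inheritance relations as a graph with multi-type edges and splits it into one tile per edge type, in the spirit of small multiples. It is illustrative only; the class names are invented, and networkx is assumed as the graph library.

```python
import networkx as nx

# Nodes are classes/interfaces; parallel edges each carry one relation type.
g = nx.MultiDiGraph()
g.add_edge("OrderService", "Repository", relation="method-call")
g.add_edge("OrderService", "AbstractService", relation="inheritance")
g.add_edge("OrderService", "Repository", relation="inheritance")  # parallel edge

# Small-multiples idea: build one subgraph (tile) per edge type.
for rel in ("method-call", "inheritance"):
    tile = nx.MultiDiGraph(
        (u, v, d) for u, v, d in g.edges(data=True) if d["relation"] == rel
    )
    print(rel, "->", list(tile.edges()))
```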
In the literature, there are at least two equivalent two-factor Gaussian models for the instantaneous short rate: the original two-factor Hull-White model (see [3]) and the G2++ model by Brigo and Mercurio (see [1]). Both models first specify a time-homogeneous two-factor short rate dynamics and then, by adding a deterministic shift function φ(·), fit the initial term structure of interest rates exactly. However, the resulting expressions are rather clumsy and unintuitive, which means that special care has to be taken for their correct numerical implementation.
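For concreteness, the G2++ dynamics take the following well-known form (as in Brigo and Mercurio, with parameters \(a, b, \sigma, \eta, \rho\)):

\[
r(t) = x(t) + y(t) + \varphi(t), \qquad dx(t) = -a\,x(t)\,dt + \sigma\,dW_1(t), \qquad dy(t) = -b\,y(t)\,dt + \eta\,dW_2(t),
\]

with \(x(0) = y(0) = 0\), instantaneous correlation \(dW_1\,dW_2 = \rho\,dt\), and \(\varphi(\cdot)\) the deterministic shift fitted to the initial term structure.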
In this thesis, we have dealt with two modeling approaches to credit risk, namely the structural (firm value) approach and the reduced-form approach. In the former, the firm value is modeled by a stochastic process, and the first time this process hits a given boundary defines the default time of the firm. In the existing literature, the stochastic process driving the firm value has generally been chosen to be a diffusion process. On the one hand, this makes it possible to obtain closed-form solutions for the pricing problems of credit derivatives; on the other hand, the optimal capital structure of a firm can be analysed by obtaining closed-form solutions for the firm's corporate securities, such as equity value, debt value, and total firm value; see Leland (1994). We have extended this approach by modeling the firm value as a jump-diffusion process. The choice of the jump-diffusion process was a crucial step in obtaining closed-form solutions for corporate securities. We therefore chose a jump-diffusion process with double-exponentially distributed jump heights, which enabled us to analyse the effects of jumps on the optimal capital structure of a firm. In the second part of the thesis, following the reduced-form models, we have assumed that default is triggered by the first jump of a Cox process. Further, following Schönbucher (2005), we have modeled the forward default intensity of a firm as a geometric Brownian motion and derived pricing formulas for credit default swap options in a more general setup than the one in Schönbucher (2005).
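A jump-diffusion of the kind described, with double-exponential jump heights, is typically written as follows (Kou-type notation, assumed here rather than taken from the thesis):

\[
\frac{dV_t}{V_{t-}} = \mu\,dt + \sigma\,dW_t + d\!\Bigl(\sum_{i=1}^{N_t}(Z_i - 1)\Bigr),
\]

where \(N\) is a Poisson process and the log jump sizes \(Y_i = \ln Z_i\) are i.i.d. double-exponentially distributed, with density \(f_Y(y) = p\,\eta_1 e^{-\eta_1 y}\,\mathbf{1}_{\{y \ge 0\}} + (1-p)\,\eta_2 e^{\eta_2 y}\,\mathbf{1}_{\{y < 0\}}\). The memorylessness of the exponential jump tails is what makes closed-form treatment of the first passage (default) time tractable.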
This paper deals with the handling of deformable linear objects (DLOs), such as hoses, wires, or leaf springs. It investigates usable features for the vision-based detection of a changing contact situation between a DLO and a rigid polyhedral obstacle and a classification of such contact state transitions. The result is a complete classification of contact state transitions and of the most significant features for each class. This knowledge enables reliable detection of changes in the DLO contact situation, facilitating implementation of sensor-based manipulation skills for all possible contact changes.
In the present work, the load-bearing behaviour of continuous steel-fibre-reinforced composite slabs is analysed. On the basis of experimental and computational investigations, two design models are developed. The experimental investigations on single-span and continuous steel-fibre-reinforced composite slabs provide insight into the load-bearing and deformation behaviour of the slabs. Both open trapezoidal and re-entrant profiled sheets are used. Conventional reinforcing steel is dispensed with entirely; the hogging moment is carried by the steel-fibre concrete alone. In four test series with a total of 18 tests, individual parameters such as different slab thicknesses, different profiled sheet geometries, and different steel-fibre concrete mixes are investigated. For the calculation and design, the verification procedures customary in composite construction are adopted and modified. The load-bearing contributions of the steel-fibre concrete are incorporated via stress-block approaches. Recalculation of the individual tests demonstrates the suitability of the procedures. For the individual verifications, design diagrams and tables are produced in parameter studies, enabling the practising engineer to design simply and safely. On the basis of the experimental results and the computational investigations, two possible design models are developed with which the load-bearing capacity of continuous steel-fibre-reinforced composite slabs can be verified. The verification can be carried out either with the elastic-plastic or the plastic-plastic procedure.
In this paper, we study the possibilities of sharing profit in combinatorial procurement auctions and exchanges. Bundles of heterogeneous items are offered by the sellers, and the buyers can then place bundle bids on sets of these items. That way, both sellers and buyers can express synergies between items and avoid the well-known risk of exposure (see, e.g., [3]). The reassignment of items to participants is known as the Winner Determination Problem (WDP). We propose solving the WDP using a set covering formulation, because profits are potentially higher than with the usual set partitioning formulation, and subsidies are unnecessary. The achieved benefit is then to be distributed amongst the participants of the auction, a process known as profit sharing. The literature on profit sharing provides various desirable criteria. We focus on three main properties we would like to guarantee: budget balance, meaning that no more money is distributed than profit was generated; individual rationality, which guarantees each player that participation does not lead to a loss; and the core property, which provides every subcoalition with enough money to keep it from separating. We characterize all profit sharing schemes that satisfy these three conditions by a monetary flow network and state necessary conditions on the solution of the WDP for the existence of such a profit sharing. Finally, we establish a connection to the famous VCG payment scheme [2, 8, 19] and the Shapley value [17].
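Schematically, with bundle bids \(b \in B\), bid values \(p_b\), and binary acceptance variables \(x_b\) (this notation is assumed here, not the paper's), the set covering variant of the WDP replaces the set partitioning constraint \(\sum_{b \ni i} x_b = 1\) by an inequality:

\[
\max_{x \in \{0,1\}^{|B|}} \; \sum_{b \in B} p_b\,x_b \quad \text{s.t.} \quad \sum_{b \,\ni\, i} x_b \ge 1 \quad \text{for every offered item } i.
\]

Since every feasible partition remains feasible under the covering constraint, the optimal objective value, and hence the profit to be shared, can only increase.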
In 2006, Jeffrey Achter proved that the distribution of divisor class groups of degree 0 of function fields with a fixed genus and the distribution of eigenspaces in symplectic similitude groups are closely related to each other. Gunter Malle proposed that there should be a similar correspondence between the distribution of class groups of number fields and the distribution of eigenspaces in certain matrix groups. Motivated by these results and suggestions, we study the distribution of eigenspaces corresponding to the eigenvalue one in some special subgroups of the general linear group over factor rings of rings of integers of number fields, and derive some conjectural statements about the distribution of \(p\)-parts of class groups of number fields over a base field \(K_{0}\). Our main interest lies in the case that \(K_{0}\) contains the \(p\)th roots of unity, because in this situation the \(p\)-parts of class groups seem to behave differently from what the popular conjectures of Henri Cohen and Jacques Martinet predict. In 2010, based on computational data, Malle succeeded in formulating a conjecture in the spirit of Cohen and Martinet for this case. Here, using our investigations of the distribution in matrix groups, we generalize Malle's conjecture to a more abstract level and establish theoretical backing for these statements.
Acidic zeolites like H-Y, H-ZSM-5, H-MCM-22 and H-MOR were found to be selective adsorbents for the removal of thiophene from toluene or n-heptane as solvent. The competitive adsorption of toluene influences the adsorption capacity for thiophene and is more predominant when high-alumina zeolites are used as adsorbents. This behaviour is also reflected by the results of the adsorption of thiophene on H-ZSM-5 zeolites with varied nSi/nAl ratios (viz. 13, 19 and 36) from toluene and n-heptane as solvents, respectively. UV-Vis spectroscopic results show that the oligomerization of thiophene leads to the formation of dimers and trimers on these zeolites. The oligomerization in acidic zeolites is regarded as dependent on the geometry of the pore system of the zeolites. Sulphur-containing compounds with more than one ring, viz. benzothiophene, which are also present in substantial amounts in certain hydrocarbon fractions, are not adsorbed on H-ZSM-5 zeolites. This is expected, as the diameter of the pore aperture of zeolite H-ZSM-5 is smaller than the molecular size of benzothiophene. Metal ion-exchanged FAU-type zeolites are found to be promising adsorbents for the removal of sulphur-containing compounds from model solutions. The introduction of Cu+, Ni2+, Ce3+, La3+ and Y3+ ions into zeolite Na+-Y by aqueous ion-exchange substantially improves the adsorption capacity for thiophene from toluene or n-heptane as solvent. More than the absolute content of Cu+ ions, the presence of Cu+ ions at the sites exposed to the supercages is believed to influence the adsorption of thiophene on Cu+-Y zeolite. It was shown experimentally for Cu+-Y and Ce3+-Y that the supercages present in the FAU zeolite allow access of bulkier sulphur-containing compounds (viz. benzothiophene, dibenzothiophene and dimethyl dibenzothiophene). These bulkier compounds compete with thiophene and are preferentially adsorbed on Cu+-Y zeolite. IR spectroscopic results revealed that the adsorption of thiophene on Na+-Y, Cu+-Y and Ni2+-Y is primarily a result of the interaction of thiophene via pi-complexation between the C=C double bonds of thiophene and the metal ions in the zeolite framework. A different mode of interaction of thiophene with Ce3+, La3+ and Y3+ metal ions was observed in the IR spectra of thiophene adsorbed on Ce3+-Y, La3+-Y and Y3+-Y zeolites, respectively. On these adsorbents, thiophene is believed to interact via a lone electron pair of the sulphur atom with the metal ions present in the adsorbent (M-S interaction). The experimental results show that there is a large difference in the thiophene adsorption capacities of pi-complexation adsorbents (like Cu+-Y and Ni2+-Y) between the model solution with toluene as solvent and the model solution with n-heptane as solvent. The lower capacity of these zeolites for the adsorption of thiophene from toluene than from n-heptane is a clear indication that toluene competes by interacting with the adsorbent in a way similar to thiophene. The difference in thiophene adsorption capacities is very low in the case of the adsorbents Ce3+-Y, La3+-Y and Y3+-Y, which are believed to interact with thiophene predominantly through a direct M3+-S bond (thiophene interacting with the metal ion via a lone pair of electrons). TG-DTA analysis was used to study the regeneration behaviour of the adsorbents.
Acidic zeolites can be regenerated by simply heating at 400 °C in a flow of nitrogen, whereas on the metal ion-exchanged zeolites thiophene is chemically adsorbed on the metal ion, and it is not possible to regenerate them by heating under an inert gas flow alone. The only way to regenerate these adsorbents is to burn off the adsorbate, which eventually brings about an undesired emission of SOx. The exothermic peaks appearing at different temperatures in the heat-flow profiles of Cu+-Y, Ce3+-Y, La3+-Y and Y3+-Y also indicate that two different types of interaction are present, as revealed by IR spectroscopy. One major difficulty in reducing the sulphur content of fuels to values below 10 ppm is the inability of the existing catalytic hydrodesulphurization technique to remove alkyl dibenzothiophenes, viz. 4,6-dimethyl dibenzothiophene. In the present study, Cu+-Y and Ce3+-Y were found to adsorb this compound from toluene to a certain extent. To meet the stringent regulations on sulphur content, selective adsorption by zeolites could be a valuable post-purification method downstream of the catalytic hydrodesulphurization unit.
Highly Automated Driving (HAD) vehicles represent complex and safety-critical systems. They are deployed in an open context, i.e., an intricate environment that undergoes continual change. The complexity of these systems and insufficiencies in sensing and understanding the open context may result in unsafe and uncertain behaviour. The safety-critical nature of HAD vehicles requires modelling the root causes of unsafe behaviour and their mitigation in order to argue a sufficient reduction of residual risk.
Standardization activities such as ISO 21448 provide guidelines on the Safety Of The Intended Functionality (SOTIF) and focus on the analysis of performance limitations under the influence of triggering conditions that can lead to hazardous behaviour. SOTIF references traditional safety analysis methods, e.g., Failure Mode and Effect Analysis (FMEA) and Fault Tree Analysis (FTA). These methods rest on certain assumptions, e.g., single-point failure in FMEA and independence of basic events in FTA. Moreover, these analyses are generally based on expert knowledge; data-based models or hybrid (expert and data) approaches are seldom practised. The resulting safety model is fixed, i.e., it is generally seen as a one-time artefact. The open-context environment may contain triggering conditions that are not evident to the expert; it also evolves over time, and new phenomena may emerge.
This thesis explores the applicability of traditional safety analysis techniques to provide safety models for HAD vehicles operating in the open context, in light of the modelling assumptions made by those techniques. Moreover, the incorporation of uncertainties into safety analysis models is explored. An explicit distinction between the inherent uncertainty of a probabilistic event (aleatory) and uncertainty due to lack of knowledge (epistemic) is made to formalize models for performing SOTIF analysis. A further distinction is made for conditions of complete ignorance, termed ontological uncertainty. The distinction is important because, for HAD vehicles operating in the open context, ontological uncertainty can never be completely disregarded.
This thesis proposes a novel SOTIF framework to model, estimate, and discover triggering conditions relevant to performance limitations. The framework provides the ability to model uncertainties while also offering a hybrid approach, i.e., supporting the inclusion of expert knowledge as well as data-driven engineering processes. Two representative algorithms are provided to support the framework; Bayesian networks (BN) and p-value hypothesis testing are utilised in this regard. The framework is applied to a real-world case study in which a LIDAR-based perception system is used as the vehicle detection system.
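To illustrate the hypothesis-testing side in the simplest terms, one can test whether a candidate triggering condition raises the failure rate of the perception system. The sketch below uses Fisher's exact test as a stand-in; the actual procedure and all counts are assumptions, not taken from the thesis.

```python
from scipy.stats import fisher_exact

# Contingency counts of perception failures with/without a candidate
# triggering condition (e.g. heavy rain). All numbers are invented.
#            failures  non-failures
table = [[18, 182],   # condition present
         [ 9, 791]]   # condition absent

# One-sided test: does the condition increase the failure rate?
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")
# A small p-value flags the candidate as a relevant triggering condition,
# e.g. to be represented in the Bayesian network of the safety model.
```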
Family businesses have a long tradition in Germany. Some world-famous German companies, such as Volkswagen AG, BMW Group, or Robert Bosch GmbH, are run by the founding family or its successors. Besides family businesses of this size, many other medium-sized, family-run companies in Germany are handed over or bequeathed to the next generation every year. Regardless of whether a deliberate succession plan is already in place or a sudden generational change occurs, the company assets are, as a rule, meant to remain with the family.
Succession management is often difficult, because succession planning requires not only identifying the strategic goals and the potential actors for continuing the business, but also preserving family harmony. In some cases, no family member is willing or suited to continue the business. In such cases, there is the option of placing the continuation in the hands of an experienced manager. For the duration of the continuation, a stake in the company assets is usually offered in order to give the manager an intrinsic motivation for the successful continuation of the family business and for increasing the company's value, and thereby also the value of his own stake.
By transferring company shares, the external manager becomes part of a family-dominated circle of shareholders. The question arises of how this transfer can be structured so that the family retains control over the shares and cannot be circumvented. Furthermore, the question arises of how far such control measures extend and how the interpretation of such clauses may change when the shareholders no longer come exclusively from the family as a homogeneous unit and no longer work personally in the company.
Dealing with information in modern times requires users to cope with hundreds of thousands of documents, such as articles, emails, Web pages, or news feeds. Among all information sources, the World Wide Web presents information seekers with the greatest challenges: it offers more text in natural language than anyone is capable of reading. The key idea of this research is to provide users with adaptable filtering techniques that support them in filtering out the specific information items they need. Its realization focuses on developing an Information Extraction system that adapts to a domain of concern by interpreting the formalized knowledge the domain contains. Utilizing the Resource Description Framework (RDF), the Semantic Web's formal language for exchanging information, allows information extractors to be extended to incorporate the given domain knowledge. Because of this, formal information items from the RDF source can be recognized in the text. The application of RDF allows a further investigation of operations on recognized information items, such as disambiguating them and rating their relevance. Switching between different RDF sources changes the application scope of the Information Extraction system from one domain of concern to another. An RDF-based Information Extraction system can be triggered to extract specific kinds of information entities by providing it with formal RDF queries in terms of the SPARQL query language.
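For illustration, this is roughly how such a query-driven extraction target could be specified (a minimal sketch using the rdflib library; the vocabulary, data, and naive matching are invented for the example and are not the system described here):

```python
from rdflib import Graph

# Toy RDF source describing a domain of concern (inline for the example).
g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:Kaiserslautern a ex:City ; ex:label "Kaiserslautern" .
ex:Trippstadt    a ex:City ; ex:label "Trippstadt" .
""", format="turtle")

# A SPARQL query of the kind that could trigger the extractor to look
# for entities of type ex:City in natural language text.
results = g.query("""
PREFIX ex: <http://example.org/>
SELECT ?label WHERE { ?city a ex:City ; ex:label ?label . }
""")

gazetteer = {str(row.label) for row in results}
text = "The institute is located between Kaiserslautern and Trippstadt."
print([w for w in gazetteer if w in text])   # naive recognition of known items
```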
Representing extracted information in RDF extends the coverage of the Semantic Web's information degree and provides a formal view on a text from the perspective of the RDF source. In detail, this work presents the extension of existing Information Extraction approaches by incorporating the graph-based nature of RDF. Pre-processing of RDF sources allows the extraction of statistical information models dedicated to supporting specific information extractors. These information extractors refine standard extraction tasks, such as Named Entity Recognition, by using the information provided by the pre-processed models. The post-processing of extracted information items enables these results to be represented in RDF format or as lists, which can then be ranked or filtered by relevance. Post-processing also comprises the enrichment of the originating natural language text sources with extracted information items by using annotations in RDFa format. The results of this research extend the state of the art of the Semantic Web. This work contributes approaches for computing customizable and adaptable RDF views on the natural language content of Web pages. Finally, due to the formal nature of RDF, machines can interpret these views, allowing developers to process the contained information in a variety of applications.
The ideal image of the European city, with its densely grown building fabric and its public spaces, stands as a synonym for 'urbanity' and influences planning thought and action to this day. Closely connected with it, associations with the agora and the forum keep surfacing; as the archetypes of the 'public' par excellence, they carry the myth of an ideal and democratic urban society that articulates and constitutes itself there. At the beginning of the 21st century, however, the question arises whether the traditional and proven models and images of public space have outlived themselves in the face of rapid social and information-technological change. Do the typical ideas of the city that rest on public space now carry only symbolic meaning? Are they shifting more and more into virtual space? 'The city of short distances', with its spatial mix, is no longer relevant and socially defining in the familiar way as a 'marketplace' for the exchange of information and goods. The city as a unified and homogeneous entity no longer exists. Instead, it is characterized by fragmentation and splintering, as shown, among others, by the diagnoses of Touraine ['Die Stadt - ein überholter Entwurf', 1996], Koolhaas ['Generic City', 1997], Sieverts ['Die Zwischenstadt', 1999], and Augé ['Orte und Nicht-Orte', 1998]. In parallel, cyberspace and 'virtual cities' [Rötzer, 1997] give rise to new forms of public space and publicness - parallel spaces to the real world. What effects these new spaces will have on people's lives and on the city cannot yet be foreseen. In the real world, the planning latitude of municipalities keeps shrinking, while the interdependencies triggered by, among other things, globalization, privatization, and the deregulation of public tasks grow ever more complex. Besides the internationally observable developments [shopping mall, New Urbanism, gated community], which seem to establish themselves across countries in slightly modified variants, socio-cultural trends in society also exert great influence on public space. Public space, as the link between the 'public' and the 'private', is increasingly exposed to the pressure of the 'experience and consumer society' and can therefore fulfill its society-wide function for the city only to a limited extent. The individualized and mobile society, with its changed and diversified life plans and values, additionally calls the traditional understanding of public space into question. The image of public space as a sphere of 21st-century society can no longer be met with a mythologizing notion of the agora. New approaches and a review of current developments are indispensable. At the same time, however, historical circumstances must also be taken into account in order to learn from them.
Dynamics of Excited Electrons in Copper and Ferromagnetic Transition Metals: Theory and Experiment
(2000)
Both theoretical and experimental results for the dynamics of photoexcited electrons at surfaces of Cu and the ferromagnetic transition metals Fe, Co, and Ni are presented. A model for the dynamics of excited electrons is developed, which is based on the Boltzmann equation and includes effects of photoexcitation, electron-electron scattering, secondary electrons (cascade and Auger electrons), and transport of excited carriers out of the detection region. From this we determine the time-resolved two-photon photoemission (TR-2PPE) signal. Thus a direct comparison of calculated relaxation times with experimental results by means of TR-2PPE becomes possible. The comparison indicates that the magnitudes of the spin-averaged relaxation time \(\tau\) and of the ratio \(\tau_\uparrow/\tau_\downarrow\) of majority and minority relaxation times for the different ferromagnetic transition metals result not only from density-of-states effects, but also from different Coulomb matrix elements \(M\). Taking \(M_{\mathrm{Fe}} > M_{\mathrm{Cu}} > M_{\mathrm{Ni}} = M_{\mathrm{Co}}\) we get reasonable agreement with experiments.
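Schematically, such a Boltzmann-type model for the occupation \(f_\sigma(E,t)\) of electrons with spin \(\sigma\) at energy \(E\) combines the stated contributions (a hedged sketch of the generic structure, not the paper's exact equations):
\[
\frac{\partial f_\sigma(E,t)}{\partial t}
= \left.\frac{\partial f_\sigma}{\partial t}\right|_{\mathrm{exc}}
+ \left.\frac{\partial f_\sigma}{\partial t}\right|_{e\text{-}e}
+ \left.\frac{\partial f_\sigma}{\partial t}\right|_{\mathrm{sec}}
+ \left.\frac{\partial f_\sigma}{\partial t}\right|_{\mathrm{transp}},
\qquad
\left.\frac{\partial f_\sigma}{\partial t}\right|_{e\text{-}e} \approx -\,\frac{f_\sigma - f_\sigma^{\mathrm{eq}}}{\tau_\sigma(E)},
\]
where the electron-electron term defines the energy- and spin-dependent relaxation time \(\tau_\sigma(E)\) probed by TR-2PPE.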
Hardware Contention-Aware Real-Time Scheduling on Multi-Core Platforms in Safety-Critical Systems
(2019)
While the computing industry has shifted from single-core to multi-core processors for performance gain, safety-critical systems (SCSs) still require solutions that enable their transition while guaranteeing safety, requiring no source-code modifications and substantially reducing re-development and re-certification costs, especially for legacy applications that are typically substantial. This dissertation considers the problem of worst-case execution time (WCET) analysis under contentions when deadline-constrained tasks in independent partitioned task set execute on a homogeneous multi-core processor with dynamic time-triggered shared memory bandwidth partitioning in SCSs.
Memory bandwidth in multi-core processors is shared across cores and is a significant cause of performance bottlenecks and of temporal variability of multiple orders of magnitude in tasks' execution times due to contentions in the memory sub-system. Further, there is a circular dependency not only between WCET and the CPU scheduling of other cores, but also between WCET and the memory bandwidth assigned to cores over time. Thus, there is a need for solutions that allow tailoring memory bandwidth assignments to workloads over time and computing safe WCETs. It is pragmatically infeasible to obtain WCET estimates from static WCET analysis tools for multi-core processors due to the sheer computational complexity involved.
We use synchronized periodic memory servers on all cores that regulate each core's maximum memory bandwidth based on the bandwidth allocated over time. First, we present a workload schedulability test for a known even-memory-bandwidth-assignment-to-active-cores over time, where the number of active cores represents the cores with a non-zero memory bandwidth assignment. Its computational complexity is similar to that of merge-sort. Second, we demonstrate, using a real certified safety-critical avionics application, how our method can preserve an existing application's single-core CPU schedule under contentions on a multi-core processor. It enables incremental certification using composability and requires no source-code modification.
Next, we provide a general framework to perform WCET analysis under dynamic memory bandwidth partitioning when changes in the memory-bandwidth-to-cores assignment are time-triggered and known. It provides a stall maximization algorithm that has a complexity similar to a concave optimization problem and efficiently implements the WCET analysis. Last, we demonstrate that dynamic memory assignments and WCET analysis using our method significantly improve schedulability compared to the state-of-the-art, using an Integrated Modular Avionics scenario.
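As a toy illustration of the underlying idea (a hedged sketch under strong simplifying assumptions; it is not the dissertation's analysis), one can bound a task's completion time by walking over a known time-triggered bandwidth assignment and charging each memory access the regulated service rate of its window:

```python
# Hypothetical, simplified model (placeholder names, not the dissertation's
# algorithm): time is divided into windows, each with a regulated memory
# service rate for this core. We conservatively bound the completion time of
# a task with `exec_time` units of pure computation and `num_accesses`
# memory accesses, assuming pending accesses are served at the window's rate
# before computation proceeds.

def bound_completion_time(exec_time: float, num_accesses: int,
                          windows: list) -> float:
    """windows: list of (length, accesses_served_per_time_unit), in order."""
    remaining_exec, remaining_acc, t = exec_time, float(num_accesses), 0.0
    for length, rate in windows:
        served = min(remaining_acc, rate * length) if rate > 0 else 0.0
        stall = served / rate if rate > 0 else 0.0   # time spent stalled
        compute = min(remaining_exec, length - stall)
        remaining_acc -= served
        remaining_exec -= compute
        t += length
        if remaining_acc <= 0 and remaining_exec <= 0:
            return t        # conservative: round up to the window boundary
    return float("inf")     # demand not satisfied by the given assignment

# Example: 2 time units of computation and 30 accesses under three windows.
print(bound_completion_time(2.0, 30, [(2.0, 10.0), (2.0, 5.0), (2.0, 10.0)]))
```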
In this paper we consider the stochastic primitive equations for geophysical flows subject to transport noise and turbulent pressure. Admitting very rough noise terms, the global existence and uniqueness of solutions to this stochastic partial differential equation are proven using stochastic maximal \(L^p\)-regularity, the theory of critical spaces for stochastic evolution equations, and global a priori bounds. Compared to other results in this direction, we do not need any smallness assumption on the transport noise which acts directly on the velocity field, and we also allow rougher noise terms. Adaptations to Stratonovich-type noise and, more generally, to variable viscosity and/or conductivity are discussed as well.
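In schematic form (a sketch of the general shape of such a system, with \(v\) the horizontal velocity, \(u\) the full velocity, \(p\) the pressure, \(\theta\) the temperature, and \(W^n\) the driving Brownian motions; coefficients and lower-order terms are suppressed), primitive equations with transport noise and turbulent pressure read
\[
\mathrm{d}v = \bigl(\Delta v - (u\cdot\nabla)v - \nabla_H p\bigr)\,\mathrm{d}t
+ \sum_{n\ge 1}\bigl((\sigma_n\cdot\nabla)v - \nabla_H \widetilde{p}_n\bigr)\,\mathrm{d}W^n_t,
\qquad \partial_z p = -\,\theta, \qquad \nabla\cdot u = 0,
\]
where the \(\nabla_H \widetilde{p}_n\) terms play the role of the turbulent pressure.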
The main theme of this thesis is graph coloring applications and defining sets in graph theory.
As in the case of block designs, finding defining sets seems to be a difficult problem, and no general results are available. Hence we confine ourselves to some special types of graphs, such as bipartite graphs and complete graphs.
In this work, four new concepts of defining sets are introduced:
• Defining sets for perfect (maximum) matchings
• Defining sets for independent sets
• Defining sets for edge colorings
• Defining sets for maximal (maximum) cliques
Furthermore, some algorithms to find and construct defining sets are introduced, and a review of some known kinds of defining sets in graph theory is included. Chapter 2 introduces the basic definitions and the relevant notation used in this work.
Chapter 3 discusses maximum and perfect matchings and a new concept of a defining set for perfect matchings.
Different kinds of graph colorings and their applications are the subject of chapter 4.
Chapter 5 deals with defining sets in graph coloring. New results are discussed along with already existing research results, and an algorithm is introduced that determines a defining set of a graph coloring.
In chapter 6, cliques are discussed, and an algorithm for the determination of cliques using their defining sets is presented. Several examples are included.
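To make the defining-set concept concrete in the coloring setting, the following brute-force sketch (illustrative Python, not one of the thesis algorithms) checks whether a set of precolored vertices determines a proper \(k\)-coloring uniquely:

```python
from itertools import product

# A set S of precolored vertices is a defining set of a proper k-coloring c
# if c is the ONLY proper k-coloring that agrees with c on S.

def proper(adj, coloring):
    return all(coloring[u] != coloring[v] for u in adj for v in adj[u] if u < v)

def extensions(adj, k, partial):
    free = [v for v in adj if v not in partial]
    for colors in product(range(k), repeat=len(free)):
        full = {**partial, **dict(zip(free, colors))}
        if proper(adj, full):
            yield full

def is_defining_set(adj, k, coloring, subset):
    partial = {v: coloring[v] for v in subset}
    exts = extensions(adj, k, partial)
    first = next(exts, None)
    return first == coloring and next(exts, None) is None

# The 5-cycle with a proper 3-coloring:
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
c = {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}
print(is_defining_set(adj, 3, c, {0, 1, 2}))     # False: two proper extensions
print(is_defining_set(adj, 3, c, {0, 1, 2, 3}))  # True: vertex 4 is forced
```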
In this paper we investigate the use of the sharp function known from functional analysis in image processing. The sharp function gives a measure of the variations of a function and can be used as an edge detector. We extend the classical notion of the sharp function for measuring anisotropic behaviour and give a fast anisotropic edge detection variant inspired by the sharp function. We show that these edge detection results are useful to steer isotropic and anisotropic nonlinear diffusion filters for image enhancement.
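For reference, the classical sharp function of a locally integrable \(f\) is
\[
f^{\sharp}(x) = \sup_{B \ni x} \frac{1}{|B|} \int_{B} \bigl|f(y) - f_B\bigr|\,\mathrm{d}y,
\qquad f_B = \frac{1}{|B|}\int_B f(y)\,\mathrm{d}y,
\]
where the supremum is taken over all balls \(B\) containing \(x\); large values indicate strong local variation, which is what makes \(f^{\sharp}\) usable as an edge detector. Roughly speaking, the anisotropic extension replaces the balls by direction-sensitive averaging sets.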
Hydrogels are covalently or ionically cross-linked, hydrophilic three-dimensional
polymer networks, which exist in our bodies in a biological gel form such as the vitreous
humour that fills the interior of the eyes. Poly(N-isopropylacrylamide) (poly(NIPAAm))
hydrogels are attracting more interest in biomedical applications because, besides others, they
exhibit a well-defined lower critical solution temperature (LCST) in water, around 31–34°C,
which is close to the body temperature. This is considered to be of great interest in drug
delivery, cell encapsulation, and tissue engineering applications. In this work, the
poly(NIPAAm) hydrogel is synthesized by free radical polymerization. Hydrogel properties
and the dimensional changes accompanied with the volume phase transition of the
thermosensitive poly(NIPAAm) hydrogel were investigated in terms of Raman spectra,
swelling ratio, and hydration. The thermal swelling/deswelling changes that occur at
different equilibrium temperatures and in different solutions (phenol, ethanol, propanol,
and sodium chloride) were investigated on the basis of the Raman spectra. In addition, Raman spectroscopy has
been employed to evaluate the diffusion aspects of bovine serum albumin (BSA) and phenol
through the poly(NIPAAm) network. The determination of the mutual diffusion coefficient,
\(D_{mut}\) for hydrogels/solvent system was achieved successfully using Raman spectroscopy at
different solute concentrations. Moreover, the mechanical properties of the hydrogel, which
were investigated by uniaxial compression tests, were used to characterize the hydrogel and to
determine the collective diffusion coefficient through the hydrogel. The solute release coupled
with shrinking of the hydrogel particles was modelled with a bi-dimensional diffusion model
with moving boundary conditions. The influence of the variable diffusion coefficient is
observed and leads to a better description of the kinetic curve in the case of large
deformations around the LCST. Good agreement between experimental and calculated data
was obtained.
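Schematically, the model class described above can be written as a diffusion equation with a concentration-dependent coefficient on a shrinking domain (a sketch of the general form, with \(c\) the solute concentration and \(R(t)\) the moving particle boundary; the exact formulation of this work may differ):
\[
\frac{\partial c}{\partial t} = \nabla\cdot\bigl(D(c)\,\nabla c\bigr), \qquad 0 \le r \le R(t),
\]
where the boundary position \(R(t)\) follows the swelling or shrinking of the hydrogel particle.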
Optical Character Recognition (OCR) systems play an important role in the digitization of data acquired as images from a variety of sources. Although the area is very well explored for Latin languages, some of the languages based on the Arabic cursive script are not yet explored. This is due to many factors, most importantly the unavailability of proper datasets and the complexities posed by cursive scripts. The Pashto language is one such language, which needs considerable exploration towards OCR. In order to develop such an OCR system, this thesis provides a pioneering study that explores deep learning for the Pashto language in the field of OCR.
The Pashto language is spoken by more than 50 million people across the world, and it is an active medium for both oral and written communication. It is associated with a rich literary heritage and contains a huge written collection. These written materials present contents of simple to complex nature, and layouts from hand-scribed to printed text. The Pashto language presents mainly two types of complexities: (i) generic complexities of cursive script, and (ii) complexities specific to the Pashto language. Generic complexities are cursiveness, context dependency, and breaker character anomalies, as well as space anomalies. Pashto-specific complexities are variations in shape for a single character and shape similarity for some of the additional Pashto characters. Existing research in the area of Arabic OCR did not lead to an end-to-end solution for the mentioned complexities and therefore could not be generalized to build a sophisticated OCR system for Pashto.
The contribution of this thesis spans three levels: the conceptual level, the data level, and the practical level. At the conceptual level, we have deeply explored the Pashto language and identified those characters which are responsible for the challenges mentioned above. At the data level, a comprehensive dataset is introduced containing real images of hand-scribed contents. The dataset is manually transcribed and covers the most frequent layout patterns associated with the Pashto language. The practical-level contribution provides a bridge, in the form of a complete Pashto OCR system, and connects the outcomes of the conceptual- and data-level contributions. The practical contribution comprises skew detection, text-line segmentation, feature extraction, classification, and post-processing. The OCR module is further strengthened by using the deep learning paradigm to recognize Pashto cursive script within the framework of Recurrent Neural Networks (RNN). The proposed Pashto text recognition is based on the Long Short-Term Memory (LSTM) network and realizes a character recognition rate of 90.78% on real hand-scribed Pashto images. All these contributions are integrated into an application to provide a flexible and generic end-to-end Pashto OCR system.
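As an illustration of this recognition setup, the following minimal sketch (PyTorch; the feature dimension, layer sizes, and alphabet size are placeholders, not the thesis configuration) trains a bidirectional LSTM with the CTC loss, the standard pairing for segmentation-free cursive text-line recognition:

```python
import torch
import torch.nn as nn

NUM_CLASSES = 60 + 1  # placeholder alphabet size + 1 for the CTC blank symbol

class LineRecognizer(nn.Module):
    def __init__(self, feat_dim=48, hidden=128):
        super().__init__()
        # Bidirectional LSTM reads the per-column features of a line image.
        self.lstm = nn.LSTM(feat_dim, hidden, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, NUM_CLASSES)

    def forward(self, x):                    # x: (batch, timesteps, feat_dim)
        out, _ = self.lstm(x)
        return self.fc(out).log_softmax(-1)  # per-timestep class log-probs

model = LineRecognizer()
ctc = nn.CTCLoss(blank=0)

x = torch.randn(4, 100, 48)                  # 4 dummy line images, 100 timesteps
targets = torch.randint(1, NUM_CLASSES, (4, 20))
logp = model(x).permute(1, 0, 2)             # CTCLoss expects (T, batch, classes)
loss = ctc(logp, targets,
           input_lengths=torch.full((4,), 100, dtype=torch.long),
           target_lengths=torch.full((4,), 20, dtype=torch.long))
loss.backward()                              # a standard training step follows
```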
The impact of this thesis is not specific to the Pashto language only; it is also beneficial to other cursive languages such as Arabic, Urdu, and Persian. The main reason is the Pashto character set, which is a superset of the Arabic, Persian, and Urdu character sets. Therefore, the conceptual contribution of this thesis provides insight and proposes solutions to almost all generic complexities associated with the Arabic, Persian, and Urdu languages. For example, the anomaly caused by breaker characters, which is shared among some 70 languages that mainly use the Arabic script, is deeply analyzed. This thesis presents a solution to this issue that is equally beneficial to almost all Arabic-like languages.
The scope of this thesis has two important aspects. The first is its social impact, i.e., how a society may benefit from it. The main advantages are to bring historical and almost vanished documents to life and to ensure the opportunity to explore, analyze, translate, share, and understand the contents of the Pashto language globally. The second is the advancement and exploration of the technical aspects, because this thesis empirically explores the recognition challenges that are solely related to the Pashto language, both regarding the character set and the materials which present such complexities. Furthermore, the conceptual and practical background of this thesis regarding the complexities of the Pashto language is very beneficial for OCR of other cursive languages.
This thesis presents a novel, generic framework for information segmentation in document images.
A document image contains different types of information, for instance, text (machine printed/handwritten), graphics, signatures, and stamps.
It is necessary to segment information in documents so that such segmented information can be processed only when required in automatic document processing workflows.
The main contribution of this thesis is the conceptualization and implementation of an information segmentation framework that is based on part-based features.
The generic nature of the presented framework makes it applicable to a variety of documents (technical drawings, magazines, administrative, scientific, and academic documents) digitized using different methods (scanners, RGB cameras, and hyper-spectral imaging (HSI) devices).
A highlight of the presented framework is that it does not require large training sets, rather a few training samples (for instance, four pages) lead to high performance, i.e., better than previously existing methods.
In addition, the presented framework is simple and can be adapted quickly to new problem domains.
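To give a flavor of the part-based idea, here is a hedged sketch (Python with OpenCV; ORB keypoints and a k-NN classifier serve as stand-ins for the framework's actual features and classifier, and the random images are stand-ins for real labeled snippets) in which local parts are described and classified independently, which is why a few labeled pages can already cover many part instances:

```python
import cv2
import numpy as np

# Sketch: describe local parts with ORB keypoints, label each descriptor with
# the class of the snippet it came from (0 = signature, 1 = printed text),
# then classify the keypoints of a new page and keep the positive ones.
rng = np.random.default_rng(0)
signature_snippet = (rng.random((200, 400)) * 255).astype(np.uint8)
text_snippet = (rng.random((200, 400)) * 255).astype(np.uint8)
page = np.hstack([signature_snippet, text_snippet])

orb = cv2.ORB_create(nfeatures=500)

def descriptors(img):
    kps, des = orb.detectAndCompute(img, None)
    return kps, (des if des is not None else np.empty((0, 32), np.uint8))

train_des, train_lbl = [], []
for img, label in [(signature_snippet, 0), (text_snippet, 1)]:
    _, des = descriptors(img)
    train_des.append(des.astype(np.float32))
    train_lbl += [label] * len(des)

knn = cv2.ml.KNearest_create()
knn.train(np.vstack(train_des), cv2.ml.ROW_SAMPLE,
          np.array(train_lbl, np.float32).reshape(-1, 1))

# Inference: classify each keypoint of a full page; with real snippets the
# descriptors are discriminative, unlike this random-noise toy.
kps, des = descriptors(page)
_, results, _, _ = knn.findNearest(des.astype(np.float32), k=5)
signature_points = [kp.pt for kp, r in zip(kps, results.ravel()) if r == 0]
print(f"{len(signature_points)} of {len(kps)} keypoints classified as signature parts")
```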
This thesis is divided into three major parts on the basis of document digitization method (scanned, hyper-spectral imaging, and camera captured) used.
In the area of scanned document images, three specific contributions have been realized.
The first of them is in the domain of signature segmentation in administrative documents.
In some workflows, it is very important to check the document authenticity before processing the actual content.
This can be done based on the available seal of authenticity, e.g., signatures.
However, signature verification systems expect a pre-segmented signature image, while signatures are usually part of a document.
To use signature verification systems on document images, it is necessary to first segment signatures in documents.
This thesis shows that the presented framework can be used to segment signatures in administrative documents.
The system based on the presented framework is tested on a publicly available dataset, where it outperforms the state-of-the-art methods and successfully segments all signatures, while fewer than half of the found signatures are false positives.
This shows that it can be applied in practice.
The second contribution in the area of scanned document images is segmentation of stamps in administrative documents.
A stamp also serves as a seal of document authenticity.
However, the location of a stamp on the document can be more arbitrary than that of a signature, depending on the person sealing the document.
This thesis shows that a system based on our generic framework is able to extract stamps of any arbitrary shape and color.
The evaluation of the presented system on a publicly available dataset shows that it is also able to segment black stamps (that were not addressed in the past) with a recall and precision of 83% and 73%, respectively.
Furthermore, to segment colored stamps, this thesis presents a novel feature set based on intensity gradients, which is able to extract unseen, colored, arbitrarily shaped, textual as well as graphical stamps, and outperforms the state-of-the-art methods.
The third contribution in the scanned document images is in the domain of information segmentation in technical drawings (architectural floorplans, maps, circuit diagrams, etc.), which usually contain a large amount of graphics and comparatively few textual components. Further, in technical drawings, text often overlaps with graphics.
Thus, automatic analysis of technical drawings uses text/graphics segmentation as a pre-processing step.
This thesis presents a method based on our generic information segmentation framework that is able to detect the text, which is touching graphical components in architectural floorplans and maps.
Evaluation of the method on a publicly available dataset of architectural floorplans shows that it is able to extract almost all touching text components with precision and recall of 71% and 95%, respectively.
This means that almost all of the touching text components are successfully extracted.
In the area of hyper-spectral document images, two contributions have been realized.
Unlike normal three-channel RGB images, hyper-spectral images usually have multiple channels that range from the ultraviolet to the infrared region, including the visible region.
First, this thesis presents a novel automatic method for signature segmentation from hyper-spectral document images (240 spectral bands between 400 and 900 nm).
The presented method is based on a part-based key point detection technique, which does not use any structural information, but relies only on the spectral response of the document regardless of ink color and intensity.
The presented method is capable of segmenting (overlapping and non-overlapping) signatures from varying backgrounds, such as printed text, tables, stamps, and logos.
Importantly, the presented method can extract signature pixels and not just the bounding boxes.
This is essential when signatures overlap with text and/or other objects in the image. Second, this thesis presents a new dataset comprising 300 documents scanned using a high-resolution hyper-spectral scanner. Evaluation of the presented signature segmentation method on this hyper-spectral dataset shows that it is able to extract signature pixels with a precision and recall of 100% and 79%, respectively.
Further contributions have been made in the area of camera-captured document images. A major problem in the development of Optical Character Recognition (OCR) systems for camera-captured document images is the lack of labeled camera-captured document image datasets. First, this thesis presents a novel, generic method for automatic ground truth generation/labeling of document images. The presented method builds large-scale (i.e., millions of images) datasets of labeled camera-captured/scanned documents without any human intervention. The method is generic and can be used for automatic ground truth generation of (scanned and/or camera-captured) documents in any language, e.g., English, Russian, Arabic, or Urdu. The evaluation of the presented method on two different datasets in English and Russian shows that 99.98% of the images are correctly labeled in every case.
Another important contribution in the area of camera captured document images is the compilation of a large dataset comprising 1 million word images (10 million character images), captured in a real camera-based acquisition environment, along with the word and character level ground truth. The dataset can be used for training as well as testing of character recognition systems for camera-captured documents. Various benchmark tests are performed to analyze the behavior of different open source OCR systems on camera captured document images. Evaluation results show that the existing OCRs, which already get very high accuracies on scanned documents, fail on camera captured document images.
Using the presented camera-captured dataset, a novel character recognition system is developed which is based on a variant of recurrent neural networks, i.e., Long Short-Term Memory (LSTM), and outperforms all of the existing OCR engines on camera-captured document images with an accuracy of more than 95%.
Finally, this thesis provides details on various tasks that have been performed in the area closely related to information segmentation. This includes automatic analysis and sketch based retrieval of architectural floor plan images, a novel scheme for online signature verification, and a part-based approach for signature verification. With these contributions, it has been shown that part-based methods can be successfully applied to document image analysis.
Development of Fermentation Strategies for the Material Use of Renewable Raw Materials
(2022)
Brewer's spent grain is an important representative of a renewable raw material, as it is a low-priced by-product of the brewing process that accumulates in large quantities every year. In the present work, brewer's spent grain from seven different brewing recipes, both from in-house production and of industrial origin, was analyzed and classified with respect to the underlying brewing processes. In addition, the spent grain was separated by pressing into two separate material streams: a liquid and a solid fraction. Bioprocesses were established for both fractions in order to convert, on the one hand, the liquid substrate (spent grain press juice) to lactic acid with a lactic acid bacterium (Lactobacillus delbrueckii subsp. lactis) and, on the other hand, the solid substrate (spent grain residue) to ethanol and acetic acid with a lignocellulolytic, mixed-acid fermenting strain (Cellulomonas uda). Furthermore, a kinetic model was set up that could predict, among other things, the lactic acid formation and the cell growth of L. delbrueckii subsp. lactis for three spent grain press juices of different brewing recipes, i.e., with different nutrient compositions, in a simultaneous saccharification and fermentation. Moreover, the fermentation strategies developed for the utilization of the spent grain press juice and the spent grain residue, as well as the underlying process monitoring and control strategies, could be transferred to fermentations with the same organisms but with the substrate meadow cuttings, another renewable raw material.
The advent of heterogeneous many-core systems has increased the spectrum
of achievable performance from multi-threaded programming. As the processor components become more distributed, the cost of synchronization and
communication needed to access the shared resources increases. Concurrent
linearizable access to shared objects can be prohibitively expensive in a high
contention workload. Though there are various mechanisms (e.g., lock-free
data structures) to circumvent the synchronization overhead in linearizable
objects, it still incurs performance overhead for many concurrent data types.
Moreover, many applications do not require linearizable objects and apply
ad-hoc techniques to eliminate synchronous atomic updates.
In this thesis, we propose the Global-Local View Model. This programming model exploits the heterogeneous access latencies in many-core systems.
In this model, each thread maintains different views on the shared object: a
thread-local view and a global view. As the thread-local view is not shared,
it can be updated without incurring synchronization costs. The local updates
become visible to other threads only after the thread-local view is merged
with the global view. This scheme improves the performance at the expense
of linearizability.
Besides the weak operations on the local view, the model also allows strong
operations on the global view. Combining operations on the global and the
local views, we can build data types with customizable consistency semantics
on the spectrum between sequential and purely mergeable data types. Thus
the model provides a framework that captures the semantics of Multi-View
Data Types. We discuss a formal operational semantics of the model. We
also introduce a verification method to verify the correctness of the implementation of several multi-view data types.
Frequently, applications require updating shared objects in an “all-or-nothing” manner. Therefore, the mechanisms to synchronize access to individual objects are not sufficient. Software Transactional Memory (STM)
is a mechanism that helps the programmer to correctly synchronize access to
multiple mutable shared data by serializing the transactional reads and writes.
But under high contention, serializable transactions incur frequent aborts and
limit parallelism, which can lead to severe performance degradation.
Mergeable Transactional Memory (MTM), proposed in this thesis, allows accessing multi-view data types within a transaction. Instead of aborting
and re-executing the transaction, MTM merges its changes using the data-type
specific merge semantics. Thus it provides a consistency semantics that allows
for more scalability even under contention. The evaluation of our prototype
implementation in Haskell shows that mergeable transactions outperform serializable transactions even under low contention while providing a structured
and type-safe interface.
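A minimal sketch of the global-local idea (illustrative Python rather than the thesis' Haskell prototype; all names are ours): each thread updates a thread-local view without synchronization, and only the explicit merge operation touches the shared global view:

```python
import threading

class MergeableCounter:
    """Toy multi-view counter: weak local updates, strong global merge."""

    def __init__(self):
        self._global = 0
        self._lock = threading.Lock()
        self._local = threading.local()

    def increment(self):              # weak operation: thread-local, no sync
        self._local.delta = getattr(self._local, "delta", 0) + 1

    def merge(self):                  # strong operation: publish the local view
        delta = getattr(self._local, "delta", 0)
        with self._lock:
            self._global += delta
        self._local.delta = 0

    def read_global(self):            # strong read of the global view
        with self._lock:
            return self._global

counter = MergeableCounter()

def worker():
    for _ in range(100_000):
        counter.increment()           # no contention on the hot path
    counter.merge()                   # one synchronized step per thread

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.read_global())          # 400000, despite unsynchronized increments
```

Addition is commutative, so the merge order between threads does not matter; this is exactly the kind of data-type-specific merge semantics that weakens linearizability in exchange for scalability.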
Towards A Non-tracking Web
(2016)
Today, many publishers (e.g., websites, mobile application developers) commonly use third-party analytics services and social widgets. Unfortunately, this scheme allows these third parties to track individual users across the web, creating privacy concerns and leading to reactions to prevent tracking via blocking, legislation and standards. While improving user privacy, these efforts do not consider the functionality third-party tracking enables publishers to use: to obtain aggregate statistics about their users and increase their exposure to other users via online social networks. Simply preventing third-party tracking without replacing the functionality it provides cannot be a viable solution; leaving publishers without essential services will hurt the sustainability of the entire ecosystem.
In this thesis, we present alternative approaches to bridge this gap between privacy for users and functionality for publishers and other entities. We first propose a general and interaction-based third-party cookie policy that prevents third-party tracking via cookies, yet enables social networking features for users when wanted, and does not interfere with non-tracking services for analytics and advertisements. We then present a system that enables publishers to obtain rich web analytics information (e.g., user demographics, other sites visited) without tracking the users across the web. While this system requires no new organizational players and is practical to deploy, it necessitates the publishers to pre-define answer values for the queries, which may not be feasible for many analytics scenarios (e.g., search phrases used, free-text photo labels). Our second system complements the first system by enabling publishers to discover previously unknown string values to be used as potential answers in a privacy-preserving fashion and with low computation overhead for clients as well as servers. These systems suggest that it is possible to provide non-tracking services with (at least) the same functionality as today’s tracking services.
We present a constructive theory for locally supported approximate identities on the unit ball in \(\mathbb{R}^3\). The uniform convergence of the convolutions of the derived kernels with an arbitrary continuous function \(f\) to \(f\), i.e. the defining property of an approximate identity, is proved. Moreover, an explicit representation for a class of such kernels is given. The original publication is available at www.springerlink.com
We consider a linearized kinetic BGK equation and the associated acoustic system on a network.
Coupling conditions for the macroscopic equations are derived from the kinetic conditions via an asymptotic analysis near the nodes of the network.
This analysis leads to the consideration of a fixpoint problem involving the solutions of kinetic half-space problems.
This work extends the procedure developed in [13], where coupling conditions for a simplified BGK model have been derived.
Numerical comparisons between different coupling conditions
confirm the accuracy of the proposed approximation.
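For orientation, a linearized BGK model on each edge of the network has the schematic form (not the paper's exact scaled equations)
\[
\partial_t f + v\,\partial_x f = \frac{1}{\varepsilon}\bigl(M[f] - f\bigr),
\]
where \(M[f]\) is the (linearized) Maxwellian carrying the moments of \(f\) and \(\varepsilon\) the scaled mean free path; the coupling conditions arise from resolving kinetic half-space problems at each node in the limit \(\varepsilon \to 0\).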
This research work focuses on the generation of a high-resolution digital surface model featuring complex urban surface characteristics in order to enrich the database for runoff simulations of urban drainage systems. The discussion of global climate change and its possible consequences has taken centre stage over the last decade. Global climate change has triggered more erratic weather patterns, causing severe and unpredictable rainfall events in many parts of the world. The incidence of more frequent rainfall has led to the problem of increased flooding in urban areas. The increased property values of urban structures and threats to people's personal safety have hastened the demand for a detailed urban drainage simulation model for accurate flood prediction. Although the 2D hydraulic modelling approach has been in practice in rural floodplains for quite a long time, its use in urban floodplains is still in its infancy. The reason is mainly the lack of a high-resolution topographic model describing urban surface characteristics properly.
High resolution surface data describing the hydrologic and hydraulic properties of complex urban areas are the prerequisite for more accurately describing and simulating flood water movement and thereby taking adequate measures against urban flooding. Airborne LiDAR (light detection and ranging) is an efficient way of generating a high resolution Digital Surface Model (DSM) of any study area. Processing the high-density and large volume of unstructured LiDAR data manually is a difficult and time-consuming task when generating fine resolution spatial databases. The application of robust algorithms for processing this massive volume of data can significantly reduce the data processing time and thereby increase the degree of automation as well as the accuracy.
This research work presents a number of techniques pertaining to processing, filtering and classification of LiDAR point data in order to achieve higher degree of automation and accuracy towards generating a high resolution urban surface model. This research work also describes the use of ancillary datasets such as aerial images and topographic maps in combination with LiDAR data for feature detection and surface characterization. The integration of various data sources facilitates detailed modelling of street networks and accurate detection of various urban surface types (e.g. grasslands, bare soil and impervious surfaces).
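As a small illustration of the kind of filtering step mentioned above, the following hedged sketch (Python/SciPy; grid size, window size, and threshold are placeholder assumptions, and morphological opening is just one of several standard ground-filtering techniques) separates ground from non-ground cells on a rasterized height grid:

```python
import numpy as np
from scipy.ndimage import grey_opening

# Sketch: classify rasterized LiDAR heights into ground / non-ground.
# A grey-scale morphological opening removes objects (buildings, trees)
# narrower than the structuring window; cells far above the opened
# surface are classified as non-ground.

rng = np.random.default_rng(0)
dsm = rng.uniform(0.0, 0.2, size=(100, 100))          # gently varying terrain
dsm[40:60, 40:60] += 8.0                              # a 20x20-cell "building"

opened = grey_opening(dsm, size=(25, 25))             # window wider than building
non_ground = (dsm - opened) > 1.0                     # height threshold in metres

print(f"non-ground cells detected: {int(non_ground.sum())}")  # roughly the 400 building cells
```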
While the accurate characterization of various surface types contributes to better modelling of rainfall runoff processes, the LiDAR-derived fine-resolution DSM serves as input to 2D hydraulic models and makes it possible to simulate surface flooding scenarios in cases where the sewer systems are surcharged.
Thus, this research work develops high resolution spatial databases aimed at improving the accuracy of hydrologic and hydraulic databases of urban drainage systems. These databases are then given as input to standard flood simulation software in order to: 1) test the suitability of the databases for running the simulation; 2) assess the performance of the hydraulic capacity of urban drainage systems; and 3) predict and visualize the surface flooding scenarios in order to take necessary flood protection measures.
The goal of this work is to develop statistical natural language models and processing techniques
based on Recurrent Neural Networks (RNN), especially the recently introduced Long Short-
Term Memory (LSTM). Due to their adapting and predicting abilities, these methods are more
robust and easier to train than traditional methods, i.e., word lists and rule-based models. They
improve the output of recognition systems and make them more accessible to users for browsing
and reading. These techniques are required especially for historical books, which might take
years of effort and huge costs to transcribe manually.
The contributions of this thesis are several new methods that offer high performance and accuracy. First, an error model for improving recognition results is designed. As
a second contribution, a hyphenation model for difficult transcription for alignment purposes
is suggested. Third, a dehyphenation model is used to classify the hyphens in noisy transcription. The fourth contribution is using LSTM networks for normalizing historical orthography.
A size normalization alignment is implemented to equal the size of strings, before the training
phase. Using the LSTM networks as a language model to improve the recognition results is
the fifth contribution. Finally, the sixth contribution is a combination of Weighted Finite-State
Transducers (WFSTs), and LSTM applied on multiple recognition systems. These contributions
will be elaborated in more detail.
Context-dependent confusion rules are a new technique to build an error model for Optical
Character Recognition (OCR) corrections. The rules are extracted from the OCR confusions
which appear in the recognition outputs and are translated into edit operations, e.g., insertions,
deletions, and substitutions using the Levenshtein edit distance algorithm. The edit operations
are extracted in a form of rules with respect to the context of the incorrect string to build an
error model using WFSTs. The context-dependent rules assist the language model in finding the
best candidate corrections. They avoid the calculations that occur in searching the language
model and also enable the language model to correct incorrect words by using context-dependent
confusion rules. The context-dependent error model is applied to the University of
Washington III (UWIII) dataset and the Urdu Nastaleeq script dataset. It improves the OCR
results from an error rate of 1.14% to an error rate of 0.68%. It performs better than the
state-of-the-art single-rule-based approach, which returns an error rate of 1.0%.
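To illustrate how such confusion rules can be extracted, here is a hedged sketch (Python, using difflib's edit alignment in place of the Levenshtein algorithm; it does not reproduce the WFST machinery) that collects substitution, insertion, and deletion rules with one character of left context:

```python
from difflib import SequenceMatcher
from collections import Counter

# Sketch: align OCR output with ground truth and collect edit operations
# together with one character of left context in the OCR string.

def confusion_rules(ocr: str, truth: str):
    rules = Counter()
    sm = SequenceMatcher(None, ocr, truth, autojunk=False)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "equal":
            continue
        left = ocr[i1 - 1] if i1 > 0 else "^"   # "^" marks line start
        rules[(left, ocr[i1:i2], truth[j1:j2])] += 1
    return rules

pairs = [("Tbe quick brown f0x", "The quick brown fox"),
         ("tbe end", "the end")]
rules = Counter()
for ocr_line, gt_line in pairs:
    rules += confusion_rules(ocr_line, gt_line)

for (left, wrong, right), n in rules.most_common():
    print(f"after '{left}': '{wrong}' -> '{right}'  (seen {n}x)")
```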
This thesis describes a new, simple, fast, and accurate system for generating correspondences
between real scanned historical books and their transcriptions. The alignment has many challenges: first, the transcriptions might contain modifications and layout variations compared to the
original book. Second, the recognition of historical books suffers from misrecognition and segmentation errors, which make the alignment more difficult, especially since line breaks and pages will
not have the same correspondences. Adapted WFSTs are designed to represent the transcription. The WFSTs process Fraktur ligatures and adapt the transcription with a hyphenation
model that allows the alignment with respect to the varieties of the hyphenated words in the line
breaks of the OCR documents. In this work, several approaches are implemented to be used for
the alignment such as: text-segments, page-wise, and book-wise approaches. The approaches
are evaluated on a dataset of historical documents in German calligraphic (Fraktur) script from the “Wanderungen durch die Mark Brandenburg” volumes (1862-1889). The text-segmentation approach
returns an error rate of 2.33% without using a hyphenation model and an error rate of 2.0%
using a hyphenation model. Dehyphenation methods are presented to remove the hyphen from
the transcription. They provide the transcription in a readable and reflowable format to be used
for alignment purposes. We consider the task as a classification problem and classify the hyphens
from the given patterns as hyphens for line breaks, compound words, or noise. The methods are
applied to clean and noisy transcriptions in different languages. The decision tree classifier
performs best on the UWIII dataset, with an accuracy of 98%; it achieves 97%
on the Fraktur script.
A new method for normalizing historical OCRed text using LSTM is implemented for different texts, ranging from Early New High German (14th to 16th centuries) to modern forms in New
High German, applied to the Luther Bible. It performed better than the rule-based word-list
approaches. It provides a transcription for various purposes such as part-of-speech tagging and
n-grams. Two new techniques are also presented for aligning the OCR results and normalizing the
string size, using Character-Epsilons or Appending-Epsilons. They allow deletion and insertion at the appropriate position in the string. In normalizing historical wordforms to modern
wordforms, the accuracy of LSTM on seen data is around 94%, while the state-of-the-art combined rule-based method returns 93%. On unseen data, LSTM returns 88% and the combined
rule-based method returns 76%. In normalizing modern wordforms to historical wordforms, the
LSTM delivers the best performance and returns 93.4% on seen data and 89.17% on unknown
data.
In this thesis, a deep investigation has been carried out on constructing high-performance language
models for improving recognition systems. A new method to construct a language model
using LSTM is designed to correct OCR results. The method is applied to the UWIII and Urdu
script. The LSTM approach outperforms the state-of-the-art, especially for unseen tokens
during training. On the UWIII dataset, the LSTM reduces the OCR error rate from
1.14% to 0.48%. On the Urdu Nastaleeq script dataset, the LSTM reduces the error rate
from 6.9% to 1.58%.
Finally, the integration of multiple recognition outputs can give higher performance than a
single recognition system. Therefore, a new method for combining the results of OCR systems is
explored using WFSTs and LSTM. It uses multiple OCR outputs and votes for the best output
to improve the OCR results. It performs better than the ISRI tool and the pairwise multiple-sequence approach. The purpose is to provide correct transcription
so that it can be used for digitizing books, linguistics purposes, N-grams, and part-of-speech
tagging. The method consists of two alignment steps. First, two recognition systems are aligned
using WFSTs. The transducers are designed to be more flexible and compatible with the different symbols in line and page breaks to avoid the segmentation and misrecognition errors.
The LSTM model then is used to vote the best candidate correction of the two systems and
improve the incorrect tokens which are produced during the first alignment. The approaches
are evaluated on OCR outputs from the English UWIII and the historical German Fraktur datasets,
which are obtained from state-of-the-art OCR systems. The experiments show that the error
rate of ISRI voting is 1.45%, the error rate of the pairwise multiple-sequence approach is 1.32%, the
error rate of the line-to-page alignment is 1.26%, and the LSTM approach performs best with an error
rate of 0.40%.
The purpose of this thesis is to contribute methods providing correct transcriptions corresponding to the original book. This is considered to be the first step towards an accurate and
more effective use of the documents in digital libraries.
One of the biggest social issues in mature societies such as Europe and Japan
is the aging population and the declining birth rate. These societies face a serious
problem with the retirement of expert workers such as doctors and engineers,
especially in sectors like medicine and industry that require a long time to train experts, where the retirement or injury of experts is a serious problem. Technology to support the training and assessment of skilled workers (such as doctors and manufacturing
workers) is strongly required by society. Although there are some solutions for
this problem, most of them are video-based, which violates the privacy of the subjects.
Furthermore, they are not easy to deploy due to the need for large training data.
This thesis provides a novel framework to recognize, analyze, and assess human
skills with minimum customization cost. The presented framework tackles this problem
in two different domains, industrial setup and medical operations of catheter-based
cardiovascular interventions (CBCVI).
In particular, the contributions of this thesis are four-fold. First, it proposes an
easy-to-deploy framework for human activity recognition based on a zero-shot learning
approach, which learns basic actions and objects. The model recognizes
unseen activities as combinations of basic actions learned in a preliminary way and the objects involved. Therefore, it is completely configurable by the user and can be used to detect completely new activities.
Second, a novel gaze-estimation model for attention driven object detection task is
presented. The key features of the model are: (i) usage of the deformable convolutional
layers to better incorporate spatial dependencies of different shapes of objects and
backgrounds, and (ii) formulation of the gaze-estimation problem in two different ways, as a
classification as well as a regression problem. We combine both formulations using a
joint loss that incorporates both the cross-entropy and the mean-squared error in
order to train our model. This reduced the model's error from 6.8 when using only
the cross-entropy loss to 6.4 for the joint loss.
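A minimal sketch of such a joint loss (illustrative PyTorch; the bin discretization and the weighting factor are placeholder assumptions, not the thesis configuration): the gaze angle is treated as a distribution over discrete bins for the cross-entropy term, while the expected angle under the softmax feeds the mean-squared-error term:

```python
import torch
import torch.nn as nn

NUM_BINS, BIN_WIDTH = 60, 3.0   # placeholder: 60 bins of 3 degrees each

ce = nn.CrossEntropyLoss()
mse = nn.MSELoss()
bin_centers = torch.arange(NUM_BINS) * BIN_WIDTH + BIN_WIDTH / 2

def joint_loss(logits, target_angle, alpha=1.0):
    """Combine classification (cross-entropy over angle bins) with
    regression (MSE on the expected angle under the softmax)."""
    target_bin = (target_angle / BIN_WIDTH).long().clamp(0, NUM_BINS - 1)
    cls_loss = ce(logits, target_bin)
    expected_angle = (logits.softmax(-1) * bin_centers).sum(-1)
    reg_loss = mse(expected_angle, target_angle)
    return cls_loss + alpha * reg_loss

logits = torch.randn(8, NUM_BINS, requires_grad=True)  # dummy model output
target = torch.rand(8) * NUM_BINS * BIN_WIDTH          # dummy gaze angles
loss = joint_loss(logits, target)
loss.backward()
print(float(loss))
```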
The third contribution of this thesis targets the quantification of the quality of
actions using wearable sensors. To address the variety of scenarios, we have targeted two
possibilities: a) both expert and novice data are available, and b) only expert data is available,
a quite common case in safety-critical scenarios.
Both of the developed methods from these scenarios are deep learning based. In the
first one, we use autoencoders with a one-class SVM, and in the second one we use
Siamese networks. These methods allow us to encode the expert's expertise and to learn
the differences between novice and expert workers. This enables quantification of the
performance of the novice in comparison to the expert worker.
The fourth contribution explicitly targets medical practitioners and provides a
methodology for novel gaze-based temporal-spatial analysis of CBCVI data. The developed
methodology allows continuous registration and analysis of gaze data to study
the visual X-ray image processing (XRIP) strategies of expert operators in live-case scenarios and may assist in transferring experts' reading skills to novices.
Numerical Algorithms in Algebraic Geometry with Implementation in Computer Algebra System SINGULAR
(2011)
Polynomial systems arise in many applications: robotics, kinematics, chemical kinetics,
computer vision, truss design, geometric modeling, and many others. Many polynomial
systems have solutions sets, called algebraic varieties, having several irreducible
components. A fundamental problem of the numerical algebraic geometry is to decompose
such an algebraic variety into its irreducible components. The witness point sets are
the natural numerical data structure to encode irreducible algebraic varieties.
Sommese, Verschelde and Wampler represented the irreducible algebraic decomposition of
an affine algebraic variety \(X\) as a union of finite disjoint sets \(\cup_{i=0}^{d}W_i=\cup_{i=0}^{d}\left(\cup_{j=1}^{d_i}W_{ij}\right)\) called numerical irreducible decomposition. The \(W_i\) correspond to the pure i-dimensional components, and the \(W_{ij}\) represent the i-dimensional irreducible components. The numerical irreducible decomposition is implemented in BERTINI.
We modify this concept using, in part, Gröbner bases, triangular sets, local dimension, and
the so-called zero sum relation. We present in the second chapter the corresponding
algorithms and their implementations in SINGULAR. We give some examples and timings,
which show that the modified algorithms are more efficient if the number of variables is not
too large. For a large number of variables BERTINI is more efficient.
Leykin presented an algorithm to compute the embedded components of an algebraic variety
based on the concept of the deflation of an algebraic variety.
Depending on the modified algorithm mentioned above, we will present in the third chapter an
algorithm and its implementation in SINGULAR to compute the embedded components.
The irreducible decomposition of algebraic varieties allows us to formulate in the fourth
chapter some numerical algebraic algorithms.
In the last chapter we present two SINGULAR libraries. The first library is used to compute
the numerical irreducible decomposition and the embedded components of an algebraic variety.
The second library contains the procedures implementing the algorithms of the fourth chapter: testing
inclusion and equality of two algebraic varieties, computing the degree of a pure i-dimensional
component, and computing the local dimension.
The Context and Its Importance: In safety and reliability analysis, the amount of information generated by Minimal Cut Set (MCS) analysis is large.
The Top Level event (TLE) that is the root of the fault tree (FT) represents a hazardous state of the system being analyzed.
MCS analysis helps in analyzing the fault tree (FT) qualitatively, and quantitatively when accompanied by quantitative measures.
The information shows the bottlenecks in the fault tree design leading to identifying weaknesses of the system being examined.
Safety analysis (containing the MCS analysis) is especially important for critical systems, where harm can be done to the environment or to humans, causing injuries or even death, during the system's use.
Minimal Cut Set (MCS) analysis is performed using computers and generates a large amount of information.
This phase is called MCS analysis I in this thesis.
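For context, the computation in MCS analysis I can be illustrated with a toy top-down, MOCUS-style expansion of a small fault tree (an illustrative sketch; real safety tools use far more efficient algorithms):

```python
from itertools import product

# Fault tree: gates map to ("AND"|"OR", [children]); leaves are basic events.
TREE = {
    "TLE": ("OR",  ["G1", "BE3"]),
    "G1":  ("AND", ["BE1", "G2"]),
    "G2":  ("OR",  ["BE2", "BE3"]),
}

def cut_sets(node):
    if node not in TREE:                       # basic event
        return [frozenset([node])]
    gate, children = TREE[node]
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":                           # union of the children's cut sets
        return [cs for sets in child_sets for cs in sets]
    # AND: every combination of one cut set per child, merged together
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimize(sets):                            # keep only the minimal cut sets
    return [s for s in sets
            if not any(other < s for other in sets)]

mcs = minimize(cut_sets("TLE"))
print(sorted(sorted(s) for s in mcs))
# [['BE1', 'BE2'], ['BE3']]  -- {BE1, BE3} is absorbed by the smaller {BE3}
```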
The information is then analyzed by the analysts to determine possible issues and to improve the design of the system regarding its safety as early as possible.
This phase is called MCS analysis II in this thesis.
The goal of my thesis was to develop interactive visualizations to support MCS analysis II of one fault tree (FT).
The Methodology: As safety visualization (in this thesis, Minimal Cut Set analysis II visualization) is an emerging field, and no complete checklist of Minimal Cut Set analysis II requirements and gaps was available from the perspective of visualization and interaction capabilities,
I conducted multiple studies using different methods with different data sources (i.e., triangulation of methods and data) to determine these requirements and gaps before developing and evaluating visualizations and interactions supporting Minimal Cut Set analysis II.
Thus, the following approach was taken in my thesis:
1- First, a triangulation of mixed methods and data sources was conducted.
2- Then, four novel interactive visualizations and one novel interaction widget were developed.
3- Finally, these interactive visualizations were evaluated both objectively and subjectively (compared to multiple safety tools),
from the point of view of users and developers of the safety tools that perform MCS analysis I with respect to their degree of support for MCS analysis II, and from the point of view of non-domain people, using empirical strategies.
The Spiral tool supports analysts with different kinds of vision, i.e., full vision and the color deficiencies protanopia, deuteranopia, and tritanopia. It supports 100 out of 103 (97%) of the requirements obtained from the triangulation, and it fills 37 out of 39 (95%) of the gaps. Its usability was rated high (better than their best currently used tools) by the users of the safety and reliability tools (RiskSpectrum, ESSaRel, FaultTree+, and a self-developed tool) and at least similar to the best currently used tools from the point of view of the CAFTA tool developers. Its quality regarding its degree of support for MCS analysis II was higher compared to the FaultTree+ tool. The time spent discovering the critical MCSs in a problem of 540 MCSs (with a worst case of all MCSs having equal order) was less than a minute, while achieving 99.5% accuracy. The scalability of the Spiral visualization was above 4000 MCSs for a comparison task. The Dynamic Slider reduces the interaction movements by up to 85.71% compared to previous sliders and solves their overlapping-thumb issues. In addition, the tool provides the 3D model view of the system being analyzed; the ability to change the coloring of MCSs according to the color vision of the user; the selection of a BE (i.e., multi-selection of MCSs), so that the BEs' NoO and quality can be observed; two interaction speeds for panning and zooming in the MCS, BE, and model views; and a MCS tab, a BE tab, and a physical tab for supporting analyses starting from the MCSs, the BEs, or the physical parts. It combines MCS analysis results with the model of an embedded system, enabling analysts to directly relate safety information to the corresponding parts of the system being analyzed, and it provides an interactive mapping between the textual information of the BEs and MCSs and the parts related to the BEs.
Verifications and Assessments: I evaluated all visualizations and the interaction widget both objectively and subjectively, and finally evaluated the final Spiral visualization tool, also both objectively and subjectively, regarding its perceived quality and its degree of support for MCS analysis II.
The success of transformations depends on a great many factors. In the present work, a number of selected factors were taken up and validated. The present work does not provide a conclusive assessment of the success factors described in the literature.
In an organizational context, change processes in most cases affect social relationships, whether at the individual, team, or overall organizational level. The larger, more incisive, more profound, or more radical the change, the harder it is to foresee its outcome and, in particular, its prospects of success. According to the research results, in addition to the validated success factors, previous experiences also play an important role. An organization can largely steer its previous experiences itself, reflect on them, and learn from them. The individual change experiences of employees, by contrast, are hard to grasp and can at best be worked through with individual support.
In principle, it can be confirmed that the selected and validated measures can be employed promisingly for the positive development of the organizational culture. Besides shaping the organizational culture, organizations should develop and foster a fundamental affinity for change. The interviews confirm what is postulated in the literature: the organizational culture is the basis of collaboration in organizations and must therefore not be neglected. All interviewees regard the organizational culture as a very important (if not the most important) element of the organization.
In practice, culture development is perceived as a side effect of other interventions. No clear statement was made as to how the culture can be developed in a particular direction.
This research explores the development of web-based reference software for the
characterisation of surface roughness for two-dimensional surface data. Reference software used for the verification of surface characteristics makes the evaluation methods easier for clients to check. The algorithms used in this software
are based on international ISO standards. Most software used in industrial measuring
instruments may give variations in the calculated parameters due to numerical differences in the
calculation. Such variations can be verified using the proposed reference software.
The evaluation of surface roughness is carried out in four major steps: data capture, data
alignment, data filtering, and parameter calculation. This work walks through each of these steps,
explaining how surface profiles are evaluated by the pre-processing steps called fitting and
filtering. The analysis process is then followed by parameter evaluation according to the DIN EN
ISO 4287 and DIN EN ISO 13565-2 standards to extract important information from the
profile to characterise surface roughness.
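As a small illustration of the filtering and parameter-calculation steps, the following hedged sketch (Python/SciPy; the Gaussian low-pass stands in for the ISO profile filter, and the cutoff and sampling step are placeholders) computes the ISO 4287 amplitude parameters Ra and Rq on a synthetic profile:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Synthetic primary profile: fine roughness riding on a long-wave form.
dx = 0.5e-6                                   # sampling step: 0.5 micrometres
x = np.arange(8000) * dx
profile = (2e-6 * np.sin(2 * np.pi * x / 800e-6)     # waviness component
           + 0.3e-6 * np.sin(2 * np.pi * x / 10e-6)) # roughness component

# Separate waviness from roughness with a Gaussian low-pass (cutoff 0.08 mm).
cutoff = 80e-6
sigma = cutoff / dx * np.sqrt(np.log(2) / (2 * np.pi**2))  # ISO-style sigma
waviness = gaussian_filter1d(profile, sigma)
roughness = profile - waviness                # the roughness profile r(x)

# ISO 4287 amplitude parameters computed on the roughness profile:
Ra = np.mean(np.abs(roughness))               # arithmetic mean deviation
Rq = np.sqrt(np.mean(roughness**2))           # root-mean-square deviation
print(f"Ra = {Ra*1e6:.3f} um, Rq = {Rq*1e6:.3f} um")
```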
This work deals with the multifaceted field of tax loss deduction, namely the concrete design of the loss carryforward. In addition to a constitutional assessment of the relevant provisions, the author places particular emphasis on the question of how § 10d EStG should be designed in the sense of a coherent loss deduction system. The upheavals of the corona crisis have also triggered lively legislative activity in the area under investigation. The author takes this as an occasion to examine the pandemic-related amendments separately for their lawfulness as well as their economic effectiveness, and to question the compatibility of the new rules with the existing system of norms.
Aflatoxins, a group of mycotoxins produced by various mold species within the genus Aspergillus, have been extensively investigated for their potential to contaminate food and feed, rendering them unfit for consumption. Nevertheless, the role of aflatoxins as environmental contaminants in soil, which represents their natural habitat, remains a relatively unexplored area of aflatoxin research. This knowledge gap can be attributed, in part, to the methodological challenges associated with detecting aflatoxins in soil. The main objective of this PhD project was to develop and validate an analytical method that allows monitoring of aflatoxins in soil, and to scrutinize the mechanisms and extent of the occurrence of aflatoxins in soil, the processes governing their dissipation, and their impact on the soil microbiome and associated soil functions. By utilizing an efficient extraction solvent mixture comprising acetonitrile and water, coupled with an ultrasonication step, recoveries of 78% to 92% were achieved, enabling reliable determination of trace levels in soil ranging from 0.5 to 20 µg kg\(^{-1}\). However, in a field trial conducted in a high-risk model region for aflatoxin contamination in Sub-Saharan Africa, no aflatoxins were detected using this procedure, underscoring the complexities of field monitoring. These challenges encompassed rapid degradation, spatial heterogeneity, and seasonal fluctuations in aflatoxin occurrence. Degradation experiments revealed the importance of microbial and photochemical processes in the dissipation of aflatoxins in soil, with half-lives of 20 to 65 days. The rate of dissipation was found to be influenced by soil properties, most notably soil texture, and by the initial concentration of aflatoxins in the soil. An exposure study provided evidence that aflatoxins do not pose a substantial threat to the soil microbiome, encompassing microbial biomass, activity, and catabolic functionality. This was particularly evident in clayey soils, where the toxicity of aflatoxins diminished significantly due to their strong binding to clay minerals. However, several critical questions remain unanswered, emphasizing the necessity for further research to attain a more comprehensive understanding of the ecological importance of aflatoxins. Future research should prioritize the challenges associated with field monitoring of aflatoxins, elucidate the mechanisms responsible for the dissipation of aflatoxins in soil during microbial and photochemical degradation, and investigate the ecological consequences of aflatoxins in regions heavily affected by them, taking into account the interactions between aflatoxins and environmental and anthropogenic stressors. Addressing these questions contributes to a comprehensive understanding of the environmental impact of aflatoxins in soil, ultimately contributing to more effective strategies for aflatoxin management in agriculture.
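For orientation, reporting half-lives presumes (approximately) first-order dissipation kinetics: with \(C_0\) the initial concentration and \(k\) the rate constant,
\[
C(t) = C_0\,e^{-kt}, \qquad t_{1/2} = \frac{\ln 2}{k},
\]
so the observed half-lives of 20 to 65 days correspond to rate constants \(k\) between roughly 0.035 and 0.011 per day.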