KLUEDO RSS Feed: KLUEDO Dokumente/documents
https://kluedo.ub.uni-kl.de/index/index/
Wed, 25 Mar 2015 11:35:53 +0100

Audio Interface for the Zedboard
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4034
This paper describes an audio interface written in VHDL that connects the ADAU1761 audio codec on the Zedboard to the Zynq PL. Audio signals can be received in stereo from the line-in jack and/or transmitted to the headphone-out jack.

Stefan Scholl (other)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4034
Wed, 25 Mar 2015 11:35:53 +0100

Modern dehydrogenative amination reactions
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4037
Nitrogen is preponderant in nature. Found in its simplest form as a diatomic gas in the air, as well as in elaborate molecules such as the DNA double helix, this element is indisputably essential for life. Indeed, nitrogen is omnipresent in all metabolic pathways.
With the advent of green chemistry, researchers have attempted to functionalize arenes without pre-functionalization of the latter for C-C bond formation. Why not C-N bond formation?
We investigated new oxidative amination reactions by cross-dehydrogenative coupling. Guided by atom economy and green processes, our objectives were: (1) to obviate pre-activation or pre-oxidation of both the C-H coupling partner and the N-aminating agent, and (2) to avoid the use of chelating directing groups.
We achieved C-N bond formation for several classes of amines. First, we describe the reactivity of a cyclic secondary amine, carbazole, in the presence of catalytic amounts of ruthenium(II) and copper(II) to build the challenging C-N bond between two carbazoles. The initial mechanistic experiments are presented and discussed.
We then describe the more challenging hetero-coupling between diarylamines and carbazoles. The new ruthenium(II)/copper(II) catalytic system enables the ortho-N-carbazolation of diarylamines. This reaction, performed under mild conditions (O2 as the terminal oxidant), displays an unusual intramolecular N-H••N interaction in this novel class of compounds.
Finally, we present a surprising metal-free C-N bond formation between ubiquitous phenols and phenothiazines. Initially conducted in the presence of transition metals (Ru(II)/Cu(II)), this reaction proved to be efficient with only cumene and O2. These components suggest a mechanism initiated by a Hock process. An initial infrared analysis points to a strong intramolecular O-H••N interaction in the resulting products.
These first elements of reactivity, developed within the laboratory for “modern dehydrogenative amination reactions”, are presented and discussed.
Marie-Laure Louillat Habermeyer (doctoral thesis)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4037
Wed, 25 Mar 2015 11:25:07 +0100

Certification-Cognizant Mixed-Criticality Scheduling in Time-Triggered Systems
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4030
In embedded systems, there is a trend of integrating several different functionalities on a common platform. This has been enabled by increasing processing power and the rise of integrated systems-on-chip.
The composition of safety-critical and non-safety-critical applications results in mixed-criticality systems. Certification Authorities (CAs) demand the certification of safety-critical applications with strong confidence in the execution time bounds. As a consequence, CAs use conservative assumptions in the worst-case execution time (WCET) analysis which result in more pessimistic WCETs than the ones used by designers. The existence of certified safety-critical and non-safety-critical applications can be represented by dual-criticality systems, i.e., systems with two criticality levels.
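As a hedged illustration of this dual-criticality view, the sketch below models a job carrying one WCET estimate per criticality level; the class and attribute names are assumptions for illustration, not notation from the thesis.

    from dataclasses import dataclass

    # Minimal dual-criticality job model (illustrative sketch).
    # Each job carries two WCET estimates: the designer's (LO) and the
    # certification authority's more pessimistic one (HI).
    @dataclass
    class DualCriticalityJob:
        name: str
        criticality: str   # "LO" or "HI"
        wcet_lo: float     # designer's WCET estimate
        wcet_hi: float     # CA's conservative estimate, wcet_hi >= wcet_lo
        deadline: float

        def budget(self, system_mode: str) -> float:
            """Execution budget assumed for this job in the given mode."""
            if system_mode == "HI" and self.criticality == "HI":
                return self.wcet_hi
            return self.wcet_lo

    jobs = [
        DualCriticalityJob("control", "HI", wcet_lo=2.0, wcet_hi=3.5, deadline=10.0),
        DualCriticalityJob("logging", "LO", wcet_lo=1.0, wcet_hi=1.0, deadline=20.0),
    ]

    # A switch to HI mode is typically triggered when a HI job overruns its
    # LO budget; schedulability must then hold under the pessimistic budgets.
    print([j.budget("HI") for j in jobs])  # [3.5, 1.0]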
In this thesis, we focus on the scheduling of mixed-criticality systems which are subject to certification. Scheduling policies cognizant of the mixed-criticality nature of the systems and of the certification requirements are needed for efficient and effective scheduling. Furthermore, we aim at reducing certification costs to allow faster modification and upgrading as well as less error-prone certification. Besides certification aspects, the requirements of different operational modes pose challenging problems for the scheduling process. On top of all these problems, schedulers must keep a low runtime overhead for efficient execution.
The presented solutions are centered around time-triggered systems, which feature a low runtime overhead. We present a transformation that includes event-triggered activities, represented by sporadic tasks, already in the offline scheduling process. This transformation can also be applied to periodic tasks to shorten schedule tables, which reduces certification costs. These results feed into our method for constructing schedule tables, which creates two schedule tables to fulfill the requirements of dual-criticality systems using mode changes at runtime. Finally, we present a scheduler based on the slot-shifting algorithm for mixed-criticality systems. In a first version, the method schedules dual-criticality jobs without the need for mode changes. An already certified schedule table can be reused, and at runtime the scheduler reacts to the actual behavior of the jobs, thus making effective use of the available resources. We then extend this method to schedule mixed-criticality job sets with different operational modes. As a result, we can schedule jobs with varying parameters in different modes.

Jens Theis (doctoral thesis)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4030
Mon, 23 Mar 2015 12:38:23 +0100

SAHARA - A Structured Approach for Hazard Analysis and Risk Assessments
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4027
In this thesis, an approach is presented that turns the currently unstructured process of automotive hazard analysis and risk assessment (HRA), which relies on creativity techniques, into a structured, model-based approach that makes the HRA results less dependent on experts' experience, more consistent, and of higher quality. The challenge can be subdivided into two steps. The first step is to improve the HRA as it is performed in current practice. The second step is to go beyond current practice and consider not only single service failures as relevant hazards, but also multiple service failures. For the first step, the most important aspect is to formalize the operational situation of the system and to determine its likelihood. Current approaches use natural-language textual descriptions, which makes it hard to ensure consistency and to increase efficiency through reuse. Furthermore, due to the ambiguity of natural language, it is difficult to ensure consistent likelihood estimates for situations.
The main aspect of the second step is that considering multiple service failures as hazards implies that one needs to analyze an exponential number of hazards. Since hazard assessments are currently done purely manually, considering multiple service failures is not feasible. The only way to approach this challenge is to formalize the HRA and make extensive use of automation support.
In SAHARA, we handle these challenges by first introducing a model-based representation of an HRA with GOBI. Based on this, we formalized the representation of operational situations and their likelihood assessment in OASIS and HEAT, respectively. We show that more consistent situation assessments are possible and that situations (including their likelihoods) can be efficiently reused. The second aspect, coping with multiple service failures, is addressed in ARID. We show that with our tool-supported HRA approach, 100% coverage of all possible hazards (including multiple service failures) can be achieved with very limited manual effort. We furthermore show that not considering multiple service failures results in insufficient safety goals.

Sören Kemmann (doctoral thesis)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4027
Wed, 18 Mar 2015 11:15:06 +0100

Visual Learning of Socio-Video Semantics
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4023
Today's ubiquity of visual content, driven by the availability of broadband Internet, low-priced storage, and the omnipresence of camera-equipped mobile devices, conveys much of our thinking and feeling as individuals and as a society. As a result, video repositories are growing at enormous rates, with content now being embedded and shared through social media. To make use of this new form of social multimedia, concept detection, i.e., the automatic mapping between semantic concepts and video content, has to be extended such that concept vocabularies are synchronized with current real-world events, systems can perform scalable concept learning with thousands of concepts, and high-level information such as sentiment can be extracted from visual content. To meet these demands, the following three contributions are made in this thesis: (i) concept detection is linked to trending topics, (ii) visual learning from web videos is presented, including the proper treatment of tags as concept labels, and (iii) the extension of concept detection with adjective noun pairs for sentiment analysis is proposed.
In order for concept detection to satisfy users' current information needs, the notion of fixed concept vocabularies has to be reconsidered. This thesis presents a novel concept learning approach built upon dynamic vocabularies, which are automatically augmented with trending topics mined from social media. Once discovered, trending topics are evaluated by forecasting their future progression to predict high-impact topics, which are then either mapped to an available static concept vocabulary or trained as individual concept detectors on demand. It is demonstrated in experiments on YouTube video clips that visual learning of trending topics can achieve improvements of over 100% in concept detection accuracy over static vocabularies (n=78,000).
To remove the manual effort of training data retrieval from YouTube and the noise caused by tags being coarse, subjective, and context-dependent, this thesis suggests an automatic concept-to-query mapping for the retrieval of relevant training video material, and active relevance filtering to generate reliable annotations from web video tags. Here, the relevance of web tags is modeled as a latent variable, which is combined with an active-learning label refinement. In experiments on YouTube, active relevance filtering is found to outperform both automatic filtering and active learning approaches, reducing the required label inspections by 75% compared to an expert-annotated training dataset (n=100,000).
Finally, it is demonstrated that concept detection can serve as a key component to infer the sentiment reflected in visual content. To extend concept detection for sentiment analysis, adjective noun pairs (ANPs) are proposed in this thesis as novel entities for concept learning. First, a large-scale visual sentiment ontology consisting of 3,000 ANPs is automatically constructed by mining the web. From this ontology, a mid-level representation of visual content – SentiBank – is trained to encode the visual presence of 1,200 ANPs. This novel approach to visual learning is validated in three independent experiments on sentiment prediction (n=2,000), emotion detection (n=807), and pornography filtering (n=40,000). SentiBank is shown to outperform known low-level feature representations (sentiment prediction, pornography detection) or to perform comparably to state-of-the-art methods (emotion detection).
Altogether, these contributions extend state-of-the-art concept detection approaches such that concept learning can be done autonomously from web videos at large scale and can cope with novel semantic structures such as trending topics or adjective noun pairs, adding a new dimension to the understanding of video content.

Damian Borth (doctoral thesis)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4023
Wed, 18 Mar 2015 10:50:50 +0100

Statistical Language Modeling for Historical Documents using Weighted Finite-State Transducers and Long Short-Term Memory
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4022
The goal of this work is to develop statistical natural language models and processing techniques based on Recurrent Neural Networks (RNNs), especially the recently introduced Long Short-Term Memory (LSTM). Due to their adapting and predicting abilities, these methods are more robust and easier to train than traditional methods, i.e., word-list and rule-based models. They improve the output of recognition systems and make them more accessible to users for browsing and reading. These techniques are required especially for historical books, which might otherwise take years of effort and huge costs to transcribe manually.
The contributions of this thesis are several new methods that combine high-performance computing with high accuracy. First, an error model for improving recognition results is designed. As a second contribution, a hyphenation model for difficult transcriptions for alignment purposes is suggested. Third, a dehyphenation model is used to classify the hyphens in noisy transcriptions. The fourth contribution is using LSTM networks for normalizing historical orthography. A size-normalization alignment is implemented to equalize the lengths of strings before the training phase. Using LSTM networks as a language model to improve recognition results is the fifth contribution. Finally, the sixth contribution is a combination of Weighted Finite-State Transducers (WFSTs) and LSTM applied to multiple recognition systems. These contributions are elaborated in more detail below.
Context-dependent confusion rules are a new technique to build an error model for Optical Character Recognition (OCR) correction. The rules are extracted from the OCR confusions which appear in the recognition outputs and are translated into edit operations, e.g., insertions, deletions, and substitutions, using the Levenshtein edit distance algorithm. The edit operations are extracted in the form of rules with respect to the context of the incorrect string to build an error model using WFSTs. The context-dependent rules assist the language model in finding the best candidate corrections. They avoid the calculations that occur in searching the language model, and they also enable the language model to correct incorrect words by using context-dependent confusion rules. The context-dependent error model is applied to the University of Washington (UWIII) dataset and an Urdu dataset in Nastaleeq script. It improves the OCR results from an error rate of 1.14% to an error rate of 0.68%. It performs better than the state-of-the-art single-rule-based approach, which returns an error rate of 1.0%.
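As a sketch of the confusion-extraction step described above (a minimal illustration, not the thesis code; the function name is assumed), edit operations between an OCR hypothesis and its ground truth can be recovered by backtracing a Levenshtein alignment:

    # Extract insertion/deletion/substitution operations between an OCR
    # hypothesis and the ground truth via Levenshtein backtracing. Together
    # with their surrounding context, such operations could then be turned
    # into confusion rules for a WFST error model.
    def edit_operations(ocr: str, truth: str):
        n, m = len(ocr), len(truth)
        d = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(n + 1):
            d[i][0] = i
        for j in range(m + 1):
            d[0][j] = j
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = 0 if ocr[i - 1] == truth[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution/match
        ops, i, j = [], n, m
        while i > 0 or j > 0:
            if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (ocr[i - 1] != truth[j - 1]):
                if ocr[i - 1] != truth[j - 1]:
                    ops.append(("sub", ocr[i - 1], truth[j - 1]))
                i, j = i - 1, j - 1
            elif i > 0 and d[i][j] == d[i - 1][j] + 1:
                ops.append(("del", ocr[i - 1], ""))
                i -= 1
            else:
                ops.append(("ins", "", truth[j - 1]))
                j -= 1
        return list(reversed(ops))

    print(edit_operations("tbe", "the"))  # [('sub', 'b', 'h')]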
This thesis also describes a new, simple, fast, and accurate system for generating correspondences between real scanned historical books and their transcriptions. The alignment has many challenges: first, the transcriptions might have different modifications and layout variations than the original book; second, the recognition of historical books suffers from misrecognition and segmentation errors, which make the alignment more difficult, especially since line breaks and pages will not have the same correspondences. Adapted WFSTs are designed to represent the transcription. The WFSTs process Fraktur ligatures and adapt the transcription with a hyphenation model that allows the alignment to cope with the varieties of hyphenated words at the line breaks of the OCR documents. In this work, several approaches are implemented for the alignment, such as text-segment, page-wise, and book-wise approaches. The approaches are evaluated on a dataset of historical documents in German calligraphic (Fraktur) script from the “Wanderungen durch die Mark Brandenburg” volumes (1862-1889). The text-segmentation approach returns an error rate of 2.33% without a hyphenation model and an error rate of 2.0% with a hyphenation model.
Dehyphenation methods are presented to remove the hyphens from the transcription. They provide the transcription in a readable and reflowable format for alignment purposes. We consider the task as a classification problem and classify the hyphens from the given patterns as hyphens for line breaks, combined words, or noise. The methods are applied to clean and noisy transcriptions in different languages. The Decision Tree classifier performs best, returning an accuracy of 98% on the UWIII dataset and 97% on Fraktur script.
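A hedged sketch of such a hyphen classifier follows; the features and labels are simplified assumptions for illustration and need not match the thesis' actual feature set.

    # Classify a hyphen at a line break as a genuine line-break hyphen,
    # part of a compound word, or noise, using a decision tree.
    from sklearn.tree import DecisionTreeClassifier

    def features(prefix: str, suffix: str):
        return [
            len(prefix),                       # token part before the hyphen
            len(suffix),                       # token part after the line break
            int(suffix[:1].isupper()),         # capitalized continuation
            int((prefix + suffix).isalpha()),  # both parts purely alphabetic
        ]

    # Toy training data: (part before hyphen, part on next line) -> class.
    X = [features("Wan", "derungen"), features("Mark", "Brandenburg"), features("xy", "%$")]
    y = ["linebreak", "compound", "noise"]

    clf = DecisionTreeClassifier().fit(X, y)
    print(clf.predict([features("Tran", "skription")]))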
A new method for normalizing historical OCRed text using LSTM is implemented for different texts, ranging from Early New High German of the 14th-16th centuries to modern forms in New High German, applied to the Luther bible. It performs better than rule-based word-list approaches. It provides a transcription usable for various purposes such as part-of-speech tagging and n-grams. Two new techniques are also presented for aligning the OCR results and normalizing string sizes by adding character epsilons or appending epsilons. They allow deletions and insertions at the appropriate positions in the string. In normalizing historical word forms to modern word forms, the accuracy of the LSTM on seen data is around 94%, while the state-of-the-art combined rule-based method returns 93%. On unseen data, the LSTM returns 88% and the combined rule-based method returns 76%. In normalizing modern word forms to historical word forms, the LSTM delivers the best performance, returning 93.4% on seen data and 89.17% on unseen data.
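As an illustration of the size-normalization step (a sketch assuming a reserved epsilon padding symbol; the thesis' exact padding scheme may differ):

    # Pad source/target word forms to equal length with an epsilon symbol so
    # that a sequence model can learn character-wise substitutions,
    # insertions (epsilon -> char), and deletions (char -> epsilon).
    EPS = "ε"  # reserved padding symbol (assumption)

    def append_epsilons(src: str, tgt: str):
        """Append epsilons until both strings have equal length."""
        n = max(len(src), len(tgt))
        return src.ljust(n, EPS), tgt.ljust(n, EPS)

    # Historical -> modern word form, e.g. "vnnd" -> "und".
    src, tgt = append_epsilons("vnnd", "und")
    print(list(zip(src, tgt)))  # [('v','u'), ('n','n'), ('n','d'), ('d','ε')]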
In this thesis, a deep investigation has been carried out into constructing high-performance language models for improving recognition systems. A new method to construct a language model using LSTM is designed to correct OCR results. The method is applied to the UWIII and Urdu script datasets. The LSTM approach outperforms the state of the art, especially for tokens unseen during training. On the UWIII dataset, the LSTM reduces the OCR error rate from 1.14% to 0.48%. On the Nastaleeq script in the Urdu dataset, the LSTM reduces the error rate from 6.9% to 1.58%.
Finally, the integration of multiple recognition outputs can give higher performance than a single recognition system. Therefore, a new method for combining the results of OCR systems is explored using WFSTs and LSTM. It uses multiple OCR outputs and votes for the best output to improve the OCR results. It performs better than the ISRI tool and the Pairwise of Multiple Sequence method, and it helps to improve the OCR results. The purpose is to provide correct transcriptions that can be used for digitizing books, linguistic purposes, n-grams, and part-of-speech tagging. The method consists of two alignment steps. First, two recognition systems are aligned using WFSTs. The transducers are designed to be flexible and compatible with the different symbols at line and page breaks in order to avoid segmentation and misrecognition errors. The LSTM model is then used to vote for the best candidate correction of the two systems and to improve the incorrect tokens produced during the first alignment. The approaches are evaluated on OCR outputs from the English UWIII and the historical German Fraktur datasets, which are obtained from state-of-the-art OCR systems. The experiments show that the error rate of ISRI voting is 1.45%, the error rate of the Pairwise of Multiple Sequence method is 1.32%, the error rate of the line-to-page alignment is 1.26%, and the LSTM approach performs best with an error rate of 0.40%.
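A minimal sketch of the voting idea (illustrative only: it aligns two OCR token streams with a generic sequence matcher, and a dummy scoring function stands in for the LSTM language model):

    from difflib import SequenceMatcher

    def score(token: str) -> float:
        """Dummy plausibility score; a real system would query a language model."""
        return sum(c.isalpha() for c in token) / max(len(token), 1)

    def vote(ocr_a, ocr_b):
        out = []
        sm = SequenceMatcher(a=ocr_a, b=ocr_b, autojunk=False)
        for tag, i1, i2, j1, j2 in sm.get_opcodes():
            if tag == "equal":
                out.extend(ocr_a[i1:i2])  # both systems agree
            else:
                # Disagreement: keep whichever candidate scores higher.
                cand_a, cand_b = ocr_a[i1:i2], ocr_b[j1:j2]
                better = cand_a if sum(map(score, cand_a)) >= sum(map(score, cand_b)) else cand_b
                out.extend(better)
        return out

    print(vote(["Die", "M@rk"], ["Die", "Mark"]))  # ['Die', 'Mark']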
The purpose of this thesis is to contribute methods providing correct transcriptions corresponding to the original book. This is considered a first step towards an accurate and more effective use of the documents in digital libraries.

Mayce Al Azawi (doctoral thesis)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4022
Thu, 12 Mar 2015 14:57:16 +0100

Context Awareness for Enhancing Heterogeneous Access Management and Self-Optimizing Networks
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4021
The heterogeneity of today's access possibilities to wireless networks imposes challenges for efficient mobility support and resource management across different Radio Access Technologies (RATs). The current situation is characterized by the coexistence of various wireless communication systems, such as GSM, HSPA, LTE, WiMAX, and WLAN. These RATs greatly differ with respect to coverage, spectrum, data rates, Quality of Service (QoS), and mobility support.
In real systems, mobility-related events, such as Handover (HO) procedures, directly affect resource efficiency and End-To-End (E2E) performance, in particular with respect to signaling efforts and users' QoS. In order to lay a basis for realistic multi-radio network evaluation, a novel evaluation methodology is introduced in this thesis.
A central hypothesis of this thesis is that the consideration and exploitation of additional information characterizing user, network, and environment context, is beneficial for enhancing Heterogeneous Access Management (HAM) and Self-Optimizing Networks (SONs). Further, Mobile Network Operator (MNO) revenues are maximized by tightly integrating bandwidth adaptation and admission control mechanisms as well as simultaneously accounting for user profiles and service characteristics. In addition, mobility robustness is optimized by enabling network nodes to tune HO parameters according to locally observed conditions.
For establishing all these facets of context awareness, various schemes and algorithms are developed and evaluated in this thesis. System-level simulation results demonstrate the potential of context information exploitation for enhancing resource utilization, mobility support, self-tuning network operations, and users' E2E performance.
In essence, the conducted research activities and presented results motivate and substantiate the consideration of context awareness as a key enabler for cognitive and autonomous network management. Furthermore, the investigations performed and aspects evaluated in the scope of this thesis are highly relevant for future 5G wireless systems and current discussions in the 5G infrastructure Public Private Partnership (PPP).

Andreas Klein (doctoral thesis)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4021
Wed, 11 Mar 2015 09:20:39 +0100

Context-Enabled Optimization of Energy-Autarkic Networks for Carrier-Grade Wireless Backhauling
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4020
This work establishes the novel category of coordinated Wireless Backhaul Networks (WBNs) for energy-autarkic point-to-point radio backhauling. The networking concept is based on three major building blocks: cost-efficient radio transceiver hardware, a self-organizing network operations framework, and power supply from renewable energy sources. The aim of this novel backhauling approach is to combine carrier-grade network performance with reduced maintenance effort as well as independent and self-sufficient power supply. In order to facilitate the success prospects of this concept, the thesis comprises the following major contributions, starting with a formal, multi-domain system model and evaluation methodology.
First, adapted from the theory of cyber-physical systems, the author devises a multi-domain evaluation methodology and a system-level simulation framework for energy-autarkic coordinated WBNs, including a novel balanced scorecard concept. Second, the thesis specifically addresses the topic of Topology Control (TC) in point-to-point radio networks and how it can be exploited for network management purposes. Given a set of network nodes equipped with multiple radio transceivers and known locations, TC continuously optimizes the setup and configuration of radio links between network nodes, thus supporting initial network deployment, network operation, as well as topology re-configuration. In particular, the author shows that TC in WBNs belongs to the class of NP-hard quadratic assignment problems and that it has significant impact in operational practice, e.g., on routing efficiency, network redundancy levels, service reliability, and energy consumption. Two novel algorithms focusing on maximizing edge connectivity of network graphs are developed.
Finally, this work carries out an analytical benchmarking and a numerical performance analysis of the introduced concepts and algorithms. The author analytically derives minimum performance levels of the developed TC algorithms. For the analyzed scenarios of remote Alpine communities and rural Tanzania, the evaluation shows that the algorithms improve energy efficiency and balance energy consumption more evenly across backhaul nodes, thus significantly increasing the number of available backhaul nodes compared to state-of-the-art TC algorithms.

Christian Mannweiler (doctoral thesis)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4020
Tue, 10 Mar 2015 12:22:33 +0100

Modeling and design optimization of textile-like materials via homogenization and one-dimensional models of elasticity
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4019
The work consists of two parts.
In the first part an optimization problem of structures of linear elastic material with contact modeled by Robin-type boundary conditions is considered. The structures model textile-like materials and possess certain quasiperiodicity properties. The homogenization method is used to represent the structures by homogeneous elastic bodies and is essential for formulations of the effective stress and Poisson's ratio optimization problems. At the micro-level, the classical one-dimensional Euler-Bernoulli beam model extended with jump conditions at contact interfaces is used. The stress optimization problem is of a PDE-constrained optimization type, and the adjoint approach is exploited. Several numerical results are provided.
In the second part, a non-linear model for the simulation of textiles is proposed. The yarns are modeled by a hyperelastic law and have no bending stiffness. The friction is modeled by the Capstan equation. The model is formulated as a problem with rate-independent dissipation, and the basic continuity and convexity properties are investigated. The part ends with numerical experiments and a comparison of the results to a real measurement.
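For reference, the Capstan (Euler-Eytelwein) equation used for the friction model relates the tensions on the two sides of a yarn wrapped around another body; in standard notation (assumed here),

\[ T_2 = T_1 \, e^{\mu \theta}, \]

where \(T_2\) is the tension on the pulling side, \(T_1\) the tension on the holding side, \(\mu\) the friction coefficient, and \(\theta\) the total wrap angle.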
Vladimir Shiryaev (doctoral thesis)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4019
Mon, 09 Mar 2015 14:42:08 +0100

An Automata-Theoretic Approach to Open Actor System Verification
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4018
Open distributed systems are a class of distributed systems where (i) only partial information about the environment, in which they are running, is present, (ii) new resources may become available at runtime, and (iii) a subsystem may become aware of other subsystems after some interaction. Modeling and implementing such systems correctly is a complex task due to the openness and the dynamicity aspects. One way to ensure that the resulting systems behave correctly is to utilize formal verification.
Formal verification requires an adequate semantic model of the implementation, a specification of the desired behavior, and a reasoning technique. The actor model is a semantic model that captures the challenging aspects of open distributed systems by utilizing actors as universal primitives to represent system entities and allowing them to create new actors and to communicate by sending directed messages as reply to received messages. To enable compositional reasoning, where the reasoning task is reduced to independent verification of the system parts, semantic entities at a higher level of abstraction than actors are needed.
This thesis proposes an automaton model and combines sound reasoning techniques to compositionally verify implementations of open actor systems. Based on I/O automata, the model allows automata to be created dynamically and captures dynamic changes in communication patterns. Each automaton represents either an actor or a group of actors. The specification of the desired behavior is given constructively as an automaton. As the basis for compositionality, we formalize a component notion based on the static structure of the implementation instead of the dynamic entities (the actors) occurring in the system execution. The reasoning proceeds in two stages. The first stage establishes the connection between the automata representing single actors and their implementation description by means of weakest liberal preconditions. The second stage employs this result as the basis for verifying whether a component specification is satisfied. The verification is done by building a simulation relation from the automaton representing the implementation to the component's automaton. Finally, we validate the compositional verification approach through a number of examples by proving correctness of their actor implementations with respect to system specifications.

Ilham W. Kurnia (doctoral thesis)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4018
Mon, 09 Mar 2015 10:14:58 +0100

A Finite Dominating Set Algorithm for a Dynamic Location Problem in the Plane
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4017
A single-facility problem in the plane is considered, where an optimal location has to be identified for each of finitely many time steps with respect to time-dependent weights and demand points. It is shown that the median objective can be reduced to a special case of the static multifacility median problem, such that results from the latter can be used to tackle the dynamic location problem. When using block norms as the distance measure between facilities, a Finite Dominating Set (FDS) is derived.
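A hedged sketch of the objective in question, in assumed notation (\(x_t\) the facility location in period \(t\), \(a_i\) the demand points, \(w_{it}\) the time-dependent weights, \(d\) a block-norm distance):

\[ \min_{x_1,\dots,x_T} \; \sum_{t=1}^{T} \sum_{i=1}^{n} w_{it} \, d(x_t, a_i). \]

Stacking the per-period locations \((x_1,\dots,x_T)\) turns this into an instance with \(T\) facilities; any coupling between periods (e.g., relocation costs of the form \(d(x_t, x_{t+1})\), if present in the model) then plays the role of the inter-facility terms of the static multifacility median problem.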
For the special case with only two time steps, the resulting algorithm is analyzed with respect to its worst-case complexity. Due to the relation between dynamic location problems for T time periods and T-facility problems, this algorithm can also be applied to the static 2-facility location problem.

Andrea Maier; Horst W. Hamacher (preprint)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4017
Thu, 05 Mar 2015 14:56:26 +0100

Modeling and Simulation of a Moving Rigid Body in a Rarefied Gas
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4012
We present a numerical scheme to simulate a moving rigid body with arbitrary shape suspended in a rarefied gas micro flow, in view of applications to complex computations of moving structures in micro or vacuum systems. The rarefied gas is simulated by solving the Boltzmann equation using a DSMC particle method. The motion of the rigid body is governed by the Newton-Euler equations, where the force and the torque on the rigid body are computed from the momentum transfer of the gas molecules colliding with the body. The resulting motion of the rigid body in turn affects the gas flow in its surroundings, so a two-way coupling has been modeled. We validate the scheme by performing various numerical experiments in 1-, 2- and 3-dimensional computational domains: a 1-dimensional actuator problem, a 2-dimensional driven-cavity flow problem, Brownian diffusion of a spherical particle with both translational and rotational motion, and finally thermophoresis on a spherical particle. We compare the results of the numerical simulations with existing theories for each test example.

Samir Shrestha (doctoral thesis)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4012
Wed, 04 Mar 2015 11:43:53 +0100

Visual Processing in Reading and Dyslexia
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4007
The present research combines different paradigms in the area of visual perception of letters and words. The experiments aimed to understand the deficits underlying the faulty visual processing of letters and words. The present work summarizes findings from two different populations: (1) dyslexics (reading-disabled children) and (2) illiterates (adults who cannot read). Comparisons were made between literate and illiterate groups, and between dyslexics and a control group (normally reading children). Differences in event-related potentials (ERPs) between dyslexic and control children were examined using a mental rotation task for letters. In the ERPs, the mental rotation of letters appeared as a delayed positive component that becomes less positive as the task becomes more difficult (rotation-related negativity, RRN). The component was absent for dyslexics and present for controls. Dyslexics also showed some late effects in comparison to control children, which could be interpreted as problems at the decision stage, where they are unsure whether a letter is normal or mirrored. Dyslexics also have problems in responding to letters with visual or phonological similarities (e.g., b vs. d, p vs. q). Visually similar letters were used to compare dyslexics and controls on a symmetry generalization task in two contrast conditions (low and high). Dyslexics showed a similar pattern of response but were overall slower in responding to the task compared to controls. The results were interpreted within the framework of the Functional Coordination Deficit (Lachmann, 2002). Dyslexics furthermore showed delayed responses in a word recognition task during motion. Using a red background decreases magnocellular pathway (M-pathway) activity, making it more difficult to identify letters; this effect was worse for dyslexics because their M-pathway is weaker, and reaction times with a red background were accordingly longer than those with a green background. Finally, illiterates showed an analytic approach to responding to letters as well as to shapes. This analytic approach does not result from an individual's ability to read, but is a primary basis of visual organization or perception.

Gunjan Khera (doctoral thesis)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4007
Tue, 03 Mar 2015 14:12:00 +0100

Adaptive Real-Time Scheduling and Resource Management on Multicore Architectures
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4008
Real-time systems are systems that have to react correctly to stimuli from the environment within given timing constraints. Today, real-time systems are employed everywhere in industry, not only in safety-critical systems but also in, e.g., communication, entertainment, and multimedia systems. With the advent of multicore platforms, new challenges for the efficient exploitation of real-time systems have arisen. First, there is the need for effective scheduling algorithms that feature low overheads to improve the use of the computational resources of real-time systems. The goal of these algorithms is to ensure timely execution of tasks, i.e., to provide runtime guarantees. Additionally, many systems require their scheduling algorithm to react flexibly to unforeseen events. Second, the inherent parallelism of multicore systems leads to contention for shared hardware resources and complicates system analysis. At any time, multiple applications run with varying resource requirements and compete for the scarce resources of the system. As a result, there is a need for adaptive resource management.
Achieving and implementing an effective and efficient resource management is a challenging task. The main goal of resource management is to guarantee a minimum resource availability to real-time applications. A further goal is to fulfill global optimization objectives, e.g., maximization of global system performance or of the user-perceived quality of service.
In this thesis, we derive methods based on the slot-shifting algorithm. Slot shifting provides flexible scheduling of time-constrained applications and can react to unforeseen events in time-triggered systems. For this reason, we aim at designing slot-shifting-based algorithms targeted for multicore systems to tackle the aforementioned challenges.
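As a hedged illustration of the underlying mechanism, the sketch below shows the classic offline spare-capacity computation from the slot-shifting literature; the interval construction is simplified and the names are assumptions, not this thesis' multicore extensions.

    # Offline part of slot shifting: time is divided into disjoint intervals
    # ending at deadlines; each interval's spare capacity says how many slots
    # may be lent to other activities without endangering guaranteed tasks.
    # Overloaded intervals (negative spare capacity) borrow from earlier
    # ones, hence the backward pass.
    def spare_capacities(intervals):
        """intervals: list of (length_in_slots, wcet_demand_in_slots),
        ordered by time. Returns the spare capacity of each interval."""
        sc = [0] * len(intervals)
        carry = 0  # demand borrowed by later intervals
        for i in range(len(intervals) - 1, -1, -1):
            length, demand = intervals[i]
            sc[i] = length - demand + min(carry, 0)
            carry = sc[i]
        return sc

    # Three intervals of 5 slots; the last is overloaded by one slot, which
    # is borrowed from its predecessor's spare capacity.
    print(spare_capacities([(5, 2), (5, 3), (5, 6)]))  # [3, 1, -1]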
The main contribution of this thesis is to present two global slot-shifting algorithms targeted at multicore systems. Additionally, we extend slot-shifting algorithms to improve their runtime behavior or to handle non-preemptive firm aperiodic tasks. In a variety of experiments, the effectiveness and efficiency of the algorithms are evaluated and confirmed. Finally, the thesis presents an implementation of slot-shifting-based logic in a resource management framework for multicore systems. Thus, the thesis closes the circle and bridges the gap between real-time scheduling theory and real-world implementations. We prove the applicability of the slot-shifting algorithm to effectively and efficiently perform adaptive resource management on multicore systems.

Stefan Schorr (doctoral thesis)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4008
Tue, 03 Mar 2015 11:18:21 +0100

CRATER: Case-based Reasoning Framework for Engineering an Adaptation Engine in Self-Adaptive Software Systems
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4005
Self-adaptation allows software systems to autonomously adjust their behavior during run-time by handling all possible operating states that violate the requirements of the managed system. This requires an adaptation engine that receives adaptation requests during the monitoring process of the managed system and responds with an automated and appropriate adaptation response. During the last decade, several engineering methods have been introduced to enable self-adaptation in software systems. However, these methods do not address (1) the run-time uncertainty that hinders the adaptation process and (2) the performance impact resulting from the complexity and large size of the adaptation space. This paper presents CRATER, a framework that builds an external adaptation engine for self-adaptive software systems. The adaptation engine, which is built on case-based reasoning, handles both of the aforementioned challenges. The paper is supported by an experiment illustrating the benefits of this framework. The experimental results show the potential of CRATER in handling run-time uncertainty and in adaptation remembrance, which enhances performance for large adaptation spaces.

Mohammed Abufouda (article)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4005
Mon, 02 Mar 2015 09:30:57 +0100

Structural Decomposition of STGs
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3999
Specification of asynchronous circuit behaviour becomes more complex as the complexity of today's System-on-a-Chip (SoC) designs increases. This also causes the Signal Transition Graphs (STGs) – interpreted Petri nets for the specification of asynchronous circuit behaviour – to become bigger and more complex, which makes it more difficult, sometimes even impossible, to synthesize an asynchronous circuit from an STG with a tool like petrify [CKK+96] or CASCADE [BEW00]. It has therefore been suggested to decompose the STG as a first step; this leads to a modular implementation [KWVB03] [KVWB05], which can reduce synthesis effort by possibly avoiding state explosion or by allowing the use of library elements. A decomposition approach for STGs was presented in [VW02] [KKT93] [Chu87a]. The decomposition algorithm by Vogler and Wollowski [VW02] is based on that of Chu [Chu87a] but is much more generally applicable than the ones in [KKT93] [Chu87a], and its correctness has been proved formally in [VW02].
This dissertation begins with Petri net background in chapter 2. It starts with the class of Petri nets called place/transition (P/T) nets; then STGs, a subclass of P/T nets, are reviewed. Background on net decomposition is presented in chapter 3. It begins with the structural decomposition of P/T nets for analysis purposes – liveness and boundedness of the net. Then STG decomposition for synthesis from [VW02] is described.
The decomposition method from [VW02] can still be improved to deal with STGs from real applications and to give better decomposition results. Some improvements of [VW02] that improve the decomposition results and increase the algorithm's efficiency are discussed in chapter 4. These improvement ideas were suggested in [KVWB04], and some of them have been proved formally in [VK04].
The decomposition method from [VW02] is based on net reduction to find an output block component. A large amount of work has to be done to reduce an initial specification until the final component is found. This reduction is not always possible, which causes inputs initially classified as irrelevant to become relevant inputs for the component. But under certain conditions (e.g., if structural auto-conflicts turn out to be non-dynamic), some of them could be reclassified as irrelevant. If this is not done, the specifications become unnecessarily large, which in turn leads to unnecessarily large implemented circuits. Instead of reduction, a new approach, presented in chapter 5, decomposes the original net into structural components first. An initial output block component is found by composing the structural components. Then, a final output block component is obtained by net reduction.
As we deal with the structure of a net most of the time, it is useful to have a structural abstraction of the net. A structural abstraction algorithm [Kan03] is presented in chapter 6. It can improve the performance of finding an output block component in most cases [War05] [Taw04]. Also, the structure graph is in most cases smaller than the net itself. This increases the efficiency of the decomposition algorithm because it allows the transitions contained in a node of the structure graph to be contracted at the same time if the structure graph is used as the internal representation of the net.
Chapter 7 discusses the application of STG decomposition in asynchronous circuit design. Application to speed-independent circuits is discussed first. After that, 3D circuits synthesized from extended burst mode (XBM) specifications are discussed. An algorithm for translating STG specifications to XBM specifications was first suggested in [BEW99]. This algorithm first derives the state machine from the STG specification, then translates the state machine to an XBM specification. An XBM specification, though it is a state machine, allows some concurrency. This concurrency can be translated directly, without deriving all of the possible states. An algorithm which directly translates STG to XBM specifications is presented in chapter 7.3.1. Finally, DESI, a tool to decompose STGs, and its decomposition results are presented.
Benedictus Benyamin Kangsah (doctoral thesis)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3999
Thu, 26 Feb 2015 08:27:56 +0100

Testrig optimization by block loads: Remodelling of damage as Gaussian functions and their clustering method
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4003
In automotive test rigs we apply load time series to components such that the outcome is as close as possible to some reference data. The testing procedure should in general be less expensive and at the same time take less time for testing. In my thesis, I propose a testrig damage optimization problem (WSDP). This approach improves upon the testrig stress optimization problem (TSOP) used as the state of the art by industry experts.
In both the TSOP and the WSDP, we optimize the load time series for a given test rig configuration. As the name suggests, in the TSOP the reference data is a stress time series. The detailed behaviour of the stresses as functions of time is sometimes not the most important topic; instead, the damage potential of the stress signals is considered. Since damage is not part of the objectives in the TSOP, the total damage computed from the optimized load time series is not optimal with respect to the reference damage. Additionally, the load time series obtained is as long as the reference stress time series, and the total damage computation needs cycle-counting algorithms and Goodman corrections. The use of cycle-counting algorithms makes the computation of damage from load time series non-differentiable.
To overcome the issues discussed in the previous paragraph, this thesis uses block loads for the load time series. Using block loads makes the damage differentiable with respect to the load time series. Additionally, in some special cases it is shown that damage is convex when block loads are used, and no cycle-counting algorithms are required. Using load time series with block loads enables us to use damage in the objective function of the WSDP.
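For orientation, a standard way to write the damage of a block-load program is the Palmgren-Miner rule with a Basquin-type S-N curve (textbook background in assumed notation, not the thesis' exact model):

\[ D \;=\; \sum_{i=1}^{m} \frac{n_i}{N(S_i)}, \qquad N(S) = C\,S^{-k}, \]

where block \(i\) applies \(n_i\) cycles at amplitude \(S_i\). Since each block contributes a smooth power of its amplitude, \(D\) is differentiable in the block parameters, and no rainflow counting is needed.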
During every iteration of the WSDP, we have to find the maximum total damage over all plane angles. The first attempt at solving the WSDP uses a discretization of the plane-angle interval to find the maximum total damage at each iteration. This is shown to give unreliable results and makes the maximum total damage function non-differentiable with respect to the plane angle. To overcome this, the damage function for a given surface stress tensor due to a block load is remodelled by Gaussian functions, and the parameters of the new model are derived.
When we model the damage by Gaussian functions, the total damage is computed as a sum of Gaussian functions. Finding the plane with the maximum damage is similar to finding the modes of a Gaussian Mixture Model (GMM), the difference being that the Gaussian functions used in a GMM are probability density functions, which is not the case in the damage approximation presented in this work. We derive conditions for a single maximum of a sum of Gaussian functions, similar to the ones given for the unimodality of GMMs by Aprausheva et al. in [1].
Using these conditions for a single maximum, we give a clustering algorithm that merges the Gaussian functions in the sum into clusters. Each cluster obtained through clustering is such that it has a single maximum in the absence of the other Gaussian functions of the sum. The approximate point of the maximum of each cluster is used as the starting point for a fixed-point equation on the original damage function to get the actual maximum total damage at each iteration.
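A hedged sketch of this mode search (a minimal illustration assuming equal-width Gaussian bumps; the clustering step that supplies the starting points is omitted):

    # Find a local maximum of a weighted sum of Gaussians
    # f(x) = sum_i w_i * exp(-(x - m_i)^2 / (2 s^2)) by fixed-point iteration:
    # setting f'(x) = 0 rearranges to x = weighted mean of the centers m_i,
    # with weights w_i * exp(-(x - m_i)^2 / (2 s^2)).
    import math

    def local_max(w, m, s, x0, iters=100):
        x = x0
        for _ in range(iters):
            g = [wi * math.exp(-(x - mi) ** 2 / (2 * s * s)) for wi, mi in zip(w, m)]
            x = sum(gi * mi for gi, mi in zip(g, m)) / sum(g)
        return x

    # Two well-separated bumps: each start converges to the nearby mode.
    w, m, s = [1.0, 0.8], [0.0, 5.0], 0.7
    print(local_max(w, m, s, x0=4.5))  # close to 5.0
    print(local_max(w, m, s, x0=0.3))  # close to 0.0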
We implement the method for the TSOP, and the two methods (with discretization and with clustering) for the WSDP, on two example problems. The results obtained from the WSDP using discretization are shown to be better than the results obtained from the TSOP. Furthermore, we show that the WSDP with the clustering approach to finding the maximum total damage takes fewer iterations and is more reliable than the one using discretization.

Chhitiz Buchasia (doctoral thesis)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4003
Tue, 24 Feb 2015 11:08:29 +0100

Freeness of hyperplane arrangements with multiplicities
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3986
This bachelor thesis is concerned with arrangements of hyperplanes, that is, finite collections of hyperplanes in a finite-dimensional vector space. Such arrangements can be studied using methods from combinatorics, topology, or algebraic geometry. Our focus lies on an algebraic object associated to an arrangement \(\mathcal{A}\), the module \(D(\mathcal{A})\) of logarithmic derivations along \(\mathcal{A}\). It was introduced by K. Saito in the context of singularity theory, and intensively studied by Terao and others. If \(D(\mathcal{A})\) admits a basis, the arrangement \(\mathcal{A}\) is called free.
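For context, the standard definition reads as follows (textbook background, stated in assumed notation): if a hyperplane \(H \in \mathcal{A}\) is the kernel of a linear form \(\alpha_H\) and \(S\) denotes the coordinate ring of the ambient vector space, then

\[ D(\mathcal{A}) \;=\; \{\, \theta \in \operatorname{Der}(S) \;:\; \theta(\alpha_H) \in \alpha_H S \ \text{ for all } H \in \mathcal{A} \,\}. \]

For a multiarrangement with multiplicity \(\mathbf{m}\), the condition becomes \(\theta(\alpha_H) \in \alpha_H^{\mathbf{m}(H)} S\), which is Ziegler's generalization mentioned below.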
Ziegler generalized the concept of freeness to so-called multiarrangements, where each hyperplane carries a multiplicity. Terao conjectured that freeness of arrangements can be decided based on the combinatorics. We pursue the analogous question for multiarrangements in special cases. Firstly, we give a new proof of a result of Ziegler stating that generic multiarrangements are totally non-free, that is, non-free for any multiplicity. Our proof relies on the new concept of unbalanced multiplicities. Secondly, we consider freeness asymptotically for increasing multiplicity of a fixed hyperplane. We give an explicit bound for the multiplicity where the freeness property has stabilized.

Lukas Kühne (bachelor thesis)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3986
Thu, 12 Feb 2015 16:33:38 +0100

An Efficient Multiscale Method for Modeling Progressive Damage in Composite Materials
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3980
An efficient multiscale approach is established in order to compute the macroscopic response of nonlinear composites. The micro problem is rewritten in an integral form of the Lippmann-Schwinger type and solved efficiently by Fast Fourier Transforms. Using realistic microstructure models, complex nonlinear effects are reproduced and validated against measured data for fiber-reinforced plastics. The micro problem is integrated into a Finite Element framework which is used to solve the macroscale. The scale-coupling technique and a consistent numerical algorithm are established. The method provides an efficient way to determine the macroscopic response considering arbitrary microstructures, constitutive behaviors, and loading conditions.

Johannes Spahn (doctoral thesis)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3980
Mon, 02 Feb 2015 08:46:45 +0100

Bicriteria approach to the optimal location of surveillance cameras
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3979
We consider the problem of finding efficient locations of surveillance cameras, where we distinguish between two different problems. In the first, the whole area must be monitored and the number of cameras should be as small as possible. In the second, the goal is to maximize the monitored area for a fixed number of cameras. In both of these problems, restrictions on the abilities of the cameras, like limited depth of view or range of vision, are taken into account. We present solution approaches for these problems and report on the results of their implementations applied to an authentic problem.
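The first problem is a covering problem; a sketch of the standard set-covering formulation (textbook background in assumed notation: candidate camera positions \(j\), demand points \(i\), and \(V_j\) the set of points visible from position \(j\)):

\[ \min \sum_{j} x_j \quad \text{s.t.} \quad \sum_{j \,:\, i \in V_j} x_j \;\ge\; 1 \quad \forall i, \qquad x_j \in \{0,1\}. \]

The second problem swaps objective and budget: maximize the covered demand subject to \(\sum_j x_j \le k\), the classic maximum-coverage formulation.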
We also consider a bicriteria problem with two objectives, maximizing the monitored area and minimizing the number of cameras, and solve it for our case study.

Aleksandra Gross; Horst W. Hamacher (preprint)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3979
Thu, 29 Jan 2015 08:18:53 +0100

Interactive Verification of Synchronous Systems
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3973
Embedded systems, ranging from very simple systems up to complex controllers, may nowadays have quite challenging real-time requirements. Many embedded systems are reactive systems that have to respond to environmental events and have to guarantee certain real-time constraints. Their execution is usually divided into reaction steps, where in each step the system reads inputs from the environment and reacts to these by computing corresponding outputs.
The synchronous Model of Computation (MoC) has proven to be well suited for the development of reactive real-time embedded systems, as its paradigm directly reflects the reactive nature of the systems it describes. Another advantage is the availability of formal verification by model checking, a result of the deterministic execution based on a formal semantics. Nevertheless, the increasing complexity of embedded systems requires compensating the natural disadvantages of model checking, which suffers from the well-known state-space explosion problem. It is therefore natural to try to integrate other verification methods with the already established techniques. Hence, improvements to counter these problems are required, e.g., appropriate decomposition techniques, which address the disadvantages of the model checking approach naturally. But defining decomposition techniques for synchronous languages is a difficult task, as a result of the inherent parallelism emerging from the synchronous broadcast communication.
Inspired by the progress in the field of desynchronization of synchronous systems by representing them in other MoCs, this work investigates the possibility of adapting and using methods and tools designed for other MoCs for the verification of systems represented in the synchronous MoC. To this end, this work introduces the interactive verification of synchronous systems based on the basic foundation of formal verification for sequential programs – the Hoare calculus. Due to the different models of computation, several problems have to be solved. In particular, due to the large amount of concurrency, several parts of the program are active at the same point in time. In contrast to sequential programs, a decomposition in the Hoare-logic style, which is in some sense a symbolic execution from one control-flow location to another, requires the consideration of several flows here. Therefore, different approaches for the interactive verification of synchronous systems are presented.
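For reference, the sequential starting point is the Hoare triple \(\{P\}\,S\,\{Q\}\) together with rules such as sequential composition (standard background, not this thesis' synchronous extension):

\[ \frac{\{P\}\,S_1\,\{R\} \qquad \{R\}\,S_2\,\{Q\}}{\{P\}\,S_1;S_2\,\{Q\}}. \]

The difficulty described above is that a synchronous program offers no single control-flow location at which to cut: one reaction step advances several concurrent flows at once, so an intermediate assertion \(R\) must constrain all of them simultaneously.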
Additionally, the representation of synchronous systems by other MoCs and the influence of the representation on the verification task are elaborated by embedding synchronous systems in different ways in a single verification tool. The feasibility is shown by integrating the presented approach with the established model checking methods, implementing the AIFProver on top of the Averest system.

Manuel Gesell (doctoral thesis)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3973
Fri, 23 Jan 2015 08:52:04 +0100

Test rig optimization
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3960
Designing good test rigs for fatigue life tests is a common task in the automotive industry. The problem of finding an optimal test rig configuration and actuator load signals can be formulated as a mathematical program. We introduce a new optimization model that includes multi-criteria, discrete, and continuous aspects. At the same time, we manage to avoid the necessity of dealing with the rainflow-counting (RFC) method. RFC is an algorithm which extracts load cycles from an irregular time signal. As a mathematical function it is non-convex and non-differentiable and, hence, makes optimization of the test rig intractable.
The block structure of the load signals is assumed from the beginning. This highly reduces the complexity of the problem without shrinking the feasible set. Also, we optimize with respect to the actuators' positions, which makes it possible to take torques into account and thus extend the feasible set. As a result, the new model gives significantly better results compared with the other approaches in test rig optimization.
Under certain conditions, the non-convex test rig problem is a union of convex problems on cones. Numerical methods for optimization usually need constraints and a starting point. We describe an algorithm that detects each cone and an interior point of it in polynomial time.
The test rig problem belongs to the class of bilevel programs. For every instance of the state vector, a sum of functions has to be maximized. We propose a new branch-and-bound technique that uses local maxima of every summand.

Alexander Belyaev (doctoral thesis)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3960
Thu, 22 Jan 2015 10:45:03 +0100

Private-by-Design Advertising and Analytics: From Theory to Practice
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3970
There are a number of designs for an online advertising system that allow for behavioral targeting without revealing user online behavior or user interest profiles to the ad network. Although these designs purport to be practical solutions, none of them adequately considers the role of ad auctions, which today are central to the operation of online advertising systems. Moreover, none of the proposed designs have been deployed in real-life settings. In this thesis, we present an effort to fill this gap. First, we address the challenge of running ad auctions that leverage user profiles while keeping the profile information private. We define the problem, broadly explore the solution space, and discuss the pros and cons of these solutions. We analyze the performance of our solutions using data from Microsoft Bing advertising auctions. We conclude that, while none of our auctions are ideal in all respects, they are adequate and practical solutions. Second, we build and evaluate a fully functional prototype of a practical privacy-preserving ad system at a reasonably large scale. With more than 13K opted-in users, our system was in operation for over two months, serving an average of 4,800 active users daily. During the last month alone, we registered 790K ad views, 417 clicks, and even a small number of product purchases. Our system obtained click-through rates comparable with those for Google display ads. In addition, our prototype is equipped with a differentially private analytics mechanism, which we used as the primary means for gathering experimental data. In this thesis, we describe our first-hand experience and lessons learned in running the world's first fully operational “private-by-design” behavioral advertising and analytics system.

Alexey Reznichenko (doctoral thesis)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3970
Thu, 22 Jan 2015 09:21:40 +0100

On the Extended Finite Element Method for the Elasto-Plastic Deformation of Heterogeneous Materials
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3962
This thesis is concerned with the extended finite element method (XFEM) for the deformation analysis of three-dimensional heterogeneous materials. Using the "enhanced abs enrichment", the XFEM is able to reproduce kinks in the displacements, and therewith jumps in the strains, within elements of the underlying tetrahedral finite element mesh. A complex model for the microstructure reconstruction of the aluminum matrix composite AMC225xe and the modeling of its macroscopic thermo-mechanical plastic deformation behavior is presented using the XFEM. Additionally, a novel stabilization algorithm is introduced for the XFEM. This algorithm requires preprocessing only.

Tobias Zangmeister (doctoral thesis)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3962
Wed, 07 Jan 2015 11:05:26 +0100

Combinations of Boolean Groebner Bases and SAT Solvers
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3958
In this thesis, we combine Groebner bases with SAT solvers in different manners. Both SAT solvers and Groebner basis techniques have their own strengths and weaknesses, and combining them can fix these weaknesses.
The first combination uses Groebner techniques to learn additional binary clauses for the SAT solver from a selection of clauses. This combination was first proposed by Zengler and Kuechlin. However, in our experiments, about 80 percent of the Groebner basis computations yield no new binary clauses. By selecting smaller and more compact input for the Groebner basis computations, we can significantly reduce the number of inefficient Groebner basis computations and learn many more binary clauses. In addition, the new strategy reduces the solving time of a SAT solver in general, especially for large and hard problems.
The second combination uses an all-solution SAT solver and interpolation to compute Boolean Groebner bases of Boolean elimination ideals of a given ideal. Computing the Boolean Groebner basis of the given ideal is an inefficient method when we want to eliminate most of the variables from a big system of Boolean polynomials. Therefore, we propose a more efficient approach to handle such cases. In this approach, the given ideal is translated into a CNF formula. Then an all-solution SAT solver is used to find the projection of all solutions of the given ideal. Finally, an algorithm, e.g., the Buchberger-Moeller algorithm, is used to associate the reduced Groebner basis with the projection. We also optimize the Buchberger-Moeller algorithm for lexicographical orderings and compare it with Brickenstein's interpolation algorithm.
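The translation rests on the standard correspondence between clauses and Boolean polynomials over \(\mathbb{F}_2\) (textbook background, stated in assumed notation): a clause is satisfied exactly when the product of its complemented literals vanishes, e.g.,

\[ x \vee \neg y \vee z \quad\Longleftrightarrow\quad (x+1)\, y \,(z+1) \;=\; 0 \]

in the Boolean ring \(\mathbb{F}_2[x,y,z]/\langle x^2+x,\, y^2+y,\, z^2+z\rangle\). A CNF formula thus corresponds to the Boolean ideal generated by one such polynomial per clause, and the models of the formula are exactly the zeros of the ideal.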
Finally, we combine Groebner bases and abstraction techniques for the verification of digital designs that contain complicated data paths. For a given design, we construct an abstract model and reformulate it as a system of polynomials in the ring \({\mathbb Z}_{2^k}[x_1,\dots,x_n]\). The variables are ordered in such a way that the system already is a Groebner basis w.r.t. the lexicographical monomial ordering. Finally, the normal form is employed to prove the desired properties. To evaluate our approach, we verify the global property of a multiplier and of a FIR filter using the computer algebra system Singular. The results show that our approach is much faster than the commercial verification tool from Onespin on these benchmarks.

Thanh Hung Nguyen (doctoral thesis)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3958
Thu, 18 Dec 2014 14:11:19 +0100