Fachbereich Informatik
Document Type
- Preprint (346)
- Doctoral Thesis (191)
- Report (139)
- Article (117)
- Master's Thesis (45)
- Study Thesis (13)
- Conference Proceeding (8)
- Bachelor Thesis (3)
- Habilitation (2)
- Part of a Book (1)
Keywords
- AG-RESY (64)
- PARO (31)
- Case-Based Reasoning (20)
- Visualisierung (17)
- SKALP (16)
- CoMo-Kit (15)
- Fallbasiertes Schliessen (12)
- RODEO (12)
- Robotik (12)
- HANDFLEX (11)
Industrial manufacturing companies have different IT control functions that can be represented by the so-called hierarchical automation pyramid. While these conventional software systems primarily support mass production with consistent demand, the future-oriented project "Industry 4.0" focuses on customer-oriented and adaptable production processes. In order to move from conventional production systems to a factory of the future, the control levels must be redistributed. With the help of cyber-physical production systems, an interoperable architecture must be implemented that dissolves the hierarchical coupling of the former control levels. The accompanying digitalisation of industrial companies makes the transition to modular production possible. At the same time, the requirements for production planning and control are increasing; these can be addressed with approaches such as multi-agent systems (MASs), i.e., software solutions composed of autonomous, intelligent objects with a distinct collaborative ability. There are different modelling methods, communication and interaction structures, as well as different development frameworks for these new systems. Since multi-agent systems have not yet been established as an industrial standard due to their high complexity, they are usually only tested in simulations. In this bachelor thesis, a detailed literature review on the topic of MASs in the field of production planning and control is presented. In addition, selected multi-agent approaches are evaluated and compared using specific classification criteria, and the applicability of these systems in digital and modular production is assessed.
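As a generic illustration of how such agents can coordinate production planning tasks, the following sketch shows a contract-net-style negotiation between an order agent and machine agents. All class names, skills and the bidding rule are hypothetical and do not correspond to any of the MAS frameworks surveyed in the thesis.

```python
# Minimal, self-contained sketch of a contract-net-style negotiation between
# production agents. All class and task names are hypothetical illustrations;
# real MAS frameworks provide richer, FIPA-compliant interaction protocols.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    required_skill: str
    workload: float  # e.g., machine hours

class MachineAgent:
    """An autonomous resource agent that bids on announced tasks."""
    def __init__(self, name, skills, queue_hours=0.0):
        self.name, self.skills, self.queue_hours = name, skills, queue_hours

    def bid(self, task):
        # Decline if the machine lacks the required capability.
        if task.required_skill not in self.skills:
            return None
        # Bid = estimated completion time (current backlog + new workload).
        return self.queue_hours + task.workload

    def award(self, task):
        self.queue_hours += task.workload

class OrderAgent:
    """A manager agent that announces tasks and awards them to the best bidder."""
    def assign(self, task, machines):
        bids = {m: m.bid(task) for m in machines}
        valid = {m: b for m, b in bids.items() if b is not None}
        if not valid:
            return None
        winner = min(valid, key=valid.get)
        winner.award(task)
        return winner

if __name__ == "__main__":
    machines = [MachineAgent("M1", {"milling"}, 3.0),
                MachineAgent("M2", {"milling", "drilling"}, 1.0)]
    order = OrderAgent()
    winner = order.assign(Task("housing-42", "milling", 2.5), machines)
    print(f"task awarded to {winner.name}")  # M2 wins due to its shorter backlog
```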
Sequence learning describes the process of understanding the spatio-temporal
relations in a sequence in order to classify it, label its elements or generate
new sequences. Due to the prevalence of structured sequences in nature
and everyday life, it has many practical applications including any language
related processing task. One particular such task that has seen recent success
using sequence learning techniques is the optical recognition of characters
(OCR).
State-of-the-art sequence learning solutions for OCR achieve high performance
through supervised training, which requires large amounts of transcribed
training data. On the other hand, few solutions have been proposed on how
to apply sequence learning in the absence of such data, which is especially
common for hard-to-transcribe historical documents. Rather than solving
the unsupervised training problem, research has focused on creating efficient
methods for collecting training data through smart annotation tools or generating
synthetic training data. These solutions come with various limitations
and do not solve all of the related problems.
In this work, first the use of erroneous transcriptions for supervised sequence
learning is introduced and it is described how this concept can be applied in
unsupervised training scenarios by collecting or generating such transcriptions.
The proposed OCR pipeline reduces the need for domain-specific expertise
to apply OCR, with the goal of making it more accessible. Furthermore, an
approach for evaluating sequence learning OCR models in the absence of
reference transcriptions is presented and its different properties compared
to the standard method are discussed. In a second approach, unsupervised
OCR is treated as an alignment problem between the latent features of the
different language modalities. The outlined solution is to extract language
properties from both the text and image domain through adversarial training
and learn to align them by adding a cycle consistency constraint. The proposed
approach has some strict limitations on the input data, but the results
encourage future research into more widespread applications.
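To make the supervised baseline concrete, the following is a minimal PyTorch sketch of sequence learning for OCR: a tiny CRNN trained with CTC loss on line images and their (possibly erroneous) transcriptions. The network layout, dimensions and the synthetic batch are illustrative placeholders, not the pipeline proposed in this work.

```python
# Minimal sketch of supervised sequence learning for OCR with CTC loss, where the
# available transcriptions may contain errors (noisy labels). Illustrative only:
# the tiny CRNN and the random batch below are placeholders.
import torch
import torch.nn as nn

NUM_CLASSES = 27          # 26 letters + CTC blank (index 0)
IMG_HEIGHT, BATCH, WIDTH = 32, 4, 128

class TinyCRNN(nn.Module):
    """Convolutional feature extractor followed by a recurrent sequence model."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.rnn = nn.LSTM(64 * (IMG_HEIGHT // 4), 128, bidirectional=True,
                           batch_first=True)
        self.fc = nn.Linear(256, NUM_CLASSES)

    def forward(self, x):                               # x: (N, 1, H, W)
        f = self.conv(x)                                # (N, C, H/4, W/4)
        n, c, h, w = f.shape
        f = f.permute(0, 3, 1, 2).reshape(n, w, c * h)  # time axis = image width
        out, _ = self.rnn(f)
        return self.fc(out).log_softmax(-1)             # (N, T, C)

model = TinyCRNN()
ctc = nn.CTCLoss(blank=0, zero_infinity=True)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake batch: line images plus (possibly erroneous) transcriptions as label ids.
images = torch.randn(BATCH, 1, IMG_HEIGHT, WIDTH)
targets = torch.randint(1, NUM_CLASSES, (BATCH, 10))    # noisy ground truth
target_lengths = torch.full((BATCH,), 10, dtype=torch.long)

log_probs = model(images).permute(1, 0, 2)              # CTC expects (T, N, C)
input_lengths = torch.full((BATCH,), log_probs.size(0), dtype=torch.long)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```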
Recommender systems recommend items (e.g., movies, products, books) to users. In this thesis, we propose two comprehensive, cluster-induced recommendation methods: Orthogonal Inductive Matrix Completion (OMIC) and the Burst-induced Multi-armed Bandit (BMAB). Given the presence of side information, the first method is categorized as context-aware. OMIC is the first matrix completion method to incorporate biases, side-information terms and a pure low-rank term into a single flexible framework with a well-principled optimization procedure. The second method, BMAB, is context-free; that is, it does not require any side data about users or items. Unlike previous context-free multi-armed bandit approaches, our method considers the temporal dynamics of human communication on the web and treats the problem in a continuous-time setting. We grounded our models' assumptions in solid theoretical foundations. For OMIC, we provided theoretical guarantees in the form of generalization bounds for the distribution-free case, where no assumptions about the sampling distribution are made. Additionally, we conducted a theoretical analysis of community side information when the sampling distribution is known and an adjusted nuclear norm regularization is applied. We showed that our method requires just a few entries to accurately recover the ratings matrix if the structure of the ground truth closely matches the cluster side information. For BMAB, we provided regret guarantees under mild conditions that demonstrate how the system's stability affects the expected reward. Furthermore, we conducted extensive experiments to validate the proposed methodologies. In a controlled environment, we implemented synthetic data generation techniques capable of replicating the domains for which OMIC and BMAB were designed. As a result, we were able to analyze our algorithms' performance across a broad spectrum of ground truth regimes. Finally, we replicated a real-world scenario by utilizing well-established recommender datasets. Comparing our approaches to several baselines, we observed that they achieve state-of-the-art accuracy. Apart from being highly accurate, these methods improve interpretability by describing and quantifying features of the datasets they characterize.
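The following NumPy sketch illustrates, under simplifying assumptions, the general idea of combining biases, a side-information (inductive) term and a residual low-rank term in a single matrix-completion objective, fitted by plain stochastic gradient descent. It is not the OMIC algorithm itself (no orthogonality constraint, no nuclear-norm machinery); all dimensions and data are synthetic.

```python
# Illustrative NumPy sketch of a matrix-completion model combining biases, a
# side-information term X A Y^T and a residual low-rank term U V^T, trained by
# SGD on observed entries. A toy in the spirit of inductive matrix completion,
# not the OMIC method from the thesis.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k_side, k_low = 50, 40, 5, 3

# Observed entries of the rating matrix (indices + values) plus side features.
obs = [(rng.integers(n_users), rng.integers(n_items), rng.normal()) for _ in range(600)]
X = rng.normal(size=(n_users, k_side))   # user side information
Y = rng.normal(size=(n_items, k_side))   # item side information

# Parameters: global bias, user/item biases, side term A, low-rank factors U, V.
mu, bu, bi = 0.0, np.zeros(n_users), np.zeros(n_items)
A = np.zeros((k_side, k_side))
U = 0.01 * rng.normal(size=(n_users, k_low))
V = 0.01 * rng.normal(size=(n_items, k_low))

lr, reg = 0.02, 0.05
for epoch in range(30):
    for u, i, r in obs:
        pred = mu + bu[u] + bi[i] + X[u] @ A @ Y[i] + U[u] @ V[i]
        e = r - pred
        mu += lr * e
        bu[u] += lr * (e - reg * bu[u])
        bi[i] += lr * (e - reg * bi[i])
        A += lr * (e * np.outer(X[u], Y[i]) - reg * A)
        # Update both factors from the old values (tuple RHS evaluated first).
        U[u], V[i] = (U[u] + lr * (e * V[i] - reg * U[u]),
                      V[i] + lr * (e * U[u] - reg * V[i]))

rmse = np.sqrt(np.mean([(r - (mu + bu[u] + bi[i] + X[u] @ A @ Y[i] + U[u] @ V[i]))**2
                        for u, i, r in obs]))
print(f"training RMSE: {rmse:.3f}")
```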
In the past, information and knowledge dissemination was confined to
brick-and-mortar classrooms, newspapers, radio, and television. As these
processes were simple and centralized, the models behind them were well
understood and so were the empirical methods for optimizing them. In today’s
world, the internet and social media have become powerful tools for information
and knowledge dissemination: Wikipedia gets more than 1 million edits per day,
Stack Overflow has more than 17 million questions, 25% of the US population visits
Yahoo! News for articles and discussions, Twitter has more than 60 million
active monthly users, and Duolingo has 25 million users learning languages
online. These developments have introduced a paradigm shift in the process of
dissemination. Not only has the nature of the task moved from being centralized
to decentralized, but the developments have also blurred the boundary between
the creator and the consumer of the content, i.e., information and knowledge.
These changes have made it necessary to develop new models, which are better
suited to understanding and analysing the dissemination, and to develop new
methods to optimize them.
At a broad level, we can view the participation of users in the process of
dissemination as falling in one of two settings: collaborative or competitive.
In the collaborative setting, the participants work together in crafting
knowledge online, e.g., by asking questions and contributing answers, or by
discussing news or opinion pieces. In contrast, as competitors, they vie for
the attention of their followers on social media. This thesis investigates both
these settings.
The first part of the thesis focuses on the understanding and analysis of
content being created online collaboratively. To this end, I propose models for
understanding the complexity of the content of collaborative online discussions
by looking exclusively at the signals of agreement and disagreement expressed
by the crowd. This leads to a formal notion of complexity of opinions and
online discussions. Next, I turn my attention to the participants of the crowd,
i.e., the creators and consumers themselves, and propose an intuitive model for
both the evolution of their expertise and the value of the content they
collaboratively contribute and learn from on online Q&A based forums. The
second part of the thesis explores the competitive setting. It provides methods
to help the creators gain more attention from their followers on social media.
In particular, I consider the problem of controlling the timing of the posts of
users with the aim of maximizing the attention that their posts receive under
the idealized setting of full-knowledge of timing of posts of others. To solve
it, I develop a general reinforcement learning based method which is shown to
have good performance on the when-to-post problem and which can be employed in
many other settings as well, e.g., determining the reviewing times for spaced
repetition that lead to optimal learning. The last part of the thesis looks at
methods for relaxing the idealized assumption of full knowledge. The underlying
question of determining the visibility of one’s posts on the followers’ feeds
becomes difficult to answer at web scale, where constantly observing the feeds
of all the followers is infeasible. I explore the links of this problem to
the well-studied problem of web-crawling to update a search engine’s index and
provide algorithms with performance guarantees for feed observation policies
which minimize the error in the estimate of visibility of one’s posts.
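As an illustration of the reinforcement-learning framing of the when-to-post problem, the sketch below runs tabular Q-learning in a toy environment where the state is the rank of the broadcaster's latest post in a single follower's feed and the actions are to wait or to post. The environment dynamics, costs and constants are invented for the example and do not reflect the method developed in the thesis.

```python
# Toy tabular Q-learning sketch for a simplified "when-to-post" problem: the state
# is the rank of the broadcaster's latest post in one follower's feed (rank 0 =
# top), actions are "wait" or "post", posting resets the rank to 0 but incurs a
# budget cost, and the reward is visibility (1 while on top). Illustration only;
# all constants and the environment dynamics are made up.
import random

MAX_RANK, ACTIONS = 5, ("wait", "post")
POST_COST = 0.4                      # crude stand-in for a posting budget

def step(rank, action, p_other_post=0.6):
    """Environment: other users push our post down with probability p_other_post."""
    if action == "post":
        rank = 0
    if random.random() < p_other_post:
        rank = min(rank + 1, MAX_RANK)
    reward = (1.0 if rank == 0 else 0.0) - (POST_COST if action == "post" else 0.0)
    return rank, reward

Q = {(r, a): 0.0 for r in range(MAX_RANK + 1) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.95, 0.1

rank = MAX_RANK
for t in range(50_000):
    a = random.choice(ACTIONS) if random.random() < eps else \
        max(ACTIONS, key=lambda x: Q[(rank, x)])
    nxt, r = step(rank, a)
    best_next = max(Q[(nxt, x)] for x in ACTIONS)
    Q[(rank, a)] += alpha * (r + gamma * best_next - Q[(rank, a)])
    rank = nxt

policy = {r: max(ACTIONS, key=lambda a: Q[(r, a)]) for r in range(MAX_RANK + 1)}
print(policy)   # typically: wait while on top, post once pushed down far enough
```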
Data is the new gold and serves as a key to answering the five W’s (Who, What, Where, When, Why) and the How of any business. Companies are mining data more than ever, and one of the most important aspects of analyzing this data is detecting anomalous patterns in order to identify critical points. To tackle the vital aspects of time-series analysis, this thesis presents a novel hybrid framework that stands on three pillars: Anomaly Detection, Uncertainty Estimation, and Interpretability and Explainability.
The first pillar comprises contributions in the area of time-series anomaly detection. Deep Anomaly Detection for Time-series (DeepAnT), a novel deep learning-based anomaly detection method, lies at the foundation of the proposed hybrid framework and addresses the inadequacy of traditional anomaly detection methods. To the best of the author’s knowledge, DeepAnT is the first method to use a Convolutional Neural Network (CNN) to robustly detect multiple types of anomalies in challenging and continuously changing time-series data. To further improve the anomaly detection performance, a fusion-based method, Fusion of Statistical and Deep Learning for Anomaly Detection (FuseAD), is proposed. This method aims to combine the strengths of existing well-founded statistical methods and powerful data-driven methods.
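A minimal sketch of prediction-based anomaly detection in the spirit of DeepAnT is given below: a small 1D-CNN forecasts the next value of a series from a history window, and the anomaly score is the absolute prediction error. The window size, network layout, synthetic signal and 3-sigma threshold are illustrative choices, not the configuration used in the thesis.

```python
# Prediction-based anomaly detection sketch: a 1D-CNN forecaster plus a detector
# that scores each point by its absolute prediction error. Illustrative only.
import torch
import torch.nn as nn

WINDOW = 30

forecaster = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=3), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(16, 16, kernel_size=3), nn.ReLU(), nn.MaxPool1d(2),
    nn.Flatten(),                    # time axis: 30 -> 28 -> 14 -> 12 -> 6
    nn.Linear(16 * 6, 32), nn.ReLU(), nn.Linear(32, 1))

# Synthetic signal: a sine wave with three injected spikes acting as anomalies.
t = torch.arange(0, 2000, dtype=torch.float32)
series = torch.sin(0.05 * t)
series[[500, 1200, 1750]] += 4.0

windows = torch.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
x, y = windows.unsqueeze(1), series[WINDOW:]          # x: (N, 1, WINDOW)

opt = torch.optim.Adam(forecaster.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(200):                              # full-batch training, cheap here
    opt.zero_grad()
    loss = loss_fn(forecaster(x).squeeze(-1), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    scores = (forecaster(x).squeeze(-1) - y).abs()    # anomaly score = prediction error
threshold = scores.mean() + 3 * scores.std()          # simple 3-sigma rule
flagged = torch.nonzero(scores > threshold).squeeze(-1) + WINDOW
print("flagged time steps:", flagged.tolist())
```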
In the second pillar of this framework, a hybrid approach is proposed that combines the high accuracy of deterministic models with the posterior distribution approximation of Bayesian neural networks.
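One common way to approximate a posterior predictive distribution while keeping a deterministic backbone is Monte Carlo dropout; the sketch below uses it purely to illustrate attaching uncertainty estimates to point predictions and is not the specific hybrid method proposed here.

```python
# Illustrative Monte Carlo dropout sketch for attaching uncertainty estimates to a
# deterministic regressor. MC dropout is one common posterior approximation; it is
# shown only to illustrate the idea of the second pillar, not the thesis's method.
import torch
import torch.nn as nn

class DropoutRegressor(nn.Module):
    def __init__(self, in_dim=30, hidden=64, p=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def predict_with_uncertainty(model, x, samples=100):
    """Keep dropout active at inference time and sample the network repeatedly."""
    model.train()                       # enables dropout during prediction
    preds = torch.stack([model(x) for _ in range(samples)])
    return preds.mean(0), preds.std(0)  # predictive mean and uncertainty estimate

model = DropoutRegressor()
x = torch.randn(8, 30)                  # e.g., 8 windows of a time series
mean, std = predict_with_uncertainty(model, x)
print(mean.squeeze(-1), std.squeeze(-1))
```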
In the third pillar of the proposed framework, mechanisms that enable both the HOW and the WHY parts, i.e., interpretability and explainability, are presented.
In order to improve performance or conserve energy, modern hardware implementations have adopted weak memory models; that is, models of concurrency that allow more outcomes than the classic sequentially consistent (SC) model of execution. Modern programming languages similarly provide their own language-level memory models, which strive to allow all the behaviors allowed by the various hardware-level memory models, as well as those that can occur as a result of desired compiler optimizations.
As these weak memory models are often rather intricate, it can be difficult for programmers to keep track of all the possible behaviors of their programs. It is therefore very useful to have an abstraction layer over the model that can be used to ensure program correctness without reasoning about the underlying memory model. Program logics are a way of constructing such an abstraction—one can use their syntactic rules to reason about programs, without needing to understand the messy details of the memory model for which the logic has been proven sound.
Unfortunately, most of the work on formal verification in general, and program logics in particular, has so far assumed the SC model of execution. This means that new logics for weak memory have to be developed.
This thesis presents two such logics—fenced separation logic (FSL) and weak separation logic (Weasel)—which are sound for reasoning under two different weak memory models.
FSL considers the C/C++ concurrency memory model, supporting several of its advanced features. The soundness of FSL depends crucially on a specific strengthening of the model which eliminates a certain class of undesired behaviors (so-called out-of-thin-air behaviors) that were inadvertently allowed by the original C/C++ model.
Weasel works under weaker assumptions than FSL, considering a model which takes a more fine-grained approach to the out-of-thin-air problem. Weasel's focus is on exploring the programming constructs directly related to out-of-thin-air behaviors, and is therefore significantly less feature-rich than FSL.
Using FSL and Weasel, the thesis explores the key challenges in reasoning under weak memory models, and what effect different solutions to the out-of-thin-air problem have on such reasoning. It explains which reasoning principles are preserved when moving from a stronger to a weaker model, and develops novel proof techniques to establish soundness of logics under weaker models.
Using Enhanced Logic Programming Semantics for Extending and Optimizing Synchronous System Design
(2021)
The semantics of programming languages assign a meaning to the written program syntax.
Currently, the meaning of synchronous programming languages, which are especially designed to develop programs for reactive and embedded systems, is based on a formal semantics definition similar to Fitting's fixpoint semantics for logic programs.
Nevertheless, it is possible to write synchronous program code that does not evaluate to concrete values under the current semantics, which means those programs are currently considered not constructive.
In the last decades, the theoretical knowledge and representation of semantics for logic programming have increased, but not all theoretical results and achievements have found their way into practice and application in system design.
The first part of this thesis therefore focuses on extending the semantics of synchronous programming languages towards an evaluation similar to the well-founded semantics as defined in logic programming by van Gelder, Ross and Schlipf, and towards the stable model semantics as defined by Gelfond and Lifschitz. In particular, this allows an evaluation of some of the currently non-constructive programs for which the semantics based on Fitting's fixpoint fails.
It is shown that the extension to the well-founded semantics is a conservative extension of Fitting's semantics, so that the meaning of programs which were already constructive does not change. Finally, it is considered how one can still generate circuits that implement the given synchronous programs under the well-founded semantics. Again, this is a conservative approach that does not modify the circuits generated by the synthesis procedures used so far.
Answer set programming and the underlying stable model semantics describe problems by constraints, and the related answer set solvers return all solutions to such a problem as so-called answer sets. This allows search and planning problems to be formulated and solved efficiently, without the need to develop special and possibly error-prone algorithms for every single application.
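The following self-contained sketch illustrates the stable model semantics on a tiny, made-up propositional program: a candidate atom set is stable exactly when it equals the least model of the Gelfond-Lifschitz reduct of the program with respect to that set.

```python
# Brute-force illustration of the stable model semantics: a propositional normal
# logic program is a set of rules (head, positive_body, negative_body); a set M is
# stable iff it equals the least model of the Gelfond-Lifschitz reduct w.r.t. M.
# The example program is made up.
from itertools import chain, combinations

# Rules as (head, positive body atoms, negated body atoms).
program = [
    ("a", (), ("b",)),     # a :- not b.
    ("b", (), ("a",)),     # b :- not a.
    ("c", ("a",), ()),     # c :- a.
]
atoms = sorted({r[0] for r in program} | {a for r in program for a in r[1] + r[2]})

def least_model(positive_rules):
    """Least fixpoint of the immediate-consequence operator of a definite program."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in positive_rules:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(m):
    # Reduct: drop rules whose negative body intersects m, then drop negations.
    reduct = [(h, pos) for h, pos, neg in program if not (set(neg) & m)]
    return least_model(reduct) == m

candidates = chain.from_iterable(combinations(atoms, k) for k in range(len(atoms) + 1))
stable_models = [set(c) for c in candidates if is_stable(set(c))]
print(stable_models)    # the two answer sets: {'b'} and {'a', 'c'}
```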
The semantics of the synchronous programming language Quartz is also extended to the stable model semantics. For this extension, two alternatives are discussed: first, a direct extension analogous to the extension to the well-founded semantics; second, a transformation of synchronous programs into existing answer set programming languages, which makes it possible to directly use answer set solvers for the synthesis and optimization of synchronous systems.
The second part of the thesis contains further examples of the use of answer set programming in system design to emphasize its benefits for system design in general. The first example is the generation of optimal/minimal interconnection networks that allow non-blocking connections between n sources and n targets in parallel. As a second example, the stable model semantics is used to build a complete compiler chain that transforms a given program into optimal assembler code (called move code) for the new SCAD processor architecture developed at the University of Kaiserslautern. Finally, the lessons learned from the two examples are summarized by means of some enhancement ideas for the synchronous programming language paradigm.
Deep learning has achieved significant improvements in a variety of computer vision tasks thanks to open image datasets that contain large amounts of data. However, acquiring a large dataset is a challenge in real-world applications, especially in areas that are new to deep learning. Furthermore, the class distribution in such datasets is often imbalanced. This data imbalance problem is frequently a bottleneck for neural network classification performance. Recently, the potential of generative adversarial networks (GANs) as a data augmentation method for minority-class data has been studied.
This dissertation investigates using GANs and transfer learning to improve classification performance under imbalanced data conditions. We first propose a classification enhancement generative adversarial network (CEGAN) to enhance the quality of generated synthetic minority data and, more importantly, to improve prediction accuracy under data imbalance. Our experiments show that approximating the real data distribution using CEGAN improves the classification performance significantly under imbalanced conditions compared with various standard data augmentation methods.
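The sketch below shows the general idea of GAN-based minority oversampling on a synthetic 2-D dataset: a small GAN is trained only on the minority class and then used to generate additional minority samples before a classifier is fitted. It is a bare-bones illustration of the augmentation idea, not the CEGAN method proposed here.

```python
# Bare-bones PyTorch sketch of GAN-based minority oversampling: a small GAN is
# trained only on (synthetic, 2-D) minority-class samples and then used to
# generate extra minority data for a balanced training set. Illustration only,
# not the CEGAN method proposed in the dissertation.
import torch
import torch.nn as nn

torch.manual_seed(0)
LATENT, FEAT = 8, 2

# Imbalanced toy data: 1000 majority points vs. 50 minority points.
majority = torch.randn(1000, FEAT)
minority = 0.5 * torch.randn(50, FEAT) + torch.tensor([3.0, 3.0])

G = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, FEAT))
D = nn.Sequential(nn.Linear(FEAT, 32), nn.LeakyReLU(0.2), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # --- discriminator update: real minority samples vs. generated ones ---
    z = torch.randn(50, LATENT)
    fake = G(z).detach()
    d_loss = bce(D(minority), torch.ones(50, 1)) + bce(D(fake), torch.zeros(50, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # --- generator update: fool the discriminator ---
    z = torch.randn(50, LATENT)
    g_loss = bce(D(G(z)), torch.ones(50, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Oversample the minority class with synthetic points and build a balanced set.
with torch.no_grad():
    synthetic = G(torch.randn(950, LATENT))
X = torch.cat([majority, minority, synthetic])
y = torch.cat([torch.zeros(1000), torch.ones(50 + 950)])
print("balanced training set:", X.shape, "positives:", int(y.sum()))
```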
To further improve classification performance, we propose a novel supervised discriminative feature generation (DFG) method for minority-class datasets. DFG is based on a modified generative adversarial network structure consisting of four independent networks: a generator, a discriminator, a feature extractor, and a classifier. To augment the selected discriminative features of the minority-class data by adopting an attention mechanism, the generator for the class-imbalanced target task is trained while the feature extractor and classifier are regularized with pre-trained ones from a large source dataset. The experimental results show that the generator of DFG enhances the augmentation of label-preserving and diverse features, and that classification results are significantly improved on the target task.
In this thesis, these proposals are applied to bearing fault detection and diagnosis of induction motors and to shipping label recognition and validation for logistics. The experimental results for bearing fault detection and diagnosis show that the proposed GAN-based framework performs well on the imbalanced fault diagnosis of rotating machinery. The experimental results for shipping label recognition and validation also show that the proposed method achieves better performance than many classical and state-of-the-art algorithms.
Medical cyber-physical systems (MCPS) emerged as an evolution of the relations between connected health systems, healthcare providers, and modern medical devices. Such systems combine independent medical devices at runtime in order to render new patient monitoring/control functionalities, such as physiological closed loops for controlling drug infusion or the optimization of alarms. Despite advances regarding alarm precision, healthcare providers still struggle with alarm flooding caused by limited risk assessment models. These limitations also impose severe barriers on the adoption of automated supervision through autonomous actions, such as safety interlocks for avoiding overdosage. The literature has focused on the verification of safety parameters to assure the safety of treatment at runtime and thus optimize alarms and automated actions. Such solutions have relied on the definition of actuation ranges based on thresholds for a few monitored parameters. Given the very dynamic nature of the relevant context conditions (e.g., the patient’s condition, treatment details, system configurations, etc.), fixed thresholds are a weak means of assessing the current risk. This thesis presents an approach for enabling dynamic risk assessment for cooperative MCPS based on an adaptive Bayesian Network (BN) model. The main aim of the approach is to support continuous runtime risk assessment of the current situation based on relevant context and system information. The presented approach comprises (i) a dynamic risk analysis constituent, which covers the elicitation of relevant risk parameters, risk metric building, and risk metric management; and (ii) a runtime risk classification constituent, which aims to analyze the risk of the current situation, establish risk classes, and identify and deploy mitigation measures. The proposed approach was evaluated and its feasibility demonstrated by means of simulated experiments guided by an international team of medical experts, with a focus on the requirements of efficacy, efficiency, and availability of patient treatment.
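To illustrate the idea of BN-based runtime risk assessment, the toy model below evaluates a tiny discrete Bayesian network by brute-force enumeration. The variables (patient condition, infusion rate, an observed SpO2 drop) and all probabilities are invented for the example; the thesis elicits and adapts a much richer model from relevant risk parameters.

```python
# Toy discrete Bayesian network for runtime risk assessment, evaluated by
# brute-force enumeration. All variables and probabilities are invented for
# illustration; they do not come from the thesis.
from itertools import product

# P(condition), P(rate), P(risk = high | condition, rate), P(SpO2 drop | risk)
p_condition = {"stable": 0.8, "critical": 0.2}
p_rate      = {"normal": 0.9, "high": 0.1}
p_risk = {
    ("stable", "normal"): 0.02, ("stable", "high"): 0.30,
    ("critical", "normal"): 0.25, ("critical", "high"): 0.85,
}
p_spo2_drop = {"high": 0.7, "low": 0.1}

def joint(condition, rate, risk, spo2_drop):
    pr_high = p_risk[(condition, rate)]
    p_r = pr_high if risk == "high" else 1 - pr_high
    p_o = p_spo2_drop[risk] if spo2_drop else 1 - p_spo2_drop[risk]
    return p_condition[condition] * p_rate[rate] * p_r * p_o

def posterior_risk_high(evidence):
    """P(risk = high | evidence), summing the joint over unobserved variables."""
    num = den = 0.0
    for c, r, k, o in product(p_condition, p_rate, ("high", "low"), (True, False)):
        world = {"condition": c, "rate": r, "risk": k, "spo2_drop": o}
        if any(world[var] != val for var, val in evidence.items()):
            continue
        p = joint(c, r, k, o)
        den += p
        if k == "high":
            num += p
    return num / den

# Runtime evidence: the infusion rate is high and an SpO2 drop has been observed.
print(round(posterior_risk_high({"rate": "high", "spo2_drop": True}), 3))
```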
Dataflow process networks (DPNs) consist of statically defined process nodes with First-In-First-Out (FIFO) buffered point-to-point connections. DPNs are intrinsically data-driven, i.e., node actions are not synchronized with each other and may fire whenever sufficient input operands have arrived at a node. In this original form, DPNs are therefore a suitable model of computation (MoC) for asynchronous and distributed systems. For DPNs whose nodes have only static consumption/production rates, however, one can easily derive an optimal schedule that can then be used to implement the DPN in a time-driven (clock-driven) way, where each node fires according to the schedule.
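A minimal sketch of the data-driven firing rule is shown below: nodes with FIFO-buffered input channels fire whenever enough tokens are available. The two-node producer/adder network and its rates are made up purely for illustration.

```python
# Minimal sketch of a data-driven dataflow process network: nodes with
# FIFO-buffered input channels fire whenever enough tokens are available. The
# producer/adder example (consumption rate 2 on the adder) is made up to
# illustrate the firing rule only.
from collections import deque

class Channel:
    """Point-to-point FIFO buffer between two nodes."""
    def __init__(self):
        self.fifo = deque()
    def put(self, token): self.fifo.append(token)
    def get(self): return self.fifo.popleft()
    def __len__(self): return len(self.fifo)

class Node:
    def __init__(self, name, inputs, consumption, action):
        self.name, self.inputs = name, inputs
        self.consumption = consumption      # tokens consumed per input channel
        self.action = action                # callable: list of operands -> None
    def can_fire(self):
        return all(len(ch) >= self.consumption for ch in self.inputs)
    def fire(self):
        operands = [ch.get() for ch in self.inputs for _ in range(self.consumption)]
        self.action(operands)

# Producer -> (FIFO channel) -> adder that sums pairs of tokens.
chan = Channel()
results = []
producer = Node("producer", [], 0, lambda _: chan.put(producer.counter))
producer.counter = 0
adder = Node("adder", [chan], 2, lambda ops: results.append(sum(ops)))

# Data-driven execution: repeatedly fire any node that has sufficient operands.
for step in range(10):
    producer.counter += 1
    producer.fire()                 # the producer has no inputs and is always ready
    if adder.can_fire():
        adder.fire()

print(results)                      # [3, 7, 11, 15, 19]: sums of consecutive pairs
```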
Both data-driven and time-driven MoCs have their own advantages and disadvantages. For this reason, desynchronization techniques are used to convert clock-driven models into data-driven ones in order to more efficiently support distributed implementations. These techniques preserve the functional specification of the synchronous models and moreover preserve properties like deadlock-freedom and bounded memory usage that are otherwise difficult to ensure in DPNs. These desynchronized models are the starting point of this thesis.
While the general MoC of DPNs does not impose further restrictions, many different subclasses of DPNs representing different dataflow MoCs have been considered over time, like Kahn process networks, cyclo-static and synchronous DPNs. These classes mainly differ in the kinds of behaviors of the processes, which affect, on the one hand, the expressiveness of the DPN class and, on the other hand, the methods for their analysis (predictability) and synthesis (efficiency). A DPN may be heterogeneous in the sense that different processes in the network may exhibit different kinds of behaviors. A heterogeneous DPN can therefore be effectively used to model and implement different components of a system with different kinds of processes and thus different dataflow MoCs.
Design tools for modeling like Ptolemy and FERAL are used to model and design parallel embedded systems using well-defined and precise MoCs, including different dataflow MoCs. However, first, there is a lack of automatic synthesis methods to analyze and evaluate the artifacts exhibited by particular MoCs. Second, the existing design tools for synthesis are usually restricted to the weakest classes of DPNs, i.e., cyclo-static and synchronous DPNs, where each tool only supports a specific dataflow MoC.
This thesis presents a model-based design based on different dataflow MoCs including their heterogeneous combinations. This model-based design covers in particular the automatic software synthesis of systems from DPN models. The main objective is to validate, evaluate and compare the artifacts exhibited by different dataflow MoCs at the implementation level of embedded systems under the supervision of a common design tool. We are mainly concerned about how these different dataflow MoCs affect the synthesis, in particular, how they affect the code generation and the final implementation on the target hardware. Moreover, this thesis also aims at offering an efficient synthesis method that targets and exploits heterogeneity in DPNs by generating implementations based on the kinds of behaviors of the processes.
The proposed synthesis design flow therefore generally starts from the desynchronized dataflow models and automatically synthesizes them for cross-vendor target hardware. In particular, it provides a synthesis tool chain, including different specialized code generators for specific dataflow MoCs, and a runtime system that finally maps models using a combination of different dataflow MoCs on the target hardware. Moreover, the tool chain offers a platform-independent code synthesis method based on the open computing language (OpenCL) that enables a more generalized synthesis targeting cross-vendor commercial off-the-shelf (COTS) heterogeneous platforms.