Kaiserslautern - Fachbereich Informatik
Intelligent formal methods (2022)
Information technology has become an indispensable part of our daily lives, with a significant proportion of our everyday activities relying on the safe and reliable operation of computer systems. One promising approach to ensuring these critical properties is the use of so-called formal methods, a broad range of rigorous, mathematical techniques for specifying, developing, and verifying hardware, software, cyber-physical systems, and artificial intelligence. Unlike traditional quality assurance approaches, such as testing, formal methods offer the unique ability to provide formal proof of the absence of errors, a trait particularly desirable in the context of today's ubiquitous safety-critical systems. However, this advantage comes at a cost: formal methods require extensive training, often assume idealized or limited settings, and typically demand substantial computational resources.
Inspired by the vision of artificial intelligence, this work seeks to automate formal methods and dramatically expand their applicability. To achieve this goal, we develop a novel type of formal method that combines inductive techniques from machine learning with deductive techniques from logic. We name this new approach "intelligent formal methods" and apply it to three fundamental areas: software verification, hardware and software synthesis, and the generation of formal specifications.
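One classic instantiation of such an inductive-deductive combination is a counterexample-guided loop, sketched below in a toy setting; the hidden specification, the finite domain, and all function names are invented for illustration and are not taken from the thesis.

```python
# Counterexample-guided loop: an inductive learner proposes a candidate
# from examples; a deductive checker searches for a counterexample.
# Toy setting (illustrative): synthesize integers (a, b) such that
# f(x) = a*x + b matches a hidden specification on a finite domain.

DOMAIN = range(-50, 51)

def spec(x):
    """The hidden specification the synthesized program must match."""
    return 2 * x + 3

def learn(examples):
    """Inductive step: fit a line through the collected examples."""
    if len(examples) < 2:
        return (0, examples[0][1] if examples else 0)
    (x1, y1), (x2, y2) = examples[0], examples[-1]
    a = (y2 - y1) // (x2 - x1)
    return (a, y1 - a * x1)

def verify(a, b):
    """Deductive step: exhaustively check the candidate on the domain."""
    for x in DOMAIN:
        if a * x + b != spec(x):
            return x          # counterexample found
    return None               # proof (by exhaustion) of correctness

def synthesize():
    examples = [(0, spec(0))]
    while True:
        a, b = learn(examples)
        cex = verify(a, b)
        if cex is None:
            return (a, b)
        examples.append((cex, spec(cex)))

print(synthesize())  # (2, 3)
```

The learner generalizes from finitely many examples; the verifier either certifies the candidate or feeds a new example back into learning, which is the basic shape of combining induction and deduction.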
Objective. Gradient-based optimization using algorithmic derivatives can be a useful technique to improve engineering designs with respect to a computer-implemented objective function. Likewise, uncertainty quantification through computer simulations can be carried out by means of derivatives of the computer simulation. However, the effectiveness of these techniques depends on how 'well-linearizable' the software is. In this study, we assess how promising derivative information of a typical proton computed tomography (pCT) scan computer simulation is for the aforementioned applications. Approach. This study is mainly based on numerical experiments, in which we repeatedly evaluate three representative computational steps with perturbed input values. We support our observations with a review of the algorithmic steps and arithmetic operations performed by the software, using debugging techniques. Main results. The model-based iterative reconstruction (MBIR) subprocedure (at the end of the software pipeline) and the Monte Carlo (MC) simulation (at the beginning) were piecewise differentiable. However, the observed high density and magnitude of the jumps were likely to preclude most meaningful uses of the derivatives. Jumps in the MBIR function arose from the discrete computation of the set of voxels intersected by a proton path, and could be reduced in magnitude by a 'fuzzy voxels' approach. The investigated jumps in the MC function arose from local changes in the control flow that affected the number of random numbers consumed. The tracking algorithm solves an inherently non-differentiable problem. Significance. Besides the technical challenges of merely applying algorithmic differentiation (AD) to existing software projects, the MC and MBIR codes must be adapted to compute smoother functions. For the MBIR code, we presented one possible approach for this; for the MC code, this will be the subject of further research. For the tracking subprocedure, further research on surrogate models is necessary.
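The kind of perturbation experiment described above can be mimicked on a toy piecewise function; the function and the jump-detection threshold below are invented for illustration (the actual study perturbed inputs of the pCT simulation steps).

```python
import numpy as np

# Sketch of a perturbation experiment: sample a function on a fine grid of
# perturbed inputs and flag jumps where the finite-difference slope explodes.
# f stands in for one computational step of a pipeline; its branch at x = 0.5
# mimics a discrete decision such as selecting the set of intersected voxels.

def f(x):
    return 0.2 * x + (1.0 if x > 0.5 else 0.0)  # piecewise linear, one jump

xs = np.linspace(0.0, 1.0, 10001)
ys = np.array([f(x) for x in xs])
slopes = np.diff(ys) / np.diff(xs)

# A slope far above the smooth part's derivative (0.2) indicates a jump.
jump_idx = np.where(np.abs(slopes) > 100 * 0.2)[0]
print(len(jump_idx), xs[jump_idx[0]])
```

On a piecewise-differentiable code, derivatives computed between such jumps are valid, but a high density of jumps makes any single linearization representative of only a tiny input neighbourhood.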
Distributed message-passing systems have become ubiquitous and essential for our daily lives. Hence, designing and implementing them correctly is of utmost importance. At the same time, however, this is very challenging. In fact, it is well known that verifying such systems is algorithmically undecidable in general, due to the interplay of asynchronous communication (messages are buffered) and concurrency. When designing communication in a system, it is natural to start with a global protocol specification of the desired communication behaviour. In such a top-down approach, the implementability problem asks, given such a global protocol, whether the specified behaviour can be implemented in a distributed setting without additional synchronisation. This problem has been studied from two perspectives in the literature. On the one hand, there are Multiparty Session Types (MSTs) from process algebra, with global types to specify protocols. Key to the MST approach is a so-called projection operator, which takes a global type and tries to project it onto every participant: if successful, the local specifications are safe to use. This approach is efficient but brittle. On the other hand, High-level Message Sequence Charts (HMSCs) study the implementability problem from an automata-theoretic perspective. They impose very few restrictions on protocol specifications, making the implementability problem for HMSCs undecidable in general. The work in this thesis is the first to formally build a bridge between the world of MSTs and that of HMSCs. To start, we present a generalised projection operator for sender-driven choice. This allows a sender to send to different receivers when branching, which is crucial to handle common communication patterns from distributed computing. Despite this first step, we also show that the classical MST projection approach is inherently incomplete. We present the first formal encoding from global types to HMSCs.
With this, we prove decidability of the implementability problem for global types with sender-driven choice. Furthermore, we develop the first direct and complete projection operator for global types with sender-driven choice, using automata-theoretic techniques, and show its effectiveness with a prototype implementation. We are the first to provide an upper bound for the implementability problem for global types with sender-driven (or directed) choice, showing it to be in PSPACE. We also provide a session type system that uses the results from our projection operator. Lastly, we introduce protocol state machines (PSMs) – an automata-based protocol specification formalism – that subsume both global types from MSTs and HMSCs with regard to expressivity. We use transformations on PSMs to show that many of the syntactic restrictions of global types are not restrictive in terms of protocol expressivity. We prove that the implementability problem for PSMs with mixed choice, which requires no dedicated sender for a branch but only that all labels be distinct, is undecidable in general. Together with our results on expressivity, this answers an open question: the implementability problem for mixed-choice global types is undecidable in general.
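As a rough illustration of what projection does, the sketch below derives each participant's local specification by erasing the interactions it does not take part in. The triple-based protocol encoding is invented for this sketch and is far simpler than actual global types (no choice, no recursion).

```python
# Minimal sketch of projecting a global protocol onto local specifications.
# A protocol is a list of interactions (sender, receiver, label); projection
# keeps only the actions involving the given participant, as a send ("!")
# or a receive ("?").

def project(protocol, p):
    """Erase every interaction that participant p does not take part in."""
    local = []
    for (sender, receiver, label) in protocol:
        if sender == p:
            local.append(("!", receiver, label))   # send action
        elif receiver == p:
            local.append(("?", sender, label))     # receive action
    return local

# p -> q: req ; q -> r: fwd ; r -> p: resp
g = [("p", "q", "req"), ("q", "r", "fwd"), ("r", "p", "resp")]
for participant in ("p", "q", "r"):
    print(participant, project(g, participant))
```

The hard part of real projection, absent here, is handling branching: when a global type chooses between branches, every participant's local behaviour must remain consistent without extra synchronisation, which is exactly where incompleteness and undecidability arise.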
Machine learning algorithms are widely applied to create powerful prediction models. With increasingly complex models, humans' ability to understand the decision function (that maps from a high-dimensional input space) is quickly exceeded. To explain a model's decisions, black-box methods have been proposed that provide either non-linear maps of the global topology of the decision boundary, or samples that allow approximating it locally. The former loses information about distances in input space, while the latter only provides statements about given samples, but lacks a focus on the underlying model for precise 'What-If' reasoning. In this paper, we integrate both approaches and propose an interactive exploration method using local linear maps of the decision space. We create the maps on high-dimensional hyperplanes (2D slices of the high-dimensional parameter space), based on statistical and personal feature mutability and guided by feature importance. We complement the proposed workflow with established model inspection techniques to provide orientation and guidance. We demonstrate our approach on real-world datasets and illustrate that it allows identification of instance-based decision boundary structures and can answer multi-dimensional 'What-If' questions, thereby identifying counterfactual scenarios visually.
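The 2D-slice idea can be sketched as follows: fix the instance to explain, vary two chosen features over a grid, and query the model on the resulting hyperplane. The stand-in decision function below replaces a trained classifier; all names, weights, and the example instance are illustrative.

```python
import numpy as np

# Sketch of a 2D slice through a high-dimensional decision function for
# 'What-If' exploration: fix an instance, vary two chosen features over a
# grid, and query the (black-box) model on the slice.

def model_score(X):
    """Stand-in for any trained classifier's probability output."""
    w = np.array([1.0, 1.0, 0.2, -0.3, 0.0])
    return 1.0 / (1.0 + np.exp(-(X @ w)))          # logistic score

instance = np.array([0.5, -0.5, 1.0, 0.0, 2.0])    # instance to explain
f0, f1 = 0, 1                                      # features to vary

g0, g1 = np.meshgrid(np.linspace(-3, 3, 50), np.linspace(-3, 3, 50))
slice_pts = np.tile(instance, (g0.size, 1))        # all other features fixed
slice_pts[:, f0] = g0.ravel()
slice_pts[:, f1] = g1.ravel()
probs = model_score(slice_pts).reshape(g0.shape)

# The 0.5 level set of `probs` is the decision boundary restricted to this
# slice; counterfactuals for the instance lie on the other side of it.
print(probs.shape, float(probs.min()), float(probs.max()))
```

Plotting `probs` as a heatmap with the instance marked at its own feature values gives a local map of the decision boundary in which distances along the two axes are true input-space distances.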
Edit distances between merge trees of scalar fields have many applications in scientific visualization, such as ensemble analysis, feature tracking or symmetry detection. In this paper, we propose branch mappings, a novel approach to the construction of edit mappings for merge trees. Classic edit mappings match nodes or edges of two trees onto each other, and therefore have to either rely on branch decompositions of both trees or have to use auxiliary node properties to determine a matching. In contrast, branch mappings employ branch properties instead of node similarity information, and are independent of predetermined branch decompositions. Especially for topological features, which are typically based on branch properties, this allows a more intuitive distance measure which is also less susceptible to instabilities from small-scale perturbations. For trees with 𝒪(n) nodes, we describe an 𝒪(n⁴) algorithm for computing optimal branch mappings, which is faster than the only other branch decomposition-independent method in the literature by more than a linear factor. Furthermore, we compare the results of our method on synthetic and real-world examples to demonstrate its practicality and utility.
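For intuition on branch properties, the sketch below computes the branches of the join tree of a 1D scalar field with a union-find sweep, pairing each local minimum with the saddle at which its component merges into an older one (the elder rule); the resulting (birth, death) pairs give each branch's persistence. This is a standard textbook construction, not the paper's algorithm.

```python
# Branches of the join (merge) tree of a 1D scalar field, via a union-find
# sweep over sublevel sets. Each branch pairs a local minimum with the
# saddle at which its component merges; the branch property used for
# matching could then be, e.g., its persistence (death - birth).

def merge_tree_branches(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    parent, birth = {}, {}            # union-find forest; component birth values

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    branches = []
    for i in order:                   # sweep vertices by increasing value
        parent[i], birth[i] = i, values[i]
        for j in (i - 1, i + 1):
            if j in parent:           # neighbour already in the sublevel set
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                # elder rule: the younger component dies at this saddle
                young, old = (ri, rj) if birth[ri] > birth[rj] else (rj, ri)
                if birth[young] < values[i]:     # skip zero-persistence pairs
                    branches.append((birth[young], values[i]))
                parent[young] = old
    root = find(order[0])
    branches.append((birth[root], max(values)))  # root branch never dies
    return branches

print(merge_tree_branches([0.0, 2.0, 1.0, 3.0, 0.5, 2.5]))
```

Note that which minimum survives a merge depends on the elder rule, i.e. on a branch decomposition; the paper's point is precisely that branch mappings avoid fixing such a decomposition in advance.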
The development of algorithmic differentiation (AD) tools focuses mostly on handling floating-point types in the target language. Taping optimizations in these tools focus mainly on specific operations like matrix-vector products. Aggregated types like std::complex are usually handled by specifying the AD type as a template argument. This approach provides exact results, but prevents the use of expression templates. If AD tools are extended and specialized such that aggregated types can be added to the expression framework, this results in reduced memory utilization and improved run times for applications that use aggregated types such as complex numbers or matrix-vector operations. Such an integration requires a reformulation of the stored data per expression and a rework of the tape evaluation process. We demonstrate the overheads on a synthetic benchmark and show the improvement when aggregated types are handled properly by the expression framework of the AD tool.
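The memory argument can be made concrete with a toy tape; the tape format below is invented for illustration (real AD tools store operation identifiers and partial derivatives per entry). A complex multiplication recorded component-wise occupies several real-arithmetic entries, while an aggregated handler records a single entry.

```python
# Sketch of why aggregated types shrink an AD tape: recording one complex
# multiplication as four real multiplies plus two additions, versus one
# aggregated entry that stores all partials at once.

def record_componentwise(tape, a, b):
    """(ar + i*ai) * (br + i*bi), taped as separate real operations."""
    ar, ai = a
    br, bi = b
    tape += ["mul", "mul", "sub",        # real part: ar*br - ai*bi
             "mul", "mul", "add"]        # imag part: ar*bi + ai*br
    return (ar * br - ai * bi, ar * bi + ai * br)

def record_aggregated(tape, a, b):
    """Same product taped as a single aggregated complex-multiply entry."""
    ar, ai = a
    br, bi = b
    tape += ["cmul"]                     # one entry covers the whole operation
    return (ar * br - ai * bi, ar * bi + ai * br)

t1, t2 = [], []
z = record_componentwise(t1, (1.0, 2.0), (3.0, 4.0))
w = record_aggregated(t2, (1.0, 2.0), (3.0, 4.0))
print(z, len(t1), len(t2))   # same value, 6 vs 1 tape entries
```

Both variants compute identical values; the aggregated entry additionally lets the tape evaluator run one specialized adjoint routine instead of replaying six generic real operations.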
To increase the situational awareness of the crane operator, the aim of this thesis is to develop vision-based deep learning object detection from the crane's load view, using adaptive perception in the construction area. Conventional worker detection methods rely on simple shape or color features of the workers' appearance. However, these methods can fail to recognize workers who do not wear protective gear. Manually handcrafting an image representation of the object as seen from the top view is difficult; we therefore employed deep learning methods to learn those features automatically.
To yield optimal results, deep learning methods require large amounts of data. Due to the data deficit, especially in the construction domain, we developed a photorealistic virtual world to generate data in addition to the samples we collected on real construction sites. The simulation platform not only provides diverse data types but also supports concurrent research development, which speeds up the pipeline at low cost.
Our findings indicate that combining synthetic and real training samples improves the state-of-the-art detector. In line with previous studies on bridging the gap between synthetic and real data, preprocessing the synthetic images yields results approximately 10% better than using the raw data.
Finding the right deep learning model for load-view detection is challenging. Investigating our training data makes evident that the majority of bounding boxes are very small and appear against complex backgrounds. In addition, we prioritized speed over accuracy, based on construction safety criteria. RetinaNet was therefore chosen out of the three primary object detection models considered.
Nevertheless, a data-driven detection algorithm can fail to handle scale invariance, especially when object sizes in the input vary over an extremely wide range. An adaptive zoom feature can enhance the quality of worker detection. To avoid further data gathering and extensive retraining, the proposed automatic zoom method for the load-view crane camera supports the deep learning algorithm, specifically under high scale variation. A finite state machine implements the control strategy that adapts the zoom level to cope not only with inconsistent detections but also with abrupt camera movement during lifting operations. Consequently, the detector can detect small objects through smooth, continuous zoom control without additional training.
The adaptive zoom control not only enhances the performance of top-view object detection but also reduces the crane operator's interaction with the camera system, reducing the risk of fatalities during load lifting operations.
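The control strategy can be sketched as a small finite state machine; the states, thresholds, and step sizes below are invented for illustration and are not the thesis's controller.

```python
# Sketch of a finite-state zoom controller: zoom in while detections are
# small, widen when detections are lost, and reset to the wide view on
# abrupt camera movement during a lift.

ZOOM_MIN, ZOOM_MAX, STEP = 1.0, 4.0, 0.5

def next_state(state, detected, box_small, camera_moving):
    """Transition on per-frame detector and motion signals."""
    if camera_moving:
        return "RESET"           # abrupt movement: return to wide view
    if not detected:
        return "SEARCH"          # lost the worker: widen until reacquired
    return "ZOOM_IN" if box_small else "HOLD"

def apply(state, zoom):
    """Map the state to a smooth, bounded zoom adjustment."""
    if state == "ZOOM_IN":
        return min(zoom + STEP, ZOOM_MAX)
    if state in ("SEARCH", "RESET"):
        return max(zoom - STEP, ZOOM_MIN)
    return zoom                  # HOLD

zoom, state = 1.0, "HOLD"
# per-frame signals: (detected, box_small, camera_moving)
frames = [(True, True, False), (True, True, False), (True, False, False),
          (False, False, False), (True, True, True)]
for detected, small, moving in frames:
    state = next_state(state, detected, small, moving)
    zoom = apply(state, zoom)
    print(state, zoom)
```

Bounding the per-frame zoom change (STEP) is what keeps the control smooth, so the detector never sees a sudden scale jump it was not trained for.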
Turbulence models, which are a means to fix the closure problem arising from Reynolds averaging of the Navier-Stokes equations, are economical stop-gaps but suffer from accuracy issues. Modifying turbulence models by incorporating corrections in their functional form is one approach to improving their accuracy. We estimate correction functionals for the Spalart-Allmaras turbulence model based on an inverse problem with PDE constraints, emphasizing the issue of regularization.
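The role of regularization can be seen in a one-dimensional toy inversion; the baseline model, the "true" correction, and the noise level below are invented, and the pointwise closed form replaces the PDE-constrained optimization of the actual study.

```python
import numpy as np

# Toy field-inversion sketch: estimate a spatially varying multiplicative
# correction beta(x) to a model output so that it matches noisy data, with
# Tikhonov regularization pulling beta toward 1 (the uncorrected model).
# Pointwise closed form:
#   min_b (base*b - d)^2 + lam*(b - 1)^2  =>  b = (base*d + lam) / (base^2 + lam)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
base = 1.0 + 0.5 * np.sin(2 * np.pi * x)          # baseline model output
truth = base * (1.0 + 0.3 * x)                    # "true" corrected physics
data = truth + 0.05 * rng.normal(size=x.size)     # noisy observations

def invert(lam):
    return (base * data + lam) / (base**2 + lam)

for lam in (0.0, 0.1, 10.0):
    beta = invert(lam)
    err = np.abs(beta - (1.0 + 0.3 * x)).mean()
    print(lam, round(err, 3))
```

With lam = 0 the correction fits the noise; with very large lam it collapses to the uncorrected model. Choosing the regularization weight trades these two failure modes off, which is the issue the abstract emphasizes.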
Algorithmic decision-making (ADM) systems have come to support, pre-empt, or substitute for human decisions in manifold areas, with potentially significant impacts on individuals' lives. Achieving transparency and accountability has been formulated as a general goal regarding the use of these systems. However, concrete applications differ widely in the degree of risk and the accountability problems they entail for data subjects. The present paper addresses this variation and presents a framework that differentiates regulatory requirements for a range of ADM system uses. It draws on agency theory to conceptualize accountability challenges from the point of view of data subjects, with the purpose of systematizing instruments for safeguarding algorithmic accountability. The paper furthermore shows how such instruments can be matched to applications of ADM based on a risk matrix. The resulting comprehensive framework can guide the evaluation of ADM systems and the choice of suitable regulatory provisions.
We describe a novel technique for the simultaneous visualization of multiple scalar fields, e.g. representing the members of an ensemble, based on their contour trees. Using tree alignments, a graph-theoretic concept similar to edit distance mappings, we identify commonalities across multiple contour trees and leverage these to obtain a layout that can represent all trees simultaneously in an easy-to-interpret, minimally-cluttered manner. We describe a heuristic algorithm to compute tree alignments for a given similarity metric, and give an algorithm to compute a joint layout of the resulting aligned contour trees. We apply our approach to the visualization of scalar field ensembles, discuss basic visualization and interaction possibilities, and demonstrate results on several analytic and real-world examples.
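The flavour of such an alignment can be conveyed by a greatly simplified sketch: matching nodes greedily by label and recursing, with unmatched nodes carried over as single-tree entries in the combined structure. The paper's heuristic uses a proper similarity metric; everything below is illustrative.

```python
# Sketch of aligning two rooted trees into one "super-tree" whose nodes
# carry the matched originals. Matching here is greedy by node label; a
# node present in only one tree appears with None in the other slot, so
# the combined structure can lay out both trees simultaneously.

def align(t1, t2):
    """Trees are (label, [children]); returns (label1, label2, children)."""
    l1, c1 = t1 if t1 else (None, [])
    l2, c2 = t2 if t2 else (None, [])
    used = set()
    children = []
    for a in c1:
        match = next((i for i, b in enumerate(c2)
                      if i not in used and b[0] == a[0]), None)
        if match is None:
            children.append(align(a, None))       # only in first tree
        else:
            used.add(match)
            children.append(align(a, c2[match]))  # matched pair
    for i, b in enumerate(c2):
        if i not in used:
            children.append(align(None, b))       # only in second tree
    return (l1, l2, children)

t1 = ("root", [("A", []), ("B", [("C", [])])])
t2 = ("root", [("B", [("D", [])]), ("E", [])])
print(align(t1, t2))
```

Extending this pairwise combination to a whole ensemble, with a real similarity metric instead of label equality, yields a single layout in which each member tree occupies a subset of the aligned nodes.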