Automation, Industry 4.0, and artificial intelligence are playing an increasingly central role for companies. Artificial intelligence in particular is currently enabling new methods to achieve higher levels of automation. However, machine learning methods pay off mainly when large amounts of data can be collected easily and patterns can be learned from these data. In metrology, this can prove difficult depending on the area of work. For micrometer-scale measurements in particular, acquiring measurement data often costs a lot of time, effort, patience, and money, so measurement data are not readily available. This raises the question of how meaningfully machine learning approaches can be applied to different domains of measurement tasks, especially in comparison to current solution approaches that use model-based methods. This thesis addresses this question by taking a closer look at two research areas in metrology: micro lead determination and reconstruction. Methods for micro lead determination are presented that determine texture and tool axis with high accuracy; they are based on signal processing, classical optimization, and machine learning. In the second research area, reconstructions of cutting edges are considered in detail. The reconstruction methods here are based on the robust Gaussian filter and on deep neural networks, more specifically autoencoders. All results on micro lead and reconstruction are compared and contrasted in this thesis, and the applicability of the different approaches is evaluated.
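For readers unfamiliar with the filtering baseline mentioned above, the following is a minimal sketch of the standard (non-robust) Gaussian profile filter from surface metrology (cf. ISO 16610-21), which separates a measured profile into waviness and roughness; the robust variant referenced in the thesis additionally downweights outliers iteratively, which is not shown here. Function name and interface are illustrative, not taken from the thesis.

```python
import numpy as np

def gaussian_profile_filter(z, dx, cutoff):
    """Standard Gaussian profile filter (cf. ISO 16610-21).

    z      : 1D array of measured profile heights
    dx     : sampling distance
    cutoff : cutoff wavelength lambda_c (same unit as dx)
    """
    alpha = np.sqrt(np.log(2) / np.pi)
    # a kernel support of +/- one cutoff wavelength is usually sufficient
    x = np.arange(-cutoff, cutoff + dx, dx)
    s = np.exp(-np.pi * (x / (alpha * cutoff)) ** 2)
    s /= s.sum()                          # discrete normalisation
    w = np.convolve(z, s, mode="same")    # waviness (mean line)
    r = z - w                             # roughness (residual)
    return w, r
```

A robust version would re-run this filter with per-sample weights that are reduced for points far from the current mean line, iterating until the weights stabilize.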
Controller design for continuous dynamical systems is a core algorithmic problem in the design of cyber-physical systems (CPS). When the CPS application is safety critical, we additionally require the controller to have strong correctness guarantees. One approach to this design problem is to use a simpler discrete abstraction of the original continuous system, on which known reactive synthesis methods can be used to design the controller. This approach is known as the abstraction-based controller design (ABCD) paradigm.
In this thesis, we build ABCD procedures that are faster and more modular than the state of the art and that can handle problems beyond the scope of existing techniques.
Existing ABCD approaches usually compute the abstractions by discretizing the state space, a procedure that does not scale well to larger systems. Our first contribution is a multi-layered ABCD algorithm in which we combine coarse abstractions with lazily computed fine abstractions to improve scalability. So far we address reach-avoid and safety specifications, for which our prototype tool (called Mascot) showed up to an order of magnitude speedup on standard benchmark examples.
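To make the underlying synthesis step concrete, here is a minimal single-layer sketch of safety synthesis on a finite abstraction, i.e., the computation of the maximal controlled invariant set by a fixed-point iteration. This is the textbook core of ABCD, not Mascot's multi-layered algorithm, and the dictionary-based interface is hypothetical.

```python
def safety_controller(states, inputs, post, safe):
    """Maximal controlled invariant set for a safety specification:
    a state is kept if some input keeps all abstract successors
    inside the current candidate set.

    post[(q, u)] -> set of abstract successor states
                    (an over-approximation of the dynamics)
    """
    win = {q for q in states if q in safe}
    while True:
        new_win = {q for q in win
                   if any(post[(q, u)] and post[(q, u)] <= win
                          for u in inputs)}
        if new_win == win:
            break
        win = new_win
    # controller: for each winning state, the inputs that keep us inside
    ctrl = {q: [u for u in inputs if post[(q, u)] and post[(q, u)] <= win]
            for q in win}
    return win, ctrl
```

Roughly speaking, a lazy multi-layered scheme runs this kind of iteration on a coarse abstraction first and only computes finer abstract transitions where the coarse layer fails.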
Second, we consider the modular design of sound local controllers for a network of local discrete abstractions that communicate via discrete/boolean variables and have local specifications. We propose a sound algorithm in which the systems negotiate a pair of local assume-guarantee contracts in order to synchronize on a set of non-conflicting and correct behaviors. As a by-product, we also obtain a set of local controllers for the systems which ensure simultaneous satisfaction of the local specifications. We show the effectiveness of our algorithm using a prototype tool (called Agnes) on a set of discrete benchmark examples.
Our third contribution is a novel ABCD algorithm for a more expressive model of nonlinear dynamical systems with stochastic disturbances and ω-regular specifications. This part has two subparts, each of which is of significant merit in its own right. First, we present an abstraction algorithm for nonlinear stochastic systems using 2.5-player games (turn-based stochastic graph games). We show that an almost sure winning strategy in this abstract 2.5-player game gives us a sound controller for the original system that satisfies the specification with probability one. Second, we present symbolic algorithms for a seemingly different class of 2-player games with certain environmental fairness assumptions, which can also be used to efficiently compute winning strategies in the aforementioned abstract 2.5-player game. Using our prototype tool (Mascot-SDS), we show that our algorithm significantly outperforms the state-of-the-art implementation on standard benchmark examples from the literature.
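To give a flavor of the qualitative analysis involved, the sketch below solves almost-sure reachability for the 1.5-player special case (a Markov decision process, i.e., no adversarial second player) with the standard prune-and-recompute algorithm; the 2.5-player games and ω-regular objectives treated in the thesis are substantially more general. The interface (states, actions, succ, target) is hypothetical.

```python
def almost_sure_reach(states, actions, succ, target):
    """States of an MDP from which 'target' is reached with
    probability one under some strategy.

    actions[s]   -> iterable of actions available in state s
    succ[(s, a)] -> set of states with positive transition probability
    """
    alive = set(states)
    acts = {s: set(actions[s]) for s in states}
    while True:
        # 1) graph reachability: which alive states can still reach target?
        reach = {t for t in target if t in alive}
        grew = True
        while grew:
            grew = False
            for s in alive - reach:
                if any(succ[(s, a)] & reach for a in acts[s]):
                    reach.add(s); grew = True
        # 2) prune actions that may leave 'reach', then dead states
        stable = (reach == alive)
        for s in reach:
            safe = {a for a in acts[s] if succ[(s, a)] <= reach}
            if safe != acts[s]:
                acts[s] = safe; stable = False
        alive = {s for s in reach if acts[s] or s in target}
        if stable:
            return alive
```

Strategies that only use the surviving actions reach the target with probability one, because from every non-target state they make progress with positive probability while never leaving the winning region.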
Comparative Uncertainty Visualization for High-Level Analysis of Scalar- and Vector-Valued Ensembles
(2022)
With this thesis, I contribute to the research field of uncertainty visualization, considering parameter dependencies in multi-valued fields and the uncertainty of automated data analysis. Like uncertainty visualization in general, both of these fields are becoming more and more important due to increasing computational power, the growing importance and availability of complex models and collected data, and progress in artificial intelligence. I contribute in the following application areas:
Uncertain Topology of Scalar Field Ensembles.
The generalization of topology-based visualizations to multi-valued data involves many challenges. An example is the comparative visualization of multiple contour trees, complicated by the random nature of prevalent contour tree layout algorithms. I present a novel approach for the comparative visualization of contour trees: the Fuzzy Contour Tree.
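For background, contour trees are assembled from join and split trees, which can be computed by a sweep over the vertices combined with union-find. The sketch below computes the join tree of a scalar field given as vertex values on a graph; it illustrates this standard building block only, not the Fuzzy Contour Tree itself, and the interface is illustrative.

```python
import numpy as np

def join_tree(values, edges):
    """Join tree of a scalar field on a graph: sweep the vertices from
    high to low value and merge components with union-find; every merge
    of two components at a vertex contributes one arc of the tree."""
    n = len(values)
    order = np.argsort(values)[::-1]      # vertices by decreasing value
    parent = list(range(n))               # union-find forest
    lowest = list(range(n))               # lowest swept vertex per component
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    arcs, seen = [], np.zeros(n, bool)
    for v in order:
        seen[v] = True
        for u in adj[v]:
            if seen[u]:
                ru, rv = find(u), find(v)
                if ru != rv:
                    arcs.append((lowest[ru], v))  # arc ends at the merge vertex
                    parent[ru] = rv
        lowest[find(v)] = v               # v is now the lowest in its component
    return arcs
```

The split tree is obtained by the same sweep in increasing order, and the contour tree is then assembled from the two.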
Uncertain Topological Features in Time-Dependent Scalar Fields.
Tracking features in time-dependent scalar fields is an active field of research, where most approaches rely on the comparison of consecutive time steps. I created a more holistic visualization for time-varying scalar field topology by adapting Fuzzy Contour Trees to the time-dependent setting.
Uncertain Trajectories in Vector Field Ensembles.
Visitation maps are an intuitive and well-known visualization of uncertain trajectories in vector field ensembles. For large ensembles, however, visitation maps are either not applicable or require extensive computation time. I developed Visitation Graphs, a new representation and data reduction method for vector field ensembles that can be calculated in situ and serves as a basis for the efficient generation of visitation maps. This is accomplished by shifting the expensive calculations into a pre-processing step.
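As a point of reference, a plain visitation map can be computed by tracing one trajectory per ensemble member and counting which grid cells it visits, as in the sketch below (explicit Euler integration on the unit square; all names and parameters are illustrative). It is exactly this per-member tracing cost that makes the naive approach expensive for large ensembles and that Visitation Graphs shift into a pre-processing step.

```python
import numpy as np

def visitation_map(ensemble, seed, res=64, steps=200, h=0.01):
    """Visitation frequency per grid cell for a 2D vector field
    ensemble on the unit square.

    ensemble : list of callables f(x) -> 2D velocity vector
    seed     : common start position of the trajectories
    """
    counts = np.zeros((res, res))
    for f in ensemble:
        x = np.array(seed, float)
        visited = set()
        for _ in range(steps):
            x = x + h * f(x)              # explicit Euler step
            if not (0 <= x[0] < 1 and 0 <= x[1] < 1):
                break                     # trajectory left the domain
            visited.add((int(x[0] * res), int(x[1] * res)))
        for i, j in visited:              # each member counts a cell once
            counts[i, j] += 1
    return counts / len(ensemble)
```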
Visually Supported Anomaly Detection in Cyber Security.
Numerous cyber attacks and the increasing complexity of networks and their protection necessitate automated data analysis in cyber security. Because automated anomaly detection is inherently uncertain, its results need to be communicated to analysts to ensure appropriate reactions. I introduce a visualization system combining device readings and anomaly detection results: the Security in Process System. To further support analysts, I developed an application-agnostic framework that supports the integration of knowledge assistance and applied it to the Security in Process System. I present this Knowledge Rocks Framework, its application, and the results of evaluations of both the original and the knowledge-assisted Security in Process System. For all presented systems, I provide implementation details, illustrations, and applications.
Robotic systems are entering the stage. Enabled by advances in both hardware components and software techniques, robots are increasingly able to operate outside of factories, assist humans, and work alongside them. The limiting factor for this expansion remains the programming of robotic systems: because of the many diverse skills necessary to build a multi-robot system, only the biggest organizations are able to innovate in the space of services provided by robots.
To make developing new robotic services easier, in this dissertation I propose a programming model in which users (programmers) give a declarative specification of what needs to be accomplished, and a backend system then makes sure that the specification is safely and reliably executed. I present Antlab, one such backend system. Antlab accepts Linear Temporal Logic (LTL) specifications from multiple users and executes them using a set of robots with different capabilities.
Building on the experience acquired implementing Antlab, I identify problems arising from the proposed programming model. These problems fall into two broad categories: specification and planning.
In the category of specification problems, I solve the problem of inferring an LTL formula from sets of positive and negative example traces, as well as from a set of positive examples only. Building on top of these solutions, I develop a method to help users transfer their intent into a formal specification. The approach taken in this dissertation combines the intent signals from a single demonstration with a natural language description given by the user. A set of candidate specifications is inferred by encoding the problem as a satisfiability problem for propositional logic. This set is narrowed down to a single specification through interaction with the user: the user approves or declines generated simulations of the robot's behavior in different situations.
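To make the inference task concrete, the sketch below evaluates an LTL fragment over finite traces and searches for a small formula consistent with labeled examples. The dissertation encodes this search as a propositional satisfiability problem rather than enumerating candidates as done here, and the tuple-based formula representation is purely illustrative.

```python
from itertools import product

def holds(phi, trace, t=0):
    """Finite-trace semantics for a small LTL fragment.
    A trace is a list of sets of atomic propositions."""
    op = phi[0]
    if op == "ap":  return phi[1] in trace[t]
    if op == "not": return not holds(phi[1], trace, t)
    if op == "and": return holds(phi[1], trace, t) and holds(phi[2], trace, t)
    if op == "X":   return t + 1 < len(trace) and holds(phi[1], trace, t + 1)
    if op == "F":   return any(holds(phi[1], trace, k) for k in range(t, len(trace)))
    if op == "G":   return all(holds(phi[1], trace, k) for k in range(t, len(trace)))

def candidates(aps, depth):
    """All formulas up to the given nesting depth (grows quickly)."""
    if depth == 0:
        return [("ap", a) for a in aps]
    smaller = candidates(aps, depth - 1)
    out = list(smaller)                   # shallower formulas come first
    out += [(op, p) for op in ("not", "X", "F", "G") for p in smaller]
    out += [("and", p, q) for p, q in product(smaller, smaller)]
    return out

def infer(positives, negatives, aps, depth=2):
    """First (hence shallowest) formula consistent with the examples."""
    for phi in candidates(aps, depth):
        if all(holds(phi, t) for t in positives) and \
           not any(holds(phi, t) for t in negatives):
            return phi
```

For example, infer([[{"a"}, {"a"}, {"b"}]], [[{"a"}, {"a"}]], ["a", "b"]) returns ("F", ("ap", "b")), i.e. "eventually b".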
In the category of planning problems, I first solve the problem of planning for robots that are currently executing their tasks. In such a situation, it is unclear what to take as the initial state for planning. I solve the problem by considering multiple, speculative initial states. The paths from those states are explored based on a quality function that repeatedly estimates the planning time. The second problem is reinforcement learning when the reward function is non-Markovian. The proposed solution consists of iteratively learning an automaton representing the reward function and using it to guide the exploration.
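Once such a reward automaton is available, the non-Markovian reward becomes Markovian on the product of environment state and automaton state, which ordinary Q-learning can then handle, as in the sketch below. All interfaces (env, rm, and their methods) are hypothetical, and the dissertation additionally learns the automaton itself rather than assuming it is given.

```python
import random

def q_learning_with_reward_automaton(env, rm, episodes=500, horizon=100,
                                     alpha=0.1, gamma=0.95, eps=0.1):
    """Q-learning over (environment state, automaton state) pairs.

    env : reset() -> s, step(s, a) -> s', label(s) -> observation,
          actions (finite list)
    rm  : u0 (initial state), delta(u, obs) -> u', reward(u, u')
    """
    Q = {}
    def q(s, u, a): return Q.get((s, u, a), 0.0)
    for _ in range(episodes):
        s, u = env.reset(), rm.u0
        for _ in range(horizon):          # fixed-length episodes for simplicity
            a = (random.choice(env.actions) if random.random() < eps
                 else max(env.actions, key=lambda b: q(s, u, b)))
            s2 = env.step(s, a)
            u2 = rm.delta(u, env.label(s2))   # advance the reward automaton
            r = rm.reward(u, u2)              # reward depends on the automaton
            best = max(q(s2, u2, b) for b in env.actions)
            Q[(s, u, a)] = q(s, u, a) + alpha * (r + gamma * best - q(s, u, a))
            s, u = s2, u2
    return Q
```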
Building on knowledge and innovation: the role of Green Economy in revitalising Shrinking Cities
(2022)
This research introduces the topic of the Green Economy in the context of shrinking cities. The analysis is supported by two case studies, one located in Mexico and the other in France, to identify adaptable strategies for sustainable development in different contexts of urban decline, taking into account significant differences in the availability of resources.
Shrinking cities suffer from problems such as depopulation, economic decline and underuse of urban infrastructure, mainly due to a regional process of economic peripheralisation. Shrinking cities can adopt two logics to address these issues: de-peripheralisation and endogenous development.
It is argued that shrinking cities can exploit emerging green markets to stimulate economic growth and enhance liveability and sustainability; however, the solutions vary depending on the available resources, local comparative advantages, and national political and financial support systems. Green Economy-driven solutions in shrinking cities can follow two main strategies: one aims at regrowth, betting on the creation of regional innovation systems by investing in research and development and on the local capture of the resulting spill-overs; the other, inspired by the concept of greening, aims to improve the quality of urban life by enhancing the quality of consumption sectors through ecological practices and respect for the environment. The analysis of the two case studies serves as a method to observe different strategies for the sustainable development of shrinking cities by introducing activities in the sectors of the Green Economy.
This study supports a globally comparative perspective in urban studies focusing on urban shrinkage. The context of shrinking cities is explored in Latin America by identifying the eighteen shrinking cities in Mexico.
In the pre-seed phase before entering a market, new ventures face the complex, multi-faceted, and uncertain task of designing a business model. Founders accomplish this task within the framework of an innovation process, the so-called business model innovation process. However, because a set of feasible opportunities to design a viable business model is often not predictable in this early phase (Alvarez & Barney, 2007), business model ideas have to be revised multiple times, which corresponds to experimenting with alternative business models (Chesbrough, 2010). This also brings scholars to the relevant, but seldom noticed, field of research on experimentation as a cognitive schema (Felin et al., 2015; Gavetti & Levinthal, 2000). The few scholars who discussed the importance of such thought experimentation did not elaborate on the manifestations of this phenomenon. This gap in the current state of research gives this dissertation the opportunity, building on qualitative interviews with entrepreneurs, to clearly conceptualise the manifestation of experimentation as a cognitive schema in business model innovation. The results extend previous conceptualisations of experimentation by illustrating the interplay of three different forms of thought experimentation, namely purposeful interactions, incidental interactions, and theorising.
In addition, the role of individuals in business model innovation has recently been recognised by scholars (Amit & Zott, 2015; Snihur & Zott, 2020). It has been noticed that not only the founders themselves but also many other actors, such as accelerators or public institutions, play a central role in supporting a new venture on its way to designing a viable business model. It thus stands to reason that in addition to understanding how new ventures design their business model, it is also important to study how different actors are involved in this process. Building on qualitative interviews with entrepreneurs, this dissertation studies how different actors are involved in business model innovation and conceptualises actor engagement behaviours in this context. The results reveal six different actor engagement behaviours: teaching, supporting, mobilising, co-developing, sharing, and signalling.
Furthermore, entrepreneurs and external actors each play certain roles in business model innovation. Certain behavioural patterns and types of resource contributions may be characteristic of a group of actors, leading to the emergence of distinct actor roles. This dissertation therefore establishes a role concept, comprising 13 actor roles, to illustrate how actors are involved in designing a new business model. These actor roles are divided into task-oriented and network-oriented roles. Building on this, a variety of role dynamics are unveiled. Moreover, special attention is given to role temporality: building on two case studies and a quantitative survey, the results reveal how actor roles are played at certain points in time, thereby relating them to specific stages of the pre-seed phase.
Data-driven and Sparse-to-Dense Concepts in Scene Flow Estimation for Automotive Applications
(2022)
Highly assisted driving and autonomous vehicles require a detailed and accurate perception of the environment. This includes the perception of the 3D geometry of the scene and the 3D motion of other road users. The estimation of both based on images is known as the scene flow problem in computer vision. This thesis presents a solution to the scene flow problem that is suitable for application in autonomous vehicles, an application that imposes strict requirements on accuracy, robustness, and speed; previous work was lagging behind in at least one of these metrics. To work towards fulfilling those requirements, the sparse-to-dense concept for scene flow estimation is introduced in this thesis. The idea can be summarized as follows: first, scene flow is estimated for some points of the scene for which this can be done comparatively easily and reliably; then, an interpolation is performed to obtain a dense estimate for the entire scene. Because of the separation into two steps, each part can be optimized individually. In a series of experiments, it is shown that the proposed methods achieve competitive results and are preferable to previous techniques in some respects. As a second contribution, individual components in the sparse-to-dense pipeline are replaced by deep learning modules: a highly localized and highly accurate feature descriptor to represent pixels for dense matching, and a network for robust and generic sparse-to-dense interpolation. Compared to end-to-end architectures, the advantage of deep modules is that they can be trained more efficiently with data from different domains. The recombination approach applies a similar concept to the sparse-to-dense approach by solving and combining less difficult auxiliary sub-problems: 3D geometry and 2D motion are estimated separately, the individual results are combined, and then also interpolated into a dense scene flow. As a final contribution, the thesis proposes a set of monolithic end-to-end networks for scene flow estimation.
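To make the sparse-to-dense idea concrete, the sketch below performs the interpolation step in isolation, using simple inverse-distance weighting over the k nearest sparse samples. The thesis relies on edge-aware interpolation and, in the learned variant, a dedicated interpolation network, so this conveys only the basic structure; all names and parameters are illustrative.

```python
import numpy as np

def densify(sparse_xy, sparse_vec, shape, k=4):
    """Interpolate sparse scene flow vectors onto a dense grid.

    sparse_xy  : (N, 2) pixel positions (x, y) of reliable matches
    sparse_vec : (N, C) flow/geometry vectors at those positions
    shape      : (H, W) of the dense output
    """
    H, W = shape
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    # distance from every pixel to every sparse sample (fine for a sketch)
    d = np.linalg.norm(pix[:, None, :] - sparse_xy[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]              # k nearest samples
    w = 1.0 / (np.take_along_axis(d, nn, axis=1) + 1e-6)
    w /= w.sum(axis=1, keepdims=True)              # normalized IDW weights
    return (w[:, :, None] * sparse_vec[nn]).sum(axis=1).reshape(H, W, -1)
```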
Every organism contains a characteristic number of chromosomes that have to be segregated equally into two daughter cells during mitosis. Any error during chromosome segregation results in daughter cells that have lost or gained a chromosome, a condition known as aneuploidy. Several studies from our laboratory and across the world have previously shown that aneuploidy per se strongly affects cellular physiology. However, these studies were limited mainly to chromosomal gains, owing to the availability of suitable model systems. Strikingly, no systematic study evaluating the impact of chromosome loss in human cells has been performed so far, mainly due to the lack of model systems, as chromosome loss is incompatible with survival and drastically reduces cellular fitness. During my PhD thesis, for the first time, I used diverse omics and biochemical approaches to investigate the consequences of chromosome losses in human somatic cells.
Using isogenic monosomic cells derived from the human cell line RPE1 lacking functional p53, we showed that, similar to cells with chromosome gains, monosomic cells proliferate more slowly than the parental cells and exhibit genomic instability. Transcriptome and proteome analysis revealed that the expression of genes encoded on the monosomic chromosomes was reduced, as expected, but that their abundance was partially compensated towards diploid levels by both transcriptional and post-transcriptional mechanisms. Furthermore, we showed that monosomy induces global gene expression changes that are distinct from the changes in response to chromosome gains. The most consistently deregulated pathways among the monosomies were ribosomes and translation, which we validated using polysome profiling and analysis of translation with puromycin incorporation experiments. We showed that these defects could be attributed to the haploinsufficiency of ribosomal protein genes (RPGs) encoded on the monosomic chromosomes. Reintroduction of p53 into the monosomic cells uncovered that monosomy is incompatible with p53 expression and that monosomic cells expressing p53 are either eliminated or outgrown by the p53-negative population. Given the RPG haploinsufficiency and ribosome biogenesis defects caused by monosomy, we provide evidence that p53 activation in monosomies could be caused by ribosomal defects. These findings were further supported by computational analysis of cancer genomes, revealing that cancers with monosomic karyotypes frequently accumulate p53 pathway mutations and show reduced ribosomal functions.
Together, our findings provide a rationale as to why monosomy is embryonically lethal but frequently occurs in p53-deficient cancers.
An increasing number of today's tasks, such as speech recognition, image generation, translation, classification, or prediction, are performed with the help of machine learning. Artificial neural networks (ANNs) in particular provide convincing results for these tasks. The reasons for this success story are the drastic increase of available data sources in our more and more digitalized world as well as the development of remarkable ANN architectures. This development has led to an increasing number of model parameters together with more and more complex models. Unfortunately, this yields a loss in the interpretability of deployed models. However, there exists a natural desire to explain the deployed models, not just by empirical observations but also by analytical calculations.
In this thesis, we focus on variational autoencoders (VAEs) and foster the understanding of these models. As the name suggests, VAEs are based on standard autoencoders (AEs) and are therefore used to perform dimensionality reduction of data. This is achieved by a bottleneck structure within the hidden layers of the ANN. From a data input, the encoder, that is, the part up to the bottleneck, produces a low-dimensional representation. The decoder, the part from the bottleneck to the output, uses this representation to reconstruct the input. The model is learned by minimizing the reconstruction error.
In our point of view, the most remarkable property and, hence, also a central topic in this thesis is the auto-pruning property of VAEs. Simply speaking, auto-pruning prevents a VAE with thousands of parameters from overfitting. However, such a desirable property comes with the risk of the model learning nothing at all. In this thesis, we look at VAEs and the auto-pruning from two different angles, and our main contributions to research are the following:
(i) We find an analytic explanation of the auto-pruning by leveraging the framework of generalized linear models (GLMs). As a result, we are able to explain training results of VAEs before conducting the actual training.
(ii) We construct a time-dependent VAE and show the effects of the auto-pruning in this model. As a result, we are able to model financial data sequences and estimate the value-at-risk (VaR) of associated portfolios. Our results show that we surpass the standard benchmarks for VaR estimation.
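To make the setting concrete, here is a minimal Gaussian VAE with its ELBO loss, a standard PyTorch sketch rather than the specific architectures studied in this thesis. The KL term is the source of the auto-pruning: latent dimensions that do not improve the reconstruction are driven towards the prior and are effectively switched off.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Minimal Gaussian VAE with a bottleneck of d_latent dimensions."""
    def __init__(self, d_in, d_hidden, d_latent):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.mu = nn.Linear(d_hidden, d_latent)
        self.logvar = nn.Linear(d_hidden, d_latent)
        self.dec = nn.Sequential(nn.Linear(d_latent, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_in))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.dec(z), mu, logvar

def elbo_loss(x, x_hat, mu, logvar):
    """Negative ELBO: reconstruction error plus KL(q(z|x) || N(0, I)).
    The KL term penalizes informative latent dimensions and prunes
    those whose contribution to the reconstruction is too small."""
    rec = ((x - x_hat) ** 2).sum(dim=1)
    kl = 0.5 * (mu ** 2 + logvar.exp() - 1 - logvar).sum(dim=1)
    return (rec + kl).mean()
```

In practice the pruning can be observed directly: after training, latent dimensions whose mu stays near 0 and logvar near 0 for all inputs carry no information and have been switched off by the KL term.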
Today's digital world would be unthinkable without complex data sets. Whether in private, business, or industrial environments, complex data provide the basis for important and critical decisions and determine many processes, some of which are automated. This is often associated with Big Data, yet often it is enough for just one aspect of the usual Big Data definitions to apply and a human observer can no longer capture the data completely and correctly. In this thesis, different approaches are presented to master selected challenges in a more effective, efficient, and user-friendly way. The approaches range from easier pre-processing of data sets for later analysis and the identification of design guidelines for such assistants, through new visualization techniques for presenting uncertainty, extensions of existing visualizations for categorical data, concepts for time-saving selection methods for subsets of data points, and faster navigation and zoom interaction (especially in web-based settings with enormous amounts of data), to new and innovative orientation-based interaction metaphors for mobile devices as well as stationary working environments. Evaluations and appropriate use cases of the individual approaches demonstrate their usability, also in comparison with state-of-the-art techniques.