Automata theory has given rise to a variety of automata models that consist
of a finite-state control and an infinite-state storage mechanism. The aim
of this work is to provide insights into how the structure of the storage
mechanism influences the expressiveness and the analyzability of the
resulting model. To this end, it presents generalizations of results about
individual storage mechanisms to larger classes. These generalizations
characterize those storage mechanisms for which the given result remains
true and those for which it fails.
In order to speak of classes of storage mechanisms, we need an overarching
framework that accommodates each of the concrete storage mechanisms we wish
to address. Such a framework is provided by the model of valence automata,
in which the storage mechanism is represented by a monoid. Since the monoid
serves as a parameter specifying the storage mechanism, our aim
translates into the question: For which monoids does the given
(automata-theoretic) result hold?
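As an illustration of the model (a minimal sketch, not taken from the thesis itself), the following code simulates a valence automaton over the monoid (Z, +): each transition carries a monoid element in addition to an input symbol, and a run is accepting if it ends in a final state with the accumulated monoid value equal to the neutral element. The states, alphabet and transitions are made up for the example; with this monoid the automaton behaves like a blind counter and recognizes the non-regular language of words with equally many a's and b's.

```python
from collections import deque

# Hypothetical valence automaton over (Z, +): state -> (symbol, delta, next state).
transitions = {"q": [("a", +1, "q"), ("b", -1, "q")]}
final_states = {"q"}

def accepts(word, start="q"):
    queue = deque([(start, 0, 0)])            # (state, position in word, monoid value)
    seen = set()
    while queue:
        state, pos, value = queue.popleft()
        if (state, pos, value) in seen:
            continue
        seen.add((state, pos, value))
        # Accept if the whole word is read, the state is final and the
        # accumulated monoid value is the identity (0 in (Z, +)).
        if pos == len(word) and state in final_states and value == 0:
            return True
        for symbol, delta, target in transitions[state]:
            if pos < len(word) and word[pos] == symbol:
                queue.append((target, pos + 1, value + delta))
    return False

print(accepts("abab"), accepts("aab"))        # True False
```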
As a first result, we present an algebraic characterization of those monoids
over which valence automata accept only regular languages. In addition, it
turns out that for each monoid, this is the case if and only if valence
grammars, an analogous grammar model, can generate only context-free
languages.
Furthermore, we are concerned with closure properties: We study which
monoids result in a Boolean closed language class. For every language class
that is closed under rational transductions (in particular, those induced by
valence automata), we show: If the class is Boolean closed and contains any
non-regular language, then it already includes the whole arithmetical
hierarchy.
This work also introduces the class of graph monoids, which are defined by
finite graphs. By choosing appropriate graphs, one can realize a number of
prominent storage mechanisms, but also combinations and variants thereof.
Examples are pushdowns, counters, and Turing tapes. We can therefore relate
the structure of the graphs to computational properties of the resulting
storage mechanisms.
In the case of graph monoids, we study (i) the decidability of the emptiness
problem, (ii) which storage mechanisms guarantee semilinear Parikh images,
(iii) when silent transitions (i.e. those that read no input) can be
avoided, and (iv) which storage mechanisms permit the computation of
downward closures.
A wide range of methods and techniques have been developed over the years to manage the increasing
complexity of automotive Electrical/Electronic systems. Standardization is an example
of such complexity-managing techniques; it aims to minimize costs, avoid compatibility
problems and improve the efficiency of development processes.
A well-known and widely practiced standard in the automotive industry is AUTOSAR (Automotive
Open System Architecture). AUTOSAR is a common standard among OEMs (Original Equipment
Manufacturers), suppliers and other involved companies. It was originally developed with
the goal of simplifying the overall development and integration process of Electrical/Electronic
artifacts from different functional domains, such as hardware, software, and vehicle communication.
However, the AUTOSAR standard, in its current state, is not able to manage the problems
in some areas of system development. The validation and optimization of system configurations,
which are handled in this thesis, are examples of such areas, in which the AUTOSAR standard
so far offers no mature solutions.
Generally, systems developed on the basis of AUTOSAR must be configured in a way that all
defined requirements are met. In most cases, the number of configuration parameters and their
possible settings in AUTOSAR systems is large, especially if the developed system is complex,
with modules from various knowledge domains. The verification process can therefore consume a
lot of resources to test all possible combinations of configuration settings, and ideally to find the
optimal configuration variant, since the number of test cases can be very high. This problem is
referred to in the literature as the combinatorial explosion problem.
Combinatorial testing is an active and promising area of functional testing that offers ideas
to solve the combinatorial explosion problem. Its focus is to cover interaction
errors by selecting a sample of system input parameters or configuration settings for test case
generation. However, the industrial acceptance of combinatorial testing is still weak because of
the lack of real industrial examples.
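To illustrate the idea (a generic sketch, not the approach developed in this thesis), the following code greedily builds a pairwise (2-way) test suite for a small, made-up set of configuration parameters, covering every pair of parameter values with far fewer test cases than the full Cartesian product. The parameter names and values are hypothetical stand-ins for AUTOSAR configuration options.

```python
from itertools import combinations, product

# Hypothetical configuration parameters and their possible settings.
parameters = {
    "bus":      ["CAN", "FlexRay", "Ethernet"],
    "routing":  ["single", "parallel"],
    "buffer":   ["small", "large"],
    "checksum": ["on", "off"],
}
names = list(parameters)

def pairs_of(test):
    """All (parameter, value) pairs covered by one test case."""
    return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}

# Every value pair that a 2-way suite must cover at least once.
uncovered = set()
for a, b in combinations(names, 2):
    uncovered |= {((a, va), (b, vb)) for va in parameters[a] for vb in parameters[b]}

all_tests = [dict(zip(names, values)) for values in product(*parameters.values())]
suite = []
while uncovered:
    # Greedily pick the test case that covers the most still-uncovered pairs.
    best = max(all_tests, key=lambda t: len(pairs_of(t) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(f"{len(all_tests)} exhaustive combinations reduced to {len(suite)} pairwise tests")
```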
This thesis attempts to fill this gap between industry and academia in the area
of combinatorial testing and to emphasize the effectiveness of combinatorial testing in verifying
complex configurable systems.
The particular intention of the thesis is to provide a new applicable approach to combinatorial
testing to fight the combinatorial explosion problem that emerges during the verification and
performance measurement of transport protocol parallel routing of an AUTOSAR gateway. The
proposed approach has been validated and evaluated by means of two real industrial examples
of AUTOSAR gateways with multiple communication buses and two different degrees of complexity
to illustrate its applicability.
Thermoplastic composite materials are being widely used in the automotive and aerospace industries. Due to the limitations of shape complexity, different components
need to be joined. They can be joined by mechanical fasteners, adhesive bonding or
both. However, these methods have several limitations. Components can be joined
by fusion bonding due to the property of thermoplastics. Thermoplastics can be melted on heating and regain their shape on cooling. This property makes them ideal for
joining through fusion bonding by induction heating. Joining of non-conducting or
non-magnetic thermoplastic composites needs an additional material that can generate heat by induction heating.
Polymers are neither electrically conductive nor magnetic, so they have no inherent potential for induction heating. A susceptor sheet containing conductive materials (e.g. carbon fiber) or magnetic materials (e.g. nickel) can generate heat during induction. The
main issues related to induction heating are non-homogeneous and uncontrolled
heating.
In this work, it was observed that the heat generated by a susceptor sheet depends
on its filler, the filler concentration, and its dispersion. It also depends on the coil, the magnetic
field strength and the coupling distance. The combination of different fillers not only increased the heating rate but also changed the heating mechanism. A heating rate of 40 °C/s
was achieved with 15 wt.% nickel-coated short carbon fibers and 3 wt.% multi-walled carbon nanotubes, whereas nickel-coated short carbon fibers alone (15 wt.%)
attained a heating rate of 24 °C/s. In this study, electrical conductivity, thermal
conductivity and magnetic properties were also measured. The results also
showed that electrical percolation was achieved around 15 wt.% with fibers and (13-6) wt.% with hybrid fillers. Induction heating tests were also performed with the
susceptor sheet oriented parallel and perpendicular, as the fibers were unidirectionally aligned.
The susceptor sheet was also tested with perforations.
The susceptor sheet showed homogeneous and fast heating and can be used for
joining non-conductive or non-magnetic thermoplastic composites.
Since the early days of representation theory of finite groups in the 19th century, it was known that complex linear representations of finite groups live over number fields, that is, over finite extensions of the field of rational numbers.
While the related question of integrality of representations was answered negatively by the work of Cliff, Ritter and Weiss as well as by Serre and Feit, it was not known how to decide integrality of a given representation.
In this thesis we show that there exists an algorithm that given a representation of a finite group over a number field decides whether this representation can be made integral.
Moreover, we provide theoretical and numerical evidence for a conjecture, which predicts the existence of splitting fields of irreducible characters with integrality properties.
In the first part, we describe two algorithms for the pseudo-Hermite normal form, which is crucial when handling modules over rings of integers.
Using a newly developed computational model for ideal and element arithmetic in number fields, we show that our pseudo-Hermite normal form algorithms have polynomial running time.
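As a minimal illustration (not one of the two algorithms from the thesis), the following sketch computes the classical Hermite normal form over the rational integers, i.e. the special case of the pseudo-Hermite normal form in which the ring of integers is \(\mathbb{Z}\) and all coefficient ideals are trivial, using gcd-style unimodular row operations.

```python
def hermite_normal_form(A):
    """Row-style Hermite normal form of an integer matrix (list of lists),
    computed with unimodular row operations only."""
    A = [row[:] for row in A]
    m, n = len(A), len(A[0])
    pivot_row = 0
    for col in range(n):
        rows = [r for r in range(pivot_row, m) if A[r][col] != 0]
        if not rows:
            continue
        # Euclidean-style elimination: reduce all entries in this column
        # (below the pivot position) against the smallest one until a single
        # nonzero entry, the gcd, remains.
        while len(rows) > 1:
            rows.sort(key=lambda r: abs(A[r][col]))
            r0 = rows[0]
            for r in rows[1:]:
                q = A[r][col] // A[r0][col]
                A[r] = [a - q * b for a, b in zip(A[r], A[r0])]
            rows = [r for r in rows if A[r][col] != 0]
        r0 = rows[0]
        A[pivot_row], A[r0] = A[r0], A[pivot_row]
        if A[pivot_row][col] < 0:                       # normalize pivot sign
            A[pivot_row] = [-a for a in A[pivot_row]]
        for r in range(pivot_row):                      # reduce entries above the pivot
            q = A[r][col] // A[pivot_row][col]
            A[r] = [a - q * b for a, b in zip(A[r], A[pivot_row])]
        pivot_row += 1
    return A

print(hermite_normal_form([[2, 4], [1, 3]]))            # [[1, 1], [0, 2]]
```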
Furthermore, we address a range of algorithmic questions related to orders and lattices over Dedekind domains, including computation of genera, testing local isomorphism, computation of various homomorphism rings and computation of Solomon zeta functions.
In the second part we turn to the integrality of representations of finite groups and show that an important ingredient is a thorough understanding of the reduction of lattices at almost all prime ideals.
By employing class field theory and tools from representation theory we solve this problem and eventually describe an algorithm for testing integrality.
After running the algorithm on a large set of examples we are led to a conjecture on the existence of integral and nonintegral splitting fields of characters.
By extending techniques of Serre we prove the conjecture for characters with rational character field and Schur index two.
Functional data analysis is a branch of statistics that deals with observations \(X_1,..., X_n\) which are curves. We are interested in particular in time series of dependent curves and, specifically, consider the functional autoregressive process of order one (FAR(1)), which is defined as \(X_{n+1}=\Psi(X_{n})+\epsilon_{n+1}\) with independent innovations \(\epsilon_t\). Estimates \(\hat{\Psi}\) for the autoregressive operator \(\Psi\) have been investigated extensively during the last two decades, and their asymptotic properties are well understood. Particularly difficult and different from scalar- or vector-valued autoregressions are the weak convergence properties, which also form the basis of the bootstrap theory.
Although the asymptotics for \(\hat{\Psi}{(X_{n})}\) are still tractable, they are only useful for large enough samples. In applications, however, frequently only small samples of data are available such that an alternative method for approximating the distribution of \(\hat{\Psi}{(X_{n})}\) is welcome. As a motivation, we discuss a real-data example where we investigate a changepoint detection problem for a stimulus response dataset obtained from the animal physiology group at the Technical University of Kaiserslautern.
To get an alternative for asymptotic approximations, we employ the naive or residual-based bootstrap procedure. In this thesis, we prove theoretically and show via simulations that the bootstrap provides asymptotically valid and practically useful approximations of the distributions of certain functions of the data. Such results may be used to calculate approximate confidence bands or critical bounds for tests.
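The following minimal sketch illustrates the residual-based bootstrap for a FAR(1)-type model, though it is not the procedure from the thesis: curves are discretized on a grid, \(\Psi\) is approximated by a matrix estimated via least squares, and centered residuals are resampled to regenerate the series. Grid size, sample size, the operator and the noise level below are arbitrary assumptions chosen for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, B = 20, 50, 200            # grid points per curve, sample size, bootstrap replicates

# Hypothetical smoothing-kernel operator with norm well below 1 (stationarity).
Psi = 0.3 * np.exp(-np.abs(np.subtract.outer(np.arange(d), np.arange(d))) / 3.0) / 3.0
X = np.zeros((n + 1, d))
for t in range(n):
    X[t + 1] = Psi @ X[t] + rng.normal(scale=0.1, size=d)

def estimate_operator(X):
    """Least-squares estimate of Psi from consecutive pairs (X_t, X_{t+1})."""
    past, future = X[:-1], X[1:]
    return np.linalg.lstsq(past, future, rcond=None)[0].T

Psi_hat = estimate_operator(X)
residuals = X[1:] - X[:-1] @ Psi_hat.T
residuals -= residuals.mean(axis=0)

# Naive / residual-based bootstrap: regenerate the series with resampled
# residuals, re-estimate the operator, and use the spread of the replicates
# to approximate the estimator's distribution (e.g. for confidence bands).
boot_norms = []
for _ in range(B):
    eps = residuals[rng.integers(0, n, size=n)]
    Xb = np.zeros((n + 1, d))
    for t in range(n):
        Xb[t + 1] = Xb[t] @ Psi_hat.T + eps[t]
    boot_norms.append(np.linalg.norm(estimate_operator(Xb) - Psi_hat))

print("bootstrap 95% bound for the estimation error norm:", np.quantile(boot_norms, 0.95))
```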
The Context and Its Importance: In safety and reliability analysis, the amount of information generated by Minimal Cut Set (MCS) analysis is large.
The Top Level Event (TLE), which is the root of the fault tree (FT), represents a hazardous state of the system being analyzed.
MCS analysis helps in analyzing the fault tree (FT) qualitatively, and quantitatively when accompanied by quantitative measures.
The information shows the bottlenecks in the fault tree design, leading to the identification of weaknesses of the system being examined.
Safety analysis (containing MCS analysis) is especially important for critical systems, where harm can be done to the environment or to humans, causing injuries or even death, during system usage.
Minimal Cut Set (MCS) analysis is performed using computers and generates a lot of information.
This phase is called MCS analysis I in this thesis.
The information is then analyzed by the analysts to determine possible issues and to improve the design of the system regarding its safety as early as possible.
This phase is called MCS analysis II in this thesis.
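As a minimal illustration of what MCS analysis I computes (a generic sketch, not one of the tools discussed in this thesis), the following code derives the minimal cut sets of a small, hypothetical fault tree given as nested AND/OR gates over basic events.

```python
from itertools import product

def cut_sets(node):
    """Cut sets of a fault tree node given as a basic-event name (string)
    or a (gate, children) tuple with gate in {"AND", "OR"}."""
    if isinstance(node, str):                          # basic event
        return [frozenset([node])]
    gate, children = node
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":                                   # any child's cut set suffices
        return [cs for sets in child_sets for cs in sets]
    # AND: one cut set from every child must occur together
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimal_cut_sets(node):
    sets = set(cut_sets(node))
    return [s for s in sets if not any(o < s for o in sets)]

# Hypothetical top level event: the system fails if the pump fails, or if both
# the sensor and its backup fail.
tle = ("OR", ["pump", ("AND", ["sensor", "backup_sensor"])])
print([set(s) for s in minimal_cut_sets(tle)])
```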
The goal of my thesis was to develop interactive visualizations to support MCS analysis II of a single fault tree (FT).
The Methodology: As safety visualization (in this thesis, Minimal Cut Set analysis II visualization) is an emerging field and no complete checklist of Minimal Cut Set analysis II requirements and gaps was available from the perspective of visualization and interaction capabilities,
I conducted multiple studies using different methods with different data sources (i.e., triangulation of methods and data) to determine these requirements and gaps before developing and evaluating visualizations and interactions supporting Minimal Cut Set analysis II.
Thus, the following approach was taken in my thesis:
1- First, a triangulation of mixed methods and data sources was conducted.
2- Then, four novel interactive visualizations and one novel interaction widget were developed.
3- Finally, these interactive visualizations were evaluated both objectively and subjectively (compared to multiple safety tools),
with respect to their degree of support for MCS analysis II, from the point of view of users and developers of the safety tools that perform MCS analysis I, and from the point of view of non-domain people, using empirical strategies.
The Spiral tool supports analysts with different color vision, i.e., full color vision and the color vision deficiencies protanopia, deuteranopia, and tritanopia. It supports 100 out of 103 (97%) of the requirements obtained from the triangulation and fills 37 out of 39 (95%) of the gaps. Its usability was rated high (better than their best currently used tools) by the users of the safety and reliability tools (RiskSpectrum, ESSaRel, FaultTree+, and a self-developed tool) and at least similar to the best currently used tools from the point of view of the CAFTA tool developers. Its quality regarding its degree of support for MCS analysis II was higher than that of the FaultTree+ tool. The time spent discovering the critical MCSs in a problem of 540 MCSs (with a worst case of all MCSs being of equal order) was less than a minute, at 99.5% accuracy, and the Spiral visualization scaled to above 4000 MCSs for a comparison task. The Dynamic Slider reduces the interaction movements by up to 85.71% compared to previous sliders and solves their overlapping-thumb issue. The tool further provides a 3D model view of the system being analyzed; the ability to change the coloring of MCSs according to the color vision of the user; selection of a BE (i.e., multi-selection of MCSs), so that the BEs' NoO and its quality can be observed; two interaction speeds for panning and zooming in the MCS, BE, and model views; and an MCS, a BE, and a physical tab for supporting analyses starting from the MCSs, the BEs, or the physical parts. It combines the MCS analysis results and the model of an embedded system, enabling the analysts to directly relate safety information to the corresponding parts of the system being analyzed, and provides an interactive mapping between the textual information of the BEs and MCSs and the parts related to the BEs.
Verification and Assessment: I evaluated all visualizations and the interaction widget both objectively and subjectively, and finally evaluated the resulting Spiral visualization tool, likewise both objectively and subjectively, regarding its perceived quality and its degree of support for MCS analysis II.
Reading as a cultural skill is acquired over a long period of training. This thesis supports the idea that reading is based on specific strategies that result from modification and coordination of earlier developed object recognition strategies. The reading-specific processing strategies are considered to be more analytic compared to object recognition strategies, which are described as holistic. To enable proper reading skills these strategies have to become automatized.
Study 1 (Chapter 4) examined the temporal and visual constraints of letter recognition strategies. In the first experiment two successively presented stimuli (letters or non-letters) had to be classified as same or different. The second stimulus could either be presented in isolation or surrounded by a shape, which was either similar (congruent) or different (incongruent) in its geometrical properties to the stimulus itself. The non-letter pairs were presented twice as often as the letter pairs. The results demonstrated a preference for the holistic strategy also in letters, even if the non-letter set was presented twice as often as the letter set, showing that the analytic strategy does not replace the holistic one completely, but that the usage of both strategies is task-sensitive. In Experiment 2, we compared the Global Precedence Effect (GPE) for letters and non-letters in central viewing, with the global stimulus size close to the functional visual field in whole word reading (6.5° of visual angle) and local stimuli close to the critical size for fluent reading of individual letters (0.5° of visual angle). Under these conditions, the GPE remained robust for non-letters. For letters, however, it disappeared: letters showed no overall response time advantage for the global level and symmetric congruence effects (local-to-global as well as global-to-local interference). These results indicate that reading is based on resident analytic visual processing strategies for letters.
In Study 2 (Chapter 5) we replicated the latter result with a large group of participants as part of a study in which pairwise associations of non-letters and phonological or non-phonological sounds were systematically trained. We investigated whether training would eliminate the GPE also for non-letters. We observed, however, that the differentiation between letters and non-letter shapes persists after training. This result implies that pairwise association learning is not sufficient to overrule the process differentiation in adults. In addition, subtle effects arising in the letter condition (due to enhanced power) enable us to further specify the differentiation in processing between letters and non-letter shapes.
The influence of reading ability on the GPE was examined in Study 3 (Chapter 6). Children with normal reading skills and children with poor reading skills were instructed to detect a target in Latin or Hebrew Navon letters. Children with normal reading skills showed a GPE for Latin letters, but not for Hebrew letters. In contrast, the dyslexia group did not show a GPE for either kind of stimuli. These results suggest that dyslexic children are not able to apply the same automatized letter processing strategy as children with normal reading skills do.
The difference between analytic letter processing and holistic non-letter processing was transferred to the context of whole word reading in Study 4 (Chapter 7). When participants were instructed to detect either a letter or a non-letter in a mixed character string, for letters the reaction times and error rates increased linearly from the left to the right terminal position in the string, whereas for non-letters a symmetrical U-shaped function was observed. These results suggest that the letter-specific processing strategies are triggered automatically also for more word-like material. Thus, this thesis supports and extends prior results on letter-specific processing and provides new evidence for letter-specific processing strategies.
This Ph.D. project, as a landscape research practice, focuses on the less widely studied aspects of the urban agriculture landscape and its application in recreation and leisure, as well as in landscape beautification. I research edible landscape planning and design, its criteria, possibilities, and traditional roots for the particular situation of Iranian cities and landscapes. The primary objective is to prepare a conceptual and practical framework for Iranian professionals to integrate food landscaping into new greenery and open space developments. Furthermore, finding possibilities for synthesizing traditional utilitarian gardening with contemporary pioneering viewpoints on agricultural landscapes is the other significant proposed achievement.
Finished tasks and list of achieved results:
• Recognition of the software and hardware principles of designing the agricultural landscape based on Persian gardens
• Multidimensional identity of agricultural landscape in Persian gardens
• Principles of architectural integration and the characteristics of the integrative landscape in Persian gardens
• Distinctive characteristics of agricultural landscape in Persian garden
• Introducing Persian and historical gardens as the starting point for reintroducing agricultural phenomena into Iranian cities and landscapes
• Assessment of the structure of Persian gardens based on the new achievements and criteria of designing urban agriculture
• Investigation of the role of Persian gardens in envisioning urban agriculture in the landscape of Iranian cities.
Synapses play a central role in the information propagation in the nervous system. A better understanding of synaptic structures and processes is vital for advancing nervous disease research. This work is part of an interdisciplinary project that aims at the quantitative examination of components of the neuromuscular junction, a synaptic connection between a neuron and a muscle cell.
The research project is based on image stacks picturing neuromuscular junctions captured by modern electron microscopes, which permit the rapid acquisition of huge amounts of image data at a high level of detail. The large amount and sheer size of such microscopic data makes a direct visual examination infeasible, though.
This thesis presents novel problem-oriented interactive visualization techniques that support the segmentation and examination of neuromuscular junctions.
First, I introduce a structured data model for segmented surfaces of neuromuscular junctions to enable the computational analysis of their properties. However, surface segmentation of neuromuscular junctions is a very challenging task due to the extremely intricate character of the objects of interest. Hence, such problematic segmentations are often performed manually by non-experts and thus require further inspection.
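As a minimal illustration (not the thesis' actual data model) of the kind of quantitative property such a structured surface representation makes computable, the following sketch takes a segmented surface given as a triangle mesh and computes its total area and enclosed volume; the toy mesh is a made-up tetrahedron.

```python
import numpy as np

def surface_area(vertices, triangles):
    """Total area of a triangle mesh (vertices: Nx3 array, triangles: Mx3 indices)."""
    a, b, c = (vertices[triangles[:, i]] for i in range(3))
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()

def enclosed_volume(vertices, triangles):
    """Volume enclosed by a closed, consistently oriented triangle mesh,
    via signed tetrahedron volumes against the origin."""
    a, b, c = (vertices[triangles[:, i]] for i in range(3))
    return abs(np.einsum("ij,ij->i", a, np.cross(b, c)).sum()) / 6.0

# Hypothetical toy mesh: a unit tetrahedron.
V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
T = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
print(surface_area(V, T), enclosed_volume(V, T))   # ~2.366, ~0.1667
```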
With NeuroMap, I develop a novel framework to support proofreading and correction of three-dimensional surface segmentations. To provide a clear overview and to ease navigation within the data, I propose the surface map, an abstracted two-dimensional representation using key features of the surface as landmarks. These visualizations are augmented with information about automated segmentation error estimates. The framework provides intuitive and interactive data correction mechanisms, which in turn permit the expeditious creation of high-quality segmentations.
While analyzing such segmented synapse data, the formulation of specific research questions is often impossible due to missing insight into the data. I address this problem by designing a generic parameter space for segmented structures from biological image data. Furthermore, I introduce a graphical interface to aid its exploration, combining both parameter selection as well as data representation.
When designing autonomous mobile robotic systems, there usually is a trade-off between the three opposing goals of safety, low-cost and performance.
If one of these design goals is approached further, it usually leads to a recession of one or even both of the other goals.
If for example the performance of a mobile robot is increased by making use of higher vehicle speeds, then the safety of the system is usually decreased, as, under the same circumstances, faster robots are often also more dangerous robots.
This decrease of safety can be mitigated by installing better sensors on the robot, which ensure the safety of the system, even at high speeds.
However, this solution is accompanied by an increase of system cost.
In parallel to mobile robotics, there is a growing number of ambient and aware technology installations in today's environments - no matter whether in private homes, offices or factory environments.
Part of this technology are sensors that are suitable to assess the state of an environment.
For example, motion detectors that are used to automate lighting can be used to detect the presence of people.
This work constitutes a meeting point between the two fields of robotics and aware environment research.
It shows how data from aware environments can be used to approach the abovementioned goal of establishing safe, performant and additionally low-cost robotic systems.
Sensor data from aware technology, which is often unreliable due to its low-cost nature, is fed to probabilistic methods for estimating the environment's state.
Together with models, these methods cope with the uncertainty and unreliability associated with the sensor data, gathered from an aware environment.
The estimated state includes positions of people in the environment and is used as an input to the local and global path planners of a mobile robot, enabling safe, cost-efficient and performant mobile robot navigation during local obstacle avoidance as well as on a global scale, when planning paths between different locations.
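The following minimal sketch illustrates the kind of probabilistic estimation meant here, though it is not the estimator developed in this work: a recursive Bayes filter tracks the probability that a person is present in a room, driven by an unreliable binary motion detector. All rates and the detector readings are assumed values chosen for the illustration; a robot's planner could use such a belief to adapt its speed or route.

```python
# Hypothetical sensor and transition model for one room.
P_FALSE_ALARM = 0.05   # detector fires although nobody is present
P_MISS        = 0.30   # detector stays silent although somebody is present
P_ARRIVE      = 0.02   # person enters the room between two time steps
P_LEAVE       = 0.10   # person leaves the room between two time steps

def update(p_present, detection):
    """One predict/update step of the Bayes filter for P(person present)."""
    # Predict: propagate the belief through the arrival/departure model.
    p = p_present * (1 - P_LEAVE) + (1 - p_present) * P_ARRIVE
    # Update: weight by the likelihood of the observed detector reading.
    like_present = (1 - P_MISS) if detection else P_MISS
    like_absent  = P_FALSE_ALARM if detection else (1 - P_FALSE_ALARM)
    joint = like_present * p
    return joint / (joint + like_absent * (1 - p))

belief = 0.5
for z in [False, False, True, True, False]:        # hypothetical detector readings
    belief = update(belief, z)
    print(f"detection={z!s:5}  P(person present) = {belief:.3f}")
```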
The probabilistic algorithms enable graceful degradation of the whole system.
Even if, in the extreme case, all aware technology fails, the robots will continue to operate, by sacrificing performance while maintaining safety.
All the presented methods of this work have been validated using simulation experiments as well as using experiments with real hardware.