We propose a multiscale model for tumor cell migration in a tissue network. The system of equations involves a structured population model for the tumor cell density, which besides time and
position depends on a further variable characterizing the cellular state with respect to the amount
of receptors bound to soluble and insoluble ligands. Moreover, this equation features pH-taxis and
adhesion, along with an integral term describing proliferation conditioned by receptor binding. The
interaction of tumor cells with their surroundings calls for two more equations for the evolution of
tissue fibers and acidity (expressed via concentration of extracellular protons), respectively. The
resulting ODE-PDE system is highly nonlinear. We prove the global existence of a solution and
perform numerical simulations to illustrate its behavior, paying particular attention to the influence
of the supplementary structure and of the adhesion.
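Schematically, and with purely illustrative terms rather than the exact operators of the paper, such a system couples a structured cell density \(c(t,x,y)\), a tissue fibre density \(v\), and a proton concentration \(h\): \[ \partial_t c + \partial_y\big(g(y,v,h)\,c\big) = \nabla\cdot\big(D\nabla c\big) - \nabla\cdot\big(c\,\chi_h\nabla h + c\,\chi_v\nabla v\big) + \int_Y \mu(x,y,\tilde y)\,c(t,x,\tilde y)\,\mathrm{d}\tilde y, \qquad \partial_t v = -\lambda\,h\,v, \qquad \partial_t h = D_h\Delta h + \alpha\int_Y c\,\mathrm{d}y - \beta h, \] where the transport in \(y\) models receptor binding, the two taxis terms stand in for pH-taxis and adhesion, and the integral term represents receptor-conditioned proliferation.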
This thesis investigates the electromechanical coupling of dielectric elastomers for the static and the dynamic case by numerical simulations. To this end, the fundamental equations of the coupled field problem are introduced and the discretisation procedure for the numerical implementation is described. Furthermore, a three-field formulation is proposed and implemented to treat the nearly incompressible behaviour of the elastomer. Because of the reduced electric permittivity of the material, very high electric fields are required for actuation purposes. To improve the electromechanical coupling, a heterogeneous microstructure consisting of an elastomer matrix with barium titanate inclusions is proposed and studied.
Stochastic Network Calculus (SNC) emerged from two branches in the late 1990s: the theory of effective bandwidths and its predecessor, Deterministic Network Calculus (DNC). As such, SNC’s goal is to analyze queueing networks and to support their design and control.
In contrast to queueing theory, which strives for similar goals, SNC uses inequalities to circumvent complex situations, such as stochastic dependencies or non-Poisson arrivals. Leaving behind the objective of computing exact distributions, SNC derives stochastic performance bounds. Such a bound would, for example, guarantee a maximal queue length of the system that is exceeded only with a known, small probability.
This work includes several contributions towards the theory of SNC, organized as follows:
(1) The first chapters give a self-contained introduction to deterministic network calculus and its two branches of stochastic extensions. The focus lies on the notion of network operations, which allow one to derive performance bounds and to simplify complex scenarios.
(2) The author created the first open-source tool to automate the steps of calculating and optimizing MGF-based performance bounds. The tool automatically calculates end-to-end performance bounds via a symbolic approach. In a second step, this solution is numerically optimized. A modular design allows the user to implement their own functions, such as traffic models or analysis methods.
(3) The problem of the initial modeling step is addressed with the development
of a statistical network calculus. In many applications the properties of included
elements are mostly unknown. To that end, assumptions about the underlying
processes are made and backed by measurement-based statistical methods. This
thesis presents a way to integrate possible modeling errors into the bounds of SNC.
As a byproduct, a dynamic view of the system is obtained that allows SNC to adapt to non-stationarities.
(4) Probabilistic bounds are fundamentally different from deterministic bounds: while deterministic bounds hold for all times of the analyzed system, this is not true for probabilistic bounds. Stochastic bounds, although valid for every time t, only hold for one time instant at a time. Sample-path bounds are only achieved by using Boole’s inequality. This thesis presents an alternative method by adapting the theory of extreme values.
(5) A long-standing problem of SNC is the construction of stochastic bounds for a window flow controller. The corresponding problem for DNC was solved over a decade ago, but it remained open for SNC. This thesis presents two methods for a successful application of SNC to the window flow controller.
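To give a flavour of the MGF-based bounds automated by the tool in contribution (2), the following sketch computes a Chernoff-type backlog bound for a single queue with i.i.d. exponentially distributed per-slot arrivals and constant-rate service; the model, parameter names, and interface are illustrative assumptions and do not reflect the actual tool.

```python
import math

def backlog_bound(theta, lam, capacity, epsilon):
    """Chernoff/MGF-style backlog bound for a single discrete-time queue.

    Arrivals per slot: i.i.d. Exp(lam) amounts, MGF lam/(lam - theta) for theta < lam.
    Service per slot:  constant 'capacity'.
    Returns the smallest b such that the union bound gives P(backlog > b) <= epsilon.
    """
    if theta >= lam:
        raise ValueError("need theta < lam for a finite arrival MGF")
    mgf_arrival = lam / (lam - theta)                 # per-slot arrival MGF
    rho = mgf_arrival * math.exp(-theta * capacity)   # per-slot decay factor
    if rho >= 1.0:
        raise ValueError("bound diverges: decrease theta or the load")
    # union bound over all window lengths k >= 1: sum_k rho^k = rho / (1 - rho)
    prefactor = rho / (1.0 - rho)
    # P(backlog > b) <= prefactor * exp(-theta * b); solve for b at level epsilon
    return max(0.0, math.log(prefactor / epsilon) / theta)

if __name__ == "__main__":
    # mean arrival 1/lam = 0.5 units per slot, service capacity 1 unit per slot
    for theta in (0.5, 1.0, 1.5):
        print(theta, round(backlog_bound(theta, lam=2.0, capacity=1.0, epsilon=1e-6), 2))
```

The free parameter theta trades tightness against the stability condition, which is exactly the kind of numerical optimization the tool performs in its second step.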
Buses not arriving on time and then arriving all at once - this phenomenon is known from
busy bus routes and is called bus bunching.
This thesis combines the well-studied but so far separate areas of bus-bunching prediction and dynamic holding strategies, which make it possible to modulate buses’ dwell times at stops to eliminate bus bunching. We look at real data from Dublin Bus route 46A and present a headway-based predictive-control framework considering all components such as data acquisition, prediction, and control strategies. We formulate time headways as time series and compare several prediction methods for them. Furthermore, we present an analytical
model of an artificial bus route and discuss stability properties and dynamic holding
strategies using both data available at the time and predicted headway data. In a numerical
simulation we illustrate the advantages of the presented predictive-control framework
compared to the classical approaches which only use directly available data.
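A minimal sketch of a headway-based holding rule of the kind discussed above (the gain, cap, and interface are illustrative assumptions, not the control strategy developed in the thesis): a bus that runs too close to its predecessor is held at the stop for a time proportional to the predicted headway deficit.

```python
def holding_time(predicted_headway, target_headway, alpha=0.6, max_hold=120.0):
    """Return extra dwell time (seconds) for a bus at a stop.

    predicted_headway: forecast time gap to the preceding bus (s)
    target_headway:    scheduled/desired gap (s)
    alpha:             control gain in (0, 1]; larger = more aggressive holding
    max_hold:          cap so passengers on board are not delayed excessively
    """
    deficit = target_headway - predicted_headway   # > 0 means the bus is too close
    return min(max(alpha * deficit, 0.0), max_hold)

# usage: a bus predicted to run 90 s behind a 300 s target headway
print(holding_time(predicted_headway=90.0, target_headway=300.0))  # -> 120.0 (capped)
```

Replacing the predicted headway with the directly observed one recovers the classical, purely reactive holding strategies used for comparison.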
Advantage of Filtering for Portfolio Optimization in Financial Markets with Partial Information
(2016)
In a financial market we consider three types of investors trading with a finite time horizon with access to a bank account as well as multiple stocks: the fully informed investor, the partially informed investor whose only source of information is the stock prices, and an investor who does not use this information. The drift is modeled either as following linear Gaussian dynamics
or as being a continuous time Markov chain with finite state space. The
optimization problem is to maximize expected utility of terminal wealth.
The case of partial information is based on the use of filtering techniques.
Conditions to ensure boundedness of the expected value of the filters are developed, in the Markov case also for positivity. For the Markov-modulated drift, boundedness of the expected value of the filter relates strongly to portfolio optimization: the effects are studied and quantified. The derivation of an equivalent, lower-dimensional market is presented next; the result shown here is a type of Mutual Fund Theorem.
Gains and losses emanating from the use of filtering are then discussed in detail for different market parameters: for infrequent trading we find that both filters need to comply with the boundedness conditions to be an advantage for the investor. Losses are minimal in case the filters are advantageous. For an increasing number of stocks, again the boundedness conditions need to be met. Losses in this case depend strongly on the added stocks. The relation
of boundedness and portfolio optimization in the Markov model leads here to
increasing losses for the investor if the boundedness condition is to hold for
all numbers of stocks. In the Markov case, the losses for different numbers
of states are negligible in case more states are assumed than were originally present. Assuming fewer states leads to high losses. Again for the Markov
model, a simplification of the complex optimal trading strategy for power
utility in the partial information setting is shown to cause only minor losses.
If the market parameters are such that short-selling and borrowing constraints are in effect, these constraints may lead to large losses, depending on how strongly they bind. They can, however, also be an advantage for the investor in case the expected value of the filters does not meet the conditions for boundedness.
All results are implemented and illustrated with the corresponding numerical
findings.
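For the linear Gaussian drift model, the partially informed investor's filter is of Kalman type, driven by the observed returns. The following discretized sketch (a simple Euler discretization with illustrative parameter names, not the implementation used in the thesis) shows the basic recursion for a single stock.

```python
import numpy as np

def kalman_drift_filter(returns, dt, kappa, mu_bar, sigma_mu, sigma_s, m0, p0):
    """Discretized Kalman filter for an unobserved Ornstein-Uhlenbeck drift.

    State:       d mu_t = kappa*(mu_bar - mu_t) dt + sigma_mu dW_t
    Observation: dR_t   = mu_t dt + sigma_s dB_t   (log-return increments)
    Returns arrays of filtered drift means and variances.
    """
    means, variances = [], []
    m, p = m0, p0
    for dR in returns:
        # predict (Euler step of the state dynamics and the Riccati equation)
        m_pred = m + kappa * (mu_bar - m) * dt
        p_pred = p + (-2.0 * kappa * p + sigma_mu**2) * dt
        # update with the observed return increment (measurement H = dt)
        gain = p_pred * dt / (p_pred * dt**2 + sigma_s**2 * dt)
        m = m_pred + gain * (dR - m_pred * dt)
        p = p_pred - gain * dt * p_pred
        means.append(m)
        variances.append(p)
    return np.array(means), np.array(variances)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dt, n = 1 / 250, 250
    true_mu, sigma_s = 0.08, 0.2
    dR = true_mu * dt + sigma_s * np.sqrt(dt) * rng.standard_normal(n)
    m, p = kalman_drift_filter(dR, dt, kappa=1.0, mu_bar=0.0,
                               sigma_mu=0.3, sigma_s=sigma_s, m0=0.0, p0=0.1)
    print(f"filtered drift after one year: {m[-1]:.3f}")
```

The filtered mean then replaces the unknown drift in the investor's portfolio problem; in the Markov-chain model, the analogous role is played by the Wonham filter.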
Urban quality of life is currently conceptualized in principally economic terms. As the decline in manufacturing activities, the rise of the service and knowledge economy, the growing importance of accessibility and globalizing processes continue to reconfigure the economic competition between cities, quality of life enters the discourse primarily as a means to attract high-skilled workers and improve the city’s economic prospects. Local governments increasingly seek partnerships with local and foreign capital, reorganizing institutions and tasks to attract capital, including the “selling of place”, strengthening place promotion and marketing efforts. The rhetoric clearly welcomes wealthy, creative, high-skilled people, while disadvantaged and low-skilled groups receive less attention in the making of places. Especially in inner-city areas, high quality of life is promoted in the form of spaces for ‘clean’ and convenient consumption with positive atmospheres and shiny images.
Yet, a plethora of theoretical engagements with urban everyday life reminds us that, while on the one hand, variety of jobs, quality of public spaces, range of shops and services, cultural facilities and public transport are important place characteristics, more subjective aspects such as safe neighbourhoods, well-being, community prospects, social cohesion, happiness, satisfaction and social and spatial justice are equally crucial determinants of urban quality of life. These elements of urban quality of life – and how they are experienced by diverse formations of urban inhabitants – seem to be absent from, if not at odds with, the dominant discourse in rankings, policy and practice. Urban life, social cohesion and complexity are at risk in the dynamics of modernization and adaptation strategies of cities. Gentrification, the occupation of inner-city districts by hyper-rich people, segregation and displacement of lower and middle classes can be observed as a consequence of these long-lasting strategies.
Well-known sociologists and geographers from the UK and Germany have presented their insights on the matter and debated theoretical and empirical attempts to capture the dynamics of urban processes in shaping the quality of life.
Knowing the extent to which we rely on technology, one may think that correct programs are nowadays the norm. Unfortunately, this is far from the truth. Luckily, possible reasons why program correctness is difficult often come hand in hand with some solutions. Consider concurrent program correctness under Sequential Consistency (SC). Under SC, instructions of each program's concurrent component are executed atomically and in order. By using logic to represent correctness specifications, model checking provides a successful solution to concurrent program verification under SC. Alas, SC’s atomicity assumptions do not reflect the reality of hardware architectures. Total Store Order (TSO) is a less common memory model implemented in SPARC and in Intel x86 multiprocessors that relaxes the SC constraints. While the architecturally de-atomized execution of stores under TSO speeds up program execution, it also complicates program verification. To be precise, due to TSO’s unbounded store buffers, a program's state space under TSO might be infinite. This, for example, turns reachability under SC (a PSPACE-complete task) into a non-primitive-recursive-complete problem under TSO. This thesis develops verification techniques targeting TSO-relaxed programs. More precisely, we present under- and over-approximating heuristics for checking reachability in TSO-relaxed programs as well as state-reducing methods for speeding up such heuristics. As a first contribution, we propose an algorithm to check reachability of TSO-relaxed programs lazily. The under-approximating refinement algorithm uses auxiliary variables to simulate TSO's buffers along instruction sequences suggested by an oracle. The oracle's deciding characteristic is that if it returns the empty sequence then the program's SC- and TSO-reachable states are the same. Secondly, we propose several approaches to over-approximate TSO buffers. Combined in a refinement algorithm, these approaches can be used to determine safety with respect to TSO reachability for a large class of TSO-relaxed programs. On the more technical side, we prove that checking reachability is decidable when TSO buffers are approximated by multisets with tracked per-address last-added values. Finally, we analyze how the explored state space can be reduced when checking TSO and SC reachability. Intuitively, through the viewpoint of Shasha-and-Snir-like traces, we exploit the structure of program instructions to explain several state-space reducing methods including dynamic and Cartesian partial order reduction.
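As a toy illustration of the store-buffer semantics described above (not the formal model used in the thesis), the following sketch shows how TSO lets each thread buffer its stores in FIFO order while loads consult the thread's own buffer first; running it reproduces the classic store-buffering outcome that SC forbids.

```python
from collections import deque

class TSOMemory:
    """Toy TSO model: per-thread FIFO store buffers over a shared memory."""

    def __init__(self):
        self.shared = {}      # address -> value
        self.buffers = {}     # thread id -> deque of (address, value)

    def store(self, tid, addr, value):
        # stores are not atomic: they first enter the thread's FIFO buffer
        self.buffers.setdefault(tid, deque()).append((addr, value))

    def load(self, tid, addr, default=0):
        # a load reads the youngest buffered store to the same address, if any
        for a, v in reversed(self.buffers.get(tid, deque())):
            if a == addr:
                return v
        return self.shared.get(addr, default)

    def flush_one(self, tid):
        # scheduled nondeterministically by the hardware; here an explicit call
        buf = self.buffers.get(tid)
        if buf:
            addr, value = buf.popleft()
            self.shared[addr] = value

# store-buffering litmus test: both loads may return 0 under TSO, never under SC
m = TSOMemory()
m.store(0, "x", 1); r0 = m.load(0, "y")   # thread 0: x := 1; read y
m.store(1, "y", 1); r1 = m.load(1, "x")   # thread 1: y := 1; read x
print(r0, r1)                              # 0 0 -> an execution SC forbids
```

The unbounded growth of these buffers before any flush is precisely what makes the TSO state space potentially infinite.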
Combining ultracold atomic gases with the peculiar properties of Rydberg excited atoms gained a lot of theoretical and experimental attention in recent years. Embedded in the ultracold gas, an interaction between the Rydberg atom and the surrounding ground state atoms arises through the scattering of the Rydberg electron from an intruding perturber atom. This peculiar interaction gives rise to a plenitude of previously unobserved effects. Within the framework of the present thesis, this interaction is studied in detail for Rydberg \(P\)-states in rubidium.
Due to their long lifetime, atoms in Rydberg states are subject to scattering with the surrounding ground state atoms in the ultracold cloud. By measuring their lifetime as a function of the ground state atom flux, we are able to obtain the total inelastic scattering cross section as well as the partial cross section for associative ionisation. The fact that the latter is three orders of magnitude larger than the size of the formed molecular
ion indicates the presence of an efficient mass transport mechanism that is mediated by the Rydberg–ground state interaction. The immense acceleration of the collisional process shows a close analogy to a catalytic process. The increase of the scattering cross section renders associative ionisation an important process that has to be considered for experiments in dense ultracold systems.
The interaction of the Rydberg atom with a ground state perturber gives rise to a highly oscillatory potential that supports molecular bound states. These so-called ultralong-range Rydberg molecules are studied with high resolution time-of-flight spectroscopy, where we are able to determine the binding energies and lifetimes of the molecular states between the two fine structure split \(25P\)-states. Inside an electric field, we observe a broadening of the
molecular lines that indicates the presence of a permanent electric dipole moment, induced by the mixing with high angular momentum states. Due to the mixing of the ground state atom’s hyperfine states by the molecular interaction, we are able to observe a spin-flip of the perturber upon creation of a Rydberg molecule. Furthermore, an incidental near-degeneracy in the underlying level scheme of the \(25P\)-state gives rise to highly entangled states between the Rydberg fine structure state and the perturber’s hyperfine structure. These mechanisms can be used to manipulate the quantum state of a remote particle over distances that exceed by far the typical contact interaction range.
Apart from the ultralong-range Rydberg molecules that predominantly consist of only one low angular momentum state, a class of Rydberg molecules is predicted to exist that strongly mixes the high angular momentum states of the degenerate hydrogenic manifolds. These states, the so-called trilobite- and butterfly Rydberg molecules, show very peculiar properties that cannot be observed for conventional molecules. Here we present the first experimental observation of butterfly Rydberg molecules. In addition to an extensive spectroscopy that reveals the binding energy, we are also able to observe the rotational structure of these exotic molecules. The arising pendular states inside an electric field allow us, in comparison to the model of a dipolar rotor, to extract the precise bond
length and dipole moment of the molecule. With the information obtained in the present study, it is possible to photoassociate butterfly molecules with a selectable bond length, vibrational state, rotational state, and orientation inside an electric field.
By shedding light on various previously unrevealed aspects, the experiments presented in this thesis significantly deepen our knowledge of the Rydberg–ground state interaction and the peculiar effects arising from it. The obtained spectroscopic information on Rydberg molecules and the changed reaction dynamics for molecular ion creation will provide valuable data for quantum chemical simulations and the necessary input for planning future experiments. Beyond that, our study reveals that the hyperfine interaction in Rydberg molecules and the peculiar properties of butterfly states provide very promising new ways to alter the short- and long-range interactions in ultracold many-body systems. In this sense, the investigated Rydberg–ground state interaction not only lies right at
the interface between quantum chemistry, quantum many-body systems, and Rydberg physics, but also creates many new and fascinating possibilities by combining these fields.
Mixed-signal systems combine analog circuits with digital hardware and software systems. A particular challenge is the sensitivity of analog parts to even small deviations in parameters, or inputs. Parameters of circuits and systems such as process, voltage, and temperature are never accurate; we hence model them as uncertain values (‘uncertainties’). Uncertain parameters and inputs can modify the dynamic behavior and lead to properties of the system that are not in specified ranges. For verification of mixed-signal systems, the analysis of the impact of uncertainties on the dynamical behavior plays a central role.
Verification of mixed-signal systems is usually done by numerical simulation. A single numerical simulation run allows designers to verify single parameter values out of what are often ranges of uncertain values. Multi-run simulation techniques such as Monte Carlo Simulation, Corner Case simulation, and enhanced techniques such as Importance Sampling or Design-of-Experiments make it possible to verify ranges – at the cost of a high number of simulation runs, and with the risk of not finding potential errors. Formal and symbolic approaches are an interesting alternative. Such methods allow a comprehensive verification. However, formal methods do not scale well with heterogeneity and complexity. Also, formal methods do not support existing and established modeling languages. This fact complicates their integration into industrial design flows.
In previous work on verification of mixed-signal systems, Affine Arithmetic is used for symbolic simulation. This allows combining the high coverage of formal methods with the ease of use and applicability of simulation. Affine Arithmetic computes the propagation of uncertainties through mostly linear analog circuits and DSP methods in an accurate way. However, Affine Arithmetic is currently only able to compute with contiguous regions, but does not permit the representation of and computation with discrete behavior, e.g., introduced by software. This is a serious limitation: in mixed-signal systems, uncertainties in the analog part are often compensated by embedded software; hence, verification of system properties must consider both analog circuits and embedded software.
The objective of this work is to provide an extension to Affine Arithmetic that allows symbolic computation also for digital hardware and software systems, and to demonstrate its applicability and scalability. Compared with related work and state of the art, this thesis provides the following achievements:
1. The thesis introduces extended Affine Arithmetic Forms (XAAF) for the representation of branch and merge operations.
2. The thesis describes arithmetic and relational operations on XAAF, and reduces over-approximation by using an LP solver.
3. The thesis shows and discusses ways to integrate this XAAF into existing modeling languages, in particular SystemC. This way, breaks in the design flow can be avoided.
The applicability and scalability of the approach is demonstrated by symbolic simulation of a Delta-Sigma Modulator and a PLL circuit of an IEEE 802.15.4 transceiver system.
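For readers unfamiliar with the underlying representation, the following minimal sketch implements ordinary (unextended) affine forms, in which every uncertain quantity is a centre value plus a linear combination of noise symbols ranging over [-1, 1]; the class and method names are illustrative, and the XAAF extension for branch and merge operations described above is deliberately not included.

```python
class AffineForm:
    """x = x0 + sum_i xi * eps_i, with each noise symbol eps_i in [-1, 1]."""

    _next_symbol = 0

    def __init__(self, center, terms=None):
        self.center = float(center)
        self.terms = dict(terms or {})    # noise-symbol id -> partial deviation

    @classmethod
    def from_interval(cls, lo, hi):
        sym = cls._next_symbol
        cls._next_symbol += 1
        return cls((lo + hi) / 2.0, {sym: (hi - lo) / 2.0})

    def __add__(self, other):
        # addition is exact and preserves correlations between shared symbols
        terms = dict(self.terms)
        for s, d in other.terms.items():
            terms[s] = terms.get(s, 0.0) + d
        return AffineForm(self.center + other.center, terms)

    def __mul__(self, other):
        # the linear part is exact; the nonlinear remainder is over-approximated
        # by a fresh noise symbol (this is where over-approximation enters)
        terms = {s: other.center * d for s, d in self.terms.items()}
        for s, d in other.terms.items():
            terms[s] = terms.get(s, 0.0) + self.center * d
        sym = AffineForm._next_symbol
        AffineForm._next_symbol += 1
        terms[sym] = self.radius() * other.radius()
        return AffineForm(self.center * other.center, terms)

    def radius(self):
        return sum(abs(d) for d in self.terms.values())

    def interval(self):
        return self.center - self.radius(), self.center + self.radius()

# uncertain supply voltage and load current; shared symbols keep correlated
# quantities tighter than plain interval arithmetic would
v = AffineForm.from_interval(3.2, 3.4)
i = AffineForm.from_interval(0.9, 1.1)
print((v * i).interval())   # enclosure of the power v*i
```

The XAAF of this thesis augments such forms with additional bookkeeping so that the diverging paths created by branches in digital hardware and software can be represented and merged again.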
An Adaptive and Dynamic Simulation Framework for Incremental, Collaborative Classifier Fusion
(2016)
To investigate incremental collaborative classifier fusion techniques, we have developed a comprehensive simulation framework. It is highly flexible and customizable, and can be adapted to various settings and scenarios. The toolbox is realized as an extension to the NetLogo multi-agent based simulation environment using its comprehensive Java API. The toolbox has been integrated in two different environments, one for demonstration purposes and another, modeled on persons moving according to realistic motion data from Zurich, who are communicating in an ad hoc fashion using mobile devices.
This thesis presents a novel, generic framework for information segmentation in document images.
A document image contains different types of information, for instance, text (machine printed/handwritten), graphics, signatures, and stamps.
It is necessary to segment information in documents so that such segmented information can be processed only when required in automatic document processing workflows.
The main contribution of this thesis is the conceptualization and implementation of an information segmentation framework that is based on part-based features.
The generic nature of the presented framework makes it applicable to a variety of documents (technical drawings, magazines, administrative, scientific, and academic documents) digitized using different methods (scanners, RGB cameras, and hyper-spectral imaging (HSI) devices).
A highlight of the presented framework is that it does not require large training sets, rather a few training samples (for instance, four pages) lead to high performance, i.e., better than previously existing methods.
In addition, the presented framework is simple and can be adapted quickly to new problem domains.
This thesis is divided into three major parts on the basis of the document digitization method used (scanning, hyper-spectral imaging, and camera capture).
In the area of scanned document images, three specific contributions have been realized.
The first of them is in the domain of signature segmentation in administrative documents.
In some workflows, it is very important to check the document authenticity before processing the actual content.
This can be done based on the available seal of authenticity, e.g., signatures.
However, signature verification systems expect a pre-segmented signature image, while signatures are usually part of a document.
To use signature verification systems on document images, it is necessary to first segment signatures in documents.
This thesis shows that the presented framework can be used to segment signatures in administrative documents.
The system based on the presented framework is tested on a publicly available dataset, where it outperforms the state-of-the-art methods and successfully segments all signatures, while fewer than half of the found signatures are false positives.
This shows that it can be applied for practical use.
The second contribution in the area of scanned document images is segmentation of stamps in administrative documents.
A stamp also serves as a seal of document authenticity.
However, the location of a stamp on a document can be more arbitrary than that of a signature, depending on the person sealing the document.
This thesis shows that a system based on our generic framework is able to extract stamps of any arbitrary shape and color.
The evaluation of the presented system on a publicly available dataset shows that it is also able to segment black stamps (that were not addressed in the past) with a recall and precision of 83% and 73%, respectively.
Furthermore, to segment colored stamps, this thesis presents a novel feature set which is based on intensity gradients, is able to extract unseen, colored, arbitrarily shaped, textual as well as graphical stamps, and outperforms the state-of-the-art methods.
The third contribution in the area of scanned document images is in the domain of information segmentation in technical drawings (architectural floorplans, maps, circuit diagrams, etc.), which usually contain a large amount of graphics and comparatively few textual components. Further, in technical drawings, text often overlaps with graphics.
Thus, automatic analysis of technical drawings uses text/graphics segmentation as a pre-processing step.
This thesis presents a method based on our generic information segmentation framework that is able to detect text that touches graphical components in architectural floorplans and maps.
Evaluation of the method on a publicly available dataset of architectural floorplans shows that it is able to extract almost all touching text components with precision and recall of 71% and 95%, respectively.
This means that almost all of the touching text components are successfully extracted.
In the area of hyper-spectral document images, two contributions have been realized.
Unlike normal three-channel RGB images, hyper-spectral images usually have multiple channels that range from the ultraviolet to the infrared region, including the visible region.
First, this thesis presents a novel automatic method for signature segmentation from hyper-spectral document images (240 spectral bands between 400 and 900 nm).
The presented method is based on a part-based key point detection technique, which does not use any structural information, but relies only on the spectral response of the document regardless of ink color and intensity.
The presented method is capable of segmenting (overlapping and non-overlapping) signatures from varying backgrounds such as printed text, tables, stamps, logos, etc.
Importantly, the presented method can extract signature pixels and not just the bounding boxes.
This is essential when signatures overlap with text and/or other objects in the image. Second, this thesis presents a new dataset comprising 300 documents scanned using a high-resolution hyper-spectral scanner. Evaluation of the presented signature segmentation method on this hyper-spectral dataset shows that it is able to extract signature pixels with a precision and recall of 100% and 79%, respectively.
Further contributions have been made in the area of camera captured document images. A major problem in the development of Optical Character Recognition (OCR) systems for camera captured document images is the lack of labeled camera captured document image datasets. First, this thesis presents a novel, generic method for automatic ground truth generation/labeling of document images. The presented method builds large-scale (i.e., millions of images) datasets of labeled camera captured/scanned documents without any human intervention. The method is generic and can be used for automatic ground truth generation of (scanned and/or camera captured) documents in any language, e.g., English, Russian, Arabic, Urdu. The evaluation of the presented method, on two different datasets in English and Russian, shows that 99.98% of the images are correctly labeled in every case.
Another important contribution in the area of camera captured document images is the compilation of a large dataset comprising 1 million word images (10 million character images), captured in a real camera-based acquisition environment, along with the word and character level ground truth. The dataset can be used for training as well as testing of character recognition systems for camera-captured documents. Various benchmark tests are performed to analyze the behavior of different open source OCR systems on camera captured document images. Evaluation results show that the existing OCRs, which already get very high accuracies on scanned documents, fail on camera captured document images.
Using the presented camera-captured dataset, a novel character recognition system is developed which is based on a variant of recurrent neural networks, i.e., Long Short-Term Memory (LSTM), and which outperforms all of the existing OCR engines on camera captured document images with an accuracy of more than 95%.
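As a generic sketch of an LSTM-based recognizer of this kind (written with PyTorch; the architecture, alphabet size, and training snippet are illustrative assumptions and do not reproduce the system developed in the thesis):

```python
import torch
import torch.nn as nn

class LSTMRecognizer(nn.Module):
    """Minimal column-wise bidirectional LSTM text recognizer with a CTC output layer."""

    def __init__(self, img_height=32, num_classes=80, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=img_height, hidden_size=hidden,
                            num_layers=2, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, num_classes + 1)  # +1 for the CTC blank

    def forward(self, images):
        # images: (batch, height, width) grayscale; feed one pixel column per time step
        columns = images.permute(0, 2, 1)          # (batch, width, height)
        features, _ = self.lstm(columns)           # (batch, width, 2*hidden)
        return self.fc(features).log_softmax(-1)   # per-column class log-probabilities

# illustrative training step on random data (real data: word images plus label sequences)
model = LSTMRecognizer()
ctc = nn.CTCLoss(blank=80)
images = torch.rand(4, 32, 100)                    # 4 word images of size 32x100 pixels
logits = model(images).permute(1, 0, 2)            # CTC expects (time, batch, classes)
targets = torch.randint(0, 80, (4, 8))             # dummy character-index sequences
loss = ctc(logits, targets,
           input_lengths=torch.full((4,), 100),
           target_lengths=torch.full((4,), 8))
loss.backward()
```

The CTC layer lets the network be trained on whole word images without per-character alignment, which is what makes the automatically labeled datasets described above directly usable for training.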
Finally, this thesis provides details on various tasks that have been performed in the area closely related to information segmentation. This includes automatic analysis and sketch based retrieval of architectural floor plan images, a novel scheme for online signature verification, and a part-based approach for signature verification. With these contributions, it has been shown that part-based methods can be successfully applied to document image analysis.
Reading as a cultural skill is acquired over a long period of training. This thesis supports the idea that reading is based on specific strategies that result from modification and coordination of earlier developed object recognition strategies. The reading-specific processing strategies are considered to be more analytic compared to object recognition strategies, which are described as holistic. To enable proper reading skills, these strategies have to become automatized. Study 1 (Chapter 4) examined the temporal and visual constraints of letter recognition strategies. In the first experiment, two successively presented stimuli (letters or non-letters) had to be classified as same or different. The second stimulus could either be presented in isolation or surrounded by a shape, which was either similar (congruent) or different (incongruent) in its geometrical properties to the stimulus itself. The non-letter pairs were presented twice as often as the letter pairs. The results demonstrated a preference for the holistic strategy also in letters, even if the non-letter set was presented twice as often as the letter set, showing that the analytic strategy does not replace the holistic one completely, but that the usage of both strategies is task-sensitive. In Experiment 2, we compared the Global Precedence Effect (GPE) for letters and non-letters in central viewing, with the global stimulus size close to the functional visual field in whole word reading (6.5° of visual angle) and local stimuli close to the critical size for fluent reading of individual letters (0.5° of visual angle). Under these conditions, the GPE remained robust for non-letters. For letters, however, it disappeared: letters showed no overall response time advantage for the global level and symmetric congruence effects (local-to-global as well as global-to-local interference). These results indicate that reading is based on resident analytic visual processing strategies for letters. In Study 2 (Chapter 5) we replicated the latter result with a large group of participants as part of a study in which pairwise associations of non-letters and phonological or non-phonological sounds were systematically trained. We investigated whether training would eliminate the GPE also for non-letters. We observed, however, that the differentiation between letters and non-letter shapes persists after training. This result implies that pairwise association learning is not sufficient to overrule the process differentiation in adults. In addition, subtle effects arising in the letter condition (due to enhanced power) enable us to further specify the differentiation in processing between letters and non-letter shapes. The influence of reading ability on the GPE was examined in Study 3 (Chapter 6). Children with normal reading skills and children with poor reading skills were instructed to detect a target in Latin or Hebrew Navon letters. Children with normal reading skills showed a GPE for Latin letters, but not for Hebrew letters. In contrast, the dyslexia group did not show a GPE for either kind of stimulus. These results suggest that dyslexic children are not able to apply the same automatized letter processing strategy as children with normal reading skills do. The difference between the analytic letter processing and the holistic non-letter processing was transferred to the context of whole word reading in Study 4 (Chapter 7).
When participants were instructed to detect either a letter or a non-letter in a mixed character string, for letters the reaction times and error rates increased linearly from the left to the right terminal position in the string, whereas for non-letters a symmetrical U-shaped function was observed. These results suggest that the letter-specific processing strategies are triggered automatically also for more word-like material. Thus, this thesis supports and expands prior results on letter-specific processing and gives new evidence for letter-specific processing strategies.
Synapses play a central role in the information propagation in the nervous system. A better understanding of synaptic structures and processes is vital for advancing nervous disease research. This work is part of an interdisciplinary project that aims at the quantitative examination of components of the neuromuscular junction, a synaptic connection between a neuron and a muscle cell.
The research project is based on image stacks picturing neuromuscular junctions captured by modern electron microscopes, which permit the rapid acquisition of huge amounts of image data at a high level of detail. The large amount and sheer size of such microscopic data makes a direct visual examination infeasible, though.
This thesis presents novel problem-oriented interactive visualization techniques that support the segmentation and examination of neuromuscular junctions.
First, I introduce a structured data model for segmented surfaces of neuromuscular junctions to enable the computational analysis of their properties. However, surface segmentation of neuromuscular junctions is a very challenging task due to the extremely intricate character of the objects of interest. Hence, such problematic segmentations are often performed manually by non-experts and thus require further inspection.
With NeuroMap, I develop a novel framework to support proofreading and correction of three-dimensional surface segmentations. To provide a clear overview and to ease navigation within the data, I propose the surface map, an abstracted two-dimensional representation using key features of the surface as landmarks. These visualizations are augmented with information about automated segmentation error estimates. The framework provides intuitive and interactive data correction mechanisms, which in turn permit the expeditious creation of high-quality segmentations.
While analyzing such segmented synapse data, the formulation of specific research questions is often impossible due to missing insight into the data. I address this problem by designing a generic parameter space for segmented structures from biological image data. Furthermore, I introduce a graphical interface to aid its exploration, combining both parameter selection as well as data representation.
When designing autonomous mobile robotic systems, there usually is a trade-off between the three opposing goals of safety, low-cost and performance.
If one of these design goals is pursued further, this usually comes at the expense of one or even both of the other goals.
If for example the performance of a mobile robot is increased by making use of higher vehicle speeds, then the safety of the system is usually decreased, as, under the same circumstances, faster robots are often also more dangerous robots.
This decrease of safety can be mitigated by installing better sensors on the robot, which ensure the safety of the system, even at high speeds.
However, this solution is accompanied by an increase of system cost.
In parallel to mobile robotics, there is a growing number of ambient and aware technology installations in today's environments - no matter whether in private homes, offices or factory environments.
Part of this technology consists of sensors that are suitable for assessing the state of an environment.
For example, motion detectors that are used to automate lighting can be used to detect the presence of people.
This work constitutes a meeting point between the two fields of robotics and aware environment research.
It shows how data from aware environments can be used to approach the abovementioned goal of establishing safe, performant and additionally low-cost robotic systems.
Sensor data from aware technology, which is often unreliable due to its low-cost nature, is fed to probabilistic methods for estimating the environment's state.
Together with models, these methods cope with the uncertainty and unreliability associated with the sensor data, gathered from an aware environment.
The estimated state includes positions of people in the environment and is used as an input to the local and global path planners of a mobile robot, enabling safe, cost-efficient and performant mobile robot navigation during local obstacle avoidance as well as on a global scale, when planning paths between different locations.
The probabilistic algorithms enable graceful degradation of the whole system.
Even if, in the extreme case, all aware technology fails, the robots will continue to operate, by sacrificing performance while maintaining safety.
All the presented methods of this work have been validated using simulation experiments as well as using experiments with real hardware.
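To make the probabilistic estimation step concrete, here is a hedged sketch of a discrete Bayes filter that maintains the probability of a person being present in a room from noisy, low-cost motion-detector readings; the sensor and transition probabilities are illustrative assumptions, not the models used in this work.

```python
def update_presence(prior, triggered,
                    p_trigger_given_present=0.7,
                    p_trigger_given_absent=0.05):
    """One Bayes update of P(person present) from a binary motion reading."""
    if triggered:
        like_present, like_absent = p_trigger_given_present, p_trigger_given_absent
    else:
        like_present, like_absent = 1 - p_trigger_given_present, 1 - p_trigger_given_absent
    return like_present * prior / (like_present * prior + like_absent * (1 - prior))

def predict_presence(belief, p_enter=0.02, p_leave=0.05):
    """Propagate the belief over one time step with a simple two-state model."""
    return belief * (1 - p_leave) + (1 - belief) * p_enter

# a robot fuses a stream of cheap, unreliable sensor readings over time
belief = 0.5
for reading in [False, False, True, True, False]:
    belief = update_presence(predict_presence(belief), reading)
print(f"P(person present) = {belief:.2f}")
```

Beliefs of this kind can then be passed to the local and global path planners, and if the sensor stream stops entirely, the prediction step alone gracefully degrades the estimate instead of breaking the system.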
Requirements-Aware, Template-Based Protocol Graphs for Service-Oriented Network Architectures
(2016)
The rigidity of the Internet causes architectural design issues such as interdependencies among the layers, the lack of cross-layer information exchange, and the dependency of applications on the underlying protocol implementations.
G-Lab (i.e., http://www.german-lab.de/) is a research project on Future Internet Architecture (FIA) which focuses on problems of the Internet such as rigidity, mobility, and addressing, whereas the focus of ICSY (i.e., www.icsy) was on providing flexibility in future network architectures. An approach called Service Oriented Network Architecture (SONATE) has been proposed to compose protocols dynamically. SONATE is based on the principles of service-oriented architecture (SOA), where protocols are decomposed into software modules which are later put together on demand to provide the desired service.
This composition of functionalities can be performed at various time epochs (e.g., run time, design time, deployment time). However, these epochs involve a trade-off between time complexity (i.e., the required setup time) and the provided flexibility. Design time is the least time-critical phase in comparison to the other time phases, which makes it possible to utilize human analytical capability. However, design time lacks real-time knowledge of requirements and network conditions, which results in inflexible protocol graphs that cannot be changed at later stages when requirements change. Contrary to design time, run time is the most time-critical phase, as an application is waiting for a connection to be established; at the same time, it has maximum information to generate a protocol graph suitable to the given requirements.
Considering the above limitations of the different time phases, this thesis presents a novel intermediate functional composition approach (Template-Based Composition) to generate requirements-aware protocol graphs. The template-based composition splits the composition process across different time phases in order to exploit the less time-critical nature and human analytical availability of design time, the ability of deployment time to instantaneously deploy new functionalities, and the maximum information availability of run time. The approach has been successfully implemented, demonstrated, and evaluated with respect to its performance in order to understand the implications for practical use.
This thesis deals with risk measures based on utility functions and time consistency of dynamic risk measures. It is therefore aimed at readers interested in both, the theory of static and dynamic financial risk measures in the sense of Artzner, Delbaen, Eber and Heath [7], [8] and the theory of preferences in the tradition of von Neumann and Morgenstern [134].
A main contribution of this thesis is the introduction of optimal expected utility (OEU) risk measures as a new class of utility-based risk measures. We introduce OEU, investigate its main properties and its applicability to risk measurement, and put it into perspective relative to alternative risk measures and notions of certainty equivalents. To the best of our knowledge, OEU is the only existing utility-based risk measure that is (non-trivial and) coherent if the utility function u has constant relative risk aversion. We present several different risk measures that can be derived with special choices of u and illustrate that OEU reacts in a more sensitive way to slight changes of the probability of a financial loss than value at risk (V@R) and average value at risk.
Further, we propose implied risk aversion as a coherent rating methodology for retail structured products (RSPs). Implied risk aversion is based on optimal expected utility risk measures and, in contrast to standard V@R-based ratings, takes into account both the upside potential and the downside risks of such products. In addition, implied risk aversion is easily interpreted in terms of an individual investor's risk aversion: A product is attractive (unattractive) for an investor if its implied risk aversion is higher (lower) than his individual risk aversion. We illustrate this approach in a case study with more than 15,000 warrants on DAX ® and find that implied risk aversion is able to identify favorable products; in particular, implied risk aversion is not necessarily increasing with respect to the strikes of call warrants.
Another main focus of this thesis is on consistency of dynamic risk measures. To this end, we study risk measures on the space of distributions, discuss concavity on the level of distributions and slightly generalize Weber's [137] findings on the relation of time consistent dynamic risk measures to static risk measures to the case of dynamic risk measures with time-dependent parameters. Finally, this thesis investigates how recursively composed dynamic risk measures in discrete time, which are time consistent by construction, can be related to corresponding dynamic risk measures in continuous time. We present different approaches to establish this link and outline the theoretical basis and the practical benefits of this relation. The thesis concludes with a numerical implementation of this theory.
In this thesis, mathematical research questions related to recursive utility and stochastic differential utility (SDU) are explored.
First, a class of backward equations under nonlinear expectations is investigated: Existence and uniqueness of solutions are established, and the issues of stability and discrete-time approximation are addressed. It is then shown that backward equations of this class naturally appear as a continuous-time limit in the context of recursive utility with nonlinear expectations.
Then, the Epstein-Zin parametrization of SDU is studied. The focus is on specifications with both relative risk aversion and elasticity of intertemporal substitution greater than one. A concave utility functional is constructed and a utility gradient inequality is established.
Finally, consumption-portfolio problems with recursive preferences and unspanned risk are investigated. The investor's optimal strategies are characterized by a specific semilinear partial differential equation. The solution of this equation is constructed by a fixed point argument, and a corresponding efficient and accurate method to calculate optimal strategies numerically is given.
Inflation modeling is a very important tool for conducting an efficient monetary policy. This doctoral thesis reviewed inflation models, in particular the Phillips curve models of inflation dynamics. We focused on a well-known and widely used model, the so-called three-equation New Keynesian model, which is a system of equations consisting of a New Keynesian Phillips curve (NKPC), an investment and saving (IS) curve, and an interest rate rule.
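For orientation, a standard textbook form of these three equations (the notation and coefficients vary across the literature, and this schematic version is not necessarily the exact specification derived in the thesis) is \[ \pi_t = \beta\,E_t[\pi_{t+1}] + \kappa\,x_t, \qquad x_t = E_t[x_{t+1}] - \frac{1}{\sigma}\big(i_t - E_t[\pi_{t+1}] - r_t^n\big), \qquad i_t = \phi_\pi\,\pi_t + \phi_x\,x_t, \] where \(\pi_t\) denotes inflation, \(x_t\) the output gap, \(i_t\) the nominal interest rate, and \(r_t^n\) the natural rate of interest.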
We gave a detailed derivation of these equations. The interest rate rule used in this model is normally determined by using a Lagrangian method to solve an optimal control problem constrained by a standard discrete-time NKPC, which describes the inflation dynamics, and an IS curve, which represents the output gap dynamics. In contrast to the real world, this method assumes that the policy makers intervene continuously. This means that the costs resulting from changes in the interest rates are ignored. We also showed that approximation errors are made when one log-linearizes nonlinear equations in the derivation of the standard discrete-time NKPC.
We agreed with other researchers, as mentioned in this thesis, that errors which result from ignoring such log-linear approximation errors and the costs of altering interest rates when determining the interest rate rule can lead to a suboptimal interest rate rule and hence to non-optimal paths of the output gap and the inflation rate.
To overcome such a problem, we proposed a stochastic optimal impulse control method. We formulated the problem as a stochastic optimal impulse control problem by considering the costs of changes in interest rates and the approximation error terms. In order to formulate this problem, we first transform the standard discrete-time NKPC and the IS curve into their high-frequency versions and hence into their continuous-time versions, where the error terms are described by a zero-mean Gaussian white noise with a finite and constant variance. After formulating this problem, we use the quasi-variational inequality approach to solve analytically a special case of the central bank problem, where the inflation rate is supposed to be on target and the central bank has to optimally control the output gap dynamics. This method gives an optimal control band in which the output gap process has to be maintained and an optimal control strategy, which includes the optimal size of intervention and the optimal intervention time, that can be used to keep the process within the optimal control band.
Finally, using a numerical example, we examined the impact of some model parameters on the optimal control strategy. The results show that an increase in the output gap volatility as well as in the fixed and proportional costs of changing the interest rate leads to an increase in the width of the optimal control band. In this case, the optimal intervention requires the central bank to wait longer before undertaking another control action.
The present study investigated the effects of two methods of shared book reading on children's emergent literacy skills, such as language skills (expressive vocabulary and semantic skills) and grapheme awareness, i.e., before the alphabetic phase of reading acquisition (Lachmann & van Leeuwen, 2014), in home and kindergarten contexts. The two following shared book reading methods were investigated: Method I - literacy enrichment: 200 extra children's books were distributed in kindergartens and children were encouraged every week to borrow a book to take home and read with their parents. Further, a written letter was sent to the parents encouraging them to frequently read the books with their children at home. Method II - teacher training: kindergarten teachers participated in structured training which included formal instruction on how to promote child language development through shared book reading. The training was an adaptation of the Heidelberger Interaktionstraining für pädagogisches Fachpersonal zur Förderung ein- und mehrsprachiger Kinder - HIT (Buschmann & Jooss, 2011). In addition, the effects of the two methods in combination were investigated. Three questions were addressed in the present study: (1) What effect does method I (literacy enrichment), method II (teacher training) and the combination of both methods have on children's expressive vocabulary? (2) What effect does method I (literacy enrichment), method II (teacher training) and the combination of both methods have on children's semantic skills? (3) What effect does method I (literacy enrichment), method II (teacher training) and the combination of both methods have on children's grapheme awareness? Accordingly, 69 children, ranging in age from 3;0 to 4;8 years, were recruited from four kindergartens in the city of Kaiserslautern, Germany. The kindergartens were divided into: kindergarten 1 – Method I (N = 13); kindergarten 2 - Method II (N = 18); kindergarten 3 - Combination of both methods (N = 17); kindergarten 4 - Control group (N = 21). Half of the participants (N = 35) reported having a migration background. All groups were similar with regard to socioeconomic status and literacy activities at home. In a pre-/posttest design, children performed three tests: expressive vocabulary (AWSTR, 3-5; Kiese-Himmel, 2005), semantic skills (SETK, 3-5 subtests ESR; Grimm, 2001), and grapheme awareness, a task developed to test children's familiarity with grapheme forms. The intervention period had a duration of six months. The data analysis was performed using the software IBM SPSS Statistics version 22. Regarding language skills, Method I showed no significant effects on children's expressive vocabulary and semantic skills. Method II showed significant effects for children's expressive vocabulary. In addition, the children with a migration background benefited more from the method. Regarding semantic skills, no significant effects were found. No significant effects of the combination of both methods on children's language skills were found. For grapheme awareness, however, results showed positive effects for Method I and Method II, as well as for the combination of both methods. The combination group, as indicated by a large effect size, proved to be more effective than Method I and Method II alone. Moreover, the results indicated that in grapheme awareness, all children (regardless of age, gender, and migration background) benefited equally in all three intervention groups.
Overall, it can be concluded from the results of the present study that, by providing access to good books, Method I may help parents involve themselves in the active process of their child's literacy skills development. However, in order to improve language skills, access to books alone proved not to be enough. Therefore, access combined with additional support for parents on how to improve their language interactions with their children is highly recommended. With respect to Method II, the present study suggests that shared book reading supported by professional training is an important tool for children's language development. For grapheme awareness, it is concluded that, with the combination of the two methods, high exposure to shared book reading helps children to informally learn about the surface characteristics of print, acquire some familiarity with the visual characteristics of the letters, and learn to differentiate them from other visual patterns. Finally, the present study points organizations and institutions, as well as future research, to the importance of programs that offer children more opportunities for adequate language interaction and more experience with print through shared book reading, as shown in the present study.
We propose and study a strongly coupled PDE-ODE-ODE system modeling cancer cell invasion through a tissue network
under the go-or-grow hypothesis asserting that cancer cells can either move or proliferate. Hence our setting features
two interacting cell populations with their mutual transitions and involves tissue-dependent degenerate diffusion and
haptotaxis for the moving subpopulation. The proliferating cells and the tissue evolution are characterized by way of ODEs
for the respective densities. We prove the global existence of weak solutions and illustrate the model behaviour by
numerical simulations in a two-dimensional setting.
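Schematically, and with illustrative coefficients in place of the paper's precise transition, proliferation, and degradation terms, such a go-or-grow system for the moving density \(m\), the proliferating density \(p\), and the tissue density \(v\) reads \[ \partial_t m = \nabla\cdot\big(D(v)\nabla m\big) - \nabla\cdot\big(m\,\chi(v)\nabla v\big) - \gamma_1(v)\,m + \gamma_2(v)\,p, \qquad \partial_t p = \mu\,p\,(1-p-v) + \gamma_1(v)\,m - \gamma_2(v)\,p, \qquad \partial_t v = -\delta\,m\,v, \] where the first equation carries the tissue-dependent degenerate diffusion and the haptotaxis term, while the last two equations are ODEs in time at each spatial point.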
The thesis consists of two parts. In the first part we consider the stable Auslander--Reiten quiver of a block \(B\) of a Hecke algebra of the symmetric group at a root of unity in characteristic zero. The main theorem states that if the ground field is algebraically closed and \(B\) is of wild representation type, then the tree class of every connected component of the stable Auslander--Reiten quiver \(\Gamma_{s}(B)\) of \(B\) is \(A_{\infty}\). The main ingredient of the proof is a skew group algebra construction over a quantum complete intersection. Also, for these algebras the stable Auslander--Reiten quiver is computed in the case where the defining parameters are roots of unity. As a result, the tree class of every connected component of the stable Auslander--Reiten quiver is \(A_{\infty}\).
In the second part of the thesis we are concerned with branching rules for Hecke algebras of the symmetric group at a root of unity. We give a detailed survey of the theory initiated by I. Grojnowski and A. Kleshchev, describing the Lie-theoretic structure that the Grothendieck group of finite-dimensional modules over a cyclotomic Hecke algebra carries. A decisive role in this approach is played by various functors that give branching rules for cyclotomic Hecke algebras that are independent of the underlying field. We give a thorough definition of divided power functors that will enable us to reformulate the Scopes equivalence of a Scopes pair of blocks of Hecke algebras of the symmetric group. As a consequence we prove that two indecomposable modules that correspond under this equivalence have a common vertex. In particular, we verify the Dipper--Du Conjecture in the case where the blocks under consideration have finite representation type.
Membrane proteins are generally soluble only in the presence of detergent micelles or other membrane-mimetic systems, which renders the determination of the protein’s molar mass or oligomeric state difficult. Moreover, the amount of bound detergent varies drastically among different proteins and detergents. However, the type of detergent and its concentration have a great influence on the protein’s structure, stability, and functionality and the success of structural and functional investigations and crystallographic trials. Size-exclusion chromatography, which is commonly used to determine the molar mass of water-soluble proteins, is not suitable for detergent-solubilised proteins because
the protein–detergent complex has a different conformation and, thus, commonly exhibits
a different migration behaviour than globular standard proteins. Thus, calibration curves obtained with standard proteins are not useful for membrane-protein analysis. However,
the combination of size-exclusion chromatography with ultraviolet absorbance, static light scattering, and refractive index detection provides a tool to determine the molar mass of protein–detergent complexes in an absolute manner and allows for distinguishing the contributions of detergent and protein to the complex.
The goal of this thesis was to refine the standard triple-detection size-exclusion chromatography measurement and data analysis procedure for challenging membrane-protein samples, non-standard detergents, and difficult solvents such as concentrated denaturant solutions that were thought to elude routine approaches. To this end, the influence of urea on the performance of the method beyond direct influences on detergents and proteins was investigated with the help of the water-soluble bovine serum albumin. On the basis of
the obtained results, measurement and data analysis procedures were refined for different detergents and protein–detergent complexes comprising the membrane proteins OmpLA and Mistic from Escherichia coli and Bacillus subtilis, respectively.
The investigations on mass and shape of different detergent micelles and the compositions of protein–detergent complexes in aqueous buffer and concentrated urea solutions
showed that triple-detection size-exclusion chromatography provides valuable information
about micelle masses and shapes under various conditions. Moreover, it is perfectly suited for the straightforward analysis of detergent-suspended proteins in terms of composition and oligomeric state not only under native but, more importantly, also under denaturing conditions.
Towards A Non-tracking Web
(2016)
Today, many publishers (e.g., websites, mobile application developers) commonly use third-party analytics services and social widgets. Unfortunately, this scheme allows these third parties to track individual users across the web, creating privacy concerns and leading to reactions to prevent tracking via blocking, legislation and standards. While improving user privacy, these efforts do not consider the functionality third-party tracking enables publishers to use: to obtain aggregate statistics about their users and increase their exposure to other users via online social networks. Simply preventing third-party tracking without replacing the functionality it provides cannot be a viable solution; leaving publishers without essential services will hurt the sustainability of the entire ecosystem.
In this thesis, we present alternative approaches to bridge this gap between privacy for users and functionality for publishers and other entities. We first propose a general and interaction-based third-party cookie policy that prevents third-party tracking via cookies, yet enables social networking features for users when wanted, and does not interfere with non-tracking services for analytics and advertisements. We then present a system that enables publishers to obtain rich web analytics information (e.g., user demographics, other sites visited) without tracking the users across the web. While this system requires no new organizational players and is practical to deploy, it requires publishers to pre-define answer values for the queries, which may not be feasible for many analytics scenarios (e.g., search phrases used, free-text photo labels). Our second system complements the first system by enabling publishers to discover previously unknown string values to be used as potential answers in a privacy-preserving fashion and with low computation overhead for clients as well as servers. These systems suggest that it is possible to provide non-tracking services with (at least) the same functionality as today’s tracking services.
Integrating Security Concerns into Safety Analysis of Embedded Systems Using Component Fault Trees
(2016)
Nowadays, almost every newly developed system contains embedded systems for controlling system functions. An embedded system perceives its environment via sensors, and interacts with it using actuators such as motors. For systems that might damage their environment through faulty behavior, a safety analysis is usually performed. Security properties of embedded systems are usually not analyzed at all. New developments in the area of Industry 4.0 and Internet of Things lead to more and more networking of embedded systems. Thereby, new causes for system failures emerge: Vulnerabilities in software and communication components might be exploited by attackers to obtain control over a system. By targeted actions a system may also be brought into a critical state in which it might harm itself or its environment. Examples of such vulnerabilities, and also of successful attacks, have become known over the last few years.
For this reason, both safety and security have to be analyzed in embedded systems, at least insofar as security issues may cause safety-critical failures of system components.
The goal of this thesis is to describe in one model how vulnerabilities from the security point of view might influence the safety of a system. The focus lies on safety analysis of systems, so the safety analysis is extended to encompass security problems that may have an effect on the safety of a system. Component Fault Trees are very well suited to examine causes of a failure and to find failure scenarios composed of combinations of faults. A Component Fault Tree of an analyzed system is extended by additional Basic Events that may be caused by targeted attacks. Qualitative and quantitative analyses are extended to take the additional security events into account. Thereby, causes of failures that are based on safety as well as security problems may be found. Quantitative, or at least semi-quantitative, analyses allow security measures to be evaluated in more detail and their necessity to be justified.
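To make the extension concrete, the following Python sketch (our own toy example, not taken from the thesis; event names, tree structure, and numbers are invented) shows how a top event over classic basic events and additional attack-related basic events can be analyzed both qualitatively (minimal cut sets) and quantitatively (rare-event approximation of the top-event probability):

    from itertools import product
    import math

    # Toy extended Component Fault Tree: random component faults plus
    # attack-related basic events (all names and numbers are illustrative).
    basic = {"sensor_fault": 1e-4, "actuator_fault": 5e-5}       # failure probabilities
    attack = {"spoofed_message": 1e-3, "firmware_tamper": 1e-4}  # estimated attack likelihoods
    events = {**basic, **attack}
    names = list(events)

    def top_event(s):
        # System-level failure occurs if the actuator fails, or if a spoofed
        # message coincides with a sensor fault or tampered firmware.
        return s["actuator_fault"] or (s["spoofed_message"] and
                                       (s["sensor_fault"] or s["firmware_tamper"]))

    # Qualitative analysis: minimal cut sets of the (monotone) structure function.
    cut_sets = []
    for bits in product([False, True], repeat=len(names)):
        s = dict(zip(names, bits))
        if top_event(s):
            active = {n for n in names if s[n]}
            if not any(c <= active for c in cut_sets):
                cut_sets = [c for c in cut_sets if not active <= c] + [active]
    print(cut_sets)

    # Quantitative analysis: rare-event approximation of the top-event probability.
    print(sum(math.prod(events[e] for e in c) for c in cut_sets))

Cut sets that contain only attack-related events correspond to the purely attack-induced failure scenarios discussed below.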
The approach was applied to several example systems: The safety chain of the off-road robot RAVON, an adaptive cruise control, a smart farming scenario, and a model of a generic infusion pump were analyzed. The result of all example analyses was that additional failure causes were found which would not have been detected in traditional Component Fault Trees. The analyses also revealed failure scenarios that are caused solely by attacks and that do not depend on failures of system components. These are especially critical scenarios that should not occur, as they are not found in a classical safety analysis. Thus the approach demonstrates the additional benefit it brings to a safety analysis, achieved by applying established techniques with only little additional effort.
By using Gröbner bases of ideals of polynomial algebras over a field, many implemented algorithms manage to give exciting examples and counterexamples in Commutative Algebra and Algebraic Geometry. Part A of this thesis will focus on extending the concept of Gröbner bases and Standard bases for polynomial algebras over the ring of integers and its factors \(\mathbb{Z}_m[x]\). Moreover we implemented two algorithms for this case in Singular which use different approaches in detecting useless computations, the classical Buchberger algorithm and an F5 signature-based algorithm. Part B includes two algorithms that compute the graded Hilbert depth of a graded module over a polynomial algebra \(R\) over a field, as well as the depth and the multigraded Stanley depth of a factor of monomial ideals of \(R\). The two algorithms provide faster computations and examples that led B. Ichim and A. Zarojanu to a counterexample of a question of J. Herzog. A. Duval, B. Goeckner, C. Klivans and J. Martin have recently discovered a counterexample to the Stanley Conjecture. We prove in this thesis that the Stanley Conjecture holds in some special cases. Part D explores the General Neron Desingularization in the framework of Noetherian local domains of dimension 1. We have constructed and implemented in Singular an algorithm that computes a strong Artin Approximation for Cohen-Macaulay local rings of dimension 1.
Gröbner bases are one of the most powerful tools in computer algebra and commutative algebra, with applications in algebraic geometry and singularity theory. From the theoretical point of view, these bases can be computed over any field using Buchberger's algorithm. In practice, however, the computational efficiency depends on the arithmetic of the coefficient field.
In this thesis, we consider Gröbner bases computations over two types of coefficient fields. First, consider a simple extension \(K=\mathbb{Q}(\alpha)\) of \(\mathbb{Q}\), where \(\alpha\) is an algebraic number, and let \(f\in \mathbb{Q}[t]\) be the minimal polynomial of \(\alpha\). Second, let \(K'\) be the algebraic function field over \(\mathbb{Q}\) with transcendental parameters \(t_1,\ldots,t_m\), that is, \(K' = \mathbb{Q}(t_1,\ldots,t_m)\). In particular, we present efficient algorithms for computing Gröbner bases over \(K\) and \(K'\). Moreover, we present an efficient method for computing syzygy modules over \(K\).
To compute Gröbner bases over \(K\), starting from the ideas of Noro [35], we proceed by joining \(f\) to the ideal to be considered, adding \(t\) as an extra variable. But instead of avoiding superfluous S-pair reductions by inverting algebraic numbers, we achieve the same goal by applying modular methods as in [2,4,27], that is, by inferring information in characteristic zero from information in characteristic \(p > 0\). For suitable primes \(p\), the minimal polynomial \(f\) is reducible over \(\mathbb{F}_p\). This allows us to apply modular methods once again, on a second level, with respect to the
modular factors of \(f\). The algorithm thus resembles a divide and conquer strategy and
is in particular easily parallelizable. Moreover, using a similar approach, we present an algorithm for computing syzygy modules over \(K\).
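A minimal illustration of the basic construction (joining the minimal polynomial and an extra variable to the ideal), here sketched with SymPy on an invented toy ideal rather than with the SINGULAR implementation of the thesis:

    from sympy import symbols, groebner

    x, y, t = symbols('x y t')

    # Work over K = Q(alpha) with alpha = sqrt(2): represent alpha by the extra
    # variable t and join its minimal polynomial f = t^2 - 2 to the ideal.
    minpoly = t**2 - 2
    F = [x**2 - t*y, y**2 + t*x - 1, minpoly]   # toy ideal, for illustration only

    # Groebner basis of the enlarged ideal in Q[x, y, t], with t ranked last;
    # substituting alpha for t (and discarding the minimal polynomial) yields a
    # basis of the original ideal over Q(sqrt(2)).
    print(groebner(F, x, y, t, order='lex'))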
On the other hand, to compute Gröbner bases over \(K'\), our new algorithm first specializes the parameters \(t_1,\ldots,t_m\) to reduce the problem from \(K'[x_1,\ldots,x_n]\) to \(\mathbb{Q}[x_1,\ldots,x_n]\). The algorithm then computes a set of Gröbner bases of specialized ideals. From this set of Gröbner bases with coefficients in \(\mathbb{Q}\), it obtains a Gröbner basis of the input ideal using sparse multivariate rational interpolation.
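The specialization step can likewise be sketched in a few lines (again with SymPy and an invented example; the sparse rational interpolation that recombines the specialized bases into a parametric one is omitted):

    from sympy import symbols, groebner, Rational

    x, y, t = symbols('x y t')

    # Toy ideal in Q(t)[x, y] with one transcendental parameter t.
    F = [x**2 - t*y, y**2 - t*x]

    # Specialize t to several rational values and compute Groebner bases over Q;
    # the full algorithm recovers the parametric basis from such specializations.
    for t0 in [Rational(1), Rational(2), Rational(5, 3)]:
        print(t0, groebner([f.subs(t, t0) for f in F], x, y, order='lex'))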
In their current state, these algorithms are probabilistic in the sense that, as for other modular Gröbner basis computations, an effective final verification test is only known for homogeneous ideals or for local monomial orderings. The presented timings show that for most examples, our algorithms, which have been implemented in SINGULAR [17], are considerably faster than other known methods.
Distributed systems are omnipresent nowadays and networking them is fundamental for the continuous dissemination and thus availability of data. Provision of data in real-time is one of the most important non-functional aspects that safety-critical networks must guarantee. Formal verification of data communication against worst-case deadline requirements is key to certification of emerging x-by-wire systems. Verification allows aircraft to take off, cars to steer by wire, and safety-critical industrial facilities to operate. Therefore, different methodologies for worst-case modeling and analysis of real-time systems have been established. Among them is deterministic Network Calculus (NC), a versatile technique that is applicable across multiple domains such as packet switching, task scheduling, system on chip, software-defined networking, data center networking and network virtualization. NC is a methodology to derive deterministic bounds on two crucial performance metrics of communication systems:
(a) the end-to-end delay data flows experience and
(b) the buffer space required by a server to queue all incoming data.
NC has already seen application in the industry, for instance, basic results have been used to certify the backbone network of the Airbus A380 aircraft.
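For a single server, the classic closed forms of these two bounds are easy to state; the following sketch uses a token-bucket arrival curve \(\alpha(t) = b + rt\) and a rate-latency service curve \(\beta(t) = R\,[t - T]^{+}\) with \(r \le R\) (textbook results; the parameter values below are made up):

    def delay_bound(r, b, R, T):
        """Maximum horizontal deviation between alpha and beta: T + b/R."""
        assert r <= R, "stability requires the arrival rate not to exceed the service rate"
        return T + b / R

    def backlog_bound(r, b, R, T):
        """Maximum vertical deviation between alpha and beta: b + r*T."""
        assert r <= R
        return b + r * T

    print(delay_bound(r=1.0, b=5.0, R=2.0, T=0.5))    # 3.0 time units
    print(backlog_bound(r=1.0, b=5.0, R=2.0, T=0.5))  # 5.5 data units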
The NC methodology for worst-case performance analysis of distributed real-time systems consists of two branches. Both share the NC network model but diverge regarding their respective derivation of performance bounds, i.e., their analysis principle. NC was created as a deterministic system theory for queueing analysis and its operations were later cast in a (min,+)-algebraic framework. This branch is known as algebraic Network Calculus (algNC). While algNC can efficiently compute bounds on delay and backlog, the algebraic manipulations do not allow NC to attain the most accurate bounds achievable for the given network model. These tight performance bounds can only be attained with the other, newly established branch of NC, the optimization-based analysis (optNC). However, the only optNC analysis that can currently derive tight bounds was proven to be computationally infeasible even for the analysis of moderately sized networks other than simple sequences of servers.
This thesis makes various contributions in the area of algNC: accuracy within the existing framework is improved, distributivity of the sensor network calculus analysis is established, and most significantly the algNC is extended with optimization principles. They allow algNC to derive performance bounds that are competitive with optNC. Moreover, the computational efficiency of the new NC approach is improved such that this thesis presents the first NC analysis that is both accurate and computationally feasible at the same time. It allows NC to scale to larger, more complex systems that require formal verification of their real-time capabilities.
The mechanical properties of semi-crystalline polymers depend extremely on their
morphology, which is dependent on the crystallization during processing. The aim of
this research is to determine the effect of various nanoparticles on morphology
formation and tensile mechanical properties of polypropylene under conditions
relevant in polymer processing and to contribute ultimately to the understanding of
this influence.
Based on the thermal analyses of samples during fast cooling, it is found that the
presence of nanoparticles enhances the overall crystallization process of PP. The results
suggest that an increase of the nucleation density/rate is a dominant process that
controls the crystallization process of PP in this work, which can help to reduce the
cycle time in the injection process. Moreover, the analysis of melting behaviors
obtained after each undercooling reveals that crystal perfection increases significantly
with the incorporation of TiO2 nanoparticles, while it is not influenced by the SiO2
nanoparticles.
This work also comprises an analysis of the influence of nanoparticles on the
microstructure of injection-molded parts. The results clearly show multi-layers along
the wall thickness. The spherulite size and the degree of crystallinity continuously
decrease from the center to the edge. Generally both the spherulite size and the degree
of crystallinity decrease with higher SiO2 loading. In contrast, an increase in the
degree of crystallinity with an increasing TiO2 nanoparticle loading was detected.
The tensile strength tends to increase towards the core. It decreases with the addition of nanoparticles, while the
elongation at break of nanoparticle-filled PP decreases from the skin to the core. With
increasing TiO2 loading, the elongation at break decreases.
The biodiversity of the cyanobacterial lichen flora of Vietnam is chronically understudied. Previous studies often neglected the lichens that inhabit lowlands especially outcrops and sand dunes that are common habitats in Vietnam.
A cyanolichen collection was gathered from the lowlands of central and southern Vietnam to study their diversity and distribution. At the same time, cultured photobionts from those lichens were used for a polyphasic taxonomic approach.
A total of 66 cyanolichens were recorded from lowland regions in central and southern Vietnam, doubling the number of cyanolichens known for Vietnam. 80% of them are new records for Vietnam, among which a new species, Pyrenopsis melanophthalma, and two new, as yet unidentified lichinacean taxa were described.
A notable floristic segregation by habitat was evident in the communities. Saxicolous Lichinales dominated in coastal outcrops, corresponding to 56% of lichen species richness. Lecanoralean cyanolichens and basidiolichens were found in the lowland forests. Precipitation correlated negatively with species richness in this study, indicating a competitive relationship.
Eleven cyanobacterial strains including 8 baeocyte-forming members of the genus Chroococcidiopsis and 3 heterocyte-forming species of the genera Nostoc and Scytonema were successfully isolated from lichens.
Phylogenetic and morphological analyses indicated that Chroococcidiopsis was the unique photobiont in Peltula. New morphological characters were found in two Chroococcidiopsis strains: (1) the purple content of cells in one photobiont strain that was isolated from a new lichinacean taxon, and (2) the pseudofilamentous feature produced by binary division in a strain that was isolated from Porocyphus dimorphus.
With respect to heterocyte-forming cyanobionts, Scytonema was confirmed as the photobiont in the ascolichen Heppia lutosa by applying the polyphasic method. The genus Scytonema in the basidiolichen Cyphellostereum was morphologically examined in lichen thalli. For the first time, the intracellular haustorial system of the basidiolichen genus Cyphellostereum was noted and investigated.
Phylogenetic analysis of Nostoc photobiont strains from Pannaria tavaresii and Parmeliella brisbanensis indicated high photobiont selectivity among Parmeliella brisbanensis samples from different regions of the world, while low photobiont selectivity occurred among Pannaria tavaresii samples from different geographical regions.
This dissertation is therefore an important contribution to the lichen flora of Vietnam and a significant improvement of the current knowledge about cyanolichens in this country.
This thesis is concerned with a phase field model for martensitic transformations in metastable austenitic steels. Within the phase field approach an order parameter is introduced to indicate whether the present phase is austenite or martensite. The evolving microstructure is described by the evolution of the order parameter, which is assumed to follow the time-dependent Ginzburg-Landau equation. The elastic phase field model is enhanced in two different ways to take further phenomena into account. First, dislocation movement is considered within a crystal plasticity setting. Second, the elastic model for martensitic transformations is combined with a phase field model for fracture. Finite element simulations are used to study separately the individual effects that contribute to the microstructure formation.
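As a rough illustration of the core evolution equation, the following is a minimal one-dimensional, purely chemical Allen-Cahn/Ginzburg-Landau sketch; the elastic, plastic, and fracture couplings of the thesis are omitted, and all parameter values are invented:

    import numpy as np

    # Order parameter phi in [0, 1]: phi = 0 austenite, phi = 1 martensite.
    # Free energy density f(phi) = W * phi^2 * (1 - phi)^2 plus a gradient term.
    N, dx, dt = 200, 1.0, 0.05
    M, W, kappa = 1.0, 1.0, 2.0          # mobility, barrier height, gradient coefficient

    x = np.arange(N) * dx
    phi = np.where(x < N * dx / 2, 1.0, 0.0)   # martensite on the left half

    def laplacian(u):
        return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2

    for _ in range(2000):
        dfdphi = 2 * W * phi * (1 - phi) * (1 - 2 * phi)    # derivative of the double well
        phi -= dt * M * (dfdphi - kappa * laplacian(phi))   # explicit time-dependent G-L step

    print(phi.min(), phi.max())   # the interface has relaxed to a smooth profile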
Software defined radios can be implemented on general purpose processors (CPUs), e.g. based on a PC. A processor offers high flexibility: It can not only be used to process the data samples, but also to control receiver functions, display a waterfall or run demodulation software. However, processors can only handle signals of limited bandwidth due to their comparatively low processing speed. For signals of high bandwidth the SDR algorithms have to be implemented as custom designed digital circuits on an FPGA chip. An FPGA provides a very high processing speed, but also lacks flexibility and user interfaces. Recently the FPGA manufacturer Xilinx has
introduced a hybrid system on chip called Zynq that combines both approaches. It features a dual-core ARM Cortex-A9 processor and an FPGA, offering the flexibility of a processor together with the processing speed of an FPGA on a single chip. The Zynq is therefore very interesting for use in SDRs. In this paper the
application of the Zynq and its evaluation board (Zedboard) will be discussed. As an example, a direct sampling receiver has been implemented on the Zedboard using a high-speed 16 bit ADC with 250 Msps.
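A sketch of the kind of front-end processing such a receiver performs in the FPGA fabric (digital down-conversion: NCO mixing, low-pass filtering, decimation), written here in NumPy for illustration; frequencies and filter design are made up and not those of the actual implementation:

    import numpy as np

    fs = 250e6             # ADC sample rate, 250 Msps
    f_rf = 7.1e6           # received carrier (illustrative shortwave frequency)
    n = np.arange(2**16)

    adc = np.cos(2 * np.pi * f_rf / fs * n)        # idealized real-valued ADC samples
    nco = np.exp(-2j * np.pi * f_rf / fs * n)      # numerically controlled oscillator
    baseband = adc * nco                           # complex mix down to 0 Hz

    decim = 1000                                   # 250 Msps -> 250 ksps
    taps = np.sinc(np.arange(-2000, 2001) / decim) / decim   # crude low-pass FIR
    iq = np.convolve(baseband, taps, mode='same')[::decim]   # filter and decimate

    print(iq.shape, np.abs(iq).mean())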
In this paper, we discuss the problem of approximating ellipsoid uncertainty sets with bounded (gamma) uncertainty sets. Robust linear programs with ellipsoid uncertainty lead to quadratically constrained programs, whereas robust linear programs with bounded uncertainty sets remain linear programs which are generally easier to solve.
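The difference can be seen from the textbook robust counterparts of a single uncertain constraint \(a^{\top}x \le b\) with \(a = \bar{a} + Pu\) (generic forms for ellipsoidal and box uncertainty, not the specific bounded \(\Gamma\)-sets of this paper):
\[
\|u\|_{2}\le 1:\quad \bar{a}^{\top}x + \|P^{\top}x\|_{2} \le b \quad\text{(conic quadratic)},
\qquad
\|u\|_{\infty}\le 1:\quad \bar{a}^{\top}x + \|P^{\top}x\|_{1} \le b \quad\text{(linearizable)}.
\]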
We call a bounded uncertainty set an inner approximation of an ellipsoid if it is contained in it. We consider two different inner approximation problems. The first problem is to find a bounded uncertainty set which sticks close to the ellipsoid such that a shrunken version of the ellipsoid is contained in it. The approximation is optimal if the required shrinking is minimal. In the second problem, we search for a bounded uncertainty set within the ellipsoid with maximum volume. We show how both problems can be solved analytically by stating explicit formulas for the optimal solutions of these problems.
Further, we present in a computational experiment how the derived approximation techniques can be used to approximate shortest path and network flow problems which are affected by ellipsoidal uncertainty.
Most of today’s wireless communication devices operate on unlicensed bands with uncoordinated spectrum access, with the consequence that RF interference and collisions are impairing the overall performance of wireless networks. In the classical design of network protocols, both packets in a collision are considered lost, such that channel access mechanisms attempt to avoid collisions proactively. However, with the current proliferation of wireless applications, e.g., WLANs, car-to-car networks, or the Internet of Things, this conservative approach is increasingly limiting the achievable network performance in practice. Instead of shunning interference, this thesis questions the notion of "harmful" interference and argues that interference can, when generated in a controlled manner, be used to increase the performance and security of wireless systems. Using results from information theory and communications engineering, we identify the causes for reception or loss of packets and apply these insights to design system architectures that benefit from interference. Because the effect of signal propagation and channel fading, receiver design and implementation, and higher layer interactions on reception performance is complex and hard to reproduce by simulations, we design and implement an experimental platform for controlled interference generation to strengthen our theoretical findings with experimental results. Following this philosophy, we introduce and evaluate system architectures that leverage interference.
First, we identify the conditions for successful reception of concurrent transmissions in wireless networks. We focus on the inherent ability of angular modulation receivers to reject interference when the power difference of the colliding signals is sufficiently large, the so-called capture effect. Because signal power fades over distance, the capture effect enables two or more sender–receiver pairs to transmit concurrently if they are positioned appropriately, in turn boosting network performance. Second, we show how to increase the security of wireless networks with a centralized network access control system (called WiFire) that selectively interferes with packets that violate a local security policy, thus effectively protecting legitimate devices from receiving such packets. WiFire’s working principle is as follows: a small number of specialized infrastructure devices, the guardians, are distributed alongside a network and continuously monitor all packet transmissions in the proximity, demodulating them iteratively. This enables the guardians to access the packet’s content before the packet fully arrives at the receiver. Using this knowledge the guardians classify the packet according to a programmable security policy. If a packet is deemed malicious, e.g., because its header fields indicate an unknown client, one or more guardians emit a limited burst of interference targeting the end of the packet, with the objective to introduce bit errors into it. Established communication standards use frame check sequences to ensure that packets are received correctly; WiFire leverages this built-in behavior to prevent a receiver from processing a harmful packet at all. This paradigm of „over-the-air“ protection without requiring any prior modification of client devices enables novel security services such as the protection of devices that cannot defend themselves because their performance limitations prohibit the use of complex cryptographic protocols, or of devices that cannot be altered after deployment.
This thesis makes several contributions. We introduce the first software-defined radio based experimental platform that is able to generate selective interference with the timing precision needed to evaluate the novel architectures developed in this thesis. It implements a real-time receiver for IEEE 802.15.4, giving it the ability to react to packets in a channel-aware way. Extending this system design and implementation, we introduce a security architecture that enables a remote protection of wireless clients, the wireless firewall. We augment our system with a rule checker (similar in design to Netfilter) to enable rule-based selective interference. We analyze the security properties of this architecture using physical layer modeling and validate our analysis with experiments in diverse environmental settings. Finally, we perform an analysis of concurrent transmissions. We introduce a new model that captures the physical properties correctly and show its validity with experiments, improving the state of the art in the design and analysis of cross-layer protocols for wireless networks.
Computer Vision (CV) problems, such as image classification and segmentation, have traditionally been solved by manual construction of feature hierarchies or incorporation of other prior knowledge. However, noisy images, varying viewpoints and lighting conditions of images, and clutters in real-world images make the problem challenging. Such tasks cannot be efficiently solved without learning from data. Therefore, many Deep Learning (DL) approaches have recently been successful for various CV tasks, for instance, image classification, object recognition and detection, action recognition, video classification, and scene labeling. The main focus of this thesis is to investigate a purely learning-based approach, particularly, Multi-Dimensional LSTM (MD-LSTM) recurrent neural networks to tackle the challenging CV tasks, classification and segmentation on 2D and 3D image data. Due to the structural nature of MD-LSTM, the network learns directly from raw pixel values and takes the complex spatial dependencies of each pixel into account. This thesis provides several key contributions in the field of CV and DL.
Several MD-LSTM network architectural options are suggested based on the type of input and output, as well as the required tasks. In addition to the main layers, which are an input layer, a hidden layer, and an output layer, several additional layers can be added, such as a collapse layer and a fully connected layer. First, a single Two Dimensional LSTM (2D-LSTM) is directly applied on texture images for segmentation and shows improvement over other texture segmentation methods. Moreover, a 2D-LSTM layer with a collapse layer is applied for image classification on texture and scene images and provides accurate classification results. In addition, a deeper model with a fully connected layer is introduced to deal with more complex images for scene labeling and outperforms the other state-of-the-art methods including deep Convolutional Neural Networks (CNN). Here, several input and output representation techniques are introduced to achieve robust classification. Randomly sampled windows used as input are transformed in scale and rotation, and their predictions are integrated to obtain the final classification. To achieve multi-class image classification on scene images, several pruning techniques are introduced. This framework provides good results in automatic web-image tagging. The next contribution is an investigation of 3D data with MD-LSTM. The traditional cuboid order of computations in Multi-Dimensional LSTM (MD-LSTM) is re-arranged in pyramidal fashion. The resulting Pyramidal Multi-Dimensional LSTM (PyraMiD-LSTM) is easy to parallelize, especially for 3D data such as stacks of brain slice images. PyraMiD-LSTM was tested on 3D biomedical volumetric images and achieved the best known pixel-wise brain image segmentation results and competitive results on Electron Microscopy (EM) data for membrane segmentation.
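The structural idea can be illustrated with a stripped-down (non-gated) two-dimensional recurrence, in which each hidden state depends on the current pixel and on the already computed upper and left neighbours; a real 2D-LSTM adds gates and cell states and runs one such scan per image corner (toy NumPy sketch, all sizes invented):

    import numpy as np

    def scan_2d(x, W_in, W_up, W_left):
        # x: (height, width, channels); returns hidden states of size W_in.shape[1].
        H, W, d = x.shape[0], x.shape[1], W_in.shape[1]
        h = np.zeros((H, W, d))
        for i in range(H):
            for j in range(W):
                up = h[i - 1, j] @ W_up if i > 0 else 0.0
                left = h[i, j - 1] @ W_left if j > 0 else 0.0
                h[i, j] = np.tanh(x[i, j] @ W_in + up + left)
        return h

    rng = np.random.default_rng(0)
    img = rng.standard_normal((8, 8, 3))            # toy 8x8 image with 3 channels
    h = scan_2d(img,
                0.1 * rng.standard_normal((3, 16)),
                0.1 * rng.standard_normal((16, 16)),
                0.1 * rng.standard_normal((16, 16)))
    print(h.shape)                                   # (8, 8, 16)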
To validate the framework, several challenging databases for classification and segmentation are proposed to overcome the limitations of current databases. First, scene images are randomly collected from the web and used for scene understanding, i.e., a web-scene image dataset for multi-class image classification. To achieve multi-class image classification, the training and testing images are generated in different settings. For training, images belong to a single pre-defined category and are used as in regular single-class image classification. For testing, however, images containing multiple classes are randomly collected by a web-image search engine by querying the categories. All scene images include noise, background clutter, and unrelated content, and are also diverse in quality and resolution. This setting makes it possible to evaluate the database for real-world applications. Secondly, an automated blob-mosaics texture dataset generator is introduced for segmentation. Random 2D Gaussian blobs are generated and filled with random material textures. These textures contain diverse changes in illumination, scale, rotation, and viewpoint. The generated images are very challenging since it is hard, even visually, to separate the related regions.
Overall, the contributions in this thesis are major advancements in the direction of solving image analysis problems with Long Short-Term Memory (LSTM) without the need of any extra processing or manually designed steps. We aim at improving the presented framework to achieve the ultimate goal of accurate fine-grained image analysis and human-like understanding of images by machines.
Cells and organelles are enclosed by membranes that consist of a lipid bilayer harboring highly
diverse membrane proteins (MPs). These carry out vital functions, and α-helical MPs, in
particular, are of outstanding pharmacological importance, as they comprise more than half of
all drug targets. However, knowledge from MP research is limited, as MPs require membrane-mimetic environments to retain their native structures and functions and, thus, are not readily
amenable to in vitro studies. To gain insight into vectorial functions, as in the case of channels
and transporters, and into topology, which describes MP conformation and orientation in the
context of a membrane, purified MPs need to be reconstituted, that is, transferred from detergent
micelles into a lipid-bilayer system.
The ultimate goal of this thesis was to elucidate the membrane topology of Mistic, which is
an essential regulator of biofilm formation in Bacillus subtilis consisting of four α-helices. The
conformational stability of Mistic has been shown to depend on the presence of a hydrophobic
environment. However, Mistic is characterized by an uncommonly hydrophilic surface, and
its helices are significantly shorter than transmembrane helices of canonical integral MPs.
Therefore, the means by which its association with the hydrophobic interior of a lipid bilayer
is accomplished is a subject of much debate. To tackle this issue, Mistic was produced and
purified, reconstituted, and subjected to topological studies.
Reconstitution of Mistic in the presence of lipids was performed by lowering the detergent
concentration to subsolubilizing concentrations via addition of cyclodextrin. To fully exploit
the advantages offered by cyclodextrin-mediated detergent removal, a quantitative model was
established that describes the supramolecular state of the reconstitution mixture and allows
for the prediction of reconstitution trajectories and their cross points with phase boundaries.
Automated titrations enabled spectroscopic monitoring of Mistic reconstitutions in real time.
On the basis of the established reconstitution protocol, the membrane topology of Mistic was
investigated with the aid of fluorescence quenching experiments and oriented circular dichroism
spectroscopy. The results of these experiments reveal that Mistic appears to be an exception
from the commonly observed transmembrane orientation of α-helical MPs, since it exhibits
a highly unusual in-plane topology, which goes in line with recent coarse-grained molecular
dynamics simulations.
The task of printed Optical Character Recognition (OCR), though considered ``solved'' by many, still poses several challenges. The complex grapheme structure of many scripts, such as Devanagari and Urdu Nastaleeq, greatly lowers the performance of state-of-the-art OCR systems.
Moreover, the digitization of historical and multilingual documents still requires much probing. The lack of benchmark datasets further complicates the development of reliable OCR systems. This thesis aims to find answers to some of these challenges using contemporary machine learning technologies. Specifically, Long Short-Term Memory (LSTM) networks have been employed to OCR modern as well as historical monolingual documents. The excellent OCR results obtained on these have led us to extend their application to multilingual documents.
The first major contribution of this thesis is to demonstrate the usability of LSTM networks for monolingual documents. The LSTM networks yield very good OCR results on various modern and historical scripts, without using sophisticated features and post-processing techniques. The set of modern scripts include modern English, Urdu Nastaleeq and Devanagari. To address the challenge of OCR of historical documents, this thesis focuses on Old German Fraktur script, medieval Latin script of the 15th century, and Polytonic Greek script. LSTM-based systems outperform the contemporary OCR systems on all of these scripts. To cater for the lack of ground-truth data, this thesis proposes a new methodology, combining segmentation-based and segmentation-free OCR approaches, to OCR scripts for which no transcribed training data is available.
Another major contribution of this thesis is the development of a novel multilingual OCR system. A unified framework for dealing with different types of multilingual documents has been proposed. The core motivation behind this generalized framework is the human reading ability to process multilingual documents, where no script identification takes place.
In this design, the LSTM networks recognize multiple scripts simultaneously without the need to identify different scripts. The first step in building this framework is the realization of a language-independent OCR system which recognizes multilingual text in a single step. This language-independent approach is then extended to script-independent OCR that can recognize multiscript documents using a single OCR model. The proposed generalized approach yields a low error rate (1.2%) on a test corpus of English-Greek bilingual documents.
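A minimal sketch of such a segmentation-free line recognizer (a bidirectional LSTM reading a text-line image column by column, trained with CTC so that no per-character segmentation or script identification is required), written here in PyTorch with invented image height, alphabet size, and hyperparameters:

    import torch
    import torch.nn as nn

    class LineOCR(nn.Module):
        def __init__(self, img_height=48, hidden=128, num_classes=100):  # class 0 = CTC blank
            super().__init__()
            self.rnn = nn.LSTM(img_height, hidden, bidirectional=True)
            self.out = nn.Linear(2 * hidden, num_classes)

        def forward(self, cols):                     # cols: (width, batch, img_height)
            feats, _ = self.rnn(cols)
            return self.out(feats).log_softmax(-1)   # per-column class log-probabilities

    model, ctc = LineOCR(), nn.CTCLoss(blank=0)
    cols = torch.randn(200, 4, 48)                   # 4 dummy text lines, 200 columns wide
    targets = torch.randint(1, 100, (4, 30))         # dummy transcriptions
    loss = ctc(model(cols), targets,
               torch.full((4,), 200, dtype=torch.long),
               torch.full((4,), 30, dtype=torch.long))
    loss.backward()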
In summary, this thesis aims to extend the research in document recognition, from modern Latin scripts to Old Latin, to Greek and to other ``under-privileged'' scripts such as Devanagari and Urdu Nastaleeq.
It also attempts to add a different perspective in dealing with multilingual documents.
Software is becoming increasingly concurrent: parallelization, decentralization, and reactivity necessitate asynchronous programming in which processes communicate by posting messages/tasks to others’ message/task buffers. Asynchronous programming has been widely used to build fast servers and routers, embedded systems and sensor networks, and is the basis of Web programming using Javascript. Languages such as Erlang and Scala have adopted asynchronous programming as a fundamental concept with which highly scalable and highly reliable distributed systems are built.
Asynchronous programs are challenging to implement correctly: the loose coupling between asynchronously executed tasks makes the control and data dependencies difficult to follow. Even subtle design and programming mistakes in the programs have the capability to introduce erroneous or divergent behaviors. As asynchronous programs are typically written to provide a reliable, high-performance infrastructure, there is a critical need for analysis techniques to guarantee their correctness.
In this dissertation, I provide scalable verification and testing tools to make asynchronous programs more reliable. I show that the combination of counter abstraction and partial order reduction is an effective approach for the verification of asynchronous systems by presenting PROVKEEPER and KUAI, two scalable verifiers for two types of asynchronous systems. I also provide a theoretical result proving that a counter-abstraction-based algorithm called expand-enlarge-check is an asymptotically optimal algorithm for the coverability problem of branching vector addition systems, as which many asynchronous programs can be modeled. In addition, I present BBS and LLSPLAT, two testing tools for asynchronous programs that efficiently uncover many subtle memory violation bugs.
Human forest modification is among the largest global drivers of terrestrial degradation
of biodiversity, species interactions, and ecosystem functioning. One of the most
pertinent components, forest fragmentation, has a long history in ecological research
across the globe, particularly in lower latitudes. However, we still know little about how
fragmentation shapes temperate ecosystems, irrespective of the ancient status quo of
European deforestation. Furthermore, its interaction with another pivotal component
of European forests, silvicultural management, is practically unexplored. Hence,
answering the question how anthropogenic modification of temperate forests affects
fundamental components of forest ecosystems is essential basic research that has
been neglected thus far. Most basal ecosystem elements are plants and their insect
herbivores, as they form the energetic basis of the trophic pyramid. Furthermore, their
respective biodiversity, functional traits, and the networks of interactions they
establish are key for a multitude of ecosystem functions, not least ecosystem stability.
Hence, the thesis at hand aimed to disentangle this complex system of
interdependencies of human impacts, biodiversity, species traits and inter-species
interactions.
The first step lay in understanding how woody plant assemblages are shaped by
human forest modification. For this purpose, field investigations in 57 plots in the
hyperfragmented cultural landscape of the Northern Palatinate highlands (SW
Germany) were conducted, censusing > 4,000 tree/shrub individuals from 34 species.
Use of novel, integrative indices for different types of land-use allowed an accurate
quantification of biotic responses. Intriguingly, woody tree/shrub communities reacted
strikingly positively to forest fragmentation, with increases in alpha and beta diversity,
as well as proliferation of heat/drought/light adapted pioneer species. Contrarily,
managed interior forests were homogenized/constrained in biodiversity, with
dominance of shade/cold adapted commercial tree species. Comparisons with recently
unmanaged stands (> 40 a) revealed first indications of nascent conversion to old-growth conditions, with larger variability in light conditions and subsequent
community composition. Reactions to microclimatic conditions, the relationship
between associated species traits and the corresponding species pool, as well as
facilitative/constraining effects by foresters were discussed as underlying mechanisms.
Reactions of herbivore assemblages to forest fragmentation and the subsequent
changes in host plant communities were assessed by comprehensive sampling of >
1,000 live herbivores from 134 species in the forest understory. Diversity was, similarly to plant communities, higher in fragmentation-affected habitats, particularly
in edges of continuous control forests. Furthermore, average trophic specialization
showed an identical pattern. Mechanistically, benefits from microclimatic conditions,
host availability, as well as pronounced niche differentiation are deemed responsible.
While communities were heterogeneous, with no segregation across habitats (small forest fragments, edges, and interior of control forests), vegetation diversity, herbivore
diversity, as well as trophic specialization were identified to shape community
composition. This probably reflected a gradient from generalist/species-poor to specialist/species-rich herbivore assemblages.
Insect studies conducted in forest systems are doomed to incompleteness
without considering ‘the last biological frontier’, the tree canopies. To access their
biodiversity, relationship to edge effects, and their conservational value, the
arboricolous arthropod fauna of 24 beech (Fagus sylvatica) canopies was sampled via
insecticidal knockdown (‘fogging’). This resulted in an exhaustive collection of > 46,000
specimens from 24 major taxonomic/functional groups. Abundance distributions were
markedly negative exponential, indicating high abundance variability in tree crowns.
Individuals of six pertinent orders were identified to species level, returning > 3,100
individuals from 175 species and 52 families. This high diversity differed only marginally across habitats, with slightly higher species richness in edge canopies. However,
communities in edge crowns were noticeably more heterogeneous than those in the
forest interior, possibly due to higher variability in environmental edge conditions. In
total, 49 species with protective value were identified, of which only one showed
habitat preferences (for near-natural interior forests). Among them, six species (all
beetles, Coleoptera) were classified as ‘priority species’ for conservation efforts. Hence,
beech canopies of the Northern Palatinate highlands can be considered strongholds of
insect biodiversity, incorporating many species of particular protective value.
The intricacy of plant-herbivore interaction networks and their relationship to
forest fragmentation is largely unexplored, particularly in Central Europe. Illumination
of this matter is all the more important, as ecological networks are highly relevant for
ecosystem stability, particularly in the face of additional anthropogenic disturbances,
such as climate change. Hence, plant-herbivore interaction networks (PHNs) were
constructed from woody plants and their associated herbivores, sampled alive in the
understory. Herbivory verification was achieved using no-choice-feeding assays, as well
as literature references. In total, networks across small forest fragments, edges, and
the forest interior consisted of 696 interactions. Network complexity and trophic niche
redundancy were compared across habitats using a rarefaction-like resampling
procedure. PHNs in fragmentation affected forest habitats were significantly more
complex, as well as more redundant in their realized niches, despite being composed of
relatively more specialist species. Furthermore, network robustness to climate change
was quantified utilizing four different scenarios for climate change susceptibility of
involved plants. In this procedure, remaining herbivores in the network were measured
upon successive loss of their host plant species. Consistently, PHNs in edges (and to a
smaller degree in small fragments) withstood primary extinction of plant species
longer, making them more robust. This was attributed to the high prevalence of
heat/drought-adapted species, as well as to beneficial effects of network topography
(complexity and redundancy). Consequently, strong correlative relationships were
found between realized niche redundancy and climate change robustness of PHNs.
This was both the first time that biologically realistic extinctions (instead of, e.g., random extinctions) were used to measure network robustness, and that topographical
network parameters were identified as potential indicators for network robustness
against climate change.
In synthesis, in the light of global biotic degradation due to human forest
modification, a differentiated view is necessary. Ecosystems react differently to
anthropogenic disturbances, and it seems the particular features present in Central
European forests (ancient deforestation, extensive management, and, most importantly,
high richness in open-forest plant species) cause patterns partly opposed to those in
other biomes. Lenient microclimates and diverse plant communities facilitate
equally diverse herbivore assemblages, and hence complex and robust networks,
opposed to the forest interior. Therefore, in the reality of extensively used cultural
landscapes, fragmentation affected forest ecosystems, particularly forest edges, can be
perceived as reservoirs for biodiversity and ecosystem functionality. Nevertheless, as
practically all forest habitats considered in this thesis are under human cultivation,
recommendations for ecological enhancement of all forest habitats are discussed.
This thesis is concerned with interest rate modeling by means of the potential approach. The contribution of this work is twofold. First, by making use of the potential approach and the theory of affine Markov processes, we develop a general class of rational models to the term structure of interest rates which we refer to as "the affine rational potential model". These models feature positive interest rates and analytical pricing formulae for zero-coupon bonds, caps, swaptions, and European currency options. We present some concrete models to illustrate the scope of the affine rational potential model and calibrate a model specification to real-world market data. Second, we develop a general family of "multi-curve potential models" for post-crisis interest rates. Our models feature positive stochastic basis spreads, positive term structures, and analytic pricing formulae for interest rate derivatives. This modeling framework is also flexible enough to accommodate negative interest rates and positive basis spreads.
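For orientation, the potential approach expresses the time-\(t\) price of a zero-coupon bond maturing at \(T\) through a positive supermartingale (state-price density) \(V\),
\[
P(t,T) \;=\; \frac{\mathbb{E}\!\left[\,V_{T}\mid \mathcal{F}_{t}\,\right]}{V_{t}},
\]
so that positivity of interest rates corresponds to the supermartingale property of \(V\); the thesis constructs such state-price densities from affine Markov processes (this is the generic textbook form, not a model-specific formula of the thesis).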
Interconnected, autonomously driving cars shall realize the vision of a zero-accident, low energy mobility in spite of a fast increasing traffic volume. Tightly interconnected medical devices and health care systems shall ensure the health of an aging society. And interconnected virtual power plants based on renewable energy sources shall ensure a clean energy supply in a society that consumes more energy than ever before. Such open systems of systems will play an essential role for economy and society.
Open systems of systems dynamically connect to each other in order to collectively provide a superordinate functionality, which could not be provided by a single system alone. The structure as well as the behavior of an open system of systems dynamically emerge at runtime, leading to very flexible solutions working under various different environmental conditions. This flexibility and adaptivity of systems of systems are key for realizing the above-mentioned scenarios.
On the other hand, however, this leads to uncertainties since the emerging structure and behavior of a system of systems can hardly be anticipated at design time. This impedes the indispensable safety assessment of such systems in safety-critical application domains. Existing safety assurance approaches presume that a system is completely specified and configured prior to a safety assessment. Therefore, they cannot be applied to open systems of systems. In consequence, safety assurance of open systems of systems could easily become a bottleneck impeding or even preventing the success of this promising new generation of embedded systems.
For this reason, this thesis introduces an approach for the safety assurance of open systems of systems. To this end, we shift parts of the safety assurance lifecycle into runtime in order to dynamically assess the safety of the emerging system of systems. We use so-called safety models at runtime to enable systems to assess the safety of an emerging system of systems themselves. This leads to a very flexible runtime safety assurance framework.
To this end, this thesis describes the fundamental knowledge on safety assurance and model-driven development, which are the indispensable prerequisites for defining safety models at runtime. Based on these fundamentals, we illustrate how we modularized and formalized conventional safety assurance techniques using model-based representations and analyses. Finally, we explain how we advanced these design time safety models to safety models that can be used by the systems themselves at runtime and how we use these safety models at runtime to create an efficient and flexible runtime safety assurance framework for open systems of systems.
In retail, assortment planning refers to selecting a subset of products to offer that maximizes profit. Assortments can be planned for a single store or a retailer with multiple chain stores where demand varies between stores. In this paper, we assume that a retailer with a multitude of stores wants to specify her offered assortment. To suit all local preferences, regionalization and store-level assortment optimization are widely used in practice and lead to competitive advantages. When selecting regionalized assortments, a tradeoff must be struck between expensive, customized assortments in every store and inexpensive, identical assortments in all stores that neglect demand variation.
We formulate a stylized model for the regionalized assortment planning problem (APP) with capacity constraints and given demand. In our approach, a 'common assortment' that is supplemented by regionalized products is selected. While products in the common assortment are offered in all stores, products in the local assortments are customized and vary from store to store.
Concerning the computational complexity, we show that the APP is strongly NP-complete. The core of this hardness result lies in the selection of the common assortment. We formulate the APP as an integer program and provide algorithms and methods for obtaining approximate solutions and solving large-scale instances.
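As a rough illustration (our own stylized notation, not necessarily the exact model of the paper): with stores \(s\), products \(p\), profits \(d_{p,s}\), a binary variable \(y_{p}\) for the common assortment, a binary variable \(x_{p,s}\) for the local assortment of store \(s\), a common capacity \(k_{0}\) and local capacities \(k_{s}\), one obtains an integer program of the form
\[
\max \sum_{s}\sum_{p} d_{p,s}\,\bigl(y_{p} + x_{p,s}\bigr)
\quad\text{s.t.}\quad
\sum_{p} y_{p} \le k_{0},\qquad
\sum_{p} x_{p,s} \le k_{s}\ \ \forall s,\qquad
y_{p} + x_{p,s} \le 1\ \ \forall p,s,\qquad
x_{p,s},\,y_{p}\in\{0,1\}.
\]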
Lastly, we perform computational experiments to analyze the benefits of regionalized assortment planning depending on the variation in customer demands between stores.
Dual-Pivot Quicksort and Beyond: Analysis of Multiway Partitioning and Its Practical Potential
(2016)
Multiway Quicksort, i.e., partitioning the input in one step around several pivots, has received much attention since Java 7’s runtime library uses a new dual-pivot method that outperforms by far the old Quicksort implementation. The success of dual-pivot Quicksort is most likely due to more efficient usage of the memory hierarchy, which gives reason to believe that further improvements are possible with multiway Quicksort.
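For illustration, an out-of-place toy version of dual-pivot (three-way) partitioning; the library algorithm partitions in place and chooses its pivots from a sample, which this sketch deliberately ignores:

    def dual_pivot_quicksort(a):
        if len(a) < 2:
            return list(a)
        p, q = sorted((a[0], a[-1]))                  # two pivots, p <= q
        rest = a[1:-1]
        small = [x for x in rest if x < p]            # keys below both pivots
        mid   = [x for x in rest if p <= x <= q]      # keys between the pivots
        large = [x for x in rest if x > q]            # keys above both pivots
        return (dual_pivot_quicksort(small) + [p] +
                dual_pivot_quicksort(mid) + [q] +
                dual_pivot_quicksort(large))

    print(dual_pivot_quicksort([5, 3, 8, 1, 9, 2, 7]))   # [1, 2, 3, 5, 7, 8, 9]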
In this dissertation, I conduct a mathematical average-case analysis of multiway Quicksort including the important optimization to choose pivots from a sample of the input. I propose a parametric template algorithm that covers all practically relevant partitioning methods as special cases, and analyze this method in full generality. This allows me to analytically investigate in depth what effect the parameters of the generic Quicksort have on its performance. To model the memory-hierarchy costs, I also analyze the expected number of scanned elements, a measure for the amount of data transferred from memory that is known to also approximate the number of cache misses very well. The analysis unifies previous analyses of particular Quicksort variants under particular cost measures in one generic framework.
A main result is that multiway partitioning can reduce the number of scanned elements significantly, while it does not save many key comparisons; this explains why the earlier studies of multiway Quicksort did not find it promising. A highlight of this dissertation is the extension of the analysis to inputs with equal keys. I give the first analysis of Quicksort with pivot sampling and multiway partitioning on an input model with equal keys.
A vehicle's fatigue damage is a highly relevant figure in the complete vehicle design process.
Long-term observations and statistical experiments help to determine the influence of different parts of the vehicle, the driver and the surrounding environment.
This work focuses on modeling one of the most important environmental influence factors: road roughness. The quality of the road is highly dependent on several surrounding factors, which can be used to create mathematical models.
Such models can be used for the extrapolation of information and an estimation of the environment for statistical studies.
The target quantity we focus on in this work is the discrete International Roughness Index, or discrete IRI. The class of models we use and evaluate is a discriminative classification model called the Conditional Random Field.
We develop a suitable model specification and show new variants of stochastic optimizations to train the model efficiently.
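For reference, a linear-chain CRF of this kind models the conditional distribution of the label sequence \(y\) (here, discrete IRI classes along the road) given the observed features \(x\) as
\[
p(y \mid x) \;=\; \frac{1}{Z(x)}\,\exp\!\Bigl(\sum_{t=1}^{T}\sum_{k}\lambda_{k}\,f_{k}(y_{t-1},y_{t},x,t)\Bigr),
\qquad
Z(x) \;=\; \sum_{y'}\exp\!\Bigl(\sum_{t=1}^{T}\sum_{k}\lambda_{k}\,f_{k}(y'_{t-1},y'_{t},x,t)\Bigr),
\]
and training maximizes the conditional log-likelihood in the weights \(\lambda_{k}\), which is the objective the stochastic optimization variants operate on (standard formulation; the concrete feature functions are model-specific).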
The model is also applied to simulated and real world data to show the strengths of our approach.
In this paper, we show the feasibility of low supply voltage for SRAM (Static Random Access Memory) by adding error correction coding (ECC). In SRAM, the memory matrix needs to be powered for data retentive standby operation, resulting in standby leakage current. Particularly for low duty-cycle systems, the energy consumed due to standby leakage current can become significant. Lowering the supply voltage (VDD) during standby mode to below the specified data retention voltage (DRV) helps decrease the leakage current. At these VDD levels errors start to appear, which we can remedy by adding ECC. We show in this paper that addition of a simple single error correcting (SEC) ECC enables us to decrease the leakage current by 45% and leakage power by 72%. We verify this on a large set of commercially available standard 40nm SRAMs.
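The principle can be illustrated with a toy single-error-correcting Hamming(7,4) code (our own example; the code and word width actually used in the paper are not reproduced here): a single cell that loses its value at very low standby VDD is located via the syndrome and corrected on wake-up.

    import numpy as np

    G = np.array([[1,0,0,0,1,1,0],      # generator matrix (systematic form)
                  [0,1,0,0,1,0,1],
                  [0,0,1,0,0,1,1],
                  [0,0,0,1,1,1,1]])
    H = np.array([[1,1,0,1,1,0,0],      # parity-check matrix, H @ G.T = 0 (mod 2)
                  [1,0,1,1,0,1,0],
                  [0,1,1,1,0,0,1]])

    def encode(data4):
        return (np.array(data4) @ G) % 2

    def correct(word7):
        syndrome = (H @ word7) % 2
        if syndrome.any():                            # non-zero syndrome -> single-bit error
            pos = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
            word7 = word7.copy()
            word7[pos] ^= 1
        return word7[:4]                              # systematic code: data in first 4 bits

    cw = encode([1, 0, 1, 1])
    cw[5] ^= 1                                        # simulate a retention failure
    print(correct(cw))                                # -> [1 0 1 1]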
We investigate the long-term behaviour of diffusions on the non-negative real numbers under killing at some random time. Killing can occur at zero as well as in the interior of the state space. The diffusion follows a stochastic differential equation driven by a Brownian motion. The diffusions we are working with will almost surely be killed. In large parts of this thesis we only assume the drift coefficient to be continuous. Further, we suppose that zero is regular and that infinity is natural. We condition the diffusion on survival up to time t and let t tend to infinity looking for a limiting behaviour.
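A Monte Carlo sketch of the object under study (not part of the thesis, which is analytical; drift, killing rate, and all numerical parameters below are invented): simulate \(dX_{t} = b(X_{t})\,dt + dW_{t}\), kill paths at zero and in the interior at rate \(k(x)\), and look at the empirical law of the paths still alive at time \(t\).

    import numpy as np

    rng = np.random.default_rng(1)
    b = lambda x: -0.5 * x            # toy drift (continuous, mean-reverting)
    k = lambda x: 0.1 * x             # interior killing rate
    dt, t_end, n_paths = 0.01, 10.0, 50_000

    x = np.full(n_paths, 1.0)
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(int(t_end / dt)):
        x[alive] += b(x[alive]) * dt + np.sqrt(dt) * rng.standard_normal(alive.sum())
        alive &= x > 0                                              # killing at the boundary
        alive &= rng.random(n_paths) > k(np.maximum(x, 0.0)) * dt   # interior killing

    # Survival probability and the conditional (quasi-stationary-like) mean.
    print(alive.mean(), x[alive].mean())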
To continue reducing voltage in scaled technologies, both circuit and architecture-level resiliency techniques are needed to tolerate process-induced defects, variation, and aging in SRAM cells. Many different resiliency schemes have been proposed and evaluated, but most prior results focus on voltage reduction instead of energy reduction. At the circuit level, device cell architectures and assist techniques have been shown to lower Vmin for SRAM, while at the architecture level, redundancy and cache disable techniques have been used to improve resiliency at low voltages. This paper presents a unified study of error tolerance for both circuit and architecture techniques and estimates their area and energy overheads. Optimal techniques are selected by evaluating both the error-correcting abilities at low supplies and the overheads of each technique in a 28nm technology. The results can be applied to many of the emerging memory technologies.
Emerging Memories (EMs) could benefit from Error Correcting Codes (ECCs) able to correct a few errors in a few nanoseconds. The low latency is necessary to meet the DRAM-like and/or eXecuted-in-Place requirements of Storage Class Memory devices. The error correction capability would help manufacturers to cope with unknown failure mechanisms and to fulfill the market demand for a rapid increase in density. This paper shows the design of an ECC decoder for a shortened BCH code with 256-data-bit page able to correct three errors in less than 3 ns. The tight latency constraint is met by pre-computing the coefficients of carefully chosen Error Locator Polynomials, by optimizing the operations in the Galois Fields and by resorting to a fully parallel combinatorial implementation of the decoder. The latency and the area occupancy are first estimated by the number of elementary gates to traverse, and by the total number of elementary gates of the decoder. Eventually, the implementation of the solution by Synopsys topographical synthesis methodology in 54nm logic gate length CMOS technology gives a latency lower than 3 ns and a total area less than \(250 \cdot 10^3 \mu m^2\).
Magnetic spin-based memory technologies are a promising solution to overcome the upcoming limits of microelectronics. Nevertheless, the long write latency and high write energy of these memory technologies compared to SRAM make it difficult to use them for fast microprocessor memories, such as L1-Caches. However, the recent advent of the Spin Orbit Torque (SOT) technology changed the story: indeed, it potentially offers a writing speed comparable to SRAM with a much better density than SRAM and an infinite endurance, paving the way to a new paradigm in processor architectures, with the introduction of non-volatility in all levels of the memory hierarchy towards fully normally-off and instant-on processors. This paper presents a full design flow, from device to system, allowing the evaluation of the potential of SOT for microprocessor cache memories, and very encouraging simulation results using this framework.
A counter-based read circuit tolerant to process variation for low-voltage operating STT-MRAM
(2016)
The capacity of embedded memory on LSIs has kept increasing. It is important to reduce the leakage power of embedded memory for low-power LSIs. In fact, the ITRS predicts that the leakage power in embedded memory will account for 40% of all power consumption by 2024 [1]. A spin transfer torque magneto-resistance random access memory (STT-MRAM) is promising for use as non-volatile memory to reduce the leakage power. It is useful because it can function at low voltages and has a lifetime of over \(10^{16}\) write cycles [2]. In addition, the STT-MRAM technology has a smaller bit cell than an SRAM, making the STT-MRAM suitable for use in high-density products [3–7]. The STT-MRAM uses a magnetic tunnel junction (MTJ). The MTJ has two states: a parallel state and an anti-parallel state. These states mean that the magnetization directions of the MTJ’s layers are either the same or different. This pair of directions determines the MTJ’s magneto-resistance value. The state of the MTJ can be changed by the flowing current. The MTJ resistance becomes low in the parallel state and high in the anti-parallel state. The MTJ potentially operates at less than 0.4 V [8]. On the other hand, it is difficult to design peripheral circuitry for an STT-MRAM array at such a low voltage. In this paper, we propose a counter-based read circuit that functions at 0.4 V, which is tolerant of process variation and temperature fluctuation.