In the present work, the magnetic moments of isolated Single Molecule Magnets (SMMs) and transition metal clusters were investigated. Gas-phase X-ray Magnetic Circular Dichroism (XMCD) in combination with sum rule analysis served to separate the total magnetic moments of the investigated species into their spin and orbital contributions. Two different mass-spectrometry-based setups were used for the presented investigations of transition metal clusters (GAMBIT setup) and of single molecule magnets (NanoClusterTrap). Both experiments were coupled to the UE52-PGM beamline at the BESSY II synchrotron facility (Helmholtz Zentrum Berlin), which provided the necessary polarized X-ray photons. Investigating the given compounds as isolated molecules in the gas phase enabled a determination of their intrinsic magnetic properties free of any influences from, e.g., a surrounding bulk or a supporting surface.
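For reference, the sum rules mentioned above are commonly applied to 3d transition metals at the L2,3 edges in the following form (after Thole et al. and Carra et al.; the notation here is the standard one, not necessarily that of the thesis). With p = ∫_{L3}(μ⁺ − μ⁻) dE, q = ∫_{L3+L2}(μ⁺ − μ⁻) dE, r = ∫_{L3+L2}(μ⁺ + μ⁻) dE, and n_{3d} the d-electron count,

```latex
\begin{align*}
  m_{\mathrm{orb}} &= -\frac{4q}{3r}\,(10 - n_{3d})\,\mu_B, \\
  m_{\mathrm{spin}} + 7\langle T_z \rangle &\approx -\frac{6p - 4q}{r}\,(10 - n_{3d})\,\mu_B,
\end{align*}
```

where ⟨T_z⟩ is the magnetic dipole term, which can be non-negligible for low-symmetry species such as isolated molecules, so the spin sum rule strictly yields only an effective spin moment.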
The Wilkie model is a stochastic asset model developed by A.D. Wilkie in 1984 to explore the behaviour of investment factors of insurers in the United Kingdom. Even so, no analysis has studied the Wilkie model in a portfolio optimization framework thus far. The Wilkie model was originally formulated over a discrete-time horizon, and we apply its concepts to develop a suitable ARIMA model for Malaysian data using the Box-Jenkins methodology. We obtained the estimated parameters for each sub-model of the Wilkie model that suit the Malaysian case, which permits us to analyse the results from both statistical and economic viewpoints. We then review the continuous-time case, which was initially introduced by Terence Chan in 1998. The continuous-time Wilkie model is then employed to develop the wealth equation of a portfolio that consists of a bond and a stock. We are interested in building portfolios based on three well-known trading strategies: a self-financing strategy, a constant growth optimal strategy, and a buy-and-hold strategy. In dealing with the portfolio optimization problems, we use the stochastic control technique, consisting of the maximization problem itself, the Hamilton-Jacobi-Bellman (HJB) equation, the solution to the HJB equation, and finally the verification theorem. In finding the optimal portfolio, we obtained the specific solution of the HJB equation and proved the solution via the verification theorem. For the simple buy-and-hold strategy, we use mean-variance analysis to solve the portfolio optimization problem.
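To fix ideas, here is a minimal sketch of the standard continuous-time portfolio setting alluded to above, with constant market coefficients assumed purely for illustration (the Wilkie-driven dynamics of the thesis replace these constants). With bond rate r, stock drift μ, volatility σ, and π_t the fraction of wealth invested in the stock, a self-financing wealth process X_t follows

```latex
dX_t = X_t\bigl[(r + \pi_t(\mu - r))\,dt + \pi_t \sigma\, dW_t\bigr],
\qquad
V_t + \sup_{\pi}\Bigl\{x\bigl(r + \pi(\mu - r)\bigr)V_x
  + \tfrac{1}{2}\pi^{2}\sigma^{2}x^{2}V_{xx}\Bigr\} = 0,
\quad V(T,x) = U(x),
```

where the second equation is the HJB equation for the value function V of the terminal-utility maximization problem. Its first-order condition yields the candidate optimum π* = −(μ − r)V_x / (σ²xV_xx), which a verification theorem then confirms as optimal.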
Information Visualization (InfoVis) and Human-Computer Interaction (HCI) have strong ties with each other. Visualization supports the human cognitive system by providing interactive and meaningful images of the underlying data. The HCI domain, on the other hand, is concerned with the usability of the designed visualization from the human perspective. Thus, designing a visualization system requires considering many factors in order to achieve the desired functionality and system usability. Achieving these goals helps users understand the inner behavior of complex data sets in less time.
Graphs are widely used data structures for representing relations between data elements in complex applications. Due to the diversity of this data type, graphs have been applied in numerous information visualization applications (e.g., state transition diagrams, social networks, etc.). Therefore, many graph layout algorithms have been proposed in the literature to help in visualizing this rich data type. Some of these algorithms are used to visualize large graphs, while others handle medium-sized graphs. Regardless of the graph size, the resulting layout should be understandable from the users' perspective, and at the same time it should fulfill a list of aesthetic criteria to increase the readability of the representation. Respecting these two principles produces graph visualizations that help users understand and explore the complex behavior of critical systems.
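As an illustration of the layout algorithms mentioned above, the following is a minimal sketch of one classic family, a Fruchterman-Reingold-style force-directed layout (the function and its parameters are illustrative, not taken from this thesis): repulsive forces act between all node pairs, attractive forces act along edges, and a cooling temperature caps per-iteration movement.

```python
import math
import random

def force_directed_layout(nodes, edges, width=1.0, height=1.0, iterations=50):
    """Illustrative Fruchterman-Reingold-style layout: repulsion between
    all node pairs, attraction along edges, with simulated cooling."""
    k = math.sqrt(width * height / len(nodes))  # ideal edge length
    pos = {v: [random.uniform(0, width), random.uniform(0, height)] for v in nodes}
    t = width / 10  # temperature: caps per-step displacement
    for _ in range(iterations):
        disp = {v: [0.0, 0.0] for v in nodes}
        for v in nodes:                       # repulsion between every pair
            for u in nodes:
                if u == v:
                    continue
                dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
                d = max(math.hypot(dx, dy), 1e-9)
                f = k * k / d                 # repulsive force magnitude
                disp[v][0] += dx / d * f
                disp[v][1] += dy / d * f
        for u, v in edges:                    # attraction along each edge
            dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
            d = max(math.hypot(dx, dy), 1e-9)
            f = d * d / k                     # attractive force magnitude
            for a, sign in ((u, 1), (v, -1)):
                disp[a][0] += sign * dx / d * f
                disp[a][1] += sign * dy / d * f
        for v in nodes:                       # move nodes, capped by temperature
            dx, dy = disp[v]
            d = max(math.hypot(dx, dy), 1e-9)
            pos[v][0] += dx / d * min(d, t)
            pos[v][1] += dy / d * min(d, t)
        t *= 0.95                             # cool down
    return pos

print(force_directed_layout(["a", "b", "c", "d"], [("a", "b"), ("b", "c"), ("c", "d")]))
```

The aesthetic criteria mentioned above (uniform edge lengths, evenly distributed nodes, few crossings) emerge here from the equilibrium of the two force types rather than being optimized explicitly.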
In this thesis, we utilize graph visualization techniques to model the structural and behavioral aspects of embedded systems. Furthermore, we focus on evaluating the resulting representations from the users' perspective.
The core contribution of this thesis is a framework called ESSAVis (Embedded Systems Safety Aspect Visualizer). This framework not only visualizes safety aspects (e.g., CFT models) of embedded systems, but also helps engineers and experts in analyzing safety-critical situations of the system. For this, the framework provides a 2Dplus3D environment in which the 2D part shows a graph representation of the abstract data about the safety aspects of the underlying embedded system, while the 3D part shows the 3D model of the system. Both views are integrated smoothly in the 3D world. In order to check the effectiveness and feasibility of the framework and its sub-components, we conducted several studies with real end users as well as with general users. Results of the main study, which targeted the overall ESSAVis framework, show a high acceptance ratio as well as higher accuracy and better performance when using the visual support provided by the framework.
The ESSAVis framework has been designed to be compatible with different 3D technologies. This enabled us to use the stereoscopic depth of such technologies to encode node attributes in node-link diagrams. In this regard, we conducted an evaluation study to measure the usability of the stereoscopic depth cue approach, called the stereoscopic highlighting technique, against other selected visual cues (i.e., color, shape, and size). Based on the results, the thesis proposes the Reflection Layer extension to the stereoscopic highlighting technique, which was also evaluated from the users' perspective. Additionally, we present a new technique, called ExpanD (Expand in Depth), that utilizes the depth cue to show the structural relations between different levels of detail in node-link diagrams. The results of this part open a promising direction of research in which visualization designers can benefit from the richness of 3D technologies in visualizing abstract data in the information visualization domain.
Finally, this thesis proposes the application of the ESSAVis framework as a visual tool in the educational training of engineers for understanding complex concepts. In this regard, we conducted an evaluation study with computer engineering students in which we used the visual representations produced by ESSAVis to teach the principles of fault detection and failure scenarios in embedded systems. Our work opens directions for investigating many challenges in the design of visualization for educational purposes.
Large displays are becoming more and more popular due to dropping prices. Their size and high resolution facilitate collaboration, and they are capable of displaying even large datasets in one view. This becomes even more interesting as the number of big data applications increases. The increased screen size and other properties of large displays pose new challenges for Human-Computer Interaction with these screens. These include issues such as limited scalability with the number of users and a general diversity of input devices, which leads to increased learning effort for users, among others.
Using smart phones and tablets as interaction devices for large displays can solve many of these issues. Since they are almost ubiquitous today, users can bring their own device, so this approach scales well with the number of users. These mobile devices are easy and intuitive to use and allow for new interaction metaphors, as they feature a wide array of input and output capabilities, such as touch screens, cameras, accelerometers, microphones, speakers, Near-Field Communication, and WiFi.
This thesis will present a concept to solve the issues posed by large displays. We will show proofs-of-concept, with specialized approaches demonstrating the viability of the concept. A generalized, eyes-free technique that uses smart phones or tablets to interact with any kind of large display, regardless of hardware or software, then overcomes the limitations of the specialized approaches. This is implemented in a large display application that is designed to run in a multitude of environments, including both 2D and 3D display setups. A special visualization method is used to combine 2D and 3D data in a single visualization.
Additionally, the thesis will present several approaches to solve common issues with large display interaction, such as target sizes on large displays becoming too small, expensive tracking hardware, and eyes-free interaction through virtual buttons. These methods provide alternatives and context for the main contribution.
Scheduling-Location (ScheLoc) problems integrate the separate fields of scheduling and location problems. In ScheLoc problems, the objective is to find locations for the machines and a schedule for each machine, subject to production and location constraints, such that some scheduling objective is minimized. In this paper we consider the Discrete Parallel Machine Makespan (DPMM) ScheLoc problem, where the set of possible machine locations is discrete and a set of n jobs has to be taken to the machines and processed such that the makespan is minimized. Since the separate location and scheduling problems are both NP-hard, so is the corresponding ScheLoc problem. Therefore, we propose an integer programming formulation and different versions of clustering heuristics, where jobs are split into clusters and each cluster is assigned to one of the possible machine locations. Since the IP formulation can only be solved for small-scale instances, we propose several lower bounds to measure the quality of the clustering heuristics. Extensive computational tests show the efficiency of the heuristics.
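For illustration, here is a minimal sketch of a clustering heuristic of the kind described above, under simplifying assumptions made purely for this example (planar Euclidean distances, one machine per candidate location, unit processing times, and a job's release date at a machine equal to its travel time; the paper's actual heuristics and instance data differ):

```python
import math

def clustering_heuristic(jobs, locations, p=1.0):
    """Illustrative DPMM-style heuristic: split jobs into clusters by
    nearest candidate location, then schedule each cluster on the single
    machine opened at that location and report the resulting makespan."""
    # 1. Cluster jobs by nearest candidate machine location.
    clusters = {loc: [] for loc in locations}
    for job in jobs:
        nearest = min(locations, key=lambda loc: math.dist(job, loc))
        clusters[nearest].append(job)

    # 2. Schedule each cluster: a job becomes available at its travel time
    #    (its release date at the machine), then needs p time units.
    makespan = 0.0
    for loc, cluster in clusters.items():
        t = 0.0
        for job in sorted(cluster, key=lambda job: math.dist(job, loc)):
            release = math.dist(job, loc)
            t = max(t, release) + p   # start no earlier than the release date
        makespan = max(makespan, t)
    return makespan, clusters

jobs = [(0, 0), (1, 2), (4, 1), (5, 5), (2, 4)]
locations = [(0, 1), (4, 4)]
print(clustering_heuristic(jobs, locations))
```

A simple travel-time-based bound (no job can finish before its distance to the nearest open location plus its processing time) can serve as one of the lower bounds used to judge such a heuristic.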
Motivated by the time-dependent location problem over T time periods introduced in Maier and Hamacher (2015), we consider the special case of two time steps, which was shown to be equivalent to the static 2-facility location problem in the plane. Geometric optimality conditions are stated for the median objective. When using block norms, these conditions are used to derive a polygon grid inducing a subdivision of the plane based on normal cones, yielding a new approach to solve the 2-facility location problem in polynomial time. Combinatorial algorithms for the 2-facility location problem based on geometric properties are deduced and their complexities are analyzed. These methods differ from others in that they work entirely on geometric objects to derive the optimal solution set.
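For concreteness, the planar 2-facility median objective referenced above can be stated as follows (a standard formulation; the notation is assumed here rather than taken from the paper). Given existing facilities a_1, ..., a_M in the plane with weights w_m > 0 and a block norm γ_B,

```latex
\min_{x_1,\, x_2 \in \mathbb{R}^2} \; \sum_{m=1}^{M} w_m \,
  \min\bigl\{ \gamma_B(x_1 - a_m),\; \gamma_B(x_2 - a_m) \bigr\},
```

i.e., each existing facility is served by the nearer of the two new locations. The polyhedral unit ball of the block norm is what makes the normal-cone-based subdivision of the plane into a polygon grid possible.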
Maintaining complex software systems tends to be a costly activity in which software engineers spend a significant amount of time trying to understand the system's structure and behavior. As early as the 1980s, operation and maintenance costs were already twice as high as the initial development costs, and they have steadily increased since then. The focus of this thesis is to reduce these costs through novel interactive exploratory visualization concepts and to apply these modern techniques in the context of services offered by software quality analysis.
Costs associated with the understanding of software are governed by specific features of the system in terms of different domains, including re-engineering, maintenance, and evolution. These features are reflected in software measurements or inner qualities such as extensibility, reusability, modifiability, testability, compatibility, or adaptability. The presence or absence of these qualities determines how easily a software system can conform or be customized to meet new requirements. Consequently, the need arises to monitor and evaluate the qualitative state of a software system in terms of these qualities. Using metrics-based analysis, production costs and quality defects of the software can be recorded objectively and analyzed.
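As a toy illustration of such metrics-based analysis (the metric selection here is an illustrative assumption, not the set of metrics used in the thesis), a few simple size and complexity measurements can be computed directly from source code, for example with Python's standard ast module:

```python
import ast

def simple_metrics(source: str) -> dict:
    """Toy metrics-based analysis: counts functions and classes and
    approximates cyclomatic complexity by counting branching nodes."""
    tree = ast.parse(source)
    branching = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)
    return {
        "loc": len(source.splitlines()),
        "functions": sum(isinstance(n, ast.FunctionDef) for n in ast.walk(tree)),
        "classes": sum(isinstance(n, ast.ClassDef) for n in ast.walk(tree)),
        # 1 + number of decision points: a crude cyclomatic-complexity proxy
        "complexity": 1 + sum(isinstance(n, branching) for n in ast.walk(tree)),
    }

code = """
class Parser:
    def parse(self, text):
        for line in text.splitlines():
            if line.strip():
                yield line
"""
print(simple_metrics(code))
```

Values like these, tracked over time, are what allow the qualitative state of a system to be monitored objectively.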
In practice, there exist a number of free and commercial tools that analyze the inner quality of a software system through the use of software metrics. However, most of these tools focus on software data mining and metrics (computational analysis), and only a few support visual analytical reasoning. Typically, computational analysis tools generate data, and software visualization tools facilitate the exploration and explanation of this data through static or interactive visual representations. Tools that combine these two approaches focus only on well-known metrics and lack the ability to examine user-defined metrics. Further, they are often confined to simple visualization methods and metaphors, including charts, histograms, scatter plots, and node-link diagrams.
The goal of this thesis is to develop methodologies that combine computational analysis methods with sophisticated visualization methods and metaphors through an interactive visual analysis approach. This approach promotes an iterative knowledge discovery process through multiple views of the data, where analysts select features of interest in one of the views and inspect the data items of the selected subset in all of the views. On the one hand, we introduce a novel approach for the visual analysis of software measurement data that captures complete facts of the system, employs a flow-based visual paradigm for the specification of software measurement queries, and presents measurement results through integrated software visualizations. This approach facilitates the on-demand computation of desired features and supports interactive knowledge discovery: the analyst can gain more insight into the data through activities that involve building a mental model of the system; exploring expected and unexpected features and relations; and generating, verifying, or rejecting hypotheses with visual tools. On the other hand, we have also extended existing tools with additional views of the data for the presentation and interactive exploration of system artifacts and their inter-relations.
Contributions of this thesis have been integrated into two different prototype tools. First evaluations of these tools show that they can indeed improve the understanding of large and complex software systems.
In the digital era we live in, users can access an abundance of digital resources in their daily life. These digital resources can be located on the user's devices, in traditional repositories such as intranets or digital libraries, but also in open environments such as the World Wide Web.
To be able to work efficiently with this abundance of information, users need support in accessing the resources that are relevant to them. Access to digital resources can be supported in various ways. Whether we talk about technologies for browsing, searching, filtering, ranking, or recommending resources, what they all have in common is that they depend on the available information (i.e., resources and metadata). The accessibility of digital resources that meet a user's information need, and the existence and quality of metadata, are crucial for the success of any information system.
This work focuses on how social media technologies can support access to digital resources. In contrast to closed and controlled environments, where only selected users have the right to contribute digital resources and metadata, and where this contribution involves a social process of formal agreement among the relevant stakeholders, potentially any user can easily create and provide information in social media environments. This usually leads to a larger variety of resources and metadata, and allows for dynamics that would otherwise hardly be possible.
Most information systems still rely mainly on traditional top-down approaches where only selected stakeholders can contribute information. The main idea of this thesis is an approach that allows for introducing the characteristics of social media environments into such traditional contexts. The requirements for such an approach are examined, as well as the benefits and potentials it can provide.
The ALOE infrastructure was developed according to the identified requirements and realises a Social Resource and Metadata Hub. Case studies and evaluation results are provided to show the impact of the approach on users' behaviour and on the creation of digital resources and metadata, and to justify the presented approach.
We study an online flow shop scheduling problem where each job consists of several tasks that have to be completed in t different stages and the goal is to maximize the total weight of accepted jobs.
The set of tasks of a job contains one task for each stage and each stage has a dedicated set of identical parallel machines corresponding to it that can only process tasks of this stage. In order to gain the weight (profit) associated with a job j, each of its tasks has to be executed between a task-specific release date and deadline subject to the constraint that all tasks of job j from stages 1, …, i-1 have to be completed before the task of the ith stage can be started. In the online version, jobs arrive over time and all information about the tasks of a job becomes available at the release date of its first task. This model can be used to describe production processes in supply chains when customer orders arrive online.
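In symbols, this model can be restated as follows (the notation is assumed here for illustration): each job j has weight w_j and one task per stage i = 1, ..., t with release date r_{j,i}, deadline d_{j,i}, and processing time p_{j,i}; writing S_{j,i} and C_{j,i} = S_{j,i} + p_{j,i} for the start and completion times of the task, an accepted job must satisfy

```latex
r_{j,i} \le S_{j,i}, \qquad C_{j,i} \le d_{j,i}, \qquad
C_{j,i-1} \le S_{j,i} \quad (i = 2, \dots, t),
```

with the tasks of stage i running on that stage's identical parallel machines, and the goal is to maximize the total weight of the accepted jobs.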
We show that even the basic version of the offline problem with a single machine in each stage, unit weights, unit processing times, and fixed execution times for all tasks (i.e., deadline minus release date equals processing time) is APX-hard. Moreover, we show that the approximation ratio of any polynomial-time approximation algorithm for this basic version of the problem must depend on the number t of stages.
For the online version of the basic problem, we provide a (2t-1)-competitive deterministic online algorithm and a matching lower bound. Moreover, we provide several (sometimes tight) upper and lower bounds on the competitive ratio of online algorithms for several generalizations of the basic problem involving different weights, arbitrary release dates and deadlines, different processing times of tasks, and several identical machines per stage.
In a networked system, the communication system is indispensable but often the weakest link with respect to performance and reliability. This holds particularly for wireless communication systems, where the error- and interference-prone medium and the character of the network topologies pose special challenges. However, there are many wireless network scenarios in which a certain quality of service has to be provided despite these conditions. In this regard, distributed real-time systems, which are increasingly realized by wireless multi-hop networks, are a particular challenge. For such systems, it is of crucial importance that communication protocols are deterministic and provide the required degree of efficiency and predictability, while additionally considering the scarce hardware resources that are a major limiting factor of wireless sensor nodes. This, in turn, places demands not only on the behavior of a protocol but also on its implementation, which has to comply with timing and resource constraints.
The first part of this thesis presents a deterministic protocol for wireless multi-hop networks with time-critical behavior. The protocol, referred to as the Arbitrating and Cooperative Transfer Protocol (ACTP), is an instance of a binary countdown protocol. It enables the reliable transfer of bit sequences of adjustable length and deterministically resolves contention among nodes based on a flexible priority assignment, with constant delays, and within configurable arbitration radii. The protocol's key requirement is the collision-resistant encoding of bits, which is achieved by the incorporation of black bursts. Besides revisiting black bursts and proposing measures to optimize their detection, robustness, and implementation on wireless sensor nodes, the first part of this thesis presents the mode of operation and the time behavior of ACTP. In addition, possible applications of ACTP are illustrated, presenting solutions to well-known problems of distributed systems such as leader election and data dissemination. Furthermore, results of experimental evaluations with customary wireless transceivers are outlined to provide evidence of the protocol's implementability and benefits.
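To illustrate the binary countdown principle that ACTP instantiates (the following toy model is an assumption for illustration only, abstracting away the black-burst encoding and all radio details): contending nodes transmit their priority bit by bit, most significant bit first; a dominant bit overrides recessive bits on the shared medium, and a node withdraws as soon as it observes a dominant bit where it sent a recessive one, so a unique winner emerges deterministically after a constant number of bit slots.

```python
def binary_countdown(priorities, bits=8):
    """Toy model of binary countdown arbitration on a shared medium.
    All contenders send their priority MSB-first; '1' is dominant
    (think: an energy burst that every receiver can detect)."""
    contenders = set(priorities)
    for bit in reversed(range(bits)):          # MSB first
        sent = {p: (p >> bit) & 1 for p in contenders}
        medium = max(sent.values())            # dominant bit wins on the medium
        # Nodes that sent recessive (0) while the medium was dominant (1) drop out.
        contenders = {p for p in contenders if sent[p] == medium}
    assert len(contenders) == 1                # unique priorities => unique winner
    return contenders.pop()

print(binary_countdown([0b00101101, 0b01101001, 0b00101100]))  # highest priority wins
```

The arbitration delay is fixed by the number of priority bits, independent of the number of contenders, which is the source of the constant delays mentioned above.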
In the second part of this thesis, the focus is shifted from concrete deterministic protocols to their model-driven development with the Specification and Description Language (SDL). Though SDL is well established in the domain of telecommunication and distributed systems, the predictability of its implementations is often insufficient, as previous projects have shown. To increase this predictability and to improve SDL's applicability to time-critical systems, real-time tasks, a proven concept in the design of real-time systems, are transferred to SDL and extended to cover node-spanning system tasks. In this regard, a priority-based execution and suspension model is introduced in SDL, which enables task-specific priority assignments in the SDL specification that are orthogonal to the static structure of SDL systems and control transition execution orders at design as well as implementation level. Both the formal incorporation of real-time tasks into SDL and their implementation in a novel scheduling strategy are discussed in this context. By means of evaluations on wireless sensor nodes, evidence is provided that these extensions substantially reduce worst-case execution times, improve the predictability of SDL implementations, and enhance the language's applicability to real-time systems.