D. Software
The research for this thesis was conducted to develop a framework that supports the automatic configuration of project-specific software development processes by selecting and combining different technologies: the Process Configuration Framework. The research draws attention to the problem that while the research community develops new technologies, industrial companies tend to keep using only the ones they already know, so technology transfer takes decades. In addition, no single solution solves all problems in a software development project, which leads to a number of technologies that need to be combined for one project.
The framework developed and explained in this research addresses these problems by building a bridge between research and industry and by supporting software companies in selecting the most appropriate technologies and combining them into a software process. The technology transfer gap is bridged by a repository of (new) technologies, which serves as the foundation of the Process Configuration Framework. The process is configured by providing a SPEM process pattern for each technology, so that companies can build their process by plugging the patterns into each other.
The technologies of the repository are specified in a schema comprising a technology model, a context model, and an impact model. Context and impact make it possible to provide information about a technology, for example its benefits with respect to quality, cost, or schedule. The process patterns are offered as output of the Process Configuration Framework in several stages:
I. Technology Ranking:
1. Ranking based on Application Domain, Project & Impact
2. Ranking based on Environment
3. Ranking based on Static Context
II. Technology Combination:
4. Creation of all possible Technology Chains
5. Restriction of the Technology Chains
6. Ranking based on Static and Dynamic Context
7. Extension of the Chains by Quality Assurance
III. Process Configuration:
8. Process Component Diagram
9. Extension of the Process Component Diagram
10. Instantiation of the Components by Technologies of the Technology Chain
11. Providing Process Patterns
12. Creation of the Process based on Patterns
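The ranking and chain-building stages can be sketched in a few lines of code. The data model and scoring below are illustrative assumptions (`Technology`, `rank`, and `chains` are hypothetical names), not the framework's actual schema or algorithm:

```python
from dataclasses import dataclass, field

@dataclass
class Technology:
    """Simplified repository entry with context and impact information."""
    name: str
    domain: str                                # application domain targeted
    inputs: set = field(default_factory=set)   # artifact types consumed
    outputs: set = field(default_factory=set)  # artifact types produced
    impact: float = 0.0                        # benefit to quality/cost/schedule

def rank(technologies, project_domain):
    """Stage I (simplified): order technologies by domain fit and impact."""
    fits = [t for t in technologies if t.domain == project_domain]
    return sorted(fits, key=lambda t: t.impact, reverse=True)

def chains(ranked, start, goal, chain=()):
    """Stage II (simplified): enumerate chains whose artifacts connect,
    i.e. each technology consumes only artifacts produced so far."""
    if goal in start:
        yield chain
        return
    for t in ranked:
        if t not in chain and t.inputs <= start:
            yield from chains(ranked, start | t.outputs, goal, chain + (t,))
```

In this sketch, a chain is valid when every technology's inputs are covered by the artifacts available at that point, mirroring the restriction step (Stage II, step 5) that discards incompatible combinations.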
The effectiveness and quality of the Process Configuration Framework were additionally evaluated in a case study. Technology Chains created manually by experts were compared to the chains created automatically by the framework after it had been configured by those experts. The comparison showed that the framework's results are similar to the experts' and can therefore be used as recommendations.
We conclude from our research that support during the configuration of a process for software projects is important, especially for non-experts. This support is provided by the Process Configuration Framework developed in this research. In addition, our research has shown that the framework offers a way to close the technology transfer gap between the research community and industrial companies more quickly.
The rapid growth of systems, both in size and complexity, combined with their distributed
nature, is posing challenges for their efficient integration and functioning. Moreover,
in order to achieve sustainability objectives and future goals, systems are increasingly
collaborating with each other, resulting in the emergence of Systems of Systems (SoS)
that are large-scale and independent. In such scenarios, multiple stakeholders and systems
from different disciplines with diverse interests need to interoperate. In various domains,
this trend of growing systems creates a greater need for interfaces that ensure seamless
interoperability between and within these systems and SoS.
To address these challenges, an effective method for integrating systems and SoS is required.
A key to ease this integration can be the use of interface specifications to describe and
specify interfaces. However, there is currently no comprehensive understanding of how
to write high-quality interface specifications, nor is there a common overview of interface
specification approaches.
This thesis aims to fill these gaps in documented knowledge by reviewing recent developments
and best practices for interface specifications in the context of systems engineering
and SoS engineering. The review was conducted through a literature review focusing on
interface specifications, complemented by an analysis of existing interface specification
approaches and expert interviews. The goal is to provide an overview of current interface
specification characteristics and their common use cases. Based on this analysis, a
usage-driven approach in the form of customised interface specification mappings was
developed, which can assist in identifying an appropriate approach for specifying interfaces.
In light of the increasing connectivity in our lives, the work provides a framework for
better classifying and approaching interface specifications, seeking to move away from
viewing interfaces as neglected elements of systems engineering, towards a more intelligent
and productive classification and approach.
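The idea of a usage-driven interface specification mapping can be illustrated with a small sketch. The approach names and use-case characteristics below are hypothetical examples, not the mappings developed in the thesis:

```python
# Hypothetical mapping from interface specification approaches to the
# use-case characteristics they serve well (illustrative only).
MAPPINGS = {
    "OpenAPI description": {"software", "machine-readable", "service"},
    "Interface Control Document": {"hardware", "cross-discipline", "document-centric"},
    "SysML interface blocks": {"model-based", "system-of-systems", "cross-discipline"},
}

def recommend(use_case):
    """Rank specification approaches by overlap with the use-case characteristics."""
    scored = ((len(use_case & traits), name) for name, traits in MAPPINGS.items())
    return [name for score, name in sorted(scored, reverse=True) if score > 0]
```

For a cross-discipline SoS use case, such a mapping would rank model-based approaches first, which is the kind of guidance the customised mappings aim to provide.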
With the ever-increasing amount of satellite-backed communication, constellations covering the entire world, and the rise of Software Defined Radios (SDRs), satellite signals have already become prime targets for scientific research all over the globe. However, due to logistical challenges such as capture time/location and peripheral/system management for the sensors, as well as the wide variety of protocols and encoding schemes used, no one-size-fits-all sniffing solution exists for capturing their wide variety of signals. Therefore, this thesis aims to analyze, design, and implement a system that makes it possible to study LEO (Low Earth Orbit) L-Band satellite signals with readily available Single Board Computers (SBCs) in a widely distributed, location- and time-aware way. The key design factors were usability, maintainability, adaptability, and security in a centrally managed client-server architecture. The research presented yielded a Satellite probe Operating System called SATOS, which implements on-sensor data decoding driven by GNU Radio and secure Over The Air (OTA) updates inside the Buildroot build environment. Its intended use case is the future deployment of DISCOSAT on a university working group scale.
The proliferation of sensors in everyday devices – especially in smartphones – has led to crowd sensing becoming an important technique in many urban applications, ranging from noise pollution mapping and road condition monitoring to tracking the spread of diseases. However, in order to establish integrated crowd sensing environments on a large scale, some open issues need to be tackled first. On a high level, this thesis concentrates on two of these key issues: (1) efficiently collecting and processing large amounts of sensor data from smartphones in a scalable manner and (2) extracting abstract data models from the collected data sets, thereby enabling the development of complex smart city services based on the extracted knowledge.
In more detail, the first main contribution of this thesis is the development of methods and architectures that facilitate simple and efficient deployment, scalability, and adaptability of crowd sensing applications in a broad range of scenarios, while at the same time enabling the integration of incentive mechanisms for the participating general public. An evaluation within a complex, large-scale environment shows that real-world deployments of the proposed data recording architecture are in fact feasible. The second major contribution of this thesis is a novel methodology for using the recorded data to extract abstract data models that correctly represent the inherent core characteristics of the source data. Finally – bringing the results of the thesis together – it is demonstrated how the proposed architecture and the modeling method can be used to implement a complex smart city service by employing a data-driven development approach.
For the development of the Extremely Large Telescope (ELT), the European Southern Observatory (ESO)
uses state machines to model life cycles and basic behaviour of control software components. To provide certain degrees of freedom, the component life cycles need to be customisable but in order to remain compatible, they must also conform to specific standard behaviour.
Clearly, these two goals are competing. High customisation causes difficulties in maintenance and may also lead to incompatible solutions. The introduction of strict compatibility requirements
on the other hand may increase maintainability but it also makes the system less flexible. To avoid spending a significant portion of the Assembly, Integration and Verification (AIV) phase in integration hell, it is of high importance to find the right balance between customisability and compatibility early enough.
To address this problem, this thesis examines different variability realisation mechanisms with respect to their applicability for the behavioural customisation of state machine models. Based on this information, a novel approach is presented that combines a set of variability realisation mechanisms and thereby enables open and stepwise customisation, systematic reuse and separation of concerns. Concretely, the method enhances a framework approach with model manipulation capabilities and mixin composition while also supporting conditional compilation and conditional execution. Moreover, the thesis demonstrates that compatibility can be ensured by combining constructive and analytical methods, namely feature orientation and conformance testing. Finally, feasibility and soundness of the elaborated solution concept are demonstrated using a proof of concept implementation that has already been applied to a real-world project in scope of the ELT program.
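The mixin-composition idea behind this approach can be illustrated with a small sketch. The class and method names are hypothetical, and the real framework operates on state machine models rather than plain classes:

```python
class StandardLifecycle:
    """Baseline life cycle all components must conform to (conformance-tested)."""
    def __init__(self):
        self.state = "Off"
    def init(self):
        self.state = "Standby"
    def enable(self):
        self.state = "Operational"

class LoggingMixin:
    """Customisation layered on top without altering the standard transitions."""
    def enable(self):
        print(f"{type(self).__name__}: enabling from state {self.state}")
        super().enable()   # standard behaviour is preserved, not replaced

# Mixin composition: the customised component refines, but still conforms to,
# the standard life cycle, so the same conformance tests can exercise both.
class CustomComponent(LoggingMixin, StandardLifecycle):
    pass
```

Because the mixin delegates to the standard transition via `super()`, a conformance test that drives `init()` and `enable()` passes for both the baseline and the customised component, which is the balance between customisability and compatibility the thesis aims for.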
The aim of this thesis is to perform a case study to investigate the usability of SysMD in
industrial applications. The focus is on how well it can bridge the gap between requirement
specifications, modeling, and actual development.
SysMD is a new documentation and modeling language which aims to bring documentation
and modeling closer together while still not requiring the user to be an expert in modeling or
requirement specification. This differentiates SysMD from other tools which focus on either
documentation, modeling, or are aimed at modeling experts.
This thesis will show through the case study that SysMD as a language has a promising
future, with the potential to be used as a language bridging the gap between requirements,
documentation, and modeling without the user needing to be a modeling expert. It
will also show that the SysMD Notebook in its current state is not ready for productive use, and I give
recommendations on how to improve both the SysMD language and the SysMD
Notebook to make them usable for industrial projects in the future.
LinTim is a scientific software toolbox that has been under development since 2007, giving the possibility to solve the various planning steps in public transportation. Although the name originally derives from "Lineplanning and Timetabling", the available functions have grown far beyond this scope. This document is the documentation for versions 2020.12, 2021.10, and 2021.12. For more information, see https://www.lintim.net
Maintaining complex software systems tends to be a costly activity where software engineers spend a significant amount of time trying to understand the system's structure and behavior. As early as the 1980s, operation and maintenance costs were already twice as expensive as the initial development costs incurred. Since then these costs have steadily increased. The focus of this thesis is to reduce these costs through novel interactive exploratory visualization concepts and to apply these modern techniques in the context of services offered by software quality analysis.
Costs associated with the understanding of software are governed by specific features of the system in terms of different domains, including re-engineering, maintenance, and evolution. These features are reflected in software measurements or inner qualities such as extensibility, reusability, modifiability, testability, compatibility, or adaptability. The presence or absence of these qualities determines how easily a software system can conform or be customized to meet new requirements. Consequently, the need arises to monitor and evaluate the qualitative state of a software system in terms of these qualities. Using metrics-based analysis, production costs and quality defects of the software can be recorded objectively and analyzed.
In practice, there exist a number of free and commercial tools that analyze the inner quality of a software system through the use of software metrics. However, most of these tools focus on software data mining and metrics (computational analysis) and only a few support visual analytical reasoning. Typically, computational analysis tools generate data and software visualization tools facilitate the exploration and explanation of this data through static or interactive visual representations. Tools that combine these two approaches focus only on well-known metrics and lack the ability to examine user-defined metrics. Further, they are often confined to simple visualization methods and metaphors, including charts, histograms, scatter plots, and node-link diagrams.
The goal of this thesis is to develop methodologies that combine computational analysis methods with sophisticated visualization methods and metaphors through an interactive visual analysis approach. This approach promotes an iterative knowledge discovery process through multiple views of the data, where analysts select features of interest in one of the views and inspect data items of the selected subset in all of the views. On the one hand, we introduce a novel approach for the visual analysis of software measurement data that captures complete facts of the system, employs a flow-based visual paradigm for the specification of software measurement queries, and presents measurement results through integrated software visualizations. This approach facilitates the on-demand computation of desired features and supports interactive knowledge discovery - the analyst can gain more insight into the data through activities that involve building a mental model of the system; exploring expected and unexpected features and relations; and generating, verifying, or rejecting hypotheses with visual tools. On the other hand, we have also extended existing tools with additional views of the data for the presentation and interactive exploration of system artifacts and their inter-relations.
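A flow-based measurement query of this kind can be sketched as a chain of small operators. The `Flow` operators and the artifact model below are illustrative assumptions, not the actual query language or tools developed in the thesis:

```python
# Minimal sketch of a flow-based measurement query: each operator is a node
# in the flow, and chaining them specifies the measurement pipeline.
class Flow:
    def __init__(self, items):
        self.items = list(items)
    def where(self, pred):
        """Filter node: keep only artifacts matching the predicate."""
        return Flow(i for i in self.items if pred(i))
    def measure(self, metric):
        """Measurement node: attach a (possibly user-defined) metric value."""
        return Flow((i, metric(i)) for i in self.items)
    def top(self, n):
        """Sink node: rank by metric value for presentation in a view."""
        return sorted(self.items, key=lambda pair: pair[1], reverse=True)[:n]

# Hypothetical extracted facts about two classes of the analyzed system.
classes = [{"name": "Parser", "loc": 1200, "methods": 40},
           {"name": "Util", "loc": 300, "methods": 12}]

# Query: "Which large classes have the highest method count?"
hotspots = (Flow(classes)
            .where(lambda c: c["loc"] > 500)
            .measure(lambda c: c["methods"])
            .top(3))
```

Selecting the resulting subset in one view and highlighting the same items in all other views is the coordinated-views interaction the approach builds on.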
Contributions of this thesis have been integrated into two different prototype tools. First evaluations of these tools show that they can indeed improve the understanding of large and complex software systems.