Epidemiological models have gained much interest during the COVID-19 pandemic.
As the pandemic is now driven by newly emerging variants of SARS-CoV-2, the
question arises of how to model multiple virus variants in a single model.
In this thesis, we have extended an established model for COVID-19 forecasts to multiple
virus variants. We analyzed the model mathematically and showed the global
existence and uniqueness of the solution as well as important invariance properties
for a meaningful model. The implementation into an existing framework which allows
us to identify model parameters based on surveillance data is described briefly.
When applying our model to actual transitions between SARS-CoV-2 variants, we
found that forecasts would have been significantly improved by our model extension.
In most cases, we were able to precisely predict peak dates and heights in
case incidences of waves caused by newly emerging variants during early transition
phases. More severe outcomes, like hospitalizations, are found to be harder to predict
because of very limited observational data regarding these outcomes for newly
emerging variants.
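The kind of model extension described above can be illustrated with a toy two-variant SIR-type system in which both variants drain one shared susceptible pool. This is a minimal sketch with hypothetical rates, not the forecasting model extended in the thesis:

```python
def step(state, beta1, beta2, gamma, dt):
    """One explicit-Euler step of a toy two-variant SIR model
    with a shared susceptible pool (all rates hypothetical)."""
    S, I1, I2, R = state
    new1 = beta1 * S * I1  # new infections with variant 1
    new2 = beta2 * S * I2  # new infections with variant 2
    return (S - dt * (new1 + new2),
            I1 + dt * (new1 - gamma * I1),
            I2 + dt * (new2 - gamma * I2),
            R + dt * gamma * (I1 + I2))

# Variant 2 enters with a tiny seed but a higher transmission rate.
state = (0.99, 0.01, 1e-4, 0.0)
peak2, t_peak2 = 0.0, 0.0
for k in range(20000):
    state = step(state, beta1=0.2, beta2=0.35, gamma=0.1, dt=0.05)
    if state[2] > peak2:
        peak2, t_peak2 = state[2], k * 0.05
print(f"variant-2 peak height {peak2:.3f} at t = {t_peak2:.1f}")
```

Because both variants compete for the same susceptible pool, the more transmissible variant produces its own wave with a predictable peak date and height, which is exactly the transition behavior a multi-variant forecast can exploit.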
This master's thesis presents a collection of architectural design patterns for safety-critical systems deployed on public cloud infrastructure. The research aims to enhance system reliability, mitigate risks, and improve overall performance in safety-critical applications. The study follows a systematic approach, considering multiple safety-critical use cases and prioritizing factors such as timing constraints and system resilience. The railway signaling system, particularly the moving block computation, is selected as the most suitable use case due to its ability to tolerate response delays and re-request computations. The thesis addresses four research questions concerning the deployment of safety-critical systems to the public cloud, existing fault-tolerance methods in the cloud, identification of relevant design patterns, and the applicability of design patterns in various safety-critical systems.
The study identifies and reviews fault tolerance methods and cloud failure modes, which serve as a basis for identifying design patterns. The Structured What-If Technique (SWIFT) is utilized to analyze prospective hazards and recommend actions, which are then mapped onto design patterns for wide applicability across different projects. Each design pattern presents a problem statement, guidelines for implementation, and associated benefits and drawbacks.
The contribution of this thesis lies in the development of a valuable resource for architects and engineers working on safety-critical systems in the cloud. The design patterns offer practical solutions and a framework for the design and implementation of robust and secure systems. Detailed documentation, including context, benefits, drawbacks, and practical examples, facilitates understanding and adoption.
In conclusion, this thesis contributes to the advancement of safety and reliability in cloud-based safety-critical systems by providing architectural design patterns. Future research should focus on integrating security aspects, gathering diverse use cases, and validating the patterns in practical settings. Continued exploration and refinement of the design patterns will lead to more robust solutions for meeting the needs and challenges of safety-critical applications in various contexts.
Given a finite or countably infinite family of Hilbert spaces \((H_j)_{j\in N}\), we study the Hilbert space tensor product \(\bigotimes_{j\in N} H_j\). In the general case, these tensor products were introduced by John von Neumann. We are especially interested in the case where each Hilbert space \(H_j\) is given as a reproducing kernel Hilbert space, i.e., \(H_j = H(K_j)\) for some reproducing kernel \(K_j\). We establish the following result, which is new in the case where \(N\) is infinite: If we restrict the domains of the kernels \(K_j\) properly, their pointwise product \(K\) is again a reproducing kernel, and
\[
H(K) \cong \bigotimes_{j\in N} H_j,
\]
i.e., there is an isometric isomorphism between both spaces respecting the tensor product structure.
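For orientation, in the finite case with two factors the statement reduces to a classical fact: for reproducing kernels \(K_1\) on \(X_1\) and \(K_2\) on \(X_2\), the pointwise product taken on the product domain,
\[
K\bigl((x_1,x_2),(y_1,y_2)\bigr) = K_1(x_1,y_1)\,K_2(x_2,y_2),
\]
is again a reproducing kernel on \(X_1 \times X_2\) with \(H(K) \cong H(K_1) \otimes H(K_2)\). The contribution of the thesis is the extension to infinitely many factors, where the proper restriction of the domains becomes essential.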
In product line engineering tasks, the need to merge models from different product
variants emerges, as the commonly used clone-and-own approach suffers from high
maintenance costs in the long run. By identifying models with a high number of similarities,
we can merge them into one highly reusable model. This approach increases the
maintainability and further expandability of the model.
Many works have already been published that aim to solve this problem with different
N-way model matching approaches. However, there is a lack of practical evidence that the
published theories work as designed in real-world cases.
In this work, we will evaluate relevant published approaches and then attempt to
integrate the most promising one into the product line analysis framework VARIOUS from
Fraunhofer IESE. Next, the implemented approach will be evaluated against the
existing model matching mechanism integrated in VARIOUS, called "System
Aligner". The main aspects of our evaluation are:
• Accuracy - Can it accurately find the most similar models?
• Performance - How fast is it?
• Scalability - How well does it scale with a large number of input models?
• Configurability - Can it be adapted easily for different systems?
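How an n-way matching approach scores candidate models for merging can be sketched with a deliberately simple similarity measure: Jaccard overlap of element names. The set-of-names representation and the variant names are hypothetical, chosen only for illustration:

```python
def jaccard(a, b):
    """Similarity of two models, represented as sets of element names."""
    return len(a & b) / len(a | b)

def most_similar_pair(models):
    """Greedy first step of an n-way match: find the pair of model
    variants with the highest element overlap, i.e. the best merge
    candidates."""
    names = list(models)
    best, best_pair = -1.0, None
    for i, m in enumerate(names):
        for n in names[i + 1:]:
            s = jaccard(models[m], models[n])
            if s > best:
                best, best_pair = s, (m, n)
    return best_pair, best

variants = {
    "pumpA": {"Motor", "Valve", "Sensor", "Controller"},
    "pumpB": {"Motor", "Valve", "Sensor", "Display"},
    "mixer": {"Motor", "Agitator", "Heater"},
}
pair, score = most_similar_pair(variants)
print(pair, round(score, 2))  # the two pump variants share 3 of 5 elements
```

Real n-way matching approaches operate on richer model structures (types, attributes, references) and optimize across all variants at once rather than greedily, but the accuracy and scalability questions above already show up in this tiny form: the pairwise loop is quadratic in the number of models.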
Influencer marketing, a tool to use a popular person's reach on social media for marketing, has been a constantly changing, critical tool for convincing potential customers of products, services, or other messages for about 15 years. Nongovernmental organizations (NGOs) have also recognized the benefits of influencers in building awareness about their work. However, the influencer paradigm has controversies and risks, especially for the often sensitive NGO work. The involvement of influencer marketing in the nonprofit work sector is a relatively 'newer' phenomenon, with little experience, guidance, or specific expertise. Despite growing interest among researchers and practitioners, scholarly work resulting from the growth of influencer marketing is inconsistent and fragmented. Scientifically based recommendations for practice are almost entirely lacking.
The master's thesis contributes to filling the knowledge gap and supporting NGO employees in, e.g., communication and social media positions to successfully integrate influencers for a good cause. The question of how influencers can effectively support the communication work of NGOs and what steps are needed is clarified. The author developed a scientific handout by comparing two case studies of cooperation between NGOs and influencers, including semi-structured interviews with involved people supported by the available literature. The guidelines include necessary steps and instructions for action placed in the context of NGO work. NGOs must first learn about the influencer business, agree on the cooperation, identify the matching candidate, and plan the collaboration carefully. When selecting the influencer, values such as authenticity, trustworthiness, and genuine interest in the NGO's good cause are preconditions for the cooperation's success. Influencer marketing in NGOs will likely grow in the following years, and learning about the field will become imperative.
The rapid growth of systems, both in size and complexity, combined with their distributed
nature, is posing challenges for their efficient integration and functioning. Moreover,
in order to achieve sustainability objectives and future goals, systems are increasingly
collaborating with each other, resulting in the emergence of Systems of Systems (SoS)
that are large-scale and independent. In such scenarios, multiple stakeholders and systems
from different disciplines with diverse interests need to interoperate. In various domains,
this trend of growing systems creates a greater need for interfaces that ensure seamless
interoperability between and within these systems and SoS.
To address these challenges, an effective method for integrating systems and SoS is required.
A key to ease this integration can be the use of interface specifications to describe and
specify interfaces. However, there is currently no comprehensive understanding of how
to write high-quality interface specifications, nor is there a common overview of interface
specification approaches.
This thesis aims to fill these gaps in documented knowledge by reviewing recent developments
and best practices for interface specifications in the context of systems engineering
and SoS engineering. The review was carried out as a literature study focusing on
interface specifications, complemented by an analysis of existing interface specification
approaches and by expert interviews. The goal is to provide an overview of current interface
specification characteristics and their common use cases. Based on this analysis, a
usage-driven approach in the form of customised interface specification mappings was
developed, which can assist in identifying an appropriate approach for specifying interfaces.
In light of the increasing connectivity in our lives, the work provides a framework for
better classifying and approaching interface specifications, seeking to move away from
viewing interfaces as neglected elements of systems engineering, towards a more intelligent
and productive classification and approach.
Evaluation and development of the bridging application between ISO 15118 and OCPP 2.0.1 protocols
(2023)
The increase in the number of electric vehicles (EVs) has undoubtedly put stress on the local power grid because these systems were designed without anticipating the charging needs of electric vehicles. To overcome this problem, Smart Charging is introduced to allow the Charging Station Management System (CSMS) to load-balance the charging needs of the electric vehicles during peak hours. In addition, it allows the EVs to return their energy to the system when needed. Smart Charging uses the de facto standards ISO 15118 and OCPP to enable the CSMS to control the charging profiles of the EVs. Since these protocols are specified by different organizations, their compatibility must be analyzed to ensure their interoperability.
In the first part, this thesis applies a theoretical analysis method to analyze the compatibility between ISO 15118 and OCPP. This method uses Symbolic Transition Systems to model the interactions between the protocols. Then, the state transitions and message exchanges of the models are analyzed using the flooding algorithm. The result of this analysis is a compatibility matrix, which illustrates the degrees of compatibility between the states of the protocols. Based on the results, this thesis concludes that ISO 15118 and OCPP are compatible. However, their compatibility is not perfect because of data type incompatibility between messages: ISO 15118 uses domain-specific data types for its parameters, while OCPP uses generic data types to increase its interoperability with other protocols.
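The flooding idea can be sketched on toy transition systems: starting from the initial state pair, synchronized transitions are explored, and for each reachable pair of states the actions offered by only one side are recorded. All states and actions below are hypothetical and not taken from ISO 15118 or OCPP:

```python
from collections import deque

def flood(lts_a, lts_b, init_a, init_b):
    """Flooding over the product of two labelled transition systems:
    follow actions both sides share; record, per reachable state pair,
    the actions only one side offers (a rough compatibility degree)."""
    matrix, queue, seen = {}, deque([(init_a, init_b)]), set()
    while queue:
        a, b = queue.popleft()
        if (a, b) in seen:
            continue
        seen.add((a, b))
        acts_a, acts_b = set(lts_a.get(a, {})), set(lts_b.get(b, {}))
        matrix[(a, b)] = sorted(acts_a ^ acts_b)  # mismatched actions
        for act in acts_a & acts_b:               # synchronized moves
            queue.append((lts_a[a][act], lts_b[b][act]))
    return matrix

# Hypothetical protocol fragments, as {state: {action: next_state}}.
ev   = {"Idle": {"Auth": "Ready"}, "Ready": {"Charge": "Idle"}}
csms = {"Wait": {"Auth": "Go"},    "Go":    {"Charge": "Wait", "Reserve": "Go"}}
for pair, mismatch in flood(ev, csms, "Idle", "Wait").items():
    print(pair, "mismatch:", mismatch)
```

Collecting the per-pair mismatch sets over all reachable pairs yields exactly the kind of compatibility matrix described above: empty entries mean fully compatible state pairs, non-empty entries point to messages one protocol offers that the other cannot handle.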
The second part of this thesis describes the concept and design of an application to bridge the communication between ISO 15118 and OCPP. The application also demonstrates how to overcome the problems found in the compatibility analysis using facade patterns. In addition, the development of the bridging application highlights several issues that have arisen in practice. The first issue is that, due to the large memory footprint of the messages, the OCPP stack is not suitable for running on small embedded systems without extensive optimization. Second, using JSON, a human-readable format, to encode the OCPP messages is unnecessary because most of the messages are processed by machines. In addition, the OCPP application is highly complex due to the nested conditions involved in sending and receiving OCPP messages. Finally, both the JSON and EXI data formats require serializers (parsers) to encode (decode) the messages, adding to the complexity of the system.
Model Identification of Power Electronic Systems for Interaction Studies and Small-Signal Analysis
(2023)
The rapid growth in offshore wind brings various challenges to power system research
and industry, such as the development of multi-terminal multi-vendor HVDC grids.
To ensure interoperability in those power converter dominated systems, suitable
models are needed to efficiently perform stability and interaction studies. With
state-space based small-signal methods stability and interaction phenomena can be
assessed globally for a complex system. Yet detailed models are needed. However,
in multi-vendor projects most likely only black-boxed models will be available to
protect the intellectual property, so that identification techniques are necessary to
obtain suitable models. This thesis contributes to the research activities on state-space
model identification of black-boxed power electronic systems.
In the first part of the thesis, a method was developed and tested in which the matrix
elements of linearized state-space models were fitted as functions of the operating
point, based on input sweeps performed on the model of a grid-forming power converter
controlled as a virtual synchronous machine. It was discussed how changes in
multiple inputs can be approximated by the superposition of the individual input
dependencies, and a fully operating-point-dependent state-space model approximation
was created. The results were validated in time- and frequency-domain analyses.
It was found that the method can provide a good approximation, especially for the
operating range around the default operating point.
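A minimal sketch of that fitting step, under the simplifying assumptions of a scalar operating point and a small synthetic sweep (the converter model itself is not reproduced here, and the matrices are hypothetical), could look as follows:

```python
import numpy as np

def fit_matrix_entries(op_points, A_samples, deg=2):
    """Fit each entry of a linearized state matrix A as a polynomial
    in a scalar operating point (e.g. an active-power set-point).
    A_samples[k] is the A matrix linearized at op_points[k]."""
    n = A_samples[0].shape[0]
    coeffs = np.empty((n, n, deg + 1))
    for i in range(n):
        for j in range(n):
            entries = [A[i, j] for A in A_samples]
            coeffs[i, j] = np.polyfit(op_points, entries, deg)
    return coeffs

def A_at(coeffs, p):
    """Evaluate the operating-point-dependent A matrix at point p."""
    return np.array([[np.polyval(c, p) for c in row] for row in coeffs])

# Hypothetical sweep: a 2x2 A matrix whose entries vary with set-point p.
ps = np.linspace(0.2, 1.0, 9)
samples = [np.array([[-1.0 - 0.5 * p, 0.3 * p],
                     [0.1, -2.0 + p ** 2]]) for p in ps]
C = fit_matrix_entries(ps, samples)
print(np.round(A_at(C, 0.6), 3))  # close to the sample taken at p = 0.6
```

With the entry-wise fits in hand, a state matrix can be evaluated at any operating point in the swept range, which is what makes global small-signal stability and interaction studies across operating points tractable.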
In the second part, identification of a power electronic system was performed based
on measurement data generated experimentally from a low-voltage laboratory
system. A sequence of input perturbations was applied to the laboratory
system, and frequency response data was calculated from the corresponding output
perturbations. The data served as the basis for model identification with N4SID and a
soon-to-be-published vector fitting method. The identified models were validated by
a visual inspection of the transfer function and by comparison of the calculated step
responses to the step responses measured in the laboratory. It was found that the
treatment of incomplete data sets, the generation of substitute data and the impact
of time delays on the identification might be worth further investigation.
This work provides a valuable contribution to the research of state-space model
identification of black-boxed power electronic systems. It points out challenges and
presents promising approaches to enable state-space based methods for stability
analysis and interaction studies in future multi-terminal multi-vendor HVDC grids.
The aim of this thesis is to perform a case study to investigate the usability of SysMD in
industrial applications. The focus is on how well it can bridge the gap between requirement
specifications, modeling, and actual development.
SysMD is a new documentation and modeling language which aims to bring documentation
and modeling closer together while still not requiring the user to be an expert in modeling or
requirement specification. This differentiates SysMD from other tools, which either focus on
documentation or modeling, or are aimed at modeling experts.
This thesis will show, through the case study, that SysMD as a language has a promising
future, with the potential to bridge the gap between requirements,
documentation, and modeling without the user needing to be a modeling expert. It
will also show that the SysMD Notebook in its current state is not yet ready for prime time, and I give
recommendations on how to improve both the SysMD language and the SysMD
Notebook to make them usable for industrial projects in the future.
Global temperature rise and the growing consumption of limited resources are global
threats. Therefore, industry and consumers will need to reduce their environmental impacts.
For this, Environmental Product Declarations (EPDs) are used for eco-design and
product impact comparison. As EPDs are likely to become mandatory, the total number
of products to be assessed will increase tremendously. Therefore, the entire EPD workflow
will need to be automated to allow large-scale application of EPDs. The goal of
this thesis is to develop an automated workflow for EPDs (aEPD) by combining
Model-Based Systems Engineering (MBSE), Digital Twin, and Life Cycle Assessment concepts.
While MBSE is used for the multilevel requirements analysis, the focus was set on automating
data collection along the supply and value chain using the AAS 4.0 implementation
of the Digital Twin concept. The applicability of the aEPD workflow is shown in the
prototypical implementation of an aEPD for an electric motor. Even though progress has
been made, research should be continued on the development of further AAS Submodel
templates and PCRs to allow standardized data collection and communication on a
global scale.
With growing prevalence, agile methodology also pervades domains that adhered to conventional models for decades. At the same time, the demand for safety-critical applications, and thus for rigorous quality assurance, increases. This raises the question of whether agile methodology is able to support the required level of quality assurance.
This master's thesis aims at analyzing the situation of analytical quality assurance in agile environments in order to identify shortcomings and provide potential solutions. The author derives an initial hypothesis based on his own professional experience, stating that analytical quality assurance is not sufficiently considered by agile development models and agile transformation. This hypothesis is split into eight sub-hypotheses, each describing a particular problem or challenge. Qualitative interviews with seven experts and complementary literature research are performed to confirm the given hypotheses, identify further challenges, and collect appropriate solution proposals. Eventually, based on the elicited data, five hypotheses as well as the initial hypothesis are corroborated and five new challenges are added. Furthermore, twenty-six potential solutions for relevant hypotheses are collected and presented. The solutions comprise established approaches, such as the Dynamic System Development Model or Explorative Testing, but also innovative ideas, including the Three-Field Agile approach publicized by this thesis.
Altogether, it is found that agile methodology largely does not support traditional analytical quality assurance in its concepts and, even worse, some of its core principles are contradictory to it. However, numerous solutions are found and presented that address particular discrepancies and have the capability to ease the situation described.
For the development of the Extremely Large Telescope (ELT), the European Southern Observatory (ESO)
uses state machines to model life cycles and basic behaviour of control software components. To provide certain degrees of freedom, the component life cycles need to be customisable but in order to remain compatible, they must also conform to specific standard behaviour.
Clearly, these two goals are competing. High customisation causes difficulties in maintenance and may also lead to incompatible solutions. The introduction of strict compatibility requirements
on the other hand may increase maintainability but it also makes the system less flexible. To avoid spending a significant portion of the Assembly, Integration and Verification (AIV) phase in integration hell, it is of high importance to find the right balance between customisability and compatibility early enough.
To address this problem, this thesis examines different variability realisation mechanisms with respect to their applicability for the behavioural customisation of state machine models. Based on this information, a novel approach is presented that combines a set of variability realisation mechanisms and thereby enables open and stepwise customisation, systematic reuse and separation of concerns. Concretely, the method enhances a framework approach with model manipulation capabilities and mixin composition while also supporting conditional compilation and conditional execution. Moreover, the thesis demonstrates that compatibility can be ensured by combining constructive and analytical methods, namely feature orientation and conformance testing. Finally, feasibility and soundness of the elaborated solution concept are demonstrated using a proof of concept implementation that has already been applied to a real-world project in scope of the ELT program.
On the one hand, Model-based Systems and Software Engineering approaches ease the development of complex software systems. On the other hand, they introduce the challenge of managing the multitude of different artifacts created using various tools during the system lifecycle. For understanding and maintaining these artifacts as they evolve, it is advisable to establish traceability among them. Traceability is the ability to relate the various artifacts created and evolved during the project. However, organizations often consider traceability a burden because it is time-consuming and error-prone when done manually. Hence, the objective of this thesis is to research and develop pragmatic traceability approaches that can be followed in the MBSE context. A systematic mapping study was conducted to understand and compile the various criteria that need to be followed while creating and maintaining trace links. It also provided insights into the approaches followed to ease the burden on engineers. Expert interviews with industrial companies were conducted to investigate the real-life experiences of engineers on traceability, to get an overview of best practices and known pitfalls. Based on the mapping study and the results of the interviews, various approaches and tools used to achieve traceability were discussed. A case study was conducted for state-of-the-practice traceability approaches in a toolchain consisting of Polarion, Enterprise Architect, and Doxygen. For research, open-source libraries and applications were used for analysis. A tool prototype was developed to create and maintain trace links between artifacts created in the toolchain mentioned above. The use cases in which the tool eases achieving traceability are discussed along with pros and cons.
In recent months, sustainable development and the achievement of the United Nations Sustainable Development Goals have gained unprecedented prominence. SDG 7 aspires to achieve access to electricity for the entire world population by 2030 and - at the same time - to significantly increase the share of renewable energy in the power mix. This target translates into ambitious electricity supply and renewable energy asset growth scenarios for Sub-Saharan Africa, the least developed region worldwide. Though theoretical renewable energy potential is abundant and capital generally available, progress has been slow. Aside from funds from donors and Development Finance Institutions, private commercial capital is required to accelerate the progress. Project Finance has successfully attracted private funds for renewable energy assets in other jurisdictions but has played a negligible role in the energy transition in Sub-Saharan Africa. A variety of reasons are identified that impede their implementation, which are categorised into (i) unsatisfactory project pre-requisites and preparation, (ii) challenging host country conditions, (iii) elevated non-financial project risks and (iv) risky financial transaction structures. While a review of potential mitigation measures reveals that the risk factors are theoretically addressable, most require a multi-stakeholder alignment and exhibit some implementation complexity. Putting them into practice will therefore take time and will require a high level of commitment from host governments, sponsors, and financial institutions. While pressure and urgency are mounting, time will tell whether the project parties are more successful going forward.
Industry 4.0 defines the organization of production and manufacturing processes based on technologically advanced solutions and devices autonomously communicating with each other.
Within the context of this industrial revolution, smart reconfigurable manufacturing systems are introduced. These systems shall be able to provide a dynamic level of reconfigurability based on the production demand and system availability. The introduction of manufacturing reconfigurability constitutes a particularly important and expensive decision for organizations, and therefore scoping methods are becoming increasingly essential.
The present work covers a first approach to defining reconfigurability methods and drivers for manufacturing systems within the context of Industry 4.0. The thesis introduces five main reconfigurability use case scenarios for manufacturing systems and the description of a two-dimensional model of scoping parameters.
The first dimension is based on the potential business targets and reconfigurability drivers, while the second dimension focuses on the system functions and technologies which are required for the successful realization of the reconfigurability use case scenarios. Finally, the thesis concludes with a brief comparison between the traditional software product line scoping approach and the proposed scoping method for the reconfigurability of manufacturing systems.
This thesis aims at investigating the capability and feasibility of Machine Learning algorithms for developing models that simulate the behavior of E/E powertrain components. Machine-learning-based simulation models have the advantage of being trained on real measurement data, so no time-consuming manual setup of equations and parameter adaptations is needed to obtain a proper simulation model of the component.
For this purpose, the thesis starts with an introduction of the E/E powertrain components of interest. Moreover, Machine Learning algorithms are introduced that support model-based and supervised training and are hence of interest for behavior simulation.
The design, implementation, training, and optimization of the different Machine Learning based simulation models according to the provided data is presented. These models are not only simulation models of the single introduced components but also models of the composition of these components.
The resulting models are evaluated against test data which has not been used for training. This evaluation illustrates the ability and inability of the different Machine Learning algorithms to simulate and generalize specific powertrain components. It also illustrates the necessary scope of the models according to the number of composite components and their accuracy.
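The train-on-measurements, evaluate-on-held-out-data loop described above can be sketched in a heavily simplified form with a plain least-squares model on synthetic "measurement" data. The component, its inputs, and all coefficients are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical measurement data for a powertrain component:
# inputs = [supply voltage, rotational speed], output = current draw.
X = rng.uniform([0.0, 0.0], [48.0, 300.0], size=(500, 2))
y = 0.8 * X[:, 0] - 0.05 * X[:, 1] + rng.normal(0.0, 0.2, 500)

# Split into training data and held-out test data, as in the evaluation.
X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]

# A linear model trained purely from data: no hand-derived equations.
Phi = np.c_[X_tr, np.ones(len(X_tr))]           # add a bias column
w, *_ = np.linalg.lstsq(Phi, y_tr, rcond=None)  # least-squares fit

pred = np.c_[X_te, np.ones(len(X_te))] @ w
rmse = np.sqrt(np.mean((pred - y_te) ** 2))
print(f"held-out RMSE: {rmse:.3f} A")
```

The thesis uses richer supervised learners than a linear fit, but the key point survives even here: the model is obtained from data alone, and its generalization is judged only on measurements it never saw during training.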
Industries use software product lines as a solution to the ever-increasing variety-rich customer requirements for software products. In order to realize the variability in the product line, several variability realization techniques are used, of which conditional compilation and execution are the most frequently used in practice. This is not without its challenges.
As the product line evolves in space and time, several versions of products are released, thereby increasing the complexity of variability code in an uncontrolled manner. In most cases, there exists no explicit variability model to provide important configuration knowledge, or the variability model and variability code do not synchronize with each other, e.g., important dependencies from the code realizations are not reflected in the variability model. When the domain experts leave the company, the product configuration knowledge will be lost. New employees will have to be trained on the domain knowledge and are left with the herculean task of tracking the code changes in the variability code for the different versions. They also have to understand the variability code to analyze the impact of code changes and how to adapt them. Overall, this lack of explicit and sound configuration knowledge results in higher effort during product configuration and quality assurance. Hence, industries are interested in recovering configuration knowledge via semi-automated analyses of the variability code and the existing product configurations.
This Master’s thesis investigates the various approaches that can be followed in order to recover existing configuration knowledge. It is an extension of the previous research works on the VITAL approach conducted at TU Kaiserslautern and Fraunhofer IESE. The focus of this research will be the solution space, i.e., the variability realization through variability code mechanisms like conditional compilation/execution. The goal is to analyze the preprocessor directives or respective constructs in programming languages, study respective state of the art advances in recent years and enhance the VITAL analysis method and tool. In particular, identification of configuration parameters, their values and ranges, the constraints and nesting between one parameter to the other are the primary objectives of the research. As secondary goals, visualization of the identified product configuration knowledge in the existing tool and optimization of the algorithms present in the tool will be implemented from the results of the primary goals. For the research, open source libraries and applications will be identified and used for analysis. The work will be guided by real world industrial settings.
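A first step of such a semi-automated analysis, extracting configuration parameters and their nesting from preprocessor conditionals, can be sketched as follows. This is a rough approximation of what a variability-code analysis must do, with a hypothetical code fragment; the actual analysis method handles far more constructs (expressions, ranges, value constraints):

```python
import re

def extract_config_knowledge(source):
    """Scan C-preprocessor conditionals to recover configuration
    parameters and their nesting, i.e. which parameter is only
    reachable under which other parameter."""
    params, nesting, stack = set(), set(), []
    for line in source.splitlines():
        line = line.strip()
        m = re.match(r'#\s*(ifdef|ifndef|if|elif)\b(.*)', line)
        if m:
            found = re.findall(r'\b[A-Z][A-Z0-9_]+\b', m.group(2))
            params.update(found)
            for outer in stack:          # record nesting dependencies
                for p in found:
                    nesting.add((outer, p))
            if m.group(1) in ("ifdef", "ifndef", "if"):
                stack.append(found[0] if found else "?")
        elif re.match(r'#\s*endif', line) and stack:
            stack.pop()
    return params, nesting

code = """
#ifdef FEATURE_LOGGING
#  if defined(LOG_TO_FILE)
void log_file(void);
#  endif
#endif
#ifdef FEATURE_CRYPTO
void encrypt(void);
#endif
"""
params, nesting = extract_config_knowledge(code)
print(sorted(params))   # all configuration parameters found
print(sorted(nesting))  # (outer, inner) nesting dependencies
```

Even this toy scan already yields the two ingredients the recovery described above needs: the set of configuration parameters and the nesting constraints between them, from which a first variability model can be proposed to the domain experts.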
Model-based Systems Engineering (MBSE) has established itself as a successful approach to realize increasingly complex systems within an acceptable timeframe. However, rapidly changing and evolving systems as well as their growing distributed development pose additional challenges, especially with regard to the modifiability, adaptability and reusability of their components. In addition, the demand for highly flexible and customizable systems continues to grow. This results in a significantly greater need for an efficient variant management. Proven approaches and methods already exist in the respective development disciplines to face these challenges. A solid MBSE approach, however, must provide a system-wide solution and answer how concurrent changes in a system model can be handled efficiently, especially if several similar system variants are developed in parallel. Industrial practice still shows a great deal of uncertainty in this respect. There are no conclusive answers to many questions. How can changes in a SysML model best be supported and, in particular, transferred effectively between model variants and versions? Should one model contain all configurations or is a separate variability model more useful? Which strategies are best suited to avoid imminent discrepancies between variant configuration and implementation and how can individual model components be efficiently reused? In order to address these questions and provide practitioners with a helpful guideline, this master’s thesis examines and compares existing approaches for realizing model variants in SysML with regard to their functionality as well as their effects (positive and negative) on the overall system concept. Since the focus lies on the feasibility of the shown approaches, they are applied by means of typical evolution scenarios and subsequently evaluated with regard to relevant performance indicators such as understandability, effort, granularity and independence. 
It is not expected that one approach is the best choice for every initial situation and under all circumstances. The introduced evaluation system thus aims to serve, on the one hand, as situational decision support and, on the other hand, to offer the opportunity to examine, classify, and evaluate one's own approaches and procedures more thoroughly.
Synapses are connections between nerve cells that form an essential link in neural signal transmission. A general distinction is made between electrical and chemical synapses; chemical synapses are more common in the human brain and are also the type we deal with in this work.
In chemical synapses, small container-like objects called vesicles fill with neurotransmitter and expel it from the cell during synaptic transmission. This process is vital for communication between neurons. However, to the best of our knowledge, no mathematical models that take different filling states of the vesicles into account had been developed before this thesis was written.
In this thesis we propose a novel mathematical model for modeling synaptic transmission at chemical synapses which includes the description of vesicles of different filling states. The model consists of a transport equation (for the vesicle growth process) plus three ordinary differential equations (ODEs) and focuses on the presynapse and synaptic cleft.
The well-posedness is proved in detail for this partial differential equation (PDE) system. We also propose a few different variations and related models. In particular, an ODE system is derived and a delay differential equation (DDE) system is formulated. We then use nonlinear optimization methods for data fitting to test some of the models on data made available to us by the Animal Physiology group at TU Kaiserslautern.
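As a drastically simplified caricature of such a model (only two discrete filling states, hypothetical rate constants, and no transport equation for the continuous filling process), the ODE part can be sketched as:

```python
def step(state, dt, k_fill=0.5, k_rel=0.2, k_new=0.1):
    """Euler step for a toy vesicle model: empty vesicles E fill at
    rate k_fill, full vesicles F release their transmitter T at rate
    k_rel, and new empty vesicles appear at rate k_new. All rate
    constants are hypothetical, not fitted to the thesis data."""
    E, F, T = state
    return (E + dt * (k_new - k_fill * E),
            F + dt * (k_fill * E - k_rel * F),
            T + dt * k_rel * F)

state = (1.0, 0.0, 0.0)          # start with only empty vesicles
for _ in range(10000):            # integrate to t = 100
    state = step(state, dt=0.01)
E, F, T = state
print(f"E = {E:.3f}, F = {F:.3f}, released T = {T:.3f}")
```

The pools settle to a steady state (here E tends to k_new/k_fill and F to k_fill·E/k_rel) while released transmitter accumulates. The thesis model replaces the two discrete pools by a transport equation over a continuum of filling states, which is what makes the well-posedness analysis above nontrivial.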
Cutting-edge cancer therapy involves producing individualized medicine for many patients at the same time. Within this process, most steps can be completed for a certain number of patients simultaneously. Using these resources efficiently may significantly reduce waiting times for the patients and is therefore crucial for saving human lives. However, this involves solving a complex scheduling problem, which can mathematically be modeled as a proportionate flow shop of batching machines (PFB). In this thesis we investigate exact and approximate algorithms for tackling many variants of this problem. Related mathematical models have been studied before in the context of semiconductor manufacturing.
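The batching structure of a PFB can be made concrete with a small FIFO heuristic (not one of the exact or approximate algorithms investigated in the thesis; the instance is hypothetical): jobs are pushed through the machines in order, in greedy batches of at most b jobs, where each batch waits both for the machine to be free and for its latest-arriving job.

```python
def pfb_completion_times(release, p, b):
    """FIFO schedule through a proportionate flow shop of batching
    machines: every job needs time p[i] on machine i, and machine i
    can process up to b jobs simultaneously as one batch."""
    done = sorted(release)                # ready times before machine 0
    for pi in p:
        free, out = 0.0, []
        for k in range(0, len(done), b):  # greedy batches of size <= b
            batch = done[k:k + b]
            start = max(free, max(batch)) # wait for machine and jobs
            free = start + pi
            out += [free] * len(batch)    # whole batch finishes together
        done = out
    return done

# Hypothetical instance: 5 patients, 2 process steps, batch capacity 2.
times = pfb_completion_times(release=[0, 0, 1, 3, 3], p=[2.0, 4.0], b=2)
print(times)
```

Every job in a batch finishes at the same time, and "proportionate" means the processing time depends only on the machine, not on the job; the optimization problem studied in the thesis is to choose the batches (and thus the waiting) far better than this greedy sketch does.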