Music Information Retrieval (MIR) is an interdisciplinary research area whose goal is to improve the way music is made accessible through information systems. One important part of MIR is research into algorithms that extract meaningful information (called feature data) from music audio signals. Feature data can, for example, be used for content-based genre classification of music pieces. This master's thesis contributes to the current state of the art in three ways:
• First, an overview of many of the features used in MIR applications is given. These methods – called “descriptors” or “features” in this thesis – are discussed in depth, with a literature review and, for most of them, illustrations.
• Second, a large part of the described features is implemented in a uniform framework called T-Toolbox, programmed in the Matlab environment. It also supports classification experiments and descriptor visualisation; for classification, an interface to the machine-learning environment WEKA is provided.
• Third, preliminary evaluations investigate how well these methods are suited for automatically classifying music according to categorizations such as genre, mood, and perceived complexity. This evaluation uses the descriptors implemented in the T-Toolbox and several state-of-the-art machine-learning algorithms. It turns out that – in the experimental setup of this thesis – the treated descriptors cannot reliably discriminate between the classes of most examined categorizations, but there is an indication that these results could be improved by developing more elaborate techniques.
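The T-Toolbox itself is Matlab code and is not reproduced here, but the kind of descriptor the abstract refers to can be illustrated in a few lines. The following Python sketch (illustrative only, not taken from the thesis) computes the spectral centroid, a classic timbral "brightness" feature:

```python
import numpy as np

def spectral_centroid(signal, sample_rate):
    """Spectral centroid: the 'center of mass' of the magnitude
    spectrum, a common brightness descriptor in MIR."""
    magnitudes = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    if magnitudes.sum() == 0:
        return 0.0
    return float((freqs * magnitudes).sum() / magnitudes.sum())

# Sanity check: for a pure 440 Hz tone the centroid sits near 440 Hz.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
print(spectral_centroid(tone, sr))
```

In a full descriptor pipeline such a value would be computed per short-time frame and aggregated (mean, variance) before being fed to a classifier.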
Nowadays, vehicle control systems such as anti-lock braking systems, electronic stability control, and cruise control systems yield many advantages. The electronic control units deployed in this application domain are embedded systems that are integrated into larger systems to achieve predefined applications. Embedded systems consist of embedded hardware and a large software part. Model-based development for embedded systems offers significant software-development benefits that are pointed out in this thesis. The vehicle control system Adaptive Cruise Control is developed in this thesis using a model-based software development process for embedded systems. Simulink, a modern industrial design tool that is prevalent in this domain, is used for modeling the environment and the system behavior, for determining controller parameters, and for simulation purposes. Using an appropriate toolchain, the embedded code is generated automatically. The adaptive cruise control system was successfully implemented and tested within a short timespan using a waterfall model without increments. The vehicle plant and important filters are fully derived in detail; therefore, the design of further vehicle control systems requires less development effort while allowing precise simulation.
In its rather short history, robotic research has come a long way in the half century since it started to exist as a noticeable scientific field. Due to its roots in engineering, computer science, mathematics, and several other 'classical' scientific branches, a grand diversity of methodologies and approaches existed from the very beginning. Hence, researchers in this field are particularly used to adopting ideas that originate in other fields. As a fairly logical consequence, scientists turned to biology during the 1970s in order to find approaches that are ideally adapted to the conditions of our natural environment. Doing so allows for introducing principles to robotics that have already shown their great potential by prevailing in a tough evolutionary selection process for millions of years. The variety of these approaches spans from efficient locomotion, to sensor-processing methodologies, all the way to control architectures. Thus, the full spectrum of challenges for autonomous interaction with the surroundings while pursuing a task can be covered by such means. A feature that has proven to be amongst the most challenging to recreate is the human ability of biped locomotion. This is mainly because walking, running, and so on are highly complex processes involving the need for energy-efficient actuation, sophisticated control architectures and algorithms, and an elaborate mechanical design, while at the same time posing restrictions concerning stability and weight. However, it is of special interest since our environment favors this specific kind of locomotion and thus promises to open up an enormous potential if mastered. More than mere scientific interest, it is the fascination of understanding and recreating parts of oneself that drives the ongoing efforts in this area of research.
That this is not at all an easy task to tackle is caused not only by the highly dynamic processes involved but also by the challenging design process, which cannot be limited to just one aspect such as the control architecture, actuation, sensors, or mechanical design alone. Each aspect has to be incorporated into a sound general concept in order to allow for a successful outcome. Since control is in this context inseparably coupled with the mechanics of the system, both have to be dealt with here.
Ever since Mark Weiser’s vision of Ubiquitous Computing, the importance of context has increased in the computer science domain. Future Ambient Intelligent Environments will assist humans in their everyday activities, even without them being constantly aware of it. Objects in such environments will have small computers embedded into them which are able to predict human needs from the current context and adapt their behavior accordingly. This vision equally applies to future production environments. In modern factories, workers and technical staff are confronted with a multitude of devices from various manufacturers, all with different user interfaces, interaction concepts, and degrees of complexity. Production processes are highly dynamic; whole modules can be exchanged or restructured. Both factors force users to continuously change their mental model of the environment. This complicates their workflows and leads to avoidable user errors or slips in judgement. In an Ambient Intelligent Production Environment, these challenges have to be addressed. The SmartMote is a universal control device for ambient intelligent production environments like the SmartFactoryKL. It copes with the problems mentioned above by integrating all user interfaces into a single, holistic, mobile device. Following an automated Model-Based User Interface Development (MBUID) process, it generates a fully functional graphical user interface from an abstract task-based description of the environment at run-time. This work introduces an approach to integrating context, namely the user’s location, as an adaptation basis into the MBUID process. A Context Model is specified which stores location information in a formal and precise way. Connected sensors continuously update the model with new values. The model is complemented by a reasoning component which uses an extensible set of rules.
These rules are used to derive more abstract context information from basic sensor data and to provide this information to the MBUID process. The feasibility of the approach is shown using the example of Interaction Zones, which let developers describe different task models depending on the user’s location. Using the context model to determine when a user enters or leaves a zone, the generator can adapt the graphical user interface accordingly. Context-awareness and the potential to adapt to the current context of use are key requirements of applications in ambient intelligent environments. The approach presented here provides a clear procedure and extension scheme for the consideration of additional context types. As context has a significant influence on the overall User Experience, this results not only in greater usefulness but also in improved usability of the SmartMote.
The research for this thesis was conducted to develop a framework that supports the automatic configuration of project-specific software development processes by selecting and combining different technologies: the Process Configuration Framework. The research draws attention to the problem that, while the research community develops new technologies, industrial companies keep using only the ones they already know; as a result, technology transfer takes decades. In addition, no single solution solves all the problems of a software development project, which leads to a number of technologies that need to be combined within one project.
The framework developed and explained in this research addresses these problems by building a bridge between research and industry and by supporting software companies in selecting the most appropriate technologies, combined into a software process. The technology transfer gap is filled by a repository of (new) technologies, which is used as the foundation of the Process Configuration Framework. The process is configured by providing a SPEM process pattern for each technology, so that companies can build their process by plugging these patterns into each other.
The technologies in the repository are specified in a schema comprising a technology model, a context model, and an impact model. With context and impact it is possible to provide information about a technology, for example its benefits regarding quality, cost, or schedule. The process patterns are produced as the output of the Process Configuration Framework in several stages:
I Technology Ranking:
1 Ranking based on Application Domain, Project & Impact
2 Ranking based on Environment
3 Ranking based on Static Context
II Technology Combination:
4 Creation of all possible Technology Chains
5 Restriction of the Technology Chains
6 Ranking based on Static and Dynamic Context
7 Extension of the Chains by Quality Assurance
III Process Configuration:
8 Process Component Diagram
9 Extension of the Process Component Diagram
10 Instantiation of the Components by Technologies of the Technology Chain
11 Providing process patterns
12 Creation of the process based on Patterns
The effectiveness and quality of the Process Configuration Framework were additionally evaluated in a case study. Here, Technology Chains manually created by experts were compared to the chains automatically created by the framework after it had been configured by those experts. This comparison showed that the framework's results are similar and can therefore be used as a recommendation.
We conclude from our research that support during the configuration of a process for software projects is important, especially for non-experts. This support is provided by the Process Configuration Framework developed in this research. In addition, our research has shown that the framework offers a possibility to narrow the technology transfer gap between the research community and industrial companies more quickly.
Recently, a new Quicksort variant due to Yaroslavskiy was chosen as standard sorting
method for Oracle's Java 7 runtime library. The decision for the change was based on
empirical studies showing that on average, the new algorithm is faster than the formerly
used classic Quicksort. Surprisingly, the improvement was achieved by using a dual pivot
approach — an idea that was considered not promising by several theoretical studies in the
past. In this thesis, I try to find the reason for this unexpected success.
My focus is on the precise and detailed average case analysis, aiming at the flavor of
Knuth's series “The Art of Computer Programming”. In particular, I go beyond abstract
measures like counting key comparisons, and try to understand the efficiency of the
algorithms at different levels of abstraction. Whenever possible, precise expected values are
preferred to asymptotic approximations. This rigor ensures that (a) the sorting methods
discussed here are actually usable in practice and (b) that the analysis results contribute to
a sound comparison of the Quicksort variants.
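The dual-pivot idea at the heart of the analysis can be sketched briefly. The following Python sketch of Yaroslavskiy's partitioning scheme is illustrative only, not the tuned Java 7 implementation: two pivots p ≤ q split the array into three parts (less than p, between p and q, greater than q), which are then sorted recursively.

```python
def dual_pivot_quicksort(a, lo=0, hi=None):
    """Sketch of Yaroslavskiy's dual-pivot Quicksort (in-place)."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    if a[lo] > a[hi]:                       # ensure p <= q
        a[lo], a[hi] = a[hi], a[lo]
    p, q = a[lo], a[hi]
    lt, gt, i = lo + 1, hi - 1, lo + 1
    while i <= gt:
        if a[i] < p:                        # move to the left part
            a[i], a[lt] = a[lt], a[i]
            lt += 1
        elif a[i] > q:                      # move to the right part
            while a[gt] > q and i < gt:
                gt -= 1
            a[i], a[gt] = a[gt], a[i]
            gt -= 1
            if a[i] < p:
                a[i], a[lt] = a[lt], a[i]
                lt += 1
        i += 1
    lt -= 1
    gt += 1
    a[lo], a[lt] = a[lt], a[lo]             # place the pivots
    a[hi], a[gt] = a[gt], a[hi]
    dual_pivot_quicksort(a, lo, lt - 1)
    dual_pivot_quicksort(a, lt + 1, gt - 1)
    dual_pivot_quicksort(a, gt + 1, hi)
    return a
```

Counting the comparisons and swaps performed by exactly this kind of partitioning loop, averaged over random permutations, is the sort of analysis the thesis carries out in detail.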
Data usage control is a concept that extends access control to also protect data after it
has been released. Usage control enforcement relies on available information about the
distribution of data in the monitored system. In this thesis we introduce an information
flow tracking approach for JavaScript in order to enable usage control for dynamic content
in web browsers. The proposed model is implemented as a prototype in the JavaScript
engine V8 of the Chromium browser to evaluate the feasibility of the chosen approach.
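The core idea of dynamic information flow tracking can be sketched independently of V8. In the following purely illustrative Python sketch (the thesis instruments the JavaScript engine itself, not Python), values carry a set of taint labels, and every operation propagates the union of its operands' labels:

```python
class Tainted:
    """Minimal taint-tracking sketch: a value plus a set of labels
    that propagate through arithmetic (illustrative only)."""
    def __init__(self, value, labels=frozenset()):
        self.value = value
        self.labels = frozenset(labels)

    def _lift(self, other):
        # Wrap plain (untainted) values so they compose uniformly.
        return other if isinstance(other, Tainted) else Tainted(other)

    def __add__(self, other):
        other = self._lift(other)
        return Tainted(self.value + other.value, self.labels | other.labels)

    def __mul__(self, other):
        other = self._lift(other)
        return Tainted(self.value * other.value, self.labels | other.labels)

secret = Tainted(42, {"confidential"})
public = Tainted(8)
result = secret + public * 2   # the taint flows through the computation
print(result.value, sorted(result.labels))
```

A usage-control monitor would consult `result.labels` before allowing the value to leave the browser, which is exactly the kind of decision the tracked distribution information enables.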
Buses not arriving on time and then arriving all at once: this phenomenon is known from busy bus routes and is called bus bunching.
This thesis combines the well-studied but so far separate areas of bus-bunching prediction
and dynamic holding strategies, which make it possible to modulate buses’ dwell times at stops
in order to eliminate bus bunching. We look at real data of the Dublin Bus route 46A and present
a headway-based predictive-control framework covering all components: data
acquisition, prediction, and control strategies. We formulate time headways as time series
and compare several prediction methods for them. Furthermore, we present an analytical
model of an artificial bus route and discuss stability properties and dynamic holding
strategies using both data available at the time and predicted headway data. In a numerical
simulation we illustrate the advantages of the presented predictive-control framework
compared to classical approaches that only use directly available data.
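A minimal illustration of the predict-then-hold idea: the function names and the exponential-smoothing baseline below are my own illustrative choices, not the methods compared in the thesis.

```python
def exp_smooth_forecast(headways, alpha=0.5):
    """One-step-ahead exponential smoothing of observed time headways
    (a simple baseline predictor)."""
    level = headways[0]
    for h in headways[1:]:
        level = alpha * h + (1 - alpha) * level
    return level

def holding_time(predicted_headway, target_headway):
    """Simple holding rule: if a bus is predicted to run closer than
    the target headway to its leader, hold it for the difference."""
    return max(0.0, target_headway - predicted_headway)

# Shrinking headways indicate the bus is catching up with its leader.
forecast = exp_smooth_forecast([10.0, 8.0, 6.0])    # minutes
hold = holding_time(forecast, target_headway=10.0)  # minutes to hold
```

The point of a predictive framework is visible even in this toy: acting on the forecast headway lets the controller intervene before the headway has actually collapsed.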
In the present master’s thesis we investigate the connection between derivations and
homogeneities of complete analytic algebras. We prove a theorem that describes a specific set of generators
for the module of those derivations of an analytic algebra R which map the maximal ideal of R into itself. It turns out that this set has a structure similar to a Cartan subalgebra and contains
information regarding multi-homogeneity. In order to prove
this theorem, we extend the notion of grading by Scheja and Wiebe to projective systems and state the connection between multi-gradings and pairwise
commuting diagonalizable derivations. We prove a theorem similar to Cartan’s Conjugacy Theorem in the setting of infinite-dimensional Lie algebras which arise as projective limits of finite-dimensional Lie algebras. Using this result, we show that the structure of the aforementioned set of generators is an intrinsic property of the analytic algebra. Finally, we state an algorithm that is theoretically able to compute the maximal multi-homogeneity of a complete analytic algebra.
Optimal control of partial differential equations is an important task in applied mathematics where it is used in order to optimize, for example, industrial or medical processes. In this thesis we investigate an optimal control problem with tracking type cost functional for the Cattaneo equation with distributed control, that is, \(\tau y_{tt} + y_t - \Delta y = u\). Our focus is on the theoretical and numerical analysis of the limit process \(\tau \to 0\) where we prove the convergence of solutions of the Cattaneo equation to solutions of the heat equation.
We start by deriving both the Cattaneo and the classical heat equation as well as introducing our notation and some functional analytic background. Afterwards, we prove the well-posedness of the Cattaneo equation for homogeneous Dirichlet boundary conditions, that is, we show the existence and uniqueness of a weak solution together with its continuous dependence on the data. We need this in the following, where we investigate the optimal control problem for the Cattaneo equation: We show the existence and uniqueness of a global minimizer for an optimal control problem with tracking type cost functional and the Cattaneo equation as a constraint. Subsequently, we do an asymptotic analysis for \(\tau \to 0\) for both the forward equation and the aforementioned optimal control problem and show that the solutions of these problems for the Cattaneo equation converge strongly to the ones for the heat equation. Finally, we investigate these problems numerically, where we examine the different behaviour of the models and also consider the limit \(\tau \to 0\), suggesting a linear convergence rate.
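In the notation of the abstract (writing \(y^\tau\) for the state at parameter \(\tau\), a superscript added here for clarity), the limit process connects the two state equations:

```latex
\tau y^{\tau}_{tt} + y^{\tau}_{t} - \Delta y^{\tau} = u
\qquad \longrightarrow \qquad
y_{t} - \Delta y = u
\quad (\tau \to 0).
```

Formally, the wave term \(\tau y_{tt}\) vanishes and the damping term \(y_t\) takes over; the thesis makes this rigorous by proving strong convergence of the solutions, for both the forward problem and the optimal control problem.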
Cutting-edge cancer therapy involves producing individualized medicine for many patients at the same time. Within this process, most steps can be completed for a certain number of patients simultaneously. Using these resources efficiently may significantly reduce waiting times for the patients and is therefore crucial for saving human lives. However, this involves solving a complex scheduling problem, which can mathematically be modeled as a proportionate flow shop of batching machines (PFB). In this thesis we investigate exact and approximate algorithms for tackling many variants of this problem. Related mathematical models have been studied before in the context of semiconductor manufacturing.
Synapses are connections between nerve cells that form an essential link in neural signal transmission. A distinction is generally made between electrical and chemical synapses; chemical synapses are more common in the human brain and are the type we deal with in this work.
In chemical synapses, small container-like objects called vesicles fill with neurotransmitter and expel it from the cell during synaptic transmission. This process is vital for communication between neurons. However, to the best of our knowledge, no mathematical models that take the different filling states of the vesicles into account had been developed before this thesis was written.
In this thesis we propose a novel mathematical model of synaptic transmission at chemical synapses which includes a description of vesicles in different filling states. The model consists of a transport equation (for the vesicle growth process) plus three ordinary differential equations (ODEs) and focuses on the presynapse and synaptic cleft.
The well-posedness is proved in detail for this partial differential equation (PDE) system. We also propose a few different variations and related models. In particular, an ODE system is derived and a delay differential equation (DDE) system is formulated. We then use nonlinear optimization methods for data fitting to test some of the models on data made available to us by the Animal Physiology group at TU Kaiserslautern.
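The abstract does not reproduce the model equations, so the following is a purely hypothetical pool-based sketch (the pool names, rates, and structure are invented for illustration and are not the thesis's model) of how an ODE description of vesicle dynamics might look, integrated with forward Euler:

```python
def simulate(R0=100.0, N0=0.0, k=0.2, r=0.1, dt=0.01, steps=1000):
    """Hypothetical two-pool vesicle sketch: vesicles mature from a
    reserve pool R into a releasable pool N at rate k and release
    transmitter at rate r. Forward-Euler time stepping."""
    R, N, released = R0, N0, 0.0
    for _ in range(steps):
        dR = -k * R              # reserve pool drains
        dN = k * R - r * N       # releasable pool fills and releases
        released += r * N * dt   # cumulative released amount
        R += dR * dt
        N += dN * dt
    return R, N, released
```

Even this toy conserves the total vesicle content (R + N + released stays constant), a sanity property any refined model with filling states would also need to respect.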
Industry 4.0 defines the organization of production and manufacturing processes based on technologically advanced solutions and devices autonomously communicating with each other.
Within the context of this industrial revolution, smart reconfigurable manufacturing systems are introduced. These systems shall be able to provide a dynamic level of reconfigurability based on production demand and system availability. Introducing manufacturing reconfigurability is a particularly important and expensive decision for organizations, and scoping methods are therefore becoming increasingly essential.
The present work is a first approach to defining reconfigurability methods and drivers for manufacturing systems within the context of Industry 4.0. The thesis introduces five main reconfigurability use case scenarios for manufacturing systems and describes a two-dimensional model of scoping parameters.
The first dimension is based on the potential business targets and reconfigurability drivers, while the second dimension focuses on the system functions and technologies required for the successful realization of the reconfigurability use case scenarios. Finally, the thesis concludes with a brief comparison between the traditional software product line scoping approach and the proposed scoping method for the reconfigurability of manufacturing systems.
In recent months, sustainable development and the achievement of the United Nations Sustainable Development Goals have gained unprecedented prominence. SDG 7 aspires to achieve access to electricity for the entire world population by 2030 and, at the same time, to significantly increase the share of renewable energy in the power mix. This target translates into ambitious electricity supply and renewable energy asset growth scenarios for Sub-Saharan Africa, the least developed region worldwide. Though the theoretical renewable energy potential is abundant and capital generally available, progress has been slow. Aside from funds from donors and Development Finance Institutions, private commercial capital is required to accelerate progress. Project Finance has successfully attracted private funds for renewable energy assets in other jurisdictions but has played a negligible role in the energy transition in Sub-Saharan Africa. A variety of reasons that impede its implementation are identified and categorised into (i) unsatisfactory project pre-requisites and preparation, (ii) challenging host country conditions, (iii) elevated non-financial project risks, and (iv) risky financial transaction structures. While a review of potential mitigation measures reveals that the risk factors are theoretically addressable, most require multi-stakeholder alignment and exhibit some implementation complexity. Putting them into practice will therefore take time and require a high level of commitment from host governments, sponsors, and financial institutions. While pressure and urgency are mounting, time will tell whether the project parties are more successful going forward.
Model-based Systems Engineering (MBSE) has established itself as a successful approach to realize increasingly complex systems within an acceptable timeframe. However, rapidly changing and evolving systems as well as their growing distributed development pose additional challenges, especially with regard to the modifiability, adaptability and reusability of their components. In addition, the demand for highly flexible and customizable systems continues to grow. This results in a significantly greater need for an efficient variant management. Proven approaches and methods already exist in the respective development disciplines to face these challenges. A solid MBSE approach, however, must provide a system-wide solution and answer how concurrent changes in a system model can be handled efficiently, especially if several similar system variants are developed in parallel. Industrial practice still shows a great deal of uncertainty in this respect. There are no conclusive answers to many questions. How can changes in a SysML model best be supported and, in particular, transferred effectively between model variants and versions? Should one model contain all configurations or is a separate variability model more useful? Which strategies are best suited to avoid imminent discrepancies between variant configuration and implementation and how can individual model components be efficiently reused? In order to address these questions and provide practitioners with a helpful guideline, this master’s thesis examines and compares existing approaches for realizing model variants in SysML with regard to their functionality as well as their effects (positive and negative) on the overall system concept. Since the focus lies on the feasibility of the shown approaches, they are applied by means of typical evolution scenarios and subsequently evaluated with regard to relevant performance indicators such as understandability, effort, granularity and independence. 
It is not expected that one approach is the best choice for every initial situation and under all circumstances. The introduced evaluation system thus aims to serve on the one hand as a situational decision support and on the other hand to offer the opportunity to examine, classify and evaluate own approaches and procedures more thoroughly.
Industry uses software product lines as a solution to the ever-increasing variety of customer requirements for software products. To realize the variability in a product line, several variability realization techniques are used, of which conditional compilation and execution are among the most frequently used in practice. This is not without its challenges.
As the product line evolves in space and time, several versions of products are released, and the complexity of the variability code increases in an uncontrolled manner. In most cases, there exists no explicit variability model to provide important configuration knowledge, or the variability model and the variability code are not synchronized with each other; e.g., important dependencies in the code realizations are not reflected in the variability model. When domain experts leave the company, the product configuration knowledge is lost. New employees have to be trained on the domain knowledge and are left with the herculean task of tracking the code changes in the variability code across the different versions. They also have to understand the variability code to analyze the impact of code changes and how to adapt them. Overall, this lack of explicit and sound configuration knowledge results in higher effort during product configuration and quality assurance. Hence, industry is interested in recovering configuration knowledge via semi-automated analyses of the variability code and the existing product configurations.
This Master’s thesis investigates the various approaches that can be followed to recover existing configuration knowledge. It is an extension of previous research on the VITAL approach conducted at TU Kaiserslautern and Fraunhofer IESE. The focus of this research is the solution space, i.e., variability realization through code mechanisms like conditional compilation/execution. The goal is to analyze preprocessor directives or the respective constructs in programming languages, study the state-of-the-art advances of recent years, and enhance the VITAL analysis method and tool. In particular, the identification of configuration parameters, their values and ranges, and the constraints and nesting between parameters are the primary objectives of the research. As secondary goals, visualization of the identified product configuration knowledge in the existing tool and optimization of the algorithms present in the tool will be implemented based on the results of the primary goals. For the research, open-source libraries and applications will be identified and used for analysis. The work will be guided by real-world industrial settings.
This thesis aims at investigating the capability and feasibility of Machine Learning algorithms for developing models that simulate the behavior of E/E powertrain components. Machine-learning-based simulation models have the advantage of being trained on real measurement data; no time-consuming manual setup of equations and parameter adaptations is needed to obtain a proper simulation model of the component.
For this purpose, the thesis starts with an introduction of the E/E powertrain components of interest. Moreover, Machine Learning algorithms are introduced that support model-based and supervised training and are hence of interest for behavior simulation.
The design, implementation, training and optimization of the different Machine Learning based simulation models according to the provided data is presented. These models are not only simulation models of the single introduced components but also models of the composition of these components.
The resulting models are evaluated against test data that has not been used for training. This evaluation illustrates the ability and inability of the different Machine Learning algorithms to simulate and generalize specific powertrain components. It also illustrates the necessary scope of the models according to the number of composite components, as well as their accuracy.
On the one hand, Model-based Systems and Software Engineering approaches ease the development of complex software systems. On the other hand, they introduce the challenge of managing the multitude of different artifacts created with various tools during the system lifecycle. For understanding and maintaining these artifacts as they evolve, it is advisable to establish traceability among them. Traceability is the ability to relate the various artifacts created and evolved during the project. However, organizations often consider traceability a burden because it is time-consuming and error-prone when done manually. Hence, the objective of this thesis is to research and develop pragmatic traceability approaches that can be followed in the MBSE context. A systematic mapping study was conducted to understand and compile the various criteria that need to be followed while creating and maintaining trace links. It also provided insights into the approaches followed to ease the burden on engineers. Expert interviews with industrial companies were conducted to investigate engineers’ real-life experiences with traceability and to get an overview of best practices and known pitfalls. Based on the mapping study and the results of the interviews, various approaches and tools used to achieve traceability are discussed. A case study was conducted on state-of-the-practice traceability approaches in a toolchain consisting of Polarion, Enterprise Architect, and Doxygen. For the research, open-source libraries and applications were used for analysis. A tool prototype was developed to create and maintain trace links between artifacts created in the toolchain mentioned above. The use cases in which the tool eases achieving traceability are discussed along with their pros and cons.
For the development of the Extremely Large Telescope (ELT), the European Southern Observatory (ESO)
uses state machines to model life cycles and basic behaviour of control software components. To provide certain degrees of freedom, the component life cycles need to be customisable but in order to remain compatible, they must also conform to specific standard behaviour.
Clearly, these two goals are competing. High customisation causes difficulties in maintenance and may also lead to incompatible solutions. The introduction of strict compatibility requirements
on the other hand may increase maintainability but it also makes the system less flexible. To avoid spending a significant portion of the Assembly, Integration and Verification (AIV) phase in integration hell, it is of high importance to find the right balance between customisability and compatibility early enough.
To address this problem, this thesis examines different variability realisation mechanisms with respect to their applicability for the behavioural customisation of state machine models. Based on this information, a novel approach is presented that combines a set of variability realisation mechanisms and thereby enables open and stepwise customisation, systematic reuse and separation of concerns. Concretely, the method enhances a framework approach with model manipulation capabilities and mixin composition while also supporting conditional compilation and conditional execution. Moreover, the thesis demonstrates that compatibility can be ensured by combining constructive and analytical methods, namely feature orientation and conformance testing. Finally, feasibility and soundness of the elaborated solution concept are demonstrated using a proof of concept implementation that has already been applied to a real-world project in scope of the ELT program.
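Two of the variability realisation mechanisms named above, mixin composition and conditional execution, can be sketched generically. The following Python sketch is hypothetical (the ELT control software is not written in Python, and these class and feature names are invented); it only illustrates how the mechanisms compose:

```python
# Feature toggle driving conditional execution (illustrative).
FEATURES = {"verbose_logging": True}

class BaseLifecycle:
    """Standard life-cycle behaviour every component must conform to."""
    def start(self):
        return ["init"]

class SafetyCheckMixin:
    """Mixin composition: extends the life cycle with an extra step
    without modifying the base class."""
    def start(self):
        return super().start() + ["safety-check"]

class CustomLifecycle(SafetyCheckMixin, BaseLifecycle):
    """A customised component: base behaviour, a mixin, and a
    conditionally executed step."""
    def start(self):
        steps = super().start()
        if FEATURES["verbose_logging"]:   # conditional execution
            steps.append("log")
        return steps

print(CustomLifecycle().start())  # → ['init', 'safety-check', 'log']
```

A conformance test in this setting would assert that every customised life cycle still begins with the standard steps, which is the constructive/analytical split the thesis combines.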
With growing prevalence, agile methodology also pervades domains that adhered to conventional development models for decades. At the same time, the demand for safety-critical applications, and thus for rigorous quality assurance, increases. This raises the question whether agile methodology is able to support the required level of quality assurance.
This master's thesis aims to analyze the situation of analytical quality assurance in agile environments in order to identify shortcomings and provide potential solutions. The author derives an initial hypothesis from his own professional experience, stating that analytical quality assurance is not sufficiently considered by agile development models and agile transformation. This hypothesis is split into eight sub-hypotheses, each describing a particular problem or challenge. Qualitative interviews with seven experts and complementary literature research are performed to examine the given hypotheses, identify further challenges, and collect appropriate solution proposals. Eventually, based on the elicited data, five sub-hypotheses as well as the initial hypothesis are corroborated and five new challenges are added. Furthermore, twenty-six potential solutions for the relevant hypotheses are collected and presented. The solutions comprise established approaches, such as the Dynamic Systems Development Method or exploratory testing, but also innovative ideas, including the Three-Field Agile approach introduced in this thesis.
Altogether, it is found that agile methodology largely does not support traditional analytical quality assurance in its concepts and, even worse, some of its core principles are contradictory to it. However, numerous solutions are found and presented that address particular discrepancies and can ease the situation described.
Global temperature rise and the growing consumption of limited resources are global threats. Therefore, industry and consumers will need to reduce their environmental impacts. For this purpose, Environmental Product Declarations (EPDs) are used for eco-design and product impact comparison. As EPDs are likely to become mandatory, the total number of products to be assessed will increase tremendously. Therefore, the entire EPD workflow will need to be automated to allow large-scale application of EPDs. The goal of this thesis is to develop an automated workflow for EPDs (aEPD) by combining Model-Based Systems Engineering (MBSE), Digital Twin, and Life Cycle Assessment concepts. While MBSE is used for the multilevel requirements analysis, the focus was set on automating data collection along the supply and value chain using the AAS 4.0 implementation of the Digital Twin concept. The applicability of the aEPD workflow is shown in the prototypical implementation of an aEPD for an electric motor. Even though progress has been made, research should be continued on the development of further AAS submodel templates and PCRs to allow standardized data collection and communication on a global scale.
The aim of this thesis is to perform a case study investigating the usability of SysMD in industrial applications. The focus is on how well it can bridge the gap between requirement specifications, modeling, and actual development.

SysMD is a new documentation and modeling language that aims to bring documentation and modeling closer together while not requiring the user to be an expert in modeling or requirement specification. This differentiates SysMD from other tools, which focus on either documentation or modeling, or are aimed at modeling experts.

Through the case study, this thesis shows that SysMD as a language has a promising future, with the potential to bridge the gap between requirements, documentation, and modeling without the user needing to be a modeling expert. It also shows that the SysMD Notebook in its current state is not yet ready for prime time, and I give recommendations on how to improve both the SysMD language and the SysMD Notebook to make them usable for industrial projects in the future.
Model Identification of Power Electronic Systems for Interaction Studies and Small-Signal Analysis
(2023)
The rapid growth in offshore wind brings various challenges to power system research and industry, such as the development of multi-terminal multi-vendor HVDC grids. To ensure interoperability in these power-converter-dominated systems, suitable models are needed to efficiently perform stability and interaction studies. With state-space based small-signal methods, stability and interaction phenomena can be assessed globally for a complex system, yet detailed models are needed. However, in multi-vendor projects most likely only black-boxed models will be available to protect intellectual property, so that identification techniques are necessary to obtain suitable models. This thesis contributes to the research activities on state-space model identification of black-boxed power electronic systems.
In the first part of the thesis, a method was developed and tested in which the elements of linearized state-space matrices are fitted as functions of the operating point, based on input sweeps performed on the model of a grid-forming power converter controlled as a virtual synchronous machine. It was discussed how changes in multiple inputs can be approximated by superposing the individual input dependencies, and a fully operating-point-dependent state-space model approximation was created. The results were validated in time- and frequency-domain analyses. It was found that the method can provide a good approximation, especially for the operating range around the default operating point.
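The superposition idea can be illustrated with a small numerical sketch (the matrix, parameters, and sweep range below are invented stand-ins, not the converter model from the thesis):

```python
import numpy as np

# Hypothetical 2x2 linearised system matrix that depends on two
# operating-point inputs p1, p2 (a stand-in for the converter model).
def true_A(p1, p2):
    return np.array([[-1.0 - 0.5 * p1, 0.2 * p2],
                     [0.1 * p1,       -2.0 - 0.3 * p2]])

p0 = (1.0, 1.0)                       # default operating point
A0 = true_A(*p0)

# Sweep each input individually and fit every matrix element as a
# polynomial of that input (here: degree 1).
sweep = np.linspace(0.5, 1.5, 11)
fits_p1 = [[np.polyfit(sweep, [true_A(p, p0[1])[i, j] for p in sweep], 1)
            for j in range(2)] for i in range(2)]
fits_p2 = [[np.polyfit(sweep, [true_A(p0[0], p)[i, j] for p in sweep], 1)
            for j in range(2)] for i in range(2)]

def approx_A(p1, p2):
    """Superpose the individual input dependencies around A0."""
    A = np.empty((2, 2))
    for i in range(2):
        for j in range(2):
            d1 = np.polyval(fits_p1[i][j], p1) - A0[i, j]
            d2 = np.polyval(fits_p2[i][j], p2) - A0[i, j]
            A[i, j] = A0[i, j] + d1 + d2
    return A

print(np.max(np.abs(approx_A(0.7, 1.3) - true_A(0.7, 1.3))))
```

Because the toy matrix is affine in each input with no cross terms, the superposition is exact here; for real converter models it is only an approximation, which matches the thesis finding that accuracy is best near the default operating point.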
In the second part, identification of a power electronic system was performed based on measurement data generated experimentally from a low-voltage laboratory system. A sequence of input perturbations was applied to the laboratory system, and frequency response data was calculated from the corresponding output perturbations. This data served as the basis for model identification with N4SID and a soon-to-be-published vector fitting method. The identified models were validated by visual inspection of the transfer functions and by comparing the calculated step responses to the step responses measured in the laboratory. It was found that the treatment of incomplete data sets, the generation of substitute data, and the impact of time delays on the identification merit further investigation.
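The step from input and output perturbations to frequency response data can be sketched as follows (a noise-free toy system replaces the laboratory setup; N4SID and vector fitting themselves are not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256

# Hypothetical stand-in for the laboratory system: a known FIR
# response h, excited by a broadband input perturbation u.
h = np.array([0.5, 0.3, 0.2])
u = rng.standard_normal(N)

# Output perturbation via circular convolution (steady-state
# periodic excitation), computed in the frequency domain.
U = np.fft.fft(u)
H_true = np.fft.fft(h, N)
y = np.real(np.fft.ifft(U * H_true))

# Frequency response estimate: ratio of output to input spectra.
H_est = np.fft.fft(y) / U

print(np.max(np.abs(H_est - H_true)))   # ~0 for noise-free data
```

With real measurement data, averaging over several perturbation sequences and windowing are typically needed before such a ratio becomes a usable estimate.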
This work provides a valuable contribution to the research on state-space model identification of black-boxed power electronic systems. It points out challenges and presents promising approaches to enable state-space based methods for stability analysis and interaction studies in future multi-terminal multi-vendor HVDC grids.
Evaluation and development of the bridging application between ISO 15118 and OCPP 2.0.1 protocols
(2023)
The increase in the number of electric vehicles (EVs) has undoubtedly put stress on the local power grid, because these systems were designed without anticipating the charging needs of electric vehicles. To overcome this problem, Smart Charging is introduced to allow the Charging Station Management System (CSMS) to load-balance the charging needs of the electric vehicles during peak hours. In addition, it allows the EVs to return their energy to the system when needed. Smart Charging uses the de facto standards ISO 15118 and OCPP to enable the CSMS to control the charging profiles of the EVs. Since these protocols are specified by different organizations, their compatibility must be analyzed to ensure their interoperability.
In the first part, this thesis applies a theoretical analysis method to analyze the compatibility between ISO 15118 and OCPP. This method uses Symbolic Transition Systems to model the interactions between the protocols. Then, the state transitions and message exchanges of the models are analyzed using the flooding algorithm. The result of this analysis is a compatibility matrix, which illustrates the degrees of compatibility between the states of the protocols. Based on the results, this thesis concludes that ISO 15118 and OCPP are compatible. However, their compatibility is not perfect because of data type incompatibilities between messages: ISO 15118 uses domain-specific data types for its parameters, while OCPP uses generic data types to increase its interoperability with other protocols.
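The flooding idea can be sketched on two toy protocols (states and messages below are invented and much simpler than the actual ISO 15118 and OCPP models):

```python
from collections import deque

# Toy symbolic transition systems (states and messages are invented,
# not the actual ISO 15118 / OCPP models).  Transitions are
# state -> {message: next_state}; "!" sends, "?" receives.
proto_a = {"Idle": {"!Req": "Waiting"},
           "Waiting": {"?Ack": "Charging"},
           "Charging": {}}
proto_b = {"Idle": {"?Req": "Busy"},
           "Busy": {"!Ack": "Serving"},
           "Serving": {}}

def flood(a, b, start=("Idle", "Idle")):
    """BFS over the synchronized product: a send in one protocol
    must match a receive of the same message in the other."""
    reachable = {start}
    queue = deque([start])
    while queue:
        sa, sb = queue.popleft()
        for msg, na in a[sa].items():
            partner = ("?" if msg[0] == "!" else "!") + msg[1:]
            if partner in b[sb]:
                nxt = (na, b[sb][partner])
                if nxt not in reachable:
                    reachable.add(nxt)
                    queue.append(nxt)
    return reachable

# Reachable product states form the basis of the compatibility matrix.
print(sorted(flood(proto_a, proto_b)))
```

State pairs that are never reached, or that are reached but have unmatched sends, are what the compatibility matrix flags as partially or fully incompatible.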
The second part of this thesis describes the concept and design of an application that bridges the communication between ISO 15118 and OCPP. The application also demonstrates how to overcome the problems found in the compatibility analysis using the facade pattern. In addition, the development of the bridging application highlighted several issues that arise in practice. First, due to the large memory footprint of the messages, the OCPP stack is not suitable for running on small embedded systems without extensive optimization. Second, using JSON, a human-readable format, to encode the OCPP messages is unnecessary because most of the messages are processed by machines. In addition, the OCPP application is highly complex due to the nested conditions involved in sending and receiving OCPP messages. Finally, both the JSON and EXI data formats require serializers (parsers) to encode (decode) the messages, adding to the complexity of the system.
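The facade idea for bridging the data-type mismatch can be sketched as follows (class and field names are illustrative; they follow the spirit of ISO 15118-style physical values versus OCPP-style plain numbers, not the exact schemas):

```python
from dataclasses import dataclass

# Hedged sketch of the facade idea: translate a domain-specific
# physical value (value * 10^multiplier) into the plain numeric
# type used on the other side.  Names are illustrative only.

@dataclass
class PhysicalValue:          # domain data type
    value: int
    multiplier: int           # power-of-ten exponent
    unit: str

class ChargingLimitFacade:
    """Facade hiding the data-type mismatch between the two sides."""
    @staticmethod
    def to_generic(pv: PhysicalValue) -> float:
        return float(pv.value * 10 ** pv.multiplier)

    @staticmethod
    def to_domain(limit: float, unit: str = "W") -> PhysicalValue:
        # encode as integer milli-units with multiplier -3
        return PhysicalValue(value=int(limit * 1000), multiplier=-3,
                             unit=unit)

pv = PhysicalValue(value=11, multiplier=3, unit="W")   # 11 kW
print(ChargingLimitFacade.to_generic(pv))               # -> 11000.0
```

The facade keeps the conversion logic in one place, so neither protocol stack needs to know the other's type system.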
Influencer marketing, a tool that uses a popular person's reach on social media for marketing, has for about 15 years been a constantly changing, critical tool for convincing potential customers of products, services, or other messages. Nongovernmental organizations (NGOs) have also recognized the benefits of influencers in building awareness about their work. However, the influencer paradigm has controversies and risks, especially for the often sensitive work of NGOs. The involvement of influencer marketing in the nonprofit sector is a relatively new phenomenon, with little experience, guidance, or specific expertise available. Despite growing interest among researchers and practitioners, scholarly work resulting from the growth of influencer marketing is inconsistent and fragmented, and scientifically based recommendations for practice are almost entirely lacking.
This master's thesis contributes to filling this knowledge gap and to supporting NGO employees in, e.g., communication and social media positions in successfully integrating influencers for a good cause. It clarifies how influencers can effectively support the communication work of NGOs and what steps are needed. The author developed a scientific handout by comparing two case studies of cooperation between NGOs and influencers, including semi-structured interviews with the people involved, supported by the available literature. The guidelines include the necessary steps and instructions for action placed in the context of NGO work: NGOs must first learn about the influencer business, agree on the cooperation, identify a matching candidate, and plan the collaboration carefully. When selecting the influencer, values such as authenticity, trustworthiness, and genuine interest in the NGO's good cause are preconditions for the cooperation's success. Influencer marketing in NGOs will likely grow in the following years, and learning about the field will become imperative.
Epidemiological models have gained much interest during the COVID-19 pandemic. As the pandemic is now driven by newly emerging variants of SARS-CoV-2, the question arises how to model multiple virus variants in a single model.

In this thesis, we have extended an established model for COVID-19 forecasts to multiple virus variants. We analyzed the model mathematically and showed the global existence and uniqueness of the solution as well as important invariance properties for a meaningful model. The implementation into an existing framework, which allows us to identify model parameters based on surveillance data, is described briefly.

When applying our model to actual transitions between SARS-CoV-2 variants, we found that forecasts would have been significantly improved by our model extension. In most cases, we were able to precisely predict peak dates and heights in case incidences of waves caused by newly emerging variants during early transition phases. More severe outcomes, like hospitalizations, were found to be harder to predict because of the very limited observational data regarding these outcomes for newly emerging variants.
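As a hedged illustration of the modelling idea (not the thesis model, whose equations and parameters are more elaborate), a minimal two-variant SIR system can be integrated with an explicit Euler scheme:

```python
import numpy as np

# Minimal two-variant SIR sketch (illustrative parameters, not the
# thesis model): susceptibles S, infected I1/I2 per variant,
# recovered R; beta differs per variant, gamma is shared.
def simulate(beta=(0.2, 0.35), gamma=0.1, days=300, dt=0.1):
    S, I1, I2, R = 0.99, 0.01, 1e-4, 0.0
    traj = []
    for _ in range(int(days / dt)):
        new1 = beta[0] * S * I1        # new infections, variant 1
        new2 = beta[1] * S * I2        # new infections, variant 2
        rec = gamma * (I1 + I2)        # recoveries (from old state)
        S  += dt * (-new1 - new2)
        I1 += dt * (new1 - gamma * I1)
        I2 += dt * (new2 - gamma * I2)
        R  += dt * rec
        traj.append((S, I1, I2, R))
    return np.array(traj)

traj = simulate()
print(traj[-1])    # final compartment sizes
```

The total population is conserved by construction, one of the invariance properties a meaningful extension must preserve; the variant with the higher transmission rate takes over the case incidence during the transition phase.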
The rapid growth of systems, both in size and complexity, combined with their distributed nature, poses challenges for their efficient integration and functioning. Moreover, in order to achieve sustainability objectives and future goals, systems increasingly collaborate with each other, resulting in the emergence of large-scale, independent Systems of Systems (SoS). In such scenarios, multiple stakeholders and systems from different disciplines with diverse interests need to interoperate. In various domains, this trend of growing systems creates a greater need for interfaces that ensure seamless interoperability between and within these systems and SoS.

To address these challenges, an effective method for integrating systems and SoS is required. A key to easing this integration can be the use of interface specifications to describe and specify interfaces. However, there is currently no comprehensive understanding of how to write high-quality interface specifications, nor is there a common overview of interface specification approaches.

This thesis aims to fill these gaps by reviewing recent developments and best practices for interface specifications in the context of systems engineering and SoS engineering. The review was conducted through a literature review focusing on interface specifications, complemented by an analysis of existing interface specification approaches and expert interviews. The goal is to provide an overview of current interface specification characteristics and their common use cases. Based on this analysis, a usage-driven approach in the form of customised interface specification mappings was developed, which can assist in identifying an appropriate approach for specifying interfaces. In light of the increasing connectivity in our lives, this work provides a framework for better classifying and approaching interface specifications, seeking to move away from viewing interfaces as neglected elements of systems engineering towards a more intelligent and productive treatment.
In product line engineering tasks, the need to merge models from different product variants emerges, as the commonly used clone-and-own approach suffers from high maintenance costs in the long run. By identifying models with a high number of similarities, we can merge them into one highly reusable model. This approach increases the maintainability and further expandability of the model.

Many works have already been published aiming to solve this problem with different N-way model matching approaches. However, there is a lack of practical evidence that the published theories work as designed in real-world cases.

In this work, we evaluate relevant published approaches and then integrate the most promising one into the product line analysis framework VARIOUS from Fraunhofer IESE. Next, the implemented approach is evaluated in comparison to the existing model matching mechanism that VARIOUS integrates, called "System Aligner". The main aspects of our evaluation are:
• Accuracy - Can it accurately find the most similar models?
• Performance - How fast is it?
• Scalability - How well does it scale with a large number of input models?
• Configurability - Can it be adapted easily for different systems?
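As a hedged illustration of what "finding the most similar models" means in the simplest case, models can be reduced to sets of element names and scored pairwise (real N-way matching approaches operate on full model structures, not name sets):

```python
from itertools import combinations

# Toy product-variant models as sets of element names (a drastic
# simplification of real model matching, for illustration only).
models = {
    "variant_a": {"Engine", "Brake", "ABS", "Dashboard"},
    "variant_b": {"Engine", "Brake", "ABS", "Dashboard", "CruiseControl"},
    "variant_c": {"Engine", "Battery", "Inverter"},
}

def jaccard(a, b):
    """Similarity as shared elements over all elements."""
    return len(a & b) / len(a | b)

# Score every pair; the highest-scoring pair is the best merge candidate.
scores = {(m, n): jaccard(models[m], models[n])
          for m, n in combinations(models, 2)}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))   # -> ('variant_a', 'variant_b') 0.8
```

N-way approaches generalise this pairwise picture to all variants at once, which is precisely where the accuracy and scalability questions above become non-trivial.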
This master thesis presents a collection of architectural design patterns for safety-critical systems deployed on public cloud infrastructure. The research aims to enhance system reliability, mitigate risks, and improve overall performance in safety-critical applications. The study follows a systematic approach, considering multiple safety-critical use cases and prioritizing factors such as timing constraints and system resilience. The railway signaling system, particularly the moving block computation, is selected as the most suitable use case due to its ability to tolerate response delays and re-request computations. The thesis addresses four research questions concerning the deployment of safety-critical systems to the public cloud, existing fault-tolerance methods in the cloud, identification of relevant design patterns, and the applicability of design patterns in various safety-critical systems.
The study identifies and reviews fault-tolerance methods and cloud failure modes, which serve as a basis for identifying design patterns. The Structured What-If Technique (SWIFT) is utilized to analyze prospective hazards and recommend actions, which are then mapped onto design patterns for wide applicability across different projects. Each design pattern presents a problem statement, guidelines for implementation, and associated benefits and drawbacks.
The contribution of this thesis lies in the development of a valuable resource for architects and engineers working on safety-critical systems in the cloud. The design patterns offer practical solutions and a framework for the design and implementation of robust and secure systems. Detailed documentation, including context, benefits, drawbacks, and practical examples, facilitates understanding and adoption.
In conclusion, this thesis contributes to the advancement of safety and reliability in cloud-based safety-critical systems by providing architectural design patterns. Future research should focus on integrating security aspects, gathering diverse use cases, and validating the patterns in practical settings. Continued exploration and refinement of the design patterns will lead to more robust solutions for meeting the needs and challenges of safety-critical applications in various contexts.
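One of the classic fault-tolerance ideas behind such patterns, redundant computation with majority voting, can be sketched in a few lines (this is a generic illustration, not a pattern copied from the thesis catalogue):

```python
import collections

# Illustrative fault-tolerance sketch: N-version redundant
# computation with majority voting, in the spirit of the cloud
# design patterns discussed above.
def majority_vote(replica_results):
    """Accept a result only if a strict majority of replicas agree."""
    counts = collections.Counter(replica_results)
    result, votes = counts.most_common(1)[0]
    if votes * 2 > len(replica_results):
        return result
    raise RuntimeError("no majority - treat computation as failed")

# Three redundant moving-block computations; one replica is faulty.
print(majority_vote([120, 120, 95]))   # -> 120
```

In a safety-critical setting, the "no majority" branch is the important one: the system must fall back to a safe state (e.g., re-requesting the computation, as the moving-block use case permits) rather than use an unverified result.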
Given a finite or countably infinite family of Hilbert spaces \((H_j)_{j\in N} \), we study the Hilbert space tensor product \(\bigotimes_{j\in N} H_j\). In the general case, these tensor products were introduced by John von Neumann. We are especially interested in the case where each Hilbert space \(H_j\) is given as a reproducing kernel Hilbert space, i.e., \(H_j = H(K_j)\) for some reproducing kernel \(K_j\). We establish the following result, which is new for the case of N being infinite: If we restrict the domains of the kernels \(K_j\) properly, their pointwise product \(K\) is again a reproducing kernel, and
\[
H(K) \cong \bigotimes_{j\in N} H_j\,
\]
i.e., there is an isometric isomorphism between both spaces respecting the tensor product structure.
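In the notation chosen here (not necessarily that of the thesis), the pointwise product of the kernels reads:

```latex
% Pointwise product of the kernels on suitably restricted domains X_j,
% chosen so that the infinite product converges:
K\bigl((x_j)_{j\in N},\,(y_j)_{j\in N}\bigr)
  = \prod_{j\in N} K_j(x_j, y_j),
\qquad (x_j)_{j\in N},\ (y_j)_{j\in N} \in \prod_{j\in N} X_j .
```

For finite N this is the classical product-kernel construction; the contribution of the thesis is that, with the right domain restriction, it extends to countably infinite families.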