Music Information Retrieval (MIR) is an interdisciplinary research area whose goal is to improve the way music is made accessible through information systems. One important part of MIR is the search for algorithms that extract meaningful information (called feature data) from music audio signals. Feature data can, for example, be used for content-based genre classification of music pieces. This master's thesis contributes to the current state of the art in three ways:

• First, it gives an overview of many of the features used in MIR applications. These methods – called “descriptors” or “features” in this thesis – are discussed in depth, with a literature review and, for most of them, illustrations.

• Second, a large part of the described features are implemented in a uniform framework, called T-Toolbox, which is programmed in the Matlab environment. It also supports classification experiments and descriptor visualisation; for classification, an interface to the machine-learning environment WEKA is provided.

• Third, preliminary evaluations investigate how well these methods are suited for automatically classifying music according to categorizations such as genre, mood, and perceived complexity. This evaluation uses the descriptors implemented in the T-Toolbox together with several state-of-the-art machine-learning algorithms. It turns out that – in the experimental setup of this thesis – the treated descriptors are not capable of reliably discriminating between the classes of most examined categorizations, but there is an indication that these results could be improved by developing more elaborate techniques.
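To illustrate what such a descriptor is, the following sketch computes the zero-crossing rate, one of the simplest time-domain audio features used in MIR. It is a hedged illustration only: the frame and hop sizes are assumed values, and the code is plain Python/NumPy rather than the thesis's Matlab T-Toolbox.

```python
import numpy as np

def zero_crossing_rate(signal, frame_size=1024, hop_size=512):
    """Fraction of sign changes per frame -- a crude 'noisiness' descriptor."""
    rates = []
    for start in range(0, len(signal) - frame_size + 1, hop_size):
        frame = signal[start:start + frame_size]
        # count positions where the signal changes sign within the frame
        crossings = np.sum(np.abs(np.diff(np.sign(frame))) > 0)
        rates.append(crossings / (frame_size - 1))
    return np.array(rates)

# A pure tone crosses zero only twice per period, so its rate is low;
# white noise changes sign roughly half the time.
sr = 8000
t = np.arange(sr) / sr
sine = np.sin(2 * np.pi * 440 * t)
noise = np.random.default_rng(0).standard_normal(sr)
print(zero_crossing_rate(sine).mean())   # low
print(zero_crossing_rate(noise).mean())  # much higher
```

Genre classifiers of the kind evaluated in the thesis feed many such per-frame descriptor values (usually summarized by mean and variance) into a machine-learning algorithm.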
In its rather short history, robotics research has come a long way in the half century since it started to exist as a noticeable scientific field. Due to its roots in engineering, computer science, mathematics, and several other 'classical' scientific branches, a grand diversity of methodologies and approaches existed from the very beginning. Hence, researchers in this field are particularly used to adopting ideas that originate in other fields. As a fairly logical consequence, scientists turned to biology during the 1970s in order to find approaches that are ideally adapted to the conditions of our natural environment. Doing so allows for introducing principles to robotics that have already shown their great potential by prevailing in a tough evolutionary selection process for millions of years. The variety of these approaches spans from efficient locomotion, to sensor-processing methodologies, all the way to control architectures. Thus, the full spectrum of challenges for autonomous interaction with the surroundings while pursuing a task can be covered by such means. A feature that has proven to be among the most challenging to recreate is the human ability of biped locomotion. This is mainly because walking, running, and so on are highly complex processes that require energy-efficient actuation, sophisticated control architectures and algorithms, and an elaborate mechanical design, while at the same time posing restrictions concerning stability and weight. However, biped locomotion is of special interest since our environment favors this specific kind of locomotion and thus promises to open up an enormous potential if mastered. More than mere scientific interest, it is the fascination of understanding and recreating parts of oneself that drives the ongoing efforts in this area of research.
The fact that this is not at all an easy task to tackle is caused not only by the highly dynamical processes involved, but also has its roots in the challenging design process. That is because the design cannot be limited to just one aspect, e.g. the control architecture, actuation, sensors, or mechanical design alone. Each aspect has to be incorporated into a sound general concept in order to allow for a successful outcome. Since control is in this context inseparably coupled with the mechanics of the system, both have to be dealt with here.
Nowadays, vehicle control systems such as anti-lock braking systems, electronic stability control, and cruise control yield many advantages. The electronic control units deployed in this application domain are embedded systems that are integrated into larger systems to achieve predefined applications. Embedded systems consist of embedded hardware and a large software part. Model-based development for embedded systems offers significant software-development benefits, which are pointed out in this thesis. The vehicle control system Adaptive Cruise Control is developed in this thesis using a model-based software-development process for embedded systems. Simulink, a modern industrial design tool that is prevalent in this domain, is used for modeling the environment and the system behavior, for determining controller parameters, and for simulation purposes. Using an appropriate toolchain, the embedded code is generated automatically. The Adaptive Cruise Control system was successfully implemented and tested within the short timespan of the thesis using a waterfall model without increments. The vehicle plant and important filters are fully deduced in detail. Therefore, the design of further vehicle control systems will need less effort for development and precise simulation.
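The control loop behind adaptive cruise control can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration, not the thesis's actual Simulink model: all gains, limits, and the time-gap value are invented for the sketch, and the lead vehicle drives at constant speed.

```python
# Minimal discrete-time ACC sketch: regulate the gap to a lead vehicle
# using a proportional law on gap error and relative speed.
# All parameter values below are illustrative assumptions.
def simulate_acc(steps=600, dt=0.1, kp_gap=0.3, kp_speed=0.8, time_gap=1.5):
    lead_pos, lead_speed = 50.0, 20.0   # lead vehicle 50 m ahead at 20 m/s
    ego_pos, ego_speed = 0.0, 25.0      # ego vehicle approaches too fast
    for _ in range(steps):
        desired_gap = time_gap * ego_speed          # constant-time-gap policy
        gap = lead_pos - ego_pos
        # acceleration command from gap error and relative speed
        accel = kp_gap * (gap - desired_gap) + kp_speed * (lead_speed - ego_speed)
        accel = max(-3.0, min(2.0, accel))          # crude actuator limits
        ego_speed = max(0.0, ego_speed + accel * dt)
        ego_pos += ego_speed * dt
        lead_pos += lead_speed * dt
    return lead_pos - ego_pos, ego_speed

final_gap, final_speed = simulate_acc()
# At equilibrium the ego matches the lead speed and the gap settles
# toward time_gap * lead_speed = 30 m.
print(final_gap, final_speed)
```

In the model-based process described above, such a controller would be modeled and tuned in Simulink against a vehicle plant model, and the embedded code would then be generated from that model rather than written by hand.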
Ever since Mark Weiser’s vision of Ubiquitous Computing, the importance of context has increased in the computer-science domain. Future Ambient Intelligent Environments will assist humans in their everyday activities, even without them being constantly aware of it. Objects in such environments will have small computers embedded into them which have the ability to predict human needs from the current context and adapt their behavior accordingly. This vision equally applies to future production environments. In modern factories, workers and technical staff members are confronted with a multitude of devices from various manufacturers, all with different user interfaces, interaction concepts, and degrees of complexity. Production processes are highly dynamic; whole modules can be exchanged or restructured. Both factors force users to continuously change their mental model of the environment. This complicates their workflows and leads to avoidable user errors or slips in judgement. In an Ambient Intelligent Production Environment, these challenges have to be addressed. The SmartMote is a universal control device for ambient intelligent production environments such as the SmartFactoryKL. It copes with the problems mentioned above by integrating all user interfaces into a single, holistic, mobile device. Following an automated Model-Based User Interface Development (MBUID) process, it generates a fully functional graphical user interface from an abstract, task-based description of the environment at run-time. This work introduces an approach to integrating context, namely the user’s location, as an adaptation basis into the MBUID process. A Context Model is specified which stores location information in a formal and precise way; connected sensors continuously update the model with new values. The model is complemented by a reasoning component which uses an extensible set of rules.
These rules are used to derive more abstract context information from basic sensor data and to provide this information to the MBUID process. The feasibility of the approach is shown using the example of Interaction Zones, which let developers describe different task models depending on the user’s location. Using the context model to determine when a user enters or leaves a zone, the generator can adapt the graphical user interface accordingly. Context-awareness and the ability to adapt to the current context of use are key requirements for applications in ambient intelligent environments. The approach presented here provides a clear procedure and extension scheme for considering additional context types. As context has a significant influence on the overall User Experience, this results not only in greater usefulness, but also in improved usability of the SmartMote.
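The Interaction Zone mechanism can be sketched as a small reasoning rule that turns raw position readings into abstract enter/leave events. This is an invented, minimal illustration: the zone names, rectangular geometry, and API below are assumptions, not the SmartMote's actual context model.

```python
# Sketch: derive abstract "user is in zone" context from raw (x, y)
# sensor readings and emit enter/leave events a UI generator could react to.
class ZoneReasoner:
    def __init__(self, zones):
        self.zones = zones          # zone name -> (xmin, ymin, xmax, ymax)
        self.current = set()        # zones the user is currently inside

    def update(self, x, y):
        """Feed a new position reading; return ('enter'/'leave', zone) events."""
        now = {name for name, (x0, y0, x1, y1) in self.zones.items()
               if x0 <= x <= x1 and y0 <= y <= y1}
        events = [("enter", z) for z in now - self.current]
        events += [("leave", z) for z in self.current - now]
        self.current = now
        return events

# Hypothetical zone around one production module
reasoner = ZoneReasoner({"press_module": (0, 0, 2, 2)})
print(reasoner.update(1.0, 1.0))   # user walks into the zone
print(reasoner.update(5.0, 5.0))   # user leaves it again
```

In the approach described above, such derived events would be what triggers the run-time UI generator to switch to the task model associated with the zone.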
The research for this thesis was conducted to develop a framework that supports the automatic configuration of project-specific software development processes by selecting and combining different technologies: the Process Configuration Framework. The research draws attention to the problem that, while the research community develops new technologies, industrial companies continue using only the ones they know well. Because of this, technology transfer takes decades. In addition, no single solution solves all problems in a software development project, so a number of technologies need to be combined for one project.
The framework developed and explained in this research addresses these problems by building a bridge between research and industry, and by supporting software companies in selecting the most appropriate technologies and combining them into a software process. The technology transfer gap is filled by a repository of (new) technologies, which is used as the foundation of the Process Configuration Framework. The process is configured by providing a SPEM process pattern for each technology, so that companies can build their process by plugging the patterns into each other.
The technologies in the repository are specified in a schema comprising a technology model, a context model, and an impact model. With context and impact it is possible to provide information about a technology, for example its benefits to quality, cost, or schedule. The process patterns that are the output of the Process Configuration Framework are produced in several stages:
I Technology Ranking:
1 Ranking based on Application Domain, Project & Impact
2 Ranking based on Environment
3 Ranking based on Static Context
II Technology Combination:
4 Creation of all possible Technology Chains
5 Restriction of the Technology Chains
6 Ranking based on Static and Dynamic Context
7 Extension of the Chains by Quality Assurance
III Process Configuration:
8 Process Component Diagram
9 Extension of the Process Component Diagram
10 Instantiation of the Components by Technologies of the Technology Chain
11 Providing process patterns
12 Creation of the process based on Patterns
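The ranking steps of stage I can be sketched as a weighted scoring over the impact model. The technology names, impact values, and project weights below are invented purely for illustration; the real framework additionally evaluates context models and emits SPEM process patterns.

```python
# Hypothetical repository excerpt: each technology's impact model gives
# its (assumed) effect on quality, cost, and schedule.
technologies = {
    "ModelChecking": {"quality": 0.9, "cost": -0.4, "schedule": -0.3},
    "CodeReview":    {"quality": 0.6, "cost": -0.1, "schedule": -0.1},
    "RandomTesting": {"quality": 0.4, "cost": -0.05, "schedule": 0.0},
}
# Assumed project priorities: quality matters most here.
project_weights = {"quality": 0.7, "cost": 0.2, "schedule": 0.1}

def rank(techs, weights):
    """Order technologies by the weighted sum of their impact values."""
    score = lambda impact: sum(weights[k] * impact.get(k, 0.0) for k in weights)
    return sorted(techs, key=lambda t: score(techs[t]), reverse=True)

print(rank(technologies, project_weights))
```

The later stages would then combine the top-ranked technologies into Technology Chains and instantiate process components with them.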
The effectiveness and quality of the Process Configuration Framework were additionally evaluated in a case study. Here, Technology Chains created manually by experts were compared to the chains created automatically by the framework after it had been configured by those experts. This comparison showed that the framework's results are similar to the experts' and can therefore be used as recommendations.
We conclude from our research that support during the configuration of a process for software projects is important, especially for non-experts. This support is provided by the Process Configuration Framework developed in this research. In addition, our research has shown that this framework offers a possibility to close the technology transfer gap between the research community and industrial companies more quickly.
Data usage control is a concept that extends access control to also protect data after it
has been released. Usage control enforcement relies on available information about the
distribution of data in the monitored system. In this thesis we introduce an information
engine V8 of the Chromium browser to evaluate the feasibility of the chosen approach.
Recently, a new Quicksort variant due to Yaroslavskiy was chosen as standard sorting
method for Oracle's Java 7 runtime library. The decision for the change was based on
empirical studies showing that on average, the new algorithm is faster than the formerly
used classic Quicksort. Surprisingly, the improvement was achieved by using a dual-pivot
approach — an idea that several theoretical studies in the past had deemed unpromising.
In this thesis, I try to find the reason for this unexpected success.
My focus is on precise and detailed average-case analysis, in the flavor of
Knuth's series “The Art of Computer Programming”. In particular, I go beyond abstract
measures like counting key comparisons, and try to understand the efficiency of the
algorithms at different levels of abstraction. Whenever possible, precise expected values are
preferred to asymptotic approximations. This rigor ensures that (a) the sorting methods
discussed here are actually usable in practice and (b) the analysis results contribute to
a sound comparison of the Quicksort variants.
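For reference, the partitioning scheme under study can be sketched as follows. This is a simplified Python rendering of Yaroslavskiy-style dual-pivot Quicksort, not the JDK code: the Java 7 implementation additionally uses pivot sampling and an insertion-sort cutoff for small ranges.

```python
def dual_pivot_quicksort(a, lo=0, hi=None):
    """Sort list a in place between indices lo and hi using two pivots."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    if a[lo] > a[hi]:
        a[lo], a[hi] = a[hi], a[lo]
    p, q = a[lo], a[hi]              # p <= q are the two pivots
    lt, gt, i = lo + 1, hi - 1, lo + 1
    while i <= gt:
        if a[i] < p:                 # element belongs to the left part
            a[i], a[lt] = a[lt], a[i]
            lt += 1
            i += 1
        elif a[i] > q:               # element belongs to the right part
            while a[gt] > q and i < gt:
                gt -= 1
            a[i], a[gt] = a[gt], a[i]
            gt -= 1
            # do not advance i: re-examine the element swapped in
        else:                        # p <= a[i] <= q: middle part
            i += 1
    lt -= 1
    gt += 1
    a[lo], a[lt] = a[lt], a[lo]      # move pivots into their final slots
    a[hi], a[gt] = a[gt], a[hi]
    dual_pivot_quicksort(a, lo, lt - 1)       # keys < p
    dual_pivot_quicksort(a, lt + 1, gt - 1)   # keys between p and q
    dual_pivot_quicksort(a, gt + 1, hi)       # keys > q
    return a

print(dual_pivot_quicksort([3, 1, 4, 1, 5, 9, 2, 6]))
```

Each partitioning pass splits the range into three parts around the two pivots, which is precisely the structure whose comparison and swap counts the average-case analysis has to account for.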