
Asynchronous concurrency is a widespread way of writing programs that
deal with many short tasks. It is the programming model behind
event-driven concurrency, as exemplified by GUI applications, where
tasks correspond to event handlers, by JavaScript-based web
applications, and by the implementation of web browsers, server-side
software, and operating systems.
This model is widely used because it provides the performance benefits of
concurrency together with easier programming than multi-threading. While
there is ample work on how to implement asynchronous programs, and
significant work on testing and model checking, little research has been
done on handling asynchronous programs that involve heap manipulation, or
on how to automatically optimize code for asynchronous concurrency.
This thesis addresses the question of how we can reason about asynchronous
programs while considering the heap, and how to use this reasoning to
optimize programs. The work is organized along three main questions: (i) How
can we reason about asynchronous programs without ignoring the heap? (ii) How
can we use such reasoning techniques to optimize programs involving
asynchronous behavior? (iii) How can we transfer these reasoning and
optimization techniques to other settings?
The unifying idea behind all the results in the thesis is the use of an
appropriate model encompassing global state and a promise-based model of
asynchronous concurrency. For the first question, we start from refinement
type systems for sequential programs and extend them to perform precise
resource-based reasoning in terms of heap contents, known outstanding
tasks, and promises. This extended type system is known as Asynchronous
Liquid Separation Types, or ALST for short. We implement ALST for OCaml
programs using the Lwt library.
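The promise-based model of asynchronous tasks and heap effects that ALST reasons about can be illustrated by analogy. The sketch below uses Python's asyncio rather than OCaml/Lwt, the thesis's actual setting; all names are illustrative:

```python
import asyncio

# Analogy for promise-based asynchronous concurrency: a shared heap cell
# is mutated by a posted task, and a promise (here: asyncio.Task) is
# awaited for the result. A type system like ALST tracks both the heap
# effect and the outstanding task.

async def fill(cell, value):
    # Simulates an outstanding task that writes to the heap.
    await asyncio.sleep(0)      # yield to the scheduler
    cell["data"] = value        # heap effect to be tracked
    return value

async def main():
    cell = {}                                      # shared heap state
    promise = asyncio.create_task(fill(cell, 42))  # post task, obtain promise
    result = await promise                         # resolve the promise
    return cell["data"], result

print(asyncio.run(main()))  # -> (42, 42)
```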
For the second question, we consider a family of possible program
optimizations, described by a set of rewriting rules, the DWFM rules. The
rewriting rules are type-driven: We only guarantee soundness for programs
that are well-typed under ALST. We give a soundness proof based on a
semantic interpretation of ALST that allows us to show behavior inclusion
of pairs of programs.
For the third question, we address an optimization problem from industrial
practice: Normally, JavaScript files that are referenced in an HTML file
are loaded synchronously, i.e., when a script tag is encountered, the
browser must suspend parsing, then load and execute the script, and only
afterwards continue parsing the HTML. But in practice, there are numerous
JavaScript files for which asynchronous loading would be perfectly sound.
First, we sketch a hypothetical optimization using the DWFM rules and a
static analysis.
To actually implement the analysis, we modify the approach to use a
dynamic analysis. This analysis, known as JSDefer, enables us to analyze
real-world web pages, and provide experimental evidence for the efficiency
of this transformation.
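The effect of such a transformation can be sketched concretely: a script tag carrying the standard HTML `defer` attribute is fetched without blocking the parser. The toy rewriter below is only a stand-in for JSDefer, which decides deferability by dynamic analysis; the tag-matching regex and all names are illustrative:

```python
import re

def defer_scripts(html, deferrable):
    # Adds the 'defer' attribute to external <script> tags whose src is
    # in 'deferrable', so the browser can keep parsing HTML while the
    # script loads. A toy stand-in, not the JSDefer implementation.
    def rewrite(match):
        tag = match.group(0)
        src = match.group(1)
        if src in deferrable and "defer" not in tag:
            return tag[:-1] + " defer>"
        return tag
    return re.sub(r'<script\s+src="([^"]+)"\s*>', rewrite, html)

page = '<head><script src="analytics.js"></script></head>'
print(defer_scripts(page, {"analytics.js"}))
# -> <head><script src="analytics.js" defer></script></head>
```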

The design of the fifth generation (5G) cellular network must take into account emerging services with divergent quality-of-service requirements. For instance, vehicle-to-everything (V2X) communication is required to facilitate local data exchange and therefore improve the automation level in automated driving applications. In this work, we inspect the performance of two different air interfaces (i.e., LTE-Uu and PC5) which are proposed by the third generation partnership project (3GPP) to enable V2X communication. With these two air interfaces, V2X communication can be realized by transmitting data packets either over the network infrastructure or directly among traffic participants. In addition, the ultra-high reliability requirement in some V2X communication scenarios cannot be fulfilled with any single transmission technology (i.e., either LTE-Uu or PC5). Therefore, we discuss how to efficiently apply multiple radio access technologies (multi-RAT) to improve the communication reliability. In order to exploit multi-RAT in an efficient manner, both independent and coordinated transmission schemes are designed and inspected. Subsequently, the conventional uplink is also extended to the case where a base station can receive data packets through both the LTE-Uu and PC5 interfaces. Moreover, different multicast-broadcast single-frequency network (MBSFN) area mapping approaches are proposed to improve the communication reliability in the LTE downlink. Last but not least, a system-level simulator is implemented in this work. The simulation results not only provide insights into the performance of the different technologies but also validate the effectiveness of the proposed multi-RAT scheme.
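To see why combining the two interfaces helps, consider a simplified calculation: if the same packet is sent independently over LTE-Uu and PC5, delivery fails only when both copies are lost. The sketch below assumes independent losses, an idealization that the coordinated schemes in the work deliberately go beyond:

```python
def combined_reliability(p_uu, p_pc5):
    # Packet delivery probability when the same packet is transmitted
    # independently over both air interfaces: delivery fails only if
    # both copies are lost. Assumes independent losses (illustrative).
    return 1 - (1 - p_uu) * (1 - p_pc5)

# Two 99%-reliable links combine to roughly "four nines".
print(round(combined_reliability(0.99, 0.99), 6))  # -> 0.9999
```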

Autonomous driving is disrupting conventional automotive development. In fact, autonomous driving kicks off the consolidation of control units, i.e., the transition from distributed Electronic Control Units (ECUs) to centralized domain controllers. Platforms like Audi’s zFAS demonstrate this very clearly: GPUs, custom SoCs, microcontrollers, and FPGAs are integrated on a single domain controller in order to perform sensor fusion, processing, and decision making on a single Printed Circuit Board (PCB). The communication between these heterogeneous components and the algorithms for Advanced Driver Assistance Systems (ADAS) themselves require a huge amount of memory bandwidth, which will bring the Memory Wall from High Performance Computing (HPC) and data centers directly into our cars. In this paper we highlight the roles and issues of Dynamic Random Access Memories (DRAMs) for future autonomous driving architectures.

The authors explore the intrinsic trade-off in a DRAM between the power consumption (due to refresh) and the reliability. Their unique measurement platform allows tailoring to the design constraints depending on whether power consumption, performance or reliability has the highest design priority. Furthermore, the authors show how this measurement platform can be used for reverse engineering the internal structure of DRAMs and how this knowledge can be used to improve DRAM’s reliability.

Optical Character Recognition (OCR) systems play an important role in the digitization of data acquired as images from a variety of sources. Although the area is very well explored for Latin languages, some of the languages based on the Arabic cursive script are not yet explored. This is due to many factors, most importantly the unavailability of proper datasets and the complexities posed by cursive scripts. The Pashto language is one such language that needs considerable exploration towards OCR. In order to develop such an OCR system, this thesis provides a pioneering study that explores deep learning for the Pashto language in the field of OCR.
The Pashto language is spoken by more than 50 million people across the world, and it is an active medium for both oral and written communication. It is associated with a rich literary heritage and a huge written collection. These written materials present contents of simple to complex nature, and layouts from hand-scribed to printed text. The Pashto language presents mainly two types of complexities: (i) generic complexities of cursive scripts and (ii) complexities specific to Pashto. The generic complexities are cursiveness, context dependency, and breaker character anomalies, as well as space anomalies. The Pashto-specific complexities are variations in shape for a single character and shape similarity for some of the additional Pashto characters. Existing research in the area of Arabic OCR did not lead to an end-to-end solution for the mentioned complexities and therefore could not be generalized to build a sophisticated OCR system for Pashto.
The contribution of this thesis spans three levels: the conceptual level, the data level, and the practical level. At the conceptual level, we have deeply explored the Pashto language and identified those characters which are responsible for the challenges mentioned above. At the data level, a comprehensive dataset is introduced containing real images of hand-scribed contents. The dataset is manually transcribed and covers the most frequent layout patterns associated with the Pashto language. The practical-level contribution provides a bridge, in the form of a complete Pashto OCR system, and connects the outcomes of the conceptual- and data-level contributions. The practical contribution comprises skew detection, text-line segmentation, feature extraction, classification, and post-processing. The OCR module is further strengthened by using the deep learning paradigm to recognize Pashto cursive script within the framework of Recurrent Neural Networks (RNNs). The proposed Pashto text recognition is based on a Long Short-Term Memory (LSTM) network and realizes a character recognition rate of 90.78% on real hand-scribed Pashto images. All these contributions are integrated into an application to provide a flexible and generic end-to-end Pashto OCR system.
The impact of this thesis is not only specific to the Pashto language; it is also beneficial to other cursive languages like Arabic, Urdu, and Persian. The main reason is the Pashto character set, which is a superset of the Arabic, Persian, and Urdu character sets. Therefore, the conceptual contribution of this thesis provides insight and proposes solutions to almost all generic complexities associated with the Arabic, Persian, and Urdu languages. For example, the anomaly caused by breaker characters, which is shared among roughly 70 languages that mainly use the Arabic script, is deeply analyzed. This thesis presents a solution to this issue that is equally beneficial to almost all Arabic-like languages.
The scope of this thesis has two important aspects. First, a social impact, i.e., how a society may benefit from it. The main advantages are to bring historical and almost vanished documents to life and to ensure opportunities to explore, analyze, translate, share, and understand the contents of the Pashto language globally. Second, the advancement and exploration of the technical aspects, because this thesis empirically explores the recognition challenges that are solely related to the Pashto language, both regarding the character set and the materials that present such complexities. Furthermore, the conceptual and practical background of this thesis regarding the complexities of the Pashto language is very beneficial for OCR of other cursive languages.
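A character recognition rate such as the 90.78% reported above is conventionally computed as one minus the character error rate, which is based on the edit distance between the recognized text and the ground-truth transcription. A minimal sketch of that metric (illustrative only, not the thesis's evaluation code):

```python
def levenshtein(ref, hyp):
    # Minimum number of insertions, deletions, and substitutions
    # needed to turn 'hyp' into 'ref' (dynamic programming, two rows).
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1]

def recognition_rate(ref, hyp):
    # Character recognition rate = 1 - character error rate.
    return 1 - levenshtein(ref, hyp) / len(ref)

# One substituted character out of six:
print(round(recognition_rate("pashto", "pashtu"), 4))  # -> 0.8333
```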

N-containing heterocycles have received strong attention in the field of organic synthesis because of their importance for the pharmaceutical and material sciences. Nitrogen plays an important role in compounds ranging from inorganic salts to biomolecules, and the search for convenient methods to form C-N bonds has become a hot topic in recent decades.
Since the beginning of the 20th century, transition-metal-catalyzed coupling reactions have become well known and widely used in organic research, achieving abundant and significant progress. On the other hand, the less toxic but more challenging transition-metal-free coupling methods retain further potential.
With the evolution of amination reactions and oxidants, increasingly effective, simplified, and atom-economic organic synthesis methods are emerging. These developments motivated the investigation of novel cross-dehydrogenative-coupling (CDC) amination methods as the topic of this PhD research.
Thus, we selected phenothiazine derivatives as the N-nucleophile reagents and phenols as the C-nucleophile reagents. To achieve the transition-metal-free CDC amination of phenols with phenothiazines, we screened the chemical toolbox and tested a series of both common and uncommon oxidants.
First, we established reaction conditions in the presence of cumene and O2. The proposed mechanism is initiated by a Hock process, which forms peroxo species in situ as the initiator of the reaction. Initial infrared analysis indicated a strong O-H···N interaction.
In the second method, a series of iodine reagents of different valence states was tested to achieve the C-N bond formation of phenols with phenothiazines. This time, a simplified and more efficient method was developed, which also provides a wider scope of phenols. Several control experiments were conducted to probe the plausible pathway. Large-scale synthesis of the target molecule was also successfully performed.
Finally, we focused the research on the cross-coupling of pre-oxidized (iminated) phenothiazines with ubiquitous phenols and indoles. In this task, we first regioselectively synthesized novel iminated phenothiazine derivatives with the traditional biocide and mild disinfectant Chloramine-T. The phenothiazinimine then underwent an ultra-simple condensation with phenol or indole coupling partners under simplified conditions. Parallel reactions were also performed to investigate the plausible pathway.

Nowadays, the increasing demand for ever more customizable products has emphasized the need for more flexible and fast-changing manufacturing systems. In this environment, simulation has become a strategic tool for the design, development, and implementation of such systems. Simulation represents a relatively low-cost and risk-free alternative for testing the impact and effectiveness of changes in different aspects of manufacturing systems.
Systems that deal with this kind of data for use in decision-making processes are known as Simulation-Based Decision Support Systems (SB-DSS). Although most SB-DSS provide a powerful variety of tools for the automatic and semi-automatic analysis of simulations, visual and interactive alternatives for the manual exploration of the results are still open to further development.
The work in this dissertation is focused on enhancing decision makers’ analysis capabilities by making simulation data more accessible through the incorporation of visualization and analysis techniques. To demonstrate how this goal can be achieved, two systems were developed. The first system, viPhos – standing for visualization of Phos: Greek for light –, is a system that supports lighting design in factory layout planning. viPhos combines simulation, analysis, and visualization tools and techniques to facilitate the global and local (overall factory or single workstations, respectively) interactive exploration and comparison of lighting design alternatives.
The second system, STRAD - standing for Spatio-Temporal Radar -, is a web-based system that supports the spatio- and attribute-temporal analysis of event data. Since decision-making processes in manufacturing also involve monitoring the systems over time, STRAD enables the multilevel exploration of event data (e.g., simulated or historical registers of the status of machines or results of quality control processes).
A set of four case studies and one proof of concept prepared for both systems demonstrate the suitability of the visualization and analysis strategies adopted for supporting decision-making processes in diverse application domains. The results of these case studies indicate that both the systems and the techniques they include can be generalized and extended to support the analysis of different tasks and scenarios.

The scientific and industrial interest devoted to polymer/layered silicate
nanocomposites due to their outstanding properties and novel applications resulted
in numerous studies in the last decade. They cover mostly thermoplastic- and
thermoset-based systems. Recently, studies on rubber/layered silicate
nanocomposites were started as well. It has been shown how complex the
nanocomposite formation can be for the related systems. Therefore, the rules
governing their structure-property relationships have to be clarified. In this
thesis, the related aspects were addressed.
For the investigations, several ethylene propylene diene rubbers (EPDM) of polar and
non-polar origin were selected, as well as the more polar hydrogenated acrylonitrile
butadiene rubber (HNBR). Polarity was found to be beneficial to
nanocomposite formation, as it assisted the intercalation of the polymer chains
within the clay galleries. This favored the development of exfoliated structures.
By finding an appropriate processing procedure, i.e., compounding in a kneader instead
of on an open mill, the mechanical performance of the nanocomposites was
significantly improved. The complexity of nanocomposite formation in
rubber/organoclay systems was demonstrated. The observed deintercalation of the
organoclay was traced to the vulcanization system used. It was evidenced
indirectly that during sulfur curing, the primary amine clay intercalant leaves the
silicate surface and migrates into the rubber matrix. This was explained by its
participation in the sulfur-rich Zn complexes created. Thus, by using quaternary
amine clay intercalants (as presented for EPDM or HNBR compounds), the
deintercalation was eliminated. The organoclay intercalation/deintercalation detected
for the primary amine clay intercalants was controlled by means of peroxide curing
(as presented for HNBR compounds), where the vulcanization mechanism
differs from that of sulfur curing.
The current analysis showed that by selecting the appropriate organoclay type the
properties of the nanocomposites can be tailored. This occurs via generating different
nanostructures (i.e. exfoliated, intercalated or deintercalated). In all cases, the
rubber/organoclay nanocomposites exhibited better performance than vulcanizates
with traditional fillers, like silica or unmodified (pristine) layered silicates. The mechanical and gas permeation behavior of the respective nanocomposites
were modelled. It was shown that models (e.g. Guth’s or Nielsen’s equations)
developed for “traditional” vulcanizates can be used when specific aspects are taken
into consideration. These involve characteristics related to the platy structure of the
silicates, i.e. their aspect ratio after compounding (appearance of platelet stacks), or
their orientation in the rubber matrix (order parameter).
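The two models mentioned above can be sketched briefly. The forms below are the common textbook versions of Guth's modulus equation (with a shape factor for anisometric fillers) and Nielsen's tortuosity-based permeability model; the exact variants and parameter values used in the thesis may differ:

```python
def guth_modulus(E0, phi, f):
    # Guth's equation for anisometric fillers: stiffening of a rubber
    # matrix of modulus E0 by a volume fraction phi of filler with
    # shape (aspect) factor f, e.g. platelet stacks after compounding.
    return E0 * (1 + 0.67 * f * phi + 1.62 * (f * phi) ** 2)

def nielsen_permeability(P0, phi, alpha):
    # Nielsen's model: permeability reduction caused by the tortuous
    # diffusion path around plates of aspect ratio alpha at volume
    # fraction phi, oriented normal to the diffusion direction.
    return P0 * (1 - phi) / (1 + (alpha / 2) * phi)

# In this model, 3 vol.% of well-exfoliated platelets (aspect ratio 100)
# cut gas permeability to under 40 % of the unfilled value.
print(round(nielsen_permeability(1.0, 0.03, 100), 3))  # -> 0.388
print(round(guth_modulus(1.0, 0.03, 10), 4))           # -> 1.3468
```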

Benzene is a natural constituent of crude oil and a product of incomplete combustion of
petrol, and it was classified as “carcinogenic to humans” by the IARC in 1982 (IARC 1982).
(E,E)-Muconaldehyde has been postulated to be a microsomal metabolite of benzene in vitro
(Latriano et al. 1986). (E,E)-Muconaldehyde is hematotoxic in vivo, and its role in the
hematotoxicity of benzene is unclear (Witz et al. 1985).
We intended to ascertain the presence of (E,E)-muconaldehyde in vivo by detection of a
protein conjugate deriving from (E,E)-muconaldehyde.
Therefore, we improved the current synthetic access to (E,E)-muconaldehyde.
(E,E)-Muconaldehyde was synthesized in three steps starting from (E,E)-muconic acid in an
overall yield of 60 %.
Reaction of (E,E)-muconaldehyde with bovine serum albumin resulted in the formation of a
conjugate which, upon addition of NaBH4, was converted to a new species whose
HPLC retention time, UV spectra, Q1 mass, and MS2 spectra matched those of the crude
reaction product from the one-pot conversion of Ac-Lys-OMe with (E,E)-muconaldehyde in
the presence of NaBH4 and subsequent cleavage of the protecting groups.
Synthetic access to the presumed structure
(S)-2-ammonio-6-(((E,E)-6-oxohexa-2,4-dien-1-yl)amino)hexanoate (Lys(MUC-CHO)) was
provided in eleven steps starting from (E,E)-muconic acid and Lys(Z)-OtBu*HCl in 2 %
overall yield. Additionally, synthetic access to
(S)-2-ammonio-6-(((E,E)-6-hydroxyhexa-2,4-dien-1-yl)amino)hexanoate (Lys(MUC-OH))
and (S)-2-ammonio-6-((6-hydroxyhexyl)amino)hexanoate (IS) was provided.
With synthetic reference material at hand, the presumed structure Lys(MUC-OH) could be
identified from incubations of (E,E)-muconaldehyde with bovine serum albumin via
HPLC-ESI+-MS/MS.
Cytotoxicity analysis of (E,E)-muconaldehyde and Lys(MUC-CHO) in human promyelocytic
NB4 cells resulted in EC50 ≈ 1 μM for (E,E)-muconaldehyde. Lys(MUC-CHO) did not show
any additional cytotoxicity up to 10 μM.
B6C3F1 mice were exposed to 0, 400 and 800 mg/kg b.w. benzene to examine the formation
of Lys(MUC-OH) in vivo. After 24 h mice were sacrificed and serum albumin was isolated.
Analysis for Lys(MUC-OH) has not been performed in this work.

Collaboration aims to increase the efficiency of problem solving and decision making by bringing diverse areas of expertise together, i.e., teams of experts from various disciplines, all necessary to come up with acceptable concepts. This dissertation is concerned with the design of highly efficient computer-supported collaborative work involving active participation of user groups with diverse expertise. Three main contributions can be highlighted: (1) the definition and design of a framework facilitating collaborative decision making; (2) the deployment and evaluation of more natural and intuitive interaction and visualization techniques in order to support multiple decision makers in virtual reality environments; and (3) the integration of novel techniques into a single proof-of-concept system.
Decision-making processes are time-consuming, typically involving several iterations over different options before a generally acceptable solution is obtained. Although collaboration is an often-applied method, the execution of collaborative sessions is often inefficient, does not involve all participants, and decisions are often finalized without the agreement of all participants. An increasing number of computer-supported cooperative work (CSCW) systems facilitate collaborative work by providing shared viewpoints and tools to solve joint tasks. However, most of these software systems are designed from a feature-oriented perspective rather than a human-centered perspective, and without considering user groups with diverse experience and joint goals instead of joint tasks. The aim of this dissertation is to bring insights to the following research question: How can computer-supported cooperative work be designed to be more efficient? This question opens up more specific questions: How can collaborative work be designed to be more efficient? How can all participants be involved in the collaboration process? And how can interaction interfaces that support collaborative work be designed to be more efficient? As such, this dissertation makes contributions in:
1. Definition and design of a framework facilitating decision making and collaborative work. Based on examinations of collaborative work and decision-making processes, requirements for a collaboration framework are collected and formulated. Subsequently, an approach to define and rate software/frameworks is introduced. This approach is used to translate the collected requirements into a software architecture design. Next, an approach to evaluate alternatives based on Multi Criteria Decision Making (MCDM) and Multi Attribute Utility Theory (MAUT) is presented. Two case studies demonstrate the usability of this approach for (1) benchmarking between systems, which evaluates the value of the desired collaboration framework, and (2) ranking a set of alternatives resulting from a decision-making process incorporating the points of view of multiple stakeholders.
2. Deployment and evaluation of natural and intuitive interaction and visualization techniques in order to support multiple diverse decision makers. A user taxonomy of industrial corporations serves to create a Petri net of users in order to identify dependencies and information flows among them. An explicit characterization and design of task models was developed to define interfaces and further components of the collaboration framework. In order to involve and support user groups with diverse experiences, smart devices and virtual reality are used within the presented collaboration framework. Natural and intuitive interaction techniques as well as advanced visualizations of user-centered views of the collaboratively processed data are developed in order to support and increase the efficiency of decision-making processes. The smartwatch, one of the latest smart device technologies, offers new possibilities for interaction techniques. A multi-modal interaction interface is provided, realized with smartwatch and smartphone in fully immersive environments, including touch input, in-air gestures, and speech.
3. Integration of novel techniques into a single proof-of-concept system. Finally, all findings and designed components are combined into the new collaboration framework called IN2CO, for distributed or co-located participants to efficiently collaborate using diverse mobile devices. In a prototypical implementation, all described components are integrated and evaluated. Examples where next-generation network-enabled collaborative environments, connected by visual and mobile interaction devices, can have significant impact are: design and simulation of automobiles and aircraft; urban planning and simulation of urban infrastructure; or the design of complex and large buildings, including efficiency- and cost-optimized manufacturing buildings as a task in factory planning. To demonstrate the functionality and usability of the framework, case studies referring to factory planning are presented. Considering that factory planning is a process that involves the interaction of multiple aspects as well as the participation of experts from different domains (i.e., mechanical engineering, electrical engineering, computer engineering, ergonomics, material science, and more), this application is suitable to demonstrate the utilization and usability of the collaboration framework. The various software modules and the integrated system resulting from the research will all be subjected to evaluations. Thus, collaborative decision making for co-located and distributed participants is enhanced by the use of natural and intuitive multi-modal interaction interfaces and techniques.
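The MAUT-based ranking described in contribution 1 can be sketched with the standard additive utility model; the alternatives, criteria, and weights below are hypothetical, chosen only to show the mechanics:

```python
def maut_score(utilities, weights):
    # Additive Multi Attribute Utility Theory: the overall utility of an
    # alternative is the weighted sum of its per-criterion utilities
    # (utilities normalized to [0, 1], weights summing to 1).
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(u * w for u, w in zip(utilities, weights))

# Hypothetical alternatives scored on cost, usability, and flexibility.
weights = [0.5, 0.3, 0.2]
alternatives = {
    "system_a": [0.9, 0.4, 0.7],   # strong on cost
    "system_b": [0.6, 0.8, 0.8],   # strong on usability/flexibility
}
ranking = sorted(alternatives,
                 key=lambda a: maut_score(alternatives[a], weights),
                 reverse=True)
print(ranking)  # -> ['system_a', 'system_b']
```

Changing the weights shifts the ranking, which is exactly how such a model lets multiple stakeholders express different priorities over the same set of alternatives.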

Due to their superior weight-specific mechanical properties, carbon fibre epoxy composites (CFRP) are commonly used in aviation industry. However, their brittle failure behaviour limits the structural integrity and damage tolerance in case of impact (e.g. tool drop, bird strike, hail strike, ramp collision) or crash events. To ensure sufficient robustness, a minimum skin thickness is therefore prescribed for the fuselage, partially exceeding typical service load requirements from ground or flight manoeuvre load cases. A minimum skin thickness is also required for lightning strike protection purposes and to enable state-of-the-art bolted repair technology. Furthermore, the electrical conductivity of CFRP aircraft structures is insufficient for certain applications; additional metal components are necessary to provide electrical functionality (e.g. metal meshes on the outer skin for lightning strike protection, wires for electrical bonding and grounding, overbraiding of cables to provide electromagnetic shielding). The corresponding penalty weights compromise the lightweight potential that is actually given by the structural performance of CFRP over aluminium alloys.
Former research attempts tried to overcome these deficits by modifying the resin system (e.g. by addition of conductive particles or toughening agents) but could not prove sufficient enhancements. A novel holistic approach is the incorporation of highly conductive and ductile continuous metal fibres into CFRP. The basic idea of this hybrid material concept is to take advantage of both the electrical and mechanical capabilities of the integrated metal fibres in order to simultaneously improve the electrical conductivity and the damage tolerance of the composite. The increased density of the hybrid material is over-compensated by omitting the need for additional electrical system installation items and by the enhanced structural performance, enabling a reduction of the prescribed minimum skin thickness. Advantages over state-of-the-art fibre metal laminates mainly arise from design and processing technology aspects.
In this context, the present work focuses on analysing and optimising the structural and electrical performance of such hybrid composites with metal fibre shares of up to 20 vol.%. Bundles of soft-annealed austenitic steel or copper-clad low-carbon steel fibres with filament diameters of 60 or 63 µm are considered. The fibre bundles are distinguished by high elongation at break (32 %) and ultimate tensile strength (900 MPa) or high electrical conductivity (2.4 × 10^7 S/m). Comprehensive research is carried out on the fibre bundles as well as on unidirectional and multiaxial laminates. Hybrid composites with both homogeneous and accumulated steel fibre arrangements are taken into account. Electrical in-plane conductivity, plain tensile behaviour, suitability for bolted joints, as well as impact and perforation performance of the composite are analysed. Additionally, a novel non-destructive testing method based on measurement of deformation-induced phase transformation of the metastable austenitic steel fibres is discussed.
The outcome of the conductivity measurements verifies a correlation of the volume conductivity of the composite with the volume share and the specific electrical resistance of the incorporated metal fibres. Compared to conventional CFRP, the electrical conductivity in parallel to the fibre orientation can be increased by one to two orders of magnitude even for minor percentages of steel fibres. The analysis, however, also discloses the challenge of establishing a sufficient connection to the hybrid composite in order to entirely exploit its electrical conductivity.
In case of plain tensile load, the performance of the hybrid composite is essentially affected by the steel fibre-resin-adhesion as well as the laminate structure. Uniaxial hybrid laminates show brittle, singular failure behaviour. Exhaustive yielding of the embedded steel fibres is confined to the arising fracture gap. The high transverse stiffness of the isotropic metal fibres additionally intensifies strain magnification within the resin under transverse tensile load. This promotes (intralaminar) inter-fibre-failure at minor composite deformation. By contrast, multiaxial hybrid laminates exhibit distinctive damage evolution. After failure initiation, the steel fibres extensively yield and sustain the load-carrying capacity of angularly (e.g. ±45°) aligned CFRP plies. The overall material response is thus not only a simple superimposition but a complex interaction of the mechanical behaviour of the composite’s constituents. As a result of this post-damage performance, an ultimate elongation of over 11 % can be proven for the hybrid laminates analysed in this work. In this context, the influence of the steel fibre-resin adhesion on the failure behaviour of the hybrid composite is explicated by means of an analytical model. Long term exposure to corrosive media has no detrimental effect on the mechanical performance of stainless steel fibre reinforced composites. By trend, water uptake increases the maximum elongation at break of the hybrid laminate.
Moreover, the suitability of CFRP for bolted joints can partially be improved by the integration of steel fibres. While the bearing strength basically remains nearly unaffected, the bypass failure behaviour (ε_{max}: +363 %) as well as the head pull-through resistance (E_{a,BPT}: +81 %) can be enhanced. The improvements primarily concern the load-carrying capacity after failure initiation. Additionally, the integrated ductile steel fibres significantly increase the energy absorption capacity of the laminate in case of progressive bearing failure by up to 63 %.
However, the hybrid composite exhibits a sensitive low velocity/low mass impact behaviour. Compared to conventional CFRP, the damage threshold load of very thin hybrid laminates is lower, making them prone to delamination at minor, non-critical impact energies. At higher energy levels, however, the impact-induced delamination spreads less since most of the impact energy is absorbed by yielding of the ductile metal fibres instead of crack propagation. This structural advantage compared to CFRP gains in importance with increasing impact energy. The plastic deformation of the metastable austenitic steel fibres is accompanied by a phase transformation from paramagnetic γ-austenite to ferromagnetic α'-martensite. This change of the magnetic behaviour can be used to detect and evaluate impacts on the surface of the hybrid composite, which provides a simple non-destructive testing method. In case of low velocity/high mass impact, the integration of ductile metal fibres into CFRP enables large areas of the laminate to be engaged for energy absorption purposes. As a consequence, the perforation resistance of the hybrid composite is significantly enhanced; by addition of approximately 20 vol.% of stainless steel fibres, the perforation strength can be increased by 61 %, while the maximum energy absorption capacity rises by 194 %.

Due to the steadily growing flood of data, the appropriate use of visualizations for efficient data analysis is more important today than ever before. In many application domains, the data flood stems from processes that can be represented by node-link diagrams. Within such a diagram, nodes may represent intermediate results (or products), system states (or snapshots), milestones or real (and possibly georeferenced) objects, while links (edges) can embody transition conditions, transformation processes or real physical connections. Inspired by the engineering sciences application domain and the research project “SinOptiKom: Cross-sectoral optimization of transformation processes in municipal infrastructures in rural areas”, a platform for the analysis of transformation processes has been researched and developed based on a geographic information system (GIS). Owing to the increasing amount of available and relevant data, a particular challenge is the simultaneous visualization of several attributes within one single diagram instead of using multiple ones. Therefore, two approaches have been developed which utilize the available space between nodes in a diagram to display additional information.
Motivated by the necessity of appropriate result communication with various stakeholders, a concept for a universal, dashboard-based analysis platform has been developed. This web-based approach is conceptually capable of displaying data from various data sources and has been supplemented by collaboration possibilities such as sharing, annotating and presenting features.
In order to demonstrate the applicability and usability of newly developed applications, visualizations or user interfaces, extensive evaluations with human users are often indispensable. To reduce the complexity and the effort of conducting an evaluation, the browser-based evaluation framework (BREF) has been designed and implemented. Through its universal and flexible character, virtually any visualization or interaction running in the browser can be evaluated with BREF, without any additional application (except for a modern web browser) on the target device. BREF has already proved itself in a wide range of application areas during its development and has since grown into a comprehensive evaluation tool.

In the present master’s thesis we investigate the connection between derivations and
homogeneities of complete analytic algebras. We prove a theorem describing a specific set of generators
for the module of those derivations of an analytic algebra \(R\) which map the maximal ideal of \(R\) into itself. It turns out that this set has a structure similar to a Cartan subalgebra and contains
information regarding multi-homogeneity. In order to prove
this theorem, we extend the notion of grading by Scheja and Wiebe to projective systems and state the connection between multi-gradings and pairwise
commuting diagonalizable derivations. We prove a theorem similar to Cartan’s Conjugacy Theorem in the setup of infinite-dimensional Lie algebras, which arise as projective limits of finite-dimensional Lie algebras. Using this result, we can show that the structure of the aforementioned set of generators is an intrinsic property of the analytic algebra. At the end we state an algorithm that can, in principle, compute the maximal multi-homogeneity of a complete analytic algebra.
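The link between gradings and commuting diagonalizable derivations invoked above is the classical Euler derivation (a standard fact, stated here only for orientation): for a grading with \(\deg x_i = d_i\), the derivation

```latex
\delta \,=\, \sum_i d_i \, x_i \, \frac{\partial}{\partial x_i},
\qquad
\delta(f) \,=\, \deg(f)\cdot f \quad \text{for homogeneous } f,
```

is diagonalizable with the homogeneous components as eigenspaces; a multi-grading correspondingly yields a family of pairwise commuting derivations of this type.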

A fast numerical method for an advanced electro-chemo-mechanical model is developed which is able to capture phase separation processes in porous materials. This method is applied to simulate lithium-ion battery cells, where the complex microstructure of the electrodes is fully resolved. The intercalation of ions into the popular cathode material LFP leads to a separation into lithium-rich and lithium-poor phases. The large concentration gradients result in high mechanical stresses. A phase-field method applying the Cahn-Hilliard equation is used to describe the diffusion. For the sake of simplicity, the linear elastic case is considered. Numerical tests for fully resolved three-dimensional granular microstructures are discussed in detail.
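The diffusion model referred to above is, in its standard form, the Cahn-Hilliard equation (written here in generic textbook notation, which need not match the thesis):

```latex
\frac{\partial c}{\partial t} \,=\, \nabla \cdot \bigl( M \, \nabla \mu \bigr),
\qquad
\mu \,=\, \frac{\partial f(c)}{\partial c} \,-\, \kappa \, \Delta c,
```

where \(c\) is the lithium concentration, \(M\) the mobility, \(f\) a double-well free-energy density whose two minima correspond to the lithium-rich and lithium-poor phases, and \(\kappa\) penalizes sharp interfaces.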

In this thesis we integrate discrete dividends into the stock model, estimate
future outstanding dividend payments and solve different portfolio optimization
problems. To this end, we discuss three well-known stock models incorporating
discrete dividend payments and develop a model which also takes early
dividend announcements into account.
In order to estimate the future outstanding dividend payments, we develop a
general estimation framework. First, we investigate a model-free, no-arbitrage
methodology, which is based on the put-call parity for European options. Our
approach integrates all available option market data and simultaneously calculates
the market-implied discount curve. We illustrate our method using stocks
of European blue-chip companies and show within a statistical assessment that
the estimate performs well in practice.
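The put-call parity relation underlying this model-free approach can be sketched in a few lines; the function and the quotes below are purely illustrative (the thesis additionally aggregates all available option market data and simultaneously infers the market-implied discount curve):

```python
import math

def implied_dividend_pv(spot, strike, call, put, r, maturity):
    """Present value of dividends implied by European put-call parity:
    C - P = S - PV(D) - K*exp(-r*T)  =>  PV(D) = S - K*exp(-r*T) - (C - P)."""
    return spot - strike * math.exp(-r * maturity) - (call - put)

# Hypothetical quotes, for illustration only
pv_div = implied_dividend_pv(spot=100.0, strike=100.0,
                             call=6.50, put=7.80, r=0.01, maturity=1.0)
```

In practice one such relation holds per strike and maturity, so the estimate can be stabilized by combining many strikes.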
As American options are more common, we additionally develop a methodology
based on market prices of American at-the-money options.
This method relies on a linear combination of no-arbitrage bounds of the dividends,
where the corresponding optimal weight is determined via a historical
least squares estimation using realized dividends. We demonstrate our method
using all Dow Jones Industrial Average constituents and provide a robustness
check with respect to the used discount factor. Furthermore, we backtest our
results against the method using European options and against a so-called
simple estimate.
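The historical least-squares weighting of the no-arbitrage bounds described above admits a closed form; the sketch below uses made-up bound and dividend values that are not taken from the thesis:

```python
import numpy as np

def optimal_weight(lower, upper, realized):
    """Weight w minimizing sum((D - (w*L + (1-w)*U))^2) over past periods;
    the first-order condition gives w = sum((U-D)*(U-L)) / sum((U-L)^2)."""
    L, U, D = (np.asarray(a, dtype=float) for a in (lower, upper, realized))
    return float(((U - D) * (U - L)).sum() / ((U - L) ** 2).sum())

# Hypothetical dividend bounds and realized dividends
w = optimal_weight(lower=[0.8, 0.9, 1.0], upper=[1.2, 1.3, 1.5],
                   realized=[1.0, 1.1, 1.2])
estimate = w * 1.05 + (1.0 - w) * 1.40   # applied to current bounds
```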
In the last part of the thesis we solve the terminal wealth portfolio optimization
problem for a dividend paying stock. In the case of the logarithmic utility
function, we show that the optimal strategy is no longer a constant but is
connected to the Merton strategy. Additionally, we solve a special optimal
consumption problem, where the investor is only allowed to consume dividends.
We show that this problem can be reduced to the previously solved terminal wealth
problem.
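For orientation, the classical Merton strategy referred to above prescribes, for logarithmic utility in a dividend-free Black-Scholes market, the constant fraction of wealth

```latex
\pi^{*} \,=\, \frac{\mu - r}{\sigma^{2}}
```

invested in the stock, where \(\mu\) is the drift, \(r\) the riskless rate and \(\sigma\) the volatility (a textbook result, not the thesis's dividend-adjusted strategy).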

In this thesis, we deal with the finite group of Lie type \(F_4(2^n)\). The aim is to find information on the \(l\)-decomposition numbers of \(F_4(2^n)\) on unipotent blocks for \(l\neq2\) and \(n\in \mathbb{N}\) arbitrary and on the irreducible characters of the Sylow \(2\)-subgroup of \(F_4(2^n)\).
S. M. Goodwin, T. Le, K. Magaard and A. Paolini have found a parametrization of the irreducible characters of the unipotent subgroup \(U\) of \(F_4(p^n)\), \(p\) a prime, which is a Sylow \(p\)-subgroup of \(F_4(p^n)\), for the case \(p\neq2\).
We managed to adapt their methods for the parametrization of the irreducible characters of the Sylow \(2\)-subgroup for the case \(p=2\) for the group \(F_4(q)\), \(q=p^n\). This gives a nearly complete parametrization of the irreducible characters of the unipotent subgroup \(U\) of \(F_4(q)\), namely of all irreducible characters of \(U\) arising from so-called abelian cores.
The general strategy we have applied to obtain information about the \(l\)-decomposition numbers on unipotent blocks is to induce characters of the unipotent subgroup \(U\) of \(F_4(q)\) and Harish-Chandra induce projective characters of proper Levi subgroups of \(F_4(q)\) to obtain projective characters of \(F_4(q)\). Via Brauer reciprocity, the multiplicities of the ordinary irreducible unipotent characters in these projective characters give us information on the \(l\)-decomposition numbers of the unipotent characters of \(F_4(q)\).
However, the projective characters of \(F_4(q)\) we obtained were not sufficient to determine the entire decomposition matrix.

Arctic, Antarctic and alpine biological soil crusts (BSCs) are formed by adhesion of soil particles to exopolysaccharides (EPSs) excreted by cyanobacterial and green algal communities, the pioneers and main primary producers in these habitats. These BSCs provide and influence many ecosystem services such as soil erodibility, soil formation and nitrogen (N) and carbon (C) cycles. In cold environments degradation rates are low and BSCs continuously increase soil organic C; therefore, these soils are considered to be CO2 sinks. This work provides a novel, nondestructive and highly comparable method to investigate intact BSCs with a focus on cyanobacteria and green algae and their contribution to soil organic C. A new terminology arose, based on confocal laser scanning microscopy (CLSM) 2-D biomaps, dividing BSCs into a photosynthetic active layer (PAL) made of active photoautotrophic organisms and a photosynthetic inactive layer (PIL) harbouring remnants of cyanobacteria and green algae glued together by their remaining EPSs. By the application of CLSM image analysis (CLSM–IA) to 3-D biomaps, C coming from photosynthetically active organisms could be visualized as depth profiles with C peaks at 0.5 to 2 mm depth. Additionally, the CO2 sink character of these cold soil habitats dominated by BSCs could be highlighted, demonstrating that the first cubic centimetre of soil consists of between 7 and 17 % total organic carbon, identified by loss on ignition.

European economic, social and territorial cohesion is one of the fundamental aims of the European Union (EU). It seeks to both reduce the effects of internal borders and enhance European integration. In order to facilitate territorial cohesion, the linkage of member states by means of efficient cross-border transport infrastructures and services is an important factor. Historically, many cross-border transport challenges have existed in everyday life, hampering smooth passenger and freight flows within the EU.
Two EU policies, namely European Territorial Cooperation (ETC) and the Trans-European Transport Networks (TEN-T), promote enhancing cross-border transport through cooperation in soft spaces. This dissertation seeks to explore the influence of these two EU policies on cross-border transport and further European integration.
Based on an analysis of European, national and cross-border policy and planning documents, surveys with TEN-T Corridor Coordinators and INTERREG Secretariats, and a large number of elite interviews, the dissertation investigates how the objectives of the two EU policies were formally implemented in both soft spaces and the EU member states, as well as which practical implementations have taken place. On this basis, the initiated Europeanisation and European integration processes are evaluated. The analysis is conducted in nine preliminary case studies and two in-depth case studies. The cases comprise cross-border regions funded by the ETC policy that are crossed by a TEN-T corridor. The in-depth analysis explores the Greater Region Saar-Lor-Lux+ and the Brandenburg-Lubuskie region. The cases are characterised by different initial situations.
The research determined that the two EU policies support cross-border transport on different levels and, further, that they need to be better intertwined in order to make effective use of their complementarities. Moreover, it became clear that the EU policies have a distinct influence on domestic policy and planning documents of different administrative levels and countries as well as on the practical implementation. The final implementation of the EU objectives and the cross-border transport initiatives was strongly influenced by the member states’ initial situations – particularly, the regional and local transport needs. This dissertation concludes that the two EU policies cannot remove the entirety of the cross-border transport-related challenges. However, in addition to their financial investments in concrete projects, they promote the importance of cross-border transport and facilitate cooperation, learning and exchange processes. These are all of high relevance to cross-border transport development, driven by member states, as well as to further European integration.
The dissertation recommends that the transport planning competences of the EU beyond the TEN-T network should not be enlarged in the future; rather, further transnational transport development tasks should be decentralised to transnational transport planning committees that are aware of regional needs and can coordinate a joint transport development strategy. The latter should be implemented with the support of additional EU funds for secondary and tertiary cross-border connections. Moreover, the potential complementarities of the transnational regions and transport corridors, as well as of the two EU policy fields, should be exploited more fully by improving communication. This means that soft spaces, the TEN-T and ETC policies, as well as the domestic transport ministries and the domestic administrations that are responsible for the two EU policies, need to intensify their cooperation. Furthermore, future ETC projects are recommended to focus on topics that add value for the whole cross-border region, or that can be applied in different territorial contexts, rather than investing in small-scale, scattered, expensive infrastructures and services that benefit only a small part of the region. Additionally, the dissemination of project results should be enhanced so that the developed tools can be accessed by potential users and the benefits become more visible to a wider society, even though they might not be measurable in numbers. The research also points to another success factor for more concrete outputs: the frequent involvement of transport and spatial planners in transnational projects could strengthen the relation to planning practice. Besides that, advanced training regarding planning culture could reduce cooperation barriers.

Field-effect transistor (FET) sensors and in particular their nanoscale variant of silicon nanowire transistors are very promising technology platforms for label-free biosensor applications. These devices directly detect the intrinsic electrical charge of biomolecules at the sensor’s liquid-solid interface. The maturity of micro fabrication techniques enables very large FET sensor arrays for massive multiplex detection. However, the direct detection of charged molecules in liquids faces a significant limitation due to a charge screening effect in physiological solutions, which inhibits the realization of point-of-care applications. As an alternative, impedance spectroscopy with FET devices has the potential to enable measurements in physiological samples. Even though promising studies were published in the field, impedimetric detection with silicon FET devices is not well understood.
The first goal of this thesis was to understand the device performances and to relate the effects seen in biosensing experiments to device and biomolecule types. A model approach should help to understand the capability and limitations of the impedimetric measurement method with FET biosensors. In addition, to obtain experimental results, a high precision readout device was needed. Consequently, the second goal was to build up multi-channel, highly accurate amplifier systems that would also enable future multi-parameter handheld devices.
A PSPICE FET model for potentiometric and impedimetric detection was adapted to the experiments and further expanded to investigate the sensing mechanism, the working principle, and effects of side parameters for the biosensor experiments. For potentiometric experiments, the pH sensitivity of the sensors was also included in this modelling approach. For impedimetric experiments, solutions of different conductivity were used to validate the suggested theories and assumptions. The impedance spectra showed two pronounced frequency domains: a low-pass characteristic at lower frequencies and a resonance effect at higher frequencies. The former can be interpreted as a contribution of the source and double layer capacitances. The latter can be interpreted as a combined effect of the drain capacitance with the operational amplifier in the transimpedance circuit.
Two readout systems, one as a laboratory system and one as a point-of-care demonstrator, were developed and used for several chemical and biosensing experiments. The PSPICE model applied to the sensors and circuits were utilized to optimize the systems and to explain the sensor responses. The systems as well as the developed modelling approach were a significant step towards portable instruments with combined transducer principles in future healthcare applications.
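The low-pass contribution attributed to the source and double-layer capacitances follows the generic first-order form (a textbook relation given here for orientation, not the thesis's fitted circuit model):

```latex
|H(f)| \,=\, \frac{1}{\sqrt{1 + (f/f_{c})^{2}}},
\qquad
f_{c} \,=\, \frac{1}{2\pi R C},
```

so the corner frequency \(f_{c}\) shifts with the effective solution resistance \(R\) and the relevant capacitance \(C\), which is what makes impedance readouts sensitive to the liquid-solid interface.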

The research problem is that the land-use (re-)planning process in existing Egyptian cities
does not attain sustainability. This is due to essential principles remaining unfulfilled within
their land-use structures, a lack of harmony between the added and old parts of the cities, and
other reasons. This leads to the need to develop an assessment system in the form of a
computational spatial planning support system (SPSS). This SPSS is used for identifying the
degree of sustainability attainment in land-use plans, predicting probable problems, and
suggesting modifications to the evaluated plans.
The main goal is to design the SPSS for supporting sustainability in the Egyptian cities. The
secondary goals are: studying the Egyptian planning and administrative systems for designing
the technical and administrative frameworks for the SPSS, the development of an assessment
model from the SPSS for assessing sustainability in land-use structures of urban areas, as well
as the identification of the improvements required in the model and the recommendations for
developing the SPSS.
The theoretical part aims to design each of the administrative and technical frameworks of the
SPSS. This requires studying each of the main planning approaches, the sustainability in urban
land-use planning, and the significance of using efficient assessment tools for evaluating the
sustainability in this process. The added value of the planning support systems-PSSs for
planning and their role in supporting sustainability attainment in urban land-use planning are
discussed. Then, a group of previous examples of sustainability assessment from various
countries (developed and developing), which have used various assessment tools, is selected
in order to extract lessons learned that can guide the SPSS. On this basis,
the comprehensive technical framework for the SPSS is designed, including the suggested
methods and techniques that perform the various stages of the assessment process.
The Egyptian context is studied regarding the planning and administration systems within the
Egyptian cities, as well as the spatial and administrative problems facing the sustainable
development. On this basis, the administrative framework for the SPSS is identified, comprising
the entities that should be involved in the assessment process.
The empirical part focuses on the design of a selected assessment model from the
comprehensive technical framework of the SPSS, established as a minimized version of it.
This model is programmed as a new toolbox within the ArcGIS™ software through
geoscripting in the Python programming language, to be applied for assessing the sustainability
attainment in the land-use structure of urban areas. The assessment criteria required for the
model are identified separately for Egyptian and German cities, so that it can be applied to
German and Egyptian study areas.
Conclusions are drawn regarding PSSs, the Egyptian local administration and planning
systems, sustainability attainment in the land-use planning process in Egyptian cities, as well as
the proposed SPSS and the developed toolbox. The recommendations address the
challenges facing the development and application of PSSs, the Egyptian local
administration and planning systems, the spatial problems in Egyptian cities, the establishment
of the SPSS, and the application of the toolbox. The future agenda lies in the fields of sustainable urban land-use planning, planning support science, and the development process in
Egyptian cities.

A popular model for the locations of fibres or grains in composite materials
is the inhomogeneous Poisson process in dimension 3. Its local intensity function
may be estimated non-parametrically by local smoothing, e.g. by kernel
estimates. They crucially depend on the choice of bandwidths as tuning parameters
controlling the smoothness of the resulting function estimate. In this
thesis, we propose a fast algorithm for learning suitable global and local bandwidths
from the data. It is well known that intensity estimation is closely
related to probability density estimation. As a by-product of our study, we
show that the difference is asymptotically negligible regarding the choice of
good bandwidths, and, hence, we focus on density estimation.
There are quite a number of data-driven bandwidth selection methods for
kernel density estimates. Cross-validation is a popular one, frequently proposed
to estimate the optimal bandwidth. However, if the sample size is very
large, it becomes computationally expensive. In materials science, in particular,
it is very common to have several thousand up to several million points.
Another type of bandwidth selection is a solve-the-equation plug-in approach
which involves replacing the unknown quantities in the asymptotically optimal
bandwidth formula by their estimates.
In this thesis, we develop such an iterative fast plug-in algorithm for estimating
the optimal global and local bandwidths for density and intensity estimation, with a focus on 2- and 3-dimensional data. It is based on a detailed
asymptotic analysis of the estimators of the intensity function and of its second
derivatives and integrals of second derivatives which appear in the formulae
for asymptotically optimal bandwidths. These asymptotics are utilised to determine
the exact number of iteration steps and some tuning parameters. For
both the global and the local case, fewer than 10 iterations suffice. Simulation studies
show that the intensity estimated with local bandwidths indicates the variation
of the local intensity better than the estimate with a global bandwidth. Finally, the
algorithm is applied to two real data sets from test bodies of fibre-reinforced
high-performance concrete, clearly showing some inhomogeneity of the fibre
intensity.
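A minimal version of such a kernel intensity estimate can be sketched as follows; Silverman's rule of thumb stands in here for the iterative plug-in bandwidth algorithm developed in the thesis, and all data are synthetic:

```python
import numpy as np

def silverman_bandwidth(points):
    """Rule-of-thumb global bandwidth for d-dimensional data (a simple
    stand-in for the thesis's iterative plug-in estimate)."""
    n, d = points.shape
    sigma = points.std(axis=0, ddof=1).mean()
    return sigma * (4.0 / ((d + 2) * n)) ** (1.0 / (d + 4))

def kernel_intensity(points, grid, h):
    """Gaussian kernel intensity estimate: a sum (not a mean) of kernels,
    so the estimate integrates to the number of observed points."""
    n, d = points.shape
    diff = grid[:, None, :] - points[None, :, :]          # (m, n, d)
    sq = (diff ** 2).sum(axis=2) / h ** 2
    return np.exp(-0.5 * sq).sum(axis=1) / ((2 * np.pi) ** (d / 2) * h ** d)

rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 2))                          # synthetic point pattern
h = silverman_bandwidth(pts)
lam = kernel_intensity(pts, np.array([[0.0, 0.0]]), h)    # intensity at the origin
```

For local bandwidths, h would be a pilot-estimate-dependent function of the evaluation point rather than a single number.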

In this thesis, we focus on the application of the Heath-Platen (HP) estimator in option
pricing. In particular, we extend the approach of the HP estimator to pricing path-dependent
options under the Heston model. The theoretical background of the estimator
was first introduced by Heath and Platen [32]. The HP estimator was originally interpreted
as a control variate technique and an application for European vanilla options was
presented in [32]. For European vanilla options, the HP estimator provided a considerable
amount of variance reduction. Thus, applying the technique to path-dependent options
under the Heston model is the main contribution of this thesis.
The first part of the thesis deals with the implementation of the HP estimator for pricing
one-sided knockout barrier options. The main difficulty for the implementation of the HP
estimator lies in the determination of the first hitting time of the barrier. To test the
efficiency of the HP estimator we conduct numerical tests with regard to various aspects.
We provide a comparison among the crude Monte Carlo estimation, the crude control
variate technique and the HP estimator for all types of barrier options. Furthermore, we
present the numerical results for at-the-money, in-the-money and out-of-the-money barrier
options. As the numerical results imply, the HP estimator outperforms the alternatives
for pricing one-sided knockout barrier options under the Heston model.
Another contribution of this thesis is the application of the HP estimator in pricing bond
options under the Cox-Ingersoll-Ross (CIR) model and the Fong-Vasicek (FV) model. As
suggested in the original paper of Heath and Platen [32], the HP estimator has a wide
range of applicability for derivative pricing. Therefore, transferring the structure of the
HP estimator for pricing bond options is a promising contribution. As the approximating
Vasicek process does not seem to be as good as the deterministic volatility process in the
Heston setting, the performance of the HP estimator in the CIR model is only moderately
good. However, for the FV model the variance reduction provided by the HP estimator is
again considerable.
Finally, the numerical result concerning the weak convergence rate of the HP estimator
for pricing European vanilla options in the Heston model is presented. As supported by
numerical analysis, the HP estimator has weak convergence of order almost 1.
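The control-variate principle that the HP estimator builds on can be illustrated generically; this toy example estimates E[exp(Z)] for standard normal Z using Z itself as the control variate, and is not the HP estimator:

```python
import numpy as np

rng = np.random.default_rng(42)
z = rng.standard_normal(100_000)

payoff = np.exp(z)      # target: E[exp(Z)] = exp(0.5) for Z ~ N(0, 1)
control = z             # control variate with known mean E[Z] = 0

# Optimal coefficient beta = Cov(payoff, control) / Var(control)
beta = np.cov(payoff, control)[0, 1] / control.var(ddof=1)
cv_estimate = payoff.mean() - beta * (control.mean() - 0.0)
```

In the HP estimator, the role of the control is played by the price under an approximating model (e.g. the deterministic volatility process in the Heston setting), which is what makes the variance reduction so large.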

Multifacility location problems arise in many real-world applications. Often, the facilities can only be placed in feasible regions such as development or industrial areas. In this paper we show the existence of a finite dominating set (FDS) for the planar multifacility location problem with polyhedral gauges as distance functions and polyhedral feasible regions, provided the interacting facilities form a tree. As an application we show how to solve the planar 2-hub location problem in polynomial time. This approach also yields an ε-approximation for the Euclidean-norm case that is polynomial in the input data and in 1/ε.

The screening of metagenomic datasets led to the identification of new phage-derived members of the heme oxygenase and the ferredoxin-dependent bilin reductase enzyme families.
The novel bilin biosynthesis genes were shown to form mini-cassettes on metagenomic scaffolds and further form distinct clusters in phylogenetic analyses (Ledermann et al., 2016). In this project, it was demonstrated that the discovered sequences actually encode active enzymes. The biochemical characterization of a member of the heme oxygenases (ΦHemO) revealed that it possesses a regiospecificity for the α-methine bridge in the cleavage of the heme macrocycle. The reaction product biliverdin IXα was shown to function as the substrate for the novel ferredoxin-dependent bilin reductases (PcyX reductases), which catalyze its reduction to PEB via the intermediate 15,16-DHBV. While it was demonstrated that ΦPcyX, a phage-derived member of the PcyX reductases, is an active enzyme, it also became clear that the rate of the reaction is highly dependent on the employed redox partner. It turned out that the ferredoxin from the cyanophage P-SSM2 is to date the most suitable redox partner for the reductases of the PcyX group. Furthermore, the solution of the ΦPcyX crystal structure revealed that it adopts an α/β/α-sandwich fold, typical of the FDBR family. Activity assays and subsequent HPLC analyses with different variants of the ΦPcyX protein demonstrated that, despite their similarity, PcyX and PcyA reductases must act via different reaction mechanisms.
Another part of this project focused on the biochemical characterization of the FDBR KflaHY2 from the streptophyte alga Klebsormidium flaccidum. Experiments with recombinant KflaHY2 showed that it is an active FDBR which produces 3(Z)-PCB as the main reaction product, as is found for reductases of the PcyA group. Moreover, it was shown that under the employed assay conditions the reaction of BV to PCB proceeds in two different ways: both 3(Z)-PΦB and 18¹,18²-DHBV occur as intermediates. Activity assays with the purified intermediates yielded PCB. Hence, both compounds are suitable substrates for KflaHY2.
The results of this work highlight the importance of the biochemical experiments, as catalytic activity cannot solely be predicted by sequence analysis.

In this article a new numerical solver for simulations of district heating networks is presented. The numerical method applies the local time stepping introduced in [11] to networks of linear advection equations. In combination with the high order approach of [4] an accurate and very efficient scheme is developed. In several numerical test cases the advantages for simulations of district heating networks are shown.
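On each pipe of such a network, the transport part of the model reduces to the linear advection equation u_t + v u_x = 0. The sketch below uses the simplest first-order upwind discretization on a single periodic pipe, not the high-order local-time-stepping scheme of the article:

```python
import numpy as np

def upwind_step(u, v, dx, dt):
    """One explicit first-order upwind step for u_t + v u_x = 0, v > 0,
    with periodic boundary conditions."""
    cfl = v * dt / dx
    assert cfl <= 1.0, "CFL condition violated"
    return u - cfl * (u - np.roll(u, 1))

nx, v = 200, 1.0
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
dt = 0.5 * dx / v                       # CFL number 0.5
u = np.exp(-100.0 * (x - 0.5) ** 2)     # initial temperature pulse
for _ in range(100):                    # advect the pulse by 0.25
    u = upwind_step(u, v, dx, dt)
```

The scheme conserves the total (discrete) mass exactly but smears the pulse; the high-order methods cited above are designed to avoid precisely this numerical diffusion.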

In this paper, we demonstrate the power of functional data models for the statistical analysis of stimulus-response experiments, a quite natural way to look at this kind of data that makes use of the full information available. In particular, we focus on the detection of a change in the mean of the response in a series of stimulus-response curves, where we also take into account dependence in time.
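The change-in-mean detection can be illustrated with a standard CUSUM statistic applied to one summary value per stimulus-response curve (e.g. its mean level); this generic sketch ignores the functional structure and the temporal dependence that the paper's procedure accounts for:

```python
import numpy as np

def cusum_changepoint(y):
    """Return the CUSUM change-point estimate and the maximal statistic
    for a change in the mean of the sequence y."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    s = np.cumsum(y)
    k = np.arange(1, n)
    stat = np.abs(s[:-1] - k / n * s[-1]) / np.sqrt(n)
    return int(np.argmax(stat)) + 1, float(stat.max())

# Synthetic example: the mean level jumps from 0.0 to 1.5 after curve 100
rng = np.random.default_rng(1)
levels = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.5, 1.0, 100)])
khat, tmax = cusum_changepoint(levels)
```

Under temporal dependence, the normalization of the statistic has to be adjusted by a long-run variance estimate, which is one of the refinements the functional approach provides.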

1,3-Diynes are frequently found as an important structural motif in natural products, pharmaceuticals and bioactive compounds, electronic and optical materials and supramolecular molecules. Copper and palladium complexes are widely used to prepare 1,3-diynes by homocoupling of terminal alkynes, whereas the potential of nickel complexes for this reaction is essentially unexplored. Although a detailed study on the reported nickel-acetylene chemistry has not been carried out, a generalized mechanism featuring a nickel(II)/nickel(0) catalytic cycle has been proposed. In the present work, the mechanism of the nickel-mediated homocoupling reaction of terminal alkynes is investigated in detail through the isolation and/or characterization of key intermediates from both the stoichiometric and the catalytic reactions. A nickel(II) complex [Ni(L-N4Me2)(MeCN)2](ClO4)2 (1) containing a tetradentate N,N′-dimethyl-2,11-diaza[3.3](2,6)pyridinophane (L-N4Me2) as ligand was used as a catalyst for the homocoupling of terminal alkynes by employing oxygen as oxidant at room temperature. A series of dinuclear nickel(I) complexes bridged by a 1,3-diyne ligand have been isolated from the stoichiometric reaction between [Ni(L-N4Me2)(MeCN)2](ClO4)2 (1) and lithium acetylides. The dinuclear nickel(I)-diyne complexes [{Ni(L-N4Me2)}2(RC4R)](ClO4)2 (2) were well characterized by X-ray crystal structures, various spectroscopic methods, SQUID measurements and DFT calculations. The complexes not only represent a key intermediate in the aforesaid catalytic reaction, but also constitute the first structurally characterized dinuclear nickel(I)-diyne complexes. In addition, radical trapping and low-temperature UV-Vis-NIR experiments on the formation of the dinuclear nickel(I)-diyne confirm that the reactions occurring during the reduction of nickel(II) to nickel(I) and the C-C bond formation of the 1,3-diyne follow a non-radical, concerted mechanism.
Furthermore, spectroscopic investigation of the reactivity of the dinuclear nickel(I)-diyne complex towards molecular oxygen confirmed the formation of a mononuclear nickel(I)-diyne species [Ni(L-N4Me2)(RC4R)]+ (4) and a mononuclear nickel(III)-peroxo species [Ni(L-N4Me2)(O2)]+ (5), which were converted to the free 1,3-diyne and an unstable dinuclear nickel(II) species [{Ni(L-N4Me2)}2(O2)]2+ (6). A mononuclear nickel(I)-alkyne complex [Ni(L-N4Me2)(PhC2Ph)](ClO4).MeOH (3) and the mononuclear nickel(III)-peroxo species [Ni(L-N4Me2)(O2)]+ (5) were isolated/generated and characterized to confirm the formulation of the aforementioned mononuclear nickel(I)-diyne and nickel(III)-peroxo species. Spectroscopic experiments on the catalytic reaction mixture also confirmed the presence of the aforesaid intermediates. The results of both the stoichiometric and the catalytic reactions suggest an intriguing mechanism involving nickel(II)/nickel(I)/nickel(III) oxidation states, in contrast to the reported nickel(II)/nickel(0) catalytic cycle. These findings are expected to open a new paradigm for nickel-catalyzed organic transformations.

Crowd condition monitoring concerns both crowd safety and business performance metrics. The research problem to be solved is a crowd condition estimation approach that enables and supports the supervision of mass events by first responders and marketing experts, but is also targeted at social scientists, journalists, historians, public relations experts, community leaders, and political researchers. Real-time insight into the crowd condition is desired for quick reactions, and historic crowd condition measurements are desired for profound post-event analysis.
This thesis aims to provide a systematic understanding of different approaches for crowd condition estimation that rely on 2.4 GHz signals and their variation in crowds of people; it proposes and categorizes possible sensing approaches, applies supervised machine learning algorithms, and demonstrates experimental evaluation results. I categorize four sensing approaches. First, stationary sensors sensing crowd-centric signal sources. Second, stationary sensors sensing other stationary signal sources (either opportunistic or special-purpose signal sources). Third, a few volunteers within the crowd equipped with sensors that sense surrounding crowd-centric device signals (either individually, in a single group or collaboratively) within a small region. Fourth, a small subset of participants within the crowd equipped with sensors and roaming throughout a whole city to sense wireless crowd-centric signals.
I present and evaluate an approach with meshed stationary sensors sensing crowd-centric devices. This was demonstrated and empirically evaluated within an industrial project during three of the world's largest automotive exhibitions. With over 30 meshed stationary sensors in an optimized setup across 6,400 m², I achieved a mean absolute error of the crowd density of just 0.0115 people per square meter, which equals an average of below 6 % mean relative error from the ground truth. I validated the contextual crowd condition anomaly detection method during the visit of Chancellor Merkel and during a large press conference at the exhibition. I present the approach of opportunistically sensing stationary wireless signal variations and validate it during the Hannover CeBIT exhibition with 80 opportunistic sources, achieving a crowd condition estimation relative error of below 12 % while relying only on surrounding signals influenced by humans. Pursuing this further, I present an approach with dedicated signal sources and sensors to estimate the condition of shared office environments. I demonstrate methods that are viable even for detecting low-density static crowds, such as people sitting at their desks, and evaluate this in an eight-person office scenario. I present the approach of mobile crowd density estimation by a group of sensors detecting other crowd-centric devices in their proximity, with a classification accuracy of the crowd density of 66 % (an improvement of over 22 % over an individual sensor) during the crowded Oktoberfest event. I propose a collaborative mobile sensing approach which makes the system more robust against variations that may result from the background of the people rather than the crowd condition, with differential features taking into account information about the link structure between actively scanning devices, the ratio between values observed by different devices, the ratio of discovered crowd devices over time, the team-wise diversity of discovered devices, the number of semi-continuous device visibility periods, and device visibility durations. I validate the approach on multiple experiments, including the Kaiserslautern public viewing event of the European soccer championship, and evaluated the collaborative mobile sensing approach with a crowd condition estimation accuracy of 77 %, outperforming previous methods by 21 %.
I demonstrate the feasibility of deploying the wireless crowd condition sensing approach at a citywide scale during an event in Zurich with 971 actively sensing participants, outperforming the reference method by 24 % on average.

Following the ideas presented in Dahlhaus (2000) and Dahlhaus and Sahm (2000) for time series, we build a Whittle-type approximation of the Gaussian likelihood for locally stationary random fields. To achieve this goal, we first extend a Szegő-type formula to the multidimensional, locally stationary case, and second we derive a set of matrix approximations using elements of the spectral theory of stochastic processes. The minimization of the Whittle likelihood leads to the so-called Whittle estimator \(\widehat{\theta}_{T}\). For the sake of simplicity we assume a known mean (without loss of generality, zero mean); hence \(\widehat{\theta}_{T}\) estimates the parameter vector of the covariance matrix \(\Sigma_{\theta}\).
We investigate the asymptotic properties of the Whittle estimator, in particular uniform convergence of the likelihoods, and consistency and Gaussianity of the estimator. A main point is a detailed analysis of the asymptotic bias, which is considerably more difficult for random fields than for time series. Furthermore, we prove that in the case of model misspecification the minimum of our Whittle likelihood still converges, where the limit is the minimum of the Kullback-Leibler information divergence.
Finally, we evaluate the performance of the Whittle estimator through computational simulations, the estimation of conditional autoregressive models, and a real data application.
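
To make the Whittle idea concrete in the simplest setting, the sketch below estimates the parameter of a one-dimensional stationary AR(1) series by minimizing the Whittle likelihood over a grid. This is a hypothetical toy analogue (the names, the simulated data and the grid search are ours), far simpler than the locally stationary random-field estimator of the thesis:

```python
import cmath
import math
import random

def periodogram(x):
    """Periodogram I(lambda_j) at the Fourier frequencies (plain O(n^2) DFT)."""
    n = len(x)
    freqs, vals = [], []
    for j in range(1, n // 2 + 1):
        lam = 2 * math.pi * j / n
        d = sum(x[t] * cmath.exp(-1j * lam * t) for t in range(n))
        freqs.append(lam)
        vals.append(abs(d) ** 2 / (2 * math.pi * n))
    return freqs, vals

def ar1_density(lam, phi, sigma2=1.0):
    """Spectral density of an AR(1) process with parameter phi."""
    return sigma2 / (2 * math.pi * abs(1 - phi * cmath.exp(-1j * lam)) ** 2)

def whittle_estimate(x, grid):
    """Minimize the Whittle likelihood sum_j [log f(l_j) + I(l_j) / f(l_j)]."""
    freqs, pgram = periodogram(x)
    def neg_loglik(phi):
        return sum(math.log(ar1_density(l, phi)) + i / ar1_density(l, phi)
                   for l, i in zip(freqs, pgram))
    return min(grid, key=neg_loglik)

# Simulate an AR(1) path with phi = 0.6 and recover phi by grid search.
random.seed(1)
x, prev = [], 0.0
for _ in range(512):
    prev = 0.6 * prev + random.gauss(0.0, 1.0)
    x.append(prev)
print(whittle_estimate(x, [i / 100 for i in range(-90, 91)]))  # close to 0.6
```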

Embedded reactive systems underpin various safety-critical applications wherein they interact with other systems and the environment with limited or even no human supervision. Therefore, design errors that violate essential system specifications can lead to severe, unacceptable damage. For this reason, formal verification of such systems in their physical environment is of high interest. Synchronous programs are typically used to represent embedded reactive systems, while hybrid systems serve to model discrete reactive systems in a continuous environment. As such, both synchronous programs and hybrid systems play important roles in the model-based design of embedded reactive systems. This thesis develops induction-based techniques for safety property verification of synchronous and hybrid programs. The imperative synchronous language Quartz and its hybrid systems extension are used to substantiate the findings.
Deductive techniques for software verification typically use Hoare calculus. In this context, Verification Condition Generation (VCG) is used to apply Hoare calculus rules to a program whose statements are annotated with pre- and postconditions so that the validity of an obtained Verification Condition (VC) implies correctness of a given proof goal. Due to the abstraction of macro steps, Hoare calculus cannot directly generate VCs of synchronous programs unless it handles additional label variables or goto statements. As a first contribution, Floyd’s induction-based approach is employed to generate VCs for synchronous and hybrid programs. Five VCG methods are introduced that use inductive assertions to decompose the overall proof goal. Given the right assertions, the procedure can automatically generate a set of VCs that can then be checked by SMT solvers or automated theorem provers. The methods are proved sound and relatively complete, provided that the underlying assertion language is expressive enough. They can be applied to any program with a state-based semantics.
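
As a toy illustration of the inductive-assertion idea (our own example, not taken from the thesis), consider the program `i := 0; while i < n do i := i + 1` with the invariant 0 <= i <= n. Floyd-style VC generation yields three conditions, which the sketch below checks exhaustively over a small finite domain instead of passing them to an SMT solver:

```python
# Inductive assertion for "i := 0; while i < n: i := i + 1".
def inv(i, n): return 0 <= i <= n
def post(i, n): return i == n

def check_vcs(domain):
    """Check the three Floyd-style VCs over a finite domain of states."""
    for n in domain:
        if n < 0:
            continue
        # VC1: the initial state satisfies the invariant.
        if not inv(0, n):
            return False
        for i in domain:
            # VC2: the invariant is preserved by one loop iteration.
            if inv(i, n) and i < n and not inv(i + 1, n):
                return False
            # VC3: invariant plus negated loop guard implies the postcondition.
            if inv(i, n) and not (i < n) and not post(i, n):
                return False
    return True

print(check_vcs(range(0, 20)))  # True: all three VCs hold on this domain
```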
Property Directed Reachability (PDR) is an efficient method for synchronous hardware circuit verification based on induction rather than fixpoint computation. Crucial steps of the PDR method consist of deciding on the reachability of Counterexamples to Induction (CTIs) and generalizing them to clauses that cover as many unreachable states as possible. The thesis demonstrates that PDR becomes more efficient for imperative synchronous programs when exploiting the distinction between control flow and data flow. Before calling the PDR method, it is possible to derive additional program control-flow information that can be added to the transition relation so that fewer CTIs will be generated. Two methods to compute additional control-flow information are presented that differ in how precisely they approximate the reachable control-flow states and, consequently, in their required runtime. After calling the PDR method, the CTI identification work is reduced to its control-flow part and to checking whether the obtained control-flow states are unreachable in the corresponding extended finite state machine of the program. If so, all states of the transition system that refer to the same program locations can be excluded, which significantly increases the performance of PDR.

Grape powdery mildew, Erysiphe necator, is one of the most significant plant pathogens and affects grape-growing regions worldwide. Because of its short generation time and the production of large amounts of conidia throughout the season, E. necator is classified as a moderate- to high-risk pathogen with respect to the development of fungicide resistance. The number of fungicidal modes of action available to control powdery mildew is limited, and for some of them resistances are already known. Aryl-phenyl-ketones (APKs), represented by metrafenone and pyriofenone, and succinate-dehydrogenase inhibitors (SDHIs), comprising numerous active ingredients, are two important fungicide classes used for the control of E. necator. Over the period 2014 to 2016, the emergence and development of metrafenone- and SDHI-resistant E. necator isolates in Europe were followed and evaluated. The distribution of resistant isolates was strongly dependent on the European region. Whereas the north-western part is still predominantly sensitive, samples from eastern European countries showed higher resistance frequencies.
Classical sensitivity tests with obligate biotrophs can be challenging regarding sampling, transport and especially the maintenance of the living strains. Whenever possible, molecular genetic methods are preferred for a more efficient monitoring. Such methods require knowledge of the resistance mechanisms. The exact molecular target and the resistance mechanism of metrafenone are still unknown. Whole genome sequencing of metrafenone-sensitive and -resistant wheat powdery mildew isolates, as well as of adapted laboratory mutants of Aspergillus nidulans, was performed with the aim of identifying proteins potentially linked to the mode of action or contributing to metrafenone resistance. Based on comparative SNP analysis, four proteins potentially associated with metrafenone resistance were identified, but validation studies could not confirm their role in metrafenone resistance. In contrast to APKs, the mode of action of SDHIs is well understood. Sequencing of the sdh-genes of less sensitive E. necator isolates identified four different target-site mutations, B-H242R, B-I244V, C-G169D and C-G169S, in sdhB and sdhC, respectively. Based on this information it was possible to develop molecular genetic monitoring methods for the mutations B-H242R and C-G169D. In 2016, the B-H242R was thereby identified as by far the most frequent mutation. Depending on the analysed SDH compound and the sdh-genotype, different sensitivities were observed, revealing a complex cross-resistance pattern.
Growth competition assays without selection pressure, with mixtures of sensitive and resistant E. necator isolates, were performed to determine potential fitness costs associated with fungicide resistance. With the experimental setups used, a clear fitness disadvantage associated with metrafenone resistance was not identified, although a strong variability of fitness was observed among the tested resistant E. necator isolates. For isolates with a reduced sensitivity towards SDHIs, the associated fitness costs depended on the sdh-genotype analysed. Competition tests with the B-H242R genotypes gave evidence that there are no fitness costs associated with this mutation. In contrast, the C-G169D genotypes were less competitive, indicating a restricted fitness compared to the tested sensitive partners. Competition assays of field isolates, which exhibited several resistances towards different fungicide classes, indicated that there are no fitness costs associated with a multiple-resistant phenotype in E. necator. Overall, these results clearly indicate the importance of analysing a representative number of isolates with sensitive and resistant phenotypes.

Epoxy belongs to a category of high-performance thermosetting polymers which have been used extensively in industrial and consumer applications. Highly cross-linked epoxy polymers offer excellent mechanical properties, adhesion, and chemical resistance. However, unmodified epoxies are prone to brittle fracture and crack propagation due to their highly crosslinked structure. As a result, epoxies are normally toughened to ensure the usability of these materials in practical applications.
This research work focuses on the development of novel modified epoxy matrices with enhanced mechanical, fracture mechanical and thermal properties, suitable for processing by filament winding technology, to manufacture composite-based calender roller covers with improved performance in comparison to commercially available products.
In the first stage, a neat epoxy resin (EP) was modified using three different high-functionality epoxy resins with two types of hardeners, i.e. amine-based (H1) and anhydride-based (H2). A series of hybrid epoxy resins was obtained by systematic variation of the high-functionality epoxy resin content in the reference epoxy system. The resulting matrices were characterized by their tensile properties, and the best system was chosen for each hardener type. For the tailored amine-based system (MEP_H1) a 14 % improvement was measured for bulk samples; similarly, for the tailored anhydride-based system (MEP_H2) an 11 % improvement was measured when tested at 23 °C.
Further, the tailored epoxy systems (MEP_H1 and MEP_H2) were modified using a specially designed block copolymer (BCP) and core-shell rubber nanoparticles (CSR). A series of nanocomposites was obtained by systematic variation of the filler contents. The resulting matrices were extensively characterized qualitatively and quantitatively to reveal the effect of each filler on the polymer properties. It was shown that the BCP confers better fracture properties to the epoxy resin at low filler loading without compromising the other mechanical properties. These characteristics were accompanied by ductility and temperature stability. All composites were tested at 23 °C and at 80 °C to understand the effect of temperature on the mechanical and fracture properties.
Examinations of fractured specimen surfaces provided information about the mechanisms responsible for the reinforcement. Nanoparticles generate several energy-dissipating mechanisms in the epoxy, e.g. plastic deformation of the matrix, cavitation, void growth, debonding and crack pinning. These were closely related to the microstructure of the materials. The characteristics of the microstructure were verified by microscopy methods (SEM and AFM). The microstructure of the neat epoxy-hardener system was strongly influenced by the nanoparticles and the resulting interfacial interactions. The interaction of the nanoparticles with a different hardener system results in a different morphology, which ultimately influences the mechanical and fracture mechanical properties of the nanocomposites. Hybrid toughening using combinations of block copolymer / core-shell rubber nanoparticles and block copolymer / TiO2 nanoparticles was investigated in the epoxy systems. It was found that the addition of a rigid phase alongside a soft phase recovers the loss of strength caused in the nanocomposites by the softer phase.
In order to clarify the relevant relationships, the microstructural and mechanical properties were correlated. The Counto, Halpin-Tsai, and Lewis-Nielsen equations were used to calculate the moduli of the composites, and the predicted moduli fit the measured values well. Modeling was done to predict the toughening contribution of the block copolymers and core-shell rubber nanoparticles. There was good agreement between the predicted and experimental values of the fracture energy.
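
For reference, the Halpin-Tsai prediction mentioned above has a compact closed form. The sketch below evaluates it for hypothetical filler data (the moduli, volume fraction and shape factor are invented for illustration and are not the thesis's measured values):

```python
def halpin_tsai(E_m, E_f, phi, zeta=2.0):
    """Halpin-Tsai prediction of the composite modulus E_c.
    E_m, E_f: matrix and filler moduli (GPa); phi: filler volume fraction;
    zeta: shape factor (about 2 for roughly spherical particles)."""
    ratio = E_f / E_m
    eta = (ratio - 1) / (ratio + zeta)
    return E_m * (1 + zeta * eta * phi) / (1 - eta * phi)

# Hypothetical numbers, only to illustrate the formula: a 3 GPa epoxy
# matrix filled with 5 vol% of a stiff rigid filler (~230 GPa).
print(round(halpin_tsai(3.0, 230.0, 0.05), 2))
```

The Counto and Lewis-Nielsen predictions can be implemented analogously and compared against measured moduli.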

Computational simulations run on large supercomputers balance their outputs against the needs of the scientist and the capability of the machine. Persistent storage is typically expensive and slow, and its performance grows at a slower rate than the processing power of the machine. This forces scientists to be practical about the size and frequency of the simulation outputs that are later analyzed to understand the simulation states. Flexibility in the trade-off between the size and the accessibility of simulation outputs is critical to the success of scientists using supercomputers to understand their science. In situ transformation of the simulation state before it is persistently stored is the focus of this dissertation.
The extreme size and parallelism of simulations can pose challenges for visualization and data analysis. This is coupled with the need to accept pre-partitioned data into the analysis algorithms, which is not always well aligned with existing software infrastructures. The work in this dissertation focuses on improving current workflows and software to accept data as it is and to efficiently produce smaller, more information-rich data for persistent storage that is easily consumed by end-user scientists. I attack this problem on both a theoretical and a practical basis, by transforming completely raw data into information-dense visualizations and by studying methods for managing both the creation and the persistence of data products from large-scale simulations.

In this thesis we address two instances of duality in commutative algebra.
In the first part, we consider value semigroups of non-irreducible singular algebraic curves and their fractional ideals. These are submonoids of Z^n closed under minima, with a conductor, and fulfilling special compatibility properties on their elements. Subsets of Z^n fulfilling these three conditions are known in the literature as good semigroups and their ideals, and their class strictly contains the class of value semigroup ideals. We examine good semigroups both independently and in relation to their algebraic counterpart. In the combinatorial setting, we define the concept of a good system of generators, and we show that minimal good systems of generators are unique. On the algebraic side, we give an intrinsic definition of canonical semigroup ideals, which yields a duality on good semigroup ideals. We prove that this semigroup duality is compatible with the Cohen-Macaulay duality under taking values. Finally, using the duality on good semigroup ideals, we show a symmetry of the Poincaré series of good semigroups with special properties.
In the second part, we treat Macaulay's inverse system, a one-to-one correspondence which is a particular case of Matlis duality and an effective method to construct Artinian k-algebras with chosen socle type. Recently, Elias and Rossi gave the structure of the inverse system of positive-dimensional Gorenstein k-algebras. We extend their result by establishing a one-to-one correspondence between positive-dimensional level k-algebras and certain submodules of the divided power ring. We give several examples to illustrate our result.
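
The first defining axiom of a good semigroup, closure under componentwise minima, is easy to test computationally. The following sketch checks it for a small truncated subset of N^2 (the example set is hypothetical, not the value semigroup of an actual curve; the conductor and compatibility axioms would need analogous checks):

```python
from itertools import product

def min2(u, v):
    """Componentwise minimum of two elements of Z^2."""
    return (min(u[0], v[0]), min(u[1], v[1]))

def closed_under_minima(S):
    """Check the first good-semigroup axiom on a finite truncation."""
    return all(min2(u, v) in S for u, v in product(S, repeat=2))

# A small truncated example in N^2 (illustrative only); beyond the
# conductor every element would belong to the set anyway.
S = {(0, 0), (2, 2), (3, 3), (2, 4), (4, 2), (2, 3), (3, 2)}
print(closed_under_minima(S))  # True
```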

Annual Report 2017
(2017)

The present situation of control engineering in the context of automated production can be described as a field of tension between its desired outcome and its actual consideration. On the one hand, the share of control engineering compared to the other engineering domains has significantly increased within the last decades due to rising automation degrees of production processes and equipment. On the other hand, the control engineering domain is still underrepresented within the production engineering process. Another limiting factor is the lack of methods and tools to decrease the software engineering effort and to permit the development of innovative automation applications that ideally support the business requirements.
This thesis addresses this challenging situation by means of the development of a new control engineering methodology. The foundation is built by concepts from computer science to promote structuring and abstraction mechanisms for the software development. In this context, the key sources for this thesis are the paradigm of Service-oriented Architecture and concepts from Model-driven Engineering. To mold these concepts into an integrated engineering procedure, ideas from Systems Engineering are applied. The overall objective is to develop an engineering methodology to improve the efficiency of control engineering by a higher adaptability of control software and decreased programming efforts by reuse.

The proliferation of sensors in everyday devices – especially in smartphones – has led to crowd sensing becoming an important technique in many urban applications, ranging from noise pollution mapping and road condition monitoring to tracking the spread of diseases. However, in order to establish integrated crowd sensing environments on a large scale, some open issues need to be tackled first. On a high level, this thesis concentrates on two of those key issues: (1) efficiently collecting and processing large amounts of sensor data from smartphones in a scalable manner and (2) extracting abstract data models from the collected data sets, thereby enabling the development of complex smart city services based on the extracted knowledge.
Going into more detail, the first main contribution of this thesis is the development of methods and architectures to facilitate simple and efficient deployment, scalability and adaptability of crowd sensing applications in a broad range of scenarios, while at the same time enabling the integration of incentive mechanisms for the participating general public. An evaluation within a complex, large-scale environment shows that real-world deployments of the proposed data recording architecture are in fact feasible. The second major contribution of this thesis is the development of a novel methodology for using the recorded data to extract abstract data models which correctly represent the inherent core characteristics of the source data. Finally – in order to bring together the results of the thesis – it is demonstrated how the proposed architecture and the modeling method can be used to implement a complex smart city service by employing a data-driven development approach.

We continue in this paper the study of k-adaptable robust solutions for combinatorial optimization problems with bounded uncertainty sets. In this concept, one does not need to choose a single solution to hedge against the uncertainty. Instead, one is allowed to choose a set of k different solutions, from which one can be chosen after the uncertain scenario has been revealed. We first show how the problem can be decomposed into polynomially many subproblems if k is fixed. In the remaining part of the paper we consider the special case k=2, i.e., one is allowed to choose two different solutions to hedge against the uncertainty. We decompose this problem into so-called coordination problems. The study of these coordination problems turns out to be interesting in its own right. We prove positive results for the unconstrained combinatorial optimization problem, the matroid maximization problem, the selection problem, and the shortest path problem on series-parallel graphs. The shortest path problem on general graphs turns out to be NP-complete. Further, for minimization problems we show how to transform approximation algorithms for the coordination problem into approximation algorithms for the original problem. We study the knapsack problem to show that this relation does not hold for maximization problems in general. We present a PTAS for the corresponding coordination problem and prove that the 2-adaptable knapsack problem is not approximable at all.
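
The 2-adaptable concept can be made concrete on a toy selection problem (the instance is invented for illustration): choose two p-element solutions up front, and after a scenario is revealed use the cheaper one. Brute force computes the min over solution pairs of the max over scenarios of the min of the two costs:

```python
from itertools import combinations

def cost(solution, scenario):
    return sum(scenario[i] for i in solution)

def two_adaptable_value(n, p, scenarios):
    """min over pairs {S1, S2} of max over scenarios of min(c(S1), c(S2))."""
    sols = list(combinations(range(n), p))       # all p-element selections
    return min(
        max(min(cost(s1, sc), cost(s2, sc)) for sc in scenarios)
        for s1, s2 in combinations(sols, 2)
    )

def static_robust_value(n, p, scenarios):
    """Classical robust value: commit to a single solution up front."""
    sols = combinations(range(n), p)
    return min(max(cost(s, sc) for sc in scenarios) for s in sols)

# Two scenarios penalizing different items: hedging with two complementary
# solutions beats any single robust solution.
scenarios = [(10, 1, 1, 1), (1, 1, 10, 10)]
print(static_robust_value(4, 2, scenarios), two_adaptable_value(4, 2, scenarios))
```

On this instance the single robust solution costs 11 in the worst case, while the 2-adaptable pair achieves worst-case cost 2.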

The cytosolic Fe65 adaptor protein family, consisting of Fe65, Fe65L1 and Fe65L2, is involved in many intracellular signaling pathways, linking a continuously growing list of proteins via its three interaction domains and facilitating functional interactions. One of the most important binding partners of the Fe65 family proteins is the amyloid precursor protein (APP), which plays an important role in Alzheimer's disease.
To gain deeper insight into the function of the ubiquitously expressed Fe65 and the brain-enriched Fe65L1, the goals of my study were I) to analyze their putative synaptic function in vivo, II) to perform structural analysis focusing on a putative dimeric complex of Fe65, and III) to investigate the involvement of Fe65 in mediating LRP1 and APP intracellular trafficking in murine hippocampal neurons. Utilizing several behavioral analyses of Fe65 KO, Fe65L1 KO and Fe65/Fe65L1 DKO mice, I could demonstrate that the Fe65 protein family is essential for learning and memory as well as grip strength and locomotor activity. Furthermore, immunohistological as well as protein biochemical analyses revealed that the Fe65 protein family is important for neuromuscular junction formation in the peripheral nervous system, which involves binding of APP and acting downstream of the APP signaling pathway. Via co-immunoprecipitation analysis I could verify that Fe65 is capable of forming dimers ex vivo, which occur exclusively in the cytosol and, upon APP expression, are shifted to membrane compartments, forming trimeric complexes. An influence of the loss of Fe65 and/or Fe65L1 on APP and/or LRP1 transport characteristics in axons could not be verified, possibly owing to a compensatory effect of Fe65L2. However, I could demonstrate that LRP1 affects APP transport independently of Fe65 by shifting APP into slower types of vesicles, leading to changed processing and endocytosis of APP.
The outcome of my thesis advances our understanding of the Fe65 protein family, especially its interplay with the physiological function of APP in synapse formation and synaptic plasticity.

This paper presents a case study of duty rostering for physicians at a department of orthopedics and trauma surgery. We provide a detailed description of the rostering problem faced and present an integer programming model that has been used in practice for creating duty rosters at the department for more than a year. Using real world data, we compare the model output to a manually generated roster as used previously by the department and analyze the quality of the rosters generated by the model over a longer time span. Moreover, we demonstrate how unforeseen events such as absences of scheduled physicians are handled.
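
For illustration only, a drastically simplified roster with hypothetical data can be solved by brute force; the real department-scale problem requires the integer programming model described in the paper:

```python
from itertools import product

# Toy duty rostering (hypothetical data): assign one physician per day so
# that no one works two consecutive days and no one exceeds a shift cap;
# among feasible rosters, minimize the total preference penalty.
physicians = ["A", "B", "C"]
days = 7
max_shifts = 3
# penalty[p][d]: how much physician p dislikes duty on day d (hypothetical).
penalty = {"A": [0, 2, 0, 2, 0, 2, 0],
           "B": [1, 0, 1, 0, 1, 0, 1],
           "C": [2, 1, 2, 1, 2, 1, 2]}

def feasible(roster):
    if any(a == b for a, b in zip(roster, roster[1:])):   # no back-to-back duty
        return False
    return all(roster.count(p) <= max_shifts for p in physicians)

best = min((r for r in product(physicians, repeat=days) if feasible(r)),
           key=lambda r: sum(penalty[p][d] for d, p in enumerate(r)))
print(best, sum(penalty[p][d] for d, p in enumerate(best)))
```

An integer program expresses the same constraints with binary variables x[p][d] and the objective sum of penalty[p][d] * x[p][d]; brute force is only viable for this toy size.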

For many years, most distributed real-time systems employed data communication systems specially tailored to the specific requirements of individual domains: for instance, Controller Area Network (CAN) and FlexRay in the automotive domain, ARINC 429 [FW10] and TTP [Kop95] in the aerospace domain. Some of these solutions were expensive and eventually not well understood.
Mostly driven by ever decreasing costs, the use of such distributed real-time systems has drastically increased in the last years in different domains. Consequently, cross-domain communication systems are advantageous. Not only has the number of distributed real-time systems been increasing, but the number of nodes per system has also grown drastically, which in turn increases their network bandwidth requirements. Further, the system architectures have been changing, allowing applications to spread computations among different computer nodes. For example, modern avionics systems moved from federated to integrated modular architectures, also increasing the network bandwidth requirements.
Ethernet (IEEE 802.3) [iee12] is a well-established network standard. Further, it is fast, easy to install, and the interface ICs are cheap [Dec05]. However, Ethernet does not offer any temporal guarantees. Research groups from academia and industry have presented a number of protocols merging the benefits of Ethernet with the temporal guarantees required by distributed real-time systems. Two of these protocols are Avionics Full-Duplex Switched Ethernet (AFDX) [AFD09] and Time-Triggered Ethernet (TTEthernet) [tim16]. In this dissertation, we propose solutions for two problems faced during the design of AFDX and TTEthernet networks: avoiding data loss due to buffer overflow in AFDX networks with multiple priority traffic, and scheduling of TTEthernet networks.
AFDX guarantees bandwidth separation and bounded transmission latency for each communication channel. Communication channels in AFDX networks are not synchronized, and therefore frames might compete for the same output port, requiring buffering to avoid data loss. To avoid buffer overflow and the resulting data loss, the network designer must reserve a safe, but not too pessimistic amount of memory of each buffer. The current AFDX standard allows for the classification of the network traffic with two priorities. Nevertheless, some commercial solutions provide multiple priorities, increasing the complexity of the buffer backlog analysis. The state-of-the-art AFDX buffer backlog analysis does not provide a method to compute deterministic upper bounds
for buffer backlog of AFDX networks with multiple priority traffic. Therefore, in this dissertation we propose a method to address this open problem. Our method is based on the analysis of the largest busy period encountered by frames stored in a buffer. We identify the ingress (and respective egress) order of frames in the largest busy period that leads to the largest buffer backlog, and then compute the respective buffer backlog upper bound. We present experiments to measure the computational costs of our method.
In TTEthernet, nodes are synchronized, allowing message transmission at well-defined points in time, computed off-line and stored in a conflict-free scheduling table. The computation of such scheduling tables is an NP-complete problem [Kor92], which should nevertheless be solved in reasonable time for industrial-size networks. We propose an approach to efficiently compute a schedule for the TT communication channels in TTEthernet networks, in which we model the scheduling problem as a search tree. As the scheduler traverses the search tree, it schedules the communication channels on a physical link. We present two approaches to traverse the search tree while progressively creating its vertices. A valid schedule is found once the scheduler reaches a valid leaf. If, on the contrary, it reaches an invalid leaf, the scheduler backtracks, searching for a path to a valid leaf. We present a set of experiments to demonstrate the impact of the input parameters on the time taken to compute a feasible schedule or to deem the set of virtual links infeasible.
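
The search-tree idea can be sketched for a single physical link. This is a deliberately minimal model with unit-length slots and hypothetical virtual links; the dissertation handles full networks and further constraints:

```python
def schedule(links, hyperperiod, assigned=None):
    """Depth-first search with backtracking: assign each time-triggered
    virtual link (name, period) an offset so that no two transmissions
    share a slot within the hyperperiod.  Returns {name: offset} or None."""
    assigned = assigned or {}
    if len(assigned) == len(links):
        return dict(assigned)                     # valid leaf reached
    name, period = links[len(assigned)]
    occupied = {(assigned[n] + k * p) % hyperperiod
                for n, p in links if n in assigned
                for k in range(hyperperiod // p)}
    for offset in range(period):                  # branches of the search tree
        slots = {(offset + k * period) % hyperperiod
                 for k in range(hyperperiod // period)}
        if slots & occupied:
            continue                              # prune: slot collision
        assigned[name] = offset
        result = schedule(links, hyperperiod, assigned)
        if result is not None:
            return result
        del assigned[name]                        # backtrack
    return None                                   # invalid leaf: infeasible

links = [("VL1", 4), ("VL2", 4), ("VL3", 8)]      # periods divide hyperperiod 8
print(schedule(links, 8))
```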

Order-semi-primal lattices
(1994)

A nonequilibrium situation governed by kinetic equations with strongly contrasted Knudsen numbers in different subdomains is discussed. We consider a domain decomposition problem for the Boltzmann and Euler equations, establish the correct coupling conditions and prove the validity of the obtained coupled solution. Moreover, numerical examples comparing different types of coupling conditions are presented.

We are concerned with a parameter choice strategy for the Tikhonov regularization \((\tilde{A}+\alpha I)\tilde{x} = T^*\tilde{y} + w\), where \(\tilde{A}\) is a (not necessarily selfadjoint) approximation of \(T^*T\) and \(T^*\tilde{y} + w\) is a perturbed form of the (not exactly computed) term \(T^*y\). We give conditions for convergence and optimal convergence rates.
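As a rough numerical illustration of the setting (a hypothetical toy setup, not the paper's analysis): with an ill-conditioned operator \(T\), a perturbed approximation \(\tilde{A} \approx T^*T\) and an inexactly computed right-hand side, the regularized system remains stably solvable.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# ill-conditioned forward operator T (a Hilbert matrix) and exact data y = T x_true
T = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
y = T @ x_true

# A_tilde: perturbed approximation of T*T; rhs: inexactly computed T*y (+ w)
A_tilde = T.T @ T + 1e-8 * rng.standard_normal((n, n))
rhs = T.T @ y + 1e-8 * rng.standard_normal(n)

def regularized_solve(A_tilde, rhs, alpha):
    """Solve the regularized equation (A_tilde + alpha I) x = rhs."""
    return np.linalg.solve(A_tilde + alpha * np.eye(len(rhs)), rhs)

x_reg = regularized_solve(A_tilde, rhs, alpha=1e-6)
print(np.linalg.norm(x_reg - x_true))
```

The choice \(\alpha = 10^{-6}\) here is arbitrary; the point of the paper is precisely how to choose \(\alpha\) given the perturbation levels in \(\tilde{A}\) and \(w\).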

We study high dimensional integration in the quantum model of computation. We develop quantum algorithms for integration of functions from Sobolev classes \(W^r_p [0,1]^d\) and analyze their convergence rates. We also prove lower bounds which show that the proposed algorithms are, in many cases, optimal within the setting of quantum computing. This extends recent results of Novak on integration of functions from Hölder classes.

In this paper, the complexity of full solution of Fredholm integral equations of the second kind with data from the Sobolev class \(W^r_2\) is studied. The exact order of information complexity is derived. The lower bound is proved using a Gelfand number technique. The upper bound is shown by providing a concrete algorithm of optimal order, based on a specific hyperbolic cross approximation of the kernel function. Numerical experiments are included, comparing the optimal algorithm with the standard Galerkin method.

A polynomial function \(f : L \to L\) of a lattice \(\mathcal{L} = (L; \land, \lor)\) is generated by the identity function \(id(x)=x\) and the constant functions \(c_a(x) = a\) (for every \(x \in L\)), \(a \in L\), by applying the operations \(\land, \lor\) finitely often. Every polynomial function in one or several variables is a monotone function of \(\mathcal{L}\).
If every monotone function of \(\mathcal{L}\) is a polynomial function, then \(\mathcal{L}\) is called order-polynomially complete. In this paper we give a new characterization of finite order-polynomially complete lattices. We consider doubly irreducible monotone functions and point out their relation to tolerances, especially to central relations. We introduce chain-compatible lattices and show that they have a non-trivial congruence if they contain a finite interval and an infinite chain. The consequences are two new results. A modular lattice \(\mathcal{L}\) with a finite interval is order-polynomially complete if and only if \(\mathcal{L}\) is a finite projective geometry. If \(\mathcal{L}\) is a simple modular lattice of infinite length, then every nontrivial interval is of infinite length and has the same cardinality as any other nontrivial interval of \(\mathcal{L}\). In the last sections we show the descriptive power of polynomial functions of lattices and present several applications in geometry.

On derived varieties
(1996)

Derived varieties play an essential role in the theory of hyperidentities. In [11] we have shown that derivation diagrams are a useful tool in the analysis of derived algebras and varieties. In this paper this tool is developed further in order to use it for algebraic constructions of derived algebras. In particular, the operators \(S\) of subalgebras, \(H\) of homomorphic images and \(P\) of direct products are studied. Derived groupoids from the groupoid \(Nor(x,y) = x'\wedge y'\) and from abelian groups are considered. The latter class serves as an example for fluid algebras and varieties. A fluid variety \(V\) has no derived variety as a subvariety and is introduced as a counterpart to solid varieties. Finally, we use a property of the commutator of derived algebras to show that solvability and nilpotency are preserved under derivation.

It is shown that Tikhonov regularization for the ill-posed operator equation
\(Kx = y\) using a possibly unbounded regularizing operator \(L\) yields an order-optimal algorithm with respect to a certain stability set when the regularization parameter is chosen according to Morozov's discrepancy principle. A more realistic error estimate is derived when the operators \(K\) and \(L\) are related to a Hilbert scale in a suitable manner. The result includes known error estimates for ordinary Tikhonov regularization and also the estimates available under the Hilbert scale approach.
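A minimal numerical sketch of the discrepancy principle for ordinary Tikhonov regularization (with \(L = I\); the operator, noise level, and bisection bracket below are illustrative assumptions): choose \(\alpha\) so that the residual \(\Vert Kx_\alpha - y\Vert\) matches a multiple \(\tau\delta\) of the noise level \(\delta\).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
K = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # ill-conditioned
x_true = np.sin(np.linspace(0.0, np.pi, n))
noise = rng.standard_normal(n)
delta = 1e-4
noise *= delta / np.linalg.norm(noise)        # noise with norm exactly delta
y = K @ x_true + noise

def x_alpha(alpha):
    """Ordinary Tikhonov solution of min ||Kx - y||^2 + alpha ||x||^2."""
    return np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ y)

def residual(alpha):
    return np.linalg.norm(K @ x_alpha(alpha) - y)

# Morozov: residual(alpha) is increasing in alpha, so bisect log10(alpha)
# until residual(alpha) ≈ tau * delta, with tau > 1.
tau = 1.5
lo, hi = -14.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(10.0 ** mid) < tau * delta:
        lo = mid
    else:
        hi = mid
alpha_star = 10.0 ** (0.5 * (lo + hi))
print(alpha_star, residual(alpha_star))
```

The monotonicity of the residual in \(\alpha\) is what makes this one-dimensional search well posed; the abstract's contribution concerns the error bounds obtained for this parameter choice, not the search itself.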

The article provides an asymptotic probabilistic analysis of the variance of the number of pivot steps required by phase II of the "shadow vertex algorithm", a parametric variant of the simplex algorithm proposed by Borgwardt [1]. The analysis is done for data which satisfy a rotationally invariant distribution law in the \(n\)-dimensional unit ball.

Let \(a_i\), \(i = 1,\dots,m\), be an i.i.d. sequence taking values in \(\mathbb{R}^n\), whose convex hull is interpreted as a stochastic polyhedron \(P\). For a special class of random variables which decompose additively relative to their boundary simplices, e.g. the volume of \(P\), integral representations of their first two moments are given which lead to asymptotic estimations of variances for special "additive variables" known from stochastic approximation theory in the case of rotationally symmetric distributions.

Let \(a_1,\dots,a_m\) be independent and identically distributed random points with a spherically symmetric distribution in \(\mathbb{R}^n\). Moreover, let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_m\) and let \(L_k\) be an arbitrary \(k\)-dimensional subspace of \(\mathbb{R}^n\) with \(2\le k\le n-1\). Let \(X_k\) be the orthogonal projection image of \(X\) in \(L_k\). We call those vertices of \(X\) whose projection images in \(L_k\) are vertices of \(X_k\) as well shadow vertices of \(X\) with respect to the subspace \(L_k\). We derive a distribution-independent sharp upper bound for the expected number of shadow vertices of \(X\) in \(L_k\).

Let \((a_i)_{i\in \mathbb{N}}\) be a sequence of identically and independently distributed random vectors drawn from the \(d\)-dimensional unit ball \(B^d\) and let \(X_n := \mathrm{convhull}(a_1,\dots,a_n)\) be the random polytope generated by \(a_1,\dots,a_n\). Furthermore, let \(\Delta(X_n) := \mathrm{Vol}(B^d \setminus X_n)\) be the deviation of the polytope's volume from the volume of the ball. For uniformly distributed \(a_i\) and \(d\ge2\), we prove that the limiting distribution of \(\frac{\Delta(X_n)}{E(\Delta(X_n))}\) for \(n\to\infty\) satisfies a 0-1-law. In particular, we provide precise information about the asymptotic behaviour of the variance of \(\Delta(X_n)\). We deliver analogous results for spherically symmetric distributions in \(B^d\) with regularly varying tail.

Let \(a_1,\dots,a_m\) be i.i.d. vectors uniform on the unit sphere in \(\mathbb{R}^n\), \(m\ge n\ge3\), and let \(X := \{x \in \mathbb{R}^n \mid a^T_i x\leq 1,\ i = 1,\dots,m\}\) be the random polyhedron generated by \(a_1,\dots,a_m\). Furthermore, for linearly independent vectors \(u\), \(\bar u\) in \(\mathbb{R}^n\), let \(S_{u, \bar u}(X)\) be the number of shadow vertices of \(X\) in \(\mathrm{span}(u, \bar u)\). The paper provides an asymptotic expansion of the expectation \(E(S_{u, \bar u})\) for fixed \(n\) and \(m\to\infty\). The first terms of the expansion are given explicitly. Our investigation of \(E(S_{u, \bar u})\) is closely connected to Borgwardt's probabilistic analysis of the shadow vertex algorithm, a parametric variant of the simplex algorithm. We obtain an improved asymptotic upper bound for the number of pivot steps required by the shadow vertex algorithm for data uniformly distributed on the sphere.

Let \(A := \{a_i\mid i= 1,\dots,m\}\) be an i.i.d. random sample in \(\mathbb{R}^n\), which we consider as a random polyhedron, either as the convex hull of the \(a_i\) or as the intersection of the halfspaces \(\{x \mid a^T_i x\leq 1\}\). We introduce a class of polyhedral functionals we call "additive-type functionals", which covers a number of polyhedral functionals discussed in different mathematical fields; the emphasis in our contribution is on those which arise in linear optimization theory. The class of additive-type functionals is a suitable setting in which to unify and simplify the asymptotic probabilistic analysis of first and second moments of polyhedral functionals. We provide examples of asymptotic results on expectations and on variances.

Let \(a_1,\dots,a_n\) be independent random points in \(\mathbb{R}^d\), spherically symmetrically but not necessarily identically distributed. Let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_n\), and for any \(k\)-dimensional subspace \(L\subseteq \mathbb{R}^d\) let \(Vol_L(X) := \lambda_k(L\cap X)\) be the volume of \(X\cap L\) with respect to the \(k\)-dimensional Lebesgue measure \(\lambda_k\), \(k=1,\dots,d\). Furthermore, let \(F^{(i)}(t) := \mathbf{Pr}(\Vert a_i \Vert_2\leq t)\), \(t \in \mathbb{R}^+_0\), be the radial distribution function of \(a_i\). We prove that the expectation functional \(\Phi_L(F^{(1)}, F^{(2)},\dots, F^{(n)}) := E(Vol_L(X))\) is strictly decreasing in each argument, i.e. if \(F^{(i)}(t) \le G^{(i)}(t)\), \(t \in \mathbb{R}^+_0\), but \(F^{(i)} \not\equiv G^{(i)}\), we show \(\Phi(\dots, F^{(i)}, \dots) > \Phi(\dots,G^{(i)},\dots)\). The proof is done in the more general framework of continuous and \(f\)-additive polytope functionals.

Let \(a_i\), \(i=1,\dots,m\), be an i.i.d. sequence taking values in \(\mathbb{R}^n\), whose convex hull is interpreted as a stochastic polyhedron \(P\). For a special class of random variables, which decompose additively relative to their boundary simplices, e.g. the volume of \(P\), simple integral representations of their first two moments are given in the case of rotationally symmetric distributions, in order to facilitate estimations of variances or to quantify large deviations from the mean.

Max ordering (MO) optimization is introduced as a tool for modelling production planning with unknown lot sizes and in scenario modelling. In MO optimization, a feasible solution set \(X\) and, for each \(x\in X\), \(Q\) individual objective functions \(f_1(x),\dots,f_Q(x)\) are given. The max ordering objective \(g(x):=\max\{f_1(x),\dots,f_Q(x)\}\) is then minimized over all \(x\in X\). The paper discusses complexity results and describes exact and approximate algorithms for the case where \(X\) is the solution set of combinatorial optimization problems and network flow problems, respectively.
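On a small finite feasible set the max ordering objective can be evaluated by brute force, which makes the definition concrete (the scenario costs below are purely illustrative, not from the paper):

```python
# Max ordering: minimize g(x) = max_q f_q(x) over a finite feasible set X.
# Toy instance: choose one of four production plans under three lot-size
# scenarios; f[q][x] is the cost of plan x in scenario q.
f = [
    [4, 7, 5, 9],   # scenario 1
    [8, 3, 6, 2],   # scenario 2
    [5, 6, 4, 8],   # scenario 3
]
X = range(4)

def g(x):
    """Worst-case (max ordering) cost of plan x over all scenarios."""
    return max(fq[x] for fq in f)

best = min(X, key=g)
print(best, g(best))    # plan 2 with worst-case cost 6
```

Note that plan 1 is better than plan 2 in scenario 2, yet plan 2 wins under the max ordering criterion because its worst case is smaller; this is exactly the robustness the objective encodes.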

In this paper the existence of translation transversal designs, which is equivalent to the existence of certain particular partitions in finite groups, is studied. All considerations are based on the fact that the particular component of such a partition (the component representing the point classes of the corresponding design) is a normal subgroup of the translation group. With regard to groups admitting an (s,k,\(\lambda\))-partition, on the one hand the already known families of such groups are determined without using R. Baer's, O. H. Kegel's and M. Suzuki's classification of finite groups with partition, and on the other hand some new results on the special structure of p-groups are proved. Furthermore, the existence of a series of nonabelian p-groups of odd order which can be represented as translation groups of certain (s,k,1)-translation transversal designs is shown; moreover, the translation groups are normal subgroups of collineation groups acting regularly on the set of flags of the same designs.

We show that the different module structures of GF(\(q^m\)) arising from the intermediate fields of GF(\(q^m\)) and GF(q) can be studied simultaneously with the help of some basic properties of cyclotomic polynomials. We use these ideas to give a detailed and constructive proof of the most difficult part of a theorem of D. Blessenohl and K. Johnsen (1986), i.e., the existence of elements v in GF(\(q^m\)) over GF(q) which generate normal bases over any intermediate field of GF(\(q^m\)) and GF(q), provided that m is a prime power. Such elements are called completely free in GF(\(q^m\)) over GF(q). We develop a recursive formula for the number of completely free elements in GF(\(q^m\)) over GF(q) in the case where m is a prime power. Some of the results can be generalized to finite cyclic Galois extensions over arbitrary fields.

In this paper we continue the study of p-groups G of square order \(p^{2n}\) and investigate the existence of partial congruence partitions (sets of mutually disjoint subgroups of order \(p^n\)) in G. Partial congruence partitions are used to construct translation nets and partial difference sets, two objects studied extensively in finite geometries and combinatorics. We prove that the maximal number of mutually disjoint subgroups of order \(p^n\) in a group G of order \(p^{2n}\) cannot be more than \((p^{n-1}-1)(p-1)^{-1}\), provided that \(n\ge4\) and that G is not elementary abelian. This improves a result in [6], and as we do not distinguish the cases p=2 and p odd in the present paper, we also have a generalization of D. Frohardt's theorem on 2-groups in [4]. Furthermore, we study groups of order \(p^6\). We show that for each odd prime number, there exist exactly four nonisomorphic groups which contain at least p+2 mutually disjoint subgroups of order \(p^3\). Again, as we do not distinguish between the even and the odd case in advance, we in particular obtain D. Gluck's and A. P. Sprague's classification of groups of order 64 which contain at least 4 mutually disjoint subgroups of order 8, given in [5] and [13] respectively.

This thesis presents research studies on the fundamental interplay of diatomic molecules with transition metal compounds under cryogenic conditions. The utilized setup offers a multitude of opportunities to study isolated ions: the ions can be generated either by an ElectroSpray Ionization (ESI) source or a Laser VAPorization (LVAP) cluster ion source. The setup facilitates kinetic investigations of the ions with different reaction gases under well-defined isothermal conditions. Moreover, it enables cryo InfraRed (Multiple) Photon Dissociation (IR-(M)PD) spectroscopy in combination with tunable OPO/OPA laser systems. In conjunction with density functional theory (DFT) modelling, the IR-(M)PD spectra allow for an assignment of geometric minimum structures. Furthermore, DFT modelling helps to identify possible reaction pathways. Altogether, the presented methods provide fundamental insights into the molecular structures and reactivity of the investigated systems.
The first part of this thesis focuses on the interplay of N2 with different transition metal clusters (Con+, Nin+, and Fen+) by cryo IR spectroscopy and cryo kinetics. In conjunction with DFT modelling, the N2 coordination was elucidated (Con+), structures were assigned (Nin+), the concept of structure-related surface adsorption behavior was introduced (Nin+), and a first explanation for the inertness of Fe17+ was given (Fen+). Furthermore, this thesis provides a case study on the coadsorption of H2 and N2 on Ru8+ that elucidates the H migration on the Ru cluster. The last part of the thesis addresses the IR spectra of in vacuo generated [Hemin]+ complexes with N2, O2, and CO. Structures and spin states were assigned with the help of DFT modelling.

We present a generalization of Proth's theorem for testing certain large integers for primality. The use of Gauß sums leads to a much simpler approach to these primality criteria as compared to the earlier tests. The running time of the algorithms is bounded by a polynomial in the length of the input string. The applicability of our algorithms is linked to certain diophantine approximations of \(l\)-adic roots of unity.
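For orientation, the classical criterion being generalized can be sketched directly. Proth's theorem states that \(N = k\cdot 2^n + 1\) with odd \(k < 2^n\) is prime if and only if \(a^{(N-1)/2} \equiv -1 \pmod N\) for some base \(a\); the small-base search below is an illustrative sketch of this classical test, not the paper's algorithm.

```python
def proth_test(k, n, tries=20):
    """Proth's theorem: for N = k*2^n + 1 with odd k < 2^n, N is prime
    iff some base a satisfies a^((N-1)/2) ≡ -1 (mod N).  For prime N roughly
    half of all bases work, so a short search over small bases suffices."""
    assert k % 2 == 1 and k < 2 ** n
    N = k * 2 ** n + 1
    for a in range(2, 2 + tries):
        r = pow(a, (N - 1) // 2, N)
        if r == N - 1:
            return True          # certificate of primality
        if r != 1:
            return False         # a^((N-1)/2) is not ±1, so N is composite
    return False                 # inconclusive after `tries` bases

print(proth_test(3, 2))   # N = 13, prime
print(proth_test(5, 3))   # N = 41, prime
print(proth_test(7, 4))   # N = 113, prime
print(proth_test(9, 4))   # N = 145 = 5 * 29, composite
```

As in the abstract, the cost per candidate base is one modular exponentiation, i.e. polynomial in the length of the input.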

We survey old and new results about optimal algorithms for summation of finite sequences and for integration of functions from Hölder or Sobolev spaces. First we discuss optimal deterministic and randomized algorithms. Then we add a new aspect, which has not been covered before at conferences about (quasi-) Monte Carlo methods: quantum computation. We give a short introduction to this setting and present recent results of the authors on optimal quantum algorithms for summation and integration. We discuss comparisons between the three settings. The most interesting case for Monte Carlo and quantum integration is that of moderate smoothness \(k\) and large dimension \(d\), which, in fact, occurs in a number of important applied problems. In that case the deterministic exponent is negligible, so the \(n^{-1/2}\) Monte Carlo and the \(n^{-1}\) quantum speedup essentially constitute the entire convergence rate.

Chlorogenic acids (CGA) are phenolic compounds that form during the esterification of certain trans-cinnamic acids with (-)-quinic acid. According to several human intervention studies, they may have potential health benefits. Coffee is the main source of CGA in human nutrition, and is consumed either alone or in combination with a variety of foods. For this reason, the presented study aimed to clarify whether the simultaneous consumption of food, for example, a breakfast rich in carbohydrates, with instant coffee affects the absorption and bioavailability of CGA. The research specifically focused on how various food matrices, which are consumed at the same time as a coffee beverage, will influence kinetic parameters such as area under the curve (AUC), maximum plasma concentration (cmax), and time needed to reach maximum plasma concentration (tmax).
In a randomized crossover study, fourteen healthy participants consumed either pure instant coffee or coffee with a carbohydrate- or fat-rich meal. All of the subjects consumed the same quantity of CGA (3.1 mg CGA/kg body weight). Blood samples, collected at various time points up to 15 h after instant coffee consumption, were quantitatively analysed. Additionally, three urine collection intervals were chosen over a time period of 24h. High performance liquid chromatography electrospray ionization tandem mass spectrometry (HPLC-ESI-MS/MS) was used to determine the CGA present, along with the concentrations of respective metabolites.
During a blind data review meeting, 20 of the 56 analysed plasma metabolites were chosen for further statistical analysis. A total of 36 metabolites were monitored in the urine samples. As in the plasma samples, between-treatment differences in various CGA-derived metabolites, measured through AUC, cmax, and tmax, were estimated. Each treatment was also analysed in terms of the correlation between the plasma AUC and urinary excretion of seven metabolites.
It is already known that inter-individual variation in CGA absorption depends on gut microbial degradation and affects the efficacy of these compounds. Microorganisms present in the gastrointestinal tract metabolise CGA to form dihydroferulic acid (DHFA) and dihydrocaffeic acid (DHCA) derivatives, which precede the subsequent formation of a wide range of metabolites. Therefore, stool samples were collected from the participants within 12 h before the second study day. Subsequently, an ex vivo incubation of the faecal samples with 5-O-caffeoylquinic acid (5-CQA), the main chlorogenic acid found in coffee, was performed. An HPLC system connected to a CoulArray® detector was used to measure the concentrations of 5-CQA and its metabolites. Reduced concentrations of 5-CQA, as well as the appearance of DHCA and caffeic acid (CA) in the gut microbiota medium, were monitored to calculate the inter-individual kinetics for each compound. In addition, these samples were analysed for microbiota content by an external laboratory (L&S, Bad Bocklet, Germany). These results were used to distinguish whether the decreased or increased content of a specific microorganism was related to an individual’s decreased or increased metabolic efficiency. Finally, we used the aforementioned results to evaluate whether any correlation could be drawn between the plasma appearance, urinary excretion and ability of microorganisms to degrade 5-CQA.
Strong inter-individual variation was observed for AUC, cmax and tmax. The AUC measured the quantity of CGA in plasma samples. We noted that pure instant coffee consumption resulted in slightly higher CGA bioavailability than instant coffee with the additional consumption of a meal. However, these differences were not statistically significant. Additionally, the metabolites were divided into groups according to similarity and chemical properties. They were further classified into three groups according to their physical structure and predicted area of appearance: directly from coffee (quinics), after first degradation and metabolism (phenolics, all trans-cinnamic acids and their sulfates and glucuronides), as well as after colonic degradation and metabolism (colonics, all dihydro compounds). These metabolic classes showed significant differences in the AUC values of certain classes, yet no significant between-treatment differences. Our results corroborated earlier studies in that the three caffeoylquinic acid (CQA) isomers were absorbed to a lower extent, whereas all feruloylquinic acids (FQA) were detected in comparably high amounts in the plasma samples of the volunteers. However, the amount of these quinic acid conjugates in the plasma samples accounted for only 0.5% of the total amount of identified compounds. In contrast, at least 8.7% of the investigated compounds were identified as phenolics. Dihydro compounds, the so-called colonics, were identified as the most common metabolites (90.8%). Additionally, dihydroferulic acid (DHFA), meta-dihydrocoumaric acid (mDHCoA), dihydrocaffeic acid-3-sulfate (DHCA3S) and dihydroisoferulic acid (DHiFA) were identified to account for 78% of the studied metabolites, and thus represent the most abundant compounds circulating in the plasma after coffee consumption.
Irrespective of treatment, the tmax value for early metabolites (quinic and phenolic compounds) was observed between 0 and 2 h after the ingestion of coffee, and the tmax value for late metabolites (colonic metabolites) was observed between 7 and 10 h. The amount of colonic metabolites had not returned to the baseline level 15 h after the ingestion of coffee. The co-ingestion of breakfast and coffee, when compared to the ingestion of coffee alone, significantly increased the cmax values for all quinic and phenolic compounds, as well as two colonic metabolites (DHCA and DHiFA). These differences also revealed that the three treatments differed in terms of the kinetics of release. Thus, future studies should use an extended plasma collection time with shorter intervals (e.g. 2 h) to provide a full pharmacokinetic profile.
There were no statistically significant between-treatment differences in the urine samples collected 24 h after coffee ingestion. However, urine samples collected within six hours of the consumption of coffee alone or in combination with a fat-rich meal showed significantly higher CGA quantities than samples collected at the same time point for coffee ingested with a carbohydrate-rich meal. Strong inter-individual variability and the fact that only 14 healthy subjects participated in the study hindered the identification of any clear trend between the plasma concentrations of metabolites and their excretion in urine.
Four hours after the ex vivo incubation of 5-CQA with individual faecal samples, the sum of 5-CQA, CA, and DHCA varied strongly between participants. These findings could result from binding effects of the phenolic compounds with faecal constituents, further degradation or metabolism, and/or the release of bound phenolic substances before the experiment started. We hypothesized that for participants with high plasma AUCs of dihydro compounds, the incubation samples would also show high concentrations of CA and DHCA in the incubation medium after four hours. No significant correlation could be found.
This study and all of the outcomes were exploratory. Due to the limited number of participants, we could only investigate tendencies for how the co-ingestion of food affects the bioavailability of CGAs and their respective metabolites following coffee consumption. Therefore, the achieved results are only indicative. Despite this limitation, the data highlight that even though all three treatments had strong similarities in the total bioavailability of CGAs and metabolites from instant coffee, there were between-treatment differences in the kinetics of release. The co-ingestion of breakfast and coffee favoured a slow and continuous release of colonic metabolites while non-metabolized coffee components were observed in plasma within the first hour when coffee was ingested alone.
In conclusion, both a shift in gastrointestinal transit time and the plasma metabolite composition were observed when the ingestion of coffee alone or in combination with breakfast were compared. These results showed that breakfast consumption induces the retarded release of chlorogenic acid metabolites in humans. The data from our human intervention study suggest that the bioavailability of chlorogenic acids from coffee and their derivatives does not only depend on chemical structure, molecular size and active or passive transport ability, but is also influenced by inter-individual differences. Therefore, we strongly recommend that future studies include metabolism experiments that focus on microbiota genotypes and/or the genotyping of individual subjects. This type of research could be pivotal to elucidating whether, and how, genotype affects the metabolic profile after chlorogenic acid intake.

Free Form Volumes
(1994)

Software development organizations measure their real-world processes, products, and resources to achieve the goal of improving their practices. Accurate and useful measurement relies on explicit models of the real-world processes, products, and resources. These explicit models assist with planning measurement, interpreting data, and assisting developers with their work. However, little work has been done on the joint use of measurement and process technologies. We hypothesize that it is possible to integrate measurement and process technologies in a way that supports automation of measurement-based feedback. Automated support for measurement-based feedback means that software developers and maintainers are provided with on-line, detailed information about their work. This type of automated support is expected to help software professionals gain intellectual control over their software projects. The dissertation offers three major contributions. First, an integrated measurement and process modeling framework was constructed. This framework establishes the necessary foundation for integrating measurement and process technologies in a way that will permit automation. Second, a process-centered software engineering environment was developed to support measurement-based feedback. This system provides personnel with information about the tasks expected of them, based on an integrated set of measurement and process views. Third, a set of assumptions and requirements about that system were examined in a controlled experiment. The experiment compared the use of different levels of automation to evaluate the acceptance and effectiveness of measurement-based feedback.

Hyperidentities
(1992)

The concept of a free algebra plays an essential role in universal algebra and in computer science. Manipulation of terms, calculations and the derivation of identities are performed in free algebras. Word problems, normal forms, system of reductions, unification and finite bases of identities are topics in algebra and logic as well as in computer science. A very fruitful point of view is to consider structural properties of free algebras. A.I. Malcev initiated a thorough research of the congruences of free algebras. Henceforth congruence permutable, congruence distributive and congruence modular varieties are
intensively studied. A lot of Malcev-type theorems are connected to the congruence lattice of free algebras. Here we consider free algebras as semigroups of compositions of terms and, more specifically, as clones of terms. The properties of these semigroups and clones are adequately described by hyperidentities. Naturally, a lot of theorems of "semigroup" or "clone" type can be derived. This topic of research is still in its beginning, and therefore a lot of concepts and results cannot be presented in a final and polished form. Furthermore, a lot of problems and questions are open which are of importance for the further development of the theory of hyperidentities.

Wireless LANs operating within unlicensed frequency bands require random access schemes such as CSMA/CA, so that wireless networks from different administrative domains (for example, wireless community networks) may co-exist without central coordination, even when they happen to operate on the same radio channel. Yet, it is evident that this lack of coordination leads to an inevitable loss in efficiency due to contention on the MAC layer. The interesting question is which efficiency may be gained by adding coordination to existing, unrelated wireless networks, for example by self-organization. In this paper, we present a methodology based on a mathematical programming formulation to determine the parameters (assignment of stations to access points, signal strengths, and channel assignment of both access points and stations) for a scenario of co-existing CSMA/CA-based wireless networks, such that the contention between these networks is minimized. We demonstrate how it is possible to solve this discrete, non-linear optimization problem exactly for small problems. For larger scenarios, we present a genetic algorithm specifically tuned for finding near-optimal solutions, and compare its results to theoretical lower bounds. Overall, we provide a benchmark on the minimum contention problem for coordination mechanisms in CSMA/CA-based wireless networks.
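The genetic-algorithm approach can be sketched on a drastically simplified stand-in for the minimum contention problem: channel assignment only, with a toy interference graph and a pair-counting objective. All parameters and the instance below are illustrative assumptions, not the paper's tuned algorithm or model.

```python
import random

random.seed(0)

# Toy instance: assign one of 3 channels to each of 8 access points;
# APs within distance 2 of each other interfere, and contention counts
# interfering pairs that share a channel.
N_AP, CHANNELS = 8, 3
pairs = [(i, j) for i in range(N_AP) for j in range(i + 1, N_AP) if j - i <= 2]

def contention(assign):
    return sum(1 for i, j in pairs if assign[i] == assign[j])

def evolve(pop_size=40, generations=60):
    pop = [[random.randrange(CHANNELS) for _ in range(N_AP)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=contention)
        survivors = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_AP)        # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:              # point mutation
                child[random.randrange(N_AP)] = random.randrange(CHANNELS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=contention)

best = evolve()
print(best, contention(best))
```

For this toy graph a contention of 0 is achievable (the periodic assignment 0, 1, 2, 0, 1, 2, ... never repeats a channel within distance 2), giving a known lower bound against which the GA result can be compared, in the spirit of the paper's benchmark.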

This report presents a generalization of tensor-product B-spline surfaces. The new scheme permits knots whose endpoints lie in the interior of the domain rectangle of a surface. This allows local refinement of the knot structure for approximation purposes as well as modeling surfaces with local tangent or curvature discontinuities. The surfaces are represented in terms of B-spline basis functions, ensuring affine invariance, local control, the convex hull property, and evaluation by de Boor's algorithm. A dimension formula for a class of generalized tensor-product spline spaces is developed.
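Evaluation by de Boor's algorithm, which the representation above guarantees, looks as follows in the univariate case (a standard textbook formulation, applied once per parameter direction for a tensor-product surface; scalar control points are used for brevity):

```python
def de_boor(t, p, knots, ctrl):
    """Evaluate a degree-p B-spline curve at parameter t by de Boor's
    algorithm: repeated convex combinations of the local control points."""
    # locate the knot span k with knots[k] <= t < knots[k+1]
    k = p
    while k < len(ctrl) - 1 and knots[k + 1] <= t:
        k += 1
    d = [ctrl[j + k - p] for j in range(p + 1)]   # local control points
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            i = j + k - p
            alpha = (t - knots[i]) / (knots[i + p + 1 - r] - knots[i])
            d[j] = (1 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

# clamped quadratic with Bézier knots: reproduces the Bézier curve (0, 1, 0)
print(de_boor(0.5, 2, [0, 0, 0, 1, 1, 1], [0.0, 1.0, 0.0]))  # → 0.5
```

Because every step is a convex combination of control points, the convex hull property mentioned above falls out of the algorithm directly.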

We present a methodology to augment system safety step-by-step and illustrate the approach by the definition of reusable solutions for the detection of fail-silent nodes - a watchdog and a heartbeat. These solutions can be added to real-time system designs, to protect against certain types of system failures. We use SDL as a system design language for the development of distributed systems, including real-time systems.

Interactive graphics has been limited to simple direct illumination that commonly results in an artificial appearance. A more realistic appearance by simulating global illumination effects has been too costly to compute at interactive rates. In this paper we describe a new Monte Carlo-based global illumination algorithm. It achieves performance of up to 10 frames per second while arbitrary changes to the scene may be applied interactively. The performance is obtained through the effective use of a fast, distributed ray-tracing engine as well as a new interleaved sampling technique for parallel Monte Carlo simulation. A new filtering step in combination with correlated sampling avoids the disturbing noise artifacts common to Monte Carlo methods.

Estelle is an internationally standardized formal description technique (FDT) designed for the specification of distributed systems, in particular communication protocols. An Estelle specification describes a system of communicating components (module instances). The specified system is closed in a topological sense, i.e. it has no ability to interact with an environment. Because of this restriction, open systems can only be specified together with, and incorporated into, an environment. To overcome this restriction, we introduce a compatible extension of Estelle, called "Open Estelle". It allows the specification of (topologically) open systems, i.e. systems that have the ability to communicate with any environment through a well-defined external interface. We define a formal syntax and a formal semantics for Open Estelle, both based on and extending the syntax and semantics of Estelle. The extension is compatible syntactically and semantically, i.e. Estelle is a subset of Open Estelle. In particular, the formal semantics of Open Estelle reduces to the Estelle semantics in the special case of a closed system. Furthermore, we present a tool for the textual integration of open systems into environments specified in Open Estelle, and a compiler for the automatic generation of implementations directly from Open Estelle specifications.

This paper describes some new algorithms for the accurate calculation of surface properties. In the first part an arithmetic on Bézier surfaces is introduced. Formulas are given which determine the Bézier points and weights of the resulting surface from the points and weights of the operand surfaces. An application of the arithmetic operations to surface interrogation methods is described in the second part. It turns out that the quality analysis can be reduced to a few numerically stable operations. Finally, the advantages and disadvantages of this method are discussed.
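
The flavour of such formulas can be illustrated with the curve case: the product of two polynomials in Bernstein form has Bernstein coefficients given by a binomially weighted convolution, using the standard identity B_i^m · B_j^n = [C(m,i)C(n,j)/C(m+n,i+j)] · B_{i+j}^{m+n}. This sketch is ours, not the paper's surface formulas:

```python
from math import comb

def bezier_product(a, b):
    """Bernstein coefficients of the product of two polynomials given
    in Bernstein form (degrees m = len(a)-1 and n = len(b)-1).
    The result has degree m + n."""
    m, n = len(a) - 1, len(b) - 1
    c = [0.0] * (m + n + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            c[k] += comb(m, i) * comb(n, j) / comb(m + n, k) * ai * bj
    return c
```

For surfaces, the analogous computation is carried out per parameter direction on the tensor-product control net.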

Partitioned chain grammars
(1979)

This paper introduces a new class of grammars, the partitioned chain grammars, for which efficient parsers can be automatically generated. Besides being efficiently parsable, these grammars possess a number of other properties which make them very attractive for use in parser-generators. For instance, they form a large grammar class and describe all deterministic context-free languages. The main advantage of the partitioned chain grammars, however, is that, given a language, it is usually easier to describe it by a partitioned chain grammar than to construct a grammar of some other type commonly used in parser-generators for it.

The intuitionistic calculus mj for sequents, in which no logical symbols other than those for implication and universal quantification occur, is introduced and analysed. It allows a simple backward application, called mj-reduction here, for searching for derivation trees. Terms needed in mj-reduction can be found with the unification algorithm. mj-Reduction with unification can be seen as a natural extension of SLD-resolution. mj-Derivability of the sequents considered here coincides with derivability in Johansson's minimal intuitionistic calculus LHM in [6]. Intuitionistic derivability of formulae with negation, and classical derivability of formulae with all the usual logical symbols, can be expressed in terms of mj-derivability and hence be verified by mj-reduction. mj-Derivations can easily be translated into LJ-derivations without "Schnitt" (cut), or into NJ-derivations in a slightly sharpened form of Prawitz' normal form. The first three sections emphasize the systematic use of mj-reduction for proving in predicate logic. Although the fourth section, the last and largest, is devoted exclusively to the mathematical analysis of the calculus mj, the first three sections may be of interest to a wider readership, including readers looking for applications of symbolic logic. Unfortunately, the mathematical analysis of the calculus mj, like the study of Gentzen's calculi, demands a large amount of technical work that obscures the natural unfolding of the argumentation. To alleviate this, definitions and theorems are completely embedded in the text to provide a fluent and balanced mathematical discourse: new concepts are indicated in bold-face, and proofs of assertions are outlined, or omitted when it is assumed that the reader can provide them.

A natural extension of SLD-resolution is introduced as a goal-directed proof procedure for the full first-order implicational fragment of intuitionistic logic. Its intuitionistic semantics fits a procedural interpretation of logic programming. By allowing arbitrarily nested implications, it can be used to implement modularity in logic programs. With adequate negation axioms, it gives an alternative to negation as failure and leads to a proof procedure for full first-order predicate logic.
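
The treatment of nested implications — to prove D → G, add D to the program and prove G — can be sketched in a propositional setting (an illustrative sketch of the general idea, not the paper's first-order procedure; representation and names are our own):

```python
def prove(goal, program):
    """Goal-directed proof search. A goal is an atom (str), a
    conjunction ('and', g1, g2), or an implication ('imp', clause, g).
    Program clauses are tuples (head, body_atom, ...); a fact is (head,).
    No loop detection -- illustrative only."""
    if isinstance(goal, str):
        # atom: backchain over clauses whose head matches
        return any(head == goal and all(prove(b, program) for b in body)
                   for head, *body in program)
    op = goal[0]
    if op == 'and':
        return prove(goal[1], program) and prove(goal[2], program)
    if op == 'imp':
        # hypothetical reasoning: assume the clause, prove the goal
        return prove(goal[2], program + [goal[1]])
    raise ValueError(op)
```

The 'imp' case is exactly where this goes beyond Horn-clause SLD-resolution: the program grows locally during the proof, which is what makes implications usable as a module mechanism.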

The use of non-volatile semiconductor memory within an extended storage hierarchy promises significant performance improvements for transaction processing. Although page-addressable semiconductor memories like extended memory, solid-state disks and disk caches have been commercially available for several years, no detailed investigation of their use for transaction processing has been performed so far. We present a comprehensive simulation study that compares the performance of these storage types and of different usage forms. The following usage forms are considered: allocation of entire log and database files in non-volatile semiconductor memory, use of a so-called write buffer to perform disk writes asynchronously, and caching of database pages at intermediate storage levels (in addition to main memory caching). Our simulations are conducted with both synthetically generated workloads and traces from real-life database applications. In particular, simulation results are presented for the debit-credit workload frequently used in transaction processing benchmarks. As expected, the greatest performance improvements (but at the highest cost) can be achieved by storing log and database files completely in non-volatile semiconductor memory. For update-intensive workloads, a limited amount of non-volatile memory used as a write buffer also proved to be very effective. For reducing the number of disk reads, caching of database pages in addition to main memory is best supported by an extended memory buffer. In this respect, disk caches are found to be less effective, as they are designed for one-level caching. Different storage costs suggest that it may be cost-effective to use two or even three of the intermediate storage types together. The performance improvements obtainable through non-volatile semiconductor memory are also found to reduce the need for sophisticated DBMS buffer management for achieving high transaction processing performance.
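
The write-buffer usage form can be sketched as follows (an illustrative model, not the simulated system: dicts stand in for the non-volatile buffer and the disk, and the background writer is invoked synchronously on overflow):

```python
class WriteBuffer:
    """Non-volatile write buffer: a page is durable as soon as it is
    buffered; the slow disk write happens asynchronously (modelled
    here as an explicit drain when the buffer fills)."""
    def __init__(self, capacity, disk):
        self.capacity = capacity
        self.disk = disk          # dict standing in for the disk
        self.buffer = {}          # page id -> contents (non-volatile)

    def write(self, page, data):
        self.buffer[page] = data  # durable immediately, no disk I/O yet
        if len(self.buffer) >= self.capacity:
            self.drain()

    def drain(self):
        # background writer: flush all buffered pages to disk at once
        self.disk.update(self.buffer)
        self.buffer.clear()

    def read(self, page):
        # buffer hit takes precedence over the disk copy
        return self.buffer.get(page, self.disk.get(page))
```

Because repeated updates to the same page are coalesced in the buffer before the drain, update-intensive workloads cause far fewer disk writes, which is why a small write buffer proves so effective in the study.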

The rapid development of any field of knowledge brings with it unavoidable fragmentation and proliferation of new disciplines. The development of computer science is no exception. Software engineering (SE) and human-computer interaction (HCI) are both relatively new disciplines of computer science. Furthermore, as both names suggest, they each have strong connections with other subjects. SE is concerned with methods and tools for general software development based on engineering principles. This discipline has its roots not only in computer science but also in a number of traditional engineering disciplines. HCI is concerned with methods and tools for the development of human-computer interfaces, assessing the usability of computer systems and with broader issues about how people interact with computers. It is based on theories about how humans process information and interact with computers, other objects and other people in the organizational and social contexts in
which computers are used. HCI draws on knowledge and skills from psychology, anthropology and sociology in addition to computer science. Both disciplines need ways of measuring how well their products and development processes fulfil their intended requirements. Traditionally SE has been concerned with 'how software is constructed' and HCI with 'how people use software'. Given the
different histories of the disciplines and their different objectives, it is not surprising that they take different approaches to measurement. Thus, each has its own distinct 'measurement culture'. In this paper we analyse the differences and the commonalities of the two cultures by examining the measurement approaches used by each. We then argue the need for a common measurement taxonomy and framework, which is derived from our analyses of the two disciplines. Next we demonstrate the usefulness of the taxonomy and framework via specific example studies drawn from our own work and that of others, and show that, in fact, the two disciplines have many important similarities as well as differences, and that there is some evidence to suggest that they are growing closer. Finally, we discuss the role of the taxonomy as a framework to support reuse, planning future studies, guiding practice and facilitating communication between the two disciplines.

Optimization of Projection Methods for Solving Ill-Posed Problems

In this paper we propose a modification of the projection scheme for solving ill-posed problems. We show that this modification makes it possible to obtain the best possible order of accuracy of Tikhonov regularization using an amount of information which is far less than for the standard projection technique.
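
Tikhonov regularization, whose order of accuracy serves as the benchmark here, has the following textbook discrete form (a minimal sketch, not the paper's projection scheme):

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Tikhonov-regularized solution of A x = y: minimizes
    ||A x - y||^2 + alpha ||x||^2, i.e. solves the normal equations
    (A^T A + alpha I) x = A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)
```

In a projection method, A and y would first be replaced by finite-dimensional discretizations; the point of the paper is how much discretization information is needed to preserve the optimal accuracy order.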

In this paper we show how Metropolis Light Transport can be extended both in the underlying theoretical framework and the algorithmic implementation to incorporate volumetric scattering.
We present a generalization of the path integral formulation that handles anisotropic scattering in non-homogeneous media. Based on this framework we introduce a new mutation strategy that is specifically designed for participating media. It exploits the locality of light propagation by perturbing certain interaction points within the medium. To efficiently sample inhomogeneous media, a new ray marching method has been developed that avoids aliasing artefacts and is significantly faster than stratified sampling. The resulting global illumination algorithm provides a physically correct simulation of light transport in the presence of participating media, including effects such as volume caustics and multiple volume scattering. It is not restricted to certain classes of geometry and scattering models and has minimal memory requirements. Furthermore, it is unbiased and robust, in the sense that it produces satisfactory results for a wide range of input scenes and lighting situations within acceptable time bounds. In particular, we found that it is well suited for complex scenes with many light sources.
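
Jittered ray marching of the optical depth is one standard way to trade the aliasing of fixed-step marching for noise; it can be sketched as follows (an illustrative sketch, not the paper's ray marching method; all names are our own):

```python
import math
import random

def transmittance(sigma_t, t_max, n_steps, rng=random.random):
    """Estimate the transmittance exp(-integral of sigma_t) along a ray
    through an inhomogeneous medium by ray marching. Jittering the
    sample position within each step decorrelates the error and avoids
    the banding/aliasing a fixed-step march produces."""
    dt = t_max / n_steps
    optical_depth = 0.0
    for i in range(n_steps):
        t = (i + rng()) * dt   # jittered sample inside the i-th segment
        optical_depth += sigma_t(t) * dt
    return math.exp(-optical_depth)
```

The same estimate also drives the sampling of scattering distances inside the medium, where structured error would otherwise show up as visible banding.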