With the burgeoning computing power available, multiscale modelling and simulation has become increasingly capable of capturing the details of physical processes on different scales. The mechanical behavior of solids is oftentimes the result of interactions between multiple spatial and temporal scales and is hence a typical phenomenon of interest exhibiting multiscale characteristics. At the most basic level, the properties of solids can be attributed to atomic interactions and crystal structure, which can be described on the nanoscale. Mechanical properties at the macroscale are modeled using continuum mechanics, in terms of stresses and strains. Continuum models offer an efficient way of studying material properties, but they are not accurate enough and lack the microstructural information behind the microscopic mechanics that cause the material to behave the way it does. Atomistic models are concerned with phenomena at the level of the lattice, thereby allowing investigation of detailed crystalline and defect structures; yet the length scales of interest are inevitably far beyond the reach of full atomistic computation, which is prohibitively expensive. This makes multiscale models necessary. A possible avenue to this end is to couple the different length scales, the continuum and the atomistic, in accordance with standard procedures. This is done by recourse to the Cauchy-Born rule, and in so doing we aim at a model that is efficient and reasonably accurate in mimicking physical behaviors observed in nature or in the laboratory. In this work, we focus on concurrent coupling based on energetic formulations that link the continuum to the atomistics. At the atomic scale, we describe the deformation of the solid by the displaced positions of the atoms that make up the solid; at the continuum level, the deformation of the solid is described by the displacement field that minimizes the total energy. In the coupled continuum-atomistic model, a continuum formulation is retained as the overall framework of the problem and the atomistic features are introduced by way of the constitutive description, with the Cauchy-Born rule establishing the point of contact. The entire formulation is made in the framework of nonlinear elasticity and all simulations are carried out in a quasistatic setting. The model gives a direct account of measurable features of the microstructures developed by crystals through sequential lamination.
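To make the constitutive link concrete, the following is a minimal sketch of the Cauchy-Born rule for a simple pair potential \(\phi\) and a representative unit cell of volume \(\Omega_0\) with undeformed neighbour (lattice) vectors \(\mathbf{r}_j\); the atomistic model used in this work may be more general:
\[ W(\mathbf{F}) \;=\; \frac{1}{\Omega_0}\,E_{\mathrm{atom}}\bigl(\{\mathbf{F}\mathbf{r}_j\}\bigr) \;=\; \frac{1}{2\Omega_0}\sum_j \phi\bigl(\lvert\mathbf{F}\mathbf{r}_j\rvert\bigr), \qquad \mathbf{P}(\mathbf{F}) \;=\; \frac{\partial W}{\partial \mathbf{F}}, \]
i.e. the continuum strain-energy density \(W\) at a material point with deformation gradient \(\mathbf{F}\) is obtained by deforming the representative cell affinely, and the first Piola-Kirchhoff stress \(\mathbf{P}\) follows by differentiation; the coupled model then minimizes the resulting total energy over admissible deformations.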
This dissertation provides insights into the influences of individual and contextual factors on Technical and Vocational Education and Training (TVET) teachers' learning and professional development in Ethiopia. Specifically, this research focused on identifying and determining the influences of teachers' self-perception as learners and professionals, and on investigating the impact of the context, process, and content of their learning and experiences on their professional development. Knowledge of these factors and their impacts helps in improving the learning and professional development of TVET teachers and their professionalization. This research sought to answer the following five research questions. (1) How do TVET teachers perceive themselves as active learners and as professionals, and what are the implications of their perceptions for their learning and development? (2) How do TVET teachers engage themselves in learning and professional development activities? (3) What contextual factors facilitated or hindered the TVET teachers' learning and professional development? (4) Which competencies are found to be critical for the TVET teachers' learning and professional development? (5) What actions need to be considered to enhance and sustain TVET teachers' learning and professional development in their context? It is believed that the research results are significant not only for the TVET teachers, but also for school leaders, TVET teacher training institutions, education experts and policy makers, researchers, and other stakeholders in the TVET sector. The theoretical perspectives adopted in this research are based on a systemic-constructivist approach to professional development. An integrated approach to professional development requires that teachers' learning and development activities be treated as adult education based on the principles of constructivism. Professional development is considered a context-specific and long-term process in which teachers are trusted, respected, and empowered as professionals. Teachers' development activities are viewed largely as collaborative activities, reflecting the social nature of learning. Schools that facilitate the learning and development of teachers exhibit characteristics of a learning-organisation culture in which professional collaboration, collegiality, and shared leadership are practiced. This research also draws on relevant points of view from studies and reports on vocational education and TVET teacher education programs and practices at international, continental, and national levels. The research objectives and the types of research questions in this study implied the use of a qualitative, inductive research approach as the research strategy. Primary data were collected from TVET teachers in four schools using one-on-one qualitative in-depth interviews. These data were analyzed using qualitative content analysis based on the inductive category development procedure. ATLAS.ti software was used to support the coding and categorization process. The research findings showed that most of the TVET teachers perceive themselves neither as professionals nor as active learners. These perceptions are found to be one of the major barriers to their learning and development. Professional collaboration in the schools is minimal, and teaching is seen as an isolated individual activity, a secluded task for the teacher.
Self-directed learning initiatives and individual learning projects are not strongly evident. The predominantly teacher-centered approach used in TVET teacher education and professional development programs places emphasis mainly on the development of technical competences and has limited the development of a range of competences essential to teachers' professional development. Moreover, factors such as the TVET school culture, society's perception of the teaching profession, economic conditions, and weak links with industries and business sectors are among the major contextual factors that hindered the TVET teachers' learning and professional development. A number of recommendations are put forward to improve the professional development of the TVET teachers. These include change in the TVET school culture, a paradigm shift in the TVET teacher education approach and practice, and the development of educational policies that support the professionalization of TVET teachers. Areas for further theoretical research and empirical enquiry are also suggested to support the learning and professional development of TVET teachers in Ethiopia.
Manipulating deformable linear objects - Vision-based recognition of contact state transitions -
(1999)
A new and systematic approach to machine vision-based robot manipulation of deformable (non-rigid) linear objects is introduced. This approach reduces the computational needs by using a simple state-oriented model of the objects. These states describe the relation of the object with respect to an obstacle and are derived from the object image and its features. Therefore, the object is segmented from a standard video frame using a fast segmentation algorithm. Several object features are presented which allow the state recognition of the object while being manipulated by the robot.
A new and systematic basic approach to force- and vision-based robot manipulation of deformable (non-rigid) linear objects is introduced. This approach reduces the computational needs by using a simple state-oriented model of the objects. These states describe the relation between the deformable object and rigid obstacles, and are derived from the object image and its features. We give an enumeration of possible contact states and discuss the main characteristics of each state. We investigate the performance of robust transitions between the contact states and derive criteria and conditions for each of the states and for two sensor systems, i.e. a vision sensor and a force/torque sensor. This results in a new and task-independent approach to the handling of deformable objects and in a sensor-based implementation of manipulation primitives for industrial robots. Thus, the use of sensor processing is an appropriate solution for our problem. Finally, we apply the concept of contact states and state transitions to the description of a typical assembly task. Experimental results show the feasibility of our approach: a robot performs several contact state transitions which can be combined for solving a more complex task.
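As a rough illustration of the state-oriented idea (the state labels and thresholds below are hypothetical placeholders, not the enumeration defined in this work), a contact state transition can be detected by classifying two consecutive observations:

from enum import Enum, auto

class ContactState(Enum):
    # Hypothetical labels; the paper defines its own enumeration of contact states.
    NO_CONTACT = auto()
    POINT_CONTACT = auto()
    EDGE_CONTACT = auto()
    DOUBLE_CONTACT = auto()

def classify_state(num_contact_points: int, bending_feature: float) -> ContactState:
    """Map simple image/force features to a contact state (illustrative rules only)."""
    if num_contact_points == 0:
        return ContactState.NO_CONTACT
    if num_contact_points == 1:
        # A hypothetical threshold separates point contact from edge contact.
        return ContactState.POINT_CONTACT if bending_feature < 0.5 else ContactState.EDGE_CONTACT
    return ContactState.DOUBLE_CONTACT

def detect_transition(prev_features, curr_features):
    """Report a contact state transition between two consecutive observations, if any."""
    prev_state = classify_state(*prev_features)
    curr_state = classify_state(*curr_features)
    return (prev_state, curr_state) if prev_state != curr_state else None

# Example: the object gains its first contact point between two frames.
print(detect_transition((0, 0.0), (1, 0.2)))  # (NO_CONTACT, POINT_CONTACT)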
A geoscientifically relevant wavelet approach is established for the classical (inner) displacement problem corresponding to a regular surface (such as sphere, ellipsoid, actual earth's surface). Basic tools are the limit and jump relations of (linear) elastostatics. Scaling functions and wavelets are formulated within the framework of the vectorial Cauchy-Navier equation. Based on appropriate numerical integration rules a pyramid scheme is developed providing fast wavelet transform (FWT). Finally multiscale deformation analysis is investigated numerically for the case of a spherical boundary.
The focus of this work has been to develop two families of wavelet solvers for the inner displacement boundary-value problem of elastostatics. Our methods are particularly suitable for the deformation analysis corresponding to geoscientifically relevant (regular) boundaries like the sphere, the ellipsoid, or the actual Earth's surface. The first method, a spatial approach to wavelets on a regular (boundary) surface, is established for the classical (inner) displacement problem. Starting from the limit and jump relations of elastostatics, we formulate scaling functions and wavelets within the framework of the Cauchy-Navier equation. Based on numerical integration rules, a tree algorithm is constructed for fast wavelet computation. This method can be viewed as a first attempt at "short-wavelength modelling", i.e. high resolution of the fine structure of displacement fields. The second technique aims at a suitable wavelet approximation associated with Green's integral representation for the displacement boundary-value problem of elastostatics. The starting points are tensor product kernels defined on Cauchy-Navier vector fields. We arrive at scaling functions and a spectral approach to wavelets for the boundary-value problems of elastostatics associated with spherical boundaries. Again, a tree algorithm which uses a numerical integration rule on bandlimited functions is established to reduce the computational effort. For the numerical realization of both methods, multiscale deformation analysis is investigated for the geoscientifically relevant case of a spherical boundary using test examples. Finally, the applicability of our wavelet concepts is shown by considering the deformation analysis of a particular region of the Earth, viz. Nevada, using surface displacements provided by satellite observations. This represents the first step towards practical applications.
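For reference, the (homogeneous, isotropic) Cauchy-Navier equation underlying both wavelet constructions is, in the absence of body forces,
\[ \mu\,\Delta\mathbf{u}(x) + (\lambda+\mu)\,\nabla\bigl(\nabla\cdot\mathbf{u}(x)\bigr) = \mathbf{0}, \]
where \(\mathbf{u}\) denotes the displacement field and \(\lambda,\mu\) are the Lamé constants; the scaling functions and wavelets are built from kernels satisfying this equation.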
In this work, the (Ti,Al,Si)N system was investigated. The main point of the investigation was to study the possibility of obtaining nanocomposite coating structures by depositing multilayer films of TiN and AlSiN. The aim was to understand the relation between the mechanical properties (hardness, Young's modulus) and the microstructure (nanocrystalline with individual phases). Special attention was given to the effect of temperature on microstructural changes during annealing of the coatings at 600 °C. The surface hardness, the elastic modulus, and the diffusion and composition of the multilayers were the test tools for the comparison between the different coated samples with and without annealing at 600 °C. To achieve this objective, a rectangular aluminum vacuum chamber with three unbalanced sputtering magnetrons for the deposition of thin-film coatings from different materials was constructed. The chamber consists mainly of two chambers: the pre-vacuum chamber to load the workpiece, and the main vacuum chamber where the sputter deposition of the thin-film coatings takes place. The workpiece moves on a carriage travelling on a rail between the two chambers to the positions of the magnetrons, driven by stepper motors. The chambers are separated by a self-constructed rectangular gate controlled manually from outside the chamber. The chamber was sealed for vacuum use with glue and screws. Therefore, different types of glue were tested not only for their ability to form a uniform thin layer in the gap between the aluminum plates to seal the chamber for vacuum use, but also for low outgassing rates suitable for vacuum use. An epoxy was able to fulfill these tasks. The evacuation characteristics of the constructed chamber were improved by minimizing the outgassing rate of the inner surface. For this purpose, the throughput outgassing-rate test method was used to compare the short-period (one hour) outgassing rates of samples of the two selected aluminum materials (A2017 and A5353). Different machining methods and treatments for the inner surface of the vacuum chamber were tested. Machining the surface of material A (A2017) with ethanol as coolant fluid reduced its outgassing rate by a factor of 6 compared with a non-machined sample surface of the same material. Reducing the porous oxide layer on top of the aluminum surface by pickling with HNO3 acid, and protecting it by producing another passive, non-porous oxide layer by anodizing, will protect the surface for a longer time and will minimize the outgassing rates even under a humid atmosphere. The residual gas analyzer (RGA) test shows that more than 85% of the gases inside the test chamber were water vapour (H2O) and the rest were N2, H2, and CO, so a liquid-nitrogen water-vapour trap can enhance the pump-down process of the chamber. As a result, it was possible to construct a chamber that can be pumped down using a turbomolecular pump (450 L/s) to the range of 1x10^-6 mbar within one hour of evacuation, where the chamber volume is 160 litres and the inner surface area is 1.6 m^2. This is a good base pressure for the sputter deposition of hard thin-film coatings. Multilayer thin-film coatings were deposited to demonstrate that nanostructured thin films within the (Ti,Al,Si)N system can be prepared by reactive magnetron sputtering of multiple thin-film layers of TiN and AlSiN.
The secondary neutral mass spectrometry (SNMS) measurements of the test samples show that complete diffusion between the different deposited thin-film coating layers takes place in each sample, even at low substrate deposition temperature. The high magnetic flux of the unbalanced magnetrons and the high sputtering power were able to produce a high ion-to-atom flux, which gives high mobility to the deposited atoms. The interaction between the high mobility of the deposited atoms and the ion-to-atom flux was sufficient to enhance the diffusion between the different deposited thin layers. The XRD patterns of this system show that the structure of the formed mixture consists of two phases: one phase identified as bulk TiN and another, unknown amorphous phase, which can be SiNx or AlN or a combination of Ti-Al-Si-N. As a result, we were able to deposit nanocomposite coatings by the deposition of multilayers of TiN and AlSiN thin-film coatings using the constructed vacuum chamber.
Self-adaptation allows software systems to autonomously adjust their behavior during run-time by handling all possible operating states that violate the requirements of the managed system. This requires an adaptation engine that receives adaptation requests during the monitoring process of the managed system and responds with an automated and appropriate adaptation response. During the last decade, several engineering methods have been introduced to enable self-adaptation in software systems. However, these methods do not address (1) the run-time uncertainty that hinders the adaptation process and (2) the performance impact resulting from the complexity and the large size of the adaptation space. This paper presents CRATER, a framework that builds an external adaptation engine for self-adaptive software systems. The adaptation engine, which is built on case-based reasoning, handles the aforementioned challenges together. The paper is supported by an experiment illustrating the benefits of this framework. The experimental results show the potential of CRATER in handling run-time uncertainty and in adaptation remembrance, which enhances performance for large adaptation spaces.
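To illustrate the general flavour of a case-based reasoning adaptation engine (a minimal sketch with hypothetical case fields and metric names, not CRATER's actual interfaces):

from dataclasses import dataclass
import math

@dataclass
class Case:
    # Hypothetical structure: an observed problem state and the adaptation that worked for it.
    problem: dict          # e.g. monitored metrics {"latency": 0.9, "error_rate": 0.2}
    adaptation: str        # e.g. "add_replica"

def distance(a: dict, b: dict) -> float:
    """Euclidean distance over the shared numeric features of two problem states."""
    keys = set(a) & set(b)
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in keys))

def retrieve(case_base, query, max_dist=0.5):
    """Retrieve the most similar past case, or None if no sufficiently similar experience exists."""
    best = min(case_base, key=lambda c: distance(c.problem, query), default=None)
    if best is None or distance(best.problem, query) > max_dist:
        return None  # high run-time uncertainty: fall back and learn a new case later
    return best

# Toy usage: reuse the adaptation of the closest remembered case.
case_base = [Case({"latency": 0.9, "error_rate": 0.2}, "add_replica"),
             Case({"latency": 0.1, "error_rate": 0.8}, "restart_service")]
query = {"latency": 0.85, "error_rate": 0.25}
match = retrieve(case_base, query)
print(match.adaptation if match else "no adaptation remembered")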
Postmortem Analysis of Decayed Online Social Communities: Cascade Pattern Analysis and Prediction
(2018)
Recently, many online social networks, such as MySpace, Orkut, and Friendster, have faced inactivity decay of their members, which contributed to the collapse of these networks. The reasons, mechanics, and prevention mechanisms of such inactivity decay are not fully understood. In this work, we analyze decayed and alive subwebsites from the Stack Exchange platform. The analysis mainly focuses on the inactivity cascades that occur among the members of these communities. We provide measures to understand the decay process and statistical analysis to extract the patterns that accompany the inactivity decay. Additionally, we predict cascade size and cascade virality using machine learning. The results of this work include a statistically significant difference in the decay patterns between the decayed and the alive subwebsites. These patterns mainly concern cascade size, cascade virality, cascade duration, and cascade similarity. Additionally, the contributed prediction framework showed satisfactory prediction results compared to a baseline predictor. Supported by empirical evidence, the main findings of this work are: (1) there are significantly different decay patterns in the alive and the decayed subwebsites of Stack Exchange; (2) the cascades' node degrees contribute more to the decay process than the cascades' virality, which indicates that the expert members of the Stack Exchange subwebsites were mainly responsible for the activity or inactivity of these subwebsites; (3) the Statistics subwebsite is going through decay dynamics that may lead to it becoming fully decayed; (4) the decay process is not governed by only one network measure; it is better described using multiple measures; (5) the decayed subwebsites were originally less resilient to inactivity decay, unlike the alive subwebsites; and (6) a network's structure in the early stages of its evolution dictates the activity/inactivity characteristics of the network.
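As an illustration of two of the cascade measures mentioned above, the sketch below computes cascade size and structural virality (here taken as the average pairwise shortest-path distance in the cascade tree) for a toy inactivity cascade using networkx; it is not the paper's implementation.

import networkx as nx

# Toy inactivity cascade: an edge u -> v means u's inactivity preceded (and is linked to) v's.
cascade = nx.DiGraph([(1, 2), (1, 3), (3, 4), (3, 5), (5, 6)])

size = cascade.number_of_nodes()

# Structural virality: mean shortest-path distance over all node pairs of the undirected tree.
undirected = cascade.to_undirected()
virality = nx.average_shortest_path_length(undirected)

print(f"cascade size = {size}, structural virality = {virality:.2f}")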
Learning From Networked-data: Methods and Models for Understanding Online Social Networks Dynamics
(2020)
Abstract
Nowadays, people and systems created by people are generating an unprecedented amount of
data. This data has brought us data-driven services with a variety of applications that affect
people’s behavior. One of these applications is the emergent online social networks as a method
for communicating with each other, getting and sharing information, looking for jobs, and many
other things. However, the tremendous growth of these online social networks has also led to many
new challenges that need to be addressed. In this context, the goal of this thesis is to better understand
the dynamics between the members of online social networks from two perspectives. The
first perspective is to better understand the process and the motives underlying link formation in
online social networks. We utilize external information to predict whether two members of an online
social network are friends or not. Also, we contribute a framework for assessing the strength of
friendship ties. The second perspective is to better understand the decay dynamics of online social
networks resulting from the inactivity of their members. Hence, we contribute a model, methods,
and frameworks for understanding the decay mechanics among the members, for predicting members’
inactivity, and for understanding and analyzing inactivity cascades occurring during the decay.
The results of this thesis are: (1) The link formation process is at least partly driven by interactions
among members that take place outside the social network itself; (2) external interactions might
help reduce the noise in social networks and help rank the strength of the ties in these networks;
(3) inactivity dynamics can be modeled, predicted, and controlled using the models contributed in
this thesis, which are based on network measures. The contributions and the results of this thesis
can be beneficial in many respects. For example, improving the quality of a social network by introducing
new meaningful links and removing noisy ones helps to improve the quality of the services
provided by the social network, which, e.g., enables better friend recommendations and helps to
eliminate fake accounts. Moreover, understanding the decay processes involved in the interaction
among the members of a social network can help to prolong the engagement of these members. This
is useful in designing more resilient social networks and can assist in finding influential members
whose inactivity may trigger an inactivity cascade resulting in a potential decay of a network.
Building interoperation among separately developed software units requires checking their conceptual assumptions and constraints. However, eliciting such assumptions and constraints is time-consuming and challenging, as it requires analyzing each of the interoperating software units. To address this issue, we proposed a new conceptual interoperability analysis approach which aims at decreasing the analysis cost and the conceptual mismatches between the interoperating software units. In this report we present the design of a planned controlled experiment for evaluating the effectiveness, efficiency, and acceptance of our proposed conceptual interoperability analysis approach. The design includes the study objectives, research questions, statistical hypotheses, and experimental design. It also provides the materials that will be used in the execution phase of the planned experiment.
Typically software engineers implement their software according to the design of the software
structure. Relations between classes and interfaces such as method-call relations and inheritance
relations are essential parts of a software structure. Accordingly, analyzing several types of
relations will benefit the static analysis process of the software structure. The tasks of this
analysis include, but are not limited to: understanding (legacy) software, checking guidelines,
improving product lines, finding structure, or re-engineering existing software. Graphs with multi-type edges are a possible representation for these relations, modeling the relations as edges while the nodes represent the classes and interfaces of the software. Such a multi-type-edge graph can then be mapped to visualizations. However, the visualizations have to cope with the multiplicity of relation types and with scalability, while still enabling software engineers to recognize visual patterns.
To advance the usage of visualizations for analyzing the static structure of software systems,
I tracked the different development phases of the interactive multi-matrix visualization (IMMV) and report an extended user study at the end. In this study, the visual structures found with IMMV and with PNLV were determined and classified systematically into four categories: high degree, within-package edges, cross-package edges, and no edges. Beyond the structures found with these two tools, other structures that are interesting for software engineers, such as cycles and hierarchical structures, need additional visualizations to display and investigate them. Therefore, an extended approach for graph layout was presented that improves the quality of the decomposition and the drawing of directed graphs according to their topology, based on rigorous definitions. The extension describes and analyzes the decomposition and drawing algorithms in detail and gives their polynomial time and space complexity. Finally, I addressed the visualization of graphs with multi-type edges using small multiples, where each tile is dedicated to one edge type and uses the topological graph layout to highlight non-trivial cycles, trees, and DAGs for showing and analyzing the static structure of software. I applied this approach to four software systems to demonstrate its usefulness.
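A minimal sketch of the underlying data structure: a multigraph whose edges carry a relation type, from which one small-multiples tile per edge type can be derived (illustrative only, using networkx and made-up class names rather than the IMMV tooling):

import networkx as nx

# Nodes are classes/interfaces; parallel edges carry different relation types.
g = nx.MultiDiGraph()
g.add_edge("OrderService", "Order", type="method-call")
g.add_edge("OrderService", "AbstractService", type="inheritance")
g.add_edge("Order", "LineItem", type="method-call")

def tile(graph, edge_type):
    """Return the subgraph for one small-multiples tile, keeping only one edge type."""
    t = nx.DiGraph()
    t.add_nodes_from(graph.nodes)
    t.add_edges_from((u, v) for u, v, d in graph.edges(data=True) if d["type"] == edge_type)
    return t

for edge_type in {d["type"] for _, _, d in g.edges(data=True)}:
    print(edge_type, list(tile(g, edge_type).edges))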
In the literature, there are at least two equivalent two-factor Gaussian models for the instantaneous short rate. These are the original two-factor Hull-White model (see [3]) and the G2++ model by Brigo and Mercurio (see [1]). Both models first specify a time-homogeneous two-factor short rate dynamics and then, by adding a deterministic shift function \(\varphi(\cdot)\), fit the initial term structure of interest rates exactly. However, the obtained results are rather clumsy and not intuitive, which means that special care has to be taken for their correct numerical implementation.
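For reference, the G2++ short-rate dynamics referred to above are (in the notation of Brigo and Mercurio)
\[ r(t) = x(t) + y(t) + \varphi(t), \qquad dx(t) = -a\,x(t)\,dt + \sigma\,dW_1(t), \qquad dy(t) = -b\,y(t)\,dt + \eta\,dW_2(t), \]
with \(x(0)=y(0)=0\) and instantaneously correlated Brownian motions, \(dW_1(t)\,dW_2(t)=\rho\,dt\); the deterministic shift \(\varphi(\cdot)\) is chosen so that the model reproduces the initial term structure exactly.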
In this thesis, we have dealt with two approaches to modeling credit risk, namely the structural (firm value) approach and the reduced-form approach. In the former, the firm value is modeled by a stochastic process, and the first hitting time of this stochastic process to a given boundary defines the default time of the firm. In the existing literature, the stochastic process driving the firm value has generally been chosen to be a diffusion process. Therefore, on the one hand it is possible to obtain closed-form solutions for the pricing problems of credit derivatives, and on the other hand the optimal capital structure of a firm can be analysed by obtaining closed-form solutions for the firm's corporate securities, such as the equity value, the debt value, and the total firm value; see Leland (1994). We have extended this approach by modeling the firm value as a jump-diffusion process. The choice of the jump-diffusion process was a crucial step in obtaining closed-form solutions for corporate securities. We have chosen a jump-diffusion process with double-exponentially distributed jump heights, which enabled us to analyse the effects of jumps on the optimal capital structure of a firm. In the second part of the thesis, following the reduced-form models, we have assumed that default is triggered by the first jump of a Cox process. Further, following Schönbucher (2005), we have modeled the forward default intensity of a firm as a geometric Brownian motion and derived pricing formulas for credit default swap options in a more general setup than in Schönbucher (2005).
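As an illustration of the kind of firm-value model described above (a sketch; the thesis's exact specification may differ in details), a jump-diffusion with double-exponential jumps can be written as
\[ \frac{dV_t}{V_{t-}} \;=\; \mu\,dt \;+\; \sigma\,dW_t \;+\; d\!\left(\sum_{i=1}^{N_t}\bigl(e^{Y_i}-1\bigr)\right), \]
where \(N_t\) is a Poisson process with intensity \(\lambda\) and the jump sizes \(Y_i\) are i.i.d. with a double-exponential distribution; default occurs at the first time \(V_t\) hits a lower boundary.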
This paper deals with the handling of deformable linear objects (DLOs), such as hoses, wires, or leaf springs. It investigates usable features for the vision-based detection of a changing contact situation between a DLO and a rigid polyhedral obstacle and a classification of such contact state transitions. The result is a complete classification of contact state transitions and of the most significant features for each class. This knowledge enables reliable detection of changes in the DLO contact situation, facilitating implementation of sensor-based manipulation skills for all possible contact changes.
In the present work, the load-bearing behaviour of continuous steel-fibre-reinforced composite slabs is analysed. Two design models are developed on the basis of experimental and computational investigations. The experimental investigations on single-span and continuous steel-fibre-reinforced composite slabs provide insight into the load-bearing and deformation behaviour of the slabs. Both open trapezoidal and re-entrant profiled steel sheets are used. Conventional reinforcing steel is dispensed with entirely; the hogging moment at the support is carried by the steel-fibre concrete alone. In four test series comprising a total of 18 tests, individual parameters such as different slab thicknesses, different sheet geometries, and different steel-fibre concrete mixes are investigated. For the calculation and design, the verification procedures customary in composite construction are adopted and modified. The load-bearing contributions of the steel-fibre concrete are implemented via stress-block approaches. Recalculation of the individual tests demonstrates the suitability of the procedures. For the individual verifications, design charts and tables are produced in parameter studies, enabling the practising engineer to design simply and safely. On the basis of the experimental results and the computational investigations, two possible design models are developed with which the load-bearing capacity of continuous steel-fibre-reinforced composite slabs can be verified. The verification can be carried out using either the elastic-plastic or the plastic-plastic procedure.
In this paper we study the possibilities of sharing profit in combinatorial procurement auctions and exchanges. Bundles of heterogeneous items are offered by the sellers, and the buyers can then place bundle bids on sets of these items. That way, both sellers and buyers can express synergies between items and avoid the well-known risk of exposure (see, e.g., [3]). The reassignment of items to participants is known as the Winner Determination Problem (WDP). We propose solving the WDP by using a Set Covering formulation, because profits are potentially higher than with the usual Set Partitioning formulation, and subsidies are unnecessary. The achieved benefit is then to be distributed amongst the participants of the auction, a process which is known as profit sharing. The literature on profit sharing provides various desirable criteria. We focus on three main properties we would like to guarantee: Budget balance, meaning that no more money is distributed than profit was generated, individual rationality, which guarantees to each player that participation does not lead to a loss, and the core property, which provides every subcoalition with enough money to keep them from separating. We characterize all profit sharing schemes that satisfy these three conditions by a monetary flow network and state necessary conditions on the solution of the WDP for the existence of such a profit sharing. Finally, we establish a connection to the famous VCG payment scheme [2, 8, 19], and the Shapley Value [17].
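For concreteness, a simplified single-sided version of the set covering formulation of the WDP mentioned above (the exchange setting in the paper additionally involves sell bids) is
\[ \min \sum_{j} c_j\,x_j \quad\text{s.t.}\quad \sum_{j\,:\, i \in S_j} x_j \;\ge\; 1 \;\;\forall i, \qquad x_j \in \{0,1\}, \]
where bid \(j\) offers the bundle \(S_j\) at price \(c_j\) and every demanded item \(i\) must be covered at least once. Since the covering inequality only relaxes the equality of a set partitioning, the feasible region grows, so the achievable profit is at least as high and subsidies become unnecessary.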
In 2006, Jeffrey Achter proved that the distribution of divisor class groups of degree 0 of function fields with a fixed genus and the distribution of eigenspaces in symplectic similitude groups are closely related to each other. Gunter Malle proposed that there should be a similar correspondence between the distribution of class groups of number fields and the distribution of eigenspaces in certain matrix groups. Motivated by these results and suggestions, we study the distribution of eigenspaces corresponding to the eigenvalue one in some special subgroups of the general linear group over factor rings of rings of integers of number fields, and derive some conjectural statements about the distribution of \(p\)-parts of class groups of number fields over a base field \(K_{0}\). Our main interest lies in the case that \(K_{0}\) contains the \(p\)th roots of unity, because in this situation the \(p\)-parts of class groups seem to behave differently from what is predicted by the well-known conjectures of Henri Cohen and Jacques Martinet. In 2010, based on computational data, Malle succeeded in formulating a conjecture in the spirit of Cohen and Martinet for this case. Here, using our investigations of the distribution in matrix groups, we generalize the conjecture of Malle to a more abstract level and provide theoretical support for these statements.
Acidic zeolites like H-Y, H-ZSM-5, H-MCM-22 and H-MOR were found to be selective adsorbents for the removal of thiophene from toluene or n-heptane as solvent. The competitive adsorption of toluene is found to influence the adsorption capacity for thiophene and is more pronounced when high-alumina zeolites are used as adsorbents. This behaviour is also reflected by the results of the adsorption of thiophene on H-ZSM-5 zeolites with varied nSi/nAl ratios (viz. 13, 19 and 36) from toluene and n-heptane as solvents, respectively. UV-Vis spectroscopic results show that the oligomerization of thiophene leads to the formation of dimers and trimers on these zeolites. The oligomerization in acidic zeolites is regarded as dependent on the geometry of the pore system of the zeolites. Sulphur-containing compounds with more than one ring, viz. benzothiophene, which are also present in substantial amounts in certain hydrocarbon fractions, are not adsorbed on H-ZSM-5 zeolites. This is obvious, as the diameter of the pore aperture of zeolite H-ZSM-5 is smaller than the molecular size of benzothiophene. Metal-ion-exchanged FAU-type zeolites are found to be promising adsorbents for the removal of sulphur-containing compounds from model solutions. The introduction of Cu+-, Ni2+-, Ce3+-, La3+- and Y3+-ions into zeolite Na+-Y by aqueous ion exchange substantially improves the adsorption capacity for thiophene from toluene or n-heptane as solvent. More than the absolute content of Cu+-ions, the presence of Cu+-ions at the sites exposed to the supercages is believed to influence the adsorption of thiophene on Cu+-Y zeolite. It was shown experimentally for the case of Cu+-Y and Ce3+-Y that the supercages present in the FAU zeolite allow access of bulkier sulphur-containing compounds (viz. benzothiophene, dibenzothiophene and dimethyl dibenzothiophene). These bulkier compounds compete with thiophene and are preferentially adsorbed on Cu+-Y zeolite. IR spectroscopic results revealed that the adsorption of thiophene on Na+-Y, Cu+-Y and Ni2+-Y is primarily a result of the interaction of thiophene via pi-complexation between the C=C double bond (of thiophene) and the metal ions (in the zeolite framework). A different mode of interaction of thiophene with Ce3+-, La3+- and Y3+-metal ions was observed in the IR spectra of thiophene adsorbed on Ce3+-Y, La3+-Y and Y3+-Y zeolites, respectively. On these adsorbents, thiophene is believed to interact via a lone electron pair of the sulphur atom with the metal ions present in the adsorbent (M-S interaction). The experimental results show that there is a large difference in the thiophene adsorption capacities of pi-complexation adsorbents (like Cu+-Y, Ni2+-Y) between the model solution with toluene as solvent and the model solution with n-heptane as solvent. The lower capacity of these zeolites for the adsorption of thiophene from toluene than from n-heptane as solvent is a clear indication that toluene competes by interacting with the adsorbent in a way similar to thiophene. The difference in thiophene adsorption capacities is very small in the case of the adsorbents Ce3+-Y, La3+-Y and Y3+-Y, which are believed to interact with thiophene predominantly by a direct M3+-S bond (thiophene interacting with the metal ion via a lone pair of electrons). TG-DTA analysis was used to study the regeneration behaviour of the adsorbents.
Acidic zeolites can be regenerated by simply heating at 400 °C in a flow of nitrogen. On the metal-ion-exchanged zeolites, by contrast, thiophene is chemically adsorbed on the metal ion, and regeneration by heating under an inert gas flow alone is not possible. The only way to regenerate these adsorbents is to burn off the adsorbate, which eventually brings about an undesired emission of SOx. The exothermic peaks appearing at different temperatures in the heat-flow profiles of Cu+-Y, Ce3+-Y, La3+-Y and Y3+-Y also indicate that two different types of interaction are present, as revealed by IR spectroscopy. One major difficulty in reducing the sulphur content in fuels to values below 10 ppm is the inability to remove alkyl dibenzothiophenes, viz. 4,6-dimethyl dibenzothiophene, by the existing catalytic hydrodesulphurization technique. Cu+-Y and Ce3+-Y were found in the present study to adsorb this compound from toluene to a certain extent. To meet the stringent regulations on sulphur content, selective adsorption by zeolites could be a valuable post-purification method after the catalytic hydrodesulphurization unit.
Dealing with information in modern times requires users to cope with hundreds of thousands of documents, such as articles, emails, Web pages, or news feeds.
Above all information sources, the World Wide Web presents information seekers with great challenges.
It offers more text in natural language than one is capable of reading.
The key idea of this research is to provide users with adaptable filtering techniques, supporting them in filtering out the specific information items they need.
Its realization focuses on developing an Information Extraction system,
which adapts to a domain of concern by interpreting the formalized knowledge it contains.
Utilizing the Resource Description Framework (RDF), which is the Semantic Web's formal language for exchanging information,
allows extending information extractors to incorporate the given domain knowledge.
Because of this, formal information items from the RDF source can be recognized in the text.
The application of RDF allows a further investigation of operations on recognized information items, such as disambiguating them and rating their relevance.
Switching between different RDF sources allows changing the application scope of the Information Extraction system from one domain of concern to another.
An RDF-based Information Extraction system can be triggered to extract specific kinds of information entities by providing it with formal RDF queries in terms of the SPARQL query language.
Representing extracted information in RDF extends the coverage of the Semantic Web's information degree and provides a formal view on a text from the perspective of the RDF source.
In detail, this work presents the extension of existing Information Extraction approaches by incorporating the graph-based nature of RDF.
Hereby, the pre-processing of RDF sources allows extracting statistical information models dedicated to support specific information extractors.
These information extractors refine standard extraction tasks, such as Named Entity Recognition, by using the information provided by the pre-processed models.
The post-processing of extracted information items enables representing these results in RDF format or lists, which can now be ranked or filtered by relevance.
Post-processing also comprises the enrichment of originating natural language text sources with extracted information items by using annotations in RDFa format.
The results of this research extend the state-of-the-art of the Semantic Web.
This work contributes approaches for computing customizable and adaptable RDF views on the natural language content of Web pages.
Finally, due to the formal nature of RDF, machines can interpret these views allowing developers to process the contained information in a variety of applications.
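A minimal sketch of the RDF-driven idea, using the rdflib library and a hypothetical tiny domain graph (not the system described above): domain knowledge is loaded as RDF, queried with SPARQL, and the returned labels can then seed a (here deliberately naive) entity recognizer.

from rdflib import Graph

# Hypothetical toy domain knowledge in Turtle syntax.
turtle = """
@prefix ex:   <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
ex:Kaiserslautern a ex:City ; rdfs:label "Kaiserslautern" .
ex:Berlin         a ex:City ; rdfs:label "Berlin" .
"""

g = Graph()
g.parse(data=turtle, format="turtle")

# SPARQL query selecting the labels of all cities in the domain graph.
query = """
PREFIX ex:   <http://example.org/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?label WHERE { ?city a ex:City ; rdfs:label ?label . }
"""

gazetteer = [str(row.label) for row in g.query(query)]

# The labels act as a gazetteer for a simple entity recognizer over free text.
text = "The study was carried out in Kaiserslautern."
print([name for name in gazetteer if name in text])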
The ideal image of the European city, with its densely grown building fabric and its public spaces, stands as a synonym for 'urbanity' and influences planning thought and practice to this day. Closely bound up with it, associations with the agora and the forum keep reappearing; as archetypes of the 'public' per se, they carry the myth of an ideal, democratic urban society that articulates and constitutes itself there. At the beginning of the 21st century, however, the question arises whether the traditional and well-tried models and images of public space have outlived themselves in the face of rapid social and information-technological change. Do the typical ideas of the city that rest on public space now have merely symbolic meaning? Are they shifting more and more into virtual space? The 'city of short distances', with its spatial mixture, is no longer relevant and socially formative in the familiar way as a 'marketplace' for the exchange of information and goods. The city as a unified and homogeneous entity no longer exists. Instead it is characterized by fragmentation and splintering, as shown, among others, by the diagnoses of Touraine ['Die Stadt - ein überholter Entwurf', 1996], Koolhaas ['Generic City', 1997], Sieverts ['Die Zwischenstadt', 1999] and Augé ['Orte und Nicht-Orte', 1998]. In parallel, cyberspace and 'virtual cities' [Rötzer, 1997] give rise to new forms of public space and of the public sphere, parallel spaces to the real world. What effects these new spaces will have on people's lives and on the city cannot yet be foreseen. In the real world, the planning latitude of municipalities is becoming ever smaller, and the interdependencies triggered, among other things, by globalization, privatization and the deregulation of public tasks are becoming ever more complex. Besides internationally observable developments [shopping malls, New Urbanism, gated communities], which seem to assert themselves across countries in slightly modified variants, socio-cultural trends in society also exert a strong influence on public space. Public space, as the link between the 'public' and the 'private', is increasingly exposed to the pressure of the 'experience and consumer society' and can therefore fulfil its function for society as a whole only to a limited extent. The individualized and mobile society, with its changed and diversified life plans and values, further calls the traditional understanding of public space into question. The image of public space as a sphere of 21st-century society can no longer be approached with a mythologizing notion of the agora. New approaches and a review of current developments are indispensable for this; at the same time, the historical circumstances must be taken into account in order to learn from them.
Dynamics of Excited Electrons in Copper and Ferromagnetic Transition Metals: Theory and Experiment
(2000)
Both theoretical and experimental results for the dynamics of photoexcited electrons at surfaces of Cu and the ferromagnetic transition metals Fe, Co, and Ni are presented. A model for the dynamics of excited electrons is developed, which is based on the Boltzmann equation and includes effects of photoexcitation, electron-electron scattering, secondary electrons (cascade and Auger electrons), and transport of excited carriers out of the detection region. From this we determine the time-resolved two-photon photoemission (TR-2PPE). Thus a direct comparison of calculated relaxation times with experimental results by means of TR-2PPE becomes possible. The comparison indicates that the magnitudes of the spin-averaged relaxation time t and of the ratio t_up/t_down of majority and minority relaxation times for the different ferromagnetic transition metals result not only from density-of-states effects, but also from different Coulomb matrix elements M. Taking M_Fe > M_Cu > M_Ni = M_Co we get reasonable agreement with experiments.
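Schematically, the model described above has the form of a spin-dependent Boltzmann equation for the distribution \(f_\sigma(E,t)\) of excited electrons,
\[ \frac{\partial f_\sigma(E,t)}{\partial t} = \left.\frac{\partial f_\sigma}{\partial t}\right|_{\mathrm{exc}} + \left.\frac{\partial f_\sigma}{\partial t}\right|_{e\text{-}e} + \left.\frac{\partial f_\sigma}{\partial t}\right|_{\mathrm{sec}} + \left.\frac{\partial f_\sigma}{\partial t}\right|_{\mathrm{transp}}, \]
with terms for photoexcitation, electron-electron scattering, secondary (cascade and Auger) electrons, and transport out of the detection region; the notation here is schematic and not necessarily that of the paper.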
Hardware Contention-Aware Real-Time Scheduling on Multi-Core Platforms in Safety-Critical Systems
(2019)
While the computing industry has shifted from single-core to multi-core processors for performance gains, safety-critical systems (SCSs) still require solutions that enable this transition while guaranteeing safety, requiring no source-code modifications, and substantially reducing re-development and re-certification costs, especially for legacy applications, which are typically substantial. This dissertation considers the problem of worst-case execution time (WCET) analysis under contention when deadline-constrained tasks in an independent, partitioned task set execute on a homogeneous multi-core processor with dynamic time-triggered shared memory bandwidth partitioning in SCSs.
Memory bandwidth in multi-core processors is shared across cores and is a significant cause of performance bottlenecks and of temporal variability of multiple orders of magnitude in tasks' execution times due to contention in the memory sub-system. Further, there is a circular dependency not only between WCET and the CPU scheduling of other cores, but also between WCET and the memory bandwidth assigned to cores over time. Thus, there is a need for solutions that allow tailoring memory bandwidth assignments to workloads over time and computing safe WCETs. It is pragmatically infeasible to obtain WCET estimates from static WCET analysis tools for multi-core processors due to the sheer computational complexity involved.
We use synchronized periodic memory servers on all cores that regulate each core's maximum memory bandwidth based on the bandwidth allocated over time. First, we present a workload schedulability test for a known, even memory-bandwidth assignment to active cores over time, where the number of active cores denotes the cores with a non-zero memory bandwidth assignment. Its computational complexity is similar to that of merge sort. Second, we demonstrate, using a real certified safety-critical avionics application, how our method can preserve an existing application's single-core CPU schedule under contention on a multi-core processor. It enables incremental certification using composability and requires no source-code modification.
Next, we provide a general framework to perform WCET analysis under dynamic memory bandwidth partitioning when the changes in the memory bandwidth assigned to cores are time-triggered and known. It provides a stall-maximization algorithm whose complexity is similar to that of a concave optimization problem and which efficiently implements the WCET analysis. Last, we demonstrate that dynamic memory assignments and WCET analysis using our method significantly improve schedulability compared to the state-of-the-art, using an Integrated Modular Avionics scenario.
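To give a flavour of contention-aware schedulability checking, the sketch below runs a generic fixed-priority response-time iteration in which each task's execution demand is inflated by a memory-stall bound; it is an illustrative textbook-style analysis, not the schedulability test proposed in the dissertation.

import math

def response_time(tasks, i):
    """Fixed-point response-time analysis; each task is (C, T, stall_bound), sorted by priority.
    The memory stall bound is simply added to each task's execution demand."""
    C_i, T_i, S_i = tasks[i]
    R = C_i + S_i
    while True:
        interference = sum(math.ceil(R / T_j) * (C_j + S_j)
                           for j, (C_j, T_j, S_j) in enumerate(tasks) if j < i)
        R_new = C_i + S_i + interference
        if R_new == R:
            return R
        if R_new > T_i:
            return None  # deadline (= period) missed
        R = R_new

# Hypothetical tasks, highest priority first: (WCET in isolation, period, stall bound).
tasks = [(1.0, 5.0, 0.5), (2.0, 10.0, 1.0), (3.0, 20.0, 2.0)]
for i in range(len(tasks)):
    print(f"task {i}: response time = {response_time(tasks, i)}")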
Abstract
The main theme of this thesis is about Graph Coloring Applications and Defining Sets in Graph Theory.
As in the case of block designs, finding defining sets seems to be a difficult problem, and there is no general conclusion. Hence we confine ourselves here to some special types of graphs, like bipartite graphs, complete graphs, etc.
In this work, four new concepts of defining sets are introduced:
• Defining sets for perfect (maximum) matchings
• Defining sets for independent sets
• Defining sets for edge colorings
• Defining sets for maximal (maximum) cliques
Furthermore, some algorithms to find and construct the defining sets are introduced. A review of some known kinds of defining sets in graph theory is also incorporated. In Chapter 2, the basic definitions and some relevant notation used in this work are introduced.
Chapter 3 discusses maximum and perfect matchings and a new concept of a defining set for perfect matchings.
Different kinds of graph colorings and their applications are the subject of Chapter 4.
Chapter 5 deals with defining sets in graph coloring. New results are discussed along with existing research results, and an algorithm is introduced which enables the determination of a defining set of a graph coloring.
In Chapter 6, cliques are discussed, and an algorithm for the determination of cliques using their defining sets is presented. Several examples are included.
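To make one of the new notions concrete, the sketch below checks, by brute force and only for small graphs, whether a given edge set is a defining set for perfect matchings, understood here as an edge set that extends to exactly one perfect matching of the graph; this reading is an illustration and not necessarily the thesis's formal definition.

from itertools import combinations

def perfect_matchings(nodes, edges):
    """Enumerate all perfect matchings of a small graph by brute force."""
    n = len(nodes)
    for cand in combinations(edges, n // 2):
        covered = [v for e in cand for v in e]
        if len(set(covered)) == n:
            yield frozenset(cand)

def is_defining_set(nodes, edges, subset):
    """A defining set (here): an edge subset extendable to exactly one perfect matching."""
    extensions = [m for m in perfect_matchings(nodes, edges) if set(subset) <= m]
    return len(extensions) == 1

# Example: the 4-cycle 1-2-3-4 has two perfect matchings; fixing one edge determines one of them.
nodes = [1, 2, 3, 4]
edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(is_defining_set(nodes, edges, [(1, 2)]))  # True: forces the matching {(1,2),(3,4)}
print(is_defining_set(nodes, edges, []))        # False: both matchings remain possible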
In this paper we investigate the use of the sharp function known from functional analysis in image processing. The sharp function gives a measure of the variations of a function and can be used as an edge detector. We extend the classical notion of the sharp function for measuring anisotropic behaviour and give a fast anisotropic edge detection variant inspired by the sharp function. We show that these edge detection results are useful to steer isotropic and anisotropic nonlinear diffusion filters for image enhancement.
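For reference, the classical (isotropic) sharp maximal function alluded to above is
\[ f^{\#}(x) \;=\; \sup_{B \ni x} \frac{1}{|B|} \int_{B} \bigl|f(y) - f_B\bigr|\,dy, \qquad f_B = \frac{1}{|B|}\int_B f(y)\,dy, \]
where the supremum is taken over all balls \(B\) containing \(x\); large values of \(f^{\#}\) indicate strong local variation and hence edge candidates, and an anisotropic variant can be obtained by replacing the balls with direction-dependent neighbourhoods.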
Hydrogels are known to be covalently or ionically cross-linked, hydrophilic three-dimensional
polymer networks, which exist in our bodies in a biological gel form such as the vitreous
humour that fills the interior of the eyes. Poly(N-isopropylacrylamide) (poly(NIPAAm))
hydrogels are attracting increasing interest in biomedical applications because, among other reasons, they
exhibit a well-defined lower critical solution temperature (LCST) in water, around 31–34°C,
which is close to the body temperature. This is considered to be of great interest in drug
delivery, cell encapsulation, and tissue engineering applications. In this work, the
poly(NIPAAm) hydrogel is synthesized by free radical polymerization. Hydrogel properties
and the dimensional changes accompanied with the volume phase transition of the
thermosensitive poly(NIPAAm) hydrogel were investigated in terms of Raman spectra,
swelling ratio, and hydration. The thermal swelling/deswelling changes that occur at different equilibrium temperatures and in different solutions (phenol, ethanol, propanol, and sodium chloride) were investigated on the basis of the Raman spectra. In addition, Raman spectroscopy has
been employed to evaluate the diffusion aspects of bovine serum albumin (BSA) and phenol
through the poly(NIPAAm) network. The determination of the mutual diffusion coefficient, \(D_{mut}\), for the hydrogel/solvent system was achieved successfully using Raman spectroscopy at
different solute concentrations. Moreover, the mechanical properties of the hydrogel, which
were investigated by uniaxial compression tests, were used to characterize the hydrogel and to
determine the collective diffusion coefficient through the hydrogel. The solute release coupled
with shrinking of the hydrogel particles was modelled with a bi-dimensional diffusion model
with moving boundary conditions. The influence of the variable diffusion coefficient is
observed and leads to a better description of the kinetic curve in the case of large deformation around the LCST. Good agreement between experimental and calculated data was obtained.
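For orientation, the underlying transport law in such diffusion models is Fick's second law,
\[ \frac{\partial c}{\partial t} \;=\; \nabla\cdot\bigl(D\,\nabla c\bigr), \]
here with a concentration-dependent diffusion coefficient \(D\) and, in the release model above, solved on a shrinking domain, i.e. with moving boundary conditions.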
Optical Character Recognition (OCR) systems play an important role in the digitization of data acquired as images from a variety of sources. Although the area is well explored for Latin languages, some of the languages based on the Arabic cursive script have not yet been explored. This is due to several factors, most importantly the unavailability of proper data sets and the complexities posed by cursive scripts. The Pashto language is one such language, which needs considerable exploration towards OCR. In order to develop such an OCR system, this thesis provides a pioneering study that explores deep learning for the Pashto language in the field of OCR.
The Pashto language is spoken by more than 50 million people across the world, and it is an active medium for both oral and written communication. It is associated with a rich literary heritage and contains a huge written collection. These written materials present contents of simple to complex nature, and layouts ranging from hand-scribed to printed text. The Pashto language presents mainly two types of complexities: (i) generic complexities of the cursive script, and (ii) complexities specific to Pashto. The generic complexities are cursiveness, context dependency, and breaker-character anomalies, as well as space anomalies. The Pashto-specific complexities are variations in shape for a single character and shape similarity for some of the additional Pashto characters. Existing research in the area of Arabic OCR did not lead to an end-to-end solution for the mentioned complexities and therefore could not be generalized to build a sophisticated OCR system for Pashto.
The contribution of this thesis spans three levels: a conceptual level, a data level, and a practical level. At the conceptual level, we have deeply explored the Pashto language and identified those characters which are responsible for the challenges mentioned above. At the data level, a comprehensive dataset is introduced containing real images of hand-scribed contents. The dataset is manually transcribed and covers the most frequent layout patterns associated with the Pashto language. The practical-level contribution provides a bridge, in the form of a complete Pashto OCR system, and connects the outcomes of the conceptual- and data-level contributions. The practical contribution comprises skew detection, text-line segmentation, feature extraction, classification, and post-processing. The OCR module is further strengthened by using the deep learning paradigm to recognize Pashto cursive script within the framework of Recurrent Neural Networks (RNNs). The proposed Pashto text recognition is based on a Long Short-Term Memory (LSTM) network and achieves a character recognition rate of 90.78% on real hand-scribed Pashto images. All these contributions are integrated into an application to provide a flexible and generic end-to-end Pashto OCR system.
The impact of this thesis is not only specific to the Pashto language; it is also beneficial to other cursive languages like Arabic, Urdu, and Persian. The main reason is the Pashto character set, which is a superset of the Arabic, Persian, and Urdu character sets. Therefore, the conceptual contribution of this thesis provides insight and proposes solutions to almost all generic complexities associated with the Arabic, Persian, and Urdu languages. For example, the anomaly caused by breaker characters, which is shared among 70 languages that mainly use the Arabic script, is analyzed in depth. This thesis presents a solution to this issue that is equally beneficial to almost all Arabic-like languages.
The scope of this thesis has two important aspects. The first is its social impact, i.e., how society may benefit from it. The main advantages are bringing historical and almost vanished documents to life and ensuring opportunities to explore, analyze, translate, share, and understand the contents of the Pashto language globally. The second is the advancement and exploration of the technical aspects, because this thesis empirically explores the recognition challenges that are solely related to the Pashto language, both regarding the character set and the materials that present such complexities. Furthermore, the conceptual and practical background of this thesis regarding the complexities of the Pashto language is very beneficial for OCR of other cursive languages.
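A minimal sketch of the kind of recurrent text-line recognizer described above, using PyTorch's bidirectional LSTM with a CTC loss (the feature size, hidden size, and class count are hypothetical placeholders, not the thesis's actual network):

import torch
import torch.nn as nn

class LineRecognizer(nn.Module):
    """Bidirectional LSTM over a sequence of column features, trained with CTC."""
    def __init__(self, num_features=48, hidden=128, num_classes=60):  # 60: hypothetical charset size + blank
        super().__init__()
        self.lstm = nn.LSTM(num_features, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                      # x: (batch, width, num_features)
        out, _ = self.lstm(x)
        return self.fc(out).log_softmax(dim=2)  # (batch, width, num_classes)

model = LineRecognizer()
ctc = nn.CTCLoss(blank=0)

x = torch.randn(2, 100, 48)                 # two text-line images as feature sequences
log_probs = model(x).permute(1, 0, 2)        # CTCLoss expects (time, batch, classes)
targets = torch.randint(1, 60, (2, 20))      # dummy label sequences
input_lengths = torch.full((2,), 100, dtype=torch.long)
target_lengths = torch.full((2,), 20, dtype=torch.long)
print(ctc(log_probs, targets, input_lengths, target_lengths))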
This thesis presents a novel, generic framework for information segmentation in document images.
A document image contains different types of information, for instance, text (machine printed/handwritten), graphics, signatures, and stamps.
It is necessary to segment information in documents so that such segmented information is processed only when required in automatic document processing workflows.
The main contribution of this thesis is the conceptualization and implementation of an information segmentation framework that is based on part-based features.
The generic nature of the presented framework makes it applicable to a variety of documents (technical drawings, magazines, administrative, scientific, and academic documents) digitized using different methods (scanners, RGB cameras, and hyper-spectral imaging (HSI) devices).
A highlight of the presented framework is that it does not require large training sets, rather a few training samples (for instance, four pages) lead to high performance, i.e., better than previously existing methods.
In addition, the presented framework is simple and can be adapted quickly to new problem domains.
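As an illustration of what a part-based pipeline of this kind can look like, the sketch below clusters local keypoint descriptors into a small visual vocabulary and trains a classifier on occurrence histograms. It is a generic stand-in under stated assumptions (SIFT descriptors, a k-means vocabulary, a linear SVM, and the hypothetical train_images/train_labels inputs), not the exact feature set used in the thesis.

import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

sift = cv2.SIFT_create()

def descriptors(image_gray):
    # local part-based features: one 128-dim descriptor per detected keypoint
    _, desc = sift.detectAndCompute(image_gray, None)
    return desc if desc is not None else np.zeros((0, 128), np.float32)

def bow_histogram(desc, vocabulary):
    # occurrence histogram over the learned visual vocabulary
    words = vocabulary.predict(desc) if len(desc) else []
    hist, _ = np.histogram(words, bins=np.arange(vocabulary.n_clusters + 1))
    return hist / max(hist.sum(), 1)

def train(train_images, train_labels, vocab_size=64):
    # a few labeled example regions are enough to fit vocabulary and classifier
    all_desc = np.vstack([descriptors(im) for im in train_images])
    vocab = KMeans(n_clusters=vocab_size, n_init=4).fit(all_desc)
    X = np.array([bow_histogram(descriptors(im), vocab) for im in train_images])
    clf = LinearSVC().fit(X, train_labels)
    return vocab, clf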
This thesis is divided into three major parts on the basis of document digitization method (scanned, hyper-spectral imaging, and camera captured) used.
In the area of scanned document images, three specific contributions have been realized.
The first of them is in the domain of signature segmentation in administrative documents.
In some workflows, it is very important to check the document authenticity before processing the actual content.
This can be done based on the available seal of authenticity, e.g., signatures.
However, signature verification systems expect a pre-segmented signature image, while signatures are usually part of a document.
To use signature verification systems on document images, it is necessary to first segment signatures in documents.
This thesis shows that the presented framework can be used to segment signatures in administrative documents.
The system based on the presented framework is tested on a publicly available dataset, where it outperforms the state-of-the-art methods and successfully segments all signatures, while less than half of the found signatures are false positives.
This shows that it can be applied for practical use.
The second contribution in the area of scanned document images is segmentation of stamps in administrative documents.
A stamp also serves as a seal of a document's authenticity.
However, the location of a stamp on the document can be more arbitrary than that of a signature, depending on the person sealing the document.
This thesis shows that a system based on our generic framework is able to extract stamps of any arbitrary shape and color.
The evaluation of the presented system on a publicly available dataset shows that it is also able to segment black stamps (that were not addressed in the past) with a recall and precision of 83% and 73%, respectively.
Furthermore, to segment colored stamps, this thesis presents a novel feature set which is based on the intensity gradient; it is able to extract unseen, colored, arbitrarily shaped, textual as well as graphical stamps, and outperforms the state-of-the-art methods.
The third contribution in the area of scanned document images is in the domain of information segmentation in technical drawings (architectural floor plans, maps, circuit diagrams, etc.), which usually contain a large amount of graphics and comparatively few textual components. Furthermore, in technical drawings the text often overlaps with the graphics.
Thus, automatic analysis of technical drawings uses text/graphics segmentation as a pre-processing step.
This thesis presents a method based on our generic information segmentation framework that is able to detect the text, which is touching graphical components in architectural floorplans and maps.
Evaluation of the method on a publicly available dataset of architectural floorplans shows that it is able to extract almost all touching text components with precision and recall of 71% and 95%, respectively.
This means that almost all of the touching text components are successfully extracted.
In the area of hyper-spectral document images, two contributions have been realized.
Unlike normal three-channel RGB images, hyper-spectral images usually have many channels that range from the ultraviolet to the infrared region, including the visible region.
First, this thesis presents a novel automatic method for signature segmentation from hyper-spectral document images (240 spectral bands between 400 - 900 nm).
The presented method is based on a part-based key point detection technique, which does not use any structural information, but relies only on the spectral response of the document regardless of ink color and intensity.
The presented method is capable of segmenting (overlapping and non-overlapping) signatures from varying backgrounds such as printed text, tables, stamps, logos, etc.
Importantly, the presented method can extract signature pixels and not just the bounding boxes.
This is essential when signatures overlap with text and/or other objects in the image. Second, this thesis presents a new dataset comprising 300 documents scanned using a high-resolution hyper-spectral scanner. Evaluation of the presented signature segmentation method on this hyper-spectral dataset shows that it is able to extract signature pixels with a precision and recall of 100% and 79%, respectively.
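For orientation, a per-pixel classification that relies only on the spectral response could look like the sketch below; the band count, the synthetic data, and the random-forest classifier are assumptions made purely for illustration and are not taken from the thesis.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def segment_signature_pixels(cube, labeled_spectra, labels):
    """cube: (rows, cols, bands) hyper-spectral image; labeled_spectra: (n, bands)
    training spectra; labels: n binary labels (1 = signature ink, 0 = other)."""
    clf = RandomForestClassifier(n_estimators=100).fit(labeled_spectra, labels)
    rows, cols, bands = cube.shape
    pred = clf.predict(cube.reshape(-1, bands))
    return pred.reshape(rows, cols).astype(bool)   # per-pixel signature mask

# example with synthetic data: 100x100 pixels, 240 spectral bands
cube = np.random.rand(100, 100, 240)
train_spectra = np.random.rand(50, 240)
train_labels = np.random.randint(0, 2, 50)
mask = segment_signature_pixels(cube, train_spectra, train_labels)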
Further contributions have been made in the area of camera-captured document images. A major problem in the development of Optical Character Recognition (OCR) systems for camera-captured document images is the lack of labeled camera-captured document image datasets. First, this thesis presents a novel, generic method for automatic ground truth generation/labeling of document images. The presented method builds large-scale (i.e., millions of images) datasets of labeled camera-captured/scanned documents without any human intervention. The method is generic and can be used for automatic ground truth generation of (scanned and/or camera-captured) documents in any language, e.g., English, Russian, Arabic, or Urdu. The evaluation of the presented method on two different datasets in English and Russian shows that 99.98% of the images are correctly labeled in every case.
Another important contribution in the area of camera-captured document images is the compilation of a large dataset comprising 1 million word images (10 million character images), captured in a real camera-based acquisition environment, along with word- and character-level ground truth. The dataset can be used for training as well as testing of character recognition systems for camera-captured documents. Various benchmark tests are performed to analyze the behavior of different open-source OCR systems on camera-captured document images. Evaluation results show that the existing OCR systems, which already achieve very high accuracies on scanned documents, fail on camera-captured document images.
Using the presented camera-captured dataset, a novel character recognition system is developed which is based on a variant of recurrent neural networks, i.e., Long Short-Term Memory (LSTM), and outperforms all of the existing OCR engines on camera-captured document images with an accuracy of more than 95%.
Finally, this thesis provides details on various tasks that have been performed in the area closely related to information segmentation. This includes automatic analysis and sketch based retrieval of architectural floor plan images, a novel scheme for online signature verification, and a part-based approach for signature verification. With these contributions, it has been shown that part-based methods can be successfully applied to document image analysis.
Entwicklung von Fermentationsstrategien zur stofflichen Nutzung von nachwachsenden Rohstoffen
(2022)
Brewer's spent grain is an important representative of a renewable raw material, since it is a low-priced by-product of the brewing process that accrues in large quantities every year. In the present work, spent grain from seven different brewing recipes, both from in-house production and of industrial origin, was analyzed and classified with respect to the underlying brewing runs. In addition, the spent grain was separated by pressing into two separate material streams: a liquid and a solid fraction. Bioprocesses were established for both fractions in order to convert, on the one hand, the liquid substrate (spent grain press juice) to lactic acid with a lactic acid bacterium (Lactobacillus delbrueckii subsp. lactis) and, on the other hand, the solid substrate (spent grain residue) to ethanol and acetic acid with a lignocellulolytic, mixed-acid fermenting strain (Cellulomonas uda). Furthermore, a kinetic model was set up which could predict, among other things, the lactic acid formation and the cell growth of L. delbrueckii subsp. lactis for three spent grain press juices of different brewing recipes, i.e., with different nutrient compositions, in a simultaneous saccharification and fermentation. Moreover, the developed fermentation strategies for utilizing the spent grain press juice and the spent grain residue, as well as the underlying process monitoring and control strategies, could be transferred to fermentations with the same organisms but with meadow cuttings, a further renewable raw material, as substrate.
The advent of heterogeneous many-core systems has increased the spectrum
of achievable performance from multi-threaded programming. As the processor components become more distributed, the cost of synchronization and
communication needed to access the shared resources increases. Concurrent
linearizable access to shared objects can be prohibitively expensive in a high
contention workload. Though there are various mechanisms (e.g., lock-free
data structures) to circumvent the synchronization overhead in linearizable
objects, it still incurs performance overhead for many concurrent data types.
Moreover, many applications do not require linearizable objects and apply
ad-hoc techniques to eliminate synchronous atomic updates.
In this thesis, we propose the Global-Local View Model. This programming model exploits the heterogeneous access latencies in many-core systems.
In this model, each thread maintains different views on the shared object: a
thread-local view and a global view. As the thread-local view is not shared,
it can be updated without incurring synchronization costs. The local updates
become visible to other threads only after the thread-local view is merged
with the global view. This scheme improves the performance at the expense
of linearizability.
Besides the weak operations on the local view, the model also allows strong
operations on the global view. Combining operations on the global and the
local views, we can build data types with customizable consistency semantics
on the spectrum between sequential and purely mergeable data types. Thus
the model provides a framework that captures the semantics of Multi-View
Data Types. We discuss a formal operational semantics of the model. We also introduce a method for verifying the correctness of the implementations of several multi-view data types.
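As a toy illustration of the global/local split (not the thesis's formal model or its Haskell implementation), the sketch below shows a mergeable counter in Python: each thread updates a private local view without synchronization, and the update becomes visible to other threads only when it is merged into the lock-protected global view.

import threading

class MergeableCounter:
    def __init__(self):
        self._global = 0
        self._lock = threading.Lock()
        self._local = threading.local()

    def weak_increment(self, n=1):
        # update the thread-local view only; no synchronization cost
        self._local.delta = getattr(self._local, "delta", 0) + n

    def merge(self):
        # local updates become visible to other threads only here
        delta = getattr(self._local, "delta", 0)
        self._local.delta = 0
        with self._lock:
            self._global += delta

    def strong_read(self):
        # a "strong" operation observes only the merged global view
        with self._lock:
            return self._global

A weak read of the thread-local view would be allowed to lag behind the global state, which is exactly the linearizability trade-off described above.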
Frequently, applications require updating shared objects in an “all-or-nothing” manner. Therefore, the mechanisms to synchronize access to individual objects are not sufficient. Software Transactional Memory (STM)
is a mechanism that helps the programmer to correctly synchronize access to
multiple mutable shared data by serializing the transactional reads and writes.
But under high contention, serializable transactions incur frequent aborts and
limit parallelism, which can lead to severe performance degradation.
Mergeable Transactional Memory (MTM), proposed in this thesis, allows accessing multi-view data types within a transaction. Instead of aborting
and re-executing the transaction, MTM merges its changes using the data-type
specific merge semantics. Thus it provides a consistency semantics that allows
for more scalability even under contention. The evaluation of our prototype
implementation in Haskell shows that mergeable transactions outperform serializable transactions even under low contention while providing a structured
and type-safe interface.
Towards A Non-tracking Web
(2016)
Today, many publishers (e.g., websites, mobile application developers) commonly use third-party analytics services and social widgets. Unfortunately, this scheme allows these third parties to track individual users across the web, creating privacy concerns and leading to reactions to prevent tracking via blocking, legislation and standards. While improving user privacy, these efforts do not consider the functionality third-party tracking enables publishers to use: to obtain aggregate statistics about their users and increase their exposure to other users via online social networks. Simply preventing third-party tracking without replacing the functionality it provides cannot be a viable solution; leaving publishers without essential services will hurt the sustainability of the entire ecosystem.
In this thesis, we present alternative approaches to bridge this gap between privacy for users and functionality for publishers and other entities. We first propose a general and interaction-based third-party cookie policy that prevents third-party tracking via cookies, yet enables social networking features for users when wanted, and does not interfere with non-tracking services for analytics and advertisements. We then present a system that enables publishers to obtain rich web analytics information (e.g., user demographics, other sites visited) without tracking the users across the web. While this system requires no new organizational players and is practical to deploy, it necessitates the publishers to pre-define answer values for the queries, which may not be feasible for many analytics scenarios (e.g., search phrases used, free-text photo labels). Our second system complements the first system by enabling publishers to discover previously unknown string values to be used as potential answers in a privacy-preserving fashion and with low computation overhead for clients as well as servers. These systems suggest that it is possible to provide non-tracking services with (at least) the same functionality as today’s tracking services.
We present a constructive theory for locally supported approximate identities on the unit ball in \(\mathbb{R}^3\). The uniform convergence of the convolutions of the derived kernels with an arbitrary continuous function \(f\) to \(f\), i.e. the defining property of an approximate identity, is proved. Moreover, an explicit representation for a class of such kernels is given. The original publication is available at www.springerlink.com
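For orientation, the defining property of an approximate identity referred to above can be stated schematically as follows; the scaling parameter \(h\) and the way the ball convolution \(*\) is written are notational assumptions and not taken from the paper itself:
\[
\lim_{h \to 0} \; \sup_{x \in \mathbb{B}} \bigl| (K_h * f)(x) - f(x) \bigr| = 0
\qquad \text{for every } f \in C(\mathbb{B}), \quad
\mathbb{B} = \{ x \in \mathbb{R}^3 : |x| \le 1 \}.
\]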
This research work focuses on the generation of a high-resolution digital surface model featuring complex urban surface characteristics in order to enrich the database for runoff simulations of urban drainage systems. The discussion of global climate change and its possible consequences has taken centre stage over the last decade. Global climate change has triggered more erratic weather patterns by causing severe and unpredictable rainfall events in many parts of the world. The incidence of more frequent rainfall has led to the problem of increased flooding in urban areas. The increased property values of urban structures and threats to people's personal safety have hastened the demand for a detailed urban drainage simulation model for accurate flood prediction. Although the 2D hydraulic modelling approach has been in practice in rural floodplains for quite a long time, its use in urban floodplains is still in its infancy. This is mainly due to the lack of a high-resolution topographic model describing urban surface characteristics properly.
High-resolution surface data describing the hydrologic and hydraulic properties of complex urban areas are the prerequisite for more accurately describing and simulating flood water movement and thereby taking adequate measures against urban flooding. Airborne LiDAR (Light Detection and Ranging) is an efficient way of generating a high-resolution Digital Surface Model (DSM) of any study area. Processing the high-density and large volume of unstructured LiDAR data towards generating fine-resolution spatial databases is a difficult and time-consuming task when relying on human intervention alone. The application of robust algorithms for processing this massive volume of data can significantly reduce the data processing time and thereby increase the degree of automation as well as the accuracy.
This research work presents a number of techniques pertaining to processing, filtering and classification of LiDAR point data in order to achieve higher degree of automation and accuracy towards generating a high resolution urban surface model. This research work also describes the use of ancillary datasets such as aerial images and topographic maps in combination with LiDAR data for feature detection and surface characterization. The integration of various data sources facilitates detailed modelling of street networks and accurate detection of various urban surface types (e.g. grasslands, bare soil and impervious surfaces).
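As a small, purely illustrative example of such point-data filtering (not an algorithm from this work), the sketch below separates tentative ground returns from non-ground returns with a grid-based minimum filter; the cell size and height tolerance are hypothetical values.

import numpy as np

def ground_filter(points, cell=1.0, height_tol=0.3):
    """points: (n, 3) array of x, y, z coordinates."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    keys = ij[:, 0] * 100000 + ij[:, 1]          # flatten the 2-D cell index
    ground = np.zeros(len(points), dtype=bool)
    for key in np.unique(keys):
        idx = np.where(keys == key)[0]
        zmin = points[idx, 2].min()              # lowest return in the cell
        ground[idx] = points[idx, 2] <= zmin + height_tol
    return ground                                # True = tentative ground point

points = np.random.rand(10000, 3) * [100, 100, 10]
is_ground = ground_filter(points)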
While the accurate characterization of the various surface types contributes to better modelling of rainfall-runoff processes, the LiDAR-derived fine-resolution DSM serves as input to 2D hydraulic models and is capable of simulating surface flooding scenarios in cases where the sewer systems are surcharged.
Thus, this research work develops high resolution spatial databases aiming at improving the accuracy of hydrologic and hydraulic databases of urban drainage systems. Later, these databases are given as input to a standard flood simulation software in order to: 1) test the suitability of the databases for running the simulation; 2) assess the performance of the hydraulic capacity of urban drainage systems and 3) predict and visualize the surface flooding scenarios in order to take necessary flood protection measures.
The goal of this work is to develop statistical natural language models and processing techniques
based on Recurrent Neural Networks (RNN), especially the recently introduced Long Short-
Term Memory (LSTM). Due to their adapting and predicting abilities, these methods are more robust and easier to train than traditional methods, i.e., word lists and rule-based models. They
improve the output of recognition systems and make them more accessible to users for browsing
and reading. These techniques are required, especially for historical books which might take
years of effort and huge costs to manually transcribe them.
The contributions of this thesis are several new methods that offer both high computational performance and high accuracy. First, an error model for improving recognition results is designed. As a second contribution, a hyphenation model for difficult transcriptions is suggested for alignment purposes. Third, a dehyphenation model is used to classify the hyphens in noisy transcriptions. The fourth contribution is using LSTM networks for normalizing historical orthography. A size normalization alignment is implemented to equalize the size of the strings before the training
phase. Using the LSTM networks as a language model to improve the recognition results is
the fifth contribution. Finally, the sixth contribution is a combination of Weighted Finite-State
Transducers (WFSTs), and LSTM applied on multiple recognition systems. These contributions
will be elaborated in more detail.
Context-dependent confusion rules is a new technique to build an error model for Optical
Character Recognition (OCR) corrections. The rules are extracted from the OCR confusions
which appear in the recognition outputs and are translated into edit operations, e.g., insertions,
deletions, and substitutions using the Levenshtein edit distance algorithm. The edit operations
are extracted in a form of rules with respect to the context of the incorrect string to build an
error model using WFSTs. The context-dependent rules assist the language model to find the
best candidate corrections. They avoid the calculations that occur in searching the language
model and they also make the language model able to correct incorrect words by using context-
dependent confusion rules. The context-dependent error model is applied to the University of Washington (UWIII) dataset and the Urdu Nastaleeq script dataset. It improves the OCR results from an error rate of 1.14% to an error rate of 0.68%, and it performs better than the state-of-the-art single rule-based approach, which returns an error rate of 1.0%.
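The following minimal sketch illustrates how such context-dependent confusion rules can be harvested from aligned OCR output and ground truth via edit operations; the alignment via difflib and the one-character context window are simplifying assumptions rather than the thesis's exact procedure.

import difflib

def confusion_rules(ocr_line, truth_line, context=1):
    rules = []
    sm = difflib.SequenceMatcher(None, ocr_line, truth_line)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "equal":
            continue
        left = ocr_line[max(0, i1 - context):i1]
        right = ocr_line[i2:i2 + context]
        # rule: within this context, the OCR substring maps to the truth substring
        rules.append((left, ocr_line[i1:i2], right, truth_line[j1:j2]))
    return rules

print(confusion_rules("tbe quick brovvn fox", "the quick brown fox"))
# e.g. [('t', 'b', 'e', 'h'), ('o', 'vv', 'n', 'w')]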
This thesis describes a new, simple, fast, and accurate system for generating correspondences
between real scanned historical books and their transcriptions. The alignment has many challenges: first, the transcriptions might have different modifications and layout variations compared to the original book. Second, the recognition of historical books suffers from misrecognition and segmentation errors, which make the alignment more difficult, especially since line breaks and pages will not have the same correspondences. Adapted WFSTs are designed to represent the transcription. The WFSTs process Fraktur ligatures and adapt the transcription with a hyphenation model that allows the alignment with respect to the variations of the hyphenated words in the line
breaks of the OCR documents. In this work, several approaches are implemented to be used for
the alignment such as: text-segments, page-wise, and book-wise approaches. The approaches
are evaluated on a German calligraphic (Fraktur) script historical documents dataset from the “Wanderungen durch die Mark Brandenburg” volumes (1862-1889). The text-segmentation approach
returns an error rate of 2.33% without using a hyphenation model and an error rate of 2.0%
using a hyphenation model. Dehyphenation methods are presented to remove the hyphen from
the transcription. They provide the transcription in a readable and reflowable format to be used
for alignment purposes. We consider the task as a classification problem and classify the hyphens from the given patterns as hyphens for line breaks, combined words, or noise. The methods are applied to clean and noisy transcriptions in different languages. The Decision Tree classifier gives the best performance, with an accuracy of 98% on the UWIII dataset and 97% on the Fraktur script.
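As a toy illustration of such a hyphen classifier, a decision tree could be trained as follows; the feature set, the hand-made examples, and the label names are invented for the sketch and are not the features used in the thesis.

from sklearn.tree import DecisionTreeClassifier

def features(left_word, right_word, at_line_end, merged_in_lexicon):
    # hypothetical features describing the context of one hyphen occurrence
    return [len(left_word), len(right_word),
            int(at_line_end), int(merged_in_lexicon)]

X = [features("Wanderun", "gen", True, True),    # line-break hyphenation
     features("Nord", "Ost", False, False),      # compound word
     features("", "", True, False)]              # stray mark / noise
y = ["line_break", "compound", "noise"]

clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([features("Bran", "denburg", True, True)]))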
A new method for normalizing historical OCRed text using LSTM is implemented for different texts, ranging from Early New High German (14th to 16th centuries) to modern forms in New High German, applied to the Luther Bible. It performed better than the rule-based word-list approaches. It provides a transcription for various purposes such as part-of-speech tagging and n-grams. Also, two new techniques are presented for aligning the OCR results and normalizing the string size by adding Character-Epsilons or Appending-Epsilons. They allow deletion and insertion at the appropriate position in the string. In normalizing historical wordforms to modern
wordforms, the accuracy of LSTM on seen data is around 94%, while the state-of-the-art combined rule-based method returns 93%. On unseen data, LSTM returns 88% and the combined
rule-based method returns 76%. In normalizing modern wordforms to historical wordforms, the
LSTM delivers the best performance and returns 93.4% on seen data and 89.17% on unknown
data.
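The sketch below shows one way of reading the two size-normalization ideas mentioned above: "Appending-Epsilons" pads the shorter string at the end, while "Character-Epsilons" inserts epsilons at the positions suggested by a character alignment. The epsilon symbol and the difflib-based alignment are interpretive assumptions, not the thesis's exact procedure.

import difflib

EPS = "\u03b5"  # placeholder epsilon symbol

def append_epsilons(a, b):
    n = max(len(a), len(b))
    return a.ljust(n, EPS), b.ljust(n, EPS)

def character_epsilons(a, b):
    out_a, out_b = [], []
    for op, i1, i2, j1, j2 in difflib.SequenceMatcher(None, a, b).get_opcodes():
        seg_a, seg_b = a[i1:i2], b[j1:j2]
        n = max(len(seg_a), len(seg_b))
        out_a.append(seg_a.ljust(n, EPS))
        out_b.append(seg_b.ljust(n, EPS))
    return "".join(out_a), "".join(out_b)

print(character_epsilons("vnnd", "und"))  # e.g. ('vnnd', 'uεnd')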
In this thesis, a deep investigation has been done on constructing high-performance language
modeling for improving the recognition systems. A new method to construct a language model
using LSTM is designed to correct OCR results. The method is applied to the UWIII and Urdu script datasets. The LSTM approach outperforms the state-of-the-art, especially for tokens unseen during training. On the UWIII dataset, the LSTM reduces the OCR error rate from 1.14% to 0.48%. On the Urdu Nastaleeq script dataset, the LSTM reduces the error rate
from 6.9% to 1.58%.
Finally, the integration of multiple recognition outputs can give higher performance than a
single recognition system. Therefore, a new method for combining the results of OCR systems is
explored using WFSTs and LSTM. It uses multiple OCR outputs and votes for the best output to improve the OCR results. It performs better than the ISRI voting tool and the Pairwise of Multiple Sequence alignment. The purpose is to provide a correct transcription
so that it can be used for digitizing books, linguistics purposes, N-grams, and part-of-speech
tagging. The method consists of two alignment steps. First, two recognition systems are aligned
using WFSTs. The transducers are designed to be more flexible and compatible with the different symbols in line and page breaks to avoid the segmentation and misrecognition errors.
The LSTM model then is used to vote the best candidate correction of the two systems and
improve the incorrect tokens which are produced during the first alignment. The approaches
are evaluated on OCR outputs from the English UWIII and the historical German Fraktur datasets, which are obtained from state-of-the-art OCR systems. The experiments show that the error
rate of ISRI-Voting is 1.45%, the error rate of the Pairwise of Multiple Sequence is 1.32%, the
error rate of the Line-to-Page alignment is 1.26% and the error rate of the LSTM approach has
the best performance with 0.40%.
The purpose of this thesis is to contribute methods providing correct transcriptions corresponding to the original book. This is considered to be the first step towards an accurate and
more effective use of the documents in digital libraries.
Numerical Algorithms in Algebraic Geometry with Implementation in Computer Algebra System SINGULAR
(2011)
Polynomial systems arise in many applications: robotics, kinematics, chemical kinetics,
computer vision, truss design, geometric modeling, and many others. Many polynomial
systems have solution sets, called algebraic varieties, which have several irreducible components. A fundamental problem of numerical algebraic geometry is to decompose
such an algebraic variety into its irreducible components. The witness point sets are
the natural numerical data structure to encode irreducible algebraic varieties.
Sommese, Verschelde and Wampler represented the irreducible algebraic decomposition of
an affine algebraic variety \(X\) as a union of finite disjoint sets \(\cup_{i=0}^{d}W_i=\cup_{i=0}^{d}\left(\cup_{j=1}^{d_i}W_{ij}\right)\) called numerical irreducible decomposition. The \(W_i\) correspond to the pure i-dimensional components, and the \(W_{ij}\) represent the i-dimensional irreducible components. The numerical irreducible decomposition is implemented in BERTINI.
We modify this concept, partially using Gröbner bases, triangular sets, local dimension, and
the so-called zero sum relation. We present in the second chapter the corresponding
algorithms and their implementations in SINGULAR. We give some examples and timings,
which show that the modified algorithms are more efficient if the number of variables is not
too large. For a large number of variables BERTINI is more efficient.
Leykin presented an algorithm to compute the embedded components of an algebraic variety
based on the concept of the deflation of an algebraic variety.
Depending on the modified algorithm mentioned above, we will present in the third chapter an
algorithm and its implementation in SINGULAR to compute the embedded components.
The irreducible decomposition of algebraic varieties allows us to formulate in the fourth
chapter some numerical algebraic algorithms.
In the last chapter we present two SINGULAR libraries. The first library is used to compute
the numerical irreducible decomposition and the embedded components of an algebraic variety.
The second library contains the procedures of the algorithms in the last chapter to test the inclusion and equality of two algebraic varieties, to compute the degree of a pure i-dimensional component, and to compute the local dimension.
The Context and Its Importance: In safety and reliability analysis, the information generated by Minimal Cut Set (MCS) analysis is large.
The top-level event (TLE), which is the root of the fault tree (FT), represents a hazardous state of the system being analyzed.
MCS analysis helps in analyzing the fault tree qualitatively, and quantitatively when accompanied by quantitative measures.
The information reveals the bottlenecks in the fault tree design, leading to the identification of weaknesses of the system being examined.
Safety analysis (which contains the MCS analysis) is especially important for critical systems, where harm can be done to the environment or to humans during system usage, causing injuries or even death.
Minimal Cut Set (MCS) analysis is performed using computers and generates a large amount of information.
This phase is called MCS analysis I in this thesis.
The information is then analyzed by the analysts to determine possible issues and to improve the design of the system regarding its safety as early as possible.
This phase is called MCS analysis II in this thesis.
The goal of my thesis was to develop interactive visualizations to support MCS analysis II of one fault tree (FT).
The Methodology: Safety visualization (in this thesis, Minimal Cut Set analysis II visualization) is an emerging field, and no complete checklist of Minimal Cut Set analysis II requirements and gaps was available from the perspective of visualization and interaction capabilities.
Therefore, I conducted multiple studies using different methods with different data sources (i.e., triangulation of methods and data) to determine these requirements and gaps before developing and evaluating visualizations and interactions supporting Minimal Cut Set analysis II.
Thus, the following approach was taken in my thesis:
1- First, a triangulation of mixed methods and data sources was conducted.
2- Then, four novel interactive visualizations and one novel interaction widget were developed.
3- Finally, these interactive visualizations were evaluated both objectively and subjectively (in comparison to multiple safety tools),
from the point of view of users and developers of the safety tools that perform MCS analysis I, with respect to their degree of support for MCS analysis II, and from the point of view of non-domain people, using empirical strategies.
The Spiral tool supports analysts with different color visions, i.e., full vision and the color deficiencies protanopia, deuteranopia, and tritanopia. It supports 100 out of 103 (97%) of the requirements obtained from the triangulation and fills 37 out of 39 (95%) of the gaps. Its usability was rated high (better than their best currently used tools) by the users of the safety and reliability tools (RiskSpectrum, ESSaRel, FaultTree+, and a self-developed tool), and at least similar to the best currently used tools from the point of view of the CAFTA tool developers. Its quality regarding its degree of support for MCS analysis II was rated higher than that of the FaultTree+ tool. The time spent for discovering the critical MCSs in a problem of 540 MCSs (with a worst case of all MCSs having equal order) was less than a minute, while achieving 99.5% accuracy. The scalability of the Spiral visualization was above 4000 MCSs for a comparison task. The Dynamic Slider reduces the interaction movements by up to 85.71% relative to previous sliders and solves their overlapping-thumb issues. In addition, the Spiral tool provides a 3D model view of the system being analyzed; allows changing the coloring of the MCSs according to the color vision of the user; supports selecting a BE (i.e., multi-selection of MCSs), so that the BEs' NoO and quality can be observed; provides two interaction speeds for panning and zooming in the MCS, BE, and model views; and provides an MCS, a BE, and a physical tab for starting the analysis from the MCSs, the BEs, or the physical parts. It combines the MCS analysis results with the model of an embedded system, enabling the analysts to directly relate safety information to the corresponding parts of the system being analyzed, and provides an interactive mapping between the textual information of the BEs and MCSs and the parts related to the BEs.
Verifications and Assessments: I evaluated all visualizations and the interaction widget both objectively and subjectively, and finally evaluated the resulting Spiral visualization tool, again both objectively and subjectively, regarding its perceived quality and its degree of support for MCS analysis II.
The success of transformations depends on a great many factors. In the present work, some selected factors were taken up and validated. The work does not contain a conclusive assessment of the success factors presented in the literature.
In an organizational context, change processes in most cases affect social relationships, be it at the individual, team, or whole-organization level. The larger, more incisive, more far-reaching, or more radical the change, the more difficult it is to foresee the outcome and, in particular, the prospects of success. According to the research results, in addition to the examined success factors, previous experiences also play an important role. An organization can to a large extent steer, reflect on, and learn from its previous experiences. The individual change experiences of employees, however, are difficult to grasp and can at best be worked through with individual support.
In principle, it can be confirmed that the selected and validated measures can be used promisingly for the positive development of organizational culture. Beyond shaping the organizational culture, organizations should develop and foster a general affinity for change. The interviews confirm what is postulated in the literature: organizational culture is the basis of collaboration in organizations and must therefore not be neglected. Organizational culture is regarded by all interviewees as a very important (if not the most important) element of the organization.
In practice, culture development is perceived as a side effect of other interventions. No clear statement was made as to how the culture can be developed in a particular direction.
This research explores the development of web-based reference software for the characterisation of surface roughness from two-dimensional surface data. The reference software used for verification of surface characteristics makes the evaluation methods easier for clients. The algorithms used in this software are based on international ISO standards. Most software used in industrial measuring instruments may produce variations in the calculated parameters due to numerical differences in the calculation. Such variations can be verified using the proposed reference software.
The evaluation of surface roughness is carried out in four major steps: data capture, data alignment, data filtering and parameter calculation. This work walks through each of these steps, explaining how surface profiles are evaluated by the pre-processing steps called fitting and filtering. The analysis process is then followed by parameter evaluation according to the DIN EN ISO 4287 and DIN EN ISO 13565-2 standards to extract important information from the profile to characterise surface roughness.
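As a small illustration of the final parameter-calculation step, the sketch below computes two common ISO 4287 amplitude parameters (Ra and Rq) from an already filtered roughness profile; the synthetic profile and the assumption that fitting and filtering have already been applied are simplifications for the example.

import numpy as np

def roughness_parameters(z):
    z = np.asarray(z, dtype=float)
    z = z - z.mean()                    # heights relative to the mean line
    Ra = np.mean(np.abs(z))             # arithmetic mean deviation
    Rq = np.sqrt(np.mean(z ** 2))       # root-mean-square deviation
    return Ra, Rq

profile = 0.8 * np.sin(np.linspace(0, 40 * np.pi, 4000)) \
          + 0.05 * np.random.randn(4000)
Ra, Rq = roughness_parameters(profile)
print(f"Ra = {Ra:.3f} um, Rq = {Rq:.3f} um")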
This work deals with the multifaceted field of tax loss deduction, namely the concrete design of the loss carryforward. In addition to the constitutional assessment of the relevant provisions, the author places particular emphasis on the question of how Section 10d EStG should be designed in the sense of a coherent loss deduction system. The upheavals of the Corona crisis have also triggered lively legislative activity in the area under investigation. The author takes this as an occasion to examine the pandemic-related changes separately with regard to their lawfulness as well as their economic effectiveness, and to question the compatibility of these innovations with the existing system of norms.
LinTim is a scientific software toolbox that has been under development since 2007, making it possible to solve the various planning steps in public transportation. Although the name originally derives from "Lineplanning and Timetabling", the available functions have grown far beyond this scope.
This document is the documentation for version 2020.02.
For more information, see https://www.lintim.net
By using void formers in reinforced concrete slabs, concrete, steel and consequently weight can be saved. The material savings reduce the primary energy demand as well as the greenhouse gas emissions during production. Voided slabs therefore represent a more resource-efficient construction method compared to conventional solid slabs. Owing to the significantly reduced dead load and a comparatively small loss of stiffness, slabs with large spans can also be realized.
The individual load-bearing mechanisms of the slabs are in principle adversely affected by the void formers. The load-bearing capacity of voided slabs with flattened, rotationally symmetric void formers was analyzed in detail in this dissertation. On the basis of experimental and theoretical investigations, design concepts were developed for the flexural capacity, the shear capacity, the shear force transfer in the composite joint, and the local punching of the slab layer above the void formers. Taking these design concepts into account, voided slabs can be produced at the safety level required by the building authorities.
For the shear capacity of reinforced concrete slabs without shear reinforcement, no generally accepted, mechanically based design concept is currently available. The influence of the individual load-bearing mechanisms on failure was analyzed experimentally. For this purpose, tests were carried out with a relocated compression zone as well as with deactivated crack-surface interlock and with deactivated dowel action. The computational contribution of the individual mechanisms to the total load-bearing capacity could be visualized and verified by recalculating tests on voided and service-installation slabs using an existing mechanically based calculation model. This contributes to a better understanding of shear capacity.
Investigations were carried out on the expression and switching of serotype proteins in Paramecium primaurelia, strain 156. To detect the different serotype expressions, immunofluorescence staining and a specific RT-PCR were established. With this method, the course of a temperature-induced serotype switch was documented. The influence of further environmental parameters on the expression of the serotype was examined. Field experiments were intended to show the expression of the serotypes under multifactorial stimulation. In addition, the co-expression of two serotype proteins on a single cell could be demonstrated.
In these notes we will discuss some aspects of a problem arising in the car industry. For the sake of clarity we will set the problem in an extremely simplified scheme. Suppose that we have a body which is emitting sound, and that the sound is measured at a finite number of points around the body. We wish to determine the intensity of the sound at an observation point which is moving.
Using molecular dynamics simulation, we study the cutting of an Fe single crystal using tools with various rake angles α. We focus on the (110)[001] cut system, since here, the crystal plasticity is governed by a simple mechanism for not too strongly negative rake angles. In this case, the evolution of the chip is driven by the generation of edge dislocations with the Burgers vector b = 1/2 [111], such that a fixed shear angle of φ = 54.7° is established. It is independent of the rake angle of the tool. The chip form is rectangular, and the chip thickness agrees with the theoretical result calculated for this shear angle from the law of mass conservation. We find that the force angle χ between the direction of the force and the cutting direction is independent of the rake angle; however, it does not obey the predictions of macroscopic cutting theories, nor the correlations observed in experiments of (polycrystalline) cutting of mild steel. Only for (strongly) negative rake angles, the mechanism of plasticity changes, leading to a complex chip shape or even suppressing the formation of a chip. In these cases, the force angle strongly increases while the friction angle tends to zero.
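For context, the mass-conservation relation alluded to above is not spelled out in the abstract; in the standard orthogonal-cutting form (stated here as an assumption of what is meant), the chip thickness \(t_c\) follows from the uncut chip thickness \(t_0\), the rake angle \(\alpha\) and the shear angle \(\varphi\) as
\[
t_c = t_0 \, \frac{\cos(\varphi - \alpha)}{\sin \varphi},
\]
so that fixing \(\varphi = 54.7^\circ\) determines the chip thickness for each rake angle.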
A growing share of all software development project work is being done by geographically distributed teams. To satisfy shorter product design cycles, expert team members for a development project may need to be recruited globally. Yet to avoid extensive travelling or replacement costs, distributed project work is preferred. Current-generation software engineering tools and associated systems, processes, and methods were for the most part developed to be used within a single enterprise. Major innovations have lately been introduced to enable groupware applications on the Internet to support global collaboration. However, their deployment for distributed software projects requires further research. In particular, groupware methods must seamlessly be integrated with project and product management systems to make them attractive for industry. In this position paper we outline the major challenges concerning distributed (virtual) software projects. Based on our experiences with software process modeling and enactment environments, we then propose approaches to solve those challenges.
Tropical intersection theory
(2010)
This thesis consists of five chapters: Chapter 1 contains the basics of the theory and is essential for the rest of the thesis. Chapters 2-5 are to a large extent independent of each other and can be read separately. - Chapter 1: Foundations of tropical intersection theory In this first chapter we set up the foundations of a tropical intersection theory covering many concepts and tools of its counterpart in algebraic geometry such as affine tropical cycles, Cartier divisors, morphisms of tropical cycles, pull-backs of Cartier divisors, push-forwards of cycles and an intersection product of Cartier divisors and cycles. Afterwards, we generalize these concepts to abstract tropical cycles and introduce a concept of rational equivalence. Finally, we set up an intersection product of cycles and prove that every cycle is rationally equivalent to some affine cycle in the special case that our ambient cycle is R^n. We use this result to show that rational and numerical equivalence agree in this case and prove a tropical Bézout's theorem. - Chapter 2: Tropical cycles with real slopes and numerical equivalence In this chapter we generalize our definitions of tropical cycles to polyhedral complexes with non-rational slopes. We use this new definition to show that if our ambient cycle is a fan then every subcycle is numerically equivalent to some affine cycle. Finally, we restrict ourselves to cycles in R^n that are "generic" in some sense and study the concept of numerical equivalence in more detail. - Chapter 3: Tropical intersection products on smooth varieties We define an intersection product of tropical cycles on tropical linear spaces L^n_k and on other, related fans. Then, we use this result to obtain an intersection product of cycles on any "smooth" tropical variety. Finally, we use the intersection product to introduce a concept of pull-backs of cycles along morphisms of smooth tropical varieties and prove that this pull-back has all expected properties. - Chapter 4: Weil and Cartier divisors under tropical modifications First, we introduce "modifications" and "contractions" and study their basic properties. After that, we prove that under some further assumptions a one-to-one correspondence of Weil and Cartier divisors is preserved by modifications. In particular we can prove that on any smooth tropical variety we have a one-to-one correspondence of Weil and Cartier divisors. - Chapter 5: Chern classes of tropical vector bundles We give definitions of tropical vector bundles and rational sections of tropical vector bundles. We use these rational sections to define the Chern classes of such a tropical vector bundle. Moreover, we prove that these Chern classes have all expected properties. Finally, we classify all tropical vector bundles on an elliptic curve up to isomorphisms.
The land-use plan (Flächennutzungsplan) is the central instrument of comprehensive planning at the level of the city as a whole and can at the same time be cited as a prime example of the loss of significance of formal plans, a loss that is not justified in view of the problems to be solved in practice. Confronted with current challenges of urban development, it is above all the overly lengthy procedures required for its preparation and its overly rigid contents, which insufficiently take account of uncertainties in actual development, that are criticized. Consequently, ways of further developing the formal planning instruments must be sought. Over the past decades, a number of selective adjustments have been made to the model of the land-use plan. Developments in neighbouring European countries are also noteworthy: the Local Development Framework newly introduced in the English planning system is intended to be characterized by flexibility and modularity while at the same time strengthening the strategic steering effect of its contents. A systematic examination of the requirements, potentials and limits of a further development of the model of the land-use plan has so far been lacking. Moreover, for a future model to be able to unfold its intended effects, a fundamental examination of the prevailing understanding of city-wide planning and its results is required. Against this background, the aim of the present work is to systematically derive and examine the model of the land-use plan in order to subsequently develop it further with the goal of increasing the steering power of the contents of the city-wide plan. The findings from a consideration of the Local Development Framework are incorporated into this. The work comes to the conclusion that, despite numerous adjustments to the model of the land-use plan, some characteristics from its early days have been preserved that must be regarded as no longer appropriate. The main weaknesses of the current model include its static character and the insufficient consideration of the process nature of urban development, including the examination of potential development alternatives. The study of the Local Development Framework shows that a transfer of elements to the German system can be assumed to be possible. The proposals developed for adjustments to the model of the land-use plan open up the possibility of further developing the land-use plan into a modular, dynamic and strategic instrument of city-wide planning. The adjustments focus on the new overall structure as a portfolio of graphical and textual, formal and informal components, and on the integration of the factor of time as well as other strategic aspects of urban development, accompanied by a new understanding of the result of city-wide planning, according to which the land-use plan no longer represents the one canonical end product but is continuously reviewed and further developed together with its various components.
Determination of interaction between MCT1 and CAII via a mathematical and physiological approach
(2008)
The enzyme carbonic anhydrase isoform II (CAII), catalysing the hydration and dehydration of CO2, enhances the transport activity of the monocarboxylate transporter isoform I (MCT1, SLC16A1) expressed in Xenopus oocytes by a mechanism that does not require CAII catalytic activity (Becker et al. (2005) J. Biol. Chem., 280). In the present study, we have investigated the mechanism of the CAII-induced increase in transport activity by using electrophysiological techniques and a mathematical model of the MCT1 transport cycle. The model consists of six states arranged in a cyclic fashion and features an ordered, mirror-symmetric binding mechanism where binding and unbinding of the proton to the transport protein is considered to be the rate-limiting step under physiological conditions. An explicit rate expression for the substrate flux is derived using model reduction techniques. By treating the pools of intra- and extracellular MCT1 substrates as dynamic states, the time-dependent kinetics are obtained by integration using the derived expression for the substrate flux. The simulations were compared with experimental data obtained from MCT1-expressing oocytes injected with different amounts of CAII. The model suggests that CAII increases the effective rate constants of the proton reactions, possibly by working as a proton antenna.
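A rough sketch of the idea of treating the substrate pools as dynamic states and integrating a carrier flux over time is shown below. The flux law is a generic reversible carrier expression with hypothetical parameters, not the reduced six-state rate expression derived in the paper, and the CAII effect is mimicked simply by scaling the apparent turnover rate.

import numpy as np
from scipy.integrate import odeint

def flux(S_out, S_in, vmax=1.0, km=3.0, caii_factor=1.0):
    # net inward flux of the monocarboxylate (arbitrary units)
    return caii_factor * vmax * (S_out / (km + S_out) - S_in / (km + S_in))

def rhs(y, t, caii_factor):
    S_out, S_in = y
    J = flux(S_out, S_in, caii_factor=caii_factor)
    return [-J, J]                      # what leaves the bath enters the cell

t = np.linspace(0, 60, 300)
without_caii = odeint(rhs, [10.0, 0.0], t, args=(1.0,))
with_caii = odeint(rhs, [10.0, 0.0], t, args=(2.5,))   # faster apparent turnover
print(with_caii[-1], without_caii[-1])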
Information Visualization (InfoVis) and Human-Computer Interaction (HCI) have strong ties with each other. Visualization supports the human cognitive system by providing interactive and meaningful images of the underlying data. On the other hand, the HCI domain cares about the usability of the designed visualization from the human perspective. Thus, designing a visualization system requires considering many factors in order to achieve the desired functionality and system usability. Achieving these goals helps users understand the inner behavior of complex data sets in less time.
Graphs are widely used data structures to represent the relations between data elements in complex applications. Due to the diversity of this data type, graphs have been applied in numerous information visualization applications (e.g., state transition diagrams, social networks, etc.). Therefore, many graph layout algorithms have been proposed in the literature to help in visualizing this rich data type. Some of these algorithms are used to visualize large graphs, while others handle medium-sized graphs. Regardless of the graph size, the resulting layout should be understandable from the users' perspective and at the same time should fulfill a list of aesthetic criteria to increase the readability of the representation. Respecting these two principles leads to graph visualizations that help users understand and explore the complex behavior of critical systems.
In this thesis, we utilize the graph visualization techniques in modeling the structural and behavioral aspects of embedded systems. Furthermore, we focus on evaluating the resulting representations from the users’ perspectives.
The core contribution of this thesis is a framework, called ESSAVis (Embedded Systems Safety Aspect Visualizer). This framework not only visualizes some of the safety aspects (e.g., CFT models) of embedded systems, but also helps engineers and experts in analyzing safety-critical situations of the system. For this, the framework provides a 2D-plus-3D environment in which the 2D part presents the graph representation of the abstract data about the safety aspects of the underlying embedded system, while the 3D part presents the underlying system's 3D model. Both views are integrated smoothly in a 3D world. In order to check the effectiveness and feasibility of the framework and its sub-components, we conducted many studies with real end users as well as with general users. Results of the main study that targeted the overall ESSAVis framework show a high acceptance ratio and higher accuracy with better performance when using the visual support provided by the framework.
The ESSAVis framework has been designed to be compatible with different 3D technologies. This enabled us to use the 3D stereoscopic depth of such technologies to encode node attributes in node-link diagrams. In this regard, we conducted an evaluation study to measure the usability of the stereoscopic depth cue approach, called the stereoscopic highlighting technique, against other selected visual cues (i.e., color, shape, and size). Based on the results, the thesis proposes the Reflection Layer extension to the stereoscopic highlighting technique, which was also evaluated from the users' perspective. Additionally, we present a new technique, called ExpanD (Expand in Depth), that utilizes the depth cue to show the structural relations between different levels of detail in node-link diagrams. The results of this part open a promising direction of research in which visualization designers can benefit from the richness of 3D technologies in visualizing abstract data in the information visualization domain.
Finally, this thesis proposes the application of the ESSAVis framework as a visual tool in the educational training process of engineers for understanding complex concepts. In this regard, we conducted an evaluation study with computer engineering students in which we used the visual representations produced by ESSAVis to teach the principles of fault detection and failure scenarios in embedded systems. Our work opens directions for investigating many challenges regarding the design of visualization for educational purposes.
The various uses of fiber-reinforced composites, for example in the enclosures of planes, boats and cars, generates the demand for a detailed analysis of these materials. The final goal is to optimize fibrous materials by the means of “virtual material design”. New fibrous materials are virtually created as realizations of a stochastic model and evaluated with physical simulations. In that way, materials can be optimized for specific use cases, without constructing expensive prototypes or performing mechanical experiments. In order to design a practically fabricable material, the stochastic model is first adapted to an existing material and then slightly modified. The virtual reconstruction of the existing material requires a precise knowledge of the geometry of its microstructure. The first part of this thesis describes a fiber quantification method by the means of local measurements of the fiber radius and orientation. The combination of a sparse chord length transform and inertia moments leads to an efficient and precise new algorithm. It outperforms existing approaches with the possibility to treat different fiber radii within one sample, with high precision in continuous space and comparably fast computing time. This local quantification method can be directly applied on gray value images by adapting the directional distance transforms on gray values. In this work, several approaches of this kind are developed and evaluated. Further characterization of the fiber system requires a segmentation of each single fiber. Using basic morphological operators with specific structuring elements, it is possible to derive a probability for each pixel describing if the pixel belongs to a fiber core in a region without overlapping fibers. Tracking high probabilities leads to a partly reconstruction of the fiber cores in non crossing regions. These core parts are then reconnected over critical regions, if they fulfill certain conditions ensuring the affiliation to the same fiber. In the second part of this work, we develop a new stochastic model for dense systems of non overlapping fibers with a controllable level of bending. Existing approaches in the literature have at least one weakness in either achieving high volume fractions, producing non overlapping fibers, or controlling the bending or the orientation distribution. This gap can be bridged by our stochastic model, which operates in two steps. Firstly, a random walk with the multivariate von Mises-Fisher orientation distribution defines bent fibers. Secondly, a force-biased packing approach arranges them in a non overlapping configuration. Furthermore, we provide the estimation of all parameters needed for the fitting of this model to a real microstructure. Finally, we simulate the macroscopic behavior of different microstructures to derive their mechanical and thermal properties. This part is mostly supported by existing software and serves as a summary of physical simulation applied to random fiber systems. The application on a glass fiber reinforced polymer proves the quality of the reconstruction by our stochastic model, as the effective properties match for both the real microstructure and the realizations of the fitted model. This thesis includes all steps to successfully perform virtual material design on various data sets. With novel and efficient algorithms it contributes to the science of analysis and modeling of fiber reinforced materials.
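To make the fiber-generation step tangible, the sketch below draws the step directions of a random walk from a standard 3-D von Mises-Fisher distribution around the previous direction, so that the concentration parameter controls the bending. This is a simplified stand-in: the model in the thesis uses a multivariate von Mises-Fisher orientation distribution and a subsequent force-biased packing step, neither of which is reproduced here, and all parameter values are illustrative.

import numpy as np

def sample_vmf(mu, kappa):
    """Draw one unit vector from a 3-D von Mises-Fisher distribution."""
    mu = mu / np.linalg.norm(mu)
    u = np.random.rand()
    # inverse-CDF sampling of the cosine of the polar angle (3-D case)
    w = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa
    # uniformly random direction in the plane orthogonal to mu
    v = np.random.randn(3)
    v -= v.dot(mu) * mu
    v /= np.linalg.norm(v)
    return w * mu + np.sqrt(max(0.0, 1.0 - w * w)) * v

def bent_fiber(start, direction, n_steps=50, step=1.0, kappa=50.0):
    points = [np.asarray(start, float)]
    d = np.asarray(direction, float)
    for _ in range(n_steps):
        d = sample_vmf(d, kappa)          # smaller kappa -> more strongly bent fiber
        points.append(points[-1] + step * d)
    return np.array(points)

fiber = bent_fiber([0, 0, 0], [0, 0, 1], kappa=100.0)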
More than ten years ago, ER-ANT1 was shown to act as an ATP/ADP antiporter and to exist in the endoplasmic reticulum (ER) of higher plants. Because structurally different transporters generally mediate energy provision to the ER, the physiological function of ER-ANT1 was not directly evident.
Interestingly, mutant plants lacking ER-ANT1 exhibit a photorespiratory phenotype. Although many research efforts were undertaken, the possible connection between the transporter and photorespiration also remained elusive. Here, a forward genetic approach was used to decipher the role of ER-ANT1 in the plant context and its association to photorespiration.
This strategy identified that additional absence of a putative HAD-type phosphatase partially restored the photorespiratory phenotype. Localisation studies revealed that the corresponding protein is targeted to the chloroplast. Moreover, biochemical analyses demonstrate that the HAD-type phosphatase is specific for pyridoxal phosphate. These observations, together with transcriptional and metabolic data of corresponding single (ER-ANT1) and double (ER-ANT1, phosphatase) loss-of-function mutant plants revealed an unexpected connection of ER-ANT1 to vitamin B6 metabolism.
Finally, a scenario is proposed that explains how ER-ANT1 may influence B6 vitamer phosphorylation, how this in turn affects photorespiration, and how it causes the several other physiological alterations observed in the corresponding loss-of-function mutant plants.
Evaluation is an important issue for every scientific field and a necessity for an emerging software technology like case-based reasoning. This paper supplements the review of industrial case-based reasoning tools by K.-D. Althoff, E. Auriol, R. Barletta and M. Manago, which describes the most detailed evaluation of commercial case-based reasoning tools currently available. The author focuses on some important aspects of the evaluation of case-based reasoning systems and gives links to ongoing research.
We present two techniques for reasoning from cases to solve classification tasks: Induction and case-based reasoning. We contrast the two technologies (that are often confused) and show how they complement each other. Based on this, we describe how they are integrated in one single platform for reasoning from cases: The Inreca system.
Case-Based Reasoning for Decision Support and Diagnostic Problem Solving: The INRECA Approach
(1995)
INRECA offers tools and methods for developing, validating, and maintaining decision support systems. INRECA's basic technologies are inductive and case-based reasoning, namely KATE-INDUCTION (cf., e.g., Manago, 1989; Manago, 1990) and S3-CASE, a software product based on PATDEX (cf., e.g., Wess, 1991; Richter & Wess, 1991; Althoff & Wess, 1991). Induction extracts decision knowledge from case databases. It brings to light patterns among cases and helps monitor trends over time. Case-based reasoning relates the engineer's current problem to past experiences.
MOLTKE is a research project dealing with a complex technical application. After describing the domain of CNC machining centers and the applied KA methods, we summarize the concrete KA problems which we have to handle. Then we describe a KA mechanism which supports an engineer in developing a diagnosis system. In chapter 6 we introduce learning techniques operating on diagnostic cases and domain knowledge for improving the diagnostic procedure of MOLTKE. In the last section of this chapter we outline some essential aspects of organizational knowledge which is heavily applied by engineers for analysing such technical systems (Qualitative Engineering). Finally, we give a short overview of the current state of realization and our future plans.
In this paper we will present a design model (in the sense of KADS) for the domain of technical diagnosis. Based on this we will describe the fully implemented expert system shell MOLTKE 3.0, which integrates common knowledge acquisition methods with techniques developed in the fields of Model-Based Diagnosis and Machine Learning, especially Case-Based Reasoning.
We present an approach to systematically describing case-based reasoning systems by different kinds of criteria. One main requirement was the practical relevance of these criteria and their usability for real-life applications. We report on the results we achieved from a case study carried out in the INRECA Esprit project.
Case-based knowledge acquisition, learning and problem solving for diagnostic real world tasks
(1999)
Within this paper we focus on both the solution of real, complex problems using expert system technology and the acquisition of the necessary knowledge from a case-based reasoning point of view. The development of systems which can be applied to real-world problems has to meet certain requirements. For example, all available information sources have to be identified and utilized. Normally, this involves different types of knowledge for which several knowledge representation schemes are needed, because no scheme is equally natural for all sources. When dealing with empirical knowledge, it is important to complement the use of manually compiled, statistically or otherwise induced knowledge by exploiting the intuitive understandability of case-based mechanisms. Thus, an integration of case-based and alternative knowledge acquisition and problem solving mechanisms is necessary. For this, the basis is to define the "role" which case-based inference can "play" within a knowledge acquisition workbench. We discuss a concrete case-based architecture, which has been applied to technical diagnosis problems, and its integration into a knowledge acquisition workbench which additionally includes compiled knowledge and explicit deep models.
In the field of expert systems, problem solving on the basis of cases is currently a very active topic. Since very different fields and disciplines are concerned with it, a corresponding variety of terms and views on case-based problem solving exists. In this contribution we clarify several terms that are important for case-based problem solving and uncover conceptual relationships. The guiding principle is not so much to develop a complete terminological framework, but rather to take a first step towards a simple descriptive framework that enables the comparison of different approaches and systems. On this basis, the current state of research is then presented using concrete systems for case-based diagnosis as examples. The paper concludes with a discussion of open questions and interesting research goals.
Case-based reasoning is a currently much-discussed problem-solving approach. This contribution gives an overview of the current state of research in this area, in particular with regard to the development of expert systems (a first step in this direction was already taken by the contribution of Bartsch-Spörl [BS87]). To this end, we present the mechanisms underlying case-based reasoning. This is complemented by a comparison with alternative methods such as rule-based, analogical, and inductive reasoning, as well as an extensive bibliography.
Retrieval of cases is one important step within the case-based reasoning paradigm. We propose an improvement of this stage in the process model for finding the most similar cases with an average effort of O(log2 n), where n is the number of cases. The basic idea of the algorithm is to use the heterogeneity of the search space for a density-based structuring and to employ this precomputed structure, a k-d tree, for efficient case retrieval according to a given similarity measure sim. In addition to illustrating the basic idea, we present the experimental results of a comparison of four different k-d tree generating strategies and introduce the new notion of virtual bounds, which significantly reduces the retrieval effort from a more pragmatic perspective. The presented approach is fully implemented within the PATDEX system, a case-based reasoning system for diagnostic applications in engineering domains.
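As a rough illustration of k-d-tree-based case retrieval, the sketch below uses a standard library tree and plain Euclidean distance as the similarity measure; it does not reproduce the density-based construction or the virtual-bounds refinement described above, and the case base is purely hypothetical.

import numpy as np
from scipy.spatial import KDTree

# hypothetical case base: each case is a numeric attribute vector
case_base = np.random.default_rng(1).random((10_000, 8))
tree = KDTree(case_base)                 # precomputed retrieval structure

query = np.random.default_rng(2).random(8)
dist, idx = tree.query(query, k=5)       # the 5 most similar cases
print("retrieved case indices:", idx)
print("distances:", dist)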
The growing computational power enables the establishment of the Population Balance Equation (PBE) to model the steady state and dynamic behavior of multiphase flow unit operations. Accordingly, the two-phase flow behavior inside liquid-liquid extraction equipment is characterized by different factors. These factors include: interactions among droplets (breakage and coalescence), different time scales due to the size distribution of the dispersed phase, and micro time scales of the interphase diffusional mass transfer process. As a result, the general PBE has no well-known analytical solution, and therefore robust numerical solution methods with low computational cost are highly desirable.
In this work, the Sectional Quadrature Method of Moments (SQMOM) (Attarakih, M. M., Drumm, C., Bart, H.-J. (2009). Solution of the population balance equation using the Sectional Quadrature Method of Moments (SQMOM). Chem. Eng. Sci. 64, 742-752) is extended to take into account continuous flow systems in the spatial domain. In this regard, the SQMOM is extended to solve the spatially distributed nonhomogeneous bivariate PBE in order to model the hydrodynamics and the physical/reactive mass transfer behavior of liquid-liquid extraction equipment. Based on the extended SQMOM, two different steady state and dynamic simulation algorithms for the hydrodynamics and mass transfer behavior of liquid-liquid extraction equipment are developed and efficiently implemented. At the steady state modeling level, a Spatially-Mixed SQMOM (SM-SQMOM) algorithm is developed and successfully implemented in a one-dimensional physical spatial domain. The integral spatial numerical flux is closed using the mean mass droplet diameter based on the One Primary and One Secondary Particle Method (OPOSPM, which is the simplest case of the SQMOM). On the other hand, the hydrodynamics integral source terms are closed using the analytical Two-Equal Weight Quadrature (TEqWQ). To avoid the numerical solution of the droplet rise velocity, an analytical solution based on the algebraic velocity model is derived for the particular case of a unit velocity exponent appearing in the droplet swarm model. In addition, the source term due to mass transport is closed using OPOSPM. The resulting system of ordinary differential equations with respect to space is solved using the MATLAB adaptive Runge-Kutta method (ODE45). At the dynamic modeling level, the SQMOM is extended to a one-dimensional physical spatial domain and resolved using the finite volume method. To close the mathematical model, the required quadrature nodes and weights are calculated using the analytical solution based on the Two Unequal Weights Quadrature (TUEWQ) formula. By applying the finite volume method to the spatial domain, a semi-discrete ordinary differential equation system is obtained and solved. Both steady state and dynamic algorithms are extensively validated at the analytical, numerical, and experimental levels. At the numerical level, the predictions of both algorithms are validated using the extended fixed pivot technique as implemented in the PPBLab software (Attarakih, M., Alzyod, S., Abu-Khader, M., Bart, H.-J. (2012). PPBLAB: A new multivariate population balance environment for particulate system modeling and simulation. Procedia Eng. 42, pp. 144-562). At the experimental validation level, the extended SQMOM is successfully used to model the steady state hydrodynamics and the physical and reactive mass transfer behavior of agitated liquid-liquid extraction columns under different operating conditions. In this regard, both models are found to be efficient and able to follow the liquid extraction column behavior during column scale-up, where three column diameters were investigated (DN32, DN80, and DN150). To shed more light on the local interactions among the contacted phases, a reduced coupled PBE and CFD framework is used to model the hydrodynamic behavior of pulsed sieve plate columns. In this regard, OPOSPM is utilized and implemented in the FLUENT 18.2 commercial software as a special case of the SQMOM. The droplet-droplet interactions (breakage and coalescence) are taken into account using OPOSPM, while the required information about the velocity field and energy dissipation is calculated by the CFD model. In addition, the proposed coupled OPOSPM-CFD framework is extended to include mass transfer. The proposed framework is numerically tested and the results are compared with published experimental data. The required breakage and coalescence parameters to perform the 2D-CFD simulation are estimated using the PPBLab software, where a 1D-CFD simulation using a multi-sectional grid is performed. A very good agreement is obtained at the experimental and the numerical validation levels.
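To give a flavour of how a low-order moment closure such as OPOSPM leads to a small ODE system, the sketch below integrates a deliberately simplified, spatially homogeneous two-moment balance (total droplet number and total dispersed-phase volume) for pure binary breakage with a constant breakage frequency. The constants and the closure are illustrative assumptions only, not the validated models of the thesis.

import numpy as np
from scipy.integrate import solve_ivp

G_BREAK = 0.5          # assumed constant breakage frequency [1/s]

def rhs(t, y):
    n, v = y                       # total droplet number and total dispersed volume
    dn_dt = G_BREAK * n            # each binary breakage event adds one droplet
    dv_dt = 0.0                    # breakage conserves dispersed-phase volume
    return [dn_dt, dv_dt]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0e5, 1.0e-3])
n_end, v_end = sol.y[:, -1]
d30 = (6.0 * v_end / (np.pi * n_end)) ** (1.0 / 3.0)   # mean mass droplet diameter
print(f"final droplet number {n_end:.3e}, mean mass diameter {d30:.3e} m")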
In DS-CDMA, spreading sequences are allocated to users to separate the different links, namely the base station to user in the downlink and the user to base station in the uplink. These sequences are designed for optimum periodic correlation properties. Sequences with good periodic auto-correlation properties help in frame synchronisation at the receiver, while sequences with good periodic cross-correlation properties reduce cross-talk among users and hence reduce the interference among them. In addition, they are designed to have low implementation complexity so that they are easy to generate. In current systems, spreading sequences are allocated to users irrespective of their channel condition. In this thesis, the method of allocating spreading sequences based on users' channel condition is investigated in order to improve the performance of the downlink. Different methods of dynamically allocating the sequences are investigated, including optimum allocation through a simulation model, fast sub-optimum allocation through a mathematical model, and a proof-of-concept model using real-world channel measurements. Each model is evaluated with respect to the improvement in gain achieved per link, the computational complexity of the allocation scheme, and its impact on the capacity of the network.
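As a toy illustration of channel-aware code allocation, and purely as a hypothetical rule that is far simpler than the optimum, sub-optimum and measurement-based models studied in the thesis, one could sort users by their channel gains and hand out orthogonal Walsh-Hadamard sequences in a fixed order:

import numpy as np
from scipy.linalg import hadamard

n_users = 8
codes = hadamard(n_users)                      # rows are orthogonal Walsh-Hadamard sequences
channel_gain = np.random.default_rng(0).rayleigh(size=n_users)  # assumed per-user gains

# illustrative rule: the strongest user gets code 0, the next strongest code 1, ...
order = np.argsort(channel_gain)[::-1]
allocation = {int(user): codes[rank] for rank, user in enumerate(order)}

for user, code in sorted(allocation.items()):
    print(f"user {user}: gain {channel_gain[user]:.2f}, code {code}")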
In cryptography, secret keys are used to ensure the confidentiality of communication between the legitimate nodes of a network. In a wireless ad-hoc network, the broadcast nature of the channel necessitates robust key management systems for the secure functioning of the network. Physical layer security is a novel method of profitably utilising the random and reciprocal variations of the wireless channel to extract secret keys. By measuring the characteristics of the wireless channel within its coherence time, reciprocal variations of the channel can be observed between a pair of nodes. Using these reciprocal characteristics of the channel, a common shared secret key is extracted between a pair of nodes. The process of key extraction consists of four steps, namely channel measurement, quantisation, information reconciliation, and privacy amplification. The reciprocal channel variations are measured and quantised to obtain a preliminary key in the form of a vector of bits (0, 1). Due to errors in measurement and quantisation, and due to additive Gaussian noise, the bits of the preliminary keys disagree. These errors are corrected by using error detection and correction methods to obtain a synchronised key at both nodes. Further, by means of secure hashing, the entropy of the key is enhanced in the privacy amplification stage. The efficiency of the key generation process depends on the method of channel measurement and quantisation. Instead of quantising the channel measurements directly, if their reciprocity is first enhanced and they are then quantised appropriately, the key generation process can be made efficient and fast. In this thesis, four methods of enhancing reciprocity are presented, namely l1-norm minimisation, hierarchical clustering, Kalman filtering, and polynomial regression. They are appropriately quantised by binary and adaptive quantisation. Then, the entire process of key generation, from measuring the channel profile to obtaining a secure key, is validated using real-world channel measurements. The performance evaluation is done by comparing the methods in terms of bit disagreement rate, key generation rate, test of randomness, robustness test, and eavesdropper test. An architecture, KeyBunch, for effectively deploying physical layer security in mobile and vehicular ad-hoc networks is also proposed. Finally, as a use case, KeyBunch is deployed in a secure vehicular communication architecture to highlight the advantages offered by physical layer security.
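A bare-bones version of the measurement-and-quantisation steps is sketched below: two noisy, reciprocal channel profiles are binarised with a mean-threshold rule and the resulting bit disagreement rate is counted. The signal model and threshold rule are illustrative assumptions, not the l1-norm, clustering, Kalman or regression methods evaluated in the thesis.

import numpy as np

rng = np.random.default_rng(42)
true_channel = rng.normal(size=256)                 # reciprocal channel profile
node_a = true_channel + 0.2 * rng.normal(size=256)  # noisy measurement at node A
node_b = true_channel + 0.2 * rng.normal(size=256)  # noisy measurement at node B

def quantise(profile):
    # simple mean-threshold binary quantisation
    return (profile > profile.mean()).astype(int)

key_a, key_b = quantise(node_a), quantise(node_b)
bdr = np.mean(key_a != key_b)                       # bit disagreement rate
print(f"preliminary key length {key_a.size}, bit disagreement rate {bdr:.3f}")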
Far away from growth cores, spatial-planning development axes and economic competitiveness, peripheralised areas are found in northern Thuringia and southern Saxony-Anhalt. The transformation process persisting there is characterised by out-migration, a lack of investment and above-average unemployment figures. The dilemma is that the municipalities, marked by decoupling, stigmatisation and dependencies for which they are not themselves to blame, are unable to reinvent themselves through endogenous forces, which would make regeneration possible and ultimately revive, within the value chain, a real-estate market that is currently unattractive to investors. These development paths, which have been running for more than 20 years, affect the settlement structures, which in many places are threatened with perforation. It can be stated that the process of decline is far from complete.
Social infrastructure buildings, such as former schools, day-care centres and hospitals, are affected by these developments to a particular degree. Especially because of the self-reinforcing effect of demographic change, they serve as an object of urban-planning research, against the background of a possible revalorisation as an urban infill development strategy (adaptation) after these properties have lost their original use. The necessity for urban-planning action arises, among other things, from their frequently exposed urban location, their role as rare built witnesses of their time, also as part of an ensemble with cultural-historical value, and their function as landmarks of a town- or village-wide order.
The thesis identifies the new challenges that owners have to cope with when dealing with vacant social infrastructure buildings in peripheralised small and medium-sized towns, and critically reflects on the effectiveness of informal and formal planning instruments. Concrete proposals are made as to how property management and owner involvement should be carried out in very subdued residential property markets. Furthermore, strategic approaches for administrative action are recommended that are tailored to these specific market conditions.
In addition to these analogies derived from theory, the field experiment in the above-mentioned study region yielded operationalisable data through extensive surveys. This density of information led to valid statements whose reliability fed into the development of a location analysis database. Thus not only could the problem situation be demonstrated objectively, but the exploration also succeeded in developing a planning instrument that is manageable for the municipalities and transferable to other places.
DeepKAF: A Knowledge Intensive Framework for Heterogeneous Case-Based Reasoning in Textual Domains
(2021)
Business-relevant domain knowledge can be found in plain text across customer support tickets, employee message exchanges and other business transactions. Decoding text-based domain knowledge can be a very demanding task, since traditional methods focus on a comprehensive representation of the business and its relevant paths. Such a process can be highly complex, time-costly and of high maintenance effort, especially in environments that change dynamically.
In this thesis, a novel approach is presented for developing hybrid case-based reasoning (CBR) systems that bring together the benefits of deep learning approaches with the advantages of CBR. The Deep Knowledge Acquisition Framework (DeepKAF) is a domain-independent framework that uses deep neural networks and big data technologies to decode domain knowledge with minimal involvement from domain experts. While this thesis focuses more on textual data because of the availability of the datasets, CBR systems based on DeepKAF are able to deal with heterogeneous data, where a case can be represented by different attribute types, and to automatically extract the necessary domain knowledge while keeping the ability to provide an adequate level of explainability. The main focus areas within this thesis are automatic knowledge acquisition, the construction of similarity measures, and case retrieval.
Throughout the progress of this research, several sets of experiments have been conducted and validated by domain experts. Past textual data produced over around 15 years have been used for the conducted experiments. The texts are a mixture of English and German, used to describe specific domain problems with many abbreviations. Based on these, the necessary knowledge repositories were built and subsequently used to evaluate the suggested approach towards effective monitoring and diagnosis of business workflows. A further, public dataset, the CaseLaw dataset, has been used to validate DeepKAF when dealing with longer texts and cases with more attributes. The CaseLaw dataset represents around 22 million cases from different US states.
Further work motivated by this thesis could investigate how different deep learning models can be used within the CBR paradigm to solve some of the chronic CBR challenges and be of benefit to large-scale multi-dimensional enterprises.
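A highly simplified view of the retrieval idea behind such a hybrid deep/CBR system is sketched below: textual cases are assumed to have already been mapped to dense vectors by some neural encoder (random vectors stand in for real embeddings here), and the most similar past cases are retrieved by cosine similarity. Names and dimensions are illustrative; DeepKAF itself is considerably richer.

import numpy as np

rng = np.random.default_rng(7)
case_embeddings = rng.normal(size=(1_000, 128))   # stand-in for encoder output per past case
query_embedding = rng.normal(size=128)            # stand-in for the encoded new problem

def cosine_similarity(matrix, vector):
    matrix_norm = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    vector_norm = vector / np.linalg.norm(vector)
    return matrix_norm @ vector_norm

scores = cosine_similarity(case_embeddings, query_embedding)
best = np.argsort(scores)[::-1][:3]               # indices of the 3 most similar cases
print("retrieved cases:", best, "scores:", scores[best])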
The main aim of this work was to obtain an approximate solution of the seismic traveltime tomography problems with the help of splines based on reproducing kernel Sobolev spaces. In order to be able to apply the spline approximation concept to surface wave as well as to body wave tomography problems, the spherical spline approximation concept was extended for the case where the domain of the function to be approximated is an arbitrary compact set in R^n and a finite number of discontinuity points is allowed. We present applications of such spline method to seismic surface wave as well as body wave tomography, and discuss the theoretical and numerical aspects of such applications. Moreover, we run numerous numerical tests that justify the theoretical considerations.
In this paper we construct spline functions based on a reproducing kernel Hilbert space to interpolate/approximate the velocity field of earthquake waves inside the Earth based on traveltime data for an inhomogeneous grid of sources (hypocenters) and receivers (seismic stations). Theoretical aspects including error estimates and convergence results as well as numerical results are demonstrated.
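In its simplest form, the interpolation problem described in the paper reduces to solving a linear system in the reproducing kernel: the spline is a linear combination S(x) = sum_i a_i K(x, x_i) whose coefficients are fixed by the interpolation conditions. The sketch below illustrates this with a generic Gaussian kernel on scattered 1D data; it is only a schematic stand-in for the Sobolev-space kernels and traveltime functionals actually used.

import numpy as np

def gaussian_kernel(x, y, width=0.5):
    return np.exp(-((x - y) ** 2) / (2.0 * width ** 2))

# scattered data (stand-in for traveltime-derived values)
nodes = np.linspace(0.0, 1.0, 15)
values = np.sin(2.0 * np.pi * nodes)

# solve K a = y for the spline coefficients
K = gaussian_kernel(nodes[:, None], nodes[None, :])
coeff = np.linalg.solve(K + 1e-10 * np.eye(len(nodes)), values)  # tiny ridge for stability

def spline(x):
    return gaussian_kernel(x[:, None], nodes[None, :]) @ coeff

print(np.max(np.abs(spline(nodes) - values)))   # interpolation residual (close to 0)
print(spline(np.linspace(0.0, 1.0, 5)))         # evaluation at new points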
Biological clocks exist across all life forms and serve to coordinate organismal physiology with periodic environmental changes. The underlying mechanism of these clocks is predominantly based on cellular transcription-translation feedback loops in which clock proteins mediate the periodic expression of numerous genes. However, recent studies point to the existence of a conserved timekeeping mechanism independent of cellular transcription and translation and based instead on cellular metabolism. The existence of these metabolic clocks was inferred from the observation of circadian and ultradian oscillations in the level of hyperoxidized peroxiredoxin proteins. Peroxiredoxins are enzymes found almost ubiquitously throughout life. Originally identified as H2O2 scavengers, recent studies show that peroxiredoxins can transfer oxidation to, and thereby regulate, a wide range of cellular proteins. Thus, it is conceivable that peroxiredoxins, using H2O2 as the primary signaling molecule, have the potential to integrate and coordinate much of cellular physiology and behavior with metabolic changes. Nonetheless, it remained unclear whether peroxiredoxins are passive reporters of metabolic clock activity or active determinants of cellular timekeeping. Budding yeast possess an ultradian metabolic clock termed the Yeast Metabolic Cycle (YMC). The most obvious feature of the YMC is a high-amplitude oscillation in oxygen consumption. Like circadian clocks, the YMC temporally compartmentalizes cellular processes (e.g. metabolism) and coordinates cellular programs such as gene expression and cell division. The YMC also exhibits oscillations in the level of hyperoxidized peroxiredoxin proteins.
In this study, I used the YMC clock model to investigate the role of peroxiredoxins in cellular timekeeping, as well as the coordination of cell division with the metabolic clock. I observed that cytosolic 2-Cys peroxiredoxins are essential for robust metabolic clock function. I provide direct evidence for oscillations in cytosolic H2O2 levels, as well as cyclical changes in oxidation state of a peroxiredoxin and a model peroxiredoxin target protein during the YMC. I noted two distinct metabolic states during the YMC: low oxygen consumption (LOC) and high oxygen consumption (HOC). I demonstrate that thiol-disulfide oxidation and reduction are necessary for switching between LOC and HOC. Specifically, a thiol reductant promotes switching to HOC, whilst a thiol oxidant prevents switching to HOC, forcing cells to remain in LOC. Transient peroxiredoxin inactivation triggered rapid and premature switching from LOC to HOC. Furthermore, I show that cell division is normally synchronized with the YMC and that deletion of typical 2-Cys peroxiredoxins leads to complete uncoupling of cell division from metabolic cycling. Moreover, metabolic oscillations are crucial for regulating cell cycle entry and exit. Intriguingly, switching to HOC is crucial for initiating cell cycle entry whilst switching to LOC is crucial for cell cycle completion and exit. Consequently, forcing cells to remain in HOC by application of a thiol reductant leads to multiple rounds of cell cycle entry despite failure to complete the preceding cell cycle. On the other hand, forcing cells to remain in LOC by treating with a thiol oxidant prevents initiation of cell cycle entry.
In conclusion, I propose that peroxiredoxins – by controlling metabolic cycles, which are in turn crucial for regulating the progression through cell cycle – play a central role in the coordination of cellular metabolism with cell division. This proposition, thus, positions peroxiredoxins as active players in the cellular timekeeping mechanism.
In this paper we study the space-time asymptotic behavior of the solutions and derivatives of the incompressible Navier-Stokes equations. Using moment estimates we obtain that strong solutions to the Navier-Stokes equations which decay in \(L^2\) at the rate \(||u(t)||_2 \leq C(t+1)^{-\mu}\) will have the following pointwise space-time decay \[|D^{\alpha}u(x,t)| \leq C_{k,m} \frac{1}{(t+1)^{\rho_o}(1+|x|^2)^{k/2}}, \] where \( \rho_o = (1-2k/n)( m/2 + \mu) + \tfrac{3}{4}(1-2k/n)\) and \(|\alpha| = m\). The dimension n satisfies \(2 \leq n \leq 5\), with \(0\leq k\leq n\) and \(\mu \geq n/4\).
The level-set method has been recently introduced in the field of shape optimization, enabling a smooth representation of the boundaries on a fixed mesh and therefore leading to fast numerical algorithms. However, most of these algorithms use a Hamilton-Jacobi equation to connect the evolution of the level-set function with the deformation of the contours, and consequently they cannot create any new holes in the domain (at least in 2D). In this work, we propose an evolution equation for the level-set function based on a generalization of the concept of topological gradient. This results in a new algorithm allowing for all kinds of topology changes.
This working and research report offers guidance for degree-programme developers to support them in creating competence profiles. For this purpose, three different tools for creating competence profiles are presented: the analysis of job advertisements, the curriculum comparison, and interviews with lecturers. These tools have proven very useful for the development of competence-oriented degree programmes. The three methods are compared with one another and implications for practice are derived. This report is intended to contribute to designing demand-oriented continuing education offerings for the region.
We investigate the long-term behaviour of diffusions on the non-negative real numbers under killing at some random time. Killing can occur at zero as well as in the interior of the state space. The diffusion follows a stochastic differential equation driven by a Brownian motion. The diffusions we are working with will almost surely be killed. In large parts of this thesis we only assume the drift coefficient to be continuous. Further, we suppose that zero is regular and that infinity is natural. We condition the diffusion on survival up to time t and let t tend to infinity looking for a limiting behaviour.
This study presents an energy-efficient ultra-low voltage standard-cell based memory in 28 nm FD-SOI. The storage element (standard-cell latch) is replaced with a full-custom designed latch with 50 % less area. Error-free operation is demonstrated down to 450 mV at 9 MHz. By utilizing body bias (BB) at VDD = 0.5 V, the performance spans from 20 MHz at BB = 0 V to 110 MHz at BB = 1 V.
An autoregressive ARCH model with possible exogenous variables is treated. We estimate the conditional volatility of the model by applying feedforward networks to the residuals and prove consistency and asymptotic normality of the estimates under conditions on the growth rate of the feedforward network complexity. Recurrent neural network estimates of GARCH and value-at-risk are studied. We prove consistency and asymptotic normality of the recurrent neural network ARMA estimator under conditions on the growth rate of the recurrent network complexity. We also address the estimation problem in stochastic variance models in discrete time by feedforward networks and the introduction of new distributions for the innovations. We use the method to calculate market risk measures such as expected shortfall and Value-at-Risk. We tested these distributions, together with other new distributions for the GARCH family of models, against distributions commonly used in the financial market such as the Normal Inverse Gaussian, normal and Student's t-distributions. As an application of the models, some German stocks are studied and the different approaches are compared with the most common method, a GARCH(1,1) fit.
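For reference, the conditional-volatility recursion of the baseline GARCH(1,1) model, against which the network-based estimators are compared, can be written in a few lines; the parameter values and the return series below are placeholders, not fitted values from the thesis.

import numpy as np

def garch11_volatility(returns, omega=1e-6, alpha=0.08, beta=0.9):
    """Conditional variance recursion sigma_t^2 = omega + alpha*r_{t-1}^2 + beta*sigma_{t-1}^2."""
    sigma2 = np.empty_like(returns)
    sigma2[0] = returns.var()                      # simple initialisation
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return np.sqrt(sigma2)

rng = np.random.default_rng(0)
r = 0.01 * rng.standard_t(df=5, size=1_000)        # placeholder heavy-tailed return series
vol = garch11_volatility(r)
print(vol[-5:])                                    # most recent conditional volatilities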
A prime motivation for using XML to directly represent pieces of information is the ability of supporting ad-hoc or 'schema-later' settings. In such scenarios, modeling data under loose data constraints is essential. Of course, the flexibility of XML comes at a price: the absence of a rigid, regular, and homogeneous structure makes many aspects of data management more challenging. Such malleable data formats can also lead to severe information quality problems, because the risk of storing inconsistent and incorrect data is greatly increased. A prominent example of such problems is the appearance of the so-called fuzzy duplicates, i.e., multiple and non-identical representations of a real-world entity. Similarity joins correlating XML document fragments that are similar can be used as core operators to support the identification of fuzzy duplicates. However, similarity assessment is especially difficult on XML datasets because structure, besides textual information, may exhibit variations in document fragments representing the same real-world entity. Moreover, similarity computation is substantially more expensive for tree-structured objects and, thus, is a serious performance concern. This thesis describes the design and implementation of an effective, flexible, and high-performance XML-based similarity join framework. As main contributions, we present novel structure-conscious similarity functions for XML trees - either considering XML structure in isolation or combined with textual information -, mechanisms to support the selection of relevant information from XML trees and organization of this information into a suitable format for similarity calculation, and efficient algorithms for large-scale identification of similar, set-represented objects. Finally, we validate the applicability of our techniques by integrating our framework into a native XML database management system; in this context we address several issues around the integration of similarity operations into traditional database architectures.
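To make the notion of a set-based similarity join concrete, and independently of the tree-structural similarity functions contributed by the thesis, the following sketch performs a naive self-join of token sets under a Jaccard threshold; real engines would add signature-based filters to avoid the quadratic comparison, and the records here are invented for illustration.

from itertools import combinations

def jaccard(a, b):
    return len(a & b) / len(a | b)

# token sets extracted from (hypothetical) XML fragments
records = {
    1: {"john", "smith", "london"},
    2: {"jon", "smith", "london"},
    3: {"maria", "perez", "madrid"},
    4: {"smith", "john", "london", "uk"},
}

THRESHOLD = 0.5
matches = [(i, j, jaccard(records[i], records[j]))
           for i, j in combinations(records, 2)
           if jaccard(records[i], records[j]) >= THRESHOLD]

for i, j, score in matches:
    print(f"candidate fuzzy duplicate: {i} ~ {j} (Jaccard {score:.2f})")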
Continuous fibre reinforced thermoplastics are a highly competitive material class for diversified applications because of their inherent properties such as light-weight construction potential, integral design, corrosion resistance and a high energy absorption level. Using these materials, one approach towards a large-volume part production rate is covered by an automated process line, consisting of a pressing process for semi-finished sheet material production, a thermoforming step and additional joining technologies. To allow short cycle times in the thermoforming step, the utilised semi-finished sheet materials, which are often referred to as "organic sheets", have to be fully impregnated and consolidated.
Nowadays, even this combination of outstanding physical and chemical material properties with an economic processing technology is no guarantee for the break-through of continuous fibre reinforced thermoplastics, mainly because of the high material costs of the semi-finished sheet materials. These costs can be attributed to a non-adapted material selection or choice of process parameters, as well as to an unfavourable type of pressing process itself.
Therefore, the aim of the present investigations was to generate alternatives regarding the choice of raw materials and the set-up or selection of the pressing process line, and to provide theoretical tools for the determination of process parameters and dimensions.
Concerning raw material aspects, the use of blending technology is one promising approach towards cost reduction for the matrix component. Novel characteristics related to the fibre structure are CF yarns with high filament numbers (e.g. 6K or 12K instead of 3K) or multiaxial fibre orientations. These two approaches were both pursued for sheet materials with carbon fibre reinforcement and high-temperature thermoplastics.
Two newly developed ternary blend matrices consisting of PEEK and PEI as the main ingredients were tested in comparison with neat PEEK. PES and PSU were used as the third blend component, which provides a cost reduction potential of approximately 30 % compared to the base PEEK polymer. The results of the static pressing experiments pointed out that the processing behaviour of the new blends is similar to that of the neat PEEK matrix. A maximum process temperature of 410 °C should not be surpassed, otherwise thermal degradation will occur and have a negative influence on the mechanical laminate properties. To accelerate the impregnation progress, a process pressure of 25 bar in combination with a sideways-opened tooling concept is helpful. No differences were identified when the film-stacking technique was substituted by the powder-prepreg technology or vice versa. By increasing the yarn filament number from 3K over 6K to 12K, which is equal to an increase in bundle diameter and therefore in transverse flow distance, the impregnation time has to be extended. If unspread yarns are used, the risk of void entrapment rises tremendously, especially with 12K and UD structures. To reach full impregnation with a woven 6K fabric, an increase in process time of 20 to 30 % compared to a 3K textile structure is required. Furthermore, it was shown that if only transverse flow is used for the impregnation of a UD structure, a maximum area weight of 300-400 g/m² should not be exceeded. Additionally, the transport of air is strongly affected by the fibre orientation, because the main amount of displaced air runs in the longitudinal fibre direction. These facts play an important role in the design of a multiaxial laminate or an impregnation process for such a structure and have to be taken into account.
Apart from these static pressing experiments, the semi-continuous (stepwise compression moulding) and continuous (double belt press) processing technologies were investigated and compared to each other. The first basic processing trials on the stepwise compression moulding equipment were carried out with the material system GF/PA66. Whereas the processing behaviour of this material combination in a double belt press is known quite well, there is only little information about semi-continuous processing. The performed trials pointed out that the resulting laminate quality for both technologies only differs in the achievable local surface quality. Mechanical laminate properties like three-point bending stiffness and strength are directly comparable. Due to the fact that there is only little experience with the stepwise compression moulding process, potential improvements regarding surface quality are feasible by adapting the step procedure and the temperature distribution within the tooling concept. If laminates produced by semi-continuous processing are deployed in a thermoforming process or in a non-visible structural application, the surface appearance only plays an inferior role.
The present results with high-temperature thermoplastic matrices and CF confirm the positive assessment of the stepwise compression moulding technology, even though the mechanical laminate values only reached 90 % of the data obtained by static press processing. In comparison to data from the literature, 90 % is already a high mechanical performance level. The results are quite promising for the use of the semi-continuous technology, despite the process set-up and processing parameters not having been optimised. Furthermore, there are tremendous advantages in processing equipment costs.
Finally, a process model was developed based on the experimental data pool. This model can be characterised as a tool which provides useful boundary conditions and dimension values for the selection of a certain pressing process depending on the desired material combination, laminate thickness and production output. The applicability and accuracy of the model were proven by a direct comparison between experimental and calculated data.
First of all, the temperature profile of the pressing process was generalised by a very common structure. This profile reflects the main characteristics of the processing of a thermoplastic composite material. Depending on the material combination, the laminate thickness and the occurring heat transfers, several process and processing portfolios were calculated. For a defined combination of the aforementioned parameters, these portfolios directly provide the periods of time for heating and cooling of the laminate structure. The last step is to convert this information into an equipment dimension and to decide which machinery configuration fulfils these requirements.
Buses not arriving on time and then arriving all at once - this phenomenon is known from busy bus routes and is called bus bunching. This thesis combines the well-studied but so far separate areas of bus-bunching prediction and dynamic holding strategies, which allow modulating buses' dwell times at stops to eliminate bus bunching. We look at real data of the Dublin Bus route 46A and present a headway-based predictive-control framework considering all components such as data acquisition, prediction and control strategies. We formulate time headways as time series and compare several prediction methods for them. Furthermore, we present an analytical model of an artificial bus route and discuss stability properties and dynamic holding strategies using both data available at the time and predicted headway data. In a numerical simulation we illustrate the advantages of the presented predictive-control framework compared to the classical approaches which only use directly available data.
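A very small sketch of the headway-based idea: predict the next time headway of a bus (here with simple exponential smoothing as a placeholder for the time-series methods compared in the thesis) and derive a holding time at the stop that pushes the headway back towards its target value; all numbers and the holding rule are illustrative assumptions.

def exponential_smoothing(headways, alpha=0.3):
    """One-step-ahead headway forecast by simple exponential smoothing."""
    forecast = headways[0]
    for h in headways[1:]:
        forecast = alpha * h + (1 - alpha) * forecast
    return forecast

def holding_time(predicted_headway, target_headway, max_hold=120.0):
    """Hold the bus just long enough to close part of the headway gap."""
    return min(max(target_headway - predicted_headway, 0.0), max_hold)

observed = [600, 540, 480, 430, 400]          # seconds between consecutive buses
pred = exponential_smoothing(observed)
print(f"predicted next headway: {pred:.0f} s")
print(f"suggested holding time: {holding_time(pred, target_headway=600):.0f} s")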
Laser-induced interstitial thermotherapy (LITT) is a minimally invasive procedure to destroy liver tumors through thermal ablation. Mathematical models are the basis for computer simulations of LITT, which support the practitioner in planning and monitoring the therapy.
In this thesis, we propose three potential extensions of an established mathematical model of LITT, which is based on two nonlinearly coupled partial differential equations (PDEs) modeling the distribution of the temperature and the laser radiation in the liver.
First, we introduce the Cattaneo-LITT model for delayed heat transfer in this context, prove its well-posedness and study the effect of an inherent delay parameter numerically.
Second, we model the influence of large blood vessels in the heat-transfer model by means of a spatially varying blood-perfusion rate. This parameter is unknown at the beginning of each therapy because it depends on the individual patient and the placement of the LITT applicator relative to the liver. We propose a PDE-constrained optimal-control problem for the identification of the blood-perfusion rate, prove the existence of an optimal control and prove necessary first-order optimality conditions. Furthermore, we introduce a numerical example based on which we demonstrate the algorithmic solution of this problem.
Third, we propose a reformulation of the well-known PN model hierarchy with Marshak boundary conditions as a coupled system of second-order PDEs to approximate the radiative-transfer equation. The new model hierarchy is derived in a general context and is applicable to a wide range of applications other than LITT. It can be generated in an automated way by means of algebraic transformations and allows the solution with standard finite-element tools. We validate our formulation in a general context by means of various numerical experiments. Finally, we investigate the coupling of this new model hierarchy with the LITT model numerically.
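For orientation, a generic form of a Cattaneo-type delayed heat equation is \[ \tau\,\frac{\partial^2 T}{\partial t^2} + \frac{\partial T}{\partial t} = \nabla\cdot\left(\alpha\,\nabla T\right) + q, \] where \(\tau\) denotes the thermal relaxation (delay) parameter, \(\alpha\) a thermal diffusivity and \(q\) a source term; setting \(\tau = 0\) recovers the classical parabolic heat equation. This is only an illustrative template, not the exact Cattaneo-LITT model of the thesis, whose coupling to blood perfusion and laser radiation is more involved.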
This report compiles the experience and results from the OptCast project. The goal of this project was (a) to adapt the methodology of automatic structural optimization to cast parts and (b) to develop and provide casting-specific optimization tools for foundries and engineering offices. Casting-related restrictions cannot be reduced entirely to geometric restrictions, since the local properties depend not only on the geometric shape of the cast part but also on the material used. They can, however, be captured adequately by a casting simulation (solidification simulation and residual stress analysis). Based on this insight, a novel topology optimization method using the level-set technique was developed in which no variable material density is introduced. In each iteration, a sharp boundary of the component is computed, so that the casting simulation can be integrated into the iterative optimization process.
This report discusses two approaches to a posteriori error indication in the linear elasticity solver DDFEM: an indicator based on Richardson extrapolation and a Zienkiewicz-Zhu-type indicator. The solver handles 3D linear elasticity steady-state problems. It uses its own input language to describe the mesh and the boundary conditions. Finite element discretization over tetrahedral meshes with first or second order shape functions (hierarchical basis) is used to resolve the model. The parallelization of the numerical method is based on the domain decomposition approach. DDFEM is highly portable across a set of parallel computer architectures supporting the MPI standard.
When building within existing structures, new reinforced concrete members are frequently connected to existing load-bearing structures in a force-fitting manner. For cast-in-place concrete members this is conveniently realised with a lap splice.
Until the end of the 1950s, plain (smooth) reinforcing steels were predominantly used in reinforced concrete construction, before being replaced, with a transition period lasting until the end of the 1970s, by the ribbed reinforcing steels used today. In contrast to lap splices of reinforcing steels of the same type and grade, which have been standardised since 1925, combined lap splices of plain and ribbed steels are still not regulated today.
To remedy this deficit, differentiated detailing rules were derived in this thesis that enable scientifically validated and at the same time economical solutions for combined lap splices. Taking into account the partial demolition of existing old concrete, an economical construction method calls for full-strength splices of exposed historical plain steels with currently used ribbed steels with the smallest possible lap lengths. In doing so, the requirements of the codes valid today regarding reliability against failure in the ultimate limit state (ULS) and the assurance of the intended use by limiting crack widths in the serviceability limit state (SLS) must be observed.
For various combined lap splices of plain reinforcing steels BStI with end hooks and ribbed reinforcing steels B500 with straight bar ends or end hooks, the required lap lengths were determined empirically on the basis of systematically structured test series. In the process, a fundamental understanding of the load-bearing behaviour of combined lap splices was gained and a generally applicable load-transfer model was developed.
For the design of combined lap splices, an engineering model was further derived that reliably describes the load-bearing behaviour of such splices and confirms the experimentally determined lap lengths. Taking into account the concrete tensile strength decisive for bond, the steel stresses and the bar diameters, a design diagram for the required lap length of specific splice combinations was developed on the basis of statistical methods, and a complementary FE modelling was carried out.
Building on this, generally applicable equations for determining the design values of the lap lengths of combined lap splices with plain steel BStI and ribbed steel B500 are given, and detailing rules are developed for combinations of bar diameters, concrete grades and bond conditions that regularly occur in practice; for combined splices these can be applied equivalently to the rules of EC2 for new construction.
This master's thesis deals with the cultural participation of the immigrant population in cultural development processes in German cities. On the one hand, the thesis focuses on the cultural organisations of the so-called third sector and the benefit they can draw from better access of newly arrived immigrants in the city to the cultural offerings. It investigates the question of how the involvement of the population with a migration history in cultural work can be optimised. For local cultural providers, this could mean reaching a new audience and could thus have a positive effect on revenues. Another important factor that may be connected with this is the artistic potential that some of the new city residents may bring with them. Better access to this target group could at the same time mean a breakthrough to other art and cultural traditions for cultural organisations: other themes could be worked out and the programme offering could become more innovative. The cultural involvement of newly arrived immigrants, however, requires an intensive engagement with the issues just outlined, for which resources such as time are necessary, but also the willingness to take new paths. In this respect, it appears imperative to explore the cultural needs of this population group, which is regarded as a central task of this thesis.
The paper presents some adaptive load balancing techniques for the simulation of rarefied gas flows on parallel computers. It is shown that a static load balance is insufficient to obtain a scalable parallel efficiency. Hence, two adaptive techniques are investigated, both based on simple algorithms. Numerical results show that using heuristic techniques one can achieve a sufficiently high efficiency over a wide range of different hardware platforms.
Due to the continuous increase in discarded, no longer usable fast fashion products, the textile industry is becoming an omnipresent problem in today's society. A report by the Ellen MacArthur Foundation showed that so far less than 1% of the material used to produce clothing is recycled. To enable the reuse and recycling of clothing, far-reaching changes must take place within the textile industry, but consumer behaviour must also change.
This thesis therefore examines the central question of which measures, taking current consumer behaviour into account, are necessary to make the post-use phase of clothing more sustainable. New insights are gained through a mixed-methods approach. To shed light on consumer behaviour regarding clothing consumption and the use and recycling of second-hand clothing, a quantitative online survey is designed and conducted. In interpreting the quantitative research, two qualitative expert interviews support the findings and additionally provide substantive input for developing further approaches to successful textile recycling. The theoretical framework of the thesis is supported by a literature review.
The transfer of substrates between two enzymes within a biosynthesis pathway is an effective way to synthesize a specific product and a good way to avoid metabolic interference. This process is called metabolic channeling, and it describes the (in-)direct transfer of an intermediate molecule between the active sites of two enzymes. By forming multi-enzyme cascades, the efficiency of product formation and the flux are elevated, and intermediate products are transferred and converted correctly by the enzymes.
During tetrapyrrole biosynthesis, several substrate transfer events occur and are a prerequisite for optimal pigment synthesis. In this project, the metabolic channeling process during the synthesis of the pink pigment phycoerythrobilin (PEB) was investigated. The ferredoxin-dependent bilin reductases (FDBRs) responsible for PEB formation are PebA and PebB. During pigment synthesis, the intermediate molecule 15,16-dihydrobiliverdin (DHBV) is formed and transferred from PebA to PebB. While earlier studies postulated a metabolic channeling of DHBV, this work revealed new insights into the requirements of this protein-protein interaction. It became clear that the most important requirement for the PebA/PebB interaction is based on the affinity to their substrate/product DHBV. The already high affinity of both enzymes for each other is enhanced in the presence of DHBV in the binding pocket of PebA, which leads to a rapid transfer to the subsequent enzyme PebB. DHBV is a labile molecule and needs to be rapidly channeled in order to be further reduced correctly to PEB. Fluorescence titration experiments and transfer assays confirmed the enhancing effect of DHBV on its own transfer.
Further insights were gained by creating an active fusion protein of PebA and PebB and comparing its reaction mechanism with standard FDBRs. This fusion protein was able to convert biliverdin IXα (BV IXα) to PEB, similar to the activity of PebS, which can also convert BV IXα via DHBV to PEB as a single enzyme. The product and intermediate of the reaction were identified via HPLC and UV-Vis spectroscopy.
The results of this work revealed that PebA and PebB interact via a proximity channeling process where the intermediate DHBV plays an important role for the interaction. It also highlights the importance of substrate channeling in the synthesis of PEB to optimize the flux of intermediates through this metabolic pathway.
In this thesis, we present the basic concepts of isogeometric analysis (IGA) and consider Poisson's equation as a model problem. Since in IGA the physical domain is parametrized via a geometry function that maps a parameter domain, e.g. the unit square or unit cube, to the physical one, we present a class of parametrizations that can be viewed as a generalization of polar coordinates, known as scaled boundary parametrizations (SB-parametrizations). These are easy to construct and are particularly attractive when only the boundary of a domain is available. We then present an IGA approach based on these parametrizations, which we call scaled boundary isogeometric analysis (SB-IGA). The SB-IGA derives the weak form of partial differential equations in a different way from standard IGA. For the discretization, i.e. the projection onto a finite-dimensional space, we choose Galerkin's method in both cases. Thanks to this technique, we state an equivalence theorem for linear elliptic boundary value problems between standard IGA, when it makes use of an SB-parametrization, and SB-IGA. We solve Poisson's equation with Dirichlet boundary conditions on different geometries and with different SB-parametrizations.
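In the spirit of the generalization of polar coordinates mentioned above, a scaled boundary parametrization can be written schematically as \[ \mathbf{F}(\xi,\eta) = \mathbf{x}_0 + \xi\,\bigl(\boldsymbol{\gamma}(\eta)-\mathbf{x}_0\bigr), \qquad \xi\in[0,1], \] where \(\mathbf{x}_0\) is a chosen scaling center, \(\boldsymbol{\gamma}(\eta)\) a (spline) parametrization of the boundary, and \(\xi\) and \(\eta\) play the roles of radial and angular coordinates, respectively; the precise form and smoothness assumptions used in the thesis may differ from this sketch.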
Since their invention in the 1980s, behaviour-based systems have become very popular among roboticists. Their component-based nature facilitates the distributed implementation of systems, fosters reuse, and allows for early testing and integration. However, the distributed approach necessitates the interconnection of many components into a network in order to realise complex functionalities. This network is crucial to the correct operation of the robotic system. There are few sound design techniques for behaviour networks, especially if the systems shall realise task sequences. Therefore, the quality of the resulting behaviour-based systems is often highly dependent on the experience of their developers.
This dissertation presents a novel integrated concept for the design and verification of behaviour-based systems that realise task sequences. Part of this concept is a technique for encoding task sequences in behaviour networks. Furthermore, the concept provides guidance to developers of such networks. Based on a thorough analysis of methods for defining sequences, Moore machines have been selected for representing complex tasks. With the help of the structured workflow proposed in this work and the developed accompanying tool support, Moore machines defining task sequences can be transferred automatically into corresponding behaviour networks, resulting in less work for the developer and a lower risk of failure.
Due to the common integration of automatically and manually created behaviour-based components, a formal analysis of the final behaviour network is reasonable. For this purpose, the dissertation at hand presents two verification techniques and justifies the selection of model checking. A novel concept for applying model checking to behaviour-based systems is proposed according to which behaviour networks are modelled as synchronised automata. Based on such automata, properties of behaviour networks that realise task sequences can be verified or falsified. Extensive graphical tool support has been developed in order to assist the developer during the verification process.
Several examples are provided in order to illustrate the soundness of the presented design and verification techniques. The applicability of the integrated overall concept to real-world tasks is demonstrated using the control system of an autonomous bucket excavator. It can be shown that the proposed design concept is suitable for developing complex sophisticated behaviour networks and that the presented verification technique allows for verifying real-world behaviour-based systems.
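To illustrate how a task sequence can be captured by a Moore machine before being translated into a behaviour network, the following sketch defines a minimal Moore machine (states with outputs, input-driven transitions) and steps it through a hypothetical dig-and-dump sequence; the state, event and behaviour names are invented for illustration and do not come from the excavator system described above.

from dataclasses import dataclass, field

@dataclass
class MooreMachine:
    transitions: dict            # (state, event) -> next state
    outputs: dict                # state -> output (the behaviour to activate)
    state: str = "idle"
    trace: list = field(default_factory=list)

    def step(self, event):
        self.state = self.transitions.get((self.state, event), self.state)
        output = self.outputs[self.state]
        self.trace.append((event, self.state, output))
        return output

machine = MooreMachine(
    transitions={
        ("idle", "start"): "approach",
        ("approach", "at_pile"): "dig",
        ("dig", "bucket_full"): "dump",
        ("dump", "bucket_empty"): "idle",
    },
    outputs={"idle": "none", "approach": "drive_to_pile",
             "dig": "fill_bucket", "dump": "empty_bucket"},
)

for event in ["start", "at_pile", "bucket_full", "bucket_empty"]:
    print(event, "->", machine.step(event))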
The objective of this work was the development of efficient and sustainable methods for the addition of N-H nucleophiles to terminal alkynes and for the insertion of CO2 into the C-H bond of terminal alkynes.
In the first part of this dissertation, the mechanism of the ruthenium-catalysed addition of amides to terminal alkynes was investigated in depth by a combination of control experiments, kinetic studies, spectroscopic investigations and theoretical calculations. First, four catalytic cycles known from the literature were identified that represent plausible mechanisms for the hydroamidation of terminal alkynes. Building on proven elementary steps of chemically related reactions, an additional mechanism for the hydroamidation was derived. Subsequently, a series of control experiments was carried out with whose help individual elementary steps of the catalytic cycles could be falsified and thus mechanisms excluded. To determine whether the hydroamidation can be accurately described by the single remaining mechanism, spectroscopic studies were performed. These investigations were carried out before, during and after hydroamidation test reactions, and in this way numerous postulated intermediates could be detected and the remaining catalytic cycle corroborated. The insights gained in these mechanistic studies were used to develop a new generation of catalysts with exceptionally high selectivity for the formation of valuable Z-enamides and Z-enimides. The synthetic potential was further demonstrated by the preparation of the biologically active natural products lansiumamide A and B, lansamide I, as well as botryllamide C and E.
In the second part of this work, highly efficient silver(I)/DMSO-catalysed methods for the carboxylation of terminal alkynes with CO2 at atmospheric pressure were developed.
When designing autonomous mobile robotic systems, there is usually a trade-off between the three opposing goals of safety, low cost and performance.
Pursuing one of these design goals further usually comes at the expense of one or even both of the others.
If, for example, the performance of a mobile robot is increased by using higher vehicle speeds, the safety of the system usually decreases, as, under otherwise identical circumstances, faster robots are often also more dangerous robots.
This decrease in safety can be mitigated by installing better sensors on the robot, which ensure the safety of the system even at high speeds.
However, this solution is accompanied by an increase in system cost.
In parallel to mobile robotics, there is a growing number of ambient and aware technology installations in today's environments, whether in private homes, offices or factory environments.
Part of this technology are sensors suitable for assessing the state of an environment.
For example, motion detectors used to automate lighting can also be used to detect the presence of people.
This work constitutes a meeting point between the two fields of robotics and aware environment research.
It shows how data from aware environments can be used to approach the above-mentioned goal of establishing robotic systems that are safe, performant and, at the same time, low-cost.
Sensor data from aware technology, which is often unreliable due to its low-cost nature, is fed into probabilistic methods for estimating the environment's state.
Together with models, these methods cope with the uncertainty and unreliability associated with the sensor data gathered from an aware environment.
The estimated state includes the positions of people in the environment and is used as an input to the local and global path planners of a mobile robot, enabling safe, cost-efficient and performant navigation both during local obstacle avoidance and on a global scale, when planning paths between different locations.
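As a toy illustration of the kind of probabilistic estimation involved (a minimal discrete Bayes filter with invented sensor and transition parameters, not the estimator used in this work), noisy motion-detector readings can be fused into a belief that a person is present:

```python
def presence_filter(readings, prior=0.5,
                    p_detect=0.7,       # P(detector fires | person present)  -- assumed value
                    p_false_alarm=0.1,  # P(detector fires | nobody present)  -- assumed value
                    p_arrive=0.05, p_leave=0.05):
    """Discrete Bayes filter for the proposition 'a person is present in this room'.

    readings: iterable of booleans (motion detector fired / did not fire).
    Returns the belief after each reading.
    """
    belief = prior
    history = []
    for fired in readings:
        # prediction step: people may arrive or leave between two readings
        belief = belief * (1.0 - p_leave) + (1.0 - belief) * p_arrive
        # correction step: Bayes update with the (unreliable) sensor model
        l_present = p_detect if fired else 1.0 - p_detect
        l_absent = p_false_alarm if fired else 1.0 - p_false_alarm
        belief = l_present * belief / (l_present * belief + l_absent * (1.0 - belief))
        history.append(belief)
    return history

print(presence_filter([True, True, False, True]))
```

The same principle scales to estimating the positions of people from many unreliable sensors, which is the input the path planners consume.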
The probabilistic algorithms enable graceful degradation of the whole system.
Even if, in the extreme case, all aware technology fails, the robots continue to operate, sacrificing performance while maintaining safety.
All methods presented in this work have been validated in simulation experiments as well as in experiments with real hardware.
The rotational spinning of viscous jets is of interest in many industrial applications, including pellet manufacturing [4, 14, 19, 20] and the drawing, tapering and spinning of glass and polymer fibers [8, 12, 13]; see also [15, 21] and references therein. In [12], an asymptotic model for the dynamics of curved viscous inertial fiber jets emerging from a rotating orifice under surface tension and gravity was deduced from the three-dimensional free boundary value problem given by the incompressible Navier-Stokes equations for a Newtonian fluid. In the terminology of [1], it is a string model consisting of balance equations for mass and linear momentum. Accounting for inner viscous transport and surface tension, and placing no restrictions on either the motion or the shape of the jet's center-line, it generalizes the previously developed string models for straight [3, 5, 6] and curved center-lines [4, 13, 19]. Moreover, the numerical results investigating the effects of viscosity, surface tension, gravity and rotation on the jet behavior agree well with the experiments of Wong et al. [20].
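For orientation, the classical straight-jet special case that such string models generalize reads, in terms of the cross-sectional area A(s,t), axial velocity u(s,t), density ρ, dynamic viscosity μ and gravitational acceleration g (a schematic form neglecting surface tension; the curved, rotating model of [12] contains additional terms for surface tension, rotation and the motion of the centre-line):

```latex
\[
  \partial_t A + \partial_s (A u) = 0, \qquad
  \rho\,A\,\bigl(\partial_t u + u\,\partial_s u\bigr)
    = \partial_s\!\bigl(3\mu\,A\,\partial_s u\bigr) + \rho\,A\,g .
\]
```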
The optimal design of rotational production processes for glass wool manufacturing poses severe computational challenges to mathematicians, natural scientists and engineers. In this paper we focus exclusively on the spinning regime, where thousands of viscous thermal glass jets are formed by fast air streams. Homogeneity and slenderness of the spun fibers are the quality features of the final fabric. Their prediction requires the computation of the fluid-fiber interactions, which involves solving a complex three-dimensional multiphase problem with appropriate interface conditions. This is practically impossible due to the required high resolution and adaptive grid refinement. Therefore, we propose an asymptotic coupling concept. Treating the glass jets as viscous thermal Cosserat rods, we tackle the multiscale problem with the help of momentum (drag) and heat exchange models that are derived on the basis of slender-body theory and homogenization. A weak iterative coupling algorithm, based on the combination of commercial software and self-implemented code for the flow and rod solvers, respectively, then makes the simulation of the industrial process possible. For the boundary value problem of the rod we particularly suggest an adapted collocation-continuation method. Consequently, this work establishes a promising basis for future optimization strategies.
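The weak iterative coupling can be sketched as a relaxed fixed-point iteration between the two solvers (placeholder solver callbacks and invented names; the actual implementation couples commercial flow software with a self-implemented rod code):

```python
def weak_coupling(flow_solve, rod_solve, fibres0, max_iter=20, tol=1e-6, relax=0.5):
    """Relaxed fixed-point iteration between a flow solver and a rod solver.

    flow_solve(fibres)   -> exchange terms (e.g. drag forces, heat sources) along the fibres
    rod_solve(exchange)  -> updated fibre quantities computed from those exchange terms
    Both are placeholder callbacks here; fibres are represented as plain lists of numbers.
    """
    fibres = list(fibres0)
    for _ in range(max_iter):
        exchange = flow_solve(fibres)        # momentum (drag) and heat exchange models
        new_fibres = rod_solve(exchange)     # solve the rod boundary value problems
        residual = max(abs(n - f) for n, f in zip(new_fibres, fibres))
        # under-relaxed update to stabilise the alternating exchange
        fibres = [relax * n + (1 - relax) * f for n, f in zip(new_fibres, fibres)]
        if residual < tol:
            break
    return fibres

# toy stand-ins just to exercise the loop (no physical meaning)
print(weak_coupling(lambda f: [0.1 * x for x in f],
                    lambda e: [1.0 + x for x in e],
                    fibres0=[1.0, 2.0, 3.0]))
```

Under-relaxation is one common way to stabilise such alternating exchanges; the exchanged quantities, solvers and stopping criteria of the real process simulation are those described above.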
This work deals with the modeling and simulation of slender viscous jets exposed to gravity and rotation, as they occur in rotational spinning processes. In terms of slender-body theory, we show the asymptotic reduction of a viscous Cosserat rod to a string system for vanishing slenderness parameter. We propose two string models, namely an inertial and a viscous-inertial string model, that differ in their closure conditions and hence yield a boundary value problem and an interface problem, respectively. We investigate the existence regimes of the string models in the four-parameter space of Froude, Rossby and Reynolds numbers and jet length. The convergence regimes, where the respective string solution is the asymptotic limit of the rod, turn out to be disjoint and to cover nearly the whole parameter space. We explore the transition hyperplane and analytically derive the low and high Reynolds number limits. Numerical studies of the stationary jet behavior for different parameter ranges complete the work.
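For reference, with a characteristic jet speed U, length L, kinematic viscosity ν, gravitational acceleration g and rotation rate Ω, the dimensionless groups are conventionally defined as follows (the precise scaling choices are those made in the work itself):

```latex
\[
  \mathrm{Re} = \frac{U L}{\nu}, \qquad
  \mathrm{Fr} = \frac{U}{\sqrt{g L}}, \qquad
  \mathrm{Rb} = \frac{U}{\Omega L}.
\]
```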
The detection and characterisation of undesired lead structures on shaft surfaces is a concern in the production and quality control of rotary shaft lip-type sealing systems. The potential lead structures are generally divided into macro and micro lead based on their characteristics and formation. Macro lead measurement methods exist and are widely applied. This work describes a method to characterise micro lead on ground shaft surfaces. Micro lead is defined as the deviation of the main orientation of the ground micro texture from the circumferential direction. Assessing the orientation of microscopic structures with arc-minute accuracy with respect to the circumferential direction requires exact knowledge of both the shaft's orientation and the direction of the surface texture. The shaft's circumferential direction is found by calibration. Measuring systems and calibration procedures capable of calibrating the shaft axis orientation with high accuracy and low uncertainty are described. The measuring systems employ areal-topographic measuring instruments suited for evaluating texture orientation. A dedicated evaluation scheme for texture orientation is based on the Radon transform of these topographies and is parametrised for the application. Combining the calibration of the circumferential direction with the evaluation of texture orientation, the method enables the measurement of micro lead on ground shaft surfaces.
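A minimal sketch of how a dominant texture direction can be read off a measured topography via the Radon transform (using skimage for illustration with an invented synthetic texture; the parametrised evaluation scheme of this work, its preprocessing and its angle conventions are more involved):

```python
import numpy as np
from skimage.transform import radon

def dominant_texture_angle(topography, angles=np.arange(0.0, 180.0, 0.5)):
    """Estimate the dominant orientation of a surface texture in degrees.

    Projections taken along the groove direction preserve the grooves' contrast,
    so the per-angle variance of the sinogram peaks near the texture orientation.
    The returned angle is relative to the image axes; relating it to the shaft's
    circumferential direction requires the separate axis calibration.
    """
    img = topography - topography.mean()          # remove the mean height (offset)
    sinogram = radon(img, theta=angles, circle=False)
    variance_per_angle = sinogram.var(axis=0)     # one variance value per projection angle
    return angles[np.argmax(variance_per_angle)]

# synthetic example: a sinusoidal groove texture tilted by a few degrees
y, x = np.mgrid[0:256, 0:256]
tilt = np.deg2rad(10.0)
stripes = np.sin(2 * np.pi * (x * np.cos(tilt) - y * np.sin(tilt)) / 8.0)
print(dominant_texture_angle(stripes))
```

A finer angle grid, together with interpolation around the variance peak, would be needed to approach the arc-minute accuracy discussed above.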