Doctoral Thesis
Embedded systems, ranging from very simple systems to complex controllers, nowadays
often have challenging real-time requirements. Many embedded systems are reactive
systems that have to respond to environmental events and have to guarantee certain real-time
constraints. Their execution is usually divided into reaction steps: in each step, the
system reads inputs from the environment and reacts to them by computing corresponding
outputs.
The synchronous Model of Computation (MoC) has proven to be well-suited for the
development of reactive real-time embedded systems, since its paradigm directly reflects the
reactive nature of the systems it describes. Another advantage is the availability of formal
verification by model checking, which is enabled by the deterministic execution based on a formal
semantics. Nevertheless, the increasing complexity of embedded systems requires compensating
the inherent drawback of model checking, the well-known state-space explosion problem.
It is therefore natural to integrate other verification methods with the already established
techniques, e.g., appropriate decomposition techniques, which naturally counter the
disadvantages of the model checking approach. However, defining decomposition techniques for
synchronous languages is a difficult task, as a result of the inherent parallelism emerging from
the synchronous broadcast communication.
Inspired by the progress in the field of desynchronization of synchronous systems by
representing them in other MoCs, this work investigates the possibility of adapting and using
methods and tools designed for other MoCs for the verification of systems represented in the
synchronous MoC. To this end, this work introduces the interactive verification of synchronous
systems based on the foundation of formal verification for sequential programs – the
Hoare calculus. Due to the different models of computation, several problems have to be
solved. In particular, due to the large amount of concurrency, several parts of the program
are active at the same point in time. In contrast to sequential programs, a decomposition
in the Hoare-logic style, which is in some sense a symbolic execution from one control-flow
location to another, here requires the consideration of several control flows at once. Therefore, different
approaches for the interactive verification of synchronous systems are presented.
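For orientation, two standard rules of the sequential Hoare calculus that form the starting point are the assignment axiom and the rule of sequential composition (a textbook formulation, not a rule set specific to this thesis):
\[
\{P[e/x]\}\; x := e \;\{P\}
\qquad\qquad
\frac{\{P\}\, S_1\, \{Q\} \quad \{Q\}\, S_2\, \{R\}}{\{P\}\, S_1;\, S_2\, \{R\}}
\]
In a synchronous program, however, a single reaction step executes statements of several parallel threads within the same logical instant, so rules of this purely sequential form cannot be applied to one control flow in isolation.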
Additionally, the representation of synchronous systems by other MoCs and the influence
of the chosen representation on the verification task are elaborated by embedding synchronous
systems in a single verification tool in different ways.
The feasibility of the approach is shown by integrating it with the established
model checking methods through the implementation of the AIFProver on top of the Averest system.
In recent years, the field of polymer tribology has experienced a tremendous development,
leading to an increased demand for highly sophisticated in-situ measurement methods.
Therefore, advanced measurement techniques were developed and established
in this study. Innovative approaches based on dynamic thermocouple, resistive electrical
conductivity, and confocal distance measurement methods were developed
to characterize in situ the temperature at sliding interfaces, the real contact
area, and the thickness of transfer films. Although dynamic thermocouple
and real contact area measurement techniques had already been used in similar
applications for metallic sliding pairs, comprehensive modifications were necessary to
meet the specific demands and characteristics of polymers and composites, since
they have significantly different thermal conductivities and contact kinematics. Using
tribologically optimized PEEK compounds as a reference, a new measurement and
calculation model for the dynamic thermocouple method was set up. This method
allows the determination of hot spot temperatures for PEEK compounds, and it was
found that they can reach up to 1000 °C when short carbon fibers are present in the
polymer. With regard to the non-isotropic characteristics of the polymer compound,
the contact situation between short carbon fibers and the steel counterbody could be
successfully monitored by applying a resistive measurement method for the real contact
area determination. Temperature compensation approaches were investigated
for the transfer film layer thickness determination, resulting in in-situ measurements
with a resolution of ~0.1 μm. In addition to a successful implementation of the measurement
systems, failure mechanisms were clarified for the PEEK compound
used. For the first time in polymer tribology, the behavior of the most interesting
system parameters could be monitored simultaneously under increasing load
conditions. The measurements showed an increasing friction coefficient, wear rate, transfer film layer
thickness, and specimen overall temperature when frictional energy exceeded the
thermal transport capabilities of the specimen. In contrast, the real contact area between
short carbon fibers and steel decreased due to the separation effect caused by
the transfer film layer. As the sliding contact became more and more matrix-dominated,
the hot spot temperatures on the fibers dropped, too. The results of this failure
mechanism investigation already demonstrate the opportunities which the new
measurement techniques provide for a deeper understanding of tribological processes,
enabling improvements in material composition and application design.
In automotive test rigs, we apply load time series to components such that the outcome is as close as possible to some reference data. The testing procedure should in general be less expensive and at the same time take less time. In my thesis, I propose a test rig damage optimization problem (WSDP). This approach improves upon the test rig stress optimization problem (TSOP) used as the state of the art by industry experts.
In both the (TSOP) and the (WSDP), we optimize the load time series for a given test rig configuration. As the name suggests, in the (TSOP) the reference data is the stress time series. The detailed behaviour of the stresses as functions of time is sometimes not the most important aspect; instead, the damage potential of the stress signals is considered. Since damage is not part of the objectives in the (TSOP), the total damage computed from the optimized load time series is not optimal with respect to the reference damage. Additionally, the load time series obtained is as long as the reference stress time series, and the total damage computation needs cycle counting algorithms and Goodman corrections. The use of cycle counting algorithms makes the computation of damage from load time series non-differentiable.
To overcome the issues discussed in the previous paragraph, this thesis uses block loads for the load time series. Using block loads makes the damage differentiable with respect to the load time series. Additionally, in some special cases it is shown that the damage is convex when block loads are used, and no cycle counting algorithms are required. Using load time series with block loads enables us to use damage in the objective function of the (WSDP).
During every iteration of the (WSDP), we have to find the maximum total damage over all plane angles. The first attempt at solving the (WSDP) discretizes the interval of plane angles to find the maximum total damage at each iteration. This is shown to give unreliable results and makes the maximum total damage function non-differentiable with respect to the plane angle. To overcome this, the damage function for a given surface stress tensor due to a block load is remodelled by Gaussian functions, and the parameters of the new model are derived.
When we model the damage by Gaussian functions, the total damage is computed as a sum of Gaussian functions. Finding the plane with the maximum damage is then similar to finding the modes of a Gaussian Mixture Model (GMM), the difference being that the Gaussian functions used in a GMM are probability density functions, which is not the case in the damage approximation presented in this work. We derive conditions for a single maximum of a sum of Gaussian functions, similar to the ones given for the unimodality of a GMM by Aprausheva et al. in [1].
Using the conditions for a single maximum, we give a clustering algorithm that merges the Gaussian functions in the sum into clusters. Each cluster obtained in this way has a single maximum in the absence of the other Gaussian functions of the sum. The approximate location of the maximum of each cluster is used as the starting point of a fixed point iteration on the original damage function to obtain the actual maximum total damage at each iteration.
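The following sketch illustrates this idea in one dimension (a generic illustration, not the thesis implementation; the Gaussian parameters and the merging rule are assumptions made for the example). It clusters the centers of a sum of Gaussian functions and refines each cluster's approximate maximum with the standard fixed point iteration for stationary points of such a sum:

import numpy as np

# Illustrative sum of (unnormalized) Gaussian functions
# f(x) = sum_i a_i * exp(-(x - mu_i)^2 / (2 * s_i^2)); the parameters are made up.
a  = np.array([1.0, 0.8, 1.2])
mu = np.array([0.1, 0.2, 1.5])
s  = np.array([0.15, 0.20, 0.25])

def terms(x):
    # Value of each Gaussian term at x
    return a * np.exp(-(x - mu) ** 2 / (2 * s ** 2))

def fixed_point_mode(x0, iters=100):
    # A stationary point of f satisfies x = sum_i w_i(x) * mu_i / sum_i w_i(x)
    # with weights w_i(x) = term_i(x) / s_i^2 (set f'(x) = 0 and solve for x).
    x = x0
    for _ in range(iters):
        w = terms(x) / s ** 2
        x = np.sum(w * mu) / np.sum(w)
    return x

def cluster_centers(mu, s, factor=1.0):
    # Greedily merge Gaussians whose centers are closer than 'factor' times the
    # sum of their spreads; each cluster is expected to carry a single maximum.
    order = np.argsort(mu)
    clusters = [[order[0]]]
    for i in order[1:]:
        last = clusters[-1][-1]
        if mu[i] - mu[last] <= factor * (s[i] + s[last]):
            clusters[-1].append(i)
        else:
            clusters.append([i])
    return [float(np.mean(mu[c])) for c in clusters]

# One refined candidate per cluster; the best candidate is the global maximum of f.
candidates = [fixed_point_mode(c) for c in cluster_centers(mu, s)]
best = max(candidates, key=lambda x: terms(x).sum())
print(best, terms(best).sum())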
We implement the method for the (TSOP) and the two methods (with discretization and with clustering) for the (WSDP) on two example problems. The results obtained from the (WSDP) using discretization are shown to be better than the results obtained from the (TSOP). Furthermore, we show that the (WSDP) using the clustering approach to find the maximum total damage requires fewer iterations and is more reliable than using discretization.
‘Dioxin-like’ (DL) compounds occur ubiquitously in the environment. Toxic responses associated with specific polychlorinated dibenzo-p-dioxins (PCDDs), dibenzofurans (PCDFs), and polychlorinated biphenyls (PCBs) include dermal toxicity, immunotoxicity, liver toxicity, carcinogenicity, as well as adverse effects on reproduction, development, and endocrine functions. Most, if not all, of these effects are believed to be due to the interaction of these compounds with the aryl hydrocarbon receptor (AhR).
With tetrachlorodibenzo-p-dioxin (TCDD), the most potent congener, serving as the reference, a toxic equivalency factor (TEF) concept was employed, in which each congener is assigned a TEF value reflecting the compound’s toxicity relative to that of TCDD.
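For context, the toxic equivalent (TEQ) of a mixture is then computed as the TEF-weighted sum of the individual congener concentrations, i.e. \(\mathrm{TEQ} = \sum_i \mathrm{TEF}_i \cdot C_i\), where \(C_i\) is the concentration of congener \(i\); this is the standard formulation of the concept and not specific to this thesis.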
The EU-project ‘SYSTEQ’ aimed to develop, validate, and implement human systemic TEFs as indicators of toxicity for DL-congeners. Hence, the identification of novel quantifiable biomarkers of exposure was a major objective of the SYSTEQ project.
In order to approach this objective, a mouse whole genome microarray analysis was applied using a set of seven individual congeners, termed the ‘core congeners’. These core congeners (TCDD, 1-PeCDD, 4-PeCDF, PCB 126, PCB 118, PCB 156, and the non-dioxin-like PCB 153), which contribute approximately 90% of the toxic equivalents (TEQs) in the human food chain, were further tested in vivo as well as in vitro. The mouse whole genome microarray revealed a conserved list of differentially regulated genes and pathways associated with ‘dioxin-like’ effects.
A definitive data set from in vitro studies was intended to serve as a foundation for the possible establishment of novel TEFs. Thus, CYP1A induction measured by EROD activity, which represents a sensitive and the best-known marker for dioxin-like effects, was used to estimate the potency and efficacy of selected congeners. For this study, primary rat hepatocytes and the rat hepatoma cell line H4IIE were used, as well as the core congeners and an additional group of compounds of comparable environmental relevance: 1,6-HxCDD, 1,4,6-HpCDD, TCDF, 1,4-HxCDF, 1,4,6-HpCDF, PCB 77, and PCB 105.
In addition, a human whole genome microarray experiment was performed in order to gain knowledge about TCDD’s impact on cells of the immune system. Hence, human peripheral blood mononuclear cells (PBMCs) were isolated from individuals and exposed to TCDD or to TCDD in combination with a stimulus (lipopolysaccharide (LPS) or phytohemagglutinin (PHA)). A few members of the AhR gene battery were found to be regulated, and limited data on potential TCDD-mediated immunomodulatory effects were obtained. Still, the data gained in this regard were limited due to large inter-individual differences.
In this thesis, we combine Groebner basis techniques with SAT solvers in different ways.
Both SAT solvers and Groebner basis techniques have their own strengths and weaknesses,
and combining them can compensate for their respective weaknesses.
The first combination uses Groebner techniques to learn additional binary clauses for a SAT solver from a selection of clauses. This combination was first proposed by Zengler and Kuechlin.
However, in our experiments, about 80 percent of the Groebner basis computations yield no new binary clauses.
By selecting smaller and more compact inputs for the Groebner basis computations, we can significantly
reduce the number of unproductive Groebner basis computations and learn many more binary clauses. In addition,
the new strategy can reduce the solving time of a SAT solver in general, especially for large and hard problems.
The second combination uses an all-solution SAT solver and interpolation to compute Boolean Groebner bases of Boolean elimination ideals of a given ideal. Computing a Boolean Groebner basis of the given ideal is inefficient when we want to eliminate most of the variables from a large system of Boolean polynomials.
Therefore, we propose a more efficient approach to handle such cases.
In this approach, the given ideal is translated into a CNF formula. Then an all-solution SAT solver is used to find the projection of all solutions of the given ideal. Finally, an algorithm, e.g. the Buchberger-Moeller algorithm, is used to compute the reduced Groebner basis associated with the projection.
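As a small illustration of the correspondence that underlies this translation (a generic sketch, not the thesis implementation; polynomials are represented in algebraic normal form as sets of monomials over \({\mathbb F}_2\)): a clause is violated exactly when all of its literals are false, so each clause corresponds to a product that must vanish on every solution.

def poly_add(p, q):
    # Addition over F2 in algebraic normal form: XOR of the monomial sets
    return p ^ q

def poly_mul(p, q):
    # Multiplication over F2, using the field equations x^2 = x (monomial union)
    result = set()
    for m1 in p:
        for m2 in q:
            result.symmetric_difference_update({m1 | m2})
    return frozenset(result)

ONE = frozenset([frozenset()])  # the constant polynomial 1

def clause_to_poly(clause):
    # clause: list of (variable, is_positive); the returned polynomial is 0 on an
    # assignment iff the clause is satisfied (positive literal -> factor x + 1,
    # negative literal -> factor x).
    p = ONE
    for var, positive in clause:
        x = frozenset([frozenset([var])])
        p = poly_mul(p, poly_add(x, ONE) if positive else x)
    return p

def evaluate(p, assignment):
    # Evaluate an ANF polynomial under a {variable: 0/1} assignment
    return sum(all(assignment[v] for v in m) for m in p) % 2

# (x or not y) is violated only for x = 0, y = 1
p = clause_to_poly([("x", True), ("y", False)])
print(evaluate(p, {"x": 0, "y": 1}))  # 1: clause violated
print(evaluate(p, {"x": 1, "y": 1}))  # 0: clause satisfied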
We also optimize the Buchberger-Moeller algorithm for the lexicographic ordering and compare it with Brickenstein's interpolation algorithm.
Finally, we apply a combination of Groebner basis and abstraction techniques to the verification of digital designs that contain complicated data paths.
For a given design, we construct an abstract model.
Then, we reformulate it as a system of polynomials in the ring \({\mathbb Z}_{2^k}[x_1,\dots,x_n]\).
The variables are ordered in such a way that the system is already a Groebner basis w.r.t. the lexicographic monomial ordering.
Finally, normal form computation is employed to prove the desired properties.
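As a toy illustration of this proof principle (not an example from the thesis): if the abstract model of a design with inputs \(x, y\) and output \(z\) yields the basis \(G = \{z - x \cdot y\}\) in \({\mathbb Z}_{2^k}[z,x,y]\) with the lexicographic ordering \(z > x > y\), then the property polynomial \(p = z - y \cdot x\) reduces to the normal form \(0\) modulo \(G\). Hence \(p\) lies in the ideal and vanishes on every solution of the system, i.e. the property holds.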
To evaluate our approach, we verify the global property of a multiplier and of a FIR filter using the computer algebra system Singular. The results show that our approach is much faster than the commercial verification tool from Onespin on these benchmarks.
Test rig optimization
(2014)
Designing good test rigs for fatigue life tests is a common task in the automotive
industry. The problem of finding an optimal test rig configuration and
actuator load signals can be formulated as a mathematical program. We introduce
a new optimization model that includes multi-criteria, discrete, and
continuous aspects. At the same time, we manage to avoid the necessity of
dealing with the rainflow-counting (RFC) method. RFC is an algorithm that
extracts load cycles from an irregular time signal. As a mathematical function,
it is non-convex and non-differentiable and hence makes optimization
of the test rig intractable.
The block structure of the load signals is assumed from the beginning.
It greatly reduces the complexity of the problem without decreasing the feasible
set. Also, we optimize with respect to the actuators’ positions, which makes
it possible to take torques into account and thus extend the feasible set. As
a result, the new model gives significantly better results compared with the
other approaches in test rig optimization.
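To make the role of the block structure concrete, the following sketch computes Palmgren-Miner damage directly from block loads; it is a generic illustration with an assumed Basquin S-N curve (the reference values are made up), not the model used in the thesis. Because each block is a constant-amplitude load, no rainflow counting is needed, and the damage is a smooth function of the block amplitudes:

# Assumed Basquin-type S-N curve: N(S) = N_REF * (S / S_REF) ** (-K); illustrative values.
S_REF, N_REF, K = 400.0, 1.0e6, 5.0

def cycles_to_failure(stress_amplitude):
    return N_REF * (stress_amplitude / S_REF) ** (-K)

def miner_damage(blocks):
    # blocks: list of (stress_amplitude, cycle_count); Palmgren-Miner sums the
    # damage fractions n_i / N_i of the individual constant-amplitude blocks.
    return sum(n / cycles_to_failure(s) for s, n in blocks)

# Example block loading with three constant-amplitude blocks
blocks = [(200.0, 1.0e5), (300.0, 1.0e4), (350.0, 5.0e3)]
print(miner_damage(blocks))  # failure is predicted when the accumulated damage reaches 1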
Under certain conditions, the non-convex test rig problem is a union of
convex problems on cones. Numerical methods for optimization usually need
constraints and a starting point. We describe an algorithm that detects each
cone and an interior point of it in polynomial time.
The test rig problem belongs to the class of bilevel programs. For every
instance of the state vector, the sum of functions has to be maximized. We
propose a new branch-and-bound technique that uses the local maxima of every
summand.
Perceptual grouping is an integral part of visual object recognition. It organizes elements within our visual field according to a set of heuristics (grouping principles), most of which are not well understood. To identify their temporal processing dynamics (i.e., to identify whether they rely on neuronal feedforward or recurrent activation), we introduce the primed flanker task that is based on a firm empirical and theoretical background. In three sets of experiments, participants responded to visual stimuli that were either grouped by (1) similarity of brightness, shape, or size, (2) symmetry and closure, or (3) Good Gestalt. We investigated whether these grouping cues were effective in rapid visuomotor processing (i.e., in terms of response times, error rates, and priming effects) and whether the results met theory-driven indicators of feedforward processing. (1) In the first set of experiments with similarity cues, we varied subjective grouping strength and found that stronger grouping in the targets enhanced overall response times while stronger grouping in the primes enhanced priming effects in motor responses. We also obtained differences between rapid visuomotor processing and the subjective impression with cues of brightness and shape but not with cues of brightness and size. These results show that the primed flanker task is an objective measure for comparing different feedforward-transmitted groupings. (2) In the second set of experiments, we used the task to study grouping by symmetry and grouping by closure that are more complex than similarity cues. We obtained results that were mostly in accordance with a feedforward model. Some other factors (line of view, orientation of the symmetry axis) were irrelevant for processing of symmetry cues. Thus, these experiments suggest that closure and (possibly) viewpoint-independent symmetry cues are extracted rapidly during the first feedforward wave of neuronal processing. (3) In the third set of experiments, we used the task to study grouping by Good Gestalt (i.e., visual completion in occluded shapes). By varying the amount of occlusion, we found that the processing was in accordance with a feedforward model only when occlusion was very limited. Thus, these experiments suggest that Good Gestalt is not extracted rapidly during the first feedforward wave of neuronal processing but relies on recurrent activation. I conclude (1) that the primed flanker task is an excellent tool to identify and compare the processing characteristics of different grouping cues by behavioral means, (2) that grouping strength and other factors are strongly modulating these processing characteristics, which (3) challenges a dichotomous classification of grouping cues based on feedforward vs. recurrent processing (incremental grouping theory, Roelfsema, 2006), and (4) that a focus on temporal processing dynamics is necessary to understand perceptual grouping.
This thesis combines mass spectrometric studies on ionic dicarboxylic acids and transition metal cluster adsorbate complexes. IR-MPD spectra of protonated and deprotonated aliphatic and aromatic dicarboxylic acids provide insights into the nature of intramolecular hydrogen bonding. Investigations of their fragmentation behavior are supported by MP2 calculations. Prior work on cobalt transition metal clusters is extended to iron and nickel, and three cobalt alloys have been studied.
Pedestrian Flow Models
(2014)
There have been many crowd disasters caused by poor planning of events. Pedestrian models are useful for analysing the behavior of pedestrians in advance of an event so that no pedestrians are harmed during the event. This thesis deals with pedestrian flow models on the microscopic, hydrodynamic, and scalar scales. Following the approach of Hughes, who describes the crowd as a thinking fluid, we use the solution of the Eikonal equation to compute the optimal path for pedestrians. We start with the microscopic model for pedestrian flow and then derive the hydrodynamic and scalar models from it. We use particle methods to solve the governing equations. Moreover, we have coupled a mesh-free particle method to a fixed grid for solving the Eikonal equation. We consider an example with a large number of pedestrians to investigate our models for different settings of obstacles and for different parameters. We also consider the pedestrian flow in a straight corridor and through a T-junction and compare our numerical results with experiments. A part of this work is devoted to finding a mesh-free method for solving the Eikonal equation. Most of the available methods for solving the Eikonal equation are restricted to either Cartesian or triangulated grids. In this context, we propose a mesh-free method for solving the Eikonal equation that is applicable to arbitrary grids and useful for complex geometries.
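For reference, the optimal-path potential \(\phi\) in such models satisfies an Eikonal equation of the form \(\lVert \nabla \phi(x) \rVert = 1/f(\rho(x))\), where \(\rho\) is the pedestrian density and \(f(\rho)\) the density-dependent walking speed, and pedestrians move in the direction of \(-\nabla \phi\); this is the standard Hughes-type formulation, and the exact coupling used in the thesis may differ in detail.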
This dissertation focuses on the evaluation of the technical and environmental sustainability of water distribution systems based on scenario analysis. A decision support system is created to assist in the decision-making process and to visualize the results of the sustainability assessment for current and future populations and scenarios. First, a methodology is developed to assess the technical and environmental sustainability of the current and future water distribution system scenarios. Then, scenarios are produced to evaluate alternative solutions for the current water distribution system as well as for future populations and water demand variations. Finally, a decision support system is proposed that uses a combination of several visualization approaches to increase the data readability and robustness of the sustainability evaluations of the water distribution system.
The technical sustainability of a water distribution system is measured using the sustainability index methodology, which is based on the reliability, resiliency, and vulnerability performance criteria. Hydraulic efficiency and water quality requirements are represented using the nodal pressure and water age parameters, respectively. The U.S. Environmental Protection Agency EPANET software is used to simulate the hydraulics (i.e. nodal pressure) and water quality (i.e. water age) in a case study. In addition, the environmental sustainability of a water network is evaluated using the “total fresh water use” and “total energy intensity” indicators. For each scenario, multi-criteria decision analysis is used to combine the technical and environmental sustainability criteria for the study area.
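As an illustration of one common formulation of these performance criteria (Loucks-style definitions; the pressure threshold and the aggregation into a single index are assumptions for this sketch, and the dissertation's exact definitions may differ), the three criteria can be computed from a simulated nodal pressure time series as follows:

import numpy as np

def sustainability_criteria(pressure, p_min=30.0):
    # pressure: simulated nodal pressure time series (one value per time step)
    ok = pressure >= p_min                      # satisfactory time steps
    reliability = ok.mean()                     # fraction of satisfactory steps
    failures = ~ok
    # resiliency: probability that a failure step is followed by a satisfactory step
    recoveries = np.sum(failures[:-1] & ok[1:])
    resiliency = recoveries / failures.sum() if failures.any() else 1.0
    # vulnerability: average severity of the failures, normalized by the threshold
    deficits = np.maximum(p_min - pressure, 0.0)
    vulnerability = deficits[failures].mean() / p_min if failures.any() else 0.0
    return reliability, resiliency, vulnerability

def sustainability_index(rel, res, vul):
    # One common aggregation (Loucks): the product of the three criteria
    return rel * res * (1.0 - vul)

pressure = np.array([42.0, 38.0, 25.0, 27.0, 33.0, 45.0, 29.0, 31.0])  # toy data
rel, res, vul = sustainability_criteria(pressure)
print(rel, res, vul, sustainability_index(rel, res, vul))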
The technical and environmental sustainability assessment methodology is first applied to the baseline scenario (i.e. the current water distribution system). Critical locations where hydraulic efficiency and water quality problems occur in the current system are identified. Two major scenario options are considered to increase the sustainability at these critical locations. These scenarios focus on creating alternative systems in order to test and verify the technical and environmental sustainability methodology rather than on obtaining the best solution for the current and future water distribution systems. The first scenario is a traditional approach to increasing hydraulic efficiency and water quality; it includes using additional network components such as booster pumps, valves, etc. The second scenario is based on using a reclaimed water supply to meet the non-potable water demand and the fire flow. The fire flow simulation is specifically included in the sustainability assessment since regulations have a significant impact on urban water infrastructure design. Eliminating the fire flow requirement from potable water distribution systems would assist in saving fresh water resources as well as in reducing detention times.
The decision support system is created to visualize the results of each scenario and to effectively compare these results with each other. The EPANET software is a powerful tool for conducting hydraulic and water quality analyses, but its visualization capabilities are limited for decision support purposes. Therefore, in this dissertation, the hydraulic and water quality simulations are completed using the EPANET software, and the results for each scenario are visualized by combining several visualization techniques in order to provide better data readability. The first technique introduced here uses small multiple maps instead of animation to visualize the nodal pressure and water age parameters. This technique eliminates change blindness and allows an easy comparison of time steps. In addition, a procedure is proposed to aggregate the nodes along the edges in order to simplify the water network. A circle view technique is used to visualize two values of a single parameter (i.e. the nodal pressure or water age). The third approach is based on fitting the water network into a grid representation, which eliminates the irregular geographic distribution of the nodes and improves the visibility of each circle view. Finally, a prototype of an interactive decision support tool is proposed for the current population and water demand scenarios. The interactive tool enables the analysis of the aggregated nodes and provides information about the results of each of the current water distribution scenarios.