In the manufacture of highly loaded structural components made of fiber-reinforced polymer composites (FRP, German: Faser-Kunststoff-Verbund, FKV), textile semi-finished products such as woven fabrics or stitched biaxial non-crimp fabrics are widely used. In combination with a polymer matrix, these semi-finished products exhibit periodic out-of-plane roving waviness. The magnitude of the waviness depends, among other factors, on the manufacturing parameters of the dry semi-finished products. By understanding the effective causal relationships between roving waviness and mechanical behavior, a better computational estimation of the material properties is to be achieved.
In this work, the influence of the waviness parameters amplitude and wavelength on the fiber-parallel properties was investigated, among other aspects, on unidirectionally reinforced specimens. For this purpose, different amplitudes and wavelengths were deliberately introduced into the specimens using unidirectional fabrics. The influence of roving waviness on the stiffnesses was smaller than on the strengths. For the latter, it was observed that the compressive strengths were more strongly affected by the undulations than the tensile strengths. In addition, the waviness and the mechanical properties of textile FRP woven and non-crimp fabrics were determined. An important criterion in selecting the semi-finished products under investigation was that they are also used in practice.
In the next step, a simplified finite element waviness model was developed which makes it possible to determine the fiber-parallel properties without time-consuming and costly material tests. Especially for woven materials with large roving waviness, this model yields considerably better estimates than existing methods. Furthermore, a regression model for unidirectionally reinforced materials was derived on this basis, which allows design engineers without experience in finite element software to apply the waviness model.
The benefit of the developed waviness model was demonstrated in a three-stage validation program, which includes the transition to multidirectional laminates as well as more complex component geometries. The improved prediction provided by the waviness model was most evident for materials with large roving waviness and for components that failed by fiber fracture under compressive loading.
Continuum Mechanical Modeling of Dry Granular Systems: From Dilute Flow to Solid-Like Behavior
(2014)
In this thesis, we develop a granular hydrodynamic model which covers the three principal regimes observed in granular systems: the dilute flow, the dense flow, and the solid-like regime. We start from a kinetic model valid at low density and extend its validity to the granular solid-like behavior. Analytical and numerical results show that this model reproduces many complex phenomena, such as slow viscoplastic motion, critical states, and the pressure dip in sand piles. Finally, we formulate a 1D version of the full model and develop a numerical method to solve it. We present two numerical examples, a filling simulation and the flow on an inclined plane, in which all three regimes appear.
The objective of this thesis is to develop systematic event-triggered control designs for specified event generators, an important alternative to traditional periodic sampling control. The sporadic sampling inherent in event-triggered control is determined by the event-triggering conditions. This feature creates the need for a new control theory, analogous to the traditional sampled-data theory in computer control.
Developing a controller coupled with the applied event-triggering condition so as to maximize control performance is the essence of event-triggered control design. Within the design, the stability of the control system must be ensured as the first priority. The various control aims should be clearly incorporated into the design procedures. With applications in embedded control systems in mind, efficient implementation requires embedded software architectures of low complexity. This thesis aims to offer such a design and thereby to further complete the theory of event-triggered control design.
In the presented work, I evaluate if and how Virtual Reality (VR) technologies can be used to support researchers working in the geosciences by providing immersive, collaborative visualization systems as well as virtual tools for data analysis. Technical challenges encountered in the development of these systems are identified, and solutions for them are provided.
To enable geologists to explore large digital terrain models (DTMs) in an immersive, explorative fashion within a VR environment, a suitable terrain rendering algorithm is required. For realistic perception of planetary curvature at large viewer altitudes, spherical rendering of the surface is necessary. Furthermore, rendering must sustain interactive frame rates of about 30 frames per second to avoid sensory confusion of the user. At the same time, the data structures used for visualization should also be suitable for efficiently computing spatial properties such as height profiles or volumes in order to implement virtual analysis tools. To address these requirements, I have developed a novel terrain rendering algorithm based on tiled quadtree hierarchies using the HEALPix parametrization of a sphere. For evaluation purposes, the system is applied to a 500 GiB dataset representing the surface of Mars.
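The tile addressing in such a tiled quadtree hierarchy can be sketched in a few lines. The scheme below is a generic quadtree on the unit square, chosen here for illustration; it is not the exact HEALPix-based parametrization the system uses:

```python
def tile_path(x, y, level):
    """Return the quadtree child indices (0-3 per level) of the tile
    containing the normalized point (x, y) in [0, 1) x [0, 1)."""
    path = []
    for _ in range(level):
        x *= 2
        y *= 2
        cx, cy = int(x), int(y)   # 0 or 1: which half along each axis
        path.append(cy * 2 + cx)  # child index 0..3 within the parent
        x -= cx
        y -= cy
    return path

def tile_bounds(path):
    """Inverse operation: bounding box (x0, y0, size) of a tile path."""
    x0 = y0 = 0.0
    size = 1.0
    for child in path:
        size /= 2
        x0 += (child % 2) * size
        y0 += (child // 2) * size
    return x0, y0, size
```

The same index structure that selects tiles for rendering can then be traversed to aggregate heights or volumes for the virtual analysis tools.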
Considering the current development of inexpensive remote surveillance equipment such as quadcopters, it seems inevitable that these devices will play a major role in future disaster management applications. Virtual reality installations in disaster management headquarters which provide an immersive visualization of near-live, three-dimensional situational data could then be a valuable asset for rapid, collaborative decision making. Most terrain visualization algorithms, however, require a computationally expensive pre-processing step to construct a terrain database.
To address this problem, I present an on-the-fly pre-processing system for cartographic data. The system consists of a frontend for rendering and interaction as well as a distributed processing backend executing on a small cluster which produces tiled data in the format required by the frontend on demand. The backend employs a CUDA based algorithm on graphics cards to perform efficient conversion from cartographic standard projections to the HEALPix-based grid used by the frontend.
Measurement of spatial properties is an important step in quantifying geological phenomena. When performing these tasks in a VR environment, a suitable input device and abstraction for the interaction (a “virtual tool”) must be provided. This tool should enable the user to precisely select the location of the measurement even under a perspective projection. Furthermore, the measurement process should be accurate to the resolution of the data available and should not have a large impact on the frame rate in order to not violate interactivity requirements.
I have implemented virtual tools based on the HEALPix data structure for measurement of height profiles as well as volumes. For interaction, a ray-based picking metaphor was employed, using a virtual selection ray extending from the user’s hand holding a VR interaction device. To provide maximum accuracy, the algorithms access the quad-tree terrain database at the highest available resolution level while at the same time maintaining interactivity in rendering.
Geological faults are cracks in the earth’s crust along which a differential movement of rock volumes can be observed. Quantifying the direction and magnitude of such translations is an essential requirement in understanding earth’s geological history. For this purpose, geologists traditionally use maps in top-down projection which are cut (e.g. using image editing software) along the suspected fault trace. The two resulting pieces of the map are then translated in parallel against each other until surface features which have been cut by the fault motion come back into alignment. The amount of translation applied is then used as a hypothesis for the magnitude of the fault action. In the scope of this work it is shown, however, that performing this study in a top-down perspective can lead to the acceptance of faulty reconstructions, since the three-dimensional structure of topography is not considered.
To address this problem, I present a novel terrain deformation algorithm which allows the user to trace a fault line directly within a 3D terrain visualization system and interactively deform the terrain model while inspecting the resulting reconstruction from arbitrary perspectives. I demonstrate that the application of 3D visualization allows for a more informed interpretation of fault reconstruction hypotheses. The algorithm is implemented on graphics cards and performs real-time geometric deformation of the terrain model, guaranteeing interactivity with respect to all parameters.
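The core geometric operation can be illustrated with a minimal 2D sketch (a hypothetical simplification; the thesis performs the deformation on graphics cards for full 3D terrain): vertices on one side of the oriented fault trace are rigidly translated while the other side stays fixed.

```python
def deform_across_fault(vertices, a, b, offset):
    """Translate every 2D vertex lying left of the oriented fault line a->b
    by `offset`; vertices on the right side remain fixed."""
    (ax, ay), (bx, by) = a, b
    dx, dy = bx - ax, by - ay
    out = []
    for (px, py) in vertices:
        # The sign of the 2D cross product decides the side of the line.
        side = dx * (py - ay) - dy * (px - ax)
        if side > 0:  # left of a->b: apply the fault translation
            out.append((px + offset[0], py + offset[1]))
        else:
            out.append((px, py))
    return out
```

In the interactive system, the offset is the parameter the geologist varies while inspecting the reconstruction from arbitrary perspectives.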
Paleoceanography is the study of the prehistoric evolution of the ocean. One of the key data sources used in this research are coring experiments which provide point samples of layered sediment depositions at the ocean floor. The samples obtained in these experiments document the time-varying sediment concentrations within the ocean water at the point of measurement. The task of recovering the ocean flow patterns based on these deposition records is a challenging inverse numerical problem, however.
To support domain scientists working on this problem, I have developed a VR visualization tool to aid in the verification of model parameters by providing simultaneous visualization of experimental data from coring as well as the resulting predicted flow field obtained from numerical simulation. Earth is visualized as a globe in the VR environment, with coring data presented using a billboard rendering technique, while the time-variant flow field is indicated using Line Integral Convolution (LIC). To study individual sediment transport pathways and their correlation with the depositional record, interactive particle injection and real-time advection are supported.
The present work investigated three important constructs in the field of psychology: creativity, intelligence, and giftedness. The major objective was to clarify some aspects of each of these three constructs, as well as possible correlations between them. Of special interest were: (1) the relationship between creativity and intelligence, particularly the validity of the threshold theory; (2) the development of these constructs within average and above-average intelligent children and across grade levels; and (3) the comparison between the development of intelligence and creativity in above-average intelligent primary school children who participated in a special program for children classified as "gifted", called Entdeckertag (ET), and an age-, class-, and IQ-matched control group. The ET is a pilot program implemented in 2004 by the Ministry for Education, Science, Youth and Culture of the state of Rhineland-Palatinate, Germany. The central goals of this program are the early recognition of gifted children and early intervention, based on the areas of German language, general science, and mathematics, as well as fostering the development of a child's creativity, social ability, and more. Five hypotheses were proposed and analyzed, and they are reported separately within five chapters. To analyze these hypotheses, a sample of 217 children, recruited from first to fourth grade and between the ages of six and ten years, was tested for intelligence and creativity. Children performed three tests: the Standard Progressive Matrices (SPM) for the assessment of classical intelligence, the Test of Creative Thinking – Drawing Production (TCT-DP) for the measurement of classical creativity, and the Creative Reasoning Task (CRT) for the evaluation of convergent and divergent thinking, both in open problem spaces.
Participants were divided into two general cohorts: an intervention group (N = 43), composed of children participating in the Entdeckertag program, and a non-intervention group (N = 174), composed of children from regular primary schools. For the testing of the hypotheses, children were placed into more specific groups according to the particular hypothesis being tested. It could be concluded that creativity and intelligence were not significantly related, and the threshold theory was not confirmed. Additionally, intelligence accounted for less than 1% of the variance in creativity; moreover, intelligence scores were unable to predict later creativity scores. The development of classical intelligence and classical creativity across grade levels also showed different patterns: intelligence increased steadily and continuously, whereas creativity stagnated after the third grade. Finally, the ET program proved to be beneficial for classical intelligence after two years of attendance, but no effect was found for creativity. Overall, the results indicate that organizations and institutions such as schools should not look solely at intelligence performance, especially when aiming to identify and foster gifted or creative individuals.
For many decades, one of the major areas of research in formal language theory has been the search for language classes that extend the context-free languages enough to include various languages that arise in practice, while still keeping as many as possible of the useful properties that context-free grammars have, most notably cubic parsing time. In this thesis we add a new family of classes to this field, namely position-and-length-dependent context-free grammars. Our classes use the approach of regulated rewriting, where derivations in a context-free base grammar are allowed or forbidden based on, e.g., the sequence of rules used in a derivation or the sentential forms each rule is applied to. For our new classes we look at the yield of each rule application, i.e. the subword of the final word that is eventually derived from the symbols introduced by the rule application. The position and length of the yield in the final word define the position and length of the rule application, and each rule is associated with a set of positions and lengths where it is allowed to be applied.
We show that, unless the sets of allowed positions and lengths are highly complex, the languages in our classes can be parsed in the same time as context-free grammars, using slight adaptations of well-known parsing algorithms. We also show that they form a proper hierarchy above the context-free languages, and we examine their relation to language classes defined by other types of regulated rewriting.
We complete the treatment of the language classes by introducing pushdown automata with position counter, an extension of traditional pushdown automata that recognizes the languages generated by position-and-length-dependent context-free grammars, and we examine various closure and decidability properties of our classes. Additionally, we gather the corresponding results for the subclasses that use right-linear and left-linear base grammars, respectively, and the corresponding class of automata, finite automata with position counter.
Finally, as an application of our idea, we introduce length-dependent stochastic context-free grammars and show how they can be employed to improve the quality of predictions for RNA secondary structures.
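The cubic parsing time referred to above is the classic bound achieved, for example, by the CYK algorithm for grammars in Chomsky normal form. A minimal CYK recognizer, without the position-and-length regulation this thesis adds on top, can be sketched as:

```python
from itertools import product

def cyk(word, start, unary, binary):
    """CYK recognizer in O(n^3) table operations for a CNF grammar.
    unary:  dict terminal -> set of nonterminals A with rule A -> a
    binary: dict (B, C)   -> set of nonterminals A with rule A -> B C"""
    n = len(word)
    cell = {}  # cell[(i, j)] = nonterminals deriving word[i:j]
    for i, a in enumerate(word):
        cell[(i, i + 1)] = set(unary.get(a, ()))
    for length in range(2, n + 1):          # span length
        for i in range(n - length + 1):     # span start
            j = i + length
            syms = set()
            for k in range(i + 1, j):       # split point
                for B, C in product(cell[(i, k)], cell[(k, j)]):
                    syms |= binary.get((B, C), set())
            cell[(i, j)] = syms
    return start in cell[(0, n)]

# Demo grammar in CNF for {a^n b^n : n >= 1}:
#   S -> A T | A B,  T -> S B,  A -> a,  B -> b
UNARY = {'a': {'A'}, 'b': {'B'}}
BINARY = {('A', 'T'): {'S'}, ('A', 'B'): {'S'}, ('S', 'B'): {'T'}}
```

A position-and-length-dependent variant would additionally check, when a rule fills `cell[(i, j)]`, that the pair (i, j - i) lies in that rule's allowed set.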
In this paper we present a method for nonlinear frequency response analysis of mechanical vibrations of 3-dimensional solid structures.
For computing nonlinear frequency response to periodic excitations, we employ the well-established harmonic balance method.
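In its standard form, assumed here for context, the harmonic balance method approximates the periodic steady-state response by a truncated Fourier series,

```latex
u(t) \;\approx\; \frac{a_0}{2} + \sum_{k=1}^{H} \left( a_k \cos(k\Omega t) + b_k \sin(k\Omega t) \right),
```

and the coefficients \(a_k, b_k\) are determined by requiring the residual of the equation of motion to be orthogonal to each retained harmonic (a Galerkin projection in time).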
A fundamental aspect for allowing a large-scale application of the method is model order reduction of the discretized equation of motion. Therefore we propose the utilization of a modal projection method enhanced with modal derivatives, providing second-order information.
For an efficient spatial discretization of the nonlinear partial differential equations of continuum mechanics, including large deformations and hyperelastic material laws, we use the isogeometric finite element method. This method has already been shown to possess advantages over classical finite element discretizations in terms of higher accuracy of the numerical approximations in linear vibration and static large-deformation analysis.
With several computational examples, we demonstrate the applicability and accuracy of the modal derivative reduction method for nonlinear static computations and vibration analysis.
Thus, the presented method opens a promising perspective on the application of nonlinear frequency response analysis to large-scale industrial problems.
The work presented in this thesis discusses the thermal and power management of multi-core processors (MCPs) with both two-dimensional (2D) and three-dimensional (3D) package chips. Power and thermal management/balancing is of increasing concern, poses a technological challenge to MCP development, and will be a main performance bottleneck for future MCPs. This thesis develops optimal thermal and power management policies for MCPs. The system thermal behavior of both 2D and 3D package chips is analyzed, and mathematical models are developed. Thereafter, the optimal thermal and power management methods are introduced.
Nowadays, chips are generally packaged in 2D, meaning that there is only one layer of dies in the chip. The chip's thermal behavior can be described by a 3D heat conduction partial differential equation (PDE). As the aim is to balance the thermal behavior and power consumption among the cores, a group of one-dimensional (1D) PDEs, derived from the 3D heat conduction equation, is proposed to describe the thermal behavior of each core. The thermal behavior of the MCP is thus described by a group of 1D PDEs. An optimal controller is designed to manage the power consumption and balance the temperature among the cores based on the proposed 1D model.
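The 3D heat conduction PDE mentioned above has the standard form (generic coefficients, shown here for orientation):

```latex
\rho\, c_p \,\frac{\partial T}{\partial t} \;=\; \nabla \cdot \left( k\, \nabla T \right) + q(x, y, z, t),
```

with density \(\rho\), specific heat \(c_p\), thermal conductivity \(k\), and volumetric power density \(q\) contributed by the cores.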
3D packaging is an advanced technology in which at least two layers of dies are stacked in one chip. In contrast to 2D packages, a cooling system must be installed between the layers to reduce the internal temperature of the chip. In this thesis, a micro-channel liquid cooling system is considered; the heat-transfer characteristics of the micro-channel are analyzed and modeled as an ordinary differential equation (ODE). The dies are discretized into blocks based on the chip layout, with each block modeled as a thermal resistance-capacitance (R-C) circuit. Thereafter, the micro-channels are discretized, and the thermal behavior of the whole system is modeled as a system of ODEs. The micro-channel liquid velocity is set according to the workload and the temperature of the dies. For each velocity, the system can be described by a linear ODE model, so the whole system is a switched linear system. An H-infinity observer is designed to estimate the states, and the model predictive control (MPC) method is employed to design the thermal and power management/balancing controller for each submodel.
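A single discretized die block in such a thermal R-C model obeys a first-order ODE of the generic form (notation chosen here for illustration):

```latex
C_i\, \frac{\mathrm{d} T_i}{\mathrm{d} t} \;=\; \sum_{j \in \mathcal{N}(i)} \frac{T_j - T_i}{R_{ij}} \;+\; P_i,
```

where \(C_i\) is the block's thermal capacitance, \(R_{ij}\) are the thermal resistances to neighboring blocks and to the coolant, and \(P_i\) is the power dissipated in block \(i\). Stacking these equations for all blocks and channel segments yields the switched linear ODE system described above.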
The models and controllers developed in this thesis are verified by simulation experiments in MATLAB. The IBM Cell eight-core processor and the water micro-channel cooling system developed by IBM Research in collaboration with EPFL and ETHZ serve as the experimental subjects.
In the present work, the phase transitions in different Fe/FeC systems were studied using molecular dynamics simulations with the Meyer-Entel interaction potential (and the Johnson potential for the Fe-C interaction). Fe bicrystal, thin film, Fe-C bulk, and Fe-C nanowire systems were investigated to study the behavior of the phase transition, with the energetics, dynamics, and transformation pathways being analyzed.
Technology is omnipresent today. Yet for all its omnipresence, it is easy to overlook that the question of technology itself, i.e. the question of what exactly is to be understood by "technology" in the first place, has so far remained largely unclear.
For philosophy, this gives rise to the task of intervening here to clarify the concept.
The present work aims to contribute to a better understanding of technology and technical artifacts. The argument proceeds in two steps: first, it is shown that technology can only be understood through its demarcation from nature and from life, and a corresponding definition of the concept of technology is proposed. Subsequently, an understanding of technical artifacts in the sense of an artifactual technology is derived from this.
The work is then essentially structured in three parts:
1. The first chapter introduces the problem of the concept of technology:
A first section points to the historical dimension of the concept of technology (1.1); the current discussion of the concept is then summarized and critically assessed (1.2), and classifications and criteria for a possible definition of the concept of technology are proposed (1.3).
2. The second chapter establishes a concept of technology that proves to be semantically dependent on the concepts of "life" and "nature":
A first section distinguishes such a semantic relationship between the concepts from other possibilities of mutual demarcation (2.1). This demarcation is then given substance by means of so-called forms of constitution (2.2). After a detailed explanation of these forms of constitution in their pairwise interrelation, a definition of "technology" based on them is proposed. In a third section, the model of the forms of constitution is extended by so-called forms of disclosure, understood as the horizons of inquiry by means of which an internal differentiation into different kinds of technology succeeds (2.3). On this basis, a definition of a respective "specific technology" is proposed.
3. The third chapter examines the ontological status of technical artifacts:
Technical artifacts are concretized in the sense of a "specific technology" and thus interpreted as an artifactual technology (3.1). It is then examined to what extent such an interpretation proves itself with regard to a) the question of whether technical artifacts constitute natural kinds, and b) the problem of the coincidence of objects. The insights gained from these considerations are finally applied fruitfully to borderline cases of technical artifacts (3.2).
This thesis focuses on dealing with some new aspects of continuous time portfolio optimization by using the stochastic control method.
First, we extend the Busch-Korn-Seifried model for a large investor by using the Vasicek model for the short rate, and the resulting problem is solved explicitly for two types of intensity functions.
Next, we justify the existence of the constant proportion portfolio insurance (CPPI) strategy in a framework containing a stochastic short rate and a Markov switching parameter. The effect of the Vasicek short rate on the CPPI strategy was studied by Horsky (2012). This part of the thesis extends his research by including a Markov switching parameter, with the generalization based on the Bäuerle-Rieder investment problem. Explicit solutions are obtained for the portfolio problem both without and with the Money Market Account.
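For reference, the Vasicek short rate and the classical CPPI rule take the standard textbook forms (the thesis adds the Markov switching parameter on top of these):

```latex
\mathrm{d}r_t = \kappa\,(\theta - r_t)\,\mathrm{d}t + \sigma_r\,\mathrm{d}W_t,
\qquad
E_t = m\,(V_t - F_t),
```

where \(\kappa\) is the speed of mean reversion toward the level \(\theta\), \(E_t\) is the exposure to the risky asset, \(V_t\) the portfolio value, \(F_t\) the floor, and \(m > 1\) the CPPI multiplier.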
Finally, we apply the method used in the Busch-Korn-Seifried investment problem to explicitly solve the portfolio optimization problem with a stochastic benchmark.
We propose a model for acid-mediated tumor invasion involving two different scales: the microscopic one, for the dynamics of intracellular protons and their exchange with their extracellular counterparts, and the macroscopic scale of interactions between tumor cell and normal cell populations, along with the evolution of extracellular protons. We also account for the tactic behavior of cancer cells, which are assumed to bias their motion according to the gradient of extracellular protons (following [2,31] we call this pH taxis). A time-dependent (and also time-delayed) carrying capacity for the tumor cells in response to the effects of acidity is considered as well. The global well-posedness of the resulting multiscale model is proved with a regularization and fixed-point argument. Numerical simulations are performed in order to illustrate the behavior of the model.
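A generic way such a pH-taxis term enters a macroscopic equation for the cancer cell density \(c\) is through an advective flux along the extracellular proton gradient \(h\) (the form below is a standard sketch, not the exact model equations of this work):

```latex
\partial_t c \;=\; \nabla \cdot \left( D_c\, \nabla c \;-\; \chi\, c\, \nabla h \right) \;+\; \mu\, c \left( 1 - \frac{c}{K(t)} \right),
```

where \(\chi\) is the pH-tactic sensitivity (its sign convention determines whether cells move up or down the proton gradient) and \(K(t)\) is the time-dependent carrying capacity.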
The concept of fair trade rests on the implementation of the goals of economic, ecological, and social sustainability. Complementing previous research and literature, this work uses methods of behavioral economics to analyze the extent to which distributional preferences regarding income influence the purchase of fair-trade products.
This thesis, whose subject is located in the field of algorithmic commutative algebra and algebraic geometry, consists of three parts.
The first part is devoted to parallelization, a technique which allows us to take advantage of the computational power of modern multicore processors. First, we present parallel algorithms for the normalization of a reduced affine algebra A over a perfect field. Starting from the algorithm of Greuel, Laplagne, and Seelisch, we propose two approaches. For the local-to-global approach, we stratify the singular locus Sing(A) of A, compute the normalization locally at each stratum and finally reconstruct the normalization of A from the local results. For the second approach, we apply modular methods to both the global and the local-to-global normalization algorithm.
Second, we propose a parallel version of the algorithm of Gianni, Trager, and Zacharias for primary decomposition. For the parallelization of this algorithm, we use modular methods for the computationally hardest steps, such as for the computation of the associated prime ideals in the zero-dimensional case and for the standard bases computations. We then apply an innovative fast method to verify that the result is indeed a primary decomposition of the input ideal. This allows us to skip the verification step at each of the intermediate modular computations.
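The underlying modular strategy, i.e. compute modulo several primes, lift the results via the Chinese remainder theorem, and verify once at the end, can be sketched for integer data as follows (actual standard-bases computations over the rationals additionally require rational reconstruction):

```python
from math import prod

def crt(residues, primes):
    """Lift residues r_i mod p_i to the unique r mod prod(p_i)
    via the Chinese remainder theorem (primes pairwise distinct)."""
    M = prod(primes)
    r = 0
    for ri, pi in zip(residues, primes):
        Mi = M // pi
        # pow(Mi, -1, pi) is the modular inverse of Mi mod pi (Python 3.8+)
        r += ri * Mi * pow(Mi, -1, pi)
    return r % M
```

For example, an expensive integer result can be computed cheaply modulo each of the primes 10007, 10009, and 10037, and `crt` recovers it exactly as long as it is smaller than the product of the primes; a final verification step then replaces per-prime checks.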
The proposed parallel algorithms are implemented in the open-source computer algebra system SINGULAR. The implementation is based on SINGULAR's new parallel framework which has been developed as part of this thesis and which is specifically designed for applications in mathematical research.
In the second part, we propose new algorithms for the computation of syzygies, based on an in-depth analysis of Schreyer's algorithm. Here, the main ideas are that we may leave out so-called "lower order terms" which do not contribute to the result of the algorithm, that we do not need to order the terms of certain module elements which occur at intermediate steps, and that some partial results can be cached and reused.
Finally, the third part deals with the algorithmic classification of singularities over the real numbers. First, we present a real version of the Splitting Lemma and, based on the classification theorems of Arnold, algorithms for the classification of the simple real singularities. In addition to the algorithms, we also provide insights into how real and complex singularities are related geometrically. Second, we explicitly describe the structure of the equivalence classes of the unimodal real singularities of corank 2. We prove that the equivalences are given by automorphisms of a certain shape. Based on this theorem, we explain in detail how the structure of the equivalence classes can be computed using SINGULAR and present the results in concise form. Probably the most surprising outcome is that the real singularity type \(J_{10}^-\) is actually redundant.
Polychlorinated dibenzo-p-dioxins such as 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), which belong to the group of polyhalogenated hydrocarbons, are lipophilic and persistent environmental contaminants. They arise as unwanted impurities or by-products, primarily during the combustion of organic materials such as wood or waste, in tobacco smoke, and as by-products of organochlorine syntheses. Owing to their high lipophilicity, they accumulate in the food chain. Dioxins such as TCDD cause a multitude of biochemical and toxic effects, e.g. enzyme induction, liver toxicity, dermal toxicity, immunotoxicity, and carcinogenicity. In animal models, impairment of the reproductive system and of hormonal balance has been observed. There is consensus in the literature that most, if not all, toxic effects of dioxins are mediated via the aryl hydrocarbon receptor (AhR). In the last decade, however, several studies have reported AhR-independent effects of TCDD. Only a few studies have considered the kidney as a target organ, although it has been shown that the AhR also appears to play an important role in kidney development. Within this doctoral thesis, an animal study with wild-type and AhR-deficient mice was conducted, with the kidney also considered as a target organ. First, female and male wild-type and AhR-deficient mice were treated once orally with TCDD (25 µg/kg body weight). Gene expression patterns in the kidneys were then analyzed by microarray. In the kidneys of treated wild-type mice, 172 genes showed altered expression, and in the kidneys of treated AhR-deficient mice, 325 genes. In the kidneys of treated AhR-deficient mice, genes involved in processes of the hematopoietic system, blood coagulation, sterol biosynthesis, and the catabolism of organic acids were upregulated.
Downregulated were genes involved in lipid metabolism, in the biosynthesis and metabolism of small molecules, and in hormone metabolism. The increased expression observed in the microarray for selected genes of the hematopoietic system (e.g. Hba-a1, Hbb-b1, and Rps14) and of blood coagulation (e.g. Fgg and F10), as well as for several hepatic genes such as Lpl, Anxa1, c-Myc, Igfbp1, Esm1, and Cdh1 (the microarray analysis of the livers had been performed in an earlier study), was confirmed by RT-PCR. Furthermore, a slight, in part significantly increased expression of several pro- and anti-inflammatory cytokines (IL-1α, IL-1β, IL-6, IL-10, and TNF-α) was observed in the livers and spleens of treated AhR-deficient mice. The induction of xenobiotic-metabolizing enzymes such as Cyp1a1, Cyp1a2, and Cyp1b1 in the liver, kidney, lung, and spleen of treated wild-type mice was confirmed by Western blot and RT-PCR, with one exception: in the spleens of treated wild-type mice, no induction of these enzymes was observed at the mRNA level. In the livers of treated AhR-deficient mice, Cyp1b1 expression was significantly increased and AhR expression was reduced. The induction of the vitamin D receptor (VDR)-regulated genes Cyp24a1 and Cyp27b1 and of VDR itself in the kidneys of treated AhR-deficient mice indicates activation of the VDR. Furthermore, the reduced expression of c-Myc in the kidneys of TCDD-treated knockout mice provides a further indication of VDR activation. HPLC/MS-MS analysis of whole blood from AhR wild-type and AhR knockout mice revealed differences and commonalities between the genotypes. In this way it was possible to identify differences in amino acid metabolism and in tryptophan metabolism. Moreover, it was shown that differences between the genotypes existed both in the untreated and in the TCDD-treated state.
In recent years the field of polymer tribology has experienced tremendous development, leading to an increased demand for highly sophisticated in-situ measurement methods. Advanced measurement techniques were therefore developed and established in this study. Innovative approaches based on the dynamic thermocouple, resistive electrical conductivity, and confocal distance measurement methods were developed to characterize in situ both the temperature at sliding interfaces and the real contact area, as well as the thickness of transfer films. Although dynamic thermocouple and real contact area measurement techniques had already been used in similar applications for metallic sliding pairs, comprehensive modifications were necessary to meet the specific demands and characteristics of polymers and composites, since these have significantly different thermal conductivities and contact kinematics. Using tribologically optimized PEEK compounds as a reference, a new measurement and calculation model for the dynamic thermocouple method was set up. This method allows the determination of hot-spot temperatures for PEEK compounds, which were found to reach up to 1000 °C when short carbon fibers are present in the polymer. With regard to the non-isotropic characteristics of the polymer compound, the contact situation between short carbon fibers and the steel counterbody could be successfully monitored by applying a resistive measurement method for determining the real contact area. Temperature compensation approaches were investigated for determining the transfer film thickness, resulting in in-situ measurements with a resolution of ~0.1 µm. In addition to the successful implementation of the measurement systems, the failure mechanisms of the PEEK compound used were clarified. For the first time in polymer tribology, the most relevant system parameters could be monitored simultaneously under increasing load. The friction coefficient, wear rate, transfer film thickness, and overall specimen temperature increased once the frictional energy exceeded the thermal transport capabilities of the specimen. In contrast, the real contact area between short carbon fibers and steel decreased due to the separating effect of the transfer film. Since the sliding contact became increasingly matrix dominated, the hot-spot temperatures on the fibers dropped as well. The results of this failure mechanism investigation already demonstrate the opportunities the new measurement techniques provide for a deeper understanding of tribological processes, enabling improvements in material composition and application design.
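The temperature compensation mentioned for the confocal transfer-film measurement can be illustrated with a simple correction: the raw distance change seen by the sensor includes the thermal expansion of the specimen, which must be subtracted to isolate film growth. This is a hedged sketch only; the CTE value, specimen length, and function interface are assumptions for illustration, not the thesis's actual model:

```python
def transfer_film_thickness_um(distance_um, distance_ref_um,
                               temp_c, temp_ref_c,
                               specimen_length_um, alpha_per_k=60e-6):
    """Temperature-compensated transfer film thickness in micrometers.

    The confocal sensor sees the gap shrink both when a transfer film
    grows and when the specimen expands thermally, so the expansion
    term (CTE * length * delta-T) is subtracted from the raw change.
    alpha_per_k is a hypothetical CTE for a filled PEEK compound.
    """
    raw_change = distance_ref_um - distance_um            # total gap reduction
    expansion = alpha_per_k * specimen_length_um * (temp_c - temp_ref_c)
    return raw_change - expansion

# Illustrative numbers: 2 µm raw gap change, 25 K warming of a 1 mm specimen
thickness = transfer_film_thickness_um(
    distance_um=98.0, distance_ref_um=100.0,
    temp_c=50.0, temp_ref_c=25.0, specimen_length_um=1000.0)
```

With these invented numbers, 1.5 µm of the 2 µm raw change is thermal expansion, leaving a 0.5 µm film, i.e. well above the ~0.1 µm resolution the study reports.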
Perceptual grouping is an integral part of visual object recognition. It organizes elements within our visual field according to a set of heuristics (grouping principles), most of which are not well understood. To identify their temporal processing dynamics (i.e., to identify whether they rely on neuronal feedforward or recurrent activation), we introduce the primed flanker task that is based on a firm empirical and theoretical background. In three sets of experiments, participants responded to visual stimuli that were either grouped by (1) similarity of brightness, shape, or size, (2) symmetry and closure, or (3) Good Gestalt. We investigated whether these grouping cues were effective in rapid visuomotor processing (i.e., in terms of response times, error rates, and priming effects) and whether the results met theory-driven indicators of feedforward processing. (1) In the first set of experiments with similarity cues, we varied subjective grouping strength and found that stronger grouping in the targets enhanced overall response times while stronger grouping in the primes enhanced priming effects in motor responses. We also obtained differences between rapid visuomotor processing and the subjective impression with cues of brightness and shape but not with cues of brightness and size. These results show that the primed flanker task is an objective measure for comparing different feedforward-transmitted groupings. (2) In the second set of experiments, we used the task to study grouping by symmetry and grouping by closure that are more complex than similarity cues. We obtained results that were mostly in accordance with a feedforward model. Some other factors (line of view, orientation of the symmetry axis) were irrelevant for processing of symmetry cues. Thus, these experiments suggest that closure and (possibly) viewpoint-independent symmetry cues are extracted rapidly during the first feedforward wave of neuronal processing. 
(3) In the third set of experiments, we used the task to study grouping by Good Gestalt (i.e., visual completion in occluded shapes). By varying the amount of occlusion, we found that the processing was in accordance with a feedforward model only when occlusion was very limited. Thus, these experiments suggest that Good Gestalt is not extracted rapidly during the first feedforward wave of neuronal processing but relies on recurrent activation. I conclude (1) that the primed flanker task is an excellent tool to identify and compare the processing characteristics of different grouping cues by behavioral means, (2) that grouping strength and other factors strongly modulate these processing characteristics, which (3) challenges a dichotomous classification of grouping cues based on feedforward vs. recurrent processing (incremental grouping theory, Roelfsema, 2006), and (4) that a focus on temporal processing dynamics is necessary to understand perceptual grouping.
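The priming effects reported above are, in behavioral terms, response-time differences between prime-target conditions. A minimal sketch of how such an effect could be computed from raw trial data (the response times are invented; a real analysis would also exclude error trials and outliers, and aggregate across participants):

```python
from statistics import mean

def priming_effect_ms(rt_consistent, rt_inconsistent):
    """Priming effect in ms: mean RT on inconsistent-prime trials minus
    mean RT on consistent-prime trials. A positive value means the
    consistent prime facilitated the motor response."""
    return mean(rt_inconsistent) - mean(rt_consistent)

# Hypothetical response times (ms) from one participant
consistent = [350, 360, 355, 345, 365]
inconsistent = [400, 410, 405, 395, 415]
effect = priming_effect_ms(consistent, inconsistent)
```

Here the toy data yield a 50 ms priming effect; in the experiments above, a stronger grouping cue in the prime would show up as a larger value of this difference.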