Doctoral Theses, 2014 (78 documents, fulltext available)
In the theory of option pricing one is usually concerned with evaluating expectations under the risk-neutral measure in a continuous-time model.
However, very often these values cannot be calculated explicitly and numerical methods need to be applied to approximate the desired quantity. Monte Carlo simulations, numerical methods for PDEs and the lattice approach are the methods typically employed. In this thesis we consider the latter approach, with the main focus on binomial trees.
The binomial method is based on the concept of weak convergence. The discrete-time model is constructed so as to ensure convergence in distribution to the continuous process. This means that the expectations calculated in the binomial tree can be used as approximations of the option prices in the continuous model. The binomial method is easy to implement and can be adapted to options with different types of payout structures, including American options. This makes the approach very appealing. However, the problem is that in many cases, the convergence of the method is slow and highly irregular, and even a fine discretization does not guarantee accurate price approximations. Therefore, ways of improving the convergence properties are required.
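The lattice approach described above can be made concrete with a minimal Cox–Ross–Rubinstein-style binomial pricer (a sketch only; parameter names are illustrative and this is not the advanced models constructed in the thesis):

```python
import math

def crr_binomial_price(S0, K, r, sigma, T, n, payoff=None, american=False):
    """Price an option on a Cox-Ross-Rubinstein binomial tree.

    The discrete model converges weakly to geometric Brownian motion,
    so the tree expectation approximates the continuous-time price.
    """
    if payoff is None:
        payoff = lambda s: max(s - K, 0.0)  # vanilla call by default
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))     # up factor
    d = 1.0 / u                             # down factor
    disc = math.exp(-r * dt)
    q = (math.exp(r * dt) - d) / (u - d)    # risk-neutral up probability
    # terminal payoffs at the n+1 leaves of the tree
    values = [payoff(S0 * u**j * d**(n - j)) for j in range(n + 1)]
    # backward induction through the tree
    for i in range(n - 1, -1, -1):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(i + 1)]
        if american:  # early exercise check at each node
            values = [max(v, payoff(S0 * u**j * d**(i - j)))
                      for j, v in enumerate(values)]
    return values[0]
```

The oscillatory convergence mentioned above shows up directly here: rerunning the pricer for successive values of n produces prices that wander around the Black–Scholes value rather than approaching it monotonically.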
We apply Edgeworth expansions to study the convergence behavior of the lattice approach. We propose a general framework that allows us to obtain asymptotic expansions for both multinomial and multidimensional trees. This information is then used to construct advanced models with superior convergence properties.
In binomial models we usually deal with triangular arrays of lattice random vectors. In this case the available results on Edgeworth expansions for lattices are not directly applicable. Therefore, we first present Edgeworth expansions that are also valid in the binomial tree setting. We then apply these results to the one-dimensional and multidimensional Black-Scholes models. We obtain third order expansions for general binomial and trinomial trees in the 1D setting, and construct advanced models for digital, vanilla and barrier options. Second order expansions are provided for the standard 2D binomial trees, and advanced models are constructed for the two-asset digital and the two-asset correlation options. We also present advanced binomial models for a multidimensional setting.
Test rig optimization
(2014)
Designing good test rigs for fatigue life tests is a common task in the automotive industry. The problem of finding an optimal test rig configuration and actuator load signals can be formulated as a mathematical program. We introduce a new optimization model that combines multi-criteria, discrete and continuous aspects. At the same time, we manage to avoid the need to deal with the rainflow-counting (RFC) method. RFC is an algorithm which extracts load cycles from an irregular time signal. As a mathematical function it is non-convex and non-differentiable and hence makes optimization of the test rig intractable.
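The rainflow-counting step that the model avoids can be sketched as follows, using a simplified three-point rainflow extractor (an illustrative sketch, not the full ASTM reference procedure, which handles the signal start differently):

```python
def turning_points(signal):
    """Reduce a load-time signal to its sequence of local extrema."""
    tp = [signal[0]]
    for x in signal[1:]:
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x      # same direction: extend the current extremum
        elif x != tp[-1]:
            tp.append(x)    # direction change: new turning point
    return tp

def rainflow(signal):
    """Simplified three-point rainflow counting.

    Returns a list of (range, count) pairs: count is 1.0 for closed
    cycles and 0.5 for half cycles taken from the residue.
    """
    stack, cycles = [], []
    for p in turning_points(signal):
        stack.append(p)
        while len(stack) >= 3:
            y = abs(stack[-3] - stack[-2])   # earlier (inner) range
            x = abs(stack[-2] - stack[-1])   # later (outer) range
            if x < y:
                break
            cycles.append((y, 1.0))          # closed cycle of range y
            del stack[-3:-1]                 # drop the two inner points
    # remaining turning points form half cycles
    for a, b in zip(stack, stack[1:]):
        cycles.append((abs(a - b), 0.5))
    return cycles
```

The `del stack[-3:-1]` step is exactly what makes RFC awkward as an objective function: the extracted cycle set depends discontinuously on the input signal, which is the non-convexity and non-differentiability referred to above.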
The block structure of the load signals is assumed from the beginning. This greatly reduces the complexity of the problem without shrinking the feasible set. We also optimize with respect to the actuators' positions, which makes it possible to take torques into account and thus extends the feasible set. As a result, the new model gives significantly better results compared with other approaches to test rig optimization.
Under certain conditions, the non-convex test rig problem is a union of convex problems on cones. Numerical methods for optimization usually need constraints and a starting point. We describe an algorithm that detects each cone and an interior point of it in polynomial time.
The test rig problem belongs to the class of bilevel programs. For every instance of the state vector, a sum of functions has to be maximized. We propose a new branch-and-bound technique that uses local maxima of every summand.
The recognition of day-to-day activities is still a very challenging and important research topic. During recent years, a lot of research has gone into designing and realizing smart environments in different application areas such as health care, maintenance, sports or smart homes. As a result, a large number of sensor modalities were developed, different types of activity and context recognition services were implemented and the resulting systems were benchmarked using state-of-the-art evaluation techniques. However, so far hardly any of these approaches have found their way into the market and consequently into the homes of real end-users on a large scale. The reason for this is that almost all systems have one or more of the following characteristics in common: expensive high-end or prototype sensors are used which are not affordable or reliable enough for mainstream applications; many systems are deployed in highly instrumented environments or so-called "living labs", which are far from real-life scenarios, and are often evaluated only in research labs; almost all systems are based on complex system configurations and/or extensive training data sets, which means that a large amount of data must be collected in order to install the system. Furthermore, many systems rely on user- and/or environment-dependent training, which makes it even more difficult to install them on a large scale. Besides, a standardized integration procedure for the deployment of services in existing environments and smart homes has still not been defined. As a matter of fact, service providers use their own closed systems, which are not compatible with other systems, services or sensors. It is clear that these points make it nearly impossible to deploy activity recognition systems in a real daily-life environment, to make them affordable for real users and to deploy them in hundreds or thousands of different homes.
This thesis works towards the solution of the above-mentioned problems. Activity and context recognition systems designed for large-scale deployment and real-life scenarios are introduced. The systems are based on low-cost, reliable sensors and can be set up, configured and trained with little effort, even by technical laymen. It is because of these characteristics that we call our approach "minimally invasive". As a consequence, the large amounts of training data usually required by many state-of-the-art approaches are not necessary. Furthermore, all systems were integrated unobtrusively into real-world or near-real-world environments and were evaluated under real-life as well as near-real-life conditions. The thesis addresses the following topics: First, a sub-room-level indoor positioning system is introduced. The system is based on low-cost ceiling cameras and a simple computer vision tracking approach. The problem of user identification is solved by correlating modes of locomotion patterns derived from the trajectories of unidentified objects and from on-body motion sensors. Afterwards, the issue of recognizing how, and for what, mainstream household devices have been used is considered. Based on a low-cost microphone, the water consumption of water taps can be approximated by analyzing plumbing noise. Besides that, the operating modes of mainstream electronic devices were recognized using rule-based classifiers, electric current features and power measurement sensors. As a next step, the difficulty of spotting subtle, barely distinguishable hand activities and the resulting object interactions within a data set containing a large amount of background data is addressed. The problem is solved by introducing an on-body core system which is configured by simple, one-time physical measurements and minimal data collections.
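The rule-based classification on electric current features mentioned above can be illustrated with a minimal sketch; the feature (RMS current over a window) and the thresholds are hypothetical and are not the values used in the thesis:

```python
def rms(samples):
    """Root-mean-square of a window of current samples (amperes)."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def operating_mode(samples, off_max=0.05, standby_max=0.4):
    """Classify a device's operating mode from one current window.

    Hypothetical rule set: RMS current below off_max amperes means
    'off', below standby_max means 'standby', otherwise 'active'.
    """
    i_rms = rms(samples)
    if i_rms < off_max:
        return "off"
    if i_rms < standby_max:
        return "standby"
    return "active"
```

The appeal of such rules in the setting described above is that they need no training data at all, only a one-time choice of thresholds per device class.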
The lack of large training sets is compensated by fusing the system with activity and context recognition systems that are able to reduce the search space observed. Amongst other systems, previously introduced approaches and ideas are revisited in this section. An in-depth evaluation shows the impact of each fusion procedure on the performance and run-time of the system. The approaches introduced are able to provide significantly better results than a state-of-the-art inertial system using large amounts of training data. The idea of using unobtrusive sensors has also been applied to the field of behavior analysis. Integrated smartphone sensors are used to detect behavioral changes of individuals due to medium-term stress periods. Behavioral parameters related to location traces, social interactions and phone usage were analyzed to detect significant behavioral changes of individuals during stress-free and stressful time periods. Finally, as a closing part of the thesis, a standardization approach for the integration of ambient intelligence systems (as introduced in this thesis) in real-life and large-scale scenarios is shown.
This dissertation focuses on the evaluation of the technical and environmental sustainability of water distribution systems based on scenario analysis. A decision support system is created to assist in the decision-making process and to visualize the results of the sustainability assessment for current and future populations and scenarios. First, a methodology is developed to assess the technical and environmental sustainability of the current and future water distribution system scenarios. Then, scenarios are produced to evaluate alternative solutions for the current water distribution system as well as future populations and water demand variations. Finally, a decision support system is proposed using a combination of several visualization approaches to increase the data readability and the robustness of the sustainability evaluations of the water distribution system.
The technical sustainability of a water distribution system is measured using the sustainability index methodology, which is based on the reliability, resiliency and vulnerability performance criteria. Hydraulic efficiency and water quality requirements are represented by the nodal pressure and water age parameters, respectively. The U.S. Environmental Protection Agency's EPANET software is used to conduct the hydraulic (i.e. nodal pressure) and water quality (i.e. water age) analysis in a case study. In addition, the environmental sustainability of a water network is evaluated using the "total fresh water use" and "total energy intensity" indicators. For each scenario, multi-criteria decision analysis is used to combine the technical and environmental sustainability criteria for the study area.
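The reliability, resiliency and vulnerability criteria behind such a sustainability index can be sketched as follows, using the widely cited Loucks-style definitions on a single time series; the aggregation into one index and the threshold choice are illustrative assumptions, not necessarily the thesis's exact formulation:

```python
def rrv_index(values, threshold):
    """Reliability, resiliency and vulnerability of a time series
    (e.g. nodal pressure) against a minimum acceptable threshold.

    A time step is satisfactory when value >= threshold.
    """
    n = len(values)
    sat = [v >= threshold for v in values]
    # reliability: fraction of time the system is satisfactory
    reliability = sum(sat) / n
    # resiliency: chance that a failure step is followed by recovery
    failures = sum(1 for s in sat if not s)
    recoveries = sum(1 for a, b in zip(sat, sat[1:]) if not a and b)
    resiliency = recoveries / failures if failures else 1.0
    # vulnerability: mean normalized deficit during failure steps
    deficits = [(threshold - v) / threshold
                for v, s in zip(values, sat) if not s]
    vulnerability = sum(deficits) / len(deficits) if deficits else 0.0
    # one common aggregation: geometric mean of the three criteria
    index = (reliability * resiliency * (1.0 - vulnerability)) ** (1 / 3)
    return reliability, resiliency, vulnerability, index
```

Run once per node and per scenario on the simulated pressure (or water age, with the inequality reversed) series, this yields the per-criterion values that the multi-criteria decision analysis then combines.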
The technical and environmental sustainability assessment methodology is first applied to the baseline scenario (i.e. the current water distribution system). Critical locations where hydraulic efficiency and water quality problems occur in the current system are identified. Two major scenario options are considered to increase the sustainability at these critical locations. These scenarios focus on creating alternative systems in order to test and verify the technical and environmental sustainability methodology rather than on obtaining the best solution for the current and future water distribution systems. The first scenario is a traditional approach to increasing hydraulic efficiency and water quality. This scenario includes using additional network components such as booster pumps, valves etc. The second scenario is based on using a reclaimed water supply to meet the non-potable water demand and the fire flow. The fire flow simulation is specifically included in the sustainability assessment since regulations have a significant impact on urban water infrastructure design. Eliminating the fire flow requirement from potable water distribution systems would assist in saving fresh water resources as well as in reducing detention times.
The decision support system is created to visualize the results of each scenario and to compare these results with each other effectively. The EPANET software is a powerful tool for conducting hydraulic and water quality analysis, but its visualization capabilities are limited for decision-support purposes. Therefore, in this dissertation, the hydraulic and water quality simulations are completed using the EPANET software and the results for each scenario are visualized by combining several visualization techniques in order to provide better data readability. The first technique introduced here uses small-multiple maps instead of animation to visualize the nodal pressure and water age parameters. This technique eliminates change blindness and allows easy comparison of time steps. In addition, a procedure is proposed to aggregate the nodes along the edges in order to simplify the water network. A circle view technique is used to visualize two values of a single parameter (i.e. the nodal pressure or water age). The third approach is based on fitting the water network into a grid representation, which eliminates the irregular geographic distribution of the nodes and improves the visibility of each circle view. Finally, a prototype of an interactive decision support tool is proposed for the current population and water demand scenarios. Interactive tools enable analysis of the aggregated nodes and provide information about the results of each of the current water distribution scenarios.
Influence of different gating scenarios on the resin injection process and its representation in simulation
(2014)
In the automotive industry, high-performance polymer composite structural parts are manufactured by Resin Transfer Molding (RTM), but the part costs are very high. These costs must be reduced substantially through process optimization to enable the widespread use of fiber-reinforced polymer composites. Process simulations play a decisive role here, since they can replace time-consuming and expensive practical trials. For this reason, this thesis investigated the potential of simulating the RTM process. The simulations were based on an extensive material-parameter study in which the permeability of textile preforms relevant to the automotive industry was examined in the unsheared and sheared state. This made it possible to evaluate the influence of draping on the flow simulation. In addition, a new method for determining the time-, cure- and temperature-dependent viscosity profiles of highly reactive resin systems was developed and applied. The flow simulation method was first successfully validated on a flat plate mold to show that the measured material parameters had been determined correctly.
To validate the simulation, a complex technology demonstrator mold (TTW) was developed. The design of its temperature control was supported by thermal simulations. Investigations of pronounced edge regions, as frequently found in automotive parts, showed that for edge radii < 5 mm, race-tracking of the resin system must be taken into account. Moreover, different gating scenarios could be studied by means of interchangeable gate runners.
Sensors in the TTW logged the process data, which was then compared with the simulations. The results show that simulating the filling process of a complex RTM mold is possible despite the large number of process influences. The deviations between simulation and experiment were in part below 15 %. This once more confirmed the reliability of the measured permeability and viscosity values. It also became apparent that the length of the gate runner has a significant influence on the process time, whereas its cross-section plays a subordinate role.
Far from growth centers, spatial-planning development axes and economic competitiveness, peripheralized areas are found in northern Thuringia and southern Saxony-Anhalt. The persistent transformation process there is marked by out-migration, a lack of investment and above-average unemployment figures. The dilemma is that these municipalities, characterized by decoupling, stigmatization and dependencies not of their own making, are unable to reinvent themselves through endogenous forces; such a reinvention would make regeneration possible and could ultimately revive a real-estate market that is currently unattractive to investors. These development paths, followed for more than 20 years, affect the settlement fabric, which in many places threatens to perforate. It must be noted that the process of decline is far from complete.
Social infrastructure buildings, such as former schools, day-care centers and hospitals, are particularly affected by these developments. Owing especially to the self-reinforcing effect of demographic change, they serve here as an object of urban-planning research, against the background of their possible revalorization as an inner-urban development strategy (adaptation) after these properties have lost their original use. The need for urban-planning action arises, among other things, from their frequently prominent urban locations, their role as rare built witnesses of their time, sometimes as part of an ensemble of cultural-historical value, and their function as landmarks within the overall urban or village structure.
The thesis identifies the new challenges that owners face in dealing with vacant social infrastructure buildings in peripheralized small and medium-sized towns, and critically reflects on the effectiveness of informal and formal planning instruments. Concrete proposals are made for how real-estate management and owner involvement should proceed in extremely quiet residential property markets. Furthermore, strategic approaches for administrative action are recommended that are tailored to these specific market conditions.
In addition to these analogies drawn from theory, the field experiment in the study region described above yielded operationalizable data through extensive surveys. From this density of information, valid statements emerged whose reliability fed into the development of a site-analysis database. Thus not only could the problem situation be objectively demonstrated, but the exploration also succeeded in developing a planning instrument that municipalities can handle and that is transferable to other places.
The use of void formers in reinforced concrete slabs saves concrete, steel and consequently weight. The material savings reduce the primary energy demand and the greenhouse gas emissions during production. Voided slabs therefore represent a more resource-efficient form of construction than conventional solid slabs. Owing to the significantly reduced self-weight and a comparatively small loss of stiffness, slabs with large spans can also be realized.
The individual load-bearing mechanisms of the slabs are in principle adversely affected by the void formers. This dissertation analyzes in detail the load-bearing capacity of voided slabs with flattened, rotationally symmetric void formers. Based on experimental and theoretical investigations, design concepts were developed for the flexural capacity, the shear capacity, the shear transfer across the interface joint and the local punching of the slab flange above the void formers. Applying these design concepts, voided slabs can be produced at the safety level required by the building authorities.
For the shear capacity of reinforced concrete slabs without shear reinforcement, no generally accepted, mechanically based design concept is currently available. The influence of the individual load-bearing mechanisms on failure was analyzed experimentally. For this purpose, tests were conducted with a relocated compression zone, with deactivated crack-surface interlock and with deactivated dowel action. The computational contribution of each mechanism to the total capacity was visualized and verified by recalculating tests on voided slabs and slabs with integrated service ducts using an existing mechanically based model. This contributes to a better understanding of shear capacity.
In 2006 Jeffrey Achter proved that the distribution of divisor class groups of degree 0 of function fields with a fixed genus and the distribution of eigenspaces in symplectic similitude groups are closely related to each other. Gunter Malle proposed that there should be a similar correspondence between the distribution of class groups of number fields and the distribution of eigenspaces in certain matrix groups. Motivated by these results and suggestions, we study the distribution of eigenspaces corresponding to the eigenvalue one in some special subgroups of the general linear group over factor rings of rings of integers of number fields, and derive some conjectural statements about the distribution of \(p\)-parts of class groups of number fields over a base field \(K_{0}\). Our main interest lies in the case that \(K_{0}\) contains the \(p\)th roots of unity, because in this situation the \(p\)-parts of class groups seem to behave differently than predicted by the popular conjectures of Henri Cohen and Jacques Martinet. In 2010, based on computational data, Malle succeeded in formulating a conjecture in the spirit of Cohen and Martinet for this case. Here, using our investigations of the distribution in matrix groups, we generalize Malle's conjecture to a more abstract level and provide theoretical support for these statements.
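The kind of eigenspace statistics studied here can be illustrated in the smallest nontrivial case: enumerating GL(2, F_p) by brute force and tallying the dimension of the eigenspace for the eigenvalue one (an illustrative computation only; the thesis works with far more general subgroups over factor rings of rings of integers):

```python
from itertools import product

def eigenspace_dim_distribution(p):
    """Tally dim ker(A - I) over all A in GL(2, F_p) by brute force."""
    counts = {0: 0, 1: 0, 2: 0}
    vectors = list(product(range(p), repeat=2))
    for a, b, c, d in product(range(p), repeat=4):
        if (a * d - b * c) % p == 0:
            continue  # singular matrix: not in GL(2, F_p)
        # size of the fixed space {v in F_p^2 : A v = v}
        fixed = sum(1 for x, y in vectors
                    if (a * x + b * y) % p == x
                    and (c * x + d * y) % p == y)
        # a subspace of F_p^2 with p^k elements has dimension k
        dim = {1: 0, p: 1, p * p: 2}[fixed]
        counts[dim] += 1
    return counts
```

For p = 3 the group has (9 - 1)(9 - 3) = 48 elements, exactly one of which (the identity) has a two-dimensional fixed space; the proportions of dimensions 0, 1 and 2 are the empirical analogue of the eigenspace distributions analyzed in the thesis.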