The classic approach in robust optimization is to optimize the solution with respect to the worst-case scenario. This pessimistic approach yields solutions that perform best if the worst scenario happens, but usually perform badly on average. A solution that optimizes the average performance, on the other hand, lacks a worst-case performance guarantee.
In practice it is important to find a good compromise between these two solutions. We propose to deal with this problem by considering it from a bicriteria perspective. The Pareto curve of the bicriteria problem visualizes exactly how costly it is to ensure robustness and helps to choose the solution with the best balance between expected and guaranteed performance.
Building upon a theoretical observation on the structure of Pareto solutions for problems with polyhedral feasible sets, we present a column generation approach that requires no direct solution of the computationally expensive worst-case problem. In computational experiments we demonstrate the effectiveness of both the proposed algorithm and the bicriteria perspective in general.
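To make the bicriteria viewpoint concrete, the following generic formulation (a sketch in our own notation, not quoted from the paper) contrasts the two criteria over a feasible set X and a finite scenario set U:

```latex
\min_{x \in X} \left( \frac{1}{|U|} \sum_{\xi \in U} f(x,\xi), \;\; \max_{\xi \in U} f(x,\xi) \right)
```

The first component measures the expected (average) performance, the second the worst-case guarantee; the Pareto curve of this problem is exactly the trade-off curve discussed above.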
We consider the problem of evacuating an urban area after a natural or man-made disaster. Such a scenario involves several planning aspects that are usually treated separately because of their computational complexity. These aspects include: Which shelters are used to accommodate evacuees? How to schedule public transport for transit-dependent evacuees? And how do public and individual traffic interact? Furthermore, besides evacuation time, the risk of the evacuation also needs to be taken into account.
We propose a macroscopic multi-criteria optimization model that includes all of these questions simultaneously. As a mixed-integer programming formulation cannot handle instances of real-world size, we develop a genetic algorithm of NSGA-II type that is able to generate feasible solutions of good quality in reasonable computation times.
We extend the applicability of these methods by also considering how to aggregate instance data, and how to generate solutions for the original instance starting from a reduced solution.
In computational experiments using real-world data modelling the cities of Nice in France and Kaiserslautern in Germany, we demonstrate the effectiveness of our approach and compare the trade-off between different levels of data aggregation.
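The selection step of an NSGA-II type algorithm rests on nondominated sorting. The following minimal Python sketch, with invented (evacuation time, risk) values and both objectives minimized, shows the basic dominance filter; it illustrates the concept only and is not the genetic algorithm developed in the paper.

```python
def nondominated(solutions, objectives):
    """Return the Pareto-optimal subset for minimized objectives.

    solutions  -- list of arbitrary solution objects
    objectives -- function mapping a solution to a tuple, e.g. (time, risk)
    """
    front = []
    for s in solutions:
        fs = objectives(s)
        dominated = False
        for t in solutions:
            ft = objectives(t)
            # t dominates s if it is no worse in every objective and better in one
            if all(a <= b for a, b in zip(ft, fs)) and any(a < b for a, b in zip(ft, fs)):
                dominated = True
                break
        if not dominated:
            front.append(s)
    return front

# illustrative usage with fictitious (evacuation time, risk) values
candidates = [("plan A", (95, 0.4)), ("plan B", (120, 0.2)), ("plan C", (130, 0.5))]
print(nondominated(candidates, objectives=lambda c: c[1]))  # plan C is dominated by plan A
```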
A new algorithm for optimization problems with three objective functions is presented which computes a representation for the set of nondominated points. This representation is guaranteed to have a desired coverage error and a bound on the number of iterations needed by the algorithm to meet this coverage error is derived. Since the representation does not necessarily contain nondominated points only, ideas to calculate bounds for the representation error are given. Moreover, the incorporation of domination during the algorithm and other quality measures are discussed.
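For orientation, the coverage error of a representation R of the nondominated set \(Y_N\) is commonly defined as follows (standard definition, given here for context; the paper's exact conventions may differ):

```latex
\operatorname{cov}(R) = \max_{y \in Y_N} \; \min_{r \in R} \; d(y, r)
```

so that R meets a desired coverage error \(\varepsilon\) whenever every nondominated point has a representative within distance \(\varepsilon\).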
A single facility problem in the plane is considered, where an optimal location has to be identified for each of finitely many time steps with respect to time-dependent weights and demand points. It is shown that the median objective can be reduced to a special case of the static multifacility median problem, such that results from the latter can be used to tackle the dynamic location problem. When using block norms as the distance measure between facilities, a Finite Dominating Set (FDS) is derived. For the special case with only two time steps, the resulting algorithm is analyzed with respect to its worst-case complexity. Due to the relation between dynamic location problems for T time periods and T-facility problems, this algorithm can also be applied to the static 2-facility location problem.
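A small Python sketch of the objective in question: one facility location per time step is evaluated against time-dependent weighted demand under the l1 norm (the simplest block norm), and candidates are enumerated over the grid spanned by the demand coordinates. Both the candidate grid and the relocation term coupling consecutive time steps are illustrative assumptions, not the FDS or the reduction derived in the paper.

```python
from itertools import product

def l1(p, q):
    """l1 distance, the simplest block norm."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def cost(path, demands, move_weight=1.0):
    """Weighted median cost of one facility location per time step.

    path        -- [(x_t, y_t)] facility location for each time step t
    demands     -- per time step: list of (weight, (x, y)) demand points
    move_weight -- illustrative coupling cost for relocating between steps
    """
    service = sum(w * l1(x, a)
                  for x, step in zip(path, demands) for w, a in step)
    moving = move_weight * sum(l1(p, q) for p, q in zip(path, path[1:]))
    return service + moving

# two time steps with time-dependent weights and demand points (made-up data)
demands = [
    [(1.0, (0, 0)), (2.0, (4, 1))],
    [(3.0, (4, 1)), (1.0, (1, 5))],
]
# candidate locations: grid spanned by the demand coordinates (illustrative only)
xs = sorted({a[0] for step in demands for _, a in step})
ys = sorted({a[1] for step in demands for _, a in step})
candidates = list(product(xs, ys))

best = min(product(candidates, repeat=len(demands)),
           key=lambda path: cost(path, demands))
print(best, cost(best, demands))
```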
We develop a framework for shape optimization problems under state equation constraints where both state and control are discretized by B-splines or NURBS. In other words, we use isogeometric analysis (IGA) for solving the partial differential equation and a nodal approach to change domains, where control points take the place of nodes and where thus a quite general class of functions for representing optimal shapes and their boundaries becomes available. The minimization problem is solved by a gradient descent method where the shape gradient is defined in isogeometric terms. This gradient is obtained following two schemes, optimize first, then discretize, and, conversely, discretize first, then optimize. We show that for isogeometric analysis the two schemes yield the same discrete system. Moreover, we also formulate shape optimization with respect to NURBS in the optimize-first ansatz, which amounts to finding optimal control points and weights simultaneously. Numerical tests illustrate the theory.
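As a rough illustration of the outer loop only, the Python sketch below performs gradient descent on control point coordinates with a black-box objective and a finite-difference gradient; the isogeometric shape gradient and the PDE constraint of the paper are not reproduced here, so everything below is an assumption-laden stand-in.

```python
import numpy as np

def descend(control_points, objective, step=0.1, h=1e-6, iters=50):
    """Gradient descent on B-spline/NURBS control points.

    control_points -- (n, 2) array; the design variables of the shape
    objective      -- callable mapping control points to a cost J(P); here a
                      black box standing in for the PDE-constrained shape
                      functional evaluated by an IGA solver
    The finite-difference gradient below is only a placeholder for the
    isogeometric shape gradient discussed in the paper.
    """
    P = control_points.astype(float).copy()
    for _ in range(iters):
        grad = np.zeros_like(P)
        base = objective(P)
        for idx in np.ndindex(P.shape):        # brute-force sensitivities
            Q = P.copy()
            Q[idx] += h
            grad[idx] = (objective(Q) - base) / h
        P -= step * grad                        # move the control net
    return P

# toy objective: pull the control polygon toward the unit circle
target = lambda P: np.sum((np.linalg.norm(P, axis=1) - 1.0) ** 2)
P0 = np.array([[2.0, 0.0], [0.0, 2.0], [-2.0, 0.0], [0.0, -2.0]])
print(descend(P0, target))
```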
Starting from the two-scale model for pH-taxis of cancer cells introduced in [1], we consider here an extension accounting for tumor heterogeneity w.r.t. treatment sensitivity and a treatment approach including chemo- and radiotherapy. The effect of peritumoral region alkalinization on such a therapeutic combination is investigated with the aid of numerical simulations.
We propose a model for acid-mediated tumor invasion involving two different scales: the microscopic one, for the dynamics of intracellular protons and their exchange with their extracellular counterparts, and the macroscopic scale of interactions between tumor cell and normal cell populations, along with the evolution of extracellular protons. We also account for the tactic behavior of cancer cells, the latter being assumed to bias their motion according to a gradient of extracellular protons (following [2,31] we call this pH-taxis). A time-dependent (and also time-delayed) carrying capacity for the tumor cells in response to the effects of acidity is considered as well. The global well-posedness of the resulting multiscale model is proved with a regularization and fixed-point argument. Numerical simulations are performed in order to illustrate the behavior of the model.
Minmax regret optimization aims at finding robust solutions that perform best in the worst case, compared to the respective optimum objective value in each scenario. Even for simple uncertainty sets like boxes, most polynomially solvable optimization problems have strongly NP-hard minmax regret counterparts. Thus, heuristics with performance guarantees can potentially be of great value, but only a few such guarantees exist.
A very easy but effective approximation technique is to compute the midpoint solution of the original optimization problem, which optimizes the average regret as well as the average nominal objective. It is a well-known result that the regret of the midpoint solution is at most 2 times the optimal regret. Apart from some academic instances showing that this bound is tight, most instances reveal a much better approximation ratio.
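A minimal Python sketch of the midpoint heuristic for the robust shortest path problem with interval edge costs (illustrative instance; this shows only the classical 2-approximation, not the sharpened instance-dependent guarantee introduced below). For a fixed path, its worst-case scenario sets the upper costs on the path's edges and the lower costs elsewhere.

```python
import heapq

def shortest_path(n, edges, cost, s, t):
    """Dijkstra on a directed graph; returns (length, set of edges on an s-t path)."""
    adj = {u: [] for u in range(n)}
    for (u, v) in edges:
        adj[u].append((v, (u, v)))
    dist, prev, heap = {s: 0.0}, {}, [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, e in adj[u]:
            nd = d + cost[e]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, e
                heapq.heappush(heap, (nd, v))
    path, v = set(), t
    while v != s:
        e = prev[v]
        path.add(e)
        v = e[0]
    return dist[t], path

def midpoint_solution_regret(n, edges, lo, hi, s, t):
    """Regret of the midpoint solution under interval edge costs [lo, hi]."""
    mid = {e: (lo[e] + hi[e]) / 2 for e in edges}
    _, X = shortest_path(n, edges, mid, s, t)                  # midpoint solution
    worst = {e: (hi[e] if e in X else lo[e]) for e in edges}   # worst scenario for X
    val_X = sum(worst[e] for e in X)
    opt, _ = shortest_path(n, edges, worst, s, t)
    return val_X - opt                                         # at most 2 * optimal regret

# tiny instance: two parallel s-t routes with interval costs (illustrative data)
edges = [(0, 1), (1, 3), (0, 2), (2, 3)]
lo = {(0, 1): 1, (1, 3): 1, (0, 2): 2, (2, 3): 2}
hi = {(0, 1): 5, (1, 3): 5, (0, 2): 3, (2, 3): 3}
print(midpoint_solution_regret(4, edges, lo, hi, 0, 3))
```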
We introduce a new lower bound for the optimal value of the minmax regret problem. Using this lower bound, we state an algorithm that gives an instance-dependent performance guarantee for the midpoint solution of combinatorial problems that is at most 2. The computational complexity of the algorithm depends on the minmax regret problem under consideration; we show that the sharpened guarantee can be computed in strongly polynomial time for several classes of combinatorial optimization problems.
To illustrate the quality of the proposed bound, we use it within a branch and bound framework for the robust shortest path problem. In an experimental study comparing this approach with a bound from the literature, we find a considerable improvement in computation times.
Cancer research is not only a fast growing field involving many branches of science, but also an intricate and diversified field rife with anomalies. One such anomaly is the consistent reliance of cancer cells on glucose metabolism for energy production even in a normoxic environment. Glycolysis is an inefficient pathway for energy production and is normally used during hypoxic conditions. Since cancer cells have a high demand for energy (e.g. for proliferation), it is somehow paradoxical for them to rely on such a mechanism. An emerging conjecture aiming to explain this behavior is that cancer cells preserve this aerobic glycolytic phenotype for its use in invasion and metastasis. We follow this hypothesis and propose a new model for cancer invasion, depending on the dynamics of extra- and intracellular protons, by building upon the existing ones. We incorporate random perturbations in the intracellular proton dynamics to account for uncertainties affecting the cellular machinery. Finally, we address the well-posedness of our setting and use numerical simulations to illustrate the model predictions.
In this paper we construct a numerical solver for the Saint Venant equations. Special attention is given to the balancing of the source terms, including the bottom slope and variable cross-sectional profiles. Therefore, a special discretization of the pressure law is used in order to transfer analytical properties to the numerical method. Based on this approximation, a well-balanced solver is developed, assuring the C-property and depth positivity. The performance of this method is studied in several test cases focusing on accurate capturing of steady states.
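For orientation, in the prismatic special case the Saint Venant (shallow water) system with bottom topography b reads

```latex
\partial_t h + \partial_x (hu) = 0, \qquad
\partial_t (hu) + \partial_x\!\left(hu^2 + \tfrac{1}{2} g h^2\right) = -\, g\, h\, \partial_x b,
```

and the C-property requires the discrete scheme to preserve the lake-at-rest steady state \(u = 0,\; h + b = \text{const}\) exactly; the paper treats the more general setting with variable cross-sectional profiles.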
Three-dimensional (3D) point data is used in industry for measurement and reverse engineering. Precise point data is usually acquired with triangulating laser scanners or high-precision structured light scanners. Lower-precision point data is acquired by real-time structured light devices or by stereo matching with multiple cameras. The basic principle of all these methods is the so-called triangulation of 3D coordinates from two-dimensional (2D) camera images.
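The triangulation principle for a rectified two-camera setup reduces to the classical depth-from-disparity relation, sketched below in Python (illustrative parameter values; the dissertation's four-camera GPU method is of course more involved):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic rectified two-camera triangulation: Z = f * B / d.

    disparity_px -- horizontal pixel offset of the same point in both images
    focal_px     -- focal length in pixels
    baseline_m   -- distance between the two camera centers in meters
    """
    if disparity_px <= 0:
        raise ValueError("point must be visible with positive disparity")
    return focal_px * baseline_m / disparity_px

# a larger allowed disparity range lets nearby points be measured more finely
print(depth_from_disparity(disparity_px=64, focal_px=1200, baseline_m=0.25))  # ~4.7 m
```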
This dissertation contributes a method for multi-camera stereo matching that uses a system of four synchronized cameras. A GPU-based stereo matching method is presented that achieves a high-quality reconstruction at interactive frame rates. Good depth resolution is achieved by allowing large disparities between the images. A multi-level approach on the GPU allows fast processing of these large disparities. In reverse engineering, hand-held laser scanners are used for scanning complex-shaped objects. The operator of the scanner can scan complex regions more slowly, multiple times, or from multiple angles to achieve a higher point density. Traditionally, computer-aided design (CAD) geometry is reconstructed in a separate step after the scanning. Errors or missing parts in the scan prevent a successful reconstruction. The contribution of this dissertation is an on-line algorithm that allows reconstruction during the scanning of an object. Scanned points are added to the reconstruction and improve it on-line. The operator can detect the areas in the scan where the reconstruction needs additional data.
First, the point data is thinned out using an octree based data structure. Local normals and principal curvatures are estimated for the reduced set of points. These local geometric values are used for segmentation using a region growing approach. Implicit quadrics are fitted to these segments. The canonical form of the quadrics provides the parameters of basic geometric primitives.
An improved approach uses so-called accumulated means of local geometric properties to perform segmentation and primitive reconstruction in a single step. Local geometric values can be added to and removed from these means on-line to obtain a stable estimate over a complete segment. By estimating the shape of the segment, it is decided which local areas are added to it. An accumulated score estimates the probability of a segment belonging to a certain type of geometric primitive. A boundary around the segment is reconstructed using a growing algorithm that ensures that the boundary is closed and avoids self-intersections.
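A minimal Python sketch of the local normal estimation step via PCA of a neighborhood's covariance; the involved sums can be maintained as running totals so that points can be added or removed on-line, which is our simplified reading of the accumulated means mentioned above.

```python
import numpy as np

def normal_from_neighbors(points):
    """Estimate a local surface normal as the smallest-variance direction (PCA).

    points -- (k, 3) array of a point's spatial neighbors
    """
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, 0]                     # normal = weakest direction

# noisy samples of the plane z = 0 (illustrative data)
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 50),
                       rng.uniform(-1, 1, 50),
                       rng.normal(0, 0.01, 50)])
print(normal_from_neighbors(pts))  # approximately (0, 0, +-1)
```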
In recent years, the field of polymer tribology has experienced a tremendous development, leading to an increased demand for highly sophisticated in-situ measurement methods. Therefore, advanced measurement techniques were developed and established in this study. Innovative approaches based on dynamic thermocouple, resistive electrical conductivity, and confocal distance measurement methods were developed in order to characterize in situ the temperature at sliding interfaces and the real contact area, as well as the thickness of transfer films. Although dynamic thermocouple and real contact area measurement techniques had already been used in similar applications for metallic sliding pairs, comprehensive modifications were necessary to meet the specific demands and characteristics of polymers and composites, since these have significantly different thermal conductivities and contact kinematics. Using tribologically optimized PEEK compounds as a reference, a new measurement and calculation model for the dynamic thermocouple method was set up. This method allows the determination of hot spot temperatures for PEEK compounds, and it was found that these can reach up to 1000 °C when short carbon fibers are present in the polymer. With regard to the non-isotropic characteristics of the polymer compound, the contact situation between short carbon fibers and the steel counterbody could be successfully monitored by applying a resistive measurement method for determining the real contact area. Temperature compensation approaches were investigated for determining the transfer film thickness, resulting in in-situ measurements with a resolution of ~0.1 μm. In addition to the successful implementation of the measurement systems, the failure mechanisms of the PEEK compound used were clarified. For the first time in polymer tribology, the behavior of the most interesting system parameters could be monitored simultaneously under increasing load conditions. The measurements showed an increasing friction coefficient, wear rate, transfer film thickness, and overall specimen temperature once the frictional energy exceeded the thermal transport capabilities of the specimen. In contrast, the real contact area between the short carbon fibers and the steel decreased due to the separating effect of the transfer film. Since the sliding contact became more and more matrix-dominated, the hot spot temperatures on the fibers dropped as well. The results of this failure mechanism investigation already demonstrate the opportunities that the new measurement techniques provide for a deeper understanding of tribological processes, enabling improvements in material composition and application design.
In the first part of this thesis we study algorithmic aspects of tropical intersection theory. We analyse how divisors and intersection products on tropical cycles can actually be computed using polyhedral geometry. The main focus is the study of moduli spaces, where the underlying combinatorics of the varieties involved allow a much more efficient way of computing certain tropical cycles. The algorithms discussed here have been implemented in an extension for polymake, a software for polyhedral computations.
In the second part we apply the algorithmic toolkit developed in the first part to the study of tropical double Hurwitz cycles. Hurwitz cycles are a higher-dimensional generalization of Hurwitz numbers, which count covers of \(\mathbb{P}^1\) by smooth curves of a given genus with a certain fixed ramification behaviour. Double Hurwitz numbers provide a strong connection between various mathematical disciplines, including algebraic geometry, representation theory and combinatorics. The tropical cycles have a rather complex combinatorial nature, so it is very difficult to study them purely "by hand". Being able to compute examples has been very helpful in coming up with theoretical results. Our main result states that all marked and unmarked Hurwitz cycles are connected in codimension one and that for a generic choice of simple ramification points the marked cycle is a multiple of an irreducible cycle. In addition we provide computational examples to show that this is the strongest possible statement.
This thesis, whose subject is located in the field of algorithmic commutative algebra and algebraic geometry, consists of three parts.
The first part is devoted to parallelization, a technique which allows us to take advantage of the computational power of modern multicore processors. First, we present parallel algorithms for the normalization of a reduced affine algebra A over a perfect field. Starting from the algorithm of Greuel, Laplagne, and Seelisch, we propose two approaches. For the local-to-global approach, we stratify the singular locus Sing(A) of A, compute the normalization locally at each stratum and finally reconstruct the normalization of A from the local results. For the second approach, we apply modular methods to both the global and the local-to-global normalization algorithm.
Second, we propose a parallel version of the algorithm of Gianni, Trager, and Zacharias for primary decomposition. For the parallelization of this algorithm, we use modular methods for the computationally hardest steps, such as for the computation of the associated prime ideals in the zero-dimensional case and for the standard bases computations. We then apply an innovative fast method to verify that the result is indeed a primary decomposition of the input ideal. This allows us to skip the verification step at each of the intermediate modular computations.
The proposed parallel algorithms are implemented in the open-source computer algebra system SINGULAR. The implementation is based on SINGULAR's new parallel framework which has been developed as part of this thesis and which is specifically designed for applications in mathematical research.
In the second part, we propose new algorithms for the computation of syzygies, based on an in-depth analysis of Schreyer's algorithm. Here, the main ideas are that we may leave out so-called "lower order terms" which do not contribute to the result of the algorithm, that we do not need to order the terms of certain module elements which occur at intermediate steps, and that some partial results can be cached and reused.
Finally, the third part deals with the algorithmic classification of singularities over the real numbers. First, we present a real version of the Splitting Lemma and, based on the classification theorems of Arnold, algorithms for the classification of the simple real singularities. In addition to the algorithms, we also provide insights into how real and complex singularities are related geometrically. Second, we explicitly describe the structure of the equivalence classes of the unimodal real singularities of corank 2. We prove that the equivalences are given by automorphisms of a certain shape. Based on this theorem, we explain in detail how the structure of the equivalence classes can be computed using SINGULAR and present the results in concise form. Probably the most surprising outcome is that the real singularity type \(J_{10}^-\) is actually redundant.
The ordered weighted averaging objective (OWA) is an aggregate function over multiple optimization criteria which has received increasing attention from the research community over the last decade. In contrast to an ordinary weighted sum, weights are attached to the ordered objective values (i.e., one weight for the largest value, one for the second-largest value, and so on). As this contains max-min or worst-case optimization as a special case, OWA can also be considered an alternative approach to robust optimization.
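For reference, the standard OWA definition over objective values \(f_1(x), \dots, f_n(x)\) is

```latex
\mathrm{OWA}_w(x) = \sum_{i=1}^{n} w_i\, f_{\sigma(i)}(x), \qquad
f_{\sigma(1)}(x) \ge f_{\sigma(2)}(x) \ge \dots \ge f_{\sigma(n)}(x),
```

where \(\sigma\) sorts the objective values in nonincreasing order; the weight vector \(w = (1, 0, \dots, 0)\) recovers the worst-case objective and \(w = (1/n, \dots, 1/n)\) the average.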
For linear programs with OWA objective, compact reformulations exist, which result in extended linear programs. We present new such reformulation models with reduced size. A computational comparison indicates that these formulations improve solution times.
Annual Report 2013
(2014)
Annual Report, Jahrbuch AG Magnetismus
Anorganisch-organische Hybridmaterialien basierend auf photolumineszierenden Sol-Gel-Vorstufen
(2014)
Over the past decades, periodically structured organosilicas (PMOs) and porous metal-organic frameworks (MOFs) have proven to be extremely versatile materials with interesting physical and chemical properties. Besides their application in catalysis and gas adsorption, various hybrid materials have more recently also been investigated with regard to photoluminescence, energy conversion, and semiconductor properties. The range of possible applications extends from sensors, coatings, and decorative ceramics to electronic components such as transistors (OFETs) and optical fibers (POFs). In the present work, building on the research known to date, new approaches for the preparation of inorganic-organic hybrid materials, in particular micro- and mesoporous organosilicas, are presented. The starting point for novel PMOs and other organically modified mesoporous silicas of the MCM-41 or SBA-15 type is the synthesis of the organosilanes presented here for the first time, which, depending on the intended use, carry one or more hydrolyzable trialkoxysilyl functions. While alkylsiloxanes bearing pyrene, acridone, or dithien-2-ylphenothiazine as a fluorescent end group can be prepared via "classical" organic synthesis routes such as the formation of amide and sulfonamide functions, complex functionalized arylsiloxanes cannot be obtained via the otherwise customary Grignard reaction. Within this work, a three-step reaction sequence is presented by means of which strongly fluorescent, arylene-bridged trimethoxysilylthiophenes become accessible in very good to quantitative yields. The first step in the preparation of the target molecules is the very fast and efficient Suzuki-Miyaura coupling of the aryl iodides or bromides with thiophene-2-boronic acid in basic alcohol/water mixtures. This C-C coupling is followed by the Wohl-Ziegler halogenation of the thienyl residues in the C-H-acidic 5-position using N-bromo- or N-iodosuccinimide. The final Pd(0)-mediated formation of C-Si bonds, which delivers the desired arylsiloxane very selectively when a Buchwald-Hartwig ligand is employed, ultimately allows a straightforward work-up of the hydrolysis-sensitive products. Neither with the pyrene-containing organosilanes nor with the thiophene-containing precursors can mesoporous materials be obtained in template-directed sol-gel approaches. In most cases, the reason is presumably the aggregation tendency of the rather large aromatic chromophores and their associated poor water solubility. In the case of the thiophene-substituted arenes, the protodesilylation of the chromophore to the reduced arene predicted by Inagaki et al. was additionally observed in alkaline medium. However, by transesterification of the thienylsiloxanes in triethylene glycol monomethyl ether (TGM), stock solutions of the chromophores that are readily miscible with water can be produced. Although ordered organosilicas are likewise not accessible from these solutions at low dilution, transparent xerogels were prepared in a one-pot reaction under acid catalysis which luminesce intensely even at minimal dye concentrations of about 6·10^-6 mol/g and exhibit specific BET surface areas of up to 360 m²/g.
The CIE color coordinates of the resulting fluorogels were analyzed on the basis of their solid-state fluorescence spectra, revealing that organosilicas with nearly "ideal" white emission can be obtained by mixing chromophores with different emission colors. A transesterified, silylated derivative of 5,8-dithien-2-ylquinoxaline moreover proves to be an intrinsically white photoluminescent molecule that emits light with a color temperature of about 5,500 K close to the black-body curve. The amorphous fluorogels presented herein are of particular interest with regard to the "vitrification" of dyes for lighting applications. In contrast to the described attempts to prepare novel PMOs, both the base-catalyzed preparation of an acridone-containing MCM-41 derivative and the NH4F-mediated synthesis of a phenothiazine-containing SBA-15 analogue at strongly acidic pH were successful. Both materials exhibit high specific BET surface areas as well as an ordered hexagonal pore structure and were characterized, among other methods, by powder X-ray diffraction (XRD), N2 physisorption measurements (BET), and solid-state NMR spectroscopy (MAS-NMR). By means of different additives, the absorption and fluorescence properties of these solids can be modified reversibly or irreversibly. For instance, the intense bluish fluorescence of the immobilized acridone chromophore can be shifted bathochromically by Δλ = 20 nm toward the green by complexation of the carbonyl function with strongly Lewis-acidic metal salts such as Sc(III) or Bi(III) triflate. The phenothiazine-containing organosilica gel, in whose pores both the chromophores and organic ammonium species are covalently anchored, is quasi-reversibly oxidized by treatment with NOBF4, whereby paramagnetic, strongly colored radical cations form inside the solid. Both the oxidation and the re-reduction of the material with ascorbic acid were monitored by electron spin resonance (ESR), infrared (ATR-IR), and fluorescence spectroscopy. The characterization of the organic chromophores and of the hybrid materials based on them opens up connections to further classes of compounds and their possible applications. The thienyl halides described above can be converted into the corresponding n-alkyl esters by means of a Heck carboalkoxylation. The synthesis of the MOF linkers derived therefrom, as well as of strongly fluorescent phosphonates for the coating of ceramic surfaces, was pursued in a collaboration with Dr. E. Keceli and is also described within this work. The same applies to the investigation of the band gaps of various thienylated phenothiazines, which could serve as model compounds for the development of dye-based organic UV solar cells (UV-DSCs). The experiments required for this are the subject of an ongoing collaboration with the Ziegler group (Fachbereich Physik, TU Kaiserslautern). A future goal of this research is, taking into account the results presented in this work, to develop both polymers based on p-doped phenothiazines and further hybrid silicas with increased dye content and improved thermal and mechanical stability.
In the materials synthesis, the influence of parameters such as pre-formation time, pH value, and the nature of the sol-gel precursors employed should in particular be examined more closely in order to arrive, if possible, at a systematic understanding of the structure formation of the organosilicas presented herein.
This thesis discusses several applications of computational topology to the visualization of scalar fields. Scalar field data come from different measurements and simulations. The intrinsic properties of this kind of data that make its visualization a complicated task are its large size and the presence of noise. Computational topology is a powerful tool for automatic feature extraction, which allows the user to interpret the information contained in the dataset more efficiently. Utilizing it, the main purpose of scientific visualization, namely extracting knowledge from data, becomes a more convenient task.
Volume rendering is a class of methods designed for the realistic visual representation of 3D scalar fields. It is used in a wide range of applications with different data sizes, noise rates, and requirements on interactivity and flexibility. At the moment there is no known technique that can meet the needs of every application domain; therefore, the development of methods solving specific problems is required. One such algorithm, designed for rendering noisy data with high frequencies, is presented in the first part of this thesis. The method works with multidimensional transfer functions and is especially suited for functions exhibiting sharp features. Compared with known methods, the presented algorithm achieves better visual quality and faster performance in the presence of such features. An improvement of the method utilizing a topological theory, Morse theory, and a topological construct, the Morse-Smale complex, is also presented in this part of the thesis. The improvement allows for a performance speedup at a small precomputation and memory cost.
The usage of topological methods for feature extraction on a real-world dataset often results in a very large feature space, which easily leads to information overflow. Topology simplification is designed to reduce the number of features and allow a domain expert to concentrate on the most important ones. In terms of Morse theory, features are represented by critical points. The importance measure usually used for removing critical points is called homological persistence: critical points are cancelled pairwise according to their homological persistence value. In the presence of outlier-like noise, homological persistence has a clear drawback: the outliers are assigned a high importance value and are therefore not removed. In the second part of this thesis, a new importance measure is presented which is especially suited for data with outliers. This importance measure is called scale space persistence. The algorithm for the computation of this measure is based on scale space theory, known from the area of computer vision. The development of a critical point in scale space gives information about its spatial extent, so outliers can be distinguished from other critical points. The usage of the presented importance measure is demonstrated on a real-world application, crater identification on the surface of Mars.
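To make the cancellation order concrete, the following self-contained Python sketch computes 0-dimensional sublevel-set persistence pairs of a 1D signal with a union-find merge (a toy illustration of homological persistence, not the algorithm used in the thesis): each local minimum gives birth to a component, and when two components merge the younger one dies. A narrow but deep outlier indeed obtains a large persistence value, which is exactly the drawback the scale space measure addresses.

```python
def persistence_pairs(values):
    """0-dimensional sublevel-set persistence of a 1D sequence (union-find sketch).

    Each local minimum gives birth to a component; when components merge,
    the younger ones die.  The (birth, death) gap is the persistence value
    used to order critical point cancellations.
    """
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    comp = [None] * n            # union-find parent, or None if not yet processed
    pairs = []

    def find(i):
        while comp[i] != i:
            comp[i] = comp[comp[i]]
            i = comp[i]
        return i

    for i in order:
        comp[i] = i
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < n and comp[j] is not None]
        roots = {find(j) for j in neighbors}
        if roots:
            # merge into the oldest (lowest-valued) component; younger ones die here
            oldest = min(roots, key=lambda r: values[r])
            for r in roots - {oldest}:
                pairs.append((values[r], values[i]))   # (birth, death)
            for j in neighbors + [i]:
                comp[find(j)] = oldest
    return pairs  # the component of the global minimum never dies

signal = [0.1, 0.9, 0.2, 0.4, 0.05, 0.8, 0.3]
print(persistence_pairs(signal))  # [(0.2, 0.4), (0.3, 0.8), (0.1, 0.9)]
```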
The third part of this work presents a system for general interactive topology analysis and exploration. The development of such a system is motivated by the fact that topological methods are often considered complicated and hard to understand, because applying topology to visualization requires a deep understanding of the mathematical background behind it. A domain expert exploring the data using topology for feature extraction needs an intuitive way to steer the exploration process. The presented system is based on the intuitive notion of a scene graph, where the user can choose and place component blocks to achieve an individual result. This way the domain expert can extract more knowledge from given data, independently of the application domain. The tool allows the calculation and simplification of the underlying topological structure, the Morse-Smale complex, as well as the visualization of parts of it. The system also includes a simple generic query language to extract different substructures of the topological structure at different levels of the hierarchy.
The fourth part of this dissertation concentrates on an application of computational geometry to the quality assessment of a triangulated surface. Quality assessment of a triangulation is called surface interrogation and is aimed at revealing intrinsic irregularities of a surface. Curvature and continuity are the properties required to design a visually pleasing geometric object. For example, the surface of a manufactured body usually should be convex, without bumps or wiggles. Conventional rendering methods hide the regions of interest because of smoothing or interpolation. Two new methods are presented here: curvature estimation using local fitting with Bézier patches, and the computation of reflection lines for the visual representation of continuity; both are specially designed for assessment problems. The examples and comparisons presented in this part of the thesis demonstrate the benefits of the introduced algorithms. The methods are also well suited for the concurrent visualization of results from simulation and surface interrogation, revealing possible intrinsic relationships between them.
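A small Python sketch of curvature estimation from a local least-squares fit, using a quadratic height function as a simpler stand-in for the Bézier patches used in the thesis (the sample data and the sphere radius are invented for illustration):

```python
import numpy as np

def curvatures_from_patch(neighbors):
    """Gaussian and mean curvature at a point from a local least-squares fit.

    neighbors -- (k, 3) samples in a frame where the point sits at the origin
                 and z is roughly the normal direction; a quadratic height
                 function z = ax^2 + bxy + cy^2 + dx + ey + f is fitted as a
                 simple stand-in for a Bezier patch.
    """
    x, y, z = neighbors[:, 0], neighbors[:, 1], neighbors[:, 2]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]
    fx, fy, fxx, fxy, fyy = d, e, 2 * a, b, 2 * c
    w = 1 + fx ** 2 + fy ** 2
    K = (fxx * fyy - fxy ** 2) / w ** 2                                 # Gaussian
    H = ((1 + fy ** 2) * fxx - 2 * fx * fy * fxy + (1 + fx ** 2) * fyy) / (2 * w ** 1.5)
    return K, H

# samples of a sphere of radius 2 near its lowest point (expected K = 0.25, H = 0.5)
rng = np.random.default_rng(1)
u = rng.uniform(-0.3, 0.3, (200, 2))
pts = np.column_stack([u, 2 - np.sqrt(4 - u[:, 0] ** 2 - u[:, 1] ** 2)])
print(curvatures_from_patch(pts))
```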
Regarding the anchorage of stirrup legs with 90° hooks in the case of fire, only contradictory statements exist so far. While the commented edition of EC2 declares 90° hooks unsuitable for requirements above R90, Heft 600 of the DAfStB contradicts this statement and declares 90° hooks suitable.
The aim of the present work is to investigate the load-bearing behavior of 90° and 135° hooks under fire exposure, both in cracked and in uncracked cross-sections.
Significant differences in load-bearing behavior were demonstrated, and recommendations for practice were developed.
In Germany, bridges and other engineering structures along roads must undergo a close-up (hands-on) inspection at regular intervals of six years. This obligation applies to all road maintenance authorities. In the process, all relevant damage is recorded, assessed, and documented. Depending on the type of damage, further investigations within the scope of an object-specific damage analysis may become necessary. To avoid causing further damage to the structure, non-destructive testing methods should be used for on-site investigations.
To ensure that structural inspections can be carried out to a high quality standard, it is important to offer appropriate training and continuing education. For quite some time, corresponding courses have been offered by the "Verein zur Förderung der Qualitätssicherung und Zertifizierung der Aus-/Fortbildung von Ingenieurinnen/Ingenieuren der Bauwerksprüfung" (VFIB).