## Doctoral Thesis

### Refine

#### Faculty / Organisational entity

- Fachbereich Chemie (232)
- Fachbereich Mathematik (175)
- Fachbereich Maschinenbau und Verfahrenstechnik (115)
- Fachbereich Biologie (84)
- Fachbereich Informatik (80)
- Fachbereich ARUBI (68)
- Fachbereich Elektrotechnik und Informationstechnik (52)
- Fachbereich Bauingenieurwesen (20)
- Fachbereich Sozialwissenschaften (18)
- Fachbereich Physik (17)

#### Document Type

- Doctoral Thesis (875)

#### Keywords

- Visualisierung (12)
- Phasengleichgewicht (10)
- Modellierung (9)
- Simulation (8)
- Apoptosis (7)
- Katalyse (7)
- Flüssig-Flüssig-Extraktion (6)
- Mobilfunk (6)
- Polyphenole (6)

- A Viscosity Adaptive Lattice Boltzmann Method (2015)
- The present thesis describes the development and validation of a viscosity adaption method for the numerical simulation of non-Newtonian fluids on the basis of the Lattice Boltzmann Method (LBM), as well as the development and verification of the related software bundle SAM-Lattice. By now, Lattice Boltzmann Methods are established as an alternative approach to classical computational fluid dynamics methods. The LBM has been shown to be an accurate and efficient tool for the numerical simulation of weakly compressible or incompressible fluids. Fields of application range from turbulent simulations and thermal problems to acoustic calculations, among others. The transient nature of the method and the need for a regular, grid-based, non-body-conformal discretization make the LBM ideally suited for simulations involving complex solids. Such geometries are common, for instance, in the food processing industry, where fluids are mixed by static mixers or agitators. Those fluid flows are often laminar and non-Newtonian. This work is motivated by the immense practical use of the Lattice Boltzmann Method, which is limited by stability issues. The stability of the method is mainly influenced by the discretization and the viscosity of the fluid. Thus, simulations of non-Newtonian fluids, whose kinematic viscosity depends on the shear rate, are problematic. Several authors have shown that the LBM is capable of simulating such fluids. However, the vast majority of the simulations in the literature are carried out for simple geometries and/or moderate shear rates, where the LBM is still stable. Special care has to be taken in practical non-Newtonian Lattice Boltzmann simulations to keep them stable. A straightforward way is to truncate the modeled viscosity range by numerical stability criteria. This is an effective approach, but from the physical point of view the viscosity bounds are chosen arbitrarily.
Moreover, these bounds depend on and vary with the grid and time step size and, therefore, with the simulation Mach number, which is freely chosen at the start of the simulation. Consequently, the modeled viscosity range may not fit the actual range of the physical problem, because the correct simulation Mach number is unknown a priori. A way around this is to perform precursor simulations on a fixed grid to determine a feasible time step size and simulation Mach number, respectively. These precursor simulations can be time consuming and expensive, especially for complex cases and a number of operating points. This makes the LBM unattractive for practical simulations of non-Newtonian fluids. The essential novelty of the method developed in the course of this thesis is that the numerically modeled viscosity range is consistently adapted to the viscosity range actually exhibited by the physical problem, through a change of the simulation time step and the simulation Mach number, respectively, while the simulation is running. The algorithm is robust, independent of the Mach number the simulation was started with, and applicable to stationary as well as transient flows. The method for the viscosity adaption will be referred to as the "viscosity adaption method (VAM)", and its combination with the LBM leads to the "viscosity adaptive LBM (VALBM)". Besides the introduction of the VALBM, a goal of this thesis is to offer assistance, in the spirit of a theory guide, to students and assistant researchers concerning the theory of the Lattice Boltzmann Method and its implementation in SAM-Lattice. In Chapter 2, the mathematical foundation of the LBM is given and the route from the BGK approximation of the Boltzmann equation to the Lattice Boltzmann (BGK) equation is delineated in detail. The derivation is restricted to isothermal flows. Restrictions of the method, such as the limitation to low Mach number flows, are highlighted, and the accuracy of the method is discussed.
SAM-Lattice is a C++ software bundle developed by the author and his colleague Dipl.-Ing. Andreas Schneider. It is a highly automated package for the simulation of isothermal flows of incompressible or weakly compressible fluids in 3D on the basis of the Lattice Boltzmann Method. At the time of writing of this thesis, SAM-Lattice comprises five components. The main components are the highly automated lattice generator SamGenerator and the Lattice Boltzmann solver SamSolver. Postprocessing is done with ParaSam, which is our extension of the open source visualization software ParaView. Additionally, domain decomposition for MPI parallelism is done by SamDecomposer, which makes use of the graph partitioning library MeTiS. Finally, all of these components can be controlled through a user-friendly GUI (SamLattice), implemented by the author using Qt, which includes features to visually track output data. In Chapter 3, fundamental aspects of the implementation of the main components, including the corresponding flow charts, are discussed. Further details of the implementation are given in the comprehensive programmer's guides to SamGenerator and SamSolver. In order to ensure the functionality of the implementation of SamSolver, the solver is verified in Chapter 4 for Stokes's first problem, the suddenly accelerated plate, and for Stokes's second problem, the oscillating plate, both for Newtonian fluids. Non-Newtonian fluids are modeled in SamSolver with the power-law model according to Ostwald-de Waele. The implementation for non-Newtonian fluids is verified for the Hagen-Poiseuille channel flow in conjunction with a convergence analysis of the method. At the same time, the local grid refinement as implemented in SamSolver is verified. Finally, higher order boundary conditions are verified for the 3D Hagen-Poiseuille pipe flow for both Newtonian and non-Newtonian fluids. In Chapter 5, the theory of the viscosity adaption method is introduced.
For the adaption process, a target collision frequency or target simulation Mach number must be chosen and the distributions must be rescaled according to the modified time step size. A convenient choice is one of the stability bounds. The time step size for the adaption step is deduced from the target collision frequency \(\Omega_t\) and the currently minimal or maximal shear rate in the system, while obeying auxiliary conditions on the simulation Mach number. The adaption is done in the collision step of the Lattice Boltzmann algorithm. We use the transformation matrices of the MRT model to map from distribution space to moment space and vice versa. The actual scaling of the distributions is conducted on the back mapping, because we use the transformation matrix on the basis of the new adaption time step size. This is followed by an additional rescaling of the non-equilibrium part of the distributions, owing to the form of the definition of the discrete stress tensor in the LBM context. For that reason it is clear that the VAM is applicable to the SRT model as well as the MRT model, with virtually no extra cost in the latter case. Also in Chapter 5, the multi-level treatment is discussed. Depending on the target collision frequency and the target Mach number, the VAM can be used to optimally exploit the viscosity range that can be modeled within the stability bounds, or it can be used to drastically accelerate the simulation. This is shown in Chapter 6. The viscosity adaptive LBM is verified in the stationary case for the Hagen-Poiseuille channel flow and in the transient case for the Womersley flow, i.e., the pulsatile 3D Hagen-Poiseuille pipe flow. Although the VAM is used here for fluids that can be modeled with the power-law approach, the implementation of the VALBM is straightforward for other non-Newtonian models, e.g., the Carreau-Yasuda or Cross model.
In the same chapter, the VALBM is validated for the case of a propeller viscosimeter developed at the chair SAM. To this end, experimental data of the torque on the impeller for three shear-thinning non-Newtonian liquids serve for the validation. The VALBM shows excellent agreement with the experimental data for all of the investigated fluids and at every operating point. For comparison, a series of standard LBM simulations is carried out with different simulation Mach numbers, which partly show errors of several hundred percent. Moreover, in Chapter 7, a sensitivity analysis of the parameters used within the VAM is conducted for the simulation of the propeller viscosimeter. Finally, the accuracy of non-Newtonian Lattice Boltzmann simulations with the SRT and the MRT model is analyzed in detail. Previous work for Newtonian fluids indicates that, depending on the numerical value of the collision frequency \(\Omega\), additional artificial viscosity is introduced by the finite difference scheme, which negatively influences the accuracy. For the non-Newtonian case, an error estimate in the form of a functional is derived on the basis of a series expansion of the Lattice Boltzmann equation. This functional can be solved analytically for the case of the Hagen-Poiseuille channel flow of non-Newtonian fluids. The estimate of the error minimum is excellent in regions where the \(\Omega\) error is the dominant source of error, as opposed to the compressibility error. The result of this dissertation is a verified and validated software bundle on the basis of the viscosity adaptive Lattice Boltzmann Method. The work restricts itself to the simulation of isothermal, laminar flows with small Mach numbers. As further research goals, the testing of the VALBM with the minimal error estimate and the investigation of the VALBM for turbulent flows are suggested.
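
The relations the adaption idea builds on are standard LBM identities, so a compact sketch may help readers new to the method. The snippet below is an illustrative Python sketch, not the SAM-Lattice implementation: it maps a power-law (Ostwald-de Waele) viscosity to the BGK collision frequency via \(\nu = c_s^2(1/\Omega - 1/2)\) in lattice units, shows the naive truncation to an arbitrary stability window, and the VAM-style choice of a time step that pins the currently extreme shear rate to a target collision frequency. All parameter values (K, n, the \(\Omega\) bounds) are hypothetical.

```python
# Illustrative sketch in lattice units with c_s^2 = 1/3; not the thesis code.

CS2 = 1.0 / 3.0  # squared lattice speed of sound

def power_law_viscosity(shear_rate, K=0.5, n=0.6):
    """Ostwald-de Waele model: nu = K * gamma^(n-1) (shear thinning for n < 1)."""
    return K * shear_rate ** (n - 1.0)

def omega_from_viscosity(nu_lattice):
    """Invert nu_lattice = c_s^2 * (1/omega - 1/2) for the collision frequency."""
    return 1.0 / (nu_lattice / CS2 + 0.5)

def truncated_omega(nu_lattice, omega_min=0.55, omega_max=1.95):
    """Naive stabilization: clip omega to an arbitrarily chosen stability window."""
    return min(max(omega_from_viscosity(nu_lattice), omega_min), omega_max)

def adapted_time_step(extreme_shear_rate, omega_target, dx, K=0.5, n=0.6):
    """VAM idea, schematically: choose dt so that the currently extreme physical
    viscosity maps exactly onto the target collision frequency omega_target,
    using the scaling nu_lattice = nu_physical * dt / dx^2."""
    nu_target_lattice = CS2 * (1.0 / omega_target - 0.5)
    nu_physical = power_law_viscosity(extreme_shear_rate, K, n)
    return nu_target_lattice * dx * dx / nu_physical
```

The part the sketch deliberately omits is what the abstract describes in detail: rescaling the (non-equilibrium) distributions consistently after the time step changes.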

- A Consistent Large Eddy Approach for Lattice Boltzmann Methods and its Application to Complex Flows (2015)
- Lattice Boltzmann Methods have been shown to be promising tools for solving fluid flow problems. This is related to the advantages of these methods, among them the simplicity of handling complex geometries and the high efficiency in calculating transient flows. Lattice Boltzmann Methods are mesoscopic methods based on discrete particle dynamics. This is in contrast to conventional Computational Fluid Dynamics methods, which are based on the solution of the continuum equations. Calculations of turbulent flows in engineering generally depend on modeling, since resolving all turbulent scales is, and will remain in the near future, far beyond the computational possibilities. One of the most promising modeling approaches is the large eddy simulation, in which the large, inhomogeneous turbulence structures are directly computed and the smaller, more homogeneous structures are modeled. In this thesis, a consistent large eddy approach for the Lattice Boltzmann Method is introduced. This large eddy model includes, besides a subgrid scale model, appropriate boundary conditions for wall-resolved and wall-modeled calculations. It also provides conditions for turbulent domain inlets. For the case of wall-modeled simulations, a two-layer wall model is derived in the Lattice Boltzmann context. Turbulent inlet conditions are achieved by means of a synthetic turbulence technique within the Lattice Boltzmann Method. The proposed approach is implemented in the Lattice Boltzmann based CFD package SAM-Lattice, which has been created in the course of this work. SAM-Lattice is capable of calculating incompressible or weakly compressible, isothermal flows of engineering interest in complex three-dimensional domains. Special design targets of SAM-Lattice are a high degree of automation and high performance. Validation of the suggested large eddy Lattice Boltzmann scheme is performed for pump intake flows, which have not yet been treated by LBM.
This is despite the fact that the method is well suited to this kind of vortical flow in complicated domains; in general, applications of LBM to hydrodynamic engineering problems are rare. The results of the pump intake validation cases reveal that the proposed numerical approach is able to represent the very complex flows in the intakes both qualitatively and quantitatively. The findings provided in this thesis can serve as the basis for a broader application of LBM to hydrodynamic engineering problems.
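
For readers unfamiliar with how a subgrid scale model enters an LBM solver, a common route (the abstract does not state which closure the thesis uses, so this is only a generic sketch) is a Smagorinsky eddy viscosity added to the molecular viscosity and folded into the collision frequency. A minimal sketch in lattice units, with a hypothetical Smagorinsky constant:

```python
# Generic Smagorinsky closure for an LBM collision step (schematic sketch;
# not necessarily the model used in the thesis). Lattice units, c_s^2 = 1/3.

def smagorinsky_omega(nu_molecular, strain_rate_mag, dx=1.0, c_smag=0.17):
    """Effective collision frequency from nu_total = nu_0 + (C_s*dx)^2 * |S|,
    using nu = c_s^2 * (1/omega - 1/2), i.e. omega = 1 / (3*nu_total + 0.5)."""
    nu_total = nu_molecular + (c_smag * dx) ** 2 * strain_rate_mag
    return 1.0 / (3.0 * nu_total + 0.5)
```

In practice the strain rate magnitude |S| is obtained locally from the non-equilibrium moments, which is one reason this closure fits the LBM collision step so naturally.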

- New N,N,P-Ligands and Their Heterobimetallic Complexes (2015)
- The aim of this work was to synthesize and characterize new bidentate N,N,P-ligands and their corresponding heterobimetallic complexes. These bidentate pyridylpyrimidine aminophosphine ligands were synthesized by ring closure of two different enaminones (3-(dimethylamino)-1-(pyridine-2-yl)-prop-2-en-1-one or 3-(dimethylamino)-1-(pyridine-2-yl)-but-2-en-1-one) with an excess of guanidinium salts in the presence of base. The novel phosphine-functionalized guanidinium salts were prepared from 2-(diphenylphosphinyl)ethylamine or 3-(diphenylphosphinyl)propylamine. These bidentate N,N,P-ligands contain hard and soft donor sites, which allows the coordination of two different metal centers and the formation of bimetallic complexes. Such bimetallic complexes can exhibit unique behavior as a result of cooperation between the two metal atoms. First, the gold(I) complexes of all four ligands were synthesized. The gold center coordinates only to the phosphorus atom, as proved by X-ray crystallography and 31P NMR spectroscopy. In addition to the gold(I) monometallic complexes, a trans-coordinated rhodium complex of the (2-amino)pyridylpyrimidine aminophosphine ligand was successfully prepared. The characterization of this complex was achieved by NMR and IR spectroscopy. Reacting the mono gold(I) complexes with different metal salts such as Pd(PhCN)2Cl2, ZnCl2 and the [Ru(p-cymene)Cl2] dimer gave the target heterobimetallic complexes. The second metal center coordinates to the N,N donor site, as proved by NMR spectroscopy and ESI-MS measurements. The Au(I) and Au-Zn complexes of the N,N,P-ligands were examined as catalysts for the hydroamidation of cyclohexene with p-toluenesulfonamide. They showed no activity under the tested conditions. Further studies are necessary to understand the catalytic activities and the cooperativity between the two metal atoms.
In addition, bi- and trimetallic complexes with the rhodium compound could be synthesized and tested in different organic transformations. Furthermore, the synthesis of the chiral hydroxyl[2.2]paracyclophane substituted with five different aminopyrimidines was accomplished. These aminopyrimidine ligands were synthesized by a cyclization reaction of the hydroxyl[2.2]paracyclophane-substituted enaminone with an excess of the corresponding guanidinium salts under basic conditions. In the last part of this work, kinetic studies of the cyclopalladation reaction of the 2-(arylaminopyrimidin-4-yl)pyridine ligands with Pd(PhCN)2Cl2 were performed. These measurements were carried out using UV-Vis spectroscopy. The spectral studies of the cyclometallation step showed that the reaction follows second-order kinetics. In addition, a full kinetic investigation was performed at different temperatures and the activation parameters of complex formation were calculated.

- User-Centered Collaborative Visualization (2015)
- The last couple of years have marked the entire field of information technology with the introduction of a new global resource, called data. Certainly, one can argue that large amounts of information and highly interconnected, complex datasets have been available since the dawn of the computer, and even centuries before. However, it has been only a few years since digital data has exponentially expanded, diversified and interconnected across an overwhelming range of domains, generating an entire universe of zeros and ones. This universe represents a source of information with the potential of advancing a multitude of fields and sparking valuable insights. In order to obtain this information, the data needs to be explored, analyzed and interpreted. While a large set of problems can be addressed through automatic techniques from fields like artificial intelligence, machine learning or computer vision, there are various datasets and domains that still rely on human intuition and experience to parse and discover hidden information. In such instances, the data is usually structured and represented in the form of an interactive visual representation that allows users to efficiently explore the data space and reach valuable insights. However, the experience, knowledge and intuition of a single person also has its limits. To address this, collaborative visualizations allow multiple users to communicate, interact and explore a visual representation by building on the different views and knowledge blocks contributed by each person. In this dissertation, we explore the potential of subjective measurements and user emotional awareness in collaborative scenarios, as well as support for flexible and user-centered collaboration in information visualization systems running on tabletop displays. We commence by introducing the concept of user-centered collaborative visualization (UCCV) and highlighting the context in which it applies.
We continue with a thorough overview of the state of the art in the areas of collaborative information visualization, subjectivity measurement and emotion visualization, combinable tabletop tangibles, as well as browsing history visualizations. Based on a new web browser history visualization for exploring users' parallel browsing behavior, we introduce two novel user-centered techniques for supporting collaboration in co-located visualization systems. To begin with, we inspect the particularities of detecting user subjectivity through brain-computer interfaces, and present two emotion visualization techniques for touch and desktop interfaces. These visualizations offer real-time or post-task feedback about the users' affective states, both in single-user and collaborative settings, thus increasing emotional self-awareness and the awareness of other users' emotions. For supporting collaborative interaction, a novel design for tabletop tangibles is described together with a set of specifically developed interactions for supporting tabletop collaboration. These ring-shaped tangibles minimize occlusion, support touch interaction, can act as interaction lenses, and express logical operations through nesting. The visualization and the two UCCV techniques are each evaluated individually, capturing a set of advantages and limitations of each approach. Additionally, the collaborative visualization supported by the two UCCV techniques is evaluated in three user studies that offer insight into the specifics of interpersonal interaction and task transition in collaborative visualization. The results show that the proposed collaboration support techniques not only improve the efficiency of the visualization, but also help maintain the collaboration process and aid a balanced social interaction.

- Robustness for regression models with asymmetric error distribution (2015)
- In this work we focus on regression models with asymmetric error distributions, more precisely, with extreme value error distributions. This thesis arises in the framework of the project "Robust Risk Estimation". Starting in July 2011, this project received three years of funding from the Volkswagen Foundation in the call "Extreme Events: Modelling, Analysis, and Prediction" within the initiative "New Conceptual Approaches to Modelling and Simulation of Complex Systems". The project involves applications in financial mathematics (operational and liquidity risk), medicine (length of stay and cost), and hydrology (river discharge data). These applications are bridged by the common use of robustness and extreme value statistics. Within the project, issues arise in each of these applications that can be dealt with by means of Extreme Value Theory, adding extra information in the form of regression models. The particular challenge in this context concerns asymmetric error distributions, which significantly complicate the computations and make the desired robustification extremely difficult. To this end, this thesis makes a contribution. This work consists of three main parts. The first part is focused on the basic notions and gives an overview of the existing results in Robust Statistics and Extreme Value Theory. We also provide some diagnostics, which are an important achievement of our project work. The second part of the thesis presents a deeper analysis of the basic models and tools used to achieve the main results of the research; it is the most important part of the thesis and contains our personal contributions. First, in Chapter 5, we develop robust procedures for the risk management of complex systems in the presence of extreme events. The mentioned applications have a time structure (e.g. hydrology), therefore we provide extreme value theory methods with time dynamics. To this end, in the framework of the project we considered two strategies.
In the first one, we capture the dynamics with a state-space model and apply extreme value theory to the residuals; in the second one, we integrate the dynamics by means of autoregressive models, where the regressors are described by generalized linear models. More precisely, since the classical procedures are not appropriate in the presence of outliers, for the first strategy we rework the classical Kalman smoother and extended Kalman procedures in a robust way for different types of outliers and illustrate the performance of the new procedures in a GPS application and a stylized outlier situation. To apply the shrinking neighborhood approach we need some smoothness; therefore, for the second strategy, we derive smoothness of the generalized linear model in terms of L2 differentiability and give sufficient conditions for it in the cases of stochastic and deterministic regressors. Moreover, we introduce time dependence in these models by linking the distribution parameters to their own past observations. The advantage of our approach is its applicability to error distributions with a higher dimensional parameter and to regressors of possibly different length for each parameter. Further, we apply our results to models with generalized Pareto and generalized extreme value error distributions. Finally, we create an exemplary implementation of the fixed point iteration algorithm for the computation of the optimally robust influence curve in R. Here we do not aim to provide the most flexible implementation, but rather sketch how it should be done and retain the points of particular importance. In the third part of the thesis we discuss three applications, operational risk, hospitalization times and hydrological river discharge data, apply our code to a real data set taken from the Jena university hospital ICU, and provide the reader with various illustrations and detailed conclusions.
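
To give a concrete flavor of what robustifying a Kalman procedure can mean, the sketch below clips the classical correction step in norm, which bounds the influence a single outlying observation can have (the generic idea behind rLS-type filters). This is only an illustrative sketch, not the procedures derived in the thesis; the matrices in the example are toy values.

```python
import numpy as np

def rls_filter_step(x_pred, P_pred, y, H, R, clip_b):
    """One robustified Kalman correction step (schematic rLS-type idea):
    the classical correction K @ innovation is clipped in norm at clip_b."""
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # classical Kalman gain
    dx = K @ (y - H @ x_pred)                # classical correction
    norm = np.linalg.norm(dx)
    if norm > clip_b:                        # Huberization: cap the length,
        dx = dx * (clip_b / norm)            # keep the direction
    x_filt = x_pred + dx
    P_filt = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_filt, P_filt
```

With a scalar state at 0, unit variances and a grossly outlying observation y = 10, the classical update would move the state by 5, whereas the clipped step moves it by at most clip_b.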

- Worst-Case Portfolio Optimization: Transaction Costs and Bubbles (2015)
- In this thesis we extend the worst-case modeling approach as first introduced by Hua and Wilmott (1997) (option pricing in discrete time) and Korn and Wilmott (2002) (portfolio optimization in continuous time) in various directions. In the continuous-time worst-case portfolio optimization model of Korn and Wilmott (2002), the financial market is assumed to be under the threat of a crash in the sense that the stock price may crash by an unknown fraction at an unknown time. It is assumed that only an upper bound on the size of the crash is known and that the investor prepares for the worst possible crash scenario. That is, the investor aims to find the strategy maximizing her objective function in the worst-case crash scenario. In the first part of this thesis, we consider the model of Korn and Wilmott (2002) in the presence of proportional transaction costs. First, we treat the problem without crashes, show that the value function is the unique viscosity solution of a dynamic programming equation (DPE), and then construct the optimal strategies. We then consider the problem in the presence of crash threats, derive the corresponding DPE and characterize the value function as its unique viscosity solution. In the last part, we consider the worst-case problem with a random number of crashes by proposing a regime switching model in which each state corresponds to a different crash regime. We interpret each of the crash-threatened regimes of the market as a state in which a financial bubble has formed which may lead to a crash. In this model, we prove that the value function is a classical solution of a system of DPEs and derive the optimal strategies.
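
The worst-case principle itself can be illustrated without any of the viscosity-solution machinery: evaluate a strategy in its worst admissible crash scenario, then optimize over strategies. The toy sketch below does this for a constant stock fraction under log utility, where a crash of size k multiplies wealth by (1 - pi*k); all market parameters are hypothetical, and the thesis of course derives genuinely optimal, time-dependent strategies from the DPEs.

```python
import math

def worst_case_log_utility(pi, mu=0.08, r=0.02, sigma=0.3, T=5.0, k_max=0.2):
    """Expected log utility of terminal wealth for a constant stock fraction pi:
    the crash-free scenario vs. the worst admissible crash of size up to k_max."""
    drift = r + pi * (mu - r) - 0.5 * pi ** 2 * sigma ** 2
    no_crash = drift * T
    worst_k = k_max if pi > 0 else 0.0       # a crash only hurts if pi > 0
    with_crash = no_crash + math.log(1.0 - pi * worst_k)
    return min(no_crash, with_crash)         # worst case over crash scenarios

# Grid search for the best constant fraction under the worst-case criterion.
grid = [i / 1000.0 for i in range(1001)]
pi_star = max(grid, key=worst_case_log_utility)
```

As expected, the worst-case optimal fraction lies strictly between 0 and the crash-free Merton fraction (mu - r) / sigma^2: the investor holds stock, but less than without the crash threat.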

- Optimal Multilevel Monte Carlo Algorithms for Parametric Integration and Initial Value Problems (2015)
- We intend to find optimal deterministic and randomized algorithms for three related problems: multivariate integration, parametric multivariate integration, and parametric initial value problems. The main interest is concentrated on the question to what extent randomization affects the precision of an approximation. We want to understand when and to what extent randomized algorithms are superior to deterministic ones. All problems are studied for Banach space valued input functions. The analysis of Banach space valued problems is motivated by the investigation of scalar parametric problems; these can be understood as particular cases of Banach space valued problems. The gain achieved by randomization depends on the underlying Banach space. For each problem, we introduce deterministic and randomized algorithms and provide the corresponding convergence analysis. Moreover, we also provide lower bounds for the general Banach space valued settings, and thus determine the complexity of the problems. It turns out that the obtained algorithms are order optimal in the deterministic setting. In the randomized setting, they are order optimal for certain classes of Banach spaces, which include the L_p spaces and any finite dimensional Banach space. For general Banach spaces, they are optimal up to an arbitrarily small gap in the order of convergence.
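
The multilevel Monte Carlo mechanism referred to in the title rests on the telescoping identity E[P_L] = E[P_0] + sum over l of E[P_l - P_{l-1}], with many cheap samples on coarse levels and few expensive samples on fine ones. The sketch below shows this generic mechanism on a toy SDE (geometric Brownian motion) with coupled Euler discretizations; the thesis treats Banach space valued parametric problems, so this is only the scalar textbook version with hypothetical parameters and a simple geometric decay of sample sizes.

```python
import math
import random

def gbm_level_diff(level, T=1.0, S0=1.0, mu=0.05, sigma=0.2, M=2):
    """One coupled sample of P_l - P_{l-1} for the payoff P = S_T under an
    Euler scheme with M**level steps; fine and coarse paths share the same
    Brownian increments, which makes the corrections small in variance."""
    nf = M ** level
    dtf = T / nf
    sf, sc, dw_coarse = S0, S0, 0.0
    for i in range(nf):
        dw = random.gauss(0.0, math.sqrt(dtf))
        sf += mu * sf * dtf + sigma * sf * dw      # fine Euler step
        dw_coarse += dw
        if level > 0 and (i + 1) % M == 0:
            sc += mu * sc * (M * dtf) + sigma * sc * dw_coarse  # coarse step
            dw_coarse = 0.0
    return sf if level == 0 else sf - sc           # P_0, or correction P_l - P_{l-1}

def mlmc_estimate(L=4, N0=20000):
    """Telescoping estimator E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
    est = 0.0
    for level in range(L + 1):
        n = max(N0 // 4 ** level, 100)             # fewer samples on finer levels
        est += sum(gbm_level_diff(level) for _ in range(n)) / n
    return est
```

For this toy problem E[S_T] = S0 * exp(mu * T) is known in closed form, so the estimator can be checked directly; the practically relevant point is that most samples are drawn on the cheap coarse levels.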

- Metallacetylide und mehrkernige Komplexe mit neuen chelatisierenden Alkinylliganden (2015)
- Metal acetylides are compounds of metals and ligands formed by deprotonation of terminal alkynes. Complexes of this type exhibit a wide variety of properties and possible applications. The linear character of the acetylide unit and its π-unsaturated nature make them suitable building blocks for the preparation of molecular wires or organometallic oligomeric and polymeric materials with properties such as optical nonlinearity, luminescence, electrical conductivity and liquid crystallinity. Although numerous studies on the preparation of metal acetylide complexes exist, the possibility of synthesizing ligands that combine a multidentate, so-called chelating function with a terminal alkynyl unit has been pursued by only a few groups. With such ligands it is not only possible to obtain metal acetylides by covalent attachment of a metal to the C-C triple bond, but also to efficiently extend these to stable multinuclear metal complexes by coordination of further metal centers to the chelating function. In this work, two ligands possessing such properties were synthesized: the bidentate alkyne 2-(1-(prop-2-yn-1-yl)-1H-pyrazol-3-yl)pyridine and the tridentate diyne 2,6-bis(1-(prop-2-yn-1-yl)-1H-pyrazol-3-yl)pyridine. The ligands were synthesized, with continual optimization of the reaction conditions, by propargylation of 2-pyrazolylpyridine and 2,6-bispyrazolylpyridine, respectively. In a series of experiments with various transition metals, the monoalkyne was examined for its ability to form both metal acetylides and complexes by coordination to the chelating nitrogen atoms. A gold(I) monoacetylide with triphenylphosphine as the neutral ligand and two platinum diacetylides were obtained.
The platinum complexes differ in their neutral ligands: triphenylphosphine was used in one case and dppe in the other. The differing nature of the phosphine ligands affects the conformation of the complexes: whereas the monodentate triphenylphosphine molecules allow the alkynyl ligands to be positioned trans, the dppe-containing complex adopts a cis conformation. From both complexes, trinuclear compounds were synthesized by coordination of further metal centers. The product of the trans complex with ruthenium(II) could be isolated and characterized, whereas the cis product was only detected analytically and could not be successfully separated from the by-products. For the dppe-containing platinum diacetylide, the ability to coordinate zinc was examined in several series of experiments, carried out with variation of the zinc concentration and of the solvent. ESI-MS measurements showed that zinc(II) cations coordinate to the diacetylide and that different products are obtained depending on the zinc concentration. The solvent also influences product formation: whereas in acetone only two different zinc adducts are observed, in acetonitrile even the formation of chains and clusters is observed. The tridentate diyne ligand was successfully reacted with ruthenium, yielding an octahedral complex in which the ruthenium(II) center is coordinated in a tridentate fashion by the synthesized ligand and bears, besides one triphenylphosphine, two chlorido ligands. With this complex, the catalytic transfer hydrogenation of acetophenone with isopropanol was carried out, varying the amount of base added to deprotonate the alcohol. The activity of the complex in the transfer hydrogenation was demonstrated.
Under the experimental procedure employed, the amount of base added showed only a minor influence on the conversion.

- Schwingfestigkeit und Mikrostruktur von ultrafeinkörnigem C45 (2015)
- There are many approaches to improving the mechanical properties of materials, but in most cases an increase in quasi-static strength is accompanied by a reduction in ductility. For ultrafine-grained materials, this loss of deformability is not necessarily the consequence, owing to the dominant strengthening mechanism of grain boundary strengthening. This aspect is the main motivation for research on materials with very small grain sizes. In the present work, several ultrafine-grained modifications of the quenched and tempered steel C45 were characterized microstructurally by means of scanning and transmission electron microscopy as well as nanoindentation. Grain sizes below 1 µm were widely observed, which meets the definition of ultrafine-grained. This was followed by a correlation of the cyclic properties, investigated by four-point micro-bending tests, with the microstructure. Since the high-strength variants of the ultrafine-grained modifications largely showed crack initiation at non-metallic inclusions, a fracture-mechanics analysis using stress intensity factors was carried out. The quintessence of the present work is a model summarizing the fatigue properties of ultrafine-grained materials. Several properties, such as the occurrence of internal crack initiation as well as the overall extremely high hardness and fatigue strength (comparable to bainitized 100Cr6), were demonstrated for the first time in these modifications within this work and were previously not part of the state of the art.

- Modern dehydrogenative amination reactions (2015)
- The element nitrogen is preponderant in Nature. Found in its simplest form as a diatomic gas in the air, as well as in elaborate molecules such as the double helix of DNA, this element is indisputably essential for life. Indeed, nitrogen is omnipresent in all metabolic pathways. With the advent of green chemistry, researchers attempt to functionalize arenes without pre-functionalization of the latter for C-C bond formation. Why not C-N bond formation? We investigated new oxidative amination reactions by cross-dehydrogenative coupling. Concerned with atom economy and green processes, our objectives were: 1) to obviate pre-activation or pre-oxidation of both the C-H coupling partner and the N-aminating agent; 2) to avoid the use of a chelating directing group. We achieved C-N bond formation for some classes of amines. Thus, we describe the reactivity of a cyclic secondary amine, carbazole, in the presence of catalytic amounts of ruthenium(II) and copper(II) to build the challenging C-N bond between two carbazoles. The initial mechanistic experiments are presented and discussed. Then, we describe the more challenging hetero-coupling between diarylamines and carbazoles. The new ruthenium(II)/copper(II) catalytic system allowed the ortho-N-carbazolation of diarylamines. This reaction, performed under mild conditions (O2 as the terminal oxidant), displays an unusual intramolecular N-H••N interaction in this novel class of compounds. Finally, we present a surprising metal-free C-N bond formation between the ubiquitous phenols and the phenothiazines. Initially conducted in the presence of transition metals (RuII/CuII), this reaction proved to be efficient with only cumene and O2. These components suggest a mechanism initiated by a Hock process. An initial infrared analysis points to a strong intramolecular O-H••N interaction in the resulting products.
These first elements of reactivity, developed within the laboratory for “modern dehydrogenative amination reactions”, will be presented and discussed.