Development of New Methods for the Synthesis of Aldehydes, Arenes and Trifluoromethylated Compounds
(2012)
In the first project, a second-generation palladium catalyst for the selective hydrogenation of carboxylic acids to aldehydes was successfully developed. This project was carried out in cooperation with Dipl.-Chem. Thomas Fett from Boehringer Ingelheim, Austria. The new catalyst is highly effective for the conversion of diversely functionalized aromatic, heteroaromatic and aliphatic carboxylic acids to the corresponding aldehydes in the presence of pivalic anhydride at only 5 bar hydrogen pressure; previously, this transformation required either 30 bar of hydrogen pressure or waste-intensive hypophosphite bases as reducing agents. Our method has increased the synthetic importance of this valuable transformation: selective hydrogenation of carboxylic acids to the corresponding aldehydes is now possible with industrial hydrogenation equipment as well as laboratory-scale glass autoclaves. It may also convince synthetic organic chemists to use this transformation for routine aldehyde synthesis in the laboratory.
In the second project, a microwave-assisted Cu-catalyzed protodecarboxylation of arenecarboxylic acids to arenes was achieved. This work was done in collaboration with Dipl.-Chem. Filipe Manjolinho under the supervision of Dr. Nuria Rodríguez. In the presence of 1-5 mol% of an inexpensive CuI/1,10-phenanthroline catalyst generated in situ under microwave irradiation, diversely functionalized arene- and heteroarenecarboxylic acids were decarboxylated to the corresponding arenes in good yields at 190 °C within 5-15 min. The loss of volatile arenes with the release of CO2 is controlled by the use of sealed, pressure-resistant microwave vessels. These reactions are highly beneficial for parallel synthesis in drug discovery because of their short reaction times. Microwave technology will also help in the future to develop more effective catalysts for protodecarboxylation reactions.
Based on this microwave-assisted protodecarboxylation strategy, the decarboxylative coupling of arenecarboxylic acids with aryl triflates and tosylates was also conducted under microwave irradiation, which provided higher yields of the corresponding biphenyls from deactivated substrates in shorter reaction times than conventional heating.
In the third project, crystalline potassium (trifluoromethyl)trimethoxyborate was successfully applied to the synthesis of benzotrifluorides under oxidative conditions. This project was done in cooperation with Dipl.-Chem. Annette Buba. In the presence of Cu(OAc)2 and molecular oxygen, arylboronates were coupled with K+[CF3B(OMe)3] in DMSO at 60 °C. A variety of benzotrifluorides was synthesized in good yields under the optimized reaction conditions. This protocol for the oxidative trifluoromethylation of arylboronates forms the basis for the development of a decarboxylative trifluoromethylation of arenecarboxylic acids.
The fourth project discloses a simple and straightforward synthesis of trifluoromethylated alcohols by nucleophilic addition of potassium (trifluoromethyl)trimethoxyborate to carbonyl compounds. This project was done in cooperation with Dr. Thomas Knauber and Dipl.-Chem. Annette Buba. Using K+[CF3B(OMe)3] in THF at 60 °C, diversely functionalized aldehydes and ketones were successfully converted into the corresponding trifluoromethylated alcohols.
The third and fourth projects demonstrate the successful establishment of crystalline, shelf-stable potassium (trifluoromethyl)trimethoxyborate as a highly versatile CF3 source in nucleophilic trifluoromethylation reactions. These new protocols are characterized by their user-friendliness and broad applicability under mild reaction conditions, which makes them beneficial for the late-stage introduction of the CF3 group into organic molecules.
Standard bases are one of the main tools in computational commutative algebra. In 1965, Buchberger presented a criterion for such bases and was thus able to introduce a first algorithm for their computation. Since the basic version of this algorithm is rather inefficient, processing a lot of useless data during its execution, active research on improvements of this kind of algorithm is quite important.
In this thesis we introduce the reader to the area of computational commutative algebra with a focus on so-called signature-based standard basis algorithms. We not only present the basic version of Buchberger's algorithm, but also give an extensive discussion of different attempts at optimizing standard basis computations, from several sorting algorithms for internal data up to different reduction processes. Afterwards the reader gets a complete introduction to the origin of signature-based algorithms in general, explaining the underlying ideas in detail. Furthermore, we give an extensive discussion in terms of correctness, termination, and efficiency, presenting various variants of signature-based standard basis algorithms.
Whereas Buchberger and others found criteria for discarding useless computations that are based entirely on the polynomial structure of the elements considered, Faugère presented the first signature-based algorithm in 2002, the F5 Algorithm. This algorithm is famous for generating much less computational overhead during its execution. Within this thesis we not only present Faugère's ideas, but also generalize them, ending up with several different, optimized variants of his criteria for detecting redundant data.
Not being focused entirely on theory, we also present practical aspects, comparing the performance of various implementations of those algorithms in the computer algebra system Singular over a wide range of example sets.
In the end we give a rather extensive overview of recent research in this area of computational commutative algebra.
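The Buchberger loop described above can be sketched in a few lines. The following toy Python implementation (not the Singular code discussed in the thesis) restricts itself to univariate polynomials over Q, where the standard basis computation degenerates to a GCD computation; it nevertheless shows the three ingredients of the basic algorithm: S-polynomials, top-reduction, and the pair set.

```python
from fractions import Fraction

# A univariate polynomial over Q is a dict {degree: nonzero coefficient}.

def lead(f):
    """Leading degree and leading coefficient."""
    d = max(f)
    return d, f[d]

def sub_scaled(f, g, shift, coeff):
    """Return f - coeff * x^shift * g."""
    h = dict(f)
    for d, c in g.items():
        nd = d + shift
        h[nd] = h.get(nd, Fraction(0)) - coeff * c
        if h[nd] == 0:
            del h[nd]
    return h

def top_reduce(f, G):
    """Reduce the lead term of f by elements of G as long as possible."""
    while f:
        df, cf = lead(f)
        for g in G:
            dg, cg = lead(g)
            if df >= dg:               # x^dg divides x^df
                f = sub_scaled(f, g, df - dg, cf / cg)
                break
        else:
            break                      # lead term irreducible
    return f

def s_poly(f, g):
    """S-polynomial: cancel the leading terms of f and g."""
    df, cf = lead(f)
    dg, cg = lead(g)
    L = max(df, dg)
    a = sub_scaled({}, f, L - df, Fraction(-1) / cf)   # x^(L-df)/lc(f) * f
    return sub_scaled(a, g, L - dg, Fraction(1) / cg)

def buchberger(F):
    """Basic Buchberger loop: process pairs, add nonzero remainders."""
    G = [dict(f) for f in F]
    pairs = [(i, j) for i in range(len(G)) for j in range(i)]
    while pairs:
        i, j = pairs.pop()
        r = top_reduce(s_poly(G[i], G[j]), G)
        if r:
            pairs += [(k, len(G)) for k in range(len(G))]
            G.append(r)
    return G

# In one variable the computed basis contains the GCD of the input.
F = [{2: Fraction(1), 0: Fraction(-1)},    # x^2 - 1
     {3: Fraction(1), 0: Fraction(-1)}]    # x^3 - 1
G = buchberger(F)
g_min = min(G, key=lambda f: max(f))       # element of minimal degree
```

Here the minimal-degree element of the computed basis is x - 1, the GCD of the inputs; in several variables the same loop runs over true multivariate S-pairs, and it is exactly the many useless pairs reducing to zero that the signature-based criteria avoid.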
In urban planning, both measuring and communicating sustainability are among the most recent concerns. The primary emphasis of this thesis is therefore on establishing metrics and visualization techniques for dealing with indicators of sustainability.
First, this thesis provides a novel approach for measuring and monitoring two indicators of sustainability, urban sprawl and carbon footprints, at the urban neighborhood scale. By designating different sectors of relevant carbon emissions as well as different household categories, it provides detailed information about carbon emissions in order to estimate the impacts of daily consumption decisions and travel behavior by household type. Regarding urban sprawl, a novel grid-cell-based indicator model is established, based on different dimensions of urban sprawl.
Second, this thesis presents a three-step visualization method that addresses predefined requirements for geovisualizations and visualizes the indicator results introduced above. This surface visualization combines advantages of both common GIS representations and three-dimensional representation techniques within the field of urban planning, and is assisted by a web-based graphical user interface which makes the results accessible to the public.
In addition, by focusing on local neighborhoods, this thesis provides an alternative approach to measuring and visualizing both indicators by utilizing a Neighborhood Relation Diagram (NRD) based on weighted Voronoi diagrams. Thus, the user is able to a) utilize original census data, b) compare direct impacts of indicator results on neighboring cells, and c) compare both indicators of sustainability visually.
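As a purely illustrative sketch of what a grid-cell-based indicator looks like computationally, the following Python fragment aggregates three hypothetical sprawl dimensions per cell with made-up weights. The dimension names and weights are assumptions for illustration only, not the indicator model established in the thesis.

```python
# Toy grid-cell sprawl indicator: a weighted combination of three
# illustrative dimensions. Names and weights are hypothetical.

def sprawl_index(cell, weights=(0.4, 0.4, 0.2)):
    """cell: dict with normalised scores in [0, 1] per dimension."""
    dims = (cell["low_density"], cell["scatter"], cell["land_use_mix"])
    if not all(0.0 <= d <= 1.0 for d in dims):
        raise ValueError("dimension scores must be normalised to [0, 1]")
    return sum(w * d for w, d in zip(weights, dims))

# Two example cells: a sprawling one and a compact one.
grid = {
    (0, 0): {"low_density": 0.9, "scatter": 0.8, "land_use_mix": 0.7},
    (0, 1): {"low_density": 0.2, "scatter": 0.1, "land_use_mix": 0.3},
}
scores = {cell: sprawl_index(v) for cell, v in grid.items()}
```

The per-cell scores can then be fed into the kind of surface visualization described above.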
The increasing complexity of modern SoC designs makes formal verification of such designs considerably more complex and challenging. This motivates the research community to develop more robust approaches that enable efficient formal verification for such designs.
It is a common scenario to apply a correctness-by-integration strategy while a SoC design is being verified. This strategy implements formal verification in two major steps. First, each module of the SoC is considered and verified separately from the other blocks of the system. In the second step, when functional correctness has been successfully proved for every individual module, the communication behavior between all the modules of the SoC has to be verified. In industrial applications, SAT/SMT-based interval property checking (IPC) has become widely adopted for SoC verification. Using IPC approaches, a verification engineer can solve a wide range of important verification problems and prove the functional correctness of diverse complex components in a modern SoC design. However, there exist critical parts of a design where formal methods often lack robustness. State-of-the-art property checkers fail to prove correctness for the data path of an industrial central processing unit (CPU). In particular, arithmetic circuits of realistic size (32 or 64 bits), especially those implementing multiplication algorithms, are well-known examples where SAT/SMT-based formal verification reaches its capacity limits very quickly. In cases like this, formal verification is replaced with simulation-based approaches in practice. Simulation is a good methodology that can uncover a large fraction of the bugs hidden in a SoC design. However, in contrast to formal methods, a simulation-based technique cannot guarantee the absence of errors in a design; simulation may still miss so-called corner-case bugs. This may potentially lead to additional and very expensive costs in terms of time, effort, and investment spent on redesigns, refabrications, and reshipments of new chips.
The work of this thesis concentrates on studying and developing robust algorithms for solving hard arithmetic decision problems. Such decision problems often originate from RTL property checking for data-path designs. Proving properties of those designs can be performed efficiently by solving SMT decision problems formulated in the quantifier-free logic over fixed-size bit vectors (QF-BV).
This thesis, firstly, proposes an effective algebraic approach based on Gröbner basis theory that allows arithmetic problems to be decided efficiently. Secondly, for the case of custom-designed components, it describes a sophisticated modeling technique required to recover the necessary arithmetic description of these components. Further, it explains how the methods from computer algebra and the modeling technique can be integrated into a common SMT solver. Finally, a new QF-BV SMT solver is introduced.
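To make the capacity problem concrete, the following Python sketch checks an RTL-style shift-and-add multiplier against its arithmetic specification by exhaustive simulation. At 4 bits this is only 256 input pairs; at 32 bits it would already be 2^64 pairs, which is exactly why data-path verification cannot rely on enumeration and motivates algebraic decision procedures. This is a didactic sketch, not the solver developed in the thesis.

```python
def shift_add_mul(a, b, width):
    """RTL-style shift-and-add multiplier; result truncated to `width` bits."""
    mask = (1 << width) - 1
    acc = 0
    for i in range(width):
        if (b >> i) & 1:                 # add the shifted partial product
            acc = (acc + (a << i)) & mask
    return acc

def exhaustive_check(width):
    """Compare against the arithmetic spec for all 2^(2*width) input pairs."""
    mask = (1 << width) - 1
    return all(shift_add_mul(a, b, width) == (a * b) & mask
               for a in range(1 << width) for b in range(1 << width))
```

Running `exhaustive_check(4)` finishes instantly; the same loop at realistic widths is hopeless, whereas a Gröbner-basis-style algebraic argument reasons about the multiplier symbolically.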
Today, polygonal models occur everywhere in graphical applications, since they are easy to render and to process, and a huge set of tools exists for the generation and manipulation of polygonal data. But modern scanning devices that allow high-quality, large-scale acquisition of complex real-world models often deliver a large set of points as the resulting data structure of the scanned surface. A direct triangulation of those point clouds does not always result in good models; they often contain problems like holes, self-intersections and non-manifold structures. Moreover, one often loses important surface structures like sharp corners and edges during a standard surface reconstruction. It is therefore worthwhile to stay a little longer in the point-based world, to analyze the point cloud data with respect to such features, and afterwards to apply a surface reconstruction method that is known to construct continuous and smooth surfaces, extending it to reconstruct sharp features.
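The idea of detecting sharp features directly in the point-based world can be illustrated with a 2D toy example: a point whose neighbourhood is not well approximated by a single line is a corner candidate. The residual measure below, the ratio of the covariance eigenvalues of the neighbourhood, is one common choice; it is used here purely as an illustrative sketch under assumed parameters, not as the method developed in the thesis.

```python
import math

def covariance2d(pts):
    """Centered second moments (a, b, c) of a 2D point set."""
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    a = sum((p[0] - mx) ** 2 for p in pts) / n
    c = sum((p[1] - my) ** 2 for p in pts) / n
    b = sum((p[0] - mx) * (p[1] - my) for p in pts) / n
    return a, b, c

def line_residual(pts):
    """lambda_min / (lambda_min + lambda_max) of the 2x2 covariance:
    0 for perfectly collinear neighbourhoods, large near corners."""
    a, b, c = covariance2d(pts)
    root = math.sqrt((a - c) ** 2 + 4 * b * b)
    lmin, lmax = (a + c - root) / 2, (a + c + root) / 2
    return lmin / (lmin + lmax) if lmax > 0 else 0.0

def feature_points(cloud, k=6, threshold=0.05):
    """Flag points whose k-neighbourhood is poorly fitted by one line."""
    out = []
    for p in cloud:
        nbrs = sorted(cloud,
                      key=lambda q: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)[:k]
        if line_residual(nbrs) > threshold:
            out.append(p)
    return out

# L-shaped sample: two straight edges meeting at the corner (0, 0).
cloud = [(i / 10, 0.0) for i in range(11)] + [(0.0, i / 10) for i in range(1, 11)]
corners = feature_points(cloud)
```

Only the points near the corner are flagged; points in the middle of either straight edge have a (near-)zero residual. The same eigenvalue-ratio idea carries over to 3D neighbourhoods for edge and corner detection in real scans.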
In this thesis we consider the problem of maximizing the growth rate under proportional and fixed transaction costs in a framework with one bond and one stock, which is modeled as a jump diffusion with compound Poisson jumps. Following the approach of [1], we prove that in this framework it is optimal for an investor to follow a CB-strategy, whose boundaries depend only on the parameters of the underlying stock and bond. It is then natural to ask how often an investor following a CB-strategy, given by the stopping times \((\tau_i)_{i\in\mathbb N}\) and impulses \((\eta_i)_{i\in\mathbb N}\), has to rebalance. In other words, we want to obtain the limit of the average inter-trading times
\[
\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=1}^n(\tau_{i+1}-\tau_{i}).
\]
We obtain this limit, which is given by the expected first exit time of the risky fraction process from some interval under the invariant measure of the Markov chain \((\eta_i)_{i\in\mathbb N}\), using the ergodic theorems of von Neumann and Birkhoff. In general, it is difficult to obtain the expectation of the first exit time for a process with jumps: because of the jump part, the process may overshoot the boundaries of the interval when crossing them, which makes it difficult to obtain the distribution. Nevertheless, we can obtain the first exit time if the process has only negative jumps, using scale functions. The main difficulty of this approach is that the scale functions are in general known only up to their Laplace transforms. In [2] and [3] a closed-form expression for the scale function of a Lévy process with phase-type distributed jumps is obtained. Phase-type distributions build a rich class of positive-valued distributions, including the exponential, hyperexponential, Erlang, hyper-Erlang and Coxian distributions. Since the scale function is then given in closed form, we can differentiate it to obtain the expected first exit time explicitly via the fluctuation identities.
[1] Irle, A. and Sass, J.: Optimal portfolio policies under fixed and proportional transaction costs, Advances in Applied Probability 38 (2006), 916-942.
[2] Egami, M. and Yamazaki, K.: On scale functions of spectrally negative Lévy processes with phase-type jumps, working paper, July 3.
[3] Egami, M. and Yamazaki, K.: Precautionary measures for credit risk management in jump models, working paper, June 17.
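The expected first exit time that drives the average inter-trading time can also be approximated numerically. The following Python sketch estimates the mean first exit time of a jump diffusion with negative exponential jumps (spectrally negative, as in the text) from an interval via a crude Euler scheme. All parameters are illustrative assumptions, and the discrete-time jump handling is only a sketch, not the scale-function computation of the thesis.

```python
import math
import random

def first_exit_time(x0, lower, upper, mu, sigma, lam, jump_mean,
                    dt=1e-3, t_max=50.0, rng=random):
    """Euler scheme for dX = mu dt + sigma dW - dJ, where J is compound
    Poisson with rate lam and Exp(1/jump_mean) jump sizes (negative jumps).
    Returns the first time X leaves (lower, upper), capped at t_max."""
    x, t = x0, 0.0
    while lower < x < upper and t < t_max:
        x += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if rng.random() < lam * dt:            # a jump arrives in this step
            x -= rng.expovariate(1.0 / jump_mean)
        t += dt
    return t

def mean_exit_time(n=400, seed=1):
    """Monte Carlo average over n paths (illustrative parameters)."""
    rng = random.Random(seed)
    return sum(first_exit_time(0.5, 0.0, 1.0, 0.05, 0.3, 1.0, 0.2,
                               rng=rng) for _ in range(n)) / n

est = mean_exit_time()
```

Note that the overshoot at the lower boundary, visible in each simulated path, is exactly the quantity that makes the analytic distribution hard and motivates the scale-function approach of [2] and [3].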
This thesis deals with the relationship between no-arbitrage and (strictly) consistent price processes for a financial market with proportional transaction costs in a discrete-time model. The exact mathematical statement behind this relationship is formulated in the so-called Fundamental Theorem of Asset Pricing (FTAP). Among the many proofs of the FTAP without transaction costs there is also an economically intuitive, utility-based approach. It relies on the fact that the investor can maximize his expected utility from terminal wealth. This approach is rather constructive, since the equivalent martingale measure is then given by the marginal utility evaluated at the optimal terminal payoff.
However, in the presence of proportional transaction costs such a utility-based approach to the existence of consistent price processes is missing in the literature. So far, rather deep methods from functional analysis or from the theory of random sets have been used to prove the FTAP under proportional transaction costs.
To guarantee the existence of a utility-maximizing payoff we first concentrate on a generic single-period model with only one risky asset. The marginal utility evaluated at the optimal terminal payoff yields the first component of a consistent price process; the second component is given by the bid-ask prices depending on the investor's optimal action. Even more is true: near this consistent price process there are many strictly consistent price processes. Their exact structure allows us to apply this utility-maximizing argument in a multi-period model. In a backwards induction we adapt the given bid-ask prices in such a way that the strictly consistent price processes found by maximizing utility can be extended to terminal time. In addition, possible arbitrage opportunities of the second kind, which may be present for the original bid-ask process, vanish. The notion of arbitrage opportunities of the second kind has so far been investigated only in models with strict costs in every state; in our model, transaction costs need not be present in every state.
For a model with finitely many risky assets a similar idea is applicable. However, in the single-period case we need to develop new methods compared to the case with only one risky asset, for mainly two reasons. Firstly, it is not at all obvious how to obtain a consistent price process from the utility-maximizing payoff, since the consistent price process has to be found for all assets simultaneously. Secondly, we need to show directly that the so-called vector space property for null payoffs implies the robust no-arbitrage condition. Once this step is accomplished we can a priori use prices with a smaller spread than the original ones, so that the consistent price process found from the utility-maximizing payoff is strictly consistent for the original prices. To make the results applicable to the multi-period case we assume that the prices are given by compact and convex random sets. The multi-period case is then similar to the case with only one risky asset, but more demanding with regard to technical questions.
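The utility-based construction can be made concrete in the simplest possible setting: one period, two states, one risky asset with bid-ask prices. The Python sketch below (all numbers are illustrative assumptions, not from the thesis) maximizes expected log-utility over a grid of positions and then evaluates the marginal-utility price E[u'(W*)S_T]/E[u'(W*)], which lands inside the bid-ask spread, i.e. yields the first component of a consistent price process.

```python
import math

def terminal_wealth(theta, w0, bid, ask, s_T):
    """Buy at the ask, sell at the bid; theta is the number of shares."""
    price = ask if theta >= 0 else bid
    return w0 - theta * price + theta * s_T

def optimal_theta(w0, bid, ask, states, probs, grid):
    """Grid search for the expected-log-utility maximizer."""
    def expected_log(theta):
        ws = [terminal_wealth(theta, w0, bid, ask, s) for s in states]
        if min(ws) <= 0:
            return float("-inf")     # rule out bankruptcy
        return sum(p * math.log(w) for p, w in zip(probs, ws))
    return max(grid, key=expected_log)

def shadow_price(theta, w0, bid, ask, states, probs):
    """E[u'(W*) S_T] / E[u'(W*)] with u = log, i.e. u'(w) = 1/w."""
    ws = [terminal_wealth(theta, w0, bid, ask, s) for s in states]
    num = sum(p * s / w for p, s, w in zip(probs, states, ws))
    den = sum(p / w for p, w in zip(probs, ws))
    return num / den

# Illustrative numbers: spread wide enough that not trading is optimal.
w0, bid, ask = 100.0, 98.0, 102.0
states, probs = (90.0, 110.0), (0.5, 0.5)
grid = [i / 10 - 5.0 for i in range(101)]        # theta in [-5, 5]
theta_star = optimal_theta(w0, bid, ask, states, probs, grid)
p0 = shadow_price(theta_star, w0, bid, ask, states, probs)
```

With these numbers the optimal action is not to trade, and the marginal-utility price equals 100, strictly between bid and ask, illustrating how the optimizer's first-order condition pins a price inside the spread.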
The goal of this work is to develop a simulation-based algorithm that allows the prediction of the effective mechanical properties of textiles on the basis of their microstructure and the corresponding fiber properties. This method can then be used to optimize the microstructure in order to obtain a better stiffness or strength of the corresponding fiber material. An additional aspect of the thesis is that we take into account the microcontacts between the fibers of the textile. A further aspect is accounting for the thickness of thin fibers in the textile: introducing an additional asymptotics with respect to a small parameter, the ratio between the thickness and the representative length of the fibers, allows the local contact problems between fibers to be reduced to one-dimensional problems, which reduces the numerical computations significantly.
A fiber composite material with periodic microstructure and multiple frictional microcontacts between fibers is studied. The textile is modeled by introducing small geometrical parameters: the periodicity of the microstructure and the characteristic diameter of the fibers. The contact problem of linear elasticity is considered, and a two-scale approach is used to obtain the effective mechanical properties.
An algorithm using asymptotic two-scale homogenization for the computation of the effective mechanical properties of textiles with periodic rod or fiber microstructure is proposed. The algorithm is based on successively passing to the asymptotic limits with respect to the in-plane period and the characteristic diameter of the fibers. This leads to an equivalent homogenized problem and reduces the dimension of the auxiliary problems. Subsequent numerical simulations of the cell problems then give the effective material properties of the textile.
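The simplest analogue of this two-scale procedure is the 1D periodic bar, where the cell problem can be solved in closed form: the effective modulus is the volume-fraction-weighted harmonic mean of the phase moduli. The Python sketch below verifies this against a direct springs-in-series computation; it is a didactic special case, not the FiberFEM computation itself.

```python
def effective_modulus_1d(moduli, fractions):
    """Two-scale homogenization of a 1D periodic two-phase bar: the cell
    problem has a closed-form solution, giving the harmonic mean of the
    phase moduli weighted by their volume fractions."""
    assert abs(sum(fractions) - 1.0) < 1e-12
    return 1.0 / sum(f / e for e, f in zip(moduli, fractions))

def series_springs(moduli, lengths):
    """Direct fine-scale check: compliances of springs in series add up."""
    total = sum(lengths)
    return total / sum(l / e for e, l in zip(moduli, lengths))

# Stiff and soft phase in equal proportions (illustrative moduli).
e_eff = effective_modulus_1d([10.0, 1.0], [0.5, 0.5])
```

Note that the effective modulus (20/11, about 1.82) lies far below the arithmetic mean 5.5: the soft phase dominates, which is the same qualitative effect the full tensorial cell problems capture for textiles.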
The homogenization of the boundary conditions on the vanishing out-of-plane interface of a textile or fiber-structured layer has been studied. By introducing additional auxiliary functions into the formal asymptotic expansion for a heterogeneous plate, the corresponding auxiliary and homogenized problems for a nonhomogeneous Neumann boundary condition were deduced. This condition is incorporated into the right-hand side of the homogenized problem via effective out-of-plane moduli.
FiberFEM, a C++ finite element code for solving contact elasticity problems, has been developed. The code is based on the implementation of the algorithm for the contact between fibers proposed in the thesis.
Numerical examples of the homogenization of geotextiles and wovens are obtained in this work by applying the developed algorithm. The effective material moduli are computed numerically using the finite element solutions of the auxiliary contact problems obtained with FiberFEM.
The discrete nature of the dispersed phase (a swarm of droplets) in stirred and pulsed liquid-liquid extraction columns makes the mathematical modelling of such a complex system a tedious task. The dispersed phase is considered as a population of droplets distributed randomly with respect to their internal properties (such as droplet size and solute concentration) at a specific location in space. Hence, the population balance equation has emerged as the natural mathematical framework to model and describe this complex behaviour. However, the resulting model is complicated: due to the inherent nonlinearities in the convective and diffusive terms, as well as the appearance of many integrals in the source term, analytical solutions exist only for particular cases, so numerical solutions are resorted to in general.
In one part of this doctoral thesis, a rigorous mathematical model based on the bivariate population balance framework (the basis of LLECMOD, the "Liquid-Liquid Extraction Column Module") is developed for the steady-state and dynamic simulation of pulsed (sieve-plate and packed) liquid-liquid extraction columns. The model simulates the coupled hydrodynamics and mass transfer of pulsed extraction columns. It is programmed in Digital Visual Fortran and then integrated into the LLECMOD program. Within LLECMOD the user can simulate different types of extraction columns, including stirred and pulsed ones. LLECMOD rests on stable, robust numerical algorithms based on an extended version of the fixed pivot technique after Attarakih et al., 2003 (extended to take into account interphase solute transfer) and on advanced computational fluid dynamics numerical methods. Experimentally validated correlations are used for the estimation of the droplet terminal velocity in extraction columns, based on single-droplet and swarm experiments in laboratory-scale devices. Additionally, recently published correlations for the turbulent energy dissipation and the droplet breakage and coalescence frequencies, as used in this version of LLECMOD, are discussed. Moreover, a coalescence model from the literature, derived from a stochastic description, has been modified to fit the deterministic population model. As a case study, LLECMOD is used here to simulate the steady-state performance of pulsed extraction columns under different operating conditions, including pulsation intensity and volumetric flow rates. The effect of the pulsation intensity (on the holdup, the mean droplet diameter and the solute concentration) is found to be more pronounced for systems of high interfacial tension.
On the other hand, the variation of the volumetric flow rates has a substantial effect on the holdup, mean droplet diameter and solute concentration profiles for chemical systems with low interfacial tension. Two chemical test systems recommended by the European Federation of Chemical Engineering (water-acetone (solute)-n-butyl acetate and water-acetone (solute)-toluene) and an industrial test system are used in the simulation. Model predictions are successfully validated against steady-state and transient experimental data, where good agreement is achieved. The simulated results (holdup, mean droplet diameter and mass transfer profiles) compared to the experimental data show that LLECMOD is a powerful simulation tool, which can efficiently predict the dynamic and steady-state performance of pulsed extraction columns.
In another part of this doctoral thesis, the steady-state performance of extraction columns is studied taking into account the effect of the dispersed-phase inlet condition (light or heavy phase dispersed) and the direction of mass transfer (from the continuous to the dispersed phase and vice versa) using the population balance framework. LLECMOD, a program that uses multivariate population balance models, is extended to take into account the direction of mass transfer and the dispersed-phase inlet. As a case study, LLECMOD is used to simulate pilot-plant RDC columns, where the steady-state mean flow properties (dispersed-phase holdup and mean droplet diameter) and the solute concentration profiles are compared to the available experimental data. Three chemical systems were used: sulpholane-benzene-n-heptane, water-acetone-toluene and water-acetone-n-butyl acetate. The dispersed-phase inlet and the direction of mass transfer, as well as the physical properties of the chemical system, are found to have a profound effect on the steady-state performance of the RDC column. For example, the mean droplet diameter is found to remain invariant when the heavy phase is dispersed, and the extractor efficiency is higher when the direction of mass transfer is from the continuous to the dispersed phase. For the purpose of experimental validation, LLECMOD predictions are found to be in good agreement with the available experimental data concerning the dispersed-phase holdup, the mean droplet diameter and the solute concentration profiles in both phases.
In a further part of this doctoral thesis, a mathematical model is developed for liquid extraction columns based on the multivariate population balance equation (PBE) and the primary and secondary particle method (PSPM) introduced by Attarakih, 2010 (US Patent Application: 0100106467). It is extended to include the momentum balance for the dispersed phase. The advantage of the momentum balance is that it eliminates the need for the often conflicting correlations used to estimate the terminal velocity of single droplets and droplet swarms. The resulting mathematical model is complex due to the integral nature of the population balance equation. To reduce the complexity of this model, while maintaining most of the information drawn from the continuous population balance equation, the concept of the PSPM is used. Based on the multivariate population balance equation and the PSPM, a mathematical model is developed for general liquid extraction columns. The secondary particle can be envisaged as a fluid particle carrying information about the distribution as it evolves in space and time, while the primary particles carry the mean properties of the population, such as the total droplet concentration, the mean droplet diameter, the dispersed-phase holdup and so on. This information reflects the particle-particle interactions (breakage and coalescence) and transport (convection and diffusion). The developed model is discretized in space using a first-order upwind method, while a semi-implicit first-order scheme in time is used to simulate a pilot-plant RDC extraction column. The effect of the number of primary particles (classes) on the final predicted solution is investigated. Numerical results show that the solution converges quickly as the number of primary particles is increased. The terminal droplet velocity of the individual primary particles is found to be the most sensitive to the number of primary particles.
Other mean population properties, such as the mean droplet diameter, the mean holdup and the concentration profiles, are also found to converge along the column height as the number of primary particles is increased. The predicted steady-state profiles (droplet diameter, holdup and concentration) along a pilot RDC extraction column are compared to the experimental data, where good agreement is achieved.
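The sectional (class-based) treatment of the population balance can be illustrated with a deliberately simple special case: pure binary breakage on a geometric grid, where each droplet splits into two equal halves and therefore stays exactly on the grid, so mass is conserved without any redistribution step. This Python sketch is only a toy analogue of the fixed pivot and PSPM discretizations used in the thesis; the rates and time step are illustrative assumptions.

```python
def step_breakage(n, rates, dt):
    """One explicit-Euler step of pure binary breakage on a geometric
    grid v_i = v0 * 2**i: a droplet in bin i breaks into two droplets
    in bin i-1, so the mass v_i * n_i is conserved exactly."""
    new = list(n)
    for i in range(1, len(n)):
        broken = rates[i] * n[i] * dt          # number of breakage events
        new[i] -= broken
        new[i - 1] += 2.0 * broken             # two halves per event
    return new

def total_mass(n, v0=1.0):
    """Total dispersed-phase mass on the geometric grid."""
    return sum(c * v0 * 2 ** i for i, c in enumerate(n))

n = [0.0, 0.0, 0.0, 1.0]            # all droplets start in the largest bin
rates = [0.0, 0.5, 0.5, 0.5]        # the smallest bin does not break
m0 = total_mass(n)
for _ in range(1000):
    n = step_breakage(n, rates, dt=0.01)
```

Over time the number density shifts toward the small bins while the total mass stays fixed, which is exactly the conservation property a fixed-pivot scheme must enforce on a general (non-halving) grid by splitting daughter droplets between adjacent pivots.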
In addition, a robust and rigorous mathematical model based on the bivariate population balance equation is developed to predict the steady-state and dynamic behaviour of the interacting hydrodynamics and mass transfer in Kühni extraction columns. The developed model is extended to include the momentum balance for the calculation of the droplet velocity. The effects of step changes in the important input variables (such as volumetric flow rates, rotational speed, inlet solute concentrations, etc.) on the output variables (dispersed-phase holdup, mean droplet diameter and the concentration profiles) are investigated.
The last topic of this doctoral thesis is devoted to transient problems. The unsteady-state analysis reveals that the largest time constant (slowest response) is due to mass transfer. In contrast, the hydrodynamic response of the dispersed-phase holdup is very fast compared to the mass transfer, owing to the relatively fast motion of the dispersed droplets with respect to the continuous phase. The dynamic behaviour of the dispersed and continuous phases shows a lag time that increases away from the feed points of both phases. Moreover, the solute concentration response shows a highly nonlinear behaviour for both positive and negative step changes in the input variables. The simulation results are in good agreement with the experimental ones and show the usefulness of the model.
The scientific intention of this work was to synthesize and characterize new bidentate, tridentate and multidentate ligands and to apply them in homogeneous catalysis. For each type of ligand, new synthetic methods were developed. Starting from 1,1'-(pyridine-2,6-diyl)diethanone and dimethyl pyridine-2,6-dicarboxylate, different bispyrazolylpyridines were synthesized, and novel ruthenium complexes of the type (L)(NNN)RuCl2 could be obtained. The complexes with L = triphenylphosphine turned out to be highly efficient catalyst precursors for the transfer hydrogenation of aromatic ketones. Introduction of a butyl group in the 5-positions of the pyrazoles leads to a pronounced increase in catalytic activity.
To find a method for the synthesis of bispyrimidinepyridines, different reactants and conditions were tested, and it was found that these tridentate ligands can be obtained by mixing and grinding the tetraketone with guanidinium carbonate and silica, which plays the role of a catalyst in this ring-closing reaction.
The bidentate 2-amino-4-(2-pyridinyl)pyrimidines were synthesized from different substrates according to the desired substituent on the pyrimidine ring.
Reacting these bidentate ligands with the ruthenium(II) precursor [(η6-cymene)Ru(Cl)(μ2-Cl)]2 gave cationic ruthenium(II) complexes of the type [(η6-cymene)Ru(Cl)(adpm)]Cl (adpm = chelating 2-amino-4-(2-pyridinyl)pyrimidine ligand). By stirring the freshly prepared complexes with either NaBPh4, NaBF4 or KPF6, the chloride anion was exchanged for other anions (BF4-, PF6-, BPh4-). Some of these ruthenium complexes showed very special activities in the transfer hydrogenation of ketones when reacted in the absence of base, which led to detailed investigations of the mechanism of this reaction. Based on the observed activities and with the help of ESI-MS experiments and DFT calculations, a mechanism was proposed for the transfer hydrogenation of acetophenone in the absence of base. It shows that, in the absence of base, a C-H bond activation at the pyrimidine ring should occur to activate the catalyst.
The palladium complexes of the bidentate N,N ligands were examined in coupling reactions. As expected, they did not show exceptional activities.
Multidentate ligands, having pyrimidine groups as relatively soft donors for late transition metals and simultaneously possessing a binding position for a hard Lewis acid, could be obtained using the newly synthesized bidentate and tridentate ligands.