Embedded systems have become ubiquitous in everyday life, especially in the automotive industry. New applications challenge their design by introducing a class of problems based on a detailed analysis of the environmental situation. Situation analysis systems rely on models and algorithms from the domain of computational geometry. The basic model is usually a Euclidean plane containing polygons that represent the objects of the environment. Typical implementations of computational geometry algorithms cannot be used directly in safety-critical systems: first, a strict analysis of their correctness is indispensable, and second, nonfunctional requirements with respect to the limited resources must be considered. This thesis proposes a layered approach to a polygon-processing system. On top of rational numbers, a geometry kernel is formalised first. Subsequently, geometric primitives form a second layer of abstraction that is used for plane-sweep and polygon algorithms. These layers not only divide the whole system into manageable parts but also make it possible to model problems and reason about them at the appropriate level of abstraction. This structure is used for the verification as well as the implementation of the developed polygon-processing library.
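At the lowest layer, such a kernel rests on exact predicates over rational coordinates. A minimal sketch of one such predicate, using Python's exact fractions arithmetic as a stand-in for the formalised rational numbers (an illustration, not the verified library itself):

    from fractions import Fraction

    def orientation(p, q, r):
        """Sign of the cross product (q - p) x (r - p), computed exactly.

        Returns +1 for a left turn, -1 for a right turn, 0 if the three
        points are collinear. Exact rational arithmetic avoids the rounding
        errors that make floating-point geometry unsound in safety-critical
        settings.
        """
        det = (Fraction(q[0]) - Fraction(p[0])) * (Fraction(r[1]) - Fraction(p[1])) \
            - (Fraction(q[1]) - Fraction(p[1])) * (Fraction(r[0]) - Fraction(p[0]))
        return (det > 0) - (det < 0)

    print(orientation((0, 0), (1, 0), (1, 1)))  # 1: a left turn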
In the filling process of a car tank, the formation of foam plays an unwanted role, as it may prevent the tank from being completely filled or at least delay the filling. It is therefore of interest to optimize the geometry of the tank by numerical simulation in such a way that the influence of the foam is minimized. In this dissertation, we analyze the behaviour of the foam mathematically on the mesoscopic scale, that is, for single lamellae. The most important goals are, on the one hand, to gain a deeper understanding of the interaction of the relevant physical effects and, on the other hand, to obtain a model for the simulation of the decay of a lamella that can be integrated into a global foam model. In the first part of this work, we give a short introduction to the physical properties of foam and find that the Marangoni effect is the main cause of its stability. We then develop a mathematical model for the simulation of the dynamical behaviour of a lamella, based on an asymptotic analysis exploiting the special geometry of the lamella. The result is a system of nonlinear partial differential equations (PDEs) of third order in two spatial dimensions and one time dimension. In the second part, we analyze this system mathematically and prove an existence and uniqueness result for a simplified case. For some special parameter domains the system can be further simplified, and in some cases explicit solutions can be derived. In the last part of the dissertation, we solve the system using a finite element approach and discuss the results in detail.
The detection and characterisation of undesired lead structures on shaft surfaces is a concern in the production and quality control of rotary shaft lip-type sealing systems. Potential lead structures are generally divided into macro and micro lead based on their characteristics and formation. Macro lead measurement methods exist and are widely applied. This work describes a method to characterise micro lead on ground shaft surfaces. Micro lead is the deviation of the main orientation of the ground micro texture from the circumferential direction. Assessing the orientation of microscopic structures with arc-minute accuracy with respect to the circumferential direction requires exact knowledge of both the shaft's orientation and the direction of the surface texture. The shaft's circumferential direction is found by calibration. Measuring systems and calibration procedures capable of calibrating shaft axis orientation with high accuracy and low uncertainty are described. The measuring systems employ areal-topographic measuring instruments suited for evaluating texture orientation. A dedicated evaluation scheme for texture orientation is based on the Radon transform of these topographies and is parametrised for the application. Combining the calibration of the circumferential direction with the evaluation of texture orientation, the method enables the measurement of micro lead on ground shaft surfaces.
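A hedged sketch of the core evaluation idea, assuming the topography is given as a 2D height map (the parametrised scheme of the work is more elaborate): projections taken along the texture direction keep the structure coherent, so the variance of the Radon projection peaks at the dominant orientation.

    import numpy as np
    from skimage.transform import radon

    def texture_orientation(topography, angles=None):
        """Dominant texture direction of a surface height map, in degrees.

        The variance of the Radon projection over the projection position
        is maximal when projecting along the texture direction.
        """
        if angles is None:
            angles = np.linspace(0.0, 180.0, 1800, endpoint=False)  # 0.1 deg grid
        sinogram = radon(topography - topography.mean(), theta=angles, circle=False)
        return angles[np.argmax(sinogram.var(axis=0))]

Reaching arc-minute accuracy would additionally require a fine angular sampling and interpolation around the variance peak.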
1,3-Diynes are frequently found as an important structural motif in natural products, pharmaceuticals and bioactive compounds, electronic and optical materials, and supramolecular chemistry. Copper and palladium complexes are widely used to prepare 1,3-diynes by homocoupling of terminal alkynes, whereas the potential of nickel complexes for this reaction is essentially unexplored. Although a detailed study of the reported nickel-acetylene chemistry has not been carried out, a generalized mechanism featuring a nickel(II)/nickel(0) catalytic cycle has been proposed. In the present work, the mechanism of the nickel-mediated homocoupling of terminal alkynes is investigated in detail through the isolation and/or characterization of key intermediates from both stoichiometric and catalytic reactions. A nickel(II) complex [Ni(L-N4Me2)(MeCN)2](ClO4)2 (1) containing the tetradentate ligand N,N′-dimethyl-2,11-diaza[3.3](2,6)pyridinophane (L-N4Me2) was used as catalyst for the homocoupling of terminal alkynes with oxygen as oxidant at room temperature. A series of dinuclear nickel(I) complexes bridged by a 1,3-diyne ligand was isolated from the stoichiometric reaction between [Ni(L-N4Me2)(MeCN)2](ClO4)2 (1) and lithium acetylides. The dinuclear nickel(I)-diyne complexes [{Ni(L-N4Me2)}2(RC4R)](ClO4)2 (2) were characterized by X-ray crystal structures, various spectroscopic methods, SQUID measurements and DFT calculations. These complexes not only represent key intermediates in the aforesaid catalytic reaction but are also the first structurally characterized dinuclear nickel(I)-diyne complexes. In addition, radical-trapping and low-temperature UV-Vis-NIR experiments on the formation of the dinuclear nickel(I)-diyne confirm that the reduction of nickel(II) to nickel(I) and the C-C bond formation of the 1,3-diyne follow a non-radical, concerted mechanism. Furthermore, spectroscopic investigation of the reactivity of the dinuclear nickel(I)-diyne complex towards molecular oxygen confirmed the formation of a mononuclear nickel(I)-diyne species [Ni(L-N4Me2)(RC4R)]+ (4) and a mononuclear nickel(III)-peroxo species [Ni(L-N4Me2)(O2)]+ (5), which were converted to free 1,3-diyne and an unstable dinuclear nickel(II) species [{Ni(L-N4Me2)}2(O2)]2+ (6). A mononuclear nickel(I)-alkyne complex [Ni(L-N4Me2)(PhC2Ph)](ClO4).MeOH (3) and the mononuclear nickel(III)-peroxo species [Ni(L-N4Me2)(O2)]+ (5) were isolated/generated and characterized to confirm the formulation of the aforementioned mononuclear nickel(I)-diyne and nickel(III)-peroxo species. Spectroscopic experiments on the catalytic reaction mixture also confirm the presence of the aforesaid intermediates. The results of both the stoichiometric and the catalytic reactions suggest an intriguing mechanism involving nickel(II)/nickel(I)/nickel(III) oxidation states, in contrast to the reported nickel(II)/nickel(0) catalytic cycle. These findings are expected to open a new paradigm for nickel-catalyzed organic transformations.
Nowadays, accounting, charging and billing of users' network resource consumption are commonly used to facilitate reasonable network usage, control congestion, allocate cost, gain revenue, etc. In traditional IP traffic accounting systems, IP addresses are used to identify the corresponding consumers of network resources. However, there are situations in which IP addresses cannot identify users uniquely, for example in multi-user systems. In these cases, network resource consumption can only be ascribed to the owners of the hosts instead of the real users who consumed the network resources, so accurate accountability in such systems is practically impossible. This is a flaw of the traditional IP-address-based IP traffic accounting technique. This dissertation proposes a user-based IP traffic accounting model that facilitates collecting network resource usage information on a per-user basis. With user-based IP traffic accounting, IP traffic can be distinguished not only by IP addresses but also by users. Three different schemes that realize the user-based IP traffic accounting mechanism are discussed in detail. The in-band scheme utilizes the IP header to convey the user information of the corresponding IP packet. An Accounting Agent residing in the measured host intercepts IP packets passing through it, identifies their users and inserts the user information into the packets. With this mechanism, a meter located at a key position in the network can intercept the IP packets tagged with user information and extract not only statistical information but also IP addresses and user information from them to generate accounting records with user information. The out-of-band scheme is the counterpart of the in-band scheme. It also uses an Accounting Agent to intercept IP packets and identify the users of IP traffic; however, the user information is transferred through a separate channel, distinct from the transmission of the corresponding IP packets. The Multi-IP scheme provides a different solution for identifying the users of IP traffic: it assigns each user in a measured host a unique IP address, so that an IP address identifies a user without ambiguity. This way, traditional IP-address-based accounting techniques can be applied to achieve user-based IP traffic accounting. A prototype system developed according to the out-of-band scheme is also introduced, and the application of the user-based IP traffic accounting model in distributed computing environments is discussed.
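A minimal sketch of the data flow in the out-of-band scheme (all field and type names are hypothetical): the meter reports per-flow statistics at the IP level, the Accounting Agent reports user mappings over the separate channel, and the two are joined into user-based accounting records.

    from dataclasses import dataclass

    @dataclass
    class FlowRecord:
        """Per-flow statistics as collected by the meter (IP level only)."""
        src_ip: str
        dst_ip: str
        src_port: int
        dst_port: int
        protocol: str
        byte_count: int

    @dataclass
    class UserMapping:
        """Message sent by the Accounting Agent over the separate channel."""
        src_ip: str
        src_port: int
        protocol: str
        user: str

    def attribute(flow: FlowRecord, mappings: list[UserMapping]) -> str:
        """Join meter data with agent data to attribute traffic to a user."""
        for m in mappings:
            if (m.src_ip, m.src_port, m.protocol) == (flow.src_ip, flow.src_port, flow.protocol):
                return m.user
        return "unknown"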
The present situation of control engineering in the context of automated production can be described as a tension between its desired outcome and the consideration it actually receives. On the one hand, the share of control engineering compared to the other engineering domains has increased significantly within the last decades due to rising degrees of automation of production processes and equipment. On the other hand, the control engineering domain is still underrepresented within the production engineering process. A further limiting factor is the lack of methods and tools to decrease the software engineering effort and to permit the development of innovative automation applications that ideally support the business requirements.
This thesis addresses this challenging situation through the development of a new control engineering methodology. The foundation is formed by concepts from computer science that promote structuring and abstraction mechanisms for software development. In this context, the key sources for this thesis are the paradigm of Service-oriented Architecture and concepts from Model-driven Engineering. To mold these concepts into an integrated engineering procedure, ideas from Systems Engineering are applied. The overall objective is an engineering methodology that improves the efficiency of control engineering through a higher adaptability of control software and decreased programming effort by reuse.
A Multi-Phase Flow Model Incorporated with Population Balance Equation in a Meshfree Framework
(2011)
This study deals with the numerical solution of a meshfree coupled model of Computational Fluid Dynamics (CFD) and the Population Balance Equation (PBE) for liquid-liquid extraction columns. In modeling the coupled hydrodynamics and mass transfer in liquid extraction columns, one encounters a multidimensional population balance equation that cannot be fully resolved numerically within a time reasonable for steady-state or dynamic simulations. For this reason, there is an obvious need for a new liquid extraction model that captures all the essential physical phenomena and is still tractable from a computational point of view. This thesis discusses a new model that focuses on the discretization of the external (spatial) and internal coordinates such that the computational time is drastically reduced. For the internal coordinates, the concept of the multi-primary particle method, as a special case of the Sectional Quadrature Method of Moments (SQMOM), is used to represent the droplet internal properties. This model is capable of conserving the most important integral properties of the distribution, namely the total number, solute and volume concentrations, and reduces the computational time compared to classical finite difference methods, which require many grid points to conserve the desired physical quantities. On the other hand, due to the discrete nature of the dispersed phase, a meshfree Lagrangian particle method, the Finite Pointset Method (FPM), is used to discretize the spatial domain (the extraction column height). This method avoids the extremely difficult discretization of the convective term with classical finite volume methods, which require many grid points to capture the moving fronts propagating along the column height.
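For orientation, a spatially distributed population balance equation has the generic transport form (written here for a single internal coordinate, the droplet size \(d\); the multidimensional model of the thesis follows the same pattern with additional internal coordinates):

\[
\frac{\partial n(d; x, t)}{\partial t} + \frac{\partial}{\partial x}\bigl(u_d(d; x, t)\, n(d; x, t)\bigr) = S(d; x, t),
\]

where \(n\) is the number density of droplets, \(u_d\) the dispersed-phase velocity along the column height \(x\), and \(S\) collects the birth and death terms due to droplet breakage and coalescence.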
A Multi-Sensor Intelligent Assistance System for Driver Status Monitoring and Intention Prediction
(2017)
Advanced sensing systems, sophisticated algorithms, and increasing computational resources continuously enhance advanced driver assistance systems (ADAS). To date, although some vehicle-based approaches to driver fatigue/drowsiness detection have been realized and deployed, objectively and reliably detecting the fatigue/drowsiness state of a driver without compromising the driving experience remains challenging, and the choice of input sensorial information in the state of the art is generally limited. On the other hand, smart and safe driving, as representative future trends in the automotive industry worldwide, increasingly demands new dimensions of human-vehicle interaction, as well as the associated perception of the driver's behavioral and physiological data. Thus, the goal of this research work is to investigate the employment of general and custom 3D-CMOS sensing concepts for driver status monitoring, and to explore the improvement gained by merging/fusing this information with other salient customized information sources for robustness and reliability. This thesis presents an effective multi-sensor approach with novel features to driver status monitoring and intention prediction aimed at drowsiness detection, based on a multi-sensor intelligent assistance system -- DeCaDrive, which is implemented on an integrated soft-computing system with multi-sensing interfaces in a simulated driving environment. Utilizing active illumination, the IR depth camera of the realized system provides rich facial and body features in 3D in a non-intrusive manner. In addition, a steering angle sensor, a pulse rate sensor, and an embedded impedance spectroscopy sensor are incorporated to aid in the detection/prediction of the driver's state and intention. A holistic design methodology for ADAS encompassing both driver- and vehicle-based approaches to driver assistance is discussed as well. Multi-sensor data fusion and hierarchical SVM techniques are used in DeCaDrive to classify driver drowsiness levels, based on which a warning can be issued to prevent possible traffic accidents. The realized DeCaDrive system achieves up to 99.66% classification accuracy on the defined drowsiness levels and exhibits promising features such as head/eye tracking, blink detection, and gaze estimation that can be utilized in human-vehicle interactions. However, the driver's state of "microsleep" can hardly be reflected in the sensor features of the implemented system; general improvements in the sensitivity of the sensory components and in the system's computational power are required to address this issue. Possible new features and development considerations for DeCaDrive are discussed as well, with the aim of gaining market acceptance in the future.
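A hedged sketch of the hierarchical SVM stage (the two-level split and the feature layout are illustrative assumptions, not the exact DeCaDrive architecture): a first SVM separates awake from drowsy samples of the fused feature vectors, and a second SVM refines the drowsy samples into levels.

    import numpy as np
    from sklearn.svm import SVC

    class HierarchicalDrowsinessSVM:
        """Two-stage SVM cascade: awake vs. drowsy, then drowsiness level."""

        def __init__(self):
            self.stage1 = SVC(kernel="rbf")  # awake (level 0) vs. drowsy (level > 0)
            self.stage2 = SVC(kernel="rbf")  # distinguishes the drowsy levels 1..k

        def fit(self, X, levels):
            X, levels = np.asarray(X), np.asarray(levels)
            self.stage1.fit(X, (levels > 0).astype(int))
            drowsy = levels > 0
            self.stage2.fit(X[drowsy], levels[drowsy])
            return self

        def predict(self, X):
            X = np.asarray(X)
            pred = self.stage1.predict(X).astype(int)
            drowsy = pred == 1
            if drowsy.any():
                pred[drowsy] = self.stage2.predict(X[drowsy])
            return pred

Each fused feature vector would collect, for instance, blink and gaze statistics from the depth camera alongside steering-angle and pulse-rate features (an assumed layout).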
The simulation of cutting processes challenges established methods due to large deformations and topological changes. In this work, a particle finite element method (PFEM) is presented, which combines the benefits of discrete modeling techniques and methods based on continuum mechanics. A crucial part of the PFEM is the detection of the boundary of a set of particles. The impact of this boundary detection method on the structural integrity is examined, and a relation between the key parameter of the method and the eigenvalues of strain tensors is elaborated. The influence of important process parameters on the cutting force is studied and a comparison to an empirical relation is presented.
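Boundary detection for particle clouds is commonly realised with the alpha-shape method in the PFEM literature; a minimal 2D sketch under that assumption follows (the relation of the key parameter to strain-tensor eigenvalues elaborated in the work is not reproduced here).

    import numpy as np
    from scipy.spatial import Delaunay

    def boundary_edges(points, alpha):
        """Alpha-shape boundary of a 2D particle cloud.

        Keeps Delaunay triangles whose circumradius is below alpha; the
        boundary consists of the edges shared by exactly one kept triangle.
        """
        points = np.asarray(points, dtype=float)
        tri = Delaunay(points)
        count = {}
        for simplex in tri.simplices:
            a, b, c = points[simplex]
            la = np.linalg.norm(b - c)
            lb = np.linalg.norm(c - a)
            lc = np.linalg.norm(a - b)
            area = 0.5 * abs((b - a)[0] * (c - a)[1] - (b - a)[1] * (c - a)[0])
            if area == 0.0 or la * lb * lc / (4.0 * area) > alpha:
                continue  # triangle too large: treated as lying outside the body
            for i, j in ((0, 1), (1, 2), (2, 0)):
                edge = tuple(sorted((simplex[i], simplex[j])))
                count[edge] = count.get(edge, 0) + 1
        return [edge for edge, n in count.items() if n == 1]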
The dissertation is concerned with the numerical solution of Fokker-Planck equations in high dimensions arising in the study of the dynamics of polymeric liquids. Traditional methods based on a tensor product structure are not applicable in high dimensions, for the number of nodes required to yield a fixed accuracy increases exponentially with the dimension; a phenomenon often referred to as the curse of dimension. Particle methods, or finite point set methods, are known to break the curse of dimension. The Monte Carlo method (MCM) applied to such problems is \(1/\sqrt{N}\) accurate, where N is the cardinality of the point set considered, independent of the dimension. Deterministic versions of the Monte Carlo method, called quasi-Monte Carlo (QMC) methods, are quite effective in integration problems, where an accuracy of the order of \(1/N\) can be achieved, up to a logarithmic factor. However, such a replacement cannot be carried over directly to particle simulations, due to the correlation among the quasi-random points. The method proposed by Lecot (C. Lecot and F. E. Khettabi, Quasi-Monte Carlo simulation of diffusion, Journal of Complexity, 15 (1999), pp. 342-359) is the only known QMC approach, but it not only leads to large particle numbers; the proven order of convergence is also \(1/N^{2s}\) in dimension s. We modify the method presented there in such a way that the new method works with reasonable particle numbers even in high dimensions and has a better order of convergence. Though the provable order of convergence is \(1/\sqrt{N}\), the results show less variance, and thus the proposed method still slightly outperforms the standard MCM.
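A small illustration of the two convergence behaviours on a plain integration problem, using scrambled Sobol points from scipy (the particle method of the thesis itself is not reproduced here):

    import numpy as np
    from scipy.stats import qmc

    s, N = 6, 2**12  # dimension and number of points

    def f(x):
        # Smooth test integrand on [0,1]^s with exact integral 1.
        return np.prod(1.0 + 0.5 * (x - 0.5), axis=1)

    rng = np.random.default_rng(0)
    mc_est = f(rng.random((N, s))).mean()        # error ~ 1/sqrt(N)
    sobol = qmc.Sobol(d=s, scramble=True, seed=0)
    qmc_est = f(sobol.random(N)).mean()          # error ~ 1/N up to log factors

    print(abs(mc_est - 1.0), abs(qmc_est - 1.0))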
This thesis is concerned with a phase field model for martensitic transformations in metastable austenitic steels. Within the phase field approach, an order parameter is introduced to indicate whether the present phase is austenite or martensite. The evolving microstructure is described by the evolution of the order parameter, which is assumed to follow the time-dependent Ginzburg-Landau equation. The elastic phase field model is enhanced in two different ways to take further phenomena into account. First, dislocation movement is considered within a crystal plasticity setting. Second, the elastic model for martensitic transformations is combined with a phase field model for fracture. Finite element simulations are used to study separately the individual effects that contribute to the microstructure formation.
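For reference, the time-dependent Ginzburg-Landau equation evolves the order parameter \(\phi\) as a gradient flow of a free energy functional \(\mathcal{F}\); in generic form (the concrete separation, gradient and elastic contributions used in the thesis are not reproduced here),

\[
\dot{\phi} = -M \,\frac{\delta \mathcal{F}}{\delta \phi},
\qquad
\mathcal{F}[\phi, \boldsymbol{\varepsilon}] = \int_{\Omega} \Bigl( f(\phi) + \frac{\kappa}{2}\, \lvert \nabla \phi \rvert^{2} + W(\boldsymbol{\varepsilon}, \phi) \Bigr)\, \mathrm{d}V,
\]

with mobility \(M > 0\), so that \(\mathcal{F}\) decreases monotonically along the evolution.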
In this thesis, we investigate a statistical model for precipitation time series recorded at a single site. The sequence of observations consists of rainfall amounts aggregated over time periods of fixed duration. As the properties of this sequence depend strongly on the length of the observation intervals, we follow the approach of Rodriguez-Iturbe et al. [1] and use an underlying model for rainfall intensity in continuous time. In this idealized representation, rainfall occurs in clusters of rectangular cells, and each observation is treated as the sum of cell contributions during a given time period. Unlike the previous work, we use a multivariate lognormal distribution for the temporal structure of the cells and clusters. After formulating the model, we develop a Markov chain Monte Carlo algorithm for fitting it to a given data set. A particular problem we have to deal with is the need to estimate the unobserved intensity process alongside the parameters of interest. The performance of the algorithm is tested on artificial data sets generated from the model. [1] I. Rodriguez-Iturbe, D. R. Cox, and V. Isham. Some models for rainfall based on stochastic point processes. Proc. R. Soc. Lond. A, 410:269-288, 1987.
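A minimal random-walk Metropolis sketch of the fitting idea (the algorithm of the thesis additionally updates the latent intensity process in each sweep, which is omitted here):

    import numpy as np

    def metropolis(log_post, theta0, n_iter=10000, step=0.1, seed=0):
        """Random-walk Metropolis sampler for a log-posterior density."""
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float)
        lp = log_post(theta)
        chain = np.empty((n_iter, theta.size))
        for i in range(n_iter):
            proposal = theta + step * rng.standard_normal(theta.size)
            lp_prop = log_post(proposal)
            # Accept with probability min(1, posterior ratio).
            if np.log(rng.random()) < lp_prop - lp:
                theta, lp = proposal, lp_prop
            chain[i] = theta
        return chain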
The main goal of this work is to model size effects as they occur in materials with an intrinsic microstructure when the considered specimens are not orders of magnitude larger than this microstructure. The micromorphic continuum theory, as a generalized continuum theory, is well suited to account for the occurring size effects: additional degrees of freedom capture the independent deformations of the microstructure, while providing additional balance equations. In this thesis, the deformational and configurational mechanics of the micromorphic continuum is exploited in a finite-deformation setting. A constitutive and numerical framework is developed, in which the material-force method is also advanced. Furthermore, the multiscale modelling of thin material layers with a heterogeneous substructure is of interest. To this end, a computational homogenization framework is developed, which allows the constitutive relation between traction and separation to be obtained numerically, in a nested solution scheme, from the properties of the underlying micromorphic mesostructure. Within the context of micromorphic continuum mechanics, concepts of both gradient and micromorphic plasticity are developed by systematically varying key ingredients of the respective formulations.
Interest in the exploration of new hydrocarbon fields as well as deep geothermal reservoirs is growing steadily. The analysis of seismic data for such exploration projects is very complex and demands from interpreters deep knowledge of geology, geophysics, petrology, etc., as well as advanced tools able to recover particular properties. Here, wavelet techniques have had huge success in signal processing, data compression, noise reduction, etc.: they break complicated functions into many simple pieces at different scales and positions, which makes the detection and interpretation of local events significantly easier.
In this thesis, mathematical methods and tools are presented that are applicable to seismic data postprocessing in regions with non-smooth boundaries. We provide wavelet techniques that relate to the solutions of the Helmholtz equation, with seismic data analysis as the application of interest. A similar idea, constructing wavelet functions from the limit and jump relations of the layer potentials, was first suggested by Freeden and his Geomathematics Group.
The particular difficulty in such approaches is the formulation of limit and jump relations for surfaces used in seismic data processing, i.e., non-smooth surfaces in various topologies (for example, uniform and quadratic). The essential idea is to replace the concept of parallel surfaces, known for a smooth regular surface, by appropriate substitutes for non-smooth surfaces. Using the jump and limit relations formulated for regular surfaces, Helmholtz wavelets can be introduced that recursively approximate functions on surfaces with edges and corners. A notable point is that the construction of the wavelets allows an efficient implementation in the form of a tree algorithm for the fast numerical computation of functions on the boundary.
To demonstrate the applicability of the Helmholtz FWT, we study a seismic image obtained by reverse time migration based on a finite-difference implementation. Regarding the requirements of such migration algorithms in filtering and denoising, the wavelet decomposition is successfully applied to this image for the attenuation of low-frequency artifacts and noise. An essential feature is the space localization property of the Helmholtz wavelets, which numerically enables a pointwise discussion of the velocity field. Moreover, the multiscale analysis reveals additional geological information from optical features.
The present thesis describes the development and validation of a viscosity adaption method for the numerical simulation of non-Newtonian fluids on the basis of the Lattice Boltzmann Method (LBM), as well as the development and verification of the related software bundle SAM-Lattice.
By now, Lattice Boltzmann Methods are established as an alternative approach to classical computational fluid dynamics methods. The LBM has been shown to be an accurate and efficient tool for the numerical simulation of weakly compressible or incompressible fluids. Fields of application range from turbulent simulations through thermal problems to acoustic calculations, among others. The transient nature of the method and the need for a regular, grid-based, non-body-conformal discretization make the LBM ideally suited for simulations involving complex solids. Such geometries are common, for instance, in the food processing industry, where fluids are mixed by static mixers or agitators. Those fluid flows are often laminar and non-Newtonian.
This work is motivated by the immense practical use of the Lattice Boltzmann Method, which is limited by stability issues. The stability of the method is mainly influenced by the discretization and the viscosity of the fluid. Thus, simulations of non-Newtonian fluids, whose kinematic viscosity depends on the shear rate, are problematic. Several authors have shown that the LBM is capable of simulating such fluids. However, the vast majority of the simulations in the literature are carried out for simple geometries and/or moderate shear rates, where the LBM is still stable. Special care has to be taken to keep practical non-Newtonian Lattice Boltzmann simulations stable. A straightforward way is to truncate the modeled viscosity range by numerical stability criteria. This is an effective approach, but from the physical point of view the viscosity bounds are chosen arbitrarily. Moreover, these bounds depend on and vary with the grid and time step size and, therefore, with the simulation Mach number, which is freely chosen at the start of the simulation. Consequently, the modeled viscosity range may not fit the actual range of the physical problem, because the correct simulation Mach number is unknown a priori. A way around this is to perform precursor simulations on a fixed grid to determine a possible time step size and simulation Mach number, respectively. These precursor simulations can be time consuming and expensive, especially for complex cases and a number of operating points. This makes the LBM unattractive for practical simulations of non-Newtonian fluids.
The essential novelty of the method developed in the course of this thesis is that the numerically modeled viscosity range is consistently adapted to the actual physically exhibited viscosity range by changing the simulation time step and the simulation Mach number, respectively, while the simulation is running. The algorithm is robust, independent of the Mach number the simulation was started with, and applicable to stationary as well as transient flows. The method for the viscosity adaption will be referred to as the "viscosity adaption method (VAM)", and its combination with the LBM leads to the "viscosity adaptive LBM (VALBM)".
Besides the introduction of the VALBM, a goal of this thesis is to offer assistance, in the spirit of a theory guide, to students and research assistants concerning the theory of the Lattice Boltzmann Method and its implementation in SAM-Lattice. In Chapter 2, the mathematical foundation of the LBM is given and the route from the BGK approximation of the Boltzmann equation to the Lattice Boltzmann (BGK) equation is delineated in detail.
The derivation is restricted to isothermal flows. Restrictions of the method, such as the limitation to low Mach number flows, are highlighted, and the accuracy of the method is discussed.
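For reference, the Lattice Boltzmann (BGK) equation that this derivation arrives at reads, in its standard stream-and-collide form (notation as commonly used, not taken from the thesis):

\[
f_i(\mathbf{x} + \mathbf{c}_i \Delta t,\, t + \Delta t) - f_i(\mathbf{x}, t)
= -\frac{\Delta t}{\tau}\,\bigl( f_i(\mathbf{x}, t) - f_i^{\mathrm{eq}}(\mathbf{x}, t) \bigr),
\]

where the \(f_i\) are the particle distribution functions along the discrete velocities \(\mathbf{c}_i\), \(\tau\) is the relaxation time, and the kinematic viscosity follows as \(\nu = c_s^2 (\tau - \Delta t / 2)\) with the lattice speed of sound \(c_s\).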
SAM-Lattice is a C++ software bundle developed by the author and his colleague Dipl.-Ing. Andreas Schneider. It is a highly automated package for the simulation of isothermal flows of incompressible or weakly compressible fluids in 3D on the basis of the Lattice Boltzmann Method. At the time of writing of this thesis, SAM-Lattice comprises five components. The main components are the highly automated lattice generator SamGenerator and the Lattice Boltzmann solver SamSolver. Postprocessing is done with ParaSam, our extension of the open-source visualization software ParaView. Additionally, domain decomposition for MPI parallelism is done by SamDecomposer, which makes use of the graph partitioning library MeTiS. Finally, all mentioned components can be controlled through a user-friendly GUI (SamLattice) implemented by the author using Qt, including features to visually track output data.
In Chapter 3, some fundamental aspects of the implementation of the main components, including the corresponding flow charts, are discussed. Further details on the implementation are given in the comprehensive programmer's guides to SamGenerator and SamSolver.
In order to ensure the functionality of the implementation of SamSolver, the solver is verified in Chapter 4 for Stokes's first problem, the suddenly accelerated plate, and for Stokes's second problem, the oscillating plate, both for Newtonian fluids. Non-Newtonian fluids are modeled in SamSolver with the power-law model according to Ostwald-de Waele. The implementation for non-Newtonian fluids is verified for the Hagen-Poiseuille channel flow in conjunction with a convergence analysis of the method. At the same time, the local grid refinement, as implemented in SamSolver, is verified. Finally, higher-order boundary conditions are verified for the 3D Hagen-Poiseuille pipe flow for both Newtonian and non-Newtonian fluids.
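The Ostwald-de Waele power law mentioned above relates the apparent viscosity to the shear rate; in its usual form,

\[
\eta(\dot{\gamma}) = K\, \dot{\gamma}^{\,n-1},
\]

with consistency index \(K\) and flow behaviour index \(n\) (\(n < 1\): shear thinning, \(n > 1\): shear thickening). This is why the viscosity range to be modeled grows with the range of shear rates occurring in the flow.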
In Chapter 5, the theory of the viscosity adaption method is introduced. For the adaption process, a target collision frequency or target simulation Mach number must be chosen, and the distributions must be rescaled according to the modified time step size. A convenient choice is one of the stability bounds. The time step size for the adaption step is deduced from the target collision frequency \(\Omega_t\) and the currently minimal or maximal shear rate in the system, while obeying auxiliary conditions for the simulation Mach number. The adaption is done in the collision step of the Lattice Boltzmann algorithm. We use the transformation matrices of the MRT model to map from distribution space to moment space and vice versa. The actual scaling of the distributions is conducted on the back mapping, because we use the transformation matrix on the basis of the new adaption time step size. This is followed by an additional rescaling of the non-equilibrium part of the distributions, owing to the form of the definition of the discrete stress tensor in the LBM context. For that reason, the VAM is applicable to the SRT model as well as the MRT model, with virtually no extra cost in the latter case. Also in Chapter 5, the multi-level treatment is discussed.
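A strongly simplified sketch of this adaption logic (names are hypothetical; the rescaling of the distributions in moment space and the Mach number auxiliary conditions are omitted): given the target collision frequency and the extreme shear rate currently present, a new time step is chosen so that the extreme physical viscosity maps exactly onto the target.

    def adapt_time_step(omega_target, gamma_extreme, K, n, dx, cs2=1.0 / 3.0):
        """Simplified viscosity-adaption step (hypothetical names).

        omega_target  : target collision frequency, e.g. near a stability bound
        gamma_extreme : currently minimal or maximal shear rate in the system
        K, n          : Ostwald-de Waele parameters, nu = K * gamma**(n - 1)
        dx            : lattice spacing
        Returns the new time step that places the extreme physical viscosity
        exactly at the target collision frequency.
        """
        nu_lattice = cs2 * (1.0 / omega_target - 0.5)   # lattice-unit viscosity at omega_target
        nu_physical = K * gamma_extreme ** (n - 1.0)    # extreme physical kinematic viscosity
        return nu_lattice * dx * dx / nu_physical       # from nu_phys = nu_lat * dx**2 / dt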
Depending on the target collision frequency and the target Mach number, the VAM can be used either to optimally exploit the viscosity range that can be modeled within the stability bounds or to drastically accelerate the simulation. This is shown in Chapter 6. The viscosity adaptive LBM is verified in the stationary case for the Hagen-Poiseuille channel flow and in the transient case for the Womersley flow, i.e., the pulsatile 3D Hagen-Poiseuille pipe flow. Although the VAM is used here for fluids that can be modeled with the power-law approach, the implementation of the VALBM is straightforward for other non-Newtonian models, e.g., the Carreau-Yasuda or Cross model. In the same chapter, the VALBM is validated for the case of a propeller viscosimeter developed at the chair SAM. To this end, experimental data of the torque on the impeller for three shear-thinning non-Newtonian liquids serve for the validation. The VALBM shows excellent agreement with the experimental data for all of the investigated fluids and in every operating point. For comparison, a series of standard LBM simulations is carried out with different simulation Mach numbers, which partly show errors of several hundred percent. Moreover, in Chapter 7, a sensitivity analysis of the parameters used within the VAM is conducted for the simulation of the propeller viscosimeter.
Finally, the accuracy of non-Newtonian Lattice Boltzmann simulations with the SRT and the MRT model is analyzed in detail. Previous work for Newtonian fluids indicates that, depending on the numerical value of the collision frequency \(\Omega\), additional artificial viscosity is introduced by the finite difference scheme, which negatively influences the accuracy. For the non-Newtonian case, an error estimate in the form of a functional is derived on the basis of a series expansion of the Lattice Boltzmann equation. This functional can be solved analytically for the case of the Hagen-Poiseuille channel flow of non-Newtonian fluids. The estimate of the error minimum is excellent in regions where the \(\Omega\) error is the dominant source of error, as opposed to the compressibility error.
The result of this dissertation is a verified and validated software bundle on the basis of the viscosity adaptive Lattice Boltzmann Method. The work restricts itself to the simulation of isothermal, laminar flows with small Mach numbers. As further research goals, the testing of the VALBM with the minimal error estimate and the investigation of the VALBM for turbulent flows are suggested.
This dissertation is intended to transport the theory of Serre functors into the context of A-infinity-categories. We begin with an introduction to multicategories and closed multicategories, which form the framework in which the theory of A-infinity-categories is developed. We prove that (unital) A-infinity-categories constitute a closed symmetric multicategory. We define the notion of an A-infinity-bimodule similarly to Tradler and show that it is equivalent to an A-infinity-functor of two arguments that takes values in the differential graded category of complexes of k-modules, where k is a commutative ground ring. Serre A-infinity-functors are defined via A-infinity-bimodules, following ideas of Kontsevich and Soibelman. We prove that a unital A-infinity-category closed under shifts over a field admits a Serre A-infinity-functor if and only if its homotopy category admits an ordinary Serre functor. The proof uses categories and Serre functors enriched in the homotopy category of complexes of k-modules. Another important ingredient is an A-infinity-version of the Yoneda Lemma.
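For orientation, the classical notion being transported: a Serre functor on a k-linear category \(\mathcal{C}\) with finite-dimensional Hom spaces is an autoequivalence \(S\colon \mathcal{C} \to \mathcal{C}\) together with isomorphisms

\[
\operatorname{Hom}_{\mathcal{C}}(X, Y) \cong \operatorname{Hom}_{\mathcal{C}}(Y, S X)^{*},
\]

natural in \(X\) and \(Y\), where \((-)^{*}\) denotes the k-dual. The A-infinity-version replaces these isomorphisms by the data of an A-infinity-bimodule, as described above.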
In this work, highly accurate potential surfaces for one or more electronic states were computed with the MR-CI method for the molecules B3, B3- and C3+. All three molecules possess electronically degenerate Jahn-Teller states. In contrast to the alkali trimers investigated earlier, the conical intersection here lies so low that it must be taken into account in the vibrational analysis, and a diabatic treatment is therefore required. For the X<-1E' transition in B3, the agreement of the computed spectrum with the measured one could be improved considerably compared to the previously published results by using the larger VQZ basis set. For the computed 0-0 transition, no transition is observed in the measured spectrum; besides the good agreement of the other peaks, this conclusion is also supported by the T00 energy. The simple progression of the experimental X<-2E' transition in B3 could likewise be reproduced in good agreement. This simple and short progression results from the fact that there is practically no Jahn-Teller distortion and both component surfaces are nearly congruent. For the X<-1E' transition of B3-, a spectrum was also simulated, but no agreement with the measured transitions is found. Since the observed electron detachment energy lies only slightly above the electronic excitation energy, and in view of the strong X<-2E' absorptions of B3 in the same measurement, it remains open which structures are seen in the experiment. For C3+, a vibrational analysis was carried out for the E' ground state; experimental reference values are lacking in this case. However, the isomerization energy between the bent and linear geometries, debated for more than a decade, could now be fixed very precisely at 6.8 +- 0.5 kcal/mol. In a vibronic treatment including the zero-point energies, it reduces to 4.8 kcal/mol. Furthermore, the existence of a linear minimum was confirmed. C3+ also provides a very nice example of the entanglement of different local and global vibrational states, which leads to an irregular sequence of states. Concerning the reactivity of C3+, it has been observed that it is highest below 50 K and decreases markedly above that temperature. The vibrational analysis provides no explanation for this behaviour, since no thermal vibrational excitation can take place even up to room temperature.
The focus of the investigations was on bismuth-arene complexes of the series [(C6H6)BiCl3-n]n+ (n = 0 - 3) and [(MemC6H6-m)BiCl3] (m = 0 - 3). In addition, the lighter homologues [(C6H6)SbCl3] and [(C6H6)AsCl3] were studied in order to identify group trends, and the lead-arene complexes isoelectronic to the complexes [(C6H6)BiCl3-n]n+ (n = 1 - 3) were included in the considerations. Of principal interest is the complex [(C6H6)2Pb]2+, which can be regarded as the prototype of a bent-sandwich bis(arene) main-group-element complex. The structures of the neutral complexes and complex cations were optimized at the MP2(fc)/6-31+G(d)(C,H);SBKJC(d)(Bi,Cl) level. The interaction energy in [(C6H6)BiCl3] amounts to -23 kJ/mol (MP4). Considerations of the electron localization function and of the molecular orbitals contributed to the understanding of the bonding in the investigated arene complexes, and a crystal structure analysis confirmed the trends found in the calculations. The arene complexes with lighter central atoms are less stable; moreover, the stability increases slightly with the basicity of the arene ligand and strongly with the charge of the complexes. To investigate (B3LYP/6-311+G(d)) the bonding in the P4 ring of tetrakis(amino)-1λ5,3λ5-tetraphosphete and the P-P bond of its [2+2]-cycloreversion product, the electron localization functions, important molecular orbitals, bond orders, computed chemical shifts of the phosphorus nuclei and nucleus-independent chemical shifts were considered. Accordingly, a considerable π-bond contribution can be assumed for the P-P bond in the P4 ring of the tetraphosphete. The P-P bond in the [2+2]-cycloreversion product can be understood as an electronically unusual double bond shortened by Coulomb forces. In preparative work on functionalized aminoarsanes, a tetrakis(amino)diarsane with an extraordinarily long As-As bond (2.673(3) Å) was obtained some time ago. Quantum chemical calculations (B3LYP/6-31+G(d)) confirm and support this finding and remove the last doubts concerning the crystallographic determination of the bond length.
This thesis belongs to algebraic geometry and representation theory and establishes a relation between the two fields. We study the derived categories of flat degenerations of projective lines and elliptic curves, using the technique of matrix problems. The main result of this dissertation is the following theorem: THEOREM. Let X be a cycle of projective lines. Then there are three types of indecomposable objects in D^-(Coh_X):
- shifts of skyscraper sheaves at a regular point;
- bands B(w,m,lambda);
- strings S(w).
In a completely analogous way, one proves the tameness of the derived categories of many associative algebras.