Tropical intersection theory
(2010)

This thesis consists of five chapters. Chapter 1 contains the basics of the theory and is essential for the rest of the thesis; Chapters 2-5 are to a large extent independent of each other and can be read separately.

- Chapter 1: Foundations of tropical intersection theory. In this first chapter we set up the foundations of a tropical intersection theory covering many concepts and tools of its counterpart in algebraic geometry, such as affine tropical cycles, Cartier divisors, morphisms of tropical cycles, pull-backs of Cartier divisors, push-forwards of cycles and an intersection product of Cartier divisors and cycles. Afterwards, we generalize these concepts to abstract tropical cycles and introduce a concept of rational equivalence. Finally, we set up an intersection product of cycles and prove that every cycle is rationally equivalent to some affine cycle in the special case that our ambient cycle is R^n. We use this result to show that rational and numerical equivalence agree in this case and prove a tropical Bézout's theorem.
- Chapter 2: Tropical cycles with real slopes and numerical equivalence. In this chapter we generalize our definitions of tropical cycles to polyhedral complexes with non-rational slopes. We use this new definition to show that if our ambient cycle is a fan then every subcycle is numerically equivalent to some affine cycle. Finally, we restrict ourselves to cycles in R^n that are "generic" in some sense and study the concept of numerical equivalence in more detail.
- Chapter 3: Tropical intersection products on smooth varieties. We define an intersection product of tropical cycles on tropical linear spaces L^n_k and on other, related fans. Then, we use this result to obtain an intersection product of cycles on any "smooth" tropical variety. Finally, we use the intersection product to introduce a concept of pull-backs of cycles along morphisms of smooth tropical varieties and prove that this pull-back has all expected properties.
- Chapter 4: Weil and Cartier divisors under tropical modifications. First, we introduce "modifications" and "contractions" and study their basic properties. After that, we prove that under some further assumptions a one-to-one correspondence of Weil and Cartier divisors is preserved by modifications. In particular, we can prove that on any smooth tropical variety we have a one-to-one correspondence of Weil and Cartier divisors.
- Chapter 5: Chern classes of tropical vector bundles. We give definitions of tropical vector bundles and rational sections of tropical vector bundles. We use these rational sections to define the Chern classes of such a tropical vector bundle. Moreover, we prove that these Chern classes have all expected properties. Finally, we classify all tropical vector bundles on an elliptic curve up to isomorphism.

A prime motivation for using XML to directly represent pieces of information is the ability to support ad-hoc or 'schema-later' settings. In such scenarios, modeling data under loose data constraints is essential. Of course, the flexibility of XML comes at a price: the absence of a rigid, regular, and homogeneous structure makes many aspects of data management more challenging. Such malleable data formats can also lead to severe information quality problems, because the risk of storing inconsistent and incorrect data is greatly increased. A prominent example of such problems is the appearance of so-called fuzzy duplicates, i.e., multiple, non-identical representations of a real-world entity. Similarity joins, which correlate similar XML document fragments, can be used as core operators to support the identification of fuzzy duplicates. However, similarity assessment is especially difficult on XML datasets, because the structure, in addition to the textual content, may vary across document fragments representing the same real-world entity. Moreover, similarity computation is substantially more expensive for tree-structured objects and is thus a serious performance concern. This thesis describes the design and implementation of an effective, flexible, and high-performance XML-based similarity join framework. As main contributions, we present novel structure-conscious similarity functions for XML trees (considering XML structure either in isolation or in combination with textual information), mechanisms to select relevant information from XML trees and organize it into a format suitable for similarity calculation, and efficient algorithms for the large-scale identification of similar, set-represented objects.
Finally, we validate the applicability of our techniques by integrating our framework into a native XML database management system; in this context we address several issues around the integration of similarity operations into traditional database architectures.
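The set-based similarity join at the core of such a framework can be illustrated with a minimal sketch (a naive all-pairs version, not the thesis's optimized algorithms): token sets extracted from XML fragments are compared with the Jaccard coefficient, and pairs above a threshold are reported as candidate fuzzy duplicates.

```python
def jaccard(a, b):
    """Jaccard coefficient of two token sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def similarity_join(records, threshold):
    """Naive all-pairs set-similarity join; production systems prune
    the candidate space with prefix filtering and inverted indexes."""
    pairs = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            sim = jaccard(records[i], records[j])
            if sim >= threshold:
                pairs.append((i, j, sim))
    return pairs

# Two near-duplicate fragments and one unrelated record (illustrative data).
recs = [{"john", "smith", "berlin"},
        {"john", "smith", "munich"},
        {"data", "quality"}]
print(similarity_join(recs, 0.4))  # only the first two records match
```

The quadratic loop is only for exposition; the point of the thesis's algorithms is precisely to avoid comparing all pairs at scale.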

The optimal design of rotational production processes for glass wool manufacturing poses severe computational challenges to mathematicians, natural scientists and engineers. In this paper we focus exclusively on the spinning regime, where thousands of viscous thermal glass jets are formed by fast air streams. Homogeneity and slenderness of the spun fibers are the quality features of the final fabric. Their prediction requires the computation of the fluid-fiber interactions, which involves solving a complex three-dimensional multiphase problem with appropriate interface conditions. This is practically impossible due to the required high resolution and adaptive grid refinement. Therefore, we propose an asymptotic coupling concept. Treating the glass jets as viscous thermal Cosserat rods, we tackle the multiscale problem with the help of momentum (drag) and heat exchange models that are derived on the basis of slender-body theory and homogenization. A weak iterative coupling algorithm, based on the combination of commercial software and self-implemented code for the flow and rod solvers, respectively, then makes the simulation of the industrial process possible. For the boundary value problem of the rod we suggest, in particular, an adapted collocation-continuation method. Consequently, this work establishes a promising basis for future optimization strategies.

This work deals with the modeling and simulation of slender viscous jets exposed to gravity and rotation, as they occur in rotational spinning processes. In terms of slender-body theory, we show the asymptotic reduction of a viscous Cosserat rod to a string system for vanishing slenderness parameter. We propose two string models, i.e. inertial and viscous-inertial string models, that differ in their closure conditions and hence yield a boundary value problem and an interface problem, respectively. We investigate the existence regimes of the string models in the four-parameter space of Froude, Rossby and Reynolds numbers and jet length. The convergence regimes, where the respective string solution is the asymptotic limit of the rod, turn out to be disjoint and to cover nearly the whole parameter space. We explore the transition hyperplane and analytically derive the low and high Reynolds number limits. Numerical studies of the stationary jet behavior for different parameter ranges complete the work.

The purpose of exploration in the oil industry is to "discover" an oil-containing geological formation from exploration data. In the context of this PhD project, this oil-containing geological formation plays the role of a geometrical object, which may have any shape. The exploration data may be viewed as a "cloud of points", that is, a finite set of points related to the geological formation surveyed in the exploration experiment. Extensions of topological methodologies, such as homology, to point clouds are helpful in studying them qualitatively and are capable of resolving the underlying structure of a data set. Estimation of the topological invariants of the data space is a good basis for asserting the global features of the simplicial model of the data. For instance, the basic statistical notion of clustering corresponds to the dimension of the zeroth homology group of the data, and the statistics of Betti numbers can provide further connectivity information. This work presents a method for topological feature analysis of exploration data based on so-called persistent homology. Loosely, this is the homology of a growing space that captures the lifetimes of topological attributes in a multiset of intervals called a barcode. Constructions from algebraic topology enable us to transform the data, to distill it into persistent features, and then to understand how it is organized on a large scale, or at least to obtain low-dimensional information which can point to areas of interest. As part of this work, the algorithm for computing persistent Betti numbers via barcodes was implemented in the computer algebra system "Singular".
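In the zero-dimensional case the barcode can be computed with elementary means: every point is a component born at scale 0, and a component dies when a minimum-spanning-tree edge merges it into another. The sketch below uses this standard construction (Kruskal's algorithm with union-find); it is an illustration of 0-dimensional persistence, independent of the Singular implementation mentioned above.

```python
from itertools import combinations
from math import dist

def h0_barcode(points):
    """0-dimensional persistence barcode of a Vietoris-Rips filtration:
    each point is born at scale 0; a component dies at the weight of the
    minimum-spanning-tree edge that merges it into another component."""
    parent = list(range(len(points)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    edges = sorted((dist(p, q), i, j)
                   for (i, p), (j, q) in combinations(enumerate(points), 2))
    bars = []
    for w, i, j in edges:               # Kruskal's algorithm
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            bars.append((0.0, w))       # a component dies at scale w
    bars.append((0.0, float("inf")))    # one component persists forever
    return bars

# Two well-separated pairs of points: two short bars, one long bar,
# one infinite bar -- i.e. two clusters at intermediate scales.
pts = [(0, 0), (0, 1), (10, 0), (10, 1)]
print(h0_barcode(pts))
```

Counting the bars alive at a given scale yields the Betti number b0 at that scale; here b0 = 2 for scales between 1 and 10, matching the two clusters.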

The Train Marshalling Problem consists of rearranging an incoming train in a marshalling yard in such a way that cars with the same destination appear consecutively in the final train and the number of sorting tracks needed is minimized. Besides an initial roll-in operation, just one pull-out operation is allowed. This problem was introduced by Dahlhaus et al., who also showed that it is NP-complete. In this paper, we provide a new lower bound on the optimal objective value by partitioning an appropriate interval graph. Furthermore, we consider the corresponding online problem, for which we provide upper and lower bounds on the competitiveness and a corresponding optimal deterministic online algorithm. We provide an experimental evaluation of our lower bound and algorithm which shows the practical tightness of the results.

In the generalized max flow problem, the aim is to find a maximum flow in a generalized network, i.e., a network with multipliers on the arcs that specify which portion of the flow entering an arc at its tail node reaches its head node. We consider this problem for the class of series-parallel graphs. First, we study the continuous case of the problem and prove that it can be solved using a greedy approach. Based on this result, we present a combinatorial algorithm that runs in O(m^2) time and a dynamic programming algorithm with running time O(m log m) that computes only the maximum flow value but not the flow itself. For the integral version of the problem, which is known to be NP-complete, we present a pseudo-polynomial algorithm.
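To make the role of the arc multipliers concrete, the following minimal sketch (illustrative only, far simpler than the paper's series-parallel algorithms) computes the maximum flow that reaches the sink along a single path of a generalized network.

```python
def max_path_flow(arcs):
    """Maximum flow reaching the sink along a single path in a
    generalized network.  Each arc is (capacity, gain): 'capacity'
    bounds the flow entering the arc at its tail node, 'gain' is the
    multiplier specifying which portion reaches the head node."""
    send = float("inf")   # max flow that may leave the source
    carry = 1.0           # product of the gains of the arcs seen so far
    for capacity, gain in arcs:
        # the flow entering this arc is send * carry, so cap it:
        send = min(send, capacity / carry)
        carry *= gain
    return send * carry   # amount actually arriving at the sink

# A path whose lossy middle arc lets only half the flow through.
print(max_path_flow([(10, 1.0), (8, 0.5), (6, 1.0)]))  # 4.0 units arrive
```

Series-parallel graphs are built from such paths by series and parallel composition, which is what the paper's greedy and dynamic programming algorithms exploit.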

Ever since Mark Weiser’s vision of Ubiquitous Computing, the importance of context has increased in the computer science domain. Future Ambient Intelligent Environments will assist humans in their everyday activities, even without them being constantly aware of it. Objects in such environments will have small computers embedded into them which have the ability to predict human needs from the current context and adapt their behavior accordingly. This vision equally applies to future production environments. In modern factories, workers and technical staff members are confronted with a multitude of devices from various manufacturers, all with different user interfaces, interaction concepts and degrees of complexity. Production processes are highly dynamic; whole modules can be exchanged or restructured. Both factors force users to continuously change their mental model of the environment. This complicates their workflows and leads to avoidable user errors or slips in judgement. In an Ambient Intelligent Production Environment these challenges must be addressed. The SmartMote is a universal control device for ambient intelligent production environments such as the SmartFactoryKL. It copes with the problems mentioned above by integrating all user interfaces into a single, holistic, mobile device. Following an automated Model-Based User Interface Development (MBUID) process, it generates a fully functional graphical user interface from an abstract task-based description of the environment at run-time. This work introduces an approach to integrating context, namely the user’s location, as an adaptation basis into the MBUID process. A Context Model is specified, which stores location information in a formal and precise way. Connected sensors continuously update the model with new values. The model is complemented by a reasoning component which uses an extensible set of rules.
These rules are used to derive more abstract context information from basic sensor data and to provide this information to the MBUID process. The feasibility of the approach is shown by the example of Interaction Zones, which let developers describe different task models depending on the user’s location. Using the context model to determine when a user enters or leaves a zone, the generator can adapt the graphical user interface accordingly. Context-awareness and the potential to adapt to the current context of use are key requirements of applications in ambient intelligent environments. The approach presented here provides a clear procedure and extension scheme for the consideration of additional context types. As context has significant influence on the overall User Experience, this results not only in better usefulness but also in improved usability of the SmartMote.
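The Interaction Zone idea can be sketched as a simple reasoning rule over location updates. All names and geometry below are hypothetical illustrations, not the SmartMote implementation: zones are rectangles on the shop floor, and enter/leave events are derived from a stream of positions, as a rule in the reasoning component might do.

```python
class InteractionZone:
    """Hypothetical sketch of an Interaction Zone: a rectangular region
    of the shop floor that a task model can be attached to."""
    def __init__(self, name, x0, y0, x1, y1):
        self.name = name
        self.bounds = (x0, y0, x1, y1)

    def contains(self, x, y):
        x0, y0, x1, y1 = self.bounds
        return x0 <= x <= x1 and y0 <= y <= y1

def zone_events(zones, positions):
    """Derive enter/leave events from a stream of location updates,
    as a reasoning rule over the context model might."""
    current, events = None, []
    for x, y in positions:
        hit = next((z.name for z in zones if z.contains(x, y)), None)
        if hit != current:
            if current is not None:
                events.append(("leave", current))
            if hit is not None:
                events.append(("enter", hit))
            current = hit
    return events

zones = [InteractionZone("press", 0, 0, 2, 2),
         InteractionZone("robot", 5, 0, 7, 2)]
track = [(1, 1), (3, 1), (6, 1)]
print(zone_events(zones, track))
```

A UI generator subscribed to these events could swap in the task model associated with the entered zone, which is the adaptation mechanism described above.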

In the classical Merton investment problem of maximizing the expected utility from terminal wealth and intermediate consumption, stock prices are independent of the investor who is optimizing his investment strategy. This is reasonable as long as the considered investor is small and thus does not influence the asset prices. However, for an investor whose actions may affect the financial market, the framework of the classical investment problem turns out to be inappropriate. In this thesis we provide a new approach to the field of large investor models. We study the optimal investment problem of a large investor in a jump-diffusion market which is in one of two states or regimes. The investor’s portfolio proportions as well as his consumption rate affect the intensity of transitions between the different regimes. Thus the investor is ’large’ in the sense that his investment decisions are interpreted by the market as signals: if, for instance, the large investor holds 25% of his wealth in a certain asset, then the market may regard this as evidence that the corresponding asset is priced incorrectly, and a regime shift becomes likely. More specifically, the large investor as modeled here may be the manager of a big mutual fund, a big insurance company or a sovereign wealth fund, or the executive of a company whose stocks are in his own portfolio. Typically, such investors have to disclose their portfolio allocations, which impacts market prices. But even if a large investor does not disclose his portfolio composition, as is the case for several hedge funds, the other market participants may speculate about the investor’s strategy, which could ultimately influence the asset prices. Since the investor’s strategy only impacts the regime shift intensities, the asset prices do not necessarily react instantaneously. Our model is a generalization of the two-state version of the Bäuerle-Rieder model.
Hence, like the Bäuerle-Rieder model, it is suitable for long investment periods during which market conditions may change. The fact that the investor’s influence enters the intensities of the transitions between the two states enables us to solve the investment problem of maximizing the expected utility from terminal wealth and intermediate consumption explicitly. We present the optimal investment strategy for a large investor with CRRA utility for three different kinds of strategy-dependent regime shift intensities: constant, step and affine intensity functions. In each case we derive the large investor’s optimal strategy in explicit form, dependent only on the solution of a system of coupled ODEs, which we show admits a unique global solution. The thesis is organized as follows. In Section 2 we review the classical Merton investment problem of a small investor who does not influence the market. Furthermore, the Bäuerle-Rieder investment problem, in which the market states follow a Markov chain with constant transition intensities, is discussed. Section 3 introduces the aforementioned investment problem of a large investor. Besides the mathematical framework and the HJB-system, we present a verification theorem that is necessary to verify the optimality of the solutions to the investment problem that we derive later on. The explicit derivation of the optimal investment strategy for a large investor with power utility is given in Section 4. For three kinds of intensity functions (constant, step and affine) we give the optimal solution and verify that the corresponding ODE-system admits a unique global solution. In the case of strategy-dependent intensity functions we distinguish three particular kinds of this dependency: portfolio-dependency, consumption-dependency, and combined portfolio- and consumption-dependency. The corresponding results for an investor having logarithmic utility are shown in Section 5.
In the subsequent Section 6 we consider the special case of a market consisting of only two correlated stocks besides the money market account. We analyze the investor’s optimal strategy when only the position in one of those two assets affects the market state, whereas the position in the other asset is irrelevant for the regime switches. Various comparisons of the derived investment problems are presented in Section 7. Besides comparing the particular problems with each other, we also dwell on the sensitivity of the solution with respect to the parameters of the intensity functions. Finally, we consider the loss the large investor would face if he neglected his influence on the market. Section 8 concludes the thesis.
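The classical small-investor benchmark that the thesis starts from has a well-known closed form: for CRRA (power) utility, the Merton fraction of wealth held in the risky asset is constant. The sketch below evaluates that textbook formula with illustrative parameters; it is the standard Merton result, not the large-investor solution derived in the thesis.

```python
def merton_fraction(mu, r, sigma, gamma):
    """Classical Merton problem: for a small CRRA investor with relative
    risk aversion gamma, the optimal constant fraction of wealth to hold
    in the risky asset (drift mu, volatility sigma, riskless rate r) is
    (mu - r) / (gamma * sigma**2)."""
    return (mu - r) / (gamma * sigma ** 2)

# Illustrative parameters: 8% drift, 2% riskless rate, 20% volatility,
# relative risk aversion gamma = 3.
print(merton_fraction(0.08, 0.02, 0.20, 3))  # roughly 0.5: half in stock
```

In the large-investor model this constant fraction is replaced by a strategy that also weighs the feedback of the position on the regime shift intensities.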

Mechanical and electrical properties of carbon nanofiber–ceramic nanoparticle–polymer composites
(2010)

The present research is focused on the manufacturing and analysis of composites consisting of a thermosetting polymer reinforced with fillers of nanometric dimensions. The materials were chosen to be an epoxy resin matrix and two different kinds of fillers: electrically conductive carbon nanofibers (CNFs) and ceramic titanium dioxide (TiO2) and aluminium oxide (Al2O3) nanoparticles. In an initial step of the work, in order to understand the effect that each kind of filler had when added separately to the polymer matrix, CNF–EP and ceramic nanoparticle–EP composites were manufactured and tested. Each type of filler was dispersed in the polymer matrix using a different dispersion technology: CNFs were dispersed in the resin with the aid of a three roll calender (TRC), whereas a torus bead mill (TML) was used in the ceramic nanoparticle case. Calendering proved to be an efficient method to disperse the untreated CNFs in the polymer matrix. The study of the physical properties of composites containing undispersed CNFs showed that the tensile strength and the maximum sustained strain were more sensitive to the state of dispersion of the nanofibers than the elastic modulus, fracture toughness, impact energy and electrical conductivity (for filler loadings above the percolation threshold of the system). Rheological investigation of the uncured CNF–epoxy mixture at different stages of dispersion indicated the formation of an interconnected nanofiber network within the matrix after the initial steps of calendering. CNF–EP composites showed better mechanical performance than the unmodified polymer matrix. However, the tensile modulus and strength of the CNF composites reflected the presence of remaining nanofiber clusters and did not reach theoretically predicted values. Fracture toughness and resistance against impact did not seem to be as sensitive to the state of nanofiber dispersion and improved consistently with the incorporation of the CNFs.
The electrical conductivity of the CNF composites showed a percolative enhancement of eight orders of magnitude with increasing nanofiber content. The percolation threshold for the achieved level of CNF dispersion was found to be 0.14 vol.%. It was also determined that, for these composites, the main mechanism of electrical transmission was electron tunnelling. Ceramic nanoparticle–EP composites were manufactured using TiO2 and Al2O3 particles as fillers in the epoxy matrix. Mechanical dispersion of the nanoparticles in the liquid polymer by means of a torus bead mill dissolver led to homogeneous distributions of particles in the matrix. Remaining particle agglomerates had a mean size of 80 nm. However, micrometer-sized agglomerates could clearly be observed in the microscopic analysis of the composites, especially in the TiO2 case. The inclusion of the nanoparticles in the epoxy resin resulted in a general improvement of the modulus, strength, maximum sustained strain, fracture toughness and impact energy of the polymer matrix. The nanoparticles were thus able to overcome the usual stiffness/toughness trade-off. On the other hand, nanoparticle–EP composites showed lower electrical conductivity than the neat epoxy. In general, there were no significant differences between the incorporation of TiO2 and Al2O3 particles. Based on the previous results, CNFs and nanoparticles were combined as fillers to create a nanocomposite that could benefit from the electrical properties provided by the conductive CNFs and, at the same time, have improved mechanical performance thanks to the presence of the well dispersed ceramic nanoparticles. Nanoparticles and CNFs were dispersed separately to create two batches, which were then blended together in a dissolver mixer. This method proved effective in creating well dispersed CNF–nanoparticle–epoxy composites, which showed improved electrical and mechanical properties compared with the neat polymer matrix.
The well dispersed ceramic nanofillers introduced additional energy-dissipating mechanisms into the CNF–EP composites, which resulted in an improvement of their mechanical performance. At high volume loadings of nanoparticles, most of the reinforcement came from the presence of the nanoparticles in the polymer matrix; the observed trends were therefore, in essence, similar to those observed in the ceramic nanoparticle–EP composites. The enhancement in the mechanical performance of the CNF composites with the inclusion of ceramic nanoparticles came at the price of an increased percolation threshold and a reduced electrical conductivity of the CNF–nanoparticle–EP composites compared with the CNF–EP materials. A modified version of Weber and Kamal's fiber contact model (FCM) was used to explain the electrical behaviour of the CNF–nanoparticle–EP composites once percolation was achieved. This model fitted the experimentally measured conductivity of these composites rather accurately.
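The percolative conductivity enhancement described above is conventionally captured by the classical power law sigma = sigma0 * (phi - phi_c)^t above the threshold. The sketch below evaluates this textbook scaling law with the reported threshold of 0.14 vol.%; the prefactor sigma0 and exponent t are illustrative placeholders, not values fitted to the thesis data (the thesis itself uses a modified fiber contact model).

```python
def percolation_conductivity(phi, sigma0, phi_c, t):
    """Classical percolation power law sigma = sigma0 * (phi - phi_c)**t
    for filler volume fractions phi above the threshold phi_c.
    sigma0 and t here are illustrative, not fitted parameters."""
    if phi <= phi_c:
        return 0.0   # below threshold: no conductive network forms
    return sigma0 * (phi - phi_c) ** t

# Threshold of 0.14 vol.% (= 0.0014 as a volume fraction), as reported
# for the achieved level of CNF dispersion.
phi_c = 0.0014
for phi in (0.001, 0.002, 0.005, 0.010):
    print(phi, percolation_conductivity(phi, sigma0=1.0, phi_c=phi_c, t=2.0))
```

The steep rise just above phi_c is what produces the eight-orders-of-magnitude jump seen experimentally once a connected nanofiber network forms.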

Model-based fault diagnosis and fault-tolerant control for a nonlinear electro-hydraulic system
(2010)

The work presented in this thesis discusses model-based fault diagnosis and fault-tolerant control with application to a nonlinear electro-hydraulic system. High-performance control with guaranteed safety and reliability for electro-hydraulic systems is a challenging task due to the high nonlinearity and system uncertainties. This thesis develops a diagnosis-integrated fault-tolerant control (FTC) strategy for the electro-hydraulic system. In the fault-free case the nominal controller is in operation to achieve the best performance. If a fault occurs, the controller is automatically reconfigured based on the fault information provided by the diagnosis system. Fault diagnosis and the reconfigurable controller are the key parts of the proposed methodology. Both system and sensor faults are studied in the thesis. Fault diagnosis consists of fault detection and isolation (FDI). Model-based residual generation is realized by exploiting the redundant information from the system model and the available signals. In this thesis a differential-geometric approach is employed, which gives a general formulation of the FDI problem and is more compact and transparent than other model-based approaches. The principle of residual construction with the differential-geometric method is to find an unobservable distribution, which indicates the existence of a system transformation with which the unknown system disturbance can be decoupled. With the observability codistribution algorithm, the local weak observability of the transformed system is ensured. A fault detection observer for the transformed system can then be constructed to generate the residual. This method alone cannot isolate sensor faults; therefore, a dedicated decision-making logic (DML) based on individual signal analysis of the residuals is designed to isolate the faults. The reconfigurable controller is designed with the backstepping technique.
The backstepping method is a recursive Lyapunov-based approach that can deal with nonlinear systems. Some system variables are considered as ``virtual controls'' during the design procedure; the feedback control laws and the associated Lyapunov function can then be constructed by following a step-by-step routine. For the electro-hydraulic system, an adaptive backstepping controller is employed to compensate for the impact of the unknown external load in the fault-free case. As soon as a fault is identified, the controller can be reconfigured according to the new model of the faulty system. A system fault is modeled as a system uncertainty and can be tolerated by parameter adaptation. A sensor fault acts on the system via the controller; it can be modeled as a parameter uncertainty of the controller, and all parameters coupled with the faulty measurement are replaced by their approximations. After the reconfiguration, the pre-specified control performance can be recovered. The FDI-integrated FTC based on the backstepping technique was implemented successfully on an electro-hydraulic testbed. On-line robust FDI and controller reconfiguration can be achieved; the tracking performance of the controlled system is guaranteed and the considered faults can be tolerated. However, the theoretical robustness analysis for the time delay caused by the fault diagnosis remains an open problem.
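The residual principle, stripped of the differential-geometric machinery, can be sketched for a scalar discrete-time linear system: an observer tracks the plant, and the innovation (measurement minus estimate) serves as the residual, flagged when it exceeds a threshold. This is an illustrative simplification, not the thesis's nonlinear observer design.

```python
def detect_fault(measurements, inputs, a, b, L, threshold):
    """Minimal observer-based residual generator for a scalar system
    x[k+1] = a*x[k] + b*u[k], y[k] = x[k].  Returns one alarm flag per
    sample; a flag is raised when |residual| exceeds the threshold."""
    x_hat, alarms = 0.0, []
    for y, u in zip(measurements, inputs):
        residual = y - x_hat                      # innovation
        alarms.append(abs(residual) > threshold)
        x_hat = a * x_hat + b * u + L * residual  # observer update
    return alarms

# Simulate a healthy plant for 5 steps, then an additive sensor fault
# of +1.0 on the measurement (illustrative numbers).
a, b = 0.9, 1.0
u = [0.5] * 10
x, y = 0.0, []
for k in range(10):
    fault = 1.0 if k >= 5 else 0.0
    y.append(x + fault)
    x = a * x + b * u[k]
print(detect_fault(y, u, a, b, L=0.5, threshold=0.4))
# the alarm fires when the sensor fault appears at k = 5
```

Isolation logic such as the DML described above would then combine several such residuals, each sensitive to a different fault, to decide which fault occurred.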

The scope of this paper is to enhance the model of the own-company stockholder (given in Desmettre, Gould and Szimayer (2010)), who can voluntarily performance-link his personal wealth to his management success by acquiring stocks in his own company, whose value he can directly influence by spending work effort. The executive is characterized by a parameter of risk aversion and by the two work effectiveness parameters, inverse work productivity and disutility stress. We extend the model to a constant absolute risk aversion framework using an exponential utility/disutility set-up. A closed-form solution is given for the optimal work effort an executive will apply, and we derive the optimal investment strategies of the executive. Furthermore, we determine an up-front fair cash compensation using an indifference utility rationale. Our study shows that the results previously obtained are largely robust to the choice of utility/disutility set-up.

We consider a highly-qualified individual with respect to her choice between two distinct career paths. She can choose between a mid-level management position in a large company and an executive position within a smaller listed company with the possibility to directly affect the company’s share price. She invests in the financial market including the share of the smaller listed company. The utility maximizing strategy from consumption, investment, and work effort is derived in closed form for logarithmic utility. The power utility case is discussed as well. Conditions for the individual to pursue her career with the smaller listed company are obtained. The participation constraint is formulated in terms of the salary differential between the two positions. The smaller listed company can offer less salary. The salary shortfall is offset by the possibility to benefit from her work effort by acquiring own-company shares. This gives insight into aspects of optimal contract design. Our framework is applicable to the pharmaceutical and financial industry, and the IT sector.

In this work, we develop a framework for analyzing an executive’s own-company stockholding and work effort preferences. The executive, characterized by risk aversion and work effectiveness parameters, invests his personal wealth without constraint in the financial market, including the stock of his own company, whose value he can directly influence with work effort. The executive’s utility-maximizing personal investment and work effort strategy is derived in closed form for logarithmic and power utility, and for exponential utility in the case of zero interest rates. Additionally, a utility indifference rationale is applied to determine his fair compensation. Being unconstrained by performance contracting, the executive’s work effort strategy establishes a base case for theoretical or empirical assessment of the benefits or otherwise of constraining executives with performance contracting. Further, we consider a highly-qualified individual with respect to her choice between two distinct career paths. She can choose between a mid-level management position in a large company and an executive position within a smaller listed company with the possibility to directly affect the company’s share price. She invests in the financial market including the share of the smaller listed company. The utility maximizing strategy from consumption, investment, and work effort is derived in closed form for logarithmic utility and power utility. Conditions for the individual to pursue her career with the smaller listed company are obtained. The participation constraint is formulated in terms of the salary differential between the two positions. The smaller listed company can offer less salary. The salary shortfall is offset by the possibility to benefit from her work effort by acquiring own-company shares. This gives insight into aspects of optimal contract design. Our framework is applicable to the pharmaceutical and financial industry, as well as the IT sector.

The aim of this thesis was to link Computational Fluid Dynamics (CFD) and Population Balance Modelling (PBM) to gain a combined model for the prediction of counter-current liquid-liquid extraction columns. Parts of the doctoral thesis project were done in close cooperation with the Fraunhofer ITWM. Their in-house CFD code Finite Pointset Method (FPM) was further developed for two-phase simulations and used for the CFD-PBM coupling. The coupling and all simulations were also carried out in the commercial CFD code Fluent in parallel. For the solution methods of the PBM there was a close cooperation with Prof. Attarakih from the Al-Balqa Applied University in Amman, Jordan, who developed a new adaptive method, the Sectional Quadrature Method of Moments (SQMOM). At the beginning of the project, there was a lack of two-phase liquid-liquid CFD simulations and their experimental validation in literature. Therefore, stand-alone CFD simulations without PBM were carried out both in FPM and Fluent to test the predictivity of CFD for stirred liquid-liquid extraction columns. The simulations were validated by Particle Image Velocimetry (PIV) measurements. The two-phase PIV measurements were possible when using an iso-optical system, where the refractive indices of both liquid phases are identical. These investigations were done in segments of two Rotating Disc Contactors with 150mm and 450mm diameter to validate CFD at lab and at industrial scale. CFD results of the aqueous phase velocities, hold-up, droplet raising velocities and turbulent energy dissipation were compared to experimental data. The results show that CFD can predict most phenomena and there was an overall good agreement. In the next steps, different solution methods for the PBM, e.g. the SQMOM and the Quadrature Method of Moments (QMOM) were implemented, varied and tested in Fluent and FPM in a two-fluid model. 
In addition, different closures for coalescence and breakage were implemented to predict drop size distributions and Sauter mean diameters in the RDC DN150 column. These results show that a prediction of the droplet size distribution is possible, even when no adjustable parameters are used. A combined multi-fluid CFD-PBM model was developed by means of the SQMOM to overcome drawbacks of the two-fluid approach. Benefits of the multi-fluid approach could be shown, but the high computational load was also apparent. Therefore, finally, the One Primary One Secondary Particle Method (OPOSPM), a simple and efficient special case of the SQMOM, was introduced into CFD to simulate a full pilot-plant column of the RDC DN150. The OPOSPM offers the possibility of a one-equation model for the solution of the PBM in CFD. The predicted results for the mean droplet diameter and the dispersed-phase hold-up agree well with literature data. The results also show that the new CFD-PBM model is very efficient from a computational point of view (about half the cost of the QMOM and one fifth that of the method of classes). The overall results give rise to the expectation that the coupled CFD-PBM model will lead to a better, faster and more cost-efficient layout of counter-current extraction columns in the future.
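Moment methods such as the QMOM and SQMOM track low-order moments of the droplet size distribution, from which integral quantities like the Sauter mean diameter follow directly. A hedged sketch of that final step, using a purely hypothetical lognormal droplet population rather than thesis data:

```python
import numpy as np

# The Sauter mean diameter d32 is the ratio of the third to the second
# moment of the droplet size distribution: d32 = m3 / m2. Moment methods
# carry m_k = integral d^k n(d) dd along with the flow, so d32 is cheap
# to recover. Here we mimic that with samples from a lognormal population.
rng = np.random.default_rng(0)
diams = rng.lognormal(mean=np.log(2e-3), sigma=0.3, size=10_000)  # diameters [m]

m2 = np.mean(diams ** 2)   # proportional to the second moment (surface)
m3 = np.mean(diams ** 3)   # proportional to the third moment (volume)
d32 = m3 / m2              # Sauter mean diameter: volume-to-surface ratio
print(f"d32 = {d32:.4e} m")
```

Because d32 weights large droplets more heavily than the arithmetic mean, it is the natural diameter for mass-transfer-area predictions in extraction columns.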

Simulation of multibody systems (mbs) is an inherent part of the development and design of complex mechanical systems. Moreover, simulation during operation has gained importance in recent years, e.g. for HIL, MIL or monitoring applications. In this paper we discuss the numerical simulation of multibody systems on different platforms. The main section of this paper deals with the simulation of an established truck model [9] on different platforms: one microcontroller and two real-time processor boards. In addition to numerical C-code, the latter platforms provide the possibility to build the model with a commercial mbs tool, which is also investigated. A survey of different ways of generating code and equations of mbs models is given and discussed with regard to handling, possible limitations, and performance. The presented benchmarks are processed under the terms of on-board real-time applications. A further important restriction, caused by the real-time requirement, is a fixed integration step size. Hence, carefully chosen numerical integration algorithms are necessary, especially in the case of closed loops in the model. We investigate linearly-implicit time integration methods with fixed step size, so-called Rosenbrock methods, and compare them with respect to their accuracy and performance on the tested processors.
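To illustrate the attraction of linearly-implicit methods under real-time constraints, the following sketch shows the simplest member of the Rosenbrock family, the linearly-implicit Euler method (a generic one-dimensional stiff test problem, not the truck model of the paper): each fixed step requires exactly one linear solve, with no Newton iteration whose iteration count could vary at run time.

```python
import numpy as np

def rosenbrock_euler_step(f, jac, t, y, h):
    """Advance y' = f(t, y) by one fixed step h via (I - h*J) dy = h*f(t, y).
    One linear solve per step -- a predictable cost per time step, which is
    what hard real-time scheduling requires."""
    J = jac(t, y)
    I = np.eye(len(y))
    dy = np.linalg.solve(I - h * J, h * f(t, y))
    return y + dy

# Stiff test problem: y' = -50*(y - cos(t)); the solution rapidly relaxes
# onto a slow manifold near cos(t). Explicit Euler with h = 0.01 would be
# close to its stability limit here; the linearly-implicit step is robust.
f = lambda t, y: -50.0 * (y - np.cos(t))
jac = lambda t, y: np.array([[-50.0]])

y = np.array([0.0])
t, h = 0.0, 0.01
for _ in range(200):          # integrate to t = 2 with a fixed step size
    y = rosenbrock_euler_step(f, jac, t, y, h)
    t += h
print(y[0], np.cos(2.0))      # the solution closely tracks cos(t)
```

Higher-order Rosenbrock methods, as benchmarked in the paper, add further stages but keep this fixed-cost-per-step structure.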

Wetlands are special areas that offer habitat for terrestrial and aquatic life. They serve as nesting sites for amphibians and as breeding grounds for birds, and thus support a wide diversity of species. Wetlands are of special importance for the ecosystem: they hinder erosion, absorb contaminants from water and thereby contribute to clean and potable water, and mitigate flooding. For these reasons wetlands must be maintained and conserved, all the more so because they are vanishing rapidly due to contamination, excessive agriculture, urban sprawl, dams, etc. This PhD thesis contributes to solving the problems of wetlands affected by urbanization, especially in metropolitan areas. The growth of cities requires more land for settlements; more settlements bring about more urban sprawl, and more urban sprawl deteriorates more natural regions. In this cycle, wetlands too suffer from the effects of urbanization. Precautions should therefore be developed to protect wetlands from these effects, and such precautions should include an anticipation of the consequences of urbanization. An important tool for conserving wetlands and shielding these regions from cities is land use and land use planning within city and regional planning. The first step of land use planning is the determination of settlement appropriateness, which helps to choose suitable locations for settlement so that wetlands are affected as little as possible by urban sprawl. This PhD thesis investigates a method based on buffer zones around wetlands and thresholds in the basins of wetlands; the method is examined in two case study areas, Mogan Lake and Büyükçekmece Lake. Based on the results for Mogan and Büyükçekmece Lake, the method will be generalized to other comparable wetlands that lie near cities and are affected by urban sprawl.

The modelling of hedge funds poses a difficult problem, since the available reported data sets are often small and incomplete. We propose a switching regression model for hedge funds in which the coefficients can switch between different regimes. The coefficients are governed by a Markov chain in discrete time, whose states represent different states of the economy that influence the performance of the independent variables. Hedge fund indices are chosen as regressors. The parameter estimation for the switching parameters as well as for the switching error term is done through a filtering technique for hidden Markov models developed by Elliott (1994). Recursive parameter estimates are calculated through a filter-based EM-algorithm, which uses the hidden information of the underlying Markov chain. Our switching regression model is applied to hedge fund series and hedge fund indices from the HFR database.
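The generative side of such a model is straightforward to sketch. The following is an illustrative two-regime simulation only (not the paper's Elliott-filter estimator, and with purely hypothetical parameter values): the regression coefficients and the error volatility all switch with a hidden Markov chain.

```python
import numpy as np

# Two-state Markov-switching regression:
#   y_t = a(s_t) + b(s_t) * x_t + sigma(s_t) * eps_t,
# where the regime s_t follows a discrete-time Markov chain with
# transition matrix P. State 1 is a high-volatility regime.
rng = np.random.default_rng(1)

P = np.array([[0.95, 0.05],        # regime transition probabilities
              [0.10, 0.90]])
a     = np.array([0.001, -0.002])  # state-dependent intercepts
b     = np.array([0.5,    1.2])    # state-dependent index loadings
sigma = np.array([0.01,   0.03])   # state-dependent error volatilities

T = 500
s = np.zeros(T, dtype=int)                 # hidden regime path
x = rng.normal(0.0, 0.02, size=T)          # regressor, e.g. index returns
y = np.empty(T)
for t in range(T):
    if t > 0:
        s[t] = rng.choice(2, p=P[s[t - 1]])
    y[t] = a[s[t]] + b[s[t]] * x[t] + sigma[s[t]] * rng.normal()
print(s.mean())  # fraction of time spent in the high-volatility regime
```

An estimator in the spirit of the paper would then treat `s` as unobserved and recover the parameters from `(x, y)` alone via filtering and a filter-based EM iteration.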

A number of water flow problems in porous media are modelled by Richards' equation [1], which has many different applications. We are concerned with the simulation of the pressing section of a paper machine. This part of the industrial process provides the dewatering of the paper layer by the use of clothings, i.e. press felts, which absorb the water during pressing [2]. A system of nips is formed, in the simplest case by rolls, which increase sheet dryness by pressing against each other (see Figure 1). Many theoretical studies have been devoted to Richards' equation (see [3], [4] and references therein). Most articles consider the case of x-independent coefficients, which simplifies the system considerably since, after Kirchhoff's transformation of the problem, the elliptic operator becomes linear. In our case this condition is not satisfied and we have to consider a nonlinear operator of second order. Moreover, all these articles are concerned with the nonstationary problem, while we are interested in the stationary case. Due to the complexity of the physical process, our problem has a specific feature: an additional convective term appears in our model because the porous medium moves with constant velocity through the pressing rolls. This term vanishes in an immobile porous medium. We are not aware of papers dealing with this kind of modified steady Richards' problem. The goal of this paper is to obtain stability results, to show the existence of a solution to the discrete problem, and to prove the convergence of the approximate solution to the weak solution of the modified steady Richards' equation, which describes the transport processes in the pressing section. In Section 2 we present the model under consideration. In Section 3 a numerical scheme obtained by the finite volume method is given. The main part of this paper consists of the theoretical studies given in Section 4. Section 5 presents a numerical experiment, and the conclusion of this work is given in Section 6.
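The structure of such a finite-volume scheme can be sketched in one dimension. The following is a hedged illustration only (hypothetical conductivity law and coefficients, not the paper's model or scheme): a steady Richards-type equation with a convective term, -(K(u)u')' + v u' = 0 on (0,1), discretized by finite volumes with upwinding for the convection and a Picard iteration that freezes K at the previous iterate.

```python
import numpy as np

def solve_steady_richards(K, v, u0, u1, n=50, iters=100, tol=1e-10):
    """Picard / finite-volume solver for -(K(u)u')' + v*u' = 0 on (0,1)
    with Dirichlet data u(0)=u0, u(1)=u1. Assumes v > 0 (upwind scheme)."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    u = u0 + (u1 - u0) * x               # initial guess: linear profile
    for _ in range(iters):
        Kf = K(0.5 * (u[:-1] + u[1:]))   # conductivity at cell faces
        A = np.zeros((n + 1, n + 1))
        rhs = np.zeros(n + 1)
        A[0, 0] = A[n, n] = 1.0          # Dirichlet rows
        rhs[0], rhs[n] = u0, u1
        for i in range(1, n):
            # central diffusion flux + first-order upwind convection
            A[i, i - 1] = -Kf[i - 1] / h**2 - v / h
            A[i, i]     = (Kf[i - 1] + Kf[i]) / h**2 + v / h
            A[i, i + 1] = -Kf[i] / h**2
        u_new = np.linalg.solve(A, rhs)  # linearized (Picard) system
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    return x, u

K = lambda u: 1.0 + u**2                 # hypothetical nonlinear conductivity
x, u = solve_steady_richards(K, v=1.0, u0=0.0, u1=1.0)
print(u[len(u) // 2])                    # saturation/pressure at the midpoint
```

The upwind treatment of the convective term keeps the system matrix an M-matrix, so the discrete solution inherits a maximum principle; the paper's stability and convergence analysis addresses the analogous questions for its full model.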

We introduce a class of models for time series of counts which includes INGARCH-type models as well as log-linear models for conditionally Poisson distributed data. For these processes, we formulate simple conditions for stationarity and weak dependence with a geometric rate. The coupling argument used in the proof serves as a blueprint for a similar treatment of integer-valued time series models based on other types of thinning operations.
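For concreteness, a minimal simulation of the INGARCH(1,1) special case (illustrative parameter values, not from the paper): the conditional Poisson intensity feeds back on both the previous count and the previous intensity, and the stationarity condition alpha + beta < 1 mirrors the kind of contraction condition the coupling argument exploits.

```python
import numpy as np

# INGARCH(1,1):  X_t | past ~ Poisson(lambda_t),
#                lambda_t = omega + alpha * X_{t-1} + beta * lambda_{t-1}.
# Stationarity (in the mean) requires alpha + beta < 1; the stationary
# mean is then omega / (1 - alpha - beta).
rng = np.random.default_rng(42)
omega, alpha, beta = 1.0, 0.3, 0.4    # alpha + beta = 0.7 < 1

T = 10_000
lam = omega / (1.0 - alpha - beta)    # start at the stationary mean
X = np.empty(T, dtype=int)
for t in range(T):
    X[t] = rng.poisson(lam)
    lam = omega + alpha * X[t] + beta * lam
print(X.mean())  # close to the stationary mean omega/(1-alpha-beta) = 10/3
```

Replacing the Poisson feedback with a thinning operation yields the integer-valued models mentioned at the end of the abstract; the same contraction structure drives their weak dependence.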