KLUEDO RSS Feed: KLUEDO Dokumente/documents
https://kluedo.ub.uni-kl.de/index/index/
Wed, 18 Jun 2008 15:29:14 +0200

A Comparative Study of the Vasicek and the CIR Model of the Short Rate
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1979
In this work, we analyze two important and simple models of the short rate, namely the Vasicek and CIR models. The models are described, and the sensitivity of each model with respect to changes in its parameters is studied. Finally, we give the results of estimating the model parameters in two different ways.
S. Zeytun; A. Gupta | report | Wed, 18 Jun 2008 15:29:14 +0200

Heterogeneous redundancy in software quality prediction using a hybrid Bayesian approach
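The two short-rate models compared in the abstract above have well-known dynamics: Vasicek, dr = a(b - r) dt + sigma dW, and CIR, dr = a(b - r) dt + sigma sqrt(r) dW. A minimal Euler-Maruyama sketch of both (parameter values are illustrative, not taken from the report):

```python
import numpy as np

def simulate_short_rate(model, r0=0.05, a=0.5, b=0.04, sigma=0.02,
                        T=1.0, n_steps=1000, seed=0):
    """Euler-Maruyama simulation of the Vasicek or CIR short-rate SDE."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    r = np.empty(n_steps + 1)
    r[0] = r0
    for i in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        drift = a * (b - r[i]) * dt
        if model == "vasicek":
            diffusion = sigma * dW                            # constant volatility
        elif model == "cir":
            diffusion = sigma * np.sqrt(max(r[i], 0.0)) * dW  # level-dependent volatility
        else:
            raise ValueError(model)
        r[i + 1] = r[i] + drift + diffusion
    return r

vasicek_path = simulate_short_rate("vasicek")
cir_path = simulate_short_rate("cir")
```

The only structural difference between the two models is the diffusion term, which is exactly where their parameter sensitivities diverge.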
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1980
With the ever-increasing significance of software in our everyday lives, it is vital to afford reliable software quality estimates. Typically, quantitative software quality analyses rely on either statistical fault prediction methods (FPMs) or stochastic software reliability growth models (SRGMs). Adopting solely FPMs or SRGMs, though, may result in biased predictions that do not account for uncertainty in the distinct prediction methods, thus rendering the prediction less reliable. This paper identifies flaws of the individual prediction methods and suggests a hybrid prediction approach that combines FPMs and SRGMs. We adopt FPMs for initially estimating the expected number of failures for finite-failure SRGMs. Initial parameter estimates yield more accurate reliability predictions until sufficient failures are observed to enable stable parameter estimates in SRGMs. At the equilibrium level of FPM and SRGM predictions, we suggest combining the competing prediction methods according to the principle of heterogeneous redundancy. That is, we propose using the individual methods separately and combining their predictions. In this paper we suggest Bayesian model averaging (BMA) for combining the different methods. The hybrid approach allows early reliability estimates and encourages higher confidence in software quality predictions.
G. Hanselmann; A. Sarishvili | report | Wed, 18 Jun 2008 15:29:09 +0200

A novel non-linear approach to minimal area rectangular packing
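Bayesian model averaging, as proposed in the abstract above for combining FPM and SRGM predictions, weights each model's prediction by its posterior model probability. A minimal sketch with hypothetical prediction and evidence values (not from the report):

```python
import numpy as np

def bma_combine(predictions, log_evidences):
    """Combine model predictions by Bayesian model averaging:
    weights are posterior model probabilities derived from the (log) evidence."""
    log_ev = np.asarray(log_evidences, dtype=float)
    w = np.exp(log_ev - log_ev.max())   # subtract max for numerical stability
    w /= w.sum()
    return w, np.dot(w, np.asarray(predictions, dtype=float))

# Hypothetical example: an FPM and an SRGM predict the expected number of failures.
weights, combined = bma_combine(predictions=[12.0, 18.0],
                                log_evidences=[-40.2, -41.5])
```

The combined estimate always lies between the individual predictions, and the weights quantify how much each method is trusted.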
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1981
This paper discusses the minimal area rectangular packing problem: how to pack a set of specified, non-overlapping rectangles into a rectangular container of minimal area. We investigate different mathematical programming approaches to this problem and introduce a novel approach based on non-linear optimization and the "tunneling effect" achieved by a relaxation of the non-overlapping constraints.
V. Maag; M. Berger; A. Winterfeld; K.-H. Küfer | report | Wed, 18 Jun 2008 15:29:02 +0200

Pareto navigation – systematic multicriteria-based IMRT treatment plan determination
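One common way to relax non-overlapping constraints, in the spirit of the packing abstract above, is to penalize pairwise overlap area in the objective; a weak penalty lets rectangles temporarily pass "through" each other during the optimization. A simplified sketch, not the authors' exact formulation:

```python
def overlap_area(r1, r2):
    """Overlap area of two axis-aligned rectangles given as
    (x, y, width, height) with (x, y) the lower-left corner."""
    x1, y1, w1, h1 = r1
    x2, y2, w2, h2 = r2
    dx = min(x1 + w1, x2 + w2) - max(x1, x2)
    dy = min(y1 + h1, y2 + h2) - max(y1, y2)
    return max(dx, 0.0) * max(dy, 0.0)

def packing_objective(rects, container_w, container_h, penalty=100.0):
    """Container area plus a penalty for pairwise overlaps. Down-weighting
    the penalty relaxes the non-overlapping constraints."""
    area = container_w * container_h
    for i in range(len(rects)):
        for j in range(i + 1, len(rects)):
            area += penalty * overlap_area(rects[i], rects[j])
    return area
```

As the penalty weight grows over the course of the optimization, feasible (overlap-free) layouts are recovered.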
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1982
Background and purpose: Inherently, IMRT treatment planning involves compromising between different planning goals. Multi-criteria IMRT planning directly addresses this compromising and thus makes it more systematic. Usually, several plans are computed, from which the planner selects the most promising one following a certain procedure. Applying Pareto navigation for this selection step simultaneously increases the variety of planning options and eases the identification of the most promising plan.
Material and methods: Pareto navigation is an interactive multi-criteria optimization method that consists of the two navigation mechanisms “selection” and “restriction”. The former allows the formulation of wishes, whereas the latter allows the exclusion of unwanted plans. They are realized as optimization problems on the so-called plan bundle – a set constructed from precomputed plans. They can be approximately reformulated so that their solution time is a small fraction of a second. Thus, the user can be provided with immediate feedback regarding his or her decisions.
M. Monz; K.-H. Küfer; T. Bortfeld; C. Thieke | report | Wed, 18 Jun 2008 15:28:56 +0200

On the role of modeling parameters in IMRT plan optimization
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1983
Modeling and formulation of optimization problems in IMRT planning comprises the choice of various values such as function-specific parameters or constraint bounds. These values also affect the characteristics of the optimization problem and thus the form of the resulting optimal plans. This publication utilizes concepts of sensitivity analysis and elasticity in convex optimization to analyze the dependence of optimal plans on the modeling parameters. It also derives general rules of thumb for how to choose and modify the parameters in order to obtain the desired IMRT plan. These rules are numerically validated for an exemplary IMRT planning problem.
M. Krause; A. Scherrer | report | Wed, 18 Jun 2008 15:28:50 +0200

Computation of the permeability of porous materials from their microstructure by FFF-Stokes
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1984
A fully automatic procedure is proposed to rapidly compute the permeability of porous materials from their binarized microstructure. The discretization is a simplified version of Peskin's Immersed Boundary Method, where the forces are applied at the no-slip grid points. As needed for the computation of permeability, steady flows at zero Reynolds number are considered. Short run-times are achieved by eliminating the pressure and velocity variables using a Fast Fourier Transform based fast inversion approach, built on four Poisson problems, on rectangular parallelepipeds with periodic boundary conditions. In reference to calling it a fast method using fictitious or artificial forces, the implementation is called FFF-Stokes. Large scale computations on 3d images are quickly and automatically performed to estimate the permeability of some sample materials. A MATLAB implementation is provided to allow readers to experience the automation and speed of the method for realistic three-dimensional models.
A. Wiegmann | report | Wed, 18 Jun 2008 15:28:44 +0200

Facility Location and Supply Chain Management – A comprehensive review
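Once the Stokes flow through the binarized microstructure is computed, the permeability in the abstract above follows from Darcy's law by averaging the velocity over the sample. A minimal sketch of this post-processing step (the function name and the uniform test field are illustrative, not from the report):

```python
import numpy as np

def darcy_permeability(velocity_field, viscosity, pressure_drop, length):
    """Estimate a scalar permeability K from a resolved flow through a
    porous sample via Darcy's law:  <u> = (K / mu) * (dp / L)."""
    mean_velocity = float(np.mean(velocity_field))  # superficial (volume) average
    return viscosity * mean_velocity * length / pressure_drop

# Hypothetical values: <u> = 1e-3 m/s, mu = 1e-3 Pa s,
# a pressure drop of 1 Pa over a 0.01 m sample.
K = darcy_permeability(np.full((10, 10, 10), 1e-3), 1e-3, 1.0, 0.01)
```

The expensive part is of course the flow solve itself; this averaging step is why only the steady zero-Reynolds-number flow is needed.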
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1985
Facility location decisions play a critical role in the strategic design of supply chain networks. In this paper, an extensive literature review of facility location models in the context of supply chain management is given. Following a brief review of core models in facility location, we identify basic features that such models must capture to support decision-making involved in strategic supply chain planning. In particular, the integration of location decisions with other decisions relevant to the design of a supply chain network is discussed. Furthermore, aspects related to the structure of the supply chain network, including those specific to reverse logistics, are also addressed. Significant contributions to the current state of the art are surveyed, taking into account numerous factors. Supply chain performance measures and optimization techniques are also reviewed. Applications of facility location models to supply chain network design ranging across various industries are discussed. Finally, a list of issues requiring further research is highlighted.
T. Melo; S. Nickel; F. Saldanha da Gama | report | Wed, 18 Jun 2008 15:28:38 +0200

Bringing robustness to patient flow management through optimized patient transports in hospitals
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1986
Intra-hospital transports are often required for diagnostic or therapeutic reasons. Depending on the hospital layout, transportation between nursing wards and service units is either provided by ambulances or by trained personnel who accompany patients on foot. In many large German hospitals, the patient transport service is poorly managed and lacks workflow coordination. This contributes to higher hospital costs (e.g. when a patient is not delivered to the operating room on time) and to patient inconvenience due to longer waiting times. We have designed a computer-based planning system – Opti-TRANS© – that supports all phases of the transportation flow, ranging from travel booking and dispatching transport requests to monitoring and reporting trips in real time. The methodology developed to solve the underlying optimization problem – a dynamic dial-a-ride problem with hospital-specific constraints – draws on fast heuristic methods to ensure the efficient and timely provision of transports. We illustrate the strong impact of Opti-TRANS© on the daily performance of the patient transportation service of a large German hospital. The major benefits obtained with the new tool include streamlined transportation processes and workflow, significant savings and improved patient satisfaction. Moreover, the new planning system has contributed to increased awareness among hospital staff of the importance of implementing efficient logistics practices.
T. Hanne; T. Melo; S. Nickel | report | Wed, 18 Jun 2008 15:28:32 +0200

An efficient approach for upscaling properties of composite materials with high contrast of coefficients
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1987
An efficient approach for calculating the effective heat conductivity for a class of industrial composite materials, such as metal foams, fibrous glass materials, and the like, is discussed. These materials, used in insulation or in advanced heat exchangers, are characterized by a low volume fraction of the highly conductive material (glass or metal), which has a complex, network-like structure, and by a large volume fraction of the insulator (air). We assume that the composite materials have constant macroscopic thermal conductivity tensors, which in principle can be obtained by standard upscaling techniques that use the concept of representative elementary volumes (REV), i.e. the effective heat conductivities of composite media can be computed by post-processing the solutions of some special cell problems for REVs. We propose, theoretically justify, and numerically study an efficient approach for calculating the effective conductivity for media for which the ratio δ of the low and high conductivities satisfies δ ≪ 1. In this case one essentially only needs to solve the heat equation in the region occupied by the highly conductive media. For a class of problems we show that, under certain conditions on the microscale geometry, the proposed approach produces an upscaled conductivity that is O(δ) close to the exact upscaled conductivity. A number of numerical experiments are presented in order to illustrate the accuracy and the limitations of the proposed method. The applicability of the presented approach to upscaling other similar problems, e.g. flow in fractured porous media, is also discussed.
R. Ewing; O. Iliev; R. Lazarov; I. Rybak; J. Willems | report | Wed, 18 Jun 2008 15:28:21 +0200

New approaches to hub location problems in public transport planning
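The intuition behind solving only in the highly conductive region, as in the high-contrast abstract above, can already be seen in 1d: for layers in series the effective conductivity is the harmonic mean (dominated by the insulator), for layers in parallel the arithmetic mean (dominated by the conductor). A small sketch with illustrative values:

```python
import numpy as np

def effective_conductivity_1d(k, arrangement):
    """Effective conductivity of a 1d layered composite of equal-width layers:
    layers in series -> harmonic mean, layers in parallel -> arithmetic mean."""
    k = np.asarray(k, dtype=float)
    if arrangement == "series":
        return len(k) / np.sum(1.0 / k)
    if arrangement == "parallel":
        return float(np.mean(k))
    raise ValueError(arrangement)

# High-contrast example: an air-like insulator (0.025) and a conductor (50.0).
k_series = effective_conductivity_1d([0.025, 50.0], "series")      # insulator dominates
k_parallel = effective_conductivity_1d([0.025, 50.0], "parallel")  # conductor dominates
```

In the parallel (network-like) arrangement the result is essentially set by the conductive phase alone, which is what makes restricting the cell problem to that phase an O(δ) approximation.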
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1988
In this paper, a new mixed integer mathematical programme is proposed for the application of Hub Location Problems (HLP) in public transport planning. This model is among the few existing ones for this application. Some classes of valid inequalities are proposed, yielding a very tight model. To solve instances of this problem where existing standard solvers fail, two approaches are proposed: the first is an exact accelerated Benders decomposition algorithm and the second a greedy neighborhood search. The computational results substantiate the superiority of our solution approaches over existing standard MIP solvers such as CPLEX, both in terms of computational time and in terms of the problem instance sizes that can be solved. The greedy neighborhood search heuristic is shown to be extremely efficient.
S. Gelareh; S. Nickel | report | Wed, 18 Jun 2008 15:28:01 +0200

Survey of 3d image segmentation methods
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1978
This report reviews selected image binarization and segmentation methods that have been proposed and that are suitable for the processing of volume images. The focus is on thresholding, region growing, and shape-based methods. Rather than trying to give a complete overview of the field, we review the original ideas and concepts of selected methods, because we believe this information to be important for judging when and under what circumstances a segmentation algorithm can be expected to work properly.
O. Wirjadi | report | Wed, 18 Jun 2008 15:26:53 +0200

POLYBORI: A Gröbner basis framework for Boolean polynomials
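The two simplest method families reviewed in the survey above, thresholding and region growing, can be sketched for volume images as follows (a minimal illustration, not code from the report):

```python
import numpy as np
from collections import deque

def threshold(volume, t):
    """Global threshold binarization of a volume image."""
    return volume >= t

def region_grow(volume, seed, tol):
    """Grow a region from a seed voxel, accepting 6-connected neighbors
    whose gray value is within tol of the seed's value."""
    seed_val = volume[seed]
    grown = np.zeros(volume.shape, dtype=bool)
    grown[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
               and not grown[n] and abs(volume[n] - seed_val) <= tol:
                grown[n] = True
                queue.append(n)
    return grown
```

Thresholding labels voxels purely by gray value, while region growing additionally enforces connectivity to the seed, which is exactly the distinction that decides when each method works.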
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1976
This work presents a new framework for Gröbner basis computations with Boolean polynomials. Boolean polynomials can be modeled in a rather simple way, with both coefficients and degree per variable lying in {0, 1}. The ring of Boolean polynomials is, however, not a polynomial ring, but rather the quotient ring of the polynomial ring over the field with two elements modulo the field equations x² = x for each variable x. Therefore, the usual polynomial data structures seem not to be appropriate for fast Gröbner basis computations. We introduce a specialized data structure for Boolean polynomials based on zero-suppressed binary decision diagrams (ZDDs), which is capable of handling these polynomials more efficiently with respect to memory consumption and also computational speed. Furthermore, we concentrate on high-level algorithmic aspects, taking into account the new data structures as well as structural properties of Boolean polynomials. For example, a new useless-pair criterion for Gröbner basis computations in Boolean rings is introduced. One of the motivations for our work is the growing importance of formal hardware and software verification based on Boolean expressions, which suffer – apart from the complexity of the problems – from the lack of an adequate treatment of arithmetic components. We are convinced that algebraic methods are more suited, and we believe that our preliminary implementation shows that Gröbner bases on specific data structures can be capable of handling problems of industrial size.
M. Brickenstein; A. Dreyer | report | Wed, 28 May 2008 10:24:18 +0200

On two-level preconditioners for flow in porous media
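The quotient-ring arithmetic described in the POLYBORI abstract above can be illustrated without ZDDs by representing a Boolean polynomial as a set of monomials, each monomial a set of variables; this is only a toy model of what POLYBORI implements efficiently:

```python
# A Boolean polynomial as a set of monomials, each monomial a frozenset of
# variable names. Addition is symmetric difference (coefficients in GF(2));
# multiplication uses set union, which enforces the field equation x^2 = x.

def poly_add(p, q):
    """Add two Boolean polynomials: equal monomials cancel in pairs (mod 2)."""
    return p ^ q

def poly_mul(p, q):
    """Multiply two Boolean polynomials; frozenset union collapses x^2 to x."""
    result = set()
    for m1 in p:
        for m2 in q:
            m = m1 | m2          # x^2 = x: repeated variables collapse
            result ^= {m}        # coefficient arithmetic mod 2
    return result

x = {frozenset({"x"})}
one = {frozenset()}
# (x + 1) * x = x^2 + x = x + x = 0 in the Boolean ring:
zero = poly_mul(poly_add(x, one), x)
```

The set-of-sets view makes it plausible why decision-diagram sharing pays off: huge Boolean polynomials have massive structural overlap between monomials.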
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1975
A two-level domain decomposition preconditioner for 3D flows in anisotropic, highly heterogeneous porous media is presented. An accurate finite volume discretization based on multipoint flux approximation (MPFA) for the 3D pressure equation is employed to account for the jump discontinuities of full permeability tensors. A DD/MG-type preconditioner for the above-mentioned problem is developed. The coarse-scale operator is obtained from a homogenization-type procedure. The influence of the overlap as well as the influence of the smoother and the cell problem formulation is studied. Results from numerical experiments are presented and discussed.
R. Ewing; O. Iliev; R. Lazarov; I. Rybak | report | Wed, 28 May 2008 10:24:09 +0200

On upscaling heat conductivity for a class of industrial problems
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1974
Calculating the effective heat conductivity for a class of industrial problems is discussed. The composite materials considered are glass and metal foams, fibrous materials, and the like, used in insulation or in advanced heat exchangers. These materials are characterized by a very complex internal structure, by a low volume fraction of the higher conductive material (glass or metal), and by a large volume fraction of air. Homogenization theory (when applicable) allows one to calculate the effective heat conductivity of composite media by post-processing the solutions of special cell problems for representative elementary volumes (REV). Different formulations of such cell problems are considered and compared here. Furthermore, the size of the REV is studied numerically for some typical materials. Fast algorithms for solving the cell problems for this class of problems are presented and discussed.
O. Iliev; I. Rybak; J. Willems | report | Wed, 28 May 2008 10:24:01 +0200

On approximation property of multipoint flux approximation method
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1973
The approximation property of the multipoint flux approximation (MPFA) approach for elliptic equations with discontinuous full tensor coefficients is discussed here. A finite volume discretization of the above problem is presented for the case of jump discontinuities of the permeability tensor. First-order approximation of the fluxes is proved. Results from numerical experiments are presented and discussed.
O. Iliev; I. Rybak | report | Wed, 28 May 2008 10:23:55 +0200

On numerical upscaling for flows in heterogeneous porous media
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1972
A numerical upscaling approach, NU, for solving multiscale elliptic problems is discussed. The main components of NU are: i) local solution of auxiliary problems in grid blocks and formal upscaling of the obtained results to build a coarse-scale equation; ii) global solution of the upscaled coarse-scale equation; and iii) reconstruction of a fine-scale solution by solving local block problems on a dual coarse grid. In its structure, NU is similar to other methods for solving multiscale elliptic problems, such as the multiscale finite element method (MsFEM), the multiscale mixed finite element method, the numerical subgrid upscaling method, the heterogeneous multiscale method, and the multiscale finite volume method. The difference from those methods lies in the way the coarse-scale equation is built and solved, and in the way the fine-scale solution is reconstructed. Essential components of the NU approach presented here are the formal homogenization in the coarse blocks and the usage of the so-called multipoint flux approximation method, MPFA. Unlike the usual usage of MPFA as a discretization method for single-scale elliptic problems with discontinuous tensor coefficients, we consider its usage as part of a numerical upscaling approach. The main aim of this paper is to compare NU with the MsFEM. In particular, it is shown that the resonance effect, which limits the application of the multiscale FEM, does not appear, or is significantly relaxed, when the numerical upscaling approach presented here is applied.
O. Iliev; I. Rybak | report | Wed, 28 May 2008 10:23:47 +0200

Kernel Fisher discriminant functions – a concise and rigorous introduction
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1971
In this article, the application of kernel functions – the so-called »kernel trick« – in the context of Fisher's approach to linear discriminant analysis is described for data sets subdivided into two groups and having real attributes. The relevant facts about functional Hilbert spaces and kernel functions, including their proofs, are presented. The approximative algorithm published in [Mik3] to compute a discriminant function given the data and a kernel function is briefly reviewed. As an illustration of the technique, an artificial data set is analysed using the algorithm just mentioned.
H. Knaf | report | Wed, 28 May 2008 10:23:40 +0200

Resampling-Methoden zur mse-Korrektur und Anwendungen in der Betriebsfestigkeit
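For the linear (non-kernelized) case underlying the article above, Fisher's discriminant direction is w = S_w^{-1}(m1 - m2); the kernel trick then replaces inner products by kernel evaluations. A sketch of the linear case on synthetic two-group data (illustrative only, not the algorithm from [Mik3]):

```python
import numpy as np

def fisher_discriminant(X1, X2):
    """Fisher's linear discriminant for two groups: the direction
    w = S_w^{-1} (m1 - m2) maximizing between- vs. within-group scatter."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # pooled within-class scatter matrix
    S_w = np.cov(X1, rowvar=False) * (len(X1) - 1) \
        + np.cov(X2, rowvar=False) * (len(X2) - 1)
    w = np.linalg.solve(S_w, m1 - m2)
    threshold = 0.5 * w @ (m1 + m2)   # midpoint decision threshold
    return w, threshold

rng = np.random.default_rng(1)
X1 = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))
X2 = rng.normal(loc=[2, 2], scale=0.5, size=(50, 2))
w, t = fisher_discriminant(X1, X2)
# classify a point p into group 1 if w @ p > t, else into group 2
```

The kernelized version works with the same objective, but expresses w as a combination of mapped training points so that only kernel values are ever needed.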
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1970
Safety-relevant components in the automotive industry are required to show, in the field, at most a failure proportion p0 up to a time/mileage q0. This quantile is verified in a series of tests in which the components are cyclically loaded with a typical force until a certain, predefined damage pattern occurs, and the number Ti of cycles ("load cycles") is recorded as the lifetime. Typically, the sample size N is very small (N < 10), while at the same time an extreme quantile 0 < p0 < 0.1 is to be verified. If a Weibull or lognormal distribution is used as the lifetime distribution, the quantile estimators exhibit a pronounced bias, which needs to be removed. Since this bias is usually positive, components could be classified as fit for series production even though they may fall significantly short of the requirements. Confidence intervals for quantiles are computed via delta methods, which likewise yield poor results (in the form of a too low empirical significance of left-sided intervals). In the following, generalizations of the bootstrap and jackknife bias corrections are presented, which attempt not only to remove the bias but to directly reduce the mean squared error of the estimator as far as possible. Simulation studies show that this succeeds for small sample sizes. In addition, it is investigated to what extent the method, in combination with the bootstrap quantile method, yields an improved interval estimator for quantiles. For this purpose, simulated data are considered whose parameters are representative of lifetime distributions of safety-relevant components.
S. Feth; J. Franke; M. Speckert | report | Wed, 28 May 2008 10:23:33 +0200

Dynamics of curved viscous fibers with surface tension
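The classical bootstrap bias correction that the report above generalizes replaces an estimator theta_hat by 2*theta_hat minus the mean of its bootstrap replicates. A minimal sketch for a quantile estimator, with illustrative lifetime values (not data from the report):

```python
import numpy as np

def bias_corrected_quantile(sample, p, n_boot=2000, seed=0):
    """Bootstrap bias correction of a plug-in quantile estimator:
    corrected = 2 * theta_hat - mean(bootstrap replicates)."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample, dtype=float)
    theta_hat = np.quantile(sample, p)
    boot = np.array([
        np.quantile(rng.choice(sample, size=len(sample), replace=True), p)
        for _ in range(n_boot)
    ])
    bias_estimate = boot.mean() - theta_hat
    return theta_hat - bias_estimate   # = 2 * theta_hat - boot.mean()

# Hypothetical small lifetime sample (N = 8), cycles to failure:
lifetimes = [1.2e5, 2.3e5, 2.9e5, 3.4e5, 4.1e5, 4.8e5, 5.5e5, 7.0e5]
q_corrected = bias_corrected_quantile(lifetimes, p=0.1)
```

The report's mse-oriented generalizations go further by tuning the correction to shrink the whole mean squared error, not just the bias term.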
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1969
In this paper we extend the slender body theory for the dynamics of a curved inertial viscous Newtonian fiber [23] by including surface tension in the systematic asymptotic framework and deducing boundary conditions for the free fiber end, as it occurs in rotational spinning processes of glass fibers. The fiber flow is described by a three-dimensional free boundary value problem in terms of instationary incompressible Navier-Stokes equations, neglecting temperature dependence. From standard regular expansion techniques in powers of the slenderness parameter we derive asymptotically leading-order balance laws for mass and momentum, combining the inner viscous transport with unrestricted motion and shape of the fiber center-line, which becomes important in the practical application. For the numerical investigation of the effects of surface tension, viscosity, gravity and rotation on the fiber behavior we apply a finite volume method with implicit flux discretization.
N. Marheineke; R. Wegener | report | Wed, 28 May 2008 10:23:24 +0200

On parallel numerical algorithms for simulating industrial filtration problems
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1968
The performance of oil filters used in the automotive industry can be significantly improved, especially when computer simulation is an essential component of the design process. In this paper, we consider parallel numerical algorithms for solving mathematical models describing the process of filtration, i.e. filtering solid particles out of liquid oil. The Navier-Stokes-Brinkmann system of equations is used to describe the laminar flow of incompressible isothermal oil. The space discretization in the complicated filter geometry is based on the finite volume method. Special care is taken for an accurate approximation of velocity and pressure on the interface between the fluid and the porous media. The time discretization used here is a proper modification of the fractional time step discretization (cf. the Chorin scheme) of the Navier-Stokes equations, where the Brinkmann term is considered at both the prediction and the correction substeps. A data decomposition method is used to develop a parallel algorithm, where the domain is distributed among processors by using a structured reference grid. The MPI library is used to implement the data communication part of the algorithm. A theoretical model is proposed for estimating the complexity of the given parallel algorithm, and a scalability analysis is done on the basis of this model. Results of computational experiments are presented, and the accuracy and efficiency of the parallel algorithm are tested on real industrial geometries.
R. Ciegis; O. Iliev; Z. Lakdawala | report | Wed, 28 May 2008 10:23:18 +0200

Modeling and simulation of the pressing section of a paper machine
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1967
We are concerned with modeling and simulation of the pressing section of a paper machine. We state a two-dimensional model of a press nip which takes into account elasticity and flow phenomena. Nonlinear filtration laws are incorporated into the flow model. We present a numerical solution algorithm and a numerical investigation of the model with special focus on inertia effects.
S. Rief | report | Wed, 28 May 2008 10:23:11 +0200

Hydrodynamic limit of the Fokker-Planck-equation describing fiber lay-down processes
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1966
In this paper, a stochastic model [5] for the turbulent fiber lay-down in the industrial production of nonwoven materials is extended by including a moving conveyor belt. In the hydrodynamic limit corresponding to large noise values, the transient and stationary joint probability distributions are determined using the method of multiple scales and the Chapman-Enskog method. Moreover, exponential convergence towards the stationary solution is proven for the reduced problem. For special choices of the industrial parameters, the stochastic limit process is an Ornstein-Uhlenbeck process. It is a good approximation of the fiber motion even for moderate noise values. Moreover, as shown by Monte Carlo simulations, the limiting process can be used to assess the quality of nonwoven materials in the industrial application by determining distributions of functionals of the process.
L. Bonilla; T. Götz; A. Klar; N. Marheineke; R. Wegener | report | Wed, 28 May 2008 10:23:04 +0200

Numerical study of two-grid preconditioners for 1d elliptic problems with highly oscillating discontinuous coefficients
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1964
Various advanced two-level iterative methods are studied numerically and compared with each other in conjunction with finite volume discretizations of symmetric 1-D elliptic problems with highly oscillatory discontinuous coefficients. Some of the methods considered rely on the homogenization approach for deriving the coarse grid operator. This approach is considered here as an alternative to the well-known Galerkin approach for deriving coarse grid operators. Different intergrid transfer operators are studied, primary consideration being given to the use of so-called problem-dependent prolongation. The two-grid methods considered are used both as solvers and as preconditioners for the Conjugate Gradient method. Recent approaches, such as the hybrid domain decomposition method introduced by Vassilevski and the global-local iterative procedure proposed by Durlofsky et al., are also discussed. A two-level method converging in one iteration in the case where the right-hand side is a function of the coarse variable only is introduced and discussed. Such fast convergence for problems with discontinuous coefficients arbitrarily varying on the fine scale is achieved by a problem-dependent selection of the coarse grid combined with problem-dependent prolongation on a dual grid. The results of numerical experiments are presented to illustrate the performance of the studied approaches.
O. Iliev; R. Lazarov; J. Willems | report | Wed, 28 May 2008 10:22:58 +0200

Parallel software tool for decomposing and meshing of 3d structures
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1963
An algorithm for the automatic parallel generation of three-dimensional unstructured computational meshes based on geometrical domain decomposition is proposed in this paper. A software package built upon the proposed algorithm is described. Several practical examples of mesh generation on multiprocessor computational systems are given. It is shown that the developed parallel algorithm enables us to reduce the mesh generation time significantly (by dozens of times). Moreover, it easily produces meshes with on the order of 5 · 10^7 elements, whose construction on a single CPU is problematic. Questions of time consumption, efficiency of computations, and quality of the generated meshes are also considered.
E. Ivanov; O. Gluchshenko; H. Andrä; A. Kudryavtsev | report | Wed, 28 May 2008 10:22:50 +0200

Smooth intensity maps and the Bortfeld-Boyer sequencer
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1962
It has been empirically verified that smoother intensity maps can be expected to produce shorter sequences when step-and-shoot collimation is the method of choice. This work studies the length of sequences obtained by the sequencing algorithm of Bortfeld and Boyer using a probabilistic approach. The results of this work build a theoretical foundation for the up to now only empirically validated fact that if smoothness of intensity maps is considered during their calculation, the solutions can be expected to be more easily applied.
Ph. Süss; K.-H. Küfer | report | Wed, 28 May 2008 10:22:43 +0200
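The connection between smoothness and sequence length studied above can be illustrated for a single row of an integer intensity map: a sweep-style decomposition opens and closes leaf intervals at the upward and downward jumps of the profile, so delivery effort grows with the row's total positive variation. A toy sketch (a simplified unit-weight decomposition, not the Bortfeld-Boyer algorithm itself):

```python
def positive_variation(row):
    """Total positive variation of one intensity-map row: the sum of all
    upward jumps, scanning left to right (with a leading zero)."""
    total, prev = 0, 0
    for v in row:
        if v > prev:
            total += v - prev
        prev = v
    return total

def sweep_segments(row):
    """Decompose one integer row into unit-weight leaf intervals:
    at each unit level, open an interval wherever the profile reaches it."""
    segments = []
    level = 1
    while True:
        opens = [i for i in range(len(row)) if row[i] >= level
                 and (i == 0 or row[i - 1] < level)]
        if not opens:
            break
        closes = [i for i in range(len(row)) if row[i] >= level
                  and (i == len(row) - 1 or row[i + 1] < level)]
        segments += list(zip(opens, closes))
        level += 1
    return segments

smooth = [1, 2, 2, 1]
peaky = [2, 0, 2, 0]
```

Although both rows deliver the same total intensity, the peaky profile has twice the positive variation and needs twice as many unit segments, which is the mechanism behind "smoother maps, shorter sequences".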