KLUEDO RSS Feed
KLUEDO Dokumente/documents
https://kluedo.ub.uni-kl.de/index/index/
Mon, 03 Apr 2000 00:00:00 +0200

A Finite-Volume Particle Method for Compressible Flows
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/739
We derive a new class of particle methods for conservation laws, based on numerical flux functions that model the interactions between moving particles. The derivation is similar to that of classical finite-volume methods, except that the fixed grid structure of the finite-volume method is replaced by so-called mass packets of particles. We give numerical results for a shock-wave solution of Burgers' equation as well as for the well-known one-dimensional shock-tube problem.
Dietmar Hietel; Konrad Steiner; Jens Struckmeier (preprint)
Mon, 03 Apr 2000 00:00:00 +0200

Helmholtz Resonators with Large Aperture
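As a point of reference for the finite-volume particle method above (docId 739), the classical fixed-grid finite-volume scheme it generalizes can be sketched for Burgers' equation. This is an illustrative baseline, not the authors' method: grid size, CFL number, and Riemann data are arbitrary choices, and the paper's contribution is precisely to replace the fixed cells below with moving mass packets.

```python
# Classical fixed-grid finite-volume (Godunov) scheme for Burgers' equation,
# u_t + (u^2/2)_x = 0 -- the baseline that the particle method of this entry
# replaces with moving mass packets. All numbers below are illustrative.

def godunov_flux(ul, ur):
    """Exact Godunov numerical flux for f(u) = u^2 / 2."""
    if ul <= ur:                      # rarefaction: minimise f on [ul, ur]
        if ul <= 0.0 <= ur:
            return 0.0
        return min(0.5 * ul * ul, 0.5 * ur * ur)
    return max(0.5 * ul * ul, 0.5 * ur * ur)   # shock: maximise f on [ur, ul]

def step(u, dx, dt):
    """One conservative update: u_i -= dt/dx * (F_{i+1/2} - F_{i-1/2})."""
    flux = [godunov_flux(u[i], u[i + 1]) for i in range(len(u) - 1)]
    unew = u[:]                       # boundary cells are kept fixed
    for i in range(1, len(u) - 1):
        unew[i] = u[i] - dt / dx * (flux[i] - flux[i - 1])
    return unew

# Riemann data u=1 / u=0: a shock travelling with speed (1 + 0) / 2 = 1/2.
n = 100
dx = 1.0 / n
dt = 0.5 * dx                         # CFL: dt/dx * max|u| = 0.5 <= 1
u = [1.0 if i * dx < 0.5 else 0.0 for i in range(n)]
for _ in range(40):                   # evolve to t = 0.2; shock near x = 0.6
    u = step(u, dx, dt)
```

The scheme is first order, so the shock is captured sharply but smeared over a few cells; the conservative update form is what the paper's flux-based particle interaction mimics without a grid.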
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/740
The lowest resonant frequency of a cavity resonator is usually approximated by the classical Helmholtz formula. However, if the opening is rather large and the front wall is narrow, this formula is no longer valid. Here we present a correction which is of third order in the ratio of the diameters of aperture and cavity. In addition to its high accuracy, it allows one to estimate the damping due to radiation. The result is found by applying the method of matched asymptotic expansions. The correction contains form factors describing the shapes of opening and cavity; they are computed for a number of standard geometries. Results are compared with numerical computations.
Jan Mohring (preprint)
Mon, 03 Apr 2000 00:00:00 +0200

Damage Diagnosis of Rotors: Application of Hilbert Transform and Multi-Hypothesis Testing
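For the Helmholtz resonator entry above (docId 740), the classical formula that the paper corrects is f0 = c/(2π)·√(S/(V·L_eff)). A minimal sketch, assuming a common flanged end correction of 0.85 times the neck radius per end and hypothetical bottle-like dimensions; the paper's point is that this approximation breaks down for large apertures:

```python
import math

def helmholtz_frequency(c, area, volume, neck_length, neck_radius):
    """Classical lowest resonance f0 = c/(2*pi) * sqrt(S / (V * L_eff)).

    L_eff adds an assumed flanged end correction of 0.85 * radius per end.
    This is the textbook formula, valid only for small apertures."""
    l_eff = neck_length + 2 * 0.85 * neck_radius
    return c / (2.0 * math.pi) * math.sqrt(area / (volume * l_eff))

# Hypothetical resonator: 1 litre cavity, 5 cm neck of 1 cm radius, air.
f0 = helmholtz_frequency(c=343.0, area=math.pi * 0.01 ** 2,
                         volume=1e-3, neck_length=0.05, neck_radius=0.01)
```

For these dimensions the formula lands in the low hundreds of hertz, the familiar regime of bottle-like resonators.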
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/741
In this paper, a combined approach to damage diagnosis of rotors is proposed. The intention is to employ signal-based as well as model-based procedures for an improved detection of the size and location of the damage. In a first step, Hilbert transform signal processing techniques allow for a computation of the signal envelope and the instantaneous frequency, so that various types of non-linearities due to damage may be identified and classified based on measured response data. In a second step, a multi-hypothesis bank of Kalman filters is employed for the detection of the size and location of the damage, based on the information about the type of damage provided by the results of the Hilbert transform.
Michael Feldmann; Susanne Seibold (preprint)
Mon, 03 Apr 2000 00:00:00 +0200

Robust Reliability of Diagnostic Multi-Hypothesis Algorithms: Application to Rotating Machinery
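The first diagnosis step in the rotor entry above (docId 741), envelope extraction via the Hilbert transform, can be sketched as follows. The signal is hypothetical and the O(n²) DFT is for clarity only; a practical code would use an FFT (scipy.signal.hilbert implements the same construction):

```python
# Envelope extraction with the discrete Hilbert transform: form the analytic
# signal by zeroing the negative DFT frequencies and doubling the positive
# ones, then take its magnitude. Illustrative sketch, not the paper's code.
import cmath
import math

def analytic_signal(x):
    """Analytic signal x + i*H[x] via the frequency-domain method."""
    n = len(x)
    X = [sum(x[k] * cmath.exp(-2j * math.pi * m * k / n) for k in range(n))
         for m in range(n)]
    h = [0.0] * n                     # spectral multiplier of the method
    h[0] = 1.0
    for m in range(1, (n + 1) // 2):
        h[m] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    Xa = [h[m] * X[m] for m in range(n)]
    return [sum(Xa[m] * cmath.exp(2j * math.pi * m * k / n) for m in range(n)) / n
            for k in range(n)]

# Amplitude-modulated vibration-like signal with envelope 1 + 0.5 cos(4 pi t).
n = 256
x = [(1.0 + 0.5 * math.cos(2 * math.pi * 2 * k / n))
     * math.sin(2 * math.pi * 32 * k / n) for k in range(n)]
envelope = [abs(z) for z in analytic_signal(x)]
# The instantaneous frequency is the derivative of the phase of the analytic
# signal, obtained the same way from cmath.phase.
```

Here all signal components have positive frequency, so the recovered envelope matches 1 + 0.5·cos(4πt) essentially exactly; on measured rotor data the envelope exposes amplitude non-linearities in the same way.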
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/742
Damage diagnosis based on a bank of Kalman filters, each one conditioned on a specific hypothesized system condition, is a well-recognized and powerful diagnostic tool. This multi-hypothesis approach can be applied to a wide range of damage conditions. In this paper, we focus on the diagnosis of cracks in rotating machinery. The question we address is how to optimize the multi-hypothesis algorithm with respect to the uncertainty of the spatial form and location of cracks and their resulting dynamic effects. First, we formulate a measure of the reliability of the diagnostic algorithm, and then we discuss modifications of the diagnostic algorithm that maximize this reliability. The reliability of a diagnostic algorithm is measured by the amount of uncertainty consistent with no failure of the diagnosis. Uncertainty is quantitatively represented with convex models.
Yakov Ben-Haim; Susanne Seibold (preprint)
Mon, 03 Apr 2000 00:00:00 +0200

Three-dimensional Radiative Heat Transfer in Glass Cooling Processes
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/743
For the numerical simulation of 3D radiative heat transfer in glasses and glass melts, practically applicable mathematical methods are needed to handle such problems optimally on workstation-class computers. Since the exact solution would require super-computer capabilities, we concentrate on approximate solutions with a high degree of accuracy. The following approaches are studied: 3D diffusion approximations and 3D ray-tracing methods.
Frank-Thomas Lentes; Norbert Siedow (preprint)
Mon, 03 Apr 2000 00:00:00 +0200

A hierarchy of models for multilane vehicular traffic PART I: Modeling
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/744
In the present paper multilane models for vehicular traffic are considered. A microscopic multilane model based on reaction thresholds is developed. Based on this model, an Enskog-like kinetic model is derived; in particular, care is taken to incorporate the correlations between the vehicles. From the kinetic model a fluid-dynamic model is derived, with the macroscopic coefficients deduced from the underlying kinetic model. Numerical simulations for all three levels of description, together with a comparison of the results, are presented in [10].
Axel Klar; Raimund Wegener (preprint)
Mon, 03 Apr 2000 00:00:00 +0200

A hierarchy of models for multilane vehicular traffic PART II: Numerical and stochastic investigations
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/745
In this paper the work presented in [6] is continued. The present paper contains detailed numerical investigations of the models developed there. A numerical method to treat the kinetic equations obtained in [6] is presented, and results of the simulations are shown. Moreover, the stochastic correlation model used in [6] is described and investigated in more detail.
Axel Klar; Raimund Wegener (preprint)
Mon, 03 Apr 2000 00:00:00 +0200

Boundary Layers and Domain Decomposition for Radiative Heat Transfer and Diffusion Equations: Applications to Glass Manufacturing Processes
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/746
In this paper domain decomposition methods for radiative transfer problems including conductive heat transfer are treated. The paper focuses on semi-transparent materials, like glass, and the associated conditions at the interface between the materials. Using asymptotic analysis we derive conditions for the coupling of the radiative transfer equations and a diffusion approximation. Several test cases are treated and a problem appearing in glass manufacturing processes is computed. The results clearly show the advantages of a domain decomposition approach: accuracy equivalent to the solution of the global radiative transfer problem is achieved, whereas computation time is strongly reduced.
Axel Klar; Norbert Siedow (preprint)
Mon, 03 Apr 2000 00:00:00 +0200

Heterogeneous catalysis modelling and numerical simulation in rarefied gas flows
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/747
A new approach is proposed to model and numerically simulate heterogeneous catalysis in rarefied gas flows. It is developed to satisfy all of the following requirements: i) describe the gas phase at the microscopic scale, as required in rarefied flows; ii) describe the wall at the macroscopic scale, to avoid prohibitive computational costs and to cover not only crystalline but also amorphous surfaces; iii) reproduce on average macroscopic laws correlated with experimental results; and iv) derive analytic models in a systematic and exact way. The problem is stated in the general framework of a non-static flow in the vicinity of a catalytic and non-porous surface (without ageing). It is shown that the exact and systematic resolution method based on the Laplace transform, introduced previously by the author to model collisions in the gas phase, can be extended to the present problem. The proposed approach is applied to the modelling of the Eley-Rideal and Langmuir-Hinshelwood recombinations, assuming that the coverage is locally at equilibrium. The models are developed considering one atomic species and extended to the general case of several atomic species. Numerical calculations show that the models derived in this way reproduce with accuracy behaviours observed experimentally.
Isabelle Choquet (preprint)
Mon, 03 Apr 2000 00:00:00 +0200

Efficient Texture Analysis of Binary Images
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/748
A new method of determining some characteristics of binary images is proposed, based on a special linear filtering. This technique enables the estimation of the area fraction, the specific line length, and the specific integral of curvature. Furthermore, the specific length of the total projection is obtained, which gives detailed information about the texture of the image. The influence of lateral and directional resolution depending on the size of the applied filter mask is discussed in detail. The technique includes a method of increasing directional resolution for texture analysis while keeping lateral resolution as high as possible.
Joachim Ohser; Bernd Steinbach; Christian Lang (preprint)
Mon, 03 Apr 2000 00:00:00 +0200

Homogenization for viscoelasticity of the integral type with aging and shrinkage
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/749
A multi-phase composite with periodically distributed inclusions with a smooth boundary is considered in this contribution. The composite component materials are supposed to be linear viscoelastic and aging (of the non-convolution integral type, for which the Laplace transform with respect to time is not effectively applicable) and are subjected to isotropic shrinkage. The free shrinkage deformation can be considered as a fictitious temperature deformation in the behavior law. The procedure presented in this paper proposes a way to determine average (effective homogenized) viscoelastic and shrinkage (temperature) composite properties and the homogenized stress field from known properties of the components. This is done by extending the asymptotic homogenization technique, known for purely elastic non-homogeneous bodies, to non-homogeneous thermo-viscoelasticity of the integral non-convolution type. Up to now, homogenization theory has not covered viscoelasticity of the integral type. Sanchez-Palencia (1980) and Francfort & Suquet (1987) (see [2], [9]) have considered homogenization for viscoelasticity of the differential form and only up to the first derivative order. Integral-modeled viscoelasticity is more general than the differential one and includes almost all known differential models. The homogenization procedure is based on the construction of an asymptotic solution with respect to a period of the composite structure. This reduces the original problem to some auxiliary boundary value problems of elasticity and viscoelasticity on the unit periodic cell, of the same type as the original non-homogeneous problem. The existence and uniqueness results for such problems were obtained for kernels satisfying some constraint conditions. This is done by extending the Volterra integral operator theory to Volterra operators with respect to time whose kernels are spatial linear operators for any fixed time variables. Some ideas of such an approach were proposed in [11] and [12], where Volterra operators with kernels depending additionally on a parameter were considered. This manuscript delivers results of the same nature for the case of the space-operator kernels.
Julia Orlik (preprint)
Mon, 03 Apr 2000 00:00:00 +0200

Inverse radiation therapy planning: a multiple objective optimisation approach
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/847
For some decades, radiation therapy has proved successful in cancer treatment. The major task of clinical radiation treatment planning is to realise, on the one hand, a high-level dose of radiation in the cancer tissue in order to obtain maximum tumour control; on the other hand, it is absolutely necessary to keep the unavoidable radiation in the tissue outside the tumour, particularly in organs at risk, as low as possible. No doubt, these two objectives of treatment planning (a high-level dose in the tumour, low radiation outside the tumour) are basically contradictory. Therefore, it is no surprise that inverse mathematical models with dose distribution bounds tend to be infeasible in most cases, and there is a need for approximations compromising between overdosing the organs at risk and underdosing the target volume. Differing from the currently used, time-consuming iterative approach, which measures deviation from an ideal (non-achievable) treatment plan using recursively chosen trial-and-error weights for the organs of interest, we take a new approach that avoids a priori weight choices and consider the treatment planning problem as a multiple objective linear programming problem: with each organ of interest, target tissue as well as organs at risk, we associate an objective function measuring the maximal deviation from the prescribed doses. We build up a database of relatively few efficient solutions representing and approximating the variety of Pareto solutions of the multiple objective linear programming problem. This database can be easily scanned by physicians looking for an adequate treatment plan with the aid of an appropriate online tool.
Horst W. Hamacher; K.-H. Küfer (preprint)
Mon, 03 Apr 2000 00:00:00 +0200

Considerations about the Estimation of the Size Distribution in Wicksell's Corpuscle Problem
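The "database of efficient solutions" idea in the radiotherapy planning entry above (docId 847) rests on Pareto efficiency: a plan is kept only if no other plan is at least as good in every objective and strictly better in one. A minimal sketch with hypothetical plan data (the paper solves a multiple objective LP; here only the final filtering step is shown):

```python
# Keep only Pareto-efficient treatment plans. Each plan is represented by a
# hypothetical objective vector of per-organ maximal dose deviations, all of
# which are to be minimised. Illustrative sketch, not the paper's algorithm.

def pareto_filter(plans):
    """Return the plans not dominated by any other plan."""
    def dominates(a, b):              # a is at least as good everywhere ...
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))   # ... and better somewhere
    return [p for i, p in enumerate(plans)
            if not any(dominates(q, p)
                       for j, q in enumerate(plans) if j != i)]

# Hypothetical (target underdose, organ-at-risk overdose) for five plans:
plans = [(0.0, 5.0), (1.0, 2.0), (2.0, 1.0), (2.0, 3.0), (4.0, 0.5)]
efficient = pareto_filter(plans)      # (2.0, 3.0) is dominated by (1.0, 2.0)
```

Scanning such a filtered set is what lets the physician trade off tumour coverage against organ sparing without choosing weights in advance.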
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/972
Wicksell's corpuscle problem deals with the estimation of the size distribution of a population of particles, all having the same shape, using a lower-dimensional sampling probe. This problem was originally formulated for particle systems occurring in the life sciences, but its solution is of current and increasing interest in materials science. From a mathematical point of view, Wicksell's problem is an inverse problem where the size distribution of interest is the unknown part of a Volterra equation. The problem is often regarded as ill-posed, because the structure of the integrand implies unstable numerical solutions. The accuracy of the numerical solutions is considered here using the condition number, which allows one to compare different numerical methods with different (equidistant) class sizes and which indicates, as one result, that a finite section thickness of the probe reduces the numerical problems. Furthermore, the relative error of estimation is computed, which can be split into two parts: one part consists of the relative discretization error, which increases with increasing class size, and the second part is the relative statistical error, which increases with decreasing class size. For both parts, upper bounds can be given, and their sum indicates an optimal class width depending on some specific constants.
Joachim Ohser; Konrad Sandau (preprint)
Fri, 18 Feb 2000 00:00:00 +0100

Solving nonconvex planar location problems by finite dominating sets
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/973
It is well known that some of the classical location problems with polyhedral gauges can be solved in polynomial time by finding a finite dominating set, i.e. a finite set of candidates guaranteed to contain at least one optimal location. In this paper it is first established that this result holds for a much larger class of problems than currently considered in the literature. The model for which this result can be proven includes, for instance, location problems with attraction and repulsion, and location-allocation problems. Next, it is shown that the approximation of general gauges by polyhedral ones in the objective function of our general model can be analyzed with regard to the subsequent error in the optimal objective value. For the approximation problem two different approaches are described, the sandwich procedure and the greedy algorithm. Both of these approaches lead, for fixed epsilon, to polynomial approximation algorithms with accuracy epsilon for solving the general model considered in this paper.
Emilio Carrizosa; Horst W. Hamacher; Rolf Klein; Stefan Nickel (preprint)
Fri, 18 Feb 2000 00:00:00 +0100

On the Analysis of Spatial Binary Images
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/845
This paper deals with the characterization of microscopically heterogeneous, but macroscopically homogeneous spatial structures. A new method is presented which is strictly based on integral-geometric formulae such as Crofton's intersection formulae and Hadwiger's recursive definition of the Euler number. The corresponding algorithms have clear advantages over other techniques. As an example of application we consider the analysis of spatial digital images produced by means of Computer Assisted Tomography.
Christian Lang; Joachim Ohser; Rudolf Hilfer (preprint)
Mon, 20 Sep 1999 00:00:00 +0200

On the Construction of Discrete Equilibrium Distributions for Kinetic Schemes
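The Euler number mentioned in the binary-image entry above (docId 845) can be computed additively from the cell complex of the foreground pixels, in the spirit of Hadwiger's recursive definition. A 2D sketch (the paper treats spatial 3D images, and the adjacency convention below, closed pixels touching at corners, is one of several possible choices):

```python
# Euler characteristic of a binary pixel image, computed as V - E + F over
# the cell complex of closed unit pixels. Illustrative 2D sketch; the paper
# itself works with spatial (3D) images and integral-geometric formulae.

def euler_number(img):
    """Euler characteristic of the union of closed unit pixels
    (foreground pixels touching at corners count as connected)."""
    pixels = {(x, y) for y, row in enumerate(img)
              for x, v in enumerate(row) if v}
    vertices, edges = set(), set()
    for (x, y) in pixels:             # collect each pixel's cells once
        for dx, dy in ((0, 0), (1, 0), (0, 1), (1, 1)):
            vertices.add((x + dx, y + dy))
        edges.add(('h', x, y)); edges.add(('h', x, y + 1))  # bottom, top
        edges.add(('v', x, y)); edges.add(('v', x + 1, y))  # left, right
    return len(vertices) - len(edges) + len(pixels)

# A ring of foreground pixels: one component with one hole -> Euler number 0.
ring = [[1, 1, 1],
        [1, 0, 1],
        [1, 1, 1]]
```

Because the counts are pure set sizes, the computation is local and additive, which is the property the integral-geometric algorithms in the paper exploit.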
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/848
A general approach to the construction of discrete equilibrium distributions is presented. Such distribution functions can be used to set up kinetic schemes as well as lattice Boltzmann methods. The general principles are also applied to the construction of Chapman-Enskog distributions which are used in kinetic schemes for the compressible Navier-Stokes equations.
Michael Junk (preprint)
Mon, 20 Sep 1999 00:00:00 +0200

A new discrete velocity method for Navier-Stokes equations
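A concrete instance of the discrete equilibrium distributions discussed in the entry above (docId 848) is the standard D2Q9 lattice Boltzmann equilibrium; it is a textbook construction, not necessarily the one derived in the paper, but it shows the defining property: the discrete moments reproduce density and momentum exactly.

```python
# Standard D2Q9 second-order discrete equilibrium (sound speed c_s^2 = 1/3).
# Illustrative textbook construction, shown to verify the moment conditions.

W = [4/9] + [1/9] * 4 + [1/36] * 4            # lattice weights
C = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
     (1, 1), (-1, 1), (-1, -1), (1, -1)]      # lattice velocities

def equilibrium(rho, ux, uy):
    """Discrete Maxwellian; by construction sum(feq) = rho and
    sum(feq * c) = rho * u hold exactly."""
    usq = ux * ux + uy * uy
    return [w * rho * (1.0 + 3.0 * (cx * ux + cy * uy)
                       + 4.5 * (cx * ux + cy * uy) ** 2
                       - 1.5 * usq)
            for w, (cx, cy) in zip(W, C)]

f = equilibrium(1.0, 0.1, -0.05)
rho = sum(f)                                   # zeroth moment: density
mom = (sum(fi * cx for fi, (cx, _) in zip(f, C)),
       sum(fi * cy for fi, (_, cy) in zip(f, C)))   # first moment: momentum
```

The weights satisfy Σw = 1, Σw·c = 0, and Σw·c⊗c = (1/3)·I, which is exactly what makes the density and momentum moments come out exactly.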
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/849
The relation between the lattice Boltzmann method, which has recently become popular, and the kinetic schemes, which are routinely used in computational fluid dynamics, is explored. A new discrete velocity model for the numerical solution of the Navier-Stokes equations for incompressible fluid flow is presented by combining both approaches. The new scheme can be interpreted as a pseudo-compressibility method and, for a particular choice of parameters, this interpretation carries over to the lattice Boltzmann method.
Michael Junk; S. V. Raghurama Rao (preprint)
Mon, 20 Sep 1999 00:00:00 +0200

Mathematics as a Key to Key Technologies
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/850
Helmut Neunzert (preprint)
Mon, 20 Sep 1999 00:00:00 +0200

On Center Cycles in Grid Graphs
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/846
Finding "good" cycles in graphs is a problem of great interest in graph theory as well as in locational analysis. We show that the center and median problems are NP-hard in general graphs. This result holds both for the variable cardinality case (i.e. all cycles of the graph are considered) and the fixed cardinality case (i.e. only cycles with a given cardinality p are feasible). Hence it is of interest to investigate special cases where the problem is solvable in polynomial time. In grid graphs, the variable cardinality case is, for instance, trivially solvable if the shape of the cycle can be chosen freely. If the shape is fixed to be a rectangle, one can analyse rectangles in grid graphs with, in sequence, fixed dimension, fixed cardinality, and variable cardinality. In all cases a complete characterization of the optimal cycles and closed-form expressions for the optimal objective values are given, yielding polynomial-time algorithms for all cases of center rectangle problems. Finally, it is shown that center cycles can be chosen as rectangles for small cardinalities, so that the center cycle problem in grid graphs is in these cases completely solved.
Horst W. Hamacher; Anita Schöbel (preprint)
Fri, 17 Sep 1999 00:00:00 +0200
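The center rectangle problem from the last entry (docId 846) can be illustrated by brute force: among all axis-aligned rectangles in a grid, pick one minimising the maximum distance to given demand vertices. The grid size and demand points are hypothetical, and the Manhattan metric used below assumes a full grid graph without holes; the paper's closed-form characterizations make such enumeration unnecessary.

```python
# Brute-force center rectangle in an n x n grid graph: minimise the maximum
# Manhattan distance from demand vertices to the rectangular cycle.
# Illustrative sketch only; the paper derives closed-form optima instead.
from itertools import product

def cycle_vertices(x1, y1, x2, y2):
    """Vertices of the rectangular cycle with corners (x1, y1), (x2, y2)."""
    ring = [(x, y1) for x in range(x1, x2 + 1)]        # bottom side
    ring += [(x, y2) for x in range(x1, x2 + 1)]       # top side
    ring += [(x, y) for x in (x1, x2) for y in range(y1 + 1, y2)]
    return ring

def center_rectangle(points, n):
    """Enumerate all rectangles and keep one with the smallest radius."""
    best, best_rect = None, None
    for x1, y1, x2, y2 in product(range(n), repeat=4):
        if x1 >= x2 or y1 >= y2:      # need a genuine cycle, not a segment
            continue
        verts = cycle_vertices(x1, y1, x2, y2)
        radius = max(min(abs(px - vx) + abs(py - vy) for vx, vy in verts)
                     for px, py in points)
        if best is None or radius < best:
            best, best_rect = radius, (x1, y1, x2, y2)
    return best_rect, best

# Four demand vertices in the corners of a 6 x 6 grid: the boundary
# rectangle passes through all of them, so the optimal radius is 0.
rect, radius = center_rectangle([(0, 0), (5, 5), (0, 5), (5, 0)], 6)
```

The enumeration is O(n^4) rectangles, which is why closed-form expressions for the optimal objective values, as given in the paper, matter even in this polynomially solvable special case.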