The paper presents a fast implementation of a constructive method to generate a special class of low-discrepancy sequences based on von Neumann-Kakutani transformations. Such sequences can be used in various simulation codes that require a number of uniformly distributed random numbers on the unit interval. From a theoretical point of view, the uniformity of a sequence is measured in terms of its discrepancy, a special distance between a finite set of points and the uniform distribution on the unit interval. Numerical results are given on the cost efficiency of different generators on different hardware architectures, as well as on the corresponding uniformity of the sequences. As an example of the efficient use of low-discrepancy sequences in a complex simulation code, results are presented for the simulation of a hypersonic rarefied gas flow.
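The abstract does not spell out the construction, but the best-known orbit of a von Neumann-Kakutani transformation is the base-b van der Corput sequence, obtained by digit reversal. A minimal sketch of that standard construction (the paper's actual fast implementation is not shown here):

```python
def radical_inverse(n, base=2):
    """Digit-reversal (radical inverse) of the integer n in the given base."""
    x, f = 0.0, 1.0 / base
    while n > 0:
        n, d = divmod(n, base)
        x += d * f
        f /= base
    return x

def van_der_corput(count, base=2):
    """First `count` points of the base-b van der Corput sequence,
    i.e. the orbit of 0 under the von Neumann-Kakutani adding machine."""
    return [radical_inverse(n, base) for n in range(count)]

points = van_der_corput(8)
# points = [0.0, 0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875]
```

The star discrepancy of the first N points decays like O(log N / N), compared with the O(1/sqrt(N)) spread of pseudo-random samples, which is what makes such sequences attractive in simulation codes.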
This paper considers the numerical solution of a transmission boundary-value problem for the time-harmonic Maxwell equations with the help of a special finite volume discretization. Applying this technique to several three-dimensional test problems, we obtain large, sparse, complex linear systems, which are solved using BiCG, CGS, BiCGSTAB, and GMRES. We combine these methods with suitably chosen preconditioning matrices and compare their speed of convergence.
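The solver-plus-preconditioner pattern described here can be illustrated with standard library tools. A minimal sketch using SciPy's BiCGSTAB with an incomplete-LU preconditioner on a toy complex system (a stand-in matrix; the paper's Maxwell discretizations are of course much larger):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy stand-in for the sparse complex systems of the finite volume
# discretization: a complex-shifted tridiagonal matrix on a 1-D grid.
n = 200
main = (2.0 + 0.5j) * np.ones(n)
off = -np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")
b = np.ones(n, dtype=complex)

# Incomplete-LU factorization used as a preconditioner M ~ A^{-1}.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, matvec=ilu.solve, dtype=complex)

x, info = spla.bicgstab(A, b, M=M)
# info == 0 signals convergence to the default relative tolerance
```

Swapping `spla.bicgstab` for `spla.cgs` or `spla.gmres` reproduces the kind of method comparison the paper performs.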
Discrete families of functions with the property that every function in a certain space can be represented by its formal Fourier series expansion are developed on the sphere. A Fourier-series-type expansion obviously holds if the family is an orthonormal basis of a Hilbert space, but it can also hold in situations where the family is not orthogonal and is overcomplete. Furthermore, all functions in our approach are axisymmetric (depending only on the spherical distance), so that they can be used adequately in (rotation-)invariant pseudodifferential equations on the sphere. Three classes of frames are considered: (i) Abel-Poisson frames, (ii) Gauss-Weierstrass frames, and (iii) frames consisting of locally supported kernel functions. Abel-Poisson frames form families of harmonic functions and provide us with powerful approximation tools in potential theory. Gauss-Weierstrass frames are intimately related to the diffusion equation on the sphere and play an important role in multiscale descriptions of image processing on the sphere. The third class enables us to discuss spherical Fourier expansions by means of axisymmetric finite elements.
Spline functions that interpolate data given on the sphere are developed in a weighted Sobolev space setting. The flexibility of the weights makes it possible to choose the approximating function in a way that emphasizes attributes desirable for the particular application area. Examples show that certain choices of the weight sequences yield known methods. A pointwise convergence theorem containing explicit constants yields a usable error bound.
Many discrepancy principles are known for choosing the parameter \(\alpha\) in the regularized operator equation \((T^*T+ \alpha I)x_\alpha^\delta = T^*y^\delta\), \(\|y-y^\delta\|\leq \delta\), in order to approximate the minimal norm least-squares solution of the operator equation \(Tx=y\). In this paper we consider a class of discrepancy principles for choosing the regularization parameter when \(T^*T\) and \(T^*y^\delta\) are approximated by \(A_n\) and \(z_n^\delta\) respectively, with \(A_n\) not necessarily self-adjoint. This procedure generalizes the work of Engl and Neubauer (1985), and particular cases of the results are applicable to the regularized projection method as well as to a degenerate kernel method considered by Groetsch (1990).
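For orientation, the prototype of such rules is Morozov's classical discrepancy principle: choose the largest \(\alpha\) whose residual matches the noise level. A simplified finite-dimensional sketch (exact \(T\), no perturbed operators \(A_n\), \(z_n^\delta\); the paper's setting is more general):

```python
import numpy as np

def tikhonov(T, y_delta, alpha):
    """Tikhonov step: solve (T^T T + alpha I) x = T^T y_delta (real case)."""
    n = T.shape[1]
    return np.linalg.solve(T.T @ T + alpha * np.eye(n), T.T @ y_delta)

def choose_alpha(T, y_delta, delta, tau=1.1):
    """Largest alpha on a geometric grid with ||T x_alpha - y_delta|| <= tau * delta
    (Morozov's discrepancy principle; tau slightly above 1)."""
    x = None
    for alpha in 10.0 ** np.arange(2, -12, -1):     # descending grid
        x = tikhonov(T, y_delta, alpha)
        if np.linalg.norm(T @ x - y_delta) <= tau * delta:
            return alpha, x
    return 0.0, x

# mildly ill-conditioned toy problem with noise of known level delta
rng = np.random.default_rng(0)
T = np.diag([1.0, 1e-1, 1e-2])
x_true = np.array([1.0, 1.0, 1.0])
noise = rng.standard_normal(3)
delta = 1e-3
y_delta = T @ x_true + delta * noise / np.linalg.norm(noise)
alpha, x = choose_alpha(T, y_delta, delta)
```

Replacing `T.T @ T` and `T.T @ y_delta` here by approximations `A_n` and `z_n_delta` would correspond to the generalized setting the paper analyzes.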
On a family F of probability measures on a measure space we consider the Hellinger and Kullback-Leibler distances. We show that under suitable regularity conditions Jeffreys' prior is proportional to the k-dimensional Hausdorff measure w.r.t. the Hellinger distance, respectively to the k/2-dimensional Hausdorff measure w.r.t. the Kullback-Leibler distance. The proof is based on an area formula for the Hausdorff measure w.r.t. generalized distances.
A compact subset E of the complex plane is called removable if all bounded analytic functions on its complement are constant or, equivalently, if its analytic capacity vanishes. The problem of finding a geometric characterization of the removable sets is more than a hundred years old and still not completely solved.
Questions arising from Statistical Decision Theory, Bayes Methods and other probability theoretic fields lead to concepts of orthogonality of a family of probability measures. In this paper we therefore give a sketch of a generalized information theory which is very helpful in considering and answering those questions. In this adapted information theory, Shannon's classical transition channels modelled by finite stochastic matrices are replaced by compact families of probability measures that are uniformly integrable. These channels are characterized by concepts such as information rate and capacity and by optimal priors and the optimal mixture distribution. For practical studies we introduce an algorithm to calculate the capacity of the whole probability family which is applicable even for general output spaces. We then explain how the algorithm works and compare its numerical costs with those of the classical Arimoto-Blahut algorithm.
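For reference, the classical Arimoto-Blahut iteration against which the paper compares its algorithm can be sketched for the finite case (stochastic matrix channels), which is the setting the paper generalizes:

```python
import numpy as np

def arimoto_blahut(W, iters=200):
    """Capacity (in nats) of a discrete memoryless channel.

    W[i, j] = probability of output j given input i (rows sum to 1).
    Assumes every output column is reachable under some input."""
    def kl_rows(W, q):
        # row-wise relative entropy D(W_i || q), with the 0 log 0 := 0 convention
        with np.errstate(divide="ignore", invalid="ignore"):
            t = np.where(W > 0, W * np.log(W / q), 0.0)
        return t.sum(axis=1)

    p = np.full(W.shape[0], 1.0 / W.shape[0])    # start from the uniform prior
    for _ in range(iters):
        d = kl_rows(W, p @ W)                    # divergence of each row from the mixture
        p = p * np.exp(d)                        # Blahut's multiplicative update
        p /= p.sum()
    return float(p @ kl_rows(W, p @ W)), p       # capacity and an optimal prior

# binary symmetric channel with crossover probability 0.1
C, p_opt = arimoto_blahut(np.array([[0.9, 0.1], [0.1, 0.9]]))
# C ~ 0.368 nats (~ 0.531 bits), attained by the uniform prior
```

The paper's algorithm replaces the rows of `W` by a compact, uniformly integrable family of measures, so that the maximization runs over priors on that family rather than over a finite simplex.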
It is of basic interest to assess the quality of the decisions of a statistician, based on the outcoming data of a statistical experiment, in the context of a given model class P of probability distributions. The statistician picks a particular distribution P, suffering a loss by not picking the 'true' distribution P'. There are several relevant loss functions, one being based on the relative entropy function or Kullback-Leibler information distance. In this paper we prove a general 'minimax risk equals maximin (Bayes) risk' theorem for the Kullback-Leibler loss under the hypothesis of a dominated and compact family of distributions over a Polish observation space with suitably integrable densities. We also find that there is always an optimal Bayes strategy (i.e. a suitable prior) achieving the minimax value. Further, we see that every such minimax optimal strategy leads to the same distribution P in the convex closure of the model class. Finally, we give some examples to illustrate the results and to indicate how the minimax result is reflected in the structure of least favorable priors. This paper is mainly based on parts of the author's doctoral thesis.