KLUEDO RSS Feed: Latest documents
https://kluedo.ub.uni-kl.de/index/index/
Thu, 26 Jul 2012 12:24:29 +0200

Minimization and Parameter Estimation for Seminorm Regularization Models with I-Divergence Constraints
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3218
This paper deals with the minimization of seminorms \(\|L\cdot\|\) on \(\mathbb R^n\) under the constraint of a bounded I-divergence \(D(b,H\cdot)\). The I-divergence is also known as Kullback-Leibler divergence and appears in many models in imaging science, in particular when dealing with Poisson data. Typically, \(H\) represents, e.g., a linear blur operator and \(L\) is some discrete derivative operator. Our preference for the constrained approach over
the corresponding penalized version is based on the fact that the I-divergence of data
corrupted, e.g., by Poisson noise or multiplicative Gamma noise can be estimated by statistical methods. Our minimization technique rests upon relations between constrained and penalized convex problems and resembles the idea of Morozov's discrepancy principle.
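The relation between the constrained and penalized problems can be illustrated with a toy sketch. This is not the paper's primal-dual algorithm: for illustration we take \(H\) as the identity (pure denoising) and replace the derivative-based seminorm by the squared norm \(\|x\|^2\), because the penalized problem then has a coordinatewise closed-form minimizer; all function names and parameter values below are hypothetical. A discrepancy-principle-style bisection on the regularization parameter then drives the I-divergence of the penalized solution to the prescribed constraint level.

```python
import numpy as np

def i_divergence(b, v):
    # I-divergence (Kullback-Leibler): D(b, v) = sum(b*log(b/v) - b + v)
    return float(np.sum(b * np.log(b / v) - b + v))

def solve_penalized(b, lam):
    # Coordinatewise minimizer of ||x||^2 + lam * D(b, x) over x > 0.
    # Stationarity gives 2*x^2 + lam*x - lam*b = 0; take the positive root.
    return (-lam + np.sqrt(lam**2 + 8.0 * lam * b)) / 4.0

def solve_constrained(b, tau, lam_lo=1e-8, lam_hi=1e8, n_iter=200):
    # D(b, x(lam)) decreases strictly in lam (larger lam pulls x toward b),
    # so log-scale bisection finds lam with D(b, x(lam)) = tau.
    for _ in range(n_iter):
        lam = np.sqrt(lam_lo * lam_hi)
        if i_divergence(b, solve_penalized(b, lam)) > tau:
            lam_lo = lam   # constraint violated: more weight on fidelity
        else:
            lam_hi = lam   # constraint satisfied: the norm may shrink x further
    return solve_penalized(b, lam_hi), lam_hi

b = np.array([1.0, 2.0, 3.0, 4.0])   # illustrative positive data
x, lam = solve_constrained(b, tau=0.1)
print(lam, i_divergence(b, x))        # the divergence sits at the level tau
```

The returned parameter plays exactly the role described in the abstract: the penalized problem with this parameter has the same solution as the constrained one, and that solution meets the I-divergence constraint.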
More precisely, we propose first-order primal-dual algorithms which reduce the problem to the solution of certain proximal minimization problems in each iteration step. The most interesting of these proximal minimization problems is an I-divergence constrained least squares problem. We solve this problem by connecting it to the corresponding I-divergence
penalized least squares problem with an appropriately chosen regularization parameter. Therefore, our algorithm produces not only a sequence of vectors which converges to a minimizer of the constrained problem but also a sequence of parameters which converges to a regularization parameter such that the penalized problem has the same solution as our constrained one. In other words, the solution of this penalized problem fulfills the I-divergence constraint. We provide the proofs which are necessary to understand
our approach and demonstrate the performance of our algorithms for different
image restoration examples.
Tanja Teuber; Gabriele Steidl; Raymond Honfu Chan
preprint
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3218
Thu, 26 Jul 2012 12:24:29 +0200

Denoising by Higher Order Statistics
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/2765
A standard approach for deducing a variational denoising method is the maximum a posteriori strategy. Here, the denoising result is chosen in such a way that it maximizes the conditional density function of the reconstruction given its observed noisy version. Unfortunately, this approach does not imply that the empirical distribution of the reconstructed noise components follows the statistics of the assumed noise model. In this paper, we propose to overcome this drawback by applying an additional transformation to the random vector modeling the noise. This transformation is then incorporated into the standard denoising approach and leads to a more sophisticated data fidelity term, which forces the removed noise components to have the desired statistical properties. The good properties of our new approach are demonstrated for additive Gaussian noise by numerical examples. Our method proves to be especially well suited for data containing high-frequency structures, where other denoising methods that assume a certain smoothness of the signal cannot restore the small structures.
Tanja Teuber; Steffen Remmele; Jürgen Hesser; Gabriele Steidl
preprint
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/2765
Thu, 06 Oct 2011 09:26:37 +0000
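The drawback described in the second abstract can be reproduced numerically with a minimal sketch (this is standard Gaussian MAP denoising with a quadratic smoothness prior, i.e. Tikhonov regularization, not the authors' method; signal, noise level, and regularization weight are illustrative assumptions). With a purely high-frequency signal, the component removed by the MAP estimate mixes signal into the "noise", so its empirical standard deviation deviates clearly from the assumed noise level.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, mu = 512, 0.5, 1.0

t = np.arange(n)
f = np.cos(np.pi * t)                  # purely high-frequency signal (Nyquist rate)
g = f + rng.normal(0.0, sigma, n)      # observed data with additive Gaussian noise

# MAP / Tikhonov denoising: argmin_u ||u - g||^2 + mu * ||D u||^2, solved
# exactly in the Fourier domain; for the periodic first-difference operator D
# the squared eigenvalue magnitudes are 4 * sin^2(pi * k / n).
k = np.arange(n // 2 + 1)
h = 4.0 * np.sin(np.pi * k / n) ** 2
u = np.fft.irfft(np.fft.rfft(g) / (1.0 + mu * h), n)

removed = g - u                        # the "noise" the MAP estimate removed
print(np.std(removed), sigma)          # the empirical std differs markedly from sigma
```

Because the quadratic prior penalizes high frequencies, the removed component absorbs a large share of the oscillating signal, so its statistics no longer match the assumed noise model, which is exactly the effect the proposed modified data fidelity term is designed to prevent.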