A standard approach for deriving a variational denoising method is the maximum a posteriori strategy. Here, the denoising result is chosen such that it maximizes the conditional density function of the reconstruction given its observed noisy version. Unfortunately, this approach does not guarantee that the empirical distribution of the reconstructed noise components follows the statistics of the assumed noise model. In this paper, we propose to overcome this drawback by applying an additional transformation to the random vector modeling the noise. This transformation is then incorporated into the standard denoising approach and leads to a more sophisticated data fidelity term, which forces the removed noise components to have the desired statistical properties. The favorable properties of our new approach are demonstrated for additive Gaussian noise by numerical examples. Our method turns out to be especially well suited for data containing high-frequency structures, where other denoising methods that assume a certain smoothness of the signal cannot restore the small structures.
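The drawback described above can be illustrated with a minimal toy example (this is not the paper's method, only the standard MAP baseline it improves on): a 1-D signal is denoised with a quadratic smoothness prior, and the empirical spread of the removed noise is compared against the assumed noise level. All parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, lam = 256, 0.1, 5.0

# Piecewise-constant test signal plus additive Gaussian noise.
x_true = np.where(np.arange(n) < n // 2, 0.0, 1.0)
y = x_true + sigma * rng.normal(size=n)

# Standard MAP estimate with a quadratic smoothness prior:
#   x = argmin 1/(2 sigma^2) ||x - y||^2 + lam ||D x||^2,
# where D is the forward-difference operator. The normal equations
#   (I / sigma^2 + 2 lam D^T D) x = y / sigma^2
# are solved directly.
D = np.diff(np.eye(n), axis=0)
A = np.eye(n) / sigma**2 + 2 * lam * D.T @ D
x = np.linalg.solve(A, y / sigma**2)

# The removed noise y - x has a noticeably smaller empirical standard
# deviation than sigma: plain MAP does not force the residual to
# follow the assumed N(0, sigma^2) statistics.
residual_std = (y - x).std()
print(float(residual_std), sigma)
```

The mismatch between `residual_std` and `sigma` is exactly the effect the proposed transformed fidelity term is designed to remove.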
This paper presents a new similarity measure and nonlocal filters for images corrupted by multiplicative noise. The considered filters are generalizations of the nonlocal means filter of Buades et al., which is known to be well suited for removing additive Gaussian noise. To adapt to different noise models, the patch comparison involved in this filter first has to be performed by a suitable noise-dependent similarity measure. For this purpose, we start by studying a probabilistic measure recently proposed for general noise models by Deledalle et al. We analyze this measure in the context of conditional density functions and examine its properties for images corrupted by additive and multiplicative noise. Since it turns out to have unfavorable properties for multiplicative noise, we deduce a new similarity measure consisting of a probability density function specially chosen for this type of noise. The properties of our new measure are studied theoretically as well as by numerical experiments. To obtain the final nonlocal filters, we apply a weighted maximum likelihood estimation framework, which also incorporates the noise statistics. Moreover, we define the weights occurring in these filters using our new similarity measure and propose different adaptations to further improve the results. Finally, restoration results for images corrupted by multiplicative Gamma and Rayleigh noise are presented to demonstrate the very good performance of our nonlocal filters.
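The nonlocal means mechanism that this work generalizes can be sketched for the classical additive Gaussian case on a 1-D signal (the paper's contribution is to replace the squared-L2 patch distance below with a noise-adapted similarity measure for multiplicative noise; patch size, search radius, and filtering parameter here are illustrative assumptions):

```python
import numpy as np

def nlm_denoise_1d(y, patch=3, search=10, h=0.4):
    """Minimal nonlocal means for a 1-D signal with additive Gaussian
    noise, in the spirit of Buades et al. Each sample is replaced by a
    weighted average of samples whose surrounding patches look similar."""
    n = len(y)
    pad = np.pad(y, patch, mode="reflect")
    out = np.empty(n)
    for i in range(n):
        p_i = pad[i:i + 2 * patch + 1]          # patch centered at i
        lo, hi = max(0, i - search), min(n, i + search + 1)
        weights, values = [], []
        for j in range(lo, hi):
            p_j = pad[j:j + 2 * patch + 1]
            # Gaussian-noise similarity: mean squared patch distance.
            d2 = np.mean((p_i - p_j) ** 2)
            weights.append(np.exp(-d2 / h**2))
            values.append(y[j])
        weights = np.array(weights)
        out[i] = np.dot(weights, values) / weights.sum()
    return out

rng = np.random.default_rng(1)
x_true = np.sin(np.linspace(0, 4 * np.pi, 200))
y = x_true + 0.2 * rng.normal(size=200)
x_hat = nlm_denoise_1d(y)
print(np.mean((x_hat - x_true) ** 2), np.mean((y - x_true) ** 2))
```

For multiplicative noise the key change is precisely the line computing `d2`: a squared difference is no longer an appropriate similarity notion, which motivates the probabilistic measures studied in the paper.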
This paper deals with the minimization of seminorms \(\|L\cdot\|\) on \(\mathbb R^n\) under the constraint of a bounded I-divergence \(D(b,H\cdot)\). The I-divergence is also known as the Kullback-Leibler divergence and appears in many models in imaging science, in particular when dealing with Poisson data. Here, \(H\) typically represents, e.g., a linear blur operator and \(L\) is some discrete derivative operator. Our preference for the constrained approach over the corresponding penalized version is based on the fact that the I-divergence of data corrupted, e.g., by Poisson noise or multiplicative Gamma noise can be estimated by statistical methods. Our minimization technique rests upon relations between constrained and penalized convex problems and resembles the idea of Morozov's discrepancy principle.
More precisely, we propose first-order primal-dual algorithms which reduce the problem to the solution of certain proximal minimization problems in each iteration step. The most interesting of these proximal minimization problems is an I-divergence constrained least squares problem. We solve this problem by connecting it to the corresponding I-divergence
penalized least squares problem with an appropriately chosen regularization parameter. Thus, our algorithm produces not only a sequence of vectors which converges to a minimizer of the constrained problem, but also a sequence of parameters which converges to a regularization parameter such that the penalized problem has the same solution as our constrained one. In other words, the solution of this penalized problem fulfills the I-divergence constraint. We provide the proofs which are necessary to understand
our approach and demonstrate the performance of our algorithms for different
image restoration examples.
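The relation between the constrained and penalized formulations can be made concrete in a separable toy case (this is only an illustrative sketch, not the paper's primal-dual algorithm): for \(H = I\) the penalized problem \(\min_x \|x - y\|^2 + \gamma\, D(b, x)\) decouples per entry and has a closed-form positive root, and a simple bisection on \(\gamma\) finds the parameter for which the I-divergence constraint \(D(b, x(\gamma)) = \tau\) holds with equality. The data and the value of \(\tau\) below are made up for illustration.

```python
import numpy as np

def idiv(b, v):
    # I-divergence (Kullback-Leibler): D(b, v) = sum b log(b/v) - b + v.
    return float(np.sum(b * np.log(b / v) - b + v))

def prox_penalized(y, b, gamma):
    # Entrywise minimizer of ||x - y||^2 + gamma * D(b, x):
    # stationarity 2(x - y) + gamma (1 - b/x) = 0 is the quadratic
    # 2 x^2 + (gamma - 2 y) x - gamma b = 0 with one positive root.
    p = gamma - 2 * y
    return (-p + np.sqrt(p**2 + 8 * gamma * b)) / 4

def solve_constrained(y, b, tau, iters=60):
    # Bisection on gamma: D(b, x(gamma)) decreases as gamma grows
    # (x(gamma) -> b), so the constraint D <= tau becomes active at
    # exactly one parameter value, as in a discrepancy principle.
    lo, hi = 0.0, 1.0
    while idiv(b, prox_penalized(y, b, hi)) > tau:
        hi *= 2
    for _ in range(iters):
        mid = (lo + hi) / 2
        if idiv(b, prox_penalized(y, b, mid)) > tau:
            lo = mid
        else:
            hi = mid
    gamma = (lo + hi) / 2
    return prox_penalized(y, b, gamma), gamma

rng = np.random.default_rng(2)
b = rng.uniform(1.0, 2.0, size=50)   # positive data vector
y = b + 0.3 * rng.normal(size=50)    # least squares target
x, gamma = solve_constrained(y, b, tau=0.1)
print(idiv(b, x), gamma)
```

The pair `(x, gamma)` mirrors the statement in the abstract: the returned vector solves the constrained problem, and the returned parameter is the one for which the penalized problem has that same solution.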