KLUEDO RSS Feed: KLUEDO Dokumente/documents
https://kluedo.ub.uni-kl.de/index/index/
Mon, 18 Aug 2014 08:34:49 +0200

First Order Algorithms in Variational Image Processing
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3852
Variational methods in imaging are nowadays developing into a quite universal and flexible
tool, allowing for highly successful approaches to tasks like denoising, deblurring, inpainting,
segmentation, super-resolution, disparity, and optical flow estimation. The overall structure of such approaches is of the form
\(D(Ku) + \alpha R(u) \rightarrow \min_u,\)
where the functional \(D\) is a data fidelity term, depending on some input data \(f\) and
measuring the deviation of \(Ku\) from it, and \(R\) is a regularization functional. Moreover,
\(K\) is an (often linear) forward operator modeling the dependence of the data on an underlying
image, and \(\alpha\) is a positive regularization parameter. While \(D\) is often smooth and (strictly)
convex, the current practice almost exclusively uses nonsmooth regularization functionals.
The majority of successful techniques uses nonsmooth and convex functionals like the total variation and generalizations thereof, cf. [28, 31, 40], or \(\ell_1\)-norms of coefficients arising
from scalar products with some frame system, cf. [73] and references therein.
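A minimal sketch may make this structure concrete. The following proximal gradient (ISTA) iteration minimizes \(\frac{1}{2}\|Ku-f\|_2^2 + \alpha\|u\|_1\), i.e. a quadratic data term with an \(\ell_1\) regularizer; the concrete matrix, data, and parameters below are illustrative choices, not taken from the text.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of t*||.||_1 (componentwise soft shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(K, f, alpha, n_iter=2000):
    """Proximal gradient (ISTA) for min_u 0.5*||Ku - f||^2 + alpha*||u||_1."""
    step = 1.0 / np.linalg.norm(K, 2) ** 2   # 1/L with L the Lipschitz constant of the gradient
    u = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ u - f)             # gradient of the smooth data term
        u = soft_threshold(u - step * grad, step * alpha)
    return u

rng = np.random.default_rng(0)
K = rng.standard_normal((20, 10))            # toy forward operator
f = rng.standard_normal(20)                  # toy data
u = ista(K, f, alpha=1.0)
```

Each iteration takes one gradient step on the smooth term and then applies the proximal map of the nonsmooth term, which is exactly the sum structure exploited by the splitting algorithms discussed below.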
The efficient solution of such variational problems in imaging demands appropriate algorithms.
Taking into account the specific structure as a sum of two very different terms
to be minimized, splitting algorithms are a quite canonical choice. Consequently, this field
has revived the interest in techniques like operator splittings or augmented Lagrangians. In
this chapter we shall provide an overview of methods currently developed and recent results
as well as some computational studies providing a comparison of different methods and also
illustrating their success in applications.
We start with a very general viewpoint in the first sections, discussing basic notations, properties
of proximal maps, firmly non-expansive and averaging operators, which form the basis
of further convergence arguments. Then we proceed to a discussion of several state-of-the-art algorithms and their (theoretical) convergence properties. After a section discussing issues
related to the use of analogous iterative schemes for ill-posed problems, we present some practical convergence studies in numerical examples related to PET and spectral CT reconstruction.
Martin Burger; Alexander Sawatzky; Gabriele Steidl
preprint
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3852
Mon, 18 Aug 2014 08:34:49 +0200

Linearized Riesz Transform and Quasi-Monogenic Shearlets
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3596
The only quadrature operator of order two on \(L_2 (\mathbb{R}^2)\) that covaries with orthogonal
transforms, in particular rotations, is (up to the sign) the Riesz transform. This property
was used for the construction of monogenic wavelets and curvelets. Recently, shearlets
were applied for various signal processing tasks. Unfortunately, the Riesz transform does
not correspond with the shear operation. In this paper we propose a novel quadrature operator called linearized Riesz transform which is related to the shear operator. We prove
properties of this transform and analyze its performance versus the usual Riesz transform numerically. Furthermore, we demonstrate the relation between the corresponding
optical filters. Based on the linearized Riesz transform we introduce finite discrete quasi-monogenic shearlets and prove that they form a tight frame. Numerical experiments show
the good fit of the directional information given by the shearlets and the orientation obtained from the quasi-monogenic shearlet coefficients. Finally we provide experiments on
the directional analysis of textures using our quasi-monogenic shearlets.
Sören Häuser; Bettina Heise; Gabriele Steidl
preprint
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3596
Thu, 22 Aug 2013 11:00:31 +0200

Homogeneous Penalizers and Constraints in Convex Image Restoration
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3347
Recently, convex optimization models were successfully applied
for solving various problems in image analysis and restoration.
In this paper, we are interested in relations between
convex constrained optimization problems
of the form
\({\rm argmin} \{ \Phi(x)\) subject to \(\Psi(x) \le \tau \}\)
and their penalized counterparts
\({\rm argmin} \{\Phi(x) + \lambda \Psi(x)\}\).
We recall general results on the topic by the help of an epigraphical projection.
Then we deal with the special setting \(\Psi := \| L \cdot\|\) with \(L \in \mathbb{R}^{m,n}\)
and \(\Phi := \varphi(H \cdot)\),
where \(H \in \mathbb{R}^{n,n}\) and \(\varphi: \mathbb R^n \rightarrow \mathbb{R} \cup \{+\infty\} \)
meet certain requirements which are often fulfilled in image processing models.
In this case we prove by incorporating the dual problems
that there exists a bijective function
such that
the solutions of the constrained problem coincide with those of the
penalized problem if and only if \(\tau\) and \(\lambda\) are in the graph
of this function.
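As a toy illustration of this correspondence (a one-dimensional example of my own choosing, not one from the paper), take \(\Phi(x) = \frac{1}{2}(x-b)^2\) and \(\Psi(x) = |x|\), i.e. \(L\) the identity: for \(0 < \tau < |b|\) the constrained minimizer is the projection of \(b\) onto \([-\tau,\tau]\), the penalized minimizer is soft thresholding, and the two coincide exactly when \(\lambda = |b| - \tau\).

```python
import numpy as np

def constrained(b, tau):
    """argmin 0.5*(x-b)^2 subject to |x| <= tau: projection of b onto [-tau, tau]."""
    return np.clip(b, -tau, tau)

def penalized(b, lam):
    """argmin 0.5*(x-b)^2 + lam*|x|: soft thresholding of b at level lam."""
    return np.sign(b) * max(abs(b) - lam, 0.0)

b = 3.0
for tau in (0.5, 1.0, 2.5):          # any tau in (0, |b|)
    lam = abs(b) - tau               # the tau <-> lambda bijection in this toy case
    assert np.isclose(constrained(b, tau), penalized(b, lam))
```

Outside the interval \((0,|b|)\) the correspondence degenerates: for \(\tau \ge |b|\) the constraint is inactive and any \(\lambda = 0\) penalization suffices, which is why the bijection in the text is only asserted on a certain interval.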
We illustrate the relation between \(\tau\) and \(\lambda\) for various problems
arising in image processing.
In particular, we point out the relation to the Pareto frontier for joint sparsity problems.
We demonstrate the performance of the
constrained model in restoration tasks of images corrupted by Poisson noise
with the \(I\)-divergence as data fitting term \(\varphi\)
and in inpainting models with the constrained nuclear norm.
Such models can be useful if we have a priori knowledge on the image rather than on the noise level.
René Ciak; Behrang Shafei; Gabriele Steidl
article
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3347
Thu, 15 Nov 2012 09:15:14 +0100

Minimization and Parameter Estimation for Seminorm Regularization Models with I-Divergence Constraints
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3218
This paper deals with the minimization of seminorms \(\|L\cdot\|\) on \(\mathbb R^n\) under the constraint of a bounded I-divergence \(D(b,H\cdot)\). The I-divergence is also known as Kullback-Leibler divergence and appears in many models in imaging science, in particular when dealing with Poisson data. Typically, \(H\) represents, e.g., a linear blur operator and \(L\) is some discrete derivative operator. Our preference for the constrained approach over
the corresponding penalized version is based on the fact that the I-divergence of data
corrupted, e.g., by Poisson noise or multiplicative Gamma noise can be estimated by statistical methods. Our minimization technique rests upon relations between constrained and penalized convex problems and resembles the idea of Morozov's discrepancy principle.
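The scalar prototype of this idea can be written down directly (the numbers and the simple bisection below are my own illustrative choices, not the authors' algorithm): minimize \(\frac{1}{2}(x-a)^2\) subject to \(D(b,x)\le\tau\) by searching over the penalty parameter of the corresponding penalized problem, whose minimizer is available in closed form.

```python
import numpy as np

def I_div(b, x):
    """Scalar I-divergence (Kullback-Leibler) D(b, x) = b*log(b/x) - b + x."""
    return b * np.log(b / x) - b + x

def penalized_sol(a, b, gamma):
    """Closed-form minimizer of 0.5*(x-a)^2 + gamma*D(b, x) over x > 0
    (positive root of the stationarity equation x^2 + (gamma-a)x - gamma*b = 0)."""
    return 0.5 * ((a - gamma) + np.sqrt((a - gamma) ** 2 + 4.0 * gamma * b))

def constrained_sol(a, b, tau, lo=0.0, hi=1e6, n_iter=200):
    """Solve min 0.5*(x-a)^2 s.t. D(b,x) <= tau by bisection on the penalty
    parameter gamma; D(b, penalized_sol(gamma)) decreases monotonically in gamma."""
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        if I_div(b, penalized_sol(a, b, mid)) > tau:
            lo = mid                 # penalty too weak, constraint still violated
        else:
            hi = mid                 # constraint satisfied, try a smaller penalty
    return penalized_sol(a, b, hi), hi

a, b, tau = 0.2, 4.0, 1.0            # toy data with D(b, a) > tau
x, gamma = constrained_sol(a, b, tau)
```

At the returned solution the constraint is active, i.e. \(D(b,x)=\tau\), and the returned \(\gamma\) is exactly a penalty parameter for which the penalized problem reproduces the constrained solution, mirroring the parameter sequence described above.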
More precisely, we propose first-order primal-dual algorithms which reduce the problem to the solution of certain proximal minimization problems in each iteration step. The most interesting of these proximal minimization problems is an I-divergence constrained least squares problem. We solve this problem by connecting it to the corresponding I-divergence
penalized least squares problem with an appropriately chosen regularization parameter. Therefore, our algorithm produces not only a sequence of vectors which converges to a minimizer of the constrained problem but also a sequence of parameters which converges to a regularization parameter so that the penalized problem has the same solution as our constrained one. In other words, the solution of this penalized problem fulfills the I-divergence constraint. We provide the proofs which are necessary to understand
our approach and demonstrate the performance of our algorithms for different
image restoration examples.
Tanja Teuber; Gabriele Steidl; Raymond Honfu Chan
preprint
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3218
Thu, 26 Jul 2012 12:24:29 +0200

Supervised and Transductive Multi-Class Segmentation Using p-Laplacians and RKHS methods
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3169
This paper considers supervised multi-class image segmentation: from a labeled set of
pixels in one image, we learn the segmentation and apply it to the rest of the image or to other similar images. We study approaches with p-Laplacians, (vector-valued) Reproducing Kernel Hilbert
Spaces (RKHSs) and combinations of both. In all approaches we construct segment membership
vectors. In the p-Laplacian model the segment membership vectors have to fulfill a certain probability simplex constraint. Interestingly, we can prove that in the case p=2 this is not really a constraint but is automatically fulfilled. While the 2-Laplacian model gives a good general segmentation, the 1-Laplacian model tends to neglect smaller segments. The RKHS approach has
the benefit of fast computation. This direction is motivated by image colorization, where a given
dab of color is extended to a nearby region of similar features or to another image. The connection
between colorization and multi-class segmentation is explored in this paper with an application to
medical image segmentation. We further consider an improvement using a combined method. Each
model is carefully considered with numerical experiments for validation, followed by medical image
segmentation at the end.
Sung Ha Kang; Behrang Shafei; Gabriele Steidl
preprint
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3169
Fri, 08 Jun 2012 23:03:52 +0200

Homogeneous Penalizers and Constraints in Convex Image Restoration
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/2866
Recently, convex optimization models were successfully applied for solving various problems in image analysis and restoration. In this paper, we are interested in relations between convex constrained optimization problems of the form \(\min\{\Phi(x)\) subject to \(\Psi(x)\le\tau\}\) and their non-constrained, penalized counterparts \(\min\{\Phi(x)+\lambda\Psi(x)\}\). We start with general considerations of the topic and provide a novel proof which ensures that a solution of the constrained problem with given \(\tau\) is also a solution of the non-constrained problem for a certain \(\lambda\). Then we deal with the special setting that \(\Psi\) is a semi-norm and \(\Phi=\phi(Hx)\), where \(H\) is a linear, not necessarily invertible operator and \(\phi\) is essentially smooth and strictly convex. In this case we can prove via the dual problems that there exists a bijective function which maps \(\tau\) from a certain interval to \(\lambda\) such that the solutions of the constrained problem coincide with those of the non-constrained problem if and only if \(\tau\) and \(\lambda\) are in the graph of this function. We illustrate the relation between \(\tau\) and \(\lambda\) by various problems arising in image processing. In particular, we demonstrate the performance of the constrained model in restoration tasks of images corrupted by Poisson noise and in inpainting models with a constrained nuclear norm. Such models can be useful if we have a priori knowledge on the image rather than on the noise level.
René Ciak; Behrang Shafei; Gabriele Steidl
preprint
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/2866
Thu, 02 Feb 2012 05:02:50 +0000

Denoising by Higher Order Statistics
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/2765
A standard approach for deducing a variational denoising method is the maximum a posteriori strategy. Here, the denoising result is chosen in such a way that it maximizes the conditional density function of the reconstruction given its observed noisy version. Unfortunately, this approach does not imply that the empirical distribution of the reconstructed noise components follows the statistics of the assumed noise model. In this paper, we propose to overcome this drawback by applying an additional transformation to the random vector modeling the noise. This transformation is then incorporated into the standard denoising approach and leads to a more sophisticated data fidelity term, which forces the removed noise components to have the desired statistical properties. The good properties of our new approach are demonstrated for additive Gaussian noise by numerical examples. Our method proves to be especially well suited for data containing high-frequency structures, where other denoising methods which assume a certain smoothness of the signal cannot restore the small structures.
Tanja Teuber; Steffen Remmele; Jürgen Hesser; Gabriele Steidl
preprint
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/2765
Thu, 06 Oct 2011 09:26:37 +0000

On Cyclic Gradient Descent Reprojection
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/2742
In recent years, convex optimization methods were successfully applied for various image processing tasks and a large number of first-order methods were designed to minimize the corresponding functionals. Interestingly, it was shown recently by Grewenig et al. that the simple idea of so-called “superstep cycles” leads to very efficient schemes for time-dependent (parabolic) image enhancement problems as well as for steady state (elliptic) image compression tasks. The “superstep cycles” approach is similar to the nonstationary (cyclic)
Richardson method, which has been around for over sixty years.
In this paper, we investigate the incorporation of superstep cycles into the gradient descent reprojection method. We show for two problems in compressive sensing and image processing, namely the LASSO approach and the Rudin-Osher-Fatemi model, that the resulting simple cyclic gradient descent reprojection algorithm is numerically competitive with various state-of-the-art first-order algorithms. However, due to the nonlinear projection within the algorithm, convergence proofs appear to be hard even under restrictive assumptions on the linear operators. We demonstrate the difficulties by studying the simplest case of a two-cycle algorithm in \(\mathbb{R}^2\) with projections onto the Euclidean ball.
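A minimal sketch of the basic iteration (with an illustrative quadratic objective and deliberately cautious step sizes of my own choosing; genuine superstep cycles use far more aggressive alternating steps than the stable ones below):

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Projection onto the Euclidean ball of the given radius."""
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x

def cyclic_gdr(A, b, steps, n_iter=300):
    """Gradient descent reprojection with a cyclic list of step sizes:
    x <- P_B( x - tau_k * A^T (A x - b) ), with tau_k cycling through `steps`."""
    x = np.zeros(A.shape[1])
    for k in range(n_iter):
        tau = steps[k % len(steps)]
        x = project_ball(x - tau * A.T @ (A @ x - b))
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 4))
b = rng.standard_normal(8) * 5.0             # pushes the minimizer outside the ball
L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
x = cyclic_gdr(A, b, steps=[0.4 / L, 1.6 / L])   # a two-cycle of small/large steps
```

With both steps inside the classical bound \((0, 2/L)\) the iteration behaves like ordinary projected gradient descent; the difficulty discussed in the text arises precisely when individual cycle steps exceed that bound.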
Simon Setzer; Gabriele Steidl; Jan Morgenthaler
preprint
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/2742
Mon, 19 Sep 2011 02:26:37 +0200