### Refine

#### Document Type

- Preprint (2)
- Article (1)
- Doctoral Thesis (1)

#### Language

- English (4)

#### Abstracts

This paper considers supervised multi-class image segmentation: from a labeled set of
pixels in one image, we learn the segmentation and apply it to the rest of the image or to other,
similar images. We study approaches based on p-Laplacians, (vector-valued) Reproducing Kernel
Hilbert Spaces (RKHSs), and combinations of both. In all approaches we construct segment
membership vectors. In the p-Laplacian model the segment membership vectors have to fulfill a
probability simplex constraint. Interestingly, we prove that for p = 2 this constraint is fulfilled
automatically and thus imposes no actual restriction. While the 2-Laplacian model gives a good
general segmentation, the 1-Laplacian tends to neglect smaller segments. The RKHS approach has
the benefit of fast computation. This direction is motivated by image colorization, where a given
dab of color is extended to a nearby region of similar features or to another image. The connection
between colorization and multi-class segmentation is explored in this paper with an application to
medical image segmentation. We further consider an improvement using a combined method. Each
model is validated by careful numerical experiments, followed by medical image segmentation at
the end.
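The RKHS direction can be made concrete with a small sketch: kernel ridge regression with a Gaussian kernel extends one-hot membership vectors from a few labeled pixels to all pixels, and each pixel is assigned the class of its largest membership value. The feature vectors, kernel width, and ridge weight below are illustrative choices, not the paper's exact model.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel values between the feature rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def rkhs_segment(features, labeled_idx, labels, n_classes, sigma=1.0, lam=1e-3):
    """Extend one-hot memberships from labeled pixels to all pixels."""
    X = features[labeled_idx]                    # labeled feature vectors
    Y = np.eye(n_classes)[labels]                # one-hot membership targets
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), Y)   # ridge coefficients
    return gaussian_kernel(features, X, sigma) @ alpha     # memberships everywhere

# Toy example: a 1-d "image" with two intensity regions, three labels each.
pixels = np.concatenate([np.full(10, 0.1), np.full(10, 0.9)])[:, None]
member = rkhs_segment(pixels, [0, 1, 2, 10, 11, 12], [0, 0, 0, 1, 1, 1], 2, sigma=0.3)
seg = member.argmax(axis=1)
```

The same scheme applies to colorization: the "classes" are color dabs, and the learned memberships spread each dab over nearby pixels with similar features.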

This thesis is divided into two parts. Both deal with multi-class image segmentation and utilize
non-smooth optimization algorithms.
The topic of the first part, namely unsupervised segmentation, is the application of clustering
to image pixels. We therefore start with an introduction to the biconvex center-based clustering
algorithms c-means and fuzzy c-means, where c denotes the number of classes. We show that
fuzzy c-means can be seen as an approximation of c-means in terms of power means.
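The biconvex structure of fuzzy c-means can be sketched in a few lines: for fixed centers the memberships have a closed form, and for fixed memberships the centers are weighted means. The fuzzifier m = 2, the quantile initialization, and the toy data are illustrative choices, not the thesis's exact setup.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=50):
    # Deterministic initialization: spread the centers over the data range.
    centers = np.quantile(X, np.linspace(0.0, 1.0, c), axis=0)
    for _ in range(iters):
        # Membership update: closed form for fixed centers.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) + 1e-12
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)        # each row lies on the simplex
        # Center update: weighted means for fixed memberships.
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
    return u, centers

X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
u, centers = fuzzy_c_means(X, c=2)
```

Sending m to 1 hardens the memberships toward 0/1 values, which recovers the c-means assignment step as a limiting case.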
Since noise is omnipresent in our image data, these simple clustering models are not suitable
for segmenting such data. To this end, we introduce a general, finite-dimensional segmentation
model that consists of a data term stemming from the aforementioned clustering models plus a
continuous regularization term. We tackle this optimization model via an alternating minimization
approach called regularized c-centers (RcC): we fix the centers and optimize the segment
memberships of the pixels, and vice versa. In this general setting, we prove convergence
in the sense of set-valued algorithms using Zangwill's theory [172].
Further, we present a segmentation model with a total variation regularizer. While updating
the cluster centers is straightforward for fixed segment memberships of the pixels, updating the
segment memberships requires iteratively solving a non-smooth, convex optimization problem.
To increase efficiency, we do not iterate the convex optimization algorithm until convergence;
instead, we stop as soon as the objective functional has decreased by a certain amount. This
algorithm is a particular implementation of RcC and inherits the corresponding convergence
theory. Moreover, we show the good performance of our method in various examples such as
simulated 2d images of brain tissue and 3d volumes of two materials, namely a multi-filament
composite superconductor and a carbon fiber reinforced silicon carbide ceramic. In our adapted
model we exploit the property of the latter material that two of its components share no common
boundary.
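The early-stopping pattern can be illustrated on a 1-d toy signal with two classes: the inner convex membership problem is attacked with projected gradient steps and abandoned as soon as the objective has decreased by a fixed amount delta, after which the centers are updated as weighted means. This is only a schematic sketch of the RcC pattern; the smoothed TV term, step size, and all parameter values are illustrative assumptions.

```python
import numpy as np

def objective(v, c0, c1, x, mu, eps=1e-6):
    # Data term from the clustering model plus a smoothed total variation term.
    data = v * (x - c0) ** 2 + (1.0 - v) * (x - c1) ** 2
    tv = np.sqrt(np.diff(v) ** 2 + eps)
    return data.sum() + mu * tv.sum()

def rcc_tv(x, mu=0.1, delta=0.5, outer=10, max_inner=50, step=0.01, eps=1e-6):
    # Two classes: v_i is the membership of pixel i in class 0, 1 - v_i in class 1.
    c0, c1 = x.min(), x.max()                    # initial centers
    v = np.full_like(x, 0.5)
    for _ in range(outer):
        # Inner convex problem in v, stopped once the objective has
        # decreased by delta rather than solved to convergence.
        f_start = objective(v, c0, c1, x, mu)
        for _ in range(max_inner):
            grad = (x - c0) ** 2 - (x - c1) ** 2
            d = np.diff(v)
            tv_grad = np.zeros_like(v)
            tv_grad[:-1] -= d / np.sqrt(d ** 2 + eps)
            tv_grad[1:] += d / np.sqrt(d ** 2 + eps)
            v = np.clip(v - step * (grad + mu * tv_grad), 0.0, 1.0)
            if f_start - objective(v, c0, c1, x, mu) >= delta:
                break
        # Center update: weighted means for fixed memberships.
        c0 = (v * x).sum() / max(v.sum(), 1e-12)
        c1 = ((1.0 - v) * x).sum() / max((1.0 - v).sum(), 1e-12)
    return v, c0, c1

x = np.concatenate([np.full(5, 0.1), np.full(5, 5.1)])
v, c0, c1 = rcc_tv(x)
```

Because every inner break still guarantees a decrease of the joint objective, each outer iteration is monotone, which is the property the set-valued convergence argument relies on.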
The second part of the thesis is concerned with supervised segmentation. We leave the area
of center-based models and investigate convex approaches related to graph p-Laplacians and
reproducing kernel Hilbert spaces (RKHSs). We study the effect of different weights used to
construct the graph. In practical experiments we show, on the one hand, image types that
are better segmented by the p-Laplacian model and, on the other hand, images that are better
segmented by the RKHS-based approach. This is due to the fact that the p-Laplacian approach
provides smoother results, while the RKHS approach often provides more accurate and detailed
segmentations. Finally, we propose a novel combination of both approaches that benefits from
the advantages of both models, and we study its performance on challenging medical image data.
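For the graph-based direction, a minimal sketch of the p = 2 case reads as follows: Gaussian weights on feature distances define the graph, and minimizing the quadratic Dirichlet energy with the labeled memberships fixed reduces to a linear system (the harmonic extension). The weight function and all parameter values are illustrative assumptions; note that the resulting membership rows sum to one automatically, matching the simplex property of the 2-Laplacian model.

```python
import numpy as np

def gaussian_weights(features, sigma=0.3):
    # Fully connected graph with Gaussian weights on feature distances.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W

def laplacian_segment(features, labeled_idx, labels, n_classes, sigma=0.3):
    n = len(features)
    W = gaussian_weights(features, sigma)
    L = np.diag(W.sum(axis=1)) - W                  # graph 2-Laplacian
    U = np.setdiff1d(np.arange(n), labeled_idx)     # unlabeled pixels
    Y = np.eye(n_classes)[labels]                   # one-hot labeled memberships
    # p = 2: minimizing sum_ij w_ij ||u_i - u_j||^2 with the labels fixed
    # yields a linear system for the unlabeled membership vectors.
    u = np.zeros((n, n_classes))
    u[labeled_idx] = Y
    u[U] = np.linalg.solve(L[np.ix_(U, U)], -L[np.ix_(U, labeled_idx)] @ Y)
    return u

features = np.concatenate([np.full(10, 0.1), np.full(10, 0.9)])[:, None]
u = laplacian_segment(features, [0, 10], [0, 1], 2)
seg = u.argmax(axis=1)
```

For p other than 2 the energy is non-smooth or non-quadratic and the simplex constraint must be enforced explicitly, which is where the non-smooth optimization machinery of the thesis enters.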

Recently, convex optimization models have been successfully applied for solving various problems in image analysis and restoration. In this paper, we are interested in relations between convex constrained optimization problems of the form \(\min\{\Phi(x)\) subject to \(\Psi(x)\le\tau\}\) and their unconstrained, penalized counterparts \(\min\{\Phi(x)+\lambda\Psi(x)\}\). We start with general considerations of the topic and provide a novel proof which ensures that a solution of the constrained problem with given \(\tau\) is also a solution of the penalized problem for a certain \(\lambda\). Then we deal with the special setting that \(\Psi\) is a semi-norm and \(\Phi=\phi(Hx)\), where \(H\) is a linear, not necessarily invertible operator and \(\phi\) is essentially smooth and strictly convex. In this case we can prove via the dual problems that there exists a bijective function which maps \(\tau\) from a certain interval to \(\lambda\) such that the solutions of the constrained problem coincide with those of the penalized problem if and only if \(\tau\) and \(\lambda\) are in the graph of this function. We illustrate the relation between \(\tau\) and \(\lambda\) by various problems arising in image processing. In particular, we demonstrate the performance of the constrained model in restoration tasks of images corrupted by Poisson noise and in inpainting models with a constrained nuclear norm. Such models can be useful if we have a priori knowledge on the image rather than on the noise level.
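The correspondence between \(\tau\) and \(\lambda\) can be seen already in a scalar toy problem with \(\Phi(x)=(x-b)^2\) and \(\Psi(x)=|x|\): the penalized solution is soft thresholding, the constrained one is clipping, and reading off \(\tau=\Psi(x_\lambda)\) makes the two coincide. This scalar setting is only an illustration of the bijection, not the paper's general operator setting.

```python
import numpy as np

b = 3.0   # observed scalar datum

def penalized(lam):
    # argmin_x (x - b)^2 + lam * |x|  ->  soft thresholding of b.
    return np.sign(b) * max(abs(b) - lam / 2.0, 0.0)

def constrained(tau):
    # argmin_x (x - b)^2 subject to |x| <= tau  ->  clipping of b.
    return float(np.clip(b, -tau, tau))

# Sweep lambda and read off tau = Psi(x_lambda); on this range the map
# lambda -> tau is bijective, and the two solutions coincide exactly when
# (tau, lambda) lies on its graph.
pairs = [(abs(penalized(lam)), lam) for lam in (0.5, 1.0, 2.0, 4.0)]
```

For \(\lambda \ge 2|b|\) the penalized solution collapses to zero, which mirrors the restriction of \(\tau\) to a certain interval in the paper's statement.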

Recently, convex optimization models have been successfully applied
for solving various problems in image analysis and restoration.
In this paper, we are interested in relations between
convex constrained optimization problems
of the form
\({\rm argmin} \{ \Phi(x)\) subject to \(\Psi(x) \le \tau \}\)
and their penalized counterparts
\({\rm argmin} \{\Phi(x) + \lambda \Psi(x)\}\).
We recall general results on the topic with the help of an epigraphical projection.
Then we deal with the special setting \(\Psi := \| L \cdot\|\) with \(L \in \mathbb{R}^{m,n}\)
and \(\Phi := \varphi(H \cdot)\),
where \(H \in \mathbb{R}^{n,n}\) and \(\varphi: \mathbb R^n \rightarrow \mathbb{R} \cup \{+\infty\} \)
meet certain requirements which are often fulfilled in image processing models.
In this case we prove by incorporating the dual problems
that there exists a bijective function
such that
the solutions of the constrained problem coincide with those of the
penalized problem if and only if \(\tau\) and \(\lambda\) are in the graph
of this function.
We illustrate the relation between \(\tau\) and \(\lambda\) for various problems
arising in image processing.
In particular, we point out the relation to the Pareto frontier for joint sparsity problems.
We demonstrate the performance of the
constrained model in restoration tasks of images corrupted by Poisson noise
with the \(I\)-divergence as data fitting term \(\varphi\)
and in inpainting models with the constrained nuclear norm.
Such models can be useful if we have a priori knowledge on the image rather than on the noise level.
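A constrained inpainting model of the kind mentioned above can be sketched with projected gradient descent over the nuclear-norm ball: the projection acts on the singular values via the standard sorting-based simplex-style projection. The toy matrix, mask, step size, and iteration count are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def project_l1_nonneg(s, tau):
    # Euclidean projection of nonnegative singular values onto
    # {u >= 0 : sum(u) <= tau} via the standard sorting scheme.
    if s.sum() <= tau:
        return s
    srt = np.sort(s)[::-1]
    cssv = np.cumsum(srt) - tau
    rho = np.nonzero(srt > cssv / (np.arange(len(s)) + 1))[0][-1]
    theta = cssv[rho] / (rho + 1)
    return np.maximum(s - theta, 0.0)

def project_nuclear_ball(X, tau):
    # Projection onto {X : ||X||_* <= tau} acts on the singular values.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * project_l1_nonneg(s, tau)) @ Vt

def inpaint(B, mask, tau, step=1.0, iters=300):
    # Projected gradient descent on 0.5 * ||mask * (X - B)||_F^2
    # over the nuclear-norm ball of radius tau.
    X = np.zeros_like(B)
    for _ in range(iters):
        X = project_nuclear_ball(X - step * mask * (X - B), tau)
    return X

# Toy example: a rank-one matrix with one hidden entry; tau is set from
# a priori knowledge of the image (its nuclear norm), not the noise level.
B = np.outer([1.0, 2.0, 3.0], [1.0, 1.0, 1.0])
mask = np.ones_like(B)
mask[0, 0] = 0.0
tau = np.linalg.svd(B, compute_uv=False).sum()
X = inpaint(B, mask, tau)
```

Choosing \(\tau\) directly from knowledge about the image is exactly the situation in which the constrained formulation is more natural than tuning \(\lambda\) in the penalized one.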