Modern digital imaging technologies, such as digital microscopy or micro-computed tomography, deliver such large amounts of 2D and 3D image data that manual processing becomes infeasible. This creates a need for robust, flexible, and automatic image analysis tools in areas such as histology or materials science, where microstructures (e.g. cells or fiber systems) are investigated. General-purpose image processing methods can be used to analyze such microstructures. These methods usually rely on segmentation, i.e., a separation of the areas of interest in digital images. As image segmentation algorithms rarely adapt well to changes in the imaging system or to different analysis problems, there is a demand for solutions that can easily be modified to analyze different microstructures and that are more accurate than existing ones. To address these challenges, this thesis contributes a novel statistical model for objects in images and novel algorithms for the image-based analysis of microstructures.

The first contribution is a novel statistical model for the locations of objects (e.g. tumor cells) in images. This model is fully trainable and can therefore be adapted easily to many different image analysis tasks, which is demonstrated by examples from histology and materials science. Fitting this statistical model to images yields a method for locating multiple objects in images that is more accurate and more robust to noise and background clutter than standard methods. On simulated data at high noise levels (peak signal-to-noise ratio below 10 dB), this method achieves detection rates up to 10% above those of a watershed-based alternative algorithm. While objects like tumor cells can be described well by their coordinates in the plane, the analysis of fiber systems in composite materials, for instance, requires a fully three-dimensional treatment.
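The thesis's trainable model is not reproduced here; as a rough illustration of the kind of standard baseline such comparisons are made against, a smoothing-plus-local-maxima detector for blob-like objects (e.g. cell nuclei) in a noisy 2D image could look as follows. All function names and parameter values are my own, not the thesis's.

```python
import numpy as np
from scipy import ndimage as ndi

def detect_blobs(image, sigma=2.0, threshold=0.5):
    """Return (row, col) coordinates of bright blob-like objects.

    Illustrative baseline: Gaussian smoothing followed by local-maximum
    detection above an intensity threshold.
    """
    smoothed = ndi.gaussian_filter(image.astype(float), sigma)
    # a pixel is a detection if it is the maximum of its 5x5 neighborhood
    maxima = (smoothed == ndi.maximum_filter(smoothed, size=5))
    # ... and sufficiently bright relative to the global maximum
    maxima &= smoothed > threshold * smoothed.max()
    return np.argwhere(maxima)

# synthetic test image: two Gaussian blobs plus additive noise
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[20, 20] = img[45, 40] = 1.0
img = ndi.gaussian_filter(img, 3) * 100 + rng.normal(0, 0.1, img.shape)
print(detect_blobs(img))
```

At this noise level the two blob centers are recovered; a fair comparison to the thesis's model would of course require its much harder noise and clutter conditions.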
Therefore, the second contribution of this thesis is a novel algorithm to determine the local fiber orientation in micro-tomographic reconstructions of fiber-reinforced polymers and other fibrous materials. Using simulated data, it is demonstrated that the local orientations obtained from this novel method are more robust to noise and fiber overlap than those computed by an established gradient-based alternative, both in 2D and 3D. The proposed algorithm's robustness to noise can be attributed to its use of a low-pass filter for detecting local orientations. Even in the absence of noise, depending on fiber curvature and density, its average local 3D orientation estimate can be about 9° more accurate than that of the alternative gradient-based method.

Implementations of this novel orientation estimation method require repeated image filtering with anisotropic Gaussian convolution filters. These filter operations, which other authors have used for adaptive image smoothing, are computationally expensive in standard implementations. Therefore, the third contribution of this thesis is a novel, optimal non-orthogonal separation of the anisotropic Gaussian convolution kernel. This result generalizes a previous one reported elsewhere and allows for efficient implementations of the corresponding convolution operation in any dimension. In 2D and 3D, these implementations achieve average performance gains by factors of 3.8 and 3.5, respectively, compared to an implementation based on the fast Fourier transform.

The contributions made by this thesis represent improvements over state-of-the-art methods, especially in the 2D analysis of cells in histological resections and in the 2D and 3D analysis of fibrous materials.
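The separability idea can be previewed in the simpler axis-aligned case: an anisotropic Gaussian with diagonal covariance factors exactly into one 1-D convolution per axis. The thesis's result extends this to arbitrarily oriented kernels via non-orthogonal 1-D passes, which is not reproduced here; this minimal sketch (function name is illustrative) only shows the axis-aligned factorization.

```python
import numpy as np
from scipy import ndimage as ndi

def anisotropic_gauss_axis_aligned(image, sigmas):
    """Filter with a diagonal-covariance Gaussian via sequential 1-D passes."""
    out = image.astype(float)
    for axis, sigma in enumerate(sigmas):
        # one cheap 1-D convolution per axis replaces the full n-D kernel
        out = ndi.gaussian_filter1d(out, sigma, axis=axis)
    return out

rng = np.random.default_rng(1)
img = rng.normal(size=(32, 32))
sep = anisotropic_gauss_axis_aligned(img, (3.0, 1.0))
ref = ndi.gaussian_filter(img, (3.0, 1.0))  # direct n-D reference
print(np.allclose(sep, ref, atol=1e-6))     # the two agree
```

The separable variant costs O(sum of kernel lengths) per pixel instead of O(product), which is the efficiency argument the non-orthogonal generalization carries over to rotated kernels.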
The recognition of patterns and structures has gained importance for dealing with the growing amount of data generated by sensors and simulations. Most existing methods for pattern recognition are tailored to scalar data and to non-correlated data of higher dimensions. The recognition of general patterns in flow structures is possible, but not yet practically usable due to the high computational effort. The main goal of this work is to present methods for the comparative visualization of flow data, based, among others, on a new method for efficient pattern recognition in flow data. This work is structured in three parts.

In the first part, a known feature-based approach to pattern recognition in flow data, the Clifford convolution, is applied to color edge detection and extended to non-uniform grids. However, this method remains computationally expensive for general pattern recognition, since the recognition algorithm has to be applied for numerous different scales and orientations of the query pattern.

A more efficient and accurate method for pattern recognition in flow data is presented in the second part. It is based on a novel mathematical formulation of moment invariants for flow data. The common moment invariants for pattern recognition are not applicable to flow data, since they are invariant only for non-correlated data. Because of the spatial correlation of flow data, the moment invariants had to be redefined with different basis functions to satisfy the requirements of an invariant mapping of flow data. The moment invariants are computed by a multi-scale convolution of the complete flow field with the basis functions. This pre-processing time roughly equals the time the former algorithms require to recognize one single general pattern. However, once the moments have been computed, they can be indexed and used as a look-up table to recognize any desired pattern quickly and interactively.
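The pre-computation step can be sketched as follows, assuming, for illustration only, complex monomials z^p restricted to a disk as basis functions and a 2-D velocity field encoded as u + iv. The thesis's actual basis functions, normalization, and multi-scale scheme may differ; only the idea of convolving the whole field once and then indexing the result is taken from the text.

```python
import numpy as np
from scipy.signal import fftconvolve

def moment_field(u, v, order, radius):
    """Moment of given order at every position of a 2-D flow field.

    Illustrative basis: the complex monomial z**order on a disk of the
    given radius; the flow is encoded as the complex field u + i*v.
    """
    f = u + 1j * v
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    z = x + 1j * y
    kernel = (z ** order) * (np.abs(z) <= radius)  # basis function on a disk
    # one FFT-based convolution yields the moment at all positions at once,
    # ready to be stored and indexed like a look-up table
    return fftconvolve(f, kernel, mode="same")

# sanity check: for a constant flow, the first-order moment vanishes,
# since z integrates to zero over the symmetric disk
u = np.ones((40, 40)); v = np.zeros((40, 40))
m1 = moment_field(u, v, order=1, radius=5)
print(abs(m1[20, 20]))
```

Repeating this for several orders and disk radii produces the multi-scale moment stack described above, after which querying a pattern reduces to comparisons in moment space.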
This results in a flexible and easy-to-use tool for the analysis of patterns in 2D flow data. For an improved rendering of the recognized features, an importance-driven streamline algorithm has been developed. The density of the streamlines can be adjusted with importance maps; the result of a pattern recognition can serve as such a map, for example. Finally, new comparative flow visualization approaches utilizing the streamline approach, the flow pattern matching, and the moment invariants are presented.
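One simple way to realize importance-controlled streamline density, sketched here under my own assumptions rather than as the algorithm actually used in this work, is to draw streamline seed points by weighted sampling against the importance map, so that regions of high importance (e.g. strong pattern-match response) receive more streamlines.

```python
import numpy as np

def sample_seeds(importance, n_seeds, rng):
    """Draw n_seeds (row, col) seed positions with probability
    proportional to the importance map (illustrative helper)."""
    p = importance.ravel().astype(float)
    p /= p.sum()
    idx = rng.choice(p.size, size=n_seeds, replace=False, p=p)
    return np.column_stack(np.unravel_index(idx, importance.shape))

rng = np.random.default_rng(2)
imp = np.ones((50, 50))
imp[10:20, 10:20] = 100.0           # a detected feature region
seeds = sample_seeds(imp, 60, rng)
in_feature = np.sum((seeds[:, 0] >= 10) & (seeds[:, 0] < 20)
                    & (seeds[:, 1] >= 10) & (seeds[:, 1] < 20))
print(in_feature, "of", len(seeds), "seeds in the feature region")
```

Tracing streamlines from such seeds concentrates them where the importance map, and hence the pattern recognition result, is strongest.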