## Preprints (rote Reihe) of the Department of Mathematics

277

A convergence rate is established for nonstationary iterated Tikhonov regularization, applied to ill-posed problems involving closed, densely defined linear operators, under general conditions on the iteration parameters. It is also shown that order-optimal accuracy is attained when a certain a posteriori stopping rule is used to determine the iteration number.
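In a finite-dimensional setting, the iteration behind this abstract can be sketched as follows. The matrix `A`, the data `y`, and the geometrically decaying parameters `alpha_k` are illustrative stand-ins for the operator-theoretic setting of the paper; this is a minimal sketch, not the paper's algorithm or its stopping rule.

```python
import numpy as np

def iterated_tikhonov(A, y, alphas, x0=None):
    """Nonstationary iterated Tikhonov regularization (matrix sketch).

    Each step solves (A^T A + alpha_k I) x_{k+1} = A^T y + alpha_k x_k,
    i.e., a Tikhonov problem penalizing the distance to the previous
    iterate; the alpha_k are the (possibly varying) iteration parameters.
    """
    n = A.shape[1]
    x = np.zeros(n) if x0 is None else x0.copy()
    AtA, Aty = A.T @ A, A.T @ y
    for a in alphas:
        x = np.linalg.solve(AtA + a * np.eye(n), Aty + a * x)
    return x

# Illustrative run on a nearly singular 2x2 system (exact data).
A = np.array([[1.0, 1.0], [1.0, 1.0001]])
x_true = np.array([1.0, 1.0])
y = A @ x_true
alphas = [0.1 * 0.5**k for k in range(20)]  # geometric decay, chosen ad hoc
x = iterated_tikhonov(A, y, alphas)
```

A geometric sequence `alpha_k` is one common choice satisfying the kind of general conditions the abstract alludes to; with noisy data, the iteration count would instead be fixed by an a posteriori stopping rule.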

274

This paper investigates the convergence of the Lanczos method for computing the smallest eigenpair of a self-adjoint elliptic differential operator via inverse iteration (without shifts).
Superlinear convergence rates are established, and their sharpness is investigated for a simple model problem. These results are illustrated numerically for a more difficult problem.
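The basic inverse iteration (without shifts) underlying this abstract can be sketched for a matrix discretization; here a 1D discrete Laplacian stands in for the elliptic operator, and the Lanczos acceleration analyzed in the paper is omitted.

```python
import numpy as np

def inverse_iteration(A, tol=1e-10, maxit=200, seed=0):
    """Unshifted inverse iteration for the smallest eigenpair of an SPD
    matrix A: repeatedly solve A z = x, normalize, and monitor the
    Rayleigh quotient until it stagnates."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    lam = x @ A @ x
    for _ in range(maxit):
        x = np.linalg.solve(A, x)       # one inverse-iteration step
        x /= np.linalg.norm(x)
        lam_new = x @ A @ x             # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol * abs(lam_new):
            return lam_new, x
        lam = lam_new
    return lam, x

# Model problem: 1D discrete Laplacian tridiag(-1, 2, -1).
n = 10
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lam, v = inverse_iteration(A)
```

For this model problem the smallest eigenvalue is known in closed form, `2 - 2*cos(pi/(n+1))`, which makes the convergence of the sketch easy to check.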

283

The first part of this paper studies a Levenberg-Marquardt scheme for nonlinear inverse problems in which the corresponding Lagrange (or regularization) parameter is chosen by an inexact Newton strategy. While the convergence analysis of standard implementations based on trust region strategies always requires the invertibility of the Fréchet derivative of the nonlinear operator at the exact solution, the new Levenberg-Marquardt scheme is suitable for ill-posed problems as long as the Taylor remainder is of second order in the interpolating metric between the range and domain topologies. Estimates of this type are established in the second part of the paper for ill-posed parameter identification problems arising in inverse groundwater hydrology. Both transient and steady-state data are investigated. Finally, the numerical performance of the new Levenberg-Marquardt scheme is studied and compared to a standard implementation on a realistic but synthetic 2D model problem from the engineering literature.
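The flavor of such a scheme can be sketched in finite dimensions: each step solves a regularized linearized problem, with the regularization parameter chosen so that the linearized residual is reduced by a prescribed factor, in the spirit of an inexact Newton condition. The reduction factor `q`, the shrink schedule for `alpha`, and the toy model `F` below are all illustrative assumptions, not the paper's construction.

```python
import numpy as np

def lm_step(F, J, x, y, q=0.7, alpha0=1.0, shrink=0.5, maxtries=50):
    """One Levenberg-Marquardt step for F(x) = y (matrix sketch).

    The regularization parameter alpha is decreased until the linearized
    residual ||r - J(x) h|| is at most q times the current nonlinear
    residual ||r|| -- an inexact-Newton-style choice of alpha.
    """
    r = y - F(x)
    A = J(x)
    alpha = alpha0
    for _ in range(maxtries):
        h = np.linalg.solve(A.T @ A + alpha * np.eye(x.size), A.T @ r)
        if np.linalg.norm(r - A @ h) <= q * np.linalg.norm(r):
            break
        alpha *= shrink
    return x + h

# Toy nonlinear problem (hypothetical, for illustration only).
def F(x): return np.array([x[0]**2 + x[1], x[0] * x[1]])
def J(x): return np.array([[2 * x[0], 1.0], [x[1], x[0]]])

y = F(np.array([1.0, 2.0]))   # exact data from a known solution
x = np.array([0.8, 1.5])      # perturbed starting guess
for _ in range(5):
    x = lm_step(F, J, x, y)
```

Note that this condition on the linearized residual never requires `J(x)` to be invertible, which is what makes the approach attractive for ill-posed problems.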

280

This paper develops truncated Newton methods as an appropriate tool for nonlinear inverse problems which are ill-posed in the sense of Hadamard. In each Newton step an approximate solution for the linearized problem is computed with the conjugate gradient method as an inner iteration. The conjugate gradient iteration is terminated when the residual has been reduced to a prescribed percentage. Under certain assumptions on the nonlinear operator it is shown that the algorithm converges and is stable if the discrepancy principle is used to terminate the outer iteration.
These assumptions are fulfilled, e.g., for the inverse problem of identifying the diffusion coefficient in a parabolic differential equation from distributed data.
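The inner iteration described above, conjugate gradients terminated once the residual has been reduced to a prescribed percentage of its initial value, can be sketched as follows; the matrix `A` and the tolerance `eta` are illustrative, and the outer Newton loop with the discrepancy principle is not shown.

```python
import numpy as np

def cg_truncated(A, b, eta=0.1, maxit=500):
    """Conjugate gradient method for A h = b (A symmetric positive
    definite), stopped as soon as ||r|| <= eta * ||b||, i.e., when the
    residual has dropped to the prescribed percentage eta."""
    h = np.zeros_like(b)
    r = b.copy()          # residual b - A h for h = 0
    p = r.copy()          # initial search direction
    rho = r @ r
    tol = eta * np.sqrt(rho)
    for _ in range(maxit):
        Ap = A @ p
        a = rho / (p @ Ap)            # step length
        h += a * p
        r -= a * Ap
        rho_new = r @ r
        if np.sqrt(rho_new) <= tol:   # prescribed residual reduction
            break
        p = r + (rho_new / rho) * p   # new conjugate direction
        rho = rho_new
    return h

# Illustrative SPD system (shifted discrete Laplacian).
n = 20
A = 3 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
h = cg_truncated(A, b, eta=0.1)
```

In a truncated Newton method, `A` and `b` would be the Gauss-Newton normal-equation data of the current linearization, and solving only to the relative tolerance `eta` is precisely the truncation that keeps the inner work proportionate to the outer progress.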