### Filter

#### Year of publication

- 2007 (14)

#### Document type

- Preprint (14)

#### Keywords

- Mixture Models (2)
- 2-d kernel regression (1)
- Algorithmics (1)
- Earth's disturbing potential (1)
- Gauge Distances (1)
- Geometric Ergodicity (1)
- Identifiability (1)
- Inverse Problem (1)
- Legendre Wavelets (1)
- Linear Integral Equation (1)

As a first approximation the Earth is a sphere; as a second approximation it may be considered an ellipsoid of revolution. The deviations of the actual Earth's gravity field from the ellipsoidal 'normal' field are so small that the problem of determining them can be linearized. Splitting the Earth's gravity field into a 'normal' field and a small remaining 'disturbing' field therefore considerably simplifies its determination. Under the assumption of an ellipsoidal Earth model, high observational accuracy is achievable only if the deflection of the vertical, i.e. the deviation of the physical plumb line, to which measurements refer, from the ellipsoidal normal, is not ignored. Hence, the determination of the disturbing potential from known deflections of the vertical is a central problem of physical geodesy. In this paper we propose a new, promising method for modelling the disturbing potential locally from deflections of the vertical. The essential tools are integral formulae on the sphere based on the Green function of the Beltrami operator. The determination of the disturbing potential from deflections of the vertical is formulated as a multiscale procedure involving scale-dependent regularized versions of the surface gradient of the Green function. The modelling process is carried out in a multiscale framework using locally supported surface curl-free vector wavelets.
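The role of the Green function can be indicated in formulas. Up to an additive constant, the Green function with respect to the Beltrami operator on the unit sphere has a logarithmic singularity, and a scale-dependent regularization replaces it near that singularity. The sketch below shows one common construction from the multiscale geodesy literature; the paper's exact kernels and constants may differ.

```latex
G^{\rho_j}(\xi\cdot\eta) =
\begin{cases}
\dfrac{1}{4\pi}\,\ln(1-\xi\cdot\eta) + C, & 1-\xi\cdot\eta \ge \rho_j,\\[6pt]
\dfrac{1}{4\pi}\left(\dfrac{1-\xi\cdot\eta}{\rho_j} - 1\right)
  + \dfrac{1}{4\pi}\,\ln\rho_j + C, & 1-\xi\cdot\eta < \rho_j .
\end{cases}
```

The two branches match continuously at 1 − ξ·η = ρ_j, the surface gradient of the regularized kernel stays bounded, and letting the scale parameter ρ_j tend to 0 recovers the singular kernel.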

This paper presents a wavelet analysis of temporal and spatial variations of the Earth's gravitational potential based on tensor product wavelets. The time-space wavelet concept is realized by combining Legendre wavelets for the time domain with spherical wavelets for the space domain. As a consequence, a multiresolution analysis in both time and space is formulated within a unified concept. The method is then realized numerically, first on synthetically generated data and finally on several real data sets.
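The tensor product construction can be sketched in formulas (the notation below is illustrative, not taken from the paper): with Legendre wavelets in time and spherical wavelets in space, the combined kernel is simply their product,

```latex
\Psi_{j,j'}\big((t,\xi),(t',\eta)\big)
  \;=\; \psi_{j}^{\mathrm{time}}(t,t')\;\psi_{j'}^{\mathrm{space}}(\xi\cdot\eta),
```

so that a signal F(t, ξ) is resolved simultaneously at temporal scale j and spatial scale j', which is the unified time-space multiresolution described above.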

Given a directed graph G = (N,A) with arc capacities u and a minimum cost flow problem defined on G, the capacity inverse minimum cost flow problem is to find a new capacity vector u' for the arc set A such that a given feasible flow x' is optimal with respect to the modified capacities. Among all capacity vectors u' satisfying this condition, we would like to find one with minimum ||u' - u|| value. We consider two distance measures for ||u' - u||, rectilinear and Chebyshev distances. By reduction from the feedback arc set problem we show that the capacity inverse minimum cost flow problem is NP-hard in the rectilinear case. On the other hand, it is polynomially solvable by a greedy algorithm for the Chebyshev norm. In the latter case we propose a heuristic for the bicriteria problem, where we minimize among all optimal solutions the number of affected arcs. We also present computational results for this heuristic.
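The two distance measures can be illustrated concretely. The capacity vectors below are toy data, not from the paper, and the greedy algorithm for the Chebyshev case is not reproduced here.

```python
# Illustrative sketch: the two distance measures ||u' - u|| compared
# in the abstract, evaluated on toy capacity vectors.

def rectilinear(u, v):
    """l1 distance: total amount of capacity change over all arcs."""
    return sum(abs(a - b) for a, b in zip(u, v))

def chebyshev(u, v):
    """l_inf distance: largest capacity change on any single arc."""
    return max(abs(a - b) for a, b in zip(u, v))

u  = [4, 7, 2, 5]   # original arc capacities (hypothetical)
u2 = [4, 5, 3, 5]   # modified arc capacities (hypothetical)

print(rectilinear(u, u2))  # total change
print(chebyshev(u, u2))    # worst single-arc change
```

Minimizing the first objective is NP-hard by the reduction from the feedback arc set problem; minimizing the second is where the greedy algorithm applies.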

We derive some asymptotics for a new approach to curve estimation proposed by Mrázek et al. [MWB06] which combines localization and regularization. This methodology has been considered as the basis of a unified framework covering various smoothing methods in the analogous two-dimensional problem of image denoising. As a first step towards understanding this approach theoretically, we restrict our discussion to the least-squares distance, for which we have explicit formulas for the function estimates and can derive a rather complete asymptotic theory from known results for the Priestley-Chao curve estimate. In this paper we consider only the case where the bias dominates the mean-square error; other situations are dealt with in subsequent papers.
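The Priestley-Chao estimate that the asymptotics build on admits a compact sketch. The Gaussian kernel, bandwidth, and test function below are illustrative choices, not taken from the paper.

```python
# Minimal Priestley-Chao kernel estimate on an equispaced design:
#   m_hat(x) = (1/(n*h)) * sum_i K((x - x_i)/h) * y_i
import math

def priestley_chao(x, xs, ys, h):
    """Gaussian-kernel Priestley-Chao estimate of the regression curve."""
    n = len(xs)
    k = lambda t: math.exp(-0.5 * t * t) / math.sqrt(2 * math.pi)
    return sum(k((x - xi) / h) * yi for xi, yi in zip(xs, ys)) / (n * h)

n, h = 1000, 0.05
xs = [(i + 0.5) / n for i in range(n)]
ys = [math.sin(2 * math.pi * xi) for xi in xs]  # noise-free for clarity

est = priestley_chao(0.25, xs, ys, h)  # true value sin(pi/2) = 1
```

Even on noise-free data the estimate at x = 0.25 falls slightly below 1: the smoothing bias is of order h² times the curvature, which is exactly the bias-dominated regime the abstract refers to.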

Given an undirected connected network G and a weight function, finding a basis of the cut space with minimum sum of cut weights is termed the Minimum Cut Basis Problem. This problem can be solved, e.g., by the algorithm of Gomory and Hu [GH61]. If, however, fundamentality is required, i.e., the basis must be induced by a spanning tree T in G, the problem becomes NP-hard. Theoretical and numerical results on this topic can be found in Bunke et al. [BHMM07] and in Bunke [Bun06]. In the following we present heuristics of complexity O(m log n) and O(mn), where n and m denote the numbers of vertices and edges, respectively, which yield upper bounds for the aforementioned problem and in several cases outperform the heuristics of Schwahn [Sch05].

In this article a new data-adaptive method for smoothing bivariate functions is developed. The smoothing is done by kernel regression with rotationally invariant bivariate kernels. Two or three local bandwidth parameters are chosen automatically by a two-step plug-in approach. The algorithm starts with small global bandwidth parameters, which adapt to the noisy image over a few iterations. In the next step, local bandwidths are estimated. Some general asymptotic results on Gasser-Müller estimators and optimal bandwidth selection are given. The derived local bandwidth estimators converge and are asymptotically normal.
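A minimal sketch of bivariate kernel regression with a rotationally invariant kernel may help fix ideas. It uses a plain Nadaraya-Watson weighting with a single fixed bandwidth; the paper's Gasser-Müller weights and two-step plug-in bandwidth selection are not reproduced.

```python
# Bivariate kernel smoothing with a rotationally invariant Gaussian
# kernel K(||d||/h): the weight depends only on the distance ||d||.
import math

def smooth(img, h):
    """Nadaraya-Watson smoothing of a 2-D array with bandwidth h."""
    rows, cols = len(img), len(img[0])
    r = int(3 * h) + 1  # truncate the kernel support at ~3 bandwidths
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            num = den = 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < rows and 0 <= jj < cols:
                        w = math.exp(-(di * di + dj * dj) / (2 * h * h))
                        num += w * img[ii][jj]
                        den += w
            out[i][j] = num / den
    return out

# sanity check: a constant image is reproduced exactly
flat = [[5.0] * 8 for _ in range(8)]
smoothed = smooth(flat, 1.0)
```

Normalizing by the summed weights (the `den` term) keeps the smoother exact on constants, also near the image boundary where the kernel window is clipped.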

We study nonlinear finite element discretizations for the density gradient equation in the quantum drift diffusion model. In particular, we give a finite element description of the so-called nonlinear scheme introduced by Ancona. We prove the existence of discrete solutions and provide a consistency and convergence analysis, which yields the optimal order of convergence for both discretizations. The performance of both schemes is compared numerically, in particular with respect to the influence of approximate vacuum boundary conditions.

In contrast to p-hub problems with a summation objective (p-hub median), minmax hub problems (p-hub center) have not attracted much attention in the literature. In this paper, we give a polyhedral analysis of the uncapacitated single allocation p-hub center problem (USApHCP). The analysis is based on a radius formulation, which currently yields the most efficient solution procedures. We show which of the valid inequalities in this formulation are facet-defining and present non-elementary classes of facets, for which we propose separation problems. A central element of our argument is the close connection between the polytopes of the USApHCP and of the uncapacitated p-facility location problem (pUFL). Hence, the new classes of facets can also be used to improve pUFL formulations.

Given an undirected, connected network G = (V,E) with weights on the edges, the cut basis problem asks for a maximum number of linearly independent cuts such that the sum of the cut weights is minimized. Surprisingly, this problem has not attracted as much attention as its graph-theoretic counterpart, the cycle basis problem. We consider two versions of the problem, the unconstrained and the fundamental cut basis problem. In the unconstrained case, where the cuts in the basis may be arbitrary, the problem can be written as a multiterminal network flow problem and is thus solvable in strongly polynomial time. The complexity of this algorithm improves on the complexity of the best algorithms for the cycle basis problem, so that it is preferable for cycle basis problems in planar graphs. In contrast, the fundamental cut basis problem, where each cut in the basis is obtained by deleting one edge from a spanning tree T, is shown to be NP-hard. We present heuristics and integer programming formulations and summarize first experience with numerical tests.
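The notion of a fundamental cut basis can be made concrete: deleting one edge of a spanning tree T splits the vertices into two sets, and the induced cut consists of all graph edges crossing the split. A small sketch on a toy graph with illustrative weights (none of the paper's heuristics are reproduced):

```python
# Enumerate the n-1 fundamental cuts induced by a spanning tree T.

def component(adj, start, blocked):
    """Vertices reachable from `start` in T without using edge `blocked`."""
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if {v, w} != set(blocked) and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def fundamental_cuts(vertices, edges, tree_edges):
    adj = {v: [] for v in vertices}
    for a, b, _ in tree_edges:
        adj[a].append(b)
        adj[b].append(a)
    cuts = []
    for a, b, _ in tree_edges:          # delete one tree edge each
        side = component(adj, a, (a, b))
        cut = [(u, v, w) for u, v, w in edges if (u in side) != (v in side)]
        cuts.append(cut)
    return cuts

V = [0, 1, 2, 3]
E = [(0, 1, 2), (1, 2, 3), (2, 3, 1), (3, 0, 4), (0, 2, 5)]  # (u, v, weight)
T = [(0, 1, 2), (1, 2, 3), (2, 3, 1)]                        # spanning tree
cuts = fundamental_cuts(V, E, T)
weights = [sum(w for _, _, w in c) for c in cuts]
```

This always yields n − 1 linearly independent cuts; the NP-hard part is choosing the tree T that minimizes the total cut weight.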

In this paper we construct spline functions based on a reproducing kernel Hilbert space to interpolate or approximate the velocity field of earthquake waves inside the Earth from traveltime data for an inhomogeneous grid of sources (hypocenters) and receivers (seismic stations). Theoretical aspects, including error estimates and convergence results, as well as numerical results are presented.
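The interpolation step can be sketched generically: solve a linear system in the kernel Gram matrix so that the resulting expansion reproduces the data. A Gaussian kernel on R^3 stands in for the paper's reproducing kernel, and the data points are synthetic.

```python
# Kernel interpolation in an RKHS: find coefficients c with K c = y,
# then evaluate s(x) = sum_i c_i * k(x, p_i). The interpolant matches
# the data at the sample locations by construction.
import math

def gauss_kernel(p, q, s=1.0):
    """Gaussian kernel on R^3 (illustrative stand-in for the true kernel)."""
    d2 = sum((a - b) ** 2 for a, b in zip(p, q))
    return math.exp(-d2 / (2 * s * s))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]   # sample locations
vals = [3.0, 1.0, 2.0, 4.0]                          # observed values
K = [[gauss_kernel(p, q) for q in pts] for p in pts]
coef = solve(K, vals)

def interpolant(x):
    return sum(c * gauss_kernel(x, p) for c, p in zip(coef, pts))
```

Because the Gaussian Gram matrix is positive definite for distinct points, the system is uniquely solvable; for approximation instead of interpolation one would add a regularization term to the diagonal.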