Kaiserslautern - Fachbereich Mathematik
Year of publication
- 2011 (30)
Document Type
- Doctoral Thesis (14)
- Preprint (10)
- Report (6)
Keywords
- autoregressive process (3)
- neural network (3)
- nonparametric regression (3)
- (dynamic) network flows (1)
- CUSUM statistic (1)
- Change analysis (1)
- Chow Quotient (1)
- Copula (1)
- Credit Default Swap (1)
- Dynamic Network Flows (1)
- FPTAS (1)
- Finite Pointset Method (1)
- FlowLoc (1)
- Green’s function (1)
- INGARCH (1)
- Integer-valued time series (1)
- Knapsack problem (1)
- Local smoothing (1)
- Markov Chain (1)
- Markov Kette (Markov chain) (1)
- Mathematik (mathematics) (1)
- Mathematische Modellierung (mathematical modelling) (1)
- Methode der Fundamentallösungen (method of fundamental solutions) (1)
- Momentum and Mass Transfer (1)
- Multi Primary and One Second Particle Method (1)
- Multiphase Flows (1)
- Non-commutative Computer Algebra (1)
- Order (1)
- Parallel volume (1)
- Poisson autoregression (1)
- Population Balance Equation (1)
- Poroelastizität (poroelasticity) (1)
- Pseudopolynomial-Time Algorithm (1)
- Rank test (1)
- Restricted Shortest Path (1)
- Shortest path problem (1)
- Similarity measures (1)
- Standard basis (1)
- Theorie schwacher Lösungen (theory of weak solutions) (1)
- Tiefengeothermie (deep geothermal energy) (1)
- Tropical Grassmannian (1)
- Tropical Intersection Theory (1)
- Wills functional (1)
- additive Gaussian noise (1)
- change analysis (1)
- changepoint test (1)
- combinatorial optimization (1)
- connectedness (1)
- convex optimization (1)
- correlated errors (1)
- denoising (1)
- gradient descent reprojection (1)
- gravitation (1)
- heuristic (1)
- higher-order moments (1)
- image restoration (1)
- local search algorithm (1)
- locally supported wavelets (1)
- location theory (1)
- magnetic field (1)
- maximum a posteriori estimation (1)
- maximum likelihood estimation (1)
- method of fundamental solutions (1)
- multicriteria optimization (1)
- multiplicative noise (1)
- non-convex body (1)
- non-local filtering (1)
- numerical irreducible decomposition (1)
- oblique derivative (1)
- poroelasticity (1)
- potential (1)
- regularization methods (1)
- resource constrained shortest path problem (1)
- sequential test (1)
- single layer kernel (1)
- spherical decomposition (1)
- splines (1)
- strongly polynomial-time algorithm (1)
- superstep cycles (1)
- uniform central limit theorem (1)
- universal objective function (1)
- weak solution theory (1)
In this paper we develop testing procedures for the detection of structural changes in nonlinear autoregressive processes. For the detection procedure we model the regression function by a single-layer feedforward neural network. We show that CUSUM-type tests based on cumulative sums of estimated residuals, which have been intensively studied for linear regression, can be extended to this case. The limit distribution under the null hypothesis is obtained, which is needed to construct asymptotic tests. For a large class of alternatives it is shown that the tests have asymptotic power one. In this case, we obtain a consistent change-point estimator which is related to the test statistics. Power and size are further investigated in a small simulation study with particular emphasis on situations where the model is misspecified, i.e., the data are not generated by a neural network but by some other regression function. As an illustration, applications to the Nile data set as well as to S&P log-returns are given.
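The core of such a CUSUM-type procedure can be sketched in a few lines. The following Python snippet is an illustrative simplification, not the paper's procedure: it takes estimated residuals as given (however obtained, e.g. from a fitted neural network), uses a plain iid standard deviation in place of a proper long-run variance estimator, and returns the maximal normalized partial sum together with its argmax as a change-point estimator:

```python
import numpy as np

def cusum_changepoint(residuals):
    # Maximal normalized CUSUM of centered residuals and the index
    # at which it is attained (the change-point estimator).
    e = np.asarray(residuals, dtype=float)
    n = len(e)
    e = e - e.mean()                      # center the estimated residuals
    partial_sums = np.cumsum(e)           # S_k = e_1 + ... + e_k
    sigma = e.std(ddof=1)                 # simple iid proxy for the long-run variance
    stats = np.abs(partial_sums) / (sigma * np.sqrt(n))
    k_hat = int(np.argmax(stats))        # argmax as change-point estimator
    return stats[k_hat], k_hat

# toy example: a mean shift halfway through the "residual" sequence
rng = np.random.default_rng(0)
e = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(1.5, 1.0, 200)])
stat, k_hat = cusum_changepoint(e)
```

On this toy sequence the statistic lies far above typical critical values and the argmax lands near the true change point at observation 200.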
We consider an autoregressive process with a nonlinear regression function that is modeled by a feedforward neural network. We derive a uniform central limit theorem which is useful in the context of change-point analysis. We propose a test for a change in the autoregression function which, by the uniform central limit theorem, has asymptotic power one for a large class of alternatives including local alternatives.
We consider a variant of a knapsack problem with a fixed cardinality constraint. There are three objective functions to be optimized: one real-valued and two integer-valued objectives. We show that this problem can be solved efficiently by a local search. The algorithm utilizes connectedness of a subset of feasible solutions and has optimal run-time.
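As a rough illustration of the local search idea, the following sketch solves a single-objective simplification: a knapsack with a fixed cardinality k, explored via the swap neighborhood (exchange one chosen item for one unchosen item). The function name and the starting rule (the k lightest items) are illustrative choices, not the paper's algorithm, which handles three objectives and exploits connectedness of a subset of feasible solutions:

```python
import itertools

def local_search_fixed_cardinality(values, weights, capacity, k):
    # Swap-neighborhood local search for a knapsack with a fixed
    # cardinality constraint (single objective, illustrative only).
    n = len(values)
    # start from a feasible solution: the k lightest items
    current = set(sorted(range(n), key=lambda i: weights[i])[:k])
    if sum(weights[i] for i in current) > capacity:
        return None  # no feasible solution of cardinality k exists this way
    improved = True
    while improved:
        improved = False
        for i, j in itertools.product(list(current), range(n)):
            if j in current:
                continue
            new_weight = sum(weights[t] for t in current) - weights[i] + weights[j]
            if new_weight <= capacity and values[j] > values[i]:
                current.remove(i)   # perform the improving, feasible swap
                current.add(j)
                improved = True
                break
    return current

# example: choose exactly 2 of 4 items within capacity 8
best = local_search_fixed_cardinality([4, 2, 10, 1], [3, 2, 5, 1], capacity=8, k=2)
```

In this instance the swap neighborhood reaches the optimal pair of items; in general a local search of this kind only guarantees local optimality.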
We study the efficient computation of Nash and strong equilibria in weighted bottleneck games. In such a game different players interact on a set of resources in such a way that every player chooses a subset of the resources as her strategy. The cost of a single resource depends on the total weight of the players choosing it, and the personal cost every player tries to minimize is the cost of the most expensive resource in her strategy, the bottleneck value. To derive efficient algorithms for finding Nash equilibria in these games, we generalize a transformation of a bottleneck game into a special congestion game introduced by Caragiannis et al. [1]. While investigating the transformation we introduce so-called lexicographic games, in which the aim of a player is not only to minimize her bottleneck value but to lexicographically minimize the ordered vector of the costs of all resources in her strategy. For the special case of network bottleneck games, i.e., where the resources are the edges of a graph and the strategies are paths, we analyse different greedy-type methods and their limitations for extension-parallel and series-parallel graphs.
We provide a space-domain-oriented separation of magnetic fields into parts generated by sources in the exterior and sources in the interior of a given sphere. The separation itself is well known in geomagnetic modeling, usually in terms of a spherical harmonic analysis or a wavelet analysis based on spherical harmonics. However, it can also be regarded as a modification of the Helmholtz decomposition, for which we derive integral representations with explicitly known convolution kernels. Regularizing these singular kernels allows a multiscale representation of the magnetic field with locally supported wavelets. This representation is applied to a set of CHAMP data for crustal field modeling.
In a dynamic network, the quickest path problem asks for a path minimizing the time needed to send a given amount of flow from source to sink along this path. In practical settings, for example in evacuation or transportation planning, the reliability of network arcs depends on the specific scenario of interest. In this circumstance, the question arises of finding a quickest path among all those having at least a desired path reliability. In this article, this reliable quickest path problem is solved by transforming it into the restricted quickest path problem. In the latter, each arc is associated with a nonnegative cost value, and the goal is to find a quickest path among those not exceeding a predefined budget with respect to the overall (additive) cost value. For both the restricted and the reliable quickest path problem, pseudopolynomial exact algorithms and fully polynomial-time approximation schemes are proposed.
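The pseudopolynomial dynamic program behind such budget-constrained path problems can be sketched as follows. The recursion dp[b][v] = minimal time to reach v with total cost at most b is the textbook formulation for the restricted shortest path problem, not necessarily the article's exact algorithm for quickest paths in dynamic networks:

```python
import math

def restricted_shortest_path(n, edges, source, sink, budget):
    # Pseudopolynomial DP: minimize travel time subject to an integer
    # cost budget. edges: list of (u, v, time, cost), costs nonnegative.
    INF = math.inf
    dp = [[INF] * n for _ in range(budget + 1)]
    for b in range(budget + 1):
        dp[b][source] = 0.0
    for b in range(budget + 1):
        # Bellman-Ford style relaxation; zero-cost arcs stay on level b
        for _ in range(n - 1):
            for u, v, t, c in edges:
                if c <= b and dp[b - c][u] + t < dp[b][v]:
                    dp[b][v] = dp[b - c][u] + t
    return dp[budget][sink]   # inf if no path respects the budget

# toy network: a fast expensive path, a slow cheap path, a free slow arc
edges = [(0, 1, 1, 3), (1, 3, 1, 3),   # time 2, cost 6
         (0, 2, 2, 1), (2, 3, 2, 1),   # time 4, cost 2
         (0, 3, 10, 0)]                # time 10, cost 0
fast = restricted_shortest_path(4, edges, 0, 3, budget=6)
cheap = restricted_shortest_path(4, edges, 0, 3, budget=2)
```

The running time is O(B * n * m) for budget B, which is pseudopolynomial in the input; rounding the costs turns this scheme into an FPTAS in the standard way.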
In this paper the multi-terminal q-FlowLoc problem (q-MT-FlowLoc) is introduced. FlowLoc problems combine two well-known modeling tools: (dynamic) network flows and locational analysis. Since the q-MT-FlowLoc problem is NP-hard, we give a mixed-integer programming formulation and propose a heuristic which obtains a feasible solution by calculating a maximum flow in a special graph H. If this flow is also a minimum-cost flow, various versions of the heuristic can be obtained by the use of different cost functions. The quality of these solutions is compared.
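The heuristic's core subroutine is a maximum flow computation. A minimal sketch of such a computation (generic Edmonds-Karp on an adjacency matrix, not the special graph H of the paper) could look as follows:

```python
from collections import deque

def max_flow(capacity, s, t):
    # Edmonds-Karp: repeatedly augment along a shortest (BFS) path
    # in the residual graph. capacity: n x n matrix of arc capacities.
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total          # no augmenting path left: flow is maximal
        # find the bottleneck residual capacity along the path
        v, aug = t, float('inf')
        while v != s:
            u = parent[v]
            aug = min(aug, capacity[u][v] - flow[u][v])
            v = u
        # push the bottleneck amount along the path
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += aug
            flow[v][u] -= aug     # residual (reverse) capacity
            v = u
        total += aug

capacity = [[0, 3, 2, 0],
            [0, 0, 1, 2],
            [0, 0, 0, 3],
            [0, 0, 0, 0]]
value = max_flow(capacity, 0, 3)
```

On this small instance the maximum flow value equals the minimum cut, as guaranteed by the max-flow min-cut theorem.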
A standard approach for deducing a variational denoising method is the maximum a posteriori strategy. Here, the denoising result is chosen in such a way that it maximizes the conditional density function of the reconstruction given its observed noisy version. Unfortunately, this approach does not imply that the empirical distribution of the reconstructed noise components follows the statistics of the assumed noise model. In this paper, we propose to overcome this drawback by applying an additional transformation to the random vector modeling the noise. This transformation is then incorporated into the standard denoising approach and leads to a more sophisticated data fidelity term, which forces the removed noise components to have the desired statistical properties. The good properties of our new approach are demonstrated for additive Gaussian noise by numerical examples. Our method proves to be especially well suited for data containing high-frequency structures, where other denoising methods, which assume a certain smoothness of the signal, cannot restore the small structures.
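For contrast, the standard MAP baseline that this approach improves upon can be sketched for a 1-D signal: a quadratic data fidelity term plus a quadratic smoothness prior, minimized by solving the normal equations. The operator choice (forward differences) and the parameter lam are illustrative; the paper's additional transformation forcing the removed noise to match the noise statistics is not shown here:

```python
import numpy as np

def map_denoise_gaussian(f, lam):
    # Classical MAP denoising under additive Gaussian noise:
    # minimize ||u - f||^2 + lam * ||D u||^2 for a 1-D signal f,
    # where D is the forward-difference operator.
    n = len(f)
    D = np.diff(np.eye(n), axis=0)       # (n-1) x n forward differences
    # normal equations: (I + lam * D^T D) u = f
    A = np.eye(n) + lam * D.T @ D
    return np.linalg.solve(A, f)

# toy example: a constant signal corrupted by Gaussian noise
rng = np.random.default_rng(1)
noisy = 1.0 + rng.normal(0.0, 0.5, 128)
denoised = map_denoise_gaussian(noisy, lam=10.0)
```

Note that this baseline leaves the empirical distribution of the removed noise, noisy - denoised, unconstrained, which is exactly the drawback the paper addresses.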
In this paper we develop monitoring schemes for detecting structural changes in nonlinear autoregressive models. We approximate the regression function by a single-layer feedforward neural network. We show that CUSUM-type tests based on cumulative sums of estimated residuals, which have been intensively studied for linear regression in both an offline as well as an online setting, can be extended to this model. The proposed monitoring schemes reject (asymptotically) the null hypothesis only with a given probability but will detect a large class of alternatives with probability one. In order to construct these sequential tests of a given size, the limit distribution under the null hypothesis is obtained.
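A minimal sketch of such an online monitoring scheme: after a training sample of size m, incoming residuals are accumulated and the partial sum is compared against a curved boundary. Both the boundary shape c * (1 + k/m) and the threshold c are illustrative choices, not the critical values derived in the paper:

```python
import numpy as np

def sequential_cusum_monitor(training_residuals, stream, c=3.0):
    # Sequential (online) CUSUM detector sketch: signal a change as
    # soon as the normalized partial sum of new residuals crosses
    # an illustrative boundary c * (1 + k/m) * sigma * sqrt(m).
    m = len(training_residuals)
    sigma = np.std(training_residuals, ddof=1)  # estimated from training data
    s = 0.0
    for k, e in enumerate(stream, start=1):
        s += e
        boundary = c * (1.0 + k / m) * sigma * np.sqrt(m)
        if abs(s) > boundary:
            return k          # alarm at monitoring step k
    return None               # no change detected

# toy example: residual mean shifts from 0 to 2 after 100 monitoring steps
rng = np.random.default_rng(2)
training = rng.normal(0.0, 1.0, 200)
stream = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(2.0, 1.0, 200)])
alarm = sequential_cusum_monitor(training, stream)
```

The curved boundary keeps the false alarm probability under the null hypothesis controlled over the whole monitoring period while a persistent shift is eventually detected.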
For computational reasons, the spline interpolation of the Earth's gravitational potential is usually done in a spherical framework. In this work, however, we investigate a spline method with respect to the real Earth. We are concerned with developing real-Earth-oriented strategies and methods for the determination of the Earth's gravitational potential. For this purpose we introduce the reproducing kernel Hilbert space of Newton potentials on and outside a given regular surface, with a reproducing kernel defined as a Newton integral over its interior. We first give an overview of the results achieved thus far concerning approximations on regular surfaces using surface potentials (Chapter 3). The main results are contained in the fourth chapter, where we take a closer look at the Earth's gravitational potential, the Newton potentials, and their characterization in the interior and the exterior space of the Earth. We also present the L2-decomposition for regions in R3 in terms of distributions as the main strategy to impose the Hilbert space structure on the space of potentials on and outside a given regular surface. The properties of the Newton potential operator are investigated in relation to the closed subspace of harmonic density functions. After these preparations, in the fifth chapter we are able to construct the reproducing kernel Hilbert space of Newton potentials on and outside a regular surface. The spline formulation for the solution of interpolation problems corresponding to a set of bounded linear functionals is given, and corresponding convergence theorems are proven. The spline formulation reflects the specifics of the Earth's surface, due to the representation of the reproducing kernel (of the solution space) as a Newton integral over the inner space of the Earth.
Moreover, the approximating potential functions have the same domain of harmonicity as the actual Earth's gravitational potential, i.e., they are harmonic outside and continuous on the Earth's surface. This is a step forward in comparison to the spherical harmonic spline formulation, which involves functions harmonic down to the Runge sphere. The sixth chapter deals with the representation of the kernel used in the spherical case. It turns out that in the case of a spherical Earth this kernel can be considered a kind of generalization of spherically oriented kernels, such as the Abel-Poisson or the singularity kernel. We also investigate the existence of a closed-form expression of the kernel; however, at this point it remains unknown to us. Hence, in Chapter 7, we consider certain discretization methods for integrals over regions in R3, in connection with the theory of the multidimensional Euler summation formula for the Laplace operator. We discretize the Newton integral over the real Earth (representing the spline function) and give a priori estimates for approximate integration when using this discretization method. The last chapter summarizes our results and gives some directions for future research.