We develop a test for stationarity of a time series against the alternative of a time-changing covariance structure. Using localized versions of the periodogram, we obtain empirical versions of a reasonable notion of a time-varying spectral density. Coefficients with respect to a Haar wavelet series expansion of such a time-varying periodogram serve as an indicator of whether there is some deviation from covariance stationarity. We propose a test based on the limit distribution of these empirical coefficients.
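The idea above can be sketched numerically: compute periodograms on blocks of the series and take Haar wavelet coefficients of these block periodograms in the time direction; under stationarity these coefficients fluctuate around zero, while a time-changing covariance structure makes them large. This is a minimal illustration, not the paper's test statistic; the segmentation scheme and the use of only the coarsest-scale Haar coefficient are simplifying assumptions.

```python
import numpy as np

def local_periodogram(x, segment_len):
    """Periodograms on non-overlapping segments of the series
    (a simple localization scheme, assumed for illustration)."""
    n_seg = len(x) // segment_len
    P = np.empty((n_seg, segment_len // 2 + 1))
    for k in range(n_seg):
        seg = x[k * segment_len:(k + 1) * segment_len]
        P[k] = np.abs(np.fft.rfft(seg - seg.mean())) ** 2 / segment_len
    return P

def haar_time_coeffs(P):
    """Coarsest-scale Haar coefficient in the time direction, per frequency:
    (mean over first half of the segments) minus (mean over second half).
    Large absolute values hint at a deviation from covariance stationarity."""
    half = P.shape[0] // 2
    return (P[:half].mean(axis=0) - P[half:2 * half].mean(axis=0)) / np.sqrt(2.0)
```

For a series whose variance jumps halfway through, the coefficients are markedly larger in magnitude than for a comparable stationary series.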
We derive minimax rates for estimation in anisotropic smoothness classes. This rate is attained by a coordinatewise thresholded wavelet estimator based on a tensor product basis with a separate scale parameter for every dimension. It is shown that this basis is superior to its one-scale multiresolution analog if different degrees of smoothness in different directions are present. As an important application, we introduce a new adaptive wavelet estimator of the time-dependent spectrum of a locally stationary time series. Using this model, which was recently developed by Dahlhaus, we show that the resulting estimator nearly attains the rate that is optimal in Gaussian white noise, simultaneously over a wide range of smoothness classes. Moreover, with our new approach we overcome the difficulty of how to choose the right amount of smoothing, i.e. how to adapt to the appropriate resolution, for reconstructing the local structure of the evolutionary spectrum in the time-frequency plane.
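The tensor-product construction can be sketched as follows: a full 1-D wavelet transform is applied along each axis separately, so every coefficient carries its own scale index per dimension, and coefficients are then thresholded coordinatewise. A Haar transform is used here purely for concreteness; the hard-threshold rule and the choice of basis are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def haar_1d(v):
    """Full 1-D orthonormal Haar transform (length must be a power of two)."""
    v = v.astype(float)
    n = len(v)
    while n > 1:
        a = (v[0:n:2] + v[1:n:2]) / np.sqrt(2.0)  # local averages
        d = (v[0:n:2] - v[1:n:2]) / np.sqrt(2.0)  # local differences
        v[:n // 2], v[n // 2:n] = a, d
        n //= 2
    return v

def tensor_haar_2d(img):
    """Tensor-product transform: a full 1-D transform along each axis,
    giving separate scale parameters per dimension (unlike the usual
    one-scale 2-D multiresolution analysis)."""
    rows = np.apply_along_axis(haar_1d, 1, img)
    return np.apply_along_axis(haar_1d, 0, rows)

def hard_threshold(coeffs, lam):
    """Coordinatewise hard thresholding: keep |c| >= lam, zero out the rest."""
    return np.where(np.abs(coeffs) >= lam, coeffs, 0.0)
```

Because each axis has its own scale, a function that is smooth in one direction but rough in the other is represented by coefficients concentrated at different per-axis scales, which is what the thresholding can exploit.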
With this article we first give a brief review of wavelet thresholding methods in non-Gaussian and non-i.i.d. situations. Many of these applications are based on Gaussian approximations of the empirical coefficients. For regression and density estimation with independent observations, we establish joint asymptotic normality of the empirical coefficients by means of strong approximations. We then describe how one can prove asymptotic normality under mixing conditions on the observations by cumulant techniques. In the second part, we apply these non-linear adaptive shrinking schemes to spectral estimation problems for both a stationary and a non-stationary time series setup. For the latter, in a model of Dahlhaus for the evolutionary spectrum of a locally stationary time series, we present two different approaches. Moreover, we show that in classes of anisotropic function spaces an appropriately chosen wavelet basis automatically adapts to possibly different degrees of regularity in the different directions. The resulting fully adaptive spectral estimator attains the rate that is optimal in the idealized Gaussian white noise model up to a logarithmic factor.
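The shrinkage rule underlying such schemes can be stated in a few lines: soft thresholding shrinks every empirical coefficient toward zero and kills the small ones, with a threshold whose classical choice is motivated precisely by the Gaussian approximation of the coefficients. This is the standard Donoho-Johnstone rule, given here as a generic sketch rather than the specific scheme of the article.

```python
import numpy as np

def soft_threshold(d, lam):
    """Non-linear soft shrinkage of empirical coefficients:
    shrink toward zero by lam, set coefficients below lam to zero."""
    return np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)

def universal_threshold(sigma, n):
    """Universal threshold sigma * sqrt(2 log n), justified by the
    Gaussian approximation of the empirical coefficients."""
    return sigma * np.sqrt(2.0 * np.log(n))
```

Under a Gaussian approximation with noise level sigma, the universal threshold exceeds the maximum of n pure-noise coefficients with high probability, so coefficients that survive carry genuine signal.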
We consider nonparametric estimation of the coefficients a_i(.), i=1,...,p, of a time-varying autoregressive process. Choosing an orthonormal wavelet basis representation of the functions a_i(.), the empirical wavelet coefficients are derived from the time series data as the solution of a least squares minimization problem. In order to allow the a_i(.) to be functions of inhomogeneous regularity, we apply nonlinear thresholding to the empirical coefficients and obtain locally smoothed estimates of the a_i(.). We show that the resulting estimators attain the usual minimax L_2-rates up to a logarithmic factor, simultaneously over a large scale of Besov classes. The finite-sample behaviour of our procedure is demonstrated by application to two typical simulated examples.
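The least-squares step can be sketched for the simplest case p=1: expand a(.) in a Haar basis on rescaled time t/T, build regressors psi_jk(t/T) * X_{t-1}, and solve for the empirical wavelet coefficients by least squares. The Haar basis, the tvAR(1) restriction, and the fixed maximal level J are simplifying assumptions for illustration; in the procedure described above the coefficients would subsequently be thresholded.

```python
import numpy as np

def haar_design(u, J):
    """Haar basis functions on [0,1) evaluated at rescaled times u,
    up to level J (an illustrative truncation)."""
    cols = [np.ones_like(u)]
    for j in range(J):
        for k in range(2 ** j):
            lo, mid, hi = k / 2 ** j, (k + 0.5) / 2 ** j, (k + 1) / 2 ** j
            psi = np.zeros_like(u)
            psi[(u >= lo) & (u < mid)] = 2 ** (j / 2)
            psi[(u >= mid) & (u < hi)] = -2 ** (j / 2)
            cols.append(psi)
    return np.column_stack(cols)

def tvar1_wavelet_ls(x, J=2):
    """Empirical wavelet coefficients of a(.) in the tvAR(1) model
    X_t = a(t/T) X_{t-1} + e_t, via least squares."""
    T = len(x)
    u = np.arange(1, T) / T
    B = haar_design(u, J) * x[:-1, None]  # regressors psi_jk(t/T) * X_{t-1}
    coef, *_ = np.linalg.lstsq(B, x[1:], rcond=None)
    return coef
```

The fitted coefficient function is then recovered as `haar_design(u, J) @ coef`; applying a nonlinear threshold to `coef` before this reconstruction yields the locally smoothed estimate.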
Knowledge about the distribution of a statistical estimator is important for various purposes, such as the construction of confidence intervals for model parameters or the determination of critical values of tests. A widely used method to estimate this distribution is the so-called bootstrap, which is based on an imitation of the probabilistic structure of the data-generating process on the basis of the information provided by a given set of random observations. In this paper we investigate this classical method in the context of artificial neural networks used for estimating a mapping from input to output space. We establish consistency results for bootstrap estimates of the distribution of parameter estimates.
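The bootstrap scheme itself is generic: resample (input, output) pairs with replacement and re-estimate the parameters on each resample, so that the spread of the replicates imitates the sampling distribution of the estimator. The sketch below uses linear least squares as a cheap stand-in for the network training step, which is an assumption for illustration only.

```python
import numpy as np

def pairs_bootstrap(X, y, estimator, n_boot=200, seed=0):
    """Pairs bootstrap: resample observations with replacement and
    re-estimate, imitating the data-generating process."""
    rng = np.random.default_rng(seed)
    n = len(y)
    reps = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        reps.append(estimator(X[idx], y[idx]))
    return np.array(reps)

def ols(X, y):
    """Least-squares parameter estimate (a stand-in for training
    the neural network on the resampled data)."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef
```

The empirical distribution of the replicates can then be used for confidence intervals (e.g. percentile intervals for each parameter) or critical values, in the spirit of the consistency results stated above.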