In this paper, we demonstrate the power of functional data models for the statistical analysis of stimulus-response experiments, which is a natural way to look at this kind of data and makes use of the full information available. In particular, we focus on detecting a change in the mean of the response in a series of stimulus-response curves, where we also take dependence in time into account.
In this paper, we discuss the problem of testing for a changepoint in the structure of an integer-valued time series. In particular, we consider a test statistic of cumulative sum (CUSUM) type for general Poisson autoregressions of order 1. We investigate the asymptotic behaviour of conditional least-squares estimates of the parameters in the presence of a changepoint. Then, we derive the asymptotic distribution of the test statistic under the hypothesis of no change, allowing for the calculation of critical values. We prove consistency of the test, i.e. asymptotic power 1, and consistency of the corresponding changepoint estimate. As an application, we have a look at changepoint detection in daily epileptic seizure counts from a clinical study.
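To make the construction concrete, here is a minimal sketch of a CUSUM-type statistic for a Poisson autoregression of order 1. The intensity form lambda_t = omega + alpha * x_{t-1}, the function name `cusum_changepoint`, and the plug-in scale estimate are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def cusum_changepoint(x, omega, alpha):
    """CUSUM-type statistic for a Poisson autoregression of order 1 with
    fitted intensity lambda_t = omega + alpha * x_{t-1} (illustrative sketch).

    Returns the bridge-type CUSUM statistic and a changepoint estimate
    (the index where the cumulative residual sum deviates most)."""
    x = np.asarray(x, dtype=float)
    lam = omega + alpha * x[:-1]        # one-step conditional means
    resid = x[1:] - lam                 # conditional least-squares residuals
    n = len(resid)
    s = np.cumsum(resid)
    k = np.arange(1, n + 1)
    bridge = np.abs(s - k / n * s[-1])  # |S_k - (k/n) S_n|
    sigma = resid.std(ddof=1)           # plug-in scale estimate
    stat = bridge.max() / (sigma * np.sqrt(n))
    khat = int(bridge.argmax()) + 1     # changepoint estimate
    return stat, khat
```

Under the no-change hypothesis, the statistic converges to the supremum of a Brownian bridge, which is what allows the calculation of critical values.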
We introduce a class of models for time series of counts which includes INGARCH-type models as well as log-linear models for conditionally Poisson distributed data. For those processes, we formulate simple conditions for stationarity and weak dependence with a geometric rate. The coupling argument used in the proof serves as a template for a similar treatment of integer-valued time series models based on other types of thinning operations.
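As an illustration of the model class, the following sketch simulates a Poisson INGARCH(1,1) process. The recursion lambda_t = omega + alpha * X_{t-1} + beta * lambda_{t-1} and the condition alpha + beta < 1 are the standard INGARCH(1,1) formulation; the function name and parameter defaults are illustrative.

```python
import numpy as np

def simulate_ingarch(n, omega=1.0, alpha=0.3, beta=0.4, seed=0):
    """Simulate a Poisson INGARCH(1,1) count series:
        X_t | past ~ Poisson(lam_t),  lam_t = omega + alpha*X_{t-1} + beta*lam_{t-1}.
    Stationarity requires alpha + beta < 1; the stationary mean is then
    omega / (1 - alpha - beta)."""
    if not alpha + beta < 1:
        raise ValueError("need alpha + beta < 1 for stationarity")
    rng = np.random.default_rng(seed)
    lam = omega / (1 - alpha - beta)             # start at the stationary mean
    x = np.empty(n, dtype=np.int64)
    for t in range(n):
        x[t] = rng.poisson(lam)
        lam = omega + alpha * x[t] + beta * lam  # intensity recursion
    return x
```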
Maximum Likelihood Estimators for Markov Switching Autoregressive Processes with ARCH Component
(2009)
We consider a mixture of AR-ARCH models where the switching between the basic states of the observed time series is controlled by a hidden Markov chain. Under simple conditions, we prove consistency and asymptotic normality of the maximum likelihood parameter estimates, combining general asymptotic results of Douc et al. (2004) with the geometric ergodicity results of Franke et al. (2007).
We consider data generating mechanisms which can be represented as mixtures of finitely many regression or autoregression models. We propose nonparametric estimators for the functions characterizing the various mixture components based on a local quasi maximum likelihood approach and prove their consistency. We present an EM algorithm for calculating the estimates numerically which is mainly based on iteratively applying common local smoothers and discuss its convergence properties.
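A toy sketch of the kind of EM iteration described above, alternating Gaussian responsibilities (E-step) with a responsibility-weighted Nadaraya-Watson smoother per mixture component (M-step). The kernel, bandwidth, noise scale and initialization are arbitrary illustrative choices, not the paper's algorithm.

```python
import numpy as np

def gauss_kernel(u):
    return np.exp(-0.5 * u ** 2)

def mixture_em_smoother(x, y, K=2, h=0.3, sigma=0.5, n_iter=20, seed=0):
    """Toy EM-type iteration for a mixture of K regression curves:
    E-step: Gaussian responsibilities given the current curves;
    M-step: responsibility-weighted Nadaraya-Watson smoother per component."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    r = rng.dirichlet(np.ones(K), size=n)            # responsibilities, shape (n, K)
    m = np.zeros((K, n))                             # component curves at the x_i
    W = gauss_kernel((x[:, None] - x[None, :]) / h)  # kernel weight matrix
    for _ in range(n_iter):
        for k in range(K):                           # M-step: weighted local smoothers
            w = W * r[:, k][None, :]
            m[k] = (w @ y) / np.maximum(w.sum(axis=1), 1e-12)
        pi = r.mean(axis=0)                          # mixing proportions
        dens = pi[None, :] * gauss_kernel((y[:, None] - m.T) / sigma)
        dens = np.maximum(dens, 1e-300)              # guard against underflow
        r = dens / dens.sum(axis=1, keepdims=True)   # E-step
    return m, r
```

The M-step is exactly "iteratively applying common local smoothers": each component curve is refit with standard kernel weights multiplied by the current responsibilities.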
We derive some asymptotics for a new approach to curve estimation proposed by Mrázek et al. [MWB06] which combines localization and regularization. This methodology has been considered as the basis of a unified framework covering various different smoothing methods in the analogous two-dimensional problem of image denoising. As a first step towards understanding this approach theoretically, we restrict our discussion here to the least-squares distance, where we have explicit formulas for the function estimates and where we can derive a rather complete asymptotic theory from known results for the Priestley-Chao curve estimate. In this paper, we consider only the case where the bias dominates the mean-square error. Other situations are dealt with in subsequent papers.
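For reference, the Priestley-Chao estimate mentioned above has, for an ordered fixed design x_1 < ... < x_n, the form m_hat(t) = h^{-1} * sum_i (x_i - x_{i-1}) K((t - x_i)/h) y_i. A minimal sketch with a Gaussian kernel (function name and defaults are illustrative):

```python
import numpy as np

def priestley_chao(t, x, y, h):
    """Priestley-Chao kernel estimate on an ordered fixed design:
        m_hat(t) = (1/h) * sum_i (x_i - x_{i-1}) * K((t - x_i)/h) * y_i,
    here with a Gaussian kernel K."""
    t, x, y = np.asarray(t, float), np.asarray(x, float), np.asarray(y, float)
    dx = np.diff(x, prepend=2 * x[0] - x[1])  # spacings, first one extrapolated
    K = np.exp(-0.5 * ((t[:, None] - x[None, :]) / h) ** 2) / np.sqrt(2 * np.pi)
    return (K * (dx * y)[None, :]).sum(axis=1) / h
```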
We consider the problem of estimating the conditional quantile of a time series at time \(t\) given observations of the same and perhaps other time series available at time \(t-1\). We discuss sieve estimates which are nonparametric versions of the Koenker-Bassett regression quantiles and do not require the specification of the innovation law. We prove consistency of those estimates and illustrate their good performance for light- and heavy-tailed distributions of the innovations with a small simulation study. As an economic application, we use the estimates for calculating the value at risk of some stock price series.
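The Koenker-Bassett regression quantiles mentioned above rest on the check (pinball) loss rho_tau(u) = u(tau - 1{u<0}), whose expected value is minimized by the tau-quantile. A toy illustration minimizing the empirical loss over candidate constants (not a sieve estimate; names are illustrative):

```python
import numpy as np

def pinball(u, tau):
    """Koenker-Bassett check function rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def quantile_via_pinball(y, tau):
    """The tau-quantile minimizes the expected check loss; here we minimize
    the empirical loss over the observed values as candidate constants."""
    y = np.sort(np.asarray(y, dtype=float))
    losses = [pinball(y - c, tau).mean() for c in y]
    return y[int(np.argmin(losses))]
```

Replacing the constant by a function from a growing (sieve) class turns this into a nonparametric conditional quantile estimate, with no assumption on the innovation law.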
In this paper we consider CHARME models, a class of generalized mixtures of nonlinear nonparametric AR-ARCH time series. We apply the theory of Markov models to derive asymptotic stability of this model. Indeed, the goal is to provide sets of conditions under which our model is geometrically ergodic and therefore satisfies certain mixing conditions. This result can be considered as a basis for an asymptotic theory of our model.
We consider the problem of estimating the conditional quantile of a time series at time t given observations of the same and perhaps other time series available at time t-1. We discuss an estimate which we get by inverting a kernel estimate of the conditional distribution function, and prove its asymptotic normality and uniform strong consistency. We illustrate the good performance of the estimate for light and heavy-tailed distributions of the innovations with a small simulation study.
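A minimal sketch of the inversion idea: estimate the conditional distribution function by a Nadaraya-Watson weighted empirical CDF and take its generalized inverse at level tau. The Gaussian kernel, the bandwidth and the function name are illustrative choices.

```python
import numpy as np

def conditional_quantile(x, y, x0, tau, h=0.2):
    """Estimate the conditional tau-quantile q_tau(x0) by inverting a
    Nadaraya-Watson estimate of the conditional distribution function:
        F_hat(v | x0) = sum_i K((x_i - x0)/h) 1{y_i <= v} / sum_i K((x_i - x0)/h)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    w = w / w.sum()                   # normalized kernel weights
    order = np.argsort(y)
    cdf = np.cumsum(w[order])         # weighted conditional CDF
    idx = np.searchsorted(cdf, tau)   # smallest v with F_hat(v | x0) >= tau
    return y[order[min(idx, len(y) - 1)]]
```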
In this paper we derive nonparametric stochastic volatility models in discrete time. These models generalize parametric autoregressive random variance models, which have been applied quite successfully to financial time series. For the proposed models we investigate nonparametric kernel smoothers. It is seen that so-called nonparametric deconvolution estimators can be applied in this situation and that consistency results known for nonparametric errors-in-variables models carry over to the situation considered herein.
Kernel smoothing in nonparametric autoregressive schemes offers a powerful tool in modelling time series. In this paper it is shown that the bootstrap can be used for estimating the distribution of kernel smoothers. This can be done by mimicking the stochastic nature of the whole process in the bootstrap resampling or by generating a simple regression model. Consistency of these bootstrap procedures will be shown.
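A rough sketch of the first of the two resampling ideas above, mimicking the whole autoregressive process: fit the conditional mean by a kernel smoother, resample the centred residuals, regenerate the series, and refit. Function names, the Gaussian kernel and all tuning constants are illustrative assumptions.

```python
import numpy as np

def nw(t, xs, ys, h):
    """Nadaraya-Watson estimate of m(x) = E[X_t | X_{t-1} = x] at points t."""
    w = np.exp(-0.5 * ((t[:, None] - xs[None, :]) / h) ** 2)
    return (w @ ys) / np.maximum(w.sum(axis=1), 1e-12)

def bootstrap_kernel_ar(x, x0, h=0.4, B=100, seed=0):
    """Residual-bootstrap sketch for the distribution of the kernel estimate
    m_hat(x0) in the autoregression X_t = m(X_{t-1}) + eps_t."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    xs, ys = x[:-1], x[1:]
    resid = ys - nw(xs, xs, ys, h)
    resid -= resid.mean()                        # centre the residuals
    boot = np.empty(B)
    for b in range(B):
        xb = np.empty(len(x))
        xb[0] = x[0]
        eps = rng.choice(resid, size=len(x) - 1, replace=True)
        for t in range(1, len(x)):               # regenerate the whole process
            xb[t] = nw(xb[t - 1 : t], xs, ys, h)[0] + eps[t - 1]
        boot[b] = nw(np.array([x0]), xb[:-1], xb[1:], h)[0]
    return boot
```

The spread of the bootstrap replicates then serves as an estimate of the sampling distribution of the kernel smoother at x0.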
In the following, we discuss a procedure for interpolating a spatial-temporal stochastic process. We stick to a particular, moderately general model, but the approach can easily be transferred to other similar problems. The original data, which motivated this work, are measurements of gas concentrations (SO2, NO, O2) and several meteorological parameters (temperature, sun radiation, precipitation, wind speed etc.). These data have been, and still are, recorded twice every hour at several irregularly located places in the forests of the state of Rheinland-Pfalz as part of a program monitoring air pollution in the forests.
Knowledge about the distribution of a statistical estimator is important for various purposes like, for example, the construction of confidence intervals for model parameters or the determination of critical values of tests. A widely used method to estimate this distribution is the so-called bootstrap, which is based on an imitation of the probabilistic structure of the data generating process on the basis of the information provided by a given set of random observations. In this paper we investigate this classical method in the context of artificial neural networks used for estimating a mapping from input to output space. We establish consistency results for bootstrap estimates of the distribution of parameter estimates.
In this paper we deal with the problem of fitting an autoregression of order p to given data coming from a stationary autoregressive process of infinite order. The paper is mainly concerned with the selection of an appropriate order for the autoregressive model. Based on the so-called final prediction error (FPE), a bootstrap order selection can be proposed, because it turns out that one relevant expression occurring in the FPE lends itself to an application of the bootstrap principle. Some asymptotic properties of the bootstrap order selection are proved. To carry out the bootstrap procedure, an autoregression with increasing but non-stochastic order is fitted to the given data. The paper is concluded by some simulations.
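For concreteness, here is a minimal sketch of order selection by the classical (non-bootstrap) final prediction error FPE(p) = sigma_hat_p^2 (n+p)/(n-p), with the AR(p) fitted by least squares; the bootstrap version discussed in the paper replaces the relevant expression by its bootstrap counterpart. Function names and defaults are illustrative.

```python
import numpy as np

def fit_ar_ls(x, p):
    """Least-squares fit of an AR(p): regress x_t on (x_{t-1}, ..., x_{t-p})."""
    n = len(x)
    X = np.column_stack([x[p - j - 1 : n - j - 1] for j in range(p)])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    sigma2 = np.mean((x[p:] - X @ coef) ** 2)    # residual variance
    return coef, sigma2

def fpe_order(x, pmax=8):
    """Select the AR order minimizing Akaike's final prediction error
        FPE(p) = sigma_hat_p^2 * (n + p) / (n - p)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    fpe = []
    for p in range(1, pmax + 1):
        _, s2 = fit_ar_ls(x, p)
        fpe.append(s2 * (n + p) / (n - p))
    return int(np.argmin(fpe)) + 1, fpe
```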