Knowledge about the distribution of a statistical estimator is important for various purposes, such as the construction of confidence intervals for model parameters or the determination of critical values of tests. A widely used method for estimating this distribution is the so-called bootstrap, which is based on imitating the probabilistic structure of the data-generating process using the information provided by a given set of random observations. In this paper we investigate this classical method in the context of artificial neural networks used for estimating a mapping from input to output space. We establish consistency results for bootstrap estimates of the distribution of parameter estimates.
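As a minimal sketch of the bootstrap principle described above (illustrated with a least-squares slope rather than the paper's neural network setting; data and parameters are invented), the sampling distribution of an estimator can be approximated by resampling the observations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: y = 2 x + noise; the estimator of interest is the LS slope.
n = 200
x = rng.uniform(-1, 1, n)
y = 2.0 * x + rng.normal(0, 0.5, n)

def slope(x, y):
    return np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)

theta_hat = slope(x, y)

# Pairs bootstrap: resample observations with replacement and re-estimate.
B = 500
boot = np.empty(B)
for i in range(B):
    idx = rng.integers(0, n, n)
    boot[i] = slope(x[idx], y[idx])

# The empirical distribution of `boot` imitates the sampling distribution of
# the estimator; its percentiles yield a bootstrap confidence interval.
lo, hi = np.percentile(boot, [2.5, 97.5])
```

The same resampling-and-re-estimation loop carries over conceptually to more complex estimators, which is the situation the paper analyzes.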

In this paper we derive nonparametric stochastic volatility models in discrete time. These models generalize parametric autoregressive random variance models, which have been applied quite successfully to financial time series. For the proposed models we investigate nonparametric kernel smoothers. It is seen that so-called nonparametric deconvolution estimators can be applied in this situation and that consistency results known for nonparametric errors-in-variables models carry over to the situation considered here.
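A hedged toy illustration of the errors-in-variables structure behind such models (a parametric AR(1) log-volatility with invented parameters, not the paper's nonparametric model): taking logs of squared observations turns the latent volatility into a signal observed with additive noise, which is where deconvolution ideas enter.

```python
import numpy as np

rng = np.random.default_rng(9)

# X_t = sigma_t * eta_t with h_t = log sigma_t^2 following an AR(1) (toy case).
n = 5000
h = np.zeros(n)
for t in range(1, n):
    h[t] = 0.9 * h[t - 1] + rng.normal(0, 0.3)
X = np.exp(h / 2) * rng.normal(size=n)

# Log-square transform: Y_t = h_t + log eta_t^2, i.e. the latent volatility is
# observed with additive measurement error -- an errors-in-variables setup.
Y = np.log(X ** 2)
# For standard normal eta, E[log eta^2] = -(log 2 + Euler-Mascheroni) ~ -1.27.
```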

Kernel smoothing in nonparametric autoregressive schemes offers a powerful tool in modelling time series. In this paper it is shown that the bootstrap can be used for estimating the distribution of kernel smoothers. This can be done by mimicking the stochastic nature of the whole process in the bootstrap resampling or by generating a simple regression model. Consistency of these bootstrap procedures will be shown.
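The first resampling idea, mimicking the stochastic nature of the whole process, can be sketched as follows (a toy AR(1) with an invented mean function and bandwidth; a simplified version, not the paper's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy nonparametric AR(1): X_t = m(X_{t-1}) + e_t with m(x) = 0.8 tanh(x).
def m(x):
    return 0.8 * np.tanh(x)

n = 400
X = np.zeros(n)
for t in range(1, n):
    X[t] = m(X[t - 1]) + rng.normal(0, 0.3)

def nw(x0, xs, ys, h):
    # Nadaraya-Watson kernel smoother with a Gaussian kernel.
    w = np.exp(-0.5 * ((x0 - xs) / h) ** 2)
    return np.sum(w * ys) / np.sum(w)

xs, ys = X[:-1], X[1:]
h = 0.3
res = ys - np.array([nw(x, xs, ys, h) for x in xs])
res -= res.mean()          # centred residuals

# Autoregression bootstrap: regenerate the whole process from the fitted mean
# function and resampled residuals, then recompute the smoother at a point.
B = 200
est_at_0 = np.empty(B)
for b in range(B):
    e = rng.choice(res, size=n)
    Xb = np.zeros(n)
    for t in range(1, n):
        Xb[t] = nw(Xb[t - 1], xs, ys, h) + e[t]
    est_at_0[b] = nw(0.0, Xb[:-1], Xb[1:], h)
# The spread of est_at_0 approximates the sampling variability of the
# kernel smoother at x = 0, where the true value m(0) is 0.
```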

In the following, we discuss a procedure for interpolating a spatial-temporal stochastic process. We stick to a particular, moderately general model, but the approach can easily be transferred to other, similar problems. The original data, which motivated this work, are measurements of gas concentrations (SO2, NO, O2) and several meteorological parameters (temperature, sun radiation, precipitation, wind speed, etc.). These data have been and are still being recorded twice every hour at several irregularly located places in the forests of the state of Rheinland-Pfalz as part of a program monitoring air pollution in the forests.
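For intuition only, a much simpler interpolation scheme than the model discussed in the paper: inverse-distance weighting at irregularly located stations, with invented coordinates and a made-up concentration surface.

```python
import numpy as np

rng = np.random.default_rng(10)

# 20 irregularly located stations and an invented "true" concentration surface.
stations = rng.uniform(0, 10, (20, 2))

def field(p):
    return np.sin(p[..., 0] / 3) + np.cos(p[..., 1] / 3)

obs = field(stations) + rng.normal(0, 0.05, 20)   # noisy measurements

def interpolate(p, power=2.0):
    # Inverse-distance weighting: nearby stations dominate the weighted average.
    d = np.linalg.norm(stations - p, axis=1)
    w = 1.0 / (d ** power + 1e-12)
    return np.sum(w * obs) / np.sum(w)

value = interpolate(np.array([5.0, 5.0]))   # prediction at an unobserved site
```

At an observed site the interpolant essentially reproduces the measurement; a model-based approach like the one in the paper additionally exploits the temporal and spatial correlation structure.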

In this paper we deal with the problem of fitting an autoregression of order p to given data coming from a stationary autoregressive process of infinite order. The paper is mainly concerned with the selection of an appropriate order for the autoregressive model. Based on the so-called final prediction error (FPE), a bootstrap order selection can be proposed, because it turns out that one relevant expression occurring in the FPE lends itself to the application of the bootstrap principle. Some asymptotic properties of the bootstrap order selection are proved. To carry out the bootstrap procedure, an autoregression with increasing but non-stochastic order is fitted to the given data. The paper concludes with some simulations.
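The non-bootstrap starting point, order selection by minimizing the final prediction error, can be sketched as follows (simulated AR(2) data and a textbook form of the FPE; the paper's bootstrap variant replaces one expression in this criterion by a bootstrap estimate):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated AR(2) data standing in for a truncated infinite-order autoregression.
n = 1000
X = np.zeros(n)
for t in range(2, n):
    X[t] = 0.5 * X[t - 1] - 0.3 * X[t - 2] + rng.normal()

def fpe(X, p):
    # Least-squares fit of an AR(p) and Akaike's final prediction error:
    # residual variance inflated by the complexity factor (n + p) / (n - p).
    n = len(X)
    Y = X[p:]
    Z = np.column_stack([X[p - k: n - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    sigma2 = np.mean((Y - Z @ coef) ** 2)
    return sigma2 * (n + p) / (n - p)

p_hat = min(range(1, 11), key=lambda p: fpe(X, p))   # selected order
```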

Neural networks are now a well-established tool for solving classification and forecasting problems in financial applications (compare, e.g., Bol et al., 1996, Evans, 1997, Rehkugler and Zimmermann, 1994, Refenes, 1995, and Refenes et al., 1996a), though many practitioners are still suspicious of all-too-glowing success stories. One reason may be that the construction of an appropriate network providing a reasonable solution to a complex data-analytic problem is rarely made explicit in the literature. In this paper, we try to contribute to filling this gap by discussing in detail the problem of dynamically allocating capital to the various components of a currency portfolio in such a manner that the average gain is larger than for certain benchmark portfolios. We base our solution on feedforward neural networks constructed employing various statistical model selection procedures described in, e.g., Anders (1997) or Refenes et al. (1996b). Neural networks used as the basis of trading strategies in finance should be assessed differently than those in technical applications. The task is not to construct a network which provides good forecasts, with respect to mean-square error, of some quantities of interest, or a good approximation of some given target values, but to achieve good performance in economic terms. For portfolio allocation, the main goal is to achieve, on average, a large return combined with a small risk. Therefore, we do not consider forecasts of the foreign exchange (FX) rate time series using neural networks, but instead obtain the allocation directly as the output of a network. Furthermore, we do not minimize some estimation or prediction error, but directly maximize an economically meaningful performance measure, the risk-adjusted return (compare also Heitkamp, 1996). In the subsequent chapter, we describe the details of the portfolio allocation problem. The following two chapters provide some technical information on how the networks were fitted to the available data and how the network inputs and outputs were selected. In chapter 5, finally, we discuss the promising results.
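As a deliberately crude sketch of the idea of optimizing an economic performance measure directly (two synthetic return series and a plain grid search instead of a neural network; all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(11)

# Two synthetic daily return series standing in for FX portfolio components.
r1 = rng.normal(0.0005, 0.01, 1000)
r2 = rng.normal(0.0002, 0.005, 1000)

def risk_adjusted(w):
    # Performance of the allocation (w, 1 - w): mean return per unit of risk.
    port = w * r1 + (1 - w) * r2
    return port.mean() / port.std()

# Maximize the risk-adjusted return directly over the allocation weight,
# rather than minimizing a forecasting error.
ws = np.linspace(0.0, 1.0, 101)
w_star = ws[np.argmax([risk_adjusted(w) for w in ws])]
```

In the paper, the scalar weight is replaced by the output of a feedforward network and the optimization runs over the network parameters, but the objective is of this performance-based type rather than a prediction error.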

We consider the problem of estimating the conditional quantile of a time series at time t given observations of the same and perhaps other time series available at time t-1. We discuss an estimate obtained by inverting a kernel estimate of the conditional distribution function, and prove its asymptotic normality and uniform strong consistency. We illustrate the good performance of the estimate for light- and heavy-tailed distributions of the innovations with a small simulation study.
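A minimal sketch of the inversion idea (toy linear AR(1) data, Gaussian kernel, invented bandwidth): estimate the conditional distribution function as a weighted empirical CDF and read off the quantile.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy AR(1): X_t = 0.6 X_{t-1} + e_t, e_t ~ N(0, 1); the conditional median
# given X_{t-1} = x is 0.6 x, which the estimate should roughly recover.
n = 2000
X = np.zeros(n)
for t in range(1, n):
    X[t] = 0.6 * X[t - 1] + rng.normal()
xs, ys = X[:-1], X[1:]

def cond_quantile(x0, alpha, h=0.4):
    # Weighted empirical CDF of X_t given X_{t-1} ~ x0 (Gaussian kernel
    # weights), inverted at level alpha.
    w = np.exp(-0.5 * ((x0 - xs) / h) ** 2)
    w /= w.sum()
    order = np.argsort(ys)
    cum = np.cumsum(w[order])
    return ys[order][np.searchsorted(cum, alpha)]

q_med = cond_quantile(1.0, 0.5)   # should be close to 0.6 * 1.0
```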

We derive some asymptotics for a new approach to curve estimation proposed by Mrázek et al. [MWB06], which combines localization and regularization. This methodology has been considered as the basis of a unified framework covering various different smoothing methods in the analogous two-dimensional problem of image denoising. As a first step toward understanding this approach theoretically, we restrict our discussion here to the least-squares distance, where we have explicit formulas for the function estimates and where we can derive a rather complete asymptotic theory from known results for the Priestley-Chao curve estimate. In this paper, we consider only the case where the bias dominates the mean-square error. Other situations are dealt with in subsequent papers.
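For reference, the classical Priestley-Chao estimate mentioned above, on a fixed equidistant design with an invented test function and bandwidth:

```python
import numpy as np

rng = np.random.default_rng(4)

# Equidistant fixed design y_i = f(x_i) + e_i with an invented f(x) = sin(2 pi x).
n = 500
x = np.arange(1, n + 1) / n
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, n)

def priestley_chao(x0, h):
    # Priestley-Chao estimate: (1 / (n h)) * sum_i K((x0 - x_i) / h) * y_i.
    k = np.exp(-0.5 * ((x0 - x) / h) ** 2) / np.sqrt(2 * np.pi)  # Gaussian kernel
    return np.sum(k * y) / (n * h)

f_hat = priestley_chao(0.25, 0.05)   # true value f(0.25) = 1
```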

In this paper we consider the CHARME model, a class of generalized mixtures of nonlinear nonparametric AR-ARCH time series. We apply the theory of Markov models to derive asymptotic stability of this model. In particular, the goal is to provide sets of conditions under which the model is geometrically ergodic and therefore satisfies certain mixing conditions. This result can be considered as a first step toward an asymptotic theory for the model.

We consider the problem of estimating the conditional quantile of a time series at time \(t\) given observations of the same and perhaps other time series available at time \(t-1\). We discuss sieve estimates, which are nonparametric versions of the Koenker-Bassett regression quantiles and do not require the specification of the innovation law. We prove consistency of these estimates and illustrate their good performance for light- and heavy-tailed distributions of the innovations with a small simulation study. As an economic application, we use the estimates to calculate the value at risk of some stock price series.
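The Koenker-Bassett idea underlying these sieve estimates can be sketched in its simplest, unconditional form (synthetic data; grid search rather than linear programming): minimizing the summed check function recovers the quantile.

```python
import numpy as np

rng = np.random.default_rng(5)

# Koenker-Bassett check function rho_alpha(u) = u * (alpha - 1{u < 0}).
def check(u, alpha):
    return u * (alpha - (u < 0))

# Minimizing the summed check function over a grid recovers the alpha-quantile.
y = rng.normal(size=5000)
alpha = 0.95
grid = np.linspace(-4.0, 4.0, 2001)
loss = np.array([check(y - q, alpha).sum() for q in grid])
q_hat = grid[np.argmin(loss)]
```

In the regression-quantile setting, the constant q is replaced by a function of the covariates (here, of the past of the time series), restricted to a sieve of growing function classes.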

We consider data generating mechanisms which can be represented as mixtures of finitely many regression or autoregression models. We propose nonparametric estimators for the functions characterizing the various mixture components based on a local quasi maximum likelihood approach and prove their consistency. We present an EM algorithm for calculating the estimates numerically which is mainly based on iteratively applying common local smoothers and discuss its convergence properties.
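A hedged sketch of such an EM algorithm in a parametric special case with two linear components (the paper's M-step uses local smoothers in place of the weighted least-squares fits below; data and starting values are invented):

```python
import numpy as np

rng = np.random.default_rng(6)

# Two-component mixture of regressions: y = 2x or y = -2x plus noise, mixed 50/50.
n = 600
x = rng.uniform(-1, 1, n)
z = rng.integers(0, 2, n)                       # hidden component labels
y = np.where(z == 0, 2.0 * x, -2.0 * x) + rng.normal(0, 0.3, n)

b = np.array([1.0, -1.0])                       # initial slopes
pi_w, sigma = 0.5, 0.5
for _ in range(50):
    # E-step: posterior probability that each point belongs to component 0.
    d0 = pi_w * np.exp(-0.5 * ((y - b[0] * x) / sigma) ** 2)
    d1 = (1 - pi_w) * np.exp(-0.5 * ((y - b[1] * x) / sigma) ** 2)
    g = d0 / (d0 + d1)
    # M-step: weighted least squares per component, then mixing weight and scale.
    b[0] = np.sum(g * x * y) / np.sum(g * x * x)
    b[1] = np.sum((1 - g) * x * y) / np.sum((1 - g) * x * x)
    pi_w = g.mean()
    sigma = np.sqrt(np.mean(g * (y - b[0] * x) ** 2
                            + (1 - g) * (y - b[1] * x) ** 2))

slopes = sorted(b)                              # should approach [-2, 2]
```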

Maximum Likelihood Estimators for Markov Switching Autoregressive Processes with ARCH Component
(2009)

We consider a mixture of AR-ARCH models in which the switching between the basic states of the observed time series is controlled by a hidden Markov chain. Under simple conditions, we prove consistency and asymptotic normality of the maximum likelihood parameter estimates, combining the general asymptotic results of Douc et al. (2004) with the geometric ergodicity results of Franke et al. (2007).
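A minimal simulation of such a hidden-Markov-switching autoregression (the ARCH component is omitted for brevity; transition matrix and coefficients are invented):

```python
import numpy as np

rng = np.random.default_rng(12)

# Hidden Markov chain switching between two AR(1) regimes.
P = np.array([[0.95, 0.05],
              [0.05, 0.95]])      # transition matrix of the hidden chain
phi = np.array([0.8, -0.8])       # regime-specific AR coefficients
n = 3000
s = np.zeros(n, dtype=int)        # hidden states
X = np.zeros(n)                   # observed series
for t in range(1, n):
    s[t] = rng.choice(2, p=P[s[t - 1]])
    X[t] = phi[s[t]] * X[t - 1] + rng.normal()
# The symmetric transition matrix gives a stationary state distribution of
# (1/2, 1/2), so the chain should spend roughly half its time in each regime.
```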

We introduce a class of models for time series of counts which includes INGARCH-type models as well as log-linear models for conditionally Poisson distributed data. For these processes, we formulate simple conditions for stationarity and weak dependence with a geometric rate. The coupling argument used in the proof serves as a template for a similar treatment of integer-valued time series models based on other types of thinning operations.
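A short simulation sketch of an INGARCH(1,1) process (parameters invented, chosen so that the stationarity condition a + b < 1 holds):

```python
import numpy as np

rng = np.random.default_rng(7)

# INGARCH(1,1): X_t | past ~ Poisson(lam_t),  lam_t = w + a*lam_{t-1} + b*X_{t-1}.
w, a, b = 1.0, 0.3, 0.4          # a + b < 1, so the process has a stationary mean
n = 20000
lam = np.empty(n)
X = np.empty(n, dtype=np.int64)
lam[0] = w / (1 - a - b)         # start at the stationary mean of lam_t
X[0] = rng.poisson(lam[0])
for t in range(1, n):
    lam[t] = w + a * lam[t - 1] + b * X[t - 1]
    X[t] = rng.poisson(lam[t])

mean_theory = w / (1 - a - b)    # stationary mean of the counts, here 10/3
```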

In this paper, we discuss the problem of testing for a changepoint in the structure of an integer-valued time series. In particular, we consider a test statistic of cumulative sum (CUSUM) type for general Poisson autoregressions of order 1. We investigate the asymptotic behaviour of conditional least-squares estimates of the parameters in the presence of a changepoint. Then, we derive the asymptotic distribution of the test statistic under the hypothesis of no change, allowing for the calculation of critical values. We prove consistency of the test, i.e. asymptotic power 1, and consistency of the corresponding changepoint estimate. As an application, we consider changepoint detection in daily epileptic seizure counts from a clinical study.
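A simplified illustration of a CUSUM-type statistic (a plain change-in-mean version on independent Poisson counts, not the paper's statistic for Poisson autoregressions; all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(8)

# Counts with a change in mean at t = 150: Poisson(3) before, Poisson(6) after.
n = 300
X = np.concatenate([rng.poisson(3.0, 150), rng.poisson(6.0, 150)])

# CUSUM statistic: max_k |S_k - (k/n) S_n| / (sqrt(n) * sigma_hat).
S = np.cumsum(X)
k = np.arange(1, n + 1)
dev = np.abs(S - k / n * S[-1])
sigma_hat = X.std()
T = dev.max() / (np.sqrt(n) * sigma_hat)   # large values indicate a change
k_hat = int(np.argmax(dev)) + 1            # changepoint estimate (1-based)
```

Under the no-change hypothesis, statistics of this type converge to the supremum of a Brownian bridge, which is what provides critical values.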

In this paper, we demonstrate the power of functional data models for the statistical analysis of stimulus-response experiments, which is a natural way to look at this kind of data and makes use of the full information available. In particular, we focus on detecting a change in the mean of the response in a series of stimulus-response curves, where we also take into account dependence in time.