Similar articles
20 similar articles found (search time: 500 ms)
1.
Summary.  Time series often arise in environmental monitoring settings, which typically involve measuring processes repeatedly over time. In many such applications, observations are irregularly spaced and, additionally, are not normally distributed. An example is water monitoring data collected in Boston Harbor by the Massachusetts Water Resources Authority. We describe a simple robust approach for estimating regression parameters and a first-order autocorrelation parameter in a time series where the observations are irregularly spaced. Estimates are obtained from an estimating equation that is constructed as a linear combination of estimated innovation errors, suitably made robust by symmetric and possibly bounded functions. Under an assumption of data missing completely at random and mild regularity conditions, the proposed estimating equation yields consistent and asymptotically normal estimates. Simulations suggest that our estimator performs well in moderate sample sizes. We demonstrate our method on Secchi depth data collected from Boston Harbor.
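A minimal sketch of the idea (illustrative only; the Huber tuning constant, the logistic reparameterisation of the autocorrelation, and the loss-minimisation formulation are our own choices, not the authors' exact estimating equation): fit a regression with continuous-time AR(1) errors at irregular times by applying a bounded loss to the gap-standardised innovations.

```python
# Sketch: robust regression with continuous-time AR(1) errors at irregular times.
# All names are illustrative; this is not the authors' exact estimator.
import numpy as np
from scipy.optimize import minimize

def huber(u, c=1.345):
    a = np.abs(u)
    return np.where(a <= c, 0.5 * u**2, c * a - 0.5 * c**2)

def robust_ar1_fit(t, y, X):
    """t: strictly increasing observation times, y: responses, X: covariate matrix."""
    n, p = X.shape

    def objective(theta):
        beta = theta[:p]
        rho = 1 / (1 + np.exp(-theta[p]))            # keep rho in (0, 1)
        resid = y - X @ beta
        dt = np.diff(t)
        phi = rho ** dt                               # gap-dependent lag-1 correlation
        innov = (resid[1:] - phi * resid[:-1]) / np.sqrt(1 - phi**2)
        return huber(resid[0]).sum() + huber(innov).sum()

    fit = minimize(objective, np.zeros(p + 1), method="Nelder-Mead")
    beta_hat = fit.x[:p]
    rho_hat = 1 / (1 + np.exp(-fit.x[p]))
    return beta_hat, rho_hat
```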

2.
Continuous-time autoregressive processes have been applied successfully in many fields and are particularly advantageous in the modeling of irregularly spaced or high-frequency time series data. A convenient nonlinear extension of this model is the continuous-time threshold autoregression (CTAR). CTAR models allow for greater flexibility in model parameters and can represent regime-switching behavior. However, so far only Gaussian CTAR processes have been defined, so this model class could not be used for data with jumps, as frequently observed in financial applications. Hence, as a novelty, we construct CTAR processes with jumps in this paper. Existence of a unique weak solution and weak consistency of an Euler approximation scheme are proven. As a closed-form expression of the likelihood is not available, we use kernel-based particle filtering for estimation. We fit our model to the Physical Electricity Index and show that it describes the data better than other comparable approaches.
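An Euler approximation of a two-regime continuous-time threshold autoregression with compound-Poisson jumps can be sketched as below; the drift, threshold, and jump distribution are illustrative assumptions, not the specification fitted to the Physical Electricity Index.

```python
# Sketch: Euler scheme for a two-regime CTAR process with compound-Poisson jumps.
import numpy as np

def simulate_ctar_jumps(n_steps=10_000, dt=0.01, r=0.0,
                        a1=-0.5, a2=-2.0, sigma=0.3,
                        jump_rate=0.5, jump_scale=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps + 1)
    for k in range(n_steps):
        drift = a1 * x[k] if x[k] < r else a2 * x[k]   # regime switch at threshold r
        diffusion = sigma * np.sqrt(dt) * rng.standard_normal()
        n_jumps = rng.poisson(jump_rate * dt)          # number of jumps in this step
        jump = rng.normal(0.0, jump_scale, size=n_jumps).sum()
        x[k + 1] = x[k] + drift * dt + diffusion + jump
    return x
```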

3.
Summary.  The purpose of the paper is to propose a frequency domain approach for irregularly spaced data on R^d. We extend the original definition of a periodogram for time series to that for irregularly spaced data and define non-parametric and parametric spectral density estimators in a way that is similar to the classical approach. Introduction of mixed asymptotics, one of the asymptotic frameworks for irregularly spaced data, makes it possible to provide asymptotic theory for the spectral estimators. The asymptotic result for the parametric estimator is regarded as a natural extension of the classical result for regularly spaced data to irregularly spaced data. Empirical studies are also included to illustrate the frequency domain approach in comparison with existing spatial and frequency domain approaches.
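For one-dimensional irregularly spaced data, a periodogram in this spirit can be computed directly from complex exponentials at the observed locations; the normalisation and frequency grid below are illustrative rather than the paper's exact definitions.

```python
# Sketch: periodogram for irregularly spaced one-dimensional data,
# I(w) = |sum_j x(s_j) exp(-i w s_j)|^2 / (2 pi n), evaluated on a frequency grid.
import numpy as np

def irregular_periodogram(s, x, freqs):
    """s: sampling locations, x: observations, freqs: frequencies to evaluate."""
    x = x - x.mean()                      # centre the observations
    n = len(x)
    E = np.exp(-1j * np.outer(freqs, s))  # exponentials at each (frequency, location)
    dft = E @ x
    return np.abs(dft) ** 2 / (2 * np.pi * n)
```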

4.
Formulating the model first in continuous time, we have developed a state space approach to the problem of testing for threshold-type nonlinearity when the data are irregularly spaced.

5.
Pre-election surveys are usually conducted several times to forecast election results before the actual voting. It is common that each survey includes a substantial number of non-responses and that the successive survey results are seen as a stochastic multinomial time series evolving over time. We propose a dynamic Bayesian model to examine how multinomial time series evolve over time for the irregularly observed contingency tables and to determine how sensitively the dynamic structure reacts to an unexpected event, such as a candidate scandal. Further, we test whether non-responses are non-ignorable, to determine whether non-responses need to be imputed for better forecasts. We also suggest a Bayesian method that overcomes the boundary solution problem and show that the proposed method outperforms the previous Bayesian methods. Our dynamic Bayesian model is applied to the two pre-election surveys for the 2007 Korea presidential candidate election and for the 1998 Ohio general election.

6.
Summary.  We propose an adaptive varying-coefficient spatiotemporal model for data that are observed irregularly over space and regularly in time. The model is capable of capturing possible non-linearity (both in space and in time) and non-stationarity (in space) by allowing the auto-regressive coefficients to vary with both spatial location and an unknown index variable. We suggest a two-step procedure to estimate both the coefficient functions and the index variable, which is readily implemented and can be computed even for large spatiotemporal data sets. Our theoretical results indicate that, in the presence of the so-called nugget effect, the errors in the estimation may be reduced via spatial smoothing, the second step in the proposed estimation procedure. The simulation results reinforce this finding. As an illustration, we apply the methodology to a data set of sea level pressure in the North Sea.

7.
This paper considers inference for both spatial lattice data with possibly irregularly shaped sampling region and non‐lattice data, by extending the recently proposed self‐normalization (SN) approach from stationary time series to the spatial setup. A nice feature of the SN method is that it avoids the choice of tuning parameters, which are usually required for other non‐parametric inference approaches. The extension is non‐trivial as spatial data has no natural one‐directional time ordering. The SN‐based inference is convenient to implement and is shown through simulation studies to provide more accurate coverage compared with the widely used subsampling approach. We also illustrate the idea of SN using a real data example.
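For a stationary time series, the self-normalised statistic for the mean can be computed without any bandwidth or block-length choice, as in the sketch below; critical values come from the nonstandard SN limiting distribution (not reproduced here), and the spatial extension developed in the paper is more involved.

```python
# Sketch: self-normalised (SN) statistic for testing the mean of a stationary series.
# The normaliser is built from recursive sample means, so no tuning parameter is needed.
import numpy as np

def sn_statistic(x, mu0):
    n = len(x)
    xbar = x.mean()
    partial_means = np.cumsum(x) / np.arange(1, n + 1)        # recursive means
    D = np.sum((np.arange(1, n + 1) ** 2) * (partial_means - xbar) ** 2) / n**2
    return n * (xbar - mu0) ** 2 / D                           # compare with SN critical values
```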

8.
We focus on regression analysis of irregularly observed longitudinal data, which often occur in medical follow-up studies and observational investigations. The model for such data involves two processes: a longitudinal response process of interest and an observation process controlling the observation times. Restrictive models and questionable assumptions, such as the Poisson assumption and the assumption of independent censoring times, were imposed in previous work on analysing longitudinal data. In this paper, we propose a more general model together with a robust estimation approach for longitudinal data with informative observation times and censoring times, and the asymptotic normality of the proposed estimators is established. Both simulation studies and a real data application indicate that the proposed method is promising.

9.
When data have been collected regularly over time and irregularly over space, it is difficult to impose an explicit auto-regressive structure over space as is done over time. We study a phenomenon at a number of fixed locations. At each location the process forms an auto-regressive time series. The second-order dependence over space is reflected by the covariance matrix of the noise process, which is 'white' in time but not over space. We consider the asymptotic properties of our inference methods when only the number of recordings in time tends to infinity.

10.
Summary.  We introduce a semiparametric approach for modelling the effect of concurrent events on an outcome of interest. Concurrency manifests itself as temporal and spatial dependences. By temporal dependence we mean the effect of an event in the past. Modelling this effect is challenging since events arrive at irregularly spaced time intervals. For the spatial part we use an abstract notion of 'feature space' to conceptualize distances among a set of item features. We motivate our model in the context of on-line auctions by modelling the effect of concurrent auctions on an auction's price. Our concurrency model consists of three components: a transaction-related component that accounts for auction design and bidding competition, a spatial component that takes into account similarity between item features and a temporal component that accounts for recently closed auctions. To construct each of these we borrow ideas from spatial and mixed model methodology. The power of this model is illustrated on a large and diverse set of laptop auctions on eBay.com. We show that our model results in superior predictive performance compared with a set of competitor models. The model also allows for new insight into the factors that drive price in on-line auctions and their relationship to bidding competition, auction design, product variety and temporal learning effects.

11.
Time series data observed at unequal time intervals (irregular data) occur quite often, and this usually poses problems for analysis. A recursive form of the exponentially smoothed estimate is proposed here for a nonlinear model with irregularly observed data, and its asymptotic properties are discussed. An alternative smoother to that of Wright (1985) is also derived. Numerical comparisons are made between the resulting estimates and other smoothed estimates.
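A simplified, gap-adjusted exponential smoother for irregularly observed data can be written as follows; this is an illustrative variant in which the smoothing weight grows with the elapsed gap, not the exact recursion of Wright (1985) or the alternative derived in the paper.

```python
# Sketch: exponential smoothing with a weight that adapts to the time gap
# between consecutive irregularly spaced observations.
import numpy as np

def irregular_exp_smooth(t, y, alpha=0.3):
    """t: increasing observation times, y: observations, alpha: base smoothing weight."""
    level = y[0]
    smoothed = [level]
    for k in range(1, len(y)):
        dt = t[k] - t[k - 1]
        w = 1.0 - (1.0 - alpha) ** dt       # larger gaps give the new point more weight
        level = (1.0 - w) * level + w * y[k]
        smoothed.append(level)
    return np.array(smoothed)
```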

12.
ABSTRACT

We aim at analysing geostatistical and areal data observed over irregularly shaped spatial domains and having a distribution within the exponential family. We propose a generalized additive model that allows us to account for spatially varying covariate information. The model is fitted by maximizing a penalized log-likelihood function, with a roughness penalty term that involves a differential quantity of the spatial field, computed over the domain of interest. Efficient estimation of the spatial field is achieved by resorting to the finite element method, which provides a basis for piecewise polynomial surfaces. The proposed model is illustrated by an application to the study of criminality in the city of Portland, OR, USA.
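The penalised log-likelihood idea can be illustrated in one dimension, with a second-difference roughness penalty standing in for the differential penalty that the paper computes by finite elements over an irregular two-dimensional domain; the Poisson field and penalty weight below are illustrative assumptions.

```python
# Sketch: penalised Poisson log-likelihood for a smooth 1-D intensity field,
# using a second-difference roughness penalty on the log-intensity.
import numpy as np
from scipy.optimize import minimize

def fit_penalized_poisson_field(counts, lam=10.0):
    n = len(counts)
    D = np.diff(np.eye(n), n=2, axis=0)        # second-difference operator (n-2 x n)

    def neg_pen_loglik(f):
        loglik = np.sum(counts * f - np.exp(f))  # Poisson log-likelihood up to a constant
        roughness = np.sum((D @ f) ** 2)
        return -loglik + lam * roughness

    fit = minimize(neg_pen_loglik, x0=np.log(counts + 1.0), method="L-BFGS-B")
    return np.exp(fit.x)                        # fitted intensity at each location
```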

13.
We develop a hierarchical Gaussian process model for forecasting and inference of functional time series data. Unlike existing methods, our approach is especially suited for sparsely or irregularly sampled curves and for curves sampled with nonnegligible measurement error. The latent process is dynamically modeled as a functional autoregression (FAR) with Gaussian process innovations. We propose a fully nonparametric dynamic functional factor model for the dynamic innovation process, with broader applicability and improved computational efficiency over standard Gaussian process models. We prove finite-sample forecasting and interpolation optimality properties of the proposed model, which remain valid with the Gaussian assumption relaxed. An efficient Gibbs sampling algorithm is developed for estimation, inference, and forecasting, with extensions for FAR(p) models with model averaging over the lag p. Extensive simulations demonstrate substantial improvements in forecasting performance and recovery of the autoregressive surface over competing methods, especially under sparse designs. We apply the proposed methods to forecast nominal and real yield curves using daily U.S. data. Real yields are observed more sparsely than nominal yields, yet the proposed methods are highly competitive in both settings. Supplementary materials, including R code and the yield curve data, are available online.
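A single building block of such a model, Gaussian-process smoothing of one sparsely and irregularly sampled noisy curve, can be sketched with standard tools; the kernel and noise level below are illustrative choices, not the hierarchical FAR specification of the paper.

```python
# Sketch: GP smoothing of one sparsely, irregularly sampled noisy curve.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def smooth_curve(obs_points, obs_values, grid):
    """obs_points: observed argument values, obs_values: noisy observations, grid: prediction points."""
    kernel = 1.0 * RBF(length_scale=0.2) + WhiteKernel(noise_level=0.05)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(obs_points.reshape(-1, 1), obs_values)
    mean, sd = gp.predict(grid.reshape(-1, 1), return_std=True)
    return mean, sd                      # posterior mean and pointwise uncertainty
```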

14.
We analyse a flexible parametric estimation technique for a competing risks (CR) model with unobserved heterogeneity, by extending a local mixed proportional hazard single risk model for continuous duration time to a local mixture CR (LMCR) model for discrete duration time. The state-specific local hazard function for the LMCR model is by definition a valid density function if we have either one or two destination states. We conduct Monte Carlo experiments to compare the estimated parameters of the LMCR model, and the estimated parameters of a CR model based on a Heckman–Singer-type (HS-type) technique, with the data-generating process parameters. The Monte Carlo results show that the LMCR model performs better than or at least as well as the HS-type model with respect to the estimated structural parameters in most cases, but relatively poorly with respect to the estimated duration-dependence parameters.

15.
Universal kriging is a form of interpolation that takes into account the local trends in data when minimizing the error associated with the estimator. Under multivariate normality assumptions, the given predictor is the best linear unbiased predictor, but if the underlying distribution is not normal, the estimator will not be unbiased and will be vulnerable to outliers. With spatial data, it is not only the presence of outliers that may spoil the predictions, but also the boundary sites, usually corners, that tend to have high leverage. As an alternative, a weighted one-step generalized M estimator of the location parameters in a spatial linear model is proposed. It is especially recommended in the case of irregularly spaced data.
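A plain universal kriging predictor with a linear spatial trend can be sketched as below (the exponential covariance and its parameters are assumptions for illustration); the weighted one-step GM robustification proposed in the paper is not shown.

```python
# Sketch: universal kriging with a linear spatial trend and an exponential covariance.
import numpy as np

def universal_krige(coords, z, coords_new, range_=1.0, sill=1.0, nugget=1e-6):
    def cov(d):
        return sill * np.exp(-d / range_)

    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = cov(d) + nugget * np.eye(len(z))
    F = np.column_stack([np.ones(len(z)), coords])      # trend basis: 1, coordinates
    n, p = F.shape
    # kriging system with unbiasedness constraints on the trend basis
    A = np.block([[C, F], [F.T, np.zeros((p, p))]])
    preds = []
    for s0 in coords_new:
        d0 = np.linalg.norm(coords - s0, axis=1)
        rhs = np.concatenate([cov(d0), np.concatenate([[1.0], s0])])
        sol = np.linalg.solve(A, rhs)
        preds.append(sol[:n] @ z)                        # kriging weights times data
    return np.array(preds)
```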

16.
Motivated by a specific problem concerning the relationship between radar reflectance and rainfall intensity, the paper develops a space–time model for use in environmental monitoring applications. The model is cast as a high dimensional multivariate state space time series model, in which the cross-covariance structure is derived from the spatial context of the component series, in such a way that its interpretation is essentially independent of the particular set of spatial locations at which the data are recorded. We develop algorithms for estimating the parameters of the model by maximum likelihood, and for making spatial predictions of the radar calibration parameters using real-time computations. We apply the model to data from a weather radar station in Lancashire, England, and demonstrate through empirical validation the predictive performance of the model.

17.
The most natural parametric distribution to consider is the Weibull model because it allows for both the proportional hazards model and the accelerated failure time model. In this paper, we propose a new bivariate Weibull regression model based on censored samples with common covariates. There are some interesting biometrical applications which motivate the study of the bivariate Weibull regression model in this particular situation. We obtain maximum likelihood estimators for the parameters in the model and test the significance of the regression parameters. We present a simulation study based on 1000 samples and also obtain the power of the test statistics.

18.
We propose a state-space approach for GARCH models with time-varying parameters that is able to deal with the non-stationarity usually observed in a wide variety of time series. The parameters of the non-stationary model are allowed to vary smoothly over time through non-negative deterministic functions. We implement the estimation of the time-varying parameters in the time domain through Kalman filter recursive equations, finding a state-space representation of a class of time-varying GARCH models. We provide prediction intervals for time-varying GARCH models and, additionally, we propose a simple methodology for handling missing values. Finally, the proposed methodology is applied to the Chilean Stock Market (IPSA) and to the American Standard & Poor's 500 index (S&P 500).
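As a baseline for comparison, a constant-parameter GARCH(1,1) model can be fitted by Gaussian quasi-maximum likelihood as sketched below; the paper's model instead lets the coefficients vary smoothly over time and estimates them through Kalman filter recursions.

```python
# Sketch: constant-parameter GARCH(1,1) fitted by Gaussian quasi-maximum likelihood.
import numpy as np
from scipy.optimize import minimize

def garch11_negloglik(params, r):
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf                                  # enforce positivity and stationarity
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r**2 / sigma2)

def fit_garch11(r):
    fit = minimize(garch11_negloglik, x0=np.array([0.1 * r.var(), 0.05, 0.90]),
                   args=(r,), method="Nelder-Mead")
    return fit.x                                        # omega, alpha, beta
```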

19.
In this paper we discuss recursive (or on-line) estimation in (i) regression and (ii) autoregressive integrated moving average (ARIMA) time series models. The adopted approach uses Kalman filtering techniques to calculate estimates recursively. This approach is used for the estimation of constant as well as time-varying parameters. In the first section of the paper we consider the linear regression model. We discuss recursive estimation both for constant and time-varying parameters. For constant parameters, Kalman filtering specializes to recursive least squares. In general, we allow the parameters to vary according to an autoregressive integrated moving average process and update the parameter estimates recursively. Since the stochastic model for the parameter changes will rarely be known, simplifying assumptions have to be made. In particular, we assume a random walk model for the time-varying parameters and show how to determine whether the parameters are changing over time. This is illustrated with an example.
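The constant-parameter case, recursive least squares as a Kalman filter with a static state, can be sketched as follows; the commented line indicates where a state-noise covariance would enter for the random-walk, time-varying case. The variable names and prior variance are illustrative.

```python
# Sketch: recursive least squares as a Kalman filter with a static coefficient state.
import numpy as np

def recursive_least_squares(X, y, sigma2=1.0, prior_var=1e6):
    n, p = X.shape
    beta = np.zeros(p)
    P = prior_var * np.eye(p)                   # coefficient covariance (diffuse prior)
    path = np.empty((n, p))
    for t in range(n):
        x = X[t]
        # P = P + Q   # with a small Q this becomes the random-walk (time-varying) case
        k = P @ x / (x @ P @ x + sigma2)        # Kalman gain
        beta = beta + k * (y[t] - x @ beta)     # update with the one-step prediction error
        P = P - np.outer(k, x) @ P
        path[t] = beta
    return beta, path                            # final estimate and its recursive path
```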

20.
The theoretical price of a financial option is given by the expectation of its discounted expiry time payoff. The computation of this expectation depends on the density of the value of the underlying instrument at expiry time. This density depends on both the parametric model assumed for the behaviour of the underlying and the values of parameters within the model, such as volatility. However, neither the model nor the parameter values are known. Common practice when pricing options is to assume a specific model, such as geometric Brownian motion, and to use point estimates of the model parameters, thereby precisely defining a density function. We explicitly acknowledge the uncertainty of model and parameters by constructing the predictive density of the underlying as an average of model predictive densities, weighted by each model's posterior probability. A model's predictive density is constructed by integrating its transition density function against the posterior distribution of its parameters. This is an extension of Bayesian model averaging. Sampling importance-resampling and Monte Carlo algorithms implement the computation. The advantage of this method is that rather than falsely assuming the model and parameter values are known, inherent ignorance is acknowledged and dealt with in a mathematically logical manner, which utilises all information from past and current observations to generate and update option prices. Moreover, point estimates for parameters are unnecessary. We use this method to price a European call option on a share index.
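The parameter-averaging part of the computation can be sketched for a European call under geometric Brownian motion: Monte Carlo prices are averaged over a posterior sample of the volatility. Averaging over competing models by their posterior probabilities, as described above, is not shown, and all inputs below are illustrative.

```python
# Sketch: European call price averaged over a posterior sample of the volatility,
# with terminal prices simulated under geometric Brownian motion.
import numpy as np

def call_price_param_averaged(S0, K, r, T, sigma_posterior_draws,
                              n_paths=20_000, seed=0):
    rng = np.random.default_rng(seed)
    prices = []
    for sigma in sigma_posterior_draws:
        z = rng.standard_normal(n_paths)
        ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
        payoff = np.maximum(ST - K, 0.0)
        prices.append(np.exp(-r * T) * payoff.mean())    # discounted expected payoff
    return float(np.mean(prices))                         # parameter-averaged price
```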
