Similar Articles (20 results)
1.
Many wavelet shrinkage methods assume that the data are observed on an equally spaced grid of length 2^J for some J. These methods require serious modification, or pre-processed data, to cope with irregularly spaced data. The lifting scheme is a recent mathematical innovation that obtains a multiscale analysis for irregularly spaced data. A key lifting component is the "predict" step, where a prediction of a data point is made. The residual from the prediction is stored and can be thought of as a wavelet coefficient. This article exploits the flexibility of lifting by adaptively choosing the kind of prediction according to a criterion. In this way the smoothness of the underlying 'wavelet' can be adapted to the local properties of the function. Multiple observations at a point can readily be handled by lifting through a suitable choice of prediction. We adapt existing shrinkage rules to work with our adaptive lifting methods. We use simulation to demonstrate the improved sparsity of our techniques and improved regression performance when compared with both wavelet and non-wavelet methods suitable for irregular data. We also exhibit the benefits of adaptive lifting on real inductance plethysmography and motorcycle data.
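As a rough illustration of the "predict" step described above, the sketch below removes one point from an irregular grid, predicts its value from its two neighbours, and adaptively chooses between a constant and a linear predictor. The selection criterion (smallest residual) is a toy one of our own, not the paper's adaptive rule:

```python
import numpy as np

def lifting_predict_step(x, y, i):
    """One 'predict' step of a lifting scheme on irregularly spaced data.

    Removes point i, predicts its value from its two neighbours, and
    adaptively picks between a constant and a linear predictor (toy
    criterion: the one with the smaller residual). Returns the detail
    (wavelet-like) coefficient and the scheme that was chosen."""
    xl, xr = x[i - 1], x[i + 1]
    yl, yr = y[i - 1], y[i + 1]
    # Constant prediction: simple average of the two neighbours.
    pred_const = 0.5 * (yl + yr)
    # Linear prediction: interpolate the neighbours at x[i],
    # weighting by the irregular spacing.
    w = (x[i] - xl) / (xr - xl)
    pred_lin = (1 - w) * yl + w * yr
    residuals = {"constant": y[i] - pred_const, "linear": y[i] - pred_lin}
    scheme = min(residuals, key=lambda k: abs(residuals[k]))
    return residuals[scheme], scheme

# Irregular grid; the signal is exactly linear, so the linear predictor
# should win with a near-zero detail coefficient.
x = np.array([0.0, 0.3, 1.0])
y = 2.0 * x + 1.0
detail, scheme = lifting_predict_step(x, y, 1)
```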

2.
Summary.  The purpose of the paper is to propose a frequency domain approach for irregularly spaced data on R^d. We extend the original definition of the periodogram for time series to irregularly spaced data and define non-parametric and parametric spectral density estimators in a way that parallels the classical approach. Introducing the mixed asymptotics, one asymptotic framework for irregularly spaced data, makes it possible to provide asymptotic theory for the spectral estimators. The asymptotic result for the parametric estimator is a natural extension of the classical result for regularly spaced data to irregularly spaced data. Empirical studies are also included to illustrate the frequency domain approach in comparison with existing spatial and frequency domain approaches.
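The idea of extending the periodogram to irregular sampling can be illustrated with a naive direct estimator, evaluated at arbitrary frequencies. This is an illustrative definition of our own, not necessarily the paper's exact construction:

```python
import numpy as np

def irregular_periodogram(t, y, freqs):
    """Naive periodogram for irregularly spaced samples:
    I(w) = |sum_j (y_j - ybar) * exp(-i * w * t_j)|^2 / n.
    Illustrative only; normalisation conventions vary."""
    yc = y - y.mean()
    # Complex exponentials for every (frequency, sample-time) pair.
    phases = np.exp(-1j * np.outer(freqs, t))
    return np.abs(phases @ yc) ** 2 / len(t)

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 400))            # irregular sampling times
y = np.sin(2.0 * t) + 0.1 * rng.standard_normal(400)
freqs = np.linspace(0.5, 4.0, 200)
I = irregular_periodogram(t, y, freqs)
peak = freqs[np.argmax(I)]   # should lie near the true angular frequency 2.0
```

Despite the irregular grid, the peak recovers the angular frequency of the sinusoid, which is the kind of simple diagnostic such an estimator enables.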

3.
In this work we propose an autoregressive model with parameters varying in time, applied to irregularly spaced non-stationary time series. We expand all the functional parameters in a wavelet basis and estimate the coefficients by least squares after truncation at a suitable resolution level. We also present simulations to evaluate both the estimation method and the model's behaviour on finite samples. Applications to irregularly observed silicate and nitrite data are provided as well.

4.
ABSTRACT The analysis of a set of data consisting of N short (≤20 observations each) multivariate time series, where the observations are irregularly spaced and where observations for the different components of each multivariate series are made at different times, is discussed. With the increased use of automatic recording devices in many fields, data such as these, which are of course samples from smooth response curves, are becoming more common. In this application, a clinical trial comparing two cements for use in hip replacement surgery, the key to the analysis was recognizing that the interest lay in the degree to which the five curves representing a patient's vital signs deviated from baseline (i.e., normal for that patient) during surgery. This enabled the statisticians to define appropriate response variables. The analysis included Rousseeuw's (1984) technique for the identification of multivariate outliers and logistic regressions to identify any effects of treatment or covariates on the process producing the outliers.

5.
Summary.  Likelihood methods are often difficult to use with large, irregularly sited spatial data sets, owing to the computational burden. Even for Gaussian models, exact calculation of the likelihood for n observations requires O(n^3) operations. Since any joint density can be written as a product of conditional densities based on some ordering of the observations, one way to lessen the computations is to condition on only some of the 'past' observations when computing the conditional densities. We show how this approach can be adapted to approximate the restricted likelihood, and we demonstrate how an estimating equations approach allows us to judge the efficacy of the resulting approximation. Previous work has suggested conditioning on those past observations that are closest to the observation whose conditional density is being approximated. Through theoretical, numerical and practical examples, we show that there can often be considerable benefit in conditioning on some distant observations as well.
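The conditioning idea can be sketched as a Vecchia-type approximation of the Gaussian log-likelihood. The exponential covariance, the 1-D locations, and the nearest-past selection rule below are illustrative choices of our own; with full conditioning the chain-rule product recovers the exact likelihood:

```python
import numpy as np
from scipy.stats import multivariate_normal

def exp_cov(locs, scale=1.0):
    """Exponential covariance on 1-D locations (an illustrative choice)."""
    d = np.abs(locs[:, None] - locs[None, :])
    return np.exp(-d / scale)

def vecchia_loglik(y, locs, m):
    """Approximate Gaussian log-likelihood: condition each observation on
    (at most) its m nearest 'past' observations in the given ordering."""
    n = len(y)
    C = exp_cov(locs)
    ll = -0.5 * (np.log(2 * np.pi * C[0, 0]) + y[0] ** 2 / C[0, 0])
    for i in range(1, n):
        # Indices of the m past observations closest in space to point i.
        past = np.argsort(np.abs(locs[:i] - locs[i]))[:m]
        Cpp = C[np.ix_(past, past)]
        cip = C[i, past]
        w = np.linalg.solve(Cpp, cip)
        mu = w @ y[past]                 # conditional mean
        var = C[i, i] - cip @ w          # conditional variance
        ll += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mu) ** 2 / var)
    return ll

rng = np.random.default_rng(1)
locs = np.sort(rng.uniform(0, 10, 8))
C = exp_cov(locs)
y = np.linalg.cholesky(C) @ rng.standard_normal(8)   # draw from the model
exact = multivariate_normal(mean=np.zeros(8), cov=C).logpdf(y)
approx = vecchia_loglik(y, locs, m=2)                # cheap approximation
```

Conditioning on all past observations (m = n - 1) reproduces the exact log-likelihood, which gives a simple sanity check on the implementation.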

6.
When data have been collected regularly over time but irregularly over space, it is difficult to impose an explicit autoregressive structure over space as one does over time. We study a phenomenon on a number of fixed locations. At each location the process forms an autoregressive time series. The second-order dependence over space is reflected by the covariance matrix of the noise process, which is 'white' in time but not over space. We consider the asymptotic properties of our inference methods when only the number of recordings in time tends to infinity.

7.
This paper describes inference methods for functional data under the assumption that the functional data of interest are smooth latent functions, characterized by a Gaussian process, that have been observed with noise over a finite set of time points. The methods we propose are completely specified in a Bayesian framework that allows all inferences to be performed through a simple Gibbs sampler. Our main focus is on estimating and describing uncertainty in the covariance function. However, these models also encompass functional data estimation, functional regression where the predictors are latent functions, and an automatic approach to smoothing parameter selection. Furthermore, these models require minimal assumptions on the data structure: the time points for observations need not be equally spaced, the number and placement of observations are allowed to vary among functions, and no special treatment is required when the number of functional observations is less than the dimensionality of those observations. We illustrate the effectiveness of these models in estimating latent functional data, capturing variation in the functional covariance estimate, and selecting appropriate smoothing parameters in both a simulation study and a regression analysis of medfly fertility data.
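At the core of such models is a conditional Gaussian calculation at arbitrary, not necessarily equally spaced, time points. A minimal Gaussian-process regression sketch follows; the squared-exponential kernel, length-scale, and noise level are assumptions of ours, and this is not the paper's Gibbs sampler:

```python
import numpy as np

def sqexp(a, b, ell=1.0):
    """Squared-exponential covariance between two sets of time points
    (an illustrative kernel choice)."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def gp_posterior(t_obs, y, t_new, noise_var=0.1):
    """Posterior mean and variance of a zero-mean GP observed with noise
    at arbitrary time points."""
    K = sqexp(t_obs, t_obs) + noise_var * np.eye(len(t_obs))
    Ks = sqexp(t_new, t_obs)
    mean = Ks @ np.linalg.solve(K, y)
    cov = sqexp(t_new, t_new) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

t_obs = np.array([0.0, 0.7, 1.1, 2.5, 4.0])   # irregular observation times
y = np.sin(t_obs)                             # noiselessly observed here
mean, var = gp_posterior(t_obs, y, np.array([1.8]))
```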

8.
We propose an efficient and robust method for variance function estimation in semiparametric longitudinal data analysis. The method utilizes a local log-linear approximation for the variance function and adopts a generalized estimating equation approach to account for within-subject correlations. We show theoretically and empirically that our method outperforms estimators that assume working independence and thus ignore the correlations. The Canadian Journal of Statistics 39: 656–670; 2011. © 2011 Statistical Society of Canada

9.
Continuous-time autoregressive processes have been applied successfully in many fields and are particularly advantageous in the modeling of irregularly spaced or high-frequency time series data. A convenient nonlinear extension of this model is the continuous-time threshold autoregression (CTAR). CTAR models allow for greater flexibility in model parameters and can represent regime-switching behavior. However, so far only Gaussian CTAR processes have been defined, so this model class could not be used for data with jumps, as frequently observed in financial applications. Hence, as a novelty, we construct CTAR processes with jumps in this paper. Existence of a unique weak solution and weak consistency of an Euler approximation scheme are proven. As a closed-form expression for the likelihood is not available, we use kernel-based particle filtering for estimation. We fit our model to the Physical Electricity Index and show that it describes the data better than other comparable approaches.
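An Euler approximation of a two-regime threshold diffusion with jumps might look as follows. The parameter values, the mean-reverting drift form, and the compound-Poisson jump mechanism are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def euler_ctar_jumps(n_steps, dt, theta=(-1.0, -3.0), r=0.0, sigma=0.5,
                     jump_rate=0.2, jump_scale=1.0, seed=0):
    """Euler scheme for a two-regime continuous-time threshold AR with
    compound-Poisson jumps:
        dX_t = theta[regime(X_t)] * X_t dt + sigma dW_t + dJ_t,
    regime 0 below the threshold r and regime 1 above it.
    Illustrative sketch; parameter names are our own."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = 0.1
    for k in range(n_steps):
        drift = theta[0] * x[k] if x[k] < r else theta[1] * x[k]
        jump = 0.0
        # Approximate the Poisson jump times: at most one jump per step.
        if rng.random() < jump_rate * dt:
            jump = rng.normal(0.0, jump_scale)
        x[k + 1] = (x[k] + drift * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal() + jump)
    return x

path = euler_ctar_jumps(n_steps=5000, dt=0.01)
```

Both regimes are mean-reverting here, so the simulated path stays bounded while still exhibiting occasional jump discontinuities.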

10.
In this article, we propose a general framework for performance evaluation of organizations and individuals over time using routinely collected performance variables or indicators. Such variables or indicators are often correlated over time, contain missing observations, and often come from heavy-tailed distributions shaped by outliers. Two new doubly robust and model-free strategies are used for the evaluation (ranking) of sampling units. Strategy 1 can handle missing data using residual maximum likelihood (RML) at stage two, while Strategy 2 handles missing data at stage one. Strategy 2 has the advantage of overcoming the problem of multicollinearity; Strategy 1 requires independent indicators for the construction of the distances, whereas Strategy 2 does not. Examples from two different domains illustrate the application of the strategies: the first considers performance monitoring of gynecologists, and the second the performance of industrial firms.

11.
Using a time series model entails estimating the model parameters and dispersion. Classical estimators for autocorrelated observations are sensitive to the presence of different types of outliers, leading to biased estimation and misinterpretation. It is therefore important to have robust parameter estimation methods that are not influenced by contamination. In this article, an estimation method called Iteratively Robust Filtered Fast-τ (IRFFT) is proposed for general autoregressive models. Compared with other commonly accepted methods, this method is more efficient and less sensitive to contamination, owing to its desirable robustness properties, as demonstrated using MSE, influence function, and breakdown point criteria.

12.
We modify Ramsay's algorithm for estimating monotonic transformations in regression and extend it to autoregression, where strict monotonicity is an essential requirement. Compared with other methods, our method can capture characteristics that are pertinent to the time series and is much easier to implement. An order selection method is also developed. Some real data sets are analysed.

13.
Summary.  Statistical agencies make changes to the data collection methodology of their surveys to improve the quality of the data collected or the efficiency with which they are collected. For reasons of cost it may not be possible to estimate the effect of such a change on survey estimates or response rates reliably without conducting an experiment embedded in the survey, in which some respondents are enumerated under the new method and some under the existing method. Embedded experiments are often designed for repeated and overlapping surveys; however, previous methods use sample data from only one occasion. The paper focuses on estimating the effect of a methodological change on estimates in the case of repeated surveys with overlapping samples from several occasions. Efficient design of an embedded experiment that covers more than one time point is also discussed. All inference is unbiased over an assumed measurement model, the experimental design and the complex sample design. Other benefits of the proposed approach include the following: it exploits the correlation between the samples on each occasion to improve estimates of treatment effects; treatment effects are allowed to vary over time; it is robust against incorrectly rejecting the null hypothesis of no treatment effect; and it allows a wide set of alternative experimental designs. The paper applies the proposed methodology to the Australian Labour Force Survey to measure the effect of replacing pen-and-paper interviewing with computer-assisted interviewing. This application considered alternative experimental designs in terms of their statistical efficiency and their risks to maintaining a consistent series. The proposed approach is significantly more efficient than using only one month of sample data in estimation.

14.
Frequently in process monitoring, situations arise in which the order in which events occur cannot be distinguished, motivating the need to accommodate multiple observations occurring at the same time, or concurrent observations. The risk-adjusted Bernoulli cumulative sum (CUSUM) control chart can be used to monitor the rate of an adverse event by fitting a risk-adjustment model, followed by a likelihood-ratio-based scoring method that produces a statistic that can be monitored. In this paper, we develop a risk-adjusted Bernoulli CUSUM control chart for concurrent observations. Furthermore, we adopt a novel approach that combines a mixture model with kernel density estimation to perform risk adjustment with respect to spatial location. Our proposed method allows binary outcomes to be monitored through time with multiple observations at each time point, where the chart is spatially adjusted for each Bernoulli observation's estimated probability of the adverse event. A simulation study is presented to assess the performance of the proposed monitoring scheme. We apply our method to data from Wayne County, Michigan between 2005 and 2014 to monitor the rate of foreclosure as a percentage of all housing transactions.
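The basic Bernoulli CUSUM scoring can be sketched with Steiner-type log-likelihood-ratio scores for an odds-ratio shift. The threshold h, the odds ratio under test, and the simulated risks are assumptions of ours, and no concurrency handling or spatial adjustment is attempted here:

```python
import numpy as np

def bernoulli_cusum(y, p, odds_ratio=2.0, h=4.0):
    """Risk-adjusted Bernoulli CUSUM with log-likelihood-ratio scores.
    y: 0/1 outcomes; p: each observation's risk-adjusted event probability.
    Signals at the first time the statistic exceeds h (h=4 is illustrative);
    returns -1 for 'no signal'."""
    c, stats = 0.0, []
    for yi, pi in zip(y, p):
        # Score for testing odds ratio R against R = 1:
        # w = y*log(R) - log(1 - p + R*p).
        w = yi * np.log(odds_ratio) - np.log(1 - pi + odds_ratio * pi)
        c = max(0.0, c + w)
        stats.append(c)
    stats = np.array(stats)
    signal = int(np.argmax(stats > h)) if (stats > h).any() else -1
    return stats, signal

rng = np.random.default_rng(2)
p = np.full(300, 0.1)                    # constant baseline risk, for simplicity
true_rate = np.where(np.arange(300) < 150, 0.1, 0.25)   # rate rises at t = 150
y = rng.random(300) < true_rate
stats, signal = bernoulli_cusum(y, p)
```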

15.
Factor analysis of multivariate spatial data is considered. A systematic approach for modeling the underlying structure of potentially irregularly spaced, geo-referenced vector observations is proposed. Statistical inference procedures for selecting the number of factors and for model building are discussed. We derive a condition under which a simple and practical inference procedure is valid without specifying the form of distributions and factor covariance functions. The multivariate prediction problem is also discussed, and a procedure combining the latent variable modeling and a measurement-error-free kriging technique is introduced. Simulation results and an example using agricultural data are presented.

16.
The purpose of the paper is to propose an autocorrelogram estimation procedure for irregularly spaced data which are modelled as subordinated continuous-time processes. Such processes, also called time-deformed stochastic processes, have been proposed in a variety of contexts. Before entertaining the possibility of modelling such time series, one is interested in examining simple diagnostics and data summaries. With continuous-time processes this is a challenging task, which can be accomplished via kernel estimation. The paper develops the conceptual framework, the estimation procedure and its asymptotic properties. An illustrative empirical example is also provided.
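A kernel-based autocorrelogram for irregular sampling times can be sketched as a Nadaraya-Watson-style average of products of centred observations whose time separation falls near each lag. This is an estimator of our own devising for illustration, not the paper's exact construction:

```python
import numpy as np

def kernel_autocorrelogram(t, y, lags, bandwidth=0.5):
    """Kernel-smoothed autocorrelogram for irregularly spaced data:
    at lag h, a weighted average of y_i * y_j (centred) over pairs whose
    separation t_j - t_i is near h, with a Gaussian kernel."""
    yc = y - y.mean()
    dt = t[None, :] - t[:, None]          # all pairwise time separations
    prods = np.outer(yc, yc)
    out = []
    for h in lags:
        w = np.exp(-0.5 * ((dt - h) / bandwidth) ** 2)
        out.append((w * prods).sum() / w.sum())
    return np.array(out) / yc.var()       # normalise to a correlation scale

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 200, 500))
y = np.sin(0.5 * t) + 0.2 * rng.standard_normal(500)   # strong periodicity
# Evaluate at lag 0 and at one full period of the signal (4*pi).
acf = kernel_autocorrelogram(t, y, lags=np.array([0.0, 2 * np.pi / 0.5]))
```

The estimate is near 1 at lag 0 and remains high at one full period, recovering the periodic dependence despite the irregular grid.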

17.
Markov regression models are useful tools for estimating the impact of risk factors on rates of transition between multiple disease states. Alzheimer's disease (AD) is an example of a multi-state disease process in which great interest lies in identifying risk factors for transition. In this context, non-homogeneous models are required because transition rates change as subjects age. In this report we propose a non-homogeneous Markov regression model that allows for reversible and recurrent disease states, transitions among multiple states between observations, and unequally spaced observation times. We conducted simulation studies to demonstrate performance of estimators for covariate effects from this model and compare performance with alternative models when the underlying non-homogeneous process was correctly specified and under model misspecification. In simulation studies, we found that covariate effects were biased if non-homogeneity of the disease process was not accounted for. However, estimates from non-homogeneous models were robust to misspecification of the form of the non-homogeneity. We used our model to estimate risk factors for transition to mild cognitive impairment (MCI) and AD in a longitudinal study of subjects included in the National Alzheimer's Coordinating Center's Uniform Data Set. Using our model, we found that subjects with MCI affecting multiple cognitive domains were significantly less likely to revert to normal cognition.

18.
This paper presents variance extraction procedures for univariate time series. The volatility of a time series is monitored, allowing for non-linearities, jumps and outliers in the level. The volatility is measured using the heights of triangles formed by consecutive observations of the time series. This idea was proposed by Rousseeuw and Hubert [1996. Regression-free and robust estimation of scale for bivariate data. Comput. Statist. Data Anal. 21, 67–85] in the bivariate setting. This paper extends their procedure to online scale estimation in time series analysis. The statistical properties of the new methods are derived and finite-sample properties are given. A financial and a medical application illustrate the use of the procedures.
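A simplified variant of the triangle-height idea can be sketched as follows, using vertical rather than perpendicular heights and a running-window median. The consistency factor 1.21 is our own approximation, assuming i.i.d. Gaussian noise (the height y_i - (y_{i-1} + y_{i+1})/2 has standard deviation sigma * sqrt(1.5), and the median of its absolute value is about 0.826 * sigma):

```python
import numpy as np

def triangle_scale(y, window=21):
    """Running scale estimate from the heights of triangles formed by three
    consecutive observations (simplified Rousseeuw-Hubert-style variant).
    Heights are robust to level shifts and linear trends, which cancel."""
    # Vertical height of the middle point above the chord of its neighbours.
    heights = np.abs(y[1:-1] - 0.5 * (y[:-2] + y[2:]))
    est = []
    for k in range(window, len(heights)):
        # Median over a trailing window; 1.21 ~= 1 / (sqrt(1.5) * 0.6745)
        # makes the estimate consistent for Gaussian noise (assumption).
        est.append(1.21 * np.median(heights[k - window:k]))
    return np.array(est)

rng = np.random.default_rng(4)
y = 0.05 * np.arange(500) + rng.standard_normal(500)  # trend + unit-scale noise
est = triangle_scale(y)   # should hover around 1, unaffected by the trend
```

Because each height differences out local linear behaviour, the estimate tracks the noise scale even through trends, which is what makes the construction attractive for online monitoring.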

19.
Standard unit-root and cointegration tests are sensitive to atypical events such as outliers and structural breaks. In this article, we use outlier-robust estimation techniques to examine the impact of these events on cointegration analysis. Our outlier-robust cointegration test provides a new diagnostic tool for signaling when standard cointegration results might be driven by a few aberrant observations. A main feature of our approach is that the proposed robust estimator can be used to compute weights for all observations, which in turn can be used to identify the approximate dates of atypical events. We evaluate our method using simulated data and a Monte Carlo experiment. We also present an empirical example showing the usefulness of the proposed analysis.

20.
Formulating the model first in continuous time, we have developed a state space approach to the problem of testing for threshold-type nonlinearity when the data are irregularly spaced.
