Similar Articles (20 results)
1.
It is important to study historical temperature time series prior to the industrial revolution so that one can view the current global warming trend from a long‐term historical perspective. Because there are no instrumental records of such historical temperature data, climatologists have been interested in reconstructing historical temperatures using various proxy time series. In this paper, the authors examine a state‐space model approach for historical temperature reconstruction which makes use not only of the proxy data but also of information on external forcings. A challenge in the implementation of this approach is the estimation of the parameters in the state‐space model. The authors develop two maximum likelihood methods for parameter estimation and study the efficiency and asymptotic properties of the associated estimators through a combination of theoretical and numerical investigations. The Canadian Journal of Statistics 38: 488–505; 2010 © 2010 Crown in the right of Canada
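Parameter estimation in such a model amounts to maximizing a Kalman-filter likelihood. As a minimal illustration (not the authors' model, which also brings in external forcings), the sketch below computes the prediction-error-decomposition log-likelihood of a scalar linear-Gaussian state-space model in which a latent AR(1) temperature is observed through a noisy proxy; all parameter names are hypothetical.

```python
import numpy as np

def kalman_loglik(y, phi, q, r):
    """Log-likelihood of the scalar linear-Gaussian state-space model
        x_t = phi * x_{t-1} + w_t,  w_t ~ N(0, q)   (latent temperature)
        y_t = x_t + v_t,            v_t ~ N(0, r)   (proxy observation)
    via the Kalman filter's prediction-error decomposition."""
    # start the filter from the stationary distribution of the state
    x, p, ll = 0.0, q / max(1e-12, 1 - phi ** 2), 0.0
    for yt in y:
        # one-step prediction of state mean and variance
        x_pred, p_pred = phi * x, phi ** 2 * p + q
        # innovation and its variance
        s = p_pred + r
        e = yt - x_pred
        ll += -0.5 * (np.log(2 * np.pi * s) + e ** 2 / s)
        # measurement update
        k = p_pred / s
        x, p = x_pred + k * e, (1 - k) * p_pred
    return ll
```

Maximizing `kalman_loglik` over `(phi, q, r)` — by grid search or a numerical optimizer — gives maximum likelihood estimates of the state-space parameters.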

2.
Various test statistics are discussed which can be used for detecting changes in the parameters of an autoregressive time series. In this first part of our study, the limiting behavior of the test statistics is derived under the null hypothesis of no change as well as under alternatives. In a forthcoming second part of our investigation, these asymptotic results will be compared to some corresponding bootstrap procedures, and a small simulation study will be conducted.

3.
We study an autoregressive time series model with a possible change in the regression parameters. Approximations to the critical values for change-point tests are obtained through various bootstrapping methods. Theoretical results show that the bootstrapping procedures have the same limiting behavior as their asymptotic counterparts discussed in Hušková et al. [2007. On the detection of changes in autoregressive time series, I. Asymptotics. J. Statist. Plann. Inference 137, 1243–1259]. In fact, a small simulation study illustrates that the bootstrap tests behave better than the original asymptotic tests if performance is measured by the α- and β-errors, respectively.
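A minimal sketch of the idea, assuming a zero-mean AR(1) model and a CUSUM-type statistic (the paper's statistics and bootstrap schemes are more general): a residual bootstrap re-generates series under the fitted no-change model and compares the observed statistic with its bootstrap distribution. All function names here are illustrative.

```python
import numpy as np

def fit_ar1(x):
    """OLS fit of the zero-mean AR(1) model x_t = rho * x_{t-1} + e_t."""
    y, z = x[1:], x[:-1]
    rho = z @ y / (z @ z)
    return rho, y - rho * z, z

def change_stat(res, z):
    """Standardized max-CUSUM of w_t = x_{t-1} * e_t, which drifts when
    the autoregressive parameter changes part-way through the sample."""
    w = z * res
    s = np.cumsum(w - w.mean())
    return np.max(np.abs(s)) / (w.std(ddof=1) * np.sqrt(len(w)))

def bootstrap_pvalue(x, n_boot=300, seed=0):
    """Residual-bootstrap p-value for H0: no change in the AR parameter."""
    rng = np.random.default_rng(seed)
    rho, res, z = fit_ar1(x)
    t_obs = change_stat(res, z)
    centred = res - res.mean()
    exceed = 0
    for _ in range(n_boot):
        # regenerate a no-change series from resampled centred residuals
        e = rng.choice(centred, size=len(x))
        xb = np.empty(len(x))
        xb[0] = x[0]
        for t in range(1, len(x)):
            xb[t] = rho * xb[t - 1] + e[t]
        exceed += change_stat(*fit_ar1(xb)[1:]) >= t_obs
    return exceed / n_boot
```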

4.
5.
6.
Tree‐based methods are frequently used in studies with censored survival time. Their structure and ease of interpretability make them useful to identify prognostic factors and to predict conditional survival probabilities given an individual's covariates. The existing methods are tailor‐made to deal with a survival time variable that is measured continuously. However, survival variables measured on a discrete scale are often encountered in practice. The authors propose a new tree construction method specifically adapted to such discrete‐time survival variables. The splitting procedure can be seen as an extension, to the case of right‐censored data, of the entropy criterion for a categorical outcome. The selection of the final tree is made through a pruning algorithm combined with a bootstrap correction. The authors also present a simple way of potentially improving the predictive performance of a single tree through bagging. A simulation study shows that single trees and bagged trees perform well compared to a parametric model. A real data example investigating the usefulness of personality dimensions in predicting early onset of cigarette smoking is presented. The Canadian Journal of Statistics 37: 17‐32; 2009 © 2009 Statistical Society of Canada

7.
In this article, we address the testing problem for additivity in nonparametric regression models. We develop a kernel‐based consistent test of the hypothesis of additivity in nonparametric regression, and establish its asymptotic distribution under a sequence of local alternatives. Compared with other existing kernel‐based tests, the proposed test is shown to effectively mitigate the influence of the estimation bias of the additive components of the nonparametric regression, and hence to increase efficiency. Most importantly, it avoids tuning difficulties by using estimation‐based optimality criteria, whereas other existing kernel‐based testing methods offer no direct tuning strategy. We discuss the usage of the new test and give numerical examples to demonstrate its practical performance. The Canadian Journal of Statistics 39: 632–655; 2011. © 2011 Statistical Society of Canada

8.
There are several levels of sophistication when specifying the bandwidth matrix H to be used in a multivariate kernel density estimator: H may be a positive multiple of the identity matrix, a diagonal matrix with positive diagonal entries or, in its most general form, a symmetric positive‐definite matrix. In this paper, the author proposes a data‐based method for choosing the smoothing parametrization to be used in the kernel density estimator. The procedure is fully illustrated by a simulation study and some real data examples. The Canadian Journal of Statistics © 2009 Statistical Society of Canada
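One natural data-based score for comparing the three parametrizations is the leave-one-out (likelihood cross-validation) criterion; this sketch is illustrative and is not necessarily the criterion of the paper. Evaluating it at H = h²I, at a diagonal H, and at a full H (e.g. a scaled sample covariance) lets the data pick the parametrization.

```python
import numpy as np

def kde_loo_loglik(X, H):
    """Leave-one-out log-likelihood of a multivariate Gaussian kernel
    density estimator with bandwidth matrix H -- a data-based score one
    could maximize to choose among bandwidth parametrizations."""
    n, d = X.shape
    Hinv, det = np.linalg.inv(H), np.linalg.det(H)
    norm = 1.0 / np.sqrt((2 * np.pi) ** d * det)
    ll = 0.0
    for i in range(n):
        diff = X - X[i]                                 # (n, d)
        q = np.einsum('nd,de,ne->n', diff, Hinv, diff)  # quadratic forms
        k = norm * np.exp(-0.5 * q)
        k[i] = 0.0                                      # leave obs. i out
        ll += np.log(k.sum() / (n - 1))
    return ll
```

For example, one could compare `kde_loo_loglik(X, h**2 * np.eye(2))`, `kde_loo_loglik(X, np.diag([h1**2, h2**2]))`, and `kde_loo_loglik(X, c * np.cov(X.T))` and keep the parametrization with the highest score.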

9.
Accurate diagnosis of disease is a critical part of health care. New diagnostic and screening tests must be evaluated based on their abilities to discriminate diseased conditions from non‐diseased conditions. For a continuous‐scale diagnostic test, a popular summary index of the receiver operating characteristic (ROC) curve is the area under the curve (AUC). However, when the focus is on a certain region of false positive rates, the partial AUC is often used instead. In this paper we derive the asymptotic normal distribution for the non‐parametric estimator of the partial AUC with an explicit variance formula. The empirical likelihood (EL) ratio for the partial AUC is defined and its limiting distribution is shown to be a scaled chi‐square distribution. Hybrid bootstrap and EL confidence intervals for the partial AUC are proposed by using the newly developed EL theory. We also conduct extensive simulation studies to compare the relative performance of the proposed intervals and existing intervals for the partial AUC. A real example is used to illustrate the application of the recommended intervals. The Canadian Journal of Statistics 39: 17–33; 2011 © 2011 Statistical Society of Canada
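The usual non-parametric estimator of the partial AUC restricts the Mann-Whitney comparison to those non-diseased scores whose false-positive rates fall in the chosen band; a compact sketch (ties and band boundaries handled crudely here):

```python
import numpy as np

def partial_auc(x, y, f0, f1):
    """Non-parametric partial AUC over the false-positive-rate band
    (f0, f1).  x: scores of non-diseased subjects, y: scores of
    diseased subjects.  With f0=0, f1=1 this reduces to the usual
    Mann-Whitney estimate of the full AUC."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    # FPR at threshold c is P(X > c), so the band (f0, f1) corresponds
    # to non-diseased scores between the (1-f1)- and (1-f0)-quantiles
    lo, hi = np.quantile(x, 1 - f1), np.quantile(x, 1 - f0)
    in_band = (x >= lo) & (x <= hi)
    # Mann-Whitney kernel, restricted to the band
    comp = (y[:, None] > x[None, :]) + 0.5 * (y[:, None] == x[None, :])
    return (comp * in_band[None, :]).sum() / (len(x) * len(y))
```

A bootstrap confidence interval, as compared against the EL intervals in the paper, follows by resampling `x` and `y` and taking percentiles of the resulting `partial_auc` values.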

10.
Autoregressive models with switching regime are a frequently used class of nonlinear time series models, which are popular in finance, engineering, and other fields. We consider linear switching autoregressions in which the intercept and variance possibly switch simultaneously, while the autoregressive parameters are structural and hence the same in all states, and we propose quasi‐likelihood‐based tests for a regime switch in this class of models. Our motivation is from financial time series, where one expects states with high volatility and low mean together with states with low volatility and higher mean. We investigate the performance of our tests in a simulation study, and give an application to a series of IBM monthly stock returns. The Canadian Journal of Statistics 40: 427–446; 2012 © 2012 Statistical Society of Canada

11.
A new test for detecting a change in linear regression parameters assuming a general weakly dependent error structure is given. It extends earlier methods based on cumulative sums assuming independent errors. The novelty is in the new standardization method and in smoothing when the time series is dominated by high frequencies. Simulations show the excellent performance of the test. Examples are taken from environmental applications. The algorithm is easy to implement. Testing for multiple changes can be done by segmentation. The Canadian Journal of Statistics 38: 65–79; 2010 © 2009 Statistical Society of Canada
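The key ingredient of such a test under weak dependence is the standardization: the residual CUSUM is scaled by an estimate of the long-run variance rather than the ordinary variance. Below is a sketch using a Bartlett (Newey-West) long-run variance estimator; the paper's standardization and smoothing are more refined, and the bandwidth rule here is just a common default.

```python
import numpy as np

def longrun_sd(e, bandwidth=None):
    """Bartlett-kernel (Newey-West) estimate of the long-run standard
    deviation of a weakly dependent, mean-centred series."""
    e = np.asarray(e, float) - np.mean(e)
    n = len(e)
    q = bandwidth if bandwidth is not None else int(np.floor(4 * (n / 100) ** (2 / 9)))
    s = np.dot(e, e) / n                      # lag-0 autocovariance
    for k in range(1, q + 1):
        gk = np.dot(e[k:], e[:-k]) / n        # lag-k autocovariance
        s += 2 * (1 - k / (q + 1)) * gk       # Bartlett weights
    return np.sqrt(max(s, 1e-12))

def cusum_change_stat(y, X):
    """CUSUM statistic for a change in linear-regression parameters with
    dependent errors: max_k |sum of the first k residuals|, standardized
    by sqrt(n) times the long-run sd of the residuals."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    res = y - X @ beta
    s = np.cumsum(res)
    return np.max(np.abs(s)) / (longrun_sd(res) * np.sqrt(len(y)))
```

Under the null of no change, the statistic behaves asymptotically like the supremum of a Brownian bridge, so large values indicate a parameter change.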

12.
Motivated by time series of atmospheric concentrations of certain pollutants the authors develop bent‐cable regression for autocorrelated errors. Bent‐cable regression extends the popular piecewise linear (broken‐stick) model, allowing for a smooth change region of any non‐negative width. Here the authors consider autoregressive noise added to a bent‐cable mean structure, with unknown regression and time series parameters. They develop asymptotic theory for conditional least‐squares estimation in a triangular array framework, wherein each segment of the bent cable contains an increasing number of observations while the autoregressive order remains constant as the sample size grows. They explore the theory in a simulation study, develop implementation details, apply the methodology to the motivating pollutant dataset, and provide a scientific interpretation of the bent‐cable change point not discussed previously. The Canadian Journal of Statistics 38: 386–407; 2010 © 2010 Statistical Society of Canada
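The bent-cable mean function itself is simple to write down: two line segments joined by a quadratic bend of half-width γ around the change point τ, with γ = 0 recovering the broken-stick model. A sketch (parameter names are generic):

```python
import numpy as np

def bent_cable(x, b0, b1, b2, tau, gamma):
    """Bent-cable mean function: b0 + b1*x + b2*q(x), where q is the
    'cable' term -- zero left of the bend, linear (x - tau) right of it,
    and a quadratic join of half-width gamma centred at tau."""
    x = np.asarray(x, float)
    q = np.where(x > tau + gamma, x - tau, 0.0)
    if gamma > 0:
        inside = np.abs(x - tau) <= gamma
        q = np.where(inside, (x - tau + gamma) ** 2 / (4 * gamma), q)
    return b0 + b1 * x + b2 * q
```

This mean structure can then be fitted by (conditional) least squares, e.g. with a numerical optimizer, with the autoregressive noise handled as in the paper.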

13.
This paper discusses multivariate interval‐censored failure time data, which arise when several correlated survival times of interest exist and only interval censoring is available for each survival time. Such data occur in many fields, for instance, studies of the development of physical symptoms or diseases in several organ systems. A marginal inference approach based on a linear transformation model is presented and applied to bivariate interval‐censored data arising from a diabetic retinopathy study and an AIDS study. Simulation studies conducted to evaluate the performance of the presented approach suggest that it performs well. The Canadian Journal of Statistics 41: 275–290; 2013 © 2013 Statistical Society of Canada

14.
The authors derive closed‐form expressions for the full, profile, conditional and modified profile likelihood functions for a class of random growth parameter models they develop, as well as for Garcia's additive model. These expressions facilitate the determination of parameter estimates for both types of models. The profile, conditional and modified profile likelihood functions are maximized over a few parameters to yield a complete set of parameter estimates. In developing their random growth parameter models the authors specify the drift and diffusion coefficients of the growth parameter process in a natural way that gives interpretive meaning to these coefficients while yielding highly tractable models. They fit several of their random growth parameter models and Garcia's additive model to stock market data, and discuss the results. The Canadian Journal of Statistics 38: 474–487; 2010 © 2010 Statistical Society of Canada

15.
The study of differences among groups is an interesting statistical topic in many applied fields. It is very common in this context to have data that are subject to mechanisms of loss of information, such as censoring and truncation. In the setting of a two‐sample problem with data subject to left truncation and right censoring, we develop an empirical likelihood method to do inference for the relative distribution. We obtain a nonparametric generalization of Wilks' theorem and construct nonparametric pointwise confidence intervals for the relative distribution. Finally, we analyse the coverage probability and length of these confidence intervals through a simulation study and illustrate their use with a real data set on gastric cancer. The Canadian Journal of Statistics 38: 453–473; 2010 © 2010 Statistical Society of Canada
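For complete data, the relative distribution of a comparison sample with respect to a reference sample is R(t) = F_y(F_x⁻¹(t)); the paper's contribution is inference for this object under left truncation and right censoring via empirical likelihood. The complete-data plug-in estimator, shown here only to fix ideas, is a two-line computation:

```python
import numpy as np

def relative_distribution(x, y, t):
    """Empirical relative distribution R(t) = F_y(F_x^{-1}(t)) of a
    comparison sample y with respect to a reference sample x, evaluated
    at grid points t in (0, 1).  Complete-data analogue of the
    left-truncated right-censored setting of the paper."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    qx = np.quantile(x, t)                    # reference quantiles
    # empirical CDF of the comparison sample at those quantiles
    return np.searchsorted(np.sort(y), qx, side='right') / len(y)
```

When the two samples come from the same distribution, R(t) ≈ t (the uniform distribution); departures from the diagonal quantify the differences between the groups.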

16.
Many methods have been developed for the nonparametric estimation of a mean response function, but most of these methods do not lend themselves to simultaneous estimation of the mean response function and its derivatives. Recovering derivatives is important for analyzing human growth data, studying physical systems described by differential equations, and characterizing nanoparticles from scattering data. In this article the authors propose a new compound estimator that synthesizes information from numerous pointwise estimators indexed by a discrete set. Unlike spline and kernel smooths, the compound estimator is infinitely differentiable; unlike local regression smooths, the compound estimator is self‐consistent in that its derivatives estimate the derivatives of the mean response function. The authors show that the compound estimator and its derivatives can attain essentially optimal rates of convergence. They also provide a filtration and extrapolation enhancement for finite samples, and assess the empirical performance of the compound estimator and its derivatives via a simulation study and an application to real data. The Canadian Journal of Statistics 39: 280–299; 2011 © 2011 Statistical Society of Canada
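A standard pointwise ingredient for derivative estimation, of the kind the compound estimator synthesizes across many points, is the local polynomial fit: the order-ν derivative at x₀ is ν! times the ν-th local polynomial coefficient. A sketch (this is the classical baseline, not the authors' compound estimator):

```python
import math
import numpy as np

def local_poly_deriv(x, y, x0, h, deg=3, order=1):
    """Local polynomial estimate of the order-th derivative of the mean
    response at x0: fit a kernel-weighted degree-`deg` polynomial in
    (x - x0) and read off order! times the matching coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    u = x - x0
    w = np.exp(-0.5 * (u / h) ** 2)              # Gaussian kernel weights
    A = np.vander(u, deg + 1, increasing=True)   # [1, u, u^2, ...]
    Aw = A * w[:, None]
    # weighted least squares via the normal equations A'WA b = A'Wy
    beta = np.linalg.solve(Aw.T @ A, Aw.T @ y)
    return math.factorial(order) * beta[order]
```

Unlike the compound estimator, this estimate is only pointwise: repeating it over a grid of x₀ values does not by itself produce an infinitely differentiable curve.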

17.
When confronted with multiple covariates and a response variable, analysts sometimes apply a variable‐selection algorithm to the covariate‐response data to identify a subset of covariates potentially associated with the response, and then wish to make inferences about parameters in a model for the marginal association between the selected covariates and the response. If an independent data set were available, the parameters of interest could be estimated by using standard inference methods to fit the postulated marginal model to the independent data set. However, when applied to the same data set used by the variable selector, standard (“naive”) methods can lead to distorted inferences. The authors develop testing and interval estimation methods for parameters reflecting the marginal association between the selected covariates and response variable, based on the same data set used for variable selection. They provide theoretical justification for the proposed methods, present results to guide their implementation, and use simulations to assess and compare their performance to a sample‐splitting approach. The methods are illustrated with data from a recent AIDS study. The Canadian Journal of Statistics 37: 625–644; 2009 © 2009 Statistical Society of Canada

18.
The paper concerns the problem of applying singular spectrum analysis to time series with missing data. A method of filling in the missing data is proposed and is applied to time series of finite rank. Conditions of exact reconstruction of missing data are developed and versions of the algorithm applicable to real-life time series are presented. The proposed algorithms result in the extraction of additive components of time series such as trends and periodic components, with simultaneous filling in of the missing data. An example is presented.
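Basic SSA embeds the series into an L-lagged Hankel trajectory matrix, truncates its SVD, and hankelizes back by anti-diagonal averaging; a common simple gap-filling scheme (illustrative, not the exact algorithm of the paper) alternates imputation and low-rank reconstruction:

```python
import numpy as np

def ssa_reconstruct(x, L, r):
    """Rank-r SSA reconstruction: embed the series into an L-lagged
    trajectory (Hankel) matrix, keep the r leading SVD components, and
    hankelize back by anti-diagonal averaging."""
    x = np.asarray(x, float)
    n = x.size
    K = n - L + 1
    T = np.column_stack([x[i:i + L] for i in range(K)])  # L x K Hankel
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    Tr = (U[:, :r] * s[:r]) @ Vt[:r]
    out, cnt = np.zeros(n), np.zeros(n)
    for j in range(K):                       # anti-diagonal averaging
        out[j:j + L] += Tr[:, j]
        cnt[j:j + L] += 1
    return out / cnt

def ssa_fill(x, mask, L, r, n_iter=50):
    """Iterative SSA gap filling: initialize missing entries (mask ==
    False) with the observed mean, then alternate rank-r reconstruction
    and re-imputation of the missing positions."""
    x = np.asarray(x, float).copy()
    x[~mask] = x[mask].mean()
    for _ in range(n_iter):
        rec = ssa_reconstruct(x, L, r)
        x[~mask] = rec[~mask]
    return x
```

For a series of finite rank — e.g. a linear trend (rank 2) or a sinusoid (rank 2) — the rank-r reconstruction is exact, which is the setting in which the paper's exact-recovery conditions apply.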

19.
The median is a commonly used parameter to characterize biomarker data. In particular, with two vastly different underlying distributions, comparing medians provides different information than comparing means; however, very few tests for medians are available. We propose a series of two‐sample median‐specific tests using empirical likelihood methodology and investigate their properties. We present the technical details of incorporating the relevant median constraints into the empirical likelihood function. An extensive Monte Carlo study shows that the proposed tests have excellent operating characteristics even under unfavourable conditions such as non‐exchangeability under the null hypothesis. We apply the proposed methods to biomarker data from Western blot analysis to compare normal cells with bronchial epithelial cells from a case–control study. The Canadian Journal of Statistics 39: 671–689; 2011. © 2011 Statistical Society of Canada
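The one-sample building block is the empirical likelihood ratio for a median, which has a closed binomial form; the paper's two-sample tests incorporate such median constraints jointly. A sketch (ties at m are ignored here):

```python
import numpy as np

def el_median_stat(x, m):
    """Empirical-likelihood ratio statistic for H0: median(X) = m.
    Profiling the EL under the constraint sum p_i (1{x_i <= m} - 1/2) = 0
    gives the closed form
        -2 log R(m) = 2n [p log(2p) + (1-p) log(2(1-p))],
    where p is the sample fraction of observations below m; the limiting
    null distribution is chi-squared with 1 degree of freedom."""
    x = np.asarray(x, float)
    n = len(x)
    p = np.mean(x < m)
    if p == 0.0 or p == 1.0:
        return np.inf                 # m outside the data range
    return 2 * n * (p * np.log(2 * p) + (1 - p) * np.log(2 * (1 - p)))
```

Rejecting when the statistic exceeds the chi-squared(1) quantile gives the one-sample test; inverting it gives an EL confidence interval for the median.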

20.
In this article the author investigates empirical‐likelihood‐based inference for the parameters of the varying‐coefficient single‐index model (VCSIM). Unlike in the usual cases, without bias correction the asymptotic distribution of the empirical likelihood ratio cannot achieve the standard chi‐squared distribution. To this end, a bias‐corrected empirical likelihood method is employed to construct confidence regions (intervals) for the regression parameters. Compared with regions based on the normal approximation, these have two advantages: (1) they do not impose prior constraints on the shape of the regions; (2) they do not require the construction of a pivotal quantity, and the regions are range preserving and transformation respecting. A simulation study is undertaken to compare the empirical likelihood with the normal approximation in terms of coverage accuracies and average areas/lengths of confidence regions/intervals. A real data example is given to illustrate the proposed approach. The Canadian Journal of Statistics 38: 434–452; 2010 © 2010 Statistical Society of Canada
