Similar Articles
20 similar articles found (search time: 15 ms)
1.
2.
3.
4.
5.
6.
7.
8.
9.
For each positive integer k, a set of k-principal points of a distribution is the set of k points that optimally represent the distribution in terms of mean squared distance. However, an explicit form of the k-principal points is often difficult to obtain. Hence a theorem established by Tarpey et al. (1995) has been influential in the literature: when the distribution is elliptically symmetric, any set of k-principal points lies in the linear subspace spanned by some principal eigenvectors of the covariance matrix. This result is called the "principal subspace theorem". Recently, Yamamoto and Shinozaki (2000b) derived a principal subspace theorem for 2-principal points of a location mixture of spherically symmetric distributions; in their article the mixture proportions were assumed equal. This article derives a further result by considering a location mixture with unequal mixture proportions.
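As a rough illustration of the setting (not code from the paper): k-principal points of a sample can be approximated by k-means cluster centers, since both minimize mean squared distance. The sketch below, in Python with scikit-learn, draws from an unequal-ratio location mixture of two spherical normals and checks that the estimated 2-principal points are roughly aligned with the mean-difference direction, as a principal subspace result would suggest; all parameter choices (dimension, means, mixing weight) are arbitrary.

```python
# Hypothetical sketch: approximate 2-principal points by k-means centers and check
# they lie close to the line through the two component means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
p, n = 5, 100_000
mu1, mu2 = np.zeros(p), np.r_[3.0, np.zeros(p - 1)]   # means differ in the 1st coordinate
w = 0.3                                               # unequal mixture ratio
labels = rng.random(n) < w
x = np.where(labels[:, None], mu1, mu2) + rng.standard_normal((n, p))

centers = KMeans(n_clusters=2, n_init=10, random_state=0).fit(x).cluster_centers_

# Direction spanned by the two principal-point estimates vs. the mean-difference direction
d_hat = centers[0] - centers[1]
d_true = mu1 - mu2
cosine = abs(d_hat @ d_true) / (np.linalg.norm(d_hat) * np.linalg.norm(d_true))
print("2-principal-point estimates:\n", centers.round(2))
print("cosine with mean-difference direction:", round(cosine, 4))  # close to 1
```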

10.
We reinvestigate the empirical problem of lag length selection in unit root tests when using the augmented Dickey–Fuller (ADF) test based on GLS detrending. We extend the work of Ng and Perron (1995) on this issue by applying finite-sample critical values calculated with the formulae proposed by Cheung and Lai (1995). Unlike Ng and Perron (2001), we find through simulation studies that selecting the lag length by a sequential t-test in the ADF regression of the GLS-detrended series performs best in most cases.
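For readers who want to experiment with this lag-selection rule, here is a hedged sketch in plain NumPy (not the authors' code): it GLS-detrends a simulated series using the usual quasi-differencing with c-bar = -7 for the constant-only case, then picks the ADF lag by a general-to-specific sequential t-test. The maximum lag kmax and the 1.645 threshold are illustrative choices, not values taken from the article.

```python
# Sketch under assumptions: GLS detrending followed by sequential t-test lag selection.
import numpy as np

def gls_detrend(y, cbar=-7.0):
    """Quasi-difference y and a constant, regress, and remove the fitted constant."""
    T = len(y)
    a = 1.0 + cbar / T
    ya = np.r_[y[0], y[1:] - a * y[:-1]]          # quasi-differenced y
    za = np.r_[1.0, np.ones(T - 1) * (1.0 - a)]   # quasi-differenced constant
    delta = (za @ ya) / (za @ za)                 # OLS with a single regressor
    return y - delta

def adf_lag_by_sequential_t(y, kmax=8, tcrit=1.645):
    """Drop the highest lagged difference while its t-statistic is insignificant."""
    dy = np.diff(y)
    for k in range(kmax, 0, -1):
        # Delta y_t = rho * y_{t-1} + sum_{j<=k} phi_j * Delta y_{t-j} + e_t
        Y = dy[k:]
        X = np.column_stack([y[k:-1]] + [dy[k - j:-j] for j in range(1, k + 1)])
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        sigma2 = resid @ resid / (len(Y) - X.shape[1])
        se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
        if abs(beta[-1] / se[-1]) >= tcrit:       # t-stat of the k-th lagged difference
            return k
    return 0

rng = np.random.default_rng(1)
e = rng.standard_normal(300)
y = np.cumsum(e + 0.5 * np.r_[0.0, e[:-1]])       # unit-root process with MA(1) errors
print("selected lag:", adf_lag_by_sequential_t(gls_detrend(y)))
```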

11.
12.
In this article, we consider M-estimators for the linear regression model when both the response and the covariate are subject to double censoring. The proposed estimators are constructed as functionals of three types of estimators of a bivariate survival distribution. The first two are the generalizations, proposed by Shen (2009), of the Campbell and Földes (1982) and Dabrowska (1988) estimators. The third is a generalization of the Prentice and Cai (1992) estimator. The consistency of the proposed M-estimators is established, and a simulation study is conducted to investigate their performance. Furthermore, simple bootstrap methods are used to estimate standard deviations and to construct interval estimators.

13.
Two types of estimates of the process level, namely repeated median estimates (Siegel, 1982) and full online estimates (Gather et al., 2006) based on repeated median filters, are used to develop control charts. The distributional properties of the estimates are studied by simulation and are found to follow the normal distribution closely. The repeated median, being robust against outliers with an asymptotic 50% breakdown value and having a small standard deviation, is found to be useful as a basis for monitoring process averages. The control charts using repeated median estimates are recommended for general use.
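A minimal sketch of the underlying statistic, assuming a moving-window setting: Siegel's repeated median slope and level can be computed as below (Python/NumPy). This only illustrates the estimate on which such control charts could be based, not the charting procedure of the article; the window length, noise level, and outlier positions are arbitrary.

```python
# Siegel's repeated median: inner median of pairwise slopes per point, outer median over points.
import numpy as np

def repeated_median(t, y):
    """Repeated median slope and intercept (level at t = 0) for points (t_i, y_i)."""
    n = len(y)
    slopes = np.empty(n)
    for i in range(n):
        dt = np.delete(t, i) - t[i]
        dy = np.delete(y, i) - y[i]
        slopes[i] = np.median(dy / dt)        # inner median over j != i
    beta = np.median(slopes)                  # outer median
    alpha = np.median(y - beta * t)           # robust intercept
    return alpha, beta

rng = np.random.default_rng(2)
t = np.arange(30, dtype=float)
y = 10.0 + 0.05 * t + rng.normal(0, 0.2, size=30)
y[[5, 17]] += 4.0                             # two gross outliers
level, slope = repeated_median(t - t[-1], y)  # level estimated at the newest time point
print("estimated current level:", round(level, 3), "slope:", round(slope, 4))
```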

14.
Here, we apply the smoothing technique proposed by Chaubey et al. (2007) to the empirical survival function studied in Bagai and Prakasa Rao (1991) for a sequence of stationary non-negative associated random variables. The derivative of this estimator is in turn used to propose a nonparametric density estimator. The asymptotic properties of the resulting estimators are studied and contrasted with those of some competing estimators. A simulation study comparing them with the recent estimator based on Poisson weights (Chaubey et al., 2011) shows that the two estimators have comparable finite-sample global as well as local behavior.
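The sketch below illustrates one common form of Poisson-weight smoothing of an empirical survival function, S~(x) = sum_k P(N = k) S_n(k/lambda) with N Poisson(lambda*x); the smoothing rate lambda and every other choice here are ad hoc assumptions for the demo, not the tuning or the exact estimator of the cited papers.

```python
# Rough sketch, assuming a Poisson-weight smoother of the empirical survival function.
import numpy as np
from scipy.stats import poisson

def empirical_survival(data):
    return lambda x: np.mean(data > x)

def poisson_smoothed_survival(data, lam=None):
    S_n = empirical_survival(data)
    if lam is None:
        lam = len(data) / data.max()              # ad hoc smoothing rate
    grid_k = np.arange(int(5 * lam * data.max()) + 1)
    S_at_knots = np.array([S_n(k / lam) for k in grid_k])
    def S_tilde(x):
        w = poisson.pmf(grid_k, lam * x)          # Poisson weights at rate lam * x
        return float(w @ S_at_knots)
    return S_tilde

rng = np.random.default_rng(3)
x = rng.exponential(scale=1.0, size=200)          # non-negative sample
S_tilde = poisson_smoothed_survival(x)
for v in (0.5, 1.0, 2.0):
    print(f"S~({v}) = {S_tilde(v):.3f}  vs  exp(-{v}) = {np.exp(-v):.3f}")
```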

15.
Consider a skewed population and suppose an intelligent guess can be made about an interval that contains the population mean. Within such an interval, there may exist biased estimators with smaller mean squared error than the arithmetic mean. This article indicates when it is advisable to shrink the arithmetic mean towards a guessed interval using root estimators, the goal being an estimator that performs well near the average of natural origins. An estimator is proposed that contains the Thompson (1968) ordinary shrinkage estimator, the Jenkins et al. (1973) square-root estimator, and the arithmetic sample mean as special cases. The bias and the mean squared error of the proposed, more general estimator are compared with those of the three special cases. Shrinkage coefficients that yield minimum mean squared error estimators are obtained. The proposed estimator is considerably more efficient than the three special cases, and this remains true for highly skewed populations. The merits of the proposed shrinkage square-root estimator are supported by the results of numerical and simulation studies.
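As a toy illustration of why shrinking toward a good guess can pay off, the simulation below compares the MSE of the sample mean with that of a mean shrunk linearly toward the midpoint of a guessed interval, for a skewed (exponential) population. This is a generic linear shrinkage, not the root estimator of the article, and the weight w and the interval are arbitrary.

```python
# Minimal demo: T(w) = w * xbar + (1 - w) * m, with m the midpoint of a guessed interval.
import numpy as np

rng = np.random.default_rng(4)
true_mean, n, reps = 1.0, 20, 50_000
guess_interval = (0.8, 1.2)                  # "intelligent guess" containing the mean
m = 0.5 * (guess_interval[0] + guess_interval[1])
w = 0.6                                      # arbitrary shrinkage weight

samples = rng.exponential(scale=true_mean, size=(reps, n))   # skewed population
xbar = samples.mean(axis=1)
shrunk = w * xbar + (1 - w) * m

mse = lambda est: np.mean((est - true_mean) ** 2)
print(f"MSE(sample mean)          = {mse(xbar):.5f}")
print(f"MSE(shrunk toward guess)  = {mse(shrunk):.5f}")      # smaller when m is near the truth
```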

16.
Based on the semiparametric median regression analysis for right-censored data developed by Ying et al. (1995), an empirical likelihood based inference procedure for the regression coefficients is proposed. The limiting distribution of the proposed log-empirical likelihood ratio test statistic is chi-squared, in agreement with the standard asymptotic results of the empirical likelihood method. Inference about subsets of the entire regression coefficient vector is discussed. The proposed method is illustrated by simulation studies.

17.
18.
A proposed method based on frailty models is used to identify longitudinal biomarkers or surrogates for multivariate survival times. This method extends earlier models of Wulfsohn and Tsiatis (1997) and Song et al. (2002). In this article, similar to Henderson et al. (2002), a joint likelihood function combines the likelihood functions of the longitudinal biomarkers and of the multivariate survival times. We use simulations to explore how the number of individuals, the number of time points per individual, and the functional form of the random effects from the longitudinal biomarkers influence the power to detect an association between a longitudinal biomarker and the multivariate survival time. The proposed method is illustrated using the gastric cancer data.

19.
In this article, we establish several recurrence relations for the single and product moments of progressively Type-II right censored order statistics from a log-logistic distribution. Used in a systematic recursive manner, these relations enable the computation of all the means, variances, and covariances of progressively Type-II right censored order statistics from the log-logistic distribution for all sample sizes n, effective sample sizes m, and all progressive censoring schemes (R1,…, Rm). The results established here generalize the corresponding results for the usual order statistics due to Balakrishnan and Malik (1987) and Balakrishnan et al. (1987). The moments so determined are then used to derive best linear unbiased estimators for the scale and location-scale log-logistic distributions. A comparison of these estimates with the maximum likelihood estimates is made through Monte Carlo simulation. Best linear unbiased predictors of progressively censored failure times are then discussed briefly. Finally, a numerical example is presented to illustrate all the methods of inference developed here.
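A Monte Carlo companion to such recursions might look like the sketch below: it simulates progressively Type-II right censored order statistics via the usual uniform-spacings algorithm (commonly attributed to Balakrishnan and Sandhu) and transforms them with the log-logistic quantile function F^{-1}(u) = alpha (u/(1-u))^{1/beta}. The parameters and the censoring scheme are arbitrary examples, and the empirical moments printed are only a numerical cross-check, not the recurrence relations themselves.

```python
# Sketch under assumptions: simulate progressively Type-II censored log-logistic samples
# and estimate the means and variances of the censored order statistics by Monte Carlo.
import numpy as np

def progressive_type2_uniform(R, rng):
    """One progressively Type-II censored sample of U(0,1) order statistics."""
    m = len(R)
    W = rng.random(m)
    # gamma_i = i + R_m + R_{m-1} + ... + R_{m-i+1}
    gamma = np.arange(1, m + 1) + np.cumsum(R[::-1])
    V = W ** (1.0 / gamma)
    return 1.0 - np.cumprod(V[::-1])          # U_{1:m:n} <= ... <= U_{m:m:n}

def loglogistic_quantile(u, alpha=1.0, beta=3.0):
    return alpha * (u / (1.0 - u)) ** (1.0 / beta)

rng = np.random.default_rng(5)
R = np.array([2, 0, 1, 0, 2])                 # censoring scheme: m = 5, n = m + sum(R) = 10
reps = 50_000
U = np.array([progressive_type2_uniform(R, rng) for _ in range(reps)])
X = loglogistic_quantile(U)
print("simulated means of X_{i:5:10}    :", X.mean(axis=0).round(3))
print("simulated variances of X_{i:5:10}:", X.var(axis=0).round(3))
```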

20.
The Significance Analysis of Microarrays (SAM; Tusher et al., 2001) method is widely used for analyzing gene expression data while controlling the FDR via a resampling-based procedure in the microarray setting. One of the main components of the SAM procedure is the adjustment of the test statistic: a fudge factor is introduced to deflate the large values of the test statistic that arise from genes with small standard errors. Lin et al. (2008) pointed out that, in the presence of small-variance genes, the fudge factor does not effectively improve the power or the control of the FDR compared with the SAM procedure without the fudge factor. Motivated by the simulation results presented in Lin et al. (2008), in this article we compare several methods for choosing the fudge factor in the modified t-type test statistic and use simulation studies to investigate the power and the control of the FDR of the considered methods.
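To make the role of the fudge factor concrete, here is a hedged sketch of a SAM-style modified t-type statistic d_i = (mean1_i - mean2_i)/(s_i + s0). For simplicity s0 is set to a fixed percentile of the gene-wise standard errors, whereas SAM itself selects s0 by a coefficient-of-variation criterion; the simulated data, the 50-gene signal block, and the small-variance block are all assumptions for the demo.

```python
# Illustrative sketch of a modified t-type statistic with a percentile-based fudge factor.
import numpy as np

def modified_t(x1, x2, s0_quantile=0.5):
    """x1, x2: arrays of shape (genes, samples) for the two groups."""
    n1, n2 = x1.shape[1], x2.shape[1]
    diff = x1.mean(axis=1) - x2.mean(axis=1)
    pooled_var = ((x1.var(axis=1, ddof=1) * (n1 - 1) + x2.var(axis=1, ddof=1) * (n2 - 1))
                  / (n1 + n2 - 2))
    s = np.sqrt(pooled_var * (1.0 / n1 + 1.0 / n2))   # gene-wise standard error
    s0 = np.quantile(s, s0_quantile)                  # simple percentile choice of the fudge factor
    return diff / (s + s0), diff / s                  # modified vs. ordinary t-type statistic

rng = np.random.default_rng(6)
genes, n = 2000, 5
x1 = rng.normal(0, 1, size=(genes, n))
x2 = rng.normal(0, 1, size=(genes, n))
x1[:50] += 1.0                                        # 50 truly differentially expressed genes
x1[-100:] *= 0.05; x2[-100:] *= 0.05                  # small-variance genes that can inflate ordinary t
d_mod, d_ord = modified_t(x1, x2)
print("largest |ordinary t| indices :", np.argsort(-np.abs(d_ord))[:5])
print("largest |modified t| indices :", np.argsort(-np.abs(d_mod))[:5])
```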
