Search results: 82 articles.
1.
We consider a method of moments approach for dealing with censoring at zero for data expressed in levels when researchers would like to take logarithms. A Box–Cox transformation is employed. We explore this approach in the context of linear regression where both dependent and independent variables are censored. We contrast this method to two others, (1) dropping records of data containing censored values and (2) assuming normality for censored observations and the residuals in the model. Across the methods considered, where researchers are interested primarily in the slope parameter, estimation bias is consistently reduced using the method of moments approach.
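The censoring-at-zero setting and the Box–Cox step can be sketched numerically. The snippet below is an illustrative sketch only, not the authors' moment estimator: it simulates level data censored at zero and applies `scipy.stats.boxcox` to the uncensored part.

```python
# Illustrative sketch only: simulate level data censored at zero and fit a
# Box-Cox transformation to the uncensored part. This is NOT the authors'
# method-of-moments estimator; it just shows the transformation step.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
latent = rng.lognormal(mean=1.0, sigma=0.8, size=500) - 1.0
y = np.maximum(latent, 0.0)              # censoring at zero

uncensored = y[y > 0]
y_bc, lam = stats.boxcox(uncensored)     # maximum-likelihood Box-Cox lambda
print(f"share censored at zero: {np.mean(y == 0):.2f}")
print(f"estimated Box-Cox lambda: {lam:.2f}")
```

Dropping the censored records, as in comparison method (1), corresponds to keeping only `uncensored` here; the moment-based approach instead exploits those records rather than discarding them.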
2.
This article considers the problem of estimating the parameters of the Weibull distribution under a progressive Type-I interval censoring scheme with beta-binomial removals. Classical as well as Bayesian procedures for estimating the unknown model parameters are developed. The Bayes estimators are obtained under the squared error loss function (SELF) and the general entropy loss function (GELF) using an MCMC technique. The performance of the estimators is discussed in terms of their mean squared errors (MSEs). Further, an expression for the expected number of total failures is obtained. A real dataset of survival times for patients with plasma cell myeloma illustrates the suitability of the proposed methodology.
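The interval-censored likelihood and the MCMC step can be sketched as follows. The inspection times, interval failure counts, and removal counts below are illustrative (not the paper's data), the priors are simply flat on the log parameters, and the sampler is a plain random-walk Metropolis; the SELF estimate is the posterior mean.

```python
# Hedged sketch: random-walk Metropolis for Weibull (shape k, scale s)
# under progressive Type-I interval censoring. The counts d (failures per
# interval), removals R and inspection times t are illustrative only.
import numpy as np

t = np.array([1.0, 2.0, 3.0, 4.0])          # inspection times
d = np.array([10, 14, 9, 5])                # failures observed per interval
R = np.array([2, 3, 1, 20])                 # units removed at each inspection

def S(x, k, s):                             # Weibull survival function
    return np.exp(-(x / s) ** k)

def loglik(k, s):
    edges = np.concatenate(([0.0], t))
    p = S(edges[:-1], k, s) - S(edges[1:], k, s)   # interval probabilities
    return np.sum(d * np.log(p)) + np.sum(R * np.log(S(t, k, s)))

rng = np.random.default_rng(1)
theta = np.log([1.0, 2.0])                  # current state: (log k, log s)
samples = []
for _ in range(5000):
    prop = theta + 0.1 * rng.standard_normal(2)
    if np.log(rng.uniform()) < loglik(*np.exp(prop)) - loglik(*np.exp(theta)):
        theta = prop
    samples.append(np.exp(theta))
k_hat, s_hat = np.mean(samples[1000:], axis=0)  # posterior means (SELF)
print(f"posterior means: shape {k_hat:.2f}, scale {s_hat:.2f}")
```

A GELF estimate would replace the posterior mean with the corresponding functional of the same posterior draws.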
3.
This paper considers the estimation problem when lifetimes are Weibull distributed and are collected under Type-II progressive censoring with random removals, where the number of units removed at each failure time follows a discrete uniform distribution. The expected duration of this censoring plan is discussed and compared numerically to that of Type-II censoring without removals. Maximum likelihood estimators of the parameters and their asymptotic variances are derived.
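The scheme and the likelihood can be sketched on simulated data. The likelihood under progressive Type-II censoring is the product of the density at each observed failure times the survival function raised to the number of units withdrawn there; the sketch below draws removal counts from a discrete uniform distribution and maximizes the log-likelihood numerically.

```python
# Sketch (simulated data, not the paper's): progressively Type-II censor
# n Weibull lifetimes down to m observed failures, with the removal count
# at each failure drawn uniformly, then fit the MLE numerically.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n, m = 50, 20
true_shape, true_scale = 1.5, 10.0

lifetimes = list(rng.weibull(true_shape, n) * true_scale)
x, R = [], []
left = n - m                              # total removal budget
for i in range(m):
    fail = min(lifetimes)                 # next observed failure
    lifetimes.remove(fail)
    r = left if i == m - 1 else int(rng.integers(0, left + 1))
    for _ in range(r):                    # withdraw r random survivors
        lifetimes.pop(int(rng.integers(len(lifetimes))))
    left -= r
    x.append(fail)
    R.append(r)

x, R = np.array(x), np.array(R)

def nll(theta):
    k, s = np.exp(theta)                  # log-parametrize for positivity
    logf = np.log(k / s) + (k - 1) * np.log(x / s) - (x / s) ** k
    logS = -(x / s) ** k                  # each removed unit survived past x_i
    return -(logf.sum() + (R * logS).sum())

res = minimize(nll, x0=np.log([1.0, 5.0]), method="Nelder-Mead")
k_hat, s_hat = np.exp(res.x)
print(f"MLE: shape {k_hat:.2f}, scale {s_hat:.2f}")
```

The asymptotic variances discussed in the abstract would come from inverting the observed information matrix at this maximum.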
4.

Engineers who conduct reliability tests need to choose the sample size when designing a test plan. The model parameters and quantiles are the typical quantities of interest. The large-sample procedure relies on the property that the distribution of the t-like quantities is close to standard normal in large samples. In this paper, we use a new procedure, based on both simulation and asymptotic theory, to determine the sample size for a test plan. Unlike the complete-data case, the t-like quantities are not pivotal in general when data are time censored. However, we show that the distribution of the t-like quantities depends only on the expected proportion failing, and we obtain the distributions by simulation for both the complete and time-censored cases when the data follow a Weibull distribution. We find that the large-sample procedure usually underestimates the sample size, even when the suggested size is 200 or more. The sample size given by the proposed procedure ensures the requested nominal accuracy and confidence of the estimation whether the test plan yields complete or time-censored data. Some useful figures displaying the required sample size under the new procedure are also presented.
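The simulation idea can be illustrated in a stripped-down special case. For brevity the sketch below uses the exponential distribution (Weibull with known shape 1), where the MLE under time censoring has a closed form, and simulates the distribution of the t-like quantity to compare its tail points with the standard normal values the large-sample procedure would assume.

```python
# Sketch of the simulation-based check: under time (Type-I) censoring the
# distribution of the t-like quantity depends on the expected proportion
# failing. Exponential special case used here so the MLE is closed-form.
import numpy as np

rng = np.random.default_rng(3)
theta, tc, n = 1.0, 1.0, 30          # mean life, censoring time, sample size
p_fail = 1 - np.exp(-tc / theta)     # expected proportion failing

tvals = []
for _ in range(2000):
    x = rng.exponential(theta, n)
    obs = np.minimum(x, tc)          # time-censored observations
    r = int(np.sum(x <= tc))         # number of observed failures
    if r == 0:
        continue
    theta_hat = obs.sum() / r        # MLE of the mean life
    se = theta_hat / np.sqrt(r)      # Fisher-information standard error
    tvals.append((theta_hat - theta) / se)

q = np.quantile(tvals, [0.05, 0.95])
print(f"expected proportion failing: {p_fail:.2f}")
print(f"simulated 5%/95% points of t: {q[0]:.2f}, {q[1]:.2f} (normal: -1.64, 1.64)")
```

The gap between the simulated quantiles and ±1.64 at modest n is exactly what makes the large-sample sample-size formula optimistic.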
5.
In many complex diseases, such as cancer, a patient passes through various disease stages before reaching a terminal state (say, disease-free or death). This fits a multistate model framework, where a prognosis may be equivalent to predicting state occupation at a future time t. With the advent of high-throughput genomic and proteomic assays, a clinician may intend to use such high-dimensional covariates to make better predictions of state occupation. In this article, we offer a practical solution to this problem by combining a useful technique, called pseudo-value (PV) regression, with a latent-factor or penalized regression method such as partial least squares (PLS), the least absolute shrinkage and selection operator (LASSO), or their variants. We explore the predictive performance of these combinations in various high-dimensional settings via extensive simulation studies. This strategy works fairly well provided the models are tuned properly, and PLS turns out to be slightly better than LASSO in most settings we investigated for the purpose of temporal prediction of future state occupation. We illustrate the utility of these PV-based high-dimensional regression methods using a lung cancer data set, taking the patients' baseline gene expression values as covariates.
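The pseudo-value step is the key trick: jackknife pseudo-observations of a state-occupation probability turn a censored outcome into a quantity that ordinary regression machinery can handle. The sketch below is for the simple two-state (alive/dead) case on simulated data, with a plain ridge fit standing in for the LASSO/PLS step to keep the example dependency-free.

```python
# Sketch of pseudo-value (PV) regression: jackknife pseudo-observations of
# the Kaplan-Meier survival probability at t0, then a penalized linear fit
# on high-dimensional covariates. Simulated data; ridge stands in for
# LASSO/PLS.
import numpy as np

def km_at(time, event, t0):
    # Kaplan-Meier survival estimate at time t0
    order = np.argsort(time)
    s, at_risk = 1.0, len(time)
    for ti, ei in zip(time[order], event[order]):
        if ti > t0:
            break
        if ei:
            s *= 1.0 - 1.0 / at_risk
        at_risk -= 1
    return s

rng = np.random.default_rng(4)
n, p, t0 = 80, 200, 1.0                            # p >> n
X = rng.standard_normal((n, p))
latent = rng.exponential(np.exp(-0.8 * X[:, 0]))   # only column 0 matters
cens = rng.exponential(2.0, n)
time, event = np.minimum(latent, cens), latent <= cens

s_full = km_at(time, event, t0)
pv = np.array([n * s_full - (n - 1) * km_at(np.delete(time, i), np.delete(event, i), t0)
               for i in range(n)])                 # jackknife pseudo-values

lam = 10.0                                         # ridge penalty
beta = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ (pv - pv.mean()))
print(f"largest |coefficient| at column {np.argmax(np.abs(beta))}")
```

In the multistate setting of the article, `km_at` would be replaced by a nonparametric state-occupation estimator, and the penalized fit by properly tuned LASSO or PLS.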
6.
Plotting log(−log) survival functions against time for different categories, or combinations of categories, of covariates is perhaps the easiest and most commonly used graphical tool for checking the proportional hazards (PH) assumption. One problem with this technique is that the covariates must be categorical, or be made categorical by suitably grouping continuous covariates. Other limitations include the subjectivity of decisions based on eye-judgment of the plots and the frequent inconclusiveness that arises as the number of categories and/or covariates grows. This paper proposes a non-graphical (numerical) test of the PH assumption that makes use of the log(−log) survival function. The test enables checking proportionality for categorical as well as continuous covariates and overcomes the other limitations of the graphical method. The observed power and size of the test are compared with those of other tests of its kind through simulation experiments. The simulations demonstrate that the proposed test is more powerful than some of the most sensitive tests in the literature across a wide range of survival situations. The test is illustrated on the widely used gastric cancer data.
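The graphical idea the test formalizes is easy to demonstrate: under PH, the curves of log(−log S(t)) for two covariate categories differ by a constant vertical shift. The sketch below (simulated data, hand-rolled Kaplan-Meier, not the paper's test statistic) computes that gap on a small time grid.

```python
# Sketch of the log(-log) survival diagnostic: under proportional hazards
# the vertical gap between groups' log(-log S(t)) curves is constant in t.
import numpy as np

def km_curve(time, event, grid):
    # Kaplan-Meier survival estimates evaluated at the points of `grid`
    order = np.argsort(time)
    st, se = time[order], event[order]
    s, at_risk, j, out = 1.0, len(time), 0, []
    for g in grid:
        while j < len(st) and st[j] <= g:
            if se[j]:
                s *= 1.0 - 1.0 / at_risk
            at_risk -= 1
            j += 1
        out.append(s)
    return np.array(out)

rng = np.random.default_rng(5)
n = 400
grp = rng.integers(0, 2, n)                          # two covariate categories
t_lat = rng.weibull(1.5, n) * np.exp(-0.5 * grp)     # PH holds by construction
c = rng.uniform(0.5, 3.0, n)                         # censoring times
time, event = np.minimum(t_lat, c), t_lat <= c

grid = np.array([0.3, 0.5, 0.8, 1.2])
ll0 = np.log(-np.log(km_curve(time[grp == 0], event[grp == 0], grid)))
ll1 = np.log(-np.log(km_curve(time[grp == 1], event[grp == 1], grid)))
gap = ll1 - ll0
print("vertical gaps:", np.round(gap, 2))            # roughly constant under PH
```

A numerical test of the kind the paper proposes replaces the eye-judgment of "roughly constant" with a formal statistic on such quantities.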
7.
In addition to his contributions to biostatistics and clinical trials, Paul Meier had a long-term interest in the legal applications of statistics. As part of this, he had extensive experience as a statistical consultant. Legal consulting can be a minefield, but as a result of his background, Paul had excellent advice to give to those starting out on how to function successfully in this environment.
8.
In a clinical trial with time to an event as the outcome of interest, we may randomize a number of matched subjects, such as litters, to different treatments. The number of treatments equals the number of subjects per litter (two in the case of twins). In this case, the survival times of matched subjects may be dependent. Although the standard rank tests for independent samples, such as the logrank and Wilcoxon tests, may be used to test the equality of the marginal survival distributions, their standard errors should be modified to accommodate the possible dependence of survival times between matched subjects. In this paper we propose a method of calculating the standard error of the rank tests for paired two-sample survival data. The method extends naturally to K-sample tests under dependence.
9.
In this article, we consider the problem of estimating regression coefficients for a linear model with censored and truncated data based on regression depth. Any line can be ranked by its regression depth, and the deepest regression line is the one with maximum regression depth. We propose a method for defining the regression depth of a line in the presence of censoring and truncation, and we show how the resulting regression performs by analyzing the Stanford heart transplant data and AIDS incubation data.
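For readers unfamiliar with regression depth, the uncensored version is easy to compute: a line is a "nonfit" if some vertical split leaves only negative residuals on one side and only positive residuals on the other, and the depth of a line is the minimum number of points whose removal creates such a split. The sketch below implements that definition on simulated data; the article's actual contribution, depth under censoring and truncation, is not reproduced here.

```python
# Hedged sketch of regression depth (Rousseeuw-Hubert style, uncensored
# case only): depth = minimum number of points to remove so that some
# vertical split has negative residuals on one side, positive on the other.
import numpy as np

def regression_depth(a, b, x, y):
    r = (y - (a + b * x))[np.argsort(x)]   # residuals in x-order
    depth = len(x)
    # candidate splits: before all x, between neighbors, after all x
    for cut in range(len(x) + 1):
        left, right = r[:cut], r[cut:]
        type1 = np.sum(left > 0) + np.sum(right < 0)  # want neg left / pos right
        type2 = np.sum(left < 0) + np.sum(right > 0)  # want pos left / neg right
        depth = min(depth, int(type1), int(type2))
    return depth

rng = np.random.default_rng(7)
x = rng.uniform(0, 3, 50)
y = 1.0 + 2.0 * x + 0.1 * rng.standard_normal(50)

d_good = regression_depth(1.0, 2.0, x, y)    # near the true line
d_bad = regression_depth(10.0, -3.0, x, y)   # a clearly wrong line
print(f"depth of good fit: {d_good}, depth of bad fit: {d_bad}")
```

The deepest-line estimator maximizes this depth over all candidate lines; the article's extension redefines the residual-sign counts so that censored and truncated observations contribute appropriately.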
10.
This paper considers a class of summary measures of the dependence between a pair of failure time variables over a finite follow-up region. The class consists of measures that are weighted averages of local dependence measures, and it includes the cross-ratio measure and the finite-region version of Kendall's τ recently proposed by the authors. Two new special cases are identified that avoid the need to estimate the bivariate survivor function and that admit explicit variance estimators. Nonparametric estimators of such dependence measures are proposed and shown to be consistent and asymptotically normal, with variances that can be consistently estimated. Properties of selected estimators are evaluated in a simulation study, and the method is illustrated through an analysis of Australian Twin Study data.
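As a point of orientation, the finite-region idea is simple in the uncensored case: restrict the concordance/discordance comparison behind Kendall's τ to pairs whose failure times both fall in a finite follow-up region. The sketch below does exactly that on simulated frailty-dependent data; handling censoring, which is the substance of the paper, would require the bivariate survivor function and is omitted.

```python
# Illustrative sketch (uncensored case only): Kendall's tau restricted to
# a finite follow-up region [0, tau1] x [0, tau2]. A shared frailty z
# induces positive dependence between the two failure times.
import numpy as np

rng = np.random.default_rng(6)
n = 300
z = rng.standard_normal(n)                 # shared frailty
t1 = rng.exponential(np.exp(0.5 * z))
t2 = rng.exponential(np.exp(0.5 * z))

tau1 = tau2 = 2.0                          # finite follow-up region
keep = (t1 <= tau1) & (t2 <= tau2)
a, b = t1[keep], t2[keep]

conc = disc = 0
for i in range(len(a)):
    for j in range(i + 1, len(a)):
        s = np.sign((a[i] - a[j]) * (b[i] - b[j]))
        conc += int(s > 0)
        disc += int(s < 0)
tau_region = (conc - disc) / (conc + disc)
print(f"finite-region Kendall tau estimate: {tau_region:.2f}")
```

The weighted-average formulation in the paper generalizes this by reweighting local dependence contributions across the region.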