Similar Documents
20 similar documents found.
1.
Planning a study under the General Linear Univariate Model often involves a sample size calculation based on a variance estimated in an earlier study. The noncentrality, power, and sample size then inherit the randomness of that estimate. Additional complexity arises if the estimate has been censored: left censoring occurs when only significant tests lead to a power calculation, while right censoring occurs when only non-significant tests lead to one. We provide simple expressions for straightforward computation of the distribution function, moments, and quantiles of the censored variance estimate, the estimated noncentrality, power, and sample size. We also provide convenient approximations and evaluate their accuracy. The results demonstrate that ignoring right censoring falsely widens confidence intervals for noncentrality and power, while ignoring left censoring falsely narrows them. The new results make it possible to assess and avoid the potentially substantial bias that censoring may create.
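The core point is easy to reproduce: power computed from a plugged-in variance estimate is itself a random variable. Below is a minimal sketch for a two-group F test; the pilot degrees of freedom, effect size, and group size are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sigma2, nu0 = 4.0, 20      # true error variance; df of the pilot-study estimate
delta, n = 1.0, 40         # mean difference to detect; planned group size
alpha = 0.05
df1, df2 = 1, 2 * n - 2
fcrit = stats.f.ppf(1 - alpha, df1, df2)

def power(s2):
    # noncentrality of the two-group F test with n subjects per group
    ncp = n * delta**2 / (2 * s2)
    return 1 - stats.ncf.cdf(fcrit, df1, df2, ncp)

# the pilot estimate is distributed as sigma2 * chi2(nu0) / nu0, so any power
# computed by plugging it in inherits that randomness
s2_hat = sigma2 * rng.chisquare(nu0, size=10_000) / nu0
est_power = power(s2_hat)
lo, hi = np.percentile(est_power, [5, 95])
print(f"true power {power(sigma2):.3f}; 5th-95th pct of estimated power "
      f"[{lo:.3f}, {hi:.3f}]")
```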

2.
In this article, optimal progressive censoring schemes are examined for nonparametric confidence intervals of population quantiles. The results obtained apply universally to any continuous probability distribution. Using the interval mass as the optimality criterion, the optimization is free of the actual observed sample values and needs only the initial sample size n and the number of complete failures m. For several sample sizes combined with various degrees of censoring, optimization results are presented for the population median at selected confidence levels (99%, 95%, and 90%). Under this optimality criterion, the efficiencies of the worst progressive Type-II censoring scheme and of the ordinary Type-II censoring scheme are also compared with the best censoring scheme obtained for fixed n and m.
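The distribution-free ingredient this optimization rests on can be seen in the ordinary (uncensored) case: the coverage of an order-statistic interval for a quantile depends only on n and the chosen ranks, through the binomial distribution, not on the parent distribution. A minimal sketch with plain order statistics (the paper works with progressively censored ones):

```python
from scipy import stats

def coverage(n, i, j, p=0.5):
    """P(X_(i) < Q_p < X_(j)) = sum_{k=i}^{j-1} C(n,k) p^k (1-p)^(n-k)."""
    return stats.binom.cdf(j - 1, n, p) - stats.binom.cdf(i - 1, n, p)

# for n = 20 the interval (X_(6), X_(15)) covers the median with prob ~0.9586,
# whatever the (continuous) parent distribution is
print(f"coverage = {coverage(20, 6, 15):.4f}")
```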

3.
Epstein [Truncated life tests in the exponential case, Ann. Math. Statist. 25 (1954), pp. 555–564] introduced a hybrid censoring scheme (called Type-I hybrid censoring) and Chen and Bhattacharyya [Exact confidence bounds for an exponential parameter under hybrid censoring, Comm. Statist. Theory Methods 17 (1988), pp. 1857–1870] derived the exact distribution of the maximum-likelihood estimator (MLE) of the mean of a scaled exponential distribution based on a Type-I hybrid censored sample. Childs et al. [Exact likelihood inference based on Type-I and Type-II hybrid censored samples from the exponential distribution, Ann. Inst. Statist. Math. 55 (2003), pp. 319–330] provided an alternate simpler expression for this distribution, and also developed analogous results for another hybrid censoring scheme (called Type-II hybrid censoring). The purpose of this paper is to derive the exact bivariate distribution of the MLE of the parameter vector of a two-parameter exponential model based on hybrid censored samples. The marginal distributions are derived and exact confidence bounds for the parameters are obtained. The results are also used to derive the exact distribution of the MLE of the pth quantile, as well as the corresponding confidence bounds. These exact confidence intervals are then compared with parametric bootstrap confidence intervals in terms of coverage probabilities. Finally, we present some numerical examples to illustrate the methods of inference developed here.
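For intuition, the estimator whose exact distribution these papers study has a simple closed form: under Type-I hybrid censoring the experiment stops at min(X_(r), T), and the MLE of the exponential mean is the total time on test divided by the number of observed failures. A minimal simulation sketch (the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def hybrid_mle(theta, n, r, T):
    x = np.sort(rng.exponential(theta, n))
    t_stop = min(x[r - 1], T)             # Type-I hybrid stopping time
    d = int(np.sum(x <= t_stop))          # number of observed failures
    if d == 0:
        return np.nan                     # the MLE does not exist without failures
    ttt = x[:d].sum() + (n - d) * t_stop  # total time on test
    return ttt / d

est = [hybrid_mle(theta=10.0, n=25, r=15, T=12.0) for _ in range(5000)]
print(f"mean of the MLE over simulations: {np.nanmean(est):.3f} (true theta = 10)")
```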

4.
Importance resampling uses exponential tilting to reduce the amount of resampling needed to construct nonparametric bootstrap confidence intervals. The properties of bootstrap importance confidence intervals are well established when the data are a smooth function of means and there is no censoring. In the framework of survival or time-to-event data, however, the asymptotic properties of importance resampling have not been rigorously studied, mainly because of the complicated theory that censoring introduces. This paper uses extensive simulation to show that, for parameter estimates arising from fitting Cox proportional hazards models, importance bootstrap confidence intervals can be constructed if the importance resampling probabilities of the records for the n individuals in the study are determined by the empirical influence function for the parameter of interest. Our results show that, compared to uniform resampling, importance resampling improves relative mean-squared-error (MSE) efficiency by a factor of nine (for n = 200). The efficiency increases significantly with sample size, is mildly associated with the amount of censoring, but decreases slightly as the number of bootstrap resamples increases. The extra CPU time required to calculate importance resamples is negligible compared to the large improvement in MSE efficiency. The method is illustrated through an application to data on chronic lymphocytic leukemia, which highlights that the bootstrap confidence interval is the preferred alternative to large-sample inference when the distribution of a specific covariate deviates from normality. Our results imply that, because of its computational efficiency, importance resampling is recommended whenever bootstrap methodology is implemented in a survival framework. Its use is particularly important when complex covariates are involved or when the survival problem is part of a larger problem, for instance when determining confidence bounds for models linking survival time with clusters identified in gene expression microarray data.
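As a rough illustration of the mechanism (not the paper's exact construction), exponential tilting assigns each record a resampling probability that grows with its influence on the estimate, and each bootstrap replicate then carries an importance weight correcting for the non-uniform resampling. The tilt constant `lam` and the use of simple influence values are assumptions made for this sketch:

```python
import numpy as np

def tilted_probs(influence, lam=1.0):
    """Exponentially tilted resampling probabilities from influence values."""
    w = np.exp(lam * influence / influence.std())
    return w / w.sum()

def importance_bootstrap(data, stat, influence, B=1000, lam=1.0, rng=None):
    rng = rng or np.random.default_rng()
    n = len(data)
    p = tilted_probs(influence, lam)
    reps, weights = [], []
    for _ in range(B):
        idx = rng.choice(n, size=n, p=p)
        reps.append(stat(data[idx]))
        # importance weight: likelihood ratio of uniform vs tilted resampling
        counts = np.bincount(idx, minlength=n)
        weights.append(np.prod((1.0 / (n * p)) ** counts))
    return np.array(reps), np.array(weights)

# toy usage with the mean, whose influence values are simply x_i - x_bar
data = np.random.default_rng(2).normal(size=50)
reps, w = importance_bootstrap(data, np.mean, data - data.mean(), B=200)
```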

5.
6.
In environmental science and bioassay data, left censoring due to nondetects is a common problem; similarly, right censoring frequently occurs in reliability and life data analysis. There is a need for goodness-of-fit tests that can adapt to left- or right-censored data and check important distributional assumptions without being too difficult to implement routinely in practice. A new test statistic is derived from a plot of the standardized spacings between the order statistics versus their ranks. Any linear or curvilinear pattern is evidence against the null distribution. When testing the Weibull or extreme-value null hypothesis, this statistic has a null distribution that is approximately F for most combinations of sample size and censoring of practical interest. Our statistic is compared to the Mann-Scheuer-Fertig statistic, which also uses the standardized spacings between the order statistics. The results of a simulation study show the two tests are competitive in terms of power. Although the Mann-Scheuer-Fertig statistic is somewhat easier to compute, our test enjoys advantages in the accuracy of the F approximation and the availability of a graphical diagnostic.

7.
Heckman’s two-step procedure (Heckit) for estimating the parameters in linear models from censored data is frequently used by econometricians, even though earlier studies have cast doubt on the procedure. In this paper it is shown that estimates of the hazard h of approaching the censoring limit, which is used as an explanatory variable in the second step of the Heckit, can induce multicollinearity. The influence of the censoring proportion and sample size on bias and variance in three types of random linear models is studied by simulation. From these results a simple relation is established that describes how absolute bias depends on the censoring proportion and the sample size. It is also shown that the Heckit may work with non-normal (Laplace) distributions, but that it collapses if h deviates too much from the hazard of the normal distribution. Data from a study of work resumption after sick-listing are used to demonstrate that the Heckit can be very risky.
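For readers unfamiliar with the procedure, a minimal self-contained sketch of the Heckit follows: a probit first stage for the selection (censoring) mechanism, then OLS with the inverse Mills ratio, the hazard h mentioned above, as an added regressor. All data-generating values are illustrative.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=n)                       # selection covariate
x = rng.normal(size=n)                       # outcome covariate
u, e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], n).T
select = (0.5 + z + u) > 0                   # selection (censoring) rule
y = 1.0 + 2.0 * x + e                        # outcome, observed only if selected

# step 1: probit of selection on z (crude MLE), then the inverse Mills ratio,
# i.e. the hazard h of approaching the censoring limit
negll = lambda b: -np.sum(np.where(select,
                                   norm.logcdf(b[0] + b[1] * z),
                                   norm.logcdf(-(b[0] + b[1] * z))))
g = minimize(negll, x0=[0.0, 1.0]).x
idx = g[0] + g[1] * z[select]
imr = norm.pdf(idx) / norm.cdf(idx)

# step 2: OLS of the observed outcomes on x and the inverse Mills ratio
X = np.column_stack([np.ones(imr.size), x[select], imr])
beta, *_ = np.linalg.lstsq(X, y[select], rcond=None)
print(f"intercept {beta[0]:.2f}, slope {beta[1]:.2f} (truth: 1, 2)")
```

If the selection index is nearly linear in the outcome covariates, the inverse Mills ratio column is close to collinear with them, which is the multicollinearity mechanism the paper studies.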

8.
Heterogeneity of the variances of treatment groups influences the validity and power of significance tests of location in two distinct ways. First, if sample sizes are unequal, the Type I error rate and power are depressed when a larger variance is associated with a larger sample size, and elevated when a larger variance is associated with a smaller sample size. This well-established effect, which occurs in t and F tests, and to a lesser degree in nonparametric rank tests, results from unequal contributions of the pooled estimates of error variance in the computation of test statistics. It is observed in samples from normal distributions as well as from non-normal distributions of various shapes. Second, transformation of scores from skewed distributions with unequal variances to ranks produces differences in the means of the ranks assigned to the respective groups, even if the means of the initial groups are equal, with a consequent inflation of Type I error rates and power. This effect occurs for all sample sizes, equal and unequal. For the t test the discrepancy diminishes, and for the Wilcoxon–Mann–Whitney test it becomes larger, as sample size increases. The Welch separate-variance t test overcomes the first effect but not the second. Because these two effects interact, the validity and power of both parametric and nonparametric tests performed on samples of any size from unknown distributions with possibly unequal variances can be distorted in unpredictable ways.
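The first effect is easy to reproduce by simulation. The sketch below (sizes and variances are illustrative) shows the pooled-variance t test rejecting too rarely when the larger variance sits in the larger sample, and too often when the pairing is reversed:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def type1_rate(n1, n2, sd1, sd2, reps=20_000, alpha=0.05):
    rej = 0
    for _ in range(reps):
        a = rng.normal(0.0, sd1, n1)   # both groups have equal means,
        b = rng.normal(0.0, sd2, n2)   # so every rejection is a Type I error
        rej += stats.ttest_ind(a, b, equal_var=True).pvalue < alpha
    return rej / reps

print("larger variance in larger sample: ", type1_rate(40, 10, 3.0, 1.0))  # < 0.05
print("larger variance in smaller sample:", type1_rate(40, 10, 1.0, 3.0))  # > 0.05
```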

9.
We consider two-stage adaptive designs for clinical trials where data from the two stages are dependent. This occurs when additional data are obtained from patients during their second stage follow-up. While the proposed flexible approach allows modifications of trial design, sample size, or statistical analysis using the first stage data, there is no need for a complete prespecification of the adaptation rule. Methods are provided for an adaptive closed testing procedure, for calculating overall adjusted p-values, and for obtaining unbiased estimators and confidence bounds for parameters that are invariant to modifications. A motivating example is used to illustrate these methods.

10.
In this paper, based on a Type-I progressive hybrid censored sample, we obtain the maximum likelihood estimator of the unknown parameter when the parent distribution belongs to the proportional hazard rate family. We derive the conditional probability density function of the maximum likelihood estimator using the moment-generating-function technique. The exact confidence interval is obtained and assessed through a Monte Carlo simulation study for the Burr Type XII distribution. Finally, we obtain the Bayes and posterior regret gamma minimax estimates of the parameter under a precautionary loss function with precautionary index k = 2 and compare their behavior via a Monte Carlo simulation study.

11.
An important question that arises in clinical trials is how many additional observations, if any, are required beyond those originally planned. This has been answered satisfactorily for two-treatment double-blind clinical experiments. However, one may wish to compare a new treatment with its competitors, of which there may be more than one. This problem is addressed here for responses from arbitrary distributions in which the mean and the variance are not functionally related. First, a solution determining the initial sample size for a specified significance level and power at a specified alternative is obtained. It is then shown that when the initial sample size is large, the nominal significance level and the power at the pre-specified alternative are fairly robust under the proposed sample size re-estimation procedure. The results are applied to the blood coagulation functionality problem considered by Kropf et al. [Multiple comparisons of treatments with stable multivariate tests in a two-stage adaptive design, including a test for non-inferiority, Biom. J. 42(8) (2000), pp. 951–965].

12.
Studies of diagnostic tests are often designed with the goal of estimating the area under the receiver operating characteristic curve (AUC) because the AUC is a natural summary of a test's overall diagnostic ability. However, sample size projections dealing with AUCs are very sensitive to assumptions about the variance of the empirical AUC estimator, which depends on two correlation parameters. While these correlation parameters can be estimated from the available data, in practice it is hard to find reliable estimates before the study is conducted. Here we derive achievable bounds on the projected sample size that are free of these two correlation parameters. The lower bound is the smallest sample size that would yield the desired level of precision for some model, while the upper bound is the smallest sample size that would yield the desired level of precision for all models. These bounds are important reference points when designing a single or multi-arm study; they are the absolute minimum and maximum sample size that would ever be required. When the study design includes multiple readers or interpreters of the test, we derive bounds pertaining to the average reader AUC and the ‘pooled’ or overall AUC for the population of readers. These upper bounds for multireader studies are not too conservative when several readers are involved.

13.
In this paper, progressive-stress accelerated life tests are applied when the lifetime of a product under design stress follows the exponentiated distribution [G(x)]^α. The baseline distribution G(x) belongs to a general class that includes, among others, the Weibull, compound Weibull, power function, Pareto, Gompertz, compound Gompertz, normal and logistic distributions. The scale parameter of G(x) satisfies the inverse power law, and the cumulative exposure model holds for the effect of changing stress. The special case of an exponentiated exponential distribution is discussed. Using Type-II progressive hybrid censoring and an MCMC algorithm, Bayes estimates of the unknown parameters based on symmetric and asymmetric loss functions are obtained and compared with the maximum likelihood estimates. Normal-approximation and bootstrap confidence intervals for the unknown parameters are obtained and compared via a simulation study.

14.

Recently, exact confidence bounds and exact likelihood inference have been developed based on hybrid censored samples by Chen and Bhattacharyya [Chen, S. and Bhattacharyya, G.K. (1988). Exact confidence bounds for an exponential parameter under hybrid censoring. Communications in Statistics – Theory and Methods, 17, 1857–1870.], Childs et al. [Childs, A., Chandrasekar, B., Balakrishnan, N. and Kundu, D. (2003). Exact likelihood inference based on Type-I and Type-II hybrid censored samples from the exponential distribution. Annals of the Institute of Statistical Mathematics, 55, 319–330.], and Chandrasekar et al. [Chandrasekar, B., Childs, A. and Balakrishnan, N. (2004). Exact likelihood inference for the exponential distribution under generalized Type-I and Type-II hybrid censoring. Naval Research Logistics, 51, 994–1004.] for the case of the exponential distribution. In this article, we propose a unified hybrid censoring scheme (HCS) which includes many cases considered earlier as special cases. We then derive the exact distribution of the maximum likelihood estimator as well as exact confidence intervals for the mean of the exponential distribution under this general unified HCS. Finally, we present some examples to illustrate all the methods of inference developed here.

15.
Confidence intervals for the pth quantile Q of a two-parameter exponential distribution provide useful information on the plausible range of Q, yet only inefficient equal-tail confidence intervals have been discussed in the statistical literature so far. In this article, the construction of the shortest possible confidence interval within a family of two-sided confidence intervals is addressed. This shortest confidence interval is always shorter, and can be substantially shorter, than the corresponding equal-tail confidence interval. Furthermore, the computational intensity of the two methodologies is similar, so it is advantageous to use the shortest confidence interval. It is shown how the results in this paper apply to data obtained from progressive Type-II censoring, with standard Type-II censoring as a special case. Applications of more complex confidence interval constructions through acceptance set inversions, which can exploit prior information, are also discussed.
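The shortest-versus-equal-tail comparison can be illustrated in the simpler one-parameter case of a complete exponential sample, where 2S/θ follows a chi-square distribution with 2n degrees of freedom, the interval (2S/b, 2S/a) has length proportional to 1/a − 1/b, and minimizing under the coverage constraint gives the stationarity condition a²f(a) = b²f(b). This is an analog under simplifying assumptions, not the paper's two-parameter quantile construction:

```python
from scipy import stats
from scipy.optimize import brentq

def length_ratio(n, alpha=0.05):
    chi2 = stats.chi2(2 * n)
    # for a lower point a, the upper point b that keeps exact coverage
    b_of_a = lambda a: chi2.ppf(chi2.cdf(a) + 1 - alpha)
    # stationarity of 1/a - 1/b under the coverage constraint: a^2 f(a) = b^2 f(b)
    g = lambda a: a**2 * chi2.pdf(a) - b_of_a(a)**2 * chi2.pdf(b_of_a(a))
    a = brentq(g, 1e-3, chi2.ppf(alpha) - 1e-6)
    b = b_of_a(a)
    length = lambda lo, hi: 1 / lo - 1 / hi          # up to the factor 2S
    return length(a, b) / length(chi2.ppf(alpha / 2), chi2.ppf(1 - alpha / 2))

print(f"shortest / equal-tail length ratio at n = 10: {length_ratio(10):.3f}")
```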

16.
Traditionally, in a clinical development plan, phase II trials are relatively small and can be expected to yield considerable uncertainty in the estimates on which phase III trials are planned. Phase II trials also serve to explore appropriate primary efficacy endpoint(s) or patient populations. When the biology of the disease and the pathogenesis of disease progression are well understood, the phase II and phase III studies may be performed in the same patient population with the same primary endpoint, e.g. efficacy measured by HbA1c in non-insulin-dependent diabetes mellitus trials with a treatment duration of at least three months. In disease areas where molecular pathways are not well established, or where the clinical outcome endpoint cannot be observed in a short-term study (e.g. mortality in cancer or AIDS trials), the treatment effect may be postulated through an intermediate surrogate endpoint in phase II trials. In many cases, however, the appropriate clinical endpoint is explored in the phase II trials themselves. An important question is how much of the effect observed on the surrogate endpoint in the phase II study can be translated into the clinical effect in the phase III trial. Another question is how much uncertainty remains in the phase III trials. In this work, we study the utility of adaptation by design (not by statistical test), in the sense of adapting the phase II information for planning the phase III trials. That is, we investigate the impact of using various phase II effect size estimates on the sample size planning for phase III trials. In general, if the point estimate from the phase II trial is used for planning, it is advisable to size the phase III trial with a smaller alpha level or a higher power level. Adaptation via the lower limit of the one-standard-deviation confidence interval from the phase II trial appears to be a reasonable choice, since it balances the empirical power of the launched trials against the proportion of trials not launched, provided a threshold lower than the true phase III effect size can be chosen for deciding whether the phase III trial is launched.
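The planning rule discussed at the end is straightforward to express: size the phase III trial with the phase II point estimate discounted by one standard error. A minimal sketch under a normal approximation; all numbers are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

def n_per_arm(delta, sd, alpha=0.025, power=0.90):
    """Per-arm sample size for a two-arm comparison, normal approximation."""
    return 2 * ((norm.ppf(1 - alpha) + norm.ppf(power)) * sd / delta) ** 2

d_hat, sd, n2 = 0.40, 1.0, 50          # phase II estimate, SD, per-arm size
se = sd * np.sqrt(2 / n2)              # standard error of the phase II estimate
print("phase III n per arm, point estimate:", round(n_per_arm(d_hat, sd)))
print("phase III n per arm, d_hat - 1*SE  :", round(n_per_arm(d_hat - se, sd)))
```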

17.
The study of differences among groups is an important statistical topic in many applied fields. It is very common in this context for data to be subject to mechanisms of information loss, such as censoring and truncation. In the setting of a two-sample problem with data subject to left truncation and right censoring, we develop an empirical likelihood method for inference on the relative distribution. We obtain a nonparametric generalization of Wilks' theorem and construct nonparametric pointwise confidence intervals for the relative distribution. Finally, we analyse the coverage probability and length of these confidence intervals through a simulation study and illustrate their use with a real data set on gastric cancer.

18.
In this article, we deal with a two-parameter exponentiated half-logistic distribution. We consider the estimation of the unknown parameters, the associated reliability function, and the hazard rate function under progressive Type-II censoring. Maximum likelihood estimates (MLEs) are proposed for the unknown quantities. Bayes estimates are derived with respect to squared error, LINEX, and entropy loss functions. Approximate explicit expressions for all Bayes estimates are obtained using Lindley's method, and an importance sampling scheme is also used to compute them. Markov chain Monte Carlo samples are further used to produce credible intervals for the unknown parameters. Asymptotic confidence intervals are constructed using the normality property of the MLEs. For comparison purposes, bootstrap-p and bootstrap-t confidence intervals are also constructed. A comprehensive numerical study is performed to compare the proposed estimates. Finally, a real-life data set is analysed to illustrate the proposed methods of estimation.

19.
The power of normal-theory tests about means depends on a noncentrality parameter that is a function of the unknown parameter σ. To calculate power, and to solve sample-size problems based on power, differences between hypothesized and alternative values of the means are frequently specified as a multiple of σ, a choice that eliminates σ from the noncentrality parameter and permits a solution. Perhaps a more natural (but equivalent) way to express alternatives is to give one or more means as the quantile of order p (say Q_p) of a distribution with another mean. As we demonstrate, this kind of alternative also eliminates σ from the problem.
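The equivalence is direct: if the alternative mean is the quantile of order p of the null distribution, then μ1 = μ0 + z_p σ, so (μ1 − μ0)/σ = z_p and σ cancels from the noncentrality. A minimal sketch for a one-sample, one-sided t test:

```python
from scipy import stats

def power_one_sample_t(n, p, alpha=0.05):
    zp = stats.norm.ppf(p)                   # standardized effect implied by Q_p
    ncp = zp * n**0.5                        # noncentrality, free of sigma
    tcrit = stats.t.ppf(1 - alpha, n - 1)
    return 1 - stats.nct.cdf(tcrit, n - 1, ncp)

# alternative mean set at the quantile of order 0.8413 of the null
# distribution, i.e. z_p ~ 1, one standardized unit above the null mean
print(f"power at n = 10: {power_one_sample_t(10, 0.8413):.3f}")
```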

20.
In terms of the risk of making a Type I error in evaluating a null hypothesis of equality, requiring two independent confirmatory trials with two-sided p-values less than 0.05 is equivalent to requiring one confirmatory trial with a two-sided p-value less than 0.00125. Furthermore, the use of a single confirmatory trial is gaining acceptability, with discussion in both ICH E9 and a CPMP Points to Consider document. Given the growing acceptance of this approach, this note provides a formula for the sample size savings obtained with the single-trial approach, depending on the chosen levels of Type I and Type II error. For two replicate trials each powered at 90%, which corresponds to a single larger trial powered at 81%, an approximately 19% reduction in total sample size is achieved with the single-trial approach. Alternatively, a single trial with the same sample size as the total from two smaller trials will have much greater power. For example, where two trials are each powered at 90% at two-sided α = 0.05, yielding an overall power of 81%, a single trial using two-sided α = 0.00125 would have 91% power.
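These figures follow from the normal-approximation sample size formula n ∝ (z_{α/2} + z_β)², and the arithmetic can be checked in a few lines:

```python
from scipy.stats import norm

z = lambda q: norm.ppf(1 - q)
unit_n = lambda alpha, power: (z(alpha / 2) + norm.ppf(power)) ** 2  # two-sided

two_trials = 2 * unit_n(0.05, 0.90)       # overall power 0.9^2 = 81%
one_trial = unit_n(0.00125, 0.81)         # same Type I risk, same power
print(f"sample size saving: {1 - one_trial / two_trials:.1%}")   # ~ 19-20%

# power of a single trial that uses the TOTAL sample size of the two trials
z_beta = two_trials**0.5 - z(0.00125 / 2)
print(f"power with the pooled sample size: {norm.cdf(z_beta):.1%}")  # ~ 91%
```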
