Similar Articles
20 similar articles were retrieved.
1.
In clinical trials with survival data, investigators may wish to re-estimate the sample size based on the observed effect size while the trial is ongoing. Besides the inflation of the type-I error rate due to sample size re-estimation, the method for calculating the sample size at an interim analysis should be chosen carefully, because in trials with survival data the data from the stages are mutually dependent. Although the interim hazard estimate is commonly used to re-estimate the sample size, that estimate can by chance be considerably higher or lower than the hypothesized hazard. We propose an interim hazard ratio estimate that can be used to re-estimate the sample size under those circumstances. The proposed method is demonstrated through a simulation study and illustrated with an actual clinical trial. The effect of the shape parameter of the Weibull survival distribution on the sample size re-estimation is also presented.
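A minimal sketch of how an interim hazard-ratio estimate can feed a sample size re-estimation, assuming the standard Schoenfeld events formula, 1:1 allocation and a one-sided log-rank test; the interim estimate, event probability and all numbers are illustrative rather than the authors' exact procedure.

```python
# Sketch: re-estimating the required number of events (and subjects) from an
# interim hazard-ratio estimate via Schoenfeld's formula for a 1:1 trial.
# `hr_interim` and `event_prob` are illustrative inputs, not the paper's data.
import numpy as np
from scipy.stats import norm

def required_events(hr, alpha=0.025, power=0.80):
    """Events needed to detect hazard ratio `hr` with a one-sided level-`alpha`
    log-rank test and 1:1 allocation (Schoenfeld's approximation)."""
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    return 4 * (z_a + z_b) ** 2 / np.log(hr) ** 2

def reestimated_sample_size(hr_interim, event_prob):
    """Convert re-estimated events into subjects, given the probability that a
    randomized subject has an event by the end of the study."""
    return int(np.ceil(required_events(hr_interim) / event_prob))

print(required_events(0.70))                          # events under the design assumption
print(reestimated_sample_size(0.75, event_prob=0.6))  # subjects after the interim look
```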

2.
Planning and conducting an interim analysis are important steps in long-term clinical trials. In this article, the concept of conditional power is combined with the classic analysis of variance (ANOVA) to study two-stage sample size re-estimation based on an interim analysis. The overall Type I and Type II error rates can be inflated by the interim analysis. We compare the effects of re-estimating the sample size with and without adjusting the Type I and Type II error rates for the interim analysis.
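A sketch of the conditional power calculation that typically drives such a two-stage re-estimation, using the standard Brownian-motion formulation for a one-sided z-test rather than the paper's ANOVA setting; all inputs are illustrative.

```python
# Sketch: conditional power at an interim look and a simple re-estimation rule.
# One-sided z-test, Brownian-motion formulation; `z1` is the interim statistic
# at information fraction `t`.  If the sample size is increased and the final
# test is carried out naively, the Type I error is inflated -- an adjusted
# critical value or a combination test is needed, as the article discusses.
import numpy as np
from scipy.stats import norm

def conditional_power(z1, t, alpha=0.025, drift=None):
    """Conditional power given interim statistic `z1` at information fraction `t`.
    With drift=None the current interim trend (drift = z1/sqrt(t)) is assumed."""
    if drift is None:
        drift = z1 / np.sqrt(t)
    z_a = norm.ppf(1 - alpha)
    return 1 - norm.cdf((z_a - z1 * np.sqrt(t) - drift * (1 - t)) / np.sqrt(1 - t))

def reestimate_n(n_planned, z1, t, target=0.80, cap=2.0, alpha=0.025):
    """Smallest total sample size (capped at cap*n_planned) whose conditional
    power under the current trend reaches `target`."""
    for n_new in range(n_planned, int(cap * n_planned) + 1):
        t_new = t * n_planned / n_new     # interim information fraction under the new total
        if conditional_power(z1, t_new, alpha) >= target:
            return n_new
    return int(cap * n_planned)

print(conditional_power(z1=1.5, t=0.5))            # under-powered interim trend
print(reestimate_n(n_planned=200, z1=1.5, t=0.5))  # re-estimated total sample size
```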

3.
In group sequential clinical trials, several sample size re-estimation methods that allow the sample size to be changed at the interim analysis have been proposed in the literature. Most of these methods are based on either the conditional error function or the interim effect size. Our simulation studies compared the operating characteristics of three commonly used sample size re-estimation methods: Chen et al. (2004), Cui et al. (1999), and Muller and Schafer (2001). Gao et al. (2008) extended the method of Chen et al. (CDL) and provided an analytical expression for the lower and upper thresholds of conditional power within which the type I error is preserved. More recently, Mehta and Pocock (2010) argued extensively that the real benefit of the adaptive approach is to invest the sample size resources in stages, increasing the sample size only if the interim results fall in the so-called "promising zone" defined in their article. We incorporated this concept in our simulations while comparing the three methods. To test the robustness of these methods, we also explored the impact of an incorrect variance assumption on the operating characteristics. We found that the operating characteristics of the three methods are very comparable. In addition, the promising-zone concept of Mehta and Pocock gives the desired power with a smaller average sample size, and thus increases the efficiency of the trial design.
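A sketch of a promising-zone rule in the spirit of Mehta and Pocock, where the interim result is classified by its conditional power and the sample size is increased only in the promising zone; the zone boundaries, cap and all numbers are illustrative choices, not those used in the simulations above.

```python
# Sketch: a "promising zone" rule in the spirit of Mehta and Pocock (2010).
# The zone boundaries (cp_min, target) and the cap on the sample-size increase
# are illustrative choices, not the values used in the compared methods.
import numpy as np
from scipy.stats import norm

def conditional_power(z1, t, alpha=0.025):
    """Conditional power under the current interim trend (drift = z1/sqrt(t))."""
    z_a = norm.ppf(1 - alpha)
    return 1 - norm.cdf((z_a - z1 / np.sqrt(t)) / np.sqrt(1 - t))

def promising_zone_decision(z1, t, n_planned, cp_min=0.36, target=0.80, n_max=2.0):
    """Classify the interim result and return (zone, new total sample size).
    The sample size is increased only in the promising zone."""
    cp = conditional_power(z1, t)
    if cp < cp_min:
        return "unfavourable", n_planned          # do not invest more resources
    if cp >= target:
        return "favourable", n_planned            # already adequately powered
    # Promising zone: raise n (up to the cap) until conditional power hits the target.
    for n_new in range(n_planned, int(n_max * n_planned) + 1):
        if conditional_power(z1, t * n_planned / n_new) >= target:
            return "promising", n_new
    return "promising", int(n_max * n_planned)

print(promising_zone_decision(z1=1.5, t=0.5, n_planned=200))
```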

4.
In vitro permeation tests (IVPT) offer accurate and cost-effective development pathways for locally acting drugs, such as topical dermatological products. For the assessment of bioequivalence, the FDA draft guidance on generic acyclovir 5% cream introduces a new experimental design, the single-dose, multiple-replicate per treatment group design, as the IVPT pivotal study design. We examine the statistical properties of its hypothesis testing method, mixed scaled average bioequivalence (MSABE). Meanwhile, adaptive design features in clinical trials can help researchers reach a decision earlier with fewer subjects or boost power, saving resources while controlling the impact on the family-wise error rate. We therefore incorporate MSABE in an adaptive design combining a group sequential design with sample size re-estimation. Simulation studies are conducted to study the passing rates of the proposed methods, both within and outside the average bioequivalence limits. We further consider modifications of the adaptive designs applied to IVPT BE trials, such as Bonferroni's adjustment and a conditional power function. Finally, a case study with real data demonstrates the advantages of such adaptive methods.

5.
Adaptation of clinical trial design generates many issues that have not been resolved for practical applications, even though statistical methodology has advanced greatly. This paper focuses on some of these methodological issues. In one type of adaptation, such as sample size re-estimation, only the postulated value of a parameter used for planning the trial size may be altered. In another type, the originally intended hypothesis for testing may be modified using the internal data accumulated at an interim time of the trial, for example by changing the primary endpoint or dropping a treatment arm. For sample size re-estimation, we contrast an adaptive test that weights the two-stage test statistics with the statistical information given by the original design against the original sample mean test with a properly corrected critical value. We point out the difficulty of planning a confirmatory trial based on the crude information generated by exploratory trials. Regarding selection of a primary endpoint, we argue that a selection process that allows switching from one endpoint to the other using the internal data of the trial is unlikely to gain a power advantage over the simple process of selecting one of the two endpoints by testing both with an equal split of alpha (Bonferroni adjustment), as sketched below. For dropping a treatment arm, redistributing the remaining sample size of the discontinued arm to the other treatment arms can substantially improve the statistical power for identifying a superior treatment arm. A common and difficult methodological issue is how to select an adaptation rule at the trial planning stage. Pre-specification of the adaptation rule is important for practicality. Changing the originally intended hypothesis for testing with the internal data raises serious concerns among clinical trial researchers.
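A small Monte Carlo sketch of the simple equal-split strategy referred to above: each of two correlated endpoints is tested at alpha/2 and the trial succeeds if either rejects. The effect sizes, correlation and sample size are illustrative, and the adaptive endpoint-switching strategy it is contrasted with is not simulated here.

```python
# Sketch: power of the simple strategy discussed above -- test each of two
# (correlated) endpoints at alpha/2 and claim success if either rejects.
# Effect sizes, correlation and sample size are illustrative only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, alpha, rho = 100, 0.025, 0.6          # per-arm n, one-sided alpha, endpoint correlation
delta = np.array([0.30, 0.25])           # standardized effects of the two endpoints

def simulate_power(n_sims=10000):
    crit = norm.ppf(1 - alpha / 2)        # Bonferroni: each endpoint at alpha/2
    cov = np.array([[1.0, rho], [rho, 1.0]])
    wins = 0
    for _ in range(n_sims):
        x_trt = rng.multivariate_normal(delta, cov, size=n)
        x_ctl = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        diff = x_trt.mean(axis=0) - x_ctl.mean(axis=0)
        z = diff / np.sqrt(2 / n)          # known unit variances for simplicity
        wins += np.any(z > crit)
    return wins / n_sims

print(simulate_power())   # power of the equal alpha split across the two endpoints
```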

6.
In studies with recurrent event endpoints, misspecified assumptions about event rates or dispersion can lead to underpowered trials or overexposure of patients. Specifying the overdispersion is often a particular problem, as it is usually not reported in clinical trial publications. Changes in event rates over the years have been described for some diseases, adding to the uncertainty in planning. To mitigate the risks of inadequate sample sizes, internal pilot study designs have been proposed, with a preference for blinded sample size reestimation procedures, as they generally do not affect the type I error rate and maintain trial integrity. Blinded sample size reestimation procedures are available for trials with recurrent event endpoints. However, the variance in the reestimated sample size can be considerable, particularly with early sample size reviews. Motivated by a randomized controlled trial in paediatric multiple sclerosis, a rare neurological condition in children, we apply the concept of blinded continuous monitoring of information, which is known to reduce the variance in the resulting sample size. Assuming negative binomial distributions for the counts of recurrent relapses, we derive information criteria and propose blinded continuous monitoring procedures. Their operating characteristics are assessed in Monte Carlo trial simulations, demonstrating favourable properties with regard to type I error rate, power, and stopping time, i.e., sample size.
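A rough sketch of the idea of blinded information monitoring for a negative-binomial endpoint: pooled (blinded) data give an event rate and an overdispersion estimate, from which an approximate information level for the log rate ratio is computed, and recruitment stops once the information target is reached. The per-subject information approximation mu/(1 + phi*mu) is the usual negative-binomial one (Var = mu + phi*mu^2); the criteria, rates and dispersion below are illustrative, not the ones derived in the paper.

```python
# Sketch: blinded monitoring of information for a negative-binomial endpoint.
# A subject with mean count mu and dispersion phi (Var = mu + phi*mu^2)
# contributes roughly mu / (1 + phi*mu) units of information about its group's
# log rate.  Recruitment stops once the information for the log rate ratio
# reaches the target ((z_a + z_b) / log(RR))^2.  All numbers are illustrative.
import numpy as np
from scipy.stats import norm

def information_target(rate_ratio, alpha=0.025, power=0.90):
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    return ((z_a + z_b) / np.log(rate_ratio)) ** 2

def blinded_information(counts, followup):
    """Approximate information for the log rate ratio from blinded (pooled) data,
    assuming 1:1 randomization; the dispersion is estimated by moments."""
    rate = counts.sum() / followup.sum()
    mu = rate * followup
    phi = max(((counts - mu) ** 2 - mu).sum() / (mu ** 2).sum(), 0.0)
    per_subject = mu / (1 + phi * mu)      # information per subject about the log rate
    return per_subject.sum() / 4           # 1:1 allocation: info for the log rate ratio

rng = np.random.default_rng(7)
target = information_target(rate_ratio=0.7)
phi_true, rate_true = 0.5, 0.8             # simulation truth for the pooled data
n, counts, followup = 0, [], []
while True:
    n += 1
    t = rng.uniform(0.5, 2.0)              # follow-up time accrued so far
    mu = rate_true * t
    r = 1 / phi_true                       # NB size r = 1/phi, p = r/(r + mu)
    counts.append(rng.negative_binomial(r, r / (r + mu)))
    followup.append(t)
    if blinded_information(np.array(counts), np.array(followup)) >= target:
        break
print(n, "subjects recruited when the information target was reached")
```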

7.
This article develops the theory of multistep ahead forecasting for vector time series that exhibit temporal nonstationarity and co-integration. We treat the case of a semi-infinite past by developing the forecast filters and the forecast error filters explicitly. We also provide formulas for forecasting from a finite data sample. This latter application can be accomplished by using large matrices, which remains practicable when the total sample size is moderate. Expressions for the mean square error of forecasts are also derived and can be implemented readily. The flexibility and generality of these formulas are illustrated by four diverse applications: forecasting euro area macroeconomic aggregates; backcasting fertility rates by racial category; forecasting long memory inflation data; and forecasting regional housing starts using a seasonally co-integrated model.
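For orientation, a much simpler illustration of multistep point forecasts and their mean square error: iterating a stationary VAR(1) forward, with MSE(h) accumulated as the sum of A^j Sigma (A^j)'. This is only the textbook stationary case, not the co-integrated, semi-infinite-past filters developed in the article; the coefficient matrix and innovation covariance are illustrative.

```python
# Sketch: multistep forecasting from a VAR(1), y_t = A y_{t-1} + u_t, Var(u_t) = Sigma.
# The h-step point forecast is A^h y_T and its MSE matrix is sum_{j=0}^{h-1} A^j Sigma (A^j)'.
import numpy as np

def var1_forecasts(A, Sigma, y_last, h):
    """Return (point forecasts, MSE matrices) for horizons 1..h of a VAR(1)."""
    k = len(y_last)
    forecasts, mses = [], []
    Aj, mse = np.eye(k), np.zeros((k, k))
    point = y_last.copy()
    for _ in range(h):
        point = A @ point                  # next-horizon point forecast
        mse = mse + Aj @ Sigma @ Aj.T      # accumulate A^j Sigma (A^j)'
        Aj = A @ Aj
        forecasts.append(point.copy())
        mses.append(mse.copy())
    return np.array(forecasts), mses

A = np.array([[0.7, 0.1], [0.0, 0.5]])
Sigma = np.array([[1.0, 0.3], [0.3, 0.5]])
fc, mse = var1_forecasts(A, Sigma, y_last=np.array([1.0, -0.5]), h=4)
print(fc)          # point forecasts for horizons 1..4
print(mse[-1])     # 4-step-ahead forecast MSE matrix
```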

8.
The optimal sample size for comparing two Poisson rates when the counts are underreported is investigated. We consider two sampling scenarios. We first consider the case where only underreported data will be sampled and rely on informative prior distributions to obtain posterior identifiability. We also consider the case where both an expensive infallible search method and a fallible method are available. An interval-based sample size criterion is used in both sampling scenarios. Since the posterior distributions of the two rates are functions of confluent hypergeometric and hypergeometric functions, simulation-based methods are necessary to carry out the sample size determination scheme.

9.
Interest in confirmatory adaptive combined phase II/III studies with treatment selection has increased in the past few years. These studies start by comparing several treatments with a control. One (or more) treatment(s) is then selected after the first stage based on the information available at an interim analysis, including interim data from the ongoing trial, external information and expert knowledge. Recruitment continues, but now only for the selected treatment(s) and the control, possibly in combination with a sample size reassessment. The final analysis of the selected treatment(s) includes the patients from both stages and is performed such that the overall Type I error rate is strictly controlled, thus providing confirmatory evidence of efficacy at the final analysis. In this paper we describe two approaches to controlling the Type I error rate in adaptive designs with sample size reassessment and/or treatment selection. The first method adjusts the critical value using a simulation-based approach that incorporates the number of patients at the interim analysis, the true response rates, the treatment selection rule, and so on. We discuss the underlying assumptions of simulation-based procedures and give several examples where the Type I error rate is not controlled if some of these assumptions are violated. The second method is an adaptive Bonferroni-Holm test procedure based on conditional error rates of the individual treatment-control comparisons. We show that this procedure controls the Type I error rate even if a deviation from the pre-planned adaptation rule, or from the time point of such a decision, is necessary.
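A minimal sketch of the conditional-error idea behind the second approach: the stage-1 conditional error of a one-sided z-test is computed at the interim, and after selection or sample size reassessment the chosen comparison is confirmed if its stage-2 p-value falls below that conditional error. A plain Bonferroni share of alpha per comparison is used here as a crude stand-in for the full Bonferroni-Holm construction in the paper; all numbers are illustrative.

```python
# Sketch of the conditional-error principle.  For a one-sided z-test at level
# alpha1 (here a Bonferroni share of the overall alpha), the conditional error
# at information fraction t given interim statistic z1 is
#   A(z1) = 1 - Phi((z_{1-alpha1} - sqrt(t)*z1) / sqrt(1-t)).
# After treatment selection, the selected comparison is confirmed if its
# stage-2 p-value (from new patients only) falls below A(z1).  This is a
# simplified two-treatment illustration, not the full adaptive Bonferroni-Holm test.
import numpy as np
from scipy.stats import norm

def conditional_error(z1, t, alpha1):
    z_a = norm.ppf(1 - alpha1)
    return 1 - norm.cdf((z_a - np.sqrt(t) * z1) / np.sqrt(1 - t))

alpha, k = 0.025, 2                       # overall one-sided alpha, number of treatments
alpha1 = alpha / k                        # Bonferroni share per treatment-control comparison
z1_arm = {"A": 1.4, "B": 0.6}             # interim z statistics (illustrative)
selected = max(z1_arm, key=z1_arm.get)    # keep the better arm, drop the other
A_sel = conditional_error(z1_arm[selected], t=0.5, alpha1=alpha1)

p2 = 0.018                                # stage-2 p-value for the selected arm
print(selected, round(A_sel, 4), "reject" if p2 < A_sel else "do not reject")
```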

10.
The Monitoring Avian Productivity and Survivorship (MAPS) programme is a cooperative effort to provide annual regional indices of adult population size and post-fledging productivity, and estimates of adult survival rates, from data pooled from a network of constant-effort mist-netting stations across North America. This paper provides an overview of the field and analytical methods currently employed by MAPS, a discussion of the assumptions underlying the use of these techniques, and a discussion of the validity of some of these assumptions based on data gathered during the first 5 years (1989-1993) of the programme, during which time it grew from 17 to 227 stations. Age- and species-specific differences in dispersal characteristics are important factors affecting the usefulness of the indices of adult population size and productivity derived from MAPS data. The presence of transients, heterogeneous capture probabilities among stations, and the large sample sizes required by models to deal effectively with these two considerations are important factors affecting the accuracy and precision of survival rate estimates derived from MAPS data. Important results from the first 5 years of MAPS are: (1) indices of adult population size derived from MAPS mist-netting data correlated well with analogous indices derived from point-count data collected at MAPS stations; (2) annual changes in productivity indices generated by MAPS were similar to analogous changes documented by direct nest monitoring and were generally as expected when compared to annual changes in weather during the breeding season; and (3) a model using between-year recaptures in Cormack-Jolly-Seber (CJS) mark-recapture analyses to estimate the proportion of residents among unmarked birds was found, for most tropical-wintering species sampled, to provide a better fit to the available data and more realistic and precise estimates of annual survival rates of resident birds than did standard CJS mark-recapture analyses. A detailed review of the statistical characteristics of MAPS data and a thorough evaluation of the field and analytical methods used in the MAPS programme are currently under way.

11.
The standard approach to constructing nonparametric tolerance intervals is to use the appropriate order statistics, provided a minimum sample size requirement is met. However, it is well known that this traditional approach is conservative with respect to the nominal level. One way to improve the coverage probabilities is to use interpolation. However, the extension to two-sided tolerance intervals, as well as to the case in which the minimum sample size requirement is not met, has not been studied. In this paper, an approach using linear interpolation is proposed for improving coverage probabilities in the two-sided setting. When the minimum sample size requirement is not met, coverage probabilities are shown to improve by using linear extrapolation. A discussion of the effect on coverage probabilities and expected lengths when transforming the data is also presented. The applicability of the approach is demonstrated using three real data sets.
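For reference, the classical order-statistic calculation that the interpolation approach improves upon: the content of the two-sided interval formed by the sample minimum and maximum is Beta(n-1, 2) distributed, so the confidence that it covers at least a proportion p of the population is 1 - F_Beta(p; n-1, 2), and the traditional minimum sample size is the smallest n for which this reaches the nominal confidence. The sketch below computes that minimum n and its (conservative) achieved confidence; the interpolation itself is not reproduced.

```python
# Sketch: the classical two-sided nonparametric tolerance interval (X_(1), X_(n)).
# Its content is Beta(n-1, 2), so the confidence that it covers at least a
# proportion p of the population is 1 - F_Beta(p; n-1, 2).
from scipy.stats import beta

def confidence(n, p):
    """Confidence that (X_(1), X_(n)) covers at least proportion p."""
    return 1 - beta.cdf(p, n - 1, 2)

def min_sample_size(p=0.90, conf=0.95):
    """Traditional minimum n for a (p, conf) two-sided tolerance interval."""
    n = 2
    while confidence(n, p) < conf:
        n += 1
    return n

n_min = min_sample_size()
print(n_min, confidence(n_min, 0.90))   # smallest n, and its conservative confidence
```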

12.
In the analysis of time-to-event data, restricted mean survival time has been well investigated in the literature and is provided by many commercial software packages, while calculating mean survival time remains a challenge because of censoring or insufficient follow-up time. Several researchers have proposed a hybrid estimator of mean survival based on the Kaplan–Meier curve with an extrapolated tail. However, this approach often leads to biased estimates because of poor estimates of the parameters in the extrapolated "tail" and the large variability in the tail of the Kaplan–Meier curve caused by the small number of patients still at risk. Two key challenges in this approach are (1) where the extrapolation should start and (2) how to estimate the parameters of the extrapolated tail. The authors propose a novel approach to calculating mean survival time that addresses these two challenges. In the proposed approach, an algorithm searches for time points at which the hazard rate changes significantly. The survival function is estimated by the Kaplan–Meier method before the last change point and approximated by an exponential function beyond the last change point, whose parameter is estimated locally. Mean survival time is derived from this survival function. Simulation and case studies demonstrate the superiority of the proposed approach.
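A sketch of the hybrid estimator described above: Kaplan-Meier up to a change point, and an exponential tail beyond it with the hazard estimated locally from events and person-time after that point. The change-point search algorithm itself is not reproduced (the change point is fixed by hand here), and the simulated data are illustrative.

```python
# Sketch: Kaplan-Meier curve up to t_star, exponential tail beyond t_star with a
# locally estimated hazard, and the resulting mean survival time.
import numpy as np

def km_curve(time, event):
    """Event times and the Kaplan-Meier survival estimate at those times."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    uniq = np.unique(time[event == 1])
    surv, s = [], 1.0
    for t in uniq:
        at_risk = np.sum(time >= t)
        deaths = np.sum((time == t) & (event == 1))
        s *= 1 - deaths / at_risk
        surv.append(s)
    return uniq, np.array(surv)

def hybrid_mean_survival(time, event, t_star):
    times, surv = km_curve(time, event)
    # Area under the KM step function from 0 to t_star.
    grid = np.concatenate(([0.0], times[times <= t_star], [t_star]))
    s_vals = np.concatenate(([1.0], surv[times <= t_star]))
    area_km = np.sum(np.diff(grid) * s_vals)
    s_star = s_vals[-1]
    # Local exponential hazard beyond t_star: events / person-time after t_star.
    person_time = np.sum(np.clip(time - t_star, 0, None))
    lam = event[time > t_star].sum() / person_time
    return area_km + s_star / lam          # tail integral of s_star * exp(-lam*(t - t_star))

rng = np.random.default_rng(3)
t_true = rng.exponential(10, 300)                  # true mean survival is 10
cens = rng.uniform(5, 25, 300)
time, event = np.minimum(t_true, cens), (t_true <= cens).astype(int)
print(hybrid_mean_survival(time, event, t_star=15.0))
```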

13.
Dropout is a persistent problem in longitudinal studies. We exhibit the shortcomings of the last observation carried forward (LOCF) method: it produces biased estimates of the change in an outcome from baseline to study endpoint under informative dropout. We develop a theoretical quantification of the effect of such bias on the type I and type II error rates. We present results for a setup in which a subject either completes the study or drops out during one particular interval, and also for a setup in which subjects can drop out at any time during the study. The type I error rate steadily increases as time to dropout decreases or the common sample size increases. The inflation in the type I error rate can be substantial when the reasons for dropout differ between the two groups, when there is a large difference in dropout rates between the control and treatment groups, and when the common sample size is large; this holds even when dropout subjects have only one or two fewer observations than the completers. Similar results are observed for the type II error rate. A study can have very low power when patients who recover early in the treatment group and patients who worsen in the control group drop out, even near the end of the study.
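A small simulation sketch of the mechanism described above: both groups improve at the same rate, so the null hypothesis holds at the final visit, but dropout is more frequent in the control group and dropouts keep their last observed value under LOCF; the endpoint t-test then rejects far more often than the nominal 5%. The dropout rates, visit structure and time trend are illustrative, not the settings of the theoretical results.

```python
# Sketch: LOCF inflating the type I error under differential dropout.
# Both groups improve identically over 4 visits (null true at the final visit);
# the control group drops out more often and carries earlier (worse) values forward.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(11)
n, visits = 100, 4
improvement = np.array([0.0, 0.5, 1.0, 1.5])        # same true mean change in both groups

def simulate_group(dropout_per_visit):
    y = improvement + rng.normal(0, 1, size=(n, visits))
    last = np.full(n, visits - 1)                    # index of last observed visit
    for j in range(1, visits):                       # subjects may drop out before visit j
        drop = rng.random(n) < dropout_per_visit
        last = np.where((last == visits - 1) & drop, j - 1, last)
    return y[np.arange(n), last]                     # LOCF endpoint value

def type1_error(n_sims=5000, alpha=0.05):
    rejections = 0
    for _ in range(n_sims):
        ctl = simulate_group(dropout_per_visit=0.20)  # control drops out more
        trt = simulate_group(dropout_per_visit=0.05)
        rejections += ttest_ind(trt, ctl).pvalue < alpha
    return rejections / n_sims

print(type1_error())    # well above the nominal 0.05
```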

14.
An important question that arises in clinical trials is how many additional observations, if any, are required beyond those originally planned. This question has been satisfactorily answered for two-treatment double-blind clinical experiments. However, one may be interested in comparing a new treatment with its competitors, of which there may be more than one. This problem is addressed in the present investigation for responses from arbitrary distributions in which the mean and the variance are not functionally related. First, a solution is obtained for determining the initial sample size for a specified level of significance and power at a specified alternative. It is then shown that when the initial sample size is large, the nominal level of significance and the power at the pre-specified alternative are fairly robust for the proposed sample size re-estimation procedure. The results are applied to the blood coagulation functionality problem considered by Kropf et al. [Multiple comparisons of treatments with stable multivariate tests in a two-stage adaptive design, including a test for non-inferiority, Biom. J. 42(8) (2000), pp. 951–965].

15.
Many authors have shown that a combined analysis of data from two or more types of recapture survey brings advantages, such as the ability to provide more information about parameters of interest. For example, a combined analysis of annual resighting and monthly radio-telemetry data allows separate estimates of true survival and emigration rates, whereas only apparent survival can be estimated from the resighting data alone. For studies involving more than one type of survey, biologists should consider how to allocate the total budget to the surveys related to the different types of marks so that they will gain optimal information from the surveys. For example, since radio tags and subsequent monitoring are very costly while leg bands are cheap, biologists should try to balance costs against the information obtained in deciding how many animals should receive radios. Given a total budget and specific costs, it is possible to determine the allocation of sample sizes to the different types of marks that minimizes the variance of the parameters of interest, such as annual survival and emigration rates. In this paper, we propose a cost function for a study where all birds receive leg bands, a subset receives radio tags, and all new releases occur at the start of the study. Using this cost function, we obtain the allocation of sample sizes to the two survey types that minimizes the standard error of survival rate estimates or, alternatively, the standard error of emigration rates. Given the proposed costs, we show that for a high resighting probability, e.g., 0.6, tagging roughly 10-40% of birds with radios will give survival estimates with standard errors within the minimum range. Lower resighting rates will require a higher percentage of radioed birds. In addition, the proposed costs require tagging the maximum possible percentage of radioed birds to minimize the standard error of emigration estimates.

16.
For right-censored survival data, the information on whether the observed time is a survival time or a censoring time is frequently lost; this is the case for competing risks data. In this article, we consider statistical inference for right-censored survival data with censoring indicators missing at random under the proportional mean residual life model. Simple and augmented inverse probability weighted estimating equation approaches are developed, in which the non-missingness probability and some unknown conditional expectations are estimated by kernel smoothing. The asymptotic properties of all the proposed estimators are established, and extensive simulation studies demonstrate that the proposed methods perform well at moderate sample sizes. Finally, the proposed method is applied to a data set from a stage II breast cancer trial.

17.
Acceptance sampling, a branch of statistical quality control, deals with the confidence one can have in a product's quality. At times it is necessary, when determining the sample size needed for a required precision, to account for the error in the sampling distribution, which depends on the sample size and the corresponding population size. The sample size with minimized error is then used to derive the most favourable OC curve. Neural networks are trained on the resulting errors and their corresponding tolerance levels for sample sizes drawn from different population sizes. The trained network can then automatically accept or reject a candidate sample size for constructing a better OC curve based on the minimized error, reducing the time spent on this laborious work. The approach is illustrated with geo-statistical data using a SAS program.
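For context, the standard OC-curve calculation that sits behind this discussion: the probability of accepting a lot as a function of its defect rate for a single-sampling plan (n, c), using the hypergeometric distribution for a finite lot and the binomial as its large-lot approximation. The gap between the two is exactly the population-size-dependent error referred to above; the neural-network step of the paper is not reproduced, and the plan parameters are illustrative.

```python
# Sketch: OC curve for a single-sampling plan (sample n, acceptance number c):
# P(accept) = P(number of defectives in the sample <= c).  The hypergeometric
# distribution is exact for a finite lot; the binomial is its large-lot approximation.
from scipy.stats import binom, hypergeom

N, n, c = 500, 50, 2           # lot size, sample size, acceptance number
for p in (0.01, 0.02, 0.05, 0.10):
    defectives = int(round(N * p))
    p_hyper = hypergeom.cdf(c, N, defectives, n)   # exact, finite lot
    p_binom = binom.cdf(c, n, p)                   # large-lot approximation
    print(f"p={p:.2f}  accept (hypergeom)={p_hyper:.3f}  accept (binom)={p_binom:.3f}")
```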

18.
The type I and II error rates of several statistical tests for seasonality in monthly data were investigated through a computer simulation study at two nominal significance levels, α=1% and α=5%. Three models were used for the variation: annual sinusoidal; semi-annual sinusoidal; and a curve which is constant in all but three consecutive months of the year, when it exhibits a constant increase (a "one-pulse" model). The statistical tests are compared in terms of the simulation results. These results may be applied to calculate either the sample size required to detect seasonal variation of fixed amplitude or the probability of detecting seasonal variation of variable amplitude with a fixed sample size. A numerical case study is given.
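A small Monte Carlo sketch in the spirit of such a simulation study: monthly counts are generated under an annual sinusoidal model and a harmonic-regression F-test (one concrete example of a seasonality test, not necessarily among those compared in the paper) is applied, giving an estimated type I error rate and power. The baseline level, amplitude and number of years are illustrative.

```python
# Sketch: simulated type I error and power of a harmonic-regression F-test for
# annual seasonality in monthly counts.
import numpy as np
from scipy.stats import f as f_dist

def harmonic_f_test(y):
    """p-value of the F-test for a single annual harmonic in monthly data y."""
    t = np.arange(len(y))
    X = np.column_stack([np.ones_like(t), np.cos(2*np.pi*t/12), np.sin(2*np.pi*t/12)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss1 = np.sum((y - X @ beta) ** 2)               # full model (intercept + harmonic)
    rss0 = np.sum((y - y.mean()) ** 2)               # intercept-only model
    F = ((rss0 - rss1) / 2) / (rss1 / (len(y) - 3))
    return 1 - f_dist.cdf(F, 2, len(y) - 3)

def rejection_rate(amplitude, years=5, n_sims=2000, alpha=0.05, seed=5):
    rng = np.random.default_rng(seed)
    t = np.arange(12 * years)
    mean = 20 + amplitude * np.sin(2*np.pi*t/12)     # annual sinusoidal model
    return np.mean([harmonic_f_test(rng.poisson(mean).astype(float)) < alpha
                    for _ in range(n_sims)])

print(rejection_rate(amplitude=0.0))   # type I error rate, should be near 0.05
print(rejection_rate(amplitude=3.0))   # power against an annual sinusoid
```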

19.
The utility of blinded sample size re-estimation for clinical trials depends on the ability to estimate variability without providing information about the true treatment difference, and on some reasonable assurance that the method is not likely to cause the sample size to be increased when the treatment effect is better than anticipated. We show that violations of these properties are unlikely to occur in practice.

20.
We discuss the impact of misspecifying fully parametric proportional hazards and accelerated life models. For the uncensored case, misspecified accelerated life models give asymptotically unbiased estimates of covariate effect, but the shape and scale parameters depend on the misspecification. The covariate, shape and scale parameters differ in the censored case. Parametric proportional hazards models do not have a sound justification for general use: estimates from misspecified models can be very biased, and misleading results for the shape of the hazard function can arise. Misspecified survival functions are more biased at the extremes than the centre. Asymptotic and first-order results are compared. If a model is misspecified, the size of Wald tests will be underestimated. Use of the sandwich estimator of standard error gives tests of the correct size, but misspecification leads to a loss of power. Accelerated life models are more robust to misspecification because of their log-linear form. In preliminary data analysis, practitioners should investigate proportional hazards and accelerated life models; software is readily available for several such models.
