Similar Literature
20 similar documents found.
1.
We consider the blinded sample size re‐estimation based on the simple one‐sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two‐sample t‐test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re‐estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non‐inferiority margins for non‐inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
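As an illustration of the blinded re-estimation idea summarized above, the following minimal Python sketch pools the interim data without treatment labels, computes the simple one-sample standard deviation, and plugs it into a standard normal-approximation sample size formula. The function name, parameter values, and the formula used are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np
from scipy import stats

def blinded_ssr(n_pilot, delta_planned, alpha=0.05, power=0.90, sd_true=1.0, seed=1):
    """Re-estimate the per-arm sample size from a blinded internal pilot.

    The interim variance is the simple one-sample estimator computed on the
    pooled (blinded) data; under 1:1 allocation it overstates sigma^2 by
    roughly delta^2 / 4.
    """
    rng = np.random.default_rng(seed)
    # blinded pilot data: treatment labels are hidden, so both arms are pooled
    x = rng.normal(0.0, sd_true, n_pilot // 2)
    y = rng.normal(delta_planned, sd_true, n_pilot // 2)
    sd_blinded = np.concatenate([x, y]).std(ddof=1)   # one-sample SD estimator
    z = stats.norm.ppf
    n_per_arm = 2 * (z(1 - alpha / 2) + z(power)) ** 2 * sd_blinded ** 2 / delta_planned ** 2
    return int(np.ceil(n_per_arm))

# re-estimated per-arm size after a blinded internal pilot of 40 subjects
print(blinded_ssr(n_pilot=40, delta_planned=0.5))
```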

2.
This paper illustrates how the design and statistical analysis of the primary endpoint of a proof‐of‐concept study can be formulated within a Bayesian framework and is motivated by and illustrated with a Pfizer case study in chronic kidney disease. It is shown how decision criteria for success can be formulated, and how the study design can be assessed in relation to these, both using the traditional approach of probability of success conditional on the true treatment difference and also using Bayesian assurance and pre‐posterior probabilities. The case study illustrates how an informative prior on placebo response can have a dramatic effect in reducing sample size, saving time and resource, and we argue that in some cases, it can be considered unethical not to include relevant literature data in this way. Copyright © 2015 John Wiley & Sons, Ltd.
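The contrast drawn above between power conditional on an assumed treatment difference and Bayesian assurance can be made concrete with a short Monte Carlo sketch. The normal prior, the z-test success criterion, and all numerical values below are illustrative assumptions, not the Pfizer case study's actual model.

```python
import numpy as np
from scipy import stats

def assurance(n_per_arm, sd, prior_mean, prior_sd, alpha=0.05, n_sim=100_000, seed=1):
    """Assurance = power averaged over the prior for the true treatment difference."""
    rng = np.random.default_rng(seed)
    delta = rng.normal(prior_mean, prior_sd, n_sim)          # prior draws
    se = sd * np.sqrt(2.0 / n_per_arm)                       # SE of the difference in means
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return stats.norm.cdf(delta / se - z_crit).mean()        # average conditional power

# conventional power at the prior mean vs. assurance with prior uncertainty included
se = 1.0 * np.sqrt(2.0 / 60)
print(stats.norm.cdf(0.5 / se - stats.norm.ppf(0.975)))      # power at the assumed effect
print(assurance(n_per_arm=60, sd=1.0, prior_mean=0.5, prior_sd=0.3))
```

When the power at the assumed effect is high, the assurance is typically lower, which is the pre-posterior perspective the abstract refers to.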

3.
We consider the comparison of two formulations in terms of average bioequivalence using the 2 × 2 cross‐over design. In a bioequivalence study, the primary outcome is a pharmacokinetic measure, such as the area under the plasma concentration by time curve, which is usually assumed to have a lognormal distribution. The criterion typically used for claiming bioequivalence is that the 90% confidence interval for the ratio of the means should lie within the interval (0.80, 1.25), or equivalently the 90% confidence interval for the difference in the means on the natural log scale should be within the interval (−0.2231, 0.2231). We compare the gold standard method for calculation of the sample size based on the non‐central t distribution with those based on the central t and normal distributions. In practice, the differences between the various approaches are likely to be small. Further approximations to the power function are sometimes used to simplify the calculations. These approximations should be used with caution, because the sample size required for a desirable level of power might be under‐ or overestimated compared to the gold standard method. However, in some situations the approximate methods produce very similar sample sizes to the gold standard method. Copyright © 2005 John Wiley & Sons, Ltd.
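To make the comparison concrete, here is a hedged Python sketch of a standard TOST-style power and sample size calculation for the 2 × 2 cross-over: a non-central t approximation versus the simpler normal approximation. The formulas are the commonly used textbook ones with an assumed within-subject CV, not the paper's exact "gold standard" computation.

```python
import numpy as np
from scipy import stats

THETA = np.log(1.25)   # equivalence margin on the natural log scale (about 0.2231)

def power_tost_nct(n, cv, delta=0.0, alpha=0.05):
    """Approximate TOST power via the non-central t distribution.

    n: total subjects in the 2x2 cross-over; cv: within-subject CV;
    delta: true log-scale difference of means (0 = formulations identical).
    """
    sigma_w = np.sqrt(np.log(1 + cv ** 2))        # within-subject SD on the log scale
    se = sigma_w * np.sqrt(2.0 / n)               # SE of the estimated treatment difference
    df = n - 2
    t_crit = stats.t.ppf(1 - alpha, df)
    power = (stats.nct.cdf(-t_crit, df, (delta - THETA) / se)
             - stats.nct.cdf(t_crit, df, (delta + THETA) / se))
    return max(power, 0.0)

def n_normal_approx(cv, delta=0.0, alpha=0.05, power=0.90):
    """Total sample size from the normal approximation (beta is split in two when delta = 0)."""
    sigma_w = np.sqrt(np.log(1 + cv ** 2))
    z = stats.norm.ppf
    z_beta = z(1 - (1 - power) / 2) if delta == 0 else z(power)
    return 2 * sigma_w ** 2 * (z(1 - alpha) + z_beta) ** 2 / (THETA - abs(delta)) ** 2

n_approx = int(np.ceil(n_normal_approx(cv=0.30)))
print(n_approx, power_tost_nct(n_approx, cv=0.30))  # check the approximate n against the nct power
```

The last line shows whether the normal-approximation sample size actually delivers the target power under the non-central t calculation, which is exactly the under- or overestimation issue the abstract warns about.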

4.
In some exceptional circumstances, as in very rare diseases, nonrandomized one‐arm trials are the sole source of evidence to demonstrate efficacy and safety of a new treatment. The design of such studies needs a sound methodological approach in order to provide reliable information, and the determination of the appropriate sample size still represents a critical step of this planning process. As, to our knowledge, no method exists for sample size calculation in one‐arm trials with a recurrent event endpoint, we propose here a closed‐form sample size formula. It is derived assuming a mixed Poisson process, and it is based on the asymptotic distribution of the one‐sample robust nonparametric test recently developed for the analysis of recurrent events data. The validity of this formula in managing a situation with heterogeneity of event rates, both in time and between patients, and a time‐varying treatment effect was demonstrated with extensive simulation studies. Moreover, although the method requires the specification of a process for events generation, it seems to be robust under erroneous definition of this process, provided that the number of events at the end of the study is similar to the one assumed in the planning phase. The motivating clinical context is represented by a nonrandomized one‐arm study on gene therapy in a very rare immunodeficiency in children (ADA‐SCID), where a major endpoint is the recurrence of severe infections. Copyright © 2012 John Wiley & Sons, Ltd.
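The closed-form formula itself is in the paper; as a hedged illustration of the data-generating process it assumes, the sketch below simulates recurrent events from a gamma-mixed (over-dispersed) Poisson process per patient. All parameter values and the function name are illustrative.

```python
import numpy as np

def simulate_mixed_poisson(n_patients, base_rate, followup, frailty_var, seed=1):
    """Simulate recurrent-event times from a mixed Poisson process.

    Each patient receives a gamma frailty with mean 1 and variance frailty_var;
    given the frailty, events follow a homogeneous Poisson process on
    [0, followup] with rate base_rate * frailty.
    """
    rng = np.random.default_rng(seed)
    frailty = rng.gamma(1.0 / frailty_var, scale=frailty_var, size=n_patients)  # mean 1
    event_times = []
    for w in frailty:
        n_events = rng.poisson(base_rate * w * followup)
        event_times.append(np.sort(rng.uniform(0, followup, n_events)))
    return event_times

events = simulate_mixed_poisson(n_patients=25, base_rate=2.0, followup=1.0, frailty_var=0.5)
print([len(e) for e in events])   # counts are over-dispersed relative to a plain Poisson
```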

5.
When two‐component parallel systems are tested, the data consist of Type‐II censored data X(i), i = 1, …, n, from one component, and their concomitants Y[i] randomly censored at X(r), the stopping time of the experiment. Marshall & Olkin's (1967) bivariate exponential distribution is used to illustrate statistical inference procedures developed for this data type. Although this data type is motivated practically, the likelihood is complicated, and maximum likelihood estimation is difficult, especially in the case where the parameter space is a non‐open set. An iterative algorithm is proposed for finding maximum likelihood estimates. This article derives several properties of the maximum likelihood estimator (MLE), including existence, uniqueness, strong consistency and asymptotic distribution. It also develops an alternative estimation method with closed‐form expressions based on marginal distributions, and derives its asymptotic properties. Compared with variances of the MLEs in the finite and large sample situations, the alternative estimator performs very well, especially when the correlation between X and Y is small.
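A minimal sketch of the data structure described above, assuming Marshall & Olkin's construction (X, Y) = (min(Z1, Z12), min(Z2, Z12)) with independent exponential shocks; the estimation algorithms developed in the article are not reproduced here, and the parameter values are illustrative.

```python
import numpy as np

def marshall_olkin_type2(n, r, lam1, lam2, lam12, seed=1):
    """Generate Type-II censored X-data with concomitant Y-values censored at X_(r)."""
    rng = np.random.default_rng(seed)
    z1 = rng.exponential(1 / lam1, n)
    z2 = rng.exponential(1 / lam2, n)
    z12 = rng.exponential(1 / lam12, n)          # common shock inducing dependence
    x, y = np.minimum(z1, z12), np.minimum(z2, z12)
    order = np.argsort(x)
    x_obs, y_conc = x[order][:r], y[order][:r]   # X_(1..r) and their concomitants Y_[i]
    stop = x_obs[-1]                             # stopping time X_(r)
    y_time = np.minimum(y_conc, stop)
    y_event = (y_conc <= stop).astype(int)       # 0 = concomitant censored at X_(r)
    return x_obs, y_time, y_event

x_obs, y_time, y_event = marshall_olkin_type2(n=20, r=12, lam1=1.0, lam2=1.5, lam12=0.5)
print(len(x_obs), int(y_event.sum()), "of", len(y_event), "concomitants uncensored")
```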

6.
We consider the problem of estimating the joint distribution function of the event time and a continuous mark variable when the event time is subject to interval censoring case 1 and the continuous mark variable is only observed if the event occurred before the time of inspection. The non‐parametric maximum likelihood estimator in this model is known to be inconsistent. We study two alternative smooth estimators, based on the explicit (inverse) expression of the distribution function of interest in terms of the density of the observable vector. We derive the pointwise asymptotic distribution of both estimators.

7.
A composite endpoint consists of multiple endpoints combined in one outcome. It is frequently used as the primary endpoint in randomized clinical trials. There are two main disadvantages associated with the use of composite endpoints: a) in conventional analyses, all components are treated as equally important; and b) in time‐to‐event analyses, the first event considered may not be the most important component. Recently Pocock et al. (2012) introduced the win ratio method to address these disadvantages. This method has two alternative approaches: the matched pair approach and the unmatched pair approach. In the unmatched pair approach, the confidence interval is constructed based on bootstrap resampling, and the hypothesis testing is based on the non‐parametric method by Finkelstein and Schoenfeld (1999). Luo et al. (2015) developed a closed‐form variance estimator of the win ratio for the unmatched pair approach, based on a composite endpoint with two components and a specific algorithm determining winners, losers and ties. We extend the unmatched pair approach to provide a generalized analytical solution to both hypothesis testing and confidence interval construction for the win ratio, based on its logarithmic asymptotic distribution. This asymptotic distribution is derived via U‐statistics following Wei and Johnson (1985). We perform simulations assessing the confidence intervals constructed based on our approach versus those per the bootstrap resampling and per Luo et al. We have also applied our approach to a liver transplant Phase III study. This application and the simulation studies show that the win ratio can be a better statistical measure than the odds ratio when the importance order among components matters; and that the method per our approach and that by Luo et al., although derived based on large sample theory, are not limited to large samples but also perform well for relatively small sample sizes. Different from Pocock et al. and Luo et al., our approach is a generalized analytical method, which is valid for any algorithm determining winners, losers and ties. Copyright © 2016 John Wiley & Sons, Ltd.
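A minimal sketch of the unmatched win ratio for a two-component hierarchy (death first, then a non-fatal event), with the winner/loser/tie rule spelled out pair by pair. The simulated data and the specific comparison rule are illustrative; the analytical U-statistic-based variance derived in the paper is not reproduced, and in practice a bootstrap or a closed-form estimator would supply the confidence interval.

```python
import numpy as np

def compare_pair(ti, ei, tj, ej):
    """+1 if subject i determinably outlasts j, -1 if j outlasts i, 0 if indeterminate."""
    if ej == 1 and ti > tj:
        return 1
    if ei == 1 and tj > ti:
        return -1
    return 0

def win_ratio(death_t, death_e, hosp_t, hosp_e, group):
    """Unmatched win ratio: every treatment/control pair is compared hierarchically."""
    wins = losses = 0
    for i in np.where(group == 1)[0]:
        for j in np.where(group == 0)[0]:
            res = compare_pair(death_t[i], death_e[i], death_t[j], death_e[j])
            if res == 0:   # death comparison indeterminate -> fall back to hospitalization
                res = compare_pair(hosp_t[i], hosp_e[i], hosp_t[j], hosp_e[j])
            wins += res == 1
            losses += res == -1
    return wins / losses

rng = np.random.default_rng(1)
n, cutoff = 60, 4.0
group = rng.integers(0, 2, n)
death_t = rng.exponential(5 + 2 * group)            # treatment prolongs survival
death_e = (death_t < cutoff).astype(int)
death_t = np.minimum(death_t, cutoff)
hosp_t = rng.exponential(3 + 1 * group)
hosp_e = (hosp_t < cutoff).astype(int)
hosp_t = np.minimum(hosp_t, cutoff)
print(win_ratio(death_t, death_e, hosp_t, hosp_e, group))
```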

8.
Molecularly targeted, genomic‐driven, and immunotherapy‐based clinical trials continue to be advanced for the treatment of relapsed or refractory cancer patients, where the growth modulation index (GMI) is often considered a primary endpoint of treatment efficacy. However, little literature is available that considers trial designs with GMI as the primary endpoint. In this article, we derive a sample size formula for the score test under a log‐linear model of the GMI. Study designs using the derived sample size formula are illustrated under a bivariate exponential model, the Weibull frailty model, and the generalized treatment effect size. The proposed designs provide sound statistical methods for a single‐arm phase II trial with GMI as the primary endpoint.

9.
Clinical trials with event‐time outcomes as co‐primary contrasts are common in many areas such as infectious disease, oncology, and cardiovascular disease. We discuss methods for calculating the sample size for randomized superiority clinical trials with two correlated time‐to‐event outcomes as co‐primary contrasts when the time‐to‐event outcomes are exponentially distributed. The approach is simple and easily applied in practice. Copyright © 2012 John Wiley & Sons, Ltd.
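A hedged sketch of the kind of calculation involved: the joint power of two correlated one-sided log-hazard-ratio tests, approximated by a bivariate normal distribution. The variance approximation 4/d for a log hazard ratio under 1:1 allocation, the correlation value, and the event counts are illustrative assumptions rather than the paper's exact formulation under exponential outcomes.

```python
import numpy as np
from scipy import stats

def joint_power(hr1, hr2, events1, events2, rho, alpha=0.025):
    """P(both one-sided tests are significant) for two correlated log-HR test statistics."""
    z_crit = stats.norm.ppf(1 - alpha)
    # under 1:1 allocation, the estimated log(HR_k) is roughly N(log hr_k, 4 / events_k)
    drift = np.array([-np.log(hr1) * np.sqrt(events1 / 4.0),
                      -np.log(hr2) * np.sqrt(events2 / 4.0)])
    cov = [[1.0, rho], [rho, 1.0]]
    # P(Z1 > z_crit, Z2 > z_crit) rewritten as a lower-orthant bivariate normal probability
    return stats.multivariate_normal(mean=-drift, cov=cov).cdf([-z_crit, -z_crit])

print(joint_power(hr1=0.70, hr2=0.70, events1=250, events2=300, rho=0.5))
```

Scanning over event counts (and hence sample sizes) until this joint power reaches the target gives the required size in the co-primary setting.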

10.
The most common forecasting methods in business are based on exponential smoothing, and the most common time series in business are inherently non‐negative. Therefore it is of interest to consider the properties of the potential stochastic models underlying exponential smoothing when applied to non‐negative data. We explore exponential smoothing state space models for non‐negative data under various assumptions about the innovations, or error, process. We first demonstrate that prediction distributions from some commonly used state space models may have an infinite variance beyond a certain forecasting horizon. For multiplicative error models that do not have this flaw, we show that sample paths will converge almost surely to zero even when the error distribution is non‐Gaussian. We propose a new model with similar properties to exponential smoothing, but which does not have these problems, and we develop some distributional properties for our new model. We then explore the implications of our results for inference, and compare the short‐term forecasting performance of the various models using data on the weekly sales of over 300 items of costume jewelry. The main findings of the research are that the Gaussian approximation is adequate for estimation and one‐step‐ahead forecasting. However, as the forecasting horizon increases, the approximate prediction intervals become increasingly problematic. When the model is to be used for simulation purposes, a suitably specified scheme must be employed.
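A minimal simulation sketch of a multiplicative-error local level model of the kind referenced above, y_t = l_{t-1}(1 + e_t) with l_t = l_{t-1}(1 + alpha * e_t); the truncation of the errors and all parameter values are illustrative choices, not the paper's proposed model.

```python
import numpy as np

def simulate_multiplicative_level(n_steps, level0=100.0, alpha=0.5, sigma=0.3, n_paths=5, seed=1):
    """Simulate sample paths of a multiplicative-error local level model."""
    rng = np.random.default_rng(seed)
    levels = np.full(n_paths, level0)
    paths = np.empty((n_steps, n_paths))
    for t in range(n_steps):
        e = np.clip(rng.normal(0.0, sigma, n_paths), -0.99, None)  # keep y and l positive (alpha <= 1)
        paths[t] = levels * (1 + e)
        levels = levels * (1 + alpha * e)
    return paths

paths = simulate_multiplicative_level(n_steps=5000)
print(paths[-1])   # long-horizon levels have drifted towards zero
```

For mean-zero, non-degenerate errors, E[log(1 + alpha * e_t)] < 0 by Jensen's inequality, so the level is a product of factors whose logs have negative mean, which is consistent with the almost-sure convergence to zero described above.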

11.
To design a phase III study with a final endpoint and calculate the required sample size for the desired probability of success, we need a good estimate of the treatment effect on the endpoint. It is prudent to fully utilize all available information including the historical and phase II information on the treatment as well as external data on other treatments. It is not uncommon that a phase II study may use a surrogate endpoint as the primary endpoint and have no or limited data for the final endpoint. On the other hand, external information from other studies of other treatments on the surrogate and final endpoints may be available to establish a relationship between the treatment effects on the two endpoints. Through this relationship, making full use of the surrogate information may enhance the estimate of the treatment effect on the final endpoint. In this research, we propose a bivariate Bayesian analysis approach to comprehensively deal with the problem. A dynamic borrowing approach is considered to regulate the amount of historical data and surrogate information borrowing based on the level of consistency. A much simpler frequentist method is also discussed. Simulations are conducted to compare the performances of different approaches. An example is used to illustrate the applications of the methods.
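The bivariate Bayesian model is the paper's contribution; as a hedged sketch of the simpler frequentist idea it mentions, the snippet below regresses final-endpoint effects on surrogate effects across hypothetical external studies and projects the phase II surrogate estimate through that relationship, propagating the phase II estimation error and the residual scatter by Monte Carlo (uncertainty in the fitted line itself is ignored for brevity). All data values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical external studies: treatment effects on the surrogate (x) and final (y) endpoints
x_ext = np.array([0.10, 0.18, 0.25, 0.32, 0.40])
y_ext = np.array([0.04, 0.09, 0.11, 0.16, 0.19])
slope, intercept = np.polyfit(x_ext, y_ext, 1)                  # surrogate-to-final relationship
resid_sd = np.std(y_ext - (intercept + slope * x_ext), ddof=2)

# phase II estimate of the new treatment's surrogate effect, with its standard error
surr_hat, surr_se = 0.28, 0.06
draws = (intercept + slope * rng.normal(surr_hat, surr_se, 100_000)
         + rng.normal(0.0, resid_sd, 100_000))
print(round(draws.mean(), 3), np.round(np.quantile(draws, [0.1, 0.9]), 3))  # projected final-endpoint effect
```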

12.
In the analysis of semi‐competing risks data, interest lies in estimation and inference with respect to a so‐called non‐terminal event, the observation of which is subject to a terminal event. Multi‐state models are commonly used to analyse such data, with covariate effects on the transition/intensity functions typically specified via the Cox model and dependence between the non‐terminal and terminal events specified, in part, by a unit‐specific shared frailty term. To ensure identifiability, the frailties are typically assumed to arise from a parametric distribution, specifically a Gamma distribution with mean 1.0 and variance, say, σ². When the frailty distribution is misspecified, however, the resulting estimator is not guaranteed to be consistent, with the extent of asymptotic bias depending on the discrepancy between the assumed and true frailty distributions. In this paper, we propose a novel class of transformation models for semi‐competing risks analysis that permit the non‐parametric specification of the frailty distribution. To ensure identifiability, the class restricts to parametric specifications of the transformation and the error distribution; the latter are flexible, however, and cover a broad range of possible specifications. We also derive the semi‐parametric efficient score under the complete data setting and propose a non‐parametric score imputation method to handle right censoring; consistency and asymptotic normality of the resulting estimators are derived and small‐sample operating characteristics evaluated via simulation. Although the proposed semi‐parametric transformation model and non‐parametric score imputation method are motivated by the analysis of semi‐competing risks data, they are broadly applicable to any analysis of multivariate time‐to‐event outcomes in which a unit‐specific shared frailty is used to account for correlation. Finally, the proposed model and estimation procedures are applied to a study of hospital readmission among patients diagnosed with pancreatic cancer.

13.
In this paper, we propose a design that uses a short‐term endpoint for accelerated approval at the interim analysis and a long‐term endpoint for full approval at the final analysis, with sample size adaptation based on the long‐term endpoint. Two sample size adaptation rules are compared: an adaptation rule to maintain the conditional power at a prespecified level and a step-function-type adaptation rule to better address the bias issue. Three testing procedures are proposed: alpha splitting between the two endpoints; alpha exhaustive between the endpoints; and alpha exhaustive with an improved critical value based on the correlation. The family‐wise error rate is proved to be strongly controlled across the two endpoints, the sample size adaptation, and the two analysis time points with the proposed designs. We show that using alpha exhaustive designs greatly improves the power when both endpoints are effective, and that the power difference between the two adaptation rules is minimal. The proposed design can be extended to more general settings. Copyright © 2015 John Wiley & Sons, Ltd.
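A hedged sketch of the conditional-power adaptation rule, one of the two rules compared above, assuming a Brownian-motion (normal-theory) approximation for the z-statistic on the long-term endpoint and the "current trend" convention for the drift; the step-function rule and the alpha-allocation testing procedures are not reproduced.

```python
import numpy as np
from scipy import stats

def conditional_power(z1, n1, n_final, alpha=0.025):
    """Conditional power at the final analysis given the interim z-statistic z1 (n1 subjects),
    assuming the effect estimated at the interim persists."""
    z_crit = stats.norm.ppf(1 - alpha)
    theta_hat = z1 / np.sqrt(n1)       # standardized effect such that E[Z(n)] = theta * sqrt(n)
    numerator = z1 * np.sqrt(n1) + theta_hat * (n_final - n1) - z_crit * np.sqrt(n_final)
    return stats.norm.cdf(numerator / np.sqrt(n_final - n1))

def adapt_n(z1, n1, n_planned, n_max, target_cp=0.90):
    """Increase the final sample size (capped at n_max) until conditional power hits the target."""
    for n in range(n_planned, n_max + 1):
        if conditional_power(z1, n1, n) >= target_cp:
            return n
    return n_max

print(conditional_power(z1=1.6, n1=100, n_final=200))     # conditional power at the planned size
print(adapt_n(z1=1.6, n1=100, n_planned=200, n_max=400))  # re-estimated size under the CP rule
```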

14.
Determination of an adequate sample size is critical to the design of research ventures. For clustered right-censored data, Manatunga and Chen [Sample size estimation for survival outcomes in cluster-randomized studies with small cluster sizes. Biometrics. 2000;56(2):616–621] proposed a sample size calculation based on considering the bivariate marginal distribution as a Clayton copula model. In addition to the Clayton copula, other important families of copulas, such as the Gumbel and Frank copulas, are also well established in multivariate survival analysis. However, sample size calculation based on these assumptions has not been fully investigated yet. To broaden the scope of Manatunga and Chen [Sample size estimation for survival outcomes in cluster-randomized studies with small cluster sizes. Biometrics. 2000;56(2):616–621]'s research and achieve a more flexible sample size calculation for clustered right-censored data, we extended the work by modelling the bivariate marginal distribution with Gumbel and Frank copulas. We evaluate the performance of the proposed method and investigate the impacts of the accrual times, follow-up times and the within-cluster correlation effect of the study. The proposed method is applied to two real-world studies, and the R code is made available to users.
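A hedged sketch of the assumed data-generating model: paired (cluster size two) survival times whose dependence follows a Clayton copula with exponential margins and independent exponential censoring. The conditional-inverse step is specific to the Clayton family; the Gumbel and Frank extensions discussed above would replace that step. Parameter values are illustrative.

```python
import numpy as np

def clayton_bivariate_survival(n_clusters, theta, rate, cens_rate, seed=1):
    """Generate paired right-censored survival times with Clayton-copula dependence.

    theta > 0 controls the within-cluster association (Kendall's tau = theta / (theta + 2));
    margins are exponential(rate); censoring is independent exponential(cens_rate).
    """
    rng = np.random.default_rng(seed)
    v1, v2 = rng.uniform(size=(2, n_clusters))
    u1 = v1
    # conditional inverse of the Clayton copula C(u1, u2) = (u1^-theta + u2^-theta - 1)^(-1/theta)
    u2 = (u1 ** -theta * (v2 ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
    t1, t2 = -np.log(u1) / rate, -np.log(u2) / rate            # exponential margins
    c1, c2 = rng.exponential(1 / cens_rate, (2, n_clusters))
    times = np.minimum([t1, t2], [c1, c2])
    events = (np.array([t1, t2]) <= np.array([c1, c2])).astype(int)
    return times, events

times, events = clayton_bivariate_survival(n_clusters=100, theta=2.0, rate=0.5, cens_rate=0.2)
print(events.mean())   # observed event proportion; Kendall's tau here is 2 / (2 + 2) = 0.5
```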

15.
Adaptive sample size adjustment (SSA) for clinical trials consists of examining early subsets of on-trial data to adjust estimates of sample size requirements. Blinded SSA is often preferred over unblinded SSA because it obviates many logistical complications of the latter and generally introduces less bias. On the other hand, current blinded SSA methods for binary data offer little to no new information about the treatment effect, ignore uncertainties associated with the population treatment proportions, and/or depend on enhanced randomization schemes that risk partial unblinding. I propose an innovative blinded SSA method for use when the primary analysis is a non‐inferiority or superiority test regarding a risk difference. The method incorporates evidence about the treatment effect via the likelihood function of a mixture distribution. I compare the new method with an established one and with the fixed sample size study design, in terms of maximization of an expected utility function. The new method maximizes the expected utility better than do the comparators, under a range of assumptions. I illustrate the use of the proposed method with an example that incorporates a Bayesian hierarchical model. Lastly, I suggest topics for future study regarding the proposed methods. Copyright © 2015 John Wiley & Sons, Ltd.
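The paper's mixture-likelihood construction and utility comparison are not reproduced here; as a heavily simplified, hedged illustration of how blinded binary data can still carry information about a risk difference, the snippet below evaluates the likelihood of blinded block totals (blocks of size two, one patient per arm) over a grid of risk-difference values. The block structure, the grid, and all response rates are illustrative assumptions.

```python
import numpy as np

def blinded_block_loglik(block_totals, p_control, risk_diff):
    """Log-likelihood of blinded per-block response totals (block size 2, one patient per arm).

    With treatment labels hidden, only the number of responders per block (0, 1 or 2)
    is used; its distribution still depends on both p_control and the risk difference."""
    pc = p_control
    pt = float(np.clip(p_control + risk_diff, 1e-9, 1 - 1e-9))
    probs = np.array([(1 - pc) * (1 - pt),
                      pc * (1 - pt) + (1 - pc) * pt,
                      pc * pt])
    counts = np.bincount(block_totals, minlength=3)
    return float(counts @ np.log(probs))

rng = np.random.default_rng(1)
pc_true, pt_true = 0.30, 0.45
totals = rng.binomial(1, pc_true, 200) + rng.binomial(1, pt_true, 200)   # 200 blinded blocks
grid = np.linspace(-0.2, 0.4, 13)
best = max(grid, key=lambda d: blinded_block_loglik(totals, 0.30, d))
print(round(best, 2))   # risk difference best supported by the blinded block totals
```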

16.
Non‐random sampling is a source of bias in empirical research. It is common for the outcomes of interest (e.g. wage distribution) to be skewed in the source population. Sometimes, the outcomes are further subjected to sample selection, which is a type of missing data, resulting in partial observability. Thus, methods based on complete cases for skewed data are inadequate for the analysis of such data and a general sample selection model is required. Heckman proposed a full maximum likelihood estimation method under the normality assumption for sample selection problems, and parametric and non‐parametric extensions have been proposed. We generalize the Heckman selection model to allow for underlying skew‐normal distributions. Finite‐sample performance of the maximum likelihood estimator of the model is studied via simulation. Applications illustrate the strength of the model in capturing spurious skewness in bounded scores, and in modelling data where a logarithm transformation could not mitigate the effect of inherent skewness in the outcome variable.
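The skew-normal generalization is the paper's contribution; as a hedged baseline illustration of the selection problem it addresses, the snippet below simulates a classical Heckman-type selection mechanism with correlated normal errors and shows that a complete-case (selected-sample) regression slope drifts away from the true value. All coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
# correlated outcome and selection errors: selection on unobservables
e_y, e_s = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], n).T
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + e_y                    # outcome equation (true slope 0.5)
selected = (0.2 + 0.8 * x + e_s) > 0       # selection equation: y observed only when True

naive_slope = np.polyfit(x[selected], y[selected], 1)[0]
print(round(selected.mean(), 3), round(naive_slope, 3))   # complete-case slope is biased
```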

17.
A placebo‐controlled randomized clinical trial is required to demonstrate that an experimental treatment is superior to its corresponding placebo on multiple coprimary endpoints. This is particularly true in the field of neurology. In fact, clinical trials for neurological disorders need to show the superiority of an experimental treatment over a placebo in two coprimary endpoints. Unfortunately, these trials often fail to detect a true treatment effect for the experimental treatment versus the placebo owing to an unexpectedly high placebo response rate. Sequential parallel comparison design (SPCD) can be used to address this problem. However, the SPCD has not yet been discussed in relation to clinical trials with coprimary endpoints. In this article, our aim was to develop a hypothesis‐testing method and a method for calculating the corresponding sample size for the SPCD with two coprimary endpoints. In a simulation, we show that the proposed hypothesis‐testing method achieves the nominal type I error rate and power and that the proposed sample size calculation method has adequate power accuracy. In addition, the usefulness of our methods is confirmed by returning to an SPCD trial with a single primary endpoint of Alzheimer disease‐related agitation.

18.
In many morbidity/mortality studies, composite endpoints are considered. Although the primary interest is to demonstrate that an intervention delays death, the expected death rate is often so low that studies focusing exclusively on survival are not feasible. Components of the composite endpoint are chosen such that their occurrence is predictive for time to death. Therefore, if the time to non-fatal events is censored by death, censoring is no longer independent. As a consequence, the analysis of the components of a composite endpoint cannot be reasonably performed using classical methods for the analysis of survival times, such as Kaplan-Meier estimates or log-rank tests. In this paper we visualize the impact of disregarding dependent censoring during the analysis and discuss practicable alternatives for the analysis of morbidity/mortality studies. Using simulations, we provide evidence that copula-based methods have the potential to deliver practically unbiased estimates of the hazards of components of a composite endpoint. At the same time, they require minimal assumptions, which is important since not all assumptions are generally verifiable because of censoring. Therefore, there are alternative ways to analyze morbidity/mortality studies more appropriately by accounting for the dependencies among the components of composite endpoints. Despite the limitations mentioned, these alternatives can at minimum serve as sensitivity analyses to check the robustness of the currently used methods.
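A minimal simulation sketch of the problem described above: when the time to a non-fatal event is censored by a positively correlated death time, the naive Kaplan-Meier curve for the non-fatal event no longer recovers its marginal distribution. Dependence is induced here through a shared gamma frailty, an illustrative choice; the copula-based estimators discussed in the paper are not implemented, only the discrepancy is displayed.

```python
import numpy as np

def km_estimate(time, event, t_grid):
    """Hand-rolled Kaplan-Meier survival estimate evaluated on a grid (continuous times assumed)."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    at_risk = len(time) - np.arange(len(time))
    surv = np.cumprod(np.where(event == 1, 1 - 1 / at_risk, 1.0))
    return np.array([surv[time <= t][-1] if (time <= t).any() else 1.0 for t in t_grid])

rng = np.random.default_rng(1)
n = 20_000
frailty = rng.gamma(2.0, 0.5, n)                     # shared frailty -> positive dependence
t_nonfatal = rng.exponential(1 / (0.5 * frailty))    # non-fatal event, hazard 0.5 * frailty
t_death = rng.exponential(1 / (0.3 * frailty))       # death, hazard 0.3 * frailty
obs = np.minimum(t_nonfatal, t_death)
evt = (t_nonfatal <= t_death).astype(int)            # death acts as (dependent) censoring

grid = np.array([1.0, 2.0, 3.0])
true_surv = (t_nonfatal > grid[:, None]).mean(axis=1)            # truth from the latent times
print(np.round(true_surv, 3), np.round(km_estimate(obs, evt, grid), 3))
```

The gap between the two printed rows is the bias that motivates the copula-based alternatives.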

19.
General autoregressive moving average (ARMA) models extend the traditional ARMA models by removing the assumptions of causality and invertibility. The assumptions are not required under a non‐Gaussian setting for the identifiability of the model parameters, in contrast to the Gaussian setting. We study M‐estimation for general ARMA processes with infinite variance, where the distribution of innovations is in the domain of attraction of a non‐Gaussian stable law. Following the approach taken by Davis et al. (1992) and Davis (1996), we derive a functional limit theorem for random processes based on the objective function, and establish asymptotic properties of the M‐estimator. We also consider bootstrapping the M‐estimator and extend the results of Davis & Wu (1997) to the present setting so that statistical inferences are readily implemented. Simulation studies are conducted to evaluate the finite sample performance of the M‐estimation and bootstrap procedures. An empirical example of financial time series is also provided.
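A hedged toy version of the estimation problem: a causal AR(1) (not the general non-causal, non-invertible case studied above) driven by symmetric alpha-stable innovations with infinite variance, where the autoregressive coefficient is recovered by an M-estimator with absolute-value loss via a crude grid search. Parameter values and the loss function are illustrative choices.

```python
import numpy as np
from scipy import stats

n, phi_true = 2000, 0.6
eps = stats.levy_stable.rvs(alpha=1.5, beta=0.0, size=n, random_state=42)  # infinite-variance noise
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + eps[t]

# M-estimation with absolute-value (LAD) loss over a grid of candidate coefficients
grid = np.linspace(-0.95, 0.95, 381)
loss = [np.abs(x[1:] - phi * x[:-1]).sum() for phi in grid]
print(grid[int(np.argmin(loss))])   # typically close to phi_true despite the heavy tails
```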

20.
In this paper, we consider some results on distribution theory of multivariate progressively Type‐II censored order statistics. We also establish some characterizations of Freund's bivariate exponential distribution based on the lack of memory property.
