Similar Literature
A total of 20 similar documents were found (search time: 15 ms).
1.
In clinical trials, a covariate-adjusted response-adaptive (CARA) design allows a subject newly entering a trial a better chance of being allocated to a superior treatment regimen based on cumulative information from previous subjects, and adjusts the allocation according to individual covariate information. Since this design allocates subjects sequentially, it is natural to apply a sequential method for estimating the treatment effect in order to make the data analysis more efficient. In this paper, we study the sequential estimation of the treatment effect for a general CARA design. A stopping criterion is proposed such that the estimates satisfy a prescribed precision when sampling is stopped. The properties of the estimates and the stopping time are obtained under the proposed stopping rule. In addition, we show that the asymptotic properties of the allocation function, under the proposed stopping rule, are the same as those obtained in the non-sequential, fixed sample size counterpart. We then illustrate the performance of the proposed procedure with simulation results using logistic models. Properties such as the coverage probability of the treatment effect, the correct allocation proportion, and the average sample size are discussed for diverse combinations of initial sample sizes and tuning parameters in the utility function.
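As a rough illustration of such a precision-based stopping rule, the sketch below recruits subjects one at a time, refits a logistic model, and stops once the confidence interval half-width for the treatment effect drops below a bound d. It uses plain 50:50 randomization instead of a CARA allocation rule, and the burn-in size, precision bound, and model coefficients are illustrative assumptions, not values from the paper.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)

def sequential_trial(n0=50, n_max=2000, d=0.25, alpha=0.05):
    """Recruit one subject at a time; stop once the CI half-width for the
    treatment effect (log odds ratio) falls below the precision bound d."""
    z = norm.ppf(1 - alpha / 2)
    trt, cov, y = [], [], []
    for n in range(1, n_max + 1):
        t = int(rng.integers(0, 2))       # placeholder 50:50 allocation; a CARA
                                          # rule would use past data + covariates
        x = rng.normal()
        eta = -0.5 + 1.0 * t + 0.8 * x    # illustrative true logistic model
        trt.append(t)
        cov.append(x)
        y.append(rng.binomial(1, 1.0 / (1.0 + np.exp(-eta))))
        if n >= n0:                       # burn-in before checking the rule
            X = sm.add_constant(np.column_stack([trt, cov]))
            fit = sm.Logit(y, X).fit(disp=0)
            se = fit.bse[1]               # s.e. of the treatment-effect estimate
            if z * se <= d:               # prescribed precision reached: stop
                break
    return n, fit.params[1], se

n, beta, se = sequential_trial()
print(f"stopped at n={n}, estimated effect {beta:.3f} (s.e. {se:.3f})")
```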

2.
Allocation of samples in stratified and/or multistage sampling is one of the central issues of sampling theory. In a survey of a population, constraints on the precision of estimators of subpopulation parameters often have to be taken care of during the allocation of the sample. Such issues are often solved with mathematical programming procedures. In many situations it is desirable to allocate the sample in a way that forces the precision of estimates at the subpopulation level to be both optimal and identical, while constraints on the total (expected) size of the sample (or samples, in two-stage sampling) are imposed. Here our main concern is two-stage sampling schemes. We show that, for a wide class of sampling plans, this problem has an elegant mathematical and computational solution. This is achieved through a suitable definition of the optimization problem, which enables us to solve it in a linear algebra setting involving eigenvalues and eigenvectors of matrices defined in terms of some population quantities. As a final result, we obtain a very simple and relatively universal method for calculating the subpopulation optimal and equal-precision allocation, based on one of the most standard algorithms of linear algebra (available, e.g., in R software). The theoretical solutions are illustrated through a numerical example based on the Labour Force Survey. Finally, we stress that the method we describe accommodates, quite automatically, different levels of precision priority for subpopulations.
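The paper's eigenvalue solution targets two-stage schemes; as a point of reference, the sketch below shows the much simpler one-stage analogue of equal-precision allocation, where the variance of every subpopulation mean is forced to a common value under a fixed total sample size (finite-population correction ignored). It conveys the equal-precision idea only, not the authors' eigenproblem.

```python
import numpy as np

def equal_precision_allocation(S, n_total):
    """One-stage equal-precision allocation (fpc ignored): require
    S_h^2 / n_h = V for every subpopulation h with sum(n_h) = n_total,
    which forces n_h to be proportional to S_h^2."""
    S = np.asarray(S, dtype=float)
    n_h = n_total * S**2 / np.sum(S**2)
    common_var = S[0]**2 / n_h[0]    # identical across strata by construction
    return n_h, common_var

n_h, v = equal_precision_allocation(S=[4.0, 9.0, 6.5], n_total=600)
print(np.round(n_h, 1), f"common variance of each stratum mean: {v:.4f}")
```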

3.
When conducting research with controlled experiments, sample size planning is one of the important decisions that researchers have to make. However, current methods do not adequately address variance heterogeneity under cost constraints when comparing several treatment means. This paper proposes a sample size allocation ratio for the fixed-effect heterogeneous analysis of variance when group variances are unequal and sampling and/or variable costs are constrained. The efficient sample size allocation is determined either to minimize the total cost for a designated power or to maximize the power for a given total cost. The proposed method is then verified using the index of relative efficiency together with the corresponding total cost and total sample size needed. We also apply our method to a pain management trial to decide an efficient sample size. Simulation studies show that the proposed sample size formulas are efficient in terms of statistical power. SAS and R codes are provided in the appendix for easy application.
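The allocation the paper refines builds on the classical square-root rule for heterogeneous variances and costs, n_i proportional to sigma_i / sqrt(c_i). A minimal sketch of that baseline rule, scaled to exhaust a cost budget, follows; the variances, unit costs, and budget are made-up inputs, and the paper's power-based refinements are not reproduced.

```python
import numpy as np

def cost_optimal_allocation(sigma, cost, total_cost):
    """Square-root rule for unequal variances: n_i proportional to
    sigma_i / sqrt(c_i), scaled so the budget sum(c_i * n_i) is exhausted."""
    sigma = np.asarray(sigma, dtype=float)
    cost = np.asarray(cost, dtype=float)
    w = sigma / np.sqrt(cost)
    n = total_cost * w / np.sum(cost * w)
    return n

sigma, cost = [2.0, 3.5, 5.0], [1.0, 1.5, 4.0]
n = cost_optimal_allocation(sigma, cost, total_cost=300.0)
print(np.round(n, 1), "budget used:", round(float(np.dot(cost, n)), 1))
```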

4.
In stratified sampling, methods for the allocation of effort among strata usually rely on some measure of within-stratum variance. If we do not have enough information about these variances, adaptive allocation can be used. In adaptive allocation designs, surveys are conducted in two phases: information from the first phase is used to allocate the remaining units among the strata in the second phase. Brown et al. [Adaptive two-stage sequential sampling, Popul. Ecol. 50 (2008), pp. 239–245] introduced an adaptive allocation sampling design, in which the final sample size was random, together with an unbiased estimator. Here, we derive an unbiased variance estimator for that design and consider a related design in which the final sample size is fixed; a fixed final sample size can make survey planning easier. We introduce a biased Horvitz–Thompson type estimator and a biased sample mean type estimator for these sampling designs. We conduct two simulation studies, on honey producers in Kurdistan and on synthetic zirconium distribution in a region on the moon. Results show that the introduced estimators are more efficient than the available estimators for both the variable and the fixed sample size designs, as well as the conventional unbiased estimator of the stratified simple random sampling design. To evaluate the efficiency of the introduced designs and estimators further, we also review some well-known adaptive allocation designs and compare their estimators with the introduced ones; simulation results again show that the introduced estimators are more efficient.
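A generic sketch of the two-phase idea (not the Brown et al. design or the new estimators): spend part of a fixed budget on a first phase, estimate stratum standard deviations from it, and allocate the fixed remainder Neyman-style. Stratum populations and phase sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def adaptive_allocation(strata, n1_per_stratum, n_total):
    """Phase 1: equal allocation to estimate stratum SDs.
    Phase 2: allocate the remaining fixed budget proportional to N_h * s_h
    (rounding down; distributing the leftover units is left aside)."""
    phase1 = [rng.choice(pop, n1_per_stratum, replace=False) for pop in strata]
    s = np.array([np.std(x, ddof=1) for x in phase1])
    N = np.array([len(pop) for pop in strata], dtype=float)
    n2_total = n_total - n1_per_stratum * len(strata)
    n2 = np.floor(n2_total * N * s / np.sum(N * s)).astype(int)
    return s, n2

strata = [rng.gamma(shape=k, scale=10.0, size=500) for k in (0.5, 2.0, 8.0)]
s, n2 = adaptive_allocation(strata, n1_per_stratum=15, n_total=150)
print("phase-1 SDs:", np.round(s, 2), " phase-2 allocation:", n2)
```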

5.
In clinical trials with binary endpoints, the required sample size depends not only on the specified type I error rate, the desired power, and the treatment effect, but also on the overall event rate, which is usually uncertain. The internal pilot study design has been proposed to overcome this difficulty: nuisance parameters required for sample size calculation are re-estimated during the ongoing trial, and the sample size is recalculated accordingly. We performed extensive simulation studies to investigate the characteristics of the internal pilot study design for two-group superiority trials where the treatment effect is captured by the relative risk. As the performance of the sample size recalculation procedure crucially depends on the accuracy of the applied sample size formula, we first explored the precision of three approximate sample size formulae proposed in the literature for this situation. It turned out that the unequal variance asymptotic normal formula outperforms the other two, especially in the case of unbalanced sample size allocation. Using this formula for sample size recalculation in the internal pilot study design ensures that the desired power is achieved even if the overall rate is mis-specified in the planning phase. The maximum inflation of the type I error rate observed for the internal pilot study design is small and lies below the maximum excess that occurred for the fixed sample size design.
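One common "unequal variance" asymptotic normal calculation works on the log relative-risk scale with group-specific binomial variances; a sketch follows. Whether this matches the exact formula the authors single out is an assumption on my part, and the rates and allocation ratio are illustrative.

```python
import numpy as np
from scipy.stats import norm

def n_for_relative_risk(p2, rr, alloc_ratio=1.0, alpha=0.05, power=0.8):
    """Asymptotic-normal sample size on the log relative-risk scale with
    group-specific variances; p2 = control event rate, rr = p1/p2, and
    alloc_ratio = n2/n1."""
    p1 = rr * p2
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var = (1 - p1) / p1 + (1 - p2) / (alloc_ratio * p2)  # n1 * Var(log RR-hat)
    n1 = int(np.ceil(z**2 * var / np.log(rr) ** 2))
    return n1, int(np.ceil(alloc_ratio * n1))

print(n_for_relative_risk(p2=0.30, rr=1.5))                 # balanced
print(n_for_relative_risk(p2=0.30, rr=1.5, alloc_ratio=2))  # 1:2 allocation
```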

6.
Response-adaptive (RA) allocation designs can skew the allocation of incoming subjects toward the better-performing treatment group based on previously accrued responses. While unstable estimators and increased variability can adversely affect adaptation in early trial stages, Bayesian methods can be implemented with decreasingly informative priors (DIPs) to overcome these difficulties. DIPs have previously been used for binary outcomes to constrain adaptation early in the trial, yet gradually increase adaptation as subjects accrue. We extend the DIP approach to RA designs for continuous outcomes, primarily in the normal conjugate family, by functionalizing the prior effective sample size to equal the unobserved sample size. We compare this effective-sample-size DIP approach to other DIP formulations. Further, we consider various allocation equations and assess their behavior under DIPs. Simulated clinical trials comparing these approaches with traditional frequentist and Bayesian RA designs, as well as balanced designs, show that the natural lead-in approaches maintain improved treatment with lower variability and greater power.
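A minimal sketch of the effective-sample-size idea in the normal conjugate family with known variance: the prior effective sample size is set to the unobserved sample size n_max − n, so the prior washes out exactly as subjects accrue. The allocation equation used here (posterior probability that arm A is better) is just one plausible choice, not necessarily one of the paper's.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def dip_posterior(y, n_max, mu0=0.0, sigma=1.0):
    """Normal conjugate posterior for one arm under a decreasingly
    informative prior: prior effective sample size = n_max - n, i.e. the
    number of subjects not yet observed."""
    n = len(y)
    n0 = max(n_max - n, 1e-8)             # prior ESS shrinks as data accrue
    post_mean = (n0 * mu0 + n * np.mean(y)) / (n0 + n) if n else mu0
    post_sd = sigma / np.sqrt(n0 + n)
    return post_mean, post_sd

yA = rng.normal(0.4, 1.0, 20)             # illustrative outcomes, arm A
yB = rng.normal(0.0, 1.0, 20)             # illustrative outcomes, arm B
mA, sA = dip_posterior(yA, n_max=100)
mB, sB = dip_posterior(yB, n_max=100)
# one possible allocation equation: P(next subject -> A) = P(mu_A > mu_B | data)
p_alloc_A = norm.cdf((mA - mB) / np.hypot(sA, sB))
print(f"P(allocate next subject to A) = {p_alloc_A:.3f}")
```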

7.
For time-to-event data, the power of the two-sample logrank test for the comparison of two treatment groups can be greatly influenced by the ratio of the number of patients in each of the treatment groups. Despite the possible loss of power, unequal allocations may be of interest due to a need to collect more data on one of the groups or to considerations related to the acceptability of the treatments to patients. Investigators pursuing such designs may be interested in the cost of the unbalanced design relative to a balanced design with respect to the total number of patients required for the study. We present graphical displays to illustrate the sample size adjustment factor, or ratio of the sample size required by an unequal allocation compared to the sample size required by a balanced allocation, for various survival rates, treatment hazard ratios, and sample size allocation ratios. These graphical displays conveniently summarize information in the literature and provide a useful tool for planning sample sizes for the two-sample logrank test.
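The adjustment factor can be made concrete with the standard Schoenfeld approximation for the logrank test, under which the required number of events scales as 1/(pi(1-pi)) in the allocation proportion pi, so a k:1 split inflates the balanced sample size by 0.25/(pi(1-pi)). This sketch uses that textbook approximation, which may differ in detail from the formula behind the paper's displays.

```python
import numpy as np
from scipy.stats import norm

def logrank_total_n(hr, p_event, alloc=0.5, alpha=0.05, power=0.8):
    """Schoenfeld-style total sample size for the two-sample logrank test;
    alloc = proportion of patients in group 1, p_event = overall P(event)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    events = z**2 / (alloc * (1 - alloc) * np.log(hr) ** 2)
    return int(np.ceil(events / p_event))

balanced = logrank_total_n(hr=0.7, p_event=0.6)
for k in (1, 2, 3):                       # k:1 allocation ratios
    n = logrank_total_n(hr=0.7, p_event=0.6, alloc=k / (k + 1))
    print(f"{k}:1 allocation -> N = {n}, adjustment factor = {n / balanced:.3f}")
```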

8.
This paper describes an approach for calculating sample size for population pharmacokinetic experiments that involve hypothesis testing based on multi-group comparisons to detect differences in parameters between groups under mixed-effects modelling. This approach extends what has been described for generalized linear models and nonlinear population pharmacokinetic models involving only binary covariates to more complex nonlinear population pharmacokinetic models. The structural nonlinear model is linearized around the random effects to obtain the marginal model, and hypothesis testing involving the model parameters is based on Wald's test. This approach provides an efficient and fast method for calculating sample size for hypothesis testing in population pharmacokinetic models, and can also handle design problems such as unequal allocation of subjects to groups and unbalanced sampling times between and within groups. Application to a one-compartment intravenous bolus dose model involving three different hypotheses under different scenarios showed good agreement between the power obtained from NONMEM simulations and the nominal power.

9.
Sampling cost is a crucial factor in sample size planning, particularly when the treatment group is more expensive to sample than the control group. We consider the distribution-free Wilcoxon–Mann–Whitney test for two independent samples and the van Elteren test for a randomized block design, and develop approximate sample size formulas for cases where the distribution of the data is non-normal and/or unknown. For a given statistical power, we derive the optimal sample size allocation ratio under cost constraints, so that the resulting sample sizes minimize either the total cost or the total sample size; conversely, for a given total cost, the optimal sample size allocation is recommended to maximize the statistical power of the test. The proposed formulas are not only innovative, but also quick and easy to use. We also apply real data from a clinical trial to illustrate how to choose the sample size for a randomized two-block design. For nonparametric methods, no existing commercial software for sample size planning has considered the cost factor, so the proposed methods provide important insights into the impact of cost constraints.
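As a stand-in for the paper's own approximations, the sketch below uses Noether's classical sample size formula for the Wilcoxon–Mann–Whitney test and then searches numerically for the allocation fraction that minimizes total cost at fixed power. The effect measure p = P(Y > X) and the unit costs are illustrative, and the van Elteren case is not covered.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def wmw_total_n(p, c, alpha=0.05, power=0.8):
    """Noether's approximate total sample size for the Wilcoxon-Mann-Whitney
    test; p = P(Y > X) under the alternative, c = n1 / N."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z**2 / (12 * c * (1 - c) * (p - 0.5) ** 2)

def cost_optimal_split(p, c1, c2):
    """Allocation fraction c minimizing total cost c1*n1 + c2*n2 at fixed
    power, where N depends on c through Noether's formula."""
    total_cost = lambda c: wmw_total_n(p, c) * (c * c1 + (1 - c) * c2)
    res = minimize_scalar(total_cost, bounds=(0.01, 0.99), method="bounded")
    return res.x, total_cost(res.x)

c_opt, cost = cost_optimal_split(p=0.65, c1=4.0, c2=1.0)  # group 1 costs 4x
print(f"optimal fraction in group 1: {c_opt:.3f}, minimal total cost: {cost:.0f}")
```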

10.
We present a surprising though obvious result that seems to have gone unnoticed until now. In particular, we demonstrate the equivalence of two well-known problems: the optimal allocation of a fixed overall sample size n among L strata under stratified random sampling, and the optimal allocation of the H = 435 seats among the 50 states for apportionment of the U.S. House of Representatives following each decennial census. In spite of the strong similarity manifest in the statements of the two problems, they have not been linked, and they have well-known but different solutions; one solution is not explicitly exact (Neyman allocation), and the other (equal proportions) is exact. We give explicit exact solutions for both and note that the solutions are equivalent; in fact, we conclude by showing that both problems are special cases of a general problem. The result is significant for stratified random sampling in that it explicitly shows how to minimize sampling error when estimating a total T_Y while keeping the final overall sample size fixed at n; this is usually not the case in practice with Neyman allocation, where the resulting final overall sample size might be near n + L after rounding. An example reveals that controlled rounding with Neyman allocation does not always lead to the optimum allocation, that is, an allocation that minimizes variance.
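Below is a sketch of the equal-proportions (Huntington–Hill) algorithm, with stratum weights N_h*S_h playing the role the state populations play in House apportionment; this correspondence is the one the abstract describes, though the exact objective being minimized is stated only in the paper, and the example weights are made up.

```python
import numpy as np

def equal_proportions_allocation(weight, n_total):
    """Exact fixed-n allocation by the method of equal proportions
    (Huntington-Hill): seed every stratum with one unit, then hand out the
    remaining units by the priority value w_h / sqrt(n_h * (n_h + 1))."""
    weight = np.asarray(weight, dtype=float)
    n = np.ones(len(weight), dtype=int)
    for _ in range(n_total - len(weight)):
        priority = weight / np.sqrt(n * (n + 1))
        n[np.argmax(priority)] += 1
    return n

# the weights N_h * S_h play the role of state populations in apportionment
Nh = np.array([2000, 6000, 1000, 800])
Sh = np.array([3.0, 1.2, 8.0, 10.0])
print(equal_proportions_allocation(Nh * Sh, n_total=100))
```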

11.
This paper considers the design of accelerated life test (ALT) sampling plans under Type I progressive interval censoring with random removals. We assume that the lifetime of products follows a Weibull distribution. Two levels of constant stress higher than the use condition are used. The sample size and the acceptability constant that satisfy given levels of producer's risk and consumer's risk are found. In particular, the optimal stress level and the allocation proportion are obtained by minimizing the generalized asymptotic variance of the maximum likelihood estimators of the model parameters. Furthermore, for validation purposes, a Monte Carlo simulation is conducted to assess the true probability of acceptance for the derived sampling plans.

12.
We discuss 3 alternative approaches to sample size calculation: traditional sample size calculation based on the power to show a statistically significant effect, sample size calculation based on assurance, and sample size calculation based on a decision-theoretic approach. These approaches are compared head-to-head for clinical trial situations in rare diseases. Specifically, we consider 3 case studies of rare diseases (Lyell disease, adult-onset Still disease, and cystic fibrosis) with the aim of planning the sample size for an upcoming clinical trial. We outline in detail reasonable choices of parameters for these approaches in each of the 3 case studies and calculate sample sizes. We stress that the influence of the input parameters needs to be investigated in all approaches, and we recommend investigating different sample size approaches before finally deciding on the trial size. The sample size is highly influenced by the choice of the treatment effect parameter in all approaches, and by the parameter for the additional cost of the new treatment in the decision-theoretic approach; these should therefore be discussed extensively.
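Of the three approaches, assurance is the easiest to sketch: it is power averaged over a prior on the true effect. The Monte Carlo sketch below does this for a generic two-sample normal comparison; the prior and design values are illustrative and are not taken from the three case studies.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def assurance(n_per_arm, prior_mean, prior_sd, sigma=1.0, alpha=0.05,
              n_draws=100_000):
    """Assurance = power averaged over a prior on the true effect: draw
    delta ~ N(prior_mean, prior_sd^2) and average the normal-theory power
    of a two-sample z-test (two-sided, lower rejection tail ignored)."""
    delta = rng.normal(prior_mean, prior_sd, n_draws)
    se = sigma * np.sqrt(2.0 / n_per_arm)
    power = norm.sf(norm.ppf(1 - alpha / 2) - delta / se)
    return power.mean()

for n in (20, 50, 100, 200):
    print(n, round(assurance(n, prior_mean=0.4, prior_sd=0.3), 3))
```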

13.
In practice, it is important to find optimal allocation strategies for continuous responses with multiple treatments under various optimization criteria. In this article, we focus on exponential responses. For a multivariate test of homogeneity, we obtain the optimal allocation strategies that maximize power while (1) fixing the sample size and (2) fixing the expected total responses. The doubly adaptive biased coin design [Hu, F., Zhang, L.-X., 2004. Asymptotic properties of doubly adaptive biased coin designs for multi-treatment clinical trials. The Annals of Statistics 32, 268–301] is then used to implement the optimal allocation strategies. Simulation results show that the proposed procedures have advantages over complete randomization from both inferential (power) and ethical standpoints on average. It is worth noting that optimal allocation strategies can usually be implemented numerically for other continuous responses, although a closed form of the optimal allocation is usually hard to obtain theoretically.
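For two treatments, the Hu–Zhang allocation function that drives the doubly adaptive biased coin design has a simple closed form; a sketch follows. In the paper's setting the target allocation y would come from the optimal strategies derived for exponential responses, and the multi-treatment version is analogous; here y and the tuning parameter gamma are illustrative.

```python
import numpy as np

def dbcd_probability(x, y, gamma=2.0):
    """Hu-Zhang doubly adaptive biased coin (two treatments): probability of
    assigning the next subject to treatment 1, given the current proportion x
    on treatment 1 and the estimated target allocation y."""
    num = y * (y / x) ** gamma
    den = num + (1 - y) * ((1 - y) / (1 - x)) ** gamma
    return num / den

print(dbcd_probability(x=0.40, y=0.60))   # under-represented arm is favoured
print(dbcd_probability(x=0.60, y=0.60))   # exactly y = 0.6 when on target
```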

14.

Engineers who conduct reliability tests need to choose the sample size when designing a test plan. The model parameters and quantiles are the typical quantities of interest. The large-sample procedure relies on the property that the distribution of t-like quantities is close to the standard normal in large samples. In this paper, we use a new procedure, based on both simulation and asymptotic theory, to determine the sample size for a test plan. Unlike the complete-data case, the t-like quantities are not in general pivotal when data are time censored. However, we show that the distribution of the t-like quantities depends only on the expected proportion failing, and we obtain their distributions by simulation for both the complete and time-censored cases when the data follow a Weibull distribution. We find that the large-sample procedure usually underestimates the sample size, even when that sample size is said to be 200 or more. The sample size given by the proposed procedure ensures the requested nominal accuracy and confidence of the estimation when the test plan results in complete or time-censored data. Some useful figures displaying the required sample size for the new procedure are also presented.
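A sketch of the simulation side of such a procedure: generate time-censored Weibull samples, fit by maximum likelihood, and tabulate the empirical distribution of the t-like quantity for one parameter. The BFGS inverse-Hessian is used here as a cheap standard-error approximation, and all design values (n, censoring time, true parameters) are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

def neg_loglik(theta, t, delta):
    """Weibull negative log-likelihood with time (right) censoring;
    theta = (log shape, log scale), delta = 1 for failures, 0 if censored."""
    k, lam = np.exp(theta)
    z = (t / lam) ** k
    failures = delta * (np.log(k) + (k - 1) * np.log(t) - k * np.log(lam))
    return -(np.sum(failures) - np.sum(z))

def t_like_sample(n, t_cens, k=2.0, lam=1.0, B=500):
    """Simulate the 't-like' quantity for log(scale): (estimate - truth)/se.
    Its distribution depends mainly on the expected proportion failing."""
    out = []
    for _ in range(B):
        t = lam * rng.weibull(k, n)
        delta = (t < t_cens).astype(float)
        t = np.minimum(t, t_cens)
        res = minimize(neg_loglik, x0=np.log([1.0, np.mean(t)]),
                       args=(t, delta), method="BFGS")
        se = np.sqrt(res.hess_inv[1, 1])  # BFGS inverse-Hessian approximation
        out.append((res.x[1] - np.log(lam)) / se)
    return np.array(out)

q = np.quantile(t_like_sample(n=30, t_cens=0.8), [0.05, 0.95])
print("simulated 5%/95% points:", np.round(q, 2), "vs normal +/-1.645")
```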

15.
The extreme value distribution has been extensively used to model natural phenomena such as rainfall and floods, as well as lifetimes and material strengths. Maximum likelihood estimation (MLE) of the parameters of the extreme value distribution leads to likelihood equations that have to be solved numerically, even when the complete sample is available. In this paper, we discuss point and interval estimation based on progressively Type-II censored samples. Through an approximation in the likelihood equations, we obtain explicit estimators that approximate the MLEs. Using these approximate estimators as starting values, we obtain the MLEs by an iterative method and examine their bias and mean squared error numerically. The approximate estimators compare quite favorably to the MLEs in terms of both bias and efficiency. Results of the simulation study, however, show that the probability coverages of the pivotal quantities (for the location and scale parameters) based on asymptotic normality are unsatisfactory for both estimators, particularly when the effective sample size is small. We therefore suggest the use of unconditional simulated percentage points of these pivotal quantities for the construction of confidence intervals. Results are presented for a wide range of sample sizes and different progressive censoring schemes. We conclude with an illustrative example.

16.
The optimal sample size for comparing two Poisson rates when the counts are underreported is investigated. We consider two sampling scenarios. We first consider the case where only underreported data will be sampled and rely on informative prior distributions to obtain posterior identifiability. We also consider the case where both an expensive infallible search method and a fallible method are available. An interval-based sample size criterion is used in both sampling scenarios. Since the posterior distributions of the two rates are functions of confluent hypergeometric and hypergeometric functions, simulation-based methods are necessary to carry out the sample size determination scheme.

17.
The identification of factors that increase the chances of a certain disease is one of the classical and central issues in epidemiology. In this context, a typical measure of the association between a disease and a risk factor is the odds ratio. We deal with design problems that arise for Bayesian inference on the odds ratio in the analysis of case–control studies. We consider sample size determination and allocation criteria for both interval estimation and hypothesis testing. These criteria are then employed to determine the sample size and the proportions of units to be assigned to cases and controls for planning a study on the association between the incidence of non-Hodgkin's lymphoma and exposure to pesticides, eliciting prior information from a previous study.
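Interval-based Bayesian sample size determination of this kind is often done by preposterior simulation; the sketch below implements a generic average-length criterion for the posterior of the log odds ratio under Beta priors on the exposure probabilities in cases and controls. The priors and the equal case:control split are illustrative, not the ones elicited in the paper.

```python
import numpy as np

rng = np.random.default_rng(11)

def avg_ci_length(n_cases, n_controls, a1, b1, a0, b0, sims=400, draws=2000):
    """Average-length criterion for the 95% posterior credible interval of
    the log odds ratio, via preposterior simulation with Beta priors on the
    exposure probabilities among cases (a1, b1) and controls (a0, b0)."""
    lengths = []
    for _ in range(sims):
        p1, p0 = rng.beta(a1, b1), rng.beta(a0, b0)   # "true" exposure rates
        x1 = rng.binomial(n_cases, p1)
        x0 = rng.binomial(n_controls, p0)
        post1 = rng.beta(a1 + x1, b1 + n_cases - x1, draws)
        post0 = rng.beta(a0 + x0, b0 + n_controls - x0, draws)
        log_or = np.log(post1 / (1 - post1)) - np.log(post0 / (1 - post0))
        lo, hi = np.quantile(log_or, [0.025, 0.975])
        lengths.append(hi - lo)
    return np.mean(lengths)

for n in (50, 100, 200):
    print(n, round(avg_ci_length(n, n, a1=2, b1=6, a0=1, b0=9), 3))
```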

18.
This paper considers constant stress accelerated life tests terminated by a Type II censoring regime at one of the stress levels. We consider a model based on Weibull distributions with constant shape and a log-linear link between scale and the stress factor. We obtain expectations associated with the likelihood function, and use these to obtain asymptotically valid variances and correlations for maximum likelihood estimates of model parameters. We illustrate their calculation, and assess agreement with observed counterparts for finite samples in simulation experiments. We then use moments to compare the information obtained from variants of the design, and show that, with an appropriate allocation of items to stress levels, the design yields better estimates of model parameters and related quantities than a single stress experiment.

19.
In this article, we consider the Bayes and empirical Bayes problem of estimating the current population mean of a finite population when sample data are available from other similar (m-1) finite populations. We investigate a general class of linear estimators and obtain the optimal linear Bayes estimator of the finite population mean under a squared error loss function that incorporates the cost of sampling. The optimal linear Bayes estimator and the sample size are obtained as functions of the parameters of the prior distribution. The corresponding empirical Bayes estimates are obtained by replacing the unknown hyperparameters with their respective consistent estimates. A Monte Carlo study is conducted to evaluate the performance of the proposed empirical Bayes procedure.

20.
In stratified sample surveys, the problem of determining the optimum allocation is well known, owing to articles published in 1923 by Tschuprow and in 1934 by Neyman. These articles suggest the optimum sample sizes to be selected from each stratum, for which the sampling variance of the estimator is minimum for a fixed total cost of the survey, or the cost is minimum for a fixed precision of the estimator. If more than one characteristic is to be measured on each selected unit of the sample, that is, if the survey is a multi-response survey, then the problem of determining the optimum sample sizes for the various strata becomes more complex, because no single optimality criterion suits all the characteristics. Many authors have discussed compromise criteria that provide a compromise allocation, optimum for all characteristics at least in some sense. Almost all of these authors worked out the compromise allocation by minimizing some function of the sampling variances of the estimators under a single cost constraint. A serious objection to this approach is that the variances are not unit free, so minimizing any function of the variances may not be an appropriate objective for obtaining a compromise allocation; this suggests using coefficients of variation instead of variances. In the present article, the problem of compromise allocation is formulated as a multi-objective non-linear programming problem. By linearizing the non-linear objective functions at their individual optima, the problem is approximated by an integer linear programming problem, and the goal programming technique is then used to obtain a solution to the approximated problem.
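The paper solves a linearized integer goal programme; the sketch below instead solves a continuous relaxation of a related compromise criterion (minimizing the total relative loss in squared CV against each characteristic's own Neyman optimum) with a generic NLP solver, just to make the ingredients concrete. All population quantities are made up, and integer rounding is left aside.

```python
import numpy as np
from scipy.optimize import minimize

# illustrative data: H = 3 strata, J = 2 characteristics (fpc ignored)
W = np.array([0.5, 0.3, 0.2])              # stratum weights N_h / N
S = np.array([[10.0, 3.0],                 # S[h, j]: SD of characteristic j
              [20.0, 1.5],                 # in stratum h
              [40.0, 6.0]])
ybar = np.array([50.0, 12.0])              # population means, per characteristic
n_total = 300

def cv2(n):
    """Squared coefficient of variation of each characteristic's estimator."""
    return (W[:, None] ** 2 * S ** 2 / n[:, None]).sum(axis=0) / ybar**2

# individual optima: Neyman allocation computed separately per characteristic
cv2_star = np.array([cv2(n_total * W * S[:, j] / np.sum(W * S[:, j]))[j]
                     for j in range(S.shape[1])])

# compromise: minimize the total relative loss in squared CV versus the optima
res = minimize(lambda n: np.sum(cv2(n) / cv2_star),
               x0=np.full(3, n_total / 3), method="SLSQP",
               bounds=[(2, None)] * 3,
               constraints={"type": "eq", "fun": lambda n: n.sum() - n_total})
print("compromise allocation:", np.round(res.x, 1),
      "relative losses:", np.round(cv2(res.x) / cv2_star, 3))
```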
