Similar Articles
20 similar articles found (search time: 31 ms)
1.
Two-phase sampling is often used for estimating a population total or mean when the cost per unit of collecting auxiliary variables, x, is much smaller than the cost per unit of measuring a characteristic of interest, y. In the first phase, a large sample s1 is drawn according to a specified sampling design p(s1), and auxiliary data x are observed for the units i ∈ s1. Given the first-phase sample s1, a second-phase sample s2 is selected from s1 according to a specified sampling design p(s2 | s1), and (y, x) is observed for the units i ∈ s2. In some cases, the population totals of some components of x may also be known. Two-phase sampling is used for stratification at the second phase or both phases and for regression estimation. Horvitz–Thompson-type variance estimators are used for variance estimation. However, the Horvitz–Thompson (Horvitz & Thompson, J. Amer. Statist. Assoc., 1952) variance estimator in uni-phase sampling is known to be highly unstable and may take negative values when the units are selected with unequal probabilities. On the other hand, the Sen–Yates–Grundy variance estimator is relatively stable and non-negative for several unequal probability sampling designs with fixed sample sizes. In this paper, we extend the Sen–Yates–Grundy (Sen, J. Ind. Soc. Agric. Statist., 1953; Yates & Grundy, J. Roy. Statist. Soc. Ser. B, 1953) variance estimator to two-phase sampling, assuming a fixed first-phase sample size and a fixed second-phase sample size given the first-phase sample. We apply the new variance estimators to two-phase sampling designs with stratification at the second phase or both phases. We also develop Sen–Yates–Grundy-type variance estimators of the two-phase regression estimators that make use of the first-phase auxiliary data and known population totals of some of the auxiliary variables.
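As a concrete illustration of the estimator being extended, the Sen–Yates–Grundy variance estimator of the Horvitz–Thompson total for a fixed-size, uni-phase unequal-probability design can be sketched as follows (a minimal numerical sketch, not the paper's two-phase extension; the function name and inputs are illustrative):

```python
import numpy as np

def syg_variance(y, pi, pi_joint):
    """Sen-Yates-Grundy variance estimator of the Horvitz-Thompson
    total for a fixed-size unequal-probability sample.
    y        : (n,) observed values for the sampled units
    pi       : (n,) first-order inclusion probabilities
    pi_joint : (n, n) joint inclusion probabilities (diagonal unused)
    Non-negative whenever pi_i * pi_j >= pi_ij for all pairs."""
    n = len(y)
    t = y / pi                      # expanded values y_i / pi_i
    v = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                v += ((pi[i] * pi[j] - pi_joint[i, j]) / pi_joint[i, j]
                      * (t[i] - t[j]) ** 2)
    return 0.5 * v
```

For simple random sampling without replacement this reduces to the familiar N²(1 − n/N)s²/n, which provides a quick check of the implementation.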

2.
Two-stage studies may be chosen optimally by minimising a single characteristic like the maximum sample size. However, given that an investigator will initially select a null treatment effect and the clinically relevant difference, it is better to choose a design that also considers the expected sample size for each of these values. The maximum sample size and the two expected sample sizes are here combined to produce an expected loss function to find designs that are admissible. Given the prior odds of success and the importance of the total sample size, minimising the expected loss gives the optimal design for this situation. A novel triangular graph to represent the admissible designs helps guide the decision-making process. The H0-optimal, H1-optimal, H0-minimax and H1-minimax designs are all particular cases of admissible designs. The commonly used H0-optimal design is rarely good when allowing stopping for efficacy. Additionally, the δ-minimax design, which minimises the maximum expected sample size, is sometimes admissible under the loss function. However, the results can be varied and each situation will require the evaluation of all the admissible designs. Software to do this is provided. Copyright © 2012 John Wiley & Sons, Ltd.
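The loss-function idea can be sketched numerically for a simplified single-arm two-stage design with futility stopping only (the abstract's designs also allow efficacy stopping; the function names, the binomial setup, and the futility-only rule are illustrative assumptions, not the paper's method):

```python
from scipy.stats import binom

def expected_n(n1, r1, n2, p):
    """Expected sample size of a two-stage design that enrols n1,
    stops for futility if responses <= r1, else enrols n2 more."""
    pet = binom.cdf(r1, n1, p)      # probability of early termination
    return n1 + (1 - pet) * n2

def expected_loss(n1, r1, n2, p0, p1, w0, w1):
    """Weighted combination of E(N | H0), E(N | H1) and the maximum
    sample size, mirroring the admissibility criterion described
    above (weights w0, w1, 1 - w0 - w1)."""
    nmax = n1 + n2
    return (w0 * expected_n(n1, r1, n2, p0)
            + w1 * expected_n(n1, r1, n2, p1)
            + (1 - w0 - w1) * nmax)
```

Scanning candidate (n1, r1, n2) triples and keeping those that minimise this loss for some weight pair is one way to enumerate admissible designs.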

3.
To efficiently and completely correct for selection bias in adaptive two-stage trials, uniformly minimum variance conditionally unbiased estimators (UMVCUEs) have been derived for trial designs with normally distributed data. However, a common assumption is that the variances are known exactly, which is unlikely to be the case in practice. We extend the work of Cohen and Sackrowitz (Statistics & Probability Letters, 8(3):273-278, 1989), who proposed a UMVCUE for the best performing candidate in the normal setting with a common unknown variance. Our extension allows for multiple selected candidates, as well as unequal stage-one and stage-two sample sizes.

4.
This paper proposes an affine-invariant test extending the univariate Wilcoxon signed-rank test to the bivariate location problem. It gives two versions of the null distribution of the test statistic. The first version leads to a conditionally distribution-free test which can be used with any sample size. The second version can be used for larger sample sizes and has a limiting χ² distribution with 2 degrees of freedom under the null hypothesis. The paper investigates the relationship with a test proposed by Jan & Randles (1994). It shows that the Pitman efficiency of this test relative to the new test is equal to 1 for elliptical distributions but that the two tests are not necessarily equivalent for non-elliptical distributions. These facts are also demonstrated empirically in a simulation study. The new test has the advantage of not requiring the assumption of elliptical symmetry, which is needed to perform the asymptotic version of the Jan and Randles test.

5.
We consider the comparison of two formulations in terms of average bioequivalence using the 2 × 2 cross-over design. In a bioequivalence study, the primary outcome is a pharmacokinetic measure, such as the area under the plasma concentration by time curve, which is usually assumed to have a lognormal distribution. The criterion typically used for claiming bioequivalence is that the 90% confidence interval for the ratio of the means should lie within the interval (0.80, 1.25), or equivalently that the 90% confidence interval for the difference in the means on the natural log scale should be within the interval (−0.2231, 0.2231). We compare the gold standard method for calculation of the sample size, based on the non-central t distribution, with those based on the central t and normal distributions. In practice, the differences between the various approaches are likely to be small. Further approximations to the power function are sometimes used to simplify the calculations. These approximations should be used with caution, because the sample size required for a desirable level of power might be under- or overestimated compared to the gold standard method. However, in some situations the approximate methods produce very similar sample sizes to the gold standard method. Copyright © 2005 John Wiley & Sons, Ltd.
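The comparison between the noncentral-t and normal-approximation power calculations can be sketched as follows. This is a simplified sketch, not the paper's exact method: it assumes a balanced 2×2 crossover with standard error σ_w√(2/N), N − 2 degrees of freedom, and a Bonferroni-style lower bound for the joint TOST rejection probability; `tost_power` and its defaults are illustrative names.

```python
import numpy as np
from scipy.stats import nct, norm, t as tdist

def tost_power(N, cv, gmr=0.95, theta=np.log(1.25), alpha=0.05):
    """Approximate TOST power for average bioequivalence in a balanced
    2x2 crossover with N total subjects.  Returns (noncentral-t based
    power, normal-approximation power)."""
    sigma_w = np.sqrt(np.log(1.0 + cv ** 2))   # log-scale within-subject SD
    se = sigma_w * np.sqrt(2.0 / N)
    delta = np.log(gmr)                        # true log-mean difference
    df = N - 2
    tcrit = tdist.ppf(1 - alpha, df)
    # P(reject both one-sided nulls) >= P(reject H01) + P(reject H02) - 1
    p_nct = (nct.cdf(-tcrit, df, (delta - theta) / se)
             - nct.cdf(tcrit, df, (delta + theta) / se))
    zcrit = norm.ppf(1 - alpha)
    p_norm = (norm.cdf((theta - delta) / se - zcrit)
              + norm.cdf((theta + delta) / se - zcrit) - 1.0)
    return max(p_nct, 0.0), max(p_norm, 0.0)
```

Iterating N upward until the noncentral-t power reaches the target reproduces the "gold standard" style of sample size search the abstract refers to.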

6.
Most multivariate statistical techniques rely on the assumption of multivariate normality. The effects of nonnormality on multivariate tests are assumed to be negligible when variance–covariance matrices and sample sizes are equal. Therefore, in practice, investigators usually do not attempt to assess multivariate normality. In this simulation study, the effects of skewed and leptokurtic multivariate data on the Type I error and power of Hotelling's T 2 were examined by manipulating distribution, sample size, and variance–covariance matrix. The empirical Type I error rate and power of Hotelling's T 2 were calculated before and after the application of generalized Box–Cox transformation. The findings demonstrated that even when variance–covariance matrices and sample sizes are equal, small to moderate changes in power still can be observed.
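The statistic under study, the two-sample Hotelling's T², can be computed directly (a minimal sketch of the standard statistic with its exact F reference under multivariate normality; the function name is illustrative):

```python
import numpy as np
from scipy.stats import f as f_dist

def hotelling_t2(X, Y):
    """Two-sample Hotelling's T^2 test with pooled covariance.
    X, Y : (n1, p) and (n2, p) data matrices.
    Returns (T2, p_value); the p-value uses the exact F reference
    distribution valid under multivariate normality."""
    n1, p = X.shape
    n2 = Y.shape[0]
    d = X.mean(axis=0) - Y.mean(axis=0)
    S1 = np.cov(X, rowvar=False)
    S2 = np.cov(Y, rowvar=False)
    Sp = ((n1 - 1) * S1 + (n2 - 1) * S2) / (n1 + n2 - 2)  # pooled covariance
    t2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(Sp, d)
    fstat = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2
    pval = f_dist.sf(fstat, p, n1 + n2 - p - 1)
    return t2, pval
```

A simulation like the abstract's would call this on draws from skewed or leptokurtic distributions and tally how often the p-value falls below the nominal level.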

7.
In prior works, this group demonstrated the feasibility of valid adaptive sequential designs for crossover bioequivalence studies. In this paper, we extend the prior work to optimize adaptive sequential designs over a range of geometric mean test/reference ratios (GMRs) of 70–143% within each of two ranges of intra‐subject coefficient of variation (10–30% and 30–55%). These designs also introduce a futility decision for stopping the study after the first stage if there is sufficiently low likelihood of meeting bioequivalence criteria if the second stage were completed, as well as an upper limit on total study size. The optimized designs exhibited substantially improved performance characteristics over our previous adaptive sequential designs. Even though the optimized designs avoided undue inflation of type I error and maintained power at 80%, their average sample sizes were similar to or less than those of conventional single stage designs. Copyright © 2015 John Wiley & Sons, Ltd.

8.

In many real-life problems one assumes a normal model because the sample histogram looks unimodal, symmetric, and/or the standard tests like the Shapiro–Wilk test favor such a model. However, in reality, the assumption of normality may be misplaced, since the normality tests often fail to detect departures from normality (especially for small sample sizes) when the data actually come from slightly heavier-tailed symmetric unimodal distributions. For this reason it is important to see how the existing normal variance estimators perform when the actual distribution is a t-distribution with k degrees of freedom (d.f.) (t_k-distribution). This note deals with the performance of standard normal variance estimators under t_k-distributions. It is shown that the relative ordering of the estimators is preserved for both the quadratic loss as well as the entropy loss, irrespective of the d.f. and the sample size (provided the risks exist).
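The preserved-ordering claim can be probed by Monte Carlo. The sketch below compares the usual normal-theory variance estimators SS/(n−1), SS/n and SS/(n+1) under quadratic loss when the data are really t-distributed; the function name, the particular divisors, and the simulation settings are illustrative choices, not the note's analytical argument.

```python
import numpy as np

def variance_estimator_risks(n=10, df=5, reps=100_000, seed=0):
    """Monte Carlo risk under quadratic loss (estimate/true_var - 1)^2
    of the estimators SS/(n-1), SS/n, SS/(n+1), where SS is the
    centred sum of squares, when the data follow a t_df law."""
    rng = np.random.default_rng(seed)
    x = rng.standard_t(df, size=(reps, n))
    true_var = df / (df - 2.0)          # variance of a t_df variable
    ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
    risks = {}
    for divisor in (n - 1, n, n + 1):
        est = ss / divisor
        risks[divisor] = np.mean((est / true_var - 1.0) ** 2)
    return risks
```

Under normality the divisor n+1 is known to dominate n, which dominates n−1, and the simulation suggests the same ordering under the heavier-tailed t, consistent with the note's result.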

9.
The planning of bioequivalence (BE) studies, as for any clinical trial, requires a priori specification of an effect size for the determination of power and an assumption about the variance. The specified effect size may be overly optimistic, leading to an underpowered study. The assumed variance can be either too small or too large, leading, respectively, to studies that are underpowered or overly large. There has been much work in the clinical trials field on various types of sequential designs that include sample size reestimation after the trial is started, but these have seen only little use in BE studies. The purpose of this work was to validate at least one such method for crossover design BE studies. Specifically, we considered sample size reestimation for a two-stage trial based on the variance estimated from the first stage. We identified two methods based on Pocock's method for group sequential trials that met our requirement for at most negligible increase in type I error rate.
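The re-estimation step can be sketched as follows. This is an illustrative simplification, not the validated procedure of the paper: it recomputes a total 2×2 crossover sample size from the stage-1 CV estimate using a crude normal approximation to TOST power and a Pocock-type adjusted one-sided alpha of 0.0294; the function name, the formula, and all defaults are assumptions.

```python
import math
from scipy.stats import norm

def reestimated_total_n(cv_hat, n1, gmr=0.95, theta=math.log(1.25),
                        alpha=0.0294, power=0.80):
    """Recompute the total sample size of a two-stage 2x2 crossover
    BE trial from the stage-1 intra-subject CV estimate cv_hat,
    never dropping below the n1 subjects already enrolled."""
    sigma_w = math.sqrt(math.log(1.0 + cv_hat ** 2))  # log-scale SD
    delta = abs(math.log(gmr))
    za, zb = norm.ppf(1 - alpha), norm.ppf(power)
    n = 2.0 * (sigma_w * (za + zb) / (theta - delta)) ** 2
    total = max(n1, math.ceil(n))
    return total + (total % 2)       # continue to an even total size
```

The adjusted alpha is what keeps the two-look procedure's overall type I error near the nominal 5% in a Pocock-style scheme.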

10.
Clinical phase II trials in oncology are conducted to determine whether the activity of a new anticancer treatment is promising enough to merit further investigation. Two‐stage designs are commonly used for this situation to allow for early termination. Designs proposed in the literature so far have the common drawback that the sample sizes for the two stages have to be specified in the protocol and have to be adhered to strictly during the course of the trial. As a consequence, designs that allow a higher extent of flexibility are desirable. In this article, we propose a new adaptive method that allows an arbitrary modification of the sample size of the second stage using the results of the interim analysis or external information while controlling the type I error rate. If the sample size is not changed during the trial, the proposed design shows very similar characteristics to the optimal two‐stage design proposed by Chang et al. (Biometrics 1987; 43:865–874). However, the new design allows the use of mid‐course information for the planning of the second stage, thus meeting practical requirements when performing clinical phase II trials in oncology. Copyright © 2012 John Wiley & Sons, Ltd.

11.
In drug development, bioequivalence studies are used to indirectly demonstrate clinical equivalence of a test formulation and a reference formulation of a specific drug by establishing their equivalence in bioavailability. These studies are typically run as crossover studies. In the planning phase of such trials, investigators and sponsors are often faced with a high variability in the coefficients of variation of the typical pharmacokinetic endpoints such as the area under the concentration curve or the maximum plasma concentration. Adaptive designs have recently been considered to deal with this uncertainty by adjusting the sample size based on the accumulating data. Because regulators generally favor sample size re‐estimation procedures that maintain the blinding of the treatment allocations throughout the trial, we propose in this paper a blinded sample size re‐estimation strategy and investigate its error rates. We show that the procedure, although blinded, can lead to some inflation of the type I error rate. In the context of an example, we demonstrate how this inflation of the significance level can be adjusted for to achieve control of the type I error rate at a pre‐specified level. Furthermore, some refinements of the re‐estimation procedure are proposed to improve the power properties, in particular in scenarios with small sample sizes. Copyright © 2014 John Wiley & Sons, Ltd.

12.
Multivariate hypothesis testing in studies of vegetation is likely to be hindered by unrealistic assumptions when based on conventional statistical methods. This can be overcome by randomization tests. In this paper, the accuracy and power of a MANOVA randomization test are evaluated for one-factor designs and two-factor designs with interaction, using simulated data from three distributions. The randomization test is based on the partitioning of the sum of squares computed from Euclidean distances. In one-factor designs, the effects of sample size and variance inequality were evaluated. The results showed a high level of accuracy. The power curve was highest with the normal distribution, lowest with the uniform, intermediate with the lognormal, and was sensitive to variance inequality. In two-factor designs, three methods of permutation and two statistics were compared. The results showed that permutation of the residuals with the pseudo-F statistic is accurate and can give good power for testing the interaction, with restricted permutation used for testing the main factors.

13.
Investigators and epidemiologists often use statistics based on the parameters of a multinomial distribution. Two main approaches have been developed to assess the inferences of these statistics. The first uses asymptotic formulae which are valid for large sample sizes. The second computes the exact distribution, which performs quite well for small samples. Both present limitations for sample sizes N neither large enough to satisfy the assumption of asymptotic normality nor small enough to allow us to generate the exact distribution. We analytically computed the 1/N corrections of the asymptotic distribution for any statistic based on a multinomial law. We applied these results to the kappa statistic in 2×2 and 3×3 tables. We also compared the coverage probability obtained with the asymptotic and the corrected distributions under various hypothetical configurations of sample size and theoretical proportions. With this method, the estimates of the mean and the variance were greatly improved, as were the 2.5 and 97.5 percentiles of the distribution, allowing us to go down to sample sizes around 20 for data sets that are not too asymmetrical. The order of the difference between the exact and the corrected values was 1/N² for the mean and 1/N³ for the variance.
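For reference, the kappa statistic itself and its simple large-sample variance, the kind of asymptotic quantity the abstract's 1/N corrections refine, can be computed as follows (a minimal sketch; the simple variance formula p_o(1 − p_o)/(N(1 − p_e)²) is the crude asymptotic one, not the corrected version derived in the paper):

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa for a square contingency table of raw counts,
    with the crude large-sample variance p_o(1-p_o)/(N(1-p_e)^2)."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    p = t / n
    po = np.trace(p)                       # observed agreement
    pe = p.sum(axis=1) @ p.sum(axis=0)     # chance-expected agreement
    kappa = (po - pe) / (1 - pe)
    var = po * (1 - po) / (n * (1 - pe) ** 2)
    return kappa, var
```

For a total of N = 100 and moderate proportions, this asymptotic variance is exactly the kind of estimate the paper shows needs order-1/N corrections.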

14.
Before carrying out a full scale bioequivalence trial, it is desirable to conduct a pilot trial to decide if a generic drug product shows promise of bioequivalence. The purpose of a pilot trial is to screen test formulations, and hence small sample sizes can be used. Based on the outcome of the pilot trial, one can decide whether or not a full scale pivotal trial should be carried out to assess bioequivalence. This article deals with the design of a pivotal trial, based on the evidence from the pilot trial. A two-stage adaptive procedure is developed in order to determine the sample size and the decision rule for the pivotal trial, for testing average bioequivalence using the two one-sided test (TOST). Numerical implementation of the procedure is discussed in detail, and the required tables are provided. Numerical results indicate that the required sample sizes could be smaller than that recommended by the FDA for a single trial, especially when the pilot study provides strong evidence in favor of bioequivalence.
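The TOST decision rule referred to above can be sketched from summary statistics (a minimal sketch of the standard two one-sided tests procedure, not the article's two-stage adaptive rule; the function name and inputs are illustrative):

```python
import math
from scipy.stats import t as tdist

def tost_abe(dhat, se, df, theta=math.log(1.25), alpha=0.05):
    """Two one-sided tests (TOST) for average bioequivalence.
    dhat : estimated T-R difference of log-scale means
    se   : its standard error;  df : error degrees of freedom.
    Declares ABE iff both one-sided nulls are rejected, equivalently
    iff the 100(1 - 2*alpha)% CI lies inside (-theta, theta)."""
    tcrit = tdist.ppf(1 - alpha, df)
    lower = dhat - tcrit * se
    upper = dhat + tcrit * se
    return (-theta < lower) and (upper < theta), (lower, upper)
```

With alpha = 0.05 this is the familiar rule that the 90% confidence interval for the log-scale difference must lie within (−0.2231, 0.2231).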

15.
Traditional bioavailability studies assess average bioequivalence (ABE) between the test (T) and reference (R) products under the crossover design with TR and RT sequences. With highly variable (HV) drugs, whose intrasubject coefficient of variation in pharmacokinetic measures is 30% or greater, assertion of ABE becomes difficult due to the large sample sizes needed to achieve adequate power. In 2011, the FDA adopted a more relaxed, yet complex, ABE criterion and supplied a procedure to assess this criterion exclusively under TRR-RTR-RRT and TRTR-RTRT designs. However, designs with more than 2 periods are not always feasible. The present work investigates how to evaluate HV drugs under TR-RT designs. A mixed model with heterogeneous residual variances is used to fit data from TR-RT designs. Under the assumption of zero subject-by-formulation interaction, this basic model is comparable to the FDA-recommended model for TRR-RTR-RRT and TRTR-RTRT designs, suggesting the conceptual plausibility of our approach. To overcome the distributional dependency among summary statistics of model parameters, we develop statistical tests via the generalized pivotal quantity (GPQ). A real-world data example is given to illustrate the utility of the resulting procedures. Our simulation study identifies a GPQ-based testing procedure that evaluates HV drugs under practical TR-RT designs with a desirable type I error rate and reasonable power. In comparison to the FDA's approach, this GPQ-based procedure gives similar performance when the product's intersubject standard deviation is low (≤0.4) and is most useful when practical considerations restrict the crossover design to 2 periods.

16.
The indirect mechanism of action of immunotherapy causes a delayed treatment effect, producing delayed separation of survival curves between the treatment groups, and violates the proportional hazards assumption. Therefore, using the log-rank test in immunotherapy trial design could result in a severe loss of efficiency. Although few statistical methods are available for immunotherapy trial designs that incorporate a delayed treatment effect, recently Ye and Yu proposed the use of a maximin efficiency robust test (MERT) for the trial design. The MERT is a weighted log-rank test that puts less weight on early events and full weight after the delayed period. However, the weight function of the MERT involves an unknown function that has to be estimated from historical data. Here, for simplicity, we propose the use of an approximated maximin test, the V0 test, which is the sum of the log-rank test for the full data set and the log-rank test for the data beyond the lag time point. The V0 test fully uses the trial data and is more efficient than the log-rank test when a lag exists, with relatively little efficiency loss when no lag exists. The sample size formula for the V0 test is derived. Simulations are conducted to compare the performance of the V0 test to the existing tests. A real trial is used to illustrate cancer immunotherapy trial design with a delayed treatment effect.

17.
Hartley's test for homogeneity of k normal‐distribution variances is based on the ratio between the maximum sample variance and the minimum sample variance. In this paper, the author uses the same statistic to test for equivalence of k variances. Equivalence is defined in terms of the ratio between the maximum and minimum population variances, and one concludes equivalence when Hartley's ratio is small. Exact critical values for this test are obtained by using an integral expression for the power function and some theoretical results about the power function. These exact critical values are available both when sample sizes are equal and when sample sizes are unequal. One related result in the paper is that Hartley's test for homogeneity of variances is no longer unbiased when the sample sizes are unequal. The Canadian Journal of Statistics 38: 647–664; 2010 © 2010 Statistical Society of Canada
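The statistic itself is elementary to compute (a minimal sketch; the paper's contribution is the exact critical values, which are not reproduced here):

```python
import numpy as np

def hartley_fmax(samples):
    """Hartley's F_max statistic: ratio of the largest to the smallest
    sample variance across k groups.  Large values speak against
    homogeneity; the equivalence test described above instead
    concludes equivalence of the k variances when the ratio is small."""
    variances = [np.var(s, ddof=1) for s in samples]
    return max(variances) / min(variances)
```

For the equivalence use, the observed ratio would be compared against the paper's exact lower-tail critical values rather than the classical upper-tail homogeneity tables.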

18.
In this paper we define a class of unbalanced designs, denoted by Ck,s,t, for estimating the components of variance in a k-stage nested random-effects linear model. This class contains many of the designs proposed in the literature for nested components-of-variance models. We focus on the three-stage model and discuss the determination of locally optimal designs within this class using a systematic computer search. For large sample sizes we show that approximate optimal designs may be obtained using a limit argument combined with numerical optimization. A comparison of our designs with previously published designs suggests that, in many cases, our designs result in substantial gains in efficiency.

19.
Communications in Statistics: Theory and Methods, 2012, 41(13-14): 2465-2489
The Akaike information criterion, AIC, and Mallows' Cp statistic have been proposed for selecting a smaller number of regressors in multivariate regression models with a fully unknown covariance matrix. All of these criteria are, however, based on the implicit assumption that the sample size is substantially larger than the dimension of the covariance matrix. To obtain a stable estimator of the covariance matrix, it is required that the dimension of the covariance matrix be much smaller than the sample size. When the dimension is close to the sample size, it is necessary to use ridge-type estimators for the covariance matrix. In this article, we use a ridge-type estimator for the covariance matrix and obtain the modified AIC and modified Cp statistic under an asymptotic theory in which both the sample size and the dimension go to infinity. It is numerically shown that these modified procedures perform very well in the sense of selecting the true model in large-dimensional cases.

20.
The size of the two-sample t test is generally thought to be robust against nonnormal distributions if the sample sizes are large. This belief is based on central limit theory, and asymptotic expansions of the moments of the t statistic suggest that robustness may be improved for moderate sample sizes if the variance, skewness, and kurtosis of the distributions are matched, particularly if the sample sizes are also equal.

It is shown that asymptotic arguments such as these can be misleading and that, in fact, the size of the t test can be as large as unity if the distributions are allowed to be completely arbitrary. Restricting the distributions to be identical or symmetric (but otherwise arbitrary) does not guarantee that the size can be controlled either, but controlling the tail-heaviness of the distributions does. The last result is proved more generally for the k-sample F test.
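The empirical size of the t test under a skewed law can be checked by simulation. This sketch draws both samples from the same standard exponential distribution, so every rejection is a type I error; the function name, the choice of distribution, and the settings are illustrative, not the paper's counterexamples.

```python
import numpy as np
from scipy.stats import ttest_ind

def empirical_size(n1, n2, reps=20_000, alpha=0.05, seed=1):
    """Monte Carlo size of the pooled two-sample t test when both
    samples come from the same heavily skewed distribution
    (standard exponential), i.e. under a true null of equal means."""
    rng = np.random.default_rng(seed)
    x = rng.exponential(size=(reps, n1))
    y = rng.exponential(size=(reps, n2))
    _, pvals = ttest_ind(x, y, axis=1)     # pooled-variance t test
    return float(np.mean(pvals < alpha))
```

With identical distributions and equal moderate sample sizes the size stays near nominal, consistent with the paper's point that it is arbitrary, non-identical distributions (and heavy tails) that break the test, not skewness alone.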


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)