Similar documents
A total of 20 similar documents were found (search time: 15 ms).
1.
The problem of testing for treatment effect based on binary response data is considered, assuming that the sample size for each experimental unit and treatment combination is random. It is assumed that the sample size follows a distribution that belongs to a parametric family. The uniformly most powerful unbiased tests, which are equivalent to the likelihood ratio tests, are obtained when the probability of the sample size being zero is positive. For the situation where the sample sizes are always positive, the likelihood ratio tests are derived. These test procedures, which are unconditional on the random sample sizes, are useful even when the random sample sizes are not observed. Some examples are presented as illustration.

2.
In this paper we discuss the sample size problem for balanced one-way ANOVA under a posterior Bayesian formulation of the problem. Using the distribution theory of appropriate quadratic forms we derive explicit sample sizes for prespecified posterior precisions. Comparisons with classical sample sizes are made. Instead of extensive tables, a Mathematica program for sample size calculation is given. The formulations given in this article form a foundational step towards Bayesian calculation of sample size in general.

3.
This article presents goodness-of-fit tests for the Laplace distribution based on its maximum entropy characterization. The critical values of the test statistics, estimated by Monte Carlo simulation, are tabulated for various window and sample sizes. The test statistics use an entropy estimator that depends on the window size, so the choice of the optimal window size is an important problem. The window sizes yielding the maximum power of the tests are given for selected sample sizes. Power studies are performed to compare the proposed tests with goodness-of-fit tests based on the empirical distribution function (EDF). Simulation results show that the entropy-based tests have consistently higher power than the EDF tests against almost all alternatives considered.
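For readers unfamiliar with entropy-based goodness-of-fit testing, the sketch below illustrates the general idea using Vasicek's spacing-based entropy estimator together with the Laplace maximum-entropy bound; the window size, the exponential form of the statistic, and the use of the sample median and mean absolute deviation are illustrative assumptions rather than the authors' exact construction.

```python
import numpy as np

def vasicek_entropy(x, m):
    """Vasicek's spacing-based entropy estimator with window size m."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    upper = x[np.minimum(np.arange(n) + m, n - 1)]
    lower = x[np.maximum(np.arange(n) - m, 0)]
    return np.mean(np.log(n * (upper - lower) / (2.0 * m)))

def laplace_entropy_statistic(x, m):
    """Illustrative statistic: estimated entropy relative to the maximum entropy
    attainable by a Laplace law with the same mean absolute deviation,
    H_max = 1 + log(2b), b = mean |x - median|.  Because the Laplace law maximizes
    entropy for fixed b, values well below 1 suggest departure from it."""
    b = np.mean(np.abs(x - np.median(x)))
    return np.exp(vasicek_entropy(x, m)) / (2.0 * np.e * b)

# Critical values would be obtained by Monte Carlo under the Laplace null.
rng = np.random.default_rng(1)
sample = rng.laplace(loc=0.0, scale=1.0, size=50)
print(laplace_entropy_statistic(sample, m=4))
```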

4.
Information before unblinding regarding the success of confirmatory clinical trials is highly uncertain. Current techniques using point estimates of auxiliary parameters for estimating the expected blinded sample size: (i) fail to describe the range of likely sample sizes obtained after the anticipated data are observed, and (ii) fail to adjust to the changing patient population. Sequential MCMC-based algorithms are implemented for purposes of sample size adjustments. The uncertainty arising from clinical trials is characterized by filtering later auxiliary parameters through their earlier counterparts and employing posterior distributions to estimate sample size and power. The use of approximate expected power estimates to determine the required additional sample size is closely related to techniques employing Simple Adjustments or the EM algorithm. By contrast with these, our proposed methodology provides intervals for the expected sample size using the posterior distribution of the auxiliary parameters. Future decisions about additional subjects are better informed due to our ability to account for subject response heterogeneity over time. We apply the proposed methodologies to a depression trial. Our proposed blinded procedures should be considered for most studies due to ease of implementation.

5.
In this article, lower bounds for the expected sample size of sequential selection procedures are constructed for the problem of selecting the most probable event of a k-variate multinomial distribution. The study is based on Volodin's universal lower bounds for the expected sample size of statistical inference procedures. The obtained lower bounds are used to estimate the efficiency of some selection procedures in terms of their expected sample sizes.

6.
Cochran's rule for the minimum sample size to ensure adequate coverage of nominal 95% confidence intervals is derived by using the Edgeworth expansion for the distribution function of the standardized sample mean. The rule is extended to confidence intervals based on the Studentized sample mean. The performance of the rule and of the Edgeworth approximations for smaller sample sizes is examined by simulation.
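For context, the rule referred to here is usually quoted as n > 25·G1², where G1 is the skewness of the population; a minimal sketch evaluating that rule with a moment-based skewness estimate (the estimator and the example data are assumptions for illustration) is given below.

```python
import numpy as np

def cochran_min_n(x):
    """Cochran's rule of thumb: n > 25 * g1**2, where g1 is the skewness.
    The moment-based sample skewness below is one common estimator (an assumption
    here); the paper itself works through the Edgeworth expansion."""
    x = np.asarray(x, dtype=float)
    m2 = np.mean((x - x.mean()) ** 2)
    m3 = np.mean((x - x.mean()) ** 3)
    g1 = m3 / m2 ** 1.5
    return int(np.ceil(25.0 * g1 ** 2))

rng = np.random.default_rng(0)
skewed = rng.exponential(scale=1.0, size=10_000)   # skewness roughly 2
print(cochran_min_n(skewed))                        # roughly 25 * 4 = 100
```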

7.
Determination of an adequate sample size is critical to the design of research ventures. For clustered right-censored data, Manatunga and Chen [Sample size estimation for survival outcomes in cluster-randomized studies with small cluster sizes. Biometrics. 2000;56(2):616–621] proposed a sample size calculation based on modelling the bivariate marginal distribution with a Clayton copula. In addition to the Clayton copula, other important families of copulas, such as the Gumbel and Frank copulas, are also well established in multivariate survival analysis; however, sample size calculation under these assumptions has not been fully investigated. To broaden the scope of Manatunga and Chen's research and achieve a more flexible sample size calculation for clustered right-censored data, we extend their work by assuming bivariate Gumbel and Frank copula models. We evaluate the performance of the proposed method and investigate the impact of the accrual times, follow-up times and within-cluster correlation of the study. The proposed method is applied to two real-world studies, and the R code is made available to users.
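As a hedged illustration of the kind of clustered right-censored data these calculations target, the sketch below simulates bivariate exponential survival times whose dependence is induced by a Clayton copula (the baseline model of Manatunga and Chen); the Gumbel and Frank extensions discussed in the article require different copula generators, and all function names and numeric settings here are assumptions for illustration.

```python
import numpy as np

def clayton_pair(n, theta, rng):
    """Draw n bivariate uniforms from a Clayton copula with parameter theta > 0
    by conditional inversion."""
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)
    return u, v

def clustered_exponential_survival(n, theta, hazard, censor_rate, rng):
    """Illustrative clustered right-censored pairs: exponential margins obtained by
    inverting the copula uniforms, with independent exponential censoring."""
    u, v = clayton_pair(n, theta, rng)
    t1, t2 = -np.log(u) / hazard, -np.log(v) / hazard
    c1, c2 = rng.exponential(1.0 / censor_rate, size=(2, n))
    return (np.minimum(t1, c1), (t1 <= c1).astype(int),
            np.minimum(t2, c2), (t2 <= c2).astype(int))

rng = np.random.default_rng(42)
x1, d1, x2, d2 = clustered_exponential_survival(200, theta=1.0, hazard=0.5,
                                                censor_rate=0.2, rng=rng)
print(d1.mean(), d2.mean())   # empirical event rates in the two cluster members
```

Repeating such a simulation over candidate sample sizes gives an empirical check on any analytic sample size formula derived under the copula model.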

8.
Selecting the optimal progressive censoring scheme for the exponential distribution according to the Pitman closeness criterion is discussed. For small sample sizes the Pitman closeness probabilities are calculated explicitly, and it is shown that the optimal progressive censoring scheme is the usual Type-II right censoring case. It is conjectured that this is the case for all sample sizes. A general algorithm is also presented for the numerical computation of the Pitman closeness probabilities between any two progressive censoring schemes of the same size.

9.
《Statistical Methodology》2013,10(6):563-572
Selecting the optimal progressive censoring scheme for the exponential distribution according to the Pitman closeness criterion is discussed. For small sample sizes the Pitman closeness probabilities are calculated explicitly, and it is shown that the optimal progressive censoring scheme is the usual Type-II right censoring case. It is conjectured that this is the case for all sample sizes. A general algorithm is also presented for the numerical computation of the Pitman closeness probabilities between any two progressive censoring schemes of the same size.

10.
Optimal designs for a logistic regression model with over-dispersion introduced by a beta-binomial distribution are characterized. Designs are defined by a set of design points and design weights as usual but, in addition, the experimenter must also make a choice of a sub-sampling design specifying the distribution of observations on sample sizes. In an earlier work it has been shown that Ds-optimal sampling designs for estimation of the parameters of the beta-binomial distribution are supported on at most two design points. This admits a simplified approach using single sample sizes. Linear predictor values for Ds-optimal designs using a common sample size are tabulated for different levels of over-dispersion and choice of subsets of parameters.

11.
Acceptance sampling, a branch of statistical quality control, is concerned with confidence in product quality. At times it is necessary to account for the error in the assumed distribution, which depends on the sample size and the relevant population size, when determining the sample size required for adequate accuracy. This sample size with minimized error is then used to derive the most beneficial OC curve. Neural networks have been used to train on the resulting errors and their matching tolerance levels for the sample sizes of different population sizes. The trained network can be used to automate acceptance or rejection of the sample size to be used for a better OC curve based on the minimized error, reducing the time spent on this burdensome work. This is illustrated in the paper with geo-statistics data, using a SAS program.

12.
Recently, a new non-randomized parallel design was proposed by Tian (2013) for surveys with sensitive topics. However, the sample size formulae associated with testing hypotheses for the parallel model are not yet available. As a crucial component of survey design, sample size formulae for the parallel design are developed in this paper by using the power analysis method for both the one- and two-sample problems. The asymptotic power functions and the corresponding sample size formulae for both the one- and two-sided tests based on the large-sample normal approximation are derived. The performance is assessed by comparing the asymptotic power with the exact power and reporting the ratio of the sample sizes required by the parallel model and by the design of direct questioning. We also numerically compare the sample sizes needed for the parallel design with those required for the crosswise and triangular models. Two theoretical justifications are also provided. An example from a survey on ‘sexual practices’ in San Francisco, Las Vegas and Portland is used to illustrate the proposed methods.
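The design-specific variance expressions for the parallel model are derived in the article and are not reproduced in the abstract; purely as an illustration of the power-analysis recipe from which such formulae arise, a generic one-sample normal-approximation sample size function is sketched below, with the design-specific variance supplied as an argument (all names and default settings are assumptions).

```python
import math
from scipy.stats import norm

def one_sample_n(pi0, pi1, var_fn, alpha=0.05, beta=0.20, two_sided=True):
    """Generic normal-approximation sample size for testing H0: pi = pi0 against
    pi = pi1 with size alpha and power 1 - beta.  var_fn(pi) must return the
    per-observation variance of the estimator of pi under the survey design in
    use: pi * (1 - pi) for direct questioning, an inflated design-specific
    expression (given in the article, not reproduced here) for the parallel model."""
    z_a = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
    z_b = norm.ppf(1 - beta)
    n = (z_a * math.sqrt(var_fn(pi0)) + z_b * math.sqrt(var_fn(pi1))) ** 2 / (pi1 - pi0) ** 2
    return math.ceil(n)

# Direct questioning as the benchmark design:
print(one_sample_n(0.10, 0.20, var_fn=lambda p: p * (1 - p)))
```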

13.
The problem motivating the paper is the determination of sample size in clinical trials under normal likelihoods and at the substantive testing stage of a financial audit where normality is not an appropriate assumption. A combination of analytical and simulation-based techniques within the Bayesian framework is proposed. The framework accommodates two different prior distributions: one is the general purpose fitting prior distribution that is used in Bayesian analysis and the other is the expert subjective prior distribution, the sampling prior which is believed to generate the parameter values which in turn generate the data. We obtain many theoretical results and one key result is that typical non-informative prior distributions lead to very small sample sizes. In contrast, a very informative prior distribution may either lead to a very small or a very large sample size depending on the location of the centre of the prior distribution and the hypothesized value of the parameter. The methods that are developed are quite general and can be applied to other sample size determination problems. Some numerical illustrations which bring out many other aspects of the optimum sample size are given.
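To make the fitting-prior / sampling-prior distinction concrete, here is a minimal simulation sketch for a normal mean with known variance (not the clinical-trial or audit models of the paper; the criterion, priors and numeric settings are illustrative assumptions): parameter values are drawn from an informative sampling prior, each simulated data set is analysed under a vague fitting prior, and the smallest sample size meeting a posterior-interval criterion is reported.

```python
import numpy as np
from scipy import stats

def bayes_ssd_normal_mean(n_grid, sigma=1.0, power=0.8, coverage=0.95,
                          sampling_prior=(0.5, 0.1), fitting_prior_sd=10.0,
                          n_sim=2000, seed=0):
    """Smallest n in n_grid such that, averaging over the sampling prior, the
    fitting-prior credible interval for the mean excludes 0 with the required
    frequency.  The vague fitting prior is N(0, fitting_prior_sd**2)."""
    rng = np.random.default_rng(seed)
    z = stats.norm.ppf(0.5 + coverage / 2)
    for n in n_grid:
        hits = 0
        for _ in range(n_sim):
            theta = rng.normal(*sampling_prior)              # sampling-prior draw
            xbar = rng.normal(theta, sigma / np.sqrt(n))     # simulated data summary
            post_var = 1.0 / (n / sigma**2 + 1.0 / fitting_prior_sd**2)
            post_mean = post_var * n * xbar / sigma**2       # fitting-prior posterior
            if post_mean - z * np.sqrt(post_var) > 0.0:      # interval excludes 0
                hits += 1
        if hits / n_sim >= power:
            return n
    return None

print(bayes_ssd_normal_mean(range(5, 201, 5)))
```

Replacing the vague fitting prior by an informative one shifts the required n up or down depending on where the prior is centred, which is the qualitative behaviour described in the abstract.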

14.
In this study, our aim was to investigate the effects of different data structures and different sample sizes on structural equation modeling and the influence of these factors on the model fit measures. The structural equation models were examined under different data structures and sample sizes, and the model fit measures were evaluated in a simulation study. In the simulation, optimization and negative variance estimation problems were encountered, depending on the sample size and the changing correlations. These problems disappeared when either the sample size or the correlations between the variables within a factor were increased. For future studies, the RMSEA and IFI model fit measures can be recommended for all sample sizes and correlation values, provided the data sets satisfy the multivariate normal distribution assumption.

15.
Two-stage k-sample designs for the ordered alternative problem
In preclinical studies and clinical dose-ranging trials, the Jonckheere-Terpstra test is widely used in the assessment of dose-response relationships. Hewett and Spurrier (1979) presented a two-stage analog of the test in the context of large sample sizes. In this paper, we propose an exact test based on Simon's minimax and optimal design criteria originally used in one-arm phase II designs based on binary endpoints. The convergence rate of the joint distribution of the first and second stage test statistics to the limiting distribution is studied, and design parameters are provided for a variety of assumed alternatives. The behavior of the test is also examined in the presence of ties, and the proposed designs are illustrated through application in the planning of a hypercholesterolemia clinical trial. The minimax and optimal two-stage procedures are shown to be preferable as compared with the one-stage procedure because of the associated reduction in expected sample size for given error constraints.

16.
Sample size determination for testing the hypothesis of equality of proportions with specified type I and type II error probabilities is often based on the normal approximation to the binomial distribution. When the proportions involved are very small, the exact distribution of the test statistic may not follow the assumed distribution. Consequently, the sample size determined by the test statistic may not result in the specified error probabilities. In this paper the author proposes a square root formula and compares it with several existing sample size approximation methods. It is found that with small proportions (p ≤ .01) the square root formula provides the closest approximation to the exact sample sizes that attain the specified type I and type II error probabilities. The square root formula is simple in form and has the advantage that equal differences are equally detectable.
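The author's exact formula is not given in the abstract; the sketch below shows one way a "square root" formula arises, via the variance-stabilizing square-root transformation of small binomial (approximately Poisson) counts, alongside the usual normal-approximation formula for comparison. The specific two-sample form used here is an assumption for illustration and may differ from the paper's.

```python
from math import ceil, sqrt
from scipy.stats import norm

def sqrt_formula_n(p1, p2, alpha=0.05, beta=0.20):
    """Per-group sample size from a variance-stabilizing square-root argument:
    for small p, sqrt(X) is approximately normal with variance 1/4, so
    n ~ (z_alpha + z_beta)**2 / (2 * (sqrt(p1) - sqrt(p2))**2), and equal
    differences on the sqrt(p) scale are equally detectable (one-sided test)."""
    z = norm.ppf(1 - alpha) + norm.ppf(1 - beta)
    return ceil(z ** 2 / (2.0 * (sqrt(p1) - sqrt(p2)) ** 2))

def normal_approx_n(p1, p2, alpha=0.05, beta=0.20):
    """Usual normal-approximation per-group sample size (unpooled variances),
    shown for comparison."""
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(1 - beta)
    return ceil((z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)

print(sqrt_formula_n(0.001, 0.005), normal_approx_n(0.001, 0.005))
```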

17.
A Monte Carlo study of the size and power of tests of equality of two covariance matrices is carried out. Tests based upon normality assumptions, elliptical distribution assumptions as well as distribution-free tests are compared. Samples are generated from normal, elliptical and non-elliptical populations. The elliptical-theory tests, in particular, have poor size properties for both elliptical distributions with moderate sample sizes and for non-elliptical distributions.

18.
A large sample approximation of the least favorable configuration for a fixed sample size selection procedure for negative binomial populations is proposed. A normal approximation of the selection procedure is also presented. Optimal sample sizes required to be drawn from each population and the bounds for the sample sizes are tabulated. Sample sizes obtained using the approximate least favorable configuration are compared with those obtained using the exact least favorable configuration. An alternative form of the normal approximation to the probability of correct selection is also presented. The relation between the required sample size and the number of populations involved is studied.

19.
Two new statistics are proposed for testing the identity of a high-dimensional covariance matrix. Applying large dimensional random matrix theory, we study the asymptotic distributions of the proposed statistics under the situation that the dimension p and the sample size n tend to infinity proportionally. The proposed tests can accommodate the situation that the data dimension is much larger than the sample size, and the situation that the population distribution is non-Gaussian. The numerical studies demonstrate that the proposed tests have good performance on the empirical powers for a wide range of dimensions and sample sizes.

20.
Alternative ways of using Monte Carlo methods to implement a Cox-type test for separate families of hypotheses are considered. Monte Carlo experiments are designed to compare the finite sample performances of Pesaran and Pesaran's test, a RESET test, and two Monte Carlo hypothesis test procedures. One of the Monte Carlo tests is based on the distribution of the log-likelihood ratio and the other is based on an asymptotically pivotal statistic. The Monte Carlo results provide strong evidence that the size of the Pesaran and Pesaran test is generally incorrect, except for very large sample sizes. The RESET test has lower power than the other tests. The two Monte Carlo tests perform equally well for all sample sizes and are both clearly preferred to the Pesaran and Pesaran test, even in large samples. Since the Monte Carlo test based on the log-likelihood ratio is the simplest to calculate, we recommend using it.
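As a rough sketch of the log-likelihood-ratio Monte Carlo test (not the authors' exact implementation), the code below tests an exponential null against a lognormal alternative, a standard pair of separate families, by simulating from the fitted null and computing the usual (1 + #{T_b ≥ T_obs}) / (B + 1) p-value; the model pair, the value of B, and the use of a parametric bootstrap from the estimated null are assumptions for illustration.

```python
import numpy as np
from scipy import stats

def loglik_ratio(x):
    """Log-likelihood ratio of a lognormal fit over an exponential fit
    (a classic pair of separate, non-nested families)."""
    lam = 1.0 / np.mean(x)                                    # exponential MLE
    ll_exp = np.sum(stats.expon.logpdf(x, scale=1.0 / lam))
    s, loc, scale = stats.lognorm.fit(x, floc=0.0)            # lognormal MLE
    ll_ln = np.sum(stats.lognorm.logpdf(x, s, loc=0.0, scale=scale))
    return ll_ln - ll_exp

def mc_pvalue(x, B=199, seed=0):
    """Monte Carlo test: simulate B data sets from the fitted exponential null,
    recompute the statistic each time, and report (1 + #{T_b >= T_obs}) / (B + 1).
    The pivotal-statistic variant would studentize the statistic first."""
    rng = np.random.default_rng(seed)
    t_obs = loglik_ratio(x)
    mean_hat = np.mean(x)
    t_sim = np.array([loglik_ratio(rng.exponential(mean_hat, size=len(x)))
                      for _ in range(B)])
    return (1 + np.sum(t_sim >= t_obs)) / (B + 1)

x = np.random.default_rng(1).lognormal(mean=0.0, sigma=1.0, size=100)
print(mc_pvalue(x))
```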
