Similar Documents
20 similar documents found.
1.
Two-stage studies may be chosen optimally by minimising a single characteristic like the maximum sample size. However, given that an investigator will initially select a null treatment effect and the clinically relevant difference, it is better to choose a design that also considers the expected sample size for each of these values. The maximum sample size and the two expected sample sizes are here combined to produce an expected loss function to find designs that are admissible. Given the prior odds of success and the importance of the total sample size, minimising the expected loss gives the optimal design for this situation. A novel triangular graph to represent the admissible designs helps guide the decision-making process. The H0-optimal, H1-optimal, H0-minimax and H1-minimax designs are all particular cases of admissible designs. The commonly used H0-optimal design is rarely good when allowing stopping for efficacy. Additionally, the δ-minimax design, which minimises the maximum expected sample size, is sometimes admissible under the loss function. However, the results can be varied and each situation will require the evaluation of all the admissible designs. Software to do this is provided. Copyright © 2012 John Wiley & Sons, Ltd.
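As a rough illustration of the loss function described above, the following Python sketch evaluates a two-stage single-arm design with futility-only stopping (the paper also allows stopping for efficacy); the design parameters and loss weights are illustrative assumptions, not values from the paper.

```python
from scipy.stats import binom

def ess(n1, r1, n, p):
    """Expected sample size of a two-stage design that stops after stage 1
    when the number of responses is <= r1 (futility-only stopping)."""
    pet = binom.cdf(r1, n1, p)            # probability of early termination
    return n1 + (1 - pet) * (n - n1)

def expected_loss(n1, r1, n, p0, p1, w0=0.4, w1=0.4):
    """Illustrative loss: weights w0, w1 on the expected sample sizes under
    the null and alternative response rates, the remainder on the maximum
    sample size n (the paper sets these weights via the prior odds of
    success and the importance of the total sample size)."""
    return (w0 * ess(n1, r1, n, p0)
            + w1 * ess(n1, r1, n, p1)
            + (1 - w0 - w1) * n)

# Example design for p0 = 0.1 vs p1 = 0.3 (parameters illustrative)
print(expected_loss(n1=10, r1=1, n=29, p0=0.1, p1=0.3))
```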

2.
Sampling designs that depend on sample moments of auxiliary variables are well known. Lahiri (Bull Int Stat Inst 33:133–140, 1951) considered a sampling design proportionate to the sample mean of an auxiliary variable. Singh and Srivastava (Biometrika 67(1):205–209, 1980) proposed a sampling design proportionate to a sample variance, while Wywiał (J Indian Stat Assoc 37:73–87, 1999) proposed one proportionate to the sample generalized variance of auxiliary variables. Other sampling designs dependent on moments of an auxiliary variable were considered, e.g., in Wywiał (Some contributions to multivariate methods in survey sampling. Katowice University of Economics, Katowice, 2003a; Stat Transit 4(5):779–798, 2000), where the accuracy of several sampling strategies was also compared. These sampling designs are not useful when some observations of the auxiliary variable are censored. Moreover, they can be much too sensitive to outlying observations. In such cases a sampling design proportionate to an order statistic of the auxiliary variable can be more useful, and such an unequal probability sampling design is proposed here. Its particular cases, as well as its conditional version, are considered too. A sampling scheme implementing this design is proposed, and the first- and second-order inclusion probabilities are evaluated. The well-known Horvitz–Thompson estimator is taken into account. A ratio estimator dependent on an order statistic is constructed; it is similar to the well-known ratio estimator based on the population and sample means. Moreover, it is an unbiased estimator of the population mean when the sample is drawn according to the proposed sampling design dependent on the appropriate order statistic.
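For reference, a minimal sketch of the Horvitz–Thompson estimator mentioned in the abstract; the sample values and first-order inclusion probabilities are illustrative.

```python
import numpy as np

def horvitz_thompson_total(y, pi):
    """HT estimator of the population total: sum of y_i / pi_i over the
    sampled units, with pi_i the first-order inclusion probabilities."""
    y, pi = np.asarray(y, float), np.asarray(pi, float)
    return np.sum(y / pi)

# Example: three sampled units with known inclusion probabilities
print(horvitz_thompson_total([12.0, 7.5, 3.1], [0.30, 0.15, 0.05]))
```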

3.
In single-arm clinical trials with survival outcomes, the Kaplan–Meier estimator and its confidence interval are widely used to assess survival probability and median survival time. Since the asymptotic normality of the Kaplan–Meier estimator is a standard result, sample size calculation methods have not been studied in depth. An existing sample size calculation method is founded on the asymptotic normality of the Kaplan–Meier estimator under the log transformation. However, the small-sample properties of the log-transformed estimator are quite poor at the sample sizes typical of single-arm trials, and the existing method uses an inappropriate standard normal approximation to calculate sample sizes. These issues can seriously affect the accuracy of the results. In this paper, we propose alternative methods to determine sample sizes based on a valid standard normal approximation, with several transformations that may give an accurate normal approximation even at small sample sizes. In numerical evaluations via simulations, some of the proposed methods provided more accurate results, and the empirical power of the proposed method with the arcsine square-root transformation tended to be closer to the prescribed power than the other transformations. These results were supported when the methods were applied to data from three clinical trials.
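As a hedged sketch of the kind of calculation involved, here is the standard arcsine-square-root sample size formula for a one-sided test of a survival probability, ignoring censoring (the paper's method additionally uses the Kaplan–Meier variance); the survival values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def n_arcsine(s0, s1, alpha=0.05, power=0.8):
    """Approximate one-sided sample size for testing S(t0) = s0 vs s1 with
    the arcsine-square-root transformation, using Var(arcsin sqrt(p_hat))
    ~ 1/(4n). This no-censoring binomial approximation is only a sketch of
    the transformation-based approach described in the abstract."""
    za, zb = norm.ppf(1 - alpha), norm.ppf(power)
    delta = np.arcsin(np.sqrt(s1)) - np.arcsin(np.sqrt(s0))
    return int(np.ceil((za + zb) ** 2 / (4 * delta ** 2)))

print(n_arcsine(0.4, 0.6))   # illustrative survival probabilities
```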

4.
Bayesian methods are often used to reduce the sample sizes and/or increase the power of clinical trials. The choice of the prior distribution is a critical step in Bayesian modeling. If the prior is not completely specified, historical data may be used to estimate it; in an empirical Bayesian analysis, the resulting prior can then be used to produce the posterior distribution. In this paper, we describe a Bayesian Poisson model with a conjugate Gamma prior. The parameters of the Gamma distribution are estimated in the empirical Bayesian framework under two estimation schemes. A straightforward numerical search for the maximum likelihood (ML) solution using the marginal negative binomial distribution is occasionally infeasible, so we propose a simplification of the maximization procedure. The Markov chain Monte Carlo method is used to create a set of Poisson parameters from the historical count data; these Poisson parameters are used to uniquely define the Gamma likelihood function, and easily computable approximation formulae may be used to find the ML estimates of the parameters of the Gamma distribution. For the sample size calculations, the ML solution is replaced by its upper confidence limit to reflect an incomplete exchangeability of historical trials as opposed to current studies; the exchangeability is measured by the confidence interval for the historical rate of the events. With this prior, the formula for the sample size calculation is completely defined. Published in 2009 by John Wiley & Sons, Ltd.
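The abstract does not reproduce its "easily computable approximation formulae"; as an assumption about what such a step could look like, the sketch below applies Thom's standard closed-form approximation to the Gamma ML shape equation to a set of Poisson rates.

```python
import numpy as np

def gamma_ml_approx(lam):
    """Fit a Gamma(shape, rate) prior to a set of Poisson rates lam (e.g.
    MCMC draws pooled over historical trials) using Thom's closed-form
    approximation to the ML shape equation, with s = ln(mean) - mean(ln)."""
    lam = np.asarray(lam, float)
    s = np.log(lam.mean()) - np.log(lam).mean()
    shape = (3 - s + np.sqrt((s - 3) ** 2 + 24 * s)) / (12 * s)
    rate = shape / lam.mean()
    return shape, rate

rng = np.random.default_rng(1)
draws = rng.gamma(shape=2.0, scale=0.5, size=1000)   # true shape 2, rate 2
print(gamma_ml_approx(draws))                        # roughly (2, 2)
```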

5.
Effect sizes are an important component of experimental design, data analysis, and interpretation of statistical results. In some situations, an effect size of clinical or practical importance may be unknown to the researcher. In other situations, the researcher may be interested in comparing observed effect sizes to known standards to quantify clinical importance. In these cases, the notion of relative effect sizes (small, medium, large) can be useful as benchmarks. Although there is an extensive literature on relative effect sizes for continuous data, little of this research has focused on relative effect sizes for measures of risk that are common in epidemiological or biomedical studies. The aim of this paper, therefore, is to extend existing relative effect sizes to the relative risk, odds ratio, hazard ratio, rate ratio, and Mantel–Haenszel odds ratio for related samples. In most scenarios with equal group allocation, effect sizes of 1.22, 1.86, and 3.00 can be taken as small, medium, and large, respectively. The odds ratio for a non-rare event is a notable exception, and the modified relative effect sizes are 1.32, 2.38, and 4.70 in that situation.
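To show how such benchmarks translate into design numbers, here is a sketch of the per-group sample size needed to detect each relative-risk benchmark with the usual normal approximation on log(RR); the baseline risk of 0.1 is an assumption, not a value from the paper.

```python
import numpy as np
from scipy.stats import norm

def n_per_group_rr(p0, rr, alpha=0.05, power=0.8):
    """Per-group sample size for a two-sided test of relative risk rr
    against 1, via the usual normal approximation
    Var(log RR_hat) ~ (1-p0)/(n*p0) + (1-p1)/(n*p1) with equal allocation.
    p0 is an assumed baseline risk, not taken from the paper."""
    p1 = p0 * rr
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    nvar = (1 - p0) / p0 + (1 - p1) / p1      # n * Var(log RR_hat)
    return int(np.ceil((za + zb) ** 2 * nvar / np.log(rr) ** 2))

# Benchmarks from the abstract: small / medium / large
for rr in (1.22, 1.86, 3.00):
    print(rr, n_per_group_rr(0.1, rr))
```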

6.
The performance of different information criteria – namely Akaike (AIC), corrected Akaike (AICC), Schwarz–Bayesian (SBC), and Hannan–Quinn (HQ) – is investigated so as to choose the optimal lag length in stable and unstable vector autoregressive (VAR) models, both when autoregressive conditional heteroscedasticity (ARCH) is present and when it is not. The investigation covers both large and small sample sizes. The Monte Carlo simulation results show that SBC has relatively better performance in lag-choice accuracy in many situations. It is also generally the least sensitive to ARCH regardless of stability or instability of the VAR model, especially in large sample sizes. These appealing properties of SBC make it the optimal criterion for choosing lag length in many situations, especially in the case of financial data, which are usually characterized by occasional periods of high volatility. SBC also has the best forecasting abilities in the majority of situations in which we vary sample size, stability, variance structure (ARCH or not), and forecast horizon (one period or five). Frequently, AICC also has good lag-choosing and forecasting properties. However, when ARCH is present, the five-period forecast performance of all criteria worsens in all situations.
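A minimal sketch of the lag-selection computation being compared: fitting a VAR by OLS for each candidate lag and evaluating three of the criteria in their standard (Lütkepohl-style) forms; the simulated stable VAR(1) data are illustrative.

```python
import numpy as np

def var_lag_ic(y, max_p):
    """AIC, SBC (BIC) and HQ over VAR lag orders 1..max_p, each model fitted
    by OLS; criteria use ln|Sigma_hat| plus the usual penalties, counting
    K*(K*p + 1) estimated parameters (intercepts included)."""
    y = np.asarray(y, float)
    T_full, K = y.shape
    ic = {}
    for p in range(1, max_p + 1):
        Y = y[p:]
        X = np.hstack([np.ones((T_full - p, 1))]
                      + [y[p - j:T_full - j] for j in range(1, p + 1)])
        B, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ B
        T = len(Y)
        logdet = np.linalg.slogdet(resid.T @ resid / T)[1]
        k = K * (K * p + 1)
        ic[p] = {"AIC": logdet + 2 * k / T,
                 "SBC": logdet + k * np.log(T) / T,
                 "HQ":  logdet + 2 * k * np.log(np.log(T)) / T}
    return ic

# Illustrative stable VAR(1): y_t = A y_{t-1} + e_t
rng = np.random.default_rng(0)
A = np.array([[0.5, 0.1], [0.0, 0.4]])
y = np.zeros((300, 2))
for t in range(1, 300):
    y[t] = A @ y[t - 1] + rng.standard_normal(2)
ic = var_lag_ic(y, 5)
print({c: min(ic, key=lambda p: ic[p][c]) for c in ("AIC", "SBC", "HQ")})
```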

7.
A means for utilizing auxiliary information in surveys is to sample with inclusion probabilities proportional to given size values, i.e. to use a πps design, preferably with fixed sample size. A novel candidate in that context is Pareto πps. This sampling scheme was derived by limit considerations, and it works with a degree of approximation for finite samples. Desired and factual inclusion probabilities do not agree exactly, which in turn leads to some estimator bias. The central topic of this paper is to derive conditions for the bias to be negligible. Practically useful information on the small-sample behavior of Pareto πps can, to the best of our understanding, be gained only by numerical studies. Earlier investigations to that end have been too limited to allow general conclusions, while this paper reports findings from an extensive numerical study. The chief conclusion is that the estimator bias is negligible in almost all situations met in survey practice.
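For concreteness, a sketch of Rosén's Pareto πps scheme itself; the size values are illustrative, and the factual inclusion probabilities only approximate the targets, which is exactly the source of the bias studied in the paper.

```python
import numpy as np

def pareto_pips(size_values, n, rng):
    """Rosen's Pareto pips: target inclusion probabilities lam_i proportional
    to size; draw U_i ~ Uniform(0,1), form the ranking variables
    Q_i = (U_i/(1-U_i)) / (lam_i/(1-lam_i)) and keep the n smallest."""
    x = np.asarray(size_values, float)
    lam = n * x / x.sum()                 # desired inclusion probabilities
    assert lam.max() < 1, "take-all units should be removed first"
    u = rng.uniform(size=len(x))
    q = (u / (1 - u)) / (lam / (1 - lam))
    return np.sort(np.argsort(q)[:n])     # indices of the sampled units

rng = np.random.default_rng(42)
sizes = rng.uniform(1.0, 4.0, size=20)    # illustrative size values
print(pareto_pips(sizes, n=5, rng=rng))
```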

8.
Assuming that the frequency of occurrence follows the Poisson distribution, we develop sample size calculation procedures for testing equality based on an exact test procedure and an asymptotic test procedure under an AB/BA crossover design. We employ Monte Carlo simulation to demonstrate the use of these sample size formulae and to evaluate the accuracy, with respect to power, of the sample size calculation formula derived from the asymptotic test procedure in a variety of situations. We note that when both the relative treatment effect of interest and the underlying intraclass correlation between frequencies within patients are large, the sample size calculation based on the asymptotic test procedure can lose accuracy; in this case, the sample size calculation procedure based on the exact test is recommended. On the other hand, if the relative treatment effect of interest is small, the minimum required number of patients per group will be large and the asymptotic test procedure will be valid for use; in this case, we may consider using the sample size calculation formula derived from the asymptotic test procedure to reduce the number of patients needed for the exact test procedure. We include an example regarding a double-blind randomized crossover trial comparing salmeterol with a placebo in exacerbations of asthma to illustrate the practical use of these sample size formulae. Copyright © 2013 John Wiley & Sons, Ltd.
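A sketch of the classical exact test of equal Poisson rates obtained by conditioning on the total count, the simplest member of this family; the paper's crossover version additionally accounts for period effects and within-patient correlation, and the counts below are illustrative.

```python
from scipy.stats import binomtest

def exact_poisson_equality(x1, x2, t1=1.0, t2=1.0):
    """Exact test of equal Poisson rates via conditioning: given the total
    N = X1 + X2, X1 ~ Binomial(N, t1/(t1+t2)) under H0, where t1, t2 are
    the exposure times. A simplified sketch, not the paper's crossover
    procedure."""
    return binomtest(x1, x1 + x2, t1 / (t1 + t2)).pvalue

# Illustrative counts: 18 exacerbations on placebo vs 8 on treatment
print(exact_poisson_equality(18, 8))
```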

9.
The tabled significance values of the Kolmogorov-Smirnov goodness-of-fit statistic determined for continuous underlying distributions are conservative for applications involving discrete underlying distributions. Conover (1972) proposed an efficient method for computing the exact significance level of the Kolmogorov-Smirnov test for discrete distributions; however, he warned against its use for large sample sizes because “the calculations become too difficult.”

In this work we explore the relationship between sample size and the computational effectiveness of Conover's formulas, where “computational effectiveness” is taken to mean the accuracy attained with a fixed precision of machine arithmetic. The nature of the difficulties in the calculations is pointed out. It is shown that, despite these difficulties, Conover's method of computing the Kolmogorov-Smirnov significance level for discrete distributions can still be a useful tool for a wide range of sample sizes.

10.
Realistic statistical modelling of observational data often suggests a statistical model which is not fully identified, owing to potential biases that are not under the control of study investigators. Bayesian inference can be implemented with such a model, ideally with the most precise prior knowledge that can be ascertained. However, as a consequence of the non-identifiability, inference cannot be made arbitrarily accurate by choosing the sample size to be sufficiently large. In turn, this has consequences for sample size determination. The paper presents a sample size criterion that is based on a quantification of how much Bayesian learning can arise in a given non-identified model. A global perspective is adopted, whereby choosing larger sample sizes for some studies necessarily implies that some other potentially worthwhile studies cannot be undertaken. This suggests that smaller sample sizes should be selected with non-identified models, as larger sample sizes constitute a squandering of resources in making estimator variances very small compared with their biases. In particular, consider two investigators planning the same study, one of whom admits to the potential biases at hand and consequently uses a non-identified model, whereas the other pretends that there are no biases, leading to an identified but less realistic model. It is seen that the former investigator always selects a smaller sample size than the latter, with the difference being quite marked in some illustrative cases.

11.
A rotation scheme for a stratified multi-stage sample, discussed in this paper, was designed to satisfy the following conditions: (i) there is a constraint on the number of units that can be replaced in each round, and (ii) it is relatively inexpensive to increase the sample size gradually. An example of these conditions was observed in the development of a plan for measuring the accuracy of the billing process of a telephone company. Estimators of the population proportion of elements that possess a specified characteristic are also derived. Each estimator is a weighted average of the corresponding estimates based on the retained units from the original sample and on the new units, where the weight of the estimate based on the new units increases over time. While this rotation scheme is discussed in connection with the billing accuracy of a telephone company, the methodology can be applied to other similar problems.
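A minimal sketch of the weighted-average idea with inverse-variance weights; the paper derives design-specific weights, so this particular choice is only an illustrative assumption.

```python
import numpy as np

def combined_proportion(p_old, n_old, p_new, n_new):
    """Weighted average of proportion estimates from retained and new units,
    with illustrative inverse-variance weights; as the new-unit sample grows
    over rounds, its weight increases, as described in the abstract."""
    v_old = p_old * (1 - p_old) / n_old
    v_new = p_new * (1 - p_new) / n_new
    w_new = (1 / v_new) / (1 / v_old + 1 / v_new)
    return w_new * p_new + (1 - w_new) * p_old

# Illustrative: 400 retained units vs a growing batch of 150 new units
print(combined_proportion(0.12, 400, 0.10, 150))
```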

12.
In this article, the Food and Drug Administration's (FDA) medical device substantial equivalence provision is briefly introduced and some statistical methods useful for evaluating device equivalence are discussed. A sample size formula is derived for limits of agreement, which may be used to assess statistical equivalence in certain medical device situations. Simulation findings on the formula are presented, which can be used to guide sample size selection in common practical situations. Examples of the sample size procedure are given.
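The derived formula itself is not given in the abstract; as a stand-in, the sketch below uses the common Bland–Altman approximation Var(limit) ≈ 3·s²/n to size a study so the confidence interval around a 95% limit of agreement has a given half-width — this is a textbook approximation, not necessarily the paper's formula.

```python
import numpy as np
from scipy.stats import norm

def n_for_loa_precision(sd_diff, halfwidth, conf=0.95):
    """Approximate n so the confidence interval around a 95% limit of
    agreement has the given half-width, using the standard Bland-Altman
    approximation Var(limit) ~ 3 * sd_diff**2 / n."""
    z = norm.ppf(1 - (1 - conf) / 2)
    return int(np.ceil(3 * (z * sd_diff / halfwidth) ** 2))

print(n_for_loa_precision(sd_diff=1.0, halfwidth=0.5))   # ~ 47
```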

13.
This paper investigates, by means of Monte Carlo simulation, the effects of different choices of order for the autoregressive approximation on the fully efficient parameter estimates for autoregressive moving average models. Four order selection criteria, AIC, BIC, HQ and PKK, were compared, and different model structures with varying sample sizes were used to contrast the performance of the criteria. Some asymptotic results which provide a useful guide for assessing the performance of these criteria are presented. The results of this comparison show that there are marked differences in the accuracy achieved with these alternative criteria in small-sample situations, and that it is preferable to apply the BIC criterion, which leads to greater precision of the Gaussian likelihood estimates, in such cases. Implications of the findings of this study for the estimation of time series models are highlighted.

14.
A technique is given for drawing valid inferences in cases where performance characteristics of statistical procedures (e.g. power for a test, or probability of a correct selection for a selection procedure) depend upon unknown parameters (e.g. an unknown variance). The technique is especially useful in situations where sample sizes are small (e.g. in many medical trials); the “usual” approximate procedures are found to be misleading in such cases.

15.
Power-divergence goodness-of-fit statistics have asymptotically a chi-squared distribution. Asymptotic results may not apply in small-sample situations, and the exact significance of a goodness-of-fit statistic may potentially be over- or under-stated by the asymptotic distribution. Several correction terms have been proposed to improve the accuracy of the asymptotic distribution, but their performance has only been studied for the equiprobable case. We extend that research to skewed hypotheses. Results are presented for one-way multinomials involving k = 2 to 6 cells with sample sizes N = 20, 40, 60, 80 and 100 and nominal test sizes α = 0.1, 0.05, 0.01 and 0.001. Six power-divergence goodness-of-fit statistics were investigated, and five correction terms were included in the study. Our results show that skewness itself does not affect the accuracy of the asymptotic approximation, which depends only on the magnitude of the smallest expected frequency (whether this comes from a small sample with the equiprobable hypothesis or a large sample with a skewed hypothesis). Throughout the conditions of the study, the accuracy of the asymptotic distribution seems to be optimal for Pearson's X2 statistic (the power-divergence statistic of index λ = 1) when k > 3 and the smallest expected frequency is as low as between 0.1 and 1.5 (depending on the particular k, N and nominal test size), but a computationally inexpensive improvement can be obtained in these cases by using a moment-corrected χ2 distribution. If the smallest expected frequency is even smaller, a normal correction yields accurate tests through the log-likelihood-ratio statistic G2 (the power-divergence statistic of index λ = 0).
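The power-divergence family is available in SciPy; the short sketch below computes Pearson's X2 (λ = 1), G2 (λ = 0) and the Cressie–Read statistic (λ = 2/3) with their asymptotic p-values. The small-sample correction terms studied in the paper are not implemented here, and the counts are illustrative.

```python
import numpy as np
from scipy.stats import power_divergence

obs = np.array([39, 21, 14, 12, 9, 5])    # illustrative skewed counts
exp = np.full(6, obs.sum() / 6)           # equiprobable null hypothesis

# Each statistic is the power-divergence statistic of the given index lambda
for lam, name in [(1, "Pearson X^2"), (0, "G^2"), (2 / 3, "Cressie-Read")]:
    stat, p = power_divergence(obs, exp, lambda_=lam)
    print(f"{name}: stat={stat:.3f}, asymptotic p={p:.4f}")
```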

16.
We investigate the construction of a BCa-type bootstrap procedure for setting approximate prediction intervals for an efficient estimator θ̂m of a scalar parameter θ, based on a future sample of size m. The results are also extended to nonparametric situations, where they can be used to form bootstrap prediction intervals for a large class of statistics. These intervals are transformation-respecting and range-preserving. The asymptotic performance of our procedure is assessed by allowing both the past and future sample sizes to tend to infinity. The resulting intervals are then shown to be second-order correct and second-order accurate. These second-order properties are established in terms of min(m, n), and not the past sample size n alone.
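A sketch of the standard BCa machinery (bias correction z0 plus jackknife acceleration a) for a confidence interval; the paper adapts these same ingredients to prediction intervals for a future sample of size m, which this sketch does not do.

```python
import numpy as np
from scipy.stats import norm

def bca_interval(x, stat, B=2000, alpha=0.05, rng=None):
    """BCa confidence interval for stat(x): percentile interval with
    bias-corrected and accelerated quantile levels."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x, float)
    n = len(x)
    theta = stat(x)
    boots = np.array([stat(rng.choice(x, n)) for _ in range(B)])
    z0 = norm.ppf(np.mean(boots < theta))             # bias correction
    jack = np.array([stat(np.delete(x, i)) for i in range(n)])
    d = jack.mean() - jack
    a = (d ** 3).sum() / (6 * (d ** 2).sum() ** 1.5)  # acceleration
    def adj(q):                                       # adjusted quantile level
        z = norm.ppf(q)
        return norm.cdf(z0 + (z0 + z) / (1 - a * (z0 + z)))
    lo, hi = np.quantile(boots, [adj(alpha / 2), adj(1 - alpha / 2)])
    return lo, hi

rng = np.random.default_rng(3)
print(bca_interval(rng.exponential(size=40), np.mean, rng=rng))
```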

17.
Selecting the optimal progressive censoring scheme for the exponential distribution according to the Pitman closeness criterion is discussed. For small sample sizes the Pitman closeness probabilities are calculated explicitly, and it is shown that the optimal progressive censoring scheme is the usual Type-II right censoring case. It is conjectured that this is the case for all sample sizes. A general algorithm is also presented for the numerical computation of the Pitman closeness probabilities between any two progressive censoring schemes of the same size.
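Pitman closeness probabilities are easy to approximate by simulation; the sketch below does this for two generic estimators of an exponential mean, not for the progressive censoring schemes of the paper.

```python
import numpy as np

def pitman_closeness(est1, est2, sampler, theta, reps=20000, rng=None):
    """Monte Carlo estimate of the Pitman closeness probability
    P(|est1(X) - theta| < |est2(X) - theta|)."""
    rng = rng or np.random.default_rng()
    wins = 0
    for _ in range(reps):
        x = sampler(rng)
        wins += abs(est1(x) - theta) < abs(est2(x) - theta)
    return wins / reps

# Illustrative: sample mean vs median/ln(2) as estimators of an exponential mean
rng = np.random.default_rng(7)
sampler = lambda r: r.exponential(scale=1.0, size=10)
print(pitman_closeness(np.mean, lambda x: np.median(x) / np.log(2),
                       sampler, theta=1.0, rng=rng))
```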

18.
Statistical Methodology, 2013, 10(6): 563–572
Selecting the optimal progressive censoring scheme for the exponential distribution according to the Pitman closeness criterion is discussed. For small sample sizes the Pitman closeness probabilities are calculated explicitly, and it is shown that the optimal progressive censoring scheme is the usual Type-II right censoring case. It is conjectured that this is the case for all sample sizes. A general algorithm is also presented for the numerical computation of the Pitman closeness probabilities between any two progressive censoring schemes of the same size.

19.
This paper discusses regression analysis of clustered interval-censored failure time data, which often occur in medical follow-up studies among other areas. For such data, the failure time may be related to the cluster size, the number of subjects within each cluster; that is, the cluster sizes may be informative. For this problem, we present a within-cluster resampling method for the situation where the failure time of interest can be described by a class of linear transformation models. In addition to establishing the asymptotic properties of the proposed estimators of the regression parameters, we conduct an extensive simulation study to assess the finite-sample properties of the proposed method; it suggests that the method works well in practical situations. An application to the example that motivated this study is also provided.
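A sketch of the within-cluster resampling idea: draw one subject per cluster, fit a working model on the resulting independent data, and average over resamples. Here ordinary least squares stands in for the paper's linear transformation model, and the toy data tie the cluster effect to cluster size to mimic informative cluster sizes.

```python
import numpy as np

def wcr_estimate(clusters, fit, Q=500, rng=None):
    """Within-cluster resampling: sample one row per cluster, fit the
    working model to the independent resampled data, average the Q fits."""
    rng = rng or np.random.default_rng()
    ests = []
    for _ in range(Q):
        rows = np.vstack([c[rng.integers(len(c))] for c in clusters])
        ests.append(fit(rows))
    return np.mean(ests, axis=0)

def ols_fit(rows):
    """Least-squares fit of y on x; rows hold columns (x, y)."""
    X = np.column_stack([np.ones(len(rows)), rows[:, 0]])
    return np.linalg.lstsq(X, rows[:, 1], rcond=None)[0]

rng = np.random.default_rng(11)
# Toy data: cluster effect 0.3*m depends on the cluster size m
clusters = []
for _ in range(50):
    m = rng.integers(1, 6)
    x = rng.normal(size=m)
    y = 1.0 + 2.0 * x + 0.3 * m + rng.normal(size=m)
    clusters.append(np.column_stack([x, y]))
print(wcr_estimate(clusters, ols_fit, rng=rng))   # slope near 2
```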

20.
A new procedure is proposed for deriving variable bandwidths in univariate kernel density estimation, based upon likelihood cross-validation and an analysis of a Bayesian graphical model. The procedure admits bandwidth selection which is flexible in terms of the amount of smoothing required. In addition, the basic model can be extended to incorporate local smoothing of the density estimate. The method is shown to perform well in both theoretical and practical situations, and we compare our method with those of Abramson (The Annals of Statistics 10: 1217–1223, 1982) and Sain and Scott (Journal of the American Statistical Association 91: 1525–1534, 1996). In particular, we note that in certain cases the Sain and Scott method performs poorly even with relatively large sample sizes. We compare various bandwidth selection methods using standard mean integrated square error criteria to assess the quality of the density estimates. We study situations where the underlying density is assumed both known and unknown, and note that in practice our method performs well when sample sizes are small. In addition, we apply the methods to real data, and again we believe our methods perform at least as well as existing methods.
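For comparison, a sketch of Abramson's square-root-law adaptive kernel estimator cited above: local bandwidths proportional to the inverse square root of a fixed-bandwidth pilot density; the mixture data and pilot bandwidth are illustrative.

```python
import numpy as np

def abramson_kde(x, grid, h0):
    """Adaptive Gaussian KDE with Abramson's square-root law: per-point
    bandwidths h_i = h0 * (pilot(x_i)/g)^(-1/2), g the geometric mean of
    the pilot density at the data points."""
    x, grid = np.asarray(x, float), np.asarray(grid, float)
    n = len(x)
    # Fixed-bandwidth Gaussian pilot density evaluated at the data points
    pilot = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h0) ** 2).sum(1)
    pilot /= n * h0 * np.sqrt(2 * np.pi)
    g = np.exp(np.log(pilot).mean())
    h = h0 * np.sqrt(g / pilot)                   # per-point bandwidths
    z = (grid[:, None] - x[None, :]) / h
    return (np.exp(-0.5 * z ** 2) / (h * np.sqrt(2 * np.pi))).mean(1)

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-2, 0.5, 80), rng.normal(2, 1.0, 120)])
grid = np.linspace(-5, 6, 200)
dens = abramson_kde(x, grid, h0=0.5)
print(dens.sum() * (grid[1] - grid[0]))           # integrates to ~ 1
```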
