1.
Three Bayesian methods are considered for determining sample sizes for sampling from the Laplace distribution – the distribution of time between rare events – with a normal prior. These methods are applied to sample-size determination for aircraft mid-air collisions in a navigation system or large flight-path deviations of aircraft in air traffic management scenarios. A computer program handles all computations and gives good insight into the best suggested method.
2.
Bayesian sample size estimation for equivalence and non-inferiority tests for diagnostic methods is considered. The goal of the study is to test whether a new screening test of interest is equivalent to, or not inferior to, the reference test, which may or may not be a gold standard. Sample sizes are chosen by the model performance criteria of average posterior variance, length and coverage probability. In the absence of a gold standard, sample sizes are evaluated by the ratio of marginal probabilities of the two screening tests; whereas in the presence of a gold standard, sample sizes are evaluated by the measures of sensitivity and specificity.
3.
Jing Cao J. Jack Lee Susan Alber 《Journal of statistical planning and inference》2009,139(12):4111-4122
A challenge for implementing performance-based Bayesian sample size determination is selecting which of several methods to use. We compare three Bayesian sample size criteria: the average coverage criterion (ACC) which controls the coverage rate of fixed length credible intervals over the predictive distribution of the data, the average length criterion (ALC) which controls the length of credible intervals with a fixed coverage rate, and the worst outcome criterion (WOC) which ensures the desired coverage rate and interval length over all (or a subset of) possible datasets. For most models, the WOC produces the largest sample size among the three criteria, and sample sizes obtained by the ACC and the ALC are not the same. For Bayesian sample size determination for normal means and differences between normal means, we investigate, for the first time, the direction and magnitude of differences between the ACC and ALC sample sizes. For fixed hyperparameter values, we show that the difference of the ACC and ALC sample size depends on the nominal coverage, and not on the nominal interval length. There exists a threshold value of the nominal coverage level such that below the threshold the ALC sample size is larger than the ACC sample size, and above the threshold the ACC sample size is larger. Furthermore, the ACC sample size is more sensitive to changes in the nominal coverage. We also show that for fixed hyperparameter values, there exists an asymptotic constant ratio between the WOC sample size and the ALC (ACC) sample size. Simulation studies are conducted to show that similar relationships among the ACC, ALC, and WOC may hold for estimating binomial proportions. We provide a heuristic argument that the results can be generalized to a larger class of models.
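The normal mean with known sampling variance gives a closed-form illustration of the interval-length criteria discussed above: under a conjugate normal prior the posterior variance does not depend on the observed data, so the fixed-coverage interval length is deterministic and the ACC and ALC sample sizes coincide (the ACC/ALC differences the abstract studies arise in richer settings, e.g. unknown variance). A minimal sketch with illustrative function names, assuming this known-variance conjugate model:

```python
from math import ceil, sqrt
from statistics import NormalDist

def alc_sample_size(sigma2, prior_var, length, coverage):
    """Smallest n so that the fixed-coverage posterior credible interval
    for a normal mean (known sampling variance sigma2, conjugate
    N(mu0, prior_var) prior) has length <= `length`.

    Posterior variance is 1 / (1/prior_var + n/sigma2), independent of
    the data, so the criterion reduces to a closed-form inequality."""
    z = NormalDist().inv_cdf(0.5 + coverage / 2)
    # Solve 2 * z * sqrt(1 / (1/prior_var + n/sigma2)) <= length for n.
    n = sigma2 * ((2 * z / length) ** 2 - 1 / prior_var)
    return max(0, ceil(n))

def interval_length(sigma2, prior_var, n, coverage):
    """Length of the equal-tailed credible interval after n observations."""
    z = NormalDist().inv_cdf(0.5 + coverage / 2)
    return 2 * z * sqrt(1 / (1 / prior_var + n / sigma2))
```

For example, with sampling variance 4, prior variance 1, target length 0.5 and coverage 0.95, the criterion returns n = 242, the first n at which the interval length drops to 0.5 or below.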
4.
Bayesian sample size determination for estimating binomial parameters from data subject to misclassification
E. Rahme L. Joseph & T. W. Gyorkos 《Journal of the Royal Statistical Society. Series C, Applied statistics》2000,49(1):119-128
We investigate the sample size problem when a binomial parameter is to be estimated, but some degree of misclassification is possible. The problem is especially challenging when the degree to which misclassification occurs is not exactly known. Motivated by a Canadian survey of the prevalence of toxoplasmosis infection in pregnant women, we examine the situation where it is desired that a marginal posterior credible interval for the prevalence of width w has coverage 1−α, using a Bayesian sample size criterion. The degree to which the misclassification probabilities are known a priori can have a very large effect on sample size requirements, and in some cases achieving a coverage of 1−α is impossible, even with an infinite sample size. Therefore, investigators must carefully evaluate the degree to which misclassification can occur when estimating sample size requirements.
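The mechanics behind this abstract can be seen in the standard mixing identity: with sensitivity Se and specificity Sp, the apparent positive rate is a deterministic blend of the true prevalence, and the Rogan–Gladen formula inverts it. This is a classical frequentist identity, not the paper's Bayesian criterion, but it shows why uncertainty in Se and Sp propagates into the prevalence estimate no matter how large n is. A sketch with illustrative names:

```python
def apparent_prevalence(pi, se, sp):
    """P(test positive) when true prevalence is pi, sensitivity se,
    specificity sp: true positives plus false positives."""
    return pi * se + (1 - pi) * (1 - sp)

def corrected_prevalence(p_obs, se, sp):
    """Rogan-Gladen back-correction of an observed positive rate.
    Valid only when se + sp > 1 (test is informative)."""
    return (p_obs + sp - 1) / (se + sp - 1)
```

Round-tripping a true prevalence of 0.10 through a test with Se = 0.90, Sp = 0.95 gives an apparent rate of 0.135 and recovers 0.10 exactly; perturbing se or sp in the correction step shows the residual error that no sample size can remove.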
5.
This article considers sample size determination methods based on Bayesian credible intervals for θ, an unknown real-valued parameter of interest. We consider clinical trials and assume that θ represents the difference in the effects of a new and a standard therapy. In this context, it is typical to identify an interval of parameter values that imply equivalence of the two treatments (range of equivalence). Then, an experiment designed to show superiority of the new treatment is successful if it yields evidence that θ is sufficiently large, i.e. if an interval estimate of θ lies entirely above the superior limit of the range of equivalence. Following a robust Bayesian approach, we model uncertainty on prior specification with a class Γ of distributions for θ and we assume that the data yield robust evidence if, as the prior varies in Γ, the inferior limit (lower bound) of the credible interval is sufficiently large. The sample size criteria in the article consist in selecting the minimal number of observations such that the experiment is likely to yield robust evidence. These criteria are based on summaries of the predictive distributions of the random inferior limits of credible intervals. The method is developed for the conjugate normal model and applied to a trial for surgery of gastric cancer.
6.
S. K. Sahu T. M. F. Smith 《Journal of the Royal Statistical Society. Series A (Statistics in Society)》2006,169(2):235-253
Summary. The problem motivating the paper is the determination of sample size in clinical trials under normal likelihoods and at the substantive testing stage of a financial audit where normality is not an appropriate assumption. A combination of analytical and simulation-based techniques within the Bayesian framework is proposed. The framework accommodates two different prior distributions: one is the general purpose fitting prior distribution that is used in Bayesian analysis and the other is the expert subjective prior distribution, the sampling prior which is believed to generate the parameter values which in turn generate the data. We obtain many theoretical results and one key result is that typical non-informative prior distributions lead to very small sample sizes. In contrast, a very informative prior distribution may either lead to a very small or a very large sample size depending on the location of the centre of the prior distribution and the hypothesized value of the parameter. The methods that are developed are quite general and can be applied to other sample size determination problems. Some numerical illustrations which bring out many other aspects of the optimum sample size are given.
7.
Tom Burr Herb Fry Brian McVey Eric Sander Joseph Cavanaugh Andrew Neath 《Communications in Statistics - Simulation and Computation》2013,42(3):507-520
The Bayesian information criterion (BIC) is widely used for variable selection. We focus on the regression setting for which variations of the BIC have been proposed. A version that includes the Fisher Information matrix of the predictor variables performed best in one published study. In this article, we extend the evaluation, introduce a performance measure involving how closely posterior probabilities are approximated, and conclude that the version that includes the Fisher Information often favors regression models having more predictors, depending on the scale and correlation structure of the predictor matrix. In the image analysis application that we describe, we therefore prefer the standard BIC approximation because of its relative simplicity and competitive performance at approximating the true posterior probabilities.
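The "standard BIC approximation" preferred here has, for a Gaussian linear model with the error variance profiled out, the familiar textbook form n·log(RSS/n) + k·log(n); this is not the Fisher-information variant the article evaluates, just the baseline it is compared against. A sketch with illustrative names:

```python
from math import log

def bic_gaussian(rss, n, k):
    """Standard BIC (up to an additive constant) for a linear model
    with Gaussian errors: rss is the residual sum of squares from n
    observations, k the number of fitted coefficients. Lower is better."""
    return n * log(rss / n) + k * log(n)
```

An extra predictor is only selected if it reduces the residual sum of squares enough to outweigh the log(n) penalty: with n = 50, a model cutting RSS from 100 to 80 with one extra coefficient wins, while one that leaves RSS unchanged loses.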
8.
Andrew Montgomery Hartley 《Pharmaceutical statistics》2015,14(6):488-514
Adaptive sample size adjustment (SSA) for clinical trials consists of examining early subsets of on-trial data to adjust estimates of sample size requirements. Blinded SSA is often preferred over unblinded SSA because it obviates many logistical complications of the latter and generally introduces less bias. On the other hand, current blinded SSA methods for binary data offer little to no new information about the treatment effect, ignore uncertainties associated with the population treatment proportions, and/or depend on enhanced randomization schemes that risk partial unblinding. I propose an innovative blinded SSA method for use when the primary analysis is a non-inferiority or superiority test regarding a risk difference. The method incorporates evidence about the treatment effect via the likelihood function of a mixture distribution. I compare the new method with an established one and with the fixed sample size study design, in terms of maximization of an expected utility function. The new method maximizes the expected utility better than do the comparators, under a range of assumptions. I illustrate the use of the proposed method with an example that incorporates a Bayesian hierarchical model. Lastly, I suggest topics for future study regarding the proposed methods. Copyright © 2015 John Wiley & Sons, Ltd.
9.
Laisheng Wei 《Communications in Statistics - Theory and Methods》2014,43(18):3866-3892
In the past two decades, the Pitman closeness (PC) criterion has been studied intensively in China, but many of these research works were written in Chinese and cannot be accessed by researchers from other countries. In this paper, we briefly summarize part of the main results on the PC criterion in linear models obtained in China. First, we present the basic model and some definitions. Then, we introduce the PC superiority of the covariance-adjusted estimate and of a class of biased estimates, including a kind of linear estimate, the James–Stein estimate and the principal components estimate. Third, we introduce Bayesian PC superiorities for several different linear models, such as the ordinary univariate regression model, the multivariate linear model and the analysis of variance model. Finally, some results on robustness under the Bayesian PC criterion are shown.
10.
Osvaldo Marrero 《Journal of applied statistics》2019,46(5):798-813
We propose what appears to be the first Bayesian procedure for the analysis of seasonal variation when the sample size and the amplitude are small. Such data occur often in the medical sciences, where seasonality analyses and environmental considerations can help clarify disease etiologies. The method is explained in terms of a simple physico-geometric setting. We present the Bayesian version of a frequentist test that performs well. Two examples of real data illustrate the procedure's application.
11.
Simon's two-stage design is the most commonly applied multi-stage design in phase IIA clinical trials. It combines the sample sizes at the two stages in order to minimize either the expected or the maximum sample size. When the uncertainty about pre-trial beliefs on the expected or desired response rate is high, a Bayesian alternative should be considered, since it allows one to deal with the entire distribution of the parameter of interest in a more natural way. In this setting, a crucial issue is how to construct a distribution from the available summaries to use as a clinical prior in a Bayesian design. In this work, we explore the Bayesian counterparts of Simon's two-stage design based on the predictive version of the single threshold design. This design requires specifying two prior distributions: the analysis prior, which is used to compute the posterior probabilities, and the design prior, which is employed to obtain the prior predictive distribution. While the usual approach is to build beta priors for carrying out a conjugate analysis, we derive both the analysis and the design distributions through linear combinations of B-splines. The motivating example is the planning of the phase IIA two-stage trial on an anti-HER2 DNA vaccine in breast cancer, where initial beliefs formed from elicited experts' opinions and historical data showed a high level of uncertainty. In a sample size determination problem, the impact of different priors is evaluated.
12.
The authors consider Bayesian analysis for continuous-time Markov chain models based on a conditional reference prior. For such models, inference of the elapsed time between chain observations depends heavily on the rate of decay of the prior as the elapsed time increases. Moreover, improper priors on the elapsed time may lead to improper posterior distributions. In addition, an infinitesimal rate matrix also characterizes this class of models. Experts often have good prior knowledge about the parameters of this matrix. The authors show that the use of a proper prior for the rate matrix parameters together with the conditional reference prior for the elapsed time yields a proper posterior distribution. The authors also demonstrate that, when compared to analyses based on priors previously proposed in the literature, a Bayesian analysis on the elapsed time based on the conditional reference prior possesses better frequentist properties. This type of prior thus represents a better default choice for estimation software.
13.
Andrew M. Hartley 《Pharmaceutical statistics》2012,11(3):230-240
Adaptive sample size redetermination (SSR) for clinical trials consists of examining early subsets of on-trial data to adjust prior estimates of statistical parameters and sample size requirements. Blinded SSR, in particular, while in use already, seems poised to proliferate even further because it obviates many logistical complications of unblinded methods and it generally introduces little or no statistical or operational bias. On the other hand, current blinded SSR methods offer little to no new information about the treatment effect (TE); the obvious resulting problem is that the TE estimate scientists might simply 'plug in' to the sample size formulae could be severely wrong. This paper proposes a blinded SSR method that formally synthesizes sample data with prior knowledge about the TE and the within-treatment variance. It evaluates the method in terms of the type 1 error rate, the bias of the estimated TE, and the average deviation from the targeted power. The method is shown to reduce this average deviation, in comparison with another established method, over a range of situations. The paper illustrates the use of the proposed method with an example. Copyright © 2012 John Wiley & Sons, Ltd.
14.
Reza Pourtaheri 《Communications in Statistics - Theory and Methods》2017,46(4):1927-1940
Recently, a three-level control chart has been developed for monitoring situations in which an appropriate quality measure uses three discrete levels to classify a product characteristic. In this paper, variable-parameters control charts for multinomial data are developed with a three-level classification scheme. We compare them with various adaptive charts to show that this modified scheme can improve performance. To evaluate and compare the performance of this scheme, the adjusted average time to signal is used as the performance measure. Results indicate that the proposed chart has improved performance and is relatively sensitive to small shifts.
15.
We investigate the exact coverage and expected length properties of the model averaged tail area (MATA) confidence interval proposed by Turek and Fletcher, CSDA, 2012, in the context of two nested, normal linear regression models. The simpler model is obtained by applying a single linear constraint on the regression parameter vector of the full model. For a given length of the response vector and nominal coverage of the MATA confidence interval, we consider all possible models of this type and all possible true parameter values, together with a wide class of design matrices and parameters of interest. Our results show that, while not ideal, MATA confidence intervals perform surprisingly well in our regression scenario, provided that we use the minimum weight within the class of weights that we consider on the simpler model.
16.
This article deals with a Bayesian predictive approach for two-stage sequential analyses in clinical trials, applied to both frequentist and Bayesian tests. We propose to make a predictive inference based on the notion of a satisfaction index and the data accrued so far together with future data. The computations and the simulation results concern an inferential problem related to the binomial model.
17.
One critical issue in the Bayesian approach is choosing the priors when there is not enough prior information to specify hyperparameters. Several improper noninformative priors for capture-recapture models have been proposed in the literature. It is known that the Bayesian estimate can be sensitive to the choice of priors, especially when the sample size is small to moderate. Yet how to choose a noninformative prior for a given model remains a question. In this paper, as a first step, we consider the problem of estimating the population size for the Mt model using noninformative priors. The Mt model has wide application in wildlife management, ecology, software reliability, epidemiological studies, census undercount, and other research areas. Four commonly used noninformative priors are considered. We find that the choice of noninformative prior depends only on the number of sampling occasions. Guidelines on the choice of noninformative priors are provided based on the simulation results. The propriety of applying improper noninformative priors is discussed. Simulation studies are developed to inspect the frequentist performance of Bayesian point and interval estimates with different noninformative priors under various population sizes, capture probabilities, and numbers of sampling occasions. The simulation results show that the Bayesian approach can provide more accurate estimates of the population size than the MLE for small samples. Two real-data examples are given to illustrate the method.
18.
Studies of diagnostic tests are often designed with the goal of estimating the area under the receiver operating characteristic curve (AUC) because the AUC is a natural summary of a test's overall diagnostic ability. However, sample size projections dealing with AUCs are very sensitive to assumptions about the variance of the empirical AUC estimator, which depends on two correlation parameters. While these correlation parameters can be estimated from the available data, in practice it is hard to find reliable estimates before the study is conducted. Here we derive achievable bounds on the projected sample size that are free of these two correlation parameters. The lower bound is the smallest sample size that would yield the desired level of precision for some model, while the upper bound is the smallest sample size that would yield the desired level of precision for all models. These bounds are important reference points when designing a single or multi-arm study; they are the absolute minimum and maximum sample size that would ever be required. When the study design includes multiple readers or interpreters of the test, we derive bounds pertaining to the average reader AUC and the 'pooled' or overall AUC for the population of readers. These upper bounds for multireader studies are not too conservative when several readers are involved.
19.
This paper derives the Akaike information criterion (AIC), corrected AIC, the Bayesian information criterion (BIC) and Hannan and Quinn's information criterion for approximate factor models assuming a large number of cross-sectional observations, and studies the consistency properties of these information criteria. It also reports extensive simulation results comparing the performance of the extant and new procedures for selecting the number of factors. The simulation results show the difficulty of determining which criterion performs best. In practice, it is advisable to consider several criteria at the same time, especially Hannan and Quinn's information criterion, Bai and Ng's ICp2 and BIC3, and Onatski's and Ahn and Horenstein's eigenvalue-based criteria. The model-selection criteria considered in this paper are also applied to Stock and Watson's two macroeconomic data sets. The results differ considerably depending on the model-selection criterion in use, but there is evidence suggesting five factors for the first data set and five to seven factors for the second.
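For concreteness, the ICp2 criterion of Bai and Ng mentioned above is commonly stated as ln V(k) + k·((N+T)/(NT))·ln(min(N,T)), where V(k) is the average squared residual after extracting k factors from an N×T panel; the sketch below implements that penalty form only (computing V(k) itself requires a principal-components step not shown here), with illustrative names:

```python
from math import log

def ic_p2(v_k, k, n_obs, t_obs):
    """Bai-Ng ICp2 value for k factors in an N x T panel, given the
    average residual variance v_k after extracting k factors.
    The chosen k minimizes this quantity over a candidate range."""
    penalty = k * ((n_obs + t_obs) / (n_obs * t_obs)) * log(min(n_obs, t_obs))
    return log(v_k) + penalty
```

As with BIC-type rules, an extra factor is retained only if it lowers the residual variance enough to offset the penalty, which grows with both panel dimensions through the min(N, T) term.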
20.
S. Min 《Communications in Statistics - Simulation and Computation》2017,46(3):2267-2282
In this article, we develop a Bayesian variable selection method for choosing covariates in the Poisson change-point regression model with both discrete and continuous candidate covariates. Ranging from a null model with no selected covariates to a full model including all covariates, the method searches the entire model space, estimates posterior inclusion probabilities of covariates, and obtains model-averaged estimates of the covariate coefficients, while simultaneously estimating a time-varying baseline rate due to change-points. For posterior computation, a Metropolis–Hastings within partially collapsed Gibbs sampler is developed to efficiently fit the Poisson change-point regression model with variable selection. We illustrate the proposed method using simulated and real data sets.