Similar Documents

20 similar documents found.
1.
This article considers sample size determination methods based on Bayesian credible intervals for θ, an unknown real-valued parameter of interest. We consider clinical trials and assume that θ represents the difference in the effects of a new and a standard therapy. In this context, it is typical to identify an interval of parameter values that imply equivalence of the two treatments (range of equivalence). An experiment designed to show superiority of the new treatment is then successful if it yields evidence that θ is sufficiently large, i.e. if an interval estimate of θ lies entirely above the superior limit of the range of equivalence. Following a robust Bayesian approach, we model uncertainty in the prior specification with a class Γ of distributions for θ and assume that the data yield robust evidence if, as the prior varies in Γ, the inferior limit of the credible interval is sufficiently large. The sample size criteria in the article consist in selecting the minimal number of observations such that the experiment is likely to yield robust evidence. These criteria are based on summaries of the predictive distributions of the random inferior limits of credible intervals. The method is developed for the conjugate normal model and applied to a trial for surgery of gastric cancer.
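As a rough illustration of this criterion, the sketch below simulates the predictive distribution of the worst-case lower credible limit over a class of normal priors whose mean ranges over an interval, and searches for the smallest n that yields robust evidence with high predictive probability. All numerical settings (prior class, design prior, equivalence limit δ, thresholds) are illustrative assumptions, not values from the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

sigma = 1.0            # known sampling sd (assumption)
tau2 = 0.5**2          # prior variance, fixed across the class (assumption)
mu_class = (0.0, 0.4)  # range of prior means defining the class Gamma (assumption)
delta = 0.2            # superior limit of the range of equivalence (assumption)
alpha = 0.05           # one-sided credible level
gamma = 0.8            # required predictive probability of robust evidence
design_mean, design_sd = 0.5, 0.1   # design prior for generating data (assumption)
z = stats.norm.ppf(1 - alpha)

def robust_lower_limit(xbar, n):
    """Worst-case (over the class) lower limit of the one-sided credible interval.

    In the normal-normal model the lower limit is increasing in the prior
    mean, so the infimum over the class is attained at the smallest one."""
    mu0 = mu_class[0]
    post_prec = 1.0 / tau2 + n / sigma**2
    post_mean = (mu0 / tau2 + n * xbar / sigma**2) / post_prec
    return post_mean - z / np.sqrt(post_prec)

def predictive_prob(n, n_sim=20_000):
    """P(robust evidence) under the predictive distribution of the data."""
    theta = rng.normal(design_mean, design_sd, n_sim)
    xbar = rng.normal(theta, sigma / np.sqrt(n))
    return np.mean(robust_lower_limit(xbar, n) > delta)

n = 2
while predictive_prob(n) < gamma:
    n += 1
print("smallest n with predictive probability >=", gamma, ":", n)
```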

2.
In assessing biosimilarity between two products, the question to ask is always “How similar is similar?” Traditionally, equivalence of the means of the two products is the primary consideration in a clinical trial. This study suggests an alternative assessment that tests whether a certain percentage of the population of differences lies within a prespecified interval. In doing so, accuracy and precision are assessed simultaneously by judging whether a two-sided tolerance interval falls within a prespecified acceptance range. We further derive an asymptotic distribution of the tolerance limits to determine the sample size for achieving a targeted level of power. Our numerical study shows that the proposed two-sided tolerance interval test controls the type I error rate and provides sufficient power. A real example is presented to illustrate the proposed approach.
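A minimal sketch of the inclusion rule: compute an approximate two-sided normal tolerance interval (Howe's method) for the individual differences and declare biosimilarity if the interval falls inside the acceptance range. The acceptance range and simulated data are assumptions for illustration; the article's test additionally derives the asymptotic distribution of the tolerance limits for sample size planning.

```python
import numpy as np
from scipy import stats

def two_sided_tolerance_interval(x, p=0.9, conf=0.95):
    """Approximate two-sided normal tolerance interval (Howe's method):
    covers at least a proportion p of the population with confidence conf."""
    n = len(x)
    nu = n - 1
    z = stats.norm.ppf((1 + p) / 2)
    chi2 = stats.chi2.ppf(1 - conf, nu)        # lower chi-square quantile
    k = z * np.sqrt(nu * (1 + 1 / n) / chi2)
    m, s = np.mean(x), np.std(x, ddof=1)
    return m - k * s, m + k * s

# Hypothetical acceptance range for the individual differences (assumption).
acceptance = (-1.5, 1.5)
rng = np.random.default_rng(1)
diff = rng.normal(0.1, 0.5, size=60)           # simulated paired differences

lo, hi = two_sided_tolerance_interval(diff)
similar = acceptance[0] <= lo and hi <= acceptance[1]
print(f"tolerance interval = ({lo:.3f}, {hi:.3f}); biosimilar: {similar}")
```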

3.
A challenge in implementing performance-based Bayesian sample size determination is selecting which of several methods to use. We compare three Bayesian sample size criteria: the average coverage criterion (ACC), which controls the coverage rate of fixed-length credible intervals over the predictive distribution of the data; the average length criterion (ALC), which controls the length of credible intervals with a fixed coverage rate; and the worst outcome criterion (WOC), which ensures the desired coverage rate and interval length over all (or a subset of) possible datasets. For most models, the WOC produces the largest sample size among the three criteria, and the sample sizes obtained by the ACC and the ALC are not the same. For Bayesian sample size determination for normal means and differences between normal means, we investigate, for the first time, the direction and magnitude of the differences between the ACC and ALC sample sizes. For fixed hyperparameter values, we show that the difference between the ACC and ALC sample sizes depends on the nominal coverage, not on the nominal interval length. There exists a threshold value of the nominal coverage level such that below the threshold the ALC sample size is larger than the ACC sample size, and above it the ACC sample size is larger. Furthermore, the ACC sample size is more sensitive to changes in the nominal coverage. We also show that for fixed hyperparameter values, there exists an asymptotically constant ratio between the WOC sample size and the ALC (ACC) sample size. Simulation studies show that similar relationships among the ACC, ALC, and WOC may hold for estimating binomial proportions. We provide a heuristic argument that the results can be generalized to a larger class of models.
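To make the ACC/ALC contrast concrete, here is a sketch for the binomial case mentioned above, using a Beta(2, 2) prior and the exact beta-binomial predictive. The ACC interval is centred at the posterior mean rather than being a true HPD interval, a simplification; the hyperparameters, nominal length, and coverage are assumptions.

```python
import numpy as np
from scipy import stats

a, b = 2.0, 2.0        # Beta prior hyperparameters (assumption)
alpha = 0.05           # nominal credible level 1 - alpha
l = 0.20               # nominal interval length (assumption)

def predictive_x(n):
    """All possible success counts x with their beta-binomial predictive weights."""
    x = np.arange(n + 1)
    return x, stats.betabinom.pmf(x, n, a, b)

def acc_coverage(n):
    """Average posterior coverage of a fixed-length-l interval centred at the
    posterior mean (a common simplification of the HPD interval)."""
    x, w = predictive_x(n)
    ap, bp = a + x, b + n - x
    centre = ap / (ap + bp)
    lo = np.clip(centre - l / 2, 0, 1)
    hi = np.clip(centre + l / 2, 0, 1)
    return np.sum(w * (stats.beta.cdf(hi, ap, bp) - stats.beta.cdf(lo, ap, bp)))

def alc_length(n):
    """Average length of the equal-tailed 1-alpha posterior credible interval."""
    x, w = predictive_x(n)
    ap, bp = a + x, b + n - x
    length = stats.beta.ppf(1 - alpha / 2, ap, bp) - stats.beta.ppf(alpha / 2, ap, bp)
    return np.sum(w * length)

n_acc = next(n for n in range(2, 2000) if acc_coverage(n) >= 1 - alpha)
n_alc = next(n for n in range(2, 2000) if alc_length(n) <= l)
print("ACC sample size:", n_acc, " ALC sample size:", n_alc)
```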

4.
Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach for incorporating prior information, such as data from historical clinical trials, into the nuisance parameter–based sample size re-estimation in a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. For planning and analyzing the trial, frequentist methods are considered. Moreover, the external information on the variance is summarized by the Bayesian meta-analytic-predictive approach. To incorporate external information into the sample size re-estimation, we propose to update the meta-analytic-predictive prior based on the results of the internal pilot study and to re-estimate the sample size using an estimator from the posterior. By means of a simulation study, we compare the operating characteristics, such as power and the sample size distribution, of the proposed procedure with the traditional sample size re-estimation approach that uses the pooled variance estimator. The simulation study shows that, if no prior-data conflict is present, incorporating external information into the sample size re-estimation improves the operating characteristics compared to the traditional approach. In the case of a prior-data conflict, that is, when the variance of the ongoing clinical trial is unequal to the prior location, the performance of the traditional sample size re-estimation procedure is in general superior, even when the prior information is robustified. When considering whether to include prior information in sample size re-estimation, the potential gains should be balanced against the risks.
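A simplified sketch of the idea: summarize the external variance information by a single inverse-gamma prior (the meta-analytic-predictive prior in the article is a mixture, which this sketch collapses to one component, an assumption), update it with the internal pilot's sums of squares, and plug the posterior mean of the variance into a standard two-arm sample size formula. All numbers are illustrative.

```python
import numpy as np
from scipy import stats

alpha, power, delta = 0.05, 0.80, 0.5    # design parameters (assumption)
a0, b0 = 4.0, 3.0                        # inverse-gamma prior on sigma^2, e.g.
                                         # fitted to historical trials (assumption)

rng = np.random.default_rng(2)
pilot = rng.normal(0.0, 1.1, size=(2, 20))        # internal pilot, 20 per arm
ss = np.sum((pilot - pilot.mean(axis=1, keepdims=True))**2)
df = pilot.shape[1] * 2 - 2

# Conjugate update of the variance prior with the pilot sums of squares.
a_post, b_post = a0 + df / 2, b0 + ss / 2
sigma2_post = b_post / (a_post - 1)               # posterior mean of sigma^2

za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
n_per_arm = int(np.ceil(2 * sigma2_post * (za + zb)**2 / delta**2))
print("re-estimated sample size per arm:", n_per_arm)
```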

5.
The GARCH and stochastic volatility (SV) models are two competing, well-known and often used models for explaining the volatility of financial series. In this paper, we consider a closed-form estimator for a stochastic volatility model and derive its asymptotic properties. We confirm our theoretical results by a simulation study. In addition, we propose a set of simple, strongly consistent decision rules for comparing the ability of the GARCH and SV models to fit the characteristic features observed in high-frequency financial data, such as high kurtosis and the slowly decaying autocorrelation function of the squared observations. These rules are based on a number of moment conditions that is allowed to grow with the sample size. We show that our selection procedure leads to choosing the model that fits best, or the simpler model under equivalence, with probability one as the sample size increases. The finite-sample behavior of our procedure is analyzed via simulations. Finally, we provide an application to stocks in the Dow Jones Industrial Average index.
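The decision rules compare model-implied moments with their empirical counterparts. The sketch below computes the two empirical features named above, kurtosis and the autocorrelation function of the squared observations, for a simulated SV-type series; the AR(1) log-volatility parameters are illustrative assumptions.

```python
import numpy as np

def sample_kurtosis(x):
    """Raw (non-excess) sample kurtosis."""
    x = np.asarray(x) - np.mean(x)
    return np.mean(x**4) / np.mean(x**2)**2

def acf_squared(x, max_lag=20):
    """Autocorrelation function of the squared (demeaned) observations."""
    s = (x - np.mean(x))**2
    s = s - np.mean(s)
    denom = np.sum(s**2)
    return np.array([np.sum(s[k:] * s[:-k]) / denom for k in range(1, max_lag + 1)])

rng = np.random.default_rng(3)
# Simulated SV-type series: r_t = exp(h_t / 2) * eps_t with AR(1) log-volatility.
n, phi, sig_eta = 5000, 0.95, 0.3
h = np.zeros(n)
for t in range(1, n):
    h[t] = phi * h[t - 1] + sig_eta * rng.standard_normal()
r = np.exp(h / 2) * rng.standard_normal(n)

print("kurtosis:", round(sample_kurtosis(r), 2))
print("ACF of r^2, lags 1-5:", np.round(acf_squared(r, 5), 3))
```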

6.
In this article we consider the sample size determination problem in the context of robust Bayesian parameter estimation of the Bernoulli model. Following a robust approach, we consider classes of conjugate Beta prior distributions for the unknown parameter. We assume that inference is robust if posterior quantities of interest (such as point estimates and limits of credible intervals) do not change too much as the prior varies in the selected classes of priors. For the sample size problem, we consider criteria based on the predictive distributions of the lower bound, upper bound and range of the posterior quantity of interest. The sample size is selected so that, before observing the data, one is confident of observing a small value of the posterior range and, depending on design goals, a large (small) value of the lower (upper) bound of the quantity of interest. We also discuss relationships with, and comparisons to, non-robust and non-informative Bayesian methods.
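A sketch of the range-based criterion for the Bernoulli case: take a rectangular class of Beta(a, b) priors, exploit the monotonicity of the posterior mean in a and b to obtain its range over the class in closed form, and search for the smallest n whose predictive probability of a small range exceeds a threshold. The class bounds, tolerance ε, and design prior are assumptions for illustration.

```python
import numpy as np

a_rng = (2.0, 6.0)     # range for the Beta prior parameter a (assumption)
b_rng = (2.0, 6.0)     # range for the Beta prior parameter b (assumption)
eps = 0.10             # required bound on the posterior-mean range (assumption)
gamma = 0.90           # required predictive probability (assumption)
a_d, b_d = 4.0, 4.0    # design prior used to generate the data (assumption)

rng = np.random.default_rng(4)

def posterior_mean_range(x, n):
    """Range of the posterior mean (a+x)/(a+b+n) over the prior class.

    The posterior mean is increasing in a and decreasing in b, so the
    extremes sit at opposite corners of the (a, b) rectangle."""
    lo = (a_rng[0] + x) / (a_rng[0] + b_rng[1] + n)
    hi = (a_rng[1] + x) / (a_rng[1] + b_rng[0] + n)
    return hi - lo

def predictive_prob(n, n_sim=20_000):
    theta = rng.beta(a_d, b_d, n_sim)
    x = rng.binomial(n, theta)
    return np.mean(posterior_mean_range(x, n) < eps)

n = 5
while predictive_prob(n) < gamma:
    n += 1
print("minimal robust sample size:", n)
```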

7.
The optimal sample size for comparing two Poisson rates when the counts are underreported is investigated. We consider two sampling scenarios. We first consider the case where only underreported data will be sampled and rely on informative prior distributions to obtain posterior identifiability. We then consider the case where both an expensive infallible search method and a fallible method are available. An interval-based sample size criterion is used in both sampling scenarios. Since the posterior distributions of the two rates are functions of confluent hypergeometric and hypergeometric functions, simulation-based methods are necessary to perform the sample size determination scheme.

8.
In mixed linear models, it is frequently of interest to test hypotheses on the variance components. The F-test and the likelihood ratio test (LRT) are commonly used for such purposes. Current LRTs available in the literature are based on limiting distribution theory. With the development of finite-sample distribution theory, it becomes possible to derive an exact test for the likelihood ratio statistic. In this paper, we consider the problem of testing null hypotheses on the variance component in a one-way balanced random effects model. We use the exact test for the likelihood ratio statistic and compare the performance of the F-test and the LRT. Simulations provide strong support for the equivalence of these two tests, and we furthermore prove their equivalence mathematically.
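For the balanced one-way model both statistics are easy to compute. The sketch below evaluates the F statistic in closed form and the likelihood ratio statistic by numerical maximization, referring the LRT to the asymptotic 50:50 chi-square mixture for the boundary null as an approximation; the article's exact finite-sample distribution is not reproduced here, and the data are simulated.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(5)
k, n = 10, 6                         # groups, replicates per group (toy sizes)
y = rng.normal(0, 1, (k, n)) + rng.normal(0, 0.7, (k, 1))   # true sigma_a = 0.7

gm = y.mean()
ssw = np.sum((y - y.mean(axis=1, keepdims=True))**2)
ssb = n * np.sum((y.mean(axis=1) - gm)**2)

# F-test for H0: sigma_a^2 = 0 in the balanced one-way layout.
F = (ssb / (k - 1)) / (ssw / (k * (n - 1)))
p_f = stats.f.sf(F, k - 1, k * (n - 1))

def negloglik(params):
    """-2 log-likelihood (constants dropped) of the balanced one-way random
    effects model, with mu profiled out at the grand mean."""
    se2, sa2 = np.exp(params[0]), params[1]**2   # keep variances nonnegative
    v = se2 + n * sa2
    return (k * (n - 1) * np.log(se2) + ssw / se2
            + k * np.log(v) + n * np.sum((y.mean(axis=1) - gm)**2) / v)

res1 = optimize.minimize(negloglik, x0=[0.0, 0.5], method="Nelder-Mead")
# Under H0 (sigma_a^2 = 0) the MLE of sigma^2 is SST / N, in closed form.
sst, N = ssw + ssb, k * n
nll0 = N * np.log(sst / N) + N
lrt = nll0 - res1.fun
p_lrt = 0.5 * stats.chi2.sf(lrt, 1)   # boundary null: 50:50 chi2_0/chi2_1 mixture

print(f"F = {F:.3f} (p = {p_f:.4f}); LRT = {lrt:.3f} (p ~ {p_lrt:.4f})")
```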

9.
In this article, we propose an exponentially weighted moving average (EWMA) control chart for the shape parameter β of Weibull processes. The chart is based on a moving range when a single measurement is taken per sampling period. We consider both one-sided (lower-sided and upper-sided) and two-sided control charts. We perform simulations to estimate control limits that achieve a specified average run length (ARL) when the process is in control. The control limits we derive are ARL-unbiased in that they result in an ARL that is shorter than the stable-process ARL when β has shifted. We also perform simulations to determine Phase I sample size requirements when the control limits are based on an estimate of β. We compare the ARL performance of the proposed chart to that of the moving range chart proposed in the literature.
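A sketch of the simulation step, under the assumption (mine, for illustration) that the moving range is taken on the log-observations, whose variability is governed by 1/β. The trial control limit below would in practice be searched over until the estimated in-control ARL hits its target, which is how the article calibrates its limits.

```python
import numpy as np

rng = np.random.default_rng(6)
beta0, eta = 2.0, 1.0      # in-control Weibull shape and scale (assumption)
lam = 0.1                  # EWMA smoothing constant (assumption)
ucl = 1.0                  # trial upper control limit for the EWMA (assumption)

def run_length(beta, max_t=50_000):
    """Periods until the EWMA of the moving range of logs exceeds ucl.

    Var(log X) = (pi^2 / 6) / beta^2 for a Weibull, so the moving range of
    the logs grows as beta decreases: an upper-sided chart detects drops."""
    x_prev = eta * rng.weibull(beta)
    z = np.pi / (np.sqrt(6) * beta0)        # start near the in-control log-sd
    for t in range(1, max_t + 1):
        x = eta * rng.weibull(beta)
        mr = abs(np.log(x) - np.log(x_prev))
        z = lam * mr + (1 - lam) * z
        if z > ucl:
            return t
        x_prev = x
    return max_t

arl0 = np.mean([run_length(beta0) for _ in range(500)])
arl1 = np.mean([run_length(beta0 / 2) for _ in range(500)])
print(f"estimated in-control ARL: {arl0:.0f}; ARL after shape halves: {arl1:.0f}")
```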

10.
In this paper, we consider the Marshall–Olkin extended exponential (MOEE) distribution, which is capable of modelling various shapes of failure rates and aging criteria. The purpose of this paper is threefold. First, we derive the maximum likelihood estimators of the unknown parameters and the observed Fisher information matrix from progressively type-II censored data. Second, the Bayes estimates are evaluated by applying Lindley’s approximation method and the Markov chain Monte Carlo method under the squared error loss function. We perform a simulation study to compare the proposed Bayes estimators with the maximum likelihood estimators, and we also compute the 95% asymptotic confidence interval and symmetric credible interval along with the coverage probability. Third, we consider one-sample and two-sample prediction problems based on the observed sample and provide appropriate predictive intervals under both the classical and the Bayesian framework. Finally, we analyse a real data set to illustrate the results derived.
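As a sketch of the first step, the following maximizes the progressively type-II censored log-likelihood, which is proportional to the product of f(x_i) S(x_i)^{R_i} over the observed failures. The MOEE density and survival functions follow the standard Marshall–Olkin form; the failure times and censoring scheme R are toy values, and the Bayesian and prediction steps are not reproduced.

```python
import numpy as np
from scipy import optimize

def moee_logpdf(x, a, lam):
    """log-density of the Marshall-Olkin extended exponential:
    f(x) = a * lam * exp(-lam x) / (1 - (1 - a) exp(-lam x))^2."""
    e = np.exp(-lam * x)
    return np.log(a * lam) - lam * x - 2 * np.log(1 - (1 - a) * e)

def moee_logsf(x, a, lam):
    """log-survival: S(x) = a exp(-lam x) / (1 - (1 - a) exp(-lam x))."""
    e = np.exp(-lam * x)
    return np.log(a) - lam * x - np.log(1 - (1 - a) * e)

# Toy progressively type-II censored sample: m failure times x, with R_i
# surviving units removed at each failure (numbers are illustrative only).
x = np.array([0.12, 0.25, 0.41, 0.58, 0.83, 1.10, 1.52, 2.05])
R = np.array([2, 0, 1, 0, 2, 0, 1, 2])

def negloglik(params):
    a, lam = np.exp(params)          # optimize on the log scale for positivity
    return -np.sum(moee_logpdf(x, a, lam) + R * moee_logsf(x, a, lam))

res = optimize.minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
a_hat, lam_hat = np.exp(res.x)
print(f"MLEs: alpha = {a_hat:.3f}, lambda = {lam_hat:.3f}")
```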

11.
In this paper we derive the predictive density function of a future observation under the assumption of an Edgeworth-type non-normal prior distribution for the unknown mean of a normal population. Fixed-size single sample and sequential sampling inspection plans, in a decisive prediction framework, are examined for their sensitivity to departures from normality of the prior distribution. Numerical illustrations indicate that the decision to market the remaining items of a given lot under a fixed-size plan may be sensitive to the presence of skewness or kurtosis in the prior distribution. However, Bayes’ decision based on the sequential plan may not change, though the expected gains may change with variation in the non-normality of the prior distribution.

12.
In this paper, we derive the predictive distributions of one and of several future responses, including their average, on the basis of a type II doubly censored sample from a two-parameter exponential life testing model. We also determine the highest predictive density interval for a future response. A numerical example is provided to illustrate the results.

13.
We consider small sample equivalence tests for exponentiality. Statistical inference in this setting is particularly challenging, since equivalence testing procedures typically require much larger sample sizes than classical “difference tests” to perform well. We make use of Butler’s marginal likelihood for the shape parameter of a gamma distribution in our development of small sample equivalence tests for exponentiality. We consider two procedures using the principle of confidence interval inclusion, four Bayesian methods, and the uniformly most powerful unbiased (UMPU) test, where a saddlepoint approximation to the intractable distribution of a canonical sufficient statistic is used. We perform small sample simulation studies to assess the bias of our various tests and show that all of the Bayes posteriors we consider are integrable. Our simulation studies show that the saddlepoint-approximated UMPU method performs remarkably well for small sample sizes and is the only method that consistently exhibits an empirical significance level close to the nominal 5% level.

14.
In this paper, we develop a matching prior for the product of means in several normal distributions with unrestricted means and unknown variances. Properly assigning priors for the product of normal means has been a longstanding issue because of the presence of nuisance parameters. Matching priors, which are priors that match the posterior probabilities of certain regions with their frequentist coverage probabilities, are commonly used but difficult to derive for this problem. We develop first-order probability matching priors; however, these turn out to be improper. We therefore apply an alternative method and derive a matching prior based on a modification of the profile likelihood. Simulation studies show that the derived matching prior performs better than the uniform prior and Jeffreys’ prior in meeting the target coverage probabilities, and meets them well even for small sample sizes. In addition, to evaluate the validity of the proposed matching prior, the Bayesian credible interval for the product of normal means under the matching prior is compared to the credible intervals under the uniform prior and Jeffreys’ prior, and to the confidence interval obtained by the method of Yfantis and Flatman.

15.
This article is concerned with making predictive inference on the basis of a doubly censored sample from a two-parameter Rayleigh life model. We derive the predictive distributions for a single future response, the ith future response, and several future responses. We use the Bayesian approach, in conjunction with an improper flat prior for the location parameter and an independent proper conjugate prior for the scale parameter, to derive the predictive distributions. We conclude with a numerical example in which the effect of the hyperparameters on the mean and standard deviation of the predictive density is assessed.

16.
Estimation and prediction in generalized linear mixed models are often hampered by intractable high-dimensional integrals. This paper provides a framework for resolving this intractability using asymptotic expansions when the number of random effects is large. To that end, we first derive a modified Laplace approximation for the case where the number of random effects increases at a lower rate than the sample size. Second, we propose an approximate likelihood method based on the asymptotic expansion of the log-likelihood using the modified Laplace approximation, which is maximized with a quasi-Newton algorithm. Finally, we define a second-order plug-in predictive density based on a similar expansion of the plug-in predictive density and show that it is a normal density. Our simulations show that, in comparison to other approximations, our method performs better. The methods are readily applied to non-Gaussian spatial data; as an example, an analysis of the rhizoctonia root rot data is presented.
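The building block here is the Laplace approximation of a random-effects integral. The sketch below applies the basic first-order version to a single Poisson random-intercept cluster and checks it against numerical quadrature; the article's modified approximation for a growing number of random effects is not reproduced, and the data and parameters are toy values.

```python
import numpy as np
from scipy import special, integrate

beta, sigma2 = 0.5, 1.0                 # fixed effect and random-intercept variance
y = np.array([3, 1, 4, 2, 2])           # one cluster's Poisson counts (toy data)
n = len(y)

def log_integrand(u):
    """log of (Poisson likelihood x normal random-effect density) at intercept u."""
    eta = beta + u
    return (np.sum(y * eta - np.exp(eta) - special.gammaln(y + 1))
            - 0.5 * u**2 / sigma2 - 0.5 * np.log(2 * np.pi * sigma2))

# Newton's method for the mode u_hat of the (log-concave) integrand.
u = 0.0
for _ in range(50):
    grad = np.sum(y) - n * np.exp(beta + u) - u / sigma2
    hess = -n * np.exp(beta + u) - 1.0 / sigma2
    u -= grad / hess

# First-order Laplace approximation of the marginal likelihood.
hess = -n * np.exp(beta + u) - 1.0 / sigma2
laplace = np.exp(log_integrand(u)) * np.sqrt(2 * np.pi / -hess)

exact, _ = integrate.quad(lambda t: np.exp(log_integrand(t)), -10, 10)
print(f"Laplace: {laplace:.6e}  quadrature: {exact:.6e}")
```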

17.
We derive a new Bayesian credible interval estimator for comparing two Poisson rates when counts are underreported and an additional validation data set is available. We provide a closed-form posterior density for the difference between the two rates that yields insightful information on which prior parameters influence the posterior the most. We also apply the new interval estimator to a real-data example, investigate the performance of the credible interval, and examine the impact of informative priors on the rate difference posterior via Monte Carlo simulations.
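For intuition, here is a stripped-down Monte Carlo version that ignores the underreporting and validation-data structure (a deliberate simplification): with conjugate Gamma priors the two rate posteriors are Gamma, and a credible interval for their difference follows from posterior draws. All counts and hyperparameters are toy values.

```python
import numpy as np

rng = np.random.default_rng(7)

# Observed (fully reported) counts and exposures -- a simplification that
# drops the underreporting/validation-data structure of the article.
y1, t1 = 30, 1000.0      # events and person-time in group 1 (toy numbers)
y2, t2 = 18, 1000.0      # events and person-time in group 2

a, b = 0.5, 0.0001       # Gamma(a, b) prior on each rate (assumption)

# Conjugate Gamma posteriors, then Monte Carlo for the rate difference.
lam1 = rng.gamma(a + y1, 1.0 / (b + t1), 100_000)
lam2 = rng.gamma(a + y2, 1.0 / (b + t2), 100_000)
diff = lam1 - lam2

lo, hi = np.quantile(diff, [0.025, 0.975])
print(f"95% credible interval for the rate difference: ({lo:.5f}, {hi:.5f})")
```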

18.
The statistical inference problem for effect size indices is addressed using a series of independent two-armed experiments from k arbitrary populations. The effect size parameter simply quantifies the difference between two groups and is a meaningful index when data are measured on different scales. In the context of bivariate statistical models, we define estimators of the effect size indices and propose large-sample testing procedures to test the homogeneity of these indices. The null and non-null distributions of the proposed testing procedures are derived and their performance is evaluated via Monte Carlo simulation. Further, three types of interval estimation of the proposed indices are considered for both combined and uncombined data. Lower and upper confidence limits for the actual effect size indices are obtained and compared via bootstrapping. It is found that the length of the intervals based on the combined effect size estimator is almost half that of the intervals based on the uncombined effect size estimators. Finally, we illustrate the proposed procedures for hypothesis testing and interval estimation using a real data set.
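As a sketch of the bootstrap interval estimation step for a single two-armed experiment, here is a percentile bootstrap for the standardized mean difference; the article's indices, combined estimators, and homogeneity tests are more general, and the data below are simulated.

```python
import numpy as np

rng = np.random.default_rng(8)

def cohens_d(x, y):
    """Standardized mean difference with the pooled standard deviation."""
    nx, ny = len(x), len(y)
    sp = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                 / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / sp

x = rng.normal(0.5, 1.0, 40)     # treatment arm (toy data)
y = rng.normal(0.0, 1.0, 40)     # control arm

# Percentile bootstrap: resample each arm with replacement.
boot = np.array([
    cohens_d(rng.choice(x, len(x)), rng.choice(y, len(y)))
    for _ in range(5000)
])
lo, hi = np.quantile(boot, [0.025, 0.975])
print(f"d = {cohens_d(x, y):.3f}, 95% percentile bootstrap CI: ({lo:.3f}, {hi:.3f})")
```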

19.
When the X̄ control chart is used to monitor a process, three parameters should be determined: the sample size, the sampling interval between successive samples, and the control limits of the chart. Duncan presented a cost model to determine the three parameters for an X̄ chart. Alexander et al. combined Duncan's cost model with the Taguchi loss function to present a loss model for determining the three parameters. In this paper, the Burr distribution is employed to conduct the economic-statistical design of X̄ charts for non-normal data. Alexander's loss model is used as the objective function, and the cumulative distribution function of the Burr distribution is applied to derive the statistical constraints of the design. An example is presented to illustrate the solution procedure. From the results of the sensitivity analyses, we find that small values of the skewness coefficient have no significant effect on the optimal design; however, a larger skewness coefficient leads to a slightly larger sample size and sampling interval, as well as wider control limits. Meanwhile, an increase in the kurtosis coefficient results in a larger sample size and wider control limits.
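A sketch of how Burr XII quantiles yield (typically asymmetric) probability limits for non-normal data. The shape parameters below are arbitrary assumptions; in the design they would be fitted to the skewness and kurtosis of the process statistic, and the limits enter the economic model as statistical constraints.

```python
import numpy as np
from scipy import stats

# Hypothetical Burr XII shape parameters for the standardized statistic
# (assumption); F(x) = 1 - (1 + x^c)^(-k) for x > 0.
c, k = 5.0, 6.0
alpha = 0.0027                 # target false-alarm rate (3-sigma equivalent)

burr = stats.burr12(c, k)
# Probability limits on the Burr scale, then standardized so they can be
# placed around the process statistic's mean in units of its standard error.
q_lo, q_hi = burr.ppf([alpha / 2, 1 - alpha / 2])
m, s = burr.mean(), burr.std()
print(f"standardized limits: ({(q_lo - m) / s:.3f}, {(q_hi - m) / s:.3f})")
```

Note how the standardized limits come out asymmetric, which is exactly why skewness and kurtosis affect the optimal design.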

20.
When counting the number of chemical parts in air pollution studies, or when comparing the occurrence of congenital malformations between a uranium mining town and a control population, we often assume a Poisson distribution for the number of these rare events. Some discussions of sample size calculation under the Poisson model appear elsewhere, but all of these focus on testing equality rather than testing equivalence. We discuss sample size and power calculation on the basis of the exact distribution under Poisson models for testing non-inferiority and equivalence with respect to the mean incidence rate ratio. On the basis of large sample theory, we further develop an approximate sample size calculation formula using the normal approximation of a proposed test statistic for testing non-inferiority, and an approximate power calculation formula for testing equivalence. We find that these approximation formulae tend to underestimate the minimum required sample size obtained from the exact test procedure. On the other hand, we find that the power corresponding to the approximate sample sizes can actually be accurate (with respect to Type I error and power) when we apply the asymptotic test procedure based on the normal distribution. We tabulate, for a variety of situations, the minimum mean incidence needed in the standard (or control) population, which can easily be employed to calculate the minimum required sample size from each comparison group for testing non-inferiority and equivalence between two Poisson populations.
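A sketch of the normal-approximation route for non-inferiority on the rate ratio: a Wald statistic on the log rate ratio, with power estimated by simulation and the sample size found by search. The rates, margin, and error levels are assumptions, and the exact test tabulated in the article is not reproduced.

```python
import numpy as np
from scipy import stats

lam_std, lam_new = 0.10, 0.10   # true incidence rates (assumption)
rho0 = 1.5                      # non-inferiority margin on the rate ratio (assumption)
alpha, target_power = 0.05, 0.80
z = stats.norm.ppf(1 - alpha)
rng = np.random.default_rng(9)

def power(n, n_sim=20_000):
    """Power of a Wald test of H0: lam_new / lam_std >= rho0 on the log scale."""
    x_new = rng.poisson(n * lam_new, n_sim)
    x_std = rng.poisson(n * lam_std, n_sim)
    ok = (x_new > 0) & (x_std > 0)          # Wald statistic needs positive counts
    stat = (np.log(x_new[ok] / x_std[ok]) - np.log(rho0)) / np.sqrt(
        1.0 / x_new[ok] + 1.0 / x_std[ok])
    return np.sum(stat < -z) / n_sim        # zero-count runs count as non-rejections

n = 100
while power(n) < target_power:
    n += 25
print("approximate sample size per group:", n)
```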
