Similar Articles
20 similar articles found (search time: 15 ms)
1.
Bayesian sample size estimation for equivalence and non-inferiority tests of diagnostic methods is considered. The goal of the study is to test whether a new screening test is equivalent, or not inferior, to a reference test, which may or may not be a gold standard. Sample sizes are chosen according to model performance criteria: average posterior variance, interval length, and coverage probability. In the absence of a gold standard, sample sizes are evaluated through the ratio of the marginal probabilities of the two screening tests; in the presence of a gold standard, they are evaluated through sensitivity and specificity.

2.
The power of a clinical trial depends in part on its sample size. With continuous data, the sample size needed to attain a desired power is a function of the within-group standard deviation. An estimate of this standard deviation can be obtained from interim data during the trial itself and then used to re-estimate the sample size. Gould and Shih proposed a method, based on the EM algorithm, which they claim produces a maximum likelihood estimate of the within-group standard deviation while preserving the blind, and that the estimate is quite satisfactory. Others, however, have claimed that the method can produce non-unique and/or severe underestimates of the true within-group standard deviation. Here the method is examined thoroughly to resolve the conflicting claims and, via simulation, to assess its validity and the properties of its estimates. The results show that the apparent non-uniqueness of the method's estimate is due to an apparently innocuous alteration that Gould and Shih made to the EM algorithm. When this alteration is removed, the method is valid in that it produces the maximum likelihood estimate of the within-group standard deviation (and also of the within-group means). The estimate, however, is negatively biased and has a large standard deviation. The simulations show that with a standardized difference of 1 or less, which is typical of most clinical trials, the standard deviation of the combined samples, ignoring the groups, is a better estimator despite its obvious positive bias.
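As a minimal sketch of the unaltered EM iteration discussed above (assuming a 1:1 allocation known to the blinded statistician; the function name, initialization, and simulated data are our own choices, not Gould and Shih's), pooled observations are modelled as a 50:50 two-component normal mixture with a common variance:

```python
import numpy as np
from scipy.stats import norm

def blinded_em_sd(x, p=0.5, tol=1e-8, max_iter=1000):
    """EM estimates of the two group means and the common within-group SD
    from blinded (pooled) data, modelled as a two-component normal mixture
    with known mixing proportion p."""
    mu1, mu2 = np.mean(x) - np.std(x), np.mean(x) + np.std(x)  # crude start
    sigma = np.std(x)
    for _ in range(max_iter):
        # E-step: posterior probability that each observation is from group 1
        d1 = p * norm.pdf(x, mu1, sigma)
        d2 = (1 - p) * norm.pdf(x, mu2, sigma)
        w = d1 / (d1 + d2)
        # M-step: weighted group means and a common within-group variance
        mu1_new = np.sum(w * x) / np.sum(w)
        mu2_new = np.sum((1 - w) * x) / np.sum(1 - w)
        sigma_new = np.sqrt(np.mean(w * (x - mu1_new) ** 2
                                    + (1 - w) * (x - mu2_new) ** 2))
        if max(abs(mu1_new - mu1), abs(mu2_new - mu2),
               abs(sigma_new - sigma)) < tol:
            return mu1_new, mu2_new, sigma_new
        mu1, mu2, sigma = mu1_new, mu2_new, sigma_new
    return mu1, mu2, sigma

rng = np.random.default_rng(0)
pooled = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(0.5, 1.0, 100)])
print(blinded_em_sd(pooled))  # sigma estimate shows the negative bias noted above
```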

3.
A version of the multiple decision problem is studied in which the procedure is based only on the current observation and the previous decision. A necessary and sufficient condition for inconsistency of the stepwise maximum likelihood procedure is shown to be the boundedness of the likelihood ratios. In the case of consistency, the (typically slow) rate of convergence to zero of the error probabilities is determined.

4.
A large-sample approximation of the least favorable configuration for a fixed-sample-size selection procedure for negative binomial populations is proposed, along with a normal approximation of the selection procedure. Optimal sample sizes to be drawn from each population, together with bounds on those sample sizes, are tabulated. Sample sizes obtained under the approximate least favorable configuration are compared with those obtained under the exact least favorable configuration. An alternative form of the normal approximation to the probability of correct selection is also presented, and the relation between the required sample size and the number of populations involved is studied.

5.

Engineers who conduct reliability tests need to choose the sample size when designing a test plan. The model parameters and quantiles are the typical quantities of interest. The large-sample procedure relies on the property that the distribution of the t-like quantities is close to the standard normal in large samples. In this paper, we use a new procedure based on both simulation and asymptotic theory to determine the sample size for a test plan. Unlike the complete-data case, the t-like quantities are not in general pivotal when data are time censored. We show, however, that their distribution depends only on the expected proportion failing, and we obtain the distributions by simulation for both complete and time-censored data when the data follow a Weibull distribution. We find that the large-sample procedure usually underestimates the sample size, even when the resulting size is 200 or more. The sample size given by the proposed procedure ensures the requested nominal accuracy and confidence of the estimation whether the test plan yields complete or time-censored data. Useful figures displaying the required sample size for the new procedure are also presented.
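The simulation idea can be illustrated in a few lines. For simplicity the sketch below uses the exponential special case (a Weibull with shape fixed at 1), where the time-censored MLE has a closed form; this simplification, the function names, and the settings are our assumptions. The censoring time is chosen so the expected proportion failing equals p_fail, and the simulated quantiles of the t-like quantity can be compared with the standard normal quantiles used by the large-sample procedure:

```python
import numpy as np

def t_like_dist(n, p_fail, lam=1.0, n_sim=20000, seed=1):
    """Simulated distribution of the t-like quantity for log(lambda) under
    time (Type I) censoring in the exponential case."""
    rng = np.random.default_rng(seed)
    t_c = -np.log(1.0 - p_fail) / lam       # censoring time: P(T <= t_c) = p_fail
    z = np.full(n_sim, np.nan)
    for i in range(n_sim):
        t = rng.exponential(1.0 / lam, n)
        obs = np.minimum(t, t_c)            # observed (possibly censored) times
        r = np.sum(t <= t_c)                # number of failures
        if r == 0:
            continue                        # no failures: estimate undefined
        lam_hat = r / obs.sum()             # censored-data MLE of the rate
        se_log = 1.0 / np.sqrt(r)           # asymptotic SE of log(lam_hat)
        z[i] = (np.log(lam_hat) - np.log(lam)) / se_log
    return z[~np.isnan(z)]

# Compare with the +/-1.645 cutoffs assumed by the large-sample procedure:
print(np.quantile(t_like_dist(n=30, p_fail=0.2), [0.05, 0.95]))
```

The skewness visible in the simulated quantiles at small n and low failure proportions is exactly what leads the large-sample procedure to understate the required sample size.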

6.
In experiments designed to estimate a binomial parameter, sample sizes are often calculated to ensure that the point estimate will be within a desired distance of the true value with sufficiently high probability. Since exact calculations under the standard formulation of this problem can be difficult, "conservative" and/or normal approximations are frequently used. In this paper, some problems with the current formulation are identified, and a modified criterion that leads to some improvement is provided. A simple algorithm that calculates the exact sample sizes under the modified criterion is given, and these sample sizes are compared with those given by the standard approximate criterion, as well as with an exact conservative Bayesian criterion.
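A sketch of the exact calculation behind such criteria: for a given p and half-width d, the coverage P(|p̂ − p| ≤ d) is a finite sum of binomial probabilities, and the sample size is the smallest n meeting the target. The naive upward scan below is our own simplification (exact coverage is not monotone in n, so the paper's modified criterion and algorithm differ in detail):

```python
import math
from scipy.stats import binom

def exact_coverage(n, p, d):
    """P(|X/n - p| <= d) computed exactly from the Binomial(n, p) pmf."""
    lo = max(math.ceil(n * (p - d)), 0)
    hi = min(math.floor(n * (p + d)), n)
    if lo > hi:
        return 0.0
    return binom.cdf(hi, n, p) - (binom.cdf(lo - 1, n, p) if lo > 0 else 0.0)

def exact_sample_size(p, d=0.05, conf=0.95, n_max=10000):
    """Smallest n whose exact coverage at p reaches conf (simple scan)."""
    for n in range(1, n_max + 1):
        if exact_coverage(n, p, d) >= conf:
            return n
    return None

print(exact_sample_size(p=0.5, d=0.05))   # "conservative" worst case p = 0.5
```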

7.
Using historical data for Bayesian sample size determination
Summary. We consider the sample size determination (SSD) problem, a basic yet extremely important aspect of experimental design. Specifically, we deal with the Bayesian approach to SSD, which lets researchers take into account pre-experimental information and uncertainty about unknown parameters. At the design stage, this offers the advantage of removing or mitigating typical drawbacks of classical methods, which can lead to serious miscalculation of the sample size. The leading idea is to choose the minimal sample size that guarantees probabilistic control over the performance of quantities that are derived from the posterior distribution and used for inference on the parameters of interest. We are concerned with the use of historical data (observations from previous similar studies) for SSD. We illustrate how the class of power priors can be fruitfully employed to deal with lack of homogeneity between the historical data and the observations of the upcoming experiment. Such heterogeneity makes it necessary to discount the prior information and to evaluate its effect on the optimal sample size. Some of the most popular Bayesian SSD methods are reviewed, and their use in concert with power priors is illustrated in several medical experimental contexts.
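As a toy illustration of a power prior in the simplest setting (a binomial proportion with a beta baseline prior; the names, design value, and SSD criterion below are our assumptions, not the paper's exact formulation): historical data, x0 successes in n0 trials, enter the prior with their likelihood raised to a discount a0 in [0, 1], which simply scales the historical counts.

```python
import numpy as np
from scipy.stats import beta

def power_prior_posterior(x, n, x0, n0, a0, a=1.0, b=1.0):
    """Posterior of a binomial proportion under a power prior: the Beta(a, b)
    baseline prior times the historical likelihood raised to the power a0."""
    return beta(a + a0 * x0 + x, b + a0 * (n0 - x0) + (n - x))

def min_n(x0, n0, a0, target_sd, p_design=0.3,
          candidates=(25, 50, 100, 200, 400, 800), seed=0):
    """Smallest candidate size whose average posterior SD, with new data
    drawn at the design value p_design, meets target_sd (a crude stand-in
    for the posterior-based SSD criteria reviewed in the paper)."""
    rng = np.random.default_rng(seed)
    for n in candidates:
        xs = rng.binomial(n, p_design, size=2000)
        if np.mean([power_prior_posterior(x, n, x0, n0, a0).std()
                    for x in xs]) <= target_sd:
            return n
    return None

# Heavier discounting (smaller a0) of heterogeneous historical data
# raises the required sample size:
for a0 in (1.0, 0.5, 0.1):
    print(a0, min_n(x0=30, n0=100, a0=a0, target_sd=0.035))
```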

8.
In this paper, we discuss optimum sample size determination for a bounded linex loss function (blinex). The linex loss function is often used when losses are asymmetric, but it has the disadvantage that it can be used only if the moment-generating function of the posterior distribution exists. The blinex loss, by contrast, can always be calculated, and many authors have argued that a bounded loss function is preferable. We obtain the optimum sample size for a number of distributions when the cost of sampling is linear. The form of the posterior risk function does not allow an analytical solution, so simulation is necessary.
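A simulation sketch of the kind of calculation involved. The specific blinex parameterization below (a bounding transform of linex(d) = b(e^{ad} − ad − 1)), the normal model, and the use of the posterior mean as the estimator (rather than the exact Bayes rule under blinex) are all our assumptions:

```python
import numpy as np

def blinex(d, a=1.0, b=1.0, gamma=0.5):
    """An assumed bounded-linex form: linex passed through 1 - 1/(1 + g*linex),
    so the loss lies in [0, 1) for every error d."""
    linex = b * (np.exp(a * d) - a * d - 1.0)
    return 1.0 - 1.0 / (1.0 + gamma * linex)

def optimal_n(cost=2e-4, candidates=range(5, 305, 5), m=20000, seed=3):
    """Minimize simulated preposterior risk plus a linear sampling cost for a
    N(theta, 1) mean with a N(0, 1) prior, estimating theta by the conjugate
    posterior mean n*xbar/(n + 1)."""
    rng = np.random.default_rng(seed)
    best = None
    for n in candidates:
        theta = rng.standard_normal(m)                      # prior draws
        xbar = theta + rng.standard_normal(m) / np.sqrt(n)  # sample mean
        est = n * xbar / (n + 1)                            # posterior mean
        total = blinex(est - theta).mean() + cost * n       # risk + cost
        if best is None or total < best[1]:
            best = (n, total)
    return best

print(optimal_n())   # (optimum n, minimized total risk)
```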

9.
A variable sample size (VSS) scheme that monitors the coefficient of variation (CV) directly, instead of monitoring transformed statistics, is proposed. Optimal chart parameters are computed under two criteria: (i) minimizing the out-of-control ARL (ARL1) and (ii) minimizing the out-of-control average sample size (ASS1), and the performance under the two criteria is compared. The advantages of the proposed chart over VSS charts based on transformed statistics in the existing literature are that it (i) provides an easier alternative, since no transformation is involved, and (ii) requires fewer observations to detect a shift when ASS1 is minimized.
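A toy run-length simulation of the direct-CV version of such a chart: the next sample is small when the current CV falls in the central region, large in the warning region, and the chart signals beyond the control limit. The warning and control coefficients w and k below are illustrative placeholders, not the optimized parameters from the paper:

```python
import numpy as np

def vss_cv_run_length(gamma0=0.10, shift=1.5, n_small=5, n_large=15,
                      w=1.2, k=1.5, max_steps=100000, seed=7):
    """Run length of a one-sided VSS chart on the sample CV, for a process
    whose true CV has shifted to shift*gamma0."""
    rng = np.random.default_rng(seed)
    n_next = n_small
    for t in range(1, max_steps + 1):
        x = rng.normal(100.0, shift * gamma0 * 100.0, n_next)  # shifted CV
        cv = x.std(ddof=1) / x.mean()
        if cv > k * gamma0:
            return t                                 # out-of-control signal
        n_next = n_large if cv > w * gamma0 else n_small
    return max_steps

# Crude ARL1 estimate for this placeholder parameterization:
print(np.mean([vss_cv_run_length(seed=s) for s in range(500)]))
```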

10.
There are several approaches to assessing or demonstrating pharmacokinetic dose proportionality. One statistical method is the traditional ANOVA model, where dose proportionality is evaluated against the bioequivalence limits. A more informative method is the mixed-effects Power Model, where dose proportionality is assessed by applying a decision rule to the estimated slope. Here we propose analytical derivations of sample sizes for various designs (including crossover, incomplete block, and parallel-group designs) to be analysed according to the Power Model.
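A rough normal-approximation version of such a derivation, under simplifying assumptions of our own (a complete crossover in which every subject receives every dose once, and a lognormal within-subject CV): dose proportionality is declared when the 100(1 − 2α)% confidence interval for the Power Model slope lies within 1 ± ln(θ)/ln(r), where r is the ratio of the highest to the lowest dose.

```python
import math
import numpy as np
from scipy.stats import norm

def n_power_model(cv_w, doses, theta=1.25, alpha=0.05, power=0.90, slope=1.0):
    """Approximate number of subjects for a complete-crossover Power Model
    analysis, treating the slope estimate as normal with SE
    sigma_w / sqrt(n * Sxx)."""
    sigma_w = math.sqrt(math.log(1.0 + cv_w ** 2))  # log-scale within-subject SD
    ld = np.log(np.asarray(doses, dtype=float))
    sxx = float(np.sum((ld - ld.mean()) ** 2))      # log-dose spread per subject
    r = max(doses) / min(doses)
    margin = math.log(theta) / math.log(r)          # allowed |slope - 1|
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    n = (z * sigma_w / (math.sqrt(sxx) * (margin - abs(slope - 1)))) ** 2
    return math.ceil(n)

print(n_power_model(cv_w=0.25, doses=[1, 2, 4, 8]))
```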

11.
When nonparametric methods are used to analyse factorial designs with repeated measures, the ANOVA-type rank test has gained popularity for its robustness and appropriate type I error control. This article proposes power and sample size calculation formulas for two scenarios: the nonparametric regression coefficients are known, or they are unknown but a pilot study is available. When a pilot study is available, the formulas require no assumptions about the underlying population distributions. Simulation results confirm the accuracy of the proposed methods, and an STZ rat excisional wound study demonstrates their application.

12.
For time-to-event data, the power of the two-sample logrank test for the comparison of two treatment groups can be greatly influenced by the ratio of the numbers of patients in the treatment groups. Despite the possible loss of power, unequal allocations may be of interest because more data need to be collected on one of the groups or because of considerations related to the acceptability of the treatments to patients. Investigators pursuing such designs may be interested in the cost of the unbalanced design relative to a balanced design with respect to the total number of patients required for the study. We present graphical displays of the sample size adjustment factor, the ratio of the sample size required by an unequal allocation to that required by a balanced allocation, for various survival rates, treatment hazard ratios, and allocation ratios. These displays conveniently summarize information in the literature and provide a useful tool for planning sample sizes for the two-sample logrank test.
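Under Schoenfeld's standard approximation the adjustment factor summarized by those displays has a simple closed form: the required number of events is proportional to 1/(w(1 − w)) for allocation fraction w, so the penalty relative to a balanced design is 0.25/(w(1 − w)). A quick sketch (the formula is standard; the wrapper names are ours):

```python
import math
from scipy.stats import norm

def logrank_events(hr, alpha=0.05, power=0.90, w=0.5):
    """Schoenfeld approximation to the number of events required by the
    two-sample logrank test with allocation fraction w to group 1."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z ** 2 / (w * (1 - w) * math.log(hr) ** 2)

def adjustment_factor(w):
    """Sample size of a w:(1-w) allocation relative to a balanced design."""
    return 0.25 / (w * (1 - w))

for w in (0.5, 2 / 3, 0.75):
    print(f"w = {w:.2f}: events = {logrank_events(0.7, w=w):5.1f}, "
          f"factor = {adjustment_factor(w):.3f}")
```

A 2:1 allocation (w = 2/3) therefore costs about 12.5% more patients than a balanced design, and a 3:1 allocation about 33% more.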

13.
14.
The situation where a certain type of seed needs to be classified into one of three categories according to its germination level can be formulated as a simultaneous test of three statistical hypotheses. In this paper, the problem of determining an optimal sample size for the simultaneous test of several hypotheses on a Bernoulli process is studied within the Bayesian framework, and a solution is obtained for a wide class of prior distributions and a logarithmic loss function.
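To make the setup concrete: with a beta prior on the germination probability and two cutpoints defining the three categories, the Bayes action under logarithmic loss reports the posterior category probabilities, and its expected posterior loss is the entropy of those probabilities. The sketch below computes the preposterior expected loss as a function of n; the cutpoints, the uniform prior, and this entropy formulation are our illustrative assumptions:

```python
import numpy as np
from scipy.stats import beta
from scipy.special import betaln, comb

def expected_log_loss(n, cuts=(0.6, 0.85), a=1.0, b=1.0):
    """Preposterior expected log loss for classifying a Bernoulli parameter
    into the three categories split at the cutpoints, Beta(a, b) prior."""
    total = 0.0
    for x in range(n + 1):
        post = beta(a + x, b + n - x)
        probs = np.diff([0.0, post.cdf(cuts[0]), post.cdf(cuts[1]), 1.0])
        ent = -np.sum(probs[probs > 0] * np.log(probs[probs > 0]))
        # prior predictive P(X = x): the beta-binomial pmf
        marg = comb(n, x) * np.exp(betaln(a + x, b + n - x) - betaln(a, b))
        total += marg * ent
    return total

# Expected loss falls with n; adding a linear sampling cost then yields an
# interior optimum, in the spirit of the paper's formulation.
for n in (10, 25, 50, 100, 200):
    print(n, round(expected_log_loss(n), 4))
```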

15.
A common approach to analysing clinical trials with multiple outcomes is to control the probability, for the trial as a whole, of making at least one incorrect positive finding under any configuration of true and false null hypotheses. Popular approaches are Bonferroni corrections or structured approaches such as closed-test procedures. As is well known, such strategies, which control the family-wise error rate, typically reduce the type I error for some or all of the tests of the various null hypotheses to below the nominal level, and in consequence there is generally a loss of power for individual tests. What is less well appreciated, perhaps, is that depending on the approach and the circumstances, the test-wise loss of power does not necessarily lead to a family-wise loss of power. In fact, it may be possible to increase the overall power of a trial by carrying out tests on multiple outcomes without increasing the probability of making at least one type I error when all null hypotheses are true. We examine two types of problem to illustrate this. Unstructured testing problems arise typically (but not exclusively) when many outcomes are being measured. We consider the case of more than two hypotheses with a Bonferroni approach, assuming for illustration that compound symmetry holds for the correlation of all variables. Using the device of a latent variable, it is easy to show that power is not reduced as the number of variables tested increases, provided the common correlation coefficient is not too high (say, less than 0.75). We then consider structured testing problems, where multiplicity arises from the comparison of more than two treatments rather than more than one measurement. A numerical study again leads to the conclusion that power is not reduced as the number of tested variables increases.
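The latent-variable device is easy to reproduce in a small simulation: k correlated z-statistics with equicorrelation ρ are built from one shared factor, each is tested at the Bonferroni level α/k, and the disjunctive power (the probability of at least one rejection) is recorded. The effect size and group size below are assumed inputs for illustration:

```python
import math
import numpy as np
from scipy.stats import norm

def disjunctive_power(k, rho, delta=0.25, n=100, alpha=0.05,
                      n_sim=200000, seed=42):
    """Probability of rejecting at least one of k two-sided Bonferroni tests
    when every outcome carries standardized effect delta (two groups of n)
    and the z-statistics are equicorrelated with coefficient rho."""
    rng = np.random.default_rng(seed)
    crit = norm.ppf(1 - alpha / (2 * k))      # Bonferroni critical value
    ncp = delta * math.sqrt(n / 2)            # mean of each z-statistic
    u = rng.standard_normal((n_sim, 1))       # shared latent factor
    e = rng.standard_normal((n_sim, k))       # outcome-specific noise
    z = ncp + math.sqrt(rho) * u + math.sqrt(1 - rho) * e
    return np.mean((np.abs(z) > crit).any(axis=1))

for k in (1, 2, 5, 10):
    print(k, round(disjunctive_power(k, rho=0.5), 3))
```

For moderate ρ the disjunctive power climbs with k even though each individual test gets stricter, which is the phenomenon described above.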

16.
A sample size selection procedure for paired comparisons of means is presented that controls the half-width of the confidence intervals while allowing for unequal variances of the treatment means.

17.
In this paper we obtain the sampling properties of a scale-free estimator of "redundancy," an information-theoretic measure used by economists and communications engineers. We then propose a new test of exponentiality based upon the sample redundancy. This test is asymptotically unbiased against a large class of alternatives, and it performs at least as well as a recently proposed test with respect to power and asymptotic relative efficiency.

18.
In stratified otolaryngologic (or ophthalmologic) studies, misleading results may be obtained if the confounding effect and the correlation between responses from the two ears are ignored. A score statistic and a Wald-type statistic are presented for testing equality in a stratified bilateral-sample design, and their corresponding sample size formulae are given. A score statistic for testing the homogeneity of the difference between two proportions, and a score confidence interval for the common difference, are also derived for this design. Empirical results show that (1) the score statistic and the Wald-type statistic based on the dependence model assumption outperform the other statistics in terms of type I error rates; (2) the score confidence interval demonstrates reasonably good coverage; and (3) the sample size formula based on the Wald-type statistic under the dependence model assumption is rather accurate. A real example illustrates the proposed methodologies.

19.
The main purpose of dose-escalation trials is to identify the dose(s) that is/are safe and efficacious for further investigation in later studies. In this paper, we introduce dose-escalation designs that incorporate both dose-limiting toxicities (DLTs) and indicative efficacy responses into the procedure. A flexible nonparametric model is used for the continuous efficacy responses, while a logistic model is used for the binary DLTs. Escalation decisions are based on combining the probabilities of DLTs and the expected efficacy through a gain function. On this basis, we introduce two types of Bayesian adaptive dose-escalation strategies. The first type, called "single objective," aims to identify and recommend a single dose: either the maximum tolerated dose, the highest dose considered safe, or the optimal dose, a safe dose that gives the optimum benefit-risk balance. The second type, called "dual objective," aims to estimate both the maximum tolerated dose and the optimal dose accurately. The recommended doses obtained under these procedures provide information about the safety and efficacy profile of the novel drug to facilitate later studies. We evaluate the strategies via simulations based on an example constructed from a real trial in patients with type 2 diabetes, and the use of stopping rules is assessed. We find that the nonparametric model estimates the efficacy responses well for different underlying true shapes, and that the dual-objective designs identify the two target doses better than the single-objective designs.
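A schematic of the gain-function idea; the specific weighting and the admissibility rule below are our assumptions, not the paper's. Each candidate dose is scored by its expected efficacy penalized by its DLT probability, doses whose DLT risk exceeds a ceiling are excluded, and escalation targets the maximizer:

```python
import numpy as np

def gain(eff_mean, p_dlt, c_tox=0.6, p_max=0.35):
    """Illustrative gain: expected efficacy minus a toxicity penalty, with
    doses above the DLT ceiling p_max ruled inadmissible."""
    g = eff_mean - c_tox * p_dlt
    return np.where(p_dlt > p_max, -np.inf, g)

doses = np.array([1, 2, 4, 8, 16])
eff = np.array([0.10, 0.30, 0.55, 0.60, 0.62])    # e.g. a nonparametric fit
pdlt = np.array([0.02, 0.05, 0.12, 0.28, 0.45])   # e.g. logistic-model fits
print(doses[np.argmax(gain(eff, pdlt))])          # recommended next dose
```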

20.
The purpose of our study is to propose a procedure for determining the sample size at each stage of repeated group significance tests intended to compare the efficacy of two treatments when the response variable is normal. A procedure for reducing the maximum sample size is needed because group sequential tests often require large sample sizes. To reduce the sample size at each stage, we construct repeated confidence boundaries that enable us to identify the more effective of the two treatments at an early stage, and we use recursive numerical-integration formulae to determine the sample size at each intermediate stage. We compare our procedure with Pocock's in terms of maximum sample size and average sample size in simulations.

