Similar Articles
20 similar articles found (search time: 31 ms)
1.
A generalization of Anderson's sequential probability ratio test procedure is proposed in which the continuation region is bounded by a pair of converging lines up to a certain stage of the experiment, and thereafter by another pair of converging lines until the procedure is truncated at a predetermined stage. The OC and ASN functions are derived. For certain parameter values the proposed procedure attains lower average sample numbers than those attainable by any other known procedure.
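The abstract gives no formulas, but the shape of such a procedure can be sketched for a Bernoulli parameter. Everything below — the hypothesized rates, the intercept `a`, the slopes `c1` and `c2`, and the switch and truncation stages — is illustrative and not taken from the paper:

```python
import math

def converging_sprt(xs, p0=0.4, p1=0.6, a=4.0, c1=0.05, c2=0.15,
                    switch_n=30, trunc_n=60):
    """Truncated Bernoulli SPRT whose continuation region is bounded by one
    pair of converging lines up to stage `switch_n`, then by a steeper pair
    until truncation at stage `trunc_n`."""
    llr = 0.0
    for n, x in enumerate(xs, start=1):
        # log-likelihood ratio increment for a Bernoulli observation
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        c = c1 if n <= switch_n else c2       # slope switches mid-experiment
        upper, lower = a - c * n, -a + c * n  # converging linear boundaries
        if llr >= upper:
            return "reject H0", n
        if llr <= lower:
            return "accept H0", n
        if n >= trunc_n:                      # forced decision at truncation
            return ("reject H0" if llr > 0 else "accept H0"), n
    return "continue", len(xs)
```

With all-success data the log-likelihood ratio climbs at a constant rate and meets the shrinking upper boundary after a few observations; the steeper second pair of lines narrows the continuation region faster as truncation approaches.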

2.
A sequential method for approximating a general permutation test (SAPT) is proposed and evaluated. Permutations are randomly generated from some set G, and a sequential probability ratio test (SPRT) is used to determine whether an observed test statistic falls sufficiently far in the tail of the permutation distribution to warrant rejecting some hypothesis. An estimate and bounds on the power function of the SPRT are used to find bounds on the effective significance level of the SAPT. Guidelines are developed for choosing parameters in order to obtain a desired significance level and minimize the number of permutations needed to reach a decision. A theoretical estimate of the average number of permutations under the null hypothesis is given, along with simulation results demonstrating the power and average number of permutations for various alternatives. The sequential approximation retains the generality of the permutation test, while avoiding the computational complexities that arise in attempting to compute the full permutation distribution exactly.
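A minimal sketch of this idea, in the style of Besag–Clifford sequential Monte Carlo testing: run a Wald SPRT on the Bernoulli indicators 1{T(perm) ≥ T(obs)} to decide whether the permutation p-value lies below α − ε or above α + ε. The parameter values and the two-sample mean-difference setting are illustrative assumptions, not the paper's:

```python
import math
import random

def sequential_permutation_test(stat, x, y, alpha=0.05, eps=0.01,
                                err0=0.05, err1=0.05, max_perms=10000,
                                seed=1):
    """SPRT on the indicators 1{T(perm) >= T(obs)}: decide between
    H0: p = alpha + eps (do not reject) and H1: p = alpha - eps (reject),
    where p is the true permutation p-value."""
    rng = random.Random(seed)
    t_obs = stat(x, y)
    pooled = list(x) + list(y)
    p0, p1 = alpha + eps, alpha - eps
    upper = math.log((1 - err1) / err0)   # cross -> conclude p <= p1: reject
    lower = math.log(err1 / (1 - err0))   # cross -> conclude p >= p0: retain
    llr = 0.0
    for n in range(1, max_perms + 1):
        rng.shuffle(pooled)               # one random permutation
        exceed = stat(pooled[:len(x)], pooled[len(x):]) >= t_obs
        llr += (math.log(p1 / p0) if exceed
                else math.log((1 - p1) / (1 - p0)))
        if llr >= upper:
            return "reject H0", n
        if llr <= lower:
            return "do not reject H0", n
    return "inconclusive", max_perms
```

For well-separated samples almost no permuted statistic exceeds the observed one, so the log-likelihood ratio drifts steadily to the rejection boundary after a modest number of permutations.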

3.
In this paper, classical optimum tests for symmetry of the two-piece normal distribution are derived. The uniformly most powerful one-sided test for the skewness parameter is obtained when the location and scale parameters are known, and is compared with the sequential probability ratio test. An ad-hoc test for symmetry and a large-sample likelihood ratio test for symmetry can be found in the literature for this distribution. In this paper, we derive the exact likelihood ratio test for symmetry when the location parameter is known. The exact power of the test is evaluated for different sample sizes.

4.
We consider methods of computing exactly the probability of “acceptance” and the “average sample size needed” for the sequential probability ratio test (SPRT) and likewise the newer “2-SPRT,” concerning the value of a Bernoulli parameter. The methods permit one to approximate, iteratively, the desired operating characteristics for the test.
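One way to compute these quantities exactly for the Bernoulli SPRT is a forward recursion over the number of successes among sample paths still inside the continuation region; since each path's log-likelihood ratio depends only on (n, k), the state space stays small. This is a sketch under assumed Wald boundaries, not necessarily the iterative scheme of the paper:

```python
import math

def bernoulli_sprt_oc_asn(p, p0, p1, alpha=0.05, beta=0.05, max_n=10000):
    """Exact probability of accepting H0 (OC) and average sample number (ASN)
    of the Wald SPRT for a Bernoulli parameter, by forward recursion over the
    success count of paths still inside the continuation region."""
    u = math.log(p1 / p0)                  # llr step for a success
    v = math.log((1 - p1) / (1 - p0))      # llr step for a failure
    upper = math.log((1 - beta) / alpha)   # crossing -> accept H1
    lower = math.log(beta / (1 - alpha))   # crossing -> accept H0
    cont = {0: 1.0}                        # successes so far -> probability
    oc = asn = 0.0
    for n in range(1, max_n + 1):
        nxt = {}
        for k, mass in cont.items():
            for dk, pr in ((1, p), (0, 1.0 - p)):
                k2, m = k + dk, mass * pr
                llr = k2 * u + (n - k2) * v   # exact value from integers
                if llr >= upper:
                    asn += n * m              # stopped: accept H1
                elif llr <= lower:
                    asn += n * m              # stopped: accept H0
                    oc += m
                else:
                    nxt[k2] = nxt.get(k2, 0.0) + m
        cont = nxt
        if sum(cont.values()) < 1e-15:     # essentially all mass absorbed
            break
    return oc, asn
```

Computing the boundary-crossing indicator from the integer pair (n, k) rather than an accumulated float keeps the lattice of interior states exact, so identical paths merge and the dictionary never grows beyond the width of the continuation region.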

5.
A two-sample partially sequential probability ratio test (PSPRT) is considered for the two-sample location problem with one sample fixed and the other sequential. Observations are assumed to come from two normal populations with equal and known variances. Asymptotically in the fixed sample size, the PSPRT is a truncated Wald one-sample sequential probability ratio test. Brownian motion approximations for boundary-crossing probabilities and expected sequential sample size are obtained. These calculations are compared to values obtained by Monte Carlo simulation.

6.
Given an inverse Gaussian distribution I(μ, a²μ) with known coefficient of variation a, the hypothesis H0: μ = μ0 is tested against H1: μ = μ1 using the sequential probability ratio test. The maximum of the expected sample number is shown to occur when μ is approximately equal to the geometric mean of μ0 and μ1, and this maximum value is shown to depend on μ0 and μ1 only through their ratio. It is observed that the test can be used to discriminate between two one-sided hypotheses.

7.
In this paper, we derive sequential conditional probability ratio tests to compare diagnostic tests without distributional assumptions on test results. The test statistics in our method are nonparametric weighted areas under the receiver-operating characteristic curves. With the new method, a decision to stop the diagnostic trial early is unlikely to be reversed should the trial continue to the planned end. The conservatism of this approach, which imposes more conservative stopping boundaries during the course of the trial, is especially appealing for diagnostic trials since the endpoint is not death. In addition, the maximum sample size of our method is not greater than that of a fixed-sample test with a similar power function. Simulation studies are performed to evaluate the properties of the proposed sequential procedure. We illustrate the method using data from a thoracic aorta imaging study.

8.
In this paper, control charts for the mean of a multivariate Gaussian process are considered. Using the generalized likelihood ratio approach and the sequential probability ratio test under an additional constraint on the magnitude of the change, various types of CUSUM control charts are derived. It is analyzed under which conditions these schemes are directionally invariant. These charts are compared with several other control schemes proposed in the literature. The performance of the charts is studied based on the maximum average delay.

9.
Phase II trials evaluate whether a new drug or therapy is worth pursuing further, or whether certain treatments are feasible. A typical phase II study is a single-arm (open-label) trial with a binary clinical endpoint (response to therapy). Although many oncology phase II clinical trials are designed with a two-stage procedure, multi-stage designs for phase II cancer clinical trials are now feasible due to increased capability of data capture. Such designs adjust for multiple analyses and variations in analysis time, and provide greater flexibility, such as minimizing the number of patients treated on an ineffective therapy and identifying the minimum number of patients needed to evaluate whether the trial warrants further development. In most NIH-sponsored studies, the early stopping rule is determined so that the number of patients treated on an ineffective therapy is minimized. In pharmaceutical trials, it is also important to know as early as possible whether the trial is highly promising and how likely an early conclusion is to be sustained. Although various methods are available to address these issues, practitioners often use disparate methods for different issues and do not realize that a single unified method exists. This article shows how to address the multiple needs of a phase II trial with a unified approach via a fully sequential procedure, the sequential conditional probability ratio test. We show that the fully sequential program can be used to derive an optimized, efficient multi-stage design for either low or high activity, to identify the minimum number of patients required to assess whether a new drug warrants further study, and to adjust for unplanned interim analyses. In addition, we calculate a probability of discordance, namely the probability that the statistical test would reach the opposite conclusion should the trial continue to its planned end, usually the sample size of a fixed-sample design. This probability can be used to aid decision making in a drug development program. All computations are based on the exact binomial distribution.
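The exact-binomial flavor of such calculations can be illustrated with the simplest ingredient: the probability that a single-arm trial stops early for futility at a first-stage look. The boundary values below are arbitrary illustrations; the SCPRT boundaries themselves are more involved:

```python
from math import comb

def binom_pmf(k, n, p):
    """Exact binomial point probability P(X = k) for X ~ Bin(n, p)."""
    return comb(n, k) * p**k * (1.0 - p)**(n - k)

def early_stop_prob(n1, r1, p):
    """Probability of stopping for futility at the first look: at most r1
    responses among the first n1 patients, when the true response rate is p."""
    return sum(binom_pmf(k, n1, p) for k in range(r1 + 1))
```

A useful sanity check is monotonicity: the lower the true response rate, the more likely the trial is to stop early on an ineffective therapy.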

10.
In this article, we systematically study the optimal truncated group sequential test on binomial proportions. Through analysis of the cost structure, average test cost is introduced as a new optimality criterion. According to the new criterion, optimal tests are defined over the design parameters, including the boundaries, success discriminant value, stage sample vector, stage size, and maximum sample size. Since the computation time for finding optimal designs by exhaustive search is intolerably long, a group sequential sample space sorting method and accompanying procedures are developed to find near-optimal designs. In comparison with the international standard ISO 2859-1, the truncated group sequential designs proposed in this article reduce average test costs by around 20%.

11.
We consider the problem of testing which of two normally distributed treatments has the larger mean when the tested populations incorporate a covariate. From the class of procedures using the invariant sequential probability ratio test, we derive an optimal allocation that minimizes, in a continuous-time setting, the expected sampling costs. Simulations show that this procedure reduces the number of observations drawn from the costlier treatment and categories while maintaining an overall sample size close to that of the “pairwise” procedure. A randomized trial example is given.

12.
The authors propose a class of statistics based on Rao's score for the sequential testing of composite hypotheses comparing two treatments (populations). Asymptotic approximations of the statistics lead them to propose sequential tests and to derive their monitoring boundaries. As special cases, they construct sequential versions of the two‐sample t‐test for normal populations and two‐sample z‐score tests for binomial populations. The proposed algorithms are simple and easy to compute, as no numerical integration is required. Furthermore, the user can analyze the data at any time regardless of how many inspections have been made. Monte Carlo simulations allow the authors to compare the power and the average stopping time (also known as average sample number) of the proposed tests to those of nonsequential and group sequential tests. A two‐armed comparative clinical trial in patients with adult leukemia allows them to illustrate the efficiency of their methods in the case of binary responses.

13.
A rank test based on the number of ‘near-matches’ among within-block rankings is proposed for stochastically ordered alternatives in a randomized block design with t treatments and b blocks. The asymptotic relative efficiency of this test with respect to the Page test is computed as the number of blocks increases to infinity. A sequential analog of the above test procedure is also considered. A repeated significance test procedure is developed, and the average sample number is computed asymptotically under the null hypothesis as well as under a sequence of contiguous alternatives.

14.
With the rapid development of computing technology, Bayesian statistics has gained increasing attention in various areas of public health. However, the full potential of Bayesian sequential methods applied to vaccine safety surveillance has not yet been realized, despite the acknowledged practical benefits and philosophical advantages of Bayesian statistics. In this paper, we describe how sequential analysis can be performed in a Bayesian paradigm in the field of vaccine safety. We compare the performance of a frequentist sequential method, specifically the maximized sequential probability ratio test (MaxSPRT), and a Bayesian sequential method using simulations and a real-world vaccine safety example. Performance is evaluated using three metrics: false positive rate, false negative rate, and average earliest time to signal. Depending on the background rate of adverse events, the Bayesian sequential method can significantly improve the false negative rate and decrease the earliest time to signal. We consider the proposed Bayesian sequential approach a promising alternative for vaccine safety surveillance.

15.
An overview of risk-adjusted charts
Summary. The paper provides an overview of risk-adjusted charts, with examples based on two data sets: the first consisting of outcomes following cardiac surgery and patient factors contributing to the Parsonnet score; the second being age–sex-adjusted death rates per year under a single general practitioner. Charts presented include the cumulative sum (CUSUM), the resetting sequential probability ratio test, the sets method and the Shewhart chart. Comparisons between the charts are made. Estimation of the process parameter and two-sided charts are also discussed. The CUSUM is found to be the least efficient, under the average run length (ARL) criterion, of the resetting sequential probability ratio test class of charts, but the ARL criterion is thought not to be sensible for comparisons within that class. An empirical comparison of the sets method and the CUSUM, for binary data, shows that the sets method is more efficient when the in-control ARL is small, and more efficient for a slightly larger range of in-control ARLs when the change in parameter being tested for is larger. The Shewhart p-chart is found to be less efficient than the CUSUM even when the change in parameter being tested for is large.
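As an illustration of the CUSUM member of this family, here is a minimal Bernoulli CUSUM for adverse outcomes. In a genuinely risk-adjusted chart the in-control and out-of-control rates would vary per patient with the Parsonnet score; they are held fixed here for clarity, and all numeric values are illustrative:

```python
import math

def bernoulli_cusum(outcomes, p0=0.02, p1=0.04, h=4.5):
    """Bernoulli CUSUM: accumulate the log-likelihood ratio of p1 vs p0 for
    each outcome (1 = adverse), reset at zero, signal when the sum reaches h,
    then restart the chart."""
    w1 = math.log(p1 / p0)               # weight for an adverse outcome
    w0 = math.log((1 - p1) / (1 - p0))   # weight for a good outcome
    s, signals = 0.0, []
    for t, y in enumerate(outcomes, start=1):
        s = max(0.0, s + (w1 if y else w0))
        if s >= h:
            signals.append(t)            # record the signal time
            s = 0.0
    return signals
```

Because good outcomes carry a small negative weight while adverse outcomes carry a large positive one, the statistic hugs zero while the process is in control and climbs quickly during a run of adverse events.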

16.
This paper gives a method for decomposing many sequential probability ratio tests into smaller independent components called “modules”. A function of certain characteristics of the modules can be used to determine the asymptotically most efficient of a set of statistical tests in which α, the probability of a type I error, equals β, the probability of a type II error. The same test is seen also to give the asymptotically most efficient of the corresponding set of tests in which α is not equal to β. The “module” method is used to explain the super-efficiency of the play-the-winner and play-the-loser rules in two-sample binomial sampling. An example showing how complex cases can be analysed numerically using this method is also given.

17.
A new multivariate approach to quality control is presented. On the basis of sequential probability ratio tests, a multivariate cumulative sum chart is derived. Using an approximation to the noncentral χ² distribution, a linear decision rule is obtained.

18.
In this paper, a CUSUM procedure is given for monitoring for a decrease in the variance (process improvement), as well as a two-sided CUSUM which monitors for both increases and decreases in the variance. The observations are assumed to be independent and normally distributed. The procedure is based on the logarithm of the likelihood ratio of the probability density functions under the two competing hypotheses. Formulae are given that approximate the average run length of the CUSUM procedure for detecting an increase (or decrease) in the variance of a normal distribution. These formulae, when corrected for the overshoot from the boundary, provide a very accurate approximation.
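The one-sided chart for a variance increase can be sketched directly from this recipe: accumulate the per-observation log-likelihood ratio of N(0, σ1²) versus N(0, σ0²), reset at zero, and signal at a threshold h. All numeric values below are illustrative, and the decrease-side and two-sided variants follow the same pattern with the roles of σ0 and σ1 swapped:

```python
import math

def variance_cusum(xs, sigma0=1.0, sigma1=1.5, h=5.0):
    """One-sided CUSUM for an increase in the variance of N(0, sigma^2) data:
    accumulate log f1(x)/f0(x), reset at zero, signal when the sum reaches h."""
    c = math.log(sigma0 / sigma1)                  # constant part of the llr
    k = 0.5 * (1.0 / sigma0**2 - 1.0 / sigma1**2)  # coefficient of x^2
    s = 0.0
    for t, x in enumerate(xs, start=1):
        s = max(0.0, s + c + k * x * x)  # per-observation log-likelihood ratio
        if s >= h:
            return t                     # first signal time
    return None                          # no signal
```

Observations with |x| near zero pull the statistic down (the constant term c is negative), while large |x| pushes it up through the x² term, so the chart reacts only to a sustained inflation of the spread.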

19.
The problem of selecting the best population from among a finite number of populations in the presence of uncertainty arises in many scientific investigations and has been studied extensively; many selection procedures have been derived for different selection goals. However, most of these procedures, being frequentist in nature, do not indicate how to incorporate the information in a particular sample to give a data-dependent measure of the correct selection achieved for that sample. They often assign the same decision and probability of correct selection to two different sample values, one of which may intuitively seem much more conclusive than the other. The methodology of conditional inference offers an approach that achieves both frequentist interpretability and a data-dependent measure of conclusiveness. By partitioning the sample space into a family of subsets, the achieved probability of correct selection is computed by conditioning on which subset the sample falls in. In this paper, the partition considered is the so-called continuum partition, while the selection rules are both fixed-size and random-size subset selection rules. Under the distributional assumption of monotone likelihood ratio, results on the least favourable configuration and α-correct selection are established. These results are not only useful in themselves but are also used to design a new sequential procedure with elimination for selecting the best of k binomial populations. Comparisons between this new procedure and some other sequential selection procedures, with regard to total expected sample size and some risk functions, are carried out by simulations.

20.
The two-parameter Gompertz distribution has recently been introduced as a lifetime model for reliability inference. In this paper, the Gompertz distribution is proposed for the baseline lifetimes of components in a composite system. In this composite system, failure of a component induces an increased load on the surviving components and thus increases the component hazard rate via a power-trend process. Point estimates of the composite system parameters are obtained by the method of maximum likelihood. Interval estimates of the baseline survival function are obtained from the maximum-likelihood estimator via a bootstrap percentile method. Two parametric bootstrap procedures are proposed to test whether the hazard rate function changes with the number of failed components. Intensive simulations are carried out to evaluate the performance of the proposed estimation procedure.
