Similar Documents
20 similar documents found (search time: 78 ms)
1.
This paper develops clinical trial designs that compare two treatments with a binary outcome. The imprecise beta class (IBC), a class of beta probability distributions, is used in a robust Bayesian framework to calculate posterior upper and lower expectations for treatment success rates using accumulating data. The posterior expectation for the difference in success rates can be used to decide when there is sufficient evidence for randomized treatment allocation to cease. This design is formally related to the randomized play-the-winner (RPW) design, an adaptive allocation scheme where randomization probabilities are updated sequentially to favour the treatment with the higher observed success rate. A connection is also made between the IBC and the sequential clinical trial design based on the triangular test. Theoretical and simulation results are presented to show that the expected sample sizes on the truly inferior arm are lower using the IBC compared with either the triangular test or the RPW design, and that the IBC performs well against established criteria involving error rates and the expected number of treatment failures.
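A minimal sketch of the imprecise-beta-class idea: take all priors Beta(s·t, s·(1−t)) for t in (0, 1), with a fixed learning parameter s (s = 2 here is an illustrative choice, not the paper's). The posterior expectation of the success rate is then (x + s·t)/(n + s), so the lower and upper posterior expectations are obtained at t → 0 and t → 1. The paper's exact class and stopping rule are not reproduced.

```python
def ibc_posterior_bounds(successes, n, s=2.0):
    """Lower/upper posterior expectations of a success rate under the
    imprecise beta class of priors Beta(s*t, s*(1-t)), t in (0, 1)."""
    return successes / (n + s), (successes + s) / (n + s)

def difference_bounds(succ_a, n_a, succ_b, n_b, s=2.0):
    """Conservative bounds on the difference in success rates (A minus B):
    lower bound of A minus upper bound of B, and vice versa."""
    lo_a, up_a = ibc_posterior_bounds(succ_a, n_a, s)
    lo_b, up_b = ibc_posterior_bounds(succ_b, n_b, s)
    return lo_a - up_b, up_a - lo_b
```

Randomization could cease, for instance, once the lower bound on the difference exceeds zero for every prior in the class.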

2.
Two problems need to be solved before proper advice can be given to couples undergoing in vitro fertilization therapy. Firstly, does the long-run success rate really converge to 100%? Secondly, what success rate can be expected within a reasonable, finite number of cycles? We propose a model based on a Weibull distribution. Data on 23,520 couples were used to calculate the cumulative pregnancy rate.
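One way to let the long-run rate converge to something below 100% is a Weibull cumulative curve capped by a "cure" fraction. This is a hedged sketch of that general shape; the parameter values (cure fraction, scale, shape) are illustrative placeholders, not estimates from the 23,520-couple data set.

```python
import math

def cumulative_success(n_cycles, cure=0.85, scale=4.0, shape=1.2):
    """Cumulative pregnancy probability after n_cycles IVF cycles:
    a Weibull CDF scaled by a 'cure' fraction, so the long-run rate
    converges to `cure` rather than to 100%. Parameters are illustrative."""
    return cure * (1.0 - math.exp(-((n_cycles / scale) ** shape)))
```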

3.
The number of success runs in nonhomogeneous Markov-dependent trials is represented as a sum of Bernoulli random variables, and the expected number of runs is obtained from this representation. The distribution of the longest run, and bounds for it, are derived for Markov-dependent trials.
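The Bernoulli-sum representation can be sketched directly: a success run starts at position i exactly when trial i succeeds and trial i−1 (if any) fails, so the run count is a sum of these indicators. The two-state transition probabilities below are illustrative, not from the paper.

```python
import random

def count_success_runs(seq):
    """Number of success runs = sum over i of the Bernoulli indicator
    that a run starts at i (seq[i] == 1 and predecessor, if any, == 0)."""
    return sum(1 for i, x in enumerate(seq)
               if x == 1 and (i == 0 or seq[i - 1] == 0))

def simulate_markov(n, p_start=0.5, p11=0.7, p01=0.4, rng=None):
    """Markov-dependent Bernoulli trials: P(1 | prev=1) = p11,
    P(1 | prev=0) = p01. Illustrative homogeneous special case."""
    rng = rng or random.Random(0)
    seq = [1 if rng.random() < p_start else 0]
    for _ in range(n - 1):
        p = p11 if seq[-1] == 1 else p01
        seq.append(1 if rng.random() < p else 0)
    return seq
```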

4.
Vaccine experiments with a binary outcome typically use a small number of animals for financial and ethical reasons. The choice of a design, characterized by the total number of animals and their allocation to treated and control groups, needs to be based on an assessment of changes in expected size and power as the nominal significance level changes. This paper shows how an analysis of the conditional and expected size and power of the Fisher exact test, given predicted values for the proportions of success in the control and treated groups, can lead to appropriate decision rules.
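The expected size and power of the Fisher exact test can be computed exactly for small designs by averaging the reject/accept decision over all 2×2 tables, weighted by their binomial probabilities under predicted success proportions. The sketch below implements the one-sided conditional (hypergeometric) p-value from scratch; the paper's specific decision rules are not reproduced.

```python
from math import comb

def fisher_one_sided_p(x_t, n_t, x_c, n_c):
    """One-sided Fisher exact p-value (treated success rate > control),
    conditional on the total number of successes m = x_t + x_c."""
    m = x_t + x_c
    denom = comb(n_t + n_c, m)
    return sum(comb(n_t, k) * comb(n_c, m - k)
               for k in range(x_t, min(n_t, m) + 1)
               if 0 <= m - k <= n_c) / denom

def expected_power(p_c, p_t, n_c, n_t, alpha=0.05):
    """Expected (unconditional) power: weight each possible outcome table
    by its binomial probability under predicted proportions (p_c, p_t)."""
    power = 0.0
    for x_t in range(n_t + 1):
        for x_c in range(n_c + 1):
            prob = (comb(n_t, x_t) * p_t**x_t * (1 - p_t)**(n_t - x_t)
                    * comb(n_c, x_c) * p_c**x_c * (1 - p_c)**(n_c - x_c))
            if fisher_one_sided_p(x_t, n_t, x_c, n_c) <= alpha:
                power += prob
    return power
```

Under equal proportions this quantity is the expected size, which the conditional test keeps at or below the nominal level.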

5.
Predictive criteria, including the adjusted squared multiple correlation coefficient, the adjusted concordance correlation coefficient, and the predictive error sum of squares, are available for model selection in the linear mixed model. These criteria all involve some sort of comparison of observed values and predicted values, adjusted for the complexity of the model. The predicted values can be conditional on the random effects or marginal, i.e., based on averages over the random effects. These criteria have not been investigated for model selection success.

We used simulations to investigate selection success rates for several versions of these predictive criteria as well as several versions of Akaike's information criterion and the Bayesian information criterion, and the pseudo F-test. The simulations involved the simple scenario of selection of a fixed parameter when the covariance structure is known.

Several variance–covariance structures were used. For compound symmetry structures, higher success rates for the predictive criteria were obtained when marginal rather than conditional predicted values were used. Information criteria had higher success rates when a certain term (normally left out in SAS MIXED computations) was included in the criteria. Various penalty functions were used in the information criteria, but these had little effect on success rates. The pseudo F-test performed as expected. For the autoregressive with random effects structure, the results were the same except that success rates were higher for the conditional version of the predictive error sum of squares.

Characteristics of the data, such as the covariance structure, parameter values, and sample size, greatly impacted performance of various model selection criteria. No one criterion was consistently better than the others.

6.
In a Bayesian analysis of finite mixture models, parameter estimation and clustering are sometimes less straightforward than might be expected. In particular, the common practice of estimating parameters by their posterior mean, and summarizing joint posterior distributions by marginal distributions, often leads to nonsensical answers. This is due to the so-called 'label switching' problem, which is caused by symmetry in the likelihood of the model parameters. A frequent response to this problem is to remove the symmetry by using artificial identifiability constraints. We demonstrate that this fails in general to solve the problem, and we describe an alternative class of approaches, relabelling algorithms, which arise from attempting to minimize the posterior expected loss under a class of loss functions. We describe in detail one particularly simple and general relabelling algorithm and illustrate its success in dealing with the label switching problem on two examples.
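A deliberately minimal relabelling sketch, under a squared-error loss: permute each MCMC draw's component labels to best match a running reference (the mean of draws relabelled so far). This is a simplification in the spirit of decision-theoretic relabelling, not the specific algorithm the paper details.

```python
from itertools import permutations

def relabel_draws(draws):
    """Relabel each posterior draw (a list of component parameters) by the
    permutation minimising squared distance to a running reference, which
    is updated as the mean of already-relabelled draws."""
    ref = list(draws[0])
    out = []
    for d in draws:
        best = min(permutations(d),
                   key=lambda p: sum((a - b) ** 2 for a, b in zip(p, ref)))
        out.append(list(best))
        n = len(out)
        ref = [(r * (n - 1) + b) / n for r, b in zip(ref, best)]
    return out
```

On label-switched draws such as [1.0, 5.0] followed by [5.1, 0.9], the second draw is flipped back into alignment with the first.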

7.
Just as Bayes extensions of the frequentist optimal allocation design have been developed for the two-group case, we provide a Bayes extension of optimal allocation in the three-group case. We use the optimal allocations derived by Jeon and Hu [Optimal adaptive designs for binary response trials with three treatments. Statist Biopharm Res. 2010;2(3):310–318] and estimate success probabilities for each treatment arm using a Bayes estimator. We also introduce a natural lead-in design that allows adaptation to begin as early in the trial as possible. Simulation studies show that the Bayesian adaptive designs simultaneously increase the power and the expected number of successfully treated patients compared to the balanced design. Compared with the standard adaptive design, the natural lead-in design introduced in this study produces a higher expected number of successes whilst preserving power.
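The Bayes-estimation step can be sketched generically: posterior-mean success estimates under a Beta(a, b) prior for each arm, fed into adaptive allocation weights. The weights below (proportional to the estimated success probabilities) are a placeholder scheme for illustration only; the Jeon-Hu optimal allocation formula itself is not reproduced here.

```python
def allocation_probs(successes, totals, a=1.0, b=1.0):
    """Posterior-mean estimate (a + s) / (a + b + n) of each arm's success
    probability under a Beta(a, b) prior, then allocation probabilities
    proportional to those estimates (illustrative, NOT Jeon-Hu's rule)."""
    means = [(a + s) / (a + b + n) for s, n in zip(successes, totals)]
    z = sum(means)
    return [m / z for m in means]
```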

8.
Information available before unblinding about the success of confirmatory clinical trials is highly uncertain. Current techniques that use point estimates of auxiliary parameters to estimate the expected blinded sample size (i) fail to describe the range of likely sample sizes obtained once the anticipated data are observed, and (ii) fail to adjust to a changing patient population. Sequential MCMC-based algorithms are implemented for the purpose of sample size adjustment. The uncertainty arising from clinical trials is characterized by filtering later auxiliary parameters through their earlier counterparts and employing posterior distributions to estimate sample size and power. The use of approximate expected power estimates to determine the required additional sample size is closely related to techniques employing Simple Adjustments or the EM algorithm. By contrast with these, our proposed methodology provides intervals for the expected sample size using the posterior distribution of auxiliary parameters. Future decisions about additional subjects are better informed by our ability to account for subject response heterogeneity over time. We apply the proposed methodologies to a depression trial. Our proposed blinded procedures should be considered for most studies due to their ease of implementation.

9.
In this paper, we consider statistical inference for the success probability in start-up demonstration tests, in which a unit is rejected when a pre-fixed number of failures is observed before the required number of consecutive successes is achieved for acceptance. Since the expected value of the stopping time is not a monotone function of the unknown parameter, the method of moments is not useful in this situation. Therefore, we discuss two estimation methods for the success probability: (1) maximum likelihood estimation (MLE) via the expectation-maximization (EM) algorithm and (2) Bayesian estimation with a beta prior. We examine the small-sample properties of the MLE and the Bayesian estimator. Finally, we present an example to illustrate the methods of inference discussed here.
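A sketch of the test and the beta-prior estimator. The acceptance/rejection thresholds below (3 consecutive successes, 2 total failures) are illustrative. Because each trial's likelihood contribution is p or 1−p regardless of the stopping rule, the likelihood is proportional to p^s(1−p)^f, so the Beta(a+s, b+f) posterior is exact; the EM-based MLE is not reproduced here.

```python
import random

def run_startup_test(p, k_consec=3, d_fail=2, rng=None):
    """Simulate one start-up demonstration test: accept after k_consec
    consecutive successes, reject after d_fail total failures.
    Returns (successes, failures, accepted)."""
    rng = rng or random.Random(1)
    s = f = streak = 0
    while True:
        if rng.random() < p:
            s += 1; streak += 1
            if streak == k_consec:
                return s, f, True
        else:
            f += 1; streak = 0
            if f == d_fail:
                return s, f, False

def bayes_estimate(s, f, a=1.0, b=1.0):
    """Posterior mean of p under a Beta(a, b) prior, given s successes
    and f failures; valid despite the sequential stopping rule."""
    return (a + s) / (a + b + s + f)
```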

10.
Factors affecting dispersal and recruitment in animal populations will play a prominent role in the dynamics of populations. This is particularly the case for subdivided populations where the dispersal of individuals among patches may lead to local extinction and 'rescue effects'. A long-term observational study carried out in Brittany, France, and involving colour-ringed Black-legged Kittiwakes (Rissa tridactyla) suggested that the reproductive success of conspecifics (or some social correlate) could be one important factor likely to affect dispersal and recruitment. By dispersing from patches where the local reproductive success was low and recruiting to patches where the local reproductive success was high, individual birds could track spatio-temporal variations in the quality of breeding patches (the quality of breeding patches can be affected by different factors, such as food availability, the presence of predators or ectoparasites, which can vary in space and time at different scales). Such an observational study may nevertheless have confounded the role of conspecific reproductive success with the effect of a correlated factor (e.g. the local activities of a predator). In other words, individuals may have been influenced directly by the factor responsible for the low local reproductive success or indirectly by the low success of their neighbours. Thus, an experimental approach was needed to address this question. Estimates of demographic parameters (other than reproductive success) and studies of the response of marked individuals to changes in their environment usually face problems associated with variability in the probability of detecting individuals and with nonindependence among events occurring on a local scale. Further, very few studies on dispersal have attempted to address the causal nature of relationships by experimentally manipulating factors. 
Here we present an experiment designed to test for an effect of local reproductive success of conspecifics on behavioural decisions of individuals regarding dispersal and recruitment. The experiment was carried out on Kittiwakes within a large seabird colony in northern Norway. It involved (i) the colour banding of several hundred birds; (ii) the manipulation (increase/decrease) of the local reproductive success of breeding groups on cliff patches; and (iii) the detailed survey of attendance and activities of birds on these patches. It also involved the manipulation of the nest content of marked individuals breeding within these patches (individuals failing at the egg stage were expected to respond in terms of dispersal to the success of their neighbours). This allowed us to test whether a lower local reproductive success would lower (1) the attendance of breeders at the end of the breeding season; (2) the presence of prospecting birds; and (3) the proportion of failed breeders that came back to breed on the same patch the year after. In this paper, we discuss how we dealt with (I) the use of return rates to infer differences in dispersal rates; (II) the trade-off between sample sizes and local treatment levels; and (III) potential differences in detection probabilities among locations. We also present some results to illustrate the design and implementation of the experiment.

11.
We specify three classes of one-sided and two-sided 1-α confidence intervals with certain monotonicity and symmetry on the confidence limits for the probability of success, the parameter in a binomial distribution. For each class of one-sided confidence intervals the smallest interval, in the sense of set inclusion, is obtained based on direct analysis of the coverage probability functions. A simple necessary and sufficient condition for the existence of the smallest two-sided confidence interval is provided, and the smallest interval is derived if it exists. Thus the proposed intervals are uniformly most accurate, and have uniformly minimum expected length as well.
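The direct analysis of coverage probability functions rests on a simple computation: for a binomial confidence procedure given as an interval per observed count x, the exact coverage at a fixed p is the binomial probability of those x whose interval contains p. A minimal sketch (the paper's interval constructions themselves are not reproduced):

```python
from math import comb

def coverage(intervals, n, p):
    """Exact coverage probability at p of a binomial confidence procedure,
    where intervals[x] = (lower, upper) for each x = 0..n."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x)
               for x in range(n + 1)
               if intervals[x][0] <= p <= intervals[x][1])
```

Comparing two procedures by set inclusion then amounts to checking their limits pointwise while verifying coverage stays at or above 1-α for all p.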

12.
A method is presented for the sequential analysis of experiments involving two treatments to which response is dichotomous. Composite hypotheses about the difference in success probabilities are tested, and covariate information is utilized in the analysis. The method is based upon a generalization of Bartlett’s (1946) procedure for using the maximum likelihood estimate of a nuisance parameter in a Sequential Probability Ratio Test (SPRT). Treatment assignment rules studied include pure randomization, randomized blocks, and an adaptive rule which tends to assign the superior treatment to the majority of subjects. It is shown that the use of covariate information can result in important reductions in the expected sample size for specified error probabilities, and that the use of covariate information is essential for the elimination of bias when adaptive assignment rules are employed. Designs of the type presented are easily generated, as the termination criterion is the same as for a Wald SPRT of simple hypotheses.
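Since the termination criterion is that of a Wald SPRT of simple hypotheses, it is worth sketching that criterion in isolation for a single Bernoulli success probability: accumulate the log-likelihood ratio and stop when it crosses log(β/(1−α)) or log((1−β)/α). The hypotheses p0 = 0.3 vs p1 = 0.7 below are illustrative; the paper's composite-hypothesis, covariate-adjusted generalization is not reproduced.

```python
import math

def sprt(outcomes, p0=0.3, p1=0.7, alpha=0.05, beta=0.2):
    """Wald SPRT for H0: p = p0 vs H1: p = p1 on a stream of 0/1 outcomes.
    Returns (number of trials used, decision)."""
    a = math.log(beta / (1 - alpha))     # lower boundary -> accept H0
    b = math.log((1 - beta) / alpha)     # upper boundary -> accept H1
    llr = 0.0
    for n, x in enumerate(outcomes, 1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr <= a:
            return n, 'accept H0'
        if llr >= b:
            return n, 'accept H1'
    return len(outcomes), 'continue'
```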

13.
We estimate cause–effect relationships in empirical research where exposures are not completely controlled, as in observational studies or with patient non-compliance and self-selected treatment switches in randomized clinical trials. Additive and multiplicative structural mean models have proved useful for this but suffer from the classical limitations of linear and log-linear models when accommodating binary data. We propose the generalized structural mean model to overcome these limitations. This is a semiparametric two-stage model which extends the structural mean model to handle non-linear average exposure effects. The first-stage structural model describes the causal effect of received exposure by contrasting the means of observed and potential exposure-free outcomes in exposed subsets of the population. For identification of the structural parameters, a second-stage 'nuisance' model is introduced. This takes the form of a classical association model for expected outcomes given observed exposure. Under the model, we derive estimating equations which yield consistent, asymptotically normal and efficient estimators of the structural effects. We examine their robustness to model misspecification and construct robust estimators in the absence of any exposure effect. The double-logistic structural mean model is developed in more detail to estimate the effect of observed exposure on the success of treatment in a randomized controlled blood pressure reduction trial with self-selected non-compliance.

14.
The determination of a stopping rule for detecting the time of an increase in the success probability of a sequence of independent Bernoulli trials is discussed. Both success probabilities are assumed unknown. A Bayesian approach is applied: the distribution of the location of the shift in the success probability is assumed geometric, and the success probabilities are assumed to have a known joint prior distribution. The costs involved are penalties for stopping late or early. The nature of the optimal dynamic programming solution is discussed, and a procedure for obtaining a suboptimal stopping rule is determined. The results indicate that the detection procedure is quite effective.
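The core posterior computation can be sketched in a simplified setting. Here the two success probabilities are taken as known for clarity (the paper treats them as unknown with a joint prior), and a geometric(ρ) prior is placed on the change location k, meaning the shift from p0 to p1 occurs just before trial k.

```python
def changepoint_posterior(x, p0, p1, rho=0.1):
    """Posterior over the change location k (trials before index k have
    success probability p0, trials from k onward have p1) for binary data
    x, with a geometric(rho) prior on k. p0, p1 assumed known here."""
    n = len(x)
    post = []
    for k in range(n + 1):
        lik = 1.0
        for i, xi in enumerate(x):
            p = p1 if i >= k else p0
            lik *= p if xi else (1 - p)
        post.append((1 - rho) ** k * rho * lik)
    z = sum(post)
    return [v / z for v in post]
```

A stopping rule could, for example, halt once the posterior probability that the change has already occurred exceeds a cost-determined threshold.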

15.
In this article, we propose a simple method of constructing confidence intervals for a function of binomial success probabilities and for a function of Poisson means. The method involves finding an approximate fiducial quantity (FQ) for the parameters of interest. An FQ for a function of several parameters can be obtained by substitution. For the binomial case, the fiducial approach is illustrated for constructing confidence intervals for the relative risk and the odds ratio. Fiducial inferential procedures are also provided for estimating functions of several Poisson parameters. In particular, the fiducial approach is illustrated for interval estimation of the ratio of two Poisson means and of a weighted sum of several Poisson means. Simple approximations to the distributions of the FQs are also given for some problems. The merits of the procedures are evaluated by comparing them with existing asymptotic methods with respect to coverage probabilities and, in some cases, expected widths. Comparison studies indicate that the fiducial confidence intervals are very satisfactory, and that they are comparable to or better than some available asymptotic methods. The fiducial method is easy to use and is applicable to finding confidence intervals for many commonly used summary indices. Some examples are used to illustrate and compare the results of the fiducial approach with those of other available asymptotic methods.
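The substitution idea can be sketched for the relative risk: draw an approximate fiducial quantity for each binomial p and take percentiles of the ratio of the draws. The Beta(x + 1/2, n − x + 1/2) form used below is one common approximate FQ for a binomial proportion; the paper's exact FQ constructions and approximations are not reproduced.

```python
import random

def fiducial_rr_ci(x1, n1, x2, n2, conf=0.95, draws=20000, seed=0):
    """Monte Carlo fiducial CI for the relative risk p1/p2: substitute
    Beta(x + 1/2, n - x + 1/2) fiducial draws for each proportion and
    take percentiles of the resulting ratios."""
    rng = random.Random(seed)
    ratios = sorted(rng.betavariate(x1 + 0.5, n1 - x1 + 0.5)
                    / rng.betavariate(x2 + 0.5, n2 - x2 + 0.5)
                    for _ in range(draws))
    lo = ratios[int(draws * (1 - conf) / 2)]
    hi = ratios[int(draws * (1 + conf) / 2) - 1]
    return lo, hi
```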

16.
In this paper we consider the mean success run length in a sequence of binary trials. We derive the exact and limiting distributions of the mean success run length for i.i.d. Bernoulli trials. The exact distribution of the corresponding random variable is also derived for a sequence of Markov-dependent Bernoulli trials. In addition, a combinatorial formula for the distribution of any success run statistic defined on Markov-dependent trials is presented.
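For concreteness, the statistic itself is easy to compute from a realized sequence: collect the lengths of maximal blocks of consecutive successes and average them. A minimal sketch (the exact and limiting distributions derived in the paper are not reproduced):

```python
def mean_run_length(seq):
    """Mean length of the maximal success runs in a 0/1 sequence,
    or None if the sequence contains no successes."""
    runs, cur = [], 0
    for x in seq:
        if x == 1:
            cur += 1
        elif cur:
            runs.append(cur)
            cur = 0
    if cur:
        runs.append(cur)
    return sum(runs) / len(runs) if runs else None
```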

17.
The probability of success or average power describes the potential of a future trial by weighting the power with a probability distribution of the treatment effect. The treatment effect estimate from a previous trial can be used to define such a distribution. During the development of targeted therapies, it is common practice to look for predictive biomarkers. The consequence is that the trial population for phase III is often selected on the basis of the most extreme result from phase II biomarker subgroup analyses. In such a case, there is a tendency to overestimate the treatment effect. We investigate whether the overestimation of the treatment effect estimate from phase II is transformed into a positive bias for the probability of success for phase III. We simulate a phase II/III development program for targeted therapies. This simulation allows us to investigate selection probabilities and to compare the estimated with the true probability of success. We consider the estimated probability of success with and without subgroup selection. Depending on the true treatment effects, there is a negative bias without selection because of the weighting by the phase II distribution. In comparison, selection increases the estimated probability of success. Thus, selection does not lead to a bias in the probability of success if the underestimation due to the phase II distribution and the overestimation due to selection cancel each other out. We recommend performing similar simulations in practice to get the necessary information about the risks and chances associated with such subgroup selection designs.
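The selection effect at the heart of this argument is easy to demonstrate in miniature: if two subgroup estimates are unbiased for the same true effect, the more extreme of the two is biased upward (for two i.i.d. normals the expected maximum is μ + σ/√π). This toy Monte Carlo is illustrative only, not the paper's phase II/III simulation; the effect size and standard error are placeholders.

```python
import random

def selection_bias_demo(true_effect=0.3, se=0.2, reps=20000, seed=0):
    """Average of max(two unbiased N(true_effect, se^2) subgroup estimates);
    exceeds true_effect by about se/sqrt(pi) (~0.113 here)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        total += max(rng.gauss(true_effect, se), rng.gauss(true_effect, se))
    return total / reps
```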

18.
In this article, the expected total costs of three kinds of quality cost functions for the one-sided sequential screening procedure based on the individual misclassification error are obtained, where the expected total cost is the sum of the expected cost of inspection, the expected cost of rejection, and the expected cost of quality. The computational formulas for the three kinds of expected total costs are derived when k screening variables are allocated into r stages. The optimal allocation combination is determined based on the criterion of minimum expected total cost. Finally, we give an example to illustrate the selection of the optimal allocation combination for the sequential screening procedure.

19.
This paper describes the distinction between the concept of statistical power and the probability of getting a successful trial. While one can choose a very high statistical power to detect a certain treatment effect, high statistical power does not necessarily translate to a high success probability if the treatment effect to detect is based on the perceived ability of the drug candidate. The crucial factor hinges on our knowledge of the drug's ability to deliver the effect used to power the study. The paper discusses a framework to calculate the 'average success probability' and demonstrates how uncertainty about the treatment effect could affect the average success probability for a confirmatory trial. It complements earlier work by O'Hagan et al. (Pharmaceutical Statistics 2005; 4:187-201) published in this journal. Computer codes to calculate the average success probability are included.
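For a one-sided z-test with known standard error, the average success probability has a standard closed form: if the true effect has a N(δ0, τ²) prior, the trial estimate is marginally N(δ0, se² + τ²), so the probability of crossing the significance boundary is a single normal probability. This sketch is a generic illustration of the concept, not the paper's computer code.

```python
import math
from statistics import NormalDist

def average_success_probability(delta0, tau, se, alpha=0.025):
    """Average success probability of a one-sided z-test: power averaged
    over a N(delta0, tau^2) prior on the true effect. With tau = 0 this
    reduces to ordinary power at effect delta0."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha)
    return nd.cdf((delta0 - z * se) / math.hypot(se, tau))
```

Note how prior uncertainty (τ > 0) pulls a high nominal power back toward 50%, which is exactly the power/success-probability gap the paper describes.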

20.
This paper considers the expected experiment times for Weibull-distributed lifetimes under type II progressive censoring, with the numbers of removals being random. A formula to compute the expected experiment time is given. A detailed numerical study of this expected time is carried out for different combinations of model parameters. Furthermore, the ratio of the expected experiment time under this type of progressive censoring to that under complete sampling is studied.
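The quantity can be approximated by simulation rather than the paper's formula: run the censoring scheme many times and average the time of the m-th observed failure. The removal mechanism below (a uniform random count up to `max_remove` after each failure, capped so m failures remain feasible) is an illustrative choice, not the paper's removal distribution.

```python
import random

def expected_time_progressive(n, m, scale=1.0, shape=1.5, max_remove=2,
                              reps=2000, seed=0):
    """Monte Carlo estimate of the expected experiment time under type II
    progressive censoring with random removals: n Weibull(scale, shape)
    units, stop at the m-th observed failure, randomly withdrawing up to
    max_remove survivors after each of the first m-1 failures."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        alive = [rng.weibullvariate(scale, shape) for _ in range(n)]
        t = 0.0
        for i in range(m):
            t = min(alive)          # next failure among surviving units
            alive.remove(t)
            if i < m - 1:
                # cap removals so at least m-1-i units remain on test
                r = min(rng.randint(0, max_remove),
                        len(alive) - (m - 1 - i))
                for _ in range(max(r, 0)):
                    alive.remove(rng.choice(alive))
        total += t
    return total / reps
```

With max_remove = 0 this reduces to ordinary type II censoring, and the ratio to the complete-sample time (m = n) can be estimated directly.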
