Similar Articles (20 results)
1.
We propose flexible group sequential designs using type I and type II error probability spending functions. The proposed designs preserve the overall significance level and power and allow the repeated testing to be performed on a flexible schedule. Computational methods are described. An example from a mega clinical trial is provided.
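The error-spending idea can be illustrated with two standard spending functions (generic Pocock-type and O'Brien–Fleming-type forms in the style of Lan and DeMets, assumed here for illustration; they are not necessarily the functions used in the paper), evaluated on a flexible look schedule:

```python
import math

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_ppf(p):
    # invert norm_cdf by bisection (ample accuracy for illustration)
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def pocock_spend(t, alpha=0.05):
    # Pocock-type spending: alpha * ln(1 + (e - 1) t)
    return alpha * math.log(1.0 + (math.e - 1.0) * t)

def obf_spend(t, alpha=0.05):
    # O'Brien-Fleming-type spending: 2 - 2 * Phi(z_{alpha/2} / sqrt(t))
    z = norm_ppf(1.0 - alpha / 2.0)
    return 2.0 * (1.0 - norm_cdf(z / math.sqrt(t)))

# a flexible schedule: information fractions need not be equally spaced
looks = [0.3, 0.55, 0.8, 1.0]
spent = [obf_spend(t) for t in looks]
increments = [spent[0]] + [spent[i] - spent[i - 1] for i in range(1, len(spent))]
# both functions spend the full alpha = 0.05 by t = 1
```

The increments give the type I error available at each look; actual group sequential boundaries would then be solved for under the joint distribution of the sequential test statistics, which is beyond this sketch.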

2.
One-sided two-stage sign tests that allow for both first-stage acceptance and rejection of the null hypothesis are presented. The decision constants are selected so that the power curve of the two-stage test matches the power curve of the usual sign test at three specified points. These tests are compared with two-stage sign tests recently presented by Colton and McPherson (1976), which allow only first-stage rejection of the null hypothesis.
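As an illustration of the ingredient being matched (not the two-stage procedure itself), the power curve of the usual one-sided sign test is a binomial tail probability; the sample size and alternative below are arbitrary choices for the sketch:

```python
from math import comb

def binom_tail(n, p, c):
    # P(S >= c) for S ~ Binomial(n, p)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c, n + 1))

def sign_test_critical(n, alpha=0.05):
    # smallest c with size P(S >= c | p = 0.5) <= alpha
    c = n
    while c > 0 and binom_tail(n, 0.5, c - 1) <= alpha:
        c -= 1
    return c

n = 20
c = sign_test_critical(n)          # reject H0: p = 0.5 when S >= c
power = binom_tail(n, 0.75, c)     # power at the alternative p = 0.75
```

The two-stage constants would be chosen so that the composite power curve agrees with this single-stage curve at three such alternatives.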

3.
The Committee for Medicinal Products for Human Use (CHMP) is currently preparing a guideline on 'methodological issues in confirmatory clinical trials with flexible design and analysis plan'. PSI (Statisticians in the Pharmaceutical Industry) sponsored a meeting of pharmaceutical statisticians with an interest in the area to share experiences and identify potential opportunities for adaptive designs in late-phase clinical drug development. This article outlines the issues raised, resulting discussions and consensus views reached. Adaptive designs have potential utility in late-phase clinical development. Sample size re-estimation seems to be valuable and widely accepted, but should be made independent of the observed treatment effect where possible. Where unblinding is necessary, careful consideration needs to be given to preserving the integrity of the trial. An area where adaptive designs can be particularly beneficial is to allow dose selection in pivotal trials via adding/dropping treatment arms; for example, combining phase II and III of the drug development program. The more adaptations made during a late-phase clinical trial, the less likely that the clinical trial would be considered as a confirmatory trial. In all cases it would be advisable to consult with regulatory agencies at the protocol design stage. All involved should remain open to scientifically valid opportunities to improve drug development.

4.
We show how mutually utility independent hierarchies, which weigh the various costs of an experiment against benefits expressed through a mixed Bayes linear utility representing the potential gains in knowledge from the experiment, provide a flexible and intuitive methodology for experimental design which remains tractable even for complex multivariate problems. A key feature of the approach is that we allow imprecision in the trade-offs between the various costs and benefits. We identify the Pareto optimal designs under the imprecise specification and suggest a criterion for selecting between such designs. The approach is illustrated with respect to an experiment related to the oral glucose tolerance test.

5.
Optimal three-stage designs with equal sample sizes at each stage are presented and compared to fixed sample designs, fully sequential designs, designs restricted to use the fixed sample critical value at the final stage, and to modifications of other group sequential designs previously proposed in the literature. Typically, the greatest savings realized with interim analyses are obtained by the first interim look. More than 50% of the savings possible with a fully sequential design can be realized with a simple two-stage design. Three-stage designs can realize as much as 75% of the possible savings. Without much loss in efficiency, the designs can be modified so that the critical value at the final stage equals the usual fixed sample value while maintaining the overall level of significance, alleviating some potential confusion should a final stage be necessary. Some common group sequential designs, modified to allow early acceptance of the null hypothesis, are shown to be nearly optimal in some settings while performing poorly in others. An example is given to illustrate the use of several three-stage plans in the design of clinical trials.

6.
We consider two-stage adaptive designs for clinical trials where data from the two stages are dependent. This occurs when additional data are obtained from patients during their second-stage follow-up. While the proposed flexible approach allows modifications of trial design, sample size, or statistical analysis using the first-stage data, there is no need for a complete prespecification of the adaptation rule. Methods are provided for an adaptive closed testing procedure, for calculating overall adjusted p-values, and for obtaining unbiased estimators and confidence bounds for parameters that are invariant to modifications. A motivating example is used to illustrate these methods.

7.
One of the variance reduction methods in simulation experiments is negative correlation induction, in particular the use of antithetic variates. The simultaneous use of antithetic variates and an acceptance–rejection method has been studied in several papers, where the induced negative correlation has been calculated. In this study, the factors affecting the induced negative correlation rate are addressed. To do this, the beta distribution is first selected to generate negatively correlated random variates using the acceptance–rejection method. The effects of both the efficiency of the acceptance–rejection method and the initial negative correlation rate on the induced negative correlation are explored. Results show that both factors have significant effects; therefore, a combination of both can lead to algorithms better able to generate negative correlations.
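A minimal sketch of the mechanism under study: acceptance–rejection for a Beta(2,2) target from a Uniform(0,1) proposal, driven by a shared uniform stream and its antithetic counterpart 1 − u (the distribution, envelope constant and sample sizes are illustrative assumptions, not the paper's setup):

```python
import random

def beta22_pdf(x):
    # Beta(2,2) density; maximum value 1.5 at x = 0.5
    return 6.0 * x * (1.0 - x)

def antithetic_beta_pair(rng, M=1.5):
    """One (X, X') pair: each marginal is Beta(2,2) by acceptance-
    rejection from Uniform(0,1); X' re-uses the same uniform stream
    through u -> 1 - u, inducing negative correlation with X."""
    stream = []                      # shared uniforms, grown on demand
    def u(i):
        while len(stream) <= i:
            stream.append(rng.random())
        return stream[i]
    def draw(flip):
        i = 0
        while True:
            u1 = 1.0 - u(i) if flip else u(i)
            u2 = 1.0 - u(i + 1) if flip else u(i + 1)
            i += 2
            if u2 * M <= beta22_pdf(u1):   # accept with prob f(u1)/M
                return u1
    return draw(False), draw(True)

rng = random.Random(1)
pairs = [antithetic_beta_pair(rng) for _ in range(20000)]
xs, ys = zip(*pairs)
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / len(xs)
# cov is negative: the antithetic coupling survives the rejection step
```

The acceptance rate here is 1/M = 2/3; varying M (efficiency) and the coupling of the input stream is exactly the kind of factor the study examines.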

8.
This paper develops an algorithm for uniform random generation over a constrained simplex, which is the intersection of a standard simplex and a given set. Uniform sampling from constrained simplexes has numerous applications in different fields, such as portfolio optimization, stochastic multi-criteria decision analysis, experimental design with mixtures and decision problems involving discrete joint distributions with imprecise probabilities. The proposed algorithm is developed by combining the acceptance–rejection and conditional methods along with the use of optimization tools. The acceptance rate of the algorithm is analytically compared to that of a crude acceptance–rejection algorithm, which generates points over the simplex and then rejects any points falling outside the intersecting set. Finally, using convex optimization, the setup phase of the algorithm is detailed for the special cases where the intersecting set is a general convex set, a convex set defined by a finite number of convex constraints or a polyhedron.
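The crude acceptance–rejection baseline mentioned above can be sketched as follows: draw uniformly on the standard simplex via normalized exponentials (a Dirichlet(1,…,1) draw), then reject points outside the intersecting set. The particular constraint below is a hypothetical box cut for illustration, not one from the paper:

```python
import random
import math

def uniform_simplex(n, rng):
    # normalized i.i.d. Exp(1) draws, i.e. Dirichlet(1,...,1), are
    # uniform on the standard simplex {x >= 0, sum(x) = 1}
    e = [-math.log(1.0 - rng.random()) for _ in range(n)]
    s = sum(e)
    return [v / s for v in e]

def sample_constrained(n, inside, rng, max_tries=100000):
    """Crude acceptance-rejection: sample uniformly on the simplex and
    keep the point only if it lies in the intersecting set `inside`."""
    for _ in range(max_tries):
        x = uniform_simplex(n, rng)
        if inside(x):
            return x
    raise RuntimeError("acceptance rate too low for crude rejection")

# hypothetical intersecting set: no component may exceed 0.5
rng = random.Random(0)
pts = [sample_constrained(3, lambda x: max(x) <= 0.5, rng)
       for _ in range(1000)]
```

When the intersecting set occupies a small fraction of the simplex, this crude scheme wastes most draws, which is precisely the inefficiency the paper's combined algorithm is designed to avoid.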

9.
Based on a representation of a stochastic integral of Ornstein–Uhlenbeck (O–U) type, an exact simulation algorithm for the tempered stable O–U process is given in this paper. The algorithm employs the double rejection method and the general acceptance–rejection technique. The time complexity of the double rejection method is uniformly bounded over all values of the parameter, and the acceptance probability of the acceptance–rejection technique can be made arbitrarily close to 1. Thus, the implementation of the algorithm is efficient. The performance of the simulation method is evidenced by some empirical results.

10.
We propose to combine two quite powerful ideas that have recently appeared in the Markov chain Monte Carlo literature: adaptive Metropolis samplers and delayed rejection. The ergodicity of the resulting non-Markovian sampler is proved, and the efficiency of the combination is demonstrated with various examples. We present situations where the combination outperforms the original methods: adaptation clearly enhances efficiency of the delayed rejection algorithm in cases where good proposal distributions are not available. Similarly, delayed rejection provides a systematic remedy when the adaptation process has a slow start.
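A minimal sketch of the delayed-rejection half of the combination (the adaptive-covariance part is omitted), using symmetric Gaussian proposals on a standard normal target; the second-stage acceptance probability follows the standard delayed-rejection formula, with the symmetric second-stage proposal density cancelling from the ratio:

```python
import random
import math

def log_target(x):
    # standard normal target, up to an additive constant
    return -0.5 * x * x

def dr_step(x, rng, s1=2.5, s2=0.5):
    """One delayed-rejection Metropolis step: a bold first-stage
    proposal; on rejection, a timid second stage whose acceptance
    probability preserves the target distribution."""
    y1 = x + rng.gauss(0.0, s1)
    a1 = min(1.0, math.exp(log_target(y1) - log_target(x)))
    if rng.random() < a1:
        return y1
    # second stage (reached only when a1 < 1)
    y2 = x + rng.gauss(0.0, s2)
    a1_rev = min(1.0, math.exp(log_target(y1) - log_target(y2)))
    def q1(a, b):
        # unnormalized first-stage proposal density a -> b
        return math.exp(-0.5 * ((b - a) / s1) ** 2)
    num = math.exp(log_target(y2)) * q1(y2, y1) * (1.0 - a1_rev)
    den = math.exp(log_target(x)) * q1(x, y1) * (1.0 - a1)
    if rng.random() < min(1.0, num / den):
        return y2
    return x

rng = random.Random(7)
x, chain = 0.0, []
for _ in range(20000):
    x = dr_step(x, rng)
    chain.append(x)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
# the chain's mean and variance should approximate 0 and 1
```

In the adaptive variant, s1 (or a full proposal covariance) would additionally be tuned from the chain's history, which is the part this sketch leaves out.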

11.
Process capability analysis is applied to monitor process quality, and process capability can be quantified by a process capability index. These indices have wide application in quality control methods and acceptance sampling plans. In this paper, we introduce a double-sampling plan based on a process capability index. In this scheme, under a decision rule with specified rejection and acceptance numbers, a second sample is selected and the decision to reject or accept is made based on the information obtained from the two samples. The purpose of this scheme is to reduce the average sample number, and hence the time and cost of sampling. A comparison study has been conducted to evaluate the performance of the proposed method against classical single sampling plans.

12.
Response-adaptive (RA) allocation designs can skew the allocation of incoming subjects toward the better performing treatment group based on the previously accrued responses. While unstable estimators and increased variability can adversely affect adaptation in early trial stages, Bayesian methods can be implemented with decreasingly informative priors (DIP) to overcome these difficulties. DIPs have been previously used for binary outcomes to constrain adaptation early in the trial, yet gradually increase adaptation as subjects accrue. We extend the DIP approach to RA designs for continuous outcomes, primarily in the normal conjugate family, by functionalizing the prior effective sample size to equal the unobserved sample size. We compare this effective sample size DIP approach to other DIP formulations. Further, we consider various allocation equations and assess their behavior under DIPs. Simulated clinical trials comparing the behavior of these approaches with traditional frequentist and Bayesian RA as well as balanced designs show that the natural lead-in approaches maintain improved treatment with lower variability and greater power.

13.
Efron's biased coin design is a well-known randomization technique that helps to neutralize selection bias in sequential clinical trials for comparing treatments, while keeping the experiment fairly balanced. Extensions of the biased coin design have been proposed by several researchers who have focused mainly on the large-sample properties of their designs. We modify Efron's procedure by introducing an adjustable biased coin design, which is more flexible than his original. We compare it with other existing coin designs; in terms of balance and lack of predictability, its small-sample performance appears in many cases to be an improvement with respect to the other sequential randomized allocation procedures.
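For reference, Efron's original rule (not the adjustable variant proposed in the paper) can be sketched as: assign the under-represented arm with probability p, tossing a fair coin when the arms are balanced:

```python
import random

def efron_bcd(n, p=2/3, rng=None):
    """Efron's biased coin design: favour the lagging arm with
    probability p (Efron's original choice was p = 2/3)."""
    rng = rng or random.Random(0)
    counts = {"A": 0, "B": 0}
    assignments = []
    for _ in range(n):
        if counts["A"] == counts["B"]:
            arm = "A" if rng.random() < 0.5 else "B"   # balanced: fair coin
        else:
            lagging = "A" if counts["A"] < counts["B"] else "B"
            other = "B" if lagging == "A" else "A"
            arm = lagging if rng.random() < p else other
        counts[arm] += 1
        assignments.append(arm)
    return assignments, counts

assigns, counts = efron_bcd(200)
imbalance = abs(counts["A"] - counts["B"])   # kept small by the bias
```

An adjustable design of the kind the paper describes would let p depend on the current imbalance rather than fixing it, trading balance against predictability.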

14.
The design of a sampling inspection plan is usually based on the properties of the operating characteristic curve. This approach ensures that the plan has adequate power to discriminate a lot of acceptable quality from a lot of rejectable quality. However, the designed plan need not have adequate predictive power when the sentenced lot is re-inspected using the same sampling plan. The paper introduces a new design approach for the single sampling attributes plan ensuring the decision of acceptance or rejection is consistent for both the current and any future inspection of the lot. The proposed design controls the risk of a future sample leading to a contradictory decision of acceptance or rejection of the lot. An increase in sample size is required to achieve the required predictive power unless the true lot quality is at a very low fraction non-conforming level. A Bayesian analysis is also provided.
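The operating characteristic curve underlying the conventional design is the binomial probability of accepting the lot as a function of its fraction non-conforming. A sketch with an illustrative single sampling attributes plan (the plan (n, c) and the quality levels below are assumed values, not from the paper):

```python
from math import comb

def oc(p, n, c):
    # operating characteristic: P(accept lot) = P(X <= c), X ~ Bin(n, p)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# hypothetical plan: sample n = 80 items, accept if at most c = 2 defectives
n, c = 80, 2
aql, ltpd = 0.01, 0.08               # assumed quality levels for illustration
producer_risk = 1.0 - oc(aql, n, c)  # rejecting a good lot
consumer_risk = oc(ltpd, n, c)       # accepting a bad lot
```

The paper's point is that controlling these two risks on the OC curve does not by itself guarantee that a re-inspection of the same lot under the same plan would reproduce the accept/reject decision.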

15.
We compare posterior and predictive estimators and probabilities in response-adaptive randomization designs for two- and three-group clinical trials with binary outcomes. Adaptation based upon posterior estimates is discussed, as are two predictive probability algorithms: one using the traditional definition, the other using a skeptical distribution. Optimal and natural lead-in designs are covered. Simulation studies show that efficacy comparisons lead to more adaptation than center comparisons, though at some power loss; skeptically predictive efficacy comparisons and natural lead-in approaches lead to less adaptation but offer reduced allocation variability. Though nuanced, these results help clarify the power–adaptation trade-off in adaptive randomization.

16.
An acceptance sampling plan is a method used to decide whether to accept or reject a product based on adherence to a standard. Process capability indices (PCIs) have been applied in different manufacturing industries as capability measures based on specified criteria, including process departure from a target, process consistency, process yield and process loss. In this paper, a repetitive group sampling (RGS) plan based on a PCI is introduced for variables inspection. First, the optimal parameters of the developed RGS plan are obtained under constraints on the consumer's and producer's risks; a double sampling plan, a multiple dependent state sampling plan and a sampling plan for resubmitted lots are also designed. Finally, after developing variables sampling plans based on Bayesian and exact approaches, a comparison study is performed between the developed RGS plan and the other types of sampling plans, and the results are elaborated.

17.
Measurement of the effect of extreme observations on test statistics has been discussed by many authors. Test resistance to rejection (acceptance) is one of the most appealing methods. Using the distribution of the test statistic, the exact and asymptotic distributions of the test resistance to rejection (acceptance) are introduced. The usage of the distribution is emphasized in the case of the sign test and Spearman's rho.

18.
αn–Designs     
This paper defines a broad class of resolvable incomplete block designs called αn-designs, of which the original α-designs are a special case with n = 1. The statistical and mathematical properties of α-designs extend naturally to these n-dimensional designs. They are a flexible class of resolvable designs appropriate for use in factorial experiments, in constructing efficient t-latinized resolvable block designs, and for enhancing the existing class of α-designs for a single treatment factor.

19.
Response surface designs are widely used in industries like chemicals, foods, pharmaceuticals, bioprocessing, agrochemicals, biology, biomedicine, agriculture and medicine. One of the major objectives of these designs is to study the functional relationship between one or more responses and a number of quantitative input factors. However, biological materials have more run-to-run variation than in many other experiments, leading to the conclusion that smaller response surface designs are inappropriate. Thus designs to be used in these research areas should have greater replication. Gilmour (2006) introduced a wide class of designs called "subset designs" which are useful in situations in which run-to-run variation is high. These designs allow the experimenter to fit the second order response surface model. However, there are situations in which the second order model representation proves to be inadequate and unrealistic due to the presence of lack of fit caused by third or higher order terms in the true response surface model. In such situations it becomes necessary for the experimenter to estimate these higher order terms. In this study, the properties of subset designs, in the context of the third order response surface model, are explored.

20.
Adaptive designs are effective mechanisms for flexibly allocating experimental resources. In clinical trials particularly, such designs allow researchers to balance short- and long-term goals. Unfortunately, fully sequential strategies require outcomes from all previous allocations prior to the next allocation. This can prolong an experiment unduly. As a result, we seek designs for models that specifically incorporate delays. We utilize a delay model in which patients arrive according to a Poisson process and their response times are exponential. We examine three designs with an eye towards minimizing patient losses: a delayed two-armed bandit rule which is optimal for the model and objective of interest; a newly proposed hyperopic rule; and a randomized play-the-winner rule. The results show that, except when the delay rate is several orders of magnitude different than the patient arrival rate, the delayed response bandit is nearly as efficient as the immediate response bandit. The delayed hyperopic design also performs extremely well throughout the range of delays, despite the fact that the rate of delay is not one of its design parameters. The delayed randomized play-the-winner rule is far less efficient than either of the other methods.
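The randomized play-the-winner rule (shown here in its immediate-response RPW(1,1) form; the delayed-response setting studied in the paper is not modelled, and the success probabilities are illustrative) can be sketched with a simple urn:

```python
import random

def rpw_trial(n, p_a, p_b, rng):
    """Randomized play-the-winner RPW(1,1): draw an arm from an urn;
    a success adds a ball for that arm, a failure adds a ball for the
    other arm, so allocation drifts toward the better treatment."""
    urn = ["A", "B"]                 # one initial ball per arm
    counts = {"A": 0, "B": 0}
    for _ in range(n):
        arm = rng.choice(urn)
        counts[arm] += 1
        success = rng.random() < (p_a if arm == "A" else p_b)
        if success:
            urn.append(arm)
        else:
            urn.append("B" if arm == "A" else "A")
    return counts

rng = random.Random(3)
counts = rpw_trial(2000, p_a=0.7, p_b=0.4, rng=rng)
# allocation favours arm A, whose success probability is higher
```

In the delayed setting of the paper, the urn would be updated only when each patient's exponentially distributed response actually arrives, which is what degrades this rule relative to the bandit and hyperopic designs.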

