Similar Literature
20 similar documents found
1.
We compare posterior and predictive estimators and probabilities in response-adaptive randomization designs for two- and three-group clinical trials with binary outcomes. Adaptation based upon posterior estimates is discussed, as are two predictive probability algorithms: one using the traditional definition, the other using a skeptical distribution. Optimal and natural lead-in designs are covered. Simulation studies show that efficacy comparisons lead to more adaptation than center comparisons, though at some loss of power; skeptically predictive efficacy comparisons and natural lead-in approaches lead to less adaptation but offer reduced allocation variability. Though nuanced, these results help clarify the power-adaptation trade-off in adaptive randomization.
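For intuition, a minimal sketch in Python of how a posterior-based response-adaptive design might compute the probability that arm A is better and map it to an allocation probability; the independent Beta(1, 1) priors, the power-transformation rule, and the interim counts are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def posterior_allocation_prob(successes, failures, n_draws=100_000, tuning=0.5, seed=0):
    """Monte Carlo estimate of P(p_A > p_B) under independent Beta(1, 1) priors,
    mapped to an allocation probability via a power transformation."""
    rng = np.random.default_rng(seed)
    p_a = rng.beta(1 + successes["A"], 1 + failures["A"], n_draws)
    p_b = rng.beta(1 + successes["B"], 1 + failures["B"], n_draws)
    prob_a_better = np.mean(p_a > p_b)
    # The tuning exponent in (0, 1] damps extreme allocation probabilities.
    w_a = prob_a_better ** tuning
    w_b = (1 - prob_a_better) ** tuning
    return prob_a_better, w_a / (w_a + w_b)

# Hypothetical interim data: 12/20 responders on arm A, 7/20 on arm B.
post, alloc = posterior_allocation_prob({"A": 12, "B": 7}, {"A": 8, "B": 13})
print(f"P(p_A > p_B) ~ {post:.3f}; allocation probability to A ~ {alloc:.3f}")
```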

2.
In biomedical studies, the problem of testing two-sample survival curves is common. The most popular approach is the log-rank test; however, the log-rank test may give misleading results when the two survival curves cross. As noted by Li et al., it is difficult to find a single method that works well for this testing problem in all situations. Here, we propose a strategy that combines several existing approaches. We then conduct simulations to examine power and Type I error rate, comparing the proposed methods with five competing approaches from Li et al. under various crossing configurations of the two survival curves. Based on the results, we recommend Strategy 2 for the two-sample survival curve testing problem, as it has higher power and an appropriate Type I error rate in every situation. Finally, we analyze two real data examples with the proposed methods for illustration.
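For reference, a minimal textbook implementation in Python of the standard two-sample log-rank test mentioned above; this is the baseline test, not the combination strategy proposed in the paper, and the toy data are invented for illustration.

```python
import numpy as np
from scipy.stats import chi2

def logrank_test(time, event, group):
    """Two-sample log-rank test; `event` is 1 for death, 0 for censored,
    and `group` holds 0/1 labels."""
    time, event, group = map(np.asarray, (time, event, group))
    obs_minus_exp, variance = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        if n > 1:
            obs_minus_exp += d1 - d * n1 / n
            variance += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    chi_sq = obs_minus_exp ** 2 / variance
    return chi_sq, chi2.sf(chi_sq, df=1)

# Toy data: survival times, event indicators and group labels.
t = [5, 8, 12, 14, 20, 21, 28, 33, 40, 48]
e = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
g = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(logrank_test(t, e, g))
```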

3.
The problem of testing whether two samples of possibly right-censored survival data come from the same distribution is considered. The aim is to develop a test capable of detecting a wide spectrum of alternatives. A new class of tests based on Neyman's embedding idea is proposed. The null hypothesis is tested against a model where the hazard ratio of the two survival distributions is expressed by several smooth functions. A data-driven approach to the selection of these functions is studied. Asymptotic properties of the proposed procedures are investigated under fixed and local alternatives. Small-sample performance is explored via simulations, which show that the power of the proposed tests appears to be more robust than the power of some versatile tests previously proposed in the literature (such as combinations of weighted logrank tests, or Kolmogorov–Smirnov tests).

4.
5.
Studies on event occurrence may be conducted as experiments in which one or more treatment groups are compared to a control group. Most randomized trials are designed with equally sized groups, but this design is not always the best one. The statistical power of the study may be larger with unequal sample sizes, and researchers may want to place more participants in one group than in the other because of resource constraints or costs. Optimal designs for discrete-time survival endpoints in two-group trials, allowing different proportions of subjects in the experimental group, can be studied using the generalized linear model. By applying a cost function, the optimal combination of the number of subjects and measurement periods and the optimal allocation ratio can be found. It is observed that the ratio of the recruitment costs in the two groups, the ratio of the recruitment cost in the control group to the cost of obtaining a measurement, the size of the treatment effect, and the shape of the survival distribution have the greatest influence on the optimal design.
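As a hedged, much-simplified illustration of cost-based allocation (the paper optimizes a generalized linear model for discrete-time survival; the snippet below only shows the classical square-root rule for comparing two means with equal variances under a budget constraint, with invented costs):

```python
import math

def cost_optimal_ratio(cost_control, cost_experimental):
    """Classical cost-optimal allocation: n_E / n_C = sqrt(c_C / c_E) minimizes the
    variance of the estimated mean difference for a fixed budget, assuming equal
    outcome variances in the two groups (a simplification of the paper's setting)."""
    return math.sqrt(cost_control / cost_experimental)

# Illustrative costs: recruiting a control subject costs 100, an experimental subject 400.
r = cost_optimal_ratio(100, 400)
print(f"optimal n_E/n_C = {r:.2f}, i.e. about {1 / (1 + r):.0%} of subjects in the control group")
```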

6.
A distribution-free two-stage test based on ranks for the multivariate two-sample location problem is presented. The asymptotic distribution of the first- and second-stage test statistics is derived. Results of a Monte Carlo power study are used to compare the two-stage test with the usual one-stage test. A brief table of critical values is also presented. The test is illustrated using data from an exercise study conducted by the Multipurpose Arthritis Center.

7.
This article considers the problem of estimating the parameters of the Weibull distribution under a progressive Type-I interval censoring scheme with beta-binomial removals. Classical as well as Bayesian procedures for estimating the unknown model parameters are developed. The Bayes estimators are obtained under the squared error loss function (SELF) and the general entropy loss function (GELF) using an MCMC technique. The performance of the estimators is discussed in terms of their mean squared errors (MSEs). Further, an expression for the expected total number of failures is obtained. A real dataset of survival times for patients with plasma cell myeloma is used to illustrate the suitability of the proposed methodology.
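As a rough companion to the abstract, the sketch below writes down the likelihood of interval-censored Weibull data under an idealized progressive Type-I scheme and maximizes it numerically in Python; it is a frequentist simplification with invented data, not the paper's Bayesian SELF/GELF estimators or its beta-binomial removal mechanism.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_interval_loglik(params, times, failures, removals):
    """Log-likelihood for progressively Type-I interval-censored Weibull data:
    `failures[i]` units fail in (T_{i-1}, T_i], `removals[i]` units are withdrawn
    (right-censored) at inspection time T_i."""
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return -1e12                               # penalize invalid parameters
    t = np.concatenate(([0.0], np.asarray(times, float)))
    surv = np.exp(-(t / scale) ** shape)           # S(0), S(T_1), ..., S(T_m)
    interval_prob = surv[:-1] - surv[1:]           # P(failure in each interval)
    return (np.sum(np.asarray(failures) * np.log(interval_prob))
            + np.sum(np.asarray(removals) * np.log(surv[1:])))

# Hypothetical inspection times, failure counts and removal counts.
times, fails, rems = [10, 20, 30], [4, 6, 3], [2, 3, 12]
res = minimize(lambda p: -weibull_interval_loglik(p, times, fails, rems),
               x0=[1.0, 25.0], method="Nelder-Mead")
print("approximate MLE (shape, scale):", res.x)
```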

8.
In recent years, adaptive designs have become popular in the context of clinical trials. The purpose of the present work is to provide a sequential two-treatment allocation rule for continuous response variables. The rule is ethical and, depending on the nature of the distribution of the study variables, sometimes optimal. We examine various properties of the rule.

9.
Received: May 5, 1999; revised version: June 15, 2000

10.
A strictly nonparametric bivariate test for the two-sample location problem is proposed. The proposed test is easy to apply and does not require the stringent condition of affine symmetry or elliptical symmetry imposed by some of the major tests available for this problem. The power function of the proposed test is calculated, and the asymptotic distribution of the test statistic is found to be normal. The power of the proposed test is compared with that of several well-known tests under various distributions using Monte Carlo simulation. The power study shows that the proposed test performs better than most of its competitors for almost all the distributions considered here. As the underlying population deviates from normality, the ability of the proposed test to detect even small shifts in location increases relative to its competitors. The application of the test is illustrated with a data set.

11.
Response-adaptive (RA) allocation designs can skew the allocation of incoming subjects toward the better-performing treatment group based on previously accrued responses. While unstable estimators and increased variability can adversely affect adaptation in early trial stages, Bayesian methods can be implemented with decreasingly informative priors (DIPs) to overcome these difficulties. DIPs have previously been used for binary outcomes to constrain adaptation early in the trial, yet gradually allow more adaptation as subjects accrue. We extend the DIP approach to RA designs for continuous outcomes, primarily in the normal conjugate family, by functionalizing the prior effective sample size to equal the unobserved sample size. We compare this effective-sample-size DIP approach with other DIP formulations. Further, we consider various allocation equations and assess their behavior under DIPs. Simulated clinical trials comparing these approaches with traditional frequentist and Bayesian RA designs, as well as balanced designs, show that the natural lead-in approaches maintain improved treatment with lower variability and greater power.
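A minimal sketch in Python of one plausible reading of the decreasingly informative prior idea for continuous outcomes: a normal-conjugate posterior with known outcome variance whose prior effective sample size equals the number of still-unobserved subjects in the arm. The prior mean, variance, and allocation rule are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import norm

def dip_posterior(xbar, n_obs, n_planned, prior_mean, sigma):
    """Normal-conjugate posterior whose prior carries weight equal to the number of
    not-yet-observed subjects, so the prior decays as data accrue."""
    n0 = max(n_planned - n_obs, 0)
    post_mean = (n0 * prior_mean + n_obs * xbar) / (n0 + n_obs)
    post_var = sigma ** 2 / (n0 + n_obs)
    return post_mean, post_var

def allocation_prob_a(arm_a, arm_b, prior_mean=0.0, sigma=1.0):
    """P(mu_A > mu_B | data); values above 0.5 skew allocation toward arm A."""
    m_a, v_a = dip_posterior(*arm_a, prior_mean, sigma)
    m_b, v_b = dip_posterior(*arm_b, prior_mean, sigma)
    return norm.cdf((m_a - m_b) / np.sqrt(v_a + v_b))

# Hypothetical interim data: (sample mean, observed n, planned n) for each arm.
print(allocation_prob_a(arm_a=(0.40, 15, 50), arm_b=(0.10, 15, 50)))
```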

12.
This paper proposes a new estimator for bivariate distribution functions under random truncation and random censoring. The new method is based on a polar coordinate transformation, which enables us to transform a bivariate survival function into a univariate survival function. A consistent estimator for the transformed univariate function is proposed, and this univariate estimator is then transformed back into a bivariate estimator. The estimator converges weakly to a zero-mean Gaussian process with an easily estimated covariance function. A consistent estimator of the truncation probability is also provided. Numerical studies show that both the distribution estimator and the truncation probability estimator perform remarkably well.

13.
For ultrahigh-dimensional data, independent feature screening has been demonstrated, both theoretically and empirically, to be an effective dimension reduction method with low computational demand. Motivated by the Buckley–James method for accommodating censoring, we propose a fused Kolmogorov–Smirnov filter to screen out irrelevant variables for ultrahigh-dimensional survival data. The proposed model-free screening method can work with many types of covariates (e.g. continuous, discrete and categorical variables) and is shown to enjoy the sure independent screening property under mild regularity conditions, without requiring any moment conditions on the covariates. In particular, the proposed procedure remains powerful when covariates are strongly dependent on each other. We further develop an iterative algorithm to enhance performance in practical situations where some covariates are marginally unrelated but jointly related to the response. Extensive simulations evaluating the finite-sample performance of the proposed method show that it compares favourably with existing methods. As an illustration, we apply the proposed method to a diffuse large-B-cell lymphoma study.
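To convey the screening idea in code, here is a plain marginal Kolmogorov–Smirnov screen in Python that ranks covariates by the KS statistic of the response across a median split of each covariate; it ignores censoring and is not the fused filter of the paper, and the simulated data are invented.

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_screen(X, y, top_k=10):
    """Rank covariates by the two-sample KS statistic of `y` across a median split
    of each covariate; larger statistics suggest a stronger marginal signal."""
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        split = X[:, j] > np.median(X[:, j])
        if 0 < split.sum() < len(split):            # skip degenerate splits
            scores[j] = ks_2samp(y[split], y[~split]).statistic
    return np.argsort(scores)[::-1][:top_k], scores

# Simulated toy data: 200 observations, 1000 covariates, only covariate 0 matters.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 1000))
y = np.exp(0.8 * X[:, 0] + rng.normal(scale=0.5, size=200))   # uncensored "survival" times
kept, _ = ks_screen(X, y)
print("top-ranked covariates:", kept)
```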

14.
A randomized two-stage adaptive Bayesian design is proposed and studied for allocation and comparison in a phase III clinical trial with survival time as treatment response. Several exact and limiting properties of the design and the follow-up inference are studied, both numerically and theoretically, and are compared with a single-stage randomized procedure. The applicability of the proposed methodology is illustrated by using some real data.

15.
Efron's biased coin design is a well-known randomization technique that helps to neutralize selection bias in sequential clinical trials for comparing treatments, while keeping the experiment fairly balanced. Extensions of the biased coin design have been proposed by several researchers, who have focused mainly on the large-sample properties of their designs. We modify Efron's procedure by introducing an adjustable biased coin design, which is more flexible than the original. We compare it with other existing coin designs; in terms of balance and lack of predictability, its small-sample performance appears in many cases to improve upon the other sequential randomized allocation procedures.
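A small sketch in Python of Efron-style biased coin allocation: with a fixed bias it is the classical design described above, and with `adjustable=True` the bias grows with the current imbalance through a function chosen here purely for demonstration, not the authors' adjustable rule.

```python
import numpy as np

def biased_coin_sequence(n_patients, bias=2/3, adjustable=False, a=2.0, seed=0):
    """Sequential treatment assignment with a biased coin: when the groups are
    unbalanced, the under-represented arm is favoured with probability `bias`
    (or an imbalance-dependent probability when `adjustable=True`)."""
    rng = np.random.default_rng(seed)
    assignments = []
    for _ in range(n_patients):
        d = assignments.count("A") - assignments.count("B")   # current imbalance
        if d == 0:
            p_a = 0.5
        else:
            p = abs(d) ** a / (1 + abs(d) ** a) if adjustable else bias
            p_a = 1 - p if d > 0 else p        # favour the under-represented arm
        assignments.append("A" if rng.random() < p_a else "B")
    return assignments

seq = biased_coin_sequence(20, adjustable=True)
print("".join(seq), "| final imbalance:", seq.count("A") - seq.count("B"))
```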

16.
We study two sequential, response-adaptive randomized designs for clinical trials: one was proposed in Bandyopadhyay and Biswas (Biometrika 88:409–419, 2001) and in Biswas and Basu (Sankhya Ser B 63:27–42, 2001); the other stems from the randomly reinforced urn introduced and studied in Muliere et al. (J Stat Plan Inference 136:1853–1874, 2006a). Both designs can be used in clinical trials where the response from each patient is a continuous variable. The comparison is conducted through numerical studies and according to a new guideline for the evaluation of response-adaptive designs.
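For the urn-based design, a minimal simulation sketch in Python of a randomly reinforced urn: a colour is drawn with probability proportional to the urn composition, the corresponding treatment is assigned, and the urn is reinforced by the observed (bounded, non-negative) response. The response distributions and initial composition below are assumptions for illustration.

```python
import numpy as np

def randomly_reinforced_urn(n_patients, response_a, response_b, init=(1.0, 1.0), seed=0):
    """Simulate a two-colour randomly reinforced urn design."""
    rng = np.random.default_rng(seed)
    urn = {"A": float(init[0]), "B": float(init[1])}
    allocations = []
    for _ in range(n_patients):
        p_a = urn["A"] / (urn["A"] + urn["B"])
        arm = "A" if rng.random() < p_a else "B"
        reward = response_a(rng) if arm == "A" else response_b(rng)
        urn[arm] += reward                     # reinforce the drawn colour
        allocations.append(arm)
    return allocations, urn

# Illustrative bounded responses: arm A tends to respond better than arm B,
# so the allocation proportion to A should drift upward.
alloc, urn = randomly_reinforced_urn(
    200,
    response_a=lambda rng: rng.uniform(0.5, 1.0),
    response_b=lambda rng: rng.uniform(0.0, 0.5),
)
print(f"share allocated to A: {alloc.count('A') / len(alloc):.2f}")
```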

17.
A common problem in randomized controlled clinical trials is the optimal assignment of patients to treatment protocols. The traditional optimal design assumes a single criterion, although in reality there is usually more than one objective in a clinical trial. In this paper, optimal treatment allocation schemes are found for a dual-objective clinical trial with a binary response. A graphical method for finding the optimal strategy is proposed, and illustrative examples are discussed.

18.
The area between two survival curves is an intuitive test statistic for the classical two-sample testing problem. We propose a bootstrap version of it for assessing the overall homogeneity of these curves. Our approach allows ties in the data as well as independent right censoring, which may differ between the groups. The asymptotic distributions of the test statistic and of its bootstrap counterpart are derived under the null hypothesis, and their consistency is proven for general alternatives. We demonstrate the finite-sample superiority of the proposed test over some existing methods in a simulation study and illustrate its application with a real-data example.
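A rough sketch in Python of the area-between-curves statistic: Kaplan-Meier curves are computed for each group, the absolute area between them up to a common horizon is the statistic, and a label-permutation p-value stands in for the paper's bootstrap. The implementation and toy data are illustrative simplifications, not the authors' procedure.

```python
import numpy as np

def km_surv(time, event, grid):
    """Kaplan-Meier survival curve evaluated at the points in `grid`."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    s, steps = 1.0, {}
    for t in np.sort(np.unique(time[event == 1])):
        n_risk = np.sum(time >= t)
        d = np.sum((time == t) & (event == 1))
        s *= 1.0 - d / n_risk
        steps[t] = s
    event_times = np.array(sorted(steps))
    out = np.ones(len(grid))
    for i, g in enumerate(grid):
        past = event_times[event_times <= g]
        if past.size:
            out[i] = steps[past[-1]]
    return out

def area_between_km(time, event, group, n_perm=1000, seed=0):
    """Area between the two group-wise KM curves plus a permutation p-value."""
    rng = np.random.default_rng(seed)
    time, event, group = map(np.asarray, (time, event, group))
    tau = min(time[group == 0].max(), time[group == 1].max())
    grid = np.linspace(0.0, tau, 200)

    def stat(g):
        return np.trapz(np.abs(km_surv(time[g == 0], event[g == 0], grid)
                               - km_surv(time[g == 1], event[g == 1], grid)), grid)

    observed = stat(group)
    perm_stats = np.array([stat(rng.permutation(group)) for _ in range(n_perm)])
    return observed, np.mean(perm_stats >= observed)

# Toy data: two exponential samples, no censoring, purely illustrative.
rng = np.random.default_rng(3)
t = np.r_[rng.exponential(10, 30), rng.exponential(18, 30)]
e = np.ones(60, int)
g = np.r_[np.zeros(30, int), np.ones(30, int)]
print(area_between_km(t, e, g))
```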

19.
20.
We consider the problem of sequentially deciding which of two treatments is superior. A class of simple approximate sequential tests is proposed. These tests have probabilities of correct selection that are approximately independent of the sampling rule and depend on unknown parameters only through the function of interest, such as the difference or ratio of mean responses. The tests are obtained using a normal approximation, which is also employed to derive approximate expressions for the probabilities of correct selection and the expected sample sizes. A class of data-dependent sampling rules is proposed for minimizing any weighted average of the expected sample sizes on the two treatments, with the weights being allowed to depend on unknown parameters. The tests are studied in the particular cases of exponentially.
