Similar Documents
20 similar documents found (search time: 265 ms)
1.
The problem of comparing several experimental treatments to a standard arises frequently in medical research. Various multi-stage randomized phase II/III designs have been proposed that select one or more promising experimental treatments and compare them to the standard while controlling overall Type I and Type II error rates. This paper addresses phase II/III settings where the joint goals are to increase the average time to treatment failure and control the probability of toxicity while accounting for patient heterogeneity. We are motivated by the desire to construct a feasible design for a trial of four chemotherapy combinations for treating a family of rare pediatric brain tumors. We present a hybrid two-stage design based on two-dimensional treatment effect parameters. A targeted parameter set is constructed from elicited parameter pairs considered to be equally desirable. Bayesian regression models for failure time and the probability of toxicity as functions of treatment and prognostic covariates are used to define two-dimensional covariate-adjusted treatment effect parameter sets. Decisions at each stage of the trial are based on the ratio of posterior probabilities of the alternative and null covariate-adjusted parameter sets. Design parameters are chosen to minimize expected sample size subject to frequentist error constraints. The design is illustrated by application to the brain tumor trial.
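To convey the flavor of the decision rule only, here is a toy sketch: given posterior draws of a two-dimensional treatment effect (failure-time effect, toxicity probability), estimate the posterior probabilities of an alternative and a null parameter set and compare their ratio to a cutoff. The rectangular sets, the posterior, and the cutoff below are illustrative assumptions, not the paper's elicited sets or regression models.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical posterior draws for one experimental arm: (log hazard ratio of
# failure vs. standard, probability of toxicity). In the actual design these
# come from Bayesian regression models with prognostic covariates.
draws = rng.multivariate_normal(mean=[-0.25, 0.20],
                                cov=[[0.04, 0.0], [0.0, 0.0025]], size=10_000)
log_hr, p_tox = draws[:, 0], draws[:, 1]

# Illustrative parameter sets (assumed, not elicited): alternative = meaningful
# failure-time improvement with acceptable toxicity; null = no improvement or
# excessive toxicity.
in_alt  = (log_hr <= -0.20) & (p_tox <= 0.30)
in_null = (log_hr >=  0.00) | (p_tox >= 0.40)

ratio = in_alt.mean() / max(in_null.mean(), 1e-12)
cutoff = 1.0  # in the paper this is tuned to satisfy frequentist error constraints
print(f"P(alt)={in_alt.mean():.3f}  P(null)={in_null.mean():.3f}  "
      f"ratio={ratio:.2f}  decision={'advance' if ratio > cutoff else 'drop'}")
```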

2.
In many clinical studies, a commonly encountered problem is to compare the survival probabilities of two treatments for a given patient with a certain set of covariates, and there is often a need to make adjustments for other covariates that may affect outcomes. One approach is to plot the difference between the two subject-specific predicted survival estimates with a simultaneous confidence band. Such a band will provide useful information about when these two treatments differ and which treatment has a better survival probability. In this paper, we show how to construct such a band based on the additive risk model and we use the martingale central limit theorem to derive its asymptotic distribution. The proposed method is evaluated through a simulation study and is illustrated with two real examples.
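A pointwise sketch of the quantity being banded, using the lifelines implementation of Aalen's additive (risk) model: fit the model, predict subject-specific survival under each treatment for a fixed covariate profile, and take the difference. The simulated data and column names are made up, and the paper's simultaneous band (via the martingale central limit theorem) is not reproduced here.

```python
import numpy as np
import pandas as pd
from lifelines import AalenAdditiveFitter

# Toy right-censored data: treatment indicator plus one adjustment covariate.
rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({"trt": rng.integers(0, 2, n), "age": rng.normal(60, 8, n)})
lam = 0.05 * np.exp(-0.5 * df["trt"] + 0.02 * (df["age"] - 60))
t = rng.exponential(1 / lam)
c = rng.exponential(25, n)
df["time"], df["event"] = np.minimum(t, c), (t <= c).astype(int)

aaf = AalenAdditiveFitter(coef_penalizer=0.5)
aaf.fit(df, duration_col="time", event_col="event")

# Subject-specific predicted survival under each treatment, same covariates.
patient = pd.DataFrame({"trt": [0, 1], "age": [60, 60]})
surv = aaf.predict_survival_function(patient)  # one column per row of `patient`
diff = surv[1] - surv[0]                       # S1(t|x) - S0(t|x), pointwise only
print(diff.head())
```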

3.
The paper deals with the problem of using contours as the basis for defining probability distributions. First, the most general probability densities with given contours are obtained and the particular cases of circular and elliptical contours are dealt with. It is shown that the so-called elliptically contoured distributions do not include all possible cases. Next, the case of contours defined by polar coordinates is analyzed, including its simulation and parameter estimation. Finally, the case of cumulative distribution functions with given contours is discussed. Several examples are used for illustrative purposes.

4.
A statistical test can be seen as a procedure to produce a decision based on observed data, where some decisions consist of rejecting a hypothesis (yielding a significant result) and some do not, and where one controls the probability to make a wrong rejection at some prespecified significance level. Whereas traditional hypothesis testing involves only two possible decisions (to reject or not reject a null hypothesis), Kaiser's directional two-sided test as well as the more recently introduced testing procedure of Jones and Tukey, each equivalent to running two one-sided tests, involve three possible decisions to infer the value of a unidimensional parameter. The latter procedure assumes that a point null hypothesis is impossible (e.g., that two treatments cannot have exactly the same effect), allowing a gain of statistical power. There are, however, situations where a point hypothesis is indeed plausible, for example, when considering hypotheses derived from Einstein's theories. In this article, we introduce a five-decision rule testing procedure, equivalent to running a traditional two-sided test in addition to two one-sided tests, which combines the advantages of the testing procedures of Kaiser (no assumption on a point hypothesis being impossible) and Jones and Tukey (higher power), allowing for a nonnegligible (typically 20%) reduction of the sample size needed to reach a given statistical power to get a significant result, compared to the traditional approach.
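A hedged sketch of the three underlying tests for a one-sample mean: a two-sided test at level alpha plus two one-sided tests at level alpha, whose joint outcome is mapped to a decision. The mapping of test outcomes to decision labels below is my illustrative reading, not the article's formal construction.

```python
import numpy as np
from scipy import stats

def five_decision(x, theta0=0.0, alpha=0.05):
    """Illustrative five-decision rule for a one-sample mean (assumed mapping)."""
    t, p_two = stats.ttest_1samp(x, popmean=theta0)    # two-sided p-value
    p_greater = p_two / 2 if t > 0 else 1 - p_two / 2  # H1: mu > theta0
    p_less    = p_two / 2 if t < 0 else 1 - p_two / 2  # H1: mu < theta0
    if p_two < alpha:                                  # two-sided significant
        return "mu > theta0" if t > 0 else "mu < theta0"
    if p_greater < alpha:                              # only one-sided significant
        return "mu >= theta0 (values below ruled out)"
    if p_less < alpha:
        return "mu <= theta0 (values above ruled out)"
    return "no decision"

rng = np.random.default_rng(0)
print(five_decision(rng.normal(0.3, 1.0, size=50)))
```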

5.
To estimate parameters defined by estimating equations with covariates missing at random, we consider three bias-corrected nonparametric approaches based on inverse probability weighting, regression and augmented inverse probability weighting. However, when the dimension of covariates is not low, the estimation efficiency will be affected due to the curse of dimensionality. To address this issue, we propose a two-stage estimation procedure by using the dimension-reduced kernel estimation in conjunction with bias-corrected estimating equations. We show that the resulting three estimators are asymptotically equivalent and achieve the desirable properties. The impact of dimension reduction in nonparametric estimation of parameters is also investigated. The finite-sample performance of the proposed estimators is studied through simulation, and an application to an automobile data set is also presented.
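As a stripped-down illustration of the three bias-corrected estimators, here is the familiar special case of estimating a mean that is missing at random, with simple parametric working models standing in for the paper's dimension-reduced kernel estimators. The data and models are assumptions for the sketch.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=(n, 1))
y = 1.0 + 2.0 * x[:, 0] + rng.normal(size=n)     # true mean E[Y] = 1.0
pi = 1 / (1 + np.exp(-(0.5 + x[:, 0])))          # response probability (MAR given x)
r = (rng.uniform(size=n) < pi).astype(float)     # 1 = observed

pi_hat = LogisticRegression().fit(x, r).predict_proba(x)[:, 1]
obs = r == 1
m_hat = LinearRegression().fit(x[obs], y[obs]).predict(x)
y0 = np.where(obs, y, 0.0)                       # missing y is never used directly

ipw  = np.mean(r * y0 / pi_hat)                  # inverse probability weighting
reg  = np.mean(m_hat)                            # regression (outcome model)
aipw = np.mean(r * y0 / pi_hat - (r - pi_hat) / pi_hat * m_hat)  # augmented IPW
print(f"IPW={ipw:.3f}  REG={reg:.3f}  AIPW={aipw:.3f}")
```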

6.
The conventional phase II trial design paradigm is to make the go/no-go decision based on the hypothesis testing framework. Statistical significance itself alone, however, may not be sufficient to establish that the drug is clinically effective enough to warrant confirmatory phase III trials. We propose the Bayesian optimal phase II trial design with dual-criterion decision making (BOP2-DC), which incorporates both statistical significance and clinical relevance into decision making. Based on the posterior probability that the treatment effect reaches the lower reference value (statistical significance) and the clinically meaningful value (clinical significance), BOP2-DC allows for go/consider/no-go decisions, rather than a binary go/no-go decision. BOP2-DC is highly flexible and accommodates various types of endpoints, including binary, continuous, time-to-event, multiple, and coprimary endpoints, in single-arm and randomized trials. The decision rule of BOP2-DC is optimized to maximize the probability of a go decision when the treatment is effective or minimize the expected sample size when the treatment is futile. Simulation studies show that the BOP2-DC design yields desirable operating characteristics. The software to implement BOP2-DC is freely available at www.trialdesign.org.
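A minimal sketch of the dual-criterion idea for a single-arm binary endpoint: with a Beta posterior on the response rate, compare the posterior probabilities of exceeding the lower reference value and the clinically meaningful value against two cutoffs to produce go/consider/no-go. The prior, reference values, cutoffs, and decision mapping here are placeholders, not the optimized BOP2-DC rule (see www.trialdesign.org for the actual software).

```python
from scipy import stats

def dual_criterion_decision(n, responses, p_lrv=0.20, p_cmv=0.35,
                            c_stat=0.90, c_clin=0.60, a=0.5, b=0.5):
    """Go/consider/no-go for a binary endpoint under a Beta(a, b) prior.
    p_lrv: lower reference value; p_cmv: clinically meaningful value.
    Cutoffs c_stat / c_clin are illustrative, not BOP2-DC-optimized."""
    post = stats.beta(a + responses, b + n - responses)
    pr_stat = post.sf(p_lrv)   # P(rate > LRV | data): statistical significance
    pr_clin = post.sf(p_cmv)   # P(rate > CMV | data): clinical relevance
    if pr_stat >= c_stat and pr_clin >= c_clin:
        return "go", round(pr_stat, 3), round(pr_clin, 3)
    if pr_stat < c_stat:
        return "no-go", round(pr_stat, 3), round(pr_clin, 3)
    return "consider", round(pr_stat, 3), round(pr_clin, 3)

for responses in (5, 9, 13):
    print(responses, "->", dual_criterion_decision(n=30, responses=responses))
```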

7.
For analyzing incidence data on diabetes and health problems, the bivariate geometric probability distribution is a natural choice but has remained largely unexplored due to the lack of models linking covariates with the probabilities of bivariate incidence of correlated outcomes. In this paper, bivariate geometric models are proposed for two correlated incidence outcomes. Extended generalized linear models are developed to take into account covariate dependence of the bivariate probabilities of correlated incidence outcomes for diabetes and heart diseases in the elderly population. The estimation and test procedures are illustrated using the Health and Retirement Study data. Two models are shown in this paper, one based on a conditional-marginal approach and the other based on the joint probability distribution with an association parameter. The joint model with an association parameter appears to be a very good choice for analyzing the covariate dependence of the joint incidence of diabetes and heart diseases. Bootstrapping is performed to measure the accuracy of estimates and the results indicate very small bias.
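The modelling obstacle the paper addresses is linking covariates to geometric incidence probabilities. A univariate sketch (ignoring the bivariate association) fits P(Y = y) = p(1-p)^y with a logit link by maximum likelihood; the simulated data and link choice are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(11)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one covariate
beta_true = np.array([-0.5, 0.8])
p = expit(X @ beta_true)
y = rng.geometric(p) - 1   # numpy counts trials to first success; shift to failures

def negloglik(beta):
    pi = expit(X @ beta)
    return -np.sum(np.log(pi) + y * np.log1p(-pi))  # log P(Y=y) = log p + y log(1-p)

fit = minimize(negloglik, x0=np.zeros(2), method="BFGS")
print("MLE:", fit.x, " true:", beta_true)
```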

8.
Inference for the state occupation probabilities, given a set of baseline covariates, is an important problem in survival analysis and time-to-event multistate data. We introduce an approach based on an inverse censoring probability re-weighted semi-parametric single index model to estimate conditional state occupation probabilities of a given individual in a multistate model under right-censoring. Besides obtaining a temporal regression function, we also test the potential time-varying effect of a baseline covariate on future state occupation. We show that the proposed technique has desirable finite-sample performance and is competitive when compared with three other existing approaches. We illustrate the proposed methodology using two different data sets. First, we re-examine a well-known data set dealing with leukemia patients undergoing bone marrow transplant with various state transitions. Our second illustration is based on data from a study involving the functional status of a set of spinal cord injured patients undergoing a rehabilitation program.

9.
Conditional power calculations are frequently used to guide the decision whether or not to stop a trial for futility or to modify planned sample size. These ignore the information in short-term endpoints and baseline covariates, and thereby do not make fully efficient use of the information in the data. We therefore propose an interim decision procedure based on the conditional power approach which exploits the information contained in baseline covariates and short-term endpoints. We will realize this by considering the estimation of the treatment effect at the interim analysis as a missing data problem. This problem is addressed by employing specific prediction models for the long-term endpoint which enable the incorporation of baseline covariates and multiple short-term endpoints. We show that the proposed procedure leads to an efficiency gain and a reduced sample size, without compromising the Type I error rate of the procedure, even when the adopted prediction models are misspecified. In particular, implementing our proposal in the conditional power approach enables earlier decisions relative to standard approaches, whilst controlling the probability of an incorrect decision. This time gain results in a lower expected number of recruited patients in case of stopping for futility, such that fewer patients receive the futile regimen. We explain how these methods can be used in adaptive designs with unblinded sample size re-assessment based on the inverse normal P-value combination method to control Type I error. We support the proposal by Monte Carlo simulations based on data from a real clinical trial.
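For reference, the standard conditional-power computation that the paper builds on, under the usual Brownian-motion approximation: given the interim z-statistic at information fraction t and an assumed drift, compute the probability of crossing the final critical value. The covariate and short-term-endpoint enrichment the paper proposes would replace the interim estimate with a model-based prediction; that part is not shown.

```python
import numpy as np
from scipy.stats import norm

def conditional_power(z_t, t, alpha=0.025, drift=None):
    """P(final Z >= z_{1-alpha} | interim Z = z_t at information fraction t).
    drift = E[Z at full information]; defaults to the current-trend estimate."""
    if drift is None:
        drift = z_t / np.sqrt(t)              # current-trend assumption
    z_crit = norm.ppf(1 - alpha)
    num = z_t * np.sqrt(t) + drift * (1 - t) - z_crit
    return norm.cdf(num / np.sqrt(1 - t))

print(f"CP = {conditional_power(z_t=1.2, t=0.5):.3f}")
```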

10.
11.
A graphical procedure for the display of treatment means that enables one to determine the statistical significance of the observed differences is presented. It is shown that the widely used least significant difference and honestly significant difference statistics can be used to construct plots in which any two means whose uncertainty intervals do not overlap are significantly different at the assigned probability level. It is argued that these plots, because of their straightforward decision rules, are more effective than those that show the observed means with standard errors or confidence limits. Several examples of the proposed displays are included to illustrate the procedure.
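A sketch of the LSD version of the display for a balanced one-way layout: draw intervals of half-width LSD/2 around each mean, so that two means differ significantly at level alpha exactly when their intervals fail to overlap. The simulated data are assumptions; matplotlib is assumed available.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(5)
k, n = 4, 10                                  # groups, replicates per group
data = rng.normal(loc=[10, 11, 11.5, 13], scale=1.5, size=(n, k))

means = data.mean(axis=0)
mse = data.var(axis=0, ddof=1).mean()         # pooled error MS (balanced case)
df_err = k * (n - 1)
lsd = stats.t.ppf(0.975, df_err) * np.sqrt(2 * mse / n)

# Non-overlap of mean +/- LSD/2 intervals <=> |difference| > LSD.
plt.errorbar(range(k), means, yerr=lsd / 2, fmt="o", capsize=4)
plt.xticks(range(k), [f"trt {i + 1}" for i in range(k)])
plt.ylabel("mean response")
plt.title(f"Means with LSD/2 bars (LSD = {lsd:.2f})")
plt.show()
```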

12.
In this article, maximum likelihood estimates of an exchangeable multinomial distribution using a parametric form to model the parameters as functions of covariates are derived. The nonlinearity of the exchangeable multinomial distribution and the parametric model make direct application of the Newton–Raphson and Fisher scoring algorithms computationally infeasible. Instead, parameter estimates are obtained as solutions to an iterative weighted least-squares algorithm. A completely monotonic parametric form that results in a valid probability model is proposed for defining the marginal probabilities.

13.
The development of a new drug is a major undertaking and it is important to consider carefully the key decisions in the development process. Decisions are made in the presence of uncertainty and outcomes such as the probability of successful drug registration depend on the clinical development programme. The Rheumatoid Arthritis Drug Development Model was developed to support key decisions for drugs in development for the treatment of rheumatoid arthritis. It is configured to simulate Phase 2b and 3 trials based on the efficacy of new drugs at the end of Phase 2a, evidence about the efficacy of existing treatments, and expert opinion regarding key safety criteria. The model evaluates the performance of different development programmes with respect to the duration of disease of the target population, Phase 2b and 3 sample sizes, the dose(s) of the experimental treatment, the choice of comparator, the duration of the Phase 2b clinical trial, the primary efficacy outcome and decision criteria for successfully passing Phases 2b and 3. It uses Bayesian clinical trial simulation to calculate the probability of successful drug registration based on the uncertainty about parameters of interest, thereby providing a more realistic assessment of the likely outcomes of individual trials and sequences of trials for the purpose of decision making. In this case study, the results show that, depending on the trial design, the new treatment has assurances of successful drug registration in the range 0.044–0.142 for an ACR20 outcome and 0.057–0.213 for an ACR50 outcome. Copyright © 2009 John Wiley & Sons, Ltd.
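The core computation, assurance via Bayesian clinical trial simulation, can be sketched in a few lines: draw the unknown treatment effect from its prior (e.g., the Phase 2a posterior), simulate a trial for each draw, and average the success indicator. The normal-endpoint setup, prior, and success criterion below are assumptions, not the ACR-based machinery of the Rheumatoid Arthritis Drug Development Model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)

def assurance(n_per_arm, prior_mean, prior_sd, sigma=1.0, alpha=0.025, sims=20_000):
    """Unconditional probability of a 'successful' trial (one-sided z-test)."""
    delta = rng.normal(prior_mean, prior_sd, size=sims)  # effect-size uncertainty
    se = sigma * np.sqrt(2 / n_per_arm)
    z = rng.normal(delta / se, 1.0)                      # simulated trial z-statistics
    return np.mean(z > stats.norm.ppf(1 - alpha))

for n in (100, 200, 400):
    print(n, round(assurance(n, prior_mean=0.15, prior_sd=0.10), 3))
```

Unlike power at a fixed assumed effect, this averages over what is actually known about the effect at the end of Phase 2a, which is why it gives the more realistic (and often sobering) registration probabilities reported in the case study.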

14.
Efficient statistical inference on nonignorable missing data is a challenging problem. This paper proposes a new estimation procedure based on composite quantile regression (CQR) for linear regression models with nonignorable missing data, which is applicable even with high-dimensional covariates. A parametric model is assumed for modelling the response probability, which is estimated by the empirical likelihood approach. Local identifiability of the proposed strategy is guaranteed on the basis of an instrumental variable approach. A set of data-based adaptive weights constructed via an empirical likelihood method is used to weight CQR functions. The proposed method is resistant to heavy-tailed errors or outliers in the response. An adaptive penalisation method for variable selection is proposed to achieve sparsity with high-dimensional covariates. Limiting distributions of the proposed estimators are derived. Simulation studies are conducted to investigate the finite-sample performance of the proposed methodologies. An application to the ACTG 175 data is also presented.

15.
We study how to select or combine estimators of the average treatment effect (ATE) and the average treatment effect on the treated (ATT) in the presence of multiple sets of covariates. We consider two cases: (1) all sets of covariates satisfy the unconfoundedness assumption and (2) some sets of covariates violate the unconfoundedness assumption locally. For both cases, we propose a data-driven covariate selection criterion (CSC) to minimize the asymptotic mean squared errors (AMSEs). Based on our CSC, we propose new average estimators of ATE and ATT, which include the selected estimators based on a single set of covariates as a special case. We derive the asymptotic distributions of our new estimators and propose how to construct valid confidence intervals. Our Monte Carlo simulations show that in finite samples, our new average estimators achieve substantial efficiency gains over the estimators based on a single set of covariates. We apply our new estimators to study the impact of inherited control on firm performance.

16.
This paper is concerned primarily with subset selection procedures based on the sample medians of logistic populations. A procedure is given which chooses a nonempty subset from among k independent logistic populations, having a common known variance, so that the population with the largest location parameter is contained in the subset with a pre-specified probability. The constants required to apply the median procedure with small sample sizes (≤ 19) are tabulated and can also be used to construct simultaneous confidence intervals. Asymptotic formulae are provided for application with larger sample sizes. It is shown that, under certain situations, rules based on the median are substantially more efficient than analogous procedures based either on sample means or on the sum of joint ranks.
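Outside the tabulated range, one could approximate the required constant by Monte Carlo instead of the paper's tables or asymptotic formulae: under the least favorable (equal-parameter) configuration, take d as the P* quantile of the spread between the largest sample median and any fixed population's median; the Gupta-type rule then retains every population whose median is within d of the maximum. This simulation shortcut and the scale value are my assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(9)

def median_subset_constant(k, n, p_star=0.90, sims=20_000, scale=1.0):
    """Monte Carlo constant d for subset selection on sample medians:
    under the equal-parameter logistic configuration (least favorable),
    P(best population retained) is approximately p_star."""
    samples = rng.logistic(loc=0.0, scale=scale, size=(sims, k, n))
    med = np.median(samples, axis=2)        # (sims, k) sample medians
    spread = med.max(axis=1) - med[:, 0]    # gap to an arbitrary 'best' population
    return np.quantile(spread, p_star)

k, n = 5, 25
d = median_subset_constant(k, n)
medians = np.array([0.10, 0.55, 0.42, -0.20, 0.61])  # observed sample medians
subset = np.where(medians >= medians.max() - d)[0] + 1
print(f"d = {d:.3f}; selected populations: {subset}")
```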

17.
The paper revisits the concept of a power series distribution by defining its series function, its power parameter, and hence its probability generating function. Realization that the series function for a particular distribution is a special case of a recognized mathematical function enables distributions to be classified into families. Examples are the generalized hypergeometric family and the q-series family, both of which contain generalizations of the geometric distribution. The Lerch function (a third generalization of the geometric series) is the series function for the Lerch family. A list of distributions belonging to the Lerch family is provided.
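In symbols, the standard construction being revisited: for a series function A(θ) with power parameter θ, the power series distribution and its probability generating function are

```latex
P(X = x) = \frac{a_x\,\theta^{x}}{A(\theta)}, \qquad
A(\theta) = \sum_{x=0}^{\infty} a_x\,\theta^{x},\ a_x \ge 0, \qquad
G(s) = \operatorname{E}\!\left[s^{X}\right] = \frac{A(s\theta)}{A(\theta)}.
```

Taking A(θ) = (1 - θ)^{-1} recovers the geometric distribution P(X = x) = (1 - θ)θ^x, while the Lerch family takes A to be the Lerch transcendent Φ(θ, s, a) = Σ_{x≥0} θ^x / (x + a)^s.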

18.
Multivariate model validation is a complex decision-making problem involving comparison of multiple correlated quantities, based upon the available information and prior knowledge. This paper presents a Bayesian risk-based decision method for validation assessment of multivariate predictive models under uncertainty. A generalized likelihood ratio is derived as a quantitative validation metric based on Bayes' theorem and a Gaussian distribution assumption for the errors between validation data and model prediction. The multivariate model is then assessed based on the comparison of the likelihood ratio with a Bayesian decision threshold, a function of the decision costs and prior of each hypothesis. The probability density function of the likelihood ratio is constructed using the statistics of multiple response quantities and Monte Carlo simulation. The proposed methodology is implemented in the validation of a transient heat conduction model, using a multivariate data set from experiments. The Bayesian methodology provides a quantitative approach to facilitate rational decisions in multivariate model assessment under uncertainty.
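A compact sketch under the stated Gaussian-error assumption: compute the likelihood ratio of the validation errors under "model valid" (zero-mean errors) versus a shifted alternative, and compare it with the Bayes threshold built from decision costs and hypothesis priors. The covariance, costs, and data-driven alternative mean are illustrative choices, not the paper's heat-conduction application.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(42)

# Errors between m repeated multivariate measurements and the model prediction.
m, dim = 25, 3
errors = rng.multivariate_normal(np.full(dim, 0.15), 0.04 * np.eye(dim), size=m)

cov = 0.04 * np.eye(dim)            # assumed error covariance
mu_alt = errors.mean(axis=0)        # data-driven alternative (generalized LR)

loglr = (multivariate_normal.logpdf(errors, mean=np.zeros(dim), cov=cov).sum()
         - multivariate_normal.logpdf(errors, mean=mu_alt, cov=cov).sum())

# Bayes rule: accept H0 (model valid) when LR > P(H1)(c01-c11) / (P(H0)(c10-c00)).
p0, p1 = 0.5, 0.5
c01, c10, c11, c00 = 1.0, 1.0, 0.0, 0.0   # cost of each decision/truth pair
log_threshold = np.log(p1 * (c01 - c11)) - np.log(p0 * (c10 - c00))
print(f"log LR = {loglr:.2f}, log threshold = {log_threshold:.2f} ->",
      "model accepted" if loglr > log_threshold else "model rejected")
```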

19.
The inverse Gaussian distribution is a flexible model which has been extensively applied in the theory of generalized linear models and in accelerated life testing, where early failure times predominate. More recently it has received attention in areas such as quality control, and as an underlying model that provides an alternative to the analysis of variance. In reliability testing and acceptance sampling, data acquisition often takes place in the face of scarce resources and may be both costly and time-consuming. In such settings it is desirable to reach a statistically sound decision as quickly as possible. Based on the sequential probability ratio test (SPRT), sequential sampling plans provide one method of arriving at a timely, statistically based decision. A sequential sampling plan for the inverse Gaussian process mean when the value of the shape parameter of the density is known is presented in this paper.
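A sketch of the Wald SPRT in this setting: with known shape parameter lambda, accumulate the log-likelihood ratio of H1: mu = mu1 against H0: mu = mu0 observation by observation and stop at the usual boundaries log(beta/(1-alpha)) and log((1-beta)/alpha). The parameter values are illustrative, and the paper's specific plan may differ in detail.

```python
import numpy as np

def ig_sprt(xs, mu0, mu1, lam, alpha=0.05, beta=0.10):
    """Wald SPRT for the inverse Gaussian mean; shape parameter lam is known."""
    low, high = np.log(beta / (1 - alpha)), np.log((1 - beta) / alpha)
    llr, i = 0.0, 0
    for i, x in enumerate(xs, start=1):
        # log f(x; mu1, lam) - log f(x; mu0, lam); the sqrt(lam/(2 pi x^3))
        # normalizing factors cancel, leaving only the exponential terms.
        llr += lam * ((x - mu0) ** 2 / (2 * mu0 ** 2 * x)
                      - (x - mu1) ** 2 / (2 * mu1 ** 2 * x))
        if llr >= high:
            return f"accept H1: mu = {mu1} (n = {i})"
        if llr <= low:
            return f"accept H0: mu = {mu0} (n = {i})"
    return f"no decision after n = {i}"

rng = np.random.default_rng(8)
data = rng.wald(1.3, 2.0, size=500)   # inverse Gaussian, mean 1.3, shape 2.0
print(ig_sprt(data, mu0=1.0, mu1=1.3, lam=2.0))
```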

20.
Based on the inverse probability weighting method, in this article we construct the empirical likelihood (EL) and penalized empirical likelihood (PEL) ratios of the parameter in the linear quantile regression model when the covariates are missing at random, in the presence and absence of auxiliary information, respectively. It is proved that the EL ratio admits a limiting chi-square distribution. At the same time, the asymptotic normality of the maximum EL and PEL estimators of the parameter is established. Variable selection for the model, in the presence and absence of auxiliary information respectively, is also discussed. A simulation study and a real data analysis are conducted to evaluate the performance of the proposed methods.
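To make the EL machinery concrete, here is the textbook empirical likelihood ratio for the simplest scalar estimating function g_i(θ) = x_i - θ (a mean): solve for the Lagrange multiplier and form -2 log R(θ), which is asymptotically chi-square with 1 degree of freedom. The paper's inverse-probability-weighted quantile-regression estimating functions would slot in in place of g; that extension is not shown.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_ratio(g):
    """-2 log empirical likelihood ratio for E[g] = 0, scalar estimating function.
    Weights w_i = 1 / (n (1 + lam * g_i)), with lam solving sum g_i/(1+lam g_i)=0."""
    # lam must keep 1 + lam*g_i > 0 for all i; bracket just inside that range.
    lo = (-1 / g.max()) * (1 - 1e-9)
    hi = (-1 / g.min()) * (1 - 1e-9)
    lam = brentq(lambda l: np.sum(g / (1 + l * g)), lo, hi)
    return 2 * np.sum(np.log1p(lam * g))

rng = np.random.default_rng(6)
x = rng.normal(loc=0.2, scale=1.0, size=80)
stat = el_ratio(x - 0.0)                      # test H0: mean = 0
print(f"-2 log R = {stat:.2f}, p = {chi2.sf(stat, df=1):.4f}")
```

The bracketing assumes the data straddle the hypothesized value (min g < 0 < max g), which is required for the EL ratio to be defined.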
