Similar Literature
20 similar documents found (search time: 31 ms)
1.
Response adaptive randomization (RAR) methods for clinical trials are susceptible to imbalance in the distribution of influential covariates across treatment arms. This can make the interpretation of trial results difficult, because observed differences between treatment groups may be a function of the covariates rather than of the treatments themselves. We propose a method for balancing the distribution of covariate strata across treatment arms within RAR. The method uses odds ratios to modify global RAR probabilities to obtain stratum-specific modified RAR probabilities. We provide illustrative examples and a simple simulation study to demonstrate the effectiveness of the strategy for maintaining covariate balance. The proposed method is straightforward to implement and applicable to any type of RAR method or outcome.
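The odds-ratio adjustment described in this abstract can be illustrated with a short sketch. The function below converts a global RAR allocation probability into a stratum-specific one by multiplying the allocation odds by a stratum-level odds ratio; the function name, the example values, and the way the odds ratio is chosen are illustrative assumptions, not the authors' exact algorithm.

```python
# A minimal sketch, assuming the adjustment acts on the allocation odds;
# the function name and example values are hypothetical.

def modified_rar_probability(global_p: float, stratum_or: float) -> float:
    """Shift a global RAR allocation probability for treatment A by an odds
    ratio chosen to counteract covariate imbalance within one stratum."""
    odds = global_p / (1.0 - global_p)      # global allocation odds for A
    adj_odds = stratum_or * odds            # stratum-specific modified odds
    return adj_odds / (1.0 + adj_odds)      # back to a probability

# Example: global RAR currently favours A with p = 0.6, but this stratum is
# over-represented on arm A, so its allocation to A is down-weighted (OR < 1).
print(round(modified_rar_probability(global_p=0.6, stratum_or=0.5), 3))  # 0.429
```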

2.
Abstract. This article deals with two problems concerning the probabilities of causation defined by Pearl (Causality: models, reasoning, and inference, 2nd edn, 2009, Cambridge University Press, New York), namely, the probability that one observed event was a necessary (or sufficient, or both) cause of another: one is to derive new bounds, and the other is to provide covariate selection criteria. Tian & Pearl (Ann. Math. Artif. Intell., 28, 2000, 287–313) showed how to bound the probabilities of causation using information from experimental and observational studies, with minimal assumptions about the data-generating process, and gave conditions under which these probabilities are identifiable. In this article, we derive narrower bounds using covariate information that is available from those studies. In addition, we propose a conditional monotonicity assumption that narrows the bounds further. Moreover, we discuss the covariate selection problem from the viewpoint of estimation accuracy, and show that selecting a covariate that has a direct effect on the outcome variable cannot always improve the estimation accuracy, contrary to the situation in linear regression models. These results provide more accurate information for public policy, legal determination of responsibility and personal decision making.
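As context for the kind of bounds being narrowed, the sketch below computes the simple Fréchet-style bounds on the probability of necessity and sufficiency (PNS) from experimental quantities alone; the tighter Tian-Pearl bounds and the covariate-narrowed bounds proposed in the article are not reproduced, and the example numbers are hypothetical.

```python
# PNS = P(Y would occur under treatment AND would not occur without it).
# These are the loose baseline (Fréchet) bounds using only experimental
# quantities P(y | do(x)) and P(y | do(x')); a sketch for orientation only.

def pns_bounds(p_y_do_x: float, p_y_do_not_x: float):
    lower = max(0.0, p_y_do_x - p_y_do_not_x)
    upper = min(p_y_do_x, 1.0 - p_y_do_not_x)
    return lower, upper

# Example: treatment raises the outcome rate from 0.2 to 0.7.
print(pns_bounds(p_y_do_x=0.7, p_y_do_not_x=0.2))   # (0.5, 0.7)
```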

3.
A nonparametric method is considered which yields smoothed estimates of the response probabilities when the response variable is categorical. The method is based on Lauder's (1983) direct kernel estimates, which are extended to allow for ordinal kernels. Thus one can make use of the ordinal scale of the response variable. A class of predictive loss functions is introduced on which the cross-validatory choice of smoothing parameters is based. Plots of the smoothed response probabilities may be used to uncover the form of covariate effects.
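A minimal sketch of the ordinal-kernel idea, assuming a simple geometric kernel in the category distance; the kernel form and the fixed smoothing parameter are illustrative choices, and the paper's cross-validatory selection of smoothing parameters is not reproduced.

```python
import numpy as np

# Smooth raw category proportions by borrowing strength from neighbouring
# ordered categories with weights proportional to lam ** |j - k|.

def ordinal_kernel_smooth(counts, lam=0.4):
    counts = np.asarray(counts, dtype=float)
    k = len(counts)
    w = lam ** np.abs(np.subtract.outer(np.arange(k), np.arange(k)))
    w /= w.sum(axis=1, keepdims=True)        # each row of weights sums to one
    smoothed = w @ (counts / counts.sum())   # smooth the raw proportions
    return smoothed / smoothed.sum()         # renormalize to probabilities

print(ordinal_kernel_smooth([2, 0, 5, 8, 1]).round(3))
```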

4.
We consider a modelling approach to longitudinal data that aims at estimating flexible covariate effects in a model where the sampling probabilities are modelled explicitly. The joint modelling yields simple estimators that are easy to compute and analyse, even if the sampling of the longitudinal responses interacts with the response level. An incorrect model for the sampling probabilities results in biased estimates. Non-representative sampling occurs, for example, if patients with an extreme development (based on extreme values of the response) are called in for additional examinations and measurements. We allow covariate effects to be time-varying or time-constant. Estimates of covariate effects are obtained by solving martingale equations locally for the cumulative regression functions. Using Aalen's additive model for the sampling probabilities, we obtain simple expressions for the estimators and their asymptotic variances. The asymptotic distributions for the estimators of the non-parametric components as well as the parametric components of the model are derived drawing on general martingale results. Two applications are presented. We consider the growth of cystic fibrosis patients and the prothrombin index for liver cirrhosis patients. The conclusion about the growth of the cystic fibrosis patients is not altered when adjusting for a possible non-representativeness in the sampling, whereas we reach substantively different conclusions about the treatment effect for the liver cirrhosis patients.

5.
We propose a new weighting (WT) method to handle missing categorical outcomes in longitudinal data analysis using generalized estimating equations (GEE). The proposed WT provides a valid GEE estimator when the data are missing at random (MAR), and has more stable weights and shows an advantage in efficiency compared to the inverse probability weighting method in the presence of small observation probabilities. The WT estimator is similar to the stabilized weighting (SWT) estimator under mild conditions, but it is more stable and efficient than SWT when the associations of the outcome with the observation probabilities and the covariate are strong.
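For orientation, the sketch below contrasts raw inverse-probability weights with stabilized weights under small observation probabilities; it is not the authors' proposed WT estimator, only the two standard schemes it is compared against, and all quantities are simulated placeholders.

```python
import numpy as np

# Simulated placeholders contrasting raw inverse-probability weights with
# stabilized weights; NOT the authors' new WT estimator.

rng = np.random.default_rng(0)
obs_prob = rng.uniform(0.05, 0.9, size=200)   # fitted P(observed | history)
observed = rng.random(200) < obs_prob         # missingness indicators

ipw = observed / obs_prob                     # raw inverse-probability weights
stabilized = ipw * observed.mean()            # numerator = marginal P(observed)

# Small observation probabilities inflate the raw weights far more than the
# stabilized ones, which is the instability the abstract refers to.
print(round(ipw.max(), 2), round(stabilized.max(), 2))
```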

6.
A. Galbete & J. A. Moler, Statistics, 2016, 50(2): 418–434
In a randomized clinical trial, response-adaptive randomization procedures use the information gathered, including the previous patients' responses, to allocate the next patient. In this setting, we consider randomization-based inference. We provide an algorithm to obtain exact p-values for statistical tests that compare two treatments with dichotomous responses. This algorithm can be applied to a family of response-adaptive randomization procedures that share the following property: the distribution of the allocation rule depends only on the imbalance between treatments and on the imbalance between successes for treatments 1 and 2 in the previous step. This family includes several well-known response-adaptive randomization procedures. We study a randomization test to contrast the null hypothesis of equivalence of treatments and we show that this test has a performance similar to that of its parametric counterpart. In addition, we study the effect of a covariate on the inferential process. First, we obtain a parametric test, constructed assuming a logit model that relates responses to treatments and covariate levels, and we give conditions that guarantee its asymptotic normality. Finally, we show that the randomization test, which is free of model specification, performs as well as the parametric test that takes the covariate into account.
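A Monte Carlo sketch of the randomization-test idea for a response-adaptive rule: the observed response sequence is held fixed and the adaptive allocation is re-run to build a null reference distribution. The urn-style allocation rule, the placeholder observed statistic, and the use of simulation rather than the paper's exact enumeration algorithm are all assumptions made for illustration.

```python
import numpy as np

def simulate_allocation(responses, rng):
    """Re-allocate patients adaptively and return the success-rate difference."""
    n1 = n2 = s1 = s2 = 0
    for y in responses:
        p1 = (1 + s1) / (2 + s1 + s2)          # favour the arm with more successes
        if rng.random() < p1:
            n1, s1 = n1 + 1, s1 + y
        else:
            n2, s2 = n2 + 1, s2 + y
    if n1 == 0 or n2 == 0:
        return 0.0
    return s1 / n1 - s2 / n2

rng = np.random.default_rng(1)
responses = rng.integers(0, 2, size=60)        # dichotomous responses, held fixed
observed_stat = 0.25                           # placeholder trial statistic
null_stats = [simulate_allocation(responses, rng) for _ in range(5000)]
p_value = float(np.mean(np.abs(null_stats) >= abs(observed_stat)))
print(p_value)
```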

7.
We consider additive hazards regression analysis that utilises auxiliary covariate information to improve the efficiency of statistical inference when the primary covariate is ascertained only for a randomly selected subsample. We construct a martingale-based estimating equation for the regression parameter and establish the asymptotic consistency and normality of the resultant estimator. A simulation study shows that our proposed method can improve efficiency compared with the estimator that discards the auxiliary covariate information. A real example is also analysed as an illustration.

8.
Sequential analyses in clinical trials have ethical and economic advantages over fixed sample size methods. The sequential probability ratio test (SPRT) is a hypothesis testing procedure which evaluates data as it is collected. The original SPRT was developed by Wald for one-parameter families of distributions and later extended by Bartlett to handle the case of nuisance parameters. However, Bartlett's SPRT requires independent and identically distributed observations. In this paper we show that Bartlett's SPRT can be applied to generalized linear model (GLM) contexts. Then we propose an SPRT analysis methodology for a Poisson generalized linear mixed model (GLMM) that is suitable for our application to the design of a multicenter randomized clinical trial that compares two preventive treatments for surgical site infections. We validate the methodology with a simulation study that includes a comparison to Neyman–Pearson and Bayesian fixed sample size test designs and the Wald SPRT.
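A minimal sketch of the classical one-parameter Wald SPRT for a Poisson rate, with the usual log boundaries log((1-beta)/alpha) and log(beta/(1-alpha)); Bartlett's nuisance-parameter extension and the Poisson GLMM SPRT proposed in the paper are not reproduced, and the hypothesised rates and data are arbitrary.

```python
import math

def poisson_sprt(counts, l0=1.0, l1=2.0, alpha=0.05, beta=0.2):
    """Classical Wald SPRT for H0: lambda = l0 versus H1: lambda = l1."""
    upper = math.log((1 - beta) / alpha)      # cross above -> accept H1
    lower = math.log(beta / (1 - alpha))      # cross below -> accept H0
    llr = 0.0
    for i, y in enumerate(counts, start=1):
        # Poisson log-likelihood-ratio contribution of one observation.
        llr += y * math.log(l1 / l0) - (l1 - l0)
        if llr >= upper:
            return "accept H1", i
        if llr <= lower:
            return "accept H0", i
    return "continue sampling", len(counts)

print(poisson_sprt([3, 2, 4, 1, 3, 5]))
```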

9.
Abstract. In a case-cohort design a random sample from the study cohort, referred to as a subcohort, and all the cases outside the subcohort are selected for collecting extra covariate data. The union of the selected subcohort and all cases is referred to as the case-cohort set. Such a design is generally employed when the collection of information on an extra covariate for the study cohort is expensive. An advantage of the case-cohort design over the more traditional case-control and nested case-control designs is that it provides a set of controls which can be used for multiple end-points, in which case there is information on some covariates and event follow-up for the whole study cohort. Here, we propose a Bayesian approach to analyse such a case-cohort design as a cohort design with incomplete data on the extra covariate. We construct likelihood expressions when multiple end-points are of interest simultaneously and propose a Bayesian data augmentation method to estimate the model parameters. A simulation study is carried out to illustrate the method and the results are compared with the complete cohort analysis.

10.
K. Fischer & Chr. Thiele, Statistics, 2013, 47(2): 281–289
Linear discriminant rules for two symmetrical distributions, which only need the first and second moments of these distributions, are presented. The rules are based on Zhezhel's idea of using the most unfavourable probabilities of misclassification as an optimality criterion. A rule is also considered which deals with distributions differing in a location and scale parameter.
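A sketch of a linear rule built only from first and second moments (group means and a common covariance matrix); it shows the familiar moment-based linear discriminant direction for orientation and does not reproduce Zhezhel's most-unfavourable-misclassification-probability criterion.

```python
import numpy as np

def linear_rule(mu1, mu2, sigma):
    """Return (w, c): classify x into group 1 if w @ x > c."""
    w = np.linalg.solve(sigma, mu1 - mu2)      # discriminant direction
    c = 0.5 * w @ (mu1 + mu2)                  # midpoint threshold
    return w, c

mu1, mu2 = np.array([1.0, 0.0]), np.array([-1.0, 0.5])
sigma = np.array([[1.0, 0.3], [0.3, 2.0]])
w, c = linear_rule(mu1, mu2, sigma)
x_new = np.array([0.8, -0.2])
print("group 1" if w @ x_new > c else "group 2")
```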

11.
Summary. Sparse clustered data arise in finely stratified genetic and epidemiologic studies and pose at least two challenges to inference. First, it is difficult to model and interpret the full joint probability of dependent discrete data, which limits the utility of full likelihood methods. Second, standard methods for clustered data, such as pairwise likelihood and the generalized estimating function approach, are unsuitable when the data are sparse owing to the presence of many nuisance parameters. We present a composite conditional likelihood for use with sparse clustered data that provides valid inferences about covariate effects on both the marginal response probabilities and the intracluster pairwise association. Our primary focus is on sparse clustered binary data, in which case the method proposed utilizes doubly discordant quadruplets drawn from each stratum to conduct inference about the intracluster pairwise odds ratios.

12.
Abstract. A simple and standard approach for analysing multistate model data is to model all transition intensities and then compute a summary measure, such as the transition probabilities, based on this. This approach is relatively simple to implement, but it is difficult to see what the covariate effects are on the scale of interest. In this paper, we consider an alternative approach that directly models the covariate effects on transition probabilities in multistate models. Our new approach is based on binomial modelling and inverse probability of censoring weighting techniques and is very simple to implement with standard software. We show how to fit flexible regression models with possibly time-varying covariate effects.
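A hedged sketch of the direct-binomial/IPCW idea: being in a given state by time t is treated as a weighted binary outcome, with weights 1/G(t) for subjects still under observation at t. The crude empirical censoring estimate, the simulated data, and the use of scikit-learn's logistic regression are illustrative choices, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n, t = 300, 2.0
x = rng.normal(size=(n, 1))                     # one covariate
event_time = rng.exponential(3.0, size=n)       # time of entering the state
cens_time = rng.exponential(5.0, size=n)        # independent censoring time

under_obs = cens_time >= t                      # still under observation at t
G_t = under_obs.mean()                          # crude estimate of P(C >= t)
weights = under_obs / G_t                       # IPCW weights (0 if censored)

y = (event_time <= t).astype(int)               # in the state by time t?
keep = weights > 0                              # only uncensored subjects contribute
model = LogisticRegression().fit(x[keep], y[keep], sample_weight=weights[keep])
print(model.coef_, model.intercept_)
```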

13.
This article considers misclassification of categorical covariates in the context of regression analysis; if unaccounted for, such errors usually result in mis-estimation of model parameters. In the presence of additional covariates, we exploit the fact that explicitly modelling non-differential misclassification with respect to the response leads to a mixture regression representation. Under the framework of mixture of experts, we enable the reclassification probabilities to vary with other covariates, a situation commonly caused by misclassification that is differential on certain covariates and/or by dependence between the misclassified and additional covariates. Using Bayesian inference, the mixture approach combines learning from data with external information on the magnitude of errors when it is available. In addition to proving the theoretical identifiability of the mixture of experts approach, we study the amount of efficiency loss resulting from covariate misclassification and the usefulness of external information in mitigating such loss. The method is applied to adjust for misclassification on self-reported cocaine use in the Longitudinal Studies of HIV-Associated Lung Infections and Complications.

14.
In the presence of covariate information, the proportional hazards model is one of the most popular models. In this paper, in a Bayesian nonparametric framework, we use a Markov (Lévy-driven) process to model the baseline hazard rate. Previous Bayesian nonparametric models have been based on neutral to the right processes, which have a number of drawbacks, such as discreteness of the cumulative hazard function. We allow the covariates to be time-dependent functions and develop a full posterior analysis via substitution sampling. A detailed illustration is presented.

15.
The re-randomization test has been considered a robust alternative to traditional population-model-based methods for analysing randomized clinical trials. This is especially so when the trials are randomized according to minimization, a popular covariate-adaptive randomization method for ensuring balance among prognostic factors. Among the various re-randomization tests, the fixed-entry-order re-randomization test is advocated as an effective strategy when a temporal trend is suspected. Yet when minimization is applied to trials with unequal allocation, the fixed-entry-order re-randomization test is biased and its power is therefore compromised. We find that the bias is due to non-uniform re-allocation probabilities incurred by the re-randomization in this case. We therefore propose a weighted fixed-entry-order re-randomization test to overcome the bias. The performance of the new test was investigated in simulation studies that mimic the settings of a real clinical trial. The weighted re-randomization test was found to work well in the scenarios investigated, including the presence of a strong temporal trend.
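The weighting idea can be shown generically: each re-randomized replicate contributes to the p-value with a weight, so replicates that the scheme over-produces can be down-weighted. How the weights should be chosen to offset the non-uniform re-allocation probabilities under minimization is the paper's contribution and is not reproduced here; the weights and statistics below are placeholders.

```python
import numpy as np

def weighted_rerandomization_pvalue(t_obs, t_null, weights):
    """Weighted two-sided Monte Carlo p-value over re-randomized replicates."""
    t_null, weights = np.asarray(t_null), np.asarray(weights)
    exceed = np.abs(t_null) >= abs(t_obs)
    return float(np.sum(weights * exceed) / np.sum(weights))

rng = np.random.default_rng(3)
t_null = rng.normal(size=10_000)                 # statistics from re-randomizations
weights = rng.uniform(0.5, 1.5, size=10_000)     # placeholder replicate weights
print(weighted_rerandomization_pvalue(1.9, t_null, weights))
```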

16.
In an experiment to compare K (≥ 2) treatments, suppose that eligible subjects arrive at an experimental site sequentially and must be treated immediately. In this paper, we assume that the size of the experiment cannot be predetermined, and we propose and analyze a class of treatment assignment rules which offer compromises between the complete randomization and perfect balance schemes. A special case of these assignment rules is thoroughly investigated and is featured in the numerical computations. For practical use, a method of implementation of this special rule is provided.
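As an example of the kind of compromise rule being studied, the sketch below implements a biased-coin-style assignment (in the spirit of Efron's design, extended to K arms) that favours the currently under-represented arm with a fixed probability; it illustrates the general class and is not the specific rule proposed in the paper.

```python
import random

def biased_coin_assign(counts, p_favour=0.75, rng=random):
    """Assign the next subject: with probability p_favour pick uniformly among
    the arms with minimal current count, otherwise pick uniformly among all."""
    k = len(counts)
    min_arms = [i for i, c in enumerate(counts) if c == min(counts)]
    if len(min_arms) < k and rng.random() < p_favour:
        return rng.choice(min_arms)              # push toward balance
    return rng.randrange(k)                      # complete-randomization step

counts = [0, 0, 0]
for _ in range(30):
    counts[biased_coin_assign(counts)] += 1
print(counts)
```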

17.
The problem of comparing mean responses for several treatments applied to a common population is considered. The analysis of covariance (ANCOVA) is frequently used to take advantage of covariate information in this setting, but in many cases ANCOVA's assumption of parallel regression functions precludes its use. In this paper, an alternative method is developed which does not make this assumption.

18.
Two-phase study designs can reduce cost and other practical burdens associated with large-scale epidemiologic studies by limiting ascertainment of expensive covariates to a smaller but informative sub-sample (phase-II) of the main study (phase-I). During the analysis of such studies, however, subjects who are selected at phase-I but not at phase-II remain informative, as they may have partial covariate information. A variety of semi-parametric methods now exist for incorporating such data from phase-I subjects when the covariate information can be summarized into a finite number of strata. In this article, we consider extending the pseudo-score approach proposed by Chatterjee et al. (J Am Stat Assoc 98:158–168, 2003) using a kernel smoothing approach to incorporate information on continuous phase-I covariates. Practical issues and algorithms for implementing the methods using existing software are discussed. A sandwich-type variance estimator based on the influence function representation of the pseudo-score function is proposed. The finite-sample performance of the methods is studied using simulated data. The advantage of the proposed smoothing approach over alternative methods that use discretized phase-I covariate information is illustrated using two-phase data simulated within the National Wilms Tumor Study (NWTS).

19.
In the problem of parametric statistical inference with a finite parameter space, we propose some simple rules for defining posterior upper and lower probabilities directly from the observed likelihood function, without using any prior information. The rules satisfy the likelihood principle and a basic consistency principle ('avoiding sure loss'), they produce vacuous inferences when the likelihood function is constant, and they have other symmetry, monotonicity and continuity properties. One of the rules also satisfies fundamental frequentist principles. The rules can be used to eliminate nuisance parameters, to interpret the likelihood function and to use it in making decisions. To compare the rules, they are applied to the problem of sampling from a finite population. Our results indicate that there are objective statistical methods which can reconcile three general approaches to statistical inference: likelihood inference, coherent inference and frequentist inference.
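One natural rule of this general type treats the normalized likelihood as a possibility measure: the upper probability of a hypothesis is its maximum normalized likelihood and the lower probability is one minus the upper probability of the complement. Whether this matches the paper's specific rules is an assumption; the sketch below only shows how such bounds behave, including the vacuous (0, 1) bounds under a constant likelihood.

```python
def upper_lower(likelihood, subset):
    """likelihood: dict {theta: L(theta)} over a finite parameter space;
    subset: the hypothesis A as a set of parameter values."""
    m = max(likelihood.values())
    norm = {t: v / m for t, v in likelihood.items()}              # normalized likelihood
    upper = max((norm[t] for t in subset), default=0.0)
    lower = 1.0 - max((norm[t] for t in norm if t not in subset), default=0.0)
    return lower, upper

# Binomial likelihood, 3 successes in 10 trials, over a small grid of p.
lik = {p: p**3 * (1 - p)**7 for p in (0.1, 0.3, 0.5, 0.7, 0.9)}
print(upper_lower(lik, subset={0.1, 0.3}))       # bounds for "p <= 0.3"
# A constant likelihood would give the vacuous bounds (0, 1), as the abstract notes.
```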

20.
Covariate-informed product partition models incorporate the intuitively appealing notion that individuals or units with similar covariate values a priori have a higher probability of co-clustering than those with dissimilar covariate values. These methods have been shown to perform well if the number of covariates is relatively small. However, as the number of covariates increases, their influence on partition probabilities overwhelms any information the response may provide about clustering, and they often encourage partitions with either a large number of singleton clusters or one large cluster, resulting in poor model fit and poor out-of-sample prediction. The same phenomenon is observed in Bayesian nonparametric regression methods that induce a conditional distribution for the response given covariates through a joint model. In light of this, we propose two methods that calibrate the covariate-dependent partition model by capping the influence that covariates have on partition probabilities. We demonstrate the new methods' utility using simulation and two publicly available datasets.
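A toy illustration of the capping idea: the (log) similarity between a candidate point and a cluster's covariate values is truncated so it can never dominate the rest of the partition weight. The Gaussian-style similarity and the cap value are illustrative assumptions, not the paper's calibration.

```python
def capped_log_similarity(x_cluster, x_new, cap=2.0):
    """Covariate contribution to a cluster's log co-clustering weight,
    truncated so it never exceeds `cap` in absolute value."""
    mean = sum(x_cluster) / len(x_cluster)
    log_sim = -0.5 * (x_new - mean) ** 2       # Gaussian-kernel style term
    return max(-cap, min(cap, log_sim))

print(capped_log_similarity([0.1, 0.2, 0.0], x_new=0.15))  # close point: about 0
print(capped_log_similarity([0.1, 0.2, 0.0], x_new=5.0))   # far point: capped at -2.0
```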
