Similar Documents
1.
Covariate-informed product partition models incorporate the intuitively appealing notion that individuals or units with similar covariate values a priori have a higher probability of co-clustering than those with dissimilar covariate values. These methods have been shown to perform well if the number of covariates is relatively small. However, as the number of covariates increases, their influence on partition probabilities overwhelms any information the response may provide about clustering, and often encourages partitions with either a large number of singleton clusters or one large cluster, resulting in poor model fit and poor out-of-sample prediction. This same phenomenon is observed in Bayesian nonparametric regression methods that induce a conditional distribution for the response given covariates through a joint model. In light of this, we propose two methods that calibrate the covariate-dependent partition model by capping the influence that covariates have on partition probabilities. We demonstrate the new methods' utility using simulation and two publicly available datasets.
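The capping idea can be illustrated with a small sketch (hypothetical Gaussian kernel and cap; the paper's actual similarity functions and calibration differ): the covariate contribution to a cluster's cohesion is bounded so it cannot overwhelm the response.

```python
import numpy as np

def covariate_cohesion(X_cluster, cap=10.0):
    # Product of pairwise Gaussian similarities among the cluster's covariate
    # vectors; with many covariates this product explodes or vanishes, so its
    # multiplicative influence on the partition prior is capped both ways.
    X = np.atleast_2d(np.asarray(X_cluster, dtype=float))
    sim = 1.0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            sim *= np.exp(-0.5 * np.sum((X[i] - X[j]) ** 2))
    return float(min(max(sim, 1.0 / cap), cap))
```

Identical covariates give the neutral value 1.0, while wildly dissimilar covariates are clamped at 1/cap rather than driving the cohesion to zero.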

2.
The response adaptive randomization (RAR) method is used to increase the number of patients assigned to more efficacious treatment arms in clinical trials. In many trials evaluating longitudinal patient outcomes, RAR based only on the final measurement may provide little benefit because of its delayed initiation. We propose a Bayesian RAR method that improves RAR performance by accounting for longitudinal patient outcomes (longitudinal RAR). We use a Bayesian linear mixed effects model to analyze longitudinal continuous patient outcomes when calculating the patient allocation probability. In addition, we aim to mitigate the loss of statistical power caused by large patient allocation imbalances by embedding adjusters into the patient allocation probability calculation. Using extensive simulations, we compared the operating characteristics of our proposed longitudinal RAR method with those of the RAR method based only on the final measurement and with an equal randomization method. Simulation results showed that our proposed longitudinal RAR method assigned more patients to the presumably superior treatment arm than the other two methods. In addition, the embedded adjuster effectively prevented extreme patient allocation imbalances. However, our proposed method may not function adequately when the treatment effect difference is moderate or smaller, and it still needs to be modified to deal with unexpectedly large departures from the presumed longitudinal data model. Copyright © 2015 John Wiley & Sons, Ltd.
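A common form of such an allocation rule, with an adjuster against extreme imbalance, looks roughly like this (a hypothetical sketch, not the paper's exact formula; `prob_best` would come from the posterior of the longitudinal model):

```python
def rar_allocation(prob_best, c=0.5, floor=0.05):
    # Randomize in proportion to P(arm is best)**c; the tempering exponent c
    # and the floor both act as "adjusters" that keep every arm's allocation
    # probability away from zero and limit imbalance.
    raw = [max(p, 0.0) ** c for p in prob_best]
    total = sum(raw)
    probs = [max(r / total, floor) for r in raw]
    s = sum(probs)
    return [p / s for p in probs]
```

Arms with higher posterior probability of being best receive higher allocation probabilities, but no arm is ever starved entirely.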

3.
Odile Pons, Statistics (2013), 47(4): 273–293
A semi-Markov model with covariates is proposed for a multi-state process with a finite number of states such that the transition probabilities between the states and the distribution functions of the duration times between the occurrence of two states depend on a discrete covariate. The hazard rates for the time elapsed between two successive states depend on the covariate through a proportional hazards model involving a set of regression parameters, while the transition probabilities depend on the covariate in an unspecified way. We propose estimators for these parameters and for the cumulative hazard functions of the sojourn times. A difficulty comes from the fact that when a sojourn time in a state is right-censored, the next state is unknown. We prove that our estimators are consistent and asymptotically Gaussian under the model constraints.

4.
I review the use of auxiliary variables in capture-recapture models for estimation of demographic parameters (e.g. capture probability, population size, survival probability, and recruitment, emigration and immigration numbers). I focus on what has been done in current research and what still needs to be done. Typically in the literature, covariate modelling has made capture and survival probabilities functions of covariates, but there are good reasons also to make other parameters functions of covariates as well. The types of covariates considered include environmental covariates that may vary by occasion but are constant over animals, and individual animal covariates that are usually assumed constant over time. I also discuss the difficulties of using time-dependent individual animal covariates and some possible solutions. Covariates are usually assumed to be measured without error, and that may not be realistic. For closed populations, one approach to modelling heterogeneity in capture probabilities uses observable individual covariates and is thus related to the primary purpose of this paper. The now standard Huggins-Alho approach conditions on the captured animals and then uses a generalized Horvitz-Thompson estimator to estimate population size. This approach has the advantage of simplicity in that one does not have to specify a distribution for the covariates, and the disadvantage is that it does not use the full likelihood to estimate population size. Alternatively, one could specify a distribution for the covariates and implement a full likelihood approach to inference to estimate the capture function, the covariate probability distribution, and the population size. The general Jolly-Seber open model enables one to estimate capture probability, population sizes, survival rates, and birth numbers.
Much of the focus on modelling covariates in program MARK has been for survival and capture probability in the Cormack-Jolly-Seber model and its generalizations (including tag-return models). These models condition on the number of animals marked and released. A related, but distinct, topic is radio telemetry survival modelling that typically uses a modified Kaplan-Meier method and Cox proportional hazards model for auxiliary variables. Recently there has been an emphasis on integration of recruitment in the likelihood, and research on how to implement covariate modelling for recruitment and perhaps population size is needed. The combined open and closed 'robust' design model can also benefit from covariate modelling and some important options have already been implemented into MARK. Many models are usually fitted to one data set. This has necessitated development of model selection criteria based on the AIC (Akaike Information Criterion) and the alternative of averaging over reasonable models. The special problems of estimating over-dispersion when covariates are included in the model and then adjusting for over-dispersion in model selection could benefit from further research.
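The generalized Horvitz-Thompson step of the Huggins-Alho approach is simple to state in code (a minimal sketch; the per-animal capture probabilities would come from a fitted covariate model, here they are just passed in):

```python
import numpy as np

def horvitz_thompson_N(capture_probs):
    # N-hat = sum over captured animals of 1 / p_i, where p_i is the
    # estimated probability that animal i was captured at least once.
    p = np.asarray(capture_probs, dtype=float)
    return float(np.sum(1.0 / p))
```

For example, 50 captured animals each with estimated capture probability 0.5 yield a population size estimate of 100.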

6.
We investigate multiple features of response adaptive randomization (RAR) in the context of a multiple arm randomized trial with control, where the primary goal is the identification of the best arm for use in a broader patient population. We maintain constant control allocation and vary the length of time until RAR is started, interim frequency, the underlying quantity used to calculate the randomization probabilities, and a threshold resulting in temporary arm dropping. We evaluate the designs on five metrics measuring benefit to the internal trial population, the future external population, and statistical estimation. Our results indicate these features have minimal interaction within the space explored, with preference for earlier activation of RAR, more frequent interim analyses, randomizing in proportion to the probability each arm is the best, and aggressive thresholding for temporarily dropping arms. The results illustrate useful principles for maximizing the benefit of RAR in practice.

7.
With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk with the assumption that the censoring distribution and the covariates are independent. Covariate‐dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate‐dependent censoring. We consider a covariate‐adjusted weight function by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate‐adjusted weight estimator is basically unbiased when the censoring time depends on the covariates, and the covariate‐adjusted weight approach works well for the variance estimator as well. We illustrate our methods with bone marrow transplant data from the Center for International Blood and Marrow Transplant Research. Here, cancer relapse and death in complete remission are two competing risks.

8.
In this paper, the dependence of transition probabilities on covariates and a test procedure for covariate-dependent Markov models are examined. The nonparametric test for the role of waiting time proposed by Jones and Crowley [M. Jones, J. Crowley, Nonparametric tests of the Markov model for survival data, Biometrika 79(3) (1992) 513–522] is extended here to transitions and reverse transitions. The limitation of the Jones and Crowley method is that it does not take account of other covariates that might be associated with the transition probabilities. A simple test procedure is proposed that can be employed for testing: (i) the significance of association between covariates and transition probabilities, and (ii) the impact of waiting time on the transition probabilities. The procedure is illustrated using panel data on hospitalization of the elderly population in the USA from the Health and Retirement Survey (HRS).
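One standard way to let transition probabilities depend on covariates (a sketch of the general idea, not the authors' exact specification) is a multinomial-logit link for each row of the transition matrix:

```python
import numpy as np

def transition_row(x, beta):
    # beta: (n_states, n_covariates) coefficients for one origin state, with
    # the first destination state as the reference (its row forced to zero);
    # returns a valid probability vector over destination states.
    eta = np.asarray(beta, dtype=float) @ np.asarray(x, dtype=float)
    eta[0] = 0.0                  # reference destination
    e = np.exp(eta - eta.max())   # numerically stabilized softmax
    return e / e.sum()
```

Testing whether a covariate matters then amounts to testing whether its column of `beta` is zero.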

9.
In randomized clinical trials with time‐to‐event outcomes, the hazard ratio is commonly used to quantify the treatment effect relative to a control. The Cox regression model is commonly used to adjust for relevant covariates to obtain more accurate estimates of the hazard ratio between treatment groups. However, it is well known that the treatment hazard ratio based on a covariate‐adjusted Cox regression model is conditional on the specific covariates and differs from the unconditional hazard ratio that is an average across the population. Therefore, covariate‐adjusted Cox models cannot be used when the unconditional inference is desired. In addition, the covariate‐adjusted Cox model requires the relatively strong assumption of proportional hazards for each covariate. To overcome these challenges, a nonparametric randomization‐based analysis of covariance method was proposed to estimate the covariate‐adjusted hazard ratios for multivariate time‐to‐event outcomes. However, empirical evaluations of the performance (power and type I error rate) of the method have not been studied. Although the method is derived for multivariate situations, for most registration trials, the primary endpoint is a univariate outcome. Therefore, this approach is applied to univariate outcomes, and performance is evaluated through a simulation study in this paper. Stratified analysis is also investigated. As an illustration of the method, we also apply the covariate‐adjusted and unadjusted analyses to an oncology trial. Copyright © 2015 John Wiley & Sons, Ltd.

10.
Adjustment for covariates is a time-honored tool in statistical analysis and is often implemented by including the covariates that one intends to adjust as additional predictors in a model. This adjustment often does not work well when the underlying model is misspecified. We consider here the situation where we compare a response between two groups. This response may depend on a covariate for which the distribution differs between the two groups one intends to compare. This creates the potential that observed differences are due to differences in covariate levels rather than "genuine" population differences that cannot be explained by covariate differences. We propose a bootstrap-based adjustment method. Bootstrap weights are constructed with the aim of aligning bootstrap-weighted empirical distributions of the covariate between the two groups. Generally, the proposed weighted-bootstrap algorithm can be used to align or match the values of an explanatory variable as closely as desired to those of a given target distribution. We illustrate the proposed bootstrap adjustment method in simulations and in the analysis of data on the fecundity of historical cohorts of French-Canadian women.
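A minimal version of the covariate-alignment idea (histogram reweighting on a single covariate; the paper's construction is more general) can be sketched as:

```python
import numpy as np

def alignment_weights(x_source, x_target, bins=10):
    # Weight each source unit by (target count / source count) in its
    # histogram bin, so weighted resampling of the source group matches
    # the target group's covariate distribution.
    edges = np.histogram_bin_edges(np.concatenate([x_source, x_target]), bins=bins)
    src, _ = np.histogram(x_source, bins=edges)
    tgt, _ = np.histogram(x_target, bins=edges)
    idx = np.clip(np.digitize(x_source, edges) - 1, 0, bins - 1)
    w = tgt[idx] / np.maximum(src[idx], 1)
    return w / w.sum()
```

Bootstrap samples would then be drawn with `np.random.choice(len(x_source), size=n, p=w)`. When the two covariate distributions already coincide, the weights reduce to the uniform ones of the ordinary bootstrap.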

11.
For analyzing incidence data on diabetes and health problems, the bivariate geometric probability distribution is a natural choice but has remained largely unexplored due to a lack of models linking covariates with the probabilities of bivariate incidence of correlated outcomes. In this paper, bivariate geometric models are proposed for two correlated incidence outcomes. Extended generalized linear models are developed to take into account covariate dependence of the bivariate probabilities of correlated incidence outcomes for diabetes and heart disease in the elderly population. The estimation and test procedures are illustrated using the Health and Retirement Study data. Two models are presented in this paper, one based on a conditional-marginal approach and the other based on the joint probability distribution with an association parameter. The joint model with an association parameter appears to be a very good choice for analyzing the covariate dependence of the joint incidence of diabetes and heart disease. Bootstrapping is performed to measure the accuracy of estimates, and the results indicate very small bias.

12.
Motivated by a potential-outcomes perspective, the idea of principal stratification has been widely recognized for its relevance in settings susceptible to posttreatment selection bias, such as randomized clinical trials where treatment received can differ from treatment assigned. In one such setting, we address subtleties involved in inference for causal effects when using a key covariate to predict membership in latent principal strata. We show that when treatment received can differ from treatment assigned in both study arms, incorporating a stratum-predictive covariate can cause estimates of the "complier average causal effect" (CACE) to be derived from observations in the two treatment arms with different covariate distributions. Adopting a Bayesian perspective and using Markov chain Monte Carlo for computation, we develop posterior checks that characterize the extent to which incorporating the pretreatment covariate endangers estimation of the CACE. We apply the method to analyze a clinical trial comparing two treatments for jaw fractures, in which the study protocol allowed surgeons to overrule both possible randomized treatment assignments based on their clinical judgment and the data contained a key covariate (injury severity) predictive of treatment received.

13.
We propose methods for Bayesian inference for missing covariate data with a novel class of semi-parametric survival models with a cure fraction. We allow the missing covariates to be either categorical or continuous and specify a parametric distribution for the covariates that is written as a sequence of one-dimensional conditional distributions. We assume throughout that the missing covariates are missing at random (MAR). We propose an informative class of joint prior distributions for the regression coefficients and the parameters arising from the covariate distributions. The proposed class of priors is shown to be useful in recovering information on the missing covariates, especially in situations where the missing data fraction is large. Properties of the proposed prior and resulting posterior distributions are examined. Also, model checking techniques are proposed for sensitivity analyses and for checking the goodness of fit of a particular model. Specifically, we extend the Conditional Predictive Ordinate (CPO) statistic to assess goodness of fit in the presence of missing covariate data. Computational techniques using the Gibbs sampler are implemented. A real data set involving a melanoma cancer clinical trial is examined to demonstrate the methodology.
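The CPO statistic being extended has a convenient Monte Carlo form (a sketch for the complete-data case; the paper's extension handles missing covariates): CPO_i = 1 / E_post[1 / f(y_i | theta)], estimated from posterior draws.

```python
import numpy as np

def cpo(lik_draws):
    # Harmonic-mean identity: CPO for observation i estimated from the
    # likelihood of that observation at each posterior (Gibbs) draw of theta.
    lik = np.asarray(lik_draws, dtype=float)
    return float(1.0 / np.mean(1.0 / lik))
```

Small CPO values flag observations that the fitted model predicts poorly, which is what makes the statistic useful for goodness-of-fit checking.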

14.
We consider parametric regression problems with some covariates missing at random. It is shown that the regression parameter remains identifiable under natural conditions. When the always observed covariates are discrete, we propose a semiparametric maximum likelihood method, which does not require parametric specification of the missing data mechanism or the covariate distribution. The global maximum likelihood estimator (MLE), which maximizes the likelihood over the whole parameter set, is shown to exist under simple conditions. For ease of computation, we also consider a restricted MLE which maximizes the likelihood over covariate distributions supported by the observed values. Under regularity conditions, the two MLEs are asymptotically equivalent and strongly consistent for a class of topologies on the parameter set.

15.
In this contribution we aim at improving ordinal variable selection in the context of causal models for credit risk estimation. In this regard, we propose an approach that provides a formal inferential tool to compare the explanatory power of each covariate and, therefore, to select an effective model for classification purposes. Our proposed model is Bayesian nonparametric and thus keeps the amount of model specification to a minimum. We consider the case in which information from the covariates is at the ordinal level. A notable instance of this is the situation in which ordinal variables result from rankings of companies that are to be evaluated according to different macro- and microeconomic aspects, leading to ordinal covariates that correspond to various ratings, which entail different magnitudes of the probability of default. For each given covariate, we suggest partitioning the statistical units into as many groups as the number of observed levels of the covariate. We then assume individual defaults to be homogeneous within each group and heterogeneous across groups. Our aim is to compare, and therefore select, the partition structures resulting from the consideration of different explanatory covariates. The metric we choose for variable comparison is the posterior probability of each partition. The application of our proposal to a European credit risk database shows that it performs well, leading to a coherent and clear method for averaging the estimated default probabilities across variables.

16.
A model for analyzing release-recapture data is presented that generalizes a previously existing individual covariate model to include multiple groups of animals. As in the previous model, the generalized version includes selection parameters that relate individual covariates to survival potential. Significance of the selection parameters was equivalent to significance of the individual covariates. Simulation studies were conducted to investigate three inferential properties with respect to the selection parameters: (1) sample size requirements, (2) validity of the likelihood ratio test (LRT) and (3) power of the LRT. When the survival and capture probabilities ranged from 0.5 to 1.0, a total sample size of 300 was necessary to achieve a power of 0.80 at a significance level of 0.1 when testing the significance of the selection parameters. However, only half that (a total of 150) was necessary for the distribution of the maximum likelihood estimators of the selection parameters to approximate their asymptotic distributions. In general, as the survival and capture probabilities decreased, the sample size requirements increased. The validity of the LRT for testing the significance of the selection parameters was confirmed because the LRT statistic was distributed as theoretically expected under the null hypothesis, i.e. like a chi-squared random variable. When the baseline survival model was fully parameterized with population and interval effects, the LRT was also valid in the presence of unaccounted for random variation. The power of the LRT for testing the selection parameters was unaffected by over-parameterization of the baseline survival and capture models. The simulation studies showed that for testing the significance of individual covariates to survival the LRT was remarkably robust to assumption violations.
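The LRT described above reduces to a one-liner; with a single selection parameter the null reference distribution is chi-squared with one degree of freedom, whose survival function has a closed form (a sketch; tests of several selection parameters jointly would use more degrees of freedom):

```python
import math

def lrt_df1(loglik_full, loglik_reduced):
    # Statistic 2 * (l_full - l_reduced); chi-squared(1) p-value via
    # P(X > x) = erfc(sqrt(x / 2)).
    stat = 2.0 * (loglik_full - loglik_reduced)
    return stat, math.erfc(math.sqrt(stat / 2.0))
```

The familiar 5% threshold appears at a statistic of about 3.84.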

18.
We propose a method for estimating parameters in generalized linear models with missing covariates and a non-ignorable missing data mechanism. We use a multinomial model for the missing data indicators and propose a joint distribution for them which can be written as a sequence of one-dimensional conditional distributions, with each one-dimensional conditional distribution consisting of a logistic regression. We allow the covariates to be either categorical or continuous. The joint covariate distribution is also modelled via a sequence of one-dimensional conditional distributions, and the response variable is assumed to be completely observed. We derive the E- and M-steps of the EM algorithm with non-ignorable missing covariate data. For categorical covariates, we derive a closed form expression for the E- and M-steps of the EM algorithm for obtaining the maximum likelihood estimates (MLEs). For continuous covariates, we use a Monte Carlo version of the EM algorithm to obtain the MLEs via the Gibbs sampler. Computational techniques for Gibbs sampling are proposed and implemented. The parametric form of the assumed missing data mechanism itself is not 'testable' from the data, and thus the non-ignorable modelling considered here can be viewed as a sensitivity analysis concerning a more complicated model. Therefore, although a model may have 'passed' the tests for a certain missing data mechanism, this does not mean that we have captured, even approximately, the correct missing data mechanism. Hence, model checking for the missing data mechanism and sensitivity analyses play an important role in this problem and are discussed in detail. Several simulations are given to demonstrate the methodology. In addition, a real data set from a melanoma cancer clinical trial is presented to illustrate the methods proposed.

19.
Current methods of testing the equality of conditional correlations of bivariate data on a third variable of interest (covariate) are limited due to discretizing of the covariate when it is continuous. In this study, we propose a linear model approach for estimation and hypothesis testing of the Pearson correlation coefficient, where the correlation itself can be modeled as a function of continuous covariates. The restricted maximum likelihood method is applied for parameter estimation, and the corrected likelihood ratio test is performed for hypothesis testing. This approach allows for flexible and robust inference and prediction of the conditional correlations based on the linear model. Simulation studies show that the proposed method is statistically more powerful and more flexible in accommodating complex covariate patterns than the existing methods. In addition, we illustrate the approach by analyzing the correlation between the physical component summary and the mental component summary of the MOS SF-36 form across a fair number of covariates in the national survey data.
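The idea of modeling the correlation itself as a function of covariates can be sketched with a link that keeps the fitted value in (-1, 1) (a hypothetical Fisher z-scale parameterization; the paper works within a linear-model/REML framework):

```python
import math

def corr_given_covariates(x, beta):
    # Linear predictor on the Fisher z-scale, mapped back by tanh so the
    # implied correlation is always a valid value in (-1, 1).
    eta = sum(b * xi for b, xi in zip(beta, x))
    return math.tanh(eta)
```

Testing equality of conditional correlations across covariate values then becomes a test on the coefficients `beta`, with no need to discretize a continuous covariate.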

20.
Incomplete covariate data is a common occurrence in many studies in which the outcome is survival time. With generalized linear models, when the missing covariates are categorical, a useful technique for obtaining parameter estimates is the EM by the method of weights proposed in Ibrahim (1990). In this article, we extend the EM by the method of weights to survival outcomes whose distributions may not fall in the class of generalized linear models. This method requires the estimation of the parameters of the distribution of the covariates. We present a clinical trials example with five covariates, four of which have some missing values.
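The E-step of the EM by the method of weights assigns each possible value of a missing categorical covariate a weight proportional to its estimated probability times the outcome likelihood at that value (a minimal sketch; `outcome_lik` is a hypothetical stand-in for the survival-model likelihood):

```python
def em_weights(levels, level_probs, outcome_lik):
    # One weight per possible level of the missing covariate; the M-step then
    # treats the subject as several weighted complete-data records.
    raw = [p * outcome_lik(z) for z, p in zip(levels, level_probs)]
    total = sum(raw)
    return [r / total for r in raw]
```

If the outcome likelihood is flat in the covariate, the weights reduce to the covariate distribution itself; otherwise the outcome pulls weight toward the more likely levels.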


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号