Similar Articles
20 similar articles found (search time: 31 ms)
1.
Several scientific questions are of interest in phase III trials of prophylactic human immunodeficiency virus (HIV) vaccines. In this paper we focus on some issues related to evaluating the direct protective effects of a vaccine in reducing susceptibility, VES, and its effect on reducing infectiousness, VEI. Estimation of VEI generally requires information on contacts between infective and susceptible individuals. By augmenting the primary participants of an HIV vaccine trial with their steady sexual partners, information can be collected that allows estimation of VEI as well as VES. Exposure-to-infection information, however, may be expensive and difficult to collect. A vaccine trial design can include a small validation set with detailed exposure-to-infection data to correct bias in a larger, simpler main study with only coarse exposure data. The large main study, in turn, increases the efficiency of the small validation set. More research into combining different levels of information in vaccine trial design will yield more efficient and less biased estimates of the efficacy measures of interest.
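The two measures can be illustrated with a back-of-the-envelope calculation (hypothetical counts, not the paper's estimators, which account for exposure-to-infection information): VES compares attack rates between the randomized arms, while VEI compares per-partner transmission rates from infected vaccinees versus infected placebo recipients.

```python
def vaccine_efficacy(events_a, at_risk_a, events_b, at_risk_b):
    """One minus the ratio of attack (or transmission) rates, arm A / arm B."""
    return 1.0 - (events_a / at_risk_a) / (events_b / at_risk_b)

# VE_S: infections among primary participants (hypothetical counts)
ve_s = vaccine_efficacy(30, 1000, 60, 1000)

# VE_I: transmissions from infected primary participants to their
# enrolled steady partners (hypothetical counts)
ve_i = vaccine_efficacy(5, 100, 20, 100)
```

Here the hypothetical data give VES = 0.5 and VEI = 0.75; real analyses must adjust for differential exposure between arms, which is what the validation-set design addresses.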

2.
Under the case-cohort design introduced by Prentice (Biometrika 73:1–11, 1986), the covariate histories are ascertained only for the subjects who experience the event of interest (i.e., the cases) during the follow-up period and for a relatively small random sample from the original cohort (i.e., the subcohort). The case-cohort design has been widely used in clinical and epidemiological studies to assess the effects of covariates on failure times. Most statistical methods developed for the case-cohort design use the proportional hazards model, and few methods allow for time-varying regression coefficients. In addition, most methods disregard data from subjects outside of the subcohort, which can result in inefficient inference. Addressing these issues, this paper proposes an estimation procedure for the semiparametric additive hazards model with case-cohort/two-phase sampling data, allowing the covariates of interest to be missing for cases as well as for non-cases. A more flexible form of the additive model is considered that allows the effects of some covariates to be time varying while specifying the effects of others to be constant. An augmented inverse probability weighted estimation procedure is proposed. The proposed method allows utilizing the auxiliary information that correlates with the phase-two covariates to improve efficiency. The asymptotic properties of the proposed estimators are established. An extensive simulation study shows that the augmented inverse probability weighted estimation is more efficient than the widely adopted inverse probability weighted complete-case estimation method. The method is applied to analyze data from a preventive HIV vaccine efficacy trial.
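The bias-correcting role of the sampling weights can be sketched in a toy two-phase setting (all numbers illustrative; the paper's augmented estimator additionally recovers information from subjects outside the phase-two sample):

```python
import math
import random

random.seed(1)

# Phase one: full cohort with covariate x; becoming a case depends on x,
# so the phase-two (complete-case) sample over-represents large x.
cohort = []
for _ in range(20000):
    x = random.gauss(0, 1)
    is_case = random.random() < 1.0 / (1.0 + math.exp(-(x - 2)))
    cohort.append((x, is_case))

pi_subcohort = 0.15  # known subcohort sampling fraction

sum_w = sum_wx = n_sel = sum_x = 0.0
for x, is_case in cohort:
    pi = 1.0 if is_case else pi_subcohort        # cases are sampled w.p. 1
    if is_case or random.random() < pi_subcohort:  # phase-two selection
        n_sel += 1
        sum_x += x
        sum_w += 1.0 / pi
        sum_wx += x / pi

naive_mean = sum_x / n_sel  # biased upward: cases (large x) over-sampled
ipw_mean = sum_wx / sum_w   # Horvitz-Thompson; close to the cohort mean, 0
```

The unweighted complete-case mean is visibly biased, while the inverse-probability-weighted mean recovers the full-cohort target.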

3.
A common objective of cohort studies and clinical trials is to assess time-varying longitudinal continuous biomarkers as correlates of the instantaneous hazard of a study endpoint. We consider the setting where the biomarkers are measured in a designed sub-sample (i.e., case-cohort or two-phase sampling design), as is normative for prevention trials. We address this problem via joint models, with underlying biomarker trajectories characterized by a random effects model and their relationship with instantaneous risk characterized by a Cox model. For estimation and inference we extend the conditional score method of Tsiatis and Davidian (Biometrika 88(2):447–458, 2001) to accommodate the two-phase biomarker sampling design using augmented inverse probability weighting with nonparametric kernel regression. We present theoretical properties of the proposed estimators and finite-sample properties derived through simulations, and illustrate the methods with application to the AIDS Clinical Trials Group 175 antiretroviral therapy trial. We discuss how the methods are useful for evaluating a Prentice surrogate endpoint, mediation, and for generating hypotheses about biological mechanisms of treatment efficacy.

4.
In this article, we formulate a class of semiparametric marginal means models with a mixture of time-varying and time-independent parameters for analyzing panel data. For inference about the regression parameters, an estimation procedure is developed and asymptotic properties of the proposed estimators are established. In addition, some tests are presented for investigating whether or not covariate effects vary with time. The finite-sample behavior of the proposed methods is examined in simulation studies, and the data from an AIDS clinical trial study are used to illustrate the methodology.

5.
The success of a seasonal influenza vaccine efficacy trial depends not only on the design but also on the characteristics of the annual epidemic. In this context, simulation methods are an essential tool for evaluating the performance of study designs under various circumstances. However, traditional methods for simulating time-to-event data are not suitable for simulating influenza vaccine efficacy trials because of the seasonality and heterogeneity of influenza epidemics. Instead, we propose a mathematical model parameterized with historical surveillance data, heterogeneous frailty among the subjects, survey-based heterogeneous numbers of daily contacts, and a mixed vaccine protection mechanism. We illustrate our methodology by generating multiple-trial data similar to a large phase III trial that failed to show additional relative vaccine efficacy of an experimental adjuvanted vaccine compared with the reference vaccine. We show that small departures from the design assumptions, such as a smaller range of strain protection for the experimental vaccine or the chosen endpoint, could lead to smaller probabilities of success in showing significant relative vaccine efficacy. Copyright © 2015 John Wiley & Sons, Ltd.

6.
Multicollinearity appears more frequently in mixture experiments, a special study area of response surface methodology, than in other experimental studies because of the constraints on the components composing the mixture. When mixture experiments are analyzed with logistic regression, a special generalized linear model, multicollinearity causes precision problems in the maximum-likelihood estimates. Effects due to multicollinearity can therefore be reduced to a certain extent by alternative approaches, one of which is to use biased estimators of the coefficients. In this paper, we suggest the logistic ridge regression (RR) estimator for the analysis of mixture experiments with logistic regression in the presence of multicollinearity. For the selection of the biasing parameter, we use fraction-of-design-space plots to evaluate the effect of the logistic RR estimator with respect to the scaled mean squared error of prediction. The suggested graphical approaches are illustrated on the tumor incidence data set.
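A minimal sketch of the shrinkage idea, assuming a hypothetical two-component mixture data set: the components sum to one and are therefore perfectly collinear with the intercept, yet a ridge penalty keeps the logistic coefficient estimates bounded. The gradient-descent fitter below is purely illustrative, not the paper's implementation.

```python
import math

def ridge_logistic_gd(X, y, lam, lr=0.5, steps=4000):
    """Gradient descent on the ridge-penalized logistic negative
    log-likelihood (for simplicity the penalty also shrinks the intercept)."""
    p, n = len(X[0]), len(X)
    beta = [0.0] * p
    for _ in range(steps):
        grad = [lam * b for b in beta]            # gradient of the penalty
        for xi, yi in zip(X, y):
            eta = sum(b * v for b, v in zip(beta, xi))
            mu = 1.0 / (1.0 + math.exp(-eta))     # fitted probability
            for j in range(p):
                grad[j] += (mu - yi) * xi[j]      # log-likelihood gradient
        beta = [b - lr * g / n for b, g in zip(beta, grad)]
    return beta

# Two mixture components that sum to one -- collinear with the intercept
X = [[1.0, x1, 1.0 - x1] for x1 in [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

b_small = ridge_logistic_gd(X, y, lam=0.01)
b_large = ridge_logistic_gd(X, y, lam=10.0)
norm = lambda b: sum(v * v for v in b)
# a larger biasing parameter yields a shorter coefficient vector
```

Increasing the biasing parameter trades variance (inflated here by collinearity and separation) for bias, which is exactly the trade-off the fraction-of-design-space plots are meant to visualize.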

7.
This article generalizes the popular stochastic volatility in mean model to allow for time-varying parameters in the conditional mean. Estimating this extension is nontrivial because the volatility appears in both the conditional mean and the conditional variance, and its coefficient in the former is time-varying. We develop an efficient Markov chain Monte Carlo algorithm based on band and sparse matrix algorithms, rather than the Kalman filter, to estimate this more general variant. The methodology is illustrated with an application to U.S., U.K., and German inflation. The estimation results show substantial time variation in the coefficient associated with the volatility, highlighting the empirical relevance of the proposed extension. Moreover, in a pseudo out-of-sample forecasting exercise, the proposed variant also forecasts better than various standard benchmarks.

8.
We present a multivariate logistic regression model for the joint analysis of longitudinal multiple-source binary data. Such data arise when repeated binary measurements are obtained from two or more sources, with each source providing a measure of the same underlying variable. Because the number of responses on each subject is relatively large, the empirical variance estimator performs poorly and cannot be relied on in this setting. Two methods for obtaining a parsimonious within-subject association structure are considered. An additional complication arises in estimation, since maximum likelihood estimation may not be feasible without making unrealistically strong assumptions about third- and higher-order moments. To circumvent this, we propose a generalized estimating equations approach. Finally, we present an analysis of multiple-informant data obtained longitudinally from a psychiatric interventional trial that motivated the model developed in the paper.

9.
We develop flexible semiparametric time series methods for the estimation of the causal effect of monetary policy on macroeconomic aggregates. Our estimator captures the average causal response to discrete policy interventions in a macrodynamic setting, without the need for assumptions about the process generating macroeconomic outcomes. The proposed estimation strategy, based on propensity score weighting, easily accommodates asymmetric and nonlinear responses. Using this estimator, we show that monetary tightening has clear effects on the yield curve and on economic activity. Monetary accommodation, however, appears to generate less pronounced responses from both. Estimates for recent financial crisis years display a similarly dampened response to monetary accommodation.
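The weighting idea can be sketched in a simulated example (all quantities illustrative, and the propensity score is taken as known rather than estimated from policy-reaction data as in the paper):

```python
import math
import random

random.seed(0)

# Stylized setting: a binary "tightening" intervention T whose probability
# (the propensity score) depends on an observed state x, and an outcome
# moved by both: y = 2*T + x + noise.  True causal effect of T is 2.
n = 5000
rows = []
for _ in range(n):
    x = random.gauss(0, 1)
    e = 1.0 / (1.0 + math.exp(-x))           # propensity P(T = 1 | x)
    t = 1 if random.random() < e else 0
    y = 2.0 * t + x + random.gauss(0, 0.5)
    rows.append((t, y, e))

# Stabilized (Hajek) inverse-propensity-weighted mean of each arm
mean1 = (sum(t * y / e for t, y, e in rows)
         / sum(t / e for t, y, e in rows))
mean0 = (sum((1 - t) * y / (1 - e) for t, y, e in rows)
         / sum((1 - t) / (1 - e) for t, y, e in rows))
ate = mean1 - mean0   # close to the true average causal response, 2
```

A raw difference of arm means would be biased upward here, because the state variable x raises both the intervention probability and the outcome; the weights remove that confounding.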

10.
Motivated by a potential-outcomes perspective, the idea of principal stratification has been widely recognized for its relevance in settings susceptible to posttreatment selection bias, such as randomized clinical trials where treatment received can differ from treatment assigned. In one such setting, we address subtleties involved in inference for causal effects when using a key covariate to predict membership in latent principal strata. We show that when treatment received can differ from treatment assigned in both study arms, incorporating a stratum-predictive covariate can cause estimates of the "complier average causal effect" (CACE) to be derived from observations in the two treatment arms with different covariate distributions. Adopting a Bayesian perspective and using Markov chain Monte Carlo for computation, we develop posterior checks that characterize the extent to which incorporating the pretreatment covariate endangers estimation of the CACE. We apply the method to analyze a clinical trial comparing two treatments for jaw fractures, in which the study protocol allowed surgeons to overrule both possible randomized treatment assignments based on their clinical judgment and the data contained a key covariate (injury severity) predictive of treatment received.

11.
Three approaches to multivariate estimation for categorical data using randomized response (RR) are described. In the first approach, practical only for 2×2 contingency tables, a multi-proportions design is used. In the second approach, a separate RR trial is used for each variate, and it is noted that the multivariate design matrix of conditional probabilities is given by the Kronecker product of the univariate design matrices of each trial, provided that the trials are independent of each other in a certain sense. The third approach requires only a single randomization and thus may be viewed as the use of vector response. Finally, a special-purpose bivariate design is presented.
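As a concrete univariate building block for these designs, Warner's classic RR mechanism and its moment estimator can be simulated as follows (parameters illustrative); the second approach above combines the design matrices of such trials via a Kronecker product.

```python
import random

random.seed(42)

# Warner's design: with probability p the respondent answers the sensitive
# question truthfully; otherwise they answer its negation.
p = 0.7          # randomization-device probability, fixed by design
pi_true = 0.2    # true prevalence of the sensitive trait (unknown in practice)
n = 100_000

yes = 0
for _ in range(n):
    has_trait = random.random() < pi_true
    asked_direct = random.random() < p
    yes += has_trait if asked_direct else not has_trait

lam_hat = yes / n                            # observed "yes" proportion
pi_hat = (lam_hat - (1 - p)) / (2 * p - 1)   # unbiased moment estimator
```

Privacy is protected because a "yes" never reveals the respondent's true status, yet E[lam_hat] = p*pi + (1-p)*(1-pi) identifies pi whenever p != 1/2.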

12.
Many assumptions, including assumptions regarding treatment effects, are made at the design stage of a clinical trial for power and sample size calculations. It is desirable to check these assumptions during the trial by using blinded data. Methods for sample size re-estimation based on blinded data analyses have been proposed for normal and binary endpoints. However, it has been debated whether any reliable estimate of the treatment effect can be obtained in a typical clinical trial situation. In this paper, we consider the case of a survival endpoint and investigate the feasibility of estimating the treatment effect in an ongoing trial without unblinding. We incorporate information from a surrogate endpoint and investigate three estimation procedures: a classification method and two expectation-maximization (EM) algorithms. Simulations and a clinical trial example are used to assess the performance of the procedures. Our studies show that the EM algorithms depend heavily on the initial estimates of the model parameters. Despite the use of a surrogate endpoint, all three methods show large variation in the treatment effect estimates and hence fail to provide a precise conclusion about the treatment effect. Copyright © 2012 John Wiley & Sons, Ltd.
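A toy version of the blinded-estimation problem shows why the EM approach is fragile: pooled (blinded) exponential event times form a two-component mixture, and the EM fit hinges on its starting values. This sketch omits censoring and the surrogate endpoint; all parameters are illustrative.

```python
import math
import random

random.seed(7)

# Blinded pooled event times: a 50/50 mixture of two exponential arms
# with true hazard ratio 2 (rates 1.0 and 0.5), arm labels hidden.
times = ([random.expovariate(1.0) for _ in range(500)] +
         [random.expovariate(0.5) for _ in range(500)])

l1, l2 = 2.0, 0.2   # initial rate guesses; results are sensitive to these
for _ in range(200):
    # E-step: posterior probability that each time came from component 1
    resp = []
    for t in times:
        f1 = l1 * math.exp(-l1 * t)
        f2 = l2 * math.exp(-l2 * t)
        resp.append(f1 / (f1 + f2))
    # M-step: weighted maximum-likelihood updates of the exponential rates
    l1 = sum(resp) / sum(r * t for r, t in zip(resp, times))
    l2 = ((len(times) - sum(resp))
          / sum((1 - r) * t for r, t in zip(resp, times)))

hr_blinded = max(l1, l2) / min(l1, l2)   # blinded hazard-ratio estimate
```

Exponential mixtures are only weakly identified, so re-running this with different starting rates can give markedly different hazard-ratio estimates, echoing the paper's finding of large variation.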

13.
The Randomized Response (RR) technique is a well-established interview procedure which guarantees privacy protection in social surveys dealing with sensitive items. The RR method assumes a stochastic mechanism to create uncertainty about the true status of the respondents, in order to ensure privacy protection and to avoid tendencies to dissimulate or respond in a socially desirable direction. A very general model for the RR method was introduced by Franklin (Commun Stat Theory Methods 18:489–505, 1989) for the case of a single sensitive question. However, since social surveys are often based on questionnaires containing more than one sensitive question, the analysis of multivariate RR data is of considerable interest. This paper focuses on the generalization of the Franklin model to a multiple-sensitive-question setting and on related inferential issues.

14.
In this paper, we consider a semiparametric time-varying coefficient regression model in which the influences of some covariates vary non-parametrically with time while the effects of the remaining covariates follow certain parametric functions of time. Weighted least squares type estimators for the unknown parameters of the parametric coefficient functions, as well as estimators of the non-parametric coefficient functions, are developed. We show that kernel smoothing, which avoids modelling of the sampling times, is asymptotically more efficient than single nearest neighbour smoothing, which depends on estimation of the sampling model. The asymptotically optimal bandwidth is also derived. A hypothesis testing procedure is proposed to test whether some covariate effects follow certain parametric forms. Simulation studies compare the finite-sample performance of kernel smoothing and single nearest neighbour smoothing and check the empirical sizes and powers of the proposed testing procedures. An application to a data set from an AIDS clinical trial study is provided for illustration.

15.
In evaluating the efficacy of a vaccine to protect against disease caused by finitely many diverse infectious pathogens, it is often important to assess whether vaccine protection depends on variations of the exposing pathogen. This problem can be formulated under a competing risks model in which the endpoint event is infection and the cause of failure is the infecting strain type, determined after the infection is diagnosed. The strain-specific vaccine efficacy is defined as one minus the cause-specific hazard ratio (vaccine/placebo). This paper develops some simple procedures for testing whether the vaccine affords protection against various strains and whether and how the strain-specific vaccine efficacy depends on the type of exposing strain, adjusting for covariate effects. The Cox proportional hazards model is used to relate the cause-specific outcomes to explanatory variables. The finite-sample properties of the proposed tests are studied through simulations and shown to be good. The tests are applied to data collected from an oral cholera vaccine trial.
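Ignoring covariates, the crude analogue of the strain-specific efficacy measure is one minus the ratio of strain-specific incidence rates. The counts below are hypothetical, chosen only to show efficacy differing by strain:

```python
# Hypothetical trial summary: infections by strain type (the "cause" in the
# competing risks formulation) and person-years of follow-up per arm.
py_vaccine, py_placebo = 4000.0, 4000.0
cases_vaccine = {"El Tor": 10, "classical": 30}
cases_placebo = {"El Tor": 50, "classical": 40}

ve_by_strain = {
    strain: 1.0 - (cases_vaccine[strain] / py_vaccine)
                  / (cases_placebo[strain] / py_placebo)
    for strain in cases_placebo
}
# Unequal strain-specific VE values suggest protection depends on the strain,
# which is what the paper's covariate-adjusted Cox-model tests formalize.
```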

16.
This article proposes a novel non-stationary BINMA time series model that extends two INMA processes whose innovation series follow the bivariate Poisson distribution under time-varying moment assumptions. Through simulation studies, the article also demonstrates the use and superiority of the generalized quasi-likelihood (GQL) approach for estimating the regression effects, which is computationally simpler than conditional maximum likelihood estimation (CMLE) and feasible generalized least squares (FGLS). The serial and bivariate dependence correlations are estimated by a robust method of moments.
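The univariate INMA(1) building block of such models can be simulated by binomial thinning (a minimal sketch with illustrative, time-constant parameters; the bivariate model couples two such series through bivariate Poisson innovations):

```python
import math
import random

random.seed(3)

def poisson(lam):
    """Poisson draw via Knuth's multiplication method."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def thin(x, alpha):
    """Binomial thinning alpha o x: each of the x counts survives w.p. alpha."""
    return sum(1 for _ in range(x) if random.random() < alpha)

# INMA(1): X_t = beta o e_{t-1} + e_t with Poisson(lam) innovations e_t
beta, lam = 0.4, 2.0
e_prev, series = poisson(lam), []
for _ in range(2000):
    e_t = poisson(lam)
    series.append(thin(e_prev, beta) + e_t)
    e_prev = e_t

mean_x = sum(series) / len(series)   # E[X_t] = lam * (1 + beta) = 2.8
```

Thinning replaces scalar multiplication so the series stays integer-valued, and the moving-average structure means dependence only reaches back one lag.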

17.
Time-course gene-set data consist of repeated measurements, gathered over time, on predefined groups of genes in a set of patients. Testing which gene sets vary significantly over time is an important problem in genomic data analysis. In this paper, the method of generalized estimating equations (GEE), a semi-parametric approach, is applied to time-course gene-set data. We propose a special structure for the working correlation matrix to handle the association among repeated measurements on each patient over time. The proposed working correlation matrix also permits estimation of the effects of the same gene across different patients. The approach is applied to an HIV therapeutic vaccine trial (the DALIA-1 trial). This data set has two phases, pre-ATI and post-ATI, defined relative to the vaccination period. Using multiple testing, the significant gene sets in the pre-ATI phase are detected, and data on two randomly selected gene sets in the post-ATI phase are also analyzed. Simulation studies confirm the good performance of the proposed approach.

18.
Bandwidth plays an important role in determining the performance of nonparametric estimators such as the local constant estimator. In this article, we propose a Bayesian approach to bandwidth estimation for local constant estimators of time-varying coefficients in time series models. We establish a large-sample theory for the proposed bandwidth estimator and for Bayesian estimators of the unknown parameters of the error density. A Monte Carlo simulation study shows that (i) the proposed Bayesian estimators of the bandwidth and the error-density parameters have satisfactory finite-sample performance; and (ii) the proposed Bayesian approach estimates bandwidths better than the normal reference rule and cross-validation. Moreover, we apply the proposed Bayesian bandwidth estimation method to time-varying coefficient models that explain Okun's law and the relationship between consumption growth and income growth in the U.S. For each model, we also provide calibrated parametric forms of the time-varying coefficients. Supplementary materials for this article are available online.
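The role of the bandwidth in a local constant (Nadaraya-Watson) estimator is easy to see in a small example: a small bandwidth tracks the target function while a large one oversmooths it (illustrative noiseless data, not the article's Bayesian procedure):

```python
import math

def local_constant(x0, xs, ys, h):
    """Nadaraya-Watson (local constant) estimate at x0 with a Gaussian
    kernel and bandwidth h."""
    w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

# Target curve sampled on a grid
xs = [i / 100 for i in range(101)]
ys = [math.sin(2 * math.pi * x) for x in xs]

fit_small_h = local_constant(0.25, xs, ys, h=0.02)  # near sin(pi/2) = 1
fit_large_h = local_constant(0.25, xs, ys, h=0.50)  # oversmoothed toward 0
```

With noisy data the trade-off reverses at the other extreme (a tiny bandwidth fits noise), which is why data-driven bandwidth selection, here done by Bayesian methods, matters.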

19.
Time-varying coefficient models with autoregressive and moving-average–generalized autoregressive conditional heteroscedasticity structure are proposed for examining the time-varying effects of risk factors in longitudinal studies. Compared with existing models in the literature, the proposed models give explicit patterns for the time-varying coefficients. Maximum likelihood and marginal likelihood (based on a Laplace approximation) are used to estimate the parameters in the proposed models. Simulation studies are conducted to evaluate the performance of these two estimation methods, which is measured in terms of the Kullback–Leibler divergence and the root mean square error. The marginal likelihood approach leads to the more accurate parameter estimates, although it is more computationally intensive. The proposed models are applied to the Framingham Heart Study to investigate the time-varying effects of covariates on coronary heart disease incidence. The Bayesian information criterion is used for specifying the time series structures of the coefficients of the risk factors.

20.
We explore the impact of time-varying subsequent therapy on statistical power and treatment effect estimates in survival analysis. A marginal structural model (MSM) with stabilized inverse probability of treatment weights (sIPTW) was used to account for the effects of subsequent therapy. Simulations were performed to compare the MSM-sIPTW method with the conventional method, which does not account for a time-varying covariate, such as subsequent therapy, that depends on the initial response to the frontline treatment. The simulations indicated that the statistical power, and thereby the Type I error, of trials to detect the frontline treatment effect can be inflated if no appropriate adjustment is made for the add-on effects of the subsequent therapy. Correspondingly, the hazard ratio between the treatment groups may be overestimated by conventional analysis methods. In contrast, MSM-sIPTW maintains the Type I error rate and gives unbiased estimates of the hazard ratio for the treatment. Two real examples are used to discuss the potential clinical implications. The study demonstrates the importance of accounting for time-varying subsequent therapy to obtain an unbiased interpretation of the data.
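The stabilized weight for one subject is the running product, over visits, of the marginal probability of the observed therapy decision divided by its probability given time-varying covariates such as response status. The probabilities below are hypothetical; in practice both come from fitted models (typically logistic regressions).

```python
# One subject followed over two visits.  a_t indicates whether subsequent
# therapy was started; p_num is the marginal probability of that decision
# and p_den its probability given time-varying covariates (e.g., response
# to frontline treatment).  All probabilities hypothetical.
visits = [
    # (a_t, p_num, p_den)
    (0, 0.8, 0.9),   # stayed off subsequent therapy
    (1, 0.3, 0.6),   # started subsequent therapy after responding
]

sw = 1.0
for a_t, p_num, p_den in visits:
    sw *= p_num / p_den   # stabilized weight: running product of ratios

# sw < 1 here: this switch pattern is more likely than average given
# response status, so the subject is down-weighted in the weighted analysis.
```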


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号