Similar documents (20 results)
1.
Background: In age‐related macular degeneration (ARMD) trials, the FDA‐approved endpoint is the loss (or gain) of at least three lines of vision as compared to baseline. The use of such a response endpoint entails a potentially severe loss of information. A more efficient strategy could be obtained by using longitudinal measures of the change in visual acuity. In this paper we investigate, by using data from two randomized clinical trials, the mean and variance–covariance structures of the longitudinal measurements of the change in visual acuity. Methods: Individual patient data were collected in 234 patients in a randomized trial comparing interferon‐α with placebo and in 1181 patients in a randomized trial comparing three active doses of pegaptanib with sham. A linear model for longitudinal data was used to analyze the repeated measurements of the change in visual acuity. Results: For both trials, the data were adequately summarized by a model that assumed a quadratic trend for the mean change in visual acuity over time, a power variance function, and an antedependence correlation structure. The power variance function was remarkably similar for the two datasets and involved the square root of the measurement time. Conclusions: The similarity of the estimated variance functions and correlation structures for both datasets indicates that these aspects may be a genuine feature of the measurements of changes in visual acuity in patients with ARMD. The feature can be used in the planning and analysis of trials that use visual acuity as the clinical endpoint of interest. Copyright © 2010 John Wiley & Sons, Ltd.
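As a minimal sketch of the covariance structure this abstract describes, the following builds the variance-covariance matrix implied by a power variance function in the square root of measurement time combined with a first-order antedependence (AD(1)) correlation structure. All numbers (visit times, scale, adjacent correlations) are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

times = np.array([6.0, 12.0, 18.0, 24.0])   # hypothetical visit times (weeks)
sigma2 = 4.0                                 # hypothetical scale parameter
rho = np.array([0.9, 0.85, 0.8])             # hypothetical adjacent correlations

# Power variance function: Var(Y_t) = sigma^2 * sqrt(t)
variances = sigma2 * np.sqrt(times)

# AD(1): corr(Y_s, Y_t) is the product of the adjacent correlations between s and t
n = len(times)
corr = np.eye(n)
for i in range(n):
    for j in range(i + 1, n):
        corr[i, j] = corr[j, i] = np.prod(rho[i:j])

sd = np.sqrt(variances)
cov = corr * np.outer(sd, sd)
print(np.round(cov, 3))
```

The AD(1) structure lets correlations between adjacent visits differ, unlike a stationary AR(1), which is useful when visits are unequally informative.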

2.
Abstract

Comparing two hazard rate functions to evaluate a treatment effect is an important issue in survival analysis. It is quite common that the two hazard rate functions cross each other at one or more unknown time points, representing temporal changes of the treatment effect. In certain applications, besides survival data, we also have related longitudinal data available regarding some time-dependent covariates. In such cases, a joint model that accommodates both types of data can allow us to infer the association between the survival and longitudinal data and to assess the treatment effect better. In this paper, we propose a modelling approach for comparing two crossing hazard rate functions by jointly modelling survival and longitudinal data. The parameters of the proposed joint model are estimated by maximum likelihood using the EM algorithm. Asymptotic properties of the maximum likelihood estimators are studied. To illustrate the virtues of the proposed method, we compare its performance with several existing methods in a simulation study. Our proposed method is also demonstrated using a real dataset obtained from an HIV clinical trial.

3.
Recent advances in technology have allowed researchers to collect large-scale, complex biological data simultaneously, often in matrix format. In genomic studies, for instance, measurements from tens to hundreds of thousands of genes are taken from individuals across several experimental groups. In time course microarray experiments, gene expression is measured at several time points for each individual across the whole genome, resulting in a high-dimensional matrix for each gene. In such experiments, researchers are faced with high-dimensional longitudinal data. Unfortunately, traditional methods for longitudinal data are not appropriate for high-dimensional situations. In this paper, we use the growth curve model, introduce a test useful for high-dimensional longitudinal data, and evaluate its performance using simulations. We also show how our approach can be used to filter genes in time course genomic experiments. We illustrate this using publicly available genomic data from experiments comparing normal human lung tissue with vanadium pentoxide treated human lung tissue, designed with the aim of understanding the susceptibility of individuals working in petro-chemical factories to airway re-modelling. Using our method, we were able to filter out 1053 (about 5%) genes as non-noise genes from a pool of 22,277. Although our focus is on hypothesis testing, we also provide a modified maximum likelihood estimator for the mean parameter of the growth curve model and assess its performance through bias and mean squared error.

4.
In this paper we propose a test for second order stochastic dominance (SSD), for the case where both distribution functions are unknown. This is a generalization of a test proposed by Deshpande and Singh (1985), who compare a new random prospect with a known distribution function. We then show that our test is based on comparing the mean minus one half of Gini's mean difference of the distributions, which is known to be a necessary condition for SSD, as developed in the economics literature (Yitzhaki, 1982).
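The summary the test compares, the sample mean minus one half of Gini's mean difference, is easy to compute. The sketch below implements just that necessary-condition statistic (not the full test of the paper); the sample values are illustrative.

```python
import numpy as np

def gini_mean_difference(x):
    """Average absolute difference over all ordered pairs (i != j)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    return np.abs(x[:, None] - x[None, :]).sum() / (n * (n - 1))

def ssd_summary(x):
    """Mean minus half of Gini's mean difference (Yitzhaki's SSD criterion)."""
    return np.mean(x) - 0.5 * gini_mean_difference(x)

a = [1.0, 2.0, 3.0, 4.0]
b = [2.0, 2.5, 3.0, 3.5]
print(ssd_summary(a), ssd_summary(b))
```

Sample b has the same ordering of means but less dispersion, so its summary is larger, consistent with second order dominance favouring less risky prospects.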

5.
Semiparametric regression models and covariance function estimation are very useful in longitudinal studies. Unfortunately, challenges arise in estimating the covariance function of longitudinal data collected at irregular time points. In this article, a partially linear model is introduced for the mean, and a modified Cholesky decomposition approach is proposed for the covariance structure to respect the positive-definiteness constraint. We estimate the regression function by using the local linear technique and propose quasi-likelihood estimating equations for both the mean and covariance structures. Moreover, asymptotic normality of the resulting estimators is established. Finally, a simulation study and a real data analysis are used to illustrate the proposed approach.

6.
When analyzing incomplete longitudinal clinical trial data, it is often inappropriate to assume that the occurrence of missingness is at random, especially in cases where visits are entirely missed. We present a framework that simultaneously models multivariate incomplete longitudinal data and a non-ignorable missingness mechanism using a Bayesian approach. A criterion measure is presented for comparing models. We demonstrate the feasibility of the methodology through reanalysis of two of the longitudinal measures from a clinical trial of penicillamine treatment for scleroderma patients. We compare the results for univariate and bivariate, ignorable and non-ignorable missingness models.

7.
We discuss event histories from the point of view of longitudinal data analysis, comparing several possible inferential objectives. We show that the Nelson–Aalen estimate of a cumulative intensity may be derived as a limiting solution to a sequence of generalized estimating equations for intermittently observed longitudinal count data. We outline a potential use for the theory in interval-censored recurrent-event models, and demonstrate its applicability using data from a Toronto arthritis clinic. We also discuss connections with rate models, along with some implications for the longitudinal analyst.
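For readers unfamiliar with the estimator discussed above, a minimal sketch of the Nelson–Aalen estimate follows: at each distinct event time, add (number of events) divided by (number at risk). The data below are illustrative, not from the Toronto arthritis clinic.

```python
import numpy as np

def nelson_aalen(times, events):
    """times: observation times; events: 1 = event, 0 = censored.
    Returns a list of (time, cumulative hazard) pairs at event times."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events)
    order = np.argsort(times)
    times, events = times[order], events[order]
    n_at_risk = len(times)
    est, cum = [], 0.0
    for t in np.unique(times):
        at_t = times == t
        d = events[at_t].sum()
        if d > 0:
            cum += d / n_at_risk       # Nelson-Aalen increment d_i / n_i
            est.append((t, cum))
        n_at_risk -= at_t.sum()        # drop everyone observed at time t
    return est

print(nelson_aalen([2, 3, 3, 5, 7], [1, 1, 0, 1, 0]))
```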

8.
Many clinical research studies evaluate a time‐to‐event outcome, illustrate survival functions, and conventionally report estimated hazard ratios to express the magnitude of the treatment effect when comparing between groups. However, it may not be straightforward to interpret the hazard ratio clinically and statistically when the proportional hazards assumption is invalid. In some recent papers published in clinical journals, the use of restricted mean survival time (RMST), or τ-year mean survival time, is discussed as one of the alternative summary measures for the time‐to‐event outcome. The RMST is defined as the expected value of time to event limited to a specific time point, corresponding to the area under the survival curve up to that time point. This article summarizes the information necessary to conduct statistical analysis using the RMST, including the definition and statistical properties of the RMST, adjusted analysis methods, sample size calculation, information fraction for the RMST difference, and clinical and statistical meaning and interpretation. Additionally, we discuss how to set the specific time point defining the RMST from two main points of view. We also provide SAS code to determine the sample size required to detect an expected RMST difference with appropriate power and to reconstruct individual survival data to estimate an RMST reference value from a reported survival curve.
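The definition above (expected event time truncated at τ, equal to the area under the survival curve up to τ) can be computed directly for a step survival curve. The sketch below is an illustration of the definition only; in practice the curve would come from a Kaplan–Meier fit, and the step curve here is made up.

```python
def rmst(event_times, surv_probs, tau):
    """Area under a right-continuous step survival curve up to tau.
    event_times: times where S(t) drops; surv_probs: S(t) just after
    each drop; S(t) = 1 before the first event."""
    area, prev_t, prev_s = 0.0, 0.0, 1.0
    for t, s in zip(event_times, surv_probs):
        if t >= tau:
            break
        area += prev_s * (t - prev_t)   # rectangle up to the next drop
        prev_t, prev_s = t, s
    area += prev_s * (tau - prev_t)     # final rectangle up to tau
    return area

# Step curve: S = 1 on [0,1), 0.8 on [1,3), 0.5 from 3 onward
print(rmst([1.0, 3.0], [0.8, 0.5], tau=5.0))  # → 3.6
```

Truncating at τ is what makes the quantity estimable without tail extrapolation, which is why the choice of τ gets its own discussion in the article.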

9.
We investigate a sequential procedure for comparing two treatments in a binomial clinical trial. The procedure uses play-the-winner sampling with termination as soon as the absolute difference in the number of successes of the two treatments reaches a critical value. The important aspect of our procedure is that the critical value is modified as the experiment progresses. Numerical results are given which show that this procedure is preferred to all other existing procedures on the basis of the sample size on the poorer treatment and also on the basis of total sample size.
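To make the sampling rule concrete, here is an illustrative simulation of play-the-winner sampling with an absolute-difference stopping rule. Note the simplification: the critical value is held fixed here, whereas the paper's procedure modifies it as the trial progresses. Success probabilities and the critical value are assumptions for the demo.

```python
import random

def play_the_winner(p_a, p_b, critical, rng, max_n=10_000):
    """Assign the next patient to the arm that last succeeded; switch
    arms after a failure; stop when |successes A - successes B| hits
    the critical value (or at max_n as a safety cap)."""
    successes = {"A": 0, "B": 0}
    n = {"A": 0, "B": 0}
    arm = rng.choice(["A", "B"])                   # start at random
    while abs(successes["A"] - successes["B"]) < critical and sum(n.values()) < max_n:
        p = p_a if arm == "A" else p_b
        n[arm] += 1
        if rng.random() < p:
            successes[arm] += 1                    # stay on the winner
        else:
            arm = "B" if arm == "A" else "A"       # switch after failure
    return successes, n

rng = random.Random(0)
succ, counts = play_the_winner(0.7, 0.3, critical=5, rng=rng)
print(succ, counts)
```

Because the better arm tends to keep winning, it accumulates most of the assignments, which is the ethical appeal the abstract's sample-size comparison quantifies.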

10.
Researchers are increasingly using the standardized difference to compare the distribution of baseline covariates between treatment groups in observational studies. Standardized differences were initially developed in the context of comparing the means of continuous variables between two groups. However, in medical research, many baseline covariates are dichotomous. In this article, we explore the utility and interpretation of the standardized difference for comparing the prevalence of dichotomous variables between two groups. We examined the relationship between the standardized difference and three quantities: the maximal difference in the prevalence of the binary variable between the two groups, the relative risk comparing the prevalence of the binary variable in one group to that in the other, and the phi coefficient measuring correlation between treatment group and the binary variable. We found that a standardized difference of 10% (or 0.1) is equivalent to a phi coefficient of 0.05 (indicating negligible correlation) between treatment group and the binary variable.
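The two quantities being related can be computed directly from the group prevalences. The sketch below uses the standard formula for the standardized difference of two proportions and the phi coefficient of the corresponding 2x2 table, simplified here under an assumption of equal group sizes; the prevalences are illustrative.

```python
import math

def std_diff_binary(p1, p2):
    """Standardized difference for two prevalences."""
    return (p1 - p2) / math.sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / 2)

def phi_equal_groups(p1, p2):
    """Phi coefficient of the 2x2 (group x outcome) table, assuming
    equal group sizes; algebra reduces it to this closed form."""
    pbar = (p1 + p2) / 2
    return (p1 - p2) / (2 * math.sqrt(pbar * (1 - pbar)))

p1, p2 = 0.55, 0.50
print(round(std_diff_binary(p1, p2), 3), round(phi_equal_groups(p1, p2), 3))  # → 0.1 0.05
```

With prevalences near 0.5, a standardized difference of about 0.1 lines up with a phi of about 0.05, matching the equivalence quoted in the abstract.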

11.
Time‐varying coefficient models are widely used in longitudinal data analysis. These models allow the effects of predictors on the response to vary over time. In this article, we consider a mixed‐effects time‐varying coefficient model to account for the within-subject correlation of longitudinal data. We show that when kernel smoothing is used to estimate the smooth functions in time‐varying coefficient models for sparse or dense longitudinal data, the asymptotic results for these two situations are essentially different. Therefore, a subjective choice between the sparse and dense cases might lead to erroneous conclusions for statistical inference. In order to solve this problem, we establish a unified self‐normalized central limit theorem, based on which a unified inference is proposed without deciding whether the data are sparse or dense. The effectiveness of the proposed unified inference is demonstrated through a simulation study and an analysis of Baltimore MACS data.

12.
As researchers increasingly rely on linear mixed models to characterize longitudinal data, there is a need for improved techniques for selecting among this class of models, which requires specification of both fixed and random effects via a mean model and a variance-covariance structure. The process is further complicated when fixed and/or random effects are non-nested between models. This paper explores the development of a hypothesis test to compare non-nested linear mixed models based on extensions of the work begun by Sir David Cox. We assess the robustness of this approach for comparing models containing correlated measures of body fat for predicting longitudinal cardiometabolic risk.

13.
In this paper we present a Bayesian non-parametric analysis of survival time data involving information from two types of treatment. We present an easy-to-compute Bayes factor comparing two model assumptions, no treatment difference versus a treatment difference, and use this to obtain model summaries for each of the two treatments, in particular predictive distributions.

14.
Varying-coefficient models are very useful for longitudinal data analysis. In this paper, we focus on varying-coefficient models for longitudinal data. We develop a new estimation procedure using Cholesky decomposition and profile least squares techniques. Asymptotic normality for the proposed estimators of varying-coefficient functions has been established. Monte Carlo simulation studies show excellent finite-sample performance. We illustrate our methods with a real data example.
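A sketch of the (modified) Cholesky idea underlying such covariance estimation, here and in item 5 above: any positive-definite covariance matrix factors as Sigma = C D C', with C unit lower triangular (interpretable as autoregressive coefficients) and D diagonal with positive entries (innovation variances), so estimating C and D freely always yields a valid covariance. The matrix below is illustrative.

```python
import numpy as np

sigma = np.array([[4.0, 2.0, 1.0],
                  [2.0, 5.0, 3.0],
                  [1.0, 3.0, 6.0]])      # an arbitrary positive-definite example

L = np.linalg.cholesky(sigma)            # Sigma = L L'
d = np.diag(L)
C = L / d                                # unit lower triangular factor
D = np.diag(d ** 2)                      # positive diagonal (innovation variances)

print(np.allclose(C @ D @ C.T, sigma))   # → True
```

The appeal for estimation is that the entries of C and the log of D are unconstrained, sidestepping the positive-definiteness constraint entirely.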

15.
We propose a flexible functional approach for modelling generalized longitudinal data and survival time using principal components. In the proposed model the longitudinal observations can be continuous or categorical, such as Gaussian, binomial or Poisson outcomes. We generalize the traditional joint models, which treat categorical data such as CD4 counts as continuous after transformation. The proposed model is data-adaptive: it does not require pre-specified functional forms for longitudinal trajectories and automatically detects characteristic patterns. The longitudinal trajectories, observed with measurement error or random error, are represented by flexible basis functions through a possibly nonlinear link function, combining dimension reduction techniques resulting from functional principal component (FPC) analysis. The relationship between the longitudinal process and event history is assessed using a Cox regression model. Although the proposed model inherits the flexibility of non-parametric methods, the estimation procedure based on the EM algorithm is still parametric in computation, and thus simple and easy to implement. The computation is simplified by dimension reduction for random coefficients or FPC scores. An iterative selection procedure based on the Akaike information criterion (AIC) is proposed to choose the tuning parameters, such as the knots of the spline basis and the number of FPCs, so that an appropriate degree of smoothness and fluctuation can be achieved. The effectiveness of the proposed approach is illustrated through a simulation study, followed by an application to longitudinal CD4 counts and survival data collected in a recent clinical trial comparing the efficacy and safety of two antiretroviral drugs.

16.
In this paper we study estimating the joint conditional distributions of multivariate longitudinal outcomes using regression models and copulas. For the estimation of marginal models, we consider a class of time-varying transformation models and combine the two marginal models using nonparametric empirical copulas. Our models and estimation method can be applied in many situations where the conditional mean-based models are not good enough. Empirical copulas combined with time-varying transformation models may allow quite flexible modelling for the joint conditional distributions for multivariate longitudinal data. We derive the asymptotic properties for the copula-based estimators of the joint conditional distribution functions. For illustration we apply our estimation method to an epidemiological study of childhood growth and blood pressure.

17.
Researchers often study competing risks, in which subjects may fail from any one of k causes. Comparing any two competing risks with covariate effects is very important in medical studies. In this paper, we develop tests for comparing cause-specific hazard rates and cumulative incidence functions at specified covariate levels under the additive risk model, based on a weighted difference of estimates of cumulative cause-specific hazard rates. Motivated by McKeague et al. (2001), we construct simultaneous confidence bands for the difference of two conditional cumulative incidence functions as a useful graphical tool. In addition, we conduct a simulation study, which shows that the proposed procedure has good finite-sample performance. A melanoma data set from a clinical trial is used for the purpose of illustration.

18.
Nonparametric estimation and inference for conditional distribution functions with longitudinal data have important applications in biomedical studies, such as epidemiological studies and longitudinal clinical trials. Estimation approaches without any structural assumptions may lead to inadequate and numerically unstable estimators in practice. We propose in this paper a nonparametric approach based on time-varying parametric models for estimating the conditional distribution functions with a longitudinal sample. Our model assumes that the conditional distribution of the outcome variable at each given time point can be approximated by a parametric model after a local Box–Cox transformation. Our estimation is based on a two-step smoothing method, in which we first obtain the raw estimators of the conditional distribution functions at a set of disjoint time points, and then compute the final estimators at any time by smoothing the raw estimators. Applications of our two-step estimation method are demonstrated through a large epidemiological study of childhood growth and blood pressure. Finite-sample properties of our procedures are investigated through a simulation study. Application and simulation results show that smoothing estimation from time-varying parametric models outperforms the existing kernel smoothing estimator by producing narrower pointwise bootstrap confidence bands and smaller root mean squared error.

19.
In this article, we propose a novel approach to fit a functional linear regression in which both the response and the predictor are functions. We consider the case where the response and the predictor processes are both sparsely sampled at random time points and are contaminated with random errors. In addition, the random times are allowed to be different for the measurements of the predictor and the response functions. The aforementioned situation often occurs in longitudinal data settings. To estimate the covariance and the cross‐covariance functions, we use a regularization method over a reproducing kernel Hilbert space. The estimate of the cross‐covariance function is used to obtain estimates of the regression coefficient function and of the functional singular components. We derive the convergence rates of the proposed cross‐covariance, the regression coefficient, and the singular component function estimators. Furthermore, we show that, under some regularity conditions, the estimator of the coefficient function has a minimax optimal rate. We conduct a simulation study and demonstrate merits of the proposed method by comparing it to some other existing methods in the literature. We illustrate the method by an example of an application to a real‐world air quality dataset. The Canadian Journal of Statistics 47: 524–559; 2019 © 2019 Statistical Society of Canada

20.
We consider situations where subjects in a longitudinal study experience recurrent events. However, the events are observed only in the form of counts for intervals which can vary across subjects. Methods for estimating the mean and rate functions of the recurrent-event processes are presented, based on loglinear regression models which incorporate piecewise-constant baseline rate functions. Robust methods and methods based on mixed Poisson processes are compared in a simulation study and in an example involving superficial bladder tumours in humans. Both approaches provide a simple and effective way to deal with interval-grouped data.
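A minimal sketch of the piecewise-constant rate idea behind such models: with interval-grouped counts and a Poisson assumption, the maximum likelihood estimate of the rate on each piece is simply total events divided by total exposure time on that piece. The data below are illustrative (all subjects followed for the full period), not the bladder-tumour data.

```python
import numpy as np

breaks = np.array([0.0, 6.0, 12.0, 24.0])       # piece boundaries (months)
# Each row: one subject's event counts per piece; all followed to 24 months.
counts = np.array([[2, 1, 0],
                   [0, 1, 1],
                   [1, 0, 2]])
exposure = np.diff(breaks) * counts.shape[0]     # person-months per piece

# Poisson MLE for a piecewise-constant rate: events / exposure, per piece
rates = counts.sum(axis=0) / exposure
print(rates)    # events per person-month in each piece
```

In the regression setting of the paper, these baseline rates are multiplied by loglinear covariate effects, but the events-over-exposure structure of the estimate is the same.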
