Similar Literature
1.
Summary. Cohort studies of individuals infected with the human immunodeficiency virus (HIV) provide useful information on the past pattern of HIV diagnoses, progression of the disease and use of antiretroviral therapy. We propose a new method for using individual data from an open prevalent cohort study to estimate the incidence of HIV, by jointly modelling the HIV diagnosis, the inclusion in the cohort and the progression of the disease in a Markov model framework. The estimation procedure involves the construction of a likelihood function which takes into account the probability of observing the total number of subjects who are enrolled in the cohort and the probabilities of passage through the stages of disease for each observed subject conditionally on being included in the cohort. The estimator of the HIV infection rate is defined as the function which maximizes a penalized likelihood, and the solution of this maximization problem is approximated on a basis of cubic M-splines. The method is illustrated by using cohort data from a hospital-based surveillance system of HIV infection in Aquitaine, a region of south-western France. A simulation study is performed to study the ability of the model to reconstruct the incidence of HIV from prevalent cohort data.

2.
This paper focuses on the distribution of the skew normal sample mean. For a random sample drawn from a skew normal population, we derive the density function and the moment generating function of the sample mean. The density function derived can be used for statistical inference on the disease occurrence time of twins in epidemiology, in which the skew normal model plays a key role.
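The skew normal moment generating function has a convenient closed form, which makes the sample-mean distribution easy to check numerically. The sketch below is a generic illustration, not the paper's derivation: it assumes the standard parameterization SN(ξ, ω, α), whose MGF is M(t) = 2 exp(ξt + ω²t²/2) Φ(δωt) with δ = α/√(1 + α²), so the mean of n i.i.d. draws has MGF M(t/n)ⁿ; the analytic value is compared against a Monte Carlo estimate.

```python
import numpy as np
from scipy.stats import norm, skewnorm

def sn_mgf(t, xi, omega, alpha):
    """MGF of a single SN(xi, omega, alpha) variable."""
    delta = alpha / np.sqrt(1.0 + alpha**2)
    return 2.0 * np.exp(xi * t + 0.5 * omega**2 * t**2) * norm.cdf(delta * omega * t)

def sample_mean_mgf(t, n, xi, omega, alpha):
    """MGF of the mean of n i.i.d. SN variables: M(t/n)**n."""
    return sn_mgf(t / n, xi, omega, alpha) ** n

xi, omega, alpha, n, t = 0.0, 1.0, 3.0, 10, 0.7
means = skewnorm.rvs(alpha, loc=xi, scale=omega, size=(100_000, n),
                     random_state=np.random.default_rng(0)).mean(axis=1)
print(sample_mean_mgf(t, n, xi, omega, alpha))  # analytic value
print(np.exp(t * means).mean())                 # Monte Carlo check
```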

3.
The hazard function describes the instantaneous rate of failure at a time t, given that the individual survives up to t. In applications, the effect of covariates produces changes in the hazard function, and it is often of interest in survival analysis to identify whether, and when, a change point has occurred. In this work, covariates and censored observations are incorporated in order to estimate a change point in the Weibull regression hazard model, which generalizes the exponential model. For this more general model, maximum likelihood estimators can be obtained for the change point and for the parameters involved. A Monte Carlo simulation study shows that the model can indeed be implemented in practice. An application to clinical trial data from a treatment of chronic granulomatous disease is also included.
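As a rough illustration of how such a fit can be implemented, the sketch below profiles the likelihood over a grid of candidate change points for a simplified version of the model: no covariates, a common Weibull shape ν, and a scale parameter λ that jumps at the change point τ, with right censoring handled through the usual event indicator. The simulated data, grid, and starting values are all invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, d, tau):
    """Negative log-likelihood for right-censored data (d = 1: event) under the
    hazard h(t) = lam * nu * t**(nu - 1), with lam = lam1 for t <= tau and
    lam = lam2 for t > tau; parameters enter on the log scale for positivity."""
    lam1, lam2, nu = np.exp(params)
    lam = np.where(t <= tau, lam1, lam2)
    log_h = np.log(lam) + np.log(nu) + (nu - 1.0) * np.log(t)
    H = np.where(t <= tau, lam1 * t**nu,
                 lam1 * tau**nu + lam2 * (t**nu - tau**nu))  # cumulative hazard
    return -(np.sum(d * log_h) - np.sum(H))

rng = np.random.default_rng(1)
n, tau_true, lam1_true, lam2_true = 500, 1.0, 2.0, 0.5
E = rng.exponential(size=n)                         # unit-exponential cumulative hazards
t = np.where(E <= lam1_true * tau_true, E / lam1_true,   # invert piecewise hazard (nu = 1)
             tau_true + (E - lam1_true * tau_true) / lam2_true)
c = rng.exponential(3.0, n)
obs, d = np.minimum(t, c), (t <= c).astype(float)

grid = np.quantile(obs, np.linspace(0.1, 0.9, 25))  # candidate change points
prof = [minimize(neg_loglik, np.zeros(3), args=(obs, d, tau)).fun for tau in grid]
print("estimated change point:", grid[int(np.argmin(prof))])
```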

4.
Competing risks are common in clinical cancer research, as patients are subject to multiple potential failure outcomes, such as death from the cancer itself or from complications arising from the disease. In the analysis of competing risks, several regression methods are available for the evaluation of the relationship between covariates and cause-specific failures, many of which are based on Cox’s proportional hazards model. Although a great deal of research has been conducted on estimating competing risks, less attention has been devoted to linear regression modeling, which is often referred to as the accelerated failure time (AFT) model in the survival literature. In this article, we address the use and interpretation of linear regression analysis with regard to the competing risks problem. We introduce two types of AFT modeling framework, where the influence of a covariate can be evaluated in relation to either a cause-specific hazard function, referred to as cause-specific AFT (CS-AFT) modeling in this study, or the cumulative incidence function of a particular failure type, referred to as crude-risk AFT (CR-AFT) modeling. Simulation studies illustrate that, as in hazard-based competing risks analysis, these two models can produce substantially different effects, depending on the relationship between the covariates and both the failure type of principal interest and competing failure types. We apply the AFT methods to data from non-Hodgkin lymphoma patients, where the dataset is characterized by two competing events, disease relapse and death without relapse, and non-proportionality. We demonstrate how the data can be analyzed and interpreted, using linear competing risks regression models.

5.
Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level.
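The core computation behind such absolute risk estimates is the cumulative incidence integral F₁(t) = ∫₀ᵗ h₁(u) S(u) du with S(u) = exp(−H₁(u) − H₂(u)). The sketch below evaluates it for assumed piecewise-constant cause-specific hazards; it omits the survey weighting, relative risk terms, and variance machinery of the paper.

```python
import numpy as np

cuts = np.array([0.0, 5.0, 10.0])        # interval start times (years)
h1   = np.array([0.010, 0.020, 0.040])   # cause-1 hazard on each interval
h2   = np.array([0.005, 0.015, 0.030])   # competing-cause hazard

def absolute_risk(t_star, cuts, h1, h2, step=0.01):
    """Riemann-sum evaluation of F1(t*) = integral of h1(u) * S(u) du."""
    grid = np.arange(0.0, t_star, step)
    idx = np.searchsorted(cuts, grid, side="right") - 1
    lam1, lam2 = h1[idx], h2[idx]
    total = lam1 + lam2
    H = np.cumsum(total * step) - total * step   # total cumulative hazard just before u
    return np.sum(lam1 * np.exp(-H) * step)

print(absolute_risk(15.0, cuts, h1, h2))  # 15-year absolute risk of cause 1
```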

6.
We review and discuss numerical inversion of the characteristic function as a tool for obtaining cumulative distribution functions. With the availability of high-speed computing and symbolic computation software, the method is ideally suited for instructional purposes, particularly in the illustration of the inversion theorems covered in graduate probability courses. The method is also available as an alternative to asymptotic approximations, Monte Carlo, or bootstrap techniques when analytic expressions for the distribution function are not available. We illustrate the method with several examples, including one which is concerned with the detection of possible clusters of disease in an epidemiologic study.
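A minimal version of the numerical inversion is the Gil-Pelaez formula, F(x) = 1/2 − (1/π) ∫₀^∞ Im{e^(−itx) φ(t)}/t dt, which the sketch below evaluates by adaptive quadrature for the standard normal characteristic function and checks against the known CDF; the truncation point and lower cutoff are ad hoc choices.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def cdf_from_cf(x, cf, upper=200.0):
    """Gil-Pelaez inversion: F(x) = 1/2 - (1/pi) * int_0^inf Im(e^{-itx} cf(t))/t dt."""
    integrand = lambda t: np.imag(np.exp(-1j * t * x) * cf(t)) / t
    val, _ = quad(integrand, 1e-10, upper, limit=400)
    return 0.5 - val / np.pi

phi_normal = lambda t: np.exp(-0.5 * t**2)   # characteristic function of N(0, 1)
print(cdf_from_cf(1.0, phi_normal))          # approximately 0.8413
print(norm.cdf(1.0))                         # exact value for comparison
```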

7.
In this article, we formulate a semiparametric model for counting processes in which the effect of covariates is to transform the time scale for a baseline rate function. We assume an arbitrary dependence structure for the counting process and propose a class of estimating equations for the regression parameters. Asymptotic results for these estimators are derived. In addition, goodness of fit methods for assessing the adequacy of the accelerated rates model are proposed. The finite-sample behavior of the proposed methods is examined in simulation studies, and data from a chronic granulomatous disease study are used to illustrate the methodology.

8.
In this article, we use a latent class model (LCM) with prevalence modeled as a function of covariates to assess diagnostic test accuracy in situations where the true disease status is not observed, but observations on three or more conditionally independent diagnostic tests are available. A fast Monte Carlo expectation–maximization (MCEM) algorithm with binary (disease) diagnostic data is implemented to estimate the parameters of interest, namely the sensitivity, the specificity, and the prevalence of the disease as a function of covariates. To obtain standard errors for confidence interval construction, the missing information principle is applied to adjust the information matrix estimates. In an extensive Monte Carlo study, we compare the adjusted information matrix-based standard error estimates with bootstrap standard error estimates, both obtained using the fast MCEM algorithm. The simulations demonstrate that the adjusted information matrix approach produces standard error estimates similar to the bootstrap methods under the scenarios considered, and that the bootstrap percentile intervals have satisfactory coverage probabilities. We then apply the LCM analysis to a real data set of 122 subjects from a Gynecologic Oncology Group study of significant cervical lesion diagnosis in women with atypical glandular cells of undetermined significance, comparing the diagnostic accuracy of a histology-based evaluation, a carbonic anhydrase-IX biomarker-based test and a human papillomavirus DNA test.
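To convey the structure of the model, here is a stripped-down EM fit for three conditionally independent binary tests with a constant prevalence. With binary data and no covariates the E-step has a closed form, so no Monte Carlo step is needed; the paper's covariate-dependent prevalence and MCEM machinery are not reproduced, and all simulation settings are invented.

```python
import numpy as np

def lcm_em(Y, n_iter=200):
    """EM for a two-class LCM: p = prevalence, se/sp = per-test accuracy."""
    n, J = Y.shape
    p, se, sp = 0.3, np.full(J, 0.8), np.full(J, 0.8)   # starting values
    for _ in range(n_iter):
        # E-step: posterior probability that each subject is diseased
        like1 = p * np.prod(se**Y * (1 - se)**(1 - Y), axis=1)
        like0 = (1 - p) * np.prod((1 - sp)**Y * sp**(1 - Y), axis=1)
        w = like1 / (like1 + like0)
        # M-step: weighted updates of prevalence, sensitivities, specificities
        p = w.mean()
        se = (w[:, None] * Y).sum(axis=0) / w.sum()
        sp = ((1 - w)[:, None] * (1 - Y)).sum(axis=0) / (1 - w).sum()
    return p, se, sp

rng = np.random.default_rng(2)
D = rng.random(122) < 0.4                              # latent true status
Y = np.where(D[:, None], rng.random((122, 3)) < 0.90,  # sensitivity 0.90
             rng.random((122, 3)) < 0.15)              # 1 - specificity 0.15
print(lcm_em(Y.astype(float)))
```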

9.
Summary. A spatiotemporal model is developed to analyse epidemics of airborne plant diseases which are spread by spores. The observations consist of measurements of the severity of disease at different times, at different locations in the horizontal plane and at different heights within the plant canopy. The model describes the joint distribution of the occurrence and the severity of the disease. The three-dimensional dispersal of spores is modelled by combining a horizontal and a vertical dispersal function. Maximum likelihood combined with a parametric bootstrap is suggested to estimate the model parameters and the uncertainty attached to them. The spatiotemporal model is used to analyse a yellow rust epidemic in a wheat field. In the analysis we pay particular attention to the selection and the estimation of the dispersal functions.

10.
Dilated cardiomyopathy is a disease of unknown cause characterized by dilation and impaired function of one or both ventricles. Most cases are believed to be sporadic, although familial forms have been detected; the familial form has been estimated to have a relative frequency of about 25%. Since, apart from family history, the familial form has no other characteristics that could help to distinguish the two diseases, an estimate of the frequency of the familial form should take possible misclassification error into account. In our study, 100 cases were randomly selected from a prospective series of 350 patients. Out of them, 28 index cases were included in the analysis: 12 were known to be familial, and 88 were believed to be sporadic. After extensive clinical examination of the relatives, 3 patients supposed to have a sporadic form were found to have a familial form, and 13 cases had a confirmed sporadic disease. Models in the log-linear product (LLP) class were used to separate classification errors from underlying patterns of disease incidence. The most conservative crude estimate of the misclassification error is 16.1% (CI 0.22–23.27%), which leads to a crude estimate of the frequency of the familial form of about 60%. An estimate of the disease frequency, adjusted for the sampling plan, is 40.93% (CI 32.29–44.17%). The results are consistent with the hypothesis that genetic factors, although a major cause of the disease, are still underestimated.

11.
We study the effect of additive and multiplicative Berkson measurement error in the Cox proportional hazards model. By plotting the true and the observed survivor functions and the true and the observed hazard functions as functions of the exposure, one can gain insight into the effect of this type of error on the estimation of the slope parameter corresponding to the variable measured with error. As an example, we analyze the measurement error in the German Uranium Miners Cohort Study, both with graphical methods and with a simulation study. We do not see a substantial bias for small measurement error or in the rare-disease case. Even the effect of a Berkson measurement error with high variance, which is not unrealistic in our example, is a negligible attenuation of the observed effect. However, this attenuation is more pronounced for multiplicative measurement error.
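The simulation logic can be sketched as follows, assuming the third-party lifelines package for the Cox fit; the dose distribution, error variance, and baseline hazard are invented rather than taken from the cohort. Under Berkson error the true exposure scatters around the observed one, X = Z + U, rather than the other way around.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n, beta = 5000, 0.5
Z = rng.uniform(0.0, 2.0, n)                  # observed (assigned) exposure
X = Z + rng.normal(0.0, 0.5, n)               # true exposure: additive Berkson error
T = rng.exponential(1.0 / (0.1 * np.exp(beta * X)))   # proportional hazards times
C = rng.exponential(20.0, n)                  # administrative-type censoring
df = pd.DataFrame({"T": np.minimum(T, C), "E": (T <= C).astype(int), "Z": Z})

cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
print(cph.params_["Z"])   # typically close to beta: only mild attenuation
```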

12.
Missing data cause challenging issues, particularly in phase III registration trials, as highlighted by the European Medicines Agency (EMA) and the US National Research Council. We explore, as a case study, how the issues arising from missing data were tackled in a double-blind phase III trial in subjects with autosomal dominant polycystic kidney disease. A total of 1445 subjects were randomized in a 2:1 ratio to receive active treatment (tolvaptan) or placebo. The primary outcome, the rate of change in total kidney volume, favored tolvaptan (P < .0001). The key secondary efficacy endpoints of clinical progression of disease and rate of decline in kidney function also favored tolvaptan. However, as highlighted by the Food and Drug Administration and the EMA, the interpretation of the results was hampered by a high number of unevenly distributed dropouts, particularly early dropouts. In this paper, we outline the analyses undertaken to address the issue of missing data thoroughly. "Tipping point analyses" were performed to explore how extreme and detrimental outcomes among subjects with missing data would have to be to overturn the positive treatment effect attained in those subjects who had complete data. Nonparametric rank-based analyses accounting for missing data were also performed. In conclusion, straightforward and transparent analyses directly taking missing data into account convincingly support the robustness of the preplanned analyses on the primary and secondary endpoints. Tolvaptan was confirmed to be effective in slowing total kidney volume growth, which is considered an efficacy endpoint by the EMA, and in lessening the decline in renal function in patients with autosomal dominant polycystic kidney disease.
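A generic tipping-point loop looks like the sketch below: dropouts in the active arm are imputed under progressively more detrimental shifts delta until the treatment comparison loses significance. The outcome scale, sample sizes, and two-sample t-test are illustrative assumptions only, not the trial's endpoint or imputation model.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(4)
trt_obs = rng.normal(3.0, 3.0, 600)    # active-arm completers (higher = worse)
ctrl    = rng.normal(4.0, 3.0, 300)    # control-arm completers
imputed = rng.normal(3.0, 3.0, 200)    # active-arm dropouts, imputed as observed

for delta in np.arange(0.0, 4.5, 0.5):
    trt = np.concatenate([trt_obs, imputed + delta])  # penalize dropouts by delta
    p = ttest_ind(trt, ctrl).pvalue
    flag = "  <- effect overturned" if p > 0.05 else ""
    print(f"delta = {delta:3.1f}   p = {p:.4f}{flag}")
```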

13.
Repeated neuropsychological measurements, such as mini-mental state examination (MMSE) scores, are frequently used in Alzheimer’s disease (AD) research to study change in the cognitive function of AD patients. A question of interest among dementia researchers is whether some AD patients exhibit transient “plateaus” of cognitive function in the course of the disease. We consider a statistical approach to this question based on irregularly spaced repeated MMSE scores. We propose an algorithm that formalizes the measurement of an apparent cognitive plateau, and a procedure that evaluates the evidence for plateaus by applying the algorithm both to the observed data and to data sets simulated from a linear mixed model. We apply these methods to repeated MMSE data from the Michigan Alzheimer’s Disease Research Center, finding a high rate of apparent plateaus and also a high rate of false discovery. Simulation studies are also conducted to assess the performance of the algorithm. In general, the false discovery rate of the algorithm is high unless the rate of decline is large compared with the measurement error of the cognitive test. We argue that these results are not a problem of the specific algorithm chosen, but reflect a lack of information concerning the presence of plateaus in the data.
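The flavor of such an algorithm, and of its false discovery problem, can be conveyed with a toy rule (an assumed form, not the authors' algorithm): flag a plateau whenever some run of k consecutive scores stays within a tolerance, then apply the rule to trajectories that truly decline linearly and count the spurious flags.

```python
import numpy as np

def has_plateau(scores, k=3, tol=1.0):
    """Flag an apparent plateau: any k consecutive scores within tol points."""
    for i in range(len(scores) - k + 1):
        window = scores[i:i + k]
        if window.max() - window.min() <= tol:
            return True
    return False

rng = np.random.default_rng(5)
n_subj, n_visits = 1000, 8
times = np.arange(n_visits)                    # annual visits
slope, sigma = -1.5, 2.0                       # true decline vs measurement error
mmse = 26.0 + slope * times + rng.normal(0.0, sigma, (n_subj, n_visits))
rate = np.mean([has_plateau(s) for s in mmse])
print(f"apparent-plateau rate under pure linear decline: {rate:.2f}")
```

Raising sigma, or flattening the slope, drives the spurious-plateau rate up, mirroring the abstract's conclusion about decline rate versus measurement error.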

14.
Frailty models are used in survival analysis to account for unobserved heterogeneity in individual risks of disease and death. Shared frailty models were suggested for analyzing bivariate data on related survival times (e.g., matched-pairs experiments, twin or family data) and are frequently used to model heterogeneity in survival analysis. In the most common shared frailty model, the hazard function is the product of a random factor (the frailty) and a baseline hazard function common to all individuals, with specific assumptions placed on the baseline distribution and the frailty distribution. In this paper, we introduce shared gamma frailty models with reversed hazard rate, and we develop a Bayesian estimation procedure using the Markov chain Monte Carlo (MCMC) technique to estimate the parameters involved in the model. We present a simulation study to compare the true values of the parameters with the estimated values, and we apply the proposed model to the Australian twin data set.
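The structure of the model is easy to simulate. Under proportional reversed hazards, a frailty Z multiplying the reversed hazard gives a conditional distribution function F(t | Z) = F₀(t)^Z, so paired lifetimes sharing one gamma frailty can be drawn by inverse transform; the unit-exponential baseline below is an arbitrary choice for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
n_pairs, shape = 10_000, 2.0
Z = rng.gamma(shape, 1.0 / shape, n_pairs)     # shared gamma frailty, mean 1
U = rng.random((n_pairs, 2))
# F(t | Z) = F0(t)**Z with F0(t) = 1 - exp(-t), so T = F0^{-1}(U**(1/Z)):
T = -np.log1p(-U ** (1.0 / Z[:, None]))
print(np.corrcoef(T[:, 0], T[:, 1])[0, 1])     # shared Z induces positive dependence
```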

15.
The counting process with the Cox-type intensity function has been commonly used to analyse recurrent event data. This model essentially assumes that the underlying counting process is a time-transformed Poisson process and that the covariates have multiplicative effects on the mean and rate function of the counting process. Recently, Pepe and Cai, and Lawless and co-workers have proposed semiparametric procedures for making inferences about the mean and rate function of the counting process without the Poisson-type assumption. In this paper, we provide a rigorous justification of such robust procedures through modern empirical process theory. Furthermore, we present an approach to constructing simultaneous confidence bands for the mean function and describe a class of graphical and numerical techniques for checking the adequacy of the fitted mean–rate model. The advantages of the robust procedures are demonstrated through simulation studies. An illustration with multiple-infection data taken from a clinical study on chronic granulomatous disease is also provided.
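The robust mean-function estimate at the center of these procedures has a simple Nelson–Aalen-type form, μ̂(t) = Σ over event times s ≤ t of dN(s)/Y(s), where Y(s) is the number of subjects still under observation at s. Below is a bare-bones version on invented data, with none of the paper's variance or confidence-band machinery.

```python
import numpy as np

def mean_function(events_per_subject, followup, t_grid):
    """mu_hat(t) = sum over event times s <= t of dN(s) / Y(s)."""
    followup = np.asarray(followup)
    all_events = np.sort(np.concatenate(events_per_subject))
    out = []
    for t in t_grid:
        s = all_events[all_events <= t]
        at_risk = np.array([(followup >= u).sum() for u in s])  # Y(s)
        out.append(np.sum(1.0 / at_risk))
    return np.array(out)

events = [np.array([0.5, 1.2]), np.array([2.0]), np.array([]),
          np.array([0.3, 0.9, 2.5])]            # recurrent event times per subject
followup = [3.0, 2.5, 1.0, 3.0]                 # end of observation per subject
print(mean_function(events, followup, t_grid=[1.0, 2.0, 3.0]))
```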

16.
The geographical relative risk function is a useful tool for investigating the spatial distribution of disease based on case and control data. The most common way of estimating this function is using the ratio of bivariate kernel density estimates constructed from the locations of cases and controls, respectively. An alternative is to use a local-linear (LL) estimator of the log-relative risk function. In both cases, the choice of bandwidth is critical. In this article, we examine the relative performance of the two estimation techniques using a variety of data-driven bandwidth selection methods, including likelihood cross-validation (CV), least-squares CV, rule-of-thumb reference methods, and a new approximate plug-in (PI) bandwidth for the LL estimator. Our analysis includes the comparison of asymptotic results; a simulation study; and application of the estimators on two real data sets. Our findings suggest that the density ratio method implemented with the least-squares CV bandwidth selector is generally best, with the LL estimator with PI bandwidth being competitive in applications with strong large-scale trends but much worse in situations with elliptical clusters.
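The density ratio method itself fits in a few lines: estimate case and control densities with bivariate kernel smoothers and take the log of their ratio on a grid. The sketch below uses scipy's default bandwidths, standing in for the data-driven selectors compared in the article, on simulated locations.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
cases    = rng.normal([0.3, 0.0], 0.5, (200, 2)).T   # shape (2, n), as scipy expects
controls = rng.normal([0.0, 0.0], 0.6, (400, 2)).T

f_case, f_ctrl = gaussian_kde(cases), gaussian_kde(controls)
xx, yy = np.meshgrid(np.linspace(-1.5, 1.5, 50), np.linspace(-1.5, 1.5, 50))
grid = np.vstack([xx.ravel(), yy.ravel()])
log_rr = (np.log(f_case(grid)) - np.log(f_ctrl(grid))).reshape(xx.shape)
print(log_rr[25, 25])   # log relative risk near the center of the case cluster
```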

17.
Identification of the type of disease pattern and spread in a field is critical in epidemiological investigations of plant diseases. For example, an aggregation pattern of infected plants suggests that, at the time of observation, the pathogen is spreading from a proximal source. Conversely, a random pattern suggests a lack of spread from a proximal source. Most of the existing methods of spatial pattern analysis work with only one variety of plant at each location and with uniform genetic disease susceptibility across the field. Pecan orchards, used in this study, and other orchard crops are usually composed of different varieties with different levels of susceptibility to disease. A new measure is suggested to characterize the spatio-temporal transmission patterns of disease; a Monte Carlo test procedure is proposed to test whether the transmission of disease is random or aggregated. In addition, we propose a mixed-transmission model, which allows us to quantify the degree of aggregation effect.
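A Monte Carlo test of this kind can be illustrated with a simple relabeling scheme (the test statistic here, mean nearest-neighbor distance among infected trees, is a generic choice, not the article's new measure): aggregation shows up as an observed statistic that is small relative to its randomization distribution.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(8)
coords = np.array([(i, j) for i in range(20) for j in range(20)], float)  # orchard grid
infected = rng.random(400) < 0.10                   # observed infection labels

def mean_nn_dist(pts):
    d = cdist(pts, pts)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

obs = mean_nn_dist(coords[infected])
null = [mean_nn_dist(coords[rng.permutation(infected)]) for _ in range(999)]
p = (1 + sum(v <= obs for v in null)) / 1000        # aggregation -> small distances
print(f"observed = {obs:.2f}, Monte Carlo p = {p:.3f}")
```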

18.
There has been a recurring interest in models for survival data which hypothesize subpopulations of individuals highly susceptible to some type of adverse event. Other individuals are assumed to be at much less risk. Most commonly, in clinical trials, these models attempt to estimate the fraction of patients cured of disease. The use of such models is examined, and the likelihood function is advocated as an informative inference tool.

19.
We model the Alzheimer's disease-related phenotype response variables, observed at irregular time points in longitudinal Genome-Wide Association Studies, as sparse functional data, and propose nonparametric test procedures to detect functional genotype effects while controlling for the confounding effects of environmental covariates. Our new functional analysis of covariance tests are based on a seemingly unrelated kernel smoother, which takes into account the within-subject temporal correlations, and thus enjoy improved power over existing functional tests. We show that the proposed test, combined with a uniformly consistent nonparametric covariance function estimator, enjoys the Wilks phenomenon and is minimax most powerful. Data used in the preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative database, where an application of the proposed test led to the discovery of new genes that may be related to Alzheimer's disease.

20.
Unmeasured confounding is a common problem in observational studies. This article presents simple formulae that bound the confounding risk ratio under the three standard populations: the exposed, unexposed, and total groups. The bounds are derived by treating the confounding risk ratio as a function of the prevalence of a covariate, and can be constructed using only information about either the exposure–confounder or the disease–confounder relationship. The formulae can be extended to the confounding odds ratio in case–control studies, and the confounding risk difference is discussed. The application of these formulae is demonstrated using an example in which estimation may suffer from bias due to population stratification. The formulae can help to provide a realistic picture of the potential impact of bias due to confounding.
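For a single binary confounder, the confounding risk ratio has the standard prevalence-based form CRR = (1 + p₁(RR − 1)) / (1 + p₀(RR − 1)), where p₁ and p₀ are the confounder prevalences among the exposed and unexposed and RR is the confounder–disease risk ratio. The sketch below traces this function over an unknown prevalence, in the spirit of the article's bound construction; the article's exact formulae may differ.

```python
import numpy as np

def crr(p1, p0, rr_cd):
    """Confounding risk ratio for a binary confounder with disease risk ratio rr_cd."""
    return (1.0 + p1 * (rr_cd - 1.0)) / (1.0 + p0 * (rr_cd - 1.0))

rr_cd = 3.0                       # assumed confounder-disease risk ratio
p = np.linspace(0.0, 1.0, 101)
vals = crr(0.4, p, rr_cd)         # fix exposed-group prevalence, vary the other
print(vals.min(), vals.max())     # attainable range given only rr_cd and p1
```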
