Similar Articles
20 similar articles found.
1.
On the basis of serological data from prevalence studies of rubella, mumps and hepatitis A, the paper describes a flexible local maximum likelihood method for the estimation of the rate at which susceptible individuals acquire infection at different ages. In contrast with parametric models that have been used before in the literature, the local polynomial likelihood method allows this age-dependent force of infection to be modelled without making any assumptions about the parametric structure. Moreover, this method allows for simultaneous nonparametric estimation of age-specific incidence and prevalence. Unconstrained models may lead to negative estimates for the force of infection at certain ages. To overcome this problem and to guarantee maximal flexibility, the local smoother can be constrained to be monotone. It turns out that different parametric and nonparametric estimates of the force of infection can exhibit considerably different qualitative features like location and the number of maxima, emphasizing the importance of a well-chosen flexible statistical model.
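The force of infection described here is related to the seroprevalence curve by lambda(a) = pi'(a) / (1 - pi(a)). As a minimal sketch of that relationship (using plain finite differences rather than the paper's local polynomial likelihood smoother, and synthetic data rather than the rubella/mumps/hepatitis A sera):

```python
import math

def force_of_infection(ages, prevalence):
    """Estimate the age-dependent force of infection lambda(a) from a
    seroprevalence curve pi(a) via lambda(a) = pi'(a) / (1 - pi(a)),
    using central finite differences at interior ages. Illustrative
    only; the paper smooths with local polynomial likelihood instead."""
    foi = []
    for i in range(1, len(ages) - 1):
        dpi = (prevalence[i + 1] - prevalence[i - 1]) / (ages[i + 1] - ages[i - 1])
        foi.append(dpi / (1.0 - prevalence[i]))
    return foi

# Synthetic check: under a constant force of infection lam, the simple
# catalytic model gives pi(a) = 1 - exp(-lam * a), so the estimate
# should recover lam at every interior age.
lam = 0.1
ages = [0.5 * i for i in range(41)]            # ages 0 to 20 in steps of 0.5
prev = [1 - math.exp(-lam * a) for a in ages]
est = force_of_infection(ages, prev)
```

With real serological data, pi(a) would be an estimated (and preferably monotone) prevalence curve, which is where the paper's constrained local smoother comes in.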

2.
This research focuses on the estimation of tumor incidence rates from long-term animal studies which incorporate interim sacrifices. A nonparametric stochastic model is described with transition rates between states corresponding to the tumor incidence rate, the overall death rate, and the death rate for tumor-free animals. Exact analytic solutions for the maximum likelihood estimators of the hazard rates are presented, and their application to data from a long-term animal study is illustrated by an example. Unlike many common methods for estimation and comparison of tumor incidence rates among treatment groups, the estimators derived in this paper require no assumptions regarding tumor lethality or treatment lethality. The small sample operating characteristics of these estimators are evaluated using Monte Carlo simulation studies.
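The closed-form hazard estimators in this abstract belong to a multistate model; the simplest one-state special case of such an estimator is the classical occurrence/exposure rate. A small sketch of that special case (the data values are made up for illustration):

```python
def constant_hazard_mle(times, events):
    """MLE of a constant hazard rate: number of observed events divided
    by total time at risk (the occurrence/exposure estimator). The
    paper derives analogous closed-form estimators for each transition
    rate of its multistate tumour model; this is just the one-state
    special case, shown for intuition."""
    d = sum(events)          # observed deaths (event indicator = 1)
    exposure = sum(times)    # total animal-time at risk
    if exposure == 0:
        raise ValueError("no exposure time")
    return d / exposure

# Four hypothetical animals: two die (event = 1), two are sacrificed (event = 0).
rate = constant_hazard_mle(times=[10.0, 20.0, 5.0, 15.0],
                           events=[1, 0, 1, 0])
# rate = 2 / 50 = 0.04
```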

3.
The estimation of the incidence of tumours in an animal carcinogenicity study is complicated by the occult nature of the tumours involved (i.e. tumours are not observable before an animal's death). Also, the lethality of tumours is generally unknown, making the tumour incidence function non-identifiable without interim sacrifices, cause-of-death data or modelling assumptions. Although Kaplan–Meier curves for overall survival are typically displayed, obtaining analogous plots for tumour incidence generally requires fairly elaborate model fitting. We present a case-study of tetrafluoroethylene to illustrate a simple method for estimating the incidence of tumours as a function of more easily estimable components. One of the components, tumour prevalence, is modelled by using a generalized additive model, which leads to estimates that are more flexible than those derived under the usual parametric models. A multiplicative assumption for tumour lethality allows for the incorporation of concomitant information, such as the size of tumours. Our approach requires only terminal sacrifice data although additional sacrifice data are easily accommodated. Simulations are used to illustrate the estimator proposed and to evaluate its properties. The method also yields a simple summary measure of tumour lethality, which can be helpful in interpreting the results of a study.

4.
A fundamental problem with the latent-time framework in competing risks is the lack of identifiability of the joint distribution. Given observed covariates, along with assumptions as to the form of their effect, identifiability may obtain. However, it is difficult to check any assumptions about form, since a more general model may lose identifiability. This paper considers a general framework for modelling the effect of covariates, with the single assumption that the copula dependency structure of the latent times is invariant to the covariates. This framework consists of a set of functions: the covariate-time transformations. The main result produces bounds on these functions, which are derived solely from the crude incidence functions. These bounds are a useful model-checking tool when considering the covariate-time transformation resulting from any particular set of further assumptions. An example is given where the widely used assumption of independent competing risks is checked.

5.
The fitting of age-dependent HIV incidence models to AIDS data is a computationally intensive task, particularly when allowance is made for non-proportional dependence of the infection rate on age. This paper presents a computational alternative to a very intensive method described by Rosenberg (1994). Our approach is to use the EM algorithm on a discretized form of the model used by Rosenberg (1994). The EM approach has certain attractive features, including ease of implementation and flexibility of model specification. It also conveniently generalizes to allow smoothed estimation and less detailed forms of age-specific AIDS data.

6.
In the development of many diseases there are often associated random variables which continuously reflect the progress of a subject towards the final expression of the disease (failure). At any given time these processes, which we call stochastic covariates, may provide information about the current hazard and the remaining time to failure. Likewise, in situations when the specific times of key prior events are not known, such as the time of onset of an occult tumour or the time of infection with HIV-1, it may be possible to identify a stochastic covariate which reveals, indirectly, when the event of interest occurred. The analysis of carcinogenicity trials which involve occult tumours is usually based on the time of death or sacrifice and an indicator of tumour presence for each animal in the experiment. However, the size of an occult tumour observed at the endpoint represents data concerning tumour development which may convey additional information concerning both the tumour incidence rate and the rate of death to which tumour-bearing animals are subject. We develop a stochastic model for tumour growth and suggest different ways in which the effect of this growth on the hazard of failure might be modelled. Using a combined model for tumour growth and additive competing risks of death, we show that if this tumour size information is used, assumptions concerning tumour lethality, the context of observation or multiple sacrifice times are no longer necessary in order to estimate the tumour incidence rate. Parametric estimation based on the method of maximum likelihood is outlined and is applied to simulated data from the combined model. The results of this limited study confirm that use of the stochastic covariate tumour size results in more precise estimation of the incidence rate for occult tumours.

7.
Most parametric statistical methods are based on a set of assumptions: normality, linearity and homoscedasticity. Transformation of a metric response is a popular method to meet these assumptions. In particular, transformation of the response of a linear model is a popular method when attempting to satisfy the Gaussian assumptions on the error components in the model. A particular problem with common transformations such as the logarithm or the Box–Cox family is that negative and zero data values cannot be transformed. This paper proposes a new transformation which allows negative and zero data values. The method for estimating the transformation parameter considers an objective criterion based on kurtosis and skewness for achieving normality. Use of the new transformation and the method for estimating the transformation parameter are illustrated with three data sets.
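The abstract does not give the paper's transformation, so as a hedged sketch of the same idea here is one illustrative transformation that is defined for zero and negative values (a signed log), with its parameter chosen by minimizing absolute skewness as a stand-in for the paper's kurtosis-and-skewness criterion:

```python
import math

def signed_log(x, lam):
    """An illustrative transformation defined for zero and negative
    values: sign(x) * log(1 + |x| / lam). NOT the paper's transformation,
    just one of the same flavour."""
    s = 1.0 if x >= 0 else -1.0
    return s * math.log(1.0 + abs(x) / lam)

def skewness(data):
    """Moment-based sample skewness m3 / m2**1.5."""
    n = len(data)
    m = sum(data) / n
    m2 = sum((v - m) ** 2 for v in data) / n
    m3 = sum((v - m) ** 3 for v in data) / n
    return m3 / m2 ** 1.5

def fit_lambda(data, grid):
    """Pick the parameter whose transformed data is least skewed
    (a simplified stand-in for the paper's objective criterion)."""
    return min(grid, key=lambda lam: abs(skewness([signed_log(v, lam) for v in data])))

# Right-skewed data containing a zero and a negative value, which the
# logarithm and Box-Cox transformations could not handle directly.
data = [-1.0, 0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
best = fit_lambda(data, [0.5, 1.0, 2.0, 4.0, 8.0])
```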

8.
Regression models with random effects are proposed for the joint analysis of negative binomial and ordinal longitudinal data with nonignorable missing values under a fully parametric framework. The presented model simultaneously considers a multivariate probit regression model for the missing mechanisms, which provides the ability to examine the missing-data assumptions, and a multivariate mixed model for the responses. Random effects are used to take into account the correlation between longitudinal responses of the same individual. A full likelihood-based approach that yields maximum likelihood estimates of the model parameters is used. The model is applied to medical data obtained from an observational study on women, where the correlated responses are an ordinal measure of osteoporosis of the spine and a negative binomial count of joint damage. The sensitivity of the results to the assumptions is also investigated. The effects of some covariates on all responses are investigated simultaneously.

9.
Comparing k Cumulative Incidence Functions Through Resampling Methods
Tests for the equality of k cumulative incidence functions in a competing risks model are proposed. Test statistics are based on a vector of processes related to the cumulative incidence functions. Since their asymptotic distributions appear very complicated and depend on the underlying distribution of the data, two resampling techniques, namely the well-known bootstrap method and the so-called random symmetrization method, are used to approximate the critical values of the tests. Without making any assumptions on the nature of dependence between the risks, the tests allow one to compare k risks simultaneously for k ≥ 2 under the random censorship model. Tests against ordered alternatives are also considered. Simulation studies indicate that the proposed tests perform very well with moderate sample size. A real application to cancer mortality data is given.
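The bootstrap step can be sketched in miniature. The toy below compares two empirical distribution functions with a sup-difference statistic and approximates its null critical value by resampling both groups from the pooled sample; it ignores censoring and the k > 2 case that the paper's tests handle, so it is schematic rather than the paper's procedure:

```python
import random

def ecdf_sup_diff(x, y, grid):
    """Sup over the grid of |F_x(t) - F_y(t)| for two empirical CDFs."""
    def F(sample, t):
        return sum(v <= t for v in sample) / len(sample)
    return max(abs(F(x, t) - F(y, t)) for t in grid)

def bootstrap_critical_value(x, y, alpha=0.05, B=500, seed=1):
    """Approximate the null critical value of the sup-difference
    statistic by drawing both groups from the pooled sample, which
    mimics equality of the two underlying curves."""
    rng = random.Random(seed)
    pooled = list(x) + list(y)
    grid = sorted(pooled)
    stats = []
    for _ in range(B):
        bx = [rng.choice(pooled) for _ in x]
        by = [rng.choice(pooled) for _ in y]
        stats.append(ecdf_sup_diff(bx, by, grid))
    stats.sort()
    return stats[int((1 - alpha) * B) - 1]

# Two clearly separated samples: the observed statistic should exceed
# the bootstrap critical value, i.e. the test rejects equality.
x = list(range(1, 21))
y = list(range(21, 41))
crit = bootstrap_critical_value(x, y)
obs = ecdf_sup_diff(x, y, sorted(x + y))
```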

10.
In prospective cohort studies, individuals are usually recruited according to a certain cross-sectional sampling criterion. The prevalent cohort is defined as a group of individuals who are alive but possibly with disease at the beginning of the study. It is appealing to incorporate the prevalent cases to estimate the incidence rate of disease before the enrollment. The method of back-calculation of incidence rate has been used to estimate the incubation time from human immunodeficiency virus (HIV) infection to AIDS. The time origin is defined as the time of HIV infection. In aging cohort studies, the primary time scale is age of disease onset; subjects have to survive a certain number of years to be enrolled into the study, thus creating left truncation (delayed entry). The current methods usually assume that either the disease incidence is rare or the excess mortality due to disease is small compared with that of healthy subjects. So far the validity of the results based on these assumptions has not been examined. In this paper, a simple alternative method is proposed to estimate the dementia incidence rate before enrollment using prevalent cohort data with left truncation. Furthermore, simulations are used to examine the performance of the estimation of disease incidence under different assumptions of disease incidence rates and excess mortality hazards due to disease. As an application, the method is applied to the prevalent cases of dementia from the Honolulu-Asia Aging Study to estimate the dementia incidence rate and to assess the effect of hypertension, Apoe 4 and education on dementia onset.

11.
In the analysis of competing risks data, the cumulative incidence function is a useful summary of the overall crude risk for a failure type of interest. Mixture regression modeling has served as a natural approach to performing covariate analysis based on this quantity. However, existing mixture regression methods with competing risks data either impose parametric assumptions on the conditional risks or require stringent censoring assumptions. In this article, we propose a new semiparametric regression approach for competing risks data under the usual conditional independent censoring mechanism. We establish the consistency and asymptotic normality of the resulting estimators. A simple resampling method is proposed to approximate the distribution of the estimated parameters and that of the predicted cumulative incidence functions. Simulation studies and an analysis of a breast cancer dataset demonstrate that our method performs well with realistic sample sizes and is appropriate for practical use.

12.
In prospective cohort studies, individuals are usually recruited according to a certain cross-sectional sampling criterion. The prevalent cohort is defined as a group of individuals who are alive but possibly with disease at the beginning of the study. It is appealing to incorporate the prevalent cases to estimate the incidence rate of disease before the enrollment. The method of back-calculation of incidence rate has been used to estimate the incubation time from HIV infection to AIDS. The time origin is defined as the time of HIV infection. In aging cohort studies, the primary time scale is age of disease onset; subjects have to survive a certain number of years to be enrolled into the study, thus creating left truncation (delayed entry). The current methods usually assume that either the disease incidence is rare or the excess mortality due to disease is small compared to that of healthy subjects. So far, the validity of the results based on these assumptions has not been examined. In this paper, a simple alternative method is proposed to estimate the dementia incidence rate before enrollment using prevalent cohort data with left truncation. Furthermore, simulations are used to examine the performance of the estimation of disease incidence under different assumptions of disease incidence rates and excess mortality hazards due to disease. As an application, the method is applied to the prevalent cases of dementia from the Honolulu-Asia Aging Study to estimate the dementia incidence rate and to assess the effect of hypertension, Apoe 4 and education on dementia onset.

13.
All statistical methods involve basic model assumptions, which if violated render results of the analysis dubious. A solution to such a contingency is to seek an appropriate model or to modify the customary model by introducing additional parameters. Both of these approaches are in general cumbersome and demand uncommon expertise. An alternative is to transform the data to achieve compatibility with a well understood and convenient customary model with readily available software. The well-known example is the Box–Cox data transformation, developed in order to make the normal theory linear model usable even when the assumptions of normality and homoscedasticity are not met. In reliability analysis the model appropriateness is determined by the nature of the hazard function. The well-known Weibull distribution is the most commonly employed model for this purpose. However, this model, which allows only a small spectrum of monotone hazard rates, is especially inappropriate if the data indicate bathtub-shaped hazard rates. In this paper, a new model based on the use of data transformation is presented for modeling bathtub-shaped hazard rates. Parameter estimation methods are studied for this new (transformation) approach. Examples and results of comparisons between the new model and other bathtub-shaped models are shown to illustrate the applicability of this new model.
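The Weibull limitation mentioned above is easy to see numerically: its hazard h(t) = (k/λ)(t/λ)^(k−1) is monotone for every shape k, whereas a bathtub shape first falls and then rises. The sketch below demonstrates this with one common illustrative device, an additive mixture of a decreasing and an increasing Weibull hazard; this is not the paper's transformation-based model, only a baseline showing what "bathtub-shaped" means:

```python
def weibull_hazard(t, shape, scale=1.0):
    """Weibull hazard h(t) = (shape/scale) * (t/scale)**(shape - 1):
    strictly decreasing for shape < 1, constant for shape = 1,
    strictly increasing for shape > 1 -- never bathtub-shaped."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def additive_bathtub_hazard(t, k1=0.5, k2=3.0):
    """One illustrative way to obtain a bathtub shape (NOT the paper's
    model): a decreasing 'infant mortality' hazard (k1 < 1) plus an
    increasing 'wear-out' hazard (k2 > 1)."""
    return weibull_hazard(t, k1) + weibull_hazard(t, k2)

ts = [0.1 * i for i in range(1, 30)]
h = [additive_bathtub_hazard(t) for t in ts]   # falls, bottoms out, then rises
```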

14.
In many longitudinal studies of recurrent events there is an interest in assessing how recurrences vary over time and across treatments or strata in the population. Usual analyses of such data assume a parametric form for the distribution of the recurrences over time. Here, we consider a semiparametric model for the analysis of such longitudinal studies where data are collected as panel counts. The model is a non-homogeneous Poisson process with a multiplicative intensity incorporating covariates through a proportionality assumption. Heterogeneity is accounted for in the model through subject-specific random effects. The key feature of the model is the use of regression splines to model the distribution of recurrences over time. This provides a flexible and robust method of relaxing parametric assumptions. In addition, quasi-likelihood methods are proposed for estimation, requiring only first and second moment assumptions to obtain consistent estimates. Simulations demonstrate that the method produces estimators of the rate with low bias and whose standardized distributions are well approximated by the normal. The usefulness of this approach, especially as an exploratory tool, is illustrated by analyzing a study designed to assess the effectiveness of a pheromone treatment in disturbing the mating habits of the Cherry Bark Tortrix moth.

15.
This paper presents an extension of instrumental variable estimation to nonlinear regression models. For the linear model, the extended estimator is equivalent to the two-stage least squares estimator. The extended estimator is consistent for an important class of nonlinear models, including the logistic model, under relatively weak assumptions on the distribution of the measurement error. An example and simulation study are presented for the logistic regression model. The simulations suggest the estimator is reasonably efficient.
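For the linear case the abstract mentions, the instrumental-variable slope has a simple closed form, b = cov(z, y) / cov(z, x), which coincides with two-stage least squares. A minimal numeric sketch (synthetic data with a deliberately error-contaminated regressor; the paper's contribution, the nonlinear/logistic extension, is not shown here):

```python
def iv_slope(x, y, z):
    """Instrumental-variable slope in y = a + b*x + error, with
    instrument z: b_hat = cov(z, y) / cov(z, x). Equals two-stage
    least squares in this single-regressor linear case."""
    n = len(x)
    mx, my, mz = sum(x) / n, sum(y) / n, sum(z) / n
    cov_zy = sum((zi - mz) * (yi - my) for zi, yi in zip(z, y))
    cov_zx = sum((zi - mz) * (xi - mx) for zi, xi in zip(z, x))
    return cov_zy / cov_zx

def ols_slope(x, y):
    """Ordinary least squares slope, for comparison (attenuated when
    x is measured with error)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

z = [1.0, 2.0, 3.0, 4.0, 5.0]           # instrument = error-free exposure
u = [0.1, -0.2, 0.2, -0.2, 0.1]         # measurement error, uncorrelated with z
x = [zi + ui for zi, ui in zip(z, u)]   # observed, error-prone regressor
y = [1.0 + 2.0 * zi for zi in z]        # outcome driven by the true exposure
```

Here `iv_slope(x, y, z)` recovers the true slope 2, while `ols_slope(x, y)` is attenuated toward zero by the measurement error.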

16.
Likelihood-based, mixed-effects models for repeated measures (MMRMs) are occasionally used in primary analyses for group comparisons of incomplete continuous longitudinal data. Although MMRM analysis is generally valid under missing-at-random assumptions, it is invalid under not-missing-at-random (NMAR) assumptions. We consider the possibility of bias in the estimated treatment effect using standard MMRM analysis in a motivating case, and propose simple and easily implementable pattern mixture models within the framework of mixed-effects modeling, to handle the NMAR data with differential missingness between treatment groups. The proposed models are a new form of pattern mixture model that employ a categorical time variable when modeling the outcome and a continuous time variable when modeling the missing-data patterns. The models can directly provide an overall estimate of the treatment effect of interest using the average of the distribution of the missingness indicator and a categorical time variable in the same manner as MMRM analysis. Our simulation results indicate that the bias of the treatment effect for MMRM analysis was considerably larger than that for the pattern mixture model analysis under NMAR assumptions. In the case study, it would be dangerous to interpret only the results of the MMRM analysis, and the proposed pattern mixture model would be useful as a sensitivity analysis for treatment effect evaluation.

17.
Advances in computation mean that it is now possible to fit a wide range of complex models to data, but there remains the problem of selecting a model on which to base reported inferences. Following an early suggestion of Box & Tiao, it seems reasonable to seek 'inference robustness' in reported models, so that alternative assumptions that are reasonably well supported would not lead to substantially different conclusions. We propose a four-stage modelling strategy in which we iteratively assess and elaborate an initial model, measure the support for each of the resulting family of models, assess the influence of adopting alternative models on the conclusions of primary interest, and identify whether an approximate model can be reported. The influence-support plot is then introduced as a tool to aid model comparison. The strategy is semi-formal, in that it could be embedded in a decision-theoretic framework but requires substantive input for any specific application. The one restriction of the strategy is that the quantity of interest, or 'focus', must retain its interpretation across all candidate models. It is, therefore, applicable to analyses whose goal is prediction, or where a set of common model parameters are of interest and candidate models make alternative distributional assumptions. The ideas are illustrated by two examples. Technical issues include the calibration of the Kullback-Leibler divergence between marginal distributions, and the use of alternative measures of support for the range of models fitted.
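The quantity being calibrated in the last sentence is the Kullback-Leibler divergence. A minimal sketch of the discrete case, KL(p || q) = Σ p_i log(p_i / q_i), shown only to fix the definition (the paper's calibration procedure for marginal distributions is not reproduced here):

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence KL(p || q) for two discrete
    distributions over the same support; zero iff p == q, and
    asymmetric in its arguments."""
    if any(qi <= 0 for qi in q):
        raise ValueError("q must be strictly positive where compared")
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A fair coin versus a heavily biased one.
d = kl_divergence([0.5, 0.5], [0.9, 0.1])
```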

18.
One way to analyze the AB-BA crossover trial with multivariate response is proposed. The multivariate model is given and the assumptions discussed. Two possibilities for the treatment effects hypothesis are considered. The statistical tests include the use of Hotelling's T² statistic, and a transformation equivalent to that of Jones and Kenward for the univariate case. Data from a nutrition experiment in Mexico illustrate the method. The multiple comparisons are carried out using Bonferroni intervals and the validity of the assumptions is explored. The main conclusions include the finding that some of the assumptions are not a requirement for the multivariate analysis; however, the sample sizes are important.

19.
The accurate estimation of an individual's usual dietary intake is an important topic in nutritional epidemiology. This paper considers the best linear unbiased predictor (BLUP) computed from repeatedly measured dietary data and derives several nonparametric prediction intervals for true intake. However, the performance of the BLUP and the validity of the prediction intervals depend on whether the required model assumptions for the true-intake estimation problem hold. To address this issue, the paper examines how the BLUP and prediction intervals behave when the model assumptions are violated, and then proposes an analysis pipeline for checking them with data.
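In the textbook one-way random-effects setting, the BLUP of a subject's usual intake is the subject mean shrunk toward the grand mean. The sketch below shows that shrinkage formula with made-up variance components and means; the paper's setting and its nonparametric prediction intervals are more general:

```python
def blup(subject_means, n_reps, var_between, var_within, grand_mean):
    """Best linear unbiased predictor of each subject's usual intake
    in a one-way random-effects model: shrink the observed subject
    mean toward the grand mean with weight
    w = var_between / (var_between + var_within / n_reps)."""
    w = var_between / (var_between + var_within / n_reps)
    return [grand_mean + w * (m - grand_mean) for m in subject_means]

# Hypothetical numbers: two subjects with 4 repeated measures each.
preds = blup(subject_means=[10.0, 30.0], n_reps=4,
             var_between=9.0, var_within=12.0, grand_mean=20.0)
# w = 9 / (9 + 3) = 0.75, so the predictions are 12.5 and 27.5
```

The more replicate days per subject (larger `n_reps`), the less shrinkage is applied, since the subject mean then estimates usual intake more reliably.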

20.
This paper discusses methods for clustering a continuous covariate in a survival analysis model. The advantages of using a categorical covariate defined by discretizing a continuous covariate (via clustering) are (i) enhanced interpretability of the covariate's impact on survival and (ii) relaxation of model assumptions that are usually required for survival models, such as the proportional hazards model. Simulations and an example are provided to illustrate the methods.
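One generic way to carry out the discretization step described above is one-dimensional k-means; the abstract does not specify the paper's clustering procedure, so this is offered only as an illustrative default:

```python
def kmeans_1d(values, k=2, iters=50):
    """Lloyd's algorithm in one dimension: cluster a continuous
    covariate into k categories. Returns a cluster label per value
    and the final cluster centres."""
    svals = sorted(values)
    # Spread initial centres across the quantiles of the data.
    centers = [svals[(2 * i + 1) * len(svals) // (2 * k)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda i: abs(v - centers[i]))
            groups[j].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    labels = [min(range(k), key=lambda i: abs(v - centers[i])) for v in values]
    return labels, centers

# Hypothetical covariate with two well-separated groups.
values = [1.0, 1.2, 0.8, 1.1, 5.0, 5.2, 4.8, 5.1]
labels, centers = kmeans_1d(values, k=2)
```

The resulting `labels` would then enter the survival model as a categorical covariate in place of the raw continuous values.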


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号