Similar Documents (20 results)
1.
In the development of many diseases there are often associated random variables which continuously reflect the progress of a subject towards the final expression of the disease (failure). At any given time these processes, which we call stochastic covariates, may provide information about the current hazard and the remaining time to failure. Likewise, in situations when the specific times of key prior events are not known, such as the time of onset of an occult tumour or the time of infection with HIV-1, it may be possible to identify a stochastic covariate which reveals, indirectly, when the event of interest occurred. The analysis of carcinogenicity trials which involve occult tumours is usually based on the time of death or sacrifice and an indicator of tumour presence for each animal in the experiment. However, the size of an occult tumour observed at the endpoint represents data concerning tumour development which may convey additional information concerning both the tumour incidence rate and the rate of death to which tumour-bearing animals are subject. We develop a stochastic model for tumour growth and suggest different ways in which the effect of this growth on the hazard of failure might be modelled. Using a combined model for tumour growth and additive competing risks of death, we show that if this tumour size information is used, assumptions concerning tumour lethality, the context of observation or multiple sacrifice times are no longer necessary in order to estimate the tumour incidence rate. Parametric estimation based on the method of maximum likelihood is outlined and is applied to simulated data from the combined model. The results of this limited study confirm that use of the stochastic covariate tumour size results in more precise estimation of the incidence rate for occult tumours.

2.
Analyses of carcinogenicity experiments involving occult (hidden) tumours are usually based on cause-of-death information or the results of many interim sacrifices. A simple compartmental model is described that does not involve the cause of death. The method of analysis requires only one interim sacrifice, in addition to the usual terminal kill, to ensure that the tumour incidence rates can be estimated. One advantage of the approach is demonstrated in the analysis of glomerulosclerosis following exposure to ionizing radiation. Although the semiparametric model involves fewer parameters, estimates of key functions derived in this analysis are similar to those obtained previously by using a nonparametric method that involves many more parameters.

3.
Information derived from interim sacrifices or on cause of death is routinely used in the statistical analyses of carcinogenicity experiments involving occult tumours. The authors describe a simple semiparametric model which does not require this information. Natural deaths during the experiment and the usual terminal sacrifice provide sufficient information to ensure that the tumour incidence rates, which are of primary interest in occult-tumour studies, can be estimated nonparametrically. The advantages of this semiparametric approach to the analysis of survival/sacrifice experiments are illustrated using data from a study on benzyl acetate conducted under the U.S. National Toxicology Program. The results derived compare favourably with those obtained using a previously published approach to the analysis of tumorigenicity data.

4.
A fully parametric multistate model is explored for the analysis of animal carcinogenicity experiments in which the time of tumour onset is not known. This model does not require assumptions about tumour lethality or cause of death judgements and can be fitted in the absence of sacrifice data. The model is constructed as a three-state model with simple parametric forms for the transition rates. Maximum likelihood methods are used to estimate the transition rates and different treatment groups are compared using likelihood ratio tests. Selection of an appropriate model and methods to assess the fit of the model are illustrated with data from animal experiments. Comparisons with standard methods are made.

5.
Statistical inference about tumorigenesis should focus on the tumour incidence rate. Unfortunately, in most animal carcinogenicity experiments, tumours are not observable in live animals and censoring of the tumour onset times is informative. In this paper, we propose a Bayesian method for analysing data from such studies. Our approach focuses on the incidence of tumours and accommodates occult tumours and censored onset times without restricting tumour lethality, relying on cause-of-death data, or requiring interim sacrifices. We represent the underlying state of nature by a multistate stochastic process and assume general probit models for the time-specific transition rates. These models allow the incorporation of covariates, historical control data and subjective prior information. The inherent flexibility of this approach facilitates the interpretation of results, particularly when the sample size is small or the data are sparse. We use a Gibbs sampler to estimate the relevant posterior distributions. The methods proposed are applied to data from a US National Toxicology Program carcinogenicity study.

6.
A Bayesian intensity model is presented for studying a bioassay problem involving interval-censored tumour onset times, without discretizing the times of death. Both tumour lethality and baseline hazard rates are estimated in the absence of cause-of-death information. Markov chain Monte Carlo methods are used in the numerical estimation, and sophisticated group updating algorithms are applied to achieve reasonable convergence properties. The method is applied to rat tumorigenicity data previously analysed by Ahn, Moon and Kodell, and yields results that appear more realistic.

7.
A model for the lifetime of a system is considered in which the system is susceptible to simultaneous failures of two or more components, the failures having a common external cause. Three sets of discrete failure data from the US nuclear industry are examined to motivate and illustrate the model derivation: they are for motor-operated valves, cooling fans and emergency diesel generators. To achieve target reliabilities, these components must be placed in systems that have built-in redundancy. Consequently, multiple failures due to a common cause are critical in the risk of core meltdown. Vesely has offered a simple methodology for inference, called the binomial failure rate model: external events are assumed to be governed by a Poisson shock model in which resulting shocks kill X out of m system components, X having a binomial distribution with parameters (m, p), 0 < p < 1. In many applications the binomial failure rate model fits failure data poorly, and the model has not typically been applied to probabilistic risk assessments in the nuclear industry. We introduce a realistic generalization of the binomial failure rate model by assigning a mixing distribution to the unknown parameter p. The distribution is generally identifiable, and its unique nonparametric maximum likelihood estimator can be obtained by using a simple iterative scheme.
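The shock mechanism described above is easy to simulate. The sketch below (all parameter values are illustrative, not taken from the nuclear data sets) draws per-shock casualty counts X ~ Binomial(m, p), first with p fixed as in Vesely's binomial failure rate model and then with p drawn afresh from a Beta mixing distribution, as in the generalization:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_bfr_counts(n_shocks, m, p_sampler):
    """Number of components killed by each of n_shocks external shocks.

    p_sampler() supplies the binomial parameter p for each shock: a
    constant for the plain binomial failure rate model, or a random
    draw from a mixing distribution for the generalized model.
    """
    counts = []
    for _ in range(n_shocks):
        p = p_sampler()                    # p for this shock
        counts.append(rng.binomial(m, p))  # X | p ~ Binomial(m, p)
    return np.array(counts)

# Plain BFR: degenerate mixing distribution (p fixed at 0.3; illustrative)
fixed = simulate_bfr_counts(1000, m=4, p_sampler=lambda: 0.3)

# Mixed BFR: p ~ Beta(2, 5), inducing extra-binomial variation in X
mixed = simulate_bfr_counts(1000, m=4, p_sampler=lambda: rng.beta(2, 5))
```

Mixing over p inflates the variance of X relative to a single binomial, which is exactly the extra dispersion that the plain model fails to capture in the failure data.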

8.
A graphical technique is introduced to assess the adequacy of the method of unweighted means in providing approximate F-tests for an unbalanced random model. These tests are similar to those obtained under a balanced ANOVA. The proposed technique is simple and can easily be used to determine the effects of imbalance and values of the variance components on the adequacy of the approximation. The one-way and two-way random models are used to illustrate the proposed methodology. Extensions to higher-order models are also mentioned.

9.
Commonly used tests to detect carcinogenic potential of a test compound make extreme assumptions about the lethality of tumors, due to their occult nature. In this paper we compare a nonparametric test, which uses interim sacrifice to avoid such assumptions, with these tests using simulation based on the ED01 data. Results indicate that in the presence of a significant difference in the mortality rate with treatment, commonly used methods could fail to maintain the nominal significance level. However, when there is no difference in the mortality rate, such procedures are robust to the underlying assumptions about the lethality of tumors and more powerful than the nonparametric test using interim sacrifice.

10.
Covariate measurement error problems have been extensively studied in the context of right-censored data but less so for current status data. Motivated by the zebrafish basal cell carcinoma (BCC) study, where the occurrence time of BCC was only known to lie before or after a sacrifice time and where the covariate (Sonic hedgehog expression) was measured with error, the authors describe a semiparametric maximum likelihood method for analyzing current status data with mismeasured covariates under the proportional hazards model. They show that the estimator of the regression coefficient is asymptotically normal and efficient and that the profile likelihood ratio test is asymptotically chi-squared. They also provide an easily implemented algorithm for computing the estimators. They evaluate their method through simulation studies, and illustrate it with a real data example. The Canadian Journal of Statistics 39: 73–88; 2011 © 2011 Statistical Society of Canada

11.
We study a binary regression model using the complementary log–log link, where the response variable Δ is the indicator of an event of interest (for example, the incidence of cancer, or the detection of a tumour) and the set of covariates can be partitioned as (X, Z), where Z (real valued) is the primary covariate and X (vector valued) denotes a set of control variables. The conditional probability of the event of interest is assumed to be monotonic in Z, for every fixed X. A finite-dimensional (regression) parameter β describes the effect of X. We show that the baseline conditional probability function (corresponding to X = 0) can be estimated by isotonic regression procedures and develop an asymptotically pivotal likelihood-ratio-based method for constructing (asymptotic) confidence sets for the regression function. We also show how likelihood-ratio-based confidence intervals for the regression parameter can be constructed using the chi-square distribution. An interesting connection to the Cox proportional hazards model under current status censoring emerges. We present simulation results to illustrate the theory and apply our results to a data set involving lung tumour incidence in mice.
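The isotonic estimation step can be illustrated with a small sketch. Assuming a single primary covariate Z and no controls (the X = 0 baseline case), the monotone probability P(Δ = 1 | Z = z) can be estimated by pool-adjacent-violators regression of the binary responses on the sorted z values; the data-generating link and sample size below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

def pava(y):
    """Pool-adjacent-violators algorithm: the least-squares fit to y
    that is non-decreasing in the index (unit weights)."""
    merged = []
    for v in map(float, y):
        merged.append([v, 1])
        # Pool adjacent blocks while the monotonicity constraint is violated
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            v2, w2 = merged.pop()
            v1, w1 = merged[-1]
            merged[-1] = [(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2]
    out = []
    for v, w in merged:
        out.extend([v] * w)
    return np.array(out)

# Binary responses whose success probability is monotone in z
# (a complementary log-log style link, purely for illustration)
z = np.sort(rng.uniform(0.0, 1.0, 300))
delta = rng.binomial(1, 1.0 - np.exp(-np.exp(-1.0 + 2.0 * z)))

p_hat = pava(delta)  # monotone estimate of P(delta = 1 | z)
```

Because each fitted value is an average of 0/1 responses, the estimate automatically lies in [0, 1] and is non-decreasing in z, matching the monotonicity assumption on the conditional probability.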

12.
A new statistical approach is developed for estimating the carcinogenic potential of drugs and other chemical substances used by humans. Improved statistical methods are developed for rodent tumorigenicity assays that have interval sacrifices but not cause-of-death data. For such experiments, this paper proposes a nonparametric maximum likelihood estimation method for estimating the distributions of the time to onset of and the time to death from the tumour. The log-likelihood function is optimized using a constrained direct search procedure. Using the maximum likelihood estimators, the number of fatal tumours in an experiment can be imputed. By applying the procedure proposed to a real data set, the effect of calorie restriction is investigated. In this study, we found that calorie restriction delays the tumour onset time significantly for pituitary tumours. The present method can result in substantial economic savings by relieving the need for a case-by-case assignment of the cause of death or context of observation by pathologists. The ultimate goal of the method proposed is to use the imputed number of fatal tumours to modify Peto's International Agency for Research on Cancer test for application to tumorigenicity assays that lack cause-of-death data.

13.
In this paper, we address the problem of simulating from a data-generating process for which the observed data do not follow a regular probability distribution. One existing method for doing this is bootstrapping, but it is incapable of interpolating between observed data. For univariate or bivariate data, in which a mixture structure can easily be identified, we could instead simulate from a Gaussian mixture model. In general, though, we would have the problem of identifying and estimating the mixture model. Instead, we introduce a non-parametric method for simulating such datasets: Kernel Carlo Simulation. Our algorithm begins by using kernel density estimation to build a target probability distribution. Then, an envelope function that is guaranteed to be higher than the target distribution is created. We then use simple accept–reject sampling. Our approach is more flexible than others, can simulate intelligently across gaps in the data, and requires no subjective modelling decisions. With several univariate and multivariate examples, we show that our method returns simulated datasets that, compared with the observed data, retain the covariance structures and have remarkably similar distributional characteristics.
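The three steps named above (kernel density estimate, dominating envelope, accept–reject) can be sketched in a few lines. This is a minimal one-dimensional illustration using a uniform envelope over an expanded data range; the paper's envelope construction may well be more refined, and the bimodal data are simulated for the example:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Irregular (bimodal) observed data, invented for the illustration
data = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(3.0, 1.0, 300)])

kde = gaussian_kde(data)  # step 1: target density f-hat

# Step 2: envelope c*g(x) >= f-hat(x), with g = Uniform(lo, hi) and c set
# from the maximum of f-hat on a fine grid plus a small safety margin
lo, hi = data.min() - 3.0, data.max() + 3.0
grid = np.linspace(lo, hi, 2000)
c = kde(grid).max() * (hi - lo) * 1.05

def kernel_carlo_sample(n):
    """Step 3: accept-reject sampling from the KDE target."""
    out = []
    while len(out) < n:
        x = rng.uniform(lo, hi)              # proposal from g
        u = rng.uniform()
        if u * c / (hi - lo) <= kde(x)[0]:   # accept w.p. f-hat(x) / (c g(x))
            out.append(x)
    return np.array(out)

sim = kernel_carlo_sample(500)
```

Unlike a bootstrap resample, `sim` contains values strictly between (and slightly beyond) the observed data points, because the kernel smooths probability mass across gaps in the data.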

14.
Efficient inference for regression models requires that the heteroscedasticity be taken into account. We consider statistical inference under heteroscedasticity in a semiparametric measurement error regression model, in which some covariates are measured with errors. This paper makes three contributions. First, we propose a new method for testing the heteroscedasticity. The advantages of the proposed method over the existing ones are that it does not need any nonparametric estimation and does not involve any mismeasured variables. Second, we propose a new two-step estimator for the error variances if there is heteroscedasticity. Finally, we propose a weighted estimating equation-based estimator (WEEBE) for the regression coefficients and establish its asymptotic properties. Compared with existing estimators, the proposed WEEBE is asymptotically more efficient, avoids undersmoothing the regressor functions and requires fewer restrictions on the observed regressors. Simulation studies show that the proposed test procedure and estimators perform well in finite samples. A real data set is used to illustrate the utility of our proposed methods.

15.
We generalize the Gaussian mixture transition distribution (GMTD) model introduced by Le and co-workers to the mixture autoregressive (MAR) model for the modelling of non-linear time series. The models consist of a mixture of K stationary or non-stationary AR components. The advantages of the MAR model over the GMTD model include a fuller range of shape-changing predictive distributions and the ability to handle cycles and conditional heteroscedasticity in the time series. The stationarity conditions and autocorrelation function are derived. The estimation is easily done via a simple EM algorithm and the model selection problem is addressed. The shape-changing feature of the conditional distributions makes these models capable of modelling time series with multimodal conditional distributions and with heteroscedasticity. The models are applied to two real data sets and compared with other competing models. The MAR models appear to capture features of the data better than other competing models do.
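To make the model concrete, a two-component MAR(1) process can be simulated as follows: at each time point one AR component is selected with the mixing probabilities, so the one-step-ahead conditional distribution is a two-component normal mixture. All parameter values here are invented for the sketch, not estimated from the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative two-component MAR(1) parameters
alphas = [0.6, 0.4]    # mixing probabilities
phi0   = [0.0, 1.0]    # component intercepts
phi1   = [0.5, -0.4]   # component AR(1) coefficients
sigma  = [1.0, 2.0]    # component innovation scales

def simulate_mar(n, burn=100):
    """Simulate a MAR(1) series: at each t a component k is drawn with
    probability alphas[k], and y[t] follows that component's Gaussian
    AR(1) dynamics given y[t-1]. A burn-in is discarded."""
    y = np.zeros(n + burn)
    for t in range(1, n + burn):
        k = rng.choice(len(alphas), p=alphas)
        y[t] = phi0[k] + phi1[k] * y[t - 1] + sigma[k] * rng.normal()
    return y[burn:]

series = simulate_mar(2000)
```

Because the two components have different intercepts and scales, the conditional distribution of y[t] given y[t-1] is bimodal and heteroscedastic, which is exactly the flexibility the abstract credits to the MAR model.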

16.
Incorporation of historical controls using semiparametric mixed models
The analysis of animal carcinogenicity data is complicated by various statistical issues. A topic of recent debate is how to control for the effect of the animal's body weight on the outcome of interest, the onset of tumours. We propose a method which incorporates historical information from the control animals in previously conducted experiments. We allow non-linearity in the effects of body weight by modelling the relationship nonparametrically through a penalized spline. A simple extension of the penalized spline model allows the relationship between weight and the onset of tumour to vary from one experiment to another.

17.
A complication in analysing tumour data is that tumours detected in a screening programme tend to be slowly progressive; this gives rise to the left-truncated sampling that is inherent in screening studies. Under the assumption that all subjects have the same tumour growth function, Ghosh [Proportional hazards regression for cancer studies, Biometrics 64 (2008), pp. 141–148] developed estimation procedures for the proportional hazards model. In this note, by modelling the growth function as a function of covariates and parameterizing the distribution function of the left truncation time, we demonstrate that Ghosh's approach can be extended to the case when each subject has a specific growth function. A simulation study is conducted to demonstrate the potential usefulness of the proposed estimators for the regression parameters in the proportional hazards model.

18.
Dimension reduction with bivariate responses, especially a mix of continuous and categorical responses, can be of special interest. One immediate application is to regressions with censoring. In this paper, we propose two novel methods to reduce the dimension of the covariates of a bivariate regression via a model-free approach. Both methods enjoy a simple asymptotic chi-squared distribution for testing the dimension of the regression, and also allow us to test the contributions of the covariates easily without pre-specifying a parametric model. The new methods outperform the current one both in simulations and in the analysis of real data. The well-known PBC data are used to illustrate the application of our method to censored regression.

19.
Generalized additive mixed models are proposed for overdispersed and correlated data, which arise frequently in studies involving clustered, hierarchical and spatial designs. This class of models allows flexible functional dependence of an outcome variable on covariates by using nonparametric regression, while accounting for correlation between observations by using random effects. We estimate nonparametric functions by using smoothing splines and jointly estimate smoothing parameters and variance components by using marginal quasi-likelihood. Because maximizing the objective function often requires numerical integration, double penalized quasi-likelihood is proposed to make approximate inference. Frequentist and Bayesian inferences are compared. A key feature of the method proposed is that it allows us to make systematic inference on all model components within a unified parametric mixed model framework and can be easily implemented by fitting a working generalized linear mixed model by using existing statistical software. A bias correction procedure is also proposed to improve the performance of double penalized quasi-likelihood for sparse data. We illustrate the method with an application to infectious disease data and we evaluate its performance through simulation.

20.
In survival data that are collected from phase III clinical trials on breast cancer, a patient may experience more than one event, including recurrence of the original cancer, new primary cancer and death. Radiation oncologists are often interested in comparing patterns of local or regional recurrences alone as first events to identify a subgroup of patients who need to be treated by radiation therapy after surgery. The cumulative incidence function provides estimates of the cumulative probability of locoregional recurrences in the presence of other competing events. A simple version of the Gompertz distribution is proposed to parameterize the cumulative incidence function directly. The model interpretation for the cumulative incidence function is more natural than it is with the usual cause-specific hazard parameterization. Maximum likelihood analysis is used to estimate simultaneously parametric models for cumulative incidence functions of all causes. The parametric cumulative incidence approach is applied to a data set from the National Surgical Adjuvant Breast and Bowel Project and compared with analyses that are based on parametric cause-specific hazard models and nonparametric cumulative incidence estimation.
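The abstract does not spell out the "simple version of the Gompertz distribution". One common Gompertz-type parameterization of a cumulative incidence function, written here as an illustration of the idea rather than as the paper's exact model, takes for cause j

\[
F_j(t) \;=\; 1 - \exp\!\left\{ \frac{\lambda_j}{\alpha_j}\bigl(1 - e^{\alpha_j t}\bigr) \right\}, \qquad \lambda_j > 0,
\]

which for \(\alpha_j < 0\) levels off at \(F_j(\infty) = 1 - e^{\lambda_j/\alpha_j} < 1\). A cumulative incidence function in a competing-risks setting must be a sub-distribution function that can plateau below one, which is why such an improper Gompertz form is a natural choice for direct parameterization.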

