Similar Documents
20 similar documents found (search time: 15 ms)
1.
The estimation of the incidence of tumours in an animal carcinogenicity study is complicated by the occult nature of the tumours involved (i.e. tumours are not observable before an animal's death). Also, the lethality of tumours is generally unknown, making the tumour incidence function non-identifiable without interim sacrifices, cause-of-death data or modelling assumptions. Although Kaplan–Meier curves for overall survival are typically displayed, obtaining analogous plots for tumour incidence generally requires fairly elaborate model fitting. We present a case-study of tetrafluoroethylene to illustrate a simple method for estimating the incidence of tumours as a function of more easily estimable components. One of the components, tumour prevalence, is modelled by using a generalized additive model, which leads to estimates that are more flexible than those derived under the usual parametric models. A multiplicative assumption for tumour lethality allows for the incorporation of concomitant information, such as the size of tumours. Our approach requires only terminal sacrifice data although additional sacrifice data are easily accommodated. Simulations are used to illustrate the estimator proposed and to evaluate its properties. The method also yields a simple summary measure of tumour lethality, which can be helpful in interpreting the results of a study.
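The overall-survival Kaplan–Meier curves mentioned in this abstract are simple to compute from the death times alone; a minimal sketch follows (this is only the survival display, not the authors' tumour-incidence estimator, which needs the prevalence and lethality modelling described above):

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit estimate of the survival function.
    times: observed times; events: 1 = death observed, 0 = censored."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    uniq = np.unique(times[events == 1])          # distinct event times
    surv, s = [], 1.0
    for t in uniq:
        at_risk = np.sum(times >= t)              # animals still under observation
        d = np.sum((times == t) & (events == 1))  # deaths at t
        s *= 1.0 - d / at_risk
        surv.append((t, s))
    return surv
```

Plotting the returned step function against time gives the usual survival display; the tumour-incidence curve requires the extra modelling discussed in the abstract.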

2.
In the development of many diseases there are often associated random variables which continuously reflect the progress of a subject towards the final expression of the disease (failure). At any given time these processes, which we call stochastic covariates, may provide information about the current hazard and the remaining time to failure. Likewise, in situations when the specific times of key prior events are not known, such as the time of onset of an occult tumour or the time of infection with HIV-1, it may be possible to identify a stochastic covariate which reveals, indirectly, when the event of interest occurred. The analysis of carcinogenicity trials which involve occult tumours is usually based on the time of death or sacrifice and an indicator of tumour presence for each animal in the experiment. However, the size of an occult tumour observed at the endpoint represents data concerning tumour development which may convey additional information concerning both the tumour incidence rate and the rate of death to which tumour-bearing animals are subject. We develop a stochastic model for tumour growth and suggest different ways in which the effect of this growth on the hazard of failure might be modelled. Using a combined model for tumour growth and additive competing risks of death, we show that if this tumour size information is used, assumptions concerning tumour lethality, the context of observation or multiple sacrifice times are no longer necessary in order to estimate the tumour incidence rate. Parametric estimation based on the method of maximum likelihood is outlined and is applied to simulated data from the combined model. The results of this limited study confirm that use of the stochastic covariate tumour size results in more precise estimation of the incidence rate for occult tumours.

3.
A fully parametric multistate model is explored for the analysis of animal carcinogenicity experiments in which the time of tumour onset is not known. This model does not require assumptions about tumour lethality or cause of death judgements and can be fitted in the absence of sacrifice data. The model is constructed as a three-state model with simple parametric forms for the transition rates. Maximum likelihood methods are used to estimate the transition rates and different treatment groups are compared using likelihood ratio tests. Selection of an appropriate model and methods to assess the fit of the model are illustrated with data from animal experiments. Comparisons with standard methods are made.

4.
In animal tumorigenicity data, the time of tumor occurrence is not observed, because the presence of a tumor can be ascertained only at the animal's death or sacrifice. Such an incomplete data structure makes it difficult to investigate the impact of treatment on the occurrence of tumors. A three-state model (no tumor–tumor–death) is used to model these sequentially occurring events and to connect them. In this paper, we also employ a frailty effect to model the dependence of death on tumor occurrence. For inference on the parameters, an EM algorithm is considered. The method is applied to a real bladder tumor data set, and a simulation study is performed to show the behavior of the proposed estimators.

5.
Development of anti-cancer therapies usually involves small-to-moderate-sized studies that provide initial estimates of response rates before larger studies are initiated to better quantify response. These early trials often each contain a single tumor type, possibly using other stratification factors. The response rate for a given tumor type is routinely reported as the percentage of patients meeting a clinical criterion (e.g. tumor shrinkage), without any regard to response in the other studies. These estimates (the maximum likelihood estimates, or MLEs) approximate the true value on average, but have variances that are usually large, especially for small-to-moderate-sized studies. The approach presented here is offered as a way to improve overall estimation of response rates when several small trials are considered, by reducing the total uncertainty. The shrinkage estimators considered here (James–Stein/empirical Bayes and hierarchical Bayes) are alternatives that use information from all studies to provide potentially better estimates for each study. While these estimates introduce a small bias, they have a considerably smaller variance, and thus tend to be better in terms of total mean squared error. These procedures provide a better view of drug performance in the group of tumor types as a whole, as opposed to estimating each response rate individually without consideration of the others. In technical terms, the vector of estimated response rates is, on average, nearer the vector of true values than the vector of the usual unbiased MLEs applied to such trials.
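As an illustration of the shrinkage idea, a method-of-moments empirical-Bayes version can be sketched as follows (the weighting scheme here is a generic textbook choice, not necessarily the exact estimator used in the paper):

```python
import numpy as np

def eb_shrink(successes, totals):
    """Empirical-Bayes shrinkage of per-trial response rates toward the
    pooled rate (method-of-moments sketch, not the paper's exact recipe)."""
    x, n = np.asarray(successes, float), np.asarray(totals, float)
    p = x / n                                       # raw per-trial MLEs
    p_bar = x.sum() / n.sum()                       # pooled response rate
    v = p_bar * (1 - p_bar) / n                     # within-trial sampling variance
    tau2 = max(np.var(p, ddof=1) - v.mean(), 0.0)   # between-trial variance
    w = tau2 / (tau2 + v)                           # shrinkage weights in [0, 1)
    return p_bar + w * (p - p_bar)
```

Each shrunk estimate sits between the trial's own rate and the pooled rate; trials with smaller sample sizes (larger sampling variance) are pulled harder toward the pool.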

6.
Statistical inference about tumorigenesis should focus on the tumour incidence rate. Unfortunately, in most animal carcinogenicity experiments, tumours are not observable in live animals and censoring of the tumour onset times is informative. In this paper, we propose a Bayesian method for analysing data from such studies. Our approach focuses on the incidence of tumours and accommodates occult tumours and censored onset times without restricting tumour lethality, relying on cause-of-death data, or requiring interim sacrifices. We represent the underlying state of nature by a multistate stochastic process and assume general probit models for the time-specific transition rates. These models allow the incorporation of covariates, historical control data and subjective prior information. The inherent flexibility of this approach facilitates the interpretation of results, particularly when the sample size is small or the data are sparse. We use a Gibbs sampler to estimate the relevant posterior distributions. The methods proposed are applied to data from a US National Toxicology Program carcinogenicity study.

7.
Integrated squared density derivatives are important to the plug-in type of bandwidth selector for kernel density estimation. Conventional estimators of these quantities are inefficient when there is a non-smooth boundary in the support of the density. We introduce estimators that utilize density derivative estimators obtained from local polynomial fitting. They retain the rates of convergence in mean-squared error that are familiar from non-boundary cases, and the constant coefficients have similar forms. The estimators and the formula for their asymptotically optimal bandwidths, which depend on integrated products of density derivatives, are applied to automatic bandwidth selection for local linear density estimation. Simulation studies show that the constructed bandwidth rule and the Sheather–Jones bandwidth are competitive in non-boundary cases, but the former overcomes boundary problems whereas the latter does not.
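For the non-boundary case, the plug-in recipe can be sketched with a normal-reference estimate of the integrated squared second derivative; with these choices it collapses to the familiar (4/3)^(1/5) σ n^(-1/5) rule (the paper's local-polynomial derivative estimators replace the normal-reference step and additionally handle boundaries):

```python
import numpy as np

def plugin_bandwidth(x):
    """Normal-reference plug-in bandwidth for Gaussian-kernel KDE.
    Uses R(f'') = 3 / (8 sqrt(pi) sigma^5); interior (non-boundary) case only."""
    x = np.asarray(x, float)
    n, sigma = len(x), x.std(ddof=1)
    r_f2 = 3.0 / (8.0 * np.sqrt(np.pi) * sigma**5)  # integrated squared f''
    r_k = 1.0 / (2.0 * np.sqrt(np.pi))              # R(K) for the Gaussian kernel
    return (r_k / (r_f2 * n)) ** 0.2                # mu_2(K) = 1 for the Gaussian
```

The general AMISE formula h = [R(K) / (mu_2(K)^2 R(f'') n)]^(1/5) is what the paper's estimators of R(f'') plug into; only the estimate of R(f'') changes.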

8.
Tang Qingguo, Statistics, 2015, 49(6): 1262–1278
This paper studies estimation in semi-functional linear regression. A general formulation is used to treat mean regression, median regression, quantile regression and robust mean regression in one setting. The linear slope function is estimated in the functional principal component basis, and the nonparametric component is approximated by a B-spline function. The global convergence rates of the estimators of the unknown slope function and the nonparametric component are established under a suitable norm. The convergence rate of the mean-squared prediction error for the proposed estimators is also established. Finite-sample properties of our procedures are studied through Monte Carlo simulations. A real data example based on the Berkeley growth data is used to illustrate the proposed methodology.
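The B-spline approximation of a nonparametric component amounts to least squares on a spline basis; a sketch using SciPy (knot placement and basis size here are illustrative choices, not the paper's):

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_lsq(x, y, n_interior=5, degree=3):
    """Least-squares fit of a smooth function by a cubic B-spline basis,
    as one might approximate a nonparametric regression component."""
    a, b = x.min(), x.max()
    t = np.r_[[a] * (degree + 1),
              np.linspace(a, b, n_interior + 2)[1:-1],
              [b] * (degree + 1)]                   # clamped knot vector
    m = len(t) - degree - 1                         # number of basis functions
    B = np.column_stack([BSpline(t, np.eye(m)[j], degree)(x)
                         for j in range(m)])        # design matrix of basis values
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return BSpline(t, coef, degree), B @ coef
```

In the semi-functional model the columns of this design matrix would simply be appended to the functional-principal-component regressors before solving the combined least-squares (or quantile/robust) problem.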

9.
We consider improving the estimation of parameters of diffusion processes for interest rates by incorporating information in bond prices. This is designed to improve the estimation of the drift parameters, which are known to be subject to large estimation errors. It is shown that having the bond prices together with the short rates leads to more efficient estimation of all parameters for the interest rate models, enhancing the efficiency of maximum likelihood estimation based on the interest rate dynamics alone. The combined estimation based on the bond prices and the interest rate dynamics can also provide inference for the risk premium parameter. Simulation experiments were conducted to confirm the theoretical properties of the estimators concerned. We analyze the overnight Fed funds rates together with U.S. Treasury bond prices. Supplementary materials for this article are available online.
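As a baseline for what "interest-rate dynamics alone" provides, the drift of a Vasicek-type model dr = κ(θ − r)dt + σ dW can be estimated from the short-rate series via its exact AR(1) discretization (a sketch of that baseline only; the paper's contribution is combining this likelihood with bond prices):

```python
import numpy as np

def vasicek_mle(r, dt):
    """Estimate (kappa, theta, sigma) of dr = kappa*(theta - r)dt + sigma dW
    from a rate series r via the exact transition
    r_{t+dt} = theta + (r_t - theta) * exp(-kappa*dt) + eps."""
    x, y = r[:-1], r[1:]
    b, a = np.polyfit(x, y, 1)                # AR(1) regression: y ~ a + b*x
    kappa = -np.log(b) / dt                   # mean-reversion speed
    theta = a / (1.0 - b)                     # long-run mean
    resid = y - (a + b * x)
    sigma2 = resid.var() * 2 * kappa / (1 - b**2)  # invert transition variance
    return kappa, theta, np.sqrt(sigma2)
```

The large sampling error of kappa and theta from this regression alone is exactly the problem the bond-price information is meant to mitigate.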

10.
Estimating population sizes by catch-effort methods is of great importance, in particular for harvested animal populations. A unified mixture model is introduced for different catchability functions to account for heterogeneous catchabilities among individual animals. A sequence of lower bounds on the odds that a single animal is not caught is proposed and used to define pseudo maximum likelihood estimators of the population size. The one-sided nature of the resulting confidence intervals is discussed. The proposed estimation methods are presented and illustrated by numerical studies.
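For intuition, the classical Leslie depletion estimator handles the homogeneous-catchability special case by regressing catch-per-unit-effort on cumulative prior catch (a sketch only; the paper's mixture model generalizes this to heterogeneous catchabilities):

```python
import numpy as np

def leslie_estimate(catches, efforts):
    """Leslie depletion estimator: CPUE_t = q * (N - K_t), where K_t is the
    cumulative catch removed before period t. Regressing CPUE on K_t gives
    slope -q and intercept q*N, hence N = intercept / q."""
    cpue = np.asarray(catches, float) / np.asarray(efforts, float)
    K = np.concatenate([[0.0], np.cumsum(catches)[:-1]])
    slope, intercept = np.polyfit(K, cpue, 1)
    q = -slope                                 # catchability coefficient
    return intercept / q, q                    # (population size, catchability)
```

Heterogeneous catchability biases this regression because the most catchable animals are removed first, which is precisely what the mixture-model lower bounds in the paper address.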

11.
Abstract.  In this paper, a two-stage estimation method for non-parametric additive models is investigated. Differing from Horowitz and Mammen's two-stage estimation, our first-stage estimators are designed not only for dimension reduction but also as initial approximations to all of the additive components. The second-stage estimators are obtained by using one-dimensional non-parametric techniques to refine the first-stage ones. From this procedure, we can reveal a relationship between the regression function spaces and the convergence rate, and then provide estimators that are optimal in the sense that, better than the usual one-dimensional mean-squared error (MSE) of order n^(-4/5), an MSE of order n^(-1) can be achieved when the underlying models are actually parametric. This shows that our estimation procedure is adaptive in a certain sense. It is also proved that the bandwidth selected by cross-validation depends only on one-dimensional kernel estimation and maintains the asymptotic optimality. Simulation studies show that the new estimators of the regression function and all components outperform the existing estimators, and their behaviours are often similar to that of the oracle estimator.
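The flavour of additive-model fitting can be conveyed by plain backfitting with a one-dimensional kernel smoother (a generic sketch, not the two-stage estimator studied in the paper):

```python
import numpy as np

def nw_smooth(x, y, h):
    """Nadaraya-Watson kernel smoother evaluated at the sample points."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return (w * y[None, :]).sum(1) / w.sum(1)

def backfit(y, x1, x2, h=0.1, iters=20):
    """Plain backfitting for y = mu + f1(x1) + f2(x2) + noise: cycle
    one-dimensional smooths of the partial residuals, centring each
    component so the decomposition is identifiable."""
    mu = y.mean()
    f1, f2 = np.zeros_like(y), np.zeros_like(y)
    for _ in range(iters):
        f1 = nw_smooth(x1, y - mu - f2, h)
        f1 -= f1.mean()
        f2 = nw_smooth(x2, y - mu - f1, h)
        f2 -= f2.mean()
    return mu, f1, f2
```

Each component is estimated by a one-dimensional smooth, which is why one-dimensional MSE rates such as n^(-4/5) govern the problem; the paper's two-stage refinement is what yields the n^(-1) rate when a component is in fact parametric.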

12.
Estimation of price indexes in the United States is generally based on complex rotating panel surveys. The sample for the Consumer Price Index, for example, is selected in three stages—geographic areas, establishments, and individual items—with 20% of the sample being replaced by rotation each year. At each period, a time series of data is available for use in estimation. This article examines how to best combine data for estimation of long-term and short-term changes and how to estimate the variances of the index estimators in the context of two-stage sampling. I extend the class of estimators, introduced by Valliant and Miller, of Laspeyres indexes formed using sample data collected from the current period back to a previous base period. Linearization estimators of variance for indexes of long-term and short-term change are derived. The theory is supported by an empirical simulation study using two-stage sampling of establishments and items from a population derived from U.S. Bureau of Labor Statistics data.
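For reference, the fixed-base Laspeyres index underlying these estimators weights current-period prices by base-period quantities:

```python
def laspeyres(p0, q0, pt):
    """Laspeyres price index: cost of the base-period basket at current
    prices, relative to its cost at base-period prices (x 100)."""
    base_cost = sum(p * q for p, q in zip(p0, q0))
    curr_cost = sum(p * q for p, q in zip(pt, q0))
    return 100.0 * curr_cost / base_cost
```

The survey-estimation problem in the article is how to estimate the price relatives entering this formula, and its variance, from a rotating two-stage sample rather than from complete data.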

13.
In this article, we consider parameter estimation for a hazard rate with multiple change points in the presence of long-term survivors. We combine two methods, one maximum likelihood based and one martingale based, to estimate the change points in the hazard rate for right-censored survival data while accounting for long-term survivors. A simulation study is carried out to compare the performance of the estimators. The method is applied to analyze two real datasets.
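The maximum-likelihood side of such a procedure can be sketched by profiling a piecewise-constant-hazard likelihood over a grid of candidate change points (a single change point and no cure fraction — a simplification of the setting in the article):

```python
import numpy as np

def hazard_changepoint(times, events, grid):
    """Profile likelihood for one change point tau in a piecewise-constant
    hazard (lambda1 before tau, lambda2 after) with right-censored data."""
    t, d = np.asarray(times, float), np.asarray(events)
    best = (-np.inf, None)
    for tau in grid:
        e1 = np.minimum(t, tau).sum()             # exposure before tau
        e2 = np.maximum(t - tau, 0.0).sum()       # exposure after tau
        d1 = int(((t <= tau) & (d == 1)).sum())   # events before tau
        d2 = int(((t > tau) & (d == 1)).sum())    # events after tau
        if min(d1, d2) == 0:
            continue                              # segment MLE undefined
        lam1, lam2 = d1 / e1, d2 / e2             # segment-wise hazard MLEs
        ll = d1 * np.log(lam1) - lam1 * e1 + d2 * np.log(lam2) - lam2 * e2
        if ll > best[0]:
            best = (ll, tau)
    return best[1]
```

Extending this to multiple change points and a long-term-survivor fraction is where the combined likelihood/martingale machinery of the article comes in.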

14.
This paper studies robust estimation of multivariate regression model using kernel weighted local linear regression. A robust estimation procedure is proposed for estimating the regression function and its partial derivatives. The proposed estimators are jointly asymptotically normal and attain nonparametric optimal convergence rate. One-step approximations to the robust estimators are introduced to reduce computational burden. The one-step local M-estimators are shown to achieve the same efficiency as the fully iterative local M-estimators as long as the initial estimators are good enough. The proposed estimators inherit the excellent edge-effect behavior of the local polynomial methods in the univariate case and at the same time overcome the disadvantages of the local least-squares based smoothers. Simulations are conducted to demonstrate the performance of the proposed estimators. Real data sets are analyzed to illustrate the practical utility of the proposed methodology. This work was supported by the National Natural Science Foundation of China (Grant No. 10471006).

15.
Results from the theory of uniformly most powerful invariant tests are used to develop a new parameter estimation procedure. The procedure is used to derive parameter estimators for several important distributions. Results of simulation studies comparing the performances of the new estimators and maximum likelihood estimators are presented.

16.
This article considers a class of estimators for the location and scale parameters in the location-scale model based on ‘synthetic data’ when the observations are randomly censored on the right. The asymptotic normality of the estimators is established using counting process and martingale techniques when the censoring distribution is known and unknown, respectively. In the case when the censoring distribution is known, we show that the asymptotic variances of this class of estimators depend on the data transformation and have a lower bound which is not achievable by this class of estimators. However, when the censoring distribution is unknown and estimated by the Kaplan–Meier estimator, this class of estimators has the same asymptotic variance and attains the lower bound for the case of a known censoring distribution. This differs from censored regression analysis, where the asymptotic variances depend on the data transformation. Our method has three valuable advantages over maximum likelihood estimation. First, our estimators are available in closed form and do not require an iterative algorithm. Second, simulation studies show that our estimators, being moment-based, are comparable to maximum likelihood estimators and outperform them when the sample size is small and the censoring rate is high. Third, our estimators are more robust to model misspecification than maximum likelihood estimators. Therefore, our method can serve as a competitive alternative to maximum likelihood estimation for location-scale models with censored data. A numerical example is presented to illustrate the proposed method.

17.
In this paper, the estimation of parameters for a three-parameter Weibull distribution based on a progressively Type-II right-censored sample is studied. Different estimation procedures for the complete-sample case are generalized to progressively censored data. These methods include the maximum likelihood estimators (MLEs), corrected MLEs, weighted MLEs, maximum product spacing estimators and least squares estimators. We also propose the use of a censored estimation method with one-step bias correction to obtain reliable initial estimates for the iterative procedures. These methods are compared via a Monte Carlo simulation study in terms of their biases, root mean squared errors and their rates of obtaining reliable estimates. Recommendations are made from the simulation results, and a numerical example is presented to illustrate all of the methods of inference developed here.
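For the complete-sample baseline, the generic Weibull MLE is available off the shelf; the progressively Type-II censored variants in the paper require writing out the censored likelihood explicitly. A sketch (fixing the location parameter at zero, i.e. the two-parameter special case, for numerical stability):

```python
from scipy.stats import weibull_min

# Simulated complete Weibull sample: shape c = 2, scale = 3, location = 0.
sample = weibull_min.rvs(c=2.0, scale=3.0, size=5000, random_state=1)

# Generic MLE via scipy; floc=0 pins the location, leaving shape and
# scale free. The paper's progressively censored likelihoods, corrected
# and weighted MLEs, and spacing/least-squares methods are not covered
# by this off-the-shelf fit.
shape, loc, scale = weibull_min.fit(sample, floc=0)
```

Freeing the location parameter as well is exactly where the unbounded-likelihood instability arises that motivates the bias-corrected initial estimates proposed in the paper.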

18.
In this article, we propose an additive-multiplicative rates model for recurrent event data in the presence of a terminal event such as death. The association between recurrent and terminal events is nonparametric. For inference on the model parameters, estimating equation approaches are developed, and the asymptotic properties of the resulting estimators are established. The finite sample behavior of the proposed estimators is evaluated through simulation studies, and an application to a bladder cancer study is provided.

19.
Recurrent events are frequently encountered in biomedical studies. Evaluating covariate effects on the marginal recurrent event rate is of practical interest. There are mainly two types of rate models for recurrent event data: the multiplicative rates model and the additive rates model. We consider a more flexible additive–multiplicative rates model for the analysis of recurrent event data, wherein some covariate effects are additive while others are multiplicative. We formulate estimating equations for estimating the regression parameters. The estimators of these regression parameters are shown to be consistent and asymptotically normally distributed under appropriate regularity conditions. Moreover, an estimator of the baseline mean function is proposed and its large-sample properties are investigated. We also conduct simulation studies to evaluate the finite-sample behavior of the proposed estimators. A medical study of patients with cystic fibrosis suffering from recurrent pulmonary exacerbations is provided to illustrate the proposed method.

20.
Summary.  A Bayesian intensity model is presented for studying a bioassay problem involving interval-censored tumour onset times, and without discretization of times of death. Both tumour lethality and base-line hazard rates are estimated in the absence of cause-of-death information. Markov chain Monte Carlo methods are used in the numerical estimation, and sophisticated group updating algorithms are applied to achieve reasonable convergence properties. This method was tried on the rat tumorigenicity data that have previously been analysed by Ahn, Moon and Kodell, and our results seem to be more realistic.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号