Similar Literature
20 similar documents retrieved.
1.
This paper develops a posterior simulation method for a dynamic Tobit model. The major obstacle in such a problem is the high-dimensional integrals in the likelihood function induced by dependence among the censored observations. The primary contribution of this study is a practical and efficient sampling scheme for the conditional posterior distributions of the censored (i.e., unobserved) data, so that the Gibbs sampler with the data augmentation algorithm can be applied. The substantial differences between this approach and some existing methods are highlighted. The proposed simulation method is investigated by means of a Monte Carlo study and applied to a regression model of Japanese exports of passenger cars to the U.S. subject to a non-tariff trade barrier.
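The data-augmentation step can be illustrated with a small sketch: censored observations are replaced by draws from their truncated-normal conditional distributions, after which the regression parameters are updated from standard conjugate conditionals. This is a minimal static Tobit illustration, not the paper's dynamic model; the variable names, priors, and censoring point at zero are assumptions made for the example.

```python
# Minimal sketch of Gibbs data augmentation for a censored (Tobit-type) regression.
# Illustrative static Tobit only; priors and censoring at zero are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
beta_true, sigma_true = 1.0, 1.0
y_star = beta_true * x + sigma_true * rng.normal(size=n)
y = np.maximum(y_star, 0.0)          # left-censoring at zero
cens = y_star <= 0.0

beta, sigma2 = 0.0, 1.0
draws = []
for it in range(2000):
    # 1) Data augmentation: draw latent values for censored cases from a
    #    normal distribution truncated above at the censoring point (0 here).
    mu = beta * x[cens]
    b = (0.0 - mu) / np.sqrt(sigma2)
    z = stats.truncnorm.rvs(-np.inf, b, loc=mu, scale=np.sqrt(sigma2), random_state=rng)
    y_aug = y.copy()
    y_aug[cens] = z
    # 2) Conjugate-style updates for beta and sigma2 given the augmented data.
    v = 1.0 / (x @ x / sigma2)
    m = v * (x @ y_aug / sigma2)
    beta = rng.normal(m, np.sqrt(v))
    resid = y_aug - beta * x
    sigma2 = 1.0 / rng.gamma(n / 2.0, 2.0 / (resid @ resid))
    draws.append(beta)

print("posterior mean of beta:", np.mean(draws[500:]))
```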

2.
Correct detection of areas with excess pollution relies first on accurate prediction of pollutant concentrations, a task usually complicated by skewed histograms and the presence of censored data. The unified skew-Gaussian (SUG) random field proposed by Zareifard and Jafari Khaledi [19] offers a more flexible class of sampling spatial models to account for skewness. In this paper, we adopt a Bayesian framework to perform prediction for the SUG model in the presence of censored data. Owing to the many latent variables with strongly dependent components in the model, we encounter convergence issues when using Markov chain Monte Carlo algorithms. To overcome this obstacle, we use a computationally efficient inverse Bayes formulas sampling procedure to obtain approximately independent samples from the posterior distribution of the latent variables. These samples are then used to update the parameters in a Gibbs sampler scheme. This hybrid algorithm provides effective samples, resulting in computational advantages and precise predictions. The proposed approach is illustrated with a simulation study and applied to a spatial data set containing right-censored data.

3.
This article presents Bayesian inference for the parameters of a randomly censored Burr type-XII distribution with proportional hazards. The joint conjugate prior of the proposed model parameters does not exist, so we consider two different systems of priors for Bayesian estimation. Explicit forms of the Bayes estimators are not available; we use Lindley's method to obtain the Bayes estimates. However, Lindley's method does not yield Bayesian credible intervals, and we suggest the Gibbs sampling procedure for this purpose. Numerical experiments are performed to check the properties of the different estimators. The proposed methodology is applied to real-life data for illustrative purposes. The Bayes estimators are compared with the maximum likelihood estimators via numerical experiments and real data analysis. The model is validated using posterior predictive simulation in order to ascertain its appropriateness.

4.
In this article we discuss Bayesian estimation of Kumaraswamy distributions based on three different types of censored samples. We obtain Bayes estimates of the model parameters under two loss functions (LINEX and quadratic) for each censoring scheme (left censoring, singly type-II censoring, and doubly type-II censoring), using a Monte Carlo simulation study with posterior risk plots for different choices of the model parameters. Elicitation of the hyperparameters under the dependent prior setup is also discussed in detail. If one of the shape parameters is known, closed-form expressions of the Bayes estimates and corresponding posterior risks under both loss functions are available. A simulation study is conducted to assess the efficacy of the proposed method and the performance of the estimates. For illustrative purposes, real-life data are considered.
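As background for the two loss functions, the Bayes estimate under quadratic (squared-error) loss is the posterior mean, while under LINEX loss L(d, θ) = exp{a(d − θ)} − a(d − θ) − 1 it is −(1/a) log E[exp(−aθ)]. The sketch below computes both from posterior draws; the synthetic gamma "posterior" and the value of a are illustrative assumptions, not the article's actual posterior.

```python
# Sketch of Bayes estimates under quadratic and LINEX loss from posterior draws.
# The gamma draws stand in for a posterior sample; the LINEX parameter `a` is assumed.
import numpy as np

rng = np.random.default_rng(1)
theta_draws = rng.gamma(shape=3.0, scale=0.5, size=10_000)  # pretend posterior sample

# Quadratic (squared-error) loss: the Bayes estimate is the posterior mean.
theta_sq = theta_draws.mean()

# LINEX loss: the Bayes estimate is -(1/a) * log E[exp(-a * theta)].
a = 1.0
theta_linex = -np.log(np.mean(np.exp(-a * theta_draws))) / a

print(f"quadratic-loss estimate: {theta_sq:.3f}, LINEX estimate: {theta_linex:.3f}")
```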

5.
A mixture model is proposed to analyze bivariate interval-censored data with cure rates. Two types of association are involved, related to the bivariate failure times and the bivariate cure rates, respectively. A correlation coefficient is adopted for the association between the cure rates, and a copula function is applied to the bivariate survival times. The conditional expectations of the unknown quantities attributable to interval censoring and cure rates are calculated in the E-step of the ES (expectation-solving) algorithm, and the marginal estimates and the association measures are estimated in the S-step through a two-stage procedure. A simulation study is performed to evaluate the suggested method, and data from HIV patients are analyzed as a real-data example.
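As one concrete way a copula can link the bivariate survival times, the sketch below evaluates a joint survival probability under a Clayton copula applied to two exponential margins; the margins, parameter values, and the choice of the Clayton family are illustrative assumptions rather than the model actually fitted in the paper.

```python
# Sketch of a Clayton copula linking two marginal survival functions to model
# association between bivariate failure times. Margins and theta are assumptions.
import numpy as np

def clayton_joint_survival(s1, s2, theta):
    """Joint survival C(s1, s2) under a Clayton copula with dependence parameter theta > 0."""
    return (s1**(-theta) + s2**(-theta) - 1.0)**(-1.0 / theta)

# Exponential margins for the two failure times (illustrative rates 0.5 and 0.3).
t1, t2 = 1.5, 2.0
s1, s2 = np.exp(-0.5 * t1), np.exp(-0.3 * t2)
print("P(T1 > 1.5, T2 > 2.0) =", clayton_joint_survival(s1, s2, theta=2.0))
```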

6.
This article proposes a new empirical likelihood testing procedure based on the Wilks theorem and imputed values in a censored partial linear model. Existing work mainly uses the empirical likelihood (EL) method based on synthetic dependent data, and that result cannot be applied directly because of the weights involved. In this article, a censored empirical log-likelihood ratio is introduced to tackle this problem. In particular, we demonstrate that its limiting distribution is a standard chi-squared distribution with one degree of freedom. This result is used to calculate the p-value and construct the confidence interval. Simulation studies are conducted to highlight the performance of the proposed EL method, and the results show that it performs well. Finally, an illustration is given using the Stanford Heart Transplant data.
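To make the chi-squared(1) calibration concrete, the sketch below computes Owen's empirical log-likelihood ratio statistic for a mean on complete data and converts it to a p-value; the censoring adjustment and synthetic-data construction used in the article are not reproduced, so this is only the uncensored building block.

```python
# Sketch of Owen's empirical log-likelihood ratio for a mean, whose limiting
# distribution is chi-squared with one degree of freedom.
import numpy as np
from scipy import optimize, stats

def el_ratio_stat(x, mu0):
    """-2 log empirical likelihood ratio for H0: E[X] = mu0."""
    z = np.asarray(x, dtype=float) - mu0
    if z.min() >= 0 or z.max() <= 0:
        return np.inf  # mu0 lies outside the convex hull of the data
    # Solve for the Lagrange multiplier t in sum(z / (1 + t*z)) = 0.
    def score(t):
        return np.sum(z / (1.0 + t * z))
    eps = 1e-6
    lo = (-1.0 + eps) / z.max()
    hi = (-1.0 + eps) / z.min()
    t = optimize.brentq(score, lo, hi)
    return 2.0 * np.sum(np.log1p(t * z))

rng = np.random.default_rng(4)
x = rng.exponential(2.0, size=80)
stat = el_ratio_stat(x, mu0=2.0)
print("statistic:", stat, "p-value:", stats.chi2.sf(stat, df=1))
```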

7.
Censoring of a longitudinal outcome often occurs when data are collected in a biomedical study where the interest is in the survival and/or longitudinal experiences of a study population. In the setting considered here, we encountered upper- and lower-censored data as the result of restrictions imposed on measurements from a kinetic model producing “biologically implausible” kidney clearances. The goal of this paper is to outline the use of a joint model to determine the association between a censored longitudinal outcome and a time-to-event endpoint. This paper extends Guo and Carlin's [6] approach to accommodate censored longitudinal data, in a commercially available software platform, by linking a mixed-effects Tobit model to a suitable parametric survival distribution. Our simulation results show that the joint Tobit model outperforms a joint model built on the more naïve “fill-in” method for the longitudinal component, in which the upper and/or lower limits of censoring are replaced by the limit of detection. We illustrate the approach with data from the hemodialysis (HEMO) study [3], examining the association between doubly censored kidney clearance values and survival.
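The contrast with the “fill-in” method comes down to the likelihood contribution of a censored measurement: instead of treating the detection limit as an exact value, a Tobit-type likelihood uses the normal CDF (or survival function) at the limit. A minimal sketch of such a doubly censored normal log-likelihood follows; the limits, the scalar mean, and the toy data are illustrative assumptions.

```python
# Sketch of a censored-normal (Tobit-type) log-likelihood used in place of the
# naive "fill-in" approach that replaces censored values by the detection limit.
import numpy as np
from scipy import stats

def censored_normal_loglik(y, mu, sigma, lower, upper):
    """Log-likelihood for an outcome censored below at `lower` and above at `upper`."""
    y = np.asarray(y, dtype=float)
    ll = np.empty_like(y)
    left = y <= lower
    right = y >= upper
    mid = ~(left | right)
    # Observed values contribute the normal density.
    ll[mid] = stats.norm.logpdf(y[mid], mu, sigma)
    # Left-censored values contribute P(Y <= lower).
    ll[left] = stats.norm.logcdf(lower, mu, sigma)
    # Right-censored values contribute P(Y >= upper).
    ll[right] = stats.norm.logsf(upper, mu, sigma)
    return ll.sum()

# Example: clearance-like values restricted to a plausible range (assumed limits).
print(censored_normal_loglik([0.2, 1.1, 2.5, 3.0], mu=1.5, sigma=0.8, lower=0.2, upper=3.0))
```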

8.
Cramér–von Mises type goodness-of-fit tests for case 2 interval-censored data are proposed based on a resampling method called the leveraged bootstrap, and their asymptotic consistency is shown. The proposed tests are computationally efficient and can in fact be applied to other types of censored data, including right-censored data, doubly censored data, and (mixtures of) case k interval-censored data. Some simulation results and an example from AIDS research are presented.
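For reference, the classical Cramér–von Mises statistic for a fully specified distribution on complete data is computed in the sketch below; the leveraged-bootstrap calibration for case 2 interval-censored data that the paper proposes is not reproduced here.

```python
# Sketch of the classical Cramér–von Mises statistic for complete data against a
# fully specified hypothesized distribution (here a unit exponential).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = np.sort(rng.exponential(1.0, size=50))
n = len(x)
u = stats.expon.cdf(x)                     # hypothesized CDF at the order statistics
i = np.arange(1, n + 1)
w2 = np.sum((u - (2 * i - 1) / (2 * n))**2) + 1.0 / (12 * n)
print("Cramér–von Mises statistic:", w2)
```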

9.
In this paper, we consider effective Bayesian inference for the censored Student-t linear regression model, a robust alternative to the usual censored normal linear regression model. Based on the mixture representation of the Student-t distribution, we propose a non-iterative Bayesian sampling procedure to obtain approximately independent and identically distributed samples from the observed posterior distribution, in contrast to iterative Markov chain Monte Carlo algorithms. We conduct model selection and influence analysis using the posterior samples to choose the best-fitting model and to detect latent outliers. We illustrate the performance of the procedure through simulation studies and then apply it to two real data sets: insulation life data with right censoring and wage rate data with left censoring.
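The mixture representation referred to above writes a Student-t error as a normal variable whose variance is scaled by a gamma-distributed latent quantity, which is what makes the latent structure of the sampler tractable. The sketch below checks this representation numerically; the degrees of freedom and sample size are illustrative assumptions.

```python
# Sketch of the normal scale-mixture representation of the Student-t distribution:
# t_nu = N(0, sigma^2 / lambda) with lambda ~ Gamma(nu/2, rate = nu/2).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
nu, sigma, n = 4.0, 1.0, 100_000

lam = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)   # rate nu/2 => scale 2/nu
eps = rng.normal(0.0, sigma / np.sqrt(lam))               # conditional normal draw

# The mixture draws should match a direct Student-t sample in distribution.
direct = sigma * rng.standard_t(df=nu, size=n)
print("mixture std:", eps.std(), " direct std:", direct.std())
print("KS statistic:", stats.ks_2samp(eps, direct).statistic)
```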

10.
In this article, dichotomous variables are used to compare linear and nonlinear Bayesian structural equation models. The Gibbs sampling method is applied for estimation and model comparison. Statistical inference involving estimation of the parameters and their standard deviations, together with residual analysis for checking the selected model, is discussed. A hidden continuous normal (censored normal) distribution is used to handle the dichotomous variables. The proposed procedure is illustrated on simulated data generated in R, and the analyses are carried out using the R2WinBUGS package in R.

11.
Currently existing estimation methods and goodness-of-fit tests for the Cox model mainly deal with right-censored data and do not extend directly to other, more complicated types of censored data, such as doubly censored data, interval-censored data, partly interval-censored data, and bivariate right-censored data. In this article, we apply the empirical likelihood approach to the Cox model with complete data, derive the semiparametric maximum likelihood estimators (SPMLE) of the Cox regression parameter and the baseline distribution function, and establish the asymptotic consistency of the SPMLE. Via the functional plug-in method, these results are extended in a unified approach to doubly censored data, partly interval-censored data, and bivariate data under univariate or bivariate right censoring. For these types of censored data, the estimation procedures developed here naturally lead to Kolmogorov-Smirnov goodness-of-fit tests for the Cox model. Some simulation results are presented.

12.
This paper aims to select a suitable prior for the Bayesian analysis of a two-component mixture of Topp-Leone models under doubly censored samples, with left-censored samples for the first component and right-censored samples for the second component. The posterior analysis is carried out under a class of informative and noninformative priors using two loss functions. The different Bayes estimators are compared in a simulation study and a real-life example. A model comparison criterion is used to select a suitable prior for the Bayesian analysis, and the hazard rate of the Topp-Leone mixture model is compared over a range of parameter values.

13.
In this article, we provide suitable pivotal quantities for constructing prediction intervals for the jth future ordered observation from the two-parameter Weibull distribution based on censored samples. Our method is general in the sense that it can be applied to any censoring scheme. A simulation study is presented to analyze its performance, and two illustrative examples are included. The method is also easily applied to other location-scale family distributions.
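One standard way to build such a pivot for the Weibull (a log-location-scale family) is W = k̂ · log(Y_future / λ̂), whose distribution does not depend on the parameters and can therefore be calibrated by simulation. The sketch below does this for a complete sample only; the paper's censored schemes, the use of scipy's MLE fit, and all numerical settings are assumptions for illustration.

```python
# Sketch of a simulation-calibrated pivotal quantity for predicting a future Weibull
# observation from a complete sample (censored schemes are not handled here).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def weibull_mle(x):
    # Fit shape and scale with the location fixed at 0.
    k, _, lam = stats.weibull_min.fit(x, floc=0)
    return k, lam

def pivot_quantiles(n, alpha=0.05, n_sim=2000):
    # The pivot W = k_hat * log(Y_future / lam_hat) is parameter-free,
    # so its quantiles can be calibrated at k = lam = 1.
    w = np.empty(n_sim)
    for b in range(n_sim):
        sample = stats.weibull_min.rvs(1.0, scale=1.0, size=n, random_state=rng)
        k_hat, lam_hat = weibull_mle(sample)
        y_future = stats.weibull_min.rvs(1.0, scale=1.0, random_state=rng)
        w[b] = k_hat * np.log(y_future / lam_hat)
    return np.quantile(w, [alpha / 2, 1 - alpha / 2])

# Observed data -> prediction interval for a new observation.
data = stats.weibull_min.rvs(2.0, scale=3.0, size=30, random_state=rng)
k_hat, lam_hat = weibull_mle(data)
lo, hi = pivot_quantiles(n=len(data))
interval = (lam_hat * np.exp(lo / k_hat), lam_hat * np.exp(hi / k_hat))
print("95% prediction interval:", interval)
```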

14.
We propose a spline-based semiparametric maximum likelihood approach to analysing the Cox model with interval-censored data. With this approach, the baseline cumulative hazard function is approximated by a monotone B-spline function. We extend the generalized Rosen algorithm to compute the maximum likelihood estimate. We show that the estimator of the regression parameter is asymptotically normal and semiparametrically efficient, although the estimator of the baseline cumulative hazard function converges at a rate slower than root-n. We also develop an easy-to-implement method for consistently estimating the standard error of the estimated regression parameter, which facilitates the proposed inference procedure for the Cox model with interval-censored data. The proposed method is evaluated in simulation studies of its finite-sample performance and is illustrated using data from a breast cosmesis study.
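A monotone B-spline approximation of the baseline cumulative hazard can be built as a nonnegative combination of integrated B-spline basis functions, since each basis function is nonnegative and its antiderivative is therefore nondecreasing. The sketch below illustrates this construction; the knot sequence, coefficients, and evaluation grid are assumptions, and the generalized Rosen algorithm used for fitting is not reproduced.

```python
# Sketch of a monotone spline for a baseline cumulative hazard as a nonnegative
# combination of integrated (antiderivative) B-spline basis functions.
import numpy as np
from scipy.interpolate import BSpline

degree = 3
# Clamped knot sequence on [0, 4] with two interior knots (illustrative).
knots = np.concatenate(([0.0] * (degree + 1), [1.5, 3.0], [4.0] * (degree + 1)))
n_basis = len(knots) - degree - 1

t = np.linspace(0.0, 4.0, 9)
# Antiderivatives of the nonnegative B-spline basis functions are nondecreasing,
# so any nonnegative combination of them is monotone in t.
columns = []
for k in range(n_basis):
    coef = np.zeros(n_basis)
    coef[k] = 1.0
    columns.append(BSpline(knots, coef, degree).antiderivative()(t))
basis = np.column_stack(columns)

gamma = np.array([0.05, 0.1, 0.2, 0.15, 0.3, 0.1])  # nonnegative spline coefficients
cum_hazard = basis @ gamma
print("monotone:", bool(np.all(np.diff(cum_hazard) >= 0)), cum_hazard)
```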

15.
In this paper we propose a quantile survival model to analyze censored data. This approach provides a very effective way to construct a proper model for the survival time conditional on covariates. Once a quantile survival model for the censored data is established, the survival density, survival function, or hazard function of the survival time can be obtained easily. For illustration, we focus on a model based on the generalized lambda distribution (GLD). The GLD and many other quantile-function models are defined only through their quantile functions; no closed-form expressions are available for the other equivalent functions. We also develop a Bayesian Markov chain Monte Carlo (MCMC) method for parameter estimation. Extensive simulation studies have been conducted, and both the simulation and application results show that the proposed quantile survival models can be very useful in practice.
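To show what "defined only through the quantile function" means in practice, the sketch below evaluates the FKML parameterisation of the GLD quantile function and obtains the density at a quantile as 1/Q'(u), even though no closed-form density in x exists; the parameter values are illustrative, not fitted from data.

```python
# Sketch of the FKML parameterisation of the generalized lambda distribution (GLD),
# defined only through its quantile function Q(u); the density at Q(u) is 1 / Q'(u).
import numpy as np

def gld_quantile(u, lam1, lam2, lam3, lam4):
    """FKML quantile function of the GLD."""
    return lam1 + ((u**lam3 - 1.0) / lam3 - ((1.0 - u)**lam4 - 1.0) / lam4) / lam2

def gld_density_at_quantile(u, lam1, lam2, lam3, lam4):
    """Density f(Q(u)) = 1 / Q'(u), even though no closed-form pdf in x exists."""
    dq_du = (u**(lam3 - 1.0) + (1.0 - u)**(lam4 - 1.0)) / lam2
    return 1.0 / dq_du

u = np.linspace(0.01, 0.99, 5)
print(gld_quantile(u, 0.0, 1.0, 0.2, 0.2))
print(gld_density_at_quantile(u, 0.0, 1.0, 0.2, 0.2))
```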

16.
In this paper, we consider some problems of point estimation and point prediction when competing risks data from a class of exponential distributions are progressively type-I interval censored. Maximum likelihood estimation and the mid-point approximation method are proposed for estimating the parameters. Several point predictors of the censored units, such as the maximum likelihood predictor, the best unbiased predictor, and the conditional median predictor, are also obtained. The methods are applied when the lifetime distributions of the latent failure times are independent and Weibull-distributed. Finally, Monte Carlo simulations are used to compare the performance of the different methods, and one data analysis is presented for illustrative purposes.
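The mid-point approximation mentioned above simply replaces each failure known only to lie in an inspection interval (L, R] by the midpoint (L + R)/2 and then estimates the parameters as if the data were exact. The sketch below applies this idea to an exponential lifetime with right censoring at the last inspection; the inspection times and counts are illustrative assumptions, not the paper's Weibull competing-risks setup.

```python
# Sketch of the mid-point approximation for interval-censored lifetimes with an
# exponential model; interval failures are replaced by interval midpoints.
import numpy as np

inspection_times = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
failures_per_interval = np.array([5, 8, 6, 3])      # failures seen in each interval (toy data)
n_censored_at_end = 10                               # units still alive at the last inspection

midpoints = (inspection_times[:-1] + inspection_times[1:]) / 2.0
pseudo_exact = np.repeat(midpoints, failures_per_interval)

# Exponential MLE with right censoring: rate = (#failures) / (total observed time).
total_time = pseudo_exact.sum() + n_censored_at_end * inspection_times[-1]
lambda_hat = len(pseudo_exact) / total_time
print("mid-point MLE of the exponential rate:", lambda_hat)
```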

17.
This article aims at making empirical likelihood inference for the regression parameter in a partial linear model when the response variable is randomly right censored. Existing studies mainly use the empirical likelihood (EL) method based on synthetic dependent data, and the result cannot be applied directly because of the unknown weights involved. In this paper, we introduce a censored empirical log-likelihood ratio and demonstrate that its limiting distribution is a standard chi-squared distribution. The estimation procedure for β is developed based on the piecewise polynomial method. As a result, the p-value of the test and the confidence interval can be obtained without estimating other quantities. Simulation studies are conducted to highlight the performance of the proposed EL method, and the results show good performance. Finally, we apply our method to a real example of multiple myeloma data, and the proof of the theorem is also provided.

18.
The main statistical problem in many epidemiological studies involving repeated measurements of surrogate markers is the frequent occurrence of missing data. Standard likelihood-based approaches such as the linear random-effects model fail to give unbiased estimates when data are non-ignorably missing. In human immunodeficiency virus (HIV) type 1 infection, two markers that have been widely used to track progression of the disease are CD4 cell counts and HIV ribonucleic acid (RNA) viral load levels. Repeated measurements of these markers tend to be informatively censored, a special case of non-ignorable missingness. In such cases, we need methods that jointly model the observed data and the missingness process. Despite their high correlation, longitudinal data on these markers have mostly been analysed independently using random-effects models. Touloumi and co-workers proposed the joint multivariate random-effects model, which combines a linear random-effects model for the underlying pattern of the marker with a log-normal survival model for the drop-out process. We extend the joint multivariate random-effects model to model the CD4 cell and viral load data simultaneously while adjusting for informative drop-out due to disease progression or death. Estimates of all the model's parameters are obtained using the restricted iterative generalized least squares method, or a modified version of it that uses the EM algorithm as a nested algorithm in the case of censored survival data, taking into account non-linearity in the HIV-RNA trend. The proposed method is evaluated and compared with simpler approaches in a simulation study. Finally, the method is applied to a subset of the data from the 'Concerted action on seroconversion to AIDS and death in Europe' study.

19.
This paper discusses multivariate interval-censored failure time data, which arise when several correlated survival times of interest exist and only interval censoring is available for each survival time. Such data occur in many fields, for instance, studies of the development of physical symptoms or diseases in several organ systems. A marginal inference approach based on a linear transformation model is presented and applied to bivariate interval-censored data arising from a diabetic retinopathy study and an AIDS study. Simulation studies conducted to evaluate the performance of the presented approach suggest that it performs well. The Canadian Journal of Statistics 41: 275-290; 2013.

20.
In this paper, we study maximum likelihood estimation for a model with mixed binary responses and censored observations. The model is very general and includes the Tobit model and the binary choice model as special cases. We show that, by using additional binary choice observations, our method is more efficient than the traditional Tobit model. Two iterative procedures, based on the EM algorithm (Dempster et al., 1977) and the Newton-Raphson method, are proposed to compute the maximum likelihood estimator (MLE) for the model, and the uniqueness of the MLE is proved. The simulation results show that the inconsistency and inefficiency can be significant when the Tobit method is applied to the present mixed model; they also suggest that the EM algorithm is much faster than the Newton-Raphson method for this model. The method also allows one to combine two data sets, a smaller data set with more detailed observations and a larger data set with less detailed binary choice observations, in order to improve the efficiency of estimation. This may entail substantial savings when one conducts surveys.
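The E-step of an EM algorithm for such a mixed model needs the conditional moments of the latent normal variable given only the binary information that it fell below the threshold. The sketch below computes these truncated-normal moments; the intercept-only setup and the threshold at zero are illustrative assumptions, not the paper's full model.

```python
# Sketch of the E-step quantities for a latent normal variable observed only as a
# binary indicator (y* <= 0): the mean and second moment of a normal truncated above.
import numpy as np
from scipy import stats

def truncated_normal_moments(mu, sigma, upper=0.0):
    """E[Y*] and E[Y*^2] given Y* ~ N(mu, sigma^2) truncated above at `upper`."""
    alpha = (upper - mu) / sigma
    # Inverse Mills ratio for truncation from above.
    lam = stats.norm.pdf(alpha) / stats.norm.cdf(alpha)
    mean = mu - sigma * lam
    var = sigma**2 * (1.0 - lam * (lam + alpha))
    return mean, var + mean**2

print(truncated_normal_moments(mu=0.5, sigma=1.0))
```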
