Similar Documents
20 similar documents found (search time: 750 ms)
1.
In survival analysis and related fields of research, right-censored and left-truncated data often appear. Usually, it is assumed that the right-censoring variable is independent of the lifetime of ultimate interest. However, in particular applications dependent censoring may be present; this is the case, for example, when there exist several competing risks acting on the same individual. In this paper we propose a copula-graphic estimator for such a situation. The estimator is based on a known Archimedean copula function which properly represents the dependence structure between the lifetime and the censoring time. Therefore, the current work extends the copula-graphic estimator in de Uña-Álvarez and Veraverbeke [Generalized copula-graphic estimator. Test. 2013;22:343–360] to the presence of left-truncation. An asymptotic representation of the estimator is derived. The performance of the estimator is investigated in an intensive Monte Carlo simulation study. An application to unemployment duration is included for illustration purposes.
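For orientation, the sketch below implements a copula-graphic-type survival estimator for plain right-censored data (without the left-truncation extension developed in this paper), using the closed form available for Archimedean generators and taking a Clayton generator as an example; the function names, the `theta` dependence parameter, and the Clayton choice are illustrative assumptions, not the authors' estimator.

```python
import numpy as np

def clayton_phi(u, theta):
    """Clayton generator phi(u) = (u**(-theta) - 1) / theta, theta > 0 (assumed form)."""
    return (u ** (-theta) - 1.0) / theta

def clayton_phi_inv(s, theta):
    """Inverse generator phi^{-1}(s) = (1 + theta*s)**(-1/theta)."""
    return (1.0 + theta * s) ** (-1.0 / theta)

def copula_graphic_clayton(time, event, theta):
    """Copula-graphic survival estimate for right-censored data (no truncation):
    S(t) = phi^{-1}( sum over events X_(i) <= t of [phi(pi(X_(i))) - phi(pi(X_(i)-))] ),
    where pi is the empirical survival function of the observed times."""
    time = np.asarray(time, float)
    event = np.asarray(event, int)
    n = len(time)
    order = np.argsort(time)
    t_s, d_s = time[order], event[order]
    pi_before = (n - np.arange(n)) / n                   # pi(X_(i)-) = (n - i)/n
    pi_after = np.maximum((n - np.arange(n) - 1) / n,    # pi(X_(i))  = (n - i - 1)/n
                          1e-10)                         # avoid phi(0) at the last point
    jumps = np.where(d_s == 1,
                     clayton_phi(pi_after, theta) - clayton_phi(pi_before, theta),
                     0.0)
    surv = clayton_phi_inv(np.cumsum(jumps), theta)      # step function at the sorted times
    return t_s, surv

# usage illustration on simulated right-censored data
rng = np.random.default_rng(0)
t_true, c = rng.exponential(1.0, 200), rng.exponential(1.5, 200)
obs, delta = np.minimum(t_true, c), (t_true <= c).astype(int)
times, s_hat = copula_graphic_clayton(obs, delta, theta=2.0)
print(np.round(s_hat[::40], 3))
```

With the independence generator φ(u) = −log u the same formula reduces to the Kaplan–Meier estimator, which is a convenient sanity check for such an implementation.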

2.
Little work has been published on the analysis of censored data for the Birnbaum–Saunders distribution (BISA). In this article, we implement the EM algorithm to fit a regression model with censored data when the failure times follow the BISA. Three approaches to implement the E-Step of the EM algorithm are considered. In two of these implementations, the M-Step is attained by an iterative least-squares procedure. The algorithm is exemplified with a single explanatory variable in the model.

3.
ABSTRACT

Hybrid censoring is a mixture of Type-I and Type-II censoring in which the experiment terminates either when the rth failure or a predetermined censoring time occurs, whichever comes first (Type-I hybrid censoring) or whichever comes later (Type-II hybrid censoring). In this article, we consider order statistics of the Type-I censored data and provide a simple expression for their Kullback–Leibler (KL) information. Then, we provide the expressions for the KL information of the Type-I and Type-II hybrid censored data.

4.
In this article, we propose several goodness-of-fit methods for location–scale families of distributions under progressively Type-II censored data. The new tests are based on order statistics and sample spacings. We assess the performance of the proposed tests for the normal and Gumbel models against several alternatives by means of Monte Carlo simulations. It has been observed that the proposed tests are quite powerful in comparison with an existing goodness-of-fit test proposed for progressively Type-II censored data by Balakrishnan et al. [Goodness-of-fit tests based on spacings for progressively Type-II censored data from a general location–scale distribution, IEEE Trans. Reliab. 53 (2004), pp. 349–356]. Finally, we illustrate the proposed goodness-of-fit tests using two real data sets from the reliability literature.

5.
ABSTRACT

In this paper, we propose modified spline estimators for nonparametric regression models with right-censored data, especially when the censored response observations are converted to synthetic data. Efficient implementation of these estimators depends on the set of knot points and an appropriate smoothing parameter. We use three algorithms, the default selection method (DSM), myopic algorithm (MA), and full search algorithm (FSA), to select the optimum set of knots in a penalized spline method based on a smoothing parameter, which is chosen based on different criteria, including the improved version of the Akaike information criterion (AICc), generalized cross validation (GCV), restricted maximum likelihood (REML), and Bayesian information criterion (BIC). We also consider the smoothing spline (SS), which uses all the data points as knots. The main goal of this study is to compare the performance of the algorithm and criteria combinations in the suggested penalized spline fits under censored data. A Monte Carlo simulation study is performed and a real data example is presented to illustrate the ideas in the paper. The results confirm that the FSA slightly outperforms the other methods, especially for high censoring levels.
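As a rough illustration of the smoothing-parameter side of this comparison, the sketch below fits a penalized spline with a fixed, quantile-based knot set (in the spirit of a default selection) and picks the penalty by GCV on complete, uncensored responses; the knot-search algorithms (MA, FSA), the other criteria (AICc, REML, BIC), and the synthetic-data transformation for censored responses are not implemented, and all names are illustrative.

```python
import numpy as np

def truncated_power_basis(x, knots, degree=3):
    """Design matrix [1, x, ..., x^degree, (x - k1)_+^degree, ...]."""
    cols = [x ** d for d in range(degree + 1)]
    cols += [np.clip(x - k, 0.0, None) ** degree for k in knots]
    return np.column_stack(cols)

def fit_penalized_spline(x, y, knots, lam, degree=3):
    """Ridge-penalized least squares; only the knot coefficients are penalized."""
    B = truncated_power_basis(x, knots, degree)
    pen = np.zeros(B.shape[1])
    pen[degree + 1:] = 1.0
    A = B.T @ B + lam * np.diag(pen)
    coef = np.linalg.solve(A, B.T @ y)
    hat = B @ np.linalg.solve(A, B.T)          # smoother ("hat") matrix
    return coef, hat

def gcv_score(y, hat):
    """GCV(lambda) = n * RSS / (n - tr(H))^2."""
    n = len(y)
    resid = y - hat @ y
    return n * np.sum(resid ** 2) / (n - np.trace(hat)) ** 2

# choose the smoothing parameter on a log grid by GCV
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 150))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)
knots = np.quantile(x, np.linspace(0.05, 0.95, 15))    # fixed quantile-based knot set
lams = 10.0 ** np.arange(-6, 4)
scores = [gcv_score(y, fit_penalized_spline(x, y, knots, lam)[1]) for lam in lams]
print("GCV-selected lambda:", lams[int(np.argmin(scores))])
```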

6.
In this article, we consider some nonparametric goodness-of-fit tests for right censored samples, viz., the modified Kolmogorov, Cramér–von Mises–Smirnov, Anderson–Darling, and Nikulin–Rao–Robson χ² tests. We also consider an approach based on a transformation of the original censored sample to a complete one and the subsequent application of classical goodness-of-fit tests to the pseudo-complete sample. We then compare these tests in terms of power in the case of Type II censored data along with the power of the Neyman–Pearson test, and draw some conclusions. Finally, we present an illustrative example.
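The censoring-adjusted versions of these statistics are not part of standard libraries, but the complete-sample tests that the pseudo-complete approach ultimately relies on are; the snippet below simply shows those classical tests applied to an uncensored sample with scipy, as a hedged illustration rather than the article's modified procedures.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=80)     # complete (pseudo-complete) sample

# classical goodness-of-fit tests against a fully specified Exp(1) null;
# estimating parameters from the data would change the null distributions
print(stats.kstest(x, "expon"))             # Kolmogorov-Smirnov
print(stats.cramervonmises(x, "expon"))     # Cramer-von Mises
print(stats.anderson(x, dist="expon"))      # Anderson-Darling (scale estimated internally)
```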

7.
In this paper, the maximum likelihood (ML) and Bayes (via Markov chain Monte Carlo, MCMC) methods are considered for estimating the parameters of the three-parameter modified Weibull distribution (MWD(β, τ, λ)) based on a right-censored sample of generalized order statistics (gos). Simulation experiments are conducted to demonstrate the efficiency of the proposed methods. Some comparisons are carried out between the ML and Bayes methods by computing the mean squared errors (MSEs), Akaike's information criterion (AIC), and the Bayesian information criterion (BIC) of the estimates. Three real data sets from the Weibull(α, β) distribution are introduced and analyzed using the MWD(β, τ, λ) as well as the Weibull(α, β) distribution. A comparison is carried out between these models based on the corresponding Kolmogorov–Smirnov (KS) test statistic, AIC, and BIC to emphasize that the MWD(β, τ, λ) fits the data better than the other distribution. All parameters are estimated based on a Type-II censored sample, censored upper record values, and a progressively Type-II censored sample generated from the real data sets.
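To make the likelihood side concrete, here is a hedged sketch of the ML fit only, for an ordinary right-censored sample rather than generalized order statistics, and assuming the Lai–Xie–Murthy form of the modified Weibull with survival function S(t) = exp(−β t^τ e^(λt)); the parameter names and the optimizer choice are illustrative, and the Bayes/MCMC part of the paper is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def negloglik(params, t, delta):
    """Right-censored negative log-likelihood assuming S(t) = exp(-beta * t**tau * exp(lam*t))."""
    beta, tau, lam = params
    if beta <= 0 or tau <= 0 or lam < 0:
        return np.inf
    log_h = np.log(beta) + np.log(tau + lam * t) + (tau - 1) * np.log(t) + lam * t
    cum_h = beta * t ** tau * np.exp(lam * t)           # cumulative hazard
    return -(np.sum(delta * log_h) - np.sum(cum_h))

rng = np.random.default_rng(2)
t_true = rng.weibull(1.4, 200) * 2.0      # lifetimes; lam = 0 makes the model an ordinary Weibull
c = rng.uniform(0, 4, 200)                # right-censoring times
t_obs = np.minimum(t_true, c)
delta = (t_true <= c).astype(float)

res = minimize(negloglik, x0=[0.5, 1.0, 0.1], args=(t_obs, delta), method="Nelder-Mead")
print("ML estimates (beta, tau, lam):", res.x)
```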

8.
Abstract

Chiu [Chiu, S. N. (1999). An unbiased estimator for the survival function of censored data. Commun. Statist. Theory Meth. 28(9):2249–2260.] proposed a nonparametric estimator for the survival function which is based on observable censoring times in the general censoring model. His estimator is less efficient than the Product-Limit estimator. Under an informative censoring model, this drawback can be partially overcome. This is shown by a nonparametric, uniformly consistent estimator based on observable censoring times within the simple Koziol–Green model. Some asymptotic properties of the new estimator are investigated and it is compared with the well-known ACL estimator.
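For context, the well-known ACL estimator mentioned at the end has a one-line form under the Koziol–Green (informative censoring) model: raise the empirical survival function of the observed times to the power of the uncensored proportion. The sketch below implements that classical estimator on simulated data where the Koziol–Green assumption holds by construction; it is not Chiu's estimator, nor the new estimator proposed here.

```python
import numpy as np

def acl_estimator(z, delta, grid):
    """ACL estimator under the Koziol-Green model:
    S_hat(t) = [empirical survival of Z](t) ** p_hat, with p_hat = mean(delta)."""
    z = np.asarray(z, float)
    delta = np.asarray(delta, int)
    p_hat = delta.mean()
    emp_surv = np.array([(z > t).mean() for t in grid])
    return emp_surv ** p_hat

rng = np.random.default_rng(3)
n = 300
t = rng.exponential(1.0, n)            # lifetime T ~ Exp(1)
c = rng.exponential(2.0, n)            # censoring C ~ Exp(rate 0.5): S_C = S_T**0.5, so Koziol-Green holds
z, d = np.minimum(t, c), (t <= c).astype(int)
grid = np.linspace(0, 3, 7)
print(np.round(acl_estimator(z, d, grid), 3))
print(np.round(np.exp(-grid), 3))      # true S_T for comparison
```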

9.
This paper proposes an approximation to the distribution of a goodness-of-fit statistic proposed recently by Balakrishnan et al. [Balakrishnan, N., Ng, H.K.T. and Kannan, N., 2002, A test of exponentiality based on spacings for progressively Type-II censored data. In: C. Huber-Carol et al. (Eds.), Goodness-of-Fit Tests and Model Validity (Boston: Birkhäuser), pp. 89–111.] for testing exponentiality based on progressively Type-II right censored data. The moments of this statistic can be easily calculated, but its distribution is not known in an explicit form. We first obtain the exact moments of the statistic using Basu's theorem and then the density approximants based on these exact moments of the statistic, expressed in terms of Laguerre polynomials, are proposed. A comparative study of the proposed approximation to the exact critical values, computed by Balakrishnan and Lin [Balakrishnan, N. and Lin, C.T., 2003, On the distribution of a test for exponentiality based on progressively Type-II right censored spacings. Journal of Statistical Computation and Simulation, 73 (4), 277–283.], is carried out. This reveals that the proposed approximation is very accurate.

10.
The Buckley–James estimator (BJE) [J. Buckley and I. James, Linear regression with censored data, Biometrika 66 (1979), pp. 429–436] has been extended from right-censored (RC) data to interval-censored (IC) data by Rabinowitz et al. [D. Rabinowitz, A. Tsiatis, and J. Aragon, Regression with interval-censored data, Biometrika 82 (1995), pp. 501–513]. The BJE is defined to be a zero-crossing of a modified score function H(b), a point at which H(·) changes its sign. We discuss several approaches (for finding a BJE with IC data) which are extensions of the existing algorithms for RC data. However, these extensions may not be appropriate for some data; in particular, they are not appropriate for a cancer data set that we are analysing. In this note, we present a feasible iterative algorithm for obtaining a BJE. We apply the method to our data.
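As background for the zero-crossing discussion, the following sketch iterates the classical right-censored Buckley–James scheme with a single covariate: compute residuals at the current coefficients, estimate their distribution by Kaplan–Meier, replace each censored response by its fitted value plus the conditional mean residual, and refit by least squares. It follows the common convention of treating the largest residual as an event so the imputation is well defined; it does not implement the interval-censored extension or the zero-crossing search of H(b) discussed in the note, and convergence is not guaranteed (the iteration can oscillate), so the iteration count is capped.

```python
import numpy as np

def km_mass(e, delta):
    """Kaplan-Meier distribution of residuals: sorted residuals and the
    probability mass placed on each (uncensored) residual."""
    order = np.argsort(e)
    e_s, d_s = e[order], delta[order].copy()
    d_s[-1] = 1                                   # convention: treat the largest residual as an event
    n = len(e_s)
    at_risk = n - np.arange(n)
    factor = np.where(d_s == 1, 1.0 - 1.0 / at_risk, 1.0)
    surv_after = np.cumprod(factor)
    surv_before = np.concatenate(([1.0], surv_after[:-1]))
    mass = np.where(d_s == 1, surv_before - surv_after, 0.0)
    return e_s, mass

def buckley_james(x, y, delta, max_iter=50, tol=1e-6):
    """Single-covariate Buckley-James iteration for right-censored responses y."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # naive OLS start
    for _ in range(max_iter):
        e = y - X @ beta
        e_s, mass = km_mass(e, delta)
        y_star = y.astype(float).copy()
        for i in np.where(delta == 0)[0]:         # impute each censored response
            tail = e_s > e[i]
            s_i = max(mass[tail].sum(), 1e-12)    # KM survival just beyond the residual
            y_star[i] = X[i] @ beta + (mass[tail] * e_s[tail]).sum() / s_i
        new_beta = np.linalg.lstsq(X, y_star, rcond=None)[0]
        if np.max(np.abs(new_beta - beta)) < tol:
            return new_beta
        beta = new_beta
    return beta

rng = np.random.default_rng(4)
x = rng.uniform(0, 2, 300)
y_full = 1.0 + 0.5 * x + rng.normal(scale=0.4, size=300)
c = rng.uniform(0.5, 3.5, 300)
y, delta = np.minimum(y_full, c), (y_full <= c).astype(int)
print(buckley_james(x, y, delta))                 # roughly (1.0, 0.5)
```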

11.

Recently, exact confidence bounds and exact likelihood inference have been developed based on hybrid censored samples by Chen and Bhattacharyya [Chen, S. and Bhattacharyya, G.K. (1998). Exact confidence bounds for an exponential parameter under hybrid censoring. Communications in Statistics – Theory and Methods, 17, 1857–1870.], Childs et al. [Childs, A., Chandrasekar, B., Balakrishnan, N. and Kundu, D. (2003). Exact likelihood inference based on Type-I and Type-II hybrid censored samples from the exponential distribution. Annals of the Institute of Statistical Mathematics, 55, 319–330.], and Chandrasekar et al. [Chandrasekar, B., Childs, A. and Balakrishnan, N. (2004). Exact likelihood inference for the exponential distribution under generalized Type-I and Type-II hybrid censoring. Naval Research Logistics, 51, 994–1004.] for the case of the exponential distribution. In this article, we propose a unified hybrid censoring scheme (HCS) which includes many cases considered earlier as special cases. We then derive the exact distribution of the maximum likelihood estimator as well as exact confidence intervals for the mean of the exponential distribution under this general unified HCS. Finally, we present some examples to illustrate all the methods of inference developed here.
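The unified scheme itself needs case-by-case analysis, but the point estimate it generalizes is simple: under ordinary Type-I hybrid censoring of an exponential sample, the MLE of the mean is the total time on test divided by the number of observed failures (it does not exist when no failures occur). The sketch below shows only that point estimator on simulated data; the exact distribution and exact confidence intervals derived in the article are not reproduced, and the names are illustrative.

```python
import numpy as np

def exp_mle_type1_hybrid(x, n, r, T):
    """MLE of the exponential mean under Type-I hybrid censoring:
    stop at T* = min(X_(r), T); theta_hat = total time on test / number of failures."""
    x = np.sort(np.asarray(x, float))
    t_star = min(x[r - 1], T)
    failures = x[x <= t_star]
    d = len(failures)
    if d == 0:
        raise ValueError("MLE does not exist when no failures are observed")
    total_time_on_test = failures.sum() + (n - d) * t_star
    return total_time_on_test / d

rng = np.random.default_rng(5)
n, r, T, theta = 30, 20, 1.5, 1.0
x = rng.exponential(theta, n)          # latent complete sample for the simulation
print(exp_mle_type1_hybrid(x, n, r, T))
```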

12.
13.
In many life-testing and reliability experiments, data are often censored in order to reduce the cost and time associated with testing, and since the conventional Type-I and Type-II censoring schemes are not flexible enough, progressive censoring has been developed. In this article, we develop a general goodness-of-fit test by using a new estimate of Kullback–Leibler information based on progressively Type-II censored data. Consistency and other properties of the proposed test are shown. Then, we use the proposed test statistic to test for exponentiality based on progressively Type-II censored data. The power values of the proposed test under different progressively Type-II censoring schemes are computed through Monte Carlo simulations. It is observed that the proposed test is quite powerful in comparison with the test proposed by Balakrishnan et al. [Balakrishnan, N., Habibi Rad, A., and Arghami, N. R. (2007). Testing exponentiality based on Kullback–Leibler information with progressively Type-II censored data. IEEE Transactions on Reliability 56:301–307.]. Two real data sets from the progressive censoring literature are finally presented for illustrative purposes.

14.
15.
We consider the estimation problem of the probability P = P(X > Y) for the standard Topp–Leone distribution. After discussing the maximum likelihood and uniformly minimum variance unbiased estimation procedures for the problem on both complete and left-censored samples, we perform a Monte Carlo simulation to compare the estimators based on the mean squared error criterion. We also consider the interval estimation of P.
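For the complete-sample case, a hedged sketch is easy to write down if one assumes the standard Topp–Leone parameterization F(x) = (x(2−x))^α on (0, 1): the shape MLE is −n / Σ log(x_i(2−x_i)), and P(X > Y) = α/(α+β), so a natural plug-in estimator is α̂/(α̂+β̂). The left-censored and UMVUE procedures discussed in the paper are not implemented, and the sampler and parameter names below are illustrative.

```python
import numpy as np

def tl_sample(alpha, size, rng):
    """Inverse-CDF sampling from the standard Topp-Leone with F(x) = (x*(2-x))**alpha:
    x = 1 - sqrt(1 - u**(1/alpha))."""
    u = rng.uniform(size=size)
    return 1 - np.sqrt(1 - u ** (1 / alpha))

def tl_mle(x):
    """MLE of the Topp-Leone shape: alpha_hat = -n / sum(log(x*(2-x)))."""
    return -len(x) / np.sum(np.log(x * (2 - x)))

rng = np.random.default_rng(6)
alpha, beta = 2.0, 1.0                       # true P(X > Y) = alpha/(alpha+beta) = 2/3
x, y = tl_sample(alpha, 500, rng), tl_sample(beta, 500, rng)
a_hat, b_hat = tl_mle(x), tl_mle(y)
print("P_hat =", a_hat / (a_hat + b_hat))    # plug-in estimate of P(X > Y)
```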

16.
This paper extends the analysis of the bivariate Seemingly Unrelated Regression (SUR) Tobit model by modeling its nonlinear dependence structure through the Clayton copula. The ability to capture/model the lower tail dependence of the SUR Tobit model, where some data are censored (generally, left-censored at zero), is a useful feature of the Clayton copula. We propose a modified version of the (classical) Inference Function for Margins (IFM) method by Joe and Xu [H. Joe and J.J. Xu, The estimation method of inference functions for margins for multivariate models, Tech. Rep. 166, Department of Statistics, University of British Columbia, 1996], which we refer to as the Modified Inference Function for Margins (MIFM) method, to obtain the (point) estimates of the marginal and Clayton copula parameters. More specifically, we employ the (frequentist) data augmentation technique at the second stage of the IFM method (the first stage of the MIFM method is equivalent to the first stage of the IFM method) to generate the censored observations and then estimate the Clayton copula parameter. This process (data augmentation and copula parameter estimation) is repeated until convergence. Such a modification at the second stage of the usual estimation method is justified in order to obtain continuous marginal distributions, which ensures the uniqueness of the resulting Clayton copula, as stated by Sklar's theorem [A. Sklar, Fonctions de répartition à n dimensions et leurs marges, Publ. de l'Institut de Statistique de l'Université de Paris 8 (1959), pp. 229–231]; and also to provide an unbiased estimate of the association parameter (the IFM method provides a biased estimate of the Clayton copula parameter in the presence of censored observations in both margins). Since the usual asymptotic approach, that is the computation of the asymptotic covariance matrix of the parameter estimates, is troublesome in this case, we also propose the use of resampling procedures (bootstrap methods, such as standard normal and percentile, by Efron and Tibshirani [B. Efron and R.J. Tibshirani, An Introduction to the Bootstrap, Chapman & Hall, New York, 1993]) to obtain confidence intervals for the model parameters.
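To fix ideas about the two-stage structure being modified, here is a minimal sketch of the classical IFM method on complete, continuous data with Gaussian margins and a Clayton copula: fit the margins first, transform to the pseudo-uniform scale, then maximize the Clayton copula likelihood in the dependence parameter alone. The data-augmentation step for censored (Tobit) margins that defines the MIFM method is not implemented, and all names are illustrative.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

def clayton_loglik(theta, u, v):
    """Clayton copula log-density summed over pseudo-observations (theta > 0)."""
    s = u ** (-theta) + v ** (-theta) - 1.0
    return np.sum(np.log(1 + theta) - (theta + 1) * (np.log(u) + np.log(v))
                  - (2 * theta + 1) / theta * np.log(s))

# simulate dependent pairs from a Clayton copula (conditional sampling), normal margins
rng = np.random.default_rng(7)
theta_true, n = 2.0, 400
u = rng.uniform(size=n)
w = rng.uniform(size=n)
v = ((w ** (-theta_true / (1 + theta_true)) - 1) * u ** (-theta_true) + 1) ** (-1 / theta_true)
x = stats.norm.ppf(u, loc=1.0, scale=2.0)
y = stats.norm.ppf(v, loc=-1.0, scale=0.5)

# IFM stage 1: fit the margins, transform to the pseudo-uniform scale
u_hat = stats.norm.cdf(x, loc=x.mean(), scale=x.std())
v_hat = stats.norm.cdf(y, loc=y.mean(), scale=y.std())
# IFM stage 2: maximize the copula likelihood in theta alone
res = minimize_scalar(lambda th: -clayton_loglik(th, u_hat, v_hat),
                      bounds=(1e-3, 20), method="bounded")
print("theta_hat =", res.x)
```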

17.
ABSTRACT

This article considers the empirical Bayes estimation problem in the uniform distribution U(0, θ) with censored data. For the parameter θ, using the empirical Bayes (EB) approach, we propose an EB estimator of θ whose rate of convergence can be arbitrarily close to O(n^(−1/2)) when the historical samples are randomly censored from the right, where n is the number of historical samples. An example and some simulation results are also presented.

18.
ABSTRACT

In actuarial applications, mixed Poisson distributions are widely used for modelling claim counts, as observed data on the number of claims often exhibit a variance noticeably exceeding the mean. In this study, a new claim number distribution is obtained by mixing the negative binomial parameter p, reparameterized as p = exp(−λ), with a Gamma distribution. Basic properties of this new distribution are given. Maximum likelihood estimators of the parameters are calculated using the Newton–Raphson method and a genetic algorithm (GA). We compare the performance of these methods in terms of efficiency by simulation. A numerical example is provided.
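The new mixture itself is specific to the paper, but the general workflow (write down the log-likelihood of an overdispersed count model and maximize it numerically) can be sketched with the ordinary negative binomial, which is itself a Poisson–Gamma mixture. The example below uses a generic quasi-Newton optimizer instead of the Newton–Raphson/GA pair studied in the article, and is purely illustrative.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def nb_negloglik(params, k):
    """Negative log-likelihood of NB(r, p), a Poisson-Gamma mixture,
    parameterized through transforms to keep r > 0 and 0 < p < 1."""
    r = np.exp(params[0])
    p = 1 / (1 + np.exp(-params[1]))
    return -np.sum(stats.nbinom.logpmf(k, r, p))

rng = np.random.default_rng(8)
k = rng.negative_binomial(3, 0.4, size=500)          # overdispersed claim counts
res = minimize(nb_negloglik, x0=[0.0, 0.0], args=(k,), method="BFGS")
r_hat, p_hat = np.exp(res.x[0]), 1 / (1 + np.exp(-res.x[1]))
print("r_hat, p_hat:", r_hat, p_hat)
print("sample mean, variance:", k.mean(), k.var())   # variance exceeds the mean
```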

19.
ABSTRACT

In this paper, we first consider the entropy estimators introduced by Vasicek [A test for normality based on sample entropy. J R Statist Soc, Ser B. 1976;38:54–59], Ebrahimi et al. [Two measures of sample entropy. Stat Probab Lett. 1994;20:225–234], Yousefzadeh and Arghami [Testing exponentiality based on type II censored data and a new cdf estimator. Commun Stat – Simul Comput. 2008;37:1479–1499], Alizadeh Noughabi and Arghami [A new estimator of entropy. J Iran Statist Soc. 2010;9:53–64], and Zamanzade and Arghami [Goodness-of-fit test based on correcting moments of modified entropy estimator. J Statist Comput Simul. 2011;81:2077–2093], and the nonparametric distribution functions corresponding to them. We next introduce goodness-of-fit test statistics for the Laplace distribution based on the moments of nonparametric distribution functions of the aforementioned estimators. We obtain power estimates of the proposed test statistics with Monte Carlo simulation and compare them with the competing test statistics against various alternatives. Performance of the proposed new test statistics is illustrated in real cases.
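Of the estimators listed, Vasicek's is the simplest to state: it replaces the density in the entropy integral by a ratio of sample spacings. A minimal sketch (complete sample, window size m chosen by hand) is given below; the later refinements and the Laplace goodness-of-fit statistics built on the corresponding distribution-function estimators are not implemented.

```python
import numpy as np

def vasicek_entropy(x, m):
    """Vasicek (1976) spacing-based entropy estimator H_{m,n}:
    mean of log( n * (X_(i+m) - X_(i-m)) / (2m) ), with boundary indices clipped."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    upper = x[np.minimum(np.arange(n) + m, n - 1)]
    lower = x[np.maximum(np.arange(n) - m, 0)]
    return np.mean(np.log(n * (upper - lower) / (2 * m)))

rng = np.random.default_rng(9)
x = rng.laplace(loc=0.0, scale=1.0, size=200)
print(vasicek_entropy(x, m=5))
print(1 + np.log(2 * 1.0))        # true Laplace entropy 1 + log(2b), about 1.693
```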

20.
Abstract

Quetelet's data on Scottish chest girths are analyzed with eight normality tests. In contrast to Quetelet's conclusion that the data are fit well by what is now known as the normal distribution, six of the eight normality tests provide strong evidence that the chest circumferences are not normally distributed. Using corrected chest circumferences from Stigler, the χ² test no longer provides strong evidence against normality, but five commonly used normality tests do. The D'Agostino–Pearson K² and Jarque–Bera tests, based only on skewness and kurtosis, find that both Quetelet's original data and the Stigler-corrected data are consistent with the hypothesis of normality. The major reason most normality tests produce low p-values, indicating that Quetelet's data are not normally distributed, is that the chest circumferences were reported in whole inches; rounding large numbers of observations can produce many tied values that strongly affect most normality tests. Users should be cautious when applying standard normality tests if the data have ties, are rounded, and the ratio of the standard deviation to the rounding interval is small.
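The rounding/ties effect is easy to reproduce in a small simulation: draw normal data, round it to whole units so that the standard-deviation-to-rounding-interval ratio is small, and compare p-values across tests. The snippet below is purely illustrative (the location and scale are made-up values, not Quetelet's data); typically the omnibus tests react to the induced ties much more strongly than the skewness/kurtosis-based K² and Jarque–Bera tests, though exact p-values vary by seed and sample size.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
x = rng.normal(loc=40.0, scale=2.0, size=4000)   # illustrative "chest girth" values
x_rounded = np.round(x)                          # reported in whole inches -> many tied values

for name, test in [("Shapiro-Wilk", stats.shapiro),
                   ("D'Agostino-Pearson K^2", stats.normaltest),
                   ("Jarque-Bera", stats.jarque_bera)]:
    print(f"{name:24s} p(raw) = {test(x).pvalue:.3f}   p(rounded) = {test(x_rounded).pvalue:.3f}")
```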

