Similar Documents
Found 20 similar documents.
1.
We investigate the problem of estimating the association between two related survival variables when they follow a copula model and bivariate left-truncated and right-censored data are available. By expressing the truncation probability as a functional of the marginal survival functions, we propose a two-stage procedure for estimating the parameters of Archimedean copulas. The asymptotic properties of the proposed estimators are established. Simulation studies are conducted to investigate the finite sample properties of the proposed estimators. The proposed method is applied to a bivariate RNA dataset.
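The paper's two-stage procedure handles truncation and censoring; as a much simpler illustration of moment-type estimation for an Archimedean copula, the sketch below assumes complete, untruncated data and estimates the Clayton parameter by inverting the relation τ = θ/(θ + 2) between Kendall's tau and the copula parameter. All function names are illustrative, not the paper's.

```python
import random

def sample_clayton(n, theta, rng):
    """Draw n pairs from a Clayton copula via the conditional method."""
    pairs = []
    for _ in range(n):
        u = rng.random()
        w = rng.random()
        # Invert the conditional distribution of V given U = u
        v = (u ** (-theta) * (w ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
        pairs.append((u, v))
    return pairs

def kendalls_tau(pairs):
    """Sample Kendall's tau by counting concordant minus discordant pairs."""
    n = len(pairs)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            d = (pairs[i][0] - pairs[j][0]) * (pairs[i][1] - pairs[j][1])
            s += 1 if d > 0 else (-1 if d < 0 else 0)
    return 2 * s / (n * (n - 1))

def clayton_theta_hat(pairs):
    """Invert tau = theta / (theta + 2), i.e. theta = 2*tau / (1 - tau)."""
    tau = kendalls_tau(pairs)
    return 2 * tau / (1 - tau)
```

For θ = 2 the true tau is 0.5, so the moment estimate should land near 2 for a moderate sample.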

2.
It is one of the important issues in survival analysis to compare two hazard rate functions to evaluate treatment effect. It is quite common that the two hazard rate functions cross each other at one or more unknown time points, representing temporal changes of the treatment effect. In certain applications, besides survival data, we also have related longitudinal data available regarding some time-dependent covariates. In such cases, a joint model that accommodates both types of data can allow us to infer the association between the survival and longitudinal data and to assess the treatment effect better. In this paper, we propose a modelling approach for comparing two crossing hazard rate functions by joint modelling survival and longitudinal data. Maximum likelihood estimation is used in estimating the parameters of the proposed joint model using the EM algorithm. Asymptotic properties of the maximum likelihood estimators are studied. To illustrate the virtues of the proposed method, we compare the performance of the proposed method with several existing methods in a simulation study. Our proposed method is also demonstrated using a real dataset obtained from an HIV clinical trial.  相似文献   
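To make the notion of crossing hazard rates concrete, the sketch below (not the paper's joint model, just an illustration) locates the crossing point of two parametric Weibull hazards by bisection; the parameter values are illustrative.

```python
def weibull_hazard(shape, scale):
    """Return the hazard function h(t) = (k/s) * (t/s)**(k - 1) of a Weibull(k, s)."""
    return lambda t: (shape / scale) * (t / scale) ** (shape - 1)

def find_crossing(h1, h2, lo, hi, iters=100):
    """Bisection for a root of h1 - h2, assuming one sign change on [lo, hi]."""
    f = lambda t: h1(t) - h2(t)
    flo = f(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        fm = f(mid)
        if flo * fm <= 0:          # sign change in the left half
            hi = mid
        else:                      # sign change in the right half
            lo, flo = mid, fm
    return 0.5 * (lo + hi)
```

With a decreasing hazard (shape 0.5) against an increasing one (shape 2, both scale 1), the crossing solves 0.5·t^(-1/2) = 2t, i.e. t = 0.25^(2/3).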

3.
In this paper, we combine empirical likelihood and estimating functions for censored data to obtain robust confidence regions for the parameters and more generally for functions of the parameters of distributions used in lifetime data analysis. The proposed method works with type I, type II or randomly censored data. It is illustrated by considering inference for log-location-scale models. In particular, we focus on the log-normal and the Weibull models and we tackle the problem of constructing robust confidence regions (or intervals) for the parameters of the model, as well as for quantiles and values of the survival function. The usefulness of the method is demonstrated through a Monte Carlo study and by examples on two lifetime data sets.

4.
We propose a novel Bayesian nonparametric (BNP) model, which is built on a class of species sampling models, for estimating density functions of temporal data. In particular, we introduce species sampling mixture models with temporal dependence. To accommodate temporal dependence, we define dependent species sampling models by modeling random support points and weights through an autoregressive model, and then we construct the mixture models based on the collection of these dependent species sampling models. We propose an algorithm to generate posterior samples and present simulation studies to compare the performance of the proposed models with competitors based on Dirichlet process mixture models. We apply our method to the estimation of densities for apartment prices in Seoul, the closing price of the Korea Composite Stock Price Index (KOSPI), and climate variables (daily maximum temperature and precipitation) around the Korean peninsula.

5.
Kendall and Gehan estimating functions are commonly used to estimate the regression parameter in the accelerated failure time model with censored observations in survival analysis. In this paper, we apply the jackknife empirical likelihood method to overcome the computational difficulty of interval estimation. A Wilks' theorem of the jackknife empirical likelihood for U-statistic type estimating equations is established and used to construct confidence intervals for the regression parameter. We carry out an extensive simulation study to compare the Wald-type procedure, the empirical likelihood method, and the jackknife empirical likelihood method. The proposed jackknife empirical likelihood method performs better than the existing methods. We also use a real data set to compare the proposed methods.

6.
Yu M, Nan B. Lifetime Data Analysis 2006, 12(3):345–364
As an alternative to the Cox model, the rank-based estimating method for censored survival data has been studied extensively since it was proposed by Tsiatis [Tsiatis AA (1990) Ann Stat 18:354–372] among others. Due to the discontinuity of the estimating function, a significant amount of work in the literature has focused on numerical issues. In this article, we consider the computational aspects of a family of doubly weighted rank-based estimating functions. This family is rich enough to include both the estimating functions of Tsiatis (1990) for randomly observed data and those of Nan et al. [Nan B, Yu M, Kalbfleisch JD (2006) Biometrika (to appear)] for case-cohort data as special examples; the latter belongs to the class of biased sampling problems. We show that the doubly weighted rank-based discontinuous estimating functions are monotone, a property previously established for randomly observed data, when generalized Gehan-type weights are used. Although the estimating problem can be formulated as a linear programming problem, as for randomly observed data, its scale becomes unmanageably large even for moderate sample sizes. We therefore propose a Newton-type iterative method to search for an approximate solution of the (system of) discontinuous monotone estimating equation(s). Simulation results provide a good demonstration of the proposed method. We also apply our method to a real data example.

7.
In the regression model with censored data, it is not straightforward to estimate the covariances of the regression estimators, since their asymptotic covariances may involve the unknown error density function and its derivative. In this article, a resampling method for making inferences on the parameter, based on some estimating functions, is discussed for the censored regression model. The inference procedures are associated with a weight function. To find the best weight functions for the proposed procedures, extensive simulations are performed. The validity of the approximation to the distribution of the estimator by a resampling technique is also examined visually. Implementation of the procedures is discussed and illustrated in a real data example.

8.
We consider an efficient Bayesian approach to estimating integration-based posterior summaries from a separate Bayesian application. In Bayesian quadrature we model an intractable posterior density function f(·) as a Gaussian process, using an approximating function g(·), and find a posterior distribution for the integral of f(·), conditional on a few evaluations of f(·) at selected design points. Bayesian quadrature using normal g(·) is called Bayes–Hermite quadrature. We extend this theory by allowing g(·) to be chosen from two wider classes of functions. One is a family of skew densities and the other is the family of finite mixtures of normal densities. For the family of skew densities we describe an iterative updating procedure to select the most suitable approximation and apply the method to two simulated posterior density functions.

9.
When estimating loss distributions in insurance, large and small losses are usually split because it is difficult to find a simple parametric model that fits all claim sizes. This approach involves determining the threshold level between large and small losses. In this article, a unified approach to the estimation of loss distributions is presented. We propose an estimator obtained by transforming the data set with a modification of the Champernowne cdf and then estimating the density of the transformed data using the classical kernel density estimator. We investigate the asymptotic bias and variance of the proposed estimator. In a simulation study, the proposed method shows good performance. We also present two applications dealing with claim costs in insurance.
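A minimal sketch of the transformation idea, assuming the shift parameter of the modified Champernowne cdf is set to zero, a Gaussian kernel, and Silverman's bandwidth rule; the paper's estimator includes refinements (such as boundary correction on the unit interval) that are omitted here.

```python
import math
import random

def champernowne_cdf(x, alpha, M):
    """Modified Champernowne cdf with shift c = 0: maps (0, inf) onto (0, 1)."""
    return x ** alpha / (x ** alpha + M ** alpha)

def champernowne_deriv(x, alpha, M):
    """Derivative of the cdf above (the Jacobian of the transformation)."""
    return alpha * M ** alpha * x ** (alpha - 1) / (x ** alpha + M ** alpha) ** 2

def transformed_kde(data, alpha, M):
    """Transform data to (0, 1), run a Gaussian KDE there, then back-transform."""
    y = [champernowne_cdf(v, alpha, M) for v in data]
    n = len(y)
    mean = sum(y) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in y) / (n - 1))
    h = 1.06 * sd * n ** (-1 / 5)        # Silverman's rule of thumb
    c = 1.0 / (n * h * math.sqrt(2 * math.pi))

    def density(x):
        t = champernowne_cdf(x, alpha, M)
        kde = c * sum(math.exp(-0.5 * ((t - yi) / h) ** 2) for yi in y)
        return kde * champernowne_deriv(x, alpha, M)   # change of variables

    return density
```

On heavy-ish simulated claims (here standard exponentials, with the sample median as the scale M) the resulting curve is a genuine density on the positive half-line up to a small amount of kernel mass leaking past the unit-interval boundaries.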

10.
Kai B, Li R, Zou H. Annals of Statistics 2011, 39(1):305–332
The complexity of semiparametric models poses new challenges to statistical inference and model selection that frequently arise from real applications. In this work, we propose new estimation and variable selection procedures for the semiparametric varying-coefficient partially linear model. We first study quantile regression estimates for the nonparametric varying-coefficient functions and the parametric regression coefficients. To achieve nice efficiency properties, we further develop a semiparametric composite quantile regression procedure. We establish the asymptotic normality of proposed estimators for both the parametric and nonparametric parts and show that the estimators achieve the best convergence rate. Moreover, we show that the proposed method is much more efficient than the least-squares-based method for many non-normal errors and that it only loses a small amount of efficiency for normal errors. In addition, it is shown that the loss in efficiency is at most 11.1% for estimating varying coefficient functions and is no greater than 13.6% for estimating parametric components. To achieve sparsity with high-dimensional covariates, we propose adaptive penalization methods for variable selection in the semiparametric varying-coefficient partially linear model and prove that the methods possess the oracle property. Extensive Monte Carlo simulation studies are conducted to examine the finite-sample performance of the proposed procedures. Finally, we apply the new methods to analyze the plasma beta-carotene level data.

11.
The varying-coefficient single-index model has two distinguishing features: partially linear varying-coefficient functions and a single-index structure. This paper proposes a nonparametric method based on smoothing splines for estimating the varying-coefficient functions and an unknown link function. Moreover, the average derivative estimation method is applied to obtain the single-index parameter estimates. For interval inference, Bayesian confidence intervals are obtained based on Bayes models for the varying-coefficient functions and the link function. The performance of the proposed method is examined both through simulations and by applying it to the Boston housing data.

12.
The generalized linear model (GLM) is a class of regression models in which the means of the response variables are joined to the linear predictors through a link function. The standard GLM assumes the link function is fixed; a more flexible GLM can be formed either by estimating the link function from a parametric family of link functions or by estimating it nonparametrically. In this paper, we propose a new algorithm that uses P-splines to estimate the link function nonparametrically while guaranteeing that it is monotone. This is equivalent to fitting a generalized single-index model with a monotonicity constraint. We also conduct extensive simulation studies comparing our nonparametric approach for estimating the link function with various parametric approaches, including the traditional logit, probit and robit link functions and two recently developed link functions, the generalized extreme value link and the symmetric power logit link. The simulation study shows that the link function estimated nonparametrically by our proposed algorithm performs well under a wide range of true link functions and outperforms parametric approaches when they are misspecified. A real data example is used to illustrate the results.
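The paper enforces monotonicity with P-splines; as a lighter-weight illustration of a monotone link estimate, the pool-adjacent-violators algorithm below fits the best nondecreasing step function to responses ordered by the linear predictor. This is plain isotonic regression, not the paper's algorithm, and the function names are illustrative.

```python
def pava(y):
    """Pool-adjacent-violators: least-squares nondecreasing fit to the sequence y."""
    levels, counts = [], []
    for yi in y:
        levels.append(float(yi))
        counts.append(1)
        # Merge adjacent blocks while the monotonicity constraint is violated
        while len(levels) > 1 and levels[-2] > levels[-1]:
            total = counts[-2] + counts[-1]
            levels[-2] = (levels[-2] * counts[-2] + levels[-1] * counts[-1]) / total
            counts[-2] = total
            levels.pop()
            counts.pop()
    out = []
    for lv, c in zip(levels, counts):
        out.extend([lv] * c)
    return out

def monotone_link_estimate(eta, y):
    """Isotonic estimate of the link g in E[y] = g(eta), evaluated at sorted eta."""
    order = sorted(range(len(eta)), key=lambda i: eta[i])
    return [eta[i] for i in order], pava([y[i] for i in order])
```

The classic textbook example: pava([1, 3, 2, 4]) pools the violating pair (3, 2) into two 2.5s.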

13.
For a censored two-sample problem, Chen and Wang [Y.Q. Chen and M.-C. Wang, Analysis of accelerated hazards models, J. Am. Statist. Assoc. 95 (2000), pp. 608–618] introduced the accelerated hazards model. The scale-change parameter in this model characterizes the association of two groups. However, its estimator involves the unknown density in the asymptotic variance. Thus, to make an inference on the parameter, numerically intensive methods are needed. The goal of this article is to propose a simple estimation method in which estimators are asymptotically normal with a density-free asymptotic variance. Some lack-of-fit tests are also obtained from this. These tests are related to Gill–Schumacher type tests [R.D. Gill and M. Schumacher, A simple test of the proportional hazards assumption, Biometrika 74 (1987), pp. 289–300] in which the estimating functions are evaluated at two different weight functions yielding two estimators that are close to each other. Numerical studies show that for some weight functions, the estimators and tests perform well. The proposed procedures are illustrated in two applications.

14.
A five-parameter extension of the Weibull distribution capable of modelling a bathtub-shaped hazard rate function is introduced and studied. The beauty and importance of the new distribution lie in its ability to model both monotone and non-monotone failure rates, which are quite common in lifetime and reliability problems. The proposed distribution has a number of well-known lifetime distributions as special sub-models, such as the Weibull, extreme value, exponentiated Weibull, generalized Rayleigh and modified Weibull (MW) distributions, among others. We obtain quantile and generating functions, mean deviations, Bonferroni and Lorenz curves and reliability. We provide explicit expressions for the density function of the order statistics and their moments. For the first time, we define the log-Kumaraswamy MW regression model to analyse censored data. The method of maximum likelihood is used for estimating the model parameters and the observed information matrix is determined. Two applications illustrate the potential of the proposed distribution.
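Among the sub-models listed is the modified Weibull (MW) distribution. Assuming the Lai–Xie–Murthy parametrization F(t) = 1 − exp(−a·t^b·e^(λt)) (an assumption here, since the abstract does not specify it), the hazard is h(t) = a·(b + λt)·t^(b−1)·e^(λt), obtained by differentiating the cumulative hazard H(t) = a·t^b·e^(λt); it is bathtub-shaped when b < 1 and λ > 0. The short check below verifies this numerically with illustrative parameter values.

```python
import math

def mw_hazard(t, a, b, lam):
    """Hazard of the modified Weibull with F(t) = 1 - exp(-a * t**b * exp(lam*t)).

    h(t) = H'(t) where H(t) = a * t**b * exp(lam*t) is the cumulative hazard.
    """
    return a * (b + lam * t) * t ** (b - 1) * math.exp(lam * t)
```

With a = 1, b = 0.5, λ = 0.1, the hazard starts high, dips, and rises again, the classic bathtub shape.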

15.
In this paper, we first introduce two new estimators for estimating the entropy of absolutely continuous random variables. We then compare the introduced estimators with the existing entropy estimators, including the first such estimator, proposed by Dimitriev and Tarasenko [On the estimation functions of the probability density and its derivatives, Theory Probab. Appl. 18 (1973), pp. 628–633]. We next propose goodness-of-fit tests for normality based on the introduced entropy estimators and compare their powers with the powers of other entropy-based tests for normality. Our simulation results show that the introduced estimators perform well in estimating entropy and testing normality.

16.
We study the invariance properties of various test criteria which have been proposed for hypothesis testing in the context of incompletely specified models, such as models formulated in terms of estimating functions (Godambe, 1960) or moment conditions and estimated by generalized method of moments (GMM) procedures (Hansen, 1982), and models estimated by pseudo-likelihood (Gouriéroux, Monfort, and Trognon, 1984b,c) and M-estimation methods. The invariance properties considered include invariance to (possibly nonlinear) hypothesis reformulations and reparameterizations. The test statistics examined include Wald-type, LR-type, LM-type, score-type, and C(α)-type criteria. Extending the approach used in Dagenais and Dufour (1991), we show first that all these test statistics except the Wald-type ones are invariant to equivalent hypothesis reformulations (under usual regularity conditions), but that none of the five is generally invariant to model reparameterizations, including measurement unit changes in nonlinear models. In other words, testing two equivalent hypotheses in the context of equivalent models may lead to completely different inferences. For example, this may occur after an apparently innocuous rescaling of some model variables. Then, with a view to avoiding such undesirable properties, we study restrictions that can be imposed on the objective functions used for pseudo-likelihood (or M-estimation) as well as on the structure of the test criteria used with estimating functions and GMM procedures to obtain invariant tests. In particular, we show that using linear exponential pseudo-likelihood functions allows one to obtain invariant score-type and C(α)-type test criteria, while in the context of estimating function (or GMM) procedures it is possible to modify an LR-type statistic proposed by Newey and West (1987) to obtain a test statistic that is invariant to general reparameterizations.
The invariance associated with linear exponential pseudo-likelihood functions is interpreted as a strong argument for using such pseudo-likelihood functions in empirical work.

17.
In this article, we consider a form of the general linear model in which data are missing at random on the covariates. We propose a test function based on the doubly robust method to investigate goodness of fit of the model. To this end, kernel methods are used to estimate unknown functions within the estimating-equation framework. Double robustness and asymptotic properties of the test function are obtained under local and alternative hypotheses. Furthermore, we investigate the power of the proposed test function through simulation studies, and finally we apply the method to the analysis of a real dataset.

18.
In this article, an estimation problem for multivariate stable laws using wavelets is studied. A wavelet-based method previously developed for estimating the parameters of univariate stable laws is extended to multivariate stable laws. The proposed estimation method is based on a nonlinear regression model for the wavelet coefficients of the characteristic function. In particular, two parametric sub-classes of stable laws are considered: multivariate stable laws with discrete spectral measure, and sub-Gaussian laws. In a simulation study, the proposed method is compared with well-known estimation procedures.

19.
In many applications, the parameters of interest are estimated by solving non-smooth estimating functions with U-statistic structure. Because the asymptotic covariance matrix of the estimator generally involves the underlying density function, resampling methods are often used to bypass the difficulty of non-parametric density estimation. Despite its simplicity, the resulting covariance matrix estimator depends on the nature of the resampling, and the method can be time-consuming when the number of replications is large. Furthermore, the inferences are based on a normal approximation that may not be accurate for practical sample sizes. In this paper, we propose a jackknife empirical likelihood-based inferential procedure for non-smooth estimating functions. Standard chi-square distributions are used to calculate the p-value and to construct confidence intervals. Extensive simulation studies and two real examples are provided to illustrate its practical utility.

20.
This paper is concerned with estimating a mixing density g using a random sample from the mixture distribution f(x) = ∫ f(x | θ)g(θ) dθ, where f(· | θ) is a known discrete exponential family of density functions. Recently two techniques for estimating g have been proposed. The first uses Fourier analysis and the method of kernels and the second uses orthogonal polynomials. It is known that the first technique is capable of yielding estimators that achieve (or almost achieve) the minimax convergence rate. We show that this is true for the technique based on orthogonal polynomials as well. The practical implementation of these estimators is also addressed. Computer experiments indicate that the kernel estimators give somewhat disappointing finite sample results. However, the orthogonal polynomial estimators appear to do much better. To improve on the finite sample performance of the orthogonal polynomial estimators, a way of estimating the optimal truncation parameter is proposed. The resultant estimators retain the convergence rates of the previous estimators and a Monte Carlo finite sample study reveals that they perform well relative to the ones based on the optimal truncation parameter.
