Similar Documents
20 similar documents found (search time: 46 ms)
1.
Survival data with missing censoring indicators are frequently encountered in biomedical studies. In this paper, we consider statistical inference for this type of data under the additive hazard model. Reweighting methods based on simple and augmented inverse probability are proposed. The asymptotic properties of the proposed estimators are established. Furthermore, we provide a numerical technique for checking adequacy of the fitted model with missing censoring indicators. Our simulation results show that the proposed estimators outperform the simple and augmented inverse probability weighted estimators without reweighting. The proposed methods are illustrated by analyzing a dataset from a breast cancer study.
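The inverse-probability-weighting idea behind this entry can be illustrated on a toy missing-indicator problem. This is a minimal sketch under simplified assumptions (a binary indicator, known observation probabilities, no additive-hazard structure), not the authors' estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(size=n)                   # covariate
delta = rng.binomial(1, 0.3 + 0.5 * x)    # censoring indicator, depends on x
pi = 0.5 + 0.4 * x                        # P(indicator observed | x), assumed known here
r = rng.binomial(1, pi)                   # r = 1 if delta is observed

# A complete-case mean ignores that observation depends on x and is biased;
# weighting observed cases by 1/pi restores a consistent estimate of E[delta].
naive = delta[r == 1].mean()
ipw = np.sum(r * delta / pi) / np.sum(r / pi)
```

Here E[delta] = 0.55, and the weighted estimate recovers it while the complete-case mean drifts upward because cases with large x are observed more often.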

2.
In this article, a new efficient iteration procedure based on quantile regression is developed for single-index varying-coefficient models. The proposed estimation scheme extends the full iteration procedure of Carroll et al. and differs from the method adopted by Wu et al. for single-index models, in which a double-weighted summation is used. This distinction not only explains why undersmoothing is a necessary condition in our procedure, but may also reduce the computational burden, especially for large sample sizes. The resulting estimators are shown to be robust to outliers as well as to varying errors. Moreover, to achieve sparsity when irrelevant variables are present among the index parameters, a variable selection procedure based on the adaptive LASSO penalty is developed to simultaneously select and estimate the significant parameters. Theoretical properties of the resulting estimators are established under regularity conditions, and simulation studies with various error distributions are conducted to assess the finite-sample performance of the proposed method.

3.
We consider the case of a multicenter trial in which the center-specific sample sizes are potentially small. Under homogeneity, the conventional procedure is to pool information using a weighted estimator whose weights are the inverses of the estimated center-specific variances. Whereas this procedure is efficient under conventional asymptotics (e.g. center-specific sample sizes become large, number of centers fixed), it is commonly believed that its efficiency also holds under meta-analytic asymptotics (e.g. center-specific sample sizes bounded, potentially small, and number of centers large). In this contribution we demonstrate that this estimator fails to be efficient. In fact, it shows a persistent bias with increasing number of centers, showing that it is not meta-consistent. In addition, we show that the Cochran and Mantel-Haenszel weighted estimators are meta-consistent and, more generally, provide conditions on the weights under which the associated weighted estimator is meta-consistent.

4.
For the uniform distribution, interval estimates of the parameters are given for the complete-sample case, and point and interval estimates are given under Type-II censoring. Extensive Monte Carlo simulations compare the point estimators in terms of mean squared error and assess the accuracy of the interval estimation methods. A real example illustrates the application of these point and interval estimation methods.

5.
Ranked set samples, and median ranked set samples in particular, have been used extensively in the literature for many reasons. In some situations the experimenter may not be able to quantify or measure the response variable because of the high cost of data collection, yet it may be easy to rank the subjects of interest. The purpose of this article is to study the asymptotic distribution of the parameter estimators of the simple linear regression model. We show that these estimators, under the median ranked set sampling scheme, converge in distribution to the normal distribution under weak conditions. Moreover, we derive large-sample confidence intervals for the regression parameters as well as a large-sample prediction interval for a new observation. We also study the properties of these estimators in the small-sample setting and conduct a simulation study to investigate the behavior of the distributions of the proposed estimators.
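The variance gain from median ranked set sampling can be checked by simulation. This is a hedged toy illustration (standard normal responses, perfect rankings, estimating a mean rather than regression coefficients), not the regression setting of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def mrss(m, k):
    """Median ranked set sample: from each of m sets of k draws, measure only the median."""
    return np.median(rng.normal(size=(m, k)), axis=1)

# Compare the spread of the MRSS mean with an SRS mean, both using m measured units.
reps, m, k = 2000, 30, 5
mrss_means = np.array([mrss(m, k).mean() for _ in range(reps)])
srs_means = rng.normal(size=(reps, m)).mean(axis=1)
```

For symmetric distributions the set medians concentrate around the centre, so the MRSS mean is noticeably less variable than the SRS mean with the same number of measurements.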

6.
This paper defines a general procedure for estimating the population mean of the study variate based on double sampling for stratification in the presence of multi-auxiliary information. Classes of combined and separate estimators are suggested and their properties studied under large-sample approximation. A class of unstratified double sampling estimators is also proposed and its properties given. Asymptotically optimum estimators in the classes are identified along with their approximate variance formulae. Further, the proposed classes of estimators are compared with the corresponding class of estimators based on unstratified double sampling. All findings are encouraging and support the soundness of the proposed procedure for mean estimation.

7.
In this paper we present methods for inference on data selected by a complex sampling design for a class of statistical models for the analysis of ordinal variables. Specifically, assuming that the sampling scheme is not ignorable, we derive for the class of cub models (Combination of discrete Uniform and shifted Binomial distributions) variance estimates for a complex two stage stratified sample. Both Taylor linearization and repeated replication variance estimators are presented. We also provide design‐based test diagnostics and goodness‐of‐fit measures. We illustrate by means of real data analysis the differences between survey‐weighted and unweighted point estimates and inferences for cub model parameters.

8.
Probability plots are often used to estimate the parameters of distributions. Using large-sample properties of the empirical distribution function and of order statistics, weights that stabilize the variance are derived so that weighted least squares regression can be performed. Weighted least squares regression is then applied to the estimation of the parameters of the Weibull and the Gumbel distributions. The weights are independent of the parameters of the distributions considered. Monte Carlo simulation shows that the weighted least-squares estimators consistently outperform the usual least-squares estimators, especially in small samples.
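A minimal sketch of the weighted-least-squares probability-plot idea for the Weibull case. The variance-stabilizing weights below come from a delta-method approximation and are meant to illustrate the parameter-free weights described in the abstract, not to reproduce the paper's exact weights:

```python
import numpy as np

rng = np.random.default_rng(2)
k_true, lam_true = 2.0, 3.0
x = np.sort(lam_true * rng.weibull(k_true, size=500))  # order statistics

n = len(x)
p = (np.arange(1, n + 1) - 0.5) / n       # plotting positions
y = np.log(-np.log(1.0 - p))              # Weibull plot ordinate: k*log(x) - k*log(lam)
X = np.log(x)

# Delta method: Var(y_i) ~ p(1-p) / (n * [(1-p) log(1-p)]^2); WLS weights are
# its reciprocal, which is free of the Weibull parameters.
w = ((1.0 - p) * np.log(1.0 - p)) ** 2 / (p * (1.0 - p))

W = np.sqrt(w)
A = np.column_stack([X, np.ones(n)])
slope, intercept = np.linalg.lstsq(A * W[:, None], y * W, rcond=None)[0]

k_hat = slope                          # shape estimate
lam_hat = np.exp(-intercept / slope)   # scale estimate
```

Since y = k log x - k log λ on the plot, the slope estimates the shape and the intercept recovers the scale.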

9.
Abstract. We introduce and study a class of weighted functional estimators for the coefficient of tail dependence in bivariate extreme value statistics. Asymptotic normality of these estimators is established under a second‐order condition on the joint tail behaviour, some conditions on the weight function and for appropriately chosen sequences of intermediate order statistics. Asymptotically unbiased estimators are constructed by judiciously chosen linear combinations of weighted functional estimators, and variance optimality within this class of asymptotically unbiased estimators is discussed. The finite sample performance of some specific examples from our class of estimators and some alternatives from the recent literature are evaluated with a small simulation experiment.

10.
陈建宝  孙林 《统计研究》2015,32(1):95-101
For the random-effects spatial-lag single-index panel model, this paper constructs a cross-sectional maximum likelihood estimation method and examines the large-sample properties and small-sample performance of its estimators through both theoretical proof and numerical simulation. The results show that: (1) under large-sample conditions all estimators are consistent, and the parameter estimators are asymptotically normal; (2) under small-sample conditions the estimators still perform well, and their precision improves as the sample size increases; the complexity of the spatial weight matrix structure has a considerable effect on the estimator of the spatial correlation coefficient, but little effect on the other estimators.

11.
In this paper, attention is focused on estimation of the location parameter in the double exponential case using a weighted linear combination of the sample median and pairs of order statistics, at symmetric distances on both sides of the sample median. Minimizing with respect to the weights and distances, we obtain a smaller second-order asymptotic variance. If the number of pairs is taken to infinity and the distances to zero, we attain the least asymptotic variance in this class of estimators. The Pitman estimator is also noted. The improved estimators are likewise examined with respect to their probability of concentration in order to investigate its bound. A numerical comparison of the estimators is given.

12.
We advocate the use of an indirect inference method to estimate the parameter of a COGARCH(1,1) process from equally spaced observations. This requires that the true model can be simulated and that a reasonable estimation method exists for an approximate auxiliary model. We follow previous approaches and use linear projections leading to an auxiliary autoregressive model for the squared COGARCH returns. The asymptotic theory of the indirect inference estimator relies on a uniform strong law of large numbers and on asymptotic normality of the parameter estimates of the auxiliary model, which require continuity and differentiability of the COGARCH process with respect to its parameter and which we prove via Kolmogorov's continuity criterion. This leads to consistent and asymptotically normal indirect inference estimates under moment conditions on the driving Lévy process. A simulation study shows that the method yields a substantial finite-sample bias reduction compared with previous estimators.
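The indirect-inference principle can be sketched with a deliberately simple toy model: an exponential distribution with the sample mean as auxiliary statistic. The paper's COGARCH/autoregressive setup is far richer; everything below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(5)

# "Observed" data from the true model (exponential with rate theta0 = 2.0).
theta0 = 2.0
obs = rng.exponential(1.0 / theta0, size=2000)

# Auxiliary statistic from the observed data (the paper uses an AR model for
# squared returns; here the sample mean suffices to show the principle).
aux_obs = obs.mean()

def aux_sim(theta, reps=50, n=2000):
    """Average auxiliary statistic over simulated data sets from a candidate model."""
    return rng.exponential(1.0 / theta, size=(reps, n)).mean()

# Indirect inference: pick theta whose simulated auxiliary statistic best
# matches the observed one (grid search for clarity, not efficiency).
grid = np.linspace(0.5, 5.0, 91)
theta_hat = grid[np.argmin([(aux_sim(t) - aux_obs) ** 2 for t in grid])]
```

Matching the simulated to the observed auxiliary statistic drives the candidate rate toward the true value of 2.0.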

13.
When the probability of selecting an individual in a population is proportional to its lifelength, the sampling is called length biased. A nonparametric maximum likelihood estimator (NPMLE) of survival in a length biased sample is given in Vardi (1982). In this study, we examine the performance of Vardi's NPMLE in estimating the true survival curve when observations come from a length biased sample. We also compute estimators based on a linear combination (LCE) of empirical distribution function (EDF) estimators and weighted estimators. In our simulations, we consider observations from a mixture of two different distributions, one from F and the other from G, the length biased distribution of F. Through a series of simulations with various proportions of length biasing in a sample, we show that the NPMLE and the LCE closely approximate the true survival curve. Throughout the survival curve, the EDF estimators overestimate the survival. We also consider a case where the observations are from three different weighted distributions. Again, both the NPMLE and the LCE closely approximate the true distribution, indicating that the length biasedness is properly adjusted for. Finally, an efficiency study shows that Vardi's estimators are more efficient than the EDF estimators in the lower percentiles of the survival curves.
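The effect of length bias, and the classical 1/x reweighting that undoes it, can be seen in a few lines. This is a hedged toy illustration of the weighting idea only, not Vardi's NPMLE:

```python
import numpy as np

rng = np.random.default_rng(3)

# True lifetimes X ~ Uniform(1, 3), mean mu = 2.  Under length-biased sampling
# the density is g(x) = x f(x) / mu; draw from g by inverting G(x) = (x^2 - 1)/8.
n = 50000
u = rng.uniform(size=n)
x = np.sqrt(1.0 + 8.0 * u)            # length-biased draws on (1, 3)

mean_naive = x.mean()                 # converges to E_g[X] = 13/6, biased upward
mean_weighted = n / np.sum(1.0 / x)   # 1/x-weighted (harmonic-mean) correction -> mu = 2
```

The unweighted mean overstates the true mean lifetime because long lifetimes are oversampled; weighting each observation by 1/x exactly cancels the x-proportional selection.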

14.
The minimum disparity estimators proposed by Lindsay (1994) for discrete models form an attractive subclass of minimum distance estimators which achieve their robustness without sacrificing first-order efficiency at the model. Similarly, disparity test statistics are useful robust alternatives to the likelihood ratio test for testing hypotheses in parametric models; they are asymptotically equivalent to the likelihood ratio test statistics under the null hypothesis and under contiguous alternatives. Despite these asymptotic optimality properties, the small-sample performance of many of the minimum disparity estimators and disparity tests can be considerably worse than that of the maximum likelihood estimator and the likelihood ratio test, respectively. In this paper we focus on the class of blended weight Hellinger distances, a general subfamily of disparities, study the effects of combining two different distances within this class to generate the family of "combined" blended weight Hellinger distances, and identify the members of this family which generally perform well. More generally, we investigate the class of "combined and penalized" blended weight Hellinger distances; the penalty is based on reweighting the empty cells, following Harris and Basu (1994). It is shown that some members of the combined and penalized family have rather attractive properties.

15.
In this paper, we consider the estimation of partially linear additive quantile regression models where the conditional quantile function comprises a linear parametric component and a nonparametric additive component. We propose a two-step estimation approach: in the first step, we approximate the conditional quantile function using a series estimation method. In the second step, the nonparametric additive component is recovered using either a local polynomial estimator or a weighted Nadaraya–Watson estimator. Both consistency and asymptotic normality of the proposed estimators are established. In particular, we show that the first-stage estimator for the finite-dimensional parameters attains the semiparametric efficiency bound under homoskedasticity, and that the second-stage estimators for the nonparametric additive component have an oracle efficiency property. Monte Carlo experiments are conducted to assess the finite sample performance of the proposed estimators. An application to a real data set is also presented.

16.
Analysis of high-dimensional data often seeks to identify a subset of important features and assess their effects on the outcome. Traditional statistical inference procedures based on standard regression methods often fail in the presence of high-dimensional features. In recent years, regularization methods have emerged as promising tools for analyzing high-dimensional data. These methods simultaneously select important features and provide stable estimation of their effects. Adaptive LASSO and SCAD, for instance, give consistent and asymptotically normal estimates with oracle properties. However, in finite samples, it remains difficult to obtain interval estimators for the regression parameters. In this paper, we propose perturbation resampling based procedures to approximate the distribution of a general class of penalized parameter estimates. Our proposal, justified by asymptotic theory, provides a simple way to estimate the covariance matrix and confidence regions. Through finite sample simulations, we verify the ability of this method to give accurate inference and compare it to other widely used standard deviation and confidence interval estimates. We also illustrate our proposals with a data set used to study the association of HIV drug resistance and a large number of genetic mutations.

17.
We propose correcting for non-compliance in randomized trials by estimating the parameters of a class of semi-parametric failure time models, the rank preserving structural failure time models, using a class of rank estimators. These models are the structural or strong version of the “accelerated failure time model with time-dependent covariates” of Cox and Oakes (1984). In this paper we develop a large sample theory for these estimators, derive the optimal estimator within this class, and briefly consider the construction of “partially adaptive” estimators whose efficiency may approach that of the optimal estimator. We show that in the absence of censoring the optimal estimator attains the semiparametric efficiency bound for the model.

18.
Multivariate failure time data arise when data consist of clusters in which the failure times may be dependent. A popular approach to such data is the marginal proportional hazards model with estimation under the working independence assumption. In this paper, we consider the Clayton–Oakes model with marginal proportional hazards and use the full model structure to improve on efficiency compared with the independence analysis. We derive a likelihood based estimating equation for the regression parameters as well as for the correlation parameter of the model. We give the large sample properties of the estimators arising from this estimating equation. Finally, we investigate the small sample properties of the estimators through Monte Carlo simulations.

19.
Randomized response is an interview technique designed to eliminate response bias when sensitive questions are asked. In this paper, we present a logistic regression model on randomized response data when the covariates on some subjects are missing at random. In particular, we propose Horvitz and Thompson (1952)-type weighted estimators by using different estimates of the selection probabilities. We present large sample theory for the proposed estimators and show that they are more efficient than the estimator using the true selection probabilities. Simulation results support theoretical analysis. We also illustrate the approach using data from a survey of cable TV.

20.
Abstract.  A new semiparametric method for density deconvolution is proposed, based on a model in which only the ratio of the unconvoluted to convoluted densities is specified parametrically. Deconvolution results from reweighting the terms in a standard kernel density estimator, where the weights are defined by the parametric density ratio. We propose that in practice, the density ratio be modelled on the log-scale as a cubic spline with a fixed number of knots. Parameter estimation is based on maximization of a type of semiparametric likelihood. The resulting asymptotic properties for our deconvolution estimator mirror the convergence rates in standard density estimation without measurement error when attention is restricted to our semiparametric class of densities. Furthermore, numerical studies indicate that for practical sample sizes our weighted kernel estimator can provide better results than the classical non-parametric kernel estimator for a range of densities outside the specified semiparametric class.
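A hedged sketch of the reweighted-kernel idea: if X = Z + ε is observed and the density ratio r = f_Z / f_X is available (taken as fully known in this Gaussian toy case, whereas the paper models it as a log-scale cubic spline and estimates it), then reweighting the kernel terms by r(X_i) targets the unconvoluted density f_Z:

```python
import numpy as np

rng = np.random.default_rng(4)

def norm_pdf(t, s):
    return np.exp(-0.5 * (t / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Observed X = Z + eps, with Z ~ N(0,1) and eps ~ N(0, 0.5^2), so X ~ N(0, 1.25).
n = 20000
x = rng.normal(0, 1, n) + rng.normal(0, 0.5, n)

# Parametric density ratio r = f_Z / f_X, known here for illustration.
r = norm_pdf(x, 1.0) / norm_pdf(x, np.sqrt(1.25))

def weighted_kde(grid, data, weights, h):
    """Reweighted kernel estimator: (1/n) * sum_i w_i * K_h(grid - X_i)."""
    d = (grid[:, None] - data[None, :]) / h
    return (weights * norm_pdf(d, 1.0)).mean(axis=1) / h

grid = np.array([0.0])
h = 0.25
f0_weighted = weighted_kde(grid, x, r, h)[0]        # targets f_Z(0) = 0.399
f0_plain = weighted_kde(grid, x, np.ones(n), h)[0]  # targets f_X(0) = 0.357
```

The identity E[r(X) K_h(x - X)] = ∫ f_Z(t) K_h(x - t) dt explains why the reweighted estimator recovers the narrower, unconvoluted density while the plain kernel estimator recovers the convoluted one.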

