Similar Articles
1.
We propose a spline-based semiparametric maximum likelihood approach to analysing the Cox model with interval-censored data. With this approach, the baseline cumulative hazard function is approximated by a monotone B-spline function. We extend the generalized Rosen algorithm to compute the maximum likelihood estimate. We show that the estimator of the regression parameter is asymptotically normal and semiparametrically efficient, although the estimator of the baseline cumulative hazard function converges at a rate slower than root-n. We also develop an easy-to-implement method for consistently estimating the standard error of the estimated regression parameter, which facilitates the proposed inference procedure for the Cox model with interval-censored data. The proposed method is evaluated by simulation studies regarding its finite sample performance and is illustrated using data from a breast cosmesis study.
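As a rough illustration of the monotone B-spline device described above (not the authors' generalized Rosen algorithm, and with illustrative knots and degree), a B-spline can be forced to be non-decreasing by building its coefficients as cumulative sums of positive increments; the function name and the toy grid below are assumptions for the sketch.

```python
import numpy as np
from scipy.interpolate import BSpline

def monotone_bspline(interior_knots, degree, log_increments, t_grid, t_min, t_max):
    """Evaluate a non-decreasing B-spline (e.g. a baseline cumulative hazard) on t_grid.

    Monotonicity is enforced by taking the spline coefficients to be cumulative
    sums of the positive increments exp(log_increments)."""
    knots = np.concatenate(([t_min] * (degree + 1), interior_knots, [t_max] * (degree + 1)))
    coefs = np.cumsum(np.exp(log_increments))      # non-decreasing coefficients
    return BSpline(knots, coefs, degree)(t_grid)

# toy usage: cubic spline with 4 interior knots -> 8 basis functions / coefficients
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 10.0, 200)
Lambda_hat = monotone_bspline(np.array([2.0, 4.0, 6.0, 8.0]), 3,
                              rng.normal(size=8), grid, 0.0, 10.0)
assert np.all(np.diff(Lambda_hat) >= -1e-10)       # fitted cumulative hazard is non-decreasing
```

In the actual estimation problem the increments (random in this toy example) would be chosen to maximize the interval-censored likelihood under this monotone parametrization.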

2.
Pao-sheng Shen, Statistics, 2013, 47(2): 315–326
In this article, we consider nonparametric estimation of the survival function when the data are subject to left-truncation and right-censoring and the sample size before truncation is known. We propose two estimators. The first estimator is derived from a self-consistent estimating equation. The second estimator is obtained by using the constrained expectation-maximization algorithm. Simulation results indicate that both estimators are more efficient than the product-limit estimator. When there is no censoring, the performance of the proposed estimators is compared, via a simulation study, with that of the estimator proposed by Li and Qin [Semiparametric likelihood-based inference for biased and truncated data when total sample size is known, J. R. Stat. Soc. B 60 (1998), pp. 243–254].
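For reference, here is a compact sketch of the product-limit estimator that the two proposed estimators are compared against under left truncation and right censoring; the risk-set definition is the standard one, while the function name and the toy data are illustrative.

```python
import numpy as np

def product_limit_ltrc(trunc, time, delta, t_grid):
    """Product-limit (Lynden-Bell-type) survival estimator under
    left truncation (trunc) and right censoring (delta = 1 means failure)."""
    trunc, time, delta = map(np.asarray, (trunc, time, delta))
    event_times = np.unique(time[delta == 1])
    surv = np.ones(len(t_grid))
    for j, t in enumerate(np.asarray(t_grid)):
        s = 1.0
        for u in event_times[event_times <= t]:
            d_u = np.sum((time == u) & (delta == 1))      # failures at u
            r_u = np.sum((trunc <= u) & (time >= u))      # subjects at risk at u
            if r_u > 0:
                s *= 1.0 - d_u / r_u
        surv[j] = s
    return surv

# toy usage: exponential lifetimes, uniform truncation, exponential censoring
rng = np.random.default_rng(1)
T = rng.uniform(0.0, 1.0, 500)
X0, C = rng.exponential(1.0, 500), rng.exponential(3.0, 500)
keep = T <= np.minimum(X0, C)                             # only untruncated subjects are observed
T, X, d = T[keep], np.minimum(X0, C)[keep], (X0 <= C)[keep].astype(int)
print(product_limit_ltrc(T, X, d, [0.5, 1.0, 2.0]))
```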

3.
In this paper, we propose the hard thresholding regression (HTR) for estimating high-dimensional sparse linear regression models. HTR uses a two-stage convex algorithm to approximate the ℓ0-penalized regression: the first stage calculates a coarse initial estimator, and the second stage identifies the oracle estimator by borrowing information from the first one. Theoretically, the HTR estimator achieves the strong oracle property over a wide range of regularization parameters. Numerical examples and a real data example lend further support to our proposed methodology.
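A rough two-stage sketch of the thresholding idea: a lasso fit serves as the coarse initial estimator, its coefficients are hard-thresholded, and ordinary least squares is refit on the surviving variables. The tuning values `lasso_alpha` and `tau` are illustrative assumptions rather than the paper's recommended choices.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def hard_threshold_regression(X, y, lasso_alpha=0.1, tau=0.05):
    """Two-stage approximation of l0-penalized regression: stage 1 computes a
    coarse initial estimate, stage 2 keeps only coefficients exceeding tau in
    absolute value and refits the retained variables by OLS."""
    beta_init = Lasso(alpha=lasso_alpha).fit(X, y).coef_
    support = np.abs(beta_init) > tau                  # hard thresholding
    beta = np.zeros(X.shape[1])
    if support.any():
        beta[support] = LinearRegression().fit(X[:, support], y).coef_
    return beta, support

# toy usage: 10 active predictors out of 200, n = 100
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 200))
beta_true = np.zeros(200); beta_true[:10] = 2.0
y = X @ beta_true + rng.normal(size=100)
beta_hat, support = hard_threshold_regression(X, y)
print(np.flatnonzero(support))                         # indices of selected variables
```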

4.
In this paper, we study the problem of testing whether the density f of a random variable on a sphere belongs to a given parametric class of densities. We propose two test statistics based on the L2 and L1 distances between a non-parametric density estimator adapted to circular data and a smoothed version of the specified density. The asymptotic distribution of the L2 test statistic is provided under the null hypothesis and contiguous alternatives. We also consider a bootstrap method to approximate the distribution of both test statistics. Through a simulation study, we explore the performance of the proposed tests for moderate sample sizes under the null hypothesis and under different alternatives. Finally, the procedure is illustrated by analysing a real data set of wind direction measurements.
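A small numerical sketch of one such L2-type statistic: a von Mises kernel density estimate for circular data is compared with a kernel-smoothed version of a fitted von Mises null density, the smoothing being approximated here by applying the same kernel estimator to a large Monte Carlo sample from the fitted model. The kernel concentration and the Monte Carlo smoothing step are illustrative assumptions; in practice the statistic would be calibrated by its asymptotic law or by the bootstrap, as in the paper.

```python
import numpy as np
from scipy.stats import vonmises
from scipy.special import i0

def vm_kde(theta_grid, data, kappa):
    """Circular kernel density estimator with a von Mises kernel of concentration kappa."""
    diffs = theta_grid[:, None] - data[None, :]
    return np.mean(np.exp(kappa * np.cos(diffs)), axis=1) / (2.0 * np.pi * i0(kappa))

def l2_statistic(data, kappa_smooth=20.0, n_grid=1024, n_mc=20000):
    """Integrated squared distance between the KDE and a smoothed fitted von Mises density."""
    grid = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    f_hat = vm_kde(grid, data, kappa_smooth)
    kappa0, loc0, _ = vonmises.fit(data, fscale=1)           # parametric null fit
    null_sample = vonmises.rvs(kappa0, loc=loc0, size=n_mc, random_state=0)
    f0_smooth = vm_kde(grid, null_sample, kappa_smooth)       # smoothed null density
    return np.sum((f_hat - f0_smooth) ** 2) * (2.0 * np.pi / n_grid)

# toy usage: a two-component circular mixture tested against a single von Mises
data = np.concatenate([vonmises.rvs(4.0, loc=0.0, size=150, random_state=4),
                       vonmises.rvs(4.0, loc=np.pi / 2, size=150, random_state=5)])
print(l2_statistic(data))
```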

5.
This paper proposes an empirical likelihood (EL) method for estimating GARCH(p, q) models with heavy-tailed errors. Using the kernel smoothing method, we derive a smoothed EL ratio statistic, which yields a smoothed EL estimator. Moreover, we derive a profile EL for the parameters of interest in the presence of nuisance parameters. Simulation studies and an empirical application are presented to illustrate the proposed method.

6.
We consider the problem of efficiently estimating multivariate densities and their modes for moderate dimensions and an abundance of data. We propose polynomial histograms to solve this estimation problem. We present first- and second-order polynomial histogram estimators for a general d-dimensional setting. Our theoretical results include pointwise bias and variance of these estimators, their asymptotic mean integrated square error (AMISE), and optimal binwidth. The asymptotic performance of the first-order estimator matches that of the kernel density estimator, while the second-order estimator has the faster rate of O(n^{-6/(d+6)}). For a bivariate normal setting, we present explicit expressions for the AMISE constants, which show the much larger binwidths of the second-order estimator and hence also more efficient computations of multivariate densities. We apply polynomial histogram estimators to real data from biotechnology and find the number and location of modes in such data.
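To make the construction concrete, here is a one-dimensional sketch of the moment-matching idea behind a first-order polynomial histogram: within each bin the density estimate is linear, its integral matches the bin proportion, and its first moment matches the within-bin sample mean. The general d-dimensional estimators, their AMISE expressions, and the optimal binwidths of the paper are not reproduced; the bin grid below is an arbitrary choice.

```python
import numpy as np

def poly_histogram_1d(x, data, bins):
    """First-order polynomial histogram density estimate at the points x.

    On bin [t_j, t_{j+1}) with centre c_j and width h the estimate is
    a_j + b_j * (u - c_j), where a_j = N_j / (n h) matches the bin count and
    b_j = 12 N_j (xbar_j - c_j) / (n h^3) matches the within-bin first moment."""
    data, x = np.asarray(data), np.asarray(x)
    n, h = len(data), bins[1] - bins[0]                   # equal-width bins assumed
    idx_data = np.clip(np.digitize(data, bins) - 1, 0, len(bins) - 2)
    idx_x = np.clip(np.digitize(x, bins) - 1, 0, len(bins) - 2)
    centres = (bins[:-1] + bins[1:]) / 2.0
    est = np.zeros(len(x))
    for j in range(len(bins) - 1):
        in_bin = data[idx_data == j]
        if in_bin.size == 0:
            continue
        a = in_bin.size / (n * h)
        b = 12.0 * in_bin.size * (in_bin.mean() - centres[j]) / (n * h ** 3)
        sel = idx_x == j
        est[sel] = a + b * (x[sel] - centres[j])
    return np.maximum(est, 0.0)                           # crude fix for negative corners

# toy usage on standard normal data
rng = np.random.default_rng(4)
sample = rng.normal(size=2000)
grid = np.linspace(-4.0, 4.0, 201)
density = poly_histogram_1d(grid, sample, np.linspace(-4.0, 4.0, 21))
print(density.max())                                      # roughly 1/sqrt(2*pi) ~ 0.40
```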

7.
In survival analysis, covariate measurements often contain missing observations; ignoring this feature can lead to invalid inference. We propose a class of weighted estimating equations for right-censored data with missing covariates under semiparametric transformation models. Time-specific and subject-specific weights are accommodated in the formulation of the weighted estimating equations. We establish unified results for estimating missingness probabilities that cover both parametric and non-parametric modelling schemes. To improve estimation efficiency, the weighted estimating equations are augmented by a new set of unbiased estimating equations. The resultant estimator has the so-called 'double robustness' property and is optimal within a class of consistent estimators.

8.
We study the benefit of exploiting the gene–environment independence (GEI) assumption for inferring the joint effect of genotype and environmental exposure on disease risk in a case–control study. By transforming the problem into a constrained maximum likelihood estimation problem we derive the asymptotic distribution of the maximum likelihood estimator (MLE) under the GEI assumption (MLE-GEI) in a closed form. Our approach uncovers a transparent explanation of the efficiency gained by exploiting the GEI assumption in more general settings, thus bridging an important gap in the existing literature. Moreover, we propose an easy-to-implement numerical algorithm for estimating the model parameters in practice. Finally, we conduct simulation studies to compare the proposed method with the traditional prospective logistic regression method and the case-only estimator. The Canadian Journal of Statistics 47: 473–486; 2019 © 2019 Statistical Society of Canada
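For orientation, here is a tiny sketch of the case-only estimator that the proposed MLE-GEI is compared with: under gene-environment independence (and a rare disease), the multiplicative G×E interaction can be estimated from the association between a binary G and a binary E among cases alone. The 2×2 layout, the Woolf-type standard error, and the toy data are illustrative.

```python
import numpy as np

def case_only_interaction(g_cases, e_cases):
    """Case-only estimator of the multiplicative GxE interaction (binary G and E).

    Under gene-environment independence in the source population, the log odds
    ratio between G and E among cases estimates the interaction log odds ratio;
    a Woolf-type standard error is sqrt(1/a + 1/b + 1/c + 1/d)."""
    g, e = np.asarray(g_cases), np.asarray(e_cases)
    a = np.sum((g == 1) & (e == 1)); b = np.sum((g == 1) & (e == 0))
    c = np.sum((g == 0) & (e == 1)); d = np.sum((g == 0) & (e == 0))
    log_or = np.log(a * d / (b * c))
    return log_or, np.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)

# toy usage: simulated cases in which G and E are positively associated
rng = np.random.default_rng(5)
g = rng.binomial(1, 0.3, size=1000)
e = rng.binomial(1, 0.2 + 0.25 * g)                       # induces a G-E association among cases
print(case_only_interaction(g, e))
```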

9.
For monitoring systemic risk from the regulators' point of view, this article proposes a relative risk measure that is sensitive to market comovement. The asymptotic normality of a nonparametric estimator and of its smoothed version is established when the observations are independent. To construct confidence intervals without complicated asymptotic variance estimation, a jackknife empirical likelihood inference procedure based on the smoothed nonparametric estimator is provided, with a Wilks-type result in the case of independent observations. When the data follow AR-GARCH models, the relative risk measure with respect to the errors becomes useful, and so we propose a corresponding nonparametric estimator. A simulation study and a real-data analysis show that the proposed relative risk measure is useful in monitoring systemic risk.

10.
We consider the problem of estimating the proportion θ of true null hypotheses in a multiple testing context. The setup is classically modelled through a semiparametric mixture with two components: a uniform distribution on the interval [0,1] with prior probability θ, and a non-parametric density f. We discuss asymptotic efficiency results and establish that two different cases occur depending on whether f vanishes on a non-empty interval or not. In the first case, we exhibit estimators converging at a parametric rate, compute the optimal asymptotic variance and conjecture that no estimator is asymptotically efficient (i.e. attains the optimal asymptotic variance). In the second case, we prove that the quadratic risk of any estimator does not converge at a parametric rate. We illustrate these results on simulated data.
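As a point of reference for this mixture formulation, a simple plug-in estimator of θ counts the p-values above a cut-off λ, where the uniform component dominates when f concentrates near zero. This Storey-type estimator is not one of the estimators analysed in the paper, and the cut-off λ = 0.5 below is an arbitrary illustrative choice.

```python
import numpy as np

def pi0_plugin(pvalues, lam=0.5):
    """Plug-in estimator of the proportion theta of true null hypotheses.

    In the mixture theta * U[0,1] + (1 - theta) * f, p-values above lam come
    mostly from the uniform component, so #{p_i > lam} / (m * (1 - lam))
    estimates theta; the estimate is capped at 1."""
    p = np.asarray(pvalues)
    return min(1.0, np.mean(p > lam) / (1.0 - lam))

# toy usage: 80% true nulls (uniform p-values), 20% alternatives near zero
rng = np.random.default_rng(6)
p = np.concatenate([rng.uniform(size=800), rng.beta(0.2, 5.0, size=200)])
print(pi0_plugin(p))          # should be close to 0.8
```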

11.
This article considers nonparametric estimation of first-price auction models under the monotonicity restriction on the bidding strategy. Based on an integrated-quantile representation of the first-order condition, we propose a tuning-parameter-free estimator for the valuation quantile function. We establish its cube-root-n consistency and asymptotic distribution under weaker smoothness assumptions than those typically assumed in the empirical literature. If the latter are true, we also provide a trimming-free smoothed estimator and show that it is asymptotically normal and achieves the optimal rate of Guerre, Perrigne, and Vuong (2000). We illustrate our method using Monte Carlo simulations and an empirical study of the California highway procurement auctions. Supplementary materials for this article are available online.

12.
In this paper, we consider non-parametric copula inference under bivariate censoring. Based on an estimator of the joint cumulative distribution function, we define a discrete and two smooth estimators of the copula. The construction that we propose is valid for a large range of estimators of the distribution function and therefore for a large range of bivariate censoring frameworks. Under some conditions on the tails of the distributions, the weak convergence of the corresponding copula processes is obtained in ℓ^∞([0,1]^2). We derive the uniform convergence rates of the copula density estimators deduced from our smooth copula estimators. Investigation of the practical behaviour of these estimators is performed through a simulation study and two real data applications, corresponding to different censoring settings. We use our non-parametric estimators to define a goodness-of-fit procedure for parametric copula models. A new bootstrap scheme is proposed to compute the critical values.
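A schematic of the plug-in construction underlying the discrete copula estimator: given any estimator of the joint distribution function and its margins, the copula is estimated by composing the joint CDF with the marginal quantile functions. For readability the sketch below plugs in the ordinary bivariate empirical CDF, i.e. the uncensored special case; under censoring one would substitute an estimator of the joint CDF adapted to the censoring scheme, as in the paper.

```python
import numpy as np

def copula_from_cdf(x, y, u, v):
    """Discrete copula estimator C(u, v) = F(F1^{-1}(u), F2^{-1}(v)),
    with F the bivariate empirical CDF of the sample (x, y)."""
    x, y, u, v = map(np.asarray, (x, y, u, v))
    xq = np.quantile(x, u, method="inverted_cdf")        # marginal quantile functions
    yq = np.quantile(y, v, method="inverted_cdf")
    joint = (x[:, None] <= xq[None, :]) & (y[:, None] <= yq[None, :])
    return joint.mean(axis=0)                            # joint empirical CDF at the quantiles

# toy usage on positively dependent data: C(u, u) exceeds u * u
rng = np.random.default_rng(7)
z = rng.normal(size=1000)
x, y = z + rng.normal(size=1000), z + rng.normal(size=1000)
u = np.array([0.25, 0.5, 0.75])
print(copula_from_cdf(x, y, u, u), u * u)
```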

13.
In this paper, we propose a general kth correlation coefficient between the density function and the distribution function of a continuous variable as a measure of symmetry and asymmetry. We first propose a root-n consistent moment-based estimator of the kth correlation coefficient and present its asymptotic results. Next, we consider statistical inference for the kth correlation coefficient using the empirical likelihood (EL) method; the EL statistic is shown to have an asymptotic standard chi-squared distribution. Finally, we propose a residual-based estimator of the kth correlation coefficient for a parametric regression model to test whether the density function of the true model error is symmetric or not. We present the asymptotic results of the residual-based kth correlation coefficient estimator and also construct its EL-based confidence intervals. Simulation studies are conducted to examine the performance of the proposed estimators, and we also apply the proposed estimators to an air quality data set.

14.
The quantile residual lifetime function provides comprehensive quantitative measures for residual life, especially when the distribution of the latter is skewed or heavy-tailed and/or when the data contain outliers. In this paper, we propose a general class of semiparametric quantile residual life models for length-biased right-censored data. We use the inverse probability weighted method to correct the bias due to length-biased sampling and informative censoring. Two estimating equations corresponding to the quantile regressions are constructed in two separate steps to obtain an efficient estimator. Consistency and asymptotic normality of the estimator are established. The main difficulty in implementing our proposed method is that the estimating equations associated with the quantiles are nondifferentiable, and we apply the majorize–minimize algorithm and estimate the asymptotic covariance using an efficient resampling method. We use simulation studies to evaluate the proposed method and illustrate its application by a real-data example.

15.
We propose a regression method that studies covariate effects on the conditional quantiles of residual lifetimes at a given follow-up time point. This can be particularly useful in cancer studies, where more patients survive their initial cancer and a patient's residual life expectancy is used to compare the efficacy of secondary or adjuvant therapies. The new method provides a consistent estimator that often exhibits a smaller standard error in real and simulated examples, compared with the existing method of Jung et al. (2009). It also provides a simple empirical likelihood inference method that does not require estimating the covariance matrix of the estimator or resampling. We apply the new method to a breast cancer study (NSABP Protocol B-04, Fisher et al. (2002)) and estimate median residual lifetimes at various follow-up time points, adjusting for important prognostic factors.

16.
We study the Jeffreys prior and its properties for the shape parameter of univariate skew-t distributions with linear and nonlinear Student's t skewing functions. In both cases, we show that the resulting priors for the shape parameter are symmetric around zero and proper. Moreover, we propose a Student's t approximation of the Jeffreys prior that makes an objective Bayesian analysis easy to perform. We carry out a Monte Carlo simulation study that demonstrates an overall better behaviour of the maximum a posteriori estimator compared with the maximum likelihood estimator. We also compare the frequentist coverage of the credible intervals based on the Jeffreys prior and its approximation and show that they are similar. We further discuss location-scale models under scale mixtures of skew-normal distributions and show some conditions for the existence of the posterior distribution and its moments. Finally, we present three numerical examples to illustrate the implications of our results on inference for skew-t distributions.
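To illustrate the mechanics of such an objective Bayesian analysis, the sketch below computes a Jeffreys-type prior for a skewness parameter numerically, as the square root of the Fisher information obtained by integrating the squared score. It uses the simpler skew-normal density 2φ(x)Φ(αx) rather than the skew-t families with Student's t skewing functions studied in the paper, so it is only an analogue of the construction, not the paper's prior.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def jeffreys_skew_normal(alpha):
    """Unnormalized Jeffreys prior for the shape parameter of a skew-normal.

    f(x; alpha) = 2 * phi(x) * Phi(alpha * x); the score with respect to alpha
    is x * phi(alpha * x) / Phi(alpha * x), and the prior is the square root of
    the expected squared score (computed in log-space for numerical stability)."""
    def integrand(x):
        log_dens = np.log(2.0) + norm.logpdf(x) + norm.logcdf(alpha * x)
        score = x * np.exp(norm.logpdf(alpha * x) - norm.logcdf(alpha * x))
        return score ** 2 * np.exp(log_dens)
    fisher, _ = quad(integrand, -np.inf, np.inf, limit=200)
    return np.sqrt(fisher)

# the prior is symmetric in alpha; at alpha = 0 the Fisher information equals 2/pi
for a in [0.0, 1.0, -1.0, 5.0]:
    print(a, jeffreys_skew_normal(a))
```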

17.
In many applications, the parameters of interest are estimated by solving non-smooth estimating functions with U-statistic structure. Because the asymptotic covariance matrix of the estimator generally involves the underlying density function, resampling methods are often used to bypass the difficulty of non-parametric density estimation. Despite its simplicity, the resulting covariance matrix estimator depends on the nature of the resampling, and the method can be time-consuming when the number of replications is large. Furthermore, the inferences are based on the normal approximation, which may not be accurate for practical sample sizes. In this paper, we propose a jackknife empirical likelihood-based inferential procedure for non-smooth estimating functions. Standard chi-squared distributions are used to calculate the p-value and to construct confidence intervals. Extensive simulation studies and two real examples are provided to illustrate its practical utilities.
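A compact sketch of the jackknife empirical likelihood recipe for a parameter defined through a non-smooth estimating function with U-statistic structure: compute jackknife pseudo-values of the U-statistic and then apply ordinary empirical likelihood to those pseudo-values, calibrating with a chi-squared(1) distribution. The example parameter θ = P(X1 + X2 > 0), the bracket tolerance, and the toy data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def jackknife_pseudovalues(x, kernel):
    """Jackknife pseudo-values of a degree-2 U-statistic with a symmetric kernel."""
    n = len(x)
    h = kernel(x[:, None], x[None, :])                 # n x n matrix of kernel values
    np.fill_diagonal(h, 0.0)
    total, row = h.sum() / 2.0, h.sum(axis=1)          # all pairs / pairs containing i
    u_full = total / (n * (n - 1) / 2.0)
    u_loo = (total - row) / ((n - 1) * (n - 2) / 2.0)  # leave-one-out U-statistics
    return n * u_full - (n - 1) * u_loo

def jel_statistic(v, theta0):
    """-2 log empirical likelihood ratio for the mean of the pseudo-values v at theta0."""
    d = v - theta0
    if d.min() >= 0.0 or d.max() <= 0.0:               # theta0 outside the convex hull
        return np.inf
    g = lambda lam: np.sum(d / (1.0 + lam * d))        # Lagrange multiplier equation
    eps = 1e-8
    lam = brentq(g, -1.0 / d.max() + eps, -1.0 / d.min() - eps)
    return 2.0 * np.sum(np.log1p(lam * d))             # compare with chi-squared(1)

# toy usage: theta = P(X1 + X2 > 0), a non-smooth U-statistic kernel
rng = np.random.default_rng(8)
x = rng.normal(loc=0.3, size=200)
v = jackknife_pseudovalues(x, lambda a, b: (a + b > 0).astype(float))
print(jel_statistic(v, theta0=0.5), jel_statistic(v, theta0=v.mean()))
```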

18.
Length-biased sampling data are often encountered in studies of economics, industrial reliability, epidemiology, genetics and cancer screening. The complication of this type of data is that the observed lifetimes suffer from left truncation and right censoring, where the left truncation variable has a uniform distribution. For the Cox proportional hazards model, Huang & Qin (Journal of the American Statistical Association, 107, 2012, p. 107) proposed a composite partial likelihood method which not only has the simplicity of the popular partial likelihood estimator but can also be carried out with standard statistical software. The accelerated failure time model has become a useful alternative to the Cox proportional hazards model. In this paper, using the composite partial likelihood technique, we study this model with length-biased sampling data. The proposed method has a very simple form and is robust when the assumption that the censoring time is independent of the covariate is violated. To ease the difficulty of solving the non-smooth estimating equation, we use a kernel-smoothed estimation method (Heller; Journal of the American Statistical Association, 102, 2007, p. 552). Large-sample results and a resampling method for variance estimation are discussed. Simulation studies are conducted to compare the performance of the proposed method with other existing methods. A real data set is used for illustration.

19.
We consider the smoothed maximum likelihood estimator and the smoothed Grenander-type estimator for a monotone baseline hazard rate λ0 in the Cox model. We analyze their asymptotic behaviour and show that they are asymptotically normal at rate n^{m/(2m+1)} when λ0 is m ≥ 2 times continuously differentiable, and that both estimators are asymptotically equivalent. Finally, we present numerical results on pointwise confidence intervals that illustrate the comparable behaviour of the two methods.

20.
Efficient inference for regression models requires that heteroscedasticity be taken into account. We consider statistical inference under heteroscedasticity in a semiparametric measurement error regression model, in which some covariates are measured with error. This paper has three components. First, we propose a new method for testing for heteroscedasticity; its advantages over existing methods are that it does not need any nonparametric estimation and does not involve any mismeasured variables. Second, we propose a new two-step estimator for the error variances when there is heteroscedasticity. Finally, we propose a weighted estimating equation-based estimator (WEEBE) for the regression coefficients and establish its asymptotic properties. Compared with existing estimators, the proposed WEEBE is asymptotically more efficient, avoids undersmoothing the regressor functions and requires fewer restrictions on the observed regressors. Simulation studies show that the proposed test procedure and estimators have good finite-sample performance. A real data set is used to illustrate the utility of the proposed methods.
