Similar Documents
20 similar documents found (search time: 421 ms)
1.
Density ratio models (DRMs) are commonly used semiparametric models for linking related populations. Empirical likelihood (EL) under a DRM has been demonstrated to be a flexible and useful platform for semiparametric inference. Since DRM-based EL has the same maximum point and maximum likelihood value as its dual form (the dual EL), EL-based inferences under a DRM are usually carried out through the latter. A natural question arises: does this incur any loss of efficiency? We carefully compare the dual EL and DRM-based EL estimation methods both theoretically and through numerical simulations. We find that their point estimators of any parameter are exactly the same, while their interval estimators may perform differently. In terms of coverage accuracy, the two intervals are comparable for non-skewed or moderately skewed populations, and the DRM-based EL interval can be much superior for severely skewed populations. A real data example is analysed for illustration.
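For readers unfamiliar with the dual form, the following is a minimal illustrative sketch (not the authors' code; the function name and optimization settings are our own choices) of maximizing the dual empirical log-likelihood under a two-sample DRM with basis q(x) = (1, x), using plain gradient ascent on the concave dual objective:

```python
import numpy as np

def dual_el_drm(x0, x1, lr=0.5, n_iter=2000):
    """Maximize the dual empirical log-likelihood under the density ratio model
    f1(x)/f0(x) = exp(alpha + beta * x) by gradient ascent (the dual objective
    is concave, so plain gradient ascent suffices for this sketch)."""
    x = np.concatenate([x0, x1])
    n0, n1 = len(x0), len(x1)
    n, rho = n0 + n1, len(x1) / (len(x0) + len(x1))
    alpha, beta = 0.0, 0.0
    for _ in range(n_iter):
        e = np.exp(alpha + beta * x)
        w = rho * e / (1.0 - rho + rho * e)           # model-implied P(sample 1 | x)
        alpha += lr * (n1 - w.sum()) / n              # score for alpha, scaled by 1/n
        beta += lr * (x1.sum() - (w * x).sum()) / n   # score for beta, scaled by 1/n
    return alpha, beta
```

For two normal populations N(0, 1) and N(1, 1), the DRM holds exactly with alpha = -1/2 and beta = 1, so the fitted tilt parameters should land near those values.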

2.
The L1-type regularization provides a useful tool for variable selection in high-dimensional regression modeling. Various algorithms have been proposed to solve the optimization problems arising from L1-type regularization; the coordinate descent algorithm, in particular, has been shown to be effective in sparse regression modeling. Although the algorithm performs remarkably well on these optimization problems, it suffers from outliers, since the procedure relies on inner products of the predictor variables with partial residuals obtained in a non-robust manner. To overcome this drawback, we propose a robust coordinate descent algorithm, focusing especially on high-dimensional regression modeling based on the principal components space. We show that the proposed robust algorithm converges to the minimum value of its objective function. Monte Carlo experiments and a real data analysis are conducted to examine the efficiency of the proposed robust algorithm. We observe that our robust coordinate descent algorithm performs effectively for high-dimensional regression modeling even in the presence of outliers.
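As context for the robustness issue raised above, here is a minimal NumPy sketch of the standard (non-robust) coordinate descent lasso update, built on exactly those inner products of predictors with partial residuals; the robust variant proposed in the paper is not reproduced here, and the function names are our own:

```python
import numpy as np

def soft_threshold(z, gamma):
    """Soft-thresholding, the closed-form coordinate-wise lasso update."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Standard coordinate descent for
        (1/(2n)) * ||y - X b||^2 + lam * ||b||_1 .
    Each update is built on the inner product of a predictor with the
    current partial residuals -- the outlier-sensitive step discussed above."""
    n, p = X.shape
    b = np.zeros(p)
    r = y - X @ b                       # full residuals
    for _ in range(n_sweeps):
        for j in range(p):
            r = r + X[:, j] * b[j]      # partial residuals excluding x_j
            z = X[:, j] @ r / n         # the non-robust inner product
            b[j] = soft_threshold(z, lam) / (X[:, j] @ X[:, j] / n)
            r = r - X[:, j] * b[j]      # restore residuals
    return b
```

Replacing the inner-product step with a robust counterpart is where the paper's algorithm departs from this sketch.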

3.
Recent studies have demonstrated the theoretical attractiveness of a class of concave penalties in variable selection, including the smoothly clipped absolute deviation and minimax concave penalties. The computation of the concave penalized solutions in high-dimensional models, however, is a difficult task. We propose a majorization minimization by coordinate descent (MMCD) algorithm for computing the concave penalized solutions in generalized linear models. In contrast to existing algorithms that use a local quadratic or local linear approximation to the penalty function, the MMCD seeks to majorize the negative log-likelihood by a quadratic loss but does not approximate the penalty. This strategy avoids computing a scaling factor in each update of the solutions, which improves the efficiency of coordinate descent. Under certain regularity conditions, we establish the theoretical convergence properties of the MMCD. We implement the algorithm for penalized logistic regression with the SCAD and MCP penalties. Simulation studies and a data example demonstrate that the MMCD is sufficiently fast for penalized logistic regression in high-dimensional settings where the number of covariates is much larger than the sample size.
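To make the "no scaling factor" point concrete, here is an illustrative sketch (under our own simplifying assumptions, not the authors' implementation) of an MMCD-style update for MCP-penalized logistic regression: since the logistic second derivative p(1-p) is bounded by 1/4, each coordinate can use the fixed curvature v_j = x_j'x_j/(4n) together with an exact MCP thresholding step, with gamma chosen so that gamma * v_j > 1.

```python
import numpy as np

def mcp_threshold(t, v, lam, gamma):
    """Exact minimizer of (v/2)*(b - t)**2 + MCP(b; lam, gamma); needs gamma*v > 1."""
    if abs(t) <= gamma * lam:
        return np.sign(t) * max(abs(v * t) - lam, 0.0) / (v - 1.0 / gamma)
    return t                              # flat region of MCP: no shrinkage

def mmcd_logistic(X, y, lam, gamma=8.0, n_sweeps=500):
    """MMCD sketch for MCP-penalized logistic regression: the negative
    log-likelihood is majorized by a quadratic with fixed per-coordinate
    curvature v_j = x_j'x_j/(4n), so no scaling factor is recomputed
    inside the coordinate updates."""
    n, p = X.shape
    b = np.zeros(p)
    v = (X * X).sum(axis=0) / (4.0 * n)   # fixed curvatures, computed once
    for _ in range(n_sweeps):
        for j in range(p):
            mu = 1.0 / (1.0 + np.exp(-(X @ b)))   # current fitted probabilities
            g = X[:, j] @ (y - mu) / n            # coordinate score
            b[j] = mcp_threshold(b[j] + g / v[j], v[j], lam, gamma)
    return b
```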

4.
We consider a linear regression model with group structure among the covariates. The group LASSO has been proposed for group variable selection, and many nonconvex penalties, such as the smoothly clipped absolute deviation and minimax concave penalties, have been extended to group variable selection problems. The group coordinate descent (GCD) algorithm is popular for fitting these models. However, GCD algorithms are hard to apply to nonconvex group penalties, owing to computational complexity, unless the design matrix is orthogonal. In this paper, we propose an efficient optimization algorithm for nonconvex group penalties that combines the concave-convex procedure with the group LASSO algorithm. We also extend the proposed algorithm to generalized linear models. We evaluate the numerical efficiency of the proposed algorithm against existing GCD algorithms on simulated and real data sets.
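For reference, the basic GCD update for the convex group LASSO (the building block that the proposed concave-convex procedure reuses) can be sketched as follows, assuming each group's columns have been orthonormalized so that X_g'X_g/n = I; the code and names are our own illustration, not the paper's implementation.

```python
import numpy as np

def group_soft_threshold(z, gamma):
    """Blockwise soft-thresholding: the GCD update for one group when the
    within-group design is orthonormal."""
    norm = np.linalg.norm(z)
    if norm <= gamma:
        return np.zeros_like(z)
    return (1.0 - gamma / norm) * z

def group_lasso_cd(X, y, groups, lam, n_sweeps=100):
    """Group coordinate descent for
        (1/(2n))||y - Xb||^2 + lam * sum_g sqrt(p_g) * ||b_g||_2 ,
    assuming X_g'X_g/n = I for each group g (list of index arrays)."""
    n, p = X.shape
    b = np.zeros(p)
    r = y - X @ b
    for _ in range(n_sweeps):
        for g in groups:
            r = r + X[:, g] @ b[g]            # partial residuals excluding group g
            z = X[:, g].T @ r / n             # unpenalized group minimizer
            b[g] = group_soft_threshold(z, lam * np.sqrt(len(g)))
            r = r - X[:, g] @ b[g]            # restore residuals
    return b
```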

5.
To make efficient inference for the mean of a response variable when the data are missing at random and the covariate dimension is not low, we construct three bias-corrected empirical likelihood (EL) methods in conjunction with dimension-reduced kernel estimation of the propensity and/or the conditional mean response function. Consistency and asymptotic normality of the maximum dimension-reduced EL estimators are established. We further study the asymptotic properties of the resulting dimension-reduced EL ratio functions, and the corresponding EL confidence intervals for the response mean are constructed. The finite-sample performance of the proposed estimators is studied through simulation, and an application to an HIV CD4 data set is also presented.

6.
Finite Sample Properties of the Two-Step Empirical Likelihood Estimator
We investigate the finite sample properties of two-step empirical likelihood (EL) estimators. These estimators are shown to have the same third-order bias properties as EL itself. The Monte Carlo study provides evidence that (i) higher order asymptotics fails to provide a good approximation in the sense that the bias of the two-step EL estimators can be substantial and sensitive to the number of moment restrictions and (ii) the two-step EL estimators may have heavy tails.

7.
Abstract.  The Cox model with time-dependent coefficients has recently been studied by a number of authors. In this paper, we develop empirical likelihood (EL) pointwise confidence regions for the time-dependent regression coefficients via local partial likelihood smoothing. EL simultaneous confidence bands for a linear combination of the coefficients are also derived, based on strong approximation methods. The EL ratio is formulated through the local partial log-likelihood for the regression coefficient functions. Our numerical studies indicate that the EL pointwise/simultaneous confidence regions/bands have satisfactory finite sample performance. Compared with confidence regions derived directly from the asymptotic normal distribution of the local constant estimator, the EL confidence regions are overall tighter and better capture the curvature of the underlying regression coefficient functions. Two data sets, the gastric cancer data and the Mayo Clinic primary biliary cirrhosis data, are analysed using the proposed method.

9.
The main purpose of this paper is first to introduce a new family of empirical test statistics for testing a simple null hypothesis when the vector of parameters of interest is defined through a specific set of unbiased estimating functions. This family of test statistics is based on a distance between two probability vectors: the first obtained by maximizing the empirical likelihood (EL) over the vector of parameters, and the second defined from the fixed vector of parameters under the simple null hypothesis. The distance considered for this purpose is the phi-divergence measure. The asymptotic distribution of this family of test statistics is then derived. The proposed methodology is illustrated using Newcomb's well-known measurements of the passage time of light. A simulation study compares its performance with that of the EL ratio test when confidence intervals are constructed from the respective statistics for small sample sizes. The results suggest that the ‘empirical modified likelihood ratio test statistic’ is a competitive alternative to the EL ratio test statistic and is more robust in the presence of contamination in the data. Finally, we propose empirical phi-divergence test statistics for testing a composite null hypothesis and present asymptotic as well as simulation results evaluating the performance of these test procedures.

10.
In this paper, we propose a lower-bound-based smoothed quasi-Newton algorithm for computing the solution paths of the group bridge estimator in linear regression models. Our method combines a quasi-Newton algorithm with a smoothed group bridge penalty and a novel data-driven thresholding rule for the regression coefficients. This rule is derived from a necessary KKT condition of the group bridge optimization problem. It is easy to implement and can be used to eliminate groups with zero coefficients, thereby reducing the dimension of the optimization problem. The proposed algorithm removes the groupwise orthogonality condition required by coordinate descent and LARS algorithms for group variable selection. Numerical results show that the proposed algorithm outperforms coordinate descent based algorithms in both efficiency and accuracy.

11.
The glmnet package by Friedman et al. [Regularization paths for generalized linear models via coordinate descent, J. Statist. Softw. 33 (2010), pp. 1–22] is an extremely fast implementation of the standard coordinate descent algorithm for solving ℓ1 penalized learning problems. In this paper, we consider a family of coordinate majorization descent algorithms for solving ℓ1 penalized learning problems, obtained by replacing each coordinate descent step with a coordinate-wise majorization descent operation. Numerical experiments show that this simple modification can lead to substantial improvements in speed when the predictors have moderate or high correlations.

12.
Rank regression procedures have been proposed and studied for numerous research applications that do not satisfy the underlying assumptions of the more common linear regression models. This article develops confidence regions for the slope parameter of rank regression using an empirical likelihood (EL) ratio method, which has the advantage of not requiring the variance estimation needed by the normal approximation method. The EL method is also range respecting and yields asymmetric confidence intervals. Simulation studies compare the normal approximation and EL inference methods under various conditions, such as different sample sizes and error distributions, and show that the proposed EL method outperforms the traditional method in terms of coverage probability and lower- and upper-tail error rates in nearly all settings considered. An application to stability analysis also shows that the EL method yields shorter confidence intervals for real-life data.

13.
Pretest–posttest studies are an important and popular method for assessing the effectiveness of a treatment or intervention in many scientific fields. While the treatment effect, measured as the difference between the two mean responses, is of primary interest, testing the difference between the two distribution functions of the treatment and control groups is also an important problem. The Mann–Whitney test has been a standard tool for testing the difference of distribution functions with two independent samples. We develop empirical likelihood (EL) based methods for the Mann–Whitney test that incorporate the two distinctive features of pretest–posttest studies: (i) the availability of baseline information for both groups; and (ii) the data structure with missingness by design. Our proposed methods combine the standard Mann–Whitney test with the EL method of Huang, Qin and Follmann [(2008), ‘Empirical Likelihood-Based Estimation of the Treatment Effect in a Pretest–Posttest Study’, Journal of the American Statistical Association, 103(483), 1270–1280], the imputation-based empirical likelihood method of Chen, Wu and Thompson [(2015), ‘An Imputation-Based Empirical Likelihood Approach to Pretest–Posttest Studies’, The Canadian Journal of Statistics, accepted for publication], and the jackknife empirical likelihood method of Jing, Yuan and Zhou [(2009), ‘Jackknife Empirical Likelihood’, Journal of the American Statistical Association, 104, 1224–1232]. Theoretical results are presented, and the finite sample performance of the proposed methods is evaluated through simulation studies.
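The Mann–Whitney statistic at the core of the proposed tests, without any of the paper's pretest–posttest adjustments, is simply:

```python
import numpy as np

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic: the number of pairs (x_i, y_j) with x_i < y_j,
    with ties counted as 1/2.  U / (len(x) * len(y)) estimates
    P(X < Y) + P(X = Y)/2."""
    d = y[None, :] - x[:, None]          # d[i, j] = y_j - x_i
    return np.sum(d > 0) + 0.5 * np.sum(d == 0)
```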

14.
We introduce estimation and test procedures based on divergence minimization for models satisfying linear constraints with unknown parameters. These procedures extend the empirical likelihood (EL) method and share common features with the generalized empirical likelihood approach. We treat the problems of existence and characterization of the divergence projections of probability distributions on sets of signed finite measures. We give a precise characterization of duality for the proposed class of estimates and test statistics, which is used to derive their limiting distributions (including those of the EL estimate and the EL ratio statistic) both under the null hypotheses and under alternatives or misspecification. An approximation to the power function is deduced, as well as the sample size that ensures a desired power against a given alternative.

15.
Sample-entropy-based tests, methods of sieves, and Grenander-type estimation procedures are known to be very efficient tools for assessing the normality of underlying data distributions in one-dimensional nonparametric settings. Recently, it has been shown that the density-based empirical likelihood (EL) concept extends and standardizes these methods, presenting a powerful approach for approximating optimal parametric likelihood ratio test statistics in a distribution-free manner. In this paper, we discuss the difficulties of constructing density-based EL ratio techniques for testing bivariate normality and propose a solution. Toward this end, a novel bivariate sample entropy expression is derived and shown to satisfy the known concepts related to bivariate histogram density estimation. Monte Carlo results show that the new density-based EL ratio tests for bivariate normality behave very well for finite sample sizes. To illustrate the applicability of the proposed approach, we present a real data example.

16.
We study a group lasso estimator for the multivariate linear regression model that accounts for correlated error terms. A block coordinate descent algorithm is used to compute this estimator. We perform a simulation study with categorical data and multivariate time series data, typical settings with a natural grouping among the predictor variables. Our simulation studies show the good performance of the proposed group lasso estimator compared to alternative estimators. We illustrate the method on a time series data set of gene expressions.

17.
There is a growing literature on Bayesian computational methods for problems with intractable likelihoods. One approach is the set of algorithms known as approximate Bayesian computation (ABC) methods. A drawback of these algorithms is that their performance depends on appropriate choices of summary statistics, distance measure, and tolerance level. To circumvent this problem, an alternative method based on the empirical likelihood has been introduced, which can be easily implemented when a set of constraints, related to the moments of the distribution, is specified. However, the choice of constraints is sometimes challenging. To overcome this difficulty, we propose an alternative method based on a bootstrap likelihood approach. The method is easy to implement and, in some cases, is actually faster than the other approaches considered. We illustrate the performance of our algorithm with examples from population genetics, time series, and stochastic differential equations, and we also test the method on a real dataset.
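A minimal ABC rejection sampler (a generic sketch of our own, not any of the specific algorithms compared in the paper) makes the role of the summary statistic, distance, and tolerance explicit:

```python
import numpy as np

def abc_rejection(data, prior_sampler, simulator, summary, tol, n_draws=5000):
    """Basic ABC rejection sampler: draw theta from the prior, simulate a
    data set of the same size, and keep theta when the simulated summary
    statistic falls within `tol` of the observed one (absolute distance)."""
    rng = np.random.default_rng(0)
    s_obs = summary(data)
    kept = []
    for _ in range(n_draws):
        theta = prior_sampler(rng)
        s_sim = summary(simulator(theta, len(data), rng))
        if abs(s_sim - s_obs) <= tol:
            kept.append(theta)
    return np.array(kept)
```

The empirical-likelihood and bootstrap-likelihood alternatives discussed above are motivated precisely by removing these tuning choices.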

18.
Value at risk (VaR) and expected shortfall (ES) are widely used measures of the risk of loss on a specific portfolio of financial assets. Adjusted empirical likelihood (AEL) is an important nonparametric likelihood method developed from empirical likelihood (EL); it overcomes the convex hull limitation of EL. In this paper, we use the AEL method to estimate confidence regions for VaR and ES. Theoretically, we find that AEL has the same large-sample statistical properties as EL and guarantees a solution to the estimating equations of EL. In addition, simulation results indicate that the coverage probabilities of the new confidence regions are higher than those of the original EL regions at the same nominal level. These results show that the AEL estimation of VaR and ES can be recommended for real applications.
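As a reminder of the two risk measures being interval-estimated, a plain empirical (non-AEL) point estimate from a loss sample is:

```python
import numpy as np

def var_es(losses, alpha=0.95):
    """Empirical value at risk and expected shortfall from a sample of losses
    (positive values are losses).  VaR is the empirical alpha-quantile;
    ES is the average loss at or beyond VaR."""
    var = np.quantile(losses, alpha)
    es = losses[losses >= var].mean()
    return var, es
```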

19.
This paper develops a novel weighted composite quantile regression (CQR) method for estimating a linear model when some covariates are missing at random and the missingness mechanism can be modelled parametrically. By incorporating the unbiased estimating equations of the incomplete data into empirical likelihood (EL), we obtain EL-based weights and then re-adjust the inverse probability weighted CQR for estimating the vector of regression coefficients. Theoretical results show that the proposed method achieves semiparametric efficiency if the selection probability function is correctly specified; therefore, the EL-weighted CQR is more efficient than the inverse probability weighted CQR. Moreover, our algorithm is computationally simple and easy to implement. Simulation studies are conducted to examine the finite sample performance of the proposed procedures. Finally, we apply the new method to analyse the US News college data.
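The composite quantile loss underlying CQR sums the Koenker–Bassett check function over K quantile levels, each with its own intercept; a minimal sketch in our own notation (with the per-level intercepts profiled out at their empirical quantiles, and none of the paper's EL weighting):

```python
import numpy as np

def check_loss(u, tau):
    """Koenker-Bassett check function rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def composite_quantile_loss(residuals, taus):
    """Composite quantile loss: the sum of check losses over K quantile
    levels, each level's intercept set to the empirical tau-quantile of
    the residuals (its minimizing value)."""
    total = 0.0
    for tau in taus:
        b_tau = np.quantile(residuals, tau)   # per-level intercept
        total += check_loss(residuals - b_tau, tau).sum()
    return total
```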

20.
ABSTRACT

Empirical likelihood (EL) is a nonparametric method based on observations. The EL method is formulated as a constrained optimization problem, whose solution is usually obtained via a duality approach. In this study, we propose an alternative algorithm for solving this constrained optimization problem, based on a Newton-type iteration for the Lagrange multipliers. We provide a simulation study and a real data example to compare the performance of the proposed algorithm with the classical algorithm. The simulation and real data results show that the performance of the proposed algorithm is comparable with that of the existing algorithm in terms of efficiency and CPU time.
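For the simplest case of a scalar mean, the dual (Lagrange multiplier) problem that Newton-type EL algorithms target can be sketched as follows, with an undamped Newton iteration for the multiplier (our own minimal illustration, not the paper's algorithm; a production version would safeguard the step to keep 1 + lambda * z_i positive):

```python
import numpy as np

def el_log_ratio(x, mu0, n_newton=50):
    """-2 log empirical likelihood ratio for H0: E[X] = mu0 (scalar case).
    The inner problem is solved for the Lagrange multiplier lam by an
    undamped Newton iteration on the dual score  sum_i z_i / (1 + lam*z_i),
    where z_i = x_i - mu0."""
    z = x - mu0
    lam = 0.0
    for _ in range(n_newton):
        d = 1.0 + lam * z
        score = np.sum(z / d)
        hess = -np.sum(z**2 / d**2)   # always negative where defined
        lam -= score / hess
    return 2.0 * np.sum(np.log1p(lam * z))
```

By Wilks-type theory, the returned statistic is asymptotically chi-squared with one degree of freedom under H0.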


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号