Similar Documents
20 similar documents retrieved.
1.
We propose a new estimator, the thresholded scaled Lasso, in high-dimensional threshold regressions. First, we establish an upper bound on the ℓ∞ estimation error of the scaled Lasso estimator of Lee, Seo, and Shin. This is a nontrivial task as the literature on high-dimensional models has focused almost exclusively on ℓ1 and ℓ2 estimation errors. We show that this sup-norm bound can be used to distinguish between zero and nonzero coefficients at a much finer scale than would have been possible using classical oracle inequalities. Thus, our sup-norm bound is tailored to consistent variable selection via thresholding. Our simulations show that thresholding the scaled Lasso yields substantial improvements in terms of variable selection. Finally, we use our estimator to shed further empirical light on the long-running debate on the relationship between the level of debt (public and private) and GDP growth. Supplementary materials for this article are available online.
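A minimal sketch of the thresholding idea on simulated data, using scikit-learn's cross-validated Lasso as a stand-in for the scaled Lasso of Lee, Seo, and Shin; the threshold level `tau` below is an illustrative hand-picked value, not the data-driven level derived from the sup-norm bound in the article.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 200, 500
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = [1.5, -1.0, 0.8, 0.6, -0.5]          # sparse truth
y = X @ beta + rng.standard_normal(n)

# Step 1: an initial sparse fit (ordinary cross-validated Lasso here,
# used only as a stand-in for the scaled Lasso).
lasso = LassoCV(cv=5).fit(X, y)
beta_hat = lasso.coef_

# Step 2: threshold small coefficients to zero; the article derives a
# data-driven level from its sup-norm bound, tau here is just illustrative.
tau = 0.25
beta_thresholded = np.where(np.abs(beta_hat) > tau, beta_hat, 0.0)

selected = np.flatnonzero(beta_thresholded)
print("selected variables:", selected)
```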

2.
This article proposes a new class of copula-based dynamic models for high-dimensional conditional distributions, facilitating the estimation of a wide variety of measures of systemic risk. Our proposed models draw on successful ideas from the literature on modeling high-dimensional covariance matrices and on recent work on models for general time-varying distributions. Our use of copula-based models enables the estimation of the joint model in stages, greatly reducing the computational burden. We use the proposed new models to study a collection of daily credit default swap (CDS) spreads on 100 U.S. firms over the period 2006 to 2012. We find that while the probability of distress for individual firms has fallen greatly since the financial crisis of 2008–2009, the joint probability of distress (a measure of systemic risk) is substantially higher now than in the precrisis period. Supplementary materials for this article are available online.
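A toy illustration of the stage-wise estimation idea (not the dynamic copula models of the article): marginal distributions are fitted first, each series is transformed to the uniform scale, and only then is the dependence structure estimated, here as a simple static Gaussian copula correlation on simulated data standing in for CDS spreads.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated stand-in for a panel of spreads (T days, N firms).
T, N = 1500, 5
data = np.exp(rng.multivariate_normal(np.zeros(N),
                                      0.5 * np.eye(N) + 0.5, size=T))

# Stage 1: fit a marginal model to each series (a simple log-normal here;
# the article uses richer time-varying marginal models).
U = np.empty_like(data)
for j in range(N):
    shape, loc, scale = stats.lognorm.fit(data[:, j], floc=0)
    U[:, j] = stats.lognorm.cdf(data[:, j], shape, loc=loc, scale=scale)

# Stage 2: estimate the copula on the uniform-scale data.
# A static Gaussian copula: correlation of the normal scores.
Z = stats.norm.ppf(np.clip(U, 1e-6, 1 - 1e-6))
R_hat = np.corrcoef(Z, rowvar=False)
print(np.round(R_hat, 2))
```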

3.
High-dimensional sparse modeling with censored survival data is of great practical importance, as exemplified by applications in high-throughput genomic data analysis. In this paper, we propose a class of regularization methods, integrating both the penalized empirical likelihood and pseudoscore approaches, for variable selection and estimation in sparse and high-dimensional additive hazards regression models. When the number of covariates grows with the sample size, we establish asymptotic properties of the resulting estimator and the oracle property of the proposed method. It is shown that the proposed estimator is more efficient than that obtained from the non-concave penalized likelihood approach in the literature. Based on a penalized empirical likelihood ratio statistic, we further develop a nonparametric likelihood approach for testing linear hypotheses about the regression coefficients and, consequently, for constructing confidence regions. Simulation studies are carried out to evaluate the performance of the proposed methodology, and two real data sets are analyzed.

4.
This article considers nonparametric estimation of first-price auction models under the monotonicity restriction on the bidding strategy. Based on an integrated-quantile representation of the first-order condition, we propose a tuning-parameter-free estimator for the valuation quantile function. We establish its cube-root-n consistency and asymptotic distribution under weaker smoothness assumptions than those typically assumed in the empirical literature. When the latter hold, we also provide a trimming-free smoothed estimator and show that it is asymptotically normal and achieves the optimal rate of Guerre, Perrigne, and Vuong (2000). We illustrate our method using Monte Carlo simulations and an empirical study of the California highway procurement auctions. Supplementary materials for this article are available online.
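For context, the first-order condition of Guerre, Perrigne, and Vuong (2000) that this literature builds on recovers each private value from the observed bid and the bid distribution; the integrated-quantile representation used in the article is a reformulation of this relation. With I symmetric bidders, bid CDF G, and bid density g,

```latex
v_i \;=\; b_i + \frac{G(b_i)}{(I-1)\,g(b_i)} .
```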

5.
The literature on testing the unit root hypothesis in the presence of GARCH errors is extended. A new test based upon the combination of local-to-unity detrending and joint maximum likelihood estimation of the autoregressive parameter and GARCH process is presented. The finite sample distribution of the test is derived under alternative decisions regarding the deterministic terms employed. Using Monte Carlo simulation, the newly proposed ML t-test is shown to exhibit increased power relative to rival tests. Finally, the empirical relevance of the simulation results is illustrated via an application to real GDP for the UK.
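The joint ML t-test itself is not available in standard packages; the sketch below only illustrates its two ingredients separately on simulated data, assuming the `arch` package: a standard ADF unit root test on the level series and a GARCH(1,1) fit to the differences. The article instead estimates the autoregressive and GARCH parameters jointly after local-to-unity detrending.

```python
import numpy as np
from arch import arch_model
from arch.unitroot import ADF

rng = np.random.default_rng(2)
# Simulated highly persistent series with GARCH(1,1) errors.
T = 500
e = np.zeros(T)
sigma2 = np.ones(T)
for t in range(1, T):
    sigma2[t] = 0.1 + 0.1 * e[t - 1] ** 2 + 0.8 * sigma2[t - 1]
    e[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
y = np.cumsum(0.02 + e)                         # drifting, near-unit-root level

# Ingredient 1: a standard ADF test on the level (the benchmark being improved on).
adf = ADF(y, trend="ct")
print(f"ADF stat = {adf.stat:.3f}, p-value = {adf.pvalue:.3f}")

# Ingredient 2: a GARCH(1,1) fit to the first differences (heteroscedastic errors).
garch = arch_model(np.diff(y), mean="Constant", vol="GARCH", p=1, q=1)
res = garch.fit(disp="off")
print(res.params)
```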

6.
For conditional time-varying factor models with high-dimensional assets, this article proposes a high-dimensional alpha (HDA) test to assess whether there exist abnormal returns on securities (or portfolios) over the theoretical expected returns. To employ this test effectively, a constant coefficient test is also introduced. It examines the validity of constant alphas and factor loadings. Simulation studies and an empirical example are presented to illustrate the finite sample performance and the usefulness of the proposed tests. Using the HDA test, the empirical example demonstrates that the Fama-French (FF) three-factor model is better than the CAPM in explaining the mean-variance efficiency of both the Chinese and U.S. stock markets. Furthermore, our results suggest that, in terms of mean-variance efficiency, the U.S. stock market is more efficient than the Chinese stock market. Supplementary materials for this article are available online.
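Not the HDA test itself, but a sketch of the quantities it aggregates: per-asset alphas (intercepts) from time-series regressions of returns on the factors, here via statsmodels OLS on simulated data. The high-dimensional test then asks whether all of these alphas are jointly zero when the number of assets is large.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
T, N, K = 600, 50, 3                     # periods, assets, factors
factors = rng.standard_normal((T, K))    # stand-in for three observed factors
betas = rng.standard_normal((N, K))
returns = factors @ betas.T + 0.5 * rng.standard_normal((T, N))

X = sm.add_constant(factors)
alphas, tstats = np.empty(N), np.empty(N)
for i in range(N):
    fit = sm.OLS(returns[:, i], X).fit()
    alphas[i] = fit.params[0]            # intercept = the asset's alpha
    tstats[i] = fit.tvalues[0]

# The null of an alpha test is that every alpha is zero; a high-dimensional
# test combines this information across all N assets simultaneously.
print("largest |t| across assets:", np.abs(tstats).max())
```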

7.
Zhouping Li & Yiming Liu, Statistics, 2017, 51(5): 1006-1022
In estimation of multiplicative or accelerated failure time models, the relative error criterion has been recognized as an alternative to the squared or absolute error criterion. The general relative error criterion introduced by Chen et al. [Least product relative error estimation. J Multivariate Anal. 2016;144:91–98] is a unified framework for efficient estimation, which includes least absolute relative error estimation and least product relative error estimation as special cases. In this paper, by combining the empirical likelihood and the general relative error criterion in the multiplicative model, we develop a new empirical likelihood method for inference on the unknown parameters in a high-dimensional setting. Limiting theory is established for the proposed empirical likelihood statistic. We conduct simulation studies and a real data analysis to evaluate the effectiveness of the proposed method.
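A sketch of the least product relative error (LPRE) fit for the multiplicative model Y = exp(X'beta) * error, based on my reading of the Chen et al. criterion (the product of the two relative errors), which reduces to the convex objective coded below; the empirical likelihood inference developed in this paper is not shown.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n, p = 300, 4
X = rng.standard_normal((n, p))
beta_true = np.array([0.5, -0.3, 0.2, 0.0])
Y = np.exp(X @ beta_true) * np.exp(0.2 * rng.standard_normal(n))   # positive responses

def lpre_loss(beta):
    eta = X @ beta
    # Product of the two relative errors, summed over observations:
    # |Y - exp(eta)|/Y * |Y - exp(eta)|/exp(eta) = Y*exp(-eta) + exp(eta)/Y - 2
    return np.sum(Y * np.exp(-eta) + np.exp(eta) / Y - 2.0)

res = minimize(lpre_loss, x0=np.zeros(p), method="BFGS")
print("LPRE estimate:", np.round(res.x, 3))
```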

8.
In many economic models, theory restricts the shape of functions through conditions such as monotonicity or curvature. This article reviews and presents a framework for constrained estimation and inference to test for shape conditions in parametric models. We show that “regional” shape-restricting estimators have important advantages in terms of model fit and flexibility (as opposed to standard “local” or “global” shape-restricting estimators). Our empirical illustration is the first to simultaneously impose and test all shape restrictions required by economic theory in the “Berndt and Wood” data. We find that this dataset is consistent with “duality theory,” whereas previous studies have found violations of economic theory. We discuss policy consequences for key parameters, such as whether energy and capital are complements or substitutes.

9.
Communications in Statistics - Theory and Methods, 2012, 41(16-17): 2944-2958
The focus of this article is on the choice of suitable prior distributions for item parameters within item response theory (IRT) models. In particular, the use of empirical prior distributions for item parameters is proposed. Firstly, regression trees are implemented in order to build informative empirical prior distributions. Secondly, model estimation is conducted within a fully Bayesian approach through the Gibbs sampler, which keeps estimation feasible even for increasingly complex models. The main results show that item parameter recovery is improved with the introduction of empirical prior information about item parameters, even when only a small sample is available.

10.
Efficient statistical inference on nonignorable missing data is a challenging problem. This paper proposes a new estimation procedure based on composite quantile regression (CQR) for linear regression models with nonignorable missing data that is applicable even with high-dimensional covariates. A parametric model is assumed for modelling the response probability, which is estimated by the empirical likelihood approach. Local identifiability of the proposed strategy is guaranteed on the basis of an instrumental variable approach. A set of data-based adaptive weights constructed via an empirical likelihood method is used to weight the CQR functions. The proposed method is resistant to heavy-tailed errors or outliers in the response. An adaptive penalisation method for variable selection is proposed to achieve sparsity with high-dimensional covariates. Limiting distributions of the proposed estimators are derived. Simulation studies are conducted to investigate the finite sample performance of the proposed methodologies. An application to the ACTG 175 data is presented.
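A bare-bones composite quantile regression fit, illustrating only the CQR building block (no missing-data weights, empirical likelihood, or penalisation): a common slope vector is estimated jointly with one intercept per quantile level by minimising the summed check losses. The quantile levels and optimiser settings are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n, p = 400, 3
X = rng.standard_normal((n, p))
beta_true = np.array([1.0, -0.5, 0.0])
y = X @ beta_true + rng.standard_t(df=3, size=n)     # heavy-tailed errors

taus = np.array([0.1, 0.3, 0.5, 0.7, 0.9])           # composite quantile levels

def check_loss(u, tau):
    # Quantile regression check function rho_tau(u).
    return u * (tau - (u < 0))

def cqr_objective(theta):
    # theta = (one intercept per quantile level, common slope vector)
    a, beta = theta[:len(taus)], theta[len(taus):]
    resid = y[:, None] - X @ beta[:, None] - a[None, :]
    return sum(check_loss(resid[:, k], tau).sum() for k, tau in enumerate(taus))

theta0 = np.zeros(len(taus) + p)
res = minimize(cqr_objective, theta0, method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
print("CQR slope estimate:", np.round(res.x[len(taus):], 3))
```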

11.
Regression analyses are commonly performed with doubly limited continuous dependent variables; for instance, when modeling the behavior of rates, proportions and income concentration indices. Several models are available in the literature for use with such variables, one of them being the unit gamma regression model. In all such models, parameter estimation is typically performed using the maximum likelihood method and testing inferences on the model's parameters are usually based on the likelihood ratio test. Such a test can, however, deliver quite imprecise inferences when the sample size is small. In this paper, we propose two modified likelihood ratio test statistics for use with unit gamma regressions that deliver much more accurate inferences when the number of data points is small. Numerical (i.e. simulation) evidence is presented for both fixed dispersion and varying dispersion models, and also for tests that involve nonnested models. We also present and discuss two empirical applications.

12.
Statistical inference on high-dimensional precision matrices is as important as statistical inference on high-dimensional covariance matrices. In the literature, much attention has been paid to the latter, and significant advances have been achieved, especially in the estimation and testing of banded structure. This paper proposes a new test for banded structures of precision matrices without assuming any specific parametric distribution. The test is adapted to large-p, small-n problems, for which we derive the asymptotic distribution of the statistic under the null hypothesis of bandedness. Simulation results show that the proposed test performs well with finite sample sizes. A real data application to phone call centre data is presented.

13.
Heteroscedasticity consistent covariance matrix (HCCM) estimators are commonly used for testing regression coefficients when the error terms of a regression model are heteroscedastic. These estimators are based on the residuals obtained from ordinary least squares, a method that yields inefficient estimates in the presence of heteroscedasticity. It is usual practice to use the estimated weighted least squares method or some adaptive method to find efficient estimates of the regression parameters when the form of heteroscedasticity is unknown, but in the available literature HCCM estimators are seldom derived from such efficient estimators for testing purposes. The current article addresses this concern and presents weighted versions of HCCM estimators. Our numerical work examines the finite sample properties of these estimators in terms of interval estimation and null rejection rate.
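A sketch of what such estimators look like: the usual OLS-based HC0/HC3 sandwiches, followed by a "weighted" variant built from an estimated weighted least squares fit. The variance-function regression used to form the weights is a simple illustrative choice, not necessarily the weighting scheme proposed in the article.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
x = rng.uniform(1, 5, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + x * rng.standard_normal(n)        # variance grows with x

# --- OLS and the classical HC0/HC3 sandwich estimators ---
XtX_inv = np.linalg.inv(X.T @ X)
b_ols = XtX_inv @ X.T @ y
e = y - X @ b_ols
h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)           # leverages (hat diagonal)
hc0 = XtX_inv @ X.T @ np.diag(e**2) @ X @ XtX_inv
hc3 = XtX_inv @ X.T @ np.diag((e / (1 - h))**2) @ X @ XtX_inv

# --- A weighted version: estimate a variance function from the OLS residuals,
# refit by WLS, then form the sandwich from the WLS quantities (illustrative). ---
gamma = np.polyfit(np.log(x), np.log(e**2 + 1e-12), 1)  # log-variance regression
w = 1.0 / np.exp(np.polyval(gamma, np.log(x)))          # estimated weights
XtWX_inv = np.linalg.inv(X.T @ (w[:, None] * X))
b_wls = XtWX_inv @ X.T @ (w * y)
e_w = y - X @ b_wls
hc0_w = XtWX_inv @ X.T @ np.diag(w**2 * e_w**2) @ X @ XtWX_inv

print("OLS slope SE (HC0, HC3):", np.sqrt(hc0[1, 1]), np.sqrt(hc3[1, 1]))
print("WLS slope SE (weighted HC0):", np.sqrt(hc0_w[1, 1]))
```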

14.
Bandwidth plays an important role in determining the performance of nonparametric estimators, such as the local constant estimator. In this article, we propose a Bayesian approach to bandwidth estimation for local constant estimators of time-varying coefficients in time series models. We establish a large sample theory for the proposed bandwidth estimator and Bayesian estimators of the unknown parameters involved in the error density. A Monte Carlo simulation study shows that (i) the proposed Bayesian estimators for bandwidth and parameters in the error density have satisfactory finite sample performance; and (ii) our proposed Bayesian approach achieves better performance in estimating the bandwidths than the normal reference rule and cross-validation. Moreover, we apply our proposed Bayesian bandwidth estimation method to the time-varying coefficient models that explain Okun’s law and the relationship between consumption growth and income growth in the U.S. For each model, we also provide calibrated parametric forms of the time-varying coefficients. Supplementary materials for this article are available online.
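A sketch of the local constant estimator whose bandwidth is at issue: at each rescaled time point the coefficient is estimated by kernel-weighted least squares. The bandwidth h below is fixed by hand; the article's contribution is to choose it (together with the error-density parameters) by a Bayesian procedure.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 400
t_grid = np.arange(T) / T                       # rescaled time in [0, 1]
beta_t = 1.0 + np.sin(2 * np.pi * t_grid)       # true time-varying coefficient
x = rng.standard_normal(T)
y = beta_t * x + 0.5 * rng.standard_normal(T)

def local_constant_beta(tau, h):
    """Kernel-weighted least squares estimate of beta at rescaled time tau."""
    u = (t_grid - tau) / h
    w = np.exp(-0.5 * u**2)                     # Gaussian kernel weights
    return np.sum(w * x * y) / np.sum(w * x**2)

h = 0.05                                        # hand-picked bandwidth for the sketch
beta_hat = np.array([local_constant_beta(tau, h) for tau in t_grid])
print("max abs estimation error:", np.abs(beta_hat - beta_t).max())
```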

15.
Four testing procedures are considered for testing the response rate of one-sample correlated binary data with a cluster size of one or two, which often occurs in otolaryngologic and ophthalmologic studies. Although an asymptotic approach is often used for statistical inference, it is criticized for unsatisfactory type I error control in small sample settings. An alternative to the asymptotic approach is an unconditional approach. The first unconditional approach is the one based on estimation, also known as parametric bootstrap (Lee and Young in Stat Probab Lett 71(2):143–153, 2005). The other two unconditional approaches considered in this article are an approach based on maximization (Basu in J Am Stat Assoc 72(358):355–366, 1977), and an approach based on estimation and maximization (Lloyd in Biometrics 64(3):716–723, 2008a). These two unconditional approaches guarantee the test size and are generally more reliable than the asymptotic approach. We compare these four approaches in conjunction with a test proposed by Lee and Dubin (Stat Med 13(12):1241–1252, 1994) and a likelihood ratio test derived in this article, with regard to type I error rate and power for sample sizes from small to medium. An example from an otolaryngologic study is provided to illustrate the various testing procedures. The unconditional approach based on estimation and maximization using the test in Lee and Dubin (Stat Med 13(12):1241–1252, 1994) is preferable due to its power advantage.

16.
In linear regression models, it is common for the error variances to differ across observations and for some data points to have high leverage. In such situations, the available literature advocates the use of heteroscedasticity consistent covariance matrix estimators (HCCME) for the testing of regression coefficients. Such estimators are primarily based on the residuals derived from the ordinary least squares (OLS) estimator, which itself can be seriously inefficient in the presence of heteroscedasticity. For efficient estimation, adaptive estimators are available, but their performance has not yet been evaluated when heteroscedasticity is accompanied by high leverage data. In this article, the presence of high leverage data is taken into account to evaluate the efficiency of the adaptive estimator. Furthermore, our numerical work also evaluates the robust standard errors based on this efficient estimator in terms of interval estimation and null rejection rate (NRR).

17.
This article is concerned with testing the diagonality of a high-dimensional covariance matrix under non-normality, which is more practical than testing sphericity or identity in the high-dimensional setting. The existing testing procedure for diagonality is not robust against either the data dimension or the data distribution, producing tests with type I error rates far larger than nominal levels. This is mainly due to bias from estimating certain functions of the high-dimensional covariance matrix under non-normality. Compared with the sphericity and identity hypotheses, the asymptotic analysis of the diagonality hypothesis is more involved, and the bias must be handled more carefully. We develop a correction that makes the existing test statistic robust against both the data dimension and the data distribution. We show that the proposed test statistic is asymptotically normal without the normality assumption and without specifying an explicit relationship between the dimension p and the sample size n. Simulations show that it has good size and power for a wide range of settings.

18.
The aim of this study is to compare the performance of cointegration tests commonly used in the literature in terms of their empirical power and type I error probability for various sample sizes. The study finds that some tests are not appropriate for testing cointegration in terms of empirical power and type I error probability. The simulation study concludes that the λmax test is the most appropriate test across all values of ρ and sample sizes considered.
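For readers who want to reproduce a λmax comparison, the Johansen maximum-eigenvalue statistic is available in statsmodels; below is a minimal call on two simulated cointegrated series (the simulation design and the lag choice are illustrative).

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(8)
T = 250
common = np.cumsum(rng.standard_normal(T))           # shared stochastic trend
y1 = common + rng.standard_normal(T)
y2 = 0.5 * common + rng.standard_normal(T)           # cointegrated with y1
data = np.column_stack([y1, y2])

# det_order=0: constant term; k_ar_diff=1: one lagged difference in the VECM.
res = coint_johansen(data, det_order=0, k_ar_diff=1)
print("lambda-max statistics:", np.round(res.lr2, 3))    # max-eigenvalue tests
print("5% critical values:   ", np.round(res.cvm[:, 1], 3))
```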

19.
We study sparse high dimensional additive model fitting via penalization with sparsity-smoothness penalties. We review several existing algorithms that have been developed for this problem in the recent literature, highlighting the connections between them, and present some computationally efficient algorithms for fitting such models. Furthermore, using reasonable assumptions and exploiting recent results on group LASSO-like procedures, we take advantage of several oracle results which yield asymptotic optimality of estimators for high-dimensional but sparse additive models. Finally, variable selection procedures are compared with some high-dimensional testing procedures available in the literature for testing the presence of additive components.

20.
In this paper, we investigate the problem of testing semiparametric hypotheses in locally stationary processes. The proposed method is based on an empirical version of the L2‐distance between the true time varying spectral density and its best approximation under the null hypothesis. As this approach only requires estimation of integrals of the time varying spectral density and its square, we do not have to choose a smoothing bandwidth for the local estimation of the spectral density – in contrast to most other procedures discussed in the literature. Asymptotic normality of the test statistic is derived both under the null hypothesis and the alternative. We also propose a bootstrap procedure to obtain critical values in the case of small sample sizes. Additionally, we investigate the finite sample properties of the new method and compare it with the currently available procedures by means of a simulation study. Finally, we illustrate the performance of the new test in two data examples, one regarding log returns of the S&P 500 and the other a well‐known series of weekly egg prices.
