Similar Articles
20 similar articles found.
1.
We establish consistency and asymptotic normality of the quasi‐maximum likelihood estimator in the linear ARCH model. Contrary to the existing literature, we allow the parameters to be in the region where no stationary version of the process exists. This implies that the estimator is always asymptotically normal.
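A minimal numerical sketch of the Gaussian QMLE for an ARCH(1) model may help fix ideas. The simulation below stays inside the stationary region for concreteness (the abstract's point is that asymptotic normality extends beyond it); all function names, starting values, and parameter values are our assumptions, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def simulate_arch1(omega, alpha, n, burn=500):
    """Simulate y_t = sigma_t * eps_t with sigma_t^2 = omega + alpha * y_{t-1}^2."""
    y = np.zeros(n + burn)
    for t in range(1, n + burn):
        sigma2 = omega + alpha * y[t - 1] ** 2
        y[t] = np.sqrt(sigma2) * rng.standard_normal()
    return y[burn:]

def neg_quasi_loglik(params, y):
    """Gaussian quasi-log-likelihood (negated), conditioning on the first observation."""
    omega, alpha = params
    sigma2 = omega + alpha * y[:-1] ** 2
    return 0.5 * np.sum(np.log(sigma2) + y[1:] ** 2 / sigma2)

y = simulate_arch1(omega=0.5, alpha=0.3, n=5000)
res = minimize(neg_quasi_loglik, x0=np.array([0.1, 0.1]), args=(y,),
               bounds=[(1e-6, None), (0.0, None)], method="L-BFGS-B")
omega_hat, alpha_hat = res.x
```

With n = 5000 the estimates land close to the true (0.5, 0.3), consistent with the asymptotic normality the abstract describes.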

2.
This paper investigates asymptotic properties of the maximum likelihood estimator and the quasi‐maximum likelihood estimator for the spatial autoregressive model. The rates of convergence of those estimators may depend on some general features of the spatial weights matrix of the model. It is important to distinguish between different spatial scenarios. Under the scenario that each unit will be influenced by only a few neighboring units, the estimators may have a √n‐rate of convergence and be asymptotically normal. When each unit can be influenced by many neighbors, irregularity of the information matrix may occur and various components of the estimators may have different rates of convergence.

3.
We introduce the class of conditional linear combination tests, which reject null hypotheses concerning model parameters when a data‐dependent convex combination of two identification‐robust statistics is large. These tests control size under weak identification and have a number of optimality properties in a conditional problem. We show that the conditional likelihood ratio test of Moreira (2003) is a conditional linear combination test in models with one endogenous regressor, and that the class of conditional linear combination tests is equivalent to a class of quasi‐conditional likelihood ratio tests. We suggest using minimax regret conditional linear combination tests and propose a computationally tractable class of tests that plug in an estimator for a nuisance parameter. These plug‐in tests perform well in simulation and have optimal power in many strongly identified models, thus allowing powerful identification‐robust inference in a wide range of linear and nonlinear models without sacrificing efficiency if identification is strong.

4.
An asymptotically efficient likelihood‐based semiparametric estimator is derived for the censored regression (tobit) model, based on a new approach for estimating the density function of the residuals in a partially observed regression. Smoothing the self‐consistency equation for the nonparametric maximum likelihood estimator of the distribution of the residuals yields an integral equation, which in some cases can be solved explicitly. The resulting estimated density is smooth enough to be used in a practical implementation of the profile likelihood estimator, but is sufficiently close to the nonparametric maximum likelihood estimator to allow estimation of the semiparametric efficient score. The parameter estimates obtained by solving the estimated score equations are then asymptotically efficient. A summary of analogous results for truncated regression is also given.

5.
This paper considers inference on functionals of semi/nonparametric conditional moment restrictions with possibly nonsmooth generalized residuals, which include all of the (nonlinear) nonparametric instrumental variables (IV) models as special cases. These models are often ill‐posed and hence it is difficult to verify whether a (possibly nonlinear) functional is root‐n estimable or not. We provide computationally simple, unified inference procedures that are asymptotically valid regardless of whether a functional is root‐n estimable or not. We establish the following new useful results: (1) the asymptotic normality of a plug‐in penalized sieve minimum distance (PSMD) estimator of a (possibly nonlinear) functional; (2) the consistency of simple sieve variance estimators for the plug‐in PSMD estimator, and hence the asymptotic chi‐square distribution of the sieve Wald statistic; (3) the asymptotic chi‐square distribution of an optimally weighted sieve quasi likelihood ratio (QLR) test under the null hypothesis; (4) the asymptotic tight distribution of a non‐optimally weighted sieve QLR statistic under the null; (5) the consistency of generalized residual bootstrap sieve Wald and QLR tests; (6) local power properties of sieve Wald and QLR tests and of their bootstrap versions; (7) asymptotic properties of sieve Wald and QLR statistics for functionals of increasing dimension. Simulation studies and an empirical illustration of a nonparametric quantile IV regression are presented.
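The full PSMD machinery for ill‐posed models is involved, but the plug‐in idea can be sketched in the simplest well‐posed case: a series (sieve) least‐squares regression, a linear functional g(x0), and a simple sandwich‐type sieve variance estimator. Everything below (model, basis, point x0, names) is an illustrative assumption of ours, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 2000, 6                       # sample size, sieve dimension (polynomial basis)
x = rng.uniform(0.0, 1.0, n)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(n)   # true g(x) = sin(pi x)

B = np.vander(x, p, increasing=True)          # sieve basis: 1, x, ..., x^{p-1}
coef, *_ = np.linalg.lstsq(B, y, rcond=None)  # sieve least-squares coefficients

x0 = 0.25
b0 = np.vander(np.array([x0]), p, increasing=True)[0]
g_hat = b0 @ coef                             # plug-in estimate of the functional g(x0)

# Simple heteroskedasticity-robust sieve variance estimator for the plug-in functional.
resid = y - B @ coef
BtB_inv = np.linalg.inv(B.T @ B)
meat = B.T @ (B * resid[:, None] ** 2)        # sum of resid_i^2 * b_i b_i'
se = np.sqrt(b0 @ BtB_inv @ meat @ BtB_inv @ b0)
```

Here g(x0) is root‐n estimable and `g_hat ± 1.96*se` gives a Wald interval; the abstract's contribution is that analogous plug‐in inference remains valid in ill‐posed settings where root‐n estimability may fail.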

6.
Seemingly absent from the arsenal of currently available “nearly efficient” testing procedures for the unit root hypothesis, that is, tests whose asymptotic local power functions are virtually indistinguishable from the Gaussian power envelope, is a test admitting a (quasi‐)likelihood ratio interpretation. We study the large sample properties of a quasi‐likelihood ratio unit root test based on a Gaussian likelihood and show that this test is nearly efficient.

7.
This paper studies the asymptotic properties of the quasi‐maximum likelihood estimator of generalized autoregressive conditional heteroscedasticity (GARCH(1, 1)) models without strict stationarity constraints and considers applications to testing problems. The estimator is unrestricted in the sense that the value of the intercept, which cannot be consistently estimated in the explosive case, is not fixed. A specific behavior of the estimator of the GARCH coefficients is obtained at the boundary of the stationarity region, but, except for the intercept, this estimator remains consistent and asymptotically normal in every situation. The asymptotic variance is different in the stationary and nonstationary situations, but is consistently estimated with the same estimator in both cases. Tests of strict stationarity and nonstationarity are proposed. The tests developed for the classical GARCH(1, 1) model are able to detect nonstationarity in more general GARCH models. A numerical illustration based on stock indices and individual stock returns is proposed.
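The unrestricted Gaussian QMLE for GARCH(1, 1) can be sketched numerically. The simulation below uses a stationary design for concreteness (the abstract's point is that, apart from the intercept, the same estimator also behaves well without stationarity); the parameter values, starting point, and initial-variance convention are our assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def simulate_garch11(omega, alpha, beta, n, burn=500):
    """Simulate y_t = sigma_t * eps_t, sigma_t^2 = omega + alpha*y_{t-1}^2 + beta*sigma_{t-1}^2."""
    y = np.zeros(n + burn)
    sigma2 = omega / (1 - alpha - beta)      # start at the stationary variance
    for t in range(n + burn):
        y[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + alpha * y[t] ** 2 + beta * sigma2
    return y[burn:]

def neg_qll(params, y):
    """Negated Gaussian quasi-log-likelihood, recursing on the conditional variance."""
    omega, alpha, beta = params
    sigma2 = np.var(y)                       # initial condition; asymptotically irrelevant
    nll = 0.0
    for t in range(len(y)):
        nll += 0.5 * (np.log(sigma2) + y[t] ** 2 / sigma2)
        sigma2 = omega + alpha * y[t] ** 2 + beta * sigma2
    return nll

y = simulate_garch11(0.1, 0.1, 0.8, n=5000)
res = minimize(neg_qll, x0=np.array([0.05, 0.05, 0.7]), args=(y,),
               bounds=[(1e-6, None), (1e-6, 1.0), (1e-6, 0.999)],
               method="L-BFGS-B")
omega_hat, alpha_hat, beta_hat = res.x
```

Setting α + β ≥ 1 in the simulator (the explosive case) illustrates the abstract's claim: the slope estimates remain well behaved while the intercept does not.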

8.
This paper considers testing problems where several of the standard regularity conditions fail to hold. We consider the case where (i) parameter vectors in the null hypothesis may lie on the boundary of the maintained hypothesis and (ii) there may be a nuisance parameter that appears under the alternative hypothesis, but not under the null. The paper establishes the asymptotic null and local alternative distributions of quasi‐likelihood ratio, rescaled quasi‐likelihood ratio, Wald, and score tests in this case. The results apply to tests based on a wide variety of extremum estimators and apply to a wide variety of models. Examples treated in the paper are: (i) tests of the null hypothesis of no conditional heteroskedasticity in a GARCH(1, 1) regression model and (ii) tests of the null hypothesis that some random coefficients have variances equal to zero in a random coefficients regression model with (possibly) correlated random coefficients.

9.
This paper proposes an asymptotically efficient method for estimating models with conditional moment restrictions. Our estimator generalizes the maximum empirical likelihood estimator (MELE) of Qin and Lawless (1994). Using a kernel smoothing method, we efficiently incorporate the information implied by the conditional moment restrictions into our empirical likelihood‐based procedure. This yields a one‐step estimator which avoids estimating optimal instruments. Our likelihood ratio‐type statistic for parametric restrictions does not require the estimation of variance, and achieves asymptotic pivotalness implicitly. The estimation and testing procedures we propose are normalization invariant. Simulation results suggest that our new estimator works remarkably well in finite samples.

10.
This paper applies some general concepts in decision theory to a simple instrumental variables model. There are two endogenous variables linked by a single structural equation; k of the exogenous variables are excluded from this structural equation and provide the instrumental variables (IV). The reduced‐form distribution of the endogenous variables conditional on the exogenous variables corresponds to independent draws from a bivariate normal distribution with linear regression functions and a known covariance matrix. A canonical form of the model has parameter vector (ρ, φ, ω), where φ is the parameter of interest and is normalized to be a point on the unit circle. The reduced‐form coefficients on the instrumental variables are split into a scalar parameter ρ and a parameter vector ω, which is normalized to be a point on the (k−1)‐dimensional unit sphere; ρ measures the strength of the association between the endogenous variables and the instrumental variables, and ω is a measure of direction. A prior distribution is introduced for the IV model. The parameters φ, ρ, and ω are treated as independent random variables. The distribution for φ is uniform on the unit circle; the distribution for ω is uniform on the unit sphere with dimension k−1. These choices arise from the solution of a minimax problem. The prior for ρ is left general. It turns out that given any positive value for ρ, the Bayes estimator of φ does not depend on ρ; it equals the maximum‐likelihood estimator. This Bayes estimator has constant risk; because it minimizes average risk with respect to a proper prior, it is minimax. The same general concepts are applied to obtain confidence intervals. The prior distribution is used in two ways. The first way is to integrate out the nuisance parameter ω in the IV model. This gives an integrated likelihood function with two scalar parameters, φ and ρ. Inverting a likelihood ratio test, based on the integrated likelihood function, provides a confidence interval for φ. This lacks finite sample optimality, but invariance arguments show that the risk function depends only on ρ and not on φ or ω. The second approach to confidence sets aims for finite sample optimality by setting up a loss function that trades off coverage against the length of the interval. The automatic uniform priors are used for φ and ω, but a prior is also needed for the scalar ρ, and no guidance is offered on this choice. The Bayes rule is a highest posterior density set. Invariance arguments show that the risk function depends only on ρ and not on φ or ω. The optimality result combines average risk and maximum risk. The confidence set minimizes the average—with respect to the prior distribution for ρ—of the maximum risk, where the maximization is with respect to φ and ω.

11.
In this paper we propose a new estimator for a model with one endogenous regressor and many instrumental variables. Our motivation comes from the recent literature on the poor properties of standard instrumental variables estimators when the instrumental variables are weakly correlated with the endogenous regressor. Our proposed estimator puts a random coefficients structure on the relation between the endogenous regressor and the instruments. The variance of the random coefficients is modelled as an unknown parameter. In addition to proposing a new estimator, our analysis yields new insights into the properties of the standard two‐stage least squares (TSLS) and limited‐information maximum likelihood (LIML) estimators in the case with many weak instruments. We show that in some interesting cases, TSLS and LIML can be approximated by maximizing the random effects likelihood subject to particular constraints. We show that statistics based on comparisons of the unconstrained estimates of these parameters to the implicit TSLS and LIML restrictions can be used to identify settings when standard large sample approximations to the distributions of TSLS and LIML are likely to perform poorly. We also show that with many weak instruments, LIML confidence intervals are likely to have under‐coverage, even though its finite sample distribution is approximately centered at the true value of the parameter. In an application with real data and simulations around this data set, the proposed estimator performs markedly better than TSLS and LIML, both in terms of coverage rate and in terms of risk.

12.
This paper studies the behavior, under local misspecification, of several confidence sets (CSs) commonly used in the literature on inference in moment (in)equality models. We propose the amount of asymptotic confidence size distortion as a criterion to choose among competing inference methods. This criterion is then applied to compare across test statistics and critical values employed in the construction of CSs. We find two important results under weak assumptions. First, we show that CSs based on subsampling and generalized moment selection (Andrews and Soares (2010)) suffer from the same degree of asymptotic confidence size distortion, despite the fact that asymptotically the latter can lead to CSs with strictly smaller expected volume under correct model specification. Second, we show that the asymptotic confidence size of CSs based on the quasi‐likelihood ratio test statistic can be an arbitrarily small fraction of the asymptotic confidence size of CSs based on the modified method of moments test statistic.

13.
We analyze the use of a quasi‐likelihood ratio statistic for a mixture model to test the null hypothesis of one regime versus the alternative of two regimes in a Markov regime‐switching context. This test exploits mixture properties implied by the regime‐switching process, but ignores certain implied serial correlation properties. When formulated in the natural way, the setting is nonstandard, involving nuisance parameters on the boundary of the parameter space, nuisance parameters identified only under the alternative, or approximations using derivatives higher than second order. We exploit recent advances by Andrews (2001) and contribute to the literature by extending the scope of mixture models, obtaining asymptotic null distributions different from those in the literature. We further provide critical values for popular models or bounds for tail probabilities that are useful in constructing conservative critical values for regime‐switching tests. We compare the size and power of our statistics to other useful tests for regime switching via Monte Carlo methods and find relatively good performance. We apply our methods to reexamine the classic cartel study of Porter (1983) and reaffirm Porter's findings.

14.
This paper applies some general concepts in decision theory to a linear panel data model. A simple version of the model is an autoregression with a separate intercept for each unit in the cross section, with errors that are independent and identically distributed with a normal distribution. There is a parameter of interest γ and a nuisance parameter τ, an N×K matrix, where N is the cross‐section sample size. The focus is on dealing with the incidental parameters problem created by a potentially high‐dimensional nuisance parameter. We adopt a “fixed‐effects” approach that seeks to protect against any sequence of incidental parameters. We transform τ to (δ, ρ, ω), where δ is a J×K matrix of coefficients from the least‐squares projection of τ on an N×J matrix x of strictly exogenous variables, ρ is a K×K symmetric, positive semidefinite matrix obtained from the residual sums of squares and cross‐products in the projection of τ on x, and ω is an (N−J)×K matrix whose columns are orthogonal and have unit length. The model is invariant under the actions of a group on the sample space and the parameter space, and we find a maximal invariant statistic. The distribution of the maximal invariant statistic does not depend upon ω. There is a unique invariant distribution for ω. We use this invariant distribution as a prior distribution to obtain an integrated likelihood function. It depends upon the observation only through the maximal invariant statistic. We use the maximal invariant statistic to construct a marginal likelihood function, so we can eliminate ω by integration with respect to the invariant prior distribution or by working with the marginal likelihood function. The two approaches coincide. Decision rules based on the invariant distribution for ω have a minimax property.
Given a loss function that does not depend upon ω and given a prior distribution for (γ, δ, ρ), we show how to minimize the average—with respect to the prior distribution for (γ, δ, ρ)—of the maximum risk, where the maximum is with respect to ω. There is a family of prior distributions for (δ, ρ) that leads to a simple closed form for the integrated likelihood function. This integrated likelihood function coincides with the likelihood function for a normal, correlated random‐effects model. Under random sampling, the corresponding quasi maximum likelihood estimator is consistent for γ as N→∞, with a standard limiting distribution. The limit results do not require normality or homoskedasticity (conditional on x) assumptions.

15.
Using many moment conditions can improve efficiency but makes the usual generalized method of moments (GMM) inferences inaccurate. Two‐step GMM is biased. Generalized empirical likelihood (GEL) has smaller bias, but the usual standard errors are too small in instrumental variable settings. In this paper we give a new variance estimator for GEL that addresses this problem. It is consistent under the usual asymptotics and, under many weak moment asymptotics, is larger than usual and is consistent. We also show that the Kleibergen (2005) Lagrange multiplier and conditional likelihood ratio statistics are valid under many weak moments. In addition, we introduce a jackknife GMM estimator, but find that GEL is asymptotically more efficient under many weak moments. In Monte Carlo examples we find that t‐statistics based on the new variance estimator have nearly correct size in a wide range of cases.

16.
This paper analyzes the conditions under which consistent estimation can be achieved in instrumental variables (IV) regression when the available instruments are weak and the number of instruments, Kn, goes to infinity with the sample size. We show that consistent estimation depends importantly on the strength of the instruments as measured by rn, the rate of growth of the so‐called concentration parameter, and also on Kn. In particular, when Kn→∞, the concentration parameter can grow, even if each individual instrument is only weakly correlated with the endogenous explanatory variables, and consistency of certain estimators can be established under weaker conditions than have previously been assumed in the literature. Hence, the use of many weak instruments may actually improve the performance of certain point estimators. More specifically, we find that the limited information maximum likelihood (LIML) estimator and the bias‐corrected two‐stage least squares (B2SLS) estimator are consistent when √Kn/rn→0, while the two‐stage least squares (2SLS) estimator is consistent only if Kn/rn→0 as n→∞. These consistency results suggest that LIML and B2SLS are more robust to instrument weakness than 2SLS.
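The greater robustness of LIML over 2SLS with many weak instruments is easy to see in a small Monte Carlo. The sketch below compares the two estimators in a design with 30 weak instruments; the design constants (n = 400, first-stage coefficients 0.1, error correlation 0.8) and function names are our assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
n, K, reps = 400, 30, 200
beta, rho = 0.0, 0.8                 # true coefficient; structural/first-stage error correlation
pi = 0.1 * np.ones(K)                # weak first stage: each instrument matters only a little

def tsls_liml(y, x, Z):
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)    # projection onto the instruments
    M = np.eye(len(y)) - P
    tsls = (x @ P @ y) / (x @ P @ x)
    W = np.column_stack([y, x])
    # LIML: smallest root kappa of det(W'W - kappa * W'M W) = 0, used in the k-class formula.
    kappa = np.linalg.eigvals(np.linalg.solve(W.T @ M @ W, W.T @ W)).real.min()
    liml = (x @ y - kappa * (x @ M @ y)) / (x @ x - kappa * (x @ M @ x))
    return tsls, liml

draws = np.empty((reps, 2))
for r in range(reps):
    Z = rng.standard_normal((n, K))
    v = rng.standard_normal(n)
    u = rho * v + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)
    x = Z @ pi + v                   # endogenous regressor
    y = beta * x + u
    draws[r] = tsls_liml(y, x, Z)

median_tsls, median_liml = np.median(draws, axis=0)
```

In this design 2SLS is visibly pulled toward the OLS probability limit while LIML remains approximately median-unbiased, in line with the consistency conditions stated above.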

17.
When a continuous‐time diffusion is observed only at discrete dates, in most cases the transition distribution and hence the likelihood function of the observations is not explicitly computable. Using Hermite polynomials, I construct an explicit sequence of closed‐form functions and show that it converges to the true (but unknown) likelihood function. I document that the approximation is very accurate and prove that maximizing the sequence results in an estimator that converges to the true maximum likelihood estimator and shares its asymptotic properties. Monte Carlo evidence reveals that this method outperforms other approximation schemes in situations relevant for financial models.  
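The Ornstein–Uhlenbeck diffusion is one of the few cases where the transition density, and hence the exact discrete-sample likelihood, is available in closed form; it is therefore a convenient benchmark for the general closed-form approximations described above. The sketch below computes the exact MLE for an OU process (this is the target that approximation schemes aim to recover for general diffusions); parameter values, sampling interval, and names are our assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# OU diffusion: dX = kappa*(theta - X) dt + sigma dW.
# Transition: X_{t+dt} | X_t ~ N(theta + (X_t - theta) e^{-kappa dt},
#                                 sigma^2 (1 - e^{-2 kappa dt}) / (2 kappa)).
kappa, theta, sigma = 0.5, 1.0, 0.2
dt, n = 1.0 / 12.0, 2400             # monthly observations, long span

def simulate_ou(start, n):
    x = np.empty(n + 1)
    x[0] = start
    a = np.exp(-kappa * dt)
    s = sigma * np.sqrt((1 - a ** 2) / (2 * kappa))   # exact conditional sd
    for t in range(n):
        x[t + 1] = theta + (x[t] - theta) * a + s * rng.standard_normal()
    return x

def neg_loglik(params, x):
    k, th, sg = params
    a = np.exp(-k * dt)
    var = sg ** 2 * (1 - a ** 2) / (2 * k)
    mean = th + (x[:-1] - th) * a
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (x[1:] - mean) ** 2 / var)

x = simulate_ou(theta, n)
res = minimize(neg_loglik, x0=np.array([1.0, 0.5, 0.5]), args=(x,),
               bounds=[(1e-4, None), (None, None), (1e-4, None)],
               method="L-BFGS-B")
kappa_hat, theta_hat, sigma_hat = res.x
```

For diffusions without a closed-form transition density, the Hermite-expansion approach replaces `neg_loglik` with an explicit approximating sequence whose maximizer converges to this exact MLE.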

18.
Measures of sensitivity and uncertainty have become an integral part of risk analysis. Many such measures have a conditional probabilistic structure, for which a straightforward Monte Carlo estimation procedure has a double‐loop form. Recently, a more efficient single‐loop procedure has been introduced, and consistency of this procedure has been demonstrated separately for particular measures, such as those based on variance, density, and information value. In this work, we give a unified proof of single‐loop consistency that applies to any measure satisfying a common rationale. This proof is not only more general but invokes less restrictive assumptions than heretofore in the literature, allowing for the presence of correlations among model inputs and of categorical variables. We examine numerical convergence of such an estimator under a variety of sensitivity measures. We also examine its application to a published medical case study.
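The single-loop idea is simplest to see for the familiar variance-based first-order index S_i = Var(E[Y|X_i])/Var(Y): instead of an inner Monte Carlo loop for each conditioning value, one sample is drawn and E[Y|X_i] is estimated by binning on X_i. The test model below (Y = X1 + 2·X2 with independent standard normal inputs, so S1 = 1/5 and S2 = 4/5) and all names are our assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_bins = 100_000, 50

# Toy model with known analytic indices: S1 = 1/5, S2 = 4/5.
X = rng.standard_normal((n, 2))
Y = X[:, 0] + 2.0 * X[:, 1]

def first_order_index(xi, y, n_bins):
    """Single-loop 'given-data' estimate of Var(E[Y|Xi]) / Var(Y) by equal-probability binning."""
    order = np.argsort(xi)
    bins = np.array_split(y[order], n_bins)          # partition the sample along Xi
    bin_means = np.array([b.mean() for b in bins])   # estimates of E[Y | Xi in bin]
    return np.var(bin_means) / np.var(y)

S1 = first_order_index(X[:, 0], Y, n_bins)
S2 = first_order_index(X[:, 1], Y, n_bins)
```

A double-loop estimator would instead fix many values of X_i and rerun the model for each, multiplying the cost by the inner sample size; the unified consistency result described above covers single-loop estimators of this conditional form in general.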

19.
This paper analyzes the properties of standard estimators, tests, and confidence sets (CS's) for parameters that are unidentified or weakly identified in some parts of the parameter space. The paper also introduces methods to make the tests and CS's robust to such identification problems. The results apply to a class of extremum estimators and corresponding tests and CS's that are based on criterion functions that satisfy certain asymptotic stochastic quadratic expansions and that depend on the parameter that determines the strength of identification. This covers a class of models estimated using maximum likelihood (ML), least squares (LS), quantile, generalized method of moments, generalized empirical likelihood, minimum distance, and semi‐parametric estimators. The consistency/lack‐of‐consistency and asymptotic distributions of the estimators are established under a full range of drifting sequences of true distributions. The asymptotic sizes (in a uniform sense) of standard and identification‐robust tests and CS's are established. The results are applied to the ARMA(1, 1) time series model estimated by ML and to the nonlinear regression model estimated by LS. In companion papers, the results are applied to a number of other models.

20.
When estimating the spot volatility of asset prices with jumps, a threshold filter is needed to remove the influence of the jumps. In finite samples, threshold filtering produces two kinds of bias, from falsely filtering diffusive increments and from failing to filter jumps, which lowers estimation accuracy. The bias from false filtering can be corrected by compensating the wrongly filtered observations, but because the jump times are unknown, the bias from missed jumps cannot be corrected directly and can only be reduced through the design of the estimator. This paper is the first to propose a spot volatility estimator based on threshold bipower variation, using kernel smoothing to estimate the spot volatility of asset prices nonparametrically, which effectively reduces the bias caused by false filtering of jumps. Using limit theory for random arrays, we prove consistency and asymptotic normality of the estimator and, building on an analysis of its finite-sample bias, give a bias-correction method. Monte Carlo simulations show that the missed-filtering bias of the proposed estimator is markedly smaller than that of estimators constructed from quadratic variation, a substantive improvement in the properties of spot volatility estimation. An empirical analysis of the CSI 300 index using Kupiec's dynamic VaR accuracy test shows that the proposed spot volatility estimates better describe the volatility of asset returns.
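A threshold bipower variation estimate of spot volatility can be sketched on simulated high-frequency data: increments whose absolute value exceeds a vanishing threshold are discarded as jump-contaminated, adjacent absolute increments are multiplied (bipower form), and a kernel in time localizes the estimate. The simulation design (constant true volatility, 10 injected jumps, threshold constant, bandwidth) and all names are our assumptions; this is not the paper's exact estimator or bias correction.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 23_400                      # 1-second increments over one "day" [0, 1]
dt = 1.0 / n
sigma = 1.0                     # true (constant) spot volatility, assumed for the demo

# Diffusive returns plus a handful of large jumps.
r = sigma * np.sqrt(dt) * rng.standard_normal(n)
jump_idx = rng.choice(n, size=10, replace=False)
r[jump_idx] += rng.choice([-0.05, 0.05], size=10)

u = 3.0 * dt ** 0.49            # truncation threshold, shrinking slower than sqrt(dt)
keep = (np.abs(r[1:]) <= u) & (np.abs(r[:-1]) <= u)

t_grid = (np.arange(1, n) + 0.5) * dt
t0, h = 0.5, 0.05               # estimate spot variance at mid-day; Gaussian kernel bandwidth
w = np.exp(-0.5 * ((t_grid - t0) / h) ** 2) * keep

# Kernel-weighted threshold bipower variation; the factor pi/2 undoes E|Z| = sqrt(2/pi).
spot_var = (np.pi / 2) * np.sum(w * np.abs(r[1:]) * np.abs(r[:-1])) / (dt * np.sum(w))
```

The bipower product |r_i||r_{i-1}| is already robust to a single jump in the pair, and the threshold removes the remaining jump contribution, so `spot_var` recovers σ² ≈ 1 here despite the jumps.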


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号