Similar Articles
 20 similar articles found (search time: 15 ms)
1.
This paper considers inference in a broad class of nonregular models. The models considered are nonregular in the sense that standard test statistics have asymptotic distributions that are discontinuous in some parameters. It is shown in Andrews and Guggenberger (2009a) that standard fixed critical value, subsampling, and m out of n bootstrap methods often have incorrect asymptotic size in such models. This paper introduces general methods of constructing tests and confidence intervals that have correct asymptotic size. In particular, we consider a hybrid subsampling/fixed‐critical‐value method and size‐correction methods. The paper discusses two examples in detail. They are (i) confidence intervals in an autoregressive model with a root that may be close to unity and conditional heteroskedasticity of unknown form and (ii) tests and confidence intervals based on a post‐conservative model selection estimator.
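As an illustration of the hybrid idea (not taken from the paper; the data-generating process, the subsample length `b`, and the use of a simple t-statistic are all hypothetical), a hybrid critical value can be sketched as the larger of a subsampling quantile and the standard fixed critical value:

```python
import numpy as np
from scipy import stats

def hybrid_critical_value(x, stat_fn, b, alpha=0.05):
    """Hybrid subsampling/fixed-critical-value rule (illustrative sketch):
    use the larger of the subsampling quantile and the fixed asymptotic
    quantile, so the test is never more liberal than either method alone."""
    n = len(x)
    # recompute the statistic on all overlapping subsamples of length b
    sub_stats = [stat_fn(x[i:i + b]) for i in range(n - b + 1)]
    c_sub = np.quantile(sub_stats, 1 - alpha)
    c_fixed = stats.norm.ppf(1 - alpha)   # standard fixed critical value
    return max(c_sub, c_fixed)

rng = np.random.default_rng(0)
x = rng.normal(size=500)
tstat = lambda s: np.sqrt(len(s)) * np.mean(s) / np.std(s, ddof=1)
c = hybrid_critical_value(x, tstat, b=25)
```

By construction the hybrid critical value is weakly larger than the fixed one, which is the source of its size robustness in nonregular settings.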

2.
The purpose of this paper is to provide theoretical justification for some existing methods for constructing confidence intervals for the sum of coefficients in autoregressive models. We show that the methods of Stock (1991), Andrews (1993), and Hansen (1999) provide asymptotically valid confidence intervals, whereas the subsampling method of Romano and Wolf (2001) does not. In addition, we generalize the three valid methods to a larger class of statistics. We also clarify the difference between uniform and pointwise asymptotic approximations, and show that a pointwise convergence of coverage probabilities for all values of the parameter does not guarantee the validity of the confidence set.
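Hansen's (1999) grid bootstrap, one of the methods shown to be valid, can be sketched as follows; this is an illustrative implementation with a hypothetical AR(1) setup and a small bootstrap size, not the authors' code:

```python
import numpy as np

def ar1_tstat(y, rho0):
    """OLS t-statistic for H0: rho = rho0 in y_t = rho * y_{t-1} + e_t."""
    ylag, ycur = y[:-1], y[1:]
    rho_hat = ylag @ ycur / (ylag @ ylag)
    resid = ycur - rho_hat * ylag
    se = np.sqrt(resid @ resid / (len(ycur) - 1) / (ylag @ ylag))
    return (rho_hat - rho0) / se

def grid_bootstrap_ci(y, grid, B=199, alpha=0.05, seed=0):
    """Grid-bootstrap confidence interval (illustrative sketch): invert the
    t-test at each candidate rho0, simulating the bootstrap null
    distribution of the t-statistic under that rho0."""
    rng = np.random.default_rng(seed)
    n = len(y)
    keep = []
    for rho0 in grid:
        boot = np.empty(B)
        for b in range(B):
            e = rng.normal(size=n)
            yb = np.empty(n); yb[0] = e[0]
            for t in range(1, n):
                yb[t] = rho0 * yb[t - 1] + e[t]
            boot[b] = ar1_tstat(yb, rho0)
        lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
        if lo <= ar1_tstat(y, rho0) <= hi:
            keep.append(rho0)
    return (min(keep), max(keep)) if keep else None

rng = np.random.default_rng(42)
n = 200
e = rng.normal(size=n)
y = np.empty(n); y[0] = e[0]
for t in range(1, n):
    y[t] = 0.95 * y[t - 1] + e[t]
ci = grid_bootstrap_ci(y, np.linspace(0.85, 1.0, 16))
```

Because the bootstrap quantiles are recomputed at each grid point rather than only at the point estimate, the interval remains valid uniformly over roots near unity.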

3.
This paper applies some general concepts in decision theory to a simple instrumental variables model. There are two endogenous variables linked by a single structural equation; k of the exogenous variables are excluded from this structural equation and provide the instrumental variables (IV). The reduced‐form distribution of the endogenous variables conditional on the exogenous variables corresponds to independent draws from a bivariate normal distribution with linear regression functions and a known covariance matrix. A canonical form of the model has parameter vector (ρ, φ, ω), where φ is the parameter of interest and is normalized to be a point on the unit circle. The reduced‐form coefficients on the instrumental variables are split into a scalar parameter ρ and a parameter vector ω, which is normalized to be a point on the (k−1)‐dimensional unit sphere; ρ measures the strength of the association between the endogenous variables and the instrumental variables, and ω is a measure of direction. A prior distribution is introduced for the IV model. The parameters φ, ρ, and ω are treated as independent random variables. The distribution for φ is uniform on the unit circle; the distribution for ω is uniform on the unit sphere of dimension k−1. These choices arise from the solution of a minimax problem. The prior for ρ is left general. It turns out that given any positive value of ρ, the Bayes estimator of φ does not depend on ρ; it equals the maximum‐likelihood estimator. This Bayes estimator has constant risk; because it minimizes average risk with respect to a proper prior, it is minimax. The same general concepts are applied to obtain confidence intervals. The prior distribution is used in two ways. The first way is to integrate out the nuisance parameter ω in the IV model. This gives an integrated likelihood function with two scalar parameters, φ and ρ. Inverting a likelihood ratio test, based on the integrated likelihood function, provides a confidence interval for φ. This lacks finite sample optimality, but invariance arguments show that the risk function depends only on ρ and not on φ or ω. The second approach to confidence sets aims for finite sample optimality by setting up a loss function that trades off coverage against the length of the interval. The automatic uniform priors are used for φ and ω, but a prior is also needed for the scalar ρ, and no guidance is offered on this choice. The Bayes rule is a highest posterior density set. Invariance arguments show that the risk function depends only on ρ and not on φ or ω. The optimality result combines average risk and maximum risk. The confidence set minimizes the average, with respect to the prior distribution for ρ, of the maximum risk, where the maximization is with respect to φ and ω.

4.
In this paper we propose a new estimator for a model with one endogenous regressor and many instrumental variables. Our motivation comes from the recent literature on the poor properties of standard instrumental variables estimators when the instrumental variables are weakly correlated with the endogenous regressor. Our proposed estimator puts a random coefficients structure on the relation between the endogenous regressor and the instruments. The variance of the random coefficients is modelled as an unknown parameter. In addition to proposing a new estimator, our analysis yields new insights into the properties of the standard two‐stage least squares (TSLS) and limited‐information maximum likelihood (LIML) estimators in the case with many weak instruments. We show that in some interesting cases, TSLS and LIML can be approximated by maximizing the random effects likelihood subject to particular constraints. We show that statistics based on comparisons of the unconstrained estimates of these parameters to the implicit TSLS and LIML restrictions can be used to identify settings in which standard large sample approximations to the distributions of TSLS and LIML are likely to perform poorly. We also show that with many weak instruments, LIML confidence intervals are likely to under‐cover, even though the estimator's finite sample distribution is approximately centered at the true value of the parameter. In an application with real data and simulations around this data set, the proposed estimator performs markedly better than TSLS and LIML, both in terms of coverage rate and in terms of risk.
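For reference, the standard TSLS and LIML estimators discussed above can be computed as follows; this sketch assumes one endogenous regressor, no included exogenous regressors, and a hypothetical many-weak-instrument design:

```python
import numpy as np
from scipy.linalg import eigh

def tsls_liml(y, x, Z):
    """TSLS and LIML for y = beta * x + u with endogenous x and instruments Z
    (illustrative sketch; no included exogenous regressors).
    LIML is the k-class estimator with kappa equal to the smallest
    generalized eigenvalue of (W'W, W'M_Z W), W = [y x]."""
    n = len(y)
    PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto the instruments
    MZ = np.eye(n) - PZ
    beta_tsls = (x @ PZ @ y) / (x @ PZ @ x)
    W = np.column_stack([y, x])
    kappa = eigh(W.T @ W, W.T @ MZ @ W, eigvals_only=True)[0]
    A = np.eye(n) - kappa * MZ               # k-class weighting at k = kappa
    beta_liml = (x @ A @ y) / (x @ A @ x)
    return beta_tsls, beta_liml

# hypothetical design: beta = 1, thirty instruments, each individually weak
rng = np.random.default_rng(1)
n, k = 400, 30
Z = rng.normal(size=(n, k))
u = rng.normal(size=n)
v = 0.8 * u + 0.6 * rng.normal(size=n)       # endogeneity via correlated errors
x = Z @ np.full(k, 0.05) + v
y = 1.0 * x + u
bt, bl = tsls_liml(y, x, Z)
```

In designs like this, TSLS tends to be pulled toward the OLS probability limit while LIML remains approximately centered, which is the contrast the abstract describes.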

5.
We analyze an autoregressive model with Markov regime‐switching in order to examine the properties of the quasi‐likelihood ratio test developed by Cho and White (2007). For such a model, we show that consistency of the quasi‐maximum likelihood estimator for the population parameter values, on which consistency of the test rests, does not hold. We describe a condition that ensures consistency of the estimator and discuss the consistency of the test when the estimator is inconsistent.

6.
This paper investigates a generalized method of moments (GMM) approach to the estimation of autoregressive roots near unity with panel data and incidental deterministic trends. Such models arise in empirical econometric studies of firm size and in dynamic panel data modeling with weak instruments. The two moment conditions in the GMM approach are obtained by constructing bias corrections to the score functions under OLS and GLS detrending, respectively. It is shown that the moment condition under GLS detrending corresponds to taking the projected score on the Bhattacharya basis, linking the approach to recent work on projected score methods for models with infinite numbers of nuisance parameters (Waterman and Lindsay (1998)). Assuming that the localizing parameter takes a nonpositive value, we establish consistency of the GMM estimator and find its limiting distribution. A notable new finding is that the GMM estimator converges at a slower-than-standard rate when the true localizing parameter is zero (i.e., when there is a panel unit root) and the deterministic trends in the panel are linear. These results, which rely on boundary point asymptotics, point to the continued difficulty of distinguishing unit roots from local alternatives, even when there is an infinity of additional data.

7.
This paper studies the asymptotic properties of the quasi‐maximum likelihood estimator of generalized autoregressive conditional heteroscedasticity (GARCH(1, 1)) models without strict stationarity constraints and considers applications to testing problems. The estimator is unrestricted in the sense that the value of the intercept, which cannot be consistently estimated in the explosive case, is not fixed. A specific behavior of the estimator of the GARCH coefficients is obtained at the boundary of the stationarity region, but, except for the intercept, this estimator remains consistent and asymptotically normal in every situation. The asymptotic variance is different in the stationary and nonstationary situations, but is consistently estimated with the same estimator in both cases. Tests of strict stationarity and nonstationarity are proposed. The tests developed for the classical GARCH(1, 1) model are able to detect nonstationarity in more general GARCH models. A numerical illustration based on stock indices and individual stock returns is proposed.
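A minimal Gaussian QMLE for GARCH(1, 1) can be sketched as below; the optimizer, starting values, and simulated data are illustrative choices, not the authors':

```python
import numpy as np
from scipy.optimize import minimize

def garch11_qmle(r):
    """Gaussian QMLE of GARCH(1,1) (illustrative sketch):
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}.
    Parameters are optimized in log space to keep them positive."""
    def negloglik(theta):
        omega, alpha, beta = np.exp(theta)
        s2 = np.empty(len(r)); s2[0] = np.var(r)
        for t in range(1, len(r)):
            s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1]
        return 0.5 * np.sum(np.log(s2) + r ** 2 / s2)
    res = minimize(negloglik, np.log([0.1, 0.1, 0.8]), method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-8})
    return np.exp(res.x)   # omega, alpha, beta

# simulate a stationary GARCH(1,1) path (hypothetical parameter values)
rng = np.random.default_rng(2)
T = 2000
omega0, alpha0, beta0 = 0.1, 0.1, 0.8
r = np.empty(T)
s2 = omega0 / (1 - alpha0 - beta0)           # stationary variance as start
for t in range(T):
    r[t] = np.sqrt(s2) * rng.normal()
    s2 = omega0 + alpha0 * r[t] ** 2 + beta0 * s2
omega, alpha, beta = garch11_qmle(r)
```

The same likelihood recursion applies in the nonstationary region; only the interpretation of the intercept estimate changes, as the abstract notes.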

8.
In the regression‐discontinuity (RD) design, units are assigned to treatment based on whether their value of an observed covariate exceeds a known cutoff. In this design, local polynomial estimators are now routinely employed to construct confidence intervals for treatment effects. The performance of these confidence intervals in applications, however, may be seriously hampered by their sensitivity to the specific bandwidth employed. Available bandwidth selectors typically yield a “large” bandwidth, leading to data‐driven confidence intervals that may be biased, with empirical coverage well below their nominal target. We propose new theory‐based, more robust confidence interval estimators for average treatment effects at the cutoff in sharp RD, sharp kink RD, fuzzy RD, and fuzzy kink RD designs. Our proposed confidence intervals are constructed using a bias‐corrected RD estimator together with a novel standard error estimator. For practical implementation, we discuss mean squared error optimal bandwidths, which are by construction not valid for conventional confidence intervals but are valid with our robust approach, and consistent standard error estimators based on our new variance formulas. In a special case of practical interest, our procedure amounts to running a quadratic instead of a linear local regression. More generally, our results give a formal justification to simple inference procedures based on increasing the order of the local polynomial estimator employed. We find in a simulation study that our confidence intervals exhibit close‐to‐correct empirical coverage and good empirical interval length on average, remarkably improving upon the alternatives available in the literature. All results are readily available in R and STATA using our companion software packages described in Calonico, Cattaneo, and Titiunik (2014d, 2014b).
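The point of running a quadratic rather than a linear local regression can be illustrated as follows; the kernel, bandwidth, and data-generating process are hypothetical, and the sketch covers only point estimation, not the bias-corrected standard errors:

```python
import numpy as np

def local_poly_rd(x, y, c, h, p):
    """Sharp-RD treatment effect at cutoff c via local polynomial of order p
    with a triangular kernel and bandwidth h (illustrative sketch)."""
    def fit_at_cutoff(side):
        m = side(x) & (np.abs(x - c) < h)
        X = np.vander(x[m] - c, p + 1, increasing=True)
        w = 1 - np.abs(x[m] - c) / h          # triangular kernel weights
        WX = X * w[:, None]
        coef = np.linalg.solve(X.T @ WX, WX.T @ y[m])
        return coef[0]                         # intercept = fit at the cutoff
    return fit_at_cutoff(lambda v: v >= c) - fit_at_cutoff(lambda v: v < c)

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 3000)
y = 0.5 * x + 0.3 * x ** 2 + 2.0 * (x >= 0) + 0.2 * rng.normal(size=3000)
tau_lin  = local_poly_rd(x, y, c=0.0, h=0.5, p=1)  # conventional local linear
tau_quad = local_poly_rd(x, y, c=0.0, h=0.5, p=2)  # one order higher
```

With a bandwidth chosen to be MSE-optimal for the linear fit, the higher-order fit absorbs the leading curvature bias, which is the mechanism behind the robust intervals.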

9.
Application of weighted composite quantile regression to dynamic VaR risk measurement
Value at Risk (VaR), being simple and intuitive, has become one of the most widely used risk measures internationally, and computing unconditional risk measures from autoregressive (AR) time-series models is common practice in industry. Building on quantile regression theory, this paper proposes an estimation method for AR models: the weighted composite quantile regression (WCQR) estimator. The method pools information across multiple quantiles to improve the efficiency of parameter estimation, and it assigns different weights to the different quantile regressions, making the estimation more efficient still; asymptotic normality of the estimator is established. Finite-sample simulations show that when the residuals are non-normally distributed, the statistical properties of the WCQR estimator are close to those of the maximum likelihood estimator, while WCQR requires no knowledge of the residual distribution, making it the more competitive choice. The method performs well in forecasting the dynamic VaR of asset returns. We apply the proposed theory to nine Chinese closed-end funds; the empirical analysis finds that the VaR obtained with WCQR is very close to the VaR obtained nonparametrically, and that WCQR additionally yields dynamic VaR values and forecasts of the VaR of asset returns.
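A minimal version of the WCQR idea (a common autoregressive slope, one intercept per quantile, and a weighted sum of check losses) can be sketched as follows; the quantile grid, equal weights, and direct minimization via Nelder-Mead are illustrative choices, not the paper's algorithm:

```python
import numpy as np
from scipy.optimize import minimize

def wcqr_ar1(y, taus, weights):
    """Weighted composite quantile regression for an AR(1) model
    (illustrative sketch): one common slope rho, one intercept per quantile,
    estimated by minimizing the weighted sum of check losses."""
    ylag, ycur = y[:-1], y[1:]
    K = len(taus)
    def loss(theta):
        rho, b = theta[0], theta[1:]
        total = 0.0
        for k in range(K):
            u = ycur - b[k] - rho * ylag
            total += weights[k] * np.sum(u * (taus[k] - (u < 0)))  # check loss
        return total
    theta0 = np.r_[0.5, np.quantile(ycur - 0.5 * ylag, taus)]
    res = minimize(loss, theta0, method="Nelder-Mead",
                   options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-8})
    return res.x[0], res.x[1:]   # rho_hat, per-quantile intercepts

# heavy-tailed AR(1) data, where pooling quantiles should beat least squares
rng = np.random.default_rng(4)
T, rho0 = 1000, 0.6
e = rng.standard_t(df=4, size=T)
y = np.empty(T); y[0] = 0.0
for t in range(1, T):
    y[t] = rho0 * y[t - 1] + e[t]
taus = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
rho_hat, intercepts = wcqr_ar1(y, taus, np.ones(5) / 5)
# one-step-ahead VaR at level taus[k]: intercepts[k] + rho_hat * y[-1]
```

The fitted intercepts estimate the residual quantiles, so the same fit delivers dynamic VaR forecasts at every level in the grid.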

10.
This paper investigates asymptotic properties of the maximum likelihood estimator and the quasi‐maximum likelihood estimator for the spatial autoregressive model. The rates of convergence of those estimators may depend on some general features of the spatial weights matrix of the model. It is important to distinguish between different spatial scenarios. Under the scenario that each unit is influenced by only a few neighboring units, the estimators may have √n‐rate of convergence and be asymptotically normal. When each unit can be influenced by many neighbors, irregularity of the information matrix may occur and various components of the estimators may have different rates of convergence.

11.
The delta method and continuous mapping theorem are among the most extensively used tools in asymptotic derivations in econometrics. Extensions of these methods are provided for sequences of functions that are commonly encountered in applications and where the usual methods sometimes fail. Important examples of failure arise in the use of simulation‐based estimation methods such as indirect inference. The paper explores the application of these methods to the indirect inference estimator (IIE) in first order autoregressive estimation. The IIE uses a binding function that is sample size dependent. Its limit theory relies on a sequence‐based delta method in the stationary case and a sequence‐based implicit continuous mapping theorem in unit root and local to unity cases. The new limit theory shows that the IIE achieves much more than (partial) bias correction. It changes the limit theory of the maximum likelihood estimator (MLE) when the autoregressive coefficient is in the locality of unity, reducing the bias and the variance of the MLE without affecting the limit theory of the MLE in the stationary case. Thus, in spite of the fact that the IIE is a continuously differentiable function of the MLE, the limit distribution of the IIE is not simply a scale multiple of the MLE, but depends implicitly on the full binding function mapping. The unit root case therefore represents an important example of the failure of the delta method and shows the need for an implicit mapping extension of the continuous mapping theorem.
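The sample-size-dependent binding function of the IIE can be illustrated in the AR(1) case; the simulation sizes and the root-finding bracket here are hypothetical choices:

```python
import numpy as np
from scipy.optimize import brentq

def ols_ar1(y):
    """OLS/conditional-MLE estimate of rho in y_t = rho * y_{t-1} + e_t."""
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

def binding_fn(rho, n, H, rng):
    """Monte Carlo binding function: mean OLS estimate across H simulated
    AR(1) paths of length n with coefficient rho."""
    est = np.empty(H)
    for h in range(H):
        e = rng.normal(size=n)
        y = np.empty(n); y[0] = e[0]
        for t in range(1, n):
            y[t] = rho * y[t - 1] + e[t]
        est[h] = ols_ar1(y)
    return est.mean()

def indirect_inference_ar1(y, H=200, seed=0):
    """Indirect inference estimator (illustrative sketch): invert the
    sample-size-dependent binding function at the observed OLS estimate,
    using common random numbers so the objective is deterministic."""
    n, target = len(y), ols_ar1(y)
    g = lambda r: binding_fn(r, n, H, np.random.default_rng(seed)) - target
    return brentq(g, -0.99, 0.999)

rng = np.random.default_rng(10)
n = 100
e = rng.normal(size=n)
y = np.empty(n); y[0] = e[0]
for t in range(1, n):
    y[t] = 0.9 * y[t - 1] + e[t]
rho_ii = indirect_inference_ar1(y)
```

Since the OLS estimator is downward biased for positive rho, inverting the binding function pushes the estimate back up, which is the (partial) bias correction the abstract refers to.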

12.
This paper extends Imbens and Manski's (2004) analysis of confidence intervals for interval identified parameters. The extension is motivated by the discovery that for their final result, Imbens and Manski implicitly assumed locally superefficient estimation of a nuisance parameter. I reanalyze the problem both with assumptions that merely weaken this superefficiency condition and with assumptions that remove it altogether. Imbens and Manski's confidence region is valid under weaker assumptions than theirs, yet superefficiency is required. I also provide a confidence interval that is valid under superefficiency, but can be adapted to the general case. A methodological contribution is to observe that the difficulty of inference comes from a preestimation problem regarding a nuisance parameter, clarifying the connection to other work on partial identification.

13.
14.
We study the asymptotic distribution of three‐step estimators of a finite‐dimensional parameter vector where the second step consists of one or more nonparametric regressions on a regressor that is estimated in the first step. The first‐step estimator is either parametric or nonparametric. Using Newey's (1994) path‐derivative method, we derive the contribution of the first‐step estimator to the influence function. In this derivation, it is important to account for the dual role that the first‐step estimator plays in the second‐step nonparametric regression, that is, that of conditioning variable and that of argument.

15.
This paper establishes the higher‐order equivalence of the k‐step bootstrap, introduced recently by Davidson and MacKinnon (1999), and the standard bootstrap. The k‐step bootstrap is a very attractive alternative computationally to the standard bootstrap for statistics based on nonlinear extremum estimators, such as generalized method of moment and maximum likelihood estimators. The paper also extends results of Hall and Horowitz (1996) to provide new results regarding the higher‐order improvements of the standard bootstrap and the k‐step bootstrap for extremum estimators (compared to procedures based on first‐order asymptotics). The results of the paper apply to Newton‐Raphson (NR), default NR, line‐search NR, and Gauss‐Newton k‐step bootstrap procedures. The results apply to the nonparametric iid bootstrap and nonoverlapping and overlapping block bootstraps. The results cover symmetric and equal‐tailed two‐sided t tests and confidence intervals, one‐sided t tests and confidence intervals, Wald tests and confidence regions, and J tests of over‐identifying restrictions.
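The k-step idea, starting each bootstrap replication from the original estimate and taking k Newton-Raphson steps instead of re-solving the extremum problem, can be sketched in a simple logit model (a hypothetical example, not from the paper):

```python
import numpy as np

def nr_step(beta, x, y):
    """One Newton-Raphson step for the logit log-likelihood (scalar beta)."""
    p = 1 / (1 + np.exp(-beta * x))
    score = x @ (y - p)
    hess = -(x ** 2) @ (p * (1 - p))
    return beta - score / hess

def k_step_bootstrap(x, y, beta_hat, B=200, k=2, seed=0):
    """k-step bootstrap (illustrative sketch): on each bootstrap sample,
    take only k Newton-Raphson steps from the original estimate rather
    than iterating to full convergence."""
    rng = np.random.default_rng(seed)
    n = len(y)
    out = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, n)      # nonparametric iid bootstrap draw
        bb = beta_hat
        for _ in range(k):
            bb = nr_step(bb, x[idx], y[idx])
        out[b] = bb
    return out

rng = np.random.default_rng(5)
n = 500
x = rng.normal(size=n)
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-1.0 * x))).astype(float)
beta_hat = 0.0
for _ in range(25):                      # full MLE: iterate NR to convergence
    beta_hat = nr_step(beta_hat, x, y)
boot = k_step_bootstrap(x, y, beta_hat, k=2)
se = boot.std(ddof=1)
```

Because each replication starts at the original extremum estimate, a small fixed k already reproduces the standard bootstrap to higher order while saving most of the optimization cost.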

16.
It is well known that standard asymptotic theory is not applicable or is very unreliable in models with identification problems or weak instruments. One possible way out consists of using a variant of the Anderson–Rubin ((1949), AR) procedure. The latter allows one to build exact tests and confidence sets only for the full vector of the coefficients of the endogenous explanatory variables in a structural equation, but not for individual coefficients. This problem may in principle be overcome by using projection methods (Dufour (1997), Dufour and Jasiak (2001)). At first sight, however, this technique requires the application of costly numerical algorithms. In this paper, we give a general necessary and sufficient condition that allows one to check whether an AR‐type confidence set is bounded. Furthermore, we provide an analytic solution to the problem of building projection‐based confidence sets from AR‐type confidence sets. The latter involves the geometric properties of “quadrics” and can be viewed as an extension of usual confidence intervals and ellipsoids. Only least squares techniques are needed to build the confidence intervals.
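Inverting the AR test over a one-dimensional grid (the scalar case, where projection is not yet needed) can be sketched as follows; the design and grid are hypothetical:

```python
import numpy as np
from scipy import stats

def ar_confidence_set(y, x, Z, grid, alpha=0.05):
    """Anderson-Rubin confidence set for beta in y = beta * x + u
    (illustrative sketch): keep the grid points where the AR F-statistic
    is below its F(k, n-k) critical value. With weak instruments the
    resulting set may be unbounded, as discussed in the abstract."""
    n, k = Z.shape
    PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    crit = stats.f.ppf(1 - alpha, k, n - k)
    kept = []
    for b0 in grid:
        u0 = y - b0 * x
        Pu = PZ @ u0
        ar = (u0 @ Pu / k) / (u0 @ (u0 - Pu) / (n - k))
        if ar <= crit:
            kept.append(b0)
    return np.array(kept)

# hypothetical strong-instrument design with true beta = 2
rng = np.random.default_rng(6)
n, k = 300, 3
Z = rng.normal(size=(n, k))
u = rng.normal(size=n)
x = Z @ np.array([1.0, 0.5, 0.5]) + 0.8 * u + rng.normal(size=n)
y = 2.0 * x + u
cs = ar_confidence_set(y, x, Z, np.linspace(0, 4, 401))
```

Checking whether such a set is bounded without gridding is exactly the quadric problem the paper solves analytically.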

17.
It is well known that, in misspecified parametric models, the maximum likelihood estimator (MLE) is consistent for the pseudo‐true value and has an asymptotically normal sampling distribution with “sandwich” covariance matrix. Also, posteriors are asymptotically centered at the MLE, normal, and of asymptotic variance that is, in general, different than the sandwich matrix. It is shown that due to this discrepancy, Bayesian inference about the pseudo‐true parameter value is, in general, of lower asymptotic frequentist risk when the original posterior is substituted by an artificial normal posterior centered at the MLE with sandwich covariance matrix. An algorithm is suggested that allows the implementation of this artificial posterior also in models with high dimensional nuisance parameters which cannot reasonably be estimated by maximizing the likelihood.
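In the simplest misspecified model, a normal likelihood with unit variance fit to exponential data (a hypothetical example, not from the paper), the ordinary and sandwich posterior variances can be compared directly:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.exponential(scale=2.0, size=1000)    # true mean 2, true variance 4
n = len(x)
mu_hat = x.mean()                            # MLE of the pseudo-true mean

# Misspecified N(mu, 1) likelihood for the mean:
# mean Hessian H = -1, score variance J = Var(x), sandwich = H^-1 J H^-1
H, J = -1.0, x.var(ddof=1)
sandwich = J / (H * H)

post_var_model    = 1.0 / n         # large-sample posterior variance (flat prior)
post_var_sandwich = sandwich / n    # artificial sandwich-posterior variance
```

Here the ordinary posterior understates the sampling variance by roughly the factor Var(x), so credible intervals built from the artificial N(mu_hat, sandwich/n) posterior have better frequentist coverage, which is the paper's point.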

18.
A large‐sample approximation of the posterior distribution of partially identified structural parameters is derived for models that can be indexed by an identifiable finite‐dimensional reduced‐form parameter vector. It is used to analyze the differences between Bayesian credible sets and frequentist confidence sets. We define a plug‐in estimator of the identified set and show that asymptotically Bayesian highest‐posterior‐density sets exclude parts of the estimated identified set, whereas it is well known that frequentist confidence sets extend beyond the boundaries of the estimated identified set. We recommend reporting estimates of the identified set and information about the conditional prior along with Bayesian credible sets. A numerical illustration for a two‐player entry game is provided.

19.
This study utilizes old and new Norovirus (NoV) human challenge data to model the dose‐response relationship for human NoV infection. The combined data set is used to update estimates from a previously published beta‐Poisson dose‐response model that includes parameters for virus aggregation and for a beta‐distribution that describes variable susceptibility among hosts. The quality of the beta‐Poisson model is examined and a simpler model is proposed. The new model (fractional Poisson) characterizes hosts as either perfectly susceptible or perfectly immune, requiring a single parameter (the fraction of perfectly susceptible hosts) in place of the two‐parameter beta‐distribution. A second parameter is included to account for virus aggregation in the same fashion as it is added to the beta‐Poisson model. Infection probability is simply the product of the probability of nonzero exposure (at least one virus or aggregate is ingested) and the fraction of susceptible hosts. The model is computationally simple and appears to be well suited to the data from the NoV human challenge studies. The model's deviance is similar to that of the beta‐Poisson, but with one parameter, rather than two. As a result, the Akaike information criterion favors the fractional Poisson over the beta‐Poisson model. At low, environmentally relevant exposure levels (<100), estimation error is small for the fractional Poisson model; however, caution is advised because no subjects were challenged at such a low dose. New low‐dose data would be of great value to further clarify the NoV dose‐response relationship and to support improved risk assessment for environmentally relevant exposures.  

20.
Cointegrated bivariate nonstationary time series are considered in a fractional context, without allowance for deterministic trends. Both the observable series and the cointegrating error can be fractional processes. The familiar situation in which the respective integration orders are 1 and 0 is nested, but these values have typically been assumed known. We allow one or more of them to be unknown real values, in which case Robinson and Marinucci (2001, 2003) have justified least squares estimates of the cointegrating vector, as well as narrow‐band frequency‐domain estimates, which may be less biased. While consistent, these estimates do not always have optimal convergence rates, and they have nonstandard limit distributional behavior. We consider estimates formulated in the frequency domain, that consequently allow for a wide variety of (parametric) autocorrelation in the short memory input series, as well as time‐domain estimates based on autoregressive transformation. Both can be interpreted as approximating generalized least squares and Gaussian maximum likelihood estimates. The estimates share the same limiting distribution, having mixed normal asymptotics (yielding Wald test statistics with χ2 null limit distributions), irrespective of whether the integration orders are known or unknown, subject in the latter case to their estimation with adequate rates of convergence. The parameters describing the short memory stationary input series are √n‐consistently estimable, but the assumptions imposed on these series are much more general than ones of autoregressive moving average type. A Monte Carlo study of finite‐sample performance is included.

