Similar Documents (20 results found)
1.
We study inference in structural models with a jump in the conditional density, where location and size of the jump are described by regression curves. Two prominent examples are auction models, where the bid density jumps from zero to a positive value at the lowest cost, and equilibrium job-search models, where the wage density jumps from one positive level to another at the reservation wage. General inference in such models remained a long-standing, unresolved problem, primarily due to nonregularities and computational difficulties caused by discontinuous likelihood functions. This paper develops likelihood-based estimation and inference methods for these models, focusing on optimal (Bayes) and maximum likelihood procedures. We derive convergence rates and distribution theory, and develop Bayes and Wald inference. We show that Bayes estimators and confidence intervals are attractive both theoretically and computationally, and that Bayes confidence intervals, based on posterior quantiles, provide a valid large sample inference method.
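A minimal numerical sketch of the posterior-quantile intervals described above, assuming a stylized job-search setup in which wages equal an unknown reservation wage r plus unit-exponential noise, so the wage density jumps from zero to one at r and the likelihood is discontinuous there. The flat prior and grid posterior are illustrative choices, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r_true = 200, 2.0
w = r_true + rng.exponential(1.0, n)      # wage density jumps from 0 to 1 at r

# Log-likelihood is n*r - sum(w) for r <= min(w), and -inf beyond the jump
grid = np.linspace(w.min() - 0.5, w.min(), 10_000)
loglik = n * grid - w.sum()
post = np.exp(loglik - loglik.max())      # flat prior; subtract max for stability
post /= post.sum()

cdf = np.cumsum(post)
lo = grid[np.searchsorted(cdf, 0.025)]
hi = grid[np.searchsorted(cdf, 0.975)]
mean = float(post @ grid)
print(f"posterior mean {mean:.4f}, 95% credible interval [{lo:.4f}, {hi:.4f}]")
```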

2.
We show how to correctly extend known methods for generating error bands in reduced-form VARs to overidentified models. We argue that the conventional pointwise bands common in the literature should be supplemented with measures of shape uncertainty, and we show how to generate such measures. We focus on bands that characterize the shape of the likelihood. Such bands are not classical confidence regions. We explain that classical confidence regions mix information about parameter location with information about model fit, and hence can be misleading as summaries of the implications of the data for the location of parameters. Because classical confidence regions also present conceptual and computational problems in multivariate time series models, we suggest that likelihood-based bands, rather than approximate confidence bands based on asymptotic theory, be standard in reporting results for this type of model.
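As a hedged illustration of the pointwise likelihood-based baseline the abstract starts from, the sketch below draws VAR coefficients from the Gaussian (flat-prior) approximation to the likelihood and reads off pointwise impulse-response bands. The shape-uncertainty measures the paper argues for go beyond this, and the unit reduced-form shocks are an illustrative simplification:

```python
import numpy as np

rng = np.random.default_rng(1)
T, H, ndraw = 400, 16, 2000
# Simulate a bivariate VAR(1): y_t = A y_{t-1} + e_t
A_true = np.array([[0.7, 0.1], [0.0, 0.5]])
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = y[t - 1] @ A_true.T + rng.normal(0, 1, 2)

X, Y = y[:-1], y[1:]
B_hat = np.linalg.lstsq(X, Y, rcond=None)[0]   # Y = X B + U, with B = A'
U = Y - X @ B_hat
Sigma = U.T @ U / (T - 1)
# Gaussian approximation: vec(B_hat) ~ N(vec(B), Sigma kron (X'X)^{-1})
cov = np.kron(Sigma, np.linalg.inv(X.T @ X))
draws = rng.multivariate_normal(B_hat.flatten(order="F"), cov, size=ndraw)

irfs = np.empty((ndraw, H + 1, 2, 2))
for d, vb in enumerate(draws):
    A = vb.reshape(2, 2, order="F").T          # recover A from vec(B)
    P = np.eye(2)
    for h in range(H + 1):
        irfs[d, h] = P                         # response at horizon h to unit shocks
        P = A @ P

bands = np.percentile(irfs, [16, 84], axis=0)  # pointwise 68% likelihood bands
print(bands[:, :4, 0, 0])                      # variable 1's response to shock 1
```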

3.
In the regression-discontinuity (RD) design, units are assigned to treatment based on whether their value of an observed covariate exceeds a known cutoff. In this design, local polynomial estimators are now routinely employed to construct confidence intervals for treatment effects. The performance of these confidence intervals in applications, however, may be seriously hampered by their sensitivity to the specific bandwidth employed. Available bandwidth selectors typically yield a "large" bandwidth, leading to data-driven confidence intervals that may be biased, with empirical coverage well below their nominal target. We propose new theory-based, more robust confidence interval estimators for average treatment effects at the cutoff in sharp RD, sharp kink RD, fuzzy RD, and fuzzy kink RD designs. Our proposed confidence intervals are constructed using a bias-corrected RD estimator together with a novel standard error estimator. For practical implementation, we discuss mean squared error optimal bandwidths, which are by construction not valid for conventional confidence intervals but are valid with our robust approach, and consistent standard error estimators based on our new variance formulas. In a special case of practical interest, our procedure amounts to running a quadratic instead of a linear local regression. More generally, our results give a formal justification to simple inference procedures based on increasing the order of the local polynomial estimator employed. We find in a simulation study that our confidence intervals exhibit close-to-correct empirical coverage and good empirical interval length on average, remarkably improving upon the alternatives available in the literature. All results are readily available in R and STATA using our companion software packages described in Calonico, Cattaneo, and Titiunik (2014d, 2014b).
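The special case mentioned above, running a quadratic instead of a linear local regression at the cutoff, can be sketched as follows. The triangular kernel, bandwidth, and simulated data are illustrative assumptions, and the paper's robust intervals additionally require its new variance estimator (the rdrobust companion packages implement the full procedure):

```python
import numpy as np

rng = np.random.default_rng(2)
n, tau, h = 1000, 0.5, 0.4
x = rng.uniform(-1, 1, n)                      # running variable, cutoff at 0
y = 0.8 * x + tau * (x >= 0) + rng.normal(0, 0.3, n)

def side_fit(xs, ys, p, h):
    """Local polynomial fit of degree p at the cutoff, triangular kernel."""
    k = np.clip(1 - np.abs(xs) / h, 0, None)
    keep = k > 0
    # np.polyfit minimizes sum (w*(y - poly(x)))^2, so pass sqrt-kernel weights
    coef = np.polyfit(xs[keep], ys[keep], p, w=np.sqrt(k[keep]))
    return coef[-1]                            # intercept = fitted value at cutoff

for p in (1, 2):                               # linear vs quadratic local fit
    est = side_fit(x[x >= 0], y[x >= 0], p, h) - side_fit(x[x < 0], y[x < 0], p, h)
    print(f"p={p}: RD estimate {est:.3f} (true effect {tau})")
```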

4.
5.
It is well known that standard asymptotic theory is not applicable or is very unreliable in models with identification problems or weak instruments. One possible way out consists of using a variant of the Anderson–Rubin (1949, AR) procedure. The latter allows one to build exact tests and confidence sets only for the full vector of the coefficients of the endogenous explanatory variables in a structural equation, but not for individual coefficients. This problem may in principle be overcome by using projection methods (Dufour (1997), Dufour and Jasiak (2001)). At first sight, however, this technique requires the application of costly numerical algorithms. In this paper, we give a general necessary and sufficient condition that allows one to check whether an AR-type confidence set is bounded. Furthermore, we provide an analytic solution to the problem of building projection-based confidence sets from AR-type confidence sets. The latter involves the geometric properties of "quadrics" and can be viewed as an extension of usual confidence intervals and ellipsoids. Only least squares techniques are needed to build the confidence intervals.
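A brute-force sketch of the projection idea that the paper replaces with closed-form quadrics: invert the AR F-test over a grid for the full coefficient vector, then project onto one coordinate. The simulated design and grid limits are assumptions for illustration; boundedness, which the paper characterizes analytically, is here only observed on the grid:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, k = 500, 3                                   # 3 instruments, 2 endogenous regressors
Z = rng.normal(size=(n, k))
Pi = np.array([[1.0, 0.2, 0.0], [0.0, 0.3, 0.8]]).T
v = rng.normal(size=(n, 2))
u = 0.4 * v[:, 0] + rng.normal(size=n)          # endogeneity via correlated errors
Y = Z @ Pi + v
y = Y @ np.array([1.0, -0.5]) + u

Q, _ = np.linalg.qr(Z)                          # P_Z e = Q Q' e
crit = stats.f.ppf(0.95, k, n - k)

def ar_stat(b):
    e = y - Y @ b
    pe = Q @ (Q.T @ e)
    return (pe @ pe / k) / ((e @ e - pe @ pe) / (n - k))

grid1 = np.linspace(0.0, 2.0, 201)
grid2 = np.linspace(-1.5, 0.5, 201)
acc1 = [g1 for g1 in grid1
        if any(ar_stat(np.array([g1, g2])) <= crit for g2 in grid2)]
if acc1:
    print("projection 95% CI for beta_1:", (min(acc1), max(acc1)))
else:
    print("AR confidence set empty on this grid")
```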

6.
We characterize and prove the existence of Nash equilibrium in a limit order market with a finite number of risk-neutral liquidity providers. We show that if there is sufficient adverse selection, then pointwise optimization (maximizing in p for each q) in a certain nonlinear pricing game produces a Nash equilibrium in the limit order market. The need for a sufficient degree of adverse selection does not vanish as the number of liquidity providers increases. Our formulation of the nonlinear pricing game encompasses various specifications of informed and liquidity trading, including the case in which nature chooses whether the market-order trader is informed or a liquidity trader. We solve for an equilibrium analytically in various examples and also present examples in which the first-order condition for pointwise optimization does not define an equilibrium, because the amount of adverse selection is insufficient.

7.
This paper extends Imbens and Manski's (2004) analysis of confidence intervals for interval identified parameters. The extension is motivated by the discovery that for their final result, Imbens and Manski implicitly assumed locally superefficient estimation of a nuisance parameter. I reanalyze the problem both with assumptions that merely weaken this superefficiency condition and with assumptions that remove it altogether. Imbens and Manski's confidence region is valid under weaker assumptions than theirs, yet superefficiency is required. I also provide a confidence interval that is valid under superefficiency, but can be adapted to the general case. A methodological contribution is to observe that the difficulty of inference comes from a pre-estimation problem regarding a nuisance parameter, clarifying the connection to other work on partial identification.
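A sketch of the baseline Imbens–Manski interval that the paper reanalyzes: the critical value c solves Φ(c + Δ/σ) − Φ(−c) = 1 − α, interpolating between the two-sided value (Δ = 0, point identification) and the one-sided value (Δ large). The estimates and standard errors fed in below are hypothetical:

```python
from scipy import stats, optimize

def im_ci(lo_hat, hi_hat, se_lo, se_hi, alpha=0.05):
    """Imbens-Manski (2004) interval for theta known to lie in [lo, hi] (sketch)."""
    delta = max(hi_hat - lo_hat, 0.0)           # estimated length of identified set
    sig = max(se_lo, se_hi)
    f = lambda c: stats.norm.cdf(c + delta / sig) - stats.norm.cdf(-c) - (1 - alpha)
    c = optimize.brentq(f, 0.0, 10.0)           # between one- and two-sided z values
    return lo_hat - c * se_lo, hi_hat + c * se_hi

# Hypothetical bound estimates [0.3, 0.7] with standard errors 0.02
print(im_ci(0.3, 0.7, se_lo=0.02, se_hi=0.02))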

8.
This paper is concerned with tests and confidence intervals for parameters that are not necessarily point identified and are defined by moment inequalities. In the literature, different test statistics, critical-value methods, and implementation methods (i.e., the asymptotic distribution versus the bootstrap) have been proposed. In this paper, we compare these methods. We provide a recommended test statistic, moment selection critical value, and implementation method. We provide data-dependent procedures for choosing the key moment selection tuning parameter κ and a size-correction factor η.
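A simplified sketch in the spirit of the moment-selection tests the paper compares: moments whose t-statistics show clear slack (above a threshold κ) are dropped before simulating the critical value. The sum-of-squared-negative-parts statistic, the κ = (2 ln ln n)^(1/2) rule, and the omission of the size-correction factor η are simplifications relative to the paper's recommended procedure:

```python
import numpy as np

rng = np.random.default_rng(4)
n, alpha = 500, 0.05
m = rng.normal(loc=[0.0, 0.05, 0.3], size=(n, 3))    # H0: E[m_j] >= 0 for all j (true here)

mbar, s = m.mean(0), m.std(0, ddof=1)
t = np.sqrt(n) * mbar / s
stat = np.sum(np.minimum(t, 0.0) ** 2)               # penalize only violated moments

kappa = np.sqrt(2 * np.log(np.log(n)))               # moment-selection tuning parameter
keep = t <= kappa                                    # drop clearly slack moments
Omega = np.corrcoef(m, rowvar=False)
Z = rng.multivariate_normal(np.zeros(3), Omega, size=10_000)
sims = (np.sum(np.minimum(Z[:, keep], 0.0) ** 2, axis=1)
        if keep.any() else np.zeros(10_000))
crit = np.quantile(sims, 1 - alpha)                  # paper adds a correction eta here
print(f"stat={stat:.3f}, critical value={crit:.3f}, reject={stat > crit}")
```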

9.
Many approaches to estimation of panel models are based on an average or integrated likelihood that assigns weights to different values of the individual effects. Fixed effects, random effects, and Bayesian approaches all fall into this category. We provide a characterization of the class of weights (or priors) that produce estimators that are first-order unbiased. We show that such bias-reducing weights will depend on the data in general unless an orthogonal reparameterization or an essentially equivalent condition is available. Two intuitively appealing weighting schemes are discussed. We argue that asymptotically valid confidence intervals can be read from the posterior distribution of the common parameters when N and T grow at the same rate. Next, we show that random effects estimators are not bias reducing in general and we discuss important exceptions. Moreover, the bias depends on the Kullback–Leibler distance between the population distribution of the effects and its best approximation in the random effects family. Finally, we show that, in general, standard random effects estimation of marginal effects is inconsistent for large T, whereas the posterior mean of the marginal effect is large-T consistent, and we provide conditions for bias reduction. Some examples and Monte Carlo experiments illustrate the results.

10.
We provide easy to verify sufficient conditions for the consistency and asymptotic normality of a class of semiparametric optimization estimators where the criterion function does not obey standard smoothness conditions and simultaneously depends on some nonparametric estimators that can themselves depend on the parameters to be estimated. Our results extend existing theories such as those of Pakes and Pollard (1989), Andrews (1994a), and Newey (1994). We also show that the bootstrap provides asymptotically correct confidence regions for the finite dimensional parameters. We apply our results to two examples: a 'hit rate' and a partially linear median regression with some endogenous regressors.

11.
We introduce and apply a new nonparametric approach to identification and inference on data from ascending auctions. We exploit variation in the number of bidders across auctions to nonparametrically identify useful bounds on seller profit and bidder surplus using a general model of correlated private values that nests the standard independent private values (IPV) model. We also translate our identified bounds into closed form and asymptotically valid confidence intervals for several economic measures of interest. Applying our methods to much studied U.S. Forest Service timber auctions, we find evidence of correlation among values after controlling for a rich vector of relevant auction covariates; this correlation causes expected profit, the profit-maximizing reserve price, and bidder surplus to be substantially lower than conventional (IPV) analysis of the data would suggest.

12.
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data-rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function-valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized control trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced-form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post-regularization and post-selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced-form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment-condition framework, which arises from structural equation models in econometrics. Here, too, the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions. We provide results on honest inference for (function-valued) parameters within this general framework where any high-quality, machine learning methods (e.g., boosted trees, deep neural networks, random forest, and their aggregated and hybrid versions) can be used to learn the nonparametric/high-dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity-based estimation of regression functions for function-valued outcomes.
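A minimal sketch of the key ingredient named above, an orthogonal (doubly robust) moment for the ATE combined with cross-fitting. Random forests stand in for the first-stage learners, and the simulated design is illustrative, not the paper's 401(k) application:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(5)
n, p, tau = 2000, 20, 1.0
X = rng.normal(size=(n, p))
ps = 1 / (1 + np.exp(-X[:, 0]))                 # true propensity score
D = rng.binomial(1, ps)
Y = tau * D + X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)

psi = np.empty(n)
for tr, te in KFold(5, shuffle=True, random_state=0).split(X):
    ps_fit = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[tr], D[tr])
    phat = np.clip(ps_fit.predict_proba(X[te])[:, 1], 0.01, 0.99)
    g1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(
        X[tr][D[tr] == 1], Y[tr][D[tr] == 1]).predict(X[te])
    g0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(
        X[tr][D[tr] == 0], Y[tr][D[tr] == 0]).predict(X[te])
    # Orthogonal score: first-stage errors enter only at second order
    psi[te] = (g1 - g0 + D[te] * (Y[te] - g1) / phat
               - (1 - D[te]) * (Y[te] - g0) / (1 - phat))

ate, se = psi.mean(), psi.std(ddof=1) / np.sqrt(n)
print(f"ATE {ate:.3f} +/- {1.96 * se:.3f} (true {tau})")
```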

13.
Limited overlap between the covariate distributions of groups with different treatment assignments not only makes estimates of average treatment effects rather imprecise, but can also lead to substantially distorted confidence intervals. This paper argues that this is because the coverage error of traditional confidence intervals is driven by the number of observations in the areas of limited overlap. Some of these "local sample sizes" can be very small in applications, up to the point that distributional approximations derived from classical asymptotic theory become unreliable. Building on this observation, this paper constructs confidence intervals based on classical approaches to small sample inference. The approach is easy to implement, and has superior theoretical and practical properties relative to standard methods in empirically relevant settings.

14.
This paper considers inference in a broad class of nonregular models. The models considered are nonregular in the sense that standard test statistics have asymptotic distributions that are discontinuous in some parameters. It is shown in Andrews and Guggenberger (2009a) that standard fixed critical value, subsampling, and m out of n bootstrap methods often have incorrect asymptotic size in such models. This paper introduces general methods of constructing tests and confidence intervals that have correct asymptotic size. In particular, we consider a hybrid subsampling/fixed-critical-value method and size-correction methods. The paper discusses two examples in detail. They are (i) confidence intervals in an autoregressive model with a root that may be close to unity and conditional heteroskedasticity of unknown form and (ii) tests and confidence intervals based on a post-conservative model selection estimator.

15.
The purpose of this note is to show how semiparametric estimators with a small bias property can be constructed. The small bias property (SBP) of a semiparametric estimator is that its bias converges to zero faster than the pointwise and integrated bias of the nonparametric estimator on which it is based. We show that semiparametric estimators based on twicing kernels have the SBP. We also show that semiparametric estimators where nonparametric kernel estimation does not affect the asymptotic variance have the SBP. In addition we discuss an interpretation of series and sieve estimators as idempotent transformations of the empirical distribution that helps explain the known result that they lead to the SBP. In Monte Carlo experiments we find that estimators with the SBP have mean-square error that is smaller and less sensitive to bandwidth than those that do not have the SBP.
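A small sketch of the twicing-kernel construction, K_tw = 2K − K∗K, for the Gaussian kernel, whose convolution with itself is the N(0, 2) density. The density-estimation comparison below is illustrative; the paper's point concerns semiparametric estimators built on such nonparametric inputs:

```python
import numpy as np
from scipy.stats import norm

def twicing_kernel(u):
    """K_tw = 2K - (K*K) for Gaussian K; K*K is the N(0, 2) density."""
    return 2 * norm.pdf(u) - norm.pdf(u, scale=np.sqrt(2))

def kde(x, data, h, kernel):
    return kernel((x[:, None] - data[None, :]) / h).mean(axis=1) / h

rng = np.random.default_rng(6)
data = rng.normal(size=500)
x = np.linspace(-3, 3, 7)
for name, K in [("standard", norm.pdf), ("twicing", twicing_kernel)]:
    est = kde(x, data, h=0.5, kernel=K)
    print(name, np.round(est - norm.pdf(x), 4))   # error vs true N(0,1) density
```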

16.
This paper establishes the higher-order equivalence of the k-step bootstrap, introduced recently by Davidson and MacKinnon (1999), and the standard bootstrap. The k-step bootstrap is a very attractive alternative computationally to the standard bootstrap for statistics based on nonlinear extremum estimators, such as generalized method of moments and maximum likelihood estimators. The paper also extends results of Hall and Horowitz (1996) to provide new results regarding the higher-order improvements of the standard bootstrap and the k-step bootstrap for extremum estimators (compared to procedures based on first-order asymptotics). The results of the paper apply to Newton-Raphson (NR), default NR, line-search NR, and Gauss-Newton k-step bootstrap procedures. The results apply to the nonparametric iid bootstrap and nonoverlapping and overlapping block bootstraps. The results cover symmetric and equal-tailed two-sided t tests and confidence intervals, one-sided t tests and confidence intervals, Wald tests and confidence regions, and J tests of over-identifying restrictions.
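A hedged sketch of the Newton-Raphson k-step bootstrap idea for a logit MLE: each bootstrap draw starts from the full-sample estimate and takes only k Newton steps instead of iterating to convergence. The model, k = 2, and the percentile interval are illustrative choices, not the paper's procedures:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ np.array([0.2, 1.0]))))

def newton_step(b, X, y):
    """One Newton-Raphson step for the logit log-likelihood."""
    p = 1 / (1 + np.exp(-X @ b))
    score = X.T @ (y - p)
    hess = (X * (p * (1 - p))[:, None]).T @ X
    return b + np.linalg.solve(hess, score)

b_hat = np.zeros(2)                 # full MLE on the original sample
for _ in range(25):
    b_hat = newton_step(b_hat, X, y)

k, B = 2, 999                       # k-step bootstrap: only k Newton steps per draw
boot = np.empty((B, 2))
for j in range(B):
    idx = rng.integers(0, n, n)
    bb = b_hat
    for _ in range(k):
        bb = newton_step(bb, X[idx], y[idx])
    boot[j] = bb

lo, hi = np.percentile(boot[:, 1], [2.5, 97.5])
print(f"slope {b_hat[1]:.3f}, k-step bootstrap 95% CI [{lo:.3f}, {hi:.3f}]")
```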

17.
We study the asymptotic distribution of Tikhonov regularized estimation of quantile structural effects implied by a nonseparable model. The nonparametric instrumental variable estimator is based on a minimum distance principle. We show that the minimum distance problem without regularization is locally ill-posed, and we consider penalization by the norms of the parameter and its derivatives. We derive pointwise asymptotic normality and develop a consistent estimator of the asymptotic variance. We study the small sample properties via simulation results and provide an empirical illustration of estimation of nonlinear pricing curves for telecommunications services in the United States.

18.
This paper presents a methodology for analyzing Analytic Hierarchy Process (AHP) rankings if the pairwise preference judgments are uncertain (stochastic). If the relative preference statements are represented by judgment intervals, rather than single values, then the rankings resulting from a traditional (deterministic) AHP analysis based on single judgment values may be reversed, and therefore incorrect. In the presence of stochastic judgments, the traditional AHP rankings may be stable or unstable, depending on the nature of the uncertainty. We develop multivariate statistical techniques to obtain both point estimates and confidence intervals of the rank reversal probabilities, and show how simulation experiments can be used as an effective and accurate tool for analyzing the stability of the preference rankings under uncertainty. If the rank reversal probability is low, then the rankings are stable and the decision maker can be confident that the AHP ranking is correct. However, if the likelihood of rank reversal is high, then the decision maker should interpret the AHP rankings cautiously, as there is a substantial probability that these rankings are incorrect. High rank reversal probabilities indicate a need for exploring alternative problem formulations and methods of analysis. The information about the extent to which the ranking of the alternatives is sensitive to the stochastic nature of the pairwise judgments should be a valuable input to the decision-making process, much like variability and confidence intervals are crucial tools for statistical inference. We provide simulation experiments and numerical examples to evaluate our method. Our analysis of rank reversal due to stochastic judgments is not related to previous research on rank reversal that focuses on mathematical properties inherent to the AHP methodology, for instance, the occurrence of rank reversal if a new alternative is added or an existing one is deleted.
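A minimal Monte Carlo sketch for a three-alternative hierarchy with uniform draws from judgment intervals: priorities come from the principal eigenvector of each sampled comparison matrix, and the rank-reversal probability gets a binomial confidence interval. The intervals below are hypothetical, and the paper's multivariate techniques go beyond this simple simulation:

```python
import numpy as np

rng = np.random.default_rng(8)
# Hypothetical judgment intervals (low, high) for a_12, a_13, a_23
iv = {(0, 1): (2.0, 4.0), (0, 2): (4.0, 6.0), (1, 2): (1.0, 3.0)}

def priorities(a12, a13, a23):
    """AHP priority weights from the principal eigenvector."""
    A = np.array([[1, a12, a13], [1 / a12, 1, a23], [1 / a13, 1 / a23, 1]])
    vals, vecs = np.linalg.eig(A)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return np.abs(w) / np.abs(w).sum()

base = np.argsort(-priorities(*[np.mean(v) for v in iv.values()]))  # midpoint ranking
S, flips = 10_000, 0
for _ in range(S):
    draw = [rng.uniform(lo, hi) for lo, hi in iv.values()]
    if not np.array_equal(np.argsort(-priorities(*draw)), base):
        flips += 1
p = flips / S
se = np.sqrt(p * (1 - p) / S)
print(f"rank-reversal probability {p:.3f}, "
      f"95% CI ({p - 1.96 * se:.3f}, {p + 1.96 * se:.3f})")
```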

19.
Local to unity limit theory is used in applications to construct confidence intervals (CIs) for autoregressive roots through inversion of a unit root test (Stock (1991)). Such CIs are asymptotically valid when the true model has an autoregressive root that is local to unity (ρ = 1 + c/n), but are shown here to be invalid at the limits of the domain of definition of the localizing coefficient c because of a failure in tightness and the escape of probability mass. Failure at the boundary implies that these CIs have zero asymptotic coverage probability in the stationary case and vicinities of unity that are wider than O(n^(-1/3)). The inversion methods of Hansen (1999) and Mikusheva (2007) are asymptotically valid in such cases. Implications of these results for predictive regression tests are explored. When the predictive regressor is stationary, the popular Campbell and Yogo (2006) CIs for the regression coefficient have zero coverage probability asymptotically, and their predictive test statistic Q erroneously indicates predictability with probability approaching unity when the null of no predictability holds. These results have obvious cautionary implications for the use of the procedures in empirical practice.

20.
A new method is proposed for constructing confidence intervals in autoregressive models with linear time trend. Interest focuses on the sum of the autoregressive coefficients because this parameter provides a useful scalar measure of the long-run persistence properties of an economic time series. Since the type of the limiting distribution of the corresponding OLS estimator, as well as the rate of its convergence, depend in a discontinuous fashion upon whether the true parameter is less than one or equal to one (that is, the trend-stationary case or the unit root case), the construction of confidence intervals is notoriously difficult. The crux of our method is to recompute the OLS estimator on smaller blocks of the observed data, according to the general subsampling idea of Politis and Romano (1994a), although some extensions of the standard theory are needed. The method is more general than previous approaches, both because it works for arbitrary parameter values and because it allows the innovations to be a martingale difference sequence rather than i.i.d. Some simulation studies examine the finite sample performance.
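A hedged sketch of overlapping-block subsampling for the AR coefficient in a model with linear trend, using a studentized statistic so that no convergence rate has to be estimated; the block size b and simulated design are illustrative assumptions, and the paper supplies the extensions of standard subsampling theory needed near the unit root:

```python
import numpy as np

def ols_ar1(y):
    """OLS of y_t on intercept, trend, and y_{t-1}; returns rho_hat and its SE."""
    T = len(y)
    X = np.column_stack([np.ones(T - 1), np.arange(1, T), y[:-1]])
    coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    e = y[1:] - X @ coef
    s2 = e @ e / (T - 1 - 3)
    V = s2 * np.linalg.inv(X.T @ X)
    return coef[2], np.sqrt(V[2, 2])

rng = np.random.default_rng(9)
T, rho = 400, 0.95
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.01 * t + rho * y[t - 1] + rng.normal()    # linear trend + AR(1)

rho_hat, se = ols_ar1(y)
b = 60                                                 # subsample (block) size
ts = []
for i in range(T - b + 1):                             # all overlapping blocks
    r_b, se_b = ols_ar1(y[i:i + b])
    ts.append((r_b - rho_hat) / se_b)                  # studentized subsample stats
ql, qh = np.quantile(ts, [0.025, 0.975])
print(f"rho_hat {rho_hat:.3f}, 95% subsampling CI "
      f"[{rho_hat - qh * se:.3f}, {rho_hat - ql * se:.3f}]")
```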
