Similar Articles
20 similar articles found (search time: 15 ms)
1.
This paper introduces time-varying grouped patterns of heterogeneity in linear panel data models. A distinctive feature of our approach is that group membership is left unrestricted. We estimate the parameters of the model using a "grouped fixed-effects" estimator that minimizes a least squares criterion with respect to all possible groupings of the cross-sectional units. Recent advances in the clustering literature allow for fast and efficient computation. We provide conditions under which our estimator is consistent as both dimensions of the panel tend to infinity, and we develop inference methods. Finally, in an application to the link between income and democracy across countries, we allow for grouped patterns of unobserved heterogeneity.
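A minimal sketch of the grouped fixed-effects iteration on simulated data, assuming a simple design y[i,t] = θ·x[i,t] + α[g_i,t] + noise; all names and the data-generating process are illustrative, and practical use would add many random restarts to escape local optima:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, G, theta0 = 200, 20, 3, 1.0
g0 = rng.integers(0, G, N)                       # true (unobserved) memberships
alpha0 = rng.normal(size=(G, T)).cumsum(axis=1)  # grouped time profiles
x = rng.normal(size=(N, T))
y = theta0 * x + alpha0[g0] + 0.5 * rng.normal(size=(N, T))

theta, alpha = 0.0, np.zeros((G, T))
g = rng.integers(0, G, N)                        # random initial grouping
for _ in range(100):
    resid = y - theta * x
    # Step 1: given assignments, alpha is the group-by-time mean of residuals.
    alpha = np.stack([resid[g == k].mean(axis=0) if (g == k).any() else alpha[k]
                      for k in range(G)])
    # Step 2: reassign each unit to the group profile with the smallest SSR.
    g = ((resid[:, None, :] - alpha[None]) ** 2).sum(axis=2).argmin(axis=1)
    # Step 3: given (alpha, g), update theta by pooled least squares.
    theta = (x * (y - alpha[g])).sum() / (x * x).sum()

print("estimated theta:", theta)  # close to 1.0 when groups are well separated
```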

2.
This paper develops the fixed-smoothing asymptotics in a two-step generalized method of moments (GMM) framework. Under this type of asymptotics, the weighting matrix in the second-step GMM criterion function converges weakly to a random matrix and the two-step GMM estimator is asymptotically mixed normal. Nevertheless, the Wald statistic, the GMM criterion function statistic, and the Lagrange multiplier statistic remain asymptotically pivotal. It is shown that critical values from the fixed-smoothing asymptotic distribution are high order correct under the conventional increasing-smoothing asymptotics. When an orthonormal series covariance estimator is used, the critical values can be approximated very well by the quantiles of a noncentral F distribution. A simulation study shows that statistical tests based on the new fixed-smoothing approximation are much more accurate in size than existing tests.
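One ingredient above, the orthonormal series covariance estimator, is easy to sketch; the cosine basis and K = 12 are illustrative choices, with K held fixed under fixed-smoothing asymptotics:

```python
import numpy as np

def series_lrv(v, K=12):
    """Orthonormal-series long-run variance estimate from a T x m moment process."""
    v = v - v.mean(axis=0)                    # demean before projecting
    T = v.shape[0]
    r = np.arange(1, T + 1) / T
    lam = np.stack([(np.sqrt(2) * np.cos(k * np.pi * r)) @ v / np.sqrt(T)
                    for k in range(1, K + 1)])
    return lam.T @ lam / K                    # average of K outer products

rng = np.random.default_rng(1)
e = rng.normal(size=5000)
v = np.convolve(e, [1.0, 0.7], mode="valid")  # MA(1): true LRV = (1 + 0.7)**2
# With K fixed, the estimate stays random around 2.89, which is exactly the
# feature that the fixed-smoothing approximation accounts for.
print(series_lrv(v[:, None], K=12))
```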

3.
In this paper, we study the least squares (LS) estimator in a linear panel regression model with unknown number of factors appearing as interactive fixed effects. Assuming that the number of factors used in estimation is larger than the true number of factors in the data, we establish the limiting distribution of the LS estimator for the regression coefficients as the number of time periods and the number of cross-sectional units jointly go to infinity. The main result of the paper is that under certain assumptions, the limiting distribution of the LS estimator is independent of the number of factors used in the estimation as long as this number is not underestimated. The important practical implication of this result is that for inference on the regression coefficients, one does not necessarily need to estimate the number of interactive fixed effects consistently.
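A minimal sketch of this least-squares estimator on simulated data, deliberately estimating R = 3 factors when the truth has one, to illustrate the robustness result; the alternating SVD/least-squares scheme and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, R_est, beta0 = 100, 100, 3, 0.5
f, lam = rng.normal(size=(T, 1)), rng.normal(size=(N, 1))
X = rng.normal(size=(N, T)) + lam @ f.T       # regressor correlated with factors
Y = beta0 * X + lam @ f.T + rng.normal(size=(N, T))

beta = 0.0
for _ in range(200):
    # Given beta, fit R_est principal components of Y - beta * X ...
    U, s, Vt = np.linalg.svd(Y - beta * X, full_matrices=False)
    common = (U[:, :R_est] * s[:R_est]) @ Vt[:R_est]   # low-rank fit of Lam @ F'
    # ... then update beta by least squares against the deflated outcome.
    beta = (X * (Y - common)).sum() / (X * X).sum()

print("beta-hat:", beta)  # near 0.5 despite overestimating the factor number
```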

4.
It is well known that the finite-sample properties of tests of hypotheses on the co-integrating vectors in vector autoregressive models can be quite poor, and that current solutions based on Bartlett-type corrections or bootstrap based on unrestricted parameter estimators are unsatisfactory, particularly in those cases where the asymptotic χ2 tests themselves fail most severely. In this paper, we solve this inference problem by showing the novel result that a bootstrap test where the null hypothesis is imposed on the bootstrap sample is asymptotically valid. That is, not only does it have asymptotically correct size, but, in contrast to what is claimed in existing literature, it is consistent under the alternative. Compared to the theory for bootstrap tests on the co-integration rank (Cavaliere, Rahbek, and Taylor, 2012), establishing the validity of the bootstrap in the framework of hypotheses on the co-integrating vectors requires new theoretical developments, including the introduction of multivariate Ornstein–Uhlenbeck processes with random (reduced rank) drift parameters. Finally, as documented by Monte Carlo simulations, the bootstrap test outperforms existing methods.
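The cointegrated-VAR algebra is beyond a short sketch, but the principle established above, imposing the null hypothesis when generating bootstrap samples, can be illustrated in a toy regression test; the DGP and names are ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)
n, B = 100, 999
x = rng.normal(size=n)
y = 1.0 + rng.standard_t(df=3, size=n)       # H0: beta = 0 holds in this DGP

def wald(yy, xx):
    X = np.column_stack([np.ones(n), xx])
    b = np.linalg.lstsq(X, yy, rcond=None)[0]
    e = yy - X @ b
    se = np.sqrt(e @ e / (n - 2) * np.linalg.inv(X.T @ X)[1, 1])
    return (b[1] / se) ** 2

stat = wald(y, x)
# Impose the null on the bootstrap world: resample residuals of the restricted
# (intercept-only) model, so that beta = 0 holds in every bootstrap sample.
e_r = y - y.mean()
boot = [wald(y.mean() + rng.choice(e_r, n, replace=True), x) for _ in range(B)]
print("null-imposed bootstrap p-value:", np.mean(np.array(boot) >= stat))
```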

5.
This paper considers inference on functionals of semi/nonparametric conditional moment restrictions with possibly nonsmooth generalized residuals, which include all of the (nonlinear) nonparametric instrumental variables (IV) models as special cases. These models are often ill-posed and hence it is difficult to verify whether a (possibly nonlinear) functional is root-n estimable or not. We provide computationally simple, unified inference procedures that are asymptotically valid regardless of whether a functional is root-n estimable or not. We establish the following new useful results: (1) the asymptotic normality of a plug-in penalized sieve minimum distance (PSMD) estimator of a (possibly nonlinear) functional; (2) the consistency of simple sieve variance estimators for the plug-in PSMD estimator, and hence the asymptotic chi-square distribution of the sieve Wald statistic; (3) the asymptotic chi-square distribution of an optimally weighted sieve quasi likelihood ratio (QLR) test under the null hypothesis; (4) the asymptotically tight distribution of a non-optimally weighted sieve QLR statistic under the null; (5) the consistency of generalized residual bootstrap sieve Wald and QLR tests; (6) local power properties of sieve Wald and QLR tests and of their bootstrap versions; (7) asymptotic properties of sieve Wald and QLR statistics for functionals of increasing dimension. Simulation studies and an empirical illustration of a nonparametric quantile IV regression are presented.

6.
We consider empirical measurement of equivalent variation (EV) and compensating variation (CV) resulting from price change of a discrete good using individual-level data when there is unobserved heterogeneity in preferences. We show that for binary and unordered multinomial choice, the marginal distributions of EV and CV can be expressed as simple closed-form functionals of conditional choice probabilities under essentially unrestricted preference distributions. These results hold even when the distribution and dimension of unobserved heterogeneity are neither known nor identified, and utilities are neither quasilinear nor parametrically specified. The welfare distributions take simple forms that are easy to compute in applications. In particular, average EV for a price rise equals the change in average Marshallian consumer surplus and is smaller than average CV for a normal good. These nonparametric point-identification results fail for ordered choice if the unit price is identical for all alternatives, thereby providing a connection to Hausman–Newey's (2014) partial identification results for the limiting case of continuous choice.
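For binary choice, the closed-form result above amounts to integrating the estimated choice probability over the price change; a minimal sketch under an illustrative logit data-generating process (the result itself does not require the logit form):

```python
import numpy as np
from scipy.integrate import quad
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 5000
price = rng.uniform(1.0, 3.0, n)
buy = (rng.logistic(size=n) < 2.0 - price).astype(int)  # true q(p) = expit(2 - p)

# Any consistent estimator of q(p) = P(buy | price = p) will do; logit MLE here.
fit = LogisticRegression(C=1e6).fit(price[:, None], buy)
q = lambda p: fit.predict_proba([[p]])[0, 1]

p0, p1 = 1.5, 2.5                 # a price rise from p0 to p1
cs_loss, _ = quad(q, p0, p1)      # change in average Marshallian surplus
print("average EV of the price rise:", -cs_loss)
```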

7.
Mechanism design enables a social planner to obtain a desired outcome by leveraging the players' rationality and their beliefs. It is thus a fundamental, yet unproven, intuition that the higher the level of rationality of the players, the better the set of obtainable outcomes. In this paper, we prove this fundamental intuition for players with possibilistic beliefs, a model long considered in epistemic game theory. Specifically, • We define a sequence of monotonically increasing revenue benchmarks for single-good auctions, G0 ≤ G1 ≤ G2 ≤ ⋯, where each Gk is defined over the players' beliefs and G0 is the second-highest valuation (i.e., the revenue benchmark achieved by the second-price mechanism). • We (1) construct a single, interim individually rational, auction mechanism that, without any clue about the rationality level of the players, guarantees revenue Gk if all players have rationality levels ≥ k+1, and (2) prove that no such mechanism can guarantee revenue even close to Gk when at least two players are at most level-k rational.
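Of these benchmarks, only G0 is computable from valuations alone, since the higher Gk depend on the players' belief hierarchies; a minimal sketch of G0 as second-price revenue:

```python
import numpy as np

def second_price_revenue(valuations):
    """With truthful bidding, the winner pays the second-highest valuation: G0."""
    v = np.sort(np.asarray(valuations))[::-1]
    return v[1]

print(second_price_revenue([10.0, 7.0, 3.0]))  # G0 = 7.0
```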

8.
This paper proposes an empirical model of network formation, combining strategic and random network features. Payoffs depend on direct links, but also on link externalities. Players meet sequentially at random, myopically updating their links. Under mild assumptions, the network formation process is a potential game and converges to an exponential random graph model (ERGM), generating directed dense networks. I provide new identification results for ERGMs in large networks: if link externalities are nonnegative, the ERGM is asymptotically indistinguishable from an Erdős–Rényi model with independent links. The parameters can be identified only when at least one of the externalities is negative and sufficiently large. However, the standard estimation methods for ERGMs can have exponentially slow convergence, even when the model has asymptotically independent links. I thus estimate parameters using a Bayesian MCMC method. When the parameters are identifiable, I show evidence that the estimation algorithm converges in almost quadratic time.
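A minimal sketch of the meeting-and-updating dynamics with a single reciprocity externality as the payoff interdependence; the parameters and the logit update rule are illustrative, and estimation would additionally require the MCMC step discussed above:

```python
import numpy as np

rng = np.random.default_rng(5)
n, alpha, beta = 30, -1.0, 0.8          # beta: reciprocity (link) externality
g = np.zeros((n, n), dtype=int)

for _ in range(200_000):
    i, j = rng.choice(n, size=2, replace=False)   # a pair meets at random
    # Myopic logit update: marginal payoff of the directed link i -> j,
    # including the externality from the reciprocal link j -> i.
    du = alpha + beta * g[j, i]
    g[i, j] = int(rng.random() < 1.0 / (1.0 + np.exp(-du)))

# Long-run link frequencies follow the ERGM with potential
#   Q(g) = alpha * sum_ij g_ij + beta * sum_{i<j} g_ij * g_ji.
print("density:", g.sum() / (n * (n - 1)))
print("reciprocated share:", (g * g.T).sum() / max(g.sum(), 1))
```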

9.
This paper develops a dynamic model of neighborhood choice along with a computationally light multi-step estimator. The proposed empirical framework captures observed and unobserved preference heterogeneity across households and locations in a flexible way. We estimate the model using a newly assembled data set that matches demographic information from mortgage applications to the universe of housing transactions in the San Francisco Bay Area from 1994 to 2004. The results provide the first estimates of the marginal willingness to pay for several non-marketed amenities—neighborhood air pollution, violent crime, and racial composition—in a dynamic framework. Comparing these estimates with those from a static version of the model highlights several important biases that arise when dynamic considerations are ignored.

10.
Stochastic discount factor (SDF) processes in dynamic economies admit a permanent-transitory decomposition in which the permanent component characterizes pricing over long investment horizons. This paper introduces an empirical framework to analyze the permanent-transitory decomposition of SDF processes. Specifically, we show how to estimate nonparametrically the solution to the Perron–Frobenius eigenfunction problem of Hansen and Scheinkman (2009). Our empirical framework allows researchers to (i) construct time series of the estimated permanent and transitory components and (ii) estimate the yield and the change of measure which characterize pricing over long investment horizons. We also introduce nonparametric estimators of the continuation value function in a class of models with recursive preferences by reinterpreting the value function recursion as a nonlinear Perron–Frobenius problem. We establish consistency and convergence rates of the eigenfunction estimators and asymptotic normality of the eigenvalue estimator and estimators of related functionals. As an application, we study an economy where the representative agent is endowed with recursive preferences, allowing for general (nonlinear) consumption and earnings growth dynamics.
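A minimal sketch of the Perron–Frobenius step: project the eigenfunction problem onto a polynomial sieve and solve the resulting generalized eigenvalue problem from sample moments. The AR(1) state and the SDF specification are illustrative, not the paper's empirical model:

```python
import numpy as np
from scipy.linalg import eig
from scipy.signal import lfilter

rng = np.random.default_rng(6)
T = 100_000
x = lfilter([1.0], [1.0, -0.9], 0.1 * rng.normal(size=T))  # persistent AR(1) state
m = np.exp(-0.02 - 0.5 * np.diff(x))       # illustrative one-period SDF M_{t+1}

B0 = np.vander(x[:-1], 4, increasing=True) # sieve basis b(X_t): 1, x, x^2, x^3
B1 = np.vander(x[1:], 4, increasing=True)  # basis evaluated at X_{t+1}
G = B0.T @ (m[:, None] * B1) / len(m)      # E_n[ b(X_t) M_{t+1} b(X_{t+1})' ]
W = B0.T @ B0 / len(m)                     # E_n[ b(X_t) b(X_t)' ]

vals, vecs = eig(G, W)                     # sample generalized eigenproblem
k = np.argmax(vals.real)                   # take the largest real root as the
rho, c = vals.real[k], vecs[:, k].real     # Perron-Frobenius approximation
print("principal eigenvalue rho (long-run yield -log(rho)):", rho)
phi = lambda z: np.vander(np.atleast_1d(z), 4, increasing=True) @ c  # eigenfunction
```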

11.
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data-rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function-valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized control trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced-form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post-regularization and post-selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced-form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment-condition framework, which arises from structural equation models in econometrics. Here, too, the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions. We provide results on honest inference for (function-valued) parameters within this general framework where any high-quality, machine learning methods (e.g., boosted trees, deep neural networks, random forest, and their aggregated and hybrid versions) can be used to learn the nonparametric/high-dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity-based estimation of regression functions for function-valued outcomes.
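A minimal sketch of the key ingredient, an orthogonal (doubly robust) moment for the ATE with cross-fitted, Lasso-based nuisance estimates; the data-generating process, fold scheme, and tuning choices are all illustrative:

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegressionCV

rng = np.random.default_rng(7)
n, p = 2000, 50
X = rng.normal(size=(n, p))
D = (rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))).astype(float)  # sparse propensity
Y = 2.0 * D + X[:, 0] + X[:, 1] + rng.normal(size=n)            # true ATE = 2

folds = rng.integers(0, 2, n)               # 2-fold cross-fitting
psi = np.empty(n)
for k in (0, 1):
    tr, te = folds != k, folds == k
    m1 = LassoCV(cv=3).fit(X[tr & (D == 1)], Y[tr & (D == 1)]).predict(X[te])
    m0 = LassoCV(cv=3).fit(X[tr & (D == 0)], Y[tr & (D == 0)]).predict(X[te])
    eh = LogisticRegressionCV(cv=3, penalty="l1", solver="liblinear").fit(X[tr], D[tr])
    e = np.clip(eh.predict_proba(X[te])[:, 1], 0.01, 0.99)
    # Neyman-orthogonal score: regression adjustment plus weighted residuals.
    psi[te] = m1 - m0 + D[te] * (Y[te] - m1) / e - (1 - D[te]) * (Y[te] - m0) / (1 - e)

print(f"ATE estimate: {psi.mean():.3f} (se {psi.std() / np.sqrt(n):.3f})")
```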

12.
The impact of insurer competition on welfare, negotiated provider prices, and premiums in the U.S. private health care industry is theoretically ambiguous. Reduced competition may increase the premiums charged by insurers and their payments made to hospitals. However, it may also strengthen insurers' bargaining leverage when negotiating with hospitals, thereby generating offsetting cost decreases. To understand and measure this trade-off, we estimate a model of employer-insurer and hospital-insurer bargaining over premiums and reimbursements, household demand for insurance, and individual demand for hospitals using detailed California admissions, claims, and enrollment data. We simulate the removal of both large and small insurers from consumers' choice sets. Although consumer welfare decreases and premiums typically increase, we find that premiums can fall upon the removal of a small insurer if an employer imposes effective premium constraints through negotiations with the remaining insurers. We also document substantial heterogeneity in hospital price adjustments upon the removal of an insurer, with renegotiated price increases and decreases of as much as 10% across markets.

13.
The bootstrap is a convenient tool for calculating standard errors of the parameter estimates of complicated econometric models. Unfortunately, the fact that these models are complicated often makes the bootstrap extremely slow or even practically infeasible. This paper proposes an alternative to the bootstrap that relies only on the estimation of one-dimensional parameters. We introduce the idea in the context of M and GMM estimators. A modification of the approach can be used to estimate the variance of two-step estimators.

14.
The availability of high frequency financial data has generated a series of estimators based on intra-day data, improving the quality of large areas of financial econometrics. However, estimating the standard error of these estimators is often challenging. The root of the problem is that traditionally, standard errors rely on estimating a theoretically derived asymptotic variance, and often this asymptotic variance involves substantially more complex quantities than the original parameter to be estimated. Standard errors are important: they are used to assess the precision of estimators in the form of confidence intervals, to create "feasible statistics" for testing, to build forecasting models based on, say, daily estimates, and also to optimize the tuning parameters. The contribution of this paper is to provide an alternative and general solution to this problem, which we call Observed Asymptotic Variance. It is a general nonparametric method for assessing asymptotic variance (AVAR). It provides consistent estimators of AVAR for a broad class of integrated parameters Θ = ∫ θ_t dt, where the spot parameter process θ can be a general semimartingale, with continuous and jump components. The observed AVAR is implemented with the help of a two-scales method. Its construction works well in the presence of microstructure noise, and when the observation times are irregular or asynchronous in the multivariate case. The methodology is valid for a wide variety of estimators, including the standard ones for variance and covariance, and also for more complex estimators, such as those for leverage effects, high frequency betas, and semivariance.

15.
A mixed manna contains goods (that everyone likes) and bads (that everyone dislikes), as well as items that are goods to some agents, but bads or satiated to others. If all items are goods and utility functions are homogeneous of degree 1 and concave (and monotone), the competitive division maximizes the Nash product of utilities (Gale–Eisenberg): hence it is welfarist (determined by the set of feasible utility profiles), unique, continuous, and easy to compute. We show that the competitive division of a mixed manna is still welfarist. If the zero utility profile is Pareto dominated, the competitive profile is strictly positive and still uniquely maximizes the product of utilities. If the zero profile is unfeasible (for instance, if all items are bads), the competitive profiles are strictly negative and are the critical points of the product of disutilities on the efficiency frontier. The latter allows for multiple competitive utility profiles, from which no single-valued selection can be continuous or resource monotonic. Thus the implementation of competitive fairness under linear preferences in interactive platforms like SPLIDDIT will be more difficult when the manna contains bads that overwhelm the goods.
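For the all-goods case with linear utilities, the Gale–Eisenberg characterization above makes the competitive division computable by maximizing the sum of log utilities; a minimal sketch (the mixed case with dominant bads would instead require locating critical points of the disutility product on the efficiency frontier):

```python
import numpy as np
from scipy.optimize import minimize

V = np.array([[3.0, 1.0, 2.0],   # V[i, j]: agent i's value per unit of good j
              [1.0, 4.0, 1.0]])
nA, nG = V.shape

def neg_log_nash(z):
    u = (V * z.reshape(nA, nG)).sum(axis=1)      # linear utilities
    return -np.log(np.maximum(u, 1e-12)).sum()   # log of the Nash product

cons = [{"type": "eq", "fun": lambda z, j=j: z.reshape(nA, nG)[:, j].sum() - 1.0}
        for j in range(nG)]                      # each good fully allocated
res = minimize(neg_log_nash, np.full(nA * nG, 1.0 / nA),
               bounds=[(0.0, 1.0)] * (nA * nG), constraints=cons, method="SLSQP")
x = res.x.reshape(nA, nG)
print("competitive allocation:\n", x.round(3))
print("competitive utilities:", (V * x).sum(axis=1))
```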

16.
We propose an estimation method for models of conditional moment restrictions, which contain finite dimensional unknown parameters (θ) and infinite dimensional unknown functions (h). Our proposal is to approximate h with a sieve and to estimate θ and the sieve parameters jointly by applying the method of minimum distance. We show that: (i) the sieve estimator of h is consistent at a rate faster than n^{-1/4} under a certain metric; (ii) the estimator of θ is √n-consistent and asymptotically normally distributed; (iii) the estimator for the asymptotic covariance of the θ estimator is consistent and easy to compute; and (iv) the optimally weighted minimum distance estimator of θ attains the semiparametric efficiency bound. We illustrate our results with two examples: a partially linear regression with an endogenous nonparametric part, and a partially additive IV regression with a link function.
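A minimal sketch in the spirit of the first example: a partially linear model with an endogenous regressor and an unknown exogenous function h, where, with identity weighting, the sieve minimum distance estimator reduces to 2SLS on sieve terms. The DGP and basis choices are illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)
n, theta0 = 5000, 1.0
z = rng.normal(size=n)
w = rng.uniform(-1, 1, n)
v = rng.normal(size=n)
x = z + v                                   # endogenous regressor
y = theta0 * x + np.sin(np.pi * w) + v + rng.normal(size=n)

H = np.vander(w, 6, increasing=True)        # sieve terms approximating h(w)
R = np.column_stack([x, H])                 # unknowns: theta and sieve coefficients
Zb = np.column_stack([np.vander(z, 4, increasing=True)[:, 1:], H])  # instruments

# With identity weighting, the minimum distance problem reduces to 2SLS here.
Rhat = Zb @ np.linalg.solve(Zb.T @ Zb, Zb.T @ R)   # first-stage projections
coef = np.linalg.solve(Rhat.T @ R, Rhat.T @ y)
print("theta-hat:", coef[0])                # close to 1.0
```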

17.
EXcess Idle Time     
We introduce a novel economic indicator, named excess idle time (EXIT), measuring the extent of sluggishness in financial prices. Under a null and an alternative hypothesis grounded in no-arbitrage (the null) and market microstructure (the alternative) theories of price determination, we derive a limit theory for EXIT leading to formal tests for staleness in the price adjustments. Empirical implementation of the theory indicates that financial prices are often more sluggish than implied by the (ubiquitous in frictionless continuous-time asset pricing) semimartingale assumption. EXIT is interpretable as an illiquidity proxy and is easily implementable, for each trading day, using transaction prices only. By using EXIT, we show how to structurally estimate market microstructure models with asymmetric information.
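A minimal sketch of the idea behind EXIT: compare the observed share of idle intervals (negligible price moves) with the share a frictionless semimartingale would imply, reading the excess as staleness. The threshold rule and the constant-volatility benchmark are simplifying assumptions; the paper's limit theory is considerably richer:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)
n, sigma = 23400, 0.02                  # one "day" of second-by-second returns
dp = sigma / np.sqrt(n) * rng.normal(size=n)
dp[rng.random(n) < 0.3] = 0.0           # 30% of updates suppressed: staleness
p = 100 + np.cumsum(dp)                 # observed log-price path

xi = 0.1 * sigma / np.sqrt(n)           # smallness threshold for "no move"
idle = (np.abs(np.diff(p)) <= xi).mean()
sigma_hat = np.sqrt((np.diff(p) ** 2).sum())              # realized volatility
implied = 2 * norm.cdf(xi * np.sqrt(n) / sigma_hat) - 1   # semimartingale share
print(f"idle share {idle:.3f} vs. frictionless benchmark {implied:.3f}")
print(f"excess idle time: {idle - implied:.3f}")
```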

18.
Consider a group of individuals with unobservable perspectives (subjective prior beliefs) about a sequence of states. In each period, each individual receives private information about the current state and forms an opinion (a posterior belief). She also chooses a target individual and observes the target's opinion. This choice involves a trade-off between well-informed targets, whose signals are precise, and well-understood targets, whose perspectives are well known. Opinions are informative about the target's perspective, so observed individuals become better understood over time. We identify a simple condition under which long-run behavior is history independent. When this fails, each individual restricts attention to a small set of experts and observes the most informed among these. A broad range of observational patterns can arise with positive probability, including opinion leadership and information segregation. In an application to areas of expertise, we show how these mechanisms generate own field bias and large field dominance.

19.
We propose a novel technique to boost the power of testing a high-dimensional vector H0 : θ = 0 against sparse alternatives where the null hypothesis is violated by only a few components. Existing tests based on quadratic forms such as the Wald statistic often suffer from low powers due to the accumulation of errors in estimating high-dimensional parameters. More powerful tests for sparse alternatives such as thresholding and extreme value tests, on the other hand, require either stringent conditions or bootstrap to derive the null distribution and often suffer from size distortions due to the slow convergence. Based on a screening technique, we introduce a "power enhancement component," which is zero under the null hypothesis with high probability, but diverges quickly under sparse alternatives. The proposed test statistic combines the power enhancement component with an asymptotically pivotal statistic, and strengthens the power under sparse alternatives. The null distribution does not require stringent regularity conditions, and is completely determined by that of the pivotal statistic. The proposed methods are then applied to testing the factor pricing models and validating the cross-sectional independence in panel data models.
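A minimal sketch of the construction for the simple case of independent, known-variance component estimates; the screening threshold below is one admissible slowly growing choice, not necessarily the paper's:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(10)
n, p = 500, 600
theta = np.zeros(p); theta[:3] = 0.3          # sparse alternative: 3 nonzeros
t = np.sqrt(n) * (theta + rng.normal(size=p) / np.sqrt(n))   # standardized stats

# J1: an asymptotically pivotal quadratic-form (standardized Wald) statistic.
J1 = (t @ t - p) / np.sqrt(2 * p)
# J0: the screened "power enhancement component", zero under H0 with high
# probability but divergent under sparse alternatives.
delta = np.sqrt(np.log(p)) * np.log(np.log(n))   # slowly growing threshold
J0 = np.sqrt(p) * (t[np.abs(t) > delta] ** 2).sum()

J = J0 + J1
print(f"J0 = {J0:.1f}, J1 = {J1:.2f}, reject at 5%: {J > norm.ppf(0.95)}")
```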

20.
We develop results for the use of Lasso and post-Lasso methods to form first-stage predictions and estimate optimal instruments in linear instrumental variables (IV) models with many instruments, p. Our results apply even when p is much larger than the sample size, n. We show that the IV estimator based on using Lasso or post-Lasso in the first stage is root-n consistent and asymptotically normal when the first stage is approximately sparse, that is, when the conditional expectation of the endogenous variables given the instruments can be well-approximated by a relatively small set of variables whose identities may be unknown. We also show that the estimator is semiparametrically efficient when the structural error is homoscedastic. Notably, our results allow for imperfect model selection, and do not rely upon the unrealistic "beta-min" conditions that are widely used to establish validity of inference following model selection (see also Belloni, Chernozhukov, and Hansen (2011b)). In simulation experiments, the Lasso-based IV estimator with a data-driven penalty performs well compared to recently advocated many-instrument robust procedures. In an empirical example dealing with the effect of judicial eminent domain decisions on economic outcomes, the Lasso-based IV estimator outperforms an intuitive benchmark. Optimal instruments are conditional expectations. In developing the IV results, we establish a series of new results for Lasso and post-Lasso estimators of nonparametric conditional expectation functions which are of independent theoretical and practical interest. We construct a modification of Lasso designed to deal with non-Gaussian, heteroscedastic disturbances that uses a data-weighted ℓ1-penalty function. By innovatively using moderate deviation theory for self-normalized sums, we provide convergence rates for the resulting Lasso and post-Lasso estimators that are as sharp as the corresponding rates in the homoscedastic Gaussian case under the condition that log p = o(n^{1/3}). We also provide a data-driven method for choosing the penalty level that must be specified in obtaining Lasso and post-Lasso estimates and establish its asymptotic validity under non-Gaussian, heteroscedastic disturbances.
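A minimal sketch of the two-stage procedure: Lasso selects first-stage instruments, post-Lasso refits by OLS on the selected set, and the fitted value serves as the estimated optimal instrument. A cross-validated penalty stands in for the paper's data-driven, heteroscedasticity-robust choice, and all names are illustrative:

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(11)
n, p, alpha0 = 400, 200, 1.0
Z = rng.normal(size=(n, p))
v = rng.normal(size=n)
d = Z[:, :3] @ np.array([1.0, 0.7, 0.4]) + v     # approximately sparse first stage
y = alpha0 * d + 0.8 * v + rng.normal(size=n)    # endogeneity through v

sel = np.flatnonzero(LassoCV(cv=5).fit(Z, d).coef_)             # Lasso selection
dhat = LinearRegression().fit(Z[:, sel], d).predict(Z[:, sel])  # post-Lasso fit
alpha_hat = (dhat @ y) / (dhat @ d)              # IV with the estimated instrument
print(f"selected {sel.size} instruments; alpha-hat = {alpha_hat:.3f}")
```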
