Similar Literature
20 similar documents found (search time: 156 ms)
1.
We propose a novel technique to boost the power of testing a high‐dimensional vector H0 : θ = 0 against sparse alternatives where the null hypothesis is violated by only a few components. Existing tests based on quadratic forms such as the Wald statistic often suffer from low power due to the accumulation of errors in estimating high‐dimensional parameters. More powerful tests for sparse alternatives, such as thresholding and extreme value tests, on the other hand, require either stringent conditions or the bootstrap to derive the null distribution, and often suffer from size distortions due to slow convergence. Based on a screening technique, we introduce a “power enhancement component,” which is zero under the null hypothesis with high probability, but diverges quickly under sparse alternatives. The proposed test statistic combines the power enhancement component with an asymptotically pivotal statistic, and strengthens the power under sparse alternatives. The null distribution does not require stringent regularity conditions, and is completely determined by that of the pivotal statistic. The proposed methods are then applied to testing factor pricing models and validating cross‐sectional independence in panel data models.
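As a rough illustration of the idea (a simplified sketch, not the paper's exact construction), the snippet below combines a standardized Wald-type pivotal statistic with a screening-based enhancement component. The threshold choice, the assumption of independent components, and the simulated data are all illustrative assumptions:

```python
import numpy as np

def power_enhanced_test(theta_hat, se, n):
    """Combine a pivotal Wald-type statistic J1 with a screening-based
    "power enhancement component" J0 that is zero w.h.p. under H0."""
    p = len(theta_hat)
    t = theta_hat / se                              # component-wise t-ratios
    delta = np.sqrt(np.log(p)) * np.log(np.log(n))  # slowly diverging threshold
    J0 = np.sqrt(p) * np.sum(t[np.abs(t) > delta] ** 2)  # enhancement component
    J1 = (np.sum(t ** 2) - p) / np.sqrt(2 * p)      # standardized Wald (pivotal)
    return J0 + J1, J0

rng = np.random.default_rng(0)
p, n = 500, 1000
se = np.full(p, 1 / np.sqrt(n))
theta_null = rng.normal(0.0, 1 / np.sqrt(n), p)     # H0 true: pure estimation noise
stat_null, J0_null = power_enhanced_test(theta_null, se, n)
theta_alt = theta_null.copy()
theta_alt[:3] += 0.5                                # sparse alternative: 3 signals
stat_alt, J0_alt = power_enhanced_test(theta_alt, se, n)
```

Under the null, no t-ratio exceeds the slowly diverging threshold, so J0 vanishes and the statistic inherits the pivotal null distribution; under the sparse alternative, the three large components push J0 far above zero.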

2.
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data‐rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function‐valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized control trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced‐form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post‐regularization and post‐selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced‐form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment‐condition framework, which arises from structural equation models in econometrics. Here, too, the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions. 
We provide results on honest inference for (function‐valued) parameters within this general framework where any high‐quality machine learning methods (e.g., boosted trees, deep neural networks, random forest, and their aggregated and hybrid versions) can be used to learn the nonparametric/high‐dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity‐based estimation of regression functions for function‐valued outcomes.
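A minimal sketch of the orthogonal-moment idea under a toy data-generating process: residualize both the outcome and the treatment on the controls with a first-stage learner (ridge regression here, standing in for any ML method), using cross-fitting, then regress residual on residual. The DGP and all tuning choices are illustrative assumptions:

```python
import numpy as np

def ridge_fit_predict(X_tr, y_tr, X_te, lam=1.0):
    """First-stage nuisance learner; ridge as a stand-in for any ML method."""
    p = X_tr.shape[1]
    beta = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(p), X_tr.T @ y_tr)
    return X_te @ beta

def orthogonal_effect(y, d, X, n_folds=2):
    """Partialling-out with cross-fitting: residualize y and d on the
    controls X, then regress residual on residual (orthogonal moment)."""
    n = len(y)
    folds = np.array_split(np.random.default_rng(1).permutation(n), n_folds)
    ry, rd = np.zeros(n), np.zeros(n)
    for k, te in enumerate(folds):
        tr = np.concatenate([f for j, f in enumerate(folds) if j != k])
        ry[te] = y[te] - ridge_fit_predict(X[tr], y[tr], X[te])
        rd[te] = d[te] - ridge_fit_predict(X[tr], d[tr], X[te])
    theta = (rd @ ry) / (rd @ rd)
    var = np.mean((ry - theta * rd) ** 2 * rd ** 2) / np.mean(rd ** 2) ** 2
    return theta, np.sqrt(var / n)

rng = np.random.default_rng(2)
n, p = 2000, 20
X = rng.normal(size=(n, p))
d = X[:, 0] + rng.normal(size=n)            # treatment depends on a control
y = 0.5 * d + X[:, 0] + rng.normal(size=n)  # true effect = 0.5, confounded
theta, se = orthogonal_effect(y, d, X)
naive = (d @ y) / (d @ d)                   # ignores controls: badly biased
```

Because the moment is orthogonal to the nuisance functions, small first-stage estimation errors have only second-order impact on the effect estimate, which is what makes post-regularization inference honest.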

3.
We propose a novel methodology for evaluating the accuracy of numerical solutions to dynamic economic models. It consists of constructing a lower bound on the size of approximation errors. A small lower bound on errors is a necessary condition for accuracy: if a lower error bound is unacceptably large, then the actual approximation errors are even larger, and hence the approximation is inaccurate. Our lower‐bound error analysis is complementary to the conventional upper‐bound (worst‐case) error analysis, which provides a sufficient condition for accuracy. As an illustration of our methodology, we assess the accuracy of the first‐ and second‐order perturbation solutions for two stylized models: a neoclassical growth model and a new Keynesian model. The errors are small for the former model but unacceptably large for the latter model under some empirically relevant parameterizations.

4.
This paper shows that the problem of testing hypotheses in moment condition models without any assumptions about identification may be considered as a problem of testing with an infinite‐dimensional nuisance parameter. We introduce a sufficient statistic for this nuisance parameter in a Gaussian problem and propose conditional tests. These conditional tests have uniformly correct asymptotic size for a large class of models and test statistics. We apply our approach to construct tests based on quasi‐likelihood ratio statistics, which we show are efficient in strongly identified models and perform well relative to existing alternatives in two examples.

5.
We propose a semiparametric two‐step inference procedure for a finite‐dimensional parameter based on moment conditions constructed from high‐frequency data. The population moment conditions take the form of temporally integrated functionals of state‐variable processes that include the latent stochastic volatility process of an asset. In the first step, we nonparametrically recover the volatility path from high‐frequency asset returns. The nonparametric volatility estimator is then used to form sample moment functions in the second‐step GMM estimation, which requires the correction of a high‐order nonlinearity bias from the first step. We show that the proposed estimator is consistent and asymptotically mixed Gaussian and propose a consistent estimator for the conditional asymptotic variance. We also construct a Bierens‐type consistent specification test. These infill asymptotic results are based on a novel empirical‐process‐type theory for general integrated functionals of noisy semimartingale processes.
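The two-step logic can be mimicked in a stylized setting: recover a spot variance path from local block averages of squared returns, then plug that path into an integrated functional with a simple bias correction for the first-step nonlinearity. The deterministic variance path, the block size, and the particular functional (∫σ_t⁴ dt) are illustrative assumptions, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000                                        # intraday returns over [0, 1]
dt = 1.0 / n
t = np.arange(n) * dt
sigma2 = 0.04 * (1 + 0.5 * np.sin(2 * np.pi * t))  # deterministic spot variance (toy)
r = np.sqrt(sigma2 * dt) * rng.normal(size=n)      # high-frequency returns

# Step 1: nonparametric spot variance via local realized variance in blocks
k = 200
sigma2_hat = (r ** 2).reshape(-1, k).sum(axis=1) / (k * dt)  # one estimate per block

# Step 2: plug the path into an integrated moment, here the quarticity
# integral of sigma_t^4 dt, correcting the first-step nonlinearity bias:
# E[sigma2_hat^2] = (1 + 2/k) * sigma^4 for Gaussian returns.
quart_naive = np.mean(sigma2_hat ** 2)
quart_bc = quart_naive / (1 + 2.0 / k)
quart_true = np.mean(sigma2 ** 2)
```

The bias correction matters because the moment is a nonlinear (here quadratic) functional of the estimated volatility path, so plugging in the noisy first-step estimate is not innocuous.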

6.
Risk aversion (a second‐order risk preference) is a time‐proven concept in economic models of choice under risk. More recently, the higher order risk preferences of prudence (third‐order) and temperance (fourth‐order) have also been shown to be quite important. While a majority of the population seems to exhibit both risk aversion and these higher order risk preferences, a significant minority does not. We show how both risk‐averse and risk‐loving behaviors might be generated by a simple type of basic lottery preference for either (1) combining “good” outcomes with “bad” ones, or (2) combining “good with good” and “bad with bad,” respectively. We further show that this dichotomy is fairly robust at explaining higher order risk attitudes in the laboratory. In addition to our own experimental evidence, we take a second look at the extant laboratory experiments that measure higher order risk preferences and we find a fair amount of support for this dichotomy. Our own experiment is also the first to look beyond fourth‐order risk preferences, and we examine risk attitudes at even higher orders.

7.
We develop a new parametric estimation procedure for option panels observed with error. We exploit asymptotic approximations assuming an ever increasing set of option prices in the moneyness (cross‐sectional) dimension, but with a fixed time span. We develop consistent estimators for the parameters and the dynamic realization of the state vector governing the option price dynamics. The estimators converge stably to a mixed‐Gaussian law and we develop feasible estimators for the limiting variance. We also provide semiparametric tests for the option price dynamics based on the distance between the spot volatility extracted from the options and one constructed nonparametrically from high‐frequency data on the underlying asset. Furthermore, we develop new tests for the day‐by‐day model fit over specific regions of the volatility surface and for the stability of the risk‐neutral dynamics over time. A comprehensive Monte Carlo study indicates that the inference procedures work well in empirically realistic settings. In an empirical application to S&P 500 index options, guided by the new diagnostic tests, we extend existing asset pricing models by allowing for a flexible dynamic relation between volatility and priced jump tail risk. Importantly, we document that the priced jump tail risk typically responds in a more pronounced and persistent manner than volatility to large negative market shocks.

8.
We introduce methods for estimating nonparametric, nonadditive models with simultaneity. The methods are developed by directly connecting the elements of the structural system to be estimated with features of the density of the observable variables, such as ratios of derivatives or averages of products of derivatives of this density. The estimators are therefore easily computed functionals of a nonparametric estimator of the density of the observable variables. We consider in detail a model where to each structural equation there corresponds an exclusive regressor and a model with one equation of interest and one instrument that is included in a second equation. For both models, we provide new characterizations of observational equivalence on a set, in terms of the density of the observable variables and derivatives of the structural functions. Based on those characterizations, we develop two estimation methods. In the first method, the estimators of the structural derivatives are calculated by a simple matrix inversion and matrix multiplication, analogous to a standard least squares estimator, but with the elements of the matrices being averages of products of derivatives of nonparametric density estimators. In the second method, the estimators of the structural derivatives are calculated in two steps. In a first step, values of the instrument are found at which the density of the observable variables satisfies some properties. In the second step, the estimators are calculated directly from the values of derivatives of the density of the observable variables evaluated at the found values of the instrument. We show that both pointwise estimators are consistent and asymptotically normal.

9.
We estimate demand for residential broadband using high‐frequency data from subscribers facing a three‐part tariff. The three‐part tariff makes data usage during the billing cycle a dynamic problem, thus generating variation in the (shadow) price of usage. We provide evidence that subscribers respond to this variation, and we use their dynamic decisions to estimate a flexible distribution of willingness to pay for different plan characteristics. Using the estimates, we simulate demand under alternative pricing and find that usage‐based pricing eliminates low‐value traffic. Furthermore, we show that the costs associated with investment in fiber‐optic networks are likely recoverable in some markets, but that there is a large gap between social and private incentives to invest.

10.
We present a methodology for estimating the distributional effects of an endogenous treatment that varies at the group level when there are group‐level unobservables, a quantile extension of Hausman and Taylor (1981). Because of the presence of group‐level unobservables, standard quantile regression techniques are inconsistent in our setting even if the treatment is independent of unobservables. In contrast, our estimation technique is consistent as well as computationally simple, consisting of group‐by‐group quantile regression followed by two‐stage least squares. Using the Bahadur representation of quantile estimators, we derive weak conditions on the growth of the number of observations per group that are sufficient for consistency and asymptotic zero‐mean normality of our estimator. As in Hausman and Taylor (1981), micro‐level covariates can be used as internal instruments for the endogenous group‐level treatment if they satisfy relevance and exogeneity conditions. Our approach applies to a broad range of settings including labor, public finance, industrial organization, urban economics, and development; we illustrate its usefulness with several such examples. Finally, an empirical application of our estimator finds that low‐wage earners in the United States from 1990 to 2007 were significantly more affected by increased Chinese import competition than high‐wage earners.
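The two-step structure is easy to sketch in a toy simulation: a group-by-group quantile "regression" (intercept-only here, i.e., the within-group median) followed by instrumental-variables estimation across groups. The data-generating process and instrument are hypothetical, chosen only to show why 2SLS is needed when the group-level treatment is correlated with a group-level unobservable:

```python
import numpy as np

rng = np.random.default_rng(4)
G, n_g, beta = 200, 100, 2.0               # groups, group size, true median effect
z = rng.normal(size=G)                     # group-level instrument
eta = rng.normal(size=G)                   # group-level unobservable
d = z + eta + rng.normal(size=G)           # endogenous treatment, correlated with eta

# Step 1: group-by-group quantile regression (intercept-only here:
# the within-group median of the micro-level outcomes)
q_hat = np.empty(G)
for g in range(G):
    y_g = beta * d[g] + eta[g] + rng.normal(size=n_g)
    q_hat[g] = np.quantile(y_g, 0.5)

# Step 2: 2SLS of the estimated group quantiles on the treatment, using z
beta_iv = np.cov(z, q_hat)[0, 1] / np.cov(z, d)[0, 1]
beta_ols = np.cov(d, q_hat)[0, 1] / np.var(d, ddof=1)   # biased upward by eta
```

The first step absorbs within-group heterogeneity; the second step removes the endogeneity coming from the group-level unobservable, which the naive regression of group quantiles on the treatment cannot.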

11.
Both aristocratic privileges and constitutional constraints in traditional monarchies can be derived from a ruler's incentive to minimize expected costs of moral‐hazard rents for high officials. We consider a dynamic moral‐hazard model of governors serving a sovereign prince, who must deter them from rebellion and hidden corruption, which could cause costly crises. To minimize costs, a governor's rewards for good performance should be deferred up to the maximal credit that the prince can be trusted to pay. In the long run, we find that high officials can become an entrenched aristocracy with low turnover and large claims on the ruler. Dismissals for bad performance should be randomized to avoid inciting rebellions, but the prince can profit from reselling vacant offices, and so his decisions to dismiss high officials require institutionalized monitoring. A soft budget constraint that forgives losses for low‐credit governors can become efficient when costs of corruption are low.

12.
Building on Rest's (1986) conceptual model of ethical decision making, we derive and empirically test a model that links an organization's formal ethical infrastructure to individuals’ moral awareness of ethical situations, moral judgment, and moral intention. We contribute to the literature by shedding light on the importance of a multifaceted formal ethical infrastructure—consisting of formal communication, recurrent communication, formal surveillance, and formal sanctions—as a crucial antecedent of moral awareness. In so doing, we discern how these four elements of a formal ethical infrastructure combine to collectively influence moral awareness based on a second‐order factor structure using structural equation modeling. We test our model based on survey data from 805 respondents with significant work experience across three separate ethical scenarios. Our results across the three scenarios provide overall support for our model. We found that a second‐order factor structure for the formal ethical infrastructure explains the variance among the four infrastructure elements and that a multifaceted formal ethical infrastructure significantly increases moral awareness. Our results further suggest a strong positive effect of moral awareness on moral judgment, which in turn was found to have a positive impact on moral intention. These results were substantiated when taking several individual and contextual control variables into account, such as gender, age, religiosity, work satisfaction, and a de facto ethical climate. Implications for theory, practice, and supply management are discussed.

13.
The availability of high‐frequency financial data has generated a series of estimators based on intra‐day data, improving the quality of large areas of financial econometrics. However, estimating the standard error of these estimators is often challenging. The root of the problem is that, traditionally, standard errors rely on estimating a theoretically derived asymptotic variance, and often this asymptotic variance involves substantially more complex quantities than the original parameter to be estimated. Standard errors are important: they are used to assess the precision of estimators in the form of confidence intervals, to create “feasible statistics” for testing, to build forecasting models based on, say, daily estimates, and to optimize tuning parameters. The contribution of this paper is to provide an alternative and general solution to this problem, which we call the Observed Asymptotic Variance. It is a general nonparametric method for assessing asymptotic variance (AVAR). It provides consistent estimators of AVAR for a broad class of integrated parameters Θ = ∫ θt dt, where the spot parameter process θ can be a general semimartingale, with continuous and jump components. The observed AVAR is implemented with the help of a two‐scales method. Its construction works well in the presence of microstructure noise, and when the observation times are irregular or asynchronous in the multivariate case. The methodology is valid for a wide variety of estimators, including the standard ones for variance and covariance, and also for more complex estimators, such as those for leverage effects, high‐frequency betas, and semivariance.
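To illustrate the general flavor of a two-scales construction under microstructure noise (applied here to the classic two-scales realized variance, not to the paper's AVAR estimator itself), the sketch below contrasts a fast scale, dominated by noise, with a subsampled slow scale, and combines them to debias. Constant volatility and the specific noise level are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 23400                                   # one return per second, 6.5 hours
dt = 1.0 / n
sigma = 0.2                                 # constant spot volatility (toy)
X = np.cumsum(sigma * np.sqrt(dt) * rng.normal(size=n + 1))  # efficient log-price
Y = X + 0.001 * rng.normal(size=n + 1)      # observed price + microstructure noise

def rv(p, k):
    """Realized variance at stride k, averaged over the k subsampling offsets."""
    d = p[k:] - p[:-k]
    return (d @ d) / k

K = 300
rv_fast = rv(Y, 1)                          # dominated by noise: ~ 2 n E[eps^2]
rv_slow = rv(Y, K)                          # slow scale: far less noise-biased
n_bar = (n - K + 1) / K
tsrv = rv_slow - (n_bar / n) * rv_fast      # two-scales combination: debiased
true_iv = sigma ** 2                        # integrated variance on [0, 1] = 0.04
```

The fast-scale estimate is off by orders of the noise variance, while the two-scales combination recovers the integrated variance; the paper's observed AVAR applies the same kind of multi-scale cancellation to the asymptotic variance itself.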

14.
This paper introduces time‐varying grouped patterns of heterogeneity in linear panel data models. A distinctive feature of our approach is that group membership is left unrestricted. We estimate the parameters of the model using a “grouped fixed‐effects” estimator that minimizes a least squares criterion with respect to all possible groupings of the cross‐sectional units. Recent advances in the clustering literature allow for fast and efficient computation. We provide conditions under which our estimator is consistent as both dimensions of the panel tend to infinity, and we develop inference methods. Finally, we allow for grouped patterns of unobserved heterogeneity in the study of the link between income and democracy across countries.
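In the special case with no covariates, minimizing the least squares criterion over groupings reduces to a k-means-style alternation on the units' time profiles. The sketch below is that simplified case only (the DGP, group count, and initialization are illustrative assumptions):

```python
import numpy as np

def grouped_fixed_effects(Y, G, n_iter=50, seed=0):
    """Alternate between (i) computing each group's time profile as the
    mean over its currently assigned units and (ii) reassigning every
    unit to the group whose profile minimizes its squared residuals.
    With no covariates this is k-means on the units' time profiles."""
    rng = np.random.default_rng(seed)
    N, T = Y.shape
    g = rng.integers(0, G, size=N)               # random initial grouping
    for _ in range(n_iter):
        profiles = np.stack([Y[g == k].mean(axis=0) if np.any(g == k)
                             else Y[rng.integers(N)] for k in range(G)])
        dist = ((Y[:, None, :] - profiles[None, :, :]) ** 2).sum(axis=2)
        g_new = dist.argmin(axis=1)
        if np.array_equal(g_new, g):
            break
        g = g_new
    return g, profiles

rng = np.random.default_rng(6)
N, T, G = 120, 20, 2
true_g = (np.arange(N) >= N // 2).astype(int)    # two latent groups
profiles_true = np.where(true_g[:, None] == 0,
                         np.linspace(0, 1, T),   # group 0: rising time effects
                         np.linspace(1, 0, T))   # group 1: falling time effects
Y = profiles_true + 0.3 * rng.normal(size=(N, T))
g_hat, _ = grouped_fixed_effects(Y, G)
acc = max(np.mean(g_hat == true_g), np.mean(g_hat == 1 - true_g))
```

The accuracy is computed in a label-invariant way because the estimator recovers the partition, not the group labels.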

15.
This paper develops a method for inference in dynamic discrete choice models with serially correlated unobserved state variables. Estimation of these models involves computing high‐dimensional integrals that are present in the solution to the dynamic program and in the likelihood function. First, the paper proposes a Bayesian Markov chain Monte Carlo estimation procedure that can handle the problem of multidimensional integration in the likelihood function. Second, the paper presents an efficient algorithm for solving the dynamic program suitable for use in conjunction with the proposed estimation procedure.

16.
This article considers a class of fresh‐product supply chains in which products need to be transported by the upstream producer from a production base to a distant retail market. Due to high perishability, a portion of the products being shipped may decay during transportation and therefore become unsaleable. We consider a supply chain consisting of a single producer and a single distributor, and investigate two commonly adopted business models: (i) in the “pull” model, the distributor places an order, then the producer determines the shipping quantity, taking into account potential product decay during transportation, and transports the products to the destination market of the distributor; (ii) in the “push” model, the producer ships a batch of products to a distant wholesale market, and then the distributor purchases and resells to end customers. By considering a price‐sensitive end‐customer demand, we investigate the optimal decisions for supply chain members, including order quantity, shipping quantity, and retail price. Our research shows that both the producer and distributor (and thus the supply chain) will perform better if the pull model is adopted. To improve the supply chain performance, we propose a fixed inventory‐plus factor (FIPF) strategy, in which the producer announces a predetermined inventory‐plus factor and the distributor compensates the producer for any surplus inventory that would otherwise be wasted. We show that this strategy is a Pareto improvement over the pull and push models for both parties. Finally, numerical experiments are conducted, which reveal some interesting managerial insights on the comparison between different business models.

17.
The paper analyzes dynamic principal–agent models with short period lengths. The two main contributions are: (i) an analytic characterization of the values of optimal contracts in the limit as the period length goes to 0, and (ii) the construction of relatively simple (almost) optimal contracts for fixed period lengths. Our setting is flexible and includes the pure hidden action or pure hidden information models as special cases. We show how such details of the underlying information structure affect the optimal provision of incentives and the value of the contracts. The dependence is very tractable and we obtain sharp comparative statics results. The results are derived with a novel method that uses a quadratic approximation of the Pareto boundary of the equilibrium value set.

18.
This paper develops characterizations of identified sets of structures and structural features for complete and incomplete models involving continuous or discrete variables. Multiple values of unobserved variables can be associated with particular combinations of observed variables. This can arise when there are multiple sources of heterogeneity, censored or discrete endogenous variables, or inequality restrictions on functions of observed and unobserved variables. The models generalize the class of incomplete instrumental variable (IV) models in which unobserved variables are single‐valued functions of observed variables. Thus the models are referred to as generalized IV (GIV) models, but there are important cases in which instrumental variable restrictions play no significant role. Building on a definition of observational equivalence for incomplete models, the development uses results from random set theory that guarantee that the characterizations deliver sharp bounds, thereby dispensing with the need for case‐by‐case proofs of sharpness. The use of random sets defined on the space of unobserved variables allows identification analysis under mean and quantile independence restrictions on the distributions of unobserved variables conditional on exogenous variables as well as under a full independence restriction. The results are used to develop sharp bounds on the distribution of valuations in an incomplete model of English auctions, improving on the pointwise bounds available until now. Application of many of the results of the paper requires no familiarity with random set theory.

19.
This paper provides a novel mechanism for identifying and estimating latent group structures in panel data using penalized techniques. We consider both linear and nonlinear models where the regression coefficients are heterogeneous across groups but homogeneous within a group and the group membership is unknown. Two approaches are considered—penalized profile likelihood (PPL) estimation for the general nonlinear models without endogenous regressors, and penalized GMM (PGMM) estimation for linear models with endogeneity. In both cases, we develop a new variant of Lasso called classifier‐Lasso (C‐Lasso) that serves to shrink individual coefficients to the unknown group‐specific coefficients. C‐Lasso achieves simultaneous classification and consistent estimation in a single step and the classification exhibits the desirable property of uniform consistency. For PPL estimation, C‐Lasso also achieves the oracle property so that group‐specific parameter estimators are asymptotically equivalent to infeasible estimators that use individual group identity information. For PGMM estimation, the oracle property of C‐Lasso is preserved in some special cases. Simulations demonstrate good finite‐sample performance of the approach in both classification and estimation. Empirical applications to both linear and nonlinear models are presented.
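The distinctive ingredient is a multiplicative penalty, a product of distances to the candidate group centers, which shrinks each unit's coefficient toward one of them. The sketch below is a heavily simplified one-dimensional toy with two groups, a quadratic loss around individual OLS slopes, and a crude grid-based alternating minimization; none of these choices are the paper's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(7)
N, T = 40, 50
true_g = (np.arange(N) >= N // 2).astype(int)        # latent group labels
true_beta = np.where(true_g == 0, 1.0, 3.0)          # group-specific slopes
x = rng.normal(size=(N, T))
y = true_beta[:, None] * x + rng.normal(size=(N, T))
b_ols = (x * y).sum(axis=1) / (x * x).sum(axis=1)    # individual OLS slopes

# C-Lasso-style criterion, 1-D with K = 2 and a quadratic loss around b_ols:
#   Q_i(b) = (b - b_ols_i)^2 + lam * |b - a_1| * |b - a_2|
# The product penalty shrinks each b_i to one of the group centers a_k.
lam = 0.5
grid = np.linspace(0.0, 4.0, 801)
a = np.array([b_ols.min(), b_ols.max()])             # crude initial centers
for _ in range(20):
    pen = lam * np.abs(grid - a[0]) * np.abs(grid - a[1])
    b = np.array([grid[np.argmin((grid - bi) ** 2 + pen)] for bi in b_ols])
    g = (np.abs(b - a[0]) > np.abs(b - a[1])).astype(int)  # classify by center
    a = np.array([b_ols[g == k].mean() for k in (0, 1)])   # update centers
acc = max(np.mean(g == true_g), np.mean(g == 1 - true_g))
```

Because the penalty is zero at either center and kinked there, individual coefficients snap onto a group center, which is how classification and estimation happen in one step.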

20.
This paper makes the following original contributions to the literature. (i) We develop a simpler analytical characterization and numerical algorithm for Bayesian inference in structural vector autoregressions (VARs) that can be used for models that are overidentified, just‐identified, or underidentified. (ii) We analyze the asymptotic properties of Bayesian inference and show that in the underidentified case, the asymptotic posterior distribution of contemporaneous coefficients in an n‐variable VAR is confined to the set of values that orthogonalize the population variance–covariance matrix of ordinary least squares residuals, with the height of the posterior proportional to the height of the prior at any point within that set. For example, in a bivariate VAR for supply and demand identified solely by sign restrictions, if the population correlation between the VAR residuals is positive, then even if one has available an infinite sample of data, any inference about the demand elasticity is coming exclusively from the prior distribution. (iii) We provide analytical characterizations of the informative prior distributions for impulse‐response functions that are implicit in the traditional sign‐restriction approach to VARs, and we note, as a special case of result (ii), that the influence of these priors does not vanish asymptotically. (iv) We illustrate how Bayesian inference with informative priors can be both a strict generalization and an unambiguous improvement over frequentist inference in just‐identified models. (v) We propose that researchers need to explicitly acknowledge and defend the role of prior beliefs in influencing structural conclusions and we illustrate how this could be done using a simple model of the U.S. labor market.
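The set-identification point in (ii) can be seen in a bivariate toy example: every rotation of the Cholesky factor of the residual covariance reproduces that covariance exactly, so the data cannot distinguish among the rotations that survive the sign restrictions, and any spread in the implied impact ratios comes from the prior over rotations. The specific covariance, the uniform rotation prior, and the sign pattern are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(8)
Sigma = np.array([[1.0, 0.5],               # population covariance of the VAR
                  [0.5, 1.0]])              # residuals (positive correlation)
C = np.linalg.cholesky(Sigma)

accepted = []
for _ in range(2000):
    theta = rng.uniform(0.0, 2 * np.pi)     # "prior" draw over rotation angles
    Q = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    A = C @ Q                               # candidate impact matrix: A A' = Sigma
    # sign restrictions (row 0 = price, row 1 = quantity):
    # demand shock (col 0) raises both; supply shock (col 1) lowers price only
    if A[0, 0] > 0 and A[1, 0] > 0 and A[0, 1] < 0 and A[1, 1] > 0:
        accepted.append(A)

As = np.stack(accepted)
fit_err = np.abs(As @ np.transpose(As, (0, 2, 1)) - Sigma).max()
slopes = As[:, 1, 0] / As[:, 0, 0]          # impact ratio: varies across draws
```

Every accepted draw fits the residual covariance to machine precision, yet the implied impact ratio ranges widely, which is exactly the sense in which inference within the identified set comes from the prior.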


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号