Similar Articles
18 similar articles found
1.
We propose a theory of monetary policy and macroprudential interventions in financial markets. We focus on economies with nominal rigidities in goods and labor markets and subject to constraints on monetary policy, such as the zero lower bound or fixed exchange rates. We identify an aggregate demand externality that can be corrected by macroprudential interventions in financial markets. Ex post, the distribution of wealth across agents affects aggregate demand and output. Ex ante, however, these effects are not internalized in private financial decisions. We provide a simple formula for the required financial interventions that depends on a small number of measurable sufficient statistics. We also characterize optimal monetary policy. We extend our framework to incorporate pecuniary externalities, providing a unified approach to both externalities. Finally, we provide a number of applications which illustrate the relevance of our theory.

2.
We consider forecasting with uncertainty about the choice of predictor variables. The researcher wants to select a model, estimate the parameters, and use the parameter estimates for forecasting. We investigate the distributional properties of a number of different schemes for model choice and parameter estimation, including: in-sample model selection using the Akaike information criterion; out-of-sample model selection; and splitting the data into subsamples for model selection and parameter estimation. Using a weak-predictor local asymptotic scheme, we provide a representation result that facilitates comparison of the distributional properties of the procedures and their associated forecast risks. This representation isolates the source of inefficiency in some of these procedures. We develop a simulation procedure that improves the accuracy of the out-of-sample and split-sample methods uniformly over the local parameter space. We also examine how bootstrap aggregation (bagging) affects the local asymptotic risk of the estimators and their associated forecasts. Numerically, we find that for many values of the local parameter, the out-of-sample and split-sample schemes perform poorly if implemented in the conventional way. But they perform well, if implemented in conjunction with our risk-reduction method or bagging.
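The trade-offs among these schemes lend themselves to a small numerical illustration. Below is a minimal sketch, not the paper's local asymptotic experiment: it contrasts in-sample AIC selection, out-of-sample selection, and bagging for a one-step-ahead forecast with a single weak predictor. The data-generating process, split fraction, and bootstrap count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forecast(y, x, x_next, use_predictor):
    """OLS forecast of the next observation, with or without the predictor."""
    if not use_predictor:
        return y.mean()
    X = np.column_stack([np.ones_like(x), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b[0] + b[1] * x_next

def aic_select(y, x):
    """In-sample scheme: AIC over {mean-only, mean + predictor}."""
    T = len(y)
    rss0 = np.sum((y - y.mean()) ** 2)
    X = np.column_stack([np.ones_like(x), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss1 = np.sum((y - X @ b) ** 2)
    return T * np.log(rss1 / T) + 4 < T * np.log(rss0 / T) + 2

def oos_select(y, x, frac=0.7):
    """Out-of-sample scheme: pick the model with lower holdout MSE."""
    s = int(frac * len(y))
    mse = {u: np.mean([(y[t] - forecast(y[:t], x[:t], x[t], u)) ** 2
                       for t in range(s, len(y))]) for u in (False, True)}
    return mse[True] < mse[False]

# Weak (local-to-zero) predictor: the regime where scheme choice matters most.
T, b0 = 100, 0.15
x = rng.standard_normal(T + 1)
xs, y = x[:T], 0.15 * x[:T] + rng.standard_normal(T)

f_aic = forecast(y, xs, x[T], aic_select(y, xs))
f_oos = forecast(y, xs, x[T], oos_select(y, xs))
# Bagging: average forecasts over bootstrap re-selections, smoothing the
# hard keep/drop decision that drives the risk of the selection schemes.
f_bag = np.mean([forecast(y[i], xs[i], x[T], aic_select(y[i], xs[i]))
                 for i in (rng.integers(0, T, T) for _ in range(200))])
print(f"AIC: {f_aic:.3f}  OOS: {f_oos:.3f}  Bagged: {f_bag:.3f}")
```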

3.
This paper develops a dynamic model of neighborhood choice along with a computationally light multi-step estimator. The proposed empirical framework captures observed and unobserved preference heterogeneity across households and locations in a flexible way. We estimate the model using a newly assembled data set that matches demographic information from mortgage applications to the universe of housing transactions in the San Francisco Bay Area from 1994 to 2004. The results provide the first estimates of the marginal willingness to pay for several non-marketed amenities—neighborhood air pollution, violent crime, and racial composition—in a dynamic framework. Comparing these estimates with those from a static version of the model highlights several important biases that arise when dynamic considerations are ignored.

4.
U.S. data reveal three facts: (1) the share of goods in total expenditure declines at a constant rate over time, (2) the price of goods relative to services declines at a constant rate over time, and (3) poor households spend a larger fraction of their budget on goods than do rich households. I provide a macroeconomic model with non-Gorman preferences that rationalizes these facts, along with the aggregate Kaldor facts. The model is parsimonious and admits an analytical solution. Its functional form allows a decomposition of U.S. structural change into an income and substitution effect. Estimates from micro data show each of these effects to be of roughly equal importance.

5.
The availability of high frequency financial data has generated a series of estimators based on intra-day data, improving the quality of large areas of financial econometrics. However, estimating the standard error of these estimators is often challenging. The root of the problem is that traditionally, standard errors rely on estimating a theoretically derived asymptotic variance, and often this asymptotic variance involves substantially more complex quantities than the original parameter to be estimated. Standard errors are important: they are used to assess the precision of estimators in the form of confidence intervals, to create “feasible statistics” for testing, to build forecasting models based on, say, daily estimates, and also to optimize the tuning parameters. The contribution of this paper is to provide an alternative and general solution to this problem, which we call Observed Asymptotic Variance. It is a general nonparametric method for assessing asymptotic variance (AVAR). It provides consistent estimators of AVAR for a broad class of integrated parameters Θ = ∫ θ_t dt, where the spot parameter process θ_t can be a general semimartingale, with continuous and jump components. The observed AVAR is implemented with the help of a two-scales method. Its construction works well in the presence of microstructure noise, and when the observation times are irregular or asynchronous in the multivariate case. The methodology is valid for a wide variety of estimators, including the standard ones for variance and covariance, and also for more complex estimators, such as those of leverage effects, high-frequency betas, and semivariance.
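The two-scales device is easiest to see in its best-known special case. Below is a minimal sketch of the two-scales realized variance of Zhang, Mykland, and Aït-Sahalia (2005), the kind of construction on which the observed AVAR builds; it is not the paper's AVAR estimator itself, and the simulated price and noise processes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def tsrv(p, K=30):
    """Two-scales realized variance from noisy log-prices p (length n+1)."""
    n = len(p) - 1
    rv_fast = np.sum(np.diff(p) ** 2)          # all returns: noise-dominated
    # Average of K subsampled (slow-scale) realized variances.
    rv_slow = np.mean([np.sum(np.diff(p[k::K]) ** 2) for k in range(K)])
    nbar = (n - K + 1) / K
    return rv_slow - (nbar / n) * rv_fast      # bias-correct for the noise

# Simulate one trading day: efficient log-price plus i.i.d. microstructure noise.
n, sigma, noise_sd = 23400, 0.2 / np.sqrt(252), 5e-4
x = np.cumsum(sigma / np.sqrt(n) * rng.standard_normal(n + 1))  # efficient price
p = x + noise_sd * rng.standard_normal(n + 1)                   # observed price

print(f"true IV  : {sigma**2:.3e}")
print(f"naive RV : {np.sum(np.diff(p)**2):.3e}  (blown up by noise)")
print(f"TSRV     : {tsrv(p, K=30):.3e}")
```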

6.
We consider nonparametric identification and estimation in a nonseparable model where a continuous regressor of interest is a known, deterministic, but kinked function of an observed assignment variable. We characterize a broad class of models in which a sharp “Regression Kink Design” (RKD or RK Design) identifies a readily interpretable treatment-on-the-treated parameter (Florens, Heckman, Meghir, and Vytlacil (2008)). We also introduce a “fuzzy regression kink design” generalization that allows for omitted variables in the assignment rule, noncompliance, and certain types of measurement errors in the observed values of the assignment variable and the policy variable. Our identifying assumptions give rise to testable restrictions on the distributions of the assignment variable and predetermined covariates around the kink point, similar to the restrictions delivered by Lee (2008) for the regression discontinuity design. Using a kink in the unemployment benefit formula, we apply a fuzzy RKD to empirically estimate the effect of benefit rates on unemployment durations in Austria.
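In the sharp design, the estimand is the change in the slope of the conditional mean of the outcome at the kink divided by the known change in the slope of the policy rule. Below is a minimal sketch of that ratio computed with crude one-sided linear fits; the data-generating process, bandwidth, and uniform kernel are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def side_slope(v, y, lo, hi):
    """Least-squares slope of y on v for observations with v in [lo, hi)."""
    m = (v >= lo) & (v < hi)
    X = np.column_stack([np.ones(m.sum()), v[m]])
    return np.linalg.lstsq(X, y[m], rcond=None)[0][1]

n, h = 30000, 0.5                        # sample size, bandwidth
v = rng.uniform(-2, 2, n)                # assignment variable
b = np.where(v < 0, 0.5 * v, 0.1 * v)    # kinked, deterministic policy rule
tau = 2.0                                # true causal effect of the policy
y = tau * b + 0.3 * v + 0.3 * rng.standard_normal(n)  # direct effect of v is smooth

dy = side_slope(v, y, 0.0, h) - side_slope(v, y, -h, 0.0)  # outcome slope change
db = 0.1 - 0.5                                             # known policy slope change
print(f"sharp RKD estimate: {dy / db:.3f}  (true effect {tau})")
```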

7.
In a service environment, a service provider needs to determine the amount and kinds of capacity to meet customers' needs over many periods. To make good decisions, she needs to know the probability distribution of her customers' demand in each period. We study a situation in which a customer's demand for a given service is random in each period, inelastic (or well modeled by that assumption), and cannot be delayed to the next period. This article presents a mechanism that allows a service provider to learn the distribution of a customer's demand by offering him a set of contracts through which he can partially prepay for future service at a reduced per-unit cost based on his anticipated needs. We describe the form of a set of contracts that will cause the customer to reveal his demand distribution as he minimizes his expected costs. To justify the effort of organizing and offering contracts, we present an application that demonstrates the cost savings the service provider obtains from better capacity planning using the truthfully elicited distribution.
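The elicitation logic can be illustrated with standard newsvendor reasoning (a sketch of the mechanism's spirit, not the paper's exact contract set): if a customer may prepay q units at unit price p1 and buy any overflow at a spot price p2 > p1, his cost-minimizing prepayment solves F(q*) = 1 - p1/p2, so a menu of price ratios traces out the quantiles of his private demand distribution F. The demand distribution below is an illustrative assumption.

```python
from scipy import stats

demand = stats.gamma(a=4, scale=5)        # customer's private demand distribution

def optimal_prepay(p1, p2):
    """Customer's cost-minimizing prepaid quantity for prices (p1, p2)."""
    # Minimizes p1*q + p2*E[(D - q)+]; first-order condition: F(q) = 1 - p1/p2.
    return demand.ppf(1.0 - p1 / p2)

p2 = 10.0                                 # spot (pay-as-you-go) unit price
for p1 in (8.0, 6.0, 4.0, 2.0):           # menu of prepay unit prices
    q = optimal_prepay(p1, p2)
    print(f"p1/p2 = {p1 / p2:.1f} -> prepaid q* = {q:6.2f} = F^-1({1 - p1 / p2:.1f})")
```

Each contract the customer picks from such a menu reveals one quantile of F, which is exactly the information the provider needs for capacity planning.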

8.
This paper presents a test of the exogeneity of a single explanatory variable in a multivariate model. It does not require the exogeneity of the other regressors or the existence of instrumental variables. The fundamental maintained assumption is that the model must be continuous in the explanatory variable of interest. This test has power when unobservable confounders are discontinuous with respect to the explanatory variable of interest, and it is particularly suitable for applications in which that variable has bunching points. An application of the test to the problem of estimating the effects of maternal smoking on birth weight shows evidence of remaining endogeneity, even after controlling for the most complete covariate specification in the literature.
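The intuition can be sketched numerically: if the outcome model is continuous in the explanatory variable but that variable bunches at a point, a gap between the mean outcome at the bunching point and its limit from above signals a discontinuous unobserved confounder. The toy data-generating process below is an illustrative assumption, not the paper's test statistic or inference procedure.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50000
u = rng.standard_normal(n)                              # unobserved confounder
x = np.maximum(u + rng.standard_normal(n) - 0.8, 0.0)   # bunching point at x = 0
y = 0.5 * x + u + 0.1 * rng.standard_normal(n)          # u makes x endogenous

h = 0.2                                                 # window just above the bunch
at_zero = y[x == 0].mean()
m = (x > 0) & (x < h)
X = np.column_stack([np.ones(m.sum()), x[m]])
limit_above = np.linalg.lstsq(X, y[m], rcond=None)[0][0]  # local-linear intercept at 0+
print(f"E[y|x=0] = {at_zero:.3f}, limit from above = {limit_above:.3f}, "
      f"jump = {limit_above - at_zero:.3f} (nonzero signals endogeneity)")
```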

9.
I introduce a model of undirected dyadic link formation which allows for assortative matching on observed agent characteristics (homophily) as well as unrestricted agent-level heterogeneity in link surplus (degree heterogeneity). As in fixed-effects panel data analyses, the joint distribution of observed and unobserved agent-level characteristics is left unrestricted. Two estimators for the (common) homophily parameter, β_0, are developed and their properties studied under an asymptotic sequence involving a single network growing large. The first, tetrad logit (TL), estimator conditions on a sufficient statistic for the degree heterogeneity. The second, joint maximum likelihood (JML), estimator treats the degree heterogeneity {A_i0}_{i=1}^N as additional (incidental) parameters to be estimated. The TL estimate is consistent under both sparse and dense graph sequences, whereas consistency of the JML estimate is shown only under dense graph sequences.
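A minimal simulation sketch of the model follows: links form with probability logistic(β_0·w_ij + A_i + A_j), where w_ij is a homophily measure and A_i is degree heterogeneity correlated with the observed characteristic. The joint MLE over (β, A_1, …, A_N) is computed by generic numerical optimization; this illustrates the JML route only (not the tetrad-logit conditioning), and all constants are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(3)
N, beta0 = 80, 1.0
x = rng.standard_normal(N)                    # observed agent characteristic
A = 0.5 * x + 0.3 * rng.standard_normal(N)    # degree heterogeneity, correlated with x
i, j = np.triu_indices(N, k=1)                # all dyads of the single network
w = -np.abs(x[i] - x[j])                      # homophily: similar agents link more
d = rng.binomial(1, expit(beta0 * w + A[i] + A[j]))   # observed links

def neg_loglik(theta):
    """Negative dyadic-logit log-likelihood in (beta, A_1..A_N)."""
    eta = theta[0] * w + theta[1:][i] + theta[1:][j]
    return -np.sum(d * eta - np.logaddexp(0.0, eta))

def neg_loglik_grad(theta):
    eta = theta[0] * w + theta[1:][i] + theta[1:][j]
    r = d - expit(eta)                        # dyad-level score residuals
    g = np.empty_like(theta)
    g[0] = -np.sum(r * w)
    g[1:] = -(np.bincount(i, weights=r, minlength=N)
              + np.bincount(j, weights=r, minlength=N))
    return g

fit = minimize(neg_loglik, np.zeros(1 + N), jac=neg_loglik_grad, method="L-BFGS-B")
print(f"JML estimate of beta0: {fit.x[0]:.3f}  (true {beta0})")
```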

10.
Intention theories, such as the Theory of Reasoned Action, the Theory of Planned Behavior, and the Technology Acceptance Model (TAM), have been widely adopted to explain information system usage. These theories, however, do not explicitly consider the availability of alternative systems that users may have access to and may have a preference for. Recent calls for advancing knowledge in technology acceptance have included the examination of selection among competing channels and extending the investigation beyond adoption of a single technology. In this study, we provide a theoretical extension to the TAM by integrating preferential decision knowledge into its constructs. The concepts of Attitude-Based Preference and Attribute-Based Preference are introduced to produce a new intention model, the Model of Technology Preference (MTP). MTP was validated in the context of alternative behaviors in adopting two service channels: a technology-based online store and a traditional brick-and-mortar store. A sample of 320 responses was used to estimate a structural equation model. Empirical results show that MTP is a powerful predictor of alternative behaviors. Furthermore, in the context of service channel selection, incorporating preferential decision knowledge into intention models can be used to develop successful business strategies.

11.
This note studies some seemingly anomalous results that arise in possibly misspecified, reduced-rank linear asset-pricing models estimated by the continuously updated generalized method of moments. When a spurious factor (that is, a factor that is uncorrelated with the returns on the test assets) is present, the test for correct model specification has asymptotic power that is equal to the nominal size. In other words, applied researchers will erroneously conclude that the model is correctly specified even when the degree of misspecification is arbitrarily large. The rejection probability of the test for overidentifying restrictions typically decreases further in underidentified models where the dimension of the null space is larger than 1.

12.
It is well known that the finite-sample properties of tests of hypotheses on the co-integrating vectors in vector autoregressive models can be quite poor, and that current solutions based on Bartlett-type corrections or on bootstraps built from unrestricted parameter estimators are unsatisfactory, particularly in those cases where asymptotic χ2 tests also fail most severely. In this paper, we solve this inference problem by showing the novel result that a bootstrap test in which the null hypothesis is imposed on the bootstrap sample is asymptotically valid. That is, not only does it have asymptotically correct size, but, in contrast to what is claimed in the existing literature, it is consistent under the alternative. Compared to the theory for bootstrap tests on the co-integration rank (Cavaliere, Rahbek, and Taylor, 2012), establishing the validity of the bootstrap for hypotheses on the co-integrating vectors requires new theoretical developments, including the introduction of multivariate Ornstein–Uhlenbeck processes with random (reduced rank) drift parameters. Finally, as documented by Monte Carlo simulations, the bootstrap test outperforms existing methods.
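The key recipe, imposing the null on the bootstrap sample, is shown below in a deliberately simple i.i.d. regression, where a residual bootstrap is built from null-restricted estimates; the paper's contribution is establishing validity of the analogous scheme in the far more delicate co-integrated VAR setting. The sample size and data-generating process are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def wald_slope(y, x):
    """Wald statistic for H0: slope = 0 in y = a + b*x + e."""
    X = np.column_stack([np.ones_like(x), x])
    b, rss = np.linalg.lstsq(X, y, rcond=None)[:2]
    vb = (rss[0] / (len(y) - 2)) * np.linalg.inv(X.T @ X)[1, 1]
    return b[1] ** 2 / vb

n, B = 40, 999
x = rng.standard_normal(n)
y = 0.3 * x + rng.standard_normal(n)
stat = wald_slope(y, x)

# The key step: re-estimate under H0 (intercept only), then resample the
# *restricted* residuals so the null holds in every bootstrap sample.
res0 = y - y.mean()
boot = np.array([wald_slope(y.mean() + rng.choice(res0, n, replace=True), x)
                 for _ in range(B)])
print(f"Wald = {stat:.2f}, restricted-bootstrap p-value = {np.mean(boot >= stat):.3f}")
```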

13.
This paper develops the fixed-smoothing asymptotics in a two-step generalized method of moments (GMM) framework. Under this type of asymptotics, the weighting matrix in the second-step GMM criterion function converges weakly to a random matrix and the two-step GMM estimator is asymptotically mixed normal. Nevertheless, the Wald statistic, the GMM criterion function statistic, and the Lagrange multiplier statistic remain asymptotically pivotal. It is shown that critical values from the fixed-smoothing asymptotic distribution are higher-order correct under the conventional increasing-smoothing asymptotics. When an orthonormal series covariance estimator is used, the critical values can be approximated very well by the quantiles of a noncentral F distribution. A simulation study shows that statistical tests based on the new fixed-smoothing approximation are much more accurate in size than existing tests.
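A minimal sketch of the orthonormal series covariance (long-run variance) estimator follows, in the simplest scalar case of testing a zero mean for an AR(1) series: project the demeaned series on K cosine basis functions and average the K squared coefficients. Under fixed-smoothing (fixed-K) asymptotics, the resulting t statistic is compared with Student-t(K) rather than normal critical values. The scalar setting and all constants are illustrative simplifications of the paper's GMM framework.

```python
import numpy as np
from scipy import stats

def series_lrv(u, K):
    """Orthonormal-series LRV: average of K basis-weighted squared sums."""
    T = len(u)
    s = (np.arange(1, T + 1) - 0.5) / T                   # mid-point grid on (0,1)
    lam = np.array([np.sqrt(2.0 / T) * np.sum(np.cos(np.pi * j * s) * u)
                    for j in range(1, K + 1)])
    return np.mean(lam ** 2)

rng = np.random.default_rng(5)
T, rho, K = 200, 0.7, 12
y = np.empty(T)
y[0] = rng.standard_normal()
for t in range(1, T):                                     # AR(1) with mean zero
    y[t] = rho * y[t - 1] + rng.standard_normal()

tstat = np.sqrt(T) * y.mean() / np.sqrt(series_lrv(y - y.mean(), K))
# Fixed-smoothing: use Student-t(K) critical values instead of N(0,1) ones.
print(f"t = {tstat:.2f}, fixed-K 5% critical value = {stats.t.ppf(0.975, K):.2f}")
```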

14.
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data-rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function-valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized control trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced-form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post-regularization and post-selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced-form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment-condition framework, which arises from structural equation models in econometrics. Here, too, the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions. We provide results on honest inference for (function-valued) parameters within this general framework where any high-quality, machine learning methods (e.g., boosted trees, deep neural networks, random forest, and their aggregated and hybrid versions) can be used to learn the nonparametric/high-dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity-based estimation of regression functions for function-valued outcomes.
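The role of orthogonal (doubly robust) moments can be sketched in the simplest exogenous-treatment case. Below, the ATE is estimated from the augmented inverse-propensity score with cross-fitting; the score is first-order insensitive to estimation error in the nuisance functions, which is what makes honest inference possible after machine learning. The random-forest learners, data-generating process, and fold count are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(6)
n, p, tau = 2000, 10, 1.0
X = rng.standard_normal((n, p))
prop = 1.0 / (1.0 + np.exp(-X[:, 0]))            # true propensity score
D = rng.binomial(1, prop)                        # treatment
Y = tau * D + X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.standard_normal(n)

psi = np.empty(n)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # Cross-fitting: nuisances fit on 'train', moment evaluated on 'test'.
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    ehat = clf.fit(X[train], D[train]).predict_proba(X[test])[:, 1].clip(0.01, 0.99)
    r1 = RandomForestRegressor(n_estimators=200, random_state=0)
    m1 = r1.fit(X[train][D[train] == 1], Y[train][D[train] == 1]).predict(X[test])
    r0 = RandomForestRegressor(n_estimators=200, random_state=0)
    m0 = r0.fit(X[train][D[train] == 0], Y[train][D[train] == 0]).predict(X[test])
    # Orthogonal (doubly robust) score for the ATE.
    psi[test] = (m1 - m0 + D[test] * (Y[test] - m1) / ehat
                 - (1 - D[test]) * (Y[test] - m0) / (1 - ehat))

ate, se = psi.mean(), psi.std(ddof=1) / np.sqrt(n)
print(f"ATE = {ate:.3f} +/- {1.96 * se:.3f} (true {tau})")
```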

15.
The resource-based view of the firm argues that the essence of decision making is to determine how firm and supply chain resources can be configured to achieve inimitable advantage and superior performance. However, combining resources found among diverse members of a supply chain requires higher levels of coordination than exist at most companies. Manifest cross-functional and interorganizational conflict impedes the relational advantages of collaboration. This research employs a multimethod (survey and interview) approach to evaluate collaboration's influence on operational and firm performance. Our findings show that collaboration, as a dynamic capability, mediates the conflict resulting from functional orientations and improves performance. Specific structural enablers that enhance an organization's collaborative capability are identified and described, providing insight into how firms can exploit interfirm resources for competitive advantage.

16.
Despite recent attention to closed-loop supply chains and remanufacturing, there is scant information about what drives the re-make versus buy decision for original equipment manufacturers (OEMs) engaging in remanufacturing. Based on the extant remanufacturing literature and transaction cost economics, we formulated hypotheses related to the drivers of in-house versus contracted remanufacturing operations. The hypotheses were investigated via quantitative and qualitative data, thus offering a rich test of the formulated relationships. Consistent with the theory, the quantitative results showed that intellectual property, operational assets, and remanufacturing frequency are significant drivers of the re-make versus buy decision. However, counter to the theory, the quantitative results did not support the significance of brand reputation, technological uncertainty, condition uncertainty, product complexity, and volume uncertainty. The qualitative results were used to enrich these findings by providing theoretical extensions and pragmatic insights into the re-make versus buy decision in remanufacturing.

17.
Store brands are of increasing importance in retail supply chains, often causing channel conflict, as the retailer's product directly competes with the manufacturer's national brand. Extant research on the resulting channel interactions assumes either that the national brand manufacturer can credibly commit to maintaining a wholesale price or that he lacks such ability. However, these two scenarios imply very different supply chain interactions, as only a national brand manufacturer with commitment ability can strategically adjust the national brand wholesale price to prevent a store brand introduction by the retailer. We specifically analyze the impact of this assumption on the manufacturer, the retailer, and the customers. We determine when long-term contracts that provide the manufacturer with such commitment ability can improve supply chain profitability.

18.
In this article, we analyze a location model where facilities may be subject to disruptions. Customers do not have advance information about whether a given facility is operational or not, and thus may have to visit several facilities before finding an operational one. The objective is to locate a set of facilities to minimize the total expected cost of customer travel. We decompose the total cost into travel, reliability, and information components. This decomposition allows us to put a value on the advance information about the states of facilities and compare it to the reliability and travel cost components, which allows a decision maker to evaluate which part of the system would benefit the most from improvements. The structure of optimal solutions is analyzed, with two interesting effects identified: facility centralization and co-location; both effects appear to be stronger than in the complete information case, where the status of each facility is known in advance.
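The decomposition can be illustrated for a single customer, as in the toy sketch below: facilities fail independently with probability q, a customer without advance information tries facilities in distance order (returning home after each failed visit, a simplifying assumption), and a fully informed customer travels straight to the nearest operational facility. The gap between the two expected costs is the value of information; the rare all-facilities-down event is ignored for brevity.

```python
import numpy as np

def expected_cost_no_info(d, q):
    """No advance info: try facilities in distance order; a failed visit to
    facility i costs a round trip 2*d_i, the successful visit costs d_j."""
    d = np.sort(np.asarray(d, dtype=float))
    probs = (1 - q) * q ** np.arange(len(d))       # first operational one is j
    wasted = 2 * np.concatenate([[0.0], np.cumsum(d)[:-1]])
    return np.sum(probs * (wasted + d))

def expected_cost_full_info(d, q):
    """Advance info: go straight to the nearest operational facility."""
    d = np.sort(np.asarray(d, dtype=float))
    probs = (1 - q) * q ** np.arange(len(d))
    return np.sum(probs * d)

d, q = [1.0, 3.0, 4.0], 0.3                        # facility distances, failure prob.
c_no, c_full = expected_cost_no_info(d, q), expected_cost_full_info(d, q)
print(f"no info: {c_no:.3f}  full info: {c_full:.3f}  "
      f"value of information: {c_no - c_full:.3f}")
```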
