Similar Articles
20 similar articles found.
1.
Using the intuition that financial markets transfer risks in business time, “market microstructure invariance” is defined as the hypotheses that the distributions of risk transfers (“bets”) and transaction costs are constant across assets when measured per unit of business time. The invariance hypotheses imply that bet size and transaction costs have specific, empirically testable relationships to observable dollar volume and volatility. Portfolio transitions can be viewed as natural experiments for measuring transaction costs, and individual orders can be treated as proxies for bets. Empirical tests based on a data set of 400,000+ portfolio transition orders support the invariance hypotheses. The constants calibrated from structural estimation imply specific predictions for the arrival rate of bets (“market velocity”), the distribution of bet sizes, and transaction costs.
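For context, a hedged sketch of the scaling laws that the invariance hypotheses are commonly taken to imply, written in terms of trading activity built from volatility, price, and volume; the exponents follow the invariance logic, while all proportionality constants (which come from the paper's structural calibration) are omitted:

```latex
% Sketch of commonly cited invariance implications (constants omitted).
% W: trading activity, gamma: bet arrival rate, Q: bet size, V: volume.
W \equiv \sigma \cdot P \cdot V, \qquad
\gamma \propto W^{2/3}, \qquad
\mathbb{E}\!\left[\frac{|Q|}{V}\right] \propto W^{-2/3}
```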

2.
We propose a semiparametric two‐step inference procedure for a finite‐dimensional parameter based on moment conditions constructed from high‐frequency data. The population moment conditions take the form of temporally integrated functionals of state‐variable processes that include the latent stochastic volatility process of an asset. In the first step, we nonparametrically recover the volatility path from high‐frequency asset returns. The nonparametric volatility estimator is then used to form sample moment functions in the second‐step GMM estimation, which requires the correction of a high‐order nonlinearity bias from the first step. We show that the proposed estimator is consistent and asymptotically mixed Gaussian and propose a consistent estimator for the conditional asymptotic variance. We also construct a Bierens‐type consistent specification test. These infill asymptotic results are based on a novel empirical‐process‐type theory for general integrated functionals of noisy semimartingale processes.
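A minimal sketch of the two-step idea: recover a spot-variance path by local realized variance over blocks, then plug it into a Riemann-sum moment. The paper's high-order nonlinearity bias correction and noise handling are omitted, and all names (`spot_variance`, `sample_moment`, the block size) are illustrative.

```python
import numpy as np

def spot_variance(returns, dt, block=30):
    """Step 1 (sketch): spot-variance path via local realized variance
    over non-overlapping blocks of `block` returns."""
    n = len(returns) // block
    rv = np.array([np.sum(returns[i * block:(i + 1) * block] ** 2)
                   for i in range(n)])
    return rv / (block * dt)

def sample_moment(returns, dt, g, block=30):
    """Step 2 (sketch): Riemann-sum approximation of the integrated
    moment (1/T) * integral of g(sigma_t^2) dt, using the step-1 path.
    The paper's high-order nonlinearity bias correction is omitted."""
    sig2 = spot_variance(returns, dt, block)
    T = (len(returns) // block) * block * dt
    return np.sum(g(sig2)) * block * dt / T

# Example: approximate (1/T) * integral of sigma_t^4 dt on one day of
# 1-second returns with constant volatility 0.2 (target is 0.04^2 = 0.0016).
rng = np.random.default_rng(0)
r = 0.2 * np.sqrt(1 / 23400) * rng.standard_normal(23400)
print(sample_moment(r, 1 / 23400, lambda v: v ** 2))
```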

3.
Jump Regressions     
We develop econometric tools for studying jump dependence of two processes from high‐frequency observations on a fixed time interval. In this context, only segments of data around a few outlying observations are informative for the inference. We derive an asymptotically valid test for stability of a linear jump relation over regions of the jump size domain. The test has power against general forms of nonlinearity in the jump dependence as well as temporal instabilities. We further propose an efficient estimator for the linear jump regression model that is formed by optimally weighting the detected jumps with weights based on the diffusive volatility around the jump times. We derive the asymptotic limit of the estimator, a semiparametric lower efficiency bound for the linear jump regression, and show that our estimator attains the latter. The analysis covers both deterministic and random jump arrivals. In an empirical application, we use the developed inference techniques to test the temporal stability of market jump betas.
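A hedged sketch of the two ingredients: a simple truncation-based jump detector and an inverse-local-variance weighted slope in the spirit of the optimal weighting described above. Thresholds, window choices, and function names are illustrative, not the paper's exact construction.

```python
import numpy as np

def local_variance(returns, window=50):
    """Rolling mean of squared returns (includes the point itself; a
    refinement would exclude it and truncate large returns)."""
    return np.convolve(returns ** 2, np.ones(window) / window, mode="same")

def jump_beta(ret_x, ret_y, c=4.0, window=50):
    """Sketch of a weighted linear jump regression: detect x-jumps by
    truncation at c local standard deviations, then compute the slope of
    y-jumps on x-jumps with inverse-local-variance weights, echoing the
    diffusive-volatility-based weighting in the abstract."""
    jumps = np.abs(ret_x) > c * np.sqrt(local_variance(ret_x, window))
    w = 1.0 / local_variance(ret_y, window)[jumps]
    x, y = ret_x[jumps], ret_y[jumps]
    return np.sum(w * x * y) / np.sum(w * x * x)  # weighted LS through origin
```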

4.
The leverage effect is pervasive in risk management and is an important topic in financial econometrics. High-frequency financial markets carry rich trading information, and not all of it can be treated as random noise; it is therefore important to study the leverage effect in a model that exploits market trading information while allowing for random noise. This paper studies the leverage effect in a model that combines market trading information with random microstructure noise and proposes a new leverage-effect estimator with convergence rate n^{1/8}; the estimator's variance and the associated limit theorems are also derived. Simulations show that exploiting broad market-microstructure information yields more efficient and more precise estimates of the leverage effect: the proposed estimator exhibits better asymptotic normality and smaller bias. Finally, the estimator is applied in an empirical study, where the leverage effect is found to have a significant impact on forecasting next-day volatility.
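For intuition, a minimal block-based leverage-effect sketch: the covariation between returns and spot-variance increments. This is a generic textbook-style construction, not the paper's noise- and trading-information-adjusted estimator; the function name and block size are illustrative.

```python
import numpy as np

def leverage_effect(returns, dt, block=300):
    """Sketch of a block-based leverage-effect estimate: estimate spot
    variance per block from realized variance, then accumulate the
    covariation between block returns and subsequent spot-variance
    increments (typically negative for equities)."""
    n = len(returns) // block
    sig2 = np.array([np.sum(returns[i * block:(i + 1) * block] ** 2)
                     for i in range(n)]) / (block * dt)
    blk_ret = np.array([np.sum(returns[i * block:(i + 1) * block])
                        for i in range(n)])
    return np.sum(np.diff(sig2) * blk_ret[:-1])
```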

5.
We present a methodology for estimating the distributional effects of an endogenous treatment that varies at the group level when there are group‐level unobservables, a quantile extension of Hausman and Taylor (1981). Because of the presence of group‐level unobservables, standard quantile regression techniques are inconsistent in our setting even if the treatment is independent of unobservables. In contrast, our estimation technique is consistent as well as computationally simple, consisting of group‐by‐group quantile regression followed by two‐stage least squares. Using the Bahadur representation of quantile estimators, we derive weak conditions on the growth of the number of observations per group that are sufficient for consistency and asymptotic zero‐mean normality of our estimator. As in Hausman and Taylor (1981), micro‐level covariates can be used as internal instruments for the endogenous group‐level treatment if they satisfy relevance and exogeneity conditions. Our approach applies to a broad range of settings including labor, public finance, industrial organization, urban economics, and development; we illustrate its usefulness with several such examples. Finally, an empirical application of our estimator finds that low‐wage earners in the United States from 1990 to 2007 were significantly more affected by increased Chinese import competition than high‐wage earners.
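A minimal sketch of the two-step procedure under an assumed data layout (one dict per group with micro outcomes `y`, a micro covariate `x`, and a scalar group-level treatment `d`); the instrument choice and data structure are illustrative, not the paper's exact setup.

```python
import numpy as np
import statsmodels.api as sm

def group_quantile_2sls(groups, tau=0.5):
    """Step 1: quantile regression group by group to obtain group-level
    quantile intercepts at quantile tau.  Step 2: just-identified 2SLS
    of those intercepts on the group-level treatment, using the group
    mean of the micro covariate as an internal instrument."""
    a, D, Z = [], [], []
    for g in groups:
        X = sm.add_constant(g["x"])              # micro covariate + constant
        fit = sm.QuantReg(g["y"], X).fit(q=tau)
        a.append(fit.params[0])                  # group quantile intercept
        D.append(g["d"])                         # endogenous group treatment
        Z.append(g["x"].mean())                  # internal instrument
    a, D, Z = map(np.asarray, (a, D, Z))
    W = np.column_stack([np.ones_like(D), D])
    Zm = np.column_stack([np.ones_like(Z), Z])
    # just-identified 2SLS: beta = (Z'W)^{-1} Z'a
    return np.linalg.solve(Zm.T @ W, Zm.T @ a)
```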

6.
This paper introduces time‐varying grouped patterns of heterogeneity in linear panel data models. A distinctive feature of our approach is that group membership is left unrestricted. We estimate the parameters of the model using a “grouped fixed‐effects” estimator that minimizes a least squares criterion with respect to all possible groupings of the cross‐sectional units. Recent advances in the clustering literature allow for fast and efficient computation. We provide conditions under which our estimator is consistent as both dimensions of the panel tend to infinity, and we develop inference methods. Finally, we allow for grouped patterns of unobserved heterogeneity in the study of the link between income and democracy across countries.
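A k-means-style sketch of the grouped fixed-effects iteration for a single regressor; real implementations use many random restarts plus the clustering speedups mentioned above, and all names here are illustrative.

```python
import numpy as np

def grouped_fe(Y, X, G, n_iter=100, seed=0):
    """Sketch of a 'grouped fixed-effects' estimator for
    y_it = theta * x_it + a_{g(i), t}: alternate between (i) updating
    the G group time profiles as within-group means of y - theta*x,
    (ii) reassigning each unit to the profile with the smallest SSR,
    and (iii) re-estimating theta by pooled OLS.  Y, X: (N, T)."""
    rng = np.random.default_rng(seed)
    N, T = Y.shape
    g, theta = rng.integers(0, G, size=N), 0.0
    for _ in range(n_iter):
        U = Y - theta * X
        # group profiles; re-seed any group that became empty
        A = np.vstack([U[g == k].mean(axis=0) if np.any(g == k)
                       else U[rng.integers(N)] for k in range(G)])
        g = np.array([np.argmin(((U[i] - A) ** 2).sum(axis=1))
                      for i in range(N)])
        theta = (X * (Y - A[g])).sum() / (X * X).sum()
    return theta, g, A
```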

7.
This paper examines some of the recent literature on the estimation of production functions. We focus on techniques suggested in two recent papers, Olley and Pakes (1996) and Levinsohn and Petrin (2003). While there are some solid and intuitive identification ideas in these papers, we argue that the techniques can suffer from functional dependence problems. We suggest an alternative approach that is based on the ideas in these papers, but does not suffer from the functional dependence problems and produces consistent estimates under alternative data generating processes for which the original procedures do not.

8.
We develop a new quantile‐based panel data framework to study the nature of income persistence and the transmission of income shocks to consumption. Log‐earnings are the sum of a general Markovian persistent component and a transitory innovation. The persistence of past shocks to earnings is allowed to vary according to the size and sign of the current shock. Consumption is modeled as an age‐dependent nonlinear function of assets, unobservable tastes, and the two earnings components. We establish the nonparametric identification of the nonlinear earnings process and of the consumption policy rule. Exploiting the enhanced consumption and asset data in recent waves of the Panel Study of Income Dynamics, we find that the earnings process features nonlinear persistence and conditional skewness. We confirm these results using population register data from Norway. We then show that the impact of earnings shocks varies substantially across earnings histories, and that this nonlinearity drives heterogeneous consumption responses. The framework provides new empirical measures of partial insurance in which the transmission of income shocks to consumption varies systematically with assets, the level of the shock, and the history of past shocks.

9.
In this paper, we study the least squares (LS) estimator in a linear panel regression model with unknown number of factors appearing as interactive fixed effects. Assuming that the number of factors used in estimation is larger than the true number of factors in the data, we establish the limiting distribution of the LS estimator for the regression coefficients as the number of time periods and the number of cross‐sectional units jointly go to infinity. The main result of the paper is that under certain assumptions, the limiting distribution of the LS estimator is independent of the number of factors used in the estimation as long as this number is not underestimated. The important practical implication of this result is that for inference on the regression coefficients, one does not necessarily need to estimate the number of interactive fixed effects consistently.
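A minimal sketch of the least-squares iteration for a single regressor, alternating OLS with a principal-components step that extracts R factors; R may deliberately overstate the true number of factors, which is exactly the setting the abstract studies. Names are illustrative.

```python
import numpy as np

def interactive_fe_ls(Y, X, R, n_iter=100):
    """LS with interactive fixed effects (sketch of the usual
    alternation): given R factors, iterate between (i) the top-R
    principal components of Y - beta*X and (ii) OLS of the
    factor-purged panel on X.  Y, X: (N, T)."""
    beta = 0.0
    for _ in range(n_iter):
        U = Y - beta * X                      # residual panel
        u, s, vt = np.linalg.svd(U, full_matrices=False)
        FL = u[:, :R] * s[:R] @ vt[:R]        # rank-R factor component
        beta = (X * (Y - FL)).sum() / (X * X).sum()
    return beta, FL
```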

10.
This paper develops a theory of randomization tests under an approximate symmetry assumption. Randomization tests provide a general means of constructing tests that control size in finite samples whenever the distribution of the observed data exhibits symmetry under the null hypothesis. Here, by exhibits symmetry we mean that the distribution remains invariant under a group of transformations. In this paper, we provide conditions under which the same construction can be used to construct tests that asymptotically control the probability of a false rejection whenever the distribution of the observed data exhibits approximate symmetry in the sense that the limiting distribution of a function of the data exhibits symmetry under the null hypothesis. An important application of this idea is in settings where the data may be grouped into a fixed number of “clusters” with a large number of observations within each cluster. In such settings, we show that the distribution of the observed data satisfies our approximate symmetry requirement under weak assumptions. In particular, our results allow for the clusters to be heterogeneous and also have dependence not only within each cluster, but also across clusters. This approach enjoys several advantages over other approaches in these settings.
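A self-contained sketch for the clustered setting: treat the vector of cluster-level statistics as approximately symmetric under sign changes, enumerate all 2^q sign flips, and read off a randomization p-value. The statistic and data below are illustrative.

```python
import numpy as np
from itertools import product

def cluster_sign_test(cluster_stats, alpha=0.05):
    """Randomization test based on approximate symmetry: with a fixed,
    small number q of clusters, compare the observed |mean| of the
    cluster statistics with its distribution over all 2^q sign flips
    (the group of transformations here is sign changes)."""
    s = np.asarray(cluster_stats, dtype=float)
    T_obs = abs(s.mean())
    T_all = np.array([abs((s * np.array(eps)).mean())
                      for eps in product([-1.0, 1.0], repeat=len(s))])
    p_value = (T_all >= T_obs).mean()   # randomization p-value
    return T_obs, p_value, p_value <= alpha

# e.g., q = 8 clusters, one estimate of the parameter per cluster:
print(cluster_sign_test([0.9, 1.4, 0.7, 1.1, 0.8, 1.3, 0.5, 1.0]))
```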

11.
We document abrupt increases in retail beer prices just after the consummation of the MillerCoors joint venture, both for MillerCoors and its major competitor, Anheuser‐Busch. Within the context of a differentiated‐products pricing model, we test and reject the hypothesis that the price increases can be explained by movement from one Nash–Bertrand equilibrium to another. Counterfactual simulations imply that prices after the joint venture are 6%–8% higher than they would have been with Nash–Bertrand competition, and that markups are 17%–18% higher. We relate the results to documentary evidence that the joint venture may have facilitated price coordination.
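To illustrate the kind of counterfactual involved, a stylized Nash–Bertrand pricing fixed point for single-product logit firms; the paper's demand system is richer (differentiated multi-product firms), and every parameter below is made up.

```python
import numpy as np

def logit_shares(p, delta, alpha):
    """Logit market shares with an outside good; utility delta_j - alpha*p_j."""
    u = np.exp(delta - alpha * p)
    return u / (1.0 + u.sum())

def bertrand_prices(delta, alpha, cost, tol=1e-10, max_iter=10000):
    """Sketch of a Nash-Bertrand counterfactual: iterate the single-product
    logit first-order condition p_j = c_j + 1 / (alpha * (1 - s_j)) to a
    fixed point, then compare against observed prices."""
    p = cost + 1.0
    for _ in range(max_iter):
        s = logit_shares(p, delta, alpha)
        p_new = cost + 1.0 / (alpha * (1.0 - s))
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p, s

# illustrative parameters only
p, s = bertrand_prices(delta=np.array([2.0, 1.5, 1.0]),
                       alpha=0.5, cost=np.array([1.0, 1.0, 1.2]))
```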

12.
The potential for cannibalization of new product sales by remanufactured versions of the same product is a central issue in the continuing development of closed‐loop supply chains. Practitioners have no fact‐based information to guide practice at firms and academics have no studies available to use as the basis for assumptions in models. We address the cannibalization issue by using auctions to determine consumers’ willingness to pay (WTP) for both new and remanufactured products. The auctions also allow us to better understand the potential impact of offering new and remanufactured products at the same time, which provides insights into the potential for new product cannibalization. Our results indicate that, for the consumer and commercial products auctioned, there is a clear difference in WTP for new and remanufactured goods. For the consumer product, there is scant overlap in bidders between the new and remanufactured products, leading us to conclude that the risk of cannibalization in this case is minimal. For the commercial product, there is evidence of overlap in bidding behavior, exposing the potential for cannibalization.

13.
Deviations from requirements during the product development process can be considered glitches. Fixing glitches, or problems, during the product development process consumes valuable resources, which may adversely affect product development time and hamper the firm's goal to pursue a first‐mover advantage. It is posited that an integrated organizational response can diminish incidences of glitches and improve the ability of the firm to respond to engineering changes, subsequently leading to improved market success. This organizational response frequently includes heavyweight product development managers who are seen as essential catalysts for internal integration. Though internal integration is vital, it is equally important to integrate with customers and suppliers alike because such network partners can provide access to information, knowledge, and unique and complementary resources that are otherwise unavailable to the firm. Findings, which are based on a sample of 191 product development projects in the automotive industry, suggest that some integration routines have a positive impact on product development outcomes and market success, while other routines can in fact hamper the collective effort.

14.
While more and more firms have implemented e‐business in business operations, a better understanding of the factors that successfully drive the assimilation of e‐business will provide insights for firm executives and practitioners to develop effective strategies for e‐business. Different from previous studies that focus on individual‐level factors related to business executives and top management teams, this study examines how firm‐level strategic and cultural factors shape e‐business assimilation. Based on the strategy and marketing literature on market orientation and firm ownership, we developed a research model to describe how a firm's market orientation impacts e‐business assimilation. The model also describes the moderating effect of firm ownership type on the relationship between market orientation and e‐business assimilation. Based on data from 301 Chinese international trade firms, we found that two dimensions of market orientation (i.e., customer orientation, competitor orientation) had significant effects on e‐business assimilation. However, the effect of the third dimension, interfunctional coordination, was only partially significant. In addition, ownership type significantly moderated the effects of customer orientation and competitor orientation on e‐business assimilation, although it did not moderate the effect of interfunctional coordination. As one of the first studies of the impact of market orientation and firm ownership type on e‐business assimilation, this study concludes with a discussion of the implications for future research and practice.

15.
Owing to the worldwide shortage of deceased‐donor organs for transplantation, living donations have become a significant source of transplant organs. However, not all willing donors can donate to their intended recipients because of medical incompatibilities. These incompatibilities can be overcome by an exchange of donors between patients. For kidneys, such exchanges have become widespread in the last decade with the introduction of optimization and market design techniques to kidney exchange. A small but growing number of liver exchanges have also been conducted. Over the last two decades, a number of transplantation procedures emerged where organs from two living donors are transplanted to a single patient. Prominent examples include dual‐graft liver transplantation, lobar lung transplantation, and simultaneous liver‐kidney transplantation. Exchange, however, has been neither practiced nor introduced in this context. We introduce dual‐donor organ exchange as a novel transplantation modality, and through simulations show that living‐donor transplants can be significantly increased through such exchanges. We also provide a simple theoretical model for dual‐donor organ exchange and introduce optimal exchange mechanisms under various logistical constraints.
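For background, a sketch of the classic single-donor building block that the proposed dual-donor mechanisms generalize: maximum pairwise kidney exchange as a matching problem. The `compatible` predicate is a hypothetical stand-in for medical compatibility data.

```python
import networkx as nx

def pairwise_exchange(pairs, compatible):
    """Maximum two-way exchange (the classic kidney-exchange building
    block).  Each node is a patient-donor pair; an edge between pairs i
    and j means each pair's donor is compatible with the other pair's
    patient.  A maximum-cardinality matching then gives the largest
    number of simultaneous two-way exchanges."""
    G = nx.Graph()
    G.add_nodes_from(pairs)
    for i in pairs:
        for j in pairs:
            if i < j and compatible(i, j) and compatible(j, i):
                G.add_edge(i, j)
    return nx.max_weight_matching(G, maxcardinality=True)
```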

16.
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data‐rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function‐valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized control trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced‐form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post‐regularization and post‐selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced‐form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment‐condition framework, which arises from structural equation models in econometrics. Here, too, the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions. We provide results on honest inference for (function‐valued) parameters within this general framework where any high‐quality, machine learning methods (e.g., boosted trees, deep neural networks, random forest, and their aggregated and hybrid versions) can be used to learn the nonparametric/high‐dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity‐based estimation of regression functions for function‐valued outcomes.
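A compact sketch of the orthogonal (doubly robust) moment for the ATE under unconfoundedness, with regularized learners standing in for the approximately sparse reduced forms; the paper's cross-fitting, honest bands, and functional extensions are omitted, and the learners are interchangeable.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LassoCV

def aipw_ate(y, d, X):
    """Doubly robust ATE sketch: combine outcome regressions with
    inverse propensity weighting via the orthogonal (AIPW) score.
    y: outcomes, d: binary treatment (0/1), X: controls."""
    ps = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]
    ps = np.clip(ps, 0.01, 0.99)            # trim extreme propensities
    mu1 = LassoCV(cv=5).fit(X[d == 1], y[d == 1]).predict(X)
    mu0 = LassoCV(cv=5).fit(X[d == 0], y[d == 0]).predict(X)
    psi = (mu1 - mu0
           + d * (y - mu1) / ps
           - (1 - d) * (y - mu0) / (1 - ps))  # orthogonal score
    return psi.mean(), psi.std(ddof=1) / np.sqrt(len(y))
```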

17.
This paper provides a novel mechanism for identifying and estimating latent group structures in panel data using penalized techniques. We consider both linear and nonlinear models where the regression coefficients are heterogeneous across groups but homogeneous within a group and the group membership is unknown. Two approaches are considered: penalized profile likelihood (PPL) estimation for the general nonlinear models without endogenous regressors, and penalized GMM (PGMM) estimation for linear models with endogeneity. In both cases, we develop a new variant of Lasso called classifier‐Lasso (C‐Lasso) that serves to shrink individual coefficients to the unknown group‐specific coefficients. C‐Lasso achieves simultaneous classification and consistent estimation in a single step and the classification exhibits the desirable property of uniform consistency. For PPL estimation, C‐Lasso also achieves the oracle property so that group‐specific parameter estimators are asymptotically equivalent to infeasible estimators that use individual group identity information. For PGMM estimation, the oracle property of C‐Lasso is preserved in some special cases. Simulations demonstrate good finite‐sample performance of the approach in both classification and estimation. Empirical applications to both linear and nonlinear models are presented.
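A sketch of the C-Lasso criterion in the linear PPL case: the distinctive ingredient is the multiplicative penalty, whose product-of-distances form shrinks each individual coefficient all the way to one of the K group coefficients. Array shapes and the tuning constant are illustrative.

```python
import numpy as np

def classo_objective(beta, alpha, Y, X, lam):
    """Penalized least-squares criterion (sketch): average individual
    SSRs plus lam times the mean over units of the *product* of
    distances from beta_i to each candidate group coefficient alpha_k.
    beta: (N, p) individual coefficients; alpha: (K, p) group
    coefficients; Y: (N, T); X: (N, T, p)."""
    N, T, _ = X.shape
    ssr = sum(((Y[i] - X[i] @ beta[i]) ** 2).sum() for i in range(N)) / (N * T)
    dist = np.linalg.norm(beta[:, None, :] - alpha[None, :, :], axis=2)  # (N, K)
    penalty = lam * np.prod(dist, axis=1).mean()
    return ssr + penalty
```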

18.
The availability of high-frequency financial data has generated a series of estimators based on intra‐day data, improving the quality of large areas of financial econometrics. However, estimating the standard error of these estimators is often challenging. The root of the problem is that traditionally, standard errors rely on estimating a theoretically derived asymptotic variance, and often this asymptotic variance involves substantially more complex quantities than the original parameter to be estimated. Standard errors are important: they are used to assess the precision of estimators in the form of confidence intervals, to create “feasible statistics” for testing, to build forecasting models based on, say, daily estimates, and also to optimize the tuning parameters. The contribution of this paper is to provide an alternative and general solution to this problem, which we call Observed Asymptotic Variance. It is a general nonparametric method for assessing asymptotic variance (AVAR). It provides consistent estimators of AVAR for a broad class of integrated parameters Θ = ∫ θ_t dt, where the spot parameter process θ can be a general semimartingale, with continuous and jump components. The observed AVAR is implemented with the help of a two‐scales method. Its construction works well in the presence of microstructure noise, and when the observation times are irregular or asynchronous in the multivariate case. The methodology is valid for a wide variety of estimators, including the standard ones for variance and covariance, and also for more complex estimators, such as estimators of leverage effects, high-frequency betas, and semivariance.
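A heuristic sketch of the two-scales flavor: within each block, the block statistic minus the sum of its two half-block statistics cancels the common integrated target and leaves estimation noise, whose rescaled variance tracks the asymptotic variance. The scaling at the end is a placeholder, not the paper's exact formula, and `stat` is a user-supplied block estimator.

```python
import numpy as np

def observed_avar_sketch(stat, data, K):
    """Two-scales-style AVAR sketch: for each block of K observations,
    difference the full-block statistic against the sum of the two
    half-block statistics (this cancels the integrated target), then
    rescale the empirical variance of those differences."""
    n = len(data)
    diffs = []
    for start in range(0, n - K + 1, K):
        full = stat(data[start:start + K])
        half = (stat(data[start:start + K // 2])
                + stat(data[start + K // 2:start + K]))
        diffs.append(full - half)
    return n * np.mean(np.square(diffs))   # illustrative scaling only

# e.g., stat = realized variance of a return segment:
rv = lambda seg: np.sum(np.asarray(seg) ** 2)
```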

19.
We present a flexible and scalable method for computing global solutions of high‐dimensional stochastic dynamic models. Within a time iteration or value function iteration setup, we interpolate functions using an adaptive sparse grid algorithm. With increasing dimensions, sparse grids grow much more slowly than standard tensor product grids. Moreover, adaptivity adds a second layer of sparsity, as grid points are added only where they are most needed, for instance, in regions with steep gradients or at nondifferentiabilities. To further speed up the solution process, our implementation is fully hybrid parallel, combining distributed and shared memory parallelization paradigms, and thus permits an efficient use of high‐performance computing architectures. To demonstrate the broad applicability of our method, we solve two very different types of dynamic models: first, high‐dimensional international real business cycle models with capital adjustment costs and irreversible investment; second, multiproduct menu‐cost models with temporary sales and economies of scope in price setting.
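A one-dimensional sketch of the adaptivity: hierarchical hat functions are added only as children of nodes whose surplus exceeds a tolerance, so grid points cluster near kinks and steep gradients. Genuine sparse grids combine such 1-D hierarchies across dimensions (and handle boundaries); names and defaults here are illustrative.

```python
import numpy as np

def interp(x, nodes):
    """Evaluate the hierarchical hat-function interpolant at x."""
    return sum(s * max(0.0, 1.0 - abs(x - c) / w) for c, w, s in nodes)

def adaptive_hierarchical_1d(f, tol=1e-3, max_level=12):
    """Adaptive hierarchization on [0, 1] (sketch): at each level, store
    the hierarchical surplus f(c) - interp(c) and refine a node (spawn
    its two children at half the width) only if |surplus| > tol."""
    nodes = []                      # (center, width, surplus)
    frontier = [(0.5, 0.5)]         # level-1 node
    for _ in range(max_level):
        next_frontier = []
        for c, w in frontier:
            s = f(c) - interp(c, nodes)      # hierarchical surplus
            nodes.append((c, w, s))
            if abs(s) > tol:                 # refine only where needed
                next_frontier += [(c - w / 2, w / 2), (c + w / 2, w / 2)]
        frontier = next_frontier
        if not frontier:
            break
    return nodes

# e.g., a kinked function: points concentrate near the kink at 0.3
nodes = adaptive_hierarchical_1d(lambda x: abs(x - 0.3))
```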

20.
A fundamental characteristic of any innovation is its novelty, the newness or freshness of the innovation in the eyes of the adopter. Past research has often considered novelty to be inherent to an information technology (IT) innovation, yet it is also likely that perceptions of novelty differ widely across individuals. Nevertheless, the role that the novelty of an IT innovation plays in adoption is not well understood. The primary goal of this research effort is to frame the perceived novelty of an IT innovation as a salient affective belief in the nomological network related to adoption. Further, we examine how perceived novelty influences the way individuals reconcile their perceptions of risk versus reward when considering the adoption of an IT innovation. Two empirical studies with 424 and 138 participants, respectively, examine the effect of perceived novelty on IT innovations from a risk/reward perspective. Results indicate that perceived novelty is a salient affective belief that plays a significant role in the adoption of IT innovations. Implications for both theory and organizational decision making are examined.
