Similar Literature
20 similar articles found (search time: 31 ms)
1.
We demonstrate the asymptotic equivalence between commonly used test statistics for out‐of‐sample forecasting performance and conventional Wald statistics. This equivalence greatly simplifies the computational burden of calculating recursive out‐of‐sample test statistics and their critical values. For the case with nested models, we show that the limit distribution, which has previously been expressed through stochastic integrals, has a simple representation in terms of χ²‐distributed random variables and we derive its density. We also generalize the limit theory to cover local alternatives and characterize the power properties of the test.
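
A minimal Python simulation sketch of the two objects the paper connects: a recursive out‐of‐sample MSE statistic for nested models (Clark–McCracken style) and the full‐sample Wald statistic for the added regressor. The DGP, window sizes, and names are invented for illustration, not the authors' construction; the paper's point is that tests based on the first statistic are asymptotically equivalent to Wald‐type tests, so the recursive computation below can be avoided.

```python
import numpy as np

rng = np.random.default_rng(0)
T, R = 400, 200                          # sample size, initial estimation window
x = rng.normal(size=T)
y = 0.3 * x + rng.normal(size=T)         # larger of the two nested models is true

# recursive out-of-sample forecasts: small model (mean) vs. large model (adds x)
e0, e1 = [], []
for t in range(R, T - 1):
    f0 = y[:t + 1].mean()                                  # small-model forecast
    X = np.column_stack([np.ones(t + 1), x[:t + 1]])
    b = np.linalg.lstsq(X, y[:t + 1], rcond=None)[0]
    f1 = b[0] + b[1] * x[t + 1]                            # large-model forecast
    e0.append(y[t + 1] - f0)
    e1.append(y[t + 1] - f1)
e0, e1 = np.array(e0), np.array(e1)
P = len(e0)
oos_stat = P * (np.mean(e0**2) - np.mean(e1**2)) / np.mean(e1**2)

# conventional full-sample Wald statistic for excluding x
X = np.column_stack([np.ones(T), x])
b, ssr = np.linalg.lstsq(X, y, rcond=None)[:2]
V = (ssr[0] / (T - 2)) * np.linalg.inv(X.T @ X)
wald = b[1] ** 2 / V[1, 1]
print(f"recursive OOS statistic: {oos_stat:.1f}, Wald: {wald:.1f}")
```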

2.
In the regression‐discontinuity (RD) design, units are assigned to treatment based on whether their value of an observed covariate exceeds a known cutoff. In this design, local polynomial estimators are now routinely employed to construct confidence intervals for treatment effects. The performance of these confidence intervals in applications, however, may be seriously hampered by their sensitivity to the specific bandwidth employed. Available bandwidth selectors typically yield a “large” bandwidth, leading to data‐driven confidence intervals that may be biased, with empirical coverage well below their nominal target. We propose new theory‐based, more robust confidence interval estimators for average treatment effects at the cutoff in sharp RD, sharp kink RD, fuzzy RD, and fuzzy kink RD designs. Our proposed confidence intervals are constructed using a bias‐corrected RD estimator together with a novel standard error estimator. For practical implementation, we discuss mean squared error optimal bandwidths, which are by construction not valid for conventional confidence intervals but are valid with our robust approach, and consistent standard error estimators based on our new variance formulas. In a special case of practical interest, our procedure amounts to running a quadratic instead of a linear local regression. More generally, our results give a formal justification to simple inference procedures based on increasing the order of the local polynomial estimator employed. We find in a simulation study that our confidence intervals exhibit close‐to‐correct empirical coverage and good empirical interval length on average, remarkably improving upon the alternatives available in the literature. All results are readily available in R and STATA using our companion software packages described in Calonico, Cattaneo, and Titiunik (2014d, 2014b).
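
The abstract names the authors' own companion R and Stata packages; the plain numpy sketch below only illustrates the special case highlighted in the text, where robust inference amounts to running a local quadratic instead of a local linear regression at the cutoff. The DGP, uniform kernel, and bandwidth are invented assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, tau = 2000, 0.5                               # tau: true effect at the cutoff (0)
x = rng.uniform(-1, 1, n)                        # running variable
curv = np.where(x >= 0, 4.0, 1.0)                # different curvature on each side
y = 1 + 2 * x + curv * x**2 + tau * (x >= 0) + rng.normal(0, 0.5, n)

def rd_local_poly(x, y, h, p):
    """Sharp-RD estimate: order-p polynomial fit on each side within bandwidth h
    (uniform kernel); treatment effect = difference of intercepts at the cutoff."""
    est = {}
    for side, mask in [("R", (x >= 0) & (x < h)), ("L", (x < 0) & (x > -h))]:
        X = np.vander(x[mask], p + 1, increasing=True)     # [1, x, ..., x^p]
        est[side] = np.linalg.lstsq(X, y[mask], rcond=None)[0][0]
    return est["R"] - est["L"]

h = 0.5                                          # a deliberately "large" bandwidth
print("local linear:   ", rd_local_poly(x, y, h, p=1))   # biased by curvature
print("local quadratic:", rd_local_poly(x, y, h, p=2))   # higher order absorbs the bias
```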

3.
We propose a semiparametric two‐step inference procedure for a finite‐dimensional parameter based on moment conditions constructed from high‐frequency data. The population moment conditions take the form of temporally integrated functionals of state‐variable processes that include the latent stochastic volatility process of an asset. In the first step, we nonparametrically recover the volatility path from high‐frequency asset returns. The nonparametric volatility estimator is then used to form sample moment functions in the second‐step GMM estimation, which requires the correction of a high‐order nonlinearity bias from the first step. We show that the proposed estimator is consistent and asymptotically mixed Gaussian and propose a consistent estimator for the conditional asymptotic variance. We also construct a Bierens‐type consistent specification test. These infill asymptotic results are based on a novel empirical‐process‐type theory for general integrated functionals of noisy semimartingale processes.
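
A toy Python sketch of the two-step idea under invented dynamics: recover the latent volatility path on local blocks from simulated high‐frequency returns, then plug it into an integrated moment functional (here g(c) = c², i.e. integrated quarticity). The raw plug-in carries exactly the kind of high‐order nonlinearity bias the paper corrects.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 23400                                          # e.g. one-second returns in a day
dt = 1.0 / n
t = np.linspace(0, 1, n)
sigma2 = 0.2 * (1 + 0.5 * np.sin(2 * np.pi * t))   # latent volatility path
r = np.sqrt(sigma2 * dt) * rng.normal(size=n)      # high-frequency returns

# step 1: nonparametric spot-volatility recovery on local blocks of K returns
K = 120
m = n // K
c_hat = (r[:m * K].reshape(m, K) ** 2).sum(axis=1) / (K * dt)

# step 2: sample version of the integrated functional  integral of sigma^4 dt
iq_plug_in = np.sum(c_hat ** 2) * K * dt
iq_true = np.mean(sigma2 ** 2)                     # integral on a uniform [0,1] grid
print(iq_plug_in, iq_true)   # plug-in is biased upward, by a factor near (1 + 2/K)
```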

4.
We develop an econometric methodology to infer the path of risk premia from a large unbalanced panel of individual stock returns. We estimate the time‐varying risk premia implied by conditional linear asset pricing models where the conditioning includes both instruments common to all assets and asset‐specific instruments. The estimator uses simple weighted two‐pass cross‐sectional regressions, and we show its consistency and asymptotic normality under increasing cross‐sectional and time series dimensions. We address consistent estimation of the asymptotic variance by hard thresholding, and testing for asset pricing restrictions induced by the no‐arbitrage assumption. We derive the restrictions given by a continuum of assets in a multi‐period economy under an approximate factor structure robust to asset repackaging. The empirical analysis on returns for about ten thousand U.S. stocks from July 1964 to December 2009 shows that risk premia are large and volatile in crisis periods. They exhibit large positive and negative strays from time‐invariant estimates, follow the macroeconomic cycles, and do not match risk premia estimates on standard sets of portfolios. The asset pricing restrictions are rejected for a conditional four‐factor model capturing market, size, value, and momentum effects.
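
A compact Python sketch of the two-pass logic in its simplest unconditional form (Fama–MacBeth style); the paper's weighting, asset-specific instruments, unbalanced panel, and thresholded variance estimation are all omitted, and the DGP is invented.

```python
import numpy as np

rng = np.random.default_rng(3)
T, N, K = 240, 500, 2                      # months, stocks, factors
F = rng.normal(0, 0.05, (T, K))            # factor shocks, mean zero
lam = np.array([0.004, 0.002])             # true (constant) risk premia
beta = rng.normal(1.0, 0.3, (N, K))
R = (F + lam) @ beta.T + rng.normal(0, 0.1, (T, N))   # excess returns

# pass 1: time-series regressions to estimate each stock's betas
X = np.column_stack([np.ones(T), F])
B = np.linalg.lstsq(X, R, rcond=None)[0][1:].T        # N x K estimated betas

# pass 2: period-by-period cross-sectional regressions of returns on betas
lam_t = np.array([np.linalg.lstsq(B, R[t], rcond=None)[0] for t in range(T)])
print(lam_t.mean(axis=0))   # averages f_t + lam; estimates lam since E[f_t] = 0 here
```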

5.
In certain auction, search, and related models, the boundary of the support of the observed data depends on some of the parameters of interest. For such nonregular models, standard asymptotic distribution theory does not apply. Previous work has focused on characterizing the nonstandard limiting distributions of particular estimators in these models. In contrast, we study the problem of constructing efficient point estimators. We show that the maximum likelihood estimator is generally inefficient, but that the Bayes estimator is efficient according to the local asymptotic minmax criterion for conventional loss functions. We provide intuition for this result using Le Cam's limits of experiments framework.
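
A worked toy example of the paper's point in the classic nonregular model X ~ Uniform(0, θ), where the support boundary depends on the parameter: the MLE (the sample maximum) is beaten in quadratic risk by the flat-prior Bayes posterior mean, which here is available in closed form. The numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
theta, n, reps = 1.0, 50, 20000
mse_mle = mse_bayes = 0.0
for _ in range(reps):
    m = rng.uniform(0, theta, n).max()     # MLE: boundary (nonregular) estimator
    # flat-prior posterior is proportional to theta^(-n) on [m, inf),
    # so the posterior mean is m * (n - 1) / (n - 2)
    bayes = m * (n - 1) / (n - 2)
    mse_mle += (m - theta) ** 2 / reps
    mse_bayes += (bayes - theta) ** 2 / reps
print("MSE of MLE:  ", mse_mle)            # about 2 * theta^2 / n^2
print("MSE of Bayes:", mse_bayes)          # about     theta^2 / n^2
```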

6.
The conventional heteroskedasticity‐robust (HR) variance matrix estimator for cross‐sectional regression (with or without a degrees‐of‐freedom adjustment), applied to the fixed‐effects estimator for panel data with serially uncorrelated errors, is inconsistent if the number of time periods T is fixed (and greater than 2) as the number of entities n increases. We provide a bias‐adjusted HR estimator that is √(nT)‐consistent under any sequences (n, T) in which n and/or T increase to ∞. This estimator can be extended to handle serial correlation of fixed order.
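
A Monte Carlo sketch (design invented) of the inconsistency at fixed T: with T = 4 and homoskedastic, serially uncorrelated errors, the conventional HR estimator applied to the within regression stays below the true sampling variance of the FE estimator no matter how large n is, here by roughly the factor (T − 1)/T.

```python
import numpy as np

rng = np.random.default_rng(5)
n, T, reps, beta = 500, 4, 2000, 1.0
bhat, vhat = [], []
for _ in range(reps):
    alpha = rng.normal(size=(n, 1))                    # entity fixed effects
    x = rng.normal(size=(n, T)) + alpha
    y = beta * x + alpha + rng.normal(size=(n, T))     # serially uncorrelated errors
    xt = x - x.mean(axis=1, keepdims=True)             # within transformation
    yt = y - y.mean(axis=1, keepdims=True)
    b = (xt * yt).sum() / (xt ** 2).sum()
    u = yt - b * xt
    vhat.append((xt ** 2 * u ** 2).sum() / (xt ** 2).sum() ** 2)  # conventional HR
    bhat.append(b)
print("Monte Carlo variance of FE estimator:", np.var(bhat))
print("mean conventional HR estimate:       ", np.mean(vhat))    # too small at fixed T
```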

7.
This article proposes an intertemporal risk‐value (IRV) model that integrates probability‐time tradeoff, time‐value tradeoff, and risk‐value tradeoff into one unified framework. We obtain a general probability‐time tradeoff, which yields a formal representation of the psychological distance a decision maker perceives in evaluating a temporal lottery. This intuition of probability‐time tradeoff is supported by robust empirical findings as well as by psychological theory. Through an explicit formalization of probability‐time tradeoff, an IRV model taking into account three fundamental dimensions, namely, value, probability, and time, is established. The object of evaluation in our framework is a complex lottery. We also give some insights into the structure of the IRV model using a wildcatter problem.

8.
The econometric literature on high‐frequency data often relies on moment estimators derived by assuming local constancy of volatility and related quantities. Here we study this local‐constancy approximation as a general approach to estimation with such data. We show that the technique yields asymptotic properties (consistency, normality) that are correct subject to an ex post adjustment involving asymptotic likelihood ratios. These adjustments are derived and documented. Several examples of estimation are provided: powers of volatility, leverage effect, and integrated betas. The first‐order approximations based on local constancy can be taken over the period of one observation or over blocks of successive observations. The approach has the advantage of gaining transparency in defining and analyzing estimators. The theory relies heavily on the interplay between stable convergence and measure change, and on asymptotic expansions for martingales.
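
A small Python illustration (toy Gaussian model, constants invented) of why an ex post adjustment is needed when estimators are built from local constancy: block-wise constant volatility makes each local variance estimate a scaled chi-square, so a nonlinear transform such as a square root needs a correction. The paper derives its adjustments from asymptotic likelihood ratios; below is only the elementary moment version of the same idea.

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(6)
n, dt = 23400, 1.0 / 23400
t = np.linspace(0, 1, n)
sigma = 0.3 * (1 + 0.4 * np.cos(2 * np.pi * t))
r = sigma * np.sqrt(dt) * rng.normal(size=n)

K = 100                                     # volatility treated as constant per block
m = n // K
c_hat = (r[:m * K].reshape(m, K) ** 2).sum(axis=1) / (K * dt)

# target: a power of volatility, integral of sigma dt (power 1/2 of the variance)
raw = np.sum(np.sqrt(c_hat)) * K * dt
# adjustment: under local constancy c_hat ~ sigma^2 * chi2_K / K, and
# E sqrt(chi2_K / K) = sqrt(2/K) * Gamma((K+1)/2) / Gamma(K/2)
adj = np.sqrt(2.0 / K) * np.exp(gammaln((K + 1) / 2) - gammaln(K / 2))
print("raw:", raw, "adjusted:", raw / adj, "true:", np.mean(sigma))
```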

9.
This paper analyzes the properties of standard estimators, tests, and confidence sets (CS's) for parameters that are unidentified or weakly identified in some parts of the parameter space. The paper also introduces methods to make the tests and CS's robust to such identification problems. The results apply to a class of extremum estimators and corresponding tests and CS's that are based on criterion functions that satisfy certain asymptotic stochastic quadratic expansions and that depend on the parameter that determines the strength of identification. This covers a class of models estimated using maximum likelihood (ML), least squares (LS), quantile, generalized method of moments, generalized empirical likelihood, minimum distance, and semi‐parametric estimators. The consistency/lack‐of‐consistency and asymptotic distributions of the estimators are established under a full range of drifting sequences of true distributions. The asymptotic sizes (in a uniform sense) of standard and identification‐robust tests and CS's are established. The results are applied to the ARMA(1, 1) time series model estimated by ML and to the nonlinear regression model estimated by LS. In companion papers, the results are applied to a number of other models.

10.
We consider forecasting with uncertainty about the choice of predictor variables. The researcher wants to select a model, estimate the parameters, and use the parameter estimates for forecasting. We investigate the distributional properties of a number of different schemes for model choice and parameter estimation, including: in‐sample model selection using the Akaike information criterion; out‐of‐sample model selection; and splitting the data into subsamples for model selection and parameter estimation. Using a weak‐predictor local asymptotic scheme, we provide a representation result that facilitates comparison of the distributional properties of the procedures and their associated forecast risks. This representation isolates the source of inefficiency in some of these procedures. We develop a simulation procedure that improves the accuracy of the out‐of‐sample and split‐sample methods uniformly over the local parameter space. We also examine how bootstrap aggregation (bagging) affects the local asymptotic risk of the estimators and their associated forecasts. Numerically, we find that for many values of the local parameter, the out‐of‐sample and split‐sample schemes perform poorly if implemented in the conventional way. But they perform well if implemented in conjunction with our risk‐reduction method or bagging.
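
A stylized Python comparison (DGP and decision rules invented, much simpler than the paper's local asymptotic framework) of two of the schemes discussed: in‐sample AIC selection versus split-sample out‐of‐sample selection, both followed by full-sample re-estimation, under a weak predictor.

```python
import numpy as np

rng = np.random.default_rng(7)

def excess_risk(scheme, b=0.15, T=200, reps=4000):
    out = []
    for _ in range(reps):
        x = rng.normal(size=T + 1)
        y = b * x[:T] + rng.normal(size=T)            # weak-predictor DGP
        slope = lambda idx: (x[idx] @ y[idx]) / (x[idx] @ x[idx])
        if scheme == "aic":                           # in-sample AIC selection
            ssr1 = np.sum((y - slope(np.arange(T)) * x[:T]) ** 2)
            use_x = T * np.log(ssr1 / T) + 2 < T * np.log(np.sum(y ** 2) / T)
        else:                                         # split-sample OOS selection
            R = T // 2
            b1 = slope(np.arange(R))
            use_x = np.sum((y[R:] - b1 * x[R:T]) ** 2) < np.sum(y[R:] ** 2)
        bhat = slope(np.arange(T)) if use_x else 0.0  # re-estimate on the full sample
        out.append((b * x[T] - bhat * x[T]) ** 2)     # forecast risk vs. the truth
    return np.mean(out)

for s in ["aic", "split"]:
    print(s, excess_risk(s))
```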

11.
This paper develops an asymptotic theory of inference for an unrestricted two‐regime threshold autoregressive (TAR) model with an autoregressive unit root. We find that the asymptotic null distribution of Wald tests for a threshold are nonstandard and different from the stationary case, and suggest basing inference on a bootstrap approximation. We also study the asymptotic null distributions of tests for an autoregressive unit root, and find that they are nonstandard and dependent on the presence of a threshold effect. We propose both asymptotic and bootstrap‐based tests. These tests and distribution theory allow for the joint consideration of nonlinearity (thresholds) and nonstationarity (unit roots). Our limit theory is based on a new set of tools that combine unit root asymptotics with empirical process methods. We work with a particular two‐parameter empirical process that converges weakly to a two‐parameter Brownian motion. Our limit distributions involve stochastic integrals with respect to this two‐parameter process. This theory is entirely new and may find applications in other contexts. We illustrate the methods with an application to the U.S. monthly unemployment rate. We find strong evidence of a threshold effect. The point estimates suggest that the threshold effect is in the short‐run dynamics, rather than in the dominant root. While the conventional ADF test for a unit root is insignificant, our TAR unit root tests are arguably significant. The evidence is quite strong that the unemployment rate is not a unit root process, and there is considerable evidence that the series is a stationary TAR process.
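
A toy Python version (series and grid invented; no claim to the paper's limit theory) of the threshold test and the bootstrap the authors recommend: a sup-Wald statistic over a trimmed grid of candidate thresholds in y_{t-1}, with critical values from a residual bootstrap of the linear null.

```python
import numpy as np

rng = np.random.default_rng(8)

def sup_wald(dy, ylag):
    """sup-Wald for a threshold in dy_t = rho1*ylag*1{ylag<=g} + rho2*ylag*1{ylag>g} + e."""
    best = 0.0
    for g in np.quantile(ylag, np.linspace(0.15, 0.85, 30)):   # trimmed grid
        lo = ylag <= g
        X = np.column_stack([ylag * lo, ylag * ~lo])
        b, res = np.linalg.lstsq(X, dy, rcond=None)[:2]
        V = (res[0] / (len(dy) - 2)) * np.linalg.inv(X.T @ X)
        best = max(best, (b[0] - b[1]) ** 2 / (V[0, 0] + V[1, 1] - 2 * V[0, 1]))
    return best

T = 300
y = np.cumsum(rng.normal(size=T))          # toy series: unit root, no threshold
stat = sup_wald(np.diff(y), y[:-1])

# bootstrap the linear null dy = rho * ylag + e for the threshold test
rho = (y[:-1] @ np.diff(y)) / (y[:-1] @ y[:-1])
resid = np.diff(y) - rho * y[:-1]
boot = []
for _ in range(200):
    e = rng.choice(resid, size=T - 1, replace=True)
    yb = np.zeros(T)
    for s in range(1, T):
        yb[s] = (1 + rho) * yb[s - 1] + e[s - 1]
    boot.append(sup_wald(np.diff(yb), yb[:-1]))
print("sup-Wald:", stat, "bootstrap 95% critical value:", np.quantile(boot, 0.95))
```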

12.
We study an inventory management mechanism that uses two stochastic programs (SPs), the customary one‐period assemble‐to‐order (ATO) model and its relaxation, to conceive control policies for dynamic ATO systems. We introduce a class of ATO systems, those that possess what we call a “chained BOM.” We prove that having a chained BOM is a sufficient condition for both SPs to be convex in the first‐stage decision variables. We show by examples the necessity of the condition. For ATO systems with a chained BOM, our result implies that the optimal integer solutions of the SPs can be found efficiently, and thus expedites the calculation of control parameters. The M system is a representative chained BOM system with two components and three products. We show that in this special case, the SPs can be solved as a one‐stage optimization problem. The allocation policy can also be reduced to simple, intuitive instructions, of which there are four distinct sets, one for each of four different parameter regions. We highlight the need for component reservation in one of these four regions. Our numerical studies demonstrate that achieving asymptotic optimality represents a significant advantage of the SP‐based approach over alternative approaches. Our numerical comparisons also show that outside of the asymptotic regime, the SP‐based approach has a commanding lead over the alternative policies. Our findings indicate that the SP‐based approach is a promising inventory management strategy that warrants further development for more general systems and practical implementations.
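
A sample-average-approximation toy for the M system described above (two components; product A uses component 1, product B uses both, product C uses component 2). All costs, demand rates, and the search grid are invented, and the scenario allocation is solved as a small LP rather than by the paper's reduced allocation policy.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(9)
bA, bB, bC = 2.0, 5.0, 2.0                   # per-unit backorder penalties
h1, h2 = 1.0, 1.0                            # per-unit component holding costs
D = rng.poisson([4, 3, 4], size=(150, 3))    # demand scenarios (dA, dB, dC)

def scenario_cost(y1, y2, d):
    """Second stage: allocate components to products. Serving a unit avoids its
    backorder penalty and uses up components that would otherwise incur holding,
    so the LP maximizes (penalty + holding content) per served unit."""
    dA, dB, dC = d
    res = linprog(c=[-(bA + h1), -(bB + h1 + h2), -(bC + h2)],
                  A_ub=[[1, 1, 0], [0, 1, 1]], b_ub=[y1, y2],   # component stocks
                  bounds=[(0, dA), (0, dB), (0, dC)])           # cannot exceed demand
    zA, zB, zC = res.x
    return (bA * (dA - zA) + bB * (dB - zB) + bC * (dC - zC)
            + h1 * (y1 - zA - zB) + h2 * (y2 - zB - zC))

grid = range(0, 13, 2)                        # first stage: grid over stock pairs
best = min(((y1, y2) for y1 in grid for y2 in grid),
           key=lambda s: np.mean([scenario_cost(*s, d) for d in D]))
print("best component base-stock pair on the grid:", best)
```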

13.
This paper considers the problem of disruption risk management in global supply chains. We consider a supply chain with two participants, who face interdependent losses resulting from supply chain disruptions such as terrorist strikes and natural hazards. The Harsanyi–Selten–Nash bargaining framework is used to model the supply chain participants' choice of risk mitigation investments. The bargaining approach allows a framing of both joint financing of mitigation activities before the fact and loss‐sharing net of insurance payouts after the fact. The disagreement outcome in the bargaining game is assumed to be the result of the corresponding non‐cooperative game. We describe an incentive‐compatible contract that leads to First Best investment and equal “gain” for all players, when the solution is “interior” (as it almost certainly is in practice). A supplier that has superior security practices (i.e., is inherently safer) exploits its informational advantage by extracting an “information rent” in the usual spirit of incomplete information games. We also identify a special case of this contract, which is robust to moral hazard. The role of auditing in reinforcing investment incentives is also examined.

14.
We study inventory optimization for locally controlled, continuous‐review distribution systems with stochastic customer demands. Each node follows a base‐stock policy and a first‐come, first‐served allocation policy. We develop two heuristics, the recursive optimization (RO) heuristic and the decomposition‐aggregation (DA) heuristic, to approximate the optimal base‐stock levels of all the locations in the system. The RO heuristic applies a bottom‐up approach that sequentially solves single‐variable, convex problems for each location. The DA heuristic decomposes the distribution system into multiple serial systems, solves for the base‐stock levels of these systems using the newsvendor heuristic of Shang and Song (2003), and then aggregates the serial systems back into the distribution system using a procedure we call “backorder matching.” A key advantage of the DA heuristic is that it does not require any evaluation of the cost function (a computationally costly operation that requires numerical convolution). We show that, for both RO and DA, changing some of the parameters, such as leadtime, unit backordering cost, and demand rate, of a location has an impact only on its own local base‐stock level and its upstream locations’ local base‐stock levels. An extensive numerical study shows that both heuristics perform well, with the RO heuristic providing more accurate results and the DA heuristic consuming less computation time. We show that both RO and DA are asymptotically optimal along multiple dimensions for two‐echelon distribution systems. Finally, we show that, with minor changes, both RO and DA are applicable to the balanced allocation policy.
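
The single-location newsvendor computation that heuristics of this kind build on, as a minimal Python sketch (Poisson lead-time demand and all parameters assumed for illustration; the DA heuristic's decomposition, aggregation, and backorder matching are not shown).

```python
from scipy.stats import poisson

def base_stock(mu_leadtime, h, b):
    """Smallest S with P(D <= S) >= b / (b + h): newsvendor base-stock level for
    Poisson lead-time demand, holding cost h and backorder cost b per unit."""
    return int(poisson.ppf(b / (b + h), mu_leadtime))

# e.g. demand rate 5/day, 3-day replenishment leadtime, h = 1, b = 9
print(base_stock(mu_leadtime=15, h=1, b=9))   # 90th percentile of lead-time demand
```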

15.
Traditional approaches in inventory control first estimate the demand distribution among a predefined family of distributions based on data fitting of historical demand observations, and then optimize the inventory control using the estimated distributions. These approaches often lead to fragile solutions whenever the preselected family of distributions is inadequate. In this article, we propose a minimax robust model that integrates data fitting and inventory optimization for the single‐item multi‐period periodic review stochastic lot‐sizing problem. In contrast with the standard assumption of given distributions, we assume that histograms are part of the input. The robust model generalizes the Bayesian model, and it can be interpreted as minimizing history‐dependent risk measures. We prove that the optimal inventory control policies of the robust model share the same structure as the traditional stochastic dynamic programming counterpart. In particular, we analyze the robust model based on the chi‐square goodness‐of‐fit test. If demand samples are obtained from a known distribution, the robust model converges to the stochastic model with true distribution under generous conditions. Its effectiveness is also validated by numerical experiments.
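
A single-period toy of the idea (the paper treats the multi-period dynamic problem): take the histogram as input, let the ambiguity set be all demand distributions passing a chi-square-type goodness-of-fit check, and order to minimize the worst-case expected cost. The support, costs, and the Neyman-style modified chi-square distance are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(11)
support = np.arange(25)
counts = np.bincount(rng.poisson(8, 200), minlength=25)[:25].astype(float)
N = counts.sum()
rho = chi2.ppf(0.95, df=len(support) - 1)      # radius of the goodness-of-fit ball
h, b = 1.0, 4.0                                # holding / backorder cost per unit

def worst_case(q):
    """max_p sum_i p_i*cost_i(q) over p in the chi-square ball around the histogram."""
    c = h * np.maximum(q - support, 0) + b * np.maximum(support - q, 0)
    cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
            {"type": "ineq",
             "fun": lambda p: rho - np.sum((N * p - counts) ** 2
                                           / np.maximum(counts, 1.0))}]
    res = minimize(lambda p: -(c @ p), counts / N,
                   bounds=[(0, 1)] * len(support), constraints=cons)
    return -res.fun

q_robust = min(support, key=worst_case)        # order-up-to level, robust criterion
print("robust order-up-to level:", q_robust)
```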

16.
This paper considers tests of the parameter on an endogenous variable in an instrumental variables regression model. The focus is on determining tests that have some optimal power properties. We start by considering a model with normally distributed errors and known error covariance matrix. We consider tests that are similar and satisfy a natural rotational invariance condition. We determine a two‐sided power envelope for invariant similar tests. This allows us to assess and compare the power properties of tests such as the conditional likelihood ratio (CLR), the Lagrange multiplier, and the Anderson–Rubin tests. We find that the CLR test is quite close to being uniformly most powerful invariant among a class of two‐sided tests. The finite‐sample results of the paper are extended to the case of unknown error covariance matrix and possibly nonnormal errors via weak instrument asymptotics. Strong instrument asymptotic results also are provided because we seek tests that perform well under both weak and strong instruments.
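
The Anderson–Rubin statistic, the simplest member of the family of tests the paper compares, as a Python sketch (the CLR statistic and the power-envelope computations need more machinery; the design below is invented).

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(10)
n, k = 500, 4
Z = rng.normal(size=(n, k))                  # instruments
u = rng.normal(size=n)
v = 0.8 * u + rng.normal(size=n)             # endogeneity via correlated errors
x = Z @ np.full(k, 0.2) + v
y = 1.0 * x + u                              # true beta = 1

def anderson_rubin(beta0):
    """AR test: F-statistic for the instruments in a regression of y - beta0*x on Z."""
    e = y - beta0 * x
    Pe = Z @ np.linalg.solve(Z.T @ Z, Z.T @ e)
    ar = (Pe @ Pe / k) / ((e @ e - Pe @ Pe) / (n - k))
    return ar, 1 - f_dist.cdf(ar, k, n - k)

print(anderson_rubin(1.0))   # at the true beta: p-value roughly uniform
print(anderson_rubin(0.0))   # away from it: rejects, given enough identification
```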

17.
Entropy is a classical statistical concept with appealing properties. Establishing asymptotic distribution theory for smoothed nonparametric entropy measures of dependence has so far proved challenging. In this paper, we develop an asymptotic theory for a class of kernel‐based smoothed nonparametric entropy measures of serial dependence in a time‐series context. We use this theory to derive the limiting distribution of Granger and Lin's (1994) normalized entropy measure of serial dependence, which was previously not available in the literature. We also apply our theory to construct a new entropy‐based test for serial dependence, providing an alternative to Robinson's (1991) approach. To obtain accurate inferences, we propose and justify a consistent smoothed bootstrap procedure. The naive bootstrap is not consistent for our test. Our test is useful in, for example, testing the random walk hypothesis, evaluating density forecasts, and identifying important lags of a time series. It is asymptotically locally more powerful than Robinson's (1991) test, as is confirmed in our simulation. An application to the daily S&P 500 stock price index illustrates our approach.
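
A plug-in Python sketch of a kernel-based entropy measure of serial dependence: KDE estimates of the joint and marginal densities of (x_t, x_{t-lag}) give a mutual-information estimate, normalized so that it equals the squared autocorrelation in the Gaussian case. This is only a point estimate with a Granger–Lin-style normalization as a stand-in; the paper's smoothed bootstrap inference (the naive bootstrap fails here) is not shown.

```python
import numpy as np
from scipy.stats import gaussian_kde

def entropy_dependence(x, lag):
    """Plug-in mutual information between x_t and x_{t-lag} from Gaussian KDEs,
    mapped through 1 - exp(-2*MI), which equals rho^2 for a bivariate normal."""
    a, b = x[lag:], x[:-lag]
    pts = np.vstack([a, b])
    mi = np.mean(np.log(gaussian_kde(pts)(pts)
                        / (gaussian_kde(a)(a) * gaussian_kde(b)(b))))
    return 1 - np.exp(-2 * mi)

rng = np.random.default_rng(12)
e = rng.normal(size=2000)
x = np.zeros(2000)
for t in range(1, 2000):                      # AR(1) with rho = 0.6
    x[t] = 0.6 * x[t - 1] + e[t]
print("AR(1):", entropy_dependence(x, 1))     # near rho^2 = 0.36
print("iid:  ", entropy_dependence(e, 1))     # near zero (plug-in bias aside)
```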

18.
Siwei Gao. Risk Analysis, 2012, 32(11): 1967–1977.
For catastrophe losses, the conventional risk finance paradigm of enterprise risk management identifies transfer, as opposed to pooling or avoidance, as the preferred solution. However, this analysis does not necessarily account for differences between light‐ and heavy‐tailed characteristics of loss portfolios. Of particular concern are the decreasing benefits of diversification (through pooling) as the tails of severity distributions become heavier. In the present article, we study a loss portfolio characterized by nonstochastic frequency and a class of Lévy‐stable severity distributions calibrated to match the parameters of the Pareto II distribution. We then propose a conservative risk finance paradigm that can be used to prepare the firm for worst‐case scenarios with regard to both (1) the firm's intrinsic sensitivity to risk and (2) the heaviness of the severity's tail.
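
A quick Python look (parameters invented; the paper's Pareto II calibration is omitted) at the core phenomenon: for Lévy-stable severities, pooling helps in the usual way when the tail index alpha is near 2 but is penalized once alpha drops below 1, where the pooled VaR exceeds the sum of stand-alone VaRs.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(13)
n_risks, reps, q = 50, 5000, 0.99

for alpha in [1.9, 1.2, 0.9]:                 # heavier severity tail as alpha falls
    # positively skewed stable severities (beta = 1)
    losses = levy_stable.rvs(alpha, 1.0, size=(reps, n_risks), random_state=rng)
    var_single = np.quantile(losses[:, 0], q)
    var_pooled = np.quantile(losses.sum(axis=1), q)
    print(f"alpha={alpha}: n*VaR(single)={n_risks * var_single:10.1f}  "
          f"VaR(pooled)={var_pooled:10.1f}")
```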

19.
This paper studies a shape‐invariant Engel curve system with endogenous total expenditure, in which the shape‐invariant specification involves a common shift parameter for each demographic group in a pooled system of nonparametric Engel curves. We focus on the identification and estimation of both the nonparametric shapes of the Engel curves and the parametric specification of the demographic scaling parameters. The identification condition relates to bounded completeness, and the estimation procedure applies sieve minimum distance estimation of conditional moment restrictions, allowing for endogeneity. We establish a new root mean squared convergence rate for the nonparametric instrumental variable regression when the endogenous regressor could have unbounded support. Root‐n asymptotic normality and semiparametric efficiency of the parametric components are also given under a set of “low‐level” sufficient conditions. Our empirical application using the U.K. Family Expenditure Survey shows the importance of adjusting for endogeneity in terms of both the nonparametric curvatures and the demographic parameters of systems of Engel curves.
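
The bare nonparametric-IV ingredient of the procedure, as a Python sketch: a polynomial sieve for the unknown curve, a richer polynomial sieve in the instrument, and a series two-stage least squares (sieve minimum distance) fit. The shape-invariant system, demographic shift parameters, and unbounded-support theory of the paper are not represented, and the DGP is invented.

```python
import numpy as np

rng = np.random.default_rng(14)
n = 3000
z = rng.normal(size=n)                     # instrument (e.g., gross earnings, toy scale)
v = rng.normal(size=n)
x = 0.8 * z + v                            # endogenous total expenditure (toy scale)
u = 0.7 * v + rng.normal(0, 0.3, n)        # endogeneity: u correlated with x through v
g = lambda s: np.sin(s) + 0.3 * s          # "true" Engel-curve shape
y = g(x) + u

Psi = np.vander(x, 5, increasing=True)     # sieve basis for the unknown curve
Phi = np.vander(z, 8, increasing=True)     # richer instrument basis
P = Phi @ np.linalg.solve(Phi.T @ Phi, Phi.T)        # projection on the instruments
a = np.linalg.lstsq(P @ Psi, P @ y, rcond=None)[0]   # series 2SLS / sieve min distance

grid = np.linspace(-2, 2, 5)
print(np.vander(grid, 5, increasing=True) @ a)       # estimated curve on a grid
print(g(grid))                                       # versus the truth
```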

20.
We develop a new specification test for IV estimators adopting a particular second order approximation of Bekker. The new specification test compares the difference of the forward (conventional) 2SLS estimator of the coefficient of the right‐hand side endogenous variable with the reverse 2SLS estimator of the same unknown parameter when the normalization is changed. Under the null hypothesis that conventional first order asymptotics provide a reliable guide to inference, the two estimates should be very similar. Our test checks whether the resulting difference in the two estimates satisfies the results of second order asymptotic theory. Essentially the same idea is applied to develop another new specification test using second‐order unbiased estimators of the type first proposed by Nagar. If the forward and reverse Nagar‐type estimators are not significantly different we recommend estimation by LIML, which we demonstrate is the optimal linear combination of the Nagar‐type estimators (to second order). We also demonstrate the high degree of similarity for k‐class estimators between the approach of Bekker and the Edgeworth expansion approach of Rothenberg. An empirical example and Monte Carlo evidence demonstrate the operation of the new specification test.
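
A minimal Python sketch of the forward/reverse comparison behind the test (invented design; the Bekker-type second-order benchmark and the Nagar variants are not computed): the forward 2SLS estimate of beta and the reciprocal of the reverse 2SLS estimate should be close when first-order asymptotics are reliable.

```python
import numpy as np

rng = np.random.default_rng(15)
n, k = 800, 6
Z = rng.normal(size=(n, k))                      # instruments
u = rng.normal(size=n)
x = Z @ np.full(k, 0.3) + 0.6 * u + rng.normal(size=n)
y = 0.5 * x + u                                  # true beta = 0.5

P = Z @ np.linalg.solve(Z.T @ Z, Z.T)            # projection on the instruments
beta_forward = (x @ P @ y) / (x @ P @ x)         # 2SLS of y on x
beta_reverse = (y @ P @ y) / (y @ P @ x)         # reciprocal of the 2SLS of x on y
print(beta_forward, beta_reverse)                # similar here; a large gap is a warning
```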
