Similar Articles
 20 similar articles found
1.
A new method is proposed for constructing confidence intervals in autoregressive models with linear time trend. Interest focuses on the sum of the autoregressive coefficients because this parameter provides a useful scalar measure of the long‐run persistence properties of an economic time series. Since the type of the limiting distribution of the corresponding OLS estimator, as well as the rate of its convergence, depend in a discontinuous fashion upon whether the true parameter is less than one or equal to one (that is, the trend‐stationary case or the unit root case), the construction of confidence intervals is notoriously difficult. The crux of our method is to recompute the OLS estimator on smaller blocks of the observed data, according to the general subsampling idea of Politis and Romano (1994a), although some extensions of the standard theory are needed. The method is more general than previous approaches, both because it works for arbitrary parameter values and because it allows the innovations to be a martingale difference sequence rather than i.i.d. Some simulation studies examine the finite-sample performance.
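As a rough illustration of the subsampling idea described above (not the paper's exact procedure, which in particular handles the unknown and possibly non-standard convergence rate), the Python sketch below recomputes the trend-plus-lags OLS estimator of the sum of AR coefficients on overlapping blocks and forms an equal-tailed interval. The function names, the toy data-generating process, and the assumed sqrt(n) rate are all illustrative assumptions.

```python
import numpy as np

def ar_sum_ols(y, p=1):
    """OLS of y_t on an intercept, a linear trend, and p lags of y;
    returns the estimated sum of the AR coefficients."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    trend = np.arange(p, T, dtype=float)
    lags = np.column_stack([y[p - j:T - j] for j in range(1, p + 1)])
    X = np.column_stack([np.ones(T - p), trend, lags])
    beta, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return beta[2:].sum()

def subsampling_ci(y, b, p=1, alpha=0.05, rate=np.sqrt):
    """Equal-tailed subsampling confidence interval.  `rate` is an assumed
    convergence rate (sqrt(n) fits the trend-stationary case); dealing with
    the unknown rate is the part of the paper's method omitted here."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    theta_full = ar_sum_ols(y, p)
    # recompute the OLS estimator on every overlapping block of length b
    stats = np.array([rate(b) * (ar_sum_ols(y[s:s + b], p) - theta_full)
                      for s in range(T - b + 1)])
    q_lo, q_hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return theta_full - q_hi / rate(T), theta_full - q_lo / rate(T)

# toy usage: AR(1) around a linear trend with persistence 0.9
rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.01 * t + 0.9 * y[t - 1] + rng.standard_normal()
print(subsampling_ci(y, b=60))
```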

2.
The purpose of this paper is to provide theoretical justification for some existing methods for constructing confidence intervals for the sum of coefficients in autoregressive models. We show that the methods of Stock (1991), Andrews (1993), and Hansen (1999) provide asymptotically valid confidence intervals, whereas the subsampling method of Romano and Wolf (2001) does not. In addition, we generalize the three valid methods to a larger class of statistics. We also clarify the difference between uniform and pointwise asymptotic approximations, and show that a pointwise convergence of coverage probabilities for all values of the parameter does not guarantee the validity of the confidence set.

3.
We consider the estimation of dynamic panel data models in the presence of incidental parameters in both dimensions: individual fixed‐effects and time fixed‐effects, as well as incidental parameters in the variances. We adopt the factor analytical approach by estimating the sample variance of individual effects rather than the effects themselves. In the presence of cross‐sectional heteroskedasticity, the factor method estimates the average of the cross‐sectional variances instead of the individual variances. The method thereby eliminates the incidental‐parameter problem in the means and in the variances over the cross‐sectional dimension. We further show that estimating the time effects and heteroskedasticities in the time dimension does not lead to the incidental‐parameter bias even when T and N are comparable. Moreover, efficient and robust estimation is obtained by jointly estimating heteroskedasticities.

4.
Important estimation problems in econometrics like estimating the value of a spectral density at frequency zero, which appears in the econometrics literature in the guises of heteroskedasticity and autocorrelation consistent variance estimation and long run variance estimation, are shown to be “ill‐posed” estimation problems. A prototypical result obtained in the paper is that the minimax risk for estimating the value of the spectral density at frequency zero is infinite regardless of sample size, and that confidence sets are close to being uninformative. In this result the maximum risk is over commonly used specifications for the set of feasible data generating processes. The consequences for inference on unit roots and cointegration are discussed. Similar results for persistence estimation and estimation of the long memory parameter are given. All these results are obtained as special cases of a more general theory developed for abstract estimation problems, which readily also allows for the treatment of other ill‐posed estimation problems such as nonparametric regression or density estimation.

5.
In this paper we investigate methods for testing the existence of a cointegration relationship among the components of a nonstationary fractionally integrated (NFI) vector time series. Our framework generalizes previous studies restricted to unit root integrated processes and permits simultaneous analysis of spurious and cointegrated NFI vectors. We propose a modified F‐statistic, based on a particular studentization, which converges weakly under both hypotheses, despite the fact that OLS estimates are only consistent under cointegration. This statistic leads to a Wald‐type test of cointegration when combined with a narrow band GLS‐type estimate. Our semiparametric methodology allows consistent testing of the spurious regression hypothesis against the alternative of fractional cointegration without prior knowledge of the memory of the original series, their short run properties, the cointegrating vector, or the degree of cointegration. This semiparametric aspect of the modeling does not lead to an asymptotic loss of power, permitting the Wald statistic to diverge faster under the alternative of cointegration than when testing for a hypothesized cointegration vector. In our simulations we show that the method has comparable power to customary procedures under the unit root cointegration setup, and maintains good properties in a general framework where other methods may fail. We illustrate our method testing the cointegration hypothesis of nominal GNP and simple‐sum (M1, M2, M3) monetary aggregates.

6.
It is well known that the finite‐sample properties of tests of hypotheses on the co‐integrating vectors in vector autoregressive models can be quite poor, and that current solutions based on Bartlett‐type corrections or on bootstrapping unrestricted parameter estimators are unsatisfactory, particularly in those cases where asymptotic χ2 tests also fail most severely. In this paper, we solve this inference problem by showing the novel result that a bootstrap test where the null hypothesis is imposed on the bootstrap sample is asymptotically valid. That is, not only does it have asymptotically correct size, but, in contrast to what is claimed in existing literature, it is consistent under the alternative. Compared to the theory for bootstrap tests on the co‐integration rank (Cavaliere, Rahbek, and Taylor, 2012), establishing the validity of the bootstrap in the framework of hypotheses on the co‐integrating vectors requires new theoretical developments, including the introduction of multivariate Ornstein–Uhlenbeck processes with random (reduced rank) drift parameters. Finally, as documented by Monte Carlo simulations, the bootstrap test outperforms existing methods.

7.
In certain auction, search, and related models, the boundary of the support of the observed data depends on some of the parameters of interest. For such nonregular models, standard asymptotic distribution theory does not apply. Previous work has focused on characterizing the nonstandard limiting distributions of particular estimators in these models. In contrast, we study the problem of constructing efficient point estimators. We show that the maximum likelihood estimator is generally inefficient, but that the Bayes estimator is efficient according to the local asymptotic minimax criterion for conventional loss functions. We provide intuition for this result using Le Cam's limits of experiments framework.

8.
Despite being theoretically suboptimal, simpler contracts (such as price‐only contracts and quantity discount contracts with a limited number of price blocks) are commonly preferred in practice. Thus, exploring the tension between theory and practice regarding complexity and performance in contract design is especially relevant. Using human subject experiments, Kalkancı et al. (2011) showed that such simpler contracts perform effectively for a supplier interacting with a computerized buyer under asymmetric demand information. We use a similar set of experiments with the modification that a human supplier interacts with a human buyer. We show that human interactions strengthen the supplier's preference for simpler contracts. We find that suppliers have fairness concerns even when they interact with computerized buyers. These fairness concerns tend to be even stronger when suppliers interact with human buyers, particularly when the complexity of the contract is low. We also find that suppliers are more prone to random decision errors (i.e., bounded rationality) when interacting with human buyers. In the absence of social preferences, Kalkancı et al. identified reinforcement and bounded rationality as key biases that impact suppliers' decisions. In human‐to‐human experiments, we find evidence for social preference effects. However, these effects may be secondary to bounded rationality.

9.
This paper establishes that instruments enable the identification of nonparametric regression models in the presence of measurement error by providing a closed form solution for the regression function in terms of Fourier transforms of conditional expectations of observable variables. For parametrically specified regression functions, we propose a root n consistent and asymptotically normal estimator that takes the familiar form of a generalized method of moments estimator with a plugged‐in nonparametric kernel density estimate. Both the identification and the estimation methodologies rely on Fourier analysis and on the theory of generalized functions. The finite‐sample properties of the estimator are investigated through Monte Carlo simulations.

10.
Mostly fueled by mandates, adoption and implementation of RFID technology in the retail industry are growing rapidly. At these early stages of adoption, one puzzling issue for retailers and suppliers is whether there is a compelling business case for RFID. In order to explore the potential business case for RFID, we conducted a case study using actual RFID data collected by a major retailer for the cases shipped by one of its major suppliers. We show the physical layout of the RFID readers on a partial supply chain covering product movement from distribution centers to retail stores. First, in the analysis phase, we identify several performance metrics that can be computed from the RFID readings. Next, using this RFID data, we compute the values of those performance metrics. These values represent mean times between movements at different locations. Then, we discuss how these measures can assist in improving logistical performance at a micro supply chain level of operations between a distribution center and a retail store. We present how such information can be valuable to both the retail store operator and the supplier. We also discuss the initial lessons learned from actual RFID data collected in the field, in terms of data quality issues.
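A minimal, hypothetical illustration of the kind of metric described above: given a table of RFID reads (case, reader location, timestamp; all column names and values are invented here, not the retailer's actual data layout), the mean time between consecutive reads of the same case can be computed for each location-to-location leg.

```python
import pandas as pd

# Invented read log purely for illustration: one row per RFID read of a case at a reader.
reads = pd.DataFrame({
    "case_id":  ["C1", "C1", "C1", "C2", "C2"],
    "location": ["DC_door", "store_backroom", "sales_floor", "DC_door", "store_backroom"],
    "timestamp": pd.to_datetime(["2005-03-01 08:00", "2005-03-02 14:30",
                                 "2005-03-03 09:15", "2005-03-01 10:00",
                                 "2005-03-02 20:45"]),
})

# Time between consecutive reads of the same case = time between movements.
reads = reads.sort_values(["case_id", "timestamp"])
reads["hours_since_prev"] = (reads.groupby("case_id")["timestamp"].diff()
                             .dt.total_seconds() / 3600)
reads["prev_location"] = reads.groupby("case_id")["location"].shift()

# Mean time between movements for each location-to-location leg.
print(reads.dropna().groupby(["prev_location", "location"])["hours_since_prev"].mean())
```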

11.
We develop an asymptotic theory for the pre‐averaging estimator when asset price jumps are weakly identified, here modeled as local to zero. The theory unifies the conventional asymptotic theory for continuous and discontinuous semimartingales as two polar cases with a continuum of local asymptotics, and explains the breakdown of the conventional procedures under weak identification. We propose simple bias‐corrected estimators for jump power variations, and construct robust confidence sets with valid asymptotic size in a uniform sense. The method is also robust to certain forms of microstructure noise.

12.
We develop general model‐free adjustment procedures for the calculation of unbiased volatility loss functions based on practically feasible realized volatility benchmarks. The procedures, which exploit recent nonparametric asymptotic distributional results, are both easy to implement and highly accurate in empirically realistic situations. We also illustrate that properly accounting for the measurement errors in the volatility forecast evaluations reported in the existing literature can result in markedly higher estimates for the true degree of return volatility predictability.
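The toy Monte Carlo below is not the paper's adjustment procedure; it only illustrates the measurement-error wedge the paper addresses: a loss computed against feasible realized variance overstates the loss against the latent integrated variance, and a simple plug-in correction (assuming Gaussian intraday returns with no microstructure noise) closes most of the gap. All parameter values and the forecast rule are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_days, m = 5000, 78                                   # m intraday (e.g. 5-minute) returns per day
iv = rng.gamma(shape=4.0, scale=0.25e-4, size=n_days)  # latent daily integrated variance
r = rng.standard_normal((n_days, m)) * np.sqrt(iv / m)[:, None]
rv = (r ** 2).sum(axis=1)                              # feasible realized-variance benchmark

forecast = 0.5 * iv + 0.5 * iv.mean()                  # an imperfect volatility forecast
mse_vs_rv = np.mean((forecast - rv) ** 2)              # computable in practice
mse_vs_iv = np.mean((forecast - iv) ** 2)              # infeasible object of real interest
adjustment = np.mean(2.0 * rv ** 2 / m)                # plug-in estimate of E[(RV - IV)^2]
print(mse_vs_rv, mse_vs_iv, mse_vs_rv - adjustment)    # the last two should be close
```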

13.
This note studies some seemingly anomalous results that arise in possibly misspecified, reduced‐rank linear asset‐pricing models estimated by the continuously updated generalized method of moments. When a spurious factor (that is, a factor that is uncorrelated with the returns on the test assets) is present, the test for correct model specification has asymptotic power that is equal to the nominal size. In other words, applied researchers will erroneously conclude that the model is correctly specified even when the degree of misspecification is arbitrarily large. The rejection probability of the test for overidentifying restrictions typically decreases further in underidentified models where the dimension of the null space is larger than 1.

14.
We propose a semiparametric two‐step inference procedure for a finite‐dimensional parameter based on moment conditions constructed from high‐frequency data. The population moment conditions take the form of temporally integrated functionals of state‐variable processes that include the latent stochastic volatility process of an asset. In the first step, we nonparametrically recover the volatility path from high‐frequency asset returns. The nonparametric volatility estimator is then used to form sample moment functions in the second‐step GMM estimation, which requires the correction of a high‐order nonlinearity bias from the first step. We show that the proposed estimator is consistent and asymptotically mixed Gaussian and propose a consistent estimator for the conditional asymptotic variance. We also construct a Bierens‐type consistent specification test. These infill asymptotic results are based on a novel empirical‐process‐type theory for general integrated functionals of noisy semimartingale processes.

15.
Wavelet analysis is a new mathematical method developed as a unified field of science over the last decade or so. As a spatially adaptive analytic tool, wavelets are useful for capturing serial correlation where the spectrum has peaks or kinks, as can arise from persistent dependence, seasonality, and other kinds of periodicity. This paper proposes a new class of generally applicable wavelet‐based tests for serial correlation of unknown form in the estimated residuals of a panel regression model, where error components can be one‐way or two‐way, individual and time effects can be fixed or random, and regressors may contain lagged dependent variables or deterministic/stochastic trending variables. Our tests are applicable to unbalanced heterogeneous panel data. They have a convenient N(0,1) limiting distribution under the null. No formulation of an alternative model is required, and our tests are consistent against serial correlation of unknown form even in the presence of substantial inhomogeneity in serial correlation across individuals. This is in contrast to existing serial correlation tests for panel models, which ignore inhomogeneity in serial correlation across individuals by assuming a common alternative, and thus have no power against the alternatives where the average of serial correlations among individuals is close to zero. We propose and justify a data‐driven method to choose the smoothing parameter (the finest scale in wavelet spectral estimation), making the tests completely operational in practice. The data‐driven finest scale automatically converges to zero under the null hypothesis of no serial correlation and diverges to infinity as the sample size increases under the alternative, ensuring the consistency of our tests. Simulations show that our tests perform well in finite samples relative to some existing tests.

16.
We consider a make‐to‐stock, finite‐capacity production system with setup cost and delay‐sensitive customers. To balance the setup and inventory related costs, the production manager adopts a two‐critical‐number control policy, where the production starts when the number of waiting customers reaches a certain level and shuts down when a certain quantity of inventory has accumulated. Once the production is set up, the unit production time follows an exponential distribution. Potential customers arrive according to a Poisson process. Customers are strategic, i.e., they make decisions on whether to stay for the product or to leave without purchase based on their utility values, which depend on the production manager's control decisions. We formulate the problem as a Stackelberg game between the production manager and the customers, where the former is the game leader. We first derive the equilibrium customer purchasing strategy and system performance. We then formulate the expected cost rate function for the production system and present a search algorithm for obtaining the optimal values of the two control variables. We further analyze the characteristics of the optimal solution numerically and compare them with the situation where the customers are non‐strategic.
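A stripped-down simulation sketch of the two-critical-number policy (ignoring the strategic customer behavior, setup costs, and holding costs that are central to the paper): production switches on when enough customers are waiting and off once enough inventory has accumulated. All names and parameter values are illustrative.

```python
import numpy as np

def simulate_two_critical_number(lam, mu, start_backlog, stop_inventory,
                                 horizon=1e5, seed=0):
    """Simulate the make-to-stock queue under a two-critical-number policy:
    production turns ON when the number of waiting customers reaches
    `start_backlog` and OFF once on-hand inventory reaches `stop_inventory`.
    Returns time-average inventory, time-average backlog, and setups per unit time."""
    rng = np.random.default_rng(seed)
    t, level, on, setups = 0.0, 0, False, 0   # level > 0: inventory; level < 0: waiting customers
    inv_area = backlog_area = 0.0
    while t < horizon:
        total_rate = lam + (mu if on else 0.0)
        dt = rng.exponential(1.0 / total_rate)
        inv_area += max(level, 0) * dt        # area under the inventory path
        backlog_area += max(-level, 0) * dt   # area under the backlog path
        t += dt
        if rng.random() < lam / total_rate:   # customer arrival
            level -= 1
            if not on and -level >= start_backlog:
                on, setups = True, setups + 1
        else:                                 # production completion (only possible when on)
            level += 1
            if level >= stop_inventory:
                on = False
    return inv_area / t, backlog_area / t, setups / t

# illustrative parameters only: arrival rate 0.8, production rate 1.0
print(simulate_two_critical_number(lam=0.8, mu=1.0, start_backlog=3, stop_inventory=5))
```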

17.
Risk Analysis, 2018, 38(4): 694–709
Subsurface energy activities entail the risk of induced seismicity, including low‐probability high‐consequence (LPHC) events. For the design of such risk communication, the scientific literature lacks empirical evidence of how the public reacts to different written risk communication formats about such LPHC events and to related uncertainty or expert confidence. This study presents findings from an online experiment (N = 590) that empirically tested the public's responses to risk communication about induced seismicity and to different technology frames, namely deep geothermal energy (DGE) and shale gas (between‐subject design). Three incrementally different formats of written risk communication were tested: (i) qualitative, (ii) qualitative and quantitative, and (iii) qualitative and quantitative with risk comparison. Respondents found the latter two formats the easiest to understand and the most exact, and liked them the most. Adding uncertainty and expert confidence statements made the risk communication less clear and less easy to understand, and increased concern. Above all, the technology for which risks are communicated and its acceptance mattered strongly: respondents in the shale gas condition found the identical risk communication less trustworthy and more concerning than in the DGE conditions. They also liked the risk communication overall less. For practitioners in DGE or shale gas projects, the study shows that the public would appreciate efforts to describe LPHC risks with numbers and, optionally, risk comparisons. However, there seems to be a trade‐off between aiming for transparency by disclosing uncertainty and limited expert confidence, and thereby decreasing clarity and increasing concern in the view of the public.

18.
The objective of this article is to evaluate the performance of the COM‐Poisson GLM for analyzing crash data exhibiting underdispersion (when conditional on the mean). The COM‐Poisson distribution, originally developed in 1962, has recently been reintroduced by statisticians for analyzing count data subject to either over‐ or underdispersion. Over the last year, the COM‐Poisson GLM has been evaluated in the context of crash data analysis and it has been shown that the model performs as well as the Poisson‐gamma model for crash data exhibiting overdispersion. To accomplish the objective of this study, several COM‐Poisson models were estimated using crash data collected at 162 railway‐highway crossings in South Korea between 1998 and 2002. This data set has been shown to exhibit underdispersion when models linking crash data to various explanatory variables are estimated. The modeling results were compared to those produced from the Poisson and gamma probability models documented in a previously published study. The results of this research show that the COM‐Poisson GLM can handle crash data when the modeling output shows signs of underdispersion. Finally, they also show that the model proposed in this study provides better statistical performance than the gamma probability and the traditional Poisson models, at least for this data set.
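For readers unfamiliar with the distribution, the sketch below codes the COM‐Poisson log-pmf and fits a regression with a log link by direct maximum likelihood. It is a generic illustration, not the study's exact GLM parameterization, data, or software; the truncation of the normalizing constant at 200 terms and the simulated Poisson counts are numerical conveniences.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def com_poisson_logpmf(y, lam, nu, jmax=200):
    """log P(Y = y) for the COM-Poisson: lam**y / (y!)**nu / Z(lam, nu),
    with the normalizing constant Z truncated at jmax terms (a numerical choice)."""
    j = np.arange(jmax)
    logZ = np.logaddexp.reduce(j * np.log(lam) - nu * gammaln(j + 1))
    return y * np.log(lam) - nu * gammaln(y + 1) - logZ

def fit_com_poisson_regression(y, X):
    """Regression with log link log(lam_i) = x_i' beta and common dispersion nu,
    fitted by direct maximum likelihood; nu > 1 indicates underdispersion."""
    def negll(params):
        beta, nu = params[:-1], np.exp(params[-1])
        lam = np.exp(X @ beta)
        return -sum(com_poisson_logpmf(yi, li, nu) for yi, li in zip(y, lam))
    return minimize(negll, np.zeros(X.shape[1] + 1), method="Nelder-Mead")

# toy usage with simulated Poisson counts, just to exercise the code
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = rng.poisson(np.exp(X @ np.array([0.5, 0.3])))
res = fit_com_poisson_regression(y, X)
print(res.x)   # [beta0, beta1, log(nu)]; log(nu) near 0 means Poisson-like dispersion
```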

19.
We consider a make‐to‐order manufacturer that serves two customer classes: core customers who pay a fixed negotiated price, and “fill‐in” customers who make submittal decisions based on the current price set by the firm. Using a Markovian queueing model, we determine how much the firm can gain by explicitly accounting for the status of its production facility in making pricing decisions. Specifically, we examine three pricing policies: (1) static, state‐independent pricing, (2) constant pricing up to a cutoff state, and (3) general state‐dependent pricing. We determine properties of each policy, and illustrate numerically the financial gains that the firm can achieve by following each policy as compared with simpler policies. Our main result is that constant pricing up to a cutoff state can dramatically outperform a state‐independent policy, while at the same time achieving most of the increase in revenue achievable from general state‐dependent pricing. Thus, we find that constant pricing up to a cutoff state presents an attractive tradeoff between ease of implementation and revenue gain. When the costs of policy design and implementation are taken into account, this simple heuristic may actually outperform general state‐dependent pricing in some settings.
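A bare-bones numerical sketch of policy (2), constant pricing up to a cutoff state: treat the system as a birth-death chain in which fill-in customers are quoted a price only while the backlog is below the cutoff, and compute the revenue rate from the stationary distribution. The acceptance function, capacity, and normalized core price are invented for illustration, and the paper's delay-sensitivity and cost terms are omitted.

```python
import numpy as np

def cutoff_pricing_revenue(lam_core, lam_fill, mu, K, core_price,
                           fill_price, cutoff, accept_prob):
    """Revenue rate of a make-to-order queue with capacity K: fill-in customers
    are quoted `fill_price` while fewer than `cutoff` orders are in the system
    and are turned away otherwise; core customers always pay the negotiated
    `core_price` (lost only when the system is full).  Stationary probabilities
    come from the standard birth-death recursion."""
    fill_rate = lam_fill * accept_prob(fill_price)          # effective fill-in demand
    birth = [lam_core + (fill_rate if n < cutoff else 0.0) for n in range(K)]
    pi = np.ones(K + 1)
    for n in range(K):
        pi[n + 1] = pi[n] * birth[n] / mu
    pi /= pi.sum()
    core_revenue = core_price * lam_core * (1.0 - pi[K])    # admitted core customers
    fill_revenue = fill_price * fill_rate * pi[:cutoff].sum()
    return core_revenue + fill_revenue

# toy sweep over the fill-in price under an invented linear acceptance probability
accept = lambda p: max(0.0, 1.0 - p / 2.0)
for p in (0.5, 1.0, 1.5):
    rev = cutoff_pricing_revenue(lam_core=1.0, lam_fill=0.8, mu=2.5, K=20,
                                 core_price=1.0, fill_price=p, cutoff=5, accept_prob=accept)
    print(p, round(rev, 4))
```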

20.
We present a flexible and scalable method for computing global solutions of high‐dimensional stochastic dynamic models. Within a time iteration or value function iteration setup, we interpolate functions using an adaptive sparse grid algorithm. With increasing dimensions, sparse grids grow much more slowly than standard tensor product grids. Moreover, adaptivity adds a second layer of sparsity, as grid points are added only where they are most needed, for instance, in regions with steep gradients or at nondifferentiabilities. To further speed up the solution process, our implementation is fully hybrid parallel, combining distributed and shared memory parallelization paradigms, and thus permits an efficient use of high‐performance computing architectures. To demonstrate the broad applicability of our method, we solve two very different types of dynamic models: first, high‐dimensional international real business cycle models with capital adjustment costs and irreversible investment; second, multiproduct menu‐cost models with temporary sales and economies of scope in price setting.
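As a one-dimensional cartoon of the adaptivity criterion only (not the paper's multi-dimensional sparse-grid algorithm or its hybrid parallel implementation): a grid point's children are added only where its hierarchical surplus exceeds a tolerance, so points concentrate near kinks or steep regions. The basis here assumes zero boundary values, and the test function is invented.

```python
import numpy as np

def hat(x, center, width):
    """Piecewise-linear hierarchical basis function with half-support `width`."""
    return np.maximum(0.0, 1.0 - np.abs(x - center) / width)

def adaptive_interpolant(f, tol=1e-3, max_level=12):
    """Refine a point's two children only if its hierarchical surplus exceeds `tol`,
    so grid points concentrate where f is least smooth.  Assumes f(0) = f(1) = 0
    (no boundary basis functions are included in this cartoon)."""
    nodes = []                                 # (center, width, surplus) triples
    def interp(x):
        return sum(s * hat(x, c, w) for c, w, s in nodes) if nodes else 0.0
    frontier = [(0.5, 0.5, 1)]                 # the single level-1 point
    while frontier:
        center, width, level = frontier.pop()
        surplus = f(center) - interp(center)   # only coarser ancestor hats are nonzero here
        nodes.append((center, width, surplus))
        if abs(surplus) > tol and level < max_level:
            frontier += [(center - width / 2, width / 2, level + 1),
                         (center + width / 2, width / 2, level + 1)]
    return nodes, interp

f = lambda x: x * (1 - x) * abs(x - 0.3) ** 0.5   # derivative blows up at x = 0.3
nodes, g = adaptive_interpolant(f)
grid_x = np.linspace(0.0, 1.0, 1001)
print(len(nodes), max(abs(f(x) - g(x)) for x in grid_x))
```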
