Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
In this article we introduce efficient Wald tests for testing the null hypothesis of the unit root against the alternative of the fractional unit root. In a local alternative framework, the proposed tests are locally asymptotically equivalent to the optimal Robinson Lagrange multiplier tests. Our results contrast with the tests for fractional unit roots, introduced by Dolado, Gonzalo, and Mayoral, which are inefficient. In the presence of short range serial correlation, we propose a simple and efficient two-step test that avoids the estimation of a nonlinear regression model. In addition, the first-order asymptotic properties of the proposed tests are not affected by the preestimation of short or long memory parameters.

2.
This paper develops an asymptotic theory of inference for an unrestricted two-regime threshold autoregressive (TAR) model with an autoregressive unit root. We find that the asymptotic null distribution of Wald tests for a threshold is nonstandard and different from the stationary case, and suggest basing inference on a bootstrap approximation. We also study the asymptotic null distributions of tests for an autoregressive unit root, and find that they are nonstandard and dependent on the presence of a threshold effect. We propose both asymptotic and bootstrap-based tests. These tests and distribution theory allow for the joint consideration of nonlinearity (thresholds) and nonstationarity (unit roots). Our limit theory is based on a new set of tools that combine unit root asymptotics with empirical process methods. We work with a particular two-parameter empirical process that converges weakly to a two-parameter Brownian motion. Our limit distributions involve stochastic integrals with respect to this two-parameter process. This theory is entirely new and may find applications in other contexts. We illustrate the methods with an application to the U.S. monthly unemployment rate. We find strong evidence of a threshold effect. The point estimates suggest that the threshold effect is in the short-run dynamics, rather than in the dominant root. While the conventional ADF test for a unit root is insignificant, our TAR unit root tests are arguably significant. The evidence is quite strong that the unemployment rate is not a unit root process, and there is considerable evidence that the series is a stationary TAR process.

3.
In this paper we investigate methods for testing the existence of a cointegration relationship among the components of a nonstationary fractionally integrated (NFI) vector time series. Our framework generalizes previous studies restricted to unit root integrated processes and permits simultaneous analysis of spurious and cointegrated NFI vectors. We propose a modified F-statistic, based on a particular studentization, which converges weakly under both hypotheses, despite the fact that OLS estimates are only consistent under cointegration. This statistic leads to a Wald-type test of cointegration when combined with a narrow band GLS-type estimate. Our semiparametric methodology allows consistent testing of the spurious regression hypothesis against the alternative of fractional cointegration without prior knowledge of the memory of the original series, their short run properties, the cointegrating vector, or the degree of cointegration. This semiparametric aspect of the modeling does not lead to an asymptotic loss of power, permitting the Wald statistic to diverge faster under the alternative of cointegration than when testing for a hypothesized cointegration vector. In our simulations we show that the method has power comparable to customary procedures under the unit root cointegration setup, and maintains good properties in a general framework where other methods may fail. We illustrate our method by testing the cointegration hypothesis of nominal GNP and simple-sum (M1, M2, M3) monetary aggregates.

4.
Omega, 2002, 30(2): 97-108
In this paper, we consider optimal market timing strategies under transaction costs. We assume that the asset's return follows an autoregressive model and use long-term investment growth as the objective of a market timing strategy which entails the shifting of funds between a risky asset and a riskless asset. We give the optimal trading strategy for a finite investment horizon, and analyze its limiting behavior. For a finite horizon, the optimal decision in each step depends on two threshold values. If the return today is between the two values, nothing needs to be done; otherwise funds will be shifted from one asset to the other, depending on which threshold value is exceeded. When the investment horizon tends to infinity, the optimal strategy converges to a stationary policy, which is shown to be closely related to a well-known technical trading rule, called the Momentum Index trading rule. An integral equation for the two threshold values is given. Numerical results for the limiting stationary strategy are presented. The results confirm the obvious guess that the no-transaction region grows as the transaction cost increases. Finally, the limiting stationary strategy is applied to data from the Hang Seng Index Futures market in Hong Kong. The out-of-sample performance of the limiting stationary strategy is found to be better than the simple strategy used in the literature, which is based on a one-step-ahead forecast of return.
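The two-threshold decision rule described above is easy to state in code. The sketch below is a minimal illustration, not the paper's derived policy: it assumes an AR(1) return process, and the threshold pair (lo, hi) is a hypothetical stand-in for the values the paper obtains from the integral equation.

```python
import numpy as np

# Minimal sketch of a two-threshold market timing rule (assumed AR(1)
# returns; lo/hi are hypothetical, not the paper's solved thresholds).
rng = np.random.default_rng(0)
a, b, sigma = 0.0, 0.3, 0.01           # assumed AR(1) parameters
lo, hi = -0.005, 0.005                 # assumed threshold pair

r = np.zeros(500)
for t in range(499):                   # simulate AR(1) returns
    r[t + 1] = a + b * r[t] + sigma * rng.standard_normal()

position, trades = 0, 0                # 0 = riskless asset, 1 = risky asset
for t in range(500):
    if r[t] > hi and position == 0:    # signal above upper threshold: buy in
        position, trades = 1, trades + 1
    elif r[t] < lo and position == 1:  # signal below lower threshold: exit
        position, trades = 0, trades + 1
    # lo <= r[t] <= hi: inside the no-transaction region, hold
print(f"trades triggered: {trades}")
```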

5.
We propose a novel statistic for conducting joint tests on all the structural parameters in instrumental variables regression. The statistic is straightforward to compute and equals a quadratic form of the score of the concentrated log-likelihood. It therefore attains its minimal value of zero at the maximum likelihood estimator. The statistic has a χ2 limiting distribution with a degrees-of-freedom parameter equal to the number of structural parameters. The limiting distribution does not depend on nuisance parameters. The statistic overcomes the deficiencies of the Anderson-Rubin statistic, whose limiting distribution has a degrees-of-freedom parameter equal to the number of instruments, and of the likelihood-based Wald, likelihood ratio, and Lagrange multiplier statistics, whose limiting distributions depend on nuisance parameters. Size and power comparisons reveal that the statistic is an (asymptotically) size-corrected likelihood ratio statistic. We apply the statistic to the Angrist-Krueger (1991) data and find results similar to those in Staiger and Stock (1997).

6.
We study a special environmental producer responsibility policy for the Chinese electronics industry that is based on awarding a per-unit subsidy to qualified returned electronic products and ensuring a minimum producer collection volume while allowing larger collection volumes. Based on a real application from a Chinese electronics company that produces LCD TVs, our paper studies the optimal design of the product's reverse supply chain when there is flexibility in setting the inspection locations of the returned products and flexibility in the volume of returned products collected. The problem is modeled as a nonlinear mixed-integer program and an efficient outer approximation-based solution approach is proposed. Analytical results are derived and extensive numerical experiments based on this real application are conducted. Observations novel to the reverse logistics literature relate to the testing location decisions (upstream or downstream) and the optimal collection volumes of returned products. In particular, we show how the government can stimulate the collection of returned products by increasing the unit subsidy, and we find that the company's marginal benefit from a higher subsidy increases in a superlinear fashion. Furthermore, the highest collection volumes may not occur at the highest quality level of returned products for capacitated remanufacturers. The company can also be incentivized to increase the collection of returned products by permitting flexible testing locations. We also observe how the optimal testing locations vary for different levels of unit subsidy and different ratios of qualified and non-qualified returned products. Finally, conclusions and future research directions are provided.

7.
The asymptotic validity of tests is usually established by making appropriate primitive assumptions, which imply the weak convergence of a specific function of the data, and an appeal to the continuous mapping theorem. This paper, instead, takes the weak convergence of some function of the data to a limiting random element as the starting point and studies efficiency in the class of tests that remain asymptotically valid for all models that induce the same weak limit. It is found that efficient tests in this class are simply given by efficient tests in the limiting problem—that is, with the limiting random element assumed observed—evaluated at sample analogues. Efficient tests in the limiting problem are usually straightforward to derive, even in nonstandard testing problems. What is more, their evaluation at sample analogues typically yields tests that coincide with suitably robustified versions of optimal tests in canonical parametric versions of the model. This paper thus establishes an alternative and broader sense of asymptotic efficiency for many previously derived tests in econometrics, such as tests for unit roots, parameter stability tests, and tests about regression coefficients under weak instruments.

8.
Louis Anthony Cox, Jr. Risk Analysis, 2007, 27(5): 1083-1086
Hansen et al. (2007) recently assessed the historical performance of the precautionary principle in 88 specific cases, concluding that "applying our definition of a regulatory false positive, we were able to identify only four cases that fit the definition of a false positive." Empirically evaluating how prone the precautionary principle is to classify nonproblems as problems ("false positives") is an excellent idea. Yet, Hansen et al.'s implementation of this idea applies a diverse set of questionable criteria to label many highly uncertain risks as "real" even when no real or potential harm has actually been demonstrated. Examples include treating each of the following as reasons to categorize risks as "real": considering that a company's actions contaminated its own product; lack of a known exposure threshold for health effects; occurrence of a threat; treating deliberately conservative (upper-bound) regulatory assumptions as if they were true values; treating assumed exposures of children to contaminated soils (by ingestion) as evidence that feared dioxin risks are real; and treating claimed (sometimes ambiguous) epidemiological associations as if they were known to be true causal relations. Such criteria can classify even nonexistent and unknown risks as "real," providing an alternative possible explanation for why the authors failed to find more false positives, even if they exist.

9.
We propose a novel technique to boost the power of testing a high-dimensional vector H0 : θ = 0 against sparse alternatives where the null hypothesis is violated by only a few components. Existing tests based on quadratic forms, such as the Wald statistic, often suffer from low power due to the accumulation of errors in estimating high-dimensional parameters. More powerful tests for sparse alternatives, such as thresholding and extreme value tests, on the other hand, require either stringent conditions or the bootstrap to derive the null distribution, and often suffer from size distortions due to slow convergence. Based on a screening technique, we introduce a "power enhancement component," which is zero under the null hypothesis with high probability but diverges quickly under sparse alternatives. The proposed test statistic combines the power enhancement component with an asymptotically pivotal statistic, and strengthens the power under sparse alternatives. The null distribution does not require stringent regularity conditions, and is completely determined by that of the pivotal statistic. The proposed methods are then applied to testing factor pricing models and validating cross-sectional independence in panel data models.
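As a rough illustration of the construction described above, the following sketch combines a screening-based enhancement component with a standardized quadratic-form statistic. The threshold choice and scaling are assumptions for illustration, not the authors' exact recipe, and the estimates are assumed to have unit asymptotic variance per component.

```python
import numpy as np
from scipy import stats

# Sketch of a power-enhanced test (threshold and scaling are assumptions).
# theta_hat: d-dimensional estimates, assumed ~ N(theta, I/n) under H0.
def power_enhanced_test(theta_hat, n):
    d = len(theta_hat)
    # screening threshold chosen to dominate the max noise level under H0
    delta = np.sqrt(np.log(d) / n) * np.log(np.log(n))
    screened = np.abs(theta_hat) > delta
    # power enhancement component: zero w.h.p. under H0, diverges under
    # sparse alternatives where a few components pass the screen
    J0 = np.sqrt(d) * n * np.sum(theta_hat[screened] ** 2)
    # asymptotically pivotal part: standardized Wald quadratic form
    W = n * np.sum(theta_hat ** 2)          # ~ chi2_d under H0
    J1 = (W - d) / np.sqrt(2 * d)           # ~ N(0, 1) for large d
    J = J0 + J1
    return J, 1 - stats.norm.cdf(J)         # critical value from J1 alone

rng = np.random.default_rng(0)
n, d = 200, 500
theta_hat = rng.standard_normal(d) / np.sqrt(n)  # null-like estimation noise
theta_hat[:3] += 0.5                             # three sparse violations
print(power_enhanced_test(theta_hat, n))
```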

10.
Weng Kee Wong. Risk Analysis, 2011, 31(12): 1949-1960
Hormesis is a widely observed phenomenon in many branches of life sciences, ranging from toxicology studies to agronomy, with obvious public health and risk assessment implications. We address optimal experimental design strategies for determining the presence of hormesis in a controlled environment using the recently proposed Hunt-Bowman model. We propose alternative models that have an implicit hormetic threshold, discuss their advantages over current models, and construct and study properties of optimal designs for (i) estimating model parameters, (ii) estimating the threshold dose, and (iii) testing for the presence of hormesis. We also determine maximin optimal designs that maximize the minimum of the design efficiencies when we have multiple design criteria or there is model uncertainty where we have a few plausible models of interest. We apply these optimal design strategies to a teratology study and show that the proposed designs outperform the implemented design by a wide margin for many situations.

11.
We study testable implications for the dynamics of consumption and income of models in which first-best allocations are not achieved because of a moral hazard problem with hidden saving. We show that in this environment, agents typically achieve more insurance than that obtained under self-insurance with a single asset. Consumption allocations exhibit "excess smoothness," as found and defined by Campbell and Deaton (1989). We argue that excess smoothness, in this context, is equivalent to a violation of the intertemporal budget constraint considered in a Bewley economy (with a single asset). We also show parameterizations of our model in which we can obtain a closed-form solution for the efficient insurance contract and where the excess smoothness parameter has a structural interpretation in terms of the severity of the moral hazard problem. We present tests of excess smoothness, applied to U.K. microdata and constructed using techniques proposed by Hansen, Roberds, and Sargent (1991) to test the intertemporal budget constraint. Our theoretical model leads us to interpret them as tests of the market structure faced by economic agents. We also construct a test based on the dynamics of the cross-sectional variances of consumption and income that is, in a precise sense, complementary to that based on Hansen, Roberds, and Sargent (1991) and that allows us to estimate the same structural parameter. The results we report are consistent with the implications of the model and are internally coherent.

12.
A Study of the Weak-Form Efficiency of China's Futures Markets
张小艳, 张宗成. 《管理工程学报》, 2007, 21(1): 145-147, 154
Since financial prices following a random walk implies that the market is weak-form efficient, while the presence of a unit root is only a necessary condition for a random walk, this paper exploits this implication by combining unit root tests with autocorrelation tests, and additionally applies variance ratio and multiple variance ratio tests, to study the random walk hypothesis empirically. The aim is to determine whether China's three major futures markets (copper, soybean, and wheat) are weak-form efficient. The results show that the conclusions of the various testing methods are consistent: the log futures price series of the copper, soybean, and wheat futures markets conform to the random walk hypothesis.
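For readers unfamiliar with the variance ratio methodology this item and the next rely on, the following is a minimal sketch of a Lo-MacKinlay-style variance ratio test under homoskedasticity; it is a simplification, and the multiple variance ratio and autocorrelation tests applied in the paper are not reproduced.

```python
import numpy as np
from scipy import stats

# Minimal Lo-MacKinlay-style variance ratio test for the random walk
# hypothesis (homoskedastic version, no small-sample corrections).
def variance_ratio_test(log_prices, q):
    p = np.asarray(log_prices, dtype=float)
    n = len(p) - 1
    mu = (p[-1] - p[0]) / n                    # mean one-period change
    var1 = np.mean((np.diff(p) - mu) ** 2)     # one-period variance
    varq = np.mean((p[q:] - p[:-q] - q * mu) ** 2) / q
    vr = varq / var1                           # close to 1 under a random walk
    se = np.sqrt(2 * (2 * q - 1) * (q - 1) / (3 * q * n))
    z = (vr - 1) / se                          # asymptotically N(0, 1)
    return vr, z, 2 * (1 - stats.norm.cdf(abs(z)))

# e.g. on a simulated random-walk log price series:
rng = np.random.default_rng(1)
log_prices = np.cumsum(0.01 * rng.standard_normal(1000))
print(variance_ratio_test(log_prices, q=4))
```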

13.
Futures Market Efficiency: Theory and Empirical Tests
Taking the price behavior of six futures contracts selected from China's futures markets as its object, this paper combines unit root tests with autocorrelation tests, together with variance ratio and multiple variance ratio tests, to study the random walk hypothesis empirically, with the aim of determining whether the six major domestic futures markets, including copper, soybean, and wheat, are efficient. The results show that the conclusions of the various testing methods are not fully consistent: with the exception of Shanghai natural rubber, the log futures price series of the major futures markets cannot reject the weak-form efficient market hypothesis.

14.
Psychosocial safety climate (PSC) refers to a specific organizational climate for the psychological health of workers. It is largely determined by management and, at low levels, is proposed as a latent pathogen for psychosocial risk factors and psychological strain. Using an extended Job Demands-Control-Support framework, we predicted the (24-month) cross-level effects of PSC on psychological strain via work conditions. We used a novel design whereby data from two unrelated samples of nurses working in remote areas were used across time (N = 202 at Time 1; N = 163 at Time 2), matched at the work unit level (N = 48). Using hierarchical linear modelling we found that unit PSC assessed by nurses predicted work conditions (workload, control, supervisor support) and psychological strain in different nurses in the same work unit 24 months later. There was evidence that the between-group relationship between unit PSC and psychological strain was mediated via Time 2 work conditions (workload, job control) as well as Time 1 emotional demands. The results support a multilevel work stress model with PSC as a plausible primary cause, or "cause of the causes," of work-related strain. The study adds to the literature that identifies organizational contextual factors as origins of the work stress process.

15.
This paper considers issues related to estimation, inference, and computation with multiple structural changes that occur at unknown dates in a system of equations. Changes can occur in the regression coefficients and/or the covariance matrix of the errors. We also allow arbitrary restrictions on these parameters, which permits the analysis of partial structural change models, common breaks that occur in all equations, breaks that occur in a subset of equations, and so forth. The method of estimation is quasi-maximum likelihood based on Normal errors. The limiting distributions are obtained under more general assumptions than previous studies. For testing, we propose likelihood ratio type statistics to test the null hypothesis of no structural change and to select the number of changes. Structural change tests with restrictions on the parameters can be constructed to achieve higher power when prior information is present. For computation, an algorithm for an efficient procedure is proposed to construct the estimates and test statistics. We also introduce a novel locally ordered breaks model, which allows the breaks in different equations to be related yet not occurring at the same dates.
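A core computational step in this literature is the global minimization of the sum of squared residuals over all admissible break-date configurations by dynamic programming. The sketch below illustrates only that step, for the simplest mean-shift case with a known number of breaks m; the paper's quasi-maximum likelihood treatment of multivariate systems, covariance breaks, and cross-equation restrictions is far richer.

```python
import numpy as np

# Hedged sketch: dynamic-programming estimation of m break dates by global
# SSR minimization for a univariate mean-shift model (not the paper's
# general system estimator).
def estimate_breaks(y, m, h):
    """Return the m break positions minimizing total SSR; h = min segment length."""
    n = len(y)
    ssr = np.full((n, n), np.inf)     # ssr[i, j]: SSR of constant fit on y[i..j]
    for i in range(n):
        s = s2 = 0.0
        for j in range(i, n):
            s += y[j]; s2 += y[j] ** 2
            ssr[i, j] = s2 - s * s / (j - i + 1)
    dp = np.full((m + 1, n), np.inf)        # dp[k, j]: best SSR splitting
    arg = np.zeros((m + 1, n), dtype=int)   # y[0..j] into k+1 segments
    dp[0] = ssr[0]
    for k in range(1, m + 1):
        for j in range((k + 1) * h - 1, n):
            # candidate k-th break positions: prefix y[0..i] holds k segments,
            # and the last segment y[i+1..j] has length >= h
            cand = dp[k - 1, k * h - 1 : j - h + 1] + ssr[k * h : j - h + 2, j]
            i = int(np.argmin(cand))
            dp[k, j], arg[k, j] = cand[i], k * h - 1 + i
    breaks, j = [], n - 1             # backtrack the optimal break dates
    for k in range(m, 0, -1):
        j = arg[k, j]
        breaks.append(j)
    return sorted(breaks), dp[m, n - 1]

rng = np.random.default_rng(2)
y = np.r_[np.zeros(50), np.full(50, 2.0), np.full(50, -1.0)] + rng.normal(0, 0.5, 150)
print(estimate_breaks(y, m=2, h=10))      # recovers breaks near 49 and 99
```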

16.
This paper considers large N and large T panel data models with unobservable multiple interactive effects, which are correlated with the regressors. In earnings studies, for example, workers' motivation, persistence, and diligence combine to influence earnings in addition to the usual argument of innate ability. In macroeconomics, interactive effects represent unobservable common shocks and their heterogeneous impacts on cross sections. We consider identification, consistency, and the limiting distribution of the interactive-effects estimator. Under both large N and large T, the estimator is shown to be consistent; this result is valid in the presence of correlations and heteroskedasticities of unknown form in both dimensions. We also derive the constrained estimator and its limiting distribution, imposing additivity coupled with interactive effects. The problem of testing additive versus interactive effects is also studied. In addition, we consider identification and estimation of models in the presence of a grand mean, time-invariant regressors, and common regressors. Given identification, the rate of convergence and limiting results continue to hold.
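Estimators in this setting are commonly computed by iterating between least squares for the slope coefficients and principal components extraction of the factors from the residuals. The following is a hedged sketch of that iteration under simplifying assumptions: a known number of factors r, homogeneous slopes, and no grand mean or time-invariant regressors.

```python
import numpy as np

# Illustrative sketch of an iterative principal-components scheme for an
# interactive-effects panel: alternate pooled OLS for beta with extraction
# of r factors from the T x N residual matrix. Y is T x N; X is T x N x p.
def interactive_effects(Y, X, r, n_iter=200, tol=1e-8):
    T, N, p = X.shape
    Xmat = X.reshape(T * N, p)
    beta = np.linalg.lstsq(Xmat, Y.reshape(T * N), rcond=None)[0]
    for _ in range(n_iter):
        W = Y - X @ beta                       # residuals given current beta
        vals, vecs = np.linalg.eigh(W @ W.T / (N * T))
        F = np.sqrt(T) * vecs[:, -r:]          # factors: scaled top-r eigenvectors
        Lam = W.T @ F / T                      # N x r factor loadings
        z = (Y - F @ Lam.T).reshape(T * N)     # defactored outcome
        beta_new = np.linalg.lstsq(Xmat, z, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new, F, Lam
        beta = beta_new
    return beta, F, Lam
```

In practice the number of factors r is unknown and is typically chosen by information criteria before running such an iteration.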

17.
A nonparametric, residual-based block bootstrap procedure is proposed in the context of testing for integrated (unit root) time series. The resampling procedure is based on weak assumptions on the dependence structure of the stationary process driving the random walk and successfully generates unit root integrated pseudo-series retaining the important characteristics of the data. It is more general than previous bootstrap approaches to the unit root problem in that it allows for a very wide class of weakly dependent processes and it is not based on any parametric assumption on the process generating the data. As a consequence the procedure can accurately capture the distribution of many unit root test statistics proposed in the literature. Large sample theory is developed and the asymptotic validity of the block bootstrap-based unit root testing is shown via a bootstrap functional limit theorem. Applications to some particular test statistics of the unit root hypothesis, i.e., least squares and Dickey-Fuller type statistics, are given. The power properties of our procedure are investigated and compared to those of alternative bootstrap approaches to carry out the unit root test. Some simulations examine the finite sample performance of our procedure.
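The resampling scheme can be sketched compactly: estimate the autoregressive coefficient, block-resample the centered residuals, and rebuild pseudo-series that are exact unit root processes. The code below is an illustrative simplification (no deterministic terms, a fixed block length b, and a Dickey-Fuller t-statistic), not the paper's full procedure.

```python
import numpy as np

# Illustrative residual-based block bootstrap unit root test (simplified).
def df_tstat(y):
    ylag, ylead = y[:-1], y[1:]
    rho = ylag @ ylead / (ylag @ ylag)
    e = ylead - rho * ylag
    s2 = e @ e / (len(e) - 1)
    return (rho - 1.0) / np.sqrt(s2 / (ylag @ ylag))

def block_bootstrap_ur_test(y, b=10, B=999, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    rho = y[:-1] @ y[1:] / (y[:-1] @ y[:-1])   # fitted AR(1) coefficient
    e = y[1:] - rho * y[:-1]
    e -= e.mean()                              # centered residuals
    stat = df_tstat(y)
    nblocks = int(np.ceil((n - 1) / b))
    boot = np.empty(B)
    for i in range(B):
        starts = rng.integers(0, len(e) - b + 1, nblocks)
        estar = np.concatenate([e[s:s + b] for s in starts])[: n - 1]
        ystar = np.concatenate(([0.0], np.cumsum(estar)))  # exact unit root series
        boot[i] = df_tstat(ystar)
    return stat, float(np.mean(boot <= stat))  # left-tailed bootstrap p-value
```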

18.
Entropy is a classical statistical concept with appealing properties. Establishing asymptotic distribution theory for smoothed nonparametric entropy measures of dependence has so far proved challenging. In this paper, we develop an asymptotic theory for a class of kernel-based smoothed nonparametric entropy measures of serial dependence in a time-series context. We use this theory to derive the limiting distribution of Granger and Lin's (1994) normalized entropy measure of serial dependence, which was previously not available in the literature. We also apply our theory to construct a new entropy-based test for serial dependence, providing an alternative to Robinson's (1991) approach. To obtain accurate inferences, we propose and justify a consistent smoothed bootstrap procedure. The naive bootstrap is not consistent for our test. Our test is useful in, for example, testing the random walk hypothesis, evaluating density forecasts, and identifying important lags of a time series. It is asymptotically locally more powerful than Robinson's (1991) test, as is confirmed in our simulation. An application to the daily S&P 500 stock price index illustrates our approach.
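As a hedged illustration of a kernel-smoothed entropy-type measure of serial dependence, the sketch below computes a sample plug-in estimate of the squared-Hellinger functional 0.5 * E_f[(1 - sqrt(g(x)h(z)/f(x,z)))^2] between the joint density f of (y_{t-j}, y_t) and the product of its marginals g and h; the normalization, bandwidth theory, and smoothed bootstrap developed in the paper are not reproduced.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hedged plug-in sketch of an entropy-type serial dependence measure at lag
# j: roughly 0 for serially independent data, positive under dependence.
# Bandwidths are scipy's defaults, a simplifying assumption.
def serial_dependence_entropy(y, j=1):
    x, z = y[:-j], y[j:]                       # lagged pairs (y_{t-j}, y_t)
    fjoint = gaussian_kde(np.vstack([x, z]))   # kernel joint density
    fx, fz = gaussian_kde(x), gaussian_kde(z)  # kernel marginal densities
    ratio = fx(x) * fz(z) / fjoint(np.vstack([x, z]))
    return 0.5 * np.mean((1.0 - np.sqrt(ratio)) ** 2)

rng = np.random.default_rng(3)
iid = rng.standard_normal(500)                 # no serial dependence
ar = np.zeros(500)
for t in range(499):                           # AR(1) with strong dependence
    ar[t + 1] = 0.6 * ar[t] + rng.standard_normal()
print(serial_dependence_entropy(iid), serial_dependence_entropy(ar))
```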

19.
Using panel data on 1,435 A-share listed companies from 2011 to 2018 and taking the degree of corporate financialization as the threshold variable, this paper studies the nonlinear effect of R&D investment on firm performance. The results show that: (1) corporate financialization introduces a severe lag into the performance-enhancing effect of R&D investment, and R&D investment has no positive effect on current-year firm performance; (2) R&D investment has a double-threshold effect on one-year-ahead performance, with an inverted-N-shaped relationship, and promotes one-year-ahead performance within the second regime; it has a single-threshold effect on two-year-ahead performance and promotes it within the first regime; (3) only within a moderate range of financialization does R&D investment promote future firm performance. The paper shows that the distribution of listed companies within the optimal financialization range is heterogeneous across regions, industries, and ownership types, and, based on the empirical results, offers policy recommendations to help firms manage R&D investment rationally and prevent a shift away from the real economy toward finance.

20.
This paper tests whether the performance of Taiwanese equity mutual funds exhibits asymmetric effects above and below a threshold, and further explores how, under different fund sizes, the characteristics of investment trust companies and the size of their boards of directors and supervisors affect fund performance. For the empirical model, Hansen's panel threshold regression is adopted, with equity funds issued by Taiwanese investment trust companies from 2005 to 2007 as the sample. Empirically, management fees and fund returns display a significant nonlinear asymmetric relationship whose strength depends on fund size: in sample groups within different fund-size regimes, higher management fees are associated with negative effects on fund returns of differing magnitude. When fund size is above versus below the threshold value, the effects of company characteristics and board size on fund returns also differ significantly, particularly the proportion of board seats held by major shareholders, which shows a significant effect of opposite sign across the two groups.
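The core of Hansen-style threshold regression is a grid search over candidate threshold values, fitting a two-regime least-squares model at each candidate and keeping the SSR-minimizing split. The sketch below illustrates only that step in a pooled cross-section; panel fixed effects and the bootstrap test for a threshold effect are omitted, and all variable names are hypothetical.

```python
import numpy as np

# Hedged sketch of the split-and-fit step of threshold regression: grid
# search over quantile-based threshold candidates for the SSR-minimizing
# two-regime least-squares fit (no fixed effects, no threshold-effect test).
def fit_threshold(y, x, q, trim=0.15):
    cands = np.quantile(q, np.linspace(trim, 1 - trim, 100))
    best = (np.inf, None, None)
    for g in cands:
        lo = q <= g                            # regime indicator
        X = np.column_stack([np.ones_like(x), x * lo, x * ~lo])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        ssr = float(np.sum((y - X @ coef) ** 2))
        if ssr < best[0]:
            best = (ssr, g, coef)
    return best  # (SSR, threshold, [const, slope_low, slope_high])

rng = np.random.default_rng(4)
q = rng.uniform(0, 1, 400)                     # threshold variable (e.g. fund size)
x = rng.standard_normal(400)                   # regressor (e.g. management fee)
y = np.where(q <= 0.5, -1.0, -0.2) * x + 0.3 * rng.standard_normal(400)
print(fit_threshold(y, x, q)[1:])              # recovers threshold near 0.5
```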
