Similar Documents
20 similar documents retrieved.
1.
This paper investigates the interaction between aggregation and nonlinearity through a Monte Carlo study. Various tests for neglected nonlinearity are used to compare the power of the tests across different nonlinear models and different levels of aggregation. Three types of aggregation are considered: cross-sectional aggregation, temporal aggregation, and systematic sampling. Aggregation tends to attenuate nonlinearity. The degree to which nonlinearity is reduced depends on the importance of the common factor and on the extent of the aggregation. The effect is larger when the common factor is smaller and when the extent of the aggregation is larger.
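
A minimal simulation sketch of this idea in Python, assuming a bilinear data-generating process, an aggregation level of m = 3, and a generic RESET-type auxiliary-regression check for neglected nonlinearity; none of these choices reproduce the paper's actual designs.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def bilinear_series(T, a=0.4, b=0.4):
    """Bilinear process y_t = a*y_{t-1} + b*y_{t-1}*e_{t-1} + e_t (a nonlinear DGP)."""
    e = rng.standard_normal(T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = a * y[t - 1] + b * y[t - 1] * e[t - 1] + e[t]
    return y

def neglected_nonlinearity_pvalue(y):
    """RESET-type F test: add squared and cubed fitted values to a linear AR(1)."""
    Y, X = y[1:], sm.add_constant(y[:-1])
    lin = sm.OLS(Y, X).fit()
    aug = sm.OLS(Y, np.column_stack([X, lin.fittedvalues**2, lin.fittedvalues**3])).fit()
    return aug.compare_f_test(lin)[1]          # p-value for the added nonlinear terms

def temporal_aggregate(y, m=3):
    """Non-overlapping m-period averages (flow aggregation)."""
    return y[: len(y) // m * m].reshape(-1, m).mean(axis=1)

def systematic_sample(y, m=3):
    """Every m-th observation (point-in-time sampling)."""
    return y[::m]

y = bilinear_series(3000)
for label, series in [("disaggregate", y),
                      ("temporally aggregated", temporal_aggregate(y)),
                      ("systematically sampled", systematic_sample(y))]:
    print(f"{label:>22s}: p = {neglected_nonlinearity_pvalue(series):.3f}")
```

A single replication only illustrates the mechanism; the attenuation of nonlinearity under aggregation would be quantified by repeating this over many replications and tabulating rejection rates.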

2.
The data are n independent random binomial events, each resulting in success or failure. The event outcomes are believed to be trials from a binomial distribution with success probability p, and tests of p=1/2 are desired. However, there is the possibility that some unidentified event has a success probability different from the common value p for the other n-1 events. Then, tests of whether this common p equals 1/2 are desired. Fortunately, two-sided tests can be obtained that are simultaneously applicable for both situations. That is, the significance level for a test is the same when one event has a different probability as when all events have the same probability. These tests are the usual equal-tail tests for p=1/2 (based on n independent trials from a binomial distribution).
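
A short sketch of the equal-tail exact test described here, assuming hypothetical counts (38 successes out of 50 events):

```python
from scipy.stats import binom

k, n = 38, 50                                      # hypothetical: 38 successes in 50 events
p_lower = binom.cdf(k, n, 0.5)                     # P(X <= k) under H0: p = 1/2
p_upper = binom.sf(k - 1, n, 0.5)                  # P(X >= k) under H0: p = 1/2
p_two_sided = min(1.0, 2 * min(p_lower, p_upper))  # equal-tail two-sided p-value
print(p_two_sided)
```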

3.
It is well known that, even if all forecasters are rational, unbiasedness tests using consensus forecasts are inconsistent because forecasters have private information. However, if all forecasters face a common realization, pooled estimators are also inconsistent. In contrast, we show that when predictions and realizations are integrated and cointegrated, microhomogeneity ensures that consensus and pooled estimators are consistent. Therefore, contrary to claims in the literature, in the absence of microhomogeneity, pooling is not a solution to the aggregation problem. We reject microhomogeneity for a number of forecasts from the Survey of Professional Forecasters. Therefore, for these variables unbiasedness can only be tested at the individual level.
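
For reference, the standard individual-level unbiasedness (Mincer-Zarnowitz) regression, the level of testing the abstract argues remains valid, can be sketched as follows; the simulated data, variable names, and HAC bandwidth are illustrative assumptions, and the article's cointegration-based treatment of integrated forecasts and realizations is not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical individual forecaster: realization = forecast + noise (an unbiased case)
T = 200
fcst = rng.standard_normal(T).cumsum()           # an integrated "forecast" series
actual = fcst + rng.standard_normal(T)           # realization sharing the stochastic trend

df = pd.DataFrame({"actual": actual, "fcst": fcst})
res = smf.ols("actual ~ fcst", data=df).fit(cov_type="HAC", cov_kwds={"maxlags": 4})

# Joint unbiasedness restriction: intercept = 0 and slope = 1
print(res.wald_test("Intercept = 0, fcst = 1", use_f=True))
```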

4.
In this paper, we develop Bayes factor based testing procedures for the presence of a correlation or a partial correlation. The proposed Bayesian tests are obtained by restricting the class of alternative hypotheses to maximize the probability of rejecting the null hypothesis when the Bayes factor is larger than a specified threshold. It turns out that they depend simply on the frequentist t-statistics and their associated critical values, and can thus be calculated easily in an Excel spreadsheet, in fact by just adding one more step after the frequentist correlation tests have been performed. In addition, they yield decisions identical to those of the frequentist paradigm, provided that the evidence threshold of the Bayesian tests is determined by the significance level of the frequentist paradigm. We illustrate the performance of the proposed procedures through simulated and real-data examples.
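
The frequentist ingredient the abstract refers to, the correlation t-statistic, is easy to compute; the mapping from a Bayes-factor threshold to a critical value for this statistic is specific to the paper and is not reproduced here. The data below are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.standard_normal(100)
y = 0.3 * x + rng.standard_normal(100)           # hypothetical correlated data

r, p_freq = stats.pearsonr(x, y)
n = len(x)
t_stat = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)  # the t-statistic the Bayesian test reuses
print(f"r = {r:.3f}, t = {t_stat:.2f}, frequentist p = {p_freq:.4f}")

# The paper's procedure (as described) adds one step: reject H0 when |t| exceeds the
# critical value implied by the chosen Bayes-factor threshold; that mapping is not
# reproduced here.
```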

5.
Two different two-sample tests for dispersion differences based on placement statistics are proposed. The means and variances of the test statistics are derived, and asymptotic normality is established for both. Variants of the proposed tests based on reversing the X and Y labels in the test statistic calculations are shown to have different small-sample properties; for both pairs of tests, one member of the pair will be resolving, the other nonresolving. The proposed tests are similar in spirit to the dispersion tests of both Mood and Hollander; comparative simulation results for these four tests are given. For small sample sizes, the powers of the proposed tests are approximately equal to the powers of the tests of both Mood and Hollander for samples from the normal, Cauchy and exponential distributions. The one-sample limiting distributions are also provided, yielding useful approximations to the exact tests when one sample is much larger than the other. A bootstrap test may alternatively be performed. The proposed test statistics may be used with lightly censored data by substituting Kaplan-Meier estimates for the empirical distribution functions.
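
The proposed placement-statistic tests themselves are not reproduced here, but one of the comparison benchmarks named in the abstract, Mood's dispersion test, is available in SciPy; the samples below are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(0, 1.0, size=30)        # hypothetical sample X
y = rng.normal(0, 1.5, size=35)        # hypothetical sample Y with larger dispersion

z, p = stats.mood(x, y, alternative="two-sided")   # Mood's test for scale differences
print(f"Mood z = {z:.2f}, p = {p:.3f}")
```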

6.
This article considers consistent testing of the null hypothesis that the conditional mean of an economic time series is linear in past values. Two specific tests are discussed, the Cramér–von Mises and the Kolmogorov–Smirnov tests. The particular feature of the proposed tests is that the bootstrap is used to estimate the nonstandard asymptotic distributions of the test statistics considered. The tests are justified theoretically by asymptotics, and their finite-sample behavior is studied by means of Monte Carlo experiments. The tests are applied to five U.S. monthly series, and evidence of nonlinearity is found for the first difference of the logarithm of personal income and for the first difference of the unemployment rate. No evidence of nonlinearity is found for the first difference of the logarithm of the U.S. dollar/Japanese yen exchange rate, for the first difference of the 3-month T-bill interest rate, or for the first difference of the logarithm of the M2 money stock. Contrary to typically used tests, the proposed testing procedures are robust to the presence of conditional heteroscedasticity. This may explain the results for the exchange rate and the interest rate.
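
A compact sketch of how bootstrap-based Cramér–von Mises and Kolmogorov–Smirnov statistics for neglected nonlinearity can be built, assuming a linear AR(1) null, a marked-empirical-process construction, and a fixed-regressor wild bootstrap; the article's exact statistics and bootstrap scheme may differ.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)

def cvm_ks_linearity_test(y, B=499):
    """Wild-bootstrap CvM and KS tests of a linear AR(1) conditional mean.

    The statistics are built from the marked empirical process
    R_n(x) = n^{-1/2} * sum_t e_t * 1{y_{t-1} <= x}, evaluated at the sample points.
    The wild bootstrap makes the procedure robust to conditional heteroscedasticity.
    """
    Y, x = y[1:], y[:-1]
    X = sm.add_constant(x)
    fit = sm.OLS(Y, X).fit()
    n = len(Y)

    def statistics(resid):
        marks = np.array([resid[x <= xi].sum() for xi in x]) / np.sqrt(n)
        return (marks**2).mean(), np.abs(marks).max()          # CvM, KS

    cvm0, ks0 = statistics(fit.resid)
    exceed_cvm = exceed_ks = 0
    for _ in range(B):
        e_star = fit.resid * rng.choice([-1.0, 1.0], size=n)   # wild bootstrap errors
        y_star = X @ fit.params + e_star                       # data under the linear null
        resid_star = sm.OLS(y_star, X).fit().resid
        cvm_b, ks_b = statistics(resid_star)
        exceed_cvm += cvm_b >= cvm0
        exceed_ks += ks_b >= ks0
    return (exceed_cvm + 1) / (B + 1), (exceed_ks + 1) / (B + 1)

# Hypothetical threshold AR(1): nonlinear in the conditional mean
T = 400
y, eps = np.zeros(T), rng.standard_normal(T)
for t in range(1, T):
    y[t] = (0.8 if y[t - 1] < 0 else -0.3) * y[t - 1] + eps[t]
print(cvm_ks_linearity_test(y))                                # small p-values expected
```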

7.
We investigate the finite sample properties of the estimator of a persistence parameter of an unobservable common factor when the factor is estimated by the principal components method. When the number of cross-sectional observations is not sufficiently large, relative to the number of time series observations, the autoregressive coefficient estimator of a positively autocorrelated factor is biased downward, and the bias becomes larger for a more persistent factor. Based on theoretical and simulation analyses, we show that bootstrap procedures are effective in reducing the bias, and bootstrap confidence intervals outperform naive asymptotic confidence intervals in terms of the coverage probability.
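
A simulation sketch of the bias and a bootstrap correction, assuming a one-factor panel with an AR(1) factor, principal-components extraction, and a simple panel bootstrap; the paper's procedures are more elaborate, and all settings below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical one-factor panel: x_{it} = lambda_i * f_t + e_{it}, f_t AR(1) with rho = 0.9
T, N, rho = 100, 20, 0.9
f = np.zeros(T)
for t in range(1, T):
    f[t] = rho * f[t - 1] + rng.standard_normal()
lam = rng.uniform(0.5, 1.5, N)
X = np.outer(f, lam) + rng.standard_normal((T, N))

def ar1_coef(z):
    return np.sum(z[1:] * z[:-1]) / np.sum(z[:-1] ** 2)

def pc_factor(X):
    """First principal component of the demeaned T x N panel."""
    Xc = X - X.mean(axis=0)
    u, s, _ = np.linalg.svd(Xc, full_matrices=False)
    return u[:, 0] * s[0]                     # sign is irrelevant for the AR(1) coefficient

f_hat = pc_factor(X)
rho_hat = ar1_coef(f_hat)                     # biased toward zero when N is modest

# Bootstrap: rebuild panels from an AR(1) factor at rho_hat, estimated loadings, and
# resampled idiosyncratic errors; re-extract the factor and re-estimate rho each time.
Xc = X - X.mean(axis=0)
lam_hat = np.linalg.lstsq(f_hat[:, None], Xc, rcond=None)[0].ravel()
E_hat = Xc - np.outer(f_hat, lam_hat)
sigma = np.std(f_hat[1:] - rho_hat * f_hat[:-1], ddof=1)

B, boot = 199, []
for _ in range(B):
    fb = np.zeros(T)
    for t in range(1, T):
        fb[t] = rho_hat * fb[t - 1] + sigma * rng.standard_normal()
    Xb = np.outer(fb, lam_hat) + E_hat[rng.integers(0, T, T), :]
    boot.append(ar1_coef(pc_factor(Xb)))
rho_bc = 2 * rho_hat - np.mean(boot)          # bias-corrected estimate
print(f"true rho = {rho}, PC estimate = {rho_hat:.3f}, bias-corrected = {rho_bc:.3f}")
```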

8.
The present paper discusses how nonparametric tests can be deduced from statistical functionals. Efficient and asymptotically most powerful maximin tests are derived. Their power function is calculated under implicit alternatives given by the functional for one- and two-sample testing problems. It is shown that the asymptotic power function does not depend on the particular implicit direction of the alternatives but only on quantities of the functional. The present approach offers a nonparametric principle for constructing common rank tests, such as the Wilcoxon test, the log-rank test, and the median test, from special two-sample functionals. In addition, it is shown that studentized permutation tests yield asymptotically valid tests for certain extended null hypotheses, given by functionals, which are strictly larger than the common i.i.d. null hypothesis. As examples, tests concerning the von Mises functional and the Wilcoxon two-sample test are treated.
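
A small illustration of the studentized-permutation idea mentioned at the end of the abstract, for the simplest two-sample (mean-difference) functional; the samples and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)

def studentized_perm_test(x, y, B=4999):
    """Two-sample studentized permutation test for the mean-difference functional.

    Studentization makes the permutation test asymptotically valid under the wider
    null of equal means even when variances differ (the 'extended null' idea).
    """
    def t_stat(a, b):
        return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

    t0 = t_stat(x, y)
    pooled = np.concatenate([x, y])
    n = len(x)
    count = 0
    for _ in range(B):
        perm = rng.permutation(pooled)
        count += abs(t_stat(perm[:n], perm[n:])) >= abs(t0)
    return (count + 1) / (B + 1)

x = rng.normal(0.0, 1.0, 25)      # hypothetical samples with unequal variances
y = rng.normal(0.0, 2.0, 40)
print(studentized_perm_test(x, y))
```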

9.
Interest in the interface of nonstationarity and nonlinearity has been increasing in the econometric literature. This paper provides a formal method of testing for nonstationary long memory against the alternative of a particular form of nonlinear ergodic processes; namely, exponential smooth transition autoregressive processes. In this regard, the current paper provides a significant generalization to existing unit root tests by allowing the null hypothesis to encompass a much larger class of nonstationary processes. The asymptotic theory associated with the proposed Wald statistic is derived, and Monte Carlo simulation results confirm that the Wald statistics have reasonably correct size and good power in small samples. In an application to real interest rates and the Yen real exchange rates, we find that the tests are able to distinguish between these competing processes in most cases, supporting the long-run Purchasing Power Parity (PPP) and Fisher hypotheses. But, there are a few cases in which long memory and nonlinear ergodic processes display similar characteristics and are thus confused with each other in small samples.

10.
The classical change point problem is considered, from the invariance point of view. Locally optimal invariant tests are derived for the change in level, when the initial level and the common variance are assumed to be unknown. The tests derived by Chernoff and Zacks (1964) and Gardner (1969), for the change in level, when variance is known, are shown to be locally optimal invariant tests.
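
The locally optimal invariant statistics themselves are not reproduced here; as a generic point of comparison only, the following sketch scans all split points with a two-sample t statistic (unknown common variance) and calibrates the maximum by permutation. The data and settings are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def max_t_changepoint(x, n_perm=999, seed=0):
    """Scan all split points for a shift in level with unknown common variance.

    Returns the most likely change point and a permutation p-value for max |t|.
    This is a generic scan statistic, not the Chernoff-Zacks/Gardner locally
    optimal invariant statistic discussed in the abstract.
    """
    g = np.random.default_rng(seed)
    n = len(x)

    def max_abs_t(z):
        ts = [abs(stats.ttest_ind(z[:k], z[k:], equal_var=True).statistic)
              for k in range(2, n - 1)]
        return max(ts), int(np.argmax(ts)) + 2

    t0, khat = max_abs_t(x)
    count = sum(max_abs_t(g.permutation(x))[0] >= t0 for _ in range(n_perm))
    return khat, (count + 1) / (n_perm + 1)

x = np.concatenate([rng.normal(0, 1, 40), rng.normal(0.8, 1, 40)])   # hypothetical shift
print(max_t_changepoint(x))
```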

11.
Traditional unit root tests tend to spuriously indicate nonstationarity in the presence of structural breaks and nonlinearity. To address this problem, this paper proposes a new flexible Fourier form nonlinear unit root test that incorporates structural breaks and nonlinearity jointly in the test procedure. In this procedure, structural breaks are modeled by means of a Fourier function and nonlinear adjustment is modeled by means of an exponential smooth transition autoregressive (ESTAR) model. The simulation results indicate that the proposed unit root test is more powerful than the Kruse and KSS tests.
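
A sketch of how a statistic of this kind is commonly constructed (single-frequency Fourier detrending followed by a KSS-type ESTAR regression); the paper's exact specification, lag selection, and critical values may differ, and the null distribution is nonstandard, so only the t-statistic is computed here.

```python
import numpy as np
import statsmodels.api as sm

def fourier_kss_tstat(y, k=1, p=1):
    """t-statistic of a Fourier-detrended KSS-type (ESTAR) unit root regression.

    Step 1: remove a flexible Fourier trend a + b*t + c*sin(2*pi*k*t/T) + d*cos(2*pi*k*t/T).
    Step 2: regress d(v_t) on v_{t-1}^3 and p lagged differences; the t-statistic on the
    cubic term is the test statistic. Its null distribution is nonstandard, so critical
    values must be simulated (not done here).
    """
    T = len(y)
    t = np.arange(1, T + 1)
    F = np.column_stack([np.ones(T), t,
                         np.sin(2 * np.pi * k * t / T),
                         np.cos(2 * np.pi * k * t / T)])
    v = y - F @ np.linalg.lstsq(F, y, rcond=None)[0]     # Fourier-detrended series

    dv = np.diff(v)
    rows = np.arange(p, len(dv))
    X = np.column_stack([v[rows] ** 3] + [dv[rows - j] for j in range(1, p + 1)])
    res = sm.OLS(dv[rows], X).fit()
    return res.tvalues[0]

rng = np.random.default_rng(8)
y = rng.standard_normal(300).cumsum()        # hypothetical random walk (a null-like series)
print(fourier_kss_tstat(y))
```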

12.
Likelihood ratio tests of constant vs. monotone regression function, as well as linear vs. convex regression function and other tests with shape-restricted alternatives, are known to have null distributions equivalent to mixtures of beta random variates. The monotone and convex regression estimators are known to be inconsistent at the endpoints, where there is “spiking.” This spiking affects the critical values of the test statistic. Modified versions of the monotone and convex regression estimators are proposed that are consistent everywhere; when the modified versions are used in hypothesis testing, the null distribution is again an exact mixture of beta distributions, with different mixing parameters. Simulations show that the power of the test using the modified version is larger for the examples chosen.
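
A minimal isotonic-regression illustration of the endpoint "spiking" issue, using scikit-learn; the crude boundary adjustment shown is only a stand-in for the paper's modified estimators, and the data are hypothetical.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(9)
x = np.sort(rng.uniform(0, 1, 100))
y = x**2 + rng.normal(0, 0.1, 100)                  # hypothetical monotone signal + noise

fit = IsotonicRegression().fit_transform(x, y)      # monotone (isotonic) regression fit

# Endpoint "spiking": the unmodified isotonic fit is inconsistent at the boundary
# points. A crude stand-in for the paper's modification is to replace the first and
# last fitted values by the adjacent interior values.
fit_mod = fit.copy()
fit_mod[0], fit_mod[-1] = fit[1], fit[-2]
print(fit[:3], fit_mod[:3])
```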

13.
I analyze efficient estimation of a cointegrating vector when the regressand and regressor are observed at different frequencies. Previous authors have examined the effects of specific temporal aggregation or sampling schemes, finding conventionally efficient techniques to be efficient only when both the regressand and the regressors are average sampled. Using an alternative method for analyzing aggregation under more general weighting schemes, I derive an efficiency bound that is conditional on the type of aggregation used on the low-frequency series and differs from the unconditional bound defined by the full-information high-frequency data-generating process, which is infeasible due to aggregation of at least one series. I modify a conventional estimator, canonical cointegrating regression (CCR), to accommodate cases in which the aggregation weights are known. The correlation structure may be utilized to offset the potential information loss from aggregation, resulting in a conditionally efficient estimator. In the case of unknown weights, the correlation structure of the error term generally confounds identification of conditionally efficient weights. Efficiency is illustrated using a simulation study and an application to estimating a gasoline demand equation.

14.
Most Markov chain Monte Carlo (MCMC) users address the convergence problem by applying diagnostic tools to the output produced by running their samplers. Potentially useful diagnostics can be borrowed from diverse areas such as time series. One such method is phase randomization. This paper describes this method in the context of MCMC, summarizes its characteristics, and contrasts its performance with those of the more common diagnostic tests for MCMC. It is observed that the new tool contributes information about third- and higher-order cumulant behaviour which is important in characterizing certain forms of nonlinearity and non-stationarity.
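
A sketch of the phase-randomization diagnostic applied to MCMC output, using a simple skewness statistic as the third-order summary; the chain and the statistic are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Surrogate series with the same periodogram as x but randomized Fourier phases.

    Phase randomization preserves second-order (linear) structure while destroying
    higher-order structure, so comparing statistics sensitive to third- and higher-order
    cumulants between the chain and its surrogates flags nonlinearity or non-stationarity.
    """
    n = len(x)
    f = np.fft.rfft(x - x.mean())
    phases = rng.uniform(0, 2 * np.pi, len(f))
    phases[0] = 0.0                     # keep the zero-frequency component real
    if n % 2 == 0:
        phases[-1] = 0.0                # keep the Nyquist component real
    surrogate = np.fft.irfft(np.abs(f) * np.exp(1j * phases), n=n)
    return surrogate + x.mean()

rng = np.random.default_rng(10)
# Hypothetical MCMC output: a slowly drifting component plus noise
chain = np.cumsum(rng.standard_normal(2048)) * 0.05 + rng.standard_normal(2048)

def skewness(z):
    z = z - z.mean()
    return (z**3).mean() / (z**2).mean() ** 1.5    # a simple third-order statistic

surr_skews = [skewness(phase_randomized_surrogate(chain, rng)) for _ in range(500)]
p = (np.sum(np.abs(surr_skews) >= abs(skewness(chain))) + 1) / 501
print(f"phase-randomization p-value for the skewness statistic: {p:.3f}")
```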

15.
In this paper, we investigate the properties of the Granger causality test in stationary and stable vector autoregressive models under the presence of spillover effects, that is, causality in variance. The Wald test and the WW test (the Wald test with White's proposed heteroskedasticity-consistent covariance matrix estimator imposed) are analyzed. The investigation is undertaken by using Monte Carlo simulation in which two different sample sizes and six different kinds of data-generating processes are used. The results show that the Wald test over-rejects the null hypothesis both with and without the spillover effect, and that the over-rejection in the latter case is more severe in larger samples. The size properties of the WW test are satisfactory when there is spillover between the variables. Only when there is feedback in the variance is the size of the WW test slightly affected. The Wald test is shown to have higher power than the WW test when the errors follow a GARCH(1,1) process without a spillover effect. When there is a spillover, the power of both tests deteriorates, which implies that the spillover has a negative effect on the causality tests.
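
A sketch contrasting the Wald test and the WW test (Wald with White's heteroskedasticity-consistent covariance) for Granger non-causality in a single VAR equation; the ARCH-type regressor is a stand-in for the variance-spillover settings studied in the article, and all names and settings are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)

# Hypothetical system: x does NOT Granger-cause y in mean, but x follows an ARCH(1)
# process, standing in for the conditional-heteroskedasticity settings in the article.
T = 500
x, z = np.zeros(T), rng.standard_normal(T)
for t in range(1, T):
    x[t] = np.sqrt(0.2 + 0.7 * x[t - 1] ** 2) * z[t]
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()

df = pd.DataFrame({"y": y[2:], "y1": y[1:-1], "y2": y[:-2],
                   "x1": x[1:-1], "x2": x[:-2]})

# H0: x does not Granger-cause y, i.e., both x lags have zero coefficients.
for cov, label in [("nonrobust", "Wald"), ("HC0", "WW (White-robust Wald)")]:
    res = smf.ols("y ~ y1 + y2 + x1 + x2", data=df).fit(cov_type=cov)
    print(label, res.wald_test("x1 = 0, x2 = 0", use_f=True))
```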

16.
A compendium to information theory in economics and econometrics

17.
Overdispersion or extra variation is a common phenomenon that occurs when binomial (multinomial) data exhibit larger variances than those permitted by the binomial (multinomial) model. This arises when the data are clustered or when the assumption of independence is violated. Goodness-of-fit (GOF) tests available in the overdispersion literature have focused on testing for the presence of overdispersion in the data, and hence they are not applicable for choosing between several competing overdispersion models. In this paper, we consider a GOF test proposed by Neerchal and Morel [1998. Large cluster results for two parametric multinomial extra variation models. J. Amer. Statist. Assoc. 93(443), 1078–1087], and study its distributional properties and performance characteristics. This statistic is a direct analogue of the usual Pearson chi-squared statistic, but is also applicable when the clusters are not necessarily of the same size. As this test statistic is for testing model adequacy against the alternative that the model is not adequate, it is applicable in testing two competing overdispersion models.
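
The Neerchal-Morel statistic itself is not reproduced here; as a simpler point of reference, a Pearson-type homogeneity statistic that detects extra-binomial variation across clusters of unequal size can be sketched as follows, on hypothetical data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)

# Hypothetical clustered binomial data: k clusters with unequal sizes and
# beta-binomial style overdispersion from cluster-specific success probabilities.
k = 40
n_i = rng.integers(5, 15, size=k)                      # unequal cluster sizes
p_i = rng.beta(2, 2, size=k)                           # cluster-level probabilities
y_i = rng.binomial(n_i, p_i)

p_hat = y_i.sum() / n_i.sum()

# Pearson-type goodness-of-fit statistic against the plain binomial model.
# Under the binomial null it is approximately chi-squared with k - 1 degrees of
# freedom; large values indicate extra-binomial variation. This is a generic
# check, not the Neerchal-Morel statistic from the abstract.
X2 = np.sum((y_i - n_i * p_hat) ** 2 / (n_i * p_hat * (1 - p_hat)))
print(X2, stats.chi2.sf(X2, df=k - 1))
```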

18.
The potential observational equivalence between various types of nonlinearity and long memory has been recognized by the econometrics community since at least the contribution of Diebold and Inoue (2001). A large literature has developed in an attempt to ascertain whether or not the long memory finding in many economic series is spurious. Yet to date, no study has analyzed the consequences of using long memory methods to test for unit roots when the “truth” derives from regime switching, structural breaks, or other types of mean reverting nonlinearity. In this article, I conduct a comprehensive Monte Carlo analysis to investigate the consequences of using tests designed to have power against fractional integration when the actual data generating process is unknown. I additionally consider the use of tests designed to have power against breaks and threshold nonlinearity. The findings are compelling and demonstrate that the use of long memory as an approximation to nonlinearity yields tests with relatively high power. In contrast, misspecification has severe consequences for tests designed to have power against threshold nonlinearity, and especially for tests designed to have power against breaks.
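
A sketch of the spurious-long-memory phenomenon the article studies: a short-memory Markov regime-switching mean process often yields a clearly positive log-periodogram (GPH) estimate of the memory parameter d. The DGP, switching probability, and bandwidth below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(13)

def gph_estimate(x, m=None):
    """Geweke-Porter-Hudak log-periodogram estimate of the memory parameter d."""
    n = len(x)
    if m is None:
        m = int(n ** 0.5)                        # a common bandwidth choice
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    fft = np.fft.fft(x - x.mean())
    I = (np.abs(fft[1:m + 1]) ** 2) / (2 * np.pi * n)   # periodogram at Fourier frequencies
    regressor = -np.log(4 * np.sin(freqs / 2) ** 2)
    return np.polyfit(regressor, np.log(I), 1)[0]        # slope = estimate of d

# Hypothetical Markov regime-switching mean: a short-memory process that often
# produces a positive estimated d, the "spurious long memory" effect.
T = 2000
state, mu = 0, [0.0, 1.0]
x = np.empty(T)
for t in range(T):
    if rng.uniform() < 0.01:                     # infrequent regime switches
        state = 1 - state
    x[t] = mu[state] + rng.standard_normal()
print("estimated d:", gph_estimate(x))
```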

19.
In this article, three innovative panel error-correction model (PECM) tests are proposed. These tests are based on the multivariate versions of the Wald (W), likelihood ratio (LR), and Lagrange multiplier (LM) tests. Using Monte Carlo simulations, the size and power of the tests are investigated when the error terms exhibit both cross-sectional dependence and independence. We find that the LM test is the best option when the error terms follow independent white-noise processes. However, in the more empirically relevant case of cross-sectional dependence, we conclude that the W test is the optimal choice. In contrast to previous studies, our method is general and does not rely on the strict assumption that a common factor causes the cross-sectional dependency. In an empirical application, our method is also demonstrated in terms of the Fisher effect, a hypothesis on which there is still no clear consensus. Based on our sample of the five Nordic countries, we apply our test and find evidence which, in contrast to most previous research, confirms the Fisher effect.

20.
Repeating measurements of efficacy variables in clinical trials may be desirable when the measurement may be affected by ambient conditions. When such measurements are repeated at baseline and at the end of therapy, statistical questions relate to: (1) the best summary measurement to use for a subject when there is a possibility that some observations are contaminated and have increased variances; and (2) the effect of screening procedures which exclude outliers based on within- and between-subject contamination tests. We study these issues in two stages, each using a different set of models. The first stage deals only with the choice of the summary measure. The simulation results show that in some cases of contamination, the power achieved by the tests based on the median exceeds that achieved by the tests based on the mean of the replicates. However, even when we use the median, there are cases when contamination leads to a considerable loss in power. The combined issue of the best summary measurement and the effect of screening is studied in the second stage. The tests use either the observed data or the data after screening for outliers. The simulation results demonstrate that the power depends on the screening procedure as well as on the test statistic used in the study. We found that for the extent and magnitude of contamination considered, within-subject screening has a minimal effect on the power of the tests when there are at least three replicates; as a result, we found no advantage in the use of screening procedures for within-subject contamination. On the other hand, the use of a between-subject screening for outliers increases the power of the test procedures. However, even with the use of screening procedures, heterogeneity of variances can greatly reduce the power of the study.
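
A condensed version of the first-stage comparison (mean vs. median of replicates under contamination) can be sketched as follows; the contamination mechanism, effect size, and test are simplified stand-ins for the paper's models, and all settings are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(14)

def simulate_power(summary, n_subj=30, n_rep=3, effect=0.5,
                   contam_prob=0.1, contam_sd=4.0, n_sim=2000, alpha=0.05):
    """Power of a two-sample t-test on per-subject summaries of replicate measurements.

    Each subject contributes n_rep replicates; with probability contam_prob a replicate
    is contaminated (inflated variance). The treatment group's true level is shifted
    by `effect`. All settings are illustrative, not the paper's designs.
    """
    rejections = 0
    for _ in range(n_sim):
        def group(shift):
            base = rng.standard_normal((n_subj, 1)) + shift          # subject means
            sd = np.where(rng.uniform(size=(n_subj, n_rep)) < contam_prob, contam_sd, 1.0)
            reps = base + sd * rng.standard_normal((n_subj, n_rep))  # replicate measurements
            return summary(reps, axis=1)
        p = stats.ttest_ind(group(0.0), group(effect)).pvalue
        rejections += p < alpha
    return rejections / n_sim

print("power using the mean of replicates:  ", simulate_power(np.mean))
print("power using the median of replicates:", simulate_power(np.median))
```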
