Similar articles
20 similar articles found (search time: 10 ms)
1.
Optimal Predictive Tests (total citations: 1; self-citations: 1; citations by others: 0)

2.
Formulas that yield the minimum sample size for standard t tests are presented. Although the results are approximations, they usually yield the exact solution. Because they involve only standard normal quantiles, they can be used in an elementary course.
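The normal-quantile approximation described in this abstract can be sketched as follows. The function name and the standardized-effect parameterization (effect = δ/σ) are illustrative assumptions, not necessarily the article's notation:

```python
import math
from statistics import NormalDist


def min_n_t_test(alpha: float, power: float, effect: float) -> int:
    """Approximate minimum sample size for a two-sided one-sample t test.

    Uses only standard normal quantiles:
        n ~= ((z_{1-alpha/2} + z_{power}) / effect)^2
    where `effect` is the standardized effect size delta/sigma.
    Rounding up makes the approximation conservative.
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(((z_a + z_b) / effect) ** 2)
```

For example, `min_n_t_test(0.05, 0.80, 0.5)` gives 32, matching the usual textbook answer for a medium effect at 80% power.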

3.
In order for predictive regression tests to deliver asymptotically valid inference, account has to be taken of the degree of persistence of the predictors under test. There is also a maintained assumption that any predictability in the variable of interest is purely attributable to the predictors under test. Violation of this assumption by the omission of relevant persistent predictors renders the predictive regression invalid, and potentially also spurious, as both the finite sample and asymptotic size of the predictability tests can be significantly inflated. In response, we propose a predictive regression invalidity test based on a stationarity testing approach. To allow for an unknown degree of persistence in the putative predictors, and for heteroscedasticity in the data, we implement our proposed test using a fixed regressor wild bootstrap procedure. We demonstrate the asymptotic validity of the proposed bootstrap test by proving that the limit distribution of the bootstrap statistic, conditional on the data, is the same as the limit null distribution of the statistic computed on the original data, conditional on the predictor. This corrects a long-standing error in the bootstrap literature whereby it is incorrectly argued that for strongly persistent regressors and test statistics akin to ours the validity of the fixed regressor bootstrap obtains through equivalence to an unconditional limit distribution. Our bootstrap results are therefore of interest in their own right and are likely to have applications beyond the present context. An illustration is given by reexamining the results relating to U.S. stock returns data in Campbell and Yogo (2006, "Efficient Tests of Stock Return Predictability," Journal of Financial Economics, 81, 27–60). Supplementary materials for this article are available online.
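The fixed regressor wild bootstrap mentioned in this abstract can be sketched generically: the regressor is held fixed across bootstrap draws while residuals are perturbed by Rademacher weights, which preserves conditional heteroscedasticity. This is a minimal sketch for an arbitrary statistic, not the article's specific stationarity-type test:

```python
import numpy as np


def fixed_regressor_wild_bootstrap(y, x, stat, n_boot=999, seed=0):
    """Illustrative fixed regressor wild bootstrap p-value.

    `stat` is any test statistic taking (y, x). Bootstrap samples keep
    x fixed and rebuild y from the OLS fit plus residuals multiplied by
    i.i.d. Rademacher (+/-1) weights.
    """
    rng = np.random.default_rng(seed)
    y, x = np.asarray(y, float), np.asarray(x, float)
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    t_obs = stat(y, x)
    count = 0
    for _ in range(n_boot):
        w = rng.choice([-1.0, 1.0], size=len(y))  # Rademacher weights
        y_star = X @ beta + resid * w             # regressor kept fixed
        if stat(y_star, x) >= t_obs:
            count += 1
    return (1 + count) / (1 + n_boot)
```

The p-value uses the standard (1 + count)/(1 + B) convention so it is never exactly zero.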

4.
Most statistical analyses use hypothesis tests or estimation about parameters to form inferential conclusions. I think this is noble, but misguided. The point of view expressed here is that observables are fundamental, and that the goal of statistical modeling should be to predict future observations, given the current data and other relevant information. Further, the prediction of future observables provides multiple advantages to practicing scientists, and to science in general. These include an interpretable numerical summary of a quantity of direct interest to current and future researchers, a calibrated prediction of what is likely to happen in future experiments, a prediction that can be either "corroborated" or "refuted" through experimentation, and avoidance of inference about parameters, quantities that exist only as convenient indices of hypothetical distributions. Finally, the predictive probability of a future observable can be used as a standard for communicating the reliability of the current work, regardless of whether confirmatory experiments are conducted. Adoption of this paradigm would improve our rigor for scientific accuracy and reproducibility by shifting our focus from "finding differences" among hypothetical parameters to predicting observable events based on our current scientific understanding.

5.
This article argues that researchers do not need to completely abandon the p-value, the best-known significance index, but should instead stop using significance levels that do not depend on sample sizes. A testing procedure is developed using a mixture of frequentist and Bayesian tools, with a significance level that is a function of sample size, obtained from a generalized form of the Neyman–Pearson Lemma that minimizes a linear combination of α, the probability of rejecting a true null hypothesis, and β, the probability of failing to reject a false null, instead of fixing α and minimizing β. The resulting hypothesis tests do not violate the Likelihood Principle and do not require any constraints on the dimensionalities of the sample space and parameter space. The procedure includes an ordering of the entire sample space and uses predictive probability (density) functions, allowing for testing of both simple and compound hypotheses. Accessible examples are presented to highlight specific characteristics of the new tests.
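The idea of a significance level that depends on sample size can be illustrated in the simplest setting, a normal mean with two simple hypotheses. Minimizing a·α + b·β (rather than fixing α) yields a likelihood-ratio cutoff whose implied α shrinks automatically as n grows. This toy normal-mean setup and all names below are illustrative assumptions, not the article's procedure:

```python
import math
from statistics import NormalDist


def adaptive_alpha(n: int, mu0: float = 0.0, mu1: float = 1.0,
                   sigma: float = 1.0, a: float = 1.0, b: float = 1.0) -> float:
    """Implied significance level when minimizing a*alpha + b*beta.

    For simple H0: mu = mu0 vs H1: mu = mu1 with known sigma, the optimal
    rule rejects when the likelihood ratio exceeds b/a, i.e. when the
    sample mean exceeds
        (mu0 + mu1)/2 + se^2 * ln(b/a) / (mu1 - mu0),
    where se = sigma / sqrt(n). The returned alpha is the null probability
    of that rejection region; it decreases with n.
    """
    se = sigma / math.sqrt(n)
    cutoff = (mu0 + mu1) / 2 + se ** 2 * math.log(b / a) / (mu1 - mu0)
    return 1 - NormalDist().cdf((cutoff - mu0) / se)
```

With a = b and n = 4 the cutoff is the midpoint 0.5 and the implied level is 1 − Φ(1) ≈ 0.159; at n = 100 it is far smaller, so no fixed conventional level is ever imposed.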

6.
Manufacturers are often faced with the problem of how to select the most reliable design among several competing designs at the development stage. The problem becomes complicated when products are highly reliable. Under these circumstances, recent work has focused on degradation data, assuming that degradation paths follow Wiener processes or random-effects models. However, it is more appropriate to use gamma processes to model degradation data with a monotone-increasing pattern. This article deals with the selection problem for such processes. Subject to a minimum probability of correct decision, optimal test plans can be obtained by minimizing the total cost.

7.
This article proposes new methodologies for evaluating economic models’ out-of-sample forecasting performance that are robust to the choice of the estimation window size. The methodologies involve evaluating the predictive ability of forecasting models over a wide range of window sizes. The study shows that the tests proposed in the literature may lack the power to detect predictive ability and might be subject to data snooping across different window sizes if used repeatedly. An empirical application shows the usefulness of the methodologies for evaluating exchange rate models’ forecasting ability.

8.
We provide an application of a variety of predicting densities to quality control involving multivariate normal linear models. We produce optimal control designs for single multivariate future observations using predicting densities employing estimative, profile likelihood, Hinkley-Lauritzen, Butler, Bayesian, and Parametric Bootstrap methodologies. The decision-theoretic optimality criterion is an intuitively appealing quadratic consumer-producer risk function. The optimal control design arising from an optimal Kullback-Leibler frequentist prediction density is shown to coincide with that arising from an optimal Kullback-Leibler Bayesian predictive density. An example involving EVOP is provided to illustrate the methodology and to raise questions concerning the relative merits of the variety of predictive approaches in the quality control context.

9.
Predictability tests with long memory regressors may entail both size distortion and incompatibility between the orders of integration of the dependent and independent variables. Addressing both problems simultaneously, this paper proposes a two-step procedure that rebalances the predictive regression by fractionally differencing the predictor based on a first-stage estimation of the memory parameter. Extensive simulations indicate that our procedure has good size, is robust to estimation error in the first stage, and can yield improved power over cases in which an integer order is assumed for the regressor. We also extend our approach beyond the standard predictive regression context to cases in which the dependent variable is also fractionally integrated, but not cointegrated with the regressor. We use our procedure to provide a valid test of forward rate unbiasedness that allows for a long memory forward premium.
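The fractional differencing step in the two-step procedure above rests on the standard binomial expansion of (1 − L)^d. A minimal sketch follows; the truncation at the start of the sample is one common convention and not necessarily the article's exact implementation:

```python
import numpy as np


def frac_diff(x, d):
    """Fractionally difference a series via the truncated binomial expansion.

    (1 - L)^d x_t = sum_k w_k x_{t-k}, with w_0 = 1 and the recursion
        w_k = w_{k-1} * (k - 1 - d) / k.
    Integer d recovers the usual cases: d = 0 is the identity and d = 1
    is the first difference (with out[0] = x[0] under this truncation).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    out = np.empty(n)
    for t in range(n):
        out[t] = np.dot(w[: t + 1], x[t::-1])  # x_t, x_{t-1}, ..., x_0
    return out
```

In the article's setting d would come from a first-stage memory-parameter estimate; here it is simply a function argument.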

10.
In this article we discuss the estimation of stochastic volatility (SV) using generalized empirical likelihood/minimum contrast methods based on moment condition models. We show via Monte Carlo simulations that the proposed methods have performance superior or equivalent to alternative methods and, additionally, offer robustness in the presence of heavy-tailed distributions and outliers.

11.
12.
Generalized method of moments (GMM) estimation has become an important unifying framework for inference in econometrics in the last 20 years. It can be thought of as encompassing almost all of the common estimation methods, such as maximum likelihood, ordinary least squares, instrumental variables, and two-stage least squares, and nowadays is an important part of all advanced econometrics textbooks. The GMM approach links nicely to economic theory where orthogonality conditions that can serve as such moment functions often arise from optimizing behavior of agents. Much work has been done on these methods since the seminal article by Hansen, and much remains in progress. This article discusses some of the developments since Hansen's original work. In particular, it focuses on some of the recent work on empirical likelihood–type estimators, which circumvent the need for a first step in which the optimal weight matrix is estimated and have attractive information theoretic interpretations.
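The GMM idea of minimizing a quadratic form in sample moment conditions can be shown in a deliberately tiny example: an exponential rate λ overidentified by E[x] = 1/λ and E[x²] = 2/λ², with an identity weight matrix and a crude grid search. This is a hypothetical toy for intuition, not any estimator discussed in the article:

```python
import numpy as np


def gmm_exponential_rate(x, grid=None):
    """One-step GMM sketch for an exponential rate lambda.

    Stacks the two sample moment conditions
        g1 = mean(x)   - 1/lambda
        g2 = mean(x^2) - 2/lambda^2
    and minimizes g'Wg with W = I over a grid of candidate rates.
    """
    x = np.asarray(x, dtype=float)
    if grid is None:
        grid = np.linspace(0.01, 10.0, 100_000)
    m1, m2 = x.mean(), (x ** 2).mean()
    g1 = m1 - 1.0 / grid
    g2 = m2 - 2.0 / grid ** 2
    obj = g1 ** 2 + g2 ** 2  # g'Wg with identity weight matrix
    return grid[np.argmin(obj)]
```

A two-step estimator would replace the identity with an estimate of the optimal weight matrix; the empirical-likelihood-type estimators the article surveys avoid that first step altogether.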

13.
Utilizing the notion of matching predictives as in Berger and Pericchi, we show that for the conjugate family of prior distributions in the normal linear model, the symmetric Kullback-Leibler divergence between two particular predictive densities is minimized when the prior hyperparameters are taken to be those corresponding to the predictive priors proposed in Ibrahim and Laud and Laud and Ibrahim. The main application for this result is for Bayesian variable selection.

14.
The objective of this article is to propose and study frequentist tests that have maximum average power, averaging with respect to some specified weight function. First, some relationships between these tests, called maximum average-power (MAP) tests, and most powerful or uniformly most powerful tests are presented. Second, the existence of a maximum average-power test for any hypothesis testing problem is shown. Third, an MAP test for any hypothesis testing problem with a simple null hypothesis is constructed, including some interesting classical examples. Fourth, an MAP test for a hypothesis testing problem with a composite null hypothesis is discussed. For any one-parameter exponential family, a commonly used uniformly most powerful unbiased (UMPU) test is shown to also be an MAP test with respect to a rich class of weight functions. Finally, some remarks are given to conclude the article.

15.
This article is concerned with making predictive inference on the basis of a doubly censored sample from a two-parameter Rayleigh life model. We derive the predictive distributions for a single future response, the ith future response, and several future responses. We use the Bayesian approach in conjunction with an improper flat prior for the location parameter and an independent proper conjugate prior for the scale parameter to derive the predictive distributions. We conclude with a numerical example in which the effect of the hyperparameters on the mean and standard deviation of the predictive density is assessed.

16.
Testing predictability is of importance in economics and finance. Based on a predictive regression model with independent and identically distributed errors, some uniform tests have been proposed in the literature without distinguishing whether the predicting variable is stationary or nearly integrated. In this article, we extend the empirical likelihood methods of Zhu, Cai, and Peng with independent errors to the case of an AR error process. Again, the proposed new tests do not need to know whether the predicting variable is stationary or nearly integrated, or whether it has a finite or infinite variance. A simulation study shows that the new methodologies perform well in finite samples.

17.
We present a family of smooth tests for the goodness of fit of semiparametric multivariate copula models. The proposed tests are distribution free and can be easily implemented. They are diagnostic and constructive in the sense that when a null distribution is rejected, the test provides useful pointers to alternative copula distributions. We then propose a method of copula density construction, which can be viewed as a multivariate extension of Efron and Tibshirani. We further generalize our methods to the semiparametric copula-based multivariate dynamic models. We report extensive Monte Carlo simulations and three empirical examples to illustrate the effectiveness and usefulness of our method.

18.
This article considers the order selection problem of periodic autoregressive models. Our main goal is the adaptation of the Bayesian Predictive Density Criterion (PDC), established by Djurić and Kay (1992, "Order selection of autoregressive models," IEEE Transactions on Signal Processing, 40, 2829–2833) for selecting the order of a stationary autoregressive model, to deal with the order identification problem of a periodic autoregressive model. The performance of the established criterion (P-PDC) is compared, via simulation studies, to the performances of some well-known existing criteria.

19.
We derive optimal two-stage adaptive group-sequential designs for normally distributed data which achieve the minimum of a mixture of expected sample sizes at the range of plausible values of a normal mean. Unlike standard group-sequential tests, our method is adaptive in that it allows the group size at the second look to be a function of the observed test statistic at the first look. Using optimality criteria, we construct two-stage designs which we show have advantage over other popular adaptive methods. The employed computational method is a modification of the backward induction algorithm applied to a Bayesian decision problem.

20.
We develop tests for detecting possibly episodic predictability induced by a persistent predictor. Our framework is that of a predictive regression model with threshold effects, and our goal is to develop operational and easily implementable inferences when one does not wish to impose a priori restrictions on the parameters of the model other than the slopes corresponding to the persistent predictor. Put differently, our tests for the null hypothesis of no predictability against threshold predictability remain valid without the need to know whether the remaining parameters of the model are characterized by threshold effects or not (e.g., shifting versus nonshifting intercepts). One interesting feature of our setting is that our test statistics remain unaffected by whether some nuisance parameters are identified or not. We subsequently apply our methodology to the predictability of aggregate stock returns with valuation ratios and document a robust countercyclicality in the ability of some valuation ratios to predict returns, in addition to highlighting a strong sensitivity of predictability-based results to the time period under consideration.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)