Similar Documents
20 similar documents found.
1.
2.
Econometric Reviews, 2013, 32(1): 25-52
Abstract

This paper argues that Fisher's paradox can be explained away in terms of estimator choice. We analyse by means of Monte Carlo experiments the small-sample properties of a large set of estimators (including virtually all available single-equation estimators), and compute the critical values based on the empirical distributions of the t-statistics, for a variety of Data Generation Processes (DGPs), allowing for structural breaks, ARCH effects, etc. We show that precisely the estimators most commonly used in the literature, namely OLS, Dynamic OLS (DOLS) and non-prewhitened FMLS, have the worst performance in small samples and produce rejections of the Fisher hypothesis. If one employs the estimators with the most desirable properties (i.e., the smallest downward bias and the minimum shift in the distribution of the associated t-statistics), or if one uses the empirical critical values, the evidence based on US data is strongly supportive of the Fisher relation, consistent with many theoretical models.
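A minimal sketch of the empirical-critical-value idea mentioned above, not the paper's experiment: simulate a stylized DGP with a persistent regressor, collect the OLS t-statistics for the slope, and read off empirical quantiles. The DGP, sample size, and number of replications are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)


def ols_slope_and_se(x, y):
    """OLS slope and its conventional standard error for y = a + b*x + e."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - 2)
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1], se


def simulate(n=100, rho=0.95, beta=1.0):
    """Illustrative DGP: highly persistent AR(1) regressor, true slope beta."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.standard_normal()
    y = beta * x + rng.standard_normal(n)
    return x, y


# Empirical distribution of the t-statistic for H0: beta = 1 under this DGP.
t_stats = []
for _ in range(2000):
    x, y = simulate()
    b, se = ols_slope_and_se(x, y)
    t_stats.append((b - 1.0) / se)

print("empirical 2.5%/97.5% critical values:", np.quantile(t_stats, [0.025, 0.975]))
```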

3.
This work presents a procedure that aims to eliminate or reduce the bias caused by omitted variables by means of so-called regime-switching regressions. There is an estimation bias whenever the statistical (linear) model is under-specified, that is, when some variables are omitted and they are correlated with the regressors. This work shows how an appropriate specification of a regime-switching model (independent or Markov-switching) can eliminate or reduce this correlation, and hence the estimation bias. A demonstration is given, together with some Monte Carlo simulations. An empirical verification, based on Fisher's equation, is also provided.
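A minimal sketch of the omitted-variable bias mechanism described above (not the regime-switching correction itself): when the omitted regressor z is correlated with x, the short regression of y on x alone is biased. All coefficients and the correlation structure below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

n, reps = 200, 2000
beta_x, beta_z = 1.0, 2.0          # true coefficients (illustrative)
slopes = []

for _ in range(reps):
    x = rng.standard_normal(n)
    z = 0.7 * x + rng.standard_normal(n)    # omitted variable, correlated with x
    y = beta_x * x + beta_z * z + rng.standard_normal(n)
    # Short regression of y on x only (z omitted).
    b_short = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    slopes.append(b_short)

# E[b_short] ≈ beta_x + beta_z * Cov(x, z) / Var(x) = 1 + 2 * 0.7 = 2.4
print("mean short-regression slope:", np.mean(slopes))
```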

4.

To test the equality of the covariance matrices of two dependent bivariate normals, we derive five combination tests using the Simes method. We use simulation to compare the performance of these tests with each other and with competing tests. In particular, the simulations show that one of the combination tests has the best performance in terms of controlling the type I error rate, even for small samples, with power similar to that of the other tests. We also apply the recommended test to real data from a crossover bioavailability study.
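For reference, a minimal sketch of the Simes rule used to combine component p-values into a global test (the component tests themselves are the paper's contribution and are not reproduced here): reject at level alpha if any ordered p-value satisfies p_(i) <= i*alpha/m.

```python
import numpy as np


def simes_reject(pvalues, alpha=0.05):
    """Simes global test: reject H0 if p_(i) <= i * alpha / m for some i."""
    p = np.sort(np.asarray(pvalues))
    m = len(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    return bool(np.any(p <= thresholds))


# Example with three hypothetical component p-values.
print(simes_reject([0.012, 0.040, 0.300]))   # True at alpha = 0.05
```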

5.
The use of the correlation coefficient is suggested as a technique for summarizing and objectively evaluating the information contained in probability plots. Goodness-of-fit tests are constructed using this technique for several commonly used plotting positions for the normal distribution. Empirical sampling methods are used to construct the null distribution for these tests, which are then compared on the basis of power against certain nonnormal alternatives. Commonly used regression tests of fit are also included in the comparisons. The results indicate that use of the plotting position p_i = (i - 0.375)/(n + 0.25) yields a competitive regression test of fit for normality.
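A minimal sketch of the probability-plot correlation statistic with the plotting position quoted above; the null critical values would still need to be obtained by simulation, as in the paper.

```python
import numpy as np
from scipy import stats


def ppcc_normal(x):
    """Correlation between ordered data and normal quantiles at
    plotting positions p_i = (i - 0.375) / (n + 0.25)."""
    x = np.sort(np.asarray(x))
    n = len(x)
    p = (np.arange(1, n + 1) - 0.375) / (n + 0.25)
    q = stats.norm.ppf(p)
    return np.corrcoef(x, q)[0, 1]


# The statistic is close to 1 for normal data, noticeably smaller for a skewed alternative.
rng = np.random.default_rng(2)
print(ppcc_normal(rng.standard_normal(50)))
print(ppcc_normal(rng.exponential(size=50)))
```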

6.
We proposed two simple moment-based procedures, one with (GCCC1) and one without (GCCC2) a normality assumption, to generalize inference on the concordance correlation coefficient for the evaluation of agreement among multiple observers for measurements on a continuous scale. A modified Fisher's Z-transformation was adapted to further improve the inference. We compared the proposed methods with the U-statistic-based inference approach. Simulation analysis showed desirable statistical properties of the simplified approach GCCC1, in terms of coverage probabilities and coverage balance, especially for small samples. GCCC2, which is distribution-free, behaved comparably to the U-statistic-based procedure, but had a more intuitive and explicit variance estimator. The utility of these approaches was illustrated using two clinical data examples.
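For orientation, a minimal sketch of Lin's concordance correlation coefficient for the two-rater special case, with a crude Fisher's-Z interval. This is not the paper's multi-observer generalization, and the simple 1/(n-3) standard error below is an assumption borrowed from the ordinary correlation case rather than the moment-based variance the paper derives.

```python
import numpy as np
from scipy import stats


def ccc_two_raters(x, y):
    """Lin's concordance correlation coefficient for two raters."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    return 2 * sxy / (np.var(x, ddof=1) + np.var(y, ddof=1) + (x.mean() - y.mean()) ** 2)


def ccc_ci(x, y, level=0.95):
    """Approximate CI via Fisher's Z-transform (crude large-sample standard error)."""
    n = len(x)
    z = np.arctanh(ccc_two_raters(x, y))
    se = 1 / np.sqrt(n - 3)
    zcrit = stats.norm.ppf(0.5 + level / 2)
    return np.tanh(z - zcrit * se), np.tanh(z + zcrit * se)


# Two hypothetical raters measuring the same underlying quantity.
rng = np.random.default_rng(3)
truth = rng.normal(size=40)
a = truth + rng.normal(0, 0.3, 40)
b = truth + 0.2 + rng.normal(0, 0.3, 40)
print(ccc_two_raters(a, b), ccc_ci(a, b))
```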

7.
The most popular method for trying to detect an association between two random variables is to test H0: ρ = 0, the hypothesis that Pearson's correlation is equal to zero. It is well known, however, that Pearson's correlation is not robust, roughly meaning that small changes in any distribution, including any bivariate normal distribution as a special case, can alter its value. Moreover, the usual estimate of ρ, r, is sensitive to only a few outliers, which can mask a true association. A simple alternative to testing H0: ρ = 0 is to switch to a measure of association that guards against outliers among the marginal distributions, such as Kendall's tau, Spearman's rho, a Winsorized correlation, or a so-called percentage bend correlation. But it is known that these methods fail to take into account the overall structure of the data. Many measures of association that do take into account the overall structure of the data have been proposed, but it seems that nothing is known about how they might be used to detect dependence. One such measure of association is selected, which is designed so that under bivariate normality, its estimator gives a reasonably accurate estimate of ρ. Then methods for testing the hypothesis of a zero correlation are studied.
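A minimal sketch of the outlier-sensitivity point: a single gross outlier can mask an otherwise clear association for Pearson's r, while the rank-based, marginally robust alternatives mentioned above are much less affected (these are exactly the measures the abstract notes still ignore the overall structure of the data). The data are simulated for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Correlated bivariate normal data with one gross outlier appended.
x = rng.standard_normal(50)
y = 0.6 * x + 0.8 * rng.standard_normal(50)
x = np.append(x, 10.0)
y = np.append(y, -10.0)      # single outlier masking the association

for name, test in [("Pearson", stats.pearsonr),
                   ("Spearman", stats.spearmanr),
                   ("Kendall", stats.kendalltau)]:
    est, p = test(x, y)
    print(f"{name:8s}  estimate={est:+.3f}  p-value={p:.4f}")
```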

8.
Several methods for generating variates with univariate and multivariate Wallenius' and Fisher's noncentral hypergeometric distributions are developed. Methods for the univariate distributions include: simulation of urn experiments, inversion by binary search, inversion by chop-down search from the mode, the ratio-of-uniforms rejection method, and rejection by sampling in the τ domain. Methods for the multivariate distributions include: simulation of urn experiments, the conditional method, Gibbs sampling, and Metropolis-Hastings sampling. These methods are useful for Monte Carlo simulation of models of biased sampling and models of evolution, and for calculating moments and quantiles of the distributions.
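A minimal sketch of the univariate urn-experiment sampler, here for Wallenius' noncentral hypergeometric distribution (the most transparent, though slowest, of the methods listed): n balls are drawn one at a time without replacement, with the odds of drawing a red ball at each step proportional to ω times the number of red balls remaining. The parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)


def wallenius_urn(m1, m2, n, omega):
    """One Wallenius' noncentral hypergeometric draw via urn simulation:
    odds of taking a red ball are omega * (red remaining) : (white remaining)."""
    x = 0                      # red balls drawn so far
    r, w = m1, m2              # red / white balls remaining
    for _ in range(n):
        p_red = omega * r / (omega * r + w)
        if rng.random() < p_red:
            x += 1
            r -= 1
        else:
            w -= 1
    return x


sample = [wallenius_urn(m1=20, m2=30, n=15, omega=2.0) for _ in range(10000)]
print("mean of simulated draws:", np.mean(sample))
```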

9.
Summary. A bivariate and unimodal distribution is introduced to model an unconventionally distributed data set collected by the Forensic Science Service. This family of distributions allows for a different kurtosis in each orthogonal direction and has a constructive definition rather than one through a probability density function, making conventional inference impossible. However, the construction and inference work well with a Bayesian Markov chain Monte Carlo analysis.

10.
From a theoretical perspective, the paper considers the properties of the maximum likelihood estimator of the correlation coefficient, principally regarding precision, in various types of bivariate model that are popular in the applied literature. The models are: 'Full-Full', in which both variables are fully observed; 'Censored-Censored', in which both variables are censored at zero; and finally, 'Binary-Binary', in which both variables are observed only in sign. For analytical convenience, the underlying bivariate distribution assumed in each of these cases is the bivariate logistic. A central issue is the extent to which censoring reduces the level of Fisher's information pertaining to the correlation coefficient, and therefore reduces the precision with which this important parameter can be estimated.
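The link between information and precision invoked here is the standard one; as a reminder (not the paper's specific expressions), for a log-likelihood ℓ(ρ) based on a sample,

\[
I(\rho) = -\,\mathbb{E}\!\left[\frac{\partial^{2}\ell(\rho)}{\partial\rho^{2}}\right],
\qquad
\operatorname{Var}\bigl(\hat{\rho}_{\mathrm{ML}}\bigr) \approx \frac{1}{I(\rho)},
\]

so any censoring that lowers I(ρ) inflates the asymptotic variance of the maximum likelihood estimator.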

11.
Shared frailty models are often used to model heterogeneity in survival analysis. The most common shared frailty model is one in which the hazard function is the product of a random factor (the frailty) and a baseline hazard function common to all individuals. Certain assumptions are made about the baseline distribution and the distribution of the frailty. In this article, we consider the inverse Gaussian distribution as the frailty distribution and three different baseline distributions, namely the Weibull, generalized exponential, and exponential power distributions. With these three baseline distributions, we propose three different inverse Gaussian shared frailty models. To estimate the parameters involved in these models we adopt a Markov chain Monte Carlo (MCMC) approach. We present a simulation study to compare the true values of the parameters with the estimated values. We also apply these three models to a real-life bivariate survival data set of McGilchrist and Aisbett (1991, Biometrics 47: 461-466) on kidney infection, and a better model is suggested for the data.
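In the notation commonly used for such models (a sketch of the structure described above, not necessarily the paper's exact parameterization), the hazard for individual j in cluster i is

\[
h_{ij}(t \mid z_i) = z_i\, h_0(t), \qquad z_i \sim \text{Inverse Gaussian},
\]

where h_0(t) is the common baseline hazard (Weibull, generalized exponential, or exponential power in the three proposed models) and the frailty z_i is shared by all individuals in cluster i.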

12.
A formula to evaluate the integral of the bivariate normal density over finite area regions of the plane is developed. It is then used to compare regression estimates when bivariate normality is appropriate.
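Not the paper's formula (which targets general finite regions), but a minimal sketch of the quantity involved for the special case of a rectangle, via inclusion-exclusion on the joint CDF; the mean, covariance, and limits are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal


def bvn_rect_prob(a, b, mean, cov):
    """P(a1 <= X1 <= b1, a2 <= X2 <= b2) for a bivariate normal,
    via inclusion-exclusion on the joint CDF."""
    F = lambda x1, x2: multivariate_normal.cdf([x1, x2], mean=mean, cov=cov)
    return F(b[0], b[1]) - F(a[0], b[1]) - F(b[0], a[1]) + F(a[0], a[1])


mean = [0.0, 0.0]
cov = [[1.0, 0.5], [0.5, 1.0]]                      # correlation 0.5 (illustrative)
print(bvn_rect_prob([-1, -1], [1, 1], mean, cov))   # probability of the unit square
```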

13.
Assuming that (X1, X2) has a bivariate elliptical distribution, we obtain an exact expression for the joint probability density function (pdf), as well as the corresponding conditional pdfs, of X1 and X(2) = max{X1, X2}. The problem is motivated by an application in financial markets. Exchangeable random variables are discussed in more detail. Two special cases of the elliptical distributions, namely the normal and the Student's t models, are investigated. For illustrative purposes, a real data set on total personal income in California and New York is analyzed using the results obtained. Finally, some concluding remarks and directions for further work are discussed.
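For orientation, the distribution-free identities behind such results (the paper's closed-form pdfs work these out under the bivariate elliptical assumption): writing F for the joint CDF of (X1, X2),

\[
P\bigl(X_{(2)} \le t\bigr) = F(t,\,t),
\qquad
P\bigl(X_1 \le s,\ X_{(2)} \le t\bigr) = F\bigl(\min(s,t),\,t\bigr).
\]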

14.
In this paper we consider a simple linear regression model under heteroscedasticity and nonnormality. A statistical test for the regression coefficient is then derived by assuming normality for the random disturbances and applying Welch's method. Monte Carlo studies are carried out to assess the robustness of this test. By combining Tiku's robust procedure with the new test, a robust and more powerful test is developed.
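The sketch below is not the paper's Welch-type statistic or Tiku's procedure; it is the standard White heteroscedasticity-consistent t-test for the slope, included only to make the testing problem concrete. The DGP is an illustrative assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Heteroscedastic simple linear regression: error SD grows with x.
n = 100
x = rng.uniform(0, 3, n)
y = 1.0 + 0.5 * x + rng.standard_normal(n) * (0.3 + 0.5 * x)

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ beta

# White (HC0) heteroscedasticity-consistent covariance of the OLS estimator.
XtX_inv = np.linalg.inv(X.T @ X)
cov_hc0 = XtX_inv @ X.T @ np.diag(e ** 2) @ X @ XtX_inv

t_slope = beta[1] / np.sqrt(cov_hc0[1, 1])
p_value = 2 * stats.t.sf(abs(t_slope), df=n - 2)
print(f"slope={beta[1]:.3f}  robust t={t_slope:.2f}  p={p_value:.4f}")
```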

15.
We derive two C(α) statistics and the likelihood-ratio statistic for testing the equality of several correlation coefficients, from k ≥ 2 independent random samples from bivariate normal populations. The asymptotic relationship of the C(α) tests, the likelihood-ratio test, and a statistic based on the normality assumption of Fisher's Z-transform of the sample correlation coefficient is established. A comparative performance study, in terms of size and power, is then conducted by Monte Carlo simulations. The likelihood-ratio statistic is often too liberal, and the statistic based on Fisher's Z-transform is conservative. The performance of the two C(α) statistics is identical. They maintain significance level well and have almost the same power as the other statistics when empirically calculated critical values of the same size are used. The C(α) statistic based on a noniterative estimate of the common correlation coefficient (based on Fisher's Z-transform) is recommended.
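For reference, a minimal sketch of the classical Fisher's-Z-based statistic used as a benchmark above (not the C(α) statistics, which require the paper's estimating equations): transform each sample correlation, weight by n_i - 3, and compare the weighted dispersion with a chi-square distribution on k - 1 degrees of freedom.

```python
import numpy as np
from scipy import stats


def equal_correlations_test(r, n):
    """Chi-square test of H0: rho_1 = ... = rho_k via Fisher's Z-transform."""
    r, n = np.asarray(r, float), np.asarray(n, float)
    z = np.arctanh(r)              # Fisher's Z
    w = n - 3.0                    # approximate inverse variances of z_i
    z_bar = np.sum(w * z) / np.sum(w)
    chi2 = np.sum(w * (z - z_bar) ** 2)
    df = len(r) - 1
    return chi2, stats.chi2.sf(chi2, df)


# Example: three sample correlations with their sample sizes (illustrative numbers).
print(equal_correlations_test(r=[0.45, 0.52, 0.61], n=[40, 55, 35]))
```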

16.
Analysis of repeated measures data using a mixed model includes specifying a form for the covariance matrix of the within-subject observations. Relative to leaving the structure unspecified, this reduction in the number of estimated parameters may improve the efficiency of inference. An implementation of this technique is available in the MIXED procedure of the SAS® statistical package, which includes a wide range of options for the structure of the covariance matrix. It is demonstrated that draftsman's display plots and/or plots in a coordinate system with parallel axes can aid in visualizing the dispersion structure.
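To make the parameter-reduction point concrete, a small sketch (in Python rather than SAS, and independent of PROC MIXED's syntax) of two common structures: an unstructured 5×5 covariance matrix needs 15 parameters, while compound symmetry and AR(1) each need only two.

```python
import numpy as np


def compound_symmetry(p, sigma2, rho):
    """Compound symmetry: equal variances, equal correlations (2 parameters)."""
    return sigma2 * ((1 - rho) * np.eye(p) + rho * np.ones((p, p)))


def ar1_cov(p, sigma2, rho):
    """AR(1): correlation decays as rho**|i - j| (2 parameters)."""
    idx = np.arange(p)
    return sigma2 * rho ** np.abs(np.subtract.outer(idx, idx))


p = 5
print("unstructured parameters:", p * (p + 1) // 2)        # 15 for p = 5
print(np.round(compound_symmetry(p, sigma2=2.0, rho=0.6), 3))
print(np.round(ar1_cov(p, sigma2=2.0, rho=0.6), 3))
```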

17.
In this article, we examine by means of Monte Carlo simulations the performance of two newly developed procedures that jointly select the number of states and variables in Markov-switching models, namely those of Smith et al. (2006, Journal of Econometrics 134(2): 553-577) and Psaradakis and Spagnolo (2006, Journal of Time Series Analysis 27(2): 753-766). The former develops the Markov switching criterion (MSC), designed specifically for Markov-switching models, while the latter recommends the use of standard complexity-penalised information criteria (BIC, HQC, and AIC) in the joint determination of the state dimension and the autoregressive order of Markov-switching models. The Monte Carlo evidence shows that BIC outperforms MSC, while MSC and HQC are preferable to AIC.
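For reference, the standard complexity-penalised criteria referred to above, for a model with k free parameters, maximized likelihood L̂, and sample size n (MSC has a Markov-switching-specific penalty and is not reproduced here):

\[
\mathrm{AIC} = -2\ln\hat{L} + 2k, \qquad
\mathrm{BIC} = -2\ln\hat{L} + k\ln n, \qquad
\mathrm{HQC} = -2\ln\hat{L} + 2k\ln\ln n.
\]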

18.
The classical Pearson's correlation coefficient has been widely adopted in various fields of application. However, when the data consist of fuzzy interval values, such a traditional approach cannot be used to evaluate the correlation coefficient. In this study, we propose a specific calculation of the fuzzy interval correlation coefficient for fuzzy interval data, in order to measure the relationship between various stocks. The study thereby offers an improved measure for stock-substitution investment strategies via analysis of the fuzzy interval correlation. In addition, we use empirical studies to verify the validity of the proposed fuzzy interval correlation coefficient, using data from companies in the electric machinery and plastics sectors in Taiwan.

19.
20.
Fisher's least significant difference (LSD) procedure is a two-step testing procedure for pairwise comparisons of several treatment groups. In the first step of the procedure, a global test is performed for the null hypothesis that the expected means of all treatment groups under study are equal. If this global null hypothesis can be rejected at the pre-specified level of significance, then in the second step one is permitted in principle to perform all pairwise comparisons at the same level of significance (although in practice, not all of them may be of primary interest). Fisher's LSD procedure is known to preserve the experimentwise type I error rate at the nominal level of significance if (and only if) the number of treatment groups is three. The procedure may therefore be applied to phase III clinical trials comparing two doses of an active treatment against placebo in the confirmatory sense (in this case, no confirmatory comparison has to be performed between the two active treatment groups). The power properties of this approach are examined in the present paper. It is shown that the power of the first-step global test, and therefore of the overall procedure, may be appreciably lower than the power of the pairwise comparison between the more favourable active dose group and placebo. Achieving a given overall power for this comparison with Fisher's LSD procedure, irrespective of the effect size in the less favourable dose group, may require slightly larger treatment groups than sizing the study with respect to a simple Bonferroni alpha adjustment. Therefore, if Fisher's LSD procedure is used to avoid an alpha adjustment in phase III clinical trials, the potential loss of power due to the first-step global test should be considered at the planning stage.
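A minimal sketch of the two-step logic for a three-arm trial; for simplicity the second step below uses ordinary two-sample t-tests rather than the pooled-variance LSD t-statistic, and all group means, standard deviations, and sample sizes are illustrative assumptions.

```python
import itertools
import numpy as np
from scipy import stats


def fisher_lsd(groups, alpha=0.05):
    """Fisher's LSD logic: a global one-way ANOVA F-test first; only if it rejects,
    all pairwise comparisons are carried out at the same level alpha.
    (With exactly three groups this protects the experimentwise error rate.)"""
    f_stat, p_global = stats.f_oneway(*groups)
    if p_global >= alpha:
        return p_global, []          # stop: global null not rejected
    pairwise = []
    for (i, gi), (j, gj) in itertools.combinations(enumerate(groups), 2):
        t, p = stats.ttest_ind(gi, gj)
        pairwise.append(((i, j), p, p < alpha))
    return p_global, pairwise


# Illustrative three-arm trial: placebo, low dose, high dose.
rng = np.random.default_rng(7)
placebo = rng.normal(0.0, 1.0, 30)
low = rng.normal(0.3, 1.0, 30)
high = rng.normal(0.8, 1.0, 30)
print(fisher_lsd([placebo, low, high]))
```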
