Similar articles
 20 similar articles found (search time: 0 ms)
1.
The distribution of the sample correlation coefficient is derived when the population is a mixture of two bivariate normal distributions with zero mean but different covariances and mixing proportions 1 - λ and λ respectively; λ will be called the proportion of contamination. The test of ρ = 0 based on Student's t, Fisher's z, arcsine, or Ruben's transformation is shown numerically to be nonrobust when λ, the proportion of contamination, lies between 0.05 and 0.50 and the contaminated population has 9 times the variance of the standard (bivariate normal) population. These tests are also sensitive to the presence of outliers.
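As background for the transformations named above, here is a minimal sketch (not the paper's simulation study) of the Fisher-z test of ρ = 0; the 1.96 cutoff assumes a two-sided 5% level, and the names are illustrative:

```python
import math

def fisher_z_statistic(r, n):
    """Test statistic for H0: rho = 0 via Fisher's z-transformation.

    z = atanh(r) is approximately N(0, 1/(n-3)) under H0 for bivariate
    normal data, so sqrt(n-3) * z is referred to the standard normal.
    """
    return math.sqrt(n - 3) * math.atanh(r)

def reject_h0(r, n, crit=1.96):
    """Two-sided test: reject H0 when |statistic| exceeds the normal quantile."""
    return abs(fisher_z_statistic(r, n)) > crit
```

The nonrobustness result above concerns exactly this kind of cutoff: under the contaminated mixture, the statistic's true distribution is no longer well approximated by N(0, 1).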

2.
It is shown that the non-null distribution of the multiple correlation coefficient may be derived rather easily if the correlated normal variables are defined in a convenient way. The invariance of the correlation distribution to linear transformations of the variables makes the results generally applicable. The distribution is derived as the well-known mixture of null distributions, and some generalizations when the variables are not normally distributed are indicated.

3.
We propose two simple moment-based procedures, one with (GCCC1) and one without (GCCC2) normality assumptions, to generalize the inference of the concordance correlation coefficient for the evaluation of agreement among multiple observers for measurements on a continuous scale. A modified Fisher's Z-transformation was adapted to further improve the inference. We compared the proposed methods with a U-statistic-based inference approach. Simulation analysis showed desirable statistical properties of the simplified approach GCCC1, in terms of coverage probabilities and coverage balance, especially for small samples. GCCC2, which is distribution-free, behaved comparably with the U-statistic-based procedure, but had a more intuitive and explicit variance estimator. The utility of these approaches was illustrated using two clinical data examples.
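Lin's two-rater concordance correlation coefficient, which the GCCC procedures generalize to multiple observers, can be sketched as follows (a plain sample estimator for orientation, not the paper's GCCC1/GCCC2 variance machinery):

```python
def ccc(x, y):
    """Sample concordance correlation coefficient (Lin, 1989) for two raters.

    ccc = 2 * s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2)

    It equals 1 only under perfect agreement (y = x), penalizing both
    location and scale shifts, unlike the Pearson correlation.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx2 = sum((a - mx) ** 2 for a in x) / n
    sy2 = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)
```

For example, a constant shift (y = x + 1) leaves the Pearson correlation at 1 but pulls the CCC below 1.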

4.
Fisher's least significant difference (LSD) procedure is a two-step testing procedure for pairwise comparisons of several treatment groups. In the first step of the procedure, a global test is performed for the null hypothesis that the expected means of all treatment groups under study are equal. If this global null hypothesis can be rejected at the pre-specified level of significance, then in the second step of the procedure, one is permitted in principle to perform all pairwise comparisons at the same level of significance (although in practice, not all of them may be of primary interest). Fisher's LSD procedure is known to preserve the experimentwise type I error rate at the nominal level of significance, if (and only if) the number of treatment groups is three. The procedure may therefore be applied to phase III clinical trials comparing two doses of an active treatment against placebo in the confirmatory sense (while in this case, no confirmatory comparison has to be performed between the two active treatment groups). The power properties of this approach are examined in the present paper. It is shown that the power of the first step global test--and therefore the power of the overall procedure--may be relevantly lower than the power of the pairwise comparison between the more-favourable active dose group and placebo. Achieving a certain overall power for this comparison with Fisher's LSD procedure--irrespective of the effect size at the less-favourable dose group--may require slightly larger treatment groups than sizing the study with respect to the simple Bonferroni alpha adjustment. Therefore if Fisher's LSD procedure is used to avoid an alpha adjustment for phase III clinical trials, the potential loss of power due to the first-step global test should be considered at the planning stage.
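The two-step logic can be sketched as follows. This is an illustration rather than the paper's power analysis: critical values are supplied by the caller (the test below uses the 5% values F(2,6) ≈ 5.14 and t(6) ≈ 2.447 for three groups of three observations), and the function names are assumptions:

```python
def f_statistic(groups):
    """One-way ANOVA F statistic for the global null of equal group means."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

def lsd_pairwise(groups, f_crit, t_crit):
    """Fisher's LSD: pairwise t tests are performed only if the global F test rejects.

    Returns the list of (i, j) index pairs declared significantly different.
    """
    if f_statistic(groups) <= f_crit:
        return []  # step 1 fails: no pairwise comparisons are made
    k = len(groups)
    n = sum(len(g) for g in groups)
    mse = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups) / (n - k)
    significant = []
    for i in range(k):
        for j in range(i + 1, k):
            mi = sum(groups[i]) / len(groups[i])
            mj = sum(groups[j]) / len(groups[j])
            se = (mse * (1 / len(groups[i]) + 1 / len(groups[j]))) ** 0.5
            if abs(mi - mj) / se > t_crit:
                significant.append((i, j))
    return significant
```

The power issue discussed above lives in the first line of `lsd_pairwise`: a comparison with excellent pairwise power can still be lost when the global F test fails to reject.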

5.
Several methods for generating variates with univariate and multivariate Wallenius' and Fisher's noncentral hypergeometric distributions are developed. Methods for the univariate distributions include: simulation of urn experiments, inversion by binary search, inversion by chop-down search from the mode, ratio-of-uniforms rejection method, and rejection by sampling in the τ domain. Methods for the multivariate distributions include: simulation of urn experiments, conditional method, Gibbs sampling, and Metropolis-Hastings sampling. These methods are useful for Monte Carlo simulation of models of biased sampling and models of evolution and for calculating moments and quantiles of the distributions.
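A minimal sketch of the first listed method, simulation of urn experiments, for the multi-colour Wallenius case (the function name and interface are illustrative, not the paper's):

```python
import random

def wallenius_sample(counts, weights, n, seed=0):
    """Simulate n draws without replacement from a biased urn (Wallenius' scheme).

    counts[i] balls of colour i are present; at each draw, a ball of colour i
    is taken with probability proportional to counts[i] * weights[i], and the
    odds change as balls are removed. Returns the number drawn of each colour.
    """
    rng = random.Random(seed)
    counts = list(counts)
    drawn = [0] * len(counts)
    for _ in range(n):
        total = sum(c * w for c, w in zip(counts, weights))
        u = rng.random() * total
        acc = 0.0
        for i, (c, w) in enumerate(zip(counts, weights)):
            acc += c * w
            if u < acc:
                counts[i] -= 1
                drawn[i] += 1
                break
    return drawn
```

Fisher's noncentral hypergeometric distribution arises from a different experiment (conditioning independent binomial draws on their total), so this urn scheme samples the Wallenius variant only.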

6.
We consider n pairs of random variables (X11, X21), (X12, X22), …, (X1n, X2n) having a bivariate elliptically contoured density of the form f(x) = |Δ|^{-1/2} g((x − θ)′ Δ^{-1} (x − θ)), where θ = (θ1, θ2)′ is the vector of location parameters and Δ = (λ_ik) is a 2 × 2 symmetric positive definite matrix of scale parameters. The exact distribution of the Pearson product-moment correlation coefficient between X1 and X2 is obtained. The usual case, when a sample of size n is drawn from a bivariate normal population, is a special case of the above-mentioned model.

7.
The truncated bivariate normal distribution (TBVND) with truncation in both variables on the left is studied here. The behaviour of the sample correlation coefficient is assessed through its moments when the sample is from such a population. Some inequalities established by Rao et al. (1968) are extended.

8.
Spearman's rank correlation coefficient is shown to be a measure of distance on the unit square, which characterizes the concentration of the probability density under a copula. A distance function offers insight into structuring copulas with a desired degree of association.
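The copula representation behind this measure can be checked numerically: Spearman's rho equals 12∫∫C(u, v) du dv − 3, which for the FGM copula (used here only as a convenient worked example) comes out to θ/3:

```python
def spearman_from_copula(C, m=400):
    """Spearman's rho via rho_S = 12 * double-integral of C(u, v) - 3.

    The integral over the unit square is approximated by the midpoint rule
    on an m-by-m grid.
    """
    h = 1.0 / m
    s = 0.0
    for i in range(m):
        u = (i + 0.5) * h
        for j in range(m):
            v = (j + 0.5) * h
            s += C(u, v)
    return 12.0 * s * h * h - 3.0

def fgm(u, v, theta=0.6):
    """Farlie-Gumbel-Morgenstern copula; known Spearman's rho is theta / 3."""
    return u * v * (1 + theta * (1 - u) * (1 - v))
```

The independence copula C(u, v) = uv gives rho 0, as it should.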

9.
Econometric Reviews, 2013, 32(1): 25–52
Abstract

This paper argues that Fisher's paradox can be explained away in terms of estimator choice. We analyse by means of Monte Carlo experiments the small sample properties of a large set of estimators (including virtually all available single-equation estimators), and compute the critical values based on the empirical distributions of the t-statistics, for a variety of Data Generation Processes (DGPs), allowing for structural breaks, ARCH effects etc. We show that precisely the estimators most commonly used in the literature, namely OLS, Dynamic OLS (DOLS) and non-prewhitened FMLS, have the worst performance in small samples, and produce rejections of the Fisher hypothesis. If one employs the estimators with the most desirable properties (i.e., the smallest downward bias and the minimum shift in the distribution of the associated t-statistics), or if one uses the empirical critical values, the evidence based on US data is strongly supportive of the Fisher relation, consistent with many theoretical models.

10.
The Edgeworth expansion for the distribution function of Spearman's rank correlation coefficient may be used to show that the rates of convergence for the normal and Pearson type II approximations are 1/n and 1/n^2 respectively. Using the Edgeworth expansion up to terms involving the sixth moment of the exact distribution allows an approximation with an error of order 1/n^3.

11.
A study is made of Neyman's C(α) test for testing independence in nonnormal situations. It is shown that it performs very well both in terms of the level of significance and the power, even for small values of the sample size. Also, in the case of the bivariate Poisson distribution, it is shown that Fisher's z and Student's t transforms of the sample correlation coefficient are good competitors for Neyman's procedure.

12.
We respond to criticism of bootstrap confidence intervals for the correlation coefficient leveled by recent authors, arguing that in the correlation coefficient case, non-standard methods should be employed. We propose two such methods. The first is a bootstrap coverage correction algorithm using iterated bootstrap techniques (Hall, 1986; Beran, 1987a; Hall and Martin, 1988) applied to ordinary percentile-method intervals (Efron, 1979), giving intervals with high coverage accuracy and stable lengths and endpoints. The simulation study carried out for this method gives results for sample sizes 8, 10, and 12 in three parent populations. The second technique involves the construction of percentile-t bootstrap confidence intervals for a transformed correlation coefficient, followed by an inversion of the transformation, to obtain “transformed percentile-t” intervals for the correlation coefficient. In particular, Fisher's z-transformation is used, and nonparametric delta method and jackknife variance estimates are used to Studentize the transformed correlation coefficient, with the jackknife-Studentized transformed percentile-t interval yielding the better coverage accuracy, in general. Percentile-t intervals constructed without first using the transformation perform very poorly, having large expected lengths and erratically fluctuating endpoints. The simulation study illustrating this technique gives results for sample sizes 10, 15 and 20 in four parent populations. Our techniques provide confidence intervals for the correlation coefficient which have good coverage accuracy (unlike ordinary percentile intervals), and stable lengths and endpoints (unlike ordinary percentile-t intervals).
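For orientation, a sketch of the ordinary percentile-method interval that the iterated coverage correction starts from; the correction step itself, and the percentile-t variants, are omitted, and the interface is an assumption:

```python
import math
import random

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def percentile_ci(x, y, level=0.90, B=2000, seed=0):
    """Ordinary percentile-method bootstrap interval (Efron, 1979) for r.

    Resamples the (x, y) pairs with replacement, collects the bootstrap
    correlations, and returns the empirical (alpha/2, 1 - alpha/2) quantiles.
    """
    rng = random.Random(seed)
    n = len(x)
    rs = []
    for _ in range(B):
        idx = [rng.randrange(n) for _ in range(n)]
        xs, ys = [x[i] for i in idx], [y[i] for i in idx]
        if len(set(xs)) < 2 or len(set(ys)) < 2:
            continue  # degenerate resample: r is undefined, skip it
        rs.append(pearson_r(xs, ys))
    rs.sort()
    m = len(rs)
    a = (1 - level) / 2
    return rs[int(m * a)], rs[int(m * (1 - a)) - 1]
```

The paper's point is precisely that this interval, left uncorrected, has poor coverage for the correlation coefficient, motivating the iterated-bootstrap and transformed percentile-t refinements.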

13.
A simple and accurate test on the value of the correlation coefficient in normal bivariate populations is proposed here. Its accuracy compares favourably with that of previous approximations.

14.
This paper proposes a control chart with variable sampling intervals (VSI) to detect increases in the expected value of the number of defects in a random sample of constant size n: the upper one-sided c-VSI chart.

The performance of this chart is evaluated by means of the average time to signal (ATS). The comparisons made between the standard FSI (fixed sampling intervals) and the VSI upper one-sided c-charts indicate that using variable sampling intervals can substantially reduce the average time to signal. Using stochastic ordering, we prove that this reduction always occurs.

Special attention is given to the choice of the proposed control chart's parameters and to the chart's graphical display.
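A toy sketch of the ingredients, assuming Poisson-distributed defect counts: the per-sample signal probability, a warning-limit rule for switching between the two sampling intervals, and the FSI average time to signal d/p as the benchmark that the VSI chart improves on. The names and the specific switching rule are illustrative assumptions, not the paper's design:

```python
import math

def poisson_tail(mu, ucl):
    """Per-sample signal probability P(C > UCL) for C ~ Poisson(mu)."""
    cdf = sum(math.exp(-mu) * mu ** k / math.factorial(k) for k in range(ucl + 1))
    return 1.0 - cdf

def next_interval(c, w, d_short, d_long):
    """VSI rule: sample sooner when the last count c exceeds the warning limit w."""
    return d_short if c > w else d_long

def ats_fsi(mu, ucl, d):
    """Average time to signal of the FSI c-chart: fixed interval / signal probability."""
    return d / poisson_tail(mu, ucl)
```

The VSI chart's ATS requires averaging over the randomly chosen intervals (the stochastic-ordering argument in the paper); the FSI formula above is only the baseline being beaten.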

15.
Summary  The probability integral (p.i.) values of the correlation coefficient in samples from a normal bivariate population are usually computed by approximate methods, except for the first few values of n. In this note we obtain the explicit expression for any sample size through a relation which also enables us to calculate easily and quickly the exact p.i. values as well as those of the density function (d.f.). From this p.i. expression it is also possible to obtain, among others, that of Student's t.

16.
This study treats an asymptotic distribution for measures of predictive power for generalized linear models (GLMs). We focus on the regression correlation coefficient (RCC), which is one such measure of predictive power. The RCC, proposed by Zheng and Agresti, is a population value and a generalization of the population coefficient of determination. Therefore, the RCC is easy to interpret and familiar. Recently, Takahashi and Kurosawa provided an explicit form of the RCC and proposed a new RCC estimator for a Poisson regression model. They also showed the validity of the new estimator compared with other estimators. This study discusses new statistical properties of the RCC for the Poisson regression model. Furthermore, we show the asymptotic normality of the RCC estimator.

17.
Abstract

Based on the novel idea in Balakrishnan's (2002) discussion of Arnold and Beaver's “Skew multivariate models related to hidden truncation and/or selective reporting” (Test 11), a new skew logistic distribution is proposed. It contains Azzalini's skew logistic and standard logistic distributions as particular cases. Several mathematical properties of the proposed distribution (including the characteristic function, cumulative distribution function, stochastic orderings, and simulation schemes) are derived. A real-data application is used to illustrate practical usefulness.

18.
A Gaussian copula is widely used to define correlated random variables. To obtain a prescribed Pearson correlation coefficient of ρx between two random variables with given marginal distributions, the correlation coefficient ρz between two standard normal variables in the copula must take a specific value which satisfies an integral equation that links ρx to ρz. In a few cases, this equation has an explicit solution, but in other cases it must be solved numerically. This paper attempts to address this issue. If two continuous random variables are involved, the marginal transformation is approximated by a weighted sum of Hermite polynomials; via Mehler’s formula, a polynomial of ρz is derived to approximate the function relationship between ρx and ρz. If a discrete variable is involved, the marginal transformation is decomposed into piecewise continuous ones, and ρx is expressed as a polynomial of ρz by Taylor expansion. For a given ρx, ρz can be efficiently determined by solving a polynomial equation.
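The uniform-marginal case is one where the link has a closed form, ρx = (6/π) arcsin(ρz/2). A sketch of that explicit inverse, plus a generic bisection fallback for when no closed form is known (the paper's Hermite/Taylor polynomial approximations are not reproduced here):

```python
import math

def rho_z_uniform(rho_x):
    """Explicit inverse for uniform marginals: rho_x = (6/pi) * asin(rho_z / 2)."""
    return 2.0 * math.sin(math.pi * rho_x / 6.0)

def rho_z_bisect(link, rho_x, tol=1e-10):
    """Solve link(rho_z) = rho_x by bisection on [-1, 1].

    `link` is the (monotone increasing) map from the Gaussian-copula
    correlation to the Pearson correlation induced by the marginal
    transformations; it may itself be evaluated numerically.
    """
    lo, hi = -1.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if link(mid) < rho_x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

Bisection is robust but needs one `link` evaluation per step; the polynomial approximations proposed in the paper exist precisely to avoid repeatedly evaluating the integral equation.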

19.
Terrell (1983) (The Annals of Probability, Vol. 11, No. 3, 823–826) showed that the coefficient of correlation between the smaller and larger of a sample of size two is at most one-half, and this upper bound is attained only for continuous uniform distributions. His proof is of computational nature and is based on the properties of Legendre polynomials. We give an easier proof of Terrell's characterization and we show how our method can be used for obtaining sharper bounds within the class of discrete distributions on N points and also a characterization of the equidistant uniform distribution.
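The discrete side of the characterization can be checked by direct enumeration; a sketch for two iid draws from the uniform distribution on {1, …, N}, showing the correlation sitting strictly below the 1/2 bound and rising toward it as N grows:

```python
def corr_min_max_discrete_uniform(N):
    """Corr(min, max) for two iid draws from the uniform distribution on {1..N}.

    Enumerates all N^2 equally likely ordered pairs exactly; by the Terrell
    bound the result is strictly below 1/2 for any discrete distribution.
    """
    pts = [(min(i, j), max(i, j)) for i in range(1, N + 1) for j in range(1, N + 1)]
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / n
    sx = (sum((p[0] - mx) ** 2 for p in pts) / n) ** 0.5
    sy = (sum((p[1] - my) ** 2 for p in pts) / n) ** 0.5
    return sxy / (sx * sy)
```

For N = 2 the value is exactly 1/3, and the bound 1/2 is approached only in the continuous limit.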

20.
