Similar documents
 20 similar documents found; search time: 15 ms
1.
Fisher's transformation of the bivariate-normal correlation coefficient is usually derived as a variance-stabilizing transformation and its normalizing property is then demonstrated by the reduced skewness of the distribution resulting from the transformation. In this note the transformation is derived as a normalizing transformation that incorporates variance stabilization. Some additional remarks are made on the transformation and its uses.
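A minimal sketch of the transformation and its standard confidence-interval use (textbook definitions only, not code from the paper):

```python
import math

def fisher_z(r):
    """Fisher's z-transformation: z = artanh(r) = 0.5 * ln((1 + r) / (1 - r))."""
    return 0.5 * math.log((1 + r) / (1 - r))

def fisher_ci(r, n, z_crit=1.959964):
    """Approximate 95% CI for rho: z is roughly N(artanh(rho), 1/(n - 3)),
    so transform r, add the margin on the z scale, and map back with tanh."""
    z = fisher_z(r)
    half = z_crit / math.sqrt(n - 3)
    return math.tanh(z - half), math.tanh(z + half)
```

On the z scale the sampling variance is approximately 1/(n − 3) regardless of ρ, which is the variance-stabilizing property the note starts from.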

2.
This article shows how to use any correlation coefficient to produce an estimate of location and scale. It is part of a broader system, called a correlation estimation system (CES), that uses correlation coefficients as the starting point for estimations. The method is illustrated using the well-known normal distribution. The article shows that any correlation coefficient can be used to fit a simple linear regression line to bivariate data; the slope and intercept are then estimates of standard deviation and location. Because a robust correlation will produce robust estimates, this CES can be recommended as a tool for everyday data analysis. Simulations indicate that the median with this method, using a robust correlation coefficient, appears to be nearly as efficient as the mean with good data and much better if there are a few errant data points. Hypothesis testing and confidence intervals are discussed for the scale parameter; both normal and Cauchy distributions are covered.
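The slope/intercept idea can be sketched as follows, using ordinary least squares of the sorted sample against normal quantiles as a stand-in for a general correlation-based fit (an illustration of the principle, not the paper's exact CES):

```python
from statistics import NormalDist

def ces_location_scale(sample):
    """CES-style sketch: regress the sorted sample on standard-normal
    quantiles; the intercept estimates location (mean) and the slope
    estimates scale (standard deviation)."""
    xs = sorted(sample)
    n = len(xs)
    q = [NormalDist().inv_cdf((i - 0.5) / n) for i in range(1, n + 1)]
    qbar = sum(q) / n
    xbar = sum(xs) / n
    slope = (sum((qi - qbar) * (xi - xbar) for qi, xi in zip(q, xs))
             / sum((qi - qbar) ** 2 for qi in q))
    return xbar - slope * qbar, slope  # (location, scale) estimates
```

Swapping the least-squares fit for one based on a robust correlation coefficient is what gives the robust variant described in the abstract.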

3.
Some applications of ratios of normal random variables require both the numerator and denominator of the ratio to be positive if the ratio is to have a meaningful interpretation. In these applications, there may also be substantial likelihood that the variables will assume negative values. An example of such an application is when comparisons are made in which treatments may have either efficacious or deleterious effects on different trials. Classical theory on ratios of normal variables has focused on the distribution of the ratio and has not formally incorporated this practical consideration. When this issue has arisen, approximations have been used to address it. In this article, we provide an exact method for determining (1 − α) confidence bounds for ratios of normal variables under the constraint that the ratio is composed of positive values and connect this theory to classical work in this area. We then illustrate several practical applications of this method.

4.
In many engineering problems it is necessary to draw statistical inferences on the mean of a lognormal distribution based on a complete sample of observations. Statistical demonstration of mean time to repair (MTTR) is one example. Although optimum confidence intervals and hypothesis tests for the lognormal mean have been developed, they are difficult to use, requiring extensive tables and/or a computer. In this paper, simplified conservative methods for calculating confidence intervals or hypothesis tests for the lognormal mean are presented. Here, “conservative” refers to confidence intervals (hypothesis tests) whose infimum coverage probability (supremum probability of rejecting the null hypothesis, taken over parameter values under the null hypothesis) equals the nominal level. The term has obvious implications for confidence intervals (they are “wider” in some sense than their optimum or exact counterparts); applying it to hypothesis tests should not be confusing if it is remembered that their equivalent confidence intervals are conservative. No implication of optimality is intended for these conservative procedures. It is emphasized that these are direct statistical inference methods for the lognormal mean, as opposed to the already well-known methods for the parameters of the underlying normal distribution. The method currently employed in MIL-STD-471A for statistical demonstration of MTTR is analyzed and compared to the new method in terms of asymptotic relative efficiency. The new methods are also compared to the optimum methods derived by Land (1971, 1973).

5.
In this article, we propose an approach for estimating the confidence interval of the common intraclass correlation coefficient based on the profile likelihood. Comparisons are made with a procedure using the concept of generalized pivots. The method presented is less computationally demanding than the method using generalized pivots. The approach also provides better coverage and shorter confidence intervals when the value of the common intraclass correlation coefficient is low. The lengths of confidence intervals given by both methods are quite comparable for high, but less realistic, values of the common intraclass correlation coefficient.

6.
For given continuous distribution functions F(x) and G(y) and a Pearson correlation coefficient ρ, an algorithm is provided to construct a sequence of continuous bivariate distributions with marginals equal to F(x) and G(y) whose correlation coefficients converge to ρ. The algorithm can be easily implemented using S-Plus or R. Applications are given to generate bivariate random variables with marginals including the Gamma, Beta, Weibull, and uniform distributions.
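The abstract does not give the algorithm itself; a standard way to realize the same goal is the Gaussian-copula (NORTA-style) construction, sketched below as an illustration of the idea rather than the paper's method:

```python
import math
import random
from statistics import NormalDist

def correlated_pair(rho_z, inv_f, inv_g, rng):
    """Gaussian-copula sketch: draw bivariate normals with correlation rho_z,
    map through the normal CDF to uniforms, then through each target inverse
    CDF to obtain the desired marginals (the resulting Pearson correlation
    differs somewhat from rho_z and must be calibrated)."""
    z1 = rng.gauss(0, 1)
    z2 = rho_z * z1 + math.sqrt(1 - rho_z ** 2) * rng.gauss(0, 1)
    phi = NormalDist().cdf
    return inv_f(phi(z1)), inv_g(phi(z2))

# Example marginals: uniform(0, 1) and exponential with rate 1.
inv_unif = lambda u: u
inv_exp = lambda u: -math.log(1.0 - u)
```

Calibrating rho_z so that the induced Pearson correlation hits a target ρ is the step the paper's sequence construction addresses.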

7.
Sample size and the population correlation coefficient are the most important factors influencing the statistical significance of the sample correlation coefficient. When testing the hypothesis that an observed correlation coefficient r differs from zero, Fisher's z-transformation may be inaccurate for small samples, especially when the population correlation coefficient ρ is large. In this study, a simulation program was written to illustrate how the bias in the Fisher transformation of the correlation coefficient affects the precision of estimates when the sample size is small and ρ is large. From the simulation results, 90 and 95% confidence intervals for correlation coefficients were computed and tabulated. It is suggested that, especially when ρ is greater than 0.2 and the sample size is 18 or less, Tables 1 and 2 be used for significance tests of correlations.

8.
The use of the correlation coefficient is suggested as a technique for summarizing and objectively evaluating the information contained in probability plots. Goodness-of-fit tests are constructed using this technique for several commonly used plotting positions for the normal distribution. Empirical sampling methods are used to construct the null distribution for these tests, which are then compared on the basis of power against certain nonnormal alternatives. Commonly used regression tests of fit are also included in the comparisons. The results indicate that use of the plotting position pi = (i - .375)/(n + .25) yields a competitive regression test of fit for normality.
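The test statistic itself is simple to compute; a sketch using the plotting position recommended above (the surrounding test machinery, i.e. critical values from empirical sampling, is omitted):

```python
from statistics import NormalDist

def ppcc_normal(sample):
    """Probability-plot correlation coefficient for normality, using the
    plotting position p_i = (i - .375) / (n + .25); values near 1 are
    consistent with normality, small values suggest departure."""
    xs = sorted(sample)
    n = len(xs)
    m = [NormalDist().inv_cdf((i - 0.375) / (n + 0.25)) for i in range(1, n + 1)]
    mx = sum(xs) / n
    mm = sum(m) / n
    num = sum((x - mx) * (q - mm) for x, q in zip(xs, m))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((q - mm) ** 2 for q in m)) ** 0.5
    return num / den
```

In the paper's framework, this statistic would be referred to an empirically tabulated null distribution to obtain a p-value.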

9.
The correlation coefficient (CC) is a standard measure of a possible linear association between two continuous random variables. The CC plays a significant role in many scientific disciplines. For a bivariate normal distribution, there are many types of confidence intervals for the CC, such as z-transformation and maximum likelihood-based intervals. However, when the underlying bivariate distribution is unknown, the construction of confidence intervals for the CC is not well-developed. In this paper, we discuss various interval estimation methods for the CC. We propose a generalized confidence interval for the CC when the underlying bivariate distribution is a normal distribution, and two empirical likelihood-based intervals for the CC when the underlying bivariate distribution is unknown. We also conduct extensive simulation studies to compare the new intervals with existing intervals in terms of coverage probability and interval length. Finally, two real examples are used to demonstrate the application of the proposed methods.

10.
From a theoretical perspective, the paper considers the properties of the maximum likelihood estimator of the correlation coefficient, principally regarding precision, in various types of bivariate model which are popular in the applied literature. The models are: 'Full-Full', in which both variables are fully observed; 'Censored-Censored', in which both of the variables are censored at zero; and finally, 'Binary-Binary', in which both variables are observed only in sign. For analytical convenience, the underlying bivariate distribution which is assumed in each of these cases is the bivariate logistic. A central issue is the extent to which censoring reduces the level of Fisher's information pertaining to the correlation coefficient, and therefore reduces the precision with which this important parameter can be estimated.

11.
A formula to evaluate the integral of the bivariate normal density over finite area regions of the plane is developed. It is then used to compare regression estimates when bivariate normality is appropriate.
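For rectangular regions, such an integral can be evaluated numerically by conditioning on one variable; the sketch below is a generic numerical approach, not the paper's formula:

```python
import math
from statistics import NormalDist

def bvn_rect_prob(a, b, c, d, rho, steps=400):
    """P(a < X < b, c < Y < d) for standard bivariate normals with
    correlation rho, via the conditional decomposition
        integral over [a, b] of phi(x) * [Phi((d - rho*x)/s) - Phi((c - rho*x)/s)] dx,
    where s = sqrt(1 - rho^2), evaluated with the trapezoid rule."""
    nd = NormalDist()
    s = math.sqrt(1.0 - rho * rho)

    def f(x):
        return nd.pdf(x) * (nd.cdf((d - rho * x) / s) - nd.cdf((c - rho * x) / s))

    h = (b - a) / steps
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, steps)))
```

A useful check: for the positive quadrant, P(X > 0, Y > 0) = 1/4 + arcsin(ρ)/(2π).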

12.
The most common measure of dependence between two time series is the cross-correlation function. This measure gives a complete characterization of dependence for two linear and jointly Gaussian time series, but it often fails for nonlinear and non-Gaussian time series models, such as the ARCH-type models used in finance. The cross-correlation function is a global measure of dependence. In this article, we apply to bivariate time series the nonlinear local measure of dependence called local Gaussian correlation. It generally works well also for nonlinear models, and it can distinguish between positive and negative local dependence. We construct confidence intervals for the local Gaussian correlation and develop a test based on this measure of dependence. Asymptotic properties are derived for the parameter estimates, for the test functional and for a block bootstrap procedure. For both simulated and financial index data, we construct confidence intervals and we compare the proposed test with one based on the ordinary correlation and with one based on the Brownian distance correlation. Financial indexes are examined over a long time period and their local joint behavior, including tail behavior, is analyzed prior to, during and after the financial crisis. Supplementary material for this article is available online.
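The block bootstrap mentioned above preserves short-range dependence by resampling contiguous blocks rather than single observations. A generic sketch of the moving-block resampling step, illustrated on the series mean for simplicity (not the paper's local-Gaussian-correlation bootstrap):

```python
import random

def block_bootstrap_means(series, block_len, reps, seed=0):
    """Moving-block bootstrap: repeatedly resample overlapping blocks of
    length block_len, concatenate them up to the original length, and record
    each replicate's mean. The spread of the replicates estimates the
    sampling variability of the statistic under serial dependence."""
    rng = random.Random(seed)
    n = len(series)
    n_starts = n - block_len + 1
    means = []
    for _ in range(reps):
        resampled = []
        while len(resampled) < n:
            s = rng.randrange(n_starts)
            resampled.extend(series[s:s + block_len])
        means.append(sum(resampled[:n]) / n)
    return means
```

Replacing the mean with the local Gaussian correlation estimator gives the shape of the procedure whose asymptotics the paper derives.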

13.
The “New Statistics” emphasizes effect sizes, confidence intervals, meta-analysis, and the use of Open Science practices. We present three specific ways in which a New Statistics approach can help improve scientific practice: by reducing overconfidence in small samples, by reducing confirmation bias, and by fostering more cautious judgments of consistency. We illustrate these points through consideration of the literature on oxytocin and human trust, a research area that typifies some of the endemic problems that arise with poor statistical practice.

14.
15.
In this article, we propose various tests for serial correlation in fixed-effects panel data regression models with a small number of time periods. First, a simplified version of the test suggested by Wooldridge (2002) and Drukker (2003) is considered. The second test is based on the Lagrange Multiplier (LM) statistic suggested by Baltagi and Li (1995), and the third test is a modification of the classical Durbin–Watson statistic. Under the null hypothesis of no serial correlation, all tests possess a standard normal limiting distribution as N tends to infinity and T is fixed. Analyzing the local power of the tests, we find that the LM statistic has superior power properties. Furthermore, a generalization to test for autocorrelation up to some given lag order and a test statistic that is robust against time dependent heteroskedasticity are proposed.
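The idea behind the Wooldridge/Drukker approach can be sketched in a few lines (a simplified illustration without covariates or a formal test statistic, not the paper's procedure): after first-differencing, idiosyncratic errors that are serially uncorrelated make the differenced series follow an MA(1) with lag-one regression coefficient −0.5.

```python
def fd_serial_coef(panel):
    """Pool the regression of first-differenced series on their own lag
    across units; with no serial correlation in the underlying errors the
    slope should be close to -0.5 (fixed effects drop out in differencing)."""
    pairs = []
    for series in panel:                    # one list of y_it per unit i
        d = [b - a for a, b in zip(series, series[1:])]
        pairs += list(zip(d, d[1:]))        # (lagged diff, current diff)
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    num = sum((x - mx) * (y - my) for x, y in pairs)
    den = sum((x - mx) ** 2 for x, _ in pairs)
    return num / den
```

A large deviation of the pooled slope from −0.5 signals serial correlation in the original errors.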

16.
It is well known that a Bayesian credible interval for a parameter of interest is derived from a prior distribution that appropriately describes the prior information. It is less well known that there exists a frequentist approach, developed by Pratt (1961), that also utilizes prior information in the construction of frequentist confidence intervals. This frequentist approach produces confidence intervals that have minimum weighted average expected length, averaged according to some weight function that appropriately describes the prior information. We begin with a simple model as a starting point for comparing these two distinct procedures in interval estimation. Consider X1,…, Xn that are independent and identically N(μ, σ²) distributed random variables, where σ² is known, and the parameter of interest is μ. Suppose also that previous experience with similar data sets and/or specific background and expert opinion suggest that μ = 0. Our aim is to: (a) develop two types of Bayesian 1 − α credible intervals for μ, derived from an appropriate prior cumulative distribution function F(μ); and, more importantly, (b) compare these Bayesian 1 − α credible intervals for μ to the frequentist 1 − α confidence interval for μ derived from Pratt's frequentist approach, in which the weight function corresponds to the prior cumulative distribution function F(μ). We show that the endpoints of the Bayesian 1 − α credible intervals for μ are very different from the endpoints of the frequentist 1 − α confidence interval for μ when the prior information strongly suggests that μ = 0 and the data support the uncertain prior information about μ. In addition, we assess the performance of these intervals by analyzing their coverage probability properties and expected lengths.

17.
Traditionally, sphericity (i.e., independence and homoscedasticity for raw data) is put forward as the condition to be satisfied by the variance–covariance matrix of at least one of the two observation vectors analyzed for correlation, for the unmodified t test of significance to be valid under the Gaussian and constant population mean assumptions. In this article, the author proves that the sphericity condition is too strong and a weaker (i.e., more general) sufficient condition for valid unmodified t testing in correlation analysis is circularity (i.e., independence and homoscedasticity after linear transformation by orthonormal contrasts), to be satisfied by the variance–covariance matrix of one of the two observation vectors. Two other conditions (i.e., compound symmetry for one of the two observation vectors; absence of correlation between the components of one observation vector, combined with a particular pattern of joint heteroscedasticity in the two observation vectors) are also considered and discussed. When both observation vectors possess the same variance–covariance matrix up to a positive multiplicative constant, the circularity condition is shown to be necessary and sufficient. “Observation vectors” may designate partial realizations of temporal or spatial stochastic processes as well as profile vectors of repeated measures. From the proof, it follows that an effective sample size appropriately defined can measure the discrepancy from the more general sufficient condition for valid unmodified t testing in correlation analysis with autocorrelated and heteroscedastic sample data. The proof is complemented by a simulation study.
Finally, the differences between the role of the circularity condition in the correlation analysis and its role in the repeated measures ANOVA (i.e., where it was first introduced) are scrutinized, and the link between the circular variance–covariance structure and the centering of observations with respect to the sample mean is emphasized.

18.
The density of the multiple correlation coefficient is derived by direct integration when the sample covariance matrix has a linear non-central distribution. Using the density, we deduce the null and non-null distribution of the multiple correlation coefficient when sampling from a mixture of two multivariate normal populations with the same covariance matrix. We also compute actual significance levels of the test of the hypothesis H0: ρ1·2…p = 0 versus Ha: ρ1·2…p > 0, given the mixture model.

19.
We construct new pivotals to obtain confidence bounds and confidence intervals for the mean of a stationary process. These follow the approach based on estimating functions. The new pivotals are compared with the standard pivotal based on studentization. We study the first four cumulants of each of these pivotals and explain why the pivotals based on the estimating function approach result in better coverage probabilities. Some simulation results comparing these pivotals have been reported.

20.
For the unbalanced analysis of covariance model with one covariate, a simple formula is given for the intraclass correlation coefficient estimator that results from Henderson's Method 3 estimation of variance components. Example calculations and the corresponding interpretations are given for a study of the correlation of iron content among brothers. The example illustrates the manner in which the estimator depends on the pattern of correlation between the covariate and the variable under investigation.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号