Similar articles
20 similar articles found.
1.
In this paper we investigate the behaviour of the expectation of a ratio of random variables and seek conditions under which this expectation equals the ratio of the expectations.
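A minimal Monte Carlo sketch of the gap in question, assuming Python with NumPy and an illustrative pair of independent gamma variates (not the paper's setting): under independence E[X/Y] = E[X]·E[1/Y], which generally differs from E[X]/E[Y].

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Illustrative choice: independent gamma variates (not from the paper).
x = rng.gamma(shape=3.0, scale=1.0, size=n)
y = rng.gamma(shape=5.0, scale=1.0, size=n)

e_ratio = np.mean(x / y)            # E[X/Y] by simulation
ratio_e = np.mean(x) / np.mean(y)   # E[X]/E[Y]

# Under independence E[X/Y] = E[X] * E[1/Y]; for Gamma(5, 1), E[1/Y] = 1/4.
print(f"E[X/Y]    ~ {e_ratio:.4f}")   # about 3 * 1/4 = 0.75
print(f"E[X]/E[Y] ~ {ratio_e:.4f}")   # about 3/5   = 0.60
```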

2.
In the quantitative group testing problem, the use of the group mean to determine whether the group maximum exceeds a prespecified threshold (an infected group) is analyzed, using n independent and identically distributed individuals. Under these conditions, it is shown that the information in the mean is sufficient to classify each group as infected or healthy with low probability of misclassification when the underlying distribution is a unilateral heavy-tailed distribution.
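A minimal simulation sketch of the group-mean rule, assuming Python with NumPy; the Pareto individual distribution, group size, threshold, and mean cut-off are all illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
n_groups, n = 20_000, 20           # groups of n = 20 i.i.d. individuals
threshold = 50.0                   # group is "infected" if its maximum exceeds this

# Illustrative heavy-tailed individuals (Pareto with tail index 1.5);
# the paper's distributional assumptions may differ.
x = rng.pareto(a=1.5, size=(n_groups, n)) + 1.0

infected = x.max(axis=1) > threshold          # true status from the group maximum
means = x.mean(axis=1)

# Illustrative cut-off for the mean rule: match the marginal infection rate.
cutoff = np.quantile(means, 1.0 - infected.mean())
classified = means > cutoff

print(f"infected groups:   {infected.mean():.3f}")
print(f"misclassification: {np.mean(infected != classified):.3f}")
```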

3.
There is currently much interest in the use of surrogate endpoints in clinical trials and intermediate endpoints in epidemiology. Freedman et al. [Statist. Med. 11 (1992) 167] proposed the use of a validation ratio for judging the evidence of the validity of a surrogate endpoint. The method involves calculation of a confidence interval for the ratio. In this paper, I compare through computer simulations the performance of Fieller's method with the delta method for this calculation. In typical situations, the numerator and denominator of the ratio are highly correlated. I find that the Fieller method is superior to the delta method in coverage properties and in statistical power of the validation test. In addition, the formula for predicting statistical power seems to be much more accurate for the Fieller method than for the delta method. The simulations show that the role of validation analysis is likely to be limited in evaluating the reliability of using surrogate endpoints in clinical trials; however, it is likely to be a useful tool in epidemiology for identifying intermediate endpoints.
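As a rough illustration of the two interval constructions being compared, assuming Python; the inputs are a generic numerator/denominator estimate with their covariance rather than the paper's validation-ratio setting.

```python
import math

def ratio_cis(a, b, vaa, vbb, vab, z=1.96):
    """Fieller and delta 95% CIs for rho = a/b, given estimates (a, b),
    their variances (vaa, vbb) and covariance vab."""
    rho = a / b

    # Delta method: Var(a/b) ~ (vaa - 2*rho*vab + rho**2 * vbb) / b**2
    se = math.sqrt(vaa - 2 * rho * vab + rho**2 * vbb) / abs(b)
    delta_ci = (rho - z * se, rho + z * se)

    # Fieller: solve (a - rho*b)^2 = z^2 * (vaa - 2*rho*vab + rho^2*vbb) for rho.
    A = b**2 - z**2 * vbb
    B = -2 * (a * b - z**2 * vab)
    C = a**2 - z**2 * vaa
    disc = B**2 - 4 * A * C
    if A <= 0 or disc < 0:
        fieller_ci = None            # unbounded or exclusive interval case
    else:
        r = math.sqrt(disc)
        fieller_ci = ((-B - r) / (2 * A), (-B + r) / (2 * A))
    return delta_ci, fieller_ci

# Hypothetical inputs: highly correlated numerator and denominator.
delta_ci, fieller_ci = ratio_cis(a=0.8, b=1.0, vaa=0.04, vbb=0.05, vab=0.035)
print("delta  :", delta_ci)
print("Fieller:", fieller_ci)
```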

4.
Baseball performances are an imperfect measure of baseball abilities, and consequently exaggerate differences in abilities. Predictions of relative batting averages and earned run averages can be improved substantially by using correlation coefficients estimated from earlier seasons to shrink performances toward the mean.
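A minimal sketch of the shrinkage predictor described, assuming Python; the league mean, observed average, and inter-season correlation are invented for illustration.

```python
def shrink_prediction(observed, league_mean, r):
    """Predict next-season performance by shrinking the observed deviation
    from the league mean by the inter-season correlation r (0 <= r <= 1)."""
    return league_mean + r * (observed - league_mean)

# Hypothetical numbers: league batting average .260, player hit .320,
# inter-season correlation of batting averages r = 0.4 (illustrative).
print(shrink_prediction(observed=0.320, league_mean=0.260, r=0.4))  # -> 0.284
```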

5.
The profile likelihood of the reliability parameter θ = P(X < Y), or of the ratio of means, when X and Y are independent exponential random variables, has a simple analytical expression and is a powerful tool for making inferences. Inferences about θ can be given in terms of likelihood-confidence intervals with a simple algebraic structure even for small and unequal samples. The case of right-censored data can also be handled in a simple way. This is in marked contrast with the complicated expressions, depending on cumbersome numerical calculations of multidimensional integrals, required to obtain the asymptotic confidence intervals traditionally presented in the scientific literature.
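A minimal sketch of the profile likelihood of θ = P(X < Y) for uncensored exponential samples, assuming Python with NumPy; for fixed θ the nuisance rate maximizes in closed form, and the grid-based likelihood interval below is an illustrative implementation.

```python
import numpy as np

def profile_loglik(theta, x, y):
    """Profile log-likelihood of theta = P(X < Y) for independent exponential
    samples x (rate l1) and y (rate l2), where theta = l1 / (l1 + l2)."""
    n, m, sx, sy = len(x), len(y), x.sum(), y.sum()
    l1 = (n + m) / (sx + sy * (1.0 - theta) / theta)   # profiled nuisance rate
    l2 = l1 * (1.0 - theta) / theta
    return n * np.log(l1) - l1 * sx + m * np.log(l2) - l2 * sy

rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=15)    # rate 1
y = rng.exponential(scale=2.0, size=10)    # rate 0.5, so true theta = 2/3

grid = np.linspace(0.01, 0.99, 981)
ll = np.array([profile_loglik(t, x, y) for t in grid])
rel = np.exp(ll - ll.max())                # relative profile likelihood

theta_hat = grid[rel.argmax()]
inside = grid[rel >= 0.1465]               # cut-off ~ exp(-chi2_{1, 0.95} / 2)
print(f"theta_hat ~ {theta_hat:.3f}, "
      f"95% likelihood interval ~ [{inside[0]:.3f}, {inside[-1]:.3f}]")
```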

6.
Following Gart (1966), a test of significance for the odds ratio in a 2×2 table is developed, based on a semi-empirical method of approximating discrete distributions by their continuous analogues. The distribution of the test statistic (W), the ratio of two independent F-variates, is derived. This approximate technique is compared with the "exact" test, the uncorrected χ2 test, and a normal approximation based on ln W.
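The W statistic itself is specific to the paper; as a point of reference, here is a minimal sketch (Python assumed) of the familiar normal approximation based on the log odds ratio for a 2×2 table, which is the last of the comparators mentioned above.

```python
import math

def log_odds_ratio_test(a, b, c, d):
    """Normal-approximation test of OR = 1 for the 2x2 table [[a, b], [c, d]],
    based on ln(OR) with variance 1/a + 1/b + 1/c + 1/d (Woolf-type;
    not the paper's W statistic)."""
    or_hat = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = math.log(or_hat) / se
    return or_hat, z

# Hypothetical counts for illustration.
or_hat, z = log_odds_ratio_test(20, 10, 8, 22)
print(f"OR = {or_hat:.2f}, z = {z:.2f}")   # refer z to the standard normal
```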

7.
8.
For right-censored survival data, it is well known that the mean survival time can be consistently estimated when the support of the censoring time contains the support of the survival time. In practice, however, this condition can easily be violated because the follow-up of a study is usually within a finite window. In this article, we show that the mean survival time is still estimable from a linear model when the support of some covariate(s) with non-zero coefficient(s) is unbounded, regardless of the length of follow-up. This implies that the mean survival time can be well estimated when the support of the linear predictor is wide in practice. The theoretical finding is further verified for finite samples by simulation studies. Simulations also show that, when both models are correctly specified, the linear model yields reasonable mean square prediction errors and outperforms the Cox model, particularly with heavy censoring and short follow-up time.

9.
The adaptive trimmed means of Hogg (1974), as modified by De Wet and van Wyk (1978), are studied in this paper for finite samples. They are shown to have good interval and efficiency properties for sample sizes 20 and larger. Confidence intervals based on these estimators are also considered and found to be fairly robust for sample sizes 40 and larger.
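A minimal sketch of an adaptive trimmed mean in this spirit, assuming Python with NumPy; the kurtosis-based rule for choosing the trimming proportion is an illustrative stand-in, not Hogg's (1974) exact selection rule.

```python
import numpy as np

def trimmed_mean(x, prop):
    """Symmetric trimmed mean: drop floor(prop * n) points from each tail."""
    x = np.sort(np.asarray(x, dtype=float))
    k = int(prop * len(x))
    return x[k: len(x) - k].mean() if k > 0 else x.mean()

def adaptive_trimmed_mean(x):
    """Choose the trimming proportion from the sample's tail weight.
    The kurtosis thresholds below are illustrative, not Hogg's (1974) rule."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    kurt = np.mean(z**4)                  # sample kurtosis (normal is about 3)
    if kurt < 3.5:
        prop = 0.05                        # light tails: trim little
    elif kurt < 5.0:
        prop = 0.15
    else:
        prop = 0.25                        # heavy tails: trim more
    return trimmed_mean(x, prop), prop

rng = np.random.default_rng(3)
sample = rng.standard_t(df=3, size=40)     # heavy-tailed sample of size 40
print(adaptive_trimmed_mean(sample))
```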

10.
11.
The important problem of the ratio of Weibull random variables is considered. Two motivating examples from engineering are discussed. Exact expressions are derived for the probability density function, cumulative distribution function, hazard rate function, shape characteristics, moments, factorial moments, skewness, kurtosis and percentiles of the ratio. Estimation procedures by the methods of moments and maximum likelihood are provided. The performances of the estimates from these methods are compared by simulation. Finally, an application is discussed for aspect and performance ratios of systems.
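A minimal sketch (Python with NumPy/SciPy assumed; the shape and scale values are illustrative) that cross-checks the cumulative distribution function of a ratio of independent Weibull variables, obtained by one-dimensional numerical integration, against a Monte Carlo estimate.

```python
import numpy as np
from scipy import integrate
from scipy.stats import weibull_min

# Illustrative parameters: X1 ~ Weibull(shape=2, scale=1), X2 ~ Weibull(shape=1.5, scale=2).
w1 = weibull_min(c=2.0, scale=1.0)
w2 = weibull_min(c=1.5, scale=2.0)

def ratio_cdf(t):
    """P(X1/X2 <= t) by integrating F1(t * x2) against the density of X2."""
    val, _ = integrate.quad(lambda x2: w1.cdf(t * x2) * w2.pdf(x2), 0, np.inf)
    return val

rng = np.random.default_rng(4)
r = w1.rvs(size=200_000, random_state=rng) / w2.rvs(size=200_000, random_state=rng)

for t in (0.25, 0.5, 1.0, 2.0):
    print(f"t={t:4.2f}  exact={ratio_cdf(t):.4f}  monte carlo={(r <= t).mean():.4f}")
```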

12.
Useful models for time series of counts or simply wrong ones?
There has been a considerable and growing interest in low integer-valued time series data leading to a diversification of modelling approaches. In addition to static regression models, both observation-driven and parameter-driven models are considered here. We compare and contrast a variety of time series models for counts using two very different data sets as a testbed. A range of diagnostic devices is employed to help inform model adequacy. Special attention is paid to dynamic structure and underlying distributional assumptions including associated dispersion properties. Competing models show attractive features, but overall no one modelling approach is seen to dominate.

13.
I make recommendations in choosing a confidence interval for the Poisson mean, from twelve different methods, that are based on four general principles: actual coverage should closely match the nominal coverage; narrower expected widths of confidence intervals are better; the right and left non-coverage should be fairly balanced; and some investigators may prefer closed-form intervals. The interval chosen depends on the relative importance the investigator places on each of these principles. The confidence intervals are examined through graphs of their coverage probability, interval widths and shapes.
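A minimal sketch of two of the candidate intervals, assuming Python with SciPy: the closed-form Wald interval and the exact (Garwood, chi-square based) interval, with their exact coverage computed at a few Poisson means.

```python
import numpy as np
from scipy.stats import chi2, poisson, norm

def wald_interval(x, alpha=0.05):
    """Closed-form Wald interval for a Poisson mean from a single count x."""
    z = norm.ppf(1 - alpha / 2)
    return max(x - z * np.sqrt(x), 0.0), x + z * np.sqrt(x)

def exact_interval(x, alpha=0.05):
    """Garwood exact (chi-square based) interval from a single count x."""
    lo = 0.0 if x == 0 else 0.5 * chi2.ppf(alpha / 2, 2 * x)
    hi = 0.5 * chi2.ppf(1 - alpha / 2, 2 * x + 2)
    return lo, hi

def coverage(mu, interval, alpha=0.05, x_max=200):
    """Exact coverage probability at mean mu for a given interval rule."""
    cov = 0.0
    for x in range(x_max + 1):
        lo, hi = interval(x, alpha)
        if lo <= mu <= hi:
            cov += poisson.pmf(x, mu)
    return cov

for mu in (1.0, 5.0, 20.0):
    print(f"mu={mu:5.1f}  Wald coverage={coverage(mu, wald_interval):.3f}  "
          f"exact coverage={coverage(mu, exact_interval):.3f}")
```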

14.
Newhouse and Oman (1971) identified the orientations, with respect to the eigenvectors of X'X, of the true coefficient vector of the linear regression model for which the ordinary ridge regression estimator performs best and performs worst when mean squared error is the measure of performance. In this paper the corresponding result is derived for generalized ridge regression for two risk functions: mean squared error and mean squared error of prediction.
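A minimal numerical sketch of the orientation effect, assuming Python with NumPy and an invented design matrix: the exact mean squared error of the ordinary ridge estimator for fixed X is evaluated with the true coefficient vector aligned with the largest versus the smallest eigenvector of X'X.

```python
import numpy as np

def ridge_mse(X, beta, k, sigma2=1.0):
    """Exact MSE of the ridge estimator (X'X + kI)^{-1} X'y for fixed X,
    true coefficients beta and noise variance sigma2."""
    XtX = X.T @ X
    A = np.linalg.inv(XtX + k * np.eye(X.shape[1]))
    bias = -k * A @ beta                      # bias of the ridge estimator
    var = sigma2 * A @ XtX @ A                # covariance of the ridge estimator
    return np.trace(var) + bias @ bias

rng = np.random.default_rng(5)
X = rng.normal(size=(50, 4)) @ np.diag([3.0, 1.5, 0.8, 0.2])   # ill-conditioned design
eigval, eigvec = np.linalg.eigh(X.T @ X)                        # ascending eigenvalues

k = 1.0
beta_small = 2.0 * eigvec[:, 0]    # aligned with the smallest eigenvector
beta_large = 2.0 * eigvec[:, -1]   # aligned with the largest eigenvector
print(f"MSE, beta along smallest eigenvector: {ridge_mse(X, beta_small, k):.4f}")
print(f"MSE, beta along largest eigenvector:  {ridge_mse(X, beta_large, k):.4f}")
```

With coefficient vectors of equal length, the variance term is the same in both cases, so the larger bias along the smallest eigenvector is what drives the higher mean squared error.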

15.
Econometric Reviews, 2013, 32(4), 371-402

It is widely known that significant in-sample evidence of predictability does not guarantee significant out-of-sample predictability. This is often interpreted as an indication that in-sample evidence is likely to be spurious and should be discounted. In this paper, we question this interpretation. Our analysis shows that neither data mining, nor dynamic misspecification of the model under the null, nor unmodelled structural change under the null is a plausible explanation of the observed tendency of in-sample tests to reject the no-predictability null more often than out-of-sample tests. We provide an alternative explanation based on the higher power of in-sample tests of predictability in many situations. We conclude that results of in-sample tests of predictability will typically be more credible than results of out-of-sample tests.

16.
The estimation of the incremental cost-effectiveness ratio (ICER) has received increasing attention recently. It is expressed as the ratio of the change in costs of a therapeutic intervention to the change in the effects of the intervention. Despite the intuitive interpretation of the ICER as an additional cost per additional benefit unit, it is a challenge to estimate the distribution of a ratio of two stochastically dependent quantities. A vast literature on the statistical methods for the ICER has developed over the past two decades, but none of these methods provides an unbiased estimator. Here, to obtain an unbiased estimator of the cost-effectiveness ratio (CER), a zero intercept of the bivariate normal regression is assumed. For equal sample sizes, the Iman-Conover algorithm is applied to construct the desired variance-covariance matrix of the two random bivariate samples, and the estimation then follows the same approach as for the CER to obtain an unbiased estimator of the ICER. The bootstrapping method with the Iman-Conover algorithm is employed for unequal sample sizes. Simulation experiments are conducted to evaluate the proposed method. The regression-type estimator performs overwhelmingly better than the sample mean estimator in terms of mean squared error in all cases.
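A minimal sketch of the zero-intercept regression idea only, assuming Python with NumPy and hypothetical paired incremental data; the Iman-Conover pairing and the bootstrap step for unequal sample sizes described in the abstract are not reproduced here.

```python
import numpy as np

def through_origin_slope(delta_cost, delta_effect):
    """Zero-intercept least-squares slope of incremental cost on incremental
    effect; an illustrative regression-type ratio estimator, not the article's
    full Iman-Conover / bootstrap procedure."""
    delta_cost = np.asarray(delta_cost, dtype=float)
    delta_effect = np.asarray(delta_effect, dtype=float)
    return (delta_effect @ delta_cost) / (delta_effect @ delta_effect)

rng = np.random.default_rng(6)
n = 200
delta_effect = rng.normal(0.5, 0.3, size=n)                    # hypothetical effect gains
delta_cost = 2000.0 * delta_effect + rng.normal(0, 300, n)     # hypothetical paired costs

print(f"regression-type estimate: {through_origin_slope(delta_cost, delta_effect):.1f}")
print(f"ratio-of-means estimate:  {delta_cost.mean() / delta_effect.mean():.1f}")
```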

17.
The paper examines the capital structure adjustment dynamics of listed non-financial corporations in seven east Asian countries before, during and after the crisis of 1997-1998. Our methodology allows for speeds of adjustment to vary, not only among firms, but also over time, distinguishing between cases of sudden and smooth adjustment. Whereas, compared with firms in the least affected countries, average leverages were much higher, generalized method-of-moments analysis of the Worldscope panel data suggests that average speeds of adjustment were lower in the worst affected countries. This holds also for the severely financially distressed firms in some worst affected countries, though the trend reversed in the post-crisis period. These findings have important implications for the regulatory environment as well as access to market finance.

18.
In this paper we introduce the distribution of , with c > 0, where Xi, i = 1, 2, are independent generalized beta-prime-distributed random variables, and establish a closed-form expression for its density. This distribution has as its limiting case the generalized beta type I distribution recently introduced by Nadarajah and Kotz (2004). Due to the presence of several parameters, the density can take a wide variety of shapes.

19.
Scheffé (1970) introduced a method for deriving confidence sets for directions and ratios of normals. The procedure requires use of an approximation, and Scheffé provided evidence that the method performs well for cases in which the variances of the random deviates are known. This paper extends Scheffé's numerical integrations to the case of unknown variances. Our results indicate that Scheffé's method works well when the variances are unknown.

20.

Sliced average variance estimation (SAVE) is one of the best methods for estimating the central dimension-reduction (CDR) subspace in semiparametric regression models when the covariates are normal. SAVE has recently been used to analyze DNA microarray data, especially in tumor classification, but its most important drawback is the normality assumption on the covariates. In this article, the asymptotic behavior of the estimates of the CDR space under varying slice size is studied through simulation, both when the covariates are non-normal but satisfy the linearity condition and when the covariates are slightly perturbed from the normal distribution; we observe that serious errors may occur when the normality assumption is violated.
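A minimal sketch of SAVE itself, assuming Python with NumPy, a single-index model, and one simple slicing scheme chosen purely for illustration: standardize the covariates, slice on the response, average (I - Cov(Z | slice))^2, and take the leading eigenvectors as the estimated directions.

```python
import numpy as np

def save_directions(X, y, n_slices=5, n_dirs=1):
    """Sliced average variance estimation (SAVE): eigenvectors of the average
    of (I - Cov(Z | slice))^2 over response slices, mapped back to the
    original covariate scale."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    inv_sqrt = evecs @ np.diag(evals**-0.5) @ evecs.T     # Sigma^{-1/2}
    Z = (X - mu) @ inv_sqrt                                # standardized covariates

    edges = np.quantile(y, np.linspace(0, 1, n_slices + 1))
    slice_id = np.clip(np.searchsorted(edges[1:-1], y, side="right"), 0, n_slices - 1)
    M = np.zeros((p, p))
    for s in range(n_slices):
        Zs = Z[slice_id == s]
        if len(Zs) < 2:
            continue
        diff = np.eye(p) - np.cov(Zs, rowvar=False)
        M += (len(Zs) / n) * diff @ diff                   # weighted SAVE matrix

    vals, vecs = np.linalg.eigh(M)
    dirs = inv_sqrt @ vecs[:, ::-1][:, :n_dirs]            # back to the X scale
    return dirs / np.linalg.norm(dirs, axis=0)

rng = np.random.default_rng(7)
n, p = 500, 5
X = rng.normal(size=(n, p))
beta = np.array([1.0, -1.0, 0.0, 0.0, 0.0]) / np.sqrt(2)
y = (X @ beta) ** 2 + 0.1 * rng.normal(size=n)             # single-index, symmetric link

print(save_directions(X, y, n_slices=5, n_dirs=1).ravel())  # approximately +/- beta
```

With a symmetric link such as this one, slice means carry little information, so a SAVE-type second-moment method is the natural choice for recovering the direction.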

