Similar Documents
20 similar documents found.
1.
2.
For a loss distribution belonging to a location–scale family, F_{μ,σ}, the risk measures Value-at-Risk and Expected Shortfall are linear functions of the parameters: μ + τσ, where τ is the corresponding risk measure of the mean-zero, unit-variance member of the family. For each risk measure, we consider a natural estimator obtained by replacing the unknown parameters μ and σ with the sample mean and (bias-corrected) sample standard deviation, respectively. Large-sample parametric confidence intervals for the risk measures are derived, relying on the asymptotic joint distribution of the sample mean and sample standard deviation. Simulation studies with the Normal, Laplace and Gumbel families illustrate that the derived asymptotic confidence intervals for Value-at-Risk and Expected Shortfall outperform those of Bahadur (1966) and Brazauskas et al. (2008), respectively. The method can also be applied effectively to log-location-scale families supported on the positive reals; an illustrative example is given in the area of financial credit risk.
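For the Normal family, the constant τ is Φ⁻¹(α) for Value-at-Risk at level α and φ(Φ⁻¹(α))/(1 − α) for Expected Shortfall. A minimal sketch of the plug-in estimator and its asymptotic interval, assuming normal data (under which the sample mean and standard deviation are asymptotically independent, with Var(σ̂) ≈ σ²/(2n)); other families would need their own τ and joint asymptotics:

```python
import numpy as np
from scipy.stats import norm

def var_es_intervals(x, alpha=0.95, level=0.95):
    """Plug-in VaR/ES estimates mu_hat + tau*sigma_hat with asymptotic CIs.
    Sketch for the Normal family only."""
    n = len(x)
    mu, s = np.mean(x), np.std(x, ddof=1)
    z_a = norm.ppf(alpha)
    taus = {"VaR": z_a, "ES": norm.pdf(z_a) / (1 - alpha)}
    z = norm.ppf(0.5 + level / 2)
    out = {}
    for name, tau in taus.items():
        est = mu + tau * s
        # under normality: Var(mu_hat) = sigma^2/n, Var(sigma_hat) ~ sigma^2/(2n)
        se = s * np.sqrt(1.0 / n + tau**2 / (2.0 * n))
        out[name] = (est - z * se, est + z * se)
    return out
```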

3.
For a confidence interval (L(X), U(X)) of a parameter θ in a one-parameter discrete distribution, the coverage probability is a function of θ. The confidence coefficient is the infimum of the coverage probabilities, inf_θ P_θ(θ ∈ (L(X), U(X))). Since we do not know at which point in the parameter space the infimum coverage probability occurs, the exact confidence coefficient is unknown. Besides the confidence coefficient, evaluation of a confidence interval can be based on the average coverage probability. Usually the exact average coverage probability is also unknown, and it has been approximated by taking the mean of the coverage probabilities at some randomly chosen points in the parameter space. In this article, methodologies are proposed for computing the exact average coverage probabilities as well as the exact confidence coefficients of confidence intervals for one-parameter discrete distributions. With these methodologies, both exact values can be derived.
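For any fixed θ, the coverage probability is a finite sum over the sample space. The crude grid approximation that the exact methodology replaces can be sketched for the binomial Clopper–Pearson interval as follows (the interval choice and grid size are illustrative):

```python
import numpy as np
from scipy.stats import beta, binom

def coverage(p, n, conf=0.95):
    """P_p(p in CI(X)) for the Clopper-Pearson interval, summed over x = 0..n."""
    a = (1 - conf) / 2
    x = np.arange(n + 1)
    lo = np.array([0.0 if xi == 0 else beta.ppf(a, xi, n - xi + 1) for xi in x])
    hi = np.array([1.0 if xi == n else beta.ppf(1 - a, xi + 1, n - xi) for xi in x])
    inside = (lo <= p) & (p <= hi)
    return binom.pmf(x[inside], n, p).sum()

grid = np.linspace(1e-4, 1 - 1e-4, 2000)     # grid approximation, not exact
cov = np.array([coverage(p, 30) for p in grid])
print("confidence coefficient ~", cov.min(), " average coverage ~", cov.mean())
```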

4.
It is demonstrated that confidence intervals (CIs) for the probability of eventual extinction and other parameters of a Galton–Watson branching process based upon the maximum likelihood estimators can often have substantially lower coverage than the desired nominal confidence coefficient, especially at the small, more realistic sample sizes. The same conclusion holds for the traditional bootstrap CIs. We propose several adjustments to these CIs which greatly improve coverage in most cases. We also correct an asymptotic variance formula given in Stigler (1971, Biometrika 58(3):499–508). The focus here is on implementation of CIs that have good coverage in a wide variety of cases. We also consider expected CI lengths. Some recommendations are made.
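As a concrete baseline, with Poisson(λ) offspring the extinction probability is the smallest root of q = e^{λ(q−1)} and the MLE of λ is the mean offspring count. A sketch of the naive percentile bootstrap, the unadjusted variant reported to undercover, assuming offspring counts are observed directly:

```python
import numpy as np
from scipy.optimize import brentq

def extinction_prob(lam):
    """Smallest root in [0, 1] of q = exp(lam * (q - 1)) (Poisson offspring)."""
    if lam <= 1.0:
        return 1.0
    return brentq(lambda q: np.exp(lam * (q - 1.0)) - q, 0.0, 1.0 - 1e-9)

def naive_bootstrap_ci(offspring, b=2000, level=0.95, seed=0):
    """Unadjusted percentile bootstrap CI: the baseline the paper improves on."""
    rng = np.random.default_rng(seed)
    n = len(offspring)
    qs = [extinction_prob(rng.choice(offspring, n, replace=True).mean())
          for _ in range(b)]
    return tuple(np.quantile(qs, [(1 - level) / 2, (1 + level) / 2]))
```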

5.
Simultaneous confidence intervals for the p means of a multivariate normal random variable with known variances may be generated by the projection method of Scheffé and by the use of Bonferroni's inequality. It has been conjectured that the Bonferroni intervals are shorter than the Scheffé intervals, at least for the usual confidence levels. This conjecture is proved for all p≥2 and all confidence levels above 50%. It is shown, incidentally, that for all p≥2 Scheffé's intervals are shorter for sufficiently small confidence levels. The results are also applicable to the Bonferroni and Scheffé intervals generated for multinomial proportions.
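With known variances, the two sets of intervals differ only in their critical values: z at level 1 − (1 − γ)/(2p) for Bonferroni versus the square root of the χ²_p quantile for Scheffé. A quick numerical check of the length comparison:

```python
from scipy.stats import chi2, norm

def bonferroni_over_scheffe(p, conf=0.95):
    """Ratio of Bonferroni to Scheffe critical values (known-variance case);
    values below 1 mean the Bonferroni intervals are shorter."""
    bonf = norm.ppf(1 - (1 - conf) / (2 * p))
    scheffe = chi2.ppf(conf, df=p) ** 0.5
    return bonf / scheffe

for p in (2, 3, 5, 10):
    print(p, round(bonferroni_over_scheffe(p), 3))   # all < 1 at 95%
```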

6.
7.
8.
Case–control designs for assessing the accuracy of a binary diagnostic test (BDT) are very frequent in clinical practice. The design consists of applying the diagnostic test to every individual in a sample of those who have the disease and in another sample of those who do not. The sensitivity of the diagnostic test is estimated from the case sample and the specificity from the control sample. Another parameter used to assess the performance of a BDT is the weighted kappa coefficient, which depends on the sensitivity and specificity of the diagnostic test, on the disease prevalence and on the weighting index. In this article, confidence intervals for the weighted kappa coefficient under a case–control design are studied, and a method is proposed for calculating the sample sizes needed to estimate this parameter. The results are applied to a real example.
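A sketch of the plug-in estimate, assuming a common Kraemer-type parameterization κ(c) = pq(Se + Sp − 1)/(c·p(1 − Q) + (1 − c)·q·Q) with q = 1 − p and Q = p·Se + q(1 − Sp); the counts and prevalence below are hypothetical, and the paper's exact parameterization may differ:

```python
def weighted_kappa(se, sp, prev, c):
    """Weighted kappa of a binary diagnostic test.
    Assumed parameterization: kappa(c) = p*q*(se + sp - 1) /
    (c*p*(1 - Q) + (1 - c)*q*Q), with Q = p*se + q*(1 - sp)."""
    q = 1.0 - prev
    Q = prev * se + q * (1.0 - sp)
    return prev * q * (se + sp - 1.0) / (c * prev * (1.0 - Q) + (1.0 - c) * q * Q)

# case-control plug-in: Se from the case sample, Sp from the control sample
se_hat, sp_hat = 85 / 100, 270 / 300          # hypothetical counts
print(weighted_kappa(se_hat, sp_hat, prev=0.10, c=0.5))
```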

9.
10.
The Poisson–Lindley distribution is a compound discrete distribution that can be used as an alternative to other discrete distributions, such as the negative binomial. This paper develops approximate one-sided and equal-tailed two-sided tolerance intervals for the Poisson–Lindley distribution. Practical applications of the Poisson–Lindley distribution frequently involve large samples, so we utilize large-sample Wald confidence intervals in constructing our tolerance intervals. A coverage study demonstrates the efficacy of the proposed tolerance intervals, which are also illustrated on two real data sets. The R code developed for our discussion is briefly highlighted and is included in the tolerance package.
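Under the usual parameterization the pmf is θ²(x + θ + 2)/(θ + 1)^{x+3} with mean (θ + 2)/(θ(θ + 1)). A sketch of the quantile computation together with one plausible tolerance-limit construction, plugging a confidence bound for θ into the quantile function; the moment estimator and the plug-in construction are stand-ins, not the paper's exact recipe:

```python
import math

def pl_pmf(x, theta):
    """Poisson-Lindley pmf: theta^2 * (x + theta + 2) / (theta + 1)^(x + 3)."""
    return theta**2 * (x + theta + 2) / (theta + 1.0) ** (x + 3)

def pl_quantile(prob, theta):
    """Smallest x whose CDF reaches prob."""
    cdf, x = 0.0, 0
    while cdf < prob:
        cdf += pl_pmf(x, theta)
        x += 1
    return x - 1

def theta_moment(xbar):
    """Moment estimator: root of xbar = (theta + 2) / (theta * (theta + 1))."""
    return (-(xbar - 1) + math.sqrt((xbar - 1) ** 2 + 8 * xbar)) / (2 * xbar)

# sketch: an upper tolerance limit plugs a *lower* confidence bound for theta
# (smaller theta => stochastically larger X) into the quantile function
theta_hat = theta_moment(1.8)                  # hypothetical sample mean
print(pl_quantile(0.95, theta_hat))
```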

11.
In sample surveys and many other areas of application, the ratio of variables is often of great importance; this frequently occurs when one variable is available at the population level while another variable of interest is available only for the sample data. In that case, the sample ratio often yields valuable information about the variable of interest for the unsampled observations. In many other studies the ratio itself is of interest, for example when estimating proportions from a random number of observations. In this note we compare three confidence intervals for the population ratio: a large-sample interval, a log-based version of the large-sample interval, and Fieller's interval. This is done through data analysis and through a small simulation experiment. The Fieller method has often been proposed as a superior interval for small sample sizes. We show through a data example and simulation experiments that Fieller's method often gives nonsensical and uninformative intervals when the observations are noisy relative to the mean of the data. The large-sample interval does not suffer similarly and thus can be the more reliable method for both small and large samples.
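For independent samples, the delta-method (large-sample) and Fieller intervals for μ_y/μ_x are easy to contrast; a z-based sketch that also flags the degenerate Fieller case the note warns about (the note's exact variants, including the log-based interval, may differ):

```python
import numpy as np
from scipy.stats import norm

def ratio_cis(x, y, level=0.95):
    """Large-sample (delta) and Fieller intervals for mu_y / mu_x,
    independent samples; a z-based sketch."""
    z = norm.ppf(0.5 + level / 2)
    mx, my = np.mean(x), np.mean(y)
    vx, vy = np.var(x, ddof=1) / len(x), np.var(y, ddof=1) / len(y)
    r = my / mx
    se = abs(r) * np.sqrt(vx / mx**2 + vy / my**2)
    delta = (r - z * se, r + z * se)
    # Fieller: rho^2 (mx^2 - z^2 vx) - 2 rho mx my + (my^2 - z^2 vy) <= 0
    a, b, c = mx**2 - z**2 * vx, mx * my, my**2 - z**2 * vy
    disc = b**2 - a * c
    if a <= 0 or disc < 0:
        fieller = None        # unbounded or empty: the 'nonsensical' case
    else:
        fieller = ((b - disc**0.5) / a, (b + disc**0.5) / a)
    return delta, fieller
```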

12.
Parametric methods for calculating reference intervals in clinical studies often rely on identifying a suitable transformation so that the transformed data can be assumed to be drawn from a Gaussian distribution. In this paper, the two-stage transformation recommended by the International Federation for Clinical Chemistry is compared with a novel generalised Box–Cox family of transformations. The sample sizes needed to achieve certain criteria of reliability in the calculated reference interval are also investigated. Simulations show that the generalised Box–Cox family achieves lower bias than the two-stage transformation. The two-stage transformation can also produce percentile estimates that cannot be back-transformed to obtain the required reference intervals, a difficulty not observed with the generalised Box–Cox family introduced in this paper.
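A sketch using the standard Box–Cox family as a stand-in for the paper's generalised family; it exhibits both the construction and the back-transformation failure mode discussed above:

```python
import numpy as np
from scipy import stats

def boxcox_reference_interval(x, level=0.95):
    """Transform to approximate normality, take normal percentiles,
    back-transform. Standard Box-Cox only; the paper's generalised
    family is designed to avoid the inversion failure flagged below."""
    y, lam = stats.boxcox(np.asarray(x, float))   # ML estimate of lambda
    a = (1 - level) / 2
    lo, hi = stats.norm.ppf([a, 1 - a], loc=y.mean(), scale=y.std(ddof=1))

    def inverse(t):
        if abs(lam) < 1e-12:
            return float(np.exp(t))
        base = t * lam + 1.0
        if base <= 0:                              # percentile not back-transformable
            raise ValueError("percentile cannot be back-transformed")
        return float(base ** (1.0 / lam))

    return inverse(lo), inverse(hi)
```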

13.
14.
15.
16.
Non-normality and heteroscedasticity are common in applications. For the comparison of two samples in the non-parametric Behrens–Fisher problem, different tests have been proposed, but no single test can be recommended for all situations. Here, we propose combining two tests, the Welch t test based on ranks and the Brunner–Munzel test, within a maximum test. Simulation studies indicate that this maximum test, performed as a permutation test, controls the type I error rate and stabilizes the power. That is, it has good power characteristics for a variety of distributions, and also for unbalanced sample sizes. Compared to the single tests, the maximum test shows acceptable type I error control.
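A sketch of the combination idea as a permutation test: both statistics are on a comparable t-like scale, so their maximum is referred to its own permutation distribution. scipy supplies both ingredients; details of the published test may differ:

```python
import numpy as np
from scipy import stats

def max_perm_test(x, y, n_perm=5000, seed=0):
    """Permutation p-value for max(rank Welch t, Brunner-Munzel): a sketch."""
    rng = np.random.default_rng(seed)

    def pair_max(a, b):
        r = stats.rankdata(np.concatenate([a, b]))
        t_rank = abs(stats.ttest_ind(r[:len(a)], r[len(a):],
                                     equal_var=False).statistic)
        bm = abs(stats.brunnermunzel(a, b).statistic)
        return max(t_rank, bm)

    m_obs = pair_max(x, y)
    pooled, n = np.concatenate([x, y]), len(x)
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        hits += pair_max(perm[:n], perm[n:]) >= m_obs
    return (hits + 1) / (n_perm + 1)
```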

17.
In the Bayesian approach, the Behrens–Fisher problem has been posed as one of estimating the difference of two means. No Bayesian solution to the Behrens–Fisher testing problem has yet been given, perhaps because the conventional priors used are improper. While default Bayesian analysis can be carried out for estimation purposes, it poses difficulties for testing problems. This paper generates sensible intrinsic and fractional prior distributions for the Behrens–Fisher testing problem from the improper priors commonly used for estimation, which allows us to compute the Bayes factor comparing the null and alternative hypotheses. This default model-selection procedure is compared with a frequentist test and with the Bayesian information criterion. We find a discrepancy: the frequentist test and the Bayesian information criterion reject the null hypothesis for data for which the Bayes factors under intrinsic or fractional priors do not.
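The BIC side of the comparison is easy to reproduce. A sketch contrasting H0 (common mean, unequal variances) with H1 (separate means), with the H0 likelihood maximized by simple coordinate ascent; the intrinsic and fractional-prior Bayes factors themselves are not reproduced here:

```python
import numpy as np
from scipy.stats import norm

def delta_bic_equal_means(x, y, iters=100):
    """BIC(H0: mu_x = mu_y, unequal variances) - BIC(H1: free means).
    Positive values favor H1. A sketch of the BIC comparison only."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, m = len(x), len(y)
    # H1: 4 parameters (two means, two variances), MLEs are sample moments
    ll1 = (norm.logpdf(x, x.mean(), x.std()).sum()
           + norm.logpdf(y, y.mean(), y.std()).sum())
    # H0: 3 parameters; alternate between mu and the two variance MLEs
    mu = np.concatenate([x, y]).mean()
    for _ in range(iters):
        vx, vy = ((x - mu) ** 2).mean(), ((y - mu) ** 2).mean()
        wx, wy = n / vx, m / vy
        mu = (wx * x.mean() + wy * y.mean()) / (wx + wy)
    vx, vy = ((x - mu) ** 2).mean(), ((y - mu) ** 2).mean()
    ll0 = (norm.logpdf(x, mu, np.sqrt(vx)).sum()
           + norm.logpdf(y, mu, np.sqrt(vy)).sum())
    return (-2 * ll0 + 3 * np.log(n + m)) - (-2 * ll1 + 4 * np.log(n + m))
```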

18.
There is currently much interest in the use of surrogate endpoints in clinical trials and intermediate endpoints in epidemiology. Freedman et al. [Statist. Med. 11 (1992) 167] proposed the use of a validation ratio for judging the evidence of the validity of a surrogate endpoint. The method involves calculating a confidence interval for the ratio. In this paper, I compare through computer simulations the performance of Fieller's method with that of the delta method for this calculation. In typical situations, the numerator and denominator of the ratio are highly correlated. I find that the Fieller method is superior to the delta method both in coverage properties and in the statistical power of the validation test. In addition, the formula for predicting statistical power appears to be much more accurate for the Fieller method than for the delta method. The simulations show that the role of validation analysis is likely to be limited in evaluating the reliability of surrogate endpoints in clinical trials; it is, however, likely to be a useful tool in epidemiology for identifying intermediate endpoints.
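Because the numerator and denominator estimates are highly correlated in this setting, the Fieller quadratic must carry their covariance. A sketch taking the two estimates and their (co)variances as inputs, with function and argument names of my choosing:

```python
from math import sqrt
from scipy.stats import norm

def fieller_correlated(num, den, v_num, v_den, cov, level=0.95):
    """Fieller CI for num/den with correlated estimates: solve
    rho^2 (den^2 - z^2 v_den) - 2 rho (num*den - z^2 cov)
        + (num^2 - z^2 v_num) <= 0."""
    z2 = norm.ppf(0.5 + level / 2) ** 2
    a = den**2 - z2 * v_den
    b = num * den - z2 * cov
    c = num**2 - z2 * v_num
    disc = b**2 - a * c
    if a <= 0 or disc < 0:
        return None                       # unbounded or empty interval
    return ((b - sqrt(disc)) / a, (b + sqrt(disc)) / a)
```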

19.
An explicit decomposition of the Pearson–Fisher and Dzhaparidze–Nikulin tests into asymptotically independent components, each distributed as chi-squared with one degree of freedom, is presented. The decomposition is formally the same for both tests and is valid for any partitioning of the sample space. Vector-valued tests are also considered, whose components may be not only different scalar tests based on the same sample, but also scalar tests based on components or groups of components of the same statistic. Numerical examples illustrating the idea are presented.
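For a simple null with cell probabilities p, the standardized residual vector z, with z_i = (O_i − np_i)/√(np_i), is exactly orthogonal to √p, so projecting it onto any orthonormal basis of the orthogonal complement splits the Pearson statistic z'z into k − 1 asymptotically independent χ²(1) components. A sketch of that idea (the papers' specific choice of basis may differ):

```python
import numpy as np

def pearson_components(obs, p):
    """Split the simple-null Pearson statistic into k-1 asymptotically
    independent chi-squared(1) components (sketch; any orthonormal basis
    of the complement of sqrt(p) yields such a decomposition)."""
    obs, p = np.asarray(obs, float), np.asarray(p, float)
    n = obs.sum()
    z = (obs - n * p) / np.sqrt(n * p)             # standardized residuals
    k = len(p)
    basis = np.column_stack([np.sqrt(p), np.eye(k)[:, : k - 1]])
    q, _ = np.linalg.qr(basis)                     # first column spans sqrt(p)
    comps = (q[:, 1:].T @ z) ** 2
    assert np.isclose(comps.sum(), (z**2).sum())   # components add to X^2
    return comps
```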

20.
Exponentially weighted moving average (EWMA) control charts with variable sampling intervals (VSIs) have been shown to detect process mean shifts substantially more quickly than fixed-sampling-interval (FSI) EWMA control charts. The usual assumption in designing a control chart is that the data or measurements are normally distributed; however, this assumption may not hold for some processes. In the present paper, the performances of the EWMA and combined X̄–EWMA control charts with VSIs are evaluated under non-normality. It is shown that adding the VSI feature to EWMA control charts yields very substantial decreases in the expected time to detect shifts in the process mean under both normality and non-normality. The combined X̄–EWMA chart, however, has its false alarm rate and detection ability affected when the process data are not normally distributed.
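A sketch of the VSI mechanism for a two-interval EWMA chart on standardized observations: sample after a short interval when the statistic sits in the outer warning band, after a long one when it is near the center line. All parameter names and default values here are illustrative:

```python
import numpy as np

def vsi_ewma(data, lam=0.2, L=3.0, w=1.0, d_long=1.5, d_short=0.25):
    """EWMA chart with two variable sampling intervals (sketch).
    data: standardized observations; returns (signal index, elapsed time)."""
    z, t = 0.0, 0.0
    for i, xi in enumerate(data):
        z = lam * xi + (1.0 - lam) * z
        s_z = np.sqrt(lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * (i + 1))))
        if abs(z) > L * s_z:
            return i, t                      # out-of-control signal
        # in the warning band (inside the limits but away from center): sample sooner
        t += d_short if abs(z) > w * s_z else d_long
    return None, t
```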
