Similar Documents
20 similar documents found.
1.
The lognormal distribution is currently used extensively to describe the distribution of positive random variables. This is especially the case with data pertaining to occupational health and other biological data. One particular application of the data is statistical inference with regard to the mean of the data. Other authors, namely Zou et al. (2009), have proposed procedures involving the so-called "method of variance estimates recovery" (MOVER), while an alternative approach based on simulation is the so-called generalized confidence interval, discussed by Krishnamoorthy and Mathew (2003). In this paper we compare the coverage performance of the MOVER-based confidence intervals and the generalized confidence interval procedure with that of credibility intervals obtained by Bayesian methodology under a variety of prior distributions, in order to assess the appropriateness of each. An extensive simulation study is conducted to evaluate the coverage accuracy and interval width of the proposed methods. For the Bayesian approach both the equal-tail and highest posterior density (HPD) credibility intervals are presented. Various prior distributions (the independence Jeffreys prior; the Jeffreys-rule prior, namely the square root of the determinant of the Fisher information matrix; and reference and probability-matching priors) are evaluated and compared to determine which give the best coverage with the most efficient interval width. The simulation studies show that the constructed Bayesian credibility intervals have satisfactory coverage probabilities and in some cases outperform the MOVER and generalized confidence interval results. The Bayesian inference procedures (hypothesis tests and confidence intervals) are also extended to the difference between two lognormal means, to the case of zero-valued observations, and to confidence intervals for the lognormal variance. In the last section of this paper the bivariate lognormal distribution is discussed and Bayesian confidence intervals are obtained for the difference between two correlated lognormal means as well as for the ratio of lognormal variances, using nine different priors.
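The generalized confidence interval of Krishnamoorthy and Mathew (2003) is simple to sketch in a few lines. Below is a minimal Monte Carlo version in Python; the function name and defaults are illustrative assumptions, not code from the paper. The generalized pivotal quantity for the log-scale mean mu + sigma^2/2 is assembled from independent standard normal and chi-square draws.

```python
import numpy as np

def lognormal_mean_gci(x, conf=0.95, n_sim=100_000, rng=None):
    """Generalized CI for a lognormal mean (Krishnamoorthy-Mathew style sketch)."""
    rng = np.random.default_rng(rng)
    y = np.log(np.asarray(x))            # work on the log scale
    n, ybar, s2 = y.size, y.mean(), y.var(ddof=1)
    z = rng.standard_normal(n_sim)       # Z ~ N(0, 1)
    w = rng.chisquare(n - 1, n_sim)      # W ~ chi-square(n - 1)
    # Generalized pivotal quantity for eta = mu + sigma^2 / 2:
    gpq = ybar - z * np.sqrt(s2 * (n - 1) / (w * n)) + (n - 1) * s2 / (2.0 * w)
    lo, hi = np.quantile(gpq, [(1 - conf) / 2, (1 + conf) / 2])
    return np.exp(lo), np.exp(hi)        # back-transform to the lognormal mean

# Example: a small lognormal sample
x = np.random.default_rng(1).lognormal(mean=1.0, sigma=0.8, size=25)
print(lognormal_mean_gci(x))
```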

2.
Many diagnostic tests may be available to identify a particular disease. Diagnostic performance can potentially be improved by combining tests. "Either" and "both" positive strategies for combining tests have been discussed in the literature, where the gain in diagnostic performance is measured by the ratio of the positive (negative) likelihood ratio of the combined test to that of an individual test. Normal-theory and bootstrap confidence intervals are constructed for gains in likelihood ratios. The performance (coverage probability, width) of the two methods is compared via simulation. All confidence intervals perform satisfactorily for large samples, while the bootstrap performs better in smaller samples in terms of coverage and width.
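A percentile-bootstrap interval for such a gain can be sketched as follows; the data layout and names are illustrative assumptions, not the authors' code. Resample subjects with replacement within the diseased and non-diseased groups, recompute the positive likelihood ratio of the "either positive" combination and of test 1, and take percentiles of the ratio.

```python
import numpy as np

def plr(t_pos, d):
    """Positive likelihood ratio: P(T+ | D=1) / P(T+ | D=0)."""
    sens = t_pos[d == 1].mean()
    fpr = t_pos[d == 0].mean()
    return sens / fpr          # may be inf in a bootstrap sample with fpr = 0

def bootstrap_gain_ci(t1, t2, d, n_boot=2000, conf=0.95, rng=None):
    """Percentile bootstrap CI for the PLR gain of 'either positive' over test 1."""
    rng = np.random.default_rng(rng)
    either = (t1 | t2).astype(int)       # "either positive" combined test
    idx1, idx0 = np.where(d == 1)[0], np.where(d == 0)[0]
    gains = np.empty(n_boot)
    for b in range(n_boot):
        # stratified resampling keeps the diseased / non-diseased sizes fixed
        i = np.concatenate([rng.choice(idx1, idx1.size), rng.choice(idx0, idx0.size)])
        gains[b] = plr(either[i], d[i]) / plr(t1[i], d[i])
    return np.quantile(gains, [(1 - conf) / 2, (1 + conf) / 2])
```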

3.
The confidence interval of the Kaplan–Meier estimate of the survival probability at a fixed time point is often constructed by the Greenwood formula. This normal approximation-based method can be viewed as a Wald-type confidence interval for a binomial proportion, the survival probability, using the "effective" sample size defined by Cutler and Ederer. The Wald-type binomial confidence interval has been shown to perform poorly compared to other methods. We choose three methods of binomial confidence intervals for constructing confidence intervals for the survival probability: Wilson's method, the Agresti–Coull method, and the higher-order asymptotic likelihood method. The methods of "effective" sample size proposed by Peto et al. and by Dorey and Korn are also considered. The Greenwood formula is far from satisfactory, while confidence intervals based on the three binomial-proportion methods using Cutler and Ederer's "effective" sample size have much better performance.
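The "effective" sample size idea is easy to sketch: Cutler and Ederer's effective n is the binomial sample size whose Wald variance matches the Greenwood variance at the estimate. The helper below is a hedged illustration with names of my own choosing, not code from the paper.

```python
import numpy as np
from scipy.stats import norm

def wilson_interval(p_hat, n, conf=0.95):
    """Wilson score interval for a binomial proportion."""
    z = norm.ppf(1 - (1 - conf) / 2)
    center = (p_hat + z**2 / (2 * n)) / (1 + z**2 / n)
    half = (z / (1 + z**2 / n)) * np.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

def km_ci_effective_n(s_hat, greenwood_var, conf=0.95):
    """CI for a Kaplan-Meier survival probability via an effective sample size.

    Cutler-Ederer idea: choose n_eff so the binomial Wald variance
    p(1-p)/n matches the Greenwood variance at the estimate.
    """
    n_eff = s_hat * (1 - s_hat) / greenwood_var
    return wilson_interval(s_hat, n_eff, conf)

# Example: S_hat(t) = 0.8 with Greenwood variance 0.0016 gives n_eff = 100
print(km_ci_effective_n(0.8, 0.0016))
```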

4.
In the prospective study of a finely stratified population, one individual from each stratum is chosen at random for the "treatment" group and one for the "non-treatment" group. For each individual the probability of failure is a logistic function of parameters designating the stratum, the treatment, and a covariate. Uniformly most powerful unbiased tests for the treatment effect are given. These tests are generally cumbersome but, if the covariate is dichotomous, the tests and confidence intervals are simple. Readily usable (but non-optimal) tests are also proposed for polytomous covariates and factorial designs. These are then adapted to retrospective studies (in which one "success" and one "failure" per stratum are sampled). Tests for retrospective studies with a continuous "treatment" score are also proposed.

5.
Two overlapping confidence intervals have been used in the past to conduct statistical inferences about two population means and proportions. Several authors have examined the shortcomings of the Overlap procedure and have determined that it distorts the significance level of testing the null hypothesis of equality of two population means and reduces the statistical power of the test. Nearly all results for small samples in the Overlap literature have been obtained either by simulation or by formulas that may need refinement for small sample sizes, although accurate large-sample information exists. Nevertheless, there are aspects of Overlap that have not been presented and compared against the standard statistical procedure. This article presents exact formulas for the maximum percentage overlap of two independent confidence intervals below which the null hypothesis of equality of two normal population means or variances must still be rejected, for any sample sizes. Further, the impact of Overlap on the power of testing the null hypothesis of equality of two normal variances is assessed. Finally, the noncentral t-distribution is used to assess the impact of Overlap on the type II error probability when testing equality of means for sample sizes larger than 1.
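The distortion is easy to quantify in the simplest case. For two independent normal means with equal standard errors, "the two 95% intervals do not overlap" requires |x̄1 − x̄2| > 2·1.96·SE, whereas the level-0.05 two-sample z-test rejects when |x̄1 − x̄2| > 1.96·√2·SE. The implied significance level of the non-overlap rule follows in two lines (my illustration, not a formula from the article):

```python
from scipy.stats import norm

z = norm.ppf(0.975)                       # 1.96 for 95% intervals
# Non-overlap of two 95% CIs with equal SEs requires |diff| > 2*z*SE,
# while the standard two-sample z-test uses |diff| > z*sqrt(2)*SE.
effective_alpha = 2 * norm.sf(2 * z / 2**0.5)
print(effective_alpha)                    # about 0.0056, far below the nominal 0.05
```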

6.
The problem of testing a point null hypothesis involving an exponential mean is considered. It is argued that the usual interpretation of P-values as evidence against precise hypotheses is faulty. As in Berger and Delampady (1986) and Berger and Sellke (1987), lower bounds on Bayesian measures of evidence over wide classes of priors are found, emphasizing the conflict between posterior probabilities and P-values. A hierarchical Bayes approach is also considered as an alternative to computing lower bounds and "automatic" Bayesian significance tests, which further illustrates the point that P-values are highly misleading measures of evidence for tests of point null hypotheses.
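A widely cited calibration in this literature, due to Sellke, Bayarri, and Berger and closely related to the bounds discussed above, states that for p < 1/e the Bayes factor in favor of the null is at least −e·p·log p. A two-line check shows how weak the evidence behind a "significant" P-value can be:

```python
import numpy as np

def bayes_factor_lower_bound(p):
    """Sellke-Bayarri-Berger calibration: BF(H0) >= -e * p * log(p), for p < 1/e."""
    return -np.e * p * np.log(p)

for p in (0.05, 0.01):
    b = bayes_factor_lower_bound(p)
    post = b / (1 + b)        # lower bound on P(H0 | data) with 50:50 prior odds
    print(f"p = {p}: BF >= {b:.3f}, P(H0 | data) >= {post:.3f}")
```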

7.
Assessment of analytical similarity of tier 1 quality attributes is based on a set of hypotheses that tests the mean difference of reference and test products against a margin adjusted for the standard deviation of the reference product. Thus, proper assessment of the biosimilarity hypothesis requires statistical tests that account for the uncertainty associated with the estimation of the mean difference and of the standard deviation of the reference product. Recently, a linear reformulation of the biosimilarity hypothesis has been proposed, which facilitates the development and implementation of statistical tests. These statistical tests account for the uncertainty in the estimation of all the unknown parameters. In this paper, we survey methods for constructing confidence intervals for testing the linearized reformulation of the biosimilarity hypothesis and compare the performance of the methods. We discuss test procedures using confidence intervals to enable comparison among recently developed methods, as well as other previously developed methods that have not been applied to demonstrating analytical similarity. A computer simulation study was conducted to compare the performance of the methods in terms of their ability to maintain the test size and power, as well as their computational complexity. We demonstrate the methods using two example applications. Finally, we make recommendations concerning the use of the methods.
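In a linearized form of this kind, the two-sided equivalence hypothesis |μT − μR| < k·σR splits into two linear constraints, each testable with a one-sided upper confidence bound. The sketch below is a generic delta-method version under assumptions of my own (k = 1.5 is a commonly used margin factor; none of the names or choices are the paper's notation or its recommended procedure):

```python
import numpy as np
from scipy.stats import norm

def linearized_similarity_test(xt, xr, k=1.5, alpha=0.05):
    """Test mu_T - mu_R - k*sigma_R < 0 and -(mu_T - mu_R) - k*sigma_R < 0
    via Wald-type upper confidence bounds (delta method for s_R)."""
    xt, xr = np.asarray(xt), np.asarray(xr)
    nt, nr = xt.size, xr.size
    d = xt.mean() - xr.mean()
    st2, sr2 = xt.var(ddof=1), xr.var(ddof=1)
    sr = np.sqrt(sr2)
    # Var(s_R) is approximately sigma_R^2 / (2 (n_R - 1)) by the delta method
    var_theta = st2 / nt + sr2 / nr + k**2 * sr2 / (2 * (nr - 1))
    z = norm.ppf(1 - alpha)
    ub1 = (d - k * sr) + z * np.sqrt(var_theta)    # bound for  mu_T - mu_R - k sigma_R
    ub2 = (-d - k * sr) + z * np.sqrt(var_theta)   # bound for -(mu_T - mu_R) - k sigma_R
    return (ub1 < 0) and (ub2 < 0)                 # similarity concluded if both < 0
```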

8.
ON BOOTSTRAP HYPOTHESIS TESTING
We describe methods for constructing bootstrap hypothesis tests, illustrating our approach using analysis of variance. The importance of pivotalness is discussed. Pivotal statistics usually result in improved accuracy of level. We note that hypothesis tests and confidence intervals call for different methods of resampling, so as to ensure that accurate critical point estimates are obtained in the former case even when data fail to comply with the null hypothesis.Our main points are illustrated by a simulation study and application to three real data sets.
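The key operational point, resampling in a way consistent with the null, can be sketched for one-way ANOVA: bootstrap from group-centered residuals so that the null of equal means holds in the resampling world, then compare the observed F-statistic with its bootstrap null distribution. This is a generic sketch of the principle, not the authors' exact algorithm.

```python
import numpy as np
from scipy.stats import f_oneway

def bootstrap_anova_pvalue(groups, n_boot=4000, rng=None):
    """One-way ANOVA bootstrap test, resampling under the null of equal means."""
    rng = np.random.default_rng(rng)
    f_obs = f_oneway(*groups).statistic
    # Center each group at its own mean: the null is true for these residuals.
    centered = [g - g.mean() for g in groups]
    count = 0
    for _ in range(n_boot):
        boot = [rng.choice(c, size=c.size, replace=True) for c in centered]
        count += f_oneway(*boot).statistic >= f_obs
    return count / n_boot
```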

9.
The method of constructing confidence intervals from hypothesis tests is studied for the case of a single unknown parameter and is proved to provide confidence intervals with coverage probability at least the nominal level. The confidence intervals obtained by the method in several different contexts are seen to compare favorably with confidence intervals obtained by traditional methods. The traditional intervals are seen to have coverage probability less than the nominal level in several instances. The method can be applied to all confidence interval problems and reduces to the traditional method when an exact pivotal statistic is known.
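The inversion construction is generic: the confidence set collects every parameter value the test fails to reject. A minimal grid-search sketch for a binomial proportion follows (my illustration of the principle, not the paper's general algorithm):

```python
import numpy as np
from scipy.stats import binomtest

def invert_test_ci(k, n, alpha=0.05, grid=2001):
    """CI for a binomial p: the set of p0 not rejected by the exact test."""
    p_grid = np.linspace(1e-6, 1 - 1e-6, grid)
    keep = [p0 for p0 in p_grid if binomtest(k, n, p0).pvalue >= alpha]
    return min(keep), max(keep)

# Example: 7 successes in 20 trials
print(invert_test_ci(7, 20))
```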

10.
In this paper, we consider a constant-stress accelerated life test terminated by hybrid Type-I censoring at the first stress level. The model is based on a general log-location-scale lifetime distribution with mean life a linear function of stress and with constant scale. We obtain the maximum likelihood estimators (MLE) and the approximate maximum likelihood estimators (AMLE) of the model parameters. Approximate confidence intervals, likelihood ratio tests, and two bootstrap methods are used to construct confidence intervals for the unknown parameters of the Weibull and lognormal distributions using the MLEs. Finally, a simulation study and two illustrative examples are provided to demonstrate the performance of the developed inferential methods.
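As one concrete piece of this machinery, the MLE under Type-I censoring can be sketched by maximizing the censored log-likelihood directly. This is a minimal illustration for a single Weibull sample, not the paper's full accelerated-life model; the function names are my own.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def weibull_type1_mle(t, censored, x0=(1.0, 1.0)):
    """MLE of Weibull (shape k, scale lam) under Type-I right censoring.

    t: observed times (failure time, or censoring time where censored is True).
    """
    t, censored = np.asarray(t), np.asarray(censored, bool)

    def neg_loglik(theta):
        k, lam = np.exp(theta)           # optimize on the log scale for positivity
        ll = weibull_min.logpdf(t[~censored], k, scale=lam).sum()   # failures
        ll += weibull_min.logsf(t[censored], k, scale=lam).sum()    # survivors
        return -ll

    res = minimize(neg_loglik, np.log(x0), method="Nelder-Mead")
    return np.exp(res.x)                 # (k_hat, lam_hat)

# Example: simulate lifetimes, censor everything beyond tau = 2.0
rng = np.random.default_rng(0)
life = weibull_min.rvs(1.5, scale=2.5, size=100, random_state=rng)
tau = 2.0
print(weibull_type1_mle(np.minimum(life, tau), censored=(life > tau)))
```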

11.
Using the methods of asymptotic decision theory, rank tests are derived that are asymptotically optimal for translation and scale families as well as for certain nonparametric families. Moreover, two new classes of nonlinear rank tests are introduced. These tests are designed for detecting either "omnibus alternatives" or "one-sided alternatives of trend." Under the null hypothesis of randomness all tests are distribution-free. The asymptotic distributions of the test statistics are derived under contiguous alternatives.

12.
A Dialectical Analysis of Formulating the Null Hypothesis
王静, 史济洲. 《统计研究》 2010, 27(6): 95–99
From the standpoint of statistical decision theory, this paper raises the question of how to choose the null hypothesis in one-sided hypothesis testing, and surveys the various interpretations of and solutions to this question in the related literature. The principles of materialist dialectics are then applied to understand and resolve the contradictions the question involves. Finally, the paper briefly discusses the dialectical relationships among hypothesis testing, confidence intervals, and statistical decision-making.

13.
For evaluating the diagnostic accuracy of inherently continuous diagnostic tests/biomarkers, sensitivity and specificity are well-known measures, both of which depend on a diagnostic cut-off that is usually estimated. Sensitivity (specificity) is the conditional probability of testing positive (negative) given the true disease status. However, a more relevant question is "what is the probability of having (not having) a disease if a test is positive (negative)?". Such post-test probabilities are denoted the positive predictive value (PPV) and negative predictive value (NPV). The PPV and NPV at the same estimated cut-off are correlated, hence it is desirable to make joint inference on PPV and NPV to account for this correlation. Existing inference methods for PPV and NPV focus on individual confidence intervals, and they were developed under a binomial distribution, assuming binary rather than continuous test results. Several approaches are proposed to estimate the joint confidence region as well as the individual confidence intervals of PPV and NPV. Simulation results indicate the proposed approaches perform well, with satisfactory coverage probabilities for normal and non-normal data, and additionally outperform existing methods with improved coverage as well as narrower confidence intervals for PPV and NPV. The Alzheimer's Disease Neuroimaging Initiative (ADNI) data set is used to illustrate the proposed approaches and compare them with the existing methods.
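The post-test probabilities referred to here follow from Bayes' theorem once a prevalence is fixed. A minimal computation (illustrative only; the paper's methods for continuous tests and joint confidence regions are more involved):

```python
def predictive_values(sens, spec, prev):
    """PPV and NPV from sensitivity, specificity, and disease prevalence."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# Example: a fairly accurate test used in a low-prevalence population
print(predictive_values(sens=0.90, spec=0.95, prev=0.10))  # PPV ~ 0.667, NPV ~ 0.988
```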

14.
A simplified proof of the basic properties of the estimators in the Exponential Order Statistics (Jelinski-Moranda) Model is given. The method of constructing confidence intervals from hypothesis tests is applied to find conservative confidence intervals for the unknown parameters in the model.

15.
For interval estimation of a proportion, coverage probabilities tend to be too large for "exact" confidence intervals based on inverting the binomial test and too small for the interval based on inverting the Wald large-sample normal test (i.e., sample proportion ± z-score × estimated standard error). Wilson's suggestion of inverting the related score test with the null rather than the estimated standard error yields coverage probabilities close to nominal confidence levels, even for very small sample sizes. The 95% score interval behaves similarly to the adjusted Wald interval obtained after adding two "successes" and two "failures" to the sample. In elementary courses, with the score and adjusted Wald methods it is unnecessary to provide students with awkward sample-size guidelines.
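The adjusted Wald ("add two successes and two failures") interval mentioned above is simple enough to state in full. A hedged sketch of the standard construction, not code from the article:

```python
import numpy as np
from scipy.stats import norm

def adjusted_wald(x, n, conf=0.95):
    """Agresti-Coull style interval: add 2 successes and 2 failures, then Wald."""
    z = norm.ppf(1 - (1 - conf) / 2)
    x_adj, n_adj = x + 2, n + 4
    p = x_adj / n_adj
    half = z * np.sqrt(p * (1 - p) / n_adj)
    return max(0.0, p - half), min(1.0, p + half)

# Example: 0 successes in 20 trials -- the plain Wald interval collapses to {0},
# while the adjusted interval remains sensible.
print(adjusted_wald(0, 20))
```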

16.
In practice non-randomized conservative confidence intervals for the parameter of a discrete distribution are used instead of the randomized uniformly most accurate intervals. We suggest in this paper that a part of the data be used as the random mechanism to create "data-randomized" confidence intervals. A thoughtful utilization of the data leads to intervals that are shorter than the usual conservative intervals but avoids the arbitrariness of the randomized uniformly most accurate intervals. Examples are given using the binomial, Poisson, and extended hypergeometric distributions, as well as applications to a matched case-control study and a randomized clinical trial.

17.
We consider a 2^r factorial experiment with at least two replicates. Our aim is to find a confidence interval for θ, a specified linear combination of the regression parameters (for the model written as a regression, with factor levels coded as −1 and 1). We suppose that preliminary hypothesis tests are carried out sequentially, beginning with the rth-order interaction. After these preliminary hypothesis tests, a confidence interval for θ with nominal coverage 1 − α is constructed under the assumption that the selected model had been given to us a priori. We describe a new efficient Monte Carlo method, which employs conditioning for variance reduction, for estimating the minimum coverage probability of the resulting confidence interval. The application of this method is demonstrated in the context of a 2^3 factorial experiment with two replicates and a particular contrast θ of interest. The preliminary hypothesis tests consist of the following two-step procedure. We first test the null hypothesis that the third-order interaction is zero against the alternative hypothesis that it is non-zero. If this null hypothesis is accepted, we assume that this interaction is zero and proceed to the second step; otherwise, we stop. In the second step, for each of the second-order interactions we test the null hypothesis that the interaction is zero against the alternative hypothesis that it is non-zero. If this null hypothesis is accepted, we assume that this interaction is zero. The resulting confidence interval, with nominal coverage probability 0.95, has a minimum coverage probability that is, to a good approximation, 0.464. This shows that this confidence interval is completely inadequate.
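The minimum-coverage computation can be illustrated with a stripped-down Monte Carlo sketch: a single preliminary t-test on a nuisance coefficient in a two-regressor model, rather than the paper's 2^3 factorial procedure or its variance-reduction device. All names and settings below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def ols(X, y):
    """OLS fit: coefficient estimates, their standard errors, residual df."""
    n, p = X.shape
    xtx_inv = np.linalg.inv(X.T @ X)
    b = xtx_inv @ X.T @ y
    resid = y - X @ b
    s2 = resid @ resid / (n - p)
    return b, np.sqrt(s2 * np.diag(xtx_inv)), n - p

def coverage_after_pretest(beta, rng, n_rep=4000, n=20, rho=0.7, theta=1.0):
    """Coverage of a nominal 95% CI for theta when a preliminary t-test
    decides whether the nuisance coefficient beta stays in the model."""
    x1 = rng.standard_normal(n)
    x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    X_full, X_red = np.column_stack([x1, x2]), x1[:, None]
    hits = 0
    for _ in range(n_rep):
        y = theta * x1 + beta * x2 + rng.standard_normal(n)
        b, se, df = ols(X_full, y)
        if abs(b[1] / se[1]) > stats.t.ppf(0.975, df):   # keep x2: full model
            est, s, d = b[0], se[0], df
        else:                                            # drop x2: reduced model
            b_r, se_r, df_r = ols(X_red, y)
            est, s, d = b_r[0], se_r[0], df_r
        half = stats.t.ppf(0.975, d) * s
        hits += (est - half <= theta <= est + half)
    return hits / n_rep

rng = np.random.default_rng(1)
covs = [coverage_after_pretest(b, rng) for b in np.linspace(0.0, 1.5, 7)]
print(min(covs))   # the minimum coverage can fall well below the nominal 0.95
```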

18.
This study considers testing for a unit root in a time series characterized by a structural change in its mean level. My approach follows the "intervention analysis" of Box and Tiao (1975) in the sense that I consider the change as being exogenous and as occurring at a known date. Standard unit-root tests are shown to be biased toward nonrejection of the hypothesis of a unit root when the full sample is used. Since tests using split sample regressions usually have low power, I design test statistics that allow the presence of a change in the mean of the series under both the null and alternative hypotheses. The limiting distribution of the statistics is derived and tabulated under the null hypothesis of a unit root. My analysis is illustrated by considering the behavior of various univariate time series for which the unit-root hypothesis has been advanced in the literature. This study complements that of Perron (1989), which considered time series with trends.
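A minimal version of such a test can be sketched as a Dickey–Fuller regression augmented with a level-shift dummy at the known break date. This is illustrative only; the test statistic must be compared against the nonstandard critical values tabulated in the paper, not the usual Dickey–Fuller tables.

```python
import numpy as np

def df_test_with_break(y, break_idx):
    """Dickey-Fuller-type regression with a mean-shift dummy at a known date.

    Regress  dy_t  on  [const, DU_t, y_{t-1}]  and return the t-statistic
    on y_{t-1}; compare against Perron-style (not standard DF) critical values.
    """
    y = np.asarray(y, float)
    dy = np.diff(y)
    du = (np.arange(1, y.size) >= break_idx).astype(float)  # post-break dummy
    X = np.column_stack([np.ones(dy.size), du, y[:-1]])
    b, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ b
    s2 = resid @ resid / (dy.size - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return b[2] / np.sqrt(cov[2, 2])      # t-statistic for the unit-root coefficient

# Example: random walk with a level shift at t = 100
rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(200))
y[100:] += 5.0
print(df_test_with_break(y, break_idx=100))
```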

19.
The performances of six confidence intervals for estimating the arithmetic mean of a lognormal distribution are compared using simulated data. The first interval considered is based on an exact method and is recommended in U.S. EPA guidance documents for calculating upper confidence limits for contamination data. Two intervals are based on asymptotic properties due to the Central Limit Theorem, and the other three are based on transformations and maximum likelihood estimation. The effects of departures from lognormality on the performance of these intervals are also investigated. The gamma distribution is considered to represent departures from the lognormal distribution. The average width and coverage of each confidence interval are reported for varying mean, variance, and sample size. In the lognormal case, the exact interval gives good coverage, but for small sample sizes and large variances the confidence intervals are too wide. In these cases, an approximation that incorporates the sampling variability of the sample variance tends to perform better. When the underlying distribution is a gamma distribution, the intervals based upon the Central Limit Theorem tend to perform better than those based upon lognormal assumptions.
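One widely used member of the transformation-based family is Cox's approximate interval for the lognormal mean, included here as a hedged illustration rather than a claim about the exact set of six intervals in the study:

```python
import numpy as np
from scipy.stats import t

def cox_lognormal_mean_ci(x, conf=0.95):
    """Cox's approximate CI for E[X] when log(X) is normal."""
    y = np.log(np.asarray(x))
    n, ybar, s2 = y.size, y.mean(), y.var(ddof=1)
    theta = ybar + s2 / 2                       # log of the lognormal mean
    se = np.sqrt(s2 / n + s2**2 / (2 * (n - 1)))
    q = t.ppf(1 - (1 - conf) / 2, n - 1)
    return np.exp(theta - q * se), np.exp(theta + q * se)
```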

20.
Confidence intervals for location parameters are expanded (in either direction) to some "crucial" points and the resulting increase in the confidence coefficient is investigated. Particular crucial points are chosen to illuminate some hypothesis-testing problems. Special results are derived for the normal distribution with estimated variance and, in particular, for the problem of classifying treatments as better or worse than a control. For this problem the usual two-sided Dunnett procedure is seen to be inefficient. Suggestions are made for the use of already published tables for this problem. Mention is made of the use of expanded confidence intervals for all pairwise comparisons of treatments using an "honest ordering difference" rather than Tukey's "honest significant difference."
