Similar Documents
20 similar documents found (search time: 0 ms)
1.
2.
The Lindstrom-Madden method of approximating lower confidence limits for series systems with unlike components is extended to series systems with repeated components, utilizing the methods of Buehler, of Sudakov, and of Harris and Soms. An exact solution is given for no failures and key test results, together with an approximation for the general case. Numerical examples are also provided.

3.
A review is provided of the concept of confidence distributions. Material covered includes fundamentals, extensions, applications of confidence distributions, and available computer software. We expect this review to serve as a source of reference and to encourage further research on confidence distributions.

4.
Well-known nonparametric confidence intervals for quantiles are of the form (X_{i:n}, X_{j:n}) with suitably chosen order statistics X_{i:n} and X_{j:n}, but typically their coverage levels differ from those prescribed. It appears that the coverage level of a confidence interval of the form (X_{I:n}, X_{J:n}) with random indices I and J can be rendered exactly equal to any predetermined level γ ∈ (0, 1). Best in the sense of minimum E(J − I), i.e., 'the shortest', two-sided confidence intervals are constructed. If no two-sided confidence interval exists for a given γ, the most accurate one-sided confidence intervals are constructed.
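The coverage level that a fixed-index interval attains follows directly from the binomial distribution of the number of observations below the quantile; a minimal sketch of that computation (the article's randomized-index construction is not reproduced here):

```python
from math import comb

def coverage(i, j, n, p):
    """P(X_(i:n) < xi_p < X_(j:n)) for a continuous distribution:
    the count of observations below the p-th quantile is Binomial(n, p),
    and the interval covers xi_p iff that count lies in {i, ..., j-1}."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(i, j))

# Coverage of (X_(1:20), X_(20:20)) for the median equals 1 - 2 * 0.5**20,
# illustrating that fixed indices rarely hit a prescribed level exactly
print(round(coverage(1, 20, 20, 0.5), 6))
```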

5.
In this article, we discuss constructing confidence intervals (CIs) of performance measures for an M/G/1 queueing system. A fiducial empirical distribution is applied to estimate the service time distribution. We construct fiducial empirical quantities (FEQs) for the performance measures. The relationship between generalized pivotal quantities and fiducial empirical quantities is illustrated. We also present numerical examples showing that the FEQs can yield new CIs that dominate the bootstrap CIs in relative coverage (defined as the ratio of coverage probability to average CI length) for performance measures of an M/G/1 queueing system in most cases.
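For context, the point quantities that such intervals target follow from the Pollaczek-Khinchine formula once the service-time moments are known; a minimal sketch (the fiducial empirical construction itself is not reproduced here):

```python
def mg1_measures(lam, es, es2):
    """Pollaczek-Khinchine formulas for a stable M/G/1 queue.
    lam: Poisson arrival rate; es: E[S]; es2: E[S^2] of the service time."""
    rho = lam * es                      # server utilization, must be < 1
    wq = lam * es2 / (2 * (1 - rho))    # mean waiting time in queue
    lq = lam * wq                       # mean queue length (Little's law)
    return rho, wq, lq

# Example: arrivals at rate 0.5, exponential service with mean 1
# (so E[S^2] = 2): rho = 0.5, Wq = 1.0, Lq = 0.5
print(mg1_measures(0.5, 1.0, 2.0))
```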

6.
Consider the partially balanced one-way layout for comparing k treatments μ_i, 1 ≤ i ≤ k, with a control μ_0. We propose a new test similar to the test statistics of Marcus [1976. The powers of some tests of the equality of normal means against an ordered alternative. Biometrika 63, 177–183]. By simulation we find that the proposed test has good power performance compared with other tests. Moreover, it can produce confidence intervals for μ_i − μ_0, 1 ≤ i ≤ k.

7.
This section is similar in organization to a Book Review section in other journals; however, software of interest to statisticians is the subject of review here. Emphasis is on software for microcomputers. Programs that operate only on larger mainframe computers will seldom receive review. Normally, producers of programs make a copy of their product available to the Section Editor, who then selects one or more persons to test the product and prepare a review.

Producers of computer software who wish to have their product reviewed are invited to contact the Section Editor, Professor Kenneth Berk, Department of Mathematics, 313 Stevenson Hall, Illinois State University, Normal, IL 61761.

Findings and opinions expressed in every review are solely those of the author. They should not be construed as reflecting endorsement of the product, or opinions held, by the American Statistical Association, nor is any warranty implied about any product reviewed.

STAN, Version II.0. David M. Allen. Available from Statistical Consultants, Inc., 462 E. High Street, Lexington, KY 40508. $300. Reviewed by Peter A. Lachenbruch

8.
Abstract

The “New Statistics” emphasizes effect sizes, confidence intervals, meta-analysis, and the use of Open Science practices. We present three specific ways in which a New Statistics approach can help improve scientific practice: by reducing overconfidence in small samples, by reducing confirmation bias, and by fostering more cautious judgments of consistency. We illustrate these points through consideration of the literature on oxytocin and human trust, a research area that typifies some of the endemic problems that arise with poor statistical practice.

9.
In regenerative simulation, one frequently requires an estimate or a confidence interval for the ratio of the means of the two components of a certain bivariate random quantity. Estimation of this ratio has been studied by many authors, most recently by Asmussen and Rydén (2010) [Asmussen, S., Rydén, T. (2010). A note on skewness in regenerative simulation. Communications in Statistics—Simulation and Computation 40: 45–57.]. Here, we propose an estimate that performs better than the best of the known estimates. Our estimate is also a lot simpler than the best known estimate.

10.
The quality of the asymptotic normality of realized volatility can be poor if sampling does not occur at very high frequencies. In this article we consider an alternative approximation to the finite sample distribution of realized volatility based on Edgeworth expansions. In particular, we show how confidence intervals for integrated volatility can be constructed using these Edgeworth expansions. The Monte Carlo study we conduct shows that the intervals based on the Edgeworth corrections have improved properties relative to the conventional intervals based on the normal approximation. Contrary to the bootstrap, the Edgeworth approach is an analytical approach that is easily implemented, without requiring any resampling of one's data. A comparison between the bootstrap and the Edgeworth expansion shows that the bootstrap outperforms the Edgeworth corrected intervals. Thus, if we are willing to incur the additional computational cost involved in computing bootstrap intervals, these are preferred over the Edgeworth intervals. Nevertheless, if we are not willing to incur this additional cost, our results suggest that Edgeworth corrected intervals should replace the conventional intervals based on the first order normal approximation.

11.
Econometric Reviews, 2008, 27(1): 139–162

12.
In complete samples from a continuous cumulative distribution with unknown parameters, it is known that various pivotal functions can be constructed by appealing to the probability integral transform. A pivotal function (or simply pivot) is a function of the data and parameters that has the property that its distribution is free of any unknown parameters. Pivotal functions play a key role in constructing confidence intervals and hypothesis tests. If there are nuisance parameters in addition to a parameter of interest, and consistent estimators of the nuisance parameters are available, then substituting them into the pivot can preserve the pivot property while altering the pivot distribution, or may instead create a function that is approximately a pivot in the sense that its asymptotic distribution is free of unknown parameters. In this latter case, bootstrapping has been shown to be an effective way of estimating its distribution accurately and constructing confidence intervals that have more accurate coverage probability in finite samples than those based on the asymptotic pivot distribution. In this article, one particular pivotal function based on the probability integral transform is considered when nuisance parameters are estimated, and the estimation of its distribution using parametric bootstrapping is examined. Applications to finding confidence intervals are emphasized. This material should be of interest to instructors of upper division and beginning graduate courses in mathematical statistics who wish to integrate bootstrapping into their lessons on interval estimation and the use of pivotal functions.

[Received November 2014. Revised August 2015.]
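A minimal sketch of the parametric-bootstrap idea the abstract describes, using an exponential mean as an assumed example (the article's particular pivot based on the probability integral transform is not reproduced here):

```python
import random

def exp_pivot_ci(data, b=2000, alpha=0.05, seed=1):
    """Illustrative parametric-bootstrap CI for an exponential mean theta.
    The pivot T = xbar / theta has a parameter-free distribution; we
    approximate it by simulating samples from Exp(mean = xbar), just as
    one would for pivots that are only asymptotically parameter-free."""
    rng = random.Random(seed)
    n, xbar = len(data), sum(data) / len(data)
    ratios = sorted(
        (sum(rng.expovariate(1 / xbar) for _ in range(n)) / n) / xbar
        for _ in range(b)
    )
    lo_t = ratios[int(b * alpha / 2)]
    hi_t = ratios[int(b * (1 - alpha / 2)) - 1]
    # T in [lo_t, hi_t]  <=>  theta in [xbar / hi_t, xbar / lo_t]
    return xbar / hi_t, xbar / lo_t

rng = random.Random(7)
data = [rng.expovariate(0.5) for _ in range(50)]  # true mean 2.0
print(exp_pivot_ci(data))
```

Here the bootstrap is unnecessary (the pivot distribution is known exactly); the same resampling step is what one would apply when nuisance-parameter estimates make the pivot distribution only asymptotically known.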

13.
The lower 5% point of the correlation determinant in the null case, that is, with zero parental correlations in the multivariate, normally distributed data set, is presented for sample sizes up to 30. Repeated Monte Carlo simulation suggests that the limits are correct to ±2 units in the third decimal place. Thus the limits permit a test of the hypothesis of mutual independence of the variates involved. For sample sizes greater than 30, an asymptotic approximation based on the chi-squared distribution, as proposed by Morrison (2005) [Morrison, D. F. (2005). Multivariate Statistical Methods, 4th ed. London: Thomson Brooks/Cole.], is shown to be quite reliable.
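The chi-squared approximation in question is presumably of the Bartlett type; a sketch under that assumption (the abstract does not state the exact formula):

```python
import math

def bartlett_independence(det_R, n, p):
    """Bartlett-type chi-squared statistic for testing mutual independence
    of p variates from the determinant of the sample correlation matrix R.
    Returns (statistic, degrees of freedom); reject independence when the
    statistic exceeds the chi-square upper quantile with p(p-1)/2 df."""
    stat = -(n - 1 - (2 * p + 5) / 6) * math.log(det_R)
    df = p * (p - 1) // 2
    return stat, df

# Hypothetical example: |R| = 0.7 from n = 50 observations on p = 3 variates;
# compare stat to the chi-square(3) upper 5% point, 7.815
stat, df = bartlett_independence(0.7, 50, 3)
print(round(stat, 3), df)
```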

14.
The existence of the shortest confidence interval for binomial probability is shown. The method of obtaining such an interval is presented as well.

15.
Double censoring arises when T represents an outcome variable that can only be accurately measured within a certain range [L, U], where L and U are the left- and right-censoring variables, respectively. When L is always observed, we consider empirical likelihood inference for linear transformation models, based on the martingale-type estimating equation proposed by Chen et al. (2002) [Chen, K., Jin, Z., Ying, Z. (2002). Semiparametric analysis of transformation models with censored data. Biometrika 89: 659–668.]. It is demonstrated that both the approach of Lu and Liang (2006) [Lu, W., Liang, Y. (2006). Empirical likelihood inference for linear transformation models. Journal of Multivariate Analysis 97: 1586–1599.] and that of Yu et al. (2011) [Yu, W., Sun, Y., Zheng, M. (2011). Empirical likelihood method for linear transformation models. Annals of the Institute of Statistical Mathematics 63: 331–346.] can be extended to doubly censored data. Simulation studies are conducted to investigate the performance of the empirical likelihood ratio methods.

16.
Some applications of ratios of normal random variables require both the numerator and denominator of the ratio to be positive if the ratio is to have a meaningful interpretation. In these applications, there may also be substantial likelihood that the variables will assume negative values. An example of such an application is when comparisons are made in which treatments may have either efficacious or deleterious effects on different trials. Classical theory on ratios of normal variables has focused on the distribution of the ratio and has not formally incorporated this practical consideration. When this issue has arisen, approximations have been used to address it. In this article, we provide an exact method for determining (1 − α) confidence bounds for ratios of normal variables under the constraint that the ratio is composed of positive values, and connect this theory to classical work in this area. We then illustrate several practical applications of this method.

17.
In comparing a collection of K populations, it is common practice to display in one visualization confidence intervals for the corresponding population parameters θ1, θ2, …, θK. For a pair of confidence intervals that do (or do not) overlap, viewers of the visualization are cognitively compelled to declare that there is not (or there is) a statistically significant difference between the two corresponding population parameters. It is generally well known that the method of examining overlap of pairs of confidence intervals should not be used for formal hypothesis testing. However, use of a single visualization with overlapping and nonoverlapping confidence intervals leads many to draw such conclusions, despite the best efforts of statisticians toward preventing users from reaching such conclusions. In this article, we summarize some alternative visualizations from the literature that can be used to properly test equality between a pair of population parameters. We recommend that these visualizations be used with caution to avoid incorrect statistical inference. The methods presented require only that we have K sample estimates and their associated standard errors. We also assume that the sample estimators are independent, unbiased, and normally distributed.
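A small sketch of why overlap checking is not a valid test: under the stated assumptions (independent, unbiased, normal estimators), the correct comparison uses the standard error of the difference, and two 95% intervals can overlap while the z test on the difference still rejects:

```python
def ci95(est, se):
    """Conventional 95% confidence interval for one estimate."""
    return est - 1.96 * se, est + 1.96 * se

def z_diff(e1, se1, e2, se2):
    """Correct two-sample z statistic for H0: theta1 = theta2."""
    return (e1 - e2) / (se1 ** 2 + se2 ** 2) ** 0.5

# Hypothetical estimates: the two 95% CIs overlap ...
a, b = ci95(0.0, 1.0), ci95(3.5, 1.0)
overlap = a[1] > b[0]
# ... yet the difference is significant at the 5% level
z = z_diff(0.0, 1.0, 3.5, 1.0)
print(overlap, abs(z) > 1.96)  # True True
```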

18.
Sample size and the population correlation coefficient are the most important factors influencing the statistical significance of the sample correlation coefficient. When testing the hypothesis that the observed value r of the correlation coefficient differs from zero, Fisher's Z transformation may be inaccurate for small samples, especially when the population correlation coefficient ρ is large. In this study, a simulation program was written to illustrate how bias in the Fisher transformation of the correlation coefficient affects estimate precision when the sample size is small and ρ is large. From the simulation results, 90% and 95% confidence intervals for correlation coefficients were constructed and tabulated. As a result, it is suggested that when ρ is greater than 0.2 and the sample size is 18 or less, Tables 1 and 2 be used for significance tests on correlations.
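For reference, the standard Fisher Z interval that the article examines can be sketched as follows (the standard construction, not the article's corrected tables):

```python
import math
from statistics import NormalDist

def fisher_ci(r, n, conf=0.95):
    """Approximate CI for a correlation via Fisher's Z transformation:
    atanh(r) is roughly normal with standard error 1/sqrt(n - 3).
    The article's point is that this can be biased for small n, large rho."""
    z = math.atanh(r)
    se = 1 / math.sqrt(n - 3)
    zcrit = NormalDist().inv_cdf((1 + conf) / 2)
    return math.tanh(z - zcrit * se), math.tanh(z + zcrit * se)

# r = 0.8 with n = 15: small sample and large rho, the regime flagged above
lo, hi = fisher_ci(0.8, 15)
print(round(lo, 3), round(hi, 3))
```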

19.
Group testing procedures, in which groups containing several units are tested without testing each unit, are widely used as cost-effective procedures for estimating the proportion of defective units in a population. A problem arises when we apply these procedures to the detection of genetically modified organisms (GMOs), because the analytical instrument for detecting GMOs has a threshold of detection. If the group size (i.e., the number of units within a group) is large, the GMOs in a group may go undetected due to dilution even if the group contains one unit of GMOs. Thus, most people conventionally use a small group size (which we call the conventional group size) so that they can surely detect the existence of defective units if at least one unit of GMOs is included in the group. However, we show that we can estimate the proportion of defective units for any group size even if a threshold of detection exists; the estimate of the proportion of defective units is easily obtained by using functions implemented in a spreadsheet. We then show that the conventional group size is not always optimal for controlling a consumer's risk, because such a group size requires a larger number of groups for testing.
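Assuming the standard binomial model for group testing (and ignoring the detection-threshold complication the article addresses), the basic estimator can be sketched as:

```python
def group_test_estimate(positive_groups, total_groups, group_size):
    """MLE of the per-unit defective proportion p from group test results:
    a group is negative iff all k units in it are negative, so
    P(group positive) = 1 - (1 - p)^k; invert this at the observed rate.
    Sketch of the basic estimator only, with no detection threshold."""
    rate = positive_groups / total_groups
    return 1 - (1 - rate) ** (1 / group_size)

# Hypothetical data: 12 of 100 groups of size 10 test positive
print(round(group_test_estimate(12, 100, 10), 4))
```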

20.
A Critic's Guide to Software for CP/M Computers, ISBN 0-8019-7404-6. A Critic's Guide to Software for IBM-PC and PC-Compatible Computers, ISBN 0-8091-7413-5. A Critic's Guide to Software for Apple and Apple-Compatible Computers, ISBN 0-8091-7412-7, by Phillip I. Good, Ph.D. Printed and distributed by: Chilton Company, Radnor, Pennsylvania 19089, 1983. $ per volume
