Similar documents
 A total of 20 similar documents were retrieved.
1.
The inverse Gaussian distribution has been used in a wide range of applications to model duration and failure phenomena. In these applications, one-sided lower tolerance limits are employed, for instance, in designing safety limits for medical devices. Tang and Chang (1994) proposed lower one-sided tolerance limits via the Bonferroni inequality when the parameters of the inverse Gaussian distribution are unknown. However, their simulation results showed conservative coverage probabilities and, consequently, wider intervals. In their paper they also proposed an alternative construction of less conservative limits, but simulations yielded unsatisfactory coverage probabilities in many cases. In this article, an exact lower one-sided tolerance limit is proposed. The proposed limit has a form similar to that of the confidence interval for the mean of an inverse Gaussian distribution. The proposed limit and Tang and Chang's method are compared via extensive Monte Carlo simulations. The simulation results suggest that the proposed limit is superior to Tang and Chang's method, giving narrower intervals with coverage probability close to the nominal level. A similar argument can be applied to the formulation of two-sided tolerance limits. A summary and conclusions on the proposed limits are included.
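As a rough illustration of the kind of Monte Carlo coverage study described above, the following Python sketch estimates the coverage probability of a candidate (p, gamma) lower tolerance limit for inverse Gaussian data. The limit rule naive_lower_limit is a hypothetical normal-theory placeholder, not the limit proposed in the article or the Tang and Chang (1994) construction:

    # Minimal sketch: Monte Carlo coverage check of a candidate lower tolerance
    # limit for inverse Gaussian data.  The limit rule below is a placeholder only.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    mu, lam = 2.0, 5.0             # true IG(mean=mu, shape=lam) parameters
    n, p, gamma = 30, 0.90, 0.95   # sample size, content, nominal confidence
    n_sim = 5000

    def naive_lower_limit(x, p):
        # Placeholder rule: normal-theory lower limit x_bar - k * s (assumption only).
        k = stats.norm.ppf(p) * np.sqrt(1 + 1.0 / len(x))
        return x.mean() - k * x.std(ddof=1)

    hits = 0
    for _ in range(n_sim):
        x = stats.invgauss.rvs(mu / lam, scale=lam, size=n, random_state=rng)
        L = naive_lower_limit(x, p)
        content = stats.invgauss.sf(L, mu / lam, scale=lam)  # true P(X >= L)
        hits += content >= p
    print("estimated coverage:", hits / n_sim, "nominal:", gamma)

A conservative method would show estimated coverage well above gamma; a well-calibrated exact limit should sit close to it.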

2.
In the conventional formulation, the variance underlying a tolerance interval computed from measurements consists of a single component, and the sample size for a quality control process is determined from that single variance component. However, recent examples involve measurements whose variance comprises several components, so an approximate method is needed to modify the conventional tolerance interval. In this paper, we develop an approach to calculate the sample size for a two-sided tolerance interval when the measurement variance includes several components. An example is presented to illustrate the proposed method.

3.
ABSTRACT

In the past, tolerance intervals have been used for statistical quality control of raw materials and/or the final product. In the traditional formulation of the tolerance interval, the variance of the measurements is a single component. However, there are examples in which several components contribute to the measurement variance, so an approximate method is needed to modify the traditional tolerance interval. Here we employ a tolerance interval that accounts for multiple variance components in the measurements for the quality control process. In this paper, the proposed method is used to determine the sample size for a two-sided tolerance interval that accounts for multiple components in the measurement variance.

4.
In this paper, a confidence interval for the 100pth percentile of the Birnbaum-Saunders distribution is constructed. Conservative two-sided tolerance limits are then obtained from the confidence limits. These results are useful for reliability evaluation when using the Birnbaum-Saunders model. A simple scheme for generating Birnbaum-Saunders random variates is derived and is used in a simulation study investigating the effectiveness of the proposed confidence interval in terms of its coverage probability.
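One standard way to generate Birnbaum-Saunders variates uses their normal representation, T = (beta/4) * (alpha*Z + sqrt(alpha^2 Z^2 + 4))^2 with Z standard normal. The Python sketch below illustrates this idea and checks it against scipy's fatiguelife (Birnbaum-Saunders) distribution; it is not necessarily the exact scheme derived in the paper:

    # Sketch: generating Birnbaum-Saunders(alpha, beta) variates from standard normals.
    import numpy as np
    from scipy import stats

    def rbirnsaun(alpha, beta, size, rng):
        z = rng.standard_normal(size)
        w = 0.5 * (alpha * z + np.sqrt(alpha**2 * z**2 + 4.0))
        return beta * w**2        # T = beta * ((alpha*z + sqrt(alpha^2 z^2 + 4))/2)^2

    rng = np.random.default_rng(1)
    alpha, beta = 0.5, 2.0
    t = rbirnsaun(alpha, beta, 100000, rng)

    # Check against scipy's Birnbaum-Saunders ("fatigue life") distribution.
    ks = stats.kstest(t, stats.fatiguelife(alpha, scale=beta).cdf)
    print(ks.statistic, ks.pvalue)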

5.
Abstract

The standard method of obtaining a two-sided confidence interval for the Poisson mean produces an interval that is exact but can be shortened without violating the minimum coverage requirement. We classify the intervals proposed as alternatives to the standard interval. The classification is carried out using two desirable properties of two-sided confidence intervals.
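For reference, the standard exact two-sided interval for a Poisson mean referred to above is usually computed from chi-square quantiles (the Garwood construction). A minimal Python sketch, which does not implement the shortened alternatives classified in the paper:

    # Sketch: standard exact (Garwood) two-sided CI for a Poisson mean,
    # based on an observed count x.
    from scipy import stats

    def poisson_exact_ci(x, alpha=0.05):
        lower = 0.0 if x == 0 else 0.5 * stats.chi2.ppf(alpha / 2, 2 * x)
        upper = 0.5 * stats.chi2.ppf(1 - alpha / 2, 2 * (x + 1))
        return lower, upper

    print(poisson_exact_ci(7))   # e.g. an observed count of 7 events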

6.
Halperin et al. (1988) suggested an approach that allows for k Type I errors while using Scheffé's method of multiple comparisons for linear combinations of p means. In this paper we apply the same type of error control to Tukey's method of multiple pairwise comparisons. Specifically, the variant of the Tukey (1953) approach discussed here defines the error control objective as assuring, with a specified probability, that at most one of the p(p − 1)/2 two-sided comparisons between all pairs of treatment means is significant when the overall null hypothesis (all p means are equal) is true or, from a confidence interval point of view, that at most one of a set of simultaneous confidence intervals for all pairwise differences of the treatment means is incorrect. The formulae that yield the critical values needed to carry out this new procedure are derived, and the critical values are tabulated. A Monte Carlo study was conducted, and several tables are presented to demonstrate the experimentwise Type I error rates and the gains in power furnished by the proposed procedure.
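To fix ideas, the ordinary Tukey simultaneous intervals for all pairwise mean differences, i.e. the procedure whose critical values the paper relaxes so that one erroneous rejection is tolerated, can be sketched in Python as follows; the relaxed critical values themselves are tabulated in the paper and are not reproduced here:

    # Sketch: classical Tukey simultaneous CIs for all pairwise differences of p group means.
    # The paper's variant replaces the studentized-range critical value with a smaller one
    # that allows at most one false rejection; those values are not computed here.
    import itertools
    import numpy as np
    from scipy import stats

    def tukey_pairwise_cis(groups, alpha=0.05):
        p = len(groups)
        n = np.array([len(g) for g in groups])
        means = np.array([np.mean(g) for g in groups])
        df = n.sum() - p
        s2 = sum((len(g) - 1) * np.var(g, ddof=1) for g in groups) / df  # pooled variance
        q = stats.studentized_range.ppf(1 - alpha, p, df)
        out = {}
        for i, j in itertools.combinations(range(p), 2):
            half = q / np.sqrt(2) * np.sqrt(s2 * (1 / n[i] + 1 / n[j]))
            diff = means[i] - means[j]
            out[(i, j)] = (diff - half, diff + half)
        return out

    rng = np.random.default_rng(2)
    groups = [rng.normal(loc=m, size=12) for m in (0.0, 0.0, 0.5)]
    for pair, ci in tukey_pairwise_cis(groups).items():
        print(pair, ci)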

7.
The standard approach to constructing nonparametric tolerance intervals is to use the appropriate order statistics, provided a minimum sample size requirement is met. However, it is well known that this traditional approach is conservative with respect to the nominal level. One way to improve the coverage probabilities is to use interpolation; however, neither the extension to two-sided tolerance intervals nor the case in which the minimum sample size requirement is not met has been studied. In this paper, an approach using linear interpolation is proposed for improving coverage probabilities in the two-sided setting. When the minimum sample size requirement is not met, coverage probabilities are shown to improve by using linear extrapolation. A discussion of the effect on coverage probabilities and expected lengths when transforming the data is also presented. The applicability of this approach is demonstrated using three real data sets.
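As background for the order-statistic approach mentioned above, the confidence that the interval [X_(1), X_(n)] contains at least a proportion p of the population is a Beta tail probability, and the usual minimum-sample-size requirement follows from it. A small Python sketch (the paper's interpolation/extrapolation refinements are not implemented):

    # Sketch: confidence attached to the two-sided nonparametric tolerance interval
    # [X_(1), X_(n)], and the minimum n needed to reach a target confidence gamma.
    # Coverage of [X_(1), X_(n)] follows a Beta(n - 1, 2) distribution.
    from scipy import stats

    def conf_two_sided(n, p):
        return stats.beta.sf(p, n - 1, 2)     # P(coverage >= p)

    def min_sample_size(p, gamma):
        n = 2
        while conf_two_sided(n, p) < gamma:
            n += 1
        return n

    print(conf_two_sided(60, 0.90))           # confidence for n = 60, p = 0.90
    print(min_sample_size(0.90, 0.95))        # classical minimum-n requirement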

8.
A method for controlling the familywise error rate that combines the Bonferroni adjustment with fixed-sequence testing procedures is proposed. This procedure allots Type I error as in the Bonferroni adjustment, but allows the Type I error to accumulate whenever a null hypothesis is rejected. In this manner, power for hypotheses tested later in a prespecified order is increased. The order of the hypothesis tests must be prespecified, as in a fixed-sequence testing procedure, but unlike that procedure all hypotheses can always be tested, allowing for an a priori method of concluding a difference in the various endpoints. One application is in clinical trials in which mortality is a concern but the power to detect a difference in mortality is expected to be low. If the effect on mortality is larger than anticipated, this method allows a test with a prespecified means of controlling the Type I error rate. Copyright © 2003 John Wiley & Sons, Ltd.
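A minimal Python sketch of the procedure as described above: alpha is split Bonferroni-style across a prespecified ordering, and the alpha of any rejected hypothesis is carried forward to the next test. The split weights in the example are assumptions for illustration only:

    # Sketch of the described procedure: Bonferroni-style allocation over a prespecified
    # testing order, with the allocated alpha accumulating after each rejection.
    def fallback_test(p_values, alpha=0.05, weights=None):
        k = len(p_values)
        weights = weights or [1.0 / k] * k       # assumed equal split unless specified
        carry, decisions = 0.0, []
        for p, w in zip(p_values, weights):
            level = alpha * w + carry            # hypothesis tested at its share plus carry-over
            reject = p <= level
            decisions.append(reject)
            carry = level if reject else 0.0     # rejected alpha is passed on; otherwise it is lost
        return decisions

    # Example: first (e.g. mortality) endpoint tested at 0.025; once it is rejected,
    # the second endpoint can be tested at the full 0.05.
    print(fallback_test([0.02, 0.04], alpha=0.05, weights=[0.5, 0.5]))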

9.
In this paper, we propose a design that uses a short-term endpoint for accelerated approval at an interim analysis and a long-term endpoint for full approval at the final analysis, with sample size adaptation based on the long-term endpoint. Two sample size adaptation rules are compared: a rule that maintains the conditional power at a prespecified level, and a step-function rule that better addresses the bias issue. Three testing procedures are proposed: alpha splitting between the two endpoints; alpha exhaustion between the endpoints; and alpha exhaustion with an improved critical value based on the correlation. The familywise error rate is proved to be strongly controlled across the two endpoints, the sample size adaptation, and the two analysis time points under the proposed designs. We show that the alpha-exhaustive designs greatly improve power when both endpoints are effective, and that the power difference between the two adaptation rules is minimal. The proposed design can be extended to more general settings. Copyright © 2015 John Wiley & Sons, Ltd.

10.
ABSTRACT

A statistical test can be seen as a procedure for producing a decision based on observed data, where some decisions consist of rejecting a hypothesis (yielding a significant result) and some do not, and where one controls the probability of making a wrong rejection at some prespecified significance level. Whereas traditional hypothesis testing involves only two possible decisions (to reject or not reject a null hypothesis), Kaiser's directional two-sided test and the more recently introduced testing procedure of Jones and Tukey, each equivalent to running two one-sided tests, involve three possible decisions for inferring the value of a unidimensional parameter. The latter procedure assumes that a point null hypothesis is impossible (e.g., that two treatments cannot have exactly the same effect), allowing a gain in statistical power. There are, however, situations where a point hypothesis is indeed plausible, for example, when considering hypotheses derived from Einstein's theories. In this article, we introduce a five-decision rule testing procedure, equivalent to running a traditional two-sided test in addition to two one-sided tests, which combines the advantages of the testing procedures of Kaiser (no assumption that a point hypothesis is impossible) and of Jones and Tukey (higher power), allowing a nonnegligible (typically 20%) reduction of the sample size needed to reach a given statistical power to obtain a significant result, compared to the traditional approach.

11.
Exact methods are proposed for constructing two-sided tolerance intervals (TIs) and tolerance intervals that control the percentages in both tails for a location-scale family of distributions. The proposed methods are illustrated by constructing TIs for the normal, logistic, and Laplace (double exponential) distributions based on type II singly censored samples. Factors for constructing one-sided and two-sided TIs for a logistic distribution are tabulated for the case of uncensored samples, and factors for constructing TIs based on censored samples are tabulated for all three distributions. The factors in all cases are estimated by Monte Carlo simulation. An adjustment to the tolerance factors based on type II censored samples is proposed so that they can be used to find approximate TIs based on type I censored samples. Coverage studies of the approximate TIs based on type I censored samples indicate that the approximation is satisfactory as long as the proportion of censored observations is no more than 0.70. The methods are illustrated using practical examples.
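To illustrate the Monte Carlo estimation of tolerance factors in the simplest (uncensored normal) case, the sketch below simulates the two-sided factor k for intervals of the form xbar ± k*s; the censored-sample factors and the type I censoring adjustment discussed in the paper are not reproduced:

    # Sketch: Monte Carlo estimation of the two-sided normal tolerance factor k
    # (uncensored case).  For each simulated N(0,1) sample, find the smallest k whose
    # interval xbar +/- k*s has content >= p; the (p, gamma) factor is the gamma-quantile
    # of these sample-specific k values.
    import numpy as np
    from scipy import stats, optimize

    def mc_tolerance_factor(n, p, gamma, n_sim=20000, seed=0):
        rng = np.random.default_rng(seed)
        k_req = np.empty(n_sim)
        for b in range(n_sim):
            x = rng.standard_normal(n)
            m, s = x.mean(), x.std(ddof=1)
            content = lambda k: stats.norm.cdf(m + k * s) - stats.norm.cdf(m - k * s)
            k_req[b] = optimize.brentq(lambda k: content(k) - p, 1e-6, 100.0)
        return np.quantile(k_req, gamma)

    print(mc_tolerance_factor(n=20, p=0.90, gamma=0.95))   # compare with published normal factors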

12.
Confidence intervals for the pth quantile Q of a two-parameter exponential distribution provide useful information on the plausible range of Q, and only inefficient equal-tail confidence intervals have been discussed in the statistical literature so far. In this article, the construction of the shortest possible confidence interval within a family of two-sided confidence intervals is addressed. This shortest confidence interval is always shorter, and can be substantially shorter, than the corresponding equal-tail confidence interval. Furthermore, the computational intensity of the two methodologies is similar, so it is advantageous to use the shortest confidence interval. It is shown how the results provided in this paper apply to data obtained from progressive Type II censoring, with standard Type II censoring as a special case. Applications of more complex confidence interval constructions through acceptance set inversions, which can employ prior information, are also discussed.

13.
The Poisson–Lindley distribution is a compound discrete distribution that can be used as an alternative to other discrete distributions, such as the negative binomial. This paper develops approximate one-sided and equal-tailed two-sided tolerance intervals for the Poisson–Lindley distribution. Practical applications of the Poisson–Lindley distribution frequently involve large samples, so we utilize large-sample Wald confidence intervals in the construction of our tolerance intervals. A coverage study is presented to demonstrate the efficacy of the proposed tolerance intervals, which are also demonstrated using two real data sets. The R code developed for our discussion is briefly highlighted and included in the tolerance package.
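As an illustration of the large-sample Wald step that the tolerance intervals above build on, the Python sketch below fits the Poisson–Lindley parameter theta by maximum likelihood using the standard pmf P(X = x) = theta^2 (x + theta + 2) / (theta + 1)^(x + 3) and forms a Wald interval from a numerically estimated observed information; the subsequent tolerance-interval construction from the paper is not reproduced, and the counts are made up for illustration:

    # Sketch: large-sample Wald CI for the Poisson-Lindley parameter theta.
    import numpy as np
    from scipy import optimize, stats

    def pl_loglik(theta, x):
        x = np.asarray(x)
        return np.sum(2 * np.log(theta) + np.log(x + theta + 2) - (x + 3) * np.log(theta + 1))

    def pl_wald_ci(x, alpha=0.05):
        neg = lambda t: -pl_loglik(t, x)
        res = optimize.minimize_scalar(neg, bounds=(1e-6, 100.0), method="bounded")
        theta_hat = res.x
        h = 1e-4 * theta_hat
        # observed information via a central finite difference of the log-likelihood
        info = -(pl_loglik(theta_hat + h, x) - 2 * pl_loglik(theta_hat, x)
                 + pl_loglik(theta_hat - h, x)) / h**2
        se = 1.0 / np.sqrt(info)
        z = stats.norm.ppf(1 - alpha / 2)
        return theta_hat - z * se, theta_hat + z * se

    x = [0, 1, 1, 2, 0, 3, 1, 0, 2, 1]       # made-up illustrative counts
    print(pl_wald_ci(x))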

14.
A two-sided sequential confidence interval with prescribed width and confidence coefficient is suggested for the number of equally probable cells in a given multinomial population. We establish large-sample properties of the fixed-width confidence interval procedure using a normal approximation, and some comparisons are made. In addition, a simulation study is carried out to investigate the finite-sample behaviour of the suggested sequential interval estimation procedure.

15.
Recently, authors have studied the weighted version of Kerridge's inaccuracy measure for left/right truncated distributions. In the present communication we introduce the notion of a weighted interval inaccuracy measure and study it in the context of two-sided truncated random variables. In reliability theory and survival analysis, this measure may help to study various characteristics of a system or component when it fails between two time points. We study various properties of this measure, including the effect of monotone transformations, and obtain its upper and lower bounds. It is shown that the proposed measure can uniquely determine the distribution function, and characterizations of some important life distributions are provided. This new measure is a generalization of the recent weighted residual (past) inaccuracy measure.

16.
It is well known that the construction of two-sided tolerance intervals is far more challenging than that of their one-sided counterparts. In a general framework of parametric models, we derive asymptotic results leading to explicit formulae for two-sided Bayesian and frequentist tolerance intervals. In the process, probability matching priors for such intervals are characterized, and their role in finding frequentist tolerance intervals via a Bayesian route is indicated. Furthermore, in situations where matching priors are hard to obtain, we develop purely frequentist tolerance intervals as well. The findings are applied to real data. Simulation studies lend support to the asymptotic results in finite samples.

17.
Summary.  We propose 'Dunnett-type' test procedures to test for simple tree order restrictions on the means of p independent normal populations. The new tests are based on the estimation procedures that were introduced by Hwang and Peddada and later by Dunbar, Conaway and Peddada. The proposed procedures are also extended to test for 'two-sided' simple tree order restrictions. For non-normal data, nonparametric versions based on ranked data are also suggested. Using computer simulations, we compare the proposed test procedures with some existing test procedures in terms of size and power. Our simulation study suggests that the procedures compete well with the existing procedures for both one-sided and two-sided simple tree alternatives. In some instances, especially in the case of two-sided alternatives or for non-normally distributed data, the gains in power due to the proposed procedures can be substantial.

18.
In this paper, we develop the methodology for designing clinical trials with any factorial arrangement when the primary outcome is time to event. We provide a matrix formulation for calculating the sample size and study duration necessary to test any effect with a prespecified type I error rate and power. Assuming that the time to event follows an exponential distribution, we describe the relationships between the effect size, the power, and the sample size. We present examples for illustrative purposes. We provide a simulation study to verify the numerical calculations of the expected number of events and the duration of the trial. The change in power produced by a reduced number of observations or by accruing no patients to certain factorial combinations is also described.
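In the simplest special case of a two-arm comparison with equal allocation, the relationship between effect size, power, and the required number of events under a proportional-hazards (exponential) model is the familiar Schoenfeld-type formula. A short Python sketch of that standard special case, not the paper's general matrix formulation for arbitrary factorial designs:

    # Sketch: required number of events for a two-arm, equal-allocation comparison
    # under proportional hazards (standard Schoenfeld-type formula), as a special
    # case of the effect-size / power / sample-size relationship described above.
    import math
    from scipy import stats

    def required_events(hazard_ratio, alpha=0.05, power=0.80):
        z_a = stats.norm.ppf(1 - alpha / 2)
        z_b = stats.norm.ppf(power)
        return 4 * (z_a + z_b) ** 2 / math.log(hazard_ratio) ** 2

    print(required_events(0.70))   # e.g. detecting HR = 0.70 with 80% power, two-sided 5%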

19.
The ordinary-G class of distributions is defined by taking as its cumulative distribution function (cdf) the value F(G) of the cdf F of an ordinary distribution whose range is the unit interval, evaluated at G; it thus generalizes the ordinary distribution. In this work, we take the standard two-sided power distribution as the ordinary distribution to define classes analogous to the beta-G and Kumaraswamy-G classes. We also extend the idea of two-sidedness to other ordinary distributions, such as the normal. After studying the basic properties of the new class in a general setting, we consider the two-sided generalized normal distribution together with its maximum likelihood estimation procedure.

20.
ABSTRACT

In this paper, we consider the problem of constructing nonparametric confidence intervals for the mean of a positively skewed distribution. We suggest calibrated, smoothed bootstrap upper and lower percentile confidence intervals. On the theoretical side, we show that the proposed one-sided confidence intervals have coverage probability α + O(n^{-3/2}), an improvement over the traditional bootstrap confidence intervals in terms of coverage probability. A smoothed version of the approach is also considered for constructing a two-sided confidence interval, and its theoretical properties are studied. A simulation study is performed to illustrate the performance of our confidence interval methods, which we then apply to a real data set.
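For orientation, the plain (uncalibrated, unsmoothed) percentile bootstrap interval that the proposed calibrated, smoothed intervals refine can be sketched in a few lines of Python; the smoothing and calibration steps themselves are not implemented here:

    # Sketch: plain percentile bootstrap CI for the mean of a skewed sample
    # (the baseline that the calibrated, smoothed intervals improve upon).
    import numpy as np

    def percentile_bootstrap_ci(x, alpha=0.05, n_boot=10000, seed=0):
        rng = np.random.default_rng(seed)
        x = np.asarray(x)
        boot_means = np.array([rng.choice(x, size=len(x), replace=True).mean()
                               for _ in range(n_boot)])
        return np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])

    x = np.random.default_rng(3).lognormal(mean=0.0, sigma=1.0, size=50)  # skewed example data
    print(percentile_bootstrap_ci(x))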
