Similar documents (20 results found)
1.
A simple and unified prediction interval (PI) for the median of a future lifetime can be obtained through a power transformation. This interval usually possesses the correct coverage, at least asymptotically, when the transformation is known. However, when the transformation is unknown and is estimated from the data, a correction is required. A simple correction factor is derived based on large sample theory. Simulation shows that the unified PI after correction performs well. When compared with the existing frequentist PIs, it shows an equivalent or better performance in terms of coverage probability and average length of the interval. Its nonparametric aspect and ease of use make it very attractive to practitioners. Real data examples are provided for illustration.
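The correction factor for an estimated transformation is the paper's contribution and is not reproduced here. As a rough illustration of the known-transformation case only, the sketch below applies a Box-Cox transform with a given exponent, forms a normal-theory interval for the median of m future observations (one possible reading of "a future lifetime"), and back-transforms; the variance formula and this reading are assumptions.

```python
import numpy as np
from scipy import stats

def median_pi_known_power(x, lam, m, level=0.95):
    """Sketch only: PI for the median of m future lifetimes, assuming the
    Box-Cox transform with known exponent lam makes the data normal.
    The paper's correction for an estimated lam is NOT implemented."""
    x = np.asarray(x, dtype=float)
    y = np.log(x) if lam == 0 else (x**lam - 1.0) / lam      # Box-Cox transform
    n, ybar, s = len(y), y.mean(), y.std(ddof=1)
    # On the transformed scale, the median of m future normal observations has
    # approximate variance pi*sigma^2/(2m); add sigma^2/n for estimating the mean.
    se = s * np.sqrt(np.pi / (2.0 * m) + 1.0 / n)
    z = stats.norm.ppf(0.5 + level / 2)
    lo, hi = ybar - z * se, ybar + z * se
    # Back-transform (may be undefined if lam*lo + 1 <= 0).
    back = (lambda u: np.exp(u)) if lam == 0 else (lambda u: (lam * u + 1.0) ** (1.0 / lam))
    return back(lo), back(hi)
```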

2.
Motivated by a study comparing the sensitivities and specificities of two diagnostic tests in a paired design with a small sample size, we first derived an Edgeworth expansion for the studentized difference between two binomial proportions of paired data. The Edgeworth expansion helps explain why the usual Wald interval for the difference has poor coverage at small sample sizes. Based on the Edgeworth expansion, we then derived a transformation-based confidence interval for the difference. The new interval removes the skewness in the Edgeworth expansion; it is easy to compute, and its coverage probability converges to the nominal level at a rate of O(n^{-1/2}). Numerical results indicate that the new interval has average coverage probability very close to the nominal level even for sample sizes as small as 10, and better average coverage accuracy than the best existing intervals at finite sample sizes.
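The Edgeworth-corrected interval itself is not spelled out in the abstract. For reference, a minimal sketch of the standard Wald interval for a paired difference of proportions, the interval whose poor small-sample coverage motivates the paper, is given below; the cell-count notation is mine.

```python
import math
from scipy.stats import norm

def paired_wald_ci(n11, n10, n01, n00, level=0.95):
    """Wald CI for p1 - p2 from a paired 2x2 table of counts
    (n10 = test 1 positive / test 2 negative, etc.).  This is the interval
    criticised in the abstract; the Edgeworth-based interval is not shown."""
    n = n11 + n10 + n01 + n00
    d = (n10 - n01) / n                                  # estimated p1 - p2
    var = (n10 + n01 - (n10 - n01) ** 2 / n) / n ** 2    # Wald variance estimate
    z = norm.ppf(0.5 + level / 2)
    half = z * math.sqrt(var)
    return d - half, d + half
```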

3.
Brown and Cohen (1974) considered the problem of interval estimation of the common mean of two normal populations based on independent random samples. They showed that if we take the usual confidence interval using the first sample only and centre it around an appropriate combined estimate of the common mean the resulting interval would contain the true value with higher probability. They also gave a sufficient condition which such a point estimate should satisfy. Bhattacharya and Shah (1978) showed that the estimates satisfying this condition are nearly identical to the mean of the first sample. In this paper we obtain a stronger sufficient condition which is satisfied by many point estimates when the size of the second sample exceeds ten.

4.
Frequently a random vector Y with known distribution function is readily observed. However, the random variable of interest is a transformation of Y, say h(Y), and sample values of h are expensive to evaluate. The objective is to estimate the distribution function of h(Y) using only a small sample on Y. Four estimators are proposed for use when Y is discrete. A Monte Carlo study of the estimators is presented. This estimation problem frequently arises when Y is a parameter in a mathematical programming problem and h(Y) is the optimal objective function value. Two examples of this type are presented.

5.
Parametric methods for the calculation of reference intervals in clinical studies often rely on the identification of a suitable transformation so that the transformed data can be assumed to be drawn from a Gaussian distribution. In this paper, the two-stage transformation recommended by the International Federation of Clinical Chemistry is compared with a novel generalised Box–Cox family of transformations. The sample sizes needed to achieve certain criteria of reliability in the calculated reference interval are also investigated. Simulations show that the generalised Box–Cox family achieves a lower bias than the two-stage transformation. It was also found that the two-stage transformation can produce percentile estimates that cannot be back-transformed to obtain the required reference intervals, a difficulty not observed with the generalised Box–Cox family introduced in this paper.
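The generalised Box–Cox family proposed in the paper is not reproducible from the abstract. As a baseline, a minimal sketch of a reference-interval calculation using the ordinary Box–Cox transform (via scipy's maximum-likelihood exponent estimate) is shown below; it assumes strictly positive data.

```python
import numpy as np
from scipy import stats

def boxcox_reference_interval(x, coverage=0.95):
    """Sketch: central reference interval via an ordinary Box-Cox transform
    (the paper's generalised family is not reproduced).  Assumes x > 0."""
    x = np.asarray(x, dtype=float)
    y, lam = stats.boxcox(x)                      # ML estimate of the exponent
    z = stats.norm.ppf(0.5 + coverage / 2)
    lo = y.mean() - z * y.std(ddof=1)
    hi = y.mean() + z * y.std(ddof=1)
    # Back-transform the percentile estimates (undefined if lam*lo + 1 <= 0).
    inv = (lambda u: np.exp(u)) if lam == 0 else (lambda u: (lam * u + 1.0) ** (1.0 / lam))
    return inv(lo), inv(hi)
```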

6.
One of the well-known problems with testing a sharp null hypothesis against a two-sided alternative is that, as sample sizes diverge, every consistent test rejects the null with a probability converging to one, even when it is true. This kind of problem emerges in practically all applications of traditional two-sided tests. The main purpose of the present paper is to overcome this impasse by considering a general solution to the problem of testing an equivalence null interval against two one-sided alternatives. Our goal is to go beyond the limitations of likelihood-based methods by working in a nonparametric permutation framework. This solution requires the nonparametric combination of dependent permutation tests, the methodological tool that implements Roy's union–intersection principle. The related algorithm is presented to obtain practical solutions, and a simple example and some simulation results illustrate its effectiveness. In addition, for every pair of consistent partial test statistics it is proved that, as sample sizes diverge, the rejection probability (RP) converges to zero when the effect lies in the open equivalence interval, and converges to one when the effect lies outside that interval.

7.
We investigate the sample size problem when a binomial parameter is to be estimated, but some degree of misclassification is possible. The problem is especially challenging when the degree to which misclassification occurs is not exactly known. Motivated by a Canadian survey of the prevalence of toxoplasmosis infection in pregnant women, we examine the situation where it is desired that a marginal posterior credible interval of width w for the prevalence has coverage 1−α, using a Bayesian sample size criterion. The degree to which the misclassification probabilities are known a priori can have a very large effect on sample size requirements, and in some cases achieving a coverage of 1−α is impossible, even with an infinite sample size. Therefore, investigators must carefully evaluate the degree to which misclassification can occur when estimating sample size requirements.
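As a rough illustration of the kind of criterion described, the sketch below evaluates, by Monte Carlo, the prior-predictive average posterior mass captured by the best window of fixed width w for the prevalence, with sensitivity and specificity treated as known. All numerical values and the grid-posterior construction are illustrative assumptions; the paper's exact criterion and priors may differ.

```python
import numpy as np

def avg_coverage_fixed_width(n, w, a=1.0, b=1.0, se=0.95, sp=0.90,
                             nsim=2000, grid=2001, seed=0):
    """Monte Carlo sketch: average, over the prior predictive, of the posterior
    probability captured by a window of width w for the prevalence pi, with
    known sensitivity (se) and specificity (sp).  Compare the result with 1 - alpha."""
    rng = np.random.default_rng(seed)
    pis = np.linspace(0.0, 1.0, grid)
    prior = pis ** (a - 1) * (1 - pis) ** (b - 1)        # Beta(a, b) prior on pi
    q = pis * se + (1 - pis) * (1 - sp)                  # apparent (observed) prevalence
    k = int(round(w * (grid - 1)))                       # window width in grid steps
    cover = []
    for _ in range(nsim):
        pi_true = rng.beta(a, b)
        x = rng.binomial(n, pi_true * se + (1 - pi_true) * (1 - sp))
        with np.errstate(divide="ignore"):
            logpost = np.log(prior) + x * np.log(q) + (n - x) * np.log(1 - q)
        post = np.exp(logpost - logpost.max())
        post /= post.sum()
        csum = np.concatenate(([0.0], np.cumsum(post)))
        cover.append(float((csum[k:] - csum[:-k]).max()))  # best fixed-width window
    return float(np.mean(cover))
```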

8.
In randomized clinical trials (RCTs), we may encounter the situation in which some patients do not fully comply with their assigned treatment. For an experimental treatment with trichotomous levels, we derive the maximum likelihood estimator (MLE) of the risk ratio (RR) per level of dose increase in an RCT with noncompliance. We further develop three asymptotic interval estimators for the RR. To evaluate and compare the finite-sample performance of these interval estimators, we employ Monte Carlo simulation. When the number of patients per treatment is large, we find that all interval estimators derived in this paper perform well. When the number of patients is not large, we find that the interval estimator using Wald's statistic can be liberal, while the interval estimator using the logarithmic transformation of the MLE can lose precision. We note that use of a bootstrap variance estimate in this case may alleviate these concerns. We further note that an interval estimator combining the Wald-statistic and logarithmic-transformation intervals can generally perform well with respect to coverage probability, and is generally more efficient than interval estimators using bootstrap variance estimates when RR > 1. Finally, we use data taken from a study of vitamin A supplementation to reduce mortality in preschool children to illustrate the use of these estimators.

9.
Finding an interval estimation procedure for the variance of a population that achieves a specified confidence level can be problematic. If the distribution of the population is known, then a distribution-dependent interval for the variance can be obtained by considering a power transformation of the sample variance. Simulation results suggest that this method produces intervals for the variance that maintain the nominal probability of coverage for a wide variety of distributions. If the underlying distribution is unknown, then the power itself must be estimated prior to forming the endpoints of the interval. The result is a distribution-free confidence interval estimator of the population variance. Simulation studies indicate that the power transformation method compares favorably to the logarithmic transformation method and the nonparametric bias-corrected and accelerated bootstrap method for moderately sized samples. However, two applications, one in forestry and the other in health sciences, demonstrate that no single method is best for all scenarios.
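The proposed power-transformation interval is not specified in the abstract. For orientation, the sketch below implements the logarithmic-transformation comparator mentioned above: a delta-method interval for the variance built on the large-sample variance of s^2 estimated with the sample fourth central moment.

```python
import numpy as np
from scipy.stats import norm

def log_transform_var_ci(x, level=0.95):
    """Sketch of the log-transformation CI for a population variance: delta
    method on log(s^2) with Var(s^2) estimated by (m4 - s^4)/n.  The paper's
    power-transformation interval is not reproduced."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s2 = x.var(ddof=1)
    m4 = np.mean((x - x.mean()) ** 4)                  # sample fourth central moment
    se_log = np.sqrt(max(m4 - s2 ** 2, 0.0) / n) / s2  # approx. SE of log(s^2)
    z = norm.ppf(0.5 + level / 2)
    return s2 * np.exp(-z * se_log), s2 * np.exp(z * se_log)
```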

10.
This paper is concerned with interval estimation of an autoregressive parameter when the parameter space allows for magnitudes outside the unit interval. In this case, intervals based on the least-squares estimator tend to require a high level of numerical computation and can be unreliable for small sample sizes. Intervals based on the asymptotic distribution of instrumental variable estimators provide an alternative. If the instrument is taken to be the sign function, the interval is centered at the Cauchy estimator and a large sample interval can be created by estimating the standard error of this estimator. The interval proposed in this paper avoids estimating this standard error and results in a small sample improvement in coverage probability. In fact, small sample coverage is exact when the innovations come from a normal distribution.
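For illustration only, the sketch below computes the Cauchy (sign-instrument) estimator of an AR(1) coefficient and an interval obtained by inverting the standardized sign-instrument statistic. This is one plausible construction around that estimator, not the paper's exact interval or its exactness argument.

```python
import numpy as np
from scipy.stats import norm

def cauchy_ar1_interval(y, level=0.95):
    """Sketch: Cauchy (sign-instrument) estimator of the AR(1) coefficient and
    an interval from inverting the standardized statistic
    sum(sign(y_{t-1}) * (y_t - beta * y_{t-1})) / (sigma * sqrt(n))."""
    y = np.asarray(y, dtype=float)
    y_lag, y_cur = y[:-1], y[1:]
    s = np.sign(y_lag)
    beta_hat = np.sum(s * y_cur) / np.sum(np.abs(y_lag))   # Cauchy estimator
    resid = y_cur - beta_hat * y_lag
    sigma_hat = np.std(resid, ddof=1)                       # innovation scale
    z = norm.ppf(0.5 + level / 2)
    half = z * sigma_hat * np.sqrt(len(y_cur)) / np.sum(np.abs(y_lag))
    return beta_hat - half, beta_hat + half
```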

11.
In single-arm clinical trials with survival outcomes, the Kaplan–Meier estimator and its confidence interval are widely used to assess survival probability and median survival time. Because the asymptotic normality of the Kaplan–Meier estimator is a standard result, sample size calculation methods have not been studied in depth. An existing sample size calculation method is founded on the asymptotic normality of the Kaplan–Meier estimator under the log transformation. However, the log-transformed estimator has quite poor small-sample properties (and small samples are typical in single-arm trials), and the existing method uses an inappropriate standard normal approximation to calculate sample sizes. These issues can seriously affect the accuracy of results. In this paper, we propose alternative methods to determine sample sizes based on a valid standard normal approximation with several transformations that may give an accurate normal approximation even with small sample sizes. In numerical evaluations via simulations, some of the proposed methods provided more accurate results, and the empirical power of the proposed method with the arcsine square-root transformation tended to be closer to the prescribed power than the other transformations. These results were supported when the methods were applied to data from three clinical trials.
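The sample-size machinery itself is the paper's contribution and is not reproduced. The sketch below shows the ingredients discussed in the abstract, a Kaplan–Meier estimate at a landmark time with a Greenwood variance and a transformed confidence interval (log or arcsine-square-root), written from the standard textbook formulas.

```python
import numpy as np
from scipy.stats import norm

def km_ci_at_time(times, events, t0, level=0.95, transform="asin-sqrt"):
    """Sketch: Kaplan-Meier estimate of S(t0), Greenwood variance, and a
    transformed CI ('log' or 'asin-sqrt').  Standard formulas only; the
    paper's sample-size calculations are not reproduced."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)          # 1 = event, 0 = censored
    order = np.lexsort((1 - events, times))         # deaths before censorings at ties
    s, gw, n_risk = 1.0, 0.0, len(times)
    for t, d in zip(times[order], events[order]):
        if t > t0:
            break
        if d == 1 and n_risk > 1:
            s *= 1.0 - 1.0 / n_risk                 # Kaplan-Meier product term
            gw += 1.0 / (n_risk * (n_risk - 1))     # Greenwood sum
        n_risk -= 1
    z = norm.ppf(0.5 + level / 2)
    if transform == "log":                          # interval on log S(t0)
        half = z * np.sqrt(gw)
        return s * np.exp(-half), min(s * np.exp(half), 1.0)
    se = s * np.sqrt(gw)                            # Greenwood standard error
    theta = np.arcsin(np.sqrt(s))                   # arcsine-square-root scale
    half = z * se / (2.0 * np.sqrt(s * (1.0 - s))) if 0.0 < s < 1.0 else 0.0
    lo, hi = max(theta - half, 0.0), min(theta + half, np.pi / 2)
    return np.sin(lo) ** 2, np.sin(hi) ** 2
```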

12.
The problem of estimating the difference between two Poisson means is considered. A new moment confidence interval (CI) and a fiducial CI for the difference between the means are proposed. The moment CI is simple to compute, and it specializes to the classical Wald CI when the sample sizes are equal. Numerical studies indicate that the moment CI offers improvement over the Wald CI when the sample sizes are different. Exact properties of the CIs based on the moment, fiducial and hybrid methods are evaluated numerically. Our numerical study indicates that the hybrid and fiducial CIs are in general comparable, and the moment CI seems to be the best when the expected total counts from both distributions are two or more. The interval estimation procedures are illustrated using two examples.
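The moment and fiducial intervals are not given in the abstract. For reference, a minimal sketch of the classical Wald interval for a difference of Poisson means (the interval to which, per the abstract, the moment CI reduces when sample sizes are equal) follows; the count/exposure notation is mine.

```python
import math
from scipy.stats import norm

def poisson_diff_wald_ci(x1, n1, x2, n2, level=0.95):
    """Wald CI for lambda1 - lambda2 from total counts x1, x2 observed over
    n1, n2 sampling units.  The paper's moment and fiducial CIs are not shown."""
    d = x1 / n1 - x2 / n2
    se = math.sqrt(x1 / n1 ** 2 + x2 / n2 ** 2)   # estimated SE of the difference
    z = norm.ppf(0.5 + level / 2)
    return d - z * se, d + z * se
```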

13.
In sample surveys and many other areas of application, the ratio of variables is often of great importance. This often occurs when one variable is available at the population level while another variable of interest is available for sample data only. In this case, using the sample ratio, we can often gather valuable information on the variable of interest for the unsampled observations. In many other studies, the ratio itself is of interest, for example when estimating proportions from a random number of observations. In this note we compare three confidence intervals for the population ratio: a large-sample interval, a log-based version of the large-sample interval, and Fieller's interval. This is done through data analysis and through a small simulation experiment. The Fieller method has often been proposed as a superior interval for small sample sizes. We show through a data example and simulation experiments that Fieller's method often gives nonsensical and uninformative intervals when the observations are noisy relative to the mean of the data. The large-sample interval does not suffer in this way and thus can be a more reliable method for both small and large samples.
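A minimal sketch of two of the intervals being compared, a delta-method ("large sample") interval and Fieller's interval for a ratio of means, is given below. It assumes two independent samples and normal critical values, which may differ from the survey setting of the paper; the unbounded branch of Fieller's solution illustrates the "uninformative interval" behaviour mentioned above.

```python
import numpy as np
from scipy.stats import norm

def ratio_intervals(y, x, level=0.95):
    """Sketch: delta-method and Fieller intervals for mu_y / mu_x from two
    independent samples y and x (assumption: independence, z critical values)."""
    y, x = np.asarray(y, dtype=float), np.asarray(x, dtype=float)
    my, mx = y.mean(), x.mean()
    vy, vx = y.var(ddof=1) / len(y), x.var(ddof=1) / len(x)   # variances of the means
    z = norm.ppf(0.5 + level / 2)
    r = my / mx
    # Large-sample (delta-method) interval
    se = abs(r) * np.sqrt(vy / my ** 2 + vx / mx ** 2)
    large_sample = (r - z * se, r + z * se)
    # Fieller: solve (my - theta*mx)^2 <= z^2 * (vy + theta^2 * vx) for theta
    a = mx ** 2 - z ** 2 * vx
    b = -2.0 * my * mx
    c = my ** 2 - z ** 2 * vy
    disc = b ** 2 - 4.0 * a * c
    if a <= 0 or disc < 0:
        fieller = (-np.inf, np.inf)        # unbounded / uninformative case
    else:
        fieller = ((-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a))
    return large_sample, fieller
```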

14.
To address the normalization problem when multi-objective decision values are interval numbers, a linear transformation operator onto [-1, 1] with a "reward the superior, penalize the inferior" property is proposed for normalizing the original decision information. The operator is applied to multi-objective grey situation decision making in which the objective weights are to be determined and the attribute values are interval numbers, yielding a grey situation decision method for interval numbers based on the "reward the superior, penalize the inferior" operator. The selection of an air-to-ship missile design scheme is presented as an application case. The results show that the method is convenient to use, computationally simple, and easy to implement, and it provides an effective, scientific, and practical approach for uncertain decision problems with interval-valued data.
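The operator's exact definition is not given in the abstract. Purely as a hypothetical illustration of one possible "reward the superior, penalize the inferior" linear normalization onto [-1, 1] for interval-valued attribute data, a sketch follows; the mapping (attribute maximum to +1, minimum to -1, reversed for cost attributes) is my assumption, not the paper's operator.

```python
import numpy as np

def reward_penalize_normalize(intervals, benefit=True):
    """Hypothetical sketch of a linear [-1, 1] normalization for interval-valued
    attribute data: the attribute-wise maximum maps to +1 and the minimum to -1
    (reversed for cost attributes).  `intervals` is an (m, 2) array of
    [lower, upper] bounds for one attribute; assumes max > min."""
    a = np.asarray(intervals, dtype=float)
    lo, hi = a.min(), a.max()
    scaled = 2.0 * (a - lo) / (hi - lo) - 1.0      # linear map onto [-1, 1]
    if not benefit:                                # cost attribute: smaller is better
        scaled = -scaled[:, ::-1]                  # negate and swap interval endpoints
    return scaled
```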

15.
It is often necessary to compare two measurement methods in medicine and other experimental sciences. This problem covers a broad range of data. Many authors have explored ways of assessing the agreement of two sets of measurements. However, there has been relatively little attention to the problem of determining sample size for designing an agreement study. In this paper, a method using the interval approach for concordance is proposed to calculate the sample size for an agreement study. The philosophy is that concordance is satisfied when no more than a pre-specified number k of discordances are found in a reasonably large sample of size n, since it is much easier to define a discordant pair. The goal is to find such a reasonably large sample size n. The sample size calculation is based on two rates: the discordance rate and the tolerance probability, which in turn can be used to quantify an agreement study. The proposed approach is demonstrated on a real data set.
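As a rough illustration of a discordance-based sample-size search, the sketch below finds the smallest n such that, if the true discordance rate were at an unacceptable level, observing at most k discordant pairs would be unlikely. The decision rule and parameter names are my assumptions; the paper's exact criterion may differ.

```python
from scipy.stats import binom

def agreement_sample_size(k, p_unacceptable, tolerance=0.95, n_max=10000):
    """Hedged sketch: smallest n such that, at discordance rate p_unacceptable,
    the chance of seeing at most k discordant pairs is no more than 1 - tolerance."""
    for n in range(k + 1, n_max + 1):
        if binom.cdf(k, n, p_unacceptable) <= 1 - tolerance:
            return n
    return None   # no n up to n_max satisfies the criterion
```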

16.
The maximum likelihood estimate is considered for the intraclass correlation coefficient in a bivariate normal distribution when some observations on either of the variables are missing. The estimate is given as the solution of a polynomial equation of degree seven. An approximate confidence interval and a test procedure for the intraclass correlation are constructed based on an asymptotic variance-stabilizing transformation of the resulting estimator. The distributional results are also considered under violation of the normality assumption. A Monte Carlo study was performed to examine the finite-sample properties of the maximum likelihood estimator and to evaluate the proposed procedures for hypothesis testing and interval estimation.

17.
The author proposes an adaptive method that produces confidence intervals which are often narrower than those obtained by the traditional procedures. The proposed methods use a weighted least squares approach to reduce the length of the confidence interval and a permutation technique to ensure that its coverage probability is near the nominal level. The author reports simulations comparing the adaptive intervals to the traditional ones for the difference between two population means, for the slope in a simple linear regression, and for the slope in a multiple linear regression having two correlated exogenous variables. He recommends the adaptive intervals for sample sizes greater than 40 when the error distribution is not known to be Gaussian.

18.
Fosdick and Raftery (2012) recently encountered the problem of inference for a bivariate normal correlation coefficient ρ with known variances. We derive a variance-stabilizing transformation y(ρ) analogous to Fisher's classical z-transformation for the unknown-variance case. Adjusting y for the sample size n produces an improved "confidence-stabilizing" transformation y_n(ρ) that provides more accurate interval estimates for ρ than the known-variance MLE. Interestingly, the z transformation applied to the unknown-but-equal-variance MLE performs well in the known-variance case for smaller values of |ρ|. Both methods are useful for comparing two or more correlation coefficients in the known-variance case.
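The known-variance transformations y(ρ) and y_n(ρ) are the paper's contribution and are not reproduced. For comparison, a sketch of the classical Fisher z interval referenced above (the unknown-variance analogue) follows.

```python
import math
from scipy.stats import norm

def fisher_z_ci(r, n, level=0.95):
    """Classical Fisher z-transform CI for a correlation from a sample of size
    n > 3; the paper's known-variance transformations are not shown."""
    z = math.atanh(r)                      # Fisher z-transformation
    se = 1.0 / math.sqrt(n - 3)            # approximate standard error of z
    q = norm.ppf(0.5 + level / 2)
    return math.tanh(z - q * se), math.tanh(z + q * se)
```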

19.
Consider the problem of estimating a multivariate mean θ (p×1), p > 3, based on a sample x_1, ..., x_n with quadratic loss function. We find an optimal decision rule within the class of James–Stein type decision rules when the underlying distribution is that of a variance mixture of normals and when the norm ||θ|| is known. When the norm is restricted to a known interval, typically no optimal James–Stein type rule exists, but we characterize a minimal complete class within the class of James–Stein type decision rules. We also characterize the subclass of James–Stein type decision rules that dominate the sample mean.
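The optimal rules under variance mixtures and norm restrictions are specific to the paper. For orientation, a sketch of the basic James–Stein shrinkage estimator for a normal mean (the general family within which those rules lie) is shown below, written for a single observation with known variance.

```python
import numpy as np

def james_stein(x, sigma2=1.0):
    """Basic James-Stein estimator of a p-dimensional mean (p >= 3) from a single
    observation x ~ N(theta, sigma2 * I).  The paper's optimal rules under
    variance mixtures and norm constraints are not reproduced here."""
    x = np.asarray(x, dtype=float)
    p = len(x)
    shrink = 1.0 - (p - 2) * sigma2 / np.dot(x, x)   # shrinkage factor
    # A positive-part version would truncate shrink at 0.
    return shrink * x
```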

20.
In 1954 Hodges and Lehmann considered the following problem: given is an i.i.d. normally distributed random sample with variance unknown. Under the null hypothesis the mean is contained in a prescribed interval. Hodges and Lehmann constructed a test similar on the interval. This test is superior in power to the usual auxiliary procedure applied to this problem. Numerical calculations by Hodges and Lehmann indicated that the test is unbiased; however, an analytical proof could not be given. In a recent paper the author proved unbiasedness for levels not too large, the magnitude depending on the sample size. Here the proof is completed by establishing unbiasedness for all levels.
