Similar Literature
20 similar documents found (search time: 15 ms).
1.
The balanced half-sample and jackknife variance estimation techniques are used to estimate the variance of the combined ratio estimate. An empirical sampling study is conducted using computer-generated populations to investigate the variance, bias and mean square error of these variance estimators and results are compared to theoretical results derived elsewhere for the linear case. Results indicate that either the balanced half-sample or jackknife method may be used effectively for estimating the variance of the combined ratio estimate.
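As a rough illustration of the delete-1 jackknife applied to a ratio estimate, the sketch below computes the jackknife variance of a simple (unstratified) ratio R = ȳ/x̄. The data, function name and settings are illustrative and not taken from the study, which also considers balanced half-samples and the stratified combined ratio.

```python
import numpy as np

def jackknife_variance_of_ratio(y, x):
    """Delete-1 jackknife variance estimate for the ratio R = mean(y) / mean(x)."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    n = len(y)
    r_full = y.mean() / x.mean()
    # Leave-one-out ratio estimates
    r_loo = np.array([(y.sum() - y[i]) / (x.sum() - x[i]) for i in range(n)])
    # Jackknife variance: (n - 1)/n times the sum of squared deviations
    return (n - 1) / n * np.sum((r_loo - r_loo.mean()) ** 2), r_full

rng = np.random.default_rng(0)
x = rng.gamma(shape=4.0, scale=2.0, size=50)
y = 1.5 * x + rng.normal(scale=2.0, size=50)   # y roughly proportional to x
var_jack, r_hat = jackknife_variance_of_ratio(y, x)
print(f"ratio estimate {r_hat:.3f}, jackknife variance {var_jack:.5f}")
```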

2.
Bias and variance are evaluated explicitly for the maximum likelihood estimate (MLE) of the drift of a Brownian motion following a symmetric sequential probability ratio test (SPRT). The MLE is shown to be asymptotically efficient when the boundary of the SPRT tends to infinity.

3.
We show that the jackknife technique fails badly when applied to the problem of estimating the variance of a sample quantile. When viewed as a point estimator, the jackknife estimator is known to be inconsistent. We show that the ratio of the jackknife variance estimate to the true variance has an asymptotic Weibull distribution with parameters 1 and 1/2. We also show that if the jackknife variance estimate is used to Studentize the sample quantile, the asymptotic distribution of the resulting Studentized statistic is markedly nonnormal, having infinite mean. This result is in stark contrast with that obtained in simpler problems, such as that of constructing confidence intervals for a mean, where the jackknife-Studentized statistic has an asymptotic standard normal distribution.
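A minimal Monte Carlo sketch of the phenomenon described above: the delete-1 jackknife is applied to the sample median and its variance estimate is compared with the true sampling variance. The sample size, distribution and number of replications are arbitrary choices for illustration.

```python
import numpy as np

def jackknife_variance(stat, sample):
    """Delete-1 jackknife variance estimate of a statistic."""
    n = len(sample)
    loo = np.array([stat(np.delete(sample, i)) for i in range(n)])
    return (n - 1) / n * np.sum((loo - loo.mean()) ** 2)

rng = np.random.default_rng(1)
n, reps = 51, 2000
medians, jack_vars = [], []
for _ in range(reps):
    x = rng.normal(size=n)
    medians.append(np.median(x))
    jack_vars.append(jackknife_variance(np.median, x))

true_var = np.var(medians)            # Monte Carlo approximation of Var(median)
ratios = np.array(jack_vars) / true_var
print(f"true Var(median) ~ {true_var:.4f}")
print(f"jackknife/true ratio: mean {ratios.mean():.2f}, "
      f"median {np.median(ratios):.2f}, sd {ratios.std():.2f}")
# The ratio is highly variable and skewed rather than concentrating near 1,
# in line with the non-degenerate limit described above.
```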

4.
The authors discuss the bias of the estimate of the variance of the overall effect synthesized from individual studies by the variance weighted method. This bias is proven to be negative. Furthermore, the conditions, the likelihood of underestimation and the bias of this conventional estimate are studied under the assumption that the effect estimates follow a normal distribution with a common mean. The likelihood of underestimation is very high (e.g. greater than 85% when the sample sizes in two combined studies are less than 120). Alternative, less biased estimates for the cases with and without homogeneity of the variances are given in order to adjust for the sample size and the variation of the population variance. In addition, the sample-size-weighted method is suggested if the consistency of the sample variances is violated. Finally, a real example is presented to show the differences among the above three estimation methods.
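Assuming the "variance weighted method" refers to the usual inverse-variance (fixed-effect) pooling, the sketch below shows the conventional pooled estimate and its customary variance estimate 1/Σw, which is the quantity whose downward bias is discussed above. The numbers are invented for illustration.

```python
import numpy as np

def inverse_variance_pooled(effects, variances):
    """Inverse-variance weighted pooled estimate and its conventional variance
    estimate 1 / sum(w).  The weights use the *estimated* study variances,
    which is what induces the downward bias discussed above."""
    w = 1.0 / np.asarray(variances, float)
    pooled = np.sum(w * np.asarray(effects, float)) / np.sum(w)
    var_pooled = 1.0 / np.sum(w)
    return pooled, var_pooled

# Illustrative study-level effects and estimated variances (not from the paper)
effects = [0.42, 0.35, 0.58]
variances = [0.040, 0.025, 0.060]
est, var = inverse_variance_pooled(effects, variances)
print(f"pooled effect {est:.3f}, conventional variance {var:.4f}")
```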

5.
Observational studies are increasingly being used in medicine to estimate the effects of treatments or exposures on outcomes. To minimize the potential for confounding when estimating treatment effects, propensity score methods are frequently implemented. Often outcomes are the time to event. While it is common to report the treatment effect as a relative effect, such as the hazard ratio, reporting the effect using an absolute measure of effect is also important. One commonly used absolute measure of effect is the risk difference, the difference between treatment and comparison groups in the probability of an event occurring within a specified duration of follow-up. We first describe methods for point and variance estimation of the risk difference when using weighting or matching based on the propensity score with time-to-event outcomes. Next, we conduct Monte Carlo simulations to compare the relative performance of these methods with respect to bias of the point estimate, accuracy of variance estimates, and coverage of estimated confidence intervals. The results of the simulation generally support the use of weighting methods (untrimmed ATT weights and IPTW) or caliper matching when the prevalence of treatment is low for point estimation. For standard error estimation the simulation results support the use of weighted robust standard errors, bootstrap methods, or matching with a naïve standard error (i.e., the Greenwood method). The methods considered in the article are illustrated using a real-world example in which we estimate the effect of discharge prescribing of statins on patients hospitalized for acute myocardial infarction.
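A minimal sketch of the point-estimation step only, assuming IPTW weights and a weighted Kaplan-Meier estimate of the risk at a fixed horizon. The function names, simulated data and use of the true propensity score are illustrative simplifications; variance estimation (weighted robust standard errors, bootstrap) is not shown.

```python
import numpy as np

def weighted_km_risk(time, event, weights, horizon):
    """Weighted Kaplan-Meier estimate of P(event by `horizon`);
    `event` is 1 for an observed event, 0 for censoring."""
    time, event, w = np.asarray(time, float), np.asarray(event, int), np.asarray(weights, float)
    surv = 1.0
    for t in np.unique(time[(event == 1) & (time <= horizon)]):
        at_risk = w[time >= t].sum()                 # weighted number at risk
        d = w[(time == t) & (event == 1)].sum()      # weighted number of events
        if at_risk > 0:
            surv *= 1.0 - d / at_risk
    return 1.0 - surv                                # risk = 1 - S(horizon)

def iptw_risk_difference(time, event, treated, ps, horizon):
    """Risk difference at `horizon` using inverse-probability-of-treatment weights."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    treated, ps = np.asarray(treated, bool), np.asarray(ps, float)
    w = np.where(treated, 1.0 / ps, 1.0 / (1.0 - ps))
    r1 = weighted_km_risk(time[treated], event[treated], w[treated], horizon)
    r0 = weighted_km_risk(time[~treated], event[~treated], w[~treated], horizon)
    return r1 - r0

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)                          # a single confounder
ps_true = 1.0 / (1.0 + np.exp(-x))              # true propensity score
z = rng.random(n) < ps_true                     # treatment assignment
t_event = rng.exponential(scale=np.exp(-0.5 * z.astype(float) + 0.3 * x))
t_cens = rng.exponential(scale=2.0, size=n)
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(int)
# The true propensity score is plugged in for brevity; in practice it would be
# estimated, e.g. by logistic regression of treatment on the covariates.
print("IPTW risk difference at t = 1:",
      round(iptw_risk_difference(time, event, z, ps_true, horizon=1.0), 3))
```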

6.
We propose replacing the usual Student's-t statistic, which tests for equality of means of two distributions and is used to construct a confidence interval for the difference, by a biweight-“t” statistic. The biweight-“t” is the ratio of the difference of the biweight estimates of location from the two samples to an estimate of the standard error of this difference. Three forms of the denominator are evaluated: weighted variance estimates using both pooled and unpooled scale estimates, and unweighted variance estimates using an unpooled scale estimate. Monte Carlo simulations reveal that the resulting confidence intervals are highly efficient for moderate sample sizes, and that nominal levels are nearly attained, even for extreme percentage points.
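A hedged sketch of one variant of the biweight-“t”, using Tukey's biweight location and the biweight midvariance with conventional tuning constants (c = 6 and c = 9). The paper evaluates several denominators (pooled and unpooled, weighted and unweighted); the version below corresponds only loosely to the unpooled, unweighted form.

```python
import numpy as np

def _u(x, center, c):
    mad = np.median(np.abs(x - np.median(x)))    # median absolute deviation
    return (x - center) / (c * mad)

def biweight_location(x, c=6.0):
    """Tukey biweight estimate of location (one step from the median)."""
    x = np.asarray(x, float)
    m = np.median(x)
    u = _u(x, m, c)
    keep = np.abs(u) < 1
    w = (1 - u[keep] ** 2) ** 2
    return m + np.sum(w * (x[keep] - m)) / np.sum(w)

def biweight_midvariance(x, c=9.0):
    """Biweight midvariance, a robust analogue of the sample variance."""
    x = np.asarray(x, float)
    m = np.median(x)
    u = _u(x, m, c)
    keep = np.abs(u) < 1
    num = len(x) * np.sum((x[keep] - m) ** 2 * (1 - u[keep] ** 2) ** 4)
    den = np.sum((1 - u[keep] ** 2) * (1 - 5 * u[keep] ** 2)) ** 2
    return num / den

def biweight_t(x, y):
    """Biweight-"t": difference of biweight locations over an unpooled,
    unweighted standard-error estimate (one of several possible denominators)."""
    se = np.sqrt(biweight_midvariance(x) / len(x) + biweight_midvariance(y) / len(y))
    return (biweight_location(x) - biweight_location(y)) / se

rng = np.random.default_rng(3)
a = rng.standard_t(df=3, size=40) + 0.5   # heavy-tailed samples
b = rng.standard_t(df=3, size=40)
print("biweight-t statistic:", round(biweight_t(a, b), 3))
```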

7.
One of the most important steps in the design of a pharmaceutical clinical trial is the estimation of the sample size. For a superiority trial the sample size formula (to achieve a stated power) would be based on a given clinically meaningful difference and a value for the population variance. The formula is typically used as though this population variance is known, whereas in reality it is unknown and is replaced by an estimate with its associated uncertainty. The variance estimate would be derived from an earlier similarly designed study (or an overall estimate from several previous studies) and its precision would depend on its degrees of freedom. This paper provides a solution for the calculation of sample sizes that allows for the imprecision in the estimate of the sample variance, and shows that traditional formulae give sample sizes that are too small because they do not allow for this uncertainty, the deficiency being more acute with fewer degrees of freedom. It is recommended that the methodology described in this paper should be used whenever the sample variance has fewer than 200 degrees of freedom.
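To make the issue concrete, the sketch below contrasts the traditional two-group formula, which treats the variance as known, with one simple way of allowing for its imprecision (replacing s² by a chi-square-based upper confidence limit). The adjustment shown is an illustrative device, not necessarily the correction derived in the paper.

```python
import numpy as np
from scipy import stats

def n_per_group(delta, sigma2, alpha=0.05, power=0.9):
    """Standard two-group sample size, treating sigma2 as known."""
    za = stats.norm.ppf(1 - alpha / 2)
    zb = stats.norm.ppf(power)
    return int(np.ceil(2 * (za + zb) ** 2 * sigma2 / delta ** 2))

def n_per_group_variance_uncertain(delta, s2, df, alpha=0.05, power=0.9, gamma=0.8):
    """One simple way to allow for imprecision in the variance estimate:
    replace s2 by an upper confidence limit based on its chi-square
    distribution with `df` degrees of freedom (an illustrative adjustment)."""
    s2_upper = s2 * df / stats.chi2.ppf(1 - gamma, df)
    return n_per_group(delta, s2_upper, alpha, power)

# Illustrative numbers: pilot variance estimate 4.0 on 20 degrees of freedom,
# clinically meaningful difference 1.0
print(n_per_group(1.0, 4.0))                             # naive: variance treated as known
print(n_per_group_variance_uncertain(1.0, 4.0, df=20))   # larger, reflecting the uncertainty
```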

8.
Heteroscedasticity commonly arises when a linear regression model is applied to real-world problems. Accurately estimating the variance function of the error term in a heteroscedastic linear regression model is therefore important for obtaining efficient estimates of the regression parameters and making valid statistical inferences. A method for estimating the variance function of heteroscedastic linear regression models is proposed in this article based on the variance-reduced local linear smoothing technique. Simulations and comparisons with another method are conducted to assess the performance of the proposed method. The results demonstrate that the proposed method can accurately estimate the variance function and therefore produce more efficient estimates of the regression parameters.
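A minimal sketch of residual-based variance-function estimation: fit the mean by OLS, then smooth the squared residuals with a plain local linear smoother. The variance-reduced refinement proposed in the article is not reproduced here; the bandwidth, kernel and simulated model are arbitrary.

```python
import numpy as np

def local_linear(x_grid, x, y, h):
    """Plain local linear smoother with a Gaussian kernel (bandwidth h)."""
    fitted = np.empty_like(x_grid, dtype=float)
    for j, x0 in enumerate(x_grid):
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)
        X = np.column_stack([np.ones_like(x), x - x0])
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)
        fitted[j] = beta[0]                      # local intercept = fit at x0
    return fitted

# Heteroscedastic linear model: y = 1 + 2x + sigma(x) * eps with sigma(x) = 0.5 + x
rng = np.random.default_rng(4)
n = 400
x = rng.uniform(0, 1, n)
y = 1 + 2 * x + (0.5 + x) * rng.normal(size=n)

# Step 1: fit the mean model by OLS and take squared residuals
X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
r2 = (y - X @ beta_hat) ** 2

# Step 2: smooth the squared residuals to estimate sigma^2(x)
grid = np.linspace(0.05, 0.95, 10)
sigma2_hat = local_linear(grid, x, r2, h=0.15)
print(np.round(sigma2_hat, 2))        # compare with (0.5 + grid) ** 2
```

A weighted (generalized) least squares refit with weights 1/σ̂²(xᵢ) would then give the more efficient parameter estimates mentioned above.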

9.
We have observations from a t distribution with unknown mean, variance, and degrees of freedom, each of which we wish to estimate. The major problem lies in estimating the degrees of freedom. We show that a relatively efficient yet very simple estimator is a function of the ratio of percentile estimates. We derive the appropriate estimator, provide equations for transformation and standard errors, contrast it with other estimators, and give examples.
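An illustrative version of a percentile-ratio estimator of the degrees of freedom: match the sample ratio of two symmetric interquantile ranges, which is free of the unknown location and scale, to its theoretical value under the t distribution. The particular quantile levels and root-finding interval are assumptions, not the authors' exact estimator.

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

def df_from_quantile_ratio(x, p_outer=0.95, p_inner=0.75):
    """Estimate the t degrees of freedom by matching the sample ratio
    (q(p_outer) - q(1 - p_outer)) / (q(p_inner) - q(1 - p_inner))
    to its theoretical value under a t distribution."""
    q = np.quantile(x, [1 - p_outer, 1 - p_inner, p_inner, p_outer])
    r_obs = (q[3] - q[0]) / (q[2] - q[1])
    def gap(df):
        return stats.t.ppf(p_outer, df) / stats.t.ppf(p_inner, df) - r_obs
    # The theoretical ratio decreases in df towards the normal limit;
    # bracket the root over a wide range.
    return brentq(gap, 0.5, 500.0)

rng = np.random.default_rng(5)
x = 3.0 + 2.0 * rng.standard_t(df=5, size=5000)   # location 3, scale 2, df 5
print(round(df_from_quantile_ratio(x), 2))        # should land near 5
```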

10.
The increase in the variance of the estimate of treatment effect which results from omitting a dichotomous or continuous covariate is quantified as a function of censoring. The efficiency of not adjusting for a covariate is measured by the ratio of the variances obtained with and without adjustment for the covariate. The variance is derived using the Weibull proportional hazards model. Under random censoring, the efficiency of not adjusting for a continuous covariate is an increasing function of the percentage of censored observations.

11.
Two consistent estimators of the non-null variance of the Wilcoxon-Mann-Whitney statistic applied to grouped ordered data are considered. The first is based on U-statistics and the second is obtained by the delta method. Some examples are given to demonstrate the extent of error when a null variance estimate is used for constructing confidence intervals. It appears that the two consistent estimates are very close, but may both be distinguishably larger or smaller than the null variance estimate.
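A sketch of a U-statistic (placement-based) variance estimate for the Wilcoxon-Mann-Whitney functional θ = P(X > Y) + ½P(X = Y), with ties handled by the ½ convention so that grouped ordered data are covered. This is one of several closely related consistent estimators and is not claimed to match either estimator in the paper exactly.

```python
import numpy as np

def wmw_theta_and_variance(x, y):
    """Point estimate of theta = P(X > Y) + 0.5 * P(X = Y) and a consistent
    placement-based (U-statistic) estimate of its non-null variance."""
    x = np.asarray(x, float)[:, None]
    y = np.asarray(y, float)[None, :]
    h = (x > y).astype(float) + 0.5 * (x == y)   # m x n kernel matrix
    m, n = h.shape
    theta = h.mean()
    v10 = h.mean(axis=1)                          # placements of the x's
    v01 = h.mean(axis=0)                          # placements of the y's
    var = np.var(v10, ddof=1) / m + np.var(v01, ddof=1) / n
    return theta, var

# Grouped ordered data: scores on a 4-point scale in two groups (made-up counts)
x = np.repeat([1, 2, 3, 4], [5, 10, 15, 10])
y = np.repeat([1, 2, 3, 4], [12, 14, 9, 5])
theta, var = wmw_theta_and_variance(x, y)
se = np.sqrt(var)
print(f"theta = {theta:.3f}, 95% CI ({theta - 1.96 * se:.3f}, {theta + 1.96 * se:.3f})")
```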

12.
In planning a study, the choice of sample size may depend on a variance value based on speculation or obtained from an earlier study. Scientists may wish to use an internal pilot design to protect themselves against an incorrect choice of variance. Such a design involves collecting a portion of the originally planned sample and using it to produce a new variance estimate. This leads to a new power analysis and to increasing or decreasing the sample size. For any general linear univariate model, with fixed predictors and Gaussian errors, we prove that the uncorrected fixed sample F-statistic is the likelihood ratio test statistic. However, the statistic does not follow an F distribution, and ignoring the discrepancy may inflate test size. We derive and evaluate properties of the components of the likelihood ratio test statistic in order to characterize and quantify the bias. Most notably, the fixed sample size variance estimate becomes biased downward. The bias may inflate test size for any hypothesis test, even if the parameter being tested was not involved in the sample size re-estimation. Furthermore, using fixed sample size methods may create biased confidence intervals for secondary parameters and the variance estimate.
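A schematic one-sample Monte Carlo illustration of the downward bias of the final variance estimate under an internal pilot design; the paper treats the general linear univariate model and the likelihood ratio test, whereas the settings below are arbitrary.

```python
import numpy as np
from scipy import stats

def required_n(s2, delta=1.0, alpha=0.05, power=0.9):
    """Sample size from the usual normal-approximation formula."""
    za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
    return max(4, int(np.ceil(2 * (za + zb) ** 2 * s2 / delta ** 2)))

rng = np.random.default_rng(6)
sigma2, n_pilot, reps = 1.0, 10, 5000       # illustrative settings
final_s2 = []
for _ in range(reps):
    # Internal pilot: estimate the variance, re-compute the sample size,
    # then collect the remaining observations.
    pilot = rng.normal(scale=np.sqrt(sigma2), size=n_pilot)
    n_final = max(n_pilot, required_n(pilot.var(ddof=1)))
    rest = rng.normal(scale=np.sqrt(sigma2), size=n_final - n_pilot)
    full = np.concatenate([pilot, rest])
    final_s2.append(full.var(ddof=1))       # naive "fixed sample size" estimate

print(f"true variance {sigma2:.2f}, mean of final estimates {np.mean(final_s2):.3f}")
# The average final estimate tends to fall below the true variance:
# re-estimating n from the pilot biases the naive variance estimate downward.
```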

13.
The present paper studies the validity of inferential procedures which follow the Taguchi method under saturated designs. The distribution of the signal-to-noise (S/N) ratio Y is investigated for normal parent distributions. We further investigate the distribution of orthonormal contrasts of such S/N variables. Finally, we discuss and provide critical values for mod-F tests of significance of parameters, when the k smallest SS values are pooled to serve as the error variance estimate.

14.
Simple heterogeneity variance estimation for meta-analysis
Summary.  A simple method of estimating the heterogeneity variance in a random-effects model for meta-analysis is proposed. The estimator that is presented is simple and easy to calculate and has improved bias compared with the most common estimator used in random-effects meta-analysis, particularly when the heterogeneity variance is moderate to large. In addition, it always yields a non-negative estimate of the heterogeneity variance, unlike some existing estimators. We find that random-effects inference about the overall effect based on this heterogeneity variance estimator is more reliable than inference using the common estimator, in terms of coverage probability for an interval estimate.
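For context, the "most common estimator" referred to above is typically the DerSimonian-Laird moment estimator, sketched below as the baseline comparator; the proposed estimator itself is not specified in the abstract and so is not reproduced here. The example numbers are invented.

```python
import numpy as np

def dersimonian_laird_tau2(effects, variances):
    """DerSimonian-Laird moment estimator of the heterogeneity variance tau^2,
    truncated at zero (the usual default in random-effects meta-analysis)."""
    y = np.asarray(effects, float)
    w = 1.0 / np.asarray(variances, float)
    mu_fe = np.sum(w * y) / np.sum(w)            # fixed-effect pooled mean
    q = np.sum(w * (y - mu_fe) ** 2)             # Cochran's Q statistic
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (k - 1)) / c)

# Illustrative study effects (e.g. log odds ratios) and their variances
effects = [0.10, 0.45, 0.30, 0.70, 0.05]
variances = [0.04, 0.05, 0.03, 0.08, 0.06]
print(f"DL heterogeneity variance estimate: {dersimonian_laird_tau2(effects, variances):.4f}")
```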

15.
Exact confidence intervals for variances rely on normal distribution assumptions. Alternatively, large-sample confidence intervals for the variance can be attained if one estimates the kurtosis of the underlying distribution. The method used to estimate the kurtosis has a direct impact on the performance of the interval and thus the quality of statistical inferences. In this paper the author considers a number of kurtosis estimators combined with large-sample theory to construct approximate confidence intervals for the variance. In addition, a nonparametric bootstrap resampling procedure is used to build bootstrap confidence intervals for the variance. Simulated coverage probabilities using different confidence interval methods are computed for a variety of sample sizes and distributions. A modification to a conventional estimator of the kurtosis, in conjunction with adjustments to the mean and variance of the asymptotic distribution of a function of the sample variance, improves the resulting coverage values for leptokurtically distributed populations.
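A sketch of the two basic ingredients discussed above: a large-sample interval built from Var(s²) = (μ₄ − σ⁴(n−3)/(n−1))/n with the moments replaced by sample estimates, and a nonparametric percentile bootstrap interval. The modified kurtosis estimator and the mean/variance adjustments studied in the paper are not reproduced.

```python
import numpy as np
from scipy import stats

def variance_ci_kurtosis(x, alpha=0.05):
    """Large-sample CI for the variance using the estimated fourth moment."""
    x = np.asarray(x, float)
    n = len(x)
    s2 = x.var(ddof=1)
    m4 = np.mean((x - x.mean()) ** 4)
    se = np.sqrt((m4 - s2 ** 2 * (n - 3) / (n - 1)) / n)
    z = stats.norm.ppf(1 - alpha / 2)
    return s2 - z * se, s2 + z * se

def variance_ci_bootstrap(x, alpha=0.05, b=5000, seed=0):
    """Nonparametric percentile bootstrap CI for the variance."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    boot = np.array([rng.choice(x, size=len(x), replace=True).var(ddof=1)
                     for _ in range(b)])
    return tuple(np.quantile(boot, [alpha / 2, 1 - alpha / 2]))

rng = np.random.default_rng(7)
x = rng.standard_t(df=6, size=200)      # leptokurtic sample, true variance 1.5
print("kurtosis-based CI:", np.round(variance_ci_kurtosis(x), 3))
print("bootstrap CI:     ", np.round(variance_ci_bootstrap(x), 3))
```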

16.
The efficiency of schemes for sampling an alternating Poisson process (0,1 observations) is evaluated by the inverse ratio of the variance of the proportion estimate, p, to the binomial variance. The variance ratio presented by D.R. Cox (in Renewal Theory) for fixed-interval sampling is generalized to accommodate random sampling and random sampling after a time delay equal to a fixed proportion, γ, of the mean time between observations, δ. The result is a sampling design tool that quantifies the effect of various spacings between observations and of fixed vs. random sampling. Direct application is made to the field of work sampling.

17.
The effects of heteroscedasticity on the mean and variance of the F ratio and on the power of the F-test in the unbalanced one-way random model are studied numerically. The computed results reveal that heteroscedasticity and unbalancedness have combined effects. The mean and variance of F, as well as the power of the F-test, increase with inequality of the error variances in balanced situations and in those unbalanced situations where the more variable groups are larger. The effects are serious when the more variable groups are smaller.

18.
We study a factor analysis model with two normally distributed observed variables and one factor. In the case when the errors have equal variance, the maximum likelihood estimate of the factor loading is given in closed form. Exact and approximate distributions of the maximum likelihood estimate are considered. The exact distribution function is given in a complex form that involves the incomplete Beta function. Approximations to the distribution function are given for the cases of large sample sizes and small error variances. The accuracy of the approximations is discussed.

19.
Using model-based methods to study the relationships among multiple variables in the presence of measurement error is currently the prevailing approach internationally, but it is not well suited to estimating individual indicators. This paper therefore incorporates measurement-error information into the design of the estimator and derives a quantitative measure of the measurement-error variance, thereby obtaining estimates of the variance of each stratum mean in stratified sampling when measurement error is present. An empirical analysis is carried out using per-capita consumption expenditure data from the 2007 urban household surveys of three cities (counties) in Guangdong Province; it quantifies the magnitude and influence of measurement error on the estimated variances of the stratum means and corrects the estimates obtained when measurement error is ignored.

20.
A new, fully data-driven bandwidth selector with a double smoothing (DS) bias term and a data-driven variance estimator is developed following the bootstrap idea. The data-driven variance estimation does not involve any additional bandwidth selection. The proposed bandwidth selector converges faster than a plug-in one thanks to the DS bias estimate, while the data-driven variance estimate clearly improves its finite-sample performance and stabilizes it. Asymptotic results for the proposals are obtained. A comparative simulation study shows the overall gains and the gains obtained by improving either the bias term or the variance estimate, respectively. It is shown that the use of a good variance estimator is more important when the sample size is relatively small.
