Similar Documents

20 similar documents retrieved.
1.
ABSTRACT

The performances of six confidence intervals for estimating the arithmetic mean of a lognormal distribution are compared using simulated data. The first interval considered is based on an exact method and is recommended in U.S. EPA guidance documents for calculating upper confidence limits for contamination data. Two intervals are based on asymptotic properties due to the Central Limit Theorem, and the other three are based on transformations and maximum likelihood estimation. The effects of departures from lognormality on the performance of these intervals are also investigated. The gamma distribution is considered to represent departures from the lognormal distribution. The average width and coverage of each confidence interval are reported for varying mean, variance, and sample size. In the lognormal case, the exact interval gives good coverage, but for small sample sizes and large variances the confidence intervals are too wide. In these cases, an approximation that incorporates sampling variability of the sample variance tends to perform better. When the underlying distribution is a gamma distribution, the intervals based upon the Central Limit Theorem tend to perform better than those based upon lognormal assumptions.
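A minimal Monte Carlo sketch of the comparison described above, assuming Python with NumPy. The Cox-style interval below stands in for an approximation that accounts for the sampling variability of the sample variance of the logs; it is an illustration, not the exact EPA-recommended method, and all function names are my own.

```python
import numpy as np

Z95 = 1.96  # normal 97.5% quantile, for two-sided 95% intervals

def clt_interval(x):
    # Naive large-sample (CLT) interval for the mean on the raw scale.
    n = len(x)
    se = x.std(ddof=1) / np.sqrt(n)
    return x.mean() - Z95 * se, x.mean() + Z95 * se

def cox_interval(x):
    # Cox-style interval for the lognormal mean exp(mu + sigma^2/2);
    # its standard error term includes the sampling variability of s^2.
    n = len(x)
    y = np.log(x)
    ybar, s2 = y.mean(), y.var(ddof=1)
    centre = ybar + s2 / 2
    se = np.sqrt(s2 / n + s2 ** 2 / (2 * (n - 1)))
    return np.exp(centre - Z95 * se), np.exp(centre + Z95 * se)

def coverage(interval, mu=0.0, sigma=1.0, n=25, reps=2000, seed=1):
    # Monte Carlo estimate of the coverage probability of `interval`.
    rng = np.random.default_rng(seed)
    true_mean = np.exp(mu + sigma ** 2 / 2)
    hits = sum(
        lo <= true_mean <= hi
        for lo, hi in (interval(rng.lognormal(mu, sigma, n)) for _ in range(reps))
    )
    return hits / reps
```

With a skewed lognormal (sigma = 1) and a small sample (n = 25), the naive CLT interval typically undercovers relative to the log-scale interval, in line with the abstract's findings.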

2.
Abstract.  The large deviation modified likelihood ratio statistic is studied for testing whether a variance component equals a specified value. Formulas are presented for the general balanced case, whereas in the unbalanced case only the one-way random effects model is studied. Simulation studies show that the normal approximation to the large deviation modified likelihood ratio statistic gives confidence intervals for variance components with coverage probabilities very close to the nominal confidence coefficient.

3.
Two-stage procedures are introduced to control the width and coverage (validity) of confidence intervals for the estimation of the mean, the between groups variance component and certain ratios of the variance components in one-way random effects models. The procedures use the pilot sample data to estimate an “optimal” group size and then proceed to determine the number of groups by a stopping rule. Such sampling plans give rise to unbalanced data, which are consequently analyzed by the harmonic mean method. Several asymptotic results concerning the proposed procedures are given along with simulation results to assess their performance in moderate sample size situations. The proposed procedures were found to effectively control the width and probability of coverage of the resulting confidence intervals in all cases and were also found to be robust in the presence of missing observations. From a practical point of view, the procedures are illustrated using a real data set and it is shown that the resulting unbalanced designs tend to require smaller sample sizes than is needed in a corresponding balanced design where the group size is arbitrarily pre-specified.

4.
In scenarios where the variance of a response variable can be attributed to two sources of variation, a confidence interval for a ratio of variance components gives information about the relative importance of the two sources. For example, if measurements taken from different laboratories are nine times more variable than the measurements taken from within the laboratories, then 90% of the variance in the responses is due to the variability amongst the laboratories and 10% of the variance in the responses is due to the variability within the laboratories. Assuming normally distributed sources of variation, confidence intervals for variance components are readily available. In this paper, however, simulation studies are conducted to evaluate the performance of confidence intervals under non-normal distribution assumptions. Confidence intervals based on the pivotal quantity method, fiducial inference, and the large-sample properties of the restricted maximum likelihood (REML) estimator are considered. Simulation results and an empirical example suggest that the REML-based confidence interval is favored over the other two procedures in the unbalanced one-way random effects model.
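For the balanced normal-theory special case, the pivotal quantity method mentioned above has a closed form: the mean-square ratio MSB/MSE, divided by (1 + n·rho), follows an F distribution. The sketch below assumes Python with NumPy/SciPy and is an illustration of the pivot only; the paper's REML and fiducial intervals for unbalanced data are not reproduced here.

```python
import numpy as np
from scipy import stats

def variance_ratio_ci(y, alpha=0.05):
    """Exact pivotal CI for rho = sigma_a^2 / sigma_e^2 in a balanced
    one-way random effects model; y has shape (groups, per_group)."""
    a, n = y.shape
    group_means = y.mean(axis=1)
    msb = n * np.sum((group_means - y.mean()) ** 2) / (a - 1)
    mse = np.sum((y - group_means[:, None]) ** 2) / (a * (n - 1))
    f_obs = msb / mse
    # (MSB/MSE) / (1 + n*rho) ~ F(a-1, a(n-1)) under normality.
    f_lo = stats.f.ppf(alpha / 2, a - 1, a * (n - 1))
    f_hi = stats.f.ppf(1 - alpha / 2, a - 1, a * (n - 1))
    lower = max((f_obs / f_hi - 1) / n, 0.0)
    upper = max((f_obs / f_lo - 1) / n, 0.0)
    return lower, upper
```

With sigma_a = 3 and sigma_e = 1 the true ratio is 9, matching the 90%/10% laboratory example in the abstract.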

5.
Closed-form confidence intervals on linear combinations of variance components have been developed generically for balanced data and studied mainly for one-way and two-way random effects analysis of variance models. The Satterthwaite approach is easily generalized to unbalanced data and can be modified to increase its coverage probability. These approaches are applied to measures of assay precision in combination with (restricted) maximum likelihood and Henderson III Type 1 and Type 3 estimation. Simulations of interlaboratory studies with unbalanced data and small sample sizes do not show superiority of any of the possible combinations of estimation methods and Satterthwaite approaches on three measures of assay precision. However, the modified Satterthwaite approach with Henderson III Type 3 estimation is often preferred over the other combinations.
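The classical Satterthwaite approach referred to above approximates a non-negative linear combination of independent mean squares by a scaled chi-squared variable with an effective degrees of freedom. A hedged Python/SciPy sketch of the unmodified version (function name illustrative; the paper's modified approach and Henderson-type estimators are not implemented):

```python
import numpy as np
from scipy import stats

def satterthwaite_ci(coefs, mean_squares, dfs, alpha=0.05):
    """Satterthwaite CI for theta = sum_i c_i * MS_i (non-negative c_i),
    treating theta_hat as approximately a scaled chi-squared variable."""
    coefs = np.asarray(coefs, float)
    ms = np.asarray(mean_squares, float)
    dfs = np.asarray(dfs, float)
    theta = np.sum(coefs * ms)
    # Effective degrees of freedom by moment matching.
    nu = theta ** 2 / np.sum((coefs * ms) ** 2 / dfs)
    lower = nu * theta / stats.chi2.ppf(1 - alpha / 2, nu)
    upper = nu * theta / stats.chi2.ppf(alpha / 2, nu)
    return lower, upper, nu
```

The approximation is best when all coefficients are non-negative (e.g. the total variance sigma_a^2 + sigma_e^2 written as a positive combination of MSB and MSE); combinations with negative coefficients, as arise for sigma_a^2 alone, are exactly where the modifications studied in the paper matter.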

6.
Abstract.  The plug-in solution is usually not entirely adequate for computing prediction intervals, as their coverage probability may differ substantially from the nominal value. Prediction intervals with improved coverage probability can be defined by adjusting the plug-in ones, using rather complicated asymptotic procedures or suitable simulation techniques. Other approaches are based on the concept of predictive likelihood for a future random variable. The contribution of this paper is the definition of a relatively simple predictive distribution function giving improved prediction intervals. This distribution function is specified as a first-order unbiased modification of the plug-in predictive distribution function based on the constrained maximum likelihood estimator. Applications of the results to the Gaussian and the generalized extreme-value distributions are presented.

7.
The Hartley-Rao-Cochran sampling design is an unequal probability sampling design which can be used to select samples from finite populations. We propose to adjust the empirical likelihood approach for the Hartley-Rao-Cochran sampling design. The proposed approach intrinsically incorporates sampling weights and auxiliary information, and allows for large sampling fractions. It can be used to construct confidence intervals. In a simulation study, we show that the coverage may be better for the empirical likelihood confidence interval than for standard confidence intervals based on variance estimates. The proposed approach is simple to implement and less computer intensive than the bootstrap. The proposed confidence interval does not rely on re-sampling, linearization, variance estimation, design effects or joint inclusion probabilities.

8.
Issues of public policy are typically decided by non‐specialists who are increasingly informed by statistical methods. In order to be influential, inferential techniques must be widely understood and accepted. This motivates the author to propose likelihood‐based methods that prove relatively insensitive to the choice of underlying distribution because they exploit a peculiarly stable relation between two standard errors and a 95% coverage probability. The author also notes that bootstrap and jackknife estimates of variance can sometimes be strongly biased. In fact, symbolic computations in R suggest that they are reliable only for statistics that are well approximated by averages whose distributions are roughly symmetric. The author thus proposes to transform the classical likelihood ratio into a statistic whose variance can be estimated robustly. He shows that the signed root of the log‐likelihood is well approximated by an average with a roughly symmetric distribution. This leads to Cox‐Tukey intervals for a Student‐like statistic and to simple confidence intervals for most models used in public policy.
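The "two standard errors give roughly 95% coverage" stability that the author exploits is easy to check by simulation. This Python/NumPy sketch (illustrative only, not the author's construction) estimates the coverage of mean ± 2·SE for an arbitrary sampler:

```python
import numpy as np

def two_se_coverage(sampler, true_mean, n=40, reps=3000, seed=7):
    # Fraction of simulated samples for which the interval
    # mean +/- 2 * (sample standard error) covers the true mean.
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x = sampler(rng, n)
        se = x.std(ddof=1) / np.sqrt(n)
        hits += abs(x.mean() - true_mean) <= 2 * se
    return hits / reps
```

Running it for normal, uniform and even moderately skewed exponential data gives coverages clustered near 0.95, illustrating the stability the abstract describes.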

9.
We consider nonparametric interval estimation for population quantiles based on unbalanced ranked set samples. We derive the large-sample distribution of the empirical log-likelihood ratio statistic for the quantiles. Approximate intervals for the quantiles are obtained by inverting the likelihood ratio statistic. The performance of the empirical likelihood interval is investigated and compared with the performance of intervals based on the ranked set sample order statistics.

10.
A procedure for the construction of exact simultaneous confidence intervals on functions of the fixed-effects parameters and on functions of variance components in an unbalanced, mixed, two-fold nested classification is introduced. The type of model considered in this paper enables the construction of such intervals to be based on the corresponding ANOVA table using its mean square ratios.

11.
The inverse hypergeometric distribution is of interest in applications of inverse sampling without replacement from a finite population where a binary observation is made on each sampling unit. Thus, sampling is performed by randomly choosing units sequentially one at a time until a specified number of one of the two types is selected for the sample. Assuming the total number of units in the population is known but the number of each type is not, we consider the problem of estimating the number of units of the type of interest. We use the Delta method to develop approximations for the variance of three parameter estimators. We then propose three large sample confidence intervals for the parameter. Based on these results, we selected a sampling of parameter values for the inverse hypergeometric distribution to empirically investigate the performance of these estimators. We evaluate their performance in terms of expected probability of parameter coverage and confidence interval length, calculated as means of possible outcomes weighted by the appropriate outcome probabilities for each parameter value considered. The unbiased estimator of the parameter is the preferred estimator relative to the maximum likelihood estimator and an estimator based on a negative binomial approximation, as evidenced by empirical estimates of closeness to the true parameter value. Confidence intervals based on the unbiased estimator tend to be shorter than the two competitors because of its relatively small variance, but at a slight cost in terms of coverage probability.
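The inverse sampling scheme itself is straightforward to simulate. The sketch below (Python/NumPy; an illustration of the sampling design, not the paper's estimators) draws units without replacement until k type-A units appear, and checks the empirical mean number of draws against the negative hypergeometric mean k(N+1)/(M+1):

```python
import numpy as np

def inverse_sample_draws(rng, N, M, k):
    """Number of draws needed, sampling without replacement from a
    population of N units containing M of type A, to obtain k type-A units."""
    pop = np.zeros(N, dtype=int)
    pop[:M] = 1                      # mark the M type-A units
    rng.shuffle(pop)                 # random draw order without replacement
    return int(np.argmax(np.cumsum(pop) == k)) + 1

rng = np.random.default_rng(3)
N, M, k = 50, 20, 5
draws = np.array([inverse_sample_draws(rng, N, M, k) for _ in range(5000)])
# Negative hypergeometric mean: E[draws] = k * (N + 1) / (M + 1).
expected = k * (N + 1) / (M + 1)
```

Inverting the mean relation suggests moment-style estimates of M from the observed number of draws, which is the flavor of estimator the abstract compares; the exact unbiased estimator studied in the paper is not reproduced here.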

12.
The problem of estimating the intraclass correlation when the sampling design is unbalanced is discussed. The method of moments is used to derive an approximation to the distribution of the estimate of the intraclass correlation obtained by the variance components approach. The maximum likelihood estimator is also presented, along with a simple procedure due to Richard (1961), for numerically maximizing a likelihood function of several parameters. Finally, the issue of optimal study design is considered for both balanced and unbalanced situations. For given power we determine the number of sets required to detect different values of the intraclass correlation.

13.
Much research has been conducted to develop confidence intervals on linear combinations and ratios of variance components in balanced and unbalanced random models. This paper first presents confidence intervals on functions of variance components in balanced designs. These results assume that classical analysis of variance sums of squares are independent and have exact scaled chi-squared distributions. In unbalanced designs, one or both of these assumptions are violated, and modifications to the balanced-model intervals are required. We report results of some recent work that examines various modifications for some particular unbalanced designs.
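In the balanced case, the exact scaled chi-squared assumption yields closed-form intervals directly. For instance, an exact interval for the error variance sigma_e^2 in a balanced one-way layout follows from SSE / sigma_e^2 ~ chi-squared (Python/SciPy sketch, illustrative; the unbalanced-design modifications discussed above are not implemented):

```python
import numpy as np
from scipy import stats

def error_variance_ci(y, alpha=0.05):
    """Exact CI for sigma_e^2 in a balanced one-way layout, using
    SSE / sigma_e^2 ~ chi-squared with a*(n-1) degrees of freedom."""
    a, n = y.shape
    sse = np.sum((y - y.mean(axis=1, keepdims=True)) ** 2)
    df = a * (n - 1)
    lower = sse / stats.chi2.ppf(1 - alpha / 2, df)
    upper = sse / stats.chi2.ppf(alpha / 2, df)
    return lower, upper
```

In unbalanced designs the sums of squares for other components lose their exact scaled chi-squared distributions, which is precisely where the modifications surveyed in the paper come in.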

14.
Azzalini (Scand J Stat 12:171–178, 1985) provided a methodology to introduce skewness in a normal distribution. Using the same method, the skew logistic distribution can be easily obtained by introducing skewness to the logistic distribution. For the skew logistic distribution, the likelihood equations do not provide explicit solutions for the location and scale parameters. We present a simple method of deriving explicit estimators by approximating the likelihood equations appropriately. We examine numerically the bias and variance of these estimators and show that they are as efficient as the maximum likelihood estimators (MLEs). The coverage probabilities of the pivotal quantities (for location and scale parameters) based on asymptotic normality are shown to be unsatisfactory, especially when the effective sample size is small. To improve the coverage probabilities and for constructing confidence intervals, we suggest the use of simulated percentage points. Finally, we present a numerical example to illustrate the methods of inference developed here.

15.
The authors show how an adjusted pseudo‐empirical likelihood ratio statistic that is asymptotically distributed as a chi‐square random variable can be used to construct confidence intervals for a finite population mean or a finite population distribution function from complex survey samples. They consider both non‐stratified and stratified sampling designs, with or without auxiliary information. They examine the behaviour of estimates of the mean and the distribution function at specific points using simulations calling on the Rao‐Sampford method of unequal probability sampling without replacement. They conclude that the pseudo‐empirical likelihood ratio confidence intervals are superior to those based on the normal approximation, whether in terms of coverage probability, tail error rates or average length of the intervals.

16.
Summary.  Log-linear models for multiway contingency tables where one variable is subject to non-ignorable non-response will often yield boundary solutions, with the probability of non-respondents being classified in some cells of the table estimated as 0. The paper considers the effect of this non-standard behaviour on two methods of interval estimation based on the distribution of the maximum likelihood estimator. The first method relies on the estimator being approximately normally distributed with variance equal to the inverse of the information matrix. It is shown that the information matrix is singular for boundary solutions, but intervals can be calculated after a simple transformation. For the second method, based on the bootstrap, asymptotic results suggest that the coverage properties may be poor for boundary solutions. Both methods are compared with profile likelihood intervals in a simulation study based on data from the British General Election Panel Study. The results of this study indicate that all three methods perform poorly for a parameter of the non-response model, whereas they all perform well for a parameter of the margin model, irrespective of whether or not there is a boundary solution.

17.
Exact confidence interval estimation for accelerated life regression models with censored smallest extreme value (or Weibull) data is often impractical. This paper evaluates the accuracy of approximate confidence intervals based on the asymptotic normality of the maximum likelihood estimator, the asymptotic chi-squared distribution of the likelihood ratio statistic, mean and variance correction to the likelihood ratio statistic, and the so-called Bartlett correction to the likelihood ratio statistic. The Monte Carlo evaluations under various degrees of time censoring show that uncorrected likelihood ratio intervals are very accurate in situations with heavy censoring. The benefits of mean and variance correction to the likelihood ratio statistic are realized only with light or no censoring. Bartlett correction tends to result in conservative intervals. Intervals based on the asymptotic normality of maximum likelihood estimators are anticonservative and should be used with much caution.
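As a simplified illustration of Wald versus likelihood ratio intervals with censored lifetime data, the sketch below uses the exponential special case of the Weibull under Type-I (time) censoring, assuming Python with NumPy/SciPy. The corrections studied in the paper are not implemented, and the function name is my own.

```python
import numpy as np
from scipy import optimize, stats

def exp_censored_intervals(times, events, alpha=0.05):
    """Wald and likelihood ratio intervals for the exponential rate with
    right-censored data: loglik(lam) = d*log(lam) - lam*T, where d is the
    number of observed failures and T the total time on test."""
    d = int(np.sum(events))
    T = float(np.sum(times))
    lam_hat = d / T
    z = stats.norm.ppf(1 - alpha / 2)
    wald = (lam_hat - z * lam_hat / np.sqrt(d),
            lam_hat + z * lam_hat / np.sqrt(d))

    cutoff = stats.chi2.ppf(1 - alpha, 1)
    def dev(lam):
        # Deviance 2*(loglik(lam_hat) - loglik(lam)) minus the chi2 cutoff.
        return 2 * (d * np.log(lam_hat / lam) - (lam_hat - lam) * T) - cutoff
    lr_lo = optimize.brentq(dev, lam_hat * 1e-3, lam_hat)
    lr_hi = optimize.brentq(dev, lam_hat, lam_hat * 10)
    return wald, (lr_lo, lr_hi)
```

Unlike the symmetric Wald interval, the likelihood ratio interval respects the skewness of the likelihood in the rate parameter, which is one intuition for its better behavior under heavy censoring reported above.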

18.
In this paper, we consider empirical likelihood inference for the partial functional linear model with missing responses. Two empirical log-likelihood ratios for the parameters of interest are constructed, and the corresponding maximum empirical likelihood estimators of the parameters are derived. Under some regularity conditions, we show that the two proposed empirical log-likelihood ratios are asymptotically standard chi-squared. Thus, the asymptotic results can be used to construct confidence intervals/regions for the parameters of interest. We also establish the asymptotic distribution theory of the corresponding maximum empirical likelihood estimators. A simulation study indicates that the two proposed methods perform comparably in terms of coverage probabilities and average lengths of confidence intervals. A real-data example is also used to illustrate the proposed methods.

19.
In many applications, a finite population contains a large proportion of zero values that make the population distribution severely skewed. An unequal‐probability sampling plan compounds the problem, and as a result the normal approximation to the distribution of various estimators has poor precision. The central‐limit‐theorem‐based confidence intervals for the population mean are hence unsatisfactory. Complex designs also make it hard to pin down useful likelihood functions, hence a direct likelihood approach is not an option. In this paper, we propose a pseudo‐likelihood approach. The proposed pseudo‐log‐likelihood function is an unbiased estimator of the log‐likelihood function when the entire population is sampled. Simulations have been carried out. When the inclusion probabilities are related to the unit values, the pseudo‐likelihood intervals are superior to existing methods in terms of the coverage probability, the balance of non‐coverage rates on the lower and upper sides, and the interval length. An application with a data set from the Canadian Labour Force Survey‐2000 also shows that the pseudo‐likelihood method performs more appropriately than other methods. The Canadian Journal of Statistics 38: 582–597; 2010 © 2010 Statistical Society of Canada

20.
Abstract.  A kernel regression imputation method for missing response data is developed. A class of bias-corrected empirical log-likelihood ratios for the response mean is defined. It is shown that any member of our class of ratios is asymptotically chi-squared, and the corresponding empirical likelihood confidence interval for the response mean is constructed. Our ratios share some of the desired features of the existing methods: they are self-scale invariant and no plug-in estimators for the adjustment factor and asymptotic variance are needed; when estimating the non-parametric function in the model, undersmoothing to ensure root-n consistency of the estimator for the parameter is avoided. Since the range of bandwidths contains the optimal bandwidth for estimating the regression function, the existing data-driven algorithm is valid for selecting an optimal bandwidth. We also study the normal approximation-based method. A simulation study is undertaken to compare the empirical likelihood with the normal approximation method in terms of coverage accuracies and average lengths of confidence intervals.
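A minimal version of kernel regression imputation for missing responses can be sketched as follows (Python/NumPy, Gaussian kernel; the bandwidth is fixed for illustration, and the bias-corrected empirical likelihood construction described above is beyond this sketch):

```python
import numpy as np

def nw_impute(x, y, observed, bandwidth=0.3):
    """Fill in missing responses with a Nadaraya-Watson kernel regression
    estimate fitted on the observed (x, y) pairs (Gaussian kernel).
    `observed` is a boolean mask marking non-missing responses."""
    xo, yo = x[observed], y[observed]
    y_full = y.copy()
    for i in np.where(~observed)[0]:
        # Kernel weights of the observed points relative to x[i].
        w = np.exp(-0.5 * ((x[i] - xo) / bandwidth) ** 2)
        y_full[i] = np.sum(w * yo) / np.sum(w)
    return y_full
```

Inference for the response mean would then be based on the completed sample; the paper's contribution is precisely that its empirical likelihood ratios avoid plug-in variance estimators and undersmoothing at that step.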


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号