Similar Literature
1.
We compare minimum Hellinger distance and minimum Hellinger disparity estimates for U-shaped beta distributions. Given suitable density estimates, both methods are known to be asymptotically efficient when the data come from the assumed model family, and robust to small perturbations from the model family. Most implementations use kernel density estimates, which may not be appropriate for U-shaped distributions. We compare fixed binwidth histograms, percentile mesh histograms, and averaged shifted histograms. Minimum disparity estimates are less sensitive to the choice of density estimate than are minimum distance estimates, and the percentile mesh histogram gives the best results for both minimum distance and minimum disparity estimates. Minimum distance estimates are biased and a bias-corrected method is proposed. Minimum disparity estimates and bias-corrected minimum distance estimates are comparable to maximum likelihood estimates when the model holds, and give better results than either method of moments or maximum likelihood when the data are discretized or contaminated. Although our results are for the beta density, the implementations are easily modified for other U-shaped distributions such as the Dirichlet or normal generated distribution.
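
As a rough illustration of the minimum Hellinger distance idea (not the paper's implementation), the sketch below fits a Beta(a, b) by maximizing the Hellinger affinity between a fixed-binwidth histogram density estimate and the model density; the function name, bin count, and optimizer are illustrative assumptions.

```python
import numpy as np
from scipy import optimize, stats

def mhd_beta(x, bins=20):
    """Minimum Hellinger distance fit of Beta(a, b) on [0, 1] using a
    fixed-binwidth histogram density estimate (illustrative sketch)."""
    heights, edges = np.histogram(x, bins=bins, range=(0.0, 1.0), density=True)
    widths = np.diff(edges)
    mids = 0.5 * (edges[:-1] + edges[1:])

    def neg_affinity(log_ab):
        a, b = np.exp(log_ab)                        # keep a, b > 0
        f_model = stats.beta.pdf(mids, a, b)
        # Maximizing the Hellinger affinity on the histogram mesh is
        # equivalent to minimizing the Hellinger distance.
        return -np.sum(np.sqrt(heights * f_model) * widths)

    res = optimize.minimize(neg_affinity, x0=np.log([0.5, 0.5]), method="Nelder-Mead")
    return np.exp(res.x)

rng = np.random.default_rng(1)
print(mhd_beta(rng.beta(0.4, 0.6, size=200)))        # U-shaped beta sample
```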

2.
In nonlinear random coefficients models, the means or variances of response variables may not exist. In such cases, commonly used estimation procedures, e.g., (extended) least-squares (LS) and quasi-likelihood methods, are not applicable. This article solves this problem by proposing an estimate based on percentile estimating equations (PEE). This method does not require full distribution assumptions and leads to efficient estimates within the class of unbiased estimating equations. By minimizing the asymptotic variance of the PEE estimates, the optimum percentile estimating equations (OPEE) are derived. Several examples, including Weibull regression, show the flexibility of the PEE estimates. Under certain regularity conditions, the PEE estimates are shown to be strongly consistent and asymptotically normal, and the OPEE estimates have the minimal asymptotic variance. Compared with the parametric maximum likelihood estimates (MLE), the asymptotic efficiency of the OPEE estimates is more than 98%, while LS-type procedures can have infinite variances. When the observations have outliers or do not follow the distributions considered in the model assumptions, the article shows that OPEE is more robust than the MLE, and its asymptotic efficiency under model misspecification can be above 150%.

3.
The maximum likelihood estimates (MLEs) of parameters of a bivariate normal distribution are derived based on progressively Type-II censored data. The asymptotic variances and covariances of the MLEs are derived from the Fisher information matrix. Using the asymptotic normality of MLEs and the asymptotic variances and covariances derived from the Fisher information matrix, interval estimation of the parameters is discussed and the probability coverages of the 90% and 95% confidence intervals for all the parameters are then evaluated by means of Monte Carlo simulations. To improve the probability coverages of the confidence intervals, especially for the correlation coefficient, sample-based Monte Carlo percentage points are determined and the probability coverages of the 90% and 95% confidence intervals obtained using these percentage points are evaluated and shown to be quite satisfactory. Finally, an illustrative example is presented.

4.
We respond to recent criticism of bootstrap confidence intervals for the correlation coefficient by arguing that, in the correlation coefficient case, non-standard methods should be employed. We propose two such methods. The first is a bootstrap coverage correction algorithm using iterated bootstrap techniques (Hall, 1986; Beran, 1987a; Hall and Martin, 1988) applied to ordinary percentile-method intervals (Efron, 1979), giving intervals with high coverage accuracy and stable lengths and endpoints. The simulation study carried out for this method gives results for sample sizes 8, 10, and 12 in three parent populations. The second technique involves the construction of percentile-t bootstrap confidence intervals for a transformed correlation coefficient, followed by an inversion of the transformation, to obtain "transformed percentile-t" intervals for the correlation coefficient. In particular, Fisher's z-transformation is used, and nonparametric delta method and jackknife variance estimates are used to Studentize the transformed correlation coefficient, with the jackknife-Studentized transformed percentile-t interval yielding the better coverage accuracy in general. Percentile-t intervals constructed without first using the transformation perform very poorly, having large expected lengths and erratically fluctuating endpoints. The simulation study illustrating this technique gives results for sample sizes 10, 15, and 20 in four parent populations. Our techniques provide confidence intervals for the correlation coefficient which have good coverage accuracy (unlike ordinary percentile intervals) and stable lengths and endpoints (unlike ordinary percentile-t intervals).
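
A rough sketch of the "transformed percentile-t" construction described above: studentize Fisher's z of the correlation with a jackknife variance estimate, take percentile-t limits on the z scale, and invert the transformation. The number of resamples, function names, and implementation details are illustrative assumptions, not the authors' code.

```python
import numpy as np

def fisher_z(r):
    return 0.5 * np.log((1 + r) / (1 - r))

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

def jackknife_var_z(x, y):
    """Jackknife variance estimate of Fisher's z-transformed correlation."""
    n = len(x)
    loo = np.array([fisher_z(corr(np.delete(x, i), np.delete(y, i)))
                    for i in range(n)])
    return (n - 1) / n * np.sum((loo - loo.mean()) ** 2)

def transformed_percentile_t_ci(x, y, B=2000, alpha=0.05, rng=None):
    """Percentile-t limits on the Fisher-z scale, inverted back to r."""
    rng = rng or np.random.default_rng()
    n = len(x)
    z_hat, se_hat = fisher_z(corr(x, y)), np.sqrt(jackknife_var_z(x, y))
    t_stats = []
    for _ in range(B):
        idx = rng.integers(0, n, n)
        zb = fisher_z(corr(x[idx], y[idx]))
        seb = np.sqrt(jackknife_var_z(x[idx], y[idx]))
        t_stats.append((zb - z_hat) / seb)
    lo_t, hi_t = np.quantile(t_stats, [alpha / 2, 1 - alpha / 2])
    return np.tanh(z_hat - hi_t * se_hat), np.tanh(z_hat - lo_t * se_hat)

rng = np.random.default_rng(0)
x = rng.normal(size=12)
y = 0.6 * x + 0.8 * rng.normal(size=12)
print(transformed_percentile_t_ci(x, y, rng=rng))
```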

5.
A simulation study was done to compare seven confidence interval methods, based on the normal approximation, for the difference of two binomial probabilities. Cases considered included minimum expected cell sizes ranging from 2 to 15 and smallest group sizes (NMIN) ranging from 6 to 100. Our recommendation is to use a continuity correction of 1/(2 NMIN) combined with the use of (N − 1) rather than N in the estimate of the standard error. For all of the cases considered with minimum expected cell size of at least 3, this method gave coverage probabilities close to or greater than the nominal 90% and 95%. The Yates method is also acceptable, but it is slightly more conservative. At the other extreme, the usual method (with no continuity correction) does not provide adequate coverage even at the larger sample sizes. For the 99% intervals, our recommended method and the Yates correction performed equally well and are reasonable for minimum expected cell sizes of at least 5. None of the methods performed consistently well for a minimum expected cell size of 2.
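
One plausible reading of the recommended interval, sketched below: a normal-approximation interval for p1 − p2 with (N − 1) in each variance denominator and a continuity correction of 1/(2 NMIN) added to the half-width. The exact form used in the study may differ; this is an assumption for illustration.

```python
import numpy as np
from scipy.stats import norm

def diff_prop_ci(x1, n1, x2, n2, conf=0.95):
    """Normal-approximation CI for p1 - p2 with an (N - 1) denominator in the
    standard error and a 1/(2 * NMIN) continuity correction (assumed form)."""
    p1, p2 = x1 / n1, x2 / n2
    se = np.sqrt(p1 * (1 - p1) / (n1 - 1) + p2 * (1 - p2) / (n2 - 1))
    cc = 1.0 / (2 * min(n1, n2))                     # NMIN = smaller group size
    z = norm.ppf(0.5 + conf / 2)
    d = p1 - p2
    return d - (z * se + cc), d + (z * se + cc)

print(diff_prop_ci(12, 40, 5, 35))
```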

6.
The aim of this article is to compare via Monte Carlo simulations the finite sample properties of the parameter estimates of the Marshall-Olkin extended exponential distribution obtained by ten estimation methods: maximum likelihood, modified moments, L-moments, maximum product of spacings, ordinary least-squares, weighted least-squares, percentile, Cramér-von Mises, Anderson-Darling, and right-tail Anderson-Darling. The bias, root mean-squared error, and the absolute and maximum absolute differences between the true and estimated distribution functions are used as comparison criteria. The simulation study reveals that the L-moments and maximum product of spacings methods are highly competitive with the maximum likelihood method in both small and large samples.
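
As an illustration of one of the ten methods, a sketch of maximum product of spacings estimation for the Marshall-Olkin extended exponential, assuming the survival function S(x) = α e^(-λx) / (1 − (1 − α) e^(-λx)); the parameterization, starting values, and sampler are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def moee_cdf(x, alpha, lam):
    """CDF under the assumed Marshall-Olkin extended exponential form."""
    s = np.exp(-lam * x)
    return (1 - s) / (1 - (1 - alpha) * s)

def mps_fit(x):
    """Maximum product of spacings: maximize the mean log-spacing of the
    fitted CDF evaluated at the ordered sample."""
    xs = np.sort(x)

    def neg_mps(log_par):
        alpha, lam = np.exp(log_par)
        u = np.concatenate(([0.0], moee_cdf(xs, alpha, lam), [1.0]))
        spacings = np.clip(np.diff(u), 1e-12, None)  # guard against ties
        return -np.mean(np.log(spacings))

    res = minimize(neg_mps, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
    return np.exp(res.x)

rng = np.random.default_rng(6)
u = rng.random(150)
alpha_true, lam_true = 2.0, 1.5
s = (1 - u) / (1 - (1 - alpha_true) * u)             # invert the assumed CDF
x = -np.log(s) / lam_true
print(mps_fit(x))
```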

7.
In this article, interval estimates of Clements' process capability index are studied through bootstrapping when the underlying distribution is Inverse Gaussian. The standard bootstrap, the percentile bootstrap, and the bias-corrected percentile bootstrap confidence intervals are compared.
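
A sketch of the bias-corrected percentile bootstrap for a Clements-type index under an inverse Gaussian fit. The index is assumed to take the usual Clements form (spec width over the distance between the fitted 0.135% and 99.865% quantiles); the specification limits, number of resamples, and helper names are illustrative.

```python
import numpy as np
from scipy import stats

def clements_cp(x, lsl, usl):
    """Clements-style capability index (assumed form) under an IG fit."""
    mu, loc, scale = stats.invgauss.fit(x, floc=0)
    lo = stats.invgauss.ppf(0.00135, mu, loc=loc, scale=scale)
    hi = stats.invgauss.ppf(0.99865, mu, loc=loc, scale=scale)
    return (usl - lsl) / (hi - lo)

def bc_percentile_ci(x, lsl, usl, B=1000, alpha=0.05, rng=None):
    """Bias-corrected percentile bootstrap interval for the index."""
    rng = rng or np.random.default_rng()
    theta_hat = clements_cp(x, lsl, usl)
    boot = np.array([clements_cp(rng.choice(x, size=len(x), replace=True),
                                 lsl, usl) for _ in range(B)])
    p0 = np.clip(np.mean(boot < theta_hat), 1e-6, 1 - 1e-6)
    z0 = stats.norm.ppf(p0)                          # bias-correction constant
    lo_p = stats.norm.cdf(2 * z0 + stats.norm.ppf(alpha / 2))
    hi_p = stats.norm.cdf(2 * z0 + stats.norm.ppf(1 - alpha / 2))
    return np.quantile(boot, [lo_p, hi_p])

rng = np.random.default_rng(7)
x = rng.wald(3.0, 10.0, size=80)                     # inverse Gaussian data
print(bc_percentile_ci(x, lsl=0.5, usl=8.0, rng=rng))
```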

8.
9.
Duplicate analysis is a strategy commonly used to assess the precision of bioanalytical methods. In some cases, duplicate analysis may rely on pooling data generated across organizations. Despite being generated under comparable conditions, organizations may produce duplicate measurements with different precision. Thus, these pooled data consist of a heterogeneous collection of duplicate measurements. Precision estimates are often expressed as relative difference indexes (RDI), such as relative percentage difference (RPD). Empirical evidence indicates that the frequency distribution of RDI values from heterogeneous data exhibits sharper peaks and heavier tails than normal distributions. Therefore, traditional normal-based models may yield faulty or unreliable estimates of precision from heterogeneous duplicate data. In this paper, we survey the application of mixture models that satisfactorily represent the distribution of RDI values from heterogeneous duplicate data. A simulation study was conducted to compare the performance of the different models in providing reliable estimates and inferences for percentiles calculated from RDI values. These models are readily accessible to practitioners for study implementation through modern statistical software. The utility of mixture models is explained in detail using a numerical example.
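
A minimal illustration under assumed definitions (not the paper's): compute relative percentage differences for duplicate pairs generated by two organizations with different precision, then fit a two-component normal mixture to the heavy-tailed RPD values; the RPD formula and the use of scikit-learn are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def rpd(a, b):
    """Relative percentage difference of a duplicate pair (assumed form)."""
    return 100.0 * (a - b) / ((a + b) / 2.0)

rng = np.random.default_rng(5)
# Heterogeneous duplicates: half the pairs are measured with lower precision.
truth = rng.uniform(50, 150, size=400)
noise_sd = np.where(rng.random(400) < 0.5, 1.0, 5.0)
a = truth + rng.normal(0, noise_sd)
b = truth + rng.normal(0, noise_sd)
values = rpd(a, b).reshape(-1, 1)

# Two-component normal mixture for the peaked, heavy-tailed RPD distribution.
gm = GaussianMixture(n_components=2, random_state=0).fit(values)
print(gm.means_.ravel(), np.sqrt(gm.covariances_.ravel()), gm.weights_)
```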

10.
Two-tailed asymptotic inferences for a proportion
This paper evaluates 29 methods for obtaining a two-sided confidence interval for a binomial proportion (16 of which are new proposals) and concludes that Wilson's classic method is optimal only for a confidence level of 99%, although generally it can be applied when n ≥ 50; for a confidence level of 95% or 90%, the optimal method is the one based on the arcsine transformation (when applied to the data incremented by 0.5), which behaves very similarly to Jeffreys' Bayesian method. A simpler option, though not as good as those just mentioned, is the classic adjusted Wald method of Agresti and Coull.
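
For reference, standard textbook forms of two of the intervals named above, Wilson's score interval and the adjusted Wald interval of Agresti and Coull, are sketched below; the arcsine variant with the 0.5 increment evaluated in the paper is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def wilson_ci(x, n, conf=0.95):
    """Wilson score interval for a binomial proportion."""
    z = norm.ppf(0.5 + conf / 2)
    p = x / n
    center = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return center - half, center + half

def agresti_coull_ci(x, n, conf=0.95):
    """Adjusted Wald interval of Agresti and Coull."""
    z = norm.ppf(0.5 + conf / 2)
    n_t = n + z**2
    p_t = (x + z**2 / 2) / n_t
    half = z * np.sqrt(p_t * (1 - p_t) / n_t)
    return p_t - half, p_t + half

print(wilson_ci(7, 50), agresti_coull_ci(7, 50))
```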

11.
Various methods have been proposed to estimate intra-cluster correlation coefficients (ICCs) for correlated binary data, and many are very sensitive to the type of design and underlying distributional assumptions. We propose a new method to estimate the ICC and its 95% confidence interval based on resampling principles and U-statistics, in which pairs of individuals are resampled with replacement from within and between clusters. We conclude from our simulation study that the resampling-based estimates approximate the population ICC more precisely than the analysis of variance and method of moments techniques across different event rates, numbers of clusters, and cluster sizes.
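
The resampling estimator itself is not fully specified in the abstract, so as background the sketch below implements the analysis-of-variance ICC it is compared against, for a balanced design with k binary observations per cluster; the data layout and simulated example are illustrative.

```python
import numpy as np

def anova_icc(data):
    """One-way random-effects (ANOVA) ICC for a balanced design:
    data is an (m clusters) x (k per cluster) array of 0/1 outcomes."""
    m, k = data.shape
    grand = data.mean()
    cluster_means = data.mean(axis=1)
    msb = k * np.sum((cluster_means - grand) ** 2) / (m - 1)
    msw = np.sum((data - cluster_means[:, None]) ** 2) / (m * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(0)
p_cluster = rng.beta(2.0, 5.0, size=30)                    # cluster-level rates
data = rng.binomial(1, p_cluster[:, None], size=(30, 5))   # 30 clusters of size 5
print(anova_icc(data))
```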

12.
We develop a variance reduction method for smoothing splines. For a given point of estimation, we define a variance-reduced spline estimate as a linear combination of classical spline estimates at three nearby points. We first develop a variance reduction method for spline estimators in univariate regression models. We then develop an analogous variance reduction method for spline estimators in clustered/longitudinal models. Simulation studies are performed which demonstrate the efficacy of our variance reduction methods in finite sample settings. Finally, a real data analysis with the motorcycle data set is performed. Here we consider variance estimation and generate 95% pointwise confidence intervals for the unknown regression function.

13.
A weighted area estimation technique which allows explicit estimation of the parameters of the growth curve l_t = L∞[1 − exp(−K(t − t_0))] is introduced. Simulated data are used to compare an optimal weighted area method with that of maximum likelihood. Length-at-age data from fisheries field studies are used to compare the weighted area and maximum likelihood techniques with the "graphical" method of Ford-Walford. The weighted area methods achieve 80–90% efficiency on simulated data, and are shown to provide robust and sensible estimates on field data.
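
The weighted area estimator is not reproduced here; as a baseline for comparison, the sketch below fits the stated growth curve by ordinary nonlinear least squares with scipy. The data and starting values are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(t, L_inf, K, t0):
    """Growth curve from the abstract: l_t = L_inf * (1 - exp(-K * (t - t0)))."""
    return L_inf * (1.0 - np.exp(-K * (t - t0)))

# Illustrative length-at-age data (not from the paper).
age = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
length = np.array([15.1, 24.8, 32.0, 37.5, 41.6, 44.7, 47.0, 48.7])

params, cov = curve_fit(von_bertalanffy, age, length, p0=[55.0, 0.3, 0.0])
print(dict(zip(["L_inf", "K", "t0"], params)))
```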

14.
15.
Some studies of the bootstrap have assessed the effect of smoothing the estimated distribution that is resampled, a process usually known as the smoothed bootstrap. Generally, the smoothed distribution for resampling is a kernel estimate and is often rescaled to retain certain characteristics of the empirical distribution. Typically the effect of such smoothing has been measured in terms of the mean-squared error of bootstrap point estimates. The reports of these previous investigations have not been encouraging about the efficacy of smoothing. In this paper the effect of resampling a kernel-smoothed distribution is evaluated through expansions for the coverage of bootstrap percentile confidence intervals. It is shown that, under the smooth function model, proper bandwidth selection can accomplish a first-order correction for the one-sided percentile method. With the objective of reducing the coverage error, the appropriate bandwidth for one-sided intervals converges at a rate of n^(-1/4), rather than the familiar n^(-1/5) for kernel density estimation. Applications of this same approach to bootstrap t and two-sided intervals yield optimal bandwidths of order n^(-1/2). These bandwidths depend on moments of the smooth function model and not on derivatives of the underlying density of the data. The relationship of this smoothing method to both the accelerated bias correction and the bootstrap t methods provides some insight into the connections between three quite distinct approximate confidence intervals.
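
A minimal sketch of a smoothed bootstrap percentile interval: each resample is an ordinary bootstrap resample perturbed by Gaussian kernel noise of bandwidth h. The statistic, bandwidth, and the absence of rescaling are illustrative choices, not the paper's.

```python
import numpy as np

def smoothed_bootstrap_percentile_ci(x, stat, h, B=2000, alpha=0.05, rng=None):
    """Percentile interval from resampling a Gaussian-kernel-smoothed
    empirical distribution: resample with replacement, then add h * N(0, 1)."""
    rng = rng or np.random.default_rng()
    n = len(x)
    boot = np.empty(B)
    for b in range(B):
        xb = rng.choice(x, size=n, replace=True) + h * rng.standard_normal(n)
        boot[b] = stat(xb)
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(2)
sample = rng.gamma(2.0, 1.0, size=40)
print(smoothed_bootstrap_percentile_ci(sample, np.mean, h=0.3, rng=rng))
```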

16.
Researchers often report point estimates of turning point(s) obtained in polynomial regression models but rarely assess the precision of these estimates. We discuss three methods to assess the precision of such turning point estimates. The first is the delta method that leads to a normal approximation of the distribution of the turning point estimator. The second method uses the exact distribution of the turning point estimator of quadratic regression functions. The third method relies on Markov chain Monte Carlo methods to provide a finite sample approximation of the exact distribution of the turning point estimator. We argue that the delta method may lead to misleading inference and that the other two methods are more reliable. We compare the three methods using two data sets from the environmental Kuznets curve literature, where the presence and location of a turning point in the income-pollution relationship is the focus of much empirical work.
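
A sketch of the delta method discussed above for a quadratic fit y = b0 + b1 x + b2 x^2: the turning point is t = -b1/(2 b2), and its variance is approximated by g' V g with gradient g = (0, -1/(2 b2), b1/(2 b2^2)) and V the OLS coefficient covariance; the simulated data and function names are illustrative.

```python
import numpy as np

def turning_point_delta(x, y):
    """OLS quadratic fit plus a delta-method standard error for -b1/(2*b2)."""
    X = np.column_stack([np.ones_like(x), x, x**2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(x) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)            # covariance of (b0, b1, b2)
    b1, b2 = beta[1], beta[2]
    tp = -b1 / (2 * b2)
    grad = np.array([0.0, -1.0 / (2 * b2), b1 / (2 * b2**2)])
    se = np.sqrt(grad @ cov @ grad)
    return tp, se

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 200)
y = 1.0 + 2.0 * x - 0.2 * x**2 + rng.normal(0, 1.0, 200)   # true turning point = 5
tp, se = turning_point_delta(x, y)
print(tp, tp - 1.96 * se, tp + 1.96 * se)
```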

18.
Based on progressive Type-I hybrid censored data, statistical analysis in a constant-stress accelerated life test (CS-ALT) for the generalized exponential (GE) distribution is discussed. The maximum likelihood estimates (MLEs) of the parameters and the reliability function are obtained with the EM algorithm, as well as the observed Fisher information matrix, the asymptotic variance-covariance matrix of the MLEs, and the asymptotic unbiased estimate (AUE) of the scale parameter. Confidence intervals (CIs) for the parameters are derived using the asymptotic normality of the MLEs and the percentile bootstrap (Boot-p) method. Finally, the point estimates and interval estimates of the parameters are compared separately through the Monte Carlo method.

19.
We study a general class of piecewise Cox models. We discuss the computation of the semi-parametric maximum likelihood estimates (SMLE) of the parameters with right-censored data, and a simplified algorithm for the maximum partial likelihood estimates (MPLE). Our simulation study suggests that the relative efficiency of the MPLE of the parameter to the SMLE ranges from 96% to 99.9%, but the relative efficiency of the existing estimators of the baseline survival function to the SMLE ranges from 3% to 24%. Thus, the SMLE is much better than the existing estimators.

20.
The ordinary Wilcoxon signed rank test table provides confidence intervals for the median of one population. Adjusted Wilcoxon signed rank test tables, which can provide confidence intervals for the median and the 10th percentile of one population, are created in this paper. A base-(n + 1) number system and theorems about the symmetry properties of the adjusted Wilcoxon signed rank test statistic are derived for programming. Theorem 1 states that the adjusted Wilcoxon signed rank test statistic is symmetric around n(n + 1)/4. Theorem 2 states that the adjusted Wilcoxon signed rank test statistic with the same number of negative ranks m is symmetric around m(n + 1)/2. 87.5% and 85% confidence intervals of the median are given in the table for n = 12, 13, …, 29 to create approximate 95% confidence intervals of the ratio of medians for two independent populations. 95% and 92.5% confidence intervals of the 10th percentile are given in the table for n = 26, 27, 28, 29 to create approximate 95% confidence regions of the ratio of the 10th percentiles for two independent populations. Finally, two large datasets from the wood industry are partitioned to verify the correctness of the adjusted Wilcoxon signed rank test tables for small samples.
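
For background on the ordinary (unadjusted) case, a sketch of the Wilcoxon signed-rank confidence interval for the median via ordered Walsh averages, with the trimming count k taken from the usual normal approximation; the paper's adjusted tables for the 10th percentile are not reproduced.

```python
import numpy as np
from scipy.stats import norm
from itertools import combinations_with_replacement

def signed_rank_median_ci(x, conf=0.95):
    """Approximate signed-rank CI for the median: order the n(n+1)/2 Walsh
    averages and trim k of them at each end, with k from the normal
    approximation to the signed-rank null distribution."""
    n = len(x)
    walsh = np.sort([(x[i] + x[j]) / 2
                     for i, j in combinations_with_replacement(range(n), 2)])
    z = norm.ppf(0.5 + conf / 2)
    mean_w = n * (n + 1) / 4
    sd_w = np.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    k = max(int(np.floor(mean_w - z * sd_w)), 0)     # trimmed from each end
    return walsh[k], walsh[len(walsh) - 1 - k]

rng = np.random.default_rng(4)
print(signed_rank_median_ci(rng.normal(10, 2, size=25)))
```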
