Similar documents
20 similar documents found (search time: 62 ms)
1.
Yu et al. [An improved score interval with a modified midpoint for a binomial proportion. J Stat Comput Simul. 2014;84:1022–1038] propose a novel confidence interval (CI) for a binomial proportion obtained by modifying the midpoint of the score interval. This CI is competitive with the various commonly used methods. At the same time, Martín and Álvarez [Two-tailed asymptotic inferences for a proportion. J Appl Stat. 2014;41:1516–1529] analyse the performance of 29 asymptotic two-tailed CIs for a proportion. The CI they selected is based on the arcsine transformation (applied to the data after adding 0.5), although they also note the good behaviour of the classical score and Agresti and Coull methods (which may be preferred in certain circumstances). The aim of this commentary is to compare the four methods referred to previously. The conclusion (for the classic error α of 5%) is that with a small sample size (n ≤ 80) the method that should be used is that of Yu et al.; for a large sample size (n ≥ 100), the four methods perform in a similar way, with a slight advantage for the Agresti and Coull method. In any case, the Agresti and Coull method does not perform badly and tends to be conservative. The program which determines these four intervals is available at http://www.ugr.es/local/bioest/Z_LINEAR_K.EXE.
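As an illustrative sketch (not the authors' Z_LINEAR_K.EXE program), two of the classical intervals compared above, the score (Wilson) interval and the Agresti and Coull interval, can be computed as follows:

```python
from scipy.stats import norm

def wilson_score_ci(x, n, alpha=0.05):
    """Score (Wilson) interval for a binomial proportion x/n."""
    z = norm.ppf(1 - alpha / 2)
    p = x / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = (z / (1 + z**2 / n)) * ((p * (1 - p) / n + z**2 / (4 * n**2)) ** 0.5)
    return centre - half, centre + half

def agresti_coull_ci(x, n, alpha=0.05):
    """Agresti-Coull interval: add z^2/2 successes and z^2/2 failures,
    then apply the standard Wald formula."""
    z = norm.ppf(1 - alpha / 2)
    n_adj = n + z**2
    p_adj = (x + z**2 / 2) / n_adj
    half = z * (p_adj * (1 - p_adj) / n_adj) ** 0.5
    return p_adj - half, p_adj + half
```

For x = 5 successes in n = 50 trials both intervals are centred near 0.13 rather than at the raw proportion 0.10, which is what makes them less erratic than the plain Wald interval for small samples.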

2.
In statistics it is customary to make asymptotic inferences about the difference d, the ratio R, or a linear combination L of two independent proportions. In this article the authors evaluate ten inference methods and conclude that, for α = 1%, some of the new procedures behave better than the classical ones. In the cases of d, R, and L, the optimal method consists in adding 0.5, 0.5, or 1 to all the data, respectively, and then applying a modification of the arcsine transformation (for d or R) or the likelihood ratio test (for L). A free program may be obtained at the URL http://www.ugr.es/local/bioest/Z_LINEAR_K.EXE.
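A minimal sketch of the kind of arcsine-based procedure discussed above, here reduced to the plain two-tailed test of H0: p1 = p2 after adding 0.5 to all the data; the authors' actual modification of the arcsine transformation may differ from this textbook version:

```python
import math
from scipy.stats import norm

def arcsine_z_test(x1, n1, x2, n2):
    """Two-tailed arcsine (variance-stabilized) test of H0: p1 == p2,
    with 0.5 added to successes and failures in each sample."""
    p1 = (x1 + 0.5) / (n1 + 1.0)
    p2 = (x2 + 0.5) / (n2 + 1.0)
    # asin(sqrt(p)) has approximate variance 1/(4n)
    z = (math.asin(math.sqrt(p1)) - math.asin(math.sqrt(p2))) / \
        math.sqrt(1.0 / (4 * n1) + 1.0 / (4 * n2))
    return z, 2 * norm.sf(abs(z))
```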

3.
Inference concerning the negative binomial dispersion parameter, denoted by c, is important in many biological and biomedical investigations. Properties of the maximum-likelihood estimator of c and its bias-corrected version have been studied extensively, mainly in terms of bias and efficiency [W.W. Piegorsch, Maximum likelihood estimation for the negative binomial dispersion parameter, Biometrics 46 (1990), pp. 863–867; S.J. Clark and J.N. Perry, Estimation of the negative binomial parameter κ by maximum quasi-likelihood, Biometrics 45 (1989), pp. 309–316; K.K. Saha and S.R. Paul, Bias corrected maximum likelihood estimator of the negative binomial dispersion parameter, Biometrics 61 (2005), pp. 179–185]. However, not much work has been done on the construction of confidence intervals (C.I.s) for c. The purpose of this paper is to study the behaviour of some C.I. procedures for c. We study, by simulation, three Wald-type C.I. procedures based on the asymptotic distributions of the method of moments estimate (mme), the maximum-likelihood estimate (mle) and the bias-corrected mle (bcmle) [K.K. Saha and S.R. Paul, Bias corrected maximum likelihood estimator of the negative binomial dispersion parameter, Biometrics 61 (2005), pp. 179–185] of c. All three methods show serious under-coverage. We further study parametric bootstrap procedures based on these estimates of c, which significantly improve the coverage probabilities. The bootstrap C.I.s based on the mle (Boot-MLE method) and the bcmle (Boot-BCM method) have coverage significantly better (empirical coverage close to the nominal coverage) than the corresponding bootstrap C.I. based on the mme, especially for small sample sizes and highly over-dispersed data. However, simulation results on the lengths of the C.I.s show that all three bootstrap procedures produce larger average interval lengths. Therefore, for practical data analysis, the bootstrap C.I.s Boot-MLE or Boot-BCM should be used, with Boot-MLE seeming preferable to Boot-BCM in terms of both coverage and length. Furthermore, Boot-MLE needs less computation than Boot-BCM.
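A rough sketch of a Boot-MLE-style parametric percentile bootstrap interval for the dispersion c, parametrizing the variance as μ + cμ² (so c = 1/k with k the size parameter); the estimation details here are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np
from scipy.stats import nbinom
from scipy.optimize import minimize_scalar

def nb_mle(x):
    """Profile MLE of (mu, k) for the NB with var = mu + mu^2/k.
    Returns (mu, c) with c = 1/k the dispersion parameter."""
    mu = x.mean()  # the MLE of mu is the sample mean
    def nll(log_k):
        k = np.exp(log_k)
        return -nbinom.logpmf(x, k, k / (k + mu)).sum()
    k = np.exp(minimize_scalar(nll, bounds=(-5, 5), method="bounded").x)
    return mu, 1.0 / k

def boot_mle_ci(x, alpha=0.05, B=200, seed=0):
    """Percentile parametric bootstrap C.I. for c based on the mle."""
    rng = np.random.default_rng(seed)
    mu, c = nb_mle(x)
    k = 1.0 / c
    reps = [nb_mle(nbinom.rvs(k, k / (k + mu), size=len(x), random_state=rng))[1]
            for _ in range(B)]
    return np.percentile(reps, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```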

4.
This paper investigates methodologies for evaluating the probability value (P-value) of the Kolmogorov–Smirnov (K–S) goodness-of-fit test using algorithmic program development implemented in Microsoft® Visual Basic® (VB). Six methods were examined for the one-sided one-sample and two methods for the two-sided one-sample cumulative sampling distributions in an investigative software implementation based on machine-precision arithmetic. For the sample sizes considered (n ≤ 2000), the Smirnov iterative method achieved optimal accuracy for K–S P-values ≥ 0.02, while the SmirnovD method was more accurate for lower P-values of the one-sided one-sample distribution statistics. Also, the Durbin matrix method sustained better P-value results than the Durbin recursion method for the two-sided one-sample tests for sample sizes up to n ≤ 700. Based on these results, an algorithm for a Microsoft Excel® function was proposed, from which a model function was developed; its implementation was used to test the performance of engineering students in a general engineering course across seven departments.
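For illustration, the exact one-sided P-value that methods such as the Smirnov iteration target can be written as the Birnbaum–Tingey finite sum; this short sketch is not the VB implementation described above:

```python
import numpy as np
from scipy.special import comb

def smirnov_one_sided_pvalue(d, n):
    """Exact P(D_n^+ >= d) via the Birnbaum-Tingey finite sum:
    d * sum_{j=0}^{floor(n(1-d))} C(n,j) (d + j/n)^(j-1) (1 - d - j/n)^(n-j)."""
    j_max = int(np.floor(n * (1.0 - d)))
    total = sum(comb(n, j) * (d + j / n) ** (j - 1) * (1.0 - d - j / n) ** (n - j)
                for j in range(j_max + 1))
    return d * total

p = smirnov_one_sided_pvalue(0.1, 100)  # one-sided exact P-value for n = 100
```

The sum has at most n + 1 terms, so machine-precision evaluation is feasible for the sample sizes (n ≤ 2000) discussed in the abstract.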

5.
Let Π1, …, Πp be p (p ≥ 2) independent Poisson populations with unknown parameters θ1, …, θp, respectively. Let Xi denote an observation from population Πi, 1 ≤ i ≤ p. Suppose a subset of random size, which includes the best population corresponding to the largest (smallest) θi, is selected using the selection rules of Gupta and Huang [On subset selection procedures for Poisson populations and some applications to the multinomial selection problems, in Applied Statistics, R.P. Gupta, ed., North-Holland, Amsterdam, 1975, pp. 97–109] and Gupta et al. [On subset selection procedures for Poisson populations, Bull. Malaysian Math. Soc. 2 (1979), pp. 89–110]. In this paper, the problem of estimating the average worth of the selected subset is considered under the squared error loss function. The natural estimator is shown to be biased, and the UMVUE is obtained using the UV method of estimation of Robbins [The UV method of estimation, in Statistical Decision Theory and Related Topics IV, S.S. Gupta and J.O. Berger, eds., Springer, New York, vol. 1, 1988, pp. 265–270]. The natural estimator is shown to be inadmissible by constructing a class of dominating estimators. Using Monte Carlo simulations, the bias and risk of the natural, dominating and UMVU estimators are computed and compared.

6.
We study variable sampling plans for exponential distributions based on type-I hybrid censored samples. For this problem, two sampling plans based on the non-failure sample proportion and on the conditional maximum likelihood estimator were proposed by Chen et al. [J. Chen, W. Chou, H. Wu, and H. Zhou, Designing acceptance sampling schemes for life testing with mixed censoring, Naval Res. Logist. 51 (2004), pp. 597–612] and Lin et al. [C.-T. Lin, Y.-L. Huang, and N. Balakrishnan, Exact Bayesian variable sampling plans for the exponential distribution based on type-I and type-II censored samples, Commun. Statist. Simul. Comput. 37 (2008), pp. 1101–1116], respectively. From the decision-theoretic point of view, these two sampling plans are not optimal because their decision functions are not Bayes decision functions. In this article, we take the decision-theoretic approach and derive the optimal Bayesian sampling plan based on sufficient statistics under a general loss function. Furthermore, for the conjugate prior distribution, a closed-form formula for the Bayes decision rule can be obtained under either a linear or a quadratic decision loss. The resulting Bayesian sampling plan has the minimum Bayes risk and hence is better than the sampling plans proposed by Chen et al. (2004) and Lin et al. (2008). Numerical comparisons demonstrate that the performance of the proposed Bayesian sampling plan is superior to that of Chen et al. (2004) and Lin et al. (2008).

7.
In order to make inferences about a linear combination of two independent binomial proportions, various procedures exist (the classic Wald method; the exact, approximate, and maximized score methods; and the Newcombe-Zou method). This article defines and evaluates 25 different methods of inference and selects the ones with the best behaviour. In general terms, the optimal method is the classic Wald method applied to the data after adding z^2_{α/2}/4 successes and z^2_{α/2}/4 failures (≈1 of each if α = 5%), provided no sample proportion has a value of 0 or 1 (otherwise the added increment may be different).

Supplemental materials are available for this article. Go to the publisher's online edition of Communications in Statistics - Simulation and Computation to view the free supplemental file.
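As a sketch of the kind of adjusted Wald interval described above, specialized to the simplest linear combination, the difference p1 − p2 (an Agresti–Caffo-style adjustment); the article's 25 methods cover more general combinations:

```python
from scipy.stats import norm

def adjusted_wald_diff_ci(x1, n1, x2, n2, alpha=0.05):
    """Wald CI for p1 - p2 after adding z^2/4 successes and z^2/4
    failures to each sample (about 1 of each when alpha = 5%)."""
    z = norm.ppf(1 - alpha / 2)
    h = z**2 / 4
    p1 = (x1 + h) / (n1 + 2 * h)
    p2 = (x2 + h) / (n2 + 2 * h)
    half = z * (p1 * (1 - p1) / (n1 + 2 * h) +
                p2 * (1 - p2) / (n2 + 2 * h)) ** 0.5
    d = p1 - p2
    return d - half, d + half
```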

8.
The K principal points of a p-variate random variable X are defined as those points ξ1, …, ξK which minimize the expected squared distance of X from the nearest of the ξk. This paper reviews some of the theory of principal points and presents a method of determining the principal points of univariate continuous distributions. The method is applied to the uniform distribution, to the normal distribution and to the exponential distribution.
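The definition above suggests a simple Monte Carlo approximation: run Lloyd's algorithm (k-means) on a large sample from the distribution. This is one plausible numerical route, not necessarily the determination method of the paper:

```python
import numpy as np

def principal_points(sample_fn, K, n=200_000, iters=50, seed=0):
    """Approximate the K principal points of a univariate distribution
    by Lloyd's algorithm on a large Monte Carlo sample."""
    rng = np.random.default_rng(seed)
    x = np.sort(sample_fn(rng, n))
    pts = np.quantile(x, (np.arange(K) + 0.5) / K)  # starting values
    for _ in range(iters):
        mid = (pts[:-1] + pts[1:]) / 2      # cell boundaries between points
        idx = np.searchsorted(mid, x)       # index of the nearest point
        pts = np.array([x[idx == k].mean() for k in range(K)])
    return pts

# The K = 2 principal points of N(0, 1) are +-sqrt(2/pi) ~ +-0.798
pts = principal_points(lambda rng, n: rng.standard_normal(n), 2)
```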

9.
The study of proportions is a common topic in many fields. The standard beta distribution or the inflated beta distribution may be a reasonable choice to fit a proportion in most situations. However, they do not provide a good fit for variables that do not assume values in the open interval (0, c), 0 < c < 1. For these variables, the authors introduce the truncated inflated beta distribution (TBEINF). The proposed distribution is a mixture of the beta distribution bounded on the open interval (c, 1) and the trinomial distribution. The authors present the moments of the distribution, its score vector and Fisher information matrix, and discuss estimation of its parameters. The properties of the suggested estimators are studied using Monte Carlo simulation. In addition, the authors present an application of the TBEINF distribution to unemployment insurance data.

10.
This paper examines prior choice in probit regression through a predictive cross-validation criterion. In particular, we focus on situations where the number of potential covariates is far larger than the number of observations, such as in gene expression data. Cross-validation avoids the tendency of such models to fit perfectly. We choose the scale parameter c in the standard variable selection prior as the minimizer of the log predictive score. Naive evaluation of the log predictive score requires substantial computational effort, and we investigate computationally cheaper methods using importance sampling. We find that K-fold importance densities perform best, in combination with either mixing over different values of c or with integrating over c through an auxiliary distribution.

11.
12.
Let X1, X2, … be i.i.d. random variables and let Un = C(n, r)^{-1} Σ_{(n,r)} h(X_{i1}, …, X_{ir}) be a U-statistic with EUn = ν, ν unknown. Assume that g(X1) = E[h(X1, …, Xr) − ν | X1] has a strictly positive variance σ². Further, let a be such that Φ(a) − Φ(−a) = α for fixed α, 0 < α < 1, where Φ is the standard normal d.f., and let S²_n be the jackknife estimator of n Var Un. Consider the stopping times N(d) = min{n : S²_n + n^{-1} ≤ n d² a^{-2}}, d > 0, and a confidence interval for ν of length 2d of the form I_{n,d} = [Un − d, Un + d]. We assume that Var Un is unknown and, hence, no fixed-sample-size method is available for finding a confidence interval for ν of prescribed width 2d and prescribed coverage probability α. Turning to a sequential procedure, let I_{N(d),d} be a sequence of sequential confidence intervals for ν. The asymptotic consistency of this procedure, i.e. lim_{d→0} P(ν ∈ I_{N(d),d}) = α, follows from Sproule (1969). In this paper, the rate at which P(ν ∈ I_{N(d),d}) converges to α is investigated. We obtain that |P(ν ∈ I_{N(d),d}) − α| = O(d^{1/2 − (1+k)/(2(1+m))}), d → 0, where k = max{0, 4 − m}, under the condition that E|h(X1, …, Xr)|^m < ∞, m > 2. This improves and extends recent results of Ghosh & DasGupta (1980) and Mukhopadhyay (1981).

13.
Given an unknown function (e.g. a probability density, a regression function, …) f and a constant c, the problem of estimating the level set L(c) = {f ≥ c} is considered. This problem is tackled in a very general framework, which allows f to be defined on a metric space different from ℝ^d. Such a degree of generality is motivated by practical considerations and, in fact, an example with astronomical data is analysed in which the domain of f is the unit sphere. A plug-in approach is followed; that is, L(c) is estimated by Ln(c) = {fn ≥ c}, where fn is an estimator of f. Two results are obtained concerning consistency and convergence rates, with respect to the Hausdorff metric, of the boundaries ∂Ln(c) towards ∂L(c). Also, the consistency of Ln(c) to L(c) is shown, under mild conditions, with respect to the L1 distance. Special attention is paid to the particular case of spherical data.
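The plug-in idea is easy to sketch in one dimension, with a kernel density estimator standing in for fn and a grid standing in for the domain; the paper's setting (general metric spaces, spherical data) is far broader than this toy example:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Plug-in level-set estimate: L_n(c) = {x : f_n(x) >= c}, f_n a KDE.
rng = np.random.default_rng(0)
data = rng.standard_normal(2000)
f_n = gaussian_kde(data)

grid = np.linspace(-4, 4, 801)
c = 0.2
level_set = grid[f_n(grid) >= c]  # discretized version of {f_n >= c}
# For the true N(0,1) density, {f >= 0.2} = [-a, a] with a ~ 1.18,
# so the estimated set should be an interval with endpoints near +-1.18.
```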

14.
One of the basic parameters in survival analysis is the mean residual life M0. For right censored observations, the usual empirical likelihood based log-likelihood ratio leads to a scaled χ₁² limit distribution, and estimating the scale parameter leads to lower coverage of the corresponding confidence interval. To solve this problem, we present a log-likelihood ratio l(M0) using the methods of Murphy and van der Vaart (Ann Stat 1471–1509, 1997). The limit distribution of l(M0) is the standard χ₁² distribution. Based on this limit distribution, the corresponding confidence interval for M0 is constructed. Since the proof of the limit distribution does not offer a computational method for the maximization of the log-likelihood ratio, an EM algorithm is proposed. Simulation studies support the theoretical result.

15.
In this article, we first give a version with continuous paths for the stochastic convolution ∫₀ᵗ U(t, s)φ(s) dW(s) driven by a Wiener process W in a Hilbert space, under weaker conditions. Based on the Picard approximation and the factorization method, we prove the existence, uniqueness and regularity of mild solutions for non-autonomous semilinear stochastic evolution equations under more general assumptions on the coefficients. As an application, we obtain the Feller property of the associated semigroup.

16.
17.
Let X1, X2, … be an independently and identically distributed sequence with EX1 = 0, E exp(tX1) < ∞ (t ≥ 0), and partial sums Sn = X1 + … + Xn. Consider the maximum increment D₁(N, K) = max_{0≤n≤N−K} (S_{n+K} − S_n) of the sequence (Sn) in (0, N) over a time K = K_N, 1 ≤ K_N ≤ N. Under appropriate conditions on (K_N) it is shown that, in the case K_N/log N → 0 but K_N/(log N)^{1/2} → ∞, there exists a sequence (α_N) such that K^{-1/2} D₁(N, K) − α_N converges to 0 w.p. 1. This result provides a small-increment analogue to the improved Erdős–Rényi-type laws stated by Csörgő and Steinebach (1981).

18.
This article studies some properties of a mixture periodically correlated n-variate vector autoregressive (MPVAR) time series model, which extends the mixture time-invariant parameter n-variate vector autoregressive (MVAR) model recently studied by Fong et al. [P.W. Fong, W.K. Li, C.W. Yau, and C.S. Wong, On a mixture vector autoregressive model, The Canadian Journal of Statistics 35 (2007), pp. 135–150]. Our main contributions are, on the one hand, obtaining the second-moment periodic stationarity condition for an n-variate MPVARS(n; K; 2, …, 2) model, together with a closed form for the second moment, and, on the other hand, estimating the coefficient matrices and the error variance matrix via the Expectation-Maximization (EM) algorithm.

19.
Comparing the variances of several independent samples is a classic problem, and many tests have been proposed in the literature. Conover et al. [W.J. Conover, M.E. Johnson, and M.M. Johnson, A comparative study of tests for homogeneity of variances with applications to the outer continental shelf bidding data, Technometrics 23 (1981), pp. 351–361] and Shoemaker [L.H. Shoemaker, Tests for difference in dispersion based on quantiles, The American Statistician 49(2) (1995), pp. 179–182] find that the existing tests lack power for skewed sampling distributions. To address this problem, we study the effect of an a priori symmetrization of the data on the performance of tests for homogeneity of variances. This article also updates the comprehensive comparative study of Conover et al.
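One plausible a priori symmetrization, reflecting each sample about its median before applying a standard homogeneity-of-variance test, can be sketched as follows; the article's exact construction may differ:

```python
import numpy as np
from scipy.stats import levene

def symmetrize(x):
    """Reflect a sample about its median, producing an exactly
    symmetric sample of twice the size (an illustrative choice)."""
    x = np.asarray(x, dtype=float)
    m = np.median(x)
    return np.concatenate([x - m, m - x])

rng = np.random.default_rng(1)
a = rng.exponential(1.0, 100)  # skewed sample, scale 1 (variance 1)
b = rng.exponential(3.0, 100)  # skewed sample, scale 3 (variance 9)
stat, p = levene(symmetrize(a), symmetrize(b))
```

Note that reflecting doubles the nominal sample size, which by itself inflates significance; any serious use would need the critical values recalibrated, which is exactly the kind of performance question the simulation study addresses.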

20.

K-means inverse regression was developed as an easy-to-use dimension reduction procedure for multivariate regression. The approach is similar to the original sliced inverse regression method, except that the slices are explicitly produced by a K-means clustering of the response vectors. In this article, we propose K-medoids clustering as an alternative clustering approach for slicing and compare its performance to K-means in a simulation study. Although the two methods often produce comparable results, K-medoids tends to perform better in the presence of outliers. Besides isolating outliers, K-medoids clustering has the further advantage of accommodating a broader range of dissimilarity measures, which could prove useful in other graphical regression applications where slicing is required.
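The slicing step described above can be sketched with a simple one-dimensional k-means on the responses (K-medoids would replace the centroid update with a medoid update); the per-slice predictor means are then the raw material of the inverse regression step. This is an illustrative reduction of the multivariate-response setting:

```python
import numpy as np

def kmeans_slices(y, K, iters=30, seed=0):
    """Slice responses by 1-D k-means; returns a slice label per observation."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(y, size=K, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(y[:, None] - centers[None, :]), axis=1)
        centers = np.array([y[labels == k].mean() if np.any(labels == k)
                            else centers[k] for k in range(K)])
    return labels

# Inverse regression step: per-slice means of the predictors.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))
y = X[:, 0] + 0.1 * rng.standard_normal(500)  # one effective direction
labels = kmeans_slices(y, 5)
slice_means = np.array([X[labels == k].mean(axis=0) for k in range(5)])
```

Because y depends only on the first coordinate of X here, the slice means should spread out along that coordinate and stay near zero in the others.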
