Similar Articles
20 similar articles found (search time: 31 ms)
1.
Two-sided asymptotic confidence intervals for an unknown proportion p have been the subject of a great deal of literature. Surprisingly, very few papers are devoted, like this article, to the one-tailed case, despite its great practical importance and the fact that its behavior usually differs from that of the two-tailed case. This paper evaluates 47 methods and concludes that (1) the optimal method is the classic Wilson method with a continuity correction, and (2) a simpler option, almost as good as the first, is the new adjusted Wald method (Wald's classic method applied to the data augmented by the values proposed by Borkowf: adding a single imaginary failure or success).
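As an illustration of the two recommended approaches, the sketch below (ours, not the paper's code) computes an upper one-sided bound with the continuity-corrected Wilson formula and with a Wald bound after adding a single imaginary success; the closed-form Wilson expression follows Newcombe's presentation, and the exact form of Borkowf's adjustment is our reading of the abstract.

```python
import math
from statistics import NormalDist

def wilson_cc_upper(x, n, alpha=0.05):
    """Upper one-sided Wilson score bound with continuity correction
    (Newcombe's closed form); illustrative sketch only."""
    if x == n:
        return 1.0
    z = NormalDist().inv_cdf(1 - alpha)
    p = x / n
    num = 2 * n * p + z * z + 1 + z * math.sqrt(
        z * z + 2 - 1 / n + 4 * p * (n * (1 - p) - 1))
    return min(1.0, num / (2 * (n + z * z)))

def adjusted_wald_upper(x, n, alpha=0.05):
    """Upper one-sided Wald bound after adding one imaginary success --
    our reading of the Borkowf-style adjustment described above."""
    z = NormalDist().inv_cdf(1 - alpha)
    p = (x + 1) / (n + 1)
    return min(1.0, p + z * math.sqrt(p * (1 - p) / (n + 1)))
```

For a lower bound, the adjustment would instead add a single imaginary failure.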

2.
Asymptotic inferences about a linear combination of K independent binomial proportions are very frequent in applied research. Nevertheless, until quite recently research had focused almost exclusively on the cases K≤2 (particularly on one proportion and the difference of two proportions). This article focuses on the cases K>2, which have recently begun to receive more attention due to their great practical interest. For this inference there are several procedures which have not been compared: the score method (S0) and the adjusted Wald method of Martín Andrés et al. (W3), which generalizes the method of Price and Bonett, on the one hand, and, on the other, the method of Zou et al. (N0) based on the Wilson confidence interval, which generalizes the Newcombe method. The article describes a new procedure (P0) based on the classic Peskun method, modifies the previous methods by adding a continuity correction (methods S0c, W3c, N0c and P0c, respectively) and, finally, reports a simulation comparing the eight aforementioned procedures (selected from a total of 32 possible methods). The conclusion reached is that the S0c method is the best, although for very small samples (n_i ≤ 10, ∀i) the W3 method is better. The P0 method would be optimal if one needs a method which is almost never too liberal, but this entails using a method which is too conservative and which provides excessively wide CIs. The W3 and P0 methods have the additional advantage of being very easy to apply. A free program which implements the S0 and S0c methods (the most complex ones) can be obtained at http://www.ugr.es/local/bioest/Z_LINEAR_K.EXE.
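The adjusted methods compared above (S0, W3, N0, P0) are not reproduced here; for orientation, this is only the naive Wald baseline for L = Σ_i β_i p_i that those procedures refine:

```python
import math
from statistics import NormalDist

def wald_ci_linear_combination(x, n, beta, alpha=0.05):
    """Naive Wald CI for L = sum_i beta_i * p_i from K independent
    binomials. Baseline sketch only -- the adjusted methods compared
    in the paper modify this in various ways."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    est = sum(b * xi / ni for b, xi, ni in zip(beta, x, n))
    var = sum(b * b * (xi / ni) * (1 - xi / ni) / ni
              for b, xi, ni in zip(beta, x, n))
    half = z * math.sqrt(var)
    return est - half, est + half
```

With beta = [1, -1] this reduces to the classic Wald interval for the difference of two proportions.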

3.
Two-tailed asymptotic inferences for a proportion
This paper evaluates 29 methods for obtaining a two-sided confidence interval for a binomial proportion (16 of which are new proposals) and concludes that: Wilson's classic method is optimal only for a confidence of 99%, although generally it can be applied when n≥50; for a confidence of 95% or 90%, the optimal method is the one based on the arcsine transformation (when this is applied to the data incremented by 0.5), which behaves very similarly to Jeffreys' Bayesian method. A simpler option, though not as good as those just mentioned, is the classic adjusted Wald method of Agresti and Coull.
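A minimal sketch of the recommended arcsine interval, under our assumptions that "incremented by 0.5" means using p~ = (x + 0.5)/(n + 1) and that n + 1 enters the variance term 1/(4n); the paper may use a slightly different form:

```python
import math
from statistics import NormalDist

def arcsine_ci(x, n, alpha=0.05):
    """Two-sided CI via the arcsine (angular) transformation applied to
    the data incremented by 0.5. The increment convention and the use
    of n + 1 in the variance are our assumptions, not taken verbatim
    from the paper."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    p = (x + 0.5) / (n + 1)
    phi = math.asin(math.sqrt(p))
    half = z / (2 * math.sqrt(n + 1))
    lo = math.sin(max(0.0, phi - half)) ** 2
    hi = math.sin(min(math.pi / 2, phi + half)) ** 2
    return lo, hi
```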

4.
The adjusted r2 algorithm is a popular automated method for selecting the start time of the terminal disposition phase (tz) when conducting a noncompartmental pharmacokinetic data analysis. Using simulated data, the performance of the algorithm was assessed in relation to the ratio of the slopes of the preterminal and terminal disposition phases, the point of intercept of the terminal disposition phase with the preterminal disposition phase, the length of the terminal disposition phase captured in the concentration‐time profile, the number of data points present in the terminal disposition phase, and the level of variability in concentration measurement. The adjusted r2 algorithm was unable to identify tz accurately when there were more than three data points present in a profile's terminal disposition phase. The terminal disposition phase rate constant (λz) calculated based on the value of tz selected by the algorithm had a positive bias in all simulation data conditions. Tolerable levels of bias (median bias less than 5%) were achieved under conditions of low measurement variability. When measurement variability was high, tolerable levels of bias were attained only when the terminal phase time span was 4 multiples of t1/2 or longer. A comparison of the performance of the adjusted r2 algorithm, a simple r2 algorithm, and tz selection by visual inspection was conducted using a subset of the simulation data. In the comparison, the simple r2 algorithm performed as well as the adjusted r2 algorithm and the visual inspection method outperformed both algorithms. Recommendations concerning the use of the various tz selection methods are presented.
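A sketch of the standard adjusted-r2 selection rule that the paper evaluates (largest adjusted r2 over candidate terminal windows, with ties going to the fit with more points, as in common noncompartmental software); this is not the authors' simulation code:

```python
import math

def select_tz_adjusted_r2(times, conc, min_points=3):
    """Fit ln(C) vs t to the last k points for each k >= min_points and
    keep the window with the largest adjusted r2 (ties -> more points).
    Returns (tz, lambda_z). Sketch of the common algorithm only."""
    best = None
    npts = len(times)
    for k in range(min_points, npts + 1):
        t = times[npts - k:]
        y = [math.log(c) for c in conc[npts - k:]]
        tbar, ybar = sum(t) / k, sum(y) / k
        sxx = sum((ti - tbar) ** 2 for ti in t)
        sxy = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
        syy = sum((yi - ybar) ** 2 for yi in y)
        r2 = sxy * sxy / (sxx * syy) if syy > 0 else 0.0
        adj = 1 - (1 - r2) * (k - 1) / (k - 2)      # one predictor
        if best is None or adj >= best[0] - 1e-12:   # ties -> larger k
            best = (adj, t[0], -sxy / sxx)           # tz, lambda_z
    return best[1], best[2]
```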

5.
Following the paper by Genton and Loperfido [Generalized skew-elliptical distributions and their quadratic forms, Ann. Inst. Statist. Math. 57 (2005), pp. 389–401], we say that Z has a generalized skew-normal distribution if its probability density function (p.d.f.) is given by f(z) = 2φp(z; ξ, Ω)π(z − ξ), z ∈ ℝ^p, where φp(·; ξ, Ω) is the p-dimensional normal p.d.f. with location vector ξ and scale matrix Ω, ξ ∈ ℝ^p, Ω > 0, and π is a skewing function from ℝ^p to ℝ, that is, 0 ≤ π(z) ≤ 1 and π(−z) = 1 − π(z), ∀z ∈ ℝ^p. First the distributions of linear transformations of Z are studied, and some moments of Z and its quadratic forms are derived. Next we obtain the joint moment-generating functions (m.g.f.'s) of linear and quadratic forms of Z and then investigate conditions for their independence. Finally, explicit forms for the above distributions, m.g.f.'s and moments are derived when π(z) = κ(α⊤z), where α ∈ ℝ^p and κ is the normal, Laplace, logistic or uniform distribution function.
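In one dimension, taking the skewing function π(u) = Φ(αu) recovers Azzalini's skew-normal density, the simplest member of this class. The sketch below (our illustration, not from the paper) evaluates the density and checks numerically that it integrates to 1, which follows from the property π(−u) = 1 − π(u):

```python
import math

def gsn_pdf(z, xi=0.0, omega=1.0, alpha=2.0):
    """Density 2*phi(z; xi, omega)*pi(z - xi) in one dimension with
    pi(u) = Phi(alpha*u), i.e. Azzalini's skew normal."""
    u = (z - xi) / math.sqrt(omega)
    phi = math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi * omega)
    skew = 0.5 * (1 + math.erf(alpha * u / math.sqrt(2)))  # Phi(alpha*u)
    return 2 * phi * skew

# pi(-u) = 1 - pi(u) makes the density integrate to 1 (Riemann check):
total = sum(gsn_pdf(-10 + 0.001 * i) * 0.001 for i in range(20001))
```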

6.
7.
For statistical inference on the drift parameter a of a fractional Brownian motion W^H_t with Hurst parameter H ∈ (0, 1) and a constant drift, Y^H_t = at + W^H_t, there is a large number of options. We may, for example, base this inference on the properties of the standard normal distribution applied to the differences between the observed values of the process at discrete times. Although such methods are very simple, it turns out that inverse methods are more appropriate. Such methods can be generalized to non-constant drift. For hypothesis testing about the drift parameter a, it is more proper to standardize the observed process, and to use inverse methods based on the first exit time of the observed process from a pre-specified interval up to some given time. These procedures are illustrated, and their decision times are compared against the direct approach. Other generalizations are possible when the random part is a symmetric stochastic integral of a known, deterministic function with respect to fractional Brownian motion.

8.
Most of the higher-order asymptotic results in statistical inference available in the literature assume that the model is correctly specified. The aim of this paper is to develop higher-order results under model misspecification. The density functions to O(n^{-3/2}) of the robust score test statistic and the robust Wald test statistic are derived under the null hypothesis, for the scalar as well as the multiparameter case. Alternative statistics which are robust to O(n^{-3/2}) are also proposed.

9.
Fosdick and Raftery (2012) recently encountered the problem of inference for a bivariate normal correlation coefficient ρ with known variances. We derive a variance-stabilizing transformation y(ρ) analogous to Fisher’s classical z-transformation for the unknown-variance case. Adjusting y for the sample size n produces an improved “confidence-stabilizing” transformation yn(ρ) that provides more accurate interval estimates for ρ than the known-variance MLE. Interestingly, the z transformation applied to the unknown-but-equal-variance MLE performs well in the known-variance case for smaller values of |ρ|. Both methods are useful for comparing two or more correlation coefficients in the known-variance case.

10.
Exact unconditional tests for comparing two binomial probabilities are generally more powerful than conditional tests like Fisher's exact test. Their power can be further increased by the Berger and Boos confidence interval method, where a p-value is found by restricting the common binomial probability under H0 to a 1 − γ confidence interval. We studied the average test power for the exact unconditional z-pooled test for a wide range of cases with balanced and unbalanced sample sizes, and significance levels 0.05 and 0.01. The detailed results are available online. Among the values 10^{-3}, 10^{-4}, …, 10^{-10}, the value γ = 10^{-4} gave the highest power, or close to the highest power, in all the cases we examined, and can be given as a general recommendation for an optimal γ.
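A sketch of the Berger–Boos device for the z-pooled test as described above: maximise the two-sided tail probability over the common p restricted to a Clopper–Pearson 1 − γ interval, then add γ. The coarse grid search and the tie tolerance are our simplifications, not the paper's implementation:

```python
from scipy.stats import beta, binom

def z_pooled_berger_boos(x1, n1, x2, n2, gamma=1e-4, grid=100):
    """Exact unconditional z-pooled p-value with the Berger-Boos
    restriction of the nuisance parameter. Illustrative sketch only."""
    def t_stat(a, b):
        pp = (a + b) / (n1 + n2)
        if pp == 0.0 or pp == 1.0:
            return 0.0
        se = (pp * (1 - pp) * (1 / n1 + 1 / n2)) ** 0.5
        return (a / n1 - b / n2) / se

    t_obs = abs(t_stat(x1, x2))
    # tables at least as extreme as the observed one (two-sided, by |T|)
    reject = [(a, b) for a in range(n1 + 1) for b in range(n2 + 1)
              if abs(t_stat(a, b)) >= t_obs - 1e-12]
    # Clopper-Pearson 1 - gamma interval for the common p
    s, N = x1 + x2, n1 + n2
    lo = beta.ppf(gamma / 2, s, N - s + 1) if s > 0 else 0.0
    hi = beta.ppf(1 - gamma / 2, s + 1, N - s) if s < N else 1.0
    pmax = 0.0
    for k in range(grid + 1):
        p = lo + (hi - lo) * k / grid
        prob = sum(binom.pmf(a, n1, p) * binom.pmf(b, n2, p)
                   for a, b in reject)
        pmax = max(pmax, prob)
    return min(1.0, pmax + gamma)
```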

11.
As the sample size increases, the coefficient of skewness of Fisher's transformation z = tanh^{-1} r of the correlation coefficient decreases much more rapidly than its excess kurtosis. Hence, the distribution of the standardized z can be approximated more accurately by a t distribution with matching kurtosis than by the unit normal distribution. This t distribution can, in turn, be subjected to Wallace's approximation, resulting in a new normal approximation for Fisher's z transform. This approximation, which can be used to estimate probabilities as well as percentiles, compares favorably in both accuracy and simplicity with the two best earlier approximations, namely those due to Ruben (1966) and Kraemer (1973). Fisher (1921) suggested approximating the distribution of the variance-stabilizing transform z = (1/2) log((1 + r)/(1 − r)) of the correlation coefficient r by the normal distribution with mean (1/2) log((1 + ρ)/(1 − ρ)) and variance 1/(n − 3). This approximation is generally recognized as being remarkably accurate when |ρ| is moderate, but not so accurate when |ρ| is large, even when n is not small (David (1938)). Among various alternatives to Fisher's approximation, the normalizing transformation due to Ruben (1966) and a t approximation due to Kraemer (1973) are interesting on the grounds of novelty, accuracy and/or aesthetics. If r̃ = r/√(1 − r²) and ρ̃ = ρ/√(1 − ρ²), then Ruben (1966) showed that (1) gn(r, ρ) = [((2n − 5)/2)^{1/2} r̃ − ((2n − 3)/2)^{1/2} ρ̃] / [1 + (1/2)(r̃² + ρ̃²)]^{1/2} is approximately unit normal. Kraemer (1973) suggests approximating (2) tn(r, ρ) = (r − ρ₁)√(n − 2) / [√(1 − r²)√(1 − ρ₁²)] by a Student's t variable with n − 2 degrees of freedom, where, after considering various valid choices for ρ₁, she recommends taking ρ₁ = ρ*, the median of r given n and ρ.
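Transcribing the three approximations discussed above (the standardised Fisher z, Ruben's gn and Kraemer's tn) into code, as we read them from the abstract:

```python
import math

def fisher_z(r, rho, n):
    """Standardised Fisher z: (z(r) - z(rho)) * sqrt(n - 3)."""
    z = lambda t: 0.5 * math.log((1 + t) / (1 - t))
    return (z(r) - z(rho)) * math.sqrt(n - 3)

def ruben_g(r, rho, n):
    """Ruben's (1966) normalising transformation g_n(r, rho)."""
    rt = r / math.sqrt(1 - r * r)
    pt = rho / math.sqrt(1 - rho * rho)
    num = (math.sqrt((2 * n - 5) / 2) * rt
           - math.sqrt((2 * n - 3) / 2) * pt)
    return num / math.sqrt(1 + 0.5 * (rt * rt + pt * pt))

def kraemer_t(r, rho1, n):
    """Kraemer's (1973) approximate t statistic with n - 2 d.f."""
    return ((r - rho1) * math.sqrt(n - 2)
            / (math.sqrt(1 - r * r) * math.sqrt(1 - rho1 * rho1)))
```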

12.
In this paper, we propose a smoothed Q‐learning algorithm for estimating optimal dynamic treatment regimes. In contrast to the Q‐learning algorithm in which nonregular inference is involved, we show that, under assumptions adopted in this paper, the proposed smoothed Q‐learning estimator is asymptotically normally distributed even when the Q‐learning estimator is not and its asymptotic variance can be consistently estimated. As a result, inference based on the smoothed Q‐learning estimator is standard. We derive the optimal smoothing parameter and propose a data‐driven method for estimating it. The finite sample properties of the smoothed Q‐learning estimator are studied and compared with several existing estimators including the Q‐learning estimator via an extensive simulation study. We illustrate the new method by analyzing data from the Clinical Antipsychotic Trials of Intervention Effectiveness–Alzheimer's Disease (CATIE‐AD) study.

13.
R-squared (R²) and adjusted R-squared (R²Adj) are sometimes viewed as statistics detached from any target parameter, and sometimes as estimators of the population multiple correlation. The latter interpretation is meaningful only if the explanatory variables are random. This article proposes an alternative perspective for the case where the x's are fixed. A new parameter is defined, in a similar fashion to the construction of R², but relying on the true parameters rather than their estimates. (The parameter definition also includes the fixed x values.) This parameter is referred to as the "parametric" coefficient of determination, and denoted by ρ²*. The proposed ρ²* remains stable when irrelevant variables are removed (or added), unlike the unadjusted R², which always goes up when variables, relevant or not, are added to the model (and goes down when they are removed). The value of the traditional R²Adj may go up or down with added (or removed) variables, relevant or not. It is shown that the unadjusted R² overestimates ρ²*, while the traditional R²Adj underestimates it. It is also shown that for simple linear regression the magnitude of the bias of R²Adj can be as high as that of the unadjusted R² (with opposite sign). Asymptotic convergence in probability of R²Adj to ρ²* is demonstrated. The effects of the model parameters on the bias of R² and R²Adj are characterized analytically and numerically. An alternative bi-adjusted estimator is presented and evaluated.
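For reference, the standard sample quantities discussed above are computed below; the "parametric" ρ²* requires the true coefficients and the fixed design, so it is not reproduced here.

```python
def r_squared(y, yhat, p):
    """Unadjusted and adjusted R-squared for a fit with p predictors
    (intercept not counted among them). Standard textbook definitions."""
    n = len(y)
    ybar = sum(y) / n
    sse = sum((yi - fi) ** 2 for yi, fi in zip(y, yhat))
    sst = sum((yi - ybar) ** 2 for yi in y)
    r2 = 1 - sse / sst
    r2_adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)
    return r2, r2_adj
```

Note that R²Adj ≤ R² always holds, consistent with the opposite-sign biases described in the abstract.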

14.
Consider two independent random samples of size f + 1, one from an N(μ₁, σ₁²) distribution and the other from an N(μ₂, σ₂²) distribution, where σ₁²/σ₂² ∈ (0, ∞). The Welch 'approximate degrees of freedom' ('approximate t-solution') confidence interval for μ₁ − μ₂ is commonly used when it cannot be guaranteed that σ₁²/σ₂² = 1. Kabaila (2005, Comm. Statist. Theory and Methods 34, 291–302) multiplied the half-width of this interval by a positive constant so that the resulting interval, denoted by J₀, has minimum coverage probability 1 − α. Now suppose that we have uncertain prior information that σ₁²/σ₂² = 1. We consider a broad class of confidence intervals for μ₁ − μ₂ with minimum coverage probability 1 − α. This class includes the interval J₀, which we use as the standard against which other members of the class will be judged. A confidence interval J utilizes the prior information substantially better than J₀ if (expected length of J)/(expected length of J₀) is (a) substantially less than 1 (less than 0.96, say) for σ₁²/σ₂² = 1, and (b) not too much larger than 1 for all other values of σ₁²/σ₂². For a given f, does there exist a confidence interval that satisfies these conditions? We focus on the question of whether condition (a) can be satisfied. For each given f, we compute a lower bound to the minimum over the class of (expected length of J)/(expected length of J₀) when σ₁²/σ₂² = 1. For 1 − α = 0.95, this lower bound is not substantially less than 1. Thus, no confidence interval in this class utilizes the prior information substantially better than J₀.

15.
We present results of a Monte Carlo study comparing four methods of estimating the parameters of the logistic model logit(pr(Y = 1 | X, Z)) = α0 + α1 X + α2 Z, where X and Z are continuous covariates and X is always observed but Z is sometimes missing. The four methods examined are 1) logistic regression using complete cases, 2) logistic regression with filled-in values of Z obtained from the regression of Z on X and Y, 3) logistic regression with filled-in values of Z and random error added, and 4) maximum likelihood estimation assuming the distribution of Z given X and Y is normal. Effects of different percentages of missing Z and different missing-value mechanisms on the bias and mean absolute deviation of the estimators are examined for data sets of N = 200 and N = 400.

16.
Interval estimation of the difference of two independent binomial proportions is an important problem in many applied settings. Newcombe (1998, Statistics in Medicine 17: 873–890) compared the performance of several existing asymptotic methods and, based on the results obtained, recommended a method known as Wilson's method, a modified version of a method originally proposed for a single binomial proportion. In this article, we propose a method based on the profile likelihood, where the likelihood is weighted by the noninformative Jeffreys prior. Extensive simulations show that the proposed method performs well compared to Wilson's method. A SAS/IML program implementing this method is also provided with this article.
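The comparator method, Newcombe's hybrid score ("Wilson") interval for p1 − p2, has a simple closed form built from the two single-sample Wilson limits; the profile-likelihood interval proposed in the article itself is not reproduced here.

```python
import math
from statistics import NormalDist

def wilson(x, n, z):
    """Two-sided Wilson score limits for a single proportion."""
    p = x / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
            / (1 + z * z / n))
    return centre - half, centre + half

def newcombe_diff_ci(x1, n1, x2, n2, alpha=0.05):
    """Newcombe's (1998) hybrid score interval for p1 - p2."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson(x1, n1, z)
    l2, u2 = wilson(x2, n2, z)
    d = p1 - p2
    lo = d - math.sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2)
    hi = d + math.sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2)
    return lo, hi
```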

17.
For each n, k ∈ ℕ, let Yᵢ = (Y_{i1}, Y_{i2}, …, Y_{ik}), 1 ≤ i ≤ n, be independent random vectors in ℝᵏ with finite third moments, where the Y_{ij} are independent for all j = 1, 2, …, k. In this article, we use Stein's technique to find constants in uniform bounds for the multidimensional Berry–Esseen inequality on a closed sphere, a half-plane and a rectangular set.

18.
In this paper an attempt has been made to examine multivariate versions of the common process capability indices (PCIs) denoted by Cp and Cpk. Markov chain Monte Carlo (MCMC) methods are used to generate sampling distributions for the various PCIs, from which inference is performed. Some Bayesian model-checking techniques are developed and implemented to examine how well our model fits the data. Finally, the methods are exemplified on a historical aircraft data set collected by the Pratt and Whitney Company.
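The univariate building blocks are simple; below is a minimal sketch of point estimates of Cp and Cpk (no MCMC sampling or multivariate extension, which is the paper's actual subject):

```python
import statistics

def cp_cpk(data, lsl, usl):
    """Univariate Cp and Cpk point estimates from a sample, given lower
    and upper specification limits. Textbook definitions only."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk
```

Cp measures potential capability from spread alone; Cpk additionally penalises an off-centre mean, so Cpk ≤ Cp with equality when the process is centred.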

19.
Yu et al. [An improved score interval with a modified midpoint for a binomial proportion. J Stat Comput Simul. 2014;84:1022–1038] propose a novel confidence interval (CI) for a binomial proportion obtained by modifying the midpoint of the score interval. This CI is competitive with the various commonly used methods. At the same time, Martín and Álvarez [Two-tailed asymptotic inferences for a proportion. J Appl Stat. 2014;41:1516–1529] analyse the performance of 29 asymptotic two-tailed CIs for a proportion. The CI they selected is based on the arcsine transformation (when this is applied to the data incremented by 0.5), although they also refer to the good behaviour of the classical score and Agresti and Coull methods (which may be preferred in certain circumstances). The aim of this commentary is to compare the four methods referred to previously. The conclusion (for the classic error α of 5%) is that with a small sample size (n ≤ 80) the method that should be used is that of Yu et al.; for a large sample size (n ≥ 100), the four methods perform similarly, with a slight advantage for the Agresti and Coull method. In any case, the Agresti and Coull method does not perform badly and tends to be conservative. The program which determines these four intervals is available at http://www.ugr.es/local/bioest/Z_LINEAR_K.EXE.
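Of the four methods compared, Agresti–Coull is the simplest to state: add z²/2 pseudo-successes and z²/2 pseudo-failures, then apply the Wald formula. A sketch:

```python
import math
from statistics import NormalDist

def agresti_coull_ci(x, n, alpha=0.05):
    """Agresti-Coull adjusted Wald interval for a binomial proportion:
    Wald formula applied to x + z^2/2 successes out of n + z^2 trials
    (about x + 2 out of n + 4 at the 95% level)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    n_t = n + z * z
    p_t = (x + z * z / 2) / n_t
    half = z * math.sqrt(p_t * (1 - p_t) / n_t)
    return max(0.0, p_t - half), min(1.0, p_t + half)
```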

20.
Editor's Report     
There are two common methods for statistical inference on 2 × 2 contingency tables. One is the widely taught Pearson chi-square test, which uses the well-known χ² statistic. The chi-square test is appropriate for large-sample inference, and in the 2 × 2 case it is equivalent to the Z-test based on the difference between the two sample proportions. The other method is Fisher's exact test, which evaluates the likelihood of each table with the same marginal totals. This article mathematically justifies that these two methods for determining which tables are extreme do not completely agree with each other. Our analysis obtains one-sided and two-sided conditions under which a disagreement between the two tests in determining extremeness could occur. We also address the question of whether their discrepancy in determining extremeness would lead them to draw different conclusions when testing homogeneity or independence. Our examination of the two tests casts light on which test should be trusted when they draw different conclusions.
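Both tests are available in SciPy; below is a minimal sketch comparing their p-values on the same table (using the uncorrected chi-square so that it matches the two-proportion Z-test mentioned above). As the abstract notes, the two tests rank tables by different notions of "extreme", so their p-values generally differ.

```python
from scipy.stats import chi2_contingency, fisher_exact

def compare_tests(table):
    """P-values from the Pearson chi-square test (without Yates
    correction) and Fisher's exact test on a 2x2 table."""
    stat, p_chi2, dof, expected = chi2_contingency(table,
                                                  correction=False)
    _, p_fisher = fisher_exact(table)
    return p_chi2, p_fisher
```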
