Similar Documents
20 similar documents retrieved (search time: 343 ms)
1.
In this paper we derive two likelihood-based procedures for the construction of confidence limits for the common odds ratio in K 2 × 2 contingency tables. We then conduct a simulation study to compare these procedures with a recently proposed procedure by Sato (Biometrics 46 (1990) 71–79), based on the asymptotic distribution of the Mantel-Haenszel estimate of the common odds ratio. We consider the situation in which the number of strata remains fixed (finite), but the sample sizes within each stratum are large. Bartlett's score procedure based on the conditional likelihood is found to be almost identical, in terms of coverage probabilities and average coverage lengths, to the procedure recommended by Sato, although the score procedure has a slight edge, in some instances, in terms of average coverage lengths. So, for the 'fixed strata and large sample' situation, Bartlett's score procedure can be considered an alternative to the procedure proposed by Sato, based on the asymptotic distribution of the Mantel-Haenszel estimator of the common odds ratio.
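The Mantel-Haenszel estimator on which Sato's interval is centred has a simple closed form: a weighted ratio of cross-products summed over the K strata. A minimal sketch, using hypothetical 2 × 2 tables:

```python
# Mantel-Haenszel estimate of the common odds ratio across K 2x2 tables.
# Each table is (a, b, c, d) with rows (exposed, unexposed) and
# columns (cases, controls); the stratum total is n = a + b + c + d.

def mantel_haenszel_or(tables):
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# hypothetical data: two strata
tables = [(10, 20, 5, 25), (8, 12, 6, 14)]
or_mh = mantel_haenszel_or(tables)
```

The estimator pools evidence across strata without requiring a joint maximization, which is why it remains popular for sparse stratified tables.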

2.
Several methods are available for generating confidence intervals for rate difference, rate ratio, or odds ratio, when comparing two independent binomial proportions or Poisson (exposure‐adjusted) incidence rates. Most methods have some degree of systematic bias in one‐sided coverage, so that a nominal 95% two‐sided interval cannot be assumed to have tail probabilities of 2.5% at each end, and any associated hypothesis test is at risk of inflated type I error rate. Skewness‐corrected asymptotic score methods have been shown to have superior equal‐tailed coverage properties for the binomial case. This paper completes this class of methods by introducing novel skewness corrections for the Poisson case and for odds ratio, with and without stratification. Graphical methods are used to compare the performance of these intervals against selected alternatives. The skewness‐corrected methods perform favourably in all situations—including those with small sample sizes or rare events—and the skewness correction should be considered essential for analysis of rate ratios. The stratified method is found to have excellent coverage properties for a fixed effects analysis. In addition, another new stratified score method is proposed, based on the t‐distribution, which is suitable for use in either a fixed effects or random effects analysis. By using a novel weighting scheme, this approach improves on conventional and modern meta‐analysis methods with weights that rely on crude estimation of stratum variances. In summary, this paper describes methods that are found to be robust for a wide range of applications in the analysis of rates.
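The paper's skewness corrections are not reproduced here, but an uncorrected score-type interval for a Poisson rate ratio can be sketched via the standard conditional argument: given the total count x + y, x is Binomial(x + y, p) with p = ρt₁/(ρt₁ + t₂), so a Wilson interval for p inverts into an interval for ρ = λ₁/λ₂. The counts and exposure times below are hypothetical:

```python
from math import sqrt

def wilson(x, n, z=1.96):
    """Wilson score interval for a binomial proportion x/n."""
    phat = x / n
    denom = 1 + z * z / n
    centre = (phat + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(phat * (1 - phat) / n + z * z / (4 * n * n))
    return centre - half, centre + half

def rate_ratio_ci(x, t1, y, t2, z=1.96):
    """CI for rho = lambda1/lambda2 from two Poisson counts x, y with
    exposures t1, t2: condition on x + y, invert the Wilson limits."""
    lo_p, hi_p = wilson(x, x + y, z)
    to_rho = lambda p: (p / (1 - p)) * (t2 / t1)
    return to_rho(lo_p), to_rho(hi_p)

# hypothetical: 15 events in 100 person-years vs 5 events in 120 person-years
lo, hi = rate_ratio_ci(15, 100.0, 5, 120.0)
```

This plain interval ignores skewness, which is exactly the deficiency (asymmetric one-sided coverage) the paper's corrections address.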

3.
Inference concerning the negative binomial dispersion parameter, denoted by c, is important in many biological and biomedical investigations. Properties of the maximum-likelihood estimator of c and its bias-corrected version have been studied extensively, mainly in terms of bias and efficiency [W.W. Piegorsch, Maximum likelihood estimation for the negative binomial dispersion parameter, Biometrics 46 (1990), pp. 863–867; S.J. Clark and J.N. Perry, Estimation of the negative binomial parameter κ by maximum quasi-likelihood, Biometrics 45 (1989), pp. 309–316; K.K. Saha and S.R. Paul, Bias corrected maximum likelihood estimator of the negative binomial dispersion parameter, Biometrics 61 (2005), pp. 179–185]. However, not much work has been done on the construction of confidence intervals (C.I.s) for c. The purpose of this paper is to study the behaviour of some C.I. procedures for c. We study, by simulations, three Wald-type C.I. procedures based on the asymptotic distributions of the method of moments estimate (mme), the maximum-likelihood estimate (mle) and the bias-corrected mle (bcmle) [K.K. Saha and S.R. Paul, Bias corrected maximum likelihood estimator of the negative binomial dispersion parameter, Biometrics 61 (2005), pp. 179–185] of c. All three methods show serious under-coverage. We further study parametric bootstrap procedures based on these estimates of c, which significantly improve the coverage probabilities. The bootstrap C.I.s based on the mle (Boot-MLE method) and the bcmle (Boot-BCM method) have coverage that is significantly better (empirical coverage close to the nominal coverage) than the corresponding bootstrap C.I. based on the mme, especially for small sample sizes and highly over-dispersed data. However, simulation results on the lengths of the C.I.s show that all three bootstrap procedures have larger average interval lengths. Therefore, for practical data analysis, the bootstrap C.I.s Boot-MLE or Boot-BCM should be used, with Boot-MLE preferable to Boot-BCM in terms of both coverage and length. Furthermore, Boot-MLE needs less computation than Boot-BCM.
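A parametric bootstrap percentile interval based on the mme (the weakest of the three variants studied, but the simplest to sketch) might look like the following. The counts are hypothetical, and the negative binomial (mean m, variance m + cm²) is sampled as a gamma-Poisson mixture with a simple Knuth-style Poisson sampler:

```python
import math
import random
from statistics import mean, variance

def poisson(lam, rng):
    # Knuth's multiplication method; adequate for the small means used here
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def rnegbin(m, c, rng):
    # negative binomial via gamma-Poisson mixture: mean m, variance m + c*m^2
    return poisson(rng.gammavariate(1.0 / c, c * m), rng)

def mme_dispersion(xs):
    m, v = mean(xs), variance(xs)
    return (v - m) / (m * m)

def boot_ci_mme(xs, B=500, alpha=0.05, seed=1):
    """Parametric bootstrap percentile CI for c based on the mme."""
    rng = random.Random(seed)
    m, c = mean(xs), mme_dispersion(xs)
    ests = sorted(mme_dispersion([rnegbin(m, c, rng) for _ in xs])
                  for _ in range(B))
    return ests[int(B * alpha / 2)], ests[int(B * (1 - alpha / 2)) - 1]

# hypothetical over-dispersed counts
counts = [0, 2, 1, 5, 3, 0, 8, 2, 1, 4, 0, 6, 2, 3, 1, 7, 0, 2, 4, 1]
lo, hi = boot_ci_mme(counts)
```

The Boot-MLE and Boot-BCM procedures recommended in the abstract follow the same resampling scheme but refit c by (bias-corrected) maximum likelihood in each bootstrap replicate.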

4.
Azzalini (Scand J Stat 12:171–178, 1985) provided a methodology for introducing skewness into a normal distribution. Using the same method, the skew logistic distribution can be obtained by introducing skewness into the logistic distribution. For the skew logistic distribution, the likelihood equations do not provide explicit solutions for the location and scale parameters. We present a simple method of deriving explicit estimators by approximating the likelihood equations appropriately. We examine numerically the bias and variance of these estimators and show that they are as efficient as the maximum likelihood estimators (MLEs). The coverage probabilities of the pivotal quantities (for the location and scale parameters) based on asymptotic normality are shown to be unsatisfactory, especially when the effective sample size is small. To improve the coverage probabilities and to construct confidence intervals, we suggest the use of simulated percentage points. Finally, we present a numerical example to illustrate the methods of inference developed here.

5.
The problem of interval estimation of the stress–strength reliability involving two independent Weibull distributions is considered. An interval estimation procedure based on the generalized variable (GV) approach is given when the shape parameters are unknown and arbitrary. The coverage probabilities of the GV approach are evaluated by Monte Carlo simulation. Simulation studies show that the proposed generalized variable approach is very satisfactory even for small samples. For the case of equal shape parameter, it is shown that the generalized confidence limits are exact. Some available asymptotic methods for the case of equal shape parameter are described and their coverage probabilities are evaluated using Monte Carlo simulation. Simulation studies indicate that no asymptotic approach based on the likelihood method is satisfactory even for large samples. Applicability of the GV approach for censored samples is also discussed. The results are illustrated using an example.
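For the equal-shape case mentioned above, R = P(Y < X) has the closed form aᵏ/(aᵏ + bᵏ), where k is the common shape and a, b are the Weibull scale parameters of the strength X and stress Y (this follows because Xᵏ and Yᵏ are exponential). A Monte Carlo sanity check of that formula, with hypothetical parameter values:

```python
import random

k, a, b = 2.0, 3.0, 2.0          # common shape; scales of X (strength), Y (stress)
R_exact = a**k / (a**k + b**k)   # closed form when the shapes are equal

rng = random.Random(42)
N = 100_000
# random.weibullvariate(alpha, beta) takes scale alpha and shape beta
hits = sum(rng.weibullvariate(b, k) < rng.weibullvariate(a, k) for _ in range(N))
R_mc = hits / N
```

When the shapes differ, no such closed form exists, which is the situation the GV approach in the paper is designed to handle.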

6.
We obtain approximate Bayes–confidence intervals for a scalar parameter based on directed likelihood. The posterior probabilities of these intervals agree with their unconditional coverage probabilities to fourth order, and with their conditional coverage probabilities to third order. These intervals are constructed for arbitrary smooth prior distributions. A key feature of the construction is that log-likelihood derivatives beyond second order are not required, unlike the asymptotic expansions of Severini.

7.
A stationarity test for Markov chain models is proposed in this paper. Most previous test procedures for Markov chain models are based on the conditional probabilities of a transition matrix; likelihood ratio and Pearson-type chi-square tests have been used for testing the stationarity and order of Markov chains. This paper uses the efficient score test, an extension of the test developed by Tsiatis (1980), for testing the stationarity of Markov chain models based on the marginal distribution, as obtained by Azzalini (1994). To assess the suitability of the proposed method, a numerical example with real-life data and simulation studies comparing it with an alternative test procedure are given.

8.
Despite the simplicity of the Bernoulli process, developing good confidence interval procedures for its parameter—the probability of success p—is deceptively difficult. The binary data yield a discrete number of successes from a discrete number of trials, n. This discreteness results in actual coverage probabilities that oscillate with n for fixed values of p (and with p for fixed n). Moreover, this oscillation necessitates a large sample size to guarantee a good coverage probability when p is close to 0 or 1.

It is well known that the Wilson procedure is superior to many existing procedures because it is less sensitive to p than other procedures, and is therefore less costly. The procedures proposed in this article work as well as the Wilson procedure when 0.1 ≤ p ≤ 0.9, and are even less sensitive (i.e., more robust) than the Wilson procedure when p is close to 0 or 1. Specifically, when the nominal coverage probability is 0.95, the Wilson procedure requires a sample size of 1021 to guarantee that the coverage probabilities stay above 0.92 for any 0.001 ≤ min{p, 1 − p} < 0.01. By contrast, our procedures guarantee the same coverage probabilities but need a sample size of only 177, without increasing either the expected interval width or the standard deviation of the interval width.
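The Wilson interval used as the benchmark above is straightforward to compute; a minimal sketch with hypothetical counts:

```python
from math import sqrt

def wilson_ci(x, n, z=1.96):
    """Wilson score interval for a binomial proportion p, from x successes
    in n trials; z is the normal quantile for the nominal level."""
    phat = x / n
    denom = 1 + z * z / n
    centre = (phat + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(phat * (1 - phat) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# hypothetical: 8 successes in 10 trials
lo, hi = wilson_ci(8, 10)
```

Unlike the Wald interval, the centre is shrunk toward 1/2, which is what tames the coverage oscillation for moderate p; the proposals in the article target the remaining difficulties near p = 0 or 1.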

9.
This article studies the empirical likelihood method for the first-order random coefficient integer-valued autoregressive process. The limiting distribution of the log empirical likelihood ratio statistic is established. A confidence region for the parameter of interest and its coverage probabilities are given, and hypothesis testing is considered. The maximum empirical likelihood estimator for the parameter is derived and its asymptotic properties are established. The performance of the estimator is compared with that of the conditional least squares estimator via simulation.

10.
Occupancy models are used in statistical ecology to estimate species dispersion. The two components of an occupancy model are the detection and occupancy probabilities, with the main interest being in the occupancy probabilities. We show that for the homogeneous occupancy model there is an orthogonal transformation of the parameters that gives a natural two-stage inference procedure based on a conditional likelihood. We then extend this to a partial likelihood that gives explicit estimators of the model parameters. By allowing the separate modeling of the detection and occupancy probabilities, the extension of the two-stage approach to more general models has the potential to simplify the computational routines used there.
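The homogeneous occupancy likelihood described above (occupancy probability ψ, detection probability p, K visits per site) can be sketched directly: a site with y > 0 detections must be occupied, while a site with y = 0 is either occupied-but-undetected or unoccupied. The detection histories below are hypothetical, and the joint MLE is found by a crude grid search rather than the paper's two-stage procedure:

```python
from math import comb, log

def nll(psi, p, ys, K):
    """Negative log-likelihood of the homogeneous occupancy model:
    each site is occupied with probability psi; an occupied site yields
    Binomial(K, p) detections; an unoccupied site yields 0 detections."""
    ll = 0.0
    for y in ys:
        if y > 0:
            ll += log(psi) + log(comb(K, y)) + y * log(p) + (K - y) * log(1 - p)
        else:
            ll += log(psi * (1 - p) ** K + 1 - psi)
    return -ll

# hypothetical detection histories: 6 sites, K = 4 visits each
ys, K = [2, 0, 1, 3, 0, 0], 4

# crude grid search for the joint MLE of (psi, p)
grid = [i / 100 for i in range(1, 100)]
psi_hat, p_hat = min(((a, b) for a in grid for b in grid),
                     key=lambda t: nll(*t, ys, K))
```

The mixture term for the all-zero histories is what entangles ψ and p, and is what the orthogonal transformation in the paper is designed to disentangle.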

11.
12.
The skew t distribution is a flexible parametric family for fitting data, because it includes parameters that regulate skewness and kurtosis. A problem with this distribution is that, for moderate sample sizes, the maximum likelihood estimator of the shape parameter is infinite with positive probability. To address this problem, Sartori (2006) proposed using a modified score function as an estimating equation for the shape parameter. In this note we prove that the resulting modified maximum likelihood estimator is always finite, treating the degrees of freedom as known and greater than or equal to 2.

13.
This paper presents a modified skew-normal (SN) model that contains the normal model as a special case. Unlike the usual SN model, the Fisher information matrix of the proposed model is always non-singular. Despite this desirable property for regular asymptotic inference, as with the SN model, divergence of the maximum likelihood estimator (MLE) of the skewness parameter may occur with positive probability in samples of moderate size. As a solution to this problem, a modified score function is used for the estimation of the skewness parameter. It is proved that the modified MLE is always finite. The quasi-likelihood approach is considered to build confidence intervals. When the model includes location and scale parameters, the proposed method is combined with the unmodified maximum likelihood estimates of these parameters.

14.
The importance of the normal distribution for fitting continuous data is well known. However, in many practical situations the data distribution departs from normality. For example, the sample skewness and the sample kurtosis may be far from 0 and 3, respectively, the values attained by normal distributions. So, it is important to have formal tests of normality against any alternative. D'Agostino et al. [A suggestion for using powerful and informative tests of normality, Am. Statist. 44 (1990), pp. 316–321] review four procedures Z 2(g 1), Z 2(g 2), D and K 2 for testing departure from normality. The first two of these procedures are tests of normality against departure due to skewness and kurtosis, respectively. The other two tests are omnibus tests. An alternative to the normal distribution is the class of skew-normal distributions (see [A. Azzalini, A class of distributions which includes the normal ones, Scand. J. Statist. 12 (1985), pp. 171–178]). In this paper, we obtain a score test (W) and a likelihood ratio test (LR) of goodness of fit of the normal regression model against the skew-normal family of regression models. It turns out that the score test is based on the sample skewness and is of very simple form. The performance of these six procedures, in terms of size and power, is compared using simulations. The level properties of the three statistics LR, W and Z 2(g 1) are similar and close to the nominal level for moderate to large sample sizes. Also, their power properties are similar for small departure from normality due to skewness (γ1≤0.4). Of these, the score test statistic has a very simple form and is computationally much simpler than the other two statistics. The LR statistic, in general, has the highest power, although it is computationally more complex, as it requires estimates of the parameters under the normal model as well as under the skew-normal model. So, the score test may be used to test for normality against small departures from normality due to skewness. Otherwise, the likelihood ratio statistic LR should be used, as it detects general departure from normality (due to both skewness and kurtosis) with, in general, the largest power.
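The sample skewness g1 on which the score test is based is elementary to compute; a minimal sketch:

```python
def sample_skewness(xs):
    """g1 = m3 / m2^(3/2), the moment estimator of skewness: zero for
    symmetric samples, positive for right-skewed samples."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n   # second central moment
    m3 = sum((x - m) ** 3 for x in xs) / n   # third central moment
    return m3 / m2 ** 1.5

g1_sym = sample_skewness([1, 2, 3, 4, 5])    # symmetric sample
g1_right = sample_skewness([1, 1, 1, 10])    # right-skewed sample
```

This simplicity is why the abstract recommends the score test for detecting small skewness-driven departures: it needs no estimation under the skew-normal model at all.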

15.
We discuss higher-order adjustments for a quasi-profile likelihood for a scalar parameter of interest, in order to alleviate some of the problems inherent to the presence of nuisance parameters, such as bias and inconsistency. Indeed, quasi-profile score functions for the parameter of interest have bias of order O(1), and such bias can lead to poor inference on the parameter of interest. The higher-order adjustments are obtained so that the adjusted quasi-profile score estimating function is unbiased and its variance is the negative expected derivative matrix of the adjusted profile estimating equation. The modified quasi-profile likelihood is then obtained as the integral of the adjusted profile estimating function. We discuss two methods for the computation of the modified quasi-profile likelihoods: a bootstrap simulation method and a first-order asymptotic expression, which can be simplified under an orthogonality assumption. Examples in the context of generalized linear models and of robust inference are provided, showing that the use of a modified quasi-profile likelihood ratio statistic may lead to coverage probabilities more accurate than those pertaining to first-order Wald-type confidence intervals.

16.
The aim of this paper is to investigate the robustness properties of likelihood inference with respect to rounding effects. Attention is focused on exponential families and on inference about a scalar parameter of interest, also in the presence of nuisance parameters. A summary value of the influence function of a given statistic, the local-shift sensitivity, is considered. It accounts for small fluctuations in the observations. The main result is that the local-shift sensitivity is bounded for the usual likelihood-based statistics, i.e. the directed likelihood, the Wald and score statistics. It is also bounded for the modified directed likelihood, which is a higher-order adjustment of the directed likelihood. The practical implication is that likelihood inference is expected to be robust with respect to rounding effects. Theoretical analysis is supplemented and confirmed by a number of Monte Carlo studies, performed to assess the coverage probabilities of confidence intervals based on likelihood procedures when data are rounded. In addition, simulations indicate that the directed likelihood is less sensitive to rounding effects than the Wald and score statistics. This provides another criterion for choosing among first-order equivalent likelihood procedures. The modified directed likelihood shows the same robustness as the directed likelihood, so that its gain in inferential accuracy does not come at the price of an increase in instability with respect to rounding.

17.
In this paper, a small-sample asymptotic method is proposed for higher order inference in the stress–strength reliability model, R=P(Y<X), where X and Y are distributed independently as Burr-type X distributions. In a departure from the current literature, we allow the scale parameters of the two distributions to differ, and the likelihood-based third-order inference procedure is applied to obtain inference for R. The difficulty in implementing the method lies in obtaining the constrained maximum likelihood estimates (MLEs). A penalized likelihood method is proposed to handle the numerical complications of maximizing the constrained likelihood model. The proposed procedures are illustrated using a sample of carbon fibre strength data. Our results from simulation studies comparing the coverage probabilities of the proposed small-sample asymptotic method with some existing large-sample asymptotic methods show that the proposed method is very accurate even when the sample sizes are small.

18.
Empirical likelihood inferences for the parameter component in an additive partially linear errors-in-variables model with longitudinal data are investigated in this article. A corrected-attenuation block empirical likelihood procedure is used to estimate the regression coefficients, a corrected-attenuation block empirical log-likelihood ratio statistic is suggested and its asymptotic distribution is obtained. Compared with the method based on normal approximations, our proposed method does not require any consistent estimator for the asymptotic variance and bias. Simulation studies indicate that our proposed method performs better than the method based on normal approximations in terms of relatively higher coverage probabilities and smaller confidence regions. Furthermore, an example of an air pollution and health data set is used to illustrate the performance of the proposed method.

19.
This paper uses graphical methods to illustrate and compare the coverage properties of a number of methods for calculating confidence intervals for the difference between two independent binomial proportions. We investigate both small‐sample and large‐sample properties of both two‐sided and one‐sided coverage, with an emphasis on asymptotic methods. In terms of aligning the smoothed coverage probability surface with the nominal confidence level, we find that the score‐based methods on the whole have the best two‐sided coverage, although they have slight deficiencies for confidence levels of 90% or lower. For an easily taught, hand‐calculated method, the Brown‐Li ‘Jeffreys’ method appears to perform reasonably well, and in most situations, it has better one‐sided coverage than the widely recommended alternatives. In general, we find that the one‐sided properties of many of the available methods are surprisingly poor. In fact, almost none of the existing asymptotic methods achieve equal coverage on both sides of the interval, even with large sample sizes, and consequently if used as a non‐inferiority test, the type I error rate (which is equal to the one‐sided non‐coverage probability) can be inflated. The only exception is the Gart‐Nam ‘skewness‐corrected’ method, which we express using modified notation in order to include a bias correction for improved small‐sample performance, and an optional continuity correction for those seeking more conservative coverage. Using a weighted average of two complementary methods, we also define a new hybrid method that almost matches the performance of the Gart‐Nam interval. Copyright © 2014 John Wiley & Sons, Ltd.
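One established score-based method of the type compared in this paper is Newcombe's 'square-and-add' interval for p1 − p2, which combines the two single-proportion Wilson intervals (this is not the Gart-Nam interval itself). A sketch with hypothetical counts:

```python
from math import sqrt

def wilson(x, n, z=1.96):
    """Wilson score interval for a binomial proportion x/n."""
    phat = x / n
    denom = 1 + z * z / n
    centre = (phat + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(phat * (1 - phat) / n + z * z / (4 * n * n))
    return centre - half, centre + half

def newcombe_diff(x1, n1, x2, n2, z=1.96):
    """Newcombe 'square-and-add' interval for p1 - p2, built from the
    Wilson limits (l1, u1) and (l2, u2) of the two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson(x1, n1, z)
    l2, u2 = wilson(x2, n2, z)
    d = p1 - p2
    return (d - sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2),
            d + sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2))

# hypothetical: 18/30 vs 10/30
lo, hi = newcombe_diff(18, 30, 10, 30)
```

Because each margin inherits the asymmetry of its Wilson interval, the method avoids the boundary failures of the simple Wald difference interval, though (per the paper) its one-sided coverage is still not exactly equal-tailed.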

20.
We consider estimation of the number of cells in a multinomial distribution. This is one version of the species problem: there are many applications, such as the estimation of the number of unobserved species of animals, estimation of vocabulary size, etc. We describe the results of a simulation comparison of three principal frequentist procedures for estimating the number of cells (or species). The first procedure postulates a functional form for the cell probabilities; the second procedure approximates the distribution of the probabilities by a parametric probability density function; and the third procedure is based on an estimate of the sample coverage, i.e. the sum of the probabilities of the observed cells. Among the procedures studied, we find that the third (non-parametric) method is globally preferable; the second (functional parametric) method cannot be recommended; and that, when based on the inverse Gaussian density, the first method is competitive in some cases with the third method. We also discuss Sichel's recent generalized inverse Gaussian-based procedure which, with some refinement, promises to perform at least as well as the non-parametric method in all cases.
