1.
In recent years, a variety of regression models, including zero-inflated and hurdle versions, have been proposed to model a dependent variable in terms of exogenous covariates. Apart from the classical Poisson, negative binomial and generalised Poisson distributions, many proposals have appeared in the statistical literature, perhaps in response to the new possibilities offered by advanced software that now enables researchers to implement numerous special functions in a relatively simple way. However, we believe that a significant research gap remains, since very little attention has been paid to the quasi-binomial distribution, first proposed over fifty years ago. This distribution may constitute a valid alternative to existing regression models in situations in which the variable has bounded support. Therefore, in this paper we present a zero-inflated regression model based on the quasi-binomial distribution, derive its moments and maximum likelihood estimators, and perform score tests to compare the zero-inflated quasi-binomial distribution with the zero-inflated binomial distribution, and the zero-inflated model with the homogeneous model (in which covariates are not considered). The analysis is illustrated with two data sets that are well known in the statistical literature and contain a large number of zeros.
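The zero-inflated binomial distribution used as the comparison baseline in the score test above can be sketched numerically. This is an illustrative sketch only (the parameter values are hypothetical, and the paper's quasi-binomial variant replaces the binomial component):

```python
from scipy.stats import binom

def zib_pmf(k, n, p, pi):
    """Zero-inflated binomial pmf: a point mass at zero with weight pi,
    mixed with a Binomial(n, p) component with weight 1 - pi."""
    return pi * (k == 0) + (1 - pi) * binom.pmf(k, n, p)

# The mixture is a proper distribution: its pmf sums to 1 over 0..n.
total = sum(zib_pmf(k, n=10, p=0.3, pi=0.25) for k in range(11))
```

Note that `zib_pmf(0, ...)` always exceeds the plain binomial probability of zero, which is exactly the "excess zeros" feature the zero-inflated model captures.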

2.
In recent years, there has been considerable interest in regression models based on zero-inflated distributions. These models are commonly encountered in many disciplines, such as medicine, public health, and environmental sciences, among others. The zero-inflated Poisson (ZIP) model has been typically considered for these types of problems. However, the ZIP model can fail if the non-zero counts are overdispersed in relation to the Poisson distribution, hence the zero-inflated negative binomial (ZINB) model may be more appropriate. In this paper, we present a Bayesian approach for fitting the ZINB regression model. This model considers that an observed zero may come from a point mass distribution at zero or from the negative binomial model. The likelihood function is utilized to compute not only some Bayesian model selection measures, but also to develop Bayesian case-deletion influence diagnostics based on q-divergence measures. The approach can be easily implemented using standard Bayesian software, such as WinBUGS. The performance of the proposed method is evaluated with a simulation study. Further, a real data set is analyzed, where we show that ZINB regression models seem to fit the data better than the Poisson counterpart.
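The idea that an observed zero may come from either the point mass or the negative binomial component can be made concrete with a short calculation. This is a hedged sketch with hypothetical parameter values, not the paper's Bayesian fit:

```python
from scipy.stats import nbinom

pi, r, p = 0.3, 2.0, 0.4                 # inflation weight; NB size and success prob.
p_zero_nb = nbinom.pmf(0, r, p)          # P(Y = 0) under the NB component alone
p_zero = pi + (1 - pi) * p_zero_nb       # total probability of observing a zero

# Posterior probability that an observed zero is a "structural" zero
# (i.e. came from the point mass rather than the NB component):
post_structural = pi / p_zero
```

With these values roughly 73% of observed zeros are attributed to the point mass; the q-divergence case-deletion diagnostics in the paper operate on the full mixture likelihood of which this is one term.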

3.
Hall (2000) has described zero‐inflated Poisson and binomial regression models that include random effects to account for excess zeros and additional sources of heterogeneity in the data. The authors of the present paper propose a general score test for the null hypothesis that variance components associated with these random effects are zero. For a zero‐inflated Poisson model with random intercept, the new test reduces to an alternative to the overdispersion test of Ridout, Demétrio & Hinde (2001). The authors also examine their general test in the special case of the zero‐inflated binomial model with random intercept and propose an overdispersion test in that context which is based on a beta‐binomial alternative.

4.
Rao (1963) introduced what we call an additive damage model, in which an original observation is subjected to damage according to a specified probability law (the survival distribution). In this paper, we consider a bivariate observation whose second component is subjected to damage. Using the invariance of the linearity of regression of the first component on the second under the transition of the second component from the original to the damaged state, we obtain characterizations of the Poisson, binomial and negative binomial distributions within the framework of the additive damage model.

5.
Analysis of the human sex ratio by using overdispersion models
For study of the human sex ratio, one of the most important data sets was collected in Saxony in the 19th century by Geissler. The data contain the sizes of families, with the sex of all children, at the time of registration of the birth of a child. These data are reanalysed to determine how the probability for each sex changes with family size. Three models for overdispersion are fitted: the beta–binomial model of Skellam, the 'multiplicative' binomial model of Altham and the double-binomial model of Efron. For each distribution, both the probability and the dispersion parameters are allowed to vary simultaneously with family size according to two separate regression equations. A finite mixture model is also fitted. The models are fitted using non-linear Poisson regression. They are compared using direct likelihood methods based on the Akaike information criterion. The multiplicative and beta–binomial models provide similar fits, substantially better than that of the double-binomial model. All models show that both the probability that the child is a boy and the dispersion are greater in larger families. There is also some indication that a point probability mass is needed for families containing children uniquely of one sex.
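The sense in which the beta–binomial captures extra-binomial variation in family-level sex counts can be checked directly: for the same mean probability, its variance strictly exceeds the binomial variance. A minimal sketch with hypothetical parameters (not Geissler's fitted values):

```python
from scipy.stats import betabinom, binom

n, a, b = 12, 6.0, 6.0          # family size; symmetric Beta(a, b) gives p = 0.5
p = a / (a + b)

var_bb = betabinom.var(n, a, b)  # n*p*(1-p)*(n+a+b)/(a+b+1)
var_bin = binom.var(n, p)        # n*p*(1-p)

# The beta-binomial is overdispersed relative to the binomial with the
# same mean, which is what the Saxony family data require.
```

Smaller `a + b` corresponds to stronger overdispersion, which is how the paper lets the dispersion parameter vary with family size.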

6.
Count data often display more zero outcomes than are expected under the Poisson regression model. The zero-inflated Poisson regression model has been suggested to handle zero-inflated data, whereas the zero-inflated negative binomial (ZINB) regression model has been fitted for zero-inflated data with additional overdispersion. For bivariate and zero-inflated cases, several regression models such as the bivariate zero-inflated Poisson (BZIP) and bivariate zero-inflated negative binomial (BZINB) have been considered. This paper introduces several forms of nested BZINB regression model which can be fitted to bivariate and zero-inflated count data. The mean–variance approach is used for comparing the BZIP and our forms of BZINB regression model in this study. A similar approach was also used by past researchers for defining several negative binomial and zero-inflated negative binomial regression models based on the appearance of linear and quadratic terms of the variance function. The nested BZINB regression models proposed in this study have several advantages; the likelihood ratio tests can be performed for choosing the best model, the models have flexible forms of marginal mean–variance relationship, the models can be fitted to bivariate zero-inflated count data with positive or negative correlations, and the models allow additional overdispersion of the two dependent variables.
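The univariate building block behind the mean–variance comparison can be verified numerically: for a zero-inflated NB with inflation weight pi and NB mean mu and size r, E[Y] = (1-pi)·mu and Var[Y] = (1-pi)·mu·(1 + mu/r + pi·mu). A sketch with hypothetical parameters (the paper's models are bivariate; this shows only one margin):

```python
from scipy.stats import nbinom

pi, r, p = 0.2, 3.0, 0.5
mu = nbinom.mean(r, p)           # NB component mean, here r*(1-p)/p = 3

# Moments of the zero-inflated mixture by direct enumeration
# (support truncated at 200, where the NB tail is negligible):
ks = range(200)
pmf = [pi * (k == 0) + (1 - pi) * nbinom.pmf(k, r, p) for k in ks]
mean = sum(k * q for k, q in zip(ks, pmf))
var = sum((k - mean) ** 2 * q for k, q in zip(ks, pmf))

# Closed forms the enumeration should reproduce:
mean_cf = (1 - pi) * mu
var_cf = (1 - pi) * mu * (1 + mu / r + pi * mu)
```

The quadratic terms `mu/r` (NB overdispersion) and `pi*mu` (zero inflation) in the variance function are exactly the kind of terms the nested BZINB forms toggle on and off.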

7.
This paper uses a new bivariate negative binomial distribution to model scores in the 1996 Australian Rugby League competition. First, scores are modelled using the home ground advantage but ignoring the actual teams playing. Then a bivariate negative binomial regression model is introduced that takes into account the offensive and defensive capacities of each team. Finally, the 1996 season is simulated using the latter model to determine whether or not Manly did indeed deserve to win the competition.

8.
This paper provides a partial solution to a problem posed by J. Neyman (1965) regarding the characterization of the multivariate negative binomial distribution based on the properties of regression. It is shown that some of the properties of regression characterize the form of the nonsingular dispersion matrix of the parent distribution, which, interestingly enough, corresponds to only two types, namely those of the positive and negative multivariate binomial distributions.

9.
In this article, for the first time, we propose the negative binomial–beta Weibull (BW) regression model for studying the recurrence of prostate cancer and for predicting the cure fraction of patients with clinically localized prostate cancer treated by open radical prostatectomy. The cure model considers that a fraction of the survivors are cured of the disease. The survival function for the population of patients can be modeled by a parametric cure model using the BW distribution. We derive an explicit expansion for the moments of the recurrence time distribution for the uncured individuals. The proposed distribution can be used to model survival data when the hazard rate function is increasing, decreasing, unimodal or bathtub shaped. Another advantage is that the proposed model includes as special sub-models some of the well-known cure rate models discussed in the literature. We derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes. We analyze a real data set for localized prostate cancer patients after open radical prostatectomy.

10.
This paper proposes modified splitting criteria for classification and regression trees by modifying the definition of the deviance. The modified deviance is based on local averaging instead of global averaging and is more successful at modelling data with interactions. The paper shows that the modified criteria result in much simpler trees for pure interaction data (no main effects) and can produce trees with fewer errors and lower residual mean deviances than those produced by Clark & Pregibon's (1992) method when applied to real datasets with strong interaction effects.

11.
The estimation of abundance from presence–absence data is an intriguing problem in applied statistics. The classical Poisson model makes strong independence and homogeneity assumptions and in practice generally underestimates the true abundance. A controversial ad hoc method based on negative‐binomial counts (Am. Nat.) has been empirically successful but lacks theoretical justification. We first present an alternative estimator of abundance based on a paired negative binomial model that is consistent and asymptotically normally distributed. A quadruple negative binomial extension is also developed, which yields the previous ad hoc approach and resolves the controversy in the literature. We examine the performance of the estimators in a simulation study and estimate the abundance of 44 tree species in a permanent forest plot.
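The classical Poisson baseline that the abstract says underestimates abundance can be sketched in a few lines. Under homogeneity, a quadrat is occupied with probability 1 - exp(-lambda), so lambda is recovered from the occupied fraction. The counts below are hypothetical, not the paper's forest-plot data:

```python
import math

# Presence-absence survey: 34 of 100 quadrats contain the species.
occupied, total = 34, 100
p_hat = occupied / total

# Classical Poisson estimator: P(occupied) = 1 - exp(-lambda)
# implies lambda_hat = -log(1 - p_hat), the mean count per quadrat.
lambda_hat = -math.log(1 - p_hat)
abundance_hat = total * lambda_hat   # estimated total count over the plot
```

When individuals are clustered (the negative binomial case the paper studies), occupancy understates density more severely, which is why the Poisson estimator is generally biased low for real populations.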

12.
The problem of discriminating between the Poisson and binomial models is discussed in the context of a detailed statistical analysis of the number of appointments of the U.S. Supreme Court justices from 1789 to 2004. Various new and existing tests are examined. The analysis shows that both simple Poisson and simple binomial models are equally appropriate for describing the data. No firm statistical evidence in favour of an exponential Poisson regression model was found. Two attendant results were obtained by simulation: firstly, that the likelihood ratio test is the most powerful of those considered when testing for the Poisson versus binomial and, secondly, that the classical variance test with an upper-tail critical region is biased.
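The classical variance test mentioned above compares the sample variance with the sample mean, which are equal under a Poisson model. A minimal sketch on a hypothetical count series (not the Supreme Court data):

```python
from scipy.stats import chi2

# Under the Poisson null, D = sum((x - xbar)^2) / xbar is approximately
# chi-square distributed with n - 1 degrees of freedom.
counts = [3, 1, 4, 2, 0, 5, 2, 3, 1, 4]   # hypothetical yearly counts
n = len(counts)
xbar = sum(counts) / n
D = sum((x - xbar) ** 2 for x in counts) / xbar

# Upper-tail critical region: large D signals overdispersion relative
# to Poisson (the paper finds this one-sided version is biased).
p_value = chi2.sf(D, df=n - 1)
```

A binomial alternative is underdispersed (variance below the mean), so discriminating between Poisson and binomial in principle involves the lower tail of D as well.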

13.
Data envelopment analysis (DEA) is a deterministic econometric model for calculating efficiency by using data from an observed set of decision-making units (DMUs). We propose a method for calculating the distribution of efficiency scores. Our framework relies on estimating data from an unobserved set of DMUs. The model provides posterior predictive data for the unobserved DMUs to augment the frontier in the DEA that provides a posterior predictive distribution for the efficiency scores. We explore the method on a multiple-input and multiple-output DEA model. The data for the example are from a comprehensive examination of how nursing homes complete a standardized mandatory assessment of residents.

14.
In this paper, we propose a method to assess influence in skew-Birnbaum–Saunders regression models, which are an extension, based on the skew-normal distribution, of the usual Birnbaum–Saunders (BS) regression model. An interesting characteristic of the new regression model is its capacity to predict extreme percentiles, which is not possible with the BS model. In addition, since the observed likelihood function associated with the new regression model is more complex than that of the usual model, we facilitate the parameter estimation using a type-EM algorithm. Moreover, we employ influence diagnostic tools that consider this algorithm. Finally, a numerical illustration includes a brief simulation study and an analysis of real data in order to demonstrate the proposed methodology.

15.
The negative binomial group distribution was proposed in the literature, motivated by inverse sampling under group inspection: products are inspected group by group, and the number of non-conforming items in a group is recorded only once the inspection of the whole group is finished. The non-conforming probability p of the population is thus the parameter of interest. In this paper, the construction of confidence intervals for this parameter is investigated. The common normal approximation and the exact method are applied. To overcome the drawbacks of these commonly used methods, a composite method based on the confidence intervals of the negative binomial distribution is proposed, which benefits from the relationship between the negative binomial distribution and the negative binomial group distribution. Simulation studies are carried out to examine the performance of our methods. A real data example is also presented to illustrate the application of our method.

16.
The objective of this paper is to propose an efficient estimation procedure in a marginal mean regression model for longitudinal count data and to develop a hypothesis test for detecting the presence of overdispersion. We extend the matrix expansion idea of quadratic inference functions to the negative binomial regression framework, which entails accommodating both the within-subject correlation and the overdispersion issue. Theoretical and numerical results show that the proposed procedure yields an asymptotically more efficient estimator than one ignoring either the within-subject correlation or the overdispersion. When overdispersion is absent from the data, the proposed method may lose some estimation efficiency in practice, whereas the Poisson-based regression model then fits the data sufficiently well. Therefore, we construct a hypothesis test that recommends an appropriate model for the analysis of correlated count data. Extensive simulation studies indicate that the proposed test identifies the effective model consistently. The proposed procedure is also applied to a transportation safety study, for which it recommends the proposed negative binomial regression model.

17.
On the use of corrections for overdispersion
In studying fluctuations in the size of a black grouse (Tetrao tetrix) population, an autoregressive model using climatic conditions appears to follow the change quite well. However, the deviance of the model is considerably larger than its number of degrees of freedom. A widely used statistical rule of thumb holds that overdispersion is present in such situations, but model selection based on a direct likelihood approach can produce opposing results. Two further examples, of binomial and of Poisson data, have models with deviances that are almost twice the degrees of freedom, and yet various overdispersion models do not fit better than the standard model for independent data. This can arise because the rule of thumb considers only a point estimate of dispersion, without regard for any measure of its precision. A reasonable criterion for detecting overdispersion is that the deviance be at least twice the number of degrees of freedom, in keeping with the familiar Akaike information criterion, but the actual presence of overdispersion should then be checked by some appropriate modelling procedure.
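The deviance-to-degrees-of-freedom check at the heart of this rule of thumb is easy to compute by hand. A sketch with hypothetical counts and fitted means (not the black grouse data):

```python
import math

def poisson_deviance(y, mu):
    """Poisson deviance: 2 * sum(y*log(y/mu) - (y - mu)); a term with
    y = 0 contributes 2*mu by the convention 0*log(0) = 0."""
    dev = 0.0
    for yi, mi in zip(y, mu):
        dev += 2 * ((yi * math.log(yi / mi) if yi > 0 else 0.0) - (yi - mi))
    return dev

y = [4, 0, 7, 2, 9, 1]                 # hypothetical observed counts
mu = [3.0, 1.5, 5.0, 2.5, 6.0, 2.0]    # fitted means from some model
df = len(y) - 2                        # residual df, assuming 2 fitted parameters
ratio = poisson_deviance(y, mu) / df
```

Here the ratio is about 1.5, above 1 but below 2, so the criterion described above would not flag overdispersion, and in any case the decision should rest on explicitly fitting an overdispersion model.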

18.
Relative risks are often considered preferable to odds ratios for quantifying the association between a predictor and a binary outcome. Relative risk regression is an alternative to logistic regression where the parameters are relative risks rather than odds ratios. It uses a log link binomial generalised linear model, or log‐binomial model, which requires parameter constraints to prevent probabilities from exceeding 1. This leads to numerical problems with standard approaches for finding the maximum likelihood estimate (MLE), such as Fisher scoring, and has motivated various non‐MLE approaches. In this paper we discuss the roles of the MLE and its main competitors for relative risk regression. It is argued that reliable alternatives to Fisher scoring mean that numerical issues are no longer a motivation for non‐MLE methods. Nonetheless, non‐MLE methods may be worthwhile for other reasons and we evaluate this possibility for alternatives within a class of quasi‐likelihood methods. The MLE obtained using a reliable computational method is recommended, but this approach requires bootstrapping when estimates are on the parameter space boundary. If convenience is paramount, then quasi‐likelihood estimation can be a good alternative, although parameter constraints may be violated. Sensitivity to model misspecification and outliers is also discussed along with recommendations and priorities for future research.
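The distinction between the relative risk and the odds ratio that motivates log-binomial modelling shows up already in a single 2x2 table. A sketch with hypothetical counts:

```python
# Hypothetical 2x2 table: rows = exposed/unexposed, columns = event/no event.
a, b = 30, 70    # exposed: 30 events out of 100
c, d = 10, 90    # unexposed: 10 events out of 100

risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
relative_risk = risk_exposed / risk_unexposed   # 0.30 / 0.10 = 3.0
odds_ratio = (a * d) / (b * c)                  # (30*90)/(70*10) ~ 3.86
```

The odds ratio overstates the relative risk whenever the outcome is common, which is one reason relative risk regression (exponentiated log-binomial coefficients are relative risks) is often preferred to logistic regression here.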

19.
Extended Poisson process modelling is generalised to allow for covariate-dependent dispersion as well as a covariate-dependent mean response. This is done by a re-parameterisation that uses approximate expressions for the mean and variance. Such modelling allows under- and over-dispersion, or a combination of both, in the same data set to be accommodated within the same modelling framework. All the necessary calculations can be done numerically, enabling maximum likelihood estimation of all model parameters to be carried out. The modelling is applied to re-analyse two published data sets, where there is evidence of covariate-dependent dispersion, with the modelling leading to more informative analyses of these data and more appropriate measures of the precision of any estimates.

20.
The two-sided power (TSP) distribution is a flexible two-parameter distribution having the uniform, power function and triangular distributions as sub-distributions, and it is a reasonable alternative to the beta distribution in some cases. In this work, we introduce the TSP-binomial model, which is defined as a mixture of binomial distributions, with the binomial parameter p having a TSP distribution. We study its distributional properties and demonstrate its use on some data. It is shown that the newly defined model is a useful candidate for overdispersed binomial data.
