Similar documents
 20 similar documents found (search time: 15 ms)
1.
Zero-inflated data most often arise as counts. However, there are practical situations in which continuous data contain an excess of zeros, and in these cases the zero-inflated Poisson, binomial and negative binomial models are not suitable. To fill this gap, we propose the zero-spiked gamma-Weibull (ZSGW) model, obtained by mixing a distribution degenerate at zero with the gamma-Weibull distribution, which has positive support. The model estimates simultaneously the effects of explanatory variables on the response variable and on the zero spike. We consider a frequentist analysis and a non-parametric bootstrap for estimating the parameters of the ZSGW regression model, and derive the appropriate matrices for assessing local influence on the model parameters. We illustrate the performance of the proposed regression model by means of a real data set (copaiba oil resin production) from a study carried out at the Department of Forest Science of the Luiz de Queiroz School of Agriculture, University of São Paulo. Based on the ZSGW regression model, we determine the explanatory variables that influence the excess of zeros in resin oil production and identify influential observations. We also show empirically that the proposed regression model can outperform the zero-adjusted inverse Gaussian regression model in fitting zero-inflated positive continuous data.
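A minimal sketch of the zero-spiked idea: because the mixture log-likelihood factorizes into a Bernoulli term for the spike and a separate term for the positive density, the two parts can be fitted independently. The sketch below substitutes a plain exponential for the gamma-Weibull positive part purely to keep it dependency-free; the names `pi_true` and `m_true` are illustrative, not from the paper.

```python
import math
import random

random.seed(42)

# Simulate zero-spiked positive data: zero with probability pi_true,
# otherwise exponential with mean m_true (a deliberately simple
# stand-in for the gamma-Weibull positive part).
pi_true, m_true, n = 0.3, 5.0, 10_000
data = [0.0 if random.random() < pi_true else random.expovariate(1 / m_true)
        for _ in range(n)]

# The mixture log-likelihood factorizes into a Bernoulli part for the
# spike and a separate part for the positive density, so the two can
# be maximized independently.
n_zero = sum(1 for y in data if y == 0.0)
pi_hat = n_zero / n                     # MLE of the spike probability
m_hat = sum(data) / (n - n_zero)        # exponential-mean MLE (zeros add 0)

loglik = (n_zero * math.log(pi_hat)
          + (n - n_zero) * math.log(1 - pi_hat)
          + sum(-math.log(m_hat) - y / m_hat for y in data if y > 0))
```

With covariates, the constant `pi_hat` and `m_hat` would be replaced by, for example, a logistic regression for the spike and a log-linear regression for the positive mean, each fitted to its own factor of the likelihood.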

2.
In binary regression, data are imbalanced when the proportion of zeros (or ones) is much larger than the proportion of ones (or zeros). In this work, we evaluate two methods developed to deal with imbalanced data and compare them with the use of asymmetric links. The simulation results show that the correction methods do not adequately remove the bias in the estimated regression coefficients, and that the models with power and reverse-power links considered here produce better results for certain types of imbalanced data. Additionally, we present an application to imbalanced data and identify the best model among the various ones proposed. The parameters are estimated with a Bayesian approach using Hamiltonian Monte Carlo via the No-U-Turn Sampler, and the models are compared using several criteria for model comparison, predictive evaluation and quantile residuals.

3.
In some clinical, environmental or economic studies, researchers are interested in a semi-continuous outcome variable that takes the value zero with a discrete probability and has a continuous distribution for the non-zero values. Due to the measuring mechanism, it is not always possible to observe some outcomes fully, and only an upper bound is recorded. This is left-censored data: we observe only the maximum of the outcome and an independent censoring variable, together with a censoring indicator. In this article, we introduce a mixture semi-parametric regression model. We use a parametric model to investigate the influence of covariates on the discrete probability of the value zero; for the non-zero part of the outcome, a semi-parametric Cox regression model describes the conditional hazard function. The parameters of this mixture model are estimated by a likelihood method in which the infinite-dimensional baseline hazard function is estimated by a step function. We establish the identifiability of the model and the consistency of the parameter estimators, study the finite-sample behaviour of the estimators through a simulation study, and illustrate the model on a practical data example.

4.
In statistical modelling, it is often of interest to evaluate non-negative quantities that capture heterogeneity in the population, such as variances, mixing proportions and dispersion parameters. When the heterogeneity depends on covariates, the implied homogeneity hypotheses are nonstandard and existing inferential techniques do not apply. In this paper, we develop a quasi-score test statistic for homogeneity against heterogeneity that varies with a covariate profile through a regression model, and establish the limiting null distribution of the test as a functional of mixtures of chi-square processes. The methodology does not require the full distribution of the data to be specified: a general estimating function is assumed for the finite-dimensional component of the model that is of interest, while other characteristics of the population are left completely unspecified. We apply the methodology to evaluate the excess-zero proportion in zero-inflated models for count data. Our numerical simulations show that the proposed test can greatly improve efficiency over tests of homogeneity that neglect covariate information under the alternative hypothesis. An empirical application to dental caries indices demonstrates the importance and practical utility of the methodology in detecting excess zeros in the data.

5.
We study the correlation structure for a mixture of ordinal and continuous repeated measures using a Bayesian approach. We assume a multivariate probit model for the ordinal variables and a normal linear regression for the continuous variables, where latent normal variables underlying the ordinal data are correlated with continuous variables in the model. Due to the probit model assumption, we are required to sample a covariance matrix with some of the diagonal elements equal to one. The key computational idea is to use parameter-extended data augmentation, which involves applying the Metropolis-Hastings algorithm to get a sample from the posterior distribution of the covariance matrix incorporating the relevant restrictions. The methodology is illustrated through a simulated example and through an application to data from the UCLA Brain Injury Research Center.

6.
Sequential regression multiple imputation has emerged as a popular approach for handling incomplete data with complex features. In this approach, imputations for each missing variable are produced from a regression model using the other variables as predictors, in a cyclic manner. A normality assumption is frequently imposed on the error distributions in the conditional regression models for continuous variables, even though it rarely holds in real scenarios. We use a simulation study to investigate the performance of several sequential regression imputation methods when the error distribution is flat or heavy-tailed. The methods evaluated include sequential normal imputation and several extensions that adjust for non-normal error terms. All methods perform well for estimating the marginal mean and proportion, as well as the regression coefficient, when the error distribution is flat or moderately heavy-tailed. When the error distribution is strongly heavy-tailed, all methods retain good performance for the mean, and the adjusted methods are robust for the proportion; however, all methods can perform poorly for the regression coefficient because they cannot accommodate the extreme values well. We caution against the mechanical use of sequential regression imputation without model checking and diagnostics.
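To make the cyclic scheme concrete, here is a hedged single-variable sketch (a hypothetical linear model in NumPy, not the authors' code): the incomplete variable is repeatedly regressed on the complete one, and its missing entries are redrawn from the fitted conditional distribution with normal errors, which is exactly the normality assumption the abstract warns can fail.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
y1 = rng.normal(0.0, 1.0, n)
y2 = 1.0 + 2.0 * y1 + rng.normal(0.0, 1.0, n)  # true conditional model
miss = rng.random(n) < 0.3                      # 30% missing completely at random
obs = ~miss

# Initialize missing entries with the observed mean.
y2_imp = y2.copy()
y2_imp[miss] = y2[obs].mean()

# Cyclic passes; with several incomplete variables each would be
# regressed in turn on all the others.  Here only y2 is incomplete,
# so the fit (on observed cases) is the same in every pass.
for _ in range(5):
    X = np.column_stack([np.ones(n), y1])
    beta, *_ = np.linalg.lstsq(X[obs], y2_imp[obs], rcond=None)
    resid = y2_imp[obs] - X[obs] @ beta
    sigma = resid.std(ddof=2)
    # Draw imputations from the fitted normal conditional model.
    y2_imp[miss] = X[miss] @ beta + rng.normal(0.0, sigma, miss.sum())
```

A heavy-tailed error distribution would make the drawn residuals too light-tailed, which is the failure mode the simulation study in the abstract targets.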

7.
By relaxing the unbiasedness condition, we can often obtain more accurate estimators through the bias-variance trade-off. In this paper, we propose a class of shrinkage estimators of a proportion that improve on the sample proportion, and we provide the "optimal" amount of shrinkage. The advantages of the proposed estimators are established theoretically and explored empirically through simulation studies and real data analyses.
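As an illustration of the bias-variance trade-off, the following sketch compares the sample proportion with a classical shrinkage estimator that pulls toward 1/2 with weight sqrt(n) (the minimax choice under squared-error loss). This is a generic textbook example, not necessarily the class of estimators proposed in the paper.

```python
import random

random.seed(1)

def shrink(x, n):
    # Shrink the sample proportion x/n toward 1/2 with weight sqrt(n);
    # this is the classical minimax estimator under squared-error loss.
    k = n ** 0.5
    return (x + k / 2) / (n + k)

n, p, reps = 20, 0.5, 50_000
mse_sample = mse_shrunk = 0.0
for _ in range(reps):
    x = sum(random.random() < p for _ in range(n))
    mse_sample += (x / n - p) ** 2
    mse_shrunk += (shrink(x, n) - p) ** 2
mse_sample /= reps
mse_shrunk /= reps
# Near p = 1/2 the biased, shrunken estimator trades a little bias
# for a variance reduction and beats the sample proportion in MSE.
```

For p near 0 or 1 the ranking reverses, which is why the paper's "optimal" shrinkage amount matters.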

8.
Using a multivariate latent-variable approach, this article proposes new general models to analyze correlated bounded continuous and categorical (nominal and/or ordinal) responses, with and without non-ignorable missing values. First, we discuss regression methods for jointly analyzing continuous, nominal and ordinal responses, motivated by data from studies of toxicity development. Second, using the beta and Dirichlet distributions, we extend the models so that bounded continuous responses replace the continuous ones. The joint distribution of the bounded continuous, nominal and ordinal variables is decomposed into a marginal multinomial distribution for the nominal variable and a conditional multivariate joint distribution for the bounded continuous and ordinal variables given the nominal variable. We estimate the regression parameters of the new general location models by maximum likelihood. A sensitivity analysis is also performed, using the maximal normal curvature to study the influence of small perturbations of the parameters of the missing-data mechanism. The proposed models are applied to two data sets: BMI, steatosis and osteoporosis data, and Tehran household expenditure budgets.

9.
Zero-adjusted regression models fit variables that are discrete at zero and continuous on some interval of the positive real line. Diagnostic analysis in these models is usually performed using the randomized quantile residual, which is useful for checking the overall adequacy of a zero-adjusted regression model but may fail to identify some outliers. In this work, we introduce a class of residuals for outlier identification in zero-adjusted regression models. Monte Carlo simulation studies and two applications suggest that one member of this class has good properties and detects outliers that the randomized quantile residual misses.
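A minimal sketch of the randomized quantile residual for zero-adjusted data, assuming an exponential continuous part and known parameters for simplicity: the CDF jumps from 0 to `pi` at zero, so for zero observations the residual is based on a uniform draw over that jump.

```python
import math
import random
from statistics import NormalDist

random.seed(7)
ND = NormalDist()

# Zero-adjusted data: P(Y = 0) = pi, and Y | Y > 0 exponential with
# mean m -- a simple stand-in for the continuous component; the true
# parameters play the role of the fitted model.
pi, m, n = 0.25, 2.0, 5_000
ys = [0.0 if random.random() < pi else random.expovariate(1 / m)
      for _ in range(n)]

def rqr(y):
    if y == 0.0:
        # The mixture CDF jumps from 0 to pi at zero; randomizing
        # uniformly over the jump makes the residuals continuous.
        u = random.uniform(1e-12, pi)   # lower bound avoids u == 0 exactly
    else:
        u = pi + (1 - pi) * (1 - math.exp(-y / m))
    return ND.inv_cdf(u)

res = [rqr(y) for y in ys]
# Under a correctly specified model the residuals are i.i.d. N(0, 1).
```

The limitation the paper addresses is visible here: the randomization injects noise, so an individual unusual observation can be masked by its uniform draw.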

10.
11.
Tweedie regression models (TRMs) provide a flexible family of distributions for non-negative right-skewed data and can handle continuous data with a probability mass at zero. Maximum likelihood (ML) estimation and inference for TRMs are complicated by an infinite sum in the probability function and non-trivial restrictions on the power parameter space. In this paper, we propose two approaches for fitting TRMs: quasi-likelihood (QML) and pseudo-likelihood (PML). We discuss their asymptotic properties and perform simulation studies to compare our methods with ML. We show that the QML method provides asymptotically efficient estimation of the regression parameters, and the simulation studies show that the QML and PML approaches yield estimates, standard errors and coverage rates similar to ML. Furthermore, the second-moment assumptions required by the QML and PML methods enable us to extend the TRMs to a class of quasi-TRMs in Wedderburn's style, which eliminates the non-trivial restriction on the power parameter space and thus provides a flexible regression model for continuous data. We provide an R implementation and illustrate the application of TRMs on three data sets.
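The quasi-likelihood idea can be sketched with a plain IRLS loop that uses only the assumed first two moments, E[Y] = exp(X beta) and Var[Y] proportional to mu**p: the estimating equation is unbiased whatever the true distribution, so the data below are deliberately drawn from a gamma rather than a Tweedie. This is an illustrative NumPy sketch, not the authors' R implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Mean model mu = exp(X beta) with working variance V(mu) = mu**p.
n, p = 2_000, 1.5
x = rng.uniform(-1.0, 1.0, n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([0.5, 0.3])
mu = np.exp(X @ beta_true)
y = rng.gamma(shape=5.0, scale=mu / 5.0)  # any law with this mean works

beta = np.zeros(2)
for _ in range(25):                       # IRLS for the quasi-score
    eta = X @ beta
    mu = np.exp(eta)
    w = mu ** (2 - p)                     # weights (dmu/deta)^2 / V(mu)
    z = eta + (y - mu) / mu               # working response, log link
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ z)
```

Because only E[Y - mu] = 0 is used, the fit stays consistent for `beta` even though the simulated variance follows p = 2 while the working variance assumes p = 1.5.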

12.
Zero-inflated data abound in ecological studies as well as in other scientific fields. Non-parametric regression with a zero-inflated response may be studied via the zero-inflated generalized additive model (ZIGAM), a probabilistic mixture of a point mass at zero and a regular exponential-family component. We propose the (partially) constrained ZIGAM, which assumes that some covariates affect the probability of non-zero-inflation and the mean of the exponential-family component proportionally on the link scales. When this assumption holds, the new approach provides a unified framework for modelling zero-inflated data that is more parsimonious and efficient than the unconstrained ZIGAM. We develop an iterative estimation algorithm, discuss confidence-interval construction for the estimator and derive some asymptotic properties. We also propose a Bayesian model-selection criterion for choosing between the unconstrained and constrained ZIGAMs. The new methods are illustrated with both simulated data and a real application to jellyfish abundance data.

13.
Distribution function estimation plays a foundational role in statistics, since the population distribution is involved in most statistical inference yet is usually unknown. In this paper, we consider estimating the distribution function of a response variable Y in regression problems with missing responses. We prove that the augmented inverse probability weighted (AIPW) estimator converges weakly to a zero-mean Gaussian process. An AIPW empirical log-likelihood function is also defined, and we show that it converges weakly to the square of a Gaussian process with mean zero and variance one. We apply these results to construct confidence bands for the distribution function of Y based on the Gaussian-process approximation and on the empirical likelihood. A simulation study evaluates the confidence bands.
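A hedged sketch of the inverse-probability-weighting principle behind the estimator, keeping only the IPW term and using the true propensity; the augmentation term that the paper adds for efficiency is omitted here.

```python
import numpy as np

rng = np.random.default_rng(11)

# Responses missing at random: the probability of observing y depends
# on x, which also drives y, so complete-case analysis is biased.
n = 5_000
x = rng.normal(0.0, 1.0, n)
y = x + rng.normal(0.0, 1.0, n)            # Y ~ N(0, 2), F_Y(0) = 0.5
prop = 1.0 / (1.0 + np.exp(-(0.5 + x)))    # true missingness propensity
delta = rng.random(n) < prop               # 1 = response observed

def F_ipw(t):
    # Weight each observed indicator by its inverse propensity to
    # correct for the x-dependent missingness.
    return float(np.mean(delta * (y <= t) / prop))

def F_cc(t):
    # Naive complete-case estimate, biased under this mechanism.
    return float(np.mean(y[delta] <= t))
```

The full AIPW estimator adds a term built from a working model for E[I(Y <= t) | X], which leaves the estimator consistent if either the propensity or that model is correct.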

14.
We develop a flexible non-parametric Bayesian model for regional disease-prevalence estimation based on cross-sectional data obtained from several subpopulations or clusters such as villages, cities, or herds. The subpopulation prevalences are modeled with a mixture distribution that allows for zero prevalence, and the distribution of prevalences among diseased subpopulations is modeled as a mixture of finite Polya trees. Inferences can be obtained for (1) the proportion of diseased subpopulations in a region, (2) the distribution of regional prevalences, (3) the mean and median prevalence in the region, (4) the prevalence of any sampled subpopulation, and (5) predictive distributions of prevalences for regional subpopulations not included in the study, including the predictive probability of zero prevalence. We focus on prevalence estimation using data from a single diagnostic test, but we also briefly discuss the scenario where two conditionally dependent (or independent) diagnostic tests are used. Simulated data demonstrate the utility of our non-parametric model over parametric analysis. An example involving brucellosis in cattle is presented.

15.
In this paper, we consider simple random sampling without replacement from a dichotomous finite population. We investigate accuracy of the Normal approximation to the Hypergeometric probabilities for a wide range of parameter values, including the nonstandard cases where the sampling fraction tends to one and where the proportion of the objects of interest in the population tends to the boundary values, zero and one. We establish a non-uniform Berry-Esseen theorem for the Hypergeometric distribution which shows that in the nonstandard cases, the rate of Normal approximation to the Hypergeometric distribution can be considerably slower than the rate of Normal approximation to the Binomial distribution. We also report results from a moderately large numerical study and provide some guidelines for using the Normal approximation to the Hypergeometric distribution in finite samples.
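The standard-case approximation is easy to check numerically with the Python standard library: compute the exact Hypergeometric CDF via `math.comb` and compare it with a Normal CDF using the finite-population-corrected variance and a continuity correction. The parameter values are illustrative.

```python
import math
from statistics import NormalDist

# Hypergeometric(N, K, n): draws without replacement from a population
# of N with K successes.  A "standard" regime: moderate sampling
# fraction, K/N away from 0 and 1.
N, K, n = 200, 80, 50

def hyper_pmf(k):
    return math.comb(K, k) * math.comb(N - K, n - k) / math.comb(N, n)

mean = n * K / N
var = n * (K / N) * (1 - K / N) * (N - n) / (N - 1)  # finite-population correction
approx = NormalDist(mean, math.sqrt(var))

worst = 0.0
cdf = 0.0
for k in range(n + 1):
    cdf += hyper_pmf(k)
    worst = max(worst, abs(cdf - approx.cdf(k + 0.5)))  # continuity correction
# In the nonstandard regimes (n/N -> 1, or K/N -> 0 or 1) this error
# decays much more slowly, which is what the Berry-Esseen result quantifies.
```

Rerunning the loop with, say, n = 190 or K = 4 shows the degradation the abstract describes.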

16.
In this paper, we propose the class of generalized additive models for location, scale and shape in a test for the association of genetic markers with non-normally distributed phenotypes comprising a spike at zero. The resulting statistical test is a generalization of the quantitative transmission disequilibrium test with mating type indicator, which was originally designed for normally distributed quantitative traits and parent-offspring data. As a motivating example, we consider coronary artery calcification (CAC), which can be identified accurately by electron beam tomography. In the investigated regions, individuals either have a continuous measure of the extent of calcium found or are calcium-free; the resulting distribution is thus a mixed discrete-continuous distribution with a spike at zero. We carry out parent-offspring simulations motivated by such CAC measurements in a screening population to study the statistical properties of the proposed test for genetic association. Furthermore, we apply the approach to data of the Genetic Analysis Workshop 16 that are based on real genotype and family data of the Framingham Heart Study, and test the association of selected genetic markers with simulated coronary artery calcification.

17.
We study a new family of continuous distributions with two extra shape parameters, called the Burr generalized family of distributions. We investigate the shapes of the density and hazard rate functions and derive explicit expressions for some of its mathematical quantities. The model parameters are estimated by maximum likelihood. We demonstrate the flexibility of the new family through applications to two real data sets. Furthermore, we propose a new extended regression model based on the logarithm of the Burr generalized distribution, which can be very useful in the analysis of real data and can provide more realistic fits than other special regression models.

18.
Dimension reduction with bivariate responses, especially a mix of continuous and categorical responses, can be of special interest; one immediate application is regression with censoring. In this paper, we propose two novel model-free methods to reduce the dimension of the covariates in a bivariate regression. Both methods enjoy a simple asymptotic chi-squared distribution for testing the dimension of the regression, and also allow us to test the contributions of the covariates easily without pre-specifying a parametric model. The new methods outperform the existing one both in simulations and in the analysis of real data. The well-known PBC data are used to illustrate the application of our method to censored regression.

19.
Traditionally, hydrological analysis employs only one hydrological variable. Recently, Nadarajah [A bivariate distribution with gamma and beta marginals with application to drought data. J Appl Stat. 2009;36:277-301] proposed a bivariate model with gamma and beta marginal distributions to analyse drought duration and the proportion of drought events. However, the validity of this method hinges on stringent assumptions. We propose a robust likelihood approach that can be used to make inference for general bivariate continuous-and-proportion data. Unlike the gamma-beta (GB) model, which is sensitive to model misspecification, the new method provides legitimate inference without knowledge of the true underlying distribution of the bivariate data. Simulations and an analysis of the drought data from the State of Nebraska, USA, are provided to contrast this robust approach with the GB model.

20.
The simplex regression model is often employed to analyze continuous proportion data. In this paper, we extend its constant dispersion parameter (homogeneity) assumption to a varying dispersion parameter (heterogeneity), and use B-splines to approximate the unknown smooth function within a Bayesian framework. A hybrid algorithm combining the block Gibbs sampler and the Metropolis-Hastings algorithm is presented for sampling from the posterior distribution. We give procedures for computing model-comparison criteria such as the conditional predictive ordinate statistic, the deviance information criterion and the averaged mean squared error. We also develop a computationally feasible Bayesian case-deletion influence measure based on the Kullback-Leibler divergence. Several simulation studies and a real example illustrate the proposed methodologies.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号