Similar Articles
20 similar articles retrieved (search time: 62 ms)
1.
Zero-inflated power series distributions are commonly used for modelling count data with extra zeros. Inflation at the point zero has been investigated and several tests for zero inflation have been examined. However, inflation sometimes occurs at a point other than zero; in this case, we say inflation occurs at an arbitrary point j. Inflation at j has received less attention than zero inflation. In this paper, inflation at an arbitrary point j is studied in more detail and a Bayesian test for detecting inflation at j is presented. The Bayesian method is extended to inflation at two arbitrary points i and j. The relationship between the distributions for inflation at point j, inflation at points i and j, and missing value imputation is studied. It is shown how to obtain a proper estimate of the population variance when a mean-imputed missing-at-random data set is used. Some simulation studies are conducted and the proposed Bayesian test is applied to two real data sets.
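The j-inflated count model discussed above can be illustrated with a minimal sketch (parameter names are mine, not the paper's): a Poisson(λ) base distribution whose mass at an arbitrary point j is inflated by a mixing weight π.

```python
import math

def j_inflated_poisson_pmf(y, lam, pi, j):
    """P(Y = y) under a Poisson(lam) inflated at the point j with weight pi."""
    base = math.exp(-lam) * lam ** y / math.factorial(y)
    return pi + (1 - pi) * base if y == j else (1 - pi) * base
```

The probabilities still sum to one: inflation simply moves mass π onto the point j, scaling the base distribution by 1 − π everywhere else.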

2.
When the target variable exhibits semicontinuous behavior (a point mass at a single value and a continuous distribution elsewhere), parametric “two-part models” have been extensively used and investigated. Applications have mainly concerned nonnegative variables with a point mass at zero (zero-inflated data). In this article, a semiparametric Bayesian two-part model for such variables is proposed. The model allows a semiparametric expression for the two parts by using Dirichlet processes. A motivating example, based on grape wine production in Tuscany (an Italian region), shows the capabilities of the model. Finally, two simulation experiments evaluate the model. Results show a satisfactory performance of the suggested approach for modeling and predicting semicontinuous data when parametric assumptions are not reasonable.
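As a concrete picture of the semicontinuous variables this article targets, here is an illustrative generator (not the authors' model, which uses Dirichlet-process priors): a point mass at zero mixed with a lognormal continuous part.

```python
import math
import random

def sample_semicontinuous(n, p_zero, mu, sigma, seed=0):
    # point mass at zero with probability p_zero;
    # otherwise a draw from a lognormal(mu, sigma) continuous part
    rng = random.Random(seed)
    return [0.0 if rng.random() < p_zero else math.exp(rng.gauss(mu, sigma))
            for _ in range(n)]
```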

3.
We study bias arising from rounding categorical variables following multivariate normal (MVN) imputation. This task has been well studied for binary variables, but not for more general categorical variables. Three methods that assign imputed values to categories based on fixed reference points are compared using 25 specific scenarios covering variables with k=3, …, 7 categories, and five distributional shapes, and for each k=3, …, 7, we examine the distribution of bias arising over 100,000 distributions drawn from a symmetric Dirichlet distribution. We observed, on both empirical and theoretical grounds, that one method (projected-distance-based rounding) is superior to the other two methods, and that the risk of invalid inference with the best method may be too high at sample sizes n≥150 at 50% missingness, n≥250 at 30% missingness and n≥1500 at 10% missingness. Therefore, these methods are generally unsatisfactory for rounding categorical variables (with up to seven categories) following MVN imputation.

4.
In this article, a finite mixture model of hurdle Poisson distributions with missing outcomes is proposed, and a stochastic EM algorithm is developed for obtaining maximum likelihood estimates of the model parameters and mixing proportions. Specifically, the missing data are assumed to be missing not at random (MNAR), i.e., nonignorably missing, and the corresponding missingness mechanism is modeled through probit regression. To improve efficiency, a stochastic step based on data augmentation is incorporated into the E-step, whereas the M-step is solved by conditional maximization. A variation on the Bayesian information criterion (BIC) is also proposed to compare models with different numbers of components in the presence of missing values. The considered model is a general framework that captures the important characteristics of count data analysis, such as zero inflation/deflation, heterogeneity and missingness, providing more insight into the data features and allowing dispersion to be investigated more fully and correctly. Since the stochastic step only involves simulating samples from standard distributions, the computational burden is alleviated. Once missing responses and latent variables are imputed to replace the conditional expectation, the approach works as part of a multiple imputation procedure. A simulation study and a real example illustrate the usefulness and effectiveness of the methodology.
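The hurdle component of the mixture can be sketched as follows (an illustrative density only, not the paper's stochastic EM machinery): zeros come from a separate binary process, and positive counts follow a zero-truncated Poisson.

```python
import math

def hurdle_poisson_pmf(y, p0, lam):
    # P(Y = 0) = p0; positive counts follow a zero-truncated Poisson(lam)
    if y == 0:
        return p0
    pois = math.exp(-lam) * lam ** y / math.factorial(y)
    return (1 - p0) * pois / (1 - math.exp(-lam))
```

Unlike zero inflation, the hurdle form lets p0 fall below the Poisson mass at zero, so it can model zero deflation as well.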

5.
In this article, inflation at an arbitrary point β of a member of the power series exponential family, and mean-inflation as a cause of a semi-continuous distribution, are discussed. A joint model of such a semi-continuous response and a β-inflated Poisson response is also presented. Simultaneous effects of covariates on both responses, which have two-component mixture distributions, are investigated. Parameters are estimated by maximum likelihood. The proposed model is illustrated in simulation studies and applied to a real survey dataset.

6.
Caren Hasler & Yves Tillé, Statistics, 2016, 50(6): 1310–1331
Random imputation is an interesting class of imputation methods for handling item nonresponse because it tends to preserve the distribution of the imputed variable. However, such methods amplify the total variance of the estimators because values are imputed at random; this increase in variance is called imputation variance. In this paper, we propose a new random hot-deck imputation method based on the k-nearest neighbour methodology: it replaces the missing value of a unit with the observed value of a similar unit. Calibration and balanced sampling are applied to minimize the imputation variance. Moreover, the proposed method provides triple protection against nonresponse bias: if at least one of three specified models holds, the resulting total estimator is unbiased. Finally, the approach allows the user to perform consistency edits and to impute simultaneously.
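A stripped-down version of the random hot-deck idea (without the calibration and balanced-sampling layers the paper adds) picks a donor at random among the k nearest respondents on an auxiliary variable; the names below are illustrative.

```python
import random

def knn_hot_deck(x_obs, y_obs, x_mis, k=3, seed=0):
    # impute each missing y with the observed y of a donor drawn at
    # random from the k respondents closest in the auxiliary variable x
    rng = random.Random(seed)
    imputed = []
    for x in x_mis:
        donors = sorted(range(len(x_obs)), key=lambda i: abs(x_obs[i] - x))[:k]
        imputed.append(y_obs[rng.choice(donors)])
    return imputed
```

Because every imputed value is an actually observed value, the marginal distribution of the variable is better preserved than under deterministic (e.g., mean) imputation.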

7.
This paper presents a new model that monitors basic network-formation mechanisms via node attributes over time. It addresses the joint modeling of a longitudinal inflated (0, 1)-support continuous response and an inflated count response. For the joint model of these responses, a correlated generalized linear mixed model is studied. The fraction response is inflated at two points k and l (k < l), and a k- and l-inflated beta distribution is introduced as its distribution. The count response is inflated at zero, and members of the zero-inflated power series, hurdle-at-zero, zero-inflated double power series and zero-inflated generalized Poisson families are used as its distribution. A full likelihood-based approach yields maximum likelihood estimates of the model parameters, and the model is applied to a real social network from an observational study in which the rate of the ith node's responsiveness to the jth node and the number of arrows (edges) with specific characteristics from the ith node to the jth node are the correlated inflated (0, 1)-support continuous and inflated count responses, respectively. The effects of sender and receiver positions in an office environment on both responses are investigated simultaneously.

8.
Imputation is often used in surveys to treat item nonresponse. It is well known that treating the imputed values as observed values may lead to substantial underestimation of the variance of the point estimators. To overcome the problem, a number of variance estimation methods have been proposed in the literature, including resampling methods such as the jackknife and the bootstrap. In this paper, we consider the problem of doubly robust inference in the presence of imputed survey data. In the doubly robust literature, point estimation has been the main focus. In this paper, using the reverse framework for variance estimation, we derive doubly robust linearization variance estimators in the case of deterministic and random regression imputation within imputation classes. Also, we study the properties of several jackknife variance estimators under both negligible and nonnegligible sampling fractions. A limited simulation study investigates the performance of various variance estimators in terms of relative bias and relative stability. Finally, the asymptotic normality of imputed estimators is established for stratified multistage designs under both deterministic and random regression imputation. The Canadian Journal of Statistics 40: 259–281; 2012 © 2012 Statistical Society of Canada

9.
A random effects model for analyzing mixed longitudinal normal and count outcomes, with and without the possibility of nonignorably missing outcomes, is presented. The count response is inflated at two points (k and l) and the (k, l)-Hurdle power series is used as its distribution. The new distribution contains, as special submodels, several important distributions, such as the (k, l)-Hurdle Poisson, (k, l)-Hurdle negative binomial and (k, l)-Hurdle binomial distributions, among others. Random effects are used to account for the correlation between longitudinal outcomes and inflation parameters. A full likelihood-based approach is used to obtain maximum likelihood estimates of the model parameters. A simulation study is performed in which the (k, l)-Hurdle Poisson, (k, l)-Hurdle negative binomial and (k, l)-Hurdle binomial distributions are considered for the count outcome. To illustrate the application of such modelling, longitudinal data on body mass index and the number of damaged joints are analyzed.

10.
Analyzing incomplete data to infer the structure of gene regulatory networks (GRNs) is a challenging task in bioinformatics, and Bayesian networks can be used successfully in this field. k-nearest neighbor, singular value decomposition (SVD)-based and multiple imputation by chained equations are three fundamental imputation methods for dealing with missing values. The path consistency (PC) algorithm based on conditional mutual information (PCA–CMI) is a well-known algorithm for inferring GRNs; it requires a complete data set. However, PCA–CMI is not a stable algorithm: applied to permuted gene orders, it yields different networks. We propose an order-independent algorithm, PCA–CMI–OI, for inferring GRNs. After imputation of the missing data, the performances of PCA–CMI and PCA–CMI–OI are compared. Results show that networks constructed from data imputed by the SVD-based method with the PCA–CMI–OI algorithm outperform the other imputation methods and PCA–CMI. PC-based algorithms yield an undirected or partially directed network. The mutual information test (MIT) score, which handles discrete data, is a well-known method for directing the edges of the resulting networks. We also propose a new score, ConMIT, which is appropriate for analyzing continuous data. Results show that the precision of directing the edges of the skeleton is improved by applying the ConMIT score.

11.
This article addresses issues in creating public-use data files in the presence of missing ordinal responses and in the subsequent statistical analyses of the dataset by users. The authors propose a fully efficient fractional imputation (FI) procedure for ordinal responses with missing observations. The proposed imputation strategy retrieves the missing values through the full conditional distribution of the response given the covariates and results in a single imputed data file that can be analyzed by different data users with different scientific objectives. The two most critical aspects of statistical analyses based on the imputed data set, validity and efficiency, are examined through regression analysis involving the ordinal response and a selected set of covariates. It is shown through both theoretical development and simulation studies that, when the ordinal responses are missing at random, the proposed FI procedure leads to valid and highly efficient inferences compared with existing methods. Variance estimation using the fractionally imputed data set is also discussed. The Canadian Journal of Statistics 48: 138–151; 2020 © 2019 Statistical Society of Canada

12.
We propose to use calibrated imputation to compensate for missing values. This technique consists of finding final imputed values that are as close as possible to preliminary imputed values and are calibrated to satisfy constraints. Preliminary imputed values, potentially justified by an imputation model, are obtained through deterministic single imputation. Using appropriate constraints, the resulting imputed estimator is asymptotically unbiased for estimation of linear population parameters such as domain totals. A quasi-model-assisted approach is considered in the sense that inferences do not depend on the validity of an imputation model and are made with respect to the sampling design and a non-response model. An imputation model may still be used to generate imputed values and thus to improve the efficiency of the imputed estimator. This approach has the characteristic of handling naturally the situation where more than one imputation method is used owing to missing values in the variables that are used to obtain imputed values. We use the Taylor linearization technique to obtain a variance estimator under a general non-response model. For the logistic non-response model, we show that ignoring the effect of estimating the non-response model parameters leads to overestimating the variance of the imputed estimator. In practice, the overestimation is expected to be moderate or even negligible, as shown in a simulation study.

13.
In this paper, the mixture model of k extreme value distributions is investigated. Using the Laplace transform of extreme value distributions given in terms of the Krätzel function, we first prove the identifiability of the class of arbitrary mixtures of extreme-value distributions of type 1 and type 2. We then find the estimates for the parameters of the mixture of two extreme-value distributions, including the three different types, via the EM algorithm. The performance of the estimates is tested by Monte Carlo simulation.

14.
Multiple imputation has emerged as a widely used model-based approach for dealing with incomplete data in many application areas. Gaussian and log-linear imputation models are fairly straightforward to implement for continuous and discrete data, respectively. However, in missing-data settings that include a mix of continuous and discrete variables, correct specification of the imputation model can be a daunting task owing to the lack of flexible models for the joint distribution of variables of different natures. This complication, along with the accessibility of software packages capable of carrying out multiple imputation under the assumption of joint multivariate normality, appears to encourage applied researchers to pragmatically treat the discrete variables as continuous for imputation purposes and subsequently round the imputed values to the nearest observed category. In this article, I introduce a distance-based rounding approach for ordinal variables in the presence of continuous ones. The first step of the proposed rounding process creates indicator variables that correspond to the ordinal levels, after which all variables are jointly imputed under the assumption of multivariate normality. The imputed values are then converted to the ordinal scale based on their Euclidean distances to a set of indicators, with minimal distance corresponding to the closest match. I compare the performance of this technique to crude rounding via commonly accepted accuracy and precision measures with simulated data sets.
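The distance-based step can be sketched as follows (illustrative only, for a single ordinal variable with k categories): the jointly imputed indicator vector is compared with each "ideal" indicator pattern, and the category at minimal Euclidean distance is chosen.

```python
def distance_round(imputed_ind, k):
    # compare the imputed indicator vector with the k ideal indicator
    # patterns (rows of the identity matrix); return the closest category
    best, best_d = 1, float("inf")
    for c in range(k):
        ideal = [1.0 if i == c else 0.0 for i in range(k)]
        d = sum((a - b) ** 2 for a, b in zip(imputed_ind, ideal))
        if d < best_d:
            best, best_d = c + 1, d  # categories labelled 1..k
    return best
```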

15.
Dealing with incomplete data is a pervasive problem in statistical surveys, and Bayesian networks have recently been used for missing-data imputation. In this research, we propose a new methodology for the multivariate imputation of missing data using discrete Bayesian networks and conditional Gaussian Bayesian networks. Results from imputing missing values in a coronary artery disease data set and a milk composition data set, as well as a simulation study on the cancer-neapolitan network, are presented to compare the performance of three Bayesian network-based imputation methods with multivariate imputation by chained equations (MICE) and classical hot-deck imputation. To assess the effect of the structure-learning algorithm on the Bayesian network-based methods, two algorithms, Peter–Clark and greedy search-and-score, are applied. The Bayesian network-based methods are: first, the method of Di Zio et al. [Bayesian networks for imputation, J. R. Stat. Soc. Ser. A 167 (2004), 309–322], in which each missing item of a variable is imputed using the information in the parents of that variable; second, the method of Di Zio et al. [Multivariate techniques for imputation based on Bayesian networks, Neural Netw. World 15 (2005), 303–310], which uses the information in the Markov blanket of the variable to be imputed; and finally, our new proposed method, which applies the whole available knowledge of all variables of interest, comprising the Markov blanket and hence the parent set, to impute a missing item. Results indicate the high quality of the new method, especially with high missingness percentages and more connected networks. The new method has also been shown to be more efficient than MICE for small sample sizes with high missing rates.

16.
This article focuses mainly on a conditional quantile-filling imputation algorithm for analyzing a new kind of censored data: mixed interval-censored and complete data related to an interval-censored sample. With the algorithm, the imputed failure times, which are conditional quantiles, are obtained within the censoring intervals that contain exact failure times. The algorithm is viable and feasible for parameter estimation with general distributions; for instance, for the Weibull distribution a closed-form moment estimator is available after log-transformation. Furthermore, an interval-censored sample is a special case of the new censored sample, and the conditional imputation algorithm can also be used for interval-censored failure data. Comparing interval-censored data with the new censored data under the imputation algorithm, in terms of estimation bias, we find that the new censored data perform better than interval-censored data.

17.
In this paper, we propose the use of Bayesian quantile regression for the analysis of proportion data. We also consider the case when the data present a zero-or-one inflation using a two-part model approach. For the latter scheme, we assume that the response variable is generated by a mixed discrete–continuous distribution with a point mass at zero or one. Quantile regression is then used to explain the conditional distribution of the continuous part between zero and one, while the mixture probability is also modelled as a function of the covariates. We check the performance of these models with two simulation studies. We illustrate the method with data about the proportion of households with access to electricity in Brazil.
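The mixed discrete–continuous response described above can be written as a zero-or-one inflated density; the sketch below (my notation, with a Beta continuous part chosen for concreteness) places point masses p0 at zero and p1 at one.

```python
import math

def beta_pdf(y, a, b):
    # density of Beta(a, b) on (0, 1)
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return y ** (a - 1) * (1 - y) ** (b - 1) / B

def zoib_density(y, p0, p1, a, b):
    # point masses at 0 and 1; Beta(a, b) on (0, 1) with weight 1 - p0 - p1
    if y == 0.0:
        return p0
    if y == 1.0:
        return p1
    return (1 - p0 - p1) * beta_pdf(y, a, b)
```

The discrete masses and the continuous part together integrate to one, which is what makes maximum likelihood or Bayesian fitting of the two parts straightforward.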

18.
Resampling methods are a common means of estimating the variance of a statistic of interest when data are subject to nonresponse and imputation is used as compensation. Applying resampling methods usually means that subsamples are drawn from the original sample and variance estimates are computed from the point estimators of several subsamples. However, newer resampling methods such as the rescaling bootstrap of Chipperfield and Preston [Efficient bootstrap for business surveys. Surv Methodol. 2007;33:167–172] include all elements of the original sample in the computation of the point estimator. Thus, procedures that account for imputation in resampling methods cannot be applied in the ordinary way; modifications are necessary. This paper presents an approach for applying newer resampling methods to imputed data. The Monte Carlo simulation study conducted in the paper shows that the proposed approach leads to reliable variance estimates, in contrast to other modifications.

19.
Approximating the distribution of mobile communications expenditures (MCE) is complicated by zero observations in the sample. To handle the zero observations by allowing a point mass at zero, a mixture model of MCE distributions is proposed and applied. The MCE distribution is specified as a mixture of two distributions, one with a point mass at zero and the other with full support on the positive half of the real line. The model is empirically verified on individual MCE survey data collected in Seoul, Korea. The mixture model easily captures the common bimodality of the MCE distribution. In addition, when covariates are added to the model, the probability that an individual has zero expenditure is found to vary significantly with some of them. Finally, a goodness-of-fit test suggests that the data are well represented by the mixture model.

20.
Multiple imputation under the multivariate normality assumption has often been regarded as a viable model-based approach for dealing with incomplete continuous data over the last two decades. A situation where measurements are taken on a continuous scale but the ultimate interest is in dichotomized versions obtained through discipline-specific thresholds is not uncommon in applied research, especially in the medical and social sciences. In practice, researchers generally tend to impute missing values for continuous outcomes under a Gaussian imputation model and then dichotomize them via commonly accepted cut-off points. An alternative strategy is to create multiply imputed data sets after dichotomization, under a log-linear imputation model that uses a saturated multinomial structure. In this work, the performance of the two imputation methods is examined on a fairly wide range of simulated incomplete data sets that exhibit varying distributional characteristics such as skewness and multimodality. The behavior of efficiency and accuracy measures is explored to determine the extent to which the procedures work properly. The conclusion drawn is that dichotomization before carrying out a log-linear imputation should be the preferred approach except in a few special cases; I recommend that researchers use this second, atypical strategy whenever the interest centers on binary quantities obtained through underlying continuous measurements. A possible explanation is that erratic or idiosyncratic aspects not accommodated by a Gaussian model are transformed into better-behaving discrete trends in this particular missing-data setting. This premise outweighs the assertion that continuous variables inherently carry more information, leading to a counter-intuitive but potentially useful result for practitioners.
