Similar Documents
20 similar documents found (search time: 31 ms)
1.
Imputation methods that assign a selection of respondents' values for missing item nonresponses give rise to an additional source of sampling variation, which we term imputation variance. We examine the effect of imputation variance on the precision of the mean, and propose four procedures for sampling the respondents that reduce this additional variance. Two of the procedures employ improved sample designs through selection of respondents by sampling without replacement and by stratified sampling. The other two increase the sample base by the use of multiple imputations.
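The imputation variance described above can be made concrete with a small simulation. This is a minimal sketch, not any of the paper's four procedures: the data, seed, and sizes are invented for illustration. It contrasts a single random donor draw per missing item with the average of several draws (a crude multiple imputation), and measures the extra variance each induces in the estimated mean.

```python
import random
import statistics

random.seed(42)

respondents = [random.gauss(50, 10) for _ in range(100)]  # observed item values
n_missing = 20  # number of nonrespondents whose items must be imputed

def estimate_mean(m):
    """Impute each missing item with the average of m random donor draws,
    then return the mean over respondents plus imputed values."""
    imputed = [statistics.mean(random.choice(respondents) for _ in range(m))
               for _ in range(n_missing)]
    return statistics.mean(respondents + imputed)

def imputation_variance(m, reps=500):
    """Replicate the estimator to measure the variance its random
    imputation step adds (the respondent sample is held fixed)."""
    return statistics.variance([estimate_mean(m) for _ in range(reps)])

var_single = imputation_variance(m=1)   # single random imputation
var_multi = imputation_variance(m=10)   # ten imputations, averaged
```

Averaging over more donor draws shrinks the imputation variance roughly in proportion to the number of draws, which is the effect the abstract's multiple-imputation procedures exploit.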

2.
In official statistics, when a file of microdata must be delivered to external users, it is very difficult to provide a file in which missing values have been treated by multiple imputation. To overcome this difficulty, we propose a single-imputation method for qualitative data that respects numerous constraints: the imputation is balanced on previously estimated totals; editing rules can be respected; and the imputation is random, yet the totals are not affected by an imputation variance.

3.
This paper presents missing-data methods for repeated measures data in small samples. Most methods currently available are designed for large samples. In particular, no studies have compared the performance of multiple imputation methods with that of non-imputation incomplete-data analysis methods. We first develop a strategy for multiple imputation of repeated measures data under a cell-means model that is applicable to any multivariate data with small samples. Multiple imputation inference procedures are then applied to the resulting multiply imputed complete data sets. Comparisons with other available non-imputation incomplete-data methods are made via simulation studies; we conclude that, in terms of the power of testing hypotheses about the parameters of interest, there is not much gain in using the computer-intensive multiple imputation methods for small-sample repeated measures data analysis.

4.
A general nonparametric imputation procedure, based on kernel regression, is proposed to estimate points as well as set- and function-indexed parameters when the data are missing at random (MAR). The proposed method works by imputing a specific function of a missing value (and not the missing value itself), where the form of this specific function is dictated by the parameter of interest. Both single and multiple imputations are considered. The associated empirical processes provide the right tool to study the uniform convergence properties of the resulting estimators. Our estimators include, as special cases, the imputation estimator of the mean, the estimator of the distribution function proposed by Cheng and Chu [1996. Kernel estimation of distribution functions and quantiles with missing data. Statist. Sinica 6, 63–78], imputation estimators of a marginal density, and imputation estimators of regression functions.
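The simplest special case mentioned above, imputing a mean via kernel regression, can be sketched with a Nadaraya-Watson estimator. This is an illustrative toy, not the paper's estimator: the data, bandwidth, and helper names are assumptions.

```python
import math
import random

random.seed(1)

def gaussian_kernel(u):
    return math.exp(-0.5 * u * u)

def kernel_impute(x0, data, h=0.5):
    """Nadaraya-Watson estimate of E[y | x = x0] from complete cases,
    used here as the imputed value for a unit whose y is missing."""
    weights = [gaussian_kernel((x0 - x) / h) for x, _ in data]
    total = sum(weights)
    return sum(w * y for w, (_, y) in zip(weights, data)) / total

# Complete cases: y = x^2 plus a little noise, on a grid over [-2, 2].
complete = [(x / 10, (x / 10) ** 2 + random.gauss(0, 0.05))
            for x in range(-20, 21)]

# Impute the missing response at x = 1.0 (true regression value is 1).
y_hat = kernel_impute(1.0, complete, h=0.3)
```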

5.
Tukey proposed a class of distributions, the g-and-h family (gh family), based on a transformation of a standard normal variable to accommodate different skewness and elongation in the distribution of variables arising in practical applications. It is easy to draw values from this distribution even though it is hard to explicitly state the probability density function. Given this flexibility, the gh family may be extremely useful in creating multiple imputations for missing data. This article demonstrates how this family, as well as its generalizations, can be used in the multiple imputation analysis of incomplete data. The focus of this article is on a scalar variable with missing values. In the absence of any additional information, data are missing completely at random, and hence the correct analysis is the complete-case analysis. Thus, the application of the gh multiple imputation to the scalar cases affords comparison with the correct analysis and with other model-based multiple imputation methods. Comparisons are made using simulated datasets and the data from a survey of adolescents ascertaining driving after drinking alcohol.
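The ease of drawing from the gh family comes from its definition as a transform of a standard normal draw Z, commonly written X = loc + scale * ((exp(gZ) - 1)/g) * exp(hZ²/2). A minimal sketch, with illustrative parameter values chosen here rather than taken from the article:

```python
import math
import random

random.seed(0)

def g_and_h(z, g=0.5, h=0.1, loc=0.0, scale=1.0):
    """Tukey g-and-h transform of a standard normal value z.
    g controls skewness; h controls tail heaviness (elongation)."""
    if g == 0:
        core = z  # limit of (exp(g z) - 1) / g as g -> 0
    else:
        core = (math.exp(g * z) - 1.0) / g
    return loc + scale * core * math.exp(h * z * z / 2.0)

# Drawing values is easy: transform standard normal draws.
draws = [g_and_h(random.gauss(0, 1)) for _ in range(10_000)]
```

With g > 0 the resulting distribution is right-skewed (mean exceeds median), which is the kind of shape flexibility that makes the family attractive for imputation models.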

6.
ABSTRACT

In the past, a tolerance interval was used in the statistical quality control process for raw materials and/or the final product. In the traditional concept of the tolerance interval, the variance of the measurements is a single component. In practice, however, several components may contribute to the variation in the measurements, so an approximate method must be found to modify the traditional tolerance interval. We therefore employ a tolerance interval that accounts for multiple components in the variance of the measurements in the quality control process. In this paper, the proposed method is used to determine the sample size for a two-sided tolerance interval approach with multiple variance components in the measurements.

7.
In the presence of missing values, researchers may be interested in the rates of missing information. The rates of missing information (a) are important for assessing how the missing information contributes to inferential uncertainty about Q, the population quantity of interest, (b) are an important component in deciding the number of imputations, and (c) can be used to test model uncertainty and model fit. In this article I derive the asymptotic distribution of the rates of missing information in two scenarios: conventional multiple imputation (MI) and two-stage MI. Numerically, I show that the proposed asymptotic distribution agrees with the simulated one. Based on the asymptotic distribution, I also suggest the number of imputations needed to obtain reliable missing-information-rate estimates for each method.
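The rate of missing information in conventional MI is computed from Rubin's combining rules. A minimal sketch with invented numbers (the estimates and variances below are purely illustrative, not from the article), using the large-sample version of the rate:

```python
# Point estimates of Q and within-imputation variances from m = 5
# hypothetical imputed datasets (illustrative values only).
q_hats = [10.2, 9.8, 10.5, 10.1, 9.9]
u_hats = [0.40, 0.42, 0.38, 0.41, 0.39]

m = len(q_hats)
q_bar = sum(q_hats) / m                               # combined estimate
u_bar = sum(u_hats) / m                               # within-imputation variance
b = sum((q - q_bar) ** 2 for q in q_hats) / (m - 1)   # between-imputation variance
t = u_bar + (1 + 1 / m) * b                           # total variance

# Large-sample rate of missing information: share of total variance
# attributable to the missing data.
gamma = (1 + 1 / m) * b / t
```

A large gamma signals that inferences about Q lean heavily on the imputation model, which is why the rate also drives the choice of m.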

8.
We present three multiple imputation estimates for the Cox model with missing covariates. Two of the suggested estimates are asymptotically equivalent to estimates in the literature when the number of multiple imputations approaches infinity. The third estimate can be implemented using standard software that could handle time-varying covariates. This revised version was published online in July 2006 with corrections to the Cover Date.

9.
ABSTRACT

We present here an extension of Pan's multiple imputation approach to Cox regression in the setting of interval-censored competing risks data. The idea is to convert interval-censored data into multiple sets of complete or right-censored data and to use partial likelihood methods to analyse them. The process is iterated, and at each step the coefficient of interest, its variance–covariance matrix, and the baseline cumulative incidence function are updated from multiple posterior estimates derived from the Fine and Gray sub-distribution hazards regression given the augmented data. Through simulation of patients at risk of failure from two causes, following a prescheduled programme allowing for informative interval-censoring mechanisms, we show that the proposed method yields more accurate coefficient estimates than the simple imputation approach. We have implemented the method in the MIICD R package, available on the CRAN website.

10.
Imputation is commonly used to compensate for missing data in surveys. We consider the general case where the responses on either the variable of interest y or the auxiliary variable x or both may be missing. We use ratio imputation for y when the associated x is observed and different imputations when x is not observed. We obtain design-consistent linearization and jackknife variance estimators under uniform response. We also report the results of a simulation study on the efficiencies of imputed estimators, and relative biases and efficiencies of associated variance estimators.
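The ratio imputation used when x is observed follows the standard form y* = R̂·x, with R̂ the ratio of respondent means. A minimal sketch with made-up donor pairs (the data and function names are illustrative, not from the paper):

```python
# Complete (x, y) pairs from respondents; values are hypothetical.
pairs = [(2.0, 4.1), (3.0, 5.9), (5.0, 10.2), (4.0, 7.8)]

x_bar = sum(x for x, _ in pairs) / len(pairs)
y_bar = sum(y for _, y in pairs) / len(pairs)
ratio = y_bar / x_bar  # R-hat = ybar_r / xbar_r from respondents

def ratio_impute(x_missing_y):
    """Ratio imputation: y* = R-hat * x for a unit whose y is missing
    but whose x is observed."""
    return ratio * x_missing_y

y_star = ratio_impute(6.0)
```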

11.
"This paper gives a brief introduction to multiple imputation for handling non-response in surveys. We then describe a recently completed project in which multiple imputation was used to recalibrate industry and occupation codes in 1970 U.S. census public use samples to the 1980 standard. Using analyses of data from the project, we examine the utility of analysing a large data set having imputed values compared with analysing a small data set having true values, and we provide examples of the amount by which variability is underestimated by using just one imputation rather than multiple imputations."

12.
The additive model is considered when some observations on x are missing at random but the corresponding observations on y are available. Missing at random is a particularly interesting case for this model because the complete-case analysis is expected to be no longer suitable. A simulation experiment is reported, and the different methods are compared based on their sample mean squared error. Some attention is also given to the sample variance and the estimated bias. In detail, the complete-case analysis, a kind of stochastic mean imputation, a single imputation, and the nearest-neighbour imputation are discussed.

13.
Missing variances in summary-level data can be a problem when an inverse-variance-weighted meta-analysis is undertaken. A wide range of approaches exist for dealing with this issue, such as excluding data without a variance measure, using a function of sample size as a weight, and imputing the missing standard errors/deviations. A non-linear mixed-effects modelling approach was taken to describe the time course of standard deviations across 14 studies. The model was then used to predict the missing standard deviations, enabling a precision-weighted model-based meta-analysis of a mean pain endpoint over time. Maximum likelihood and Bayesian approaches were implemented, with example code illustrating how this imputation can be carried out and comparing the output from each method. The resulting imputations were nearly identical for the two approaches. This modelling approach acknowledges that standard deviations are not necessarily constant over time and can differ between treatments and across studies in a predictable way.

14.
Missing observations due to non‐response are commonly encountered in data collected from sample surveys. The focus of this article is on item non‐response which is often handled by filling in (or imputing) missing values using the observed responses (donors). Random imputation (single or fractional) is used within homogeneous imputation classes that are formed on the basis of categorical auxiliary variables observed on all the sampled units. A uniform response rate within classes is assumed, but that rate is allowed to vary across classes. We construct confidence intervals (CIs) for a population parameter that is defined as the solution to a smooth estimating equation with data collected using stratified simple random sampling. The imputation classes are assumed to be formed across strata. Fractional imputation with a fixed number of random draws is used to obtain an imputed estimating function. An empirical likelihood inference method under the fractional imputation is proposed and its asymptotic properties are derived. Two asymptotically correct bootstrap methods are developed for constructing the desired CIs. In a simulation study, the proposed bootstrap methods are shown to outperform traditional bootstrap methods and some non‐bootstrap competitors under various simulation settings. The Canadian Journal of Statistics 47: 281–301; 2019 © 2019 Statistical Society of Canada

15.
Sequential regression multiple imputation has emerged as a popular approach for handling incomplete data with complex features. In this approach, imputations for each missing variable are produced from a regression model that uses the other variables as predictors, in a cyclic manner. A normality assumption is frequently imposed on the error distributions in the conditional regression models for continuous variables, even though it rarely holds in real scenarios. We use a simulation study to investigate the performance of several sequential regression imputation methods when the error distribution is flat or heavy-tailed. The methods evaluated include sequential normal imputation and several of its extensions that adjust for non-normal error terms. The results show that all methods perform well for estimating the marginal mean and proportion, as well as the regression coefficient, when the error distribution is flat or moderately heavy-tailed. When the error distribution is strongly heavy-tailed, all methods retain their good performance for the mean, and the adjusted methods perform robustly for the proportion; but all methods can perform poorly for the regression coefficient because they cannot accommodate the extreme values well. We caution against mechanical use of sequential regression imputation without model checking and diagnostics.
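One step of the cycle, with the normality assumption the abstract warns about, can be sketched as follows. This toy handles only a single variable with missing values (a full chained pass would cycle over every incomplete variable); the data and helper names are invented for illustration.

```python
import random

random.seed(3)

def ols(xs, ys):
    """Least-squares intercept and slope for a simple regression."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope

# y ~ 2x + noise; every fifth y is missing, x is fully observed here.
x = [random.gauss(0, 1) for _ in range(200)]
y = [2 * xi + random.gauss(0, 0.5) for xi in x]
missing = set(range(0, 200, 5))

obs_x = [xi for i, xi in enumerate(x) if i not in missing]
obs_y = [yi for i, yi in enumerate(y) if i not in missing]
a, b = ols(obs_x, obs_y)
resid_sd = (sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(obs_x, obs_y))
            / (len(obs_x) - 2)) ** 0.5

# Impute each missing y from the fitted model plus a *normal* error draw --
# exactly the assumption that breaks down under heavy-tailed errors.
y_imp = list(y)
for i in missing:
    y_imp[i] = a + b * x[i] + random.gauss(0, resid_sd)
```

The adjusted methods the abstract evaluates would replace the normal error draw with one respecting the shape of the observed residuals.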

16.
ABSTRACT

Missing data are commonly encountered in self-reported measurements and questionnaires. It is crucial to treat missing values with an appropriate method to avoid bias and loss of power. Various types of imputation methods exist, but it is not always clear which is preferred for imputing data with non-normal variables. In this paper, we compare four imputation methods: mean imputation, quantile imputation, multiple imputation, and quantile regression multiple imputation (QRMI), using both simulated and real data investigating factors affecting self-efficacy in breast cancer survivors. The results show an advantage of using multiple imputation, especially QRMI, when the data are not normal.

17.
Caren Hasler &amp; Yves Tillé, Statistics, 2016, 50(6): 1310–1331
Random imputation is an interesting class of imputation methods to handle item nonresponse because it tends to preserve the distribution of the imputed variable. However, such methods amplify the total variance of the estimators because values are imputed at random. This increase in variance is called imputation variance. In this paper, we propose a new random hot-deck imputation method that is based on the k-nearest neighbour methodology. It replaces the missing value of a unit with the observed value of a similar unit. Calibration and balanced sampling are applied to minimize the imputation variance. Moreover, our proposed method provides triple protection against nonresponse bias. This means that if at least one out of three specified models holds, then the resulting total estimator is unbiased. Finally, our approach allows the user to perform consistency edits and to impute simultaneously.

18.
ABSTRACT

We propose a multiple imputation method based on principal component analysis (PCA) to deal with incomplete continuous data. To reflect the uncertainty of the parameters from one imputation to the next, we use a Bayesian treatment of the PCA model. Using a simulation study and real data sets, the method is compared to two classical approaches: multiple imputation based on joint modelling and on fully conditional modelling. Unlike the others, the proposed method can easily be used on data sets where the number of individuals is less than the number of variables and where the variables are highly correlated. In addition, it provides unbiased point estimates of quantities of interest, such as an expectation, a regression coefficient or a correlation coefficient, with a smaller mean squared error. Furthermore, the widths of the confidence intervals built for the quantities of interest are often smaller whilst ensuring a valid coverage.

19.
Markov Chain Monte Carlo (MCMC) is the most common method used in multiple imputation. However, it is not unbiased when applied to imputation of categorical variables. The literature has considered this problem only for binary variables with two levels. In this article, we consider more general situations: we not only evaluate the bias associated with MCMC imputation of categorical variables, but also introduce a method to correct it. A simulation study is conducted and an application is provided to demonstrate the advantages of the correction factors proposed in this article.

20.

In this paper we consider a Bayesian analysis for an autoregressive model with random normal coefficients (RCA). For the proposed procedure we use conjugate priors for some parameters and improper vague priors for others. Inference for the parameters is made via the Gibbs sampler, and convergence is assessed with multiple chains and the Gelman and Rubin criterion. Forecasts are based on the predictive density of future observations. Some remarks are also made regarding order determination and stationarity. Applications to simulated and real series are given.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号