Similar Documents
20 similar documents found.
1.
When multilevel models are estimated from survey data derived using multistage sampling, unequal selection probabilities at any stage of sampling may induce bias in standard estimators, unless the sources of the unequal probabilities are fully controlled for in the covariates. This paper proposes alternative ways of weighting the estimation of a two-level model by using the reciprocals of the selection probabilities at each stage of sampling. Consistent estimators are obtained when both the sample number of level 2 units and the sample number of level 1 units within sampled level 2 units increase. Scaling of the weights is proposed to improve the properties of the estimators and to simplify computation. Variance estimators are also proposed. In a limited simulation study the scaled weighted estimators are found to perform well, although non-negligible bias starts to arise for informative designs when the sample number of level 1 units becomes small. The variance estimators perform extremely well. The procedures are illustrated using data from the survey of psychiatric morbidity.
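To make the weight-scaling idea concrete, the following minimal sketch implements the two scalings most commonly discussed for level-1 weights (scaling to the cluster sample size and to the effective sample size). It is an illustration of the general idea, not the authors' exact procedure, and the inclusion probabilities in the example are hypothetical.

```python
# A minimal sketch of two common scalings for level-1 weights
# w_{j|i} = 1 / Pr(unit j selected | cluster i); not the paper's estimator.
import numpy as np

def scale_level1_weights(w, method="size"):
    """Scale level-1 weights within one cluster.

    method="size":      scaled weights sum to the cluster sample size n_i
    method="effective": scaled weights sum to the effective sample size
                        (sum w)^2 / sum(w^2)
    """
    w = np.asarray(w, dtype=float)
    if method == "size":
        return w * len(w) / w.sum()
    if method == "effective":
        return w * w.sum() / np.sum(w ** 2)
    raise ValueError("unknown scaling method")

# Example: three sampled level-1 units in one cluster with hypothetical
# conditional inclusion probabilities 0.1, 0.2, 0.5.
w = 1.0 / np.array([0.1, 0.2, 0.5])
print(scale_level1_weights(w, "size"))       # sums to 3
print(scale_level1_weights(w, "effective"))  # sums to (sum w)^2 / sum(w^2)
```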

2.
Poisson sampling is a method for unequal probability sampling with random sample size. There exist several fixed-sample-size implementations of the Poisson sampling design, almost all of which are rejective methods, that is, the sample is not always accepted. Thus, the existing methods can be time-consuming or even infeasible in some situations. In this paper, a fast and non-rejective method, which is efficient even for large populations, is proposed and studied. The method is a new design for selecting a sample of fixed size with unequal inclusion probabilities. For populations of large size, the proposed design is very close to strict πps sampling, which is similar to the conditional Poisson (CP) sampling design, but its implementation is much more efficient than CP sampling. Moreover, the inclusion probabilities can be calculated recursively.
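To illustrate the contrast the abstract draws, the sketch below implements plain Poisson sampling (random sample size) and the rejective route to a fixed size, i.e., redrawing until the target size is hit. The paper's non-rejective design itself is not reproduced here.

```python
# A minimal sketch: Poisson sampling vs. a rejective fixed-size variant.
import numpy as np

rng = np.random.default_rng(42)

def poisson_sample(p):
    """Poisson sampling: include unit i independently with probability p[i]."""
    return np.flatnonzero(rng.random(len(p)) < p)

def rejective_fixed_size(p, n, max_tries=100_000):
    """Conditional-Poisson-style sampling by rejection: redraw until size n."""
    for _ in range(max_tries):
        s = poisson_sample(p)
        if len(s) == n:
            return s
    raise RuntimeError("no fixed-size sample accepted")

p = np.full(1000, 0.05)                   # expected sample size 50
print(len(poisson_sample(p)))             # random, around 50
print(len(rejective_fixed_size(p, 50)))   # exactly 50, possibly many redraws
```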

3.
Analysts of survey data are often interested in modelling the population process, or superpopulation, that gave rise to a 'target' set of survey variables. An important tool for this is maximum likelihood estimation. A survey is said to provide limited information for such inference if data used in the design of the survey are unavailable to the analyst. In this circumstance, sample inclusion probabilities, which are typically available, provide information which needs to be incorporated into the analysis. We consider the case where these inclusion probabilities can be modelled in terms of a linear combination of the design and target variables, and only sample values of these are available. Strict maximum likelihood estimation of the underlying superpopulation means of these variables appears to be analytically impossible in this case, but an analysis based on approximations to the inclusion probabilities leads to a simple estimator which is a close approximation to the maximum likelihood estimator. In a simulation study, this estimator outperformed several other estimators that are based on approaches suggested in the sampling literature.

4.
Data from large surveys are often supplemented with sampling weights that are designed to reflect the unequal probabilities of response and selection inherent in complex survey sampling methods. We propose two methods for Bayesian estimation of parametric models in a setting where the survey data and the weights are available, but where information on how the weights were constructed is unavailable. The first approach is to simply replace the likelihood with the pseudo-likelihood in the formulation of Bayes' theorem. This is proven to lead to a consistent estimator but also to credible intervals that suffer from systematic undercoverage. Our second approach uses the weights to generate a representative sample, which is integrated into Markov chain Monte Carlo (MCMC) or other simulation algorithms designed to estimate the parameters of the model. In extensive simulation studies, the latter methodology is shown to achieve performance comparable to the standard frequentist solution of pseudo maximum likelihood, with the added advantage of being applicable to models that require inference via MCMC. The methodology is demonstrated further by fitting a mixture of gamma densities to a sample of Australian household incomes.
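The second approach can be illustrated with a weighted resampling step: draw a pseudo-representative sample with probability proportional to the survey weights, which can then be handed to any MCMC routine as if it were an i.i.d. sample. The sketch below uses toy data and toy weights; the details differ from the authors' algorithm.

```python
# A minimal sketch of generating a representative sample from survey
# weights by weighted resampling; toy data, not the paper's procedure.
import numpy as np

rng = np.random.default_rng(0)

def representative_resample(y, w, m=None):
    """Resample units with probability proportional to their survey weight."""
    w = np.asarray(w, dtype=float)
    m = len(y) if m is None else m
    idx = rng.choice(len(y), size=m, replace=True, p=w / w.sum())
    return np.asarray(y)[idx]

# Hypothetical skewed incomes; weights favour under-sampled (high) incomes.
y = rng.gamma(shape=2.0, scale=300.0, size=500)
w = 1.0 + 2.0 * (y > np.median(y))     # toy informative weights
y_rep = representative_resample(y, w)
print(y.mean(), y_rep.mean())          # the weighted resample shifts the mean
```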

5.
The Hartley‐Rao‐Cochran sampling design is an unequal probability sampling design which can be used to select samples from finite populations. We propose to adjust the empirical likelihood approach for the Hartley‐Rao‐Cochran sampling design. The approach proposed intrinsically incorporates sampling weights, auxiliary information and allows for large sampling fractions. It can be used to construct confidence intervals. In a simulation study, we show that the coverage may be better for the empirical likelihood confidence interval than for standard confidence intervals based on variance estimates. The approach proposed is simple to implement and less computer intensive than bootstrap. The confidence interval proposed does not rely on re‐sampling, linearization, variance estimation, design‐effects or joint inclusion probabilities.

6.
A class of designs for sampling two units without replacement with inclusion probabilities proportional to size is proposed in this article. Many well-known probability proportional to size sampling designs are special cases of this class. The first- and second-order inclusion probabilities of this class satisfy important properties and provide a non-negative variance estimator of the Horvitz-Thompson estimator for the population total. A suitable choice of the first- and second-order inclusion probabilities from this class can be used to reduce the variance of the Horvitz-Thompson estimator. Comparisons between different proportional to size sampling designs through real data and artificial examples are given. The examples show that, in most cases, the minimum variance of the Horvitz-Thompson estimator obtained under the proposed design is not attained by any of the well-known designs.
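For a fixed-size-two design, the Horvitz-Thompson total and the Sen-Yates-Grundy variance estimator take a particularly simple form, shown in the sketch below with hypothetical inclusion probabilities. Non-negativity of the variance estimator holds whenever the first- and second-order probabilities satisfy pi_i * pi_j >= pi_ij, which is the kind of property the proposed class guarantees.

```python
# A minimal sketch of the Horvitz-Thompson total and the Sen-Yates-Grundy
# variance estimator for a sample of two units i, j (fixed-size design).
def ht_total_and_syg_variance(y_i, y_j, pi_i, pi_j, pi_ij):
    """Return (HT estimate of the total, SYG variance estimate).

    pi_i, pi_j : first-order inclusion probabilities
    pi_ij      : second-order (joint) inclusion probability
    The SYG estimate is non-negative whenever pi_i * pi_j >= pi_ij.
    """
    t_ht = y_i / pi_i + y_j / pi_j
    v_syg = (pi_i * pi_j - pi_ij) / pi_ij * (y_i / pi_i - y_j / pi_j) ** 2
    return t_ht, v_syg

# Hypothetical values for illustration only.
print(ht_total_and_syg_variance(10.0, 30.0, 0.4, 0.6, 0.2))
```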

7.
Calibration on available auxiliary variables is widely used to increase the precision of parameter estimates. Singh and Sedory [Two-step calibration of design weights in survey sampling. Commun Stat Theory Methods. 2016;45(12):3510-3523] considered the problem of two-step calibration of design weights for a single auxiliary variable. In the first step, for a given sample, the calibrated weights are set proportional to the design weights. In the second step, the value of the proportionality constant is determined according to the objectives of the individual investigator or user, for example, to achieve minimum mean squared error or to reduce bias. In this paper, we suggest using two auxiliary variables for two-step calibration of the design weights and compare the results with the single-auxiliary-variable case for different sample sizes, based on simulated and real-life data sets. The simulated and real-life results show that the two-step calibration estimator based on two auxiliary variables outperforms the estimator based on a single auxiliary variable in terms of mean squared error.
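The calibration building block underlying such estimators can be sketched as standard chi-square (linear) calibration on two auxiliary variables: adjust the design weights minimally so that weighted auxiliary totals match known population totals. The two-step refinement of Singh and Sedory is not reproduced here; the population totals in the example are hypothetical.

```python
# A minimal sketch of chi-square (linear) calibration on two auxiliaries.
import numpy as np

def calibrate_weights(d, X, totals):
    """Calibrated weights w = d * (1 + X @ lam) with sum_i w_i x_i = totals."""
    d = np.asarray(d, float)
    X = np.asarray(X, float)            # n x 2 auxiliary matrix
    A = X.T @ (d[:, None] * X)          # 2 x 2 linear system
    lam = np.linalg.solve(A, np.asarray(totals, float) - X.T @ d)
    return d * (1.0 + X @ lam)

rng = np.random.default_rng(1)
X = rng.uniform(1, 5, size=(100, 2))
d = np.full(100, 10.0)                  # design weights for n=100, N=1000
totals = np.array([3000.0, 2900.0])     # hypothetical known population totals
w = calibrate_weights(d, X, totals)
print(X.T @ w)                          # reproduces the totals exactly
```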

8.
We show how to make inferences about a finite population proportion using data from a possibly biased sample. In the absence of any selection bias or survey weights, a simple ignorable selection model, which assumes that the binary responses are independent and identically distributed Bernoulli random variables, is not unreasonable. However, this ignorable selection model is inappropriate when there is selection bias in the sample. We assume that the survey weights (or their reciprocals, which we call 'selection' probabilities) are available, but that there is no simple relation between the binary responses and the selection probabilities. To capture the selection bias, we assume that there is some correlation between the binary responses and the selection probabilities (e.g., there may be a somewhat higher or lower proportion of positive responses among the sampled units than among the nonsampled units). We use a Bayesian nonignorable selection model to accommodate the selection mechanism, fitted using Markov chain Monte Carlo methods. We illustrate our method using numerical examples obtained from NHIS 1995 data.

9.
When auxiliary information is available at the design stage, samples may be selected by means of balanced sampling. The variance of the Horvitz-Thompson estimator is then reduced, since it is approximately given by that of the residuals from the regression of the variable of interest on the balancing variables. In this paper, a method for computing optimal inclusion probabilities for balanced sampling on given auxiliary variables is studied. We show that the method suggested by Tillé and Favre (2005) enables the computation of inclusion probabilities that lead to a decrease in variance under some conditions on the set of balancing variables. A disadvantage is that the target optimal inclusion probabilities depend on the variable of interest. If the needed quantities are unknown at the design stage, we propose to use estimates instead (e.g., arising from a previous wave of the survey). A limited simulation study suggests that, under some conditions, our method performs better than that of Tillé and Favre (2005).

10.
In real-time sampling, the units of a population pass a sampler one by one. Alternatively, the sampler may successively visit the units of the population. Each unit passes only once, and at that time it is decided whether or not it should be included in the sample. The goal is to take a sample and efficiently estimate a population parameter. The list-sequential sampling method presented here is called correlated Poisson sampling. The method is an alternative to Poisson sampling, where the units are sampled independently with given inclusion probabilities. Correlated Poisson sampling uses weights to create correlations between the inclusion indicators. In that way it is possible to reduce the variation of the sample size and to make the samples more evenly spread over the population. Simulation shows that correlated Poisson sampling improves efficiency in many cases.
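A minimal sketch of the list-sequential mechanism follows: after each accept/reject decision, downstream inclusion probabilities are adjusted by weights, which induces correlation between the inclusion indicators. The weight choice below (a single-neighbour update with a fixed constant) is illustrative only, not the tuned weights of the method; note that the update is mean-preserving, so unconditional inclusion probabilities are unchanged as long as no clipping occurs.

```python
# A minimal sketch of list-sequential (correlated) Poisson sampling with a
# simple illustrative one-step-ahead weight; not the paper's tuned weights.
import numpy as np

rng = np.random.default_rng(7)

def correlated_poisson(p, w=0.2):
    """Visit units in list order; update the next unit's probability."""
    p = np.asarray(p, float).copy()
    I = np.zeros(len(p), dtype=int)
    for t in range(len(p)):
        I[t] = rng.random() < p[t]
        if t + 1 < len(p):
            # Negative correlation with the immediate neighbour only.
            # E[I_t - p_t] = 0, so the update preserves the unconditional
            # inclusion probability of unit t+1 (clip is just a safeguard).
            p[t + 1] = np.clip(p[t + 1] - (I[t] - p[t]) * w, 0.0, 1.0)
    return I

p = np.full(20, 0.3)
print(correlated_poisson(p))   # samples spread more evenly along the list
```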

11.
In practical survey sampling, missing data are unavoidable due to nonresponse, observations rejected during editing, disclosure control, or outlier suppression. We propose a calibrated imputation approach so that valid point and variance estimates of the population (or domain) totals can be computed by secondary users using simple complete-sample formulae. This is especially helpful for variance estimation, which generally requires additional information and tools that are unavailable to secondary users. Our approach is natural for continuous variables, where the estimation may be based on either reweighting or imputation, including possibly their outlier-robust extensions. We also propose a multivariate procedure to accommodate the estimation of the covariance matrix between estimated population totals, which facilitates variance estimation for ratios or differences among the estimated totals. We illustrate the proposed approach using simulation data in supplementary materials that are available online.

12.
A common strategy for handling item nonresponse in survey sampling is hot deck imputation, where each missing value is replaced with an observed response from a "similar" unit. We discuss here the use of sampling weights in the hot deck. The naive approach is to ignore sample weights in the creation of adjustment cells, which effectively imputes the unweighted sample distribution of respondents in an adjustment cell, potentially causing bias. Alternative approaches have been proposed that use weights in the imputation by incorporating them into the probabilities of selection for each donor. We show by simulation that these weighted hot decks do not correct for bias when the outcome is related to the sampling weight and the response propensity. The correct approach is to use the sampling weight as a stratifying variable alongside additional adjustment variables when forming adjustment cells.
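The recommended strategy, using the sampling weight as a stratifying variable alongside the adjustment variables, can be sketched as below. The cell construction (quantile strata of the weight) and the toy data are assumptions made for illustration, not the paper's exact specification.

```python
# A minimal sketch of hot deck imputation with adjustment cells formed by
# crossing an adjustment variable with quantile strata of the sampling weight.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

def hot_deck_impute(df, y="y", adj="adj", weight="w", n_weight_strata=3):
    df = df.copy()
    df["w_stratum"] = pd.qcut(df[weight], q=n_weight_strata, labels=False)
    for _, cell in df.groupby([adj, "w_stratum"]):
        miss = cell.index[cell[y].isna()]
        donors = cell[y].dropna()
        if len(miss) and len(donors):
            # replace each missing value with a random donor from the cell
            df.loc[miss, y] = rng.choice(donors, size=len(miss), replace=True)
    return df

# Toy data: outcome related to the weight, with about 20% missingness.
n = 200
w = rng.uniform(1, 10, n)
y = 2 * w + rng.normal(0, 1, n)
y[rng.random(n) < 0.2] = np.nan
df = pd.DataFrame({"y": y, "adj": rng.integers(0, 2, n), "w": w})
print(hot_deck_impute(df)["y"].isna().sum())   # 0 after imputation
```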

13.
Recently, non-uniform sampling has been suggested in microscopy to increase efficiency. More precisely, probability proportional to size (PPS) sampling has been introduced, where the probability of sampling a unit in the population is proportional to the value of an auxiliary variable. In the microscopy application, the sampling units are fields of view, and the auxiliary variables are easily observed approximations to the variables of interest. Unfortunately, some auxiliary variables often vanish, that is, they are zero-valued. Consequently, part of the population is inaccessible under PPS sampling. We propose a modification of the design based on a stratification idea, for which an optimal solution can be found using a model-assisted approach. The new optimal design also applies to the case where 'vanish' refers to missing auxiliary variables, and it has independent interest in sampling theory. We verify the robustness of the new approach by numerical results, and we use real data to illustrate its applicability.
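The stratification idea can be sketched by splitting the population into a zero-auxiliary stratum, sampled by simple random sampling, and a positive stratum, sampled by PPS. The fixed allocation below is purely illustrative; the paper's optimal allocation is not derived here.

```python
# A minimal sketch: PPS in the positive-auxiliary stratum, SRS in the
# zero-auxiliary stratum; fixed (non-optimal) allocation for illustration.
import numpy as np

rng = np.random.default_rng(5)

def stratified_pps(x, n_pps, n_srs):
    x = np.asarray(x, float)
    pos, zero = np.flatnonzero(x > 0), np.flatnonzero(x == 0)
    # PPS (with replacement, for simplicity) in the positive stratum
    pps = rng.choice(pos, size=n_pps, replace=True, p=x[pos] / x[pos].sum())
    # simple random sampling in the otherwise inaccessible zero stratum
    srs = rng.choice(zero, size=min(n_srs, len(zero)), replace=False)
    return pps, srs

x = np.concatenate([rng.gamma(2.0, 1.0, 90), np.zeros(10)])
print(stratified_pps(x, n_pps=15, n_srs=3))
```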

14.
Suppose that the conditional density of a response variable given a vector of explanatory variables is parametrically modelled, and that data are collected by a two-phase sampling design. First, a simple random sample is drawn from the population. The stratum membership in a finite number of strata of the response and explanatory variables is recorded for each unit. Second, a subsample is drawn from the phase-one sample such that the selection probability is determined by the stratum membership. The response and explanatory variables are fully measured at this phase. We synthesize existing results on nonparametric likelihood estimation and present a streamlined approach for the computation and the large sample theory of profile likelihood in four different situations. The amount of information in terms of data and assumptions varies depending on whether the phase-one data are retained, the selection probabilities are known, and/or the stratum probabilities are known. We establish and illustrate numerically the order of efficiency among the maximum likelihood estimators, according to the amount of information utilized, in the four situations.

15.
Longitudinal surveys have emerged in recent years as an important data collection tool for population studies where the primary interest is to examine population changes over time at the individual level. Longitudinal data are often analyzed through the generalized estimating equations (GEE) approach. Most of the existing literature on the GEE method, however, is developed under non-survey settings and is inappropriate for data collected through complex sampling designs. In this paper the authors develop a pseudo-GEE approach for the analysis of survey data. They show that survey weights must, and can, be appropriately accounted for in the GEE method under a joint randomization framework. The consistency of the resulting pseudo-GEE estimators is established under the proposed framework. Linearization variance estimators are developed for the pseudo-GEE estimators when the finite population sampling fractions are small or negligible, a scenario that often holds for large-scale surveys. Finite sample performance of the proposed estimators is investigated through an extensive simulation study using data from the National Longitudinal Survey of Children and Youth. The results show that the pseudo-GEE estimators and the linearization variance estimators perform well under several sampling designs and for both continuous and binary responses. The Canadian Journal of Statistics 38: 540–554; 2010 © 2010 Statistical Society of Canada
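For a linear marginal model with an independence working correlation, a pseudo-GEE with survey weights reduces to survey-weighted least squares, which the sketch below solves directly. This is a simplified illustration, not the authors' full methodology: no cluster structure, joint randomization framework, or linearization variance estimator is included, and the weights are hypothetical.

```python
# A minimal sketch of a pseudo-estimating-equation for a linear marginal
# model: survey weights enter as unit-level multipliers, reducing the GEE
# (independence working correlation) to weighted least squares.
import numpy as np

def pseudo_gee_linear(X, y, w):
    """Solve sum_i w_i * x_i (y_i - x_i' beta) = 0 for beta."""
    W = np.asarray(w, float)
    XtWX = X.T @ (W[:, None] * X)
    XtWy = X.T @ (W * y)
    return np.linalg.solve(XtWX, XtWy)

rng = np.random.default_rng(9)
X = np.column_stack([np.ones(300), rng.normal(size=300)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=300)
w = rng.uniform(0.5, 2.0, 300)          # hypothetical survey weights
print(pseudo_gee_linear(X, y, w))       # close to [1, 2]
```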

16.
Multilevel modelling is sometimes used for data from complex surveys involving multistage sampling, unequal sampling probabilities and stratification. We consider generalized linear mixed models and particularly the case of dichotomous responses. A pseudolikelihood approach for accommodating inverse probability weights in multilevel models with an arbitrary number of levels is implemented by using adaptive quadrature. A sandwich estimator is used to obtain standard errors that account for stratification and clustering. When level 1 weights are used that vary between elementary units in clusters, the scaling of the weights becomes important. We point out that not only variance components but also regression coefficients can be severely biased when the response is dichotomous. The pseudolikelihood methodology is applied to complex survey data on reading proficiency from the American sample of the 'Program for international student assessment' 2000 study, using the Stata program gllamm which can estimate a wide range of multilevel and latent variable models. Performance of pseudo-maximum-likelihood with different methods for handling level 1 weights is investigated in a Monte Carlo experiment. Pseudo-maximum-likelihood estimators of (conditional) regression coefficients perform well for large cluster sizes but are biased for small cluster sizes. In contrast, estimators of marginal effects perform well in both situations. We conclude that caution must be exercised in pseudo-maximum-likelihood estimation for small cluster sizes when level 1 weights are used.

17.
In outcome‐dependent sampling, the continuous or binary outcome variable in a regression model is available in advance to guide selection of a sample on which explanatory variables are then measured. Selection probabilities may either be a smooth function of the outcome variable or be based on a stratification of the outcome. In many cases, only data from the final sample is accessible to the analyst. A maximum likelihood approach for this data configuration is developed here for the first time. The likelihood for fully general outcome‐dependent designs is stated, then the special case of Poisson sampling is examined in more detail. The maximum likelihood estimator differs from the well‐known maximum sample likelihood estimator, and an information bound result shows that the former is asymptotically more efficient. A simulation study suggests that the efficiency difference is generally small. Maximum sample likelihood estimation is therefore recommended in practice when only sample data is available. Some new smooth sample designs show considerable promise.

18.
Communications in Statistics: Theory and Methods, 2012, 41(16-17): 3278-3300
Under complex survey sampling, in particular when selection probabilities depend on the response variable (informative sampling), the sample and population distributions differ, possibly resulting in selection bias. This article addresses this problem by fitting two statistical models for one-way analysis of variance under a complex survey design (for example, two-stage sampling, stratification, and unequal selection probabilities): the variance components model (a two-stage model) and the fixed effects model (a single-stage model). Classical theory underlying the use of the two-stage model involves simple random sampling at each of the two stages. In such cases the model holding in the sample, after sample selection, is the same as the model for the population before sample selection. When the selection probabilities are related to the values of the response variable, standard estimates of the population model parameters may be severely biased, possibly leading to false inference. The idea behind the approach is to extract the model holding for the sample data as a function of the population model and of the first-order inclusion probabilities, and then to fit the sample model using analysis of variance, maximum likelihood, and pseudo maximum likelihood methods of estimation. The main feature of the proposed techniques relates to their behavior in terms of the informativeness parameter. We also show that using the population model while ignoring the informative sampling design yields biased model fitting.
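The bias this article warns about is easy to demonstrate by simulation: the sketch below draws units with probability depending on the response and compares the naive sample mean with a Hajek-type weighted mean based on the reciprocal inclusion probabilities. All numbers are hypothetical.

```python
# A minimal sketch of the bias under informative sampling and its
# correction by inverse-probability weighting.
import numpy as np

rng = np.random.default_rng(11)

N = 100_000
y = rng.normal(10.0, 2.0, N)
pi = 0.01 * np.exp(0.1 * (y - y.mean()))     # informative: depends on y
pi = np.clip(pi, 0.0, 1.0)
sampled = rng.random(N) < pi                 # Poisson-type selection

y_s, pi_s = y[sampled], pi[sampled]
print("population mean:   ", y.mean())
print("naive sample mean: ", y_s.mean())     # biased upward
print("Hajek weighted mean:", np.sum(y_s / pi_s) / np.sum(1.0 / pi_s))
```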

19.
Approaches that use the pseudolikelihood to perform multilevel modelling on survey data have been presented in the literature. To avoid biased estimates due to unequal selection probabilities, conditional weights can be introduced at each level. Less-biased estimators can also be obtained in a two-level linear model if the level-1 weights are scaled. In this paper, we studied several level-2 weights that can be introduced into the pseudolikelihood when the sampling design and the hierarchical structure of the multilevel model do not match. Two-level and three-level models were studied. The present work was motivated by a study that aims to estimate the contributions of various lead sources to the contamination of interior floor dust in the rooms of dwellings. We performed a simulation study using real data collected from a French survey to achieve our objective. We conclude that it is preferable to use unweighted analyses or, at most, conditional level-2 weights in a two-level or three-level model. We state some warnings and make some recommendations.

20.
It is the main purpose of this paper to study the asymptotics of certain variants of the empirical process in the context of survey data. More precisely, functional central limit theorems are established under the usual conditions when the sample is drawn from a Poisson or a rejective sampling design. The framework we develop encompasses sampling designs with non-uniform first-order inclusion probabilities, which can be chosen so as to optimize estimation accuracy. Applications to Hadamard-differentiable functionals are considered.
