Similar articles
Found 20 similar articles (search time: 31 ms)
1.
"We describe a methodology for estimating the accuracy of dual systems estimates (DSE's) of population, census estimates of population, and estimates of undercount in the census. The DSE's are based on the census and a post-enumeration survey (PES). We apply the methodology to the 1988 dress rehearsal census of St. Louis and east-central Missouri and we discuss its applicability to the 1990 [U.S.] census and PES. The methodology is based on decompositions of the total (or net) error into components, such as sampling error, matching error, and other nonsampling errors. Limited information about the accuracy of certain components of error, notably failure of assumptions in the 'capture-recapture' model, but others as well, leads us to offer tentative estimates of the errors of the census, DSE, and undercount estimates for 1988. Improved estimates are anticipated for 1990." Comments are included by Eugene P. Ericksen and Joseph B. Kadane (pp. 855-7) and Kenneth W. Wachter and Terence P. Speed (pp. 858-61), as well as a rejoinder by Mulry and Spencer (pp. 861-3).
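The capture-recapture ("dual system") estimator underlying the DSE can be sketched with the classical Lincoln-Petersen formula. The counts below are invented for illustration; the actual PES estimation involves stratification, matching, and the error corrections this abstract describes.

```python
def dual_system_estimate(census_count, pes_count, matched):
    """Lincoln-Petersen dual system estimate of true population size.

    census_count: people counted in the census (first "capture")
    pes_count:    people counted in the post-enumeration survey (second "capture")
    matched:      people found in both lists, under the independence assumption
    """
    return census_count * pes_count / matched

# Illustrative numbers: 900 counted in the census, 800 in the PES, 720 matched.
# The DSE is 900 * 800 / 720 = 1000, implying a net census undercount of 100.
N_hat = dual_system_estimate(900, 800, 720)
undercount = N_hat - 900
```

The independence ("capture-recapture") assumption flagged in the abstract is exactly what makes the product formula valid; correlation between census and PES inclusion biases the DSE.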

2.
"Population estimates from the 1990 Post-Enumeration Survey (PES), used to measure decennial census undercount, were obtained from dual system estimates (DSE's) that assumed independence within strata defined by age-race-sex-geography and other variables. We make this independence assumption for females, but develop methods to avoid the independence assumption for males within strata by using national level sex ratios from demographic analysis (DA).... We consider several...alternative DSE's, and use DA results for 1990 to apply them to data from the 1990 U.S. census and PES."

3.
Inference concerning the negative binomial dispersion parameter, denoted by c, is important in many biological and biomedical investigations. Properties of the maximum-likelihood estimator of c and its bias-corrected version have been studied extensively, mainly in terms of bias and efficiency [W.W. Piegorsch, Maximum likelihood estimation for the negative binomial dispersion parameter, Biometrics 46 (1990), pp. 863–867; S.J. Clark and J.N. Perry, Estimation of the negative binomial parameter κ by maximum quasi-likelihood, Biometrics 45 (1989), pp. 309–316; K.K. Saha and S.R. Paul, Bias corrected maximum likelihood estimator of the negative binomial dispersion parameter, Biometrics 61 (2005), pp. 179–185]. However, not much work has been done on the construction of confidence intervals (C.I.s) for c. The purpose of this paper is to study the behaviour of some C.I. procedures for c. We study, by simulations, three Wald-type C.I. procedures based on the asymptotic distributions of the method of moments estimate (mme), the maximum-likelihood estimate (mle) and the bias-corrected mle (bcmle) [K.K. Saha and S.R. Paul, Bias corrected maximum likelihood estimator of the negative binomial dispersion parameter, Biometrics 61 (2005), pp. 179–185] of c. All three methods show serious under-coverage. We further study parametric bootstrap procedures based on these estimates of c, which significantly improve the coverage probabilities. The bootstrap C.I.s based on the mle (Boot-MLE method) and the bcmle (Boot-BCM method) have coverages that are significantly better (empirical coverage close to the nominal coverage) than the corresponding bootstrap C.I. based on the mme, especially for small sample sizes and highly over-dispersed data. However, simulation results on lengths of the C.I.s show that all three bootstrap procedures have larger average interval lengths. Therefore, for practical data analysis, the bootstrap C.I. Boot-MLE or Boot-BCM should be used, although the Boot-MLE method seems preferable to the Boot-BCM method in terms of both coverage and length. Furthermore, Boot-MLE needs less computation than Boot-BCM.
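A minimal sketch of a parametric bootstrap percentile C.I. for the dispersion c, assuming the common parameterization var = m + c·m². For brevity it bootstraps the mme rather than the mle that the paper recommends; function names, seeds, and sample values are illustrative.

```python
import numpy as np

def mme_dispersion(x):
    """Method-of-moments estimate of the NB dispersion c, where var = m + c*m^2."""
    m, v = x.mean(), x.var(ddof=1)
    return max((v - m) / m**2, 1e-8)   # clip at a small positive value

def boot_ci_dispersion(x, n_boot=2000, alpha=0.05, seed=None):
    """Parametric bootstrap percentile C.I. for c based on the mme."""
    rng = np.random.default_rng(seed)
    m, c = x.mean(), mme_dispersion(x)
    n_nb, p_nb = 1.0 / c, 1.0 / (1.0 + c * m)   # numpy's (n, p) parameterization
    boot = np.array([mme_dispersion(rng.negative_binomial(n_nb, p_nb, len(x)))
                     for _ in range(n_boot)])
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(1)
x = rng.negative_binomial(2.0, 0.4, 100).astype(float)   # true c = 1/2
lo, hi = boot_ci_dispersion(x, seed=1)
```

Swapping `mme_dispersion` for a numerical mle (or bcmle) inside the bootstrap loop would give the Boot-MLE (or Boot-BCM) procedure the abstract favours.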

4.
Official population data for the USSR are presented for 1985 and 1986. Part 1 (pp. 65-72) contains data on capitals of union republics and cities with over one million inhabitants, including population estimates for 1986 and vital statistics for 1985. Part 2 (p. 72) presents population estimates by sex and union republic, 1986. Part 3 (pp. 73-6) presents data on population growth, including birth, death, and natural increase rates, 1984-1985; seasonal distribution of births and deaths; birth order; age-specific birth rates in urban and rural areas and by union republic; marriages; age at marriage; and divorces.

5.
"The 1990 [U.S.] Post-Enumeration Survey (PES) stratified the population into 1,392 subpopulations called post-strata based on location, race, tenure, sex and age, in the hope that these subpopulations were homogeneous in relation to factors affecting the Census coverage....With block-level data from the PES for sites around Detroit and Texas, we are able to examine empirically the extent to which this hope was realized. Using various measures, we find that between-block variation in erroneous enumeration and gross omission rates is about the same magnitude as, and largely in addition to, the corresponding between-post-stratum variation." Comments by Joseph L. Schafer and Donald Ylvisaker and a rejoinder by the authors are included (pp. 1,125-9).

6.
This article presents estimates of household equivalence scales, broken down by demographic characteristics, of U.S. households. Separate estimates are given by family size, age of head, region, race, and urban versus rural residence. Commodity-specific scales are presented for five separate commodity groups—energy, food, consumer goods, capital services, and other services. The estimates are obtained from an econometric model of aggregate consumer behavior. The parameters of this model are estimated by combining aggregate time series and individual cross-section data.

7.
Existing models for ring recovery and recapture data analysis treat temporal variations in annual survival probability (S) as fixed effects. Often there is no explainable structure to the temporal variation in S_1, …, S_k; random effects can then be a useful model: S_i = E(S) + ε_i. Here, the temporal variation in survival probability is treated as random, with E(ε_i) = 0 and E(ε_i^2) = σ^2. This random effects model can now be fit in program MARK. Resultant inferences include point and interval estimation for the process variation σ^2, and estimation of E(S) and var(Ê(S)), where the latter includes a component for σ^2 as well as the traditional component var(Ŝ|S). Furthermore, the random effects model leads to shrinkage estimates, S̃_i, as improved (in mean square error) estimators of S_i compared to the MLEs, Ŝ_i, from the unrestricted time-effects model. Appropriate confidence intervals based on the S̃_i are also provided. In addition, AIC has been generalized to random effects models. This paper presents results of a Monte Carlo evaluation of inference performance under the simple random effects model. Examined by simulation, under the simple one-group Cormack-Jolly-Seber (CJS) model, are issues such as bias of σ̂^2, confidence interval coverage on σ^2, coverage and mean square error comparisons for inference about S_i based on shrinkage versus maximum likelihood estimators, and performance of AIC model selection over three models: S_i = S (no effects), S_i = E(S) + ε_i (random effects), and S_1, …, S_k (fixed effects). For the cases simulated, the random effects methods performed well and were uniformly better than the fixed-effects MLEs of the S_i.
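The shrinkage idea can be illustrated with a simple moment-based sketch. This is not program MARK's actual estimator (which works through the full CJS likelihood), and the survival values and sampling variances below are invented:

```python
import numpy as np

def shrink_survival(S_hat, v):
    """Moment-based random-effects shrinkage of survival MLEs toward their mean.

    S_hat: MLEs S_i from the unrestricted time-effects model
    v:     their sampling variances var(S_hat_i | S_i)
    """
    S_hat, v = np.asarray(S_hat, float), np.asarray(v, float)
    E_hat = S_hat.mean()
    # moment estimate of process variance: total variation minus sampling part
    sigma2 = max(S_hat.var(ddof=1) - v.mean(), 0.0)
    weight = sigma2 / (sigma2 + v)        # weight -> 0 shrinks fully to E_hat
    S_tilde = E_hat + weight * (S_hat - E_hat)
    return E_hat, sigma2, S_tilde

S_hat = [0.55, 0.70, 0.62, 0.48]
E_hat, sigma2, S_tilde = shrink_survival(S_hat, [0.004] * 4)
```

The shrinkage estimates sit between each MLE and the overall mean, trading a little bias for a reduction in mean square error, which is the comparison the abstract's simulations examine.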

8.
The author assesses the 1990 Post-Enumeration Survey, which was "designed to produce Census tabulation of [U.S.] states and local areas corrected for the undercount or overcount of population....[He] discusses the process that produced the census adjustment estimates [as well as] the work aimed at improving the estimates.... The article then presents some of the principal results...."

9.
The small sample performance of Zeger and Liang's extended generalized linear models for the analysis of longitudinal data (Biometrics, 42, 121-130, 1986) is investigated for correlated gamma data. Results show that the confidence intervals do not provide desirable coverage of the true parameter due to considerably biased point estimates. Improved estimates are proposed using the jackknife procedure. Simulations performed to evaluate the proposed estimates indicate properties superior to those of the original estimates.
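A generic leave-one-out jackknife bias correction, sketched for a scalar estimator; the paper applies the same idea to GEE regression coefficients, and the data below are illustrative.

```python
import numpy as np

def jackknife(estimator, x):
    """Leave-one-out jackknife: bias-corrected estimate and standard error."""
    x = np.asarray(x, float)
    n = len(x)
    theta_hat = estimator(x)
    loo = np.array([estimator(np.delete(x, i)) for i in range(n)])  # n leave-one-out fits
    bias = (n - 1) * (loo.mean() - theta_hat)
    se = np.sqrt((n - 1) / n * ((loo - loo.mean()) ** 2).sum())
    return theta_hat - bias, se

# For the sample mean the jackknife leaves the estimate unchanged (it is unbiased),
# and the jackknife SE reproduces the usual s/sqrt(n).
est, se = jackknife(np.mean, [1.0, 2.0, 3.0, 4.0])
```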

10.
Doubly robust (DR) estimators of the mean with missing data are compared. An estimator is DR if either the regression of the missing variable on the observed variables or the missing data mechanism is correctly specified. One method is to include the inverse of the propensity score as a linear term in the imputation model [D. Firth and K.E. Bennett, Robust models in probability sampling, J. R. Statist. Soc. Ser. B. 60 (1998), pp. 3–21; D.O. Scharfstein, A. Rotnitzky, and J.M. Robins, Adjusting for nonignorable drop-out using semiparametric nonresponse models (with discussion), J. Am. Statist. Assoc. 94 (1999), pp. 1096–1146; H. Bang and J.M. Robins, Doubly robust estimation in missing data and causal inference models, Biometrics 61 (2005), pp. 962–972]. Another method is to calibrate the predictions from a parametric model by adding a mean of the weighted residuals [J.M. Robins, A. Rotnitzky, and L.P. Zhao, Estimation of regression coefficients when some regressors are not always observed, J. Am. Statist. Assoc. 89 (1994), pp. 846–866; D.O. Scharfstein, A. Rotnitzky, and J.M. Robins, Adjusting for nonignorable drop-out using semiparametric nonresponse models (with discussion), J. Am. Statist. Assoc. 94 (1999), pp. 1096–1146]. The penalized spline propensity prediction (PSPP) model includes the propensity score in the model non-parametrically [R.J.A. Little and H. An, Robust likelihood-based analysis of multivariate data with missing values, Statist. Sin. 14 (2004), pp. 949–968; G. Zhang and R.J. Little, Extensions of the penalized spline propensity prediction method of imputation, Biometrics, 65(3) (2008), pp. 911–918]. All these methods have consistency properties under misspecification of regression models, but their comparative efficiency and confidence coverage in finite samples have received little attention.
In this paper, we compare the root mean square error (RMSE), width of confidence interval and non-coverage rate of these methods under various mean and response propensity functions. We study the effects of sample size and robustness to model misspecification. The PSPP method yields estimates with smaller RMSE and width of confidence interval compared with other methods under most situations. It also yields estimates with confidence coverage close to the 95% nominal level, provided the sample size is not too small.
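The "calibration" estimator described above (parametric predictions corrected by a mean of weighted residuals) can be sketched as follows. The inputs here are illustrative; in practice `y_pred` and `pscore` come from fitted outcome-regression and propensity models.

```python
import numpy as np

def dr_mean(y, observed, y_pred, pscore):
    """Doubly robust mean: regression predictions plus a mean of IPW residuals.

    y:        outcome (value is arbitrary where missing)
    observed: 1 if y was observed, 0 if missing
    y_pred:   outcome-regression predictions, available for every unit
    pscore:   estimated response propensities P(observed = 1 | covariates)
    """
    y = np.where(observed == 1, y, 0.0)
    return np.mean(y_pred + observed * (y - y_pred) / pscore)

# Sanity check: with everything observed and unit propensities this is the sample mean.
y = np.array([1.0, 2.0, 3.0, 4.0])
est = dr_mean(y, np.ones(4), np.full(4, 2.0), np.ones(4))
```

The estimator is consistent if either `y_pred` or `pscore` is based on a correctly specified model, which is the double robustness property the abstract compares across methods.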

11.
"Net undercount rates in the U.S. decennial census have been steadily declining over the last several censuses. Differential undercounts among race groups and geographic areas, however, appear to persist. In the following, we examine and compare several methodologies for providing small area estimates of census coverage by constructing artificial populations. Measures of performance are also introduced to assess the various small area estimates. Synthetic estimation in combination with regression modelling provides the best results over the methods considered. Sampling error effects are also simulated. The results form the basis for determining coverage evaluation survey small area estimates of the 1990 decennial census."

12.
"This article shows that undercount adjustment [of the 1990 U.S. census] will probably reallocate one [in] three House seats across the states. The adjustment's impact may depend on the method used and the assumptions underlying undercount estimates. Using regression analysis to reduce sampling error in undercount estimates from dual-systems analysis, however, eliminates sensitivity to all but the most extreme changes in assumptions. Generally, adjustment will more likely affect large states than small ones, and large states with proportionately many urban Black and Hispanic residents will likely gain seats at the expense of large states with few such residents."

13.
The 2001 census in the UK asked for a return of people 'usually living at this address'. But this phrase is fuzzy and may have led to undercount. In addition, analysis of the sex ratios in the 2001 census of England and Wales points to a sex bias in the adjustments for net undercount—too few males in relation to females. The Office for National Statistics's abandonment of the method of demographic analysis for the population of working ages has allowed these biases to creep in. The paper presents a demographic account to check on the plausibility of census results. The need to revise preliminary estimates of the national population over a period of years following census day—as experienced in North America and now in the UK—calls into question the feasibility of a one-number census. Looking to the future, the environment for taking a reliable census by conventional methods is deteriorating. The UK Government's proposals for a population register open up the possibility of a Nordic-style administrative record census in the longer term.

14.
"Planning is under way for the U.S.A. bicentennial census in 1990. The U.S. Census Bureau sponsored a study panel under the U.S. Committee on National Statistics to consider key aspects of methodology for the census and to recommend priority areas for research and testing. The recommendations of the Panel on Decennial Census Methodology, which are summarized in this paper, cover four main topics: adjustment of the census counts for coverage errors, methods of coverage evaluation, uses of sampling in obtaining the count, and uses of administrative records in improving the quality of selected content items."

15.
The coefficient of variation (CV) can be used as an index of reliability of measurement. The lognormal distribution has been applied to fit data in many fields. We developed approximate interval estimation of the ratio of two coefficients of variation (CVs) for lognormal distributions by using the Wald-type, Fieller-type, and log methods, and the method of variance estimates recovery (MOVER). The simulation studies show that the empirical coverage rates of the methods are satisfactorily close to the nominal coverage rate for medium sample sizes.
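The CV of a lognormal is sqrt(exp(σ²) − 1), a function of the log-scale variance alone, which is why interval procedures for the CV ratio reduce to procedures for the two σ² parameters. A point-estimate sketch (sample values invented; the interval methods themselves are what the paper develops):

```python
import numpy as np

def lognormal_cv(sigma2):
    """CV of a lognormal depends only on its log-scale variance sigma^2."""
    return np.sqrt(np.exp(sigma2) - 1.0)

def cv_ratio(x, y):
    """Point estimate of the ratio of CVs of two lognormal samples."""
    s2x = np.log(x).var(ddof=1)
    s2y = np.log(y).var(ddof=1)
    return lognormal_cv(s2x) / lognormal_cv(s2y)

x = np.array([1.2, 3.4, 0.8, 2.5, 1.9])
y = np.array([10.0, 14.0, 8.5, 12.0, 9.0])
r = cv_ratio(x, y)
```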

16.
"The 1990 [U.S.] census and Post-Enumeration Survey produced census and dual system estimates (DSE) of population by domain, together with an estimated sampling covariance matrix of the DSE. Estimates of the bias of the DSE were derived from various PES evaluation programs. Of the three sources, the unadjusted census is the least variable but is believed to be the most biased, the DSE is less biased but more variable, and the bias estimates may be regarded as unbiased but are the most variable. This article addresses methods for combining the census, the DSE, and bias estimates obtained from the evaluation programs to produce accurate estimates of population shares, as measured by weighted squared- or absolute-error loss functions applied to estimated population shares of domains."

17.
For constructing simultaneous confidence intervals for ratios of means for lognormal distributions, two approaches using a two-step method of variance estimates recovery are proposed. The first approach proposes fiducial generalized confidence intervals (FGCIs) in the first step followed by the method of variance estimates recovery (MOVER) in the second step (FGCIs–MOVER). The second approach uses MOVER in the first and second steps (MOVER–MOVER). Performance of the proposed approaches is compared with simultaneous fiducial generalized confidence intervals (SFGCIs). Monte Carlo simulation is used to evaluate the performance of these approaches in terms of coverage probability, average interval width, and time consumption.
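MOVER builds a C.I. for a function of two parameters by "recovering" variance estimates from the limits of the individual C.I.s. A sketch of the difference form (the papers above use the ratio form, whose algebra is longer; all numbers below are invented):

```python
import numpy as np

def mover_diff(theta1, l1, u1, theta2, l2, u2):
    """MOVER C.I. for theta1 - theta2, combining separate C.I.s (l_i, u_i)."""
    L = theta1 - theta2 - np.sqrt((theta1 - l1) ** 2 + (u2 - theta2) ** 2)
    U = theta1 - theta2 + np.sqrt((u1 - theta1) ** 2 + (theta2 - l2) ** 2)
    return L, U

# two illustrative single-parameter C.I.s: (4.0, 6.0) around 5.0 and (2.5, 3.5) around 3.0
L, U = mover_diff(5.0, 4.0, 6.0, 3.0, 2.5, 3.5)
```

The appeal of MOVER, and of the two-step FGCIs–MOVER and MOVER–MOVER variants, is that each input interval can come from whatever method works best for that single parameter.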

18.
This article proposes maximum likelihood estimation based on the bare bones particle swarm optimization (BBPSO) algorithm for estimating the parameters of the Weibull distribution with censored data, which is widely used in lifetime data analysis. This approach produces more accurate parameter estimates for the Weibull distribution. Additionally, confidence intervals for the estimators are obtained. The simulation results show that the BBPSO algorithm outperforms the Newton–Raphson method in most cases in terms of bias, root mean square error, and coverage rate. Two examples are used to demonstrate the performance of the proposed approach. The results show that the maximum likelihood estimates via the BBPSO algorithm perform well for estimating the Weibull parameters with censored data.
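A bare-bones PSO (the parameter-free variant, where each particle is resampled from a Gaussian centred midway between its personal best and the global best) maximizing a right-censored Weibull log-likelihood might look like this. The swarm size, iteration count, and simulated data are illustrative, not the paper's settings.

```python
import numpy as np

def weibull_loglik(params, t, delta):
    """Right-censored Weibull log-likelihood; delta = 1 marks an observed failure."""
    k, lam = params
    if k <= 0 or lam <= 0:
        return -np.inf
    z = (t / lam) ** k
    return np.sum(delta * (np.log(k) - k * np.log(lam) + (k - 1) * np.log(t))) - z.sum()

def bbpso_fit(t, delta, n_particles=30, n_iter=200, seed=None):
    """Bare-bones PSO: sample each particle around (personal best + global best)/2
    with spread |personal best - global best|; keep improvements."""
    rng = np.random.default_rng(seed)
    best = rng.uniform(0.1, 5.0, (n_particles, 2))        # (shape, scale) guesses
    best_val = np.array([weibull_loglik(p, t, delta) for p in best])
    g = best[best_val.argmax()].copy()                    # global best
    for _ in range(n_iter):
        pos = rng.normal((best + g) / 2.0, np.abs(best - g) + 1e-12)
        val = np.array([weibull_loglik(p, t, delta) for p in pos])
        improved = val > best_val
        best[improved], best_val[improved] = pos[improved], val[improved]
        g = best[best_val.argmax()].copy()
    return g                                              # (k_hat, lam_hat)

rng = np.random.default_rng(0)
t = 3.0 * rng.weibull(2.0, 300)              # true shape 2, scale 3
delta = (t < 4.0).astype(float)              # right-censor at 4
t = np.minimum(t, 4.0)
k_hat, lam_hat = bbpso_fit(t, delta, seed=0)
```

Unlike Newton–Raphson, this needs no derivatives or starting values near the optimum, which is the robustness advantage the abstract reports.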

19.
The U.S. Bureau of Labor Statistics publishes monthly unemployment rate estimates for the 50 states, the District of Columbia, and all counties, based on the Current Population Survey. However, the unemployment rate estimates for some states are unreliable due to low sample sizes in those states. Datta et al. (1999) proposed a hierarchical Bayes (HB) method using a time series generalization of a widely used cross-sectional model in small-area estimation. However, geographical variation is also likely to be important. To obtain an efficient model, a comprehensive mixed normal model that accounts for both spatial and temporal effects is considered. An HB approach using Markov chain Monte Carlo is used for the analysis of the U.S. state-level unemployment rate estimates for January 2004-December 2007. The sensitivity of this type of analysis to prior assumptions in the Gaussian context is also studied.

20.
"Modern time series methods are applied to the analysis of annual demographic data for England, 1541-1800. Evidence is found of non-stationarity in the series and of co-integration among the series. Building on economic models of historical demography, optimal inferential procedures are implemented to estimate the structural parameters of long-term equilibria among the variables. Evidence is found for a small, but significant, Malthusian 'preventive check' as well as interactions between fertility, mortality and nuptiality that are consistent with the predictions often made in demographic studies. Tentative experiments to detect the influence of environmental factors fail to reveal any significant impact on the estimates obtained."


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号