Similar Articles
20 similar articles found.
1.
"In 1980, several cities and states sued the U.S. Census Bureau to correct census results. This correction would adjust for the differential undercounting of Blacks and Hispanics, especially in cities. In this article, the authors, each of whom testified for New York City and State in their joint lawsuit against the Census Bureau, describe the likely pattern of the undercount and present a method to adjust for it." The authors describe available methods for data adjustment and introduce a regression-based composite method of adjustment, which is used to estimate the undercounts for 66 areas. "As expected, we find that the highest undercount rates are in large cities, and the lowest are in states and state remainders with small percentages of Blacks and Hispanics. Next, we analyze how sensitive our estimates are to changes in data and modeling assumptions. We find that these changes do not affect the estimates very much. Our conclusion is that regardless of whether we use one of the simple methods or the composite method and regardless of how we vary the assumptions of the composite method, an adjustment reliably reduces population shares in states with few minorities and increases the shares of large cities."  相似文献   

2.
The author assesses the 1990 Post-Enumeration Survey, which was "designed to produce Census tabulation of [U.S.] states and local areas corrected for the undercount or overcount of population....[He] discusses the process that produced the census adjustment estimates [as well as] the work aimed at improving the estimates.... The article then presents some of the principal results...."

3.
"We describe a methodology for estimating the accuracy of dual systems estimates (DSE's) of population, census estimates of population, and estimates of undercount in the census. The DSE's are based on the census and a post-enumeration survey (PES). We apply the methodology to the 1988 dress rehearsal census of St. Louis and east-central Missouri and we discuss its applicability to the 1990 [U.S.] census and PES. The methodology is based on decompositions of the total (or net) error into components, such as sampling error, matching error, and other nonsampling errors. Limited information about the accuracy of certain components of error, notably failure of assumptions in the 'capture-recapture' model, but others as well, lead us to offer tentative estimates of the errors of the census, DSE, and undercount estimates for 1988. Improved estimates are anticipated for 1990." Comments are included by Eugene P. Ericksen and Joseph B. Kadane (pp. 855-7) and Kenneth W. Wachter and Terence P. Speed (pp. 858-61), as well as a rejoinder by Mulry and Spencer (pp. 861-3).  相似文献   

4.
The U.S. Bureau of Labor Statistics publishes monthly unemployment rate estimates for the 50 states, the District of Columbia, and all counties, based on the Current Population Survey. However, the unemployment rate estimates for some states are unreliable owing to small sample sizes in those states. Datta et al. (1999) proposed a hierarchical Bayes (HB) method using a time series generalization of a widely used cross-sectional model in small-area estimation. Geographical variation, however, is also likely to be important. To obtain an efficient model, a comprehensive mixed normal model that accounts for both spatial and temporal effects is considered. An HB approach using Markov chain Monte Carlo is used for the analysis of the U.S. state-level unemployment rate estimates for January 2004-December 2007. The sensitivity of this type of analysis to prior assumptions in the Gaussian context is also studied.
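A model of the general type described above (direct area-level survey estimates plus spatially and temporally correlated random effects) might be written as follows; this is a sketch in the spirit of the abstract, and the paper's exact specification may differ.

```latex
% Sketch of an area-level model with spatial and temporal random
% effects, in the spirit of the comprehensive mixed normal model
% described above; the paper's exact specification may differ.
\begin{align*}
  y_{it} &= \theta_{it} + e_{it}, \quad e_{it} \sim N(0, \psi_{it})
    && \text{(direct CPS estimate, known sampling variance)} \\
  \theta_{it} &= \mathbf{x}_{it}^{\top}\boldsymbol{\beta} + u_i + v_t
    && \text{(regression part plus random effects)} \\
  \mathbf{u} &\sim \mathrm{CAR}(\rho, \sigma_u^2)
    && \text{(spatially correlated state effects)} \\
  v_t &= \phi\, v_{t-1} + \varepsilon_t, \quad \varepsilon_t \sim N(0, \sigma_v^2)
    && \text{(AR(1) temporal effects)}
\end{align*}
```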

5.
"In July 1991 the [U.S.] Census Bureau recommended to its parent agency, the Department of Commerce, that the 1990 census be adjusted for undercount. The Secretary of Commerce decided not to adjust, however. Those decisions relied at least partly on the Census Bureau's analyses of the accuracy of the census and of the proposed undercount adjustments based on the Post-Enumeration Survey (PES).... This article describes the total error analysis and loss function analysis of the Census Bureau. In its decision not to adjust the census, the Department of Commerce cited different criteria than aggregate loss functions. Those criteria are identified and discussed."  相似文献   

6.
"Population estimates from the 1990 Post-Enumeration Survey (PES), used to measure decennial census undercount, were obtained from dual system estimates (DSE's) that assumed independence within strata defined by age-race-sex-geography and other variables. We make this independence assumption for females, but develop methods to avoid the independence assumption for males within strata by using national level sex ratios from demographic analysis (DA).... We consider several...alternative DSE's, and use DA results for 1990 to apply them to data from the 1990 U.S. census and PES."  相似文献   

7.
A methodological strategy for a one-number census in the UK
As a result of lessons learnt from the 1991 census, a research programme was set up to seek improvements in census methodology. Underenumeration has been placed at the top of the agenda in this programme, and every effort is being made to achieve as high a coverage as possible in the 2001 census. In recognition, however, that 100% coverage will never be achieved, the one-number census (ONC) project was established to measure the degree of underenumeration in the 2001 census and, if possible, to adjust the census outputs fully for that undercount. A key component of this adjustment process is a census coverage survey (CCS). This paper presents an overview of the ONC project, focusing on the design and analysis methodology for the CCS. It also presents results that allow the reader to evaluate the robustness of this methodology.

8.
Empirical Bayes methods are used to estimate the extent of the undercount at the local level in the 1980 U.S. census. "Grouping of like subareas from areas such as states, counties, and so on into strata is a useful way of reducing the variance of undercount estimators. By modeling the subareas within a stratum to have a common mean and variances inversely proportional to their census counts, and by taking into account sampling of the areas (e.g., by dual-system estimation), empirical Bayes estimators that compromise between the (weighted) stratum average and the sample value can be constructed. The amount of compromise is shown to depend on the relative importance of stratum variance to sampling variance. These estimators are evaluated at the state level (51 states, including Washington, D.C.) and stratified on race/ethnicity (3 strata) using data from the 1980 postenumeration survey (PEP 3-8, for the noninstitutional population)."
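A minimal sketch of the compromise estimator described above follows; the variances and undercount rates are hypothetical.

```python
# Minimal sketch of the empirical Bayes compromise described above:
# each subarea estimate is pulled toward its stratum mean by a weight
# that depends on sampling variance relative to stratum variance.
# All numbers are hypothetical.

def eb_shrinkage(sample_value: float,
                 stratum_mean: float,
                 sampling_var: float,
                 stratum_var: float) -> float:
    """Compromise estimator: the weight on the stratum mean grows as
    sampling variance dominates the between-subarea (stratum) variance."""
    b = sampling_var / (sampling_var + stratum_var)
    return b * stratum_mean + (1 - b) * sample_value

# A noisy subarea undercount rate of 6% in a stratum averaging 3%:
print(eb_shrinkage(sample_value=0.06, stratum_mean=0.03,
                   sampling_var=4e-4, stratum_var=1e-4))  # -> 0.036
```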

9.
The authors present the case for the housing unit (HU) method as a technique for estimating the population of states and local areas in the United States. They "evaluate population estimates produced by the housing unit method and by three other commonly used methods: component II, ratio correlation, and administrative records. Basing [the] analysis on 1980 census data from 67 counties in Florida and testing for precision, bias, and the distribution of errors, [they find that] application of the HU method performs at least as well as the more highly acclaimed methods of local population estimation."
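A minimal sketch of the HU method's accounting is shown below, with hypothetical inputs; applications typically derive the inputs from sources such as building permits and utility records.

```python
# Minimal sketch of the housing unit (HU) method: population is built
# up from housing stock, occupancy, persons per household, and group
# quarters. All inputs below are hypothetical.

def housing_unit_estimate(housing_units: float,
                          occupancy_rate: float,
                          persons_per_household: float,
                          group_quarters_pop: float) -> float:
    """P = (occupied units) * PPH + group-quarters population."""
    return (housing_units * occupancy_rate * persons_per_household
            + group_quarters_pop)

print(housing_unit_estimate(40_000, 0.92, 2.5, 1_200))  # -> 93200.0
```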

10.
"Net undercount rates in the U.S. decennial census have been steadily declining over the last several censuses. Differential undercounts among race groups and geographic areas, however, appear to persist. In the following, we examine and compare several methodologies for providing small area estimates of census coverage by constructing artificial populations. Measures of performance are also introduced to assess the various small area estimates. Synthetic estimation in combination with regression modelling provide the best results over the methods considered. Sampling error effects are also simulated. The results form the basis for determining coverage evaluation survey small area estimates of the 1900 decennial census."  相似文献   

11.
This is a history of the 1937 census of the USSR, the results of which were suppressed because of an alleged massive undercount. The author attempts to determine whether there was in fact an undercount in this census, and if so how significant it was. He concludes that those responsible were not guilty of producing a significant undercount, and in fact employed the methods required to keep the undercount to a bare minimum.

12.
Censoring frequently occurs in survival analysis, and observed lifetimes are often few in number. Inferences based on the popular maximum likelihood (ML) method can therefore give noticeably biased estimates, and these estimates should be bias-corrected. Here, we investigate the biases of ML estimates under the progressive type-II censoring scheme (pIIcs). We use the method proposed by Efron and Johnstone [Fisher's information in terms of the hazard rate. Technical Report No. 264. Stanford (CA): Stanford University; 1987] to derive general expressions for bias-corrected ML estimates under the pIIcs, which requires deriving the Fisher information matrix under the pIIcs. As an application, exact expressions are given for bias-corrected ML estimates of the Weibull distribution under the pIIcs. The performance of the bias-corrected ML estimates and the ML estimates is compared by simulation and a real-data application.
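The paper's corrections are analytic, derived from the Fisher information under the pIIcs. As a simpler illustration of the underlying idea of bias-correcting Weibull ML estimates, the sketch below swaps in a parametric-bootstrap correction on complete samples; it is not the authors' method, and all numbers are hypothetical.

```python
# Illustration only: a parametric-bootstrap bias correction for Weibull
# MLEs from complete samples, theta_corrected = 2*theta_hat -
# mean(bootstrap theta_hat*). The paper instead derives analytic
# corrections under progressive type-II censoring.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)

def weibull_mle(x):
    """ML estimates of Weibull shape and scale (location fixed at 0)."""
    shape, _, scale = weibull_min.fit(x, floc=0)
    return np.array([shape, scale])

# Small sample, where ML bias matters most.
data = weibull_min.rvs(c=1.5, scale=2.0, size=20, random_state=rng)
theta_hat = weibull_mle(data)

# Parametric bootstrap estimate of the bias of the MLE.
boot = np.array([
    weibull_mle(weibull_min.rvs(c=theta_hat[0], scale=theta_hat[1],
                                size=len(data), random_state=rng))
    for _ in range(500)
])
theta_corrected = 2 * theta_hat - boot.mean(axis=0)
print("MLE:", theta_hat, "bias-corrected:", theta_corrected)
```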

13.
A major difficulty in meta-analysis is publication bias. Studies with positive outcomes are more likely to be published than studies reporting negative or inconclusive results. Correcting for this bias is not possible without making untestable assumptions. In this paper, a sensitivity analysis is discussed for the meta-analysis of 2×2 tables using exact conditional distributions. A Markov chain Monte Carlo EM algorithm is used to calculate maximum likelihood estimates. A rule for increasing the accuracy of estimation and automating the choice of the number of iterations is suggested.

14.
"A central assumption in the standard capture-recapture approach to the estimation of the size of a closed population is the homogeneity of the 'capture' probabilities. In this article we develop an approach that allows for varying susceptibility to capture through individual parameters using a variant of the Rasch model from psychological measurement situations. Our approach requires an additional recapture. In the context of census undercount estimation, this requirement amounts to the use of a second independent sample or alternative data source to be matched with census and Post-Enumeration Survey (PES) data.... We illustrate [our] models and their estimation using data from a 1988 dress-rehearsal study for the 1990 census conducted by the U.S. Bureau of the Census, which explored the use of administrative data as a supplement to the PES. The article includes a discussion of extensions and related models."  相似文献   

15.
This article demonstrates that the assumption of a homothetically separable utility function places a priori restrictions on the parameters of the demand system. If these restrictions are unwarranted (an open question if they are not explicitly tested), they will lead to biased price elasticity estimates. In particular, we show that the uncompensated own-price elasticities must be smaller than the negative of the expenditure shares; that is, the price elasticity of peak electricity demand must be less than the negative of the share of expenditure devoted to peak electricity. This finding is probably not new to economists familiar with consumer demand analysis. Nevertheless, many recent studies of consumer demand for electricity under time-of-day rates explicitly impose this restriction. The resulting price elasticity estimates are usually quite large in absolute value (0.5 to 0.8), but they are the product of restrictive a priori assumptions as well as information embodied in the sample data. The results of two analyses of time-of-day experiments, where the researchers imposed the untested assumption of homothetic separability, are examined more closely. We find that the reported price elasticities are strongly influenced by that a priori assumption. A Monte Carlo experiment demonstrates that using this model will lead to the reported price elasticities even if the consumption data are perfectly random with respect to price.
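The restriction at issue can be stated in one line; the expenditure share used in the comment is hypothetical.

```latex
% Implication of homothetic separability for uncompensated own-price
% elasticities: each must be at least as negative as minus the
% expenditure share,
\[
  \varepsilon_{ii} \;\le\; -\,w_i .
\]
% Hypothetical example: if half of electricity expenditure goes to the
% peak period (w_peak = 0.5), the estimated peak own-price elasticity
% is forced to be at least 0.5 in magnitude, whatever the data say.
```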

16.
The Monitoring Avian Productivity and Survivorship (MAPS) programme is a cooperative effort to provide annual regional indices of adult population size and post-fledging productivity and estimates of adult survival rates from data pooled from a network of constant-effort mist-netting stations across North America. This paper provides an overview of the field and analytical methods currently employed by MAPS, a discussion of the assumptions underlying the use of these techniques, and a discussion of the validity of some of these assumptions based on data gathered during the first 5 years (1989-1993) of the programme, during which time it grew from 17 to 227 stations. Age- and species-specific differences in dispersal characteristics are important factors affecting the usefulness of the indices of adult population size and productivity derived from MAPS data. The presence of transients, heterogeneous capture probabilities among stations, and the large sample sizes required by models to deal effectively with these two considerations are important factors affecting the accuracy and precision of survival rate estimates derived from MAPS data. Important results from the first 5 years of MAPS are: (1) indices of adult population size derived from MAPS mist-netting data correlated well with analogous indices derived from point-count data collected at MAPS stations; (2) annual changes in productivity indices generated by MAPS were similar to analogous changes documented by direct nest monitoring and were generally as expected when compared to annual changes in weather during the breeding season; and (3) a model using between-year recaptures in Cormack-Jolly-Seber (CJS) mark-recapture analyses to estimate the proportion of residents among unmarked birds was found, for most tropical-wintering species sampled, to provide a better fit with the available data and more realistic and precise estimates of annual survival rates of resident birds than did standard CJS mark-recapture analyses. A detailed review of the statistical characteristics of MAPS data and a thorough evaluation of the field and analytical methods used in the MAPS programme are currently under way.
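As a worked example of the CJS machinery mentioned in result (3), the probability of one capture history is shown below; this is the textbook CJS form, before any transient adjustment.

```latex
% Textbook CJS building block: the probability of capture history
% "101" (caught in year 1, missed in year 2, caught in year 3),
% conditional on first capture, is
\[
  \Pr(101) \;=\; \phi_1\,(1 - p_2)\,\phi_2\,p_3,
\]
% where phi_t is annual survival and p_t is recapture probability.
```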

17.
Quantile regression (QR) provides estimates of a range of conditional quantiles. This stands in contrast to traditional regression techniques, which focus on a single conditional mean function. Lee et al. [Regularization of case-specific parameters for robustness and efficiency. Statist Sci. 2012;27(3):350–372] proposed efficient QR by rounding the sharp corner of the loss. The main modification generally involves an asymmetric ℓ2 adjustment of the loss function around zero. We extend the idea of ℓ2 adjusted QR to linear heterogeneous models. The ℓ2 adjustment is constructed to diminish as sample size grows. Conditions to retain consistency properties are also provided.
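One natural way to write such an asymmetric ℓ2 (quadratic) rounding of the check loss is sketched below, with a window δ_n that shrinks as n grows so the adjustment vanishes asymptotically; the exact construction in Lee et al. and in this paper may differ.

```latex
% Check loss and one asymmetric l2 rounding of its corner; the window
% delta_n shrinks with n so the adjustment diminishes as stated above.
\[
  \rho_\tau(r) = r\bigl(\tau - \mathbf{1}\{r < 0\}\bigr), \qquad
  \rho_{\tau,\delta_n}(r) =
  \begin{cases}
    \tau\bigl(r - \tfrac{\delta_n}{2}\bigr), & r > \delta_n,\\[2pt]
    \dfrac{\tau}{2\delta_n}\,r^{2}, & 0 \le r \le \delta_n,\\[2pt]
    \dfrac{1-\tau}{2\delta_n}\,r^{2}, & -\delta_n \le r < 0,\\[2pt]
    (1-\tau)\bigl(-r - \tfrac{\delta_n}{2}\bigr), & r < -\delta_n.
  \end{cases}
\]
```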

18.
The present paper deals with the multiple-threshold p-order autoregressive model introduced by Tong and Lim [H. Tong, K.S. Lim, Threshold autoregression, limit cycles and cyclical data, J. R. Stat. Soc. Ser. B 42 (1980) 245–292] for nonlinear system modelling. Under conditions on the coefficients of the model that ensure stationarity, the existence of moments, and the strong mixing property of the process, and under other mild assumptions, we establish the asymptotic properties (consistency and asymptotic normality) of the minimum Hellinger distance estimates of the autoregressive coefficients of the model.
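For reference, the multiple-threshold p-order autoregressive (SETAR) model of Tong and Lim takes the following standard form.

```latex
% Multiple-threshold p-order autoregressive (SETAR) model: the AR
% coefficients switch among k regimes according to the interval into
% which the delayed value y_{t-d} falls.
\[
  y_t \;=\; \phi_0^{(j)} + \sum_{i=1}^{p} \phi_i^{(j)}\, y_{t-i}
            + \varepsilon_t^{(j)}
  \qquad \text{when } y_{t-d} \in (r_{j-1},\, r_j],
\]
% for regimes j = 1, ..., k, with thresholds
% -\infty = r_0 < r_1 < \dots < r_k = +\infty.
```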

19.
The 2001 census in the UK asked for a return of people 'usually living at this address'. But this phrase is fuzzy and may have led to undercount. In addition, analysis of the sex ratios in the 2001 census of England and Wales points to a sex bias in the adjustments for net undercount—too few males in relation to females. The Office for National Statistics's abandonment of the method of demographic analysis for the population of working ages has allowed these biases to creep in. The paper presents a demographic account to check on the plausibility of census results. The need to revise preliminary estimates of the national population over a period of years following census day—as experienced in North America and now in the UK—calls into question the feasibility of a one-number census. Looking to the future, the environment for taking a reliable census by conventional methods is deteriorating. The UK Government's proposals for a population register open up the possibility of a Nordic-style administrative record census in the longer term.
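The backbone of such a demographic account is the balancing identity below; the cohort-level checks described in the paper elaborate on it.

```latex
% Balancing identity at the core of a demographic account: carry the
% population forward between censuses and compare with the count.
\[
  P(t+1) \;=\; P(t) + B - D + I - E,
\]
% where B, D, I, E are intercensal births, deaths, immigration and
% emigration; applying it by sex and cohort yields the expected sex
% ratios against which census results can be checked.
```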

20.
Permutation tests based on medians are examined for pairwise comparison of scale. Tests that have been found in the literature to be effective for comparing scale for two groups are extended to the case of all pairwise comparisons, using the Tukey-type adjustment of Richter and McCann [Multiple comparison of medians using permutation tests. J Mod Appl Stat Methods. 2007;6(2):399–412] to guarantee strong Type I error rate control. Power and Type I error rate estimates are computed using simulated data. A method based on the ratio of deviances performed best and appears to be the best overall test.
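A minimal two-group version of a median-based permutation test of scale is sketched below; the statistic (a ratio of mean absolute deviations from the group medians) and all data are illustrative, and the paper's preferred ratio-of-deviances statistic with Tukey-type multiplicity adjustment is more elaborate.

```python
# Minimal sketch of a two-group permutation test of scale based on
# medians: the statistic is the ratio of mean absolute deviations from
# each group's median; significance comes from permuting group labels.
import numpy as np

rng = np.random.default_rng(1)

def scale_ratio(x, y):
    """Ratio of mean absolute deviations from the group medians."""
    return (np.mean(np.abs(x - np.median(x)))
            / np.mean(np.abs(y - np.median(y))))

def permutation_pvalue(x, y, n_perm=2000):
    observed = scale_ratio(x, y)
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        stat = scale_ratio(perm[:len(x)], perm[len(x):])
        # Two-sided: a ratio is extreme if it or its reciprocal is large.
        if max(stat, 1 / stat) >= max(observed, 1 / observed):
            count += 1
    return (count + 1) / (n_perm + 1)

# Illustrative data with a genuine scale difference.
x = rng.normal(0, 1.0, size=30)
y = rng.normal(0, 2.0, size=30)
print(permutation_pvalue(x, y))
```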
