Similar Literature
20 similar documents retrieved.
1.
Statistical data are an important basis for macro-level decision-making by governments at all levels. Truthful and reliable statistics help the government make correct decisions, whereas untruthful and inaccurate statistics inevitably lead to wrong decisions and endanger economic and social development. Data quality is therefore the lifeblood of statistical work, and ensuring that statistics are truthful and reliable is the foremost problem that statistical agencies must solve. The general workflow of statistical work runs from grassroots surveys through data review, data processing, aggregation and reporting, data evaluation, and information analysis. Viewed along this workflow, the quality of grassroots statistical data directly determines the truthfulness of the statistics and the reliability of statistical information and analysis. At present, the factors affecting grassroots data quality are manifold. By analysing the main ones, this paper puts forward several suggestions for improving the quality of grassroots statistical data.

2.
Having worked in grassroots agricultural statistics for many years, the author has found that the comprehensive livestock reporting forms suffer at the grassroots level from inconsistent definitions, poor data comparability, and strong human influence, which undermine data quality and the credibility of statistical agencies. If the survey reference period were moved forward by one month, as it is for the rural household survey, these defects could be effectively remedied: definitions would be unified and data quality improved. The reporting system requires the comprehensive livestock forms to be reported upward level by level starting from the village; to improve data quality, truthfulness and accuracy must be guaranteed at this "source". Under the current schedule, however, each level of the statistical system asks the level below to report about a week early so as to leave time for follow-up and review, so that by the time the village level reports it is often already the end of May (taking the second quarter as an example, likewise below), a full month later than the reference period required by the reporting forms…

3.
Comprehensive and accurate statistics from grassroots branches of the People's Bank of China are the basis on which the central bank correctly formulates and implements monetary policy. For many years, to ensure that financial statistics and economic survey data are truthful, accurate, and complete, the survey and statistics departments of grassroots branches have conscientiously performed their duties, strictly followed operating procedures from data collection to the checking and submission of reports, and carefully verified the survey statistics they report upward…

4.
俞肖云 《统计研究》2001,18(6):28-29
Just as quality is the life of a product, the accuracy of data can be said to be the life of statistical work. As far as government statistics are concerned, China's statistical data are broad in coverage, with a richness unmatched by other countries; yet the generally low overall quality of our statistics is a problem that cannot be avoided. Among the causes, the defects of the data review system, the arbitrariness of statistical methods, and human interference with data quality cannot be ignored. At present, direct reporting of data by enterprises is being gradually extended, the points at which data quality can be controlled during submission are decreasing, and large-scale "super-aggregation" often conceals errors in the data of individual enterprises. Under these circumstances, deeper development and study of the data will run into unexpected difficulties. In our statistical analyses we once found that per capita output value…

5.
The Jiangsu Provincial Bureau of Statistics requires all localities to cooperate closely and carefully organise the trade sampling survey, and to do the following three things well. First, further strengthen the organisation and leadership of the sampling survey and intensify operational guidance to grassroots survey work within each jurisdiction; regions where progress is slow must in particular speed up and draw up concrete work plans and timetables. Under the principle of "management by level", each prefecture-level city is responsible for reviewing, evaluating the quality of, and reporting the sampling survey data for the provincial samples, the counties (cities) and districts, and the provincial-level markets with annual turnover above 100 million yuan within its jurisdiction. Second, use a variety of methods to carefully evaluate the sample survey data and the inferred results; only after evaluation may sampling survey data be put into official use. Before the sampling survey data of any county (city) or district are officially used, they must pass the prefecture-level statistics bureau…

6.
Comprehensive and accurate statistics from grassroots branches of the People's Bank of China are the basis on which the central bank correctly formulates and implements monetary policy. For many years, to ensure that financial statistics and economic survey data are truthful, accurate, and complete, the survey and statistics departments of grassroots branches have conscientiously performed their duties, strictly followed operating procedures from data collection to the checking and submission of reports, carefully verified the survey statistics reported upward, and worked hard to ensure that the data are timely, accurate, and complete; nevertheless, there remain many aspects of this statistical work that need improvement. …

7.
A basic assessment of statistical data quality. For some time, the unhealthy practices of over-reporting, exaggeration, and falsification have been growing in some localities, and the quality of statistical data has attracted widespread attention from Party and government leaders, from all sectors of society, and even from international opinion. Recently the National Bureau of Statistics conducted an in-depth investigation, rigorous inspection, and careful analysis and evaluation of data quality and reached three basic conclusions: first, viewed at the micro level and from the grassroots, inaccurate statistics are indeed a fairly widespread problem; second, viewed at the macro level and nationwide, the statistics are broadly truthful and reliable; third, micro-level and local data falsification must be tackled at its roots, or it may turn into macro-level, nationwide distortion of the statistics.

8.
To improve statistical data quality and ensure its accuracy, the Xinjiang Bureau of Statistics recently set out a data review and control system built on "three levels of review", "three cross-checks", "combined review", and "guarding three gates". The "three levels of review" are review at the county, prefecture, and autonomous-region levels; the "three cross-checks" mean checking the current figures against last year's data, the progress data, and departmental data; "combined review" means combining computational and rule-based checks with manual, experience-based review; and "guarding three gates" means guarding the gates of enterprise-level (grassroots) data review, comprehensive aggregate data review, and data quality evaluation, so that data quality is improved at the source.
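The "three cross-checks" described above are simple consistency rules that lend themselves to automation. Below is a minimal, hypothetical Python sketch of such checks; the column names, figures, and the 20% tolerance are illustrative assumptions, not part of the Xinjiang system.

```python
import pandas as pd

# Hypothetical reported figures to be cross-checked; each row is one reporting
# unit with this year's figure plus the three reference series named above.
reports = pd.DataFrame({
    "unit":       ["A", "B", "C"],
    "current":    [120.0, 95.0, 300.0],
    "last_year":  [110.0, 90.0, 150.0],   # check against last year's data
    "progress":   [118.0, 96.0, 160.0],   # check against progress data
    "department": [121.0, 94.0, 155.0],   # check against departmental data
})

TOLERANCE = 0.20  # flag deviations larger than 20% (illustrative threshold)

def cross_check(df: pd.DataFrame, tolerance: float) -> pd.DataFrame:
    """Flag units whose current figure deviates too far from any reference series."""
    out = df.copy()
    flag_cols = []
    for ref in ["last_year", "progress", "department"]:
        deviation = (out["current"] - out[ref]).abs() / out[ref]
        out[f"flag_{ref}"] = deviation > tolerance
        flag_cols.append(f"flag_{ref}")
    out["needs_manual_review"] = out[flag_cols].any(axis=1)
    return out

print(cross_check(reports, TOLERANCE)[["unit", "needs_manual_review"]])
```

Units flagged by such rules would then go to the manual, experience-based review that the combined-review step calls for.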

9.
《浙江统计》2000,(3):5-5
(1) Strengthening evaluation of statistical data quality. The province established and improved its measures for evaluating the quality of major provincial-level statistics and its review systems for major statistics at the municipal (prefectural) and county levels, and formulated evaluation measures for the quality of major statistical indicators. With GDP data quality evaluation taking the lead, the review and evaluation of indicators such as gross agricultural output value, industrial value added, fixed-asset investment, total retail sales of consumer goods, grain output, and urban and rural household income was strengthened, with clear results. (2) New progress in reforming statistical methods and systems. First, sampling surveys were actively promoted: the sampling survey systems for below-designated-size industry and for small trading enterprises were fully implemented, and a pilot of a unified province/city/county three-level sampling scheme was carried out in Ningbo, accumulating experience. A sample survey of individually owned wholesale and retail trade was also organised…

10.
The Meaning and Control of Statistical Data Quality   (cited 2 times: 0 self-citations, 2 by others)
杨辉 《中国统计》2006,(3):9-10
The meaning of statistical data quality. 1. On the meaning of statistical data quality. "Quality" is defined as "the ability of a set of inherent characteristics of a product, system, or process to fulfil the requirements of customers and other interested parties" (ISO 9000:2000). "Statistics" is "a process that organises and transforms raw data into secondary, processed data or information". Here, the "raw data" are the input of the statistical process, and the "secondary, processed…

11.
Methods for the analysis of data on the incidence of an infectious disease are reviewed, with an emphasis on important objectives that such analyses should address and identifying areas where further work is required. Recent statistical work has adapted methods for constructing estimating functions from martingale theory, methods of data augmentation and methods developed for studying the human immunodeficiency virus–acquired immune deficiency syndrome epidemic. Infectious disease data seem particularly suited to analysis by Markov chain Monte Carlo methods. Epidemic modellers have recently made substantial progress in allowing for community structure and heterogeneity among individuals when studying the requirements for preventing major epidemics. This has stimulated interest in making statistical inferences about crucial parameters from infectious disease data for such community settings.

12.
Summary. Latent class analysis (LCA) is a statistical tool for evaluating the error in categorical data when two or more repeated measurements of the same survey variable are available. This paper illustrates an application of LCA for evaluating the error in self-reports of drug use using data from the 1994, 1995 and 1996 implementations of the US National Household Survey on Drug Abuse. In our application, the LCA approach is used for estimating classification errors which in turn leads to identifying problems with the questionnaire and adjusting estimates of prevalence of drug use for classification error bias. Some problems in using LCA when the indicators of the use of a particular drug are embedded in a single survey questionnaire, as in the National Household Survey on Drug Abuse, are also discussed.
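As a rough illustration of how LCA recovers classification error from repeated categorical measurements, the sketch below fits a two-class latent class model to three simulated binary "ever used" indicators by EM under the usual local-independence assumption. It is a generic textbook formulation with made-up error rates, not the authors' model or the National Household Survey on Drug Abuse data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate three repeated binary self-reports for 2000 respondents: true
# prevalence 0.15, false-negative rate 0.20, false-positive rate 0.02
# (all values are illustrative assumptions).
n, prev, fn, fp = 2000, 0.15, 0.20, 0.02
truth = rng.random(n) < prev
Y = np.column_stack([
    np.where(truth, rng.random(n) > fn, rng.random(n) < fp) for _ in range(3)
]).astype(float)

# Two-class LCA fitted by EM under local independence.
pi = 0.5                               # prevalence of the "user" class
p = np.array([[0.7] * 3, [0.1] * 3])   # P(report = 1 | class); rows: user, non-user
for _ in range(500):
    # E-step: posterior probability of being a user given the three indicators
    like1 = np.prod(p[0] ** Y * (1 - p[0]) ** (1 - Y), axis=1) * pi
    like0 = np.prod(p[1] ** Y * (1 - p[1]) ** (1 - Y), axis=1) * (1 - pi)
    post = like1 / (like1 + like0)
    # M-step: update prevalence and conditional response probabilities
    pi = post.mean()
    p[0] = (post[:, None] * Y).sum(axis=0) / post.sum()
    p[1] = ((1 - post)[:, None] * Y).sum(axis=0) / (1 - post).sum()

print("estimated prevalence:", round(float(pi), 3))
print("P(report = 1 | user):    ", p[0].round(3))  # one minus the false-negative rate
print("P(report = 1 | non-user):", p[1].round(3))  # the false-positive rate
```

The estimated conditional response probabilities are exactly the classification-error rates that the paper uses to adjust prevalence estimates.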

13.
Considerable statistical research has been performed in recent years to develop sophisticated statistical methods for handling missing data and dropouts in the analysis of clinical trial data. However, if statisticians and other study team members proactively set out at the trial initiation stage to assess the impact of missing data and investigate ways to reduce dropouts, there is considerable potential to improve the clarity and quality of trial results and also increase efficiency. This paper presents a Human Immunodeficiency Virus (HIV) case study where statisticians led a project to reduce dropouts. The first step was to perform a pooled analysis of past HIV trials investigating which patient subgroups are more likely to drop out. The second step was to educate internal and external trial staff at all levels about the patient types more likely to drop out, and the impact this has on data quality and the sample sizes required. The final step was to work collaboratively with clinical trial teams to create proactive plans regarding focused retention efforts, identifying ways to increase retention particularly in patients most at risk. It is acknowledged that identifying the specific impact of new patient retention efforts/tools is difficult because patient retention can be influenced by overall study design, investigational product tolerability profile, current standard of care and treatment access for the disease under study, which may vary over time. However, the implementation of new retention strategies and efforts within clinical trial teams attests to the influence of the analyses described in this case study. Copyright © 2012 John Wiley & Sons, Ltd.
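The pooled analysis of past trials described above amounts to modelling dropout as a function of baseline characteristics. A hedged sketch follows, using logistic regression on simulated data; every variable name and effect size is a placeholder, not taken from the actual HIV trials.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated pooled patient-level data from several past trials; every variable
# name and effect size below is an illustrative assumption.
n = 5000
df = pd.DataFrame({
    "age": rng.normal(40, 12, n),
    "log_viral_load": rng.normal(4.5, 1.0, n),
    "prior_treatment": rng.integers(0, 2, n),
    "study": rng.integers(1, 6, n),
})
linpred = -2.5 + 0.03 * (45 - df["age"]) + 0.4 * df["prior_treatment"]
df["dropout"] = (rng.random(n) < 1 / (1 + np.exp(-linpred))).astype(int)

# Logistic regression of dropout on baseline characteristics, adjusting for study.
fit = smf.logit("dropout ~ age + log_viral_load + prior_treatment + C(study)",
                data=df).fit(disp=False)
print(np.exp(fit.params).round(2))  # odds ratios for dropping out
```

Subgroups with elevated dropout odds in such a model are the natural targets for the focused retention efforts the case study describes.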

14.
中国统计数据质量理论研究与实践历程   总被引:13,自引:4,他引:9       下载免费PDF全文
金勇进  陶然 《统计研究》2010,27(1):62-67
本文通过对改革开放以来我国统计数据质量理论研究和实践成果的回顾,归纳出三十年来有关统计数据质量的理论研究和实践脉络。在总结三十年来我国统计数据质量说取得的理论与实践成就的基础上,分析了存在的问题及面临的挑战。  相似文献   

15.
Growing concern about the health effects of exposure to pollutants and other chemicals in the environment has stimulated new research to detect and quantify environmental hazards. This research has generated many interesting and challenging methodological problems for statisticians. One type of statistical research develops new methods for the design and analysis of individual studies. Because current research of this type is too diverse to summarize in a single article, we discuss current work in two areas of application: the carcinogen bioassay in small rodents and epidemiologic studies of air pollution. To assess the risk of a potentially harmful agent, one must frequently combine evidence from different and often quite dissimilar studies. Hence, this paper also discusses the central role of data synthesis in risk assessment, reviews some of the relevant statistical literature, and considers the role of statisticians in evaluating and combining evidence from diverse sources.
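The data-synthesis step in risk assessment is often operationalised as inverse-variance pooling of effect estimates from dissimilar studies. The fixed-effect sketch below is a minimal illustration with made-up study estimates; a real assessment would also consider random-effects models and study quality.

```python
import numpy as np

# Hypothetical log relative risks and standard errors from three studies
# (values are illustrative only).
log_rr = np.array([0.10, 0.25, 0.05])
se     = np.array([0.08, 0.15, 0.05])

w = 1.0 / se**2                          # inverse-variance weights
pooled = np.sum(w * log_rr) / np.sum(w)  # fixed-effect pooled log relative risk
pooled_se = np.sqrt(1.0 / np.sum(w))
ci = pooled + np.array([-1.96, 1.96]) * pooled_se

print(f"pooled RR: {np.exp(pooled):.3f}  95% CI: {np.exp(ci).round(3)}")
```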

16.
Variable and model selection problems are fundamental to high-dimensional statistical modeling in diverse fields of science. Especially in health studies, many potential factors are usually introduced to determine an outcome variable. This paper deals with the problem of high-dimensional statistical modeling through the analysis of the trauma annual data in Greece for 2005. The data set is divided into the experiment and control sets and consists of 6334 observations and 112 factors that include demographic, transport and intrahospital data used to detect possible risk factors of death. In our study, different model selection techniques are applied to the experiment set and the notion of deviance is used on the control set to assess the fit of the overall selected model. The statistical methods employed in this work were the non-concave penalized likelihood methods (smoothly clipped absolute deviation, least absolute shrinkage and selection operator, and hard thresholding), generalized linear logistic regression, and best-subset variable selection. The way of identifying the significant variables in large medical data sets, along with the performance and the pros and cons of the various statistical techniques used, is discussed. The performed analysis reveals the distinct advantages of the non-concave penalized likelihood methods over the traditional model selection techniques.
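To convey the flavour of the LASSO-type selection compared above, the sketch below runs L1-penalised logistic regression with scikit-learn on simulated data. The trauma data and the 112 real factors are not available here, and SCAD or hard thresholding would require a dedicated package, so this is only an analogous illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Simulated stand-in for the trauma data: many candidate factors, few informative.
X, y = make_classification(n_samples=6000, n_features=112, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# L1-penalised (LASSO-type) logistic regression; C controls the penalty strength.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
model.fit(X_train, y_train)

selected = np.flatnonzero(model.coef_[0])  # factors with non-zero coefficients
print(f"{selected.size} of 112 factors kept:", selected)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```

The held-out evaluation mirrors the paper's use of a separate control set to assess the fit of the selected model.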

17.
Unless the preliminary m subgroups of small samples are drawn from a stable process, the estimated control limits of the control chart in phase I can be erroneous, which can significantly affect the performance of the chart in phase II. In this work, a quantitative approach based on extracting the shape features of control chart patterns is proposed for evaluating the stability of the process mean while the preliminary samples were drawn, thereby eliminating the subjectivity associated with visual analysis of the patterns. The effectiveness of the test procedure is evaluated using simulated data. The results show that the proposed approach can be very effective for m ≥ 48. The power of the test can be improved by identifying a new feature that more efficiently discriminates a cyclic pattern of smaller periodicity from the natural pattern and by redefining the test statistic.
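Although the paper's specific shape features are not reproduced here, the general idea can be illustrated by summarising a sequence of subgroup means with a quantitative feature, for example the peak sample autocorrelation, which separates a cyclic pattern from a natural (in-control) pattern. The toy sketch below uses made-up parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def peak_autocorrelation(x: np.ndarray, max_lag: int = 12) -> float:
    """Largest sample autocorrelation over lags 1..max_lag (a simple shape feature)."""
    x = x - x.mean()
    denom = float(np.sum(x ** 2))
    acf = [float(np.sum(x[:-k] * x[k:])) / denom for k in range(1, max_lag + 1)]
    return max(acf)

m = 48  # number of preliminary subgroups
natural = rng.normal(size=m)                                              # in-control pattern
cyclic = np.sin(2 * np.pi * np.arange(m) / 8) + 0.5 * rng.normal(size=m)  # period-8 cycle

print("feature, natural pattern:", round(peak_autocorrelation(natural), 3))
print("feature, cyclic pattern: ", round(peak_autocorrelation(cyclic), 3))
```

A decision threshold on such a feature, calibrated by simulation, turns the visual pattern judgement into the kind of formal test the abstract describes.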

18.
Modern statistical methods using incomplete data have been increasingly applied in a wide variety of substantive problems. Similarly, receiver operating characteristic (ROC) analysis, a method used in evaluating diagnostic tests or biomarkers in medical research, has also become increasingly popular in both its development and application. While missing-data methods have been applied in ROC analysis, the impact of model mis-specification and/or assumptions (e.g. missing at random) underlying the missing data has not been thoroughly studied. In this work, we study the performance of multiple imputation (MI) inference in ROC analysis. Particularly, we investigate parametric and non-parametric techniques for MI inference under common missingness mechanisms. Depending on the coherency of the imputation model with the underlying data generation mechanism, our results show that MI generally leads to well-calibrated inferences under ignorable missingness mechanisms.
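A standard way to carry out MI inference for an ROC summary such as the AUC is to impute the incomplete biomarker several times, compute the AUC on each completed data set, and combine the results. The sketch below does this with scikit-learn's IterativeImputer on simulated MAR data; it is a generic illustration (and omits the within-imputation variance needed for full Rubin's-rules intervals), not the specific parametric/non-parametric comparison studied in the paper.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Simulated biomarker related to disease status, plus an auxiliary covariate.
n = 1000
disease = rng.integers(0, 2, n)
aux = rng.normal(size=n)
marker = disease + 0.5 * aux + rng.normal(size=n)

# Make the marker missing at random (MAR): missingness depends only on aux.
miss_prob = 1 / (1 + np.exp(-aux))
marker_obs = np.where(rng.random(n) < miss_prob, np.nan, marker)
X = np.column_stack([marker_obs, aux, disease.astype(float)])

M = 10
aucs = []
for m in range(M):
    imputer = IterativeImputer(sample_posterior=True, random_state=m)
    completed = imputer.fit_transform(X)
    aucs.append(roc_auc_score(disease, completed[:, 0]))

aucs = np.array(aucs)
print(f"MI-averaged AUC over {M} imputations: {aucs.mean():.3f} "
      f"(between-imputation SD {aucs.std(ddof=1):.3f})")
```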

19.
The evolution of computers is currently in a period of rapid change, stimulated by radically cheaper and smaller devices for processing and memory. These changes are certain to provide major opportunities and challenges for the use of computers in statistics. This article looks at history and current trends, in both general computing and statistical computing, with the goal of identifying key features and requirements for the near future. A discussion of the S language developed at Bell Laboratories illustrates some program design principles that can make future work on statistical programs more effective and more valuable.

20.
Data resulting from some deterministic dynamic systems may appear to be random. To distinguish these kinds of data from random data is a new challenge for statisticians. This paper develops a nonparametric statistical test procedure for distinguishing noisy chaos from i.i.d. random processes. The procedure can be easily implemented by computer and is very effective in identifying low dimensional chaos in certain instances.
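A classic nonparametric device for this problem is the correlation integral that underlies BDS-type tests: delay vectors from a low-dimensional chaotic series cluster far more than those from an i.i.d. series with the same marginal distribution. The sketch below compares the correlation integral of a logistic-map series with shuffled (i.i.d.) surrogates; it is a toy illustration of the idea, not the test procedure developed in the paper.

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_integral(x: np.ndarray, m: int = 2, eps: float = 0.1) -> float:
    """Fraction of pairs of m-dimensional delay vectors closer than eps."""
    emb = np.column_stack([x[i:len(x) - m + 1 + i] for i in range(m)])
    return float(np.mean(pdist(emb, metric="chebyshev") < eps))

rng = np.random.default_rng(0)

# Deterministic but random-looking data: the logistic map x_{t+1} = 4 x_t (1 - x_t).
x = np.empty(1000)
x[0] = 0.3
for t in range(len(x) - 1):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

c_obs = correlation_integral(x)
# Surrogates: shuffling destroys the dynamics but keeps the marginal distribution.
c_surr = np.array([correlation_integral(rng.permutation(x)) for _ in range(100)])
p_value = np.mean(c_surr >= c_obs)

print(f"correlation integral: data {c_obs:.4f}, shuffled mean {c_surr.mean():.4f}, "
      f"one-sided p ≈ {p_value:.3f}")
```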
