Similar Documents
20 similar documents found (search time: 15 ms)
1.
夏滨生. Statistical Research (《统计研究》), 2008, 25(5): 9-18
Starting from the essential attributes of statistics and its characteristically broad reach, this paper distills the concepts of statistics in the broad and the narrow sense and constructs a “general system of statistical concepts” that aims to encompass the full range of statistical phenomena, present statistics as a whole, and give the field an overall organizing framework, thereby offering a new perspective for a comprehensive understanding of statistics. The paper focuses on analyzing and interpreting the vertical structure of government statistics, untying the knots of traditional understanding; this is the key to correctly understanding the composition of government statistics, and only on that basis can a system of statistical concepts be established.

2.
ABSTRACT

In response to growing concern about the reliability and reproducibility of published science, researchers have proposed adopting measures of “greater statistical stringency,” including suggestions to require larger sample sizes and to lower the highly criticized “p < 0.05” significance threshold. While pros and cons are vigorously debated, there has been little to no modeling of how adopting these measures might affect what type of science is published. In this article, we develop a novel optimality model that, given current incentives to publish, predicts a researcher’s most rational use of resources in terms of the number of studies to undertake, the statistical power to devote to each study, and the desirable prestudy odds to pursue. We then develop a methodology that allows one to estimate the reliability of published research by considering a distribution of preferred research strategies. Using this approach, we investigate the merits of adopting measures of “greater statistical stringency” with the goal of informing the ongoing debate.

3.
The gist of the quickest change-point detection problem is to detect the presence of a change in the statistical behavior of a series of sequentially made observations, and do so in an optimal detection-speed-versus-“false-positive”-risk manner. When optimality is understood either in the generalized Bayesian sense or as defined in Shiryaev's multi-cyclic setup, the so-called Shiryaev–Roberts (SR) detection procedure is known to be the “best one can do”, provided, however, that the observations’ pre- and post-change distributions are both fully specified. We consider a more realistic setup, viz. one where the post-change distribution is assumed known only up to a parameter, so that the latter may be misspecified. The question of interest is the sensitivity (or robustness) of the otherwise “best” SR procedure with respect to a possible misspecification of the post-change distribution parameter. To answer this question, we provide a case study where, in a specific Gaussian scenario, we allow the SR procedure to be “out of tune” in the way of the post-change distribution parameter, and numerically assess the effect of the “mistuning” on Shiryaev's (multi-cyclic) Stationary Average Detection Delay delivered by the SR procedure. The comprehensive quantitative robustness characterization of the SR procedure obtained in the study can be used to develop the respective theory as well as to provide a rationale for the practical design of the SR procedure. The overall qualitative conclusion of the study is an expected one: the SR procedure is less (more) robust for less (more) contrast changes and for lower (higher) levels of the false alarm risk.
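As a rough illustration of the setup this abstract describes (not the authors' code), the sketch below runs the Shiryaev–Roberts recursion R_n = (1 + R_{n-1})·LR_n in a Gaussian mean-shift scenario with a deliberately mistuned design parameter; the change point, the means, and the threshold are all illustrative assumptions.

```python
import numpy as np

def shiryaev_roberts_alarm(x, theta, threshold):
    """Run the Shiryaev-Roberts recursion R_n = (1 + R_{n-1}) * LR_n for a
    N(0,1) -> N(theta,1) change, using the (possibly mistuned) design value
    theta; return the first index at which R_n crosses the threshold."""
    r = 0.0
    for n, xn in enumerate(x):
        # Gaussian likelihood ratio f_theta(x)/f_0(x) = exp(theta*x - theta^2/2)
        lr = np.exp(theta * xn - 0.5 * theta ** 2)
        r = (1.0 + r) * lr
        if r >= threshold:
            return n
    return None

rng = np.random.default_rng(1)
change_point = 200
x = np.concatenate([rng.normal(0.0, 1.0, change_point),   # pre-change N(0,1)
                    rng.normal(1.0, 1.0, 300)])           # post-change N(1,1)

# "Mistuned" design: the procedure assumes a post-change mean of 0.7, not 1.0.
alarm = shiryaev_roberts_alarm(x, theta=0.7, threshold=1000.0)
```

Even with the mistuned θ, the statistic drifts upward after the change (the expected log-increment stays positive), so an alarm is still raised; the paper's question is how much the detection delay degrades.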

4.
This paper describes a computer program GTEST for designing group testing experiments for classifying each member of a population of items as “good” or “defective”. The outcome of a test on a group of items is either “negative” (if all items in the group are good) or “positive” (if at least one of the items is defective, but it is not known which). GTEST is based on a Bayesian approach. At each stage, it attempts to maximize (nearly) the expected reduction in the “entropy”, which is a quantitative measure of the amount of uncertainty about the state of the items. The user controls the procedure through specification of the prior probabilities of being defective, restrictions on the construction of the test group, and priorities that are assigned to the items. The nominal prior probabilities can be modified adaptively, to reduce the sensitivity of the procedure to the proportion of defectives in the population.
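The entropy criterion can be made concrete in a small sketch (an illustration of the idea, not the GTEST program itself): because the test outcome is a deterministic function of the item states, the expected entropy reduction from one group test equals the binary entropy of the probability that the group tests negative. The per-item defect probability of 0.1 is an assumed value.

```python
import numpy as np

def binary_entropy(q):
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -(q * np.log2(q) + (1 - q) * np.log2(1 - q))

def expected_entropy_reduction(prior_defect_probs):
    """Information (in bits) gained by one group test under independent
    priors.  The outcome is a deterministic function of the item states,
    so the expected entropy reduction equals h(P(all items good))."""
    q_negative = np.prod(1.0 - np.asarray(prior_defect_probs))
    return binary_entropy(q_negative)

# With per-item defect probability 0.1, the gain is largest for the group
# size whose "all good" probability is closest to 1/2 (0.9**7 ~ 0.478).
gains = {k: expected_entropy_reduction([0.1] * k) for k in range(1, 13)}
best = max(gains, key=gains.get)
```

This is why entropy-driven designs pick group sizes that make the test outcome as close to a fair coin as possible.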

5.
This paper discusses a pre-test regression estimator which uses the least squares estimate when it is “large” and a ridge regression estimate for “small” regression coefficients, where the preliminary test is applied separately to each regression coefficient in turn to determine whether it is “large” or “small.” For orthogonal regressors, the exact finite-sample bias and mean squared error of the pre-test estimator are derived. The pre-test estimator is less biased than a ridge estimator, and over much of the parameter space it has smaller mean squared error than least squares. A ridge estimator is found to be inferior to the pre-test estimator in terms of mean squared error in many situations, and at worst the pre-test estimator is only slightly less efficient than the ridge estimator at commonly used significance levels.
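A minimal sketch of this kind of estimator, assuming orthonormal regressors and an arbitrary ridge constant k (both choices, and all names, are illustrative rather than taken from the paper):

```python
import numpy as np
from scipy import stats

def pretest_ridge(X, y, k=1.0, alpha=0.05):
    """Per-coefficient pre-test estimator for orthonormal regressors:
    keep the OLS coefficient when its |t| statistic is "large", otherwise
    shrink it as ridge would (b_ols / (1 + k) when X'X = I)."""
    n, p = X.shape
    b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ b_ols
    s2 = resid @ resid / (n - p)
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    t_crit = stats.t.ppf(1 - alpha / 2, n - p)
    large = np.abs(b_ols / se) > t_crit
    return np.where(large, b_ols, b_ols / (1.0 + k))

rng = np.random.default_rng(0)
n, p = 100, 4
# Orthonormal columns via QR; one "large" and three "small" coefficients.
X = np.linalg.qr(rng.normal(size=(n, p)))[0]
beta = np.array([5.0, 0.02, 0.0, -0.01])
y = X @ beta + rng.normal(scale=0.1, size=n)
b = pretest_ridge(X, y)
```

The large coefficient passes the preliminary t test and is left at its least squares value, while coefficients that fail the test are shrunk toward zero.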

6.
With the development of information technology, the digital economy has become a “new engine” of economic growth. However, lacking an authoritative industrial statistical classification standard, scholars have long faced the awkward situation of “digital economy research without digital evidence.” Based on the classification standard in the Statistical Classification of the Digital Economy and Its Core Industries (2021) issued and implemented by the National Bureau of Statistics, this paper reorganizes data from the provincial statistical yearbooks, constructs a digital economy development index using the entropy weight method, measures the level of digital economy development in 30 Chinese provinces, and analyzes inter-provincial differences and spatio-temporal characteristics. The study finds that China's digital economy industries developed rapidly from 2009 to 2019, with considerable progress in every sub-industry; by comparison, the digital-factor-driven sub-industry grew slightly more slowly than the other three. Digital economy development shows clear regional imbalance: the eastern and central regions clearly outperform the western region, the south outperforms the north, and the regional imbalance shows a continuing tendency to widen.

7.
孙文浩等. Statistical Research (《统计研究》), 2021, 38(6): 102-115
Studying how government tax cuts can raise the innovation capacity of high-tech “zombie firms” has major theoretical and practical significance for advancing China's supply-side structural reform and implementing the innovation-driven development strategy. Using the national innovation survey enterprise database for 2008-2014 and drawing on the general equilibrium model of ABBGH (2005), this paper defines the concept of high-tech “zombie firms” and a method for identifying them. Main findings: first, tax cuts for high-tech “zombie firms” significantly promote firm innovation, with a particularly strong “leverage effect” for innovation-oriented “zombie firms”; second, the innovation-promoting effect of tax cuts is significantly larger for high-tech “zombie firms” than for non-high-tech ones; third, granting larger tax cuts to innovation-oriented “zombie firms” that favor investment in R&D fixed assets and to efficiency-oriented “zombie firms” inclined toward basic research is more conducive to stimulating innovation, and is an important route to reviving high-tech “zombie firms”. The government can exploit the contrasting innovation strategies of innovation-oriented “zombie firms” (“heavy on assets, light on research”) and efficiency-oriented ones (“heavy on research, light on assets”) to optimize tax incentive policy, providing a new governance framework for steadily, sustainably, and efficiently advancing China's supply-side structural reform and innovation-driven development.

8.
Serials Review, 2012, 38(4): 219-226
Abstract

This study uses systematic random sampling to compare the content of “Beall’s List of Predatory Journals and Publishers” and “Cabell’s Blacklist” of journals. The Beall’s List data were generated from its new site, which maintains a new list alongside the original list. The study found that 28.5% of the sampled Beall’s List publishers are out of business, and that some Cabell’s Blacklist journals have ceased publication. The main takeaway is that among the sampled Beall’s List publishers with a working journal-publishing website, only 31.8% can be found on Cabell’s Blacklist.

9.
Some Views on AHP Methods for Constructing Statistical Weights
苏为华. Statistical Research (《统计研究》), 1998, 15(4): 57-60
In multi-indicator comprehensive statistical evaluation, the weights are an important factor influencing the evaluation conclusions; different weighting schemes may lead to different conclusions. In recent years, many methods for constructing statistical weights have been proposed, among which the most effective are the AHP and Delphi weighting methods. This paper offers several observations on the AHP weighting method.

10.
Although several authors have indicated that the median test has low power in small samples, it continues to be presented in many statistical textbooks, included in a number of popular statistical software packages, and used in a variety of application areas. We present results of a power simulation study showing that the median test has noticeably lower power than other readily available rank tests, even for the double exponential distribution for which it is asymptotically most powerful. We suggest that the median test be “retired” from routine use and recommend alternative rank tests that have superior power over a relatively large family of symmetric distributions.
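A quick Monte Carlo in the spirit of the study (sample sizes, shift, and replication count are illustrative choices, not the paper's design) compares the median test with the Wilcoxon–Mann–Whitney rank test under double exponential (Laplace) data:

```python
import numpy as np
from scipy import stats

def power(test, shift, n=10, reps=1000, alpha=0.05, seed=7):
    """Monte Carlo power of a two-sample test against a location shift
    between two double exponential (Laplace) samples of size n each."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x = rng.laplace(0.0, 1.0, n)
        y = rng.laplace(shift, 1.0, n)
        hits += test(x, y) < alpha
    return hits / reps

def median_p(x, y):
    return stats.median_test(x, y)[1]                       # Brown-Mood median test

def rank_p(x, y):
    return stats.mannwhitneyu(x, y, alternative="two-sided")[1]

pow_median = power(median_p, shift=1.0)
pow_rank = power(rank_p, shift=1.0)
```

With n = 10 per group, the rank test detects the shift noticeably more often than the median test, consistent with the abstract's conclusion.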

11.
Abstract

Experiments in various countries with “last week” and “last month” reference periods for reporting of households’ food consumption have generally found that “week”-based estimates are higher. In India the National Sample Survey (NSS) has consistently found that “week”-based estimates are higher than month-based estimates for a majority of food item groups. But why are week-based estimates higher than month-based estimates? It has long been believed that the reason must be recall lapse, inherent in a long reporting period such as a month. But is household consumption of a habitually consumed item “recalled” in the same way as that of an item of infrequent consumption? And why doesn’t memory lapse cause over-reporting (over-assessment) as often as under-reporting? In this paper, we provide an alternative hypothesis, involving a “quantity floor effect” in reporting behavior, under which “week” may cause over-reporting for many items. We design a test to detect the effect postulated by this hypothesis and carry it out on NSS 68th round HCES data. The test results strongly suggest that our hypothesis provides a better explanation of the difference between week-based and month-based estimates than the recall lapse theory.

12.
Wavelets are a commonly used tool in science and technology. Often, their use involves applying a wavelet transform to the data, thresholding the coefficients and applying the inverse transform to obtain an estimate of the desired quantities. In this paper, we argue that it is often possible to gain more insight into the data by producing not just one, but many wavelet reconstructions using a range of threshold values and analysing the resulting object, which we term the Time–Threshold Map (TTM) of the input data. We discuss elementary properties of the TTM, in its “basic” and “derivative” versions, using both Haar and Unbalanced Haar wavelet families. We then show how the TTM can help in solving two statistical problems in the signal + noise model: breakpoint detection, and estimating the longest interval of approximate stationarity. We illustrate both applications with examples involving volatility of financial returns. We also briefly discuss other possible uses of the TTM.
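A toy version of the “basic” TTM idea can be sketched with a hand-rolled Haar transform (an illustration under simplifying assumptions, e.g. dyadic signal length and hard thresholding, not the authors' implementation): each row of the map is a reconstruction at one threshold value.

```python
import numpy as np

def haar_forward(x):
    """Full orthonormal Haar DWT of a length-2^J signal."""
    coeffs, a = [], np.asarray(x, dtype=float)
    while len(a) > 1:
        s = (a[0::2] + a[1::2]) / np.sqrt(2)   # smooth coefficients
        d = (a[0::2] - a[1::2]) / np.sqrt(2)   # detail coefficients
        coeffs.append(d)
        a = s
    coeffs.append(a)
    return coeffs

def haar_inverse(coeffs):
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        out = np.empty(2 * len(a))
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a

def time_threshold_map(x, thresholds):
    """Stack hard-thresholded Haar reconstructions of x, one row per
    threshold value -- a "basic" time-threshold map of the input."""
    rows = []
    for t in thresholds:
        c = haar_forward(x)
        c = [np.where(np.abs(d) > t, d, 0.0) for d in c[:-1]] + [c[-1]]
        rows.append(haar_inverse(c))
    return np.array(rows)

x = np.array([0., 0., 0., 0., 4., 4., 4., 4.])   # signal with one breakpoint
ttm = time_threshold_map(x, thresholds=[0.0, 1.0, 10.0])
```

At low thresholds the breakpoint survives in every row; once the threshold exceeds the single large detail coefficient, the reconstruction collapses to the signal's mean, so the threshold at which the jump disappears localizes the breakpoint's scale.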

13.
贾怀勤等. Statistical Research (《统计研究》), 2021, 38(12): 30-41
Digital trade is a new model and new business format for building the new development pattern of “domestic circulation as the mainstay, with domestic and international circulations reinforcing each other,” and an important arena of international competition and cooperation. However, the international community's understanding of the concept of digital trade remains vague, which directly hampers the development of digital trade markets and rule-making, and the measurement of digital trade has become a challenging topic in international trade statistics. Building on a review of existing international discussions of the concept and measurement of digital trade, this paper proposes a “dual-element, three-ring” (二元三环) conceptual framework for digital trade, constructs an indicator system for measuring its scale, develops a measurement method keyed to the “actual digital delivery ratio,” and, using data from China's “integration of informatization and industrialization” (两化融合) platform database, produces trial estimates of China's total digital trade imports and exports for 2018-2019. The results offer a reference for research on digital trade measurement in China and for the relevant authorities in establishing a statistical monitoring system for digital trade.

14.
In the prospective study of a finely stratified population, one individual from each stratum is chosen at random for the “treatment” group and one for the “non-treatment” group. For each individual the probability of failure is a logistic function of parameters designating the stratum, the treatment and a covariate. Uniformly most powerful unbiased tests for the treatment effect are given. These tests are generally cumbersome but, if the covariate is dichotomous, the tests and confidence intervals are simple. Readily usable (but non-optimal) tests are also proposed for polytomous covariates and factorial designs. These are then adapted to retrospective studies (in which one “success” and one “failure” per stratum are sampled). Tests for retrospective studies with a continuous “treatment” score are also proposed.

15.
Deterministic simulation models are used to guide decision-making and enhance understanding of complex systems such as disease transmission, population dynamics, and tree plantation growth. Bayesian inference about parameters in deterministic simulation models can require the pooling of expert opinion. One class of approaches to pooling expert opinion in this context is supra-Bayesian pooling, in which expert opinion is treated as data for an ultimate decision maker. This article details and compares two supra-Bayesian approaches—“event updating” and “parameter updating.” The suitability of each approach in the context of deterministic simulation models is assessed based on theoretical properties, performance on examples, and the selection and sensitivity of required hyperparameters. In general, we favor a parameter updating approach because it uses more intuitive hyperparameters, it performs sensibly on examples, and because the alternative event updating approach fails to exhibit a desirable property (relative propensity consistency) in all cases. Inference in deterministic simulation models is an increasingly important statistical and practical problem, and supra-Bayesian methods represent one viable option for achieving a sensible pooling of expert opinion.

16.
The “What If” analysis is applicable in research and heuristic situations that utilize statistical significance testing. One utility of the “What If” analysis is pedagogical: it provides professors an interactive tool that visually represents what statistical significance testing entails and the variables that affect the commonly misinterpreted p_CALCULATED value. To develop a strong understanding of what affects the p_CALCULATED value, students tangibly manipulate data within the Excel sheet to create a visual representation that explicitly demonstrates how these variables affect it. The second utility is primarily applicable to researchers. The “What If” analysis contributes to research in two ways: (1) it can be run a priori to estimate the sample size a researcher may wish to use for a study; and (2) it can be run a posteriori to aid in the interpretation of results. Used in this way, the “What If” analysis gives researchers another tool for conducting high-quality research and disseminating their results in an accurate manner.

17.
In the field of education, it is often of great interest to estimate the percentage of students who start out in the top test quantile at time 1 and who remain there at time 2, termed the “persistence rate,” as a measure of students’ academic growth. One common difficulty is that students’ performance may be subject to measurement errors. We therefore considered a correlation calibration method and the simulation–extrapolation (SIMEX) method for correcting the measurement errors. Simulation studies are presented comparing the various measurement error correction methods in estimating the persistence rate.
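The SIMEX idea can be sketched for an attenuated correlation (a toy illustration with assumed noise levels, extrapolant, and data-generating process, not the paper's procedure): add extra measurement noise at several multiples λ of the known error variance, track how the correlation decays, and extrapolate the fitted trend back to λ = -1, the error-free case.

```python
import numpy as np

def simex_corr(w, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), b=50, seed=3):
    """SIMEX sketch for a correlation attenuated by measurement error in w:
    simulate extra noise of variance lambda*sigma_u^2, average the resulting
    correlations, fit a quadratic in lambda, extrapolate to lambda = -1."""
    rng = np.random.default_rng(seed)
    lams = [0.0] + list(lambdas)
    means = []
    for lam in lams:
        if lam == 0.0:
            means.append(np.corrcoef(w, y)[0, 1])     # the naive estimate
            continue
        sims = [np.corrcoef(w + rng.normal(0, np.sqrt(lam) * sigma_u, len(w)),
                            y)[0, 1] for _ in range(b)]
        means.append(np.mean(sims))
    coef = np.polyfit(lams, means, 2)
    return np.polyval(coef, -1.0)

rng = np.random.default_rng(0)
n, sigma_u = 5000, 1.0
x = rng.normal(size=n)                      # true score
y = x + rng.normal(scale=1.0, size=n)       # outcome (true corr = 1/sqrt(2))
w = x + rng.normal(scale=sigma_u, size=n)   # error-prone measurement of x
naive = np.corrcoef(w, y)[0, 1]
corrected = simex_corr(w, y, sigma_u)
```

The quadratic extrapolant does not remove the attenuation bias completely, but it moves the estimate from the naive value (about 0.5 here) substantially toward the true correlation.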

18.
余静文等. Statistical Research (《统计研究》), 2021, 38(4): 89-102
China's banking sector plays a key role in the financial system and has always been an important part of financial development. Since the beginning of the 21st century, the structure of Chinese banking, long dominated by state-owned banks, has undergone major change, and banking competition has steadily intensified. Two hypotheses have been advanced about the economic effects of banking competition: the “market power hypothesis” and the “information hypothesis.” Against the background of relaxed bank entry regulation and policies encouraging firms to “go global,” this paper tests these hypotheses empirically using matched micro-level data. The main conclusions: first, banking deregulation helps firms “go global”; second, this conclusion still holds after using propensity score matching to address sample selection and an instrumental variable approach to address endogeneity; finally, the reduction in financing costs brought about by banking deregulation is an important channel through which deregulation affects firms' “going global,” and the results also support the “market power hypothesis” in the Chinese context. The study provides important evidence on the relationship between banking reform and firms' outward direct investment, tests the “market power hypothesis” and the “information hypothesis” of banking deregulation in the Chinese context, helps in understanding and evaluating the economic effects of China's banking reform more deeply, and is of significance for better advancing the Belt and Road Initiative.

19.
The use of the “exact test” with the 2×2 table that records the observations obtained in a comparative trial has been widely considered the paradigm of statistical tests of significance. This is attributable to the fact that it is based on the theories of R. A. Fisher and, as a result, has acquired the sobriquet “exact.” The Fisherian basis of the exact test, namely that the marginal totals are “ancillary statistics” and therefore provide no information respecting the configuration of the body of the table, is shown to be incorrect. The exact test for the one-sided case is compared with the normal test at the nominal significance levels P = 0.05 and P = 0.01. It is shown by direct computation that the effective level is closer to the nominal level with the normal test than with the exact test, and that the power of the normal test is considerably larger than the power of the exact test, the increase in power exceeding the change of effective level. It is concluded that the exact test should not be used in preference to the normal test.
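The “effective level” comparison can be reproduced in miniature by exact enumeration (the sample size and success probability are illustrative, and the one-sided pooled z test stands in for “the normal test”): under the null, every (x1, x2) outcome has a known probability, so each test's attained size can be computed exactly rather than simulated.

```python
import numpy as np
from scipy import stats

def attained_levels(n, p0, alpha=0.05):
    """Exact attained size at p1 = p2 = p0 of the one-sided Fisher exact
    test and the one-sided pooled z ("normal") test for two independent
    Binomial(n, p0) samples, computed by enumerating every outcome."""
    pmf = stats.binom.pmf(np.arange(n + 1), n, p0)
    z_crit = stats.norm.ppf(1 - alpha)
    level_exact = level_normal = 0.0
    for x1 in range(n + 1):
        for x2 in range(n + 1):
            prob = pmf[x1] * pmf[x2]
            table = [[x1, n - x1], [x2, n - x2]]
            if stats.fisher_exact(table, alternative="greater")[1] <= alpha:
                level_exact += prob
            phat = (x1 + x2) / (2 * n)
            se = np.sqrt(phat * (1 - phat) * 2 / n)
            if se > 0 and (x1 - x2) / n / se > z_crit:
                level_normal += prob
    return level_exact, level_normal

level_exact, level_normal = attained_levels(n=10, p0=0.3)
```

Because the exact test is conservative by construction, its attained size sits below the nominal 0.05, while the normal test's attained size is larger and closer to nominal, which is the abstract's point.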

20.
The type I and II error rates of several statistical tests for seasonality in monthly data were investigated through a computer simulation study at two nominal significance levels, α = 1% and α = 5%. Three models were used for the variation: annual sinusoidal; semi-annual sinusoidal; and a curve that is constant in all but three consecutive months of the year, during which it exhibits a constant increase (a “one-pulse” model). The statistical tests are compared in terms of the simulation results. These results may be applied to calculate either the sample size required to detect seasonal variation of fixed amplitude or the probability of detecting seasonal variation of variable amplitude with a fixed sample size. A numerical case study is given.

