Similar Literature
A total of 20 similar documents were retrieved.
1.
Abstract. We, as statisticians, are living in interesting times. New scientifically significant questions are waiting for our contributions, new data accumulate at a fast rate, and the rapid increase of computing power gives us unprecedented opportunities to meet these challenges. Yet, many members of our community are still turning the old wheel as if nothing dramatic had happened. There are ideas, methods and techniques which are commonly used but outdated and should be replaced by new ones. Can we expect to see, as has been suggested, a consolidation of statistical methodologies towards a new synthesis, or is perhaps an even wider separation and greater divergence the more likely scenario? In this talk, these issues are discussed, and some conjectures and suggestions are made.

2.
Abstract. The inverse Gaussian (IG) family is now widely used for modeling non-negative skewed measurements. In this article, we construct likelihood-ratio tests (LRTs) for homogeneity of the order-constrained IG means and study the null distributions for the simple order and simple tree order cases. Interestingly, the null distribution results for the normal case apply without modification to the IG case. This supplements the numerous well-known and striking analogies between the Gaussian and inverse Gaussian families.

3.
The Statistics Law (《统计法》) has been in force for more than twenty years, yet falsification of statistical data remains widespread and serious. This article analyzes the causes of false statistical data from the perspectives of the statistical system, statistical methods, and deficiencies in law enforcement, and proposes countermeasures for its governance. It also argues that the current Statistics Law urgently needs revision and improvement.

4.
5.
The author believes that tests provide a poor model of most real problems, usually so poor that their objectivity is tangential and often too poor to be useful.

6.
In this paper, a novel Bayesian framework is used to derive the posterior density function and the predictive densities for a single future response, a bivariate future response, and several future responses from the exponentiated Weibull model (EWM). We study three related models, the exponentiated exponential, the exponentiated Weibull, and the beta generalized exponential, all of which are used to assess the goodness of fit of two real data sets. The statistical analysis indicates that the EWM fits both data sets best. We determine the predictive means, standard deviations, highest predictive density intervals, and shape characteristics for a single future response. We also consider a new parameterization method to obtain the posterior kernel densities of the parameters. The summary results for the parameters are computed using the Markov chain Monte Carlo method.
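The abstract gives neither the data nor the MCMC details, but the exponentiated Weibull fit itself is easy to sketch. The minimal illustration below fits scipy's `exponweib` distribution to simulated positive data by maximum likelihood and checks the fit with a Kolmogorov-Smirnov statistic; the simulated data, the fixed location `floc=0`, and the use of MLE instead of the paper's Bayesian/MCMC analysis are all assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Placeholder positive, right-skewed data standing in for one of the paper's
# real data sets (simulated Weibull draws; purely illustrative).
data = stats.weibull_min.rvs(1.5, scale=2.0, size=100, random_state=rng)

# Fit the exponentiated Weibull (scipy's `exponweib`) by maximum likelihood,
# fixing the location at 0; the paper's Bayesian analysis is not reproduced.
a_hat, c_hat, loc_hat, scale_hat = stats.exponweib.fit(data, floc=0)

# Crude goodness-of-fit check via the Kolmogorov-Smirnov test against the
# fitted distribution (fitting and testing on the same data makes the
# p-value optimistic).
ks = stats.kstest(data, lambda x: stats.exponweib.cdf(x, a_hat, c_hat,
                                                      loc=loc_hat, scale=scale_hat))
print(f"a={a_hat:.2f}, c={c_hat:.2f}, scale={scale_hat:.2f}, KS p={ks.pvalue:.3f}")
```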

7.
On the Quality Costs of Government Statistical Data
傅德印  陶然 《统计研究》2007,24(8):9-12
After defining the concept of the quality cost of statistical data, this paper analyzes the composition of statistical data quality costs, presents a table of quality cost elements and an accounting method, outlines the content of quality cost analysis, forecasting, planning, and control, and discusses the relationship between statistical data quality costs and statistical data quality.

8.
Statistical Data, Statistical Security, and the Rule of Law in Statistics
李金昌 《统计研究》2009,26(8):45-49
Against the background of the implementation of the Regulations on Disciplinary Sanctions for Statistical Violations (《统计违法违纪行为处分规定》), and taking statistical data quality as its point of departure, this paper discusses the rule of law in statistics. It examines the relationships between statistical data quality and national statistical security, and between the nature of statistics and the statistical rule of law, and explores ways of achieving the rule of law in statistics.

9.
Counting by weighing is widely used in industry and is often more efficient than counting manually, which is time-consuming and prone to human error, especially when the number of items is large. Lower confidence bounds on the numbers of items in infinitely many future bags, based on the weights of the bags, were proposed recently in Liu et al. [Counting by weighing: Know your numbers with confidence, J. Roy. Statist. Soc. Ser. C 65(4) (2016), pp. 641–648]. These confidence bounds are constructed from the data of one calibration experiment and for different parameters (or numbers), yet have a frequency interpretation similar to that of a usual confidence set for a single parameter. In this paper, the more challenging problem of constructing two-sided confidence intervals is studied. A simulation-based method for computing the critical constant is proposed. This method is proven to give the required critical constant as the number of simulations goes to infinity, and is shown to be easily implemented on an ordinary computer to compute the critical constant accurately and quickly. The methodology is illustrated with a real data example.
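The paper's simulation-based critical constant is not reproduced here, but the basic idea of inverting a bag weight into a set of plausible counts can be sketched with a naive plug-in normal approximation. All numbers below (the calibration sample, the item-weight distribution, and the bag weight) are made up for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical calibration experiment: weigh 50 items individually (grams).
item_weights = rng.normal(loc=2.0, scale=0.05, size=50)
mu_hat, sigma_hat = item_weights.mean(), item_weights.std(ddof=1)

def count_interval(bag_weight, mu, sigma, conf=0.95):
    """Naive two-sided interval for the number of items in a bag.

    Treats item weights as i.i.d. normal, so the weight of k items is roughly
    N(k*mu, k*sigma^2); every k whose implied weight is consistent with the
    observed bag weight at level `conf` is kept.  This plug-in approximation
    ignores the estimation error in (mu, sigma) that the paper's
    simulation-based critical constant accounts for.
    """
    z = stats.norm.ppf(0.5 + conf / 2.0)
    upper = int(2 * bag_weight / mu) + 2
    plausible = [k for k in range(1, upper)
                 if abs(bag_weight - k * mu) <= z * sigma * np.sqrt(k)]
    return (min(plausible), max(plausible)) if plausible else (None, None)

# A bag that truly contains 100 items.
bag_weight = rng.normal(100 * 2.0, 0.05 * np.sqrt(100))
print(count_interval(bag_weight, mu_hat, sigma_hat))
```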

10.
Many factors affect the quality of statistical data. Improving data quality requires not only better statistical methods and techniques but also a sound social environment that provides favorable conditions for statistical work, so attention should be paid to building a "statistical ecological environment". Such an environment comprises an ideological and conceptual climate conducive to ensuring data quality, an institutional environment for government statistics, a social credit system that encompasses data authenticity, and a legal environment that safeguards the quality of statistical data.

11.
12.
We revisit the Flatland paradox proposed by Stone (1976), which is an example of non-conglomerability. The main novelty in our analysis of the paradox is to consider marginal versus conditional models rather than proper versus improper priors. We show that in the first model a prior distribution should be considered as a probability measure, whereas in the second it should be considered in the projective space of measures. This induces two different kinds of limiting arguments that are useful for understanding the paradox. We also show that the choice of a flat prior is not adapted to the structure of the parameter space, and we consider an improper prior based on reference priors with nuisance parameters for which the Bayesian analysis matches the intuitive reasoning.

13.
The inverse Gaussian family of non-negative, skewed random variables is analytically simple, and its inference theory is well known to be analogous to normal theory in numerous ways. Hence, it is widely used for modeling non-negative, positively skewed data. In this note, we consider the problem of testing homogeneity of order-restricted means of several inverse Gaussian populations with a common unknown scale parameter, using an approach based on classical methods, such as Fisher's, for combining independent tests. Unlike the likelihood approach, which can only be readily applied to a limited number of restrictions and to settings with equal sample sizes, this approach is applicable to problems involving a broad variety of order restrictions and arbitrary sample sizes, and, most importantly, no new null distributions are needed. An empirical power study shows that, in the case of the simple order, the test based on Fisher's combination method compares reasonably with the corresponding likelihood ratio procedure.
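Fisher's combination method referred to in the abstract is simple to state: for k independent component tests, -2 Σ log p_i follows a chi-squared distribution with 2k degrees of freedom under the overall null. A minimal sketch with made-up p-values (in the paper these would come from the order-restricted IG comparisons):

```python
import numpy as np
from scipy import stats

# Hypothetical p-values from k independent component tests (placeholder numbers).
p_values = np.array([0.04, 0.12, 0.03])

# Fisher's combination: under the overall null, -2 * sum(log p_i) is
# chi-squared with 2k degrees of freedom.
chi2_stat = -2.0 * np.sum(np.log(p_values))
combined_p = stats.chi2.sf(chi2_stat, df=2 * p_values.size)
print(f"by hand: stat = {chi2_stat:.3f}, combined p = {combined_p:.4f}")

# scipy provides the same combination directly.
stat, p = stats.combine_pvalues(p_values, method="fisher")
print(f"scipy:   stat = {stat:.3f}, combined p = {p:.4f}")
```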

14.
This article identifies four problems with the present federal statistical system for economics: (1) lack of comparability of data series; (2) fragmentation and poor data quality; (3) nonoptimal funding patterns; and (4) susceptibility of the system to politicization. Full centralization of the system appears to be both impractical and undesirable as a solution; however, considerable consolidation in the statistics collection process and a substantial increase in the number and professional quality of personnel devoted to planning, developing, and coordinating the federal statistical system are recommended.

15.
Empirical Bayes estimates of the local false discovery rate can reflect uncertainty about the estimated prior by supplementing their Bayesian posterior probabilities with confidence levels as posterior probabilities. This use of coherent fiducial inference with hierarchical models generates set estimators that propagate uncertainty to varying degrees. Some of the set estimates approach estimates from plug-in empirical Bayes methods for high numbers of comparisons and can come close to the usual confidence sets given a sufficiently low number of comparisons.
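For readers unfamiliar with the plug-in empirical Bayes estimator that the set estimates are compared against, a minimal two-groups sketch follows: lfdr(z) = pi0 * f0(z) / f(z), with the mixture density f estimated by a kernel density and pi0 conservatively set to 1. The simulated z-scores and the choice pi0 = 1 are assumptions; the paper's fiducial/confidence-level machinery is not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Simulated z-scores: 90% from the null N(0,1), 10% from N(3,1) (illustrative).
z = np.concatenate([rng.normal(0.0, 1.0, 900), rng.normal(3.0, 1.0, 100)])

# Plug-in empirical Bayes local false discovery rate under the two-groups
# model: lfdr(z) = pi0 * f0(z) / f(z), with f estimated by a Gaussian kernel
# density and pi0 conservatively set to 1.
f_hat = stats.gaussian_kde(z)
pi0 = 1.0
lfdr = np.clip(pi0 * stats.norm.pdf(z) / f_hat(z), 0.0, 1.0)

print("proportion of z-scores with lfdr < 0.2:", float(np.mean(lfdr < 0.2)))
```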

16.
Abstract. In the analysis of clustered and/or longitudinal data, it is usually desirable to ignore covariate information for other cluster members as well as future covariate information when predicting outcome for a given subject at a given time. This can be accomplished through conditional mean models which merely condition on the considered subject's covariate history at each time. Pepe & Anderson (Commun. Stat. Simul. Comput. 23, 1994, 939) have shown that ordinary generalized estimating equations may yield biased estimates for the parameters in such models, but that valid inferences can be guaranteed by using a diagonal working covariance matrix in the equations. In this paper, we provide insight into the nature of this problem by uncovering substantive data-generating mechanisms under which such biases will result. We then propose a class of asymptotically unbiased estimators for the parameters indexing the suggested conditional mean models. In addition, we provide a representation for the efficient estimator in our class, which attains the semi-parametric efficiency bound under the model, along with an efficient algorithm for calculating it. This algorithm is easy to apply and may realize major efficiency improvements as demonstrated through simulation studies. The results suggest ways to improve the efficiency of inverse-probability-of-treatment estimators which adjust for time-varying confounding, and are used to estimate the effect of discontinuing highly active anti-retroviral therapy (HAART) on viral load in HIV-infected patients.
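The practical recommendation cited from Pepe & Anderson (use a diagonal working covariance when the conditional mean model conditions only on the current covariates) is straightforward to follow with standard software. Below is a minimal sketch using statsmodels' GEE with an independence working correlation on simulated longitudinal data; the data and model are illustrative only, not the authors' proposed efficient estimator.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.genmod.cov_struct import Independence

rng = np.random.default_rng(1)

# Simulated longitudinal data: 200 subjects with 4 visits each (illustrative).
n_subjects, n_visits = 200, 4
subject = np.repeat(np.arange(n_subjects), n_visits)
x = rng.normal(size=n_subjects * n_visits)            # current covariate value
y = 1.0 + 0.5 * x + rng.normal(size=n_subjects * n_visits)
data = pd.DataFrame({"y": y, "x": x, "subject": subject})

# GEE for the conditional mean E(Y_ij | X_ij) with a diagonal (independence)
# working covariance, the choice that guards against the bias discussed above.
model = sm.GEE.from_formula("y ~ x", groups="subject", data=data,
                            family=sm.families.Gaussian(),
                            cov_struct=Independence())
result = model.fit()
print(result.params)
```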

17.
A Study of the Accuracy of Local Statistical Data and Its Influencing Factors Based on an Internal Survey
The accuracy of the statistical data used to measure economic and social development has become a focus of public concern. A survey of government statisticians on their judgments of the accuracy of local statistical data and its influencing factors shows that statistical data quality across China is generally reliable, but that the accuracy of some statistical indicators in certain regions is low, with considerable "padding", and that interference by leading officials is the foremost factor affecting statistical data quality.

18.
刘洪  黄燕 《统计研究》2007,24(8):17-21
This paper models the variation characteristics of time series data with a combined model. Once the model has passed various diagnostic tests and shown good predictive performance, the discrepancy between predicted and actual values is analyzed from the perspective of outlier detection: outlying observations are identified, and their errors are subjected to statistical significance tests using standard methods for detecting outliers in experimental data, thereby assessing the quality of the statistical data. Taking China's gross domestic product (GDP) as the subject and using GDP for 1978-2003 as the sample, the paper applies this trend-simulation evaluation method to assess the accuracy of China's 2004 GDP figure, providing an empirical analysis of time series data on Chinese economic indicators.
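The trend-simulation idea (fit a well-tested model to the historical series, forecast the next value, and flag the reported figure if it falls outside the prediction interval) can be sketched as follows. The series, the log-linear trend model, and the "reported" 2004 figure below are all placeholders, not the paper's combined model or the actual GDP data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Placeholder series standing in for annual GDP over 1978-2003 (not the actual
# Chinese GDP figures used in the paper).
years = np.arange(1978, 2004)
gdp = 3600.0 * np.exp(0.095 * (years - 1978) + rng.normal(0.0, 0.02, years.size))

# Fit a simple log-linear trend on 1978-2003 (a single trend regression is
# used here only to illustrate the idea, not the paper's combined model).
X = sm.add_constant(years - 1978)
fit = sm.OLS(np.log(gdp), X).fit()

# Out-of-sample forecast for 2004 with a 95% prediction interval.
X_2004 = [[1.0, 2004 - 1978]]
pred = fit.get_prediction(X_2004).summary_frame(alpha=0.05)
lo = float(np.exp(pred["obs_ci_lower"].iloc[0]))
hi = float(np.exp(pred["obs_ci_upper"].iloc[0]))

# Flag the reported figure as a potential outlier if it misses the interval.
reported_2004 = 46000.0   # hypothetical reported value to be checked
verdict = "consistent with trend" if lo <= reported_2004 <= hi else "potential outlier"
print(f"95% PI for 2004: [{lo:.0f}, {hi:.0f}] -> reported {reported_2004:.0f}: {verdict}")
```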

19.
Abstract. Multivariate failure time data frequently occur in medical studies and the dependence or association among survival variables is often of interest (Biometrics, 51, 1995, 1384; Stat. Med., 18, 1999, 3101; Biometrika, 87, 2000, 879; J. Roy. Statist. Soc. Ser. B, 65, 2003, 257). We study the problem of estimating the association between two related survival variables when they follow a copula model and only bivariate interval-censored failure time data are available. For this problem, a two-stage estimation procedure is proposed and the asymptotic properties of the proposed estimator are established. Simulation studies are conducted to assess the finite sample properties of the presented estimate and the results suggest that the method works well for practical situations. An example from an acquired immunodeficiency syndrome clinical trial is discussed.

20.
Computer simulation techniques were employed to investigate the Type I and Type II error rates (experiment-wise and comparison-wise) of three nonparametric multiple comparison procedures. Three different underlying distributions were considered. It was found that the nonparametric analog to Fisher's LSD (a Kruskal-Wallis test, followed by pairwise Mann-Whitney U tests if a significant overall effect is detected) appeared to be superior to the Nemenyi-Dunn and Steel-Dwass procedures, because of the extreme conservatism of these latter methods.
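The "nonparametric analog to Fisher's LSD" described in the abstract is easy to reproduce with scipy: an overall Kruskal-Wallis test followed, only if it is significant, by unadjusted pairwise Mann-Whitney U tests. The three simulated groups below are placeholders for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Three simulated groups (placeholder data, 20 observations each).
groups = [rng.normal(0.0, 1.0, 20),
          rng.normal(0.5, 1.0, 20),
          rng.normal(1.0, 1.0, 20)]

# Step 1: overall Kruskal-Wallis test.
h_stat, p_overall = stats.kruskal(*groups)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_overall:.4f}")

# Step 2: if the overall test is significant, unadjusted pairwise
# Mann-Whitney U tests, in the spirit of Fisher's protected LSD.
alpha = 0.05
if p_overall < alpha:
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            u_stat, p_ij = stats.mannwhitneyu(groups[i], groups[j],
                                              alternative="two-sided")
            print(f"groups {i} vs {j}: U = {u_stat:.1f}, p = {p_ij:.4f}")
```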

