Similar Documents
20 similar documents found (search time: 31 ms)
1.
Summary.  Traffic safety in the UK is one of the increasing number of areas where central government sets targets based on 'outcome-focused' performance indicators (PIs). Judgments about such PIs are often based solely on rankings of raw indicators and simple league tables dominate centrally published analyses. There is a considerable statistical literature examining health and education issues which has tended to use the generalized linear mixed model (GLMM) to address variability in the data when drawing inferences about relative performance from headline PIs. This methodology could obviously be applied in contexts such as traffic safety. However, when such models are applied to the fairly crude data sets that are currently available, the interval estimates generated, e.g. in respect of rankings, are often too broad to allow much real differentiation between the traffic safety performance of the units that are being considered. Such results sit uncomfortably with the ethos of 'performance management' and raise the question of whether the inference from such data sets about relative performance can be improved in some way. Motivated by consideration of a set of nine road safety performance indicators measured on English local authorities in the year 2000, the paper considers methods to strengthen the weak inference that is obtained from GLMMs of individual indicators by simultaneous, multivariate modelling of a range of related indicators. The correlation structure between indicators is used to reduce the uncertainty that is associated with rankings of any one of the individual indicators. The results demonstrate that credible intervals can be substantially narrowed by the use of the multivariate GLMM approach and that multivariate modelling of multiple PIs may therefore have considerable potential for introducing more robust and realistic assessments of differential performance in some contexts.
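The "interval estimates in respect of rankings" that this abstract refers to can be illustrated with a minimal sketch: given posterior draws of each unit's underlying rate (here simulated normals standing in for MCMC output from a GLMM, with all values invented), rank the units within each draw and summarise the resulting distribution of ranks.

```python
import random

random.seed(1)

# Hypothetical posterior draws: for each of 8 authorities, draws of an
# underlying casualty rate (simulated normals with overlapping means,
# standing in for MCMC output from a GLMM; all values invented).
true_means = [1.0, 1.1, 1.2, 1.25, 1.3, 1.4, 1.6, 2.0]
draws = [[random.gauss(m, 0.25) for m in true_means] for _ in range(2000)]

# For each posterior draw, rank the authorities (1 = lowest rate).
n = len(true_means)
rank_samples = [[] for _ in range(n)]
for draw in draws:
    order = sorted(range(n), key=lambda i: draw[i])
    for rank, i in enumerate(order, start=1):
        rank_samples[i].append(rank)

# 95% credible interval for each authority's rank.
for i in range(n):
    rs = sorted(rank_samples[i])
    lo, hi = rs[int(0.025 * len(rs))], rs[int(0.975 * len(rs)) - 1]
    print(f"authority {i}: rank 95% CrI = [{lo}, {hi}]")
```

Mid-table units typically get rank intervals spanning much of the table, which is the "weak inference" the paper describes; modelling correlated indicators jointly shrinks the posterior spread of each rate and hence the rank intervals.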

2.
Summary.  Using Bayesian model averaging, we quantify associations of governance and economic health with country level presence of foot-and-mouth disease (FMD) and estimate the probability of the presence of FMD in each country from 1997 to 2005. The Bayesian model averaging accounted for countries' previous FMD status and other possible confounders, as well as uncertainty about the 'true' model, and provided accurate predictions (90% specificity and 80% sensitivity). This model represents a novel approach to predicting FMD, and other conditions, on a global scale and in identifying important risk factors that can be applied to global policy and allocation of resources for disease control.
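The core mechanics of Bayesian model averaging can be sketched with a toy example: weight each candidate model by an approximate posterior model probability (here via BIC weights, w ∝ exp(-BIC/2)) and average the per-model predictions. Model names, BIC values, and predicted probabilities below are invented for illustration, not taken from the paper.

```python
import math

# Hypothetical candidate models for country-level FMD presence, each with a
# BIC from some fitting step and a predicted probability for one country.
models = {
    "governance only":            {"bic": 210.0, "p": 0.62},
    "governance + GDP":           {"bic": 204.5, "p": 0.55},
    "governance + GDP + lag FMD": {"bic": 198.2, "p": 0.41},
}

# Approximate posterior model probabilities: w_m proportional to exp(-BIC_m/2).
min_bic = min(m["bic"] for m in models.values())
raw = {k: math.exp(-(m["bic"] - min_bic) / 2) for k, m in models.items()}
total = sum(raw.values())
weights = {k: v / total for k, v in raw.items()}

# Model-averaged prediction: weighted mean of the per-model predictions.
p_avg = sum(weights[k] * models[k]["p"] for k in models)
for k, w in weights.items():
    print(f"{k}: weight = {w:.3f}")
print(f"model-averaged P(FMD) = {p_avg:.3f}")
```

The averaged prediction is dominated by the best-supported model but still reflects uncertainty about the 'true' model, which is the point of the approach.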

3.
This paper classifies the evaluation indicators of the major Chinese and foreign university rankings according to the content they measure, compares the emphases of the different indicator systems through the weights assigned to each class of indicators, and analyses the reasons for the differences between Chinese and foreign university evaluation indicators that this comparison reveals, providing a reference for designing a comprehensive ranking indicator system for Chinese universities from a combined qualitative and quantitative perspective. In addition, based on a comparative analysis of the data sources behind the ranking indicators, the paper argues that the main constraint on the sound design of university evaluation indicators lies in the limitations of the available data sources, and offers corresponding recommendations.

4.
Ma Dan et al.  Statistical Research, 2018, 35(10): 44-57
This paper proposes a method for measuring China's macroeconomic uncertainty directly from large statistical data sets. By building a mixed-frequency dynamic factor model with stochastic volatility and latent unobservable variables, macroeconomic uncertainty is measured from large monthly and quarterly data sets. Using 60 monthly and 4 quarterly Chinese statistical indicators for 1994-2017, we measure China's macroeconomic uncertainty and find: (1) Chinese macroeconomic uncertainty shows clear phase-specific features; its movements are driven by many factors, and traditional business-climate monitoring indicators are not uniformly optimal coincident monitors. (2) When measuring macroeconomic uncertainty it is necessary to retain the core indicators as observed factors, which not only improves the interpretability of the results but also yields results more consistent with the economic facts. (3) Changes in macroeconomic policy are an important driver of rising economic uncertainty, but policy effects often operate with a lag, and unanticipated policy changes trigger higher macroeconomic uncertainty.

5.
In this article, we consider permutation methods for multivariate testing on ordered categorical variables based on the nonparametric combination of dependent permutation tests (NPC; Pesarin and Salmaso, 2010). Furthermore, an extension of the nonparametric combination of dependent rankings (Arboretti et al., 2007) is proposed in order to construct a synthesis of composite indicators.

The methodological approaches are applied to a study of risk factors for skin cancer in a cohort of adult patients with heart transplants followed for a minimum of three years after transplantation (Belloni et al., 2004) and to a survey on tourists' opinions about "Tre Cime" Park (District of Sesto Dolomites/Alta Pusteria, Italy).
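The NPC idea can be sketched as follows: run a partial permutation test per variable using the same permutations, convert each statistic to a partial significance level, and combine the partial p-values (here with Fisher's combining function) to get a global permutation p-value. The ordinal data below are invented, and this is a bare-bones sketch, not the procedures of Pesarin and Salmaso (2010).

```python
import math
import random

random.seed(7)

# Hypothetical ordinal scores (e.g. 1-5 ratings) on two dependent variables
# for two groups; values invented for illustration.
g1 = [(4, 5), (5, 4), (3, 4), (4, 4), (5, 5), (4, 3), (3, 5), (5, 5)]
g2 = [(2, 1), (1, 2), (3, 2), (2, 2), (1, 1), (2, 3), (3, 1), (1, 2)]

data = g1 + g2
n1 = len(g1)

def stats(sample, n1):
    """Difference of group means on each variable (larger => group 1 higher)."""
    a, b = sample[:n1], sample[n1:]
    return tuple(
        sum(x[j] for x in a) / len(a) - sum(x[j] for x in b) / len(b)
        for j in range(2)
    )

B = 1000
t_obs = stats(data, n1)
perm = []
for _ in range(B):
    shuffled = data[:]                  # permute group labels by shuffling rows
    random.shuffle(shuffled)
    perm.append(stats(shuffled, n1))

def partial_p(t, j):
    """Permutation significance of value t for variable j (one-sided)."""
    ge = sum(1 for p in perm if p[j] >= t)
    return (ge + 0.5) / (B + 1)

def fisher(ts):
    """Fisher's combining function over the two partial p-values."""
    return -2 * sum(math.log(partial_p(ts[j], j)) for j in range(2))

c_obs = fisher(t_obs)
p_global = sum(1 for p in perm if fisher(p) >= c_obs) / B
print(f"global NPC p-value = {p_global:.4f}")
```

Because the same permutations drive both partial tests, the dependence between the two variables is carried through to the combined test automatically.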

6.
Post-evaluation of natural gas vehicle projects based on fuzzy matter-element analysis
Given the uncertainty and fuzziness of the indicators involved in the post-evaluation of natural gas vehicle projects, this paper builds a post-evaluation model using fuzzy matter-element analysis and verifies the reasonableness of the method with the natural gas vehicle project in Hainan Province as a case study. Finally, analysis of the evaluation results and the main influencing factors shows that national policy orientation and resource constraints have the greatest impact on natural gas vehicle projects; investors therefore need, under the premise of supportive national policy, to establish durable, multi-party natural gas supply arrangements with gas producers.

7.
The evaluation of hazards from complex, large scale, technologically advanced systems often requires the construction of computer implemented mathematical models. These models are used to evaluate the safety of the systems and to evaluate the consequences of modifications to the systems. These evaluations, however, are normally surrounded by significant uncertainties: those inherent in natural phenomena such as the weather, and those arising from the parameters and models used in the evaluation.

Another use of these models is to evaluate strategies for improving information used in the modeling process itself. While sensitivity analysis is useful in defining variables in the model that are important, uncertainty analysis provides a tool for assessing the importance of uncertainty about these variables. A third, complementary technique is decision analysis. It provides a methodology for explicitly evaluating and ranking potential improvements to the model. Its use in the development of information gathering strategies for a nuclear waste repository is discussed in this paper.
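One standard decision-analytic quantity for ranking information-gathering strategies is the expected value of perfect information (EVPI): the gain from being able to choose the best action per state rather than one action in expectation. The sketch below uses invented actions, states, and payoffs, not the repository study's numbers.

```python
# Two hypothetical actions for a repository design and two states of an
# uncertain geologic parameter; payoffs (negative costs) are invented.
p_state = {"benign": 0.7, "adverse": 0.3}
payoff = {
    "cheap design":  {"benign": -10, "adverse": -90},
    "robust design": {"benign": -30, "adverse": -40},
}

# Without further information: pick the action with the best expected payoff.
def expected(action):
    return sum(p_state[s] * payoff[action][s] for s in p_state)

best_now = max(expected(a) for a in payoff)

# With perfect information about the state: choose per state, then average.
best_with_info = sum(
    p_state[s] * max(payoff[a][s] for a in payoff) for s in p_state
)

evpi = best_with_info - best_now
print(f"E[best|no info] = {best_now}, E[best|perfect info] = {best_with_info}")
print(f"EVPI = {evpi}")
```

An information-gathering strategy (e.g. an extra site survey) is worth at most its EVPI; strategies can be ranked by the expected value of the partial information they would actually deliver.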

8.
Ranked-set sampling (RSS) and judgment post-stratification (JPS) use ranking information to obtain more efficient inference than is possible using simple random sampling. Both methods were developed with subjective, judgment-based rankings in mind, but the idea of ranking using a covariate has received a lot of attention. We provide evidence here that when rankings are done using a covariate, the standard RSS and JPS mean estimators no longer make efficient use of the available information. We first show that when rankings are done using a covariate, the standard nonparametric mean estimators in JPS and unbalanced RSS are inadmissible under squared error loss. We then show that when rankings are done using a covariate, nonparametric regression techniques yield mean estimators that tend to be significantly more efficient than the standard RSS and JPS mean estimators. We conclude that the standard estimators are best reserved for settings where only subjective, judgment-based rankings are available.
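The basic RSS mechanics with covariate-based ranking can be illustrated by simulation: in each cycle, draw k sets of k units, rank each set on the cheap covariate x, and measure the expensive response y on only one rank per set. This toy comparison against simple random sampling (all distributions invented) shows the efficiency gain that the standard estimator captures; the paper's point is that a regression-type estimator can do better still.

```python
import random

random.seed(3)

def draw_unit():
    # Response y strongly correlated with a cheap covariate x.
    x = random.gauss(0, 1)
    y = 10 + 2 * x + random.gauss(0, 0.5)
    return (y, x)

def srs_mean(pop_draw, n):
    sample = [pop_draw() for _ in range(n)]
    return sum(sample) / n

def rss_mean(pop_draw, k, cycles):
    # Balanced RSS: per cycle, set i contributes the unit ranked i-th on x.
    vals = []
    for _ in range(cycles):
        for i in range(k):
            units = [pop_draw() for _ in range(k)]   # (y, x) pairs
            units.sort(key=lambda u: u[1])           # rank on covariate x
            vals.append(units[i][0])                 # measure y of rank i
    return sum(vals) / len(vals)

k, cycles = 3, 4                     # RSS measures k*cycles = 12 units
reps = 2000
srs = [srs_mean(lambda: draw_unit()[0], k * cycles) for _ in range(reps)]
rss = [rss_mean(draw_unit, k, cycles) for _ in range(reps)]

def var(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / (len(v) - 1)

print(f"Var(SRS mean) = {var(srs):.4f}, Var(RSS mean) = {var(rss):.4f}")
```

Both estimators measure 12 responses, but the RSS estimator has visibly smaller variance because the covariate ranking spreads the measured units over the distribution.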

9.
Uncertainty and sensitivity analyses for systems that involve both stochastic (i.e., aleatory) and subjective (i.e., epistemic) uncertainty are discussed. In such analyses, the dependent variable is usually a complementary cumulative distribution function (CCDF) that arises from stochastic uncertainty; uncertainty analysis involves the determination of a distribution of CCDFs that results from subjective uncertainty, and sensitivity analysis involves the determination of the effects of subjective uncertainty in individual variables on this distribution of CCDFs. Uncertainty analysis is presented as an integration problem involving probability spaces for stochastic and subjective uncertainty. Approximation procedures for the underlying integrals are described that provide an assessment of the effects of stochastic uncertainty, an assessment of the effects of subjective uncertainty, and a basis for performing sensitivity studies. Extensive use is made of Latin hypercube sampling, importance sampling and regression-based sensitivity analysis techniques. The underlying ideas, which are initially presented in an abstract form, are central to the design and performance of real analyses. To emphasize the connection between concept and computational practice, these ideas are illustrated with an analysis involving the MACCS reactor accident consequence model, a performance assessment for the Waste Isolation Pilot Plant, and a probabilistic risk assessment for a nuclear power station.
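The "distribution of CCDFs" construction can be sketched with a toy two-level simulation: each epistemic draw of an uncertain parameter induces one CCDF of the consequence variable, and the family of CCDFs is then summarised pointwise. The paper uses Latin hypercube sampling for the epistemic loop; plain random sampling and an invented exponential model are used here for brevity.

```python
import random

random.seed(11)

# Epistemic: uncertain rate parameter lam from a (hypothetical) subjective
# distribution.  Aleatory: given lam, the consequence C is exponential.
def ccdf_for_lambda(lam, n_aleatory=2000, thresholds=(1, 2, 4)):
    draws = [random.expovariate(lam) for _ in range(n_aleatory)]
    return [sum(1 for d in draws if d > t) / n_aleatory for t in thresholds]

thresholds = (1, 2, 4)
ccdfs = []
for _ in range(50):                     # 50 epistemic samples
    lam = random.uniform(0.5, 1.5)      # subjective uncertainty about lam
    ccdfs.append(ccdf_for_lambda(lam, thresholds=thresholds))

# Summarise the family of CCDFs pointwise: mean and ~5th/95th percentiles.
for j, t in enumerate(thresholds):
    col = sorted(c[j] for c in ccdfs)
    mean = sum(col) / len(col)
    lo, hi = col[2], col[-3]
    print(f"P(C > {t}): mean {mean:.3f}, 90% epistemic band [{lo:.3f}, {hi:.3f}]")
```

The pointwise mean curve is the "expected CCDF", and the band around it displays the effect of subjective uncertainty, which sensitivity analysis then attributes to individual inputs.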

10.
Probabilistic sensitivity analysis of complex models: a Bayesian approach
Summary.  In many areas of science and technology, mathematical models are built to simulate complex real world phenomena. Such models are typically implemented in large computer programs and are also very complex, such that the way that the model responds to changes in its inputs is not transparent. Sensitivity analysis is concerned with understanding how changes in the model inputs influence the outputs. This may be motivated simply by a wish to understand the implications of a complex model but often arises because there is uncertainty about the true values of the inputs that should be used for a particular application. A broad range of measures have been advocated in the literature to quantify and describe the sensitivity of a model's output to variation in its inputs. In practice the most commonly used measures are those that are based on formulating uncertainty in the model inputs by a joint probability distribution and then analysing the induced uncertainty in outputs, an approach which is known as probabilistic sensitivity analysis. We present a Bayesian framework which unifies the various tools of probabilistic sensitivity analysis. The Bayesian approach is computationally highly efficient. It allows effective sensitivity analysis to be achieved by using far smaller numbers of model runs than standard Monte Carlo methods. Furthermore, all measures of interest may be computed from a single set of runs.
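A common probabilistic sensitivity measure is the first-order variance-based index S_i = Var(E[Y|X_i]) / Var(Y). The brute-force double-loop Monte Carlo estimate below (invented two-input model) makes the paper's efficiency argument concrete: it needs very many model runs, which is exactly what the Bayesian emulation approach avoids.

```python
import random

random.seed(5)

def model(x1, x2):
    # Hypothetical deterministic simulator: x1 matters much more than x2.
    return 4 * x1 + x2 ** 2

def sample():
    return random.uniform(0, 1)

# Total output variance from a plain Monte Carlo sample.
N = 5000
ys = [model(sample(), sample()) for _ in range(N)]
mean_y = sum(ys) / N
var_y = sum((y - mean_y) ** 2 for y in ys) / (N - 1)

# First-order index S_i = Var(E[Y|X_i]) / Var(Y), by a double loop.
def first_order(which, outer=300, inner=300):
    cond_means = []
    for _ in range(outer):
        xi = sample()
        if which == 1:
            m = sum(model(xi, sample()) for _ in range(inner)) / inner
        else:
            m = sum(model(sample(), xi) for _ in range(inner)) / inner
        cond_means.append(m)
    mu = sum(cond_means) / outer
    return sum((m - mu) ** 2 for m in cond_means) / (outer - 1) / var_y

s1, s2 = first_order(1), first_order(2)
print(f"S1 = {s1:.3f}, S2 = {s2:.3f}")
```

For this model the true values are S1 = 0.9375 and S2 = 0.0625, and the double loop already costs 180,000 model evaluations; an emulator fitted to a few hundred runs can recover the same indices.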

11.
We consider the use of emulator technology as an alternative method to second-order Monte Carlo (2DMC) in the uncertainty analysis for a percentile from the output of a stochastic model. 2DMC is a technique that uses repeated sampling in order to make inferences on the uncertainty and variability in a model output. The conventional 2DMC approach can be highly computationally demanding, making uncertainty and sensitivity analysis infeasible. We explore the adequacy and efficiency of the emulation approach, and we find that emulation provides a viable alternative in this situation. We demonstrate these methods using two examples with different input dimensions, including an application that considers contamination in pre-pasteurised milk.
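The 2DMC baseline that emulation replaces can be sketched directly: an outer loop samples the uncertain parameters (uncertainty), an inner loop samples the stochastic model given those parameters (variability), and each outer iteration yields one estimate of the percentile of interest. All distributions below are invented.

```python
import random

random.seed(9)

def percentile(vals, q):
    s = sorted(vals)
    return s[min(int(q * len(s)), len(s) - 1)]

outer, inner = 200, 1000
p95_estimates = []
for _ in range(outer):
    mu = random.gauss(2.0, 0.3)          # uncertain log-mean (epistemic)
    sigma = random.uniform(0.5, 1.0)     # uncertain log-sd (epistemic)
    runs = [random.lognormvariate(mu, sigma) for _ in range(inner)]
    p95_estimates.append(percentile(runs, 0.95))

lo = percentile(p95_estimates, 0.05)
hi = percentile(p95_estimates, 0.95)
print(f"95th percentile of output: 90% uncertainty interval [{lo:.1f}, {hi:.1f}]")
```

This small example already uses 200,000 model evaluations; for an expensive simulator that cost is what makes the emulator, fitted to a modest design of runs, attractive.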

12.
In an uncertain and complex outsourcing environment, the crowdsourcing model is affected by many factors. Based on a two-round composite questionnaire survey, this paper analyses the factors influencing crowdsourcing, constructs a research framework model, applies t-tests to the survey sample from several angles, uses a fuzzy multi-level evaluation method to assess the extracted influencing factors, and supports the analysis with a worked example. The results show that four factors (the number of Internet users, the number of intermediary agencies, the number of enterprises engaged in crowdsourcing business, and the number of outsourcing-related regulations issued) are significantly positively correlated with the development of crowdsourcing and receive broadly high support from enterprises. Based on the empirical results, specific countermeasures for the leapfrog development of crowdsourcing in China are proposed, providing theoretical support for China's policy of vigorously developing outsourcing as an emerging industry during the 12th Five-Year Plan and helping to advance the transformation and upgrading of China's crowdsourcing industry.

13.
Data envelopment analysis models are used for measuring composite indicators in various areas. Although there are many models for measuring composite indicators in the literature, surprisingly, there is no methodology that clearly shows how composite indicators improvement could be performed. This article proposes a slack analysis framework for improving the composite indicator of inefficient entities. To do so, two dual problems originated from two data envelopment analysis models in the literature are proposed, which can guide decision makers on how to adjust the subindicators of inefficient entities to improve their composite indicators through identifying which subindicators must be improved and how much they should be augmented. The proposed methodology for improving composite indicators is inspired from data envelopment analysis and slack analysis approaches. Applicability of the proposed methodology is investigated for improving two well-known composite indicators, i.e., the Sustainable Energy Index and the Human Development Index. The results show that 12 out of 18 economies are inefficient in the context of the Sustainable Energy Index, for which the proposed slack analysis models provide the suggested adjustments in terms of their respective subindicators. Furthermore, the proposed methodology suggests how to adjust life expectancy, education, and gross domestic product (GDP) as the three socioeconomic indicators to improve the Human Development Index of the 24 countries identified as inefficient among 27 countries.

14.
This paper reviews five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main questions are: when should which type of analysis be applied; which statistical techniques may then be used? This paper claims that the proper sequence to follow in the evaluation of simulation models is as follows. 1) Validation, in which the availability of data on the real system determines which type of statistical technique to use for validation. 2) Screening: in the simulation's pilot phase the really important inputs can be identified through a novel technique, called sequential bifurcation, which uses aggregation and sequential experimentation. 3) Sensitivity analysis: the really important inputs should be subjected to a more detailed analysis, which includes interactions between these inputs; relevant statistical techniques are design of experiments (DOE) and regression analysis. 4) Uncertainty analysis: the important environmental inputs may have values that are not precisely known, so the uncertainties of the model outputs that result from the uncertainties in these model inputs should be quantified; relevant techniques are the Monte Carlo method and Latin hypercube sampling. 5) Optimization: the policy variables should be controlled; a relevant technique is Response Surface Methodology (RSM), which combines DOE, regression analysis, and steepest-ascent hill-climbing. The recommended sequence implies that sensitivity analysis precedes uncertainty analysis. Several case studies for each phase are briefly discussed in this paper.
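The screening step can be sketched with a simplified recursive variant of bifurcation: test the aggregate effect of a group of inputs switched from low to high; if the group matters, split it and recurse. The additive 16-input simulator below is invented (classical sequential bifurcation also exploits cumulative runs and monotonicity assumptions to use even fewer evaluations).

```python
def model(x):
    # Hypothetical additive simulator with 16 inputs; only inputs 3 and 12
    # are important (coefficients invented for illustration).
    coeff = [0.01] * 16
    coeff[3], coeff[12] = 5.0, 3.0
    return sum(c * v for c, v in zip(coeff, x))

N = 16
BASE = model([0] * N)

def group_effect(group):
    """Model response when only the inputs in `group` are switched high."""
    x = [0] * N
    for i in group:
        x[i] = 1
    return model(x) - BASE

def bifurcate(group, threshold=0.5):
    """Recursively split groups whose aggregate effect exceeds the threshold."""
    if group_effect(group) <= threshold:
        return []                       # whole group unimportant: discard
    if len(group) == 1:
        return list(group)              # isolated an important input
    mid = len(group) // 2
    return bifurcate(group[:mid], threshold) + bifurcate(group[mid:], threshold)

important = bifurcate(list(range(N)))
print("important inputs:", important)   # expect [3, 12]
```

Whole half-groups of unimportant inputs are discarded after a single run each, which is how screening stays cheap when only a few of many inputs matter.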

15.
New approaches to prior specification and structuring in autoregressive time series models are introduced and developed. We focus on defining classes of prior distributions for parameters and latent variables related to latent components of an autoregressive model for an observed time series. These new priors naturally permit the incorporation of both qualitative and quantitative prior information about the number and relative importance of physically meaningful components that represent low frequency trends, quasi-periodic subprocesses and high frequency residual noise components of observed series. The class of priors also naturally incorporates uncertainty about model order and hence leads in posterior analysis to model order assessment and resulting posterior and predictive inferences that incorporate full uncertainties about model order as well as model parameters. Analysis also formally incorporates uncertainty and leads to inferences about unknown initial values of the time series, as it does for predictions of future values. Posterior analysis involves easily implemented iterative simulation methods, developed and described here. One motivating field of application is climatology, where the evaluation of latent structure, especially quasi-periodic structure, is of critical importance in connection with issues of global climatic variability. We explore the analysis of data from the southern oscillation index, one of several series that has been central in recent high profile debates in the atmospheric sciences about recent apparent trends in climatic indicators.
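The link between autoregressive parameters and quasi-periodic latent components rests on a simple fact: complex roots of the AR characteristic polynomial imply a damped oscillation with period 2π/arg(root). The sketch below simulates an AR(2) with a known period, fits it by Yule-Walker, and recovers the period; parameters are illustrative, not fitted to the southern oscillation index, and the paper's Bayesian machinery is far richer than this.

```python
import cmath
import math
import random

random.seed(2)

# AR(2) with complex roots of modulus 0.95 and period 12:
# phi1 = 2*0.95*cos(2*pi/12), phi2 = -0.95**2.
phi1, phi2 = 2 * 0.95 * math.cos(2 * math.pi / 12), -0.95 ** 2
y = [0.0, 0.0]
for _ in range(3000):
    y.append(phi1 * y[-1] + phi2 * y[-2] + random.gauss(0, 1))
y = y[200:]                                   # drop burn-in

def acf(series, lag):
    m = sum(series) / len(series)
    num = sum((series[t] - m) * (series[t - lag] - m)
              for t in range(lag, len(series)))
    den = sum((x - m) ** 2 for x in series)
    return num / den

# Yule-Walker estimates from the first two autocorrelations.
r1, r2 = acf(y, 1), acf(y, 2)
p1 = r1 * (1 - r2) / (1 - r1 ** 2)
p2 = (r2 - r1 ** 2) / (1 - r1 ** 2)

# Complex roots of z^2 - p1*z - p2 give the quasi-period 2*pi/arg(root).
disc = cmath.sqrt(p1 ** 2 + 4 * p2)
root = (p1 + disc) / 2
period = 2 * math.pi / abs(cmath.phase(root))
print(f"estimated quasi-period = {period:.1f} (true 12)")
```

In the paper's framework, posterior draws of the AR parameters induce a posterior over such component periods and moduli, which is what supports inference about quasi-periodic climatic structure.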

16.
"Uncertainty in statistics and demographic projections for aging and other policy purposes comes from four sources: differences in definitions, sampling error, nonsampling error, and scientific uncertainty. Some of these uncertainties can be reduced by proper planning and coordination, but most often decisions have to be made in the face of some remaining uncertainty. Although decision makers have a tendency to ignore uncertainty, doing so does not lead to good policy-making. Techniques for estimating and reporting on uncertainty include sampling theory, assessment of experts' subjective distributions, sensitivity analysis, and multiple independent estimates." The primary geographical focus is on the United States.  相似文献   

17.
Multiple-bias modelling for analysis of observational data
Summary.  Conventional analytic results do not reflect any source of uncertainty other than random error, and as a result readers must rely on informal judgments regarding the effect of possible biases. When standard errors are small these judgments often fail to capture sources of uncertainty and their interactions adequately. Multiple-bias models provide alternatives that allow one systematically to integrate major sources of uncertainty, and thus to provide better input to research planning and policy analysis. Typically, the bias parameters in the model are not identified by the analysis data and so the results depend completely on priors for those parameters. A Bayesian analysis is then natural, but several alternatives based on sensitivity analysis have appeared in the risk assessment and epidemiologic literature. Under some circumstances these methods approximate a Bayesian analysis and can be modified to do so even better. These points are illustrated with a pooled analysis of case–control studies of residential magnetic field exposure and childhood leukaemia, which highlights the diminishing value of conventional studies conducted after the early 1990s. It is argued that multiple-bias modelling should become part of the core training of anyone who will be entrusted with the analysis of observational data, and should become standard procedure when random error is not the only important source of uncertainty (as in meta-analysis and pooled analysis).
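A minimal Monte Carlo version of bias modelling: sample a bias parameter from its prior, correct the observed estimate, and combine with conventional random error. The observed odds ratio and the priors below are invented for illustration, not the pooled magnetic-field estimate, and a single multiplicative confounding factor stands in for the paper's multiple bias sources.

```python
import math
import random

random.seed(4)

# Hypothetical observed odds ratio and a prior on a multiplicative
# confounding bias factor (all values invented).
or_obs = 1.5
adjusted = []
for _ in range(5000):
    bias = math.exp(random.gauss(0.0, 0.2))       # prior: lognormal bias factor
    rand_err = math.exp(random.gauss(0.0, 0.15))  # conventional random error
    adjusted.append(or_obs / bias * rand_err)

adjusted.sort()
med = adjusted[len(adjusted) // 2]
lo, hi = adjusted[int(0.025 * 5000)], adjusted[int(0.975 * 5000)]
print(f"bias-adjusted OR: median {med:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```

The adjusted interval is wider than the conventional confidence interval because it carries the unidentified bias uncertainty, which is precisely the point the abstract makes against bias-ignoring analyses.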

18.
Overall journal rankings, which are generated with sample articles in different research fields, are commonly used to measure the research productivity of academic economists. In this article, we investigate a growing concern in the profession that the use of the overall journal rankings to evaluate scholars' relative research productivity may exhibit a downward bias toward researchers in some specialty fields if their respective field journals are under-ranked in the overall journal rankings. To address this concern, we constructed new journal rankings based on the intellectual influence of research in 8 specialty fields using a sample consisting of 26,401 articles published across 60 economics journals from 1998 to 2007. We made various comparisons between the newly constructed journal rankings in specialty fields and the traditional overall journal ranking. Our results show that the overall journal ranking provides a reasonably good mapping of article quality in specialty fields. Supplementary materials for this article are available online.
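One simple way to quantify agreement between an overall ranking and a field-specific ranking is Spearman's rank correlation. The journal labels and ranks below are invented for illustration; the paper's comparisons are considerably more detailed.

```python
# Hypothetical ranks of 8 journals under the overall ranking and a
# specialty-field ranking (numbers invented for illustration).
overall = {"J1": 1, "J2": 2, "J3": 3, "J4": 4, "J5": 5, "J6": 6, "J7": 7, "J8": 8}
field   = {"J1": 2, "J2": 1, "J3": 4, "J4": 3, "J5": 5, "J6": 8, "J7": 6, "J8": 7}

# Spearman rank correlation (no ties): rho = 1 - 6*sum(d^2) / (n*(n^2 - 1)).
n = len(overall)
d2 = sum((overall[j] - field[j]) ** 2 for j in overall)
rho = 1 - 6 * d2 / (n * (n ** 2 - 1))
print(f"Spearman rho = {rho:.3f}")   # -> 0.881
```

A rho near 1 corresponds to the paper's finding that the overall ranking maps field-level quality reasonably well; a low rho for some field would signal the under-ranking concern.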

19.
The struggle to find and use indicators of sustainable development is intimately bound up with the process of deciding what we mean by sustainable development and what we shall do about it. In this field at least, indicators are intrinsically and unavoidably normative and political. The paper proposes an approach to indicators which reflects, and can further clarify and help to achieve, an important aspect of sustainable development. The paper is written from a practical, instrumental interest in indicators as a tool to put sustainable development principles into practice in public policy. The author is not a statistician and makes no claim to technical expertise but hopes that this 'barefoot' practitioner perspective may be of some interest to the professionals. The main argument is introduced by a discussion of some of the pitfalls and limitations of sustainability indicators to date.

20.
Uncertainty and sensitivity analysis is an essential ingredient of model development and applications. For many uncertainty and sensitivity analysis techniques, sensitivity indices are calculated based on a relatively large sample to measure the importance of parameters in their contributions to uncertainties in model outputs. To statistically compare their importance, it is necessary that uncertainty and sensitivity analysis techniques provide standard errors of estimated sensitivity indices. In this paper, a delta method is used to analytically approximate standard errors of estimated sensitivity indices for a popular sensitivity analysis method, the Fourier amplitude sensitivity test (FAST). Standard errors estimated based on the delta method were compared with those estimated based on 20 sample replicates. We found that the delta method can provide a good approximation for the standard errors of both first-order and higher-order sensitivity indices. Finally, based on the standard error approximation, we also proposed a method to determine a minimum sample size to achieve the desired estimation precision for a specified sensitivity index. The standard error estimation method presented in this paper can make the FAST analysis computationally much more efficient for complex models.  
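The general principle can be sketched without the FAST machinery: a sensitivity index is a ratio of moment estimates, so its standard error follows from the first-order delta method for a ratio, and that analytic SE can be checked against replicate-based SEs (as the paper does with 20 replicates). The correlated pair below and the ratio mean(a)/mean(b) are invented stand-ins, not FAST's Fourier-amplitude ratios.

```python
import random

random.seed(6)

def draw_pair():
    # Correlated positive pair (a, b); the ratio mean(a)/mean(b) stands in
    # for a sensitivity index (partial variance over total variance).
    b = random.gauss(10, 1)
    a = 0.4 * b + random.gauss(0, 0.5)
    return a, b

def ratio_and_delta_se(pairs):
    n = len(pairs)
    ma = sum(a for a, _ in pairs) / n
    mb = sum(b for _, b in pairs) / n
    va = sum((a - ma) ** 2 for a, _ in pairs) / (n - 1)
    vb = sum((b - mb) ** 2 for _, b in pairs) / (n - 1)
    cab = sum((a - ma) * (b - mb) for a, b in pairs) / (n - 1)
    r = ma / mb
    # First-order delta method for Var(mean(a)/mean(b)).
    var_r = (va / mb**2 + ma**2 * vb / mb**4 - 2 * ma * cab / mb**3) / n
    return r, var_r ** 0.5

# Delta-method SE from one sample of size 500.
pairs = [draw_pair() for _ in range(500)]
r, se_delta = ratio_and_delta_se(pairs)

# Replicate-based SE: sd of the ratio over 20 independent samples.
reps = []
for _ in range(20):
    ps = [draw_pair() for _ in range(500)]
    reps.append(ratio_and_delta_se(ps)[0])
mr = sum(reps) / len(reps)
se_rep = (sum((x - mr) ** 2 for x in reps) / (len(reps) - 1)) ** 0.5

print(f"ratio = {r:.4f}; delta SE = {se_delta:.5f}; replicate SE = {se_rep:.5f}")
```

The analytic SE comes from a single sample, whereas the replicate SE needs 20 times the computation, which is the efficiency gain the paper claims for delta-method SEs in FAST.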
