Similar Articles
20 similar articles found (search time: 31 ms)
1.
Summary.  This is a response to Stone's criticisms of the Spottiswoode report to the UK Treasury, which was prepared in response to the Treasury's request for improved methods to evaluate the efficiency and productivity of the 43 police districts in England and Wales. The Spottiswoode report recommended uses of data envelopment analysis (DEA) and stochastic frontier analysis (SFA), which Stone critiqued en route to proposing an alternative approach. Here we note some of the most serious errors in his criticism and inaccurate portrayals of DEA and SFA. Most of our attention is devoted to DEA and to Stone's recommended alternative approach, with little attention to SFA, partly because of his abbreviated discussion of the latter. In our response we attempt to be constructive as well as critical by showing how Stone's proposed approach can be joined to DEA to expand his proposal beyond limitations in his formulations.

2.
Based on China's input-output tables (and extension tables) for 2002-2017, this paper uses a non-competitive input-output model to measure the embodied carbon emissions of each of China's industrial sectors and incorporates them into a carbon-emission-efficiency measurement model. A three-stage DEA model is then used to strip out the effects of the external environment and random noise, yielding more accurate and objective estimates of each sector's embodied-carbon-emission efficiency. The empirical results show that the overall embodied-carbon-emission efficiency of China's industrial sectors is still low and differs markedly across sectors. A cluster analysis of the third-stage sector-level efficiency means over the years shows that agriculture, oil and gas extraction, wholesale, retail and catering, and other services sit at a high efficiency level; electrical machinery and equipment manufacturing, manufacturing of communication equipment, computers and other electronic equipment, production and supply of electric power and heat, production and supply of gas, and coal mining sit at a medium level; while resource-dependent, low-innovation sectors such as textiles, clothing, footwear, leather and down products, timber processing and furniture manufacturing, together with water production and supply, construction, and transport, storage and postal services, sit at a low level. The external environment significantly affects sector-level embodied-carbon-emission efficiency in three patterns: "crossing," "overlapping," and "parallel." "Crossing" sectors are mainly in manufacturing, where the environment first raises and then suppresses efficiency; "overlapping" sectors are mainly in the primary and tertiary industries, where the environment has little effect; "parallel" sectors are mainly resource-based industries, where the effect depends on the specific industry.

3.
Data envelopment analysis (DEA) is the most commonly used approach for evaluating healthcare efficiency [B. Hollingsworth, The measurement of efficiency and productivity of health care delivery. Health Economics 17(10) (2008), pp. 1107–1128], but a long-standing concern is that DEA assumes that data are measured without error. This is quite unlikely, and DEA and other efficiency analysis techniques may yield biased efficiency estimates if it is not realized [B.J. Gajewski, R. Lee, M. Bott, U. Piamjariyakul, and R.L. Taunton, On estimating the distribution of data envelopment analysis efficiency scores: an application to nursing homes’ care planning process. Journal of Applied Statistics 36(9) (2009), pp. 933–944; J. Ruggiero, Data envelopment analysis with stochastic data. Journal of the Operational Research Society 55 (2004), pp. 1008–1012]. We propose to address measurement error systematically using a Bayesian method (Bayesian DEA). We will apply Bayesian DEA to data from the National Database of Nursing Quality Indicators® to estimate nursing units’ efficiency. Several external reliability studies inform the posterior distribution of the measurement error on the DEA variables. We will discuss the case of generalizing the approach to situations where an external reliability study is not feasible.
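The computation underlying any such analysis is the DEA linear program itself. Below is a minimal sketch, assuming scipy, of input-oriented CCR DEA plus a Monte Carlo wrapper that propagates an assumed lognormal measurement-error model through the scores. It illustrates the idea of the abstract (error-aware efficiency estimates), not the paper's exact Bayesian machinery; the error standard deviation `sigma` stands in for what the external reliability studies would supply.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_scores(X, Y):
    """Input-oriented CCR efficiency for all DMUs. X: (m, n) inputs, Y: (s, n) outputs."""
    m, n = X.shape
    s = Y.shape[0]
    scores = np.empty(n)
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                    # minimise theta
        A_in = np.hstack([-X[:, [o]], X])              # X @ lam <= theta * x_o
        A_out = np.hstack([np.zeros((s, 1)), -Y])      # Y @ lam >= y_o
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[:, o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] + [(0, None)] * n)
        scores[o] = res.x[0]
    return scores

def scores_under_error(X, Y, sigma=0.05, n_draws=200, seed=0):
    """Spread of DEA scores when the data carry lognormal measurement error."""
    rng = np.random.default_rng(seed)
    draws = [ccr_scores(X * rng.lognormal(0.0, sigma, X.shape),
                        Y * rng.lognormal(0.0, sigma, Y.shape))
             for _ in range(n_draws)]
    return np.percentile(draws, [2.5, 50, 97.5], axis=0)  # per-DMU interval
```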

4.
This paper describes a statistical method for estimating data envelopment analysis (DEA) score confidence intervals for individual organizations or other entities. This method applies statistical panel data analysis, which provides proven and powerful methodologies for diagnostic testing and for estimation of confidence intervals. DEA scores are tested for violations of the standard statistical assumptions including contemporaneous correlation, serial correlation, heteroskedasticity and the absence of a normal distribution. Generalized least squares statistical models are used to adjust for violations that are present and to estimate valid confidence intervals within which the true efficiency of each individual decision-making unit occurs. This method is illustrated with two sets of panel data, one from large US urban transit systems and the other from a group of US hospital pharmacies.
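As a rough illustration of the workflow (not the paper's exact specification), the sketch below regresses panel DEA scores on unit dummies, screens the pooled residuals with a Durbin-Watson statistic, and refits with AR(1) GLS when serial correlation is indicated, so the per-unit confidence intervals come from a model whose assumptions hold. It assumes statsmodels and a rectangular units-by-periods score array; the pooled diagnostic is a simplification.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

def unit_confidence_intervals(scores, alpha=0.05):
    """scores: (n_units, n_periods) array of DEA scores."""
    n_units, n_t = scores.shape
    y = scores.ravel()                                  # unit-major ordering
    X = np.kron(np.eye(n_units), np.ones((n_t, 1)))     # one dummy per unit
    ols = sm.OLS(y, X).fit()
    dw = durbin_watson(ols.resid)                       # ~2 means no AR(1)
    if abs(dw - 2.0) > 0.5:                             # crude rule of thumb
        model = sm.GLSAR(y, X, rho=1).iterative_fit(maxiter=10)
    else:
        model = ols
    # params are per-unit mean efficiencies; conf_int gives their intervals
    return model.params, model.conf_int(alpha=alpha)
```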

5.
To address the homogeneity of decision-making units in multi-input, multi-output evaluation systems, and drawing on the development of multi-stage DEA models, this paper combines Tobit and SFA multiple linear regression with DEA to propose a six-stage DEA model. For the first time, external environmental variables are separated into positive and negative ones; input-slack and output-shortfall variables are fully exploited to re-adjust input or output quantities, removing the effects of environmental variables, random error, and managerial inefficiency on the efficiency evaluation and yielding pure managerial efficiency. An empirical analysis of 2009 commercial-bank data confirms that, as a continuation of the multi-stage DEA lineage, the six-stage DEA model can serve as a reference criterion for judging the homogeneity of decision-making units in an evaluation system and helps to build a systematic, comprehensive evaluation index system. The method extends to panel data and is also instructive for non-DEA system evaluation.

6.
There is a growing trend towards the production of “hospital report-cards” in which hospitals with higher than acceptable mortality rates are identified. Several commentators have advocated for the use of Bayesian methods for health care report cards. Earlier research has demonstrated that there is poor concordance between different Bayesian methods. The current study used Monte Carlo simulation methods to examine the reliability and validity of four different Bayesian measures of hospital performance. Estimates of the reliability of the different measures ranged from a low of 0.89 to a high of 0.99. Estimates of the validity of the four measures ranged from a low of 0.58 to a high of 0.65. Thus, while the four measures of hospital performance demonstrated high reliability, the validity of each method was at most moderate. It is hypothesized that the low validity is due in part to the limited sample sizes that are typically available for hospital report cards.
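The simulation logic compresses into a few lines. The sketch below is an assumed, simplified version of such a study, not the authors' design: the Bayesian performance measure is the posterior probability that a hospital's mortality exceeds the population rate, reliability is the correlation of the measure across two replicate samples, and validity is its correlation with the underlying truth.

```python
import numpy as np
from scipy.stats import beta

def simulate(n_hosp=200, n_pat=100, p0=0.10, seed=1):
    rng = np.random.default_rng(seed)
    true_p = rng.beta(4, 36, n_hosp)        # true mortality, centred near p0
    def measure():
        deaths = rng.binomial(n_pat, true_p)
        # posterior P(p_i > p0) under a Beta(1,1) prior -> Beta(1+y, 1+n-y)
        return beta.sf(p0, 1 + deaths, 1 + n_pat - deaths)
    m1, m2 = measure(), measure()           # two replicate report cards
    reliability = np.corrcoef(m1, m2)[0, 1]
    validity = np.corrcoef(m1, true_p)[0, 1]
    return reliability, validity

print(simulate())   # small n_pat keeps validity well below reliability
```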

7.
University courses in elementary statistics are usually taught from a frequentist perspective. In this paper I suggest how such courses can be taught using a Bayesian approach, and I indicate why beginning students are well served by a Bayesian course. A principal focus of any good elementary course is the application of statistics to real and important scientific problems. The Bayesian approach fits neatly with a scientific focus. Bayesians take a larger view, and one not limited to data analysis. In particular, the Bayesian approach is subjective, and requires assessing prior probabilities. This requirement forces users to relate current experimental evidence to other available information, including previous experiments of a related nature, where “related” is judged subjectively. I discuss difficulties faced by instructors and students in elementary Bayesian courses, and provide a sample syllabus for an elementary Bayesian course.

8.
The power of randomized controlled clinical trials to demonstrate the efficacy of a drug compared with a control group depends not just on how efficacious the drug is, but also on the variation in patients' outcomes. Adjusting for prognostic covariates during trial analysis can reduce this variation. For this reason, the primary statistical analysis of a clinical trial is often based on regression models that, besides terms for treatment and some further terms (e.g., stratification factors used in the randomization scheme of the trial), also include a baseline (pre-treatment) assessment of the primary outcome. We suggest including a “super-covariate”—that is, a patient-specific prediction of the control group outcome—as a further covariate (but not as an offset). We train a prognostic model or ensembles of such models on the individual patient (or aggregate) data of other studies in similar patients, but not on the new trial under analysis. This has the potential to use historical data to increase the power of clinical trials and avoids the concern of type I error inflation with Bayesian approaches, but in contrast to them has a greater benefit for larger sample sizes. It is important for prognostic models behind “super-covariates” to generalize well across different patient populations in order to similarly reduce unexplained variability whether the trial(s) used to develop the model are identical to the new trial or not. In an example in neovascular age-related macular degeneration we saw efficiency gains from the use of a “super-covariate”.
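A minimal rendition of the idea follows, with all modelling choices assumed: a plain linear prognostic model stands in for the ensembles the authors allow, and the treatment effect is then estimated by ordinary least squares with the prediction as an extra covariate.

```python
import numpy as np
import statsmodels.api as sm

def supercovariate_analysis(hist_X, hist_y, trial_X, baseline, treat, y):
    # Prognostic model fitted on historical patients only (never the new trial).
    coef, *_ = np.linalg.lstsq(sm.add_constant(hist_X), hist_y, rcond=None)
    super_cov = sm.add_constant(trial_X) @ coef   # predicted control outcome
    # ANCOVA-style primary analysis: treatment + baseline + super-covariate.
    design = sm.add_constant(np.column_stack([treat, baseline, super_cov]))
    fit = sm.OLS(y, design).fit()
    return fit.params[1], fit.bse[1]              # treatment effect and its SE
```

The better the prognostic model generalizes, the more residual variance the super-covariate absorbs and the smaller the standard error of the treatment effect, without touching the type I error of the frequentist test.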

9.
Interest centres on a group of statisticians, each supplied with the same n sample data points and making formal Bayesian inference with a common likelihood function but differing prior knowledge and utility functions.

Definitions are proposed which quantify, in a commensurable way, the inference processes of “accuracy”, “confidence” and “consensus” for the case of hypothesis inference with a fixed sample size n.

The general significance of comparing the three quantifiers is considered. As n increases the asymptotic behaviour of the quantifiers is evaluated and it is found that the three rates of convergence are of the same order as a function of n. The results are interpreted and some of their implications are discussed.

10.
A class of “optimal” U-statistic-type nonparametric test statistics is proposed for the one-sample location problem by considering a kernel depending on a constant a and all possible (distinct) subsamples of size two from a sample of n independent and identically distributed observations. The “optimal” choice of a is determined by the underlying distribution. The proposed class includes the Sign and the modified Wilcoxon signed-rank statistics as special cases. It is shown that any “optimal” member of the class performs better in terms of Pitman efficiency relative to the Sign and Wilcoxon signed-rank statistics. The effect of deviations of the chosen a from the “optimal” a on Pitman efficiency is also examined. A Hodges-Lehmann type point estimator of the location parameter corresponding to the proposed “optimal” test statistics is also defined and studied in this paper.
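The degree-2 U-statistic machinery is easy to exhibit. The sketch below uses the modified Wilcoxon signed-rank kernel I(x_i + x_j > 0) as a stand-in member of the class; the paper's “optimal” kernel indexed by a is not reproduced here.

```python
import numpy as np
from itertools import combinations

def u_statistic(x, kernel):
    """Degree-2 U-statistic: average of the kernel over all distinct pairs."""
    return np.mean([kernel(x[i], x[j])
                    for i, j in combinations(range(len(x)), 2)])

wilcoxon_kernel = lambda xi, xj: float(xi + xj > 0)

# Shifted sample: the statistic should exceed its null value of 1/2.
x = np.random.default_rng(0).normal(0.3, 1.0, 50)
print(u_statistic(x, wilcoxon_kernel))
```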

11.
Standard methods for analyzing binomial regression data rely on asymptotic inferences. Bayesian methods can be performed using simple computations, and they apply for any sample size. We provide a relatively complete discussion of Bayesian inferences for binomial regression with emphasis on inferences for the probability of “success.” Furthermore, we illustrate diagnostic tools, perform model selection among nonnested models, and examine the sensitivity of the Bayesian methods.
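Because the posterior has no closed form, such inferences are typically simulation-based. Below is a self-contained random-walk Metropolis sampler for logistic regression under assumed independent Normal(0, 10²) priors (the priors are an illustrative choice, not the article's); coefficient draws then give the posterior of the success probability at any covariate value.

```python
import numpy as np

def logpost(beta, X, y, tau=10.0):
    """Log posterior: Bernoulli log-likelihood + Normal(0, tau^2) priors."""
    eta = X @ beta
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))
    return loglik - 0.5 * np.sum(beta ** 2) / tau ** 2

def metropolis(X, y, n_iter=20000, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    beta = np.zeros(X.shape[1])
    lp = logpost(beta, X, y)
    out = np.empty((n_iter, X.shape[1]))
    for t in range(n_iter):
        prop = beta + step * rng.normal(size=beta.size)
        lp_prop = logpost(prop, X, y)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            beta, lp = prop, lp_prop
        out[t] = beta
    return out

# Posterior of P(success | x0): apply the logistic function to draws @ x0
# and summarise with quantiles.
```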

12.
We introduce a new test of isotropy or uniformity on the circle, based on the Gini mean difference of the sample arc-lengths and obtain both the exact and asymptotic distributions under the null hypothesis of circular uniformity. We also provide a table of upper percentile values of the exact distribution for small to moderate sample sizes. Illustrative examples in circular data analysis are also given. It is shown that a “generalized” Gini mean difference test has better asymptotic efficiency than a corresponding “generalized” Rao's test in the sense of Pitman asymptotic relative efficiency.
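The statistic itself is straightforward to compute: sort the angles, take the n arc-lengths (spacings, including the wrap-around arc), and form their Gini mean difference. The sketch below approximates the null distribution by Monte Carlo rather than the exact or asymptotic forms derived in the paper.

```python
import numpy as np

def gini_mean_difference(d):
    """GMD = (2 / (n(n-1))) * sum_{i<j} (d_(j) - d_(i)), via order statistics."""
    d = np.sort(d)
    n = len(d)
    return 2.0 * np.sum((2 * np.arange(1, n + 1) - n - 1) * d) / (n * (n - 1))

def arc_lengths(theta):
    """The n spacings of the sorted angles, including the wrap-around arc."""
    t = np.sort(np.mod(theta, 2 * np.pi))
    return np.diff(np.r_[t, t[0] + 2 * np.pi])

def uniformity_pvalue(theta, n_sim=5000, seed=0):
    rng = np.random.default_rng(seed)
    stat = gini_mean_difference(arc_lengths(theta))
    null = [gini_mean_difference(arc_lengths(rng.uniform(0, 2 * np.pi, len(theta))))
            for _ in range(n_sim)]
    # Clustering inflates spacing variability, hence an upper-tailed test,
    # matching the paper's table of upper percentiles.
    return np.mean(np.asarray(null) >= stat)
```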

13.
The beta-binomial distribution, which is generated by a simple mixture model, has been widely applied in the social, physical, and health sciences. Problems of estimation, inference, and prediction have been addressed in the past, but not in a Bayesian framework. This article develops Bayesian procedures for the beta-binomial model and, using a suitable reparameterization, establishes a conjugate-type property for a beta family of priors. The transformed parameters have interesting interpretations, especially in marketing applications, and are likely to be more stable. More specifically, one of these parameters is the market share and the other is a measure of the heterogeneity of the customer population. Analytical results are developed for the posterior and prediction quantities, although the numerical evaluation is not trivial. Since the posterior moments are more easily calculated, we also propose the use of posterior approximation using the Pearson system. A particular case (when there are two trials), which occurs in taste testing, brand choice, media exposure, and some epidemiological applications, is analyzed in detail. Simulated and real data are used to demonstrate the feasibility of the calculations. The simulation results effectively demonstrate the superiority of Bayesian estimators, particularly in small samples, even with uniform (“non-informed”) priors. Naturally, “informed” priors can give even better results. The real data on television viewing behavior are used to illustrate the prediction results. In our analysis, several problems with the maximum likelihood estimators are encountered. The superior properties and performance of the Bayesian estimators and the excellent approximation results are strong indications that our results will be potentially of high value in small sample applications of the beta-binomial and in cases in which significant prior information exists.
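For modest grids the posterior can simply be evaluated numerically. The sketch below assumes the reparameterization π = a/(a+b) (the "market share") and γ = 1/(a+b+1) (heterogeneity) with independent uniform priors; this is one natural reading of the abstract, not a transcription of the article's formulas.

```python
import numpy as np
from scipy.special import betaln, comb

def log_betabinom(y, n, a, b):
    """Beta-binomial log pmf, vectorised over parameter arrays a, b."""
    return np.log(comb(n, y)) + betaln(y + a, n - y + b) - betaln(a, b)

def grid_posterior(y, n, grid=200):
    """y, n: sequences of successes and trial counts per unit."""
    pi = np.linspace(0.005, 0.995, grid)
    gam = np.linspace(0.005, 0.995, grid)
    P, G = np.meshgrid(pi, gam, indexing="ij")
    A = P * (1 - G) / G                      # back-transform: a + b = (1-g)/g
    B = (1 - P) * (1 - G) / G
    loglik = sum(log_betabinom(yi, ni, A, B) for yi, ni in zip(y, n))
    post = np.exp(loglik - loglik.max())     # flat priors -> normalised likelihood
    post /= post.sum()
    return pi, gam, post                     # marginal of pi: post.sum(axis=1)
```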

14.
On the Process of Constructing Statistical Indicators. Su Weihua (苏为华), Statistical Research (《统计研究》), 1996, 13(5): 34-37.
ABSTRACT: The construction of statistical indicators is a process of complicated logical thinking that can be divided into a series of...

15.
In this paper, we propose new estimation techniques in connection with the system of S-distributions. Besides “exact” maximum likelihood (ML), we propose simulated ML and a characteristic function-based procedure. The “exact” and simulated likelihoods can be used to provide numerical, MCMC-based Bayesian inferences.

16.
Yu Jingwen et al. (余静文等), Statistical Research (《统计研究》), 2021, 38(4): 89-102.
China's banking sector plays a key role in the financial system and has long been a central part of financial development. Since the beginning of the 21st century, the state-bank-dominated structure of Chinese banking has undergone major change and competition has steadily intensified. Two hypotheses have been advanced about the economic effects of banking competition: the "market power hypothesis" and the "information hypothesis." Against the background of relaxed bank-entry regulation and policies encouraging firms to "go global," this paper uses matched micro-level data to test these hypotheses empirically. The main findings are as follows. First, banking deregulation helps firms "go global." Second, this conclusion survives propensity-score matching to address sample selection and instrumental-variable methods to address endogeneity. Finally, the reduction in financing costs brought about by deregulation is an important channel through which deregulation affects firms' outward expansion, and the results on firms "going global" support the "market power hypothesis" in the Chinese context. The study provides important evidence on the relationship between banking reform and firms' outward foreign direct investment, tests the "market power hypothesis" and the "information hypothesis" in the Chinese setting, deepens the understanding and evaluation of the economic effects of China's banking reform, and has significant implications for better advancing the Belt and Road Initiative.

17.
Incorporating historical data has a great potential to improve the efficiency of phase I clinical trials and to accelerate drug development. For model-based designs, such as the continuous reassessment method (CRM), this can be conveniently carried out by specifying a “skeleton,” that is, the prior estimate of dose limiting toxicity (DLT) probability at each dose. In contrast, little work has been done to incorporate historical data into model-assisted designs, such as the Bayesian optimal interval (BOIN), Keyboard, and modified toxicity probability interval (mTPI) designs. This has led to the misconception that model-assisted designs cannot incorporate prior information. In this paper, we propose a unified framework that allows for incorporating historical data into model-assisted designs. The proposed approach uses the well-established “skeleton” approach, combined with the concept of prior effective sample size, thus it is easy to understand and use. More importantly, our approach maintains the hallmark of model-assisted designs: simplicity—the dose escalation/de-escalation rule can be tabulated prior to the trial conduct. Extensive simulation studies show that the proposed method can effectively incorporate prior information to improve the operating characteristics of model-assisted designs, similarly to model-based designs.
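The recipe is simple to state concretely. In the assumed sketch below, the skeleton value at each dose and a prior effective sample size n0 define a Beta(n0·p_j, n0·(1−p_j)) prior, and the escalation rule compares the resulting posterior mean with BOIN-style interval boundaries (the commonly cited defaults λ_e = 0.236, λ_d = 0.358 for a 0.30 target are used here); the paper's exact calibration may differ.

```python
def decision(dose_j, n_tox, n_pat, skeleton, n0=2.0, lo=0.236, hi=0.358):
    """Escalation decision at dose j from a skeleton-informed Beta prior."""
    a0 = n0 * skeleton[dose_j]          # prior pseudo-toxicities
    b0 = n0 * (1 - skeleton[dose_j])    # prior pseudo-non-toxicities
    post_mean = (a0 + n_tox) / (n0 + n_pat)   # Beta posterior mean of DLT prob.
    if post_mean <= lo:
        return "escalate"
    if post_mean >= hi:
        return "de-escalate"
    return "stay"

# 1 DLT out of 6 at dose 2 of a 4-dose skeleton -> "escalate".
print(decision(1, n_tox=1, n_pat=6, skeleton=[0.08, 0.15, 0.30, 0.45]))
```

Because the decision depends only on (n_tox, n_pat) at the current dose, the whole rule can still be tabulated before the trial starts, preserving the simplicity the abstract emphasizes.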

18.
Decision making is a critical component of a new drug development process. Based on results from an early clinical trial such as a proof of concept trial, the sponsor can decide whether to continue, stop, or defer the development of the drug. To simplify and harmonize the decision-making process, decision criteria have been proposed in the literature. One of them is to examine the location of a confidence bar relative to the target value and lower reference value of the treatment effect. In this research, we modify an existing approach by moving part of the “stop” region into the “consider” region, so that the chance of directly terminating the development of a potentially valuable drug is reduced. As Bayesian analysis has certain flexibilities and can borrow historical information through an inferential prior, we apply Bayesian analysis to the trial planning and decision making. Via a design prior, we can also calculate the probabilities of various decision outcomes in relation to the sample size and the other parameters to aid the study design. An example and a series of computations are used to illustrate the applications, assess the operating characteristics, and compare the performances of different approaches.
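One plausible encoding of such a rule follows; the exact layout of the "go/consider/stop" zones is assumed for illustration, not taken from the paper.

```python
def decide(est, lower, upper, lrv, tv):
    """Classify a trial result from its interval (lower, upper) and estimate.

    "go"      : the whole bar clears the LRV and the estimate reaches the TV;
    "stop"    : the bar lies entirely below the TV;
    "consider": everything in between (the zone the paper enlarges).
    """
    if lower > lrv and est >= tv:
        return "go"
    if upper < tv:
        return "stop"
    return "consider"

print(decide(est=0.18, lower=0.05, upper=0.31, lrv=0.00, tv=0.15))  # -> go
```

Moving some "stop" outcomes into "consider" amounts to tightening the "stop" condition (here, requiring the entire bar below the TV), which directly trades a lower false-termination rate for more deferred decisions.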

19.
In this paper, we present different “frailty” models to analyze longitudinal data in the presence of covariates. These models incorporate the extra-Poisson variability and the possible correlation among the repeated count data for each individual. Using a CD4 count data set from HIV-infected patients, we develop a hierarchical Bayesian analysis of the different proposed models using Markov chain Monte Carlo methods. We also discuss some Bayesian discrimination aspects for the choice of the best model.
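For the simplest member of this family, a Poisson-gamma frailty model without covariates, both full conditionals are conjugate and the MCMC reduces to a two-block Gibbs sampler. The sketch below assumes y_ij ~ Poisson(u_i·μ), frailties u_i ~ Gamma(k, k) with k fixed, and a Gamma(a0, b0) prior on μ, which is a stripped-down stand-in for the models in the paper.

```python
import numpy as np

def gibbs(y, k=2.0, a0=1.0, b0=1.0, n_iter=5000, seed=0):
    """y: (n_subjects, n_visits) array of repeated counts."""
    rng = np.random.default_rng(seed)
    n, m = y.shape
    row_sums = y.sum(axis=1)
    u = np.ones(n)                        # frailties capture within-subject correlation
    mus = np.empty(n_iter)
    for t in range(n_iter):
        # mu | u, y ~ Gamma(a0 + sum(y), b0 + m * sum(u))   (conjugate)
        mu = rng.gamma(a0 + y.sum(), 1.0 / (b0 + m * u.sum()))
        # u_i | mu, y ~ Gamma(k + sum_j y_ij, k + m * mu)   (conjugate)
        u = rng.gamma(k + row_sums, 1.0 / (k + m * mu))
        mus[t] = mu
    return mus   # extra-Poisson variability shows up as dispersion in u
```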

20.
This paper describes a computer program GTEST for designing group testing experiments for classifying each member of a population of items as “good” or “defective”. The outcome of a test on a group of items is either “negative” (if all items in the group are good) or “positive” (if at least one of the items is defective, but it is not known which). GTEST is based on a Bayesian approach. At each stage, it attempts to maximize (nearly) the expected reduction in the “entropy”, which is a quantitative measure of the amount of uncertainty about the state of the items. The user controls the procedure through specification of the prior probabilities of being defective, restrictions on the construction of the test group, and priorities that are assigned to the items. The nominal prior probabilities can be modified adaptively, to reduce the sensitivity of the procedure to the proportion of defectives in the population.
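The entropy criterion is compact enough to sketch directly. The greedy variant below (assumed details, not GTEST's actual search) uses the fact that, since the test outcome is a deterministic function of the items' states, the expected entropy reduction from one test equals the entropy of its binary outcome, which is largest when P(negative) is near 1/2.

```python
import numpy as np
from itertools import combinations

def outcome_entropy(p_defect):
    """Entropy (bits) of one group test with independent defect priors."""
    q = np.prod(1.0 - np.asarray(p_defect))   # P(negative) = P(all good)
    if q in (0.0, 1.0):
        return 0.0
    return -(q * np.log2(q) + (1 - q) * np.log2(1 - q))

def best_group(priors, max_size=4):
    """Greedy choice: the group whose outcome entropy is maximal."""
    candidates = (g for size in range(1, max_size + 1)
                  for g in combinations(range(len(priors)), size))
    return max(candidates,
               key=lambda g: outcome_entropy([priors[i] for i in g]))

print(best_group([0.05, 0.10, 0.20, 0.30, 0.50]))
```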
