Similar Documents
20 similar documents retrieved (search time: 625 ms)
1.
The significant impact of health foodservice operations on the total operating cost of the hospital sector has increased the need to improve the efficiency of these operations. Although important studies on the performance of foodservice operations have been published in various academic journals and industry reports, the findings and implications remain simple and limited in scope and methodology. This paper investigates two popular methodologies in the efficiency literature: Bayesian "stochastic frontier analysis" (SFA) and "data envelopment analysis" (DEA). The paper discusses the statistical advantages of the Bayesian SFA and compares it with an extended DEA model. The results from a sample of 101 hospital foodservice operations show the existence of inefficiency and indicate significant differences between the average efficiencies generated by the Bayesian SFA and DEA models. The efficiency rankings, however, are statistically independent of the methodology.
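For readers unfamiliar with DEA, the following minimal sketch computes input-oriented, constant-returns-to-scale scores (the classic CCR envelopment form, not the extended model used in the paper) with a linear-program solver; the data are synthetic.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency scores. X: (n, m) inputs, Y: (n, s) outputs."""
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        # Variables z = [theta, lambda_1, ..., lambda_n]; minimize theta.
        c = np.r_[1.0, np.zeros(n)]
        A_in = np.c_[-X[o], X.T]           # sum_j lam_j x_ij - theta x_io <= 0
        A_out = np.c_[np.zeros(s), -Y.T]   # -sum_j lam_j y_rj <= -y_ro
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[o]],
                      bounds=[(None, None)] + [(0, None)] * n)
        scores[o] = res.x[0]
    return scores

rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(15, 2))       # two inputs for 15 hypothetical units
Y = rng.uniform(1, 10, size=(15, 1))       # one output
print(np.round(dea_ccr_input(X, Y), 3))    # a score of 1.0 marks the frontier
```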

2.
Summary. The single-input case of the 'technical efficiency' theory of M. J. Farrell is reformulated geometrically and algebraically. Its linear programming developments as 'data envelopment analysis' are critically reviewed, as are the related techniques of 'stochastic frontier analysis'. The sense and realism of using data envelopment analysis or stochastic frontier analysis techniques, rather than some value-based method, for the assessment of police force efficiency are questioned with reference to the Spottiswoode report and related studies.

3.
To address the homogeneity of decision-making units in multi-input, multi-output evaluation systems, and drawing on the development of multi-stage DEA models, this paper combines Tobit and SFA multiple linear regression with the DEA model to propose a six-stage DEA model. For the first time, external environmental variables are separated into positive and negative ones; input-redundancy and output-shortfall slack variables are fully exploited to readjust inputs or outputs, removing the effects of environmental variables, random error, and managerial inefficiency from the efficiency evaluation and yielding pure managerial efficiency. An empirical analysis using 2009 data on commercial banks confirms that, as a continuation of multi-stage DEA models, the six-stage DEA model can serve as a reference criterion for judging the homogeneity of decision-making units in an evaluation system and helps establish a systematic, comprehensive set of evaluation indicators. The method can be extended to panel data and also offers lessons for evaluations based on non-DEA models.

4.
On Testing Equality of Distributions of Technical Efficiency Scores
The challenge of the econometric problem in production efficiency analysis is that the efficiency scores to be analyzed are unobserved. Statistical properties have recently been discovered for a type of estimator popular in the literature, known as data envelopment analysis (DEA). This opens up a wide range of possibilities for well-grounded statistical inference about the true efficiency scores from their DEA estimates. In this paper we investigate the possibility of using existing tests for the equality of two distributions in such a context. Given the statistical complications it involves, we consider several approaches to adapting the Li test and explore their performance in terms of the size and power of the test in various Monte Carlo experiments. One of these approaches shows good performance for both the size and the power of the test, thus encouraging its use in empirical studies. We also present an empirical illustration analyzing the efficiency distributions of countries in the world, following up a recent study by Kumar and Russell (2002), and report very interesting results.
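As a rough illustration of the flavour of such tests (not the paper's adapted Li test, which handles the complications specific to DEA estimates), the sketch below compares two kernel density estimates through an integrated squared difference and calibrates it by permutation; the efficiency scores are invented.

```python
import numpy as np
from scipy.stats import gaussian_kde

def isd_statistic(a, b, grid):
    # Integrated squared difference between two kernel density estimates.
    diff = gaussian_kde(a)(grid) - gaussian_kde(b)(grid)
    return np.sum(diff**2) * (grid[1] - grid[0])

def li_type_test(a, b, n_perm=999, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), 512)
    t_obs = isd_statistic(a, b, grid)
    pooled = np.r_[a, b]
    t_perm = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(pooled)          # relabel groups at random
        t_perm[i] = isd_statistic(perm[:len(a)], perm[len(a):], grid)
    p_value = (1 + np.sum(t_perm >= t_obs)) / (n_perm + 1)
    return t_obs, p_value

rng = np.random.default_rng(1)
eff_a = rng.beta(5, 2, 100)   # fake efficiency scores, group A
eff_b = rng.beta(2, 2, 100)   # fake efficiency scores, group B
print(li_type_test(eff_a, eff_b))
```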

5.
The use of large-dimensional factor models in forecasting has received much attention in the literature, the consensus being that forecasts improve on those from standard models. However, recent contributions have demonstrated that care is needed when choosing which variables to include in the model. A number of approaches to determining these variables have been put forward, but they are often based on ad hoc procedures or abandon the underlying theoretical factor model. In this article, we take a different approach by using the least absolute shrinkage and selection operator (LASSO) as a variable selection method to choose among the possible variables and thus obtain sparse loadings from which factors, or diffusion indexes, can be formed. This allows us to build a more parsimonious factor model that is better suited for forecasting than the traditional principal components (PC) approach. We provide an asymptotic analysis of the estimator and illustrate its merits empirically in a forecasting experiment based on U.S. macroeconomic data. Overall, we find improvements in forecasting accuracy compared to PC, making the estimator an important alternative to PC. Supplementary materials for this article are available online.
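A compact sketch of the idea, on simulated data rather than the authors' macroeconomic panel: screen predictors with the LASSO, form diffusion indexes from the survivors, and forecast with a low-dimensional regression.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)
T, N = 200, 60                        # time periods, candidate predictor series
X = rng.standard_normal((T, N))       # predictor panel observed at time t
# Next-period target, driven by only five of the sixty series (by construction):
y = X[:, :5] @ rng.normal(size=5) + rng.standard_normal(T)

# Step 1: the LASSO screens the predictor set (assumes it keeps >= 1 series).
keep = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)

# Step 2: principal components of the surviving series form the diffusion indexes.
factors = PCA(n_components=min(3, len(keep))).fit_transform(X[:, keep])

# Step 3: a parsimonious forecasting regression on the factors.
fit = LinearRegression().fit(factors, y)
print(f"kept {len(keep)}/{N} series; in-sample R^2 = {fit.score(factors, y):.2f}")
```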

6.
During the past few years great attention has been devoted to the analysis of disease incidence and mortality rates, with an explicit focus on modelling the geographical variation of rates observed in spatially adjacent regions. The general aim of these contributions has been both to highlight clusters of regions with homogeneous relative risk and to determine the effects of observed and unobserved risk factors related to the disease being analyzed. Most of the proposed modelling approaches can be derived as alternative specifications of the components of a general convolution model (Molliè, 1996). In this paper, we consider the semiparametric approach discussed by Schlattmann and Böhning (1993); in particular, we focus on models with an explicit spatially structured component (see Biggeri et al., 2000), and propose alternative choices for the structure of the spatial component.

7.
The objective of Taguchi's robust design method is to reduce the variation of the output from the target (the desired output) by making the performance insensitive to noise, such as manufacturing imperfections, environmental variation and deterioration. This objective has been recognized as very effective in improving product and manufacturing process design. In application, however, Taguchi's analysis approach of modelling the average loss (or signal-to-noise ratios) may lead to non-optimal solutions, efficiency loss and information loss. In addition, because his loss-modelling approach requires a special experimental format containing a cross-product of two separate arrays for control and noise factors, it leads to less flexible and unnecessarily expensive experiments. The response model approach, an alternative proposed by Welch et al., Box and Jones, Lucas and Shoemaker et al., does not have these problems, although it has problems of its own. This paper reviews and discusses the potential problems of Taguchi's modelling approach. We illustrate these problems with examples and numerical studies. We also compare the advantages and disadvantages of Taguchi's approach and the alternative approach.
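For concreteness, the signal-to-noise ratios analysed in Taguchi's loss-modelling approach are standard; a minimal implementation (with invented replications) is:

```python
import numpy as np

def sn_smaller_the_better(y):
    # SN = -10 log10( mean(y^2) ); maximize when the target is zero.
    return -10 * np.log10(np.mean(np.asarray(y) ** 2))

def sn_larger_the_better(y):
    # SN = -10 log10( mean(1/y^2) ); maximize when bigger output is better.
    return -10 * np.log10(np.mean(1.0 / np.asarray(y) ** 2))

def sn_nominal_the_best(y):
    # SN = 10 log10( mean^2 / variance ); maximize when a target value is desired.
    y = np.asarray(y)
    return 10 * np.log10(y.mean() ** 2 / y.var(ddof=1))

# Replications of one control-factor setting across the noise array:
y = [9.8, 10.3, 10.1, 9.7]
print(sn_nominal_the_best(y))   # analysed per run in a crossed-array design
```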

8.
刘小二  谢月华 《统计教育》2009,(7):32-37,41
Using SFA, this paper conducts an empirical study of regional total factor productivity (TFP) in China. It finds that TFP has risen markedly in all regions since the reform and opening-up, but that less-developed regions have grown faster than developed ones, indicating some convergence in TFP. Decomposing TFP growth shows that technical progress contributes the most, economies of scale contribute very little, and the contribution of production efficiency is even negative, suggesting that China's past economic growth emphasized technical progress while neglecting improvements in regional production efficiency.

9.
The purpose of this article is to strengthen the understanding of the relationship between a fixed-blocks and a random-blocks analysis in models that do not include interactions between treatments and blocks. Treating the block effects as random has been recommended in the literature for balanced incomplete block designs (BIBD) because it results in smaller variances of treatment contrasts. This reduction in variance is large if the block-to-block variation relative to the total variation is small. However, this analysis is also more complicated because it results in a subjective interpretation of results if the block variance component is non-positive. The probability of a non-positive variance component is large precisely in those situations where a random-blocks analysis is useful – that is, when the block-to-block variation, relative to the total variation, is small. In contrast, the analysis in which the block effects are fixed is computationally simpler and less subjective. For some BIBD the loss in power with a fixed-effects analysis is trivial; in such cases, we recommend treating the block effects as fixed. For response surface experiments designed in blocks, however, the opposite recommendation is made. When block effects are fixed, the variance of the estimated response surface is not uniquely estimated, and in practice this variance is obtained by ignoring the block effect. It is argued that it is more reasonable to treat the block effects as random than to ignore them.

10.
Measuring the X-efficiency of commercial banks is currently one of the hot topics in banking research. The commonly used DEA and SFA methods have shortcomings in model specification, while the StoNED method and the super-efficiency DEA method absorb the strengths of both DEA and SFA. This paper compares StoNED and super-efficiency DEA theoretically and empirically; the theoretical comparison covers mathematical principles, methodological background, accuracy, and scientific soundness. Taking labor, the original value of fixed assets, and deposits as input indicators and pre-tax profit as the output indicator, both methods are used to measure the X-efficiency of Chinese commercial banks over 2007-2009. The empirical results show that the StoNED method has certain advantages in measuring the X-efficiency of commercial banks.

11.
Bootstrap in functional linear regression
We consider the functional linear model with scalar response and functional explanatory variable. One of the most popular methodologies for estimating the model parameter is based on functional principal components analysis (FPCA). In the recent literature, weak convergence has been proved for a wide class of FPCA-type estimates, so asymptotic confidence sets can be built. In this paper, we propose an alternative approach for obtaining pointwise confidence intervals by means of a bootstrap procedure, for which we establish asymptotic validity. In addition, a simulation study allows us to compare the practical behaviour of asymptotic and bootstrap confidence intervals in terms of coverage rates for different sample sizes.
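As a rough illustration (not the authors' procedure), the sketch below estimates the coefficient function of a scalar-on-function regression via functional PCA and obtains pointwise bands by resampling (X_i, Y_i) pairs; the simulated curves, grid size and number of components are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 150, 50, 4                      # curves, grid points, FPCA components
t = np.linspace(0, 1, p)
X = np.cumsum(rng.standard_normal((n, p)), axis=1) / np.sqrt(p)  # rough random paths
beta_true = np.sin(2 * np.pi * t)
Y = X @ beta_true / p + 0.1 * rng.standard_normal(n)   # grid approximation of the integral

def fpca_beta(X, Y):
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:k].T                               # FPCA scores
    b = np.linalg.lstsq(scores, Y - Y.mean(), rcond=None)[0]
    return Vt[:k].T @ b * p                              # coefficient function on the grid

beta_hat = fpca_beta(X, Y)                               # point estimate of beta(t)
boot = np.array([fpca_beta(X[idx], Y[idx])
                 for idx in rng.integers(0, n, size=(500, n))])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)        # pointwise 95% band
print(float(np.mean((lo <= beta_true) & (beta_true <= hi))))  # grid coverage
```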

12.
In connection with assessing how an ongoing development in fisheries management may change fishing activity, evaluation of Total Factor Productivity (TFP) change over a period, including efficiency, scale and technology changes, is an important tool. The Malmquist index, based on distance functions evaluated with Data Envelopment Analysis (DEA), is often employed to estimate TFP changes, and DEA is generally gaining attention for evaluating efficiency and capacity in fisheries. One main criticism of DEA is that it has no statistical foundation, i.e. that it is not possible to make inference about DEA scores or related parameters. However, the bootstrap method for estimating confidence intervals of deterministic parameters can be applied to estimate confidence intervals for DEA scores. This method is applied in the present paper to assess TFP changes between 1987 and 1999 for the fleet of Danish seiners operating in the North Sea and the Skagerrak.
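For reference, the Malmquist index between periods t and t+1 that such studies estimate from DEA distance functions D is the geometric mean of two distance-function ratios, with the standard decomposition into efficiency change and technical change:

```latex
M^{t,t+1}
  = \left[\frac{D^{t}(x^{t+1},y^{t+1})}{D^{t}(x^{t},y^{t})}\cdot
          \frac{D^{t+1}(x^{t+1},y^{t+1})}{D^{t+1}(x^{t},y^{t})}\right]^{1/2}
  = \underbrace{\frac{D^{t+1}(x^{t+1},y^{t+1})}{D^{t}(x^{t},y^{t})}}_{\text{efficiency change}}
    \cdot
    \underbrace{\left[\frac{D^{t}(x^{t+1},y^{t+1})}{D^{t+1}(x^{t+1},y^{t+1})}\cdot
                      \frac{D^{t}(x^{t},y^{t})}{D^{t+1}(x^{t},y^{t})}\right]^{1/2}}_{\text{technical change}}
```

A value above one indicates TFP growth; bootstrapping the DEA distance functions, as in the paper, yields confidence intervals for the index and its components.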

13.
Experience has shown us that when data are pooled from multiple studies to create an integrated summary, an analysis based on naïvely pooled data is vulnerable to the mischief of Simpson's Paradox. Using the proportions of patients with a target adverse event (AE) as an example, we demonstrate the Paradox's effect on both the comparison and the estimation of the proportions. While meta-analytic approaches have been recommended, and are increasingly used, for comparing safety data between treatments, reporting proportions of subjects experiencing a target AE based on data from multiple studies has received little attention. In this paper, we suggest two possible approaches for reporting these cumulative proportions. In addition, we urge that regulatory guidelines on reporting such proportions be established so that risks can be communicated in a scientifically defensible and balanced manner. Copyright © 2010 John Wiley & Sons, Ltd.
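A toy calculation (with invented counts) shows the trap the paper warns about: treatment A has the lower adverse-event proportion in each study, yet the higher proportion once the data are naïvely pooled.

```python
# Study-level adverse-event counts and sample sizes for treatments A and B.
events_a, n_a = [1, 30], [100, 400]     # study 1, study 2
events_b, n_b = [6, 10], [400, 100]

for i in range(2):
    print(f"study {i+1}: A = {events_a[i]/n_a[i]:.1%}, B = {events_b[i]/n_b[i]:.1%}")
# A looks safer in both studies (1.0% vs 1.5%; 7.5% vs 10.0%), yet pooling flips it:
print(f"pooled:  A = {sum(events_a)/sum(n_a):.1%}, B = {sum(events_b)/sum(n_b):.1%}")
```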

14.
With the rapid development of computing technology, Bayesian statistics has gained increasing attention in various areas of public health. However, the full potential of Bayesian sequential methods applied to vaccine safety surveillance has not yet been realized, despite the acknowledged practical benefits and philosophical advantages of Bayesian statistics. In this paper, we describe how sequential analysis can be performed in a Bayesian paradigm in the field of vaccine safety. We compare the performance of a frequentist sequential method, specifically the Maximized Sequential Probability Ratio Test (MaxSPRT), and a Bayesian sequential method using simulations and a real-world vaccine safety example. Performance is evaluated using three metrics: false positive rate, false negative rate, and average earliest time to signal. Depending on the background rate of adverse events, the Bayesian sequential method can significantly improve the false negative rate and decrease the earliest time to signal. We consider the proposed Bayesian sequential approach a promising alternative for vaccine safety surveillance.
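As a hedged sketch of what sequential analysis in a Bayesian paradigm can look like — not the paper's method, and not MaxSPRT — the following monitors a binomial adverse-event rate with a Beta prior and signals when the posterior probability of an elevated rate crosses a threshold; the prior, baseline, threshold and counts are all illustrative.

```python
from scipy.stats import beta

p0, threshold = 0.01, 0.95        # baseline AE rate; signalling cutoff
a, b = 1.0, 99.0                  # Beta prior roughly centred on the baseline

looks = [(15, 1000), (20, 1000), (25, 1000)]  # (new events, new doses) per look
events = doses = 0
for i, (e, n) in enumerate(looks, 1):
    events, doses = events + e, doses + n
    post = beta(a + events, b + doses - events)   # conjugate Beta posterior
    p_elevated = post.sf(p0)                      # P(rate > p0 | data so far)
    print(f"look {i}: P(rate > {p0}) = {p_elevated:.3f}"
          + ("  -> signal" if p_elevated > threshold else ""))
```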

15.
Functional linear models are useful in longitudinal data analysis. They include many classical and recently proposed statistical models for longitudinal and other functional data. Recently, smoothing spline and kernel methods have been proposed for estimating their coefficient functions nonparametrically, but these methods are either computationally intensive or inefficient in performance. To overcome these drawbacks, in this paper a simple and powerful two-step alternative is proposed. In particular, the implementation of the proposed approach via local polynomial smoothing is discussed. Methods for estimating the standard deviations of the estimated coefficient functions are also proposed. Some asymptotic results for the local polynomial estimators are established. Two longitudinal data sets, one of which involves time-dependent covariates, are used to demonstrate the proposed approach. Simulation studies show that our two-step approach improves on the kernel method proposed by Hoover and co-workers in several respects, such as accuracy, computational time and visual appeal of the estimators.
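As a minimal sketch of the smoothing machinery involved — not the authors' two-step estimator itself — a local linear smoother fits a kernel-weighted straight line at each target point; bandwidth and data here are illustrative.

```python
import numpy as np

def local_linear(x, y, x0, h):
    out = np.empty_like(x0)
    for j, t in enumerate(x0):
        w = np.exp(-0.5 * ((x - t) / h) ** 2)       # Gaussian kernel weights
        X = np.c_[np.ones_like(x), x - t]            # local linear design matrix
        # Weighted least squares via row scaling by sqrt(w).
        coef = np.linalg.lstsq(X * np.sqrt(w)[:, None],
                               y * np.sqrt(w), rcond=None)[0]
        out[j] = coef[0]                             # fitted value at t
    return out

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(200)
grid = np.linspace(0, 1, 50)
print(local_linear(x, y, grid, h=0.05)[:5])
```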

16.
Summary. The problem of component choice in regression-based prediction has a long history. The main cases where important choices must be made are functional data analysis, and problems in which the explanatory variables are relatively high dimensional vectors. Indeed, principal component analysis has become the basis for methods for functional linear regression. In this context the number of components can also be interpreted as a smoothing parameter, and so the viewpoint is a little different from that for standard linear regression. However, arguments for and against conventional component choice methods are relevant to both settings and have received significant recent attention. We give a theoretical argument, which is applicable in a wide variety of settings, justifying the conventional approach. Although our result is of minimax type, it is not asymptotic in nature; it holds for each sample size. Motivated by the insight that is gained from this analysis, we give theoretical and numerical justification for cross-validation choice of the number of components that is used for prediction. In particular we show that cross-validation leads to asymptotic minimization of mean summed squared error, in settings which include functional data analysis.
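A small sketch of the cross-validation strategy studied, applied to an ordinary principal-component regression on synthetic data (the component range, data and pipeline are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n, p = 120, 30
X = rng.standard_normal((n, p)) @ rng.standard_normal((p, p)) / np.sqrt(p)  # correlated features
y = X @ rng.normal(size=p) * 0.2 + rng.standard_normal(n)

# Pick the number of components by 5-fold cross-validated prediction error.
cv_scores = []
for k in range(1, 16):
    pcr = make_pipeline(PCA(n_components=k), LinearRegression())
    cv_scores.append(cross_val_score(pcr, X, y,
                                     scoring="neg_mean_squared_error", cv=5).mean())
print("chosen number of components:", 1 + int(np.argmax(cv_scores)))
```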

17.
This paper introduces the DEA method into rough set theory, proposes an approach to multi-input, multi-output problems characterized by multiple decision attributes, and establishes a DEA-based rough set decision model. The rationality and effectiveness of the model are verified through a study of the efficiency of China's productivity levels, providing a new idea and method for scientific decision-making.

18.
We study the problem of merging homogeneous groups of pre-classified observations from a robust perspective motivated by the anti-fraud analysis of international trade data. This problem may be seen as a clustering task which exploits preliminary information on the potential clusters, available in the form of group-wise linear regressions. Robustness is then needed because of the sensitivity of likelihood-based regression methods to deviations from the postulated model. Through simulations run under different contamination scenarios, we assess the impact of outliers both on group-wise regression fitting and on the quality of the final clusters. We also compare alternative robust methods that can be adopted to detect the outliers and thus to clean the data. One major conclusion of our study is that the use of robust procedures for preliminary outlier detection is generally recommended, except perhaps when contamination is weak and the identification of cluster labels is more important than the estimation of group-specific population parameters. We also apply the methodology to find homogeneous groups of transactions in one empirical example that illustrates our motivating anti-fraud framework.

19.
An alarming report from an environmental pressure group raised concerns about childhood leukaemia and the Irish Sea. In response, this ecological study explores the hypotheses that childhood cancer rates are increased by living near the coast of Wales, especially in the north, and in particular near estuaries and mud-flats. Using Poisson regression to adjust for possible confounding variables, no evidence was found for a coastline proximity effect at the level of census wards (5 km). Moreover the rates were significantly lower near estuaries than for the rest of the coast, but there was a small but non-significant increase near mud-flats. Case–control modelling of postcoded cases living within the coastal wards using Stone's method also failed to detect any monotonic reduction in relative risk near the coastline.

20.
The Finnish common toad data of Heikkinen and Hogmander are reanalysed using an alternative fully Bayesian model that does not require a pseudolikelihood approximation and an alternative prior distribution for the true presence or absence status of toads in each 10 km×10 km square. Markov chain Monte Carlo methods are used to obtain posterior probability estimates of the square-specific presences of the common toad and these are presented as a map. The results are different from those of Heikkinen and Hogmander and we offer an explanation in terms of the prior used for square-specific presence of the toads. We suggest that our approach is more faithful to the data and avoids unnecessary confounding of effects. We demonstrate how to extend our model efficiently with square-specific covariates and illustrate this by introducing deterministic spatial changes.
