Similar Documents
20 similar documents found.
1.
This paper describes a statistical method for estimating confidence intervals for the data envelopment analysis (DEA) scores of individual organizations or other entities. The method applies statistical panel data analysis, which provides proven and powerful methodologies for diagnostic testing and for estimation of confidence intervals. DEA scores are tested for violations of the standard statistical assumptions, including contemporaneous correlation, serial correlation, heteroskedasticity and non-normality. Generalized least squares models are used to adjust for the violations that are present and to estimate valid confidence intervals within which the true efficiency of each individual decision-making unit lies. The method is illustrated with two sets of panel data, one from large US urban transit systems and the other from a group of US hospital pharmacies.
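The abstract gives no implementation, but the DEA stage is easy to illustrate. Below is a minimal sketch (Python with scipy; the function name, data layout and toy numbers are my own) of the input-oriented CCR envelopment program that produces one efficiency score per decision-making unit; the panel-data GLS adjustment the paper then applies to these scores is not shown.

    import numpy as np
    from scipy.optimize import linprog

    def dea_ccr_input(X, Y):
        # X: (n, m) inputs, Y: (n, s) outputs for n decision-making units
        n, m = X.shape
        s = Y.shape[1]
        scores = np.empty(n)
        for o in range(n):
            c = np.r_[1.0, np.zeros(n)]                 # variables [theta, lam]; minimize theta
            # inputs:  sum_j lam_j x_ij <= theta * x_io
            A_in = np.hstack([-X[o].reshape(m, 1), X.T])
            # outputs: sum_j lam_j y_rj >= y_ro
            A_out = np.hstack([np.zeros((s, 1)), -Y.T])
            res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.r_[np.zeros(m), -Y[o]],
                          bounds=[(0, None)] * (n + 1), method="highs")
            scores[o] = res.fun                         # efficiency in (0, 1]
        return scores

    X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0]])
    Y = np.ones((4, 1))                                 # toy single-output data
    print(dea_ccr_input(X, Y))

Each score is the optimal theta of a small linear program, so a panel of scores is obtained by running the function once per period and stacking the results for the GLS step.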

2.
Using a global DEA framework and the Malmquist-Luenberger index, this paper measures green total factor productivity (TFP) and its components for Chinese industrial sectors over 1998-2010. The results show that green TFP growth in industry is driven mainly by technical progress, while technical efficiency on the whole has dragged down green TFP growth. Industrial carbon productivity has risen steadily, with technical efficiency promoting carbon productivity growth more strongly than technical progress does; moreover, in heavy industry both technical efficiency and technical progress boost carbon productivity growth more than in light industry. Therefore, beyond leveraging technical progress to raise industrial carbon productivity, greater attention should be paid to improving technical efficiency as a driver of carbon productivity growth.
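For reference (not spelled out in the abstract): in the Chung-Färe-Grosskopf tradition, on which global variants build by pooling all periods into a single reference technology, the Malmquist-Luenberger index is defined through directional distance functions D that expand desirable outputs y and contract undesirable outputs b (here, carbon emissions); a value above one signals green TFP growth, and the product decomposition is the efficiency-change/technical-change split the paper reports:

    ML_t^{t+1} = \left[
        \frac{1+\vec{D}^{\,t}(x^t, y^t, b^t)}{1+\vec{D}^{\,t}(x^{t+1}, y^{t+1}, b^{t+1})}
        \cdot
        \frac{1+\vec{D}^{\,t+1}(x^t, y^t, b^t)}{1+\vec{D}^{\,t+1}(x^{t+1}, y^{t+1}, b^{t+1})}
    \right]^{1/2}
    = \mathrm{MLEC} \times \mathrm{MLTC}

    \mathrm{MLEC} = \frac{1+\vec{D}^{\,t}(x^t, y^t, b^t)}{1+\vec{D}^{\,t+1}(x^{t+1}, y^{t+1}, b^{t+1})},
    \qquad
    \mathrm{MLTC} = \left[
        \frac{1+\vec{D}^{\,t+1}(x^t, y^t, b^t)}{1+\vec{D}^{\,t}(x^t, y^t, b^t)}
        \cdot
        \frac{1+\vec{D}^{\,t+1}(x^{t+1}, y^{t+1}, b^{t+1})}{1+\vec{D}^{\,t}(x^{t+1}, y^{t+1}, b^{t+1})}
    \right]^{1/2}

MLEC > 1 means the unit moved closer to the frontier (efficiency change); MLTC > 1 means the frontier itself shifted toward more good output and less carbon (technical change).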

3.
We study methods to estimate regression and variance parameters for over-dispersed and correlated count data from highly stratified surveys. Our application involves counts of fish catches from stratified research surveys, and we propose a novel model in fisheries science to address changes in survey protocols. A challenge with this model is the large number of nuisance parameters, which leads to computational issues and biased statistical inferences. We use a computationally efficient profile generalized estimating equation method and compare it to marginal maximum likelihood (MLE) and restricted MLE (REML) methods, using REML to address the bias and inaccurate confidence intervals caused by the many nuisance parameters. The marginal MLE and REML approaches involve intractable integrals, and we use a new R package designed for estimating complex nonlinear models that may include random effects. We conclude from simulation analyses that the REML method provides the most reliable statistical inferences among the three methods investigated.
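The authors' profile GEE is not given in the abstract, but a generic GEE of the same flavour is easy to sketch. The following Python sketch (simulated data; all names and parameter values are invented) fits over-dispersed, within-stratum-correlated catch counts with statsmodels' GEE, using a negative binomial mean-variance family and an exchangeable working correlation:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_strata, per = 50, 6
    stratum = np.repeat(np.arange(n_strata), per)
    protocol = rng.integers(0, 2, n_strata * per)        # old vs. new survey protocol
    u = np.repeat(rng.normal(0, 0.5, n_strata), per)     # stratum effect -> correlation
    mu = np.exp(0.8 + 0.4 * protocol + u)
    catch = rng.negative_binomial(n=2, p=2 / (2 + mu))   # over-dispersed counts
    df = pd.DataFrame({"catch": catch, "protocol": protocol, "stratum": stratum})

    fit = smf.gee("catch ~ protocol", groups="stratum", data=df,
                  family=sm.families.NegativeBinomial(alpha=0.5),
                  cov_struct=sm.cov_struct.Exchangeable()).fit()
    print(fit.summary())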

4.
Data envelopment analysis (DEA) is the most commonly used approach for evaluating healthcare efficiency [B. Hollingsworth, The measurement of efficiency and productivity of health care delivery. Health Economics 17(10) (2008), pp. 1107–1128], but a long-standing concern is that DEA assumes that data are measured without error. This assumption is quite unlikely to hold, and DEA and other efficiency analysis techniques may yield biased efficiency estimates if measurement error is ignored [B.J. Gajewski, R. Lee, M. Bott, U. Piamjariyakul, and R.L. Taunton, On estimating the distribution of data envelopment analysis efficiency scores: an application to nursing homes’ care planning process. Journal of Applied Statistics 36(9) (2009), pp. 933–944; J. Ruggiero, Data envelopment analysis with stochastic data. Journal of the Operational Research Society 55 (2004), pp. 1008–1012]. We propose to address measurement error systematically using a Bayesian method (Bayesian DEA). We apply Bayesian DEA to data from the National Database of Nursing Quality Indicators® to estimate nursing units’ efficiency, with several external reliability studies informing the posterior distribution of the measurement error on the DEA variables. We also discuss generalizing the approach to situations where an external reliability study is not feasible.
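The core idea, pushing a measurement-error distribution through DEA scores, can be caricatured in a few lines. This is not the paper's Bayesian DEA, only a crude Monte Carlo sketch: a one-input, one-output CCR score (which reduces to a best-ratio comparison) is recomputed under lognormal input error whose scale would, in the paper's setting, come from a reliability study; every number below is invented.

    import numpy as np

    rng = np.random.default_rng(0)

    def efficiency(x, y):
        # single-input, single-output CCR score: each unit's ratio vs. the best ratio
        r = y / x
        return r / r.max()

    x_obs = np.array([4.0, 6.0, 5.0, 8.0])   # observed inputs (e.g., nursing hours)
    y_obs = np.array([2.0, 4.0, 3.0, 5.0])   # observed outputs
    sigma = 0.2                              # error scale, as if from a reliability study

    # draw "true" inputs around the observations and re-score each draw
    draws = np.stack([
        efficiency(x_obs * np.exp(rng.normal(0, sigma, x_obs.size)), y_obs)
        for _ in range(5000)
    ])
    lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
    print(np.c_[efficiency(x_obs, y_obs), lo, hi])   # point score and interval per unit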

5.
Diagnostic techniques are proposed for assessing the influence of individual cases on confidence intervals in nonlinear regression. The proposed technique applies profile t-plots to the case-deletion model. The effect of the geometry of the statistical model on the influence measures is assessed, and an algorithm for computing case-deleted confidence intervals is described. This algorithm provides a direct method for constructing a simple diagnostic measure based on the ratio of the lengths of confidence intervals. The generalization of these methods to multiresponse models is discussed.
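A stripped-down version of the length-ratio diagnostic is easy to demonstrate. The sketch below (Python/scipy, simulated data) substitutes plain Wald (linearization) intervals for the paper's profile t-plot intervals, deletes one case at a time, and flags cases whose deletion changes an interval length by more than 15%; the model, data and threshold are all invented for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    def model(x, a, b):
        return a * np.exp(-b * x)              # a simple nonlinear mean function

    rng = np.random.default_rng(3)
    x = np.linspace(0.1, 5, 25)
    y = model(x, 2.0, 0.7) + rng.normal(0, 0.1, x.size)

    def wald_halfwidth(x, y):
        p, cov = curve_fit(model, x, y, p0=[1.0, 1.0])
        return 1.96 * np.sqrt(np.diag(cov))    # linearization (Wald) half-widths

    full = wald_halfwidth(x, y)
    for i in range(x.size):
        keep = np.arange(x.size) != i
        ratio = wald_halfwidth(x[keep], y[keep]) / full
        if np.any(np.abs(ratio - 1) > 0.15):   # flag influential cases
            print(f"case {i}: CI length ratios {np.round(ratio, 2)}")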

6.
In this paper, we derive an exact formula for the covariance of two innovations computed from a spatial Gibbs point process and suggest a fast method for estimating this covariance. We show how this methodology can be used to estimate the asymptotic covariance matrix of the maximum pseudo-likelihood estimator of the parameters of a spatial Gibbs point process model. This allows us to construct asymptotic confidence intervals for the parameters. We illustrate the efficiency of our procedure in a simulation study for several classical parametric models. The procedure is implemented in the statistical software R, and it is included in spatstat, which is an R package for analyzing spatial point patterns.

7.
In two observational studies, one investigating the effects of minimum wage laws on employment and the other of the effects of exposures to lead, an estimated treatment effect's sensitivity to hidden bias is examined. The estimate uses the combined quantile averages that were introduced in 1981 by B. M. Brown as simple, efficient, robust estimates of location admitting both exact and approximate confidence intervals and significance tests. Closely related to Gastwirth's estimate and Tukey's trimean, the combined quantile average has asymptotic efficiency for normal data that is comparable with that of a 15% trimmed mean, and higher efficiency than the trimean, but it has resistance to extreme observations or breakdown comparable with that of the trimean and better than the 15% trimmed mean. Combined quantile averages provide consistent estimates of an additive treatment effect in a matched randomized experiment. Sensitivity analyses are discussed for combined quantile averages when used in a matched observational study in which treatments are not randomly assigned. In a sensitivity analysis in an observational study, subjects are assumed to differ with respect to an unobserved covariate that was not adequately controlled by the matching, so that treatments are assigned within pairs with probabilities that are unequal and unknown. The sensitivity analysis proposed here uses significance levels, point estimates and confidence intervals based on combined quantile averages and examines how these inferences change under a range of assumptions about biases due to an unobserved covariate. The procedures are applied in the studies of minimum wage laws and exposures to lead. The first example is also used to illustrate sensitivity analysis with an instrumental variable.
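Brown's combined quantile averages belong to the family of weighted averages of sample quantiles. Without reproducing Brown's exact weights (the abstract does not give them), the sketch below shows the family together with its two named relatives, Tukey's trimean and Gastwirth's estimator; the helper name and data are invented.

    import numpy as np

    def quantile_average(x, probs, weights):
        # weighted average of sample quantiles -- the family containing
        # Gastwirth's estimator, Tukey's trimean and Brown's combined averages
        return float(np.dot(weights, np.quantile(x, probs)))

    x = np.random.default_rng(7).normal(10, 2, 200)
    trimean   = quantile_average(x, [0.25, 0.5, 0.75], [0.25, 0.5, 0.25])
    gastwirth = quantile_average(x, [1/3, 0.5, 2/3], [0.3, 0.4, 0.3])
    print(trimean, gastwirth)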

8.
Differences across regional financial systems in capital supply, risk management, and return incentives mean that finance promotes technological innovation with differing efficiency. This paper measures the Malmquist productivity of financial systems in promoting technological innovation for 23 Chinese provinces and municipalities, and finds that the average efficiency with which financial development promotes technological innovation has been rising over time. The cross-province differences in this Malmquist productivity are also confirmed using a random-effects variable-intercept panel model.

9.

We consider a sieve bootstrap procedure to quantify the estimation uncertainty of long-memory parameters in stationary functional time series. We use a semiparametric local Whittle estimator of the long-memory parameter, in which the discrete Fourier transform and the periodogram are constructed from the first set of principal component scores obtained via functional principal component analysis. The sieve bootstrap procedure uses a general vector autoregressive representation of the estimated principal component scores and generates bootstrap replicates that adequately mimic the dependence structure of the underlying stationary process. For each bootstrap replicate, we compute the estimated first set of principal component scores and then apply the semiparametric local Whittle estimator to estimate the memory parameter. Taking quantiles of the estimated memory parameters across bootstrap replicates yields nonparametric confidence intervals for the long-memory parameter. As measured by the difference between empirical and nominal coverage probabilities at three significance levels, we demonstrate the advantage of the sieve bootstrap over asymptotic confidence intervals based on normality.
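The local Whittle estimator at the heart of the procedure is short enough to sketch. Below is a plain Python implementation of Robinson's semiparametric local Whittle objective for a univariate series (in the paper it would be applied to the first principal component scores of each bootstrap replicate); the bandwidth m is a user choice, and the n**0.65 rule in the demo is just a common convention, not the paper's.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def local_whittle_d(z, m):
        # semiparametric local Whittle estimate of the memory parameter d,
        # using the first m Fourier frequencies of the demeaned series
        n = z.size
        lam = 2 * np.pi * np.arange(1, m + 1) / n
        I = np.abs(np.fft.fft(z - z.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
        def R(d):  # Robinson's concentrated objective
            return np.log(np.mean(lam ** (2 * d) * I)) - 2 * d * np.mean(np.log(lam))
        return minimize_scalar(R, bounds=(-0.49, 0.99), method="bounded").x

    z = np.random.default_rng(9).standard_normal(2048)   # white noise: d should be ~0
    print(local_whittle_d(z, m=int(2048 ** 0.65)))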

10.
In this paper, based on an adaptive Type-II progressively censored sample from the generalized exponential distribution, the maximum likelihood and Bayesian estimators are derived for the unknown parameters as well as for the reliability and hazard functions, and the corresponding approximate confidence intervals are calculated. A Markov chain Monte Carlo method is applied to carry out the Bayesian estimation procedure and in turn to calculate the credible intervals. Results from simulation studies assessing the performance of the proposed methods are included, and an example using a real data set illustrates the inferential procedures developed here.
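To fix notation: the generalized exponential distribution has CDF F(x) = (1 - e^(-λx))^α. The censoring machinery is too long for a sketch, but maximum likelihood for a complete sample shows the basic computation (Python/scipy; the complete-sample simplification and all numbers are mine, not the paper's):

    import numpy as np
    from scipy.optimize import minimize

    def negloglik(params, x):
        a, lam = params
        # generalized exponential: f(x) = a*lam*exp(-lam*x)*(1-exp(-lam*x))**(a-1)
        return -(np.log(a) + np.log(lam)
                 - lam * x + (a - 1) * np.log1p(-np.exp(-lam * x))).sum()

    rng = np.random.default_rng(11)
    a_true, lam_true = 2.0, 1.5
    x = -np.log(1 - rng.uniform(size=300) ** (1 / a_true)) / lam_true  # inverse CDF draw

    fit = minimize(negloglik, x0=[1.0, 1.0], args=(x,),
                   bounds=[(1e-6, None), (1e-6, None)], method="L-BFGS-B")
    print(fit.x)   # (alpha_hat, lambda_hat)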

11.
This paper considers the statistical analysis of masked data in a series system with Burr-XII distributed components. Based on a progressively Type-I interval censored sample, the maximum likelihood estimators of the parameters are obtained using the expectation-maximization algorithm, and the associated approximate confidence intervals are derived. In addition, a Gibbs sampling procedure using importance sampling is applied to obtain the Bayesian estimates of the parameters, and a Monte Carlo method is employed to construct the credible intervals. Finally, a simulation study illustrates the performance of the methods under different removal schemes and masking probabilities.

12.
A simplified proof of the basic properties of the estimators in the Exponential Order Statistics (Jelinski-Moranda) model is given. The method of constructing confidence intervals from hypothesis tests is applied to find conservative confidence intervals for the unknown parameters in the model.
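In the Jelinski-Moranda model, the i-th inter-failure time is exponential with rate φ(N − i + 1), where N is the initial number of faults and φ the per-fault detection rate. For fixed N the MLE of φ has a closed form, so N can be profiled out numerically; the sketch below is my own illustration of that, not the paper's procedure (note the MLE of N may fail to exist, in which case the search simply hits its arbitrary cap):

    import numpy as np

    def jelinski_moranda_mle(t):
        # t[i-1] = i-th inter-failure time, exponential with rate phi*(N - i + 1)
        t = np.asarray(t, dtype=float)
        n = t.size
        i = np.arange(1, n + 1)
        best = None
        for N in range(n, 50 * n):              # profile likelihood over integer N
            phi = n / ((N - i + 1) @ t)         # closed-form MLE of phi given N
            loglik = n * np.log(phi) + np.log(N - i + 1).sum() - n
            if best is None or loglik > best[0]:
                best = (loglik, N, phi)
        return best[1], best[2]

    rng = np.random.default_rng(6)
    N_true, phi_true = 40, 0.02
    t = rng.exponential(1 / (phi_true * (N_true - np.arange(25))))  # 25 failures
    print(jelinski_moranda_mle(t))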

13.
Data envelopment analysis (DEA) is a deterministic econometric model for calculating efficiency by using data from an observed set of decision-making units (DMUs). We propose a method for calculating the distribution of efficiency scores. Our framework relies on estimating data from an unobserved set of DMUs: the model provides posterior predictive data for the unobserved DMUs, which augment the frontier in the DEA and in turn yield a posterior predictive distribution for the efficiency scores. We explore the method on a multiple-input and multiple-output DEA model. The data for the example are from a comprehensive examination of how nursing homes complete a standardized mandatory assessment of residents.

14.
In many engineering problems it is necessary to draw statistical inferences on the mean of a lognormal distribution based on a complete sample of observations. Statistical demonstration of mean time to repair (MTTR) is one example. Although optimum confidence intervals and hypothesis tests for the lognormal mean have been developed, they are difficult to use, requiring extensive tables and/or a computer. In this paper, simplified conservative methods for calculating confidence intervals or hypothesis tests for the lognormal mean are presented. Here, “conservative” refers to confidence intervals (hypothesis tests) whose infimum coverage probability (supremum probability of rejecting the null hypothesis taken over parameter values under the null hypothesis) equals the nominal level. The term “conservative” has obvious implications for confidence intervals (they are “wider” in some sense than their optimum or exact counterparts). Applying the term “conservative” to hypothesis tests should not be confusing if it is remembered that their equivalent confidence intervals are conservative. No claim of optimality is made for these conservative procedures. It is emphasized that these are direct statistical inference methods for the lognormal mean, as opposed to the already well-known methods for the parameters of the underlying normal distribution. The method currently employed in MIL-STD-471A for statistical demonstration of MTTR is analyzed and compared to the new method in terms of asymptotic relative efficiency. The new methods are also compared to the optimum methods derived by Land (1971, 1973).
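For orientation: if Y = log X is normal with mean μ and variance σ², the lognormal mean is exp(μ + σ²/2), so inference targets θ = μ + σ²/2. The sketch below implements Cox's well-known approximate interval for θ (a standard textbook method, distinct from both the paper's conservative procedures and Land's exact ones) as a baseline one could compare against:

    import numpy as np
    from scipy import stats

    def lognormal_mean_ci(x, level=0.95):
        # Cox's approximate CI for E[X] of lognormal data:
        # theta_hat = ybar + s2/2,  Var(theta_hat) ~ s2/n + s2**2 / (2*(n-1))
        y = np.log(x)
        n = y.size
        ybar, s2 = y.mean(), y.var(ddof=1)
        theta = ybar + s2 / 2
        se = np.sqrt(s2 / n + s2 ** 2 / (2 * (n - 1)))
        z = stats.norm.ppf(0.5 + level / 2)
        return np.exp(theta - z * se), np.exp(theta + z * se)

    repair_times = np.random.default_rng(5).lognormal(mean=1.0, sigma=0.6, size=40)
    print(lognormal_mean_ci(repair_times))   # e.g., for an MTTR demonstration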

15.
We consider the distribution of the turning point location of time series modeled as the sum of a deterministic trend plus random noise. If the variables are modeled by shifted exponentials, whose location parameters define the trend, we provide a formula for computing the distribution of the turning point location and consequently for estimating a confidence interval for the location. We test this formula on simulated data series having a trend with an asymmetric minimum, investigating the coverage rate as a function of a bandwidth parameter. The method is applied to estimate the confidence interval of the minimum location for two types of real time series: the RT intervals extracted from electrocardiograms recorded during exercise tests, and an economic indicator, the current account balance. We discuss the connection with stochastic ordering.

16.
Empirical Bayes approaches have often been applied to the problem of estimating small-area parameters. As a compromise between synthetic and direct survey estimators, an estimator based on an empirical Bayes procedure is not subject to the large bias that is sometimes associated with a synthetic estimator, nor is it as variable as a direct survey estimator. Although the point estimates perform very well, naïve empirical Bayes confidence intervals tend to be too short to attain the desired coverage probability, since they fail to incorporate the uncertainty which results from having to estimate the prior distribution. Several alternative methodologies for interval estimation which correct for the deficiencies associated with the naïve approach have been suggested. Laird and Louis (1987) proposed three types of bootstrap for correcting naïve empirical Bayes confidence intervals. Calling the methodology of Laird and Louis (1987) an unconditional bias-corrected naïve approach, Carlin and Gelfand (1991) suggested a modification to the Type III parametric bootstrap which corrects for bias in the naïve intervals by conditioning on the data. Here we empirically evaluate the Type II and Type III bootstrap proposed by Laird and Louis, as well as the modification suggested by Carlin and Gelfand (1991), with the objective of examining coverage properties of empirical Bayes confidence intervals for small-area proportions.

17.
Eunju Hwang, Statistics, 2017, 51(4), 844–861
This paper studies the applicability of the stationary bootstrap to realized covariations of high-frequency asynchronous financial data. The stationary bootstrap, a block bootstrap with random block lengths, is applied to estimate integrated covariations. Bootstrap versions of the realized covariance, realized regression coefficient and realized correlation coefficient are proposed, and the validity of stationary bootstrapping for them is established both for large samples and for finite samples. Consistency of the bootstrap distributions is established, which provides valid stationary bootstrap confidence intervals. These bootstrap confidence intervals do not require a consistent estimator of the nuisance parameter arising from nonsynchronous, unequally spaced sampling, whereas intervals based on normal asymptotic theory do. A Monte Carlo comparison reveals that the proposed stationary bootstrap confidence intervals have better coverage probabilities than those based on the normal approximation.
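The resampling scheme itself, Politis and Romano's stationary bootstrap, is compact enough to show. The sketch below draws one resample by concatenating blocks with uniformly random starts and geometric lengths of mean 1/p, then builds a percentile interval for a toy statistic; the realized-covariation machinery for asynchronous data is well beyond a sketch, and p, the series, and the statistic are all invented.

    import numpy as np

    def stationary_bootstrap_indices(n, p, rng):
        # one stationary-bootstrap resample: block starts are uniform,
        # block lengths are geometric with mean 1/p, indices wrap around
        idx = np.empty(n, dtype=int)
        i = 0
        while i < n:
            start = rng.integers(n)
            length = rng.geometric(p)
            for k in range(min(length, n - i)):
                idx[i + k] = (start + k) % n
            i += length
        return idx

    rng = np.random.default_rng(2)
    x = np.convolve(rng.standard_normal(620), 0.6 ** np.arange(20))[:500]  # dependent series
    reps = [x[stationary_bootstrap_indices(x.size, p=0.05, rng=rng)].var()
            for _ in range(1000)]
    print(np.percentile(reps, [2.5, 97.5]))   # bootstrap CI for a toy statistic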

18.
In this paper, we consider the simple step-stress model for a two-parameter exponential distribution when both parameters are unknown and the data are Type-II censored. It is assumed that under the two stress levels only the scale parameter changes while the location parameter remains the same. It is observed that the maximum likelihood estimators do not always exist; we obtain the maximum likelihood estimates of the unknown parameters whenever they do. We provide the exact conditional distributions of the maximum likelihood estimators of the scale parameters. Since constructing exact confidence intervals from the conditional distributions is very difficult, we propose to use the observed Fisher information matrix for this purpose and also suggest bootstrap confidence intervals. Bayes estimates and associated credible intervals are obtained using an importance sampling technique. Extensive simulations compare the performance of the different confidence and credible intervals in terms of coverage percentages and average lengths. The performance of the bootstrap confidence intervals is quite satisfactory even for small sample sizes.
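A reduced version of the bootstrap step is easy to make concrete. With Type-II censoring (observing the r smallest of n failure times) from a one-parameter exponential, the scale MLE has a closed form, and a parametric bootstrap percentile interval follows directly; the sketch below drops the paper's two-parameter step-stress structure and uses made-up sample sizes.

    import numpy as np

    def exp_typeII_mle(x_sorted, n):
        # MLE of an exponential scale from the r smallest of n observations
        r = x_sorted.size
        return (x_sorted.sum() + (n - r) * x_sorted[-1]) / r

    rng = np.random.default_rng(4)
    n, r, theta = 30, 20, 2.0
    sample = np.sort(rng.exponential(theta, n))[:r]
    theta_hat = exp_typeII_mle(sample, n)

    # parametric bootstrap percentile interval
    boot = np.array([exp_typeII_mle(np.sort(rng.exponential(theta_hat, n))[:r], n)
                     for _ in range(2000)])
    print(theta_hat, np.percentile(boot, [2.5, 97.5]))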

19.
The Solow residual method, the stochastic frontier production function method, and data envelopment analysis all fail to address the endogeneity of the production function and the time-varying nature of model parameters; the ACF method overcomes these limitations and measures total factor productivity (TFP) more accurately. This paper derives the endogeneity and time-varying parameter problems, develops a time-varying parameter estimation method for China based on the ACF model, and re-estimates TFP for 28 Chinese provinces over 1990-2017. The results confirm that the ACF method measures TFP more accurately. Nationally, growth in capital input contributes the most to economic growth, the contribution of TFP growth is gradually declining, and the contribution of labor input is relatively weak and volatile. Across regions, average TFP levels have fallen; in recent years Northeast China has had the lowest TFP growth rate, negative in every year from 2012 to 2017, and the region also suffers from serious labor outflow.

20.
Raising firms' total factor productivity (TFP) is key to achieving high-quality economic growth in China. Building on the theoretically proposed "information gain effect" and "pressure mitigation effect" of short-selling mechanisms, this paper uses a difference-in-differences design to test the relationship between short selling and TFP growth. We find that the short-selling mechanism significantly promotes firms' TFP growth: after short-sale constraints were relaxed, listed firms' rates of change in technology, technical efficiency, scale efficiency, and allocative efficiency all improved significantly. Channel tests show that the effect operates mainly through smoother information transmission, better allocation of market resources, and improved corporate governance. Moreover, the productivity-enhancing effect of short selling is stronger in private firms than in state-owned enterprises. The study links micro-level finance with macro-level growth and provides micro-level evidence for capital market reforms in support of the high-quality development strategy.
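The identification strategy is a standard difference-in-differences design around the lifting of short-sale constraints. Here is a minimal sketch of such a regression (Python/statsmodels on a simulated firm-year panel; every column name and number is hypothetical, and the real design would add controls and the staggered pilot-list timing):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(8)
    firms, years = 200, 8
    df = pd.DataFrame({
        "firm_id": np.repeat(np.arange(firms), years),
        "year": np.tile(np.arange(2010, 2010 + years), firms),
    })
    df["eligible"] = (df["firm_id"] < 80).astype(int)   # pilot-list (shortable) firms
    df["post"] = (df["year"] >= 2014).astype(int)       # after constraints were lifted
    df["tfp_growth"] = (0.02 + 0.01 * df["eligible"] * df["post"]
                        + rng.normal(0, 0.03, len(df)))

    # TFP growth on treated x post, clustering standard errors by firm
    fit = smf.ols("tfp_growth ~ eligible * post", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["firm_id"]})
    print(fit.params["eligible:post"])   # the difference-in-differences estimate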
