Similar articles
Found 20 similar articles (search time: 46 ms)
1.
Data envelopment analysis (DEA) and free disposal hull (FDH) estimators are widely used to estimate efficiency of production. Practitioners use DEA estimators far more frequently than FDH estimators, implicitly assuming that production sets are convex. Moreover, use of the constant returns to scale (CRS) version of the DEA estimator requires an assumption of CRS. Although bootstrap methods have been developed for making inference about the efficiencies of individual units, until now no methods exist for making consistent inference about differences in mean efficiency across groups of producers or for testing hypotheses about model structure such as returns to scale or convexity of the production set. We use central limit theorem results from our previous work to develop additional theoretical results permitting consistent tests of model structure and provide Monte Carlo evidence on the performance of the tests in terms of size and power. In addition, the variable returns to scale version of the DEA estimator is proved to attain the faster convergence rate of the CRS-DEA estimator under CRS. Using a sample of U.S. commercial banks, we test and reject convexity of the production set, calling into question results from numerous banking studies that have imposed convexity assumptions. Supplementary materials for this article are available online.
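For readers unfamiliar with the estimators compared here, the input-oriented FDH efficiency score has a simple closed form: a unit's score is the smallest proportional input scaling achieved by any observed unit that produces at least as much output. A minimal sketch (textbook FDH only, not the paper's inference procedure; the function name and array layout are ours):

```python
import numpy as np

def fdh_input_efficiency(X, Y, i):
    """Input-oriented FDH efficiency score of unit i.

    X: (n, p) array of inputs, Y: (n, q) array of outputs.
    The score is <= 1, with 1 meaning the unit lies on the free disposal hull.
    """
    dominators = np.all(Y >= Y[i], axis=1)          # units producing at least Y[i]
    ratios = np.max(X[dominators] / X[i], axis=1)   # worst input ratio vs. each such unit
    return float(ratios.min())
```

Because no convex combinations of observed units are formed, this estimator is exactly the convexity-free benchmark against which the DEA (convex-hull) estimator is tested.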

2.
It is customary to use two groups of indices to evaluate a diagnostic method with a binary outcome: validity indices with a standard rater (sensitivity, specificity, and positive or negative predictive values) and reliability indices (positive, negative, and overall agreements) without a standard rater. However, neither family of classic indices is chance-corrected, and this may distort the analysis of the problem (especially in comparative studies). One way of chance-correcting these indices is to use the Delta model (an alternative to the Kappa model), but this means having to use a computer program to work out the calculations. This paper gives an asymptotic version of the Delta model, thus allowing simple expressions to be obtained for the estimator of each of the above-mentioned chance-corrected indices (as well as for its standard error).
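The classic indices referred to above are all simple functions of a 2 × 2 table. As a concrete illustration, here they are alongside Cohen's kappa, the familiar chance-corrected agreement index (the abstract's Delta model is a different chance correction; kappa is shown only because it is the standard point of comparison, and the function name is ours):

```python
def diagnostic_indices(tp, fp, fn, tn):
    """Classic validity/agreement indices for a 2x2 diagnostic table,
    plus Cohen's kappa as an example of chance correction."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)               # sensitivity
    spec = tn / (tn + fp)               # specificity
    ppv = tp / (tp + fp)                # positive predictive value
    npv = tn / (tn + fn)                # negative predictive value
    po = (tp + tn) / n                  # overall (raw) agreement
    # agreement expected by chance, computed from the table margins
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (po - pe) / (1 - pe)        # chance-corrected agreement
    return {"sens": sens, "spec": spec, "ppv": ppv, "npv": npv,
            "po": po, "kappa": kappa}
```

Note how a raw agreement of 0.85 can correspond to a noticeably lower chance-corrected value, which is exactly the distortion the abstract warns about.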

3.
We consider the fitting of a Bayesian model to grouped data in which observations are assumed normally distributed around group means that are themselves normally distributed, and consider several alternatives for accommodating the possibility of heteroscedasticity within the data. We consider the case where the underlying distribution of the variances is unknown, and investigate several candidate prior distributions for those variances. In each case, the parameters of the candidate priors (the hyperparameters) are themselves given uninformative priors (hyperpriors). The most mathematically convenient model for the group variances assigns them inverse gamma priors, the inverse gamma distribution being the conjugate prior for the unknown variance of a normal population. We demonstrate that for a wide class of underlying distributions of the group variances, a model that assigns the variances an inverse gamma prior displays favorable goodness-of-fit properties relative to other candidate priors, and hence may be used as a standard for modeling such data. This allows us to take advantage of the elegant mathematical property of prior conjugacy in a wide variety of contexts without compromising model fitness. We test our findings on nine real-world, publicly available datasets from different domains, and on a wide range of artificially generated datasets.
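The conjugacy being exploited can be stated in one line: if σ² has an InverseGamma(a, b) prior and the data are normal with known mean μ, the posterior is InverseGamma(a + n/2, b + SS/2), where SS is the sum of squared deviations from μ. A minimal sketch (the parameterisation and function name are ours):

```python
def ig_posterior(a, b, data, mu):
    """Conjugate update for sigma^2 ~ InverseGamma(a, b) with normal data of
    known mean mu: the posterior is InverseGamma(a + n/2, b + SS/2)."""
    n = len(data)
    ss = sum((x - mu) ** 2 for x in data)
    return a + n / 2.0, b + ss / 2.0
```

The closed-form update is what makes this prior "mathematically convenient": no numerical integration is needed for the group variances.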

4.
This article considers K pairs of incomplete correlated 2 × 2 tables in which the measure of interest is the risk difference between marginal and conditional probabilities. A Wald-type statistic and a score-type statistic are presented to test the homogeneity hypothesis about risk differences across strata. Power and sample size formulae based on these two statistics are derived. Figures of sample size against risk difference (or marginal probability) are given. A real example is used to illustrate the proposed methods.
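The article's statistics are stratified, but the basic Wald construction they build on is the familiar z-statistic for a difference of two independent proportions; a generic sketch (not the article's statistic, and the unpooled variance is our illustrative choice):

```python
from math import sqrt

def wald_risk_difference(x1, n1, x2, n2):
    """Wald z-statistic for H0: p1 = p2, using the unpooled variance estimate."""
    p1, p2 = x1 / n1, x2 / n2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) / se
```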

5.
The correct and efficient estimation of memory parameters of a stationary Gaussian process is important, since otherwise forecasts based on the resulting time series model would be misleading. On the other hand, if the memory parameters are suspected to fall in a smaller subspace through some hypothesized restrictions, it is a hard decision whether to use estimators based on the restricted space or unrestricted estimators over the full parameter space. In this article, we propose James-Stein-type estimators of the memory parameters of a stationary Gaussian time series process, which can efficiently incorporate the hypothesized restrictions. We show theoretically that the proposed estimators are more efficient than the usual unrestricted maximum likelihood estimators over the entire parameter space.
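For intuition about the James-Stein idea being adapted here: the classical estimator shrinks an unrestricted estimate toward a restricted point (the origin in the textbook case), and dominates the unrestricted estimator everywhere when the dimension is at least three. A generic positive-part sketch (the memory-parameter version in the article is more involved):

```python
import numpy as np

def james_stein(x, sigma2=1.0):
    """Positive-part James-Stein estimator: shrink the usual estimate x of a
    p-dimensional mean (p >= 3) toward the origin, assuming known noise
    variance sigma2."""
    x = np.asarray(x, dtype=float)
    shrink = 1.0 - (x.size - 2) * sigma2 / np.dot(x, x)
    return max(shrink, 0.0) * x
```

The positive-part truncation (`max(shrink, 0.0)`) prevents the estimate from overshooting past the shrinkage target when the data lie close to it.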

6.
This article deals with the problem of estimating the finite population mean using auxiliary information in the presence of random nonresponse. Three situations are discussed, in which random nonresponse occurs in the study variate, in the auxiliary variate, or in both. The asymptotically optimum estimators (AOEs) for each strategy are identified. Expressions for the biases and mean squared errors of the proposed estimators are derived up to the first degree of approximation. The proposed estimators are compared with the usual unbiased estimator, the ratio estimator, and the product estimator in the presence of random nonresponse. Empirical studies are also carried out to show the performance of the proposed estimators over the other estimators.
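The classical ratio and product estimators used as benchmarks above have simple closed forms; a sketch assuming the population mean X̄ of the auxiliary variable is known (function names are ours):

```python
def ratio_estimator(y_bar, x_bar, X_bar):
    """Ratio estimator of the population mean of y: efficient when the study
    variate y and the auxiliary variate x are strongly positively correlated."""
    return y_bar * (X_bar / x_bar)

def product_estimator(y_bar, x_bar, X_bar):
    """Product estimator: preferred under strong negative correlation."""
    return y_bar * (x_bar / X_bar)
```

Both adjust the sample mean of y up or down according to how far the sample mean of x falls from its known population value.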

7.
This article introduces a nonparametric warping model for functional data. When the outcome of an experiment is a sample of curves, the data can be seen as realizations of a stochastic process, which takes into account the variations between the different observed curves. The aim of this work is to define a mean pattern that represents the main behaviour of the set of all the realizations. We therefore define the structural expectation of the underlying stochastic function. We then provide empirical estimators of this structural expectation and of each individual warping function. Consistency and asymptotic normality of these estimators are proved.

8.
Dynamic programming (DP) is a fast, elegant method for solving many one-dimensional optimisation problems but, unfortunately, most problems in image analysis, such as restoration and warping, are two-dimensional. We consider three generalisations of DP. The first is iterated dynamic programming (IDP), where DP is used to recursively solve each of a sequence of one-dimensional problems in turn, to find a local optimum. The second is an empirical, stochastic optimiser, implemented by adding progressively less noise to IDP. The final approach replaces DP by a more computationally intensive forward-backward Gibbs sampler with a simulated annealing cooling schedule. Results are compared with the existing pixel-by-pixel methods of iterated conditional modes (ICM) and simulated annealing in two applications: restoring a synthetic aperture radar (SAR) image, and warping a pulsed-field electrophoresis gel into alignment with a reference image. We find that IDP and its stochastic variant outperform the remaining algorithms.
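The one-dimensional subproblem that IDP solves repeatedly can be sketched as a scanline labelling DP: choose one label per column of a cost matrix, trading data cost against a smoothness penalty between adjacent columns (the array layout and the linear jump penalty are our illustrative choices, not the paper's exact model):

```python
import numpy as np

def dp_scanline(cost, penalty):
    """Exact 1-D DP: pick one label per column of `cost` (shape: labels x columns),
    minimising the data cost plus `penalty` * |label jump| between adjacent columns."""
    L, T = cost.shape
    labels = np.arange(L)
    best = cost[:, 0].astype(float)       # best cost ending in each label
    back = np.zeros((L, T), dtype=int)    # backpointers
    for t in range(1, T):
        # trans[i, j]: best cost through label i at t-1, then jump to label j at t
        trans = best[:, None] + penalty * np.abs(labels[:, None] - labels[None, :])
        back[:, t] = np.argmin(trans, axis=0)
        best = trans[back[:, t], labels] + cost[:, t]
    path = [int(np.argmin(best))]
    for t in range(T - 1, 0, -1):         # backtrack from the last column
        path.append(int(back[path[-1], t]))
    return path[::-1], float(best.min())
```

Each column-wise pass is O(L²T); IDP's trick is to sweep such exact 1-D solves alternately along rows and columns of a 2-D problem.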

9.
We review some issues related to the implications of different missing data mechanisms on statistical inference for contingency tables and consider simulation studies to compare the results obtained under such models to those where the units with missing data are disregarded. We confirm that although, in general, analyses under the correct missing at random and missing completely at random models are more efficient even for small sample sizes, there are exceptions where they may not improve the results obtained by ignoring the partially classified data. We show that under the missing not at random (MNAR) model, estimates on the boundary of the parameter space as well as lack of identifiability of the parameters of saturated models may be associated with undesirable asymptotic properties of maximum likelihood estimators and likelihood ratio tests; even in standard cases the bias of the estimators may be low only for very large samples. We also show that the probability of a boundary solution obtained under the correct MNAR model may be large even for large samples and that, consequently, we may not always conclude that a MNAR model is misspecified because the estimate is on the boundary of the parameter space.

10.
When there are several replicates available at each level combination of two factors, nonadditivity can be tested by the usual two-way ANOVA method. However, the ANOVA method cannot be used when the experiment is unreplicated (one observation per cell of the two-way classification). Several tests have been developed to address nonadditivity in unreplicated experiments, starting with Tukey's (1949) one-degree-of-freedom test for nonadditivity. Most of them assume that the interaction term has a multiplicative form, but such tests have low power if the assumed functional form is inappropriate. This has led to tests that do not assume a specific form for the interaction term. This paper proposes a new method for testing interaction that does not assume a specific form of interaction. The proposed test has the advantage over earlier tests that it can also be used for incomplete two-way tables. A simulation study is performed to evaluate the power of the proposed test and compare it with other well-known tests.
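Tukey's original one-degree-of-freedom statistic, the multiplicative-form baseline the newer tests relax, can be computed directly from a complete unreplicated table; a sketch (standard textbook form; the function name is ours):

```python
import numpy as np

def tukey_one_df(y):
    """Tukey's one-degree-of-freedom F statistic for nonadditivity
    in a complete, unreplicated two-way table y (rows x columns)."""
    y = np.asarray(y, dtype=float)
    r, c = y.shape
    gm = y.mean()
    a = y.mean(axis=1) - gm                      # row effects
    b = y.mean(axis=0) - gm                      # column effects
    ss_nonadd = (a @ y @ b) ** 2 / ((a ** 2).sum() * (b ** 2).sum())
    resid = y - gm - a[:, None] - b[None, :]     # residuals from the additive fit
    ss_resid = (resid ** 2).sum()
    df = (r - 1) * (c - 1) - 1
    return ss_nonadd / ((ss_resid - ss_nonadd) / df)
```

Under additivity the statistic is approximately F with 1 and (r-1)(c-1)-1 degrees of freedom; it has power mainly against interactions proportional to the product of row and column effects, which is the limitation motivating the form-free tests above.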

11.
Computational expressions for the exact CDF of Roy’s test statistic in MANOVA and the largest eigenvalue of a Wishart matrix are derived based upon their Pfaffian representations given in Gupta and Richards (SIAM J. Math. Anal. 16:852–858, 1985). These expressions allow computations to proceed until a prespecified degree of accuracy is achieved. For both distributions, convergence acceleration methods are used to compute CDF values which achieve reasonably fast run times for dimensions up to 50 and error degrees of freedom as large as 100. Software that implements these computations is described and has been made available on the Web.

12.
This article considers Bayesian p-values for testing independence in 2 × 2 contingency tables with cell counts observed under the two independent binomial sampling scheme and the multinomial sampling scheme. From the frequentist perspective, Fisher's p-value (p_F) is the most commonly used, but it can be conservative for small to moderate sample sizes. From the Bayesian perspective, Bayarri and Berger (2000) first proposed the partial posterior predictive p-value (p_PPOST), which avoids the double use of the data that occurs in the posterior predictive p-value (p_POST) proposed by Guttman (1967) and Rubin (1984). The subjective and objective Bayesian p-values in terms of p_POST and p_PPOST are derived under the beta prior and the (noninformative) Jeffreys prior, respectively. Numerical comparisons among p_F, p_POST, and p_PPOST reveal that p_PPOST performs much better than p_F and p_POST for small to moderate sample sizes from the frequentist perspective.
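The frequentist baseline in this comparison is Fisher's exact p-value, which can be computed from the hypergeometric null distribution with only the standard library; a sketch for a single 2 × 2 table (the two-sided convention of summing all equally-or-less-probable tables is one common choice among several):

```python
from math import comb

def fisher_exact_pvalue(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the null (hypergeometric) probabilities of every table with the same
    margins that is no more probable than the observed one."""
    r1, r2, c1 = a + b, c + d, a + c
    denom = comb(r1 + r2, c1)
    def prob(x):                      # P(top-left cell = x) under independence
        return comb(r1, x) * comb(r2, c1 - x) / denom
    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(p for p in map(prob, range(lo, hi + 1)) if p <= p_obs * (1 + 1e-12))
```

The conservatism noted above comes from the discreteness of this sum: the attainable p-values jump, so the test's size typically falls below the nominal level.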

13.
We consider the problem of estimating the stress-strength reliability when the available data are in the form of record values. One-parameter and two-parameter exponential distributions are considered. In the two-parameter case, we treat both the situation in which the location parameter is common to the two distributions and that in which the scale parameter is common. The maximum likelihood estimators and the associated confidence intervals are derived.
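In the one-parameter case the target quantity has a closed form: with independent X ~ Exp(rate λ) and Y ~ Exp(rate μ), the stress-strength reliability is R = P[X > Y] = μ/(λ + μ). Since the MLE of each rate is the reciprocal of the sample mean, the plug-in MLE reduces to a ratio of means. A sketch using full samples (the paper itself works with record values, which changes the likelihood but not this functional):

```python
def reliability_mle(x_sample, y_sample):
    """MLE of R = P(X > Y) for independent exponentials with rates lam and mu:
    R = mu / (lam + mu), which reduces to mean(x) / (mean(x) + mean(y))."""
    x_bar = sum(x_sample) / len(x_sample)
    y_bar = sum(y_sample) / len(y_sample)
    return x_bar / (x_bar + y_bar)
```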

14.
Segmentation of the mean of heteroscedastic data via cross-validation
This paper tackles the problem of detecting abrupt changes in the mean of a heteroscedastic signal by model selection, without prior knowledge of how the noise level varies. A new family of change-point detection procedures is proposed, showing that cross-validation methods can be successful in the heteroscedastic framework, whereas most existing procedures are not robust to heteroscedasticity. The robustness to heteroscedasticity of the proposed procedures is supported by an extensive simulation study, together with recent partial theoretical results. An application to Comparative Genomic Hybridization (CGH) data is provided, showing that robustness to heteroscedasticity can indeed be required for the analysis of such data.

15.
Two symmetrical fractional factorial designs are said to be combinatorially equivalent if one design can be obtained from the other by reordering the runs, relabeling the factors, and relabeling the levels of one or more factors. This article introduces the concepts of the ordered distance frequency matrix, the distance frequency vector, and the reduced distance frequency vector of a design. Necessary conditions for two designs to be combinatorially equivalent, based on these concepts, are presented. A new algorithm based on these results is proposed to check combinatorial non-equivalence of two factorial designs, and several illustrative examples are provided.
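The idea behind such necessary conditions is easy to illustrate: reordering runs, relabeling factors, and relabeling levels all preserve the multiset of pairwise Hamming distances between runs, so designs whose distance multisets differ cannot be equivalent. A simplified sketch of this invariant (the article's ordered distance frequency matrix is a finer-grained version; the function name is ours):

```python
from collections import Counter
from itertools import combinations

def distance_frequency_vector(design):
    """Multiset of pairwise Hamming distances between the runs of a design.
    It is invariant under run reordering and factor/level relabeling, so
    differing multisets certify combinatorial non-equivalence."""
    return Counter(sum(u != v for u, v in zip(r1, r2))
                   for r1, r2 in combinations(design, 2))
```

Matching multisets do not prove equivalence, which is why such conditions are only necessary and the algorithm is a non-equivalence check.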

16.
This article considers the problem of testing marginal homogeneity in a 2 × 2 contingency table. We first review some well-known conditional and unconditional p-values that have appeared in the statistical literature. We then treat the p-value as the test statistic and use the unconditional approach to obtain a modified p-value, which is shown to be valid. For a given nominal level, the rejection region of the modified p-value test contains that of the original p-value test. Some attractive properties of the modified p-value are given. In particular, under mild conditions the rejection region of the modified p-value test is shown to be the Barnard convex set described by Barnard (1947). If the one-sided null hypothesis has two nuisance parameters, we show that this result can reduce the dimension of the nuisance parameter space from two to one for computing modified p-values and sizes of tests. Numerical studies, including an illustrative example, are given. Numerical comparisons show that the sizes of the modified p-value tests are closer to the nominal level than those of the original p-value tests in many cases, especially for small to moderate sample sizes.

17.
This paper addresses the problem of unbiased estimation of P[X > Y] = θ for two independent exponentially distributed random variables X and Y. We present the (unique) unbiased estimator of θ based on a single pair of order statistics obtained from two independent random samples from the two populations. We also indicate how this estimator can be utilized to obtain unbiased estimators of θ when only a few selected order statistics are available from the two random samples, as well as when the samples are selected by an alternative procedure known as ranked set sampling. It is proved that for ranked set samples of size two, the proposed estimator is uniformly better than the conventional nonparametric unbiased estimator and, further, that a modified ranked set sampling procedure provides an unbiased estimator even better than the proposed one.

18.
Forecasting in economic data analysis is dominated by linear prediction methods, where the predicted values are calculated from a fitted linear regression model. With multiple predictor variables, multivariate nonparametric models have been proposed in the literature. However, empirical studies indicate that the prediction performance of multi-dimensional nonparametric models may be unsatisfactory. We propose a new semiparametric model average prediction (SMAP) approach to analyse panel data and investigate its prediction performance with numerical examples. Estimation of each individual covariate effect requires only univariate smoothing and thus may be more stable than previous multivariate smoothing approaches. The estimation of the optimal weight parameters incorporates the longitudinal correlation, and the asymptotic properties of the estimators are carefully studied in this paper.

19.
In this article, we present a model-based framework to estimate the educational attainments of students in latent groups defined by unobservable or only partially observed features that are likely to affect the outcome distribution and are themselves of interest. We focus on students in the first year of upper secondary school, for whom the teachers' recommendation, made at the end of the lower educational level, of the subsequent type of school is available. We use this information to define latent strata according to the compliance behavior of students, simplifying to binary data for both the counseled and the attended school (i.e., academic or technical institute). We consider a likelihood-based approach to estimate the outcome distributions in the latent groups and propose a set of plausible assumptions with respect to the problem at hand. In order to assess our method and its robustness, we simulate data resembling a real study conducted on pupils of the province of Bologna in the school year 2007/2008 to investigate their success or failure at the end of the first school year.

20.
We propose a new method for the Maximum Likelihood Estimator (MLE) of nonlinear mixed effects models when the variance matrix of Gaussian random effects has a prescribed pattern of zeros (PPZ). The method consists of coupling the recently developed Iterative Conditional Fitting (ICF) algorithm with the Expectation Maximization (EM) algorithm. It provides positive definite estimates for any sample size, and does not rely on any structural assumption concerning the PPZ. It can be easily adapted to many versions of EM.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号