Similar Documents
20 similar documents found (search time: 78 ms)
1.
In this paper, we propose a dynamic error-components model to represent the unobserved level of technology. This specification implies a well-defined common-factor dynamic model for per capita output that can be tested explicitly. The model is applied to data on aggregates of agricultural inputs and outputs for groups of countries from the OECD, Africa (AF), and Latin America (LA), as well as centrally planned countries, over a period of 31 years. We find that the proposed model fits the data better than alternative static specifications and satisfies the implied common-factor restrictions in two of the samples. The results suggest that although technological change appears to have been faster in less developed countries than in the OECD countries, it has not been fast enough to appreciably reduce the enormous differences in average technological levels that still persist between them.
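The testable implication here is a common-factor (COMFAC) restriction: in an ADL(1,1) regression y_t = rho*y_{t-1} + b0*x_t + b1*x_{t-1} + u_t, the dynamic error-components structure forces b1 = -rho*b0. Below is a minimal single-series sketch of how such a restriction can be tested with a delta-method Wald statistic; the simulated data, parameter values, and single-equation setup are illustrative assumptions, not the paper's panel specification.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
T, rho, beta = 200, 0.7, 1.5

# Simulate y_t = rho*y_{t-1} + beta*x_t - rho*beta*x_{t-1} + e_t (COMFAC holds)
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t-1] + beta * x[t] - rho * beta * x[t-1] + rng.normal(scale=0.5)

# Unrestricted ADL(1,1): regress y_t on a constant, y_{t-1}, x_t, x_{t-1}
Y = y[2:]
X = np.column_stack([np.ones(T - 2), y[1:-1], x[2:], x[1:-1]])
b, *_ = np.linalg.lstsq(X, Y, rcond=None)
e = Y - X @ b
V = np.linalg.inv(X.T @ X) * (e @ e) / (len(Y) - X.shape[1])

# Wald test of the common-factor restriction b3 + b1*b2 = 0 via the delta method
g = b[3] + b[1] * b[2]
grad = np.array([0.0, b[2], b[1], 1.0])
wald = g**2 / (grad @ V @ grad)
print(f"COMFAC Wald statistic: {wald:.3f}, p-value: {stats.chi2.sf(wald, 1):.3f}")
```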

2.
Primal panel data models of production risk are estimated, using more flexible specifications than has previously been the practice. Production risk has important implications for the analysis of technology adoption and technical efficiency, since risk-averse producers will take into account both the mean and the variance of output when ranking alternative technologies. Hence, one should estimate technical change separately for the deterministic part and the risk part of the technology.
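A standard primal formulation of production risk is the Just–Pope form y = f(x; beta) + g(x; gamma)^{1/2} * eps, in which the mean part and the risk part are estimated separately, exactly the separation the abstract argues for. A minimal two-step sketch on simulated data follows; the log-linear functional forms are illustrative assumptions, whereas the paper uses more flexible panel specifications.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
inputs = rng.uniform(1, 10, size=(n, 2))               # two inputs
X = np.column_stack([np.ones(n), np.log(inputs)])

beta, gamma = np.array([1.0, 0.4, 0.3]), np.array([-1.0, 0.5, -0.2])
y = X @ beta + np.exp(0.5 * (X @ gamma)) * rng.normal(size=n)

# Step 1: mean (deterministic) part by OLS
b_mean, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b_mean

# Step 2: risk part from log squared residuals; the intercept absorbs E[log eps^2],
# but the slopes identify which inputs are risk-increasing or risk-decreasing
b_risk, *_ = np.linalg.lstsq(X, np.log(resid**2 + 1e-12), rcond=None)
print("mean-function coefficients:", b_mean.round(2))
print("risk-function coefficients:", b_risk.round(2))
```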

3.
This paper addresses the measurement of technical efficiency of the textile, clothing, and leather (TCL) industries in Tunisia through a panel data estimation of a dynamic translog production frontier. It provides a perspective on productivity and efficiency that should be instructive to a developing economy facing substantial competitive pressure along the gradual economic liberalisation process. The importance of the TCL industries in the Tunisian manufacturing sector is a reason for obtaining more knowledge of productivity and efficiency for this key industry. Dynamics are introduced to reflect the production consequences of the adjustment costs associated with changes in factor inputs. Estimation of a dynamic error-components model is considered using the system generalized method of moments (GMM) estimator suggested by Arellano and Bover (1995, Another look at the instrumental-variable estimation of error-components models, J. Econometrics 68:29-51) and Blundell and Bond (1998a, Initial conditions and moment restrictions in dynamic panel data models, J. Econometrics 87:115-143; 1998b, GMM estimation with persistent panel data: an application to production functions, paper presented at the Eighth International Conference on Panel Data, Goteborg University). Our study evaluates the sensitivity of the results, particularly of the efficiency measures, to different specifications. Firm-specific time-invariant technical efficiency is obtained using the approach of Schmidt and Sickles (1984, Production frontiers and panel data, J. Bus. Econ. Stat. 2:367-374) after estimating the dynamic frontier. We stress the importance of allowing for lags in the adjustment of output to inputs and of controlling for time-invariant variables when estimating firm-specific efficiency. The results suggest that system GMM estimation of the dynamic specification produces the most accurate parameter estimates and technical efficiency measures. The mean efficiency score is 68%. Policy implications of the results are outlined.
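The Schmidt–Sickles step is simple to illustrate: after the frontier slope is estimated, each firm's recovered fixed effect is compared with the best firm's, and efficiency is exp(a_i - max_j a_j). The sketch below uses a static within estimator on simulated data as a stand-in; the paper instead estimates a dynamic frontier by system GMM before applying the same efficiency calculation.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
firms, years = 50, 10
df = pd.DataFrame({
    "firm": np.repeat(np.arange(firms), years),
    "x": rng.normal(size=firms * years),
})
alpha = rng.normal(scale=0.5, size=firms)              # firm effects
df["y"] = 1.0 + alpha[df["firm"]] + 0.6 * df["x"] + rng.normal(scale=0.3, size=len(df))

# Within (fixed-effects) estimate of the frontier slope
yd = df["y"] - df.groupby("firm")["y"].transform("mean")
xd = df["x"] - df.groupby("firm")["x"].transform("mean")
slope = (xd * yd).sum() / (xd * xd).sum()

# Schmidt-Sickles (1984): the best firm's effect defines the frontier
a_i = (df["y"] - slope * df["x"]).groupby(df["firm"]).mean()
efficiency = np.exp(a_i - a_i.max())                   # time-invariant, in (0, 1]
print(f"slope: {slope:.3f}, mean efficiency: {efficiency.mean():.2f}")
```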

4.
Many of the recently developed alternative econometric approaches to the construction and estimation of life-cycle consistent models using individual data can be viewed as alternative choices for conditioning variables that summarise past decisions and future anticipations. By ingenious choice of this conditioning variable and by exploitation of the duality relationships between the alternative specifications, many currently available micro-data sets can be used for the estimation of life-cycle consistent models. In reviewing the alternative approaches, their stochastic properties and implicit preference restrictions are highlighted. Indeed, empirical specifications that are parameterised in a form of direct theoretical interest often can be shown to be unnecessarily restrictive, while dual representations may provide more flexible econometric models. These results indicate the particular advantages of different types of data in retrieving life-cycle consistent preference parameters and the appropriate, most flexible, econometric approach for each type of data. A methodology for relaxing the intertemporal separability assumption is developed and the advantages and disadvantages of alternative approaches in this framework are considered.

5.
Reply
Many of the recently developed alternative econometric approaches to the construction and estimation of life-cycle consistent models using individual data can be viewed as alternative choices for conditioning variables that summarise past decisions and future anticipations. By ingenious choice of this conditioning variable and by exploitation of the duality relationships between the alternative specifications, many currently available micro-data sets can be used for the estimation of life-cycle consistent models. In reviewing the alternative approaches, their stochastic properties and implicit preference restrictions are highlighted. Indeed, empirical specifications that are parameterised in a form of direct theoretical interest often can be shown to be unnecessarily restrictive, while dual representations may provide more flexible econometric models. These results indicate the particular advantages of different types of data in retrieving life-cycle consistent preference parameters and the appropriate, most flexible, econometric approach for each type of data. A methodology for relaxing the intertemporal separability assumption is developed and the advantages and disadvantages of alternative approaches in this framework are considered.

6.
Many of the recently developed alternative econometric approaches to the construction and estimation of life-cycle consistent models using individual data can be viewed as alternative choices for conditioning variables that summarise past decisions and future anticipations. By ingenious choice of this conditioning variable and by exploitation of the duality relationships between the alternative specifications, many currently available micro-data sets can be used for the estimation of life-cycle consistent models. In reviewing the alternative approaches, their stochastic properties and implicit preference restrictions are highlighted. Indeed, empirical specifications that are parameterised in a form of direct theoretical interest often can be shown to be unnecessarily restrictive, while dual representations may provide more flexible econometric models. These results indicate the particular advantages of different types of data in retrieving life-cycle consistent preference parameters and the appropriate, most flexible, econometric approach for each type of data. A methodology for relaxing the intertemporal separability assumption is developed and the advantages and disadvantages of alternative approaches in this framework are considered.

7.
An unknown moment-determinate cumulative distribution function, or its density function, can be recovered from the corresponding moments and estimated from the empirical moments. This method of estimating an unknown density is natural in certain inverse estimation models, such as multiplicative censoring or biased sampling, where the moments of the unobserved distribution can be estimated via the transformed moments of the observed distribution. In this paper, we introduce a new nonparametric estimator of a probability density function defined on the positive real line, motivated by the above. Some fundamental properties of the proposed estimator are studied. A comparison with the traditional kernel density estimator is discussed.
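The estimator itself is not reproduced here. As a toy illustration of the underlying idea, that moments pin down a moment-determinate distribution, the sketch below recovers a density on the positive real line from its first two empirical moments under an assumed gamma form and compares it with a kernel density estimate; the gamma assumption is purely for illustration, since the paper's estimator is nonparametric.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
sample = rng.gamma(shape=2.5, scale=1.2, size=1000)    # positive-line data

# Method of moments: match the first two empirical moments of a gamma density
m1, m2 = sample.mean(), (sample**2).mean()
var = m2 - m1**2
shape_hat, scale_hat = m1**2 / var, var / m1
print(f"recovered shape = {shape_hat:.2f}, scale = {scale_hat:.2f}")

# Compare with the traditional kernel density estimator at a few points
kde = stats.gaussian_kde(sample)
for x in (1.0, 3.0, 6.0):
    moment_based = stats.gamma.pdf(x, a=shape_hat, scale=scale_hat)
    print(f"x = {x}: moment-based {moment_based:.3f}, KDE {kde(x)[0]:.3f}")
```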

8.
Standard productivity estimates contain a mixture of cost efficiency and demand conditions. I propose a method to identify the distribution of the demand shock using production data. Identification does not depend on functional-form restrictions. It is also robust to dynamic demand considerations and flexible labor. In the parametric case, the ratio of intermediate inputs to the wage bill (the input ratio) contains information about the magnitude of the demand shock. The method is tested using data from Spain that contain information on prices and demand conditions. Finally, Monte Carlo simulations are generated to evaluate the method's performance and sensitivity. Supplementary materials for this article are available online.
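As a loose illustration of the input-ratio idea (not the paper's estimator, whose exact mapping comes from the production model), the sketch below computes the ratio of intermediate inputs to the wage bill on toy firm data and treats its within-industry residual variation as a candidate demand-shock proxy; the column names and grouping are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 1000
df = pd.DataFrame({
    "industry": rng.integers(0, 5, n),
    "materials": rng.lognormal(4.0, 0.5, n),            # intermediate inputs
    "wage_bill": rng.lognormal(3.5, 0.5, n),
})

# The input ratio: intermediate inputs over the wage bill
df["log_input_ratio"] = np.log(df["materials"] / df["wage_bill"])

# Strip common industry components; in the parametric case described in the
# abstract, the remaining variation carries information about the demand shock
df["shock_proxy"] = (df["log_input_ratio"]
                     - df.groupby("industry")["log_input_ratio"].transform("mean"))
print(df["shock_proxy"].describe().round(3))
```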

9.
We propose a new flexible generalized family (NFGF) for constructing many families of distributions. The importance of the NFGF is that any baseline distribution can be chosen and it does not involve any additional parameters. Some useful statistical properties of the NFGF are determined, such as a linear representation for the family density, analytical shapes of the density and hazard rate, random variable generation, moments, and the generating function. Further, the structural properties of a special model, named the new flexible Kumaraswamy (NFKw) distribution, are investigated, and the model parameters are estimated by the maximum-likelihood method. A simulation study is carried out to assess the performance of the estimates. The usefulness of the NFKw model is demonstrated empirically by means of three real-life data sets. In fact, the two-parameter NFKw model performs better than the three-parameter transmuted-Kumaraswamy, the three-parameter exponentiated-Kumaraswamy, and the well-known two-parameter Kumaraswamy models.
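The NFKw density is the paper's contribution and is not reproduced here. As a runnable stand-in, the sketch below fits the baseline two-parameter Kumaraswamy model, one of the comparison models named in the abstract, by maximum likelihood on simulated data; the true parameter values are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
# Kumaraswamy(a, b) sampling by inverting the cdf F(x) = 1 - (1 - x**a)**b
a_true, b_true = 2.0, 3.0
u = rng.uniform(size=500)
data = (1 - (1 - u) ** (1 / b_true)) ** (1 / a_true)

def negloglik(theta):
    a, b = np.exp(theta)                                # enforce positivity
    return -np.sum(np.log(a) + np.log(b) + (a - 1) * np.log(data)
                   + (b - 1) * np.log1p(-data**a))

fit = minimize(negloglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
print("MLE (a, b):", np.exp(fit.x).round(2))
```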

10.
In this paper, we introduce a new distribution, called the alpha-skew generalized normal (ASGN), for GARCH models in modeling daily Value-at-Risk (VaR). Basic structural properties of the proposed distribution are derived, including the probability density and cumulative distribution functions, moments, and a stochastic representation. A real-data application based on the ISE-100 index is given to show the performance of the GARCH model specified under the ASGN innovation distribution relative to normal, Student's-t, skew-normal, and generalized normal models in terms of VaR accuracy. The empirical results show that the GARCH model with ASGN innovations generates the most accurate VaR forecasts at all confidence levels.
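The ASGN density is not reproduced here. The sketch below shows the surrounding VaR pipeline with one of the abstract's comparison models, a GARCH(1,1) with Student's-t innovations, using the arch package; the simulated return series is a placeholder for the ISE-100 data.

```python
import numpy as np
from scipy import stats
from arch import arch_model                             # pip install arch

rng = np.random.default_rng(6)
returns = rng.standard_t(df=6, size=1000)               # placeholder daily returns (%)

# GARCH(1,1) with Student's-t innovations as the stand-in innovation law
res = arch_model(returns, vol="GARCH", p=1, q=1, dist="t").fit(disp="off")

# One-day-ahead 1% VaR: mu + sigma * standardized-t quantile
sigma = np.sqrt(res.forecast(horizon=1).variance.values[-1, 0])
nu = res.params["nu"]
q = stats.t.ppf(0.01, nu) * np.sqrt((nu - 2) / nu)      # rescale to unit variance
print(f"1-day 1% VaR: {res.params['mu'] + sigma * q:.3f}%")
```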

11.
In this paper, we propose a new bivariate distribution, namely the bivariate alpha-skew-normal distribution. The proposed distribution is very flexible and capable of generalizing the univariate alpha-skew-normal distribution as its marginal component distributions; it features a probability density function with up to two modes and has the bivariate normal distribution as a special case. The joint moment generating function as well as the main moments are provided. Inference is based on the usual maximum-likelihood estimation approach. The asymptotic properties of the maximum-likelihood estimates are verified in a simulation study. The usefulness of the new model is illustrated on a real benchmark data set.
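The bivariate density is the paper's contribution and is not reproduced here, but its univariate marginal building block, the alpha-skew-normal of Elal-Olivero (2010), has the closed form f(x; alpha) = [(1 - alpha*x)^2 + 1] phi(x) / (2 + alpha^2). A quick sketch verifying normalization and the up-to-two-modes behavior:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def asn_pdf(x, alpha):
    """Univariate alpha-skew-normal density (Elal-Olivero, 2010)."""
    return ((1 - alpha * x) ** 2 + 1) / (2 + alpha ** 2) * stats.norm.pdf(x)

total, _ = quad(asn_pdf, -np.inf, np.inf, args=(1.5,))
print(f"integrates to {total:.6f}")                     # should be 1 for any alpha

grid = np.linspace(-4, 4, 9)
print(np.round(asn_pdf(grid, 1.5), 3))                  # two bumps for large |alpha|
```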

12.
Probabilistic sensitivity analysis of complex models: a Bayesian approach
Summary. In many areas of science and technology, mathematical models are built to simulate complex real-world phenomena. Such models are typically implemented in large computer programs and are also very complex, such that the way the model responds to changes in its inputs is not transparent. Sensitivity analysis is concerned with understanding how changes in the model inputs influence the outputs. This may be motivated simply by a wish to understand the implications of a complex model, but often arises because there is uncertainty about the true values of the inputs that should be used for a particular application. A broad range of measures has been advocated in the literature to quantify and describe the sensitivity of a model's output to variation in its inputs. In practice the most commonly used measures are those based on formulating uncertainty in the model inputs through a joint probability distribution and then analysing the induced uncertainty in the outputs, an approach known as probabilistic sensitivity analysis. We present a Bayesian framework which unifies the various tools of probabilistic sensitivity analysis. The Bayesian approach is computationally highly efficient. It allows effective sensitivity analysis to be achieved by using far smaller numbers of model runs than standard Monte Carlo methods. Furthermore, all measures of interest may be computed from a single set of runs.
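For contrast, the brute-force baseline that the Bayesian framework is designed to beat is plain Monte Carlo: sample the inputs from their joint distribution and estimate variance-based sensitivity measures directly. A minimal pick-freeze estimator of first-order Sobol' indices on the Ishigami test function follows; the function, input ranges, and sample size are illustrative, and the paper's emulator-based approach needs far fewer model runs.

```python
import numpy as np

def model(x):                                           # Ishigami test function
    return (np.sin(x[:, 0]) + 7 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

rng = np.random.default_rng(7)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
yA = model(A)

# First-order index S_i = Var(E[Y|X_i]) / Var(Y), estimated by pick-freeze:
# share column i between the two input samples and take the covariance
for i in range(d):
    ABi = B.copy()
    ABi[:, i] = A[:, i]
    S_i = np.cov(yA, model(ABi))[0, 1] / yA.var()
    print(f"S_{i+1} = {S_i:.3f}")
```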

13.
In this article, we define a new lifetime model called the Weibull–Dagum distribution. The proposed model is based on the Weibull–G class. It can also be defined by a simple transformation of the Weibull random variable. Its density function is very flexible and can be symmetrical, left-skewed, right-skewed, or reversed-J shaped. Its hazard rate can be constant, increasing, decreasing, upside-down bathtub, bathtub, or reversed-J shaped. Various structural properties are derived, including explicit expressions for the quantile function, ordinary and incomplete moments, and probability weighted moments. We also provide explicit expressions for the Rényi and q-entropies. We derive the density function of the order statistics as a mixture of Dagum densities. We use maximum likelihood to estimate the model parameters and illustrate the potential of the new model by means of a simulation study and two applications to real data. In fact, the proposed model outperforms the beta-Dagum, McDonald–Dagum, and Dagum models in these applications.
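The abstract's "simple transformation of the Weibull random variable" suggests the Weibull–G construction of Bourguignon et al. (2014), F(x) = 1 - exp{-a[G(x)/(1-G(x))]^b} with G the Dagum cdf; that reading, and all parameter values below, are assumptions for illustration. A sampler built on this transformation:

```python
import numpy as np

rng = np.random.default_rng(8)

def dagum_cdf_inv(u, a, b, p):
    """Inverse of the Dagum cdf G(x) = (1 + (x/b)**-a)**-p."""
    return b * (u ** (-1 / p) - 1) ** (-1 / a)

# Assumed Weibull-G construction: F(x) = 1 - exp(-alpha * (G/(1-G))**beta),
# so X = G^{-1}(W / (1 + W)) where P(W <= w) = 1 - exp(-alpha * w**beta)
alpha, beta, a, b, p = 1.0, 1.5, 2.0, 1.0, 0.5
w = (-np.log(rng.uniform(size=10_000)) / alpha) ** (1 / beta)
x = dagum_cdf_inv(w / (1 + w), a, b, p)
print(f"median {np.median(x):.3f}, 95th percentile {np.quantile(x, 0.95):.3f}")
```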

14.
We propose approximations to the moments, different possibilities for the limiting distributions, and approximate confidence intervals for the maximum-likelihood estimator of a given parametric function when sampling from partially non-regular log-exponential models. Our results are applicable to the two-parameter exponential, power-function, and Pareto distributions. Asymptotic confidence intervals for quartiles in several Pareto models have been simulated and compared to asymptotic intervals based on sample quartiles. Our intervals are superior, since we obtain shorter intervals with similar coverage probability; this superiority is also assessed probabilistically. Applications to real data are included.
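For the regular one-parameter Pareto case with known scale, the quantile interval is a short delta-method exercise: with MLE alpha_hat = n / sum(log(x_i/x_m)) and asymptotic variance alpha^2/n, the quantile Q(p) = x_m(1-p)^{-1/alpha} inherits a standard error through dQ/dalpha. A sketch follows, as a simplified stand-in for the paper's non-regular two-parameter analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
xm, alpha_true, n, p = 1.0, 2.5, 400, 0.75
x = xm * (rng.pareto(alpha_true, size=n) + 1)           # classical Pareto(alpha, xm)

# MLE of the shape with xm known; asymptotic variance alpha^2 / n
alpha_hat = n / np.log(x / xm).sum()
se_alpha = alpha_hat / np.sqrt(n)

# Upper quartile Q(p) = xm * (1-p)**(-1/alpha), delta-method standard error
q_hat = xm * (1 - p) ** (-1 / alpha_hat)
dq = q_hat * np.log(1 - p) / alpha_hat**2               # dQ/dalpha
se_q = abs(dq) * se_alpha
z = stats.norm.ppf(0.975)
print(f"Q({p}) = {q_hat:.3f}, 95% CI [{q_hat - z*se_q:.3f}, {q_hat + z*se_q:.3f}]")
```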

15.
In the literature, technical efficiency is measured as the ratio of observed output to potential output. Although there is no a priori theoretical reasoning for it, in the stochastic framework of measuring technical efficiency potential output has conventionally been assumed to be a neutral shift from observed output, owing solely to a larger intercept term in the frontier production function and without change in the input response coefficients. The objective of this paper is to propose and apply a method to measure technical efficiency without this assumption. Furthermore, the methodology does not require the restrictive assumption of a particular distribution for the efficiency-related error term, as has been the case until now in the stochastic production function literature. A random sample of farmers from Madurai district in Tamil Nadu, India, was used. The analysis revealed substantial variation in the farm-specific input response coefficients between farms, which means that the contributions of individual inputs to output differ from farm to farm because the methods of applying the individual inputs vary. The frontier production function, which defines the potential of a technology, is determined by the highest values of the coefficients of each individual input, which may come from one or more farms. Farm-specific frontier functions generally showed considerable potential for improving the technical performance of each input.
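A toy rendering of the construction on simulated data: estimate a separate log-linear response for every farm, form the frontier from the highest estimated coefficient on each input, and measure each farm's efficiency against that frontier at its own input levels. The data-generating choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)
farms, obs = 30, 40
coefs, xbars = [], []

# A separate log-linear production regression for every farm
for _ in range(farms):
    X = np.column_stack([np.ones(obs), rng.uniform(0, 2, size=(obs, 2))])  # log inputs
    b_true = np.array([1.0, rng.uniform(0.2, 0.6), rng.uniform(0.1, 0.4)])
    y = X @ b_true + rng.normal(scale=0.1, size=obs)
    b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    coefs.append(b_hat)
    xbars.append(X.mean(axis=0))
coefs, xbars = np.array(coefs), np.array(xbars)

# Frontier: the highest estimated coefficient on each input across farms
frontier = coefs.max(axis=0)

# Efficiency: fitted log output versus frontier output at the farm's own inputs
eff = [np.exp(b_i @ xb - frontier @ xb) for b_i, xb in zip(coefs, xbars)]
print("frontier coefficients:", frontier.round(2))
print(f"mean technical efficiency: {np.mean(eff):.2f}")
```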

16.
A new lifetime distribution is introduced based on compounding the Pareto and Poisson–Lindley distributions. Several statistical properties of the distribution are established, including the behavior of the probability density function and the failure rate function, heavy- and long-right-tailedness, moments, the Laplace transform, quantiles, order statistics, moments of residual lifetime, conditional moments, the conditional moment generating function, the stress–strength parameter, Rényi entropy, and Song's measure. We obtain maximum-likelihood estimators of the distribution parameters and investigate the asymptotic distribution of the estimators via Fisher's information matrix. Applications of the distribution using three real data sets are presented, and it is shown that the distribution fits better than other related distributions in practical uses.
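The abstract does not spell out the compounding scheme, so the sampler below assumes one common construction: X = min(Y_1, ..., Y_N) with Pareto lifetimes and a zero-truncated Poisson–Lindley count N, generated through the Lindley mixture representation. Both the construction and the parameter values are assumptions; the paper's definition may differ.

```python
import numpy as np

rng = np.random.default_rng(11)

def lindley(theta):
    """One Lindley(theta) draw: Exp(theta) w.p. theta/(1+theta), else Gamma(2, theta)."""
    if rng.uniform() < theta / (1 + theta):
        return rng.exponential(1 / theta)
    return rng.gamma(2, 1 / theta)

def compound_min(theta, alpha, xm, size):
    out = np.empty(size)
    for i in range(size):
        n = 0
        while n == 0:                                   # zero-truncated Poisson-Lindley
            n = rng.poisson(lindley(theta))
        out[i] = (xm * (rng.pareto(alpha, n) + 1)).min()
    return out

x = compound_min(theta=1.0, alpha=2.0, xm=1.0, size=2000)
print(f"mean {x.mean():.3f}, 95th percentile {np.quantile(x, 0.95):.3f}")
```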

17.
A general methodology is developed for approximating the distribution of a random variable on the basis of its exact moments. More specifically, a probability density function is approximated by the product of a suitable weight function and a linear combination of its associated orthogonal polynomials. A technique for generating a sequence of orthogonal polynomials from a given weight function is provided, and the coefficients of the linear combination are explicitly expressed in terms of the moments of the target distribution. On applying this approach to several test statistics, we observed that the resulting percentiles are consistently in excellent agreement with the tabulated values. It is also explained that the same moment-matching technique can be utilized to produce density estimates on the basis of the sample moments obtained from a given set of observations. An example involving a well-known data set illustrates the density estimation methodology advocated herein.
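With a uniform weight function on [-1, 1], the associated orthogonal polynomials are the Legendre polynomials and the expansion coefficients reduce to c_k = (2k+1)/2 * E[P_k(X)], with the exact moments replaceable by sample averages for density estimation. A minimal sketch; the rescaled beta sample and the truncation order K are illustrative choices.

```python
import numpy as np
from numpy.polynomial import legendre
from scipy import stats

rng = np.random.default_rng(12)
sample = 2 * rng.beta(2, 5, size=5000) - 1              # data rescaled to [-1, 1]

# Orthogonal-series estimate f(x) ~ sum_k c_k P_k(x) with moment-based coefficients
K = 8
coefs = [(2 * k + 1) / 2 * legendre.legval(sample, [0] * k + [1]).mean()
         for k in range(K + 1)]

for x in (-0.6, -0.2, 0.4):
    f_hat = legendre.legval(x, coefs)
    f_true = 0.5 * stats.beta.pdf((x + 1) / 2, 2, 5)    # density of 2*Beta(2,5) - 1
    print(f"x = {x:+.1f}: estimate {f_hat:.3f}, true {f_true:.3f}")
```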

18.
A distinction between Fisher's implied data-generating process for Monte Carlo cycles and the more general Markov process leads to non-parametric tests for duration dependence. Tests are based on the method of moments, Tauchen's generalized method of moments (GMM) procedure, and a statistic whose null-distribution probability limit is zero. Using finite-sample critical values obtained by Monte Carlo methods, our test results are remarkably consistent. The null distribution of the GMM test statistic for samples of the size considered is distinctly non-normal, so that asymptotic critical values give erroneous results. The tests are applied to UK business cycle data for 1854-1992. There is evidence of duration dependence in expansions but not in contractions.
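The GMM machinery is not reproduced here, but the null being tested is easy to state: with no duration dependence the hazard of a spell ending is constant in its age (durations are geometric). A simple nonparametric check on simulated expansion durations, regressing the empirical hazard on spell age; the data-generating hazard and cell-size cutoff are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)

def draw_duration():
    """One spell with rising hazard h(t) = 1 - exp(-0.02 t): positive dependence."""
    t = 1
    while rng.uniform() > 1 - np.exp(-0.02 * t):
        t += 1
    return t

durations = np.array([draw_duration() for _ in range(200)])

# Empirical hazard at age t: spells ending at t over spells surviving to t
ages = np.arange(1, durations.max() + 1)
at_risk = np.array([(durations >= t).sum() for t in ages])
ended = np.array([(durations == t).sum() for t in ages])
keep = at_risk >= 20                                    # drop thin cells
slope, _, _, pval, _ = stats.linregress(ages[keep], ended[keep] / at_risk[keep])
print(f"hazard-vs-age slope {slope:.4f} (p = {pval:.3f}); positive => dependence")
```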

19.
A new lifetime model that extends the Fréchet distribution, called the generalized transmuted Fréchet distribution, is proposed and studied. Various structural properties, including ordinary and incomplete moments, the generating function, residual and reversed residual life functions, order statistics, and probability weighted moments, are derived. Two characterization theorems are presented. The maximum-likelihood method is used to estimate the model parameters. The flexibility of the new distribution is illustrated using a real data set. It can serve as an alternative to other lifetime models available in the literature for modeling positive real data in many areas.
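The paper's generalized transmuted class is not reproduced here; as a runnable stand-in, the sketch below samples from the basic quadratic transmuted Fréchet, F(x) = (1+lam)G(x) - lam*G(x)^2 with Fréchet baseline G(x) = exp(-(sigma/x)^alpha), by inverting the quadratic rank transmutation.

```python
import numpy as np

rng = np.random.default_rng(14)

def transmuted_frechet_rvs(alpha, sigma, lam, size):
    """Sample F(x) = (1+lam)G - lam*G**2, G(x) = exp(-(sigma/x)**alpha), |lam| <= 1."""
    u = rng.uniform(size=size)
    if lam == 0:
        g = u
    else:                                               # smaller root lies in [0, 1]
        g = ((1 + lam) - np.sqrt((1 + lam) ** 2 - 4 * lam * u)) / (2 * lam)
    return sigma * (-np.log(g)) ** (-1 / alpha)

x = transmuted_frechet_rvs(alpha=3.0, sigma=1.0, lam=0.7, size=10_000)
print(f"median {np.median(x):.3f}, 99th percentile {np.quantile(x, 0.99):.3f}")
```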

20.
This paper gives an interpretation for the scale parameter of a Dirichlet process when the aim is to estimate a linear functional of an unknown probability distribution. We provide exact first and second posterior moments for such functionals under both informative and noninformative prior specifications. The noninformative case provides a normal approximation to the Bayesian bootstrap.
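The noninformative limit is concrete: with the Dirichlet-process scale sent to zero, the posterior of a linear functional is the Bayesian bootstrap, i.e. Dirichlet(1, ..., 1) weights on the observed points. A minimal sketch for the mean functional, with the normal approximation alongside; the lognormal toy data are an assumption.

```python
import numpy as np

rng = np.random.default_rng(15)
data = rng.lognormal(mean=0.0, sigma=1.0, size=200)

# Bayesian bootstrap: posterior draws of the mean under Dirichlet(1,...,1) weights
draws = 5000
w = rng.dirichlet(np.ones(len(data)), size=draws)       # (draws, n) weight vectors
posterior_means = w @ data

print(f"posterior mean {posterior_means.mean():.3f}, sd {posterior_means.std():.3f}")
print(f"normal-approximation sd {data.std(ddof=1) / np.sqrt(len(data)):.3f}")
```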
