Similar Literature
20 similar documents found.
1.
In this paper we analyze the relationship between the distribution of firm size and stochastic processes of growth. Three main models have been suggested, by Gibrat (1931), Kalecki (1945) and Champernowne (1973). The first two lead to the lognormal distribution and the last to the Pareto distribution. We fitted the lognormal and Pareto distributions to two Italian sectors: ICT and mechanical. For ICT we found that the lognormal distribution must be rejected, while the Pareto distribution fits the largest 30% of companies reasonably well. For the mechanical sector we cannot reject the lognormal distribution. Furthermore, we perform some experiments to corroborate the theoretical models. By means of transition matrices we found that ICT shows features very close to Gibrat's and Champernowne's models, while Kalecki's model fits the mechanical sector strongly. JEL Classification: L00, L25, D21. Correspondence to: Luigi Grossi. This research was partially supported by grants from the Ministero dell'Istruzione, dell'Università e della Ricerca (MIUR). Despite being the result of joint work, Sects. 1, 4, 8 and 10 should be attributed to Ganugi, Sects. 3, 6 and 7 to Grossi, and Sects. 2, 5 and 9 to Crosato.
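A minimal sketch of the kind of distributional fitting the abstract describes, using scipy on simulated firm-size data; the sample, the 30% tail cutoff and all parameters are illustrative, not the paper's:

```python
# Sketch: fit lognormal and Pareto distributions to hypothetical firm sizes
# and compare fits with Kolmogorov-Smirnov tests, mirroring the paper's approach.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sizes = rng.lognormal(mean=3.0, sigma=1.2, size=1000)  # placeholder firm sizes

# Lognormal fit over the whole sample.
shape, loc, scale = stats.lognorm.fit(sizes, floc=0)
ks_logn = stats.kstest(sizes, 'lognorm', args=(shape, loc, scale))

# Pareto fit restricted to the upper 30% of firms, mirroring the finding
# that Pareto describes only the largest companies.
tail = np.sort(sizes)[int(0.7 * len(sizes)):]
b, loc_p, scale_p = stats.pareto.fit(tail, floc=0)
ks_par = stats.kstest(tail, 'pareto', args=(b, loc_p, scale_p))

print(f"lognormal KS p-value: {ks_logn.pvalue:.3f}")
print(f"Pareto (upper tail) KS p-value: {ks_par.pvalue:.3f}")
```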

2.
In the present article, we discuss the regression of a point on the surface of a unit sphere in d dimensions given a point on the surface of a unit sphere in p dimensions, where p may not be equal to d. Point projection is added to the rotation and linear transformation to form the regression link function. The identifiability of the model is proved. Parameter estimation in this setup is then discussed. Simulation studies and data analyses are presented to illustrate the model.

3.
When combining estimates of a common parameter (of dimension d ≥ 1) from independent data sets, as in stratified analyses and meta-analyses, a weighted average with weights 'proportional' to inverse variance matrices is shown to have a minimal variance matrix (a standard fact when d = 1), minimal in the sense that all convex combinations of the coordinates of the combined estimate have minimal variances. Minimum variance for the estimation of a single coordinate of the parameter can therefore be achieved by joint estimation of all coordinates using matrix weights. Moreover, if each estimate is asymptotically efficient within its own data set, then this optimally weighted average, with consistently estimated weights, is shown to be asymptotically efficient in the combined data set, avoiding the need to merge the data sets and estimate the parameter in question afresh. This holds whatever additional non-common nuisance parameters may be in the models for the various data sets. A special case of this appeared in Fisher (1925, Theory of statistical estimation, Proc. Cambridge Philos. Soc. 22, 700–725): optimal weights are 'proportional' to information matrices, and he argued that sample information should be used as weights rather than expected information, to maintain second-order efficiency of maximum likelihood. A number of special cases have appeared in the literature; we review several of them and give additional special cases, including stratified regression analysis (proportional hazards, logistic or linear), combination of independent ROC curves, and meta-analysis. A test for homogeneity of the parameter across the data sets is also given.
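The matrix-weighted average described here has a closed form. A small sketch, with made-up inputs, of combining two independent estimates using weights 'proportional' to inverse variance matrices:

```python
# Sketch: combine independent estimates theta_i with covariance matrices V_i
# via the optimal matrix weights (sum V_i^{-1})^{-1} V_i^{-1}.
import numpy as np

def combine(estimates, covariances):
    """Return (sum V_i^{-1})^{-1} * sum V_i^{-1} theta_i and its variance matrix."""
    info = sum(np.linalg.inv(V) for V in covariances)        # total information
    weighted = sum(np.linalg.inv(V) @ t
                   for t, V in zip(estimates, covariances))
    V_comb = np.linalg.inv(info)     # variance matrix of the combined estimate
    return V_comb @ weighted, V_comb

# Two hypothetical 2-dimensional estimates of the same parameter.
t1, V1 = np.array([1.0, 2.0]), np.array([[0.5, 0.1], [0.1, 0.4]])
t2, V2 = np.array([1.2, 1.8]), np.array([[0.3, 0.0], [0.0, 0.6]])
theta, V = combine([t1, t2], [V1, V2])
print(theta, np.diag(V))
```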

4.
The general aim of manifold estimation is reconstructing, by statistical methods, an m-dimensional compact manifold S in ℝ^d (with m ≤ d) or estimating some relevant quantities related to the geometric properties of S. Focussing on the cases d = 2 and d = 3, with m = d or m = d − 1, we will assume that the data are given by the distances to S from points randomly chosen on a band surrounding S. The aim of this paper is to show that, if S belongs to a wide class of compact sets (which we call sets with polynomial volume), the proposed statistical model leads to a relatively simple parametric formulation. In this setup, standard methodologies (method of moments, maximum likelihood) can be used to estimate some interesting geometric parameters, including curvatures and the Euler characteristic. We will particularly focus on the estimation of the (d − 1)-dimensional boundary measure (in Minkowski's sense) of S. It turns out, however, that the estimation problem is not straightforward since the standard estimators show a remarkably pathological behaviour: while they are consistent and asymptotically normal, their expectations are infinite. The theoretical and practical consequences of this fact are discussed in some detail.

5.
ABSTRACT

Here we introduce a new class of distributions, the generalized hyper-Poisson distribution of order k (GHPD(k)), as an order-k version of the alpha-generalized hyper-Poisson distribution of Kumar and Nair (2014, Statistica 74(2):183–198). Several properties of the GHPD(k) are derived, and estimation of the parameters of the distribution by the method of mixed moments and the method of maximum likelihood is discussed. Certain testing procedures are suggested, and all these estimation and testing procedures are illustrated with the help of a real-life data set. Further, a simulation study is conducted.

6.
The power-law process (PLP) is a two-parameter model widely used for modeling repairable-system reliability. Results on exact point estimation for both parameters, as well as exact interval estimation for the shape parameter, are well known. In this paper, we investigate interval estimation for the scale parameter. Asymptotic confidence intervals are derived using the Fisher information matrix and theoretical results of Cocozza-Thivent (1997, Processus Stochastiques et Fiabilité des Systèmes, Springer-Verlag, Berlin). The accuracy of the interval estimation for finite samples is studied by simulation methods.
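For context, the PLP point estimates mentioned above have well-known closed forms under time truncation. A sketch on illustrative failure times; the paper's asymptotic interval for the scale parameter is not reproduced here:

```python
# Point estimation for the power-law process with intensity
# lambda(t) = (beta/theta) * (t/theta)^(beta - 1), observed up to time T
# (time truncation). The closed-form MLEs below are standard results.
import numpy as np

def plp_mle(times, T):
    """MLEs of shape beta and scale theta for a time-truncated PLP."""
    times = np.asarray(times, dtype=float)
    n = len(times)
    beta_hat = n / np.sum(np.log(T / times))
    theta_hat = T / n ** (1.0 / beta_hat)
    return beta_hat, theta_hat

# Illustrative failure times of a repairable system observed up to T = 100.
failures = [5.2, 17.8, 31.0, 46.5, 58.1, 71.9, 83.3, 95.0]
print(plp_mle(failures, T=100.0))
```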

7.
8.
In this article, we analyze the performance of five estimation methods for the long-memory parameter d. The goal of our article is to construct a wavelet estimate of the fractional differencing parameter in nonstationary long-memory processes that dominates the well-known estimate of Shimotsu and Phillips (2005, Exact local Whittle estimation of fractional integration, Annals of Statistics 33:1890–1933). The simulation results show that the wavelet estimation method of Lee (2005, Estimating memory parameter in the US inflation rate, Economics Letters 87:207–210) with several tapering techniques performs better in most cases of nonstationary long memory. The comparison is based on the empirical root mean squared error of each estimate.

9.
In this paper we introduce a procedure to compute prediction intervals for FARIMA(p, d, q) processes, taking into account the variability due to model identification and parameter estimation. To this aim, a particular bootstrap technique is developed. The performance of the prediction intervals is then assessed and compared to that of standard bootstrap percentile intervals. The methods are applied to the time series of Nile River annual minima.
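A schematic of the bootstrap prediction-interval idea, illustrated on a plain AR(1) rather than a full FARIMA(p, d, q): refitting the model on resampled residuals lets the interval reflect parameter-estimation variability. All inputs are simulated:

```python
# Sketch: residual-bootstrap one-step-ahead prediction interval for an AR(1).
import numpy as np

rng = np.random.default_rng(1)
x = [0.0]
for _ in range(199):                       # simulate an AR(1) series
    x.append(0.6 * x[-1] + rng.normal())
x = np.array(x)

def ar1_fit(series):
    """Least-squares AR(1) coefficient and residuals."""
    phi = np.dot(series[:-1], series[1:]) / np.dot(series[:-1], series[:-1])
    return phi, series[1:] - phi * series[:-1]

phi_hat, resid = ar1_fit(x)
forecasts = []
for _ in range(999):                       # residual bootstrap
    xb = [x[0]]
    for e in rng.choice(resid, size=len(x) - 1, replace=True):
        xb.append(phi_hat * xb[-1] + e)
    phi_b, resid_b = ar1_fit(np.array(xb))  # refit: estimation variability
    forecasts.append(phi_b * x[-1] + rng.choice(resid_b))

lo, hi = np.percentile(forecasts, [2.5, 97.5])
print(f"95% bootstrap prediction interval: [{lo:.2f}, {hi:.2f}]")
```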

10.
This paper reports an extensive Monte Carlo simulation study based on six estimators of the long-memory fractional parameter when the time series is non-stationary, i.e., an ARFIMA(p, d, q) process with d > 0.5. Parametric and semiparametric methods are compared. In addition, the effect of the parameter estimation is investigated for small and large sample sizes and non-Gaussian error innovations. The methodology is applied to a well-known data set, the so-called UK short interest rates.

11.
Although the t-type estimator is a kind of M-estimator with scale optimization, it has some advantages over the M-estimator. In this article, we first propose a t-type joint generalized linear model as a robust extension to the classical joint generalized linear models for modeling data containing extreme or outlying observations. Next, we develop a t-type pseudo-likelihood (TPL) approach, which can be viewed as a robust version to the existing pseudo-likelihood (PL) approach. To determine which variables significantly affect the variance of the response variable, we then propose a unified penalized maximum TPL method to simultaneously select significant variables for the mean and dispersion models in t-type joint generalized linear models. Thus, the proposed variable selection method can simultaneously perform parameter estimation and variable selection in the mean and dispersion models. With appropriate selection of the tuning parameters, we establish the consistency and the oracle property of the regularized estimators. Simulation studies are conducted to illustrate the proposed methods.

12.
Abstract. A common practice in obtaining an efficient semiparametric estimate is through iteratively maximizing the (penalized) full log-likelihood w.r.t. its Euclidean parameter and functional nuisance parameter. A rigorous theoretical study of this semiparametric iterative estimation approach is the main purpose of this study. We first show that the grid search algorithm produces an initial estimate with the proper convergence rate. Our second contribution is to provide a formula for calculating the minimal number of iterations k* needed to produce an efficient estimate. We discover that (i) k* depends on the convergence rates of the initial estimate and the nuisance functional estimate, and (ii) k* iterations are also sufficient for recovering the estimation sparsity in high-dimensional data. The last contribution is the novel construction of the efficient estimator, which does not require knowing the explicit expression of the efficient score function. The above general conclusions apply to semiparametric models estimated under various regularizations, for example, kernel or penalized estimation. As far as we are aware, this study provides a first general theoretical justification for the 'one-/two-step iteration' phenomena observed in the semiparametric literature.

13.
The EM algorithm is a popular method for parameter estimation in situations where the data can be viewed as being incomplete. As each E-step visits each data point on a given iteration, the EM algorithm requires considerable computation time in its application to large data sets. Two versions, the incremental EM (IEM) algorithm and a sparse version of the EM algorithm, were proposed recently by Neal, R.M. and Hinton, G.E. (in Jordan, M.I. (Ed.), Learning in Graphical Models, Kluwer, Dordrecht, 1998, pp. 355–368) to reduce the computational cost of applying the EM algorithm. With the IEM algorithm, the available n observations are divided into B (B ≤ n) blocks and the E-step is implemented for only a block of observations at a time before the next M-step is performed. With the sparse version of the EM algorithm for the fitting of mixture models, only those posterior probabilities of component membership of the mixture that are above a specified threshold are updated; the remaining component-posterior probabilities are held fixed. In this paper, simulations are performed to assess the relative performances of the IEM algorithm with various numbers of blocks and the standard EM algorithm. In particular, we propose a simple rule for choosing the number of blocks with the IEM algorithm. For the IEM algorithm in the extreme case of one observation per block, we provide efficient updating formulas, which avoid the direct calculation of the inverses and determinants of the component-covariance matrices. Moreover, a sparse version of the IEM algorithm (SPIEM) is formulated by combining the sparse E-step of the EM algorithm and the partial E-step of the IEM algorithm. This SPIEM algorithm can further reduce the computation time of the IEM algorithm.
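A minimal sketch of the IEM idea for a two-component univariate Gaussian mixture: each partial E-step refreshes the sufficient statistics of a single block, and an M-step follows immediately. The block count, data and starting values are illustrative:

```python
# Sketch: incremental EM (IEM) for a 2-component 1-D Gaussian mixture.
import numpy as np

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])
B = 10                                     # number of blocks
blocks = np.array_split(data, B)

mu = np.array([-1.0, 1.0]); sigma = np.array([1.0, 1.0]); pi = np.array([0.5, 0.5])

def responsibilities(x):
    """Posterior component-membership probabilities under current parameters."""
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
    return dens / dens.sum(axis=1, keepdims=True)

# Per-block sufficient statistics: (sum r, sum r*x, sum r*x^2) per component.
block_stats = [None] * B
totals = np.zeros((3, 2))
for sweep in range(20):
    for b, xb in enumerate(blocks):
        r = responsibilities(xb)           # partial E-step on one block only
        new = np.stack([r.sum(0), (r * xb[:, None]).sum(0),
                        (r * xb[:, None] ** 2).sum(0)])
        if block_stats[b] is not None:
            totals -= block_stats[b]       # swap out this block's old contribution
        block_stats[b] = new
        totals += new
        n_k, s1, s2 = totals               # M-step after each partial E-step
        pi, mu = n_k / n_k.sum(), s1 / n_k
        sigma = np.sqrt(np.maximum(s2 / n_k - mu ** 2, 1e-8))

print(pi, mu, sigma)
```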

14.
Abstract. In the context of multivariate mean regression, we propose a new method to measure and estimate the inadequacy of a given parametric model. The measure is essentially the missed fraction of variation after fitting the best possible parametric model from a given family. The proposed approach is based on the minimum L2-distance between the true but unknown regression curve and a given model. The estimation method is based on local polynomial averaging of residuals, with a polynomial degree that increases with the dimension d of the covariate. For any d ≥ 1 and under some weak assumptions, we give a Bahadur-type representation of the estimator, from which root-n consistency and asymptotic normality are derived for strongly mixing variables. We report the outcomes of a simulation study that aims at checking the finite-sample properties of these techniques. We present the analysis of a data set on ultrasonic calibration for illustration.

15.
We propose a general Markov-regime-switching estimation of both the long-memory parameter d and the mean of a time series. We employ a Viterbi algorithm that combines the Viterbi procedures in two-state Markov-switching parameter estimation. It is well known that a mean break and long memory in a time series can easily be confused with each other in most cases. Thus, we aim at observing the deviation and interaction of the mean and d estimates in different cases. A Monte Carlo experiment reveals that the finite-sample performance of the proposed algorithm for a simple mixture model of Markov-switching mean and d changes with respect to the fractional integration parameters and the mean values of the two regimes.

16.
We propose a new method for risk-analytic benchmark dose (BMD) estimation in a dose-response setting when the responses are measured on a continuous scale. For each dose level d, the observation X(d) is assumed to follow a normal distribution, X(d) ~ N(μ(d), σ²). No specific parametric form is imposed upon the mean μ(d), however. Instead, nonparametric maximum likelihood estimates of μ(d) and σ are obtained under a monotonicity constraint on μ(d). For purposes of quantitative risk assessment, a 'hybrid' form of risk function is defined for any dose d as R(d) = P[X(d) < c], where c > 0 is a constant independent of d. The BMD is then determined by inverting the additional risk function R_A(d) = R(d) − R(0) at some specified value of benchmark response. Asymptotic theory for the point estimators is derived, and a finite-sample study is conducted using both real and simulated data. When a large number of doses are available, we propose an adaptive grouping method for estimating the BMD, which is shown to have optimal mean integrated squared error under appropriate designs.
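A hedged sketch of the 'hybrid' BMD calculation on simulated data, with sklearn's isotonic regression standing in for the paper's monotone-constrained MLE; the cutoff c, benchmark response and dose design are arbitrary choices for illustration:

```python
# Sketch: estimate a monotone mean mu(d), define R(d) = P[X(d) < c] under
# normality, and invert the additional risk R_A(d) = R(d) - R(0) at a BMR.
import numpy as np
from scipy.stats import norm
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(3)
doses = np.repeat(np.linspace(0, 10, 11), 20)
x = 10 - 0.5 * doses + rng.normal(0, 1.0, doses.size)   # responses fall with dose

iso = IsotonicRegression(increasing=False)              # monotone mean fit
mu_hat = iso.fit(doses, x)
sigma_hat = np.std(x - mu_hat.predict(doses), ddof=1)

c = 7.0                                                 # adverse-effect cutoff
def add_risk(d):
    """Additional risk R(d) - R(0) under the normal 'hybrid' risk function."""
    R = norm.cdf((c - mu_hat.predict([d])[0]) / sigma_hat)
    R0 = norm.cdf((c - mu_hat.predict([0])[0]) / sigma_hat)
    return R - R0

grid = np.linspace(0, 10, 1001)
bmr = 0.10                                              # benchmark response
bmd = grid[np.argmax([add_risk(d) >= bmr for d in grid])]
print(f"estimated BMD at BMR = 10%: {bmd:.2f}")
```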

17.
Kernel density classification and boosting: an L2 analysis
Kernel density estimation is a commonly used approach to classification. However, most of the theoretical results for kernel methods apply to estimation per se and not necessarily to classification. In this paper we show that, when estimating the difference between two densities, the optimal smoothing parameters are increasing functions of the sample size of the complementary group, and we provide a small simulation study which examines the relative performance of kernel density methods when the final goal is classification. A relative newcomer to the classification portfolio is boosting, and this paper proposes an algorithm for boosting kernel density classifiers. We note that boosting is closely linked to a previously proposed method of bias reduction in kernel density estimation and indicate how it will enjoy similar properties for classification. We show that boosting kernel classifiers reduces the bias whilst only slightly increasing the variance, with an overall reduction in error. Numerical examples and simulations are used to illustrate the findings, and we also suggest further areas of research.
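A minimal sketch of kernel density classification with scipy's Gaussian KDE; note that scipy's default bandwidth rule is used here, not the classification-optimal smoothing the paper analyzes, and the data are simulated:

```python
# Sketch: assign a point to the class with the larger prior-weighted
# kernel density estimate.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
class0 = rng.normal(0.0, 1.0, 150)
class1 = rng.normal(2.0, 1.0, 100)

kde0, kde1 = gaussian_kde(class0), gaussian_kde(class1)
prior0 = len(class0) / (len(class0) + len(class1))
prior1 = 1 - prior0

def classify(x):
    """Label 1 where the prior-weighted class-1 density dominates."""
    return (prior1 * kde1(x) > prior0 * kde0(x)).astype(int)

test = np.array([-1.0, 0.5, 1.0, 2.5])
print(classify(test))
```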

18.
This paper is concerned with parameter estimation in the linear regression model. To overcome the multicollinearity problem, a new class of estimator, the principal component two-parameter (PCTP) estimator, is proposed. The superiority of the new estimator over the principal component regression (PCR) estimator, the r–k class estimator, the r–d class estimator and the two-parameter estimator proposed by Yang and Chang (Commun. Stat. Theory Methods 39:923–934, 2010) is discussed with respect to the mean squared error matrix (MSEM) criterion. Furthermore, we give a numerical example and a simulation study to illustrate some of the theoretical results.
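The abstract does not give the PCTP formula, so the sketch below shows only the PCR baseline it is compared against, beta_PCR = T_r (T_r' X'X T_r)^{-1} T_r' X'y with T_r the leading eigenvectors of X'X; the collinear design is synthetic:

```python
# Sketch: principal component regression (PCR) estimator on a collinear design.
import numpy as np

def pcr_estimator(X, y, r):
    """PCR estimate using the r leading principal components of X'X."""
    XtX = X.T @ X
    eigvals, eigvecs = np.linalg.eigh(XtX)        # eigenvalues ascending
    T_r = eigvecs[:, -r:]                         # keep the r largest
    return T_r @ np.linalg.solve(T_r.T @ XtX @ T_r, T_r.T @ X.T @ y)

rng = np.random.default_rng(6)
z = rng.normal(size=80)
X = np.column_stack([z, z + 0.02 * rng.normal(size=80), rng.normal(size=80)])
y = X @ np.array([1.0, 1.0, 0.5]) + rng.normal(size=80)
print(pcr_estimator(X, y, r=2))
```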

19.
The maximum likelihood (ML) method is used to estimate the unknown gamma regression (GR) coefficients. In the presence of multicollinearity, the variance of the ML estimator becomes inflated and inference based on the ML method may not be trustworthy. To combat multicollinearity, the Liu estimator has been used, in which estimation of the Liu parameter d is an important problem. A few estimation methods are available in the literature for estimating this parameter. This study considers some of these methods and also proposes some new methods for estimating d. A Monte Carlo simulation study has been conducted to assess the performance of the proposed methods, with the mean squared error (MSE) as the performance criterion. Based on the Monte Carlo simulation and application results, the Liu estimator is shown to be always superior to the ML estimator, and a recommendation is given as to which Liu parameter estimator should be used with the Liu estimator for the GR model.
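For reference, a sketch of the Liu estimator in its familiar linear-model form, beta_d = (X'X + I)^{-1}(X'X + dI) beta_OLS; the paper applies the analogous ML-based version in the gamma regression setting, and the collinear data here are synthetic:

```python
# Sketch: Liu estimator for a linear model with a multicollinear design.
import numpy as np

def liu_estimator(X, y, d):
    """beta_d = (X'X + I)^{-1} (X'X + d I) beta_OLS; d = 1 recovers OLS."""
    XtX = X.T @ X
    I = np.eye(X.shape[1])
    beta_ols = np.linalg.solve(XtX, X.T @ y)
    return np.linalg.solve(XtX + I, (XtX + d * I) @ beta_ols)

rng = np.random.default_rng(5)
z = rng.normal(size=100)
X = np.column_stack([z, z + 0.01 * rng.normal(size=100), rng.normal(size=100)])
y = X @ np.array([1.0, 1.0, 0.5]) + rng.normal(size=100)

for d in (0.0, 0.5, 1.0):      # smaller d means more shrinkage
    print(d, liu_estimator(X, y, d))
```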

20.
In this paper we revisit the classical problem of interval estimation for a single binomial parameter and for the log odds ratio of two binomial parameters. We examine the confidence intervals provided by two versions of the modified log-likelihood root: the usual Barndorff-Nielsen r* and a Bayesian version of the r* test statistic.
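A minimal sketch of inverting the unmodified likelihood root r for a single binomial parameter; the r* versions studied in the paper add higher-order correction terms that are not reproduced here:

```python
# Sketch: confidence interval from the signed likelihood root
# r(p) = sign(p_hat - p) * sqrt(2 * (loglik(p_hat) - loglik(p))).
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def loglik(p, x, n):
    """Binomial log-likelihood (kernel), requires 0 < x < n."""
    return x * np.log(p) + (n - x) * np.log(1 - p)

def likelihood_root(p, x, n):
    p_hat = x / n
    return np.sign(p_hat - p) * np.sqrt(2 * (loglik(p_hat, x, n) - loglik(p, x, n)))

def lr_interval(x, n, level=0.95):
    """Invert |r(p)| <= z by root-finding on each side of the MLE."""
    z = norm.ppf(0.5 + level / 2)
    p_hat = x / n
    lower = brentq(lambda p: likelihood_root(p, x, n) - z, 1e-8, p_hat - 1e-8)
    upper = brentq(lambda p: likelihood_root(p, x, n) + z, p_hat + 1e-8, 1 - 1e-8)
    return lower, upper

print(lr_interval(x=7, n=20))
```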
