Similar Documents
20 similar documents found
1.
A Brief Review of Green National Accounting Methods
Zhu Qigui, Statistical Research, 2001, 18(10): 11-20
Since the System of National Accounts (SNA) came into being, it has been continually revised and improved so that it can meet the needs of macroeconomic regulation and management. For this reason, the United Nations SNA underwent two major revisions in less than fifty years. Since the 1980s, to meet the needs of sustainable development, international organizations, national governments and scholars have strongly advocated "greening" the national accounting system, that is, establishing a new model of national accounting that serves the strategy of sustainable development: a green national accounting system. For example, Agenda 21 proposes that "systems of integrated environmental and economic accounting should be established in all countries", and China's Agenda 21 requires "establishing physical and value accounts of natural resources in order to support an integrated system of national economic accounting, supplementing or improving the existing system of national accounts". This paper discusses how green national accounting methods took shape and briefly comments on the main methods, with the aim of clarifying the direction in which green national accounting should develop.

2.
The class of Lagrangian probability distributions (LPD), given by the expansion of a probability generating function f(t) under the transformation u = t/g(t), where g(t) is also a p.g.f., has been substantially widened by removing the restriction that the defining functions g(t) and f(t) be probability generating functions. The class of modified power series distributions defined by Gupta (1974) has been shown to be a sub-class of the wider class of LPDs.

3.
In a series of crop variety trials, ‘test varieties’ are compared with one another and with a ‘reference’ variety that is included in all trials. The series is typically analyzed with a linear mixed model and the method of generalized least squares. Usually, the estimates of the expected differences between the test varieties and the reference variety are presented. When the series is incomplete, i.e. when all test varieties were not included in all trials, the method of generalized least squares may give estimates of expected differences to the reference variety that do not appear to accord with observed differences. The present paper draws attention to this phenomenon and explores the recurrent idea of comparing test varieties indirectly through the use of the reference. A new ‘reference treatment method’ was specified and compared with the method of generalized least squares when applied to a five-year series of 85 spring wheat trials. The reference treatment method provided estimates of differences to the reference variety that agreed with observed differences, but was considerably less efficient than the method of generalized least squares.

4.
We examine the properties of the distribution of the number of drug abusers in a previously free community, assuming that initiators enter the community during a ‘latent’ period in which they randomly infect other members of the community, and that during the subsequent ‘control’ period the spread of abuse follows a linear birth and death process. The form of the distribution is shown to be unaltered by a series of steady changes in the birth and death parameters. The distribution can be regarded as the convolution of three distributions: pseudo-binomial or binomial, negative binomial, and Pólya-Aeppli. Special cases include the Laguerre series distribution.
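The control-period dynamics can be sketched with a small Gillespie-type simulation of a linear birth and death process. The rates, initial size, and horizon below are hypothetical; the only check made is that the simulated mean tracks the standard result E[N(t)] = n0·exp((λ-μ)t).

```python
import numpy as np

rng = np.random.default_rng(11)

def birth_death_path(n0, lam, mu, t_end):
    """Gillespie simulation of a linear birth-and-death process N(t).

    Birth rate lam*n, death rate mu*n; returns the population at time t_end.
    """
    n, t = n0, 0.0
    while n > 0:
        t += rng.exponential(1.0 / ((lam + mu) * n))
        if t > t_end:
            break
        n += 1 if rng.random() < lam / (lam + mu) else -1
    return n

# Hypothetical parameters: the simulated mean should track n0*exp((lam-mu)*t).
lam, mu, n0, t_end = 1.2, 1.0, 20, 1.0
sample = [birth_death_path(n0, lam, mu, t_end) for _ in range(5000)]
print(round(float(np.mean(sample)), 1))  # theory: 20*exp(0.2) ~= 24.4
```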

5.
A strictly stationary time series is modelled directly once the variables' realizations fit into a table: no distributional knowledge is required beyond the prior discretization. A multiplicative model combining random ‘auto-regressive’ and ‘moving-average’ parts is considered for the serial dependence. A causal version is obtained from a multi-sequence of unobserved series that serve as differences, and differences of differences, from the main building block; a condition that secures an exponential rate of convergence for its expected random coefficients is presented. For the remainder, writing the conditional probability as a function of past conditional probabilities is within reach: when the moving-average segment is present in the original equation, what could be a long process of elimination by mathematical arguments concludes with a new derivation that does not support a simplistic linear dependence on the lagged probability values.

6.
One of the most famous controversies in the history of statistics concerns the number of degrees of freedom of a chi-square test. In 1900, Pearson introduced the chi-square test for goodness of fit without recognizing that the degrees of freedom depend on the number of parameters estimated under the null hypothesis. Yule tried an ‘experimental’ approach, checking the results with a short series of ‘experiments’. Nowadays, an open-source language such as R makes it possible to check the adequacy of Pearson's arguments empirically. Pearson paid crucial attention to the relative error, which he stated ‘will, as a rule, be small’. However, this point is fallacious, as simulations carried out with R make evident. The simulations concentrate on 2×2 tables, where the fallacy of the argument is most evident; moreover, this is one of the designs most frequently employed in applied research.
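The paper's simulations are in R; the same empirical check can be sketched in Python. Under independence with estimated margins, the mean of Pearson's statistic over simulated 2×2 tables should sit near 1, the value implied by (r-1)(c-1) = 1 degree of freedom, not near Pearson's original rc-1 = 3. The marginal probabilities below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def pearson_chi2_2x2(table):
    """Pearson chi-square statistic for a 2x2 table with margins estimated."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    return float(((table - expected) ** 2 / expected).sum())

def simulate_mean_statistic(n_obs=200, reps=20000):
    """Mean of the statistic over simulated independent 2x2 tables.

    Fisher's (r-1)(c-1) = 1 degree of freedom implies a mean near 1;
    Pearson's original rc - 1 = 3 would imply a mean near 3.
    """
    p_row, p_col = 0.4, 0.6  # hypothetical marginal probabilities
    probs = np.outer([p_row, 1 - p_row], [p_col, 1 - p_col]).ravel()
    stats = []
    for _ in range(reps):
        table = rng.multinomial(n_obs, probs).reshape(2, 2)
        if (table.sum(axis=0) > 0).all() and (table.sum(axis=1) > 0).all():
            stats.append(pearson_chi2_2x2(table))
    return float(np.mean(stats))

print(round(simulate_mean_statistic(), 2))  # close to 1, supporting the 1-df correction
```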

7.
Many time series encountered in practice are nonstationary, and instead are often generated from a process with a unit root. Because of the process of data collection or the practice of researchers, time series used in analysis and modeling are frequently obtained through temporal aggregation. As a result, the series used in testing for a unit root are often time series aggregates. In this paper, we study the effects of the use of aggregate time series on the Dickey–Fuller test for a unit root. We start by deriving a proper model for the aggregate series. Based on this model, we find the limiting distributions of the test statistics and illustrate how the tests are affected by the use of aggregate time series. The results show that those distributions shift to the right and that this effect increases with the order of aggregation, causing a strong impact both on the empirical significance level and on the power of the test. To correct this problem, we present tables of critical points appropriate for the tests based on aggregate time series and demonstrate their adequacy. Examples illustrate the conclusions of our analysis.
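The ingredients of the setup can be sketched in a few lines of NumPy: the no-constant Dickey-Fuller t-statistic and non-overlapping temporal aggregation. This is only a sketch under hypothetical series lengths and aggregation order; reproducing the rightward shift of the limiting distribution would require a full Monte Carlo over many replications.

```python
import numpy as np

rng = np.random.default_rng(42)

def df_t_statistic(y):
    """Dickey-Fuller t-statistic (no constant): regress diff(y) on y lagged once."""
    y = np.asarray(y, dtype=float)
    x, dy = y[:-1], np.diff(y)
    coef = (x @ dy) / (x @ x)
    resid = dy - coef * x
    s2 = (resid @ resid) / (len(dy) - 1)
    return coef / np.sqrt(s2 / (x @ x))

def aggregate(y, m):
    """Non-overlapping temporal aggregation: sum consecutive blocks of length m."""
    y = np.asarray(y, dtype=float)
    n = (len(y) // m) * m
    return y[:n].reshape(-1, m).sum(axis=1)

# A random walk keeps its unit root under aggregation, so both statistics stay
# in the non-rejecting range, far from the large negative values produced by
# stationary series.
walk = np.cumsum(rng.standard_normal(1200))
print(round(df_t_statistic(walk), 2), round(df_t_statistic(aggregate(walk, 4)), 2))
```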

8.
Time series analysis is a vast research area in statistics and econometrics. In a previous review, the author identified 15 key areas of research interest in time series analysis. The aim of the present review, however, is not to cover a wide range of loosely related topics; its key strategy is instead to begin with a core issue, the ‘curse of dimensionality’ in nonparametric time series analysis, and to explore, in a metaphorical domino-effect fashion, other closely related areas of semiparametric methods in nonlinear time series analysis.

9.
This paper is concerned with the problem of estimation of total weight in a chemical balance weighing design. Some results regarding the estimability of the total weight are obtained and a lower bound for the variance of the estimated total weight is given. Finally, a series of weighing designs estimating the total weight in an ‘optimum’ manner is reported.

10.
Let Y be distributed symmetrically about Xβ. Natural generalizations of odd location statistics, say T(Y), and even location-free statistics, say W(Y), that were used by Hogg (1960, 1967) are introduced. We show that T(Y) is distributed symmetrically about β, and thus E[T(Y)] = β, and that each element of T(Y) is uncorrelated with each element of W(Y). Applications of this result are made to R-estimators, and the result is extended to a multivariate linear model situation.

11.
This paper considers spurious regression between two different types of seasonal time series: one with a deterministic seasonal component and the other with a stochastic seasonal component. When one type of seasonal time series is regressed on the other type and they are independent of each other, the phenomenon of spurious regression occurs. Asymptotic properties of the regression coefficient estimator and the associated regression ‘t-ratio’ are studied. A Monte Carlo simulation study is conducted to confirm the phenomenon of spurious regression and spurious rejection of seasonal cointegration for finite samples.
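One direction of this setup can be sketched with a small Monte Carlo: regress a stochastic seasonal series (a seasonal random walk) on an independent deterministic seasonal pattern and count how often a conventional 5% t-test rejects. The pattern values, sample size, and replication count below are hypothetical; the point is only that the rejection rate far exceeds the nominal level.

```python
import numpy as np

rng = np.random.default_rng(7)

def ols_slope_t(y, x):
    """Conventional OLS t-ratio for the slope in y = a + b*x + e."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = (resid @ resid) / (len(y) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

def seasonal_random_walk(n, period=4):
    """Stochastic seasonal series: y_t = y_{t-period} + e_t."""
    y = rng.standard_normal(n)
    for t in range(period, n):
        y[t] += y[t - period]
    return y

def spurious_rejection_rate(n=200, reps=500, crit=1.96):
    """Share of nominal 5% t-tests that 'find' a relation between independent series."""
    pattern = np.tile([1.0, -1.0, 0.5, -0.5], n // 4)  # deterministic seasonal regressor
    hits = sum(abs(ols_slope_t(seasonal_random_walk(n), pattern)) > crit
               for _ in range(reps))
    return hits / reps

rate = spurious_rejection_rate()
print(rate)  # far above the nominal 0.05
```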

12.
Automated public health surveillance of disease counts for rapid outbreak, epidemic or bioterrorism detection using conventional control chart methods can be hampered by over-dispersion and background (‘in-control’) mean counts that vary over time. An adaptive cumulative sum (CUSUM) plan is developed for signalling unusually high incidence in prospectively monitored time series of over-dispersed daily disease counts with a non-homogeneous mean. Negative binomial transitional regression is used to prospectively model background counts and provide ‘one-step-ahead’ forecasts of the next day's count. A CUSUM plan then accumulates departures of observed counts from an offset (reference value) that is dynamically updated using the modelled forecasts. The CUSUM signals whenever the accumulated departures exceed a threshold. The amount of memory of past observations retained by the CUSUM plan is determined by the offset value; a smaller offset retains more memory and is efficient at detecting smaller shifts. Our approach optimises early outbreak detection by dynamically adjusting the offset value. We demonstrate the practical application of the ‘optimal’ CUSUM plans to daily counts of laboratory-notified influenza and Ross River virus diagnoses, with particular emphasis on the steady-state situation (i.e. changes that occur after the CUSUM statistic has run through several in-control counts).  相似文献
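The accumulation step can be sketched as follows. The surveillance series, the offset k, and the threshold h below are all hypothetical, and the known time-varying mean stands in for the paper's negative binomial one-step-ahead forecasts; this is a sketch of the forecast-referenced CUSUM idea, not of the full adaptive-offset optimisation.

```python
import numpy as np

rng = np.random.default_rng(3)

def adaptive_cusum(counts, forecasts, k, h):
    """One-sided CUSUM with a dynamically updated reference value.

    Departures of each count from (forecast + k) are accumulated; the plan
    signals whenever the cumulative sum exceeds the threshold h, then restarts.
    """
    c, alarms = 0.0, []
    for t, (y, f) in enumerate(zip(counts, forecasts)):
        c = max(0.0, c + (y - f) - k)
        if c > h:
            alarms.append(t)
            c = 0.0
    return alarms

# Hypothetical surveillance series: a weekly-varying background mean, with a
# sustained outbreak from day 80 onward.
days = np.arange(150)
background = 10.0 + 3.0 * np.sin(2 * np.pi * days / 7)
counts = rng.poisson(background).astype(float)
counts[80:] += 10
alarms = adaptive_cusum(counts, background, k=4.0, h=15.0)
print(alarms[0] if alarms else None)  # first alarm shortly after day 80
```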

13.
In this paper, we improve upon the Carlin and Chib Markov chain Monte Carlo algorithm that searches in model and parameter spaces. Our proposed algorithm attempts non-uniformly chosen ‘local’ moves in the model space and avoids some pitfalls of other existing algorithms. In a series of examples with linear and logistic regression, we report evidence that our proposed algorithm performs better than the existing algorithms.

14.
A sequence of empirical Bayes estimators is given for estimating a distribution function. It is shown that (i) this sequence is asymptotically optimal relative to a Gamma process prior, (ii) the overall expected loss approaches the minimum Bayes risk at a rate of n, and (iii) the estimators form a sequence of proper distribution functions. Finally, the numerical example presented by Susarla and Van Ryzin (Ann. Statist., 6, 1978) and reworked by Phadia (Ann. Statist., 1, 1980, to appear) has been analyzed, and the results are compared with the numerical results of Phadia.

15.
In this paper, the consequences of treating the household ‘food share’ distribution as a welfare measure, in isolation from the joint distribution of itemized budget shares, are examined through the unconditional and conditional distributions of ‘food share’, both parametrically and nonparametrically. The parametric framework uses Dirichlet and Beta distributions, while the nonparametric framework uses kernel smoothing methods. The analysis, in a three-commodity setup (‘food’, ‘durables’, ‘others’), based on household-level rural data for West Bengal, India, for the year 2009-2010, shows significant under-representation of households by the conventional unconditional ‘food share’ distribution in the higher range of food budget shares, which corresponds to the lower end of the income profile. This may have serious consequences for welfare measurement.

16.
A note on the correlation structure of transformed Gaussian random fields
Transformed Gaussian random fields can be used to model continuous time series and spatial data when the Gaussian assumption is not appropriate. The main features of these random fields are specified in a transformed scale, while for modelling and parameter interpretation it is useful to establish connections between these features and those of the random field in the original scale. This paper provides evidence that for many ‘normalizing’ transformations the correlation function of a transformed Gaussian random field is not very dependent on the transformation that is used. Hence many commonly used transformations of correlated data have little effect on the original correlation structure. The property is shown to hold for some kinds of transformed Gaussian random fields, and a statistical explanation based on the concept of parameter orthogonality is provided. The property is also illustrated using two spatial datasets and several ‘normalizing’ transformations. Some consequences of this property for modelling and inference are also discussed.
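The property is easy to see in one dimension. The sketch below, under hypothetical parameters, exponentiates a Gaussian AR(1) series (giving a lognormal-type field) and compares lag-one correlations; with a modest marginal variance the correlation barely moves, consistent with the lognormal closed form (exp(ρσ²)-1)/(exp(σ²)-1).

```python
import numpy as np

rng = np.random.default_rng(5)

def ar1(n, phi, sigma):
    """Gaussian AR(1) path with stationary marginal standard deviation sigma."""
    e = sigma * np.sqrt(1.0 - phi**2) * rng.standard_normal(n)
    x = np.empty(n)
    x[0] = e[0]
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

def lag1_corr(x):
    """Empirical lag-one autocorrelation."""
    return float(np.corrcoef(x[:-1], x[1:])[0, 1])

# Exponentiating transforms the marginal to lognormal, yet with sigma = 0.3 the
# lag-one correlation of exp(x) stays close to that of x (0.690 vs 0.700 in
# the closed form).
x = ar1(100_000, phi=0.7, sigma=0.3)
print(round(lag1_corr(x), 3), round(lag1_corr(np.exp(x)), 3))
```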

17.
This paper proposes a generalized quasi-likelihood (GQL) function for estimating the vector of regression and over-dispersion effects for the respective series in the bivariate integer-valued autoregressive process of order 1 (BINAR(1)) with Negative Binomial (NB) marginals. The auto-covariance function in the proposed GQL is computed using some ‘robust’ working structures. As for the BINAR(1) process, the inter-relation between the series is induced mainly by the correlated NB innovations that are subject to different levels of over-dispersion. The performance of the GQL approach is tested via Monte Carlo simulations under different combinations of over-dispersion together with low and high serial- and cross-correlation parameters. The model is also applied to analyse a real-life series of day and night accidents in Mauritius.

18.
This article analyzes the importance of exact aggregation restrictions and the modeling of demographic effects in Jorgenson, Lau, and Stoker's (1982) model of aggregate consumer behavior. These issues are examined at the household level, using Canadian cross-sectional microdata. Exact aggregation restrictions and some implicit restrictions on household demographic effects are strongly rejected by our data. These results do not preclude pooling aggregate time series data with cross-sectional microdata to estimate a model of aggregate consumer behavior. They do suggest, however, an alternative basis for the aggregate model.

19.
In the 96 Anglo-Australian Test matches played from the end of World War II to the final Test of the 1980 series, there were over 3000 dismissals of batsmen across both countries. A breakdown reveals interesting differences between the type of dismissal and (a) the quality of the batsman, (b) the location of the match, and (c) the team batting. In particular, the ‘leg before wicket’ (lbw) dismissals appear to provide controversial data.

20.
Graphical analysis of complex brain networks is a fundamental area of modern neuroscience. Functional connectivity is important since many neurological and psychiatric disorders, including schizophrenia, are described as ‘dys-connectivity’ syndromes. Using electroencephalogram time series collected on each of a group of 15 individuals with a common medical diagnosis of positive syndrome schizophrenia we seek to build a single, representative, brain functional connectivity group graph. Disparity/distance measures between spectral matrices are identified and used to define the normalized graph Laplacian enabling clustering of the spectral matrices for detecting ‘outlying’ individuals. Two such individuals are identified. For each remaining individual, we derive a test for each edge in the connectivity graph based on average estimated partial coherence over frequencies, and associated p-values are found. For each edge these are used in a multiple hypothesis test across individuals and the proportion rejecting the hypothesis of no edge is used to construct a connectivity group graph. This study provides a framework for integrating results on multiple individuals into a single overall connectivity structure.
