Similar Literature
20 similar documents found
1.
The balance of aggregate social supply and demand is the concentrated expression of overall balance in the national economy. Statistical analysis of the aggregate supply-demand balance effect means using relevant indicators to make concrete judgments and quantitative analyses of the impact that supply-demand imbalance has on the national economy. On the basis of clarifying the theoretical link between the aggregate supply-demand gap rate and the inflation rate, this paper conducts an empirical analysis of the relationship between China's aggregate supply-demand balance, inflation, and economic growth, and reveals some regular patterns.
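The relationship this abstract examines, between the aggregate supply-demand gap rate and inflation, can be sketched numerically. The yearly figures and the Pearson-correlation check below are purely hypothetical illustrations, not the paper's data or method; the gap rate is taken here as (demand − supply) / supply.

```python
def gap_rate(supply, demand):
    # "Aggregate supply-demand gap rate": relative excess of demand over supply.
    return (demand - supply) / supply

def pearson_r(xs, ys):
    # Plain Pearson correlation, no libraries needed.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical yearly series (invented for illustration only).
supply = [100, 105, 112, 118, 126]
demand = [103, 110, 119, 123, 135]
inflation = [0.030, 0.048, 0.062, 0.045, 0.070]

gaps = [gap_rate(s, d) for s, d in zip(supply, demand)]
r = pearson_r(gaps, inflation)
```

With these invented series the gap rate and inflation move together, so `r` comes out strongly positive; an empirical analysis like the paper's would apply the same comparison to actual national-accounts data.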

2.
Given that China lacks statistical data on rural unemployment rates, this paper designs a monitoring system that reflects both aggregate and structural imbalances between labor supply and demand. The system includes indicators reflecting imbalances in labor resources and their utilization as well as indicators reflecting supply-demand conditions in the labor market. In constructing the composite index, each indicator is standardized before weighting so that all indicators share the same dimension, making the indicators in the monitoring system comparable both across indicators and over time. The study finds that China's labor supply-demand imbalance improved over 2001-2007; that employment of new urban labor-force entrants became increasingly difficult; that the degrees of labor-market supply-demand imbalance and structural imbalance declined; and that China's aggregate labor supply-demand imbalance is more severe than its structural imbalance.
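The standardize-then-weight construction of the composite index described above can be sketched as follows. The indicator names, values, and weights are hypothetical stand-ins, not the paper's actual monitoring system; the point is only that z-score standardization removes each indicator's dimension before the weighted sum is formed.

```python
def zscore(series):
    # Standardize to mean 0, (population) standard deviation 1,
    # so differently scaled indicators become comparable.
    n = len(series)
    mean = sum(series) / n
    sd = (sum((x - mean) ** 2 for x in series) / n) ** 0.5
    return [(x - mean) / sd for x in series]

years = list(range(2001, 2008))
indicators = {  # hypothetical yearly indicator values
    "registered_unemployment": [3.6, 4.0, 4.3, 4.2, 4.2, 4.1, 4.0],
    "job_seekers_per_vacancy": [1.30, 1.25, 1.10, 1.06, 1.04, 1.02, 0.98],
}
weights = {"registered_unemployment": 0.6, "job_seekers_per_vacancy": 0.4}

standardized = {k: zscore(v) for k, v in indicators.items()}
composite = [
    sum(weights[k] * standardized[k][t] for k in indicators)
    for t in range(len(years))
]
```

Because every standardized series has mean zero, the composite index is centered as well, and its year-to-year movement can be read as relative worsening or improvement of the imbalance.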

3.
游广武 《统计研究》1992,9(2):48-51
At present, statistical analysis of the balance of aggregate social supply and demand invariably follows one approach: first measure aggregate supply and aggregate demand statistically, then compare the two indicators to assess the degree of imbalance. This method carries an implicit assumption that is rarely stated or argued for directly, namely that a balance gap between aggregate social supply and demand exists in the statistical sense. From a methodological standpoint, this paper questions that implicit assumption and, through a critical analysis of typical supply-demand balance statistics built on it, demonstrates that the statistical balance gap of aggregate social supply and demand is a fiction.

4.
Under the "new normal" of an economy entering a period of gear-shifting and deceleration, real-estate developers face new challenges such as declining operating returns and shrinking latent demand. The Jiaxing real-estate market has entered a period of deep adjustment: land-market transactions have cooled and real-estate development investment has fallen markedly. Government and enterprises have taken multiple measures to accelerate inventory destocking, but structural and regional supply-demand imbalances remain. Opportunities such as the two-child policy should be seized to raise product quality and promote the sustained, healthy development of the real-estate industry.

5.
Resource allocation is a problem in any economic model. Under China's socialist market economy, resource allocation is without doubt a core issue of economic work. Yet even in a market economy, relying entirely on the market to allocate resources spontaneously is impossible; the state must intervene and adjust by certain means and methods, which is what we usually call macroeconomic regulation. In macroeconomic regulation of the national economy, the balance of aggregate social supply and demand is both an indicator of whether regulation is effective and the very aim of regulation. National economic accounting should strive to provide the basic material for balancing aggregate supply and demand and genuinely play its role in macroeconomic regulation. I. A statistical understanding of whether aggregate social supply and demand are balanced. Whether aggregate social supply and demand are in balance or imbalance…

6.
杨缅昆 《统计研究》1993,10(4):34-39
I. A choice: defining aggregate supply and demand. Since the 1980s, to meet the higher demands of macroeconomic regulation under China's new economic system, theorists have debated the theory and methods of accounting for aggregate social supply and demand. Surveying the many different views, two schools have formed on the question of aggregate supply-demand accounting: the equilibrium school and the disequilibrium school. The two schools reach different conclusions on whether aggregate supply and demand are identically equal in statistical accounting, and their disagreement stems from different understandings of how aggregate supply and demand are defined.

7.
An analysis of supply-demand equilibrium in the New Rural Social Pension Insurance
冯兰 《统计与决策》2012,(8):111-114
In the course of implementing the current New Rural Pension Insurance policy, this paper studies its supply, demand, and equilibrium from the perspectives of the government and rural residents, using traditional economic analysis. Government supply behavior is analyzed with a cost-benefit approach and rural residents' demand behavior with ordinal utility theory, from which a supply-demand equilibrium model of the scheme is built. Government supply of the new rural pension still falls far short of rural residents' old-age needs, and supply and demand have not reached equilibrium. In view of this disequilibrium, the article offers policy recommendations for promoting supply-demand equilibrium in the new rural pension scheme.

8.
庞皓 《统计研究》1995,12(2):77-78
Innovative research on the quantitative analysis of aggregate social supply and demand: a review of Zeng Wuyi's Statistical Research on Aggregate Supply-Demand Balance (《总供需平衡统计研究》). Pang Hao. Since reform and opening up, Chinese theorists and practical economic departments have carried out a great deal of theoretical research on aggregate social supply and demand and achieved many positive results. However, most of this research has remained qualitative; empirical quantitative analysis of China's aggregate supply and demand…

9.
章和杰  陈威吏 《统计研究》2008,25(10):26-33
Abstract: In light of China's current circumstances, this paper modifies the classical Mundell-Fleming (M-F) model and proposes an M-F model based on a basket-currency exchange-rate regime. Within the modified framework, it analyzes, both theoretically and empirically, the effect of expansionary fiscal policy on national income, and further analyzes the role of fiscal expenditure in determining domestic consumption. Finally, in view of China's current internal and external imbalances, it recommends that the government implement expansionary fiscal policy, optimize the structure of fiscal expenditure, and take related measures.

10.
A further inquiry into methods for judging the state of aggregate supply-demand balance. Zeng Wuyi, Department of Planning and Statistics, Xiamen University. Effective macroeconomic regulation requires not only a correct theoretical analysis of aggregate supply-demand balance but, even more, timely judgments about the concrete state of that balance. This paper explores methods for judging the state of aggregate supply-demand balance. I. Problems with existing methods. In practice…

11.
We derive a computationally convenient formula for the large sample coverage probability of a confidence interval for a scalar parameter of interest following a preliminary hypothesis test that a specified vector parameter takes a given value in a general regression model. Previously, this large sample coverage probability could only be estimated by simulation. Our formula only requires the evaluation, by numerical integration, of either a double or a triple integral, irrespective of the dimension of this specified vector parameter. We illustrate the application of this formula to a confidence interval for the odds ratio of myocardial infarction when the exposure is recent oral contraceptive use, following a preliminary test where two specified interactions in a logistic regression model are zero. For this real‐life data, we compare this large sample coverage probability with the actual coverage probability of this confidence interval, obtained by simulation.

12.
The nonparametric Bayesian approach for inference regarding the unknown distribution of a random sample customarily assumes that this distribution is random and arises through Dirichlet-process mixing. Previous work within this setting has focused on the mean of the posterior distribution of this random distribution, which is the predictive distribution of a future observation given the sample. Our interest here is in learning about other features of this posterior distribution as well as about posteriors associated with functionals of the distribution of the data. We indicate how to do this in the case of linear functionals. An illustration, with a sample from a Gamma distribution, utilizes Dirichlet-process mixtures of normals to recover this distribution and its features.

13.
In this article we propose a novel non-parametric sampling approach to estimate the posterior distributions of parameters of interest. Starting from an initial sample over the parameter space, the method uses this initial information to build a geometric structure known as a Voronoi tessellation over the whole parameter space. This rough approximation to the posterior distribution provides a way to generate new points from the posterior without additional costly model evaluations. By running a traditional Markov chain Monte Carlo (MCMC) sampler over the non-parametric tessellation, the initial approximate distribution is refined sequentially. We applied this method to a couple of climate models to show that this hybrid scheme successfully approximates the posterior distribution of the model parameters.

14.
Drawing distinct units without replacement and with unequal probabilities from a population is a problem often considered in the literature (e.g. Hanif and Brewer, 1980, Int. Statist. Rev. 48, 317–355). In such a case, the sample mean is a biased estimator of the population mean. For this reason, we use the unbiased Horvitz–Thompson estimator (1951). In this work, we focus our interest on the variance of this estimator. The variance is cumbersome to compute because it requires the calculation of a large number of second-order inclusion probabilities. It would be helpful to use an approximation that does not need heavy calculations. The Hájek (1964) variance approximation provides this advantage as it is free of second-order inclusion probabilities. Hájek (1964) proved that this approximation is valid under restrictive conditions that are usually not fulfilled in practice. In this paper, we give more general conditions and we show that this approximation remains acceptable for most practical problems.
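The Horvitz–Thompson estimator and the flavor of the Hájek variance approximation discussed in this abstract can be sketched as follows. The data and first-order inclusion probabilities are hypothetical, and the variance formula shown is one common sample-based textbook form of the approximation, chosen because it needs only first-order inclusion probabilities; it is not necessarily the paper's exact formulation.

```python
def horvitz_thompson_total(y, pi):
    # Unbiased estimator of the population total: sum of y_i / pi_i.
    return sum(yi / pi_i for yi, pi_i in zip(y, pi))

def hajek_variance(y, pi):
    # Sample-based Hájek-type approximation: uses only the first-order
    # inclusion probabilities pi_i, avoiding all second-order probabilities.
    n = len(y)
    e = [yi / pi_i for yi, pi_i in zip(y, pi)]  # expanded values
    w = [1.0 - pi_i for pi_i in pi]             # weights 1 - pi_i
    a_hat = sum(wi * ei for wi, ei in zip(w, e)) / sum(w)
    return n / (n - 1) * sum(wi * (ei - a_hat) ** 2 for wi, ei in zip(w, e))

# Hypothetical sample of 4 units with unequal inclusion probabilities.
y = [12.0, 7.5, 20.0, 15.5]
pi = [0.40, 0.25, 0.50, 0.35]
t_hat = horvitz_thompson_total(y, pi)
v_hat = hajek_variance(y, pi)
```

The appeal, as the abstract notes, is computational: an exact variance would need all second-order inclusion probabilities, while this approximation is a single weighted sum over the sample.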

15.
The Levenberg–Marquardt algorithm is a flexible iterative procedure used to solve non-linear least-squares problems. In this work, we study how a class of possible adaptations of this procedure can be used to solve maximum-likelihood problems when the underlying distributions are in the exponential family. We formally demonstrate a local convergence property and discuss a possible implementation of the penalization involved in this class of algorithms. Applications to real and simulated compositional data show the stability and efficiency of this approach.
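A minimal sketch of the classical Levenberg–Marquardt iteration for non-linear least squares, which is the starting point of this abstract (not its exponential-family maximum-likelihood adaptation). The exponential model and the noise-free data below are hypothetical, chosen only to show the damped Gauss–Newton step and the accept/reject update of the damping parameter.

```python
import numpy as np

def residuals(params, x, y):
    # Residuals for the hypothetical model y = a * exp(b * x).
    a, b = params
    return y - a * np.exp(b * x)

def jacobian(params, x):
    # Analytic Jacobian of the residuals with respect to (a, b).
    a, b = params
    fa = np.exp(b * x)                  # d model / d a
    fb = a * x * np.exp(b * x)          # d model / d b
    return -np.column_stack([fa, fb])   # residual = y - model

def levenberg_marquardt(x, y, params, lam=1e-2, n_iter=50):
    for _ in range(n_iter):
        r = residuals(params, x, y)
        J = jacobian(params, x)
        # Damped normal equations: (J^T J + lam * I) step = -J^T r.
        A = J.T @ J + lam * np.eye(len(params))
        step = np.linalg.solve(A, -J.T @ r)
        new_params = params + step
        if np.sum(residuals(new_params, x, y) ** 2) < np.sum(r ** 2):
            params, lam = new_params, lam * 0.5  # accept step, less damping
        else:
            lam *= 2.0                           # reject step, more damping
    return params

x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * x)  # noise-free data generated with a=2, b=1.5
a_hat, b_hat = levenberg_marquardt(x, y, np.array([1.0, 1.0]))
```

As the damping `lam` shrinks the step approaches Gauss–Newton, and as it grows the step approaches small gradient descent; that interpolation is what the paper's exponential-family adaptations build on.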

16.
The bootstrap is a powerful non-parametric statistical technique for making probability-based inferences about a population parameter. Through a Monte-Carlo resampling simulation, bootstrapping empirically generates a statistic's entire distribution. From this simulated distribution, inferences can be made about a population parameter. Assumptions about normality are not required. In general, despite its power, bootstrapping has been used relatively infrequently in social science research, and this is particularly true for business research. This under-utilization is likely due to a combination of a general lack of understanding of the bootstrap technique and the difficulty with which it has traditionally been implemented. Researchers in the various fields of business should be familiar with this powerful statistical technique. The purpose of this paper is to explain how this technique works using Lotus 1-2-3, a software package with which business people are very familiar.
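The resampling idea this abstract describes can be sketched compactly. The paper demonstrates the technique in Lotus 1-2-3; the sketch below expresses the same percentile-bootstrap idea in Python, with a hypothetical data sample.

```python
import random

def bootstrap_ci(data, stat, n_boot=5000, alpha=0.05, seed=42):
    # Percentile bootstrap: resample with replacement, recompute the
    # statistic each time, and read off the empirical percentiles.
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

sample = [23, 19, 31, 25, 28, 22, 27, 30, 21, 26]  # hypothetical data
mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(sample, mean)
```

No normality assumption enters anywhere: the interval comes entirely from the simulated distribution of the resampled statistic, which is the abstract's central point.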

17.
The analysis of repeated difference tests aims both at significance testing for differences and at estimating the mean discrimination ability of the consumers. In addition to the average success probability, the proportion of consumers who can detect the difference between two products, and who therefore account for any increase in this probability, is of interest. While some authors address the first two goals, for the latter only an estimator directly linked to the average probability seems to be in use; however, this may lead to unreasonable results. We therefore propose a new approach based on multiple test theory. We define a suitable set of hypotheses that is closed under intersection, and from this derive a series of hypotheses that may be tested sequentially without violating the overall significance level. By means of this procedure we can determine a minimal number of assessors who must have perceived the difference between the products at least once, and from this obtain a conservative lower bound for the proportion of perceivers among the consumers. Several examples give insight into the properties of this new method and show that knowledge of this lower bound can indeed be valuable to the investigator. Finally, an adaptation of this approach to similarity tests is proposed.

18.
Consider a linear regression model with independent normally distributed errors. Suppose that the scalar parameter of interest is a specified linear combination of the components of the regression parameter vector. Also suppose that we have uncertain prior information that a parameter vector, consisting of specified distinct linear combinations of these components, takes a given value. Part of our evaluation of a frequentist confidence interval for the parameter of interest is the scaled expected length, defined to be the expected length of this confidence interval divided by the expected length of the standard confidence interval for this parameter, with the same confidence coefficient. We say that a confidence interval for the parameter of interest utilizes this uncertain prior information if (a) the scaled expected length of this interval is substantially less than one when the prior information is correct, (b) the maximum value of the scaled expected length is not too large and (c) this confidence interval reverts to the standard confidence interval, with the same confidence coefficient, when the data happen to strongly contradict the prior information. We present a new confidence interval for a scalar parameter of interest, with specified confidence coefficient, that utilizes this uncertain prior information. A factorial experiment with one replicate is used to illustrate the application of this new confidence interval.

19.
Abstract. In the context of survival analysis it is possible that increasing the value of a covariate X has a beneficial effect on a failure time, but this effect is reversed when conditioning on any possible value of another covariate Y. When studying causal effects and the influence of covariates on a failure time, this state of affairs appears paradoxical and raises questions about the real effect of X. Situations of this kind may be seen as a version of Simpson's paradox. In this paper, we study this phenomenon in terms of the linear transformation model. The introduction of a time variable makes the paradox more interesting and intricate: it may hold conditionally on a certain survival time, i.e. on an event of the type {T > t} for some but not all t, and it may hold only for some range of survival times.

20.
Extreme Value Theory (EVT) studies the tails of probability distributions in order to measure and quantify extreme events, both maxima and minima. In river flow data, an extreme level of a river may be related to the level of a neighboring river that flows into it. In this type of data, it is very common for flooding at a location to have been caused by a very large flow from an affluent river tens or hundreds of kilometers away. In this sense, an interesting approach is to consider a conditional model for the estimation of a multivariate model. Inspired by this idea, we propose a Bayesian model to describe the dependence of exceedances between rivers, using a conditionally independent structure. In this model, the dependence between rivers is captured by modeling the marginal excesses of one river as linear functions of the excesses of the other rivers. The results show a strong and positive connection between the excesses in one river and the excesses of the other rivers.
