Similar Literature
A total of 20 similar documents were found.
1.
1. The relationship between technological progress and economic growth. Technological progress can be understood in a broad or a narrow sense. In the narrow sense it refers to improvements in the factors of production (including production equipment, tools, instruments, and raw materials) and in production processes; in the broad sense it also covers adjustments and improvements in "soft" technologies such as organization, management, policy, and institutional arrangements.

2.
Research on the Competitiveness of the Industrial Structure of Tianjin Binhai New Area    Cited in total: 1 (self-citations: 0, citations by others: 1)
1. Introduction. Industrial structure refers to the pattern of relationships among the various industrial sectors of a country's or a region's industrial economy. In the broad sense it covers two aspects: the proportional relationships among sectors in terms of production scale, and the ways in which the sectors are interrelated. This paper studies industrial structure in the narrow sense, i.e., the proportional relationships among industrial sectors in terms of production scale. A rational industrial structure is a precondition for the healthy development of a regional economy: it helps to make full use of regional resources, bring regional advantages into play, raise the economic efficiency of regional industries, and strengthen regional economic power, ...

3.
1. The meaning of consumption level. Consumption level can be defined in a narrow or a broad sense. In the narrow sense it is the per capita quantity of consumer goods (including services), reflecting the degree to which people's material needs are actually met. It can be expressed in monetary terms, such as per capita consumption expenditure or per capita money income, or in physical terms, such as per capita consumption of meat products or dairy products. In the broad sense, consumption level covers not only the quantity but also the quality of consumer goods: for the same good in the same quantity, differences in quality (grade) mean large differences in price and in the satisfaction obtained, and hence large differences in the consumption level that is reflected.

4.
1. The current state of rural infrastructure and information services. Rural infrastructure refers to long-lived facilities that serve farmers' production and daily life. It falls roughly into three categories: production-service facilities, such as water-conservancy works and agricultural research and extension institutions; living-service facilities, such as medical and cultural facilities; and facilities serving both production and daily life, such as roads and telecommunications. Relative to the level of development in the cities, China's rural infrastructure and information services remain rather backward.

5.
丁大建 《统计研究》1990,7(2):15-18
1. A general theoretical description of changes in the industrial structure of the labour force. The industrial structure of the labour force refers to the distribution, composition, and proportions of a country's or region's labour force across industrial sectors, obtained by classifying workers according to the sector to which they belong. This first requires being clear about how the sectors of the national economy are divided; this paper adopts the three-sector classification, dividing all economic activity into the primary industry (agriculture in the broad sense), the secondary industry (industry in the broad sense), and the tertiary industry (services in the broad sense). (1) Types of labour-force industrial structure ...

6.
The environmental protection industry is an emerging industry: the collective term for activities such as technology development, product development, commercial distribution, resource utilization, information services, and engineering contracting carried out to prevent and control environmental pollution, improve the ecological environment, and protect natural resources. Current definitions around the world distinguish a broad and a narrow sense. In the narrow sense, the industry is concerned with end-of-pipe treatment of environmental problems, covering wastewater treatment, waste disposal, air-quality control, noise control, and the comprehensive utilization of the "three wastes". In the broad sense, it is concerned with the whole life cycle of a product: besides the above, it includes clean technologies in production and clean products in use, energy-saving technologies and processes, and green design, that is, taking recycling into account already at the product design stage ...

7.
《内蒙古统计》2000,(6):5-6
To gauge how much scientific and technological progress contributes to agriculture in our city, the Municipal Statistics Bureau and the Municipal Rural Survey Team jointly formed a research group to measure and discuss the contribution rate of agricultural scientific and technological progress in Baotou. 1. The meaning of the contribution rate of agricultural scientific and technological progress and how it is measured. Agricultural scientific and technological progress is usually divided into a narrow and a broad sense: the narrow sense covers technological evolution and technological revolution, while the broad sense also covers progress in "soft" sciences such as management, decision-making, and knowledge. In 1997 the Department of Science, Technology and Quality Standards of the Ministry of Agriculture issued a standardized method for measuring the contribution rate of agricultural scientific and technological progress: the economic contribution share of technological progress is measured with a growth-rate accounting model together with a production function. The growth-rate accounting equation is: δ is the agricultural sci...
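The abstract is cut off before the equation is stated, but growth-rate accounting of this kind is conventionally written in a Solow-residual form such as δ = y − αk − βl − γm, with the contribution rate given by δ/y. The sketch below is only an illustration of that generic calculation, not the Ministry's exact specification; the elasticities and growth rates are invented placeholders, not figures from the Baotou study.

```python
# Hedged sketch of a growth-rate accounting model for the contribution
# rate of technological progress.  All numbers below are illustrative.

def tech_progress_contribution(y, k, l, m, alpha, beta, gamma):
    """Solow-style growth accounting.

    y, k, l, m : average annual growth rates of agricultural output,
                 capital/material inputs, labour, and cultivated land.
    alpha, beta, gamma : output elasticities of capital, labour, and land.
    Returns (delta, contribution_rate), where delta is the growth
    attributed to technological progress and contribution_rate = delta / y.
    """
    delta = y - alpha * k - beta * l - gamma * m
    return delta, delta / y

delta, rate = tech_progress_contribution(
    y=0.065, k=0.040, l=0.005, m=-0.002,   # hypothetical growth rates
    alpha=0.55, beta=0.20, gamma=0.25,      # hypothetical elasticities
)
print(f"technological progress rate δ = {delta:.4f}")
print(f"contribution rate = {rate:.2%}")
```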

8.
陈勇 《统计与决策》2006,(21):67-68
1. The meaning of green GDP accounting. Green GDP is a core indicator of green accounting. Green accounting can be understood in a broad or a narrow sense: in the broad sense it covers green statistical accounting, green financial accounting, green technical accounting, and green auditing; in the narrow sense it refers only to the accounting of green GDP. This paper is mainly concerned with green accounting in the narrow sense. The main objects of green accounting ...

9.
Futures Markets
The concept of the futures market has a broad and a narrow sense. In the broad sense, the futures market is the totality of futures-trading relationships, comprising futures exchanges, futures clearing houses, brokerage firms, and traders (both speculators and hedgers). In the narrow sense, it is the place where futures contracts are traded, that is, the futures exchange, which is formed by producers and dealers seeking to transfer the risk of price fluctuations and by those who take on that price risk ...

10.
As a composite social, economic, and natural ecosystem, a city gradually develops its own distinctive urban environment in the course of its growth, consisting mainly of the social environment, the economic environment, and the natural ecological environment [1]; this is the urban environment in the broad sense. The urban ecological environment in the narrow sense refers only to the city's natural ecological environment, and it is this narrow sense that is used in this paper.

11.
In this paper factor analysis is reviewed as a special case of interpolation in reproducing kernel Hilbert spaces. This yields a new point of view and an illustration, in particular for the problems of factor indeterminacy and factor-score estimation. More generally, it is argued that Hilbert space theory and projection methods provide a unifying framework for different approaches to problems of factor analysis. The conclusions for the practice of factor analysis amount to a recommendation to use it only as a confirmatory method of data analysis.

12.
The paper uses principal component analysis to construct a composite indicator of corporate performance and takes it as the dependent variable, with ownership-governance factors (state-owned shares, tradable A-shares, the largest shareholder, ownership concentration, and so on) as explanatory variables, in a regression analysis. Using data on 231 listed companies across four industries (machinery, metals, wholesale, and petroleum), regression analyses and hypothesis tests are carried out. The conclusion is that similar ownership structures can produce different governance effects in different industries, so the governance of Chinese listed companies should, for now, follow an "industry-specific" approach.
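The abstract above describes a two-step design (a PCA composite score, then an OLS regression on ownership variables). The sketch below is a minimal illustration of that general procedure, not the authors' exact specification; all variable names and the simulated data are hypothetical placeholders.

```python
# Sketch: (1) composite performance score = first principal component of
# several accounting ratios; (2) OLS of that score on ownership variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 231  # number of listed firms in the study
df = pd.DataFrame({
    "roe": rng.normal(0.08, 0.05, n),
    "roa": rng.normal(0.05, 0.03, n),
    "eps": rng.normal(0.30, 0.20, n),
    "state_share": rng.uniform(0.0, 0.7, n),
    "tradable_a_share": rng.uniform(0.2, 0.6, n),
    "top1_share": rng.uniform(0.1, 0.6, n),
})

# Step 1: composite performance index from standardized performance ratios.
perf = StandardScaler().fit_transform(df[["roe", "roa", "eps"]])
df["performance"] = PCA(n_components=1).fit_transform(perf)[:, 0]

# Step 2: regress the composite index on ownership-governance variables
# (in the study this would be re-estimated industry by industry).
X = sm.add_constant(df[["state_share", "tradable_a_share", "top1_share"]])
print(sm.OLS(df["performance"], X).fit().summary())
```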

13.
The topic of this paper was prompted by a study for which one of us was the statistician. It was submitted to Annals of Internal Medicine. The paper received positive reviewer comments; however, the statistical reviewer stated that, for the analysis to be acceptable for publication, the missing data had to be accounted for by using the baseline value in a last-observation-carried-forward imputation. We discuss the issues associated with this form of imputation and recommend that it should not be undertaken as a primary analysis.

14.
Summary. Nonsymmetric correspondence analysis is a model for analysing the dependence in a two-way contingency table and is an alternative to correspondence analysis. Correspondence analysis is based on the decomposition of Pearson's Φ² index; nonsymmetric correspondence analysis is based on the decomposition of the Goodman-Kruskal τ index for predictability. In this paper, we approach nonsymmetric correspondence analysis as a statistical model based on a probability distribution. We provide algorithms for maximum likelihood and least-squares estimation with linear constraints on the model parameters. The nonsymmetric correspondence analysis model has many properties that are useful for prediction analysis in contingency tables. Predictability measures are introduced to identify the categories of the response variable that can be predicted best, as well as the categories of the explanatory variable having the highest predictive power. We describe the interpretation of the model parameters in two examples. Finally, we discuss the relations of nonsymmetric correspondence analysis with other reduced-rank models.
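As a point of reference for the τ-index decomposition the abstract mentions, here is a minimal sketch of the descriptive, SVD-based version of nonsymmetric correspondence analysis (not the authors' maximum-likelihood model), with a purely illustrative table and the column variable treated as the response.

```python
# Goodman-Kruskal tau and its decomposition for a two-way table.
import numpy as np

N = np.array([[30, 10, 5],
              [12, 25, 8],
              [ 6,  9, 20]], dtype=float)   # illustrative counts

P = N / N.sum()                  # joint proportions p_ij
r = P.sum(axis=1)                # row margins p_i.
c = P.sum(axis=0)                # column margins p_.j

Pi = P / r[:, None] - c          # row-profile deviations  p_ij/p_i. - p_.j

# tau = share of the variation in the column (response) variable that is
# predictable from the row (explanatory) variable.
numerator = np.sum(r[:, None] * Pi**2)
tau = numerator / (1.0 - np.sum(c**2))
print(f"Goodman-Kruskal tau = {tau:.3f}")

# NSCA decomposes tau's numerator via an SVD of the weighted deviations.
U, s, Vt = np.linalg.svd(np.sqrt(r)[:, None] * Pi)
print("share of tau's numerator per dimension:", (s**2 / numerator).round(3))
```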

15.
Tukey proposed a class of distributions, the g-and-h family, based on a transformation of a standard normal variable, to accommodate different degrees of skewness and elongation in the distributions of variables arising in practical applications. It is easy to draw values from this distribution even though its probability density function is hard to state explicitly. Given this flexibility, the g-and-h family may be extremely useful in creating multiple imputations for missing data. This article demonstrates how the family, as well as its generalizations, can be used in the multiple imputation analysis of incomplete data. The focus of this article is on a scalar variable with missing values. In the absence of any additional information, the data are missing completely at random, and hence the correct analysis is the complete-case analysis. Thus, applying g-and-h multiple imputation to the scalar case allows comparison with the correct analysis and with other model-based multiple imputation methods. Comparisons are made using simulated datasets and data from a survey of adolescents ascertaining driving after drinking alcohol.
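For reference, the g-and-h transformation of a standard normal draw z is usually written T(z) = ((exp(gz) − 1)/g) · exp(hz²/2), with a location-scale shift applied afterwards; the sketch below draws values this way. The parameter values are illustrative only and are not taken from the article.

```python
# Drawing values from Tukey's g-and-h family by transforming N(0,1) draws.
import numpy as np

def rgh(n, g=0.5, h=0.1, loc=0.0, scale=1.0, rng=None):
    """Draw n values from a g-and-h distribution.

    g controls skewness, h controls tail heaviness (elongation);
    loc and scale give the usual location-scale shift.
    """
    rng = rng or np.random.default_rng(0)
    z = rng.standard_normal(n)
    if abs(g) < 1e-12:
        core = z                               # limiting case g -> 0
    else:
        core = (np.exp(g * z) - 1.0) / g
    return loc + scale * core * np.exp(h * z**2 / 2.0)

x = rgh(10_000, g=0.5, h=0.1)
skew = ((x - x.mean())**3).mean() / x.std()**3
print(f"mean = {x.mean():.3f}, moment skewness = {skew:.2f}")
```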

16.
This paper deals with the analysis of datasets in which the subjects are described by the estimated means of a p-dimensional variable. Classical statistical methods of data analysis do not treat measurements affected by intrinsic variability, as is the case for estimates, so the heterogeneity that this condition induces among subjects is not taken into account. In this paper a way to solve the problem is suggested in the context of symbolic data analysis, whose specific aim is to handle data tables in which single-valued measurements are replaced by complex data structures such as frequency distributions, intervals, and sets of values. A principal component analysis is carried out according to this proposal, with a significant improvement in the treatment of the information.

17.
The difference between a path analysis and the other multivariate analyses is that the path analysis has the ability to compute the indirect effects apart from the direct effects. The aim of this study is to investigate the distribution of indirect effects that is one of the components of path analysis via generated data. To realize this, a simulation study has been conducted with four different sample sizes, three different numbers of explanatory variables and with three different correlation matrices. A replication of 1000 has been applied for every single combination. According to the results obtained, it is found that irrespective of the sample size path coefficients tend to be stable. Moreover, path coefficients are not affected by correlation types either. Since the replication number is 1000, which is fairly large, the indirect effects from the path models have been treated as normal and their confidence intervals have been presented as well. It is also found that the path analysis should not be used with three explanatory variables. We think that this study would help scientists who are working in both natural and social sciences to determine sample size and different number of variables in the path analysis.  相似文献   
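As a concrete illustration of the indirect effects the abstract refers to, here is a minimal sketch of the simplest path model, a chain X -> M -> Y (not the authors' simulation design): the indirect effect of X on Y is the product of the two path coefficients, and a bootstrap gives its confidence interval. The data and coefficient values are simulated for illustration.

```python
# Indirect effect in a simple X -> M -> Y path model, with a bootstrap CI.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500
x = rng.standard_normal(n)
m = 0.6 * x + rng.standard_normal(n)             # path a = 0.6
y = 0.5 * m + 0.3 * x + rng.standard_normal(n)   # path b = 0.5, direct = 0.3

a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
fit_y = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit()
b, direct = fit_y.params[1], fit_y.params[2]
print(f"indirect effect a*b = {a * b:.3f}, direct effect = {direct:.3f}")

# Percentile bootstrap confidence interval for the indirect effect.
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    a_b = sm.OLS(m[idx], sm.add_constant(x[idx])).fit().params[1]
    b_b = sm.OLS(y[idx], sm.add_constant(np.column_stack([m[idx], x[idx]]))).fit().params[1]
    boot.append(a_b * b_b)
print("95% bootstrap CI:", np.percentile(boot, [2.5, 97.5]).round(3))
```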

18.
王志宏 《统计研究》1998,15(3):53-56
Sustainable Development and Input-Occupancy-Output Analysis. ABSTRACT: In this paper, some important applications of Input-Occupancy-Output Analysis in sustainable development are ...

19.
Non-Gaussian factor analysis differs from ordinary factor analysis in the distributional assumption on the factors, which are modelled by univariate mixtures of Gaussians, thus relaxing the classical normality hypothesis. From this point of view, the model can be thought of as a generalization of ordinary factor analysis, and its estimation problem can still be solved via the maximum likelihood method. The focus of this work is to introduce, develop, and explore a Bayesian analysis of the model in order to answer unresolved questions about the number of latent factors and, simultaneously, the number of mixture components used to model each factor. The effectiveness of the proposed method is explored in a simulation study and in a real example of international exchange rates.

20.
An Exploration of Clustering Methods for Functional Data    Cited in total: 3 (self-citations: 0, citations by others: 3)
Functional data are a relatively new data type in data analysis; they combine features of time series and cross-sectional data, can usually be described as the graph of a function of some variable, and are highly useful in applications. The paper first briefly reviews the basic characteristics of functional data and the clustering methods that have been proposed for them, such as K-means clustering of functional data with uniform correction and hierarchical clustering of functional data. On this basis, it examines functional-data clustering from the perspective of functional feature analysis and proposes an interval clustering method for functional data based on derivative analysis. The method is applied to employment data for the six provinces of central China, and clustering results are obtained.
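The abstract does not spell out the derivative-based interval clustering algorithm; the sketch below only illustrates the general idea of clustering curves through their derivatives under simple assumptions (spline smoothing, first derivatives on a common grid, ordinary K-means rather than interval clustering), and uses simulated curves instead of the central-China employment data.

```python
# Derivative-based clustering of functional data: smooth each curve with a
# spline, evaluate its first derivative on a common grid, cluster with K-means.
import numpy as np
from scipy.interpolate import UnivariateSpline
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 30)                 # common observation times
grid = np.linspace(0.05, 0.95, 50)        # grid for derivative evaluation

# Simulated curves: two groups with different growth patterns.
curves = np.vstack([
    [np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(t.size) for _ in range(10)],
    [t**2 + 0.1 * rng.standard_normal(t.size) for _ in range(10)],
])

# Feature matrix: first derivative of each smoothed curve on the grid.
deriv_features = np.array([
    UnivariateSpline(t, y, s=0.5).derivative()(grid) for y in curves
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(deriv_features)
print(labels)
```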

