Similar Articles
19 similar articles found.
1.
This article gives a quantitative treatment of the gaokao (college entrance examination) preference-filing problem: an evaluation model is designed with the fuzzy AHP method, and a forecasting model is built with grey prediction. A worked example walks through the basic steps of fuzzy AHP and of grey prediction, offering practical guidance for this complex task of fusing subjective and objective information.
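The abstract pairs fuzzy AHP (the evaluation model) with grey prediction (the forecast; a GM(1,1) sketch appears under item 10 below). As a minimal illustration of the fuzzy AHP step, and not the paper's actual model, the sketch below defuzzifies a hypothetical triangular-fuzzy pairwise comparison matrix over three invented criteria and reads priority weights off the principal eigenvector; the criteria, the judgments, and the graded-mean defuzzification rule are all assumptions.

```python
import numpy as np

# Hypothetical triangular fuzzy judgments (l, m, r) over three invented
# criteria: school reputation, major fit, admission probability.
TFN = {
    (0, 1): (1, 2, 3),   # criterion 0 vs 1: between equal and moderate importance
    (0, 2): (2, 3, 4),
    (1, 2): (1, 2, 3),
}

n = 3
A = np.ones((n, n))
for (i, j), (l, m, r) in TFN.items():
    A[i, j] = (l + 4 * m + r) / 6.0   # graded-mean defuzzification (one option)
    A[j, i] = 1.0 / A[i, j]           # reciprocal judgment

# Priority weights: normalized principal eigenvector of the crisp matrix.
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()
print(w)   # criterion weights feeding the evaluation model
```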

2.
朱雪鸿 《中国统计》2006,(10):57-58
I am a statistician, and long years of statistical work have given me a habit: whenever life throws up something tangled, I like to attack it with statistical analysis. Around this time last year my child was in the final year of high school and about to file gaokao preferences. As everyone knows, the hardest and most critical part is estimating two numbers: the candidate's own exam score and the universities' admission cutoffs (in regions where preferences are filed after the exam on the basis of self-estimated scores). Get it wrong and either a high scorer is wasted on a lesser school, or the name is missing from every list and a year must be repeated. So I brought out my stock-in-trade, statistical analysis, and helped my child file the preferences; in the end he got his wish and was admitted to a famous university, 中国科技大学 (the University of Science and Technology of China). I call the method score-difference analysis (a self-coined name, said with a laugh). The gaokao comes every year, and I dare not keep the method to myself, so…

3.
New Book Introduction
《挑大学选专业——高考志愿填报指南2007》 (Choosing a University, Picking a Major: A Guide to Gaokao Preference Filing 2007), compiled by the 《中国大学评价》 (Chinese University Evaluation) research group, offers candidates a view of the basic situation of universities nationwide in each admission tier (first, second, and third), along with each discipline's ranking position across institutions, so that they can file their preferences clearly, knowledgeably, and accurately. It is a genuinely practical working reference for preference filing. The…

4.
In 1979, when filing my gaokao preferences, for reasons I still cannot explain I put the statistics program of 陕西财经学院 (Shaanxi Institute of Finance and Economics) as my first choice and boarded the "statistics" boat in a daze. Four years of university left me with a crude impression of statistics: "the courses are harder, the homework heavier, and the job assignments worse than in other majors." I remember one teacher saying,…

5.
In 2015 the 《挑大学选专业》 (Choosing a University, Picking a Major) series was given a complete redesign! The new edition, 《挑大学选专业——2015高考志愿填报指南》, keeps and strengthens the series' brand identity: a gaokao preference-filing reference organized primarily around rankings of university overall strength, discipline categories, and programs, with rankings of faculty academic level, faculty productivity, freshman quality, and graduate quality as supporting threads. To make the book easier to read and use, the new edition adds six search indexes: I. China's first-rank and research universities; II. Project 985 and Project 211 universities; III. universities ordered by overall strength; IV.…

6.
《挑大学选专业——高考志愿填报指南2007》, compiled by the 《中国大学评价》 research group, offers candidates a view of the basic situation of universities nationwide in each admission tier (first, second, and third), along with each discipline's ranking position across institutions,…

7.
From the most admired seat of learning in the land to an ordinary vocational school: leave aside the eye-catching clickbait readings, and 周浩's story makes a point as plain as a point can be: interest is the best teacher, and study and growth are, in the end, the student's own affair. In truth, for a seventeen- or eighteen-year-old, filing gaokao preferences is likely to be a rather blind exercise. Before the exam everything serves the score; nothing else needs thinking about, and there is no time to think about it. Once the scores are out…

8.
This book is a gaokao preference-filing reference organized around evaluations of discipline categories and undergraduate programs at regular universities nationwide. It is written so that candidates can choose the school and program that suit them best on the basis of their coursework grades and gaokao scores, and further, so that those who aspire to become China's scientific elite can use it to enter an excellent university and receive the country's finest undergraduate professional education, laying a solid foundation for future contributions to Chinese, and indeed world, science.

9.
The overall strength of Chinese universities at a glance; the strength of their disciplines at a glance; the tier of their programs at a glance; the level of their faculty at a glance; the quality of their freshmen at a glance; the quality of their graduates at a glance. In 2015 the 《挑大学选专业》 series was given a complete redesign! The new edition, 《挑大学选专业2015高考志愿填报指南》, remains a gaokao preference-filing reference organized primarily around rankings of university overall strength, discipline categories, and programs, with rankings of faculty academic level, faculty productivity, freshman quality, and graduate quality as supporting threads. To make the book easier to read and use, the new edition adds six search indexes: China's first-rank and research universities; Project 985 and Project 211 universities; universities ordered by overall strength; universities ordered by discipline and program strength; universities ordered by faculty level, faculty productivity, freshman quality, and graduate quality; and universities ordered by region and province…

10.
For forecasting problems involving sequences that are described only qualitatively, or grey data sequences whose values are uncertain, this article proposes triangular-fuzzy-number sequence forecasting based on the GM(1,1) model. Data that are not precisely quantified are expressed as triangular fuzzy numbers; each triangular fuzzy number is then converted to a corresponding crisp (non-fuzzy) value, producing a new data sequence; grey-system methods are then applied to forecast this new crisp sequence. Applied to an empirical forecast of historical Chinese population data, the method achieves good accuracy, and its results can provide decision-makers with a basis for their decisions.
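A minimal sketch of the pipeline the abstract describes, under two assumptions not taken from the paper: centroid defuzzification of the triangular fuzzy numbers, and an invented fuzzy series. Each (l, m, r) is collapsed to a crisp value, then a standard GM(1,1) model is fit and extrapolated.

```python
import numpy as np

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number (l, m, r) -> crisp value."""
    l, m, r = tfn
    return (l + m + r) / 3.0

def gm11_forecast(x0, steps=1):
    """Standard grey GM(1,1) forecast of a positive crisp sequence x0."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                # accumulated series
    z1 = 0.5 * (x1[:-1] + x1[1:])                     # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # development / grey input
    k = np.arange(1, x0.size + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(np.concatenate([[x0[0]], x1_hat]))[-steps:]

# Invented fuzzy observations, e.g. "around 100", "roughly 108 to 125", ...
fuzzy_series = [(95, 100, 105), (108, 115, 125), (120, 130, 138), (135, 142, 152)]
crisp = [defuzzify(t) for t in fuzzy_series]
print(gm11_forecast(crisp, steps=2))   # two-step-ahead forecasts
```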

11.
For the multiple-coding design used in scoring the constructed-response items of the PISA 2009 China trial study, this article examines whether rater effects are present in the scoring of the reading, mathematics, and science domains. Using the many-facet Rasch model, rater main effects are analyzed for each of the three domains. The results show that in reading and science the differences in rater severity/leniency are highly significant, while in mathematics they are small. Possible reasons for these results are discussed, together with lessons for online scoring of the gaokao.
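A rough illustration of the facets idea, not the study's actual analysis: for dichotomous scores, joint maximum likelihood for a person-item-rater Rasch model coincides with a dummy-coded logistic regression. The simulated sizes, effects, single-rater-per-response design, and the ridge penalty (to tame perfect scorers) are all assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Simulated 0/1 scores; each response graded by one of four raters.
rng = np.random.default_rng(2)
severity = np.array([-0.5, 0.0, 0.2, 0.6])             # invented true severities
rec = []
for p in range(60):
    theta = rng.normal()                               # person ability
    for i in range(8):
        r = int(rng.integers(4))
        eta = theta - 0.3 * (i - 4) - severity[r]      # person - item - rater
        rec.append((p, i, r, int(rng.random() < 1.0 / (1.0 + np.exp(-eta)))))
df = pd.DataFrame(rec, columns=["person", "item", "rater", "score"])

# Joint ML for the dichotomous facets model == dummy-coded logistic regression;
# sklearn's default ridge penalty keeps perfect scorers from diverging.
X = pd.get_dummies(df[["person", "item", "rater"]].astype("category"))
clf = LogisticRegression(C=10.0, max_iter=2000).fit(X, df["score"])
coefs = pd.Series(clf.coef_[0], index=X.columns)
print(coefs.filter(like="rater"))   # approx. sign-flipped severities (centered)
```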

12.
Quantile regression models are a powerful tool for studying different points of the conditional distribution of univariate response variables. Extending them to multivariate responses, though, is not straightforward, starting with the very definition of multivariate quantiles. We propose here a flexible Bayesian quantile regression model when the response variable is multivariate, where we are able to define a structured additive framework for all predictor variables. We build on previous ideas considering a directional approach to define the quantiles of a response variable with multiple outputs, and we define noncrossing quantiles in every directional quantile model. We define a Markov chain Monte Carlo (MCMC) procedure for model estimation, where the noncrossing property is obtained considering a Gaussian process design to model the correlation between several quantile regression models. We illustrate the results of these models using two datasets: one on dimensions of inequality in the population, such as income and health; the second on scores of students in the Brazilian High School National Exam, considering three dimensions for the response variable.
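The paper's model is Bayesian, with structured additive predictors and a Gaussian-process device for noncrossing; the sketch below keeps only the directional ingredient, in a plain frequentist form: project the multivariate response on a unit direction and fit an ordinary pinball-loss quantile regression with statsmodels' QuantReg. The data and the direction are invented.

```python
import numpy as np
import statsmodels.api as sm

def directional_quantile_fit(Y, X, direction, tau):
    """Fit the tau-quantile of the projection of a multivariate response Y
    onto a unit direction (a frequentist stand-in for the paper's model)."""
    u = np.asarray(direction, dtype=float)
    u /= np.linalg.norm(u)
    y_proj = Y @ u                                   # scalar directional response
    return sm.QuantReg(y_proj, sm.add_constant(X)).fit(q=tau)

# Invented data with two response dimensions (say, income and health scores).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
Y = np.column_stack([1.0 + 2.0 * X[:, 0], 0.5 * X[:, 0]]) \
    + rng.normal(size=(200, 2))
fit = directional_quantile_fit(Y, X, direction=[1.0, 1.0], tau=0.5)
print(fit.params)                                    # intercept and slope at tau
```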

13.
Application and Improvement of the Analytic Hierarchy Process in Choosing the Best Retail Format
Building on an introduction to the analytic hierarchy process (AHP) and to the characteristics of the various retail formats, this article takes a residential community in 天津 (Tianjin) as an example to show how AHP can be used to select the best retail format for a community, improving the method in the course of the application.
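Complementing the fuzzy variant sketched under item 1, here is a plain AHP sketch with an invented 1-9-scale judgment matrix over three hypothetical retail formats: priorities come from the principal eigenvector, and Saaty's consistency ratio screens the judgments. The matrix and formats are assumptions, and the paper's specific improvement is not reproduced.

```python
import numpy as np

# Invented judgments comparing convenience store, supermarket, specialty store.
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = vecs[:, k].real
w /= w.sum()                              # local priority vector

n = A.shape[0]
ci = (vals.real[k] - n) / (n - 1)         # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]       # Saaty's random index (small n)
print(w, "CR =", ci / ri)                 # accept the judgments if CR < 0.1
```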

14.
Application of the minimum distance (MD) estimation method to the linear regression model for estimating regression parameters is a difficult and time-consuming process due to the complexity of its distance function, and hence, it is computationally expensive. To deal with the computational cost, this paper proposes a fast algorithm which makes the best use of the coordinate-wise minimization technique in order to obtain the MD estimator. An R package (KoulMde), based on the proposed algorithm and written in Rcpp, is available online.
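KoulMde's actual distance function is not reproduced here; the sketch below shows only the coordinate-wise minimization loop itself, with ordinary least squares standing in as the (expensive) criterion so the answer can be checked against a known minimizer. The bracket width, sweep count, and tolerance are implementation assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def coordinate_descent(loss, beta0, n_sweeps=50, tol=1e-8):
    """Cyclic coordinate-wise minimization of loss(beta)."""
    beta = np.asarray(beta0, dtype=float)
    for _ in range(n_sweeps):
        old = beta.copy()
        for j in range(beta.size):
            def f(b, j=j):                 # one-dimensional slice in coord j
                trial = beta.copy()
                trial[j] = b
                return loss(trial)
            beta[j] = minimize_scalar(f, bracket=(beta[j] - 1.0, beta[j] + 1.0)).x
        if np.max(np.abs(beta - old)) < tol:
            break
    return beta

# Toy check on least squares, whose minimizer is known in closed form.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=100)
print(coordinate_descent(lambda b: np.sum((y - X @ b) ** 2), np.zeros(3)))
```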

15.
Because the algebraic system for grey numbers is still incomplete, it is hard to build effective grey models directly on grey-number sequences, while traditional whitening of grey-number sequences loses information. This article therefore designs a new whitening treatment for interval grey number sequences that neither breaks the independence of the interval grey numbers nor damages the completeness of their information, and studies in particular how the whitened sequence and the original interval sequence behave under translation and scalar multiplication. The whitened sequence is then applied successfully to building interval grey number forecasting and grey relational analysis models. This result matters for extending the range of problems to which grey models apply.
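The article's new whitening transformation is not given in the abstract; as a reference point, the sketch below implements the classical equal-weight whitening of interval grey numbers and checks the translation and scalar-multiplication behavior the abstract highlights. The example sequence is invented.

```python
import numpy as np

def whiten(intervals, w=0.5):
    """Classical equal-weight whitening of interval grey numbers:
    [a, b] -> (1 - w) * a + w * b. (The article's improved transformation
    is not reproduced here.)"""
    iv = np.asarray(intervals, dtype=float)
    return (1.0 - w) * iv[:, 0] + w * iv[:, 1]

grey_seq = np.array([[2.0, 3.0], [2.5, 4.0], [3.2, 4.6]])
x = whiten(grey_seq)
# Whitening commutes with translation and positive scalar multiplication,
# the two transforms whose behavior the article studies:
print(np.allclose(whiten(grey_seq + 1.0), x + 1.0),
      np.allclose(whiten(2.0 * grey_seq), 2.0 * x))
```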

16.
A common assumption in modeling stochastic processes is that of weak stationarity. Although this is a convenient and sometimes justifiable assumption for many applications, there are other applications for which it is clearly inappropriate. One such application occurs when the process is driven by action at a limited number of sites, or point sources. Interest may lie not only in predicting the process, but also in assessing the effect of the point sources. In this article we present a general parametric approach of accounting for the effect of point sources in the covariance model of a stochastic process, and we discuss properties of a particular family from this general class. A simulation study demonstrates the performance of parameter estimation using this model, and the predictive ability of this model is shown to be better than some commonly used modeling approaches. Application to a dataset of electromagnetism measurements in a field containing a metal pole shows the advantages of our parametric nonstationary covariance models.
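The article's parametric family is not specified in the abstract; the sketch below shows one simple way to encode a point-source effect that is guaranteed valid: multiply a stationary exponential covariance by g(s1)g(s2), where g inflates variance near the source (pointwise rescaling of a Gaussian process preserves positive definiteness). The functional form, parameter values, and site locations are all assumptions.

```python
import numpy as np

def cov_point_source(s1, s2, source, sill=1.0, rho=2.0, amp=1.5, decay=1.0):
    """Exponential covariance scaled up near a point source:
    C(s1, s2) = g(s1) g(s2) sill exp(-|s1 - s2| / rho),
    g(s) = 1 + amp * exp(-decay * |s - source|)."""
    g = lambda s: 1.0 + amp * np.exp(-decay * np.linalg.norm(s - source))
    return g(s1) * g(s2) * sill * np.exp(-np.linalg.norm(s1 - s2) / rho)

sites = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
pole = np.array([0.2, 0.1])                           # hypothetical metal pole
K = np.array([[cov_point_source(a, b, pole) for b in sites] for a in sites])
print(np.linalg.eigvalsh(K))                          # all positive: valid cov
```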

17.
A family of kernels (with the sinc kernel as the simplest member) is introduced for which the associated deconvolving kernels (assuming normally distributed measurement errors) can be represented by relatively simple analytic functions. For this family, deconvolving kernel density estimation is not more sophisticated than ordinary kernel density estimation. Application examples suggest that it may be advantageous to overestimate the measurement error, because the resulting deconvolving kernels can partially compensate for the blurring inherent to the density estimation itself. A corollary of this proposition is that, even without error, it may be rational to use deconvolving rather than ordinary kernels.
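A minimal sketch of deconvolving kernel density estimation with the sinc kernel under N(0, sigma^2) measurement error, following the standard Fourier-inversion formula (the sinc kernel's characteristic function is the indicator of [-1, 1]); the grid sizes, bandwidth, and numerical quadrature are implementation choices, not the paper's.

```python
import numpy as np
from scipy.integrate import trapezoid

def deconv_kernel(u, sigma, h, m=201):
    """Deconvolving kernel for the sinc kernel under N(0, sigma^2) errors:
    L(u) = (1/pi) * int_0^1 exp(sigma^2 t^2 / (2 h^2)) * cos(t u) dt."""
    t = np.linspace(0.0, 1.0, m)
    w = np.exp(0.5 * (sigma * t / h) ** 2)        # reciprocal error char. function
    return trapezoid(w * np.cos(u[..., None] * t), t, axis=-1) / np.pi

def deconv_kde(x_grid, data, sigma, h):
    """Density estimate of X from contaminated observations W = X + error."""
    u = (x_grid[:, None] - data[None, :]) / h     # (grid, n) argument matrix
    return deconv_kernel(u, sigma, h).mean(axis=1) / h

# Toy data: X ~ N(0, 1) observed with N(0, 0.4^2) error (variance assumed known).
rng = np.random.default_rng(3)
data = rng.normal(size=300) + rng.normal(scale=0.4, size=300)
xs = np.linspace(-3.0, 3.0, 81)
fhat = deconv_kde(xs, data, sigma=0.4, h=0.45)
```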

18.
Application of the balanced repeated replication method of variance estimation can become cumbersome as well as expensive when the number of replicates involved is large. While a number of replication methods of variance estimation requiring a reduced number of replicates have been proposed, the corresponding reduction in computational effort is accompanied by a loss in precision. In this article, this loss in precision is evaluated in the linear case. The results obtained may be useful in practice in balancing precision against computational cost.
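A sketch of the BRR estimator in the linear (means) case the article analyzes: two PSUs per stratum, half-samples defined by rows of a Hadamard matrix, and the variance estimated as the average squared deviation of the replicate estimates. The data and the use of the full power-of-2 replicate set (rather than a reduced set) are assumptions.

```python
import numpy as np
from scipy.linalg import hadamard

def brr_variance(pairs, stat=np.mean):
    """Balanced repeated replication variance for a statistic of stratum values.

    pairs: (L, 2) array of PSU values, two primary sampling units per stratum.
    Each replicate keeps one PSU per stratum, chosen by a Hadamard-matrix row.
    """
    L = pairs.shape[0]
    order = 1
    while order < L:                     # smallest power-of-2 order >= L
        order *= 2
    H = hadamard(order)[:, :L]           # rows = replicates, columns = strata
    theta = stat(pairs.mean(axis=1))     # full-sample estimate (linear case)
    reps = np.array([stat(np.where(row == 1, pairs[:, 0], pairs[:, 1]))
                     for row in H])
    return np.mean((reps - theta) ** 2)  # BRR variance estimate

rng = np.random.default_rng(4)
pairs = rng.normal(loc=10.0, scale=2.0, size=(5, 2))   # 5 strata, invented data
print(brr_variance(pairs))
```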

19.
In economics and government statistics, aggregated data instead of individual level data are usually reported for data confidentiality and for simplicity. In this paper we develop a method of flexibly estimating the probability density function of the population using aggregated data obtained as group averages when individual level data are grouped according to quantile limits. The kernel density estimator has been commonly applied to such data without taking into account the data aggregation process and has been shown to perform poorly. Our method models the quantile function as an integral of the exponential of a spline function and deduces the density function from the quantile function. We match the aggregated data to their theoretical counterpart using least squares, and regularize the estimation by using the squared second derivatives of the density function as the penalty function. A computational algorithm is developed to implement the method. Application to simulated data and US household income survey data shows that our penalized spline estimator can accurately recover the density function of the underlying population, while the common use of kernel density estimation is severely biased. The method is applied to study the dynamics of China's urban income distribution using published interval-aggregated data of 1985-2010.
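A compressed sketch of the paper's idea on invented quintile data: represent the quantile function as Q(u) = Q(0) + the integral of exp{s(v)}, with s a spline (so Q is monotone by construction), match the implied group means to the observed ones by least squares, and regularize. Two simplifications are assumed: a piecewise-linear s, and a penalty on second differences of s rather than the paper's squared second derivative of the density.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid, trapezoid
from scipy.optimize import minimize

# Invented quintile data: group limits and observed within-group income means.
u_lim = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
m_obs = np.array([8.0, 15.0, 22.0, 31.0, 52.0])

grid = np.linspace(0.0, 1.0, 401)          # fine grid on the unit interval
knots = np.linspace(0.0, 1.0, 12)          # knots for the spline s(u)

def fitted_means(theta):
    """Group means implied by Q(u) = Q(0) + int_0^u exp(s(v)) dv."""
    q0, s = theta[0], theta[1:]
    s_grid = np.interp(grid, knots, s)      # linear spline stand-in for s(u)
    Q = q0 + np.concatenate([[0.0], cumulative_trapezoid(np.exp(s_grid), grid)])
    return np.array([trapezoid(Q[(grid >= a) & (grid <= b)],
                               grid[(grid >= a) & (grid <= b)]) / (b - a)
                     for a, b in zip(u_lim[:-1], u_lim[1:])])

def objective(theta, lam=1.0):
    fit_err = np.sum((fitted_means(theta) - m_obs) ** 2)
    rough = np.sum(np.diff(theta[1:], 2) ** 2)   # roughness penalty on s
    return fit_err + lam * rough

theta0 = np.concatenate([[m_obs[0]], np.full(knots.size, np.log(np.ptp(m_obs)))])
res = minimize(objective, theta0, method="L-BFGS-B")

# Recover the density: at x = Q(u), f(x) = 1 / Q'(u) = exp(-s(u)).
s_grid = np.interp(grid, knots, res.x[1:])
f_at_Q = np.exp(-s_grid)
```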
