19 similar articles found; search took 609 ms
1.
This paper conducts a quantitative analysis of the Gaokao (college entrance examination) application-preference system. A fuzzy AHP method is used to design the evaluation model, and a grey prediction method provides the forecasting model. The basic procedures of fuzzy AHP and grey prediction are illustrated with a worked example, offering practical guidance for the complex process of filing Gaokao preferences, which integrates subjective and objective information.
2.
I am a statistician, and long years of statistical work have given me a habit: whenever life presents a thorny problem, I like to attack it with statistical analysis. Around this time last year my child was in the final year of high school, about to file Gaokao application preferences. As everyone knows, the hardest and most critical problem when filing preferences is estimating two scores: the candidate's Gaokao score and the university's admission score (in regions where preferences are filed after the exam based on estimated scores). Get it wrong, and a child with a high score is either under-placed or misses admission entirely and must repeat the year. I used my stock-in-trade, statistical analysis, to help my child file preferences; in the end he got his wish and was admitted to a famous university, the University of Science and Technology of China. The method I used is what I call score-difference analysis (a self-coined name, with a smile). The Gaokao comes every year, and I dare not keep this method to myself, so now…
3.
4.
5.
In 2015, the book series Choosing a University, Choosing a Major was completely revised! The revised Choosing a University, Choosing a Major: 2015 Gaokao Application Guide retains and strengthens its brand character: a Gaokao application reference book whose main threads are overall university rankings, discipline-category rankings, and major rankings, with rankings of faculty academic level, faculty performance, freshman quality, and graduate quality as auxiliary threads. To make the book easier to read and use, the revised edition adds six retrieval entry points: Ⅰ. China's first-class universities and research universities; Ⅱ. Project 985 and Project 211 universities; Ⅲ. universities ordered by overall strength; Ⅳ.
6.
7.
8.
9.
The overall strength of Chinese universities at a glance; discipline strength at a glance; major tiers at a glance; faculty level at a glance; freshman quality at a glance; graduate quality at a glance. In 2015, the book series Choosing a University, Choosing a Major was completely revised! The revised Choosing a University, Choosing a Major: 2015 Gaokao Application Guide remains a Gaokao application reference book whose main threads are overall university rankings, discipline-category rankings, and major rankings, with rankings of faculty academic level, faculty performance, freshman quality, and graduate quality as auxiliary threads. To make the book easier to read and use, the revised edition adds six retrieval entry points: China's first-class universities and research universities; Project 985 and Project 211 universities; universities ordered by overall strength; universities ordered by discipline and major strength; universities ordered by faculty level, faculty performance, freshman quality, and graduate quality; and universities ordered by region and province.
10.
Addressing the problem of forecasting sequences that are described only qualitatively, as well as grey data sequences with uncertain values, this paper proposes the concept of triangular-fuzzy-number sequence forecasting based on the GM(1,1) model. Imprecisely quantified data are first represented as triangular fuzzy numbers; each triangular fuzzy number is then mapped to a corresponding crisp (non-fuzzy) number, yielding a new data sequence; finally, grey-system methods are applied to forecast this new crisp sequence. Applying the method to empirical forecasting of China's historical population data yields high forecasting accuracy, and the results can provide a basis for decision-making.
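The pipeline described above, defuzzify the triangular fuzzy numbers into a crisp sequence and then forecast with GM(1,1), can be sketched as follows. The centroid defuzzification and the toy data are illustrative assumptions; the paper's exact crisp mapping is not reproduced here.

```python
import numpy as np

def defuzzify(tfn):
    """Map a triangular fuzzy number (l, m, u) to a crisp value.
    The centroid (l + m + u) / 3 is a common choice, used here as an
    assumption; the paper may use a different mapping."""
    l, m, u = tfn
    return (l + m + u) / 3.0

def gm11_forecast(x0, steps=1):
    """Classic GM(1,1) grey forecast of a positive crisp sequence x0."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                         # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])              # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]  # development coefficient, grey input
    def x1_hat(k):                             # time-response function, k = 0, 1, 2, ...
        return (x0[0] - b / a) * np.exp(-a * k) + b / a
    # inverse AGO recovers fitted/forecast values of the original sequence
    fitted = [x0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(1, n + steps)]
    return np.array(fitted)

# Hypothetical triangular-fuzzy observations (e.g. imprecise population counts)
fuzzy_series = [(9, 10, 11), (11, 12, 13), (13, 14, 16), (15, 17, 18)]
crisp = [defuzzify(t) for t in fuzzy_series]
print(gm11_forecast(crisp, steps=2))
```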
11.
Rater Effects in the PISA China Field Trial and Their Implications for Online Scoring of the Gaokao  Cited by: 1 (self-citations: 0, others: 1)
Using the multiple-coding design adopted for scoring the open-ended items in the PISA 2009 China field trial, this study examines whether rater effects exist in the scoring of the reading, mathematics, and science domains. Rater main effects in each of the three domains are analysed with the many-facet Rasch model. The results show that in reading and science, differences in rater severity/leniency are highly significant, whereas in mathematics the differences are small. Finally, possible explanations for these results are discussed, along with suggestions for online scoring of the Gaokao.
12.
Quantile regression models are a powerful tool for studying different points of the conditional distribution of univariate response variables. Extending them to the multivariate setting, however, is not straightforward, starting with the very definition of multivariate quantiles. We propose a flexible Bayesian quantile regression model for a multivariate response variable, in which a structured additive framework can be defined for all predictor variables. Building on previous ideas that take a directional approach to defining the quantiles of a multi-output response variable, we define noncrossing quantiles in every directional quantile model. We develop a Markov chain Monte Carlo (MCMC) procedure for model estimation, in which the noncrossing property is obtained through a Gaussian process design that models the correlation between the several quantile regression models. We illustrate the models on two datasets: one on dimensions of inequality in the population, such as income and health; the other on student scores in the Brazilian High School National Exam, with a three-dimensional response variable.
13.
Application and Improvement of the Analytic Hierarchy Process in Selecting the Optimal Retail Format  Cited by: 2 (self-citations: 0, others: 2)
After introducing the Analytic Hierarchy Process (AHP) and the characteristics of various retail formats, this paper takes a residential community in Tianjin as an example to illustrate how AHP can be used to select the optimal retail format for a community, improving the AHP procedure in the course of the application.
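As a minimal illustration of the AHP step, the sketch below derives priority weights and a consistency check from a single pairwise-comparison matrix. The criteria and judgments for the community retail-format choice are hypothetical, and the paper's improvement to the procedure is not reproduced.

```python
import numpy as np

def ahp_weights(M):
    """Priority weights from a pairwise-comparison matrix via the principal
    eigenvector, plus the consistency ratio CR = CI / RI."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    eigvals, eigvecs = np.linalg.eig(M)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    ci = (eigvals[k].real - n) / (n - 1)                 # consistency index
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
    cr = ci / ri if ri > 0 else 0.0
    return w, cr

# Hypothetical criteria comparison for a community retail-format choice:
# rows/cols = (distance to residents, commodity range, price level)
A = [[1,     3,   5],
     [1 / 3, 1,   2],
     [1 / 5, 1 / 2, 1]]
w, cr = ahp_weights(A)
print(w, cr)   # weights sum to 1; CR < 0.1 indicates acceptable consistency
```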
14.
Jiwoong Kim 《Journal of Statistical Computation and Simulation》2018,88(3):482-497
Applying the minimum distance (MD) estimation method to the linear regression model for estimating regression parameters is a difficult and time-consuming process because of the complexity of its distance function, and it is therefore computationally expensive. To deal with this computational cost, this paper proposes a fast algorithm that exploits a coordinate-wise minimization technique to obtain the MD estimator. An R package (KoulMde), based on the proposed algorithm and written in Rcpp, is available online.
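The coordinate-wise minimization strategy can be illustrated on a toy regression objective. The actual Cramér-von Mises-type distance minimized in the MD approach is more involved; a least-absolute-deviations objective stands in for it here, and the ternary line search is only one possible one-dimensional minimizer.

```python
import numpy as np

def coordinate_descent(objective, beta0, lo=-10.0, hi=10.0, sweeps=50, tol=1e-8):
    """Generic coordinate-wise minimization: repeatedly minimize the objective
    in one coordinate at a time (here by ternary search on [lo, hi], which
    requires the objective to be unimodal in each coordinate, as a convex
    objective is)."""
    beta = np.array(beta0, dtype=float)
    for _ in range(sweeps):
        old = beta.copy()
        for j in range(len(beta)):
            a, b = lo, hi
            while b - a > tol:                 # ternary search in coordinate j
                m1, m2 = a + (b - a) / 3, b - (b - a) / 3
                beta[j] = m1; f1 = objective(beta)
                beta[j] = m2; f2 = objective(beta)
                if f1 < f2:
                    b = m2
                else:
                    a = m1
            beta[j] = 0.5 * (a + b)
        if np.max(np.abs(beta - old)) < 1e-6:  # stop when a full sweep barely moves
            break
    return beta

# Toy linear model y = X @ beta + noise, least-absolute-deviations objective
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=200)
lad = lambda b: np.abs(y - X @ b).sum()
print(coordinate_descent(lad, [0.0, 0.0]))    # close to [2, -1]
```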
15.
Because the algebraic system for grey numbers is still incomplete, it is difficult to build grey models directly on sequences of grey numbers, while traditional whitening of grey-number sequences causes information loss. This paper therefore designs a new whitening method for interval grey-number sequences that preserves both the independence of each interval grey number and the completeness of its information. It focuses on how the whitened sequence behaves, relative to the original interval grey-number sequence, under translation and scalar-multiplication transformations, and successfully applies the whitened sequence to building interval grey-number forecasting and grey relational analysis models. These results are significant for extending the range of applicability of grey models.
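A minimal sketch of whitening an interval grey-number sequence and of the translation/scaling behaviour the paper studies. The classic equal-weight whitenization x = a + w(b - a) with w = 0.5 is used as a stand-in; the paper's new whitening method is not reproduced here.

```python
def whiten(interval, w=0.5):
    """Equal-weight whitenization of an interval grey number [a, b]."""
    a, b = interval
    return a + w * (b - a)

seq = [(2, 4), (3, 6), (5, 7), (6, 10)]        # interval grey numbers [a, b]
white = [whiten(g) for g in seq]
print(white)                                    # [3.0, 4.5, 6.0, 8.0]

# The whitened sequence behaves predictably under the transformations the
# paper studies: translation by c and multiplication by k > 0 commute with
# this whitenization.
c, k = 2.0, 3.0
shifted = [whiten((a + c, b + c)) for a, b in seq]
scaled = [whiten((k * a, k * b)) for a, b in seq]
assert shifted == [x + c for x in white]
assert scaled == [k * x for x in white]
```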
16.
《Journal of statistical planning and inference》1999,77(1):51-72
A common assumption in modeling stochastic processes is that of weak stationarity. Although this is a convenient and sometimes justifiable assumption for many applications, there are other applications for which it is clearly inappropriate. One such application occurs when the process is driven by action at a limited number of sites, or point sources. Interest may lie not only in predicting the process, but also in assessing the effect of the point sources. In this article we present a general parametric approach of accounting for the effect of point sources in the covariance model of a stochastic process, and we discuss properties of a particular family from this general class. A simulation study demonstrates the performance of parameter estimation using this model, and the predictive ability of this model is shown to be better than some commonly used modeling approaches. Application to a dataset of electromagnetism measurements in a field containing a metal pole shows the advantages of our parametric nonstationary covariance models.
17.
《Journal of Statistical Computation and Simulation》2012,82(12):2347-2363
A family of kernels (with the sinc kernel as the simplest member) is introduced for which the associated deconvolving kernels (assuming normally distributed measurement errors) can be represented by relatively simple analytic functions. For this family, deconvolving kernel density estimation is not more sophisticated than ordinary kernel density estimation. Application examples suggest that it may be advantageous to overestimate the measurement error, because the resulting deconvolving kernels can partially compensate for the blurring inherent to the density estimation itself. A corollary of this proposition is that, even without error, it may be rational to use deconvolving rather than ordinary kernels.
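What makes the sinc kernel tractable here is its Fourier transform, an indicator on [-1, 1]: under N(0, sigma^2) errors the deconvolving kernel reduces to a one-dimensional integral that can be evaluated numerically. A sketch, with bandwidth and error scale chosen arbitrarily for illustration:

```python
import numpy as np

def deconv_kernel(u, sigma_over_h, n_grid=401):
    """Deconvolving kernel for the sinc kernel under N(0, sigma^2) measurement
    error. Since the sinc kernel's Fourier transform is 1 on [-1, 1],
    K_dec(u) = (1/2pi) * integral_{-1}^{1} cos(t*u) * exp((sigma/h)^2 t^2 / 2) dt,
    evaluated here by the trapezoidal rule."""
    t = np.linspace(-1.0, 1.0, n_grid)
    vals = np.cos(np.outer(np.atleast_1d(u), t)) * \
        np.exp(0.5 * (sigma_over_h * t) ** 2)
    dt = t[1] - t[0]
    integral = dt * (vals.sum(axis=1) - 0.5 * (vals[:, 0] + vals[:, -1]))
    return integral / (2.0 * np.pi)

def deconv_kde(x, data, h, sigma):
    """Deconvolving kernel density estimate at points x from data observed
    with N(0, sigma^2) measurement error."""
    u = (np.asarray(x, float)[:, None] - np.asarray(data, float)[None, :]) / h
    K = deconv_kernel(u.ravel(), sigma / h).reshape(u.shape)
    return K.mean(axis=1) / h

# Contaminated sample: true N(0, 1) values observed with N(0, 0.25) error
rng = np.random.default_rng(1)
data = rng.normal(size=500) + rng.normal(scale=0.5, size=500)
x = np.linspace(-3.0, 3.0, 7)
print(deconv_kde(x, data, h=0.6, sigma=0.5))
```

With sigma = 0 the formula collapses back to the ordinary sinc-kernel estimate, sin(u)/(pi*u), which is the sense in which deconvolution adds no extra sophistication.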
18.
Daniel Krewski 《Journal of statistical planning and inference》1978,2(1):45-51
Application of the balanced repeated replication method of variance estimation can become cumbersome as well as expensive when the number of replicates involved is large. While a number of replication methods of variance estimation requiring a reduced number of replicates have been proposed, the corresponding reduction in computational effort is accompanied by a loss in precision. In this article, this loss in precision is evaluated in the linear case. The results obtained may be useful in practice in balancing precision against computational cost.
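A sketch of balanced repeated replication with two PSUs per stratum, using a Sylvester Hadamard design to form the half-samples. The stratified toy data and the mean estimator are assumptions for illustration; the article's analysis of reduced-replicate designs is not reproduced.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def brr_variance(strata, estimator):
    """Balanced repeated replication variance estimate. `strata` is a list of
    (psu1, psu2) sample pairs (two PSUs per stratum); `estimator` maps a list
    of selected PSU samples to a scalar. Half-samples follow a Hadamard
    design, skipping the all-ones first column so each stratum is balanced."""
    L = len(strata)
    R = 1
    while R < L + 1:                           # smallest power of 2 > L
        R *= 2
    H = hadamard(R)
    theta = estimator([np.concatenate(pair) for pair in strata])  # full sample
    reps = []
    for r in range(R):
        half = [pair[0] if H[r, h + 1] > 0 else pair[1]
                for h, pair in enumerate(strata)]
        reps.append(estimator(half))
    reps = np.array(reps)
    return np.mean((reps - theta) ** 2)        # average squared deviation

# Toy stratified design: 4 strata, 2 PSUs each, estimating the overall mean
rng = np.random.default_rng(2)
strata = [(rng.normal(m, 1, 5), rng.normal(m, 1, 5)) for m in (0, 1, 2, 3)]
mean_est = lambda groups: np.concatenate(groups).mean()
print(brr_variance(strata, mean_est))
```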
19.
《Journal of Statistical Computation and Simulation》2012,82(15):3093-3105
In economics and government statistics, aggregated data rather than individual-level data are usually reported, both for data confidentiality and for simplicity. In this paper we develop a method for flexibly estimating the probability density function of a population from aggregated data obtained as group averages when individual-level data are grouped according to quantile limits. The kernel density estimator has commonly been applied to such data without taking the aggregation process into account and has been shown to perform poorly. Our method models the quantile function as an integral of the exponential of a spline function and deduces the density function from the quantile function. We match the aggregated data to their theoretical counterparts by least squares, and we regularize the estimation using the squared second derivative of the density function as the penalty. A computational algorithm is developed to implement the method. Applications to simulated data and US household income survey data show that our penalized spline estimator accurately recovers the density function of the underlying population, whereas common kernel density estimation is severely biased. The method is applied to study the dynamics of China's urban income distribution using published interval-aggregated data for 1985-2010.