71.
The lognormal distribution is quite commonly used as a lifetime distribution. Data arising from life-testing and reliability studies are often left truncated and right censored. Here, the EM algorithm is used to estimate the parameters of the lognormal model based on left-truncated and right-censored data. The maximization step of the algorithm is carried out by two alternative methods, one involving an approximation based on a Taylor series expansion (leading to an approximate maximum likelihood estimate) and the other based on the EM gradient algorithm (Lange, 1995). These two methods are compared through Monte Carlo simulations. The Fisher scoring method for obtaining the maximum likelihood estimates exhibits convergence problems under this setup, except when the truncation percentage is small. The asymptotic variance-covariance matrix of the MLEs is derived using the missing information principle (Louis, 1982), and asymptotic confidence intervals for the scale and shape parameters are then obtained and compared with the corresponding bootstrap confidence intervals. Finally, some numerical examples are given to illustrate all the methods of inference developed here.
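As a rough illustration of the model being fitted in this abstract, the sketch below maximizes the observed-data log-likelihood of a left-truncated, right-censored lognormal sample directly with scipy. The data, truncation points, and censoring time are invented, and the article's EM and EM-gradient steps are not reproduced; this only shows the likelihood structure.

```python
# Minimal sketch: direct numerical maximization of the observed-data
# log-likelihood for left-truncated, right-censored lognormal data.
# (Illustrative baseline with simulated data, not the paper's EM algorithm.)
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulated example: true mu = 2.0, sigma = 0.5 on the log scale.
n = 500
t = np.exp(rng.normal(2.0, 0.5, size=n))      # lifetimes
tau = np.full(n, 3.0)                         # left-truncation points
keep = t > tau                                # only units surviving past tau are observed
t, tau = t[keep], tau[keep]
c = np.full(t.shape, 20.0)                    # fixed right-censoring time
delta = t <= c                                # 1 = observed failure, 0 = censored
y = np.where(delta, t, c)

def neg_loglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)                 # keep sigma > 0
    z = (np.log(y) - mu) / sigma
    zt = (np.log(tau) - mu) / sigma
    # density contribution for observed failures, survival for censored units,
    # both conditioned on having survived past the truncation point
    ll = np.where(delta,
                  norm.logpdf(z) - np.log(sigma) - np.log(y),
                  norm.logsf(z))
    ll -= norm.logsf(zt)                      # left-truncation adjustment
    return -ll.sum()

res = minimize(neg_loglik, x0=[np.log(y).mean(), 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)
```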
72.
Missing-Data Handling Based on Clustering and Association Rules (基于聚类关联规则的缺失数据处理研究)
This paper proposes a new method for handling missing data based on clustering and association rules. Records of a dataset containing missing values are first grouped into clusters of similar records; an improved association-rule method is then applied to each sub-dataset to mine associations between variables, and these associations are used to impute the missing values. An empirical analysis shows that the method performs well for missing-data treatment, especially on massive datasets.
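A simplified sketch of the cluster-then-impute idea follows. The paper mines association rules within each cluster; here the within-cluster mode stands in for the rule-based fill, and the column names and toy data are purely illustrative.

```python
# Simplified sketch: cluster records on their complete variables, then fill
# missing values within each cluster (mode used here in place of mined rules).
import pandas as pd
from sklearn.cluster import KMeans

df = pd.DataFrame({
    "age":    [23, 25, 47, 52, 46, 56, 23, 27],
    "income": [30, 32, 80, 90, 85, 95, 28, 33],
    "grade":  ["A", "A", "B", None, "B", "B", None, "A"],  # column with missing values
})

complete_cols = ["age", "income"]                # columns without missing values
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(df[complete_cols])
df["cluster"] = labels

for k, sub in df.groupby("cluster"):
    mode = sub["grade"].mode()
    if not mode.empty:                           # fill missing "grade" with the cluster mode
        df.loc[(df["cluster"] == k) & (df["grade"].isna()), "grade"] = mode.iloc[0]

print(df)
```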
73.
Social network data usually contain different types of errors. One of them is missing data due to actor non-response, which can seriously jeopardize the results of analyses if not appropriately treated. The impact of missing data may be more severe in valued networks, where not only the presence of a tie is recorded but also its magnitude or strength. Blockmodeling is a technique for delineating network structure; we focus on an indirect approach suitable for valued networks. Little is known about the sensitivity of valued networks to different types of measurement errors, and since it is reasonable to expect that blockmodeling, with its positional outcomes, could be vulnerable to the presence of non-respondents, such errors require treatment. We examine the impact of seven actor non-response treatments on the positions obtained when indirect blockmodeling is used. The starting point for our simulations is a set of networks whose structure is known; three structures were considered: cohesive subgroups, core-periphery, and hierarchy. The results show that the number of non-respondents, the type of underlying blockmodel structure, and the employed treatment all affect the determined partitions of actors in complex ways. Recommendations for best practices are provided.
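One simulation cell of this kind of study could look like the sketch below: a valued network with a known cohesive-subgroup structure, a few non-responding actors whose outgoing ties are lost, one simple treatment (reconstructing each missing row from the corresponding incoming ties), and indirect blockmodeling via hierarchical clustering of structural-equivalence profiles. This is not the paper's design or any of its seven treatments; all choices here are illustrative.

```python
# Illustrative simulation cell: known structure -> actor non-response ->
# simple reconstruction treatment -> indirect blockmodeling -> compare partitions.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(1)
n, groups = 30, np.repeat([0, 1, 2], 10)          # true partition: 3 cohesive subgroups

# Valued network: strong ties within groups, weak ties between groups.
A = rng.poisson(1.0, size=(n, n)).astype(float)
within = groups[:, None] == groups[None, :]
A[within] += rng.poisson(5.0, size=(n, n))[within]
np.fill_diagonal(A, 0)

# Actor non-response: a few actors report no outgoing ties.
nonresp = rng.choice(n, size=5, replace=False)
A_obs = A.copy()
A_obs[nonresp, :] = np.nan

# Treatment: reconstruct each missing row from the incoming ties to that actor.
A_treated = A_obs.copy()
A_treated[nonresp, :] = A_obs[:, nonresp].T
A_treated[np.isnan(A_treated)] = 0.0              # ties among non-respondents stay unknown; set to 0

# Indirect blockmodeling: dissimilarity between tie profiles, Ward clustering, 3 positions.
profiles = np.hstack([A_treated, A_treated.T])    # outgoing and incoming ties as the actor profile
d = pdist(profiles, metric="euclidean")
partition = fcluster(linkage(d, method="ward"), t=3, criterion="maxclust")

print("ARI vs true partition:", adjusted_rand_score(groups, partition))
```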
74.
Missing data is an important, but often ignored, aspect of a network study. Measurement validity is affected by missing data, but the level of bias can be difficult to gauge. Here, we describe the effect of missing data on network measurement across widely different circumstances. In Part I of this study (Smith and Moody, 2013), we explored the measurement bias due to randomly missing nodes. Here, we drop the assumption that data are missing at random: what happens to estimates of key network statistics when central nodes are more or less likely to be missing? We answer this question using a wide range of empirical networks and network measures. We find that bias is worse when more central nodes are missing. With respect to network measures, Bonacich centrality is highly sensitive to the loss of central nodes, while closeness centrality is not; distance and bicomponent size are more affected than triad summary measures, and behavioral homophily is more robust than degree homophily. With respect to types of networks, larger, directed networks tend to be more robust, but the relation is weak. We end the paper with a practical application, showing how researchers can use our results (translated into a publicly available Java application) to gauge the bias in their own data.
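The comparison reported here can be mocked up as follows: remove a fixed number of nodes either at random or in order of centrality, recompute a network statistic, and compare the shift against the full-network value. The graph model, sample sizes, and the use of degree as the "central node" criterion are all assumptions for illustration, not the paper's procedure.

```python
# Sketch: how a network statistic changes when random vs. highly central
# nodes are missing (illustrative graph and sizes only).
import networkx as nx
import numpy as np

rng = np.random.default_rng(2)
G = nx.barabasi_albert_graph(300, 3, seed=2)      # stand-in for an empirical network

def mean_closeness(graph):
    return np.mean(list(nx.closeness_centrality(graph).values()))

true_value = mean_closeness(G)

k = 30                                            # number of "non-respondent" nodes
deg = dict(G.degree())

# (a) nodes missing completely at random
random_missing = rng.choice(list(G.nodes()), size=k, replace=False)
# (b) the most central (here: highest-degree) nodes missing
central_missing = sorted(deg, key=deg.get, reverse=True)[:k]

for label, missing in [("random", random_missing), ("central", central_missing)]:
    H = G.copy()
    H.remove_nodes_from(missing)
    bias = mean_closeness(H) - true_value
    print(label, "bias:", round(bias, 4))
```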
75.
This article considers inference for the log-normal distribution based on progressive Type I interval censored data by both frequentist and Bayesian methods. First, the maximum likelihood estimates (MLEs) of the unknown model parameters are computed by the expectation-maximization (EM) algorithm, and the asymptotic standard errors (ASEs) of the MLEs are obtained by applying the missing information principle. Next, the Bayes estimates of the model parameters are obtained by Gibbs sampling under both symmetric and asymmetric loss functions; the Gibbs sampling scheme is facilitated by adopting a data augmentation scheme similar to that of the EM algorithm. The performance of the MLEs and the various Bayesian point estimates is judged via a simulation study. A real dataset is analyzed for the purpose of illustration.
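For orientation, the progressive Type I interval censored likelihood for a lognormal lifetime can be written down and maximized directly, as in the sketch below. The inspection times, interval failure counts, and withdrawal counts are invented, and the article's EM iterations and Gibbs sampler are not reproduced here.

```python
# Sketch of the progressive Type-I interval censored log-likelihood for a
# lognormal model, maximized numerically (illustrative data only).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # inspection times t_1 < ... < t_m
d = np.array([12, 18, 15, 9, 5])           # failures observed in each interval (t_{j-1}, t_j]
r = np.array([3, 2, 4, 2, 30])             # units withdrawn (censored) at each t_j

def neg_loglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    F = norm.cdf((np.log(t) - mu) / sigma)        # lognormal CDF at the inspection times
    F_prev = np.concatenate(([0.0], F[:-1]))
    # failures contribute the interval probability, withdrawals the survival probability
    ll = np.sum(d * np.log(F - F_prev)) + np.sum(r * np.log(1.0 - F))
    return -ll

res = minimize(neg_loglik, x0=[1.0, 0.0], method="Nelder-Mead")
print("mu:", res.x[0], "sigma:", np.exp(res.x[1]))
```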
76.
黄雪成 《统计研究》2019,36(6):15-27
The impact of basic data quality problems on PPP (purchasing power parity) estimates requires not only a qualitative understanding but, more importantly, quantitative evidence. Following the basic logic of data auditing, this paper first abstracts price data quality problems into two types: missing data and distorted data. Drawing on the idea of controlled experiments and taking the CPD (country-product-dummy) method as the basis, it then quantitatively simulates the two types of quality problems in different forms and to different degrees, and systematically examines the characteristics and patterns of the resulting bias in CPD-based PPP estimates. Finally, based on the simulation results, it offers suggestions for controlling and improving the quality of ICP basic data and for quantitatively assessing and adjusting the comparison results.
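The CPD regression at the core of this analysis can be sketched as below: regress log prices on product and country dummies, read the PPPs off the country coefficients, and observe how the estimates shift when some prices are deleted. The price data, true price levels, and the deletion pattern are invented stand-ins, not the paper's simulation design.

```python
# Sketch of the CPD (country-product-dummy) regression with a simple
# missing-price perturbation (illustrative data only).
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
countries = ["A", "B", "C"]                        # "A" is the base country
products = [f"item{i}" for i in range(20)]
true_ppp = {"A": 1.0, "B": 2.0, "C": 5.0}          # true price level relative to the base

rows = []
for c in countries:
    for p in products:
        price = true_ppp[c] * rng.lognormal(mean=1.0, sigma=0.3)
        rows.append((c, p, price))
df = pd.DataFrame(rows, columns=["country", "product", "price"])

def cpd_ppp(data):
    # log price = constant + product effects + country effects (base country omitted)
    X = pd.get_dummies(data[["country", "product"]], drop_first=True, dtype=float)
    X.insert(0, "const", 1.0)
    y = np.log(data["price"].to_numpy())
    beta, *_ = np.linalg.lstsq(X.to_numpy(), y, rcond=None)
    coef = dict(zip(X.columns, beta))
    return {c: np.exp(coef.get(f"country_{c}", 0.0)) for c in countries}

print("complete data:", cpd_ppp(df))

# Knock out 30% of country C's prices to mimic missing basic data.
drop = df[df["country"] == "C"].sample(frac=0.3, random_state=3).index
print("with missing data:", cpd_ppp(df.drop(index=drop)))
```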
77.
This work was motivated by a real problem of comparing binary diagnostic tests against a gold standard, where the collected data showed that the large majority of classifications were incomplete, and the feedback received from the medical doctors allowed us to treat the missingness as non-informative. Taking into account the degree of data incompleteness, we used a Bayesian approach via MCMC methods to draw inferences on the accuracy measures of interest. A direct implementation in well-known software showed serious chain-convergence problems. The difficulties were overcome by proposing a simple, efficient, and easily adaptable data augmentation algorithm, implemented through an ad hoc computer program.
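To make the data-augmentation idea concrete, the sketch below runs a small Gibbs sampler for a single binary test with a mostly missing gold standard: Beta priors on prevalence, sensitivity, and specificity, alternating between imputing the missing disease status and drawing the parameters. This is a generic illustration with simulated data, not the authors' ad hoc program or their multi-test model.

```python
# Sketch: Gibbs sampler with data augmentation for one binary test and a
# partially verified (non-informatively missing) gold standard.
import numpy as np

rng = np.random.default_rng(4)

# Simulate data: true prevalence 0.3, sensitivity 0.9, specificity 0.8.
n = 1000
D_true = rng.binomial(1, 0.3, n)
T = np.where(D_true == 1, rng.binomial(1, 0.9, n), rng.binomial(1, 0.2, n))
verified = rng.random(n) < 0.15                    # only ~15% have the gold standard
D = np.where(verified, D_true, -1)                 # -1 marks a missing classification

pi, se, sp = 0.5, 0.8, 0.8                         # initial values
draws = []
for it in range(3000):
    # 1) Impute missing disease status given the test result and current parameters.
    p1 = np.where(T == 1, pi * se, pi * (1 - se))
    p0 = np.where(T == 1, (1 - pi) * (1 - sp), (1 - pi) * sp)
    prob = p1 / (p1 + p0)
    D_aug = np.where(D == -1, rng.binomial(1, prob), D)

    # 2) Draw the parameters from their Beta full conditionals (Beta(1,1) priors).
    pi = rng.beta(1 + D_aug.sum(), 1 + n - D_aug.sum())
    se = rng.beta(1 + ((D_aug == 1) & (T == 1)).sum(), 1 + ((D_aug == 1) & (T == 0)).sum())
    sp = rng.beta(1 + ((D_aug == 0) & (T == 0)).sum(), 1 + ((D_aug == 0) & (T == 1)).sum())
    if it >= 1000:                                 # discard burn-in
        draws.append((pi, se, sp))

print("posterior means (prevalence, sensitivity, specificity):",
      np.mean(draws, axis=0).round(3))
```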
78.
79.
80.
Using statistical graphical methods, this study finds a precautionary (defensive) mechanism in the preferences of Chinese rural households. With an appropriately chosen model and econometric tools, subsequent parameter tests also confirm that this mechanism exists. Precautionary saving is a behavioral substitute in the absence of financial and insurance institutions; policy should therefore seek to make reasonable use of this behavioral mechanism in order to build an integrated urban-rural national social security system.