Search results: 738 articles (693 subscription-access, 45 open access), published between 1978 and 2023, distributed across statistics (292), sociology (127), management (80), general studies (72), demography (67), collected works (56), theory and methodology (39), and ethnology (5).
1.
Abstract

In general, survival data are time-to-event data, such as time to death, time to appearance of a tumor, or time to recurrence of a disease. Models for survival data have frequently been based on the proportional hazards model proposed by Cox, which is applied extensively in the social, medical, behavioral, and public health sciences. In this paper we propose a more efficient sampling method for recruiting subjects into a survival study: a Moving Extreme Ranked Set Sampling (MERSS) scheme with ranking based on an easy-to-evaluate baseline auxiliary variable known to be associated with survival time. This paper demonstrates that the approach provides a more powerful testing procedure, as well as a more efficient estimate of the hazard ratio, than one based on simple random sampling (SRS). Theoretical derivations and simulation studies are provided, and the Iowa 65+ Rural study data are used to illustrate the methods developed in this paper.
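For readers unfamiliar with the scheme, the following is a minimal sketch of one common MERSS variant (recruiting the maximum of candidate sets of increasing size, ranked on the cheap auxiliary variable). The set size, the distribution of the auxiliary variable, and the function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def merss_max_sample(aux, m, rng=None):
    """Moving Extreme Ranked Set Sampling (maximum variant) -- a sketch.

    For i = 1..m, draw i candidate units at random and keep the unit whose
    auxiliary value is largest.  Only the m retained units would then be
    followed up on the expensive survival endpoint.
    """
    rng = np.random.default_rng(rng)
    n = len(aux)
    chosen = []
    for i in range(1, m + 1):
        candidates = rng.choice(n, size=i, replace=False)  # cheap ranking set
        best = candidates[np.argmax(aux[candidates])]      # extreme by auxiliary variable
        chosen.append(best)
    return np.array(chosen)

# Toy usage: an auxiliary variable loosely related to the (unobserved) survival time.
rng = np.random.default_rng(0)
aux = rng.gamma(shape=2.0, scale=1.0, size=1000)
idx = merss_max_sample(aux, m=10, rng=1)
print("indices of recruited subjects:", idx)
```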
2.
贺建风  李宏煜 《统计研究》2021,38(4):131-144
In the era of the digital economy, social networks, as an important carrier of the digital platform economy, have attracted wide attention from scholars at home and abroad. In the big-data context the commercial value of social networks is enormous, but because these networks are unprecedentedly large, traditional network analysis methods are no longer applicable owing to their excessive computational cost. Drawing a sample network with a network sampling algorithm and then inferring properties of the full network saves computational resources, so the quality of the sampling algorithm directly affects the accuracy of conclusions drawn from social network analysis. Existing social network sampling algorithms ignore the internal topological structure of the network, easily become trapped in local subnetworks, and suffer from low sampling efficiency. To remedy these shortcomings, this paper exploits the community structure of large social networks and proposes a clustered random walk sampling algorithm. The method first partitions the nodes of the original network into communities using a community clustering algorithm, and then obtains a sample network by performing a random walk within each community. Results from both numerical simulations and a case study show that the clustered random walk sampling algorithm overcomes the drawbacks of traditional network sampling algorithms and preserves the structural characteristics of the original network well while reducing its scale. In addition, the algorithm can be run in parallel, which effectively improves sampling efficiency and is of great practical significance for sampling large-scale social networks in the big-data setting.
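Below is a minimal sketch of the two-stage idea summarized above (community partitioning followed by a random walk inside each community), written with networkx. The use of greedy modularity for the clustering step, the walk length, and the toy network are illustrative assumptions rather than the authors' exact algorithm.

```python
import random
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def clustered_random_walk_sample(G, steps_per_community=50, seed=0):
    """Partition G into communities, then random-walk within each one.

    Returns the subgraph induced by all nodes visited by the per-community
    walks, which serves as the sampled network.
    """
    rng = random.Random(seed)
    sampled = set()
    for community in greedy_modularity_communities(G):
        sub = G.subgraph(community)
        node = rng.choice(list(sub.nodes))
        for _ in range(steps_per_community):
            sampled.add(node)
            neighbors = list(sub.neighbors(node))
            if not neighbors:                      # isolated node: restart inside the community
                node = rng.choice(list(sub.nodes))
            else:
                node = rng.choice(neighbors)
    return G.subgraph(sampled).copy()

# Toy usage on a synthetic network with planted communities.
G = nx.planted_partition_graph(l=4, k=100, p_in=0.1, p_out=0.002, seed=1)
sample = clustered_random_walk_sample(G, steps_per_community=80)
print(sample.number_of_nodes(), "nodes sampled of", G.number_of_nodes())
```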
3.
Changing Frameworks in Attitudes Toward Abortion
For more than two decades, legal abortion has been the subject of heated political debate and adversarial social movement activity; however, national polls have shown little change in aggregate levels of support for abortion. This analysis examines how the determinants of abortion attitudes changed between 1977 and 1996, using data from the General Social Surveys. While in early time periods whites were more approving of abortion than blacks, that pattern had reversed by the late 1980s. After controlling for other factors, older people are more accepting of abortion throughout the two decades, while gender is generally unrelated to abortion views. Catholic religion weakens slightly as a predictor of abortion attitudes, while religious fundamentalism and political liberalism increase in explanatory power. The associations between attitudinal correlates and abortion approval also change over this time period. Religiosity becomes a less powerful predictor of abortion attitudes, while respondents' attitudes toward sexual freedom and belief in the sanctity of human life increase in their predictive power. Support for gender inequality remains a weak but stable predictor of abortion attitudes. This pattern of results suggests that the public is influenced more by the pro-life framework of viewing abortion than by the pro-choice perspective.
4.
Summary. As a part of the EUREDIT project new methods to detect multivariate outliers in incomplete survey data have been developed. These methods are the first to work with sampling weights and to be able to cope with missing values. Two of these methods are presented here. The epidemic algorithm simulates the propagation of a disease through a population and uses extreme infection times to find outlying observations. Transformed rank correlations are robust estimates of the centre and the scatter of the data. They use a geometric transformation that is based on the rank correlation matrix. The estimates are used to define a Mahalanobis distance that reveals outliers. The two methods are applied to a small data set and to one of the evaluation data sets of the EUREDIT project.
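The sketch below illustrates only the core idea behind the second method: building a Mahalanobis distance from a rank-correlation-based scatter estimate. It is not the EUREDIT implementation, since it handles neither sampling weights nor missing values, and the robust centre/scale choices and the sine transformation of the Spearman correlation are assumptions made for illustration.

```python
import numpy as np
from scipy import stats

def rank_correlation_outliers(X, quantile=0.99):
    """Flag multivariate outliers with a Mahalanobis distance whose scatter
    matrix is derived from the Spearman rank correlation -- a simplified
    sketch only (complete data, no sampling weights)."""
    X = np.asarray(X, dtype=float)
    p = X.shape[1]
    center = np.median(X, axis=0)                                   # robust centre
    scale = stats.median_abs_deviation(X, axis=0, scale="normal")   # robust scale
    ranks = stats.rankdata(X, axis=0)                               # column-wise ranks
    r_s = np.corrcoef(ranks, rowvar=False)                          # Spearman correlation matrix
    corr = 2.0 * np.sin(np.pi * r_s / 6.0)                          # map to a Pearson-consistent correlation
    scatter = np.outer(scale, scale) * corr                         # robust scatter estimate
    diff = X - center
    d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(scatter), diff)
    cutoff = stats.chi2.ppf(quantile, df=p)                         # flag unusually large distances
    return d2 > cutoff, d2

# Toy usage: clean correlated data plus a few gross outliers.
rng = np.random.default_rng(2)
clean = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=200)
dirty = np.vstack([clean, [[6, -6], [7, -5], [5, -7]]])
flags, _ = rank_correlation_outliers(dirty)
print("flagged rows:", np.where(flags)[0])
```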
5.
A Survey of and Reflections on the Practice of College English Writing Instruction
Guided by second language writing theory, this study conducted a sample survey of the current state of college English writing instruction at five universities. The results indicate that students' weak writing ability stems from several levels, including both the teachers' instruction and the students' learning. To improve the quality of college English writing instruction, teachers should change the existing teaching model and strengthen the status of English writing; set writing tasks appropriately to stimulate students' motivation to write; and increase target-language input to expand students' active vocabulary.
6.
7.
Sampling designs that depend on sample moments of auxiliary variables are well known. Lahiri (Bull Int Stat Inst 33:133–140, 1951) considered a sampling design proportionate to the sample mean of an auxiliary variable. Singh and Srivastava (Biometrika 67(1):205–209, 1980) proposed a sampling design proportionate to a sample variance, while Wywiał (J Indian Stat Assoc 37:73–87, 1999) proposed one proportionate to the sample generalized variance of auxiliary variables. Other sampling designs depending on moments of an auxiliary variable were considered, e.g., in Wywiał (Some contributions to multivariate methods in survey sampling. Katowice University of Economics, Katowice, 2003a; Stat Transit 4(5):779–798, 2000), where the accuracy of several sampling strategies was also compared. These sampling designs are not useful when some observations of the auxiliary variable are censored; moreover, they can be far too sensitive to outlying observations. In such cases a sampling design proportionate to an order statistic of the auxiliary variable can be more useful, and such an unequal-probability sampling design is proposed here. Its particular cases as well as its conditional version are also considered, and a sampling scheme implementing the design is proposed. The inclusion probabilities of the first and second order are evaluated, and the well-known Horvitz–Thompson estimator is taken into account. A ratio estimator dependent on an order statistic is constructed; it is similar to the well-known ratio estimator based on the population and sample means, and it is an unbiased estimator of the population mean when the sample is drawn according to the proposed sampling design dependent on the appropriate order statistic.
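As a small illustration of the estimation step mentioned above, here is a sketch of the Horvitz–Thompson estimator of a population total given first-order inclusion probabilities. The Poisson design with probabilities proportional to the auxiliary variable is a stand-in used purely for demonstration; it is not the order-statistic-based design proposed in the paper.

```python
import numpy as np

def horvitz_thompson_total(y, pi, sampled):
    """Horvitz-Thompson estimator of the population total: the sum of
    y_i / pi_i over sampled units, where pi_i is the first-order
    inclusion probability of unit i."""
    return np.sum(y[sampled] / pi[sampled])

# Toy usage: Poisson sampling with inclusion probabilities proportional to
# the auxiliary variable (illustrative stand-in design).
rng = np.random.default_rng(3)
N, n_expected = 500, 50
x = rng.gamma(2.0, 1.0, size=N)                  # auxiliary variable
y = 3.0 * x + rng.normal(0, 1, size=N)           # study variable, correlated with x
pi = np.minimum(1.0, n_expected * x / x.sum())   # first-order inclusion probabilities
sampled = rng.random(N) < pi                     # independent Bernoulli selections
t_hat = horvitz_thompson_total(y, pi, sampled)
print("HT estimate of total:", round(t_hat, 1), " true total:", round(y.sum(), 1))
```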
8.
Statistical process monitoring (SPM) is a very efficient tool for maintaining and improving the quality of a product. In many industrial processes the end product has two or more attribute-type quality characteristics; some of these are independent of one another, yet successive observations are Markov dependent. It is essential to develop a control chart for such situations. In this article, we develop an Independent Attributes Control Chart for Markov Dependent Processes based on an error-probabilities criterion, under the assumption of one-step Markov dependency. Implementation of the chart is similar to that of a Shewhart-type chart. Performance of the chart has been studied using the probability of detecting a shift. A procedure to identify the attribute(s) responsible for the out-of-control status of the process is also given.
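For context, the following is a minimal sketch of what one-step Markov-dependent attribute data might look like in a simulation study of such a chart. The parameterization of the serial dependence and the stationary defect rate are illustrative assumptions, not the authors' model or control chart.

```python
import numpy as np

def simulate_markov_attribute(n, p_defect, dependence, rng):
    """Simulate one-step Markov-dependent pass/fail observations.

    p_defect is the stationary defect probability; 'dependence' pulls the
    conditional defect probability toward the previous outcome, producing
    serially dependent attribute data (dependence = 0 reduces to i.i.d.).
    """
    x = np.empty(n, dtype=int)
    x[0] = rng.random() < p_defect
    for t in range(1, n):
        p_t = (1 - dependence) * p_defect + dependence * x[t - 1]
        x[t] = rng.random() < p_t
    return x

# Toy usage: compare defect rates of i.i.d. and Markov-dependent streams.
rng = np.random.default_rng(6)
iid = simulate_markov_attribute(10_000, p_defect=0.05, dependence=0.0, rng=rng)
dep = simulate_markov_attribute(10_000, p_defect=0.05, dependence=0.4, rng=rng)
print("defect rates:", iid.mean(), dep.mean())
```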
9.
Simulations of forest inventory in several populations compared simple random with "quick probability proportional to size" (QPPS) sampling. The latter may be applied in the absence of a list sampling frame and/or prior measurement of the auxiliary variable. The correlation between the auxiliary and target variables required to render QPPS sampling more efficient than simple random sampling varied over the range 0.3–0.6 and was lower when sampling from populations that were skewed to the right. Two possible analytical estimators of the standard error of the estimate of the mean for QPPS sampling were found to be less reliable than bootstrapping.
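Here is a minimal sketch of the bootstrap standard error referred to above, applied to a mean estimate from a single sample. The lognormal plot-volume data and the resampling budget are illustrative assumptions, and the QPPS estimator itself is not reproduced.

```python
import numpy as np

def bootstrap_se_of_mean(sample, n_boot=2000, seed=0):
    """Bootstrap standard error of the sample mean: resample with replacement,
    recompute the mean each time, and take the standard deviation of the
    replicate means."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample, dtype=float)
    replicates = np.array([
        rng.choice(sample, size=sample.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    return replicates.std(ddof=1)

# Toy usage on a right-skewed sample, such as a forest inventory plot variable.
rng = np.random.default_rng(4)
plot_volumes = rng.lognormal(mean=3.0, sigma=0.6, size=40)
print("mean:", round(plot_volumes.mean(), 2),
      " bootstrap SE:", round(bootstrap_se_of_mean(plot_volumes), 2))
```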
10.
In many industrial quality control experiments and destructive stress tests, the only available data are successive minima (or maxima), i.e., record-breaking data. Two sampling schemes are used to collect record-breaking data: random sampling and inverse sampling. In random sampling, the total sample size is predetermined and the number of records is a random variable; in inverse sampling, the number of records to be observed is predetermined, so the sample size is a random variable. The purpose of this paper is to determine, via simulations, which of the two schemes, if either, is more efficient. Since the two schemes are equivalent asymptotically, the simulations were carried out for small to moderate-sized record-breaking samples. Simulated biases and mean square errors of the maximum likelihood estimators of the parameters under the two sampling schemes were compared. In general, when the estimators were well behaved, there was no significant difference between the mean square errors of the estimates for the two schemes. However, for certain distributions described by both a shape and a scale parameter, random sampling led to estimators that were inconsistent, whereas the estimates obtained from inverse sampling were always consistent. Moreover, for moderate-sized record-breaking samples, the total sample size that needs to be observed is smaller for inverse sampling than for random sampling.
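To make the two schemes concrete, here is a sketch of how record-breaking (successive-minimum) data arise under each of them. The exponential stress-test distribution and the sample-size settings are illustrative assumptions.

```python
import numpy as np

def records_random_sampling(draw, n_total, rng):
    """Random sampling: observe a fixed number of units and keep the
    record-breaking (successive-minimum) values; the number of records
    is random."""
    obs = draw(n_total, rng)
    records, current_min = [], np.inf
    for x in obs:
        if x < current_min:
            records.append(x)
            current_min = x
    return np.array(records)

def records_inverse_sampling(draw, n_records, rng):
    """Inverse sampling: keep observing until a fixed number of records
    (successive minima) has been seen; the total sample size is random."""
    records, current_min, n_obs = [], np.inf, 0
    while len(records) < n_records:
        x = draw(1, rng)[0]
        n_obs += 1
        if x < current_min:
            records.append(x)
            current_min = x
    return np.array(records), n_obs

# Toy usage with exponential stress-test data (an illustrative choice of distribution).
rng = np.random.default_rng(5)
draw = lambda n, r: r.exponential(scale=2.0, size=n)
print("random sampling records:", records_random_sampling(draw, n_total=200, rng=rng))
recs, n_obs = records_inverse_sampling(draw, n_records=5, rng=rng)
print("inverse sampling records:", recs, " total observations needed:", n_obs)
```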