11.
The purpose was to assess RDS estimators in populations simulated with diverse connectivity characteristics, incorporating the putative influence of misreported degrees and of the transmission process. Four populations were simulated using different random graph models, and each population was “infected” using four different transmission processes. From each population × transmission combination, one thousand samples were obtained using an RDS-like sampling strategy, and three estimators were used to predict the population-level prevalence of the “infection”. Several types of misreported degrees were simulated. In addition, samples were generated by standard random sampling and the corresponding prevalences were estimated with the classical frequentist estimator. Estimation bias relative to the population parameters was assessed, as well as the variance. Variability was associated with the connectivity characteristics of each simulated population: clustered populations yielded greater variability, and no RDS-based strategy could correct the estimation biases. Misreported degrees had modest effects, especially when RDS estimators were used. The best results for RDS-based samples were observed when the “infection” was attributed at random, without any relation to the underlying network structure.
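A minimal sketch of the kind of experiment described above, assuming a simple Erdős–Rényi population, a random (structure-independent) “infection”, and the RDS-II inverse-degree-weighted estimator; all parameter values are illustrative assumptions, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulate a simple Erdos-Renyi-type population graph (illustrative sizes).
n, p_edge = 2000, 0.005
A = rng.random((n, n)) < p_edge
A = np.triu(A, 1)
A = A | A.T                              # symmetric adjacency, no self-loops
degree = A.sum(axis=1)

# --- "Infect" 10% of the nodes completely at random (no link to network structure).
infected = rng.random(n) < 0.10

def rds_sample(A, degree, n_sample, n_seeds=5, coupons=3):
    """RDS-like referral chain: seeds chosen at random, each recruit passes
    up to `coupons` referrals to not-yet-sampled neighbours."""
    eligible = np.flatnonzero(degree > 0)
    frontier = list(rng.choice(eligible, size=n_seeds, replace=False))
    visited = set(frontier)
    nodes = list(frontier)
    while len(nodes) < n_sample and frontier:
        new_frontier = []
        for v in frontier:
            nbrs = [u for u in np.flatnonzero(A[v]) if u not in visited]
            for u in rng.permutation(nbrs)[:coupons]:
                visited.add(u)
                nodes.append(u)
                new_frontier.append(u)
                if len(nodes) >= n_sample:
                    return np.array(nodes)
        frontier = new_frontier
    return np.array(nodes)

sample = rds_sample(A, degree, n_sample=300)
y = infected[sample].astype(float)
d = degree[sample].astype(float)

# --- RDS-II (inverse-degree-weighted) prevalence estimate vs. the naive proportion.
p_hat_rds = np.sum(y / d) / np.sum(1.0 / d)
print(f"true={infected.mean():.3f}  RDS-II={p_hat_rds:.3f}  naive={y.mean():.3f}")
```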
12.
Many tree algorithms have been developed for regression problems. Although they are regarded as good algorithms, most of them suffer from a loss of prediction accuracy when there are many irrelevant variables and the number of predictors exceeds the number of observations. We propose a multistep regression tree with adaptive variable selection to handle this problem. The multistep method comprises a variable selection step and a fitting step.

The multistep generalized unbiased interaction detection and estimation (GUIDE) with adaptive forward selection (fg) algorithm, used as a variable selection tool, performs better than several well-known variable selection algorithms such as efficacy adaptive regression tube hunting (EARTH), false selection rate (FSR), least squares cross-validation (LSCV), and the least absolute shrinkage and selection operator (LASSO) for the regression problem. Simulation results show that fg outperforms the other algorithms in terms of both selection quality and computation time: it generally selects the important variables correctly with relatively few irrelevant ones, which gives good prediction accuracy at a lower computational cost.
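A hedged sketch of the two-step idea on simulated data with many irrelevant predictors: a generic greedy forward selection (scored by cross-validated tree fit) followed by a regression tree on the selected variables. This is an illustration using scikit-learn, not the authors' GUIDE/fg algorithm; the data, model-size cap, and tree depths are assumptions:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical data: p > n with only three relevant predictors.
n, p = 100, 200
X = rng.standard_normal((n, p))
y = 2 * X[:, 0] - 3 * X[:, 1] + X[:, 2] + rng.standard_normal(n)

# Step 1: greedy forward selection, scored by cross-validated R^2 of a small tree.
selected, remaining, best_score = [], list(range(p)), -np.inf
for _ in range(10):                          # cap on model size (assumption)
    scores = []
    for j in remaining:
        cols = selected + [j]
        s = cross_val_score(DecisionTreeRegressor(max_depth=3, random_state=0),
                            X[:, cols], y, cv=5).mean()
        scores.append((s, j))
    s_best, j_best = max(scores)
    if s_best <= best_score:                 # stop when adding a variable no longer helps
        break
    best_score = s_best
    selected.append(j_best)
    remaining.remove(j_best)

# Step 2: fit the final regression tree on the selected variables only.
tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X[:, selected], y)
print("selected columns:", selected)
```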
14.
From an economic viewpoint in reliability theory, this paper addresses a scheduled replacement problem for a single operating system that works at random times on multiple jobs. The system is subject to stochastic failure, which triggers imperfect maintenance according to a random failure mechanism: minimal repair for a type-I (repairable) failure, or corrective replacement for a type-II (non-repairable) failure. Three scheduling models for the system with multiple jobs are considered: a single job, N tandem jobs, and N parallel jobs. To control the deterioration process, preventive replacement is planned at a scheduled time T or at the completion time of the jobs in each model. The objective is to determine the optimal scheduling parameters (T* or N*) that minimize the mean cost rate function over a finite time horizon for each model. A numerical example illustrates the proposed analytical models. Because the framework and analysis are general, the proposed models extend several existing results.
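A minimal sketch of the cost trade-off behind scheduled replacement with minimal repair, using the classical long-run mean cost rate C(T) = (c_p + c_m·H(T))/T with a Weibull cumulative hazard H; this is a simplified single-job, infinite-horizon version, not the paper's finite-horizon multi-job models, and all costs and Weibull parameters below are assumptions:

```python
import numpy as np

# Weibull hazard parameters and costs (hypothetical values).
alpha, beta = 100.0, 2.5       # Weibull scale and shape
c_p, c_m = 50.0, 8.0           # preventive-replacement cost, minimal-repair cost

def cost_rate(T):
    """Long-run mean cost rate when replacing at time T with minimal repair at failures."""
    H = (T / alpha) ** beta    # cumulative hazard of a Weibull(alpha, beta) lifetime
    return (c_p + c_m * H) / T

T_grid = np.linspace(1.0, 300.0, 3000)
T_star = T_grid[np.argmin(cost_rate(T_grid))]
print(f"approximately optimal scheduling time T* = {T_star:.1f}")
```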
15.
Let X = {X1, X2, …} be a sequence of independent but not necessarily identically distributed random variables, and let η be a counting random variable independent of X. Consider the randomly stopped sum Sη = X1 + ⋯ + Xη and the random maximum S(η) = max{S0, S1, …, Sη}. Assuming that each Xk belongs to the class of consistently varying distributions, and building on well-known precise large deviation principles, we prove that under some mild conditions the distributions of Sη and S(η) belong to the same class. Our approach is new, and the results extend those of Kizinevič, Sprindys, and Šiaulys (2016) and Andrulytė, Manstavičius, and Šiaulys (2017).
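For reference, the objects in question and the distribution class can be written as follows (standard definitions, stated for the reader's convenience; the paper's precise mild conditions are not reproduced here):

```latex
\[
S_0 = 0, \qquad
S_\eta = \sum_{k=1}^{\eta} X_k, \qquad
S_{(\eta)} = \max\{S_0, S_1, \ldots, S_\eta\}.
\]
A distribution function $F$ with tail $\overline F = 1 - F$ is consistently varying
($F \in \mathcal{C}$) if
\[
\lim_{y \nearrow 1} \; \limsup_{x \to \infty}
\frac{\overline F(xy)}{\overline F(x)} = 1 .
\]
```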
16.
Xinshi Zahua (《新诗杂话》, “Miscellaneous Notes on New Poetry”) is a study of modern Chinese new poetry by Zhu Ziqing (朱自清). In this work of poetics, Zhu holds firmly to close analysis of the poetic text and insists on careful, thoroughly documented “readings” of a poem's meaning, which he regards as the foundation of new-poetry research. A wide range of topics and a wealth of original insight are the book's most distinctive features. Taking the “explication of poems” as its point of departure, the book explores the relations between poetry and sensation, poetry and philosophy, poetry and humour, poetry and the War of Resistance, poetry and nation-building, poetic form, folk song and translated verse, the progressiveness of new poetry, and the direction of its development. Plain yet profound, precise yet concise, “turning the deep into the shallow” is the most vigorous and vital principle of the book's language and exposition.
17.
Summary.  We develop a general non-parametric approach to the analysis of clustered data via random effects. Assuming only that the link function is known, the regression functions and the distributions of both cluster means and observation errors are treated non-parametrically. Our argument proceeds by viewing the observation error at the cluster mean level as though it were a measurement error in an errors-in-variables problem, and using a deconvolution argument to access the distribution of the cluster mean. A Fourier deconvolution approach could be used if the distribution of the error-in-variables were known. In practice it is unknown, of course, but it can be estimated from repeated measurements, and in this way deconvolution can be achieved in an approximate sense. This argument might be interpreted as implying that large numbers of replicates are necessary for each cluster mean distribution, but that is not so; we avoid this requirement by incorporating statistical smoothing over values of nearby explanatory variables. Empirical rules are developed for the choice of smoothing parameter. Numerical simulations, and an application to real data, demonstrate small sample performance for this package of methodology. We also develop theory establishing statistical consistency.
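A rough sketch of replicate-based Fourier deconvolution in the spirit described above: the error characteristic function is estimated from within-cluster differences of two replicates, then divided out of the empirical characteristic function of the cluster averages. This is a generic illustration, not the authors' full method (which adds smoothing over nearby explanatory-variable values); the simulated error law, bandwidth, and ridge level are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated clusters: W[i, j] = X_i + U_ij with two replicates per cluster.
n = 500
X = rng.normal(0.0, 1.0, n)                    # unobserved cluster means
U = rng.laplace(0.0, 0.5, (n, 2))              # symmetric observation errors (law "unknown")
W = X[:, None] + U

Wbar = W.mean(axis=1)                          # cluster averages: X_i + (U_i1 + U_i2)/2
D = W[:, 0] - W[:, 1]                          # within-cluster differences: error only

t = np.linspace(-15.0, 15.0, 1201)
# For symmetric errors, mean cos(t*D/2) estimates the characteristic function of
# (U_1 + U_2)/2, i.e. the error attached to the cluster average Wbar.
phi_err = np.array([np.mean(np.cos(ti * D / 2.0)) for ti in t])
phi_err = np.clip(phi_err, 0.05, None)         # ridge to keep the division stable (assumption)

phi_Wbar = np.array([np.mean(np.exp(1j * ti * Wbar)) for ti in t])

# Deconvolution kernel density estimate of the cluster-mean density.
h = 0.3                                        # bandwidth (assumption)
K_ft = np.where(np.abs(h * t) <= 1.0, (1.0 - (h * t) ** 2) ** 3, 0.0)
x_grid = np.linspace(-4.0, 4.0, 161)
f_hat = np.array([
    np.trapz(np.real(np.exp(-1j * t * x) * phi_Wbar * K_ft / phi_err), t)
    for x in x_grid
]) / (2.0 * np.pi)
f_hat = np.clip(f_hat, 0.0, None)              # trim small negative wiggles

print("estimated cluster-mean density at 0:", round(float(f_hat[np.abs(x_grid).argmin()]), 3))
```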
18.
This paper illustrates the use of quasi-likelihood methods of inference for hidden Markov random fields. These methods are simple to use and can be employed when only the model form and its covariance structure are specified. In particular, they can be used to derive the same estimating equations as the EM algorithm or change-of-measure methods, which require full distributional assumptions.
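For orientation, the generic quasi-likelihood estimating equation underlying such methods (standard notation; the paper's specific construction for hidden Markov random fields is not reproduced here) is

```latex
\[
U(\beta) \;=\; D(\beta)^{\top} V(\beta)^{-1}\bigl\{\,y - \mu(\beta)\,\bigr\} \;=\; 0,
\qquad D(\beta) = \frac{\partial \mu(\beta)}{\partial \beta^{\top}},
\]
```

where μ(β) = E(Y) under the assumed model form and V(β) is a working covariance matrix for Y; only these first two moments need to be specified.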