21.
Taking a hybrid optoelectronic optical bistable device with delayed feedback as an example, and building on earlier work by others, this paper presents the following new results on optical turbulence: with a suitable choice of the system gain, the original bistable region may split into three; higher-order effects of the delay operator are characterized; and, notably, hysteresis at the bifurcation point is observed experimentally for the first time.
22.
The non-central chi-squared distribution plays a vital role in statistical testing procedures, and estimating its non-centrality parameter provides valuable information for power calculations of the associated test. We are interested in the inferential properties of a non-centrality parameter estimate based on a single observation (usually a summary statistic) from a truncated chi-squared distribution. This work is motivated by flexible two-stage designs in case–control studies, where the sample size needed for the second stage can be determined adaptively from the results of the first stage. We first study the moment estimate for the truncated distribution and prove its existence, uniqueness, inadmissibility, and convergence properties. We then define a new class of estimates that includes the moment estimate as a special case and, within this class, recommend a member that outperforms the moment estimate in a wide range of scenarios. We also present two methods for constructing confidence intervals. Simulation studies evaluate the performance of the proposed point and interval estimates.
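The abstract does not give the estimator's details; the following is only a minimal sketch of the moment-estimate idea it describes, with hypothetical degrees of freedom and truncation point: observe X from a non-central chi-squared distribution truncated to X > c, then solve the moment equation E[X | X > c; λ] = x numerically for λ.

```python
import numpy as np
from scipy import integrate, optimize, stats

def truncated_ncx2_mean(lam, df, c):
    """Mean of a non-central chi-squared(df, lam) variable truncated to X > c."""
    tail = stats.ncx2.sf(c, df, lam)
    num, _ = integrate.quad(lambda x: x * stats.ncx2.pdf(x, df, lam), c, np.inf)
    return num / tail

def moment_estimate(x_obs, df, c, lam_hi=100.0):
    """Solve E[X | X > c; lam] = x_obs for the non-centrality lam.

    Returns 0 when even lam = 0 gives a truncated mean at or above x_obs,
    i.e. the moment equation has no positive root.
    """
    f = lambda lam: truncated_ncx2_mean(lam, df, c) - x_obs
    if f(0.0) >= 0.0:
        return 0.0
    return optimize.brentq(f, 0.0, lam_hi)

# Hypothetical setting: df = 1 and c = 3.84 (the 5% chi-squared cutoff),
# mimicking a statistic that is observed only because stage 1 was significant.
lam_true = 6.0
x = truncated_ncx2_mean(lam_true, df=1, c=3.84)  # stand-in for the observation
lam_hat = moment_estimate(x, df=1, c=3.84)
```

By construction the solver should recover the non-centrality that generated the truncated mean; the truncation point, degrees of freedom, and search bracket are all illustrative assumptions, not values from the paper.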
23.
In disease screening and diagnosis, multiple markers are often measured and combined to improve diagnostic accuracy. McIntosh and Pepe [Combining several screening tests: optimality of the risk score, Biometrics 58 (2002), pp. 657–664] showed that the risk score, defined as the probability of disease conditional on the markers, is the optimal combination for classification by the Neyman–Pearson lemma, and they proposed a two-step procedure to approximate it. However, the resulting receiver operating characteristic (ROC) curve is defined only on a subrange (L, h) of false-positive rates in (0, 1), and determining the lower limit L requires extra prior information. In practice, most diagnostic tests are imperfect, and it is rare for a single marker to be uniformly better than the others. Using simulation, I show that multivariate adaptive regression splines (MARS) are a useful tool for approximating the risk score when combining multiple markers, especially when the ROC curves of the individual tests cross. The resulting ROC curve is defined on the whole range (0, 1), is easy to implement, and has an intuitive interpretation. Sample code for the application is given in the appendix.
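A toy sketch of the risk-score idea, on simulated data where two hypothetical markers' ROC curves cross (one shifts the mean, the other mainly changes the spread in the diseased group). Logistic regression stands in for the MARS fit used in the paper, since a MARS implementation is not assumed here; all data and parameters are made up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)                  # disease status (hypothetical)
m1 = rng.normal(loc=0.8 * y, scale=1.0)    # marker 1: location shift
m2 = rng.normal(loc=0.0, scale=1.0 + 1.5 * y)  # marker 2: scale difference

# A quadratic term in m2 lets the linear risk model exploit the spread signal.
X = np.column_stack([m1, m2, m2 ** 2])
risk = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]  # estimated risk score

auc_combined = roc_auc_score(y, risk)
auc_m1 = roc_auc_score(y, m1)              # single best location marker alone
```

The combined risk score should dominate the single marker in AUC here because marker 2 carries information (through its variance) that marker 1 lacks; this illustrates why combining via the conditional disease probability helps, not the paper's specific results.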
24.
The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data, via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach assuming missing at random (MAR) was the primary analysis. Robustness to departures from MAR was assessed by comparing the primary result with a series of analyses employing various missing-not-at-random (MNAR) assumptions (selection models, pattern-mixture models, and shared-parameter models) and with MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation under the assumption that, after dropout, drug-treated patients followed the trajectory of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result served as the worst reasonable case, defining the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was −2.79 (p = 0.013); under placebo multiple imputation it was −2.17. Results from the other sensitivity analyses ranged from −2.21 to −3.87 and were symmetrically distributed around the primary result, so no clear evidence of bias from missing-not-at-random data was found. Even in the worst reasonable case, the treatment effect was 80% of the magnitude of the primary result; it was therefore concluded that a treatment effect existed. This structured sensitivity framework, in which a worst-reasonable-case result based on a controlled imputation approach with transparent and debatable assumptions is supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally applicable framework. Copyright © 2012 John Wiley & Sons, Ltd.
25.
余芳东. 《统计研究》2013, 30(3): 25–29
The global International Comparison Program (ICP) conducts benchmark surveys roughly once every five or six years. To obtain complete, continuous time series, international organizations estimate purchasing power parity (PPP) figures for non-benchmark years using methods such as aggregate extrapolation, category-level extrapolation, rolling-base-year extrapolation, and reduced-information methods. This paper analyzes and evaluates in detail the non-benchmark-year PPP extrapolation methods in common international use and their results.
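As a minimal sketch of the first method listed, aggregate extrapolation moves a benchmark-year PPP forward by the ratio of the two economies' price growth since the benchmark year. The deflator figures below are entirely made up for illustration.

```python
def extrapolate_ppp(ppp_base, deflator_country, deflator_numeraire):
    """Aggregate extrapolation of a benchmark-year PPP to a target year.

    deflator_* are (benchmark-year, target-year) GDP deflator index pairs;
    the PPP is scaled by relative price growth versus the numeraire country.
    """
    growth_country = deflator_country[1] / deflator_country[0]
    growth_numeraire = deflator_numeraire[1] / deflator_numeraire[0]
    return ppp_base * growth_country / growth_numeraire

# Hypothetical numbers: benchmark-year PPP of 3.5; domestic prices rose 20%
# and numeraire-country prices rose 5% between benchmark and target years.
ppp_t = extrapolate_ppp(3.5, deflator_country=(100.0, 120.0),
                        deflator_numeraire=(100.0, 105.0))
# ppp_t = 3.5 * 1.20 / 1.05 = 4.0
```

The other methods the paper surveys (category-level extrapolation, rolling base year, reduced information) refine this same relative-price-growth logic at finer levels of aggregation.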
26.
We propose an efficient group sequential monitoring rule for clinical trials. At each interim analysis, both efficacy and futility are evaluated through a specified loss structure together with the predicted power. The proposed design is robust to a wide range of priors and achieves the specified power with a smaller sample size than existing adaptive designs. A method is also proposed for obtaining a reduced-bias estimator of the treatment difference under the proposed design. These approaches hold great potential for efficiently selecting the more effective treatment in comparative trials. Operating characteristics are evaluated and compared with those of other group sequential designs in empirical studies, and an example illustrates the application of the method.
27.
Dimitrov, Rachev, and Yakovlev (1985) obtained the isotonic maximum likelihood estimator of a bimodal failure rate function, considering only complete failure time data. A generalization of this estimator to censored and tied observations is proposed here.
28.
The authors derive analytic expressions for the mean and variance of the log-likelihood ratio for testing the equality of k (k ≥ 2) normal populations, and suggest a chi-square approximation and a gamma approximation to the exact null distribution. Numerical comparisons show that both approximations and the original beta approximation of Neyman and Pearson (1931) are accurate, with the gamma approximation the most accurate.
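The abstract does not give the approximation's parameters, so the following is only an illustrative sketch of the gamma-approximation idea: simulate −2 log Λ for complete homogeneity (equal means and variances) under the null, then moment-match a gamma distribution to the draws. Group count, sample sizes, and simulation settings are all assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def neg2_log_lambda(samples):
    """-2 log likelihood ratio for complete homogeneity of k normal
    populations, using MLE (divide-by-n) variance estimates."""
    pooled = np.concatenate(samples)
    stat = len(pooled) * np.log(pooled.var())    # H0: common mean and variance
    for s in samples:
        stat -= len(s) * np.log(s.var())         # H1: per-group mean/variance
    return stat

k, n = 3, 20
null_stats = np.array([
    neg2_log_lambda([rng.normal(size=n) for _ in range(k)])
    for _ in range(20000)
])

# Gamma(a, scale=b) matched to the simulated mean m and variance v.
m, v = null_stats.mean(), null_stats.var()
a, b = m * m / v, v / m
q95_gamma = stats.gamma.ppf(0.95, a, scale=b)
q95_emp = np.quantile(null_stats, 0.95)
```

If the gamma family fits the null distribution well, its moment-matched 95% quantile should sit close to the empirical one; the paper instead derives the matching mean and variance analytically rather than by simulation.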
29.
Longitudinal investigations play an increasingly prominent role in biomedical research. Much of the literature on specifying and fitting linear models for serial measurements uses methods based on the standard multivariate linear model. This article proposes a more flexible approach that permits specification of the expected response as an arbitrary linear function of fixed and time-varying covariates, so that mean-value functions can be derived from subject-matter considerations rather than methodological constraints. Three families of models for the covariance function are discussed: multivariate, autoregressive, and random effects. Illustrations demonstrate the flexibility and utility of the proposed approach to longitudinal analysis.
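As a small illustration of the second covariance family mentioned, an AR(1) working covariance for equally spaced serial measurements has Cov(Y_s, Y_t) = σ²ρ^|s−t|; the visit count, variance, and correlation below are made-up values, not the article's.

```python
import numpy as np

def ar1_cov(n_times, sigma2, rho):
    """AR(1) working covariance for n_times equally spaced serial
    measurements: Cov(Y_s, Y_t) = sigma2 * rho ** |s - t|."""
    idx = np.arange(n_times)
    return sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])

# Hypothetical: 4 visits, residual variance 2.0, serial correlation 0.6.
Sigma = ar1_cov(4, sigma2=2.0, rho=0.6)
# Adjacent visits covary at 2.0 * 0.6 = 1.2; visits two apart at 2.0 * 0.36 = 0.72.
```

The multivariate family would instead leave all entries of Sigma unstructured, and the random-effects family would build Sigma from between-subject variance components plus measurement error.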
30.