71.
72.
The last observation carried forward (LOCF) approach is commonly used to handle missing values in the primary analysis of clinical trials. However, recent evidence suggests that likelihood-based analyses developed under the missing at random (MAR) framework are sensible alternatives. The objective of this study was to assess the Type I error rates from a likelihood-based MAR approach, mixed-model repeated measures (MMRM), compared with LOCF when estimating treatment contrasts for mean change from baseline to endpoint (Δ). Data emulating neuropsychiatric clinical trials were simulated in a 4 × 4 factorial arrangement of scenarios, using four patterns of mean changes over time and four strategies for deleting data to generate subject dropout via an MAR mechanism. In data with no dropout, estimates of Δ and SE(Δ) from MMRM and LOCF were identical. In data with dropout, the Type I error rates (averaged across all scenarios) for MMRM and LOCF were 5.49% and 16.76%, respectively. In 11 of the 16 scenarios, the Type I error rate from MMRM was at least 1.00% closer to the expected rate of 5.00% than the corresponding rate from LOCF. In no scenario did LOCF yield a Type I error rate that was at least 1.00% closer to the expected rate than the corresponding rate from MMRM. The average estimate of SE(Δ) from MMRM was greater in data with dropout than in complete data, whereas the average estimate of SE(Δ) from LOCF was smaller in data with dropout than in complete data, suggesting that the standard errors from MMRM better reflected the uncertainty in the data. The results from this investigation support those from previous studies, which found that MMRM provided reasonable control of Type I error even when data were missing not at random (MNAR). No universally best approach to the analysis of longitudinal data exists. However, likelihood-based MAR approaches have been shown to perform well in a variety of situations and are a sensible alternative to the LOCF approach. MNAR methods can be used within a sensitivity-analysis framework to test the potential presence and impact of MNAR data, thereby assessing the robustness of results from an MAR method. Copyright © 2004 John Wiley & Sons, Ltd.
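To make the two strategies concrete, here is a minimal sketch (not the authors' code) contrasting an LOCF endpoint analysis with a likelihood-based mixed-model analysis on simulated long-format trial data. The column names (subject, week, group, change) are hypothetical, and statsmodels' MixedLM with a random intercept is used only as a stand-in for a full MMRM with an unstructured within-subject covariance.

```python
# Sketch: LOCF endpoint analysis vs. a likelihood-based mixed model on
# simulated data with MAR-style dropout. Names and setup are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate a small two-arm trial in long format.
rows = []
for subj in range(200):
    group = subj % 2                      # 0 = placebo, 1 = active
    change = 0.0
    for week in range(1, 5):
        change += rng.normal(-0.5 * group, 1.0)
        rows.append((subj, week, group, change))
df = pd.DataFrame(rows, columns=["subject", "week", "group", "change"])

# MAR-style dropout: subjects with a high observed week-2 value drop out.
drop = df.query("(week == 2) & (change > 0.5)")["subject"]
df_obs = df[~(df["subject"].isin(drop) & (df["week"] > 2))]

# --- LOCF: carry each subject's last observed value forward to endpoint ---
locf_endpoint = (df_obs.sort_values("week")
                       .groupby("subject")
                       .last()                      # latest observed record
                       .reset_index())
locf_contrast = locf_endpoint.groupby("group")["change"].mean().diff().iloc[-1]

# --- Likelihood-based mixed model (random intercept; an approximation to
# --- MMRM, which would use an unstructured within-subject covariance) ---
mmrm_like = smf.mixedlm("change ~ C(week) * C(group)", df_obs,
                        groups=df_obs["subject"]).fit()

print("LOCF endpoint treatment contrast:", round(locf_contrast, 3))
print(mmrm_like.summary())
```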
73.
74.
Tort disputes are the principal kind of case resolved through the civil litigation system. As society develops, tort cases have gradually increased, and excessive tort litigation brings many adverse consequences: it not only harms the legal system itself, but also has a serious negative impact on economic development. To address this expansion of tort litigation, the right means of keeping it in check must be found.
75.
Summary. Non-ignorable missing data, a serious problem in both clinical trials and observational studies, can lead to biased inferences. Quality-of-life measures have become increasingly popular in clinical trials. However, these measures are often incompletely observed, and investigators may suspect that missing quality-of-life data are likely to be non-ignorable. Although several recent references have addressed missing covariates in survival analysis, they all required the assumption that missingness is at random or that all covariates are discrete. We present a method for estimating the parameters in the Cox proportional hazards model when missing covariates may be non-ignorable and continuous or discrete. Our method is useful in reducing the bias and improving efficiency in the presence of missing data. The methodology clearly specifies assumptions about the missing data mechanism and, through sensitivity analysis, helps investigators to understand the potential effect of missing data on study results.
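For context, the sketch below shows the naive complete-case Cox fit that such non-ignorable missingness can bias; it is not the authors' estimator. The data are simulated, the covariate names (qol, age) are hypothetical, and the fit uses the lifelines package.

```python
# Sketch: complete-case Cox regression when a covariate is missing
# non-ignorably (illustrative baseline only, not the paper's method).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
qol = rng.normal(size=n)                            # quality-of-life covariate
age = rng.normal(60, 10, size=n)
time = rng.exponential(scale=np.exp(-0.5 * qol))    # hazard depends on qol
event = (rng.uniform(size=n) < 0.8).astype(int)

# Non-ignorable missingness: qol is more likely missing when qol is low.
miss = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(2.0 + qol))
qol_obs = np.where(miss, np.nan, qol)

df = pd.DataFrame({"time": time, "event": event, "qol": qol_obs, "age": age})

# Complete-case analysis: drops every subject with missing qol.
cph = CoxPHFitter()
cph.fit(df.dropna(), duration_col="time", event_col="event")
cph.print_summary()
```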
76.
We define a class of count distributions which includes the Poisson as well as many alternative count models. Then the empirical probability generating function is utilized to construct a test for the Poisson distribution, which is consistent against this class of alternatives. The limit distribution of the test statistic is derived in case of a general underlying distribution, and efficiency considerations are addressed. A simulation study indicates that the new test is comparable in performance to more complicated omnibus tests.
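As an illustration of the general idea, the following sketch builds a probability-generating-function (PGF) test for the Poisson: it compares the empirical PGF, g_n(t) = n^{-1} sum_i t^{X_i}, with the fitted Poisson PGF exp(lambda_hat * (t - 1)) over t in [0, 1], and calibrates the distance by parametric bootstrap. This is one common form of such a statistic, not necessarily the exact one studied in the paper.

```python
# Sketch: a PGF-based goodness-of-fit test for the Poisson distribution,
# calibrated by parametric bootstrap (illustrative form of the statistic).
import numpy as np

def pgf_statistic(x, t_grid):
    lam = x.mean()                                        # MLE of the Poisson mean
    emp = (t_grid[:, None] ** x[None, :]).mean(axis=1)    # empirical PGF on grid
    fit = np.exp(lam * (t_grid - 1.0))                    # fitted Poisson PGF
    # Riemann approximation of n * integral_0^1 (emp - fit)^2 dt
    return len(x) * np.mean((emp - fit) ** 2)

def pgf_test(x, n_boot=999, seed=0):
    rng = np.random.default_rng(seed)
    t_grid = np.linspace(0.0, 1.0, 101)
    t_obs = pgf_statistic(x, t_grid)
    lam = x.mean()
    boot = np.array([pgf_statistic(rng.poisson(lam, size=len(x)), t_grid)
                     for _ in range(n_boot)])
    p_value = (1 + np.sum(boot >= t_obs)) / (n_boot + 1)
    return t_obs, p_value

# Example: Poisson data should not be rejected; overdispersed data should be.
rng = np.random.default_rng(42)
print(pgf_test(rng.poisson(3.0, 200)))                    # Poisson sample
print(pgf_test(rng.negative_binomial(2, 0.4, 200)))       # negative-binomial sample
```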
77.
Parameter design or robust parameter design (RPD) is an engineering methodology intended as a cost-effective approach for improving the quality of products and processes. The goal of parameter design is to choose the levels of the control variables that optimize a defined quality characteristic. An essential component of RPD involves the assumption of well estimated models for the process mean and variance. Traditionally, the modeling of the mean and variance has been done parametrically. It is often the case, particularly when modeling the variance, that nonparametric techniques are more appropriate due to the nature of the curvature in the underlying function. Most response surface experiments involve sparse data. In sparse data situations with unusual curvature in the underlying function, nonparametric techniques often result in estimates with problematic variation whereas their parametric counterparts may result in estimates with problematic bias. We propose the use of semi-parametric modeling within the robust design setting, combining parametric and nonparametric functions to improve the quality of both mean and variance model estimation. The proposed method will be illustrated with an example and simulations.
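A minimal sketch of the semi-parametric idea follows; it is not the authors' exact estimator. A low-order parametric model is fitted to the process mean, and a kernel-smoothed correction of its residuals picks up curvature the parametric form misses; the same recipe can be applied to log squared residuals for the variance model. The design points and response are simulated and purely illustrative.

```python
# Sketch: semi-parametric fit = parametric polynomial fit + kernel-smoothed
# correction of its residuals (illustrative data and bandwidth).
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1, 1, 40))                          # sparse design points
y = 1 + 0.5 * x + np.sin(3 * x) + rng.normal(0, 0.2, x.size)  # unusual curvature

# Parametric component: quadratic polynomial (may be biased).
beta = np.polyfit(x, y, deg=2)
y_par = np.polyval(beta, x)

# Nonparametric component: Nadaraya-Watson smooth of the residuals.
def nw_smooth(x0, x, r, h=0.2):
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w * r).sum(axis=1) / w.sum(axis=1)

resid = y - y_par
y_semi = y_par + nw_smooth(x, x, resid)                      # semi-parametric fit

# The analogous construction on log squared residuals would give the
# variance model needed for robust parameter design.
print(np.round(y_semi[:5], 3))
```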
78.
Many applications of nonparametric tests based on curve estimation involve selecting a smoothing parameter. The author proposes an adaptive test that combines several generalized likelihood ratio tests in order to get power performance nearly equal to whichever of the component tests is best. She derives the asymptotic joint distribution of the component tests and that of the proposed test under the null hypothesis. She also develops a simple method of selecting the smoothing parameters for the proposed test and presents two approximate methods for obtaining its P-value. Finally, she evaluates the proposed test through simulations and illustrates its application to a set of real data.
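The sketch below gives a simplified illustration of the adaptive idea (not the author's exact statistic or asymptotics): a smoothing-based likelihood-ratio-type statistic is computed at several bandwidths, each is standardized under the null by simulation, and the maximum is taken, so the combined test tracks whichever single-bandwidth test happens to be most powerful.

```python
# Sketch: adaptive combination of smoothing-based tests over a bandwidth grid.
import numpy as np

rng = np.random.default_rng(0)

def glr_like(x, y, h):
    """Crude likelihood-ratio-type statistic for H0: constant mean."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    fit = (w * y).sum(axis=1) / w.sum(axis=1)        # kernel regression fit
    rss0 = np.sum((y - y.mean()) ** 2)               # null (constant) fit
    rss1 = np.sum((y - fit) ** 2)
    return len(y) / 2.0 * np.log(rss0 / rss1)

def adaptive_test(x, y, bandwidths=(0.05, 0.1, 0.2, 0.4), n_sim=500):
    # Null reference distribution per bandwidth (statistic is scale-invariant,
    # so simulating standard normal errors suffices).
    sims = np.array([[glr_like(x, rng.normal(size=len(x)), h)
                      for h in bandwidths] for _ in range(n_sim)])
    mu, sd = sims.mean(axis=0), sims.std(axis=0)
    obs = np.array([glr_like(x, y, h) for h in bandwidths])
    t_adapt = ((obs - mu) / sd).max()                # adaptive statistic
    t_null = ((sims - mu) / sd).max(axis=1)
    return t_adapt, np.mean(t_null >= t_adapt)       # approximate p-value

x = np.sort(rng.uniform(0, 1, 100))
y = 0.3 * np.sin(4 * np.pi * x) + rng.normal(0, 1, 100)
print(adaptive_test(x, y))
```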
79.
The 1998 Korean Survey of Family Income and Expenditures was used to examine the overall consumption and saving behavior of Korean baby boomers and to compare the consumption and saving behavior of older and younger boomers. The t-test results indicated that the younger boomers allocated a significantly higher percentage of their expenditures to food away from home, household appliances, transportation and communication than did the older boomers, whereas the older boomers spent more and allocated larger budget shares to their children's education than did the younger boomers. The results of Ordinary Least-Squares (OLS) regression analysis showed that, holding other factors constant, older boomers not only spent significantly more on total consumption and on education but also saved significantly less than younger boomers.
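A minimal sketch of the two analyses described above, run on simulated data with hypothetical variable names (older_boomer, educ_share, total_exp, income, hh_size); it is not the authors' data or code.

```python
# Sketch: t-test on budget shares and OLS on total expenditures by cohort.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "older_boomer": rng.integers(0, 2, n),      # 1 = older boomer cohort
    "income": rng.normal(2000, 500, n),
    "hh_size": rng.integers(2, 6, n),
})
df["educ_share"] = 0.10 + 0.04 * df.older_boomer + rng.normal(0, 0.03, n)
df["total_exp"] = 800 + 0.4 * df.income + 120 * df.older_boomer + rng.normal(0, 100, n)

# t-test: education budget share, older vs. younger boomers
print(stats.ttest_ind(df.loc[df.older_boomer == 1, "educ_share"],
                      df.loc[df.older_boomer == 0, "educ_share"],
                      equal_var=False))

# OLS: cohort effect on total expenditures, holding other factors constant
print(smf.ols("total_exp ~ older_boomer + income + hh_size", data=df).fit().summary())
```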
80.
Owing to the extreme quantiles involved, standard control charts are very sensitive to the effects of parameter estimation and non-normality. More general parametric charts have been devised to deal with the latter complication, and corrections have been derived to compensate for the estimation step, both under normal and parametric models. The resulting procedures offer a satisfactory solution over a broad range of underlying distributions. However, situations do occur where even such a large model is inadequate and nothing remains but to consider non-parametric charts. In principle, these form ideal solutions, but the problem is that huge sample sizes are required for the estimation step. Otherwise the resulting stochastic error is so large that the chart is very unstable, a disadvantage that seems to outweigh the advantage of avoiding the model error of the parametric case. Here we analyse under what conditions non-parametric charts actually become feasible alternatives to their parametric counterparts. In particular, corrected versions are suggested for which a possible change point is reached at sample sizes that are markedly less huge (but still larger than the customary range). These corrections serve to control the behaviour during in-control (markedly wrong outcomes of the estimates only occur sufficiently rarely). The price of this protection will clearly be some loss of detection power during out-of-control. A change point comes into view as soon as this loss can be made sufficiently small.
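For concreteness, here is a minimal conceptual sketch of a non-parametric individuals chart whose control limits are extreme empirical quantiles of a Phase I reference sample, together with a crude widening of the limits to stand in for the kind of correction discussed; the actual corrections in the paper are derived differently.

```python
# Sketch: non-parametric control limits from empirical quantiles of a Phase I
# sample, plus an ad hoc widening to reduce in-control false-alarm inflation.
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.0027                                       # nominal two-sided false-alarm rate

phase1 = rng.gamma(shape=4, scale=1.0, size=5000)    # skewed in-control data

# Uncorrected non-parametric limits: extreme empirical quantiles.
lcl, ucl = np.quantile(phase1, [alpha / 2, 1 - alpha / 2])

# Crude correction: widen the limits so that wildly wrong estimates (and hence
# excess false alarms) become rare, at the price of some detection power.
med = np.median(phase1)
lcl_c = lcl - 0.1 * (med - lcl)
ucl_c = ucl + 0.1 * (ucl - med)

phase2 = rng.gamma(shape=4, scale=1.2, size=200)     # shifted (out-of-control) process
signals = np.flatnonzero((phase2 < lcl_c) | (phase2 > ucl_c))
print("limits:", round(lcl_c, 2), round(ucl_c, 2), "signals at:", signals[:10])
```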