Results by access type:
  Fee-based full text: 1093
  Free: 35
  Free (domestic): 1
Results by subject:
  Management: 27
  Ethnology: 1
  Demography: 9
  Collected works: 16
  Theory and methodology: 7
  General: 68
  Sociology: 26
  Statistics: 975
Results by year:
  2023: 6
  2022: 5
  2021: 15
  2020: 26
  2019: 44
  2018: 41
  2017: 68
  2016: 30
  2015: 33
  2014: 22
  2013: 376
  2012: 94
  2011: 35
  2010: 28
  2009: 35
  2008: 28
  2007: 33
  2006: 22
  2005: 26
  2004: 21
  2003: 14
  2002: 19
  2001: 18
  2000: 14
  1999: 3
  1998: 8
  1997: 5
  1996: 7
  1995: 2
  1994: 3
  1993: 5
  1992: 4
  1991: 3
  1990: 5
  1989: 6
  1988: 4
  1986: 2
  1985: 1
  1984: 4
  1983: 4
  1982: 1
  1981: 1
  1980: 1
  1978: 1
  1977: 2
  1976: 1
  1975: 3
A total of 1129 results found; search time: 15 ms.
91.
Universities are bases for "cultivating virtue and nurturing people" and for training high-quality talent. Running socialist universities with Chinese characteristics well under the leadership of the Party requires implementing the "four forms" properly in both thought and action. Correctly recognizing the "four tiers" that currently exist in universities' practice of the "four forms", namely "refined governance", "cultural management", "experience-based control" and "extensive management", is of great significance for university Party organizations in helping the "four forms" take root in universities.
92.
93.
The most common charting procedure used for monitoring the variance of the distribution of a quality characteristic is the S control chart. As a Shewhart-type control chart, it is relatively slow to detect small and moderate shifts in process variance. The performance of the S chart can be improved by supplementing it with runs rules or by varying the sample size and the sampling interval. In this work, we introduce and study one-sided adaptive S control charts, supplemented or not with one powerful runs rule, for detecting increases or decreases in process variation. The properties of the proposed control schemes are obtained by using a Markov chain approach. Furthermore, practical guidance for the choice of the most suitable control scheme is also provided.
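As background to the abstract above, the following Python sketch implements a plain one-sided (upper) S chart with chi-square probability limits; the adaptive sample size/sampling interval and the supplementary runs rule studied in the paper are not reproduced. The function names (upper_s_limit, monitor), the false-alarm rate 0.0027 and the simulated subgroups are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2

# Minimal sketch: a one-sided (upper) S chart for detecting an increase in the
# process standard deviation, using the probability limit implied by
# (n-1)S^2/sigma0^2 ~ chi-square(n-1).  The adaptive features and the runs
# rule from the paper are not implemented here.

def upper_s_limit(sigma0, n, alpha=0.0027):
    """Upper control limit for S when the in-control sd is sigma0 and
    each subgroup has n observations (one-sided false-alarm rate alpha)."""
    return sigma0 * np.sqrt(chi2.ppf(1 - alpha, df=n - 1) / (n - 1))

def monitor(subgroups, sigma0, alpha=0.0027):
    """Return the subgroup standard deviations and a boolean signal vector."""
    s = np.array([np.std(g, ddof=1) for g in subgroups])
    ucl = upper_s_limit(sigma0, len(subgroups[0]), alpha)
    return s, s > ucl

# Illustration with simulated data: the process sd doubles halfway through.
rng = np.random.default_rng(0)
sigma0, n = 1.0, 5
subgroups = [rng.normal(0, sigma0, n) for _ in range(25)] + \
            [rng.normal(0, 2.0 * sigma0, n) for _ in range(25)]
s, signals = monitor(subgroups, sigma0)
print("first signal at subgroup:", int(np.argmax(signals)) if signals.any() else None)
```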
94.
Baseline-adjusted analyses are commonly encountered in practice, and regulatory guidelines endorse this practice. Sample size calculations for this kind of analysis require knowledge of the magnitude of nuisance parameters that are usually not given when the results of clinical trials are reported in the literature. It is therefore quite natural to start with a preliminary calculated sample size based on the sparse information available in the planning phase and to re-estimate the value of the nuisance parameters (and with it the sample size) when a portion of the planned number of patients has completed the study. We investigate the characteristics of this internal pilot study design when an analysis of covariance with normally distributed outcome and one random covariate is applied. For this purpose we first assess the accuracy of four approximate sample size formulae within the fixed sample size design. Then the performance of the recalculation procedure with respect to its actual Type I error rate and power characteristics is examined. The results of simulation studies show that this approach has favorable properties with respect to the Type I error rate and power. Together with its simplicity, these features should make it attractive for practical application. Copyright © 2009 John Wiley & Sons, Ltd.
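The recalculation idea in the abstract above can be illustrated with a minimal Python sketch, assuming the common normal-approximation ANCOVA formula n per group ≈ 2·σ²(1-ρ²)·(z_{1-α/2}+z_{1-β})²/Δ²; this is one of several approximations and not necessarily the one evaluated in the paper. The pilot size m = 40, the effect Δ = 0.5 and the simulated data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

# Minimal sketch of the internal pilot idea for an ANCOVA comparison of two
# groups with one random baseline covariate: plan with an assumed residual
# variance, then re-estimate that variance from the pilot data and recompute n.

def n_per_group(delta, resid_var, alpha=0.05, power=0.80):
    """Approximate per-group size for detecting an adjusted mean difference
    delta when the residual (covariate-adjusted) variance is resid_var."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(2 * resid_var * z**2 / delta**2))

# Planning stage: assumed residual variance sigma^2 * (1 - rho^2).
n_planned = n_per_group(delta=0.5, resid_var=1.0 * (1 - 0.5**2))

# Internal pilot: refit the ANCOVA on the interim data and re-estimate the
# residual variance (blinded and unblinded variants exist in practice).
rng = np.random.default_rng(1)
m = 40  # pilot patients per group (illustrative)
baseline = rng.normal(0, 1, 2 * m)
group = np.repeat([0, 1], m)
outcome = 0.5 * group + 0.5 * baseline + rng.normal(0, 1, 2 * m)
X = sm.add_constant(np.column_stack([group, baseline]))
fit = sm.OLS(outcome, X).fit()
n_recalc = n_per_group(delta=0.5, resid_var=fit.mse_resid)

print("planned n/group:", n_planned, " recalculated n/group:", n_recalc)
```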
95.
The problem of estimating the sample size for a phase III trial on the basis of existing phase II data is considered, where data from phase II cannot be combined with those of the new phase III trial. Focus is on the test for comparing the means of two independent samples. A launching criterion is adopted in order to evaluate the relevance of phase II results: phase III is run if the effect size estimate is higher than a threshold of clinical importance. The variability in sample size estimation is taken into consideration. Then, frequentist conservative strategies with a fixed amount of conservativeness and Bayesian strategies are compared. A new conservative strategy is introduced, based on the calibration of the optimal amount of conservativeness: the calibrated optimal strategy (COS). To evaluate the results we compute the Overall Power (OP) of the different strategies, as well as the mean and the MSE of the sample size estimators. Bayesian strategies have poor characteristics since they show a very high mean and/or MSE of the sample size estimators. COS clearly performs better than the other conservative strategies. Indeed, the OP of COS is, on average, the closest to the desired level; it is also the highest. The COS sample size is also the closest to the ideal phase III sample size MI, showing averages and MSEs lower than those of the other strategies. Costs and experimental times are therefore considerably reduced and standardized. However, if the ideal sample size MI is to be estimated, the phase II sample size n should be around the ideal phase III sample size, i.e. n ≈ 2MI/3. Copyright © 2010 John Wiley & Sons, Ltd.
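A minimal sketch of the kind of conservative plug-in strategy with a fixed amount of conservativeness mentioned above (not the paper's calibrated optimal strategy, COS) might look as follows. The helper names, the launching threshold 0.2 and the phase II numbers are hypothetical, and the standard error of the standardized effect is approximated by sqrt(2/n).

```python
import numpy as np
from scipy.stats import norm

# Sketch: shrink the phase II effect size estimate toward the null before
# plugging it into the usual two-sample normal-approximation size formula.

def n_per_arm(effect_size, alpha=0.05, power=0.90):
    """Standard normal-approximation per-arm size for a two-sample mean test."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(2 * (z / effect_size) ** 2))

def conservative_estimate(d_hat, n2_per_arm, gamma=0.25):
    """Shrink the observed standardized effect d_hat by using a lower
    (1 - gamma) confidence bound (fixed amount of conservativeness gamma)."""
    se = np.sqrt(2.0 / n2_per_arm)            # rough SE of d_hat
    return d_hat - norm.ppf(1 - gamma) * se

d_hat, n2 = 0.45, 30          # illustrative phase II results
threshold = 0.2               # launching criterion: clinically relevant effect
if d_hat > threshold:
    d_cons = conservative_estimate(d_hat, n2)
    print("plug-in n/arm:", n_per_arm(d_hat),
          " conservative n/arm:", n_per_arm(max(d_cons, threshold)))
```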
96.
The Birnbaum-Saunders regression model is becoming increasingly popular in lifetime analyses and reliability studies. In this model, the signed likelihood ratio statistic provides the basis for hypothesis testing and the construction of confidence limits for a single parameter of interest. We focus on the small-sample case, where the standard normal distribution gives a poor approximation to the true distribution of the statistic. We derive three adjusted signed likelihood ratio statistics that lead to very accurate inference even for very small samples. Two empirical applications are presented.
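The unadjusted signed likelihood ratio statistic that such adjustments start from can be sketched for the Birnbaum-Saunders shape parameter alpha, with the scale parameter beta treated as known to keep the example short; the three small-sample adjustments derived in the paper, and the regression structure, are not shown. All numerical values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of r = sign(alpha_hat - alpha0) * sqrt(2*(l(alpha_hat) - l(alpha0)))
# for the Birnbaum-Saunders (BS) shape parameter alpha with known scale beta.

def loglik_alpha(alpha, t, beta):
    """Log-likelihood in alpha (additive constants not involving alpha dropped)."""
    s = np.sum(t / beta + beta / t - 2.0)
    n = len(t)
    return -n * np.log(alpha) - s / (2.0 * alpha**2)

def signed_lr(t, beta, alpha0):
    """Signed likelihood ratio statistic for H0: alpha = alpha0 (beta known)."""
    s = np.sum(t / beta + beta / t - 2.0)
    alpha_hat = np.sqrt(s / len(t))            # closed-form MLE when beta is known
    lr = 2.0 * (loglik_alpha(alpha_hat, t, beta) - loglik_alpha(alpha0, t, beta))
    return np.sign(alpha_hat - alpha0) * np.sqrt(max(lr, 0.0))

# Illustration: simulate BS(alpha=0.5, beta=1) data via the normal representation
# T = beta * (a*Z/2 + sqrt((a*Z/2)^2 + 1))^2, then test alpha0 = 0.5.
rng = np.random.default_rng(2)
a, beta, n = 0.5, 1.0, 15
z = rng.normal(size=n)
t = beta * (a * z / 2 + np.sqrt((a * z / 2) ** 2 + 1)) ** 2
print("signed LR statistic:", signed_lr(t, beta, alpha0=0.5))
```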
97.
We study application of the Exponential Tilt Model (ETM) to compare survival distributions in two groups. The ETM assumes a parametric form for the density ratio of the two distributions. It accommodates a broad array of parametric models such as the log-normal and gamma models and can be sufficiently flexible to allow for crossing hazard and crossing survival functions. We develop a nonparametric likelihood approach to estimate ETM parameters in the presence of censoring and establish related asymptotic results. We compare the ETM to the Proportional Hazards Model (PHM) in simulation studies. When the proportional hazards assumption is not satisfied but the ETM assumption is, the ETM has better power for testing the hypothesis of no difference between the two groups. And, importantly, when the ETM relation is not satisfied but the PHM assumption is, the ETM can still have power reasonably close to that of the PHM. Application of the ETM is illustrated by a gastrointestinal tumor study.
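Without censoring, an exponential tilt (density ratio) model can be fitted, up to the normalizing intercept, by logistic regression of group membership on the chosen tilt functions; the sketch below uses (t, log t), under which two gamma distributions satisfy the model exactly. This is only a simplified stand-in: the paper's nonparametric likelihood additionally handles right censoring. The simulated gamma samples are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

# Minimal sketch of a density ratio / exponential tilt fit without censoring:
#     g(t)/f(t) = exp(beta0 + beta1*t + beta2*log(t)).
# The slope parameters (beta1, beta2) can be estimated, up to an intercept
# adjustment, by logistic regression of the group label on (t, log t).

rng = np.random.default_rng(3)
t0 = rng.gamma(shape=2.0, scale=1.0, size=200)   # group 0 survival times
t1 = rng.gamma(shape=3.0, scale=0.8, size=200)   # group 1 survival times

t = np.concatenate([t0, t1])
group = np.concatenate([np.zeros(len(t0)), np.ones(len(t1))])
X = sm.add_constant(np.column_stack([t, np.log(t)]))

fit = sm.Logit(group, X).fit(disp=False)
print("tilt parameter estimates (slope part):", fit.params[1:])
```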
98.
We propose a modification to the regular kernel density estimation method that uses asymmetric kernels to circumvent the spill-over problem for densities with positive support. First, a pivoting method is introduced for placement of the data relative to the kernel function. This yields a strongly consistent density estimator that integrates to one for each fixed bandwidth, in contrast to most density estimators based on asymmetric kernels proposed in the literature. Then a data-driven Bayesian local bandwidth selection method is presented, and lognormal, gamma, Weibull and inverse Gaussian kernels are discussed as useful special cases. Simulation results and a real-data example illustrate the advantages of the new methodology.
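For orientation, the sketch below implements a standard gamma-kernel density estimator for positive data (in the spirit of Chen's estimator), not the paper's pivoted version, its integrate-to-one property, or the Bayesian local bandwidth selector. The bandwidth b = 0.1 and the lognormal test data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gamma

# Standard asymmetric (gamma) kernel density estimate for positive support:
#     f_hat(x) = (1/n) * sum_i  gamma_pdf(X_i; shape = x/b + 1, scale = b).

def gamma_kde(x_grid, data, b):
    """Gamma-kernel density estimate on x_grid with bandwidth b > 0."""
    x_grid = np.asarray(x_grid, dtype=float)
    est = np.empty_like(x_grid)
    for j, x in enumerate(x_grid):
        est[j] = gamma.pdf(data, a=x / b + 1.0, scale=b).mean()
    return est

# Illustration on simulated lognormal data.
rng = np.random.default_rng(4)
data = rng.lognormal(mean=0.0, sigma=0.5, size=300)
grid = np.linspace(0.01, 5.0, 200)
fhat = gamma_kde(grid, data, b=0.1)
print("estimated density at its mode:", fhat[np.argmax(fhat)].round(3))
```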
99.
When describing a failure time distribution, the mean residual life is sometimes preferred to the survival or hazard rate. Regression analysis making use of the mean residual life function has recently drawn a great deal of attention. In this paper, a class of mean residual life regression models is proposed for censored data, and estimation procedures and a goodness-of-fit test are developed. Both asymptotic and finite-sample properties of the proposed estimators are established, and the proposed methods are applied to a cancer data set from a clinical trial.
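The quantity being modelled can be illustrated with a small Python sketch of the empirical mean residual life m(t) = E[T - t | T > t] for uncensored data; the paper's regression models, estimators and goodness-of-fit test for censored data are not reproduced. The evaluation grid and the exponential test data are illustrative assumptions.

```python
import numpy as np

# Empirical mean residual life for uncensored data: average exceedance T_i - t
# over the subjects still at risk at time t.

def empirical_mrl(times, t_grid):
    """Empirical mean residual life evaluated on t_grid (uncensored data)."""
    times = np.asarray(times, dtype=float)
    mrl = np.full(len(t_grid), np.nan)
    for j, t in enumerate(t_grid):
        at_risk = times[times > t]
        if at_risk.size > 0:
            mrl[j] = np.mean(at_risk - t)
    return mrl

# Illustration: for an exponential lifetime with mean 2 the true MRL is constant (= 2).
rng = np.random.default_rng(5)
times = rng.exponential(scale=2.0, size=1000)
grid = np.array([0.0, 1.0, 2.0, 3.0])
print(dict(zip(grid.tolist(), empirical_mrl(times, grid).round(2))))
```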
100.
This paper proposes a Poisson-based model that uses both error-free data and error-prone data subject to misclassification in the form of false-negative and false-positive counts. It derives maximum likelihood estimators (MLEs) for the Poisson rate parameter and the two misclassification parameters: the false-negative parameter and the false-positive parameter. It also derives expressions for the information matrix and the asymptotic variances of the MLE for the rate parameter, the MLE for the false-positive parameter, and the MLE for the false-negative parameter. Using these expressions, the paper analyses the value of the fallible data. It studies characteristics of the new double-sampling rate estimator via a simulation experiment and applies the new MLE estimators and confidence intervals to a real dataset.
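A heavily simplified sketch of the double-sampling idea is given below, assuming only a false-negative parameter theta (fallible counts thinned to Poisson(lambda*(1-theta)), error-free counts Poisson(lambda)); the paper's full model also includes a false-positive parameter and the associated information matrix results. In this simplified separable form the fallible data identifies only lambda*(1-theta), so the example illustrates the likelihood machinery rather than the value of fallible data. All sample sizes and parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Simplified double-sampling Poisson model with false negatives only:
# error-free counts ~ Poisson(lambda), fallible counts ~ Poisson(lambda*(1-theta)).

def neg_loglik(params, y_errorfree, z_fallible):
    lam, theta = params
    mu = lam * (1.0 - theta)                      # rate seen by the fallible device
    ll = np.sum(y_errorfree * np.log(lam) - lam)  # Poisson terms, constants dropped
    ll += np.sum(z_fallible * np.log(mu) - mu)
    return -ll

rng = np.random.default_rng(6)
lam_true, theta_true = 4.0, 0.3
y = rng.poisson(lam_true, size=50)                       # error-free subsample
z = rng.poisson(lam_true * (1 - theta_true), size=200)   # fallible (cheaper) sample

res = minimize(neg_loglik, x0=[1.0, 0.1], args=(y, z),
               bounds=[(1e-6, None), (0.0, 0.999)], method="L-BFGS-B")
print("numerical MLE (lambda, theta):", np.round(res.x, 3))
print("closed form                  :", round(y.mean(), 3),
      round(1 - z.mean() / y.mean(), 3))
```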