1.
ABSTRACT

The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have an enormous impact on which treatments reach patients, when they reach them, and how they are used. The statistical framework for supporting decisions in the regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of "no treatment effect" are done routinely, and a p-value < 0.05 is often the determinant of what constitutes a "successful" trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence attributes this in part to deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared "unsuccessful" when a p-value exceeded 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process through its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a "prior" for Phase 3 trials, so that evidence synthesized across trials can be used to compute probability statements that are valuable for understanding the magnitude of the treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
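As a minimal illustration of the kind of calculation the abstract advocates, the sketch below (not the authors' implementation; all names and numbers are hypothetical) treats an effect estimate from earlier trials as a normal prior, updates it with Phase 3 data under a normal likelihood, and reports the posterior probability that the treatment effect exceeds zero.

```python
import math

def posterior_prob_effect(prior_mean, prior_sd, trial_effect, trial_se, threshold=0.0):
    """Normal-normal conjugate update: prior from earlier trials, likelihood from Phase 3 data."""
    prior_prec = 1.0 / prior_sd**2
    data_prec = 1.0 / trial_se**2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * trial_effect)
    # P(effect > threshold | data, prior), using the standard normal tail probability
    z = (threshold - post_mean) / math.sqrt(post_var)
    prob = 0.5 * math.erfc(z / math.sqrt(2.0))
    return post_mean, math.sqrt(post_var), prob

# Hypothetical numbers: earlier trials suggest a mean effect of 1.2 (SD 0.8);
# the Phase 3 trial observes 0.9 with standard error 0.5.
mean, sd, prob = posterior_prob_effect(1.2, 0.8, 0.9, 0.5)
print(f"posterior mean {mean:.2f}, sd {sd:.2f}, P(effect > 0) = {prob:.3f}")
```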
2.
Abstract

This paper focuses on inference for suitable, generally nonlinear functions in stochastic volatility models. In this context, a moving block bootstrap (MBB) approach is suggested and discussed for estimating the variance of the proposed estimators. Under mild assumptions, we show that the MBB procedure is weakly consistent. Moreover, a methodology for choosing the optimal block length in the MBB is proposed. Examples and simulations on the model are also presented to show the performance of the proposed procedure.
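A minimal sketch of a moving block bootstrap variance estimate for a generic statistic of a time series is given below; it follows the textbook MBB recipe rather than the paper's specific estimators, and the block length and statistic are placeholders.

```python
import numpy as np

def mbb_variance(series, stat, block_len, n_boot=500, rng=None):
    """Moving block bootstrap estimate of Var(stat(series)).

    Overlapping blocks of length `block_len` are resampled with replacement
    and concatenated to a series of (roughly) the original length.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(series, dtype=float)
    n = len(x)
    blocks = np.array([x[i:i + block_len] for i in range(n - block_len + 1)])
    n_blocks_needed = int(np.ceil(n / block_len))
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(blocks), size=n_blocks_needed)
        resample = np.concatenate(blocks[idx])[:n]
        stats[b] = stat(resample)
    return stats.var(ddof=1)

# Example: variance of the sample mean of an AR(1)-like series (illustrative only).
rng = np.random.default_rng(0)
e = rng.standard_normal(1000)
y = np.empty(1000)
y[0] = e[0]
for t in range(1, 1000):
    y[t] = 0.6 * y[t - 1] + e[t]
print(mbb_variance(y, np.mean, block_len=20))
```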
3.
Abstract

Ratio-type mean estimators depend on multiple auxiliary variables and unknown parameters in a finite-population setting. We propose a new generalized matrix-based approach for modeling multivariate mean estimators with two auxiliary variables. Our approach naturally yields a graphical analysis for comparing mean estimators.
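The sketch below shows the classical weighted multivariate ratio estimator of a finite-population mean with two auxiliary variables, which is the standard starting point for estimators of this type; the paper's generalized matrix formulation is not reproduced here, and the weights shown are simple placeholders.

```python
import numpy as np

def multivariate_ratio_mean(y, x1, x2, X1_pop_mean, X2_pop_mean, w=(0.5, 0.5)):
    """Weighted multivariate ratio estimator of the population mean of y.

    y, x1, x2    : sample values of the study variable and two auxiliary variables
    X*_pop_mean  : known population means of the auxiliary variables
    w            : weights summing to one (placeholder values here)
    """
    y_bar, x1_bar, x2_bar = np.mean(y), np.mean(x1), np.mean(x2)
    return w[0] * y_bar * X1_pop_mean / x1_bar + w[1] * y_bar * X2_pop_mean / x2_bar

# Hypothetical sample drawn from a finite population with known auxiliary means.
rng = np.random.default_rng(1)
x1 = rng.uniform(10, 20, size=50)
x2 = rng.uniform(5, 15, size=50)
y = 2.0 * x1 + 0.5 * x2 + rng.normal(0, 1, size=50)
print(multivariate_ratio_mean(y, x1, x2, X1_pop_mean=15.0, X2_pop_mean=10.0))
```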
4.
On Optimality of Bayesian Wavelet Estimators
Abstract. We investigate the asymptotic optimality of several Bayesian wavelet estimators, namely the posterior mean, the posterior median and the Bayes factor, where the prior imposed on the wavelet coefficients is a mixture of a point mass at zero and a Gaussian density. We show that, in terms of mean squared error and for properly chosen hyperparameters of the prior, all three resulting Bayesian wavelet estimators achieve the optimal minimax rate within any prescribed Besov space for p ≥ 2. For 1 ≤ p < 2, the Bayes factor is still optimal for (2s+2)/(2s+1) ≤ p < 2 and always outperforms the posterior mean and the posterior median, which can achieve only the best possible rates for linear estimators in this case.
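As a concrete illustration of the prior described above, the sketch below computes the posterior mean of a single wavelet coefficient under a spike-and-Gaussian (point mass at zero plus Gaussian slab) prior with Gaussian noise; the hyperparameter values are arbitrary placeholders, not the paper's calibrated choices.

```python
import math

def normal_pdf(x, var):
    return math.exp(-0.5 * x * x / var) / math.sqrt(2.0 * math.pi * var)

def posterior_mean_coeff(d, sigma2, tau2, pi0):
    """Posterior mean of a wavelet coefficient theta given observation d = theta + noise.

    Prior: theta ~ pi0 * delta_0 + (1 - pi0) * N(0, tau2); noise ~ N(0, sigma2).
    """
    m_slab = normal_pdf(d, sigma2 + tau2)   # marginal density if theta is nonzero
    m_spike = normal_pdf(d, sigma2)         # marginal density if theta is zero
    w = (1.0 - pi0) * m_slab / ((1.0 - pi0) * m_slab + pi0 * m_spike)
    return w * tau2 / (sigma2 + tau2) * d   # shrinkage toward zero

# Small coefficients are shrunk almost entirely; large ones are kept nearly intact.
for d in (0.2, 1.0, 3.0):
    print(d, posterior_mean_coeff(d, sigma2=1.0, tau2=4.0, pi0=0.8))
```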
5.
We discuss Bayesian analyses of traditional normal-mixture models for classification and discrimination. The development involves application of an iterative resampling approach to Monte Carlo inference, commonly called Gibbs sampling, and demonstrates its routine application. We stress the benefits of exact analyses over traditional classification and discrimination techniques: the ease with which such analyses may be performed in a quite general setting, with possibly several normal-mixture components having different covariance matrices; the computation of exact posterior classification probabilities for observed data and for future cases to be classified; and posterior distributions for these probabilities that allow assessment of second-level uncertainties in classification.
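A minimal Gibbs sampler for a two-component univariate normal mixture is sketched below to show the flavor of the approach: it alternates between sampling component labels and component means, and averages the label probabilities to obtain posterior classification probabilities. It is a deliberately simplified stand-in (fixed, known variances and equal weights; conjugate normal priors on the means), not the general multivariate treatment of the paper.

```python
import numpy as np

def gibbs_two_component(y, n_iter=2000, burn=500, sigma2=1.0, prior_var=100.0, rng=None):
    """Gibbs sampling for y_i ~ 0.5*N(mu0, sigma2) + 0.5*N(mu1, sigma2).

    Returns posterior classification probabilities P(z_i = 1 | y) and the last mean draws.
    Known equal weights and variances; N(0, prior_var) priors on mu0 and mu1.
    """
    rng = np.random.default_rng(rng)
    y = np.asarray(y, dtype=float)
    mu = np.array([y.min(), y.max()])            # crude initialisation
    z_prob_sum = np.zeros_like(y)
    kept = 0
    for it in range(n_iter):
        # 1. sample labels given the current means (equal weights and variances cancel)
        lik0 = np.exp(-0.5 * (y - mu[0]) ** 2 / sigma2)
        lik1 = np.exp(-0.5 * (y - mu[1]) ** 2 / sigma2)
        p1 = lik1 / (lik0 + lik1)
        z = rng.random(len(y)) < p1
        # 2. sample means given the labels (conjugate normal update)
        for k, mask in enumerate([~z, z]):
            n_k = mask.sum()
            post_var = 1.0 / (n_k / sigma2 + 1.0 / prior_var)
            post_mean = post_var * y[mask].sum() / sigma2
            mu[k] = rng.normal(post_mean, np.sqrt(post_var))
        if it >= burn:
            z_prob_sum += p1
            kept += 1
    return z_prob_sum / kept, mu

# Two well-separated hypothetical clusters.
rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(-2, 1, 40), rng.normal(3, 1, 60)])
probs, mu = gibbs_two_component(y, rng=3)
print(mu, probs[:5])
```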
6.
Summary. Meta-analyses of sets of clinical trials often combine risk differences from several 2×2 tables according to a random-effects model. The DerSimonian-Laird random-effects procedure, widely used for estimating the population mean risk difference, weights the risk difference from each primary study in inverse proportion to an estimate of its variance (the sum of the between-study variance and the conditional within-study variance). Because those weights are not independent of the risk differences, however, the procedure sometimes exhibits bias and unnatural behavior. The present paper proposes a modified weighting scheme that uses the unconditional within-study variance to avoid this source of bias. The modified procedure has variance closer to that attainable by weighting with ideal weights when such weights are known. We studied the modified procedure in extensive simulation experiments using situations whose parameters resemble those of actual studies in medical research. For comparison we also included two unbiased procedures, the unweighted mean and a sample-size-weighted mean; their relative variability depends on the extent of heterogeneity among the primary studies. An example illustrates the application of the procedures to actual data and the differences among the results. This research was supported by Grant HS 05936 from the Agency for Health Care Policy and Research to Harvard University.
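For reference, the sketch below implements the standard DerSimonian-Laird random-effects pooling of risk differences that the paper takes as its starting point (not the modified weighting scheme it proposes); the input data are hypothetical.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Standard DerSimonian-Laird random-effects pooled estimate.

    effects    : per-study risk differences
    variances  : per-study (conditional) within-study variances
    Returns (pooled estimate, its variance, between-study variance tau^2).
    """
    d = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                        # fixed-effect weights
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)                 # Cochran's Q statistic
    k = len(d)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                          # random-effects weights
    pooled = np.sum(w_star * d) / np.sum(w_star)
    return pooled, 1.0 / np.sum(w_star), tau2

# Hypothetical risk differences and within-study variances from five trials.
print(dersimonian_laird([0.02, 0.05, -0.01, 0.04, 0.03],
                        [0.0004, 0.0009, 0.0005, 0.0016, 0.0007]))
```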
7.
Longitudinal data often contain missing observations, and it is in general difficult to justify particular missing-data mechanisms, whether random or not, which may be hard to distinguish. The authors describe a likelihood-based approach to estimating both the mean response and association parameters for longitudinal binary data with drop-outs. They specify marginal and dependence structures as regression models which link the responses to the covariates. They illustrate their approach using a data set from the Waterloo Smoking Prevention Project. They also report the results of simulation studies carried out to assess the performance of their technique under various circumstances.
8.
Using Lyapunov theory, matrix-analysis methods and the Itô formula, combined with inequality techniques, this paper studies the mean-square exponential stability of stochastic cellular neural network systems. An estimate of the second-moment Lyapunov exponent of the system's solution and sufficient conditions for mean-square exponential stability are given.
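For context, a hedged statement of the property being established: mean-square exponential stability of a stochastic system with state x(t) is usually defined as below. The constants C and λ are generic, and this is the standard definition rather than the specific estimate derived in the paper.

```latex
% Standard definition of mean-square exponential stability:
% there exist constants C > 0 and \lambda > 0 such that, for every initial state x_0,
\mathbb{E}\,\lVert x(t) \rVert^{2} \le C\, e^{-\lambda t}\, \mathbb{E}\,\lVert x_0 \rVert^{2},
\qquad t \ge 0,
% which is equivalent to the second-moment Lyapunov exponent being negative:
\limsup_{t \to \infty} \frac{1}{t} \log \mathbb{E}\,\lVert x(t) \rVert^{2} \le -\lambda < 0 .
```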
9.
WEIGHTED SUMS OF NEGATIVELY ASSOCIATED RANDOM VARIABLES
In this paper, we establish strong laws for weighted sums of negatively associated (NA) random variables which satisfy a higher-order moment condition. Some results of Bai Z.D. & Cheng P.E. (2000) [Marcinkiewicz strong laws for linear statistics. Statist. Probab. Lett. 43, 105–112] and Sung S.K. (2001) [Strong laws for weighted sums of i.i.d. random variables. Statist. Probab. Lett. 52, 413–419] are sharpened and extended from the independent identically distributed case to the NA setting. Also, one of the results of Li D.L. et al. (1995) [Complete convergence and almost sure convergence of weighted sums of random variables. J. Theoret. Probab. 8, 49–76] is complemented and extended.
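As background for the kind of result being extended, the classical Marcinkiewicz-Zygmund strong law for i.i.d. variables can be stated as follows; the NA and weighted-sum versions in the paper refine this form, which is given here only as a reference point.

```latex
% Classical Marcinkiewicz-Zygmund strong law (i.i.d. case, 1 <= p < 2):
% if X, X_1, X_2, \dots are i.i.d. with E|X|^p < \infty and E X = 0, then
\frac{1}{n^{1/p}} \sum_{i=1}^{n} X_i \longrightarrow 0 \quad \text{almost surely.}
```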
10.
Annual concentrations of toxic air contaminants are of primary concern from the perspective of chronic human exposure assessment and risk analysis. Despite recent advances in air quality monitoring technology, resource and technical constraints often limit the availability of a sufficient number of ambient concentration measurements for performing environmental risk analysis. Therefore, sample size limitations, representativeness of data, and uncertainties in the estimated annual mean concentration must be examined before performing quantitative risk analysis. In this paper, we discuss several factors that need to be considered in designing field-sampling programs for toxic air contaminants and in verifying compliance with environmental regulations. Specifically, we examine the behavior of SO2, TSP, and CO data as surrogates for toxic air contaminants and as examples of point-source, area-source, and line-source-dominated pollutants, respectively, from the standpoint of sampling design. We demonstrate the use of the bootstrap resampling method and normal theory in estimating the annual mean concentration and its 95% confidence bounds from limited sampling data, and illustrate the application of operating characteristic (OC) curves to determine optimum sample size and other sampling strategies. We also outline a statistical procedure, based on a one-sided t-test, that uses the sampled concentration data to evaluate whether a sampling site is in compliance with relevant ambient guideline concentrations for toxic air contaminants.
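A minimal sketch of the bootstrap confidence bounds mentioned above: resample the available concentration measurements with replacement and take percentile bounds for the annual mean. The percentile method and the placeholder data are assumptions; the paper's exact procedure and OC-curve analysis are not reproduced.

```python
import numpy as np

def bootstrap_mean_ci(conc, n_boot=5000, alpha=0.05, rng=None):
    """Percentile bootstrap confidence bounds for the mean of limited sampling data."""
    rng = np.random.default_rng(rng)
    conc = np.asarray(conc, dtype=float)
    means = np.array([rng.choice(conc, size=len(conc), replace=True).mean()
                      for _ in range(n_boot)])
    lower, upper = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return conc.mean(), lower, upper

# Hypothetical set of 24 ambient concentration measurements (ug/m3).
rng = np.random.default_rng(4)
samples = rng.lognormal(mean=1.0, sigma=0.5, size=24)
print(bootstrap_mean_ci(samples, rng=5))
```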