2,605 matching records found.
1.
ABSTRACT

The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have an enormous impact on which treatments reach patients, when they reach them and how they are used. The statistical framework for supporting decisions in regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of “no treatment effect” are done routinely, and a p-value < 0.05 is often the determinant of what constitutes a “successful” trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence attributes this to deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared “unsuccessful” owing to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in clinical drug development through its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a “prior” for Phase 3 trials, so that evidence synthesized across trials can be used to compute probability statements about the magnitude of the treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
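The synthesized-prior idea can be sketched with a conjugate normal-normal model. All numbers below are hypothetical and stand in for a meta-analytic prior from earlier trials combined with a Phase 3 estimate:

```python
import math

# Hypothetical numbers only -- the prior stands in for evidence synthesized
# from earlier trials, the likelihood for a Phase 3 estimate.
prior_mean, prior_sd = 2.0, 1.5   # meta-analytic prior for the treatment effect
trial_mean, trial_se = 1.2, 0.8   # Phase 3 point estimate and standard error

# Conjugate normal-normal update: precisions add, means are precision-weighted.
prior_prec = 1.0 / prior_sd ** 2
data_prec = 1.0 / trial_se ** 2
post_prec = prior_prec + data_prec
post_mean = (prior_prec * prior_mean + data_prec * trial_mean) / post_prec
post_sd = (1.0 / post_prec) ** 0.5

def std_normal_cdf(x):
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

# A direct probability statement about the magnitude of effect,
# e.g. P(effect > 0), rather than a p-value against "no effect".
p_effect_positive = std_normal_cdf(post_mean / post_sd)
```

Unlike a p-value, `p_effect_positive` answers the decision question directly: how likely is a positive effect given all the evidence.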
2.
Yan Wenlong et al., 《统计研究》 (Statistical Research), 2020, 37(7): 93-103
Under the new conditions of mounting downward economic pressure and further opening of the capital market, clarifying the transaction and regulation mechanism of the audit market and improving the market for audit services is especially necessary. Exploiting the natural experiment created by the failure of the 2010 audit-pricing regulation policy, this paper embeds a two-tier stochastic frontier model to obtain measures of the pricing surplus accruing to each party to the audit transaction, and uses a difference-in-differences model to analyze how price regulation affects transaction pricing. We find that the regulation failed not because of regulatory capture, but because price controls were mismatched with the market's prevailing efficiency. Although a price floor can raise the auditor's surplus, it also amplifies pricing risk, increases the misallocation of surplus and disrupts the order of transactions; a price ceiling further entrenches low-price competition. Further analysis shows that auditor surplus is significantly related to earnings quality, and that the 2014 policy lifting pricing controls increased auditor surplus. The study clarifies the transaction mechanism of the audit market, supports future research on its micro-level effects and its link with earnings quality, and offers useful guidance for understanding the transaction-regulation dynamics of the audit market and for fostering a spontaneously well-functioning audit market in the new era.
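The difference-in-differences logic can be illustrated with a toy calculation (all numbers invented, not from the study):

```python
# Toy difference-in-differences calculation with invented group means,
# e.g. auditor surplus before/after a policy change.
treated_pre, treated_post = 10.0, 14.0   # group exposed to the policy change
control_pre, control_post = 9.0, 10.5    # comparison group

# The DID estimator nets out the common time trend.
did = (treated_post - treated_pre) - (control_post - control_pre)
```

Here the treated group's raw change is 4.0, but 1.5 of that is shared trend, leaving an estimated policy effect of 2.5.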
3.
Empirical applications of poverty measurement often have to deal with a stochastic weighting variable such as household size. Within the framework of a bivariate distribution function defined over income and weight, I derive the limiting distributions of the decomposable poverty measures and of the ordinates of stochastic dominance curves. The poverty line is allowed to depend on the income distribution. It is shown how the results can be used to test hypotheses concerning changes in poverty. The inference procedures are briefly illustrated using Belgian data. An erratum to this article can be found at
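For concreteness, a decomposable FGT-type poverty measure with household size as the weighting variable might look like this (hypothetical data; the paper's results concern the estimators' limiting distributions, not this computation):

```python
# Weighted FGT(alpha) poverty measure; household size is the stochastic weight.
def fgt(incomes, weights, z, alpha):
    total = sum(weights)
    s = 0.0
    for y, w in zip(incomes, weights):
        if y < z:                        # only the poor contribute
            s += w * ((z - y) / z) ** alpha
    return s / total

incomes = [400, 800, 1500, 2500, 300]    # hypothetical household incomes
sizes   = [4, 2, 3, 1, 5]                # household sizes as weights
z = 1000                                 # poverty line (may itself be estimated)

headcount = fgt(incomes, sizes, z, alpha=0)  # weighted headcount ratio
gap       = fgt(incomes, sizes, z, alpha=1)  # weighted poverty-gap index
```

With these numbers the weighted headcount is 11/15 (the three poor households contain 11 of the 15 persons), and the weighted poverty gap is 0.42.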
4.
A game-theoretic analysis of statistical law enforcement
In response to the widespread public concern over the serious distortion of statistical data in China, this paper uses game theory as an analytical tool and introduces a repeated game to study the conflict of interest between data reporters and inspectors in statistical law enforcement. It identifies, from the enforcement perspective, the main causes of statistical data distortion and proposes five corresponding countermeasures.
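The reporter-inspector conflict is a classic inspection game. A one-shot mixed-strategy equilibrium (much simpler than the paper's repeated game, and with hypothetical payoffs) can be computed as:

```python
# One-shot inspection game with hypothetical payoffs. The reporter chooses
# whether to falsify data; the agency chooses whether to inspect.
g = 3.0   # reporter's gain from undetected falsification
f = 7.0   # fine if falsification is caught
c = 1.0   # agency's cost of an inspection
b = 2.0   # agency's benefit from catching falsification
L = 8.0   # agency's loss from undetected falsification

# Mixed equilibrium: each side randomizes so the other is indifferent.
q_inspect = g / (g + f)   # inspection prob.: solves -q*f + (1 - q)*g = 0
p_falsify = c / (b + L)   # falsification prob.: solves p*b - c = -p*L
```

The comparative statics match intuition: a larger fine `f` lets the agency inspect less often, while a cheaper inspection `c` deters falsification more strongly.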
5.
Parameter design or robust parameter design (RPD) is an engineering methodology intended as a cost-effective approach for improving the quality of products and processes. The goal of parameter design is to choose the levels of the control variables that optimize a defined quality characteristic. An essential component of RPD involves the assumption of well estimated models for the process mean and variance. Traditionally, the modeling of the mean and variance has been done parametrically. It is often the case, particularly when modeling the variance, that nonparametric techniques are more appropriate due to the nature of the curvature in the underlying function. Most response surface experiments involve sparse data. In sparse data situations with unusual curvature in the underlying function, nonparametric techniques often result in estimates with problematic variation whereas their parametric counterparts may result in estimates with problematic bias. We propose the use of semi-parametric modeling within the robust design setting, combining parametric and nonparametric functions to improve the quality of both mean and variance model estimation. The proposed method will be illustrated with an example and simulations.
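A minimal sketch of the two-stage idea, with simulated data (the model, smoother and bandwidth are illustrative choices, not the authors'): fit the mean parametrically, then smooth squared residuals nonparametrically to estimate the variance function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse response-surface data with variance that grows in x.
x = np.linspace(0.0, 1.0, 15)                 # sparse design, as in RSM practice
y = 2 + 3 * x - 4 * x**2 + rng.normal(0.0, 0.2 + 0.5 * x, size=x.size)

# Parametric stage: ordinary least squares for a quadratic mean model.
X = np.column_stack([np.ones_like(x), x, x**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Nonparametric stage: a Nadaraya-Watson smoother of the squared residuals
# gives a local, curvature-free estimate of the process variance.
def kernel_smooth(x0, x, z, h=0.15):
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)    # Gaussian kernel weights
    return np.sum(w * z) / np.sum(w)

var_hat = np.array([kernel_smooth(x0, x, resid**2) for x0 in x])
```

The mean model carries the parametric structure, while `var_hat` tracks whatever shape the variance actually takes, which is where nonparametric flexibility helps most.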
6.
Owing to the extreme quantiles involved, standard control charts are very sensitive to the effects of parameter estimation and non-normality. More general parametric charts have been devised to deal with the latter complication, and corrections have been derived to compensate for the estimation step, both under normal and parametric models. The resulting procedures offer a satisfactory solution over a broad range of underlying distributions. However, situations do occur where even such a large model is inadequate and nothing remains but to consider non-parametric charts. In principle, these form ideal solutions, but the problem is that huge sample sizes are required for the estimation step. Otherwise the resulting stochastic error is so large that the chart is very unstable, a disadvantage that seems to outweigh the advantage of avoiding the model error from the parametric case. Here we analyse under what conditions non-parametric charts actually become feasible alternatives to their parametric counterparts. In particular, corrected versions are suggested for which a possible change point is reached at sample sizes that are markedly less huge (but still larger than the customary range). These corrections serve to control the in-control behaviour (markedly wrong outcomes of the estimates occur only sufficiently rarely). The price for this protection will clearly be some loss of detection power when out of control. A change point comes in view as soon as this loss can be made sufficiently small.
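The sample-size problem is easy to demonstrate: estimating the extreme quantile that defines a control limit from Phase I data is far noisier at moderate n than at very large n. A simulation sketch (normal in-control data and a 1% false-alarm limit, chosen for simplicity rather than taken from the paper):

```python
import random
import statistics

random.seed(2)

def estimated_ucl(n, q=0.99):
    """Nonparametric upper control limit: an empirical q-quantile
    estimated from n in-control observations (an order statistic)."""
    data = sorted(random.gauss(0.0, 1.0) for _ in range(n))
    return data[int(q * n)]

# Sampling variability of the estimated limit at two Phase I sample sizes.
spreads = {}
for n in (1_000, 50_000):
    limits = [estimated_ucl(n) for _ in range(100)]
    spreads[n] = statistics.stdev(limits)
    print(n, round(spreads[n], 3))
```

The spread of the estimated limit shrinks only with 1/sqrt(n), so pinning down an extreme quantile accurately demands sample sizes well beyond customary Phase I practice.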
7.
Many economic duration variables are often available only up to intervals, and not up to exact points. However, continuous time duration models are conceptually superior to discrete ones. Hence, in duration analyses, one faces a situation with discrete data and a continuous model. This paper discusses (i) the asymptotic bias of a conventional approximation procedure in which a discrete duration is treated as an exact observation; and (ii) the efficiency of a correct maximum likelihood estimator which appropriately accounts for the discrete nature of the data.
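The paper's two themes, the bias of the conventional approximation and the correct interval likelihood, can be illustrated with an exponential duration model observed on unit intervals (a simplified sketch, not the paper's general setup):

```python
import math
import random

random.seed(1)

# Exponential durations observed only up to the interval [k, k+1).
lam_true = 0.7
n = 50_000
ks = [int(random.expovariate(lam_true)) for _ in range(n)]  # interval index
kbar = sum(ks) / n

# Correct interval-censored MLE: P(K = k) = e^{-lam*k} * (1 - e^{-lam}),
# whose likelihood is maximized in closed form at:
lam_interval = math.log(1.0 + 1.0 / kbar)

# Conventional approximation: pretend each duration equals k + 0.5 exactly,
# then apply the usual exponential MLE (1 / mean). This is biased.
lam_midpoint = 1.0 / (kbar + 0.5)
```

With the true rate at 0.7, the interval-censored MLE recovers it, while the midpoint approximation is systematically pulled downward, which is the kind of asymptotic bias the paper quantifies.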
8.
Summary. The paper develops methods for the design of experiments for mechanistic models when the response must be transformed to achieve symmetry and constant variance. The power transformation that is used is partially justified by a rule in analytical chemistry. Because of the nature of the relationship between the response and the mechanistic model, it is necessary to transform both sides of the model. Expressions are given for the parameter sensitivities in the transformed model and examples are given of optimum designs, not only for single-response models, but also for experiments in which multivariate responses are measured and for experiments in which the model is defined by a set of differential equations which cannot be solved analytically. The extension to designs for checking models is discussed.
9.
Detection and correction of artificial shifts in climate series
Summary. Many long instrumental climate records are available and might provide useful information in climate research. These series are usually affected by artificial shifts, due to changes in the conditions of measurement and various kinds of spurious data. A comparison with surrounding weather-stations by means of a suitable two-factor model allows us to check the reliability of the series. An adapted penalized log-likelihood procedure is used to detect an unknown number of breaks and outliers. An example concerning temperature series from France confirms that a systematic comparison of the series together is valuable and allows us to correct the data even when no reliable series can be taken as a reference.
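A drastically simplified version of the break-detection step (a single break, penalized least squares in place of the paper's penalized log-likelihood and two-factor model, and a simulated series):

```python
import math
import random

random.seed(3)

# Simulated temperature-anomaly-like series with one artificial shift at t=70,
# e.g. a change in measurement conditions.
n = 120
series = [random.gauss(0.0, 0.5) + (1.2 if t >= 70 else 0.0) for t in range(n)]

def sse(segment):
    m = sum(segment) / len(segment)
    return sum((v - m) ** 2 for v in segment)

# Scan candidate break points; the log(n) term penalizes adding a break,
# in the spirit of a penalized-likelihood criterion.
best_cost, best_tau = min(
    (sse(series[:tau]) + sse(series[tau:]) + math.log(n), tau)
    for tau in range(5, n - 5)
)
print("estimated break at t =", best_tau)
```

The real procedure must handle an unknown number of breaks plus outliers jointly, which is why a penalty term (rather than a fixed break count) is essential.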
10.
Abstract. The use of the concept of ‘direct’ versus ‘indirect’ causal effects is common, not only in statistics but also in many areas of social and economic sciences. The related terms of ‘biomarkers’ and ‘surrogates’ are common in pharmacological and biomedical sciences. Sometimes this concept is represented by graphical displays of various kinds. The view here is that there is a great deal of imprecise discussion surrounding this topic and, moreover, that the most straightforward way to clarify the situation is by using potential outcomes to define causal effects. In particular, I suggest that the use of principal stratification is key to understanding the meaning of direct and indirect causal effects. A current study of anthrax vaccine will be used to illustrate ideas.