  Subscription full text: 3675 articles
  Free: 135 articles
  Domestic free access: 11 articles
  Management: 259 articles
  Ethnology: 1 article
  Demography: 12 articles
  Collected works: 142 articles
  Theory and methodology: 61 articles
  General: 1108 articles
  Sociology: 35 articles
  Statistics: 2203 articles
  2024: 1 article
  2023: 36 articles
  2022: 31 articles
  2021: 49 articles
  2020: 49 articles
  2019: 96 articles
  2018: 115 articles
  2017: 210 articles
  2016: 112 articles
  2015: 103 articles
  2014: 165 articles
  2013: 659 articles
  2012: 274 articles
  2011: 160 articles
  2010: 130 articles
  2009: 157 articles
  2008: 149 articles
  2007: 192 articles
  2006: 191 articles
  2005: 180 articles
  2004: 156 articles
  2003: 140 articles
  2002: 94 articles
  2001: 74 articles
  2000: 74 articles
  1999: 36 articles
  1998: 32 articles
  1997: 24 articles
  1996: 14 articles
  1995: 12 articles
  1994: 15 articles
  1993: 9 articles
  1992: 13 articles
  1991: 14 articles
  1990: 3 articles
  1989: 8 articles
  1988: 10 articles
  1987: 4 articles
  1986: 4 articles
  1985: 6 articles
  1984: 2 articles
  1983: 4 articles
  1982: 5 articles
  1981: 2 articles
  1980: 3 articles
  1979: 1 article
  1978: 1 article
  1977: 1 article
  1975: 1 article
3821 results in total (search time: 31 ms)
1.
ABSTRACT

The cost and time of pharmaceutical drug development continue to grow at rates that many consider unsustainable. These trends have an enormous impact on which treatments reach patients, when they reach them, and how they are used. The statistical framework supporting decisions in the regulated clinical development of new medicines has followed the traditional path of frequentist methodology. Trials using hypothesis tests of "no treatment effect" are routine, and a p-value < 0.05 is often the determinant of what constitutes a "successful" trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence points to deficiencies of the frequentist paradigm as a contributing cause. An unknown number of effective medicines may have been abandoned because trials were declared "unsuccessful" when a p-value exceeded 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process through its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a "prior" for Phase 3 trials, so that evidence synthesized across trials can be used to compute probability statements that are valuable for understanding the magnitude of a treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
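The prior-based synthesis the abstract advocates can be illustrated with a minimal conjugate normal-normal sketch. This is only a stand-in for the authors' approach, and every number below (prior mean, standard deviations, Phase 3 estimate) is invented for illustration:

```python
import math

def posterior_normal(prior_mean, prior_sd, est, se):
    """Combine a normal prior with a normal likelihood (known SE)."""
    w_prior = 1.0 / prior_sd**2        # precision of the prior
    w_data = 1.0 / se**2               # precision of the new trial
    post_var = 1.0 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * est)
    return post_mean, math.sqrt(post_var)

def prob_effect_exceeds(threshold, mean, sd):
    """P(true effect > threshold) under a normal posterior."""
    z = (threshold - mean) / sd
    # standard normal survival function via erfc
    return 0.5 * math.erfc(z / math.sqrt(2))

# Hypothetical earlier-trial evidence as the prior; hypothetical
# Phase 3 estimate and standard error as the new data.
m, s = posterior_normal(prior_mean=0.30, prior_sd=0.15, est=0.25, se=0.10)
print(f"posterior mean = {m:.3f}, sd = {s:.3f}")
print(f"P(effect > 0)  = {prob_effect_exceeds(0.0, m, s):.3f}")
```

A probability statement such as "P(effect > 0)" is the kind of direct, decision-relevant quantity the abstract contrasts with a bare p-value threshold.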
2.
The generalized half-normal (GHN) distribution and progressive type-II censoring are considered in this article for statistical inference in constant-stress accelerated life testing. The EM algorithm is used to compute the maximum likelihood estimates. The Fisher information matrix is derived via the missing information principle and used to construct asymptotic confidence intervals; interval estimation is also discussed through bootstrap intervals. The Tierney-Kadane method, importance sampling, and the Metropolis-Hastings algorithm are employed to compute Bayesian estimates. Furthermore, predictive estimates for censored data and the associated prediction intervals are obtained. Three optimality criteria are considered for finding the optimal stress level. A real data set illustrates the usefulness of the GHN distribution as an alternative lifetime model to well-known distributions, and a simulation study is provided with discussion.
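As a rough illustration of the Metropolis-Hastings step mentioned above, the sketch below samples a toy posterior (exponential lifetimes with a Gamma(2, 1) prior on the rate) rather than the GHN accelerated-life model itself; the data and tuning settings are invented:

```python
import math
import random

random.seed(42)
data = [0.8, 1.3, 0.4, 2.1, 0.9, 1.7, 0.6, 1.1]  # toy lifetimes

def log_post(lam):
    """Log posterior: Gamma(2, 1) prior + exponential likelihood."""
    if lam <= 0:
        return float("-inf")
    log_prior = math.log(lam) - lam
    log_lik = len(data) * math.log(lam) - lam * sum(data)
    return log_prior + log_lik

def metropolis_hastings(log_target, start, n_iter=20000, step=0.3):
    """Random-walk Metropolis-Hastings with a Gaussian proposal."""
    x, lx = start, log_target(start)
    samples = []
    for _ in range(n_iter):
        prop = x + random.gauss(0.0, step)
        lp = log_target(prop)
        if math.log(random.random()) < lp - lx:  # accept/reject
            x, lx = prop, lp
        samples.append(x)
    return samples

draws = metropolis_hastings(log_post, start=1.0)
burned = draws[5000:]  # discard burn-in
post_mean = sum(burned) / len(burned)
print(f"posterior mean of rate: {post_mean:.3f}")
```

With this conjugate toy target the exact posterior is Gamma(10, 9.9), so the sampled mean should land near 1.01, which makes the sketch easy to sanity-check.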
3.
Keisuke Himoto, Risk Analysis, 2020, 40(6): 1124-1138
Post-earthquake fires are high-consequence events with extensive damage potential. They are also low-frequency events, so their nature remains underinvestigated. One difficulty in modeling post-earthquake ignition probabilities is reducing the model uncertainty attributable to scarce source data. This scarcity has usually been addressed by pooling data collected indiscriminately from multiple earthquakes, but that approach neglects inter-earthquake heterogeneity in regional and seasonal characteristics, which is indispensable for risk assessment of future post-earthquake fires. The present study therefore analyzes the post-earthquake ignition probabilities of five major earthquakes in Japan from 1995 to 2016 (the 1995 Kobe, 2003 Tokachi-oki, 2004 Niigata-Chuetsu, 2011 Tohoku, and 2016 Kumamoto earthquakes) with a hierarchical Bayesian approach. As the ignition causes of earthquakes share a certain commonality, common prior distributions were assigned to the parameters, and samples were drawn from the target posterior distribution by Markov chain Monte Carlo simulation. The results of the hierarchical model were compared with those of pooled and independent models. Although the pooled and hierarchical models were both robust relative to the independent model, the pooled model underestimated the ignition probabilities of earthquakes with few data samples; among the tested models, the hierarchical model was least affected by source-to-source variability in the data. Accounting for the heterogeneity of post-earthquake ignitions across regional and seasonal characteristics has long been desired in modeling post-earthquake ignition probabilities but has not been properly addressed by existing approaches. The presented hierarchical Bayesian approach provides a systematic and rational framework for coping with this problem, thereby enhancing the statistical reliability and stability of estimated post-earthquake ignition probabilities.
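The pooled/independent/hierarchical contrast described above can be mimicked in a few lines. The sketch below uses a simple partial-pooling (shrinkage) estimator as a stand-in for the full MCMC-based hierarchical model, with invented counts rather than the data from the five Japanese earthquakes:

```python
# (ignitions, exposed buildings) per hypothetical earthquake
events = [(30, 10000), (4, 900), (12, 5000), (1, 400), (20, 8000)]

ignitions = sum(k for k, n in events)
exposure = sum(n for k, n in events)
pooled = ignitions / exposure            # one common rate for all events

independent = [k / n for k, n in events] # one separate rate per event

# Partial pooling: shrink each event toward the pooled rate. The
# pseudo-count m (an assumption of this sketch) controls the strength
# of the common prior, as the shared hyperprior does in the paper.
m = 2000.0
hierarchical = [(k + m * pooled) / (n + m) for k, n in events]

for ind, h in zip(independent, hierarchical):
    print(f"independent={ind:.4f}  shrunk={h:.4f}  pooled={pooled:.4f}")
```

Events with little exposure are pulled strongly toward the pooled rate while data-rich events barely move, matching the qualitative behavior the abstract reports for the hierarchical model.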
4.
Summary.  Alongside the development of meta-analysis as a tool for summarizing research literature, there is renewed interest in broader forms of quantitative synthesis that are aimed at combining evidence from different study designs or evidence on multiple parameters. These have been proposed under various headings: the confidence profile method, cross-design synthesis, hierarchical models and generalized evidence synthesis. Models that are used in health technology assessment are also referred to as representing a synthesis of evidence in a mathematical structure. Here we review alternative approaches to statistical evidence synthesis, and their implications for epidemiology and medical decision-making. The methods include hierarchical models, models informed by evidence on different functions of several parameters and models incorporating both of these features. The need to check for consistency of evidence when using these powerful methods is emphasized. We develop a rationale for evidence synthesis that is based on Bayesian decision modelling and expected value of information theory, which stresses not only the need for a lack of bias in estimates of treatment effects but also a lack of bias in assessments of uncertainty. The increasing reliance of governmental bodies like the UK National Institute for Clinical Excellence on complex evidence synthesis in decision modelling is discussed.
5.
Drawing on theories from pedagogy and psychology, and starting from the stages of classroom teaching, this paper discusses the principles for selecting teaching strategies for English instruction in a multimedia environment, along with the corresponding strategies, and proposes four: an overall planning strategy, an information input strategy, a holistic evaluation strategy, and a review-and-consolidation strategy.
6.
Let (X_k)_k be a sequence of i.i.d. random variables taking values in a set, and consider the problem of estimating the law of X_1 in a Bayesian framework. We prove, under mild conditions on the prior, that the sequence of posterior distributions satisfies a moderate deviation principle.
7.
On Meta-evaluation
This paper expounds the significance of meta-evaluation and systematically discusses the principles on which meta-evaluation rests (objectivity; integrity and comprehensiveness; synthesis), the procedures it should follow, and the methods it employs (meta-research, systems, evaluation, and modeling methods). Meta-evaluation can make a valuable contribution to improving the scientific rigor and self-awareness of decision-making by leaders at all levels, and to promoting coordinated, sustainable development across all fields of society and between society and nature.
8.
Summary. We model daily catches of fishing boats in the Grand Bank fishing grounds. We use data on catches per species for a number of vessels collected by the European Union in the context of the Northwest Atlantic Fisheries Organization. Many variables can be thought to influence the amount caught: a number of ship characteristics (such as the size of the ship, the fishing technique used and the mesh size of the nets) are obvious candidates, but one can also consider the season or the actual location of the catch. Our database leads to 28 possible regressors (arising from six continuous variables and four categorical variables, whose 22 levels are treated separately), resulting in a set of 177 million possible linear regression models for the log-catch. Zero observations are modelled separately through a probit model. Inference is based on Bayesian model averaging, using a Markov chain Monte Carlo approach. Particular attention is paid to the prediction of catches for single and aggregated ships.
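On a far smaller scale than the 177 million models considered above, Bayesian model averaging can be sketched by enumerating a few candidate regressions and weighting them by approximate posterior model probabilities. The sketch below uses BIC-based weights on simulated data (the regressors x1-x3 and all coefficients are invented; the paper itself uses MCMC over the model space):

```python
import math
import random

random.seed(0)
n = 200
# Three candidate regressors; only the first two truly matter.
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
x3 = [random.gauss(0, 1) for _ in range(n)]
y = [2.0 + 1.5 * a - 0.8 * b + random.gauss(0, 1) for a, b in zip(x1, x2)]

def ols_rss(cols):
    """Fit OLS with intercept via normal equations; return (RSS, k)."""
    X = [[1.0] + [c[i] for c in cols] for i in range(n)]
    k = len(X[0])
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(k)]
         for r in range(k)]
    b = [sum(X[i][r] * y[i] for i in range(n)) for r in range(k)]
    for p in range(k):  # Gaussian elimination with partial pivoting
        piv = max(range(p, k), key=lambda r: abs(A[r][p]))
        A[p], A[piv] = A[piv], A[p]
        b[p], b[piv] = b[piv], b[p]
        for r in range(p + 1, k):
            f = A[r][p] / A[p][p]
            for c in range(p, k):
                A[r][c] -= f * A[p][c]
            b[r] -= f * b[p]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, k))) / A[r][r]
    rss = sum((y[i] - sum(beta[j] * X[i][j] for j in range(k))) ** 2
              for i in range(n))
    return rss, k

# Enumerate all 8 subsets of {x1, x2, x3}; weight each by exp(-BIC/2).
regs = [x1, x2, x3]
weights = {}
for mask in range(8):
    cols = [regs[j] for j in range(3) if mask >> j & 1]
    rss, k = ols_rss(cols)
    bic = n * math.log(rss / n) + k * math.log(n)
    weights[mask] = math.exp(-0.5 * bic)
total = sum(weights.values())
post = {mask: w / total for mask, w in weights.items()}

# Posterior inclusion probability of each regressor
pips = {}
for j, name in enumerate(["x1", "x2", "x3"]):
    pips[name] = sum(p for mask, p in post.items() if mask >> j & 1)
    print(f"P({name} in model | data) = {pips[name]:.3f}")
```

BIC weights are only a rough approximation to posterior model probabilities; the MCMC approach in the paper serves the same purpose for model spaces far too large to enumerate.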
9.
The functions of the jury system are the reason for its existence and continuation. This paper first analyzes the evolution of the jury system and its development trends; it then expounds the functions the jury system shares across the two major legal families; and finally it discusses, separately, the functions of the criminal jury system in the common-law and civil-law traditions.
10.
The criminal legal relationship arising from criminal conduct is the true jurisprudential basis for examining the question of the subjects of criminal procedure. The state and the accused, as subjects of the criminal legal relationship, are the parties to criminal proceedings; the victim and his or her close relatives are not subjects of that relationship and hence are not parties, but citizen prosecutors who initiate and support a public-law prosecution. The separation of adjudicative power from prosecutorial power is, in essence, the state taking itself as its own object and drawing an absolute distinction between itself as adjudicator and itself as party, reflecting the self-reflective restraint of the state as public power. Other participants in criminal proceedings are not subjects of the criminal legal relationship; according to their relations of will, they are representatives of the state, agents or assistants of the accused or the victim, or procedural subjects subordinate to neither side.
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号