2,587 search results in total.
1.
ABSTRACT

The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have an enormous impact on which treatments reach patients, when they get them and how they are used. The statistical framework for supporting decisions in regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of “no treatment effect” are done routinely, and a p-value < 0.05 is often the determinant of what constitutes a “successful” trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence points blame at the deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared “unsuccessful” owing to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process for its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a “prior” for Phase 3 trials, so that evidence synthesized across trials can be used to compute probability statements about the magnitude of the treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
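A minimal sketch of the kind of probability statement being argued for, assuming normal approximations throughout; the prior (standing in for evidence synthesized from earlier trials), the Phase 3 summary statistics and the 0.2 relevance threshold are illustrative values, not taken from the paper:

```python
import numpy as np
from scipy import stats

# Illustrative prior for the treatment effect, synthesized from earlier
# trials (e.g. Phase 2): effect ~ Normal(prior_mean, prior_sd^2).
prior_mean, prior_sd = 0.30, 0.20

# Illustrative Phase 3 summary: estimated effect and its standard error.
effect_hat, se = 0.25, 0.12

# Conjugate normal-normal update.
prior_prec, data_prec = 1 / prior_sd**2, 1 / se**2
post_var = 1 / (prior_prec + data_prec)
post_mean = post_var * (prior_prec * prior_mean + data_prec * effect_hat)

# Probability statements about the magnitude of effect, in place of a
# binary p < 0.05 verdict.
p_positive = 1 - stats.norm.cdf(0.0, post_mean, np.sqrt(post_var))
p_relevant = 1 - stats.norm.cdf(0.2, post_mean, np.sqrt(post_var))
print(f"P(effect > 0   | data) = {p_positive:.3f}")
print(f"P(effect > 0.2 | data) = {p_relevant:.3f}")
```

Unlike a significance verdict, the output is a direct probability that the effect exceeds any threshold of interest.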
2.
Summary.  Alongside the development of meta-analysis as a tool for summarizing research literature, there is renewed interest in broader forms of quantitative synthesis that are aimed at combining evidence from different study designs or evidence on multiple parameters. These have been proposed under various headings: the confidence profile method, cross-design synthesis, hierarchical models and generalized evidence synthesis. Models that are used in health technology assessment are also referred to as representing a synthesis of evidence in a mathematical structure. Here we review alternative approaches to statistical evidence synthesis, and their implications for epidemiology and medical decision-making. The methods include hierarchical models, models informed by evidence on different functions of several parameters and models incorporating both of these features. The need to check for consistency of evidence when using these powerful methods is emphasized. We develop a rationale for evidence synthesis that is based on Bayesian decision modelling and expected value of information theory, which stresses not only the need for a lack of bias in estimates of treatment effects but also a lack of bias in assessments of uncertainty. The increasing reliance of governmental bodies like the UK National Institute for Clinical Excellence on complex evidence synthesis in decision modelling is discussed.
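As a concrete instance of the hierarchical-model approach, here is a minimal Gibbs sampler for a normal random-effects synthesis of study estimates; the priors, hyperparameters and variable names are illustrative assumptions, not taken from the review:

```python
import numpy as np
rng = np.random.default_rng(0)

def hierarchical_synthesis(y, s, n_iter=5000, a=0.001, b=0.001):
    """Gibbs sampler for a normal hierarchical evidence-synthesis model:
    y_i ~ N(theta_i, s_i^2), theta_i ~ N(mu, tau^2), with a flat prior on
    mu and an InvGamma(a, b) prior on tau^2. y and s are numpy arrays of
    study estimates and their (assumed known) standard errors."""
    k = len(y)
    theta, mu, tau2 = y.copy(), y.mean(), y.var() + 1e-6
    out = []
    for _ in range(n_iter):
        # Study effects: precision-weighted shrinkage toward mu.
        prec = 1 / s**2 + 1 / tau2
        theta = rng.normal((y / s**2 + mu / tau2) / prec, np.sqrt(1 / prec))
        # Overall effect given the study effects.
        mu = rng.normal(theta.mean(), np.sqrt(tau2 / k))
        # Between-study variance: conjugate inverse-gamma update.
        tau2 = 1 / rng.gamma(a + k / 2, 1 / (b + 0.5 * np.sum((theta - mu) ** 2)))
        out.append((mu, tau2))
    return np.array(out)
```

The draws for mu and tau2 then feed directly into decision models, where expected value of information calculations need honest uncertainty, not just unbiased point estimates.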
3.
Let (X_k)_k be a sequence of i.i.d. random variables taking values in a set, and consider the problem of estimating the law of X_1 in a Bayesian framework. We prove, under mild conditions on the prior, that the sequence of posterior distributions satisfies a moderate deviation principle.
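For orientation, this is the generic form of a deviation principle; the specific moderate-deviation speed and rate function for the posterior sequence are those established in the paper and are not reproduced here:

```latex
% A sequence of probability measures (mu_n) satisfies a deviation principle
% with speed b_n -> infinity and rate function I when, for every Borel set A,
% the bounds below hold (interior for the lower bound, closure for the upper).
% "Moderate" refers to speeds intermediate between the CLT and LDP scalings.
\[
  -\inf_{x \in A^{\circ}} I(x)
  \;\le\; \liminf_{n \to \infty} \frac{1}{b_n} \log \mu_n(A^{\circ})
  \;\le\; \limsup_{n \to \infty} \frac{1}{b_n} \log \mu_n(\overline{A})
  \;\le\; -\inf_{x \in \overline{A}} I(x)
\]
```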
4.
Summary. We model daily catches of fishing boats in the Grand Bank fishing grounds. We use data on catches per species for a number of vessels collected by the European Union in the context of the Northwest Atlantic Fisheries Organization. Many variables can be thought to influence the amount caught: a number of ship characteristics (such as the size of the ship, the fishing technique used and the mesh size of the nets) are obvious candidates, but one can also consider the season or the actual location of the catch. Our database leads to 28 possible regressors (arising from six continuous variables and four categorical variables, whose 22 levels are treated separately), resulting in a set of 177 million possible linear regression models for the log-catch. Zero observations are modelled separately through a probit model. Inference is based on Bayesian model averaging, using a Markov chain Monte Carlo approach. Particular attention is paid to the prediction of catches for single and aggregated ships.
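With a model space of this size, enumeration is infeasible, which is why inference rests on Markov chain Monte Carlo over models. A toy MC^3-style sketch of such a walk over regressor subsets, scoring models by a BIC approximation to the marginal likelihood rather than the paper's actual priors (function names are invented for illustration):

```python
import numpy as np

def bic_log_marginal(X, y, subset):
    """BIC approximation to the log marginal likelihood of a linear model."""
    n = len(y)
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ beta) ** 2)
    return -0.5 * n * np.log(rss / n) - 0.5 * Xs.shape[1] * np.log(n)

def mc3(X, y, n_iter=10000, rng=np.random.default_rng(0)):
    """MC^3: Metropolis random walk over the space of regressor subsets."""
    p = X.shape[1]
    current = set()
    score = bic_log_marginal(X, y, sorted(current))
    visits = np.zeros(p)
    for _ in range(n_iter):
        j = rng.integers(p)                      # propose toggling one regressor
        proposal = current ^ {j}
        prop_score = bic_log_marginal(X, y, sorted(proposal))
        if np.log(rng.random()) < prop_score - score:
            current, score = proposal, prop_score
        for k in current:
            visits[k] += 1
    return visits / n_iter                       # posterior inclusion probabilities
```

Predictions averaged over the models visited by the chain, weighted by visit frequency, give the model-averaged forecasts of catch.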
5.
Estimated associations between an outcome variable and misclassified covariates tend to be biased when estimation methods that ignore the classification error are applied. Available methods that account for misclassification often require a validation sample (i.e. a gold standard). In practice, however, such a gold standard may be unavailable or impractical. We propose a Bayesian approach to adjust for misclassification of a binary covariate in the random-effect logistic model when a gold standard is not available. This Markov chain Monte Carlo (MCMC) approach uses two imperfect measures of a dichotomous exposure under the assumptions of conditional independence and non-differential misclassification. A simulated numerical example and a real clinical example illustrate the proposed approach. Our results suggest that the estimated log odds of inpatient care and the corresponding standard deviation are much larger under the proposed method than under models that ignore misclassification. Ignoring misclassification produces downwardly biased estimates and understates uncertainty.
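The heart of such an MCMC scheme is imputing the latent true exposure from the two imperfect measures. A minimal sketch of that single step, assuming known sensitivities and specificities; in the full Bayesian model these quantities carry priors and are sampled as well, and all names here are illustrative:

```python
def p_true_exposure(w1, w2, prev, se1, sp1, se2, sp2):
    """P(X = 1 | W1 = w1, W2 = w2) for a latent binary exposure X measured
    by two imperfect instruments, under conditional independence and
    non-differential misclassification. prev is P(X = 1); se/sp are each
    instrument's sensitivity and specificity."""
    def lik(w, se, sp, x):
        p = se if x == 1 else 1 - sp        # P(W = 1 | X = x)
        return p if w == 1 else 1 - p
    num = prev * lik(w1, se1, sp1, 1) * lik(w2, se2, sp2, 1)
    den = num + (1 - prev) * lik(w1, se1, sp1, 0) * lik(w2, se2, sp2, 0)
    return num / den

# Both instruments positive, modest prevalence: exposure is now very likely.
print(p_true_exposure(1, 1, prev=0.2, se1=0.8, sp1=0.9, se2=0.7, sp2=0.95))
```

Within each MCMC iteration this probability would be used to draw the latent exposure for every subject before updating the logistic-model parameters.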
6.
We discuss Bayesian analyses of traditional normal-mixture models for classification and discrimination. The development applies an iterative resampling approach to Monte Carlo inference, commonly called Gibbs sampling, and demonstrates its routine application. We stress the benefits of exact analyses over traditional classification and discrimination techniques: the ease with which such analyses may be performed in a quite general setting, with possibly several normal-mixture components having different covariance matrices; the computation of exact posterior classification probabilities for observed data and for future cases to be classified; and posterior distributions for these probabilities that allow assessment of second-level uncertainty in classification.
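As a stripped-down illustration of the Gibbs scheme, consider a two-component univariate normal mixture with known common variance; the priors and initialisation are simplifying assumptions, not the paper's general multivariate setting:

```python
import numpy as np
rng = np.random.default_rng(1)

def gibbs_mixture(y, n_iter=2000, sigma=1.0):
    """Gibbs sampler for a two-component normal mixture with known common
    variance sigma^2; priors: mu_k ~ N(0, 10^2), weight w ~ Beta(1, 1)."""
    n = len(y)
    mu = np.array([y.min(), y.max()], dtype=float)   # crude initialisation
    w = 0.5
    draws = []
    for _ in range(n_iter):
        # 1. Sample component labels given parameters (unnormalised
        #    densities; work on the log scale for numerical stability).
        p1 = w * np.exp(-0.5 * ((y - mu[0]) / sigma) ** 2)
        p2 = (1 - w) * np.exp(-0.5 * ((y - mu[1]) / sigma) ** 2)
        z = rng.random(n) < p2 / (p1 + p2)           # True -> component 2
        # 2. Sample means given labels (conjugate normal update).
        for k, mask in enumerate([~z, z]):
            prec = mask.sum() / sigma**2 + 1 / 100.0
            mu[k] = rng.normal((y[mask].sum() / sigma**2) / prec,
                               1 / np.sqrt(prec))
        # 3. Sample the weight given labels (conjugate beta update).
        w = rng.beta(1 + (~z).sum(), 1 + z.sum())
        draws.append((w, mu.copy()))
    return draws
```

Averaging the label indicators over iterations yields the posterior classification probabilities the abstract refers to, and their spread across iterations expresses the second-level uncertainty.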
7.
In this paper, we present a Bayesian methodology for modelling accelerated lifetime tests under a stress-response relationship with a threshold stress. Both Laplace and MCMC methods are considered. The methodology is described in detail for the case where lifetimes are assumed to follow an exponential distribution and the stress-response relationship is a power-law model with a threshold stress. We assume vague but proper priors for the parameters of interest. The methodology is illustrated with an accelerated failure test on an electrical insulation film.
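A sketch of the log posterior under one concrete reading of this model: exponential lifetimes with mean theta(s) = alpha / (s - s0)^beta above the threshold stress s0, with wide normal priors on the log scale standing in for the paper's vague-but-proper choices. The parameterisation and names are assumptions for illustration; the function could be handed to any Metropolis sampler:

```python
import numpy as np

def log_posterior(params, stress, lifetimes):
    """Log posterior for exponential lifetimes under a power-law model with
    threshold stress. stress and lifetimes are numpy arrays; params holds
    (log_alpha, log_beta, s0). The threshold gets an implicit uniform prior
    on (0, min observed stress)."""
    log_alpha, log_beta, s0 = params
    alpha, beta = np.exp(log_alpha), np.exp(log_beta)
    if s0 <= 0 or s0 >= stress.min():      # threshold must sit below all test stresses
        return -np.inf
    theta = alpha / (stress - s0) ** beta  # mean lifetime at each stress level
    loglik = np.sum(-np.log(theta) - lifetimes / theta)
    logprior = -0.5 * (log_alpha**2 + log_beta**2) / 10.0**2
    return loglik + logprior
```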
8.
The authors consider the optimal design of sampling schedules for binary sequence data. They propose an approach which allows a variety of goals to be reflected in the utility function by including a deterministic sampling cost, a term related to prediction and, if relevant, a term related to learning about a treatment effect. To this end, they use a nonparametric probability model relying on a minimal number of assumptions. They show how their assumption of partial exchangeability for the binary sequence of data allows the sampling distribution to be written as a mixture of homogeneous Markov chains of order k. The implementation follows the approach of Quintana & Müller (2004), which uses a Dirichlet process prior for the mixture.
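The building block of that mixture representation is the likelihood of a binary sequence under a single homogeneous Markov chain of order k, sketched below with invented transition probabilities; under partial exchangeability the sampling distribution is a mixture of such chains:

```python
import numpy as np

def markov_loglik(seq, trans, k=1):
    """Log likelihood of a binary sequence under a homogeneous Markov chain
    of order k; trans maps each length-k history to P(next = 1)."""
    ll = 0.0
    for t in range(k, len(seq)):
        hist = tuple(seq[t - k:t])
        p1 = trans[hist]
        ll += np.log(p1 if seq[t] == 1 else 1 - p1)
    return ll

# Illustrative order-1 chain: P(1 | 0) = 0.2, P(1 | 1) = 0.7.
trans = {(0,): 0.2, (1,): 0.7}
print(markov_loglik([0, 0, 1, 1, 1, 0], trans, k=1))
```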
9.
The evaluation of DNA evidence in pedigrees requiring population inference
Summary. The evaluation of nuclear DNA evidence for identification purposes is performed here taking account of the uncertainty about population parameters. Graphical models are used to detail the hypotheses being debated in a trial, with the aim of obtaining a directed acyclic graph. Graphs also clarify which evidence contributes to population inferences and describe the conditional independence structure of the DNA evidence. Numerical illustrations are provided by re-examining three case-studies taken from the literature. Our calculations of the weight of evidence differ from those given by the authors of the case-studies in that they reveal more conservative values.
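One way such calculations become more conservative is by averaging the match probability over posterior uncertainty in allele frequencies rather than plugging in raw sample proportions. A toy single-locus illustration under a symmetric Dirichlet prior, assuming independent allele draws; the counts and prior are invented, and this is not the pedigree machinery of the paper:

```python
import numpy as np

def posterior_match_probability(counts, allele, alpha=1.0):
    """Posterior predictive probability that an untyped person is homozygous
    for `allele`, given database allele counts and a symmetric
    Dirichlet(alpha) prior on allele frequencies (two successive draws from
    the Dirichlet-multinomial posterior predictive)."""
    counts = np.asarray(counts, dtype=float)
    n, a = counts.sum(), len(counts)
    x = counts[allele]
    return (x + alpha) / (n + a * alpha) * (x + 1 + alpha) / (n + 1 + a * alpha)

counts = [12, 30, 8]          # illustrative allele counts in a database
p = posterior_match_probability(counts, allele=0)
print(f"match probability {p:.4f}, likelihood ratio {1 / p:.1f}")
```

Compared with squaring the raw sample frequency, this averaging inflates small match probabilities and hence shrinks the likelihood ratio, which is the conservative direction.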
10.
To explore the projection efficiency of a design, Tsai et al. [2000. Projective three-level main effects designs robust to model uncertainty. Biometrika 87, 467–475] introduced the Q criterion to compare three-level main-effects designs for quantitative factors, which allows interactions to be considered in addition to main effects. In this paper, we extend their method and focus on the case in which experimenters have some prior knowledge, in advance of running the experiment, about the probabilities of effects being non-negligible. A criterion which incorporates experimenters' prior beliefs about the importance of each effect is introduced to compare orthogonal, or nearly orthogonal, main-effects designs, with robustness to interactions as a secondary consideration. We show that this criterion, by exploiting prior information about model uncertainty, can lead to more appropriate designs reflecting experimenters' prior beliefs.
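A generic sketch of the underlying idea of weighting a design criterion by prior probabilities of effects; this is not the authors' criterion (nor Tsai et al.'s exact Q), just an illustration of how prior beliefs about which interactions matter can enter a comparison of candidate designs:

```python
import numpy as np
from itertools import combinations

def weighted_alias_criterion(D, p_int):
    """Prior-weighted aliasing score for a candidate design. D is an n x f
    design matrix with centred, scaled factor columns; p_int[j, k] is the
    experimenter's prior probability that the (j, k) two-factor interaction
    is non-negligible. Penalises aliasing between main-effect columns and
    interaction columns, weighted by those priors."""
    n, f = D.shape
    score = 0.0
    for j, k in combinations(range(f), 2):
        inter = D[:, j] * D[:, k]               # two-factor interaction column
        # Total squared cross-product of the interaction with main effects.
        alias = sum((D[:, i] @ inter) ** 2 for i in range(f)) / n**2
        score += p_int[j, k] * alias
    return score                                 # smaller is better
```

Candidate designs would then be ranked by this score, lower values indicating less prior-weighted aliasing between main effects and the interactions believed likely to be active.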