981.
Monte Carlo methods represent the de facto standard for approximating complicated integrals involving multidimensional target distributions. In order to generate random realizations from the target distribution, Monte Carlo techniques use simpler proposal probability densities to draw candidate samples. The performance of any such method is closely tied to the specification of the proposal distribution, and unfortunate choices can easily wreak havoc on the resulting estimators. In this work, we introduce a layered (i.e., hierarchical) procedure to generate samples employed within a Monte Carlo scheme. This approach ensures that an appropriate equivalent proposal density is always obtained automatically (thus eliminating the risk of catastrophic performance), although at the expense of a moderate increase in complexity. Furthermore, we provide a general unified importance sampling (IS) framework in which multiple proposal densities are employed, and several IS schemes are introduced by applying the so-called deterministic mixture approach. Finally, given these schemes, we also propose a novel class of adaptive importance samplers using a population of proposals, where the adaptation is driven by independent parallel or interacting Markov chain Monte Carlo (MCMC) chains. The resulting algorithms efficiently combine the benefits of both IS and MCMC methods.
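The deterministic mixture weighting mentioned in this abstract can be sketched in a few lines. The bimodal target and the two Gaussian proposals below are hypothetical choices for illustration only, not examples from the paper:

```python
import numpy as np

# Hedged sketch of deterministic-mixture multiple importance sampling.
rng = np.random.default_rng(0)

def target(x):
    # Unnormalised target: equal-weight mixture of N(-2, 1) and N(2, 1)
    return 0.5 * np.exp(-0.5 * (x + 2) ** 2) + 0.5 * np.exp(-0.5 * (x - 2) ** 2)

mus = np.array([-2.0, 2.0])  # means of the two unit-variance Gaussian proposals

def proposal_pdf(x, mu):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2.0 * np.pi)

M = 5000  # samples drawn per proposal
samples, weights = [], []
for mu in mus:
    x = rng.normal(mu, 1.0, M)
    # Deterministic-mixture weight: the denominator is the full mixture of
    # ALL proposals, which is what tames the weight variance.
    mix = np.mean([proposal_pdf(x, m) for m in mus], axis=0)
    samples.append(x)
    weights.append(target(x) / mix)

x = np.concatenate(samples)
w = np.concatenate(weights)
est_mean = np.sum(w * x) / np.sum(w)  # self-normalised IS estimate of E[X]
```

With the standard (non-mixture) weight `target(x) / proposal_pdf(x, mu)`, samples from one proposal landing near the other mode would receive exploding weights; the full-mixture denominator is exactly what prevents that.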
983.
Children represent a large underserved population of “therapeutic orphans,” as an estimated 80% of children are treated off-label. However, pediatric drug development often faces substantial economic, logistical, technical, and ethical barriers. Among the many efforts to remove these barriers, increased attention has recently been paid to extrapolation; that is, leveraging available data from adults or older age groups to draw conclusions for the pediatric population. The Bayesian statistical paradigm is natural in this setting, as it permits the combining (or “borrowing”) of information across disparate sources, such as the adult and pediatric data. In this paper, authored by the pediatric subteam of the Drug Information Association Bayesian Scientific Working Group and Adaptive Design Working Group, we develop, illustrate, and provide suggestions on Bayesian statistical methods that could be used to design improved pediatric development programs that use all available information in the most efficient manner. A variety of relevant Bayesian approaches are described, several of which are illustrated through two case studies: extrapolating adult efficacy data to expand the labeling for Remicade to include pediatric ulcerative colitis, and extrapolating adult exposure-response information for antiepileptic drugs to pediatrics.
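One simple instance of the borrowing idea is a conjugate power prior, sketched below; all counts and the discount factor `a0` are invented for illustration and are not from the paper or its case studies:

```python
# Hedged sketch of Bayesian "borrowing" via a conjugate power prior for a
# binomial response rate.
y0, n0 = 120, 200  # hypothetical adult trial: responders / enrolled
y, n = 14, 30      # hypothetical small pediatric trial
a0 = 0.5           # power-prior discount in [0, 1]: 0 = ignore adult data

# Starting from a Beta(1, 1) prior, raising the adult binomial likelihood
# to the power a0 yields a Beta posterior with fractional borrowed counts.
alpha_post = 1 + a0 * y0 + y
beta_post = 1 + a0 * (n0 - y0) + (n - y)
post_mean = alpha_post / (alpha_post + beta_post)

# Pediatric-only posterior mean (a0 = 0) for comparison
post_mean_no_borrow = (1 + y) / (2 + n)
```

The discount `a0` interpolates continuously between a pediatric-only analysis and full pooling of the adult data.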
984.
A common objective of cohort studies and clinical trials is to assess time-varying longitudinal continuous biomarkers as correlates of the instantaneous hazard of a study endpoint. We consider the setting where the biomarkers are measured in a designed sub-sample (i.e., a case-cohort or two-phase sampling design), as is normative for prevention trials. We address this problem via joint models, with underlying biomarker trajectories characterized by a random effects model and their relationship with instantaneous risk characterized by a Cox model. For estimation and inference, we extend the conditional score method of Tsiatis and Davidian (Biometrika 88(2):447–458, 2001) to accommodate the two-phase biomarker sampling design using augmented inverse probability weighting with nonparametric kernel regression. We present theoretical properties of the proposed estimators and finite-sample properties derived through simulations, and illustrate the methods with application to the AIDS Clinical Trials Group 175 antiretroviral therapy trial. We discuss how the methods are useful for evaluating a Prentice surrogate endpoint, for assessing mediation, and for generating hypotheses about biological mechanisms of treatment efficacy.
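The core of the two-phase weighting idea can be illustrated with a minimal inverse-probability-weighted mean on synthetic data; this toy is not the paper's augmented conditional-score estimator:

```python
import numpy as np

# Hedged sketch of inverse probability weighting under a two-phase design:
# a biomarker is measured only in a z-dependent subsample, and weighting by
# the known sampling probabilities removes the selection bias.
rng = np.random.default_rng(2)
N = 10_000
z = rng.normal(size=N)                          # phase-1 covariate, known for everyone
biomarker = 1.0 + 0.5 * z + rng.normal(scale=0.5, size=N)

pi = np.where(z > 1.0, 0.9, 0.2)                # phase-2 sampling probabilities
sampled = rng.random(N) < pi                    # who gets the biomarker measured

# Horvitz-Thompson style weighted mean vs. the naive subsample mean
ipw_mean = np.sum(biomarker[sampled] / pi[sampled]) / np.sum(1.0 / pi[sampled])
naive_mean = biomarker[sampled].mean()          # biased toward high-z subjects
```

Because high-`z` subjects are oversampled and have higher biomarker values, the naive subsample mean overshoots the population mean of 1.0, while the weighted estimate recovers it.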
985.
This paper is about satisficing behaviour: rather tautologically, behaviour in which decision-makers are satisfied with achieving some objective rather than with obtaining the best outcome. The term was coined by Simon (Q J Econ 69:99–118, 1955) and has stimulated many discussions and theories. Prominent amongst these are models of incomplete preferences, models of behaviour under ambiguity, theories of rational inattention, and search theories. Most of these, however, seem to lack an answer to at least one of two key questions: when should the decision-maker (DM) satisfice, and how should the DM satisfice? In a sense, search models answer the latter question (in that the theory tells the DM when to stop searching) but not the former; moreover, the question of whether any search at all is justified is usually left to a footnote. A recent paper by Manski (Theory Decis. doi: 10.1007/s11238-017-9592-1, 2017) fills these gaps in the literature and answers both questions. He achieves this by setting the decision problem in an ambiguous situation (so that probabilities do not exist, and many preference functionals therefore cannot be applied) and by using the Minimax Regret criterion as the preference functional. The results are simple and intuitive. This paper reports on an experimental test of his theory. The results show that some of his propositions (those relating to the ‘how’) appear to be empirically valid, while others (those relating to the ‘when’) are less so.
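The Minimax Regret criterion itself is easy to state concretely. The actions-by-states payoff table below is hypothetical, not taken from Manski's model or the experiment:

```python
import numpy as np

# Hedged sketch of the Minimax Regret criterion on a toy payoff table.
payoff = np.array([
    [10.0, 2.0],  # action 0: excellent in state 0, poor in state 1
    [6.0, 6.0],   # action 1: a middling "satisficing" option
    [1.0, 9.0],   # action 2: good in state 1 only
])

# Regret of an action in a state = best payoff achievable in that state
# minus the action's own payoff in that state.
regret = payoff.max(axis=0) - payoff
worst_regret = regret.max(axis=1)      # each action's maximum regret over states
choice = int(np.argmin(worst_regret))  # minimax-regret action
```

No probabilities over states are needed, which is why the criterion remains applicable in the ambiguous setting described above; here it selects the middling action 1, whose worst-case regret (4) beats those of the extreme actions (7 and 9).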
986.
Choice under risk is modelled using a piecewise-linear version of rank-dependent utility. This model can be considered a continuous version of NEO-expected utility (Chateauneuf et al., J Econ Theory 137:538–567, 2007). In a framework of objective probabilities, a preference foundation is given without requiring a rich structure on the outcome set. The key axiom is called complementary additivity.
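A rank-dependent utility evaluation with a piecewise-linear weighting function can be sketched as follows; the parameters `a`, `b` and the lottery are invented, and the weighting form is a generic NEO-additive-style textbook variant rather than the paper's axiomatised model:

```python
# Hedged sketch of rank-dependent utility (RDU) with a piecewise-linear
# probability weighting function.

def w(p, a=0.7, b=0.1):
    # Linear on (0, 1) with jumps at the endpoints: w(0) = 0, w(1) = 1
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    return b + a * p

def rdu(outcomes, probs, u=lambda x: x):
    # Rank outcomes from best to worst; each outcome receives the increment
    # of the weighted decumulative probability, not its raw probability.
    pairs = sorted(zip(outcomes, probs), key=lambda t: -t[0])
    total, cum = 0.0, 0.0
    for x, p in pairs:
        total += u(x) * (w(cum + p) - w(cum))
        cum += p
    return total

val = rdu([100.0, 0.0], [0.5, 0.5])  # a 50-50 lottery over 100 and 0
```

With these illustrative parameters the 50-50 lottery is valued below its expected value of 50, reflecting the pessimism encoded by `a + b < 1`.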
988.
A gap in the proof of a nonstationary mixingale invariance principle is identified and fixed by introducing a skipped subsampling of a partial-sum process and letting the skipped interval vanish asymptotically at an appropriate rate as the sample size increases. The corrected proof produces a mixingale limit theorem in the form of a mixing convergence in law, occurring jointly with the stable convergence in law for the same σ-field relative to which they are stable and mixing. The applicability of the established results to high-frequency estimation of the quadratic variation of a financial price process is discussed.
989.
We present the parallel and interacting stochastic approximation annealing (PISAA) algorithm, a stochastic simulation procedure for global optimisation that extends and improves stochastic approximation annealing (SAA) by using population Monte Carlo ideas. The efficiency of the standard SAA algorithm crucially depends on its self-adjusting mechanism, which presents stability issues in high-dimensional or rugged optimisation problems. The proposed algorithm simulates a population of SAA chains that interact with each other in a manner that significantly improves the stability of the self-adjusting mechanism and the search for the global optimum in the sampling space, while inheriting the desired convergence properties of SAA when a square-root cooling schedule is used. It can be implemented in parallel computing environments to mitigate the computational overhead. As a result, PISAA can address complex optimisation problems that SAA would find difficult to address satisfactorily. We demonstrate the good performance of the proposed algorithm on challenging applications, including Bayesian network learning and protein folding. Our numerical comparisons suggest that PISAA outperforms simulated annealing, stochastic approximation annealing, and annealing evolutionary stochastic approximation Monte Carlo.
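A bare-bones annealing loop with the square-root cooling schedule mentioned above looks as follows; the toy objective is invented, and this sketch deliberately omits SAA's self-adjusting mechanism and the PISAA population interaction entirely:

```python
import numpy as np

# Hedged sketch: simulated annealing with square-root cooling on a toy
# rugged one-dimensional landscape whose global minimum is -1 at x = 0.
rng = np.random.default_rng(1)

def energy(x):
    return 0.1 * x ** 2 - np.cos(3.0 * x)

x = 5.0
best_x, best_e = x, energy(x)
t0 = 2.0
for k in range(1, 20001):
    temp = t0 / np.sqrt(k)           # square-root cooling schedule
    prop = x + rng.normal(0.0, 1.0)  # random-walk proposal
    delta = energy(prop) - energy(x)
    # Metropolis acceptance: always accept downhill, uphill with
    # probability exp(-delta / temp), which shrinks as temp cools
    if delta < 0 or rng.random() < np.exp(-delta / temp):
        x = prop
    if energy(x) < best_e:
        best_x, best_e = x, energy(x)
```

Early high temperatures let the chain hop between the local basins; the slowly decaying square-root schedule then freezes it into the deepest basin found.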