991.
Reform of the UK Local Government Performance Assessment System and Its Lessons for China   Total citations: 1; self-citations: 0; citations by others: 1
This article is an interim product of field research on the reform of the UK's local government performance assessment system and of pilot surveys on government performance assessment reform in China. It analyzes the main features of the UK's current local government performance assessment system, the drivers of the reform, and the reform's main content, and then identifies the principal aspects from which local government performance management in China can draw lessons.
992.
Ten years ago, the National Academy of Sciences released its risk assessment/risk management (RA/RM) “paradigm” that served to crystallize much of the early thinking about these concepts. By defining RA as a four-step process, operationally independent from RM, the paradigm has presented society with a scheme, or a conceptually common framework, for addressing many risky situations (e.g., carcinogens, noncarcinogens, and chemical mixtures). The procedure has facilitated decision-making in a wide variety of situations and has identified the most important research needs. The past decade, however, has revealed areas where additional progress is needed. These include addressing the appropriate interaction (not isolation) between RA and RM, improving the methods for assessing risks from mixtures, dealing with “adversity of effect,” deciding whether “hazard” should imply an exposure to environmental conditions or to laboratory conditions, and evolving the concept to include both health and ecological risk. Interest in and expectations of risk assessment are increasing rapidly. The emerging concept of “comparative risk” (i.e., distinguishing between large risks and smaller risks that may be qualitatively different) is at a level comparable to that held by the concept of “risk” just 10 years ago. Comparative risk stands in need of a paradigm of its own, especially given the current economic limitations. “Times are tough; Brother, can you paradigm?”
993.
Risk management decisions are not made only on the basis of expert risk assessment. In numerous instances, public controversy erupts, questioning the results of previous risk assessment procedures and shaping the development of risk management episodes. This article presents a case study of risk management in the context of a 1980s controversy over aerial spraying against a spruce budworm epidemic in Quebec and draws some general conclusions concerning the relationship between risk analysis and public controversies. Actors in public controversies define risks more broadly than risk assessment experts. Moreover, public controversies only partly concern issues of risk. They are first and foremost debates about social choices in which actors carry with them a multidimensional social experience of technology, trust, credibility and decision-making institutions. This experience contributes to the construction of a plurality of emergent representations of what is at stake in a controversy, referred to in this paper as "worlds of relevance." Analysis shows that in any given public controversy, there are not just two parties arguing against each other. Rather, several "worlds of relevance" can be found that link, in a variety of ways, a variety of entities not necessarily shared by all these worlds. Each "world of relevance" presents a different definition of what the issues and the stakes of the controversy are. Risks are only part of the picture, and they are embedded in "worlds of relevance" from which they take their significance. The successful management of a controversy entails the association of entities from different worlds.
994.
Reviews     
Andersen, P. K., Borgan, O., Gill, R. D. and Keiding, N. Statistical Models based on Counting Processes
Anderson, T. W. and Finn, J. D. The New Statistical Analysis of Data
Azzalini, A. Statistical Inference—based on the Likelihood
Borodin, A. N. and Salminen, P. Handbook of Brownian Motion—Facts and Formulae
Brockwell, P. J. and Davis, R. A. Introduction to Time Series and Forecasting
Chapman, M. and Wykes, C. Plain Figures
Clarke, G. M. and Kempson, R. E. Introduction to the Design and Analysis of Experiments
Goldstein, H. and Lewis, T. (eds) Assessment: Problems, Developments and Statistical Issues; a Volume of Expert Contributions
Grenander, U. Elements of Pattern Theory
Högnäs, G. and Mukherjea, A. Probability Measures on Semigroups
Levitas, R. and Guy, W. Interpreting Official Statistics
van der Linden, W. J. and Hambleton, R. K. (eds) Handbook of Modern Item Response Theory
Ross, S. M. Simulation
Simonnet, M. Measures and Probabilities
Small, C. G. The Statistical Theory of Shape
van der Vaart, A. and Wellner, J. A. Weak Convergence and Empirical Processes with Applications to Statistics
995.
996.
As part of its preparation to review a potential license application from the U.S. Department of Energy (DOE), the U.S. Nuclear Regulatory Commission (NRC) is examining the performance of the proposed Yucca Mountain nuclear waste repository. In this regard, we evaluated postclosure repository performance using Monte Carlo analyses with an NRC-developed system model that has 950 input parameters, of which 330 are sampled to represent system uncertainties. The quantitative compliance criterion for dose was established by NRC to protect inhabitants who might be exposed to any releases from the repository. The NRC criterion limits the peak-of-the-mean dose, which in our analysis is estimated by averaging the potential exposure at each instant in time over all Monte Carlo realizations and then determining the maximum value of the mean curve within 10,000 years, the compliance period. This procedure contrasts in important ways with a more common measure of risk based on the mean of the ensemble of peaks from the individual Monte Carlo realizations. The NRC chose the former (peak-of-the-mean) because it more correctly represents the risk to an exposed individual. Procedures for calculating risk in the expected case of slow repository degradation differ from those for low-probability cases of disruption by external forces such as volcanism. We also explored the possibility of risk dilution (i.e., lower calculated risk) that could result from arbitrarily defining wide probability distributions for certain parameters. Finally, our sensitivity analyses to identify influential parameters used two approaches: (1) the ensemble of doses from each Monte Carlo realization at the time of the peak risk (i.e., peak-of-the-mean) and (2) the ensemble of peak doses calculated from each realization within 10,000 years. The latter measure appears to have more discriminatory power than the former for many parameters (based on the greater magnitude of the sensitivity coefficient), but can yield different rankings, especially for parameters that influence the timing of releases.
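The contrast between the two risk measures is easy to demonstrate numerically. Below is a minimal sketch (not the NRC system model; the dose curves and all parameter values are invented): because individual realizations peak at different times, averaging across realizations first smooths the peaks, so the peak-of-the-mean never exceeds the mean-of-the-peaks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dose-vs-time histories for many Monte Carlo realizations.
n_realizations, n_times = 1000, 200
t = np.linspace(0, 10_000, n_times)  # years

# Each realization: a lognormally scaled pulse whose timing also varies.
peak_time = rng.uniform(2_000, 9_000, size=(n_realizations, 1))
scale = rng.lognormal(mean=0.0, sigma=1.0, size=(n_realizations, 1))
dose = scale * np.exp(-((t - peak_time) / 1_500) ** 2)  # notional units

# Compliance measure: average across realizations at each instant,
# then take the maximum of that mean curve ("peak of the mean").
peak_of_mean = dose.mean(axis=0).max()

# The more common alternative: take each realization's own peak,
# then average those peaks ("mean of the peaks").
mean_of_peaks = dose.max(axis=1).mean()

print(peak_of_mean, mean_of_peaks)
```

Since the pointwise mean curve lies below the mean of the per-realization maxima at every instant, `peak_of_mean <= mean_of_peaks` holds for any ensemble; the gap widens as the timing of peaks becomes more variable, which is exactly why the two measures can rank parameters differently.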
997.
In November 2001, the University of Michigan hosted one of the first dialogues among international trade law scholars and scientists in the field of risk assessment with the goal of identifying critical areas of misunderstanding between the two fields. This article discusses key issues that need to be addressed in order to better harmonize the scientific and legal systems of evidence within the context of trade disputes and trade law and presents the recommendations that emerged from the Michigan meeting.
998.
Several assumptions, defined and undefined, are used in the toxicity assessment of chemical mixtures. In scientific practice, mixture components in the low-dose region, particularly at subthreshold doses, are often assumed to behave additively (i.e., zero interaction) based on heuristic arguments. This assumption has important implications in the practice of risk assessment, but has not been experimentally tested. We have developed methodology to test for additivity in the sense of Berenbaum (Advances in Cancer Research, 1981), based on the statistical equivalence-testing literature, where the null hypothesis of interaction is rejected in favor of the alternative hypothesis of additivity when the data support the claim. The implication of this approach is that conclusions of additivity are made with a false positive rate controlled by the experimenter. The claim of additivity is based on prespecified additivity margins, which are chosen using expert biological judgment such that small deviations from additivity, which are not considered to be biologically important, are not statistically significant. This approach is in contrast to the usual hypothesis-testing framework that assumes additivity in the null hypothesis and rejects when there is significant evidence of interaction. In that scenario, failure to reject may be due to lack of statistical power, making the claim of additivity problematic. The proposed method is illustrated in a mixture of five organophosphorus pesticides that were experimentally evaluated alone and at relevant mixing ratios. Motor activity was assessed in adult male rats following acute exposure. Four low-dose mixture groups were evaluated. Evidence of additivity is found in three of the four low-dose mixture groups. The proposed method tests for additivity of the whole mixture and does not take into account subset interactions (e.g., synergistic, antagonistic) that may have occurred and cancelled each other out.
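The equivalence-testing logic can be sketched as a TOST-style (two one-sided tests) procedure: the null hypothesis of interaction is rejected in favor of additivity only when the mean deviation from the dose-addition prediction lies demonstrably within the prespecified margins. The sketch below is illustrative, not the paper's method: it assumes roughly normal responses, uses a normal approximation in place of the t distribution, and the data and margin are invented.

```python
import numpy as np
from statistics import NormalDist

def tost_additivity(observed, predicted_additive, margin):
    """Reject 'interaction' in favor of 'additivity' when the mean
    deviation (observed effect minus dose-addition prediction) lies
    within +/- margin. Normal approximation for simplicity; a real
    analysis would use t quantiles and the experimental design's
    variance structure."""
    d = np.asarray(observed, dtype=float) - predicted_additive
    se = d.std(ddof=1) / np.sqrt(d.size)
    z = NormalDist()
    # H0_lower: mean(d) <= -margin  vs  H1: mean(d) > -margin
    p_lower = 1.0 - z.cdf((d.mean() + margin) / se)
    # H0_upper: mean(d) >= +margin  vs  H1: mean(d) < +margin
    p_upper = z.cdf((d.mean() - margin) / se)
    # TOST p-value: both one-sided tests must reject.
    return d.mean(), max(p_lower, p_upper)

# Invented motor-activity data: deviations near zero, margin of 10 units.
rng = np.random.default_rng(1)
obs = 100 + rng.normal(0, 4, size=30)   # observed mixture responses
mean_dev, p = tost_additivity(obs, predicted_additive=100.0, margin=10.0)
print(mean_dev, p)  # a small p supports additivity within the margin
```

Note the reversal relative to the usual framework: here a *small* p-value is evidence *for* additivity (within the margin), so the experimenter controls the rate of falsely claiming additivity, rather than relying on a possibly underpowered failure to detect interaction.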
999.
Decisions concerning the management of fisheries are founded on confidence statements for interest parameters such as biomass and exploitation rate, derived from complex structural models that describe the dynamics of fisheries. We identify four generic statistical issues and focus on how they affect the reliability of those confidence statements: (a) parameters for which the data have little or no information; (b) competing structural relationships; (c) weighting of observations; and (d) alternative methods for computing confidence statements. Our purpose is to give an exposition of how these issues affect fisheries analyses, with the intent of stimulating thought on more effective alternatives. We describe the fisheries management context and use two specific studies to illustrate how these generic statistical issues affect fisheries assessment results. It is demonstrated that these statistical issues can have a profound impact on fishery management decisions and that established approaches to handling them have not been fully developed.
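Point (d), alternative methods for computing confidence statements, can be illustrated in miniature: even for a simple skewed estimate, a normal-approximation (Wald) interval and a percentile-bootstrap interval need not agree. The sketch below uses invented exploitation-rate data, not either of the paper's case studies.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical annual exploitation-rate estimates (catch / biomass):
# skewed values in (0, 1), so interval methods can disagree.
rates = rng.beta(2, 8, size=40)
est = rates.mean()

# (1) Normal-approximation ("Wald") interval around the mean.
se = rates.std(ddof=1) / np.sqrt(rates.size)
wald = (est - 1.96 * se, est + 1.96 * se)

# (2) Percentile-bootstrap interval, which follows the skew of the data.
boot = np.array([rng.choice(rates, rates.size, replace=True).mean()
                 for _ in range(5000)])
pct = (np.quantile(boot, 0.025), np.quantile(boot, 0.975))

print(wald, pct)  # for skewed data the two intervals need not coincide
```

In a full stock-assessment model the divergence between methods is typically far larger, because the likelihood surface for poorly informed parameters (issue (a)) is strongly asymmetric.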
1000.
Although analysis of in vivo pharmacokinetic data necessitates use of time-dependent physiologically based pharmacokinetic (PBPK) models, risk assessment applications are often driven primarily by steady-state and/or integrated (e.g., AUC) dosimetry. To that end, we present an analysis of steady-state solutions to a PBPK model for a generic volatile chemical metabolized in the liver. We derive an equivalent model that is much simpler and contains many fewer parameters than the full PBPK model. The state of the system can be specified by two state variables: the rate of metabolism and the rate of clearance by exhalation. For a given oral dose rate or inhalation exposure concentration, the system state depends only on the blood-air partition coefficient, the metabolic constants, and the rates of blood flow to the liver and of alveolar ventilation. At exposures where metabolism is close to linear, only the effective first-order metabolic rate is needed. Furthermore, in this case, the relationship between cumulative exposure and average internal dose (e.g., AUCs) remains the same for time-varying exposures. We apply our analysis to oral-inhalation route extrapolation, showing that for any dose metric, route equivalence depends only on the parameters that determine the system state. Even if the appropriate dose metric is unknown, bounds can be placed on the route-to-route equivalence with very limited data. We illustrate this analysis by showing that it reproduces exactly the PBPK-model-based route-to-route extrapolation in EPA's 2000 risk assessment for vinyl chloride. Overall, we find that in many cases, steady-state solutions exactly reproduce or closely approximate the solutions of the full PBPK model, while being substantially more transparent. Subsequent work will examine the utility of steady-state solutions for analyzing cross-species extrapolation and intraspecies variability.
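A steady-state reduction of this kind can be sketched as a two-equation mass balance. This is an illustrative sketch, not the published model: it assumes a well-stirred liver with linear intrinsic clearance (`cl_int`) and simple lung gas exchange, and all parameter values are invented. Notably, cardiac output cancels out at steady state, so the solution depends only on liver blood flow, alveolar ventilation, the blood-air partition coefficient, and the metabolic constant, matching the reduced model described above.

```python
import numpy as np

def steady_state(q_liv, q_alv, p_b, cl_int, c_inh=0.0, dose_rate=0.0):
    """Steady state of a reduced PBPK model for a volatile chemical with
    linear hepatic metabolism. Unknowns: arterial blood concentration
    c_a and liver-outflow concentration c_out.

    liver balance: q_liv*c_a + dose_rate = (q_liv + cl_int)*c_out
    lung balance:  q_liv*c_out + q_alv*c_inh = (q_liv + q_alv/p_b)*c_a
    """
    A = np.array([[q_liv, -(q_liv + cl_int)],
                  [-(q_liv + q_alv / p_b), q_liv]])
    b = np.array([-dose_rate, -q_alv * c_inh])
    c_a, c_out = np.linalg.solve(A, b)
    r_met = cl_int * c_out         # rate of hepatic metabolism
    r_exh = (q_alv / p_b) * c_a    # rate of clearance by exhalation
    return r_met, r_exh

# Route-to-route sketch: find the oral dose rate that yields the same
# metabolized dose as a continuous 1 mg/L inhalation exposure.
pars = dict(q_liv=90.0, q_alv=300.0, p_b=2.0, cl_int=60.0)  # invented values
r_met_inh, r_exh_inh = steady_state(c_inh=1.0, **pars)
# The system is linear, so r_met scales proportionally with oral dose rate.
r_met_per_unit_dose, _ = steady_state(dose_rate=1.0, **pars)
oral_equiv = r_met_inh / r_met_per_unit_dose
print(r_met_inh, r_exh_inh, oral_equiv)
```

By construction, the total intake (q_alv * c_inh + dose_rate) equals r_met + r_exh at steady state, so the two state variables fully determine any mass-balance-consistent dose metric, which is what makes the route-equivalence calculation a one-line ratio in the linear regime.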