11.
Modeling for Risk Assessment of Neurotoxic Effects   (cited 2 times: 0 self-citations, 2 by others)
The regulation of noncancer toxicants, including neurotoxicants, has usually been based upon a reference dose (allowable daily intake). A reference dose is obtained by dividing a no-observed-effect level by uncertainty (safety) factors to account for intraspecies and interspecies sensitivities to a chemical. It is assumed that the risk at the reference dose is negligible, but generally no attempt is made to estimate the risk at the reference dose. A procedure is outlined that provides estimates of risk as a function of dose. The first step is to establish a mathematical relationship between a biological effect and the dose of a chemical. Knowledge of biological mechanisms and/or pharmacokinetics can assist in the choice of plausible mathematical models. The mathematical model provides estimates of average responses as a function of dose. Secondly, estimates of risk require selection of a distribution of individual responses about the average response given by the mathematical model. In the case of a normal or lognormal distribution, only an estimate of the standard deviation is needed. The third step is to define an adverse level for a response so that the probability (risk) of exceeding that level can be estimated as a function of dose. Because a firm response level often cannot be established at which adverse biological effects occur, it may be necessary to at least establish an abnormal response level that only a small proportion of individuals would exceed in an unexposed group. That is, if a normal range of responses can be established, then the probability (risk) of abnormal responses can be estimated. In order to illustrate this process, measures of the neurotransmitter serotonin and its metabolite 5-hydroxyindoleacetic acid in specific areas of the brain of rats and monkeys are analyzed after exposure to the neurotoxicant methylenedioxymethamphetamine.
These risk estimates are compared with risk estimates from the quantal approach in which animals are classified as either abnormal or not depending upon abnormal serotonin levels.
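The three-step procedure described above can be sketched numerically. This is a hedged illustration only: the dose-response parameters, the lognormal spread, and the 1% abnormality cutoff are invented for the example, not taken from the study; since the study concerns serotonin depletion, "abnormal" here means falling below a low cutoff.

```python
import math

# Step 1 (assumed form): mean log-response declines linearly with dose.
# Parameters a, b, SIGMA and the 1% cutoff are illustrative assumptions.
def mean_log_response(dose, a=math.log(100.0), b=0.05):
    return a - b * dose  # control mean 100 units, depletion with dose

def std_normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Step 2: lognormal individual variability about the mean response.
SIGMA = 0.2

# Step 3: "abnormal" = level only 1% of unexposed individuals fall below
# (z = -2.326 is the standard-normal 1st percentile).
abnormal = math.exp(mean_log_response(0.0) + SIGMA * (-2.326))

def risk(dose):
    """P(individual response < abnormal level) at a given dose."""
    z = (math.log(abnormal) - mean_log_response(dose)) / SIGMA
    return std_normal_cdf(z)

print(round(risk(0.0), 3))     # by construction, about 0.01 when unexposed
print(risk(20.0) > risk(0.0))  # risk increases with dose
```

The same machinery yields the full risk-versus-dose curve by evaluating `risk` over a grid of doses.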
12.
This article argues that those interested in social policy should by definition be interested in issues of transport policy. It analyses data on road traffic fatalities and suggests, in the light of this evidence, that those who benefit least from the motor vehicle seem disproportionately likely, given their relative exposure to the risk, to die in road traffic accidents.
13.
Sample selection in radiocarbon dating   (cited 1 time: 0 self-citations, 1 by others)
Archaeologists working on the island of O'ahu, Hawai'i, use radiocarbon dating of samples of organic matter found trapped in fish-pond sediments to help them to learn about the chronology of the construction and use of the aquacultural systems created by the Polynesians. At one particular site, Loko Kuwili, 25 organic samples were obtained and funds were available to date an initial nine. However, on calibration to the calendar scale, the radiocarbon determinations provided date estimates that had very large variances. As a result, major issues of chronology remained unresolved and the archaeologists were faced with the prospect of another expensive programme of radiocarbon dating. This paper presents results of research that tackles the problems associated with selecting samples from those which are still available. Building on considerable recent research that utilizes Markov chain Monte Carlo methods to aid archaeologists in their radiocarbon calibration and interpretation, we adopt the standard Bayesian framework of risk functions, which allows us to assess the optimal samples to be sent for dating. Although rather computer intensive, our algorithms are simple to implement within the Bayesian radiocarbon framework that is already in place and produce results that are capable of direct interpretation by the archaeologists. By dating just three more samples from Loko Kuwili the expected variance on the date of greatest interest could be substantially reduced.
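The core idea of choosing which sample to date next so as to minimize expected posterior variance can be shown with a toy preposterior calculation. This is not the paper's method (which works with MCMC over calibrated radiocarbon dates); it is a conjugate normal-normal sketch, and the prior, sample labels, and laboratory standard deviations are all invented for illustration.

```python
# Toy preposterior analysis: the date of interest theta has prior
# N(mu0, tau0^2); dating candidate sample i would yield an observation
# with measurement sd s_i. Under normal-normal conjugacy the posterior
# variance after one observation has a closed form, so the expected
# "risk" (posterior variance) of each choice is directly computable.
mu0, tau0 = 1500.0, 80.0                            # prior mean (cal AD) and sd
candidates = {"s1": 120.0, "s2": 45.0, "s3": 70.0}  # lab sd per undated sample

def posterior_var(tau, s):
    # Posterior precision = prior precision + data precision.
    return 1.0 / (1.0 / tau**2 + 1.0 / s**2)

ranked = sorted(candidates, key=lambda k: posterior_var(tau0, candidates[k]))
print(ranked)  # best-to-worst choice of next sample to date
```

In this simplified setting the most precise laboratory measurement always wins; the paper's contribution is doing the analogous ranking when calibration makes the posterior non-normal and only simulation-based answers exist.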
14.
Mosler, Karl. Theory and Decision, 1997, 42(3): 215-233
Indices and orderings are developed for evaluating alternative strategies in the management of risk. They reflect the goals of reducing individual and collective risks, of increasing equity, and of assigning priority to the reduction and to the equity of high risks. Individual risk is defined as the (random or non-random) level of exposure to a danger. In particular the role of a lower negligibility level is investigated. A class of indices is proposed which involves two parameters, a negligibility level and a parameter of inequality aversion, and several interpretations of the indices are discussed. We provide a set of eight axioms which are necessary and sufficient for this class of indices, and we present an approach to deal with partial information on the parameters.
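A two-parameter index of this general kind can be illustrated as follows. The functional form below (a power mean of exposure in excess of the negligibility level) is my own stand-in, not necessarily the class axiomatized in the paper; the parameters `c` (negligibility level) and `r` (inequality aversion, r > 1) play the roles described in the abstract.

```python
# Hypothetical sketch: exposure below the negligibility level c does not
# count, and r > 1 makes concentrated high risks weigh more than the same
# total exposure spread evenly (inequality aversion).
def risk_index(exposures, c=1.0, r=2.0):
    excess = [max(x - c, 0.0) for x in exposures]
    return (sum(e**r for e in excess) / len(exposures)) ** (1.0 / r)

even = [3.0, 3.0, 3.0, 3.0]
skewed = [0.0, 0.0, 0.0, 12.0]  # same total exposure, borne by one person
print(risk_index(even), risk_index(skewed))
```

With `r = 2` and `c = 1` the even profile scores 2.0 while the skewed profile scores 5.5, so the index ranks the concentrated-risk strategy as worse, as the equity goal requires; exposures entirely below `c` score zero.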
15.
This study examines a key component of environmental risk communication: trust and credibility. The study was conducted in two parts. In the first part, six hypotheses regarding the perceptions and determinants of trust and credibility were tested against survey data. The hypotheses were supported by the data. The most important hypothesis was that perceptions of trust and credibility are dependent on three factors: perceptions of knowledge and expertise; perceptions of openness and honesty; and perceptions of concern and care. In the second part, models were constructed with perceptions of trust and credibility as the dependent variable. The goal was to examine the data for findings with direct policy implications. One such finding was that defying a negative stereotype is key to improving perceptions of trust and credibility.
16.
A central part of probabilistic public health risk assessment is the selection of probability distributions for the uncertain input variables. In this paper, we apply the first-order reliability method (FORM)(1–3) as a probabilistic tool to assess the effect of probability distributions of the input random variables on the probability that risk exceeds a threshold level (termed the probability of failure) and on the relevant probabilistic sensitivities. The analysis was applied to a case study given by Thompson et al.(4) on cancer risk caused by the ingestion of benzene-contaminated soil. Normal, lognormal, and uniform distributions were used in the analysis. The results show that the selection of a probability distribution function for the uncertain variables in this case study had a moderate impact on the probability that values would fall above a given threshold risk when the threshold risk is at the 50th percentile of the original distribution given by Thompson et al.(4). The impact was much greater when the threshold risk level was at the 95th percentile. The impact on uncertainty sensitivity, however, showed a reversed trend, where the impact was more appreciable for the 50th percentile of the original distribution of risk given by Thompson et al.(4) than for the 95th percentile. Nevertheless, the choice of distribution shape did not alter the order of probabilistic sensitivity of the basic uncertain variables.
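The distribution-choice effect can be reproduced with a plain Monte Carlo stand-in for FORM (FORM approximates the same exceedance probability analytically at a design point). Everything numeric below is an assumption for illustration, not data from Thompson et al.: a single uncertain intake variable, three candidate distributions matched in mean and standard deviation, and a risk model that is simply linear in intake.

```python
import math
import random

random.seed(0)

N = 50_000
mu, sd = 1.0, 0.3  # uncertain intake: same first two moments in all cases

def draw(dist):
    """Sample intake from the named distribution, moments matched to (mu, sd)."""
    if dist == "normal":
        return random.gauss(mu, sd)
    if dist == "lognormal":
        s2 = math.log(1.0 + (sd / mu) ** 2)  # moment matching for lognormal
        return random.lognormvariate(math.log(mu) - s2 / 2.0, math.sqrt(s2))
    half = sd * math.sqrt(3.0)               # uniform with the same sd
    return random.uniform(mu - half, mu + half)

def p_exceed(dist, threshold):
    """Probability of failure: P(risk > threshold), risk = slope * intake."""
    slope = 2e-6
    return sum(slope * draw(dist) > threshold for _ in range(N)) / N

for d in ("normal", "lognormal", "uniform"):
    print(d, round(p_exceed(d, 2e-6), 3), round(p_exceed(d, 3e-6), 3))
```

Even in this toy version the abstract's pattern emerges: at the median threshold (2e-6) the three distributions give similar exceedance probabilities, while at the tail threshold (3e-6) the lognormal's heavier tail and the uniform's bounded support pull the estimates far apart.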
17.
The evaluation of hazards from complex, large-scale, technologically advanced systems often requires the construction of computer-implemented mathematical models. These models are used to evaluate the safety of the systems and the consequences of modifications to them. Such evaluations, however, are normally surrounded by significant uncertainties, both those inherent in natural phenomena such as the weather and those in the parameters and models used in the evaluation.

Another use of these models is to evaluate strategies for improving the information used in the modeling process itself. While sensitivity analysis is useful for identifying the variables in the model that are important, uncertainty analysis provides a tool for assessing the importance of uncertainty about these variables. A third, complementary technique is decision analysis, which provides a methodology for explicitly evaluating and ranking potential improvements to the model. Its use in developing information-gathering strategies for a nuclear waste repository is discussed in this paper.
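The decision-analytic ranking of information-gathering options rests on expected-value-of-information calculations, which a tiny worked example makes concrete. The scenario names, probabilities, and losses below are invented; the repository application in the paper is far richer.

```python
# Toy expected value of perfect information (EVPI): compare the expected
# loss of the best action under current uncertainty with the expected loss
# if a study first revealed the true state. All numbers are illustrative.
scenarios = {"benign": 0.7, "adverse": 0.3}  # prior on an uncertain site state
loss = {
    ("seal", "benign"): 1.0, ("seal", "adverse"): 4.0,
    ("reinforce", "benign"): 2.5, ("reinforce", "adverse"): 2.5,
}
actions = ("seal", "reinforce")

def expected_loss(action):
    return sum(p * loss[(action, s)] for s, p in scenarios.items())

prior_best = min(expected_loss(a) for a in actions)
# With perfect information, pick the best action per scenario, then average.
posterior = sum(p * min(loss[(a, s)] for a in actions)
                for s, p in scenarios.items())
evpi = prior_best - posterior
print(round(prior_best, 2), round(posterior, 2), round(evpi, 2))
```

Here the best prior action ("seal") has expected loss 1.9, perfect information reduces that to 1.45, so any study costing less than 0.45 that resolved the uncertainty would be worth commissioning; ranking candidate studies by such value differences is the methodology the abstract alludes to.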
18.
Project control has been a research topic for decades, attracting both academics and practitioners. Project control systems indicate the direction of change in preliminary planning variables compared with actual performance. When current project performance deviates from the planned performance, the system issues a warning so that corrective actions can be taken.

Earned value management/earned schedule (EVM/ES) systems have played a central role in project control, and provide straightforward key performance metrics that measure the deviations between planned and actual performance in terms of time and cost. In this paper, a new statistical project control procedure sets tolerance limits to improve the discriminative power between progress situations that are either statistically likely or less likely to occur under the project baseline schedule. In this research, the tolerance limits are derived from subjective estimates for the activity durations of the project. Using the existing and commonly known EVM/ES metrics, the resulting project control charts have an improved ability to trigger actions when variation in a project's progress exceeds predefined thresholds.

A computational experiment has been set up to test the ability of these statistical project control charts to discriminate between acceptable and unacceptable variation in the durations of individual activities. The experiments compare the use of statistical tolerance limits with traditional earned value management thresholds and validate their power to report warning signals when projects deviate significantly from the baseline schedule.
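Deriving tolerance limits from subjective duration estimates can be sketched in a few lines. This is a simplified stand-in, not the paper's procedure: a three-activity serial project with invented triangular duration estimates, a schedule-performance ratio as the monitored metric, and empirical percentiles of simulated outcomes as the control limits.

```python
import random

random.seed(42)

# Subjective (optimistic, most likely, pessimistic) duration estimates.
activities = [(4, 6, 10), (2, 3, 5), (5, 8, 12)]
PLANNED = sum(m for _, m, _ in activities)  # baseline duration from "most likely"

# Simulate project outcomes under the baseline assumptions and record the
# schedule performance ratio (planned / actual; > 1 means ahead of plan).
ratios = []
for _ in range(20_000):
    actual = sum(random.triangular(lo, hi, m) for lo, m, hi in activities)
    ratios.append(PLANNED / actual)

ratios.sort()
lower = ratios[int(0.05 * len(ratios))]  # 5th percentile
upper = ratios[int(0.95 * len(ratios))]  # 95th percentile
print(round(lower, 2), round(upper, 2))  # 90% tolerance band for the ratio
```

A monitored run whose observed ratio falls outside `(lower, upper)` would trigger a warning signal; the contrast with classical EVM practice is that these limits reflect the simulated variability of this particular project rather than a fixed rule of thumb such as "flag whenever SPI drops below 0.9".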
19.
Communications in Statistics - Theory and Methods, 2012, 41(16-17): 3138-3149
This article deals with the quantitative Fault Tree Analysis (FTA) and the estimation method of the top-event in case of dependent events. It aims at addressing two main issues: (1) the decomposition of variability for the top-event according to several error components linked to the estimation of the top-event and sources of internal and external variations for a complex system; and (2) the definition of a Performance Measure Independent of Adjustment in order to set the quality of the top-event as a complex measure of the system failure. A simulated study applied to the health system is also carried out.
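Why dependence matters for the top-event estimate can be shown with a minimal Monte Carlo fault tree. The AND/OR structure, the common-cause mechanism, and all probabilities below are invented for illustration; they are not the health-system application of the article.

```python
import random

random.seed(7)

# Toy fault tree: TOP = (A AND B) OR V, where pumps A and B share a
# common-cause failure mode (probability p_cc fails both at once), making
# them dependent, while valve V fails independently.
N = 100_000
p_cc = 0.01
p_a, p_b, p_v = 0.05, 0.05, 0.02

hits = 0
for _ in range(N):
    cc = random.random() < p_cc
    a = cc or random.random() < p_a   # pump A fails
    b = cc or random.random() < p_b   # pump B fails (dependent via cc)
    v = random.random() < p_v         # valve fails, independent
    hits += (a and b) or v

p_top = hits / N
# What a naive independence assumption would predict for the same gates:
p_indep = p_a * p_b + p_v - p_a * p_b * p_v
print(round(p_top, 4), round(p_indep, 4))
```

The simulated top-event probability clearly exceeds the independence-based figure, because the common cause lets the AND gate fire far more often than the product of the marginal failure probabilities suggests; decomposing which error components drive such gaps is issue (1) of the article.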