By access type: subscription full text 966 articles; free 71; free (domestic) 17.
By discipline: management 521; ethnology 1; demography 5; collected works 24; theory and methodology 86; interdisciplinary 261; sociology 53; statistics 103.
By year: 2024: 2; 2023: 13; 2022: 24; 2021: 17; 2020: 41; 2019: 31; 2018: 33; 2017: 35; 2016: 42; 2015: 36; 2014: 47; 2013: 86; 2012: 56; 2011: 51; 2010: 43; 2009: 40; 2008: 51; 2007: 46; 2006: 36; 2005: 53; 2004: 49; 2003: 24; 2002: 31; 2001: 20; 2000: 18; 1999: 19; 1998: 9; 1997: 14; 1996: 11; 1995: 9; 1994: 15; 1993: 9; 1992: 10; 1991: 4; 1990: 5; 1989: 8; 1988: 4; 1987: 5; 1986: 2; 1985: 1; 1984: 1; 1981: 3.
A total of 1,054 results (search took 218 ms); results 51-60 are shown below.
51.
Invasive species risk maps provide broad guidance on where to allocate resources for pest monitoring and regulation, but they often present individual risk components (such as climatic suitability, host abundance, or introduction potential) as independent entities. These independent risk components are integrated using various multicriteria analysis techniques that typically require prior knowledge of the risk components' importance. Such information is often nonexistent for many invasive pests. This study proposes a new approach for building integrated risk maps using the principle of a multiattribute efficient frontier and analyzing the partial order of elements of a risk map as distributed in multidimensional criteria space. The integrated risks are estimated as subsequent multiattribute frontiers in dimensions of individual risk criteria. We demonstrate the approach with the example of Agrilus biguttatus Fabricius, a high-risk pest that may threaten North American oak forests in the near future. Drawing on U.S. and Canadian data, we compare the performance of the multiattribute ranking against a multicriteria linear weighted averaging technique in the presence of uncertainties, using the concept of robustness from info-gap decision theory. The results show major geographic hotspots where the consideration of tradeoffs between multiple risk components changes integrated risk rankings. Both methods delineate similar geographical regions of high and low risks. Overall, aggregation based on a delineation of multiattribute efficient frontiers can be a useful tool to prioritize risks for anticipated invasive pests, which usually have an extremely poor prior knowledge base.
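The core idea, ranking map cells by successive multiattribute efficient (Pareto) frontiers, can be illustrated with a short sketch. This is not the authors' code: the function names, the three criteria, and the scores are hypothetical, and all criteria are assumed scaled so that larger values mean higher risk.

```python
# Minimal sketch of ranking by successive Pareto (efficient) frontiers.
import numpy as np

def dominates(a, b):
    """True if cell a is at least as risky as b on every criterion
    and strictly riskier on at least one."""
    return np.all(a >= b) and np.any(a > b)

def frontier_ranks(points):
    """Rank 1 = first efficient frontier; peel it off, rank the next
    frontier 2, and so on until all cells are ranked."""
    points = np.asarray(points, dtype=float)
    ranks = np.zeros(len(points), dtype=int)
    remaining = list(range(len(points)))
    rank = 0
    while remaining:
        rank += 1
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        for i in front:
            ranks[i] = rank
        remaining = [i for i in remaining if i not in front]
    return ranks

# Example: 5 cells scored on (climatic suitability, host abundance,
# introduction potential), each scaled to [0, 1].
cells = [(0.9, 0.2, 0.7), (0.4, 0.8, 0.6), (0.3, 0.3, 0.2),
         (0.8, 0.7, 0.9), (0.1, 0.1, 0.1)]
print(frontier_ranks(cells))  # rank 1 = highest integrated risk
```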
52.
Uncertainty and sensitivity analysis is an essential ingredient of model development and applications. For many uncertainty and sensitivity analysis techniques, sensitivity indices are calculated based on a relatively large sample to measure the importance of parameters in their contributions to uncertainties in model outputs. To statistically compare their importance, it is necessary that uncertainty and sensitivity analysis techniques provide standard errors of estimated sensitivity indices. In this paper, a delta method is used to analytically approximate standard errors of estimated sensitivity indices for a popular sensitivity analysis method, the Fourier amplitude sensitivity test (FAST). Standard errors estimated based on the delta method were compared with those estimated based on 20 sample replicates. We found that the delta method can provide a good approximation for the standard errors of both first-order and higher-order sensitivity indices. Finally, based on the standard error approximation, we also proposed a method to determine a minimum sample size to achieve the desired estimation precision for a specified sensitivity index. The standard error estimation method presented in this paper can make the FAST analysis computationally much more efficient for complex models.
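The delta method itself can be shown on a generic ratio estimator (sensitivity indices are ratios of variance contributions; the paper's FAST-specific derivation is more involved). Everything below, including the toy distributions and the 20-replicate comparison, is an illustrative sketch, not the paper's code.

```python
# Delta-method standard error for a ratio R = mean(x)/mean(y),
# compared against the spread over 20 independent replicates.
import numpy as np

rng = np.random.default_rng(0)

def delta_se_ratio(x, y):
    """First-order delta-method SE of mean(x)/mean(y)."""
    n = len(x)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(ddof=1), y.var(ddof=1)
    cxy = np.cov(x, y, ddof=1)[0, 1]
    # Var(R) ~= [vx/my^2 - 2*mx*cxy/my^3 + mx^2*vy/my^4] / n
    var_r = (vx / my**2 - 2 * mx * cxy / my**3 + mx**2 * vy / my**4) / n
    return np.sqrt(var_r)

n = 2000
reps = []                      # replicate-based SE, as in the comparison
for _ in range(20):
    xr, yr = rng.normal(2.0, 1.0, n), rng.normal(5.0, 1.0, n)
    reps.append(xr.mean() / yr.mean())

x, y = rng.normal(2.0, 1.0, n), rng.normal(5.0, 1.0, n)
print("delta-method SE:   ", delta_se_ratio(x, y))
print("replicate-based SE:", np.std(reps, ddof=1))
```

The analytic SE needs only the one sample, which is what makes the approach so much cheaper than replication for expensive models.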
53.
In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decisionmaker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications.
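The inflation effect is easy to reproduce in simulation. The sketch below assumes a log-normal risk factor (an increasing transform of a location-scale family), a plug-in threshold at the nominal 99% quantile, and illustrative sample sizes; it is not the paper's calculation, although, consistent with the abstract, the resulting expected frequency does not depend on the true parameters.

```python
# Expected failure frequency when the threshold is estimated from data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu, sigma = 0.0, 1.0            # true parameters (unknown in practice)
n, nominal_p, n_rep = 30, 0.01, 20000

exceed_prob_sum = 0.0
for _ in range(n_rep):
    sample = rng.normal(mu, sigma, n)   # log of the risk factor
    m, s = sample.mean(), sample.std(ddof=1)
    threshold = m + s * stats.norm.ppf(1 - nominal_p)  # plug-in quantile
    # true probability that a new observation exceeds this threshold
    exceed_prob_sum += 1 - stats.norm.cdf(threshold, loc=mu, scale=sigma)

print("nominal failure probability:", nominal_p)
print("expected failure frequency: ", exceed_prob_sum / n_rep)  # > nominal
```

Control (1) from the abstract amounts to shrinking `nominal_p` as a function of `n` until the printed frequency hits the target.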
54.
55.
In this paper we compare expectations derived from 10 different human physiologically based pharmacokinetic models for perchloroethylene with data on absorption via inhalation, and concentrations in alveolar air and venous blood. Our most interesting finding is that essentially all of the models show a time pattern of departures of predictions of air and blood levels relative to experimental data that might be corrected by more sophisticated model structures incorporating either (a) heterogeneity of the fat compartment (with respect to either perfusion or partition coefficients or both) or (b) intertissue diffusion of perchloroethylene between the fat and muscle/VRG groups. Similar types of corrections have recently been proposed to reduce analogous anomalies in the fits of pharmacokinetic models to the data for several volatile anesthetics.(17-20) A second finding is that models incorporating resting values for alveolar ventilation in the region of 5.4 L/min seemed to be most compatible with the most reliable set of perchloroethylene uptake data.
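For orientation, a deliberately minimal perfusion-limited PBPK model of inhalation uptake is sketched below, with a fat compartment and a lumped muscle/VRG compartment. All flows, volumes, and partition coefficients are placeholders rather than values from the ten models compared here, except that alveolar ventilation is set near the 5.4 L/min figure discussed above.

```python
# Toy three-compartment inhalation PBPK sketch (illustrative values only).
from scipy.integrate import solve_ivp

Q_alv = 5.4          # alveolar ventilation, L/min (see abstract)
Q_c = 5.8            # cardiac output, L/min
P_b = 10.3           # blood:air partition coefficient (placeholder)
C_inh = 0.1          # inhaled concentration, mg/L (placeholder)
# compartment: (blood flow L/min, volume L, tissue:blood partition)
comps = {"fat": (0.3, 10.0, 90.0),
         "vrg": (4.4, 40.0, 2.0),
         "liver": (1.1, 5.0, 5.0)}

def rhs(t, C):
    # mixed venous blood returning from the tissues
    C_ven = sum(Q * C[i] / P
                for i, (Q, V, P) in enumerate(comps.values())) / Q_c
    # arterial blood after steady-state alveolar gas exchange
    C_art = (Q_alv * C_inh + Q_c * C_ven) / (Q_c + Q_alv / P_b)
    # perfusion-limited uptake into each tissue
    return [Q * (C_art - C[i] / P) / V
            for i, (Q, V, P) in enumerate(comps.values())]

sol = solve_ivp(rhs, (0, 240), [0.0, 0.0, 0.0], max_step=1.0)
print("tissue concentrations (mg/L) after 4 h:", sol.y[:, -1])
```

Correction (a) above would split `fat` into sub-compartments with different perfusion or partition values; correction (b) would add a diffusion term coupling `fat` and `vrg`.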
56.
Most public health risk assessments assume and combine a series of average, conservative, and worst-case values to derive a conservative point estimate of risk. This procedure has major limitations. This paper demonstrates a new methodology for extended uncertainty analyses in public health risk assessments using Monte Carlo techniques. The extended method begins, as do some conventional methods, with the preparation of a spreadsheet to estimate exposure and risk. This method, however, continues by modeling key inputs as random variables described by probability density functions (PDFs). Overall, the technique provides a quantitative way to estimate the probability distributions for exposure and health risks within the validity of the model used. As an example, this paper presents a simplified case study for children playing in soils contaminated with benzene and benzo(a)pyrene (BaP).
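The Monte Carlo step described here, replacing point inputs with PDFs and propagating them through the exposure equation, might look like the following sketch. The dose equation is a standard soil-ingestion form, and every distribution and parameter value is hypothetical rather than taken from the case study.

```python
# Monte Carlo propagation of input PDFs through a soil-ingestion dose
# equation (all distributions and values are illustrative).
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

conc = rng.lognormal(mean=np.log(5.0), sigma=0.8, size=n)  # mg/kg soil
intake = rng.triangular(50, 100, 200, size=n)              # mg soil/day
ef = rng.uniform(100, 350, size=n) / 365.0                 # exposure frequency
bw = rng.normal(20, 4, size=n).clip(10, None)              # body weight, kg

# average daily dose, mg/(kg*day); 1e-6 converts mg soil to kg soil
add = conc * intake * 1e-6 * ef / bw

for q in (50, 90, 95, 99):
    print(f"{q}th percentile dose: {np.percentile(add, q):.2e} mg/kg-day")
```

The point of the method is visible in the output: instead of one conservative number, the assessor can report the whole dose distribution and read off any percentile of interest.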
57.
Mixed Levels of Uncertainty in Complex Policy Models (total citations: 3; self-citations: 0; citations by others: 3)
The characterization and treatment of uncertainty poses special challenges when modeling indeterminate or complex coupled systems such as those involved in the interactions between human activity, climate, and the ecosystem. Uncertainty about model structure may become as important as, or more important than, uncertainty about parameter values. When uncertainty grows so large that prediction or optimization no longer makes sense, it may still be possible to use the model as a behavioral test bed to examine the relative robustness of alternative observational and behavioral strategies. When models must be run into portions of their phase space that are not well understood, different submodels may become unreliable at different rates. A common example involves running a time-stepped model far into the future. Several strategies can be used to deal with such situations. The probability of model failure can be reported as a function of time. Possible alternative surprises can be assigned probabilities, modeled separately, and combined. Finally, through the use of subjective judgments, one may be able to combine, and over time shift between, models, moving from more detailed to progressively simpler order-of-magnitude models, and perhaps ultimately on to simple bounding analysis.
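One of the strategies mentioned, shifting weight over time from a detailed model to a simpler order-of-magnitude model via subjective reliability judgments, can be sketched as follows. Both stand-in models, the exponential reliability curve, and all constants are invented for illustration.

```python
# Toy sketch: blend a detailed model with a bounding model using a
# subjective, time-decaying reliability weight.
import numpy as np

def detailed_model(t):   # trusted near t = 0
    return 1.0 + 0.05 * t + 0.01 * np.sin(t)

def bounding_model(t):   # crude order-of-magnitude model, robust far out
    return 1.0 + 0.05 * t

def reliability(t, t_half=50.0):
    """Subjective probability that the detailed model still applies."""
    return np.exp(-t / t_half)

for t in (0, 25, 100, 200):
    w = reliability(t)
    estimate = w * detailed_model(t) + (1 - w) * bounding_model(t)
    print(f"t={t:>3}: weight on detailed model {w:.2f}, estimate {estimate:.2f}")
```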
58.
This paper studies the robust stability of large-scale differential systems with uncertain time delays. Using Lyapunov functions and inequality-based analysis, it gives sufficient conditions under which such uncertain large-scale time-delay differential systems are stabilizable, asymptotically stabilizable, and exponentially stabilizable under delay-independent decentralized feedback control.
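For context, a generic delay-independent sufficient condition of the kind obtained with Lyapunov-Krasovskii functionals is sketched below. This is a standard textbook form, not the paper's exact theorem, and all symbols are illustrative.

```latex
% Generic illustration only -- not this paper's exact result.
% Uncertain time-delay system (one subsystem shown):
\[
  \dot{x}(t) = (A + \Delta A)\,x(t) + (A_d + \Delta A_d)\,x(t-\tau)
\]
% Lyapunov--Krasovskii functional:
\[
  V(x_t) = x(t)^{\mathsf{T}} P\, x(t)
         + \int_{t-\tau}^{t} x(s)^{\mathsf{T}} Q\, x(s)\,\mathrm{d}s
\]
% Delay-independent sufficient condition: asymptotic stability holds for
% all \tau \ge 0 if there exist symmetric P \succ 0, Q \succ 0 with
\[
  A^{\mathsf{T}} P + P A + Q + P A_d\, Q^{-1} A_d^{\mathsf{T}} P \prec 0,
\]
% where the norm-bounded uncertainties \Delta A, \Delta A_d are absorbed
% into the inequality via standard bounding terms.
```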
59.
This paper focuses on the relationship between experiential and statistical uncertainties in the timing of births in Cameroon (Central Africa). Most theories of fertility level and change emphasize the emergence of parity-specific control, treating desired family size as both central and stable across the life course. By contrast, this paper argues for a theory of reproduction that emphasizes process, social context, and contingency. The paper concentrates on the second birth interval, showing that it is longer and more variable among educated than among uneducated women. The paper argues that this difference is due to the specific forms of uncertainty associated with education in contemporary Cameroon.
60.
The attack that occurred on September 11, 2001 was, in the end, the result of a failure to detect and prevent the terrorist operations that hit the United States. The U.S. government thus faces at this time the daunting tasks of, first, drastically increasing its ability to obtain and interpret different types of signals of impending terrorist attacks with sufficient lead time and accuracy, and second, improving its ability to react effectively. One of the main challenges is the fusion of information from different sources (U.S. or foreign) and of different types (electronic signals, human intelligence, etc.). Fusion thus involves two very distinct and separate issues: communications, i.e., ensuring that the different U.S. and foreign intelligence agencies communicate all relevant and accurate information in a timely fashion, and, perhaps more difficult, merging the content of signals, some "sharp" and some "fuzzy," some dependent and some independent, into useful information. The focus of this article is on the latter issue, and on the use of the results. In this article, I present a classic probabilistic Bayesian model sometimes used in engineering risk analysis, which can be helpful in the fusion of information because it allows computation of the posterior probability of an event given its prior probability (before the signal is observed) and the quality of the signal, characterized by its probabilities of false positive and false negative. Experience suggests that the nature of these errors has sometimes been misunderstood; therefore, I discuss the validity of several possible definitions.
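The Bayesian computation described here is compact enough to write out. The sketch below is a direct reading of the abstract (posterior from prior plus false-positive and false-negative rates); the numbers are illustrative.

```python
# Posterior probability of an event given an observed signal, from the
# prior and the signal's error rates (Bayes' theorem).
def posterior_given_signal(prior, p_false_pos, p_false_neg):
    """P(event | signal) for a binary signal with known error rates."""
    p_signal_given_event = 1.0 - p_false_neg       # true-positive rate
    p_signal_given_no_event = p_false_pos
    num = p_signal_given_event * prior
    den = num + p_signal_given_no_event * (1.0 - prior)
    return num / den

# A rare event with a fairly good but imperfect signal:
print(posterior_given_signal(prior=0.001, p_false_pos=0.05, p_false_neg=0.10))
# ~0.018: the posterior rises sharply relative to the prior, yet false
# positives still dominate when the base rate is this low.
```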