Full-text access type
Paid full text | 4,153 articles
Free | 634 articles
Free (domestic) | 2 articles
Subject classification
Management | 1,130 articles
Ethnology | 7 articles
Demography | 47 articles
Collected works | 11 articles
Theory and methodology | 827 articles
General | 78 articles
Sociology | 1,644 articles
Statistics | 1,045 articles
Publication year
2023 | 2 articles
2021 | 95 articles
2020 | 160 articles
2019 | 338 articles
2018 | 205 articles
2017 | 357 articles
2016 | 348 articles
2015 | 345 articles
2014 | 362 articles
2013 | 578 articles
2012 | 373 articles
2011 | 259 articles
2010 | 259 articles
2009 | 153 articles
2008 | 190 articles
2007 | 103 articles
2006 | 105 articles
2005 | 95 articles
2004 | 115 articles
2003 | 77 articles
2002 | 84 articles
2001 | 89 articles
2000 | 72 articles
1999 | 5 articles
1998 | 3 articles
1997 | 3 articles
1996 | 3 articles
1994 | 1 article
1993 | 2 articles
1992 | 2 articles
1991 | 1 article
1989 | 1 article
1988 | 1 article
1987 | 1 article
1985 | 1 article
1984 | 1 article
A total of 4,789 results found.
81.
Anthropogenic climate change information tends to be interpreted against the backdrop of initial environmental beliefs, which can lead some people to resist the information. In this article (N = 88), we examined whether self-affirmation via reflection on personally important values could attenuate the impact of initial beliefs on the acceptance of anthropogenic climate change evidence. Our findings showed that initial beliefs about the human impact on ecological stability influenced the acceptance of information only among nonaffirmed participants. Self-affirmed participants who were initially resistant to the information showed stronger beliefs in the existence of climate change risks, and acknowledged to a greater extent that individual efficacy has a role to play in reducing those risks, than did their nonaffirmed counterparts.
82.
In risk assessment, the moment-independent sensitivity analysis (SA) technique for reducing model uncertainty has attracted a great deal of attention from analysts and practitioners. It measures the relative importance of an individual input, or a set of inputs, in determining the uncertainty of the model output by looking at the entire distribution of the model output. In this article, following Plischke et al., we point out that the original moment-independent SA index (the delta index) can also be interpreted as a dependence measure between the model output and the input variables, and we introduce another moment-independent SA index (the extended delta index) based on copulas. We then propose nonparametric methods for estimating the delta and extended delta indices. Both methods need only a single set of samples to compute all the indices and thus avoid the "curse of dimensionality." Finally, an analytical test example, a risk assessment model, and the Level E model are employed to compare the delta and extended delta indices and to test the two estimation methods. Results show that the delta and extended delta indices produce the same importance ranking in all three test examples, and that the two proposed methods dramatically reduce the computational burden.
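As a rough illustration of the given-data idea described above, the sketch below estimates Borgonovo's delta index from a single sample set by partitioning the input into quantile bins and comparing kernel density estimates of the output. The binning and KDE choices, and the toy model, are illustrative assumptions rather than the article's exact estimator.

```python
import numpy as np
from scipy.stats import gaussian_kde

def delta_index(xi, y, n_bins=20, grid_size=256):
    """Given-data estimate of the moment-independent (delta) index:
    delta = 0.5 * E[ integral |f_Y(y) - f_{Y|Xi}(y)| dy ]."""
    grid = np.linspace(y.min(), y.max(), grid_size)
    dy = grid[1] - grid[0]
    f_y = gaussian_kde(y)(grid)                    # unconditional output density
    edges = np.quantile(xi, np.linspace(0, 1, n_bins + 1))
    delta = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (xi >= lo) & (xi <= hi)
        if mask.sum() < 5:
            continue
        f_cond = gaussian_kde(y[mask])(grid)       # density of Y given Xi in this bin
        sep = np.sum(np.abs(f_y - f_cond)) * dy    # L1 distance between the densities
        delta += mask.mean() * 0.5 * sep           # weight by the bin probability
    return delta

# Toy model Y = X1 + 0.5*X2: X1 should rank above X2.
rng = np.random.default_rng(0)
x = rng.normal(size=(20_000, 2))
y = x[:, 0] + 0.5 * x[:, 1]
print([round(delta_index(x[:, i], y), 3) for i in range(2)])
```

Note that a single Monte Carlo sample serves every input, which is exactly why such given-data estimators sidestep the dimensionality problem of nested-loop schemes.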
83.
The three classic pillars of risk analysis are risk assessment (how big is the risk, and how sure can we be?), risk management (what shall we do about it?), and risk communication (what shall we say about it, to whom, when, and how?). We propose two complements to these three pillars: risk attribution (who or what addressable conditions actually caused an accident or loss?) and learning from experience about risk reduction (what works, and how well?). Failures in complex systems usually evoke blame, often with insufficient attention to the root causes of failure, including aspects of the situation, design decisions, or social norms and culture. Focusing on blame, however, can inhibit effective learning, instead eliciting excuses that deflect attention and perceived culpability. Productive understanding of what went wrong, and how to do better, thus requires moving past recrimination and excuses. This article identifies common blame-shifting "lame excuses" for poor risk management; these generally contribute little to effective improvement and may leave real risks and preventable causes unaddressed. We propose principles from the risk and decision sciences and from organizational design to improve results. These start with organizational leadership. More specifically, they include: deliberate testing and learning, especially from near-misses and accident precursors; careful causal analysis of accidents; risk quantification; candid expression of uncertainties about the costs and benefits of risk-reduction options; optimization of tradeoffs between gathering additional information and acting immediately; promotion of a safety culture; and mindful allocation of people, responsibilities, and resources to reduce risks. We propose that these principles provide sound foundations for improving risk management.
84.
Sondra S. Teske, Mark H. Weir, Timothy A. Bartrand, Yin Huang, Sushil B. Tamrakar, Charles N. Haas 《Risk Analysis》2014, 34(5): 911-928
The effect of bioaerosol size was incorporated into predictive dose-response models for inhaled aerosols of Francisella tularensis (the causative agent of tularemia) in rhesus monkeys and guinea pigs, with bioaerosol diameters ranging between 1.0 and 24 μm. Aerosol-size-dependent models were formulated as modifications of the exponential and beta-Poisson dose-response models, and model parameters were estimated using maximum likelihood methods on multiple sets of quantal dose-response data for which the aerosol sizes of the inhaled doses were known. The F. tularensis dose-response data were best fit by an exponential dose-response model in which a power function of particle diameter replaces the rate parameter k that scales the applied dose. These pathogen-specific aerosol-size-dependent models represented the observed dose-response results better than estimates derived from the model developed by the International Commission on Radiological Protection (ICRP, 1994), which relies on differential regional lung deposition for human particle exposure.
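A minimal sketch of how such a size-dependent exponential model might be fit by maximum likelihood to quantal data. The data values below are hypothetical, and the specific power-function form k(D) = k0 * D^(-a) is an assumption standing in for whichever parameterization the article actually selects.

```python
import numpy as np
from scipy.optimize import minimize

# Quantal data: dose, particle diameter (um), animals exposed, animals responding.
# All values hypothetical, for illustration only.
dose = np.array([10, 50, 200, 10, 50, 200], dtype=float)
diam = np.array([1.0, 1.0, 1.0, 12.0, 12.0, 12.0])
n    = np.array([10, 10, 10, 10, 10, 10])
resp = np.array([2, 6, 10, 0, 2, 7])

def neg_log_lik(theta):
    k0, a = np.exp(theta)              # log-parameterization keeps k0, a > 0
    k = k0 * diam ** (-a)              # assumed power-function size dependence of k
    p = 1.0 - np.exp(-k * dose)        # exponential dose-response model
    p = np.clip(p, 1e-12, 1 - 1e-12)   # guard the binomial log-likelihood
    return -np.sum(resp * np.log(p) + (n - resp) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=np.log([0.01, 1.0]), method="Nelder-Mead")
k0_hat, a_hat = np.exp(fit.x)
print(f"k0 = {k0_hat:.4g}, size exponent a = {a_hat:.3g}")
```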
85.
In chemical and microbial risk assessments, risk assessors fit dose-response models to high-dose data and extrapolate downward to risk levels in the range of 1-10%. Although multiple dose-response models may fit the data adequately in the experimental range, the estimated effective dose (ED) corresponding to an extremely small risk can differ substantially from model to model. In this respect, model averaging (MA) provides more robustness than a single dose-response model in the point and interval estimation of an ED. In MA, accounting for both data uncertainty and model uncertainty is crucial, but addressing model uncertainty is not achieved simply by increasing the number of models in the model space. A plausible model space for MA is characterized by both goodness of fit and diversity around the truth. We propose a diversity index (DI) to balance these two characteristics in model space selection. It addresses a collective property of the model space rather than the individual performance of each model. Tuning parameters in the DI control the size of the model space used for MA.
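The abstract does not give the DI formula, so the sketch below shows only the goodness-of-fit side of MA that a diversity criterion would complement: combining per-model effective-dose estimates with Akaike weights. All numbers are hypothetical.

```python
import numpy as np

# Hypothetical per-model fits: AIC and estimated ED10 (dose at 10% extra risk)
# for, say, exponential, log-logistic, and Weibull dose-response models.
aic  = np.array([102.3, 101.8, 105.1])
ed10 = np.array([0.84, 0.65, 1.10])

w = np.exp(-0.5 * (aic - aic.min()))     # relative likelihood of each model
w /= w.sum()                             # Akaike weights (fit-based side of MA)
ed10_ma = float(np.sum(w * ed10))        # model-averaged ED10 point estimate
print(np.round(w, 3), round(ed10_ma, 3))
```

Because fit-based weights alone can concentrate on several near-duplicate models, a diversity criterion such as the proposed DI would prune or down-weight redundant members before averaging.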
86.
Land subsidence risk assessment (LSRA) is a multi-attribute decision analysis (MADA) problem, often characterized by both quantitative and qualitative attributes subject to various types of uncertainty. The problem therefore needs to be modeled and analyzed with methods that can handle uncertainty. In this article, we propose an integrated assessment model based on the evidential reasoning (ER) algorithm and fuzzy set theory. The model is structured as a hierarchical framework that treats land subsidence risk as a composite of two key factors, hazard and vulnerability, each described by a set of basic indicators defined by assessment grades, with attributes for transforming both numerical data and subjective judgments into a belief structure. The factor-level attributes of hazard and vulnerability are combined using the ER algorithm, which draws on a belief structure calculated via Dempster-Shafer (D-S) theory and a distributed fuzzy belief structure calculated via fuzzy set theory. The combined algorithms yield distributed assessment grade matrices. An application to the Xixi-Chengnan area, China, illustrates the model's usefulness and validity for LSRA. The model combines all available evidence, quantitative or qualitative, complete or incomplete, precise or imprecise, into assessment grades that characterize risk in terms of hazard and vulnerability. The results enable risk managers to apply different prevention measures and mitigation plans according to the calculated risk states.
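The ER algorithm is a development of Dempster-Shafer combination over assessment grades. As a hedged sketch of the underlying operation (not the authors' exact ER recursion, and with hypothetical masses), plain Dempster's rule looks like this:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2                  # mass assigned to contradictions
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Grades Low/Medium/High for land-subsidence risk; masses are hypothetical.
L, M, H, ANY = frozenset("L"), frozenset("M"), frozenset("H"), frozenset("LMH")
m_hazard = {L: 0.2, M: 0.5, H: 0.2, ANY: 0.1}    # ANY models residual ignorance
m_vuln   = {M: 0.6, H: 0.3, ANY: 0.1}

for k, v in sorted(dempster_combine(m_hazard, m_vuln).items(), key=lambda kv: -kv[1]):
    print("".join(sorted(k)), round(v, 3))
```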
87.
Panos G. Georgopoulos, Christopher J. Brinkerhoff, Sastry Isukapalli, Michael Dellarco, Philip J. Landrigan, Paul J. Lioy 《Risk Analysis》2014, 34(7): 1299-1316
A challenge for large-scale environmental health investigations such as the National Children's Study (NCS) is characterizing exposures to multiple, co-occurring chemical agents whose spatiotemporal concentrations vary and whose consequences are modulated by biochemical, physiological, behavioral, socioeconomic, and environmental factors. Such investigations can benefit from systematic retrieval, analysis, and integration of diverse extant information on both contaminant patterns and exposure-relevant factors. This requires the development, evaluation, and deployment of informatics methods that support flexible access to, and analysis of, multiattribute data across multiple spatiotemporal scales. A new "Tiered Exposure Ranking" (TiER) framework, developed to support various aspects of risk-relevant exposure characterization, is described here, with examples demonstrating its application to the NCS. TiER draws on advances in informatics methods, extant database content and availability, and integrative environmental/exposure/biological modeling to support both "discovery-driven" and "hypothesis-driven" analyses. "Tier 1" applications focus on "exposomic" pattern recognition for extracting information from multidimensional data sets, whereas second- and higher-tier applications use mechanistic models to develop risk-relevant exposure metrics for populations and individuals. In this article, "tier 1" applications of TiER explore the identification of potentially causative associations among risk factors, to prioritize further studies, by relating publicly available demographic/socioeconomic, behavioral, and environmental data to two health endpoints (preterm birth and low birth weight). A "tier 2" application develops estimates of pollutant-mixture inhalation exposure indices for NCS counties, formulated to support risk characterization for these endpoints. These applications demonstrate the feasibility of developing risk-relevant exposure characterizations for pollutants from extant environmental and demographic/socioeconomic data.
88.
This article investigates how an information discrepancy between a drop-shipper and an online retailer affects the performance of the drop-shipping supply chain. Inventory-record misalignment between the two parties contributes to failures of order fulfillment and demand satisfaction, and hence incurs the associated penalties. We first analyze the penalties of ignoring such information discrepancy for both the drop-shipper and the online retailer. We then assess the impact of information discrepancy on both parties when the drop-shipper understands that the discrepancy exists but cannot eliminate the errors. Numerical experiments indicate that both parties can achieve significant percentage cost reductions if the information discrepancy is eliminated, and the potential savings are especially substantial when the errors are highly variable. Furthermore, the online retailer is more vulnerable to information discrepancy than the drop-shipper, and the drop-shipper suffers more from the online retailer's underestimation of the physical inventory level than from its overestimation. Moreover, even when eliminating the errors is impossible, both parties can still benefit from taking the possibility of errors into account in their decision making.
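A toy Monte Carlo, with all parameters hypothetical, can make the mechanism concrete: record errors cause the retailer to accept orders the drop-shipper cannot fill (overestimation) and to turn away fillable demand (underestimation).

```python
import numpy as np

rng = np.random.default_rng(1)
T = 100_000                                         # simulated selling periods
true_inv = rng.integers(0, 20, size=T)              # drop-shipper's physical stock
error    = rng.integers(-3, 4, size=T)              # record error (hypothetical range)
record   = np.maximum(true_inv + error, 0)          # retailer's error-laden record
demand   = rng.poisson(8, size=T)

accepted  = np.minimum(demand, record)              # orders accepted per the record
fulfilled = np.minimum(accepted, true_inv)          # what can physically ship
fail_pen  = 5.0 * (accepted - fulfilled)            # overestimation: failed orders
lost_pen  = 2.0 * np.maximum(demand - accepted, 0)  # underestimation: lost sales
baseline  = 2.0 * np.maximum(demand - true_inv, 0)  # lost sales with perfect records
print(f"avg penalty/period with errors: {(fail_pen + lost_pen).mean():.2f}, "
      f"error-free: {baseline.mean():.2f}")
```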
89.
One objective of personalized medicine is to base treatment decisions on a biomarker measurement. It is therefore often of interest to evaluate how well a biomarker can predict the response to a treatment. A popular methodology fits a regression model and tests for an interaction between treatment assignment and the biomarker. However, an interaction is necessary but not sufficient for a biomarker to be predictive. Hence, the use of the marker-by-treatment predictiveness curve has been recommended. In addition to evaluating how well a single continuous biomarker predicts treatment response, it can help define an optimal threshold. This curve displays the risk of a binary outcome as a function of the quantiles of the biomarker, for each treatment group. Methods that assume a binary outcome, or that rely on a proportional hazards model for a time-to-event outcome, have been proposed to estimate this curve. In this work, we propose extensions for censored data that rely on a time-dependent logistic model, which we estimate via inverse probability of censoring weighting. We present simulation results and three applications, to prostate cancer, liver cirrhosis, and lung cancer data. They suggest that a large number of events must be observed to define a threshold accurately enough for clinical use. They also illustrate that when the treatment effect varies with the time horizon that defines the outcome, the optimal threshold depends on this time horizon as well.
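For intuition, the sketch below traces the binary-outcome version of the marker-by-treatment predictiveness curve on simulated data; the censored-data, IPCW-weighted extension is the article's contribution and is not reproduced here. The data-generating mechanism is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
x   = rng.normal(size=n)                            # continuous biomarker
trt = rng.integers(0, 2, size=n)                    # randomized treatment arm
# Hypothetical mechanism: treatment lowers risk only below biomarker value 0.
logit = -1.0 + 0.8 * x - 1.2 * trt * (x < 0)
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))    # binary outcome

# Predictiveness curve: empirical risk per biomarker decile, in each arm.
edges = np.quantile(x, np.linspace(0, 1, 11))
decile = np.digitize(x, edges[1:-1])                # decile index 0..9
curve = {g: np.array([y[(trt == g) & (decile == k)].mean() for k in range(10)])
         for g in (0, 1)}

benefit = curve[0] - curve[1]                       # risk reduction from treatment
print(np.round(benefit, 3))                         # shrinks toward 0 above the threshold
```

Reading where the two curves separate and rejoin is what suggests a candidate treatment threshold; with censored outcomes, each empirical mean would be replaced by an IPCW-weighted estimate at the chosen time horizon.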
90.
Ning Zhang 《Communications in Statistics - Theory and Methods》2020, 49(21): 5252-5272
Under non-additive probabilities, cluster points of the empirical average have been shown to fall, quasi-surely, within the interval constructed from either the lower and upper expectations or the lower and upper Choquet expectations. In this paper, based on the notion of independence introduced here, we obtain a different Marcinkiewicz-Zygmund-type strong law of large numbers. A Kolmogorov-type strong law of large numbers then follows directly: the closed interval between the lower and upper expectations is the smallest interval that quasi-surely contains the cluster points of the empirical average.
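The abstract gives no formulas, so the display below is an assumed rendering of the Kolmogorov-type statement in standard sublinear-expectation notation:

```latex
% Sketch of the Kolmogorov-type statement; notation assumed (Peng-style upper
% expectation \mathbb{E}, with lower expectation -\mathbb{E}[-\,\cdot\,]).
\[
  \overline{\mu} := \mathbb{E}[X_1], \qquad
  \underline{\mu} := -\,\mathbb{E}[-X_1],
\]
\[
  \text{quasi-surely:}\quad
  C\!\left(\left\{\tfrac{1}{n}\textstyle\sum_{i=1}^{n} X_i\right\}_{n\ge 1}\right)
  = \left[\,\underline{\mu},\ \overline{\mu}\,\right],
\]
where $C(\cdot)$ denotes the set of cluster points of a sequence; in particular,
$[\underline{\mu}, \overline{\mu}]$ is the smallest closed interval that contains
these cluster points quasi-surely.
```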