Full-text access type
Paid full text | 4089 articles |
Free | 637 articles |
Subject classification
Management | 1099 articles |
Ethnology | 5 articles |
Demography | 47 articles |
Collected series | 4 articles |
Theory and methodology | 819 articles |
General | 54 articles |
Sociology | 1628 articles |
Statistics | 1070 articles |
Publication year
2023 | 2 articles |
2022 | 2 articles |
2021 | 93 articles |
2020 | 160 articles |
2019 | 340 articles |
2018 | 208 articles |
2017 | 359 articles |
2016 | 349 articles |
2015 | 341 articles |
2014 | 357 articles |
2013 | 575 articles |
2012 | 362 articles |
2011 | 250 articles |
2010 | 260 articles |
2009 | 148 articles |
2008 | 185 articles |
2007 | 99 articles |
2006 | 104 articles |
2005 | 95 articles |
2004 | 106 articles |
2003 | 74 articles |
2002 | 81 articles |
2001 | 87 articles |
2000 | 74 articles |
1999 | 6 articles |
1998 | 2 articles |
1997 | 1 article |
1996 | 3 articles |
1995 | 1 article |
1992 | 1 article |
1981 | 1 article |
Sort by: 4726 results found (search time: 15 ms)
71.
The three classic pillars of risk analysis are risk assessment (how big is the risk, and how sure can we be?), risk management (what shall we do about it?), and risk communication (what shall we say about it, to whom, when, and how?). We propose two complements to these three pillars: risk attribution (who or what addressable conditions actually caused an accident or loss?) and learning from experience about risk reduction (what works, and how well?). Failures in complex systems usually evoke blame, often with insufficient attention to the root causes of failure, including aspects of the situation, design decisions, or social norms and culture. Focusing on blame, however, can inhibit effective learning, instead eliciting excuses that deflect attention and perceived culpability. Productive understanding of what went wrong, and how to do better, thus requires moving past recrimination and excuses. This article identifies common blame-shifting "lame excuses" for poor risk management. These generally contribute little to effective improvement and may leave real risks and preventable causes unaddressed. We propose principles from the risk and decision sciences and from organizational design to improve results. These start with organizational leadership. More specifically, they include: deliberate testing and learning, especially from near-misses and accident precursors; careful causal analysis of accidents; risk quantification; candid expression of uncertainties about the costs and benefits of risk-reduction options; optimization of tradeoffs between gathering additional information and acting immediately; promotion of a safety culture; and mindful allocation of people, responsibilities, and resources to reduce risks. We propose that these principles provide sound foundations for improving risk management.
72.
Sondra S. Teske, Mark H. Weir, Timothy A. Bartrand, Yin Huang, Sushil B. Tamrakar, Charles N. Haas. Risk Analysis, 2014, 34(5): 911-928
The effect of bioaerosol size was incorporated into predictive dose-response models for the effects of inhaled aerosols of Francisella tularensis (the causative agent of tularemia) on rhesus monkeys and guinea pigs, with bioaerosol diameters ranging between 1.0 and 24 μm. Aerosol-size-dependent models were formulated as modifications of the exponential and β-Poisson dose-response models, and model parameters were estimated by maximum likelihood from multiple sets of quantal dose-response data for which the aerosol sizes of the inhaled doses were known. The F. tularensis dose-response data were best fit by an exponential dose-response model in which a power function of particle diameter replaces the rate parameter k that scales the applied dose. These pathogen-specific aerosol-size-dependent models represented the observed dose-response results better than estimates derived from the model developed by the International Commission on Radiological Protection (ICRP, 1994), which relies on differential regional lung deposition for human particle exposure.
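As a rough illustration of the model class described in this abstract, the sketch below implements an exponential dose-response model whose rate parameter is a power function of aerosol diameter. The functional form k(d) = a * d**b follows the abstract's description, but the parameter values `a` and `b` are placeholders for illustration, not estimates from the paper:

```python
import math

def exp_dose_response(dose, diameter, a=0.005, b=-1.0):
    """Size-dependent exponential dose-response model (illustrative).

    P(response) = 1 - exp(-k(d) * dose), where the rate parameter
    k(d) = a * d**b is a power function of the aerosol diameter d (um).
    The values of a and b here are placeholders, not fitted estimates.
    """
    k = a * diameter ** b
    return 1.0 - math.exp(-k * dose)
```

With b < 0, larger particles yield a smaller rate parameter and hence a lower response probability at the same applied dose, which is the qualitative behavior one would fit against quantal data by maximum likelihood.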
73.
In chemical and microbial risk assessments, risk assessors fit dose-response models to high-dose data and extrapolate downward to risk levels in the range of 1-10%. Although multiple dose-response models may fit the data adequately within the experimental range, the estimated effective dose (ED) corresponding to an extremely small risk can differ substantially from model to model. In this respect, model averaging (MA) provides more robust point and interval estimation of an ED than a single dose-response model. In MA, accounting for both data uncertainty and model uncertainty is crucial, but addressing model uncertainty is not achieved simply by increasing the number of models in a model space. A plausible set of models for MA can be characterized by goodness of fit and by diversity surrounding the truth. We propose a diversity index (DI) to balance these two characteristics in model-space selection. It addresses a collective property of the model space rather than the individual performance of each model. Tuning parameters in the DI control the size of the model space used for MA.
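The diversity index itself is specific to this paper, but the model-averaging step it supports can be illustrated with a standard information-criterion weighting scheme. AIC weighting, sketched below, is a common MA choice and stands in for, rather than reproduces, the paper's method; the ED estimates and AIC values are invented for illustration:

```python
import math

def akaike_weights(aics):
    """Convert per-model AIC scores into normalized model weights."""
    best = min(aics)
    raw = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(raw)
    return [r / total for r in raw]

def model_averaged_ed(ed_estimates, aics):
    """Weighted average of per-model effective-dose (ED) estimates."""
    return sum(w * ed for w, ed in zip(akaike_weights(aics), ed_estimates))
```

The averaged ED always lies within the range of the per-model estimates, which is the robustness-to-model-choice property the abstract points to.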
74.
Land subsidence risk assessment (LSRA) is a multi-attribute decision analysis (MADA) problem, often characterized by both quantitative and qualitative attributes subject to various types of uncertainty. The problem therefore needs to be modeled and analyzed with methods that can handle uncertainty. In this article, we propose an integrated assessment model based on the evidential reasoning (ER) algorithm and fuzzy set theory. The assessment model is structured as a hierarchical framework that regards land subsidence risk as a composite of two key factors: hazard and vulnerability. These factors can be described by a set of basic indicators defined by assessment grades, with attributes for transforming both numerical data and subjective judgments into a belief structure. The factor-level attributes of hazard and vulnerability are combined using the ER algorithm, which draws on information from a belief structure calculated via Dempster-Shafer (D-S) theory and a distributed fuzzy belief structure calculated via fuzzy set theory. The combined algorithms yield distributed assessment-grade matrices. An application of the model to the Xixi-Chengnan area, China, illustrates its usefulness and validity for LSRA. The model utilizes all available evidence, including assessment information that is quantitative or qualitative, complete or incomplete, and precise or imprecise, to provide assessment grades that characterize risk on the basis of hazard and vulnerability. The results will enable risk managers to apply different risk-prevention measures and mitigation planning based on the calculated risk states.
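A minimal sketch of the evidence-combination step underlying the ER approach: Dempster's rule for two basic probability assignments (BPAs) whose focal elements are singleton assessment grades plus the universal set "Theta". The grade names and mass values are invented for illustration; the paper's full ER algorithm with distributed fuzzy belief structures is considerably more general:

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two BPAs over singleton grades plus 'Theta'.

    Each input maps grade names (and 'Theta', the universal set) to masses
    summing to 1. Distinct singleton grades have empty intersection, so
    their cross products contribute to the conflict mass, which is then
    normalized away.
    """
    grades = (set(m1) | set(m2)) - {"Theta"}
    fused = {}
    for g in grades:
        fused[g] = (m1.get(g, 0.0) * m2.get(g, 0.0)
                    + m1.get(g, 0.0) * m2.get("Theta", 0.0)
                    + m1.get("Theta", 0.0) * m2.get(g, 0.0))
    theta = m1.get("Theta", 0.0) * m2.get("Theta", 0.0)
    conflict = 1.0 - sum(fused.values()) - theta  # mass on empty intersections
    norm = 1.0 - conflict
    out = {g: v / norm for g, v in fused.items()}
    out["Theta"] = theta / norm
    return out
```

When both sources agree on a grade, the combined belief in that grade increases, which is the reinforcement behavior that makes D-S combination useful for fusing hazard and vulnerability indicators.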
75.
Panos G. Georgopoulos, Christopher J. Brinkerhoff, Sastry Isukapalli, Michael Dellarco, Philip J. Landrigan, Paul J. Lioy. Risk Analysis, 2014, 34(7): 1299-1316
A challenge for large-scale environmental health investigations such as the National Children's Study (NCS) is characterizing exposures to multiple, co-occurring chemical agents with varying spatiotemporal concentrations, and consequences modulated by biochemical, physiological, behavioral, socioeconomic, and environmental factors. Such investigations can benefit from systematic retrieval, analysis, and integration of diverse extant information on both contaminant patterns and exposure-relevant factors. This requires the development, evaluation, and deployment of informatics methods that support flexible access to, and analysis of, multiattribute data across multiple spatiotemporal scales. A new "Tiered Exposure Ranking" (TiER) framework, developed to support various aspects of risk-relevant exposure characterization, is described here, with examples demonstrating its application to the NCS. TiER utilizes advances in informatics and computational methods, extant database content and availability, and integrative environmental/exposure/biological modeling to support both "discovery-driven" and "hypothesis-driven" analyses. "Tier 1" applications focus on "exposomic" pattern recognition for extracting information from multidimensional data sets, whereas second- and higher-tier applications utilize mechanistic models to develop risk-relevant exposure metrics for populations and individuals. In this article, "tier 1" applications of TiER explore the identification of potentially causative associations among risk factors, to prioritize further studies, by considering publicly available demographic/socioeconomic, behavioral, and environmental data in relation to two health endpoints (preterm birth and low birth weight). A "tier 2" application develops estimates of pollutant-mixture inhalation exposure indices for NCS counties, formulated to support risk characterization for these endpoints. Applications of TiER demonstrate the feasibility of developing risk-relevant exposure characterizations for pollutants using extant environmental and demographic/socioeconomic data.
76.
Max Travers. The Australian Journal of Social Issues, 2009, 44(3): 273-289
A driving force behind the establishment of a qualitative data archive in the United Kingdom has been the oral historian Paul Thompson. He has complained that there is a 'strange silence' among qualitative sociologists on re-analysis, and that many have been reluctant to deposit data. The first part of the paper suggests that the common ethical and practical objections can be overcome in establishing an archive in Australia. However, there is a more serious underlying ideological objection: that archiving promotes and institutionalises a narrow empiricist version of qualitative research. The rest of the paper makes this case by examining teaching materials on a British website, by reviewing Thompson's arguments, and by considering some examples of re-analysis by sociologists. It is argued that qualitative researchers should respond critically, but that it is possible to address and overcome these problems when developing an Australian archive.
77.
We examine the relationship between vocational education and occupational burnout among workers in different forms of employment. Although the self-employed enjoy higher levels of job autonomy and work-related satisfaction, we do not know whether they experience lower rates of occupational burnout, or whether vocational education plays a role in this relationship. This latter consideration is important, given that vocational qualifications often lead to self-employment and prior research has demonstrated that formal training may reduce burnout. However, formal education was previously measured in years of schooling, without considering the distinction between academically oriented and vocational courses. Therefore, using data from a 2001 national survey of working Australians, we first establish that the self-employed are significantly less likely to experience burnout. We then demonstrate that some of this resilience to burnout can be attributed to the attainment of skilled vocational training, net of employment characteristics, which are also very important.
78.
This article investigates the impact of information discrepancy between a drop-shipper and an online retailer on drop-shipping supply chain performance. Inventory information misalignment between them contributes to failures of order fulfillment and demand satisfaction, and hence the associated penalties are incurred. In this article, we first analyze the penalties of ignoring such information discrepancy for both the drop-shipper and the online retailer. We then assess the impact of information discrepancy on both parties when the drop-shipper understands that the information discrepancy exists but is not able to eliminate the errors. The numerical experiments indicate that both parties can achieve significant percentage cost reductions if the information discrepancy is eliminated, and the potential savings are especially substantial when the errors have large variability. Furthermore, we observe that the online retailer is more vulnerable to information discrepancy than the drop-shipper, and that the drop-shipper suffers more from the online retailer's underestimation of the physical inventory level than from its overestimation. Moreover, even if eliminating errors is not possible, both parties can still benefit from taking the possibility of errors into account in decision making.
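A toy Monte Carlo sketch of the penalty mechanism this abstract describes: orders are promised against the recorded inventory, while only the (unobserved) physical stock can actually ship. The Gaussian error and demand distributions and all parameter values are illustrative assumptions, not the paper's model:

```python
import random

def expected_shortfall_penalty(record_level, error_sd, demand_mean, demand_sd,
                               unit_penalty=1.0, trials=20000, seed=1):
    """Average penalty from promising orders against an erroneous record.

    physical = record_level + Gaussian record error (illustrative choice);
    each promised-but-unshipped unit incurs unit_penalty.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        physical = max(0, round(record_level + rng.gauss(0.0, error_sd)))
        demand = max(0, round(rng.gauss(demand_mean, demand_sd)))
        promised = min(demand, record_level)   # accept against the record
        shipped = min(promised, physical)      # ship from physical stock
        total += unit_penalty * (promised - shipped)
    return total / trials
```

With zero record-error variability the penalty vanishes, and it grows with the error standard deviation, consistent with the abstract's observation that savings from eliminating discrepancy are largest when errors have large variability.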
79.
One of the objectives of personalized medicine is to base treatment decisions on a biomarker measurement. It is therefore often of interest to evaluate how well a biomarker can predict the response to a treatment. A popular methodology for doing so consists of fitting a regression model and testing for an interaction between treatment assignment and the biomarker. However, the existence of an interaction is necessary but not sufficient for a biomarker to be predictive. Hence, the use of the marker-by-treatment predictiveness curve has been recommended. In addition to evaluating how well a single continuous biomarker predicts treatment response, the curve can further help to define an optimal threshold. It displays the risk of a binary outcome as a function of the quantiles of the biomarker, for each treatment group. Methods that assume a binary outcome, or that rely on a proportional hazards model for a time-to-event outcome, have been proposed to estimate this curve. In this work, we propose extensions for censored data. They rely on a time-dependent logistic model, which we propose to estimate via inverse probability of censoring weighting. We present simulation results and three applications to prostate cancer, liver cirrhosis, and lung cancer data. They suggest that a large number of events must be observed to define a threshold with sufficient accuracy for clinical usefulness. They also illustrate that when the treatment effect varies with the time horizon that defines the outcome, the optimal threshold also depends on this time horizon.
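The core of inverse probability of censoring weighting is to up-weight observed events by the inverse probability of remaining uncensored. A minimal sketch, assuming a known censoring survival function G (in practice G would itself be estimated, for example by Kaplan-Meier on the censoring times; that step is omitted here):

```python
def ipcw_risk(times, events, horizon, censor_surv):
    """IPCW estimate of P(T <= horizon) from right-censored data.

    times: observed times; events: 1 if the event was observed, 0 if censored;
    censor_surv: G(t), the censoring survival function (assumed known here).
    Each observed event before the horizon is weighted by 1 / G(t_i);
    censored subjects contribute only through the denominator.
    """
    total = sum(1.0 / censor_surv(t) for t, d in zip(times, events)
                if d == 1 and t <= horizon)
    return total / len(times)
```

Evaluating this estimate within biomarker-quantile strata, separately per treatment arm, yields the points of a predictiveness curve for a chosen time horizon.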
80.
Ning Zhang. Communications in Statistics - Theory and Methods, 2020, 49(21): 5252-5272
Under non-additive probabilities, cluster points of the empirical average have been shown to fall, quasi-surely, into the interval constructed from either the lower and upper expectations or the lower and upper Choquet expectations. In this paper, based on a newly introduced notion of independence, we obtain a different Marcinkiewicz-Zygmund-type strong law of large numbers. A Kolmogorov-type strong law of large numbers can then be derived from it directly, stating that the closed interval between the lower and upper expectations is the smallest one that covers the cluster points of the empirical average quasi-surely.
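In the notation of the sublinear-expectation literature (assumed here; the abstract itself gives no symbols), writing \(\mathbb{E}\) for the upper expectation and \(\mathcal{E}[X] := -\mathbb{E}[-X]\) for the lower one, the Kolmogorov-type conclusion can be stated as:

```latex
% Kolmogorov-type SLLN under non-additive probabilities (notation assumed);
% C(\cdot) denotes the set of cluster points of a sequence.
C\!\left(\left\{\frac{1}{n}\sum_{i=1}^{n} X_i\right\}_{n\ge 1}\right)
\subseteq \bigl[\,\mathcal{E}[X_1],\ \mathbb{E}[X_1]\,\bigr]
\quad \text{quasi-surely,}
```

with \([\mathcal{E}[X_1], \mathbb{E}[X_1]]\) being the smallest closed interval for which this inclusion holds.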