By access type:
  Full text (paid): 18,487 articles
  Free: 1,654 articles
  Free domestically: 225 articles
By discipline:
  Management: 4,246 articles
  Labor science: 2 articles
  Ethnology: 197 articles
  Talent studies: 1 article
  Demography: 213 articles
  Collected works and series: 1,093 articles
  Theory and methodology: 1,536 articles
  General/comprehensive: 8,536 articles
  Sociology: 2,718 articles
  Statistics: 1,824 articles
By publication year (number of articles):
  2024: 34
  2023: 214
  2022: 213
  2021: 317
  2020: 563
  2019: 705
  2018: 554
  2017: 760
  2016: 793
  2015: 792
  2014: 1,144
  2013: 1,684
  2012: 1,259
  2011: 1,219
  2010: 1,038
  2009: 892
  2008: 984
  2007: 965
  2006: 973
  2005: 929
  2004: 876
  2003: 822
  2002: 659
  2001: 670
  2000: 448
  1999: 175
  1998: 88
  1997: 61
  1996: 56
  1995: 54
  1994: 68
  1993: 50
  1992: 42
  1991: 32
  1990: 43
  1989: 24
  1988: 39
  1987: 22
  1986: 20
  1985: 14
  1984: 18
  1983: 17
  1982: 17
  1981: 16
  1980: 1
  1979: 1
  1977: 1
Sort order: 10,000 results in total; search took 31 ms.
1.
Damage models for natural hazards are used for decision making on reducing and transferring risk. The damage estimates from these models depend on many variables and their complex, sometimes nonlinear, relationships with the damage. In recent years, data-driven modeling techniques have been used to capture those relationships. The available data to build such models are often limited, so in practice it is usually necessary to transfer models to a different context. In this article, we show that this implies the samples used to build the model are often not fully representative of the situation in which they are applied, which leads to a "sample selection bias." We enhance data-driven damage models by applying methods, not previously applied to damage modeling, to correct for this bias before the machine learning (ML) models are trained. We demonstrate this with case studies on flooding in Europe and typhoon wind damage in the Philippines. Two sample selection bias correction methods from the ML literature are applied, and one of these methods is also adjusted to our problem. These three methods are combined with stochastic generation of synthetic damage data. We demonstrate that for both case studies the sample selection bias correction techniques reduce model errors; for the mean bias error in particular, the reduction can be larger than 30%. The novel combination with stochastic data generation appears to enhance these techniques further. This shows that sample selection bias correction methods are beneficial for damage model transfer.
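To make the idea of sample selection bias correction concrete, the sketch below shows one standard correction from the ML literature — importance weighting with a domain-discriminating classifier — applied to synthetic data. It is illustrative only: the data, feature names, and choice of regressor are hypothetical, and the article's own two correction methods and its stochastic damage-data generation are not reproduced here.

```python
# Minimal sketch of sample selection bias correction via importance weighting.
# Illustrative only: all data and variable names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical hazard/exposure features and damages from the source context.
X_source = rng.normal(size=(500, 3))
y_source = 2.0 * X_source[:, 0] + rng.normal(scale=0.5, size=500)

# Unlabelled features from the target context the model is transferred to.
X_target = rng.normal(loc=0.8, size=(300, 3))

# 1. Train a classifier to separate source (0) from target (1) samples.
X_both = np.vstack([X_source, X_target])
domain = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
clf = LogisticRegression(max_iter=1000).fit(X_both, domain)

# 2. Importance weight for each source sample: p(target|x) / p(source|x).
p_target = clf.predict_proba(X_source)[:, 1]
weights = p_target / (1.0 - p_target)

# 3. Fit the damage model on the reweighted source data.
model = GradientBoostingRegressor().fit(X_source, y_source, sample_weight=weights)
print("Predicted damages in target context:", model.predict(X_target[:5]))
```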
2.
Believing that action to reduce the risks of climate change is both possible (self-efficacy) and effective (response efficacy) is essential to motivate and sustain risk mitigation efforts, according to current risk communication theory. Although the public recognizes the dangers of climate change, and is deluged with lists of possible mitigative actions, little is known about public efficacy beliefs in the context of climate change. Prior efficacy studies rely on conflicting constructs and measures of efficacy, and links between efficacy and risk management actions are muddled. As a result, much remains to be learned about how laypersons think about the ease and effectiveness of potential mitigative actions. To bring clarity and inform risk communication and management efforts, we investigate how people think about efficacy in the context of climate change risk management by analyzing unprompted and prompted beliefs from two national surveys (N = 405, N = 1,820). In general, respondents distinguish little between effective and ineffective climate strategies. While many respondents appreciate that reducing fossil fuel use is an effective risk mitigation strategy, overall assessments reflect persistent misconceptions about the causes of climate change and uncertainties about the effectiveness of risk mitigation strategies. Our findings suggest targeting climate change risk communication and management strategies to (1) address gaps in people's existing mental models of climate action, (2) leverage existing public understanding of both potentially effective mitigation strategies and the collective action dilemma at the heart of climate change action, and (3) take into account ideologically driven reactions to behavior change and to government action framed as climate action.
3.
This article highlights three dimensions to understanding children's well-being during and after parental imprisonment which have not been fully explored in current research. A consideration of 'time' reveals the importance of children's past experiences and their anticipated futures. A focus on 'space' highlights the impact of new or altered environmental dynamics. A study of 'agency' illuminates how children cope within structural, material and social confines which intensify vulnerability and dependency. This integrated perspective reveals important differences in individual children's experiences and commonalities in broader systemic and social constraints on prisoners' children. The paper analyses data from a prospective longitudinal study of 35 prisoners' children during and after their (step)father's imprisonment to illustrate these arguments.
4.
As part of the celebration of the 40th anniversary of the Society for Risk Analysis and Risk Analysis: An International Journal, this essay reviews the 10 most important accomplishments of risk analysis from 1980 to 2010, outlines major accomplishments in three major categories from 2011 to 2019, discusses how editors circulate authors' accomplishments, and proposes 10 major risk-related challenges for 2020–2030. The authors conclude that the next decade will severely test the field of risk analysis.
5.
In modern economies, the network linkages among economic agents are increasingly strong, and risk can easily spread across industries; effectively identifying and analyzing systemic risk is therefore a key step in preventing financial crises. Based on two indicators, conditional value at risk (CoVaR) and marginal expected shortfall (MES), this study examines the static and dynamic characteristics of systemic risk in the 巨潮 (Juchao/CNINFO) industry indices. The results show that systemic risk is strongly correlated across industries, and the dynamic features reveal two peaks in systemic risk, in early 2009 and in March 2016. By industry, the materials sector exhibits the highest systemic risk, while the consumer and pharmaceutical sectors exhibit the lowest. A dynamic panel model of the market-level factors driving industry systemic risk shows that industries with higher short-term gains, lower long-term gains, and more ample liquidity tend to have lower systemic risk. Regulators should therefore strengthen supervision of industries with high systemic risk and build financial firewalls to prevent excessive contagion of external financial risk, while also strengthening real-time monitoring of all industries, with particular attention to short-term surges and crashes and to the adequacy of liquidity.
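As a rough illustration of the two risk measures named above, the sketch below computes simple nonparametric CoVaR and MES estimates from synthetic daily returns. The return series are hypothetical, and the static empirical-quantile estimates shown here stand in for, but are not, the dynamic estimators or the dynamic panel analysis used in the study.

```python
# Minimal sketch of nonparametric CoVaR and MES estimates from daily returns.
# Purely illustrative: synthetic data and simple empirical quantiles only.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
system = rng.standard_t(df=5, size=n) * 0.01               # hypothetical market return
industry = 0.8 * system + rng.normal(scale=0.01, size=n)   # hypothetical industry return

alpha = 0.05

# VaR of the industry at level alpha (low quantile of returns).
var_industry = np.quantile(industry, alpha)

# CoVaR: alpha-quantile of system returns on days the industry is at/below its VaR.
distress = industry <= var_industry
covar = np.quantile(system[distress], alpha)

# MES: expected industry return on days the system is at/below its own VaR.
var_system = np.quantile(system, alpha)
mes = industry[system <= var_system].mean()

print(f"VaR_industry={var_industry:.4f}, CoVaR={covar:.4f}, MES={mes:.4f}")
```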
6.
This article presents a flood risk analysis model that considers the spatially heterogeneous nature of flood events. The basic concept of this approach is to generate a large sample of flood events that can be regarded as a temporal extrapolation of flood events. These are combined with cumulative flood impact indicators, such as building damage, to derive time series of damages for risk estimation. To this end, a multivariate modeling procedure that is able to take into account the spatial characteristics of flooding, the regionalization method top-kriging, and three different impact indicators are combined in a model chain. Eventually, the expected annual flood impact (e.g., expected annual damages) and the flood impact associated with a low probability of occurrence are determined for a study area. The risk model has the potential to augment the understanding of flood risk in a region and thereby contribute to enhanced risk management by, for example, risk analysts, policymakers, or insurance companies. The modeling framework was successfully applied in a proof-of-concept exercise in Vorarlberg (Austria). The results of the case study show that risk analysis has to be based on spatially heterogeneous flood events in order to estimate flood risk adequately.
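The sketch below illustrates the last step of such a model chain: turning a large synthetic event set into an expected annual damage and a low-probability damage level. The event generator is a hypothetical placeholder (Poisson event counts with lognormal damages), not the multivariate, top-kriging-based procedure described in the article.

```python
# Minimal sketch of deriving expected annual damage (EAD) and a low-probability
# damage level from a stochastically generated set of flood events.
import numpy as np

rng = np.random.default_rng(2)
n_years = 10_000  # synthetic event set spanning many "years"

# Hypothetical: Poisson number of flood events per year, heavy-tailed damages.
events_per_year = rng.poisson(lam=0.4, size=n_years)
annual_damage = np.array([
    rng.lognormal(mean=14.0, sigma=1.2, size=k).sum() for k in events_per_year
])

ead = annual_damage.mean()                       # expected annual damage
damage_100yr = np.quantile(annual_damage, 0.99)  # damage exceeded once in ~100 years

print(f"EAD = {ead:,.0f}, 100-year damage = {damage_100yr:,.0f}")
```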
7.
Perceptions of infectious diseases are important predictors of whether people engage in disease-specific preventive behaviors. Having accurate beliefs about a given infectious disease has been found to be a necessary condition for engaging in appropriate preventive behaviors during an infectious disease outbreak, while endorsing conspiracy beliefs can inhibit preventive behaviors. Despite their seemingly opposing natures, knowledge and conspiracy beliefs may share some of the same psychological motivations, including a relationship with perceived risk and self-efficacy (i.e., control). The 2015–2016 Zika epidemic provided an opportunity to explore this. The current research provides exploratory tests of this topic derived from two studies with similar measures but different primary outcomes: one study that included knowledge of Zika as a key outcome and one that included conspiracy beliefs about Zika as a key outcome. Both studies involved cross-sectional data collections during the same two periods of the Zika outbreak: one prior to the first cases of local Zika transmission in the United States (March–May 2016) and one just after the first cases of local transmission (July–August). Using ordinal logistic and linear regression analyses of data from the two time points in both studies, the authors show that the relationships of perceived risk and self-efficacy with both knowledge and conspiracy beliefs strengthened after local Zika transmission began in the United States. Although these results highlight that similar psychological motivations may lead to both Zika knowledge and conspiracy beliefs, the demographic associations of the two diverged.
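For readers unfamiliar with the analysis named above, the sketch below fits an ordinal logistic regression with an interaction between a pre/post-transmission indicator and perceived risk, using statsmodels. All data and variable names are hypothetical; it mirrors the type of model described, not the authors' actual specification.

```python
# Minimal sketch of an ordinal logistic regression with a pre/post interaction.
# Hypothetical data: an ordinal outcome (e.g., knowledge level) regressed on
# perceived risk, self-efficacy, a post-transmission indicator, and an interaction.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(3)
n = 800
df = pd.DataFrame({
    "risk": rng.normal(size=n),
    "efficacy": rng.normal(size=n),
    "post": rng.integers(0, 2, size=n),  # 0 = before, 1 = after local transmission
})

# Hypothetical latent score generating a 4-level ordinal outcome.
latent = 0.3 * df.risk + 0.2 * df.efficacy + 0.4 * df.post * df.risk + rng.logistic(size=n)
df["outcome"] = pd.cut(latent, bins=[-np.inf, -1, 0, 1, np.inf], labels=False)

X = df[["risk", "efficacy", "post"]].assign(post_x_risk=df.post * df.risk)
res = OrderedModel(df["outcome"], X, distr="logit").fit(method="bfgs", disp=False)
print(res.summary())
```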
8.
In this paper, we consider the deterministic trend model in which the error process is allowed to be weakly or strongly correlated and subject to non-stationary volatility. Extant estimators of the trend coefficient are analysed. We find that under heteroskedasticity, the Cochrane–Orcutt-type estimator (with some initial condition) can be less efficient than Ordinary Least Squares (OLS) when the process is highly persistent, whereas it is asymptotically equivalent to OLS when the process is less persistent. An efficient non-parametrically weighted Cochrane–Orcutt-type estimator is then proposed. The efficiency is uniform over weak or strong serial correlation and non-stationary volatility of unknown form. The feasible estimator relies on non-parametric estimation of the volatility function, and the asymptotic theory is provided. We use a data-dependent smoothing bandwidth that can automatically adjust for the strength of non-stationarity in the volatilities. The implementation requires neither pretesting the persistence of the process nor specifying the form of the non-stationary volatility. Finite-sample evaluation via simulations and an empirical application demonstrates the good performance of the proposed estimators.
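A minimal sketch of the baseline comparison discussed above — OLS versus a simple feasible Cochrane–Orcutt estimator of a linear trend under AR(1) errors — is given below. It is illustrative only: the simulated errors are homoskedastic, and the paper's non-parametrically weighted estimator for non-stationary volatility is not implemented here.

```python
# Minimal sketch: OLS vs a feasible Cochrane-Orcutt estimator of the slope in a
# deterministic linear trend model with AR(1) errors.  Illustrative only.
import numpy as np

rng = np.random.default_rng(4)
T, beta, rho = 500, 0.5, 0.8

t = np.arange(1, T + 1, dtype=float)
u = np.zeros(T)
for s in range(1, T):                  # AR(1) error process
    u[s] = rho * u[s - 1] + rng.normal()
y = beta * t + u

# OLS on the trend.
X = np.column_stack([np.ones(T), t])
coef_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_ols = coef_ols[1]

# Feasible Cochrane-Orcutt: estimate rho from OLS residuals, quasi-difference, re-fit.
resid = y - X @ coef_ols
rho_hat = resid[1:] @ resid[:-1] / (resid[:-1] @ resid[:-1])
y_star = y[1:] - rho_hat * y[:-1]
X_star = X[1:] - rho_hat * X[:-1]
beta_co = np.linalg.lstsq(X_star, y_star, rcond=None)[0][1]

print(f"true beta={beta}, OLS={beta_ols:.4f}, Cochrane-Orcutt={beta_co:.4f}")
```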
9.
Strong orthogonal arrays (SOAs) were recently introduced and studied as a class of space-filling designs for computer experiments. An important problem that has not been addressed in the literature is that of design selection for such arrays. In this article, we conduct a systematic investigation into this problem, focusing on the most useful SOA(n, m, 4, 2+)s and SOA(n, m, 4, 2)s. The article first addresses the problem of design selection for SOAs of strength 2+ by examining their three-dimensional projections. Both theoretical and computational results are presented. When SOAs of strength 2+ do not exist, we formulate a general framework for the selection of SOAs of strength 2 by looking at their two-dimensional projections. The approach is fruitful, as it is applicable when SOAs of strength 2+ do not exist and it gives rise to them when they do. The Canadian Journal of Statistics 47: 302–314; 2019 © 2019 Statistical Society of Canada
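The sketch below illustrates the two-dimensional stratification property associated with strength 2+: every pair of four-level columns should be balanced on both 4 × 2 and 2 × 4 grids. The tiny full-factorial design used to exercise the check is a trivial placeholder, not a construction or selection criterion from the article.

```python
# Minimal sketch of a 4x2 / 2x4 stratification check on two-dimensional projections
# of a 4-level design.  The full-factorial example is only a placeholder.
import itertools
import numpy as np

def balanced(a: np.ndarray, b: np.ndarray) -> bool:
    """True if every combination of levels of a and b occurs equally often."""
    pairs, counts = np.unique(np.column_stack([a, b]), axis=0, return_counts=True)
    n_cells = len(np.unique(a)) * len(np.unique(b))
    return len(pairs) == n_cells and len(set(counts)) == 1

def stratifies_2plus(design: np.ndarray) -> bool:
    """Check 4x2 and 2x4 balance for all pairs of columns (levels 0..3)."""
    for i, j in itertools.combinations(range(design.shape[1]), 2):
        if not (balanced(design[:, i], design[:, j] // 2) and
                balanced(design[:, i] // 2, design[:, j])):
            return False
    return True

# Trivial 16-run, two-column full factorial in levels {0,1,2,3}.
design = np.array(list(itertools.product(range(4), repeat=2)))
print(stratifies_2plus(design))  # True
```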
10.
Proportional hazards are a common assumption when designing confirmatory clinical trials in oncology. This assumption affects not only the analysis but also the sample size calculation. The presence of delayed effects causes a change in the hazard ratio while the trial is ongoing: at the beginning we do not observe any difference between treatment arms, and only after some unknown time point do differences between treatment arms start to appear. Hence, the proportional hazards assumption no longer holds, and both the sample size calculation and the analysis methods should be reconsidered. The weighted log-rank test allows early, middle, and late differences to be weighted through the Fleming and Harrington class of weights and is proven to be more efficient when the proportional hazards assumption does not hold. The Fleming and Harrington class of weights, along with the estimated delay, can be incorporated into the sample size calculation in order to maintain the desired power once the treatment arm differences start to appear. In this article, we explore the impact of delayed effects in group sequential and adaptive group sequential designs and make an empirical evaluation, in terms of power and type-I error rate, of the weighted log-rank test in a simulated scenario with fixed values of the Fleming and Harrington class of weights. We also give some practical recommendations regarding which methodology should be used in the presence of delayed effects, depending on certain characteristics of the trial.
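As an illustration of the weighting scheme discussed above, the sketch below computes a Fleming–Harrington G(rho, gamma) weighted log-rank statistic on hypothetical delayed-effect data. It covers only the test statistic itself; the sample size, group sequential, and adaptive design aspects of the article are not addressed.

```python
# Minimal sketch of a Fleming-Harrington G(rho, gamma) weighted log-rank statistic.
# Weights are S(t-)^rho * (1 - S(t-))^gamma with S the pooled Kaplan-Meier estimate.
import numpy as np

def fh_weighted_logrank(time, event, group, rho=0.0, gamma=1.0):
    """Weighted log-rank Z statistic comparing group == 1 against group == 0."""
    time, event, group = map(np.asarray, (time, event, group))
    event_times = np.unique(time[event == 1])
    s_prev = 1.0                 # pooled KM estimate just before the current time
    num, var = 0.0, 0.0
    for t in event_times:
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()

        w = s_prev**rho * (1.0 - s_prev)**gamma
        num += w * (d1 - d * n1 / n)
        if n > 1:
            var += w**2 * d * (n - d) * n1 * (n - n1) / (n**2 * (n - 1))
        s_prev *= 1.0 - d / n    # update pooled KM after this event time
    return num / np.sqrt(var)

# Hypothetical delayed-effect data: same hazard as control before t = 6, lower after.
rng = np.random.default_rng(5)
n = 300
group = rng.integers(0, 2, size=n)
t_ctrl = rng.exponential(12, size=n)
t0 = rng.exponential(12, size=n)
t_trt = np.where(t0 <= 6, t0, 6 + rng.exponential(20, size=n))
latent = np.where(group == 1, t_trt, t_ctrl)
censor = rng.uniform(5, 30, size=n)
time = np.minimum(latent, censor)
event = (latent <= censor).astype(int)

print("FH(0,1) weighted log-rank Z =", round(fh_weighted_logrank(time, event, group), 3))
```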