6398 results in total (search time: 15 ms)
1.
Damage models for natural hazards are used for decision making on reducing and transferring risk. The damage estimates from these models depend on many variables and their complex, sometimes nonlinear, relationships with the damage. In recent years, data-driven modeling techniques have been used to capture those relationships. The available data to build such models are often limited. Therefore, in practice it is usually necessary to transfer models to a different context. In this article, we show that this implies the samples used to build the model are often not fully representative of the situation to which they are applied, which leads to a "sample selection bias." We therefore enhance data-driven damage models by applying methods, not previously applied to damage modeling, to correct for this bias before the machine learning (ML) models are trained. We demonstrate this with case studies on flooding in Europe and typhoon wind damage in the Philippines. Two sample selection bias correction methods from the ML literature are applied, and one of them is additionally adapted to our problem. These three methods are combined with stochastic generation of synthetic damage data. We demonstrate that for both case studies the sample selection bias correction techniques reduce model errors; for the mean bias error in particular, the reduction can exceed 30%. The novel combination with stochastic data generation seems to enhance these techniques. This shows that sample selection bias correction methods are beneficial for damage model transfer.
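A minimal sketch of one common form of sample selection bias correction, assuming a density-ratio (importance-weighting) scheme: a probabilistic classifier separates source-region from target-region samples, and the resulting weights re-weight the training of the damage model. The variable names, weighting recipe, and clipping are illustrative assumptions, not the authors' implementation.

```python
# Sketch: correcting sample selection bias by importance weighting before
# training a damage model. X_source/y_source come from the region the model
# was built for; X_target holds (unlabeled) features from the region where
# the model will be applied. Data and weighting choices are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_source = rng.normal(size=(500, 3))
y_source = rng.gamma(2.0, 1.0, size=500)
X_target = rng.normal(loc=0.5, size=(300, 3))   # shifted feature distribution

# 1. Estimate p(target | x) with a probabilistic classifier.
X_all = np.vstack([X_source, X_target])
domain = np.r_[np.zeros(len(X_source)), np.ones(len(X_target))]
clf = LogisticRegression(max_iter=1000).fit(X_all, domain)
p_target = clf.predict_proba(X_source)[:, 1]

# 2. Density-ratio weights w(x) ~ p_target(x) / p_source(x), clipped for stability.
weights = np.clip(p_target / (1.0 - p_target), 0.1, 10.0)

# 3. Train the damage model on source data, re-weighted toward the target domain.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_source, y_source, sample_weight=weights)
damage_pred = model.predict(X_target)
```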
2.
Abstract

Characterizing relations via Rényi entropy of m-generalized order statistics are considered along with examples and related stochastic orderings. Previous results for common order statistics are included.
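For intuition only, a small numerical sketch (not from the article) of the Rényi entropy of an ordinary order statistic: the density of the r-th order statistic of an i.i.d. sample is plugged into H_alpha(f) = (1 - alpha)^(-1) log ∫ f(x)^alpha dx and integrated numerically. The exponential parent distribution and parameter values are illustrative assumptions.

```python
# Numerical Rényi entropy of the r-th order statistic of n i.i.d. Exp(1) draws.
# Illustrative parameters; not the article's m-generalized setup.
from math import comb, log
import numpy as np
from scipy import integrate
from scipy.stats import expon

def order_stat_pdf(x, r, n, dist=expon):
    """Density of the r-th smallest of n i.i.d. draws from `dist`."""
    F, f = dist.cdf(x), dist.pdf(x)
    return r * comb(n, r) * F**(r - 1) * (1 - F)**(n - r) * f

def renyi_entropy(r, n, alpha):
    """H_alpha of the r-th order statistic, by numerical integration."""
    integral, _ = integrate.quad(lambda x: order_stat_pdf(x, r, n)**alpha, 0, np.inf)
    return log(integral) / (1 - alpha)

print(renyi_entropy(r=3, n=5, alpha=2.0))   # Rényi entropy of order 2
```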
3.
The lavish 2012 release of Titanic in 3D set off a new wave of enthusiasm among film fans. Taking the Chinese-dialect versions of Titanic, which are equally popular with Chinese audiences, as a case study, this article draws on functionalist skopos theory to explore the purpose and nature of Chinese-dialect subtitle translation, and then analyzes and summarizes the process of rendering English-language film subtitles into Chinese-dialect versions.
4.
ABSTRACT

The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have an enormous impact on what treatments get to patients, when they get them, and how they are used. The statistical framework for supporting decisions in regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of "no treatment effect" are done routinely, and the p-value < 0.05 is often the determinant of what constitutes a "successful" trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some place part of the blame on deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared "unsuccessful" due to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process for its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a "prior" for Phase 3 trials so that synthesized evidence across trials can be utilized to compute probability statements that are valuable for understanding the magnitude of the treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
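A minimal, hypothetical sketch of the kind of calculation advocated here: earlier-trial evidence is summarized as a normal prior on the treatment effect, combined with the Phase 3 estimate by conjugate updating, and the posterior then yields direct probability statements about the effect size. The numbers and the normal-normal model are illustrative assumptions, not the article's method.

```python
# Sketch: normal-normal conjugate updating of a treatment effect, with the
# prior built from earlier-phase trials. All numbers are hypothetical.
import numpy as np
from scipy.stats import norm

prior_mean, prior_sd = 2.0, 1.5   # effect summarized from earlier trials
obs_effect, obs_se = 2.8, 1.0     # Phase 3 estimate and its standard error

# Posterior for the true effect delta (precision-weighted combination).
post_prec = 1 / prior_sd**2 + 1 / obs_se**2
post_sd = np.sqrt(1 / post_prec)
post_mean = (prior_mean / prior_sd**2 + obs_effect / obs_se**2) / post_prec

print(f"Pr(delta > 0 | data) = {1 - norm.cdf(0, post_mean, post_sd):.3f}")
print(f"Pr(delta > 2 | data) = {1 - norm.cdf(2, post_mean, post_sd):.3f}")
```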
5.
Open innovation and absorptive capacity are two concepts based on the idea that companies can leverage the knowledge generated externally to improve their innovation performance. The aim of this paper is to analyse the joint effect of open innovation and absorptive capacity on a firm's radical innovation. Open innovation is expressed in terms of external search breadth and depth strategies, and absorptive capacity is described by distinguishing between potential and realized absorptive capacity. In order to test our hypotheses, we carried out empirical research in firms operating in high-technology industries. The results indicate that internal routines and processes for absorbing external knowledge help explain radical innovation, as they show a significant effect of potential and realized absorptive capacity. Also, there is a moderating effect of absorptive capacity on open innovation. Specifically, potential absorptive capacity exerts a positive effect on the relationship between external search breadth and depth and radical innovation. Realized absorptive capacity moderates the influence of external search breadth. These findings confirm the complementary nature of absorptive capacity and open innovation search strategies on radical innovation.
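Moderation effects of this kind are typically tested with an interaction term in a regression; below is a generic, synthetic-data sketch of such a test. The variable names loosely mirror the constructs (search breadth, potential absorptive capacity, radical innovation) but this is not the authors' model or data.

```python
# Sketch: testing a moderating effect with an interaction term, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
breadth = rng.normal(size=n)                         # external search breadth
pacap = rng.normal(size=n)                           # potential absorptive capacity
radical = 0.3 * breadth + 0.4 * pacap + 0.25 * breadth * pacap + rng.normal(size=n)

df = pd.DataFrame({"radical": radical, "breadth": breadth, "pacap": pacap})
fit = smf.ols("radical ~ breadth * pacap", data=df).fit()   # main effects + interaction
print(fit.summary().tables[1])   # the breadth:pacap coefficient captures moderation
```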
6.
The generalized half-normal (GHN) distribution and progressive type-II censoring are considered in this article for studying statistical inference in constant-stress accelerated life testing. The EM algorithm is used to compute the maximum likelihood estimates. The Fisher information matrix is obtained via the missing information principle and used to construct asymptotic confidence intervals. Further, interval estimation is also discussed through bootstrap intervals. The Tierney and Kadane method, an importance sampling procedure, and the Metropolis-Hastings algorithm are used to compute Bayesian estimates. Furthermore, predictive estimates for censored data and the related prediction intervals are obtained. We consider three optimality criteria to determine the optimal stress level. A real data set is used to illustrate the usefulness of the GHN distribution as an alternative lifetime model to well-known distributions. Finally, a simulation study is provided with discussion.
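As a rough illustration of the estimation side only, here is a sketch of maximum likelihood for the GHN distribution on complete (uncensored) data by direct numerical optimization; the article's EM algorithm under progressive type-II censoring is more involved. The Cooray-Ananda parameterization of the GHN density is assumed, and the data are simulated.

```python
# Sketch: MLE for the generalized half-normal (GHN) distribution on complete data.
# Assumes f(x; a, t) = sqrt(2/pi) * (a/x) * (x/t)**a * exp(-0.5 * (x/t)**(2*a)), x > 0.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
alpha_true, theta_true = 1.8, 2.5
# If Z ~ N(0,1), then theta * |Z|**(1/alpha) follows GHN(alpha, theta).
x = theta_true * np.abs(rng.normal(size=300)) ** (1 / alpha_true)

def neg_loglik(params):
    a, t = params
    if a <= 0 or t <= 0:
        return np.inf
    z = (x / t) ** a
    return -np.sum(0.5 * np.log(2 / np.pi) + np.log(a) - np.log(x)
                   + a * np.log(x / t) - 0.5 * z**2)

res = minimize(neg_loglik, x0=[1.0, 1.0], method="Nelder-Mead")
print("MLE (alpha, theta):", res.x)
```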
7.
Keisuke Himoto, Risk Analysis, 2020, 40(6): 1124-1138
Post-earthquake fires are high-consequence events with extensive damage potential. They are also low-frequency events, so their nature remains underinvestigated. One difficulty in modeling post-earthquake ignition probabilities is reducing the model uncertainty attributed to the scarce source data. The data scarcity problem has commonly been addressed by indiscriminately pooling data collected from multiple earthquakes. However, this approach neglects the inter-earthquake heterogeneity in regional and seasonal characteristics, which is indispensable for risk assessment of future post-earthquake fires. Thus, the present study analyzes the post-earthquake ignition probabilities of five major earthquakes in Japan from 1995 to 2016 (1995 Kobe, 2003 Tokachi-oki, 2004 Niigata-Chuetsu, 2011 Tohoku, and 2016 Kumamoto earthquakes) by a hierarchical Bayesian approach. As ignition causes share a certain commonality across earthquakes, common prior distributions were assigned to the parameters, and samples were drawn from the target posterior distribution of the parameters by a Markov chain Monte Carlo simulation. The results of the hierarchical model were comparatively analyzed with those of pooled and independent models. Although the pooled and hierarchical models were both robust in comparison with the independent model, the pooled model underestimated the ignition probabilities of earthquakes with few data samples. Among the tested models, the hierarchical model was least affected by the source-to-source variability in the data. Accounting for the heterogeneity of post-earthquake ignitions across regions and seasons has long been desired in the modeling of post-earthquake ignition probabilities but has not been properly addressed by existing approaches. The presented hierarchical Bayesian approach provides a systematic and rational framework to effectively cope with this problem, which consequently enhances the statistical reliability and stability of estimating post-earthquake ignition probabilities.
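A compact sketch of the general structure (partial pooling of per-earthquake ignition rates), assuming the PyMC library, hypothetical counts, and a logistic-normal hierarchy rather than the article's exact model:

```python
# Sketch: hierarchical (partially pooled) model of ignition probabilities across
# earthquakes. Counts and hierarchy are hypothetical placeholders.
import numpy as np
import pymc as pm

n = np.array([120000, 45000, 30000, 250000, 60000])   # exposed buildings per earthquake
y = np.array([260, 18, 9, 330, 15])                    # observed ignitions per earthquake

with pm.Model() as hier:
    mu = pm.Normal("mu", mu=-8.0, sigma=2.0)           # shared mean log-odds of ignition
    sigma = pm.HalfNormal("sigma", sigma=1.0)          # between-earthquake spread
    theta = pm.Normal("theta", mu=mu, sigma=sigma, shape=len(y))
    p = pm.Deterministic("p", pm.math.invlogit(theta)) # per-earthquake ignition probability
    pm.Binomial("obs", n=n, p=p, observed=y)
    idata = pm.sample(2000, tune=1000, target_accept=0.9)
```

With this structure, earthquakes with few observations borrow strength from the others instead of being driven entirely by the pooled average, which is the behavior contrasted in the abstract.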
8.
Although increasingly appreciated for their explanatory power in developed societies, marital search models have yet to be widely applied to developing nations. This article evaluates the applicability of marital search models to marriage timing in Mexico. The analysis compares separate models of union formation for men and women that include individual and marriage market predictors. Results show that union formation is closely linked to the uncertainties surrounding the transition to adulthood and the availability of marriageable partners. Improvements in women's economic position do not diminish the attractiveness of marriage, as female independence arguments would suggest. Instead, they are a central force behind the stability of marriage behavior in Mexico. A central transformation identified in the analysis is the reduction in sex differences in age at marriage as women expand their education and labor force participation.
9.
Summary.  Alongside the development of meta-analysis as a tool for summarizing research literature, there is renewed interest in broader forms of quantitative synthesis that are aimed at combining evidence from different study designs or evidence on multiple parameters. These have been proposed under various headings: the confidence profile method, cross-design synthesis, hierarchical models, and generalized evidence synthesis. Models that are used in health technology assessment are also referred to as representing a synthesis of evidence in a mathematical structure. Here we review alternative approaches to statistical evidence synthesis and their implications for epidemiology and medical decision-making. The methods include hierarchical models, models informed by evidence on different functions of several parameters, and models incorporating both of these features. The need to check for consistency of evidence when using these powerful methods is emphasized. We develop a rationale for evidence synthesis that is based on Bayesian decision modelling and expected value of information theory, which stresses not only the need for a lack of bias in estimates of treatment effects but also for a lack of bias in assessments of uncertainty. The increasing reliance of governmental bodies like the UK National Institute for Clinical Excellence on complex evidence synthesis in decision modelling is discussed.
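The value-of-information idea mentioned here can be made concrete with a small Monte Carlo sketch of the expected value of perfect information (EVPI) for a two-treatment decision; the net-benefit distributions are hypothetical placeholders for a synthesized evidence base, not anything from the article.

```python
# Sketch: expected value of perfect information (EVPI) for choosing between two
# treatments, via Monte Carlo over uncertain net benefits. Numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
n_sim = 100_000
nb_a = rng.normal(loc=10_000, scale=3_000, size=n_sim)   # net benefit, treatment A
nb_b = rng.normal(loc=11_000, scale=5_000, size=n_sim)   # net benefit, treatment B

nb = np.column_stack([nb_a, nb_b])
value_current_info = nb.mean(axis=0).max()   # pick the treatment that is best on average
value_perfect_info = nb.max(axis=1).mean()   # pick the best treatment in every scenario
print("EVPI per patient:", value_perfect_info - value_current_info)
```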
10.
Herein, we propose a data-driven test that assesses the lack of fit of nonlinear regression models. The test is based on a comparison of local linear kernel and parametric fits; because local linear fitting is used, no boundary-corrected kernels are needed at the boundary. Under the parametric null model, the asymptotically optimal bandwidth can be used for bandwidth selection. This selection method leads to a data-driven test that has a limiting normal distribution under the null hypothesis and is consistent against any fixed alternative. The finite-sample properties of the proposed data-driven test are illustrated, and its power is compared with that of some existing tests via simulation studies. We illustrate the practicality of the proposed test using two data sets.  相似文献
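One simple variant of this idea, as a sketch only (not the authors' exact statistic or bandwidth selector): fit the parametric null model, fit a local linear smoother, take the mean squared distance between the two fits as the test statistic, and calibrate it with a wild bootstrap under the null. The bandwidth, data, and statistic are illustrative choices.

```python
# Sketch of a kernel-based lack-of-fit test: compare a parametric (here linear)
# fit with a local linear smooth and calibrate the distance by wild bootstrap.
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate of E[Y | X = x0] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.solve((X * w[:, None]).T @ X, (X * w[:, None]).T @ y)
    return beta[0]

def statistic(x, y, h):
    coef = np.polyfit(x, y, 1)                       # parametric null: straight line
    m_par = np.polyval(coef, x)
    m_np = np.array([local_linear(x0, x, y, h) for x0 in x])
    return np.mean((m_np - m_par) ** 2), m_par

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 1, 150))
y = 1 + 2 * x + 0.5 * np.sin(4 * np.pi * x) + rng.normal(scale=0.3, size=150)

h = 0.08
t_obs, m_par = statistic(x, y, h)
resid = y - m_par

# Wild bootstrap under the null: keep the parametric fit, flip residual signs.
t_boot = []
for _ in range(499):
    y_star = m_par + resid * rng.choice([-1.0, 1.0], size=len(y))
    t_boot.append(statistic(x, y_star, h)[0])
print("p-value:", np.mean(np.array(t_boot) >= t_obs))
```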