231.
This article investigates how information discrepancy between a drop-shipper and an online retailer affects the performance of the drop-shipping supply chain. Misalignment of inventory information between the two parties leads to failed order fulfillment and unmet demand, and thus incurs the associated penalties. We first analyze the penalties of ignoring such information discrepancy for both the drop-shipper and the online retailer. We then assess the impact of information discrepancy on both parties when the drop-shipper is aware of the discrepancy but cannot eliminate the errors. Numerical experiments indicate that both parties can achieve significant percentage cost reductions if the information discrepancy is eliminated, and the potential savings are especially substantial when the errors have large variability. Furthermore, we observe that the online retailer is more vulnerable to information discrepancy than the drop-shipper, and that the drop-shipper suffers more from the online retailer's underestimation of the physical inventory level than from its overestimation. Moreover, even when eliminating the errors is not possible, both parties can still benefit from taking the possibility of errors into account in their decision making.
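The finding that penalties grow with the variability of the record errors can be illustrated with a minimal Monte Carlo sketch. The cost structure, the Poisson demand and stock levels, and the penalty parameters below are illustrative assumptions, not the model analyzed in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the article): unit penalties and demand/stock levels.
UNFILLED_PENALTY = 5.0    # penalty per accepted unit the drop-shipper cannot actually fill
REJECTED_PENALTY = 1.0    # penalty per unit of demand turned away although stock existed
N_PERIODS = 100_000

def expected_penalty(error_sd: float) -> float:
    """Average per-period penalty when the retailer's inventory record differs
    from the drop-shipper's physical stock by a zero-mean random error."""
    physical = rng.poisson(20, N_PERIODS)                              # true stock
    record = physical + np.round(rng.normal(0, error_sd, N_PERIODS))   # retailer's record
    demand = rng.poisson(18, N_PERIODS)

    accepted = np.minimum(demand, np.maximum(record, 0))   # orders accepted per the record
    filled = np.minimum(accepted, physical)                # orders the drop-shipper can fill
    unfilled = accepted - filled                           # fulfillment failures
    turned_away = np.maximum(np.minimum(demand, physical) - accepted, 0)  # lost fillable sales

    return float(np.mean(UNFILLED_PENALTY * unfilled + REJECTED_PENALTY * turned_away))

for sd in (0, 2, 5, 10):
    print(f"record error sd = {sd:>2}: expected penalty per period ≈ {expected_penalty(sd):.2f}")
```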
232.
One objective of personalized medicine is to base treatment decisions on a biomarker measurement, so it is often of interest to evaluate how well a biomarker predicts the response to a treatment. A popular approach is to fit a regression model and test for an interaction between treatment assignment and the biomarker. However, an interaction is necessary but not sufficient for a biomarker to be predictive. Hence, the use of the marker-by-treatment predictiveness curve has been recommended. Beyond evaluating how well a single continuous biomarker predicts treatment response, the curve can also help to define an optimal threshold. It displays the risk of a binary outcome as a function of the quantiles of the biomarker, for each treatment group. Methods that assume a binary outcome, or that rely on a proportional hazards model for a time-to-event outcome, have been proposed to estimate this curve. In this work, we propose extensions for censored data. They rely on a time-dependent logistic model, which we propose to estimate via inverse probability of censoring weighting. We present simulation results and three applications to prostate cancer, liver cirrhosis, and lung cancer data. They suggest that a large number of events must be observed to define a threshold with sufficient accuracy for clinical usefulness. They also illustrate that when the treatment effect varies with the time horizon that defines the outcome, the optimal threshold depends on this time horizon as well.
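A rough sketch of how an IPCW-weighted, time-dependent logistic estimate of the predictiveness curve could be computed is given below. The helper functions, the weighting scheme, and the simulated data are assumptions for illustration, not the authors' estimator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def km_censoring(times, events, eval_times):
    """Kaplan-Meier estimate of the censoring survival function G(t),
    treating censoring (events == 0) as the event of interest."""
    order = np.argsort(times)
    t, cens = times[order], (events[order] == 0).astype(int)
    n, surv = len(t), 1.0
    step_t, step_s = [0.0], [1.0]
    for i in range(n):
        if cens[i]:
            surv *= 1.0 - 1.0 / (n - i)       # n - i subjects still at risk
        step_t.append(t[i])
        step_s.append(surv)
    idx = np.searchsorted(np.array(step_t), eval_times, side="right") - 1
    return np.array(step_s)[idx]

def predictiveness_curves(marker, treatment, times, events, horizon, grid):
    """IPCW-weighted logistic sketch of P(T <= horizon | marker) in each arm,
    evaluated at marker quantiles (the x-axis of the predictiveness curve)."""
    y = ((times <= horizon) & (events == 1)).astype(int)         # event by the horizon
    known = y.astype(bool) | (times > horizon)                   # status at horizon known
    g = km_censoring(times, events, np.minimum(times, horizon))  # censoring survival
    w = np.where(known, 1.0 / np.clip(g, 1e-8, None), 0.0)       # IPC weights

    curves = {}
    for arm in (0, 1):
        m = treatment == arm
        model = LogisticRegression().fit(marker[m, None], y[m], sample_weight=w[m])
        q = np.quantile(marker[m], grid)
        curves[arm] = model.predict_proba(q[:, None])[:, 1]
    return curves

# Toy usage on simulated data
rng = np.random.default_rng(0)
n = 500
marker = rng.normal(size=n)
treatment = rng.integers(0, 2, n)
event_time = rng.exponential(np.exp(-0.5 * marker - 0.7 * treatment * (marker > 0)))
cens_time = rng.exponential(2.0, n)
times, events = np.minimum(event_time, cens_time), (event_time <= cens_time).astype(int)
curves = predictiveness_curves(marker, treatment, times, events,
                               horizon=1.0, grid=np.linspace(0.05, 0.95, 19))
```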
233.
Under non-additive probabilities, cluster points of the empirical average have been shown to fall, quasi-surely, into the interval constructed from either the lower and upper expectations or the lower and upper Choquet expectations. In this paper, based on the previously initiated notion of independence, we obtain a different Marcinkiewicz-Zygmund type strong law of large numbers. The Kolmogorov type strong law of large numbers then follows from it directly, stating that the closed interval between the lower and upper expectations is the smallest closed interval that covers the cluster points of the empirical average quasi-surely.
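In the notation commonly used for sublinear expectations (the symbols below are assumptions, since the abstract does not fix notation), the Kolmogorov-type statement can be written roughly as follows, with the lower and upper expectations of a single observation denoted by the two constants on the interval endpoints:

```latex
% Sketch of the Kolmogorov-type statement; the notation is assumed, not taken from the paper:
% \mathcal{E} and \mathbb{E} are the lower and upper expectations of X_1,
% C(\{S_n/n\}) is the set of cluster points of the empirical averages,
% and "q.s." abbreviates "quasi-surely".
\underline{\mu} := \mathcal{E}[X_1], \qquad
\overline{\mu}  := \mathbb{E}[X_1],  \qquad
S_n := \sum_{i=1}^{n} X_i,
\\[6pt]
C\!\left(\left\{ \tfrac{S_n}{n} \right\}_{n \ge 1}\right)
\subseteq [\,\underline{\mu},\, \overline{\mu}\,] \quad \text{q.s.},
\qquad \text{and } [\,\underline{\mu},\, \overline{\mu}\,]
\text{ is the smallest closed interval with this property.}
```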
234.
The possibility-degree model for ranking interval numbers is one of the fundamental problems that researchers have continued to explore. An interval number characterizes the range of values an attribute may take, and previous work has generally assumed that values within the interval follow a uniform distribution. This paper extends the uniform distribution to general distributions and, using methods from probability theory, constructs a new possibility-degree model for ranking interval numbers. On this basis, it revises the earlier definition of two interval numbers being identical (全等), proposes the concept of interval numbers being equal in form (形等), and further revises the reflexivity condition of the possibility degree and the overall ranking method for interval numbers. The theory is then applied to multi-attribute decision-making problems: a basic decision procedure is given, and computations on an example decision problem demonstrate the feasibility and soundness of the new theory and method, which shows good potential for wider application.
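A generic probabilistic reading of such a possibility degree, with the value in each interval following an arbitrary (not necessarily uniform) distribution, can be sketched by Monte Carlo. The distributions chosen below are illustrative; the closed-form model constructed in the paper is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def possibility_degree(sample_a, sample_b, n=200_000):
    """Monte Carlo estimate of P(A >= B) for independent random values A and B
    drawn from arbitrary distributions supported on their respective intervals.
    This is a generic probabilistic reading, not the paper's specific model."""
    a = sample_a(n)
    b = sample_b(n)
    return float(np.mean(a >= b))

# Example: A ~ Uniform[2, 6], B ~ Triangular on [3, 5] with mode 4.
p = possibility_degree(
    lambda n: rng.uniform(2, 6, n),
    lambda n: rng.triangular(3, 4, 5, n),
)
print(f"P(A >= B) ≈ {p:.3f}")
```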
235.
Persuasion decisions made by agents endowed with emotion are more rational, yet existing research on this topic is neither comprehensive nor deep enough. To address this, the paper first draws on formal logic to define agent-based emotional persuasion and its decision process, and divides that process into four stages: evaluating emotional persuasion acts, updating the emotional persuasion state, adjusting emotional persuasion goals, and generating emotional persuasion acts. Next, for these four stages, it combines the OCC emotion model and the PAD mood model with multi-attribute utility theory, introduces an emotion-decay factor and an emotion-appraisal factor, defines an emotion-triggering function for the agent, and establishes a mapping between eight basic agent emotions and persuasion goals. The agent's emotional persuasion acts are classified into four types, reward, justification, threat, and counter-argument, and a corresponding decision model is built for each, yielding a more complete and reasonable decision-process model for agent-based emotional persuasion. Finally, a numerical example demonstrates the reasonableness and effectiveness of the model.
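The decay-and-appraisal update and the emotion-to-act mapping can be sketched as follows. The decay constant, the four emotions shown, and the mapping itself are illustrative assumptions, not the paper's OCC/PAD-based definitions.

```python
from dataclasses import dataclass, field

@dataclass
class EmotionState:
    """Minimal agent emotion state; the numbers and mapping are illustrative only."""
    intensities: dict = field(default_factory=lambda: {"joy": 0.0, "anger": 0.0,
                                                       "hope": 0.0, "fear": 0.0})
    decay: float = 0.8  # emotion-decay factor applied each round

    def update(self, appraisals: dict) -> None:
        """Decay the previous intensities, then add the appraisal of the opponent's
        latest persuasion act (the emotion-triggering step)."""
        for k in self.intensities:
            self.intensities[k] = self.decay * self.intensities[k] + appraisals.get(k, 0.0)

    def choose_act(self) -> str:
        """Map the dominant emotion to one of the four persuasion act types."""
        act_for = {"joy": "reward", "anger": "threat",
                   "hope": "justification", "fear": "counter-argument"}
        dominant = max(self.intensities, key=self.intensities.get)
        return act_for[dominant]

state = EmotionState()
state.update({"anger": 0.6, "hope": 0.3})   # appraisal of the opponent's last move
print(state.choose_act())                    # -> "threat"
```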
236.
Previous research has held that the sunk cost effect arises in connection with loss aversion and regret aversion, but studies of their interrelationship have rarely considered the different influences of monetary and non-monetary sunk costs. Taking securities regulators and securities-market investors as participants, this paper uses scenario-based questionnaires to test the associations among loss aversion, regret aversion, and the sunk cost effect. The empirical results show that, compared with securities-market investors, securities regulators exhibit significantly lower loss aversion, while the two groups show no significant difference in regret aversion or in the sunk cost effect. The regulators' data indicate a significant correlation between regret aversion and the sunk cost effect but no significant correlation between loss aversion and the sunk cost effect; by contrast, the investors' data indicate a significant correlation between loss aversion and the sunk cost effect but no significant correlation between regret aversion and the sunk cost effect.
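For illustration only, the kind of group comparison and within-group correlation tests reported above might be run on questionnaire scores as in the sketch below; the variable names and simulated data are assumptions, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical questionnaire scores for the two participant groups.
regulators = {"loss_aversion": rng.normal(2.0, 0.5, 60),
              "regret_aversion": rng.normal(3.0, 0.5, 60),
              "sunk_cost": rng.normal(3.0, 0.5, 60)}
investors = {"loss_aversion": rng.normal(2.6, 0.5, 80),
             "regret_aversion": rng.normal(3.0, 0.5, 80),
             "sunk_cost": rng.normal(3.1, 0.5, 80)}

# Between-group difference in loss aversion (two-sample t-test).
t, p = stats.ttest_ind(regulators["loss_aversion"], investors["loss_aversion"])
print(f"loss aversion, regulators vs investors: t={t:.2f}, p={p:.3f}")

# Within-group correlation between regret aversion and the sunk cost effect.
r, p = stats.pearsonr(regulators["regret_aversion"], regulators["sunk_cost"])
print(f"regulators, regret aversion vs sunk cost effect: r={r:.2f}, p={p:.3f}")
```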
237.
Li Zhenjun (李振钧) was born into an official family of Taihu, Anhui, and took first place (zhuangyuan) in the palace examination in the ninth year of the Daoguang reign of the Qing dynasty. His poetry collection Weideng Tingye Lu Shicao (《味灯听叶庐诗草》) contains 59 poems inscribed on paintings. The flourishing of painting-inscription poetry in the Qing dynasty, his love of calligraphy and painting, and his wide acquaintance with painters prompted him to write a large number of such poems. These poems reflect his emphasis on temperament, his frank and unrestrained manner, his pursuit of personal freedom, and his love of art, and they offer a new angle from which to view him. Poetry, calligraphy, and painting were where he settled his anguished spirit and the means by which he sought to save himself. His life reflects the inner journey of a soul struggling between the pressures of reality and the indulgence of art. In the end, he gave up his true self for worldly fame and rank, could not endure the intrigues of officialdom, and died of melancholy at the age of 45.
238.
Against the backdrop of intensifying risk in capital markets, this paper takes "company operating performance and stock market performance improving in step" as the core element of robust investing and, building on two key elements, interval data representation and accounting information measurement, studies multi-criteria decision modeling for robust stock value investment. Oriented toward the goal of robust investment decisions, it proposes an ordering mechanism that satisfies three properties, robustness, locality, and globality; establishes a systematic multi-criteria decision method along the main line of key feature selection, feature evaluation, and total-ordering modeling; and thereby constructs a research framework for "robust stock value investment decision making". A minimal ordering sketch follows the abstract.
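As a very rough illustration of total-ordering interval-valued indicators (the paper's actual ordering mechanism built on robustness, locality, and globality is not specified in the abstract), one could rank by interval midpoint and break ties by interval width:

```python
from typing import NamedTuple

class Interval(NamedTuple):
    low: float
    high: float
    @property
    def mid(self):   return (self.low + self.high) / 2
    @property
    def width(self): return self.high - self.low

# Hypothetical interval-valued scores, e.g. an aggregated accounting indicator per stock.
stocks = {"A": Interval(0.60, 0.80), "B": Interval(0.65, 0.95), "C": Interval(0.55, 0.65)}

# Higher midpoint is better; ties broken in favour of the narrower (more robust) interval.
ranking = sorted(stocks, key=lambda s: (-stocks[s].mid, stocks[s].width))
print(ranking)   # -> ['B', 'A', 'C']
```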
239.
Economic logic is an interdisciplinary field at the intersection of economics and logic, and its emergence was inevitable; this can be argued from several angles, including the basic assumptions of economics, the research methods of economics, the certainty of economic inquiry, and the development of logic. From the perspective of Bayesian decision theory, it is the science of rational decision making and strategic reasoning in economic activity. Driven by both theoretical and practical forces, research on economic logic has begun to shift from the paradigm of formal logic toward the paradigm of scientific logic.
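The Bayesian decision-theoretic reading, rational choice as posterior expected-utility maximization, can be made concrete with a textbook sketch; the states, signal, and payoffs below are purely illustrative.

```python
# Update a prior with evidence, then choose the action with the highest posterior
# expected utility. All numbers are illustrative.
states = ("boom", "bust")
prior = {"boom": 0.5, "bust": 0.5}
likelihood = {"good_signal": {"boom": 0.8, "bust": 0.3}}   # P(signal | state)
utility = {"invest": {"boom": 100, "bust": -80},
           "hold":   {"boom": 10,  "bust": 10}}

def posterior(signal):
    unnorm = {s: prior[s] * likelihood[signal][s] for s in states}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

def best_action(signal):
    post = posterior(signal)
    eu = {a: sum(post[s] * utility[a][s] for s in states) for a in utility}
    return max(eu, key=eu.get), eu

print(best_action("good_signal"))
# P(boom | good_signal) = 0.4 / 0.55 ≈ 0.727, so EU(invest) ≈ 50.9 > EU(hold) = 10.
```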
240.
Single-cohort stage-frequency data are considered, where the stage reached by each individual is assessed through destructive sampling. For this type of data, when all hazard rates are assumed constant and equal, Laplace transform methods have previously been applied to estimate the parameters of each stage-duration distribution and the overall hazard rates. If the hazard rates are not all equal, estimating stage-duration parameters by Laplace transform methods becomes complex. In this paper, two new models are proposed for estimating stage-dependent maturation parameters by Laplace transform methods when non-trivial hazard rates apply. The first model allows hazard rates that are constant within each stage but vary between stages; the second allows time-dependent hazard rates within stages. The paper also introduces a method for estimating the hazard rate in each stage under the stage-wise constant hazard rates model. These methods could be used in specific types of laboratory studies, but the main motivation is to explore relationships between stage maturation parameters that, in future work, could be exploited in Bayesian approaches. The application of the methodology under each model is evaluated on simulated data in order to illustrate the structure of the models.
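A simulation sketch of such data under the stage-wise constant hazard model is given below; the stage rates, sampling times, and cohort size are illustrative assumptions.

```python
# Stage durations are exponential with stage-specific rates; at each sampling time a
# (destructively sampled) batch of individuals is classified by the stage it has reached.
import numpy as np

rng = np.random.default_rng(3)
stage_rates = [0.9, 0.5, 0.3]          # maturation (hazard) rate in stages 1..3
sample_times = [1.0, 2.0, 4.0, 8.0]
n_per_sample = 200

def stage_at(t, rates):
    """Stage occupied at time t by one individual (last stage is absorbing)."""
    elapsed = 0.0
    for stage, rate in enumerate(rates, start=1):
        elapsed += rng.exponential(1.0 / rate)
        if elapsed > t:
            return stage
    return len(rates) + 1

for t in sample_times:
    stages = [stage_at(t, stage_rates) for _ in range(n_per_sample)]
    freq = np.bincount(stages, minlength=len(stage_rates) + 2)[1:]
    print(f"t = {t:>4}: stage frequencies {freq.tolist()}")
```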