151.
A Study of the Price Discovery Ability of the SME Board ETF (Total citations: 1; self-citations: 0; citations by others: 1)
Xiao Zhuo, Guo Yanfeng. 《管理学报》 (Chinese Journal of Management), 2010, 7(1): 118-122
Using intraday 5-minute high-frequency trading data, this study examines price discovery between the SME Board ETF and its underlying index through an error correction model, variance decomposition, and related techniques, and then explores the process of information transmission. The empirical results show that the SME Board ETF price and the SME Board Price (P) Index are cointegrated and reach a long-run equilibrium; in price discovery, the SME Board P Index leads the SME Board ETF; shocks to the index from new information are larger than shocks to the ETF price, the index explains more of the forecast error variance than the ETF price, and the index is therefore the leading indicator in information transmission. The efficiency of China's ETF market still needs improvement.
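As a sketch of the workflow this abstract describes, the snippet below runs a Johansen cointegration test and fits a vector error-correction model with statsmodels, then inspects the adjustment coefficients that indicate which series leads price discovery. The data are synthetic stand-ins for the 5-minute ETF and index series; all variable names are illustrative.

```python
# Minimal sketch of the price-discovery workflow described above:
# test for cointegration between ETF and index prices, then fit a
# vector error-correction model (VECM). Data here are synthetic;
# the study itself uses 5-minute SME Board ETF and index prices.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(0)
n = 2000
common = np.cumsum(rng.normal(0, 0.1, n))     # shared stochastic trend
index_p = common + rng.normal(0, 0.02, n)     # index log-price
etf_p = common + rng.normal(0, 0.05, n)       # ETF log-price (noisier)
prices = pd.DataFrame({"index": index_p, "etf": etf_p})

# Johansen test: trace statistics vs. 95% critical values
jres = coint_johansen(prices, det_order=0, k_ar_diff=2)
print("trace stats:", jres.lr1, "\n95% critical:", jres.cvt[:, 1])

# Fit VECM with cointegration rank 1; the error-correction (alpha)
# loadings show which series adjusts toward equilibrium -- the one
# that adjusts less is the price-discovery leader.
vecm = VECM(prices, k_ar_diff=2, coint_rank=1).fit()
print("alpha (adjustment coefficients):\n", vecm.alpha)
```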
152.
Under fuzzy uncertainty, a fuzzy AR time series model in which security prices are trapezoidal fuzzy numbers is used to forecast security prices and describe market trends. The semi-absolute-deviation risk constraint is relaxed into a fuzzy soft constraint, and within the mean-semi-absolute-deviation framework a fuzzy portfolio program is constructed whose objective follows the possibility distribution of a trapezoidal fuzzy number and whose risk constraint is a fuzzy soft constraint; its efficient frontier is then derived. An empirical test on 15 constituent stocks of the SSE 50 index shows that the program offers investors a high level of investment satisfaction; that, by incorporating market trends, it is well targeted for decision making; that the risk tolerance level reflects investors' own assessments and plays different roles under different market conditions; and that the program attains a higher efficient frontier than the mean-semi-absolute-deviation model and is therefore better targeted for investment.
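The crisp core of this framework, mean-semi-absolute-deviation portfolio selection, can be cast as a linear program; a minimal sketch follows, omitting the paper's fuzzy AR forecasts and fuzzy relaxed constraints and using synthetic returns in place of the 15 SSE 50 constituents.

```python
# Sketch of the crisp mean / semi-absolute-deviation portfolio core
# (the paper's fuzzy extensions are omitted). Returns are synthetic.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
T, n = 250, 15
r = rng.normal(0.0005, 0.01, (T, n))   # T periods x n assets, daily returns
mu = r.mean(axis=0)
target = 0.0004                        # required expected portfolio return

# Decision vector [x_1..x_n, d_1..d_T]; minimize average downside
# deviation (1/T) * sum d_t with d_t >= -(r_t - mu) @ x and d_t >= 0.
c = np.concatenate([np.zeros(n), np.ones(T) / T])
A_ub = np.zeros((T + 1, n + T))
A_ub[:T, :n] = -(r - mu)               # -(r_t - mu) @ x - d_t <= 0
A_ub[:T, n:] = -np.eye(T)
A_ub[T, :n] = -mu                      # mu @ x >= target
b_ub = np.zeros(T + 1)
b_ub[T] = -target
A_eq = np.zeros((1, n + T))
A_eq[0, :n] = 1.0                      # fully invested: sum x = 1

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
print("weights:", np.round(res.x[:n], 3), " risk:", res.fun)
```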
153.
Reconstructing the Logical Foundations of Generic Competitive Strategies (Total citations: 2; self-citations: 0; citations by others: 2)
Building on an appraisal of major scientific flaws in Porter's competitive strategy theory, his generic strategy theory, and the related empirical research, this paper attempts to establish the new logical and conceptual foundation needed to develop competitive strategy theory in the new industrial environment. It defines the firm's industry, the reference product, and the relative premium, as well as the cost leadership and differentiation strategies; revises the classical conditions under which the two generic strategies are compatible; delineates the low-cost differentiation strategy; and advances several hypotheses and corollaries for analysis and empirical testing.
154.
A Deeper Study of Knowledge-Worker Turnover: A Psychological Contract Perspective (Total citations: 1; self-citations: 0; citations by others: 1)
Because of their special status and role, knowledge workers have long been a principal object of competition among firms, and their turnover rate has remained high. Knowledge-worker turnover is largely the inevitable result of psychological contract violation, so rebuilding and optimizing the psychological contract is crucial. Specifically, across the four stages of recruitment, onboarding, internal transfer, and departure, firms should adopt measures such as realistic job previews, dynamic employee records, stronger communication, timely compensation, and exit interviews to prevent the loss of knowledge workers.
155.
Damage models for natural hazards are used for decision making on reducing and transferring risk. The damage estimates from these models depend on many variables and their complex, sometimes nonlinear, relationships with the damage. In recent years, data-driven modeling techniques have been used to capture those relationships. The data available to build such models are often limited, so in practice it is usually necessary to transfer models to a different context. In this article, we show that this implies the samples used to build the model are often not fully representative of the situation to which the model is applied, which leads to a "sample selection bias." We enhance data-driven damage models by applying methods not previously applied to damage modeling to correct for this bias before the machine learning (ML) models are trained. We demonstrate this with case studies on flooding in Europe and typhoon wind damage in the Philippines. Two sample selection bias correction methods from the ML literature are applied, and one of them is also adapted to our problem. These three methods are combined with stochastic generation of synthetic damage data. For both case studies, the sample selection bias correction techniques reduce model errors; for the mean bias error the reduction can exceed 30%. The novel combination with stochastic data generation appears to enhance these techniques. This shows that sample selection bias correction methods are beneficial for damage model transfer.
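One standard correction of this kind is importance weighting under covariate shift, in which a domain classifier estimates density ratios that re-weight the training samples; the sketch below illustrates it with scikit-learn on synthetic data. The paper's specific correction methods and the flood/typhoon damage data are not reproduced, and all feature names are illustrative.

```python
# Hedged sketch of sample-selection-bias correction via importance
# weighting: a classifier separating source from target domains
# yields p(target|x)/p(source|x) density-ratio weights, which
# re-weight the training data before the damage model is fit.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
Xs = rng.normal(0.0, 1.0, (1000, 3))          # source (training) features
ys = Xs @ np.array([1.0, -0.5, 0.2]) + rng.normal(0, 0.1, 1000)
Xt = rng.normal(0.5, 1.2, (800, 3))           # shifted target features

# Domain classifier: source labeled 0, target labeled 1
dom = LogisticRegression().fit(np.vstack([Xs, Xt]),
                               np.r_[np.zeros(1000), np.ones(800)])
p = dom.predict_proba(Xs)[:, 1]
w = p / (1.0 - p)                             # density-ratio weights
w *= len(w) / w.sum()                         # normalize to mean 1

# Train the damage model with weights so it matches the target domain
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(Xs, ys, sample_weight=w)
```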
156.
We propose a novel methodology for evaluating the accuracy of numerical solutions to dynamic economic models. It consists of constructing a lower bound on the size of approximation errors. A small lower bound on errors is a necessary condition for accuracy: if a lower error bound is unacceptably large, then the actual approximation errors are even larger, and hence the approximation is inaccurate. Our lower-bound error analysis is complementary to the conventional upper-bound (worst-case) error analysis, which provides a sufficient condition for accuracy. As an illustration of our methodology, we assess the accuracy of first- and second-order perturbation solutions for two stylized models: a neoclassical growth model and a new Keynesian model. The errors are small for the former model but unacceptably large for the latter under some empirically relevant parameterizations.
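A minimal illustration of how a lower bound on approximation errors can arise, not necessarily the authors' construction: by the triangle inequality, if two approximations of the same policy function differ by d at some state, at least one of them has a true error of at least d/2. The perturbation coefficients below are hypothetical.

```python
# Illustrative lower-bound idea: comparing two approximations gives a
# bound on the larger of their (unknown) true errors. Toy first- and
# second-order expansions of a capital policy around steady state k_ss.
import numpy as np

k_ss = 1.0
k = np.linspace(0.5, 1.5, 201)                          # capital grid
g1 = k_ss + 0.9 * (k - k_ss)                            # first-order policy
g2 = k_ss + 0.9 * (k - k_ss) - 0.3 * (k - k_ss) ** 2    # second-order policy

d = np.abs(g1 - g2)
lower_bound = d.max() / 2.0        # at least one approximation errs >= d/2
print(f"max difference {d.max():.4f} -> error lower bound {lower_bound:.4f}")
```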
157.
As flood risks grow worldwide, a well-designed insurance program engaging various stakeholders becomes a vital instrument in flood risk management. The main challenge concerns the applicability of standard approaches to calculating insurance premiums for rare catastrophic losses. This article focuses on the design of a flood-loss-sharing program involving private insurance based on location-specific exposures. The analysis is guided by an integrated catastrophe risk management (ICRM) model consisting of a GIS-based flood model and a stochastic optimization procedure over location-specific risk exposures. To achieve stability and robustness of the program across floods of various recurrence intervals, the ICRM's stochastic optimization relies on quantile-related risk functions of systemic insolvency that account for stakeholders' overpayments and underpayments. Two alternative ways of calculating insurance premiums are compared: the robust premiums derived with the ICRM and the traditional average annual loss (AAL) approach. The applicability of the proposed model is illustrated in a case study of a Rotterdam area outside the main flood protection system in the Netherlands. Our numerical experiments demonstrate essential advantages of the robust premiums, namely, that they (1) guarantee the program's solvency under all relevant flood scenarios rather than one average event; (2) establish a tradeoff between the security of the program and the welfare of locations; and (3) decrease the need for other risk transfer and risk reduction measures.
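The contrast the abstract draws between the traditional average-annual-loss premium and a quantile-driven robust premium can be sketched as follows; the loss distribution and solvency level are illustrative, not taken from the Rotterdam case study.

```python
# Minimal sketch: AAL premium vs. a quantile-based "robust" premium
# that keeps the pooled fund solvent in 99% of simulated scenarios.
import numpy as np

rng = np.random.default_rng(3)
n_scen, n_loc = 10_000, 50
# Heavy-tailed per-location annual flood losses (synthetic)
losses = rng.pareto(2.5, (n_scen, n_loc)) * 10.0
portfolio = losses.sum(axis=1)                 # insurer's annual payout

aal_premium = portfolio.mean() / n_loc         # traditional AAL per location
robust_premium = np.quantile(portfolio, 0.99) / n_loc   # 99% solvency level

insolvency_aal = np.mean(portfolio > aal_premium * n_loc)
insolvency_rob = np.mean(portfolio > robust_premium * n_loc)
print(f"AAL premium {aal_premium:.1f}: insolvent in {insolvency_aal:.1%} of scenarios")
print(f"robust premium {robust_premium:.1f}: insolvent in {insolvency_rob:.1%} of scenarios")
```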
158.
Government supervision of project quality is an international practice. Its basic form is supervision, by an entrusted competent government department, of the quality behavior of the main engineering parties and its results; in essence, it is a dual principal-agent process. Frequent engineering-quality accidents reflect, to some extent, failures of government law enforcement supervision of engineering quality, rooted in supervisors' lack of endogenous motivation. An incentive-coordination mechanism for government supervision based on multi-level benefit distribution is therefore worth exploring. Considering the multi-level management system formed by government departments, government quality supervision organizations, and quality supervision teams (or groups), a benefit-distribution function among the parties is constructed and a multi-level incentive-coordination game model of government supervision of engineering quality is built. Solving backwards, the first-stage cooperative game yields the reward coefficient, and the second-stage non-cooperative game yields the optimal degree of coordination effort. The results show that the coordination effort of government engineering-quality supervisors depends on coordination costs and is independent of fixed costs, and that the benefit-distribution coefficient depends not only on the supervisors' own effort but also on the efficiency of the other parties' efforts. Quality supervisors should therefore attend to coordination with the other parties while strengthening their own management capabilities, so as to improve the overall performance of government supervision of project quality. The implied strategy is that the government quality supervision organization should staff supervisory teams appropriately, improve coordination efficiency, and reduce supervision-coordination costs to maximize the value of its own incentives, while supervision teams (or groups) should build partnerships that improve coordination efficiency to maximize theirs. The model and conclusions on the incentive-coordination mechanism based on multi-level benefit distribution can provide theoretical support and practical reference for the market governance and supervision of general public goods.
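Since the paper's payoff functions (including its reward-coefficient formula) are not reproduced here, the following is a purely illustrative sketch of the second-stage idea: a two-player effort game with benefit-distribution shares and quadratic coordination costs, solved for a Nash equilibrium by best-response iteration. All parameters are hypothetical.

```python
# Illustrative second-stage non-cooperative game: each party chooses
# coordination effort given its benefit share s_i and cost c_i; the
# Nash equilibrium is found by iterating best responses.
import numpy as np
from scipy.optimize import minimize_scalar

s = np.array([0.6, 0.4])        # benefit-distribution coefficients
c = np.array([0.8, 1.2])        # coordination cost parameters

def payoff(i, e, other):
    total = 10.0 * (e + other) - 0.5 * (e + other) ** 2   # joint output
    return s[i] * total - 0.5 * c[i] * e ** 2             # share minus cost

e = np.array([1.0, 1.0])
for _ in range(100):            # best-response iteration to equilibrium
    for i in range(2):
        res = minimize_scalar(lambda x: -payoff(i, x, e[1 - i]),
                              bounds=(0, 10), method="bounded")
        e[i] = res.x
print("equilibrium coordination efforts:", np.round(e, 3))
```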
159.
Addressing the debt-to-equity swap issue of common concern to China's government, enterprises, and banks and other financial institutions, this paper builds a partial debt-equity swap model based on debt renegotiation, computes the prices of corporate securities, examines the effects of debt-to-equity swaps on firm value, bankruptcy probability, bankruptcy loss costs, and capital structure, and gives sufficient conditions under which banks and other creditors are willing to swap debt for equity. The results show that, under loans with an ex ante bankruptcy liquidation covenant, an ex post full swap always raises the firm's equity value but does not necessarily raise the value of its debt; only when creditors' bargaining power satisfies certain conditions will they choose an ex post swap, achieving a Pareto improvement and raising social welfare. Second, within a certain range of shareholders' bargaining power, a partial swap can raise firm value, and the optimal proportion of coupon converted increases with the riskiness of the firm's assets. Third, a debt-to-equity swap reduces the firm's bankruptcy risk and bankruptcy loss costs but also raises the bond's risk premium. Finally, as shareholders' bargaining power strengthens, the optimal negotiated swap ratio and leverage both fall while the bond risk premium rises. These results offer theoretical and practical guidance for how China's government, enterprises, and banks implement debt-to-equity swaps.
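A standard building block of structural models of this type is the first-passage probability of the firm's asset value (a geometric Brownian motion) hitting a bankruptcy barrier; the sketch below uses it to illustrate how lowering the barrier, as a partial debt-to-equity swap does, reduces default risk. The parameters are illustrative and the paper's bargaining machinery is not reproduced.

```python
# First-passage default probability for dV/V = mu dt + sigma dW:
# P(min_{t<=T} V_t <= B) with barrier B < V0, via the reflection formula.
from math import exp, log, sqrt
from statistics import NormalDist

def first_passage_prob(V0, B, mu, sigma, T):
    """Probability that asset value hits barrier B within horizon T."""
    nu = mu - 0.5 * sigma ** 2
    b = log(B / V0)
    st = sigma * sqrt(T)
    Phi = NormalDist().cdf
    return Phi((b - nu * T) / st) + exp(2 * nu * b / sigma ** 2) * Phi((b + nu * T) / st)

# Converting part of the debt lowers the default barrier, cutting risk
print(first_passage_prob(100, 60, 0.03, 0.25, 5))   # before swap
print(first_passage_prob(100, 45, 0.03, 0.25, 5))   # after partial swap
```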
160.
Conventional spirometry produces measurement error by using repeatability criteria (RC) to discard acceptable data and by terminating tests early when the RC are met. These practices also implicitly assume that there is no variation across maneuvers within each test. This has implications for air pollution regulations that rely on pulmonary function tests to determine adverse effects or set standards. We perform a Monte Carlo simulation of 20,902 tests of forced expiratory volume in 1 second (FEV1), each with eight maneuvers, for an individual with empirically obtained, plausibly normal pulmonary function. Default coefficients of variation for intertest and intratest variability (3% and 6%, respectively) are employed. Measurement error is defined as the difference between results from the conventional protocol and an unconstrained, eight-maneuver alternative. In the default model, average measurement error is shown to be about 5%. The minimum difference necessary for statistical significance at p < 0.05 in a before/after comparison is shown to be 16%. Meanwhile, the U.S. Environmental Protection Agency has deemed single-digit percentage decrements in FEV1 sufficient to justify more stringent national ambient air quality standards. Sensitivity analysis reveals that results are insensitive to intertest variability but highly sensitive to intratest variability: halving the latter to 3% reduces measurement error by 55%, while increasing it to 9% or 12% increases measurement error by 65% or 125%, respectively. Within-day FEV1 differences of 5% or less among normal subjects are believed to be clinically insignificant. Therefore, many differences reported as statistically significant are likely to be artifactual. Reliable data are needed to estimate intratest variability for the general population, subpopulations of interest, and research samples. Sensitive subpopulations (e.g., patients with chronic obstructive pulmonary disease, asthmatics, children) are likely to have higher intratest variability, making it more difficult to derive valid statistical inferences about differences observed after treatment or exposure.
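A Monte Carlo sketch of the comparison described above, using the stated coefficients of variation (3% intertest, 6% intratest) and eight maneuvers per test; the repeatability criterion used here (two best maneuvers within 5%) is a simplification, as the paper's exact RC is not given.

```python
# Simulate the conventional protocol (stop once a repeatability
# criterion is met) vs. an unconstrained eight-maneuver protocol
# that takes the maximum FEV1 across all maneuvers.
import numpy as np

rng = np.random.default_rng(4)
TRUE_FEV1, CV_INTER, CV_INTRA = 4.0, 0.03, 0.06
N_TESTS, N_MANEUVERS = 20_902, 8

errors = []
for _ in range(N_TESTS):
    test_mean = TRUE_FEV1 * (1 + rng.normal(0, CV_INTER))  # between-test shift
    maneuvers = test_mean * (1 + rng.normal(0, CV_INTRA, N_MANEUVERS))
    unconstrained = maneuvers.max()                        # all 8 maneuvers
    # Conventional: stop at the first maneuver where the two largest
    # values so far agree within 5% of the larger one
    for k in range(2, N_MANEUVERS + 1):
        best2 = np.sort(maneuvers[:k])[-2:]
        if (best2[1] - best2[0]) / best2[1] <= 0.05:
            break
    conventional = maneuvers[:k].max()
    errors.append((unconstrained - conventional) / unconstrained)

print(f"mean measurement error: {np.mean(errors):.1%}")
```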