231.
From a risk perspective, this paper constructs adaptive behaviour rules for firms in R&D networks and, based on the SIS model, builds a model of risk propagation through an R&D network. Numerical simulation, varying the model parameters, is used to explore how risk propagates when adaptive behaviour is taken into account. The results show that: (1) the C1 strategy strengthens the network's hierarchy and community strength, suppressing risk propagation in the R&D network to some extent, whereas under the C2 strategy new links between nodes are formed mainly on grounds of proximity, which easily leads to path dependence and capability traps; (2) the adaptive behaviour of R&D network firms causes fluctuations in community strength, and the decline in average path length together with the growth in the average clustering coefficient demonstrates the effectiveness of the C1 strategy; (3) under the C1 strategy, the link-breaking probability p and the steady-state infection level I* exhibit a U-shaped relationship, while under the C2 strategy I* decreases as p grows; (4) under both the C1 and C2 strategies, I* grows with the parameter ζ, indicating that the level of organizational dependence is a factor that deserves particular attention in controlling risk propagation in R&D networks. The paper reveals the laws of risk propagation in R&D networks under adaptive behaviour and provides a theoretical basis for R&D network governance in a networked-operations context.
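The abstract's C1/C2 strategies, link-breaking probability p, and parameter ζ are specific to the paper and not fully specified here. A minimal, hypothetical sketch of the underlying mechanism (SIS contagion on a network where susceptible nodes may cut links to infected neighbours and rewire) can nonetheless illustrate how adaptive behaviour changes the steady-state infection level:

```python
import random

def simulate_sis(n=200, k=6, beta=0.2, mu=0.1, p_rewire=0.1,
                 steps=300, seed=0):
    """Toy SIS epidemic on a random graph (average degree ~k) with
    adaptive rewiring: a susceptible node may cut a link to an infected
    neighbour (probability p_rewire) and reconnect to a random
    susceptible node. Returns the average infected fraction over the
    last 100 steps, a crude stand-in for the steady-state level I*.
    All parameter values are illustrative, not the paper's."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    while sum(len(s) for s in adj.values()) < n * k:
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            adj[a].add(b)
            adj[b].add(a)
    infected = set(rng.sample(range(n), n // 10))
    history = []
    for _ in range(steps):
        new_inf, new_rec = set(), set()
        for i in infected:                      # recovery
            if rng.random() < mu:
                new_rec.add(i)
        for i in range(n):                      # infection / rewiring
            if i in infected:
                continue
            for j in list(adj[i]):
                if j not in infected:
                    continue
                if rng.random() < p_rewire:
                    # adaptive behaviour: cut the risky edge, rewire
                    adj[i].discard(j)
                    adj[j].discard(i)
                    candidates = [c for c in range(n)
                                  if c != i and c not in infected]
                    if candidates:
                        c = rng.choice(candidates)
                        adj[i].add(c)
                        adj[c].add(i)
                elif rng.random() < beta:
                    new_inf.add(i)
                    break
        infected = (infected - new_rec) | new_inf
        history.append(len(infected) / n)
    return sum(history[-100:]) / 100
```

Comparing `simulate_sis(p_rewire=0.0)` against `simulate_sis(p_rewire=0.3)` shows how rewiring away from infected neighbours alters prevalence, the kind of experiment the paper runs across its C1/C2 rules.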
232.
In the big data era, the processing of personal information generates both negative externalities, such as data security risks, and positive externalities, such as the realization of sharing-economy value; the "big data paradox" reveals the origin of this double externality. Following the principle of "internalizing externalities", granting information subjects a right to personal information and data controllers a property right in big data helps regulate each externality separately, but in the networked big data context the two externalities interact, and their regulation faces a dilemma. The law-and-economics theory of the "reciprocal nature of harm" justifies the economic logic of regulating personal information processing on a "risk-oriented" basis to resolve this dilemma. The development of the big data industry depends on "externalizing the externalities" of personal information processing to realize the sharing economy, and the law-and-economics theory of the "comedy of the commons" justifies the economic logic of regulating personal information processing on an "externalizing" basis to meet the industry's needs. Regulating personal information processing in the big data era therefore requires a conceptual shift from "repressive law" toward "responsive law", mechanisms that both internalize and externalize externalities, and coordinated measures spanning the allocation of data resource rights and the exercise of digital technology power, so as to strike an organic balance between big data industry innovation and personal information security.
233.
In the era of the Internet of Everything, data is regarded as the seventh factor of production, on a par with land, labour, capital, knowledge, technology, and management. Diverse innovations in business practice are mining data value in ways that go beyond economics' traditional definition of data and its value, yet theory still lacks corresponding concepts and discussion. From the perspective of firm–consumer value co-creation, this paper proposes the concept of big data cooperative assets and examines the innovation logic of the digital economy based on big data cooperative assets and adaptive innovation. The study shows that big data cooperative assets are interactive resources: digital assets formed in firm–consumer digital service interactions that can be owned and used by the other party to create current or future economic returns. Merely possessing heterogeneous resources does not necessarily create cooperative assets; only when heterogeneous resources are effectively integrated and used in firm–consumer service exchange do they embody the value potential for actors to create returns. Co-evolution promotes mutual adaptive adjustment between firms and consumers, and big data cooperative assets stimulate adaptive innovation characterized by real-time adjustment, real-time feedback, and unpredictability, thereby promoting digital economy innovation.
234.
Since the financial crisis, international trade protectionism has intensified, China's extensive growth model has shown signs of unsustainability, its demographic dividend has begun to fade, and scholars have drawn attention to the low value-added and low productivity of China's processing trade, embedded in global value chains dominated by developed countries. Given changes in the external trade environment and in factor endowments under the domestic "new normal", and the need to deepen supply-side structural reform, upgrading processing trade enterprises has become an important part of China's economic restructuring. This paper analyses the origin, development, and current state of China's processing trade, describes features of processing trade in the global value chain and the problem of "low-end lock-in", and examines how financing constraints affect firms' operating decisions, export behaviour, and position in global value chains. It then proposes paths for Chinese processing trade enterprises to climb the global value chain from two angles: intra-industry upgrading centred on process, product, and functional upgrading, and inter-industry upgrading characterized by the shift from labour-intensive to technology-intensive production. The study argues that emphasizing human capital accumulation, domestic firms' capacity to absorb technology spillovers, and independent R&D capability, combined with international production-capacity cooperation and the Belt and Road Initiative, two-way opening to east and west, and the shaping of inclusive regional and domestic value chains anchored at home, constitutes both the upgrading path and a new opportunity for China's processing trade enterprises.
235.
Adaptability is the property by which an agent fits its environment; it is an essential attribute of physical and biological evolutionary systems, and in cognition it appears as adaptive representation. Adaptive representation thus becomes the way knowledge manifests itself and the core of knowledge creation, characterized by coordination, matching, complementarity, simulation, and analogy. Its modes are direct concrete representation, indirect concrete representation, direct abstract representation, and indirect abstract representation, corresponding respectively to empiricism, constructive empiricism, rationalism, and scientific realism. Adaptive representation is realized through changes in the core concepts of scientific theories, the condensation of laws, the replacement of theories, model-based reasoning, and changes of worldview, and it possesses empirical adequacy. This empirical adequacy is not only experience at the perceptual level but also a cognitive mode. In the cognitive sense, experience is likewise grounded in mental models, and the paths of mental modelling include the language of thought, mental imagery, mental propositions, thought experiments, and mental simulation-based reasoning. Adaptive representation grounded in mental modelling thus provides a useful methodology for scientific cognition and scientific discovery.
236.
Data censoring causes ordinary least-squares estimators of linear models to be biased and inconsistent. The Tobit estimator yields consistent estimators in the presence of data censoring if the errors are normally distributed. However, nonnormality or heteroscedasticity renders the Tobit estimators inconsistent. Various estimators have been proposed for circumventing the normality assumption, including censored least absolute deviations (CLAD), symmetrically censored least squares (SCLS), and partially adaptive estimators. CLAD and SCLS remain consistent in the presence of heteroscedasticity; however, SCLS performs poorly in the presence of asymmetric errors. This article extends the partially adaptive estimation approach to accommodate possible heteroscedasticity as well as nonnormality. A simulation study is used to investigate the estimators' relative performance in these settings. The partially adaptive censored regression estimators suffer little efficiency loss for censored normal errors, appear to outperform the Tobit and semiparametric estimators for nonnormal error distributions, and are less sensitive to the presence of heteroscedasticity. An empirical example is considered, which supports these results.
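The paper's partially adaptive estimators fit flexible error densities, which is beyond a short sketch; but the baseline it benchmarks against, the Tobit MLE for left-censoring at zero, is standard and can be contrasted with the attenuated OLS fit. Data and sample sizes below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_fit(y, X):
    """Tobit MLE for y = max(0, X @ beta + eps), eps ~ N(0, sigma^2).
    Uncensored obs contribute the normal density, censored obs the
    probability mass P(X @ beta + eps <= 0)."""
    def negll(params):
        beta, logsig = params[:-1], params[-1]
        sig = np.exp(logsig)
        xb = X @ beta
        cens = y <= 0
        ll = np.sum(norm.logpdf((y[~cens] - xb[~cens]) / sig) - logsig)
        ll += np.sum(norm.logcdf(-xb[cens] / sig))
        return -ll
    start = np.append(np.linalg.lstsq(X, y, rcond=None)[0], 0.0)
    res = minimize(negll, start, method="BFGS")
    return res.x[:-1], np.exp(res.x[-1])

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 1.0])
y = np.maximum(X @ beta_true + rng.normal(size=n), 0.0)  # ~1/3 censored

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]  # slope biased toward 0
beta_tobit, sigma_hat = tobit_fit(y, X)          # close to beta_true
```

With nonnormal or heteroscedastic errors this MLE loses its consistency, which is exactly the gap the CLAD, SCLS, and partially adaptive estimators in the abstract address.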
237.
In a clinical trial, it is sometimes desirable to allocate as many patients as possible to the best treatment, in particular when a trial for a rare disease may contain a considerable portion of the whole target population. The Gittins index rule is a powerful tool for sequentially allocating patients to the best treatment based on the responses of patients already treated, but its application in clinical trials is limited by technical complexity and lack of randomness. Thompson sampling is an appealing alternative, since it strikes a compromise between optimal treatment allocation and randomness and has desirable optimality properties in the machine-learning context. In clinical trial settings, however, multiple simulation studies have shown disappointing results for Thompson samplers. We consider how to improve the short-run performance of Thompson sampling and propose a novel acceleration approach. This approach also applies when patients can only be allocated in batches and is very easy to implement without complex algorithms. A simulation study showed that the approach can improve the performance of Thompson sampling in terms of average total response rate. An application to the redesign of a preference trial to maximize patients' satisfaction is also presented.
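The paper's acceleration step is not specified in this abstract, so the sketch below shows only the standard Beta–Bernoulli Thompson sampler it builds on, including the batch allocation the abstract mentions. Response rates, trial size, and priors are hypothetical:

```python
import random

def thompson_trial(p=(0.3, 0.6), n_patients=200, batch=1, seed=7):
    """Beta-Bernoulli Thompson sampling for a two-arm binary trial.
    For each batch, sample a success rate for each arm from its Beta
    posterior and treat the batch on the arm with the higher draw.
    Returns per-arm (allocations, successes). Illustrative parameters;
    the paper's accelerated variant modifies this basic scheme."""
    rng = random.Random(seed)
    alpha = [1, 1]          # Beta(1, 1) priors on each arm
    beta = [1, 1]
    alloc, succ = [0, 0], [0, 0]
    treated = 0
    while treated < n_patients:
        draws = [rng.betavariate(alpha[k], beta[k]) for k in (0, 1)]
        arm = draws.index(max(draws))
        for _ in range(min(batch, n_patients - treated)):
            alloc[arm] += 1
            treated += 1
            if rng.random() < p[arm]:       # observe binary response
                succ[arm] += 1
                alpha[arm] += 1
            else:
                beta[arm] += 1
    return alloc, succ
```

Because posterior draws rather than posterior means drive allocation, the rule stays randomized while still concentrating patients on the apparently better arm, the compromise the abstract describes.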
238.
Randomised controlled trials are considered the gold standard in trial design. However, phase II oncology trials with a binary outcome are often single-arm. Although a number of reasons exist for choosing a single-arm trial, the primary one is that single-arm designs require fewer participants than their randomised equivalents. Novel methodology that makes randomised designs more efficient is therefore of value to the trials community. This article introduces a randomised two-arm binary-outcome trial design that includes stochastic curtailment (SC), allowing a trial to stop before the final conclusions are known with certainty. In addition to SC, the proposed design uses a randomised block design, which lets investigators control the number of interim analyses. The approach is compared with existing early-stopping designs through a loss function composed of a weighted sum of design characteristics, and through an example from a real trial. The comparisons show that for many possible loss functions the proposed design is superior to existing designs; it may also be more practical, by allowing a flexible number of interim analyses. One existing design produces superior design realisations when the anticipated response rate is low, but under that design the probability of rejecting the null hypothesis is sensitive to misspecification of the null response rate. Therefore, when considering randomised designs in phase II, we recommend the proposed approach over other sequential designs.
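Stochastic curtailment is typically operationalised through conditional power: the probability, given interim data, that the final analysis would reject the null. A minimal single-arm binary sketch (the paper embeds this in a randomised block comparison, which is omitted, and all numbers are hypothetical):

```python
from scipy.stats import binom

def conditional_power(s_interim, n_interim, n_total, r_cut, p_assumed):
    """Given s_interim responses among the first n_interim patients,
    return the probability that the final response count exceeds the
    cutoff r_cut (i.e. the trial succeeds) if the true response rate
    were p_assumed. Stochastic curtailment stops the trial when this
    probability is very low (futility) or very high (efficacy)."""
    remaining = n_total - n_interim
    needed = r_cut + 1 - s_interim      # further responses required
    if needed <= 0:
        return 1.0                      # success already guaranteed
    if needed > remaining:
        return 0.0                      # success already impossible
    return float(binom.sf(needed - 1, remaining, p_assumed))
```

A block design fixes when such checks happen: with blocks of size b, the rule above is evaluated after each block, so investigators choose the number of interim looks by choosing b.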
239.
A biosimilar drug is a biological product that is highly similar to, and has no clinically meaningful difference from, a licensed product in terms of safety, purity, and potency. Biosimilar study design is essential to demonstrate equivalence between the biosimilar and the reference product. However, existing designs and assessment methods are primarily based on binary and continuous endpoints. We propose a Bayesian adaptive design for biosimilarity trials with a time-to-event endpoint. The proposed design has two key features. First, we employ the calibrated power prior to precisely borrow relevant information from historical data on the reference drug. Second, we propose a two-stage procedure using the Bayesian biosimilarity index (BBI) to allow early stopping and improve efficiency. Extensive simulations demonstrate the operating characteristics of the proposed method in contrast with a naive method. Sensitivity analyses and extensions of the assumptions are also presented.
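The paper's calibrated power prior tunes the borrowing weight to prior-data conflict; a simpler fixed-weight version for an exponential time-to-event model is conjugate and easy to sketch. Everything below (the Gamma initial prior, the fixed a0, the 0.8–1.25 similarity margins, the event/exposure numbers) is illustrative, not the paper's specification:

```python
import numpy as np

def power_prior_posterior(d_cur, t_cur, d_hist, t_hist, a0,
                          alpha0=0.001, beta0=0.001):
    """Posterior for an exponential event rate under a fixed-weight
    power prior: the historical likelihood is raised to a0 in [0, 1],
    so historical events/exposure are discounted by a0. Returns the
    (shape, rate) of the Gamma posterior."""
    return (alpha0 + a0 * d_hist + d_cur,
            beta0 + a0 * t_hist + t_cur)

# Reference arm borrows half-weight from historical data; test arm
# uses current data only (hypothetical numbers).
a_ref, b_ref = power_prior_posterior(40, 120.0, d_hist=80,
                                     t_hist=260.0, a0=0.5)
a_test, b_test = power_prior_posterior(38, 115.0, d_hist=0,
                                       t_hist=0.0, a0=0.0)

# Posterior probability that the hazard ratio lies inside a
# similarity margin -- a crude analogue of a biosimilarity index.
rng = np.random.default_rng(0)
lam_ref = rng.gamma(a_ref, 1.0 / b_ref, 20000)
lam_test = rng.gamma(a_test, 1.0 / b_test, 20000)
ratio = lam_test / lam_ref
prob_similar = float(np.mean((ratio > 0.8) & (ratio < 1.25)))
```

In the paper's two-stage procedure, a quantity of this kind (the BBI) is checked at an interim look: very low values stop for futility, very high values stop early for similarity.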
240.
Bayesian MARS
A Bayesian approach to multivariate adaptive regression spline (MARS) fitting (Friedman, 1991) is proposed. This takes the form of a probability distribution over the space of possible MARS models, which is explored using reversible jump Markov chain Monte Carlo methods (Green, 1995). The generated sample of MARS models is shown to have good predictive power when averaged over and allows easy interpretation of the relative importance of predictors to the overall fit.
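MARS builds its fit from reflected pairs of hinge functions. The sketch below fits a one-dimensional expansion with a FIXED set of knots by least squares; the Bayesian MARS of the abstract instead treats the number and locations of such basis functions as random and explores them with reversible-jump MCMC, averaging predictions over the sampled models. Data and knot choice are illustrative:

```python
import numpy as np

def hinge(x, knot, sign):
    """MARS hinge basis function: max(0, sign * (x - knot))."""
    return np.maximum(0.0, sign * (x - knot))

def fit_fixed_mars(x, y, knots):
    """Least-squares fit of a 1-D MARS-style expansion with fixed
    knots; each knot contributes a reflected pair of hinges. This is
    only the basis-expansion step, not the RJ-MCMC model search."""
    B = np.column_stack(
        [np.ones_like(x)] +
        [hinge(x, k, s) for k in knots for s in (+1.0, -1.0)])
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return coef, B

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(-2, 2, 300))
y = np.abs(x) + 0.1 * rng.normal(size=300)   # true function has a kink at 0
coef, B = fit_fixed_mars(x, y, knots=[0.0])
yhat = B @ coef                               # recovers the kink exactly
```

In the Bayesian version, RJ-MCMC proposals add, delete, or move knots, and posterior averaging over the visited models yields both predictions and a measure of each predictor's importance.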