851.
A rich body of literature has emerged from research on Western new product development (NPD) practices. However, the impact of country-specific and cultural influences has not yet been examined in this context. This study is a first attempt to identify differences in NPD practices between Research and Development (R&D) subsidiaries of multinational companies in Germany, China and India. Data were generated through qualitative interviews with R&D executives in these countries across multiple cases. The study examines strategic, organizational and operational aspects and indicates clear differences in process coordination, reward systems, market orientation and the average age of NPD teams. Other aspects, such as strategic targets and management involvement, show only slight differences across the countries. The findings therefore suggest that while some aspects are universally applicable across cultural frontiers, Western companies must understand the different expectations regarding NPD in India and China and adjust their practices accordingly.
852.
Despite the growing trend of logistics outsourcing, the literature on the topic remains limited, especially on the relationship between influencing factors and the extent of logistics outsourcing practices. In this study, we draw on the field of strategic management to help clarify the mechanisms linking influencing factors, logistics outsourcing practices and outsourcing performance. A model based on the resource-based view illustrates the hypothesized connections among these variables. A response rate of 21% from the 486 firms surveyed provided the empirical data, which were analysed using SmartPLS software. The results support the view that superior performance is tied to the resources of the firm. The analysis shows that a lack of human and physical asset capabilities, as well as transaction uncertainty, influences the extent of different logistics outsourcing practices. The four logistics outsourcing practices under study were found to have a positive relationship with logistics outsourcing performance, particularly strategic focus. Although firms theoretically pursue cost reduction through a logistics outsourcing strategy, this was not borne out here: only one of the four practices under study contributed positively to financial benefit. The results also indicate that most firms outsource their non-core logistics activities in response to the transaction uncertainty their businesses face.
854.
Crime and disease surveillance commonly rely on space–time clustering methods to identify emerging patterns. The goal is to detect spatio-temporal clusters as soon as possible after their occurrence while controlling the rate of false alarms. With this in mind, a spatio-temporal multiple-cluster detection method was developed as an extension of a previous proposal based on a spatial version of the Shiryaev–Roberts statistic. Besides the capability of detecting multiple clusters, the method has fewer input parameters than the previous proposal, making it more intuitive for practitioners. To evaluate the new methodology, a simulation study covering several scenarios is performed, highlighting many advantages of the proposed method. Finally, we present a case study of a crime data set from Belo Horizonte, Brazil.
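For orientation, the purely temporal Shiryaev–Roberts recursion that the spatial method builds on is compact enough to sketch. The following is a minimal illustration, not the paper's spatial extension, assuming Poisson-distributed event counts with hypothetical pre- and post-change rates lam0 and lam1 and an arbitrary alarm threshold:

```python
import numpy as np

def shiryaev_roberts_alarm(counts, lam0, lam1, threshold):
    """Run the temporal Shiryaev-Roberts recursion R_n = (1 + R_{n-1}) * LR_n
    on a sequence of Poisson counts; return the first alarm time, or None.

    LR_n is the likelihood ratio of observation n under the post-change
    rate lam1 versus the pre-change rate lam0.
    """
    r = 0.0
    for n, x in enumerate(counts, start=1):
        # Poisson likelihood ratio: exp(-(lam1 - lam0)) * (lam1 / lam0) ** x
        lr = np.exp(-(lam1 - lam0)) * (lam1 / lam0) ** x
        r = (1.0 + r) * lr
        if r >= threshold:
            return n  # alarm: evidence of a rate increase
    return None

# Toy surveillance stream: background rate 2, jumping to 5 at time 50.
rng = np.random.default_rng(0)
stream = np.concatenate([rng.poisson(2, 50), rng.poisson(5, 30)])
print(shiryaev_roberts_alarm(stream, lam0=2.0, lam1=5.0, threshold=100.0))
```

The spatial version in the paper scans many candidate cluster locations simultaneously; the recursion above is the single-location building block.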
855.
Residual-marked empirical-process-based tests are commonly used in regression models. However, they suffer from data sparseness in high-dimensional space when there are many covariates. This paper has three purposes. First, we suggest a partial dimension-reduction, adaptive-to-model testing procedure that is omnibus against general global alternative models even though it fully uses the dimension-reduction structure under the null hypothesis. This is because the procedure automatically adapts to the null and alternative models, and thus largely overcomes the dimensionality problem. Second, to achieve this goal, we propose a ridge-type eigenvalue ratio estimate to automatically determine the number of linear combinations of the covariates under the null and alternative hypotheses. Third, a Monte Carlo approximation to the sampling null distribution is suggested. Unlike existing bootstrap approximation methods, it gives an approximation as close to the sampling null distribution as possible by fully utilising the dimension-reduction model structure under the null model. Simulation studies and real-data analysis are then conducted to illustrate the performance of the new test and compare it with existing tests.
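The ridge-type eigenvalue ratio idea can be illustrated in a few lines. The sketch below is generic, not the paper's exact criterion or target matrix: it applies a ridge-penalised ratio to the eigenvalues of a sample covariance matrix with two spiked directions, and the form of the criterion and the ridge constant c are assumptions for illustration:

```python
import numpy as np

def ridge_eigenvalue_ratio(eigvals, c):
    """Estimate the number of dominant directions as the index maximising
    the ridge-penalised ratio (lambda_i + c) / (lambda_{i+1} + c).
    The ridge constant c keeps ratios of near-zero eigenvalues stable."""
    lam = np.sort(eigvals)[::-1]          # descending order
    ratios = (lam[:-1] + c) / (lam[1:] + c)
    return int(np.argmax(ratios)) + 1     # 1-based count of retained directions

# Toy example: p = 10 covariates driven by q = 2 linear combinations.
rng = np.random.default_rng(1)
n, p, q = 500, 10, 2
signal = rng.normal(size=(n, q)) @ rng.normal(size=(q, p)) * 3.0
X = signal + rng.normal(size=(n, p))
eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))
print(ridge_eigenvalue_ratio(eigvals, c=np.log(n) / n))  # typically prints 2
```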
856.
\(\alpha \)-Stable distributions are a family of probability distributions found suitable for modelling many complex processes and phenomena in research fields such as medicine, physics, finance and networking. However, the lack of closed-form expressions makes their evaluation analytically intractable, and alternative approaches are computationally expensive. Existing numerical programs are not fast enough for certain applications and do not exploit the parallel power of general-purpose graphics processing units (GPUs). In this paper, we develop novel parallel algorithms for the probability density function and cumulative distribution function (including a parallel Gauss–Kronrod quadrature), the quantile function, a random number generator and maximum likelihood estimation of \(\alpha \)-stable distributions using OpenCL, achieving significant speedups and precision in all cases. Thanks to the use of OpenCL, we also evaluate the results of our library on different GPU architectures.
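The OpenCL library itself is not reproduced here, but a serial reference point is useful when validating any parallel implementation. A minimal sketch using SciPy's levy_stable (parameter values are illustrative; SciPy evaluates numerically the same kind of integral representations the paper parallelises):

```python
import numpy as np
from scipy.stats import levy_stable

alpha, beta = 1.5, 0.5     # stability and skewness parameters (illustrative)
x = np.linspace(-5, 5, 11)

pdf = levy_stable.pdf(x, alpha, beta)    # no closed form: evaluated numerically
cdf = levy_stable.cdf(x, alpha, beta)
q90 = levy_stable.ppf(0.9, alpha, beta)  # quantile function
sample = levy_stable.rvs(alpha, beta, size=200, random_state=0)

print(np.round(pdf, 4), round(float(q90), 4))
# Maximum likelihood is the expensive step that motivates parallelisation;
# even this small sample takes noticeable time in pure SciPy.
params = levy_stable.fit(sample)
print(params)  # (alpha, beta, loc, scale)
```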
857.
In drug development, it sometimes occurs that a new drug does not demonstrate effectiveness for the full study population but appears to be beneficial in a relevant subgroup. If the subgroup of interest was not part of a confirmatory testing strategy, the inflation of the overall type I error is substantial, and such a subgroup finding can be seen as exploratory at best. To support such exploratory findings, the subgroup finding should be replicated appropriately in a new trial. We should, however, be reasonably confident in the observed treatment effect size before using this estimate in a replication trial in the subpopulation of interest. We were therefore interested in evaluating the bias of the estimated subgroup treatment effect after selection based on significance for the subgroup in an overall "failed" trial. Different scenarios, involving continuous as well as dichotomous outcomes, were investigated via simulation studies. It is shown that the bias associated with subgroup findings in overall nonsignificant clinical trials is on average large and varies substantially across plausible scenarios, rendering the subgroup treatment estimate from the original trial of limited value for designing the replication trial. An empirical Bayes shrinkage method is suggested to minimize this overestimation. The proposed estimator appears to offer either a good or a conservative correction to the observed subgroup treatment effect and hence provides a more reliable subgroup treatment-effect estimate for adequate planning of future studies.
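The selection mechanism and a generic empirical-Bayes correction are easy to demonstrate. The sketch below is not the paper's estimator: it simulates normally distributed effect estimates, selects trials that "failed" overall but were significant in the subgroup, and shrinks the subgroup estimate toward the overall one with a normal-normal posterior weight; all effect sizes, standard errors and the between-level variance tau2 are assumed values:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 200_000
true_overall, true_sub = 0.05, 0.15        # modest true effects (assumed)
se_overall, se_sub = 0.05, 0.10            # the subgroup estimate is noisier

est_overall = rng.normal(true_overall, se_overall, n_trials)
est_sub = rng.normal(true_sub, se_sub, n_trials)

# Select trials that "failed" overall but were significant in the subgroup.
sel = (est_overall / se_overall < 1.96) & (est_sub / se_sub > 1.96)

def eb_shrink(theta_sub, theta_overall, se_sub, tau2):
    """Shrink the subgroup estimate toward the overall estimate; the weight
    tau2 / (tau2 + se_sub**2) is the usual normal-normal posterior weight."""
    w = tau2 / (tau2 + se_sub**2)
    return w * theta_sub + (1 - w) * theta_overall

naive = est_sub[sel].mean()
shrunk = eb_shrink(est_sub[sel], est_overall[sel], se_sub, tau2=0.01).mean()
print(f"truth {true_sub:.3f}  naive {naive:.3f}  shrunk {shrunk:.3f}")
# Selection inflates the naive subgroup estimate; shrinkage pulls it back.
```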
858.
A variety of primary endpoints are used in clinical trials treating patients with severe infectious diseases, and existing guidelines do not provide a consistent recommendation. We propose to study two primary endpoints, cure and death, simultaneously in a comprehensive multistate cure-death model as the starting point for a treatment comparison. This technique enables us to study the temporal dynamics of the patient-relevant probability of being cured and alive. We describe and compare traditional and innovative methods suitable for a treatment comparison based on this model. Traditional analyses using risk differences focus on only one prespecified timepoint. A restricted logrank-based test of treatment effect is sensitive to ordered categories of response and integrates information on duration of response. Pseudo-value regression provides a direct regression model for examining the treatment effect via differences in transition probabilities. Applied to a topical real-data example and to simulation scenarios, we demonstrate advantages and limitations and provide insight into how these methods handle different kinds of treatment imbalance. The cure-death model provides a suitable framework for better understanding how a new treatment influences the time-dynamic cure and death process. This might aid the future planning of randomised clinical trials, sample size calculations, and data analyses.
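The pseudo-value regression step can be sketched for a single state-occupation probability. The sketch below is a simplification: it uses leave-one-out Kaplan-Meier survival at a landmark time t_star as a stand-in for the cure-death transition probability (the full model requires an Aalen-Johansen estimator over the multistate structure), and all data and parameter values are simulated for illustration:

```python
import numpy as np
import statsmodels.api as sm
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(3)
n, t_star = 200, 1.0
treat = rng.integers(0, 2, n).astype(float)
time = rng.exponential(1.0 / (0.5 + 0.3 * (1.0 - treat)))  # treatment delays events
event = rng.uniform(size=n) < 0.8                           # roughly 20% censored

def km_at(mask, t):
    """Kaplan-Meier estimate of being event-free at time t for subset `mask`."""
    kmf = KaplanMeierFitter().fit(time[mask], event[mask])
    return float(kmf.predict(t))

full = km_at(np.ones(n, bool), t_star)
# Leave-one-out pseudo-observations: n * theta_hat - (n - 1) * theta_hat_{-i}.
pseudo = np.array([n * full - (n - 1) * km_at(np.arange(n) != i, t_star)
                   for i in range(n)])

# Regressing pseudo-values on treatment estimates the difference in the
# probability of being event-free at t_star (identity link for simplicity).
X = sm.add_constant(treat)
print(sm.GEE(pseudo, X, groups=np.arange(n)).fit().params)
```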
859.
A common objective of cohort studies and clinical trials is to assess time-varying longitudinal continuous biomarkers as correlates of the instantaneous hazard of a study endpoint. We consider the setting where the biomarkers are measured in a designed sub-sample (i.e., a case-cohort or two-phase sampling design), as is typical for prevention trials. We address this problem via joint models, with the underlying biomarker trajectories characterized by a random effects model and their relationship with instantaneous risk characterized by a Cox model. For estimation and inference, we extend the conditional score method of Tsiatis and Davidian (Biometrika 88(2):447–458, 2001) to accommodate the two-phase biomarker sampling design, using augmented inverse probability weighting with nonparametric kernel regression. We present theoretical properties of the proposed estimators and finite-sample properties derived through simulations, and we illustrate the methods with an application to the AIDS Clinical Trials Group 175 antiretroviral therapy trial. We discuss how the methods are useful for evaluating a Prentice surrogate endpoint, for studying mediation, and for generating hypotheses about biological mechanisms of treatment efficacy.
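The augmented conditional-score machinery is beyond a short sketch, but its basic ingredient, inverse-probability weighting for the two-phase sub-sample, is simple to demonstrate. A minimal sketch assuming a case-cohort design with a known subcohort sampling fraction p_sub and the simplest weight form (the paper augments such weights with kernel regression, which is not shown here):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n, p_sub = 2000, 0.15
z = rng.normal(size=n)                            # biomarker, measured in phase two
t_event = rng.exponential(1.0 / np.exp(0.5 * z))  # true log-hazard ratio = 0.5
t_cens = rng.exponential(2.0, n)                  # independent censoring
time = np.minimum(t_event, t_cens)
event = t_event <= t_cens

# Phase two: the biomarker is available for all cases plus a random subcohort.
in_subcohort = rng.uniform(size=n) < p_sub
measured = event | in_subcohort

df = pd.DataFrame({"time": time, "event": event.astype(int), "z": z})[measured].copy()
# Simple inverse-probability weights: cases 1, sampled non-cases 1 / p_sub.
df["w"] = np.where(df["event"] == 1, 1.0, 1.0 / p_sub)

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event",
                        weights_col="w", robust=True)
print(cph.params_)  # the coefficient on z should land near the true 0.5
```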
860.
Gap times between recurrent events are often of primary interest in medical and observational studies. The additive hazards model, which focuses on risk differences rather than risk ratios, has been widely used in practice. However, the marginal additive hazards model does not take the dependence among gap times into account. In this paper, we propose an additive mixed effects model for gap time data that includes a subject-specific random effect to account for the dependence among the gap times. Estimating equation approaches are developed for parameter estimation, and the asymptotic properties of the resulting estimators are established. In addition, graphical and numerical procedures are presented for model checking. The finite-sample behavior of the proposed methods is evaluated through simulation studies, and an application to a data set from a clinical study on chronic granulomatous disease is provided.
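The dependence that the random effect induces among gap times is easy to see in simulation. A minimal sketch, assuming a constant baseline hazard so that, given the subject-specific effect b_i, each gap time is exponential (all parameter values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
n_subj, n_gaps = 1000, 3
lam0, beta, sigma_b = 1.0, 0.5, 0.4

z = rng.integers(0, 2, n_subj)                 # binary covariate
b = rng.normal(0.0, sigma_b, n_subj)           # subject-specific random effect

# Additive mixed hazard: lambda_ij = lam0 + beta * z_i + b_i (clipped > 0).
haz = np.clip(lam0 + beta * z + b, 0.05, None)

# Given b_i, successive gap times are i.i.d. exponential with rate haz_i;
# marginally (with b_i integrated out) gaps within a subject are dependent.
gaps = rng.exponential(1.0 / haz[:, None], size=(n_subj, n_gaps))

# Within-subject correlation between consecutive gaps is positive --
# exactly the dependence a marginal additive hazards model ignores.
print(np.corrcoef(gaps[:, 0], gaps[:, 1])[0, 1])
```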