151.
This paper explores the role of local context in cross‐border acquisitions by emerging economy multinational enterprises (EMNEs). It argues that the importance of local context has remained despite the increased global integration of the world economy. Hypotheses are tested using data on Indian acquisitions hosted in 70 countries over an eight‐year period. Results, which are consistent across number and value of cross‐border acquisitions, show that the local context in host countries offers contrasting benefits. Emerging economy multinational enterprises exploited these benefits by embedding in host countries through acquisitions. The acquisition strategy is conventional in the motives underpinning internationalization, but novel in its geographical clustering of host countries, and idiosyncratic owing to the EMNE's ability to draw on home country embeddedness. The paper develops theoretical implications and extends the concept of embeddedness, treating it as a series of internalization or quasi‐internalization decisions across a variety of local contexts by multinationals.
152.
Decades of questionnaire and interview studies have revealed various leadership behaviors observed in successful leaders. However, little is known about the actual behaviors that cause those observations. Given that lay observers are prone to cognitive biases, such as the halo effect, the validity of theories that are exclusively based on observed behaviors is questionable. We thus follow the call of leading scientists in the field and derive a parsimonious model of leadership behavior that is informed by established psychological theories. Building on the taxonomy of Yukl (2012), we propose three task-oriented behavior categories (enhancing understanding, strengthening motivation and facilitating implementation) and three relation-oriented behavior categories (fostering coordination, promoting cooperation and activating resources), each of which is further specified by a number of distinct behaviors. While the task-oriented behaviors are directed towards the accomplishment of shared objectives, the relation-oriented behaviors support this process by increasing the coordinated engagement of the team members. Our model contributes to the advancement of leadership behavior theory by (1) consolidating current taxonomies, (2) sharpening behavioral concepts of leadership behavior, (3) specifying precise relationships between those categories and (4) spurring new hypotheses that can be derived from existing findings in the field of psychology. To test our model as well as the hypotheses derived from this model, we advocate the development of new measurements that overcome the limitations associated with questionnaire and interview studies.
153.
The linear sum assignment problem is a fundamental combinatorial optimisation problem and can be broadly stated as follows: given an \(n \times m\) (\(m \ge n\)) benefit matrix \(B = (b_{ij})\), match each row to a different column so that the sum of entries at the row-column intersections is maximised. This paper describes the application of a new fast heuristic algorithm, Asymmetric Greedy Search, to the asymmetric version (\(n \ne m\)) of the linear sum assignment problem. Extensive computational experiments, using a range of model graphs, demonstrate the effectiveness of the algorithm. The heuristic was also incorporated within an algorithm for the non-sequential protein structure matching problem, where a non-sequential alignment between two proteins, normally with different numbers of amino acids, needs to be maximised.
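To make the objective concrete, here is a minimal, hedged sketch: a toy rectangular benefit matrix solved both by a naive row-by-row greedy rule and by SciPy's exact solver as a baseline. The greedy rule shown is only illustrative and is not the Asymmetric Greedy Search heuristic the paper proposes; the matrix size and random seed are arbitrary.

```python
# Illustration of the asymmetric (n != m) linear sum assignment objective:
# match each row of a rectangular benefit matrix B to a distinct column so
# that the summed benefit is maximised. The naive row-by-row greedy below is
# shown only to contrast with the exact optimum; it is NOT the paper's
# Asymmetric Greedy Search heuristic.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n, m = 4, 6                              # rectangular instance, m >= n
B = rng.uniform(size=(n, m))             # benefit matrix B = (b_ij)

# Naive greedy: each row, in index order, takes the best still-unused column.
used, greedy_total = set(), 0.0
for i in range(n):
    j = max((c for c in range(m) if c not in used), key=lambda c: B[i, c])
    used.add(j)
    greedy_total += B[i, j]

# Exact optimum for comparison (Hungarian-style solver, maximisation form).
rows, cols = linear_sum_assignment(B, maximize=True)
print("greedy benefit :", round(greedy_total, 4))
print("optimal benefit:", round(float(B[rows, cols].sum()), 4))
```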
154.
A common objective of cohort studies and clinical trials is to assess time-varying longitudinal continuous biomarkers as correlates of the instantaneous hazard of a study endpoint. We consider the setting where the biomarkers are measured in a designed sub-sample (i.e., case-cohort or two-phase sampling design), as is normative for prevention trials. We address this problem via joint models, with underlying biomarker trajectories characterized by a random effects model and their relationship with instantaneous risk characterized by a Cox model. For estimation and inference we extend the conditional score method of Tsiatis and Davidian (Biometrika 88(2):447–458, 2001) to accommodate the two-phase biomarker sampling design using augmented inverse probability weighting with nonparametric kernel regression. We present theoretical properties of the proposed estimators and finite-sample properties derived through simulations, and illustrate the methods with application to the AIDS Clinical Trials Group 175 antiretroviral therapy trial. We discuss how the methods are useful for evaluating a Prentice surrogate endpoint, mediation, and for generating hypotheses about biological mechanisms of treatment efficacy.
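For orientation, one common way to write the joint-model structure the abstract refers to (the symbols and the linear trajectory below are illustrative assumptions, not necessarily the authors' exact specification) is

\[
W_{ij} = X_i(t_{ij}) + e_{ij}, \qquad
X_i(t) = \alpha_{0i} + \alpha_{1i}\,t, \qquad
\lambda_i\{t \mid X_i(t), Z_i\} = \lambda_0(t)\,\exp\{\beta X_i(t) + \gamma^{\top} Z_i\},
\]

where \(W_{ij}\) is the biomarker measured (under the two-phase design) at time \(t_{ij}\), \((\alpha_{0i},\alpha_{1i})\) are subject-level random effects, \(e_{ij}\) is measurement error, \(\lambda_0\) is a baseline hazard, and \(Z_i\) are fixed covariates; the conditional score and the augmented inverse probability weighting described above operate on a structure of this kind.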
155.
Patient heterogeneity may complicate dose‐finding in phase 1 clinical trials if the dose‐toxicity curves differ between subgroups. Conducting separate trials within subgroups may lead to infeasibly small sample sizes in subgroups having low prevalence. Alternatively, it is not obvious how to conduct a single trial while accounting for heterogeneity. To address this problem, we consider a generalization of the continual reassessment method on the basis of a hierarchical Bayesian dose‐toxicity model that borrows strength between subgroups under the assumption that the subgroups are exchangeable. We evaluate a design using this model that includes subgroup‐specific dose selection and safety rules. A simulation study is presented that includes comparison of this method to 3 alternative approaches, on the basis of nonhierarchical models, that make different types of assumptions about within‐subgroup dose‐toxicity curves. The simulations show that the hierarchical model‐based method is recommended in settings where the dose‐toxicity curves are exchangeable between subgroups. We present practical guidelines for application and provide computer programs for trial simulation and conduct.
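One common parameterization of such a hierarchical dose-toxicity model, written here only as a hedged illustration of how borrowing strength across exchangeable subgroups can be encoded (not as the authors' exact model), is

\[
\operatorname{logit}\,\pi_g(d_j) = \alpha_g + \exp(\beta_g)\,x_j, \qquad
(\alpha_g, \beta_g) \sim N(\mu, \Sigma), \qquad
\mu \sim N(\mu_0, \Sigma_0),
\]

where \(\pi_g(d_j)\) is the toxicity probability at dose \(d_j\) in subgroup \(g\), \(x_j\) is a standardized dose score, and the shared hyperparameters \(\mu\), \(\Sigma\) are what allow low-prevalence subgroups to borrow strength from the others.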
156.
157.
The responses obtained from response surface designs that are run sequentially often exhibit serial correlation or time trends. The order in which the runs of the design are performed then has an impact on the precision of the parameter estimators. This article proposes the use of a variable-neighbourhood search algorithm to compute run orders that guarantee a precise estimation of the effects of the experimental factors. The importance of using good run orders is demonstrated by seeking D-optimal run orders for a central composite design in the presence of an AR(1) autocorrelation pattern.
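The criterion being optimised can be sketched as follows. This is a hedged illustration only: the model matrix below is a random placeholder rather than the expanded model matrix of a central composite design, and random permutations stand in for the variable-neighbourhood search.

```python
# Sketch of run-order optimisation under AR(1) errors: permuting the run order
# permutes the rows of the model matrix relative to the (time-indexed) AR(1)
# correlation matrix V, which changes the D-criterion det(X' V^{-1} X). Random
# permutations stand in here for the variable-neighbourhood search.
import numpy as np

def ar1_corr(n_runs, rho):
    idx = np.arange(n_runs)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def log_d_criterion(X, order, rho):
    Xo = X[order]                                    # runs in this time order
    V_inv = np.linalg.inv(ar1_corr(len(order), rho))
    sign, logdet = np.linalg.slogdet(Xo.T @ V_inv @ Xo)
    return logdet if sign > 0 else -np.inf

rng = np.random.default_rng(1)
X = rng.normal(size=(12, 4))     # placeholder model matrix (12 runs, 4 terms)
rho = 0.5
best = max((rng.permutation(12) for _ in range(2000)),
           key=lambda order: log_d_criterion(X, order, rho))
print("best log det(X'V^-1 X) found:", round(log_d_criterion(X, best, rho), 3))
```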
158.
Summary.  We develop a general non-parametric approach to the analysis of clustered data via random effects. Assuming only that the link function is known, the regression functions and the distributions of both cluster means and observation errors are treated non-parametrically. Our argument proceeds by viewing the observation error at the cluster mean level as though it were a measurement error in an errors-in-variables problem, and using a deconvolution argument to access the distribution of the cluster mean. A Fourier deconvolution approach could be used if the distribution of the error-in-variables were known. In practice it is unknown, of course, but it can be estimated from repeated measurements, and in this way deconvolution can be achieved in an approximate sense. This argument might be interpreted as implying that large numbers of replicates are necessary for each cluster mean distribution, but that is not so; we avoid this requirement by incorporating statistical smoothing over values of nearby explanatory variables. Empirical rules are developed for the choice of smoothing parameter. Numerical simulations, and an application to real data, demonstrate small sample performance for this package of methodology. We also develop theory establishing statistical consistency.
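The Fourier-deconvolution step mentioned above rests on the standard identity for a sum of independent components (notation here is illustrative): if \(W = X + U\) with \(X\) and \(U\) independent, then

\[
\phi_W(t) = \phi_X(t)\,\phi_U(t)
\quad\Longrightarrow\quad
f_X(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-\mathrm{i}tx}\,
\frac{\phi_W(t)}{\phi_U(t)}\,\mathrm{d}t,
\]

where \(\phi\) denotes a characteristic function and \(\phi_U\) is assumed nonvanishing. When the error distribution, and hence \(\phi_U\), is unknown, it is estimated from within-cluster replicates, which yields the approximate deconvolution the abstract describes.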
159.
In some statistical problems a degree of explicit, prior information is available about the value taken by the parameter of interest, \(\theta\) say, although the information is much less than would be needed to place a prior density on the parameter's distribution. Often the prior information takes the form of a simple bound, ‘\(\theta > \theta_1\)’ or ‘\(\theta < \theta_1\)’, where \(\theta_1\) is determined by physical considerations or mathematical theory, such as positivity of a variance. A conventional approach to accommodating the requirement that \(\theta > \theta_1\) is to replace an estimator, \(\hat{\theta}\), of \(\theta\) by the maximum of \(\hat{\theta}\) and \(\theta_1\). However, this technique is generally inadequate. For one thing, it does not respect the strictness of the inequality \(\theta > \theta_1\), which can be critical in interpreting results. For another, it produces an estimator that does not respond in a natural way to perturbations of the data. In this paper we suggest an alternative approach, in which bootstrap aggregation, or bagging, is used to overcome these difficulties. Bagging gives estimators that, when subjected to the constraint \(\theta > \theta_1\), strictly exceed \(\theta_1\) except in extreme settings in which the empirical evidence strongly contradicts the constraint. Bagging also reduces estimator variability in the important case for which \(\hat{\theta}\) is close to \(\theta_1\), and more generally produces estimators that respect the constraint in a smooth, realistic fashion.
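A generic sketch of the bagging idea applied to a toy constrained estimator follows; the sample mean bounded below by zero is used purely for illustration, and this is not necessarily the authors' exact construction.

```python
# Generic sketch of bagging a constrained estimator (illustrative only, not
# necessarily the authors' exact construction): bootstrap the unconstrained
# estimate, impose the bound theta > theta1 on each resample, and average.
# The bagged value responds smoothly to perturbations of the data, unlike the
# plug-in rule max(theta_hat, theta1).
import numpy as np

def bagged_constrained_mean(x, theta1, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    boot = np.array([
        max(float(rng.choice(x, size=n, replace=True).mean()), theta1)
        for _ in range(n_boot)
    ])
    return float(boot.mean())

x = np.random.default_rng(42).normal(loc=0.05, scale=1.0, size=50)
print("plug-in max(mean, 0):", max(float(x.mean()), 0.0))
print("bagged estimate     :", bagged_constrained_mean(x, theta1=0.0))
```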
160.