Subscription full text: 3912 articles
Free access: 87 articles
Management: 682 articles
Ethnology: 18 articles
Talent studies: 1 article
Demography: 255 articles
Collected works: 26 articles
Theory and methodology: 528 articles
General: 44 articles
Sociology: 1794 articles
Statistics: 651 articles
2023: 27 articles
2021: 22 articles
2020: 60 articles
2019: 98 articles
2018: 93 articles
2017: 136 articles
2016: 132 articles
2015: 90 articles
2014: 95 articles
2013: 553 articles
2012: 152 articles
2011: 133 articles
2010: 120 articles
2009: 112 articles
2008: 129 articles
2007: 146 articles
2006: 120 articles
2005: 126 articles
2004: 133 articles
2003: 112 articles
2002: 117 articles
2001: 100 articles
2000: 85 articles
1999: 71 articles
1998: 60 articles
1997: 77 articles
1996: 65 articles
1995: 51 articles
1994: 54 articles
1993: 54 articles
1992: 38 articles
1991: 42 articles
1990: 46 articles
1989: 41 articles
1988: 35 articles
1987: 36 articles
1986: 28 articles
1985: 40 articles
1984: 39 articles
1983: 25 articles
1982: 36 articles
1981: 33 articles
1980: 32 articles
1979: 36 articles
1978: 24 articles
1977: 16 articles
1976: 29 articles
1975: 16 articles
1974: 18 articles
1973: 18 articles
Sort order: 3999 results found; search time: 15 ms
151.
Decades of questionnaire and interview studies have revealed various leadership behaviors observed in successful leaders. However, little is known about the actual behaviors that cause those observations. Given that lay observers are prone to cognitive biases, such as the halo effect, the validity of theories based exclusively on observed behaviors is questionable. We thus follow the call of leading scientists in the field and derive a parsimonious model of leadership behavior informed by established psychological theories. Building on the taxonomy of Yukl (2012), we propose three task-oriented behavior categories (enhancing understanding, strengthening motivation, and facilitating implementation) and three relation-oriented behavior categories (fostering coordination, promoting cooperation, and activating resources), each of which is further specified by a number of distinct behaviors. While the task-oriented behaviors are directed towards the accomplishment of shared objectives, the relation-oriented behaviors support this process by increasing the coordinated engagement of the team members. Our model contributes to the advancement of leadership behavior theory by (1) consolidating current taxonomies, (2) sharpening behavioral concepts, (3) specifying precise relationships between the categories, and (4) spurring new hypotheses that can be derived from existing findings in the field of psychology. To test our model, as well as the hypotheses derived from it, we advocate the development of new measurement instruments that overcome the limitations associated with questionnaire and interview studies.
152.
The linear sum assignment problem is a fundamental combinatorial optimisation problem. It can be stated broadly as follows: given an \(n \times m\) (\(m \ge n\)) benefit matrix \(B = (b_{ij})\), match each row to a distinct column so that the sum of the entries at the row-column intersections is maximised. This paper describes the application of a new fast heuristic algorithm, Asymmetric Greedy Search, to the asymmetric version (\(n \ne m\)) of the linear sum assignment problem. Extensive computational experiments, using a range of model graphs, demonstrate the effectiveness of the algorithm. The heuristic was also incorporated into an algorithm for the non-sequential protein structure matching problem, where the non-sequential alignment between two proteins, normally with different numbers of amino acids, must be maximised.
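The abstract does not spell out the steps of Asymmetric Greedy Search, so the sketch below shows only the simplest greedy baseline for the asymmetric problem; the function name, the random benefit matrix, and the scan order are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def greedy_assignment(B):
    """Greedy baseline for the asymmetric linear sum assignment problem.

    B is an n x m benefit matrix with m >= n. Repeatedly pick the largest
    remaining entry whose row and column are both free, so that every row
    ends up matched to a distinct column. Returns (assignment, total benefit).
    """
    n, m = B.shape
    assert m >= n, "expects at least as many columns as rows"
    order = np.argsort(B, axis=None)[::-1]        # flat indices, largest benefit first
    row_free = np.ones(n, dtype=bool)
    col_free = np.ones(m, dtype=bool)
    assignment, total = [-1] * n, 0.0
    for flat in order:
        i, j = divmod(int(flat), m)
        if row_free[i] and col_free[j]:
            assignment[i] = j
            row_free[i] = col_free[j] = False
            total += B[i, j]
            if not row_free.any():                # all rows matched
                break
    return assignment, total

rng = np.random.default_rng(0)
print(greedy_assignment(rng.random((5, 8))))
```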
153.
A common objective of cohort studies and clinical trials is to assess time-varying longitudinal continuous biomarkers as correlates of the instantaneous hazard of a study endpoint. We consider the setting where the biomarkers are measured in a designed sub-sample (i.e., a case-cohort or two-phase sampling design), as is standard practice in prevention trials. We address this problem via joint models, with the underlying biomarker trajectories characterized by a random effects model and their relationship with instantaneous risk characterized by a Cox model. For estimation and inference, we extend the conditional score method of Tsiatis and Davidian (Biometrika 88(2):447–458, 2001) to accommodate the two-phase biomarker sampling design, using augmented inverse probability weighting with nonparametric kernel regression. We present theoretical properties of the proposed estimators and finite-sample properties derived through simulations, and illustrate the methods with an application to the AIDS Clinical Trials Group 175 antiretroviral therapy trial. We discuss how the methods are useful for evaluating a Prentice surrogate endpoint, for assessing mediation, and for generating hypotheses about biological mechanisms of treatment efficacy.
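As a toy illustration of the two-phase weighting idea behind the augmented IPW estimator, the sketch below computes a Horvitz–Thompson (inverse probability weighted) mean under case-cohort-style sampling; the sampling probabilities and data-generating settings are assumptions for illustration only, not the trial's design.

```python
import numpy as np

rng = np.random.default_rng(1)

# Phase 1: full cohort; Phase 2: biomarker usable only in a designed
# sub-sample with known inclusion probabilities (cases oversampled).
n = 10_000
biomarker = rng.normal(loc=2.0, scale=1.0, size=n)
case = rng.random(n) < 0.05                  # rare study endpoint
pi = np.where(case, 1.0, 0.10)               # all cases, 10% of non-cases
sampled = rng.random(n) < pi

# Weight each phase-2 subject by 1/pi so the sub-sample represents the cohort.
w = 1.0 / pi[sampled]
ipw_mean = np.sum(w * biomarker[sampled]) / np.sum(w)
print(f"cohort mean {biomarker.mean():.3f}, IPW estimate {ipw_mean:.3f}")
```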
154.
Patient heterogeneity may complicate dose-finding in phase 1 clinical trials if the dose-toxicity curves differ between subgroups. Conducting separate trials within subgroups may lead to infeasibly small sample sizes in subgroups having low prevalence. Alternatively, it is not obvious how to conduct a single trial while accounting for heterogeneity. To address this problem, we consider a generalization of the continual reassessment method on the basis of a hierarchical Bayesian dose-toxicity model that borrows strength between subgroups under the assumption that the subgroups are exchangeable. We evaluate a design using this model that includes subgroup-specific dose selection and safety rules. A simulation study is presented that includes comparison of this method to three alternative approaches, on the basis of nonhierarchical models, that make different types of assumptions about within-subgroup dose-toxicity curves. The simulations show that the hierarchical model-based method is recommended in settings where the dose-toxicity curves are exchangeable between subgroups. We present practical guidelines for application and provide computer programs for trial simulation and conduct.
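A minimal sketch of the single-group, non-hierarchical continual reassessment method that the paper generalizes: a one-parameter power model with a grid-approximated posterior. The skeleton values, the exponential prior, and the grid are illustrative assumptions; the hierarchical borrowing across subgroups is not shown.

```python
import numpy as np

def crm_next_dose(doses_given, tox_observed, skeleton, target=0.25):
    """One-parameter CRM: toxicity prob at dose d is skeleton[d] ** a.

    Grid-approximates the posterior of a under an exponential(1) prior and
    recommends the dose whose posterior-mean toxicity is closest to target.
    """
    a_grid = np.linspace(0.05, 5.0, 500)
    post = np.exp(-a_grid)                        # exponential(1) prior on a
    for d, y in zip(doses_given, tox_observed):
        p = skeleton[d] ** a_grid                 # toxicity prob at dose d
        post *= p if y else (1.0 - p)             # Bernoulli likelihood update
    post /= post.sum()                            # discrete posterior on the grid
    p_mean = np.array([(skeleton[d] ** a_grid * post).sum()
                       for d in range(len(skeleton))])
    return int(np.argmin(np.abs(p_mean - target)))

skeleton = np.array([0.05, 0.12, 0.25, 0.40])     # assumed prior toxicity guesses
# Three patients treated (two at dose 0, one at dose 1), one toxicity observed.
print(crm_next_dose([0, 0, 1], [0, 0, 1], skeleton))
```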
155.
156.
157.
The responses obtained from response surface designs that are run sequentially often exhibit serial correlation or time trends. The order in which the runs of the design are performed then has an impact on the precision of the parameter estimators. This article proposes the use of a variable-neighbourhood search algorithm to compute run orders that guarantee precise estimation of the effects of the experimental factors. The importance of using good run orders is demonstrated by seeking D-optimal run orders for a central composite design in the presence of an AR(1) autocorrelation pattern.
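A sketch of the underlying computation: scoring a run order by the D-criterion under AR(1) errors, then improving it by pairwise swaps. This is a much simpler neighbourhood than the variable-neighbourhood search the article proposes; the factorial design matrix and the value of rho below are assumptions for illustration.

```python
import itertools
import numpy as np

def d_criterion(X, rho):
    """log det of the information matrix X' V^{-1} X for a given run order,
    where V is the AR(1) correlation matrix of the serially correlated errors."""
    n = X.shape[0]
    V = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    M = X.T @ np.linalg.solve(V, X)
    return np.linalg.slogdet(M)[1]

def swap_search(X, rho, iters=2000, seed=0):
    """Toy pairwise-swap improvement over run orders (keep a swap only if it
    increases the D-criterion); VNS would explore richer neighbourhoods."""
    rng = np.random.default_rng(seed)
    order = np.arange(X.shape[0])
    best = d_criterion(X[order], rho)
    for _ in range(iters):
        i, j = rng.choice(len(order), size=2, replace=False)
        order[[i, j]] = order[[j, i]]
        val = d_criterion(X[order], rho)
        if val > best:
            best = val
        else:
            order[[i, j]] = order[[j, i]]         # undo the swap
    return order, best

# Illustrative first-order model matrix for a 2^3 factorial design.
pts = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))
X = np.column_stack([np.ones(len(pts)), pts])
print(swap_search(X, rho=0.5))
```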
158.
Summary.  Meta-analysis in the presence of unexplained heterogeneity is frequently undertaken by using a random-effects model, in which the effects underlying different studies are assumed to be drawn from a normal distribution. Here we discuss the justification and interpretation of such models, by addressing in turn the aims of estimation, prediction and hypothesis testing. A particular issue that we consider is the distinction between inference on the mean of the random-effects distribution and inference on the whole distribution. We suggest that random-effects meta-analyses as currently conducted often fail to provide the key results, and we investigate the extent to which distribution-free, classical and Bayesian approaches can provide satisfactory methods. We conclude that the Bayesian approach has the advantage of naturally allowing for full uncertainty, especially for prediction. However, it is not without problems, including computational intensity and sensitivity to a priori judgements. We propose a simple prediction interval for classical meta-analysis and offer extensions to standard practice of Bayesian meta-analysis, making use of an example of studies of 'set shifting' ability in people with eating disorders.
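A sketch of a classical random-effects analysis with a simple prediction interval of the form the paper proposes, assuming the DerSimonian–Laird moment estimator for the between-study variance and a t reference distribution with k − 2 degrees of freedom; the data are illustrative, not the 'set shifting' example.

```python
import numpy as np
from scipy import stats

def re_meta_prediction_interval(y, v, level=0.95):
    """Random-effects meta-analysis plus a simple prediction interval.

    y: study effect estimates; v: their within-study variances. Estimates
    tau^2 by the DerSimonian-Laird moment method, then forms an interval
    for the effect in a new study: mu +/- t_{k-2} * sqrt(tau^2 + var(mu)).
    """
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)
    w = 1.0 / v                                    # fixed-effect weights
    ybar = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - ybar) ** 2)                # heterogeneity statistic
    tau2 = max(0.0, (Q - (k - 1)) /
               (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                      # random-effects weights
    mu = np.sum(w_star * y) / np.sum(w_star)
    var_mu = 1.0 / np.sum(w_star)
    t = stats.t.ppf(0.5 + level / 2, df=k - 2)
    half = t * np.sqrt(tau2 + var_mu)              # spread of a new study's effect
    return mu, (mu - half, mu + half)

y = [0.10, 0.30, 0.35, 0.65, 0.45, 0.15]           # illustrative effect estimates
v = [0.03, 0.03, 0.05, 0.07, 0.03, 0.04]           # illustrative variances
print(re_meta_prediction_interval(y, v))
```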
159.
Summary.  We develop a general non-parametric approach to the analysis of clustered data via random effects. Assuming only that the link function is known, the regression functions and the distributions of both cluster means and observation errors are treated non-parametrically. Our argument proceeds by viewing the observation error at the cluster mean level as though it were a measurement error in an errors-in-variables problem, and using a deconvolution argument to access the distribution of the cluster mean. A Fourier deconvolution approach could be used if the distribution of the errors-in-variables were known. In practice it is unknown, of course, but it can be estimated from repeated measurements, and in this way deconvolution can be achieved in an approximate sense. This argument might be interpreted as implying that large numbers of replicates are necessary for each cluster mean distribution, but that is not so; we avoid this requirement by incorporating statistical smoothing over values of nearby explanatory variables. Empirical rules are developed for the choice of smoothing parameter. Numerical simulations, and an application to real data, demonstrate the small-sample performance of this package of methodology. We also develop theory establishing statistical consistency.
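If the error distribution were known, the Fourier deconvolution step mentioned above could look like the sketch below, which recovers the density of the latent cluster means X from noisy observations Y = X + eps with eps normal. The normal error, the frequency truncation rule, and all sizes are illustrative assumptions; the paper's point is precisely that the error law is usually unknown and must itself be estimated from replicates.

```python
import numpy as np

def deconvolve_density(y, sigma_eps, x_grid, t_max=None):
    """Fourier deconvolution sketch for Y = X + eps, eps ~ N(0, sigma_eps^2).

    Divides the empirical characteristic function of Y by that of eps and
    inverts the Fourier transform, truncating frequencies at t_max to
    control the blow-up where the error cf is tiny.
    """
    n = len(y)
    if t_max is None:
        # One common rule: stop where the normal error cf falls to 1/n.
        t_max = np.sqrt(2.0 * np.log(n)) / sigma_eps
    t = np.linspace(-t_max, t_max, 512)
    phi_y = np.exp(1j * np.outer(t, y)).mean(axis=1)   # empirical cf of Y
    phi_eps = np.exp(-0.5 * (sigma_eps * t) ** 2)      # cf of the normal error
    ratio = phi_y / phi_eps                            # estimates the cf of X
    dt = t[1] - t[0]
    fx = np.real(np.exp(-1j * np.outer(x_grid, t)) @ ratio) * dt / (2 * np.pi)
    return np.clip(fx, 0.0, None)                      # densities are nonnegative

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 2000)                         # latent cluster means
y = x + rng.normal(0.0, 0.5, 2000)                     # observed with error
grid = np.linspace(-4, 4, 9)
print(deconvolve_density(y, 0.5, grid).round(3))
```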
160.
In some statistical problems a degree of explicit prior information is available about the value taken by the parameter of interest, \(\theta\) say, although the information is much less than would be needed to place a prior density on the parameter's distribution. Often the prior information takes the form of a simple bound, \(\theta > \theta_1\) or \(\theta < \theta_1\), where \(\theta_1\) is determined by physical considerations or mathematical theory, such as positivity of a variance. A conventional approach to accommodating the requirement that \(\theta > \theta_1\) is to replace an estimator, \(\hat{\theta}\), of \(\theta\) by the maximum of \(\hat{\theta}\) and \(\theta_1\). However, this technique is generally inadequate. For one thing, it does not respect the strictness of the inequality \(\theta > \theta_1\), which can be critical in interpreting results. For another, it produces an estimator that does not respond in a natural way to perturbations of the data. In this paper we suggest an alternative approach, in which bootstrap aggregation, or bagging, is used to overcome these difficulties. Bagging gives estimators that, when subjected to the constraint \(\theta > \theta_1\), strictly exceed \(\theta_1\) except in extreme settings in which the empirical evidence strongly contradicts the constraint. Bagging also reduces estimator variability in the important case for which \(\hat{\theta}\) is close to \(\theta_1\), and more generally produces estimators that respect the constraint in a smooth, realistic fashion.
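A minimal sketch of bagging a constrained estimator, assuming a variance-like target with \(\theta_1 = 0\): instead of clipping a single estimate at the bound, average the clipped estimator over bootstrap resamples, so the result exceeds the bound strictly unless virtually every resample violates it. The moment estimator and simulation settings are illustrative, not the paper's examples.

```python
import numpy as np

def bagged_constrained_estimate(data, estimator, theta1=0.0, B=500, seed=0):
    """Bagging a constrained estimator.

    Averages max(estimator(resample), theta1) over B bootstrap resamples,
    giving an estimate that responds smoothly to perturbations of the data
    rather than sticking at theta1.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = [estimator(data[rng.integers(0, n, n)]) for _ in range(B)]
    return float(np.mean(np.maximum(reps, theta1)))

# Illustrative target: a variance-like quantity that a raw moment estimator
# can push below zero, while the truth satisfies theta > 0.
def moment_estimator(x):
    return x.var(ddof=1) - 1.0        # total variance minus known noise variance

rng = np.random.default_rng(4)
x = rng.normal(0.0, np.sqrt(1.0 + 0.05), 30)   # true theta = 0.05, barely positive
print("plug-in, clipped:", max(moment_estimator(x), 0.0))
print("bagged          :", bagged_constrained_estimate(x, moment_estimator))
```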