252.
The aim of this case study is to discuss the role of technology in addressing environmental problems. The paper tries to scratch beneath the surface of the increasingly frequent 'quick-fix' solutions to the present environmental problems, based on such beguiling catchwords as Cleaner Technologies, Best Available Technologies, and Best Available Technologies Not Entailing Excessive Costs, in an attempt to discover whether there is any substance in them or whether they are just full of hot air. Recent data from case studies performed by the author in Germany and Finland, as well as a postal questionnaire in Denmark, are presented. The paper analyses and discusses the roles and responsibilities of designers, industrialists, and government policy-makers. It is argued that existing regulatory regimes, supranational industrial structures, and market mechanisms neither favour the development of cleaner technologies nor promote a reduction in consumption. Evidence from ongoing empirical research in Northwest Europe suggests that industry is far from developing and/or implementing cleaner technologies. The paper closes with a discussion of some of the policy implications involved and some examples of urgently needed further research.
253.
This paper explores an infrequently discussed methodological divide within qualitative research: that between conscious and unconscious accounts of organizational processes. The paper makes use of an empirical case study of problems encountered in the growth of a high-technology company. The conscious accounts of growth treat members of the firm as knowledgeable agents whose understandings are drawn upon to generate an account of growth in terms of interrelated processes of power, meaning and legitimacy. These conscious accounts are then complemented through an exploration of unconscious dynamics in the personality of the entrepreneur, the work group and the 'family' structure of the firm. It is argued that the key shift in moving between conscious and unconscious interpretation involves bracketing the reality claims implicit in conscious rationalizations and re-listening to the research material as a largely unconscious projection of individuals' 'inner worlds'. In unconscious interpretation, the character of language, its emotional content and, in particular, the sources of individual and group anxiety all assume central importance. Despite the established divide in the literature between these two forms of interpretation, it is argued that the case suggests the value and necessity of their integration, particularly for the understanding of creativity.
254.
Missing data, and the bias they can cause, are an almost ever-present concern in clinical trials. The last observation carried forward (LOCF) approach has frequently been used to handle missing data in clinical trials, and is often specified in conjunction with analysis of variance (LOCF ANOVA) for the primary analysis. Considerable advances in statistical methodology, and in our ability to implement these methods, have been made in recent years. Likelihood-based, mixed-effects model approaches developed under the missing at random (MAR) framework are now easy to implement and are commonly used to analyse clinical trial data. Furthermore, such approaches are more robust to the biases from missing data and provide better control of Type I and Type II errors than LOCF ANOVA. Empirical research and analytic proof have demonstrated that the behaviour of LOCF is uncertain, and in many situations it has not been conservative. Using LOCF as a composite measure of safety, tolerability and efficacy can lead to erroneous conclusions regarding the effectiveness of a drug. This approach also violates a fundamental basis of statistics, as it involves testing an outcome that is not a physical parameter of the population but rather a quantity that can be influenced by investigator behaviour, trial design, and so on. Practice should shift away from using LOCF ANOVA as the primary analysis and focus on likelihood-based, mixed-effects model approaches developed under the MAR framework, with missing not at random (MNAR) methods used to assess the robustness of the primary analysis. Copyright © 2004 John Wiley & Sons, Ltd.
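To make the contrast concrete, here is a minimal Python sketch, assuming a hypothetical long-format trial dataset with invented variable names (subject, visit, group, y): LOCF imputation by carrying the last observed value forward, set against a likelihood-based mixed model fitted to the observed data only. The random intercept-and-slope structure is a simplified stand-in for a full MMRM with unstructured covariance, not the exact analysis the abstract advocates.

```python
# A minimal sketch (hypothetical data, invented variable names) contrasting
# LOCF imputation with a likelihood-based mixed-effects analysis. The
# random intercept/slope model is a simplified stand-in for a full MMRM.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n, visits = 40, 4
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n), visits),
    "visit":   np.tile(np.arange(visits), n),
    "group":   np.repeat(rng.integers(0, 2, n), visits),
})
df["y"] = (10 + 0.5 * df["visit"] + 0.8 * df["group"] * df["visit"]
           + rng.normal(0, 1, len(df)))
# Monotone dropout: each subject may go missing from a random visit onward.
drop_at = rng.integers(1, visits + 1, n)           # == visits means complete
df.loc[df["visit"] >= drop_at[df["subject"].to_numpy()], "y"] = np.nan

# LOCF: carry each subject's last observed value forward, then compare
# group means at the final visit (the LOCF ANOVA-style endpoint contrast).
df["y_locf"] = df.groupby("subject")["y"].ffill()
endpoint = df[df["visit"] == visits - 1]
print(endpoint.groupby("group")["y_locf"].mean())

# Likelihood-based alternative: model the observed data only, without
# imputation, which remains valid under MAR.
mmrm_like = smf.mixedlm("y ~ visit * group", df.dropna(subset=["y"]),
                        groups="subject", re_formula="~visit").fit()
print(mmrm_like.summary())
```

Under MAR, the likelihood-based fit uses every observed value and imputes nothing, which is the source of the robustness advantage the abstract describes.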
255.
The last observation carried forward (LOCF) approach is commonly used to handle missing values in the primary analysis of clinical trials. However, recent evidence suggests that likelihood-based analyses developed under the missing at random (MAR) framework are sensible alternatives. The objective of this study was to assess the Type I error rates of a likelihood-based MAR approach – mixed-model repeated measures (MMRM) – compared with LOCF when estimating treatment contrasts for mean change from baseline to endpoint (Δ). Data emulating neuropsychiatric clinical trials were simulated in a 4 × 4 factorial arrangement of scenarios, using four patterns of mean change over time and four strategies for deleting data to generate subject dropout via an MAR mechanism. In data with no dropout, estimates of Δ and SE(Δ) from MMRM and LOCF were identical. In data with dropout, the Type I error rates (averaged across all scenarios) for MMRM and LOCF were 5.49% and 16.76%, respectively. In 11 of the 16 scenarios, the Type I error rate from MMRM was at least 1.00% closer to the nominal rate of 5.00% than the corresponding rate from LOCF; in no scenario did LOCF yield a Type I error rate that was at least 1.00% closer to the nominal rate than MMRM. The average estimate of SE(Δ) from MMRM was greater in data with dropout than in complete data, whereas the average estimate of SE(Δ) from LOCF was smaller in data with dropout than in complete data, suggesting that standard errors from MMRM better reflected the uncertainty in the data. The results of this investigation support those of previous studies, which found that MMRM provided reasonable control of Type I error even in the presence of MNAR missingness. No universally best approach to the analysis of longitudinal data exists. However, likelihood-based MAR approaches have been shown to perform well in a variety of situations and are a sensible alternative to the LOCF approach. MNAR methods can be used within a sensitivity-analysis framework to test the potential presence and impact of MNAR data, thereby assessing the robustness of results from an MAR method. Copyright © 2004 John Wiley & Sons, Ltd.
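A heavily compressed simulation in this spirit can be sketched as follows, with invented parameters throughout (compound-symmetric correlation, a logistic value-dependent dropout model that differs between arms, 2,000 replicates). It checks only the LOCF endpoint t-test rather than fitting an MMRM per replicate, so it illustrates the inflation side of the comparison, not the study's full design.

```python
# Compressed sketch: under a true null effect, arm-specific MAR dropout can
# inflate the Type I error of a LOCF endpoint t-test. All parameters are
# illustrative assumptions, not the paper's simulation design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, visits, reps = 50, 4, 2000
cov = 0.5 * np.ones((visits, visits)) + 0.5 * np.eye(visits)

def locf_endpoint(y, base_logit):
    """Monotone dropout whose odds depend on the last observed value
    (an MAR mechanism); returns each subject's LOCF endpoint."""
    endpoint = y[:, 0].copy()
    active = np.ones(len(y), dtype=bool)
    for t in range(1, y.shape[1]):
        p_drop = 1.0 / (1.0 + np.exp(-(base_logit + 0.5 * y[:, t - 1])))
        active &= rng.random(len(y)) > p_drop
        endpoint[active] = y[active, t]
    return endpoint

hits = 0
for _ in range(reps):
    a = rng.multivariate_normal(np.zeros(visits), cov, n)  # null: identical arms
    b = rng.multivariate_normal(np.zeros(visits), cov, n)
    ea = locf_endpoint(a, base_logit=-2.0)   # light dropout in arm A
    eb = locf_endpoint(b, base_logit=-0.5)   # heavy dropout in arm B
    hits += stats.ttest_ind(ea, eb).pvalue < 0.05
print(f"LOCF endpoint t-test Type I error: {hits / reps:.3f} (nominal 0.05)")
```

Because dropout is more likely after high observed values and is more common in one arm, the carried-forward endpoints differ in distribution between arms even though the true profiles are identical, so the rejection rate exceeds 5%.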
256.
Regression models for survival data are often specified from the hazard function while classical regression analysis of quantitative outcomes focuses on the mean value (possibly after suitable transformations). Methods for regression analysis of mean survival time and the related quantity, the restricted mean survival time, are reviewed and compared to a method based on pseudo-observations. Both Monte Carlo simulations and two real data sets are studied. It is concluded that while existing methods may be superior for analysis of the mean, pseudo-observations seem well suited when the restricted mean is studied.
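For readers unfamiliar with pseudo-observations, the following numpy sketch (toy data and cutoff τ are invented) computes jackknife pseudo-values of the Kaplan-Meier restricted mean, θ_i = nθ̂ − (n−1)θ̂₍₋ᵢ₎; these can then serve as the response in an ordinary regression, in the spirit of the pseudo-observation method compared in the paper.

```python
# Minimal sketch of pseudo-observations for the restricted mean survival
# time (RMST), computed from Kaplan-Meier estimates. Toy data and the
# cutoff tau are assumptions for illustration.
import numpy as np

def km_rmst(time, event, tau):
    """Area under the Kaplan-Meier curve up to tau."""
    order = np.argsort(time)
    t, d = time[order], event[order]
    surv, rmst, prev_t = 1.0, 0.0, 0.0
    at_risk = len(t)
    for ti, di in zip(t, d):
        if ti > tau:
            break
        rmst += surv * (ti - prev_t)        # step function between event times
        if di:
            surv *= 1 - 1 / at_risk         # KM drop at an observed event
        at_risk -= 1
        prev_t = ti
    return rmst + surv * (tau - prev_t)

rng = np.random.default_rng(2)
n, tau = 100, 2.0
time = rng.exponential(2.0, n)
cens = rng.exponential(4.0, n)
obs, event = np.minimum(time, cens), time <= cens

theta = km_rmst(obs, event, tau)
idx = np.arange(n)
pseudo = np.array([n * theta - (n - 1) * km_rmst(obs[idx != i], event[idx != i], tau)
                   for i in range(n)])
print(theta, pseudo.mean())   # pseudo-values average to roughly theta
# `pseudo` can now be regressed on covariates (e.g. by OLS or GEE).
```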
257.
The complex Bingham distribution is relevant for the shape analysis of landmark data in two dimensions. In this paper it is shown that the problem of simulating from this distribution reduces to simulation from a truncated multivariate exponential distribution. Several simulation methods are described and their efficiencies are compared.
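That reduction can be made concrete with a deliberately naive sampler: writing the parameter matrix as diag(−μ₁, …, −μ_k) with μ_j ≥ 0, the squared moduli s_j = |z_j|² have density proportional to exp(−μ·s) on the unit simplex (a truncated multivariate exponential), and the phases are independent and uniform. The rejection step below is a simple stand-in for the more efficient schemes whose efficiencies the paper compares.

```python
# Naive rejection sampler for the complex Bingham distribution with
# parameter matrix diag(-mu_1, ..., -mu_k), mu_j >= 0: the squared moduli
# follow a truncated multivariate exponential on the simplex, and phases
# are uniform. A simple stand-in for the paper's efficient methods.
import numpy as np

def rcomplex_bingham(mu, size, rng):
    mu = np.asarray(mu, dtype=float)
    mu = mu - mu.min()                         # shift so the bound is exp(0) = 1
    k, out = len(mu), []
    while len(out) < size:
        s = rng.dirichlet(np.ones(k))          # uniform proposal on the simplex
        if rng.random() < np.exp(-mu @ s):     # accept w.p. exp(-mu.s) <= 1
            phases = rng.uniform(0, 2 * np.pi, k)
            out.append(np.sqrt(s) * np.exp(1j * phases))
    return np.array(out)

rng = np.random.default_rng(3)
z = rcomplex_bingham([0.0, 2.0, 5.0], size=1000, rng=rng)
print(np.mean(np.abs(z) ** 2, axis=0))   # mass concentrates where mu_j is small
```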
258.
A marker's capacity to predict risk of a disease depends on the disease prevalence in the target population and on its classification accuracy, i.e. its ability to discriminate diseased subjects from non-diseased subjects. The latter is often considered an intrinsic property of the marker: being independent of disease prevalence, it is more likely to be similar across populations than risk prediction measures are. In this paper, we are interested in evaluating the population-specific performance of a risk prediction marker in terms of positive predictive value (PPV) and negative predictive value (NPV) at given thresholds, when samples are available from the target population as well as from another population. A default strategy is to estimate PPV and NPV using samples from the target population only. However, when the marker's classification accuracy, as characterized by a specific point on the receiver operating characteristic (ROC) curve, is similar across populations, borrowing information across populations allows increased efficiency in estimating PPV and NPV. We develop estimators that optimally combine information across populations. We apply this methodology to a cross-sectional study in which we evaluate PCA3 as a risk prediction marker for prostate cancer among subjects with or without a previous negative biopsy.
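The underlying arithmetic is Bayes' rule, which makes the roles of the ROC point and the prevalence explicit. The sketch below, with invented data, uses naive pooling of the two samples for sensitivity and specificity and target-only prevalence; the paper's estimators combine the populations with optimal weights rather than by this simple pooling.

```python
# Sketch of the idea: if sensitivity and specificity at a threshold are
# shared across populations, estimate them from pooled data while taking
# prevalence from the target population alone. Pooling here is naive; the
# paper derives optimally weighted combined estimators.
import numpy as np

def ppv_npv(sens, spec, prev):
    """Bayes' rule: predictive values from an ROC point and prevalence."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

rng = np.random.default_rng(4)

def sample(n, prev):
    """Hypothetical marker values; diseased subjects shifted upward."""
    d = rng.random(n) < prev
    return rng.normal(d * 1.2, 1.0), d

m_tgt, d_tgt = sample(300, prev=0.20)       # target population
m_aux, d_aux = sample(1200, prev=0.05)      # auxiliary population
c = 1.0                                     # positivity threshold (assumed)

m, d = np.concatenate([m_tgt, m_aux]), np.concatenate([d_tgt, d_aux])
sens = np.mean(m[d] >= c)                   # pooled classification accuracy
spec = np.mean(m[~d] < c)
prev = d_tgt.mean()                         # prevalence from target only
print(ppv_npv(sens, spec, prev))
```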
259.
Longitudinal studies suffer from patient dropout. The dropout process may be informative if there is an association between dropout patterns and the rate of change in the response over time. Multiple patterns are plausible in that different causes of dropout may contribute to different patterns. These multiple patterns can be dichotomized into two groups: quantitative and qualitative interaction. Quantitative interaction indicates that each of the multiple sources biases the estimate of the rate of change in the same direction, although with differing magnitudes. Alternatively, qualitative interaction results in the multiple sources biasing the estimate of the rate of change in opposing directions. Qualitative interaction is of special concern, since it is less likely to be detected by conventional methods and can lead to highly misleading slope estimates. We explore a test for qualitative interaction based on simultaneous confidence intervals. The test accommodates the realistic situation where reasons for dropout are not fully understood, or even entirely unknown, and allows for an additional level of clustering among participating subjects. We apply these methods to a study of tumor growth rates in mice and to a longitudinal study of rates of change in cognitive functioning in Alzheimer's patients.
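The flavour of such a test can be sketched with Bonferroni-style simultaneous intervals: qualitative interaction is flagged when some pattern-specific slope intervals lie entirely above zero while others lie entirely below. The inputs below are hypothetical, and this is only a simplified screen; the paper's procedure additionally accommodates unknown dropout reasons and the extra level of clustering.

```python
# Simplified simultaneous-confidence-interval screen for qualitative
# interaction across dropout patterns: flag it when some pattern-specific
# slope intervals sit wholly above zero and others wholly below.
import numpy as np
from scipy import stats

def qualitative_interaction(slopes, ses, alpha=0.05):
    slopes, ses = np.asarray(slopes), np.asarray(ses)
    k = len(slopes)
    z = stats.norm.ppf(1 - alpha / (2 * k))   # Bonferroni simultaneous level
    lo, hi = slopes - z * ses, slopes + z * ses
    return bool(np.any(lo > 0) and np.any(hi < 0))

# Hypothetical rate-of-change estimates, one per dropout pattern.
print(qualitative_interaction([0.9, 0.7, -0.8], [0.3, 0.25, 0.3]))  # True
print(qualitative_interaction([0.9, 0.7, 0.1], [0.3, 0.25, 0.3]))   # False
```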
260.
The purpose of this paper is to highlight some classic issues in the measurement of change and to show how contemporary solutions can be used to deal with some of these issues. Five classic issues will be raised here: (1) separating individual changes from group differences; (2) options for incomplete longitudinal data over time; (3) options for nonlinear changes over time; (4) measurement invariance in studies of changes over time; and (5) new opportunities for modeling dynamic changes. For each issue we will describe the problem and then review some contemporary solutions based on Structural Equation Models (SEM). We will fit these SEMs using existing panel data on cognitive variables from the Health & Retirement Study (HRS). This is not intended as an overly technical treatment, so only a few basic equations are presented, examples are displayed graphically, and more complete references to the contemporary solutions are given throughout.
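For issue (1), a linear latent growth curve model is equivalent in likelihood terms to a random-coefficients mixed model, which gives a compact way to separate individual change from group-level change: the fixed effects capture the average trajectory, while the random-slope variance quantifies individual differences in change. The sketch below uses that equivalence with simulated placeholder data; the variable names are not the actual HRS cognitive measures.

```python
# Issue (1) in miniature: fit a linear growth model with person-specific
# intercepts and slopes. Fixed effects = group-level change; random-effect
# variances = individual differences. Data and names are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n, waves = 200, 5
intercepts = rng.normal(50, 5, n)        # person-specific starting level
slopes = rng.normal(-0.8, 0.6, n)        # person-specific rate of change
df = pd.DataFrame({
    "person": np.repeat(np.arange(n), waves),
    "wave":   np.tile(np.arange(waves), n),
})
df["cognition"] = (intercepts[df["person"].to_numpy()]
                   + slopes[df["person"].to_numpy()] * df["wave"]
                   + rng.normal(0, 2, len(df)))

growth = smf.mixedlm("cognition ~ wave", df, groups="person",
                     re_formula="~wave").fit()
print(growth.summary())   # fixed effects: group change; random effects: individual change
```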