Full-text access type
Paid full text | 28278 articles |
Free | 781 articles |
Free (domestic) | 1 article |
Subject classification
Management | 4132 articles |
Ethnology | 125 articles |
Talent studies | 6 articles |
Demography | 2656 articles |
Collected works | 125 articles |
Popular education | 2 articles |
Theory and methodology | 2651 articles |
Current status and development | 1 article |
General | 546 articles |
Sociology | 13398 articles |
Statistics | 5418 articles |
Publication year
2023 | 134 articles |
2021 | 167 articles |
2020 | 395 articles |
2019 | 561 articles |
2018 | 650 articles |
2017 | 901 articles |
2016 | 722 articles |
2015 | 542 articles |
2014 | 675 articles |
2013 | 4591 articles |
2012 | 959 articles |
2011 | 867 articles |
2010 | 677 articles |
2009 | 583 articles |
2008 | 708 articles |
2007 | 683 articles |
2006 | 663 articles |
2005 | 722 articles |
2004 | 631 articles |
2003 | 638 articles |
2002 | 680 articles |
2001 | 745 articles |
2000 | 721 articles |
1999 | 652 articles |
1998 | 499 articles |
1997 | 429 articles |
1996 | 448 articles |
1995 | 416 articles |
1994 | 414 articles |
1993 | 419 articles |
1992 | 479 articles |
1991 | 461 articles |
1990 | 406 articles |
1989 | 395 articles |
1988 | 409 articles |
1987 | 374 articles |
1986 | 346 articles |
1985 | 407 articles |
1984 | 379 articles |
1983 | 352 articles |
1982 | 301 articles |
1981 | 261 articles |
1980 | 230 articles |
1979 | 267 articles |
1978 | 259 articles |
1977 | 220 articles |
1976 | 186 articles |
1975 | 214 articles |
1974 | 169 articles |
1973 | 155 articles |
Sort order: 10,000 results found (search time: 15 ms)
941.
In this paper, we analyze the ethical issues of using honesty and integrity tests in employment screening. Our focus is on the United States context: legal requirements related to applicant privacy differ in other countries, but we posit that our proposed balancing test is broadly applicable. We start by discussing why companies have ethical and legal obligations, based on a stakeholder analysis, to assess the integrity of potential employees. We then consider how companies currently use background checks as a pre-employment screening tool, noting their limitations. We then take up honesty and integrity testing, focusing particularly on the problems of false positives and due process. We offer a balancing test for the use of honesty and integrity testing that weighs three factors: (1) the potential harm posed by a dishonest employee in a particular job, (2) the linkage between the test and the assessment process, and (3) the accuracy and validity of the honesty and integrity test. We conclude with implications for practice and future research.
942.
One of the objectives of personalized medicine is to base treatment decisions on a biomarker measurement. It is therefore often of interest to evaluate how well a biomarker predicts the response to a treatment. A popular methodology for doing so consists of using a regression model and testing for an interaction between treatment assignment and biomarker. However, the existence of an interaction is necessary but not sufficient for a biomarker to be predictive. Hence, the use of the marker-by-treatment predictiveness curve has been recommended. In addition to evaluating how well a single continuous biomarker predicts treatment response, this curve can further help to define an optimal threshold. It displays the risk of a binary outcome as a function of the quantiles of the biomarker, for each treatment group. Methods that assume a binary outcome or rely on a proportional hazards model for a time-to-event outcome have been proposed to estimate this curve. In this work, we propose extensions for censored data. They rely on a time-dependent logistic model, which we propose to estimate via inverse probability of censoring weighting. We present simulation results and three applications to prostate cancer, liver cirrhosis, and lung cancer data. They suggest that a large number of events must be observed to define a threshold with sufficient accuracy for clinical usefulness. They also illustrate that when the treatment effect varies with the time horizon that defines the outcome, the optimal threshold also depends on this time horizon.
943.
Mini-batch algorithms have become increasingly popular due to the need to solve optimization problems based on large-scale data sets. Using an existing online expectation–maximization (EM) algorithm framework, we demonstrate how mini-batch (MB) algorithms may be constructed, and propose a scheme for the stochastic stabilization of the constructed mini-batch algorithms. Theoretical results regarding the convergence of the mini-batch EM algorithms are presented. We then demonstrate how the mini-batch framework may be applied to conduct maximum likelihood (ML) estimation of mixtures of exponential family distributions, with emphasis on ML estimation for mixtures of normal distributions. Via a simulation study, we demonstrate that the mini-batch algorithm for mixtures of normal distributions can outperform the standard EM algorithm. Further evidence of the performance of the mini-batch framework is provided via an application to the well-known MNIST data set.
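As a rough illustration of the general idea (not the authors' implementation), a mini-batch EM scheme for a one-dimensional Gaussian mixture can be sketched as follows. The step-size schedule, batch size, and quantile-based initialisation below are all assumptions chosen for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def minibatch_em_gmm(x, k=2, batch_size=200, n_iter=500, lr0=1.0, alpha=0.6):
    """Sketch of mini-batch EM for a 1-D Gaussian mixture.

    Running sufficient statistics are updated with a decaying step size
    gamma_t = lr0 * (t + 1) ** -alpha, in the spirit of online EM.
    """
    n = len(x)
    # crude initialisation from data quantiles
    pi = np.full(k, 1.0 / k)
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))
    var = np.full(k, np.var(x))
    # running sufficient statistics: weights, sums, sums of squares
    s0, s1, s2 = pi.copy(), pi * mu, pi * (var + mu**2)
    for t in range(n_iter):
        xb = x[rng.choice(n, size=batch_size, replace=False)]
        # E-step on the mini-batch: posterior responsibilities
        dens = (np.exp(-0.5 * (xb[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)
        gamma = lr0 * (t + 1) ** -alpha
        # stochastic update of the sufficient statistics
        s0 = (1 - gamma) * s0 + gamma * r.mean(axis=0)
        s1 = (1 - gamma) * s1 + gamma * (r * xb[:, None]).mean(axis=0)
        s2 = (1 - gamma) * s2 + gamma * (r * xb[:, None] ** 2).mean(axis=0)
        # M-step from the running statistics
        pi = s0 / s0.sum()
        mu = s1 / s0
        var = np.maximum(s2 / s0 - mu**2, 1e-6)
    return pi, mu, var

# synthetic two-component data: N(-2, 1) and N(3, 1) in equal proportions
x = np.concatenate([rng.normal(-2, 1, 5000), rng.normal(3, 1, 5000)])
pi, mu, var = minibatch_em_gmm(x)
```

Each iteration touches only `batch_size` observations rather than the full sample, which is the source of the speed-up on large data sets; the decaying step size provides the stochastic stabilization the abstract alludes to.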
944.
Insights into the dynamics of human behavior in response to flooding are urgently needed for the development of effective integrated flood risk management strategies, and for integrating human behavior in flood risk modeling. However, our understanding of the dynamics of risk perceptions, attitudes, individual recovery processes, and adaptive (i.e., risk-reducing) intention and behavior is currently limited because of the predominant use of cross-sectional surveys in the flood risk domain. Here, we present results from one of the first panel surveys in the flood risk domain covering a relatively long period of time (four years after a damaging event), three survey waves, and a wide range of topics relevant to the role of citizens in integrated flood risk management. The panel data, consisting of 227 individuals affected by the 2013 flood in Germany, were analyzed using repeated-measures ANOVA and latent class growth analysis (LCGA) to exploit the unique temporal dimension of the data set. Results show that attitudes, such as respondents' perceived responsibility within flood risk management, remain fairly stable over time. Changes are observed partly for risk perceptions and mainly for individual recovery and intentions to undertake risk-reducing measures. The LCGA reveals heterogeneous recovery and adaptation trajectories that need to be taken into account in policies supporting individual recovery and stimulating societal preparedness. More panel studies in the flood risk domain are needed to gain better insights into the dynamics of individual recovery, risk-reducing behavior, and associated risk and protective factors.
945.
This paper analyses how network embeddedness affects the exploration and exploitation of R&D project performance. By developing joint projects, partners and projects are linked to one another and form a network that generates social capital. We examine how location, which determines access to information and knowledge within a network of relationships, affects the performance of projects. We consider this question in the setup of exploration and exploitation projects, using a database built from an EU framework. We find that each of the structural embeddedness dimensions (degree, betweenness, and eigenvector centrality) has a different impact on exploration and exploitation project performance. Our empirical analysis contributes to the project management literature and social capital theory by including the effect that the acquisition of external knowledge has on project performance.
946.
Slack Neale J. Singh Gurmeet Narayan Jashwini Sharma Shavneet 《Public Organization Review》2020,20(4):631-646
Public Organization Review - The purpose of this study is to explore how servant leadership affects public sector employee engagement, organisational ethical climate, and public sector reform, of...
947.
This paper argues that public services should shift from a product-dominant logic to a service approach. By adopting a service orientation, the experiential, inter-organizational, and systemic nature of public service delivery can be considered together with the role of service users as co-producers. The paper explains how co-production operates in practice through the application of service blueprinting, and presents a case from higher education in which creating a blueprint brought teachers and students together to focus on the design of student enrolment, thereby improving the student experience and supporting co-production.
948.
Nonparametric Estimation of the Number of Drug Users in Hong Kong Using Repeated Multiple Lists
Richard M. Huggins Paul S.F. Yip Jakub Stoklosa 《Australian & New Zealand Journal of Statistics》2016,58(1):1-13
We update a previous approach to the estimation of the size of an open population when there are multiple lists at each time point. Our motivation is 35 years of longitudinal data on the detection of drug users by the Central Registry of Drug Abuse in Hong Kong. We develop a two-stage smoothing spline approach. This gives a flexible and easily implemented alternative to the previous method which was based on kernel smoothing. The new method retains the property of reducing the variability of the individual estimates at each time point. We evaluate the new method by means of a simulation study that includes an examination of the effects of variable selection. The new method is then applied to data collected by the Central Registry of Drug Abuse. The parameter estimates obtained are compared with the well known Jolly–Seber estimates based on single capture methods.
949.
Bernard Sébastien David Hoffman Clémence Rigaux Franck Pellissier Jérôme Msihid 《Pharmaceutical statistics》2016,15(6):450-458
This article describes how a frequentist model averaging approach can be used for concentration–QT analyses in the context of thorough QTc studies. Based on simulations, we conclude that, starting from three candidate model families (linear, exponential, and Emax), the model averaging approach leads to treatment-effect estimates that are quite robust with respect to control of the type I error in nearly all simulated scenarios; in particular, with the model averaging approach, the type I error appears less sensitive to model misspecification than with the widely used linear model. We also observed few differences in performance between the model averaging approach and the more classical model selection approach, but we believe that, although both can be recommended in practice, the model averaging approach is more appealing because of deficiencies of the model selection approach pointed out in the literature. We think that a model averaging or model selection approach should be systematically considered when conducting concentration–QT analyses. Copyright © 2016 John Wiley & Sons, Ltd.
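A minimal sketch of the frequentist model averaging idea, assuming simulated exposure–response data and AIC-based weights (the paper's exact weighting scheme, study design, and candidate parameterisations are not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# hypothetical concentration / delta-QTc data; true mean follows an Emax shape
conc = rng.uniform(0, 10, 120)
dqtc = 10.0 * conc / (3.0 + conc) + rng.normal(0, 1.5, 120)

# three candidate model families with starting values for the optimiser
models = {
    "linear": (lambda c, a, b: a + b * c, [0.0, 1.0]),
    "emax": (lambda c, e0, emax, ec50: e0 + emax * c / (ec50 + c), [0.0, 5.0, 2.0]),
    "exp": (lambda c, a, b, k: a + b * (1 - np.exp(-k * c)), [0.0, 5.0, 0.5]),
}

fits = {}
n = len(conc)
for name, (f, p0) in models.items():
    p, _ = curve_fit(f, conc, dqtc, p0=p0, maxfev=20000)
    rss = np.sum((dqtc - f(conc, *p)) ** 2)
    # Gaussian-likelihood AIC; +1 parameter for the residual variance
    aic = n * np.log(rss / n) + 2 * (len(p) + 1)
    fits[name] = (f, p, aic)

# Akaike weights across the candidate families
aics = np.array([a for _, _, a in fits.values()])
w = np.exp(-0.5 * (aics - aics.min()))
w /= w.sum()

c_ref = 8.0  # hypothetical reference concentration
estimate = sum(wi * f(c_ref, *p) for wi, (f, p, _) in zip(w, fits.values()))
```

The averaged estimate down-weights poorly fitting families (here the linear model) instead of committing to a single selected model, which is the robustness property the abstract describes.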
950.
Two new nonparametric common principal component model selection procedures based on bootstrap distributions of the vector correlations of all combinations of the eigenvectors from two groups are proposed. The performance of these methods is compared in a simulation study to the two parametric methods previously suggested by Flury in 1988, as well as modified versions of two nonparametric methods proposed by Klingenberg in 1996 and then by Klingenberg and McIntyre in 1998. The proposed bootstrap vector correlation distribution (BVD) method is shown to outperform all of the existing methods in most of the simulated situations considered.
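One possible reading of such a bootstrap scheme, sketched under assumed simulated data (this is an illustration of the ingredients, not the authors' BVD procedure verbatim): resample each group, extract eigenvectors of the sample covariance, and record the absolute correlations between all eigenvector pairs.

```python
import numpy as np

rng = np.random.default_rng(2)

def eigvecs(x):
    """Eigenvectors of the sample covariance, columns ordered by
    decreasing eigenvalue."""
    vals, vecs = np.linalg.eigh(np.cov(x, rowvar=False))
    return vecs[:, np.argsort(vals)[::-1]]

def bootstrap_vector_correlations(x1, x2, n_boot=200):
    """Bootstrap distribution of |correlations| between all pairs of
    eigenvectors from two groups (sketch)."""
    p = x1.shape[1]
    out = np.empty((n_boot, p, p))
    for b in range(n_boot):
        # resample each group with replacement
        v1 = eigvecs(x1[rng.choice(len(x1), len(x1))])
        v2 = eigvecs(x2[rng.choice(len(x2), len(x2))])
        # |cosine| between every pair of eigenvectors (sign is arbitrary)
        out[b] = np.abs(v1.T @ v2)
    return out

# two groups sharing principal axes but with different variances,
# i.e. a common principal component model holds
x1 = rng.multivariate_normal(np.zeros(3), np.diag([4.0, 1.0, 0.25]), 300)
x2 = rng.multivariate_normal(np.zeros(3), np.diag([6.0, 1.5, 0.1]), 300)
boot = bootstrap_vector_correlations(x1, x2)
diag_means = boot.mean(axis=0).diagonal()
```

Under a common principal component model, the matched-pair (diagonal) correlations should concentrate near one, while a model selection rule would compare these bootstrap distributions across candidate models.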