261.
This study considers a typical scheduling environment that is influenced by the behavioral phenomenon of multitasking. Under multitasking, the processing of a selected job suffers from interruption by other jobs that are available but unfinished. This situation arises in a wide variety of applications; for example, administration, manufacturing, and process and project management. Several classical solution methods for scheduling problems no longer apply in the presence of multitasking. The solvability of any scheduling problem under multitasking is no easier than that of the corresponding classical problem. We develop optimal algorithms for some fundamental and practical single machine scheduling problems with multitasking. For other problems, we show that they are computationally intractable, even though in some cases the corresponding problem in classical scheduling is efficiently solvable. We also study the cost increase and value gained due to multitasking. This analysis informs companies about how much it would be worthwhile to invest in measures to reduce or encourage multitasking.
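The interruption effect described above can be illustrated with a toy single-machine model (a simplified sketch, not the paper's exact formulation): when a job is processed, each still-waiting job interrupts it for a fixed fraction of that waiting job's processing time, so every later completion is delayed.

```python
def total_completion_time(jobs, interrupt_frac=0.0):
    """Total completion time under shortest-processing-time order.

    interrupt_frac = 0 gives the classical (no multitasking) schedule;
    a positive fraction models interruptions by waiting unfinished jobs.
    """
    order = sorted(jobs)  # SPT order
    t = 0.0
    total = 0.0
    for i, p in enumerate(order):
        # jobs still waiting after this one each steal a fraction of
        # their processing time as interruptions
        t += p + interrupt_frac * sum(order[i + 1:])
        total += t
    return total

base = total_completion_time([2, 4, 6])         # classical SPT schedule
multi = total_completion_time([2, 4, 6], 0.25)  # with multitasking
```

Even this crude model shows the cost increase the abstract refers to: every interruption pushes back all subsequent completions.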
262.
In this paper, we analyze the ethical issues of using honesty and integrity tests in employment screening. Our focus will be on the United States context: legal requirements related to applicant privacy differ in other countries, but we posit that our proposed balancing test is broadly applicable. We start by discussing why companies have ethical and legal obligations, based on a stakeholder analysis, to assess the integrity of potential employees. We then move to a consideration of how companies currently use background checks as a pre-employment screening tool, noting their limitations. We then take up honesty and integrity testing, focusing particularly on the problems of false positives and due process. We offer a balancing test for the use of honesty and integrity testing that weighs three factors: (1) the potential harm posed by a dishonest employee in a particular job, (2) the linkage between the test and the assessment process, and (3) the accuracy and validity of the honesty and integrity test. We conclude with implications for practice and future research.
264.
Two new nonparametric common principal component model selection procedures based on bootstrap distributions of the vector correlations of all combinations of the eigenvectors from two groups are proposed. The performance of these methods is compared in a simulation study to the two parametric methods previously suggested by Flury in 1988, as well as modified versions of two nonparametric methods proposed by Klingenberg in 1996 and then by Klingenberg and McIntyre in 1998. The proposed bootstrap vector correlation distribution (BVD) method is shown to outperform all of the existing methods in most of the simulated situations considered.
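A minimal sketch of the bootstrap idea (restricted to the leading eigenvector and synthetic data for illustration; the actual BVD procedure considers all combinations of eigenvectors from the two groups):

```python
import numpy as np

rng = np.random.default_rng(0)

def leading_eigvec(x):
    # eigenvector of the sample covariance matrix with the largest
    # eigenvalue (np.linalg.eigh returns eigenvalues in ascending order)
    _, vecs = np.linalg.eigh(np.cov(x, rowvar=False))
    return vecs[:, -1]

def bootstrap_vector_correlations(x1, x2, n_boot=200):
    # bootstrap distribution of |cos angle| between the two groups'
    # leading principal axes; values near 1 support a common axis
    out = np.empty(n_boot)
    for b in range(n_boot):
        s1 = x1[rng.integers(0, len(x1), len(x1))]
        s2 = x2[rng.integers(0, len(x2), len(x2))]
        out[b] = abs(leading_eigvec(s1) @ leading_eigvec(s2))
    return out
```

When the two groups genuinely share a principal axis, the bootstrap distribution concentrates near 1, which is the signal the selection procedure exploits.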
265.
In this article, we propose a new class of distributions defined by a quantile function, which nests several distributions as its members. The quantile function proposed here is the sum of the quantile functions of the generalized Pareto and Weibull distributions. Various distributional properties and reliability characteristics of the class are discussed. The estimation of the parameters of the model using L-moments is studied. Finally, we apply the model to a real-life data set.
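Under the construction described, the class's quantile function can be evaluated directly; the sketch below uses one common parameterization of each component (the parameterization is an assumption, not taken from the paper):

```python
import math

def gp_quantile(u, sigma, xi):
    # generalized Pareto quantile function (xi != 0 case),
    # scale sigma, shape xi
    return (sigma / xi) * ((1.0 - u) ** (-xi) - 1.0)

def weibull_quantile(u, lam, k):
    # Weibull quantile function, scale lam, shape k
    return lam * (-math.log(1.0 - u)) ** (1.0 / k)

def class_quantile(u, sigma, xi, lam, k):
    # the proposed class: the sum of two quantile functions is itself
    # a valid quantile function (non-decreasing on (0, 1))
    return gp_quantile(u, sigma, xi) + weibull_quantile(u, lam, k)
```

The sum construction is what lets the class nest its members: setting either component's contribution toward zero recovers the other distribution.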
266.
In this article, we provide a semiparametric approach to the joint measurement of technical and allocative inefficiency in a way that the internal consistency of the specification of allocative errors in the objective function (e.g., cost function) and the derivative equations (e.g., share or input demand functions) is assured. We start from the Cobb–Douglas production and shadow cost system. We show that the shadow cost system has a closed-form likelihood function, contrary to what was previously thought. In turn, we use the method of local maximum likelihood applied to a system of equations to obtain firm-specific parameter estimates (which reveal heterogeneity in production) as well as measures of technical and allocative inefficiency and its cost. We illustrate its practical application using data on U.S. electric utilities.
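As a rough sketch of where allocative errors enter (a simplified Cobb–Douglas cost-share relation, not the authors' full shadow cost system): a firm that cost-minimizes against shadow prices w_j·exp(ξ_j) instead of market prices demands x_j ∝ α_j / (w_j·exp(ξ_j)), so its observed cost shares are distorted away from the optimal Cobb–Douglas shares α_j / Σα_k.

```python
import math

def cobb_douglas_shares(alpha, xi):
    """Observed cost shares at market prices for a Cobb-Douglas firm
    that optimizes against shadow prices w_j * exp(xi_j).

    s_j = alpha_j * exp(-xi_j) / sum_k alpha_k * exp(-xi_k);
    xi_j = 0 for all j recovers the optimal shares.
    """
    w = [a * math.exp(-e) for a, e in zip(alpha, xi)]
    total = sum(w)
    return [v / total for v in w]
```

An input whose shadow price exceeds its market price (ξ_j > 0) is under-used, so its observed cost share falls below the optimal one; the gap is what the allocative-inefficiency measures quantify.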
267.
In this paper we propose a new lifetime model for multivariate survival data in the presence of surviving fractions and examine some of its properties. Its genesis is based on situations in which there are m types of unobservable competing causes, where each cause is related to a time of occurrence of an event of interest. Our model is a multivariate extension of the univariate survival cure rate model proposed by Rodrigues et al. [J. Rodrigues, V.G. Cancho, M. de Castro, and F. Louzada-Neto, On the unification of long-term survival models, Statist. Probab. Lett. 79 (2009), pp. 753–759. doi: 10.1016/j.spl.2008.10.029]. The inferential approach uses maximum likelihood tools. We perform a simulation study in order to verify the asymptotic properties of the maximum likelihood estimators; the study also focuses on the size and power of the likelihood ratio test. The methodology is illustrated on a real data set on customer churn.
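In the unified long-term survival framework of Rodrigues et al. that this model extends, the simplest univariate member takes a Poisson number of latent competing causes, which yields the classical promotion-time cure model. A minimal sketch (the Weibull distribution for the latent cause times is an illustrative assumption):

```python
import math

def weibull_cdf(t, lam=2.0, k=1.5):
    # illustrative distribution F(t) for the latent times to the event
    return 1.0 - math.exp(-((t / lam) ** k))

def pop_survival_poisson(t, theta):
    # population survival with N ~ Poisson(theta) competing causes:
    # S_pop(t) = exp(-theta * F(t)); as t -> inf, F(t) -> 1, so the
    # surviving (cured) fraction is exp(-theta)
    return math.exp(-theta * weibull_cdf(t))
```

Because S_pop(t) plateaus at exp(-theta) > 0 rather than decaying to zero, the model accommodates the surviving fractions the abstract refers to.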
268.
In this paper, we propose a flexible cure rate survival model by assuming that the number of competing causes of the event of interest follows the Negative Binomial distribution and the time to event follows a Weibull distribution. Indeed, we introduce the Weibull-Negative-Binomial (WNB) distribution, which can be used in order to model survival data when the hazard rate function is increasing, decreasing, or of some non-monotone shape. Another advantage of the proposed model is that it has some distributions commonly used in lifetime analysis as particular cases. Moreover, the proposed model includes as special cases some of the well-known cure rate models discussed in the literature. We consider a frequentist analysis for parameter estimation of a WNB model with cure rate. Then, we derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes and present some ways to perform global influence analysis. Finally, the methodology is illustrated on a medical data set.
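Under a Negative Binomial number of competing causes, the population survival function has a closed form; a minimal sketch (the dispersion parameterization phi is my assumption, not necessarily the paper's):

```python
import math

def weibull_cdf(t, lam, k):
    # Weibull time-to-event distribution, scale lam, shape k
    return 1.0 - math.exp(-((t / lam) ** k))

def wnb_population_survival(t, theta, phi, lam, k):
    # S_pop(t) = (1 + phi * theta * F(t)) ** (-1 / phi);
    # the cured proportion is the limit (1 + phi * theta) ** (-1 / phi),
    # and letting phi -> 0 recovers the Poisson model exp(-theta * F(t))
    return (1.0 + phi * theta * weibull_cdf(t, lam, k)) ** (-1.0 / phi)
```

The single extra parameter phi is what gives the model its flexibility: special values of phi recover the cure rate models the abstract mentions as particular cases.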
270.
As is well known, the least-squares estimator of the slope of a univariate linear model sets to zero the covariance between the regression residuals and the values of the explanatory variable. To prevent the estimation process from being influenced by outliers, which can be theoretically modelled by a heavy-tailed distribution for the error term, one can replace covariance with a robust measure of association, for example Kendall's tau in the popular Theil–Sen estimator. In a little-known Italian paper, Cifarelli [(1978), 'La Stima del Coefficiente di Regressione Mediante l'Indice di Cograduazione di Gini', Rivista di matematica per le scienze economiche e sociali, 1, 7–38. A translation into English is available at http://arxiv.org/abs/1411.4809 and will appear in Decisions in Economics and Finance] shows that a gain of efficiency can be obtained by using Gini's cograduation index instead of Kendall's tau. This paper introduces a new estimator, derived from another recently proposed association measure. Such a measure is strongly related to Gini's cograduation index, as they are both built to vanish in the general framework of indifference. The newly proposed estimator is shown to be unbiased and asymptotically normally distributed. Moreover, all considered estimators are compared via their asymptotic relative efficiency and a small simulation study. Finally, some indications about the performance of the considered estimators in the presence of contaminated normal data are provided.
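For reference, the Theil–Sen estimator mentioned above replaces covariance-based fitting with the median of all pairwise slopes; a minimal sketch (the Gini-cograduation and newly proposed estimators are not reproduced here):

```python
import statistics

def theil_sen_slope(x, y):
    # median of all pairwise slopes; a single wild y-value cannot drag
    # the estimate the way it drags the least-squares slope
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x))
              for j in range(i + 1, len(x))
              if x[j] != x[i]]
    return statistics.median(slopes)
```

With data generated as y = 2x plus one gross outlier, the estimate stays at 2, illustrating the robustness that motivates replacing covariance with a rank-based association measure.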