Full-text access type
Paid full text | 1799 articles |
Free | 53 articles |
Free domestically | 7 articles |
Subject classification
Management | 155 articles |
Labor science | 10 articles |
Ethnology | 15 articles |
Demography | 82 articles |
Collected works and series | 197 articles |
Theory and methodology | 125 articles |
General | 824 articles |
Sociology | 228 articles |
Statistics | 223 articles |
Publication year
2024 | 4 articles |
2023 | 7 articles |
2022 | 11 articles |
2021 | 40 articles |
2020 | 35 articles |
2019 | 32 articles |
2018 | 43 articles |
2017 | 51 articles |
2016 | 41 articles |
2015 | 50 articles |
2014 | 73 articles |
2013 | 192 articles |
2012 | 109 articles |
2011 | 104 articles |
2010 | 87 articles |
2009 | 104 articles |
2008 | 99 articles |
2007 | 94 articles |
2006 | 82 articles |
2005 | 71 articles |
2004 | 64 articles |
2003 | 87 articles |
2002 | 112 articles |
2001 | 110 articles |
2000 | 47 articles |
1999 | 18 articles |
1998 | 10 articles |
1997 | 6 articles |
1996 | 11 articles |
1995 | 5 articles |
1994 | 10 articles |
1993 | 3 articles |
1992 | 4 articles |
1991 | 2 articles |
1990 | 6 articles |
1989 | 3 articles |
1988 | 2 articles |
1987 | 5 articles |
1986 | 4 articles |
1985 | 2 articles |
1984 | 2 articles |
1983 | 2 articles |
1982 | 4 articles |
1980 | 1 article |
1978 | 2 articles |
1977 | 2 articles |
1975 | 2 articles |
1974 | 1 article |
1972 | 1 article |
1970 | 1 article |
Sort order: 1859 results found, search time 0 ms
221.
Xing Chang, Journal of Beijing Jiaotong University (Social Sciences Edition), 2019, 18(1): 133-138
Commercial operation of we-media has now become the norm, and the content-monetization model of we-media has attracted wide attention because of the cultural attributes and ideological characteristics of its content products; yet a series of moral problems frequently arise in the process of profiting from content. An ethical examination of the we-media content-monetization model finds that improving citizens' moral quality is the inner logic of solving these problems. Accordingly, ethical paths for improving the we-media content-monetization model can be explored at two levels, strengthening civic consciousness and perfecting civic character: enhancing rights awareness and holding we-media's three lines of defense; strengthening equality awareness and clarifying one's own positioning; enhancing rule awareness and cultivating the capacity for self-discipline; and establishing correct ideals and beliefs to point the way for the development of we-media.
222.
Despite the simplicity of the Bernoulli process, developing good confidence interval procedures for its parameter, the probability of success p, is deceptively difficult. The binary data yield a discrete number of successes from a discrete number of trials, n. This discreteness results in actual coverage probabilities that oscillate with n for fixed values of p (and with p for fixed n). Moreover, this oscillation necessitates a large sample size to guarantee a good coverage probability when p is close to 0 or 1. It is well known that the Wilson procedure is superior to many existing procedures because it is less sensitive to p than other procedures and is therefore less costly. The procedures proposed in this article work as well as the Wilson procedure when 0.1 ≤ p ≤ 0.9, and are even less sensitive (i.e., more robust) than the Wilson procedure when p is close to 0 or 1. Specifically, when the nominal coverage probability is 0.95, the Wilson procedure requires a sample size of 1,021 to guarantee that the coverage probabilities stay above 0.92 for any 0.001 ≤ min{p, 1 − p} < 0.01. By contrast, our procedures guarantee the same coverage probabilities but need a sample size of only 177, without increasing either the expected interval width or the standard deviation of the interval width.
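The Wilson interval and the exact coverage calculation described above can be sketched in a few lines. This is a generic illustration only; the article's own proposed procedures are not reproduced here.

```python
from math import comb, sqrt

def wilson_interval(x, n, z=1.96):
    """Wilson score interval for a binomial proportion from x successes in n trials."""
    phat = x / n
    denom = 1 + z**2 / n
    centre = (phat + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

def coverage(p, n, z=1.96):
    """Exact coverage probability at a fixed p: sum the binomial pmf over every
    outcome x whose interval contains p. Plotting this against n shows the
    oscillation discussed in the abstract."""
    total = 0.0
    for x in range(n + 1):
        lo, hi = wilson_interval(x, n, z)
        if lo <= p <= hi:
            total += comb(n, x) * p**x * (1 - p)**(n - x)
    return total
```

Evaluating `coverage(p, n)` over a grid of n for a fixed p makes the sawtooth behaviour of the actual coverage visible.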
223.
In this paper, we propose new estimation techniques in connection with the system of S-distributions. Besides “exact” maximum likelihood (ML), we propose simulated ML and a characteristic function-based procedure. The “exact” and simulated likelihoods can be used to provide numerical, MCMC-based Bayesian inferences.
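As a generic illustration of “exact” versus numerical maximum likelihood (the S-distribution itself is not implemented here), the exponential model has a closed-form MLE that a crude numerical maximization should reproduce. The data and grid below are hypothetical.

```python
from math import log

def exact_mle_exponential(sample):
    """Closed-form "exact" ML estimate of the rate of an exponential model."""
    return len(sample) / sum(sample)

def grid_mle_exponential(sample, grid):
    """Numerical MLE: maximize the exponential log-likelihood over a grid of rates."""
    def loglik(lam):
        return len(sample) * log(lam) - lam * sum(sample)
    return max(grid, key=loglik)

data = [0.8, 1.3, 0.4, 2.1, 0.9]                 # hypothetical observations
grid = [i / 1000 for i in range(1, 5001)]        # candidate rates 0.001 .. 5.0
```

For richer models such as the S-distribution system, the same log-likelihood objective would be handed to a proper optimizer or an MCMC sampler rather than a grid.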
224.
Muhammad Aslam, Ching-Ho Yen, Chia-Hao Chang, Chi-Hyuck Jun, Munir Ahmad, Mujahid Rasool, Communications in Statistics - Theory and Methods, 2013, 42(20): 3633-3647
In this article, a variable two-stage acceptance sampling plan is developed for the case in which the quality characteristic is evaluated through a process loss function. The plan parameters of the proposed plan are determined by the two-point approach and tabulated for various quality levels. Two cases are discussed: when the process mean lies at the target value and when it does not. Extensive tables are provided for both cases, and the results are explained with examples. The advantage of the proposed plan is compared with that of the existing variable single acceptance sampling plan using the process loss function.
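A minimal sketch of judging quality through a quadratic (Taguchi-style) process loss rather than through conformance alone. The acceptance constant `c`, the target, the half-width `d`, and the data are all hypothetical; the article's two-stage plan parameters are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

def estimated_process_loss(sample, target, d):
    """Estimated quadratic process loss E[(X - target)^2] / d^2,
    where d is the specification half-width (Taguchi-style loss)."""
    return float(np.mean((np.asarray(sample) - target) ** 2)) / d**2

# Single-stage decision sketch: accept the lot if the estimated loss is small.
on_target = rng.normal(10.0, 0.5, size=100)    # process mean at the target
off_target = rng.normal(12.0, 0.5, size=100)   # process mean shifted off target
c = 0.5                                        # hypothetical acceptance constant
accept_on = estimated_process_loss(on_target, target=10.0, d=1.5) <= c
accept_off = estimated_process_loss(off_target, target=10.0, d=1.5) <= c
```

A two-stage plan, as in the article, would draw a second sample when the first-stage estimate falls between two thresholds instead of deciding immediately.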
225.
By modifying the direct method of solving the overdetermined linear system, we present an algorithm for L1 estimation that appears to be computationally superior to any other known algorithm for the simple linear regression problem.
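The paper's own direct algorithm is not given in the abstract. As a hedged illustration of the objective, least absolute deviations for simple linear regression can be solved by brute force, using the fact that some optimal L1 line passes through at least two data points:

```python
from itertools import combinations

def lad_line(xs, ys):
    """Brute-force least-absolute-deviations fit of y = a + b*x.
    Some optimal LAD line interpolates two data points, so checking every
    pair (O(n^3) overall) finds an exact optimum. The paper's direct
    algorithm is far more efficient; this is only a reference solver."""
    best = None
    for (x1, y1), (x2, y2) in combinations(zip(xs, ys), 2):
        if x1 == x2:
            continue  # vertical candidate line, skip
        b = (y2 - y1) / (x2 - x1)
        a = y1 - b * x1
        sae = sum(abs(y - (a + b * x)) for x, y in zip(xs, ys))
        if best is None or sae < best[0]:
            best = (sae, a, b)
    return best[1], best[2]
```

Unlike least squares, the L1 fit is robust: one wild response value leaves the fitted slope essentially unchanged.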
226.
Consider D-optimal designs for a combined polynomial and trigonometric regression on a partial circle. It is shown that the optimal design is equally supported, that its structure depends only on the length of the design interval, and that the support points are analytic functions of this parameter. Moreover, the Taylor expansion of the optimal support points can be determined efficiently by a recursive procedure. Examples are presented to illustrate the procedures for computing the optimal designs.
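To illustrate the D-criterion itself (for plain quadratic regression on an interval, not the combined polynomial-trigonometric model of the paper), one can compare determinants of information matrices of equally supported designs:

```python
import numpy as np

def info_matrix(points):
    """Per-point information matrix for quadratic regression with
    regression functions f(x) = (1, x, x^2), equal weight on each support point."""
    F = np.array([[1.0, x, x * x] for x in points])
    return F.T @ F / len(points)

# D-criterion: the design with the larger det(M) is better.
# The classical D-optimal design for quadratic regression on [-1, 1] is
# equally supported on {-1, 0, 1}; a strictly interior design does worse.
d_endpoints = np.linalg.det(info_matrix([-1.0, 0.0, 1.0]))
d_interior = np.linalg.det(info_matrix([-0.5, 0.0, 0.5]))
```

The paper's setting replaces the monomials with mixed polynomial and trigonometric regression functions on a partial circle, but the determinant criterion being maximized is the same.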
227.
The article discusses alternative Research Assessment Measures (RAM), with an emphasis on the Thomson Reuters ISI Web of Science database (hereafter ISI). Some analysis and comparisons are also made with data from the SciVerse Scopus database. The various RAM that are calculated annually or updated daily are defined and analyzed, including the classic 2-year impact factor (2YIF), 2YIF without journal self-citations (2YIF*), 5-year impact factor (5YIF), Immediacy (or zero-year impact factor, 0YIF), Impact Factor Inflation (IFI), Self-citation Threshold Approval Rating (STAR), Eigenfactor score, Article Influence, C3PO (Citation Performance Per Paper Online), h-index, Zinfluence, and PI-BETA (Papers Ignored – By Even The Authors). The RAM are analyzed for 10 leading econometrics journals and 4 leading statistics journals. The application to econometrics can be used as a template for other areas in economics, for other scientific disciplines, and as a benchmark for newer journals in a range of disciplines. In addition to evaluating high-quality research in leading econometrics journals, the paper compares econometrics and statistics through alternative RAM, highlights the similarities and differences among the alternative RAM, finds that several RAM capture similar performance characteristics for the leading econometrics and statistics journals while the new PI-BETA criterion is not highly correlated with any of the other RAM and hence conveys additional information, highlights major research areas in leading econometrics journals, discusses some likely future uses of RAM, and shows that the harmonic mean of 13 RAM provides more robust journal rankings than relying solely on 2YIF.
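The harmonic-mean aggregation mentioned at the end can be sketched directly. The journal scores below are hypothetical, normalized RAM values invented for illustration, not figures from the article:

```python
def harmonic_mean(values):
    """Harmonic mean of strictly positive scores; it penalizes a journal
    that is weak on even one criterion more than the arithmetic mean does."""
    if any(v <= 0 for v in values):
        raise ValueError("harmonic mean requires strictly positive values")
    return len(values) / sum(1.0 / v for v in values)

# Hypothetical normalized RAM scores for two journals across three measures:
journal_a = [0.90, 0.80, 0.85]   # uniformly strong
journal_b = [0.95, 0.20, 0.90]   # strong on two measures, weak on one
```

Because the harmonic mean is dragged down by the weak score, `journal_a` ranks above `journal_b` even though `journal_b` has the single highest score, which is the robustness property the article attributes to aggregating 13 RAM this way.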
228.
Youngjae Chang, Communications in Statistics - Simulation and Computation, 2013, 42(9): 1728-1744
Many algorithms originating from decision trees have been developed for classification problems. Although they are regarded as good algorithms, most of them suffer from a loss of prediction accuracy, namely high misclassification rates, when there are many irrelevant variables. We propose multi-step classification trees with adaptive variable selection (the multi-step GUIDE classification tree, MG, and the multi-step CRUISE classification tree, MC) to handle this problem. The multi-step method comprises a variable selection step and a fitting step. We compare the performance of classification trees in the presence of irrelevant variables. MG and MC perform better than Random Forest and C4.5 on an extremely noisy dataset. Furthermore, the prediction accuracy of our proposed algorithms is relatively stable even when the number of irrelevant variables increases, while that of the other algorithms worsens.
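A hedged demonstration of the core claim, that irrelevant variables degrade prediction accuracy. A simple 1-nearest-neighbour classifier stands in for a tree learner, and the simulated data are hypothetical; the MG and MC algorithms themselves are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n, n_noise):
    """One informative feature (class means 0 and 3) plus n_noise irrelevant features."""
    y = rng.integers(0, 2, n)
    signal = y * 3.0 + rng.standard_normal(n)
    noise = rng.standard_normal((n, n_noise))
    return np.column_stack([signal.reshape(-1, 1), noise]), y

def nn_accuracy(X_tr, y_tr, X_te, y_te):
    """1-nearest-neighbour test accuracy (stand-in for a tree classifier)."""
    hits = 0
    for x, y in zip(X_te, y_te):
        hits += y_tr[np.argmin(np.sum((X_tr - x) ** 2, axis=1))] == y
    return hits / len(y_te)

X_tr, y_tr = make_data(200, 0)
X_te, y_te = make_data(200, 0)
acc_clean = nn_accuracy(X_tr, y_tr, X_te, y_te)     # no irrelevant variables

X_tr, y_tr = make_data(200, 50)
X_te, y_te = make_data(200, 50)
acc_noisy = nn_accuracy(X_tr, y_tr, X_te, y_te)     # 50 irrelevant variables
```

With 50 irrelevant features the distances are dominated by noise and accuracy drops sharply, which is exactly the failure mode that an adaptive variable selection step is meant to prevent.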
229.
This article presents the results of a simulation study of variable selection in a multiple regression context that evaluates the frequency of selecting noise variables and the bias of the adjusted R² of the selected variables when some of the candidate variables are authentic. It is demonstrated that for most samples a large percentage of the selected variables are noise, particularly when the number of candidate variables is large relative to the number of observations. The adjusted R² of the selected variables is highly inflated.
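A minimal simulation in the spirit of the study (the article's exact stepwise procedure is not reproduced): here the response is pure noise, yet fitting only the candidates most correlated with it yields a clearly positive adjusted R².

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 50, 40, 5                 # observations, noise candidates, selected
X = rng.standard_normal((n, m))
y = rng.standard_normal(n)          # y is pure noise: the true R^2 is 0

# "Select" the k candidates most correlated with y, as a naive search would.
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(m)])
chosen = np.argsort(corr)[-k:]

# OLS on the selected variables plus an intercept.
Xs = np.column_stack([np.ones(n), X[:, chosen]])
beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
resid = y - Xs @ beta
r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
```

The adjustment for degrees of freedom does not repair the damage, because the selection step has already been conditioned on the same data; this is the inflation the article documents.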
230.
In this paper, we study the Kullback–Leibler (KL) information of a censored variable, which we simply call the censored KL information. The censored KL information is shown to have the necessary monotonicity property in addition to the inherent properties of nonnegativity and characterization. We also present a representation of the censored KL information in terms of the relative risk and study its relation with the Fisher information in censored data. Finally, we evaluate the estimated censored KL information as a goodness-of-fit test statistic.
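The ordinary KL information underlying the censored version (which is not implemented here) can be illustrated with exponential distributions, where a closed form exists and the nonnegativity property is visible directly:

```python
from math import exp, log

def kl_exponential(l1, l2):
    """Closed-form KL divergence KL(Exp(l1) || Exp(l2)) = ln(l1/l2) + l2/l1 - 1."""
    return log(l1 / l2) + l2 / l1 - 1

def kl_numeric(l1, l2, upper=60.0, steps=100000):
    """Midpoint-rule check of the defining integral of f*log(f/g)
    (illustration only; the upper limit truncates a negligible tail)."""
    h = upper / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        f = l1 * exp(-l1 * x)
        g = l2 * exp(-l2 * x)
        total += f * log(f / g) * h
    return total
```

The censored KL information of the paper restricts this integral to the observable region induced by the censoring variable; the closed-form/numeric pair above only verifies the uncensored building block.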