Full-text access type
Paid full text | 2,438 articles
Free | 72 articles
Free (domestic) | 11 articles
Subject category
Management science | 164 articles
Labor science | 13 articles
Ethnology | 51 articles
Talent studies | 1 article
Demography | 69 articles
Collected works and book series | 373 articles
Theory and methodology | 126 articles
General | 1,356 articles
Sociology | 168 articles
Statistics | 200 articles
Publication year
2024 | 3 articles
2023 | 9 articles
2022 | 19 articles
2021 | 50 articles
2020 | 33 articles
2019 | 22 articles
2018 | 38 articles
2017 | 52 articles
2016 | 44 articles
2015 | 60 articles
2014 | 98 articles
2013 | 180 articles
2012 | 138 articles
2011 | 166 articles
2010 | 131 articles
2009 | 155 articles
2008 | 138 articles
2007 | 142 articles
2006 | 142 articles
2005 | 98 articles
2004 | 91 articles
2003 | 135 articles
2002 | 208 articles
2001 | 176 articles
2000 | 78 articles
1999 | 21 articles
1998 | 13 articles
1997 | 9 articles
1996 | 14 articles
1995 | 6 articles
1994 | 9 articles
1993 | 6 articles
1992 | 5 articles
1991 | 2 articles
1990 | 4 articles
1989 | 2 articles
1988 | 1 article
1987 | 5 articles
1986 | 5 articles
1985 | 3 articles
1984 | 1 article
1982 | 2 articles
1981 | 1 article
1977 | 1 article
1975 | 1 article
1974 | 1 article
1972 | 2 articles
1970 | 1 article
Sort order: a total of 2,521 query results (search time: 20 ms)
11.
Youngjae Chang, Communications in Statistics - Simulation and Computation, 2013, 42(8): 1703-1726
Many tree algorithms have been developed for regression problems. Although they are regarded as good algorithms, most suffer a loss of prediction accuracy when there are many irrelevant variables and the number of predictors exceeds the number of observations. We propose a multistep regression tree with adaptive variable selection to handle this problem. The multistep method comprises a variable-selection step and a fitting step. As a variable-selection tool, the multistep generalized unbiased interaction detection and estimation (GUIDE) with adaptive forward selection (fg) algorithm performs better on regression problems than well-known variable-selection algorithms such as efficacy adaptive regression tube hunting (EARTH), false selection rate (FSR), least squares cross-validation (LSCV), and the least absolute shrinkage and selection operator (LASSO). Simulation results show that fg outperforms the other algorithms in both selection quality and computation time: it generally selects the important variables correctly with relatively few irrelevant ones, which yields good prediction accuracy at lower computational cost.
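As a rough illustration of the two-step idea, the sketch below (not the authors' GUIDE-fg algorithm) greedily adds the predictor that most improves a cross-validated regression tree, then fits the final tree on the selected subset; all sizes and the stopping tolerance are illustrative assumptions.

```python
# A minimal sketch of tree-based forward variable selection: greedily add
# the predictor that most improves cross-validated tree R^2, then fit the
# final tree on the selected subset. Not the authors' GUIDE-fg algorithm.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 200, 50                      # many predictors, few relevant
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=0.5, size=n)

selected, remaining, best_score = [], list(range(p)), -np.inf
while remaining:
    # Score each candidate by CV R^2 of a small tree on selected + candidate.
    scores = {
        j: cross_val_score(DecisionTreeRegressor(max_depth=3, random_state=0),
                           X[:, selected + [j]], y, cv=5).mean()
        for j in remaining
    }
    j_best = max(scores, key=scores.get)
    if scores[j_best] <= best_score + 1e-3:   # stop when no real improvement
        break
    best_score = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)

final_tree = DecisionTreeRegressor(max_depth=4, random_state=0)
final_tree.fit(X[:, selected], y)
print("selected predictors:", selected)       # ideally [0, 1]
```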
12.
13.
This article presents the results of a simulation study of variable selection in a multiple regression context. It evaluates the frequency with which noise variables are selected, and the bias of the adjusted R² of the selected variables, when some of the candidate variables are authentic. It is demonstrated that for most samples a large percentage of the selected variables are noise, particularly when the number of candidate variables is large relative to the number of observations, and that the adjusted R² of the selected variables is highly inflated.
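The inflation effect is easy to reproduce. The sketch below runs a pure-noise variant of this experiment (an assumption made for brevity; the article also includes authentic variables): candidates unrelated to the response are screened by p-value, and the adjusted R² of the surviving subset is recorded. The screening threshold and sample sizes are illustrative.

```python
# Select predictors by p-value from pure-noise candidates and watch the
# adjusted R^2 of the "significant" subset inflate above its true value of 0.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, p = 50, 40                       # many candidates relative to n
reps, inflated_r2 = 200, []
for _ in range(reps):
    X = rng.normal(size=(n, p))     # all candidates are noise
    y = rng.normal(size=n)          # y is unrelated to X
    full = sm.OLS(y, sm.add_constant(X)).fit()
    keep = [j for j in range(p) if full.pvalues[j + 1] < 0.25]  # screen
    if keep:
        sub = sm.OLS(y, sm.add_constant(X[:, keep])).fit()
        inflated_r2.append(sub.rsquared_adj)

print("mean adjusted R^2 over selected noise variables:",
      np.mean(inflated_r2))         # well above the true value of 0
```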
14.
We consider asymmetric kernel estimates based on grouped data. We propose an iterated scheme for constructing such an estimator and apply an iterated smoothed-bootstrap approach for bandwidth selection. We compare our approach with competing methods for estimating actuarial loss models in both simulation and real-data studies. The simulation results show that, with this new method, the density estimated from grouped data matches the true density more closely than with competing approaches.
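For intuition, here is a minimal gamma-kernel (asymmetric kernel) density estimate computed from grouped data, in the style of Chen's gamma kernel rather than the authors' iterated scheme or their smoothed-bootstrap bandwidth selector; the bins, counts, and bandwidth are hypothetical, and bin midpoints stand in for the unobserved raw losses.

```python
# Gamma-kernel density estimate from grouped (binned) loss data: each bin
# midpoint contributes a Gamma(shape = x/b + 1, scale = b) kernel weighted
# by its relative frequency. Bins, counts, and bandwidth are hypothetical.
import numpy as np
from scipy.stats import gamma

bins = [(0, 1, 120), (1, 2, 80), (2, 5, 60), (5, 10, 25), (10, 25, 10)]
mids = np.array([(a + b) / 2 for a, b, _ in bins])
weights = np.array([c for _, _, c in bins], dtype=float)
weights /= weights.sum()

def gamma_kde(x, b=0.5):
    """Asymmetric-kernel density at points x with bandwidth b (no boundary
    bias at zero, unlike a symmetric Gaussian kernel)."""
    x = np.atleast_1d(x).astype(float)
    dens = np.zeros_like(x)
    for i, xi in enumerate(x):
        dens[i] = np.sum(weights * gamma.pdf(mids, a=xi / b + 1, scale=b))
    return dens

grid = np.linspace(0.01, 25, 200)
fhat = gamma_kde(grid)
print("estimated density at the mode:", fhat.max())
```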
15.
Methods for a sequential test of a dose-response effect in pre-clinical studies are investigated. The objective of the test procedure is to compare several dose groups with a zero-dose control. The sequential testing is conducted within a closed family of one-sided tests, and the procedures investigated are based on a monotonicity assumption. These closed procedures strongly control the familywise error rate while providing information about the shape of the dose-response relationship. The performance of the sequential testing procedures is compared via a Monte Carlo simulation study, and we illustrate the procedures by application to a real data set.
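A minimal example of one such closed procedure under monotonicity is a fixed-sequence step-down test: compare the highest dose with control first, and continue downward only while the one-sided test rejects, which strongly controls the familywise error rate. The groups, effect sizes, and tests below are illustrative assumptions, not the paper's exact procedure.

```python
# Step-down (fixed-sequence) closed testing under a monotone dose-response:
# test the highest dose against control first; stop at the first failure.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
control = rng.normal(0.0, 1.0, size=20)
doses = {1: rng.normal(0.1, 1.0, 20),    # simulated dose groups with an
         2: rng.normal(0.4, 1.0, 20),    # increasing mean response
         3: rng.normal(0.9, 1.0, 20)}

alpha, significant = 0.05, []
for level in sorted(doses, reverse=True):        # highest dose first
    t, pval = ttest_ind(doses[level], control, alternative="greater")
    if pval < alpha:
        significant.append(level)
    else:
        break                                    # stop: lower doses untested
print("doses declared effective:", sorted(significant))
```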
16.
ABSTRACT: Traditional credit risk assessment models do not consider the time factor; they only consider whether a customer will default, not when. Such results cannot help a manager make profit-maximizing decisions, since even if a customer defaults, the financial institution can still profit under some conditions. Much recent research therefore applies the Cox proportional hazards model to credit scoring, predicting the time at which a customer is most likely to default. However, fully exploiting the dynamic capability of the Cox proportional hazards model requires time-varying macroeconomic variables, which demand more advanced data collection. Because short-term defaults are the ones that inflict heavy losses on a financial institution, rather than predicting when a loan will default, a loan manager is more interested in identifying, when approving applications, those likely to default within a short period. This paper proposes a decision tree-based short-term-default credit risk assessment model: the decision tree filters short-term defaults to produce a highly accurate model that distinguishes defaulting loans. The paper integrates bootstrap aggregating (bagging) with the synthetic minority over-sampling technique (SMOTE) to improve the decision tree's stability and its performance on unbalanced data. Finally, a real case of small and medium enterprise loan data drawn from a financial institution in Taiwan illustrates the proposed approach. Compared with logistic regression and Cox proportional hazards models, the proposed model's classification recall and precision were clearly superior.
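The bagging + SMOTE + decision-tree combination can be sketched as follows on synthetic data (the Taiwanese SME loan data are not public). The sketch assumes scikit-learn and the third-party imbalanced-learn package; all sizes and hyperparameters are illustrative.

```python
# Bagged decision trees trained on SMOTE-balanced data for an imbalanced
# binary problem (~5% "short-term default" cases). Illustrative settings.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE   # third-party imbalanced-learn

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority (default) class on the training set only.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

# Bagging stabilizes the single decision tree.
model = BaggingClassifier(estimator=DecisionTreeClassifier(max_depth=6),
                          n_estimators=100, random_state=0)
model.fit(X_bal, y_bal)
print(classification_report(y_te, model.predict(X_te)))  # recall/precision
```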
17.
In this paper, we propose new estimation techniques for the system of S-distributions. Besides "exact" maximum likelihood (ML), we propose simulated ML and a characteristic function-based procedure. The "exact" and simulated likelihoods can also be used to provide numerical, MCMC-based Bayesian inferences.
18.
Muhammad Aslam, Ching-Ho Yen, Chia-Hao Chang, Chi-Hyuck Jun, Munir Ahmad, Mujahid Rasool, Communications in Statistics - Theory and Methods, 2013, 42(20): 3633-3647
In this article, a variable two-stage acceptance sampling plan is developed for the case in which the quality characteristic is evaluated through a process loss function. The plan parameters are determined using the two-point approach and tabulated for various quality levels. Two cases are discussed: when the process mean lies at the target value and when it does not. Extensive tables are provided for both cases, and the results are explained with examples. The advantage of the proposed plan is demonstrated by comparison with the existing variable single acceptance sampling plan based on the process loss function.
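The decision logic of a two-stage variables plan can be sketched schematically as below. The lot statistic is the relative process loss (mean squared deviation from the target, scaled by the tolerance d); the acceptance constants and sample sizes are hypothetical placeholders, not the plan parameters tabulated in the article.

```python
# Schematic two-stage variables acceptance sampling driven by an estimated
# process loss. All constants are illustrative, not the paper's tables.
import numpy as np

def process_loss(sample, target, d):
    """Estimated relative process loss: E[(X - T)^2] / d^2 (smaller is better)."""
    return np.mean((np.asarray(sample) - target) ** 2) / d ** 2

def two_stage_plan(stage1, draw_stage2, target, d,
                   n2=50, c_accept=0.05, c_reject=0.15):
    """Accept/reject at stage 1 if the loss is clearly small/large; otherwise
    draw a second sample and decide on the combined loss. Constants hypothetical."""
    loss1 = process_loss(stage1, target, d)
    if loss1 <= c_accept:
        return "accept"
    if loss1 >= c_reject:
        return "reject"
    combined = np.concatenate([stage1, draw_stage2(n2)])
    mid = (c_accept + c_reject) / 2
    return "accept" if process_loss(combined, target, d) <= mid else "reject"

rng = np.random.default_rng(3)
s1 = rng.normal(10.02, 0.05, size=30)          # lot sample, target T = 10
decision = two_stage_plan(s1, lambda m: rng.normal(10.02, 0.05, m),
                          target=10.0, d=0.2)
print(decision)
```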
19.
Despite the simplicity of the Bernoulli process, developing good confidence interval procedures for its parameter, the probability of success p, is deceptively difficult. The binary data yield a discrete number of successes from a discrete number of trials, n. This discreteness results in actual coverage probabilities that oscillate with n for fixed values of p (and with p for fixed n). Moreover, this oscillation necessitates a large sample size to guarantee good coverage probability when p is close to 0 or 1. It is well known that the Wilson procedure is superior to many existing procedures because it is less sensitive to p than the others, and is therefore less costly. The procedures proposed in this article work as well as the Wilson procedure when 0.1 ≤ p ≤ 0.9, and are even less sensitive (i.e., more robust) than the Wilson procedure when p is close to 0 or 1. Specifically, when the nominal coverage probability is 0.95, the Wilson procedure requires a sample size of 1,021 to guarantee that the coverage probabilities stay above 0.92 for any 0.001 ≤ min{p, 1 − p} < 0.01. By contrast, our procedures guarantee the same coverage probabilities with a sample size of only 177, without increasing either the expected interval width or the standard deviation of the interval width.
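The oscillation is easy to see numerically. The sketch below computes the exact coverage probability of the standard Wilson interval (the classical construction, not the article's new procedures) by summing the binomial probabilities of all outcomes whose interval contains p.

```python
# Exact coverage probability of the Wilson interval, showing how coverage
# oscillates with n for fixed p.
import numpy as np
from scipy.stats import binom, norm

def wilson_interval(k, n, conf=0.95):
    z = norm.ppf(1 - (1 - conf) / 2)
    phat, z2 = k / n, norm.ppf(1 - (1 - conf) / 2) ** 2
    center = (phat + z2 / (2 * n)) / (1 + z2 / n)
    half = z * np.sqrt(phat * (1 - phat) / n + z2 / (4 * n**2)) / (1 + z2 / n)
    return center - half, center + half

def coverage(p, n, conf=0.95):
    """Sum binomial probabilities of all k whose interval contains p."""
    ks = np.arange(n + 1)
    lo, hi = wilson_interval(ks, n, conf)
    return binom.pmf(ks[(lo <= p) & (p <= hi)], n, p).sum()

for n in (40, 41, 42, 43):          # coverage jumps around as n changes
    print(n, round(coverage(0.05, n), 4))
```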
20.
The article discusses alternative Research Assessment Measures (RAM), with an emphasis on the Thomson Reuters ISI Web of Science database (hereafter ISI); some analysis and comparisons are also made with data from the SciVerse Scopus database. The various RAM that are calculated annually or updated daily are defined and analyzed, including the classic 2-year impact factor (2YIF), 2YIF without journal self-citations (2YIF*), 5-year impact factor (5YIF), Immediacy (or zero-year impact factor, 0YIF), Impact Factor Inflation (IFI), Self-citation Threshold Approval Rating (STAR), Eigenfactor score, Article Influence, C3PO (Citation Performance Per Paper Online), h-index, Zinfluence, and PI-BETA (Papers Ignored - By Even The Authors). The RAM are analyzed for 10 leading econometrics journals and 4 leading statistics journals; the application to econometrics can serve as a template for other areas in economics, for other scientific disciplines, and as a benchmark for newer journals in a range of disciplines. In addition to evaluating high-quality research in leading econometrics journals, the paper compares econometrics and statistics under the alternative RAM and highlights the similarities and differences among those RAM. It finds that several RAM capture similar performance characteristics for the leading econometrics and statistics journals, whereas the new PI-BETA criterion is not highly correlated with any of the other RAM and hence conveys additional information. It also highlights major research areas in leading econometrics journals, discusses some likely future uses of RAM, and shows that the harmonic mean of 13 RAM provides more robust journal rankings than relying solely on 2YIF.
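As a sketch of how such an aggregate might be computed, the code below takes the harmonic mean of per-metric ranks across a few hypothetical journals; aggregating ranks rather than raw scores is an assumption made here so that differently scaled RAM are comparable, and the journal names and metric values are made up.

```python
# Combine several RAM into one ranking via a harmonic mean of per-metric
# ranks (1 = best). Journals and scores are hypothetical.
import numpy as np
from scipy.stats import rankdata, hmean

journals = ["Journal A", "Journal B", "Journal C", "Journal D"]
# Rows: journals; columns: hypothetical RAM scores (higher = better).
ram = np.array([[2.1, 1.8, 0.9],
                [1.5, 2.2, 1.1],
                [0.8, 0.9, 0.4],
                [2.0, 1.1, 1.2]])

# Rank within each metric, then take the harmonic mean of the ranks:
# the harmonic mean damps the influence of a single outlying metric.
ranks = np.column_stack([rankdata(-ram[:, j]) for j in range(ram.shape[1])])
combined = hmean(ranks, axis=1)
for name, score in sorted(zip(journals, combined), key=lambda t: t[1]):
    print(f"{name}: harmonic mean rank {score:.2f}")
```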