871.
This article explores the calculation of tolerance limits for the Poisson regression model, based on profile likelihood methodology and small-sample asymptotic corrections that improve the coverage probability. The data consist of n counts whose mean, or expected rate, depends on covariates through a log-linear regression function. Upper tolerance limits are evaluated as a function of the covariates and are obtained from upper confidence limits for the mean. Three methodologies are considered for computing the upper confidence limits: likelihood-based asymptotic methods, small-sample asymptotic refinements of the likelihood-based methodology, and the delta method. Two applications are discussed: one concerning defects in semiconductor wafers due to plasma etching, the other examining the number of surface faults in upper seams of coal mines. All three methodologies are illustrated on both applications.
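As a minimal sketch (not the article's code), the delta-method route can be illustrated as follows: fit the Poisson log-linear model, form a confidence limit for the log mean at a covariate value, and back-transform. The synthetic data and names (`X`, `y`, `x0`) are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: delta-method upper confidence limit for the Poisson
# regression mean mu(x0) = exp(x0' beta), computed on the log scale.
# Synthetic data; not the article's applications.

def fit_poisson(X, y, n_iter=50):
    """Newton-Raphson fit of the Poisson log-linear model."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        grad = X.T @ (y - mu)
        hess = X.T @ (X * mu[:, None])   # Fisher information (W = mu)
        beta = beta + np.linalg.solve(hess, grad)
    return beta, np.linalg.inv(hess)     # MLE and asymptotic covariance

def delta_upper_limit(X, y, x0, z=1.645):
    """One-sided 95% upper confidence limit for mu(x0)."""
    beta, cov = fit_poisson(X, y)
    eta = x0 @ beta                      # estimated log mean at x0
    se = np.sqrt(x0 @ cov @ x0)          # delta-method standard error
    return np.exp(eta + z * se)          # back-transform to the mean scale

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.uniform(0.0, 1.0, 200)])
y = rng.poisson(np.exp(0.5 + 1.0 * X[:, 1]))
ucl = delta_upper_limit(X, y, np.array([1.0, 0.5]))
```

The upper tolerance limit of the abstract is then obtained from this upper confidence limit of the mean.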
872.
Many probability distributions have been proposed in the literature, chiefly to obtain models whose density and hazard rate functions are more flexible. Recently, two generalizations of the Lindley distribution were proposed: the power Lindley distribution and the inverse Lindley distribution. In this article, a distribution combining these two generalizations is obtained and named the inverse power Lindley distribution. Some properties of this distribution are presented, and the behavior of the maximum likelihood estimators is studied and discussed. The distribution is fitted to two real datasets and compared with fits obtained from well-known distributions. In these applications, the inverse power Lindley distribution proved a good alternative for modeling survival data.
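A minimal sketch of the density, assuming the standard parameterization (shape alpha, scale beta): taking X = 1/T for T power Lindley gives, by the change-of-variables formula, the density below; this derivation is ours, not quoted from the article.

```python
import numpy as np
from scipy.integrate import quad

# Hedged sketch: inverse power Lindley density obtained by setting
# X = 1/T, with T power Lindley (shape alpha, scale beta). Parameter
# names and the numeric check are illustrative assumptions.

def inv_power_lindley_pdf(x, alpha, beta):
    c = alpha * beta**2 / (beta + 1.0)
    return c * (1.0 + x**alpha) / x**(2.0 * alpha + 1.0) * np.exp(-beta / x**alpha)

# sanity check: the density integrates to one over the positive half-line
# (lower limit slightly above 0 to avoid the x = 0 endpoint)
total, _ = quad(inv_power_lindley_pdf, 1e-12, np.inf, args=(2.0, 1.5))
```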
873.
In this article, we perform Bayesian estimation of stochastic volatility models with heavy-tailed distributions using the Metropolis adjusted Langevin algorithm (MALA) and the Riemann manifold MALA (MMALA). We provide analytical expressions needed to apply these methods, assess their performance on simulated data, and illustrate their use on two financial time series datasets.
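The MALA step can be sketched generically: drift half a step along the gradient of the log target, add Gaussian noise, then accept or reject with a Metropolis-Hastings correction for the asymmetric proposal. This is a minimal one-dimensional illustration on a standard normal target, not the article's stochastic volatility sampler; the step size `eps = 1.0` is an assumption.

```python
import numpy as np

# Hedged sketch of MALA: Langevin proposal + Metropolis-Hastings
# correction, shown on a standard normal target.

def mala(log_p, grad_log_p, x0, eps=1.0, n=5000, seed=0):
    rng = np.random.default_rng(seed)
    x, out = x0, []
    for _ in range(n):
        # proposal mean: half an eps^2 step along the gradient
        mean_x = x + 0.5 * eps**2 * grad_log_p(x)
        y = mean_x + eps * rng.standard_normal()
        mean_y = y + 0.5 * eps**2 * grad_log_p(y)
        # log acceptance ratio with the asymmetric Gaussian proposal
        log_a = (log_p(y) - log_p(x)
                 + ((y - mean_x)**2 - (x - mean_y)**2) / (2.0 * eps**2))
        if np.log(rng.uniform()) < log_a:
            x = y
        out.append(x)
    return np.array(out)

samples = mala(lambda x: -0.5 * x**2, lambda x: -x, 0.0)
```

MMALA replaces the identity preconditioning here with a position-dependent metric.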
874.
The traditional Cobb–Douglas production function uses a compact mathematical form to describe the relationship between production output and production factors in the production process. In macro-economic production, however, multi-structured production is universal, so to better capture such input–output relations a composite production function model is proposed in this article. For parameter estimation, the artificial fish swarm algorithm is applied: it performs well at escaping local extrema and locating the global extremum, and its implementation does not require the gradient of the objective function, so it adapts readily to the search space. With an improved artificial fish swarm algorithm, both convergence rate and precision are considerably improved. In application, the composite production function model is mainly used to calculate the contribution rates of economic growth factors, and a more accurate calculation method is proposed. Finally, an empirical analysis of the contribution rates to China's economic growth is carried out.
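For context, the classic Cobb–Douglas form Y = A·K^a·L^b that the composite model generalizes can be fitted by ordinary least squares on logs. This is a minimal sketch with synthetic data, not the article's composite model or its fish swarm estimator.

```python
import numpy as np

# Hedged sketch: OLS fit of log Y = log A + a*log K + b*log L on
# synthetic data (true values A=2.0, a=0.3, b=0.6 are assumptions).

rng = np.random.default_rng(1)
K = rng.uniform(1.0, 10.0, 300)                      # capital input
L = rng.uniform(1.0, 10.0, 300)                      # labor input
Y = 2.0 * K**0.3 * L**0.6 * np.exp(rng.normal(0.0, 0.05, 300))

X = np.column_stack([np.ones_like(K), np.log(K), np.log(L)])
coef, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
logA, a, b = coef                                    # recovered elasticities
```

The composite model of the article cannot be linearized this way, which is why a gradient-free search such as the fish swarm algorithm is attractive there.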
875.
In this article, we study the global L2 error of a nonlinear wavelet estimator of the density in the Besov space B^s_{p,q} for a missing data model when covariables are present, and prove that the estimator achieves the optimal rate of convergence, similar to the result of Donoho, Johnstone, Kerkyacharian, and Picard (1996, Density estimation by wavelet thresholding, Ann. Stat. 24:508-539) for the complete, independent data case with term-by-term thresholding of the empirical wavelet coefficients. Finite-sample behavior of the proposed estimator is explored via simulations.
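Term-by-term thresholding can be sketched as follows: decompose into wavelet coefficients and zero every detail coefficient below a threshold. The Haar filter and the threshold value here are illustrative assumptions, not the article's choices.

```python
import numpy as np

# Hedged sketch: hard thresholding of empirical wavelet coefficients
# using the Haar transform (an illustrative wavelet choice).

def haar_transform(x):
    """Full Haar decomposition of a length-2^J signal into detail levels."""
    a = np.asarray(x, dtype=float)
    coeffs = []
    while len(a) > 1:
        detail = (a[0::2] - a[1::2]) / np.sqrt(2.0)  # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)       # smooth part
        coeffs.append(detail)
    coeffs.append(a)                                 # coarsest approximation last
    return coeffs

def hard_threshold(coeffs, lam):
    """Zero every detail coefficient with magnitude at most lam."""
    kept = [np.where(np.abs(d) > lam, d, 0.0) for d in coeffs[:-1]]
    return kept + [coeffs[-1]]                       # keep the approximation
```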
876.
In this article, we consider a linear model in which the covariates are measured with error. We propose a t-type corrected-loss estimator of the covariate effect when the measurement error follows the Laplace distribution. The proposed estimator is asymptotically normal. In practice, outliers can occur that diminish the robustness of the estimation. Simulation studies show that the estimator is resistant to vertical outliers, and an application to 6-minute walk test data shows that the proposed method performs well.
877.
We propose two tests for compound periodicities that are uniformly most powerful invariant decision procedures against simple periodicities. The second test can also provide an excellent estimate of a compound periodic nonlinear function from observed data. These tests were compared with the tests proposed by Fisher and Siegel in Monte Carlo studies. All of the tests showed high power and a high probability of a correct decision when all the amplitudes of the underlying periods were equal. However, when there are several periods with unequal amplitudes, the second proposed test still showed high power and a high probability of a correct decision, whereas the tests of Fisher and Siegel gave zero power and zero probability of a correct decision, whatever the standard deviation of the pseudo-normal random numbers. Overall, the second proposed test is the best in terms of both power and the probability of a correct decision.
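For reference, the Fisher benchmark is based on the g statistic: the largest periodogram ordinate divided by the sum of the ordinates. A minimal sketch (the synthetic signals are assumptions, not the study's simulation design):

```python
import numpy as np

# Hedged sketch of Fisher's g statistic for a single periodicity,
# computed from the periodogram at the Fourier frequencies.

def fisher_g(x):
    n = len(x)
    p = np.abs(np.fft.rfft(x - x.mean()))**2 / n
    p = p[1:(n - 1)//2 + 1]          # drop the zero and Nyquist frequencies
    return p.max() / p.sum()         # large g suggests a periodicity

t = np.arange(128)
g_signal = fisher_g(np.sin(2.0 * np.pi * 5.0 * t / 128.0))   # strong period
g_noise = fisher_g(np.random.default_rng(3).standard_normal(128))
```

A single dominant period concentrates the periodogram mass in one ordinate, driving g toward one; with unequal amplitudes across several periods, no single ordinate dominates, which is the regime where the abstract reports the benchmark tests failing.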
878.
Rubin (1976, Inference and missing data, Biometrika 63(3):581-592) derived general conditions under which inferences that ignore missing data are valid. These conditions are sufficient but not generally necessary, and may therefore be relaxed in some special cases. We consider here the case of frequentist estimation of a conditional cdf subject to missing outcomes. We partition a dataset into outcome, conditioning, and latent variables, all of which potentially affect the probability of a missing response. We describe sufficient conditions under which a complete-case estimate of the conditional cdf of the outcome given the conditioning variable is unbiased. We use simulations on a renal transplant dataset (Dienemann et al.) to illustrate the implications of these results.
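A minimal sketch of the complete-case idea, under an illustrative mechanism where missingness is independent of the outcome (so the complete-case estimate is unbiased); this is not the article's renal transplant simulation.

```python
import numpy as np

# Hedged sketch: complete-case estimate of a cdf with missing outcomes.
# The data-generating mechanism below is an illustrative assumption.

def complete_case_cdf(y, observed, t):
    """Empirical cdf at t computed from the observed outcomes only."""
    return float(np.mean(y[observed] <= t))

rng = np.random.default_rng(2)
y = rng.normal(size=10_000)
observed = rng.uniform(size=10_000) < 0.7    # ~30% of outcomes missing
est = complete_case_cdf(y, observed, 0.0)    # true F(0) = 0.5
```

When missingness depends on the outcome or on latent variables, this estimator is biased in general; the article's contribution is sufficient conditions under which it remains unbiased.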
879.
Consider the standard treatment-control model with a time-to-event endpoint. We propose a novel, interpretable test statistic from a quantile function point of view. The large-sample consistency of our estimator is proven theoretically for fixed bandwidth values and validated empirically. A Monte Carlo simulation study also shows that, for small sample sizes, introducing a tuning parameter through a smooth quantile function estimator improves efficiency in terms of MSE compared with direct application of the classic Kaplan-Meier survival function estimator. The procedure is illustrated with an application to epithelial ovarian cancer data.
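For reference, the Kaplan-Meier baseline that the smooth quantile approach is compared against can be sketched as follows (the tiny dataset in the check is illustrative, not the ovarian cancer data):

```python
import numpy as np

# Hedged sketch of the classic Kaplan-Meier survival estimator:
# product over event times of (1 - deaths / at-risk).

def kaplan_meier(time, event):
    """Return the event times and the KM survival curve at those times."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    uniq = np.unique(time[event == 1])
    surv, s = [], 1.0
    for t in uniq:
        at_risk = np.sum(time >= t)                   # still under observation
        deaths = np.sum((time == t) & (event == 1))   # events at time t
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return uniq, np.array(surv)
```

Inverting this step function gives the raw quantile estimate; the article's smooth estimator instead kernel-smooths across quantile levels with a bandwidth tuning parameter.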
880.
Here we consider a multinomial probit regression model in which the number of variables substantially exceeds the sample size and only a subset of the available variables is associated with the response; selecting a small number of relevant variables for classification has therefore received a great deal of attention. When the number of variables is substantial, sparsity-enforcing priors for the regression coefficients are called for, on grounds of both predictive generalization and computational ease. In this paper, we propose a sparse Bayesian variable selection method for the multinomial probit regression model for multi-class classification. The performance of the proposed method is demonstrated on one simulated dataset and three well-known gene expression profiling datasets: breast cancer, leukemia, and small round blue-cell tumors. The results show that, compared with other methods, our method selects the relevant variables and obtains competitive classification accuracy with a small subset of relevant genes.