81.
We build a fee-rate model for bank wealth-management products based on the bank's goal orientation and, by examining bank behavior under different objectives, obtain an analytical framework for adjusting product fee rates. The study finds that there exists an optimal fee rate that maximizes the combined improvement of the bank's economic and social benefits; that this optimal rate is not a fixed value but a dynamic concept; that when only economic benefits are considered, the bank will raise the fee rate; that raising the fee rate improves the attainment of the bank's composite objective; and that lowering the fee rate does not change the bank's composite objective.
82.
Conventional Phase II statistical process control (SPC) charts are designed using control limits: a chart signals a process distributional shift when its charting statistic exceeds a properly chosen control limit. Under this design, we only know whether the chart is out of control at a given time, which is not informative about the likelihood of a potential distributional shift. In this paper, we suggest designing SPC charts using p values. At each time point of Phase II process monitoring, the p value of the observed charting statistic is computed under the assumption that the process is in control. If the p value is less than a pre-specified significance level, a signal of distributional shift is delivered. Compared to the conventional design using control limits, the p value approach has several benefits. First, after a signal of distributional shift is delivered, we know how strong the signal is. Second, even when the p value at a given time point exceeds the significance level, it still provides useful information about how stably the process is performing at that time point. The second benefit is especially useful when we adopt a variable sampling scheme, under which the sampling interval can be lengthened when a larger p value gives more evidence that the process is running stably. To demonstrate the p value approach, we consider univariate process monitoring by cumulative sum (CUSUM) control charts in various cases.
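The p value computation described above can be sketched as follows. The in-control model (standard normal observations), the upper CUSUM with reference value `k = 0.5`, and the Monte Carlo calibration of the reference distribution are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def cusum_stat(x, k=0.5):
    """Upper CUSUM charting statistics C_t = max(0, C_{t-1} + x_t - k)."""
    c, path = 0.0, []
    for xi in x:
        c = max(0.0, c + xi - k)
        path.append(c)
    return np.array(path)

def in_control_reference(t, k=0.5, reps=20000, rng=None):
    """Monte Carlo draws of the CUSUM statistic at time t under in-control N(0,1)."""
    gen = np.random.default_rng(rng)
    sims = gen.standard_normal((reps, t))
    return np.array([cusum_stat(row, k)[-1] for row in sims])

def p_value(observed, t, k=0.5, reps=20000, rng=0):
    """P(C_t >= observed | process in control), estimated by simulation."""
    ref = in_control_reference(t, k, reps, rng)
    return (ref >= observed).mean()
```

A signal is raised when the p value drops below the chosen significance level; a large p value indicates a stably running process, which is what makes the variable sampling scheme mentioned in the abstract possible.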
83.
We extend the random permutation model to obtain the best linear unbiased estimator of a finite population mean accounting for auxiliary variables under simple random sampling without replacement (SRS) or stratified SRS. The proposed method provides a systematic design-based justification for well-known results involving common estimators derived under minimal assumptions that do not require specification of a functional relationship between the response and the auxiliary variables.
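For a single auxiliary variable under SRS, the well-known estimator alluded to above is the familiar regression estimator of the population mean. A minimal sketch of that special case (function and variable names are ours, not the paper's):

```python
import numpy as np

def regression_estimator(y, x, x_bar_pop):
    """Regression estimator of the population mean of y under SRS:
    ybar_reg = ybar + b * (Xbar - xbar), where b is the sample slope of y on x
    and Xbar is the known population mean of the auxiliary variable."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    return y.mean() + b * (x_bar_pop - x.mean())
```

The estimator exploits the known population mean of the auxiliary variable to correct the sample mean of the response for the sampling error in the auxiliary variable.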
84.
Partitioning the variance of a response by design levels is challenging for binomial and other discrete outcomes. Goldstein (2003, Multilevel Statistical Models, 3rd ed., London: Edward Arnold) proposed four definitions of variance partitioning coefficients (VPC) under a two-level logistic regression model. In this study, we explicitly derive formulae for the multi-level logistic regression model and then study the distributional properties of the calculated VPCs. Using simulations and a vegetation dataset, we demonstrate associations between the different VPC definitions, the importance of the estimation method (comparing VPCs obtained by Laplace and penalized quasi-likelihood methods), and bivariate dependence between VPCs calculated at different levels. Such an empirical study lends immediate support to wider application of VPCs in scientific data analysis.
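Two commonly cited VPC definitions for the two-level logistic model can be sketched as follows: the latent-variable definition, which treats the level-1 residual as logistic with variance π²/3, and a simulation definition that partitions variance on the probability scale. The fixed intercept `beta0` and the simulation sizes here are illustrative assumptions.

```python
import numpy as np

def vpc_latent(sigma2_u):
    """Latent-variable VPC: level-2 variance over total latent variance,
    where the level-1 logistic residual has variance pi^2 / 3."""
    return sigma2_u / (sigma2_u + np.pi ** 2 / 3)

def vpc_simulation(beta0, sigma2_u, reps=100000, rng=0):
    """Simulation VPC on the probability scale: draw random intercepts u_j,
    convert to cluster probabilities p_j = logit^{-1}(beta0 + u_j), and
    partition var(y) = var(p) (level 2) + E[p(1-p)] (level 1)."""
    u = np.random.default_rng(rng).normal(0.0, np.sqrt(sigma2_u), reps)
    p = 1.0 / (1.0 + np.exp(-(beta0 + u)))
    level2 = p.var()
    level1 = (p * (1 - p)).mean()
    return level2 / (level2 + level1)
```

The two definitions generally disagree because the latent-variable VPC lives on the underlying continuous scale while the simulation VPC lives on the observed probability scale.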
85.
In this paper, we consider tests for assessing whether two stationary and independent time series have the same spectral densities (or, equivalently, the same autocovariance functions). Both frequency-domain and time-domain test statistics for this purpose are reviewed. The adaptive Neyman tests are then introduced and their performance investigated. Our tests are adaptive in that they are constructed entirely from the data and involve no unknown smoothing parameters. Simulation studies show that the proposed tests are at least comparable to existing tests in most cases, and much more powerful in some, such as against high-order autoregressive moving average (ARMA) alternatives like seasonal ARMA series.
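A rough sketch of the adaptive idea follows: the truncation point of the test statistic is chosen by maximization rather than fixed in advance, so no smoothing parameter is needed. The standardization of the log-periodogram differences and the exact form used in the paper may differ; treat this as an illustration only.

```python
import numpy as np

def periodogram(x):
    """Periodogram at the positive Fourier frequencies (excluding 0 and Nyquist)."""
    n = len(x)
    fft = np.fft.rfft(x - np.mean(x))
    return (np.abs(fft[1:(n // 2)]) ** 2) / n

def adaptive_neyman(z):
    """Adaptive Neyman statistic: maximize the standardized partial sums of
    z_j^2 - 1 over the truncation point m, eliminating the smoothing choice."""
    s = np.cumsum(z ** 2 - 1.0)
    m = np.arange(1, len(z) + 1)
    return np.max(s / np.sqrt(2.0 * m))

def spectral_equality_stat(x1, x2):
    """Compare two equal-length series' spectra via standardized log-periodogram
    differences; under equal spectra each difference is approximately a
    standard logistic variate with variance pi^2 / 3."""
    z = (np.log(periodogram(x1)) - np.log(periodogram(x2))) / np.sqrt(np.pi ** 2 / 3)
    return adaptive_neyman(z)
```

Large values of the statistic indicate that the two spectral densities differ somewhere across the frequency band, with the maximization automatically adapting to where the difference concentrates.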
86.
Cox's discrete logistic model was extended to the study of the life table by Thompson (1977) to handle grouped survival data. Inferences about the effect of grouping are studied by Monte Carlo methods; the results show that the effect of grouping is not substantial. The approach is applied to grouped data on liver cancer. The computer program developed for grouped censored data with continuous and indicator covariates is of practical importance and is available from The Ohio State University.
87.
A computer algorithm for computing the alternative distributions of the Wilcoxon signed rank statistic under shift alternatives is discussed. An explicit error bound is derived for the numeric integration approximation to these distributions.

A nonparametric process control procedure is discussed in which the standard CUSUM procedure is applied to the Wilcoxon signed rank statistic. To implement this procedure, the distribution of the Wilcoxon statistic under a shift of the underlying distribution from its point of symmetry must be computed. The average run lengths of the nonparametric and parametric CUSUMs are compared.
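The chart described above can be sketched as follows. The group size, reference value `k`, and decision interval `h` are illustrative choices (the paper determines these from the exact distribution of the signed rank statistic); ties among the observations are assumed absent.

```python
import numpy as np

def signed_rank(x, target=0.0):
    """Wilcoxon signed-rank statistic of one group relative to the target:
    sum of sign(x_i - target) * rank(|x_i - target|), assuming no ties."""
    d = np.asarray(x, float) - target
    ranks = np.argsort(np.argsort(np.abs(d))) + 1  # ranks of |d|
    return float(np.sum(np.sign(d) * ranks))

def wilcoxon_cusum(groups, target=0.0, k=2.0, h=15.0):
    """Upper CUSUM applied to the group-wise signed-rank statistics.
    Returns the index of the first signaling group, or None if no signal."""
    c = 0.0
    for t, g in enumerate(groups):
        c = max(0.0, c + signed_rank(g, target) - k)
        if c > h:
            return t
    return None
```

Because the signed rank statistic is distribution-free under symmetry about the target, the in-control run-length behavior of this chart does not depend on the (continuous, symmetric) process distribution.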
88.
Selecting predictors to optimize outcome prediction is an important statistical problem, but the usual procedures ignore false positives among the selected predictors. In this article, we advocate a conventional stepwise forward variable selection method based on the predicted residual sum of squares, develop a positive false discovery rate (pFDR) estimate for the selected predictor subset, and develop a local pFDR estimate to prioritize the selected predictors. The pFDR estimate takes account of the existence of non-null predictors and is proved to be asymptotically conservative. In addition, we propose two views of a variable selection process: an overall test and an individual test. An interesting feature of the overall test is that its power to select non-null predictors increases with the proportion of non-null predictors among all candidates. The method is illustrated with an example in which genetic and clinical predictors were selected to predict the change in cholesterol level after four months of tamoxifen treatment, with the pFDR estimated, and its performance is evaluated through statistical simulations.
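The forward search driven by the predicted residual sum of squares (PRESS) can be sketched as follows; the pFDR estimate itself depends on null-model details the abstract does not give, so only the selection step is shown. The stopping rule (stop when no candidate reduces PRESS) is an illustrative choice.

```python
import numpy as np

def press(X, y):
    """Predicted residual sum of squares via the leave-one-out identity:
    PRESS = sum((e_i / (1 - h_ii))^2) for least squares with design X."""
    y = np.asarray(y, float)
    X = np.column_stack([np.ones(len(y)), X])       # add intercept
    H = X @ np.linalg.pinv(X.T @ X) @ X.T           # hat matrix
    e = y - H @ y
    return float(np.sum((e / (1.0 - np.diag(H))) ** 2))

def forward_select(X, y, max_vars=None):
    """Greedily add the predictor that most reduces PRESS; stop when none helps."""
    selected, remaining = [], list(range(X.shape[1]))
    best = press(np.empty((len(y), 0)), y)          # intercept-only PRESS
    while remaining and (max_vars is None or len(selected) < max_vars):
        scores = {j: press(X[:, selected + [j]], y) for j in remaining}
        j_best = min(scores, key=scores.get)
        if scores[j_best] >= best:
            break
        best = scores[j_best]
        selected.append(j_best)
        remaining.remove(j_best)
    return selected
```

Because PRESS is a leave-one-out prediction criterion, it penalizes overfitting more strongly than the ordinary residual sum of squares, which always decreases as predictors are added.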
89.
In high-dimensional regression, influential observations may distort analysis results, so detecting these unusual points before fitting is an important first step. Most traditional approaches, however, are based on single-case diagnostics and may fail in the presence of multiple influential observations because of masking effects. In this paper, an adaptive multiple-case deletion approach is proposed for detecting multiple influential observations under masking effects in high-dimensional regression. The procedure has two stages. First, a multiple-case deletion technique yields an approximately clean subset of the data that is presumably free of influential observations. Second, to enhance efficiency, the detection rule is refined. Monte Carlo simulation studies and a real-data analysis demonstrate the effectiveness of the proposed procedure.
90.
An envelope-rejection method is used to generate random variates from the Watson distribution. The method is compact and is competitive with, if not superior to, the existing sampling algorithms. For the girdle form of the Watson distribution, a faster algorithm is proposed. As a result, Johnson's sampling algorithm for the Bingham distribution is improved.
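For the bipolar Watson distribution on the sphere S², the cosine t = μ'x of the colatitude has density proportional to exp(κt²) on [-1, 1], so the envelope-rejection idea can be illustrated with a plain uniform envelope. This sketch is only an illustration of the rejection principle, not the compact algorithm of the paper (which uses a tighter envelope and also covers the girdle form, κ < 0).

```python
import numpy as np

def sample_watson_cosine(kappa, size, rng=0):
    """Rejection sampling of t = cos(theta) for the bipolar Watson distribution,
    with density proportional to exp(kappa * t^2) on [-1, 1], kappa > 0.
    Envelope: uniform on [-1, 1] scaled by the density's maximum exp(kappa)."""
    gen = np.random.default_rng(rng)
    out = []
    while len(out) < size:
        t = gen.uniform(-1.0, 1.0, size)
        u = gen.uniform(0.0, 1.0, size)
        # accept with probability f(t) / envelope = exp(kappa * (t^2 - 1))
        out.extend(t[u <= np.exp(kappa * (t * t - 1.0))].tolist())
    return np.array(out[:size])
```

The uniform envelope becomes inefficient as κ grows (the acceptance probability decays roughly like e^{-κ}), which is precisely why tighter envelopes such as the one in the paper matter in practice.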