4,619 results found (search time: 203 ms).
71.
A stable money demand function is essential when a monetary aggregate is used as a monetary policy instrument. There is therefore a need to examine the stability of the money demand function in Nigeria after the deregulation of the financial sector. To this end, the study employed CUSUM (cumulative sum) and CUSUMSQ (CUSUM of squares) tests, after using the autoregressive distributed lag (ARDL) bounds test to determine the existence of a long-run relationship between monetary aggregates and their determinants. Results show that a long-run relationship holds and that the demand for money is stable in Nigeria. In addition, the inflation rate is found to be a better proxy for the opportunity-cost variable than the interest rate. The main implication of the study is that the interest rate is ineffective as a monetary policy instrument in Nigeria.
72.
Motivated by the observation that consumers exhibit salient preferences over switching costs and price attributes, this study applies salience theory to the switching-cost and pricing strategies of a monopoly two-sided platform. The findings are: (1) In a market with asymmetric salience preferences, the platform charges the lowest price to the price-sensitive side and the highest price to the side with high switching costs; in a market with symmetric salience preferences, the platform's optimal prices lie between those of the asymmetric cases, but the price charged to the price-sensitive side remains lower than that charged to the switching-cost-sensitive side. (2) Platform profit is lowest in a high-switching-cost market and highest in a low-switching-cost market, with the mixed case in between. These conclusions suggest that platform firms should take measures to reduce users' switching costs of joining the platform in order to increase profit, which is consistent with real-world case studies.
73.
This paper presents some powerful omnibus tests for multivariate normality based on the likelihood ratio and characterizations of the multivariate normal distribution. The power of the proposed tests is studied against various alternatives via Monte Carlo simulations. Simulation studies show that our tests compare well with other powerful tests, including multivariate versions of the Shapiro–Wilk test and the Anderson–Darling test.
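As a point of reference (not the paper's likelihood-ratio tests), a classical baseline for multivariate normality is Mardia's skewness statistic, which is easy to implement directly:

```python
import numpy as np
from scipy import stats

def mardia_skewness(X):
    """Mardia's multivariate skewness test -- a classical baseline, not the
    likelihood-ratio tests of the paper.  Under multivariate normality,
    n * b1p / 6 is asymptotically chi-square with p(p+1)(p+2)/6 d.f."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))
    D = Xc @ S_inv @ Xc.T            # Mahalanobis cross-products d_ij
    b1p = np.mean(D**3)              # (1/n^2) * sum_{i,j} d_ij^3
    stat = n * b1p / 6.0
    df = p * (p + 1) * (p + 2) / 6.0
    return stat, stats.chi2.sf(stat, df)

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0, 0, 0], np.eye(3), size=500)
stat, pval = mardia_skewness(X)
print(round(pval, 3))
```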
74.
Tests for equality of variances using independent samples are widely used in data analysis. Conover et al. [A comparative study of tests for homogeneity of variance, with applications to the outer continental shelf bidding data. Technometrics. 1981;23:351–361] won the Youden Prize by comparing 56 variations of popular tests for variance on the basis of robustness and power in 60 different scenarios. None of the tests they compared were robust and powerful for the skewed distributions they considered. This study looks at 12 variations they did not consider, and shows that 10 are robust for the skewed distributions they considered plus the lognormal distribution, which they did not study. Three of these 12 have clearly superior power for skewed distributions, and are competitive in terms of robustness and power for all of the distributions considered. They are recommended for general use based on robustness, power, and ease of application.
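The sensitivity of Levene-type tests to the centering choice, which drives much of the robustness discussion for skewed data, can be illustrated with SciPy (this is a generic illustration on lognormal data, not the specific 12 variations studied in the paper):

```python
import numpy as np
from scipy import stats

# Classical Levene (center='mean') vs. the Brown-Forsythe variant
# (center='median'), which is known to be more robust for skewed data.
rng = np.random.default_rng(2)
a = rng.lognormal(0.0, 1.0, 100)    # skewed sample 1
b = rng.lognormal(0.0, 1.0, 100)    # skewed sample 2, same variance
stat_mean, p_mean = stats.levene(a, b, center='mean')
stat_med, p_med = stats.levene(a, b, center='median')
print(round(p_mean, 3), round(p_med, 3))
```

Under this null of equal variances, a robust variant should reject at roughly its nominal rate even though both samples are heavily skewed.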
75.
In late-phase confirmatory clinical trials in oncology, time-to-event (TTE) endpoints are commonly used as primary endpoints for establishing the efficacy of investigational therapies. Among these TTE endpoints, overall survival (OS) is considered the gold standard. However, OS data can take years to mature, and its use as a measure of efficacy can be confounded by post-treatment rescue therapies or supportive care. Therefore, to accelerate the development process and better characterize the treatment effect of new investigational therapies, other TTE endpoints such as progression-free survival and event-free survival (EFS) are applied as primary efficacy endpoints in some confirmatory trials, either as a surrogate for OS or as a direct measure of clinical benefit. For evaluating novel treatments for acute myeloid leukemia, EFS has gradually been recognized as a direct measure of clinical benefit. However, the application of an EFS endpoint is still controversial, mainly due to the debate surrounding the definition of treatment-failure (TF) events. In this article, we investigate the EFS endpoint under the most conservative definition of the timing of TF, namely Day 1 after randomization. Specifically, the corresponding non-proportional-hazards pattern of the EFS endpoint is investigated with both analytical and numerical approaches.
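Why dating TF to Day 1 induces non-proportional hazards can be seen in a small numerical sketch (the model and parameter values below are illustrative assumptions, not the paper's): counting TF as an event at the start puts a point mass at the beginning of the EFS curve, so if the arms differ in TF rate, the cumulative-hazard ratio drifts over time.

```python
import numpy as np

# Illustrative EFS model: S(t) = (1 - pi) * exp(-lam * t) for t >= 1 day,
# where pi is the probability of a Day-1 treatment-failure event and lam is
# the daily hazard thereafter.  (Hypothetical parameter values.)
pi_ctl, lam_ctl = 0.30, 0.0020    # control: 30% TF, higher hazard afterwards
pi_trt, lam_trt = 0.15, 0.0015    # treatment: fewer TFs, lower hazard

t = np.array([30.0, 180.0, 360.0, 720.0])      # days since randomization
H_ctl = -np.log(1 - pi_ctl) + lam_ctl * t      # cumulative hazard, control
H_trt = -np.log(1 - pi_trt) + lam_trt * t      # cumulative hazard, treatment
ratio = H_trt / H_ctl
print(np.round(ratio, 3))
```

The ratio is far from constant: the Day-1 point mass dominates early on, while the ordinary hazards dominate late, which is exactly a non-proportional-hazards pattern.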
76.
In problems related to the evaluation of products or services (e.g. in customer satisfaction analysis), the main difficulty concerns the synthesis of information, made necessary by the presence of several evaluators and many response variables (aspects under evaluation). In this article, the problem of determining and comparing the satisfaction of different groups of customers, in the presence of multivariate response variables and using the results of pairwise comparisons, is addressed. Within the framework of group ranking methods and multi-criteria decision-making theory, a new approach based on nonparametric techniques for evaluating group satisfaction in a multivariate framework is proposed, and the concept of Multivariate Relative Satisfaction is defined. An application to the evaluation of public transport services, namely the railway service and the urban bus service, by students of the University of Ferrara (Italy) is also discussed.
77.
The process of using data to infer the existence of stochastic dominance is subject to sampling error. Kroll and Levy (1980), among others, have presented simulation results for several normal and lognormal distributions which show high error probabilities for a wide range of parameter values. This paper continues this line of research and uses simulation to estimate error probabilities. Distributions considered are a pair of normals and a pair of lognormals. Analysis of these distributions is made computationally feasible through theoretical results which reduce the number of parameters of the pair of distributions from four to two.
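The kind of sampling error at issue is easy to reproduce (the parameter values here are illustrative, not those of the paper): one lognormal first-order dominates the other in the population, yet sample ECDFs frequently cross, so dominance is not detected in the sample.

```python
import numpy as np

# Monte Carlo sketch of sampling error in first-order stochastic dominance
# (FSD) inference.  X ~ LN(0.5, 1) dominates Y ~ LN(0.0, 1) (same sigma,
# larger mu), but finite samples can fail to show it.
rng = np.random.default_rng(3)

def ecdfs_cross(x, y, grid):
    """True if the sample ECDFs cross, i.e. sample FSD of X over Y fails."""
    Fx = np.searchsorted(np.sort(x), grid, side='right') / x.size
    Fy = np.searchsorted(np.sort(y), grid, side='right') / y.size
    return not np.all(Fx <= Fy)

n, reps = 50, 500
errors = 0
for _ in range(reps):
    x = rng.lognormal(0.5, 1.0, n)
    y = rng.lognormal(0.0, 1.0, n)
    grid = np.sort(np.concatenate([x, y]))
    errors += ecdfs_cross(x, y, grid)
err_rate = errors / reps
print(err_rate)
```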
78.
It is known that when there is a break in the variance (unconditional heteroskedasticity) of the error term in linear regression models, routine application of the Lagrange multiplier (LM) test for autocorrelation can cause potentially significant size distortions. We propose a new test for autocorrelation that is robust in the presence of a break in variance. The proposed test is a modified LM test based on a generalized least squares regression. Monte Carlo simulations show that the new test performs well in finite samples: it is comparable to other existing heteroskedasticity-robust tests in terms of size, and much better in terms of power.
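The flavor of the approach can be sketched as follows. This is a simplification in the spirit of, not identical to, the paper's GLS-based test: with a known break point, residuals are rescaled by sub-sample standard deviations before running a Breusch-Godfrey-type LM regression.

```python
import numpy as np
from scipy import stats

def lm_autocorr_rescaled(y, X, break_at):
    """LM (Breusch-Godfrey-type) test for first-order autocorrelation with
    residuals rescaled around a known variance break -- an illustrative
    simplification, not the paper's exact test."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    s = np.empty_like(e)
    s[:break_at] = e[:break_at].std(ddof=1)    # pre-break residual scale
    s[break_at:] = e[break_at:].std(ddof=1)    # post-break residual scale
    u = e / s                                  # approximately homoskedastic
    # Auxiliary regression of u_t on X_t and u_{t-1}; LM = n * R^2 ~ chi2(1).
    Z = np.column_stack([X[1:], u[:-1]])
    g, *_ = np.linalg.lstsq(Z, u[1:], rcond=None)
    resid = u[1:] - Z @ g
    r2 = 1 - resid.var() / u[1:].var()
    lm = (len(u) - 1) * r2
    return lm, stats.chi2.sf(lm, 1)

rng = np.random.default_rng(4)
n = 300
x = rng.normal(size=n)
sig = np.where(np.arange(n) < 150, 1.0, 4.0)   # variance break at t = 150
y = 1.0 + 2.0 * x + sig * rng.normal(size=n)   # no autocorrelation (null holds)
X = np.column_stack([np.ones(n), x])
lm, pval = lm_autocorr_rescaled(y, X, 150)
print(round(pval, 3))
```

Without the rescaling step, the ordinary LM test over-rejects in this design; rescaling restores approximately correct size.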
79.
In this paper, we investigate four existing and three new confidence interval estimators for the negative binomial proportion (i.e., the proportion under inverse/negative binomial sampling). An extensive and systematic comparative study among these confidence interval estimators through Monte Carlo simulations is presented. The performance of these confidence intervals is evaluated in terms of coverage probabilities and expected interval widths. Our simulation studies suggest that the confidence interval estimator based on the saddlepoint approximation is more appealing for large coverage levels (e.g., nominal level ≤ 1%), whereas the score confidence interval estimator is more desirable at commonly used coverage levels (e.g., nominal level > 1%). We illustrate these confidence interval construction methods with a real data set from a maternal congenital heart disease study.
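The coverage-probability methodology can be sketched with the simplest baseline, a Wald interval under inverse binomial sampling (an illustrative baseline only; the paper's saddlepoint and score intervals are not reproduced here). The asymptotic variance p²(1−p)/r follows from the Fisher information of the negative binomial likelihood.

```python
import numpy as np

# Monte Carlo coverage check of a Wald interval for the proportion p under
# inverse (negative) binomial sampling: sample until r successes, observe
# N total trials, and set p_hat = r / N.
rng = np.random.default_rng(5)
r, p, reps, z = 30, 0.4, 2000, 1.96
covered = 0
for _ in range(reps):
    failures = rng.negative_binomial(r, p)   # failures before the r-th success
    n_trials = r + failures
    p_hat = r / n_trials
    half = z * np.sqrt(p_hat**2 * (1 - p_hat) / r)
    covered += (p_hat - half <= p <= p_hat + half)
coverage = covered / reps
print(round(coverage, 3))
```

Repeating this loop for each candidate interval, and recording widths alongside coverage, is exactly the comparison design the abstract describes.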
80.
This article considers the issue of performing tests in linear heteroskedastic models when the test statistic employs a consistent variance estimator. Several different estimators are considered, namely HC0, HC1, HC2, HC3, and their bias-adjusted versions. The evaluation is performed using numerical integration; the Imhof algorithm is used to that end. The results show that bias adjustment of the variance estimators used to construct test statistics delivers more reliable tests for the HC0 and HC1 estimators, but the same does not hold for the HC3 estimator. Overall, the most reliable test is the HC3-based one.
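The four estimators differ only in how squared residuals are weighted inside the sandwich covariance; a compact implementation makes the relationship explicit (standard textbook formulas, on simulated data):

```python
import numpy as np

def hc_standard_errors(y, X):
    """Heteroskedasticity-consistent (sandwich) standard errors HC0-HC3
    for OLS, using the usual small-sample adjustments."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    e = y - X @ beta
    h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)   # leverages (hat-matrix diag)
    out = {}
    for name, w in {
        'HC0': e**2,                       # White's original estimator
        'HC1': e**2 * n / (n - k),         # degrees-of-freedom correction
        'HC2': e**2 / (1 - h),             # leverage correction
        'HC3': e**2 / (1 - h)**2,          # jackknife-like correction
    }.items():
        meat = X.T @ (w[:, None] * X)
        cov = XtX_inv @ meat @ XtX_inv
        out[name] = np.sqrt(np.diag(cov))
    return beta, out

rng = np.random.default_rng(6)
n = 100
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + np.abs(x) * rng.normal(size=n)   # heteroskedastic errors
X = np.column_stack([np.ones(n), x])
beta, se = hc_standard_errors(y, X)
print({k: np.round(v, 3) for k, v in se.items()})
```

Since 1/(1−h)² ≥ 1/(1−h) ≥ 1 for leverages in (0, 1), the HC3 standard errors are never smaller than HC2 or HC0, which is why HC3-based tests tend to be the most conservative.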
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号