Full-text access type
Paid full text | 3480 articles |
Free | 146 articles |
Free (domestic) | 65 articles |
Subject classification
Management | 383 articles |
Labour science | 1 article |
Ethnology | 11 articles |
Demography | 41 articles |
Collected works and series | 193 articles |
Theory and methodology | 52 articles |
General | 1922 articles |
Sociology | 257 articles |
Statistics | 831 articles |
Publication year
2024 | 64 articles |
2023 | 139 articles |
2022 | 123 articles |
2021 | 85 articles |
2020 | 74 articles |
2019 | 108 articles |
2018 | 91 articles |
2017 | 139 articles |
2016 | 119 articles |
2015 | 121 articles |
2014 | 160 articles |
2013 | 449 articles |
2012 | 287 articles |
2011 | 176 articles |
2010 | 125 articles |
2009 | 157 articles |
2008 | 121 articles |
2007 | 174 articles |
2006 | 149 articles |
2005 | 145 articles |
2004 | 128 articles |
2003 | 111 articles |
2002 | 96 articles |
2001 | 78 articles |
2000 | 52 articles |
1999 | 35 articles |
1998 | 19 articles |
1997 | 21 articles |
1996 | 32 articles |
1995 | 26 articles |
1994 | 15 articles |
1993 | 17 articles |
1992 | 18 articles |
1991 | 7 articles |
1990 | 9 articles |
1989 | 6 articles |
1988 | 5 articles |
1987 | 6 articles |
1986 | 2 articles |
1984 | 1 article |
1982 | 1 article |
Sort order: 3691 results found, search time 0 ms
191.
《Journal of Statistical Computation and Simulation》2012,82(1-4):287-310
For the two-sample location and scale problem we propose an adaptive test based on so-called Lepage-type tests. The well-known test of Lepage (1971) combines the Wilcoxon test for location alternatives with the Ansari-Bradley test for scale alternatives and behaves well for symmetric, medium-tailed distributions. For the case of short-, medium- and long-tailed distributions we replace the Wilcoxon test and the Ansari-Bradley test by suitable other two-sample tests for location and scale, respectively, in order to obtain higher power than the classical Lepage test for such distributions. These tests are here called Lepage-type tests. In practice, however, we generally have no clear idea about the distribution having generated our data. Thus, an adaptive test should be applied which takes the given data set into consideration. The proposed adaptive test is based on the concept of Hogg (1974): first, classify the unknown symmetric distribution function with respect to a measure of tailweight, and second, apply an appropriate Lepage-type test for this classified type of distribution. We compare the adaptive test with the three Lepage-type tests in the adaptive scheme, with the classical Lepage test, and with other parametric and nonparametric tests. The power comparison is carried out via Monte Carlo simulation. It is shown that the adaptive test is the best one for the broad class of distributions considered.
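The classical Lepage statistic that these Lepage-type tests generalize can be sketched as follows; this is a minimal illustration assuming no ties, not the paper's adaptive procedure:

```python
import numpy as np

def lepage_statistic(x, y):
    """Lepage (1971) statistic: squared standardized Wilcoxon rank-sum
    statistic (location) plus squared standardized Ansari-Bradley
    statistic (scale). Assumes no ties in the pooled sample."""
    m, n = len(x), len(y)
    N = m + n
    combined = np.concatenate([x, y])
    ranks = np.argsort(np.argsort(combined)) + 1  # ranks 1..N

    # Wilcoxon rank-sum statistic for sample x
    W = ranks[:m].sum()
    EW = m * (N + 1) / 2
    VW = m * n * (N + 1) / 12

    # Ansari-Bradley scores: min(r, N + 1 - r)
    scores = np.minimum(ranks, N + 1 - ranks)
    A = scores[:m].sum()
    if N % 2 == 0:
        EA = m * (N + 2) / 4
        VA = m * n * (N + 2) * (N - 2) / (48 * (N - 1))
    else:
        EA = m * (N + 1) ** 2 / (4 * N)
        VA = m * n * (N + 1) * (3 + N ** 2) / (48 * N ** 2)

    return (W - EW) ** 2 / VW + (A - EA) ** 2 / VA
```

Under the null of equal location and scale the statistic is approximately chi-squared with 2 degrees of freedom; the adaptive scheme of the paper swaps in other location and scale components after classifying tailweight.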
192.
《Journal of Statistical Computation and Simulation》2012,82(2):157-170
In this paper the Bayesian analysis of incomplete categorical data under informative general censoring proposed by Paulino and Pereira (1995) is revisited. That analysis is based on Dirichlet priors and can be applied to any missing-data pattern. However, few properties of the posterior distributions are known, and therefore severe limitations on the posterior computations remain. Here it is shown how a Monte Carlo simulation approach based on an alternative parameterisation can be used to overcome the former computational difficulties. The proposed simulation approach makes available the approximate estimation of general parametric functions and can be implemented in a very straightforward way.
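As a much-simplified illustration of Monte Carlo estimation of a posterior functional under a Dirichlet model (the paper's alternative parameterisation for censored data is considerably richer, and the function name and entropy functional here are illustrative):

```python
import numpy as np

def posterior_mean_entropy(counts, prior=1.0, draws=10000, seed=0):
    """Approximate the posterior mean of a parametric function (here the
    Shannon entropy of the cell probabilities) under a Dirichlet prior
    with complete categorical counts, by averaging over posterior draws."""
    rng = np.random.default_rng(seed)
    alpha = np.asarray(counts, dtype=float) + prior  # posterior parameters
    theta = rng.dirichlet(alpha, size=draws)         # posterior draws
    entropy = -(theta * np.log(theta)).sum(axis=1)   # functional of theta
    return entropy.mean()
```

Any other parametric function of the cell probabilities can be estimated the same way, by replacing the entropy line.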
193.
Jared L. Deutsch Clayton V. Deutsch 《Journal of statistical planning and inference》2012,142(3):763-772
Complex models can only be realized a limited number of times due to large computational requirements. Methods exist for generating input parameters for model realizations including Monte Carlo simulation (MCS) and Latin hypercube sampling (LHS). Recent algorithms such as maximinLHS seek to maximize the minimum distance between model inputs in the multivariate space. A novel extension of Latin hypercube sampling (LHSMDU) for multivariate models is developed here that increases the multidimensional uniformity of the input parameters through sequential realization elimination. Correlations are considered in the LHSMDU sampling matrix using a Cholesky decomposition of the correlation matrix. Computer code implementing the proposed algorithm supplements this article. A simulation study comparing MCS, LHS, maximinLHS and LHSMDU demonstrates that increased multidimensional uniformity can significantly improve realization efficiency and that LHSMDU is effective for large multivariate problems.
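A basic Latin hypercube sampler, the starting point that maximinLHS and LHSMDU refine, can be sketched as follows (the function name and interface are illustrative; `scipy.stats.qmc.LatinHypercube` offers a production implementation):

```python
import numpy as np

def latin_hypercube(n, d, seed=None):
    """Basic Latin hypercube sample of n points in [0,1]^d: each of the
    n equal-probability strata in every dimension receives exactly one
    point, placed uniformly at random within its stratum."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=(n, d))          # position within each stratum
    samples = np.empty((n, d))
    for j in range(d):
        perm = rng.permutation(n)         # shuffle stratum assignment
        samples[:, j] = (perm + u[:, j]) / n
    return samples
```

maximinLHS would additionally search over such designs for large minimum inter-point distance, and LHSMDU (per the abstract) improves multidimensional uniformity by eliminating realizations sequentially.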
194.
The complex Bingham distribution is relevant for the shape analysis of landmark data in two dimensions. In this paper it is shown that the problem of simulating from this distribution reduces to simulation from a truncated multivariate exponential distribution. Several simulation methods are described and their efficiencies are compared.
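A univariate sketch of the key ingredient, inversion sampling from an exponential distribution truncated to an interval, is shown below; the reduction in the paper is to a truncated multivariate exponential, so this is illustrative only:

```python
import numpy as np

def truncated_exponential(lam, size, seed=None):
    """Inversion sampling from Exp(lam) truncated to [0, 1]:
    F(x) = (1 - exp(-lam*x)) / (1 - exp(-lam)), so
    x = -log(1 - u*(1 - exp(-lam))) / lam for u ~ Uniform(0, 1)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=size)
    return -np.log1p(-u * (1.0 - np.exp(-lam))) / lam
```

The multivariate truncated-exponential samplers compared in the paper build on this kind of one-dimensional inversion step.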
195.
The traditional exponentially weighted moving average (EWMA) chart is one of the most popular control charts used in practice today. In-control robustness is the key to the proper design and implementation of any control chart; without it, the chart's out-of-control shift detection capability is almost meaningless. To this end, Borror et al. [5] studied the performance of the traditional EWMA chart for the mean for i.i.d. data. We use a more extensive simulation study to further investigate the in-control robustness (to non-normality) of the three EWMA designs studied by Borror et al. [5]. Our study includes a much wider collection of non-normal distributions, including light- and heavy-tailed and symmetric and asymmetric bi-modal distributions, as well as the contaminated normal, which is particularly useful for studying the effects of outliers. We also consider two separate cases: (i) when the process mean and standard deviation are both known and (ii) when they are both unknown and estimated from an in-control Phase I sample. In addition, unlike the study by Borror et al. [5], we do not use the average run-length (ARL) as the sole performance measure: we also consider the standard deviation of the run-length (SDRL), the median run-length (MDRL), the first and third quartiles, and the 1st and 99th percentiles of the in-control run-length distribution for a better overall assessment of the traditional EWMA chart's in-control performance. Our findings sound a cautionary note against the (over)use of the EWMA chart in practice, at least with some types of non-normal data. A summary and recommendations are provided.
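A minimal sketch of the traditional EWMA chart for the mean with time-varying control limits, from which run-length measures such as the ARL and SDRL are simulated (function names and default parameters here are illustrative, not the specific designs of Borror et al.):

```python
import numpy as np

def ewma_chart(x, lam=0.1, L=2.7, mu0=0.0, sigma=1.0):
    """EWMA statistic z_t = lam*x_t + (1-lam)*z_{t-1}, z_0 = mu0, with
    time-varying limits mu0 +/- L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)^(2t)))."""
    z = np.empty(len(x))
    prev = mu0
    for t, xt in enumerate(x):
        prev = lam * xt + (1 - lam) * prev
        z[t] = prev
    t = np.arange(1, len(x) + 1)
    width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
    return z, mu0 - width, mu0 + width

def run_length(x, **kw):
    """1-based index of the first out-of-control signal, or None."""
    z, lcl, ucl = ewma_chart(x, **kw)
    out = np.where((z < lcl) | (z > ucl))[0]
    return int(out[0]) + 1 if out.size else None
```

Repeating `run_length` over many simulated in-control series from a chosen (possibly non-normal) distribution yields the run-length distribution whose mean, SDRL, MDRL, quartiles, and percentiles the study examines.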
196.
In the large empirical literature on auctions, growing evidence indicates that bidders tend to be risk averse, yet current auction econometric methods usually consider only the risk-neutral case. To address this, the nonparametric estimation method for first-price auctions is extended: several estimators of the risk-aversion parameter are proposed to handle the risk-averse case, and the corresponding estimation procedure is summarized. Monte Carlo simulation experiments are used to analyse and evaluate the estimation performance. The results show that both the risk-aversion parameter and the private values are on the whole estimated well, confirming the validity of the method and providing some guidance for choosing among the risk-aversion parameter estimators.
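To illustrate why risk aversion matters for bids (a textbook equilibrium sketch under assumed primitives, not the paper's nonparametric estimator): with private values uniform on [0, 1] and CRRA utility u(x) = x**alpha, the symmetric equilibrium bid in a first-price auction is b(v) = v*(n-1)/(n-1+alpha), so more risk-averse bidders (smaller alpha) bid closer to their values:

```python
import numpy as np

def simulate_bids(n_bidders, alpha, n_auctions, seed=0):
    """Simulate equilibrium bids in first-price auctions with
    Uniform(0,1) private values and CRRA utility u(x) = x**alpha
    (alpha = 1 is risk neutrality). Illustrative data-generating
    process of the kind used in Monte Carlo evaluations."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(size=(n_auctions, n_bidders))   # private values
    return v * (n_bidders - 1) / (n_bidders - 1 + alpha)

bids_neutral = simulate_bids(4, 1.0, 2000)  # risk-neutral benchmark
bids_averse = simulate_bids(4, 0.5, 2000)   # risk-averse bidders bid higher
```

An estimator of the risk-aversion parameter would work in the opposite direction, recovering alpha and the value distribution from observed bids alone.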
197.
This study measures the effect of digital inclusive finance on rural households' vulnerability to relative poverty under different poverty-line standards and its optimal index interval, revealing the structural differences and mechanisms of its impact across households. The findings show that digital inclusive finance has both a "digital dividend" effect and a "digital divide" effect on rural households' vulnerability to relative poverty, the two forming an inverted U-shaped relationship; the optimal digital inclusive finance index interval for reducing vulnerability lies between 108 and 160. Structurally, under a lower poverty-line standard the breadth of digital finance coverage alleviates households' vulnerability more markedly, whereas under a higher standard the depth of digital finance usage has a larger mitigating effect, with a slight increasing trend. In terms of mechanisms, digital inclusive finance works mainly by improving households' digital skills and risk-management capacity and by narrowing income gaps, thereby reducing their vulnerability to relative poverty.
198.
This is probably the first paper which discusses likelihood inference for a random set using a germ‐grain model, where the individual grains are unobservable, edge effects occur and other complications appear. We consider the case where the grains form a disc process modelled by a marked point process, where the germs are the centres and the marks are the associated radii of the discs. We propose to use a recent parametric class of interacting disc process models, where the minimal sufficient statistic depends on various geometric properties of the random set, and the density is specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analysing Peter Diggle's heather data set, where we discuss the results of simulation‐based maximum likelihood inference and the effect of specifying different reference Poisson models.
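A sketch of simulating the reference Boolean model mentioned above (Poisson germs marked with i.i.d. radii) and querying coverage of the resulting random set; the intensity, radius distribution, and function names are illustrative assumptions, and the interacting disc models of the paper are more structured:

```python
import numpy as np

def boolean_disc_model(intensity, radius_mean, window=1.0, seed=0):
    """Simulate a Boolean disc model: Poisson-distributed number of germ
    points uniform in a square window, marked with i.i.d. exponential
    radii (an illustrative choice of mark distribution)."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(intensity * window ** 2)
    centres = rng.uniform(0.0, window, size=(n, 2))
    radii = rng.exponential(radius_mean, size=n)
    return centres, radii

def covered(points, centres, radii):
    """For each query point, True iff it lies inside at least one disc,
    i.e. inside the union that forms the random set."""
    d = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
    return (d <= radii[None, :]).any(axis=1)
```

Evaluating `covered` on a fine grid gives a binary image of the random set comparable to binarized data such as the heather data set.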
199.
黎国华 《武汉理工大学学报(社会科学版)》2012,(1):131-134
In view of the vigorous growth of digital publishing and China's coming strategy of vigorously building digital publishing with Chinese characteristics, this paper explores how university journals can take the lead in adopting digital publishing models. By analysing the successful experience of developed countries in Europe and North America with digital publishing, it proposes several digital publishing models for university journals suited to China's national conditions.
200.
Based on statistics drawn from print-book borrowing records of Jianghan University students from 2007 to 2011, this paper analyses, in terms of both borrowing volume and borrowing proportions, the actual demand for traditional print books among undergraduates accustomed to digital reading, then examines their reading psychology and reading habits, and offers suggestions for library reading-promotion activities.