991.
Unreplicated factorial designs pose a difficult problem in analysis because there are no degrees of freedom left to estimate the error. Daniel [Technometrics 1 (1959), pp. 311-341] proposed an ingenious graphical method that does not require σ to be estimated. Here we try to put Daniel's method into a formal framework and remove the subjectivity that it carries. A simulation study shows that the proposed method behaves better than Lenth's [Technometrics 31 (1989), pp. 469-473] popular method.
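As a point of reference, Lenth's method cited above can be sketched in a few lines. This is a minimal illustration, not the authors' proposed procedure: the trimming constants 1.5 and 2.5 follow Lenth (1989), while the critical value is left as a user-supplied parameter (Lenth suggests a t quantile with m/3 degrees of freedom for m contrasts).

```python
import statistics

def lenth_pse(contrasts):
    """Lenth's pseudo standard error (PSE) for unreplicated factorial effects.

    s0 is 1.5 times the median absolute contrast; the PSE re-takes the
    median after trimming contrasts larger than 2.5 * s0.
    """
    abs_c = [abs(c) for c in contrasts]
    s0 = 1.5 * statistics.median(abs_c)
    trimmed = [a for a in abs_c if a < 2.5 * s0]
    return 1.5 * statistics.median(trimmed)

def active_effects(contrasts, t_crit):
    """Flag effects whose |contrast| exceeds the margin of error t_crit * PSE."""
    pse = lenth_pse(contrasts)
    return [abs(c) > t_crit * pse for c in contrasts]
```

With 15 contrasts from a 2^4 design, two clearly large contrasts are flagged while the noise-level ones are not.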
992.
Terence C. Mills 《Journal of applied statistics》2008,35(10):1131-1138
While body fat is the most accurate measure of obesity, its measurement requires special equipment that can be costly and time-consuming to operate. Attention has thus typically focused on the easier-to-calculate body mass index (BMI). However, the ability of BMI to accurately identify obesity has been increasingly questioned. This paper asks whether more general body mass indices are appropriate measures of body fat. Using a data set of body fat, height and weight measurements, general models are estimated that nest a wide variety of weight–height indices as special cases. In the absence of a race and gender categorisation, the conventional BMI was found to be the appropriate index with which to predict body fat. When such a categorisation was made, however, the BMI was never selected as the appropriate index. In general, predicted female body fat was some 10 kg higher than that of a male of identical build, and predicted percentage body fat was over 11 percentage points higher, but age effects were smaller for females. Considerable racial differences in predicted body fat were found for males, but such differences were less marked for females. The implications of these findings for interpreting recent research on the effect of obesity on health, society and economic factors are considered.
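The family of indices the paper nests can be illustrated concretely. The conventional BMI is weight divided by height squared (kg/m²); a sketch of the more general weight–height family w/h^p, of which BMI is the p = 2 special case, might look like this (the function names are illustrative, not from the paper):

```python
def body_mass_index(weight_kg, height_m):
    """Conventional BMI: weight divided by height squared (kg/m^2)."""
    return weight_kg / height_m ** 2

def weight_height_index(weight_kg, height_m, p):
    """General weight-height family w / h^p; p = 2 recovers the BMI."""
    return weight_kg / height_m ** p
```

For a 70 kg person 1.75 m tall, the BMI is about 22.9; estimating p from data, as the paper does, amounts to letting the exponent vary by group rather than fixing it at 2.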
993.
In this paper, we introduce logistic models to analyse fertility curves. The models are formulated as linear models of the log odds of fertility and are defined in terms of parameters that are interpreted as measures of level, location and shape of the fertility schedule. This parameterization is useful for the evaluation and interpretation of fertility trends and for projections of future period fertility. For a series of years, the proposed models admit a state-space formulation that allows coherent joint estimation of parameters and forecasting. The main features of the models compared with other alternatives are their functional simplicity, flexibility and interpretability of parameters. These and other features are analysed in this paper using examples and theoretical results. Data from different countries are analysed, and to validate the logistic approach, we compare the goodness of fit of the new model against well-known alternatives; the analysis gives superior results in most developed countries.
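The basic idea of a logistic fertility model can be sketched generically: if the cumulative fertility proportion up to each age follows a logistic curve, then its logit is linear in age and the two coefficients can be read as location and shape parameters. This sketch does not reproduce the paper's specific parameterization; it is a minimal, assumed form fitted by ordinary least squares on the logit scale.

```python
import math

def fit_logistic_schedule(ages, cum_props):
    """Least-squares fit of logit(cumulative fertility proportion) = a + b*age.

    b controls the spread (shape) of the schedule and -a/b its location:
    the age at which half of total fertility has been attained.
    """
    y = [math.log(p / (1 - p)) for p in cum_props]
    n = len(ages)
    mx = sum(ages) / n
    my = sum(y) / n
    b = (sum((x - mx) * (v - my) for x, v in zip(ages, y))
         / sum((x - mx) ** 2 for x in ages))
    a = my - b * mx
    return a, b
```

On data generated exactly from a logistic schedule, the fit recovers the parameters; on real schedules it gives the interpretable level/location/shape summary the abstract describes.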
994.
For asymptotic posterior normality in the one-parameter case, Weng [2003. On Stein's identity for posterior normality. Statist. Sinica 13, 495–506] proposed using a version of Stein's identity to write posterior expectations of functions of a normalized quantity in a form that is more transparent and can be analyzed easily. In the present paper we extend this approach to the multi-parameter case and compare our conditions with earlier work. Three examples illustrate the application of this method.
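The identity at the heart of this approach is easy to verify numerically in its classical form: for Z ~ N(0, 1) and a smooth f, E[Z f(Z)] = E[f'(Z)]. A quick Monte Carlo check (illustrative only; the paper works with far more general posterior versions of the identity):

```python
import random

def stein_check(f, fprime, n=200_000, seed=1):
    """Monte Carlo check of Stein's identity E[Z f(Z)] = E[f'(Z)], Z ~ N(0,1).

    Returns the two sample averages, which should agree for large n.
    """
    rng = random.Random(seed)
    zs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    lhs = sum(z * f(z) for z in zs) / n
    rhs = sum(fprime(z) for z in zs) / n
    return lhs, rhs
```

With f(z) = z³, both sides estimate E[Z⁴] = 3, illustrating how the identity converts an awkward expectation into a more tractable one.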
995.
This article introduces a new model for transaction prices in the presence of market microstructure noise in order to study the properties of the price process on two different time scales, namely, transaction time, where prices are sampled with every transaction, and tick time, where prices are sampled with every price change. Both sampling schemes have been used in the literature on realized variance, but a formal investigation into their properties has been lacking. Our empirical and theoretical results indicate that the return dynamics in transaction time are very different from those in tick time, and the choice of sampling scheme can therefore have an important impact on the properties of realized variance. For RV we find that tick time sampling is superior to transaction time sampling in terms of mean squared error, especially when the level of noise, the number of ticks, or the arrival frequency of efficient price moves is low. Importantly, we show that while the microstructure noise may appear close to IID in transaction time, it is highly dependent in tick time. As a result, bias-correction procedures that rely on the noise being independent can fail in tick time and are better implemented in transaction time.
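The two objects being compared can be made concrete. Realized variance is the sum of squared log returns over the sampled path, and tick-time sampling keeps only observations where the price actually changes. This sketch uses exhaustive sampling, under which the two schemes give identical RV (zero returns contribute nothing); the differences the paper studies arise once prices are sampled sparsely, e.g. every k-th transaction versus every k-th tick.

```python
import math

def realized_variance(prices):
    """Realized variance: sum of squared log returns along the sampled path."""
    logs = [math.log(p) for p in prices]
    return sum((b - a) ** 2 for a, b in zip(logs, logs[1:]))

def tick_time_sample(prices):
    """Tick-time sampling: keep only observations where the price changes."""
    out = [prices[0]]
    for p in prices[1:]:
        if p != out[-1]:
            out.append(p)
    return out
```

Sampling the tick-time path every k-th element (rather than the transaction-time path) is then a one-line slice, which is where the mean-squared-error comparison in the abstract becomes meaningful.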
996.
There is an emerging consensus in empirical finance that realized volatility series typically display long range dependence with a memory parameter (d) around 0.4 (Andersen et al., 2001; Martens et al., 2004). The present article provides some illustrative analysis of how long memory may arise from the accumulative process underlying realized volatility. The article also uses results in Lieberman and Phillips (2004, 2005) to refine statistical inference about d by higher order theory. Standard asymptotic theory has an O(n^{-1/2}) error rate in rejection probabilities, and the theory used here refines the approximation to an error rate of o(n^{-1/2}). The new formula is independent of unknown parameters, simple to calculate and user-friendly. The method is applied to test whether the reported long memory parameter estimates of Andersen et al. (2001) and Martens et al. (2004) differ significantly from the lower boundary (d = 0.5) of nonstationary long memory, and generally confirms earlier findings.
997.
W. W. Cooper Subhash C. Ray 《Journal of the Royal Statistical Society. Series A, (Statistics in Society)》2008,171(2):433-448
Summary. This is a response to Stone's criticisms of the Spottiswoode report to the UK Treasury, which was responding to the Treasury's request for improved methods to evaluate the efficiency and productivity of the 43 police districts in England and Wales. The Spottiswoode report recommended uses of data envelopment analysis (DEA) and stochastic frontier analysis (SFA), which Stone critiqued en route to proposing an alternative approach. Here we note some of the most serious errors in his criticism and inaccurate portrayals of DEA and SFA. Most of our attention is devoted to DEA and to Stone's recommended alternative approach, without much attention to SFA, partly because of his abbreviated discussion of the latter. In our response we attempt to be constructive as well as critical by showing how Stone's proposed approach can be joined to DEA to expand his proposal beyond limitations in his formulations.
998.
The Pushcart Prize, established in 1976, has a well-deserved reputation for highlighting the best in small press publication. The authors examined the first thirty volumes, 1976/1977 through 2006, to identify attributes of the items included in each volume, and placed the volumes into five time periods of six volumes each to facilitate trend analysis. To identify the most productive publications, titles that had fewer than four selections in the thirty volumes and did not appear in at least two time periods were eliminated. The authors examined press status as independent or affiliated, state and region of publication, and type of work (poetry or other). Finally, highly productive titles were reviewed in WorldCat to determine how frequently they are held in the United States. California, Massachusetts, New York, and Ohio have a continuing, substantial presence in the Prize volumes. Most of the publications included were still active and were affiliated with a larger institution. The three small press titles appearing most frequently were Ploughshares, Paris Review, and American Poetry Review. The Pushcart Prize selections most frequently listed in WorldCat were the Hudson Review, the Paris Review, and the American Poetry Review. Each is held by more than eight hundred U.S. libraries.
999.
C. A. Glasbey D. J. Allcroft 《Journal of the Royal Statistical Society. Series C, Applied statistics》2008,57(3):343-355
Summary. To investigate the variability in energy output from a network of photovoltaic cells, solar radiation was recorded at 10 sites every 10 min in the Pentland Hills to the south of Edinburgh. We identify spatiotemporal auto-regressive moving average models as the most appropriate to address this problem. Although previously considered computationally prohibitive to work with, we show that by approximating using toroidal space and fitting by matching auto-correlations, calculations can be substantially reduced. We find that a first-order spatiotemporal auto-regressive (STAR(1)) process with a first-order neighbourhood structure and a Matérn noise process provides an adequate fit to the data, and we demonstrate its use in simulating realizations of energy output.
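The STAR(1) structure is simple to simulate on a small grid. The sketch below uses iid Gaussian noise rather than the Matérn noise process fitted in the paper, and arbitrary illustrative coefficients; each site's value depends on its own previous value and the previous mean of its first-order neighbours.

```python
import random

def simulate_star1(nrows, ncols, steps, phi, psi, sigma, seed=0):
    """Simulate a STAR(1) field on a grid with first-order neighbourhood:
    x_t(s) = phi * x_{t-1}(s) + psi * mean of x_{t-1} over the four
    rook neighbours of s, plus iid Gaussian(0, sigma) noise.
    """
    rng = random.Random(seed)
    x = [[0.0] * ncols for _ in range(nrows)]
    history = []
    for _ in range(steps):
        new = [[0.0] * ncols for _ in range(nrows)]
        for i in range(nrows):
            for j in range(ncols):
                nbrs = [x[a][b]
                        for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= a < nrows and 0 <= b < ncols]
                new[i][j] = (phi * x[i][j]
                             + psi * sum(nbrs) / len(nbrs)
                             + rng.gauss(0.0, sigma))
        x = new
        history.append([row[:] for row in x])
    return history
```

With |phi| + |psi| < 1 the field is stable, and repeated runs give the kind of simulated realizations of spatially correlated output the abstract describes (the paper's toroidal approximation would replace the edge handling above with wrap-around indexing).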
1000.
The basic assumption underlying the concept of ranked set sampling is that actual measurement of units is expensive, whereas ranking is cheap. This may not hold in cases where ranking is itself moderately expensive. In such situations, based on total cost considerations, k-tuple ranked set sampling is known to be a viable alternative, in which one selects k units (instead of one) from each ranked set. In this article, we consider estimation of the distribution function based on k-tuple ranked set samples when the cost of selecting and ranking units is not ignorable. We investigate estimation in both the balanced and unbalanced data cases. Properties of the estimation procedure in the presence of ranking error are also investigated. Results of simulation studies, as well as an application to a real data set, are presented to illustrate some of the theoretical findings.
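For readers new to the design, a balanced (single-tuple) ranked set sample and the corresponding empirical distribution function estimator can be sketched as follows. This illustrates ordinary RSS with perfect ranking, not the paper's k-tuple, cost-adjusted procedure: for each rank r in a cycle, k units are drawn, ranked, and only the r-th is measured.

```python
import random

def rss_sample(pop_draw, k, cycles, rng):
    """Balanced ranked set sample: in each cycle, for each rank r = 1..k,
    draw a set of k units, rank them (perfect ranking assumed), and
    measure only the r-th order statistic."""
    sample = []
    for _ in range(cycles):
        for r in range(k):
            units = sorted(pop_draw(rng) for _ in range(k))
            sample.append(units[r])
    return sample

def ecdf(sample, t):
    """Empirical distribution function estimate F_hat(t) from the measured units."""
    return sum(1 for v in sample if v <= t) / len(sample)
```

Drawing from Uniform(0, 1), the estimator at t = 0.5 concentrates around the true value 0.5, and with less variability than a simple random sample of the same measured size, which is the efficiency gain that motivates the design.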