61.
Various mathematical and statistical models for estimation of automobile insurance pricing are reviewed. The methods are compared on their predictive ability based on two sets of automobile insurance data for two different states collected over two different periods. The issue of model complexity versus data availability is resolved through a comparison of the accuracy of prediction. The models reviewed range from the use of simple cell means to various multiplicative-additive schemes to the empirical-Bayes approach. The empirical-Bayes approach, with prediction based on both model-based and individual cell estimates, seems to yield the best forecast.
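The shrinkage idea behind empirical-Bayes cell estimates can be sketched in a few lines. This is a hedged illustration using a Bühlmann-style credibility weight: the function name, the method-of-moments variance estimates, and shrinkage toward the grand mean are illustrative assumptions, not the exact scheme compared in the study.

```python
from statistics import mean, pvariance

def eb_cell_estimates(cells):
    """cells: dict mapping cell id -> list of observed losses (>= 2 per cell).
    Returns dict mapping cell id -> estimate shrunk toward the grand mean."""
    all_obs = [x for obs in cells.values() for x in obs]
    grand = mean(all_obs)
    cell_means = {c: mean(obs) for c, obs in cells.items()}
    # pooled within-cell (process) variance
    within = mean(pvariance(obs) for obs in cells.values() if len(obs) > 1)
    # between-cell variance of the true cell means: method of moments, floored at zero
    avg_n = mean(len(obs) for obs in cells.values())
    between = max(pvariance(list(cell_means.values())) - within / avg_n, 0.0)
    est = {}
    for c, obs in cells.items():
        n = len(obs)
        # credibility weight: large cells trust their own mean, small cells shrink
        z = n * between / (n * between + within) if between > 0 else 0.0
        est[c] = z * cell_means[c] + (1 - z) * grand
    return est
```

Each cell estimate lies between its raw cell mean and the model-based (here, grand-mean) estimate, which is the "both model-based and individual cell estimates" idea in the abstract.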
62.
Iliyan Georgiev, David I. Harvey, Stephen J. Leybourne & A. M. Robert Taylor, Journal of Business & Economic Statistics, 2013, 31(3): 528-541
In order for predictive regression tests to deliver asymptotically valid inference, account has to be taken of the degree of persistence of the predictors under test. There is also a maintained assumption that any predictability in the variable of interest is purely attributable to the predictors under test. Violation of this assumption by the omission of relevant persistent predictors renders the predictive regression invalid, and potentially also spurious, as both the finite sample and asymptotic size of the predictability tests can be significantly inflated. In response, we propose a predictive regression invalidity test based on a stationarity testing approach. To allow for an unknown degree of persistence in the putative predictors, and for heteroscedasticity in the data, we implement our proposed test using a fixed regressor wild bootstrap procedure. We demonstrate the asymptotic validity of the proposed bootstrap test by proving that the limit distribution of the bootstrap statistic, conditional on the data, is the same as the limit null distribution of the statistic computed on the original data, conditional on the predictor. This corrects a long-standing error in the bootstrap literature whereby it is incorrectly argued that for strongly persistent regressors and test statistics akin to ours the validity of the fixed regressor bootstrap obtains through equivalence to an unconditional limit distribution. Our bootstrap results are therefore of interest in their own right and are likely to have applications beyond the present context. An illustration is given by reexamining the results relating to U.S. stock returns data in Campbell and Yogo (2006). Supplementary materials for this article are available online.
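The fixed regressor wild bootstrap logic can be illustrated generically. The sketch below is not the authors' stationarity-type statistic: it holds the regressor fixed across draws, regenerates the dependent variable under the null of no predictability by flipping its demeaned values with Rademacher signs, and recomputes a simple statistic (absolute sample correlation). All function names are hypothetical.

```python
import random

def abs_corr(y, x):
    """Absolute sample correlation between y and x (neither constant)."""
    n = len(y)
    my, mx = sum(y) / n, sum(x) / n
    sxy = sum((yi - my) * (xi - mx) for yi, xi in zip(y, x))
    syy = sum((yi - my) ** 2 for yi in y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    return abs(sxy) / (syy * sxx) ** 0.5

def fixed_regressor_wild_bootstrap(y, x, stat_fn=abs_corr, n_boot=99, seed=0):
    """Bootstrap p-value: keep x fixed, resample y under the null of no
    predictability with Rademacher sign flips of its demeaned values."""
    rng = random.Random(seed)
    t_obs = stat_fn(y, x)
    ybar = sum(y) / len(y)
    resid = [yi - ybar for yi in y]  # residuals from the mean-only null model
    exceed = 0
    for _ in range(n_boot):
        y_star = [ybar + e * rng.choice((-1.0, 1.0)) for e in resid]
        if stat_fn(y_star, x) >= t_obs:
            exceed += 1
    return (exceed + 1) / (n_boot + 1)
```

Because x is never resampled, the bootstrap distribution of the statistic is conditional on the regressor, which is the key feature the paper's validity proof is about.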
63.
A 2 × 2 contingency table can often be analysed in an exact fashion by using Fisher's exact test and in an approximate fashion by using the chi-squared test with Yates' continuity correction, and it is traditionally held that the approximation is valid when the minimum expected quantity E satisfies E ≥ 5. Unfortunately, little research has been carried out into this belief, other than that it is necessary to establish a bound E > E*, that the condition E ≥ 5 may not be the most appropriate (Martín Andrés et al., 1992) and that E* is not a constant, but usually increases with the growth of the sample size (Martín Andrés & Herranz Tejedor, 1997). In this paper, the authors conduct a theoretical experimental study from which they ascertain that the value of E* (which is very variable and frequently quite a lot greater than 5) is strongly related to the magnitude of the skewness of the underlying hypergeometric distribution, and that bounding the skewness is equivalent to bounding E (which is the best control procedure). The study enables the expression for the above-mentioned E* (which in turn depends on the number of tails in the test, the alpha error used, the total sample size, and the minimum marginal imbalance) to be estimated. The authors also show that E* generally increases with the sample size and with the marginal imbalance, although it does reach a maximum. Some general and very conservative validity conditions are E ≥ 35.53 (one-tailed test) and E ≥ 7.45 (two-tailed test) for nominal alpha errors in the range 1% ≤ α ≤ 10%. The traditional condition E ≥ 5 is only valid when the samples are small and one of the marginals is very balanced; alternatively, the condition E ≥ 5.5 is valid for small samples or a very balanced marginal.
Finally, it is proved that the chi-squared test is always valid in tables where both marginals are balanced, and that the maximum skewness permitted is related to the maximum value of the bound E*, to its value for tables with at least one balanced marginal and to the minimum value that those marginals must have (in non-balanced tables) for the chi-squared test to be valid.
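Both tests and the minimum expected count E are straightforward to compute directly for a 2 × 2 table. A minimal pure-Python sketch (function names are illustrative; the two-sided Fisher p-value sums all hypergeometric probabilities no larger than that of the observed table):

```python
from math import comb

def min_expected(a, b, c, d):
    """Minimum expected cell count E of the 2 x 2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row_tot, col_tot = (a + b, c + d), (a + c, b + d)
    return min(r * k / n for r in row_tot for k in col_tot)

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value via the hypergeometric distribution."""
    r1, r2, c1 = a + b, c + d, a + c
    n = r1 + r2
    def p(x):  # P(X = x) for the cell count X given the fixed marginals
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)
    p_obs = p(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    # small tolerance handles floating-point ties between equal probabilities
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs + 1e-12)

def yates_chi2(a, b, c, d):
    """Chi-squared statistic with Yates' continuity correction."""
    n = a + b + c + d
    num = n * (abs(a * d - b * c) - n / 2) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den
```

For the table [[3, 7], [6, 4]], the minimum expected count is 4.5, so under the traditional rule the chi-squared approximation would already be suspect.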
64.
Andreas I. Sashegyi, K. Stephen Brown & Patrick J. Farrell, Revue canadienne de statistique, 2000, 28(1): 45-63
Some studies generate data that can be grouped into clusters in more than one way. Consider for instance a smoking prevention study in which responses on smoking status are collected over several years in a cohort of students from a number of different schools. This yields longitudinal data, also cross-sectionally clustered in schools. The authors present a model for analyzing binary data of this type, combining generalized estimating equations and estimation of random effects to address the longitudinal and cross-sectional dependence, respectively. The estimation procedure for this model is discussed, as are the results of a simulation study used to investigate the properties of its estimates. An illustration using data from a smoking prevention trial is given.
65.
How does granting certificates of ‘business clean of Arab workers’ to owners of shops, stores, and Jewish businesses who prove they are not employing Arab workers shape identity? Identity development involves making sense of, and coming to terms with, the social world one inhabits, recognizing choices and making decisions within contexts, and finding a sense of unity within one's self while claiming a place in the world. Since there is no objective, ahistoric, universal trans-cultural identity, views of identity must be historically and culturally situated. This paper explores identity issues among members of the Palestinian Arab minority in Israel. While there is a body of literature exploring this subject, we will offer a different perspective by foregrounding the political and economic contexts that form an essential foundation for understanding identity formation among this minority group. We argue that, as a genre of settler colonialism, ‘pure settlement colonies’ involve the conquering not only of land, but of labor as well, excluding the natives from the economy. Such an exclusion from the economy is significant for its cultural, social, and ideological consequences, and therefore is especially significant in the identity formation discussed in the paper. We briefly review existing approaches to the study of identity among Palestinian Arabs in Israel, and illustrate our theoretical contextual framework. Finally, we present and discuss findings from a new study of identity among Palestinian Arab college students in Israel through the lens of this framework.
66.
A. K. S. Alshabani, I. L. Dryden, C. D. Litton & J. Richardson, Journal of the Royal Statistical Society, Series C (Applied Statistics), 2007, 56(4): 415-428
Summary. We consider the Bayesian analysis of human movement data, where the subjects perform various reaching tasks. A set of markers is placed on each subject and a system of cameras records the three-dimensional Cartesian co-ordinates of the markers during the reaching movement. It is of interest to describe the mean and variability of the curves that are traced by the markers during one reaching movement, and to identify any differences due to covariates. We propose a methodology based on a hierarchical Bayesian model for the curves. An important part of the method is to obtain identifiable features of the movement so that different curves can be compared after temporal warping. We consider four landmarks and a set of equally spaced pseudolandmarks are located in between. We demonstrate that the algorithm works well in locating the landmarks, and shape analysis techniques are used to describe the posterior distribution of the mean curve. A feature of this type of data is that some parts of the movement data may be missing; the Bayesian methodology is easily adapted to cope with this situation.
67.
I. H. Tajuddin, Journal of Applied Statistics, 1999, 26(6): 767-774
In 1995, Arnold and Groeneveld introduced the measure of skewness γM, defined in terms of F(mode), the cumulative probability of a random variable less than or equal to the mode of the distribution. They assumed that the mode of a distribution exists and is unique. Independently, in 1996, the present author arrived at the measure of skewness T, which is given in terms of F(mean). This measure possesses desirable properties and is equally simple. The measure γM satisfies −1 ≤ γM ≤ 1, with 1 (−1) indicating extreme right (left) skewness. However, the measure T can take on any value on the real line; hence, an equivalent measure γT is considered and is compared with γM. We consider a variety of families of distributions and include in our study other measures of skewness of interest. Skewness values are easily obtained using MINITAB programs.
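The mode-based measure is trivial to evaluate once the CDF and the mode are known. The sketch below assumes the Arnold-Groeneveld form γM = 1 − 2F(mode), which matches the bounds and sign convention stated in the abstract, though the abstract itself only says the measure is defined via F(mode).

```python
from math import exp

def arnold_groeneveld_skewness(cdf, mode):
    """gamma_M = 1 - 2 * F(mode); +1 (-1) signals extreme right (left) skewness.
    Formula assumed from Arnold & Groeneveld (1995)."""
    return 1.0 - 2.0 * cdf(mode)

# Exponential(1): mode at 0, heavily right-skewed, F(0) = 0
exp_cdf = lambda x: 1.0 - exp(-x) if x >= 0 else 0.0
g_exp = arnold_groeneveld_skewness(exp_cdf, 0.0)

# Standard logistic: symmetric about its mode 0, F(0) = 0.5
logistic_cdf = lambda x: 1.0 / (1.0 + exp(-x))
g_sym = arnold_groeneveld_skewness(logistic_cdf, 0.0)
```

The exponential case hits the extreme right-skew value 1, and any distribution symmetric about a unique mode scores 0, illustrating the bounds discussed above.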
68.
69.
J. J. Brown, I. D. Diamond, R. L. Chambers, L. J. Buckner & A. D. Teague, Journal of the Royal Statistical Society, Series A (Statistics in Society), 1999, 162(2): 247-267
As a result of lessons learnt from the 1991 census, a research programme was set up to seek improvements in census methodology. Underenumeration has been placed top of the agenda in this programme, and every effort is being made to achieve as high a coverage as possible in the 2001 census. In recognition, however, that 100% coverage will never be achieved, the one-number census (ONC) project was established to measure the degree of underenumeration in the 2001 census and, if possible, to adjust fully the outputs from the census for that undercount. A key component of this adjustment process is a census coverage survey (CCS). This paper presents an overview of the ONC project, focusing on the design and analysis methodology for the CCS. It also presents results that allow the reader to evaluate the robustness of this methodology.
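Coverage surveys of this kind typically rest on dual-system (capture-recapture) estimation: people counted by the census, people counted by the CCS, and the overlap together yield an estimate of the true population. The sketch below is the textbook Lincoln-Petersen form only; the actual ONC methodology is considerably more elaborate (stratification, matching error, dependence adjustments), so treat this as an illustration of the principle, not the project's estimator.

```python
def dual_system_estimate(census_count, ccs_count, matched):
    """Lincoln-Petersen dual-system estimate of the true population size:
    N_hat = (census * ccs) / matched, assuming independent captures.
    Returns (N_hat, estimated undercount rate of the census)."""
    if matched == 0:
        raise ValueError("no matched individuals; estimate undefined")
    n_hat = census_count * ccs_count / matched
    undercount_rate = 1.0 - census_count / n_hat
    return n_hat, undercount_rate
```

For example, if the census counts 900 people in an area, the CCS counts 500, and 450 individuals are matched between the two, the estimated true population is 1000, implying a 10% undercount.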
70.
I. Gijbels, A. Pope & M. P. Wand, Journal of the Royal Statistical Society, Series B (Statistical Methodology), 1999, 61(1): 39-50
Exponential smoothing is the most common model-free means of forecasting a future realization of a time series. It requires the specification of a smoothing factor which is usually chosen from the data to minimize the average squared residual of previous one-step-ahead forecasts. In this paper we show that exponential smoothing can be put into a nonparametric regression framework and gain some interesting insights into its performance through this interpretation. We also use theoretical developments from the kernel regression field to derive, for the first time, asymptotic properties of exponential smoothing forecasters.
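Simple exponential smoothing with a data-chosen smoothing factor, as described above, can be sketched in a few lines (grid search over the smoothing factor; function names are illustrative):

```python
def ses_forecast(y, alpha):
    """One-step-ahead forecasts from simple exponential smoothing,
    initialized at the first observation."""
    f = [y[0]]
    for t in range(1, len(y)):
        f.append(alpha * y[t - 1] + (1 - alpha) * f[t - 1])
    return f

def fit_alpha(y, grid=None):
    """Choose the smoothing factor on a grid to minimize the sum of
    squared one-step-ahead residuals, as described in the text."""
    grid = grid or [i / 100 for i in range(1, 100)]
    def sse(a):
        f = ses_forecast(y, a)
        return sum((yi - fi) ** 2 for yi, fi in zip(y[1:], f[1:]))
    return min(grid, key=sse)
```

On a strongly trending series the criterion pushes the smoothing factor toward 1 (forecasts track the latest observation), which is the kind of behaviour the nonparametric-regression interpretation in the paper helps explain.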