2,192 results found (search time: 15 ms)
61.
Let F(x) be a life distribution. An exact test is given for testing H0: F is exponential versus H1: F ∈ NBUE (NWUE), along with a table of critical values for n = 5(1)80 and n = 80(5)65. An asymptotic test is also made available for large values of n, where the standard normal table can be used for testing.
62.
Various mathematical and statistical models for estimation of automobile insurance pricing are reviewed. The methods are compared on their predictive ability based on two sets of automobile insurance data for two different states collected over two different periods. The issue of model complexity versus data availability is resolved through a comparison of the accuracy of prediction. The models reviewed range from the use of simple cell means to various multiplicative-additive schemes to the empirical-Bayes approach. The empirical-Bayes approach, with prediction based on both model-based and individual cell estimates, seems to yield the best forecast.
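The empirical-Bayes idea in the abstract above, blending each cell's own mean with a pooled, model-based estimate, can be sketched as follows. This is a generic James-Stein-style shrinkage illustration, not the authors' actual model; the function name and the method-of-moments variance estimate are assumptions made for the example.

```python
import numpy as np

def eb_cell_estimates(cell_means, cell_counts, overall_mean, within_var):
    """Empirical-Bayes shrinkage: blend each cell mean with a pooled estimate.

    Weights follow the classic credibility form: cells with more data
    (lower sampling variance) keep more of their own mean, while thin
    cells are pulled toward the overall mean.
    """
    cell_means = np.asarray(cell_means, dtype=float)
    cell_counts = np.asarray(cell_counts, dtype=float)
    # Sampling variance of each cell mean, and a method-of-moments
    # estimate of the between-cell variance (floored at zero)
    sampling_var = within_var / cell_counts
    between_var = max(np.var(cell_means, ddof=1) - sampling_var.mean(), 0.0)
    # Credibility weight on each cell's own mean
    z = between_var / (between_var + sampling_var)
    return z * cell_means + (1.0 - z) * overall_mean
```

A cell observed 400 times keeps essentially its own mean, while cells observed only a handful of times are shrunk substantially toward the pooled estimate.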
63.
In order for predictive regression tests to deliver asymptotically valid inference, account has to be taken of the degree of persistence of the predictors under test. There is also a maintained assumption that any predictability in the variable of interest is purely attributable to the predictors under test. Violation of this assumption by the omission of relevant persistent predictors renders the predictive regression invalid, and potentially also spurious, as both the finite sample and asymptotic size of the predictability tests can be significantly inflated. In response, we propose a predictive regression invalidity test based on a stationarity testing approach. To allow for an unknown degree of persistence in the putative predictors, and for heteroscedasticity in the data, we implement our proposed test using a fixed regressor wild bootstrap procedure. We demonstrate the asymptotic validity of the proposed bootstrap test by proving that the limit distribution of the bootstrap statistic, conditional on the data, is the same as the limit null distribution of the statistic computed on the original data, conditional on the predictor. This corrects a long-standing error in the bootstrap literature whereby it is incorrectly argued that for strongly persistent regressors and test statistics akin to ours the validity of the fixed regressor bootstrap obtains through equivalence to an unconditional limit distribution. Our bootstrap results are therefore of interest in their own right and are likely to have applications beyond the present context. An illustration is given by reexamining the results relating to U.S. stock returns data in Campbell and Yogo (2006). Supplementary materials for this article are available online.
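As a rough illustration of fixed regressor wild bootstrap mechanics (not the authors' stationarity-based invalidity test), the sketch below keeps the possibly persistent regressor fixed across draws, builds bootstrap samples from null residuals multiplied by independent standard normal draws, and compares a conventional slope t-statistic with its bootstrap distribution. All names and the choice of a simple t-statistic are assumptions made for the example.

```python
import numpy as np

def fixed_regressor_wild_bootstrap_pvalue(y, x, n_boot=999, seed=0):
    """Wild bootstrap p-value for the slope in y_t = a + b*x_t + e_t,
    holding the regressor fixed across bootstrap draws.

    Under H0: b = 0 the null residuals are just y demeaned; each
    bootstrap sample multiplies them by iid N(0, 1) draws, which
    preserves any heteroscedasticity pattern in the data.
    """
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    X = np.column_stack([np.ones_like(x), x])

    def slope_tstat(yv):
        beta = np.linalg.lstsq(X, yv, rcond=None)[0]
        e = yv - X @ beta
        s2 = (e @ e) / (len(yv) - 2)
        cov = s2 * np.linalg.inv(X.T @ X)
        return beta[1] / np.sqrt(cov[1, 1])

    t_obs = slope_tstat(y)
    rng = np.random.default_rng(seed)
    e0 = y - y.mean()  # residuals under the null of no predictability
    t_boot = np.array([
        slope_tstat(y.mean() + e0 * rng.standard_normal(len(y)))
        for _ in range(n_boot)
    ])
    # Two-sided bootstrap p-value
    return (1 + np.sum(np.abs(t_boot) >= abs(t_obs))) / (n_boot + 1)
```

Because x is held fixed, the bootstrap distribution of the statistic is conditional on the regressor, which is exactly the conditioning the abstract emphasises.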
64.
A 2 × 2 contingency table can often be analysed in an exact fashion by using Fisher's exact test and in an approximate fashion by using the chi-squared test with Yates' continuity correction, and it is traditionally held that the approximation is valid when the minimum expected quantity E satisfies E ≥ 5. Unfortunately, little research has been carried out into this belief, other than that it is necessary to establish a bound E > E*, that the condition E ≥ 5 may not be the most appropriate (Martín Andrés et al., 1992), and that E* is not a constant but usually increases with the sample size (Martín Andrés & Herranz Tejedor, 1997). In this paper, the authors conduct a theoretical experimental study from which they ascertain that the value of E* (which is highly variable and frequently quite a lot greater than 5) is strongly related to the magnitude of the skewness of the underlying hypergeometric distribution, and that bounding the skewness is equivalent to bounding E (which is the best control procedure). The study enables the expression for the above-mentioned E* (which in turn depends on the number of tails of the test, the alpha error used, the total sample size, and the minimum marginal imbalance) to be estimated. The authors also show that E* generally increases with the sample size and with the marginal imbalance, although it does reach a maximum. Some general and very conservative validity conditions are E ≥ 35.53 (one-tailed test) and E ≥ 7.45 (two-tailed test) for nominal alpha errors between 1% and 10%. The traditional condition E ≥ 5 is only valid when the samples are small and one of the marginals is very balanced; alternatively, the condition E ≥ 5.5 is valid for small samples or a very balanced marginal.
Finally, it is proved that the chi-squared test is always valid in tables where both marginals are balanced, and that the maximum skewness permitted is related to the maximum value of the bound E*, to its value for tables with at least one balanced marginal, and to the minimum value that those marginals must have (in non-balanced tables) for the chi-squared test to be valid.
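The two quantities the abstract relates, the minimum expected count E and the skewness of the hypergeometric distribution of one cell under fixed margins, are straightforward to compute. The helper below uses the standard closed-form expression for hypergeometric skewness; it illustrates the quantities involved, not the authors' validity procedure, and the function name is an assumption.

```python
def table_validity_stats(a, b, c, d):
    """For a 2x2 table [[a, b], [c, d]], return the minimum expected
    count E and the skewness of the hypergeometric distribution of the
    (1,1) cell when both margins are held fixed.
    """
    N = a + b + c + d
    rows = (a + b, c + d)
    cols = (a + c, b + d)
    # Minimum expected count over the four cells
    E = min(r * s for r in rows for s in cols) / N
    # Closed-form skewness of Hypergeometric(N, K, n)
    K, n = cols[0], rows[0]
    num = (N - 2 * K) * (N - 1) ** 0.5 * (N - 2 * n)
    den = (n * K * (N - K) * (N - n)) ** 0.5 * (N - 2)
    return E, num / den
```

Consistent with the abstract's final claim, a table with both marginals balanced gives skewness exactly zero, while imbalanced margins inflate the skewness even when E is small.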
65.
Some studies generate data that can be grouped into clusters in more than one way. Consider for instance a smoking prevention study in which responses on smoking status are collected over several years in a cohort of students from a number of different schools. This yields longitudinal data, also cross-sectionally clustered in schools. The authors present a model for analyzing binary data of this type, combining generalized estimating equations and estimation of random effects to address the longitudinal and cross-sectional dependence, respectively. The estimation procedure for this model is discussed, as are the results of a simulation study used to investigate the properties of its estimates. An illustration using data from a smoking prevention trial is given.
66.
How does granting certificates of ‘business clean of Arab workers’ to owners of shops, stores, and Jewish businesses who prove they are not employing Arab workers shape identity? Identity development involves making sense of, and coming to terms with, the social world one inhabits, recognizing choices and making decisions within contexts, and finding a sense of unity within one's self while claiming a place in the world. Since there is no objective, ahistoric, universal trans-cultural identity, views of identity must be historically and culturally situated. This paper explores identity issues among members of the Palestinian Arab minority in Israel. While there is a body of literature exploring this subject, we will offer a different perspective by contextualizing the political and economic contexts that form an essential foundation for understanding identity formation among this minority group. We argue that, as a genre of settler colonialism, ‘pure settlement colonies’ involve the conquering not only of land, but of labor as well, excluding the natives from the economy. Such an exclusion from the economy is significant for its cultural, social, and ideological consequences, and therefore is especially significant in identity formation discussed in the paper. We briefly review existing approaches to the study of identity among Palestinian Arabs in Israel, and illustrate our theoretical contextual framework. Finally, we present and discuss findings from a new study of identity among Palestinian Arab college students in Israel through the lens of this framework.
67.
Summary.  We consider the Bayesian analysis of human movement data, where the subjects perform various reaching tasks. A set of markers is placed on each subject and a system of cameras records the three-dimensional Cartesian co-ordinates of the markers during the reaching movement. It is of interest to describe the mean and variability of the curves that are traced by the markers during one reaching movement, and to identify any differences due to covariates. We propose a methodology based on a hierarchical Bayesian model for the curves. An important part of the method is to obtain identifiable features of the movement so that different curves can be compared after temporal warping. We consider four landmarks and a set of equally spaced pseudolandmarks are located in between. We demonstrate that the algorithm works well in locating the landmarks, and shape analysis techniques are used to describe the posterior distribution of the mean curve. A feature of this type of data is that some parts of the movement data may be missing; the Bayesian methodology is easily adapted to cope with this situation.
68.
In 1995, Arnold and Groeneveld introduced the measure of skewness γM in terms of F(mode), the cumulative probability of a random variable being less than or equal to the mode of the distribution. They assumed that the mode of a distribution exists and is unique. Independently, in 1996, the present author arrived at the measure of skewness T, which is given in terms of F(mean). This measure possesses desirable properties and is equally simple. The measure γM satisfies −1 ≤ γM ≤ 1, with 1 (−1) indicating extreme right (left) skewness. However, the measure T can take on any value on the real line; hence, an equivalent measure γT is considered and compared with γM. We consider a variety of families of distributions and include in our study other measures of skewness of interest. Skewness values are easily obtained using MINITAB programs.
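The Arnold-Groeneveld measure mentioned above is usually written γM = 1 − 2F(M), where M is the mode. A minimal sketch, assuming that form (the function names and example distributions are illustrative):

```python
import math

def gamma_M(cdf, mode):
    """Arnold-Groeneveld skewness: gamma_M = 1 - 2*F(mode).

    Lies in [-1, 1]; positive values indicate right skewness,
    with 1 (-1) indicating extreme right (left) skewness.
    """
    return 1.0 - 2.0 * cdf(mode)

# Exponential(1): mode at 0 and F(0) = 0, so gamma_M = 1 (extreme right skew)
exp_cdf = lambda x: 1.0 - math.exp(-x) if x >= 0 else 0.0

# Standard normal: mode at 0 and F(0) = 0.5, so gamma_M = 0 (symmetric)
norm_cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
```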
69.
70.
As a result of lessons learnt from the 1991 census, a research programme was set up to seek improvements in census methodology. Underenumeration has been placed top of the agenda in this programme, and every effort is being made to achieve as high a coverage as possible in the 2001 census. In recognition, however, that 100% coverage will never be achieved, the one-number census (ONC) project was established to measure the degree of underenumeration in the 2001 census and, if possible, to adjust fully the outputs from the census for that undercount. A key component of this adjustment process is a census coverage survey (CCS). This paper presents an overview of the ONC project, focusing on the design and analysis methodology for the CCS. It also presents results that allow the reader to evaluate the robustness of this methodology.
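The abstract does not spell out the estimator, but census coverage surveys are typically combined with the census via dual-system (capture-recapture) estimation. A minimal Lincoln-Petersen-style sketch under that assumption, with illustrative names:

```python
def dual_system_estimate(census_count, ccs_count, matched):
    """Dual-system (capture-recapture) population estimate for one area:
    Nhat = c1 * c2 / m, where c1 people were counted by the census,
    c2 by the coverage survey, and m were matched in both.

    Assumes independence of the two counts and no matching error.
    """
    if matched == 0:
        raise ValueError("no matched individuals; estimate undefined")
    return census_count * ccs_count / matched
```

For example, if the census counts 900 people in an area, the survey counts 800, and 720 are matched in both, the implied population is 1,000, suggesting the census missed about 10%.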