Search results: 3,184 matching articles; items 31–40 are shown below.
31.
A new discrete family of probability distributions generated by the ₃F₂ hypergeometric function with complex parameters is presented. Some properties of this new family are studied, as well as methods of estimation for its parameters. The family affords considerable flexibility of shape, which makes it an appropriate candidate for modeling data that cannot be adequately fitted by classical families with fewer parameters. Finally, three examples from the fields of agriculture and education are included to show the versatility and utility of this distribution.
32.
Conventional approaches for inference about efficiency in parametric stochastic frontier (PSF) models are based on percentiles of the estimated distribution of the one-sided error term, conditional on the composite error. When used as prediction intervals, coverage is poor when the signal-to-noise ratio is low, and improves only slowly as sample size increases. We show that prediction intervals estimated by bagging yield much better coverage than the conventional approach, even with low signal-to-noise ratios. We also present a bootstrap method that gives confidence interval estimates for (conditional) expectations of efficiency, with good coverage properties that improve with sample size. In addition, researchers who estimate PSF models typically reject models, samples, or both when residuals have skewness in the “wrong” direction, i.e., in a direction that would seem to indicate absence of inefficiency. We show that correctly specified models can generate samples with “wrongly” skewed residuals, even when the variance of the inefficiency process is nonzero. Both our bagging and bootstrap methods provide useful information about inefficiency and model parameters irrespective of whether residuals have skewness in the desired direction.
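The bagging idea can be illustrated generically. The sketch below shows only the percentile mechanics on a toy statistic (the sample mean of invented data), not the paper's PSF-specific procedure, which bags predictions of inefficiency conditional on the composite error:

```python
import numpy as np

def bagged_percentile_interval(data, stat=np.mean, level=0.95, B=2000, seed=0):
    # Bagging-style percentile interval: recompute the statistic on bootstrap
    # resamples of the data and take quantiles of the bagged replicates.
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = np.array([stat(data[rng.integers(0, n, n)]) for _ in range(B)])
    alpha = 1.0 - level
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

# Toy data: the interval should bracket the sample mean (here near 1.0)
x = np.random.default_rng(1).normal(loc=1.0, scale=0.5, size=400)
lo, hi = bagged_percentile_interval(x)
```

In the PSF setting the resampled statistic would be the efficiency prediction itself, which is where the coverage gains reported in the abstract come from.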
33.
This article develops nonparametric tests of independence between two stochastic processes satisfying β-mixing conditions. The testing strategy boils down to gauging the closeness between the joint and the product of the marginal stationary densities. For that purpose, we take advantage of a generalized entropic measure so as to build a whole family of nonparametric tests of independence. We derive asymptotic normality and local power using the functional delta method for kernels. As a corollary, we also develop a class of entropy-based tests for serial independence. The latter are nuisance parameter free, and hence also qualify for dynamic misspecification analyses. We then investigate the finite-sample properties of our serial independence tests through Monte Carlo simulations. They perform quite well, entailing more power against some nonlinear AR alternatives than two popular nonparametric serial-independence tests.
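The core idea, gauging the closeness between the joint density and the product of the marginals with an entropic measure, can be sketched with kernel plug-in estimates. This is a simplified cross-sectional illustration with invented data; the paper works with β-mixing processes and derives asymptotic calibration that is not reproduced here:

```python
import numpy as np
from scipy.stats import gaussian_kde

def entropic_stat(x, y, q=0.5):
    # Plug-in generalized (q-)entropy discrepancy between the estimated joint
    # density and the product of the estimated marginals, averaged over the
    # sample points. Values near zero suggest independence; q -> 1 recovers
    # a Kullback-Leibler-type statistic.
    f_joint = gaussian_kde(np.vstack([x, y]))(np.vstack([x, y]))
    f_prod = gaussian_kde(x)(x) * gaussian_kde(y)(y)
    ratio = f_joint / f_prod
    return float(np.mean(ratio ** (q - 1.0) - 1.0) / (q - 1.0))

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
s_indep = entropic_stat(x, rng.standard_normal(500))      # independent pair
s_dep = entropic_stat(x, x + 0.3 * rng.standard_normal(500))  # dependent pair
```

In an actual test the statistic would be centred and scaled by its asymptotic (or permutation) null distribution; here it simply separates the independent from the dependent scenario.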
34.
The use of flexible functional forms is a standard practice in applied econometrics. Many flexible forms have been proposed. In this study, we investigate the behavior of three of them—the translog, the symmetric McFadden, and the symmetric generalized Barnett. Based on Monte Carlo experiments, we assess the ability of these forms to test theoretical properties and to measure technological characteristics.
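As a minimal sketch of one of these forms, the translog adds (half-)squared and cross terms in the logs to a Cobb–Douglas specification, which it nests. The simulation below is invented: data are generated from a Cobb–Douglas technology, so the estimated second-order translog coefficients should be approximately zero:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
lx1, lx2 = rng.standard_normal((2, n))
# Simulated Cobb-Douglas technology: ln y = 0.3 ln x1 + 0.6 ln x2 + noise
ly = 0.3 * lx1 + 0.6 * lx2 + 0.05 * rng.standard_normal(n)

# Translog regressors: constant, log levels, half-squares, and cross term
X = np.column_stack([np.ones(n), lx1, lx2,
                     0.5 * lx1 ** 2, 0.5 * lx2 ** 2, lx1 * lx2])
beta, *_ = np.linalg.lstsq(X, ly, rcond=None)
```

Testing restrictions such as zero second-order terms (or homogeneity and curvature conditions) is the kind of "test of theoretical properties" whose reliability the Monte Carlo experiments assess.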
35.
We propose a novel observation-driven finite mixture model for the study of banking data. The model accommodates time-varying component means and covariance matrices, normal and Student's t distributed mixtures, and economic determinants of time-varying parameters. Monte Carlo experiments suggest that units of interest can be classified reliably into distinct components in a variety of settings. In an empirical study of 208 European banks from 2008Q1 to 2015Q4, we identify six business model components and discuss how their properties evolve over time. Changes in the yield curve predict changes in average business model characteristics.
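For intuition, the classification step can be sketched with a plain static-parameter EM algorithm for a two-component univariate normal mixture. This omits everything that makes the paper's model distinctive (observation-driven time variation, covariance matrices, Student's t components); the data are simulated:

```python
import numpy as np

def em_two_normals(x, n_iter=200):
    # Classical EM for a two-component univariate normal mixture.
    mu = np.array([x.min(), x.max()])
    sig = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: component responsibilities (the 1/sqrt(2*pi) constant
        # cancels in the normalization, so it is omitted)
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / sig
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted mixing proportions, means, standard deviations
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sig = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sig

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-2.0, 1.0, 400), rng.normal(3.0, 1.0, 600)])
w, mu, sig = em_two_normals(x)
```

Each observation is then assigned to the component with the highest responsibility, which is the sense in which "units of interest can be classified reliably into distinct components".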
36.
A measure of multivariate correlation between two sets of vectors is considered when the underlying joint distribution is a member of the class of elliptical distributions. Its asymptotic distribution is derived under different situations and these results are used to test hypotheses on vector correlation when the underlying joint distribution is non-normal.
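One widely used measure of correlation between two sets of vectors is the RV coefficient; the sketch below computes it on invented data. This is illustrative only — the article's measure and its elliptical-theory asymptotics may differ:

```python
import numpy as np

def rv_coefficient(X, Y):
    # RV coefficient: a multivariate analogue of squared correlation between
    # two centered data matrices; 0 for unrelated sets, at most 1.
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Sxy = Xc.T @ Yc
    Sxx = Xc.T @ Xc
    Syy = Yc.T @ Yc
    return float(np.trace(Sxy @ Sxy.T)
                 / np.sqrt(np.trace(Sxx @ Sxx) * np.trace(Syy @ Syy)))

rng = np.random.default_rng(2)
X = rng.standard_normal((300, 3))
Y_dep = X @ rng.standard_normal((3, 2)) + 0.1 * rng.standard_normal((300, 2))
Y_ind = rng.standard_normal((300, 2))
r_dep = rv_coefficient(X, Y_dep)   # strongly related sets
r_ind = rv_coefficient(X, Y_ind)   # unrelated sets
```

A test of zero vector correlation would compare such a statistic against its null distribution, which is what the article derives under elliptical (possibly non-normal) assumptions.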
37.
This paper considers a likelihood ratio test for testing hypotheses defined by non-oblique closed convex cones satisfying the so-called iteration projection property, in a set of k normal means. We obtain the critical values of the test using the chi-bar-squared distribution. Obtuse cones are introduced as a particular class of cones which are non-oblique with every one of their faces. Examples with the simple tree order cone and the total order cone are given to illustrate the results.
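The chi-bar-squared calibration can be illustrated by Monte Carlo for the simplest cone, the nonnegative orthant, where the projection is just the coordinatewise positive part. This is only a toy example: the cones in the paper (e.g., the simple tree order cone) require isotonic-regression projections that are omitted here:

```python
import numpy as np

rng = np.random.default_rng(3)
k, n_sim = 3, 100_000
z = rng.standard_normal((n_sim, k))
# Projection of z onto the cone C = {mu >= 0} is max(z, 0) coordinatewise;
# under H0 the LRT statistic ||P_C(z)||^2 follows a chi-bar-squared mixture
# of chi-square distributions with binomial weights C(k, j) / 2^k.
stat = np.sum(np.maximum(z, 0.0) ** 2, axis=1)
crit_95 = np.quantile(stat, 0.95)
```

The simulated 95% critical value lies well below the chi-square(3) 95% point (7.81), reflecting the mass the mixture places on lower-dimensional components.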
38.
Questions related to lotteries are usually of interest to the public, since many people think there is a magic formula that will help them win lottery draws. This note shows how to compute the expected waiting time to observe specific numbers in a sequence of lottery draws, and shows that some surprising facts are to be expected.
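The computation can be sketched directly. A fixed number appears in a single draw of k balls from N with probability k/N, so its waiting time is geometric; waiting until each of m fixed numbers has appeared at least once (not necessarily in the same draw) follows by inclusion–exclusion. The 6-of-49 format is only an example:

```python
from math import comb

def expected_draws_one_number(N=49, k=6):
    # Waiting time for one fixed number is geometric with success
    # probability k/N, hence mean N/k draws.
    return N / k

def expected_draws_all_numbers(m, N=49, k=6):
    # Expected draws until each of m fixed numbers has appeared at least
    # once, by inclusion-exclusion over subsets of size j: a single draw
    # misses all j numbers of a subset with probability
    # q_j = C(N - j, k) / C(N, k).
    total = 0.0
    for j in range(1, m + 1):
        q_j = comb(N - j, k) / comb(N, k)
        total += (-1) ** (j + 1) * comb(m, j) / (1.0 - q_j)
    return total
```

For the 6-of-49 format this gives about 8.17 draws for a single fixed number, and the expected wait grows quickly with m — one of the "surprising facts" of this kind.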
39.
A 2 × 2 contingency table can often be analysed exactly by Fisher's exact test or approximately by the chi-squared test with Yates' continuity correction, and it is traditionally held that the approximation is valid when the minimum expected quantity E satisfies E ≥ 5. Unfortunately, little research has been carried out into this belief, beyond establishing that it is necessary to set a bound E > E*, that the condition E ≥ 5 may not be the most appropriate (Martín Andrés et al., 1992), and that E* is not a constant but usually increases with the sample size (Martín Andrés & Herranz Tejedor, 1997). In this paper, the authors conduct a theoretical and experimental study from which they ascertain that the value of E* (which is very variable and frequently a good deal greater than 5) is strongly related to the magnitude of the skewness of the underlying hypergeometric distribution, and that bounding the skewness is equivalent to bounding E (which is the best control procedure). The study enables the expression for the above-mentioned E* (which depends on the number of tails of the test, the nominal α error, the total sample size, and the minimum marginal imbalance) to be estimated. The authors also show that E* generally increases with the sample size and with the marginal imbalance, although it does reach a maximum. Some general and very conservative validity conditions are E ≥ 35.53 (one-tailed test) and E ≥ 7.45 (two-tailed test) for nominal α errors between 1% and 10%. The traditional condition E ≥ 5 is valid only when the samples are small and one of the marginals is very balanced; alternatively, the condition E ≥ 5.5 is valid for small samples or a very balanced marginal.
Finally, it is proved that the chi-squared test is always valid in tables where both marginals are balanced, and that the maximum skewness permitted is related to the maximum value of the bound E*, to its value for tables with at least one balanced marginal, and to the minimum value that those marginals must have (in non-balanced tables) for the chi-squared test to be valid.
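The quantities involved are straightforward to compute with scipy. The table below is invented; the skewness shown is that of the hypergeometric distribution with the table's fixed margins, i.e., the distribution underlying Fisher's exact test, whose skewness the paper proposes to bound:

```python
import numpy as np
from scipy.stats import hypergeom, fisher_exact, chi2_contingency

table = np.array([[8, 12],
                  [10, 30]])
r = table.sum(axis=1)            # row margins
c = table.sum(axis=0)            # column margins
n = table.sum()

expected = np.outer(r, c) / n    # expected counts under independence
E = expected.min()               # the minimum expected quantity E

# Skewness of the hypergeometric distribution with these fixed margins
skew = float(hypergeom(M=n, n=r[0], N=c[0]).stats(moments="s"))

p_exact = fisher_exact(table)[1]                       # Fisher's exact test
p_yates = chi2_contingency(table, correction=True)[1]  # Yates-corrected chi-square
```

Comparing p_exact and p_yates across tables with varying E and skewness is the kind of evidence behind the validity conditions quoted in the abstract.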
40.
Reliability sampling plans provide an efficient method to determine the acceptability of a product based upon the lifelengths of some test units. Usually, they depend on the producer's and consumer's quality requirements and do not admit closed-form solutions. Acceptance sampling plans for one- and two-parameter exponential lifetime models, derived by approximating the operating characteristic curve, are presented in this paper. The accuracy of these approximate plans, which are explicitly expressible and valid for failure and progressive censoring, is assessed. The approximation proposed in the one-parameter case is found to be practically exact. Explicit lower and upper bounds on the smallest sample size are given in the two-parameter case. Some additional advantages are also pointed out.
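For the simplest one-parameter exponential case, the operating characteristic (OC) curve has a closed chi-square form, which the following sketches. The acceptance rule and the numbers are illustrative assumptions, not the plans derived in the paper:

```python
from scipy.stats import chi2

def oc_exponential(theta, r=10, k=5.0):
    # Life test on an exponential(theta) product, type-II censored at the
    # r-th failure: the total time on test T satisfies 2T/theta ~ chi2(2r).
    # With the illustrative rule "accept the lot when T >= k":
    #   P(accept | theta) = P(chi2_{2r} >= 2k / theta)
    return chi2.sf(2.0 * k / theta, df=2 * r)

p_good = oc_exponential(theta=3.0)   # high mean life -> high acceptance prob
p_bad = oc_exponential(theta=0.5)    # low mean life  -> low acceptance prob
```

A sampling plan chooses r and k so that this curve passes through the producer's and consumer's risk points; obtaining such plans explicitly, including under progressive censoring, is the problem the paper's approximations solve.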
Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) · 京ICP备09084417号