Full-text access type (number of articles)
Paid full text | 5,758 |
Free | 378 |
Free (domestic) | 1 |
Subject classification (number of articles)
Management | 1,062 |
Ethnology | 45 |
Talent studies | 1 |
Demography | 363 |
Book series and collected works | 2 |
Theory and methodology | 787 |
General | 50 |
Sociology | 2,904 |
Statistics | 923 |
Publication year (number of articles)
2024: 23 | 2023: 53 | 2022: 35 | 2021: 132 | 2020: 321 | 2019: 553 | 2018: 329 | 2017: 397 | 2016: 424 | 2015: 344 |
2014: 372 | 2013: 797 | 2012: 356 | 2011: 258 | 2010: 224 | 2009: 183 | 2008: 222 | 2007: 149 | 2006: 159 | 2005: 123 |
2004: 125 | 2003: 107 | 2002: 76 | 2001: 86 | 2000: 66 | 1999: 23 | 1998: 20 | 1997: 18 | 1996: 24 | 1995: 13 |
1994: 14 | 1993: 11 | 1992: 11 | 1991: 13 | 1990: 8 | 1989: 12 | 1988: 9 | 1987: 7 | 1986: 4 | 1985: 7 |
1984: 5 | 1982: 2 | 1981: 3 | 1979: 2 | 1978: 2 | 1975: 3 | 1971: 1 | 1969: 1 | 1968: 4 | 1967: 1 |
In total, 6,137 results were found (search time: 15 ms).
61.
In animal digestibility studies, the proportion of food degraded over time has usually been modeled as a normal random variable whose mean is a function of time and of three parameters: the proportion of food degraded almost instantaneously, the remaining proportion of food to be degraded, and the velocity of degradation. Estimation of these parameters has been carried out mainly from a frequentist viewpoint, using the asymptotic distribution of the maximum likelihood estimator. This may yield inadmissible estimates, such as values outside the range of the parameters. This drawback does not arise if a Bayesian approach is adopted. In this article an objective Bayesian analysis is developed and illustrated on real and simulated data.
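A common parameterization consistent with this three-parameter description (the specific functional form is our assumption for illustration; the abstract does not state it) is the exponential degradation curve
\[
\mu(t) = a + b\bigl(1 - e^{-ct}\bigr), \qquad a \ge 0,\; b \ge 0,\; a + b \le 1,\; c > 0,
\]
where a is the fraction degraded almost instantaneously, b the remaining degradable fraction and c the degradation rate. Inadmissible frequentist estimates would be, for example, â < 0 or â + b̂ > 1, which a Bayesian posterior restricted to the parameter space avoids by construction.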
62.
We propose a new method to estimate the cumulative hazard function and the corresponding distribution function of survival times under randomly left-truncated and right-censored (LTRC) observations. The new estimators are based on presmoothing ideas, namely the estimation of the conditional expectation m of the censoring indicator. An almost sure representation for both estimators is established, from which a strong consistency rate and asymptotic normality are derived. It is shown that the presmoothed modification leads to a gain in terms of asymptotic mean squared error. This efficiency gain with respect to the classical estimators is also demonstrated in a simulation study. Finally, an application to a real data set is provided.
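To fix ideas, here is a sketch of the construction (the exact estimator used in the paper may differ). With truncation times \(T_i\), observed times \(X_i\), censoring indicators \(\delta_i\) and risk-set proportion \(C_n(x) = n^{-1}\sum_{i=1}^n \mathbf{1}\{T_i \le x \le X_i\}\), the classical cumulative hazard estimator under LTRC is
\[
\hat\Lambda_n(t) = \sum_{i:\,X_i \le t} \frac{\delta_i}{n\,C_n(X_i)} .
\]
Presmoothing replaces the indicator \(\delta_i\) by a nonparametric estimate \(\hat m(X_i)\) of \(m(x) = \mathbb{E}[\delta \mid X = x]\), for instance a kernel smoother, and the distribution function estimator then follows from the product-limit form \(\hat F_n(t) = 1 - \prod_{X_i \le t}\bigl(1 - \hat m(X_i)/(n\,C_n(X_i))\bigr)\).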
63.
J. Rodríguez Avi, M. J. Olmo Jiménez, A. Conde Sánchez & A. J. Sáez Castillo 《统计学通讯:理论与方法》2013, 42(19): 3009–3022
A new discrete family of probability distributions generated by the ₃F₂ hypergeometric function with complex parameters is presented. Some properties of this new family are studied, as well as methods of estimation for its parameters. The family affords considerable flexibility of shape, which makes it an appropriate candidate for modeling data that cannot be adequately fitted by classical families with fewer parameters. Finally, three examples from the fields of agriculture and education are included to show the versatility and utility of this distribution.
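One natural way to generate such a family (written here as an illustrative sketch; the paper's exact parameterization may differ) is to normalize the terms of the ₃F₂ series,
\[
P(X = k) = \frac{1}{{}_3F_2(a, b, c;\, d, e;\, \lambda)}\;
\frac{(a)_k\,(b)_k\,(c)_k}{(d)_k\,(e)_k}\,\frac{\lambda^k}{k!},
\qquad k = 0, 1, 2, \dots,\; 0 < \lambda < 1,
\]
where \((q)_k\) is the Pochhammer symbol. Taking complex parameters in conjugate pairs, e.g. \(b = \bar a\), keeps each term real and non-negative, since \((a)_k(\bar a)_k = |(a)_k|^2\), while adding extra flexibility of shape.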
64.
José Galvão Leite, Carlos Alberto de Bragança Pereira & Flávio Wagner Rodrigues 《统计学通讯:理论与方法》2013, 42(1): 301–310
Questions related to lotteries are usually of interest to the public, since people believe there is a magic formula that will help them win lottery draws. This note shows how to compute the expected waiting time to observe specific numbers in a sequence of lottery draws, and shows that surprising facts emerge.
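As a minimal illustration of the kind of computation involved (a hypothetical 6-of-49 lottery; the note's actual setting may differ), the waiting time for a fixed combination to appear is geometric, so its expectation is the reciprocal of the per-draw probability:

```python
from math import comb

# Hypothetical 6-of-49 lottery: each draw selects 6 distinct numbers out of 49.
p = 1 / comb(49, 6)        # probability that a given draw matches a fixed combination
expected_draws = 1 / p     # mean of the geometric waiting time

print(f"P(match in one draw) = {p:.3e}")
print(f"Expected number of draws until the combination appears: {expected_draws:,.0f}")
```

With two draws per week, the expected wait of about 13,983,816 draws corresponds to roughly 134,000 years, which is the sort of surprising fact the note alludes to.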
65.
A 2 × 2 contingency table can often be analysed exactly by using Fisher's exact test and approximately by using the chi-squared test with Yates' continuity correction, and it is traditionally held that the approximation is valid when the minimum expected frequency E satisfies E ≥ 5. Unfortunately, little research has been carried out into this belief, other than establishing that it is necessary to impose a bound E > E*, that the condition E ≥ 5 may not be the most appropriate (Martín Andrés et al., 1992), and that E* is not a constant but usually increases with the sample size (Martín Andrés & Herranz Tejedor, 1997). In this paper the authors conduct a theoretical and experimental study from which they ascertain that the value of E* (which is very variable and frequently well above 5) is strongly related to the magnitude of the skewness of the underlying hypergeometric distribution, and that bounding the skewness is equivalent to bounding E (which is the best control procedure). The study enables an expression for E* to be estimated, which in turn depends on the number of tails of the test, the nominal alpha error, the total sample size, and the minimum marginal imbalance. The authors also show that E* generally increases with the sample size and with the marginal imbalance, although it does reach a maximum. Some general and very conservative validity conditions are E ≥ 35.53 (one-tailed test) and E ≥ 7.45 (two-tailed test) for nominal alpha errors between 1% and 10%. The traditional condition E ≥ 5 is only valid when the samples are small and one of the marginals is very balanced; alternatively, the condition E ≥ 5.5 is valid for small samples or a very balanced marginal. Finally, it is proved that the chi-squared test is always valid in tables where both marginals are balanced, and that the maximum skewness permitted is related to the maximum value of the bound E*, to its value for tables with at least one balanced marginal, and to the minimum value that those marginals must have (in non-balanced tables) for the chi-squared test to be valid.
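As a practical illustration of the quantities involved (standard SciPy routines on an illustrative table; the validity bounds above come from the paper, not from this code), one can compute the minimum expected frequency E of a 2 × 2 table and compare the exact and approximate p-values:

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# A 2 x 2 contingency table (illustrative data).
table = np.array([[8, 2],
                  [1, 5]])

# Chi-squared test with Yates' continuity correction; also returns expected counts.
chi2, p_approx, dof, expected = chi2_contingency(table, correction=True)
E = expected.min()                      # minimum expected frequency

# Fisher's exact test (two-sided).
_, p_exact = fisher_exact(table, alternative="two-sided")

print(f"Minimum expected frequency E = {E:.2f}")
print(f"Chi-squared (Yates) p-value  = {p_approx:.4f}")
print(f"Fisher exact p-value         = {p_exact:.4f}")
```

When E falls below the relevant bound E*, the discrepancy between the two p-values can be substantial, which is precisely the situation the validity conditions above are meant to flag.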
66.
67.
The authors develop consistent nonparametric estimation techniques for the directional mixing density. Classical spherical harmonics are used to adapt Euclidean techniques to this directional environment. Minimax rates of convergence are obtained for rotationally invariant densities satisfying various smoothness conditions. It is found that the differences in smoothness between the Laplace, the Gaussian and the von Mises-Fisher distributions lead to contrasting inferential conclusions.
68.
Reliability sampling plans provide an efficient method for determining the acceptability of a product based on the life lengths of some test units. Usually they depend on the producer's and consumer's quality requirements and do not admit closed-form solutions. Acceptance sampling plans for one- and two-parameter exponential lifetime models, derived by approximating the operating characteristic curve, are presented in this paper. The accuracy of these approximate plans, which are explicitly expressible and valid for failure and progressive censoring, is assessed. The approximation proposed in the one-parameter case is found to be practically exact. Explicit lower and upper bounds on the smallest sample size are given in the two-parameter case. Some additional advantages are also pointed out.
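For intuition, here is a sketch of how an operating characteristic (OC) curve arises in the one-parameter exponential case under failure (type-II) censoring; the acceptance rule "accept if the total time on test T is at least c" and the constants r = 5 and c = 400 are illustrative assumptions, not the plans derived in the paper. With r observed failures, 2T/θ follows a chi-squared distribution with 2r degrees of freedom, so the acceptance probability is P(χ²(2r) ≥ 2c/θ):

```python
from scipy.stats import chi2

def oc_curve(theta, r, c):
    """Acceptance probability of the rule 'accept if total time on test T >= c'
    when lifetimes are exponential with mean theta and the test stops at the
    r-th failure (type-II censoring), using 2T/theta ~ chi-square(2r)."""
    return chi2.sf(2 * c / theta, df=2 * r)

# Illustrative plan: stop at r = 5 failures, acceptance constant c = 400 hours.
for theta in (50, 100, 200, 400, 800):
    print(f"mean life {theta:4d} h -> P(accept) = {oc_curve(theta, r=5, c=400):.3f}")
```

The curve rises with the true mean life θ, and choosing r and c amounts to pinning this curve down at the producer's and consumer's quality points, which is where the approximations discussed in the abstract come in.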
69.
Bengt Muthén & Tihomir Asparouhov 《Journal of the Royal Statistical Society, Series A (Statistics in Society)》2009, 172(3): 639–657
Summary. A two-level regression mixture model is discussed and contrasted with the conventional two-level regression model. Simulated and real data shed light on the modelling alternatives. The real data analyses investigate gender differences in mathematics achievement using data from the US National Education Longitudinal Survey. The two-level regression mixture analyses show that unobserved heterogeneity should not be presupposed to exist only at level 2 at the expense of level 1. Both the simulated and the real data analyses show that level 1 heterogeneity in the form of latent classes can be mistaken for level 2 heterogeneity in the form of the random effects used in conventional two-level regression analysis. Because of this, mixture models have an important role to play in multilevel regression analyses: they allow heterogeneity to be investigated more fully, attributing different portions of the heterogeneity more correctly to the different levels.
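A compact way to see the contrast drawn here (the notation is ours, introduced only for illustration) is that the conventional two-level regression places all unobserved heterogeneity in level-2 random effects,
\[
y_{ij} = \beta_{0j} + \beta_{1j} x_{ij} + e_{ij}, \qquad
\beta_{0j} = \gamma_{00} + u_{0j}, \quad \beta_{1j} = \gamma_{10} + u_{1j},
\]
whereas a two-level regression mixture additionally lets the level-1 relationship depend on a latent class \(c_{ij} = k\) of individual \(i\) in cluster \(j\),
\[
y_{ij} \mid c_{ij} = k \;=\; \beta_{0jk} + \beta_{1k} x_{ij} + e_{ijk},
\]
so that within-cluster (level-1) heterogeneity is captured by the latent classes rather than being absorbed into the level-2 random effects.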
70.
We propose a modification of the variant of link-tracing sampling suggested by Félix-Medina and Thompson [M.H. Félix-Medina, S.K. Thompson, Combining cluster sampling and link-tracing sampling to estimate the size of hidden populations, Journal of Official Statistics 20 (2004) 19–38] that gives the researcher some control over the final sample size, the precision of the estimates, or other characteristics of the sample that the researcher is interested in controlling. We achieve this by selecting an initial sequential sample of sites instead of an initial simple random sample of sites, as those authors suggested. The population size is estimated by means of the maximum likelihood estimators suggested by the above-mentioned authors or by the Bayesian estimators proposed by Félix-Medina and Monjardin [M.H. Félix-Medina, P.E. Monjardin, Combining link-tracing sampling and cluster sampling to estimate the size of hidden populations: A Bayesian-assisted approach, Survey Methodology 32 (2006) 187–195]. Variances are estimated by means of jackknife and bootstrap estimators as well as by the delta estimators proposed in the two above-mentioned papers. Interval estimates of the population size are obtained by means of Wald and bootstrap confidence intervals. The results of an exploratory simulation study indicate good performance of the proposed sampling strategy.