Full-text access type
| Access type | Articles |
| --- | --- |
| Paid full text | 1675 |
| Free | 49 |
Subject classification
| Subject | Articles |
| --- | --- |
| Management | 220 |
| Ethnology | 11 |
| Demography | 136 |
| Collected works and series | 10 |
| Theory and methodology | 220 |
| General | 15 |
| Sociology | 860 |
| Statistics | 252 |
Publication year
| Year | Articles |
| --- | --- |
| 2023 | 10 |
| 2022 | 10 |
| 2021 | 17 |
| 2020 | 56 |
| 2019 | 75 |
| 2018 | 82 |
| 2017 | 85 |
| 2016 | 62 |
| 2015 | 47 |
| 2014 | 78 |
| 2013 | 296 |
| 2012 | 61 |
| 2011 | 62 |
| 2010 | 68 |
| 2009 | 40 |
| 2008 | 47 |
| 2007 | 56 |
| 2006 | 55 |
| 2005 | 48 |
| 2004 | 43 |
| 2003 | 33 |
| 2002 | 44 |
| 2001 | 38 |
| 2000 | 24 |
| 1999 | 32 |
| 1998 | 20 |
| 1997 | 28 |
| 1996 | 16 |
| 1995 | 15 |
| 1994 | 14 |
| 1993 | 15 |
| 1992 | 9 |
| 1991 | 12 |
| 1990 | 9 |
| 1989 | 14 |
| 1988 | 8 |
| 1987 | 8 |
| 1986 | 5 |
| 1985 | 9 |
| 1984 | 9 |
| 1983 | 8 |
| 1982 | 3 |
| 1981 | 8 |
| 1980 | 8 |
| 1979 | 4 |
| 1978 | 7 |
| 1977 | 5 |
| 1976 | 6 |
| 1975 | 3 |
| 1968 | 3 |
A total of 1,724 results matched the query; entries 21–30 are listed below.
21.
J. Andrew Howe, Journal of Statistical Computation and Simulation, 2013, 83(3): 446-457
In this paper, we address the problem of simulating from a data-generating process whose observed data do not follow a regular probability distribution. One existing method is bootstrapping, but it cannot interpolate between observed data points. For univariate or bivariate data in which a mixture structure is easy to identify, we could instead simulate from a Gaussian mixture model; in general, however, identifying and estimating such a mixture model is itself a problem. Instead, we introduce a non-parametric method for simulating such datasets: Kernel Carlo Simulation. Our algorithm first uses kernel density estimation to build a target probability distribution, then constructs an envelope function guaranteed to dominate the target density, and finally applies simple accept–reject sampling. Our approach is more flexible than the alternatives, can simulate intelligently across gaps in the data, and requires no subjective modelling decisions. With several univariate and multivariate examples, we show that our method returns simulated datasets that retain the covariance structure of the observed data and have remarkably similar distributional characteristics.
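A minimal one-dimensional sketch of the general idea the abstract describes (kernel density estimate as target, an envelope above it, then accept–reject) is given below. The constant envelope, bandwidth choice, data, and the function name `kernel_carlo` are illustrative assumptions, not the authors' algorithm.

```python
# 1-D sketch of KDE-based accept-reject sampling (illustrative only;
# not the authors' exact Kernel Carlo algorithm).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(3, 1.0, 300)])

kde = gaussian_kde(data)                  # target density f(x)
lo, hi = data.min() - 3, data.max() + 3   # padded support for the proposal
grid = np.linspace(lo, hi, 2000)
M = 1.1 * kde(grid).max()                 # grid-based envelope height, padded 10%

def kernel_carlo(n_draws):
    """Draw n_draws points by accept-reject against a uniform proposal."""
    out = np.empty(0)
    while out.size < n_draws:
        x = rng.uniform(lo, hi, size=4 * n_draws)   # uniform proposals over [lo, hi]
        u = rng.uniform(0.0, M, size=x.size)        # vertical coordinate under the envelope
        out = np.concatenate([out, x[u < kde(x)]])  # keep points falling under f(x)
    return out[:n_draws]

sims = kernel_carlo(1000)
print(sims.mean(), sims.std(), data.mean(), data.std())
```

Because the proposal is uniform over a padded range, the sampler can place points in gaps between observed values, which a bootstrap cannot.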
22.
Log-location-scale distributions are widely used parametric models of fundamental importance in both parametric and semiparametric frameworks. The likelihood equations based on a Type II censored sample from a location-scale distribution do not admit explicit solutions for the parameters. Widely available statistical software relies on iterative methods (such as the Newton-Raphson and EM algorithms), which require starting values near the global maximum, and there are many situations that specialized software does not handle. This paper provides a method for determining explicit estimators of the location and scale parameters by approximating the likelihood function; the method requires no starting values. The performance of the proposed approximate method for the Weibull and log-logistic distributions is compared with that of iterative methods through simulation studies covering a wide range of sample sizes and Type II censoring schemes. We also examine the probability coverages of pivotal quantities based on asymptotic normality. In addition, two examples are given.
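The paper's explicit approximate estimators are not reproducible from the abstract. For context, the sketch below writes down the standard Type II censored Weibull log-likelihood (the r smallest of n lifetimes observed, the rest censored at the r-th failure) and maximizes it numerically, i.e. the kind of iterative, starting-value-dependent fit the authors compare against. The simulated data and parameter names are made up.

```python
# Sketch: numerical MLE for a Type II censored Weibull sample
# (the iterative baseline, not the paper's explicit approximate estimators).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, r = 50, 35                              # observe the r smallest of n lifetimes
full = rng.weibull(2.0, size=n) * 10.0     # true shape 2, true scale 10
t_obs = np.sort(full)[:r]                  # Type II censoring at the r-th failure

def neg_loglik(theta):
    """Negative log-likelihood; theta = (log shape, log scale) to enforce positivity."""
    k, lam = np.exp(theta)
    z = t_obs / lam
    logf = np.log(k) - np.log(lam) + (k - 1) * np.log(z) - z**k   # observed failures
    logS_at_censor = -(t_obs[-1] / lam) ** k                      # n - r censored units
    return -(logf.sum() + (n - r) * logS_at_censor)

fit = minimize(neg_loglik, x0=np.log([1.0, t_obs.mean()]), method="Nelder-Mead")
print("shape, scale estimates:", np.exp(fit.x))
```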
23.
Analytical methods for interval estimation of differences between variances have not been described. A simple analytical method is given for interval estimation of the difference between the variances of two independent samples. Simulations show that confidence intervals generated with this method have close to nominal coverage even when sample sizes are small and unequal and observations are highly skewed and leptokurtic, provided the difference in variances is not very large. The method is also adapted to testing the hypothesis of no difference between variances. The test is robust but slightly less powerful than Bonett's test with small samples.
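The abstract does not give the proposed formula, so the sketch below is only a generic large-sample interval for the difference of two sample variances, using the standard asymptotic variance of a sample variance (which involves the fourth central moment). It is offered to make the estimand concrete, not as the paper's method; the function name and data are illustrative.

```python
# Generic asymptotic CI for Var(X) - Var(Y) from two independent samples
# (illustrative only; not the paper's proposed analytical method).
import numpy as np
from scipy.stats import norm

def var_diff_ci(x, y, level=0.95):
    """Interval based on Var(S^2) ~ (m4 - s^4)/n, m4 = fourth central moment."""
    def avar(a):
        n = len(a)
        s2 = a.var(ddof=1)
        m4 = ((a - a.mean()) ** 4).mean()
        return s2, (m4 - s2**2) / n
    s2x, vx = avar(np.asarray(x, float))
    s2y, vy = avar(np.asarray(y, float))
    z = norm.ppf(0.5 + level / 2)
    d, half = s2x - s2y, z * np.sqrt(vx + vy)
    return d - half, d + half

rng = np.random.default_rng(2)
print(var_diff_ci(rng.normal(0, 2, 80), rng.normal(0, 1, 50)))
```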
24.
Andrew T. A. Wood, Communications in Statistics - Simulation and Computation, 2013, 42(4): 1439-1456
A three-parameter F approximation to the distribution of a positive linear combination of central chi-squared variables is described. It is about as easy to implement as the Satterthwaite-Welch and Hall-Buckley-Eagleson approximations. Some reassuring properties of the F approximation are derived, and numerical results are presented. These indicate that the new approximation is superior to the Satterthwaite approximation and, for some purposes, better than the Hall-Buckley-Eagleson approximation. It is not quite as good as the gamma-Weibull approximation due to Solomon and Stephens, but it is easier to implement because no iterative methods are required.
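The F approximation itself is not reproduced here, but the Satterthwaite baseline it is compared against is easy to state: match the first two moments of Q = Σ aᵢ χ²(νᵢ) to a scaled chi-squared c·χ²(ν). The sketch below does that and checks a tail probability against Monte Carlo; the weights and degrees of freedom are arbitrary examples.

```python
# Satterthwaite-type moment matching for Q = sum_i a_i * chisq(nu_i):
# approximate Q by c * chisq(df) with the same mean and variance
# (the baseline approximation named in the abstract, not Wood's F approximation).
import numpy as np
from scipy.stats import chi2

a = np.array([0.5, 1.0, 2.5])      # illustrative positive weights
nu = np.array([3, 1, 6])           # illustrative degrees of freedom

mean_q = np.sum(a * nu)            # E[Q]   = sum a_i * nu_i
var_q = 2.0 * np.sum(a**2 * nu)    # Var(Q) = 2 * sum a_i^2 * nu_i
c = var_q / (2.0 * mean_q)         # scale of the matching chi-squared
df = 2.0 * mean_q**2 / var_q       # its (generally non-integer) degrees of freedom

# Compare an upper-tail probability with Monte Carlo.
rng = np.random.default_rng(3)
q = sum(ai * rng.chisquare(ni, 200_000) for ai, ni in zip(a, nu))
x = 30.0
print("Satterthwaite P(Q > x):", chi2.sf(x / c, df))
print("Monte Carlo   P(Q > x):", (q > x).mean())
```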
25.
We consider the problem of estimating the two parameters of the discrete Good distribution. We first show that the sufficient statistics for the parameters are the arithmetic and geometric means. The maximum likelihood estimators (MLEs) of the parameters are obtained by numerically solving a system of equations involving the Lerch zeta function and the sufficient statistics. We derive an expression for the asymptotic variance-covariance matrix of the MLEs, which can be evaluated numerically. We show that the probability mass function satisfies a simple recurrence equation that is linear in the two parameters, and propose the quadratic distance estimator (QDE), which can be computed with an iteratively reweighted least-squares algorithm. The QDE is easy to calculate and admits a simple expression for its asymptotic variance-covariance matrix. We compute this matrix for the MLEs and the QDE for various parameter values and find that the QDE has very high asymptotic efficiency. Finally, we present a numerical example.
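A rough sketch of direct numerical maximum likelihood is given below, assuming the standard parameterization of the Good distribution, p(x) = zˣ x⁻ˢ / Li_s(z) for x = 1, 2, ..., where Li_s is the polylogarithm (a special case of the Lerch zeta function mentioned in the abstract). The parameterization, toy data, and truncated-series normalizer are assumptions for illustration; this is plain numerical optimization, not the paper's QDE.

```python
# Sketch: numerical MLE for the Good distribution under the assumed
# parameterization p(x) = z^x * x^(-s) / Li_s(z), x = 1, 2, ...
import numpy as np
from scipy.optimize import minimize

def li(s, z, terms=5000):
    """Truncated series for Li_s(z) = sum_{k>=1} z^k / k^s, 0 < z < 1."""
    k = np.arange(1, terms + 1, dtype=float)
    return np.sum(z**k / k**s)

x = np.array([1, 1, 2, 1, 3, 2, 1, 4, 2, 1, 5, 2, 3, 1, 1])   # toy sample
n, sum_x, sum_logx = len(x), x.sum(), np.log(x).sum()         # the sufficient statistics

def neg_loglik(theta):
    z = 1.0 / (1.0 + np.exp(-theta[0]))    # logit parameterization keeps 0 < z < 1
    s = theta[1]
    return -(sum_x * np.log(z) - s * sum_logx - n * np.log(li(s, z)))

fit = minimize(neg_loglik, x0=[0.0, 1.0], method="Nelder-Mead")
print("z_hat:", 1.0 / (1.0 + np.exp(-fit.x[0])), " s_hat:", fit.x[1])
```

Note how the negative log-likelihood depends on the data only through the sum and the sum of logs, i.e. the arithmetic and geometric means identified as sufficient statistics.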
26.
Andrew P. Soms, Communications in Statistics - Theory and Methods, 2013, 42(12): 4459-4469
The results of Hoeffding (1956), Pledger and Proschan (1971), Gleser (1975), and Boland and Proschan (1983) are used to obtain Buehler (1957) 1 - α lower confidence limits for the reliability of k-of-n systems of independent components when the subsystem data have equal sample sizes and the observed failures satisfy certain conditions. To the best of our knowledge, for k ≠ 1 or n, this is the first time exact optimal lower confidence limits for system reliability have been given. The observed failure vectors generalize key test results for k-of-n systems, k ≠ n (Soms (1984) and Winterbottom (1974)). Two examples applying the above theory are also given.
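The Buehler confidence-limit construction is involved; as background, the sketch below computes only the point reliability of a k-of-n system of independent components (the quantity the lower limits are for), via a standard dynamic program over the number of working components. The component reliabilities are made-up numbers.

```python
# Sketch: point reliability of a k-of-n system of independent components,
# i.e. P(at least k of n components function). This is the system reliability
# function only, not the Buehler lower confidence limit construction.
import numpy as np

def k_of_n_reliability(p, k):
    """Poisson-binomial tail P(#working >= k) for independent components
    with success probabilities p, computed by a dynamic program."""
    dist = np.zeros(len(p) + 1)
    dist[0] = 1.0                                 # before any component: 0 working
    for pi in p:
        # the working count either stays (component fails) or increases by one
        dist[1:] = dist[1:] * (1 - pi) + dist[:-1] * pi
        dist[0] *= (1 - pi)
    return dist[k:].sum()

p = [0.95, 0.90, 0.92, 0.85, 0.97]                # five illustrative components
print("2-of-5 system reliability:", k_of_n_reliability(p, 2))
```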
27.
We develop a methodology for examining savings behavior in rural areas of developing countries that explicitly incorporates the sequential decision process in agriculture. The approach is used to examine the relative importance of alternative forms of savings in the presence and absence of formal financial intermediaries. Our results, based on stage-specific panel data from Pakistan, provide evidence that the presence of financial intermediaries importantly influences the use of formal savings and transfers for income smoothing. We also find that there are significant biases in evaluations of the savings-income relationship that are inattentive to the within-year dynamics of agricultural production.
28.
Andrew Evans, Significance, 2007, 4(1): 15-18
Privatisation of the state-owned British railway system was completed in 1997. The following six years saw four serious fatal train accidents, causing 49 deaths: the train collisions at Southall in 1997 and Ladbroke Grove in 1999, each caused by trains passing red signals, and the derailments at Hatfield in 2000 and Potters Bar in 2002, each caused by defective track. Has safety been compromised by the sell-off of the railways? Andrew Evans looks at the evidence and asks: has privatisation led to more accidents?
29.
The evaluation of decision trees under uncertainty is difficult because of the required nested operations of maximizing and averaging. Pure maximizing (for deterministic decision trees) or pure averaging (for probability trees) are both relatively simple because the maximum of a maximum is a maximum, and the average of an average is an average. But when the two operators are mixed, no simplification is possible, and one must evaluate the maximization and averaging operations in a nested fashion, following the structure of the tree. Nested evaluation requires large sample sizes (for data collection) or long computation times (for simulations).
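The nested evaluation the abstract describes is easy to illustrate: maximize at decision nodes, take expectations at chance nodes, recursing over the tree. The node classes, example tree, and payoffs below are invented for illustration.

```python
# Sketch of nested max/average evaluation of a decision tree:
# max at decision nodes, expectation at chance nodes, recursively.
from dataclasses import dataclass
from typing import List, Tuple, Union

@dataclass
class Decision:
    options: List["Node"]                     # choose the best branch

@dataclass
class Chance:
    branches: List[Tuple[float, "Node"]]      # (probability, subtree) pairs

Node = Union[float, "Decision", "Chance"]     # a leaf is just a payoff

def evaluate(node: Node) -> float:
    """Return the value of the optimal policy rooted at this node."""
    if isinstance(node, Decision):
        return max(evaluate(opt) for opt in node.options)
    if isinstance(node, Chance):
        return sum(p * evaluate(sub) for p, sub in node.branches)
    return float(node)                        # leaf payoff

tree = Decision(options=[
    Chance(branches=[(0.3, 100.0), (0.7, Decision(options=[10.0, 40.0]))]),
    Chance(branches=[(0.5, 60.0), (0.5, 20.0)]),
])
print("value of the optimal policy:", evaluate(tree))   # 0.3*100 + 0.7*40 = 58
```

Because the operators do not commute, neither a single global maximization nor a single global average reproduces this value; the recursion must follow the tree, which is exactly why simulation-based evaluation becomes expensive.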
30.
Grubbs and Weaver (1947) suggested a minimum-variance unbiased estimator of the population standard deviation of a normal random variable, in which a random sample is drawn and a weighted sum of the ranges of subsamples is calculated; the optimal choice uses as many subsamples of size eight as possible. They verified their result numerically for samples of size up to 100 and conjectured that their “rule of eights” is valid for all sample sizes. Here we examine the analogous problem where the underlying distribution is exponential, find that a “rule of fours” is optimal, and prove the result rigorously.
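A minimal sketch of the normal-case “rule of eights” idea follows: split the sample into subsamples of eight, average their ranges, and divide by the expected range of eight standard normals. The unbiasing constant is estimated by Monte Carlo here rather than taken from tables, and equal weights are used instead of the authors' optimal weighting, so this is an illustration of the construction, not their estimator.

```python
# Sketch: range-based estimate of a normal standard deviation from
# subsamples of size eight (equal weights; illustrative only).
import numpy as np

rng = np.random.default_rng(4)

def mean_range_constant(m=8, reps=200_000):
    """Monte Carlo estimate of E[max - min] for m iid standard normals."""
    z = rng.standard_normal((reps, m))
    return (z.max(axis=1) - z.min(axis=1)).mean()

def range_based_sd(x, m=8):
    """Average the ranges of consecutive subsamples of size m, then unbias."""
    x = np.asarray(x, float)
    n_groups = len(x) // m
    groups = x[: n_groups * m].reshape(n_groups, m)
    mean_range = (groups.max(axis=1) - groups.min(axis=1)).mean()
    return mean_range / mean_range_constant(m)

sample = rng.normal(loc=5.0, scale=2.0, size=96)      # 12 subsamples of 8
print("range-based estimate:", range_based_sd(sample))
print("usual sample sd:     ", sample.std(ddof=1))
```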