61.
Peter M. Hooper. Revue canadienne de statistique, 2001, 29(3): 343-364
The author proposes a new method for flexible regression modeling of multi-dimensional data, where the regression function is approximated by a linear combination of logistic basis functions. The method is adaptive, selecting simple or more complex models as appropriate. The number, location, and (to some extent) shape of the basis functions are automatically determined from the data. The method is also affine invariant, so accuracy of the fit is not affected by rotation or scaling of the covariates. Squared error and absolute error criteria are both available for estimation. The latter provides a robust estimator of the conditional median function. Computation is relatively fast, particularly for large data sets, so the method is well suited for data mining applications.
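To make the model form concrete, here is a minimal sketch of regression on logistic basis functions. Hooper's method chooses the number, location and shape of the bases adaptively from the data; the sketch below fixes them and fits the coefficients by squared-error least squares, purely for illustration. All data, centres and scales here are made up.

```python
# A minimal sketch (not Hooper's adaptive algorithm): regression on a
# fixed set of logistic basis functions, coefficients fit by least squares.
import numpy as np

rng = np.random.default_rng(0)

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy 1-D data; the paper handles multi-dimensional covariates.
x = rng.uniform(-3, 3, size=200)
y = np.sin(x) + rng.normal(scale=0.2, size=x.size)

# Hypothetical choice: 8 logistic bases with fixed centres and a common
# scale.  The paper determines all of these automatically.
centres = np.linspace(-3, 3, 8)
scale = 2.0
B = logistic(scale * (x[:, None] - centres[None, :]))  # n x K design matrix
B = np.column_stack([np.ones(x.size), B])              # add an intercept

beta, *_ = np.linalg.lstsq(B, y, rcond=None)           # squared-error fit

# Predict on a grid.
xg = np.linspace(-3, 3, 50)
Bg = np.column_stack([np.ones(xg.size),
                      logistic(scale * (xg[:, None] - centres[None, :]))])
print(np.round(Bg @ beta, 2))
```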
62.
Edwin Choi & Peter Hall 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2000,62(2):461-477
Given a linear time series, e.g. an autoregression of infinite order, we may construct a finite order approximation and use that as the basis for confidence regions. The sieve or autoregressive bootstrap, as this method is often called, is generally seen as a competitor with the better-understood block bootstrap approach. However, in the present paper we argue that, for linear time series, the sieve bootstrap has significantly better performance than blocking methods and offers a wider range of opportunities. In particular, since it does not corrupt second-order properties, it may be used in a double-bootstrap form, with the second bootstrap application being employed to calibrate a basic percentile method confidence interval. This approach confers second-order accuracy without the need to estimate variance. That offers substantial benefits, since variances of statistics based on time series can be difficult to estimate reliably and, partly because of the relatively small amount of information contained in a dependent process, are notorious for causing problems when used to Studentize. Other advantages of the sieve bootstrap include considerably greater robustness against variations in the choice of the tuning parameter, here equal to the autoregressive order, and the fact that, in contradistinction to the case of the block bootstrap, the percentile t version of the sieve bootstrap may be based on the 'raw' estimator of standard error. In the process of establishing these properties we show that the sieve bootstrap is second order correct.
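The mechanics of the sieve bootstrap are simple to sketch: fit a finite autoregression to the observed series, resample its centred residuals, and regenerate bootstrap series from the fitted recursion. Below is a minimal single-level version for the mean of a toy AR(1) series; the AR order p is the tuning parameter the abstract refers to, and every setting here is an illustrative assumption.

```python
# A minimal sketch of the sieve (autoregressive) bootstrap for the mean of
# a linear time series: fit an AR(p) approximation, resample its residuals,
# regenerate series, and form a basic percentile confidence interval.
import numpy as np

rng = np.random.default_rng(1)

# Toy AR(1) data standing in for a general linear series.
n, phi = 300, 0.5
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

p = 5                      # sieve order (the tuning parameter)
xc = x - x.mean()
X = np.column_stack([xc[p - j - 1:n - j - 1] for j in range(p)])
coef, *_ = np.linalg.lstsq(X, xc[p:], rcond=None)   # fitted AR coefficients
resid = xc[p:] - X @ coef
resid = resid - resid.mean()                        # centre the residuals

boot_means = []
for _ in range(500):
    e = rng.choice(resid, size=n, replace=True)     # resampled innovations
    xb = np.zeros(n)
    for t in range(p, n):
        xb[t] = coef @ xb[t - p:t][::-1] + e[t]     # regenerate the series
    boot_means.append(xb.mean() + x.mean())

lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"basic percentile interval for the mean: ({lo:.3f}, {hi:.3f})")
```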
63.
Peter Hall, Stephen M.-S. Lee & G. Alastair Young. Journal of the Royal Statistical Society. Series B, Statistical methodology, 2000, 62(2): 479-491
We show that, in the context of double-bootstrap confidence intervals, linear interpolation at the second level of the double bootstrap can reduce the simulation error component of coverage error by an order of magnitude. Intervals that are indistinguishable in terms of coverage error with theoretical, infinite simulation, double-bootstrap confidence intervals may be obtained at substantially less computational expense than by using the standard Monte Carlo approximation method. The intervals retain the simplicity of uniform bootstrap sampling and require no special analysis or computational techniques. Interpolation at the first level of the double bootstrap is shown to have a relatively minor effect on the simulation error.
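The following rough sketch shows where second-level interpolation enters a double-bootstrap calibration: rather than counting raw exceedances in the inner resample, the position of the original estimate within the inner distribution is obtained by linear interpolation, which smooths the Monte Carlo step function. This is an illustrative calibration scheme under made-up data, not the authors' exact construction.

```python
# A rough sketch of double-bootstrap calibration of a percentile interval
# for the mean, with linear interpolation at the second level.
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(size=40)        # toy sample
theta = x.mean()
B1, B2, alpha = 200, 100, 0.05

u = np.empty(B1)
for b in range(B1):
    xb = rng.choice(x, size=x.size, replace=True)
    tb = np.sort([rng.choice(xb, size=x.size, replace=True).mean()
                  for _ in range(B2)])
    # Interpolated position of theta within the second-level distribution,
    # instead of a raw count of exceedances: this is the smoothing step
    # that cuts the simulation error.
    u[b] = np.interp(theta, tb, np.arange(1, B2 + 1) / (B2 + 1))

# Calibrated nominal levels: quantiles of the u_b values.
lam_lo, lam_hi = np.quantile(u, [alpha / 2, 1 - alpha / 2])

first = np.sort([rng.choice(x, size=x.size, replace=True).mean()
                 for _ in range(B1)])
lo, hi = np.quantile(first, [lam_lo, lam_hi])
print(f"calibrated percentile interval: ({lo:.3f}, {hi:.3f})")
```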
64.
In this paper we present a parsimonious model for the analysis of underreported Poisson count data. In contrast to previously developed methods, we are able to derive analytic expressions for the key marginal posterior distributions that are of interest. The usefulness of this model is explored via a re-examination of previously analysed data covering the purchasing of port wine (Ramos, 1999).
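One standard way to formalize underreporting, sketched below, is Poisson thinning: true counts are Poisson(λ) and each event is recorded with probability p, so observed counts are Poisson(λp). With a conjugate Gamma prior on λ and p treated as known, the posterior is available in closed form, echoing the analytic tractability the abstract emphasizes. This parameterization and all numbers are assumptions for illustration, not necessarily the paper's model.

```python
# A minimal sketch of a conjugate Bayesian underreported-Poisson model.
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(3)

lam_true, p, n = 10.0, 0.7, 50     # hypothetical truth and reporting rate
y = rng.poisson(lam_true * p, size=n)

a, b = 2.0, 0.2                    # Gamma(shape, rate) prior on lam
a_post = a + y.sum()               # conjugate update: shape gains the counts
b_post = b + n * p                 # rate gains the total thinned exposure

lo, hi = gamma.ppf([0.025, 0.975], a_post, scale=1.0 / b_post)
print(f"posterior mean of lam: {a_post / b_post:.2f}  (truth {lam_true})")
print(f"posterior 95% interval: ({lo:.2f}, {hi:.2f})")
```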
65.
Stein's method is used to prove the Lindeberg-Feller theorem and a generalization of the Berry-Esséen theorem. The arguments involve only manipulation of probability inequalities, and form an attractive alternative to the less direct Fourier-analytic methods which are traditionally employed.
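For readers unfamiliar with the technique, the standard identities behind Stein's method are worth recording. The normal law is characterized by a differential identity, and distances to normality are bounded by solving the associated Stein equation; both displays below are textbook facts, not results specific to this paper.

```latex
% Stein's characterization of the standard normal law:
\[
  Z \sim N(0,1)
  \iff
  \mathbb{E}\bigl[f'(Z) - Z f(Z)\bigr] = 0
  \quad \text{for all suitable } f .
\]
% Given a test function $h$, solve the Stein equation
\[
  f_h'(w) - w\, f_h(w) = h(w) - \mathbb{E}\, h(Z),
\]
% so that the distance to normality reduces to a direct bound:
\[
  \mathbb{E}\, h(W) - \mathbb{E}\, h(Z)
  = \mathbb{E}\bigl[f_h'(W) - W f_h(W)\bigr],
\]
% which can be controlled with probability inequalities alone,
% avoiding Fourier analysis.
```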
66.
Peter Hall & Tapabrata Maiti. Journal of the Royal Statistical Society. Series B, Statistical methodology, 2009, 71(3): 703-718
We develop a general non-parametric approach to the analysis of clustered data via random effects. Assuming only that the link function is known, the regression functions and the distributions of both cluster means and observation errors are treated non-parametrically. Our argument proceeds by viewing the observation error at the cluster mean level as though it were a measurement error in an errors-in-variables problem, and using a deconvolution argument to access the distribution of the cluster mean. A Fourier deconvolution approach could be used if the distribution of the error-in-variables were known. In practice it is unknown, of course, but it can be estimated from repeated measurements, and in this way deconvolution can be achieved in an approximate sense. This argument might be interpreted as implying that large numbers of replicates are necessary for each cluster mean distribution, but that is not so; we avoid this requirement by incorporating statistical smoothing over values of nearby explanatory variables. Empirical rules are developed for the choice of smoothing parameter. Numerical simulations, and an application to real data, demonstrate small sample performance for this package of methodology. We also develop theory establishing statistical consistency.
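The Fourier deconvolution step mentioned in the abstract can be sketched directly: divide the empirical characteristic function of the observed cluster averages by the characteristic function of the observation error, damp with a smoothing kernel, and invert. The sketch below assumes a known normal error, whereas the paper estimates the error distribution from replicates; the bandwidth and all data are illustrative assumptions.

```python
# A minimal sketch of Fourier deconvolution for the cluster-mean density,
# assuming the observation error at the cluster-mean level is known
# N(0, sigma^2) (the paper instead estimates it from repeated measurements).
import numpy as np

rng = np.random.default_rng(4)

m, sigma = 400, 0.5
means = rng.normal(2.0, 1.0, size=m)              # latent cluster means
obs = means + rng.normal(scale=sigma, size=m)     # observed cluster averages

t = np.linspace(-4, 4, 201)                       # frequency grid
h = 0.3                                           # smoothing bandwidth
phi_hat = np.exp(1j * t[:, None] * obs[None, :]).mean(axis=1)  # empirical c.f.
phi_err = np.exp(-0.5 * (sigma * t) ** 2)         # error characteristic fn
phi_ker = np.exp(-0.5 * (h * t) ** 2)             # Gaussian damping kernel

x = np.linspace(-2, 6, 9)                         # evaluation points
dt = t[1] - t[0]
# Inverse Fourier transform of the deconvolved, damped c.f. (Riemann sum).
integrand = np.exp(-1j * np.outer(x, t)) * (phi_hat / phi_err) * phi_ker
dens = integrand.sum(axis=1).real * dt / (2 * np.pi)
print(np.round(dens, 3))
```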
67.
A Bayesian discovery procedure
Michele Guindani, Peter Müller & Song Zhang. Journal of the Royal Statistical Society. Series B, Statistical methodology, 2009, 71(5): 905-925
Summary. We discuss a Bayesian discovery procedure for multiple-comparison problems. We show that, under a coherent decision theoretic framework, a loss function combining true positive and false positive counts leads to a decision rule that is based on a threshold of the posterior probability of the alternative. Under a semiparametric model for the data, we show that the Bayes rule can be approximated by the optimal discovery procedure, which was recently introduced by Storey. Improving the approximation leads us to a Bayesian discovery procedure, which exploits the multiple shrinkage in clusters that are implied by the assumed non-parametric model. We compare the Bayesian discovery procedure and the optimal discovery procedure estimates in a simple simulation study and in an assessment of differential gene expression based on microarray data from tumour samples. We extend the setting of the optimal discovery procedure by discussing modifications of the loss function that lead to different single-thresholding statistics. Finally, we provide an application of the previous arguments to dependent (spatial) data. 相似文献
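The thresholding structure of such rules is easy to exhibit for one standard loss of this type. With loss L = -TP + λ·FP, flagging case i gains its posterior probability of the alternative, v_i, and costs λ(1 - v_i) in expectation, so the optimal rule flags exactly when v_i > λ/(1 + λ). The sketch below uses made-up v_i; in the paper these come from a semiparametric model.

```python
# A minimal sketch of the posterior-probability thresholding rule: with
# loss L = -TP + lam * FP, flag case i exactly when v_i > lam / (1 + lam).
import numpy as np

rng = np.random.default_rng(5)
v = rng.beta(0.3, 0.3, size=20)      # hypothetical posterior probabilities
lam = 4.0                            # relative cost of a false positive
threshold = lam / (1 + lam)
flag = v > threshold
print(f"threshold = {threshold:.2f}; flagged {flag.sum()} of {v.size} cases")
```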
68.
69.
The responses obtained from response surface designs that are run sequentially often exhibit serial correlation or time trends. The order in which the runs of the design are performed then has an impact on the precision of the parameter estimators. This article proposes the use of a variable-neighbourhood search algorithm to compute run orders that guarantee a precise estimation of the effects of the experimental factors. The importance of using good run orders is demonstrated by seeking D-optimal run orders for a central composite design in the presence of an AR(1) autocorrelation pattern. 
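The objective being searched over is the D-criterion of the generalized least squares information matrix, det(X'V⁻¹X), where V is the AR(1) correlation matrix implied by the run order. The sketch below uses a plain pairwise-swap hill climb on a small made-up two-factor design, a simplification of the article's variable-neighbourhood search (which cycles through several neighbourhood structures).

```python
# A minimal sketch of searching for a run order that improves the
# D-criterion det(X' V^-1 X) under AR(1) errors, via pairwise swaps.
import numpy as np

rng = np.random.default_rng(6)

n, rho = 12, 0.6
X = np.column_stack([np.ones(n), rng.choice([-1.0, 1.0], size=(n, 2))])
V = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # AR(1)
Vinv = np.linalg.inv(V)

def d_crit(order):
    Xo = X[order]                     # design matrix in this run order
    return np.linalg.det(Xo.T @ Vinv @ Xo)

order = np.arange(n)
best = d_crit(order)
improved = True
while improved:                       # hill-climb over all pairwise swaps
    improved = False
    for i in range(n - 1):
        for j in range(i + 1, n):
            trial = order.copy()
            trial[i], trial[j] = trial[j], trial[i]
            val = d_crit(trial)
            if val > best:
                order, best, improved = trial, val, True
print("run order:", order, f" D-criterion: {best:.2f}")
```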
70.
In some statistical problems a degree of explicit, prior information is available about the value taken by the parameter of interest, θ say, although the information is much less than would be needed to place a prior density on the parameter's distribution. Often the prior information takes the form of a simple bound, 'θ > θ1' or 'θ < θ1', where θ1 is determined by physical considerations or mathematical theory, such as positivity of a variance. A conventional approach to accommodating the requirement that θ > θ1 is to replace an estimator, θ̂, of θ by the maximum of θ̂ and θ1. However, this technique is generally inadequate. For one thing, it does not respect the strictness of the inequality θ > θ1, which can be critical in interpreting results. For another, it produces an estimator that does not respond in a natural way to perturbations of the data. In this paper we suggest an alternative approach, in which bootstrap aggregation, or bagging, is used to overcome these difficulties. Bagging gives estimators that, when subjected to the constraint θ > θ1, strictly exceed θ1 except in extreme settings in which the empirical evidence strongly contradicts the constraint. Bagging also reduces estimator variability in the important case for which θ̂ is close to θ1, and more generally produces estimators that respect the constraint in a smooth, realistic fashion.
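The contrast between hard truncation and bagging can be sketched in a few lines: plain truncation max(θ̂, θ1) sits exactly at the boundary whenever θ̂ ≤ θ1, while averaging the truncated estimator over bootstrap resamples responds smoothly to the data and strictly exceeds θ1 unless essentially every resample violates the constraint. This is an illustrative reading of the idea under made-up data, not the paper's exact rule.

```python
# A minimal sketch contrasting hard truncation with a bagged constrained
# estimator for a parameter known to satisfy theta > theta1.
import numpy as np

rng = np.random.default_rng(7)

theta1 = 0.0                        # known lower bound, e.g. a variance > 0
x = rng.normal(0.05, 1.0, size=30)  # toy data with mean barely above bound

plain = max(x.mean(), theta1)       # conventional truncated estimator

boot = np.array([rng.choice(x, size=x.size, replace=True).mean()
                 for _ in range(1000)])
bagged = np.maximum(boot, theta1).mean()   # average of truncated resamples

print(f"truncated: {plain:.4f}   bagged: {bagged:.4f}")
```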