Full-text access type (number of articles)
Paid full text | 6663 |
Free | 167 |
Free (domestic) | 3 |
Subject classification (number of articles)
Management | 1007 |
Ethnology | 59 |
Talent studies | 9 |
Demography | 521 |
Book series and collected works | 50 |
Theory and methodology | 822 |
General | 52 |
Sociology | 3470 |
Statistics | 843 |
Publication year (number of articles)
2023 | 45 |
2022 | 31 |
2021 | 51 |
2020 | 122 |
2019 | 174 |
2018 | 205 |
2017 | 209 |
2016 | 212 |
2015 | 147 |
2014 | 175 |
2013 | 1050 |
2012 | 234 |
2011 | 248 |
2010 | 200 |
2009 | 159 |
2008 | 206 |
2007 | 227 |
2006 | 215 |
2005 | 230 |
2004 | 196 |
2003 | 174 |
2002 | 177 |
2001 | 122 |
2000 | 158 |
1999 | 128 |
1998 | 119 |
1997 | 106 |
1996 | 96 |
1995 | 88 |
1994 | 110 |
1993 | 94 |
1992 | 99 |
1991 | 69 |
1990 | 57 |
1989 | 59 |
1988 | 74 |
1987 | 55 |
1986 | 49 |
1985 | 58 |
1984 | 72 |
1983 | 57 |
1982 | 59 |
1981 | 54 |
1980 | 54 |
1979 | 45 |
1978 | 36 |
1977 | 36 |
1976 | 48 |
1975 | 31 |
1974 | 39 |
A total of 6833 results were returned; entries 91-100 are shown below.
91.
Polynomial spline regression models of low degree have proved useful in modeling responses from designed experiments in science and engineering when simple polynomial models are inadequate. When there is uncertainty in the number and location of the knots, or breakpoints, of the spline, designs that minimize the systematic errors resulting from model misspecification may be appropriate. This paper gives a method for constructing such all-bias designs for a single-variable spline when the distinct knots in the assumed and true models come from some specified set. A class of designs is defined in terms of the inter-knot intervals, and sufficient conditions are obtained for a design within this class to be all-bias under linear, quadratic and cubic spline models. An example of the construction of all-bias designs is given.
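Entry 91 concerns low-degree spline regression models with knots. As background only, the sketch below builds the truncated power basis commonly used for such splines; the degree, knot locations and design points are illustrative assumptions, not the paper's all-bias construction.

```python
import numpy as np

def truncated_power_basis(x, degree, knots):
    """Design matrix for a polynomial spline of the given degree with the
    given distinct interior knots, using the truncated power basis:
    1, x, ..., x^d, and (x - k)_+^d for each knot k."""
    x = np.asarray(x, dtype=float)
    cols = [x ** p for p in range(degree + 1)]
    cols += [np.clip(x - k, 0.0, None) ** degree for k in knots]
    return np.column_stack(cols)

# Illustrative design on [0, 1] with two assumed knots (values are hypothetical).
x_design = np.linspace(0.0, 1.0, 13)
X = truncated_power_basis(x_design, degree=2, knots=[0.3, 0.7])
beta_hat, *_ = np.linalg.lstsq(X, np.sin(2 * np.pi * x_design), rcond=None)
print(X.shape, beta_hat.shape)   # (13, 5) (5,)
```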
92.
Bayesian subset selection and model averaging using a centred and dispersed prior for the error variance (times cited: 1; self-citations: 0; cited by others: 1)
Edward Cripps, Robert Kohn, David Nott. Australian & New Zealand Journal of Statistics, 2006, 48(2): 237-252
This article proposes a new data-based prior distribution for the error variance in a Gaussian linear regression model, when the model is used for Bayesian variable selection and model averaging. For a given subset of variables in the model, this prior has a mode that is an unbiased estimator of the error variance but is suitably dispersed to make it uninformative relative to the marginal likelihood. The advantage of this empirical Bayes prior for the error variance is that it is centred and dispersed sensibly and avoids the arbitrary specification of hyperparameters. The performance of the new prior is compared to that of a prior proposed previously in the literature using several simulated examples and two loss functions. For each example our paper also reports results for the model that orthogonalizes the predictor variables before performing subset selection. A real example is also investigated. The empirical results suggest that for both the simulated and real data, the performance of the estimators based on the prior proposed in our article compares favourably with that of a prior used previously in the literature.
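The prior in entry 92 is centred at an unbiased estimate of the error variance for a given subset of predictors. The following is a minimal sketch of that unbiased estimate only, not of the full prior, the subset search or the model averaging; the data are simulated and the function name is ours.

```python
import numpy as np

def unbiased_error_variance(y, X_subset):
    """OLS residual variance RSS / (n - p) for a given subset of predictors,
    with an intercept added; such an unbiased estimate is the mode of the
    dispersed, data-based prior described in entry 92."""
    n = len(y)
    X = np.column_stack([np.ones(n), X_subset])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid / (n - X.shape[1])

rng = np.random.default_rng(0)
X_full = rng.normal(size=(100, 5))
y = X_full[:, 0] - 2 * X_full[:, 2] + rng.normal(scale=1.5, size=100)
print(unbiased_error_variance(y, X_full[:, [0, 2]]))   # roughly 1.5**2 = 2.25
```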
93.
A hierarchical model for extreme wind speeds (times cited: 3; self-citations: 0; cited by others: 3)
Lee Fawcett, David Walshaw. Journal of the Royal Statistical Society, Series C (Applied Statistics), 2006, 55(5): 631-646
Summary. A typical extreme value analysis is often carried out on the basis of simplistic inferential procedures, though the data being analysed may be structurally complex. Here we develop a hierarchical model for hourly gust maximum wind speed data, which attempts to identify site and seasonal effects for the marginal densities of hourly maxima, as well as for the serial dependence at each location. A Gaussian model for the random effects exploits the meteorological structure in the data, enabling increased precision for inferences at individual sites and in individual seasons. The Bayesian framework that is adopted is also exploited to obtain predictive return level estimates at each site, which incorporate uncertainty due to model estimation, as well as the randomness that is inherent in the processes that are involved.
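Entry 93 fits a Bayesian hierarchical model across sites and seasons. For contrast, here is a sketch of the much simpler single-site analysis it improves upon: a maximum-likelihood GEV fit to block maxima with a return-level estimate. The data and parameter values are simulated, and none of the paper's random-effects structure is represented.

```python
from scipy import stats

# Simulated "hourly gust maxima" for one site and season (illustrative only).
maxima = stats.genextreme.rvs(c=-0.1, loc=25.0, scale=4.0, size=500, random_state=1)

# Maximum-likelihood GEV fit; note scipy's shape c is the negative of the usual xi.
c_hat, loc_hat, scale_hat = stats.genextreme.fit(maxima)

# m-observation return level: the quantile exceeded once in m observations on average.
m = 1000
return_level = stats.genextreme.ppf(1 - 1 / m, c_hat, loc=loc_hat, scale=scale_hat)
print(round(return_level, 2))
```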
94.
David G. T. Denison. Statistics and Computing, 2001, 11(2): 171-178
Boosting is a new, powerful method for classification. It is an iterative procedure which successively classifies a weighted version of the sample, and then reweights this sample dependent on how successful the classification was. In this paper we review some of the commonly used methods for performing boosting and show how they can be fit into a Bayesian setup at each iteration of the algorithm. We demonstrate how this formulation gives rise to a new splitting criterion when using a domain-partitioning classification method such as a decision tree. Further we can improve the predictive performance of simple decision trees, known as stumps, by using a posterior weighted average of them to classify at each step of the algorithm, rather than just a single stump. The main advantage of this approach is to reduce the number of boosting iterations required to produce a good classifier with only a minimal increase in the computational complexity of the algorithm.
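For readers unfamiliar with the baseline algorithm entry 94 builds on, the sketch below implements the standard discrete AdaBoost reweighting loop with decision stumps; it is not the Bayesian, posterior-weighted variant proposed in the paper, and the dataset and number of iterations are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
y = 2 * y - 1                      # labels in {-1, +1}
w = np.full(len(y), 1 / len(y))    # sample weights, reweighted each round
F = np.zeros(len(y))               # additive score built up over iterations

for _ in range(25):
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = np.clip(w @ (pred != y), 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)      # weight of this stump
    F += alpha * pred
    w *= np.exp(-alpha * y * pred)             # upweight misclassified points
    w /= w.sum()

print("training accuracy:", np.mean(np.sign(F) == y))
```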
95.
The Dirichlet process prior allows flexible nonparametric mixture modeling. The number of mixture components is not specified in advance and can grow as new data arrive. However, analyses based on the Dirichlet process prior are sensitive to the choice of the parameters, including an infinite-dimensional distributional parameter G₀. Most previous applications have either fixed G₀ as a member of a parametric family or treated G₀ in a Bayesian fashion, using parametric prior specifications. In contrast, we have developed an adaptive nonparametric method for constructing smooth estimates of G₀. We combine this method with a technique for estimating α, the other Dirichlet process parameter, that is inspired by an existing characterization of its maximum-likelihood estimator. Together, these estimation procedures yield a flexible empirical Bayes treatment of Dirichlet process mixtures. Such a treatment is useful in situations where smooth point estimates of G₀ are of intrinsic interest, or where the structure of G₀ cannot be conveniently modeled with the usual parametric prior families. Analysis of simulated and real-world datasets illustrates the robustness of this approach.
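By way of contrast with entry 95, the sketch below fits a truncated variational Dirichlet-process Gaussian mixture in which the base measure G₀ is a conventional parametric prior and the concentration parameter α is fixed, i.e., the standard treatment the paper seeks to relax; the data are simulated.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Two-component simulated data (illustrative).
rng = np.random.default_rng(2)
X = np.concatenate([rng.normal(-2.0, 0.5, size=(150, 1)),
                    rng.normal(3.0, 1.0, size=(150, 1))])

# Truncated variational Dirichlet-process mixture; the base measure G0 is a
# parametric prior here, unlike the adaptive nonparametric estimate of G0
# discussed in entry 95, and alpha is fixed rather than estimated.
dpm = BayesianGaussianMixture(
    n_components=10,                                   # truncation level
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=1.0,                    # fixed alpha
    random_state=0,
).fit(X)

print(np.round(dpm.weights_, 3))   # most of the 10 components get negligible weight
```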
96.
In many domains, simple forms of classification rules are needed because of requirements such as ease of use. A particularly simple form splits each variable into just a few categories, assigns weights to the categories, sums the weights for a new object to be classified, and produces a classification by comparing the score with a threshold. Such instruments are often called scorecards. We describe a way to find the best partition of each variable using a simulated annealing strategy. We present theoretical and empirical comparisons of two such additive models, one based on weights of evidence and another based on logistic regression.
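A minimal sketch of the kind of scorecard entry 96 describes: each variable is binned, the bins receive weight-of-evidence weights, and a new object is classified by comparing its summed weights with a threshold. The binning here is fixed equal-width rather than optimised by simulated annealing, and the data are simulated.

```python
import numpy as np

def woe_weights(x_binned, y, n_bins):
    """Weight of evidence per bin: log of (share of positives) / (share of negatives)."""
    pos, neg = (y == 1), (y == 0)
    w = np.zeros(n_bins)
    for b in range(n_bins):
        in_bin = (x_binned == b)
        p = max(in_bin[pos].sum(), 0.5) / max(pos.sum(), 1)   # smoothed shares
        q = max(in_bin[neg].sum(), 0.5) / max(neg.sum(), 1)
        w[b] = np.log(p / q)
    return w

rng = np.random.default_rng(3)
n, n_bins = 500, 4
x = rng.normal(size=n)
y = (x + rng.normal(scale=1.0, size=n) > 0).astype(int)

# Fixed equal-width bins; entry 96 instead searches for the best partition
# of each variable with simulated annealing.
edges = np.linspace(x.min(), x.max(), n_bins + 1)
x_binned = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
weights = woe_weights(x_binned, y, n_bins)

scores = weights[x_binned]          # with one variable, the score is just its bin weight
predictions = (scores > 0).astype(int)
print("training accuracy:", np.mean(predictions == y))
```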
97.
98.
99.
Julian P. T. Higgins, Simon G. Thompson, David J. Spiegelhalter. Journal of the Royal Statistical Society, Series A (Statistics in Society), 2009, 172(1): 137-159
Summary. Meta-analysis in the presence of unexplained heterogeneity is frequently undertaken by using a random-effects model, in which the effects underlying different studies are assumed to be drawn from a normal distribution. Here we discuss the justification and interpretation of such models, by addressing in turn the aims of estimation, prediction and hypothesis testing. A particular issue that we consider is the distinction between inference on the mean of the random-effects distribution and inference on the whole distribution. We suggest that random-effects meta-analyses as currently conducted often fail to provide the key results, and we investigate the extent to which distribution-free, classical and Bayesian approaches can provide satisfactory methods. We conclude that the Bayesian approach has the advantage of naturally allowing for full uncertainty, especially for prediction. However, it is not without problems, including computational intensity and sensitivity to a priori judgements. We propose a simple prediction interval for classical meta-analysis and offer extensions to standard practice of Bayesian meta-analysis, making use of an example of studies of 'set shifting' ability in people with eating disorders.
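A sketch of a classical random-effects meta-analysis of the kind discussed in entry 99, using the DerSimonian-Laird moment estimate of the between-study variance and a t-based prediction interval of the form proposed in the paper; the study estimates and variances below are made up for illustration.

```python
import numpy as np
from scipy import stats

# Simulated study effect estimates and their within-study variances (illustrative).
y = np.array([0.30, 0.10, 0.45, 0.22, 0.60, -0.05, 0.35])
v = np.array([0.04, 0.03, 0.06, 0.02, 0.08, 0.05, 0.03])
k = len(y)

# DerSimonian-Laird moment estimate of the between-study variance tau^2.
w_fixed = 1 / v
y_fixed = (w_fixed @ y) / w_fixed.sum()
Q = w_fixed @ (y - y_fixed) ** 2
c = w_fixed.sum() - (w_fixed ** 2).sum() / w_fixed.sum()
tau2 = max(0.0, (Q - (k - 1)) / c)

# Random-effects pooled mean and its standard error.
w = 1 / (v + tau2)
mu = (w @ y) / w.sum()
se_mu = np.sqrt(1 / w.sum())

# 95% prediction interval for the effect in a new study,
# using a t distribution with k - 2 degrees of freedom.
t = stats.t.ppf(0.975, df=k - 2)
half_width = t * np.sqrt(tau2 + se_mu ** 2)
print(round(mu - half_width, 3), round(mu + half_width, 3))
```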
100.
David E. Tyler, Frank Critchley, Lutz Dümbgen, Hannu Oja. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 2009, 71(3): 549-592
Summary. A general method for exploring multivariate data by comparing different estimates of multivariate scatter is presented. The method is based on the eigenvalue-eigenvector decomposition of one scatter matrix relative to another. In particular, it is shown that the eigenvectors can be used to generate an affine invariant co-ordinate system for the multivariate data. Consequently, we view this method as a method for invariant co-ordinate selection. By plotting the data with respect to this new invariant co-ordinate system, various data structures can be revealed. For example, under certain independent components models, it is shown that the invariant co-ordinates correspond to the independent components. Another example pertains to mixtures of elliptical distributions. In this case, it is shown that a subset of the invariant co-ordinates corresponds to Fisher's linear discriminant subspace, even though the class identifications of the data points are unknown. Some illustrative examples are given.
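A sketch of the comparison-of-scatter idea in entry 100: two scatter matrices are estimated (here the ordinary covariance and a kurtosis-weighted covariance, one common choice), and the eigenvectors of one relative to the other define the invariant co-ordinates onto which the data are projected. The simulated data and the choice of second scatter estimator are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy import linalg

rng = np.random.default_rng(4)
# Mixture of two elliptical (Gaussian) clusters; the class labels are never used.
X = np.vstack([rng.multivariate_normal([0, 0, 0], np.eye(3), size=200),
               rng.multivariate_normal([4, 0, 0], np.diag([1.0, 2.0, 0.5]), size=100)])

Xc = X - X.mean(axis=0)
S1 = np.cov(Xc, rowvar=False)                       # ordinary covariance

# A second scatter matrix: covariance reweighted by squared Mahalanobis
# distance (a kurtosis-based scatter, one common choice for this method).
d2 = np.einsum('ij,jk,ik->i', Xc, np.linalg.inv(S1), Xc)
S2 = (Xc * d2[:, None]).T @ Xc / (len(Xc) * (X.shape[1] + 2))

# Eigendecomposition of S2 relative to S1; the eigenvectors define the
# invariant co-ordinates, onto which the centred data are projected.
eigvals, B = linalg.eigh(S2, S1)
Z = Xc @ B
print(np.round(eigvals, 2), Z.shape)
```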