81.
In the Bayesian analysis of a multiple-recapture census, different diffuse prior distributions can lead to markedly different inferences about the population size N. Through consideration of the Fisher information matrix it is shown that the number of captures in each sample typically provides little information about N. This suggests that if there is no prior information about capture probabilities, then knowledge of just the sample sizes and not the number of recaptures should leave the distribution of N unchanged. A prior model that has this property is identified and the posterior distribution is examined. In particular, asymptotic estimates of the posterior mean and variance are derived. Differences between Bayesian and classical point and interval estimators are illustrated through examples.
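As a purely illustrative, hedged sketch of how a diffuse prior interacts with the recapture data in a two-sample census (the 1/N prior and the names `posterior_N`, `n1`, `n2`, `m` are invented here; this is not the prior model identified in the paper):

```python
# Grid posterior for the population size N in a two-sample capture-recapture
# study, assuming a hypergeometric likelihood for the recapture count m given
# N and the sample sizes n1, n2, and an illustrative 1/N diffuse prior.
import numpy as np
from scipy.stats import hypergeom

def posterior_N(n1, n2, m, N_max=5000):
    """Normalised grid posterior over the population size N."""
    N = np.arange(max(n1 + n2 - m, 1), N_max + 1)
    # P(m recaptures | N): the second sample of size n2 is drawn from N units,
    # n1 of which were marked in the first sample.
    lik = hypergeom.pmf(m, N, n1, n2)
    prior = 1.0 / N                      # illustrative diffuse prior, not the paper's
    post = lik * prior
    return N, post / post.sum()

N, post = posterior_N(n1=60, n2=50, m=12)
mean_N = (N * post).sum()
print("posterior mean of N:", mean_N)
print("posterior sd of N:  ", np.sqrt((N**2 * post).sum() - mean_N**2))
```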
82.
Peter Thompson, Communications in Statistics - Theory and Methods, 2013, 42(3): 537-553
The admissibility of testing procedures is examined when the loss function used is an increasing function of the p-value rather than the standard 0–1 loss. It is shown that the class of admissible procedures using the new approach is a subset of the class of admissible procedures using the 0–1 loss.
83.
Peter Thompson, Communications in Statistics - Theory and Methods, 2013, 42(10): 2343-2347
In the bivariate normal, n = 2 case, when testing H0: μ_x = μ_y = 0, σ²_x = σ²_y = 1, ρ = 0 vs. H1: μ_x = μ_y = 0, σ²_x = σ²_y = 1, 0 < ρ < 1, it is shown that the median p-values given by the locally most powerful test and the distantly most powerful test are both beaten everywhere by the median p-value of a third test.
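For readers who want to reproduce this kind of comparison numerically, the rough Monte Carlo sketch below estimates the median p-value of a score-type test under one alternative ρ. It assumes that the locally most powerful test at ρ = 0 rejects for large T = x1·y1 + x2·y2 (the score statistic under the stated model), something the abstract itself does not spell out, and it does not reproduce the paper's third, dominating test.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_T(rho, size):
    """Draw T = x1*y1 + x2*y2 for n = 2 pairs from the stated bivariate normal."""
    cov = [[1.0, rho], [rho, 1.0]]
    xy = rng.multivariate_normal([0.0, 0.0], cov, size=(size, 2))
    return (xy[:, :, 0] * xy[:, :, 1]).sum(axis=1)

null_T = np.sort(sample_T(0.0, 200_000))      # reference distribution under H0
alt_T = sample_T(0.5, 50_000)                 # one alternative, rho = 0.5
# One-sided p-value P_H0(T >= t_obs), approximated from the null sample
pvals = 1.0 - np.searchsorted(null_T, alt_T) / null_T.size
print("estimated median p-value at rho = 0.5:", np.median(pvals))
```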
84.
Peter Wessman, Communications in Statistics - Theory and Methods, 2013, 42(5): 1143-1161
The surveillance of multivariate processes has received growing attention during the last decade. Several generalizations of well-known methods such as Shewhart, CUSUM and EWMA charts have been proposed. Many of these multivariate procedures are based on a univariate summarized statistic of the multivariate observations, usually the likelihood ratio statistic. In this paper we consider the surveillance of multivariate observation processes for a shift between two fully specified alternatives. The effect of the dimension reduction using likelihood ratio statistics is discussed in the context of sufficiency properties. Also, an example of the loss of efficiency when not using the univariate sufficient statistic is given. Furthermore, a likelihood ratio method, the LR method, for constructing surveillance procedures is suggested for multivariate surveillance situations. It is shown to produce univariate surveillance procedures based on the sufficient likelihood ratios. As the LR procedure has several optimality properties in the univariate case, it is also used here as a benchmark for comparisons between multivariate surveillance procedures.
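A minimal sketch of the dimension reduction this abstract refers to: when the in-control and out-of-control distributions are fully specified multivariate normals with a common covariance, the log likelihood ratio of each observation is a scalar, so any univariate chart (CUSUM, EWMA) can monitor it. The names `mu0`, `mu1`, `Sigma` are illustrative, and this is not the paper's LR surveillance procedure itself.

```python
import numpy as np

def log_lr(x, mu0, mu1, Sigma):
    """Log likelihood ratio log f1(x)/f0(x) for one multivariate observation."""
    Sinv = np.linalg.inv(Sigma)
    delta = mu1 - mu0
    # Linear in x: the quadratic terms of the two normal densities cancel.
    return (x - mu0) @ Sinv @ delta - 0.5 * delta @ Sinv @ delta

rng = np.random.default_rng(1)
mu0, mu1 = np.zeros(3), np.array([0.5, 0.0, 0.3])
Sigma = np.eye(3)
xs = rng.multivariate_normal(mu0, Sigma, size=10)        # in-control stream
scores = np.array([log_lr(x, mu0, mu1, Sigma) for x in xs])
print(scores)            # univariate statistics a CUSUM/EWMA chart could monitor
```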
85.
Time series models are presented for which the seasonal-component estimates delivered by linear least squares signal extraction closely approximate those of the standard option of the widely-used Census X-11 program. Earlier work is extended by consideration of a broader class of models and by examination of asymmetric filters, in addition to the symmetric filter implicit in the adjustment of historical data. Various criteria that guide the specification of unobserved-components models are discussed, and a new preferred model is presented. Some nonstandard options in X-11 are considered in the Appendix.
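As a hedged, modern-software illustration of model-based signal extraction of a seasonal component (a local linear trend plus monthly seasonal specification is just a common default here, not the paper's preferred model), an unobserved-components model can be fitted and smoothed with statsmodels:

```python
# Sketch: fit a basic structural (unobserved-components) model to a toy
# monthly series and extract the smoothed (two-sided) seasonal component.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 144
t = np.arange(n)
y = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, n)

model = sm.tsa.UnobservedComponents(y, level="local linear trend", seasonal=12)
res = model.fit(disp=False)
seasonal_hat = res.seasonal.smoothed        # model-based seasonal estimate
adjusted = y - seasonal_hat                 # seasonally adjusted series
print(seasonal_hat[:12])                    # seasonal factors for the first year
```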
86.
We consider a new class of scale estimators with 50% breakdown point. The estimators are defined as order statistics of certain subranges. They all have a finite-sample breakdown point of [n/2]/n, which is the best possible value. (Here, [...] denotes the integer part.) One estimator in this class has the same influence function as the median absolute deviation and the least median of squares (LMS) scale estimator (i.e., the length of the shortest half), but its finite-sample efficiency is higher. If we consider the standard deviation of a subsample instead of its range, we obtain a different class of 50% breakdown estimators. This class contains the least trimmed squares (LTS) scale estimator. Simulation shows that the LTS scale estimator is nearly unbiased, so it does not need a small-sample correction factor.
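A minimal sketch of two of the 50% breakdown scale estimators discussed above: the length of the shortest half (the LMS scale) and an LTS-type scale based on the standard deviation of the best contiguous half of the sorted sample. The consistency and small-sample correction factors discussed in the abstract are deliberately omitted, and the helper names are invented for illustration.

```python
import numpy as np

def shortest_half_length(x):
    """LMS scale: length of the shortest interval containing h = [n/2] + 1 points."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    h = n // 2 + 1
    return np.min(x[h - 1:] - x[:n - h + 1])

def lts_scale(x):
    """LTS-type scale: smallest standard deviation over contiguous sorted halves."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    h = n // 2 + 1
    return min(np.std(x[i:i + h], ddof=1) for i in range(n - h + 1))

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(0, 1, 80), rng.normal(0, 20, 20)])  # 20% outliers
print("shortest half length:", shortest_half_length(data))
print("LTS-type scale:      ", lts_scale(data))
```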
87.
The authors develop consistent nonparametric estimation techniques for the directional mixing density. Classical spherical harmonics are used to adapt Euclidean techniques to this directional environment. Minimax rates of convergence are obtained for rotationally invariant densities verifying various smoothness conditions. It is found that the differences in smoothness between the Laplace, the Gaussian and the von Mises-Fisher distributions lead to contrasting inferential conclusions.
88.
Francesco Audrino, Peter Bühlmann, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2009, 71(3): 655-670
Summary. We propose a flexible generalized auto-regressive conditional heteroscedasticity type of model for the prediction of volatility in financial time series. The approach relies on the idea of using multivariate B-splines of lagged observations and volatilities. Estimation of such a B-spline basis expansion is constructed within the likelihood framework for non-Gaussian observations. As the dimension of the B-spline basis is large, i.e. many parameters, we use regularized and sparse model fitting with a boosting algorithm. Our method is computationally attractive and feasible for large dimensions. We demonstrate its strong predictive potential for financial volatility on simulated and real data, and also in comparison with other approaches, and we present some supporting asymptotic arguments.
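As a small, hedged sketch of the building block described in this abstract, the code below evaluates a cubic B-spline basis at lagged observations, i.e. the design matrix that a regularized or boosted volatility fit would use. The Cox-de Boor recursion, knot placement and toy series are illustrative choices; the boosting estimator itself is not reproduced.

```python
import numpy as np

def bspline_basis(x, knots, degree=3):
    """Design matrix of clamped B-spline basis functions at x (Cox-de Boor recursion)."""
    x = np.asarray(x, dtype=float)
    t = np.r_[[knots[0]] * degree, knots, [knots[-1]] * degree]   # clamped knot vector
    # Degree-0 indicator functions (knots are chosen to strictly cover the data,
    # so the half-open intervals cause no trouble at the right end).
    B = np.array([(t[j] <= x) & (x < t[j + 1]) for j in range(len(t) - 1)],
                 dtype=float).T
    for k in range(1, degree + 1):
        B_next = np.zeros((len(x), len(t) - k - 1))
        for j in range(len(t) - k - 1):
            d1, d2 = t[j + k] - t[j], t[j + k + 1] - t[j + 1]
            left = (x - t[j]) / d1 * B[:, j] if d1 > 0 else 0.0
            right = (t[j + k + 1] - x) / d2 * B[:, j + 1] if d2 > 0 else 0.0
            B_next[:, j] = left + right
        B = B_next
    return B

rng = np.random.default_rng(3)
returns = 0.01 * rng.standard_normal(500)              # toy return series
lag1 = returns[:-1]                                     # lagged observations r_{t-1}
knots = np.linspace(lag1.min() - 1e-6, lag1.max() + 1e-6, 8)
X = bspline_basis(lag1, knots)                          # rows align with returns[1:]
print(X.shape)                                          # one column per basis function
```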
89.
Peter Hall, Tapabrata Maiti, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2009, 71(3): 703-718
Summary. We develop a general non-parametric approach to the analysis of clustered data via random effects. Assuming only that the link function is known, the regression functions and the distributions of both cluster means and observation errors are treated non-parametrically. Our argument proceeds by viewing the observation error at the cluster mean level as though it were a measurement error in an errors-in-variables problem, and using a deconvolution argument to access the distribution of the cluster mean. A Fourier deconvolution approach could be used if the distribution of the error-in-variables were known. In practice it is unknown, of course, but it can be estimated from repeated measurements, and in this way deconvolution can be achieved in an approximate sense. This argument might be interpreted as implying that large numbers of replicates are necessary for each cluster mean distribution, but that is not so; we avoid this requirement by incorporating statistical smoothing over values of nearby explanatory variables. Empirical rules are developed for the choice of smoothing parameter. Numerical simulations, and an application to real data, demonstrate small sample performance for this package of methodology. We also develop theory establishing statistical consistency.
90.
A Bayesian discovery procedure (cited 1 time: 0 self-citations, 1 citation by others)
Michele Guindani, Peter Müller, Song Zhang, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2009, 71(5): 905-925
Summary. We discuss a Bayesian discovery procedure for multiple-comparison problems. We show that, under a coherent decision theoretic framework, a loss function combining true positive and false positive counts leads to a decision rule that is based on a threshold of the posterior probability of the alternative. Under a semiparametric model for the data, we show that the Bayes rule can be approximated by the optimal discovery procedure, which was recently introduced by Storey. Improving the approximation leads us to a Bayesian discovery procedure, which exploits the multiple shrinkage in clusters that are implied by the assumed non-parametric model. We compare the Bayesian discovery procedure and the optimal discovery procedure estimates in a simple simulation study and in an assessment of differential gene expression based on microarray data from tumour samples. We extend the setting of the optimal discovery procedure by discussing modifications of the loss function that lead to different single-thresholding statistics. Finally, we provide an application of the previous arguments to dependent (spatial) data.
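As a toy illustration of the thresholding rule this abstract describes (flag a comparison when the posterior probability of the alternative is large enough), the sketch below picks the threshold so that the posterior expected false discovery rate stays below a target. The posterior probabilities are simulated placeholders; this is neither the paper's semiparametric model nor Storey's optimal discovery procedure.

```python
import numpy as np

def bayes_discoveries(v, alpha=0.10):
    """Indices flagged by thresholding posterior alternative probabilities v_i."""
    order = np.argsort(-v)                      # most likely alternatives first
    v_sorted = v[order]
    # Posterior expected FDR if the top k hypotheses are rejected
    efdr = np.cumsum(1.0 - v_sorted) / np.arange(1, len(v) + 1)
    k = np.max(np.where(efdr <= alpha)[0], initial=-1) + 1
    return order[:k]

rng = np.random.default_rng(4)
v = np.concatenate([rng.uniform(0.8, 1.0, 50),     # 50 likely alternatives
                    rng.uniform(0.0, 0.4, 450)])   # 450 likely nulls
flagged = bayes_discoveries(v, alpha=0.10)
print(len(flagged), "discoveries at posterior expected FDR <= 0.10")
```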