Full-text access type
Paid full text | 4053 articles |
Free | 88 articles |
Subject classification
Management | 599 articles |
Labour Science | 1 article |
Ethnology | 26 articles |
Talent Studies | 1 article |
Demography | 335 articles |
Collected Works / Book Series | 93 articles |
Theory and Methodology | 657 articles |
General | 183 articles |
Sociology | 1636 articles |
Statistics | 610 articles |
Publication year
2023 | 19 articles |
2021 | 23 articles |
2020 | 51 articles |
2019 | 91 articles |
2018 | 86 articles |
2017 | 124 articles |
2016 | 124 articles |
2015 | 92 articles |
2014 | 109 articles |
2013 | 543 articles |
2012 | 159 articles |
2011 | 179 articles |
2010 | 144 articles |
2009 | 150 articles |
2008 | 176 articles |
2007 | 152 articles |
2006 | 128 articles |
2005 | 142 articles |
2004 | 153 articles |
2003 | 121 articles |
2002 | 123 articles |
2001 | 97 articles |
2000 | 91 articles |
1999 | 76 articles |
1998 | 60 articles |
1997 | 74 articles |
1996 | 60 articles |
1995 | 49 articles |
1994 | 54 articles |
1993 | 51 articles |
1992 | 35 articles |
1991 | 37 articles |
1990 | 45 articles |
1989 | 39 articles |
1988 | 35 articles |
1987 | 36 articles |
1986 | 26 articles |
1985 | 35 articles |
1984 | 40 articles |
1983 | 24 articles |
1982 | 37 articles |
1981 | 31 articles |
1980 | 28 articles |
1979 | 31 articles |
1978 | 17 articles |
1977 | 13 articles |
1976 | 33 articles |
1975 | 16 articles |
1974 | 18 articles |
1973 | 16 articles |
4141 query results in total (search time: 15 ms)
91.
Peter Wessman, Communications in Statistics - Theory and Methods, 2013, 42(5): 1143-1161
The surveillance of multivariate processes has received growing attention during the last decade. Several generalizations of well-known methods such as Shewhart, CUSUM and EWMA charts have been proposed. Many of these multivariate procedures are based on a univariate summary statistic of the multivariate observations, usually the likelihood ratio statistic. In this paper we consider the surveillance of multivariate observation processes for a shift between two fully specified alternatives. The effect of the dimension reduction using likelihood ratio statistics is discussed in the context of sufficiency properties, and an example of the loss of efficiency when the univariate sufficient statistic is not used is given. Furthermore, a likelihood ratio method, the LR method, for constructing surveillance procedures is suggested for multivariate surveillance situations. It is shown to produce univariate surveillance procedures based on the sufficient likelihood ratios. As the LR procedure has several optimality properties in the univariate case, it is also used here as a benchmark for comparisons between multivariate surveillance procedures.
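The dimension-reduction step described in this abstract can be illustrated with a minimal sketch: reduce each multivariate observation to the log-likelihood ratio between the two fully specified alternatives and monitor that univariate sequence with a CUSUM. The Gaussian setup, the parameter values and the alarm threshold below are illustrative assumptions, not the authors' LR method.

```python
import numpy as np

def loglik_ratio(x, mu0, mu1, sigma):
    """Log-likelihood ratio of one multivariate Gaussian observation,
    shifted mean mu1 versus in-control mean mu0, common covariance sigma."""
    sigma_inv = np.linalg.inv(sigma)
    return (mu1 - mu0) @ sigma_inv @ (x - 0.5 * (mu0 + mu1))

def cusum_on_lr(X, mu0, mu1, sigma, threshold):
    """One-sided CUSUM on the univariate log-LR statistics.
    Returns the first alarm time (0-based) or None."""
    s = 0.0
    for t, x in enumerate(X):
        s = max(0.0, s + loglik_ratio(x, mu0, mu1, sigma))
        if s > threshold:
            return t
    return None

# Toy example: 3-dimensional process with a mean shift at t = 30 (illustrative values).
rng = np.random.default_rng(0)
mu0, mu1 = np.zeros(3), np.array([0.8, 0.5, 0.0])
sigma = np.eye(3)
X = np.vstack([rng.multivariate_normal(mu0, sigma, 30),
               rng.multivariate_normal(mu1, sigma, 30)])
print("alarm at t =", cusum_on_lr(X, mu0, mu1, sigma, threshold=5.0))
```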
92.
Time series models are presented for which the seasonal-component estimates delivered by linear least squares signal extraction closely approximate those of the standard option of the widely used Census X-11 program. Earlier work is extended by consideration of a broader class of models and by examination of asymmetric filters, in addition to the symmetric filter implicit in the adjustment of historical data. Various criteria that guide the specification of unobserved-components models are discussed, and a new preferred model is presented. Some nonstandard options in X-11 are considered in the Appendix.
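As a rough illustration of the linear-filter view of seasonal adjustment discussed above, the sketch below extracts an additive seasonal component with a 2x12 trend moving average followed by a 3x3 moving average applied month by month. It is only a crude stand-in for X-11's standard option; the filter choices and the global re-centring are simplifying assumptions.

```python
import numpy as np

def centered_ma(y, weights):
    """Symmetric moving average; endpoints where the symmetric filter is not
    available are left as NaN (this is where asymmetric filters take over)."""
    k = len(weights) // 2
    out = np.full(len(y), np.nan)
    for t in range(k, len(y) - k):
        out[t] = np.dot(weights, y[t - k:t + k + 1])
    return out

def seasonal_estimate(y, period=12):
    """Additive seasonal component via a 2x12 trend MA and a 3x3 seasonal MA --
    a simplified sketch of the linear filters underlying X-11's standard option."""
    y = np.asarray(y, dtype=float)
    w_2x12 = np.r_[1.0, np.full(period - 1, 2.0), 1.0] / (2.0 * period)
    detrended = y - centered_ma(y, w_2x12)
    w_3x3 = np.array([1.0, 2.0, 3.0, 2.0, 1.0]) / 9.0
    seasonal = np.full(len(y), np.nan)
    for m in range(period):
        idx = np.arange(m, len(y), period)
        seasonal[idx] = centered_ma(detrended[idx], w_3x3)
    return seasonal - np.nanmean(seasonal)   # crude global re-centring

# Toy usage: ten years of monthly data with a linear trend and an annual cycle.
t = np.arange(120)
y = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 12) + np.random.default_rng(1).normal(0, 0.3, 120)
s_hat = seasonal_estimate(y)
```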
93.
We consider a new class of scale estimators with 50% breakdown point. The estimators are defined as order statistics of certain subranges. They all have a finite-sample breakdown point of [n/2]/n, which is the best possible value. (Here, [...] denotes the integer part.) One estimator in this class has the same influence function as the median absolute deviation and the least median of squares (LMS) scale estimator (i.e., the length of the shortest half), but its finite-sample efficiency is higher. If we consider the standard deviation of a subsample instead of its range, we obtain a different class of 50% breakdown estimators. This class contains the least trimmed squares (LTS) scale estimator. Simulation shows that the LTS scale estimator is nearly unbiased, so it does not need a small-sample correction factor. Surprisingly, the efficiency of the LTS scale estimator is less than that of the LMS scale estimator.
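The two families of estimators described above are easy to state concretely for a location sample. The sketch below computes the length of the shortest half (the LMS scale) and a contiguous-half LTS scale; the consistency factors that would calibrate them to the Gaussian standard deviation are deliberately omitted, so this illustrates only the definitions.

```python
import numpy as np

def shortest_half_scale(x):
    """LMS scale: the length of the shortest half of the sample
    (same influence function as the MAD, 50% breakdown point)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    h = n // 2 + 1
    return np.min(x[h - 1:] - x[:n - h + 1])   # minimum over all contiguous halves

def lts_scale(x):
    """LTS scale: root mean square of the h smallest squared deviations,
    minimised over contiguous half-samples of the sorted data."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    h = n // 2 + 1
    best = np.inf
    for i in range(n - h + 1):
        window = x[i:i + h]
        best = min(best, np.sum((window - window.mean()) ** 2))
    return np.sqrt(best / h)
```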
94.
The authors develop consistent nonparametric estimation techniques for the directional mixing density. Classical spherical harmonics are used to adapt Euclidean techniques to this directional environment. Minimax rates of convergence are obtained for rotationally invariant densities verifying various smoothness conditions. It is found that the differences in smoothness between the Laplace, the Gaussian and the von Mises-Fisher distributions lead to contrasting inferential conclusions.
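The spherical-harmonics machinery itself is beyond a short sketch, but the orthogonal-series idea behind it can be shown in the one-dimensional (circular) analogue below, where empirical Fourier coefficients play the role of the spherical-harmonic coefficients. This is an illustrative analogue under simplified assumptions, not the authors' estimator of the directional mixing density.

```python
import numpy as np

def circular_series_density(theta_obs, K=10):
    """Fourier (orthogonal-series) density estimate on the circle: a 1-D
    analogue of a spherical-harmonic series estimator. Returns f_hat(theta)."""
    theta_obs = np.asarray(theta_obs, dtype=float)
    ks = np.arange(1, K + 1)
    a = np.array([np.mean(np.cos(k * theta_obs)) for k in ks])  # empirical cosine coefficients
    b = np.array([np.mean(np.sin(k * theta_obs)) for k in ks])  # empirical sine coefficients

    def f_hat(theta):
        theta = np.atleast_1d(np.asarray(theta, dtype=float))
        series = (a[:, None] * np.cos(ks[:, None] * theta)
                  + b[:, None] * np.sin(ks[:, None] * theta)).sum(axis=0)
        return 1.0 / (2.0 * np.pi) + series / np.pi  # may dip below zero for small samples

    return f_hat

# Toy usage: directions concentrated around 0, wrapped onto the circle.
rng = np.random.default_rng(4)
sample = rng.normal(0.0, 0.7, 500) % (2 * np.pi)
print(circular_series_density(sample, K=8)(np.array([0.0, np.pi])))
```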
95.
Francesco Audrino, Peter Bühlmann, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2009, 71(3): 655-670
Summary. We propose a flexible generalized auto-regressive conditional heteroscedasticity type of model for the prediction of volatility in financial time series. The approach relies on the idea of using multivariate B-splines of lagged observations and volatilities. Estimation of such a B-spline basis expansion is constructed within the likelihood framework for non-Gaussian observations. As the dimension of the B-spline basis is large, i.e., there are many parameters, we use regularized and sparse model fitting with a boosting algorithm. Our method is computationally attractive and feasible for large dimensions. We demonstrate its strong predictive potential for financial volatility on simulated and real data, and also in comparison with other approaches, and we present some supporting asymptotic arguments.
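The basis-expansion idea can be made concrete with a small sketch that evaluates a volatility recursion of the form sigma2_t = sum_j beta_j * B_j(r_{t-1}^2) + gamma * sigma2_{t-1}. For simplicity it uses univariate degree-one B-splines (hat functions) and takes the coefficients as given; the paper's multivariate B-splines and the boosted likelihood fitting are not reproduced here, so everything below is an illustrative assumption.

```python
import numpy as np

def hat_basis(u, knots):
    """Degree-one B-splines ('hat' functions) centred at the given knots --
    a simple univariate stand-in for the multivariate B-spline bases in the paper."""
    u = np.atleast_1d(np.asarray(u, dtype=float))
    grid = np.asarray(knots, dtype=float)        # assumes at least two knots
    B = np.zeros((len(u), len(grid)))
    for j, c in enumerate(grid):
        left = grid[j - 1] if j > 0 else c - (grid[1] - grid[0])
        right = grid[j + 1] if j < len(grid) - 1 else c + (grid[-1] - grid[-2])
        B[:, j] = np.clip(np.minimum((u - left) / (c - left), (right - u) / (right - c)), 0.0, None)
    return B

def volatility_path(returns, beta, gamma, knots, sigma2_init):
    """Recursion sigma2_t = basis(r_{t-1}^2) @ beta + gamma * sigma2_{t-1}.
    The coefficients beta and gamma are assumed to come from a (boosted)
    likelihood fit, which is not shown here."""
    returns = np.asarray(returns, dtype=float)
    sigma2 = np.empty(len(returns))
    sigma2[0] = sigma2_init
    for t in range(1, len(returns)):
        b = hat_basis(returns[t - 1] ** 2, knots)[0]
        sigma2[t] = max(1e-12, float(b @ beta) + gamma * sigma2[t - 1])
    return sigma2
```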
96.
Peter Hall, Tapabrata Maiti, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2009, 71(3): 703-718
Summary. We develop a general non-parametric approach to the analysis of clustered data via random effects. Assuming only that the link function is known, the regression functions and the distributions of both cluster means and observation errors are treated non-parametrically. Our argument proceeds by viewing the observation error at the cluster mean level as though it were a measurement error in an errors-in-variables problem, and using a deconvolution argument to access the distribution of the cluster mean. A Fourier deconvolution approach could be used if the distribution of the error-in-variables were known. In practice it is unknown, of course, but it can be estimated from repeated measurements, and in this way deconvolution can be achieved in an approximate sense. This argument might be interpreted as implying that large numbers of replicates are necessary for each cluster mean distribution, but that is not so; we avoid this requirement by incorporating statistical smoothing over values of nearby explanatory variables. Empirical rules are developed for the choice of smoothing parameter. Numerical simulations, and an application to real data, demonstrate small-sample performance for this package of methodology. We also develop theory establishing statistical consistency.
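The Fourier-deconvolution step that this argument builds on can be sketched in its textbook form, assuming a known Gaussian measurement-error distribution. The paper instead estimates the error distribution from replicate measurements and adds smoothing over nearby covariate values, neither of which is shown here; the bandwidth, kernel and error model below are assumptions of the sketch.

```python
import numpy as np

def deconvolution_density(w, x_grid, h, error_sd):
    """Deconvolution kernel density estimate of the latent X from noisy
    observations W = X + eps, with eps ~ N(0, error_sd^2) assumed known.
    The damping kernel has Fourier transform (1 - (th)^2)^3 on |th| <= 1."""
    w = np.asarray(w, dtype=float)
    t = np.linspace(-1.0 / h, 1.0 / h, 2001)               # frequency grid
    dt = t[1] - t[0]
    phi_K = (1.0 - (t * h) ** 2) ** 3                       # kernel's Fourier transform
    phi_W = np.exp(1j * np.outer(t, w)).mean(axis=1)        # empirical characteristic function of W
    phi_eps = np.exp(-0.5 * (error_sd * t) ** 2)            # characteristic function of the error
    phi_X_hat = phi_K * phi_W / phi_eps                     # damped estimate of the cf of X
    f_hat = np.array([(np.exp(-1j * t * x) * phi_X_hat).sum().real * dt
                      for x in x_grid]) / (2.0 * np.pi)     # Fourier inversion
    return np.clip(f_hat, 0.0, None)                        # crude fix for small negative values

# Toy usage: latent X ~ N(1, 1) contaminated by N(0, 0.5^2) measurement error.
rng = np.random.default_rng(5)
w = rng.normal(1.0, 1.0, 400) + rng.normal(0.0, 0.5, 400)
f = deconvolution_density(w, np.linspace(-3, 5, 81), h=0.4, error_sd=0.5)
```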
97.
A Bayesian discovery procedure
Michele Guindani, Peter Müller, Song Zhang, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2009, 71(5): 905-925
Summary. We discuss a Bayesian discovery procedure for multiple-comparison problems. We show that, under a coherent decision theoretic framework, a loss function combining true positive and false positive counts leads to a decision rule that is based on a threshold of the posterior probability of the alternative. Under a semiparametric model for the data, we show that the Bayes rule can be approximated by the optimal discovery procedure, which was recently introduced by Storey. Improving the approximation leads us to a Bayesian discovery procedure, which exploits the multiple shrinkage in clusters that are implied by the assumed non-parametric model. We compare the Bayesian discovery procedure and the optimal discovery procedure estimates in a simple simulation study and in an assessment of differential gene expression based on microarray data from tumour samples. We extend the setting of the optimal discovery procedure by discussing modifications of the loss function that lead to different single-thresholding statistics. Finally, we provide an application of the previous arguments to dependent (spatial) data.
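The thresholding logic can be illustrated under a simple two-group Gaussian mixture with known parameters (a stand-in for the paper's semiparametric model). With loss fp_weight * FP - TP, flagging unit i has posterior expected loss fp_weight * (1 - v_i) - v_i, so the Bayes rule flags exactly the units whose posterior non-null probability v_i exceeds fp_weight / (1 + fp_weight). The mixture parameters and loss weight below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def posterior_nonnull(z, pi1, mu1, sd1=1.0):
    """Posterior probability that each z-score comes from the alternative,
    under the mixture z ~ (1 - pi1) N(0, 1) + pi1 N(mu1, sd1^2)."""
    f0 = norm.pdf(z, 0.0, 1.0)
    f1 = norm.pdf(z, mu1, sd1)
    return pi1 * f1 / (pi1 * f1 + (1 - pi1) * f0)

def bayes_discoveries(z, pi1, mu1, fp_weight=9.0):
    """Decision rule minimising E[fp_weight * FP - TP]: flag unit i exactly when
    its posterior non-null probability exceeds fp_weight / (1 + fp_weight)."""
    v = posterior_nonnull(z, pi1, mu1)
    return v > fp_weight / (1.0 + fp_weight)

# Toy example: 1000 tests, 10% non-null with a mean shift of 3.
rng = np.random.default_rng(2)
truth = rng.random(1000) < 0.1
z = rng.normal(np.where(truth, 3.0, 0.0), 1.0)
flags = bayes_discoveries(z, pi1=0.1, mu1=3.0)
print("flagged:", flags.sum(), "true positives:", (flags & truth).sum())
```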
98.
99.
The responses obtained from response surface designs that are run sequentially often exhibit serial correlation or time trends. The order in which the runs of the design are performed then has an impact on the precision of the parameter estimators. This article proposes the use of a variable-neighbourhood search algorithm to compute run orders that guarantee a precise estimation of the effects of the experimental factors. The importance of using good run orders is demonstrated by seeking D-optimal run orders for a central composite design in the presence of an AR(1) autocorrelation pattern.
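The objective here is transparent: with AR(1) errors the information matrix X'V^{-1}X depends on the order of the rows of the model matrix X, so one searches over permutations for the largest determinant. The sketch below uses a plain greedy pairwise-interchange search as a stand-in for the paper's variable-neighbourhood search; the model matrix X, the correlation rho and the number of passes are user-supplied assumptions.

```python
import numpy as np
from itertools import combinations

def ar1_covariance(n, rho):
    """Covariance matrix of an AR(1) error process, Cov(e_i, e_j) = rho**|i - j|."""
    idx = np.arange(n)
    return rho ** np.abs(np.subtract.outer(idx, idx))

def log_d_criterion(X, rho):
    """log det of the information matrix X' V^{-1} X for the current run order."""
    V = ar1_covariance(len(X), rho)
    M = X.T @ np.linalg.solve(V, X)
    return np.linalg.slogdet(M)[1]

def improve_run_order(X, rho, passes=20):
    """Greedy pairwise-interchange search for a D-optimal run order
    (a simple stand-in for a variable-neighbourhood search)."""
    order = np.arange(len(X))
    best = log_d_criterion(X[order], rho)
    for _ in range(passes):
        improved = False
        for i, j in combinations(range(len(X)), 2):
            trial = order.copy()
            trial[i], trial[j] = trial[j], trial[i]
            value = log_d_criterion(X[trial], rho)
            if value > best + 1e-10:
                order, best, improved = trial, value, True
        if not improved:
            break
    return order, best
```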
100.
In some statistical problems a degree of explicit prior information is available about the value taken by the parameter of interest, θ say, although the information is much less than would be needed to place a prior density on the parameter's distribution. Often the prior information takes the form of a simple bound, 'θ > θ1' or 'θ < θ1', where θ1 is determined by physical considerations or mathematical theory, such as positivity of a variance. A conventional approach to accommodating the requirement that θ > θ1 is to replace an estimator, θ̂, of θ by the maximum of θ̂ and θ1. However, this technique is generally inadequate. For one thing, it does not respect the strictness of the inequality θ > θ1, which can be critical in interpreting results. For another, it produces an estimator that does not respond in a natural way to perturbations of the data. In this paper we suggest an alternative approach, in which bootstrap aggregation, or bagging, is used to overcome these difficulties. Bagging gives estimators that, when subjected to the constraint θ > θ1, strictly exceed θ1 except in extreme settings in which the empirical evidence strongly contradicts the constraint. Bagging also reduces estimator variability in the important case for which θ̂ is close to θ1, and more generally produces estimators that respect the constraint in a smooth, realistic fashion.
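One simple form of the bagging idea can be sketched for a sample mean constrained to exceed a known bound: average the constrained estimate over bootstrap resamples. The choice of the sample mean as the base estimator and of the bound value are illustrative assumptions; the paper treats general estimators and constraints.

```python
import numpy as np

def bagged_constrained_mean(x, theta1, n_boot=500, rng=None):
    """Bagged version of the constrained estimator max(mean(x), theta1):
    average the constrained estimate over bootstrap resamples. The result
    exceeds theta1 strictly unless every resample estimate falls at or below
    theta1, and it responds smoothly to small perturbations of the data."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    estimates = [max(np.mean(rng.choice(x, size=len(x), replace=True)), theta1)
                 for _ in range(n_boot)]
    return float(np.mean(estimates))

# Toy example with the constraint theta > 0 (e.g., positivity of a variance component).
sample = np.random.default_rng(3).normal(0.1, 1.0, 40)
print(bagged_constrained_mean(sample, theta1=0.0))
```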