71.
The admissibility of testing procedures is examined when the loss function used is an increasing function of the p-value rather than the standard 0–1 loss. It is shown that the class of admissible procedures using the new approach is a subset of the class of admissible procedures using the 0–1 loss.
72.
In the bivariate normal case with n = 2, when testing H0: μx = μy = 0, σ²x = σ²y = 1, ρ = 0 versus H1: μx = μy = 0, σ²x = σ²y = 1, 0 < ρ < 1, it is shown that the median p-values given by the locally most powerful test and the distantly most powerful test are both beaten everywhere by the median p-value of a third test.
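As a rough illustration of the quantity being compared, the following sketch simulates the median p-value of a locally most powerful (LMP) test as the true correlation varies. It assumes the standard score-test form T = x1·y1 + x2·y2 for the LMP statistic under known zero means and unit variances; that form, and every setting below, is an assumption for illustration rather than the paper's construction.

```python
# Hedged Monte Carlo sketch: median p-value of the assumed LMP test of
# rho = 0 from n = 2 bivariate normal pairs with known means and variances.
import numpy as np

rng = np.random.default_rng(0)

# Null reference distribution of T = x1*y1 + x2*y2, estimated once by
# simulation under independence (rho = 0).
B = 200_000
x0 = rng.standard_normal((B, 2))
y0 = rng.standard_normal((B, 2))
t_null = np.sort(np.sum(x0 * y0, axis=1))

def median_p(rho, n_rep=5_000):
    """Median p-value of the LMP test when the true correlation is rho."""
    z = rng.standard_normal((n_rep, 2))
    e = rng.standard_normal((n_rep, 2))
    x = z
    y = rho * z + np.sqrt(1.0 - rho**2) * e   # Corr(x, y) = rho, unit variances
    t = np.sum(x * y, axis=1)
    # Upper-tail Monte Carlo p-values against the null reference sample.
    p = 1.0 - np.searchsorted(t_null, t, side="left") / B
    return np.median(p)

for rho in (0.2, 0.5, 0.8):
    print(f"rho = {rho}: median p-value of the LMP test ~ {median_p(rho):.3f}")
```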
73.
The surveillance of multivariate processes has received growing attention during the last decade. Several generalizations of well-known methods such as Shewhart, CUSUM and EWMA charts have been proposed. Many of these multivariate procedures are based on a univariate summary statistic of the multivariate observations, usually the likelihood ratio statistic. In this paper we consider the surveillance of multivariate observation processes for a shift between two fully specified alternatives. The effect of the dimension reduction achieved by likelihood ratio statistics is discussed in the context of sufficiency properties, and an example of the loss of efficiency incurred by not using the univariate sufficient statistic is given. Furthermore, a likelihood ratio method, the LR method, for constructing surveillance procedures is suggested for multivariate surveillance situations. It is shown to produce univariate surveillance procedures based on the sufficient likelihood ratios. As the LR procedure has several optimality properties in the univariate case, it is also used here as a benchmark for comparisons between multivariate surveillance procedures.
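The dimension reduction described here can be illustrated with a short sketch: each multivariate observation is collapsed to the log-likelihood ratio between the two fully specified states, and the resulting univariate sequence is monitored with an ordinary CUSUM. This is a generic illustration of LR-based reduction, not the paper's LR method; the dimension, means, covariance and threshold are all assumed for the example.

```python
# Hedged sketch: reduce each multivariate observation to a univariate
# log-likelihood ratio between two fully specified MVN states, then run a
# one-sided CUSUM on that sequence.  All settings are illustrative.
import numpy as np

mu0 = np.zeros(3)                        # in-control mean (fully specified)
mu1 = np.array([1.0, 0.5, -0.5])         # out-of-control mean (fully specified)
sigma = np.eye(3)                        # common, known covariance
sigma_inv = np.linalg.inv(sigma)

def log_lr(x):
    """Log-likelihood ratio log f1(x)/f0(x) for the two MVN densities."""
    d1 = x - mu1
    d0 = x - mu0
    return 0.5 * (d0 @ sigma_inv @ d0 - d1 @ sigma_inv @ d1)

def cusum_alarm(xs, threshold=5.0):
    """One-sided CUSUM on the univariate log-LR; returns alarm time or None."""
    s = 0.0
    for t, x in enumerate(xs, start=1):
        s = max(0.0, s + log_lr(x))
        if s > threshold:
            return t
    return None

rng = np.random.default_rng(1)
pre = rng.multivariate_normal(mu0, sigma, size=30)    # in control
post = rng.multivariate_normal(mu1, sigma, size=30)   # after the shift
print("alarm at observation:", cusum_alarm(np.vstack([pre, post])))
```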
74.
Time series models are presented, for which the seasonal-component estimates delivered by linear least squares signal extraction closely approximate those of the standard option of the widely-used Census X-11 program. Earlier work is extended by consideration of a broader class of models and by examination of asymmetric filters, in addition to the symmetric filter implicit in the adjustment of historical data. Various criteria that guide the specification of unobserved-components models are discussed, and a new preferred model is presented. Some nonstandard options in X-11 are considered in the Appendix.
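A heavily simplified sketch of the kind of symmetric linear filtering involved: a centered 2×12 moving average estimates the trend of a monthly series, and a 3×3 seasonal moving average smooths the detrended values month by month. This is an additive toy version of two classical X-11 ingredients, not the Census program or the paper's models; all weights and settings below are illustrative.

```python
# Hedged, simplified illustration of symmetric X-11-style linear filters
# on an additive monthly series.  Not the Census X-11 program itself.
import numpy as np

def centered_ma_2x12(x):
    """2x12 trend filter: 13 weights (1/24, 1/12, ..., 1/12, 1/24)."""
    w = np.full(13, 1 / 12.0)
    w[0] = w[-1] = 1 / 24.0
    return np.convolve(x, w, mode="same")  # series ends are distorted (zero padding);
                                           # asymmetric end filters address this

def seasonal_3x3(detrended, period=12):
    """3x3 MA (weights 1/9, 2/9, 3/9, 2/9, 1/9) applied month by month."""
    w = np.array([1, 2, 3, 2, 1]) / 9.0
    out = np.empty_like(detrended)
    for m in range(period):
        out[m::period] = np.convolve(detrended[m::period], w, mode="same")
    return out

rng = np.random.default_rng(2)
n = 144
t = np.arange(n)
series = 0.05 * t + np.sin(2 * np.pi * t / 12) + 0.3 * rng.standard_normal(n)

trend = centered_ma_2x12(series)
seasonal = seasonal_3x3(series - trend)
adjusted = series - seasonal
print("estimated seasonal amplitude:", seasonal.std().round(3))
```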
75.
We consider a new class of scale estimators with 50% breakdown point. The estimators are defined as order statistics of certain subranges. They all have a finite-sample breakdown point of [n/2]/n, which is the best possible value. (Here, [...] denotes the integer part.) One estimator in this class has the same influence function as the median absolute deviation and the least median of squares (LMS) scale estimator (i.e., the length of the shortest half), but its finite-sample efficiency is higher. If we consider the standard deviation of a subsample instead of its range, we obtain a different class of 50% breakdown estimators. This class contains the least trimmed squares (LTS) scale estimator. Simulation shows that the LTS scale estimator is nearly unbiased, so it does not need a small-sample correction factor. Surprisingly, the efficiency of the LTS scale estimator is less than that of the LMS scale estimator.
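Both classes of estimators are easy to compute from the sorted sample, since every 'half' is a contiguous window of h = [n/2] + 1 order statistics (the usual convention). The sketch below computes the shortest-half (LMS) scale and an LTS-type scale defined through subsample standard deviations; the normal consistency factor 0.7413 for the shortest half is assumed, and the LTS variant is left uncalibrated here.

```python
# Hedged, illustrative implementations of two 50% breakdown scale
# estimators: the shortest-half length (LMS scale) and the smallest
# subsample standard deviation over contiguous halves (LTS-type scale).
import numpy as np

def lms_scale(x):
    """Shortest-half length, scaled (factor 0.7413 assumed) for normal consistency."""
    xs = np.sort(np.asarray(x, dtype=float))
    n = xs.size
    h = n // 2 + 1                            # size of a 'half'
    lengths = xs[h - 1:] - xs[: n - h + 1]    # all contiguous subranges of h points
    return 0.7413 * lengths.min()

def lts_scale(x):
    """Smallest subsample standard deviation over contiguous halves (uncalibrated)."""
    xs = np.sort(np.asarray(x, dtype=float))
    n = xs.size
    h = n // 2 + 1
    sds = np.array([xs[i:i + h].std(ddof=1) for i in range(n - h + 1)])
    return sds.min()

rng = np.random.default_rng(3)
clean = rng.standard_normal(100)
contaminated = np.concatenate([clean, 50 + rng.standard_normal(40)])  # <50% outliers
print("LMS scale:", lms_scale(contaminated).round(3))
print("LTS-type scale (uncalibrated):", lts_scale(contaminated).round(3))
```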
76.
The authors develop consistent nonparametric estimation techniques for the directional mixing density. Classical spherical harmonics are used to adapt Euclidean techniques to this directional environment. Minimax rates of convergence are obtained for rotationally invariant densities satisfying various smoothness conditions. It is found that the differences in smoothness between the Laplace, the Gaussian and the von Mises-Fisher distributions lead to contrasting inferential conclusions.
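In the rotationally invariant case the spherical-harmonic machinery collapses to a Legendre series in t = cos(angle from the symmetry axis), which suggests the following toy orthogonal-series density estimate on [-1, 1]. This is a hedged sketch of the basic mechanism, not the authors' estimator; the sample size, the truncation level K and the von Mises-Fisher concentration kappa = 4 are illustrative assumptions.

```python
# Hedged sketch: Legendre orthogonal-series density estimation on [-1, 1]
# for a rotationally invariant spherical sample, using t = cos(theta).
import numpy as np
from scipy.special import eval_legendre

rng = np.random.default_rng(4)

# Draw t = cos(theta) under a von Mises-Fisher law on the sphere, whose
# marginal for t is proportional to exp(kappa * t); inverse-CDF sampling.
kappa = 4.0
u = rng.uniform(size=2000)
t_sample = np.log(np.exp(-kappa) + u * (np.exp(kappa) - np.exp(-kappa))) / kappa

def legendre_series_density(t_grid, data, K=8):
    """Estimate g(t) = sum_k (2k+1)/2 * E[P_k(T)] * P_k(t), truncated at K."""
    est = np.zeros_like(t_grid)
    for k in range(K + 1):
        a_k = eval_legendre(k, data).mean()   # unbiased for E[P_k(T)]
        est += (2 * k + 1) / 2.0 * a_k * eval_legendre(k, t_grid)
    return est

grid = np.linspace(-1, 1, 9)
true = kappa * np.exp(kappa * grid) / (np.exp(kappa) - np.exp(-kappa))
print(np.column_stack([grid, legendre_series_density(grid, t_sample), true]).round(3))
```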
77.
A Bayesian discovery procedure
Summary.  We discuss a Bayesian discovery procedure for multiple-comparison problems. We show that, under a coherent decision theoretic framework, a loss function combining true positive and false positive counts leads to a decision rule that is based on a threshold of the posterior probability of the alternative. Under a semiparametric model for the data, we show that the Bayes rule can be approximated by the optimal discovery procedure, which was recently introduced by Storey. Improving the approximation leads us to a Bayesian discovery procedure, which exploits the multiple shrinkage in clusters that are implied by the assumed non-parametric model. We compare the Bayesian discovery procedure and the optimal discovery procedure estimates in a simple simulation study and in an assessment of differential gene expression based on microarray data from tumour samples. We extend the setting of the optimal discovery procedure by discussing modifications of the loss function that lead to different single-thresholding statistics. Finally, we provide an application of the previous arguments to dependent (spatial) data.
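The thresholding structure can be made concrete with a small sketch: under a loss of the form c·(false positives) − (true positives), the Bayes rule flags case i exactly when its posterior probability of coming from the alternative exceeds c/(1 + c). The two-groups normal mixture, the mixing weight and the value of c below are illustrative assumptions, not the paper's semiparametric model.

```python
# Hedged sketch: Bayes rule for a loss c*FP - TP thresholds the posterior
# probability of the alternative at c/(1+c).  Two-groups model assumed.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

# z ~ N(0,1) with probability 1-pi1 (null), z ~ N(3,1) with probability pi1.
pi1, mu1 = 0.1, 3.0
is_alt = rng.uniform(size=10_000) < pi1
z = rng.standard_normal(10_000) + mu1 * is_alt

# Posterior probability each case is from the alternative
# (mixture parameters treated as known, purely for illustration).
f0 = norm.pdf(z, 0.0, 1.0)
f1 = norm.pdf(z, mu1, 1.0)
post = pi1 * f1 / (pi1 * f1 + (1 - pi1) * f0)

c = 4.0                                    # one false positive costs 4 missed discoveries
flag = post > c / (1 + c)
fdp = np.mean(~is_alt[flag]) if flag.any() else 0.0
print(f"flagged {flag.sum()} cases, realized false discovery proportion {fdp:.3f}")
```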
78.
In some statistical problems a degree of explicit prior information is available about the value taken by the parameter of interest, θ say, although the information is much less than would be needed to place a prior density on the parameter's distribution. Often the prior information takes the form of a simple bound, ‘θ > θ1’ or ‘θ < θ1’, where θ1 is determined by physical considerations or mathematical theory, such as positivity of a variance. A conventional approach to accommodating the requirement that θ > θ1 is to replace an estimator, θ̂, of θ by the maximum of θ̂ and θ1. However, this technique is generally inadequate. For one thing, it does not respect the strictness of the inequality θ > θ1, which can be critical in interpreting results. For another, it produces an estimator that does not respond in a natural way to perturbations of the data. In this paper we suggest an alternative approach, in which bootstrap aggregation, or bagging, is used to overcome these difficulties. Bagging gives estimators that, when subjected to the constraint θ > θ1, strictly exceed θ1 except in extreme settings in which the empirical evidence strongly contradicts the constraint. Bagging also reduces estimator variability in the important case for which θ̂ is close to θ1, and more generally produces estimators that respect the constraint in a smooth, realistic fashion.
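A minimal sketch of the bagging idea, assuming the simplest possible setting (a constrained sample mean with θ1 = 0): instead of computing max(θ̂, θ1) once, average max(θ̂*, θ1) over bootstrap resamples. The bagged value exceeds θ1 strictly unless essentially every resample violates the constraint, and it responds smoothly to perturbations of the data.

```python
# Hedged sketch (illustrative, not the paper's exact construction):
# bagging the constrained estimator max(theta_hat, theta1).
import numpy as np

rng = np.random.default_rng(6)

def bagged_constrained_mean(x, theta1=0.0, n_boot=2000):
    """Average of max(bootstrap sample mean, theta1) over resamples."""
    x = np.asarray(x, dtype=float)
    idx = rng.integers(0, x.size, size=(n_boot, x.size))  # resample indices
    boot_means = x[idx].mean(axis=1)
    return np.maximum(boot_means, theta1).mean()

# Data whose mean sits near the boundary theta1 = 0.
x = rng.normal(loc=0.05, scale=1.0, size=30)
plain = max(x.mean(), 0.0)
print("plug-in constrained estimate:", round(plain, 4))
print("bagged constrained estimate: ", round(bagged_constrained_mean(x), 4))
```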
79.
In this paper we present a parsimonious model for the analysis of underreported Poisson count data. In contrast to previously developed methods, we are able to derive analytic expressions for the key marginal posterior distributions that are of interest. The usefulness of this model is explored via a re-examination of previously analysed data covering the purchasing of port wine (Ramos, 1999).
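The underreporting mechanism can be illustrated as follows, under the simplifying assumption (not made in the paper) that the reporting probability p is known: binomial thinning of a Poisson(λ) count yields an observed Poisson(pλ) count, so a Gamma(a, b) prior on λ gives the analytic posterior Gamma(a + Σy, b + np).

```python
# Hedged sketch of the underreporting mechanism with a known reporting
# probability p (an illustrative simplification, not the paper's model).
import numpy as np

rng = np.random.default_rng(7)

lam_true, p, n = 6.0, 0.7, 50
true_counts = rng.poisson(lam_true, size=n)
observed = rng.binomial(true_counts, p)     # binomial thinning -> Poisson(p*lam)

a, b = 1.0, 0.1                             # Gamma(a, b) prior on lambda
a_post = a + observed.sum()                 # conjugate update under thinning
b_post = b + n * p
print("posterior mean of lambda:", round(a_post / b_post, 3))
print("naive mean ignoring underreporting:", round(observed.mean(), 3))
```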
80.
Given a linear time series, e.g. an autoregression of infinite order, we may construct a finite-order approximation and use that as the basis for confidence regions. The sieve or autoregressive bootstrap, as this method is often called, is generally seen as a competitor with the better-understood block bootstrap approach. However, in the present paper we argue that, for linear time series, the sieve bootstrap has significantly better performance than blocking methods and offers a wider range of opportunities. In particular, since it does not corrupt second-order properties, it may be used in a double-bootstrap form, with the second bootstrap application being employed to calibrate a basic percentile-method confidence interval. This approach confers second-order accuracy without the need to estimate variance. That offers substantial benefits, since variances of statistics based on time series can be difficult to estimate reliably and, partly because of the relatively small amount of information contained in a dependent process, are notorious for causing problems when used to Studentize. Other advantages of the sieve bootstrap include considerably greater robustness against variations in the choice of the tuning parameter, here equal to the autoregressive order, and the fact that, in contradistinction to the case of the block bootstrap, the percentile-t version of the sieve bootstrap may be based on the 'raw' estimator of standard error. In the process of establishing these properties we show that the sieve bootstrap is second-order correct.
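A minimal sketch of the sieve bootstrap for a percentile confidence interval, assuming a fixed AR order p = 2 and the sample mean as the statistic: fit the autoregression by least squares, resample the centred residuals, regenerate series from the fitted recursion, and read off quantiles of the bootstrap statistics. The double-bootstrap calibration step discussed in the paper is omitted for brevity; all settings are illustrative.

```python
# Hedged sketch of the sieve (autoregressive) bootstrap: percentile CI for
# the mean of a linear time series, with a fixed, illustrative AR order.
import numpy as np

rng = np.random.default_rng(8)

def fit_ar(x, p):
    """Least-squares AR(p) fit; returns (intercept, coefficients, centred residuals)."""
    y = x[p:]
    X = np.column_stack([np.ones(len(y))] + [x[p - j:-j] for j in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta[0], beta[1:], resid - resid.mean()

def sieve_bootstrap_mean_ci(x, p=2, n_boot=2000, alpha=0.05):
    c, phi, resid = fit_ar(x, p)
    n = len(x)
    means = np.empty(n_boot)
    for b in range(n_boot):
        e = rng.choice(resid, size=n + 50, replace=True)  # resample residuals
        xb = np.zeros(n + 50)
        xb[:p] = x[:p]                                    # crude start-up; burn-in absorbs it
        for t in range(p, n + 50):
            xb[t] = c + phi @ xb[t - p:t][::-1] + e[t]    # fitted AR recursion
        means[b] = xb[50:].mean()                         # drop the burn-in
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

# AR(1) test series with mean 2.
n = 200
x = np.empty(n); x[0] = 2.0
for t in range(1, n):
    x[t] = 2.0 * (1 - 0.6) + 0.6 * x[t - 1] + rng.standard_normal()
print("sample mean:", x.mean().round(3))
print("percentile CI for the mean:", np.round(sieve_bootstrap_mean_ci(x), 3))
```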