81.
The surveillance of multivariate processes has received growing attention during the last decade. Several generalizations of well-known methods such as Shewhart, CUSUM and EWMA charts have been proposed. Many of these multivariate procedures are based on a univariate summary statistic of the multivariate observations, usually the likelihood ratio statistic. In this paper we consider the surveillance of multivariate observation processes for a shift between two fully specified alternatives. The effect of the dimension reduction using likelihood ratio statistics is discussed in the context of sufficiency properties, and an example of the loss of efficiency incurred by not using the univariate sufficient statistic is given. Furthermore, a likelihood ratio method, the LR method, for constructing surveillance procedures is suggested for multivariate surveillance situations. It is shown to produce univariate surveillance procedures based on the sufficient likelihood ratios. As the LR procedure has several optimality properties in the univariate setting, it is also used here as a benchmark for comparisons between multivariate surveillance procedures.
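The reduction described in this abstract can be sketched concretely. Assuming, purely for illustration, two fully specified unit-covariance Gaussian alternatives (the means `mu0` and `mu1` below are hypothetical), each vector observation collapses to a univariate log-likelihood-ratio statistic, which a one-sided CUSUM then monitors:

```python
def log_lr(x, mu0, mu1):
    # Log-likelihood ratio of one multivariate observation for the
    # shift mu0 -> mu1 under unit covariance: the univariate
    # sufficient reduction of the vector x.
    return (sum((m1 - m0) * xi for xi, m0, m1 in zip(x, mu0, mu1))
            - 0.5 * (sum(m * m for m in mu1) - sum(m * m for m in mu0)))

def cusum_alarm(xs, mu0, mu1, h):
    # One-sided CUSUM on the univariate log-likelihood ratios;
    # returns the alarm time, or None if threshold h is never crossed.
    s = 0.0
    for t, x in enumerate(xs, start=1):
        s = max(0.0, s + log_lr(x, mu0, mu1))
        if s > h:
            return t
    return None
```

For observations sitting exactly at the out-of-control mean (1, 1) with mu0 = (0, 0), each log-likelihood ratio equals 1, so the CUSUM crosses a threshold of 5 at time 6; in-control observations at (0, 0) contribute −1 each and never trigger an alarm.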
82.
Time series models are presented for which the seasonal-component estimates delivered by linear least squares signal extraction closely approximate those of the standard option of the widely used Census X-11 program. Earlier work is extended by consideration of a broader class of models and by examination of asymmetric filters, in addition to the symmetric filter implicit in the adjustment of historical data. Various criteria that guide the specification of unobserved-components models are discussed, and a new preferred model is presented. Some nonstandard options in X-11 are considered in the Appendix.
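The kind of linear filtering involved can be illustrated with the simplest X-11-style additive decomposition: a 2×12 centred moving average for the trend, and month-by-month averages of the detrended series for the seasonal factors. This is a toy sketch of the symmetric historical filter only, not the actual X-11 iteration:

```python
def centered_ma12(x):
    # 2x12 centred moving average: the classical first trend estimate
    # for monthly data; undefined at the six endpoints on each side.
    n = len(x)
    trend = [None] * n
    for t in range(6, n - 6):
        trend[t] = (0.5 * x[t - 6] + sum(x[t - 5:t + 6]) + 0.5 * x[t + 6]) / 12
    return trend

def seasonal_factors(x, trend):
    # Average the detrended series month by month, then normalise the
    # twelve factors to sum to zero (additive decomposition).
    sums, counts = [0.0] * 12, [0] * 12
    for t, tr in enumerate(trend):
        if tr is not None:
            sums[t % 12] += x[t] - tr
            counts[t % 12] += 1
    fac = [s / c for s, c in zip(sums, counts)]
    adj = sum(fac) / 12
    return [f - adj for f in fac]
```

On a series that is exactly a linear trend plus a period-12 seasonal pattern summing to zero, the 2×12 average reproduces the trend and the factors recover the pattern exactly, which is why it serves as the natural first pass.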
83.
We consider a new class of scale estimators with 50% breakdown point. The estimators are defined as order statistics of certain subranges. They all have a finite-sample breakdown point of [n/2]/n, which is the best possible value. (Here, [...] denotes the integer part.) One estimator in this class has the same influence function as the median absolute deviation and the least median of squares (LMS) scale estimator (i.e., the length of the shortest half), but its finite-sample efficiency is higher. If we consider the standard deviation of a subsample instead of its range, we obtain a different class of 50% breakdown estimators. This class contains the least trimmed squares (LTS) scale estimator. Simulation shows that the LTS scale estimator is nearly unbiased, so it does not need a small-sample correction factor. Surprisingly, the efficiency of the LTS scale estimator is less than that of the LMS scale estimator.
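A minimal sketch of two of the ingredients this abstract compares, the shortest-half (LMS) scale and the median absolute deviation, both with 50% breakdown point (consistency factors omitted):

```python
def shortest_half_scale(x):
    # Length of the shortest half: the smallest range covering
    # h = floor(n/2) + 1 consecutive order statistics.  Breakdown
    # point [n/2]/n; the LMS scale up to a consistency factor.
    xs = sorted(x)
    n = len(xs)
    h = n // 2 + 1
    return min(xs[i + h - 1] - xs[i] for i in range(n - h + 1))

def mad(x):
    # Median absolute deviation from the median, also 50% breakdown
    # (consistency factor omitted).
    xs = sorted(x)
    n = len(xs)
    med = (xs[(n - 1) // 2] + xs[n // 2]) / 2
    dev = sorted(abs(v - med) for v in x)
    return (dev[(n - 1) // 2] + dev[n // 2]) / 2
```

Both are unmoved by a single wild observation: replacing the largest value of {0, 1, 2, 3} with one million leaves the shortest-half scale unchanged, which is exactly the high-breakdown behaviour at issue.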
84.
The authors develop consistent nonparametric estimation techniques for the directional mixing density. Classical spherical harmonics are used to adapt Euclidean techniques to this directional environment. Minimax rates of convergence are obtained for rotationally invariant densities verifying various smoothness conditions. It is found that the differences in smoothness between the Laplace, the Gaussian and the von Mises‐Fisher distributions lead to contrasting inferential conclusions.
85.
A Bayesian discovery procedure
Summary.  We discuss a Bayesian discovery procedure for multiple-comparison problems. We show that, under a coherent decision theoretic framework, a loss function combining true positive and false positive counts leads to a decision rule that is based on a threshold of the posterior probability of the alternative. Under a semiparametric model for the data, we show that the Bayes rule can be approximated by the optimal discovery procedure, which was recently introduced by Storey. Improving the approximation leads us to a Bayesian discovery procedure, which exploits the multiple shrinkage in clusters that are implied by the assumed non-parametric model. We compare the Bayesian discovery procedure and the optimal discovery procedure estimates in a simple simulation study and in an assessment of differential gene expression based on microarray data from tumour samples. We extend the setting of the optimal discovery procedure by discussing modifications of the loss function that lead to different single-thresholding statistics. Finally, we provide an application of the previous arguments to dependent (spatial) data.
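The thresholding rule can be sketched under a toy two-group Gaussian mixture; the null N(0, 1), alternative mean `mu1` and prior weight `pi1` below are illustrative assumptions, not the semiparametric model of the paper:

```python
import math

def posterior_alt(z, pi1, mu1):
    # Posterior probability that z was drawn from the alternative
    # N(mu1, 1) rather than the null N(0, 1), given prior weight pi1
    # on the alternative.
    f0 = math.exp(-z * z / 2)
    f1 = math.exp(-((z - mu1) ** 2) / 2)
    return pi1 * f1 / (pi1 * f1 + (1 - pi1) * f0)

def discoveries(zs, pi1, mu1, threshold):
    # The Bayes rule: flag case i exactly when its posterior
    # probability of the alternative exceeds the threshold implied by
    # the loss on true and false positive counts.
    return [i for i, z in enumerate(zs)
            if posterior_alt(z, pi1, mu1) > threshold]
```

With pi1 = 0.1 and mu1 = 3, an observation at z = 3 has posterior probability of the alternative above 0.9, while z = 0 sits near zero, so only the extreme case is flagged at a 0.5 threshold.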
86.
In some statistical problems a degree of explicit, prior information is available about the value taken by the parameter of interest, θ say, although the information is much less than would be needed to place a prior density on the parameter's distribution. Often the prior information takes the form of a simple bound, 'θ > θ1' or 'θ < θ1', where θ1 is determined by physical considerations or mathematical theory, such as positivity of a variance. A conventional approach to accommodating the requirement that θ > θ1 is to replace an estimator, θ̂, of θ by the maximum of θ̂ and θ1. However, this technique is generally inadequate. For one thing, it does not respect the strictness of the inequality θ > θ1, which can be critical in interpreting results. For another, it produces an estimator that does not respond in a natural way to perturbations of the data. In this paper we suggest an alternative approach, in which bootstrap aggregation, or bagging, is used to overcome these difficulties. Bagging gives estimators that, when subjected to the constraint θ > θ1, strictly exceed θ1 except in extreme settings in which the empirical evidence strongly contradicts the constraint. Bagging also reduces estimator variability in the important case for which θ̂ is close to θ1, and more generally produces estimators that respect the constraint in a smooth, realistic fashion.
87.
In this paper we present a parsimonious model for the analysis of underreported Poisson count data. In contrast to previously developed methods, we are able to derive analytic expressions for the key marginal posterior distributions that are of interest. The usefulness of this model is explored via a re-examination of previously analysed data covering the purchasing of port wine (Ramos, 1999).
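The underreporting mechanism can be sketched by binomial thinning: true counts are Poisson(λ) and each event is recorded independently with probability p, so observed counts follow Poisson(λp). A toy simulator of the data-generating process (not the paper's Bayesian analysis):

```python
import math
import random

def simulate_underreported(lam, p, n, seed=0):
    # True counts are Poisson(lam); each event is independently
    # recorded with probability p, so observed counts follow
    # Poisson(lam * p) (binomial thinning of the true counts).
    rng = random.Random(seed)

    def poisson(mu):
        # Knuth's multiplicative method, adequate for small mu.
        limit, k, prod = math.exp(-mu), 0, rng.random()
        while prod > limit:
            prod *= rng.random()
            k += 1
        return k

    obs = []
    for _ in range(n):
        true_count = poisson(lam)
        obs.append(sum(1 for _ in range(true_count) if rng.random() < p))
    return obs
```

With λ = 5 and p = 0.6, the observed sample mean settles near λp = 3, which is exactly the confounding the model must untangle: the same observed mean is consistent with many (λ, p) pairs.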
88.
Given a linear time series, e.g. an autoregression of infinite order, we may construct a finite order approximation and use that as the basis for confidence regions. The sieve or autoregressive bootstrap, as this method is often called, is generally seen as a competitor with the better-understood block bootstrap approach. However, in the present paper we argue that, for linear time series, the sieve bootstrap has significantly better performance than blocking methods and offers a wider range of opportunities. In particular, since it does not corrupt second-order properties, it may be used in a double-bootstrap form, with the second bootstrap application being employed to calibrate a basic percentile method confidence interval. This approach confers second-order accuracy without the need to estimate variance. That offers substantial benefits, since variances of statistics based on time series can be difficult to estimate reliably and, partly because of the relatively small amount of information contained in a dependent process, are notorious for causing problems when used to Studentize. Other advantages of the sieve bootstrap include considerably greater robustness against variations in the choice of the tuning parameter, here equal to the autoregressive order, and the fact that, in contradistinction to the case of the block bootstrap, the percentile t version of the sieve bootstrap may be based on the 'raw' estimator of standard error. In the process of establishing these properties we show that the sieve bootstrap is second-order correct.
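A minimal AR(1) version of the sieve bootstrap; the method allows a general, data-chosen autoregressive order, so fixing p = 1 here is an illustrative simplification:

```python
import random

def sieve_bootstrap_ar1(x, n_boot=200, seed=0):
    # Fit the AR(1) sieve by least squares, resample the centred
    # residuals with replacement, and rebuild bootstrap series of the
    # same length as the original.
    rng = random.Random(seed)
    n = len(x)
    mean = sum(x) / n
    c = [v - mean for v in x]
    phi = (sum(c[t] * c[t - 1] for t in range(1, n))
           / sum(v * v for v in c[:-1]))
    resid = [c[t] - phi * c[t - 1] for t in range(1, n)]
    rbar = sum(resid) / len(resid)
    resid = [r - rbar for r in resid]  # centre the residuals
    series = []
    for _ in range(n_boot):
        s = [c[0]]
        for _ in range(n - 1):
            s.append(phi * s[-1] + rng.choice(resid))
        series.append([v + mean for v in s])
    return phi, series
```

A statistic computed on each bootstrap series yields percentile intervals, and a second round of the same resampling applied within each series gives the double-bootstrap calibration the abstract describes.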
89.
We show that, in the context of double-bootstrap confidence intervals, linear interpolation at the second level of the double bootstrap can reduce the simulation error component of coverage error by an order of magnitude. Intervals that are indistinguishable in terms of coverage error with theoretical, infinite simulation, double-bootstrap confidence intervals may be obtained at substantially less computational expense than by using the standard Monte Carlo approximation method. The intervals retain the simplicity of uniform bootstrap sampling and require no special analysis or computational techniques. Interpolation at the first level of the double bootstrap is shown to have a relatively minor effect on the simulation error.
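The interpolation step can be sketched as follows: given second-level Monte Carlo estimates of the coverage attained at a grid of nominal levels, linear interpolation picks the calibrated level rather than the nearest grid point (the grid and target values below are illustrative):

```python
def interpolate_level(levels, coverages, target):
    # Linearly interpolate between adjacent grid points to find the
    # nominal level whose estimated coverage equals the target,
    # instead of settling for the nearest grid point.
    pairs = list(zip(levels, coverages))
    for (l0, c0), (l1, c1) in zip(pairs, pairs[1:]):
        if min(c0, c1) <= target <= max(c0, c1):
            w = (target - c0) / (c1 - c0)
            return l0 + w * (l1 - l0)
    raise ValueError("target coverage outside the estimated range")
```

For instance, if nominal levels 0.90 and 0.95 attain estimated coverages 0.88 and 0.94, a target coverage of 0.91 interpolates to the calibrated nominal level 0.925.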
90.
Principles of the ethical economy and the ethics of the market economy
Ethics and economics are like brothers harbouring mutual hostility. They are brothers because both are theories of human action and decision-making, and both are concerned with the rationality and correctness of actions and decisions. They are mutually hostile because their normative contents appear contradictory: ethics pursues the highest good, while economics pursues efficiency; what is supremely good is not necessarily efficient, and what is efficient is not necessarily supremely good; the well-meaning person who pursues the good is not always rewarded, and the bad person who pursues efficiency is not always punished. To resolve this contradiction, the theory of the ethical economy proposes several basic principles, the most important of which are: the principle of the threefold nature of the good (morality, effect and efficiency); the principle of the dual nature of the ethical economy (its economic and its ethical character); principles of the ethical economy as an economics of ethics (such as the principle of the compatibility of morality and interest, and the principle of weighting universal interests); and principles of the ethical economy as ethical presuppositions of the market economy (such as the contractual third-party principle, the double-effect principle, and the supra-motivation principle).
Copyright©北京勤云科技发展有限公司  京ICP备09084417号