101.
For any specific government administrative department, data supply and data demand are often asymmetric. How can a statistical system be constructed so that the collection, processing, application, and release of the relevant statistical information are all handled through a single information system, satisfying both the department's external data supply and its internal data demand? Taking the Beijing Municipal Commission of Housing and Urban-Rural Development (北京市住建委) as an example, this paper presents four basic steps for designing the statistical system of a government administrative department: analyzing the department's administrative functions, constructing the basic framework of the statistical system, designing the specific statistical content module by module, and settling the reporting cycle, data sources, and transmission channels.
102.
薛洁  赵志飞 《统计研究》2012,29(4):16-19
As a new development, the Internet of Things (IoT) has attracted growing attention and has been called the third wave of the world information industry, following the computer and the Internet. As one of China's strategic emerging industries, the IoT industry is a priority for current and future development. However, there is as yet no clear statistical definition of the IoT industry, which complicates statistical research on it. Against this background, and building on a review of domestic and international definitions of the IoT and their problems, this paper proposes a statistical definition of China's IoT industry, sets out the principles that a statistical classification of the industry should follow together with a classification framework, and analyzes the relationship between this classification and the Industrial Classification for National Economic Activities (《国民经济行业分类》).
103.
李强 《统计研究》2012,29(8):3-7
The statistical survey system is the foundation and standard of statistical work. China has built up a large statistical survey system comprising, in terms of content, the national accounts system, the statistical indicator system, the statistical standards system, the statistical survey methodology system, and the statistical survey administration system. This article first reviews the establishment and development of these five systems over the 60 years since the founding of government statistics in New China, then summarizes the main features of China's government statistical survey system and the useful experience gained in implementing it, and finally identifies the main problems in the current system and the direction of reform.
104.
The non-central gamma distribution can be regarded as a general form of the non-central χ2 distributions, whose computation has been thoroughly investigated (Ruben, H., 1974, Non-central chi-square and gamma revisited. Communications in Statistics, 3(7), 607–633; Knüsel, L., 1986, Computation of the chi-square and Poisson distribution. SIAM Journal on Scientific and Statistical Computing, 7, 1022–1036; Voit, E.O. and Rust, P.F., 1987, Noncentral chi-square distributions computed by S-system differential equations. Proceedings of the Statistical Computing Section, ASA, pp. 118–121; Rust, P.F. and Voit, E.O., 1990, Statistical densities, cumulatives, quantiles, and power obtained by S-systems differential equations. Journal of the American Statistical Association, 85, 572–578; Chattamvelli, R., 1994, Another derivation of two algorithms for the noncentral χ2 and F distributions. Journal of Statistical Computation and Simulation, 49, 207–214; Johnson, N.J., Kotz, S. and Balakrishnan, N., 1995, Continuous Univariate Distributions, Vol. 2 (2nd edn) (New York: Wiley)). Both distribution functions are usually expressed as weighted infinite series of the central one. Ad hoc approximations to the cumulative probabilities of the non-central gamma were extended or discussed by Chattamvelli, by Knüsel and Bablok (Knüsel, L. and Bablok, B., 1996, Computation of the noncentral gamma distribution. SIAM Journal on Scientific Computing, 17, 1224–1231), and by Ruben (1974, cited above). However, they did not implement or demonstrate the proposed numerical procedures. Approximations to the non-central density and quantiles are not available, and an S-system formulation has not been derived. Here, approximations to cumulative probabilities, the density, and quantiles based on the method of Knüsel and Bablok are derived and implemented in R code. Furthermore, two alternative S-system forms are recast using the techniques of Savageau and Voit (Savageau, M.A. and Voit, E.O., 1987, Recasting nonlinear differential equations as S-systems: A canonical nonlinear form. Mathematical Biosciences, 87, 83–115), of Chen (Chen, Z.-Y., 2003, Computing the distribution of the squared sample multiple correlation coefficient with S-Systems. Communications in Statistics—Simulation and Computation, 32(3), 873–898), and of Chen and Chou (Chen, Z.-Y. and Chou, Y.-C., 2000, Computing the noncentral beta distribution with S-system. Computational Statistics and Data Analysis, 33, 343–360). Statistical densities, cumulative probabilities, and quantiles can then be evaluated with a single numerical solver, power law analysis and simulation (PLAS). With the newly derived S-systems for the non-central gamma, the specialized non-central χ2 distributions are demonstrated for five cases in the same three situations studied by Rust and Voit; the paired numerical values are almost equal. Building on this, nine cases in three similar situations are designed for demonstration and evaluation, and exact values to a finite number of significant digits are provided for comparison. Demonstrations are conducted with the R package and the PLAS solver on the same PC system. The three methods, in two groups, yield highly accurate and consistent numerical results and are competitive with one another in speed of computation. Numerical advantages of S-systems over the ad hoc approximations, and related properties, are also discussed.
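Although the paper's own implementations are in R and PLAS, the Poisson-mixture series underlying all of these computations is simple to sketch. Below is a minimal Python illustration (not the authors' code) of the series form, in which the non-central gamma CDF with shape a and non-centrality δ is a Poisson(δ)-weighted sum of central gamma CDFs; the truncation cutoff `terms` and the consistency check against SciPy's non-central χ2 are assumptions of this sketch.

```python
# Sketch: non-central gamma CDF as a Poisson-weighted series of central
# gamma CDFs -- F(x; a, delta) = sum_j e^{-delta} delta^j / j! * P(a + j, x),
# where P is the regularized lower incomplete gamma function.
from scipy.special import gammainc   # regularized lower incomplete gamma P(a, x)
from scipy.stats import poisson, ncx2

def noncentral_gamma_cdf(x, a, delta, terms=200):
    """Truncated Poisson-mixture series; `terms` is an illustrative cutoff."""
    return sum(poisson.pmf(j, delta) * gammainc(a + j, x)
               for j in range(terms))

# Consistency check via the non-central chi-square special case:
# chi^2(df, nc) evaluated at x corresponds to shape df/2 and
# non-centrality nc/2, evaluated at x/2.
x, df, nc = 12.0, 5, 3.0
print(noncentral_gamma_cdf(x / 2, df / 2, nc / 2))  # series value
print(ncx2.cdf(x, df, nc))                          # SciPy reference
```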
105.
An assumption made in the classification problem is that the data being classified follow a distribution with the same parameters as the data used to obtain the discriminant functions. A method based on mixtures of two normal distributions is proposed as a way of checking this assumption and modifying the discriminant functions accordingly. As a first step, the case considered in this paper is that of a shift in the mean of one or two univariate normal distributions, with all other parameters remaining fixed and known. Calculations based on asymptotic results indicate that the proposed method works well even for small shifts.
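As a rough illustration of the idea (not the paper's procedure), one can fit a two-component normal mixture in which one component is pinned at the training-data parameters and the other absorbs a possible mean shift, estimating the shift and the mixing weight by maximum likelihood. The fixed unit variances, the simulated shift, and the optimizer settings below are assumptions of the sketch.

```python
# Sketch: detecting a mean shift with a two-component normal mixture.
# The baseline component N(0, 1) is treated as known (all parameters fixed
# except a possible shift); we estimate the shift `delta` and the mixing
# weight `w` of the shifted component by maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 300),    # unshifted portion
                       rng.normal(0.8, 1.0, 100)])   # shifted portion

def neg_log_lik(params):
    w, delta = params
    w = min(max(w, 1e-6), 1 - 1e-6)                  # keep weight in (0, 1)
    dens = (1 - w) * norm.pdf(data, 0.0, 1.0) + w * norm.pdf(data, delta, 1.0)
    return -np.sum(np.log(dens))

fit = minimize(neg_log_lik, x0=[0.5, 0.5], method="Nelder-Mead")
w_hat, delta_hat = fit.x
print(f"estimated shifted fraction {w_hat:.2f}, estimated shift {delta_hat:.2f}")
```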
106.
Quality control chart interpretation is usually based on the assumption that successive observations are independent over time. In this article we show the effect of autocorrelation on the retrospective Shewhart chart for individuals, often referred to as the X-chart, with control limits based on moving ranges. The presence of positive first-lag autocorrelation is shown to increase the number of false alarms from the control chart, while negative first-lag autocorrelation can produce unnecessarily wide control limits, so that significant shifts in the process mean may go undetected. We use first-order autoregressive and first-order moving-average models in our simulations of small samples of autocorrelated data.
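The false-alarm effect described here is easy to reproduce. The sketch below (our illustration, not the article's code) simulates an AR(1) series, builds an individuals chart with the usual moving-range limits X̄ ± 2.66·MR̄, and counts out-of-limit points; the series length, φ values, and seed are arbitrary choices.

```python
# Sketch: false alarms on an individuals (X) chart under AR(1) autocorrelation.
# Limits use the standard moving-range constant 2.66 = 3/d2 with d2 = 1.128.
import numpy as np

def ar1(n, phi, rng):
    """Generate an AR(1) series x_t = phi * x_{t-1} + e_t with N(0,1) noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def x_chart_alarms(x):
    mr_bar = np.mean(np.abs(np.diff(x)))        # average moving range
    center = x.mean()
    ucl, lcl = center + 2.66 * mr_bar, center - 2.66 * mr_bar
    return np.sum((x > ucl) | (x < lcl))

rng = np.random.default_rng(1)
for phi in (0.0, 0.5, 0.9):                     # increasing positive autocorrelation
    alarms = np.mean([x_chart_alarms(ar1(100, phi, rng)) for _ in range(1000)])
    print(f"phi = {phi}: average false alarms per 100 points = {alarms:.2f}")
```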
107.
We propose a mixture of latent variables model for model-based clustering, classification, and discriminant analysis of data comprising variables of mixed type. This approach is a generalization of latent variable analysis, and model fitting is carried out within the expectation-maximization framework. Our approach is outlined, and a simulation study is conducted to illustrate the effect of sample size and noise on the standard errors and on the recovery probabilities for the number of groups. Our modelling methodology is then applied to two real data sets, and its clustering and classification performance is discussed. We conclude with a discussion and suggestions for future work.
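A stripped-down analogue of the simulation described here can show how sample size affects recovery of the number of groups, using an off-the-shelf Gaussian mixture selected by BIC rather than the authors' mixed-type latent variable model; the group separations, sample sizes, and replication count below are assumptions.

```python
# Sketch: recovery probability for the number of groups as sample size grows,
# using a plain Gaussian mixture selected by BIC (a stand-in for the paper's
# mixed-type latent variable model, which sklearn does not implement).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
true_g, reps = 3, 50

for n in (60, 150, 600):                       # total sample sizes (assumed)
    hits = 0
    for _ in range(reps):
        X = np.concatenate([rng.normal(mu, 1.0, (n // true_g, 2))
                            for mu in (0.0, 3.0, 6.0)])
        bics = [GaussianMixture(g, random_state=0).fit(X).bic(X)
                for g in range(1, 6)]
        hits += (int(np.argmin(bics)) + 1 == true_g)
    print(f"n = {n}: recovered G = {true_g} in {hits}/{reps} runs")
```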
108.
A statistical test can be seen as a procedure for producing a decision based on observed data, where some decisions consist of rejecting a hypothesis (yielding a significant result) and some do not, and where one controls the probability of making a wrong rejection at some prespecified significance level. Whereas traditional hypothesis testing involves only two possible decisions (to reject or not reject a null hypothesis), Kaiser's directional two-sided test, as well as the more recently introduced testing procedure of Jones and Tukey, each equivalent to running two one-sided tests, involve three possible decisions when inferring the value of a unidimensional parameter. The latter procedure assumes that a point null hypothesis is impossible (e.g., that two treatments cannot have exactly the same effect), allowing a gain in statistical power. There are, however, situations where a point hypothesis is indeed plausible, for example, when considering hypotheses derived from Einstein's theories. In this article, we introduce a five-decision rule testing procedure, equivalent to running a traditional two-sided test in addition to two one-sided tests, which combines the advantages of the procedures of Kaiser (no assumption that a point hypothesis is impossible) and of Jones and Tukey (higher power), allowing a non-negligible (typically 20%) reduction in the sample size needed to reach a given statistical power, compared with the traditional approach.
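A sketch of how such a rule might look for a z-statistic follows. This is our reading of the construction, not the article's calibrated procedure: the outer thresholds are taken from the two-sided test at level α, the inner ones from one-sided tests at level α, and the five decision labels are illustrative.

```python
# Sketch: a five-decision rule for a one-dimensional parameter theta,
# combining a two-sided z-test with two one-sided tests. The levels
# assigned to each sub-test are an assumption of this illustration.
from scipy.stats import norm

def five_decision(z, alpha=0.05):
    outer = norm.ppf(1 - alpha / 2)   # two-sided critical value
    inner = norm.ppf(1 - alpha)       # one-sided critical value
    if z >= outer:
        return "conclude theta > theta0 (point null rejected)"
    if z >= inner:
        return "conclude theta >= theta0 (direction only)"
    if z <= -outer:
        return "conclude theta < theta0 (point null rejected)"
    if z <= -inner:
        return "conclude theta <= theta0 (direction only)"
    return "no decision"

for z in (-2.5, -1.8, 0.3, 1.8, 2.5):
    print(f"z = {z:+.1f}: {five_decision(z)}")
```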
109.
Owing to the extreme quantiles involved, standard control charts are very sensitive to the effects of parameter estimation and non-normality. More general parametric charts have been devised to deal with the latter complication, and corrections have been derived to compensate for the estimation step, under both normal and parametric models. The resulting procedures offer a satisfactory solution over a broad range of underlying distributions. However, situations do occur where even such a large model is inadequate and nothing remains but to consider non-parametric charts. In principle, these form ideal solutions, but the problem is that huge sample sizes are required for the estimation step; otherwise the resulting stochastic error is so large that the chart is very unstable, a disadvantage that seems to outweigh the advantage of avoiding the model error of the parametric case. Here we analyse under what conditions non-parametric charts actually become feasible alternatives to their parametric counterparts. In particular, corrected versions are suggested for which a possible change point is reached at sample sizes that are markedly smaller (though still larger than the customary range). These corrections serve to control the in-control behaviour, ensuring that markedly wrong outcomes of the estimates occur sufficiently rarely. The price for this protection is clearly some loss of detection power when the process is out of control; a change point comes into view as soon as this loss can be made sufficiently small.
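The instability of estimated non-parametric limits is easy to visualize: a control limit pitched at an extreme quantile must be estimated from the tail of the Phase I sample, so its sampling error is large unless that sample is huge. The sketch below (an illustration of the problem, not the paper's corrections) estimates the 0.999 quantile empirically from Phase I samples of increasing size and reports the spread of the realized in-control false-alarm rate; the normal data model and sample sizes are assumptions.

```python
# Sketch: why non-parametric control limits need huge Phase I samples.
# We estimate an extreme quantile empirically and measure how much the
# realized per-point false-alarm rate fluctuates across Phase I samples.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
target = 0.001                        # nominal per-point false-alarm rate
reps = 500

for n in (1_000, 10_000, 100_000):    # Phase I sample sizes (assumed)
    rates = []
    for _ in range(reps):
        phase1 = rng.standard_normal(n)
        ucl = np.quantile(phase1, 1 - target)  # estimated empirical limit
        rates.append(norm.sf(ucl))    # true exceedance prob. of that limit
    rates = np.asarray(rates)
    print(f"n = {n:>7}: mean rate {rates.mean():.5f}, sd {rates.std():.5f}")
```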
110.
Summary. A fully Bayesian analysis of directed graphs, with particular emphasis on applications in social networks, is explored. The model is capable of incorporating the effects of covariates, within- and between-block ties, and multiple responses. Inference is straightforward using software based on Markov chain Monte Carlo methods. Examples are provided which highlight the variety of data sets that can be entertained and the ease with which they can be analysed.
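As a toy illustration of MCMC-based inference for directed ties (far simpler than the block model in the paper), one can sample the posterior of a logistic tie model P(y_ij = 1) = logit⁻¹(b0 + b1·x_ij) with random-walk Metropolis; the prior, proposal scale, covariate, and network size below are all assumptions of this sketch.

```python
# Sketch: random-walk Metropolis for a toy logistic model of directed ties,
# P(y_ij = 1) = sigmoid(b0 + b1 * x_ij) -- a drastically simplified stand-in
# for the covariate/block model analysed in the paper.
import numpy as np

rng = np.random.default_rng(4)
n = 30
x = rng.standard_normal((n, n))                   # dyadic covariate (assumed)
eta = -1.0 + 1.5 * x                              # true b0 = -1, b1 = 1.5
y = rng.random((n, n)) < 1 / (1 + np.exp(-eta))   # simulated directed ties
mask = ~np.eye(n, dtype=bool)                     # exclude self-ties

def log_post(beta):
    lin = beta[0] + beta[1] * x
    loglik = np.sum(y[mask] * lin[mask] - np.log1p(np.exp(lin[mask])))
    return loglik - 0.5 * np.sum(beta ** 2) / 10.0   # N(0, 10) prior (assumed)

beta, lp, draws = np.zeros(2), log_post(np.zeros(2)), []
for _ in range(5000):
    prop = beta + 0.05 * rng.standard_normal(2)   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:       # Metropolis accept step
        beta, lp = prop, lp_prop
    draws.append(beta.copy())

print(np.mean(draws[1000:], axis=0))              # posterior means after burn-in
```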