1.
The SαS stable distribution is an important class of non-Gaussian distributions; noise following such a distribution is called impulsive noise. Under impulsive noise, moments of order α and above do not exist, so algorithms built on second-order moments and a Gaussian model suffer degraded performance or fail altogether. This paper proposes an algorithm for estimating the characteristic parameters of linear frequency modulation (LFM) signals in impulsive noise: by analysing the specific properties of impulsive noise, a modified low-order-moment ambiguity function is derived, and the parameters of LFM signals are then estimated by combining it with the Radon transform. Because the algorithm works in both impulsive-noise and Gaussian-noise environments, it is fairly robust. Computer simulations verify its effectiveness.
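The reason fractional lower-order moments help can be sketched numerically. The snippet below is an illustration of the general FLOM idea, not the paper's modified ambiguity function: it simulates Cauchy noise, an SαS distribution with α = 1, and compares the unstable second moment with a finite fractional moment of order p < α.

```python
import numpy as np

# For SaS noise with characteristic exponent alpha, moments of order
# >= alpha diverge, but fractional lower-order moments (FLOMs) of
# order p < alpha remain finite and well-behaved.
rng = np.random.default_rng(0)
noise = rng.standard_cauchy(100_000)   # SaS with alpha = 1 (Cauchy)

second_moment = np.mean(noise ** 2)    # blows up as the sample grows
p = 0.25                               # p < alpha, so E|X|^p is finite
flom = np.mean(np.abs(noise) ** p)     # stable, near sec(pi*p/2) ~ 1.08

print(f"second moment: {second_moment:.1f}, FLOM (p={p}): {flom:.3f}")
```

A FLOM-based ambiguity function replaces the second-order products of the classical one with such p-th-order terms, which is why it stays usable under impulsive noise.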
2.
While much used in practice, latent variable models raise challenging estimation problems due to the intractability of their likelihood. Monte Carlo maximum likelihood (MCML), as proposed by Geyer & Thompson (1992), is a simulation-based approach to maximum likelihood approximation applicable to general latent variable models. MCML can be described as an importance sampling method in which the likelihood ratio is approximated by Monte Carlo averages of importance ratios simulated from the complete data model corresponding to an arbitrary value of the unknown parameter. This paper studies the asymptotic (in the number of observations) performance of the MCML method in the case of latent variable models with independent observations. This is in contrast with previous works on the same topic, which only considered conditional convergence to the maximum likelihood estimator for a fixed set of observations. A first important result is that when this reference value is fixed, the MCML method can only be consistent if the number of simulations grows exponentially fast with the number of observations. If, on the other hand, it is obtained from a consistent sequence of estimates of the unknown parameter, then the requirements on the number of simulations are shown to be much weaker.
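The importance-sampling identity behind MCML can be sketched on a toy model; everything below (the Gaussian latent model, the reference value ψ, the grid search) is an assumption for illustration, not the paper's setup. Latents z_j ~ N(θ, 1), observations y_j | z_j ~ N(z_j, 1), and the latents are simulated from the conditional at a fixed reference ψ.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true, n, m, psi = 1.0, 400, 200, 0.5
z = rng.normal(theta_true, 1.0, n)   # latent variables
y = rng.normal(z, 1.0)               # observations; marginally N(theta, 2)

def complete_loglik(theta, y, z):
    # log p_theta(y, z) up to a constant: z ~ N(theta,1), y|z ~ N(z,1)
    return -0.5 * (z - theta) ** 2 - 0.5 * (y - z) ** 2

# Simulate m latents per observation from p_psi(z | y) = N((y+psi)/2, 1/2)
zs = rng.normal((y + psi) / 2.0, np.sqrt(0.5), size=(m, n))

def mcml_objective(theta):
    # Approximates log L(theta) - log L(psi) via Monte Carlo averages of
    # importance ratios p_theta(y, z) / p_psi(y, z), one average per y_j
    log_ratio = complete_loglik(theta, y, zs) - complete_loglik(psi, y, zs)
    mx = log_ratio.max(axis=0)  # log-mean-exp for numerical stability
    return np.sum(mx + np.log(np.mean(np.exp(log_ratio - mx), axis=0)))

grid = np.linspace(-1.0, 3.0, 161)
theta_hat = grid[np.argmax([mcml_objective(t) for t in grid])]
print(f"MCML estimate: {theta_hat:.3f}")  # exact MLE here is mean(y)
```

In this easy model a moderate m suffices even with ψ fixed; the abstract's point is that in general a fixed reference value forces the number of simulations to grow exponentially with the number of observations.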
3.
Lin, Tsung I., Lee, Jack C. & Ni, Huey F. Statistics and Computing (2004) 14(2): 119–130
A finite mixture model using the multivariate t distribution has been shown to be a robust extension of normal mixtures. In this paper, we present a Bayesian approach to inference about the parameters of t-mixture models. The prior distributions are specified to be weakly informative so as to avoid nonintegrable posterior distributions. We present two efficient EM-type algorithms for computing the joint posterior mode with the observed data and an incomplete future vector as the sample. Markov chain Monte Carlo sampling schemes are also developed to obtain the target posterior distribution of the parameters. The advantages of the Bayesian approach over the maximum likelihood method are demonstrated on a set of real data.
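The robustness of t-mixtures rests on the scale-mixture representation of the t distribution. The sketch below (univariate, with assumed parameters, and not the paper's EM or MCMC machinery) generates t-distributed data through that representation and checks its variance.

```python
import numpy as np

# The t distribution is a scale mixture of normals: draw a gamma weight
# u ~ Gamma(nu/2, rate=nu/2), then x | u ~ N(mu, sigma2 / u). Small u
# inflates the variance, producing the heavy tails that let t-mixtures
# absorb outliers that would distort a normal mixture.
rng = np.random.default_rng(2)
nu, mu, sigma2, n = 5.0, 0.0, 1.0, 200_000

u = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)  # rate nu/2
x = rng.normal(mu, np.sqrt(sigma2 / u))                # t_nu(mu, sigma2)

# For nu > 2 the variance is sigma2 * nu / (nu - 2) = 5/3 here
print(f"sample variance: {np.var(x):.3f}")
```

The same augmentation (treating u as missing data) is what makes EM-type algorithms for t-mixtures tractable.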
4.
A test of congruence among distance matrices is described. It tests the hypothesis that several matrices, containing different types of variables about the same objects, are congruent with one another, so they can be used jointly in statistical analysis. Raw data tables are turned into similarity or distance matrices prior to testing; they can then be compared to data that naturally come in the form of distance matrices. The proposed test can be seen as a generalization of the Mantel test of matrix correspondence to any number of distance matrices. This paper shows that the new test has the correct rate of Type I error and good power. Power increases as the number of objects and the number of congruent data matrices increase; power is higher when the total number of matrices in the study is smaller. To illustrate the method, the proposed test is used to test the hypothesis that matrices representing different types of organoleptic variables (colour, nose, body, palate and finish) in single‐malt Scotch whiskies are congruent.
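A minimal permutation version of such a test can be sketched as follows; this is an illustration of the idea only, and the statistic and permutation scheme of the published test may differ. The statistic sums all pairwise Mantel correlations, and under the null hypothesis the objects of every matrix after the first are relabelled at random.

```python
import numpy as np

def mantel_r(d1, d2):
    # Pearson correlation between upper-triangle entries of two matrices
    iu = np.triu_indices_from(d1, k=1)
    return np.corrcoef(d1[iu], d2[iu])[0, 1]

def congruence_test(mats, n_perm=499, seed=0):
    # Permutation test of congruence among k distance matrices
    rng = np.random.default_rng(seed)
    n = mats[0].shape[0]
    def stat(ms):
        k = len(ms)
        return sum(mantel_r(ms[i], ms[j])
                   for i in range(k) for j in range(i + 1, k))
    obs = stat(mats)
    hits = 1  # count the observed statistic itself
    for _ in range(n_perm):
        perm_mats = [mats[0]] + [
            m[np.ix_(p, p)] for m in mats[1:]
            for p in [rng.permutation(n)]
        ]
        hits += stat(perm_mats) >= obs
    return obs, hits / (n_perm + 1)

# Three noisy distance matrices derived from the same 1-D configuration,
# so they should be judged congruent (small p-value)
rng = np.random.default_rng(3)
pts = rng.normal(size=20)
mats = []
for _ in range(3):
    q = pts + rng.normal(0, 0.1, 20)          # jittered copy of the points
    mats.append(np.abs(np.subtract.outer(q, q)))

obs, pval = congruence_test(mats)
print(f"statistic {obs:.2f}, p = {pval:.3f}")
```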
5.
Maximum likelihood estimation and goodness-of-fit techniques are used within a competing risks framework to obtain maximum likelihood estimates of hazard, density, and survivor functions for randomly right-censored variables. Goodness-of-fit techniques are used to fit distributions to the crude lifetimes, which are used to obtain an estimate of the hazard function, which, in turn, is used to construct the survivor and density functions of the net lifetime of the variable of interest. If only one of the crude lifetimes can be adequately characterized by a parametric model, then semi-parametric estimates may be obtained using a maximum likelihood estimate of one crude lifetime and the empirical distribution function of the other. Simulation studies show that the survivor function estimates from crude lifetimes compare favourably with those given by the product-limit estimator when crude lifetimes are chosen correctly. Other advantages are discussed.
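For the simplest parametric case the censored-data computation looks like this; the sketch assumes exponential lifetimes, which is only one of the distributions a goodness-of-fit step might select, and is not the paper's general framework.

```python
import numpy as np

# With right censoring we observe t = min(T, C) and an event indicator.
# For an exponential lifetime the censored-data MLE of the rate is
# (number of events) / (total observed time at risk), and the fitted
# survivor function is S(t) = exp(-lambda_hat * t).
rng = np.random.default_rng(4)
lam_true, n = 0.5, 5_000
T = rng.exponential(1.0 / lam_true, n)   # lifetime of interest
C = rng.exponential(2.0, n)              # competing censoring time
t_obs = np.minimum(T, C)
event = (T <= C)

lam_hat = event.sum() / t_obs.sum()      # censored-data MLE
S_hat = lambda t: np.exp(-lam_hat * t)   # parametric survivor estimate
print(f"lambda_hat = {lam_hat:.3f}, S(1) = {S_hat(1.0):.3f}")
```

The parametric curve S_hat is the object the paper compares against the product-limit (Kaplan-Meier) estimator.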
6.
On Optimality of Bayesian Wavelet Estimators
We investigate the asymptotic optimality of several Bayesian wavelet estimators, namely the posterior mean, the posterior median and the Bayes Factor, where the prior imposed on wavelet coefficients is a mixture of a point mass at zero and a Gaussian density. We show that in terms of mean squared error, for properly chosen hyperparameters of the prior, all three resulting Bayesian wavelet estimators achieve optimal minimax rates within any prescribed Besov space B^s_{p,q} for p ≥ 2. For 1 ≤ p < 2, the Bayes Factor is still optimal for (2s+2)/(2s+1) ≤ p < 2 and always outperforms the posterior mean and the posterior median, which in this case can achieve only the best possible rates for linear estimators.
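The spike-and-Gaussian prior described here gives a closed-form posterior mean for a single coefficient. The sketch below (all hyperparameter values are assumptions for illustration) shows the resulting shrinkage rule: small observed coefficients are pulled hard toward zero, large ones only mildly.

```python
import numpy as np

def posterior_mean(x, pi0=0.5, sigma=1.0, tau=2.0):
    # Prior on a wavelet coefficient: pi0 * delta_0 + (1-pi0) * N(0, tau^2);
    # observation x = theta + N(0, sigma^2). The posterior mean equals the
    # posterior probability that the coefficient is nonzero times the
    # linear shrinkage tau^2 / (sigma^2 + tau^2) * x.
    s2, t2 = sigma ** 2, tau ** 2
    def gauss(v):  # N(0, v) density evaluated at x
        return np.exp(-x ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)
    p_nonzero = (1 - pi0) * gauss(s2 + t2) / (
        pi0 * gauss(s2) + (1 - pi0) * gauss(s2 + t2))
    return p_nonzero * t2 / (s2 + t2) * x

# A small coefficient is shrunk almost to zero; a large one survives
print(posterior_mean(0.2), posterior_mean(6.0))
```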
7.
We describe an image reconstruction problem and the computational difficulties arising in determining the maximum a posteriori (MAP) estimate. Two algorithms for tackling the problem, iterated conditional modes (ICM) and simulated annealing, are usually applied pixel by pixel. The performance of this strategy can be poor, particularly for heavily degraded images, and as a potential improvement Jubb and Jennison (1991) suggest the cascade algorithm in which ICM is initially applied to coarser images formed by blocking squares of pixels. In this paper we attempt to resolve certain criticisms of cascade and present a version of the algorithm extended in definition and implementation. As an illustration we apply our new method to a synthetic aperture radar (SAR) image. We also carry out a study of simulated annealing, with and without cascade, applied to a more tractable minimization problem from which we gain insight into the properties of cascade algorithms.
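Plain pixel-by-pixel ICM, the baseline that cascade improves on, can be sketched on a toy binary image; the energy parameters below are assumptions, and the coarse-to-fine cascade refinement itself is not reproduced here.

```python
import numpy as np

def icm_denoise(y, beta=1.5, h=1.0, sweeps=5):
    # Pixel-by-pixel ICM for a +/-1 image under an Ising prior: each
    # pixel is set to the value minimising its local energy
    #   -h * x * y  -  beta * x * (sum of 4-neighbours)
    # given the current labels of its neighbours (greedy, no annealing).
    x = y.copy()
    for _ in range(sweeps):
        for i in range(x.shape[0]):
            for j in range(x.shape[1]):
                nb = sum(x[a, b]
                         for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                         if 0 <= a < x.shape[0] and 0 <= b < x.shape[1])
                x[i, j] = 1 if h * y[i, j] + beta * nb >= 0 else -1
    return x

rng = np.random.default_rng(5)
truth = np.ones((24, 24), dtype=int)
truth[:, :12] = -1                         # two-region test image
noisy = np.where(rng.random(truth.shape) < 0.2, -truth, truth)
restored = icm_denoise(noisy)
print("noisy errors:", (noisy != truth).sum(),
      "restored errors:", (restored != truth).sum())
```

ICM converges fast but only to a local optimum, which is exactly why blockwise coarse-to-fine strategies like cascade help on heavily degraded images.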
8.
We propose some estimators of noncentrality parameters which improve upon usual unbiased estimators under quadratic loss. The distributions we consider are the noncentral chi-square and the noncentral F. However, we give more general results for the family of elliptically contoured distributions and propose a robust dominating estimator.
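The dominance phenomenon is easy to see by simulation in the chi-square case. The sketch below compares the usual unbiased estimator X − p with its truncation at zero, the simplest dominating improvement; the paper's proposed estimators are more refined than this.

```python
import numpy as np

# For X ~ chi^2_p(lambda), the unbiased estimator of the noncentrality
# is X - p, which can be negative even though lambda >= 0. Truncating
# at zero, max(X - p, 0), moves every negative estimate closer to the
# true value, so its quadratic risk is uniformly smaller.
rng = np.random.default_rng(6)
p, lam, n = 6, 1.0, 200_000
X = rng.noncentral_chisquare(p, lam, n)

unbiased = X - p
truncated = np.maximum(X - p, 0.0)
mse_u = np.mean((unbiased - lam) ** 2)
mse_t = np.mean((truncated - lam) ** 2)
print(f"MSE unbiased: {mse_u:.3f}, MSE truncated: {mse_t:.3f}")
```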
9.
Previous investigations have shown that a social choice function which is partially implementable must be characterized by pervasive veto power. This paper investigates how much additional latitude in the design of social choice functions, and how much relief from this veto result, can be achieved by examining multi-valued social choice rules and relaxing the requirement of partial implementability to a requirement that we call weak partial implementability. We find that the power structures which characterize partially implementable social choice functions, including the veto properties, also characterize weakly partially implementable social choice rules. The conclusion is that invoking multi-valuedness gains little towards the implementation of appealing social choice rules in strong Nash equilibria. Our results apparently exhaust the possibilities for implementation in strong Nash equilibrium. If any implementation possibility results are to be achieved, they can apparently come only by weakening the equilibrium requirement.
10.
Against a background of mass unemployment and the increasing precariousness of employment, the current changes in the labour market demand that we examine the previously central role of paid employment in social integration. If the individual now deprived of employment, or holding only a temporary contract, is thereby less included in society, or less secure for want of a permanent salary, does this mean that he embodies the face of the disaffiliated, of those who miss out on social benefits? Taking the example of Reunion Island, which has the country's highest levels both of unemployment and of population covered by the minimum income (RMI: Revenu Minimum d'Insertion), this article endeavours to explain and understand how, in the context of a protective society and an increasingly flexible labour market, new modes of social integration, both outside of and arising from paid employment, are being formed.