Subscription full text: 2,550 articles
Free full text: 160 articles
Management: 275 articles
Ethnology: 35 articles
Demography: 298 articles
Collected works: 2 articles
Theory and methodology: 221 articles
General: 59 articles
Sociology: 1,282 articles
Statistics: 538 articles
2024: 3 articles
2023: 38 articles
2022: 34 articles
2021: 45 articles
2020: 118 articles
2019: 106 articles
2018: 210 articles
2017: 236 articles
2016: 184 articles
2015: 109 articles
2014: 137 articles
2013: 459 articles
2012: 223 articles
2011: 110 articles
2010: 87 articles
2009: 81 articles
2008: 85 articles
2007: 64 articles
2006: 49 articles
2005: 45 articles
2004: 37 articles
2003: 45 articles
2002: 30 articles
2001: 25 articles
2000: 15 articles
1999: 11 articles
1998: 6 articles
1997: 11 articles
1996: 8 articles
1995: 9 articles
1994: 10 articles
1993: 4 articles
1992: 9 articles
1991: 8 articles
1990: 7 articles
1989: 9 articles
1988: 8 articles
1987: 4 articles
1986: 5 articles
1985: 4 articles
1984: 3 articles
1979: 2 articles
1978: 2 articles
1977: 2 articles
1975: 2 articles
1974: 1 article
1973: 3 articles
1972: 1 article
1971: 2 articles
1967: 1 article
A total of 2,710 results were found.
371.
The distribution of the ratio of two independent normal random variables X and Y is heavy tailed and has no moments. The shape of its density can be unimodal or bimodal, symmetric or asymmetric, and can even resemble a normal distribution close to its mode. To our knowledge, conditions for a reasonable normal approximation to the distribution of Z = X/Y have been presented in the scientific literature only through simulations and empirical results. A proof of the existence of a proposed normal approximation to the distribution of Z, in an interval I centered at β = E(X)/E(Y), is given here for the case where X and Y are independent, have positive means, and their coefficients of variation fulfill certain conditions. In addition, a graphical, informative way of assessing the closeness of the distribution of a particular ratio X/Y to the proposed normal approximation is suggested by means of a receiver operating characteristic (ROC) curve.
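As a rough illustration of the kind of approximation described above (not the paper's proof or its exact conditions), the following sketch simulates Z = X/Y for normals with positive means and small coefficients of variation, then compares the empirical CDF with a delta-method normal approximation centered at β = E(X)/E(Y). All parameter values are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical parameters: positive means, small coefficients of variation.
mu_x, sigma_x = 10.0, 1.0    # delta_x = 0.10
mu_y, sigma_y = 5.0, 0.25    # delta_y = 0.05

rng = np.random.default_rng(0)
x = rng.normal(mu_x, sigma_x, 1_000_000)
y = rng.normal(mu_y, sigma_y, 1_000_000)
z = x / y

# Delta-method normal approximation around beta = E(X)/E(Y).
beta = mu_x / mu_y
delta_x, delta_y = sigma_x / mu_x, sigma_y / mu_y
sd_z = beta * np.sqrt(delta_x**2 + delta_y**2)

# Compare empirical and approximate CDFs on an interval centered at beta.
grid = np.linspace(beta - 3 * sd_z, beta + 3 * sd_z, 201)
ecdf = np.searchsorted(np.sort(z), grid) / z.size
approx = stats.norm.cdf(grid, loc=beta, scale=sd_z)
print("max CDF discrepancy near beta:", np.max(np.abs(ecdf - approx)))
```

With small coefficients of variation as above, the discrepancy on the interval around β should be small; it grows as the coefficient of variation of Y increases.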
372.
The two well-known and widely used multinomial selection procedures, Bechhofer, Elmaghraby, and Morse (BEM) and all-vector comparison (AVC), are critically compared in applications related to simulation optimization problems.

Two configurations of population probability distributions were studied, in which the best system has the greatest probability p_i of yielding the largest value of the performance measure and either does or does not have the largest expected performance measure.

Our simulation results clearly show that neither of the studied procedures outperforms the other in all situations. The user must take into consideration the complexity of the simulations and the distributional properties of the performance measure when deciding which procedure to employ.

An important finding was that AVC fails in populations in which the best system has the greatest probability p_i of yielding the largest value of the performance measure but does not have the largest expected performance measure.
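To make the setting concrete: both procedures ultimately rest on the win probabilities p_i, the chance that system i produces the largest observation in a replication. The hypothetical sketch below constructs exactly the troublesome configuration, where one system wins most often while another has the larger mean, and estimates the p_i by vector comparison. It illustrates the setting only; it is not an implementation of BEM or AVC.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical systems: system 0 wins most often (largest p_i) even though
# system 1 has the larger mean -- the configuration where AVC was reported
# to break down.
def sample_systems(n):
    s0 = rng.normal(10.0, 1.0, n)             # modest mean, low spread
    s1 = np.where(rng.random(n) < 0.3,        # mixture: rare huge values...
                  rng.normal(40.0, 1.0, n),
                  rng.normal(5.0, 1.0, n))    # ...but usually small ones
    return np.column_stack([s0, s1])

obs = sample_systems(100_000)

# Vector-comparison tally: which system wins each replication?
wins = np.bincount(obs.argmax(axis=1), minlength=2)
p_hat = wins / obs.shape[0]

print("estimated win probabilities p_i:", p_hat)  # system 0 wins ~70%
print("sample means:", obs.mean(axis=0))          # system 1 has larger mean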
373.
In the time series literature, recent interest has focused on the so-called subspace methods. These techniques use canonical correlations and linear regressions to estimate the system matrices of an ARMAX model expressed in state-space form. In this article, we use subspace methods to forecast two series with the help of some exogenous variables related to them. We compare the results with those obtained using traditional transfer function models and find that the forecasts obtained with both methods are similar. This result is very encouraging because, in contrast to transfer function models, subspace methods can be considered almost automatic.
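To make the "almost automatic" flavour concrete, here is a deliberately simplified subspace-style sketch (Hankel matrices, a projection of future on past, an SVD, and two least-squares regressions). It omits the exogenous inputs, the canonical-correlation weighting, and the noise modelling of a full ARMAX subspace method, so it is a toy under stated assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate an AR(2) output series as a stand-in for real data.
n = 2000
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

# Block Hankel matrices of past and future outputs.
i = 10                                   # horizon (user choice)
N = n - 2 * i
Yp = np.array([y[t : t + i]         for t in range(N)]).T   # past
Yf = np.array([y[t + i : t + 2 * i] for t in range(N)]).T   # future

# Project the future onto the past (a linear regression), then take an SVD:
# the dominant singular directions span the estimated state sequence.
proj = Yf @ Yp.T @ np.linalg.pinv(Yp @ Yp.T) @ Yp
U, s, Vt = np.linalg.svd(proj, full_matrices=False)
order = 2                                # model order (known here by design)
X = np.diag(np.sqrt(s[:order])) @ Vt[:order]   # estimated states

# Recover the system matrices by least squares.
A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])             # state transition
C = y[i : i + N].reshape(1, -1) @ np.linalg.pinv(X)  # observation map

# h-step forecasts: propagate the last state forward.
x_t = X[:, -1]
forecasts = [(C @ np.linalg.matrix_power(A, h) @ x_t).item() for h in (1, 2, 3)]
print("eigenvalues of A:", np.linalg.eigvals(A))
print("1-3 step forecasts:", forecasts)
```

Note that apart from the horizon i and the model order (which can itself be read off the singular value spectrum), nothing here requires the manual identification stage that transfer function modelling demands.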
374.
A common Bayesian hierarchical model is one in which high-dimensional observed data depend on high-dimensional latent variables that, in turn, depend on relatively few hyperparameters. When the full conditional distribution over the latent variables has a known form, general MCMC sampling need only be performed on the low-dimensional marginal posterior distribution over the hyperparameters. This improves on the popular Gibbs sampler, which computes over the full space. Sampling the marginal posterior over hyperparameters exhibits good scaling of computational cost with data size, particularly when that distribution depends on a low-dimensional sufficient statistic.
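A minimal sketch of the idea, assuming a toy conjugate hierarchy in which the latent variables integrate out analytically: the Metropolis chain below runs only on the two hyperparameters, and the marginal likelihood touches the data through a fixed low-dimensional sufficient statistic, so the per-iteration cost does not grow with the number of latent variables.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy hierarchy: y_i | theta_i ~ N(theta_i, 1), theta_i | mu, tau ~ N(mu, tau^2).
# Marginalising theta_i gives y_i ~ N(mu, 1 + tau^2), so the hyperparameter
# posterior depends on the data only through n, the mean, and a sum of squares.
y = rng.normal(2.0, np.sqrt(1 + 0.5**2), size=10_000)
n, ybar, ss = y.size, y.mean(), ((y - y.mean()) ** 2).sum()

def log_marginal(mu, log_tau):
    v = 1.0 + np.exp(2.0 * log_tau)           # marginal variance 1 + tau^2
    return -0.5 * n * np.log(v) - (ss + n * (ybar - mu) ** 2) / (2.0 * v)

# Random-walk Metropolis on (mu, log tau) with flat (improper) priors --
# a sketch, not a recommendation.
state = np.array([ybar, 0.0])
cur = log_marginal(*state)
draws = []
for _ in range(20_000):
    prop = state + rng.normal(scale=0.02, size=2)
    new = log_marginal(*prop)
    if np.log(rng.random()) < new - cur:
        state, cur = prop, new
    draws.append(state.copy())

draws = np.array(draws[5_000:])               # drop burn-in
print("posterior means (mu, tau):", draws[:, 0].mean(), np.exp(draws[:, 1]).mean())
```

A full Gibbs sampler would instead update all 10,000 latent theta_i every iteration; here they never enter the chain at all.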
375.
We propose an extension of parametric product partition models (PPMs). We name our proposal nonparametric product partition models because we associate a random measure, instead of a parametric kernel, with each set within a random partition. Our methodology does not impose any specific form on the marginal distribution of the observations, allowing us to detect shifts in behaviour even when dealing with heavy-tailed or skewed distributions. We propose a suitable loss function and find the partition of the data having minimum expected loss. We then apply our nonparametric procedure to multiple change-point analysis and compare it with PPMs and with other methodologies that have recently appeared in the literature. Also, in the context of missing data, we exploit the product partition structure in order to estimate the distribution function of each missing value, allowing us to detect change points using the loss function mentioned above. Finally, we present applications to financial as well as genetic data.
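The "minimum expected loss" step can be illustrated generically. Since the abstract does not spell out its loss function, the sketch below uses the common Binder loss over pairwise co-clustering probabilities as a stand-in, applied to hypothetical posterior draws of partition labels (in practice these would come from an MCMC sampler over partitions).

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical posterior draws of partition labels for 8 observations:
# a change point after position 4, blurred by posterior uncertainty.
base = np.array([0, 0, 0, 0, 1, 1, 1, 1])
draws = []
for _ in range(500):
    lab = base.copy()
    flip = rng.random(8) < 0.1            # occasional label flips
    lab[flip] = 1 - lab[flip]
    draws.append(lab)
draws = np.array(draws)

# Pairwise co-clustering probabilities p_ij = P(i and j in the same block).
same = (draws[:, :, None] == draws[:, None, :]).mean(axis=0)

def binder_loss(labels, p):
    """Expected Binder loss of a candidate partition against p_ij."""
    co = (labels[:, None] == labels[None, :]).astype(float)
    return np.abs(co - p).sum() / 2.0      # halved: each pair counted twice

# Pick, among the sampled partitions, the one minimising expected loss.
losses = [binder_loss(d, same) for d in draws]
best = draws[int(np.argmin(losses))]
print("selected partition:", best)         # recovers the change point
```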
376.
The 2 × 2 tables used to present the data in an experiment for comparing two proportions, by means of two observations from two independent binomial distributions, may appear simple but are not. The debate about the best method to use is unending and has divided statisticians into practically irreconcilable groups. In this article, all the available non-asymptotic tests are reviewed (except the Bayesian methodology). The author states which test is optimal (for each group), refers to the tables and programs that exist for them, and contrasts the arguments used by supporters of each option. The author also sorts the tangle of solutions into "families", based on the methodology used and/or the prior assumptions, and points out the most frequent methodological mistakes committed when comparing the different families.
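Purely as an illustration of how competing families answer the same question differently (the review itself does not endorse a single test), here is a sketch using SciPy's implementations of three non-asymptotic tests on a hypothetical table; the data and the selection of tests are ours, not the article's.

```python
import numpy as np
from scipy import stats

# A hypothetical 2x2 table: rows are treatments, columns are
# successes/failures from two independent binomial samples.
table = np.array([[7, 12],
                  [1, 18]])

# Three non-asymptotic "families" (SciPy >= 1.7 assumed for the
# unconditional tests):
_, fisher_p = stats.fisher_exact(table)     # conditional exact test
barnard = stats.barnard_exact(table)        # unconditional exact test
boschloo = stats.boschloo_exact(table)      # Boschloo's refinement of Fisher

print(f"Fisher   p = {fisher_p:.4f}")
print(f"Barnard  p = {barnard.pvalue:.4f}")
print(f"Boschloo p = {boschloo.pvalue:.4f}")
```

Even on this small table, the conditional and unconditional p-values can straddle a conventional significance threshold, which is precisely the kind of disagreement that fuels the debate.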
377.
In this article, we give the asymptotic mean integrated squared error and the mean squared error for the kernel estimator of the hazard rate from truncated and censored data. Martingale techniques and combinatory calculus are used to obtain these results. A probability bound and the optimal bandwidth choice are also given.
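As a companion to the abstract, the sketch below implements a generic kernel hazard-rate estimator (a smoothed Nelson-Aalen estimator) for right-censored data. The simulated data, Gaussian kernel, and bandwidth are illustrative assumptions; the paper's truncation handling, MISE expansions, and optimal bandwidth are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical right-censored sample: true lifetimes are exponential(rate 1),
# censoring times are exponential with mean 2; delta = 1 marks an event.
t_true = rng.exponential(1.0, 500)
c = rng.exponential(2.0, 500)
t = np.minimum(t_true, c)
delta = (t_true <= c).astype(float)

def kernel_hazard(grid, t, delta, b):
    """Smoothed Nelson-Aalen estimator: convolve the increments
    delta_i / Y(t_i) of the cumulative hazard with a Gaussian kernel."""
    order = np.argsort(t)
    t, delta = t[order], delta[order]
    at_risk = np.arange(len(t), 0, -1)            # Y(t_i) for sorted times
    increments = delta / at_risk
    u = (grid[:, None] - t[None, :]) / b
    K = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)  # Gaussian kernel
    return (K * increments[None, :]).sum(axis=1) / b

grid = np.linspace(0.2, 2.0, 10)
print(np.round(kernel_hazard(grid, t, delta, b=0.3), 3))  # true hazard is 1
```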
378.
379.
380.
When estimating loss distributions in insurance, large and small losses are usually split because it is difficult to find a simple parametric model that fits all claim sizes. This approach involves determining the threshold level between large and small losses. In this article, a unified approach to the estimation of loss distributions is presented. We propose an estimator obtained by transforming the data set with a modification of the Champernowne cdf and then estimating the density of the transformed data with the classical kernel density estimator. We investigate the asymptotic bias and variance of the proposed estimator. In a simulation study, the proposed method shows good performance. We also present two applications dealing with claim costs in insurance.
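The transformation-kernel idea can be sketched as follows, assuming the modified Champernowne cdf with the sample median as its location parameter; the shape parameters are fixed by hand rather than estimated, and the plain Gaussian KDE here ignores the boundary correction a production estimator would use.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
claims = rng.pareto(2.5, 2000) + 1.0   # hypothetical heavy-tailed losses

# Modified Champernowne cdf T(x) = ((x+c)^a - c^a) /
#   ((x+c)^a + (M+c)^a - 2 c^a), with M the sample median; a and c are
# fixed here for simplicity rather than estimated from the data.
M = np.median(claims)
a, c = 1.5, 0.1

def T(x):
    return ((x + c)**a - c**a) / ((x + c)**a + (M + c)**a - 2 * c**a)

def T_prime(x):
    D = (x + c)**a + (M + c)**a - 2 * c**a
    return a * (x + c)**(a - 1) * ((M + c)**a - c**a) / D**2

# KDE on the transformed (0, 1) data, then back-transform the density:
# f(x) = g(T(x)) * T'(x), by the change-of-variables formula.
g = stats.gaussian_kde(T(claims))
x = np.array([1.5, 5.0, 20.0, 100.0])
f_hat = g(T(x)) * T_prime(x)
print(np.round(f_hat, 5))
```

The transform squeezes the heavy tail into (0, 1), so a single kernel estimator covers small and large claims at once, with no threshold to choose.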