By access: paid full text 1,566 articles; free 31; domestic free 2.
By subject: management 97; talent studies 1; demography 20; collected works 10; theory and methodology 16; general 105; sociology 24; statistics 1,326.
By year: 2024 (1), 2023 (10), 2022 (6), 2021 (13), 2020 (21), 2019 (66), 2018 (52), 2017 (105), 2016 (50), 2015 (35), 2014 (64), 2013 (487), 2012 (107), 2011 (42), 2010 (44), 2009 (40), 2008 (43), 2007 (44), 2006 (31), 2005 (43), 2004 (36), 2003 (22), 2002 (42), 2001 (19), 2000 (27), 1999 (20), 1998 (20), 1997 (22), 1996 (14), 1995 (5), 1994 (8), 1993 (5), 1992 (7), 1991 (5), 1990 (4), 1989 (3), 1988 (5), 1987 (4), 1986 (6), 1985 (5), 1984 (2), 1983 (2), 1981 (1), 1980 (4), 1979 (4), 1976 (1), 1975 (2).
1,599 results found (search time: 453 ms).
61.
One of the main research areas in Bayesian nonparametrics is the proposal and study of priors which generalize the Dirichlet process. In this paper, we provide a comprehensive Bayesian nonparametric analysis of random probabilities obtained by normalizing random measures with independent increments (NRMI). Special cases of these priors have already been shown to be useful for statistical applications such as mixture models and species sampling problems. However, in order to fully exploit these priors, the derivation of the posterior distribution of NRMIs is crucial: here we achieve this goal and provide explicit and tractable expressions suitable for practical implementation. The posterior distribution of an NRMI turns out to be a mixture with respect to the distribution of a specific latent variable. The analysis is completed by the derivation of the corresponding predictive distributions and by a thorough investigation of the marginal structure. These results allow us to derive a generalized Blackwell–MacQueen sampling scheme, which is then adapted to also cover mixture models driven by general NRMIs.
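The generalized Blackwell–MacQueen scheme for NRMIs is derived in the paper itself; as a point of reference, the following is a minimal Python sketch of the classical Blackwell–MacQueen (Pólya urn) predictive sampler for the Dirichlet process, the best-known special case. The concentration parameter alpha and the base-measure sampler are illustrative choices, not taken from the paper.

import numpy as np

def polya_urn_sample(n, alpha, base_sampler, rng=None):
    # Blackwell-MacQueen (Polya urn) predictive sampler for a Dirichlet
    # process: the classical special case of an NRMI.
    rng = np.random.default_rng(rng)
    draws = []
    for i in range(n):
        # With probability alpha/(alpha + i) draw a fresh atom from the base
        # measure; otherwise reuse one of the i previous values uniformly.
        if rng.random() < alpha / (alpha + i):
            draws.append(base_sampler(rng))
        else:
            draws.append(draws[rng.integers(i)])
    return np.array(draws)

# Example: 1000 draws with concentration 2 and a N(0, 1) base measure.
sample = polya_urn_sample(1000, alpha=2.0, base_sampler=lambda r: r.normal())
print(len(np.unique(sample)), "distinct atoms")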
62.
Nonparametric density estimation in the presence of measurement error is considered. The usual kernel deconvolution estimator seeks to account for the contamination in the data by employing a modified kernel. In this paper a new approach based on a weighted kernel density estimator is proposed. Theoretical motivation is provided by the existence of a weight vector that perfectly counteracts the bias in density estimation without generating an excessive increase in variance. In practice, a data-driven method of weight selection is required. Our strategy is to minimize the discrepancy between a standard kernel estimate from the contaminated data on the one hand, and the convolution of the weighted deconvolution estimate with the measurement error density on the other. We consider a direct implementation of this approach, in which the weights are optimized subject to sum and non-negativity constraints, and a regularized version in which the objective function includes a ridge-type penalty. Numerical tests suggest that weighted kernel estimation can lead to tangible improvements in performance over the usual kernel deconvolution estimator. Furthermore, weighted kernel estimates are free from the problem of negative estimation in the tails that can occur when modified kernels are used. The weighted kernel approach generalizes to multivariate deconvolution density estimation in a very straightforward manner.
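To make the weight-selection strategy concrete, here is a rough Python sketch. It relies on the fact that, under the usual deconvolution-kernel construction, the convolution of the weighted deconvolution estimate with the error density reduces to a weighted standard kernel estimate at the contaminated data points. The grid, bandwidth h, and ridge parameter lam are illustrative assumptions, and the direct SLSQP optimization is only practical for modest sample sizes.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def select_weights(W, h, grid, lam=1e-3):
    # Choose kernel weights w (w >= 0, sum w = 1) so that the weighted
    # standard KDE matches the usual equal-weight KDE of the contaminated
    # data W on a grid, with a ridge penalty pulling w toward 1/n.
    W, grid = np.asarray(W, float), np.asarray(grid, float)
    n = len(W)
    K = norm.pdf((grid[:, None] - W[None, :]) / h) / h   # K[i, j] = K_h(grid_i - W_j)
    target = K.mean(axis=1)                              # equal-weight KDE on the grid

    def objective(w):
        return np.mean((K @ w - target) ** 2) + lam * np.sum((w - 1.0 / n) ** 2)

    res = minimize(objective, x0=np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(0.0, None)] * n,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    return res.x

The weighted deconvolution estimate would then use these weights in place of the uniform weights 1/n.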
63.
Using generalized linear models (GLMs), Jalaludin et al. (2006; J. Exposure Analysis and Epidemiology 16, 225–237) studied the association between the daily number of visits to emergency departments for cardiovascular disease by the elderly (65+) and five measures of ambient air pollution. Bayesian methods provide an alternative approach to classical time series modelling and are starting to be more widely used. This paper applies Bayesian methods to the dataset of Jalaludin et al. (2006) and compares the results with those obtained by Jalaludin et al. (2006) using GLM methods.
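As an illustration of the Bayesian alternative (not the authors' implementation), a self-contained random-walk Metropolis sampler for a Poisson GLM with vague normal priors might look as follows; the design matrix X would hold an intercept plus pollutant and confounder terms.

import numpy as np

def poisson_glm_metropolis(y, X, n_iter=20000, step=0.02, rng=None):
    # Random-walk Metropolis for y ~ Poisson(exp(X @ beta)) with independent
    # N(0, 10^2) priors on the coefficients.
    rng = np.random.default_rng(rng)
    p = X.shape[1]

    def log_post(beta):
        eta = X @ beta
        return np.sum(y * eta - np.exp(eta)) - 0.5 * np.sum(beta ** 2) / 100.0

    beta = np.zeros(p)
    lp = log_post(beta)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        prop = beta + step * rng.standard_normal(p)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            beta, lp = prop, lp_prop
        draws[t] = beta
    return draws[n_iter // 2:]                    # discard the first half as burn-in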
64.
Adjustment for nonresponse should reduce the nonresponse bias without decreasing the precision of the estimates. Adjustments for nonresponse are commonly based on socio-demographic variables, although these variables may be poorly correlated with response propensities and with the variables of interest. Such variables nevertheless have the advantage of being available for all sample units, whether or not they participate in the survey. Alternatively, adjustment for nonresponse can be based on a follow-up survey aimed at sample units which did not participate in the main survey, with variables designed to be correlated with response propensities. However, the information collected through such a follow-up survey is not available for sample members who participated in neither the survey nor its nonresponse follow-up. When used in a nonresponse model for the Swiss European Social Survey 2012, these two sets of variables differ only slightly in their effect on bias correction and on the precision of the estimates, with the follow-up variables performing slightly better. In both cases, the adjustment for nonresponse performs poorly.
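A minimal sketch of the propensity-weighting idea behind such nonresponse adjustments, assuming a logistic response-propensity model on auxiliary variables Z (whether socio-demographic or follow-up variables), is given below; the variable names are illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression

def nonresponse_adjusted_weights(design_w, Z, responded):
    # Model the response indicator on the auxiliary variables Z, then divide
    # the design weights of respondents by their estimated response propensity.
    model = LogisticRegression(max_iter=1000).fit(Z, responded)
    propensity = model.predict_proba(Z)[:, 1]
    adj_w = np.where(responded, design_w / propensity, 0.0)
    return adj_w, propensity

# A weighted mean of a survey variable observed for respondents only is then
#   np.sum(adj_w[responded] * y_resp) / np.sum(adj_w[responded]),
# where y_resp holds the respondents' values.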
65.
In this paper, we propose a new partial correlation, the so-called composite quantile partial correlation, to measure the relationship between two variables given other variables. We further use this correlation to screen variables in ultrahigh-dimensional varying coefficient models. Our proposed method is fast and robust against outliers and can be efficiently employed in both single-index and multiple-index varying coefficient models. Numerical results indicate that our proposed method is preferable.
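The exact composite quantile partial correlation is defined in the paper; the sketch below uses a marginal composite quantile correlation (averaging a standard quantile-correlation formula over several quantile levels) as a stand-in screening utility, so both the formula and the screening rule should be read as assumptions for illustration only.

import numpy as np

def composite_quantile_cor(y, x, taus=(0.25, 0.5, 0.75)):
    # Average of a quantile-correlation formula over quantile levels; a
    # simplified stand-in for the composite quantile *partial* correlation,
    # which would additionally adjust for the remaining covariates.
    vals = []
    for tau in taus:
        psi = tau - (y < np.quantile(y, tau))            # tau - I(y < Q_tau(y))
        vals.append(np.mean(psi * (x - x.mean())) /
                    np.sqrt((tau - tau ** 2) * np.var(x)))
    return np.mean(np.abs(vals))

def screen(y, X, keep):
    # Rank covariates by the utility above and keep the `keep` largest.
    scores = np.array([composite_quantile_cor(y, X[:, j]) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:keep]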
66.
In this paper, we consider a generalisation of the backward simulation method of Duch et al. [New approaches to operational risk modeling. IBM J Res Develop. 2014;58:1–9] to build bivariate Poisson processes with flexible time-correlation structures and to simulate the arrival times of the processes. The proposed backward construction uses the Marshall–Olkin bivariate binomial distribution for the conditional law and some well-known families of bivariate copulas for the joint success probability, in lieu of the typical conditional independence assumption. The resulting bivariate Poisson process can exhibit a variety of time-correlation structures that are commonly observed in real data.
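A simplified illustration of the backward-simulation idea (drawing the terminal counts first, then filling in arrival times as uniform order statistics) is sketched below. It links the Poisson totals with a Gaussian copula rather than the Marshall–Olkin bivariate binomial conditional law used in the paper, so it is only a schematic variant.

import numpy as np
from scipy.stats import norm, poisson

def simulate_bivariate_arrivals(lam1, lam2, rho, T=1.0, rng=None):
    # Draw correlated terminal counts (N1, N2) with a Gaussian copula on
    # Poisson marginals, then place the arrival times as sorted uniforms,
    # since conditional on N(T) = n the arrival times of a Poisson process
    # are distributed as uniform order statistics on [0, T].
    rng = np.random.default_rng(rng)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]])
    u = np.clip(norm.cdf(z), 1e-12, 1 - 1e-12)
    n1 = int(poisson.ppf(u[0], lam1 * T))
    n2 = int(poisson.ppf(u[1], lam2 * T))
    t1 = np.sort(rng.uniform(0.0, T, n1))
    t2 = np.sort(rng.uniform(0.0, T, n2))
    return t1, t2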
67.
68.
A random effects model can account for the lack of fit of a regression model and increase the precision of estimated area-level means. However, when the synthetic mean already provides accurate estimates, the prior distribution may inflate the estimation error. It is therefore desirable to consider an uncertain prior distribution, expressed as a mixture of a one-point distribution and a proper prior distribution. In this paper, we develop an empirical Bayes approach to estimating area-level means using the uncertain prior distribution in the context of a natural exponential family, which we call the empirical uncertain Bayes (EUB) method. The regression models considered in this paper include the Poisson-gamma, the binomial-beta, and the normal-normal (Fay–Herriot) models, which are typically used in small area estimation. We obtain estimators of the hyperparameters based on the marginal likelihood by using a well-known expectation-maximization algorithm and propose EUB estimators of the area means. For risk evaluation of the EUB estimator, we derive a second-order unbiased estimator of the conditional mean squared error using techniques of numerical calculation. Through simulation studies and real data applications, we evaluate the performance of the EUB estimator and compare it with the usual empirical Bayes estimator.
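For orientation, here is a sketch of the ordinary empirical Bayes estimator in the Fay–Herriot (normal–normal) case, with hyperparameters fitted by a crude iterative moment scheme; the EUB method additionally mixes the N(0, A) prior with a point mass at zero, which this sketch does not reproduce.

import numpy as np

def fay_herriot_eb(y, X, D, n_iter=100):
    # Ordinary empirical Bayes in the Fay-Herriot model
    #   y_i = x_i' beta + v_i + e_i,  v_i ~ N(0, A),  e_i ~ N(0, D_i known),
    # with (beta, A) fitted by a crude iterative moment scheme.
    A = np.var(y)                                   # rough starting value for A
    for _ in range(n_iter):
        W = 1.0 / (A + D)                           # inverse marginal variances
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * y))
        resid = y - X @ beta
        A = max(np.mean(resid ** 2 - D), 0.0)       # moment update, truncated at 0
    gamma = A / (A + D)                             # area-specific shrinkage factors
    return gamma * y + (1.0 - gamma) * (X @ beta)   # EB small-area estimates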
69.
We propose methods for detecting structural changes in time series with discrete-valued observations. The detector statistics come in familiar L2-type formulations incorporating the empirical probability generating function. Special emphasis is given to the popular models of integer autoregression and Poisson autoregression. For both models, we mainly study structural changes due to a change in distribution, but we also comment on the classical problem of parameter change. The asymptotic properties of the proposed test statistics are studied under the null hypothesis as well as under alternatives. A Monte Carlo power study of bootstrap versions of the new methods is also included, along with a real data example.
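A schematic version of an L2-type detector built from empirical probability generating functions is sketched below; the weighting, the grid of u values, and the use of the maximizing candidate change point are illustrative choices, and critical values would in practice come from the bootstrap scheme studied in the paper.

import numpy as np

def pgf_change_detector(x, u_grid=None):
    # For each candidate change point k, compare the empirical probability
    # generating functions of x[:k] and x[k:] on a grid of u in (0, 1),
    # weighted by k(n - k)/n^2, and return the largest discrepancy.
    x = np.asarray(x)
    n = len(x)
    if u_grid is None:
        u_grid = np.linspace(0.05, 0.95, 19)
    stats = []
    for k in range(1, n):
        g1 = np.mean(u_grid[None, :] ** x[:k, None], axis=0)   # empirical PGF of x[:k]
        g2 = np.mean(u_grid[None, :] ** x[k:, None], axis=0)   # empirical PGF of x[k:]
        stats.append(k * (n - k) / n ** 2 * np.mean((g1 - g2) ** 2))
    k_hat = int(np.argmax(stats)) + 1              # most likely change location
    return max(stats), k_hat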
70.
In many applications, clustered count data contain excess zeros, and the zero-inflated generalized Poisson mixed (ZIGPM) regression model may be suitable. However, the dispersion in the ZIGPM is often treated as a fixed unknown parameter, and this assumption may not be appropriate in some situations. In this article, a score test for homogeneity of the dispersion parameter in the ZIGPM regression model is developed and the corresponding test statistic is obtained. The sampling distribution and power of the score test statistic are investigated through Monte Carlo simulation. Finally, results from a biological example illustrate the usefulness of the diagnostic statistic.
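The score statistic itself is derived in the paper; the following generic Monte Carlo skeleton of the kind used to study its size and power treats the statistic as a user-supplied placeholder, so every name in it is hypothetical.

import numpy as np

def monte_carlo_power(simulate_data, test_statistic, critical_value,
                      n_rep=1000, rng=None):
    # Empirical rejection rate of a test over n_rep simulated data sets:
    # under the null this estimates the size, under an alternative the power.
    rng = np.random.default_rng(rng)
    rejections = 0
    for _ in range(n_rep):
        data = simulate_data(rng)
        if test_statistic(data) > critical_value:
            rejections += 1
    return rejections / n_rep

# Hypothetical usage with a chi-square_1 critical value at the 5% level:
#   monte_carlo_power(lambda r: r.poisson(2.0, size=200),
#                     my_score_statistic, critical_value=3.841)
# where `my_score_statistic` stands in for the paper's score statistic.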