2,311 results found (search time: 15 ms)
1.
Summary. Social data often contain missing information, and the problem is inevitably severe when analysing historical data. Conventionally, researchers analyse complete records only. Listwise deletion not only reduces the effective sample size but may also result in biased estimation, depending on the missingness mechanism. We analyse household types by using population registers from ancient China (618–907 AD), comparing a simple classification, a latent class model of the complete data and a latent class model of the complete and partially missing data under four types of ignorable and non-ignorable missingness mechanisms. The findings show that both a frequency classification and a latent class analysis using only the complete records yielded biased estimates and incorrect conclusions when the partially missing data arose from a non-ignorable mechanism. Although simply assuming either ignorable or non-ignorable missingness produced consistently similar, higher estimates of the proportion of complex households, specifying the relationship between the latent variable and the degree of missingness through a row-effect uniform association model captured the missingness mechanism better and improved the model fit.
2.
The Hastings–Metropolis algorithm is a general MCMC method for sampling from a density known up to a constant. Geometric convergence of this algorithm has been proved under conditions on the instrumental (or proposal) distribution. We present an inhomogeneous Hastings–Metropolis algorithm for which the proposal density approximates the target density as the number of iterations increases. The proposal density at the nth step is a non-parametric estimate of the density of the algorithm, and uses an increasing number of i.i.d. copies of the Markov chain. The resulting algorithm converges (in n) geometrically faster than a Hastings–Metropolis algorithm with any fixed proposal distribution. The case of a strictly positive density with compact support is presented first, and an extension to more general densities is then given. We conclude by proposing a practical way of implementing the algorithm and illustrate it on simulated examples.
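To make the scheme concrete, here is a minimal Python sketch of an independence Hastings–Metropolis sampler whose proposal at later stages is a kernel density estimate built from the pooled earlier draws, so the proposal approaches the target as the pool grows. The two-component Gaussian target, the gaussian_kde bandwidth, and the three-stage schedule are illustrative assumptions, not the paper's construction.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Unnormalized target: a two-component normal mixture (true mean = 0.8).
def target(x):
    return 0.3 * np.exp(-0.5 * (x + 2.0) ** 2) + 0.7 * np.exp(-0.5 * (x - 2.0) ** 2)

def independence_mh(n_iter, proposal_rvs, proposal_pdf, x0):
    """Independence Metropolis-Hastings with a fixed proposal."""
    x = x0
    out = np.empty(n_iter)
    for i in range(n_iter):
        y = proposal_rvs()
        # Acceptance ratio for an independence proposal.
        a = (target(y) * proposal_pdf(x)) / (target(x) * proposal_pdf(y))
        if rng.uniform() < a:
            x = y
        out[i] = x
    return out

# Stage 0: start from a wide fixed Gaussian proposal.
prop = stats.norm(loc=0.0, scale=5.0)
draws = independence_mh(2000, lambda: prop.rvs(random_state=rng), prop.pdf, x0=0.0)

# Later stages: the proposal is a kernel density estimate of the pooled draws,
# so it approximates the target better as iterations accumulate (inhomogeneous scheme).
pool = draws.copy()
for stage in range(3):
    kde = stats.gaussian_kde(pool)                      # nonparametric proposal
    new = independence_mh(2000,
                          lambda: kde.resample(1, seed=rng)[0, 0],
                          lambda x: kde.evaluate(x)[0],
                          x0=pool[-1])
    pool = np.concatenate([pool, new])

print("target mean estimate (true value 0.8):", pool[2000:].mean())
```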
3.
A Bayesian approach is presented for detecting influential observations using general divergence measures on the posterior distributions. A sampling-based approach using a Gibbs or Metropolis-within-Gibbs method is used to compute the posterior divergence measures. Four specific measures are proposed, which convey the effects of a single observation or covariate on the posterior. The technique is applied to a generalized linear model with binary response data, an overdispersed model and a nonlinear model. An asymptotic approximation of the posterior divergence using the Laplace method is also briefly discussed.
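As a rough illustration of the idea (not the paper's models, measures, or sampler), the sketch below estimates a Kullback–Leibler case-deletion divergence from posterior draws in a toy conjugate normal-mean model, using the standard identity KL = E[log f(y_i|theta)] + log E[1/f(y_i|theta)] under the full posterior. The planted outlier, prior, and sample sizes are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: normal observations with one planted influential point.
y = rng.normal(0.0, 1.0, size=30)
y[0] += 6.0

# Posterior for the mean under a N(0, 10^2) prior and known unit variance
# (conjugate, so exact posterior draws replace a Gibbs run in this toy case).
n = y.size
post_var = 1.0 / (n + 1.0 / 100.0)
post_mean = post_var * y.sum()
theta = rng.normal(post_mean, np.sqrt(post_var), size=20000)

def log_lik_i(yi, theta):
    """Log-likelihood of one observation given posterior draws of the mean."""
    return -0.5 * np.log(2 * np.pi) - 0.5 * (yi - theta) ** 2

# Case-deletion K-L divergence estimated from posterior draws:
#   KL(p(theta|y) || p(theta|y_{-i})) = E[log f(y_i|theta)] + log E[1/f(y_i|theta)]
kl = np.empty(n)
for i in range(n):
    ll = log_lik_i(y[i], theta)
    kl[i] = ll.mean() + np.log(np.mean(np.exp(-ll)))

print("most influential observation:", int(np.argmax(kl)), "KL =", round(kl.max(), 3))
```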
4.
The World Health Organization (WHO) diagnostic criteria for diabetes mellitus were determined in part by evidence that in some populations the plasma glucose level 2 h after an oral glucose load is a mixture of two distinct distributions. We present a finite mixture model that allows the two component densities to be generalized linear models and the mixture probability to be a logistic regression model. The model allows us to estimate the prevalence of diabetes and the sensitivity and specificity of the diagnostic criteria as a function of covariates, and to estimate them in the absence of an external standard. Sensitivity is the probability that a test indicates disease conditionally on disease being present; specificity is the probability that a test indicates no disease conditionally on no disease being present. We obtained maximum likelihood estimates via the EM algorithm and derived the standard errors from the information matrix and by the bootstrap. In the application to data from the diabetes in Egypt project, a two-component mixture model fits well and the two components are interpreted as normal and diabetic. The means and variances are similar to results found in other populations. The minimum misclassification cutpoints decrease with age, are lower in urban areas and are higher in rural areas than the 200 mg/dl cutpoint recommended by the WHO. These differences are modest and our results generally support the WHO criterion. Our methods allow the direct inclusion of concomitant data, whereas past analyses were based on partitioning the data.
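A minimal sketch, assuming a plain two-component normal mixture with a constant mixing probability (the paper's model lets both components be GLMs and the mixing probability a logistic regression): EM estimates the prevalence, and the crossing point of the weighted component densities gives a minimum-misclassification cutpoint. The simulated glucose values are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Simulated 2-h post-load plasma glucose values (mg/dl): a "normal" and a
# "diabetic" component; parameter values are invented for illustration.
x = np.concatenate([rng.normal(120, 25, 800), rng.normal(250, 60, 200)])

# EM for a two-component normal mixture (a stripped-down version of the
# mixture of GLMs: here the mixing probability and both components are constant).
pi, mu, sd = 0.5, np.array([100.0, 200.0]), np.array([30.0, 30.0])
for _ in range(300):
    # E-step: posterior probability that each value comes from component 2.
    d1 = (1 - pi) * norm.pdf(x, mu[0], sd[0])
    d2 = pi * norm.pdf(x, mu[1], sd[1])
    w = d2 / (d1 + d2)
    # M-step: closed-form weighted updates of prevalence, means and SDs.
    pi = w.mean()
    mu = np.array([np.average(x, weights=1 - w), np.average(x, weights=w)])
    sd = np.sqrt(np.array([np.average((x - mu[0]) ** 2, weights=1 - w),
                           np.average((x - mu[1]) ** 2, weights=w)]))

# Minimum-misclassification cutpoint: where the weighted component densities cross.
grid = np.linspace(mu[0], mu[1], 1000)
cut = grid[np.argmin(np.abs((1 - pi) * norm.pdf(grid, mu[0], sd[0])
                            - pi * norm.pdf(grid, mu[1], sd[1])))]
print(f"estimated prevalence: {pi:.2f}, cutpoint: {cut:.0f} mg/dl")
```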
5.
In this paper, a new multivariate zero-inflated binomial (MZIB) distribution is proposed to analyse correlated proportional data with excessive zeros. The distributional properties of the proposed model are studied. Fisher scoring and EM algorithms are given for computing the parameter estimates of the proposed MZIB model with and without covariates. Score tests and likelihood ratio tests are derived for assessing both the zero-inflation and the equality of multiple binomial probabilities in correlated proportional data. A limited simulation study is performed to evaluate the performance of the derived EM algorithms for estimating the parameters of the model with and without covariates, and to compare the nominal levels and powers of the score and likelihood ratio tests. The whitefly data are used to illustrate the proposed methodologies.
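For intuition, the following sketch runs EM for a univariate zero-inflated binomial without covariates; the multivariate structure and the score and likelihood ratio tests of the paper are not reproduced, and the simulated trial count, success probability, and inflation probability are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate zero-inflated binomial counts: m trials per unit, success prob p,
# with an extra point mass at zero (probability phi).
m, true_p, true_phi, n = 20, 0.3, 0.25, 500
z = rng.uniform(size=n) < true_phi
y = np.where(z, 0, rng.binomial(m, true_p, size=n))

# EM for the univariate zero-inflated binomial (a simplified, no-covariate
# version of the MZIB setting).
phi, p = 0.5, 0.5
for _ in range(500):
    # E-step: probability that an observed zero came from the inflation component.
    w = np.where(y == 0, phi / (phi + (1 - phi) * (1 - p) ** m), 0.0)
    # M-step: closed-form updates of the inflation and success probabilities.
    phi = w.mean()
    p = np.sum((1 - w) * y) / (m * np.sum(1 - w))

print(f"estimates: phi = {phi:.3f}, p = {p:.3f}")
```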
6.
Building on an analysis of the proportional-factor method for one-dimensional optimization, this paper proposes an optimal step-size factor method that adjusts the step size directly, and gives a block diagram of the procedure. Compared with the proportional-factor method, the proposed method is theoretically more complete and its iteration control is more reliable.
7.
8.
Summary. We consider the Bayesian analysis of human movement data, where the subjects perform various reaching tasks. A set of markers is placed on each subject and a system of cameras records the three-dimensional Cartesian co-ordinates of the markers during the reaching movement. It is of interest to describe the mean and variability of the curves that are traced by the markers during one reaching movement, and to identify any differences due to covariates. We propose a methodology based on a hierarchical Bayesian model for the curves. An important part of the method is to obtain identifiable features of the movement so that different curves can be compared after temporal warping. We consider four landmarks, with a set of equally spaced pseudo-landmarks located in between. We demonstrate that the algorithm works well in locating the landmarks, and shape analysis techniques are used to describe the posterior distribution of the mean curve. A feature of this type of data is that some parts of the movement data may be missing; the Bayesian methodology is easily adapted to cope with this situation.
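The sketch below illustrates one ingredient only, landmark-based temporal warping: each subject's curve is piecewise-linearly warped so that its landmark times match a common set of four landmark times, with equally spaced pseudo-landmarks placed in between. The one-dimensional toy trajectories and the linear warp are assumptions, not the paper's hierarchical Bayesian model.

```python
import numpy as np

rng = np.random.default_rng(4)

def warp_to_landmarks(t, curve, subject_lms, ref_lms):
    """Piecewise-linear time warp sending a subject's landmark times onto the
    reference landmark times, then resampling the curve on a common grid."""
    warped_t = np.interp(t, subject_lms, ref_lms)       # monotone warp of time
    grid = np.linspace(ref_lms[0], ref_lms[-1], 101)
    return grid, np.interp(grid, warped_t, curve)

# Toy one-dimensional "marker trajectory": a bump whose timing varies by subject.
t = np.linspace(0.0, 1.0, 200)
ref_lms = np.array([0.0, 0.3, 0.7, 1.0])                # four common landmarks
aligned = []
for _ in range(5):
    shift = float(np.clip(rng.normal(0.0, 0.05), -0.2, 0.2))
    curve = np.sin(np.pi * np.clip(t - shift, 0.0, 1.0))
    subject_lms = ref_lms + np.array([0.0, shift, shift, 0.0])
    grid, warped = warp_to_landmarks(t, curve, subject_lms, ref_lms)
    aligned.append(warped)

# Equally spaced pseudo-landmarks between consecutive landmarks, and the
# cross-subject mean curve after registration.
pseudo = np.concatenate([np.linspace(a, b, 6)[1:-1]
                         for a, b in zip(ref_lms[:-1], ref_lms[1:])])
mean_curve = np.mean(aligned, axis=0)
print("mean curve at the first pseudo-landmark:",
      round(float(np.interp(pseudo[0], grid, mean_curve)), 3))
```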
9.
A data-driven approach for modeling volatility dynamics and co-movements in financial markets is introduced. Special emphasis is given to multivariate conditionally heteroscedastic factor models in which the volatilities of the latent factors depend on their past values, and the parameters are driven by regime switching in a latent state variable. We propose an innovative indirect estimation method based on the generalized EM algorithm principle combined with a structured variational approach that can handle models with large cross-sectional dimensions. Extensive Monte Carlo simulations and preliminary experiments with financial data show promising results.
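As a hedged illustration of the data-generating side only, the sketch below simulates a one-factor conditionally heteroscedastic model whose ARCH-type factor-volatility parameters switch with a two-state latent Markov chain. All dimensions, parameter values, and the simple volatility recursion are assumptions; the paper's variational generalized-EM estimator is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate a one-factor conditionally heteroscedastic model whose factor
# volatility parameters switch with a two-state latent Markov chain.
T, N = 1000, 10
P = np.array([[0.98, 0.02],                  # regime transition probabilities
              [0.05, 0.95]])
omega = np.array([0.05, 0.40])               # regime-specific baseline variance
alpha = np.array([0.10, 0.30])               # regime-specific ARCH coefficient
loadings = rng.normal(1.0, 0.3, size=N)

factor = np.zeros(T)
states = np.zeros(T, dtype=int)
state = 0
for t in range(T):
    state = rng.choice(2, p=P[state])                        # regime switch
    past = factor[t - 1] ** 2 if t > 0 else 0.0
    h = omega[state] + alpha[state] * past                   # conditional variance
    factor[t] = np.sqrt(h) * rng.standard_normal()
    states[t] = state

# Observed panel: loadings times the latent factor plus idiosyncratic noise.
returns = np.outer(factor, loadings) + 0.2 * rng.standard_normal((T, N))
print("cross-sectional volatility by regime:",
      round(float(returns[states == 0].std()), 3),
      round(float(returns[states == 1].std()), 3))
```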
10.
Parametric incomplete data models defined by ordinary differential equations (ODEs) are widely used in biostatistics to describe biological processes accurately. Their parameters are estimated on approximate models whose regression functions are evaluated by a numerical integration method. Accurate and efficient estimation of these parameters is a critical issue. This paper proposes parameter estimation methods involving either a stochastic approximation EM algorithm (SAEM) for maximum likelihood estimation, or a Gibbs sampler in the Bayesian approach. Both algorithms involve the simulation of non-observed data from conditional distributions using Hastings–Metropolis (H–M) algorithms. A modified H–M algorithm, including an original local linearization scheme to solve the ODEs, is proposed to reduce the computational time significantly. The convergence of all these algorithms on the approximate model is proved. The errors induced by the numerical solving method on the conditional distribution, the likelihood and the posterior distribution are bounded. The Bayesian and maximum likelihood estimation methods are illustrated on a simulated pharmacokinetic nonlinear mixed-effects model defined by an ODE. Simulation results illustrate the ability of these algorithms to provide accurate estimates.
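The local linearization idea can be sketched in isolation: for a scalar ODE x' = f(x), the step is x_{n+1} = x_n + (exp(J*h) - 1) * f(x_n) / J with J the local Jacobian, which is cheap per step and exact for locally linear dynamics. The Michaelis-Menten elimination model, step size, and comparison against a high-accuracy solver below are illustrative assumptions, not the scheme embedded in the paper's H-M sampler.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Michaelis-Menten elimination, a common nonlinear pharmacokinetic building block.
Vmax, Km, C0 = 2.0, 1.5, 10.0
def f(t, c):
    return -Vmax * c / (Km + c)
def jac(c):
    return -Vmax * Km / (Km + c) ** 2            # df/dc, always negative here

def local_linearization(c0, t_end, h):
    """Scalar local linearization scheme:
    c_{n+1} = c_n + (exp(J*h) - 1) * f(c_n) / J, with J the Jacobian at c_n."""
    c, out = c0, [c0]
    for _ in range(int(round(t_end / h))):
        J = jac(c)
        c = c + np.expm1(J * h) / J * f(0.0, c)
        out.append(c)
    return np.array(out)

h, t_end = 0.25, 5.0
t_grid = np.linspace(0.0, t_end, int(round(t_end / h)) + 1)
approx = local_linearization(C0, t_end, h)
exact = solve_ivp(f, (0.0, t_end), [C0], t_eval=t_grid, rtol=1e-10, atol=1e-12).y[0]
print("max abs error of the local linearization scheme:",
      float(np.max(np.abs(approx - exact))))
```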