271.
The EM algorithm is a popular method for computing maximum likelihood estimates or posterior modes in models that can be formulated in terms of missing data or latent structure. Although easy implementation and stable convergence help to explain the popularity of the algorithm, its convergence is sometimes notoriously slow. In recent years, however, various adaptations have significantly improved the speed of EM while maintaining its stability and simplicity. One especially successful method for maximum likelihood is known as the parameter expanded EM or PXEM algorithm. Unfortunately, PXEM does not generally have a closed form M-step when computing posterior modes, even when the corresponding EM algorithm is in closed form. In this paper we confront this problem by adapting the one-step-late EM algorithm to PXEM to establish a fast closed form algorithm that improves on the one-step-late EM algorithm by ensuring monotone convergence. We use this algorithm to fit a probit regression model and a variety of dynamic linear models, showing computational savings of as much as 99.9%, with the biggest savings occurring when the EM algorithm is the slowest to converge.
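The iteration the paper accelerates has the standard E-step/M-step shape. Below is a minimal illustrative EM for a two-component Gaussian mixture; this toy model is our own choice, not the paper's probit or dynamic linear models, and the PXEM and one-step-late variants add steps on top of iterations like these.

```python
import numpy as np

def em_gaussian_mixture(x, n_iter=50):
    """Minimal EM for a two-component Gaussian mixture.

    Illustrative only: PXEM / one-step-late variants accelerate
    iterations of exactly this E-step / M-step form.
    """
    mu = np.array([x.min(), x.max()], dtype=float)   # crude initialisation
    sigma = np.array([x.std(), x.std()])
    mix = np.array([0.5, 0.5])                       # mixing weights
    loglik = []
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = (np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
                / (sigma * np.sqrt(2.0 * np.pi)))
        w = mix * dens
        loglik.append(np.log(w.sum(axis=1)).sum())   # monotone under EM
        r = w / w.sum(axis=1, keepdims=True)
        # M-step: closed-form weighted updates
        nk = r.sum(axis=0)
        mix = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return mu, sigma, mix, loglik
```

The recorded log-likelihood sequence is non-decreasing, which is the monotonicity property the paper's closed-form algorithm is designed to preserve.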
272.
One of the common approaches to the problem of scaling and combining examination marks has its roots in the least squares tradition. In this framework, examiners' preconceptions about the transformations must be dealt with in an ad hoc way. This paper investigates another, likelihood-based, approach which allows preconceptions to be handled by standard Bayesian techniques. The likelihood and least squares approaches are not directly parallel (essentially because a Jacobian must be included in the likelihood). Nonetheless, the device of introducing fictitious candidates to deal with prior beliefs in the least squares set-up can be understood in a Bayesian sense.
273.
We propose a general family of nonparametric mixed effects models. Smoothing splines are used to model the fixed effects and are estimated by maximizing the penalized likelihood function. The random effects are generic and are modelled parametrically by assuming that the covariance function depends on a parsimonious set of parameters. These parameters and the smoothing parameter are estimated simultaneously by the generalized maximum likelihood method. We derive a connection between a nonparametric mixed effects model and a linear mixed effects model. This connection suggests a way of fitting a nonparametric mixed effects model by using existing programs. The classical two-way mixed models and growth curve models are used as examples to demonstrate how to use smoothing spline analysis-of-variance decompositions to build nonparametric mixed effects models. Similarly to the classical analysis of variance, components of these nonparametric mixed effects models can be interpreted as main effects and interactions. The penalized likelihood estimates of the fixed effects in a two-way mixed model are extensions of James–Stein shrinkage estimates to correlated observations. In an example three nested nonparametric mixed effects models are fitted to a longitudinal data set.
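To fix ideas about penalized-likelihood smoothing of this kind, here is a crude sketch using a truncated-line basis with a ridge penalty. The basis, the penalty, and all names here are our simplification, not the paper's smoothing spline analysis-of-variance construction.

```python
import numpy as np

def penalized_spline_fit(x, y, knots, lam):
    """Penalized least squares with a truncated-line basis: a crude
    stand-in for penalized-likelihood smoothing splines.  The basis
    and ridge penalty are our assumptions, not the authors' method."""
    basis = np.column_stack([np.ones_like(x), x]
                            + [np.maximum(x - k, 0.0) for k in knots])
    pen = np.zeros(basis.shape[1])
    pen[2:] = lam                      # penalise only the knot terms
    coef = np.linalg.solve(basis.T @ basis + np.diag(pen), basis.T @ y)
    return basis @ coef, coef
```

Larger `lam` shrinks the knot coefficients toward zero, pulling the fit toward an ordinary least-squares line, which mirrors how the smoothing parameter trades off fidelity against roughness.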
274.
New robust estimates for variance components are introduced. Two simple models are considered: the balanced one-way classification model with a random factor and the balanced mixed model with one random factor and one fixed factor. However, the method of estimation proposed can be extended to more complex models. The new method of estimation we propose is based on the relationship between the variance components and the coefficients of the least-mean-squared-error predictor between two observations of the same group. This relationship enables us to transform the problem of estimating the variance components into the problem of estimating the coefficients of a simple linear regression model. The variance-component estimators derived from the least-squares regression estimates are shown to coincide with the maximum-likelihood estimates. Robust estimates of the variance components can be obtained by replacing the least-squares estimates by robust regression estimates. In particular, a Monte Carlo study shows that for outlier-contaminated normal samples, the estimates of variance components derived from GM regression estimates and the derived test outperform other robust procedures.
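For reference, the classical (non-robust) ANOVA estimators for the balanced one-way random-effects model look as follows; this is the baseline the paper's robust procedure is designed to improve on, and the implementation below is our sketch under the stated balanced design.

```python
import numpy as np

def oneway_varcomp(y):
    """Classical ANOVA variance-component estimates for the balanced
    one-way random-effects model y[i, j] = mu + a_i + e_ij.
    Non-robust baseline; the paper replaces the least-squares step
    with robust regression."""
    a, n = y.shape                                    # groups, replicates
    group_means = y.mean(axis=1)
    msb = n * ((group_means - y.mean()) ** 2).sum() / (a - 1)      # between
    msw = ((y - group_means[:, None]) ** 2).sum() / (a * (n - 1))  # within
    sigma_e2 = msw
    sigma_a2 = max((msb - msw) / n, 0.0)              # truncate at zero
    return sigma_a2, sigma_e2
```

A single gross outlier inflates both mean squares here, which is exactly the sensitivity that motivates replacing the least-squares step with a GM-type robust regression.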
275.
In non-parametric function estimation selection of a smoothing parameter is one of the most important issues. The performance of smoothing techniques depends highly on the choice of this parameter. Preferably the bandwidth should be determined via a data-driven procedure. In this paper we consider kernel estimators in a white noise model, and investigate whether locally adaptive plug-in bandwidths can achieve optimal global rates of convergence. We consider various classes of functions: Sobolev classes, bounded variation function classes, classes of convex functions and classes of monotone functions. We study the situations of pilot estimation with oversmoothing and without oversmoothing. Our main finding is that simple local plug-in bandwidth selectors can adapt to spatial inhomogeneity of the regression function as long as there are no local oscillations of high frequency. We establish the pointwise asymptotic distribution of the regression estimator with local plug-in bandwidth.
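A Gaussian-kernel regression estimator with a simple data-driven bandwidth can be sketched as below. Note the hedge: this uses one global Silverman-style rule-of-thumb bandwidth, which is our simplification; the paper's local plug-in selectors choose the bandwidth pointwise from a pilot estimate.

```python
import numpy as np

def nadaraya_watson(x, y, grid, h):
    """Gaussian-kernel regression estimate evaluated on `grid`."""
    k = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (k * y).sum(axis=1) / k.sum(axis=1)

def rule_of_thumb_h(x):
    """Global rule-of-thumb bandwidth (Silverman-style constant).
    The paper's local plug-in selectors instead pick h pointwise;
    this single global value is our simplification."""
    return 1.06 * x.std() * len(x) ** (-1.0 / 5.0)
```

A global bandwidth of this kind cannot adapt to spatial inhomogeneity, which is precisely the gap that local plug-in selection is meant to close.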
276.
277.
The Buckley–James (BJ) estimator is known to be consistent and efficient for a linear regression model with censored data. However, its application in practice is handicapped by the lack of a reliable numerical algorithm for finding the solution. For a given data set, the iterative approach may yield multiple solutions, or no solution at all. To alleviate this problem, we modify the induced smoothing approach originally proposed in 2005 by Brown & Wang. The resulting estimating functions become smooth, thus eliminating the tendency of the iterative procedure to oscillate between different parameter values. In addition to facilitating point estimation, the smoothing approach enables easy evaluation of the projection matrix, thus providing a means of calculating standard errors. Extensive simulation studies were carried out to evaluate the performance of different estimators. In general, smoothing greatly alleviates numerical issues that arise in the estimation process. In particular, the one-step smoothing estimator eliminates non-convergence problems and performs similarly to full iteration until convergence. The proposed estimation procedure is illustrated using a dataset from a multiple myeloma study.
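The induced-smoothing device can be illustrated in one line: replace a jump indicator inside the estimating function with a normal CDF, making the function differentiable. This is a generic sketch of the idea, not the BJ-specific implementation from the paper.

```python
import math

def smoothed_indicator(u, h):
    """Induced-smoothing device: replace the jump 1{u > 0} by the
    normal CDF Phi(u / h).  The smoothed version is differentiable,
    so Newton-type iterations no longer oscillate across the jump."""
    return 0.5 * (1.0 + math.erf(u / (h * math.sqrt(2.0))))
```

As the smoothing scale `h` shrinks to zero the smoothed function recovers the original indicator, so the smoothed estimating equations stay close to the unsmoothed ones.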
278.
Organizational research often relies on surrogate variables. By surrogate we do not refer to the family of construct, factor, or latent variables. Rather, we address the situation where one variable is literally the substitute for another variable that is generally unavailable. Consider, for example, the use of intent-to-turnover or intent-to-transfer variables commonly used when actual turnover or transfer data are unavailable. We demonstrate that reliance on such surrogate variables may lead to some misinterpretation. This tendency may be particularly apparent when the relationship between the surrogate and the actual variable is low. This may be further exacerbated when the relationship between the surrogate variable and a third variable is modest as well.
279.
In this work, we study the maximum likelihood (ML) estimation problem for the parameters of the two-piece (TP) distribution based on the scale mixtures of normal (SMN) distributions. This is a family of skewed distributions that also includes the scale mixtures of normal class, and is flexible enough for modeling symmetric and asymmetric data. The ML estimates of the proposed model parameters are obtained via an expectation-maximization (EM)-type algorithm.
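A concrete instance of an EM-type algorithm for a scale mixture of normals is the classic EM for the Student-t distribution with known degrees of freedom, shown here as an illustrative building block. The TP-SMN algorithm in the paper is more general (it also handles skewness and other mixing distributions).

```python
import numpy as np

def t_location_scale_em(x, df=4.0, n_iter=200):
    """EM for the location and scale of a Student-t distribution
    (a scale mixture of normals) with known degrees of freedom.
    Illustrative building block only; the paper's TP-SMN algorithm
    covers skewed two-piece versions of this family."""
    mu, sigma2 = float(np.median(x)), float(x.var())
    for _ in range(n_iter):
        # E-step: expected latent mixing weights given current parameters
        w = (df + 1.0) / (df + (x - mu) ** 2 / sigma2)
        # M-step: weighted location and scale updates
        mu = (w * x).sum() / w.sum()
        sigma2 = (w * (x - mu) ** 2).mean()
    return mu, np.sqrt(sigma2)
```

The weights downweight points far from the current centre, which is why SMN-based EM fits are robust to heavy tails compared with plain Gaussian ML.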
280.
This paper considers constant-stress partially accelerated life tests for series system products, where the dependent Marshall–Olkin (M-O) bivariate exponential distribution is assumed for the components. Based on progressive type-II censored and masked data, the maximum likelihood estimates for the parameters and acceleration factors are obtained by using the decomposition approach. The method can also be applied to the Bayes estimates, which are too complex to obtain in the usual way. Finally, a Monte Carlo simulation study is carried out to verify the accuracy of the methods under different masking probabilities and censoring schemes.
