Sort order: 282 query results found (search time: 15 ms)
171.
Rianne Margaretha Schouten, Peter Lugtig, Gerko Vink. Journal of Statistical Computation and Simulation, 2018, 88(15): 2909-2930
Missing data form a ubiquitous problem in scientific research, especially since most statistical analyses require complete data. To evaluate the performance of methods dealing with missing data, researchers perform simulation studies. An important aspect of these studies is the generation of missing values in a simulated, complete data set: the amputation procedure. We investigated the methodological validity and statistical nature of both the current amputation practice and a newly developed and implemented multivariate amputation procedure. We found that the current way of practice may not be appropriate for the generation of intuitive and reliable missing data problems. The multivariate amputation procedure, on the other hand, generates reliable amputations and allows for a proper regulation of missing data problems. The procedure has additional features to generate any missing data scenario precisely as intended. Hence, the multivariate amputation procedure is an efficient method to accurately evaluate missing data methodology.
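The amputation procedure above is implemented as `ampute()` in the R package mice; the Python sketch below is only a simplified illustration of the core idea (the function name, the weighting scheme, and the deterministic highest-score selection are my own simplifications, not the authors' procedure): each row is scored by a weighted sum of its own values, and high-scoring rows lose values, so the missingness depends on observed data (MAR).

```python
import numpy as np

def amputate(data, weights, prop=0.3, pattern=(False, True)):
    """Generate a MAR missing-data problem in a complete data set.

    Each row gets a score from a weighted sum of its own values; the
    rows with the highest scores are amputated, so missingness depends
    on observed data (MAR).  Deterministic top-score selection is used
    here as a stand-in for a logistic selection on the scores.

    data    : (n, p) complete array
    weights : length-p vector driving the MAR dependence
    prop    : proportion of rows to amputate
    pattern : which columns become missing in an amputated row
    """
    data = np.asarray(data, float).copy()
    scores = data @ np.asarray(weights, float)
    n_amp = int(round(prop * len(data)))
    rows = np.argsort(scores)[-n_amp:]       # highest-score rows
    for j, miss in enumerate(pattern):
        if miss:
            data[rows, j] = np.nan
    return data

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
# Missingness in column 1 is driven entirely by column 0 -> MAR
X_amp = amputate(X, weights=[1.0, 0.0], prop=0.3)
```

Because the weights put all their mass on the fully observed column, the result is a clean MAR problem: rows with large first-column values lose their second-column value.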
172.
Nonparametric maximum likelihood estimation of population size based on the counting distribution
Dankmar Böhning, Dieter Schön. Journal of the Royal Statistical Society. Series C, Applied Statistics, 2005, 54(4): 721-737
Summary. The paper discusses the estimation of an unknown population size n. Suppose that an identification mechanism can identify n_obs cases. The Horvitz–Thompson estimator of n adjusts this number by the inverse of 1 − p_0, where the latter is the probability of not identifying a case. When repeated counts of identifying the same case are available, we can use the counting distribution for estimating p_0 to solve the problem. Frequently, the Poisson distribution is used and, more recently, mixtures of Poisson distributions. Maximum likelihood estimation is discussed by means of the EM algorithm. For truncated Poisson mixtures, a nested EM algorithm is suggested and illustrated for several application cases. The algorithmic principles are used to show an inequality, stating that the Horvitz–Thompson estimator of n under the mixed Poisson model is always at least as large as the estimator under a homogeneous Poisson model. In turn, if the homogeneous Poisson model is misspecified, it will, potentially strongly, underestimate the true population size. Examples from various areas illustrate this finding.
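A minimal sketch of the homogeneous-Poisson special case (the paper's nested EM for Poisson mixtures is more involved): the EM below treats the unseen zero-count cases as missing data and returns the Horvitz–Thompson estimate n_obs/(1 − p_0). The function name and convergence settings are illustrative, not from the source.

```python
import numpy as np

def ht_population_size(counts, tol=1e-10, max_iter=1000):
    """Horvitz-Thompson population-size estimate from repeated counts,
    assuming a homogeneous Poisson counting distribution.

    counts : observed counts, all >= 1 (zero-count cases are unseen)
    Returns (N_hat, lambda_hat).
    """
    counts = np.asarray(counts, float)
    n_obs, total = len(counts), counts.sum()
    lam = counts.mean()                       # starting value
    for _ in range(max_iter):
        p0 = np.exp(-lam)
        n0 = n_obs * p0 / (1.0 - p0)          # E-step: expected unseen zeros
        lam_new = total / (n_obs + n0)        # M-step: complete-data MLE
        if abs(lam_new - lam) < tol:
            lam = lam_new
            break
        lam = lam_new
    return n_obs / (1.0 - np.exp(-lam)), lam
```

Replacing the single Poisson rate by a mixture (the paper's nonparametric MLE) can only increase p_0, which is the mechanism behind the inequality stated in the abstract.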
173.
Xiao-Li Meng, David van Dyk. Journal of the Royal Statistical Society. Series B, Statistical Methodology, 1997, 59(3): 511-567
Celebrating the 20th anniversary of the presentation of the paper by Dempster, Laird and Rubin which popularized the EM algorithm, we investigate, after a brief historical account, strategies that aim to make the EM algorithm converge faster while maintaining its simplicity and stability (e.g. automatic monotone convergence in likelihood). First we introduce the idea of a 'working parameter' to facilitate the search for efficient data augmentation schemes and thus fast EM implementations. Second, summarizing various recent extensions of the EM algorithm, we formulate a general alternating expectation–conditional maximization (AECM) algorithm that couples flexible data augmentation schemes with model reduction schemes to achieve efficient computations. We illustrate these methods using multivariate t-models with known or unknown degrees of freedom and Poisson models for image reconstruction. We show, through both empirical and theoretical evidence, the potential for a dramatic reduction in computational time with little increase in human effort. We also discuss the intrinsic connection between EM-type algorithms and the Gibbs sampler, and the possibility of using the techniques presented here to speed up the latter. The main conclusion of the paper is that, with the help of statistical considerations, it is possible to construct algorithms that are simple, stable and fast.
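As a small illustration of the data augmentation behind these methods, here is a plain EM for a univariate t location-scale model with known degrees of freedom (the paper treats the richer multivariate case and its accelerations; this sketch and its names are mine): conditionally on latent gamma weights, the model is Gaussian, which gives closed-form E- and M-steps.

```python
import numpy as np

def t_em(x, nu=4.0, n_iter=200):
    """EM for a univariate t location-scale model with known df nu,
    using the standard normal/gamma data augmentation: conditionally
    on the latent weights w_i, the observations are Gaussian.
    Returns (mu, sigma2)."""
    x = np.asarray(x, float)
    mu, s2 = np.median(x), x.var()
    for _ in range(n_iter):
        w = (nu + 1.0) / (nu + (x - mu) ** 2 / s2)   # E-step weights
        mu = np.sum(w * x) / np.sum(w)               # M-step: location
        s2 = np.sum(w * (x - mu) ** 2) / len(x)      # M-step: scale
    return mu, s2
```

Roughly speaking, the working-parameter idea the paper develops corresponds to dividing the scale update by the sum of the weights instead of n, which yields the same fixed point but markedly faster convergence.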
174.
Liming Cai, Nathaniel Schenker, James Lubitz. Journal of the Royal Statistical Society. Series C, Applied Statistics, 2006, 55(4): 477-491
Summary. To analyse functional status transitions in the older population better, we fit a semi-Markov process model to data from the 1992–2002 Medicare Current Beneficiary Survey. We used an analogue of the stochastic EM algorithm to address the problem of left censoring of spells in longitudinal data. The iterative algorithm converged robustly under various initial values for the unobserved elapsed durations of spells in progress at base-line. Results on life expectancy and recovery from functional limitations based on the semi-Markov process model differ from those based on the traditional multistate life-table method. The proposed treatment of left-censored spells has the potential to expand the modelling capability that is available to researchers in fields where left censoring is a concern.
175.
Nicholas J. Horton, Garrett M. Fitzmaurice. Journal of the Royal Statistical Society. Series C, Applied Statistics, 2002, 51(3): 281-295
Summary. Missing observations are a common problem that complicate the analysis of clustered data. In the Connecticut child surveys of childhood psychopathology, it was possible to identify reasons why outcomes were not observed. Of note, some of these causes of missingness may be assumed to be ignorable, whereas others may be non-ignorable. We consider logistic regression models for incomplete bivariate binary outcomes and propose mixture models that permit estimation assuming that there are two distinct types of missingness mechanisms: one that is ignorable; the other non-ignorable. A feature of the mixture modelling approach is that additional analyses to assess the sensitivity to assumptions about the missingness are relatively straightforward to incorporate. The methods were developed for analysing data from the Connecticut child surveys, where there are missing informant reports of child psychopathology and different reasons for missingness can be distinguished.
176.
M. C. Paik, R. L. Sacco. Journal of the Royal Statistical Society. Series C, Applied Statistics, 2000, 49(1): 145-156
We consider methods for analysing matched case–control data when some covariates (W) are completely observed but other covariates (X) are missing for some subjects. In matched case–control studies, the complete-record analysis discards completely observed subjects if none of their matching cases or controls are completely observed. We investigate an imputation estimate obtained by solving a joint estimating equation for log-odds ratios of disease and parameters in an imputation model. Imputation estimates for coefficients of W are shown to have smaller bias and mean-square error than do estimates from the complete-record analysis.
177.
The maximum likelihood estimates (MLEs) of the parameters of a two-parameter lognormal distribution with left truncation and right censoring are developed through the Expectation Maximization (EM) algorithm. For comparative purposes, the MLEs are also obtained by the Newton–Raphson method. The asymptotic variance-covariance matrix of the MLEs is obtained by using the missing information principle, under the EM framework. Then, using asymptotic normality of the MLEs, asymptotic confidence intervals for the parameters are constructed. Asymptotic confidence intervals are also obtained using the estimated variance of the MLEs by the observed information matrix, and by using the parametric bootstrap technique. Different confidence intervals are then compared in terms of coverage probabilities, through a Monte Carlo simulation study. A prediction problem concerning the future lifetime of a right censored unit is also considered. A numerical example is given to illustrate all the inferential methods developed here.
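As a rough stand-in for the EM and Newton–Raphson routes described above, the left-truncated, right-censored lognormal likelihood can also be maximized directly; the sketch below (function name and optimizer choice are my own assumptions, not from the source) works on the log scale, where the model is a truncated and censored normal.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def lognormal_mle(times, censored, tau):
    """MLE for a two-parameter lognormal under left truncation at tau
    and right censoring, via direct numerical maximization on the log
    scale (log-lifetimes follow a truncated/censored normal model).

    times    : observed lifetimes or censoring times, all > tau
    censored : boolean array, True where the unit is right censored
    """
    z = np.log(np.asarray(times, float))
    cens = np.asarray(censored, bool)
    zt = np.log(tau)

    def nll(theta):
        mu, log_sig = theta
        sig = np.exp(log_sig)                       # keep sigma positive
        ll = norm.logpdf(z[~cens], mu, sig).sum()   # exact lifetimes
        ll += norm.logsf(z[cens], mu, sig).sum()    # censored lifetimes
        ll -= len(z) * norm.logsf(zt, mu, sig)      # left truncation
        return -ll

    res = minimize(nll, x0=[z.mean(), np.log(z.std())], method="Nelder-Mead")
    mu, log_sig = res.x
    return mu, np.exp(log_sig)
```

Each unit's contribution is conditioned on surviving past tau (the truncation term), and censored units contribute a survival probability instead of a density.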
178.
It is well known that a ranked set sample under perfect ranking provides more information than an i.i.d. sample of the same size. Then it may be interesting to study how much information is lost due to imperfect ranking. In this article, we consider some ranking mechanisms and study the loss of the Fisher information according to the degree of imperfect ranking. Then we continue to discuss the optimal combination of the sample size and number of strata in terms of maximizing the Fisher information for the bivariate normal and exponential distributions.
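A small Monte Carlo illustration of the information gain under perfect ranking (the article works with Fisher information analytically; this simulation, with my own parameter choices, only compares the variance of the sample mean under ranked set sampling and simple random sampling for a standard normal):

```python
import numpy as np

def rss_sample(k, m, rng):
    """One ranked set sample: for each rank r = 1..k, draw a set of k
    units and keep the r-th order statistic; repeat for m cycles.
    Ranking is perfect here (we sort the actual values)."""
    out = []
    for _ in range(m):
        for r in range(k):
            out.append(np.sort(rng.normal(size=k))[r])
    return np.asarray(out)

rng = np.random.default_rng(42)
k, m, reps = 5, 4, 2000               # n = k * m = 20 per sample
rss_means = [rss_sample(k, m, rng).mean() for _ in range(reps)]
srs_means = [rng.normal(size=k * m).mean() for _ in range(reps)]
var_rss, var_srs = np.var(rss_means), np.var(srs_means)
```

Under perfect ranking the RSS mean has markedly smaller variance than the SRS mean at the same sample size; imperfect ranking (sorting on a noisy surrogate instead of the true values) would shrink this gap.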
179.
Stochastic ordering is a useful concept in order restricted inferences. In this paper, we propose a new estimation technique for the parameters in two multinomial populations under stochastic orderings when missing data are present. In comparison with the traditional maximum likelihood estimation method, our new method can guarantee the uniqueness of the maximum of the likelihood function. Furthermore, it does not depend on the choice of initial values for the parameters, in contrast to the EM algorithm. Finally, we give the asymptotic distributions of the likelihood ratio statistics based on the new estimation method.
180.
Unbalanced panel data: A survey
This paper surveys the econometrics literature on unbalanced panels. This includes panels with randomly and non-randomly missing observations. In addition, we survey panels with special features including pseudo panels, rotating panels and censored panels.