Similar Literature
20 similar documents found.
1.
The expectation-maximization (EM) algorithm is a popular approach for obtaining maximum likelihood estimates in incomplete-data problems because of its simplicity and stability (e.g. monotonic increase of the likelihood). However, in many applications the stability of EM is attained at the expense of slow, linear convergence. We have developed a new class of iterative schemes, called squared iterative methods (SQUAREM), to accelerate EM without compromising its simplicity and stability. SQUAREM generally achieves superlinear convergence in problems with a large fraction of missing information. Globally convergent schemes are easily obtained by viewing SQUAREM as a continuation of EM. SQUAREM is especially attractive in high-dimensional problems and in problems where model-specific analytic insights are not available. It can be readily implemented as an 'off-the-shelf' accelerator of any EM-type algorithm, as it requires only the EM parameter-updating scheme. We present four examples to demonstrate the effectiveness of SQUAREM. A general-purpose implementation (written in R) is available.
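The core SQUAREM step treats any EM update as a fixed-point map and squares an extrapolated step. A minimal sketch of the idea (using the SqS3 steplength of Varadhan and Roland, applied to a toy one-parameter mixture EM that is purely illustrative):

```python
import numpy as np
from scipy.stats import norm

def squarem_step(theta, em_update):
    """One SQUAREM acceleration step: two ordinary EM updates are combined
    into a 'squared' extrapolation (steplength scheme S3), followed by a
    stabilizing EM update.  Production implementations also backtrack to
    guarantee feasibility and monotonicity."""
    theta1 = em_update(theta)
    theta2 = em_update(theta1)
    r = theta1 - theta                      # first difference
    v = (theta2 - theta1) - r               # second difference
    norm_v = np.linalg.norm(v)
    if norm_v == 0:                         # already at a fixed point
        return theta2
    alpha = -np.linalg.norm(r) / norm_v     # squared-extrapolation steplength
    return em_update(theta - 2 * alpha * r + alpha**2 * v)

# Toy EM: estimate the weight of a two-component normal mixture with
# known components N(0,1) and N(3,1) (hypothetical example data).
rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(0, 1, 700), rng.normal(3, 1, 300)])
f1, f2 = norm.pdf(x, 0, 1), norm.pdf(x, 3, 1)

def em_update(p):
    w = p * f1 / (p * f1 + (1 - p) * f2)    # E-step: responsibilities
    return np.array([w.mean()])             # M-step: new mixture weight

p = np.array([0.5])
for _ in range(10):
    p = squarem_step(p, em_update)
print(p)   # close to the true weight 0.7
```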

2.
An acceleration of the backpropagation algorithm with momentum (BPM) is introduced. At every stage of the learning process, a local quadratic approximation of the error function is performed and the Hessian matrix of the quadratic function is approximated. An effective learning rate and momentum factor are determined from the maximum and minimum eigenvalues of the approximated Hessian at each step, and the BPM algorithm is modified to work automatically with these effective parameters. The performance of this new approach is demonstrated experimentally, in comparison with well-known training algorithms on standard benchmark problems.
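The eigenvalue-based tuning is in the spirit of Polyak's classical heavy-ball analysis for quadratic surfaces, which gives closed-form "effective" parameters. A minimal sketch (the quadratic test problem and the fixed eigenvalue estimates are assumptions of this sketch; the paper re-estimates them at each step):

```python
import numpy as np

def heavy_ball_params(lam_min, lam_max):
    """Optimal fixed learning rate and momentum for a quadratic error
    surface with Hessian eigenvalues in [lam_min, lam_max] (Polyak's
    classical result for gradient descent with momentum)."""
    lr = 4.0 / (np.sqrt(lam_max) + np.sqrt(lam_min)) ** 2
    mom = ((np.sqrt(lam_max) - np.sqrt(lam_min)) /
           (np.sqrt(lam_max) + np.sqrt(lam_min))) ** 2
    return lr, mom

# Gradient descent with momentum on the quadratic test error 0.5 * w'Aw.
A = np.diag([1.0, 10.0, 100.0])        # Hessian with eigenvalues 1..100
lr, mom = heavy_ball_params(1.0, 100.0)
w, v = np.ones(3), np.zeros(3)
for _ in range(200):
    v = mom * v - lr * (A @ w)         # momentum buffer update
    w = w + v
print(w)                               # approaches the minimizer at 0
```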

3.
《随机性模型》2013,29(2-3):279-302
By using properties of canonical factorizations, we prove that, under very mild assumptions, the shifted cyclic reduction method (SCR) can be applied to solve QBD problems with no breakdown, and that it always converges. For general M/G/1-type Markov chains we prove that SCR always converges if no breakdown is encountered. Numerical experiments showing the acceleration provided by SCR over cyclic reduction are presented.
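For context, the matrix G of a QBD is the minimal nonnegative solution of a quadratic matrix equation. The sketch below shows the naive, linearly convergent functional iteration for G on hypothetical 2x2 blocks; cyclic reduction and SCR compute the same G with quadratic convergence, which is the acceleration the paper quantifies:

```python
import numpy as np

# Hypothetical QBD transition blocks (down, local, up); the rows of
# A0 + A1 + A2 sum to one, and downward drift exceeds upward drift,
# so the chain is positive recurrent and G is stochastic.
A0 = np.array([[0.3, 0.1], [0.2, 0.2]])   # level down
A1 = np.array([[0.2, 0.1], [0.1, 0.2]])   # same level
A2 = np.array([[0.2, 0.1], [0.1, 0.2]])   # level up

# Naive functional iteration for the minimal nonnegative solution of
# G = A0 + A1 G + A2 G^2.  It converges only linearly.
G = np.zeros_like(A0)
for k in range(10_000):
    G_new = A0 + A1 @ G + A2 @ G @ G
    if np.max(np.abs(G_new - G)) < 1e-12:
        break
    G = G_new
print("iterations:", k)
print("residual:", np.max(np.abs(A0 + A1 @ G + A2 @ G @ G - G)))
```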

4.
This contribution discusses various acceleration techniques for fixed-point methods that iteratively find percentage points of a given distribution function. Farnum (1991) suggested a particular fixed-point method for solving this problem; his method is discussed and some of its disadvantages are highlighted. Alternatives are suggested and discussed. In addition, methodology is developed which transforms a linearly convergent fixed-point sequence into one that converges quadratically. This includes a discussion of various forms of Aitken acceleration as well as a parametric form of acceleration.
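As an illustration of this kind of transformation (not Farnum's specific scheme), Aitken's delta-squared, iterated as in Steffensen's method, turns a slowly convergent fixed-point iteration for a normal percentage point into a quadratically convergent one; the fixed-point map g below is a hypothetical choice for the sketch:

```python
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def g(x, p=0.95):
    """A slowly (linearly) convergent fixed-point map whose fixed point
    solves Phi(x) = p; chosen purely for illustration."""
    return x + (p - Phi(x))

def aitken(x0, n_iter=10):
    """Aitken delta-squared acceleration of the sequence x, g(x), g(g(x))."""
    x = x0
    for _ in range(n_iter):
        x1, x2 = g(x), g(g(x))
        denom = x2 - 2.0 * x1 + x
        if abs(denom) < 1e-15:            # sequence has already converged
            return x2
        x = x - (x1 - x) ** 2 / denom     # Steffensen step: quadratic convergence
    return x

print(aitken(0.0))   # ~1.6449, the 95% point of the standard normal
```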

5.
Relative motion between the camera and the object results in the recording of a motion-blurred image. Under certain idealized conditions, such blurring can be mathematically corrected; we refer to this as 'motion deblurring'. We start with some idealized assumptions under which the motion deblurring problem is a linear inverse problem with certain positivity constraints: LININPOS problems, for short. Such problems, even in the case of no statistical noise, can be solved using the maximum likelihood/EM approach in the following sense: if they have a solution, the ML/EM iterative method will converge to it; otherwise, it will converge to the nearest approximation of a solution, where 'nearest' is interpreted in a likelihood sense or, equivalently, in a Kullback-Leibler information divergence sense. We apply the ML/EM algorithm to such problems and discuss certain special cases, such as motion along linear or circular paths with or without acceleration. The idealized assumptions under which the method is developed are hardly ever satisfied in real applications, so we experiment with the method under conditions that violate them. Specifically, we experimented with an image degraded by computer-simulated digital motion blur and corrupted with noise, and with an image of a moving toy cart recorded with a 35 mm camera while in motion. The gross violations of the idealized assumptions, especially in the toy cart example, led to a host of very difficult problems which always occur under real-life conditions and need to be addressed. We discuss these problems in detail and propose some 'engineering solutions' that, when put together, appear to lead to a good methodology for certain motion deblurring problems. The issues we discuss, in various degrees of detail, include estimating the speed of motion, referred to as 'blur identification'; non-zero-background artefacts and pre- and post-processing of the images to remove them; the need to 'stabilize' the solution because of the inherent ill-posedness of the problem; and computer implementation.
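For LININPOS problems with a convolution operator, the ML/EM iteration coincides with Richardson-Lucy deconvolution. A minimal 1-D sketch with an assumed known uniform motion-blur kernel and circular boundary conditions (both assumptions of the sketch):

```python
import numpy as np

def cconv(h_hat, x):
    """Circular convolution via the FFT, given the kernel's transform h_hat."""
    return np.real(np.fft.ifft(h_hat * np.fft.fft(x)))

def richardson_lucy(y, h, n_iter=500, eps=1e-12):
    """ML/EM (Richardson-Lucy) iteration for the LININPOS problem
    y = h * x with x >= 0, assuming a normalized kernel (h.sum() == 1)."""
    h_hat = np.fft.fft(h)
    x = np.full_like(y, y.mean())                  # strictly positive start
    for _ in range(n_iter):
        ratio = y / (cconv(h_hat, x) + eps)        # observed / predicted
        x = x * cconv(np.conj(h_hat), ratio)       # multiplicative EM update
    return x

# Hypothetical 1-D 'scene' blurred by uniform linear motion over 7 pixels.
x_true = np.zeros(128)
x_true[40:45] = 5.0
x_true[80] = 10.0
h = np.zeros(128)
h[:7] = 1.0 / 7.0                                  # motion-blur kernel
y = cconv(np.fft.fft(h), x_true)                   # noiseless blurred data
x_hat = richardson_lucy(y, h)
print(np.max(np.abs(x_hat - x_true)))              # error shrinks with n_iter
```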

7.
This paper introduces a mixture model that combines proportional hazards regression with logistic regression for the analysis of survival data, and describes its parameter estimation via an expectation-maximization algorithm. The mixture model is then applied to analyze the determinants of the timing of intrauterine device (IUD) discontinuation and of long-term IUD use, utilizing 14 639 instances of IUD use by Chinese women. The results show that women's socio-economic and demographic characteristics have different influences on the acceleration or deceleration of the timing of stopping IUD use and on the likelihood of long-term IUD use.
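A stripped-down sketch of the EM idea behind such cure-type mixtures, substituting an exponential hazard and an intercept-only long-term-use probability for the paper's proportional-hazards and logistic components (these simplifications, and the simulated data, are assumptions of the sketch):

```python
import numpy as np

def cure_mixture_em(t, delta, n_iter=100):
    """EM for a two-component mixture: with probability pi a subject is a
    'long-term user' (never discontinues); otherwise the discontinuation
    time is Exponential(lam).  t = observed times; delta = 1 if the event
    (discontinuation) was observed, 0 if censored."""
    pi, lam = 0.5, 1.0 / np.mean(t)            # crude starting values
    for _ in range(n_iter):
        # E-step: posterior probability each censored subject is susceptible.
        surv = np.exp(-lam * t)
        w = np.where(delta == 1, 1.0,
                     (1 - pi) * surv / (pi + (1 - pi) * surv))
        # M-step: closed-form updates given the weights.
        pi = np.mean(1.0 - w)                  # long-term-use fraction
        lam = delta.sum() / np.sum(w * t)      # weighted exponential MLE
    return pi, lam

# Simulated illustration: 30% long-term users, rate-2 discontinuation,
# administrative censoring at time 3.
rng = np.random.default_rng(1)
n = 5000
cured = rng.random(n) < 0.3
time_event = rng.exponential(1 / 2.0, n)
t = np.where(cured, 3.0, np.minimum(time_event, 3.0))
delta = (~cured & (time_event < 3.0)).astype(int)
print(cure_mixture_em(t, delta))   # roughly (0.3, 2.0)
```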

8.
This paper investigates the statistical analysis of grouped accelerated temperature-cycling test data when the product lifetime follows a Weibull distribution. A log-linear acceleration equation is derived from the Coffin-Manson model, and the problem is transformed to a constant-stress accelerated life test with grouped data and multiple acceleration variables. The Jeffreys prior and reference priors are derived. Maximum likelihood estimates and Bayesian estimates with objective priors are obtained by applying the technique of data augmentation. A simulation study shows that both methods perform well when the sample size is large, and that the Bayesian method performs better for small sample sizes.
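A minimal sketch of the grouped-data (interval-censored) Weibull likelihood with a log-linear acceleration equation, maximized numerically; the single stress variable, bin layout and counts are hypothetical, and the paper's Bayesian machinery is omitted:

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, edges, counts, stress):
    """Interval-censored Weibull log-likelihood with log-linear acceleration:
    log(scale) = a + b * stress (e.g. stress = log temperature range, a
    Coffin-Manson-type relation), with a common shape k across levels."""
    a, b, log_k = params
    k = np.exp(log_k)
    ll = 0.0
    for s, cnt in zip(stress, counts):
        scale = np.exp(a + b * s)
        cdf = 1.0 - np.exp(-(edges / scale) ** k)   # Weibull CDF at bin edges
        p = np.diff(cdf)                            # probability of each bin
        ll += np.sum(cnt * np.log(p + 1e-300))
    return -ll

# Hypothetical grouped test data: failure counts per inspection interval
# at two stress levels (the last edge closes the observation window).
edges = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 1e9])
counts = [np.array([5, 12, 18, 10, 5]),     # low stress
          np.array([15, 20, 10, 4, 1])]     # high stress
stress = np.array([np.log(60.0), np.log(100.0)])    # log Delta-T
res = minimize(neg_loglik, x0=[9.0, -1.0, 0.5],
               args=(edges, counts, stress), method="Nelder-Mead")
print(res.x)   # (a, b, log shape)
```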

9.
A review of several statistical methods currently in use for outlier identification is presented, and their performance is compared theoretically for typical statistical distributions of experimental data, taking values derived from the distribution of extreme order statistics as reference terms. A simple modification of a popular, broadly used method based on the box-plot is introduced, in order to overcome a major limitation concerning sample size. Examples are presented applying the methods to two data sets: a historical one concerning the evaluation of an astronomical constant performed by a number of leading observatories, and a substantial database from an ongoing investigation into the absolute measurement of gravity acceleration, which exhibits peculiar aspects concerning outliers. Some problems related to outlier treatment are examined, and the need for both statistical analysis and expert opinion in proper outlier management is underlined.
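The paper's specific modification is not reproduced here; the sketch below shows the standard box-plot rule with a fence multiplier that widens with sample size, one simple way to address the sample-size limitation (the log-based widening is purely illustrative, not the authors' formula):

```python
import numpy as np

def boxplot_outliers(x, k=1.5, size_adjust=True):
    """Standard box-plot (Tukey) rule: flag points beyond Q1 - k*IQR or
    Q3 + k*IQR.  With size_adjust, the fence multiplier grows with the
    sample size so that large samples do not flag ever more legitimate
    extreme points; the log-based widening here is hypothetical."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    if size_adjust:
        k = k + 0.1 * np.log(len(x))     # illustrative adjustment only
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return (x < lo) | (x > hi)

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(0, 1, 10_000), [8.0, -9.0]])
# Flags the two planted outliers (and, rarely, a stray extreme point).
print(np.flatnonzero(boxplot_outliers(data)))
```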

10.
The Fels growth data record, at half-yearly intervals, the heights of children from birth to adulthood, and are the basis for the pediatricians' growth charts used throughout North America. Aspects of human growth are the subject of a large medical and statistical literature. This paper uses smoothing splines to study the variation in height acceleration. Using a functional version of principal components analysis, we find that variation in the acceleration curve is essentially three-dimensional in nature. Evidence for a small growth spurt between the ages of six and eight, reported for data collected in Switzerland, is examined, and little support is found for the existence of this phenomenon in the Fels data.
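A minimal sketch of the pipeline, smoothing splines followed by principal components analysis of the discretized acceleration (second-derivative) curves, run on simulated stand-in data since the Fels records are not reproduced here:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Stand-in data: heights[i, j] = height of child i at age ages[j],
# recorded half-yearly as in the Fels study (values are simulated).
rng = np.random.default_rng(3)
ages = np.arange(1.0, 18.0, 0.5)
n_children = 50
heights = (75 + 5.5 * ages[None, :]                        # overall trend
           + rng.normal(0, 1, (n_children, 1)) * ages      # per-child slope
           + rng.normal(0, 0.3, (n_children, len(ages))))  # measurement noise

# Smooth each child's record, then evaluate the acceleration curve.
grid = np.linspace(2.0, 17.0, 200)
accel = np.empty((n_children, grid.size))
for i in range(n_children):
    spline = UnivariateSpline(ages, heights[i], k=5, s=5.0)  # smoothing level chosen by eye
    accel[i] = spline.derivative(2)(grid)

# Functional PCA = ordinary PCA on the discretized acceleration curves.
centered = accel - accel.mean(axis=0)
_, svals, components = np.linalg.svd(centered, full_matrices=False)
var_explained = svals**2 / np.sum(svals**2)
print(var_explained[:4])   # how many dimensions dominate the variation
```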

11.
An asymptotic series for sums of powers of binomial coefficients is derived, the general term being defined and usable with a computer symbolic language. Sums of squares of coefficients in the symmetric case are shown to have a link with classical moment problems, but this property breaks down for cubes and higher powers. Problems of remainders for the asymptotic series are mentioned. Using the reflection formula for Γ(·), a continuous form for a binomial function is set up, and this becomes oscillatory outside the usual range. A new continued fraction emerges for the logarithm of an adjusted sum of binomial squares. The note is a contribution to the problem of the interpretation of asymptotic series and of processes for their convergence acceleration.
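In the squared symmetric case the sum has the closed form $\sum_k \binom{n}{k}^2 = \binom{2n}{n}$, whose leading asymptotic term $4^n/\sqrt{\pi n}$ is easy to check numerically (a quick illustration, not the paper's general term):

```python
from math import comb, pi, sqrt

for n in (10, 50, 200):
    exact = sum(comb(n, k) ** 2 for k in range(n + 1))
    assert exact == comb(2 * n, n)      # classical Vandermonde identity
    leading = 4**n / sqrt(pi * n)       # first term of the asymptotic series
    print(n, exact / leading)           # ratio tends to 1 as n grows
```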

12.
李强 《统计研究》2012,29(7):3-8
This paper reviews the achievements of the national statistical system's information-technology development work in 2011, sets out the main tasks of the current IT-development effort, and offers suggestions for accelerating the application of modern information technology in statistical work and speeding up the modernization of statistics.

13.
We propose a new criterion for model selection in prediction problems. The covariance inflation criterion adjusts the training error by the average covariance of the predictions and responses, when the prediction rule is applied to permuted versions of the data set. This criterion can be applied to general prediction problems (e.g. regression or classification) and to general prediction rules (e.g. stepwise regression, tree-based models and neural nets). As a by-product we obtain a measure of the effective number of parameters used by an adaptive procedure. We relate the covariance inflation criterion to other model selection procedures and illustrate its use in some regression and classification problems. We also revisit the conditional bootstrap approach to model selection.
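A stylized rendition of the permutation idea (not the authors' exact estimator): the training error is inflated by twice the average covariance between predictions and responses computed over permuted data sets:

```python
import numpy as np

def perm_cov_penalty(X, y, fit_predict, n_perm=200, rng=None):
    """Permutation estimate of the covariance penalty: average covariance
    between in-sample predictions and responses when the prediction rule
    is applied to permuted versions of the data."""
    if rng is None:
        rng = np.random.default_rng()
    total = 0.0
    for _ in range(n_perm):
        y_perm = rng.permutation(y)
        pred = fit_predict(X, y_perm)
        total += np.mean((pred - pred.mean()) * (y_perm - y_perm.mean()))
    return total / n_perm

def cic(X, y, fit_predict, **kw):
    """Training error inflated by twice the permutation covariance penalty."""
    train_err = np.mean((y - fit_predict(X, y)) ** 2)
    return train_err + 2.0 * perm_cov_penalty(X, y, fit_predict, **kw)

# Example rule: ordinary least squares (returns in-sample predictions).
def ols(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta

rng = np.random.default_rng(4)
X = np.column_stack([np.ones(60), rng.normal(size=(60, 5))])
y = X[:, 1] + rng.normal(size=60)
print(cic(X, y, ols))   # adjusted error; compare across candidate models
```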

14.
This paper describes problems met in the calculation of the new index of industrial production in Zambia. It discusses deficiencies of the old index and the measures taken to remedy the situation. Comparison of the indices shows that the old index was very unreliable: the average error for most indices was as large as, or larger than, their average year-to-year change, so it was almost impossible to distinguish real movements in the old index from error variation. Similar problems were found in other developing countries, and it is likely that many indices currently in use in these countries are also unreliable.

15.
The objective of Taguchi's robust design method is to reduce the variation of the output from the target (the desired output) by making the performance insensitive to noise, such as manufacturing imperfections, environmental variation and deterioration. This objective has been recognized as very effective in improving product and manufacturing process design. In application, however, Taguchi's approach of modelling the average loss (or signal-to-noise ratios) may lead to non-optimal solutions, efficiency loss and information loss. In addition, because his loss-modelling approach requires a special experimental format containing a cross-product of two separate arrays for control and noise factors, it leads to less flexible and unnecessarily expensive experiments. The response-model approach, an alternative proposed by Welch et al., Box and Jones, Lucas, and Shoemaker et al., does not have these problems, although it has its own. This paper reviews and discusses the potential problems of Taguchi's modelling approach, illustrates them with examples and numerical studies, and compares the advantages and disadvantages of Taguchi's approach and the alternative.
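For reference, Taguchi's smaller-the-better signal-to-noise ratio collapses each control-factor row of the cross-array into a single summary, which is precisely where the information loss arises; a minimal illustration on hypothetical data:

```python
import numpy as np

# Hypothetical cross-array experiment: rows = control-factor settings,
# columns = replicates over the noise array (response y, smaller is better).
y = np.array([[2.1, 2.4, 3.0, 2.2],
              [1.2, 1.5, 1.1, 1.4],
              [2.8, 2.5, 3.3, 2.9]])

# Taguchi smaller-the-better SN ratio: -10 log10(mean of y^2) per row.
sn = -10.0 * np.log10(np.mean(y**2, axis=1))
print(sn)   # pick the control setting with the largest SN ratio

# The response-model alternative instead fits y directly on the control
# and noise factors (with their interactions), avoiding the information
# loss of collapsing each row to one summary statistic.
```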

16.
17.
Conclusion. In this brief and highly selective examination of the adequacy of ISSS, it is suggested that statistical and economic research is essential if we are to ensure that these systems are fit for their intended purposes. It is too readily assumed that the statistical infrastructure is either adequate or, if not, of little significance in dealing with the major problems faced by the world today. This is far from being so, and if the necessary research were carried out, the results could make an invaluable contribution to resolving many of those problems.

18.
Model-based clustering typically involves the development of a family of mixture models and the imposition of these models upon data. The best member of the family is then chosen using some criterion and the associated parameter estimates lead to predicted group memberships, or clusterings. This paper describes the extension of the mixtures of multivariate t-factor analyzers model to include constraints on the degrees of freedom, the factor loadings, and the error variance matrices. The result is a family of six mixture models, including parsimonious models. Parameter estimates for this family of models are derived using an alternating expectation-conditional maximization algorithm and convergence is determined based on Aitken’s acceleration. Model selection is carried out using the Bayesian information criterion (BIC) and the integrated completed likelihood (ICL). This novel family of mixture models is then applied to simulated and real data where clustering performance meets or exceeds that of established model-based clustering methods. The simulation studies include a comparison of the BIC and the ICL as model selection techniques for this novel family of models. Application to simulated data with larger dimensionality is also explored.
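The Aitken-based stopping rule referred to here estimates the asymptotic log-likelihood from three successive values; a minimal sketch of the criterion as it is commonly stated in the model-based clustering literature:

```python
def aitken_converged(loglik, tol=1e-5):
    """Aitken-acceleration stopping rule for an EM-type algorithm: estimate
    the asymptotic log-likelihood from the last three values and stop when
    it is within tol of the current value."""
    if len(loglik) < 3:
        return False
    l0, l1, l2 = loglik[-3], loglik[-2], loglik[-1]
    denom = l1 - l0
    if abs(denom) < 1e-12:          # likelihood has flattened out
        return True
    a = (l2 - l1) / denom           # Aitken acceleration factor
    if a >= 1.0:                    # not yet in the linear-convergence regime
        return False
    l_inf = l1 + (l2 - l1) / (1.0 - a)   # asymptotic log-likelihood estimate
    return abs(l_inf - l2) < tol

# Usage inside an EM loop (sketch):
# logliks = []
# while not aitken_converged(logliks):
#     ...E-step, CM-steps...
#     logliks.append(current_loglik)
```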

19.
Summary. To construct an optimal estimating function by weighting a set of score functions, we must either know or estimate consistently the covariance matrix for the individual scores. In problems with high dimensional correlated data the estimated covariance matrix could be unreliable. The smallest eigenvalues of the covariance matrix will be the most important for weighting the estimating equations, but in high dimensions these will be poorly determined. Generalized estimating equations introduced the idea of a working correlation to minimize such problems. However, it can be difficult to specify the working correlation model correctly. We develop an adaptive estimating equation method which requires no working correlation assumptions. This methodology relies on finding a reliable approximation to the inverse of the variance matrix in the quasi-likelihood equations. We apply a multivariate generalization of the conjugate gradient method to find estimating equations that preserve the information well at fixed low dimensions. This approach is particularly useful when the estimator of the covariance matrix is singular or close to singular, or impossible to invert owing to its large size.
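The core computational idea, applying an approximate inverse of the variance matrix through a few conjugate-gradient iterations rather than an explicit inverse, can be illustrated directly (a minimal sketch on a synthetic ill-conditioned covariance, not the authors' full multivariate generalization):

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

# Sketch: solve V x = s for the weighted score V^{-1} s without ever
# inverting (or even explicitly factorizing) the covariance estimate V.
rng = np.random.default_rng(5)
p = 500
A = rng.normal(size=(p, p))
V = A @ A.T / p + 1e-3 * np.eye(p)   # ill-conditioned covariance estimate
s = rng.normal(size=p)               # stacked score vector

# A few CG iterations approximate V^{-1} s using only matrix-vector
# products; truncating early acts as regularization, which is what makes
# the approach usable when V is nearly singular or very large.
matvec = LinearOperator((p, p), matvec=lambda x: V @ x)
x_approx, info = cg(matvec, s, maxiter=25)
print(info, np.linalg.norm(V @ x_approx - s))
```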

20.
A wooden historic building located in Tibet, China, has experienced structural damage under the loading imposed by tourist visits. Such ancient buildings receive very large numbers of visitors every day, because heritage sites never fail to attract tourists; given the importance of the cultural relics, a balance must be struck between admitting visitors and protecting the historic building. In this paper, singular spectrum analysis (SSA) is used to forecast the number of tourists, so that the building management can apply maintenance measures to the structure and control the tourist flow to avoid excessive pedestrian loading. The relationship between the acceleration measured on the structure and the tourist number is first studied, and the root-mean-square (RMS) value of the acceleration measured along the tourists' passage route is selected for forecasting future tourist numbers. Forecasts from different methods are compared: SSA is found to slightly outperform the autoregressive integrated moving average (ARIMA) model, the X-11-ARIMA model and cubic spline extrapolation in terms of RMS error, mean absolute error and mean absolute percentage error for long-term prediction, whereas the opposite is observed for short-term forecasting.
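A minimal sketch of the SSA reconstruction step (embedding, SVD, grouping, diagonal averaging) on a synthetic tourist-count-like series; the paper's forecasts additionally use the linear recurrence implied by the leading subspace, which is omitted here:

```python
import numpy as np

def ssa_reconstruct(x, window, n_components):
    """Basic singular spectrum analysis: embed the series in a Hankel
    trajectory matrix, take its SVD, keep the leading components and
    reconstruct by averaging over anti-diagonals."""
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])  # window x k
    u, svals, vt = np.linalg.svd(traj, full_matrices=False)
    approx = (u[:, :n_components] * svals[:n_components]) @ vt[:n_components]
    # Diagonal averaging (Hankelization) back to a series of length n.
    recon = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        recon[j:j + window] += approx[:, j]
        counts[j:j + window] += 1.0
    return recon / counts

# Illustrative series: trend + weekly cycle + noise, like a daily count.
rng = np.random.default_rng(6)
t = np.arange(365)
x = 100 + 0.1 * t + 20 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 5, 365)
smooth = ssa_reconstruct(x, window=60, n_components=4)
print(np.round(smooth[:7]))
```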
