Similar Literature
20 similar documents found.
1.
Common factor analysis (CFA) and principal component analysis (PCA) are widely used multivariate techniques. Using simulations, we compared CFA with PCA loadings for distortions of a perfect cluster configuration. Results showed that nonzero PCA loadings were higher and more stable than nonzero CFA loadings. Compared to CFA loadings, PCA loadings correlated more weakly with the true factor loadings under underextraction, overextraction, and heterogeneous loadings within factors. The pattern of differences between CFA and PCA was consistent across sample sizes, levels of loadings, principal axis factoring versus maximum likelihood factor analysis, and blind versus target rotation.
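A minimal sketch (ours, not the authors' simulation code) of the kind of comparison described: generate data from a perfect cluster configuration and compare PCA loadings with ML common-factor loadings. The sample size, loading level (0.6), and the use of sklearn's FactorAnalysis as the CFA fitter are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(0)
n, p, k = 300, 9, 3                         # cases, variables, factors
Lambda = np.zeros((p, k))
for j in range(p):                          # each variable loads on exactly
    Lambda[j, j // 3] = 0.6                 # one factor (perfect clusters)
uniq = 1.0 - (Lambda ** 2).sum(axis=1)      # unique variances

F = rng.standard_normal((n, k))
E = rng.standard_normal((n, p)) * np.sqrt(uniq)
X = F @ Lambda.T + E

pca = PCA(n_components=k).fit(X)
pca_load = pca.components_.T * np.sqrt(pca.explained_variance_)
fa_load = FactorAnalysis(n_components=k).fit(X).components_.T

# Crude magnitude check (the paper compares rotated solutions): nonzero
# PCA loadings tend to come out larger than the corresponding FA loadings.
print("mean dominant |PCA| loading:", np.abs(pca_load).max(axis=1).mean().round(3))
print("mean dominant |FA|  loading:", np.abs(fa_load).max(axis=1).mean().round(3))
```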

2.
We study a factor analysis model with two normally distributed observations and one factor. In the case when the errors have equal variance, the maximum likelihood estimate of the factor loading is given in closed form. Exact and approximate distributions of the maximum likelihood estimate are considered. The exact distribution function is given in a complex form that involves the incomplete Beta function. Approximations to the distribution function are given for the cases of large sample sizes and small error variances. The accuracy of the approximations is discussed.
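For orientation, one plausible parameterization of such a model (our notation, not necessarily the authors'): two observed variables share a single standard normal factor, and in the equal-error-variance case the free parameters reduce to the loadings and σ².

$$
x_i = \lambda_i f + e_i,\quad i = 1,2,\qquad f \sim N(0,1),\quad e_i \sim N(0,\sigma^2),
$$

$$
\Sigma = \begin{pmatrix} \lambda_1^2 + \sigma^2 & \lambda_1\lambda_2 \\ \lambda_1\lambda_2 & \lambda_2^2 + \sigma^2 \end{pmatrix}.
$$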

3.
We consider the problem of full information maximum likelihood (FIML) estimation in factor analysis when a majority of the data values are missing. The expectation–maximization (EM) algorithm is often used to find the FIML estimates, in which the missing values on manifest variables are included in the complete data. However, the ordinary EM algorithm has an extremely high computational cost. In this paper, we propose a new algorithm that is based on the EM algorithm but that efficiently computes the FIML estimates. A significant improvement in computational speed is realized by not treating the missing values on manifest variables as a part of the complete data. When there are many missing data values, it is not clear whether the FIML procedure can achieve good estimation accuracy. In order to investigate this, we conduct Monte Carlo simulations under a wide variety of sample sizes.
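For orientation, the FIML objective that the EM algorithm targets can be written casewise over the observed coordinates (this is the standard form; notation ours). With mean μ and model-implied covariance Σ(θ) = ΛΛᵀ + Ψ,

$$
\ell(\theta) = -\frac{1}{2}\sum_{i=1}^{n}\Big[\log\big|\Sigma(\theta)_{o_i}\big| + (x_{o_i} - \mu_{o_i})^{\top}\,\Sigma(\theta)_{o_i}^{-1}\,(x_{o_i} - \mu_{o_i})\Big] + \text{const},
$$

where $o_i$ indexes the manifest variables actually observed for case $i$.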

4.
Influence functions are derived for covariance structure analysis with equality constraints, where the parameters are estimated by minimizing a discrepancy function between the assumed covariance matrix and the sample covariance matrix. As a special case, maximum likelihood exploratory factor analysis is studied in detail with a numerical example. A comparison is made with the results of Tanaka and Odaka (1989), who proposed a sensitivity analysis procedure in maximum likelihood exploratory factor analysis using the perturbation expansion of a certain function of the eigenvalues and eigenvectors of a real symmetric matrix. The present paper also generalizes Tanaka, Watadani and Moon (1991) to the case with equality constraints.
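For concreteness, the ML discrepancy function is one standard choice in this setting (notation ours); the influence functions are derived for the estimator that minimizes it subject to the equality constraints h(θ) = 0:

$$
F_{\mathrm{ML}}\big(S, \Sigma(\theta)\big) = \log\big|\Sigma(\theta)\big| - \log|S| + \operatorname{tr}\!\big(S\,\Sigma(\theta)^{-1}\big) - p.
$$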

5.
This paper considers variable and factor selection in factor analysis. We treat the factor loadings for each observable variable as a group, and introduce a weighted sparse group lasso penalty to the complete log-likelihood. The proposal simultaneously selects observable variables and latent factors of a factor analysis model in a data-driven fashion; it produces a more flexible and sparse factor loading structure than existing methods. For parameter estimation, we derive an expectation-maximization algorithm that optimizes the penalized log-likelihood. The tuning parameters of the procedure are selected by a likelihood cross-validation criterion that yields satisfactory results in various simulation settings. Simulation results reveal that the proposed method can better identify the possibly sparse structure of the true factor loading matrix with higher estimation accuracy than existing methods. A real data example is also presented to demonstrate its performance in practice.
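A sketch of the penalized criterion, assuming the usual weighted sparse group lasso form (our notation; the weights w_j, mixing parameter α, and tuning parameter ρ are generic): with λ_j the loading vector of observable variable j treated as a group,

$$
\ell_{p}(\Lambda, \Psi) = \ell(\Lambda, \Psi) - \rho \sum_{j=1}^{p} w_j \Big[\alpha\, \|\lambda_j\|_1 + (1 - \alpha)\, \|\lambda_j\|_2 \Big],
$$

so shrinking a whole group λ_j to zero deselects variable j, while zeroing single entries removes individual variable–factor links; a loading column that is zero across all j deselects a factor.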

6.
This paper describes a proposal for the extension of the dual multiple factor analysis (DMFA) method developed by Lê and Pagès [15] to the analysis of categorical tables in which the same set of variables is measured on different sets of individuals. The extension of DMFA is based on the transformation of categorical variables into properly weighted indicator variables, in a way analogous to that used in the multiple factor analysis of categorical variables. The DMFA of categorical variables enables visual comparison of the association structures between categories over the sample as a whole and in the various subsamples (sets of individuals). For each category, DMFA allows us to obtain its global (considering all the individuals) and partial (considering each set of individuals) coordinates in a factor space. This visual analysis allows us to compare the sets of individuals to identify their similarities and differences. The suitability of the technique is illustrated through two applications: one using simulated data for two groups of individuals with very different association structures and the other using real data from a voting intention survey in which some respondents were interviewed by telephone and others face to face. The results indicate that the two data collection methods, while similar, are not entirely equivalent.
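A minimal sketch of the indicator-coding step, assuming MCA-style centring and inverse-square-root frequency weights (the paper's exact weighting scheme may differ; the data frame and column names are hypothetical):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"vote": ["A", "B", "A", "C", "B"],
                   "mode": ["phone", "face", "face", "phone", "phone"]})
Z = pd.get_dummies(df).astype(float)     # 0/1 indicator matrix
props = Z.mean(axis=0)                   # category proportions p_k
Zw = (Z - props) / np.sqrt(props)        # centred, 1/sqrt(p_k)-weighted
print(Zw.round(2))
```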

7.
Recent small-sample studies of estimators for the shape parameter a of the negative binomial distribution (NBD) tend to indicate that the choice of estimator can be reduced to a choice among the method of moments estimator, the maximum likelihood estimator (MLE), the maximum quasi-likelihood estimator, and the conditional likelihood estimator (CLE). In this paper the results of a comprehensive simulation study are reported to assist with the choice among these four estimators. The study includes a traditional procedure for assessing estimators of the shape parameter of the NBD and, in addition, introduces an alternative assessment procedure. Based on the traditional approach, the CLE is considered to perform best overall for the range of parameter values and sample sizes considered. The alternative assessment procedure indicates that the MLE is the preferred estimator.
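A minimal sketch (ours, not the paper's code) of two of the competing estimators on an i.i.d. NBD sample: the method of moments, and the MLE with the success probability profiled out at its closed-form optimum.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)
true_shape, mean = 2.0, 5.0
p = true_shape / (true_shape + mean)       # scipy's (n, p) parameterization
x = stats.nbinom.rvs(true_shape, p, size=200, random_state=rng)

# Method of moments: shape = m^2 / (s2 - m), valid when s2 > m
m, s2 = x.mean(), x.var(ddof=1)
mom_shape = m**2 / (s2 - m) if s2 > m else np.inf

# MLE: for fixed shape the optimal p is shape/(shape + xbar), so we
# maximize the profile likelihood over the shape alone
def neg_loglik(shape):
    return -stats.nbinom.logpmf(x, shape, shape / (shape + m)).sum()

res = optimize.minimize_scalar(neg_loglik, bounds=(1e-3, 100), method="bounded")
print(f"MoM shape: {mom_shape:.3f}   ML shape: {res.x:.3f}")
```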

8.
Classical factor analysis relies on the assumption of normally distributed factors, which guarantees that the model can be estimated via the maximum likelihood method. Even when the assumption of Gaussian factors is not explicitly formulated and estimation is performed via the iterated principal factors method, interest actually focuses mainly on the linear structure of the data, since only moments up to the second are involved. In many real situations, the factors may not be adequately described by the first two moments alone. For example, the skewness characterizing most latent variables in social analysis can be properly measured by the third moment: the factors are not normally distributed and covariance is no longer a sufficient statistic. In this work we propose a factor model characterized by skew-normally distributed factors. The skew-normal is a parametric class of probability distributions that extends the normal distribution by an additional shape parameter regulating the skewness. The model can be estimated by the generalized EM algorithm, in which an iterative Newton–Raphson procedure is needed in the M-step to estimate the factor shape parameter. The proposed skew-normal factor analysis is applied to the study of student satisfaction with university courses, in order to identify the factors representing different aspects of the latent overall satisfaction.
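For reference, the skew-normal density in Azzalini's parameterization, with location ξ, scale ω, and shape α (α = 0 recovers the normal):

$$
f(x;\ \xi, \omega, \alpha) = \frac{2}{\omega}\,\phi\!\Big(\frac{x - \xi}{\omega}\Big)\,\Phi\!\Big(\alpha\,\frac{x - \xi}{\omega}\Big),
$$

where φ and Φ are the standard normal density and distribution function.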

9.
This paper presents a robust extension of the factor analysis model by assuming the multivariate normal mean–variance mixture of the Birnbaum–Saunders distribution for the unobservable factors and errors. A computationally tractable EM-based algorithm is developed to find maximum likelihood estimates of the parameters. The asymptotic standard errors of the parameter estimates are derived under an information-based paradigm. The numerical merits of the proposed methodology are illustrated using both simulated and real datasets.
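For orientation, a normal mean–variance mixture with mixing variable W has the standard stochastic representation (notation ours; here W would follow a Birnbaum–Saunders law):

$$
Y = \mu + W\gamma + \sqrt{W}\,\Sigma^{1/2} Z, \qquad Z \sim N_p(0, I_p),\quad W \sim \mathrm{BS}(\alpha, \beta),\quad W \perp Z,
$$

so that $Y \mid W = w \sim N_p(\mu + w\gamma,\ w\Sigma)$; the skewness vector γ lets the model accommodate asymmetry as well as heavy tails.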

10.
This paper compares methods of estimation for the parameters of a Pareto distribution of the first kind to determine which method provides the better estimates when the observations are censored. The unweighted least squares (LS) estimates and the maximum likelihood estimates (MLE) are presented for both censored and uncensored data. The MLEs are obtained using two methods. In the first, called the ML method, it is shown that the log-likelihood is maximized when the scale parameter is set to the minimum sample value. In the second, called the modified ML (MML) method, the estimates are found by combining the maximum likelihood expression for the shape parameter in terms of the scale parameter with the equation for the mean of the first order statistic as a function of both parameters. Since censored data often occur in applications, we study two types of censoring for their effects on the methods of estimation: Type II censoring and multiple random censoring. In this study we consider different sample sizes and several values of the true shape and scale parameters.

Comparisons are made in terms of bias and the mean squared error of the estimates. We propose that the LS method be generally preferred over the ML and MML methods for estimating the Pareto parameter γ for all sample sizes, all values of the parameter, and for both complete and censored samples. In many cases, however, the ML estimates are comparable in efficiency, so either estimator can effectively be used. For estimating the parameter α, the LS method is also generally preferred for smaller values of the parameter (α ≤ 4). For larger values of the parameter and for censored samples, the MML method appears superior to the other methods, with a slight advantage over the LS method. For larger values of α, for censored samples, and for all methods, underestimation can be a problem.
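A minimal sketch (ours) of the uncensored ML method described above for a Pareto Type I sample: the scale MLE is the sample minimum, and the shape MLE follows in closed form from the logs. Mapping α to the shape and γ to the scale is our assumption from the abstract's usage.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha_true, gamma_true = 3.0, 2.0                  # shape, scale (assumed)
# numpy's pareto() draws Lomax variates; 1 + pareto is Pareto I with scale 1
x = gamma_true * (1 + rng.pareto(alpha_true, size=100))

gamma_hat = x.min()                                # MLE of the scale
alpha_hat = len(x) / np.log(x / gamma_hat).sum()   # MLE of the shape
print(f"scale: {gamma_hat:.3f}   shape: {alpha_hat:.3f}")
```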

11.
The four-parameter kappa distribution (K4D) is a generalized form of some commonly used distributions such as the generalized logistic, generalized Pareto, generalized Gumbel, and generalized extreme value (GEV) distributions. Owing to its flexibility, the K4D is widely applied in modeling in several fields such as hydrology and climate change. For the estimation of the four parameters, the maximum likelihood approach and the method of L-moments are usually employed. The L-moment estimator (LME) works well in some parameter spaces for small to moderate sample sizes, but it sometimes fails to yield feasible estimates. Meanwhile, the maximum likelihood estimator (MLE) performs substantially worse with small sample sizes, exhibiting a large variance. We therefore propose maximum penalized likelihood estimation (MPLE) of the K4D by adjusting existing penalty functions that restrict the parameter space. Eighteen combinations of penalties for the two shape parameters are considered and compared. The MPLE retains modeling flexibility and large-sample optimality while also improving small-sample properties. The properties of the proposed estimator are verified through a Monte Carlo simulation, and an application is demonstrated using Thailand's annual maximum temperature data.
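For reference, the K4D is usually written through its quantile function (Hosking's parameterization), with location ξ, scale α, and two shape parameters h and k:

$$
x(F) = \xi + \frac{\alpha}{k}\left[1 - \left(\frac{1 - F^{h}}{h}\right)^{k}\right], \qquad 0 < F < 1,
$$

with the GEV recovered as h → 0 and the other special cases (generalized Pareto, generalized logistic, generalized Gumbel) at particular values of h and k.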

12.
The factor score determinacy coefficient represents the common variance of the factor score predictor with the corresponding factor. The aim of the present simulation study was to compare the bias of determinacy coefficients based on different estimation methods of the exploratory factor model. Overall, determinacy coefficients computed from parameters based on maximum likelihood estimation, unweighted least squares estimation, and principal axis factoring were more precise than determinacy coefficients based on generalized least squares estimation and alpha factoring.
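For orientation, with orthogonal factors the determinacy coefficient of the regression factor score predictor has a standard closed form (our notation): for factor ℓ with loading vector λ_ℓ and model-implied covariance Σ,

$$
\rho_{\ell} = \sqrt{\lambda_{\ell}^{\top}\, \Sigma^{-1}\, \lambda_{\ell}},
$$

so ρ_ℓ² is the variance the predictor shares with the factor; bias in the estimated loadings and covariance therefore propagates directly into the estimated determinacy.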

13.
It is assumed that the logs of the times to failure in a life test follow a normal distribution. If the test is terminated after r of a sample of n items fail, the test is said to be censored. If the sample size is small and censoring severe, the usual maximum likelihood estimator of σ is downwardly biased. Monte Carlo techniques and regression analysis were used to develop an empirical correction factor. Applying the correction factor to the maximum likelihood estimator yields an unbiased estimate of σ.
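A minimal sketch (ours, not the paper's study) of the Monte Carlo side of this: estimate the downward bias of the ML estimate of σ under Type II censoring of log lifetimes, with n items on test and the test stopped at the r-th failure. The chosen n, r, and optimizer are illustrative assumptions.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(3)
n, r, mu, sigma = 10, 5, 0.0, 1.0

def sigma_mle(y):                        # y: first r ordered log-lifetimes
    def nll(theta):
        m, log_s = theta
        s = np.exp(log_s)                # optimize log(sigma) for stability
        ll = stats.norm.logpdf(y, m, s).sum()
        ll += (n - r) * stats.norm.logsf(y[-1], m, s)  # n-r still running
        return -ll
    res = optimize.minimize(nll, x0=[y.mean(), np.log(y.std() + 0.1)],
                            method="Nelder-Mead")
    return np.exp(res.x[1])

est = [sigma_mle(np.sort(rng.normal(mu, sigma, n))[:r]) for _ in range(1000)]
print(f"mean sigma-hat: {np.mean(est):.3f}  (true sigma = {sigma})")
# The ratio true/mean is an empirical estimate of the correction factor.
```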

14.
Closed-form confidence intervals on linear combinations of variance components have been developed generically for balanced data and studied mainly for one-way and two-way random-effects analysis of variance models. The Satterthwaite approach is easily generalized to unbalanced data and can be modified to increase its coverage probability. These methods are applied to measures of assay precision in combination with (restricted) maximum likelihood and Henderson III Type 1 and Type 3 estimation. Simulations of interlaboratory studies with unbalanced data and small sample sizes do not show superiority of any of the possible combinations of estimation methods and Satterthwaite approaches for three measures of assay precision. However, the modified Satterthwaite approach with Henderson III Type 3 estimation is often preferred over the other combinations.
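For reference, the Satterthwaite approach in its generic form (our notation): for θ̂ = Σᵢ cᵢ MSᵢ, a linear combination of independent mean squares with degrees of freedom νᵢ, the quantity ν̂θ̂/θ is treated as χ²-distributed with

$$
\hat{\nu} = \frac{\Big(\sum_i c_i\,\mathrm{MS}_i\Big)^{2}}{\sum_i \big(c_i\,\mathrm{MS}_i\big)^{2} / \nu_i},
$$

which yields the closed-form interval $\big[\hat{\nu}\hat{\theta}/\chi^2_{1-\alpha/2,\hat{\nu}},\ \hat{\nu}\hat{\theta}/\chi^2_{\alpha/2,\hat{\nu}}\big]$; the modifications studied in the paper adjust this recipe to raise the coverage probability.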

15.
The performance of maximum likelihood estimators (MLE) of the change-point in normal series is evaluated under three scenarios in which the process parameters are assumed to be unknown. Different shifts, sample sizes, and locations of the change-point were tested. A comparison is made with estimators based on cumulative sums and Bartlett's test. Performance analysis, based on extensive simulations of normally distributed series, showed that the MLEs perform better than (or as well as) the alternatives in almost every scenario, with smaller bias and standard error. In addition, the robustness of the MLE to non-normality is studied.
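A minimal sketch (ours) of the ML change-point estimate for a single mean shift in a normal series with unknown parameters: when the variance is common but unknown, maximizing the profile likelihood is equivalent to choosing the split that minimizes the pooled residual sum of squares.

```python
import numpy as np

rng = np.random.default_rng(4)
n, tau_true = 100, 60
x = np.concatenate([rng.normal(0.0, 1.0, tau_true),
                    rng.normal(1.0, 1.0, n - tau_true)])

def rss(seg):
    return ((seg - seg.mean()) ** 2).sum()

# keep at least two points per segment
tau_hat = min(range(2, n - 1), key=lambda t: rss(x[:t]) + rss(x[t:]))
print(f"estimated change-point: {tau_hat}  (true {tau_true})")
```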

16.
We study a factor analysis model with two normally distributed observations and one factor. Two approximate conditional inference procedures for the factor loading are developed. The first proposal is a very simple procedure, but it is not very accurate. The second proposal gives extremely accurate results even for very small sample sizes. Moreover, the calculations require only the signed log-likelihood ratio statistic and a measure of the standardized maximum likelihood departure. Simulations are used to study the accuracy of the proposed procedures.
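For orientation, these ingredients are those of standard higher-order likelihood asymptotics; a common way to combine the signed log-likelihood root r and a standardized ML departure q (our notation, assuming the usual Barndorff-Nielsen construction) is the modified root

$$
r^{*} = r + \frac{1}{r}\,\log\frac{q}{r}, \qquad r = \operatorname{sgn}(\hat{\lambda} - \lambda)\,\sqrt{2\big[\ell(\hat{\lambda}) - \ell(\lambda)\big]},
$$

with tail probabilities approximated by Φ(r*), typically accurate even in very small samples.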

17.
Using ranked set sampling, a viable BLUE estimator is obtained for estimating the mean of a Poisson distribution. Its properties, such as efficiency relative to the ranked set sample mean and to the maximum likelihood estimator, have been calculated for different sample sizes and values of the Poisson parameter. The estimator (termed the normal modified r.s.s. estimator) is more efficient than both the ranked set sample mean and the MLE. It is recommended as a reasonable estimator of the Poisson mean (λ) to be used in a ranked set sampling environment.
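A minimal sketch (ours) of the ranked set sampling (RSS) mechanics for a Poisson mean: in each cycle, set i of size k contributes its i-th smallest unit. The paper's BLUE weights the order statistics; for illustration we compare only the unweighted RSS mean against the simple random sample (SRS) mean by Monte Carlo MSE.

```python
import numpy as np

rng = np.random.default_rng(5)
lam, k, m, reps = 4.0, 3, 10, 5000          # set size k, m cycles, n = k*m

def rss_mean():
    obs = []
    for _ in range(m):                      # one cycle: k sets of size k
        for i in range(k):
            s = np.sort(rng.poisson(lam, k))
            obs.append(s[i])                # i-th order statistic of set i
    return np.mean(obs)

rss = np.array([rss_mean() for _ in range(reps)])
srs = rng.poisson(lam, (reps, k * m)).mean(axis=1)
print(f"MSE of RSS mean: {((rss - lam) ** 2).mean():.4f}")
print(f"MSE of SRS mean: {((srs - lam) ** 2).mean():.4f}")
```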

18.
Algorithms for computing the maximum likelihood estimators and the estimated covariance matrix of the estimators of the factor model are derived. The algorithms are particularly suitable for large matrices and for samples that give zero estimates of some error variances. A method of constructing estimators for reduced models is presented. The algorithms can also be used for the multivariate errors-in-variables model with known error covariance matrix.
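A minimal illustration (ours; sklearn's FactorAnalysis is a generic ML-type fitter, not the paper's algorithm) of the boundary situation mentioned above, where some error-variance estimates come out at or near zero:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(6)
L = np.array([[0.99], [0.8], [0.6], [0.5]])      # one near-unit loading
X = rng.standard_normal((500, 1)) @ L.T
X += rng.standard_normal((500, 4)) * np.sqrt(1 - (L ** 2).ravel())

fa = FactorAnalysis(n_components=1).fit(X)
print("error variances:", np.round(fa.noise_variance_, 4))
print("near-boundary cases:", np.where(fa.noise_variance_ < 0.05)[0])
```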

19.
A new method for estimating a set of odds ratios under an order restriction, based on estimating equations, is proposed. The method is applied to the conditional maximum likelihood estimators and the Mantel–Haenszel estimators. The estimators derived from the conditional likelihood estimating equations are shown to maximize the conditional likelihoods. The restricted estimators are also seen to converge almost surely to the respective odds ratios when the sample sizes become large in a regular manner. The restricted estimators are compared with the unrestricted maximum likelihood estimators in a Monte Carlo simulation. The simulation studies show that the restricted estimates markedly reduce the mean squared errors, while the Mantel–Haenszel-type estimates are competitive with the conditional maximum likelihood estimates, being only slightly worse.
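For reference, the (unrestricted) Mantel–Haenszel estimator that the restricted version builds on: with stratum-k cell counts (a_k, b_k, c_k, d_k) and stratum total n_k,

$$
\widehat{\mathrm{OR}}_{\mathrm{MH}} = \frac{\sum_{k} a_k d_k / n_k}{\sum_{k} b_k c_k / n_k},
$$

and the order restriction (e.g., OR₁ ≤ ⋯ ≤ OR_K) is imposed through the estimating equations rather than on the raw stratum ratios.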

20.
Influence functions are derived for the parameters in covariance structure analysis, where the parameters are estimated by minimizing a discrepancy function between the assumed covariance matrix and the sample covariance matrix. The case of confirmatory factor analysis is studied in detail with a numerical example. Compared with a general procedure called one-step estimation, the proposed procedure has two advantages: (1) the computational cost is lower; (2) the property that an arbitrary influence can be decomposed into a finite number of components, discussed by Tanaka and Castano-Tostado (1990), can be used for efficient computation and for characterizing a covariance structure model from the sensitivity perspective. A numerical comparison is made between confirmatory factor analysis and some procedures of exploratory factor analysis using the decomposition mentioned above.
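For orientation, the influence function here (as in entry 4) is the standard Gâteaux-derivative object (notation ours): for a statistical functional θ̂(F) evaluated at distribution F,

$$
\mathrm{IF}(x;\ \hat{\theta}, F) = \lim_{\epsilon \to 0^{+}} \frac{\hat{\theta}\big((1 - \epsilon)F + \epsilon\,\delta_x\big) - \hat{\theta}(F)}{\epsilon},
$$

where δ_x is the point mass at x; the decomposition mentioned above expresses this function as a finite sum of components, which is what enables the efficient computation.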
