Similar Documents
20 similar documents found.
1.
Standard methods for maximum likelihood parameter estimation in latent variable models rely on the Expectation-Maximization algorithm and its Monte Carlo variants. Our approach is different and motivated by considerations similar to those of simulated annealing; that is, we build a sequence of artificial distributions whose support concentrates on the set of maximum likelihood estimates. We sample from these distributions using a sequential Monte Carlo approach. We demonstrate state-of-the-art performance for several applications of the proposed approach.
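As a rough sketch of the annealing idea described above (the paper itself moves a particle population through the sequence of artificial distributions with sequential Monte Carlo), the snippet below runs random-walk Metropolis on tempered targets proportional to L(θ)^γ with a growing γ, so the samples concentrate near the MLE of a toy two-component mixture. The model, temperature schedule, and step size are illustrative assumptions, not the authors' choices.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy latent-variable model: two-component Gaussian mixture with unknown
# component mean mu (the latent variables are the component labels). The
# marginal log-likelihood happens to be available in closed form here,
# which keeps the sketch short.
data = np.concatenate([rng.normal(-2, 1, 100), rng.normal(2, 1, 100)])

def loglik(mu):
    return np.sum(np.log(0.5 * norm.pdf(data, -mu, 1) + 0.5 * norm.pdf(data, mu, 1)))

# Random-walk Metropolis on the tempered target L(mu)^gamma; as gamma grows,
# the target concentrates on the maximum likelihood estimates.
mu = 0.5
for gamma in np.linspace(0.1, 50.0, 200):
    for _ in range(20):
        prop = mu + 0.2 * rng.standard_normal()
        if np.log(rng.uniform()) < gamma * (loglik(prop) - loglik(mu)):
            mu = prop

print("annealed estimate of mu:", mu)
```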

2.
Approximate normality and unbiasedness of the maximum likelihood estimate (MLE) of the long-memory parameter H of a fractional Brownian motion hold reasonably well for sample sizes as small as 20 if the mean and scale parameter are known. We show in a Monte Carlo study that if the latter two parameters are unknown, the bias and variance of the MLE of H both increase substantially. We also show that the bias can be reduced by using a parametric bootstrap procedure. In very large samples, maximum likelihood estimation becomes problematic because of the large dimension of the covariance matrix that must be inverted. To overcome this difficulty, we propose a maximum likelihood method based upon first differences of the data. These first differences form a short-memory process. We split the data into a number of contiguous blocks consisting of a relatively small number of observations. Computation of the likelihood function in a block then presents no computational problem. We form a pseudo-likelihood function consisting of the product of the likelihood functions in each of the blocks and provide a formula for the standard error of the resulting estimator of H. This formula is shown in a Monte Carlo study to provide a good approximation to the true standard error. The computation time required to obtain the estimate and its standard error from large data sets is an order of magnitude less than that required to obtain the widely used Whittle estimator. Application of the methodology is illustrated on two data sets.
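The block pseudo-likelihood idea can be sketched directly: first differences of fractional Brownian motion form fractional Gaussian noise, whose autocovariance at lag k is (sigma^2/2)(|k-1|^(2H) - 2|k|^(2H) + |k+1|^(2H)), and the pseudo-log-likelihood sums exact Gaussian log-likelihoods over contiguous blocks. This minimal sketch fixes the scale at 1 for brevity (the paper estimates it jointly and also supplies a standard-error formula); the block length and data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fgn_acov(k, H):
    # Autocovariance (unit scale) of fractional Gaussian noise, i.e. of the
    # first differences of fractional Brownian motion.
    k = np.abs(k).astype(float)
    return 0.5 * (np.abs(k - 1) ** (2 * H) - 2 * k ** (2 * H) + (k + 1) ** (2 * H))

def block_pseudo_loglik(H, diffs, block_len=50):
    # Exact Gaussian log-likelihood within each contiguous block, summed
    # over blocks; only block_len x block_len matrices are ever inverted.
    lags = np.arange(block_len)
    cov = fgn_acov(lags[None, :] - lags[:, None], H)
    _, logdet = np.linalg.slogdet(cov)
    inv = np.linalg.inv(cov)
    ll = 0.0
    for start in range(0, len(diffs) - block_len + 1, block_len):
        xb = diffs[start:start + block_len]
        ll -= 0.5 * (logdet + xb @ inv @ xb + block_len * np.log(2 * np.pi))
    return ll

rng = np.random.default_rng(1)
diffs = rng.standard_normal(5000)      # H = 0.5: differences are white noise
res = minimize_scalar(lambda H: -block_pseudo_loglik(H, diffs),
                      bounds=(0.05, 0.95), method="bounded")
print("estimated H:", res.x)           # should land near 0.5
```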

3.
In this paper, we consider the analysis of hybrid censored competing risks data, based on Cox's latent failure time model assumptions. It is assumed that the lifetime distributions of the latent causes of failure follow Weibull distributions with the same shape parameter but different scale parameters. Maximum likelihood estimators (MLEs) of the unknown parameters can be obtained by solving a one-dimensional optimization problem, and we propose a fixed-point type algorithm to solve this optimization problem. Approximate MLEs, which have explicit expressions, are proposed based on Taylor series expansion. Bayesian inference of the unknown parameters is obtained based on the assumption that the shape parameter has a log-concave prior density function and that, given the shape parameter, the scale parameters have Beta-Gamma priors. We propose to use Markov chain Monte Carlo samples to compute Bayes estimates and also to construct highest posterior density credible intervals. Monte Carlo simulations are performed to investigate the performances of the different estimators, and two data sets have been analysed for illustrative purposes.
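A minimal sketch of the one-dimensional reduction, under the strong simplification of complete (uncensored) data: with a common Weibull shape and cause-specific scales, profiling out the scales leaves a single equation in the shape parameter that can be iterated directly, in the spirit of the fixed-point algorithm. The hybrid-censoring and Bayesian machinery of the paper is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pooled failure times with a common Weibull shape (true value 1.5) and
# latent cause labels; complete data for simplicity.
t = 2.0 * rng.weibull(1.5, 300)
cause = rng.integers(1, 3, t.size)

# Profiling out the cause-specific scales leaves one equation in the shape a:
#   1/a = sum(t^a * log t) / sum(t^a) - mean(log t),
# solved here by direct fixed-point iteration.
a = 1.0
for _ in range(200):
    ta = t ** a
    a = 1.0 / (np.sum(ta * np.log(t)) / np.sum(ta) - np.mean(np.log(t)))

scale = {j: np.sum(cause == j) / np.sum(t ** a) for j in (1, 2)}
print("shape MLE:", a, " scale MLEs by cause:", scale)
```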

4.
Li Yan. Statistics. 2015;49(5):978-988.
Empirical likelihood inference for generalized linear models with fixed and adaptive designs is considered. It is shown that the empirical log-likelihood ratio at the true parameters converges to the standard chi-square distribution. Furthermore, we obtain the maximum empirical likelihood estimate of the unknown parameter, and the resulting estimator is shown to be asymptotically normal. Some simulations are conducted to illustrate the proposed method.
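The core empirical likelihood computation can be illustrated in its simplest case, a univariate mean (the paper's GLM setting is more involved): maximize the product of the weights n*p_i subject to the moment constraint via the Lagrange dual, and compare -2 log R(mu) to a chi-square with one degree of freedom. The data below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

rng = np.random.default_rng(3)
x = rng.exponential(1.0, 200)     # illustrative data, true mean 1

def neg2_log_elr(mu):
    # Profile out the weights via the Lagrange dual: with z_i = x_i - mu,
    # the multiplier maximizes sum(log(1 + lam*z_i)) over the region where
    # all weights stay positive, and -2 log R(mu) = 2 * sum(log(1 + lam*z)).
    z = x - mu
    lo = -1.0 / z.max() + 1e-8
    hi = -1.0 / z.min() - 1e-8
    lam = minimize_scalar(lambda l: -np.sum(np.log1p(l * z)),
                          bounds=(lo, hi), method="bounded").x
    return 2.0 * np.sum(np.log1p(lam * z))

stat = neg2_log_elr(1.0)          # test H0: mean = 1
print("-2 log ELR:", stat, " p-value:", 1.0 - chi2.cdf(stat, df=1))
```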

5.
We propose a multivariate tobit (MT) latent variable model, defined by a confirmatory factor analysis with covariates, for analysing mixed-type data that are inherently non-negative and sometimes contain a large proportion of zeros. Some useful MT models are special cases of our proposed model. To obtain maximum likelihood estimates, we use the expectation-maximization (EM) algorithm, with its E-step made feasible by Monte Carlo simulation via the Gibbs sampler and its M-step greatly simplified by a sequence of conditional maximizations. Standard errors are evaluated by inverting a Monte Carlo approximation of the information matrix using Louis's method. The methodology is illustrated with a simulation study and a real example.
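A minimal Monte Carlo EM sketch for the simplest relative of this model, a univariate tobit regression (the paper's multivariate factor structure, Gibbs E-step, and Louis standard errors are beyond a short snippet): the E-step imputes the censored latent responses from a truncated normal, and the M-step reduces to least squares plus a variance update. All data and settings are illustrative.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(4)

# Univariate tobit: y* = b0 + b1*x + e, e ~ N(0, s^2); observe y = max(y*, 0).
n = 500
x = rng.normal(size=n)
y_star = 0.5 + 1.0 * x + rng.normal(size=n)
y = np.maximum(y_star, 0.0)
cens = y == 0.0
X = np.column_stack([np.ones(n), x])

beta = np.linalg.lstsq(X, y, rcond=None)[0]   # crude start
s = 1.0
for _ in range(40):
    # Monte Carlo E-step: impute latent responses for censored cases from
    # N(x'beta, s^2) truncated to (-inf, 0], M draws per case.
    mu = X @ beta
    M = 30
    Y = np.tile(y, (M, 1))
    b_std = (0.0 - mu[cens]) / s
    for m in range(M):
        Y[m, cens] = truncnorm.rvs(-np.inf, b_std, loc=mu[cens], scale=s,
                                   random_state=rng)
    # M-step: regression on the averaged imputations, then variance update.
    beta = np.linalg.lstsq(X, Y.mean(axis=0), rcond=None)[0]
    s = np.sqrt(np.mean((Y - X @ beta) ** 2))

print("MCEM estimates (intercept, slope, sigma):", beta, s)
```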

6.
Arnab Koley, Ayon Ganguly. Statistics. 2017;51(6):1304-1325.
Kundu and Gupta [Analysis of hybrid life-tests in presence of competing risks. Metrika. 2007;65:159-170] provided the analysis of Type-I hybrid censored competing risks data when the lifetime distributions of the competing causes of failure follow exponential distributions. In this paper, we consider the analysis of Type-II hybrid censored competing risks data. It is assumed that the latent lifetime distributions of the competing causes of failure follow independent exponential distributions with different scale parameters. It is observed that the maximum likelihood estimators of the unknown parameters do not always exist. We propose modified estimators of the scale parameters, which coincide with the corresponding maximum likelihood estimators when the latter exist and are asymptotically equivalent to them. We obtain the exact distributions of the proposed estimators and use them to construct associated confidence intervals. Asymptotic and bootstrap confidence intervals of the unknown parameters are also provided. Further, Bayesian inference of some unknown parametric functions under a very flexible Beta-Gamma prior is considered. Bayes estimators and associated credible intervals of the unknown parameters are obtained using the Monte Carlo method. Extensive Monte Carlo simulations are performed to assess the effectiveness of the proposed estimators, and one real data set is analysed for illustrative purposes. The proposed model and method work quite well for this data set.
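The estimator at issue can be sketched under plain Type-II censoring (the paper treats the Type-II hybrid scheme and derives exact distributions): for exponential competing risks, the MLE of each rate is the cause-specific failure count divided by the total time on test, and it fails to exist when a cause yields no failures, which is what motivates the modified estimators. The data and censoring plan below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two latent exponential causes; we observe the minimum and its cause,
# under plain Type-II censoring at the r-th failure. The rate MLE is
#   lam_j_hat = (# failures from cause j) / (total time on test),
# and it does not exist when a cause produces no failures.
n, r, lam1, lam2 = 100, 80, 0.5, 1.5
t1 = rng.exponential(1 / lam1, n)
t2 = rng.exponential(1 / lam2, n)
t = np.minimum(t1, t2)
cause = np.where(t1 < t2, 1, 2)

tau = np.sort(t)[r - 1]                 # Type-II censoring time
ttt = np.minimum(t, tau).sum()          # total time on test
for j, lam in ((1, lam1), (2, lam2)):
    d_j = np.sum((t <= tau) & (cause == j))
    print(f"cause {j}: d = {d_j}, MLE = {d_j / ttt:.3f} (true {lam})")
```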

7.
The properties of a method of estimating the ratio of parameters for ordered categorical response regression models are discussed. If the link function relating the response variable to the linear combination of covariates is unknown, then it is only possible to estimate the ratio of the regression parameters. This ratio of parameters has a substitutability or relative-importance interpretation.

The maximum likelihood estimate of the ratio of parameters, assuming a logistic link function (McCullagh, 1980), is found to have very small bias for a wide variety of true link functions. Further, it is shown using Monte Carlo simulations that this maximum likelihood estimate has good coverage properties even if the link function is incorrectly specified. It is demonstrated that combining adjacent categories to make the response binary can result in an analysis that is appreciably less efficient. The size of the efficiency loss depends on, among other factors, the marginal distribution in the ordered categories.

8.
Likelihood cross-validation for kernel density estimation is known to be sensitive to extreme observations and heavy-tailed distributions. We propose a robust likelihood-based cross-validation method to select bandwidths in multivariate density estimation. We derive this bandwidth selector within the framework of robust maximum likelihood estimation. The method establishes a smooth transition from likelihood cross-validation for non-extreme observations to least-squares cross-validation for extreme observations, thereby combining the efficiency of likelihood cross-validation with the robustness of least-squares cross-validation. We also suggest a simple rule to select the transition threshold. We demonstrate the finite-sample performance and practical usefulness of the proposed method via Monte Carlo simulations and a real data application on Chinese air pollution.
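One hypothetical way to mimic such a robustified criterion (the paper's actual transition rule differs in detail) is to score each leave-one-out density by its logarithm above a threshold and by a linear, least-squares-like continuation below it, so a handful of extreme points cannot dominate the bandwidth choice. The threshold, data, and kernel below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(6)
x = np.concatenate([rng.standard_normal(200), rng.standard_t(1, 10)])  # heavy tail
n = len(x)

def loo_density(h):
    # Leave-one-out Gaussian kernel density estimates at the data points.
    d = (x[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * d**2) / np.sqrt(2.0 * np.pi)
    np.fill_diagonal(K, 0.0)
    return K.sum(axis=1) / ((n - 1) * h)

def robust_cv(h, tau=1e-3):
    # Hypothetical transition rule: log-density above the threshold tau,
    # continued linearly below it, matching value and slope at tau.
    f = loo_density(h)
    score = np.where(f >= tau,
                     np.log(np.maximum(f, tau)),
                     np.log(tau) + f / tau - 1.0)
    return -score.sum()

h_hat = minimize_scalar(robust_cv, bounds=(0.05, 2.0), method="bounded").x
print("selected bandwidth:", h_hat)
```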

9.
In this paper we propose an alternative procedure for estimating the parameters of the beta regression model. This alternative estimation procedure is based on the EM-algorithm. For this, we take advantage of the stochastic representation of the beta random variable as a ratio of independent gamma random variables. We present a complete approach based on the EM-algorithm. More specifically, this approach includes point and interval estimation and diagnostic tools for detecting outlying observations. As illustrated in this paper, the EM-algorithm approach provides a better estimate of the precision parameter when compared to the direct maximum likelihood (ML) approach. We present the results of Monte Carlo simulations to compare the EM-algorithm and direct ML. Finally, two empirical examples illustrate the full EM-algorithm approach for the beta regression model. This paper contains supplementary material.
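The stochastic representation underlying the EM construction can be checked in a few lines: if G1 ~ Gamma(a, 1) and G2 ~ Gamma(b, 1) are independent, then G1/(G1 + G2) ~ Beta(a, b). The parameter values below are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

a, b = 2.5, 4.0
g1 = rng.gamma(a, 1.0, 100_000)
g2 = rng.gamma(b, 1.0, 100_000)
ratio = g1 / (g1 + g2)

# Kolmogorov-Smirnov test against the Beta(a, b) cdf: a large p-value is
# consistent with the stochastic representation used by the EM algorithm.
print(stats.kstest(ratio, stats.beta(a, b).cdf))
```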

10.
The maximum likelihood (ML) method is used to estimate the unknown Gamma regression (GR) coefficients. In the presence of multicollinearity, the variance of the ML estimator becomes inflated and inference based on the ML method may not be trustworthy. To combat multicollinearity, the Liu estimator has been used. In this estimator, estimation of the Liu parameter d is an important problem. A few estimation methods are available in the literature for this parameter. This study considers some of these methods and also proposes some new methods for estimating d. A Monte Carlo simulation study has been conducted to assess the performance of the proposed methods, with the mean squared error (MSE) as the performance criterion. Based on the Monte Carlo simulation and application results, the Liu estimator is shown to be consistently superior to ML, and a recommendation is given on which Liu parameter should be used in the Liu estimator for the GR model.
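For intuition, the Liu estimator in its original linear-model form is beta(d) = (X'X + I)^(-1) (X'X + dI) beta_ML, with d = 1 recovering ML; in the gamma regression setting the weighted Fisher information from the final ML iteration would stand in for X'X. A linear sketch on collinear toy data (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)

# Collinear toy design: four near-identical columns.
n, p = 100, 4
z = rng.normal(size=(n, 1))
X = z + 0.05 * rng.normal(size=(n, p))
y = X @ np.array([1.0, 0.5, -0.5, 1.0]) + rng.normal(size=n)

XtX = X.T @ X
beta_ml = np.linalg.solve(XtX, X.T @ y)

def liu(d):
    # beta(d) = (X'X + I)^{-1} (X'X + d*I) beta_ml; d = 1 recovers ML.
    return np.linalg.solve(XtX + np.eye(p), (XtX + d * np.eye(p)) @ beta_ml)

for d in (0.0, 0.5, 1.0):
    print(f"d = {d}: {np.round(liu(d), 3)}")
```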

11.
Likelihood-based inference with missing data is challenging because the observed log likelihood is often an (intractable) integral over the missing data distribution, which also depends on the unknown parameter. Approximating the integral by Monte Carlo sampling does not necessarily lead to a valid likelihood over the entire parameter space because the Monte Carlo samples are generated from a distribution with a fixed parameter value. We consider approximating the observed log likelihood based on importance sampling. In the proposed method, the dependency of the integral on the parameter is properly reflected through fractional weights. We discuss constructing a confidence interval using the profile likelihood ratio test. A Newton-Raphson algorithm is employed to find the interval end points. Two limited simulation studies show the advantage of the Wilks inference over the Wald inference in terms of power, parameter space conformity and computational efficiency. A real data example on salamander mating shows that our method also works well with high-dimensional missing data.
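The key device can be sketched on a toy random-intercept model: drawing the missing quantities once from a fixed proposal and letting the parameter enter only through the importance weights yields an approximate observed log-likelihood that is a genuine function of theta over the whole parameter space. The paper's fractional-weight construction and profile-likelihood intervals are more refined than this sketch; the model and settings below are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import expit

rng = np.random.default_rng(9)

# Toy model with missing information: cluster intercepts u_i ~ N(0,1) are
# never observed, and y_ij | u_i ~ Bernoulli(expit(theta + u_i)).
I, J, theta0 = 50, 8, 0.7
u = rng.standard_normal(I)
y = (rng.uniform(size=(I, J)) < expit(theta0 + u[:, None]))
succ = y.sum(axis=1).astype(float)

# One fixed set of proposal draws (the N(0,1) prior of u) is shared by
# every theta, so theta enters only through the weights.
m = 2000
u_draws = rng.standard_normal(m)

def obs_loglik(theta):
    p = expit(theta + u_draws)[None, :]                  # shape (1, m)
    ll = succ[:, None] * np.log(p) + (J - succ)[:, None] * np.log1p(-p)
    mx = ll.max(axis=1, keepdims=True)                   # log-mean-exp per cluster
    return float(np.sum(mx[:, 0] + np.log(np.exp(ll - mx).mean(axis=1))))

res = minimize_scalar(lambda t: -obs_loglik(t), bounds=(-3.0, 3.0), method="bounded")
print("importance-sampled MLE of theta:", res.x)
```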

12.
Hybrid censoring is a mixture of Type-I and Type-II censoring schemes. This article presents statistical inference on Weibull parameters when the data are hybrid censored. The maximum likelihood estimators (MLEs) and the approximate maximum likelihood estimators are developed for estimating the unknown parameters. Asymptotic distributions of the MLEs are used to construct approximate confidence intervals. Bayes estimates and the corresponding highest posterior density credible intervals of the unknown parameters are obtained under suitable priors on the unknown parameters and using the Gibbs sampling procedure. The method of obtaining the optimum censoring scheme based on the maximum information measure is also developed. Monte Carlo simulations are performed to compare the performances of the different methods and one data set is analyzed for illustrative purposes.
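For a flavour of the MLE computation: given the realized hybrid-censoring stopping time, the Weibull scale profiles out in closed form, leaving a one-dimensional search over the shape. A minimal sketch with illustrative data and censoring plan:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(10)

# Weibull(t; a, lam): density a*lam*t^(a-1)*exp(-lam*t^a). Type-I hybrid
# censoring stops at tau = min(T, t_(r)); given tau, the scale profiles out:
#   lam_hat(a) = d / (sum_i t_i^a + (n - d) * tau^a).
n, r, T = 60, 50, 2.0
t_all = np.sort(2.0 * rng.weibull(1.4, n))     # true shape 1.4
tau = min(T, t_all[r - 1])
fails = t_all[t_all <= tau]
d = len(fails)

def neg_profile(a):
    s = np.sum(fails**a) + (n - d) * tau**a
    lam = d / s
    # Profile log-likelihood (up to constants), negated for minimization.
    return -(d * np.log(a) + d * np.log(lam) + (a - 1) * np.sum(np.log(fails)) - d)

a_hat = minimize_scalar(neg_profile, bounds=(0.1, 10.0), method="bounded").x
lam_hat = d / (np.sum(fails**a_hat) + (n - d) * tau**a_hat)
print("shape MLE:", a_hat, " scale MLE:", lam_hat)
```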

13.
This paper is concerned with person parameter estimation in the binary Rasch model. The loss of efficiency of a pseudo-, quasi-, or composite likelihood approach is investigated. By means of a Monte Carlo study, two quasi-likelihood estimators are compared with two well-established maximum likelihood approaches, one of which is a weighted likelihood procedure. The results show that the observed values of the root mean squared error are practically equivalent for the compared estimators in the case of a sufficiently large number of items.
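ML person parameter estimation in the binary Rasch model reduces to a one-dimensional Newton-Raphson problem: given item difficulties b and raw score r (with 0 < r < k, since extreme scores have no finite MLE), theta solves sum_i expit(theta - b_i) = r. A minimal sketch with illustrative difficulties:

```python
import numpy as np
from scipy.special import expit

def rasch_theta(r, b, tol=1e-10):
    # Newton-Raphson for sum_i expit(theta - b_i) = r; the derivative of
    # the score is the test information sum_i p_i * (1 - p_i).
    theta = 0.0
    for _ in range(100):
        p = expit(theta - b)
        g = p.sum() - r
        if abs(g) < tol:
            break
        theta -= g / np.sum(p * (1.0 - p))
    return theta

b = np.array([-1.5, -0.5, 0.0, 0.5, 1.0, 2.0])   # illustrative difficulties
for r in range(1, len(b)):                        # r = 0 or k excluded
    print(r, round(rasch_theta(r, b), 3))
```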

14.
It is well-known that the nonparametric maximum likelihood estimator (NPMLE) of a survival function may severely underestimate the survival probabilities at very early times for left truncated data. This problem might be overcome by instead computing a smoothed nonparametric estimator (SNE) via the EMS algorithm. The close connection between the SNE and the maximum penalized likelihood estimator is also established. Extensive Monte Carlo simulations demonstrate the superior performance of the SNE over that of the NPMLE, in terms of either bias or variance, even for moderately large samples. The methodology is illustrated with an application to the Massachusetts Health Care Panel Study dataset to estimate the probability of being functionally independent for non-poor male and female groups respectively.

15.
In many situations it is necessary to test the equality of the means of two normal populations when the variances are unknown and unequal. This paper studies the celebrated and controversial Behrens-Fisher problem via an adjusted likelihood-ratio test using the maximum likelihood estimates of the parameters under both the null and the alternative models. This procedure allows the significance level to be adjusted in accordance with the degrees of freedom to balance the risk due to the bias of the maximum likelihood estimates against the risk due to the increase in variance. A large-scale Monte Carlo investigation is carried out to show that -2 ln Λ has an empirical chi-square distribution with fractional degrees of freedom rather than a chi-square distribution with one degree of freedom. Monte Carlo power curves are also investigated under several different conditions to compare the performance of several conventional procedures with that of the proposed procedure with respect to control over Type I errors and power.
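The statistic itself is easy to compute numerically: under the null the common mean is found by maximizing the profile log-likelihood with each variance profiled out at its ML value, and -2 ln Λ follows by comparison with the unrestricted fit. A minimal sketch with illustrative data:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(11)
x = rng.normal(0.0, 1.0, 15)      # illustrative samples with unequal variances
y = rng.normal(0.3, 3.0, 10)

def neg_prof_ll0(mu):
    # Profile log-likelihood under H0 (common mean mu), each variance set
    # to its ML value mean((data - mu)^2); constants cancel in the ratio.
    return 0.5 * (len(x) * np.log(np.mean((x - mu) ** 2))
                  + len(y) * np.log(np.mean((y - mu) ** 2)))

# The H0 MLE of mu lies between the two sample means.
res = minimize_scalar(neg_prof_ll0,
                      bounds=(min(x.mean(), y.mean()), max(x.mean(), y.mean())),
                      method="bounded")
ll0 = -res.fun
ll1 = -0.5 * (len(x) * np.log(np.mean((x - x.mean()) ** 2))
              + len(y) * np.log(np.mean((y - y.mean()) ** 2)))
print("-2 ln Lambda:", 2.0 * (ll1 - ll0))   # referred to a chi-square with
                                            # fractional degrees of freedom
```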

16.
In this paper, we consider the problem of estimating the scale parameter of the inverse Rayleigh distribution based on general progressively Type-II censored samples and progressively Type-II censored samples. The pivotal quantity method is used to derive the estimator of the scale parameter. Moreover, since the maximum likelihood estimator is difficult to obtain for this distribution, we derive an explicit estimator of the scale parameter by approximating the likelihood equation with a Taylor expansion. Interval estimation is also studied based on pivotal inference. We then conduct Monte Carlo simulations and compare the performance of the different estimators, demonstrating that the pivotal inference is simpler and more effective. Further applications of the pivotal quantity method are also discussed theoretically. Finally, two real data sets are analysed using our methods.
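The pivotal construction is transparent in the complete-sample case (the paper builds the analogous pivot for progressively Type-II censored samples): with cdf F(x) = exp(-lam/x^2), the quantity lam/X^2 is standard exponential, so 2*lam*sum(1/X_i^2) is chi-square with 2n degrees of freedom, giving both an estimator and exact confidence limits. Values below are illustrative.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(12)

# Inverse Rayleigh, cdf F(x) = exp(-lam / x^2): lam / X^2 ~ Exp(1), hence
# W = 2 * lam * sum(1 / X_i^2) ~ chi-square(2n) is an exact pivot.
n, lam = 40, 3.0
x = np.sqrt(-lam / np.log(rng.uniform(size=n)))   # inverse-cdf sampling

s = np.sum(1.0 / x**2)
lam_hat = n / s                                   # point estimate (here the MLE)
lo = chi2.ppf(0.025, 2 * n) / (2 * s)
hi = chi2.ppf(0.975, 2 * n) / (2 * s)
print(f"estimate {lam_hat:.3f}, exact 95% CI ({lo:.3f}, {hi:.3f})")
```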

17.
Various exact tests for statistical inference are available for powerful and accurate decision rules provided that corresponding critical values are tabulated or evaluated via Monte Carlo methods. This article introduces a novel hybrid method for computing p-values of exact tests by combining Monte Carlo simulations and statistical tables generated a priori. To use the data from Monte Carlo generations and tabulated critical values jointly, we employ kernel density estimation within Bayesian-type procedures. The p-values are linked to the posterior means of quantiles. In this framework, we present relevant information from the Monte Carlo experiments via likelihood-type functions, whereas tabulated critical values are used to reflect prior distributions. The local maximum likelihood technique is employed to compute functional forms of prior distributions from statistical tables. Empirical likelihood functions are proposed to replace parametric likelihood functions within the structure of the posterior mean calculations, providing a Bayesian-type procedure with a distribution-free set of assumptions. We derive the asymptotic properties of the proposed nonparametric posterior means of quantiles process. Using the theoretical propositions, we calculate the minimum number of Monte Carlo resamples needed for a desired level of accuracy on the basis of distances between actual data characteristics (e.g. sample sizes) and characteristics of the data used to present corresponding critical values in a table. The proposed approach makes practical applications of exact tests simple and rapid. Implementations of the proposed technique are easily carried out via the recently developed STATA and R statistical packages.

18.
This note compares a Bayesian Markov chain Monte Carlo approach implemented by Watanabe with a maximum likelihood (ML) approach based on an efficient importance sampling procedure to estimate dynamic bivariate mixture models. In these models, stock price volatility and trading volume are jointly directed by the unobservable number of price-relevant information arrivals, which is specified as a serially correlated random variable. It is shown that the efficient importance sampling technique is extremely accurate and that it produces results that differ significantly from those reported by Watanabe.

19.
This article contains comments on "Bayesian Analysis of Stochastic Volatility Models," by Jacquier, Polson, and Rossi. The Markov chain Monte Carlo (MCMC) method proposed is compared empirically with a simulated maximum likelihood (SML) method. The MCMC and SML estimators yield very similar results, both when applied to actual data and in a Monte Carlo experiment.

20.
Estimating parameters in a stochastic volatility (SV) model is a challenging task. Among other estimation methods and approaches, efficient simulation methods based on importance sampling have been developed for Monte Carlo maximum likelihood estimation of univariate SV models. This paper shows that importance sampling methods can be used in a general multivariate SV setting. The sampling methods are computationally efficient. To illustrate the versatility of this approach, three different multivariate stochastic volatility models are estimated for a standard data set. The empirical results are compared to those from earlier studies in the literature. Monte Carlo simulation experiments, based on parameter estimates from the standard data set, are used to show the effectiveness of the importance sampling methods.
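To show only the Monte Carlo maximum likelihood principle, here is a naive simulated likelihood for a univariate SV model that uses the state transition itself as the importance proposal; the papers above use far more efficient tailored proposals, and all parameter values below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(13)

# Simulate a basic univariate SV model: h_t = phi*h_{t-1} + sig*eta_t,
# y_t = exp(h_t / 2) * eps_t, at illustrative true values.
T, phi0, sig0 = 300, 0.95, 0.3
h = np.zeros(T)
for t in range(1, T):
    h[t] = phi0 * h[t - 1] + sig0 * rng.standard_normal()
y = np.exp(h / 2) * rng.standard_normal(T)

m = 500
eta = rng.standard_normal((m, T))      # common random numbers across theta

def neg_sim_loglik(theta):
    phi, sig = theta
    if not (0.0 < phi < 0.999 and sig > 0.0):
        return 1e10
    # Draw m volatility paths from the state transition (the proposal), so
    # the importance weight of each path is p(y | h) -- naive, but it makes
    # the simulated likelihood a smooth function of theta.
    H = np.zeros((m, T))
    for t in range(1, T):
        H[:, t] = phi * H[:, t - 1] + sig * eta[:, t]
    lw = np.sum(-0.5 * (np.log(2 * np.pi) + H + y**2 * np.exp(-H)), axis=1)
    mx = lw.max()
    return -(mx + np.log(np.mean(np.exp(lw - mx))))

res = minimize(neg_sim_loglik, x0=np.array([0.9, 0.4]), method="Nelder-Mead")
print("simulated ML estimates (phi, sig):", res.x)
```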
