Similar Literature: 20 similar documents found.
1.
This article discusses two asymmetrization methods, Azzalini's representation and beta generation, to generate asymmetric bimodal models, including two novel beta-generated models. The practical utility of these models is assessed with nine data sets from different fields of applied sciences. Besides this tutorial assessment, some methodological contributions are made: a random number generator for the asymmetric Rathie–Swamee model is developed (generators for the other models are already known and briefly described), and a new likelihood ratio test of unimodality is compared via simulations with other available tests. Several tools have been used to quantify and test for bimodality and assess goodness of fit, including the Bayesian information criterion, measures of agreement with the empirical distribution and the Kolmogorov–Smirnov test. In the nine case studies, the results favoured models derived from Azzalini's asymmetrization, but no single model provided the best fit across the applications considered. In only two cases was the normal mixture selected as the best model. Parameter estimation has been done by likelihood maximization. Numerical optimization must be performed with care since local optima are often present. We conclude that the models considered are flexible enough to fit different bimodal shapes and that the tools studied should be used with care and attention to detail.
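For orientation, Azzalini's asymmetrization starts from the following well-known lemma; the particular bimodal variants and beta-generated models studied in the article are not reproduced here. If $f_0$ is a density symmetric about zero, $G$ is an absolutely continuous distribution function whose density is symmetric about zero, and $w$ is an odd function, then

$$
f(x) = 2\,f_0(x)\,G\{w(x)\}
$$

is a density; taking $f_0=\phi$, $G=\Phi$ and $w(x)=\alpha x$ gives the skew-normal density. A convenient sampling scheme follows: draw $X\sim f_0$ and $Y\sim G$ independently and return $X$ if $Y\le w(X)$ and $-X$ otherwise.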

2.
We present a novel methodology for estimating the parameters of a finite mixture model (FMM) based on partially rank‐ordered set (PROS) sampling and use it in a fishery application. A PROS sampling design first selects a simple random sample of fish and creates partially rank‐ordered judgement subsets by dividing units into subsets of prespecified sizes. The final measurements are then obtained from these partially ordered judgement subsets. The traditional expectation–maximization algorithm is not directly applicable to these observations. We propose a suitable expectation–maximization algorithm to estimate the parameters of the FMMs based on PROS samples. We also study the problem of classification of the PROS sample into the components of the FMM. We show that the maximum likelihood estimators based on PROS samples perform substantially better than their simple random sample counterparts even with small samples. The results are used to classify a fish population using length‐frequency data.

3.
It is well known that there exist multiple roots of the likelihood equations for finite normal mixture models. Selecting a consistent root for finite normal mixture models has long been a challenging problem. Simply using the root with the largest likelihood will not work because of the spurious roots. In addition, the likelihood of normal mixture models with unequal variances is unbounded and thus its maximum likelihood estimate (MLE) is not well defined. In this paper, we propose a simple root selection method for univariate normal mixture models by incorporating the idea of a goodness of fit test. Our new method inherits both the consistency properties of distance estimators and the efficiency of the MLE. The new method is simple to use and its computation can be easily done using existing R packages for mixture models. In addition, the proposed root selection method is very general and can also be applied to other univariate mixture models. We demonstrate the effectiveness of the proposed method and compare it with some other existing methods through simulation studies and a real data application.
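As a rough illustration of the idea of pairing likelihood roots with a goodness-of-fit criterion, the sketch below runs EM for a two-component univariate normal mixture from several starting values and keeps the root whose fitted CDF is closest to the empirical CDF in Kolmogorov–Smirnov distance. This is only a plausible reading of the approach; the article's actual selection rule, its theoretical guarantees and the R packages it relies on are not reproduced.

```python
# Sketch only: select among EM roots of a two-component normal mixture by
# goodness of fit rather than by likelihood value alone.
import numpy as np
from scipy.stats import norm

def em_two_normals(x, pi0, mu0, sd0, n_iter=500, tol=1e-8):
    """Plain EM from one starting value; returns (weights, means, sds)."""
    pi, mu, sd = np.asarray(pi0, float), np.asarray(mu0, float), np.asarray(sd0, float)
    for _ in range(n_iter):
        dens = pi * norm.pdf(x[:, None], mu, sd)            # E-step, n x 2
        resp = dens / dens.sum(axis=1, keepdims=True)
        nk = resp.sum(axis=0)                               # M-step
        new_pi = nk / len(x)
        new_mu = (resp * x[:, None]).sum(axis=0) / nk
        new_sd = np.sqrt((resp * (x[:, None] - new_mu) ** 2).sum(axis=0) / nk)
        new_sd = np.maximum(new_sd, 1e-3)                   # crude guard against degeneracy
        done = np.max(np.abs(new_mu - mu)) < tol
        pi, mu, sd = new_pi, new_mu, new_sd
        if done:
            break
    return pi, mu, sd

def ks_distance(x, pi, mu, sd):
    """Sup distance between the fitted mixture CDF and the empirical CDF."""
    xs = np.sort(x)
    ecdf = np.arange(1, len(xs) + 1) / len(xs)
    fitted = (pi * norm.cdf(xs[:, None], mu, sd)).sum(axis=1)
    return np.max(np.abs(fitted - ecdf))

def select_root(x, starts):
    """starts: iterable of (pi0, mu0, sd0) triples; keep the best-fitting root."""
    roots = [em_two_normals(x, *s) for s in starts]
    return min(roots, key=lambda r: ks_distance(x, *r))
```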

4.
Let (X, Y) be a random vector, where Y denotes the variable of interest, possibly subject to random right censoring, and X is a covariate. We construct confidence intervals and bands for the conditional survival and quantile function of Y given X using a non-parametric likelihood ratio approach. This approach was introduced by Thomas & Grunkemeier (1975), who estimated confidence intervals of survival probabilities based on right censored data. The method is appealing for several reasons: it always produces intervals inside [0, 1], it does not involve variance estimation, and it can produce asymmetric intervals. Asymptotic results for the confidence intervals and bands are obtained, as well as simulation results, in which the performance of the likelihood ratio intervals and bands is compared with that of the normal approximation method. We also propose a bandwidth selection procedure based on the bootstrap and apply the technique to a real data set.

5.
The paper compares several versions of the likelihood ratio test for exponential homogeneity against mixtures of two exponentials. They are based on different implementations of the likelihood maximization algorithm. We show that global maximization of the likelihood is not appropriate for obtaining good power of the LR test. A simple starting strategy for the EM algorithm, which under the null hypothesis often fails to find the global maximum, results in a rather powerful test. On the other hand, a multiple starting strategy that comes close to global maximization under both the null and the alternative hypotheses leads to inferior power.
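A minimal sketch of the two quantities behind such a test, assuming placeholder starting values: the homogeneous exponential log-likelihood and an EM fit of a two-component exponential mixture started from a single perturbation of the homogeneous rate. The paper's specific single-start and multiple-start strategies and its power study are not reproduced here.

```python
# Sketch only: LR statistic for exponential homogeneity vs. a two-component
# exponential mixture, with the mixture fitted by EM from one simple start.
import numpy as np

def em_exp_mixture(x, p, lam1, lam2, n_iter=1000):
    """EM for p*Exp(lam1) + (1-p)*Exp(lam2); returns the maximized log-likelihood."""
    for _ in range(n_iter):
        f1, f2 = lam1 * np.exp(-lam1 * x), lam2 * np.exp(-lam2 * x)
        r = p * f1 / (p * f1 + (1 - p) * f2)          # E-step responsibilities
        p = r.mean()                                  # M-step updates
        lam1 = r.sum() / (r * x).sum()
        lam2 = (1 - r).sum() / ((1 - r) * x).sum()
    mix = p * lam1 * np.exp(-lam1 * x) + (1 - p) * lam2 * np.exp(-lam2 * x)
    return np.sum(np.log(mix))

def lr_statistic(x):
    """2 * (mixture log-likelihood - single-exponential log-likelihood)."""
    lam0 = 1.0 / x.mean()
    loglik0 = len(x) * np.log(lam0) - lam0 * x.sum()
    # one simple starting strategy: split the homogeneous rate slightly apart
    loglik1 = em_exp_mixture(x, 0.5, 1.5 * lam0, 0.75 * lam0)
    return 2.0 * (loglik1 - loglik0)
```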

6.
The paper presents an overview of maximum likelihood estimation using simulated likelihood, including the use of antithetic variables and evaluation of the simulation error of the resulting estimates. It gives a general purpose implementation of simulated maximum likelihood and uses it to revisit four models that have previously appeared in the published literature: a state–space model for count data; a nested random effects model for binomial data; a nonlinear growth model with crossed random effects; and a crossed random effects model for binary salamander‐mating data. In the case of the last three examples, this appears to be the first time that maximum likelihood fits of these models have been presented.
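A minimal sketch of the basic device, under stated assumptions: a random-intercept logistic model stands in for the models above (it is not one of the four published examples), and the intractable integral over each cluster's random effect is approximated by an average over antithetic pairs of normal draws z and -z.

```python
# Sketch only: simulated log-likelihood with antithetic variables for a
# hypothetical random-intercept logistic model.
import numpy as np

def simulated_loglik(beta, sigma, y, x, n_draws=500, seed=0):
    """y, x: lists of per-cluster 1-D arrays; returns the simulated log-likelihood."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_draws // 2)
    z = np.concatenate([z, -z])                     # antithetic pairs
    total = 0.0
    for yi, xi in zip(y, x):
        # linear predictor for every draw: shape (n_draws, cluster size)
        eta = beta * xi[None, :] + sigma * z[:, None]
        p = 1.0 / (1.0 + np.exp(-eta))
        # conditional likelihood of the whole cluster given each simulated effect
        cond = np.prod(p ** yi[None, :] * (1.0 - p) ** (1 - yi[None, :]), axis=1)
        total += np.log(cond.mean())
        # the Monte Carlo standard error of cond.mean() measures the simulation error
    return total
```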

7.
In this article, we consider a competing cause scenario and assume the wider family of Conway–Maxwell–Poisson (COM–Poisson) distributions to model the number of competing causes. Assuming the data to be interval censored, the main contribution is in developing the steps of the expectation maximization (EM) algorithm to determine the maximum likelihood estimates (MLEs) of the model parameters. A profile likelihood approach within the EM framework is proposed to estimate the COM–Poisson shape parameter. An extensive simulation study is conducted to evaluate the performance of the proposed EM algorithm. Model selection within the wider class of COM–Poisson distributions is carried out using the likelihood ratio test and information-based criteria. A study to demonstrate the effect of model mis-specification is also carried out. Finally, the proposed estimation method is applied to data on smoking cessation and a detailed analysis of the obtained results is presented.
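For reference, the COM–Poisson probability mass function used to model the number of competing causes is

$$
P(Y=y)=\frac{\lambda^{y}}{(y!)^{\nu}\,Z(\lambda,\nu)},\qquad
Z(\lambda,\nu)=\sum_{j=0}^{\infty}\frac{\lambda^{j}}{(j!)^{\nu}},\qquad y=0,1,2,\dots,
$$

which contains the Poisson ($\nu=1$) and geometric ($\nu=0$, $\lambda<1$) distributions as special cases and approaches the Bernoulli as $\nu\to\infty$; the cure-rate construction and the interval-censoring EM steps developed in the article are not reproduced here.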

8.
This article investigates the Farlie–Gumbel–Morgenstern (FGM) class of models for exchangeable continuous data. We show how the model specification can account for both individual and cluster level covariates, we derive insights from comparisons with the multivariate normal distribution, and we discuss maximum likelihood inference when a sample of independent clusters of varying sizes is available. We propose a method for maximum likelihood estimation which is an alternative to direct numerical maximization of the likelihood, which sometimes exhibits non-convergence problems. We describe an algorithm for generating samples from the exchangeable multivariate Farlie–Gumbel–Morgenstern distribution with any marginals, using the structural properties of the distribution. Finally, we present the results of a simulation study designed to assess the properties of the maximum likelihood estimators, and we illustrate the use of the FGM distributions with the analysis of a small data set from a developmental toxicity study.
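As a small illustration of sampling from this family, the sketch below draws from the bivariate FGM copula, C(u, v) = uv[1 + θ(1 − u)(1 − v)], by conditional inversion. The article's algorithm for the exchangeable multivariate distribution with arbitrary marginals is more general and is not reproduced here.

```python
# Sketch only: conditional-inversion sampling from the bivariate FGM copula.
import numpy as np

def rfgm_bivariate(n, theta, rng=None):
    """Draw n pairs (u, v) from the bivariate FGM copula, |theta| <= 1."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    a = theta * (1.0 - 2.0 * u)
    # conditional CDF of V given U = u is v * [1 + a * (1 - v)]; invert the quadratic in v
    near_zero = np.abs(a) < 1e-12
    a_safe = np.where(near_zero, 1.0, a)          # avoid division by zero below
    v_quad = ((1.0 + a_safe) - np.sqrt((1.0 + a_safe) ** 2 - 4.0 * a_safe * w)) / (2.0 * a_safe)
    v = np.where(near_zero, w, v_quad)
    return u, v
```

Uniform pairs from the copula can then be mapped to any desired marginals by applying the corresponding inverse distribution functions.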

9.
The multivariate normal distribution, owing to its well-established theory, is commonly used to analyze correlated data of various types. However, the resulting inference is often invalid when the model assumption fails. We present a modification that adapts the multivariate normal likelihood to general correlated data. The modified likelihood is asymptotically valid for any true underlying joint distribution with finite second moments. One can therefore obtain full likelihood inference without knowing the true random mechanism underlying the data. Simulations and a real data analysis are provided to demonstrate the merit of our proposed parametric robust method.

10.
This article presents a mixture of three-parameter Weibull distributions to model wind speed data. The parameters are estimated by the maximum likelihood (ML) method, in which the maximization problem is treated as a nonlinear program with only inequality constraints and is solved numerically by the interior-point method. By applying this model to four lattice-point wind speed sequences extracted from National Centers for Environmental Prediction (NCEP) reanalysis data, it is observed that the mixture three-parameter Weibull distribution proposed in this paper provides a better fit than existing Weibull models for the analysis of the wind speed data under study.
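A rough sketch of this estimation setup under stated assumptions: the negative log-likelihood of a two-component mixture of three-parameter Weibull densities is minimized subject to positivity and location constraints. SciPy's bound-constrained L-BFGS-B solver is used here as a simple stand-in for the interior-point method of the article, and the starting values are left as placeholders.

```python
# Sketch only: constrained ML for a two-component three-parameter Weibull mixture.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def neg_loglik(params, x):
    p, k1, lam1, g1, k2, lam2, g2 = params
    f1 = weibull_min.pdf(x, k1, loc=g1, scale=lam1)   # shape k, location g, scale lam
    f2 = weibull_min.pdf(x, k2, loc=g2, scale=lam2)
    mix = p * f1 + (1.0 - p) * f2
    return -np.sum(np.log(np.clip(mix, 1e-300, None)))

def fit_mixture_weibull(x, start):
    """start: initial (p, k1, lam1, g1, k2, lam2, g2); locations must lie below min(x)."""
    eps = 1e-6
    bounds = [
        (eps, 1.0 - eps),                                   # mixing weight
        (eps, None), (eps, None), (None, x.min() - eps),    # shape, scale, location of component 1
        (eps, None), (eps, None), (None, x.min() - eps),    # shape, scale, location of component 2
    ]
    return minimize(neg_loglik, start, args=(x,), method="L-BFGS-B", bounds=bounds)
```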

11.
Maximum likelihood is a widely used estimation method in statistics. The method is model dependent and as such is criticized as being non-robust. In this article, we consider using the weighted likelihood method to make robust inferences for linear mixed models, where weights are determined at both the subject level and the observation level. This approach is appropriate for problems where maximum likelihood is the basic fitting technique but a subset of data points is discrepant with the model. It allows us to reduce the impact of outliers without complicating the basic linear mixed model with normally distributed random effects and errors. The weighted likelihood estimators are shown to be robust and asymptotically normal. Our simulation study demonstrates that the weighted estimates are much better than the unweighted ones when a subset of data points is far away from the rest. An application to the analysis of deglutition apnea duration in normal swallows shows that the differences between the weighted and unweighted estimates are due to a large number of outliers in the data set.

12.
The paper derives Bartlett corrections for improving the chi-square approximation to the likelihood ratio statistics in a class of location-scale distributions, which encompasses the elliptical family of distributions as well as asymmetric distributions such as the extreme value distributions. We present, in matrix notation, a Bartlett corrected likelihood ratio statistic for testing that a subset of the nonlinear regression coefficients in this class of models equals a given vector of constants. The formulae derived are simple enough to be used analytically to obtain several Bartlett corrections in a variety of important models. We show that these formulae generalize a number of previously published results. We also present simulation results comparing the sizes and powers of the usual likelihood ratio tests and their Bartlett corrected versions when the scale parameter is considered known and when this parameter is incorrectly specified.
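For context, a Bartlett correction in its generic form rescales the likelihood ratio statistic by its null expectation; the article's contribution is the closed-form matrix expressions for this expectation in the location-scale class, which are not reproduced here. With $w$ the likelihood ratio statistic on $q$ degrees of freedom,

$$
\mathrm{E}_{0}(w)=q\,(1+\epsilon_{q})+O(n^{-2}),\qquad
w^{*}=\frac{w}{1+\epsilon_{q}},\qquad \epsilon_{q}=O(n^{-1}),
$$

and the corrected statistic $w^{*}$ follows the $\chi^{2}_{q}$ distribution with error of order $n^{-2}$ rather than $n^{-1}$.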

13.
In this article, we introduce a new extension of the Birnbaum–Saunders (BS) distribution as a follow-up to the family of skew-flexible-normal distributions. This extension produces a family of BS distributions whose densities can be unimodal as well as bimodal. This flexibility is important in dealing with positive bimodal data, given the difficulties encountered when using mixtures of distributions. Some basic properties of the new distribution are studied, including moments. Parameter estimation is approached by the method of moments and also by maximum likelihood, including a derivation of the Fisher information matrix. Three real data illustrations indicate satisfactory performance of the proposed model.
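For reference, the baseline Birnbaum–Saunders distribution being extended has distribution function

$$
F(t;\alpha,\beta)=\Phi\!\left[\frac{1}{\alpha}\left(\sqrt{\frac{t}{\beta}}-\sqrt{\frac{\beta}{t}}\right)\right],\qquad t>0,\ \alpha>0,\ \beta>0,
$$

or, equivalently, $T=\beta\bigl(\alpha Z/2+\sqrt{(\alpha Z/2)^{2}+1}\,\bigr)^{2}$ with $Z\sim N(0,1)$, which also gives a direct way to simulate from it; the skew-flexible-normal-based extension itself is not reproduced here.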

14.
This paper presents a Bayesian analysis of partially linear additive models for quantile regression. We develop a semiparametric Bayesian approach to quantile regression models using a spectral representation of the nonparametric regression functions and the Dirichlet process (DP) mixture for the error distribution. We also consider Bayesian variable selection procedures for both parametric and nonparametric components in a partially linear additive model structure based on Bayesian shrinkage priors via a stochastic search algorithm. Based on the proposed Bayesian semiparametric additive quantile regression model, referred to as BSAQ, Bayesian inference is considered for estimation and model selection. For the posterior computation, we design a simple and efficient Gibbs sampler based on a location-scale mixture of exponential and normal distributions for the asymmetric Laplace distribution, which facilitates the commonly used collapsed Gibbs sampling algorithms for DP mixture models. Additionally, we discuss the asymptotic properties of the semiparametric quantile regression model in terms of consistency of the posterior distribution. Simulation studies and real data application examples illustrate the proposed method and compare it with Bayesian quantile regression methods in the literature.
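For background on the error model, the asymmetric Laplace density links the Bayesian formulation to the quantile check loss, and a location-scale mixture representation of it is what makes the Gibbs sampler conditionally normal. In one common parameterization (the exact one used in the article may differ),

$$
f(y\mid\mu,\sigma,p)=\frac{p(1-p)}{\sigma}\exp\!\left\{-\rho_{p}\!\left(\frac{y-\mu}{\sigma}\right)\right\},\qquad
\rho_{p}(u)=u\,\{p-I(u<0)\},
$$

and, with $v$ exponential with mean $\sigma$ and $z\sim N(0,1)$ independent of $v$,

$$
y=\mu+\theta v+\tau\sqrt{\sigma v}\,z,\qquad
\theta=\frac{1-2p}{p(1-p)},\qquad \tau^{2}=\frac{2}{p(1-p)}.
$$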

15.
Although selecting the error probability law in models of location-scale form is important, commonly used model selection procedures, such as AIC and BIC, do not apply to it. By treating the error probability law as a "parameter" of interest and location and scale as nuisance parameters, this paper proposes that the generalized modified profile likelihood (GMPL), considered as a quasi-likelihood function of the error probability law, be used to select the error probability law. The GMPL method achieves minimax rate optimality and is shown to be consistent. Simulations show its good performance for finite and even small samples. It is straightforward to generalize the GMPL of location-scale models to various models of location-scale form, including in particular linear regression models and their variants, in order to select their error probability laws. The author believes that GMPL and its variations would be quite promising for various model selection problems.

16.
We present a maximum likelihood estimation procedure for the multivariate frailty model. The estimation is based on a Monte Carlo EM algorithm. The expectation step is approximated by averaging over random samples drawn from the posterior distribution of the frailties using rejection sampling. The maximization step reduces to a standard partial likelihood maximization. We also propose a simple rule based on the relative change in the parameter estimates to decide on the sample size in each iteration and a stopping time for the algorithm. An important new concept is achieving absolute convergence of the algorithm through sample size determination and an efficient sampling technique. The method is illustrated using a rat carcinogenesis dataset and data on vase lifetimes of cut roses. The estimation results are compared with approximate inference based on penalized partial likelihood using these two examples. Unlike penalized partial likelihood estimation, the proposed full maximum likelihood estimation method accounts for all the uncertainty while estimating standard errors for the parameters.
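A generic sketch of the sampling device used in the expectation step, assuming the conditional likelihood of a frailty is bounded: propose from the frailty's prior and accept with probability proportional to that likelihood. The frailty model itself, the sample-size rule and the stopping rule of the paper are not reproduced, and the function arguments below are hypothetical.

```python
# Sketch only: rejection sampling from a posterior proportional to
# likelihood(b) * prior(b), proposing from the prior.
import numpy as np

def rejection_sample_posterior(loglik, sample_prior, n, log_bound, rng=None):
    """loglik(b) must satisfy loglik(b) <= log_bound for every b."""
    rng = rng or np.random.default_rng()
    draws = []
    while len(draws) < n:
        b = sample_prior(rng)                     # proposal from the prior
        if np.log(rng.uniform()) < loglik(b) - log_bound:
            draws.append(b)                       # accept with prob exp(loglik - bound)
    return np.array(draws)
```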

17.
We introduce a new class of distributions called the Weibull Marshall–Olkin-G family. We obtain some of its mathematical properties. The special models of this family provide bathtub-shaped, decreasing-increasing, increasing-decreasing-increasing, decreasing-increasing-decreasing, monotone, unimodal and bimodal hazard functions. The maximum likelihood method is adopted for estimating the model parameters. We assess the performance of the maximum likelihood estimators by means of two simulation studies. We also propose a new family of linear regression models for censored and uncensored data. The flexibility and importance of the proposed models are illustrated by means of three real data sets.

18.
A marginal–pairwise-likelihood estimation approach is examined in the mixed Rasch model with binary responses and logit link. This method, which belongs to the broad class of composite likelihood methods, provides estimators with desirable asymptotic properties such as consistency and asymptotic normality. We study the performance of the proposed methodology when the random effect distribution is misspecified. A simulation study was conducted to compare this approach with maximum marginal likelihood. The results are also illustrated with an analysis of a real data set from a quality-of-life study.
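In generic terms, a composite likelihood of this kind replaces the full marginal likelihood with a sum of low-dimensional log-probabilities; one common pairwise form (the article's exact combination of marginal and pairwise terms may differ) is

$$
p\ell(\theta)=\sum_{i}\sum_{j<k}\log P_{\theta}\!\left(Y_{ij}=y_{ij},\,Y_{ik}=y_{ik}\right),
$$

where each bivariate probability is obtained by integrating the two logit item responses of subject $i$ over the random-effect distribution.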

19.
The skew-generalized-normal distribution [Arellano-Valle RB, Gómez HW, Quintana FA. A new class of skew-normal distributions. Comm Statist Theory Methods 2004;33(7):1465–1480] is a class of asymmetric normal distributions which contains the normal and skew-normal distributions as special cases. The main virtues of this distribution are that it is easy to simulate from and that it admits a genuine expectation–maximization (EM) algorithm for maximum likelihood estimation. In this paper, we extend the EM algorithm to linear regression models assuming skew-generalized-normal random errors, and we develop diagnostic analyses via local influence and generalized leverage, following Zhu and Lee's approach, since Cook's well-known approach would be more complicated to use to obtain measures of local influence. Finally, results obtained for a real data set are reported, illustrating the usefulness of the proposed method.
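For reference, the standardized skew-generalized-normal density is commonly written as (the cited paper should be consulted for the exact parameterization)

$$
f(z;\lambda_{1},\lambda_{2})=2\,\phi(z)\,\Phi\!\left(\frac{\lambda_{1}z}{\sqrt{1+\lambda_{2}z^{2}}}\right),\qquad z\in\mathbb{R},\ \lambda_{1}\in\mathbb{R},\ \lambda_{2}\ge 0,
$$

which reduces to the normal density when $\lambda_{1}=0$ and to the skew-normal density when $\lambda_{2}=0$.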

20.
This article considers a class of estimators for the location and scale parameters in the location-scale model based on 'synthetic data' when the observations are randomly censored on the right. The asymptotic normality of the estimators is established using counting process and martingale techniques when the censoring distribution is known and unknown, respectively. In the case when the censoring distribution is known, we show that the asymptotic variances of this class of estimators depend on the data transformation and have a lower bound which is not achievable by this class of estimators. However, when the censoring distribution is unknown and estimated by the Kaplan–Meier estimator, this class of estimators has the same asymptotic variance and attains the lower bound for variance for the case of known censoring distribution. This is different from censored regression analysis, where asymptotic variances depend on the data transformation. Our method has three valuable advantages over the method of maximum likelihood estimation. First, our estimators are available in closed form and do not require an iterative algorithm. Second, simulation studies show that our estimators, being moment-based, are comparable to maximum likelihood estimators and outperform them when the sample size is small and the censoring rate is high. Third, our estimators are more robust to model misspecification than maximum likelihood estimators. Therefore, our method can serve as a competitive alternative to the method of maximum likelihood in estimation for location-scale models with censored data. A numerical example is presented to illustrate the proposed method.
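As a rough illustration of the 'synthetic data' device on which such closed-form estimators are typically built (ties and left limits in the Kaplan–Meier step are glossed over, and the article's specific class of location-scale estimators is not reproduced), censored responses can be inflated by the estimated censoring survival function so that the transformed variable has approximately the same mean as the uncensored response.

```python
# Sketch only: synthetic responses Y* = delta * Y / S_G(Y), with S_G the
# Kaplan-Meier estimate of the censoring survival function.
import numpy as np

def km_survival(times, events):
    """Simplified Kaplan-Meier estimate evaluated at each input time."""
    order = np.argsort(times)
    d = events[order]
    n = len(times)
    at_risk = n - np.arange(n)
    factors = np.where(d == 1, (at_risk - 1) / at_risk, 1.0)
    surv_sorted = np.cumprod(factors)             # product-limit estimate
    surv = np.empty(n)
    surv[order] = surv_sorted
    return surv

def synthetic_responses(y_obs, delta):
    """delta = 1 if the response was observed, 0 if right censored."""
    # Kaplan-Meier for the censoring distribution: censoring is the 'event'
    s_g = km_survival(y_obs, 1 - delta)
    s_g = np.maximum(s_g, 1e-10)                  # guard against division by zero
    return delta * y_obs / s_g
```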
