Similar Articles
20 similar articles found.
1.
Censoring can occur in many statistical analyses in the framework of experimental design. In this study, we estimate the model parameters in one-way ANOVA under Type II censoring. We assume that the distribution of the error terms is Azzalini's skew normal. In the estimation procedure, we use Tiku's modified maximum likelihood (MML) methodology, a modified version of the well-known maximum likelihood (ML) methodology. Unlike ML, MML is non-iterative and gives explicit estimators of the model parameters. We also propose new test statistics based on the proposed estimators. The performances of the proposed estimators and the test statistics based on them are compared with the corresponding normal-theory results via a Monte Carlo simulation study. At the end of the study, a real-life data set is analysed to show the implementation of the methodology presented in this paper.
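
The following is a minimal sketch, not code from the paper: it simulates one-way ANOVA responses with Azzalini skew-normal errors and applies Type II censoring within each group, so that only the r smallest order statistics are observed. The group means, skewness parameter, and censoring number are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the paper's setup):
# one-way ANOVA data with skew-normal errors, Type II censored per group.
import numpy as np
from scipy.stats import skewnorm

rng = np.random.default_rng(0)
means = [0.0, 1.0, 2.0]        # hypothetical treatment means
n, r, shape = 10, 7, 5.0       # group size, retained order statistics, skewness

censored_samples = []
for mu in means:
    errors = skewnorm.rvs(a=shape, size=n, random_state=rng)
    y = np.sort(mu + errors)
    censored_samples.append(y[:r])   # Type II censoring: only the r smallest values observed

for g, y in enumerate(censored_samples, start=1):
    print(f"group {g}: observed {len(y)} of {n} responses:", np.round(y, 2))
```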

2.
In this article, we assume that the distribution of the error terms in two-way analysis of variance (ANOVA) is skew t. The skew t distribution is very flexible for modeling both symmetric and skew data sets, since it reduces to the well-known normal, skew normal, and Student's t distributions. We obtain the estimators of the model parameters by using the maximum likelihood (ML) and the modified maximum likelihood (MML) methodologies. We also propose new test statistics based on these estimators for testing the equality of the treatment and the block means and also the interaction effect. The efficiencies of the ML and the MML estimators and the power values of the test statistics based on them are compared with the corresponding normal-theory results via a Monte Carlo simulation study. Simulation results show that the proposed methodologies are preferable. We also show that the test statistics based on the ML estimators are more powerful than the test statistics based on the MML estimators, as expected. However, the power values of the test statistics based on the MML estimators are very close to those of the corresponding test statistics based on the ML estimators. At the end of the study, a real-life example is given to show the implementation of the proposed methodologies.

3.
In this study, testing the equality of mean vectors in one-way multivariate analysis of variance (MANOVA) is considered when each data set has a monotone pattern of missing observations. The likelihood ratio test (LRT) statistic in one-way MANOVA with monotone missing data is given. Furthermore, the modified test (MT) statistic based on the likelihood ratio (LR) and the modified LRT (MLRT) statistic with monotone missing data are proposed using a decomposition of the LR and an asymptotic expansion for each decomposed LR. The accuracy of the chi-square approximation is investigated using a Monte Carlo simulation. Finally, an example is given to illustrate the methods.

4.
A Bayesian approach to dynamic Tobit models
This paper develops a posterior simulation method for a dynamic Tobit model. The major obstacle in such a problem lies in the high-dimensional integrals, induced by dependence among censored observations, that appear in the likelihood function. The primary contribution of this study is to develop a practical and efficient sampling scheme for the conditional posterior distributions of the censored (i.e., unobserved) data, so that the Gibbs sampler with the data augmentation algorithm can be applied successfully. The substantial differences between this approach and some existing methods are highlighted. The proposed simulation method is investigated by means of a Monte Carlo study and applied to a regression model of Japanese exports of passenger cars to the U.S. subject to a non-tariff trade barrier.
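
Below is a minimal sketch of the data-augmentation step only, not the authors' full Gibbs sampler: censored observations are imputed with draws from a normal distribution truncated at the censoring point. The conditional means and variance would normally come from the other Gibbs blocks; here they are fixed, hypothetical values.

```python
# Minimal sketch of a data-augmentation step for a censored (Tobit-type) model.
# Censored y* <= c are imputed from a normal truncated above at c.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)
c = 0.0                                 # censoring threshold: y* <= c is recorded as c
mu = np.array([-0.5, 0.2, -1.0])        # conditional means of the censored latent y*_i (assumed)
sigma = 1.0                             # conditional standard deviation (assumed)

# truncnorm uses standardized bounds; the support here is (-inf, (c - mu)/sigma]
lower = -np.inf
upper = (c - mu) / sigma
latent = truncnorm.rvs(a=lower, b=upper, loc=mu, scale=sigma, random_state=rng)
print("imputed latent values:", np.round(latent, 3))
```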

6.
We develop and evaluate analytic and bootstrap bias-corrected maximum-likelihood estimators for the shape parameter in the Nakagami distribution. This distribution is widely used in a variety of disciplines, and the corresponding estimator of its scale parameter is trivially unbiased. We find that both ‘corrective’ and ‘preventive’ analytic approaches to eliminating the bias, to O(n^{-2}), are equally, and extremely, effective and simple to implement. As a bonus, the sizeable reduction in bias comes with a small reduction in the mean-squared error. Overall, we prefer analytic bias corrections in the case of this estimator. This preference is based on the relative computational costs and the magnitudes of the bias reductions that can be achieved in each case. Our results are illustrated with two real-data applications, including one that provides the first application of the Nakagami distribution to data on ocean wave heights.
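
A minimal sketch of the bootstrap side of this idea (not the authors' code, and not the analytic corrections): a parametric-bootstrap bias correction of the Nakagami shape parameter's MLE, using m_corrected = 2*m_hat - mean(bootstrap replicates). The sample size, true parameter values, and number of replicates are illustrative assumptions.

```python
# Minimal sketch: parametric-bootstrap bias correction of the Nakagami shape MLE.
import numpy as np
from scipy.stats import nakagami

rng = np.random.default_rng(2)
m_true, omega, n, B = 2.0, 1.0, 30, 200          # illustrative values

x = nakagami.rvs(m_true, scale=np.sqrt(omega), size=n, random_state=rng)
m_hat, _, scale_hat = nakagami.fit(x, floc=0)    # MLE, location fixed at 0

boot = np.empty(B)
for b in range(B):
    xb = nakagami.rvs(m_hat, scale=scale_hat, size=n, random_state=rng)
    boot[b], _, _ = nakagami.fit(xb, floc=0)

m_bc = 2.0 * m_hat - boot.mean()                 # bootstrap bias-corrected estimate
print(f"MLE of shape: {m_hat:.3f}   bias-corrected: {m_bc:.3f}")
```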

7.
In this paper, we discuss some theoretical results and properties of the discrete Weibull distribution, which was introduced by Nakagawa and Osaki [The discrete Weibull distribution. IEEE Trans Reliab. 1975;24:300–301]. We study the monotonicity of the probability mass, survival and hazard functions. Moreover, reliability, moments, p-quantiles, entropies and order statistics are also studied. We consider likelihood-based methods to estimate the model parameters based on complete and censored samples, and to derive confidence intervals. We also consider two additional methods to estimate the model parameters. The uniqueness of the maximum likelihood estimate of one of the parameters that index the discrete Weibull model is discussed. Numerical evaluation of the considered model is performed by Monte Carlo simulations. For illustrative purposes, two real data sets are analyzed.
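
The sketch below illustrates the Nakagawa-Osaki discrete Weibull through its survival function S(x) = P(X >= x) = q^(x^beta) for x = 0, 1, 2, ..., with 0 < q < 1 and beta > 0; the parameter values used are illustrative only, and the paper's estimation and censoring methods are not reproduced.

```python
# Minimal sketch: probability mass, survival and hazard functions of the
# Nakagawa-Osaki discrete Weibull distribution.
import numpy as np

def dw_survival(x, q, beta):
    x = np.asarray(x, dtype=float)
    return q ** (x ** beta)                     # S(x) = P(X >= x)

def dw_pmf(x, q, beta):
    x = np.asarray(x, dtype=float)
    return q ** (x ** beta) - q ** ((x + 1.0) ** beta)

def dw_hazard(x, q, beta):
    return dw_pmf(x, q, beta) / dw_survival(x, q, beta)

x = np.arange(6)
q, beta = 0.8, 1.3                              # illustrative parameter values
print("pmf:     ", np.round(dw_pmf(x, q, beta), 4))
print("survival:", np.round(dw_survival(x, q, beta), 4))
print("hazard:  ", np.round(dw_hazard(x, q, beta), 4))
```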

8.
This article considers misclassification of categorical covariates in the context of regression analysis; if unaccounted for, such errors usually result in mis-estimation of model parameters. With the presence of additional covariates, we exploit the fact that explicitly modelling non-differential misclassification with respect to the response leads to a mixture regression representation. Under the framework of mixture of experts, we enable the reclassification probabilities to vary with other covariates, a situation commonly caused by misclassification that is differential on certain covariates and/or by dependence between the misclassified and additional covariates. Using Bayesian inference, the mixture approach combines learning from data with external information on the magnitude of errors when it is available. In addition to proving the theoretical identifiability of the mixture of experts approach, we study the amount of efficiency loss resulting from covariate misclassification and the usefulness of external information in mitigating such loss. The method is applied to adjust for misclassification on self-reported cocaine use in the Longitudinal Studies of HIV-Associated Lung Infections and Complications.

9.
We consider the problem of making inferences on the common mean of several heterogeneous log-normal populations. We apply the parametric bootstrap (PB) approach and the method of variance estimate recovery (MOVER) to construct confidence intervals for the log-normal common mean. We then compare the performance of the proposed confidence intervals with that of existing confidence intervals via an extensive simulation study. Simulation results show that the proposed MOVER and PB confidence intervals can be recommended generally for different sample sizes and numbers of populations.
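
The sketch below is an assumption-laden illustration, not the paper's PB pivot or its MOVER interval: it computes a parametric-bootstrap percentile interval for the mean exp(mu + sigma^2/2) of a single log-normal sample, showing only the bootstrap idea for one population rather than the common mean of several heterogeneous ones.

```python
# Minimal sketch: parametric-bootstrap percentile interval for the mean of one
# log-normal sample (toy data; not the paper's common-mean procedure).
import numpy as np

rng = np.random.default_rng(5)
x = rng.lognormal(mean=1.0, sigma=0.5, size=40)        # toy data
logx = np.log(x)
mu_hat, s2_hat = logx.mean(), logx.var(ddof=1)

B = 2000
boot_means = np.empty(B)
for b in range(B):
    lb = rng.normal(mu_hat, np.sqrt(s2_hat), size=x.size)   # parametric resample on log scale
    boot_means[b] = np.exp(lb.mean() + lb.var(ddof=1) / 2.0)

lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"estimate: {np.exp(mu_hat + s2_hat / 2):.3f}   95% PB interval: ({lo:.3f}, {hi:.3f})")
```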

10.
Cooray and Ananda introduced a two-parameter generalized half-normal distribution which is useful for modelling lifetime data, but its maximum likelihood estimators (MLEs) are biased in finite samples. This motivates us to construct nearly unbiased estimators for the unknown parameters of the model. In this paper, we adopt two approaches for bias reduction of the MLEs of the parameters of the generalized half-normal distribution. The first approach is the analytical methodology suggested by Cox and Snell and the second is based on the parametric bootstrap resampling method. Additionally, the method of moments estimators (MMEs) are used for comparison purposes. The numerical evidence shows that the analytic bias-corrected estimators significantly outperform their bootstrap-based counterparts, as well as the MLEs and MMEs, for small and moderate samples. It is also apparent from the results that the bias-corrected estimates of the shape parameter perform better than those of the scale parameter. Further, the results show that the bias-correction scheme yields nearly unbiased estimates. Finally, six fracture toughness real data sets illustrate the application of our methods.

11.
Different procedures for testing problems concerning the intraclass correlation from familial data are considered in the case of varying numbers of siblings per family. Under the assumption of multivariate normality, the hypothesis that the intraclass correlation is equal to a specified value is tested. To assess the performance of the tests, Monte Carlo simulations are designed to compare their powers. Neyman's (1959) C(α) test and the test based on the modified ANOVA F statistic are shown to be consistently more powerful than the other procedures.

12.
A normal-theory statistic and two distribution-free statistics used for multiple comparisons of homogeneity of location are compared on simulated data generated from six distributions. The normal-theory statistic is found to be fairly robust to departures from the assumption of normally distributed data of the types considered. The Steel-Dwass statistic is generally more powerful than a Kruskal-Wallis range statistic.
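
As a point of reference only, the sketch below applies the rank-based Kruskal-Wallis omnibus test to three simulated groups; the Steel-Dwass pairwise procedure and the Kruskal-Wallis range statistic compared in the article are not implemented here, and the simulated distributions are arbitrary choices.

```python
# Minimal sketch: the distribution-free Kruskal-Wallis test on three toy groups.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(6)
g1 = rng.normal(0.0, 1.0, size=15)
g2 = rng.normal(0.5, 1.0, size=15)
g3 = rng.standard_t(df=3, size=15)      # a heavier-tailed group

stat, pvalue = kruskal(g1, g2, g3)
print(f"Kruskal-Wallis H = {stat:.3f}, p = {pvalue:.3f}")
```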

13.
Suppose that several different imperfect instruments and one perfect instrument are used independently to measure some characteristic of a population. The authors consider the problem of combining this information to make statistical inference on parameters of interest, in particular the population mean and cumulative distribution function. They develop maximum empirical likelihood estimators and study their asymptotic properties. They also present simulation results on the finite sample efficiency of these estimators.

14.
The panel variant of the KPSS tests developed by Hadri [Hadri, K., 2000, Testing for stationarity in heterogeneous panels. Econometrics Journal, 3, 148–161] for the null of stationarity suffers from size distortions in the presence of cross-section dependence. However, applying the bootstrap methodology, we find that these tests are approximately correctly sized.

15.
The parameters and quantiles of the three-parameter generalized Pareto distribution (GPD3) were estimated using six methods for Monte Carlo generated samples. The parameter estimators were the moment estimator and its two variants, the probability-weighted moment estimator, the maximum likelihood estimator, and the entropy estimator. Parameters were investigated using a factorial experiment. The performance of these estimators was statistically compared, with the objective of identifying the most robust estimator among them.
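
The following is a minimal, illustrative sketch of just one of the six estimation routes: maximum-likelihood fitting of a generalized Pareto distribution to a simulated sample with scipy. The true parameter values and the choice to fix the location at zero are assumptions; the article's factorial comparison is not reproduced here.

```python
# Minimal sketch: ML fit of a generalized Pareto distribution to simulated data.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(3)
shape_true, scale_true = 0.2, 1.5                 # illustrative true values
x = genpareto.rvs(shape_true, loc=0.0, scale=scale_true, size=500, random_state=rng)

shape_hat, loc_hat, scale_hat = genpareto.fit(x, floc=0.0)   # ML with location fixed at 0
print(f"ML estimates  shape: {shape_hat:.3f}   scale: {scale_hat:.3f}")
```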

16.
The literature on testing the unit root hypothesis in the presence of GARCH errors is extended. A new test based upon the combination of local-to-unity detrending and joint maximum likelihood estimation of the autoregressive parameter and GARCH process is presented. The finite sample distribution of the test is derived under alternative decisions regarding the deterministic terms employed. Using Monte Carlo simulation, the newly proposed ML t-test is shown to exhibit increased power relative to rival tests. Finally, the empirical relevance of the simulation results is illustrated via an application to real GDP for the UK.

17.
In his Fisher Lecture, Efron (1998. Fisher in the 21st Century (with discussion). Statistical Science 13:95–122) pointed out that maximum likelihood estimates (MLE) can be badly biased in certain situations involving many nuisance parameters. He predicted that with modern computing equipment a computer-modified version of the MLE that was less biased could become the default estimator of choice in applied problems in the 21st century. This article discusses three modifications: Lindsay's conditional likelihood, integrated likelihood, and Bartlett's bias-corrected estimating function. Each is evaluated through a study of the bias and MSE of the estimates in a stratified Weibull model with a moderate number of nuisance parameters. In Lindsay's estimating equation, three different methods for estimating the nuisance parameters are evaluated: the restricted maximum likelihood estimate (RMLE), a Bayes estimator, and a linear Bayes estimator. In our model, the conditional likelihood with the RMLE of the nuisance parameters is equivalent to Bartlett's bias-corrected estimating function. In the simulation we show that Lindsay's conditional likelihood is in general preferred, irrespective of the estimator of the nuisance parameters. Although the integrated likelihood has smaller MSE when the precise nature of the prior distribution of the nuisance parameters is known, this approach may perform poorly in cases where the prior distribution of the nuisance parameters is not known, especially when a non-informative prior is used. In practice, Lindsay's method using the RMLE of the nuisance parameters is recommended.

18.
This paper presents a comprehensive review and comparison of five computational methods for Bayesian model selection, based on MCMC simulations from posterior model parameter distributions. We apply these methods to a well-known and important class of models in financial time series analysis, namely GARCH and GARCH-t models for conditional return distributions (assuming normal and t-distributions). We compare their performance with the more common maximum likelihood-based model selection for simulated and real market data. All five MCMC methods proved reliable in the simulation study, although differing in their computational demands. Results on simulated data also show that for large degrees of freedom (where the t-distribution becomes more similar to a normal one), Bayesian model selection results in better decisions in favor of the true model than maximum likelihood. Results on market data show the instability of the harmonic mean estimator and reliability of the advanced model selection methods.
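
Below is a minimal sketch of the harmonic-mean estimator of the marginal likelihood, p(y) ~ [ (1/S) * sum_s 1/p(y | theta_s) ]^(-1), computed from posterior draws. The toy normal-mean model with known variance and a N(0, 100) prior is an assumption for illustration; the paper's GARCH models are not used. The heavy right tail of 1/p(y | theta) is what makes this estimator unstable.

```python
# Minimal sketch: harmonic-mean estimate of the log marginal likelihood in a
# toy conjugate normal-mean model (known variance, N(0, 100) prior).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
y = rng.normal(loc=1.0, scale=1.0, size=50)          # toy data

# Conjugate posterior for the mean
prior_var, sigma2, n = 100.0, 1.0, y.size
post_var = 1.0 / (1.0 / prior_var + n / sigma2)
post_mean = post_var * (y.sum() / sigma2)
theta = rng.normal(post_mean, np.sqrt(post_var), size=5000)   # posterior draws

# Log-likelihood at each draw, then the harmonic mean computed on the log scale
loglik = np.array([norm.logpdf(y, loc=t, scale=1.0).sum() for t in theta])
m = loglik.max()
log_mean_inv_lik = -m + np.log(np.mean(np.exp(-(loglik - m))))   # log E[1/L]
log_marg_lik = -log_mean_inv_lik
print(f"harmonic-mean log marginal likelihood: {log_marg_lik:.3f}")
```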

19.
This paper develops Bayesian analysis in the context of progressively Type II censored data from the compound Rayleigh distribution. The maximum likelihood and Bayes estimates, along with the associated posterior risks, are derived for reliability performances under balanced loss functions by assuming continuous priors for the parameters of the distribution. A practical example is used to illustrate the estimation methods. A simulation study has been carried out to compare the performance of the estimates. The study indicates that Bayesian estimation should be preferred over maximum likelihood estimation. In Bayesian estimation, the balanced general entropy loss function can be effectively employed for optimal decision-making.

20.