Similar Literature
 20 similar articles found; search took 424 ms.
1.
The generalized lambda distribution, GLD(λ1, λ2, λ3, λ4), is a four-parameter family that has been used for fitting distributions to a wide variety of data sets. The analysis of the λ3 and λ4 values that actually yield valid distributions has (until now) been incomplete. Moreover, because of computational problems and theoretical shortcomings, the moment space over which the GLD can be applied has been limited. This paper completes the analysis of the λ3 and λ4 values that are associated with valid distributions, improves previous computational methods to reduce errors associated with fitting data, expands the parameter space over which the GLD can be used, and uses a four-parameter generalized beta distribution to cover the portion of the parameter space where the GLD is not applicable. In short, the paper extends the GLD to an EGLD system that can be used for fitting distributions to data sets that are cited in the literature as actually occurring in practice. Examples of use of the proposed system are included.
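The validity issue this abstract addresses arises because the GLD is defined through its quantile function, which must be non-decreasing on (0, 1). A minimal sketch using the Ramberg–Schmeiser parametrization Q(u) = λ1 + (u^λ3 − (1 − u)^λ4)/λ2; the grid-based validity check and the illustrative N(0, 1)-like parameter values are our own choices, not from the paper:

```python
import numpy as np

def gld_quantile(u, lam1, lam2, lam3, lam4):
    """Ramberg-Schmeiser GLD quantile function Q(u)."""
    u = np.asarray(u, dtype=float)
    return lam1 + (u**lam3 - (1.0 - u)**lam4) / lam2

def is_valid_gld(lam2, lam3, lam4, grid=2001):
    """A (lam3, lam4) pair yields a valid distribution only if Q is
    non-decreasing, i.e. Q'(u) >= 0 on (0, 1); check on a fine grid."""
    u = np.linspace(1e-6, 1.0 - 1e-6, grid)
    dq = (lam3 * u**(lam3 - 1.0) + lam4 * (1.0 - u)**(lam4 - 1.0)) / lam2
    return bool(np.all(dq >= 0))

# Illustrative parameters often quoted as a GLD approximation of N(0, 1)
q_median = gld_quantile(0.5, 0.0, 0.1975, 0.1349, 0.1349)
```

By symmetry of the chosen λ3 = λ4, the median here is exactly λ1 = 0.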

2.
Distribution fitting is widely practiced in all branches of engineering and applied science, yet only a few studies have examined the relative capability of various parameter-rich families of distributions to represent a wide spectrum of diversely shaped distributions. In this article, two such families of distributions, the Generalized Lambda Distribution (GLD) and Response Modeling Methodology (RMM), are compared. For a sample of some commonly used distributions, each family is fitted to each distribution using two methods: fitting by minimization of the L2 norm (minimizing density function distance) and nonlinear regression applied to a sample of exact quantile values (minimizing quantile function distance). The resultant goodness-of-fit is assessed by four criteria: the optimized value of the L2 norm, and three additional criteria relating to quantile function matching. Results show that RMM is uniformly better than GLD. An additional study includes Shore's quantile function (QF), and again RMM is the best performer, followed by Shore's QF and then GLD.

3.
Bayesian change point analysis of the seismic activity in northeastern Taiwan is studied via reversible jump Markov chain Monte Carlo simulation. An epidemic model is considered with Gamma prior distributions for the parameters. The prior distributions are essentially determined from an earlier period of the seismic data in the same region. Two change points are found to exist during the time period considered. This result is also confirmed by the BIC criterion.

4.
Density estimation for pre-binned data is challenging due to the loss of exact position information of the original observations. Traditional kernel density estimation methods cannot be applied when data are pre-binned in unequally spaced bins or when one or more bins are semi-infinite intervals. We propose a novel density estimation approach using the generalized lambda distribution (GLD) for data that have been pre-binned over a sequence of consecutive bins. This method enjoys the high power of the parametric model and the great shape flexibility of the GLD. The performances of the proposed estimators are benchmarked via simulation studies. Both simulation results and a real data application show that the proposed density estimators work well for data of moderate or large sizes.
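One way a quantile-defined family like the GLD can still yield bin probabilities, including for semi-infinite bins, is to invert its quantile function numerically and evaluate a multinomial likelihood of the counts. A rough sketch under the Ramberg–Schmeiser parametrization, assuming valid (monotone) parameters; the `brentq` inversion and multinomial criterion are illustrative choices, not necessarily the paper's exact algorithm:

```python
import numpy as np
from scipy.optimize import brentq

def gld_q(u, l1, l2, l3, l4):
    """Ramberg-Schmeiser GLD quantile function."""
    return l1 + (u**l3 - (1.0 - u)**l4) / l2

def gld_cdf(x, params):
    """F(x) obtained by numerically inverting the monotone quantile function."""
    lo, hi = 1e-9, 1.0 - 1e-9
    if x <= gld_q(lo, *params):
        return 0.0
    if x >= gld_q(hi, *params):
        return 1.0
    return brentq(lambda u: gld_q(u, *params) - x, lo, hi)

def binned_negloglik(params, edges, counts):
    """Multinomial negative log-likelihood of pre-binned counts; edges may
    start/end with -inf/inf to represent semi-infinite bins."""
    F = [0.0 if e == -np.inf else 1.0 if e == np.inf else gld_cdf(e, params)
         for e in edges]
    p = np.clip(np.diff(F), 1e-12, 1.0)
    return -float(np.sum(np.asarray(counts) * np.log(p)))
```

Minimizing `binned_negloglik` over the four parameters (with a validity constraint) would then give the fitted density.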

5.
In this paper we propose a quantile survival model to analyze censored data. This approach provides a very effective way to construct a proper model for the survival time conditional on some covariates. Once a quantile survival model for the censored data is established, the survival density, survival, or hazard functions of the survival time can be obtained easily. For illustration purposes, we focus on a model based on the generalized lambda distribution (GLD). The GLD and many other quantile function models are defined only through their quantile functions; no closed-form expressions are available for other equivalent functions. We also develop a Bayesian Markov chain Monte Carlo (MCMC) method for parameter estimation. Extensive simulation studies have been conducted. Both the simulation studies and application results show that the proposed quantile survival models can be very useful in practice.

6.
A method for efficiently calculating exact marginal, conditional and joint distributions for change points defined by general finite state Hidden Markov Models is proposed. The distributions are not subject to any approximation or sampling error once parameters of the model have been estimated. It is shown that, in contrast to sampling methods, very little computation is needed. The method provides probabilities associated with change points within an interval, as well as at specific points.
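The "exact, no sampling error" idea can be illustrated in the simplest case by direct enumeration over all change positions in a two-regime Gaussian model, a special case of the left-to-right HMMs the abstract covers. The Gaussian emissions and the geometric jump probability `p` are illustrative assumptions, not the paper's general setting:

```python
import numpy as np

def change_point_posterior(x, mu1, mu2, sigma, p):
    """Exact posterior P(change at k | data) for a two-regime model:
    x[0:k] ~ N(mu1, sigma^2), x[k:] ~ N(mu2, sigma^2), with the regime
    switching after each step with probability p."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    logf = lambda xs, mu: -0.5 * ((xs - mu) / sigma) ** 2  # log density up to a constant
    logs = np.array([
        logf(x[:k], mu1).sum() + logf(x[k:], mu2).sum()
        + (k - 1) * np.log1p(-p) + np.log(p)
        for k in range(1, n)
    ])
    w = np.exp(logs - logs.max())  # normalize stably in log space
    return w / w.sum()             # entry j corresponds to change at k = j + 1
```

For general finite-state HMMs the same exact probabilities come from forward-backward-style recursions rather than enumeration, which is where the paper's computational savings arise.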

7.
In this article we consider the problem of fitting a five-parameter generalization of the lambda distribution to data given in the form of a grouped frequency table. The estimation of parameters is done by six different procedures: percentiles, moments, probability-weighted moments, minimum Cramér–von Mises, maximum likelihood, and pseudo least squares. These methods are evaluated and compared using a Monte Carlo study in which the parent populations were generalized lambda distribution (GLD) approximations of Normal, Beta, and Gamma random variables, for nine combinations of sample sizes and numbers of classes. Of the estimators analyzed it is concluded that, although the method of pseudo least squares suffers from a number of limitations, it appears to be the candidate procedure for estimating the parameters of a GLD from grouped data.
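The probability-weighted moments used as one of the six fitting procedures have a standard unbiased sample form based on order statistics. A short sketch for raw (ungrouped) data, which simplifies the grouped-frequency setting of the paper:

```python
import numpy as np

def sample_pwm(x, r):
    """Unbiased sample estimator of the probability-weighted moment
    b_r = E[X * F(X)^r], computed from the order statistics
    x_(1) <= ... <= x_(n) with weights (i-1)...(i-r) / ((n-1)...(n-r))."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    w = np.ones(n)
    for j in range(1, r + 1):
        w *= (i - j) / (n - j)
    return float(np.mean(w * x))
```

Matching the first few sample PWMs to their model expressions yields the parameter estimates; note that b_0 is simply the sample mean.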

8.
Zero-inflated models are commonly used for modeling count and continuous data with extra zeros. Inflation at one or two points other than zero in models for continuous data has received less attention than zero inflation. In this article, inflation at an arbitrary point α, yielding a semicontinuous distribution, is presented, and mean imputation for a continuous response is discussed as a cause of semicontinuous data. Inflation at two points, and generally at k arbitrary points, and its relation to cell-mean imputation in a mixture of continuous distributions are also studied. To analyze the imputed data, a mixture of semicontinuous distributions is used. The effects of covariates on the dependent variable in a mixture of k semicontinuous distributions with inflation at k points are also investigated. Parameter estimates are found via the expectation–maximization (EM) algorithm. Using real data from the Iranian Households Income and Expenditure Survey (IHIES), it is shown how to obtain a proper estimate of the population variance when continuous missing-at-random responses are mean imputed.
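The special role of the inflation point α can be seen in a toy maximum-likelihood fit: an absolutely continuous component assigns probability zero to the single point α, so observations exactly equal to α identify the degenerate component, and EM collapses to a closed form in this simplest one-point case. A sketch with a normal continuous part as an illustrative assumption:

```python
import numpy as np

def fit_point_inflated_normal(x, alpha):
    """MLE for a mixture of a point mass at alpha and a normal component.
    Points exactly equal to alpha belong to the point mass (the normal
    part has zero probability there), so the fit is closed-form."""
    x = np.asarray(x, dtype=float)
    at_alpha = np.isclose(x, alpha)
    p = float(at_alpha.mean())          # mixing weight of the point mass
    rest = x[~at_alpha]                 # fit the continuous part to the rest
    return p, float(rest.mean()), float(rest.std())
```

With inflation at k points and several continuous components, the memberships are no longer deterministic and a genuine EM iteration, as in the paper, is required.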

9.
A likelihood ratio type test statistic and a Schwarz information criterion statistic are proposed for detecting possible bathtub-shaped changes in the parameter of a sequence of exponential distributions. The asymptotic distribution of the likelihood ratio type statistic under the null hypothesis and the testing procedure based on the Schwarz information criterion are derived. Numerical critical values and powers of the two methods are tabulated for selected values of the parameters. The tests are applied to detect the change points in the predator data and the Stanford heart transplant data.
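The likelihood ratio statistic takes a simple form for exponential data because the MLE of the rate is the reciprocal sample mean, so each segment's maximized log-likelihood depends only on its mean. A minimal sketch for a single change point (the bathtub-shaped alternative in the paper involves two such changes; the `min_seg` guard is an illustrative choice):

```python
import numpy as np

def exp_loglik(x):
    """Maximized exponential log-likelihood: at rate 1/mean(x) it
    equals n * (-log(mean) - 1)."""
    return len(x) * (-np.log(np.mean(x)) - 1.0)

def lr_change_statistic(x, min_seg=2):
    """2 * (best split log-likelihood - pooled log-likelihood) over all
    candidate single change points; returns (statistic, change index)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    best, best_k = -np.inf, None
    for k in range(min_seg, n - min_seg + 1):
        ll = exp_loglik(x[:k]) + exp_loglik(x[k:])
        if ll > best:
            best, best_k = ll, k
    return 2.0 * (best - exp_loglik(x)), best_k
```

The statistic is compared against the asymptotic critical values the paper tabulates.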

10.
In the present article we propose the modified lambda family (MLF), the Freimer, Mudholkar, Kollia, and Lin (FMKL) parametrization of the generalized lambda distribution (GLD), as a model for censored data. Expressions for the probability weighted moments of the MLF are derived and used to estimate the parameters of the distribution, modifying the usual probability-weighted-moments estimation technique. It is shown that the distribution provides a reasonable fit to a real censored data set.

11.
This paper considers least absolute deviations estimation of a regression model with multiple change points occurring at unknown times. Some asymptotic results, including rates of convergence and asymptotic distributions, for the estimated change points and the estimated regression coefficient are derived. Results are obtained without assuming that each regime spans a positive fraction of the sample size. In addition, the number of change points is allowed to grow as the sample size increases. Estimation of the number of change points is also considered. A feasible computational algorithm is developed. An application is also given, along with some Monte Carlo simulations.
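With a known number of change points, the least absolute deviations criterion can be optimized exactly by dynamic programming over segment boundaries. A sketch for the simplest piecewise-constant case, where each segment is fitted by its median (the paper's setting with regressors and an estimated number of changes is more general):

```python
import numpy as np

def lad_segment(x, n_cp):
    """Exact DP for least-absolute-deviations segmentation of a signal
    into n_cp + 1 constant segments. Returns the change point positions."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # cost[i][j]: LAD cost of fitting the median to x[i:j]
    cost = [[0.0] * (n + 1) for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n + 1):
            seg = x[i:j]
            cost[i][j] = float(np.sum(np.abs(seg - np.median(seg))))
    INF = float("inf")
    # dp[m][j]: best cost of splitting x[:j] into m segments
    dp = [[INF] * (n + 1) for _ in range(n_cp + 2)]
    back = [[0] * (n + 1) for _ in range(n_cp + 2)]
    dp[0][0] = 0.0
    for m in range(1, n_cp + 2):
        for j in range(m, n + 1):
            for i in range(m - 1, j):
                c = dp[m - 1][i] + cost[i][j]
                if c < dp[m][j]:
                    dp[m][j], back[m][j] = c, i
    # backtrack to recover the change points
    cps, j = [], n
    for m in range(n_cp + 1, 1, -1):
        j = back[m][j]
        cps.append(j)
    return sorted(cps)
```

The cubic-time table construction is what "feasible computational algorithm" must improve upon for long series and growing numbers of change points.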

12.
Estimation of points of rapid change in the mean function m(t) is considered under long memory residuals, irregularly spaced time points, and smoothly changing marginal distributions obtained by local Gaussian subordination. The approach is based on kernel estimation of derivatives of the trend function. An asymptotic expression for the mean squared error is obtained. Limit theorems are derived for derivatives of m and the time points where rapid change occurs. The results are illustrated by an application to measurements of oxygen isotopes trapped in the Greenland ice sheets during the last 20,000 years.

13.
An exact permutation test for analyzing and/or dredging multi-response data at the ordinal or higher levels is presented. The associated test statistic is based on the average distance (or any specified norm) between points within a priori disjoint subgroups of a finite population of points in an r-dimensional space (corresponding to r measured responses from each object in a finite population of objects). Alternative approximate tests based on the beta and normal distributions are provided. Two detailed examples utilizing actual social science data are considered, including comparisons of the approximate tests. An additional example describes the behavior of these tests under a variety of conditions, including extreme data configurations.
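The test can be sketched directly: the statistic is the average distance between points within the a priori groups, and the reference distribution comes from relabeling the points. Here a Monte Carlo approximation with a fixed seed stands in for the exact full enumeration, and Euclidean distance is one admissible choice of norm:

```python
import numpy as np

def within_group_stat(data, labels):
    """Average pairwise Euclidean distance within the a priori groups."""
    total, pairs = 0.0, 0
    for g in np.unique(labels):
        pts = data[labels == g]
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                total += float(np.linalg.norm(pts[i] - pts[j]))
                pairs += 1
    return total / pairs

def permutation_pvalue(data, labels, n_perm=999, seed=0):
    """Monte Carlo permutation p-value: unusually small within-group
    distances relative to random relabelings indicate group separation."""
    rng = np.random.default_rng(seed)
    obs = within_group_stat(data, labels)
    count = 1  # include the observed labeling
    for _ in range(n_perm):
        if within_group_stat(data, rng.permutation(labels)) <= obs:
            count += 1
    return count / (n_perm + 1)
```

The beta and normal approximations mentioned in the abstract replace this resampling with moment-matched reference distributions.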

14.
The term 'principal points' originated in a problem of determining 'typical' heads for the design of protection masks, as described by Flury. Two principal points in the mask example correspond to a small and a large size. Principal points are cluster means for theoretical distributions, and sample cluster means from a k-means algorithm are non-parametric estimators of principal points. This paper demonstrates that maximum likelihood estimators and semi-parametric estimators based on symmetry constraints typically perform much better than the k-means estimators. Asymptotic results on the efficiency of these estimators of two principal points for four symmetric univariate distributions are given. Simulation results are provided to examine the performance of the estimators for finite sample sizes. Finally, the different estimators of two principal points are compared using the head dimension data for the design of protection masks.
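The contrast between the two estimator families can be sketched concretely: k-means cluster means are the nonparametric estimator, while under an assumed normal model the two principal points have the closed form μ ± σ√(2/π), so plugging in μ̂ and σ̂ gives a parametric competitor. A minimal sketch (the specific distributions and efficiency comparisons in the paper are not reproduced here):

```python
import numpy as np

def kmeans_1d(x, k=2, iters=100, seed=0):
    """Plain Lloyd's algorithm in one dimension; the sorted cluster
    means are nonparametric estimators of the k principal points."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        lab = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        new = np.array([x[lab == j].mean() for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return np.sort(centers)

def normal_two_principal_points(x):
    """Parametric estimator: the two principal points of N(mu, sigma^2)
    are mu -/+ sigma * sqrt(2 / pi)."""
    mu, sigma = float(np.mean(x)), float(np.std(x))
    c = np.sqrt(2.0 / np.pi)
    return np.array([mu - c * sigma, mu + c * sigma])
```

The paper's point is that, when the distributional or symmetry assumption holds, the second kind of estimator is typically far more efficient than the k-means estimator.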

15.

Structural change in any time series is practically unavoidable, and thus correctly detecting breakpoints plays a pivotal role in statistical modelling. This research considers segmented autoregressive models with exogenous variables and asymmetric GARCH errors, GJR-GARCH and exponential-GARCH specifications, which utilize the leverage phenomenon to demonstrate asymmetry in response to positive and negative shocks. The proposed models incorporate skew Student-t distribution and prove the advantages of the fat-tailed skew Student-t distribution versus other distributions when structural changes appear in financial time series. We employ Bayesian Markov Chain Monte Carlo methods in order to make inferences about the locations of structural change points and model parameters and utilize deviance information criterion to determine the optimal number of breakpoints via a sequential approach. Our models can accurately detect the number and locations of structural change points in simulation studies. For real data analysis, we examine the impacts of daily gold returns and VIX on S&P 500 returns during 2007–2019. The proposed methods are able to integrate structural changes through the model parameters and to capture the variability of a financial market more efficiently.


16.
In recent years, the notion of data depth has been used in nonparametric multivariate data analysis, since it gives a natural 'centre-outward' ordering of multivariate data points with respect to the given data cloud. In the literature, various nonparametric tests based on data depth have been developed for testing equality of location of two multivariate distributions. Here, we define two nonparametric tests based on two different test statistics for testing equality of locations of two multivariate distributions. In the present work, we compare the performance of these tests with the tests developed by Li and Liu [New nonparametric tests of multivariate locations and scales using data depth. Statist Sci. 2004;(1):686–696] for testing equality of locations of two multivariate distributions. Comparison in terms of power is done for multivariate symmetric and skewed distributions using simulation for three popular depth functions. An application of the tests to real-life data is provided, along with conclusions and recommendations.

17.
The problem of multiple change points has been widely discussed in recent years against the background of financial shocks. To mitigate their damage, it is worthwhile to build as precise a model for the problem as the information in the data allows. This paper proposes detecting the change points by a semiparametric test. The change point estimates are obtained by the empirical likelihood method. Some asymptotic results for multiple change points are then obtained via a log-likelihood ratio test and the law of large numbers, and the consistency of the change point estimates is established. The method and steps for finding the change points are derived. Simulation experiments show that the semiparametric test is more efficient than a nonparametric test. Diagnostics with simulation and applications to multiple change points further illustrate the proposed model.

18.
A Bayesian multi-category kernel classification method is proposed. The algorithm performs the classification of the projections of the data to the principal axes of the feature space. The advantage of this approach is that the regression coefficients are identifiable and sparse, leading to large computational savings and improved classification performance. The degree of sparsity is regulated in a novel framework based on Bayesian decision theory. The Gibbs sampler is implemented to find the posterior distributions of the parameters, thus probability distributions of prediction can be obtained for new data points, which gives a more complete picture of classification. The algorithm is aimed at high dimensional data sets where the dimension of measurements exceeds the number of observations. The applications considered in this paper are microarray, image processing and near-infrared spectroscopy data.

19.
This paper was motivated by the problem of the determination of the change points of the failure rate of a mixture of two gamma distributions. For certain values of the parameters the existing methods are not applicable since, in this case, there are two turning points of the failure rate. Thus, we extend the results to models having two or more turning points of the failure rates. The extended procedure helps us to identify failure rates of more complex forms. Finally, the mixture gamma case is completely resolved employing theoretical, graphical and numerical techniques wherever necessary.
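The failure rate of a mixture is the ratio of the mixture density to the mixture survival function, so its turning points can be located numerically as sign changes of a finite-difference derivative. A sketch of the numerical side only (the grid search is our simplification of the paper's analytical treatment):

```python
import numpy as np
from scipy.stats import gamma

def mixture_hazard(t, w, shapes, scales):
    """Failure rate h(t) = f(t) / S(t) of a gamma mixture with
    weights w, shape parameters `shapes`, and scale parameters `scales`."""
    pdf = sum(wi * gamma.pdf(t, a, scale=s)
              for wi, a, s in zip(w, shapes, scales))
    sf = sum(wi * gamma.sf(t, a, scale=s)
             for wi, a, s in zip(w, shapes, scales))
    return pdf / sf

def turning_points(h):
    """Grid indices where the evaluated hazard switches between
    rising and falling (sign changes of its finite difference)."""
    d = np.diff(h)
    return np.where(np.sign(d[1:]) != np.sign(d[:-1]))[0] + 1
```

For a single gamma component with shape greater than one the hazard is monotone increasing, so no turning points are detected; suitable two-component mixtures produce the two turning points that motivated the paper.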

20.
Recently, there has been great interest in estimating the decline in cognitive ability in patients with Alzheimer's disease. Measuring decline is not straightforward, since one must consider the choice of scale to measure cognitive ability, possible floor and ceiling effects, between-patient variability, and the unobserved age of onset. The authors demonstrate how to account for the above features by modeling decline in scores on the Mini-Mental State Exam in two different data sets. To this end, they use hierarchical Bayesian models with change points, for which posterior distributions are calculated using the Gibbs sampler. They make comparisons between several such models using both prior and posterior Bayes factors, and compare the results from the models suggested by these two model selection criteria.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号