Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
Highly skewed, non-negative data in fisheries research can often be modeled by the delta-lognormal distribution. However, the coverage probabilities of existing interval estimation procedures are unsatisfactory for small sample sizes and highly skewed data. We propose a heuristic method for estimating confidence intervals for the mean of the delta-lognormal distribution, based on an asymptotic generalized pivotal quantity used to construct a generalized confidence interval. Simulation results show that the proposed interval estimation procedure yields satisfactory coverage probabilities, expected interval lengths and reasonable relative biases. Finally, the proposed method is applied to red cod density data for demonstration.
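The generalized-pivotal-quantity construction can be sketched numerically. The following is a minimal illustration, not the authors' exact heuristic: it uses the standard chi-square and normal pivots for the lognormal part and a simple normal-approximation pivot for the zero proportion (that last pivot is an assumption of this sketch).

```python
import numpy as np

def delta_lognormal_gci(x, alpha=0.05, n_sim=5000, seed=0):
    """Generalized CI for the delta-lognormal mean via simulated
    generalized pivotal quantities (simplified sketch)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    n = x.size
    pos = x[x > 0]
    n1 = pos.size
    delta_hat = 1.0 - n1 / n                 # estimated probability of a zero
    logs = np.log(pos)
    ybar, s2 = logs.mean(), logs.var(ddof=1)

    # GPQs for sigma^2 and mu (standard normal-theory pivots)
    r_sig2 = (n1 - 1) * s2 / rng.chisquare(n1 - 1, n_sim)
    r_mu = ybar - rng.standard_normal(n_sim) * np.sqrt(r_sig2 / n1)
    # Normal-approximation pivot for the zero proportion (sketch assumption)
    r_delta = np.clip(delta_hat - rng.standard_normal(n_sim)
                      * np.sqrt(delta_hat * (1 - delta_hat) / n), 0.0, 1.0)
    # Pivot for the mean (1 - delta) * exp(mu + sigma^2 / 2)
    r_mean = (1 - r_delta) * np.exp(r_mu + r_sig2 / 2)
    return tuple(np.quantile(r_mean, [alpha / 2, 1 - alpha / 2]))
```

The percentiles of the simulated pivot give the two-sided interval endpoints.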

2.
A method based on the prediction of order statistics is proposed to select the underlying parent distribution. A cross-validatory predictor and the best linear unbiased predictor are considered in choosing between gamma and Weibull models when shape parameters are only known to lie within a range. The proposed approach is evaluated using a large-scale Monte Carlo study. The results clearly show that the cross-validatory predictor performs well as a robust procedure in selecting between probability densities. Two well-known data sets are used to illustrate the procedure.

3.
A new bivariate beta distribution capable of providing better fits than all its competitors is introduced. Various representations are derived for its product moments, marginal densities, marginal moments, conditional densities and conditional moments. The method of maximum likelihood is used to derive the associated estimation procedure. Applications to six bivariate data sets are illustrated.

4.
New techniques for the analysis of stochastic volatility models in which the logarithm of conditional variance follows an autoregressive model are developed. A cyclic Metropolis algorithm is used to construct a Markov-chain simulation tool. Simulations from this Markov chain converge in distribution to draws from the posterior distribution enabling exact finite-sample inference. The exact solution to the filtering/smoothing problem of inferring about the unobserved variance states is a by-product of our Markov-chain method. In addition, multistep-ahead predictive densities can be constructed that reflect both inherent model variability and parameter uncertainty. We illustrate our method by analyzing both daily and weekly data on stock returns and exchange rates. Sampling experiments are conducted to compare the performance of Bayes estimators to method of moments and quasi-maximum likelihood estimators proposed in the literature. In both parameter estimation and filtering, the Bayes estimators outperform these other approaches.

5.
The Wehrly–Johnson family of bivariate circular distributions is by far the most general one currently available for modelling data on the torus. It allows complete freedom in the specification of the marginal circular densities as well as the binding circular density which regulates any dependence that might exist between them. We propose a parametric bootstrap approach for testing the goodness-of-fit of Wehrly–Johnson distributions when the forms of their marginal and binding densities are assumed known. The approach admits the use of any test for toroidal uniformity, and we consider versions of it incorporating three such tests. Simulation is used to illustrate the operating characteristics of the approach when the underlying distribution is assumed to be bivariate wrapped Cauchy. An analysis of wind direction data recorded at a Texan weather station illustrates the use of the proposed goodness-of-fit testing procedure.

6.
This paper presents a new Metropolis-adjusted Langevin algorithm (MALA) that uses convex analysis to simulate efficiently from high-dimensional densities that are log-concave, a class of probability distributions that is widely used in modern high-dimensional statistics and data analysis. The method is based on a new first-order approximation for Langevin diffusions that exploits log-concavity to construct Markov chains with favourable convergence properties. This approximation is closely related to Moreau–Yoshida regularisations for convex functions and uses proximity mappings instead of gradient mappings to approximate the continuous-time process. The proposed method complements existing MALA methods in two ways. First, the method is shown to have very robust stability properties and to converge geometrically for many target densities for which other MALA algorithms fail to be geometrically ergodic, or are so only when the step size is sufficiently small. Second, the method can be applied to high-dimensional target densities that are not continuously differentiable, a class of distributions that is increasingly used in image processing and machine learning and that is beyond the scope of existing MALA and HMC algorithms. To use this method it is necessary to compute, or to approximate efficiently, the proximity mappings of the logarithm of the target density. For several popular models, including many Bayesian models used in modern signal and image processing and machine learning, this can be achieved with convex optimisation algorithms and with approximations based on proximal splitting techniques, which can be implemented in parallel. The proposed method is demonstrated on two challenging high-dimensional and non-differentiable models related to image resolution enhancement and low-rank matrix estimation that are not well addressed by existing MCMC methodology.
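A one-dimensional toy version of the proximal MALA idea can be written in a few lines. The sketch below targets the non-differentiable Laplace density π(x) ∝ exp(−|x|), whose proximity mapping is the soft-thresholding operator; it illustrates the prox-instead-of-gradient proposal but none of the high-dimensional machinery of the paper.

```python
import numpy as np

def soft_threshold(x, lam):
    # proximity mapping of g(x) = |x| with parameter lam
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def pmala_laplace(n_iter=20000, delta=0.5, seed=0):
    """Proximal MALA targeting the (non-differentiable) Laplace density
    pi(x) ∝ exp(-|x|); a 1-D illustrative sketch."""
    rng = np.random.default_rng(seed)
    log_pi = lambda t: -abs(t)
    x = 0.0
    out = np.empty(n_iter)
    for i in range(n_iter):
        mean_x = soft_threshold(x, delta / 2)        # prox replaces the gradient step
        y = mean_x + np.sqrt(delta) * rng.standard_normal()
        mean_y = soft_threshold(y, delta / 2)
        log_q_xy = -(y - mean_x) ** 2 / (2 * delta)  # q(y|x), up to a shared constant
        log_q_yx = -(x - mean_y) ** 2 / (2 * delta)  # q(x|y)
        if np.log(rng.random()) < log_pi(y) - log_pi(x) + log_q_yx - log_q_xy:
            x = y
        out[i] = x
    return out
```

For the standard Laplace target, the sample standard deviation should approach √2 as the chain mixes.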

7.
The exponentiated sinh Cauchy distribution is characterized by four parameters: location, scale, symmetry, and asymmetry. The symmetry parameter preserves the symmetry of the distribution by producing both bimodal and unimodal densities having coefficient of kurtosis values ranging from one to positive infinity. The asymmetry parameter changes the symmetry of the distribution by producing both positively and negatively skewed densities having coefficient of skewness values ranging from negative infinity to positive infinity. Bimodality, skewness, and kurtosis properties of this regular distribution are presented. In addition, relations to some well-known distributions are examined in terms of skewness and kurtosis by constructing aliases of the proposed distribution on the symmetry and asymmetry parameter plane. The maximum likelihood parameter estimation technique is discussed, and examples are provided and analyzed based on data from astronomy and medical sciences to illustrate the flexibility of the distribution for modeling bimodal and unimodal data.

8.
This paper presents a method for Bayesian inference for the regression parameters in a linear model with independent and identically distributed errors that does not require the specification of a parametric family of densities for the error distribution. This method first selects a nonparametric kernel density estimate of the error distribution which is unimodal and based on the least-squares residuals. Once the error distribution is selected, the Metropolis algorithm is used to obtain the marginal posterior distribution of the regression parameters. The methodology is illustrated with data sets, and its performance relative to standard Bayesian techniques is evaluated using simulation results.
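A rough sketch of this two-stage scheme, under simplifying assumptions that go beyond the abstract (a Gaussian kernel with Silverman's bandwidth, a flat prior on the coefficients, and a plain random-walk Metropolis sampler):

```python
import numpy as np

def semiparametric_bayes_lm(X, y, n_iter=3000, step=0.05, seed=0):
    """Sketch: fix a kernel density estimate of the error distribution
    from least-squares residuals, then run random-walk Metropolis on the
    regression coefficients (flat prior assumed in this sketch)."""
    rng = np.random.default_rng(seed)
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_ols
    h = 1.06 * resid.std(ddof=1) * resid.size ** (-1 / 5)  # KDE bandwidth

    def log_lik(beta):
        e = y - X @ beta
        u = (e[:, None] - resid[None, :]) / h
        f = np.exp(-0.5 * u ** 2).sum(axis=1) / (resid.size * h * np.sqrt(2 * np.pi))
        return np.log(f + 1e-300).sum()           # floor avoids log(0)

    beta, ll = beta_ols.copy(), log_lik(beta_ols)
    draws = np.empty((n_iter, beta.size))
    for i in range(n_iter):
        prop = beta + step * rng.standard_normal(beta.size)
        ll_prop = log_lik(prop)
        if np.log(rng.random()) < ll_prop - ll:    # Metropolis accept/reject
            beta, ll = prop, ll_prop
        draws[i] = beta
    return draws
```

The draws (after burn-in) approximate the marginal posterior of the coefficients under the fixed error-density estimate.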

9.
A general model is proposed for flexibly estimating the density of a continuous response variable conditional on a possibly high-dimensional set of covariates. The model is a finite mixture of asymmetric Student-t densities with covariate-dependent mixture weights. The four parameters of the components, the mean, degrees of freedom, scale and skewness, are all modeled as functions of the covariates. Inference is Bayesian and the computation is carried out using Markov chain Monte Carlo simulation. To enable model parsimony, a variable selection prior is used in each set of covariates and among the covariates in the mixing weights. The model is used to analyze the distribution of daily stock market returns, and is shown to forecast the distribution of returns more accurately than other widely used models for financial data.

10.
We present a single-pass, low-storage, sequential method for estimating an arbitrary quantile of an unknown distribution. The proposed method performs very well when compared to existing methods for estimating the median as well as arbitrary quantiles for a wide range of densities. In addition to explaining the method and presenting the results of the simulation study, we discuss intuition behind the method and demonstrate empirically, for certain densities, that the proposed estimator converges to the sample quantile.
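The abstract does not spell out the estimator, but the classical single-pass, low-storage approach to this problem is stochastic approximation; the Robbins-Monro sketch below conveys the flavour (the step-size constant c is a tuning choice of this sketch, not a quantity from the paper):

```python
import numpy as np

def streaming_quantile(stream, p, c=2.0):
    """Single-pass stochastic-approximation estimate of the p-quantile:
    nudge the estimate up when an observation exceeds it, down otherwise,
    with step sizes c/n (a classical Robbins-Monro scheme)."""
    q = None
    for n, x in enumerate(stream, start=1):
        if q is None:
            q = x                        # initialise at the first observation
            continue
        q += (c / n) * (p - (x <= q))    # indicator is 1 when x falls below q
    return q
```

Only the current estimate and the observation count are stored, regardless of stream length.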

11.
CORRECTING FOR KURTOSIS IN DENSITY ESTIMATION
Using a global window width kernel estimator to estimate an approximately symmetric probability density with high kurtosis usually leads to poor estimation because good estimation of the peak of the distribution leads to unsatisfactory estimation of the tails and vice versa. The technique proposed corrects for kurtosis via a transformation of the data before using a global window width kernel estimator. The transformation depends on a “generalised smoothing parameter” consisting of two real-valued parameters and a window width parameter which can be selected either by a simple graphical method or, for a completely data-driven implementation, by minimising an estimate of mean integrated squared error. Examples of real and simulated data demonstrate the effectiveness of this approach, which appears suitable for a wide range of symmetric, unimodal densities. Its performance is similar to ordinary kernel estimation in situations where the latter is effective, e.g. Gaussian densities. For densities like the Cauchy where ordinary kernel estimation is not satisfactory, our methodology offers a substantial improvement.
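The transform-then-smooth idea can be illustrated with a fixed asinh transformation standing in for the paper's two-parameter family: compress the heavy tails, run an ordinary global-bandwidth kernel estimator on the transformed scale, and map the density back with the Jacobian. A minimal sketch:

```python
import numpy as np

def transformation_kde(data, grid, bw=None):
    """Kernel density estimate after an asinh transformation -- a simple
    stand-in for the paper's kurtosis-correcting transform family."""
    y = np.arcsinh(data)                      # compress heavy tails
    n = y.size
    if bw is None:                            # Silverman's rule, transformed scale
        bw = 1.06 * y.std(ddof=1) * n ** (-1 / 5)
    ty = np.arcsinh(grid)
    jac = 1.0 / np.sqrt(1.0 + grid ** 2)      # d/dx arcsinh(x)
    u = (ty[:, None] - y[None, :]) / bw
    f_y = np.exp(-0.5 * u ** 2).sum(axis=1) / (n * bw * np.sqrt(2 * np.pi))
    return f_y * jac                          # back-transform with the Jacobian
```

For Cauchy data, which defeats ordinary global-bandwidth kernel estimation, the transformed sample is light-tailed and easy to smooth.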

12.
In this article, we introduce a new extension of the Birnbaum–Saunders (BS) distribution as a follow-up to the family of skew-flexible-normal distributions. This extension produces a family of BS distributions including densities that can be unimodal as well as bimodal. This flexibility is important in dealing with positive bimodal data, given the difficulties experienced by the use of mixtures of distributions. Some basic properties of the new distribution are studied including moments. Parameter estimation is approached by the method of moments and also by maximum likelihood, including a derivation of the Fisher information matrix. Three real data illustrations indicate satisfactory performance of the proposed model.

13.
A new distribution called the beta generalized exponential distribution is proposed. It includes the beta exponential and generalized exponential (GE) distributions as special cases. We provide a comprehensive mathematical treatment of this distribution. The density function can be expressed as a mixture of generalized exponential densities. This representation is useful for obtaining some mathematical properties of the new distribution in terms of the corresponding properties of the GE distribution. We derive the moment generating function (mgf) and the moments, thus generalizing some results in the literature. Expressions for the density, mgf and moments of the order statistics are also obtained. We discuss estimation of the parameters by maximum likelihood and obtain the information matrix, which is easily determined numerically. We observe in one application to a real skewed data set that this model is quite flexible and can be used effectively in analyzing positive data in place of the beta exponential and GE distributions.
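The BGE density follows from inserting the GE cdf G(x) = (1 − e^{−λx})^α into the beta-G construction, giving f(x) = αλ/B(a,b) · e^{−λx}(1 − e^{−λx})^{αa−1}[1 − (1 − e^{−λx})^α]^{b−1} for x > 0. A direct numerical check of this form (the parameterization is the one implied by that construction):

```python
import numpy as np
from math import lgamma

def log_beta(a, b):
    # log of the beta function B(a, b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bge_pdf(x, a, b, alpha, lam):
    """Density of the beta generalized exponential (BGE) distribution:
    the beta-G construction applied to G(x) = (1 - exp(-lam*x))**alpha."""
    x = np.asarray(x, float)
    u = -np.expm1(-lam * x)                  # 1 - e^{-lam x}, computed stably
    log_f = (np.log(alpha * lam) - log_beta(a, b)
             - lam * x
             + (alpha * a - 1) * np.log(u)
             + (b - 1) * np.log1p(-u ** alpha))
    return np.where(x > 0, np.exp(log_f), 0.0)
```

Setting a = b = 1 recovers the GE density αλe^{−λx}(1 − e^{−λx})^{α−1}, matching the special cases named in the abstract.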

14.
The Inverse Gaussian (IG) distribution is commonly introduced to model and examine right skewed data having positive support. When applying the IG model, it is critical to develop efficient goodness-of-fit tests. In this article, we propose a new test statistic for examining the IG goodness-of-fit based on approximating parametric likelihood ratios. The parametric likelihood ratio methodology is well-known to provide powerful likelihood ratio tests. In the nonparametric context, the classical empirical likelihood (EL) ratio method is often applied in order to efficiently approximate properties of parametric likelihoods, using an approach based on substituting empirical distribution functions for their population counterparts. The optimal parametric likelihood ratio approach is however based on density functions. We develop and analyze the EL ratio approach based on densities in order to test the IG model fit. We show that the proposed test is an improvement over the entropy-based goodness-of-fit test for IG presented by Mudholkar and Tian (2002). Theoretical support is obtained by proving consistency of the new test and an asymptotic proposition regarding the null distribution of the proposed test statistic. Monte Carlo simulations confirm the powerful properties of the proposed method. Real data examples demonstrate the applicability of the density-based EL ratio goodness-of-fit test for an IG assumption in practice.

15.
Kernel smoothing methods are widely used in many research areas in statistics. However, kernel estimators suffer from boundary effects when the support of the function to be estimated has finite endpoints. Boundary effects seriously affect the overall performance of the estimator. In this article, we propose a new method of boundary correction for univariate kernel density estimation. Our technique is based on a data transformation that depends on the point of estimation. The proposed method possesses desirable properties such as local adaptivity and non-negativity. Furthermore, unlike many other transformation methods available, the proposed estimator is easy to implement. In a Monte Carlo study, the accuracy of the proposed estimator is numerically analyzed and compared with the existing methods of boundary correction. We find that it performs well for most shapes of densities. The theory behind the new methodology, along with the bias and variance of the proposed estimator, are presented. Results of a data analysis are also given.

16.
Shapes of service-time distributions in queueing network models have a great impact on the distribution of system response times. For the analysis of the response-time distribution it is essential that the modeled service-time distributions have the correct shape. Traditionally, modeling of service-time distributions is based on a parametric approach: a specific distribution is assumed and its parameters are estimated. We introduce an alternative approach based on the principles of exploratory data analysis and nonparametric data modeling. The proposed method applies nonlinear data transformation and resistant curve fitting. The method can be used in cases where the available data is a complete sample, a histogram, or the mean together with a set of 5-10 quantiles. The reported results indicate that the proposed method is able to approximate the distribution of measured service times so that accurate estimates for quantiles of the response-time distribution are obtained.

17.
We propose a new class of continuous distributions with two extra shape parameters, named the generalized odd log-logistic family of distributions. The proposed family contains as special cases the proportional reversed hazard rate and odd log-logistic classes. Its density function can be expressed as a linear combination of exponentiated densities based on the same baseline distribution. Some of its mathematical properties, including ordinary moments, quantile and generating functions, two entropy measures and order statistics, are obtained. We derive a power series for the quantile function. We discuss the method of maximum likelihood to estimate the model parameters, and study the behaviour of the estimators by means of Monte Carlo simulations. We introduce the log-odd log-logistic Weibull regression model with censored data based on the odd log-logistic-Weibull distribution. The importance of the new family is illustrated using three real data sets. These applications indicate that this family can provide better fits than other well-known classes of distributions. The beauty and importance of the proposed family lie in its ability to model different types of real data.

18.
This article focuses on the definition and study of a binary Bayesian criterion which measures the statistical agreement between a subjective prior and the data information. The setting is that of concrete Bayesian studies. The criterion is an alternative and complementary tool to the method recently proposed by Evans and Moshonov [M. Evans and H. Moshonov, Checking for prior-data conflict, Bayesian Anal. 1 (2006), pp. 893–914]. Both methods aim to assist the Bayesian analyst, from the preliminary stage through to the posterior computation. Our criterion is defined as a ratio of Kullback–Leibler divergences; two of its main features are that it simplifies the checking of a hierarchical prior and that it can be used as a default calibration tool to obtain flat but proper priors in applications. Discrete and continuous distributions exemplify the approach, and an industrial case study in reliability, involving the Weibull distribution, is highlighted.

19.
On boundary correction in kernel density estimation
It is now well known that kernel density estimators are not consistent when estimating a density near the finite end points of the support of the density to be estimated. This is due to boundary effects that occur in nonparametric curve estimation problems. A number of proposals have been made in the kernel density estimation context, with some success. As yet there appears to be no single dominating solution that corrects the boundary problem for all shapes of densities. In this paper, we propose a new general method of boundary correction for univariate kernel density estimation. The proposed method generates a class of boundary-corrected estimators, all of which possess desirable properties such as local adaptivity and non-negativity. In simulations, the proposed method is observed to perform quite well compared with other existing methods in the literature for most shapes of densities, demonstrating a very important robustness property of the method. The theory behind the new approach and the bias and variance of the proposed estimators are given. Results of a data analysis are also given.
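The transformation-based class itself is not reproduced in the abstract; for contrast, here is the standard reflection correction, one of the existing methods such proposals are compared against, for a density supported on [0, ∞):

```python
import numpy as np

def reflection_kde(data, grid, bw):
    """Kernel density estimate on [0, inf) with boundary correction by
    reflection -- a standard existing fix, shown for contrast with the
    new boundary-corrected estimators discussed in the paper."""
    data = np.asarray(data, float)
    grid = np.asarray(grid, float)
    n = data.size

    def kde(points, centers):
        u = (points[:, None] - centers[None, :]) / bw
        return np.exp(-0.5 * u ** 2).sum(axis=1) / (n * bw * np.sqrt(2 * np.pi))

    # fold back the mass that an ordinary KDE leaks past the boundary at 0
    f = kde(grid, data) + kde(grid, -data)
    return np.where(grid >= 0, f, 0.0)
```

Reflection restores consistency at the boundary for densities with a flat shoulder there, but retains an O(h) bias when the density has a non-zero derivative at 0, which is one motivation for more refined corrections.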

20.
In this paper a new method called the EMS algorithm is used to solve Wicksell's corpuscle problem, that is, the determination of the distribution of sphere radii in a medium given the radii of their profiles in a random slice. The EMS algorithm combines the EM algorithm, a procedure for obtaining maximum likelihood estimates of parameters from incomplete data, with simple smoothing. The method is tested on simulated data from three different sphere radii densities, namely a bimodal mixture of Normals, a Weibull and a Normal. The effects of varying the level of smoothing, the number of classes in which the data are binned, and the number of classes at which the estimated density is evaluated are investigated. Comparisons are made between these results and those obtained by others in this field.
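The E-M-S cycle itself is easy to state for generic binned indirect observations with expected counts proportional to Ap: an EM (Richardson-Lucy) reweighting step followed by a local smoothing step. The sketch below uses a simple three-point smoother and a generic column-stochastic mixing matrix A rather than the Wicksell kernel, so it is a stand-in for the method, not the paper's implementation:

```python
import numpy as np

def ems(counts, A, n_iter=50, smooth=(0.25, 0.5, 0.25)):
    """EMS iteration for binned indirect data: EM step for y ~ A p
    (columns of A sum to 1), then a local smoothing step.
    Generic stand-in for the Wicksell-specific kernel in the paper."""
    counts = np.asarray(counts, float)
    freq = counts / counts.sum()             # observed bin frequencies
    p = np.full(A.shape[1], 1.0 / A.shape[1])  # flat start
    for _ in range(n_iter):
        fitted = A @ p
        p = p * (A.T @ (freq / fitted))      # E + M step (Richardson-Lucy)
        p = np.convolve(p, smooth, mode="same")  # S step: local smoothing
        p /= p.sum()                         # renormalise after edge losses
    return p
```

With A equal to the identity the EM step simply reproduces the observed frequencies, so the output is the smoothed, renormalised histogram; a nontrivial A mixes bins the way slice profiles mix sphere radii.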
