Similar Articles (20 results)
1.
An accurate procedure is proposed to calculate approximate moments of progressive order statistics in the context of statistical inference for lifetime models. The study analyses the performance of a power series expansion to approximate the moments for location and scale distributions with high precision and small deviations with respect to the exact values. A comparative analysis between exact and approximate methods is shown using some tables and figures. The different approximations are applied in two situations. First, we consider the problem of computing the large sample variance–covariance matrix of maximum likelihood estimators. We also use the approximations to obtain progressively censored sampling plans for log-normally distributed data. These problems illustrate that the presented procedure is highly useful for computing the moments precisely for numerous censoring patterns and, in many cases, is the only viable method because exact calculation may not be applicable.
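Moments of progressive order statistics can be spot-checked by simulation. The sketch below is a hedged illustration, not the paper's power-series method: it generates progressively Type-II censored uniform samples with the standard Balakrishnan-Sandhu algorithm, maps them through a parent quantile function, and averages. The normal parent and the censoring scheme are arbitrary choices for the example.

```python
import numpy as np
from scipy.stats import norm

def progressive_uniform_sample(scheme, rng):
    """One progressively Type-II censored sample from U(0,1), sorted,
    generated with the Balakrishnan-Sandhu (1995) algorithm."""
    m = len(scheme)
    w = rng.uniform(size=m)
    v = np.empty(m)
    for i in range(1, m + 1):
        gamma_i = i + sum(scheme[m - i:])          # i + R_m + ... + R_{m-i+1}
        v[i - 1] = w[i - 1] ** (1.0 / gamma_i)
    return np.array([1.0 - np.prod(v[m - i:]) for i in range(1, m + 1)])

def mc_progressive_moments(quantile, scheme, n_rep=20_000, seed=0):
    """Monte Carlo means and variances of X_{i:m:n} for a parent distribution
    specified through its quantile function."""
    rng = np.random.default_rng(seed)
    draws = np.array([quantile(progressive_uniform_sample(scheme, rng))
                      for _ in range(n_rep)])
    return draws.mean(axis=0), draws.var(axis=0)

# Example: standard normal parent, censoring scheme R = (2, 0, 0, 1), so n = 8, m = 4.
means, variances = mc_progressive_moments(norm.ppf, scheme=[2, 0, 0, 1])
print(means)
print(variances)
```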

2.
Apart from having intrinsic mathematical interest, order statistics are also useful in the solution of many applied sampling and analysis problems. For a general review of the properties and uses of order statistics, see David (1981). This paper provides tabulations of means and variances of certain order statistics from the gamma distribution, for parameter values not previously available. The work was motivated by a particular quota sampling problem, for which existing tables are not adequate. The solution to this sampling problem actually requires the moments of the highest order statistic within a given set; however, the calculation algorithm used involves a recurrence relation, which causes all the lower order statistics to be calculated first. We therefore took the opportunity to develop more extensive tables for the gamma order statistic moments in general. Our tables provide values for the order statistic moments which were not available in previous tables, notably those for higher values of m, the gamma distribution shape parameter. However, we have also retained the corresponding statistics for lower values of m, first to allow the accuracy of the computations to be checked against previous tables, and second to present our new results together with the previously known values in a consistent format.
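As a rough cross-check on such tables (not the recurrence-based algorithm described above), the moments of a single gamma order statistic can be obtained by direct numerical integration of the order-statistic density; the shape value and sample size below are arbitrary.

```python
import numpy as np
from scipy import integrate, stats
from scipy.special import comb

def gamma_os_moment(r, n, shape, k=1):
    """E[X_{(r:n)}^k] for a Gamma(shape, scale=1) parent, by quadrature of the
    order-statistic density n!/((r-1)!(n-r)!) F^{r-1} (1-F)^{n-r} f."""
    dist = stats.gamma(shape)
    def integrand(x):
        F = dist.cdf(x)
        return (x ** k * r * comb(n, r)
                * F ** (r - 1) * (1.0 - F) ** (n - r) * dist.pdf(x))
    val, _ = integrate.quad(integrand, 0.0, np.inf)
    return val

# Mean and variance of the largest of n = 5 observations, shape m = 3:
mu1 = gamma_os_moment(5, 5, 3.0, k=1)
mu2 = gamma_os_moment(5, 5, 3.0, k=2)
print(mu1, mu2 - mu1 ** 2)
```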

3.
Various exact tests for statistical inference are available for powerful and accurate decision rules provided that corresponding critical values are tabulated or evaluated via Monte Carlo methods. This article introduces a novel hybrid method for computing p-values of exact tests by combining Monte Carlo simulations and statistical tables generated a priori. To use the data from Monte Carlo generations and tabulated critical values jointly, we employ kernel density estimation within Bayesian-type procedures. The p-values are linked to the posterior means of quantiles. In this framework, we present relevant information from the Monte Carlo experiments via likelihood-type functions, whereas tabulated critical values are used to reflect prior distributions. The local maximum likelihood technique is employed to compute functional forms of prior distributions from statistical tables. Empirical likelihood functions are proposed to replace parametric likelihood functions within the structure of the posterior mean calculations to provide a Bayesian-type procedure with a distribution-free set of assumptions. We derive the asymptotic properties of the proposed nonparametric posterior means of quantiles process. Using the theoretical propositions, we calculate the minimum number of Monte Carlo resamples needed for a desired level of accuracy on the basis of distances between actual data characteristics (e.g. sample sizes) and the characteristics of the data used to present the corresponding critical values in a table. The proposed approach makes practical applications of exact tests simple and rapid. Implementations of the proposed technique are easily carried out via the recently developed STATA and R statistical packages.
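The full Bayesian fusion of tabulated critical values with Monte Carlo output is beyond a short sketch, but the Monte Carlo half of the idea can be illustrated: a kernel-smoothed p-value computed from simulated null statistics. The test statistic, bandwidth rule and simulation size below are illustrative assumptions, not the article's procedure.

```python
import numpy as np
from scipy.stats import norm

def mc_pvalue(observed_stat, simulate_stat, n_sim=10_000, seed=1):
    """Kernel-smoothed Monte Carlo estimate of P(T >= observed_stat) under H0."""
    rng = np.random.default_rng(seed)
    sims = np.array([simulate_stat(rng) for _ in range(n_sim)])
    h = 1.06 * sims.std(ddof=1) * n_sim ** (-0.2)   # Silverman's rule bandwidth
    # Average of Gaussian-kernel survival functions centred at the simulated values
    return norm.sf((observed_stat - sims) / h).mean()

# Example: a standardized sample mean simulated under H0 (hypothetical statistic).
p = mc_pvalue(2.3, lambda rng: rng.normal(size=30).mean() * 30 ** 0.5)
print(p)
```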

4.
It is shown in this article that, given the moments of a distribution, any percentage point can be accurately determined from an approximation of the corresponding density function in terms of the product of an appropriate baseline density and a polynomial adjustment. This approach, which is based on a moment-matching technique, is not only conceptually simple but easy to implement. As illustrated by several applications, the percentiles so obtained are in excellent agreement with the tabulated values. Whereas statistical tables, if at all available or accessible, can hardly ever cover all the potentially useful combinations of the parameters associated with a random quantity of interest, the proposed methodology has no such limitation.
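A minimal sketch of this moment-matching idea, assuming a normal baseline density and a polynomial adjustment whose coefficients are determined by equating the first few raw moments of the approximant to the target moments; percentiles then follow by numerically inverting the approximate CDF. The Student-t example is illustrative only, not one of the article's applications.

```python
import numpy as np
from scipy import integrate, optimize, stats

def poly_adjusted_density(raw_moments):
    """raw_moments = [1, mu_1, ..., mu_d]; returns an approximate pdf
    f(x) = baseline(x) * (c_0 + c_1 x + ... + c_d x^d)."""
    d = len(raw_moments) - 1
    mean = raw_moments[1]
    sd = np.sqrt(raw_moments[2] - mean ** 2)
    base = stats.norm(mean, sd)                        # assumed baseline density
    base_m = [base.moment(i) for i in range(2 * d + 1)]
    M = np.array([[base_m[i + j] for j in range(d + 1)] for i in range(d + 1)])
    c = np.linalg.solve(M, raw_moments)                # moment-matching linear system
    return lambda x: base.pdf(x) * np.polynomial.polynomial.polyval(x, c)

def percentile(pdf, p, lo, hi):
    cdf = lambda x: integrate.quad(pdf, lo, x)[0]
    return optimize.brentq(lambda x: cdf(x) - p, lo, hi)

# Illustration: Student-t with 10 df from its first four raw moments.
nu = 10
f = poly_adjusted_density([1, 0, nu / (nu - 2), 0, 3 * nu**2 / ((nu - 2) * (nu - 4))])
print(percentile(f, 0.95, -10.0, 10.0))   # exact t_10 value is about 1.81
```

In practice a baseline sharing the support and tail weight of the target (a gamma baseline for positive data, say) behaves better than the normal assumed here, and matching more moments refines the tails at the risk of a locally negative adjusted density.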

5.
In this paper, we study some mathematical properties of the beta Weibull (BW) distribution, which is a quite flexible model for analysing positive data. It contains the Weibull, exponentiated exponential, exponentiated Weibull and beta exponential distributions as special sub-models. We demonstrate that the BW density can be expressed as a mixture of Weibull densities. We provide its moments and two closed-form expressions for its moment-generating function. We examine the asymptotic distributions of the extreme values. Explicit expressions are derived for the mean deviations, Bonferroni and Lorenz curves, reliability and two entropies. The density of the BW order statistics is a mixture of Weibull densities, and two closed-form expressions are derived for their moments. The estimation of the parameters is approached by two methods: moments and maximum likelihood. We compare the performances of the estimates obtained from both methods by simulation. The expected information matrix is derived. For the first time, we introduce a log-BW regression model to analyse censored data. The usefulness of the BW distribution is illustrated in the analysis of three real data sets.
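Under the usual beta-G construction that such distributions are built on, a BW variate is the Weibull quantile of a Beta(a, b) draw, which gives a quick way to check closed-form moments by simulation; the parameter values below are arbitrary.

```python
import numpy as np

def rbeta_weibull(a, b, c, lam, size, rng=None):
    """Beta Weibull variates: apply the Weibull(c, lam) quantile to Beta(a, b) draws."""
    rng = rng or np.random.default_rng()
    u = rng.beta(a, b, size)
    return lam * (-np.log1p(-u)) ** (1.0 / c)   # Weibull quantile of a beta draw

x = rbeta_weibull(a=2.0, b=1.5, c=1.8, lam=1.0, size=200_000)
print(x.mean(), x.var())   # compare with the paper's closed-form BW moments
```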

6.
The main topic of the paper is on-line filtering for non-Gaussian dynamic (state space) models by approximate computation of the first two posterior moments using efficient numerical integration. Based on approximating the prior of the state vector by a normal density, we prove that the posterior moments of the state vector are related to the posterior moments of the linear predictor in a simple way. For the linear predictor, Gauss-Hermite integration is carried out with automatic reparametrization based on an approximate posterior mode filter. We illustrate how further topics in applied state space modelling, such as estimating hyperparameters, computing model likelihoods and predictive residuals, are managed by integration-based Kalman-filtering. The methodology derived in the paper is applied to on-line monitoring of ecological time series and filtering for small count data.
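A hedged one-observation sketch of the integration step, assuming a scalar linear predictor with a normal prior and a Poisson count observation with a log link; the recursive filter, mode-based reparametrization and hyperparameter estimation discussed in the paper are not reproduced here.

```python
import numpy as np

def gh_posterior_moments(y, m, s, n_nodes=30):
    """First two posterior moments of eta with prior N(m, s^2) and a Poisson(exp(eta))
    observation y, via Gauss-Hermite quadrature."""
    t, w = np.polynomial.hermite.hermgauss(n_nodes)
    eta = m + np.sqrt(2.0) * s * t                  # nodes transformed to the prior scale
    loglik = y * eta - np.exp(eta)                  # Poisson log-likelihood (up to a constant)
    lik = np.exp(loglik - loglik.max())             # stabilised likelihood values
    wts = w * lik
    total = wts.sum()
    post_mean = (wts * eta).sum() / total
    post_var = (wts * eta ** 2).sum() / total - post_mean ** 2
    return post_mean, post_var

print(gh_posterior_moments(y=4, m=0.0, s=1.0))
```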

7.
This paper is concerned with estimating the parameters of Tadikamalla-Johnson's LB distribution using the first four moments. Tables of the parameters of the LB distribution are given for selected values of skewness (0.0(0.05)1.0(0.1)2.0) and the corresponding available values of kurtosis at intervals of 0.2. The construction and use of these tables is explained with a numerical example.

8.
Friedman's test is used for assessing the independence of repeated experiments resulting in ranks, summarized as a table of integer entries ranging from 1 to k, with k columns and N rows. For its practical use, the hypothesis test can be based either on published tables with exact values for small k and N, or on an asymptotic analytical approximation valid for large N or large k. The quality of the approximation, measured as the relative difference of the true critical values with respect to those arising from the asymptotic approximation, is simply not known. The literature review shows cases where the wrong conclusion could have been drawn using the approximation, although it may not be the only cause of opposite decisions. By Monte Carlo simulation we conclude that published tables do not cover a large enough set of (k, N) values to assure adequate accuracy. Our proposal is to systematically extend the existing tables over k and N, so that using the analytical approximation for values outside the tabulated range incurs less than a prescribed relative error. For illustration purposes some of the tables are included in the paper, but the complete set is presented as source code valid for Octave/Matlab/Scilab etc., and amenable to being ported to other programming languages.
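The simulation behind such extended tables is straightforward to reproduce in outline: under the null hypothesis every row of ranks is an independent random permutation of 1..k, so critical values of Friedman's statistic for a given (k, N) can be estimated by Monte Carlo and set against the chi-square approximation. The sketch below (in Python rather than the Octave/Matlab/Scilab source mentioned above) uses an arbitrary (k, N) and simulation size.

```python
import numpy as np
from scipy.stats import chi2

def friedman_critical_value(k, N, alpha=0.05, n_sim=100_000, seed=0):
    rng = np.random.default_rng(seed)
    # Each row is a uniformly random permutation of 1..k (no ties under H0).
    ranks = rng.random((n_sim, N, k)).argsort(axis=2) + 1
    Rj = ranks.sum(axis=1)                                  # column rank sums, shape (n_sim, k)
    Q = 12.0 / (N * k * (k + 1)) * (Rj ** 2).sum(axis=1) - 3 * N * (k + 1)
    return np.quantile(Q, 1 - alpha)

k, N = 4, 6
print(friedman_critical_value(k, N), chi2.ppf(0.95, k - 1))  # MC vs asymptotic at 5%
```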

9.
Two approaches to the problem of goodness-of-fit with nuisance parameters are presented in this paper, both based on modifications of the Kolmogorov-Smirnov statistic. Improved tables of critical values, originally computed by Lilliefors and Srinivasan, are presented for the normal and exponential cases. Also given are tables for the uniform case, the normal with known mean and the normal with known variance. All tables were computed using Monte Carlo simulation with sample size n = 20,000.
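A minimal sketch of how such Monte Carlo tables are produced, for the normal case with both parameters estimated (the Lilliefors setting); the number of replications and sample size below are arbitrary.

```python
import numpy as np
from scipy.stats import kstest, norm

def lilliefors_critical(n, alpha=0.05, n_sim=20_000, seed=0):
    """Monte Carlo critical value of the KS statistic with estimated normal parameters."""
    rng = np.random.default_rng(seed)
    d = np.empty(n_sim)
    for i in range(n_sim):
        x = rng.normal(size=n)
        d[i] = kstest(x, norm(x.mean(), x.std(ddof=1)).cdf).statistic
    return np.quantile(d, 1 - alpha)

print(lilliefors_critical(20))
```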

10.
Inference methods for the positive stable laws, which have no closed-form expression for their density functions, are developed based on a special quadratic distance using negative moments. Asymptotic properties of the quadratic distance estimator (QDE) are established. The QDE is shown to have asymptotic relative efficiency close to 1 for almost all values of the parameter space. Goodness-of-fit tests are also developed for testing the parametric families, and practical numerical techniques are considered for implementing the methods. With simple and efficient methods to estimate the parameters, positive stable laws could find new applications in actuarial science for modelling insurance claims and lifetime data.

11.
Consider r independent and identically distributed random points in a unit n-ball, of which p are in the interior and r - p are on the surface. These r points, via their convex hull, generate an r-simplex. This article deals with the exact density of the r-content when the points are uniformly distributed. The exact density of the r-content is obtained for general values of the parameters r, n and p. A representation of the density is given as a mixture of beta type-1 densities, so that one can evaluate various types of probabilities by using incomplete beta tables.

12.
An example of the classical occupancy problem is to sample with replacement from an urn containing several colours of balls and count the number of balls sampled until a given number of “quotas” are filled. This and the corresponding random variable for sampling without replacement will be referred to as quota fulfillment times. Asymptotic and exact methods for computing moments and distributions are given in this paper. Moments of quota fulfillment times are related to moments of order statistics of beta and gamma random variables. Most of the results for sampling without replacement and some of the results for sampling with replacement are believed to be new. Some other known sampling-with-replacement results are given for comparative purposes.
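For sampling with replacement, the moments of a quota fulfillment time are easy to approximate by simulation, which gives a useful check on exact and asymptotic formulas; the colour probabilities and quotas below are made up for illustration.

```python
import numpy as np

def quota_time(probs, quotas, rng):
    """Number of draws with replacement until every colour has met its quota."""
    counts = np.zeros(len(probs), dtype=int)
    draws = 0
    while np.any(counts < quotas):
        counts[rng.choice(len(probs), p=probs)] += 1
        draws += 1
    return draws

rng = np.random.default_rng(2)
times = [quota_time([0.5, 0.3, 0.2], [2, 2, 1], rng) for _ in range(20_000)]
print(np.mean(times), np.var(times))   # simulated mean and variance of the fulfillment time
```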

13.
This paper is concerned with estimating the parameters of Tadikamalla-Johnson's LU distribution based on the method of moments. Tables of the parameters of the LU distribution are given for selected values of skewness (0.0(0.05)1.0(0.1)2.0) and for twenty values of kurtosis at intervals of 0.2. The construction and use of these tables is explained with a numerical example.

14.
Two approximations for recovering functions from their transformed moments are proposed. Upper bounds for the uniform rate of convergence are derived. In addition, the estimates of the cumulative distribution function and its density are compared with the empirical distribution and kernel density estimates via a simulation study. Plots of the recovered functions are also presented for several examples.

15.
Bayesian analyses of the predictive values and related parameters of a diagnostic test are derived. In one case, the estimates are conditional on values of the prevalence of the disease; in the second case, the corresponding unconditional estimates are presented. Small-sample point estimates, posterior moments, and credibility intervals for all related parameters are obtained. Numerical methods of solution are also discussed.
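A simulation-based counterpart to these estimates, assuming independent Beta(1, 1) priors and hypothetical study counts: posterior draws of sensitivity, specificity and prevalence yield a posterior sample of the positive predictive value, from which point estimates, posterior moments and credibility intervals follow directly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
se = rng.beta(1 + 90, 1 + 10, n)        # hypothetical: 90 true positives out of 100 diseased
sp = rng.beta(1 + 95, 1 + 5, n)         # hypothetical: 95 true negatives out of 100 healthy
prev = rng.beta(1 + 20, 1 + 180, n)     # hypothetical: 20 diseased in a sample of 200
ppv = se * prev / (se * prev + (1 - sp) * (1 - prev))
print(ppv.mean(), ppv.var(), np.quantile(ppv, [0.025, 0.975]))
```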

16.
This note presents tables for Friedman's test for two-way analysis of variance by ranks. These tables are more accurate than those presented in the literature. After intensive simulations, we found discrepancies with earlier published tables for particular critical values. The tables are also more extensive than those previously available.

17.
The important problem of the ratio of Weibull random variables is considered. Two motivating examples from engineering are discussed. Exact expressions are derived for the probability density function, cumulative distribution function, hazard rate function, shape characteristics, moments, factorial moments, skewness, kurtosis and percentiles of the ratio. Estimation procedures by the methods of moments and maximum likelihood are provided. The performances of the estimates from these methods are compared by simulation. Finally, an application is discussed for aspect and performance ratios of systems.
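A hedged numerical cross-check for the ratio R = X/Y of independent Weibull variables: its CDF can be written as P(R <= t) = E[F_X(tY)] and evaluated by quadrature, and low-order moments can be approximated by simulation; the shape and scale values below are arbitrary.

```python
import numpy as np
from scipy import integrate, stats

def ratio_cdf(t, wx, wy):
    """P(X/Y <= t) for independent X ~ wx and Y ~ wy with positive support."""
    return integrate.quad(lambda y: wx.cdf(t * y) * wy.pdf(y), 0, np.inf)[0]

wx = stats.weibull_min(c=2.0, scale=1.0)
wy = stats.weibull_min(c=1.5, scale=2.0)
print(ratio_cdf(1.0, wx, wy))

r = wx.rvs(200_000, random_state=1) / wy.rvs(200_000, random_state=2)
print(r.mean(), r.var())   # compare with the exact moment expressions
```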

18.
We study in detail the so-called beta-modified Weibull distribution, motivated by the wide use of the Weibull distribution in practice and by the fact that the generalization provides a continuous crossover towards cases with different shapes. The new distribution is important since it contains as special sub-models some widely known distributions, such as the generalized modified Weibull, beta Weibull, exponentiated Weibull, beta exponential, modified Weibull and Weibull distributions, among several others. It also provides more flexibility for analysing complex real data. Various mathematical properties of this distribution are derived, including its moments and moment generating function. We examine the asymptotic distributions of the extreme values. Explicit expressions are also derived for the chf, mean deviations, Bonferroni and Lorenz curves, reliability and entropies. The estimation of parameters is approached by two methods: moments and maximum likelihood. We compare by simulation the performances of the estimates from these methods. We obtain the expected information matrix. Two applications are presented to illustrate the proposed distribution.

19.
The Kumaraswamy Gumbel distribution
The Gumbel distribution is perhaps the most widely applied statistical distribution for problems in engineering. We propose a generalization, referred to as the Kumaraswamy Gumbel distribution, and provide a comprehensive treatment of its structural properties. We obtain the analytical shapes of the density and hazard rate functions. We calculate explicit expressions for the moments and generating function. The variation of the skewness and kurtosis measures is examined, and the asymptotic distribution of the extreme values is investigated. Explicit expressions are also derived for the moments of order statistics. The methods of maximum likelihood and parametric bootstrap and a Bayesian procedure are proposed for estimating the model parameters. We obtain the expected information matrix. An application of the new model to a real dataset illustrates the potential of the proposed model. Two bivariate generalizations of the model are proposed.
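Assuming the standard Kumaraswamy-G construction, F(x) = 1 - [1 - G(x)^a]^b with G the Gumbel CDF, variates follow from a quantile transform, so the skewness and kurtosis behaviour can be explored by simulation; the parameter values below are arbitrary.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def r_kw_gumbel(a, b, mu, sigma, size, rng=None):
    """Kumaraswamy Gumbel variates via the quantile transform."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(size=size)
    g = (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)   # Kumaraswamy quantile
    return mu - sigma * np.log(-np.log(g))            # Gumbel quantile

x = r_kw_gumbel(a=2.0, b=0.5, mu=0.0, sigma=1.0, size=200_000)
print(x.mean(), x.var(), skew(x), kurtosis(x))
```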

20.
The best-known non-asymptotic method for comparing two independent proportions is Fisher's exact test. The usual critical region (CR) tables for this test contain one or more of the following defects: they distinguish between rows and columns; they distinguish between the alternatives H1: p1 < p2 and H1: p1 > p2; they assume that the error for the two-tailed test is twice that of the one-tailed test; they do not use the optimal version of the test; they do not give both CRs for one and two tails at the same time. All this results in the unnecessary duplication of the space required for the tables, the construction of tables of low-powered methods, or the need to manipulate two different tables (one for the one-tailed test, the other for the two-tailed test). This paper presents CR tables which have been obtained from the most powerful version of Fisher's exact test and which occupy the minimum space possible. The tables, which are valid for one- or two-tailed tests, have significance levels of 10%, 5% and 1% and values of N (the total size of both samples) less than or equal to 40. This article shows how to calculate the P value for a specific problem, using the tables as a means of partial checking and as a preliminary step to determining the exact P value.
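The exact p-value that such tables summarise comes from conditioning on the margins of the 2x2 table, under which the first cell is hypergeometric; a brief sketch with a made-up table:

```python
from scipy.stats import hypergeom

def fisher_one_sided_p(a, b, c, d):
    """One-sided P value for H1: p1 > p2, for the 2x2 table [[a, b], [c, d]]."""
    n1, n2, m1 = a + b, c + d, a + c            # row totals and first column total
    rv = hypergeom(n1 + n2, n1, m1)             # distribution of the first cell under H0
    return rv.sf(a - 1)                         # P(X >= a)

# Agrees with scipy.stats.fisher_exact([[8, 2], [3, 7]], alternative='greater').
print(fisher_one_sided_p(8, 2, 3, 7))
```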

