Similar Articles
1.
The inverse Gaussian (IG) family is now widely used for modeling non-negative skewed measurements. In this article, we construct the likelihood-ratio tests (LRTs) for homogeneity of the order-constrained IG means and study the null distributions for the simple order and simple tree order cases. Interestingly, the null distribution results for the normal case apply without modification to the IG case. This supplements the numerous well-known and striking analogies between the Gaussian and inverse Gaussian families.

2.
The authors extend Fisher's method of combining two independent test statistics to test homogeneity of several two-parameter populations. They explore two procedures combining asymptotically independent test statistics: the first pools two likelihood ratio statistics and the other, score test statistics. They then give specific results to test homogeneity of several normal, negative binomial, or beta-binomial populations. Their simulations provide evidence that in this context, Fisher's method performs generally well, even when the statistics to be combined are only asymptotically independent. They are led to recommend Fisher's test based on score statistics, since the latter have simple forms, are easy to calculate, and have uniformly good level properties.
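As a minimal illustration of the combination idea (not tied to the specific populations studied above), the following Python sketch computes Fisher's statistic T = -2 * sum(log p_i), which is chi-squared with 2k degrees of freedom when the k p-values are independent; the two input p-values are hypothetical.

```python
import numpy as np
from scipy import stats

def fisher_combine(pvalues):
    """Fisher's method: T = -2 * sum(log p_i) ~ chi^2 with 2k df under H0."""
    pvalues = np.asarray(pvalues)
    stat = -2.0 * np.sum(np.log(pvalues))
    df = 2 * len(pvalues)
    return stat, stats.chi2.sf(stat, df)

# Example: combine p-values from two asymptotically independent score tests
print(fisher_combine([0.04, 0.11]))
# Same result via scipy's built-in:
print(stats.combine_pvalues([0.04, 0.11], method="fisher"))
```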

3.
Two-treatment multicentre clinical trials are very common in practice. In cases where a non-parametric analysis is appropriate, a rank-sum test for grouped data called the van Elteren test can be applied. As an alternative approach, one may apply a combination test such as Fisher's combination test or the inverse normal combination test (also called Liptak's method) in order to combine centre-specific P-values. If there are no ties and no differences between centres with regard to the groups' sample sizes, the inverse normal combination test using centre-specific Wilcoxon rank-sum tests is equivalent to the van Elteren test. In this paper, the van Elteren test is compared with Fisher's combination test based on Wilcoxon rank-sum tests. Data from two multicentre trials as well as simulated data indicate that Fisher's combination of P-values is more powerful than the van Elteren test in realistic scenarios, i.e. when there are large differences between the centres' P-values, some quantitative interaction between treatment and centre, and/or heterogeneity in variability. The combination approach opens the possibility of using statistics other than the rank sum, and it is also a suitable method for more complicated designs, e.g. when covariates such as age or gender are included in the analysis.
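A rough sketch of the two combination approaches mentioned above, assuming three hypothetical centres with simulated two-arm data. The inverse normal (Liptak) combination here uses equal weights, consistent with the equivalence condition stated above (no ties, no sample-size differences across centres).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical two-arm data from 3 centres (sizes and effect are illustrative)
centres = [(rng.normal(0.0, 1, 12), rng.normal(0.5, 1, 12)) for _ in range(3)]

# Centre-specific one-sided Wilcoxon rank-sum p-values (treatment > control)
pvals = [stats.ranksums(trt, ctl, alternative='greater').pvalue
         for ctl, trt in centres]

# Inverse normal (Liptak) combination: Z = sum(Phi^{-1}(1 - p_i)) / sqrt(k)
z = np.sum(stats.norm.isf(pvals)) / np.sqrt(len(pvals))
print('inverse normal combined p =', stats.norm.sf(z))

# Fisher's combination for comparison
print(stats.combine_pvalues(pvals, method='fisher'))
```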

4.
In many linear inverse problems the unknown function f (or its discrete approximation, a p × 1 vector Θ), which needs to be reconstructed, is subject to non-negativity constraints; we call these problems non-negative linear inverse problems (NNLIPs). This article considers NNLIPs. However, the error distribution is not confined to the traditional Gaussian or Poisson distributions. We adopt the exponential family of distributions, of which the Gaussian and Poisson are special cases. We search for the non-negative maximum penalized likelihood (NNMPL) estimate of Θ. The size of Θ often prohibits direct implementation of the traditional methods for constrained optimization. Given that the measurements and point-spread function (PSF) values are all non-negative, we propose a simple multiplicative iterative algorithm. We show that if there is no penalty, then this algorithm converges almost surely; otherwise a relaxation or line search is needed to ensure its convergence.
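The article's algorithm covers general exponential-family errors; as a hedged illustration, here is the classical multiplicative update for the unpenalized Poisson special case (the Richardson–Lucy form), which preserves non-negativity automatically because every factor is non-negative. The PSF matrix A and data below are synthetic.

```python
import numpy as np

def multiplicative_em(A, y, n_iter=500):
    """Multiplicative update for the non-negative Poisson MLE:
    theta <- theta * A^T(y / (A theta)) / (A^T 1).
    Every factor is non-negative, so theta stays non-negative."""
    theta = np.full(A.shape[1], y.mean() + 1e-8)
    norm = A.sum(axis=0)                                  # A^T 1
    for _ in range(n_iter):
        ratio = y / np.clip(A @ theta, 1e-12, None)
        theta *= (A.T @ ratio) / np.clip(norm, 1e-12, None)
    return theta

rng = np.random.default_rng(0)
A = rng.uniform(0, 1, (50, 20))                           # non-negative PSF values
truth = rng.uniform(0, 5, 20)
y = rng.poisson(A @ truth).astype(float)                  # non-negative counts
print(multiplicative_em(A, y)[:5])
```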

5.
The approximate likelihood function introduced by Whittle has been used to estimate the spectral density and certain parameters of a variety of time series models. In this note we attempt to empirically quantify the loss of efficiency of Whittle's method in nonstandard settings. A recently developed representation of certain first-order non-Gaussian stationary autoregressive processes allows a direct comparison of the true likelihood function with Whittle's. The conclusion is that Whittle's likelihood can produce unreliable estimates in the non-Gaussian case, even for moderate sample sizes. Moreover, for small samples, and if the autocorrelation of the process is high, Whittle's approximation is not efficient even in the Gaussian case. While these facts are known to some extent, the present study sheds more light on the degree of efficiency loss incurred by using Whittle's likelihood, in both the Gaussian and non-Gaussian cases.
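To make the comparison concrete, here is a minimal sketch of Whittle estimation for a Gaussian AR(1): the Whittle log-likelihood sums log f(ω_j) + I(ω_j)/f(ω_j) over the Fourier frequencies, with the innovation variance profiled out. This is a generic textbook version, not the exact setup of the study.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def whittle_neg_loglik(phi, x):
    """Negative Whittle log-likelihood for a mean-zero AR(1), sigma^2 profiled out."""
    n = len(x)
    j = np.arange(1, (n - 1) // 2 + 1)
    w = 2 * np.pi * j / n                                  # Fourier frequencies
    I = np.abs(np.fft.fft(x)[j]) ** 2 / (2 * np.pi * n)    # periodogram
    g = 1.0 / (2 * np.pi * (1 - 2 * phi * np.cos(w) + phi ** 2))  # f / sigma^2
    sigma2 = np.mean(I / g)                                # profile MLE of sigma^2
    return np.sum(np.log(sigma2 * g) + I / (sigma2 * g))

rng = np.random.default_rng(2)
x = np.empty(200); x[0] = 0.0
for t in range(1, 200):                                    # simulate AR(1), phi = 0.6
    x[t] = 0.6 * x[t - 1] + rng.normal()
res = minimize_scalar(whittle_neg_loglik, bounds=(-0.99, 0.99),
                      args=(x,), method='bounded')
print('Whittle estimate of phi:', res.x)
```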

6.
The likelihood ratio test for equality of ordered means is known to have power characteristics that are generally superior to those of competing procedures. Difficulties in implementing this test have led to the development of alternative approaches, most of which are based on contrasts. While orthogonal contrasts can be chosen to simplify the distribution theory, we propose a class of tests that is easy to implement even if the contrasts used are not orthogonal. An overall measure of significance may be obtained by using Fisher's combination statistic to combine the dependent p-values arising from these contrasts. This method can be easily implemented for testing problems involving unequal sample sizes and any partial order, and has power properties that compare well with those of the likelihood ratio test and other contrast-based tests.
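A hedged sketch of the idea described above, for three groups under a simple order: one-sided p-values from non-orthogonal contrasts are combined with Fisher's statistic, and because the p-values are dependent, the null distribution is calibrated here by simulation rather than by the chi-squared reference. The contrasts and group settings are illustrative, not the paper's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def fisher_stat(groups, contrasts):
    """Fisher combination of one-sided p-values from (possibly non-orthogonal) contrasts."""
    means = np.array([g.mean() for g in groups])
    ns = np.array([len(g) for g in groups])
    df = ns.sum() - len(groups)
    s2 = np.sum([(len(g) - 1) * g.var(ddof=1) for g in groups]) / df
    ps = [stats.t.sf(c @ means / np.sqrt(s2 * np.sum(c ** 2 / ns)), df)
          for c in contrasts]
    return -2 * np.sum(np.log(ps))

# Hypothetical contrasts for the simple order mu1 <= mu2 <= mu3 (not orthogonal)
contrasts = np.array([[-1, 1, 0], [0, -1, 1], [-1, 0, 1]], dtype=float)
obs = fisher_stat([rng.normal(m, 1, 10) for m in (0.0, 0.4, 0.8)], contrasts)

# The contrasts share data, so the chi-squared reference fails; simulate the null
null = [fisher_stat([rng.normal(0, 1, 10) for _ in range(3)], contrasts)
        for _ in range(2000)]
print('p =', np.mean(np.array(null) >= obs))
```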

7.
A non-parametric approach is considered for estimating a probability density function (pdf) supported on (0, ∞). The approach uses the inverse gamma kernel. We show that it shares the properties of the gamma, reciprocal inverse Gaussian, and inverse Gaussian kernels: it is free of boundary bias, non-negative, and achieves the optimal rate of convergence for the mean integrated squared error. Some properties of the estimator, such as its bias and variance, are also established. Bandwidth selection methods for inverse gamma kernel estimation of the pdf are compared.
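A minimal sketch of an inverse gamma kernel estimator. The parameterization below (shape 1/b + 1 and scale x/b, so the kernel's mean equals the evaluation point x) is one plausible choice for illustration; the article's exact parameterization may differ.

```python
import numpy as np
from scipy import stats

def invgamma_kde(x_grid, data, b):
    """Asymmetric-kernel density estimate on (0, inf) with inverse gamma kernels.

    Illustrative parameterization: at evaluation point x, the kernel is an
    inverse gamma density with shape 1/b + 1 and scale x/b, so its mean is x."""
    shape = 1.0 / b + 1.0
    return np.array([stats.invgamma.pdf(data, shape, scale=x / b).mean()
                     for x in x_grid])

rng = np.random.default_rng(4)
data = rng.gamma(2.0, 1.5, 300)                 # positive, right-skewed sample
grid = np.linspace(0.05, 10, 100)
print(invgamma_kde(grid, data, b=0.2)[:5])
```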

8.
In applications of generalized order statistics such as, for instance, reliability analysis of engineering systems, prior knowledge about the order of the underlying model parameters is often available and may therefore be incorporated in inferential procedures. Taking this information into account, we establish the likelihood ratio test, Rao's score test, and Wald's test for test problems arising from the question of appropriate model selection for ordered data, where simple order restrictions are put on the parameters under the alternative hypothesis. For simple and composite null hypotheses, explicit representations of the corresponding test statistics are obtained along with some properties and their asymptotic distributions. A simulation study is carried out to compare the order-restricted tests in terms of their power. In the set-up considered, the adapted tests significantly improve the power of the associated omnibus versions for small sample sizes, especially when testing a composite null hypothesis.

9.
We consider maximum likelihood estimation and likelihood ratio tests under inequality restrictions on the parameters. A special case is order restrictions, which may appear, for example, in connection with effects of an ordinal qualitative covariate. Our estimation approach is based on the principle of sequential quadratic programming, where the restricted estimate is computed iteratively and a quadratic optimization problem under inequality restrictions is solved in each iteration. Testing for inequality restrictions is based on the likelihood ratio principle. Under certain regularity assumptions the likelihood ratio test statistic is asymptotically distributed as a mixture of χ² distributions, where the weights are a function of the restrictions and the information matrix. A major theoretical problem is that in general there is no unique least favourable point. We present some empirical findings on the finite-sample behaviour of the tests and apply the methods to examples from credit scoring and dentistry.
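As an illustration of the estimation step, scipy's SLSQP solver (a sequential quadratic programming variant) computes an order-restricted Gaussian MLE below. The data, the simple order θ₁ ≤ θ₂ ≤ θ₃, and the unit-variance likelihood are all illustrative assumptions; the resulting LRT statistic would be referred to a chi-bar-squared mixture as described above.

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(5)
y = [rng.normal(m, 1, 15) for m in (0.2, 0.1, 0.6)]   # three groups

def neg_loglik(theta):
    # Gaussian negative log-likelihood (unit variance), up to a constant
    return sum(0.5 * np.sum((yi - t) ** 2) for yi, t in zip(y, theta))

# Inequality restrictions theta1 <= theta2 <= theta3, solved iteratively by SLSQP
cons = [{'type': 'ineq', 'fun': lambda t, i=i: t[i + 1] - t[i]} for i in range(2)]
restricted = optimize.minimize(neg_loglik, x0=np.zeros(3),
                               constraints=cons, method='SLSQP')
unrestricted = np.array([yi.mean() for yi in y])
lrt = 2 * (neg_loglik(restricted.x) - neg_loglik(unrestricted))
print('restricted MLE:', restricted.x)
print('LRT statistic:', lrt)   # reference distribution: chi-bar-squared mixture
```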

10.
Results are given of an empirical power study of three statistical procedures for testing for exponentiality of several independent samples. The test procedures are the Tiku (1974) test, a multi-sample Durbin (1975) test, and a multi-sample Shapiro–Wilk (1972) test. The alternative distributions considered in the study were selected from the gamma, Weibull, Lomax, lognormal, inverse Gaussian, and Burr families of positively skewed distributions. The general behavior of the conditional mean exceedance function is used to classify each alternative distribution. It is shown that Tiku's test generally exhibits greater overall power than either of the other two test procedures. For certain alternative distributions, the Shapiro–Wilk test is superior when the sample sizes are small.

11.
The inverse Gaussian (IG) distribution is commonly introduced to model and examine right-skewed data having positive support. When applying the IG model, it is critical to develop efficient goodness-of-fit tests. In this article, we propose a new test statistic for examining the IG goodness of fit based on approximating parametric likelihood ratios. The parametric likelihood ratio methodology is well known to provide powerful tests. In the nonparametric context, the classical empirical likelihood (EL) ratio method is often applied in order to efficiently approximate properties of parametric likelihoods, using an approach based on substituting empirical distribution functions for their population counterparts. The optimal parametric likelihood ratio approach is, however, based on density functions. We develop and analyze the EL ratio approach based on densities in order to test the IG model fit. We show that the proposed test is an improvement over the entropy-based goodness-of-fit test for the IG distribution presented by Mudholkar and Tian (2002). Theoretical support is obtained by proving consistency of the new test and an asymptotic proposition regarding the null distribution of the proposed test statistic. Monte Carlo simulations confirm the powerful properties of the proposed method. Real data examples demonstrate the applicability of the density-based EL ratio goodness-of-fit test for an IG assumption in practice.

12.
The inverse Gaussian (IG) distribution is widely used to model positively skewed data. An important issue is to develop a powerful goodness-of-fit test for the IG distribution. We propose and examine novel test statistics for testing the IG goodness of fit based on the density-based empirical likelihood (EL) ratio concept. To construct the test statistics, we use a new approach that employs minimization of a discrimination information loss estimator to minimize Kullback–Leibler-type information. The proposed tests are shown to be consistent against wide classes of alternatives. We show that the density-based EL ratio tests are more powerful than the corresponding classical goodness-of-fit tests. The practical efficiency of the tests is illustrated using real data examples.
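The density-based EL statistics themselves are paper-specific, but the Monte Carlo machinery for IG goodness of fit can be sketched generically. Below, a parametric bootstrap calibrates a plain Kolmogorov–Smirnov statistic with estimated parameters, standing in for the EL statistic; it is not the authors' test.

```python
import numpy as np
from scipy import stats

def ig_gof_pvalue(data, n_boot=500, seed=0):
    """Parametric-bootstrap goodness-of-fit check for the inverse Gaussian model.

    Generic KS-based calibration shown only to illustrate Monte Carlo
    assessment of an IG fit; not the density-based EL statistic."""
    rng = np.random.default_rng(seed)
    mu, loc, scale = stats.invgauss.fit(data, floc=0)
    d_obs = stats.kstest(data, 'invgauss', args=(mu, loc, scale)).statistic
    d_boot = []
    for _ in range(n_boot):
        sim = stats.invgauss.rvs(mu, loc=loc, scale=scale,
                                 size=len(data), random_state=rng)
        m, l, s = stats.invgauss.fit(sim, floc=0)
        d_boot.append(stats.kstest(sim, 'invgauss', args=(m, l, s)).statistic)
    return np.mean(np.array(d_boot) >= d_obs)

sample = stats.invgauss.rvs(0.5, scale=2.0, size=100,
                            random_state=np.random.default_rng(6))
print('bootstrap p-value:', ig_gof_pvalue(sample))
```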

13.
A substantive problem in neuroscience is the lack of valid statistical methods for non-Gaussian random fields. In the present study, we develop a flexible, yet tractable model for a random field based on kernel smoothing of a so-called Lévy basis. The resulting field may be Gaussian, but there are many other possibilities, including random fields based on gamma, inverse Gaussian, and normal inverse Gaussian (NIG) Lévy bases. It is easy to estimate the parameters of the model and accordingly to assess by simulation the quantiles of test statistics commonly used in neuroscience. We give a concrete example of magnetic resonance imaging scans that are non-Gaussian. For these data, simulations under the fitted models show that traditional methods based on Gaussian random field theory may leave small, but significant changes in signal level undetected, while these changes are detectable under a non-Gaussian Lévy model.
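A discretized sketch of the construction, assuming a gamma Lévy basis approximated by independent gamma increments on a grid and a Gaussian smoothing kernel; the parameters are illustrative.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(7)

# Independent gamma increments on a fine grid approximate a gamma Levy basis;
# kernel smoothing (here a Gaussian kernel via convolution) yields the field.
shape, scale = 2.0, 1.0                              # illustrative basis parameters
noise = rng.gamma(shape, scale, size=(256, 256))
field = ndimage.gaussian_filter(noise, sigma=5.0)    # kernel-smoothed field

# The marginal is non-Gaussian (right-skewed), unlike a Gaussian random field
skew = ((field - field.mean()) ** 3).mean() / field.std() ** 3
print('sample skewness:', skew)
```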

14.
In his articles (1966-1968) concerning statistical inference based on lower and upper probabilities, Dempster refers to the connection between Fisher's fiducial argument and his own ideas of statistical inference. Dempster's main concern, however, focuses on the "Bayesian" aspects of his theory and not on an elaboration of the relation between Fisher's ideas and his own. This article attempts to work out the connection between those two approaches and focuses primarily on the question of whether Dempster's combination rule, his upper and lower probability based on sufficient statistics, and inference based on sufficient statistics in Fisher's sense are consistent. To be adequate to Fisher's reasoning, we deal with absolutely continuous, one-parameter families of distributions. This is certainly not the usual assumption in the context of Dempster's theory and implies a normative but straightforward definition of the underlying conditional distribution; this definition, however, is done in Dempster's spirit, as can be seen from his articles (1966, 1968a,b). Under those assumptions it can be shown that, similar to Lindley's (1958) results concerning consistency in fiducial reasoning, the combination rule, Dempster's procedure based on sufficient statistics, and fiducial inference by sufficient statistics agree if and only if the parametric family under consideration can be transformed to location parameter form.

15.
The retrieval of wind vectors from satellite scatterometer observations is a non-linear inverse problem. A common approach to solving inverse problems is to adopt a Bayesian framework and to infer the posterior distribution of the parameters of interest given the observations by using a likelihood model relating the observations to the parameters, and a prior distribution over the parameters. We show how Gaussian process priors can be used efficiently with a variety of likelihood models, using local forward (observation) models and direct inverse models for the scatterometer. We present an enhanced Markov chain Monte Carlo method to sample from the resulting multimodal posterior distribution. We go on to show how the computational complexity of the inference can be controlled by using a sparse, sequential Bayes algorithm for estimation with Gaussian processes. This helps to overcome the most serious barrier to the use of probabilistic, Gaussian process methods in remote sensing inverse problems, which is the prohibitively large size of the data sets. We contrast the sampling results with the approximations that are found by using the sparse, sequential Bayes algorithm.
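A minimal sketch of the prior ingredient only: drawing sample paths from a Gaussian process with a squared-exponential covariance. The full method's likelihood models, MCMC sampler, and sparse sequential Bayes approximation are beyond this illustration.

```python
import numpy as np

def rbf_cov(s, t, length=0.5, var=1.0):
    """Squared-exponential covariance, a common Gaussian process prior choice."""
    return var * np.exp(-0.5 * (s[:, None] - t[None, :]) ** 2 / length ** 2)

# Draw prior samples of a 1-D field (illustrative stand-in for a wind component)
x = np.linspace(0, 1, 200)
K = rbf_cov(x, x) + 1e-8 * np.eye(len(x))      # jitter for numerical stability
rng = np.random.default_rng(8)
samples = rng.multivariate_normal(np.zeros(len(x)), K, size=3)
print(samples.shape)                           # 3 prior sample paths
```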

16.
In this paper, we propose an estimation method for when sample data are incomplete. We decompose the likelihood according to missing patterns and combine the estimators based on each likelihood, weighting by the Fisher information ratio. This approach provides a simple way of estimating parameters, especially for non-monotone missing data. Numerical examples are presented to illustrate the method.
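A toy sketch of the weighting idea, assuming a scalar parameter so that the Fisher information reduces to an inverse variance; the two pattern-specific estimates and sample sizes are hypothetical.

```python
import numpy as np

def combine_by_information(estimates, informations):
    """Combine per-pattern estimates with weights proportional to their
    Fisher information (inverse variance, for a scalar parameter)."""
    estimates = np.asarray(estimates, dtype=float)
    w = np.asarray(informations, dtype=float)
    return np.sum(w * estimates) / np.sum(w)

# Hypothetical: mean estimates from two missing-data patterns with n1 = 40 and
# n2 = 10 observations and unit variance, so the information is simply n_k.
print(combine_by_information([1.9, 2.4], [40, 10]))
```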

17.
In this article, maximum likelihood estimates are derived for an exchangeable multinomial distribution in which a parametric form models the parameters as functions of covariates. The non-linearity of the exchangeable multinomial distribution and the parametric model make direct application of the Newton–Raphson and Fisher scoring algorithms computationally infeasible. Instead, parameter estimates are obtained as solutions to an iterative weighted least-squares algorithm. A completely monotonic parametric form is proposed for defining the marginal probabilities that results in a valid probability model.

18.
Methods for interval estimation and hypothesis testing about the ratio of two independent inverse Gaussian (IG) means, based on the generalized variable approach, are proposed. As assessed by simulation, the coverage probabilities of the proposed approach are found to be very close to the nominal level even for small samples. The proposed approaches are conceptually simple and easy to use. Similar procedures are developed for constructing confidence intervals and hypothesis testing about the difference between two independent IG means. Monte Carlo comparison studies show that the results based on the generalized variable approach are as good as those based on the modified likelihood ratio test. The methods are illustrated using two examples.

19.
Several mathematical programming approaches to the classification problem in discriminant analysis have recently been introduced. This paper empirically compares these newly introduced classification techniques with Fisher's linear discriminant analysis (FLDA), quadratic discriminant analysis (QDA), logit analysis, and several rank-based procedures for a variety of symmetric and skewed distributions. The percentage of observations correctly classified by each procedure in a holdout sample indicates that, while the linear programming approaches compete well with the classical procedures under some experimental conditions, their overall performance lags behind that of the classical procedures.
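A hedged sketch of the comparison design (holdout classification rates) using scikit-learn's FLDA, QDA, and logit implementations on synthetic skewed data; the mathematical programming classifiers studied in the paper are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
# Hypothetical two-class data with one skewed feature, mimicking the study's
# interest in skewed distributions
n = 400
X = np.column_stack([rng.lognormal(0, 1, n), rng.normal(0, 1, n)])
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 1.5).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [('FLDA', LinearDiscriminantAnalysis()),
                  ('QDA', QuadraticDiscriminantAnalysis()),
                  ('logit', LogisticRegression(max_iter=1000))]:
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te)   # holdout classification rate
    print(f'{name}: {acc:.3f}')
```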

20.
The uniformly minimum variance unbiased estimator (UMVUE) of the variance of the inverse Gaussian distribution is shown to be inadmissible in terms of mean squared error, and a dominating estimator is given. A dominating estimator for the maximum likelihood estimator (MLE) of the variance, as well as estimators dominating the MLEs and UMVUEs of other parameters, are also given.
