Similar Literature
20 similar records found.
1.
We present a non-parametric affine-invariant test for the multivariate Behrens–Fisher problem. The proposed method, based on spatial medians, is asymptotic and does not require normality of the data. To improve its finite-sample performance, we apply a correction of the type already used in a similar test based on trimmed means; our simulations show, however, that in the case of heavy-tailed distributions our method performs better. In a simulation comparison with a recently published rank-based test, our test also yields satisfactory results.
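As a small illustration of the building block behind such a test, the sketch below computes the spatial (geometric) median of a multivariate sample via the Weiszfeld iteration; the affine-invariant test statistic itself is not reproduced here, and the simulated sample and tolerances are illustrative only.

```python
import numpy as np

def spatial_median(X, tol=1e-8, max_iter=500):
    """Weiszfeld iteration for the spatial (geometric) median of the rows of X."""
    m = X.mean(axis=0)                       # start from the coordinate-wise mean
    for _ in range(max_iter):
        d = np.linalg.norm(X - m, axis=1)
        d = np.maximum(d, 1e-12)             # guard against division by zero
        w = 1.0 / d
        m_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < tol:
            return m_new
        m = m_new
    return m

rng = np.random.default_rng(0)
sample = rng.standard_t(df=3, size=(50, 2))  # heavy-tailed bivariate sample
print(spatial_median(sample))
```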

2.
Non-normality and heteroscedasticity are common in applications. For the comparison of two samples in the non-parametric Behrens–Fisher problem, different tests have been proposed, but no single test can be recommended for all situations. Here, we propose combining two tests, the Welch t test based on ranks and the Brunner–Munzel test, within a maximum test. Simulation studies indicate that this maximum test, performed as a permutation test, controls the type I error rate and stabilizes the power: it has good power characteristics for a variety of distributions and also for unbalanced sample sizes, while retaining acceptable type I error control compared with the single tests.
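A minimal sketch of how such a permutation-based maximum test could be assembled is given below. The abstract does not state the exact combination rule, so the sketch assumes the two single tests are combined through the smaller of their p-values (a min-p construction); the function name, permutation count, and simulated data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import rankdata, ttest_ind, brunnermunzel

def max_test(x, y, n_perm=2000, seed=0):
    """Permutation max-test combining a Welch t test on ranks and the
    Brunner-Munzel test through the smaller of the two p-values (min-p)."""
    rng = np.random.default_rng(seed)
    nx = len(x)

    def min_p(a, b):
        r = rankdata(np.concatenate([a, b]))          # joint ranks of both samples
        p_welch = ttest_ind(r[:len(a)], r[len(a):], equal_var=False).pvalue
        p_bm = brunnermunzel(a, b).pvalue
        return min(p_welch, p_bm)

    observed = min_p(x, y)
    pooled = np.concatenate([x, y])
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        if min_p(perm[:nx], perm[nx:]) <= observed:   # smaller min-p = more extreme
            hits += 1
    return (hits + 1) / (n_perm + 1)                  # permutation p-value

rng = np.random.default_rng(1)
print(max_test(rng.normal(0, 1, 20), rng.normal(0.8, 2, 35)))
```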

3.
The exact distribution of a modified Behrens–Fisher statistic is derived. The distribution function is mostly elementary and is simpler than the exact distribution derived by Nel et al. Its practical use, including computational efficiency and convenience, is discussed.

4.
This article provides three approximate solutions to the multivariate Behrens–Fisher problem: an F statistic, a Bartlett corrected statistic, and a modified Bartlett corrected statistic. Empirical results indicate that the F statistic outperforms the other two approximations as well as five existing procedures. The modified Bartlett corrected statistic is also very competitive.

5.
We revisit the well-known Behrens–Fisher problem and apply a newly developed 'Computational Approach Test' (CAT) to test the equality of two population means, where the populations are assumed to be normal with unknown and possibly unequal variances. An advantage of the CAT is that it does not require explicit knowledge of the sampling distribution of the test statistic. The CAT is then compared with three widely accepted tests, namely the Welch–Satterthwaite test (WST), the Cochran–Cox test (CCT), and the 'Generalized p-value' test (GPT), and with a recently suggested test based on the jackknife procedure, called the Singh–Saxena–Srivastava test (SSST). Further, the model robustness of these five tests is studied when the data actually come from t-distributions but are wrongly perceived as normal. Our detailed study, based on a comprehensive simulation, indicates some interesting results, including the facts that the GPT is quite conservative and that the SSST is not as good as has been claimed in the literature. To the best of our knowledge, the trends observed in our study have not been reported earlier in the existing literature.
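Of the tests compared, only the Welch–Satterthwaite test is available in standard libraries; a minimal sketch of it with SciPy follows. The data are simulated for illustration, and the CAT, CCT, GPT, and SSST are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(loc=0.0, scale=1.0, size=25)   # group 1
y = rng.normal(loc=0.5, scale=2.0, size=40)   # group 2: unequal variance and size

# Welch-Satterthwaite test: two-sample t test without assuming equal variances
t_stat, p_value = stats.ttest_ind(x, y, equal_var=False)
print(f"Welch t = {t_stat:.3f}, p = {p_value:.4f}")
```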

6.
7.
This article presents a Bayesian analysis of the von Mises–Fisher distribution, which is the most important distribution in the analysis of directional data. We obtain samples from the posterior distribution using a sampling-importance-resampling method. The procedure is illustrated using simulated data as well as real data sets previously analyzed in the literature.
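A minimal sketch of sampling-importance-resampling is shown below for the circular (two-dimensional) special case of the von Mises–Fisher family, i.e. the von Mises distribution. The uniform and gamma priors and the simulated data are illustrative assumptions, not the priors used in the article.

```python
import numpy as np
from scipy.special import i0

rng = np.random.default_rng(3)
theta = rng.vonmises(mu=1.0, kappa=4.0, size=100)   # simulated angles (radians)

def loglik(mu, kappa, data):
    """von Mises log-likelihood (circular case of the von Mises-Fisher family)."""
    return np.sum(kappa * np.cos(data - mu) - np.log(2 * np.pi * i0(kappa)))

# Sampling-importance-resampling: draw (mu, kappa) from the prior, weight each
# draw by its likelihood, then resample proportionally to the weights.
n_draws = 20000
mu_prop = rng.uniform(-np.pi, np.pi, n_draws)       # uniform prior on the circle
kappa_prop = rng.gamma(2.0, 2.0, n_draws)           # gamma prior on concentration

log_w = np.array([loglik(m, k, theta) for m, k in zip(mu_prop, kappa_prop)])
w = np.exp(log_w - log_w.max())
w /= w.sum()
keep = rng.choice(n_draws, size=2000, replace=True, p=w)
print(mu_prop[keep].mean(), kappa_prop[keep].mean())  # crude posterior summaries
```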

8.
The purpose of this paper is to develop a Bayesian analysis for right-censored survival data when immune or cured individuals may be present in the population from which the data are taken. In our approach, the number of competing causes of the event of interest follows the Conway–Maxwell–Poisson distribution, which generalizes the Poisson distribution. Markov chain Monte Carlo (MCMC) methods are used to develop a Bayesian procedure for the proposed model. Model selection is also discussed, and an illustration with a real data set is given.
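For reference, a minimal sketch of the Conway–Maxwell–Poisson probability mass function, with its normalizing constant truncated after a finite number of terms, is given below; the cure-rate survival model and the MCMC procedure themselves are not reproduced.

```python
import numpy as np
from math import lgamma, log, exp

def com_poisson_pmf(y, lam, nu, j_max=200):
    """Conway-Maxwell-Poisson pmf lam^y / ((y!)^nu * Z(lam, nu)),
    with the normalizing constant Z truncated after j_max terms."""
    log_terms = np.array([j * log(lam) - nu * lgamma(j + 1) for j in range(j_max)])
    log_z = np.logaddexp.reduce(log_terms)           # log of the truncated Z
    return exp(y * log(lam) - nu * lgamma(y + 1) - log_z)

print(com_poisson_pmf(3, lam=2.0, nu=1.0))   # nu = 1 recovers the Poisson pmf
print(com_poisson_pmf(3, lam=2.0, nu=0.5))   # nu < 1 gives over-dispersion
```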

9.
Gene copy number (GCN) changes are common characteristics of many genetic diseases. Comparative genomic hybridization (CGH) is a technology now widely used to screen for GCN changes in mutant cells at high resolution genome-wide. Statistical methods for analyzing such CGH data have been evolving. Existing methods are either frequentist or fully Bayesian. The former often has a computational advantage, while the latter can incorporate prior information into the model but can be misleading when sound prior information is not available. In an attempt to take full advantage of both approaches, we develop a Bayesian-frequentist hybrid approach, in which a subset of the model parameters is inferred by the Bayesian method and the remaining parameters by the frequentist method. This hybrid approach offers advantages over either the Bayesian or the frequentist method used alone, especially when sound prior information is available on part of the parameters and the sample size is relatively small. Spatial dependence and the false discovery rate are also discussed, and the parameter estimation is efficient. As an illustration, we use the proposed hybrid approach to analyze a real CGH data set.

10.
Fisher succeeded early on in redefining Student's t-distribution in geometrical terms on a central hypersphere. Intriguingly, a noncentral analytical extension of this fundamental Fisher–Student central hypersphere h-distribution does not exist. We therefore set out to derive the noncentral h-distribution and use it to illustrate graphically the limitations of the Neyman–Pearson null hypothesis significance testing framework and the strengths of the Bayesian statistical hypothesis analysis framework on the hypersphere polar axis, a compact nontrivial one-dimensional parameter space. Using a geometrically meaningful maximal-entropy prior, we requalify the apparent failure of an important psychological science reproducibility project. We proceed to show that the Bayes factor appropriately models the two-sample t-test p-value density of a gene expression profile produced by high-throughput genomic-scale microarray technology, and that it provides a simple expression for a local false discovery rate that addresses the multiple hypothesis testing problem brought about by such a technology.

11.
In this paper, we propose a mixture of beta–Dirichlet processes as a nonparametric prior for the cumulative intensity functions of a Markov process. This family of priors is a natural extension of a mixture of Dirichlet processes or a mixture of beta processes, which are devised to strike a compromise between the advantages of parametric and nonparametric approaches: they give most of their prior mass to a small neighborhood of a specific parametric model. We show that a mixture of beta–Dirichlet processes prior is conjugate with Markov processes. Formulas for computing the posterior distribution are derived. Finally, results of analyzing credit history data are given.

12.
This paper is primarily concerned with sampling from the Fisher–Bingham distribution, and we describe a slice sampling algorithm for doing this. A by-product of this task is an infinite mixture representation of the Fisher–Bingham distribution, with mixing distributions based on the Dirichlet distribution. Finite numerical approximations are considered, and a sampling algorithm based on a finite mixture approximation is compared with the slice sampling algorithm.

13.
In this paper we introduce a three-parameter lifetime distribution following the approach of Marshall and Olkin [New method for adding a parameter to a family of distributions with application to the exponential and Weibull families. Biometrika. 1997;84(3):641–652]. The proposed distribution is a compound of the Lomax and Logarithmic distributions (LLD). We provide a comprehensive study of the mathematical properties of the LLD. In particular, the density function, the shape of the hazard rate function, a general expansion for moments, the density of the rth order statistic, and the mean and median deviations of the LLD are derived and studied in detail. The maximum likelihood estimators of the three unknown parameters of the LLD are obtained. Asymptotic confidence intervals for the parameters are also obtained, based on the asymptotic variance–covariance matrix. Finally, a real data set is analysed to show the potential of the new distribution.

14.
When finite population 'totals' are estimated for individual areas, they do not necessarily add up to the known 'total' for all areas. Benchmarking (BM) is a technique used to ensure that the totals for all areas match the grand total, which can be obtained from an independent source. BM is desirable to practitioners of survey sampling. BM shifts the small-area estimators to accommodate the constraint, and in doing so it can provide increased precision for the small-area estimators of the finite population means or totals. The Scott–Smith model is used to benchmark the finite population means of small areas. This is a one-way random effects model for a superpopulation, and it is computationally convenient to use a Bayesian approach. We illustrate our method by estimating body mass index using data from the third National Health and Nutrition Examination Survey. Several properties of the benchmarked small-area estimators are obtained using a simulation study.
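The Bayesian Scott–Smith benchmarking itself is not reproduced here, but the following sketch of simple ratio benchmarking illustrates the constraint being imposed: the small-area totals are rescaled so that they add up to an independently known grand total. The numbers are invented for illustration.

```python
import numpy as np

def ratio_benchmark(area_totals, grand_total):
    """Rescale small-area total estimates so they sum to the known grand total."""
    return area_totals * (grand_total / np.sum(area_totals))

direct = np.array([120.0, 340.0, 95.0, 210.0])   # unbenchmarked small-area totals
known_total = 800.0                              # grand total from an independent source
bm = ratio_benchmark(direct, known_total)
print(bm, bm.sum())                              # benchmarked totals now sum to 800
```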

15.
We consider a hypothesis testing problem with directional alternatives. We approach the problem from a Bayesian decision-theoretic point of view and consider a situation in which one side of the alternatives is more important or more probable than the other. We develop a general Bayesian framework by specifying a mixture prior structure and a loss function related to the Kullback–Leibler divergence. This Bayesian decision method is applied to Normal and Poisson populations. Simulations are performed to compare the performance of the proposed method with that of a method based on a classical z-test and a Bayesian method based on the "0–1" loss.

16.
17.
Suppose we want to estimate some smooth function of two types of parameters. The first can be estimated by sample means, while the second is known exactly only up to the number of decimal places recorded, that is, it is subject to roundoff. We obtain the Cornish–Fisher expansions and associated nonparametric confidence intervals for such functions. These results are illustrated by a simulation study.

18.
In this work, a simulation study is conducted to evaluate the performance of Bayesian estimators for the log-linear exponential regression model under different levels of censoring and degrees of collinearity between two covariates. The diffuse normal, independent Student-t, and multivariate Student-t distributions are considered as prior distributions, and the Metropolis algorithm is implemented to draw from the posterior distributions. The results are compared with the maximum likelihood estimators in terms of mean squared error, coverage, and length of the credible and confidence intervals.
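A minimal sketch of a random-walk Metropolis sampler for a right-censored log-linear exponential regression under a diffuse normal prior is shown below; the simulated design, censoring time, and step size are illustrative assumptions and do not reproduce the study's settings.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated right-censored data from a log-linear exponential model:
# T_i ~ Exponential(mean = exp(x_i' beta)), administratively censored at time c.
n, beta_true = 200, np.array([0.5, 1.0, -0.7])
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
T = rng.exponential(np.exp(X @ beta_true))
c = 3.0
time = np.minimum(T, c)
delta = (T <= c).astype(float)              # 1 = event observed, 0 = censored

def log_post(beta, tau=100.0):
    """Censored exponential log-likelihood plus a diffuse normal prior N(0, tau^2 I)."""
    eta = X @ beta                          # log of the mean survival time
    loglik = np.sum(delta * (-eta) - time * np.exp(-eta))
    return loglik - 0.5 * np.sum(beta ** 2) / tau ** 2

# Random-walk Metropolis: propose a Gaussian step, accept with the usual ratio.
beta, lp = np.zeros(3), log_post(np.zeros(3))
draws, step = [], 0.1
for _ in range(5000):
    prop = beta + step * rng.normal(size=3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        beta, lp = prop, lp_prop
    draws.append(beta)

print(np.mean(draws[1000:], axis=0))        # posterior means after burn-in
```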

19.
In this article, we present an analysis of head and neck cancer data using a generalized inverse Lindley stress–strength reliability model. We propose Bayes estimators for P(X > Y), where X and Y represent the survival times of two groups of cancer patients observed under different therapies. X and Y are assumed to be independent generalized inverse Lindley random variables with a common shape parameter. Bayes estimators are obtained under symmetric and asymmetric loss functions, assuming independent gamma priors. Since the posterior is complex and the Bayes estimators do not possess closed-form expressions, Lindley's approximation and Markov chain Monte Carlo techniques are used for the Bayesian computation. An extensive simulation experiment is carried out to compare the performance of the Bayes estimators with that of the maximum likelihood estimators on the basis of simulated risks. Asymptotic, bootstrap, and Bayesian credible intervals are also computed for P(X > Y).
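The Bayes estimators themselves are not reproduced here; the sketch below only illustrates the target quantity, the stress-strength reliability P(X > Y), estimated nonparametrically from two samples. The simulated survival times are illustrative and are not drawn from the generalized inverse Lindley distribution.

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical survival times (months) for two therapy groups; generic positive
# samples are used here rather than the generalized inverse Lindley distribution.
x = rng.weibull(1.5, size=500) * 20      # group X
y = rng.weibull(1.5, size=500) * 15      # group Y

# Nonparametric estimate of the stress-strength reliability P(X > Y):
# the proportion of all (x_i, y_j) pairs with x_i > y_j.
p_hat = np.mean(x[:, None] > y[None, :])
print(f"P(X > Y) estimate: {p_hat:.3f}")
```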

20.
The Lloyd–Moulton price index does not make use of current-period expenditure data and, as is commonly known, it allows us to approximate superlative indices, in particular the Fisher price index. This is an important property for inflation measurement and for calculating the bias of the Consumer Price Index. In this article, we verify the utility of the Lloyd–Moulton price index in approximating the Fisher price index. We propose a simple modification of that index which reduces the variation of the estimator of an unknown parameter in the index formula. We also examine the influence of price volatility on the quality of the estimation of the parameter in the Lloyd–Moulton formula.
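For reference, a minimal sketch of the Fisher and Lloyd–Moulton index formulas is given below (the proposed modification of the Lloyd–Moulton index is not reproduced); the prices, quantities, and the elasticity parameter sigma are illustrative.

```python
import numpy as np

def fisher_index(p0, p1, q0, q1):
    """Fisher price index: geometric mean of the Laspeyres and Paasche indices."""
    laspeyres = np.sum(p1 * q0) / np.sum(p0 * q0)
    paasche = np.sum(p1 * q1) / np.sum(p0 * q1)
    return np.sqrt(laspeyres * paasche)

def lloyd_moulton_index(p0, p1, q0, sigma):
    """Lloyd-Moulton index: needs only base-period expenditure shares and an
    elasticity-of-substitution parameter sigma (sigma != 1)."""
    s0 = p0 * q0 / np.sum(p0 * q0)                 # base-period expenditure shares
    return np.sum(s0 * (p1 / p0) ** (1 - sigma)) ** (1 / (1 - sigma))

p0 = np.array([2.0, 5.0, 1.0]); p1 = np.array([2.2, 4.8, 1.3])     # prices
q0 = np.array([10.0, 4.0, 20.0]); q1 = np.array([9.0, 5.0, 16.0])  # quantities
print(fisher_index(p0, p1, q0, q1))
print(lloyd_moulton_index(p0, p1, q0, sigma=0.7))
```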
