Similar Documents
 20 similar documents found (search time: 15 ms)
1.
ABSTRACT

This article presents a Bayesian analysis of the von Mises–Fisher distribution, which is the most important distribution in the analysis of directional data. We obtain samples from the posterior distribution using a sampling-importance-resampling method. The procedure is illustrated using simulated data as well as real data sets previously analyzed in the literature.
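A minimal sketch of the sampling-importance-resampling idea for this model, under simplifying assumptions not stated in the abstract: the concentration parameter `kappa` is treated as known, the prior on the mean direction is uniform on the circle, and the data are generated by a crude wrapped-normal stand-in.

```python
import math
import random

random.seed(7)

def vm_loglik(mu, data, kappa):
    # log-likelihood of the von Mises model, up to an additive constant
    return kappa * sum(math.cos(x - mu) for x in data)

def sir_posterior(data, kappa, n_prior=20000, n_post=2000):
    # 1. draw candidate mean directions from the uniform prior on [0, 2*pi)
    cand = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n_prior)]
    # 2. importance weights proportional to the likelihood
    logw = [vm_loglik(m, data, kappa) for m in cand]
    mx = max(logw)
    w = [math.exp(l - mx) for l in logw]
    # 3. resample candidates in proportion to their weights
    return random.choices(cand, weights=w, k=n_post)

# simulated directional data concentrated around mu = 1.0
data = [(1.0 + random.gauss(0.0, 0.3)) % (2.0 * math.pi) for _ in range(100)]
post = sir_posterior(data, kappa=2.0)
post_mean = math.atan2(sum(math.sin(t) for t in post),
                       sum(math.cos(t) for t in post))
print(round(post_mean, 2))
```

The resampled draws approximate the posterior of the mean direction; their circular mean should sit close to the circular mean of the data.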

2.
3.
In the Bayesian approach, the Behrens–Fisher problem has been posed as one of estimation for the difference of two means. No Bayesian solution to the Behrens–Fisher testing problem has yet been given, due perhaps to the fact that the conventional priors used are improper. While default Bayesian analysis can be carried out for estimation purposes, it poses difficulties for testing problems. This paper generates sensible intrinsic and fractional prior distributions for the Behrens–Fisher testing problem from the improper priors commonly used for estimation, which allows us to compute the Bayes factor comparing the null and alternative hypotheses. This default model-selection procedure is compared with a frequentist test and the Bayesian information criterion. We find a discrepancy: the frequentist test and the Bayesian information criterion reject the null hypothesis for data for which the Bayes factor under intrinsic or fractional priors does not.
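The frequentist-versus-information-criterion comparison can be illustrated with a toy sketch. This is not the paper's intrinsic/fractional-prior Bayes factor: it only contrasts Welch's t statistic with a Gaussian BIC comparison of a common-mean model against a separate-means model, on made-up numbers.

```python
import math

x = [5.1, 4.8, 5.4, 5.0, 4.9, 5.3, 5.2, 4.7]
y = [5.6, 5.9, 5.4, 6.0, 5.7, 5.5, 5.8, 6.1]

def mean(v): return sum(v) / len(v)
def var(v):
    m = mean(v)
    return sum((u - m) ** 2 for u in v) / (len(v) - 1)

# Welch's t statistic for H0: equal means (unequal variances allowed)
nx, ny = len(x), len(y)
t = (mean(x) - mean(y)) / math.sqrt(var(x) / nx + var(y) / ny)

# Gaussian BIC: common mean vs. separate means (shared variance, MLE fit)
def bic(groups, separate_means):
    n = sum(len(g) for g in groups)
    if separate_means:
        resid = [u - mean(g) for g in groups for u in g]
        k = len(groups) + 1               # one mean per group + variance
    else:
        pooled = [u for g in groups for u in g]
        gm = mean(pooled)
        resid = [u - gm for u in pooled]
        k = 2                             # common mean + variance
    s2 = sum(r * r for r in resid) / n    # MLE of the variance
    loglik = -0.5 * n * (math.log(2 * math.pi * s2) + 1)
    return -2 * loglik + k * math.log(n)

bic0 = bic([x, y], separate_means=False)  # null: equal means
bic1 = bic([x, y], separate_means=True)   # alternative: separate means
print(round(t, 2), round(bic0, 2), round(bic1, 2))
```

Here both criteria favor the alternative; the paper's point is that a properly constructed Bayes factor need not always agree with such default criteria.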

4.
5.
In this paper, we propose a new two-parameter lifetime distribution with increasing failure rate. The new distribution arises in a latent complementary risk scenario. We discuss the properties of the proposed distribution, including a formal proof of its density function and explicit algebraic formulae for its quantiles and its survival and hazard functions. We also discuss Bayesian inference for the model using Markov chain Monte Carlo simulation. A simulation study investigates the frequentist properties of the proposed estimators under non-informative priors. Further, some model-selection criteria are discussed. The developed methodology is illustrated on a real data set.

6.
The exact distribution of a modified Behrens–Fisher statistic is derived. The distribution function is mostly elementary and is simpler than the exact distribution derived by Nel et al. Its practical use (including computational efficiency and convenience) is discussed.

7.
In the presence of covariate information, the proportional hazards model is one of the most popular models. In this paper, in a Bayesian nonparametric framework, we use a Markov (Lévy-driven) process to model the baseline hazard rate. Previous Bayesian nonparametric models have been based on neutral to the right processes, which have a number of drawbacks, such as discreteness of the cumulative hazard function. We allow the covariates to be time dependent functions and develop a full posterior analysis via substitution sampling. A detailed illustration is presented.

8.
The ratio of normal tail probabilities and the ratio of Student’s t tail probabilities have gained increased attention in statistics and related areas, yet they are not well studied in the literature. In this paper, we systematically study the functional behavior of these two ratios, and we explore their differences as well as their relationship. Surprisingly, the two ratios behave very differently from each other. Finally, we conclude the paper by deriving lower and upper bounds for the two ratios.
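The contrast between the two ratios can be seen numerically. A sketch, assuming the ratio in question is P(X > x + 1)/P(X > x): the normal tail ratio decays toward 0 as x grows, while the polynomially decaying t tail keeps its ratio near a constant below 1. The normal tail uses `math.erfc`; the t tail is computed by simple trapezoidal integration of its density.

```python
import math

def normal_tail(x):
    # P(Z > x) for a standard normal, via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def t_tail(x, nu, upper=200.0, steps=200000):
    # P(T > x) for Student's t with nu d.f., by trapezoidal integration
    c = math.gamma((nu + 1) / 2.0) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2.0))
    pdf = lambda u: c * (1.0 + u * u / nu) ** (-(nu + 1) / 2.0)
    h = (upper - x) / steps
    s = 0.5 * (pdf(x) + pdf(upper))
    for i in range(1, steps):
        s += pdf(x + i * h)
    return s * h

def tail_ratio(tail, x, delta=1.0):
    return tail(x + delta) / tail(x)

rn = tail_ratio(normal_tail, 6.0)               # normal: ratio near 0
rt = tail_ratio(lambda u: t_tail(u, 3), 6.0)    # t(3): ratio well above 0
print(rn, rt)
```

At x = 6 the normal ratio is already of order 10^-3, whereas the t(3) ratio stays around (6/7)^3, illustrating the very different tail behaviors the paper studies.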

9.
We would like to thank all the discussants for their stimulating comments. While our article to a large extent reviews current practice of Bayesian analysis of Dynamic Stochastic General Equilibrium (DSGE) models the discussants provide many ideas to improve upon the current practice, thereby outlining a research agenda for the years to come. In our rejoinder we will briefly revisit some of the issues that were raised.

10.
When the null hypothesis of Friedman’s test is rejected, a wide variety of multiple comparisons can be used to determine which treatments differ from each other. We discuss the contexts in which different multiple comparisons should be applied when the population follows discrete distributions commonly used to model count data in biological and ecological fields. Our simulation study shows that the sign test is very conservative, while Fisher’s LSD and Tukey’s HSD tests computed with ranks are the most liberal. The theoretical considerations are illustrated with data on the Azores Buzzard (Buteo buteo rothschildi) population from the Azores, Portugal.
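For reference, the Friedman statistic that precedes any of these multiple comparisons can be computed as below. This is the standard rank-based chi-square statistic on an illustrative count table (no tie correction is applied in this sketch); the data are invented, not from the paper.

```python
def friedman_statistic(blocks):
    # blocks: one row per block/subject, each with k treatment values
    n, k = len(blocks), len(blocks[0])
    rank_sums = [0.0] * k
    for row in blocks:
        # mid-ranks within the block (tied values share the average rank)
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2.0 + 1.0
            for m in range(i, j + 1):
                ranks[order[m]] = avg
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    # Friedman chi-square statistic
    s = sum(r * r for r in rank_sums)
    return 12.0 / (n * k * (k + 1)) * s - 3.0 * n * (k + 1)

# illustrative count data: 5 blocks, 3 treatments
counts = [[2, 5, 7], [1, 4, 6], [3, 3, 8], [0, 2, 5], [2, 6, 9]]
print(friedman_statistic(counts))
```

Under the null, the statistic is approximately chi-square with k - 1 degrees of freedom; only when it is large do the post hoc comparisons discussed in the paper come into play.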

11.
Several models for studies of the tensile strength of materials have been proposed in the literature in which the size or length of the specimen is an important factor in its failure behaviour. An important model, developed from a cumulative damage approach, is the three-parameter extension of the Birnbaum–Saunders fatigue model that incorporates the size of the specimen as an additional variable. This model is a strong competitor of the commonly used Weibull model and outperforms traditional models that do not incorporate the size effect. This paper considers two such cumulative damage models, checks their compatibility with a real dataset, compares them with some recent alternatives, and finally recommends the model that appears most appropriate. Throughout, the study is Bayesian, based on Markov chain Monte Carlo simulation.

12.
ABSTRACT

This paper proposes an adaptive quasi-maximum likelihood estimator (QMLE) for forecasting the volatility of financial data with the generalized autoregressive conditional heteroscedasticity (GARCH) model. When the distribution of the volatility data is unspecified or heavy-tailed, we construct an adaptive QMLE from the data, using the scale parameter ηf to quantify the discrepancy between a wrongly specified innovation density and the true innovation density. Under only a few assumptions, this adaptive approach is consistent and asymptotically normal; moreover, it is more efficient when the innovation error is heavy-tailed. Finally, simulation studies and an application illustrate its advantages.
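The objective behind any GARCH QMLE can be sketched as follows. This is only the plain Gaussian quasi-likelihood for a GARCH(1,1) recursion, not the paper's adaptive, ηf-corrected estimator; parameter values are illustrative.

```python
import math
import random

random.seed(1)

def garch_nll(params, returns):
    # negated Gaussian quasi-log-likelihood for GARCH(1,1):
    #   sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
    omega, alpha, beta = params
    s2 = omega / (1.0 - alpha - beta)     # start at the unconditional variance
    nll = 0.0
    for r in returns:
        nll += 0.5 * (math.log(2.0 * math.pi * s2) + r * r / s2)
        s2 = omega + alpha * r * r + beta * s2
    return nll

# simulate a GARCH(1,1) path with Gaussian innovations
true = (0.1, 0.1, 0.8)
s2, returns = true[0] / (1 - true[1] - true[2]), []
for _ in range(3000):
    r = math.sqrt(s2) * random.gauss(0.0, 1.0)
    returns.append(r)
    s2 = true[0] + true[1] * r * r + true[2] * s2

nll_true = garch_nll(true, returns)
nll_flat = garch_nll((1.0, 0.0, 0.0), returns)   # i.i.d. constant-variance fit
print(nll_true, nll_flat)
```

Minimizing this objective over (omega, alpha, beta) gives the (non-adaptive) QMLE; the true GARCH parameters attain a clearly smaller negative log-likelihood than a constant-variance fit with the same unconditional variance.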

13.
The Kolassa method implemented in the nQuery Advisor software has been widely used to approximate the power of the Wilcoxon–Mann–Whitney (WMW) test for ordered categorical data; it uses an Edgeworth approximation to estimate the power of an unconditional test based on the WMW U statistic. When the sample size is small or the sizes of the two groups are unequal, Kolassa’s method may yield a quite poor approximation to the power of the conditional WMW test commonly implemented in statistical packages. Two modifications of Kolassa’s formula are proposed and assessed by simulation studies.
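As a point of comparison for any such power formula, the power can always be estimated by brute-force simulation. A sketch, not the Kolassa or modified formulas: it runs the tie-corrected normal-approximation WMW test on ordered categorical data drawn from two invented category distributions.

```python
import math
import random

random.seed(42)

def wmw_z(x, y):
    # WMW statistic with mid-ranks and tie-corrected normal approximation
    n1, n2 = len(x), len(y)
    pooled = sorted(x + y)
    ranks, i = {}, 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + j) / 2.0 + 1.0   # shared mid-rank for ties
        i = j + 1
    r1 = sum(ranks[v] for v in x)
    u = r1 - n1 * (n1 + 1) / 2.0
    mean_u = n1 * n2 / 2.0
    n = n1 + n2
    tie = sum(t ** 3 - t for t in (pooled.count(v) for v in set(pooled)))
    var_u = n1 * n2 / 12.0 * ((n + 1) - tie / (n * (n - 1)))
    return (u - mean_u) / math.sqrt(var_u)

def mc_power(p_x, p_y, n1, n2, reps=2000, z_crit=1.96):
    cats = list(range(len(p_x)))
    hits = 0
    for _ in range(reps):
        x = random.choices(cats, weights=p_x, k=n1)
        y = random.choices(cats, weights=p_y, k=n2)
        if abs(wmw_z(x, y)) >= z_crit:
            hits += 1
    return hits / reps

# ordered categories 0..3; group y is shifted toward higher categories
power = mc_power([0.4, 0.3, 0.2, 0.1], [0.1, 0.2, 0.3, 0.4], n1=30, n2=20)
print(power)
```

With unequal group sizes (here 30 vs. 20), such Monte Carlo estimates are exactly the benchmark against which approximate power formulas like Kolassa's are judged.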

14.
The integer-valued autoregressive (INAR) model has been widely used in diverse fields. Since identifying the underlying distribution of a time-series model is a crucial step for further inference, we consider the goodness-of-fit test of the Poisson assumption for first-order INAR models. As the test, we employ Fisher’s dispersion test due to its simplicity and derive its null limiting distribution. As illustrations, a simulation study and real data analyses are conducted for the counts of coal mining disasters, monthly crime data from New South Wales, and the annual numbers of worldwide earthquakes.
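A sketch of the ingredients: an INAR(1) simulator built from binomial thinning and Poisson innovations, plus the classical Fisher dispersion statistic. Note the caveat: the chi-square(n-1) reference distribution assumes i.i.d. Poisson data; the paper's contribution is precisely the null limiting distribution adapted to INAR dependence, which this sketch does not derive.

```python
import random

random.seed(3)

def binomial_thin(x, alpha):
    # alpha-thinning: each of the x units survives independently w.p. alpha
    return sum(1 for _ in range(x) if random.random() < alpha)

def simulate_inar1(n, alpha, lam):
    # INAR(1): X_t = alpha o X_{t-1} + eps_t,  eps_t ~ Poisson(lam)
    def poisson(l):
        # Knuth's method, adequate for small means
        p, k, target = 1.0, 0, pow(2.718281828459045, -l)
        while p > target:
            k += 1
            p *= random.random()
        return k - 1
    x = poisson(lam / (1 - alpha))   # start near the stationary mean
    path = []
    for _ in range(n):
        x = binomial_thin(x, alpha) + poisson(lam)
        path.append(x)
    return path

def dispersion_statistic(xs):
    # Fisher's index-of-dispersion statistic: sum (x - xbar)^2 / xbar,
    # approximately chi-square(n-1) under i.i.d. Poisson sampling
    m = sum(xs) / len(xs)
    return sum((v - m) ** 2 for v in xs) / m

series = simulate_inar1(500, alpha=0.4, lam=1.5)
d = dispersion_statistic(series)
print(len(series), round(d, 1))
```

The stationary marginal here is Poisson with mean lam/(1 - alpha) = 2.5, so the statistic hovers around n - 1 even though the observations are serially dependent.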

15.
The present article deals with the problem of misspecifying the disturbance-covariance matrix as scalar when it is locally non-scalar. We consider a family of shrinkage estimators based on the OLS estimator and compare its asymptotic properties with those of the OLS estimator. We propose a similar family of estimators based on FGLS and compare its asymptotic properties with those of the OLS-based shrinkage estimator under a Pitman drift process. The effect of misspecifying the disturbance-covariance matrix is analyzed with the help of a numerical simulation.

16.
We treat robust M-estimators for independent and identically distributed Poisson data. We introduce modified Tukey M-estimators with bias correction and compare them, by simulation on clean data and data with outliers, to M-estimators based on the Huber function as well as to weighted-likelihood and other estimators. In particular, we investigate the problem of combining robustness with high efficiency at small Poisson means, where the strong asymmetry of the Poisson distribution causes difficulties, and propose a further estimator based on adaptive trimming. The advantages of the constructed estimators are illustrated by an application to smoothing count data with a time-varying mean and level shifts.
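To make the Huber-function baseline concrete, here is a minimal Huber M-estimator of a Poisson mean via iterative reweighting on Pearson residuals. This is a standard textbook recipe, not the paper's bias-corrected Tukey or adaptive-trimming estimators, and it omits the bias correction the paper discusses.

```python
import math

def huber_poisson_mean(xs, c=1.345, iters=50):
    # M-estimator of a Poisson mean: solve sum psi((x - mu)/sqrt(mu)) = 0
    # with the Huber psi, via iteratively reweighted means
    mu = sorted(xs)[len(xs) // 2] or 1.0     # start from the sample median
    for _ in range(iters):
        s = math.sqrt(mu)
        w = []
        for x in xs:
            r = (x - mu) / s                 # Pearson residual
            w.append(1.0 if abs(r) <= c else c / abs(r))
        mu_new = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
        if abs(mu_new - mu) < 1e-10:
            return mu_new
        mu = mu_new
    return mu

clean = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]
dirty = clean + [40]                         # one gross outlier
print(round(huber_poisson_mean(clean), 2),
      round(huber_poisson_mean(dirty), 2),
      round(sum(dirty) / len(dirty), 2))
```

On clean data the estimator agrees with the sample mean; with the outlier, the sample mean roughly doubles while the Huber estimate barely moves, which is the robustness-versus-efficiency trade-off the paper studies at small Poisson means.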

17.
This paper is primarily concerned with sampling from the Fisher–Bingham distribution, and we describe a slice sampling algorithm for doing so. A by-product of this task is an infinite mixture representation of the Fisher–Bingham distribution, with mixing distributions based on the Dirichlet distribution. Finite numerical approximations are considered, and a sampling algorithm based on a finite mixture approximation is compared with the slice sampling algorithm.
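The slice sampling mechanism itself is easy to illustrate on a simpler target. The sketch below is not the paper's Fisher–Bingham sampler on the sphere: it applies a shrinkage slice sampler to the one-dimensional von Mises density, a circular relative that only needs the density up to a constant.

```python
import math
import random

random.seed(11)

def vm_density(theta, mu=0.0, kappa=4.0):
    # unnormalized von Mises density on [0, 2*pi)
    return math.exp(kappa * math.cos(theta - mu))

def slice_sample(density, x0, n, lo=0.0, hi=2.0 * math.pi):
    # shrinkage slice sampler with the whole circle as the initial bracket
    samples, x = [], x0
    for _ in range(n):
        u = random.uniform(0.0, density(x))      # vertical slice level
        left, right = lo, hi
        while True:
            x_new = random.uniform(left, right)  # propose inside the bracket
            if density(x_new) > u:
                x = x_new
                break
            # shrink the bracket toward the current point
            if x_new < x:
                left = x_new
            else:
                right = x_new
        samples.append(x)
    return samples

draws = slice_sample(vm_density, x0=0.1, n=5000)
# the circular mean of the draws should sit near mu = 0
cm = math.atan2(sum(math.sin(t) for t in draws),
                sum(math.cos(t) for t in draws))
print(round(cm, 2))
```

The appeal, as in the paper, is that only pointwise density evaluations are needed; no normalizing constant of the target is ever computed.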

18.
In survival analysis and reliability studies, problems with random sample size arise quite frequently. In cancer studies, for example, the number of clonogens is unknown and the time to relapse is defined by the minimum of the incubation times of the various clonogenic cells. In this article, we propose a new model in which the incubation time follows a Weibull distribution and the random sample size follows a Bessel distribution, giving rise to a Weibull–Bessel distribution. Maximum likelihood estimation of the model parameters is studied, and a score test is developed to compare the model with its special submodel, the exponential–Bessel distribution. To illustrate the model, two real datasets are examined, and the proposed model is shown to fit better than several other existing models in the literature. Extensive simulation studies examine the performance of the estimates.

19.
20.
In this article we present a technique for implementing large-scale optimal portfolio selection. We use high-frequency daily data to capture valuable statistical information in asset returns. We describe several statistical issues involved in quantitative approaches to portfolio selection. Our methodology applies to large-scale portfolio-selection problems in which the number of possible holdings is large relative to the estimation period provided by historical data. We illustrate our approach on an equity database that consists of stocks from the Standard and Poor's index, and we compare our portfolios to this benchmark index. Our methodology differs from the usual quadratic programming approach to portfolio selection in three ways: (1) We employ informative priors on the expected returns and variance-covariance matrices, (2) we use daily data for estimation purposes, with upper and lower holding limits for individual securities, and (3) we use a dynamic asset-allocation approach that is based on reestimating and then rebalancing the portfolio weights on a prespecified time window. The key inputs to the optimization process are the predictive distributions of expected returns and the predictive variance-covariance matrix. We describe the statistical issues involved in modeling these inputs for high-dimensional portfolio problems in which our data frequency is daily. In our application, we find that our optimal portfolio outperforms the underlying benchmark.
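The core optimization step that consumes those predictive inputs can be sketched in closed form. This toy ignores everything that makes the paper's method distinctive (informative priors, holding limits, rebalancing): it just computes unconstrained mean-variance weights proportional to Sigma^{-1} mu for three invented assets, solving the small linear system by hand.

```python
def solve(a, b):
    # Gaussian elimination with partial pivoting for a small linear system
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c]
                              for c in range(r + 1, n))) / m[r][r]
    return x

# illustrative predictive inputs (made up for this sketch)
mu = [0.08, 0.10, 0.06]                   # predictive expected excess returns
sigma = [[0.040, 0.006, 0.004],           # predictive covariance matrix
         [0.006, 0.090, 0.010],
         [0.004, 0.010, 0.030]]

raw = solve(sigma, mu)                    # weights proportional to Sigma^{-1} mu
total = sum(raw)
weights = [w / total for w in raw]        # normalize to a fully invested portfolio
print([round(w, 3) for w in weights])
```

In the paper's setting the predictive mean and covariance would come from a Bayesian model with informative priors, and the weights would additionally satisfy per-security holding limits, which turns this closed-form step into a quadratic program.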
