Similar Literature (20 matching records)
1.
Synthetic likelihood is an attractive approach to likelihood-free inference when an approximately Gaussian summary statistic for the data, informative for inference about the parameters, is available. The synthetic likelihood method derives an approximate likelihood function from a plug-in normal density estimate for the summary statistic, with the plug-in mean and covariance matrix obtained by Monte Carlo simulation from the model. In this article, we develop alternatives to Markov chain Monte Carlo implementations of Bayesian synthetic likelihood with reduced computational overheads. Our approach uses stochastic gradient variational inference methods for posterior approximation in the synthetic likelihood context, employing unbiased estimates of the log likelihood. We compare the new method with a related likelihood-free variational inference technique in the literature, while at the same time improving the implementation of that approach in a number of ways. These new algorithms are feasible to implement in situations that are challenging for conventional approximate Bayesian computation methods in terms of the dimensionality of the parameter and the summary statistic.
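A minimal Python sketch of the plug-in Gaussian synthetic log-likelihood described above. It assumes a user-supplied model simulator `simulate(theta, rng)` that returns one summary-statistic vector; that function name and the number of simulations are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

def synthetic_loglik(theta, s_obs, simulate, n_sim=200, rng=None):
    """Plug-in Gaussian synthetic log-likelihood of the observed summary s_obs.

    `simulate(theta, rng)` is a hypothetical user-supplied model simulator that
    returns one summary-statistic vector; it is not part of any specific library.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Monte Carlo estimates of the summary-statistic mean and covariance at theta.
    sims = np.array([simulate(theta, rng) for _ in range(n_sim)])
    mu_hat = sims.mean(axis=0)
    sigma_hat = np.cov(sims, rowvar=False)
    # Evaluate the plug-in normal density at the observed summary statistic.
    return multivariate_normal.logpdf(s_obs, mean=mu_hat, cov=sigma_hat,
                                      allow_singular=True)
```

In a Bayesian implementation this estimate is recomputed at every parameter value visited by the MCMC or stochastic-gradient variational scheme, which is what makes the reduced-overhead variational approach attractive.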

2.
A semiparametric logistic regression model is proposed in which the nonparametric component is approximated by fixed-knot cubic B-splines. To assess the linearity of the nonparametric component, we construct a penalized likelihood ratio test statistic. When the number of knots is fixed, the null distribution of the test statistic is shown to be asymptotically the distribution of a linear combination of independent chi-squared random variables, each with one degree of freedom. The smoothing parameter is chosen by setting the asymptotic null expectation of the test statistic equal to a specified value. Monte Carlo experiments are conducted to investigate the performance of the proposed test, and its practical use is illustrated with a real-life example.

3.
We present a test for detecting 'multivariate structure' in data sets. This procedure consists of transforming the data to remove the correlations, then discretizing the data and, finally, studying the cell counts in the resulting contingency table. A formal test can be performed using the usual chi-squared test statistic. We give the limiting distribution of the chi-squared statistic and also present simulation results to examine the accuracy of this limiting distribution in finite samples. Several examples show that our procedure can detect a variety of different types of structure. Our examples include data with clustering, digitized speech data, and residuals from a fitted time series model. The chi-squared statistic can also be used as a test for multivariate normality.
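A rough Python sketch of the three-step procedure just described: decorrelate the data, discretize each variable into groups of roughly equal size, and compute the usual chi-squared statistic from the resulting cell counts. The helper name, the whitening via a Cholesky factor, and the default of two groups per variable are illustrative; because the cells are chosen in a data-dependent way, the statistic does not have the standard chi-squared null distribution, as the abstract notes.

```python
import numpy as np

def structure_chi2(x, n_groups=2):
    """Chi-squared statistic for 'multivariate structure' (illustrative sketch)."""
    x = np.asarray(x, dtype=float)
    n, p = x.shape
    # 1. Transform to remove correlations (whiten with the inverse Cholesky factor).
    centered = x - x.mean(axis=0)
    L = np.linalg.cholesky(np.cov(centered, rowvar=False))
    z = np.linalg.solve(L, centered.T).T
    # 2. Discretize each variable into groups of (roughly) equal size.
    codes = np.empty((n, p), dtype=int)
    for j in range(p):
        qs = np.quantile(z[:, j], np.linspace(0, 1, n_groups + 1)[1:-1])
        codes[:, j] = np.searchsorted(qs, z[:, j])
    # 3. Cell counts of the p-way contingency table, then the usual chi-squared
    #    statistic against the uniform expected count n / n_groups**p.
    cells = np.ravel_multi_index(tuple(codes.T), dims=(n_groups,) * p)
    counts = np.bincount(cells, minlength=n_groups ** p)
    expected = n / n_groups ** p
    return ((counts - expected) ** 2 / expected).sum()
```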

4.
Bayesian synthetic likelihood (BSL) is now a well-established method for performing approximate Bayesian parameter estimation for simulation-based models that do not possess a tractable likelihood function. BSL approximates the intractable likelihood function of a carefully chosen summary statistic at a parameter value with a multivariate normal distribution, whose mean and covariance matrix are estimated from independent simulations of the model. Due to the parametric assumption implicit in BSL, it can be preferred to its nonparametric competitor, approximate Bayesian computation, in certain applications where a high-dimensional summary statistic is of interest. However, despite several successful applications of BSL, its widespread use in scientific fields may be hindered by the strong normality assumption. In this paper, we develop a semi-parametric approach that relaxes this assumption while maintaining the computational advantages of BSL without any additional tuning. We test our new method, semiBSL, on several challenging examples involving simulated and real data and demonstrate that semiBSL can be significantly more robust than BSL and another approach in the literature.

5.
We define a chi-squared statistic for p-dimensional data as follows. First, we transform the data to remove the correlations between the p variables. Then, we discretize each variable into groups of equal size and compute the cell counts in the resulting p-way contingency table. Our statistic is just the usual chi-squared statistic for testing independence in a contingency table. Because the cells have been chosen in a data-dependent manner, this statistic does not have the usual limiting distribution. We derive the limiting joint distribution of the cell counts and the limiting distribution of the chi-squared statistic when the data are sampled from a multivariate normal distribution. The chi-squared statistic is useful in detecting hidden structure in raw data or residuals. It can also be used as a test for multivariate normality.

6.
The nonparametric component in a partially linear model is estimated by a linear combination of fixed-knot cubic B-splines with a second-order difference penalty on the adjacent B-spline coefficients. The resulting penalized least-squares estimator is used to construct two Wald-type spline-based test statistics for the null hypothesis of the linearity of the nonparametric function. When the number of knots is fixed, the first test statistic asymptotically has the distribution of a linear combination of independent chi-squared random variables, each with one degree of freedom, under the null hypothesis. The smoothing parameter is determined by specifying a value for the asymptotically expected value of the test statistic under the null hypothesis. When the number of knots is fixed and under the null hypothesis, the second test statistic asymptotically has a chi-squared distribution with K=q+2 degrees of freedom, where q is the number of knots used for estimation. The power performances of the two proposed tests are investigated via simulation experiments, and the practicality of the proposed methodology is illustrated using a real-life data set.
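A minimal Python sketch of the penalized least-squares building block used above: a fixed-knot cubic B-spline basis with a second-order difference penalty on adjacent coefficients, giving the closed-form estimator beta_hat = (B'B + lam D'D)^{-1} B'y. The knot placement, the smoothing parameter, and the use of `scipy.interpolate.BSpline.design_matrix` are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_fit(x, y, n_interior_knots=10, lam=1.0, degree=3):
    """Penalized least-squares fixed-knot cubic B-spline fit with a
    second-order difference penalty on adjacent coefficients (a P-spline sketch)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    # Fixed knots: equally spaced interior knots plus repeated boundary knots.
    xl, xr = x.min(), x.max()
    interior = np.linspace(xl, xr, n_interior_knots + 2)[1:-1]
    t = np.r_[[xl] * (degree + 1), interior, [xr] * (degree + 1)]
    order = np.argsort(x)
    B = BSpline.design_matrix(x[order], t, degree).toarray()   # n x K basis matrix
    D = np.diff(np.eye(B.shape[1]), n=2, axis=0)               # second-order difference matrix
    # Closed form: beta_hat = (B'B + lam * D'D)^{-1} B'y (y taken in the same sorted order).
    beta = np.linalg.solve(B.T @ B + lam * (D.T @ D), B.T @ y[order])
    fitted = np.empty_like(y)
    fitted[order] = B @ beta                                   # fitted values in original order
    return t, beta, fitted
```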

7.

Item response models are essential tools for analyzing results from many educational and psychological tests. Such models are used to quantify the probability of a correct response as a function of unobserved examinee ability and other parameters explaining the difficulty and the discriminatory power of the questions in the test. Some of these models also incorporate a threshold parameter for the probability of a correct response to account for the effect of guessing the correct answer in multiple-choice tests. In this article, we consider fitting such models using the Gibbs sampler. A data augmentation method to analyze a normal-ogive model incorporating a threshold guessing parameter is introduced and compared with a Metropolis-Hastings sampling method. The proposed method is an order of magnitude more efficient than the existing method. Another objective of this paper is to develop Bayesian model choice techniques for model discrimination. A predictive approach based on a variant of the Bayes factor is used and compared with another decision-theoretic method which minimizes an expected loss function on the predictive space. A classical model choice technique based on a modified likelihood ratio test statistic is shown to be one component of the second criterion. As a consequence, the Bayesian methods proposed in this paper are contrasted with the classical approach based on the likelihood ratio test. Several examples are given to illustrate the methods.
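A small Python sketch of the response probability in a normal-ogive item response model with a guessing (threshold) parameter, of the kind described above, followed by an illustrative simulation of responses. The parameter values and variable names are illustrative, not taken from the article.

```python
import numpy as np
from scipy.stats import norm

def prob_correct(theta, a, b, c):
    """Normal-ogive item response probability with a guessing parameter c.

    theta: examinee ability; a: item discrimination; b: item difficulty;
    c: lower asymptote (probability of guessing the correct answer).
    """
    return c + (1.0 - c) * norm.cdf(a * (theta - b))

# Simulate binary responses for one item (illustrative values, not from the paper).
rng = np.random.default_rng(0)
theta = rng.normal(size=500)            # examinee abilities
a, b, c = 1.2, 0.3, 0.2                 # discrimination, difficulty, guessing
y = rng.random(500) < prob_correct(theta, a, b, c)
```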

8.
This paper proposes a hysteretic autoregressive model with a GARCH specification and a skew Student's t-error distribution for financial time series. With an integrated hysteresis zone, this model allows regime switching in both the conditional mean and the conditional volatility to be delayed when the hysteresis variable lies in the hysteresis zone. We perform Bayesian estimation via an adaptive Markov chain Monte Carlo sampling scheme. The proposed Bayesian method allows simultaneous inference for all unknown parameters, including threshold values and a delay parameter. To implement model selection, we propose a numerical approximation of the marginal likelihoods to compute posterior odds. The proposed methodology is illustrated using simulation studies and two major Asian stock basis series. We conduct a model comparison for variant hysteresis and threshold GARCH models based on the posterior odds ratios, finding strong evidence of the hysteretic effect and some asymmetric heavy-tailedness. Compared with multi-regime threshold GARCH models, this new collection of models is more suitable for describing real data sets. Finally, we employ Bayesian forecasting methods in a Value-at-Risk study of the return series.
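A simplified Python sketch of the delayed (hysteretic) switching rule: the regime changes only when the hysteresis variable leaves the hysteresis zone, and inside the zone the previous regime persists. The mean and GARCH volatility equations, the skew Student's t errors, and the Bayesian estimation are all omitted; the thresholds and delay are illustrative.

```python
import numpy as np

def hysteretic_regimes(z, r_low, r_high, delay=1, init_regime=0):
    """Regime path driven by a hysteresis variable z with zone [r_low, r_high].

    Regime 0 (lower) is entered when z[t - delay] <= r_low, regime 1 (upper)
    when z[t - delay] >= r_high; inside the zone the previous regime persists.
    """
    regimes = np.empty(len(z), dtype=int)
    current = init_regime
    for t in range(len(z)):
        if t - delay >= 0:
            zd = z[t - delay]
            if zd <= r_low:
                current = 0
            elif zd >= r_high:
                current = 1
            # else: zd lies in the hysteresis zone, so switching is delayed.
        regimes[t] = current
    return regimes
```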

9.
In this article, we employ the variational Bayesian method to study parameter estimation for a linear regression model in which some regressors follow Gaussian distributions with nonzero prior means. We obtain an analytical expression for the posterior parameter distribution and then propose an iterative algorithm for the model. Simulations are carried out to test the performance of the proposed algorithm, and the results confirm both its effectiveness and its reliability.
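The article works with a specific model in which some regressors themselves carry Gaussian priors. As a generic illustration of the kind of iterative variational updates involved, here is the textbook mean-field (coordinate-ascent) variational Bayes scheme for ordinary Bayesian linear regression with a Gamma prior on the weight precision; this is a sketch of the standard setting, not the authors' model, and the hyperparameter defaults are illustrative.

```python
import numpy as np

def vb_linear_regression(X, y, beta=1.0, a0=1e-2, b0=1e-2, n_iter=50):
    """Coordinate-ascent VB for y = X w + noise with known noise precision beta,
    prior w ~ N(0, alpha^{-1} I) and alpha ~ Gamma(a0, b0).
    Returns the variational posterior q(w) = N(m_n, s_n) and q(alpha) = Gamma(a_n, b_n)."""
    n, m = X.shape
    e_alpha = a0 / b0                       # initial E[alpha]
    a_n = a0 + m / 2.0                      # shape update does not change across iterations
    xtx, xty = X.T @ X, X.T @ y
    for _ in range(n_iter):
        # Update q(w) given the current E[alpha].
        s_n = np.linalg.inv(e_alpha * np.eye(m) + beta * xtx)
        m_n = beta * s_n @ xty
        # Update q(alpha) given the current q(w).
        b_n = b0 + 0.5 * (m_n @ m_n + np.trace(s_n))
        e_alpha = a_n / b_n
    return m_n, s_n, a_n, b_n
```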

10.
We present a Bayesian analysis framework for matrix-variate normal data with dependency structures induced by rows and columns. This framework of matrix normal models includes prior specifications, posterior computation using Markov chain Monte Carlo methods, evaluation of prediction uncertainty, model structure search, and extensions to multidimensional arrays. Compared with Bayesian probabilistic matrix factorization, which places a Gaussian prior on a single row of the data matrix, our proposed model, Bayesian hierarchical kernelized probabilistic matrix factorization, imposes Gaussian process priors over multiple rows of the matrix. Hence, the learned model explicitly captures the underlying correlation among the rows and the columns. In addition, our method requires no specific assumptions, such as independence of the latent factors for rows and columns, and so offers more flexibility for modeling real data than existing approaches. Finally, the proposed framework can be adapted to a wide range of applications, including multivariate analysis, time series, and spatial modeling. Experiments highlight the superiority of the proposed model in handling model uncertainty and model optimization.
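A brief Python sketch of the basic building block behind row-and-column dependency: drawing from a matrix-variate normal with row covariance U and column covariance V via their Cholesky factors. This is a generic sketch of the distribution, not the paper's full hierarchical kernelized model.

```python
import numpy as np

def sample_matrix_normal(M, U, V, rng=None):
    """Draw X ~ MN(M, U, V): mean M (n x p), row covariance U (n x n), column covariance V (p x p)."""
    rng = np.random.default_rng() if rng is None else rng
    A = np.linalg.cholesky(U)            # U = A A'
    B = np.linalg.cholesky(V)            # V = B B'
    Z = rng.standard_normal(M.shape)     # iid N(0, 1) entries
    return M + A @ Z @ B.T               # rows correlated via U, columns via V
```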

11.
Rukhin's family of goodness-of-fit statistics has an asymptotic chi-squared distribution under the null hypothesis; however, for small samples the chi-squared approximation in some cases does not agree well with the exact distribution. In this paper we consider this approximation and three others in order to obtain appropriate test levels in comparison with the exact level. Moreover, exact power comparisons for several values of the parameter under specified alternatives show that the classical Pearson statistic, obtained as a particular case of the Rukhin statistic, can be improved upon by choosing other statistics from the family. An explanation is proposed in terms of the effects of individual cell frequencies on the Rukhin statistic. This work was supported in part by DGCYT grants No. PR156/97-7159 and PB96-0635.

12.
We consider a Bayesian approach to the study of independence in a two-way contingency table which has been obtained from a two-stage cluster sampling design. If a procedure based on single-stage simple random sampling (rather than the appropriate cluster sampling) is used to test for independence, the p-value may be too small, resulting in a conclusion that the null hypothesis is false when it is, in fact, true. For many large complex surveys the Rao–Scott corrections to the standard chi-squared (or likelihood ratio) statistic provide appropriate inference. For smaller surveys, though, the Rao–Scott corrections may not be accurate, partly because the chi-squared test is inaccurate. In this paper, we use a hierarchical Bayesian model to convert the observed cluster samples to simple random samples. This provides surrogate samples which can be used to derive the distribution of the Bayes factor. We demonstrate the utility of our procedure using an example and also provide a simulation study which establishes our methodology as a viable alternative to the Rao–Scott approximations for relatively small two-stage cluster samples. We also show the additional insight gained by displaying the distribution of the Bayes factor rather than simply relying on a summary of the distribution.
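As a point of reference for the discussion above, here is a rough Python sketch of the standard Pearson chi-squared test of independence for a two-way table together with a first-order Rao-Scott-style correction, in which the statistic is divided by an estimated design effect. The design effect must be estimated from the cluster design itself, so it appears here only as a user-supplied placeholder; the paper's Bayesian surrogate-sample procedure is not reproduced.

```python
import numpy as np
from scipy.stats import chi2

def independence_test(table, design_effect=1.0):
    """Pearson chi-squared test of independence for a two-way table, with an
    optional first-order Rao-Scott-style correction X^2 / design_effect.
    `design_effect` must be estimated from the cluster design; 1.0 means no correction."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    x2 = ((table - expected) ** 2 / expected).sum()
    x2_corrected = x2 / design_effect
    df = (table.shape[0] - 1) * (table.shape[1] - 1)
    return x2_corrected, chi2.sf(x2_corrected, df)
```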

13.
An empirical likelihood-based inferential procedure is developed for a class of general additive-multiplicative hazard models. The proposed log-empirical likelihood ratio test statistic for the parameter vector is shown to have a chi-squared limiting distribution. The result can be used to make inference about the entire parameter vector as well as any linear combination of it. The asymptotic power of the proposed test statistic under contiguous alternatives is discussed. The method is illustrated by extensive simulation studies and a real example.

14.
李小胜  王申令 《统计研究》2016,33(11):85-92
This paper first constructs the sample likelihood function of the multivariate linear regression model under linear constraints and uses the Lagrange method to justify it. Second, the effect of the linear constraints on the model parameters is discussed from the perspective of the likelihood function, and Bayesian and empirical Bayesian improvements are made to the parameter estimates obtained from classical theory. For the Bayesian improvement, the matrix normal-Wishart distribution is taken as the joint conjugate prior for the model parameters and the precision matrix; combined with the constructed likelihood function, the posterior distribution of the parameters is derived and the Bayesian estimates are computed. For the empirical Bayesian improvement, the sample is divided into groups, the influence of parameter estimates from subsamples on those from the full sample is discussed from the perspective of variance, and the empirical Bayes estimates are computed. Finally, simulations are carried out with random matrices generated in Matlab. The results show that both improved estimators are more accurate than those obtained from classical theory, with smaller error ratios in the fitted results and higher reliability; in large-data settings the proposed computation is also faster.
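For the Bayesian improvement, the abstract describes a conjugate matrix normal-Wishart prior for the coefficient matrix and precision matrix. As a generic illustration, here is a Python sketch of the standard conjugate update for unconstrained multivariate regression in the equivalent matrix-normal-inverse-Wishart parameterization; the linear constraints and the empirical Bayes grouping from the paper are omitted, and the hyperparameter names are illustrative.

```python
import numpy as np

def mniw_posterior(X, Y, B0, Lam0, nu0, S0):
    """Conjugate posterior for multivariate regression Y = X B + E,
    with prior B | Sigma ~ MN(B0, Lam0^{-1}, Sigma) and Sigma ~ InvWishart(nu0, S0).
    Returns the posterior hyperparameters (Bn, Lamn, nun, Sn)."""
    Lamn = Lam0 + X.T @ X                                  # posterior row precision of B
    Bn = np.linalg.solve(Lamn, Lam0 @ B0 + X.T @ Y)        # posterior mean of B
    nun = nu0 + X.shape[0]                                 # updated degrees of freedom
    Sn = S0 + Y.T @ Y + B0.T @ Lam0 @ B0 - Bn.T @ Lamn @ Bn  # updated scale matrix
    return Bn, Lamn, nun, Sn
```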

15.
We propose a multivariate extension of the univariate chi-squared normality test. Using a known result for the distribution of quadratic forms in normal variables, we show that the proposed test statistic has an approximate chi-squared distribution under the null hypothesis of multivariate normality. As in the univariate case, the new test statistic is based on a comparison of observed and expected frequencies for specified events in sample space. In the univariate case, these events are the standard class intervals, but in the multivariate extension we propose they become hyper-ellipsoidal annuli in multivariate sample space. We assess the performance of the new test using Monte Carlo simulation. Keeping the type I error rate fixed, we show that the new test has power that compares favourably with other standard normality tests, though no uniformly most powerful test has been found. We recommend the new test due to its competitive advantages.
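A minimal Python sketch of the idea: under multivariate normality the squared Mahalanobis distances are approximately chi-squared with p degrees of freedom, so equal-probability hyper-ellipsoidal annuli can be formed from chi-squared quantiles and the observed annulus counts compared with their expected values using the usual chi-squared statistic. The number of annuli and the final reference distribution (which is affected by using an estimated mean and covariance) are illustrative simplifications, not the paper's exact construction.

```python
import numpy as np
from scipy.stats import chi2

def mvn_chi2_test(x, n_annuli=10):
    """Chi-squared multivariate normality check based on hyper-ellipsoidal annuli (sketch)."""
    x = np.asarray(x, dtype=float)
    n, p = x.shape
    centered = x - x.mean(axis=0)
    # Squared Mahalanobis distances, approximately chi-squared(p) under normality.
    d2 = np.einsum('ij,jk,ik->i', centered,
                   np.linalg.inv(np.cov(x, rowvar=False)), centered)
    # Equal-probability annuli from chi-squared(p) quantiles.
    edges = chi2.ppf(np.linspace(0, 1, n_annuli + 1)[1:-1], df=p)
    counts = np.bincount(np.searchsorted(edges, d2), minlength=n_annuli)
    expected = n / n_annuli
    stat = ((counts - expected) ** 2 / expected).sum()
    return stat, chi2.sf(stat, df=n_annuli - 1)
```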

16.
This article considers statistical inference for partially linear varying-coefficient models when the responses are missing at random. We propose a profile least-squares estimator for the parametric component based on the complete-case data and show that the resulting estimator is asymptotically normal. To avoid estimating the asymptotic covariance when constructing confidence regions for the parametric component by the normal-approximation method, we define an empirical likelihood-based statistic and show that its limiting distribution is a chi-squared distribution. Confidence regions for the parametric component with asymptotically correct coverage probabilities can then be constructed from this result. To check the validity of linear constraints on the parametric component, we construct a modified generalized likelihood ratio test statistic and demonstrate that it asymptotically follows a chi-squared distribution under the null hypothesis, thereby extending the generalized likelihood ratio technique to the context of missing data. Finally, some simulations are conducted to illustrate the proposed methods.

17.
The negative binomial (NB) distribution is frequently used to model overdispersed Poisson count data. To study the effect of a continuous covariate of interest in an NB model, a flexible procedure is used to model the covariate effect by fixed-knot cubic basis splines (B-splines) with a second-order difference penalty on the adjacent B-spline coefficients to avoid undersmoothing. A penalized likelihood is used to estimate the parameters of the model. A penalized likelihood ratio test statistic is constructed for the null hypothesis of linearity of the continuous covariate effect. When the number of knots is fixed, its limiting null distribution is the distribution of a linear combination of independent chi-squared random variables, each with one degree of freedom. The smoothing parameter value is determined by setting the asymptotic null expectation of the test statistic equal to a specified value. The power performance of the proposed test is studied with simulation experiments.

18.
In this paper, we focus on empirical likelihood (EL) inference for the high-dimensional partially linear model with martingale difference errors. An empirical log-likelihood ratio statistic for the unknown parameter is constructed and is shown to be asymptotically normally distributed under suitable conditions, a result that differs from those derived before. Furthermore, an empirical log-likelihood ratio for a linear combination of the unknown parameter is also proposed, and its asymptotic distribution is chi-squared. Based on these results, confidence regions for both the unknown parameter and a linear combination of the parameter can be obtained. A simulation study is carried out to show that our proposed approach performs better than the normal approximation-based method.

19.
In this paper, we obtain an adjusted version of the likelihood ratio (LR) test for errors-in-variables multivariate linear regression models. The error terms are allowed to follow a multivariate distribution in the class of the elliptical distributions, which has the multivariate normal distribution as a special case. We derive a modified LR statistic that follows a chi-squared distribution with a high degree of accuracy. Our results generalize those in Melo and Ferrari (Advances in Statistical Analysis, 2010, 94, pp. 75–87) by allowing the parameter of interest to be vector-valued in the multivariate errors-in-variables model. We report a simulation study which shows that the proposed test displays superior finite sample behavior relative to the standard LR test.

20.
Two-sample comparison problems are often encountered in practical projects and have been widely studied in the literature. Owing to practical demands, research on this topic under special settings, such as a semiparametric framework, has also attracted great attention. Zhou and Liang (Biometrika 92:271–282, 2005) proposed an empirical likelihood-based semiparametric inference for the comparison of treatment effects in a two-sample problem with censored data. However, their approach is actually a pseudo-empirical likelihood, and the method may not be fully efficient. In this study, we develop a new empirical likelihood-based inference under a more general framework by using the hazard formulation of censored data for two-sample semiparametric hybrid models. We demonstrate that our empirical likelihood statistic converges to a standard chi-squared distribution under the null hypothesis. We further illustrate the use of the proposed test by testing the ROC curve with censored data, among other applications. The numerical performance of the proposed method is also examined.
