Similar Articles
20 similar articles found.
1.
Approximate Bayesian computation (ABC) methods permit approximate inference for intractable likelihoods when it is possible to simulate from the model. However, they perform poorly for high-dimensional data and in practice must usually be used in conjunction with dimension reduction methods, resulting in a loss of accuracy which is hard to quantify or control. We propose a new ABC method for high-dimensional data based on rare event methods which we refer to as RE-ABC. This uses a latent variable representation of the model. For a given parameter value, we estimate the probability of the rare event that the latent variables correspond to data roughly consistent with the observations. This is performed using sequential Monte Carlo and slice sampling to systematically search the space of latent variables. In contrast, standard ABC can be viewed as using a more naive Monte Carlo estimate. We use our rare event probability estimator as a likelihood estimate within the pseudo-marginal Metropolis–Hastings algorithm for parameter inference. We provide asymptotics showing that RE-ABC has a lower computational cost for high-dimensional data than standard ABC methods. We also illustrate our approach empirically, on a Gaussian distribution and an application in infectious disease modelling.
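To make the pseudo-marginal idea above concrete, here is a minimal Python sketch of a Metropolis–Hastings sampler driven by a noisy but unbiased likelihood estimate. The toy one-dimensional Gaussian model, the kernel estimator in `lik_hat`, and all names are illustrative assumptions, not the paper's implementation; RE-ABC would replace the naive Monte Carlo average with the rare-event SMC/slice-sampling estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
y_obs = 1.0  # a single observed datum from a toy 1-D Gaussian model

def lik_hat(theta, m=50):
    # Unbiased Monte Carlo estimate of an ABC-style likelihood: average a
    # Gaussian kernel over m pseudo-data sets y ~ N(theta, 1). RE-ABC would
    # replace this naive average with a rare-event (SMC + slice sampling)
    # estimator of the same quantity.
    y_sim = theta + rng.normal(size=m)
    return np.mean(np.exp(-0.5 * (y_sim - y_obs) ** 2)) / np.sqrt(2 * np.pi)

def pseudo_marginal_mh(n_iter=2000, step=0.5):
    theta, L = 0.0, lik_hat(0.0)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        L_prop = lik_hat(prop)
        # Accept using the ratio of likelihood *estimates* (flat prior);
        # the noise in the unbiased estimate does not change the target.
        if rng.uniform() < L_prop / L:
            theta, L = prop, L_prop
        chain.append(theta)
    return np.array(chain)

chain = pseudo_marginal_mh()
posterior_mean = chain[500:].mean()
```

The key property, stated in the abstract, is that plugging any non-negative unbiased likelihood estimate into this acceptance ratio leaves the exact posterior invariant.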

2.
Mixed effects models or random effects models are popular for the analysis of longitudinal data. In practice, longitudinal data are often complex: there may be outliers in both the response and the covariates, and there may be measurement errors. The likelihood method is a common approach for these problems, but it can be computationally very intensive and sometimes may even be computationally infeasible. In this article, we consider approximate robust methods for nonlinear mixed effects models to simultaneously address outliers and measurement errors. The approximate methods are computationally very efficient. We show the consistency and asymptotic normality of the approximate estimates. The methods can also be extended to missing data problems. An example is used to illustrate the methods and a simulation is conducted to evaluate them.

3.
Both approximate Bayesian computation (ABC) and composite likelihood methods are useful for Bayesian and frequentist inference, respectively, when the likelihood function is intractable. We propose to use composite likelihood score functions as summary statistics in ABC in order to obtain accurate approximations to the posterior distribution. This is motivated by the use of the score function of the full likelihood, and extended to general unbiased estimating functions in complex models. Moreover, we show that if the composite score is suitably standardised, the resulting ABC procedure is invariant to reparameterisations and automatically adjusts the curvature of the composite likelihood, and of the corresponding posterior distribution. The method is illustrated through examples with simulated data, and an application to modelling of spatial extreme rainfall data is discussed.

4.
Considerable progress has been made in applying Markov chain Monte Carlo (MCMC) methods to the analysis of epidemic data. However, this likelihood-based method can be inefficient due to the limited data available concerning an epidemic outbreak. This paper considers an alternative approach to studying epidemic data using Approximate Bayesian Computation (ABC) methodology. ABC is a simulation-based technique for obtaining an approximate sample from the posterior distribution of the parameters of the model and, in an epidemic context, is very easy to implement. A new approach to ABC is introduced which generates a set of values from the (approximate) posterior distribution of the parameters during each simulation, rather than a single value. This is based upon coupling simulations with different sets of parameters, and we call the resulting algorithm coupled ABC. The new methodology is used to analyse final size data for epidemics amongst communities partitioned into households. It is shown that for the epidemic data sets coupled ABC is more efficient than ABC and MCMC-ABC.
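The baseline against which coupled ABC is compared, plain rejection ABC on final-size data, can be sketched as follows. The binomial "final size" simulator, the tolerance, and all parameter values are placeholder assumptions standing in for a real household epidemic model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for final-size data: observed number infected out of n
n, x_obs = 100, 30

def simulate(p):
    # Crude placeholder model: each of n individuals is ultimately infected
    # independently with probability p (a real application would run a
    # household epidemic simulator here).
    return rng.binomial(n, p)

def abc_rejection(n_draws=20000, tol=2):
    # Classic rejection ABC: draw from the prior, simulate, and keep the
    # parameter if the simulated final size is within tol of the data.
    accepted = []
    for _ in range(n_draws):
        p = rng.uniform()            # prior: p ~ U(0, 1)
        if abs(simulate(p) - x_obs) <= tol:
            accepted.append(p)
    return np.array(accepted)

post = abc_rejection()
```

Coupled ABC, as described above, would instead reuse each simulation across several parameter sets so that one run can yield multiple approximate posterior draws.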

5.
We analyze the computational efficiency of approximate Bayesian computation (ABC), which approximates a likelihood function by drawing pseudo-samples from the associated model. For the rejection sampling version of ABC, it is known that multiple pseudo-samples cannot substantially increase (and can substantially decrease) the efficiency of the algorithm as compared to employing a high-variance estimate based on a single pseudo-sample. We show that this conclusion also holds for a Markov chain Monte Carlo version of ABC, implying that it is unnecessary to tune the number of pseudo-samples used in ABC-MCMC. This conclusion is in contrast to particle MCMC methods, for which increasing the number of particles can provide large gains in computational efficiency.
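The ABC-MCMC algorithm being analyzed above can be sketched as below; the toy Gaussian model, uniform prior, and tolerance are illustrative assumptions. With `m` pseudo-samples per iteration, the ABC likelihood is estimated by the fraction of hits; the cited result says that `m = 1` is already near-optimal once total simulation cost is accounted for.

```python
import numpy as np

rng = np.random.default_rng(2)
y_obs, tol = 0.5, 0.3

def abc_mcmc(m, n_iter=4000, step=1.0):
    # ABC-MCMC for the mean theta of a N(theta, 1) model, U(-5, 5) prior.
    # The ABC likelihood P(|y - y_obs| <= tol) is estimated by the hit
    # fraction among m pseudo-samples drawn at each proposal.
    def lik_hat(theta):
        return np.mean(np.abs(theta + rng.normal(size=m) - y_obs) <= tol)

    theta = y_obs
    L = 0.0
    while L == 0.0:          # find a start with a positive estimate
        L = lik_hat(theta)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        L_prop = lik_hat(prop) if abs(prop) <= 5 else 0.0
        if L_prop > 0 and rng.uniform() < L_prop / L:
            theta, L = prop, L_prop
        chain.append(theta)
    return np.array(chain)

chain_m1 = abc_mcmc(m=1)    # a single pseudo-sample per iteration
```

Rerunning with larger `m` (e.g. `abc_mcmc(m=32)`) smooths the acceptance ratio but costs 32 simulations per step, which is the trade-off the paper quantifies.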

6.
Random effects models play a critical role in modelling longitudinal data. However, there have been few studies of the kernel-based maximum likelihood method for semiparametric random effects models. In this paper, based on kernel and likelihood methods, we propose a pooled global maximum likelihood method for partial linear random effects models. The pooled global maximum likelihood method employs local approximations of the nonparametric function at a group of grid points simultaneously, instead of at a single point. Gaussian quadrature is used to approximate the integral of the likelihood with respect to the random effects. The asymptotic properties of the proposed estimators are rigorously studied. Simulation studies are conducted to demonstrate the performance of the proposed approach. We also apply the proposed method to analyse correlated medical costs in the Medical Expenditure Panel Survey data set.
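The Gaussian quadrature step, integrating a likelihood contribution over a normal random effect, can be sketched with Gauss–Hermite nodes and weights; the check against a closed form below is an illustrative assumption, not the paper's model.

```python
import numpy as np

# Gauss-Hermite nodes/weights approximate integrals against exp(-x^2)
nodes, weights = np.polynomial.hermite.hermgauss(20)

def gauss_hermite_expectation(f, sigma):
    # E[f(b)] for a random effect b ~ N(0, sigma^2), via the change of
    # variables b = sqrt(2) * sigma * x that absorbs the Gaussian density.
    return np.sum(weights * f(np.sqrt(2) * sigma * nodes)) / np.sqrt(np.pi)

# Sanity check against the closed form E[exp(b)] = exp(sigma^2 / 2)
sigma = 0.7
approx = gauss_hermite_expectation(np.exp, sigma)
exact = np.exp(sigma**2 / 2)
```

In a mixed model, `f` would be the product of a subject's conditional likelihood contributions given the random effect `b`.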

7.
Progressive Type-II hybrid censoring is a mixture of progressive Type-II and hybrid censoring schemes. In this paper, we discuss statistical inference on the Weibull parameters when the observed data are progressively Type-II hybrid censored. We derive the maximum likelihood estimators (MLEs) and the approximate maximum likelihood estimators (AMLEs) of the Weibull parameters. We then use the asymptotic distributions of the MLEs to construct approximate confidence intervals. Bayes estimates and the corresponding highest posterior density credible intervals of the unknown parameters are obtained under suitable priors and by using the Gibbs sampling procedure. Monte Carlo simulations are then performed to compare the confidence intervals based on all these different methods. Finally, one data set is analyzed for illustrative purposes.

8.
In this paper, we consider the maximum likelihood and Bayes estimation of the scale parameter of the half-logistic distribution based on a multiply Type II censored sample. However, the maximum likelihood estimator (MLE) and Bayes estimator do not exist in an explicit form for the scale parameter. We consider a simple method of deriving an explicit estimator by approximating the likelihood function, and discuss the asymptotic variances of the MLE and approximate MLE. Also, an approximation based on the Laplace approximation (Tierney & Kadane, 1986) is used to obtain the Bayes estimator. Monte Carlo simulation is used to compare the MLE, approximate MLE and Bayes estimates of the scale parameter.

9.
In this paper, we discuss the problem of estimating the mean and standard deviation of a logistic population based on multiply Type-II censored samples. First, we discuss the best linear unbiased estimation and the maximum likelihood estimation methods. Next, by appropriately approximating the likelihood equations we derive approximate maximum likelihood estimators for the two parameters and show that these estimators are quite useful as they do not need the construction of any special tables (as required for the best linear unbiased estimators) and are explicit estimators (unlike the maximum likelihood estimators which need to be determined by numerical methods). We show that these estimators are also quite efficient, and derive the asymptotic variances and covariance of the estimators. Finally, we present an example to illustrate the methods of estimation discussed in this paper.

10.
The approximate Bayesian computation (ABC) algorithm is used to estimate parameters of complicated phenomena for which the likelihood is intractable. Here, we report the development of an algorithm to choose the tolerance level for ABC. We illustrate the performance of the proposed method by simulating the estimation of scaled mutation and recombination rates. The results show that the proposed algorithm performs well.
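One common data-driven way to set the ABC tolerance, sketched below, is to accept the smallest quantile of the simulated distances; this is a generic baseline for illustration and is not necessarily the algorithm proposed in the paper. The toy model and the 1% level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
y_obs = 0.0

# Draw (parameter, distance) pairs from the prior predictive of a toy
# model: y ~ N(theta, 1) with prior theta ~ U(-3, 3)
thetas = rng.uniform(-3, 3, size=50000)
dists = np.abs(thetas + rng.normal(size=thetas.size) - y_obs)

# Data-driven tolerance: the 1% quantile of the simulated distances,
# so that roughly 1% of the prior draws are accepted
eps = np.quantile(dists, 0.01)
post = thetas[dists <= eps]
```

Shrinking the quantile trades Monte Carlo error (fewer accepted draws) against ABC bias (a looser match to the data), which is exactly the tension a tolerance-selection algorithm must resolve.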

11.
The POT (Peaks-Over-Threshold) approach consists of using the generalized Pareto distribution (GPD) to approximate the distribution of excesses over thresholds. In this article, we establish the asymptotic normality of the well-known extreme quantile estimators based on this POT method, under very general assumptions. As an illustration, from this result, we deduce the asymptotic normality of the POT extreme quantile estimators in the case where the maximum likelihood (ML) or the generalized probability-weighted moments (GPWM) methods are used. Simulations are provided in order to compare the efficiency of these estimators based on ML or GPWM methods with classical ones proposed in the literature.
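The POT pipeline, threshold, excesses, GPD fit by ML, then the extreme quantile formula, can be sketched as below. The exponential toy data, the 95% threshold rule, and the use of `scipy.stats.genpareto` are illustrative assumptions; the paper's asymptotic results concern the estimators themselves, not this code.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(4)
x = rng.standard_exponential(20000)   # toy data with known quantiles

u = np.quantile(x, 0.95)              # threshold: 95% empirical quantile
exc = x[x > u] - u                    # excesses over the threshold

# Fit the GPD to the excesses by ML (location fixed at 0, as in POT)
xi, _, sigma = genpareto.fit(exc, floc=0)

# Extreme quantile at level p via the POT formula:
# q_p = u + (sigma/xi) * ( ((1-p)/zeta)^(-xi) - 1 ),
# where zeta is the empirical exceedance probability
p = 0.999
zeta = exc.size / x.size
q_hat = u + (sigma / xi) * (((1 - p) / zeta) ** (-xi) - 1)
q_true = -np.log(1 - p)               # exact 99.9% quantile of Exp(1)
```

For exponential data the true shape is xi = 0, so the fitted xi should be near zero and the estimated quantile close to the closed-form value.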

12.
In this paper, we consider empirical likelihood inference for the partial functional linear model with missing responses. Two empirical log-likelihood ratios for the parameters of interest are constructed, and the corresponding maximum empirical likelihood estimators of the parameters are derived. Under some regularity conditions, we show that the two proposed empirical log-likelihood ratios are asymptotically standard chi-squared. Thus, the asymptotic results can be used to construct confidence intervals/regions for the parameters of interest. We also establish the asymptotic distribution theory of the corresponding maximum empirical likelihood estimators. A simulation study indicates that the two proposed methods perform comparably in terms of coverage probabilities and average lengths of confidence intervals. A real-data example is also used to illustrate the proposed methods.

13.
Based on a progressively Type II censored sample, the maximum likelihood and Bayes estimators of the scale parameter of the half-logistic distribution are derived. However, since the maximum likelihood estimator (MLE) and Bayes estimator do not exist in an explicit form for the scale parameter, we consider a simple method of deriving an explicit estimator by approximating the likelihood function, and derive the asymptotic variances of the MLE and approximate MLE. Also, an approximation based on the Laplace approximation (Tierney and Kadane in J Am Stat Assoc 81:82–86, 1986) and importance sampling methods are used to obtain the Bayes estimator. Monte Carlo simulation is used to compare the performance of the MLE, approximate MLE and Bayes estimates of the scale parameter.
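The Tierney–Kadane Laplace approximation of a posterior mean, the device cited above, can be sketched on a toy conjugate model where the answer has a closed form. The exponential-data/gamma-prior setup and all numerical choices are illustrative assumptions, not the half-logistic model of the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
x = rng.exponential(scale=1 / 2.0, size=50)   # toy data: Exp(rate = 2)
a, b = 2.0, 1.0                               # Gamma(a, b) prior on the rate

def log_post(t):
    # Unnormalised log posterior for the rate t > 0
    return (a - 1 + x.size) * np.log(t) - (b + x.sum()) * t

def laplace_mean():
    # Tierney-Kadane: E[theta | x] ~ (s*/s) * exp(h*(m*) - h(m)), where
    # h = log_post, h* = log_post + log theta, m the mode of each, and
    # s the curvature-based standard deviation at the mode.
    def fit(h):
        res = minimize_scalar(lambda t: -h(t), bounds=(1e-6, 50),
                              method="bounded")
        m = res.x
        eps = 1e-4
        hess = (h(m + eps) - 2 * h(m) + h(m - eps)) / eps**2
        return m, np.sqrt(-1 / hess), h(m)
    m0, s0, h0 = fit(log_post)
    m1, s1, h1 = fit(lambda t: log_post(t) + np.log(t))
    return (s1 / s0) * np.exp(h1 - h0)

approx = laplace_mean()
exact = (a + x.size) / (b + x.sum())          # closed-form Gamma posterior mean
```

The error of this approximation is of order n^-2, which is why it competes well with sampling-based Bayes estimates in the simulation studies these papers report.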

14.
In this paper, we consider the problem of estimating the location and scale parameters of an extreme value distribution based on multiply Type-II censored samples. We first describe the best linear unbiased estimators and the maximum likelihood estimators of these parameters. After observing that the best linear unbiased estimators need the construction of some tables for their coefficients, and that the maximum likelihood estimators do not exist in an explicit algebraic form and hence must be found by numerical methods, we develop approximate maximum likelihood estimators by appropriately approximating the likelihood equations. In addition to being simple explicit estimators, these estimators turn out to be nearly as efficient as the best linear unbiased estimators and the maximum likelihood estimators. Next, we derive the asymptotic variances and covariance of these estimators in terms of the first two single moments and the product moments of order statistics from the standard extreme value distribution. Finally, we present an example in order to illustrate all the methods of estimation of parameters discussed in this paper.

15.
In a clinical trial, the responses to the new treatment may vary among patient subsets with different characteristics in a biomarker. It is often necessary to examine whether there is a cutpoint for the biomarker that divides the patients into two subsets of those with more favourable and less favourable responses. More generally, we approach this problem as a test of homogeneity in the effects of a set of covariates in generalized linear regression models. The unknown cutpoint results in a model with nonidentifiability and a nonsmooth likelihood function to which the ordinary likelihood methods do not apply. We first use a smooth continuous function to approximate the indicator function defining the patient subsets. We then propose a penalized likelihood ratio test to overcome the model irregularities. Under the null hypothesis, we prove that the asymptotic distribution of the proposed test statistic is a mixture of chi-squared distributions. Our method is based on established asymptotic theory, is simple to use, and works in a general framework that includes logistic, Poisson, and linear regression models. In extensive simulation studies, we find that the proposed test works well in terms of size and power. We further demonstrate the use of the proposed method by applying it to clinical trial data from the Digitalis Investigation Group (DIG) on heart failure.
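The smoothing step above, replacing the nonsmooth indicator 1{x > c} with a smooth continuous surrogate, can be illustrated with a logistic ramp. The logistic form and the bandwidth `h` are one possible choice assumed here; the paper's specific smooth function may differ.

```python
import numpy as np

def smooth_indicator(x, c, h=0.1):
    # Smooth surrogate for the indicator 1{x > c}: a logistic ramp of
    # width h. The cutpoint c enters the likelihood only through the
    # indicator; the smooth version restores differentiability so that
    # likelihood-based tests can be applied.
    return 1.0 / (1.0 + np.exp(-(x - c) / h))

x = np.linspace(-1, 1, 5)
approx = smooth_indicator(x, 0.0, h=0.01)   # nearly a step away from x = c
hard = (x > 0.0).astype(float)
```

As h shrinks, the ramp converges pointwise to the indicator (except at x = c, where it equals 1/2), so h controls the trade-off between smoothness and fidelity to the original model.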

16.
For curved exponential families we consider modified likelihood ratio statistics of the form r_L = r + log(u/r)/r, where r is the signed root of the likelihood ratio statistic. We are testing a one-dimensional hypothesis, but in order to specify approximate ancillary statistics we consider the test as one in a series of tests. By requiring asymptotic independence and asymptotic normality of the test statistics in a large deviation region, a particular choice of the statistic u suggests itself. The derivation of this result is quite simple, involving only a standard saddlepoint approximation followed by a transformation. We give explicit formulas for the statistic u, and include a discussion of the case where some coordinates of the underlying variable are lattice.

17.
In this paper, maximum likelihood and Bayes estimators of the parameters, reliability and hazard functions are obtained for the two-parameter bathtub-shaped lifetime distribution when the sample arises from a progressive Type-II censoring scheme. The Markov chain Monte Carlo (MCMC) method is used to compute the Bayes estimates of the model parameters. It is assumed that the parameters have independent gamma priors. A Gibbs-within-Metropolis–Hastings algorithm is applied to generate MCMC samples from the posterior density function. Based on the generated samples, the Bayes estimates and highest posterior density credible intervals of the unknown parameters, as well as of the reliability and hazard functions, are computed. The Bayes estimators are obtained under both the balanced squared-error loss and the balanced linear-exponential (BLINEX) loss. Moreover, based on the asymptotic normality of the maximum likelihood estimators, approximate confidence intervals (CIs) are obtained. Constructing the asymptotic CIs of the reliability and hazard functions requires their variances, which are approximated by the delta method and the bootstrap. Two real data sets are analyzed to demonstrate how the proposed methods can be used in practice.

18.
Parametric bootstrap tests and asymptotic or approximate tests for detecting a difference between two Poisson means are compared. The test statistics used are the Wald statistics with and without log-transformation, the Cox F statistic and the likelihood ratio statistic. It is found that the type I error rate of an asymptotic/approximate test may deviate too much from the nominal significance level α in some situations. We recommend the parametric bootstrap tests, under which the four test statistics are similarly powerful and their type I error rates are all close to α. We apply the tests to breast cancer data and injurious motor vehicle crash data.
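A parametric bootstrap test of this kind can be sketched as below for one of the statistics mentioned, the log-transformed Wald statistic. The continuity correction (adding 0.5 to avoid log 0), the pooled null estimate, and the example counts are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def wald_log_stat(x1, n1, x2, n2):
    # Wald statistic with log transformation for H0: lambda1 = lambda2,
    # based on counts x_i over exposures n_i (0.5 added to avoid log 0)
    r1, r2 = (x1 + 0.5) / n1, (x2 + 0.5) / n2
    se = np.sqrt(1 / (x1 + 0.5) + 1 / (x2 + 0.5))
    return (np.log(r1) - np.log(r2)) / se

def parametric_bootstrap_p(x1, n1, x2, n2, B=5000):
    # Parametric bootstrap: resample both samples from Poisson laws with
    # the pooled rate estimated under H0, and compare |T| to its null law.
    t_obs = abs(wald_log_stat(x1, n1, x2, n2))
    lam0 = (x1 + x2) / (n1 + n2)
    b1 = rng.poisson(lam0 * n1, size=B)
    b2 = rng.poisson(lam0 * n2, size=B)
    t_boot = np.abs(wald_log_stat(b1, n1, b2, n2))
    return np.mean(t_boot >= t_obs)

p = parametric_bootstrap_p(x1=30, n1=100, x2=50, n2=100)
```

Because the null distribution is simulated rather than taken from the normal approximation, the resulting type I error rate tracks the nominal α even for small counts, which is the paper's recommendation.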

19.
We discuss the maximum likelihood estimates (MLEs) of the parameters of the log-gamma distribution based on progressively Type-II censored samples. We use the profile likelihood approach to tackle the problem of the estimation of the shape parameter κ. We derive approximate maximum likelihood estimators of the parameters μ and σ and use them as initial values in the determination of the MLEs through the Newton–Raphson method. Next, we discuss the EM algorithm and propose a modified EM algorithm for the determination of the MLEs. A simulation study is conducted to evaluate the bias and mean square error of these estimators and examine their behavior as the progressive censoring scheme and the shape parameter vary. We also discuss the interval estimation of the parameters μ and σ and show that the intervals based on the asymptotic normality of the MLEs have very poor coverage probabilities for small values of m. Finally, we present two examples to illustrate all the methods of inference discussed in this paper.

20.
The ensemble Kalman filter is an ABC algorithm
The ensemble Kalman filter is the method of choice for many difficult high-dimensional filtering problems in meteorology, oceanography, hydrology and other fields. In this note we show that a common variant of the ensemble Kalman filter is an approximate Bayesian computation (ABC) algorithm. This is of interest for a number of reasons. First, the ensemble Kalman filter is an example of an ABC algorithm that predates the development of ABC algorithms. Second, the ensemble Kalman filter is used for very high-dimensional problems, whereas ABC methods are normally applied only in very low-dimensional problems. Third, recent state-of-the-art extensions of the ensemble Kalman filter can also be understood within the ABC framework.
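The variant in question, the perturbed-observations analysis step, can be sketched as below. The toy one-dimensional state and all numbers are illustrative assumptions; the ABC reading is that the Gaussian observation density acts as an ABC kernel comparing simulated and observed data.

```python
import numpy as np

rng = np.random.default_rng(6)

def enkf_update(ensemble, y_obs, H, R):
    # One ensemble Kalman filter analysis step (perturbed-observations
    # variant): shift each member toward a perturbed observation using
    # the sample Kalman gain computed from the ensemble covariance.
    n, d = ensemble.shape                      # n members, d-dim state
    X = ensemble - ensemble.mean(axis=0)
    P = X.T @ X / (n - 1)                      # sample state covariance
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # sample Kalman gain
    y_pert = y_obs + rng.multivariate_normal(np.zeros(len(R)), R, size=n)
    innov = y_pert - ensemble @ H.T
    return ensemble + innov @ K.T

# Toy example: 1-D state observed directly with unit observation noise
prior = rng.normal(0.0, 2.0, size=(500, 1))   # prior ensemble: N(0, 4)
post = enkf_update(prior, y_obs=np.array([3.0]), H=np.eye(1), R=np.eye(1))
```

In this Gaussian toy case the exact posterior is N(2.4, 0.8), so the updated ensemble mean should sit near 2.4; in the high-dimensional applications above, the same update is applied with the sample covariance standing in for the intractable one.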


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司), 京ICP备09084417号