Similar Articles
Found 20 similar articles (search time: 156 ms)
1.
Heng Lian, Statistics, 2013, 47(6): 777-785
Improving the efficiency of the importance sampler is at the centre of research on Monte Carlo methods. While the adaptive approach is usually not so straightforward within the Markov chain Monte Carlo framework, its counterpart in importance sampling can be justified and validated easily. We propose an iterative adaptation method, based on stochastic approximation, for learning the proposal distribution of an importance sampler. The stochastic approximation method can recruit general iterative optimization techniques such as the minorization-maximization algorithm. The effectiveness of the approach in optimizing the Kullback-Leibler divergence between the proposal distribution and the target is demonstrated using several examples.
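As a concrete illustration of the adaptive idea (a cross-entropy-style moment-matching sketch, not the authors' algorithm; the Gaussian-mixture target and all constants are invented for illustration), one can iteratively refit a Gaussian proposal to importance-weighted draws, which for a Gaussian family amounts to minimizing the Kullback-Leibler divergence from the target:

```python
import math
import random

def target_logpdf(x):
    # Unnormalized illustrative target: mixture of N(-2, 1) and N(2, 1).
    a = math.exp(-0.5 * (x + 2.0) ** 2)
    b = math.exp(-0.5 * (x - 2.0) ** 2)
    return math.log(0.5 * a + 0.5 * b)

def normal_logpdf(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def adapt_proposal(n_iter=30, n_samples=2000, seed=1):
    """Adapt a N(mu, sigma) proposal by importance-weighted moment matching."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 5.0          # deliberately poor starting proposal
    for _ in range(n_iter):
        xs = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        logw = [target_logpdf(x) - normal_logpdf(x, mu, sigma) for x in xs]
        m = max(logw)
        w = [math.exp(lw - m) for lw in logw]     # stabilized weights
        s = sum(w)
        mu = sum(wi * xi for wi, xi in zip(w, xs)) / s
        var = sum(wi * (xi - mu) ** 2 for wi, xi in zip(w, xs)) / s
        sigma = math.sqrt(var)
    return mu, sigma

mu, sigma = adapt_proposal()
```

The adapted proposal should approach the target's true mean (0) and standard deviation (sqrt(5), since each unit-variance component sits at distance 2 from the overall mean).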

2.

We propose an extension of parametric product partition models (PPMs). We name our proposal nonparametric product partition models because we associate a random measure, instead of a parametric kernel, with each set within a random partition. Our methodology does not impose any specific form on the marginal distribution of the observations, allowing us to detect shifts of behaviour even when dealing with heavy-tailed or skewed distributions. We propose a suitable loss function and find the partition of the data having minimum expected loss. We then apply our nonparametric procedure to multiple change-point analysis and compare it with PPMs and with other methodologies that have recently appeared in the literature. In the context of missing data, we also exploit the product partition structure in order to estimate the distribution function of each missing value, allowing us to detect change points using the loss function mentioned above. Finally, we present applications to financial as well as genetic data.

3.
The paper considers high-frequency sampled multivariate continuous-time autoregressive moving average (MCARMA) models and establishes the weak convergence of the sample autocovariance function to a normal random matrix. Moreover, we obtain the asymptotic behaviour of the cross-covariances between different components of the model. We will see that the limit distribution of the sample autocovariance function has a similar structure in the continuous-time and in the discrete-time model. As a special case, we consider a CARMA (one-dimensional MCARMA) process, for which we prove Bartlett's formula for the sample autocorrelation function. Bartlett's formula has the same form in both models; only the sums in the discrete-time model are replaced by integrals in the continuous-time model. Finally, we present limit results for multivariate MA processes as well, which were previously unknown in this generality in the multivariate setting.
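For reference, the classical discrete-time Bartlett formula the abstract refers to can be written (in the form given in standard time-series texts for linear processes with i.i.d. innovations and finite fourth moments) as:

```latex
\lim_{n\to\infty} n\,\operatorname{Cov}\bigl(\hat\rho(p),\hat\rho(q)\bigr)
  = \sum_{j=1}^{\infty}
    \bigl[\rho(j+p)+\rho(j-p)-2\rho(p)\rho(j)\bigr]
    \bigl[\rho(j+q)+\rho(j-q)-2\rho(q)\rho(j)\bigr],
\qquad p,q\ge 1,
```

where, per the abstract, the continuous-time analogue is obtained by replacing the sum over j with an integral.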

4.
We investigate empirical likelihood for the additive hazards model with current status data. An empirical log-likelihood ratio for a vector or subvector of regression parameters is defined, and its limiting distribution is shown to be a standard chi-squared distribution. The proposed procedure enables us to make empirical likelihood-based inference for the regression parameters. Finite-sample performance of the proposed method is assessed in simulation studies and compared with that of a normal approximation method; the results show that the empirical likelihood method provides more accurate inference than the normal approximation method. A real data example is used for illustration.

5.
Two commonly used approximations for the inverse distribution function of the normal distribution are Schmeiser's and Shore's. Both approximations are based on a power transformation of either the cumulative distribution function (CDF) or a simple function of it. In this note we demonstrate that if these approximations are presented in the form of the classical one-parameter Box-Cox transformation, and the exponent of the transformation is expressed as a simple function of the CDF, then the accuracy of both approximations may be considerably enhanced, without losing much in algebraic simplicity. Since both approximations are special cases of more general four-parameter systems of distributions, the results presented here indicate that the accuracy of the latter, when used to represent non-normal density functions, may also be considerably enhanced.
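To make the setting concrete, here is a sketch comparing a simple power-transformation quantile approximation against Python's exact inverse normal CDF (the constants 0.135 and 0.1975 are those commonly quoted for Schmeiser's approximation; treat them, and the evaluation points, as illustrative rather than as the paper's refined version):

```python
from statistics import NormalDist

def z_approx(u, c=0.1975, lam=0.135):
    """Power-transformation approximation to the standard normal quantile:
    z_u ~ (u**lam - (1 - u)**lam) / c."""
    return (u ** lam - (1.0 - u) ** lam) / c

exact = NormalDist().inv_cdf
# Absolute error at a few interior probabilities.
errs = [abs(z_approx(u) - exact(u)) for u in (0.6, 0.75, 0.9, 0.975, 0.99)]
max_err = max(errs)
```

With these constants the approximation is symmetric about u = 0.5 (it returns exactly 0 there) and accurate to roughly two decimal places over the central range, which is the kind of baseline the note's Box-Cox reformulation aims to improve on.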

6.
This article develops combined exponentially weighted moving average (EWMA) charts for the mean and variance of a normal distribution. A Bayesian approach is used to incorporate parameter uncertainty. We first use a Bayesian predictive distribution to construct the control chart, and we then use a sampling theory approach to evaluate it under various hypothetical specifications for the data generation model. Simulations are used to compare the proposed charts for different values of both the weighting constant for the exponentially weighted moving averages and the size of the calibration sample that is used to estimate the in-statistical-control process parameters. We also examine the separate performance of the EWMA chart for the variance.
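For context, a minimal classical EWMA mean chart with known in-control parameters looks as follows (a frequentist sketch with invented data; the article's Bayesian version replaces these plug-in control limits with limits derived from a predictive distribution):

```python
import math

def ewma_chart(data, mu0, sigma0, lam=0.2, L=3.0):
    """EWMA chart for the mean: z_t = lam*x_t + (1-lam)*z_{t-1}, with the
    exact time-varying standard deviation of z_t used for the limits."""
    z = mu0
    out = []
    for t, x in enumerate(data, start=1):
        z = lam * x + (1.0 - lam) * z
        width = L * sigma0 * math.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        out.append((z, mu0 - width, mu0 + width, abs(z - mu0) > width))
    return out

# Invented data: in-control readings followed by an upward mean shift.
data = [0.1, -0.3, 0.2, 0.0, -0.1, 1.5, 1.8, 1.6, 1.7, 1.9]
res = ewma_chart(data, mu0=0.0, sigma0=1.0)
signals = [t for t, (z, lo, hi, sig) in enumerate(res, start=1) if sig]
```

The smoothed statistic accumulates the shift gradually, so the chart signals a few observations after the shift begins rather than immediately.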

7.
This article is concerned with testing diagonality of a high-dimensional covariance matrix under non-normality, which is more practical than testing sphericity or identity in the high-dimensional setting. The existing testing procedure for diagonality is not robust against either the data dimension or the data distribution, producing tests with distorted type I error rates much larger than the nominal levels. This is mainly due to bias arising from estimating some functions of the high-dimensional covariance matrix under non-normality. Compared to the sphericity and identity hypotheses, the asymptotic theory for the diagonality hypothesis is more involved, and more care is needed in dealing with the bias. We develop a correction that makes the existing test statistic robust against both the data dimension and the data distribution. We show that the proposed test statistic is asymptotically normal without the normality assumption and without specifying an explicit relationship between the dimension p and the sample size n. Simulations show that it has good size and power for a wide range of settings.

8.
A generalized version of the inverted exponential distribution (IED) is considered in this paper. This lifetime distribution is capable of modeling various shapes of failure rates, and hence various shapes of aging criteria. The model can be considered as another useful two-parameter generalization of the IED. Maximum likelihood and Bayes estimates for the two parameters of the generalized inverted exponential distribution (GIED) are obtained on the basis of a progressively type-II censored sample. We also show the existence, uniqueness and finiteness of the maximum likelihood estimates of the parameters of the GIED based on progressively type-II censored data. Bayesian estimates are obtained under the squared error loss function and are evaluated by applying Lindley's approximation method and via the importance sampling technique. The importance sampling technique is used to compute the Bayes estimates and the associated credible intervals. We further consider the Bayes prediction problem based on the observed samples and provide the appropriate predictive intervals. Monte Carlo simulations are performed to compare the performances of the proposed methods, and a data set is analysed for illustrative purposes.

9.
For the first time, we introduce a generalized form of the exponentiated generalized gamma distribution [Cordeiro et al. The exponentiated generalized gamma distribution with application to lifetime data, J. Statist. Comput. Simul. 81 (2011), pp. 827–842.] that is the baseline for the log-exponentiated generalized gamma regression model. The new distribution can accommodate increasing, decreasing, bathtub- and unimodal-shaped hazard functions. A second advantage is that it includes classical distributions reported in the lifetime literature as special cases. We obtain explicit expressions for the moments of the baseline distribution of the new regression model. The proposed model can be applied to censored data since it includes as sub-models several widely known regression models. It therefore can be used more effectively in the analysis of survival data. We obtain maximum likelihood estimates for the model parameters by considering censored data. We show that our extended regression model is very useful by means of two applications to real data.

10.
This article describes a procedure for Bayesian longitudinal paired comparison data analysis to rank stimuli. The proposed model is developed by combining the Bradley–Terry model and a nonlinear model that utilizes an exponential distribution to describe longitudinal changes in scale values. The weighted likelihood bootstrap method (WLB) is used to obtain samples from posterior distributions of parameters. WLB is an effective tool because neither diagnosing parameter convergence nor specifying proposal distributions is required, which decreases both the preparation necessary and the time involved. The proposed model is a simple one with few parameters, so WLB can be effectively accommodated. An actual example using sports data from sumo wrestling is presented to verify the efficacy of the proposed method.
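The static Bradley-Terry building block can be sketched with the standard MM (minorization-maximization) fitting algorithm; the article's longitudinal layer and WLB posterior sampling are omitted, and the win matrix below is invented:

```python
def bradley_terry_mm(wins, n_iter=200):
    """Fit Bradley-Terry strengths by the standard MM update
    p_i <- W_i / sum_{j != i} n_ij / (p_i + p_j),
    where W_i = total wins of i and n_ij = games between i and j.
    wins[i][j] = number of times item i beat item j."""
    k = len(wins)
    p = [1.0] * k
    for _ in range(n_iter):
        new = []
        for i in range(k):
            w_i = sum(wins[i])
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(k) if j != i)
            new.append(w_i / denom if denom > 0 else p[i])
        s = sum(new)
        p = [v * k / s for v in new]    # fix the arbitrary scale
    return p

# Invented round-robin: player 0 dominates, player 2 is weakest.
wins = [[0, 7, 9],
        [3, 0, 6],
        [1, 4, 0]]
strengths = bradley_terry_mm(wins)
ranking = sorted(range(3), key=lambda i: -strengths[i])
```

The fitted strengths induce the ranking implied by the win totals (16, 9 and 5 wins respectively).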

11.
After initiation of treatment, HIV viral load exhibits multiphasic changes, which indicates that the viral decay rate is a time-varying process. Mixed-effects models with different time-varying decay rate functions have been proposed in the literature. However, there are two unresolved critical issues: (i) it is not clear which model is more appropriate for practical use, and (ii) the model random errors are commonly assumed to follow a normal distribution, which may be unrealistic and can obscure important features of within- and among-subject variations. Because asymmetry of HIV viral load data is still noticeable even after transformation, it is important to use a more general distribution family that enables the unrealistic normality assumption to be relaxed. We developed skew-elliptical (SE) Bayesian mixed-effects models by allowing the model random errors to have an SE distribution. We compared the performance among five SE models that have different time-varying decay rate functions. For each model, we also contrasted the performance under different random error assumptions: normal, Student-t, skew-normal, or skew-t distribution. Two AIDS clinical trial datasets were used to illustrate the proposed models and methods. The results indicate that the model with a time-varying viral decay rate that has two exponential components is preferred. Among the four distribution assumptions, the skew-t and skew-normal models provided a better fit to the data than the normal or Student-t models, suggesting that it is important to assume a model with a skewed distribution in order to achieve reasonable results when the data exhibit skewness.

12.
Many disease processes are characterized by two or more successive health states, and it is often of interest and importance to assess state-specific covariate effects. However, with incomplete follow-up data such inference has not been satisfactorily addressed in the literature. We model the logarithm-transformed sojourn time in each state as linearly related to the covariates; however, neither the distributional form of the error term nor the dependence structure of the states needs to be specified. We propose a regression procedure to accommodate incomplete follow-up data. Asymptotic theory is presented, along with some tools for goodness-of-fit diagnostics. Simulation studies show that the proposal is reliable for practical use. We illustrate it by application to a cancer clinical trial.

13.
Among the diverse frameworks that have been proposed for regression analysis of angular data, the projected multivariate linear model provides a particularly appealing and tractable methodology. In this model, the observed directional responses are assumed to correspond to the angles formed by latent bivariate normal random vectors that depend upon covariates through a linear model. This implies an angular normal distribution for the observed angles, and incorporates a regression structure through a familiar and convenient relationship. In this paper we extend this methodology to accommodate clustered data (e.g., longitudinal or repeated measures data) by formulating a marginal version of the model and basing estimation on an EM-like algorithm in which correlation among within-cluster responses is taken into account by incorporating a working correlation matrix into the M step. A sandwich estimator is used for the covariance matrix of the parameter estimates. The methodology is motivated and illustrated using an example involving clustered measurements of microfibril angle on loblolly pine (Pinus taeda L.). Simulation studies are presented that evaluate the finite sample properties of the proposed fitting method. In addition, the relationship between within-cluster correlation of the latent Euclidean vectors and the corresponding correlation structure of the observed angles is explored.

14.
When the shape parameter of the generalized exponential (GE) distribution is a non-integer, the renewal function (RF) is usually not analytically tractable. To overcome this, an approximation method is used in this paper. In the proposed model, the n-fold convolution of the GE cumulative distribution function (CDF) is approximated by n-fold convolutions of gamma and normal CDFs, and the GE RF is obtained by a series approximation model. The method is computationally very simple. Numerical examples show that the approximate models are accurate and robust. When the parameters are unknown, we present the asymptotic confidence interval of the RF; its validity is checked via numerical experiments.

15.
We present a variational estimation method for the mixed logistic regression model. The method is based on a lower bound approximation of the logistic function [Jaakkola, T.S. and Jordan, M.I., 2000, Bayesian parameter estimation via variational methods. Statistics and Computing, 10, 25-37.]. Based on the approximation, an EM algorithm can be derived that considerably simplifies the maximization problem, in that it does not require the numerical evaluation of integrals over the random effects. We assess the performance of the variational method for the mixed logistic regression model in a simulation study and an empirical data example, and compare it to Laplace's method. The results indicate that the variational method is a viable choice for estimating the fixed effects of the mixed logistic regression model, provided that the number of outcomes within each cluster is sufficiently high.
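The lower bound referred to is the Jaakkola-Jordan quadratic bound on the logistic function, which is valid for every x and tight at x = ±ξ; a quick numerical check (evaluation points are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def jj_lower_bound(x, xi):
    """Jaakkola-Jordan bound: sigma(x) >= sigma(xi) *
    exp((x - xi)/2 - lam(xi) * (x**2 - xi**2)),
    with lam(xi) = tanh(xi/2) / (4*xi).  Exponential of a quadratic in x,
    which is what makes the E-step integrals Gaussian and hence tractable."""
    lam = math.tanh(xi / 2.0) / (4.0 * xi)
    return sigmoid(xi) * math.exp((x - xi) / 2.0 - lam * (x ** 2 - xi ** 2))

# Gap sigma(x) - bound(x): nonnegative everywhere, zero at x = xi.
xi = 1.5
checks = [(x, sigmoid(x) - jj_lower_bound(x, xi)) for x in (-3, -1, 0, 1, 1.5, 3)]
```

Because the bound is the exponential of a quadratic, substituting it for the logistic likelihood turns the random-effects integrals into Gaussian ones, which is the simplification the abstract describes.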

16.
Linear mixed models are widely used when multiple correlated measurements are made on each unit of interest. In many applications, the units may form several distinct clusters, and such heterogeneity can be more appropriately modelled by a finite mixture linear mixed model. The classical estimation approach, in which both the random effects and the error parts are assumed to follow normal distribution, is sensitive to outliers, and failure to accommodate outliers may greatly jeopardize the model estimation and inference. We propose a new mixture linear mixed model using multivariate t distribution. For each mixture component, we assume the response and the random effects jointly follow a multivariate t distribution, to conveniently robustify the estimation procedure. An efficient expectation conditional maximization algorithm is developed for conducting maximum likelihood estimation. The degrees of freedom parameters of the t distributions are chosen data adaptively, for achieving flexible trade-off between estimation robustness and efficiency. Simulation studies and an application on analysing lung growth longitudinal data showcase the efficacy of the proposed approach.

17.
The authors show how saddlepoint techniques lead to highly accurate approximations for Bayesian predictive densities and cumulative distribution functions in stochastic model settings where the prior is tractable, but not necessarily the likelihood or the predictand distribution. They consider more specifically models involving predictions associated with waiting times for semi-Markov processes whose distributions are indexed by an unknown parameter θ. Bayesian prediction for such processes when they are not stationary is also addressed: the inverse-Gaussian based saddlepoint approximation of Wood, Booth & Butler (1993) is shown to deal accurately with the nonstationarity, whereas the normal-based Lugannani & Rice (1980) approximation cannot. Their methods are illustrated by predicting various waiting times associated with M/M/q and M/G/1 queues. They also discuss modifications to matrix renewal theory needed for computing the moment generating functions that are used in the saddlepoint methods.

18.
We propose a new stochastic approximation (SA) algorithm for maximum-likelihood estimation (MLE) in the incomplete-data setting. This algorithm is most useful for problems where the EM algorithm is not feasible due to an intractable E-step or M-step. Compared to other algorithms that have been proposed for intractable EM problems, such as the MCEM algorithm of Wei and Tanner (1990), our proposed algorithm appears more generally applicable and efficient. The approach we adopt is inspired by the Robbins-Monro (1951) stochastic approximation procedure, and we show that the proposed algorithm can be used to solve some of the long-standing problems in computing an MLE with incomplete data. We prove that in general O(n) simulation steps are required in computing the MLE with the SA algorithm, whereas O(n log n) simulation steps are required using the MCEM and/or the MCNR algorithm, where n is the sample size of the observations. Examples include computing the MLE in the nonlinear errors-in-variables model and the nonlinear regression model with random effects.
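A minimal sketch of the Robbins-Monro recursion that inspires the algorithm, applied to a toy score equation (not the authors' SA algorithm; the model and step-size constant are illustrative):

```python
import random

def robbins_monro(noisy_h, theta0, n_iter=5000, seed=7):
    """Robbins-Monro recursion theta_{k+1} = theta_k - a_k * H(theta_k, Y_k)
    for solving E[H(theta, Y)] = 0.  Step sizes a_k = 1/k satisfy the
    classical conditions sum a_k = inf, sum a_k^2 < inf."""
    rng = random.Random(seed)
    theta = theta0
    for k in range(1, n_iter + 1):
        theta -= (1.0 / k) * noisy_h(theta, rng)
    return theta

# Toy score: H(theta, Y) = theta - Y with Y ~ N(2, 1), so the root of
# E[H(theta, Y)] = 0 is theta = 2 (the MLE of a normal mean).
root = robbins_monro(lambda th, rng: th - rng.gauss(2.0, 1.0), theta0=0.0)
```

With this particular H and step size, the recursion reduces to the running sample mean of the draws, so it converges to 2 at the usual root-n rate.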

19.
Simulated maximum likelihood estimates an analytically intractable likelihood function by an empirical average based on data simulated from a suitable importance sampling distribution. In order to use simulated maximum likelihood efficiently, the choice of the importance sampling distribution, as well as the mechanism used to generate the simulated data, is crucial. In this paper we develop a new heuristic for an automated, multistage implementation of simulated maximum likelihood which, by adaptively updating the importance sampler, approximates the (locally) optimal importance sampling distribution. The proposed approach also allows for a convenient incorporation of quasi-Monte Carlo methods, which produce simulated data that can significantly increase the accuracy of the likelihood estimate over regular Monte Carlo methods. Several examples provide evidence for the potential efficiency gain of this new method. We apply the method to a computationally challenging geostatistical model of online retailing.
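A toy sketch of a simulated likelihood (the latent-variable model and proposal are invented for illustration, not the paper's heuristic): here the locally optimal proposal is available in closed form, namely the exact conditional of the latent variable given the data, and using it makes the importance weights constant, which is exactly why the choice of importance sampling distribution is crucial:

```python
import math
import random

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def simulated_loglik(theta, y, n_sim=4000, seed=3):
    """Toy latent-variable model: z ~ N(theta, 1), y | z ~ N(z, 1).
    The integral L(theta) = ∫ p(y|z) p(z; theta) dz is replaced by an
    importance sampling average with proposal q(z) = N((theta+y)/2, sqrt(1/2)),
    which here equals the exact conditional of z given y (zero-variance case)."""
    rng = random.Random(seed)
    mu_q, sd_q = (theta + y) / 2.0, math.sqrt(0.5)
    total = 0.0
    for _ in range(n_sim):
        z = rng.gauss(mu_q, sd_q)
        w = normal_pdf(z, theta, 1.0) / normal_pdf(z, mu_q, sd_q)
        total += normal_pdf(y, z, 1.0) * w
    return math.log(total / n_sim)

# Marginally y ~ N(theta, sqrt(2)), so the simulated value should match
# the closed-form log-density (up to floating-point error, since every
# weighted draw equals the marginal density exactly).
theta, y = 1.0, 2.5
exact = math.log(normal_pdf(y, theta, math.sqrt(2.0)))
approx = simulated_loglik(theta, y)
```

With a mismatched proposal the same estimator still converges, but only at the Monte Carlo rate; adaptively moving the proposal toward this optimal one is the idea the paper automates.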

20.
Multilevel models have been widely applied to analyze data sets which present some hierarchical structure. In this paper we propose a generalization of normal multilevel models, named elliptical multilevel models. This proposal suggests the use of distributions in the elliptical class, thus encompassing all symmetric continuous distributions and including the normal distribution as a particular case. Elliptical distributions may have lighter or heavier tails than the normal one, so for normal error models in the presence of outlying observations, heavy-tailed error models may be applied to accommodate such observations. In particular, we discuss some aspects of elliptical multilevel models, such as maximum likelihood estimation and residual analysis, to assess features related to the fitting and the model assumptions. Finally, two motivating examples analyzed under normal multilevel models are reanalyzed under Student-t and power exponential multilevel models. Comparisons with the normal multilevel model are performed by using residual analysis.
