Similar Articles (20 results)
1.
The choice of the summary statistics in approximate maximum likelihood is often a crucial issue. We develop a criterion for choosing the most effective summary statistic and then focus on the empirical characteristic function. In the iid setting, the approximating posterior distribution converges to the approximate distribution of the parameters conditional upon the empirical characteristic function. Simulation experiments suggest that the method is often preferable to numerical maximum likelihood. In a time-series framework, no optimality result can be proved, but the simulations indicate that the method is effective in small samples.
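To make the role of the empirical characteristic function concrete, here is a minimal simulation-based sketch in the spirit of the abstract, not the authors' algorithm: the normal model, flat priors, evaluation grid, and tolerance are all assumptions, and the real and imaginary parts of the ECF serve as the summary statistic in a simple rejection scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def ecf(x, ts):
    # Empirical characteristic function: phi_hat(t) = (1/n) sum_j exp(i t x_j),
    # returned with real and imaginary parts stacked into one real vector.
    vals = np.exp(1j * np.outer(ts, x)).mean(axis=1)
    return np.concatenate([vals.real, vals.imag])

x_obs = rng.normal(1.5, 0.8, size=200)   # toy observed data (assumed model)
ts = np.linspace(0.1, 2.0, 10)           # ECF evaluation grid (a tuning choice)
s_obs = ecf(x_obs, ts)

kept = []
for _ in range(20000):
    mu, sigma = rng.uniform(-5, 5), rng.uniform(0.1, 3.0)   # flat priors (assumed)
    s_sim = ecf(rng.normal(mu, sigma, size=200), ts)
    if np.linalg.norm(s_sim - s_obs) < 0.35:                # tolerance (assumed)
        kept.append((mu, sigma))

posterior_draws = np.array(kept)   # approximate posterior given the ECF
```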

2.
Bayesian synthetic likelihood (BSL) is now a well-established method for performing approximate Bayesian parameter estimation for simulation-based models that do not possess a tractable likelihood function. BSL approximates an intractable likelihood function of a carefully chosen summary statistic at a parameter value with a multivariate normal distribution. The mean and covariance matrix of this normal distribution are estimated from independent simulations of the model. Due to the parametric assumption implicit in BSL, it can be preferred to its nonparametric competitor, approximate Bayesian computation, in certain applications where a high-dimensional summary statistic is of interest. However, despite several successful applications of BSL, its widespread use in scientific fields may be hindered by the strong normality assumption. In this paper, we develop a semi-parametric approach to relax this assumption to an extent and maintain the computational advantages of BSL without any additional tuning. We test our new method, semiBSL, on several challenging examples involving simulated and real data and demonstrate that semiBSL can be significantly more robust than BSL and another approach in the literature.
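A minimal sketch of the BSL likelihood approximation described above; the toy model and summaries are assumptions, and only the evaluation of the synthetic log-likelihood is shown (in practice it would be embedded in an MCMC sampler).

```python
import numpy as np
from scipy.stats import multivariate_normal

def synthetic_loglik(theta, s_obs, simulate, summarise, m=200, rng=None):
    # BSL approximation: simulate m datasets at theta, fit a multivariate normal
    # to their summary statistics, and score the observed summary under it.
    rng = rng or np.random.default_rng()
    S = np.array([summarise(simulate(theta, rng)) for _ in range(m)])  # m x d
    return multivariate_normal.logpdf(s_obs, mean=S.mean(axis=0),
                                      cov=np.cov(S, rowvar=False))

# Toy ingredients (assumed): data are N(theta, 1), summarised by mean and variance.
simulate = lambda theta, rng: rng.normal(theta, 1.0, size=100)
summarise = lambda y: np.array([y.mean(), y.var()])

s_obs = summarise(np.random.default_rng(1).normal(2.0, 1.0, size=100))
ll = synthetic_loglik(2.0, s_obs, simulate, summarise)  # plug into an MCMC sampler
```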

3.
Compared with ordinary Archimedean copulas, the hierarchical Archimedean copula (HAC) has a more general structure, and it requires fewer parameters to be estimated than elliptical copulas. A two-stage maximum likelihood method is used to estimate the HAC: the marginal distribution of each component is estimated first, and the copula function is then estimated on that basis. In the empirical analysis, Clayton- and Gumbel-type HACs are applied to study the dependence among four stock price series. Before determining the structure of the HAC and estimating its parameters, an ARMA-GARCH process is used to remove autocorrelation and conditional heteroscedasticity from the series. A comparison of Akaike information criterion values indicates that a fully nested Gumbel-type HAC captures this dependence best.
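The second stage can be sketched for the bivariate Gumbel building block of an HAC. This is a minimal illustration, not the paper's full nested-structure selection: pseudo-observations are assumed to be available from the first stage, with ranks of toy residuals standing in for ARMA-GARCH-filtered series transformed by the fitted marginals.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

def gumbel_loglik(theta, u, v):
    # Log-likelihood of the bivariate Gumbel copula, dependence parameter theta >= 1.
    x, y = -np.log(u), -np.log(v)
    A = x**theta + y**theta
    logc = (-A**(1.0 / theta)                        # log C(u, v)
            + (theta - 1.0) * (np.log(x) + np.log(y))
            - np.log(u) - np.log(v)
            + (2.0 / theta - 2.0) * np.log(A)
            + np.log1p((theta - 1.0) * A**(-1.0 / theta)))
    return logc.sum()

# Stage 1 stand-in: pseudo-observations from ranks of toy residuals.
r1, r2 = np.random.default_rng(1).standard_t(5, size=(2, 500))
u, v = rankdata(r1) / 501.0, rankdata(r2) / 501.0

# Stage 2: maximise the copula log-likelihood over theta.
fit = minimize_scalar(lambda t: -gumbel_loglik(t, u, v),
                      bounds=(1.0001, 10.0), method="bounded")
theta_hat = fit.x
```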

4.
Both approximate Bayesian computation (ABC) and composite likelihood methods are useful for Bayesian and frequentist inference, respectively, when the likelihood function is intractable. We propose to use composite likelihood score functions as summary statistics in ABC in order to obtain accurate approximations to the posterior distribution. This is motivated by the use of the score function of the full likelihood, and extended to general unbiased estimating functions in complex models. Moreover, we show that if the composite score is suitably standardised, the resulting ABC procedure is invariant to reparameterisations and automatically adjusts the curvature of the composite likelihood, and of the corresponding posterior distribution. The method is illustrated through examples with simulated data, and an application to modelling of spatial extreme rainfall data is discussed.
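As a minimal illustration of the idea, the sketch below uses the full-likelihood score of a normal model — the motivating special case mentioned above — as the ABC summary statistic. The priors, tolerance, and sample sizes are assumptions; the paper's actual proposal would substitute a suitably standardised composite score.

```python
import numpy as np

rng = np.random.default_rng(2)

def score_normal(y, mu, sig2):
    # Score of the N(mu, sig2) log-likelihood: derivatives w.r.t. mu and sig2.
    return np.array([np.sum(y - mu) / sig2,
                     np.sum((y - mu)**2 - sig2) / (2.0 * sig2**2)])

y_obs = rng.normal(2.0, 1.5, size=100)
mu_hat, sig2_hat = y_obs.mean(), y_obs.var()   # pilot estimate (the MLE)
s_obs = score_normal(y_obs, mu_hat, sig2_hat)  # ~ 0 at the MLE, by construction

kept = []
for _ in range(50000):
    mu, sig2 = rng.uniform(-5, 5), rng.uniform(0.1, 9.0)   # flat priors (assumed)
    y_sim = rng.normal(mu, np.sqrt(sig2), size=100)
    s_sim = score_normal(y_sim, mu_hat, sig2_hat)  # score at the *fixed* pilot value
    if np.linalg.norm(s_sim - s_obs) < 5.0:        # tolerance (a tuning choice)
        kept.append((mu, sig2))

posterior_draws = np.array(kept)
```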

5.
Synthetic likelihood is an attractive approach to likelihood-free inference when an approximately Gaussian summary statistic for the data, informative for inference about the parameters, is available. The synthetic likelihood method derives an approximate likelihood function from a plug-in normal density estimate for the summary statistic, with plug-in mean and covariance matrix obtained by Monte Carlo simulation from the model. In this article, we develop alternatives to Markov chain Monte Carlo implementations of Bayesian synthetic likelihoods with reduced computational overheads. Our approach uses stochastic gradient variational inference methods for posterior approximation in the synthetic likelihood context, employing unbiased estimates of the log likelihood. We compare the new method with a related likelihood-free variational inference technique in the literature, while at the same time improving the implementation of that approach in a number of ways. These new algorithms are feasible to implement in situations which are challenging for conventional approximate Bayesian computation methods, in terms of the dimensionality of the parameter and summary statistic.

6.
The model-based approach to estimation of finite population distribution functions introduced in Chambers & Dunstan (1986) is extended to the case where only summary information is available for the auxiliary size variable. Monte Carlo results indicate that this ‘limited information’ extension is almost as efficient as the ‘full information’ method proposed in the above reference. These results also indicate that the model-based confidence intervals generated by either of these methods have superior coverage properties to more conventional design-based confidence intervals.

7.
Normal tissue complications are a common side effect of radiation therapy. They are the consequence of the dose of radiation received by the normal tissue surrounding the site of the tumour. Within a specified organ each voxel receives a certain dose of radiation, leading to a distribution of doses over the organ. It is often not known what aspect of the dose distribution drives the presence and severity of the complications. A summary measure of the dose distribution can be obtained by integrating a weighting function of dose (w(d)) over the density of dose. For biological reasons the weight function should be monotonic. We propose a generalized monotonic functional mixed model to study the dose effect on a clinical outcome by estimating this weight function non-parametrically using splines and subject to the monotonicity constraint, while allowing for overdispersion and correlation of multiple observations within the same subject. We illustrate our method with data from a head and neck cancer study in which irradiation of the parotid gland results in loss of saliva flow.
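The summary measure itself is straightforward to compute once a weight function is fixed; the following sketch uses a monotone logistic weight as a stand-in for the paper's spline estimate, with all numbers assumed for illustration.

```python
import numpy as np

def dose_summary(voxel_doses, w):
    # Monte Carlo version of the integral of w(d) against the organ's dose density:
    # average the weight function over the voxel-level doses.
    return float(np.mean(w(np.asarray(voxel_doses))))

# Monotone stand-in weight; the paper estimates w non-parametrically with splines.
w = lambda d: 1.0 / (1.0 + np.exp(-(d - 30.0) / 5.0))  # logistic in dose (Gy), assumed

doses = np.random.default_rng(3).gamma(shape=6.0, scale=5.0, size=10000)  # toy voxels
x_i = dose_summary(doses, w)   # scalar dose covariate for subject i
```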

8.
Current methods for testing the equality of conditional correlations of bivariate data given a third variable of interest (covariate) are limited because a continuous covariate must be discretized. In this study, we propose a linear model approach for estimation and hypothesis testing of the Pearson correlation coefficient, where the correlation itself can be modeled as a function of continuous covariates. The restricted maximum likelihood method is applied for parameter estimation, and the corrected likelihood ratio test is performed for hypothesis testing. This approach allows for flexible and robust inference and prediction of conditional correlations based on the linear model. Simulation studies show that the proposed method is statistically more powerful and more flexible in accommodating complex covariate patterns than existing methods. In addition, we illustrate the approach by analyzing the correlation between the physical component summary and the mental component summary of the MOS SF-36 form across a fair number of covariates in the national survey data.

9.
Roger J. Bowden, Statistics, 2013, 47(2): 249–262
Reflexive shifting of a given distribution, using its own distribution function, can reveal information. The shifts are changes in measure such that the separation of the resulting left and right unit shifted distributions reveals the binary entropy of position, called locational or partition entropy. This can be used for spread and asymmetry functions. Alternatively, summary metrics for distributional asymmetry and spread can be based on the relative strengths of left- and right-hand shifts. Such metrics are applicable even for long tail densities where distributional moments may not exist.

10.
In this paper we derive the predictive density function of a future observation under the assumption of an Edgeworth-type non-normal prior distribution for the unknown mean of a normal population. Fixed-size single sample and sequential sampling inspection plans, in a decisive prediction framework, are examined for their sensitivity to departures from normality of the prior distribution. Numerical illustrations indicate that the decision to market the remaining items of a given lot for a fixed-size plan may be sensitive to the presence of skewness or kurtosis in the prior distribution. However, Bayes' decision based on the sequential plan may not change, though expected gains may change with variation in the non-normality of the prior distribution.

11.
Progression-free survival (PFS) is a frequently used endpoint in oncological clinical studies. For PFS, the potential events are progression and death. Progression is usually observed with delay, because it cannot be diagnosed before the next scheduled study visit. For this reason, potential bias in treatment effect estimates for progression-free survival is a concern. In randomized trials, and for relative treatment effect measures such as hazard ratios, bias-correcting methods are not necessarily required or have been proposed before. However, less is known about cross-trial comparisons of absolute outcome measures such as median survival times. This paper proposes a new method for correcting the assessment-time bias of progression-free survival estimates to allow a fair cross-trial comparison of median PFS. Using median PFS as an example, the presented method approximates the unknown posterior distribution by a Bayesian approach based on simulations. It is shown that the proposed method leads to a substantial reduction of bias compared with estimates derived from maximum likelihood or Kaplan–Meier estimates. Bias could be reduced by more than 90% over a broad range of considered situations differing in assessment times and underlying distributions. With coverage probabilities of at least 94% based on credible intervals of the posterior distribution, the resulting estimates attain common confidence levels. In summary, the proposed approach is shown to be useful for a cross-trial comparison of median PFS.
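The assessment-time bias being corrected can be reproduced with a few lines of simulation; this illustrates the bias only (deaths and censoring are omitted, and the visit interval and event distribution are assumptions), not the paper's Bayesian correction.

```python
import numpy as np

rng = np.random.default_rng(4)
visit = 8.0                                       # assessment interval in weeks (assumed)
t_true = rng.exponential(scale=40.0, size=50000)  # latent progression times
t_obs = np.ceil(t_true / visit) * visit           # progression recorded at the next visit

print(np.median(t_true))   # ~ 40 * ln 2 = 27.7 weeks
print(np.median(t_obs))    # 32 weeks: the naive median PFS is shifted upward
```

The observed median is pulled to the next visit time, which is exactly the kind of distortion that makes naive cross-trial comparisons of median PFS unfair when assessment schedules differ.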

12.
In this study, a new extension of the generalized half-normal (GHN) distribution is introduced. Since this new distribution can be viewed as a weighted version of the GHN distribution, it is called the weighted generalized half-normal (WGHN) distribution. It is shown that the WGHN distribution can be viewed as a single constrained and hidden truncation model. Therefore, the new distribution is more flexible than the GHN distribution. Some statistical properties of the WGHN distribution are studied: its moments, cumulative distribution function, and hazard rate function are derived. Furthermore, maximum likelihood estimation of the parameters is considered. Some real-life data sets taken from the literature are modelled using the WGHN distribution. For these data sets, the WGHN distribution provides a better fit than the GHN and slashed generalized half-normal (SGHN) distributions.

13.
Bayesian statistical inference relies on the posterior distribution. Depending on the model, the posterior can be more or less difficult to derive. In recent years, there has been a lot of interest in complex settings where the likelihood is analytically intractable. In such situations, approximate Bayesian computation (ABC) provides an attractive way of carrying out Bayesian inference. For obtaining reliable posterior estimates, however, it is important to keep the approximation errors in ABC small. The choice of an appropriate set of summary statistics plays a crucial role in this effort. Here, we report the development of a new algorithm, based on least angle regression, for choosing summary statistics. In two population genetic examples, the performance of the new algorithm is better than that of a previously proposed approach based on partial least squares.
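The selection step can be sketched with an off-the-shelf LARS implementation; the pilot-simulation design and candidate summaries below are invented for illustration. The parameter is regressed on candidate summaries across pilot simulations, and the first few summaries to enter the LARS path are kept.

```python
import numpy as np
from sklearn.linear_model import Lars

rng = np.random.default_rng(5)

# Pilot simulations: draw theta from the prior, simulate data, compute candidate
# summaries. Here the 20 candidates are toy Gaussians, two of which carry signal.
n_pilot, n_candidates = 2000, 20
theta = rng.uniform(0, 10, size=n_pilot)
S = rng.normal(size=(n_pilot, n_candidates))
S[:, 0] += theta
S[:, 3] += 0.5 * theta

lars = Lars(n_nonzero_coefs=5).fit(S, theta)   # order of entry ranks the candidates
selected = np.flatnonzero(lars.coef_)          # summaries to carry into the ABC run
```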

14.
The Pareto distribution is a well-known probability distribution in statistics, which has been widely used in many fields, such as finance, physics, hydrology, geology and astronomy. However, the parameter estimation for the truncated Pareto distribution is much more complicated than that for the Pareto distribution. In this paper, we demonstrate that the bias of the maximum likelihood estimation for the truncated Pareto distribution can be significantly reduced by its jackknife estimation, which has a very simple form.
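The jackknife correction mentioned above has a generic form that is easy to state in code; a minimal sketch, where `estimator` would be the truncated Pareto MLE (obtained by solving its likelihood equation numerically, which is not reproduced here).

```python
import numpy as np

def jackknife(estimator, x):
    # Jackknife bias-corrected version of a plug-in estimator:
    # theta_J = n * theta_hat - ((n - 1) / n) * sum_i theta_hat(x without i).
    x = np.asarray(x)
    n = len(x)
    theta_full = estimator(x)
    loo = np.array([estimator(np.delete(x, i)) for i in range(n)])
    return n * theta_full - (n - 1) * loo.mean()

# Usage (hypothetical): alpha_j = jackknife(truncated_pareto_mle, sample),
# where truncated_pareto_mle solves the truncated Pareto ML equation numerically.
```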

15.
Linear rank tests are used extensively for comparing two or more groups of continuous outcomes. Tests in this class retain proper test size with minimal assumptions and can have high efficiency towards an alternative of interest. In recent years, these tests have been increasingly used in settings where an individual's observation is itself a scalar summary of several outcome measures. Here, simple distributional structures on the outcome variables can lead to complex differences between the distributions of summary statistics of the comparison groups. The local asymptotic power of linear rank tests when the groups are assumed to differ by a location or scale alternative has been studied in detail. However, not much is known about their behavior for other types of alternatives. To address this, we derive the asymptotic distribution of linear rank tests under a general contiguous alternative and then investigate the implications for location–scale families and more general settings, including an example drawn from an AIDS clinical trial where the continuous outcome is a summary statistic computed from repeated measures of a biological marker.
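A concrete instance of the setting described above: each subject's repeated biomarker measures are reduced to a least-squares slope, and a Wilcoxon rank-sum test compares the groups. The visit schedule and effect sizes are invented for illustration.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(6)
weeks = np.arange(0.0, 25.0, 4.0)   # visit schedule (assumed)

def subject_slope(y, t=weeks):
    # Scalar summary: least-squares slope of a subject's marker trajectory.
    return np.polyfit(t, y, 1)[0]

# Toy repeated measures: group B's marker declines slightly faster than group A's.
grp_a = [5 - 0.05 * weeks + rng.normal(0, 0.5, len(weeks)) for _ in range(40)]
grp_b = [5 - 0.10 * weeks + rng.normal(0, 0.5, len(weeks)) for _ in range(40)]

stat, p = ranksums([subject_slope(y) for y in grp_a],
                   [subject_slope(y) for y in grp_b])
```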

16.
For many scientific experiments, computing a p-value is the standard method for reporting the outcome. It is a simple way of summarizing the information in the data. One theoretical justification for p-values is the Neyman–Pearson theory of hypothesis testing. However, the decision-making focus of this theory does not correspond well with the desire, in most scientific experiments, for a simple and easily interpretable summary of the data. Fuzzy set theory, with its notion of a membership function, gives a non-probabilistic way to talk about uncertainty. Here we argue that in some situations where a p-value is computed, it may make more sense to formulate the question as one of estimating a membership function of the subset of special parameter points which are of particular interest for the experiment. Choosing the appropriate membership function can be more difficult than specifying the null and alternative hypotheses, but the resulting payoff is greater. This is because a membership function can better represent the shades of desirability among the parameter points than the sharp division of the parameter space into the null and alternative hypotheses. This approach yields an estimate which is easy to interpret and more flexible and informative than the cruder p-value.

17.
We present a new method to describe shape change and shape differences in curves, by constructing a deformation function in terms of a wavelet decomposition. Wavelets form an orthonormal basis which allows representations at multiple resolutions. The deformation function is estimated, in a fully Bayesian framework, using a Markov chain Monte Carlo algorithm. This Bayesian formulation incorporates prior information about the wavelets and the deformation function. The flexibility of the MCMC approach allows estimation of complex but clinically important summary statistics, such as curvature in our case, as well as estimates of deformation functions with variance estimates, and allows thorough investigation of the posterior distribution. This work is motivated by multi-disciplinary research involving a large-scale longitudinal study of idiopathic scoliosis in UK children. This paper provides novel statistical tools to study this spinal deformity, from which 5% of UK children suffer. Using the data we consider statistical inference for shape differences between normals, scoliotics and developers of scoliosis, in particular for spinal curvature, and look at longitudinal deformations to describe shape changes with time.

18.
In the analysis of competing risks data, cumulative incidence function is a useful summary of the overall crude risk for a failure type of interest. Mixture regression modeling has served as a natural approach to performing covariate analysis based on this quantity. However, existing mixture regression methods with competing risks data either impose parametric assumptions on the conditional risks or require stringent censoring assumptions. In this article, we propose a new semiparametric regression approach for competing risks data under the usual conditional independent censoring mechanism. We establish the consistency and asymptotic normality of the resulting estimators. A simple resampling method is proposed to approximate the distribution of the estimated parameters and that of the predicted cumulative incidence functions. Simulation studies and an analysis of a breast cancer dataset demonstrate that our method performs well with realistic sample sizes and is appropriate for practical use.

19.
We propose a new class of semiparametric estimators for proportional hazards models in the presence of measurement error in the covariates, where the baseline hazard function, the hazard function for the censoring time, and the distribution of the true covariates are considered as unknown infinite dimensional parameters. We estimate the model components by solving estimating equations based on the semiparametric efficient scores under a sequence of restricted models where the logarithms of the hazard functions are approximated by reduced rank regression splines. The proposed estimators are locally efficient in the sense that the estimators are semiparametrically efficient if the distribution of the error-prone covariates is specified correctly, and are still consistent and asymptotically normal if the distribution is misspecified. Our simulation studies show that the proposed estimators have smaller biases and variances than competing methods. We further illustrate the new method with a real application in an HIV clinical trial.

20.
The cost of certain types of warranties is closely related to functions that arise in renewal theory. The problem of estimating the warranty cost for a random sample of size n can be reduced to estimating these functions. In an earlier paper, I gave several methods of estimating the expected number of renewals, called the renewal function. This answered an important accounting question of how to arrive at a good approximation of the expected warranty cost. In this article, estimation of the renewal function is reviewed and several extensions are given. In particular, a resampling estimator of the renewal function is introduced. Further, I argue that managers may wish to examine other summary measures of the warranty cost, in particular the variability. To estimate this variability, I introduce estimators, both parametric and nonparametric, of the variance associated with the number of renewals. Several numerical examples are provided.
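Monte Carlo estimation of both the renewal function and the variance of the renewal count is straightforward once an inter-renewal distribution is fitted; a minimal sketch, with the Weibull inter-failure model and warranty horizon assumed for illustration.

```python
import numpy as np

def renewal_counts(draw_interarrival, t, n_paths=10000, rng=None):
    # Simulate N(t), the number of renewals by warranty horizon t, over n_paths paths.
    rng = rng or np.random.default_rng()
    counts = np.empty(n_paths, dtype=int)
    for k in range(n_paths):
        total, n = 0.0, 0
        while True:
            total += draw_interarrival(rng)
            if total > t:
                break
            n += 1
        counts[k] = n
    return counts

# Inter-failure times assumed Weibull (shape 1.5, scale 2 years) for illustration.
draw = lambda rng: 2.0 * rng.weibull(1.5)
N = renewal_counts(draw, t=3.0)
M_hat, V_hat = N.mean(), N.var(ddof=1)   # renewal function and variance at t = 3
```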

