Similar articles (20 records)
1.
This paper describes a nonparametric approach to make inferences for aggregate loss models in the insurance framework. We assume that an insurance company provides a historical sample of claims given by claim occurrence times and claim sizes. Furthermore, information may be incomplete as claims may be censored and/or truncated. In this context, the main goal of this work consists of fitting a probability model for the total amount that will be paid on all claims during a fixed future time period. In order to solve this prediction problem, we propose a new methodology based on nonparametric estimators for the density functions with censored and truncated data, the use of Monte Carlo simulation methods and bootstrap resampling. The developed methodology is useful to compare alternative pricing strategies in different insurance decision problems. The proposed procedure is illustrated with a real dataset provided by the insurance department of an international commercial company.
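The simulation-and-resampling idea above can be sketched in a few lines. This is a minimal illustration, not the paper's nonparametric density estimators for censored/truncated data: it bootstraps a hypothetical uncensored claim-size sample and draws a Poisson claim count to build the Monte Carlo distribution of next-period aggregate loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical historical sample: 250 claim sizes from one observation period.
claim_sizes = rng.lognormal(mean=7.0, sigma=1.2, size=250)
mean_claim_count = 250  # observed number of claims in the historical period

def simulate_total_loss(sizes, mean_count, n_sim=10_000, rng=rng):
    """Monte Carlo distribution of the next period's aggregate loss:
    draw a Poisson claim count, then bootstrap claim sizes from history."""
    totals = np.empty(n_sim)
    for i in range(n_sim):
        n = rng.poisson(mean_count)
        totals[i] = rng.choice(sizes, size=n, replace=True).sum() if n else 0.0
    return totals

totals = simulate_total_loss(claim_sizes, mean_claim_count)
premium_995 = np.quantile(totals, 0.995)  # a high-quantile pricing benchmark
```

Quantiles of `totals` can then be compared across alternative pricing strategies.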

2.
We consider inference in randomized longitudinal studies with missing data that is generated by skipped clinic visits and loss to follow-up. In this setting, it is well known that full data estimands are not identified unless unverified assumptions are imposed. We assume a non-future dependence model for the drop-out mechanism and partial ignorability for the intermittent missingness. We posit an exponential tilt model that links non-identifiable distributions and distributions identified under partial ignorability. This exponential tilt model is indexed by non-identified parameters, which are assumed to have an informative prior distribution, elicited from subject-matter experts. Under this model, full data estimands are shown to be expressed as functionals of the distribution of the observed data. To avoid the curse of dimensionality, we model the distribution of the observed data using a Bayesian shrinkage model. In a simulation study, we compare our approach to a fully parametric and a fully saturated model for the distribution of the observed data. Our methodology is motivated by, and applied to, data from the Breast Cancer Prevention Trial.  相似文献   
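The exponential tilt idea can be shown on a toy discrete distribution. This is a hedged sketch, not the paper's model: `alpha` plays the role of the non-identified sensitivity parameter (elicited via a prior in the paper), and `alpha = 0` recovers the identified distribution.

```python
import numpy as np

def exponential_tilt(support, probs, alpha):
    """Tilt an identified distribution p(y) into exp(alpha*y)*p(y)/c,
    the exponential tilt indexed by the non-identified parameter alpha."""
    w = np.asarray(probs) * np.exp(alpha * np.asarray(support))
    return w / w.sum()

y = np.array([0.0, 1.0, 2.0, 3.0])        # hypothetical outcome support
p_obs = np.array([0.4, 0.3, 0.2, 0.1])    # distribution identified from observed data

p_missing = exponential_tilt(y, p_obs, alpha=0.5)  # shifted toward larger outcomes
```

A positive `alpha` says the unobserved outcomes tend to be larger than the observed ones; sensitivity analysis varies `alpha` over its prior.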

3.
In this paper, we derive explicit computable expressions for the asymptotic distribution of the maximum likelihood estimate (MLE) of an unknown change-point in a sequence of independently and exponentially distributed random variables. First we state and prove a theorem that shows asymptotic equivalence of the change-point MLE for the cases of known and unknown parameters, respectively. Thereafter, the computational form of the asymptotic distribution of the change-point MLE is derived for the known-parameter case only. Simulations show that the distribution for the known case applies very well to the case where the parameters are estimated. Further, it is seen from simulations that the derived unconditional MLE shows better performance compared to the conditional solution of Cobb. Application of the change detection methodology and the derived estimation methodology shows strong support in favor of the dynamic triggering hypothesis for seismic faults in the Sumatra, Indonesia region.
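For the unknown-parameter case, the change-point MLE maximizes the profile log-likelihood over candidate split points. A minimal sketch on simulated data (the exact asymptotic distribution derived in the paper is not reproduced here):

```python
import numpy as np

def changepoint_mle(x):
    """Profile-likelihood MLE of a single change point in the rate of an
    exponential sequence with unknown rates: for each split k, plug in the
    segment MLEs lam1 = k/S1, lam2 = (n-k)/S2 and maximize over k."""
    n = len(x)
    best_k, best_ll = None, -np.inf
    for k in range(1, n):
        s1, s2 = x[:k].sum(), x[k:].sum()
        l1, l2 = k / s1, (n - k) / s2
        ll = k * np.log(l1) - l1 * s1 + (n - k) * np.log(l2) - l2 * s2
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k

rng = np.random.default_rng(1)
x = np.concatenate([rng.exponential(1.0, 200),    # rate 1 before the change
                    rng.exponential(0.2, 200)])   # rate 5 after the change
k_hat = changepoint_mle(x)  # should land near the true change point, 200
```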

4.
Models for dealing with survival data in the presence of a cured fraction of individuals have attracted the attention of many researchers and practitioners in recent years. In this paper, we propose a cure rate model under the competing risks scenario. For the number of causes that can lead to the event of interest, we assume the polylogarithm distribution. The model is flexible in the sense that it encompasses some well-known models, which can be tested using large-sample test statistics applied to nested models. Maximum-likelihood estimation based on the EM algorithm and hypothesis testing are investigated. Results of simulation studies designed to gauge the performance of the estimation method and of two test statistics are reported. The methodology is applied in the analysis of a data set.

5.
In this paper, we propose nonlinear elliptical models for correlated data with heteroscedastic and/or autoregressive structures. Our aim is to extend the models proposed by Russo et al. [22] by considering a more sophisticated scale structure to deal with variations in data dispersion and/or a possible autocorrelation among measurements taken throughout the same experimental unit. Moreover, to avoid the possible influence of outlying observations or to take into account the non-normal symmetric tails of the data, we assume elliptical contours for the joint distribution of random effects and errors, which allows us to attribute different weights to the observations. We propose an iterative algorithm to obtain the maximum-likelihood estimates for the parameters and derive the local influence curvatures for some specific perturbation schemes. The motivation for this work comes from a pharmacokinetic indomethacin data set, which was analysed previously by Bocheng and Xuping [1] under normality.

6.
In this paper, we assume the number of competing causes to follow an exponentially weighted Poisson distribution. By assuming that the initial number of competing causes can undergo destruction and that the population of interest has a cure fraction, we develop the EM algorithm for the determination of the MLEs of the parameters of such a general cure model. This model is more flexible than the promotion time cure model and also provides an interesting and realistic interpretation of the biological mechanism of the occurrence of the event of interest. Instead of assuming a particular parametric distribution for the lifetime, we assume the lifetime to belong to the wider class of generalized gamma distributions. This allows us to carry out model discrimination to select a parsimonious lifetime distribution that provides the best fit to the data. Within the EM framework, a two-way profile likelihood approach is proposed to estimate the shape parameters. An extensive Monte Carlo simulation study is carried out to demonstrate the performance of the proposed estimation method. Model discrimination is carried out by means of the likelihood ratio test and information-based methods. Finally, a data set on melanoma is analyzed for illustrative purposes.
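For orientation, the simpler promotion-time cure model that this model generalizes has a closed-form population survival function. The sketch below uses that baseline model with a hypothetical Weibull latent-lifetime cdf (a special case of the generalized gamma family), not the destructive exponentially weighted Poisson model of the paper:

```python
import numpy as np

def promotion_time_survival(t, theta, F):
    """Promotion-time cure model: N ~ Poisson(theta) latent causes with
    lifetime cdf F give population survival S(t) = exp(-theta * F(t));
    the cured fraction is S(inf) = exp(-theta)."""
    return np.exp(-theta * F(np.asarray(t, dtype=float)))

# Hypothetical Weibull cdf for the latent lifetimes (scale 2, shape 1.5):
F = lambda t: 1.0 - np.exp(-((t / 2.0) ** 1.5))

t = np.linspace(0.0, 20.0, 5)
S = promotion_time_survival(t, theta=1.2, F=F)  # plateaus at exp(-1.2)
```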

7.
In this article, we utilize a scale mixture of Gaussian random field as a tool for modeling spatial ordered categorical data with non-Gaussian latent variables. In fact, we assume a categorical random field is created by truncating a Gaussian Log-Gaussian latent variable model to accommodate heavy tails. Since the traditional likelihood approach for the considered model involves high-dimensional integrations which are computationally intensive, the maximum likelihood estimates are obtained using a stochastic approximation expectation–maximization algorithm. For this purpose, Markov chain Monte Carlo methods are employed to draw from the posterior distribution of latent variables. A numerical example illustrates the methodology.
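The truncation mechanism can be illustrated without the spatial structure. This is a hedged toy version: independent heavy-tailed latent draws from a scale mixture of Gaussians (log-normal mixing, mimicking the Gaussian-log-Gaussian idea) are cut at fixed thresholds into ordered categories.

```python
import numpy as np

rng = np.random.default_rng(2)

def latent_to_ordinal(z, cutpoints):
    """Truncate a continuous latent variable into ordered categories 0..K
    using fixed cutpoints."""
    return np.searchsorted(cutpoints, z)

# Heavy-tailed latent variables via a scale mixture of Gaussians:
# z = e / sqrt(g) with a positive log-normal mixing variable g.
n = 1000
g = rng.lognormal(0.0, 0.5, size=n)
z = rng.normal(size=n) / np.sqrt(g)
y = latent_to_ordinal(z, cutpoints=[-1.0, 0.0, 1.0])  # four ordered levels
```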

8.
Censoring can occur in many statistical analyses in the framework of experimental design. In this study, we estimate the model parameters in one-way ANOVA under Type II censoring. We assume that the distribution of the error terms is Azzalini's skew normal. We use Tiku's modified maximum likelihood (MML) methodology, a modified version of the well-known maximum likelihood (ML) methodology, in the estimation procedure. Unlike ML methodology, MML methodology is non-iterative and gives explicit estimators of the model parameters. We also propose new test statistics based on the proposed estimators. The performances of the proposed estimators and the test statistics based on them are compared with the corresponding normal theory results via a Monte Carlo simulation study. A real-life data set is analysed at the end of the study to show the implementation of the methodology presented in this paper.
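The data-generating setup can be sketched directly. This is not Tiku's MML estimator, only a hedged illustration of its ingredients: skew-normal errors drawn via the standard convolution representation, and Type II censoring that retains the r smallest order statistics in a cell.

```python
import numpy as np

rng = np.random.default_rng(3)

def skew_normal(n, alpha, rng):
    """Azzalini skew normal via the convolution representation:
    Z = delta*|U0| + sqrt(1 - delta^2)*U1, delta = alpha/sqrt(1+alpha^2)."""
    delta = alpha / np.sqrt(1.0 + alpha ** 2)
    u0 = rng.normal(size=n)
    u1 = rng.normal(size=n)
    return delta * np.abs(u0) + np.sqrt(1.0 - delta ** 2) * u1

def type2_censor(x, r):
    """Type II censoring: only the r smallest of the n order statistics
    are observed."""
    return np.sort(x)[:r]

errors = skew_normal(50, alpha=2.0, rng=rng)   # skewed error terms
group = 10.0 + errors                           # one ANOVA cell with mean shift 10
observed = type2_censor(group, r=40)            # the 10 largest values are censored
```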

9.
Bayesian analysis of dynamic magnetic resonance breast images
Summary.  We describe an integrated methodology for analysing dynamic magnetic resonance images of the breast. The problems that motivate this methodology arise from a collaborative study with a tumour institute. The methods are developed within the Bayesian framework and comprise image restoration and classification steps. Two different approaches are proposed for the restoration. Bayesian inference is performed by means of Markov chain Monte Carlo algorithms. We make use of a Metropolis algorithm with a specially chosen proposal distribution that performs better than more commonly used proposals. The classification step is based on a few attribute images yielded by the restoration step that describe the essential features of the contrast agent variation over time. Procedures for hyperparameter estimation are provided, so making our method automatic. The results show the potential of the methodology to extract useful information from acquired dynamic magnetic resonance imaging data about tumour morphology and internal pathophysiological features.
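The core Metropolis step referred to above is short to write down. This sketch uses a plain Gaussian random-walk proposal on a toy one-dimensional target, not the specially chosen proposal of the paper:

```python
import numpy as np

def metropolis(log_target, x0, n_iter, step, rng):
    """Random-walk Metropolis: accept a Gaussian proposal with probability
    min(1, target(x')/target(x)); the chain's stationary law is exp(log_target)."""
    x, lp = x0, log_target(x0)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + step * rng.normal()
        lp_prop = log_target(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept/reject on the log scale
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

rng = np.random.default_rng(4)
# Toy univariate target: a standard normal "posterior".
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 20_000, step=1.0, rng=rng)
```

Better-tuned proposals reduce the chain's autocorrelation, which is the point of the specially chosen proposal in the paper.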

10.
In this paper, we present a Bayesian methodology for modelling accelerated lifetime tests under a stress-response relationship with a threshold stress. Both Laplace and MCMC methods are considered. The methodology is described in detail for the case when an exponential distribution is assumed to express the behaviour of lifetimes, and a power law model with a threshold stress is assumed as the stress-response relationship. We assume vague but proper priors for the parameters of interest. The methodology is illustrated by an accelerated failure test on an electrical insulation film.
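One common way to write a power law with a threshold stress is sketched below. The functional form, parameter values, and stress units are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def mean_lifetime(V, alpha, beta, V0):
    """Power-law stress-response with threshold stress V0: mean life
    alpha * (V - V0)^(-beta) above the threshold, and no failure
    (infinite mean life) at or below it."""
    V = np.asarray(V, dtype=float)
    out = np.full(V.shape, np.inf)
    above = V > V0
    out[above] = alpha * (V[above] - V0) ** (-beta)
    return out

stresses = np.array([4.0, 6.0, 8.0, 12.0])   # hypothetical stress levels (kV)
theta = mean_lifetime(stresses, alpha=5000.0, beta=1.5, V0=5.0)
# Under the exponential assumption, lifetimes at each accelerated level
# are exponential with mean theta.
```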

11.
Variable selection in finite mixture of regression (FMR) models is frequently used in statistical modeling. The majority of applications of variable selection in FMR models use a normal distribution for regression error. Such assumptions are unsuitable for a set of data containing a group or groups of observations with asymmetric behavior. In this paper, we introduce a variable selection procedure for FMR models using the skew-normal distribution. With appropriate choice of the tuning parameters, we establish the theoretical properties of our procedure, including consistency in variable selection and the oracle property in estimation. To estimate the parameters of the model, a modified EM algorithm for numerical computations is developed. The methodology is illustrated through numerical experiments and a real data example.

12.
Linear mixed models are widely used when multiple correlated measurements are made on each unit of interest. In many applications, the units may form several distinct clusters, and such heterogeneity can be more appropriately modelled by a finite mixture linear mixed model. The classical estimation approach, in which both the random effects and the error parts are assumed to follow normal distribution, is sensitive to outliers, and failure to accommodate outliers may greatly jeopardize the model estimation and inference. We propose a new mixture linear mixed model using multivariate t distribution. For each mixture component, we assume the response and the random effects jointly follow a multivariate t distribution, to conveniently robustify the estimation procedure. An efficient expectation conditional maximization algorithm is developed for conducting maximum likelihood estimation. The degrees of freedom parameters of the t distributions are chosen data adaptively, for achieving flexible trade-off between estimation robustness and efficiency. Simulation studies and an application on analysing lung growth longitudinal data showcase the efficacy of the proposed approach.

13.
It is not uncommon with astrophysical and epidemiological data sets that the variances of the observations are accessible from an analytical treatment of the data collection process. Moreover, in a regression model, heteroscedastic measurement errors and equation errors are common situations when modelling such data. This article deals with the limiting distribution of the maximum-likelihood and method-of-moments estimators for the line parameters of the regression model. We use the delta method to achieve this, making it possible to build joint confidence regions and hypothesis tests. This technique produces closed expressions for the asymptotic covariance matrix of those estimators. In the method-of-moments approach we do not assign any distribution to the unobservable covariate, while with the maximum-likelihood approach we assume a normal distribution. We also conduct simulation studies of rejection rates for Wald-type statistics in order to verify the test size and power. Practical applications are reported for a data set produced by the Chandra observatory and also from the WHO MONICA Project on cardiovascular disease.
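The delta method itself is a one-line quadratic form. A minimal sketch with hypothetical line estimates and covariance (the paper's closed-form covariance expressions are not reproduced):

```python
import numpy as np

def delta_method_var(grad, cov):
    """Delta method: the asymptotic variance of g(theta_hat) is approximately
    grad g(theta)^T  Cov(theta_hat)  grad g(theta)."""
    grad = np.asarray(grad, dtype=float)
    return float(grad @ np.asarray(cov) @ grad)

# Hypothetical line estimates (intercept, slope) and asymptotic covariance:
beta = np.array([2.0, 0.5])
cov = np.array([[0.04, -0.01],
                [-0.01, 0.02]])

# Variance of the fitted value at x = 3, i.e. g(beta) = beta0 + 3*beta1,
# whose gradient is (1, 3):
var_fit = delta_method_var([1.0, 3.0], cov)
```

The same quadratic form, with the full gradient and covariance, yields joint confidence regions and Wald-type tests.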

14.
Summary. The problem of analysing longitudinal data that are complicated by possibly informative drop-out has received considerable attention in the statistical literature. Most researchers have concentrated on either methodology or application, but we begin this paper by arguing that more attention could be given to study objectives and to the relevant targets for inference. Next we summarize a variety of approaches that have been suggested for dealing with drop-out. A long-standing concern in this subject area is that all methods require untestable assumptions. We discuss circumstances in which we are willing to make such assumptions and we propose a new and computationally efficient modelling and analysis procedure for these situations. We assume a dynamic linear model for the expected increments of a constructed variable, under which subject-specific random effects follow a martingale process in the absence of drop-out. Informal diagnostic procedures to assess the tenability of the assumption are proposed. The paper is completed by simulations and a comparison of our method and several alternatives in the analysis of data from a trial into the treatment of schizophrenia, in which approximately 50% of recruited subjects dropped out before the final scheduled measurement time.

15.
This paper explores the asymptotic distribution of the restricted maximum likelihood estimator of the variance components in a general mixed model. Restricting attention to hierarchical models, central limit theorems are obtained using elementary arguments with only mild conditions on the covariates in the fixed part of the model and without having to assume that the data are either normally or spherically symmetrically distributed. Further, the REML and maximum likelihood estimators are shown to be asymptotically equivalent in this general framework, and the asymptotic distribution of the weighted least squares estimator (based on the REML estimator) of the fixed effect parameters is derived.
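The ML/REML distinction, and their asymptotic equivalence, is easiest to see in the simplest special case: the residual variance of a linear model. This sketch (simulated data, not from the paper) shows the two estimators differing by the degrees-of-freedom correction, a gap that vanishes as n grows:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(0.0, 2.0, size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = float(((y - X @ beta_hat) ** 2).sum())

sigma2_ml = rss / n          # ML: biased downward in finite samples
sigma2_reml = rss / (n - p)  # REML: adjusts for the p estimated fixed effects
```

The ratio of the two is n/(n - p) → 1, which is the one-dimensional shadow of the asymptotic equivalence result.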

16.
Proportional hazards frailty models use a random effect, the so-called frailty, to construct association for clustered failure time data. It is customary to assume that the random frailty follows a gamma distribution. In this paper, we propose a graphical method for assessing the adequacy of proportional hazards frailty models. In particular, we focus on assessing the gamma distribution assumption for the frailties. We calculate the average of the posterior expected frailties at several follow-up time points and compare it at these time points to 1, the known mean frailty. Large discrepancies indicate lack of fit. To aid in assessing the goodness of fit, we derive and estimate the standard error of the mean of the posterior expected frailties at each time point examined. We give an example to illustrate the proposed methodology and perform sensitivity analysis by simulations.
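For a gamma frailty, the posterior expected frailty has a closed form by conjugacy, which is what makes the diagnostic cheap to compute. A hedged sketch with hypothetical cluster summaries (the paper's standard-error derivation is not reproduced):

```python
import numpy as np

def posterior_expected_frailty(d, H, theta):
    """For a gamma frailty with mean 1 and variance theta, conjugacy gives
    Z | data ~ Gamma(1/theta + d, 1/theta + H), so
    E[Z | data] = (1/theta + d) / (1/theta + H), where d is a cluster's
    event count and H its accumulated cumulative hazard."""
    return (1.0 / theta + np.asarray(d)) / (1.0 / theta + np.asarray(H))

# Hypothetical clusters followed up to some time point:
d = np.array([0, 1, 2, 1])            # events per cluster
H = np.array([0.5, 1.0, 1.8, 1.2])    # summed cumulative hazard per cluster
z_post = posterior_expected_frailty(d, H, theta=0.5)
diagnostic = z_post.mean()            # compared with the known mean frailty, 1
```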

17.
The potency of antiretroviral agents in AIDS clinical trials can be assessed on the basis of a viral response such as viral decay rate or change in viral load (number of HIV RNA copies in plasma). Linear, nonlinear, and nonparametric mixed-effects models have been proposed to estimate such parameters in viral dynamic models. However, there are two critical questions that stand out: whether these models achieve consistent estimates for viral decay rates, and which model is more appropriate for use in practice. Moreover, one often assumes that a model random error is normally distributed, but this assumption may be unrealistic, obscuring important features of within- and among-subject variations. In this article, we develop a skew-normal (SN) Bayesian linear mixed-effects (SN-BLME) model, an SN Bayesian nonlinear mixed-effects (SN-BNLME) model, and an SN Bayesian semiparametric nonlinear mixed-effects (SN-BSNLME) model that relax the normality assumption by considering model random error to have an SN distribution. We compare the performance of these SN models, and also compare their performance with the corresponding normal models. An AIDS dataset is used to test the proposed models and methods. It was found that there is a significant incongruity in the estimated viral decay rates. The results indicate that SN-BSNLME model is preferred to the other models, implying that an arbitrary data truncation is not necessary. The findings also suggest that it is important to assume a model with an SN distribution in order to achieve reasonable results when the data exhibit skewness.

18.
If at least one of two serial machines that produce a specific product in a manufacturing environment malfunctions, nonconforming items will be produced. Determining the optimal time for machine maintenance is one of the major concerns. While a convenient common practice for this kind of problem is to fit a single probability distribution to the combined defect data, doing so does not adequately capture the fact that there are two different underlying causes of failure. A better approach is to view the defects as arising from a mixture population: one component due to failures of the first machine and the other due to failures of the second. In this article, a mixture model along with both Bayesian inference and stochastic dynamic programming approaches is used to find the multi-stage optimal replacement strategy. Using the posterior probability of the machines being in states λ1 and λ2 (the failure rates of defective items produced by machines 1 and 2, respectively), we first formulate the problem as a stochastic dynamic programming model. Then, we derive some properties of the optimal value of the objective function and propose a solution algorithm. Finally, the application of the proposed methodology is demonstrated by a numerical example, and an error analysis is performed to evaluate the performance of the proposed procedure. The results of this analysis show that the proposed method performs satisfactorily when different numbers of observations on the times between productions of defective items are available.
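The Bayesian updating step behind such a two-state posterior can be sketched with exponential inter-defective times. This is an illustrative assumption about the likelihood, not the paper's full dynamic-programming formulation:

```python
import numpy as np

def posterior_machine_prob(times, lam1, lam2, prior1=0.5):
    """Posterior probability that the defectives arise from the machine with
    failure rate lam1 rather than lam2, given exponential inter-defective
    times (a two-component mixture viewpoint)."""
    t = np.asarray(times, dtype=float)
    ll1 = np.sum(np.log(lam1) - lam1 * t)
    ll2 = np.sum(np.log(lam2) - lam2 * t)
    w1 = prior1 * np.exp(ll1)
    w2 = (1.0 - prior1) * np.exp(ll2)
    return w1 / (w1 + w2)

times = np.array([0.8, 1.1, 0.9, 1.3])   # hypothetical gaps between defectives
p1 = posterior_machine_prob(times, lam1=1.0, lam2=5.0)
```

Such a posterior would feed the state variable of the replacement-decision recursion.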

19.
For right-censored data, the accelerated failure time (AFT) model is an alternative to the commonly used proportional hazards regression model. It is a linear model for the (log-transformed) outcome of interest, and is particularly useful for censored outcomes that are not time-to-event, such as laboratory measurements. We provide a general and easily computable definition of the R2 measure of explained variation under the AFT model for right-censored data. We study its behavior under different censoring scenarios and under different error distributions; in particular, we also study its robustness when the parametric error distribution is misspecified. Based on Monte Carlo investigation results, we recommend the log-normal distribution as a robust error distribution to be used in practice for the parametric AFT model, when the R2 measure is of interest. We apply our methodology to an alcohol consumption during pregnancy data set from Ukraine.
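In the uncensored special case, the recommended log-normal AFT model is ordinary least squares on log T, and explained variation reduces to the classical R2. This sketch shows only that baseline case on simulated data; the paper's general censored-data definition is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500
x = rng.normal(size=n)
log_t = 1.0 + 0.8 * x + rng.normal(0.0, 0.5, size=n)  # log-normal AFT outcome

# With no censoring, fit log T by least squares and take the usual R^2:
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, log_t, rcond=None)
resid = log_t - X @ beta_hat
r2 = 1.0 - resid.var() / log_t.var()
```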

20.
A longitudinal study commonly follows a set of variables, measured for each individual repeatedly over time, and usually suffers from the problem of incomplete data. A common approach for dealing with longitudinal categorical responses is to use the Generalized Linear Mixed Model (GLMM). This model induces the potential relation between response variables over time via a vector of random effects, assumed to be shared parameters in the non-ignorable missing mechanism. Most GLMMs assume that the random-effects parameters follow a normal or symmetric distribution, and this leads to serious problems in real applications. In this paper, we propose GLMMs for the analysis of incomplete multivariate longitudinal categorical responses with a non-ignorable missing mechanism based on a shared parameter framework with the less restrictive assumption of skew-normality for the random effects. These models may contain incomplete data with monotone and non-monotone missing patterns. The performance of the model is evaluated using simulation studies, and a well-known longitudinal data set extracted from a fluvoxamine trial is analyzed to determine the profile of fluvoxamine in ambulatory clinical psychiatric practice.
