Similar Documents
20 similar documents found (search time: 15 ms)
1.
In this paper, a new survival cure rate model is introduced, using the Yule–Simon distribution [H.A. Simon, On a class of skew distribution functions, Biometrika 42 (1955), pp. 425–440] to model the number of concurrent causes. We study some properties of this distribution and of the model that arises when the distribution of the competing causes is Weibull; we call this the Weibull–Yule–Simon distribution. Maximum likelihood estimation is conducted for the model parameters. A small-scale simulation study indicates satisfactory parameter recovery by the estimation approach. The results are applied to a real data set (melanoma), illustrating that the proposed model can outperform traditional alternatives in terms of model fit.
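For readers unfamiliar with the Yule–Simon distribution used here for the number of concurrent causes, a minimal numeric sketch of its pmf (not code from the paper; the standard parameterization pmf(k) = ρ·B(k, ρ+1) on k = 1, 2, … is assumed):

```python
from math import exp, lgamma

def yule_simon_pmf(k, rho):
    """P(N = k) = rho * B(k, rho + 1) for k = 1, 2, ... (log-space for stability)."""
    return rho * exp(lgamma(k) + lgamma(rho + 1.0) - lgamma(k + rho + 1.0))

# Sanity checks: the pmf sums to 1, and for rho > 1 the mean is rho / (rho - 1).
rho = 2.5
total = sum(yule_simon_pmf(k, rho) for k in range(1, 20000))
mean = sum(k * yule_simon_pmf(k, rho) for k in range(1, 20000))
```

The heavy power-law tail (pmf(k) ~ k^(-(ρ+1))) is what distinguishes this choice from the Poisson-type counts used in classical cure rate models.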

2.
In this paper, we develop a flexible cure rate survival model by assuming the number of competing causes of the event of interest to follow the Conway–Maxwell–Poisson distribution. This model includes as special cases some of the well-known cure rate models discussed in the literature. Next, we discuss maximum likelihood estimation of the parameters of this cure rate survival model. Finally, we illustrate the usefulness of this model by applying it to real cutaneous melanoma data.
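A hedged sketch of the key quantities in a Conway–Maxwell–Poisson cure rate setup (truncated normalizing constant, cure fraction, and population survival; the paper's exact specification may differ). Setting nu = 1 recovers the Poisson (promotion time) special case:

```python
import math

def com_poisson_Z(lam, nu, n_max=100):
    """Truncated normalizing constant Z(lam, nu) = sum_{n>=0} lam^n / (n!)^nu."""
    if lam <= 0.0:
        return 1.0  # only the n = 0 term survives
    log_lam = math.log(lam)
    return sum(math.exp(n * log_lam - nu * math.lgamma(n + 1.0))
               for n in range(n_max + 1))

def population_survival(s_t, lam, nu):
    """S_pop(t) = Z(lam * S(t), nu) / Z(lam, nu); the cure fraction is 1 / Z(lam, nu)."""
    return com_poisson_Z(lam * s_t, nu) / com_poisson_Z(lam, nu)
```

As S(t) decays to 0, S_pop(t) levels off at the cure fraction 1/Z(lam, nu) rather than at zero, which is the defining feature of cure rate models.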

3.
The purpose of this paper is to develop a Bayesian analysis for right-censored survival data when immune or cured individuals may be present in the population from which the data are taken. In our approach, the number of competing causes of the event of interest follows the Conway–Maxwell–Poisson distribution, which generalizes the Poisson distribution. Markov chain Monte Carlo (MCMC) methods are used to develop a Bayesian procedure for the proposed model. Model selection and an illustration with a real data set are also discussed.

4.
5.
The tobit model allows a censored response variable to be described by covariates. Its applications cover different areas such as economics, engineering, environment and medicine. A strong assumption of the standard tobit model is that its errors follow a normal distribution. However, not all applications are well modeled by this distribution. Some efforts have relaxed the normality assumption by considering more flexible distributions. Nevertheless, the presence of asymmetry may not be well described by these flexible distributions. A real-world data application of measles vaccine in Haiti is explored, which confirms this asymmetry. We propose a tobit model with errors following a Birnbaum–Saunders (BS) distribution, which is asymmetrical and has been shown to be a good alternative for describing medical data. Inference based on the maximum likelihood method and a type of residual are derived for the tobit–BS model. We perform global and local influence diagnostics to assess the sensitivity of the maximum likelihood estimators to atypical cases. A Monte Carlo simulation study is carried out to empirically evaluate the performance of these estimators. We conduct a data analysis for the mentioned application of measles vaccine based on the proposed model with the help of the R software. The results show the good performance of the tobit–BS model.
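For reference, the log-likelihood of the standard normal-error tobit that the paper generalizes (a sketch only; the tobit–BS likelihood replaces the normal density and CDF terms with Birnbaum–Saunders ones):

```python
import numpy as np
from scipy.stats import norm

def tobit_loglik(beta, sigma, y, X, lower=0.0):
    """Log-likelihood of the left-censored tobit model
    y_i = max(lower, x_i' beta + eps_i), with eps_i ~ N(0, sigma^2)."""
    xb = X @ np.asarray(beta)
    cens = y <= lower
    # Censored observations contribute P(y_i* <= lower); the rest the normal density.
    ll_cens = norm.logcdf((lower - xb[cens]) / sigma).sum()
    ll_obs = norm.logpdf(y[~cens], loc=xb[~cens], scale=sigma).sum()
    return ll_cens + ll_obs
```

Maximizing this function (e.g. with `scipy.optimize.minimize` on its negative) gives the usual tobit MLE against which the BS-error variant can be compared.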

6.
We propose a generalized estimating equations (GEE) approach to the estimation of the mean and covariance structure of bivariate time series processes of panel data. The one-step approach allows for mixed continuous and discrete dependent variables. A Monte Carlo study is presented to compare our particular GEE estimator with more standard GEE estimators. In the empirical illustration, we apply our estimator to the analysis of individual wage dynamics and the incidence of profit-sharing in West Germany. Our findings show that time-invariant unobserved individual ability jointly influences individual wages and participation in profit-sharing schemes.

7.
In this article, we propose a new three-parameter probability distribution, called the Topp–Leone normal, for modelling increasing failure rate data. The distribution is obtained by using the Topp–Leone-X family of distributions with the normal as a baseline model. Basic properties including moments, the quantile function, stochastic ordering and order statistics are derived. The unknown parameters are estimated by maximum likelihood, least squares, weighted least squares and maximum product spacings. An extensive simulation study is carried out to compare the long-run performance of the estimators. Applicability of the distribution is illustrated by means of three real data analyses against existing distributions.
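Assuming the commonly stated Topp–Leone-X construction G(x) = [1 − (1 − F(x))²]^α with a normal baseline F (a sketch; the paper's exact parameterization may differ), the CDF is one line:

```python
from scipy.stats import norm

def topp_leone_normal_cdf(x, alpha, mu=0.0, sigma=1.0):
    """CDF of the Topp-Leone normal: G(x) = (1 - (1 - F(x))**2)**alpha,
    where F is the Normal(mu, sigma) CDF. alpha > 0 is the shape parameter."""
    F = norm.cdf(x, loc=mu, scale=sigma)
    return (1.0 - (1.0 - F) ** 2) ** alpha
```

At alpha = 1 this reduces to F(x)(2 − F(x)), the Topp–Leone transform of the baseline, and alpha tilts the distribution toward increasing failure rates.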

8.
In this paper, a jump–diffusion Omega model with a two-step premium rate is studied. In this model, the surplus process is a perturbation of a compound Poisson process by a Brownian motion. Firstly, using the strong Markov property, the integro-differential equations for the Gerber–Shiu expected discounted penalty function and the bankruptcy probability are derived. Secondly, for a constant bankruptcy rate function, the renewal equations satisfied by the Gerber–Shiu expected discounted penalty function are obtained, and by iteration, closed-form solutions of the function are also given. Further, explicit solutions of the Gerber–Shiu expected discounted penalty function are obtained when the individual claim size follows an exponential distribution. Finally, a numerical example illustrates some properties of the model.
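As a point of comparison for the exponential-claim case, the classical Cramér–Lundberg model (no diffusion, single premium rate, no bankruptcy rate function) has the well-known closed-form ruin probability ψ(u) = (λ/(cβ)) e^(−(β−λ/c)u); a sketch under those simplifying assumptions, not the Omega model itself:

```python
import math

def ruin_prob_exp_claims(u, lam, beta, c):
    """Ruin probability for the classical compound-Poisson surplus process with
    Exp(beta) claim sizes, claim arrival rate lam, and premium rate c.
    Requires the positive safety loading condition c * beta > lam."""
    if c * beta <= lam:
        raise ValueError("positive safety loading required: c * beta > lam")
    return (lam / (c * beta)) * math.exp(-(beta - lam / c) * u)
```

The Omega model's bankruptcy probability replaces this all-or-nothing ruin event with a bankruptcy intensity applied while the surplus is negative, so the formula above is only the limiting benchmark.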

9.
In this article, we consider a competing cause scenario and assume the wider family of Conway–Maxwell–Poisson (COM–Poisson) distributions to model the number of competing causes. Assuming interval-censored data, the main contribution is in developing the steps of the expectation maximization (EM) algorithm to determine the maximum likelihood estimates (MLEs) of the model parameters. A profile likelihood approach within the EM framework is proposed to estimate the COM–Poisson shape parameter. An extensive simulation study is conducted to evaluate the performance of the proposed EM algorithm. Model selection within the wider class of COM–Poisson distributions is carried out using the likelihood ratio test and information-based criteria. A study demonstrating the effect of model mis-specification is also carried out. Finally, the proposed estimation method is applied to data on smoking cessation and a detailed analysis of the obtained results is presented.

10.
In this paper we discuss new adaptive proposal strategies for sequential Monte Carlo algorithms—also known as particle filters—relying on criteria evaluating the quality of the proposed particles. The choice of the proposal distribution is a major concern and can dramatically influence the quality of the estimates. Thus, we show how the long-used coefficient of variation of the weights (suggested by Kong et al. in J. Am. Stat. Assoc. 89:278–288, 1994) can be used for estimating the chi-square distance between the target and instrumental distributions of the auxiliary particle filter. As a by-product of this analysis we obtain an auxiliary adjustment multiplier weight type for which this chi-square distance is minimal. Moreover, we establish an empirical estimate of linear complexity of the Kullback–Leibler divergence between the involved distributions. Guided by these results, we discuss adaptive design of the particle filter proposal distribution and illustrate the methods on a numerical example. This work was partly supported by the National Research Agency (ANR) under the program "ANR-05-BLAN-0299".
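The coefficient-of-variation criterion mentioned above is cheap to compute; a sketch of the squared CV of normalized importance weights and the associated effective sample size, ESS = N/(1 + CV²):

```python
import numpy as np

def cv2_and_ess(weights):
    """Squared coefficient of variation of the normalized importance weights,
    and the effective sample size ESS = N / (1 + CV^2) = 1 / sum(w_norm^2)."""
    w = np.asarray(weights, dtype=float)
    n = w.size
    wn = w / w.sum()                      # normalized weights
    cv2 = n * np.sum((wn - 1.0 / n) ** 2)  # squared CV of the weights
    return cv2, n / (1.0 + cv2)
```

Uniform weights give CV² = 0 and ESS = N; a single dominant weight drives ESS toward 1, signalling a poor proposal distribution.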

11.
In a multivariate mean–variance model, the class of linear score (LS) estimators based on an unbiased linear estimating function is introduced. A special member of this class is the (extended) quasi-score (QS) estimator. It is ‘extended’ in the sense that it comprises the parameters describing the distribution of the regressor variables. It is shown that QS is (asymptotically) most efficient within the class of LS estimators. An application is the multivariate measurement error model, where the parameters describing the regressor distribution are nuisance parameters. A special case is the zero-inflated Poisson model with measurement errors, which can be treated within this framework.

12.
In this article, we propose a new class of distributions defined by a quantile function, which nests several distributions as its members. The quantile function proposed here is the sum of the quantile functions of the generalized Pareto and Weibull distributions. Various distributional properties and reliability characteristics of the class are discussed. The estimation of the parameters of the model using L-moments is studied. Finally, we apply the model to a real-life dataset.
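A sketch of the defining quantile function as the sum of the two component quantiles, with inverse-transform sampling. The standard parameterizations Q_GPD(u) = σ/ξ[(1−u)^(−ξ) − 1] and Q_Weibull(u) = λ[−log(1−u)]^(1/k) are assumed here:

```python
import numpy as np

def gpd_quantile(u, sigma, xi):
    """Generalized Pareto quantile function (xi != 0)."""
    return sigma / xi * ((1.0 - u) ** (-xi) - 1.0)

def weibull_quantile(u, lam, k):
    """Weibull quantile function with scale lam and shape k."""
    return lam * (-np.log(1.0 - u)) ** (1.0 / k)

def class_quantile(u, sigma, xi, lam, k):
    """Quantile function of the proposed class: Q(u) = Q_GPD(u) + Q_Weibull(u)."""
    return gpd_quantile(u, sigma, xi) + weibull_quantile(u, lam, k)

# Inverse-transform sampling from a member of the class:
rng = np.random.default_rng(0)
sample = class_quantile(rng.uniform(size=1000), 1.0, 0.2, 1.0, 1.5)
```

Since a sum of valid (non-decreasing) quantile functions is again non-decreasing, Q defines a proper distribution even though its CDF has no closed form, which is exactly why quantile-based tools such as L-moments are the natural estimation route.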

13.
14.
Gene copy number (GCN) changes are common characteristics of many genetic diseases. Comparative genomic hybridization (CGH) is a technology widely used today to screen GCN changes in mutant cells at high resolution genome-wide. Statistical methods for analyzing such CGH data have been evolving. Existing methods are either frequentist or fully Bayesian. The former often has a computational advantage, while the latter can incorporate prior information into the model but can be misleading when sound prior information is unavailable. In an attempt to take full advantage of both approaches, we develop a Bayesian–frequentist hybrid approach, in which a subset of the model parameters is inferred by the Bayesian method and the rest by the frequentist method. This hybrid approach offers advantages over either method used alone, especially when sound prior information is available on part of the parameters and the sample size is relatively small. Spatial dependence and the false discovery rate are also discussed, and the parameter estimation is efficient. As an illustration, we use the proposed hybrid approach to analyze a real CGH data set.

15.
A case–control study of lung cancer mortality in U.S. railroad workers in jobs with and without diesel exhaust exposure is reanalyzed using a new threshold regression methodology. The study included 1256 workers who died of lung cancer and 2385 controls who died primarily of circulatory system diseases. Diesel exhaust exposure was assessed using railroad job history from the U.S. Railroad Retirement Board and an industrial hygiene survey. Smoking habits were available from next-of-kin and potential asbestos exposure was assessed by job history review. The new analysis reassesses lung cancer mortality and examines circulatory system disease mortality. Jobs with regular exposure to diesel exhaust had a survival pattern characterized by an initial delay in mortality, followed by a rapid deterioration of health prior to death. The pattern is seen in subjects dying of lung cancer, circulatory system diseases, and other causes. The unique pattern is illustrated using a new type of Kaplan–Meier survival plot in which the time scale represents a measure of disease progression rather than calendar time. The disease progression scale accounts for a healthy-worker effect when describing the effects of cumulative exposures on mortality.

16.
The paper is inspired by the stress–strength models in the reliability literature, in which, given the strength (Y) and the stress (X) of a component, its reliability is measured by P(X < Y). In this literature, X and Y are typically modeled as independent. Since in many applications such an assumption might not be realistic, we propose a copula approach in order to take into account the dependence between X and Y. We then apply a copula-based approach to the measurement of household financial fragility. Specifically, we define as financially fragile those households whose yearly consumption (X) is higher than income (Y), so that P(X > Y) is the measure of interest and X and Y are clearly not independent. Modeling income and consumption as non-identically Dagum distributed variables and their dependence by a Frank copula, we show that the proposed method improves the estimation of household financial fragility. Using data from the 2008 wave of the Bank of Italy's Survey on Household Income and Wealth, we point out that neglecting the existing dependence in fact overestimates the actual household fragility.
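The fragility measure P(X > Y) under dependence is straightforward to estimate by simulation; a sketch using a Gaussian copula with lognormal marginals for brevity (the paper itself uses a Frank copula with Dagum marginals, so every distributional choice below is illustrative only):

```python
import numpy as np
from scipy.stats import norm, lognorm

def prob_consumption_exceeds_income(rho, n=200_000, seed=1):
    """Monte Carlo estimate of P(X > Y) when (X, Y) are coupled through a
    Gaussian copula with correlation rho. X ~ lognormal (consumption) and
    Y ~ lognormal with a larger median (income) are illustrative choices."""
    rng = np.random.default_rng(seed)
    # Correlated standard normals -> copula uniforms -> marginal quantiles.
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n)
    u, v = norm.cdf(z1), norm.cdf(z2)
    x = lognorm.ppf(u, s=0.5)              # consumption marginal
    y = lognorm.ppf(v, s=0.5, scale=1.2)   # income marginal, median 1.2
    return float(np.mean(x > y))
```

As the abstract warns, the estimate depends strongly on the dependence parameter: stronger positive dependence between income and consumption shrinks P(X > Y), so assuming independence overstates fragility.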

17.
Copula, marginal distributions and model selection: a Bayesian note
Copula functions and marginal distributions are combined to produce multivariate distributions. We show advantages of estimating all parameters of these models using the Bayesian approach, which can be done with standard Markov chain Monte Carlo algorithms. Deviance-based model selection criteria are also discussed when applied to copula models since they are invariant under monotone increasing transformations of the marginals. We focus on the deviance information criterion. The joint estimation takes into account all dependence structure of the parameters’ posterior distributions in our chosen model selection criteria. Two Monte Carlo studies are conducted to show that model identification improves when the model parameters are jointly estimated. We study the Bayesian estimation of all unknown quantities at once considering bivariate copula functions and three known marginal distributions.

18.
A non-parametric transformation function is introduced to transform data to any continuous distribution. When transformation of data to normality is desired, the use of a suitable parametric pre-transformation function improves the performance of the proposed non-parametric transformation function. The resulting semi-parametric transformation function is shown empirically, via a Monte Carlo study, to perform at least as well as any parametric transformation currently available in the literature.
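The idea of a non-parametric transformation to normality can be illustrated with the familiar rank-based (normal-scores) transform z_i = Φ⁻¹((r_i − 0.5)/n), a minimal stand-in for the paper's transformation function rather than its actual method:

```python
import numpy as np
from scipy.stats import norm, rankdata

def to_normal_scores(x):
    """Rank-based transform of a sample toward standard normality:
    z_i = Phi^{-1}((r_i - 0.5) / n), where r_i is the rank of x_i."""
    x = np.asarray(x, dtype=float)
    r = rankdata(x)                   # average ranks are used for ties
    return norm.ppf((r - 0.5) / x.size)

# A heavily skewed sample becomes approximately standard normal:
rng = np.random.default_rng(0)
skewed = rng.exponential(size=1000)
z = to_normal_scores(skewed)
```

The transform is monotone (order-preserving), which is the property the paper's semi-parametric construction also maintains when composing a parametric pre-transformation with a non-parametric step.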

19.
In this paper, we establish the strong consistency and asymptotic normality of the least squares (LS) estimators in simple linear errors-in-variables (EV) regression models when the errors form a stationary α-mixing sequence of random variables. Quadratic-mean consistency is also considered.

20.
Multiple imputation is widely accepted as the method of choice to address item nonresponse in surveys. Nowadays, most statistical software packages include features to multiply impute missing values in a dataset. Nevertheless, application to real data raises many implementation problems. Defining useful imputation models for a dataset that consists of categorical and possibly skewed continuous variables and contains skip patterns and all sorts of logical constraints is a challenging task. Besides, in most applications little attention is paid to evaluating the underlying assumptions of the imputation models.

