Similar Documents
20 similar documents found.
1.
As treatments for cancer progress, a certain number of cancers are curable if diagnosed early. In population-based cancer survival studies, cure is said to occur when the mortality rate of the cancer patients returns to the same level as that expected for the general cancer-free population. Estimates of the cure fraction are of interest to both cancer patients and health policy makers. Mixture cure models have been widely used because they are easy to interpret, separating the patients into two distinct groups. Usually parametric models are assumed for the latent distribution of the uncured patients, and the estimate of the cure fraction from the mixture cure model may be sensitive to misspecification of this latent distribution. We propose a Bayesian approach to the mixture cure model for population-based cancer survival data, which can be extended to county-level cancer survival data. Instead of modeling the latent distribution by a fixed parametric distribution, we use a finite mixture of the union of the lognormal, loglogistic, and Weibull distributions. The parameters are estimated using the Markov chain Monte Carlo method. A simulation study shows that the Bayesian method using a finite-mixture latent distribution provides robust inference for the parameter estimates. The proposed Bayesian method is applied to relative survival data for colon cancer patients from the Surveillance, Epidemiology, and End Results (SEER) Program to estimate cure fractions. The Canadian Journal of Statistics 40: 40–54; 2012 © 2012 Statistical Society of Canada
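As a sketch of the model structure (the notation below is ours, not taken from the paper): a mixture cure model writes the population survival function as a two-component mixture, and here the latency survival function is itself a finite mixture over candidate parametric families,

$$ S_{\mathrm{pop}}(t) = \pi + (1-\pi)\,S_u(t), \qquad S_u(t) = \sum_{k=1}^{K} w_k\,S_k(t), \qquad \sum_{k=1}^{K} w_k = 1, $$

where $\pi$ is the cure fraction and each $S_k$ is a lognormal, loglogistic, or Weibull survival function, so the data can choose the shape of the latent distribution rather than having it fixed in advance.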

2.
In this paper, the destructive negative binomial (DNB) cure rate model with a latent activation scheme [V. Cancho, D. Bandyopadhyay, F. Louzada, and B. Yiqi, The DNB cure rate model with a latent activation scheme, Statistical Methodology 13 (2013b), pp. 48–68] is extended to the case where the observations are grouped into clusters. Parameter estimation is performed using the restricted maximum likelihood approach and a Bayesian approach with Dirichlet process priors. An application to a real data set from a sealant study in a dentistry experiment illustrates the performance of the proposed model.

3.
We consider improving the estimation of the parameters of diffusion processes for interest rates by incorporating information in bond prices. This is designed to improve the estimation of the drift parameters, which are known to be subject to large estimation errors. It is shown that having the bond prices together with the short rates leads to more efficient estimation of all parameters of the interest rate models, enhancing the efficiency of maximum likelihood estimation based on the interest rate dynamics alone. The combined estimation based on the bond prices and the interest rate dynamics can also provide inference for the risk premium parameter. Simulation experiments were conducted to confirm the theoretical properties of the estimators concerned. We analyze the overnight Fed funds rate together with U.S. Treasury bond prices. Supplementary materials for this article are available online.
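The abstract does not name a specific diffusion, so as one hedged illustration consider the Vasicek model, for which the zero-coupon bond price is available in closed form and can be combined with the short-rate likelihood:

$$ dr_t = \kappa(\theta - r_t)\,dt + \sigma\,dW_t, \qquad P(t,T) = A(t,T)\,e^{-B(t,T)\,r_t}, \qquad B(t,T) = \frac{1 - e^{-\kappa(T-t)}}{\kappa}. $$

Observed bond prices then carry information about the drift parameters $(\kappa, \theta)$ and, under the risk-neutral dynamics, about the market price of risk as well; the paper's actual model choice may differ.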

4.
The multivariate t linear mixed model (MtLMM) has recently been proposed as a robust tool for analysing multivariate longitudinal data with atypical observations. Missing outcomes frequently occur in longitudinal research, even in well-controlled situations. As a powerful alternative to the traditional expectation-maximization based algorithm employing single imputation, we consider a Bayesian analysis of the MtLMM that accounts for the uncertainties of model parameters and missing outcomes through multiple imputation. An inverse Bayes formulae (IBF) sampler coupled with a Metropolis-within-Gibbs scheme is used to effectively draw from the posterior distributions of the latent data and model parameters. Techniques for multiple imputation of missing values, estimation of random effects, prediction of future responses, and diagnostics of potential outliers are investigated as well. The proposed methodology is illustrated through a simulation study and an application to AIDS/HIV data.

5.
Very often in psychometric research, as in educational assessment, it is necessary to analyze item responses from clustered respondents. The multiple group item response theory (IRT) model proposed by Bock and Zimowski [12] provides a useful framework for analyzing this type of data. In this model, the selected groups of respondents are of specific interest, so group-specific population distributions need to be defined. The usual assumption for parameter estimation in this model, namely that the latent traits are random variables following different symmetric normal distributions, has been questioned in many works in the IRT literature; when this assumption does not hold, misleading inference can result. In this paper, we assume that the latent traits of each group follow different skew-normal distributions under the centered parameterization, and we name the result the skew multiple group IRT model. This modeling extends the works of Azevedo et al. [4], Bazán et al. [11], and Bock and Zimowski [12] concerning the latent trait distribution. Our approach ensures that the model is identifiable. We propose and compare, with respect to convergence, two Markov chain Monte Carlo (MCMC) algorithms for parameter estimation. A simulation study evaluates parameter recovery for the proposed model and the selected algorithm; the results reveal that the proposed algorithm properly recovers all model parameters. Furthermore, we analyze a real data set that exhibits asymmetry in the latent trait distributions; the results obtained with our approach confirm the presence of negative asymmetry for some latent trait distributions.
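As a hedged sketch of the structure (our notation, with a two-parameter logistic item model assumed for concreteness since the abstract does not fix one): each group g gets its own skew-normal latent trait distribution under the centered parameterization,

$$ P(U_{ij}=1 \mid \theta_i) = \frac{1}{1 + \exp\{-a_j(\theta_i - b_j)\}}, \qquad \theta_i \sim \mathrm{SN}_c(\mu_g, \psi_g, \gamma_g) \ \text{for } i \in \text{group } g, $$

where $\mu_g$, $\psi_g$, and $\gamma_g$ are the group's latent mean, variance, and Pearson skewness; setting every $\gamma_g = 0$ recovers the usual normal multiple group model.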

6.
In fields such as internet financial transactions and reliability engineering, excess zero and one observations can occur simultaneously. Because such data lie beyond the range that conventional models can fit, this paper proposes a zero-and-one-inflated geometric regression model. By introducing Pólya-Gamma latent variables into the Bayesian inference, posterior sampling over high-dimensional parameters is converted into sampling the latent variables and posterior sampling over lower-dimensional parameters, respectively. This circumvents the need for Metropolis-Hastings sampling and yields draws with higher sampling efficiency. A simulation study is conducted to assess the performance of the proposed estimation for various sample sizes. Finally, a doctoral dissertation data set is analyzed to illustrate the practicability of the proposed method; the analysis shows that the zero-and-one-inflated geometric regression model using Pólya-Gamma latent variables achieves a better fit.
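The key device, stated in hedged form with our notation, is the Pólya-Gamma integral identity of Polson, Scott, and Windle: for a linear predictor $\psi = x'\beta$ in a logit-type likelihood,

$$ \frac{(e^{\psi})^{a}}{(1+e^{\psi})^{b}} = 2^{-b}\, e^{\kappa\psi} \int_0^{\infty} e^{-\omega \psi^2/2}\, p(\omega)\, d\omega, \qquad \kappa = a - \tfrac{b}{2}, \quad \omega \sim \mathrm{PG}(b, 0). $$

Conditional on the data, $\omega \mid \psi \sim \mathrm{PG}(b, \psi)$, and conditional on $\omega$ the likelihood is Gaussian in $\beta$, so the Gibbs sampler alternates between these two tractable draws instead of a Metropolis-Hastings step.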

7.
Model-based clustering methods for continuous data are well established and commonly used in a wide range of applications; model-based clustering methods for categorical data are less standard. Latent class analysis is a commonly used method for model-based clustering of binary and/or categorical data, but due to its assumed local independence structure there may not be a correspondence between the estimated latent classes and groups in the population of interest. The mixture of latent trait analyzers model extends latent class analysis by assuming a model for the categorical response variables that depends on both a categorical latent class and a continuous latent trait variable; the discrete latent class accommodates group structure, and the continuous latent trait accommodates dependence within these groups. Fitting the mixture of latent trait analyzers model is potentially difficult because the likelihood function involves an integral that cannot be evaluated analytically. We develop a variational approach for fitting the mixture of latent trait analyzers model, which provides an efficient model-fitting strategy. The model is demonstrated on data from the National Long Term Care Survey (NLTCS) and on voting in the U.S. Congress. It yields intuitive clustering results and gives a much better fit than either latent class analysis or latent trait analysis alone.
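In a hedged sketch with our notation, the mixture of latent trait analyzers model for binary responses $x_n = (x_{n1}, \dots, x_{nM})$ is

$$ p(x_n) = \sum_{g=1}^{G} \eta_g \int \prod_{m=1}^{M} p(x_{nm} \mid \theta, g)\,\phi(\theta)\,d\theta, \qquad \mathrm{logit}\, p(x_{nm} = 1 \mid \theta, g) = b_{mg} + w_{mg}'\theta, $$

where $\eta_g$ are class weights and $\phi$ is a standard normal density; the intractable integral over $\theta$ is what the variational approximation replaces with a tractable lower bound.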

8.
In this article, we introduce a new method for modelling curves with dynamic structures, using a non-parametric approach formulated as a state space model. The non-parametric approach is based on penalised splines, represented as a dynamic mixed model. This formulation can capture the dynamic evolution of curves using a limited number of latent factors, allowing an accurate fit with a small number of parameters. We also present a new method to determine the optimal smoothing parameter through an adaptive procedure, using a formulation analogous to a stochastic volatility (SV) model. The non-parametric state space model allows us to unify different methods applied to data with a functional structure in finance. We present the advantages and limitations of this method through simulation studies, and also by comparing its predictive performance with other parametric and non-parametric methods used in financial applications, using data on the term structure of interest rates.
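As a hedged sketch (generic notation, not the paper's): writing the penalised-spline fit in linear Gaussian state space form lets the Kalman filter handle both the curve fit and its evolution over time,

$$ y_t = Z_t \alpha_t + \varepsilon_t, \quad \varepsilon_t \sim N(0, \sigma_\varepsilon^2 I), \qquad \alpha_{t+1} = T \alpha_t + \eta_t, \quad \eta_t \sim N(0, Q), $$

where $\alpha_t$ collects the spline coefficients (the latent factors), $Z_t$ holds the spline basis evaluated at the observation points, and the ratio of the state and observation variances plays the role of the smoothing parameter.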

9.
A simple computational method for estimating parameters via a type of EM algorithm is proposed for restricted latent class analysis, where equality and constant constraints are considered. These constraints create difficulty in estimation. In order to estimate parameters in restricted latent class analysis simply and stably, a computational method using only first-order differentials is proposed, in which the step-halving method is adopted. A simulation study shows that in almost all cases the new method gives parameter sequences that monotonically increase the Q-function in the EM algorithm. An analysis of real data is provided.
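The step-halving idea can be sketched generically in Python. This is our illustration of the device, not the authors' exact algorithm: grad_q and q_func are hypothetical callables standing in for the first-order differential and the Q-function of a particular restricted latent class model.

```python
import numpy as np

def m_step_with_halving(theta, grad_q, q_func, max_halvings=20):
    """One first-order M-step update with step-halving.

    theta   : current parameter vector (NumPy array)
    grad_q  : callable returning the first-order differential of the
              Q-function at theta (E-step quantities held fixed)
    q_func  : callable returning the Q-function value at theta
    """
    q_old = q_func(theta)
    step = 1.0
    for _ in range(max_halvings):
        candidate = theta + step * grad_q(theta)
        if q_func(candidate) >= q_old:   # accept only if Q does not decrease
            return candidate
        step *= 0.5                      # otherwise halve the step and retry
    return theta                         # fall back to the current value
```

Because each update is accepted only when the Q-function does not decrease, the resulting parameter sequence increases Q monotonically, which is the behavior the simulation study reports.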

10.
This paper describes inference methods for functional data under the assumption that the functional data of interest are smooth latent functions, characterized by a Gaussian process, that have been observed with noise over a finite set of time points. The methods we propose are completely specified in a Bayesian environment that allows all inferences to be performed through a simple Gibbs sampler. Our main focus is on estimating and describing uncertainty in the covariance function. However, these models also encompass functional data estimation, functional regression where the predictors are latent functions, and an automatic approach to smoothing parameter selection. Furthermore, these models require minimal assumptions on the data structure: the time points for observations do not need to be equally spaced, the number and placement of observations are allowed to vary among functions, and no special treatment is required when the number of functional observations is less than the dimensionality of those observations. We illustrate the effectiveness of these models in estimating latent functional data, capturing variation in the functional covariance estimate, and selecting appropriate smoothing parameters, in both a simulation study and a regression analysis of medfly fertility data.
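In a hedged sketch with our notation, the observation model described here is

$$ y_{ij} = f_i(t_{ij}) + \varepsilon_{ij}, \qquad \varepsilon_{ij} \sim N(0, \sigma^2), \qquad f_i \sim \mathcal{GP}(\mu, K), $$

where each latent function $f_i$ is observed with noise at its own set of time points $t_{ij}$; the covariance function $K$ is the central object of inference, and conditionally conjugate priors on $\mu$, $K$, and $\sigma^2$ are what make a plain Gibbs sampler sufficient.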

11.
Population-parameter mapping (PPM) is a method for estimating the parameters of latent scientific models that describe the statistical likelihood function. The PPM method involves Bayesian inference in terms of the statistical parameters and the mapping from the statistical parameter space to the parameter space of the latent scientific parameters, and it produces a model coherence estimate, P(coh). The P(coh) statistic can be valuable for designing experiments and comparing competing models, and can be helpful in redesigning flawed models. Examples are provided where the PPM point estimates achieve greater estimation precision for small sample sizes than the maximum likelihood estimator (MLE).

12.
This study proposes a class of non-linear realized stochastic volatility (SV) models by applying the Box-Cox (BC) transformation, instead of the logarithmic transformation, to the realized estimator. Non-Gaussian distributions such as Student's t, non-central Student's t, and generalized hyperbolic skew Student's t-distributions are applied to accommodate heavy-tailedness and skewness in returns. The proposed models are fitted to daily returns and realized kernels of six stock indices: SP500, FTSE100, Nikkei225, Nasdaq100, DAX, and DJIA, using a Markov chain Monte Carlo Bayesian method in which the Hamiltonian Monte Carlo (HMC) algorithm updates the BC parameter and the Riemann manifold HMC algorithm updates the latent variables and other parameters that cannot be sampled directly. Empirical studies provide evidence against both the logarithmic-transformation and raw versions of the realized SV model.
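For reference (the standard definition, not specific to this paper), the Box-Cox transformation applied to the realized measure $x > 0$ is

$$ x^{(\lambda)} = \begin{cases} (x^{\lambda} - 1)/\lambda, & \lambda \neq 0, \\ \log x, & \lambda = 0, \end{cases} $$

so $\lambda = 0$ recovers the usual logarithmic realized SV model and $\lambda = 1$ corresponds, up to an affine shift, to the raw version, with $\lambda$ estimated from the data rather than fixed in advance.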

13.
This article proposes a Bayesian estimation framework for a typical multi-factor model with time-varying risk exposures to macroeconomic risk factors, and corresponding premia, to price U.S. publicly traded assets. The model assumes that risk exposures and idiosyncratic volatility follow a break-point latent process, allowing for changes at any point in time without restricting them to change at all points. The empirical application to 40 years of U.S. data and 23 portfolios shows that the approach yields sensible results compared to previous two-step methods based on naive recursive estimation schemes, as well as to a set of alternative model restrictions. A variance decomposition test shows that although most of the predictable variation comes from the market risk premium, a number of additional macroeconomic risks, including real output and inflation shocks, are significantly priced in the cross-section. A Bayes factor analysis massively favors the proposed change-point model. Supplementary materials for this article are available online.

14.
Models that involve an outcome variable, covariates, and latent variables are frequently the target for estimation and inference. The presence of missing covariate or outcome data presents a challenge, particularly when missingness depends on the latent variables. This missingness mechanism is called latent ignorable, or latent missing at random, and is a generalisation of missing at random. Several authors have previously proposed approaches for handling latent ignorable missingness, but these methods rely on prior specification of the joint distribution for the complete data; in practice, specifying the joint distribution can be difficult and/or restrictive. We develop a novel sequential imputation procedure for imputing covariate and outcome data for models with latent variables under latent ignorable missingness. The proposed method does not require a joint model; rather, we use results under a joint model to inform imputation with less restrictive modelling assumptions. We discuss identifiability and convergence-related issues, and simulation results are presented in several modelling settings. The method is motivated and illustrated by a study of head and neck cancer recurrence.

15.
We develop Markov chain Monte Carlo algorithms for estimating the parameters of the short-term interest rate model. Using Monte Carlo experiments, we compare the Bayes estimators with the maximum likelihood and generalized method of moments estimators. We estimate the model using Japanese overnight call rate data.
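As a minimal sketch of such an MCMC scheme, assuming an Euler-discretized Vasicek short-rate model with flat priors (the paper's actual model and priors are not given in the abstract), a random-walk Metropolis sampler looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_lik(params, r, dt):
    """Euler-discretized Vasicek log-likelihood: dr = kappa*(theta - r)*dt + sigma*dW."""
    kappa, theta, sigma = params
    if kappa <= 0 or sigma <= 0:
        return -np.inf                       # rule out invalid parameters
    mean = r[:-1] + kappa * (theta - r[:-1]) * dt
    var = sigma ** 2 * dt
    resid = r[1:] - mean
    return -0.5 * np.sum(resid ** 2 / var + np.log(2 * np.pi * var))

def metropolis(r, dt, n_iter=5000):
    """Random-walk Metropolis over (kappa, theta, sigma) with flat priors."""
    scales = np.array([0.1, 0.005, 0.002])   # per-parameter proposal steps
    params = np.array([0.5, np.mean(r), np.std(np.diff(r)) / np.sqrt(dt)])
    ll = log_lik(params, r, dt)
    draws = np.empty((n_iter, 3))
    for i in range(n_iter):
        prop = params + scales * rng.standard_normal(3)
        ll_prop = log_lik(prop, r, dt)
        if np.log(rng.uniform()) < ll_prop - ll:   # MH acceptance step
            params, ll = prop, ll_prop
        draws[i] = params
    return draws

# Synthetic daily short-rate path, for illustration only
dt, n = 1 / 250, 1000
r = np.empty(n); r[0] = 0.02
for t in range(n - 1):
    r[t + 1] = r[t] + 2.0 * (0.03 - r[t]) * dt + 0.02 * np.sqrt(dt) * rng.standard_normal()

draws = metropolis(r, dt)
print(draws[2500:].mean(axis=0))   # posterior means after burn-in
```

The posterior means of the retained draws are the Bayes estimators being compared against maximum likelihood and GMM in the abstract.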

16.
In this work, we define a new method of ranked set sampling (RSS) that is suitable when the characteristic (variable) Y of primary interest is jointly distributed with an auxiliary characteristic X that can be measured on any number of units: units having record values on X alone are ranked and retained for measurement of Y. We call this scheme concomitant record ranked set sampling (CRRSS). Based on observations from the proposed CRRSS, we propose estimators of the parameters associated with the variable Y of primary interest that are applicable to a very large class of distributions, namely the Morgenstern family of distributions. We illustrate the application of CRRSS and our parameter estimation technique when the underlying distribution is the Morgenstern-type bivariate logistic distribution. A primary data set collected by the CRRSS method is presented and used to illustrate the results developed in this work.
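For reference (the standard form, with our notation), the Morgenstern, or Farlie-Gumbel-Morgenstern, family referred to here has joint distribution function

$$ F_{X,Y}(x, y) = F_X(x)\,F_Y(y)\,\bigl[1 + \alpha\,(1 - F_X(x))(1 - F_Y(y))\bigr], \qquad -1 \le \alpha \le 1, $$

where $F_X$ and $F_Y$ are arbitrary marginal distribution functions and $\alpha$ controls the (mild) dependence between the auxiliary variable $X$ and the study variable $Y$.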

17.
In this article we suggest a new multivariate autoregressive process for modeling time-dependent extreme-value-distributed observations. The idea behind the approach is to transform the original observations into latent variables that are univariate normally distributed, and then fit the vector autoregressive DCC model to the multivariate latent process. The distributional properties of the suggested model are studied extensively. The process parameters are estimated with a two-stage estimation procedure, and we derive a prediction interval for future values of the suggested process. The results are applied in an empirical study modeling the behavior of extreme daily stock prices.
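The transformation step can be sketched as a probability integral transform (our notation; the paper's exact construction may differ in detail): if component $i$ of the observation vector follows an extreme value distribution with cdf $G_i$, then

$$ Z_{i,t} = \Phi^{-1}\!\bigl(G_i(X_{i,t})\bigr) $$

is standard normal, where $\Phi^{-1}$ is the standard normal quantile function, and the DCC vector autoregression is fitted to the latent process $Z_t$ rather than to the extremes themselves.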

18.
This article investigates whether the impact of uncertainty shocks on the U.S. economy has changed over time. To this end, we develop an extended factor-augmented vector autoregression (VAR) model that simultaneously allows the estimation of a measure of uncertainty and its time-varying impact on a range of variables. We find that the impact of uncertainty shocks on real activity and financial variables has declined systematically over time. In contrast, the response of inflation and the short-term interest rate to this shock has remained fairly stable. Simulations from a nonlinear dynamic stochastic general equilibrium (DSGE) model suggest that these empirical results are consistent with an increase in the monetary authorities' anti-inflation stance and a "flattening" of the Phillips curve. Supplementary materials for this article are available online.

19.
In this paper we present an indirect estimation procedure for fractional (ARFIMA) time series models. The estimation method is based on an 'incorrect' criterion that does not directly provide a consistent estimator of the parameters of interest, but leads to correct inference by using simulations.

The main steps are the following. First, we consider an auxiliary model that can be easily estimated; specifically, we choose the finite-lag autoregressive model. Then, this auxiliary model is estimated both on the observations and on simulated values drawn from the ARFIMA model associated with a given value of the parameters of interest. Finally, the latter value is calibrated so that the two estimates of the auxiliary parameters are close.

In this article, we describe the estimation procedure and compare the performance of the indirect estimator with some alternative estimators based on the likelihood function by a Monte Carlo study.
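The calibration step admits the standard indirect inference formulation (our notation, stated as a hedged sketch): with $\hat\beta$ the AR($p$) auxiliary estimates from the observed series and $\tilde\beta(\theta)$ the corresponding estimates from series simulated under ARFIMA parameters $\theta$, the indirect estimator solves

$$ \hat\theta = \arg\min_{\theta}\; \bigl(\hat\beta - \tilde\beta(\theta)\bigr)'\,\Omega\,\bigl(\hat\beta - \tilde\beta(\theta)\bigr) $$

for some positive definite weighting matrix $\Omega$; although the auxiliary AR model is misspecified for long memory, matching its estimates across real and simulated data yields consistent estimates of $\theta$.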

20.
Abstract

A class of objective functions related to the Cox partial likelihood, generating unbiased estimating equations, is proposed. These equations allow estimation of the interest parameters when nuisance parameters are proportional to expectations. Examples of the objective functions are applied to binary data with a log-link in three situations: independent observations, independent groups of observations with a common random intercept, and discrete survival data. It is pointed out that the Peto-Breslow approximation to the partial likelihood with discrete failure times fits a conditional model with a log-link.
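For reference (the standard form, with our notation), the Cox partial likelihood to which these objective functions are related is

$$ L(\beta) = \prod_{i:\,\delta_i = 1} \frac{\exp(\beta' x_i)}{\sum_{j \in R(t_i)} \exp(\beta' x_j)}, $$

where the product runs over observed failure times, $x_i$ are covariates, and $R(t_i)$ is the risk set at time $t_i$; its key feature, exploited by the proposed class, is that multiplicative nuisance terms cancel between numerator and denominator.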
