Similar documents
20 similar documents found.
1.
The paper deals with discrete-time regression models for analysing multistate, multi-episode models for event history or failure time data collected in follow-up studies, retrospective studies, or longitudinal panels. The models are applicable when events are not dated exactly but only a time interval is recorded. They include individual-specific parameters to account for unobserved heterogeneity, and the explanatory variables may be time-varying and random, with distributions depending on the observed history of the process. Three estimation procedures are considered: estimating both the structural and the individual-specific parameters by maximising a joint likelihood; estimating the structural parameters by maximising a conditional likelihood, conditioning on a set of sufficient statistics for the individual-specific parameters; and estimating the structural parameters by maximising a marginal likelihood, assuming the individual-specific parameters follow a distribution. The advantages and limitations of the different approaches are discussed.
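As a minimal sketch of the discrete-time setup described above (not code from the paper): interval-censored durations are typically expanded into person-period records, so the discrete-time likelihood becomes a binary-response likelihood. The function name and data layout here are invented for illustration.

```python
import numpy as np

def person_period(durations, events):
    # Expand (duration, event) pairs into the person-period form used to
    # fit discrete-time hazard models: one row per subject per interval,
    # with a 0/1 outcome that is 1 only in the interval where the event occurs.
    rows = []
    for d, e in zip(durations, events):
        for t in range(1, d + 1):
            rows.append((t, 1 if (t == d and e) else 0))
    return np.array(rows)
```

A subject observed for 3 intervals who experiences the event yields rows (1,0), (2,0), (3,1); a censored subject contributes only zero-outcome rows.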

2.
It is appealing to extend existing theory for classical linear models to correlated responses, where linear mixed-effects models are used and the dependency in the data is modelled by random effects. In the mixed-modelling framework, missing values occur naturally because of dropouts or non-responses, which is frequently encountered in real data. Motivated by such problems, we investigate estimation and model-selection performance in linear mixed models when missing data are present. Inspired by the property of the indicator function for missingness and its relation to missing rates, we propose an approach that records missingness in an indicator-based matrix and derive likelihood-based estimators for all parameters of the linear mixed-effects model. Based on the proposed estimation method, we explore how estimation and selection behaviour vary with the missing rate. Simulations and a real-data application illustrate the effectiveness of the proposed method in selecting the most appropriate model and in estimating parameters.
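A minimal sketch of the indicator-based recording of missingness that the abstract mentions; the data and variable names are invented, and the paper's actual estimators are of course more involved.

```python
import numpy as np

# Hypothetical response matrix (subjects x repeated measures) with missing cells.
Y = np.array([[1.2, np.nan, 3.0],
              [0.5, 2.1, np.nan],
              [np.nan, 1.0, 4.2]])

# Indicator matrix: 1 = observed, 0 = missing. Likelihood contributions can
# then be restricted to the observed cells, and the overall missing rate
# falls out of the same object.
R = (~np.isnan(Y)).astype(int)
missing_rate = 1 - R.mean()
```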

3.
This paper deals with local sensitivity analysis in ordered-parameter models. In addition to order restrictions, constraints imposed on the parameters by the model and/or the data are considered. Measures are presented for assessing how much a change in the data modifies the results and conclusions of a statistical analysis of these models. The sensitivity measures are derived using recent results in mathematical programming: the estimation problem is formulated as a primal nonlinear programming problem, and the sensitivities of the parameter estimates, as well as of the objective function, with respect to the data are obtained. They are very effective in revealing influential observations in this type of model and in evaluating the effects of changes in the data values. The methods are illustrated on a wide variety of order-restricted models, including ordered exponential-family parameters, ordered multinomial parameters, ordered linear-model parameters, ordered and data-constrained parameters, and ordered functions of parameters.

4.
An EM algorithm for multivariate Poisson distribution and related models (cited 2 times in total: 0 self-citations, 2 by others)
Multivariate extensions of the Poisson distribution are plausible models for multivariate discrete data, but the lack of estimation and inference procedures has limited their applicability. In this paper, an EM algorithm for maximum likelihood estimation of the parameters of the multivariate Poisson distribution is described. The algorithm is based on the multivariate reduction technique that generates the multivariate Poisson distribution. Illustrative examples are provided, and extension to other models generated via multivariate reduction is discussed.
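As an illustrative sketch of the multivariate-reduction idea in its simplest bivariate case (X1 = Z1 + Z0, X2 = Z2 + Z0 with independent Poisson components Z0, Z1, Z2): an EM algorithm can treat the shared component Z0 as missing data. This is a toy version under those assumptions, not the paper's general multivariate implementation.

```python
import numpy as np
from math import exp, factorial

def bp_pmf(x1, x2, l0, l1, l2):
    # Bivariate Poisson pmf via the reduction X1 = Z1 + Z0, X2 = Z2 + Z0.
    x1, x2 = int(x1), int(x2)
    if x1 < 0 or x2 < 0:
        return 0.0
    s = sum(l1**(x1-k) / factorial(x1-k) * l2**(x2-k) / factorial(x2-k)
            * l0**k / factorial(k) for k in range(min(x1, x2) + 1))
    return exp(-(l0 + l1 + l2)) * s

def em_bivariate_poisson(x1, x2, n_iter=200):
    l0, l1, l2 = 0.5, np.mean(x1), np.mean(x2)   # crude starting values
    for _ in range(n_iter):
        # E-step: conditional expectation of the shared component Z0.
        s = np.array([l0 * bp_pmf(a-1, b-1, l0, l1, l2) / bp_pmf(a, b, l0, l1, l2)
                      for a, b in zip(x1, x2)])
        # M-step: closed-form updates from E[X1] = l1 + l0, E[X2] = l2 + l0.
        l0 = s.mean()
        l1 = np.mean(x1) - l0
        l2 = np.mean(x2) - l0
    return l0, l1, l2

# Sanity check on simulated data with (l0, l1, l2) = (1.0, 2.0, 1.5).
rng = np.random.default_rng(1)
z0 = rng.poisson(1.0, 1000)
x1 = z0 + rng.poisson(2.0, 1000)
x2 = z0 + rng.poisson(1.5, 1000)
l0, l1, l2 = em_bivariate_poisson(x1, x2)
```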

5.
The class of joint mean-covariance models uses the modified Cholesky decomposition of the within-subject covariance matrix to arrive at an unconstrained, statistically meaningful reparameterisation. The new parameterisation of the covariance matrix has two sets of parameters that separately describe the variances and the correlations; together with the mean (regression) parameters, these models thus have three distinct sets of parameters. To alleviate the inefficiency and downward bias in the variance estimates inherent in maximum likelihood estimation, the usual REML procedure adjusts for the degrees of freedom lost in estimating the mean parameters. Because of the parameterisation of joint mean-covariance models, the usual REML procedure can be adapted to estimate the variance (correlation) parameters while accounting for the degrees of freedom lost in estimating both the mean and the correlation (variance) parameters. To this end, we propose adjustments to the estimation procedures based on modified and adjusted profile likelihoods. The methods are illustrated by an application to a real data set and by simulation studies. The Canadian Journal of Statistics 40: 225–242; 2012 © 2012 Statistical Society of Canada
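A small sketch of the modified Cholesky decomposition underlying these models: for a within-subject covariance matrix Σ it finds a unit lower-triangular T and diagonal D with T Σ T' = D, whose entries are unconstrained and statistically interpretable. The helper below is illustrative, not the paper's code.

```python
import numpy as np

def modified_cholesky(sigma):
    """Return (T, D) with T unit lower triangular and T @ sigma @ T.T = D.
    The below-diagonal entries of -T are the generalised autoregressive
    parameters; the diagonal of D holds the innovation variances, which
    are unconstrained on the log scale."""
    L = np.linalg.cholesky(sigma)          # sigma = L L'
    d = np.diag(L)
    T = np.diag(d) @ np.linalg.inv(L)      # T sigma T' = diag(d)^2
    return T, np.diag(d**2)
```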

6.
Even though integer-valued time series are common in practice, methods for their analysis have been developed only in the recent past. Several models for stationary processes with discrete marginal distributions have been proposed in the literature; such processes assume the model parameters remain constant throughout the observation period, which need not hold in practice. In this paper, we introduce non-stationary integer-valued autoregressive (INAR) models with structural breaks for situations where the parameters of the INAR process do not remain constant over time. Such models are useful for modelling count-data time series with structural breaks. Bayesian and Markov chain Monte Carlo (MCMC) procedures for estimating the parameters and break points of these models are discussed. We illustrate the model and estimation procedure with a simulation study, and apply the proposed model to two real biometrical data sets.
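To make the model concrete, here is a hypothetical simulator for an INAR(1) process with a single structural break, X_t = α ∘ X_{t-1} + ε_t with binomial thinning "∘" and Poisson innovations; the function name and defaults are invented for this sketch.

```python
import numpy as np

def simulate_inar1(n, alpha, lam, break_point=None, alpha2=None, lam2=None, seed=0):
    # Binomial thinning: alpha ∘ x ~ Binomial(x, alpha); innovations ~ Poisson(lam).
    # Parameters switch from (alpha, lam) to (alpha2, lam2) at break_point.
    rng = np.random.default_rng(seed)
    x = np.zeros(n, dtype=int)
    for t in range(1, n):
        a, l = alpha, lam
        if break_point is not None and t >= break_point:
            a, l = alpha2, lam2
        x[t] = rng.binomial(x[t-1], a) + rng.poisson(l)
    return x
```

With a jump in the innovation mean at the break, the marginal mean shifts from roughly lam/(1-alpha) before the break to lam2/(1-alpha2) after it.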

7.
This paper constructs three types of monetary policy rules (backward-looking, contemporaneous, and forward-looking) and, using both real-time and final data, empirically analyses how data revisions and real-time estimation affect the monetary policy parameters. The results show that the impact of data revisions on the Taylor rule depends on the model: in all three specifications, the time-varying parameters targeting the output gap and inflation are affected by data revisions to varying degrees. In particular, estimation with the contemporaneous policy rule is most effective for final data, whereas estimation with the backward-looking rule performs best for real-time data. Finally, the paper offers recommendations on data selection and model matching.

8.
The model parameters of linear state-space models are typically estimated by maximum likelihood, with the likelihood computed analytically by the Kalman filter. Outliers can deteriorate this estimation, so we propose an alternative method: the Kalman filter is replaced by a robust version, and the maximum likelihood estimator is robustified as well. The performance of the robust estimator is investigated in a simulation study, with robust estimation of time-varying parameter regression models considered as a special case. Finally, the methodology is applied to real data.
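For reference, the (non-robust) Kalman-filter likelihood that the paper starts from can be sketched for the simplest local-level model; the robust version replaces these recursions, which this toy does not attempt.

```python
import numpy as np

def kalman_loglik(y, q, r, a0=0.0, p0=1e6):
    """Prediction-error log-likelihood of a local-level model
    y_t = a_t + eps_t (variance r), a_{t+1} = a_t + eta_t (variance q),
    computed with the standard Kalman recursions (illustrative sketch)."""
    a, p, ll = a0, p0, 0.0
    for yt in y:
        f = p + r                          # prediction-error variance
        v = yt - a                         # one-step prediction error
        ll += -0.5 * (np.log(2 * np.pi * f) + v * v / f)
        k = p / f                          # Kalman gain
        a, p = a + k * v, p * (1 - k) + q  # update and transition
    return ll
```

Maximising this function over (q, r), e.g. with a numerical optimiser, gives the classical maximum likelihood estimates that the robust procedure is designed to safeguard against outliers.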

9.
Estimating the parameters of a stochastic volatility (SV) model is a challenging task. Among other estimation methods and approaches, efficient simulation methods based on importance sampling have been developed for Monte Carlo maximum likelihood estimation of univariate SV models. This paper shows that importance sampling methods can be used in a general multivariate SV setting, and that the sampling methods are computationally efficient. To illustrate the versatility of the approach, three different multivariate stochastic volatility models are estimated for a standard data set, and the empirical results are compared to those of earlier studies. Monte Carlo simulation experiments, based on parameter estimates from the standard data set, demonstrate the effectiveness of the importance sampling methods.
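The importance-sampling idea can be sketched on a toy integral with a known answer: estimating log ∫ N(y; a, 1) N(a; 0, 1) da, whose exact value is log N(y; 0, sd = √2). In a real SV model the latent variable is a whole volatility path; everything here is a simplified stand-in with invented names.

```python
import numpy as np

def norm_logpdf(x, mu, sd):
    return -0.5 * np.log(2 * np.pi * sd**2) - (x - mu)**2 / (2 * sd**2)

def is_log_marginal(y, n_draws=100_000, seed=0):
    # Importance-sampling estimate of log ∫ N(y; a, 1) N(a; 0, 1) da
    # using proposal a ~ N(0, 1.5^2); log-weights are stabilised with
    # the log-sum-exp (max) trick before averaging.
    rng = np.random.default_rng(seed)
    a = rng.normal(0.0, 1.5, n_draws)
    log_w = (norm_logpdf(y, a, 1.0)       # log p(y | a)
             + norm_logpdf(a, 0.0, 1.0)   # log p(a)
             - norm_logpdf(a, 0.0, 1.5))  # minus log proposal q(a)
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))
```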

10.
In this article, we present EM algorithms for maximum likelihood estimation in three multivariate skew-normal regression models of considerable practical interest. We also consider restricted estimation of the parameters of certain important special cases of two of the models. The methodology is applied to longitudinal data on dental plaque and cholesterol levels.

12.
In testing, item response theory models are widely used to estimate item parameters and individual abilities. However, even unidimensional models require a considerable sample size for all parameters to be estimated precisely. Introducing empirical prior information about candidates and items can reduce the number of candidates needed for parameter estimation. Using data from IQ measurement, this work shows how empirical information about items can be used effectively for item calibration and in adaptive testing. First, we propose multivariate regression trees to predict the item parameters from a set of covariates related to the item-solving process. We then compare item parameter estimates obtained when the tree-fitted values are included in the estimation with those obtained when they are ignored. Model estimation is fully Bayesian and is conducted via Markov chain Monte Carlo methods. The results are two-fold: (a) in item calibration, introducing prior information is effective with short test lengths and small sample sizes; and (b) in adaptive testing, using the tree-fitted values instead of the estimated parameters leads to a moderate increase in test length but provides a considerable saving of resources.

13.
Linear regression models are useful statistical tools for analysing data sets in many fields. Several methods exist for estimating the parameters of a linear regression model, and they usually assume normally distributed, uncorrelated errors. If the error terms are correlated, the conditional maximum likelihood (CML) estimation method under a normality assumption is often used to estimate the parameters of interest. The CML method requires a distributional assumption on the error terms, which may not be plausible in practice. In this paper, we propose estimating the parameters of a linear regression model with an autoregressive error term using the empirical likelihood (EL) method, which is distribution-free. A small simulation study evaluates the performance of the proposed method against CML. The results show that the proposed EL-based estimators are markedly better than the CML estimators in terms of mean squared error (MSE) and bias in almost all simulation configurations. These findings are confirmed by numerical and real-data examples.
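As a distribution-free sketch of the EL machinery in its simplest case (the mean of a scalar sample, not the paper's autoregressive regression setting), the Lagrange multiplier is found by Newton's method and the -2 log EL ratio, asymptotically chi-squared with one degree of freedom, is returned:

```python
import numpy as np

def el_logratio_mean(x, mu, n_iter=50):
    """-2 log empirical likelihood ratio for the mean mu (scalar case).
    Solves the Lagrange condition sum(z_i / (1 + t z_i)) = 0, z_i = x_i - mu,
    by Newton's method. Illustrative sketch; assumes mu lies well inside
    the convex hull of the sample."""
    z = x - mu
    t = 0.0
    for _ in range(n_iter):
        d = 1.0 + t * z
        g = np.sum(z / d)            # Lagrange condition
        h = -np.sum(z**2 / d**2)     # its derivative in t (always negative)
        t -= g / h
    # EL weights are p_i = 1 / (n (1 + t z_i)); the statistic follows.
    return 2 * np.sum(np.log(1 + t * z))
```

The statistic is zero at the sample mean and grows as the hypothesised mean moves away from it, giving nonparametric confidence intervals by inversion.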

14.
We propose a class of general partially linear additive transformation models (GPLATM) for right-censored survival data. The class is flexible enough to cover many commonly used parametric and nonparametric survival analysis models as special cases. Based on the B-spline interpolation technique, we estimate the unknown regression parameters and functions by maximum marginal likelihood. An important feature of the estimation procedure is that it does not require the baseline or censoring cumulative distribution functions. Numerical studies illustrate that the procedure works well for moderate sample sizes.

15.
The method of target estimation developed by Cabrera and Fernholz [(1999). Target estimation for bias and mean square error reduction. The Annals of Statistics, 27(3), 1080–1104] to reduce bias and variance is applied to logistic regression models with several parameters. The expectation functions of the maximum likelihood estimators of the coefficients in one- and two-parameter logistic regression models are analysed, and simulations show a reduction in both bias and variability after targeting the maximum likelihood estimators. Beyond bias and variance reduction, targeting is also found to correct the skewness of the original statistic. An example based on real data shows the advantage of using target estimators to obtain better confidence intervals for the corresponding parameters. The notion of the target median is also presented, with some applications to logistic models.
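A generic Monte Carlo sketch of the targeting idea: find the parameter value whose expected estimate matches the observed estimate. The toy below targets the biased n-denominator variance estimator (expected value (n-1)/n times the true variance), so targeting an observed value of 0.9 with n = 10 should return roughly 1.0. The paper's setting is logistic regression; all names here are invented.

```python
import numpy as np

def target_estimate(theta_obs, estimator, simulate, lo, hi, n_sim=2000, seed=0):
    # Find theta with E_theta[estimator] = theta_obs, approximating the
    # expectation by simulation and inverting by bisection (assumes the
    # expectation function is increasing in theta on [lo, hi]).
    rng = np.random.default_rng(seed)
    def g(theta):
        return np.mean([estimator(simulate(theta, rng)) for _ in range(n_sim)])
    for _ in range(30):
        mid = 0.5 * (lo + hi)
        if g(mid) < theta_obs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy example: de-bias the n-denominator variance estimator at n = 10.
biased_var = lambda x: np.mean((x - x.mean())**2)
sim = lambda theta, rng: rng.normal(0.0, np.sqrt(theta), 10)
theta_t = target_estimate(0.9, biased_var, sim, lo=0.1, hi=5.0)
```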

16.
We propose a new methodology for maximum likelihood estimation in mixtures of nonlinear mixed-effects models (NLMEM). Such mixtures include mixtures of distributions, mixtures of structural models, and mixtures of residual error models. Since the individual parameters inside the NLMEM are not observed, we propose to combine the EM algorithm, usually used for mixture models when the mixture structure concerns an observed variable, with the stochastic approximation EM (SAEM) algorithm, which is known to be suitable for maximum likelihood estimation in NLMEM and has nice theoretical properties. The main advantage of this hybrid procedure is that it avoids the simulation step for the unknown group labels required by a "full" version of SAEM. The resulting MSAEM (Mixture SAEM) algorithm is implemented in the Monolix software. Several criteria for classifying subjects and estimating individual parameters are also proposed. Numerical experiments on simulated data show that MSAEM performs well in a general framework of mixtures of NLMEM: it provides an estimator close to the maximum likelihood estimator in very few iterations and is robust to initialisation. An application to pharmacokinetic (PK) data demonstrates the potential of the method for practical applications.

17.
Correlated data are commonly analysed using models constructed with population-averaged generalized estimating equations (GEEs). Specifying a population-averaged GEE model includes selecting a structure that describes the correlation of repeated measures. Accurate specification of this structure can improve efficiency, whereas finite-sample estimation of nuisance correlation parameters can inflate the variances of regression parameter estimates. Correlation structure selection criteria should therefore penalize, or account for, correlation parameter estimation. In this article, we compare recently proposed penalties in terms of their impact on correlation structure selection and regression parameter estimation, and give practical considerations for data analysts. Supplementary materials for this article are available online.
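For concreteness, two of the working-correlation structures most often compared in this literature can be built as follows (an illustrative helper, not tied to any particular GEE implementation):

```python
import numpy as np

def working_correlation(n, structure, rho):
    # Two common GEE working-correlation structures for a cluster of size n:
    # "exchangeable" has constant off-diagonal correlation rho;
    # "ar1" has correlation rho**|i-j| decaying with lag.
    if structure == "exchangeable":
        return (1 - rho) * np.eye(n) + rho * np.ones((n, n))
    if structure == "ar1":
        idx = np.arange(n)
        return rho ** np.abs(idx[:, None] - idx[None, :])
    raise ValueError(structure)
```

Selection criteria of the kind the article compares score candidate structures like these while penalising the number of correlation parameters each one requires.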

18.
The transformed likelihood approach to estimating fixed-effects dynamic panel data models has very good inferential properties but is not directly implemented in the most widely used statistical software. This paper shows how a simple model reformulation can describe the problem in terms of classical linear mixed models. The transformed likelihood approach is based on the first-differences data transformation; the results below derive from a convenient reformulation in terms of deviations from the first observations. Given invariance to data transformation, the likelihood functions defined in the two cases coincide. Because the resulting model has the classical random-effects linear form, the proposed approach significantly enlarges the set of available estimation procedures and provides a straightforward interpretation of the parameters. Moreover, the proposed specification makes available all the estimation improvements from the random-effects model literature. Simulation studies examine the robustness of the estimation method to violations of mean stationarity.

19.
M-estimation (robust estimation) of the parameters in nonlinear mixed-effects models using the Fisher scoring method is investigated in this article; it shares some features of maximum likelihood estimation, namely consistency and asymptotic normality. Score tests for autocorrelation and random effects based on M-estimation, together with their asymptotic distributions, are also studied. The performance of the test statistics is evaluated via simulations and a real-data analysis of plasma concentration data.
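The robustness idea can be sketched in its simplest form, a Huber M-estimate of location computed by iterative reweighting; the paper's Fisher-scoring estimation for nonlinear mixed models is far more general, and everything below is invented for illustration.

```python
import numpy as np

def huber_location(x, c=1.345, n_iter=50):
    # Huber M-estimate of location via iteratively reweighted averaging.
    # Observations with standardised residual |r| <= c get full weight;
    # larger residuals are down-weighted by c/|r|, bounding their influence.
    mu = np.median(x)
    s = np.median(np.abs(x - mu)) / 0.6745   # MAD scale estimate
    for _ in range(n_iter):
        r = (x - mu) / s
        w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))
        mu = np.sum(w * x) / np.sum(w)
    return mu
```

Unlike the sample mean, this estimate is barely moved by a single gross outlier.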

20.
In survey sampling, policymaking regarding the allocation of resources to subgroups (called small areas), or the determination of subgroups with specific properties in a population, should be based on reliable estimates. Information, however, is often collected at a different scale than that of these subgroups; hence, estimates can only be obtained from finer-scale data. Parametric mixed models are commonly used in small-area estimation, but the relationship between predictors and response may not be linear in some real situations. Recently, small-area estimation using a generalised linear mixed model (GLMM) with a penalised spline (P-spline) regression model for the fixed part has been proposed for analysing cross-sectional responses, both normal and non-normal. However, in many situations the responses in small areas are serially dependent over time. Such a situation is exemplified by a data set on the annual number of visits to physicians by patients seeking treatment for asthma in different areas of Manitoba, Canada. Where covariates that may predict physician visits by asthma patients (e.g. age and genetic and environmental factors) do not have a linear relationship with the response, new models are required. In the current work, using both time-series and cross-sectional data methods, we propose P-spline regression models for small-area estimation under GLMMs, covering both normal and non-normal responses. In particular, the empirical best predictors of small-area parameters and their corresponding prediction intervals are studied, with maximum likelihood used to estimate the model parameters. The performance of the proposed approach is evaluated by simulations and by analysing two real data sets (precipitation and asthma).
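A minimal sketch of the P-spline ingredient: a difference penalty on adjacent basis coefficients combined with penalised least squares. The basis matrix B is taken as given, and all names are illustrative.

```python
import numpy as np

def difference_penalty(n_basis, order=2):
    # P-spline penalty matrix: the penalty on coefficients b is
    # lam * ||D b||^2 with D the order-th difference operator, i.e.
    # the penalty matrix is D.T @ D. A 2nd-order penalty leaves
    # constant and linear coefficient patterns unpenalised.
    D = np.diff(np.eye(n_basis), n=order, axis=0)
    return D.T @ D

def penalized_fit(B, y, lam, order=2):
    # Penalised least squares: solve (B'B + lam D'D) b = B'y.
    P = difference_penalty(B.shape[1], order)
    return np.linalg.solve(B.T @ B + lam * P, B.T @ y)
```

Larger lam shrinks the fit toward a polynomial of degree order - 1; cross-validation or the mixed-model representation (as in the GLMM formulation above) chooses lam from the data.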


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)