Similar Articles
20 similar articles found.
1.
Many of the recently developed alternative econometric approaches to the construction and estimation of life-cycle consistent models using individual data can be viewed as alternative choices for conditioning variables that summarise past decisions and future anticipations. By ingenious choice of this conditioning variable and by exploitation of the duality relationships between the alternative specifications, many currently available micro-data sets can be used for the estimation of life-cycle consistent models. In reviewing the alternative approaches, their stochastic properties and implicit preference restrictions are highlighted. Indeed, empirical specifications that are parameterised in a form of direct theoretical interest can often be shown to be unnecessarily restrictive, while dual representations may provide more flexible econometric models. These results indicate the particular advantages of different types of data in retrieving life-cycle consistent preference parameters and the appropriate, most flexible, econometric approach for each type of data. A methodology for relaxing the intertemporal separability assumption is developed, and the advantages and disadvantages of alternative approaches in this framework are considered.
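To make the conditioning-variable idea concrete, a standard life-cycle consistent specification (a textbook illustration, not necessarily this paper's exact parameterisation) is the Euler equation under intertemporally separable CRRA preferences:

$$
\mathbb{E}_t\!\left[\beta\,(1+r_{t+1})\left(\frac{c_{t+1}}{c_t}\right)^{-\gamma}\right] = 1,
$$

where the discount factor $\beta$, the interest rate $r_{t+1}$, and the risk-aversion parameter $\gamma$ are the preference objects to be retrieved; past decisions and future anticipations enter only through current consumption $c_t$, which plays precisely the role of a conditioning variable.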

2.
Reply     
Many of the recently developed alternative econometric approaches to the construction and estimation of life-cycle consistent models using individual data can be viewed as alternative choices for conditioning variables that summarise past decisions and future anticipations. By ingenious choice of this conditioning variable and by exploitation of the duality relationships between the alternative specifications, many currently available micro-data sets can be used for the estimation of life-cycle consistent models. In reviewing the alternative approaches, their stochastic properties and implicit preference restrictions are highlighted. Indeed, empirical specifications that are parameterised in a form of direct theoretical interest can often be shown to be unnecessarily restrictive, while dual representations may provide more flexible econometric models. These results indicate the particular advantages of different types of data in retrieving life-cycle consistent preference parameters and the appropriate, most flexible, econometric approach for each type of data. A methodology for relaxing the intertemporal separability assumption is developed, and the advantages and disadvantages of alternative approaches in this framework are considered.

3.
Many of the recently developed alternative econometric approaches to the construction and estimation of life-cycle consistent models using individual data can be viewed as alternative choices for conditioning variables that summarise past decisions and future anticipations. By ingenious choice of this conditioning variable and by exploitation of the duality relationships between the alternative specifications, many currently available micro-data sets can be used for the estimation of life-cycle consistent models. In reviewing the alternative approaches, their stochastic properties and implicit preference restrictions are highlighted. Indeed, empirical specifications that are parameterised in a form of direct theoretical interest can often be shown to be unnecessarily restrictive, while dual representations may provide more flexible econometric models. These results indicate the particular advantages of different types of data in retrieving life-cycle consistent preference parameters and the appropriate, most flexible, econometric approach for each type of data. A methodology for relaxing the intertemporal separability assumption is developed, and the advantages and disadvantages of alternative approaches in this framework are considered.

4.
ABSTRACT

This paper analyses the behaviour of goodness-of-fit tests for regression models. To this end, it uses statistics based on an estimate of the integrated regression function when observations are missing either in the response variable or in some of the covariates. It proposes several versions of an empirical process, constructed from a preliminary estimation, that either uses only the complete observations or replaces the missing observations with imputed values. In the case of missing covariates, a link model is used to fill in the missing observations using other, fully observed covariates. In all situations, bootstrap methodology is used to calibrate the distribution of the test statistics. A broad simulation study compares the empirical-process-based procedures with smoothed tests previously studied in the literature. The comparison reflects the effect of correlation between the covariates on the tests based on the imputed sample for missing covariates. In addition, the paper proposes a computational binning strategy to evaluate the empirical-process tests on large data sets. Finally, two applications to real data illustrate the performance of the tests.
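As a rough illustration of the empirical-process approach (a minimal sketch assuming a no-intercept linear null model, a scalar covariate, and complete cases only; function names are illustrative, not the paper's), a Stute-type marked empirical process with wild-bootstrap calibration can be coded as follows:

```python
import numpy as np

def stute_statistic(x, y, beta_hat):
    """Cramer-von-Mises functional of the marked empirical process
    R_n(t) = n^{-1/2} * sum_i (y_i - beta_hat * x_i) * 1{x_i <= t},
    evaluated at the observed covariate values."""
    resid = y - beta_hat * x                       # residuals under the null
    order = np.argsort(x)
    process = np.cumsum(resid[order]) / np.sqrt(len(x))
    return np.mean(process ** 2)

def wild_bootstrap_pvalue(x, y, n_boot=500, seed=0):
    """Calibrate the test with a wild bootstrap using Rademacher weights."""
    rng = np.random.default_rng(seed)
    beta_hat = np.sum(x * y) / np.sum(x * x)       # no-intercept least squares
    t_obs = stute_statistic(x, y, beta_hat)
    resid = y - beta_hat * x
    exceed = 0
    for _ in range(n_boot):
        v = rng.choice([-1.0, 1.0], size=len(x))   # Rademacher multipliers
        y_star = beta_hat * x + v * resid          # bootstrap responses
        b_star = np.sum(x * y_star) / np.sum(x * x)
        if stute_statistic(x, y_star, b_star) >= t_obs:
            exceed += 1
    return (exceed + 1) / (n_boot + 1)
```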

5.
In longitudinal studies, missing responses and mismeasured covariates commonly arise from the data collection process. Without caution in the data analysis, inferences from standard statistical approaches may lead to wrong conclusions. In order to improve estimation in longitudinal data analysis, a doubly robust estimation method for partially linear models is proposed that simultaneously accounts for missing responses and mismeasured covariates. Imprecision in the covariates is corrected by exploiting the independence between replicate measurement errors, and missing responses are handled by doubly robust estimation under a missing at random mechanism. The asymptotic properties of the proposed estimators are established under regularity conditions, and simulation studies demonstrate the desired properties. Finally, the proposed method is applied to data from the Lifestyle Education for Activity and Nutrition study.
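For intuition, here is a minimal sketch of a doubly robust (augmented IPW) estimator of a mean response under missingness at random — far simpler than the paper's partially linear, measurement-error-corrected estimator, with illustrative names throughout and scikit-learn assumed available:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_mean(x, y, r):
    """Doubly robust (augmented IPW) estimate of E[Y] when y is missing
    at random given x.  x: (n, p) covariates; r: 1 = observed, 0 = missing;
    y may hold any placeholder where r == 0."""
    prop = LogisticRegression().fit(x, r).predict_proba(x)[:, 1]     # P(R=1|X)
    m_hat = LinearRegression().fit(x[r == 1], y[r == 1]).predict(x)  # E[Y|X]
    y0 = np.where(r == 1, y, 0.0)    # neutralize placeholders for missing y
    # Consistent if *either* the propensity or the outcome model is correct.
    return np.mean(m_hat + r * (y0 - m_hat) / prop)
```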

6.
Inverse probability weighting (IPW) and multiple imputation are two widely adopted approaches for dealing with missing data. The former models the selection probability, and the latter models the data distribution. Consistent estimation requires correct specification of the corresponding models. Although the augmented IPW method provides an extra layer of protection for consistency, it is usually not sufficient in practice because the true data-generating process is unknown. This paper proposes a method combining the two approaches in the same spirit as calibration in the survey sampling literature. Multiple models for both the selection probability and the data distribution can be accounted for simultaneously, and the resulting estimator is consistent if any one model is correctly specified. The proposed method is within the framework of estimating equations and is general enough to cover regression analysis with missing outcomes and/or missing covariates. Results of both theoretical and numerical investigations are provided.
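The multiple-imputation half of such a combination is usually finished off with Rubin's combining rules; a minimal sketch (standard MI machinery, not the paper's calibration estimator):

```python
import numpy as np

def rubin_combine(estimates, variances):
    """Combine point estimates and variances from M imputed datasets
    via Rubin's rules: total variance = within + (1 + 1/M) * between."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    qbar = estimates.mean()                  # pooled point estimate
    within = variances.mean()                # average within-imputation variance
    between = estimates.var(ddof=1)          # between-imputation variance
    total = within + (1 + 1 / m) * between   # total variance of qbar
    return qbar, total
```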

7.
Network meta-analysis can be implemented using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and two real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restrictions and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended.

8.
Mixed effect models, which contain both fixed effects and random effects, are frequently used for correlated data arising from repeated measurements (made on the same statistical units). In mixed effect models, the distributions of the random effects must be specified, and they are often assumed to be normal. The analysis of correlated data from repeated measurements can also be carried out with generalized estimating equations (GEE) by assuming some working correlation structure as an initial input. Both mixed effect models and GEE require distributional specifications (a likelihood or score function). In this article, we consider a distribution-free least squares approach in a general setting that allows missing values. This approach requires neither distributional specifications nor an initial correlation input. Consistency and asymptotic normality of the estimators are discussed.
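A minimal sketch of one working-independence least squares fit for repeated measurements (my reading of the distribution-free idea, with illustrative names; the article's estimator is more general):

```python
import numpy as np

def working_independence_ls(X_list, y_list):
    """Pooled least squares for repeated measurements: solves
    sum_i X_i' (y_i - X_i beta) = 0 across subjects, with no likelihood
    and no initial correlation input; subjects with missing visits
    contribute only their observed rows (X_i, y_i)."""
    XtX = sum(X.T @ X for X in X_list)               # accumulate normal equations
    Xty = sum(X.T @ y for X, y in zip(X_list, y_list))
    return np.linalg.solve(XtX, Xty)
```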

9.
This paper presents a novel framework for maximum likelihood (ML) estimation in skew-t factor analysis (STFA) models in the presence of missing values or nonresponses. As a robust extension of the ordinary factor analysis model, the STFA model assumes a restricted version of the multivariate skew-t distribution for the latent factors and the unobservable errors to accommodate non-normal features such as asymmetry and heavy tails or outliers. An EM-type algorithm is developed to carry out ML estimation and imputation of missing values under a missing at random mechanism. The practical utility of the proposed methodology is illustrated through real and synthetic data examples.

10.
It is quite appealing to extend existing theories for classical linear models to correlated responses, where linear mixed-effects models are utilized and the dependency in the data is modeled by random effects. In the mixed-modeling framework, missing values occur naturally due to dropouts or non-responses, a situation frequently encountered with real data. Motivated by such problems, we investigate estimation and model selection performance in linear mixed models when missing data are present. Inspired by the properties of the missingness indicator function and its relation to missing rates, we propose an approach that records missingness in an indicator-based matrix and derive likelihood-based estimators for all parameters of the linear mixed-effects model. Based on the proposed estimation method, we explore the relationship between estimation and selection behavior across missing rates. Simulations and a real-data application illustrate the effectiveness of the proposed method in selecting the most appropriate model and in estimating parameters.
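A sketch of the indicator-based recording of missingness (names are illustrative, not the authors'):

```python
import numpy as np

def missing_indicator(Y):
    """Indicator-based record of missingness: M[i, j] = 1 when response
    j of subject i is observed and 0 when it is missing (NaN)."""
    return (~np.isnan(np.asarray(Y, dtype=float))).astype(int)

Y = np.array([[1.2, np.nan, 0.7],
              [np.nan, 2.1, 1.5]])
M = missing_indicator(Y)         # [[1, 0, 1], [0, 1, 1]]
missing_rate = 1 - M.mean()      # overall missing rate, here 1/3
```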

11.
Semiparametric models: a generalized self-consistency approach
Summary. In semiparametric models, the dimension d of the maximum likelihood problem is potentially unlimited. Conventional estimation methods generally behave like O(d^3). A new O(d) estimation procedure is proposed for a large class of semiparametric models. The potentially unlimited dimension is handled in a numerically efficient way through a Nelson–Aalen-like estimator. Discussion of the new method is put in the context of recently developed minorization–maximization (MM) algorithms based on surrogate objective functions. The procedure for semiparametric models is used to demonstrate three methods of constructing a surrogate objective function: using the difference of two concave functions, the EM way, and the new quasi-EM (QEM) approach. The QEM approach is based on a generalization of the EM-like construction of the surrogate objective function, so it does not depend on a missing-data representation of the model. Like the EM algorithm, the QEM method has a dual interpretation, a result of merging the idea of surrogate maximization with the ideas of imputation and self-consistency. The new approach is compared with other possible approaches by using simulations and analysis of real data. The proportional odds model is used as an example throughout the paper.
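The surrogate construction underlying MM, EM, and QEM can be summarised in two lines: a minorizing function touches the objective $\ell$ from below and is tangent at the current iterate,

$$
Q(\theta \mid \theta^{(k)}) \le \ell(\theta) \ \ \text{for all } \theta,
\qquad
Q(\theta^{(k)} \mid \theta^{(k)}) = \ell(\theta^{(k)}),
$$

so that setting $\theta^{(k+1)} = \arg\max_\theta Q(\theta \mid \theta^{(k)})$ gives $\ell(\theta^{(k+1)}) \ge Q(\theta^{(k+1)} \mid \theta^{(k)}) \ge Q(\theta^{(k)} \mid \theta^{(k)}) = \ell(\theta^{(k)})$ — the ascent property these algorithms share.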

12.
Sequential Monte Carlo methods (also known as particle filters and smoothers) are used for filtering and smoothing in general state-space models. These methods are based on importance sampling. In practice, it is often difficult to find a suitable proposal which allows effective importance sampling. This article develops an original particle filter and an original particle smoother which employ nonparametric importance sampling. The basic idea is to use a nonparametric estimate of the marginally optimal proposal. The proposed algorithms provide a better approximation of the filtering and smoothing distributions than standard methods. The methods’ advantage is most distinct in severely nonlinear situations. In contrast to most existing methods, they allow the use of quasi-Monte Carlo (QMC) sampling. In addition, they do not suffer from weight degeneration, which renders a resampling step unnecessary. For the estimation of model parameters, an efficient on-line maximum-likelihood (ML) estimation technique is proposed which is also based on nonparametric approximations. All suggested algorithms have almost linear complexity for low-dimensional state-spaces. This is an advantage over standard smoothing and ML procedures. In particular, all existing sequential Monte Carlo methods that incorporate QMC sampling have quadratic complexity. As an application, stochastic volatility estimation for high-frequency financial data is considered, which is of great importance in practice. The computer code is partly available as supplemental material.
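For reference, the baseline that nonparametric-proposal filters improve on is the bootstrap particle filter, which proposes from the prior transition; a minimal sketch for a stochastic volatility model (a common benchmark; parameter values and names are illustrative):

```python
import numpy as np

def bootstrap_particle_filter(y, n_particles=1000, phi=0.95,
                              sigma=0.2, beta=0.6, seed=0):
    """Bootstrap particle filter for the SV model
    x_t = phi * x_{t-1} + sigma * v_t,  y_t = beta * exp(x_t / 2) * e_t,
    with v_t, e_t ~ N(0, 1).  Returns the filtered means E[x_t | y_{1:t}]."""
    rng = np.random.default_rng(seed)
    T = len(y)
    # Initialize particles from the stationary distribution of x_t.
    x = rng.normal(0.0, sigma / np.sqrt(1 - phi**2), n_particles)
    means = np.empty(T)
    for t in range(T):
        x = phi * x + sigma * rng.standard_normal(n_particles)  # propagate
        s = beta * np.exp(x / 2)                                 # obs. scale
        logw = -np.log(s) - 0.5 * (y[t] / s) ** 2                # log-weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means[t] = np.sum(w * x)                                 # filtered mean
        idx = rng.choice(n_particles, size=n_particles, p=w)     # resample
        x = x[idx]
    return means
```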

13.
14.
The need to use rigorous, transparent, clearly interpretable, and scientifically justified methodology for preventing and dealing with missing data in clinical trials has been a focus of much attention from regulators, practitioners, and academicians in recent years. New guidelines and recommendations emphasize the importance of minimizing the amount of missing data and carefully selecting primary analysis methods on the basis of assumptions regarding the missingness mechanism suitable for the study at hand, as well as the need to stress-test the results of the primary analysis under different sets of assumptions through a range of sensitivity analyses. Some methods that could be used effectively for dealing with missing data have not yet gained widespread usage, partly because of their underlying complexity and partly because of the lack of relatively easy approaches to their implementation. In this paper, we explore several strategies for missing data based on pattern mixture models that embody clear and realistic clinical assumptions. Pattern mixture models provide a statistically reasonable yet transparent framework for translating clinical assumptions into statistical analyses. Implementation details for some specific strategies are provided in an Appendix (available online as Supporting Information), whereas the general principles of the approach discussed in this paper can be used to implement various other analyses with different sets of assumptions regarding missing data.
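One widely used pattern-mixture strategy of this general kind is delta adjustment, in which dropouts are imputed from the completers' distribution shifted by a clinically chosen offset; a toy sketch under a normal working model (illustrative names, a single monotone dropout pattern assumed — not the paper's Appendix implementation):

```python
import numpy as np

def delta_adjusted_impute(y_completers, n_dropouts, delta, seed=0):
    """Draw imputations for dropouts from a normal fit to the completers,
    shifted by delta: delta = 0 recovers a MAR-like imputation, while
    increasingly unfavorable delta values stress-test the primary analysis."""
    rng = np.random.default_rng(seed)
    mu = np.mean(y_completers)
    sd = np.std(y_completers, ddof=1)
    return rng.normal(mu + delta, sd, size=n_dropouts)
```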

15.
We propose data-generating structures that can be represented as nonlinear autoregressive models with single and finite mixtures of scale mixtures of skew-normal innovations. This class of models covers symmetric/asymmetric and light-/heavy-tailed distributions, and so provides a useful generalization of symmetrical nonlinear autoregressive models. As semiparametric and nonparametric curve estimation are the standard approaches for exploring the structure of a nonlinear time series, this article investigates a semiparametric estimator of the model's nonlinear function based on the conditional least squares method and a nonparametric kernel approach. An Expectation–Maximization-type algorithm is also proposed for maximum likelihood (ML) inference on the unknown parameters of the model. Furthermore, results on the strong and weak consistency of the semiparametric estimator in this class of models are presented. Finally, to illustrate the usefulness of the proposed model, simulation studies and an application to a real data set are considered.
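The nonparametric kernel building block can be illustrated with a Nadaraya–Watson estimate of the autoregression function (a minimal sketch with a Gaussian kernel and fixed bandwidth; the article's semiparametric estimator is more elaborate):

```python
import numpy as np

def nw_autoregression(y, h):
    """Nadaraya-Watson kernel estimate of m(x) = E[y_t | y_{t-1} = x],
    evaluated at the observed lagged values y_{t-1}."""
    x, resp = y[:-1], y[1:]                       # lagged pairs (y_{t-1}, y_t)
    def m_hat(x0):
        k = np.exp(-0.5 * ((x - x0) / h) ** 2)    # Gaussian kernel weights
        return np.sum(k * resp) / np.sum(k)
    return np.array([m_hat(x0) for x0 in x])
```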

16.
Bayesian item response theory models have been widely used in different research fields. They support measuring constructs and modeling relationships between constructs while accounting for complex test situations (e.g., complex sampling designs, missing data, heterogeneous populations). The advantages of this flexible modeling framework, together with powerful simulation-based estimation techniques, are discussed. Furthermore, it is shown how the Bayes factor can be used to test relevant hypotheses in assessment, using College Basic Academic Subjects Examination (CBASE) data.
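The Bayes factor used in such tests is the ratio of marginal likelihoods under the competing hypotheses:

$$
BF_{01} = \frac{p(\mathbf{y} \mid H_0)}{p(\mathbf{y} \mid H_1)}
= \frac{\int p(\mathbf{y} \mid \theta_0, H_0)\, p(\theta_0 \mid H_0)\, d\theta_0}
       {\int p(\mathbf{y} \mid \theta_1, H_1)\, p(\theta_1 \mid H_1)\, d\theta_1},
$$

with values above 1 favouring $H_0$; in practice the integrals are approximated with the same simulation-based machinery used for estimation.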

17.
Four basic strands in the disequilibrium literature are identified. Some examples are discussed, and the canonical econometric disequilibrium model and its estimation are dealt with in detail. Specific criticisms of the canonical model, dealing with price and wage rigidity, with the nature of the min condition, and with the price-adjustment equation, are considered, and a variety of modifications is entertained. Tests of the “equilibrium vs. disequilibrium” hypothesis are discussed, as well as several classes of models that may switch between equilibrium and disequilibrium modes. Finally, consideration is given to multimarket disequilibrium models, with particular emphasis on the problems of coherence and estimation.
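In its textbook form, the canonical disequilibrium model couples linear demand and supply equations through the min condition, often augmented with a price-adjustment equation:

$$
D_t = X_{Dt}'\beta_D + u_{Dt}, \qquad
S_t = X_{St}'\beta_S + u_{St}, \qquad
Q_t = \min(D_t, S_t),
$$

with, for example, $\Delta p_t = \gamma\,(D_t - S_t)$. Only the transacted quantity $Q_t$ is observed, so the likelihood must account for which regime — excess demand or excess supply — generated each observation, which is what makes estimation of these models nontrivial.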

18.
We suggest a generalized spatial system GMM (SGMM) estimator for short dynamic panel data models with spatial errors and fixed effects when n is large and T is fixed (usually small). Monte Carlo studies are conducted to evaluate its finite sample properties against quasi-maximum likelihood estimation (QMLE). The results show that QMLE, with a proper approximation for the initial observation, performs better than SGMM in general cases. However, it performs poorly when spatial dependence is large. QMLE and SGMM each perform better for different parameters when there is unknown heteroscedasticity in the disturbances and the data are highly persistent. Neither estimator is sensitive to the treatment of initial values. Estimation of the spatial autoregressive parameter is generally biased when either the data are highly persistent or spatial dependence is large. The choice of spatial weights matrix and the sign of spatial dependence do affect the performance of the estimators, especially in the case of heteroscedastic disturbances. We also give empirical guidelines for the model.
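One common parameterization of the model class studied here (notation illustrative, not necessarily the authors') is

$$
y_{it} = \rho\, y_{i,t-1} + x_{it}'\beta + \mu_i + u_{it},
\qquad
u_t = \lambda W u_t + \varepsilon_t,
$$

with fixed effects $\mu_i$, an $n \times n$ spatial weights matrix $W$, and a spatial dependence parameter $\lambda$ whose magnitude drives the QMLE/SGMM comparison above; persistence corresponds to $\rho$ near one.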

19.
The estimation of mixtures of regression models is usually based on the assumption of normal components, and maximum likelihood estimation of the normal components is sensitive to noise, outliers, and high-leverage points. Missing values are inevitable in many situations, and parameter estimates can be biased if the missing values are not handled properly. In this article, we propose mixtures of regression models for contaminated, incomplete, heterogeneous data. The proposed models provide robust estimates of regression coefficients varying across latent subgroups, even in the presence of missing values. The methodology is illustrated through simulation studies and a real data analysis.
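The baseline these robust extensions build on is the finite mixture of normal regressions, with conditional density

$$
f(y \mid x) = \sum_{k=1}^{K} \pi_k\, \phi\!\left(y;\ x'\beta_k,\ \sigma_k^2\right),
\qquad
\sum_{k=1}^{K} \pi_k = 1,
$$

where the component-specific coefficients $\beta_k$ capture the latent subgroups; it is the thin-tailed normal component densities $\phi$ that make plain ML estimates sensitive to outliers and high-leverage points.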

20.
Earlier attempts at reconciling disparate substitution elasticity estimates examined differences in separability hypotheses, databases, and estimation techniques, as well as the methods employed to construct capital service prices. Although these studies showed that differences in elasticity estimates between two or three studies may be attributable to the aforementioned features of the econometric models, they were unable to demonstrate this link statistically and establish the existence of systematic relationships between features of the econometric models and the perception of production technologies generated by those models. Using sectoral data covering the entire production side of the U.S. economy, we estimate 34 production models for alternative definitions of the capital service price. We employ substitution elasticities calculated from these models as dependent variables in a statistical search for systematic relationships between features of the econometric models and perceptions of the sectoral technology as characterized by the elasticities. Statistically significant systematic effects are found between the monotonicity and concavity properties of the cost functions and the service price–technical change specifications, as well as between the substitution elasticities.
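The substitution elasticities used as dependent variables in such exercises are typically Allen–Uzawa elasticities derived from an estimated cost function $C(p, y)$ (a standard definition, assumed here rather than stated in the abstract):

$$
\sigma_{ij} = \frac{C\, C_{ij}}{C_i\, C_j},
$$

where subscripts denote partial derivatives with respect to input prices $p_i$; monotonicity ($C_i > 0$) and concavity of $C$ in prices are exactly the regularity properties of the cost function referred to above.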

