Similar Literature
20 similar documents found (search time: 109 ms)
1.
A Local Weighted Least Squares Method for Variable-Weight Combination Forecasting Models (cited by 2: 0 self-citations, 2 by others)
With the continuous progress of science and technology, forecasting methods have developed considerably; there are dozens of commonly used forecasting methods. Combination forecasting combines different forecasting methods so as to exploit the information provided by each, and its performance is often superior to that of any single method, so it has been widely applied. This paper studies the combination forecasting model based on the idea of varying-coefficient models: the estimation of the variable weights is transformed into the estimation of the coefficient functions in a varying-coefficient model, which can then be solved by local weighted least squares, with the smoothing parameter selected by cross-validation. The results show that the proposed method achieves high forecasting accuracy and outperforms the alternative methods.
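As a rough illustration of the idea in this abstract, the sketch below estimates a time-varying combination weight by locally weighted least squares with a Gaussian kernel. The function name, the two-forecast setup, and the fixed bandwidth are assumptions for illustration; the cross-validation step for the smoothing parameter is omitted.

```python
import numpy as np

def local_weighted_combination(y, f1, f2, t_grid, h=0.2):
    """Estimate a time-varying combination weight w(t) for the combined
    forecast w*f1 + (1 - w)*f2 by locally weighted least squares.
    Illustrative sketch only; names and kernel choice are hypothetical."""
    n = len(y)
    t = np.linspace(0.0, 1.0, n)                   # rescaled time index
    estimates = []
    for t0 in t_grid:
        k = np.exp(-0.5 * ((t - t0) / h) ** 2)     # Gaussian kernel weights around t0
        d = f1 - f2                                # regressor for the weight
        r = y - f2                                 # target after removing f2's contribution
        w = np.sum(k * d * r) / np.sum(k * d * d)  # closed-form weighted LS solution
        estimates.append(w)
    return np.array(estimates)
```

With noiseless data generated from a constant weight, the local estimates recover that weight at every grid point.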

2.
Calibration in macroeconomics involves choosing free parameters by matching certain moments of simulated models with those of data. We formally examine this method by treating the process of calibration as an econometric estimator. A numerical version of the Mehra-Prescott (1985) economy is the setting for an evaluation of calibration estimators via Monte Carlo methods. While these estimators sometimes have reasonable finite-sample properties, they are not robust to mistakes in setting non-free parameters. In contrast, generalized method-of-moments (GMM) estimators have satisfactory finite-sample characteristics, quick convergence, and informational requirements less stringent than those of calibration estimators. In dynamic equilibrium models in which GMM is infeasible, we offer some suggestions for improving estimates based on calibration methodology.
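The moment-matching mechanism shared by calibration and GMM can be sketched with a toy example: choose model parameters to match the first two sample moments by minimizing a quadratic form in the moment conditions. This is an illustrative sketch with an identity weighting matrix and a grid search; a real GMM estimator uses an optimal weighting matrix and numerical optimization, and all names here are hypothetical.

```python
import numpy as np

def moment_match(data, mu_grid, sigma_grid):
    """Pick (mu, sigma) of a normal model to match the first two sample
    moments of `data` by grid search (toy moment-matching sketch)."""
    m1 = data.mean()
    m2 = (data ** 2).mean()
    best, best_q = None, np.inf
    for mu in mu_grid:
        for sigma in sigma_grid:
            # moment conditions: E[X] - mu and E[X^2] - (mu^2 + sigma^2)
            g = np.array([m1 - mu, m2 - (mu ** 2 + sigma ** 2)])
            q = g @ g                       # identity-weighted quadratic form
            if q < best_q:
                best_q, best = q, (mu, sigma)
    return best
```

On simulated normal data the grid minimizer lands close to the true parameters, up to the grid resolution and sampling noise.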

3.
In this paper, we consider a special finite mixture model named Combination of Uniform and shifted Binomial (CUB), recently introduced in the statistical literature to analyse ordinal data expressing the preferences of raters with regard to items or services. Our aim is to develop a variable selection procedure for this model using a Bayesian approach. Bayesian methods for variable selection and model choice have become increasingly popular in recent years, due to advances in Markov chain Monte Carlo computational algorithms. Several methods have been proposed in the case of linear and generalized linear models (GLM). In this paper, we adapt to the CUB model some of these algorithms: the Kuo–Mallick method together with its 'metropolized' version and the Stochastic Search Variable Selection method. Several simulated examples are used to illustrate the algorithms and to compare their performance. Finally, an application to real data is introduced.

4.
Parameter estimation for association and log-linear models is an important aspect of the analysis of cross-classified categorical data. Classically, iterative procedures, including Newton's method and iterative scaling, have typically been used to calculate the maximum likelihood estimates of these parameters. An important special case occurs when the categorical variables are ordinal, and this has received a considerable amount of attention for more than 20 years. This is because models for such cases involve the estimation of a parameter that quantifies the linear-by-linear association and is directly linked with the natural logarithm of the common odds ratio. The past five years have seen the development of non-iterative procedures for estimating the linear-by-linear parameter for ordinal log-linear models. Such procedures have been shown to lead to numerically equivalent estimates when compared with iterative, maximum likelihood estimates. They also enable the researcher to avoid some of the computational difficulties that commonly arise with iterative algorithms. This paper investigates and evaluates the performance of three non-iterative procedures for estimating this parameter by considering 14 contingency tables that have appeared in the statistical and allied literature. The estimation of the standard error of the association parameter is also considered.

5.
In this paper we have developed some state space models for the HIV epidemic in populations at risk for AIDS. Using these state space models, we have developed a general Bayesian procedure for simultaneously estimating the unknown parameters and the state variables. The unknown parameters include the immigration and recruitment rates, the death and retirement rates, the incidence of HIV infection (and hence the HIV infection distribution) and the incidence of HIV incubation (and hence the HIV incubation distribution). The state variables are the numbers of susceptible people (S people), HIV-infected people (I people) and AIDS incidence over time. The basic approach is through a multi-level Gibbs sampler combined with the weighted bootstrap method. We have applied the methods to the Swiss AIDS homosexual and IV drug data to estimate simultaneously the unknown parameters and the state variables. Our results show that in both populations, both the HIV infection and HIV incubation distributions have multiple peaks, indicating the mixture nature of these distributions. Our results also show that the estimates of the death and retirement rates for I people are greater than those for S people, suggesting that infection by HIV may have increased the death and retirement rates of the individuals.

6.
Wu Weiwei, Statistics & Information Forum, 2007, 22(1): 100-102, 107
Consumer behaviour differs across income levels, so the consumption function should also differ across periods. Taking the consumption of urban residents in Gansu Province over 1978-2004 as the object of study, the analysis finds that the whole period can be divided roughly into two sub-periods: 1978-1988 and 1989-2004. On this basis, consumption function models are built for the two periods separately, and a comparative analysis of the two models yields some meaningful conclusions.

7.
Competing risks models are of great importance in reliability and survival analysis. In the literature they are often assumed to have independent causes of failure, which may be unreasonable. In this article, dependent causes of failure are considered by using the Marshall–Olkin bivariate Weibull distribution. After deriving some useful results for the model, we use maximum likelihood (ML), fiducial, and Bayesian methods to estimate the unknown model parameters with a parameter transformation. Simulation studies are carried out to assess the performance of the three methods. Compared with the maximum likelihood method, the fiducial and Bayesian methods can provide better parameter estimates.

8.
In this paper we propose a quantile survival model to analyze censored data. This approach provides a very effective way to construct a proper model for the survival time conditional on some covariates. Once a quantile survival model for the censored data is established, the survival density, survival or hazard functions of the survival time can be obtained easily. For illustration purposes, we focus on a model that is based on the generalized lambda distribution (GLD). The GLD and many other quantile function models are defined only through their quantile functions; no closed-form expressions are available for other equivalent functions. We also develop a Bayesian Markov chain Monte Carlo (MCMC) method for parameter estimation. Extensive simulation studies have been conducted. Both the simulation and application results show that the proposed quantile survival models can be very useful in practice.

9.
Strategies for controlling plant epidemics are investigated by fitting continuous time spatiotemporal stochastic models to data consisting of maps of disease incidence observed at discrete times. Markov chain Monte Carlo methods are used for fitting two such models to data describing the spread of citrus tristeza virus (CTV) in an orchard. The approach overcomes some of the difficulties encountered when fitting stochastic models to infrequent observations of a continuous process. The results of the analysis cast doubt on the effectiveness of a strategy identified from a previous spatial analysis of the CTV data. Extensions of the approaches to more general models and other problems are also considered.

10.
Over the past decade or so, research on models for evaluating outstanding claims reserves has made considerable progress, including some exploration of the relationships among the various evaluation models, such as comparisons of various stochastic models and comparisons of reserve evaluation models based on the Bornhuetter-Ferguson (B-F) method; however, this work remains insufficiently comprehensive and systematic. This paper classifies and reviews reserve evaluation models fairly systematically from different angles and, for the first time, establishes a unified framework built on the basic chain-ladder model. Within this framework, common reserve evaluation models are compared and analysed, some important relationships among them are revealed, and some suggestions are given for choosing reserve evaluation models in practice.

11.
In this work, we discuss the class of bilinear GARCH (BL-GARCH) models, which are capable of capturing simultaneously two key properties of non-linear time series: volatility clustering and leverage effects. It has often been observed that the marginal distributions of such time series have heavy tails; thus we examine the BL-GARCH model in a general setting under some non-normal distributions. We investigate some probabilistic properties of this model and conduct a Monte Carlo experiment to evaluate the small-sample performance of the maximum likelihood estimation (MLE) methodology for various models. Finally, within-sample estimation properties are studied using S&P 500 daily returns, which exhibit the features of interest: volatility clustering and leverage effects. The main results suggest that the Student-t BL-GARCH model is highly appropriate for describing the S&P 500 daily returns.

12.
In longitudinal studies, nonlinear mixed-effects models have been widely applied to describe the intra- and inter-subject variations in data. The inter-subject variation usually receives great attention, and it may be partially explained by time-dependent covariates. However, some covariates may be measured with substantial error and may contain missing values. We propose a multiple imputation method, implemented by a Markov chain Monte Carlo method with a Gibbs sampler, to address covariate measurement error and missing data in nonlinear mixed-effects models. The multiple imputation method is illustrated with a real data example. Simulation studies show that the multiple imputation method outperforms the commonly used naive methods.

13.
Several models have been developed to capture the dynamics of the conditional correlations between time series of financial returns, and several studies have shown that the market volatility is a major determinant of the correlations. We extend some models to include explicitly the dependence of the correlations on the market volatility. The models differ by the way, linear or nonlinear, direct or indirect, in which the volatility influences the correlations. Using a wide set of models with two measures of market volatility on two datasets, we find that for some models, the empirical results support to some extent the statistical significance and the economic significance of the volatility effect on the correlations, but the presence of the volatility effect does not improve the forecasting performance of the extended models. Supplementary materials for this article are available online.

14.
Supersaturated designs (SSDs) are useful in examining many factors with a restricted number of experimental units. Many analysis methods have been proposed to analyse data from SSDs, with some methods performing better than others when data are normally distributed. It is possible that data sets violate the assumptions of the standard analysis methods used to analyse data from SSDs, and to date the performance of these analysis methods has not been evaluated using nonnormally distributed data sets. We conducted a simulation study with normally and nonnormally distributed data sets to compare the identification rates, power and coverage of the true models using a permutation test, the stepwise procedure and the smoothly clipped absolute deviation (SCAD) method. Results showed that at the level of significance α=0.01, the identification rates of the true models of the three methods were comparable; however, at α=0.05, both the permutation test and the stepwise procedure had considerably lower identification rates than SCAD. For most cases, the three methods produced high power and coverage. The experimentwise error rates (EER) were close to the nominal level (11.36%) for the stepwise method, while they were somewhat higher for the permutation test. The EER for the SCAD method were extremely high (84-87%) for the normal and t-distributions, as well as for data with outliers.
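The permutation-test idea compared in this abstract can be sketched in its generic form: compute a test statistic for one factor, then recompute it over random relabelings of the response to obtain a p-value. This is not the exact SSD screening procedure from the paper; the function name and the choice of |correlation| as the statistic are illustrative assumptions.

```python
import numpy as np

def permutation_pvalue(x, y, n_perm=2000, seed=0):
    """Two-sided permutation p-value for association between a factor
    column x and response y, using |sample correlation| as the statistic."""
    rng = np.random.default_rng(seed)
    observed = abs(np.corrcoef(x, y)[0, 1])
    exceed = 0
    for _ in range(n_perm):
        y_perm = rng.permutation(y)                  # break any x-y association
        if abs(np.corrcoef(x, y_perm)[0, 1]) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)               # add-one correction avoids p = 0
```

A factor with a strong effect yields a very small p-value, while a pure-noise response yields a p-value anywhere in (0, 1].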

15.
A general class of mixed Poisson regression models is introduced. This class is based on a mixing between the Poisson distribution and a distribution belonging to the exponential family. With this, we unify some overdispersed models which have previously been studied separately, such as the negative binomial and Poisson inverse Gaussian models. We consider a regression structure for both the mean and dispersion parameters of the mixed Poisson models, thus extending, and in some cases correcting, some previous models considered in the literature. An expectation-maximization (EM) algorithm is proposed for estimation of the parameters, and some diagnostic measures based on the EM algorithm are considered. We also obtain an explicit expression for the observed information matrix. An empirical illustration is presented in order to show the performance of our class of mixed Poisson models. Supplementary material for this paper is available.

16.
The class of nonlinear reproductive dispersion mixed models (NRDMMs) is an extension of nonlinear reproductive dispersion models and generalized linear mixed models. This paper discusses the influence analysis of the model based on Laplace approximation. The equivalence of case-deletion models and mean-shift outlier models in NRDMMs is investigated, and some diagnostic measures are proposed via the case-deletion method. We also investigate the assessment of local influence of various perturbation schemes. The proposed method is illustrated with an example.

17.
It is common to fit generalized linear models with binomial and Poisson responses to data that show greater variability than the theoretical variability assumed by the model. This phenomenon, known as overdispersion, may distort inferences from the model by making parameters appear significant for variables that have no real effect on the dependent variable. This paper explains some methods to detect overdispersion, and presents and evaluates three well-known methodologies that have proved useful in correcting this problem: random mean models, quasi-likelihood methods and a double exponential family. In addition, it proposes some new Bayesian model extensions that have proved useful in correcting the overdispersion problem. Finally, using the information provided by the National Demographic and Health Survey 2005, the departmental factors that influence the mortality of children under 5 years and female postnatal screening are determined. Based on the results, extensions that generalize some of the aforementioned models are also proposed, and their use is motivated by the data set under study. The results show that the proposed overdispersion models provide a better statistical fit to the data.
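One simple diagnostic of the kind this abstract mentions is the Pearson dispersion statistic for a fitted Poisson model: values well above 1 point to overdispersion. The sketch below assumes the fitted means `mu` are already available; the function name, the default parameter count, and the informal "well above 1" reading are illustrative (a formal test would compare the Pearson chi-square against a chi-square reference distribution).

```python
import numpy as np

def pearson_dispersion(y, mu, n_params=1):
    """Pearson dispersion statistic sum((y - mu)^2 / mu) / (n - p) for a
    Poisson fit. Values near 1 are consistent with the Poisson variance
    assumption; values well above 1 suggest overdispersion."""
    y = np.asarray(y, dtype=float)
    mu = np.asarray(mu, dtype=float)
    return np.sum((y - mu) ** 2 / mu) / (len(y) - n_params)
```

For Poisson data the statistic sits near 1, while for a negative binomial sample with the same mean but double the variance it sits near 2.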

18.
The estimation of the variance function plays an extremely important role in statistical inference for regression models. In this paper we propose a variance modelling method that constructs the variance structure by combining the exponential polynomial modelling method with the kernel smoothing technique. A simple estimation method for the parameters in heteroscedastic linear regression models is developed for the case where the covariance matrix is an unknown diagonal matrix and the variance is a positive function of the mean. The consistency and asymptotic normality of the resulting estimators are established under some mild assumptions. In particular, a simple version of the bootstrap test is adapted to test misspecification of the variance function. Some Monte Carlo simulation studies are carried out to examine the finite-sample performance of the proposed methods. Finally, the methodologies are illustrated by the ozone concentration dataset.

19.
We show that smoothing splines, intrinsic autoregressions (IAR) and state-space models can be formulated as partially specified random-effect models with singular precision (SP). Various fitting methods have been suggested for these models, and this paper investigates the relationships among them once the models have been placed under a single framework. Some methods have previously been shown to give the best linear unbiased predictors (BLUPs) under some random-effect models, and here we show that they are in fact uniformly BLUPs (UBLUPs) under a class of models generated by the SP of random effects. We offer some new interpretations of the UBLUPs under models of SP and define the BLUE and BLUP in these partially specified models without having to specify the covariance. We also show how full likelihood inference for random-effect models can be carried out for these models, so that the maximum likelihood (ML) and restricted maximum likelihood (REML) estimators can be used for the smoothing parameters in splines, etc.

20.
A framework for time varying parameter regression models is developed and employed in modeling and forecasting price expectations, using the Livingston data. Alternative model formulations, which include various choices for both the stochastic processes generating the varying parameters and the sets of explanatory variables, are examined and compared by using this framework. These models, some of which have appeared elsewhere and some of which are new, are estimated and used to assess the expectations formation process.
