Similar Articles
20 similar articles retrieved (search time: 31 ms)
1.
Restrictions on risk pricing in dynamic term structure models (DTSMs) tighten the link between cross-sectional and time-series variation of interest rates, and make absence of arbitrage useful for inference about expectations. This article presents a new econometric framework for estimation of affine Gaussian DTSMs under restrictions on risk prices, which addresses the issues of a large model space and of model uncertainty using a Bayesian approach. A simulation study demonstrates the good performance of the proposed method. Data on U.S. Treasury yields call for tight restrictions on risk pricing: only level risk is priced, and only changes in the slope affect term premia. Incorporating the restrictions changes the model-implied short-rate expectations and term premia. Interest rate persistence is higher than in a maximally flexible model, hence expectations of future short rates are more variable; restrictions on risk prices help resolve the puzzle of implausibly stable short-rate expectations in this literature. Consistent with survey evidence and conventional macro wisdom, restricted models attribute a large share of the secular decline in long-term interest rates to expectations of future nominal short rates. Supplementary materials for this article are available online.
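For readers unfamiliar with the setup, the sketch below gives the standard affine Gaussian DTSM parametrization in which such risk-price restrictions live; the notation is generic, not necessarily the article's own.

```latex
% Standard affine Gaussian DTSM (generic notation).
% Short rate affine in a latent state x_t, with Gaussian VAR dynamics:
r_t = \delta_0 + \delta_1^\top x_t, \qquad
x_{t+1} = \mu + \Phi x_t + \Sigma \varepsilon_{t+1}, \quad \varepsilon_{t+1} \sim N(0, I).
% Market prices of risk affine in the state:
\lambda_t = \lambda_0 + \lambda_1 x_t,
% so risk-neutral (pricing) dynamics differ from physical ones by
\mu^{\mathbb{Q}} = \mu - \Sigma \lambda_0, \qquad
\Phi^{\mathbb{Q}} = \Phi - \Sigma \lambda_1.
% "Restrictions on risk prices" set elements of (\lambda_0, \lambda_1) to zero,
% tying the time-series parameters (\mu, \Phi) to the cross-sectional
% (risk-neutral) ones and sharpening inference about expectations.
```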

2.
In varying-coefficient models, an important question is to determine whether some of the varying coefficients are actually invariant. This article proposes a penalized likelihood method in the framework of smoothing spline ANOVA models, with a penalty designed to automatically distinguish coefficients that vary from those that do not. Unlike a stepwise procedure, the method simultaneously classifies and estimates the coefficients. An efficient algorithm is given and ways of choosing the smoothing parameters are discussed. Simulation results and an analysis of the Boston housing data illustrate the usefulness of the method. The proposed approach is further extended to longitudinal data analysis.
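A schematic of the penalized-likelihood idea, in our notation rather than the article's: each coefficient function splits into a constant part and a smooth deviation, and a penalty that can shrink the deviation exactly to zero declares the coefficient invariant.

```latex
% Varying-coefficient model with a smoothing-spline ANOVA decomposition:
y_i = \sum_{j} \beta_j(t_i)\, x_{ij} + \varepsilon_i, \qquad
\beta_j(t) = b_j + g_j(t), \quad g_j \in \mathcal{H}_j \ (\text{constants excluded}).
% Penalized least squares, one smoothing parameter per coefficient:
\min_{b,\,g} \; \sum_i \Big( y_i - \sum_j \{ b_j + g_j(t_i) \}\, x_{ij} \Big)^2
  + \sum_j \lambda_j \, J(g_j).
% A penalty J that can drive g_j to exactly zero identifies beta_j as
% constant, so classification and estimation happen in a single fit
% rather than in a stepwise search.
```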

3.
In this article, we compare alternative missing-data imputation methods for ordinal data in the framework of CUB (Combination of Uniform and shifted Binomial) models. Various imputation methods are considered, both univariate and multivariate. The first step is a simulation study, designed by varying the parameters of the CUB model, to compare CUB-based imputation with other methods. We then use real datasets to compare our approach with some general missing-data imputation methods under various missing-data mechanisms.
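As background, the CUB probability mass function has a simple closed form. The illustrative sketch below (standard CUB notation; not tied to the article's simulation design) computes it and draws ordinal responses from it, which is the ingredient a simulation study over the model parameters needs.

```python
import numpy as np
from math import comb

def cub_pmf(m, pi, xi):
    """CUB(pi, xi) on ratings r = 1..m: a mixture of a shifted
    Binomial (the 'feeling' component) and a discrete Uniform
    (the 'uncertainty' component)."""
    r = np.arange(1, m + 1)
    shifted_binom = np.array(
        [comb(m - 1, k - 1) * (1 - xi) ** (k - 1) * xi ** (m - k) for k in r]
    )
    return pi * shifted_binom + (1 - pi) / m

# Draw ordinal responses, e.g. to build a simulation design over (pi, xi):
rng = np.random.default_rng(1)
p = cub_pmf(m=7, pi=0.7, xi=0.3)
sample = rng.choice(np.arange(1, 8), size=500, p=p)
print(p.round(3), sample[:10])
```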

4.
This paper develops a new class of option pricing models and applies it to options on the Australian S&P200 Index. The class generalizes the traditional Black-Scholes framework by accommodating time-varying conditional volatility, skewness and excess kurtosis in the underlying returns process. An important property of these more general pricing models is that their computational requirements are essentially the same as those of the Black-Scholes model, with both methods based on one-dimensional integrals. Bayesian inferential methods are used to evaluate a range of models nested in the general framework, using observed market option prices. The evaluation is based on posterior parameter distributions, as well as posterior model probabilities. Various fit and predictive measures, plus implied volatility graphs, are also used to rank the alternative models. The empirical results provide evidence that time-varying volatility, leptokurtosis and a small degree of negative skewness are priced in Australian stock market options.
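The "one-dimensional integral" property can be illustrated with a minimal sketch: price a European call by integrating the payoff against a risk-neutral terminal density. With a lognormal density this reproduces Black-Scholes; a generalized model of the kind described above would substitute a density with time-varying volatility, skewness and excess kurtosis. Function names here are ours, purely illustrative.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def call_price(s0, k, r, t, density):
    """European call as a one-dimensional integral of the payoff
    (s - k)+ against a risk-neutral density q(s) of the terminal price."""
    value, _ = quad(lambda s: (s - k) * density(s), k, np.inf)
    return np.exp(-r * t) * value

def lognormal_density(s0, r, t, sigma):
    """Risk-neutral lognormal density: the Black-Scholes special case."""
    mu = np.log(s0) + (r - 0.5 * sigma ** 2) * t
    sd = sigma * np.sqrt(t)
    return lambda s: norm.pdf(np.log(s), mu, sd) / s

# Reproduces the Black-Scholes price (about 6.89 for these inputs);
# swapping in a skewed/fat-tailed density keeps the same 1-D quadrature.
print(call_price(100.0, 100.0, 0.05, 0.5,
                 lognormal_density(100.0, 0.05, 0.5, 0.2)))
```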

5.
Several authors have contributed to what can now be considered a rather complete theory for analysis of variance in cases with orthogonal factors. By applying this theory to an assumed basic reference population, the orthogonality concept gives a natural definition of independence between factors in the population. By viewing the treated units in designed experiments as a formal sample from a future population about which we want to make inference, a natural parametrization of the expectations and variances connected to such experiments arises. This approach seems to throw light on several controversial questions in the theory of mixed models. It also gives a framework for discussing the choice of conditioning in models.

6.
More flexible semiparametric linear-index regression models are proposed to describe the conditional distribution. Such a model formulation captures varying effects of covariates over the support of the response variable in distribution, offers an alternative perspective on dimension reduction, and covers many widely used parametric and semiparametric regression models. A feasible pseudo-likelihood approach, accompanied by a simple and easily implemented algorithm, is further developed for the mixed case with both varying and invariant coefficients. By establishing some theoretical properties on Banach spaces, the uniform consistency and the asymptotic Gaussian process limit of the proposed estimator are also derived in this article. In addition, under monotonicity of the distribution in the linear index, we develop an alternative approach based on maximizing a varying accuracy measure. By virtue of an asymptotic recursion relation for the estimators, the achievements in this direction include showing the convergence of the iterative computation procedure and establishing the large-sample properties of the resulting estimator. Notably, the theoretical framework is very helpful in constructing confidence bands for the parameters of interest and tests for hypotheses about various qualitative structures in distribution. The developed estimation and inference procedures perform quite satisfactorily in the conducted simulations and are demonstrated to be useful in reanalysing data from the Boston house price study and the World Values Survey.
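One common way to write such a linear-index model for a conditional distribution, in schematic notation that may differ from the authors':

```latex
% Conditional distribution with index coefficients varying over y:
F(y \mid x) = G\big( \alpha(y) + x^\top \beta(y) \big),
% G: a known link (e.g. logistic or probit).
% "Mixed" case: some components of beta(y) are constant in y (invariant
% coefficients) while others vary with y (varying coefficients).
% Monotonicity of the distribution in the linear index requires the map
% y -> alpha(y) + x' beta(y) to be increasing for each x.
```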

7.
Univariate time series models make efficient use of available historical records of electricity consumption for short-term forecasting. However, the information (expectations) provided by electricity consumers in an energy-saving survey, even though qualitative, was considered particularly important, because the consumers' perception of the future may take into account changing economic conditions. Our approach to forecasting electricity consumption combines historical data with the expectations of consumers in an optimal manner, using the technique of restricted forecasts. The same technique can be applied in other forecasting situations in which additional information, besides the historical record of a variable, is available in the form of expectations.
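A minimal sketch of the restricted-forecast mechanics, using a standard linear-restriction adjustment (the function and variable names are ours): the model's unrestricted forecast is pulled just enough to satisfy expectations expressed as linear constraints, with the adjustment weighted by the forecast-error covariance.

```python
import numpy as np

def restricted_forecast(y_hat, sigma, C, c):
    """Minimally adjust an unrestricted forecast y_hat, whose
    forecast-error covariance is sigma, so the result satisfies
    C @ y_star = c (e.g. survey expectations about totals)."""
    gain = sigma @ C.T @ np.linalg.inv(C @ sigma @ C.T)
    return y_hat + gain @ (c - C @ y_hat)

# Example: a 3-step-ahead forecast; consumers' expectations pin down
# the 3-period total. Later steps, being more uncertain, absorb more
# of the adjustment.
y_hat = np.array([10.0, 10.5, 11.0])
sigma = np.diag([1.0, 2.0, 3.0])
C = np.ones((1, 3))
c = np.array([33.0])
print(restricted_forecast(y_hat, sigma, C, c))  # sums to 33
```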

8.
In this paper, we consider a new mixture of varying coefficient models, in which each mixture component follows a varying coefficient model and the mixing proportions and dispersion parameters are also allowed to be unknown smooth functions. We systematically study the identifiability, estimation and inference for the new mixture model. The proposed new mixture model is rather general, encompassing many mixture models as its special cases such as mixtures of linear regression models, mixtures of generalized linear models, mixtures of partially linear models and mixtures of generalized additive models, some of which are new mixture models by themselves and have not been investigated before. The new mixture of varying coefficient model is shown to be identifiable under mild conditions. We develop a local likelihood procedure and a modified expectation–maximization algorithm for the estimation of the unknown non-parametric functions. Asymptotic normality is established for the proposed estimator. A generalized likelihood ratio test is further developed for testing whether some of the unknown functions are constants. We derive the asymptotic distribution of the proposed generalized likelihood ratio test statistics and prove that the Wilks phenomenon holds. The proposed methodology is illustrated by Monte Carlo simulations and an analysis of a CO2-GDP data set.
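To fix ideas, here is a compact EM for the simplest special case named above, a mixture of Gaussian linear regressions (our own illustrative code). In the full model, the component coefficients, mixing proportions and dispersions become smooth functions estimated by kernel-weighted (local) likelihood, but the E/M alternation is the same.

```python
import numpy as np

def em_mix_linreg(X, y, K, n_iter=100, seed=0):
    """EM for a K-component mixture of Gaussian linear regressions."""
    n, p = X.shape
    rng = np.random.default_rng(seed)
    resp = rng.dirichlet(np.ones(K), size=n)   # initial responsibilities
    for _ in range(n_iter):
        prop = resp.mean(axis=0)               # M-step: mixing proportions
        betas, sig2 = [], []
        for k in range(K):                     # weighted LS per component
            w = resp[:, k]
            beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
            r = y - X @ beta
            betas.append(beta)
            sig2.append((w * r ** 2).sum() / w.sum())
        dens = np.stack([                      # E-step: posterior weights
            prop[k] * np.exp(-(y - X @ betas[k]) ** 2 / (2 * sig2[k]))
            / np.sqrt(2 * np.pi * sig2[k])
            for k in range(K)], axis=1)
        resp = dens / dens.sum(axis=1, keepdims=True)
    return betas, sig2, prop
```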

9.
Partial linear varying coefficient models (PLVCM) are often considered for analysing longitudinal data because they strike a good balance between flexibility and parsimony. Existing estimation and variable selection methods for this model are mainly built on the assumption that it is known in advance which variables have linear effects and which have varying effects on the response; that is, the model structure is taken as given. In applications, however, this is unrealistic. In this work, we propose a simultaneous structure estimation and variable selection method, which performs coefficient estimation together with three types of selection: selection of varying effects, of constant effects, and of relevant variables. It can be implemented in one step by a penalized M-type regression, which uses a general loss function to treat mean, median, quantile and robust mean regressions in a unified framework. Consistency of the three types of selection and the oracle property of the estimator are established as well. Simulation studies and a real data analysis also confirm our method.
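Schematically, in our notation, the one-step criterion couples a general loss with two penalties, one removing irrelevant variables and one collapsing varying effects to constants:

```latex
\min_{\beta(\cdot)} \;
  \sum_{i} \rho\big( y_i - x_i^\top \beta(u_i) \big)
  + \sum_{j} p_{\lambda_1}\!\big( \|\beta_j\| \big)
  + \sum_{j} p_{\lambda_2}\!\big( \|\beta_j - \bar{\beta}_j\| \big)
% rho: squared loss (mean), check loss (median/quantile), or Huber loss
% (robust mean) -- the unified M-type treatment.
% First penalty: shrinks a whole beta_j to 0  -> relevant-variable selection.
% Second penalty: shrinks the deviation of beta_j from its average
% bar-beta_j to 0 -> detects constant (linear) effects.
```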

10.
This paper discusses tests for departures from nominal dispersion in the framework of generalized nonlinear models with varying dispersion and/or additive random effects. We consider two classes of exponential family distributions: discrete exponential families, such as the Poisson, binomial, and negative binomial distributions, and continuous exponential families, such as the normal, gamma, and inverse Gaussian distributions. Correspondingly, we develop a unifying approach and propose several tests for departures from nominal dispersion in the two classes of generalized nonlinear models. The score test statistics are constructed and expressed in simple, easy-to-use matrix formulas, so that the tests can easily be implemented with existing statistical software. The properties of the test statistics are investigated through Monte Carlo simulations.
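For reference, the score (Lagrange multiplier) statistic underlying such tests has the generic form below; only the null model has to be fitted, which is what makes closed matrix formulas convenient in practice.

```latex
% H_0: nominal dispersion, i.e. the extra dispersion parameters tau = 0.
% With U the score for tau and I_eff its efficient information, both
% evaluated at the null maximum likelihood estimate hat-theta_0:
T = U(\hat{\theta}_0)^\top \, I_{\mathrm{eff}}(\hat{\theta}_0)^{-1} \,
    U(\hat{\theta}_0)
  \;\xrightarrow{d}\; \chi^2_q \quad \text{under } H_0,
% where q = dim(tau). Large T signals departure from nominal dispersion.
```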

11.
For many environmental processes, recent studies have shown that dependence strength decreases as quantile levels increase. This implies that the popular max-stable models are inadequate to capture the rate of joint tail decay and to estimate joint extremal probabilities beyond observed levels. We develop a more flexible modeling framework based on the class of max-infinitely divisible processes, which extend max-stable processes while retaining dependence properties that are natural for maxima. We propose two parametric constructions for max-infinitely divisible models, which relax the max-stability property but remain close to some popular max-stable models obtained as special cases. The first model considers maxima over a finite, random number of independent observations, while the second generalizes the spectral representation of max-stable processes. Inference is performed using a pairwise likelihood. We illustrate the benefits of the new framework on Dutch wind gust maxima calculated over different time units. Results strongly suggest that our proposed models outperform other natural alternatives, such as the Student-t copula process and its max-stable limit, even for large block sizes.
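For context, the property being relaxed and the first construction can be stated schematically (our notation):

```latex
% Max-infinite divisibility: a distribution F is max-id if F^{1/n} is a
% valid distribution function for every n, i.e. the process can be written
% as a pointwise maximum of n iid processes for any n.
% Construction 1: maxima over a finite, random number of iid processes,
Z(s) = \max_{i = 1, \dots, N} X_i(s), \qquad N \ \text{random (e.g. Poisson)},
% which is max-id but not max-stable: with N finite, the dependence
% strength can weaken as quantile levels increase, matching the
% empirical behaviour described above. Max-stable models arise as
% limiting/special cases.
```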

12.
The paper is concerned with direct tests of the rational expectations hypothesis (REH) in the presence of stationary and non-stationary variables. Alternative methods of converting qualitative survey responses into quantitative expectations series are examined. Tests of orthogonality and the issue of generated regressors, for models estimated by two-step methods, are re-evaluated when the variable to be explained is stationary. A methodological approach for testing the REH is provided for models using qualitative response data when there are unit roots and cointegration, and alternative reasons for rejecting the null hypothesis of orthogonality are examined. The usefulness of cointegration analysis for both the probability and regression conversion procedures is also analysed. Cointegration is found to be directly applicable to the probability conversion approach with uniform, normal and logistic distributions of expectations, and to the linear regression conversion approach. In the light of the new techniques, an existing empirical example testing the REH for British manufacturing firms is re-examined and tested over an extended data set.
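The best-known probability conversion method is the Carlson-Parkin procedure under normality, sketched below in illustrative code of our own; the uniform and logistic variants mentioned above would swap the distribution used in the inversion.

```python
import numpy as np
from scipy.stats import norm

def carlson_parkin(frac_up, frac_down, delta=1.0):
    """Convert survey fractions expecting a rise/fall into a quantitative
    mean expectation, assuming expected changes are N(mu, sigma^2) and
    respondents report 'no change' inside an indifference interval
    (-delta, delta):
        P(fall) = Phi((-delta - mu) / sigma)
        P(rise) = 1 - Phi((delta - mu) / sigma)
    Inverting these two equations gives mu and sigma (up to delta)."""
    a = norm.ppf(frac_down)        # (-delta - mu) / sigma
    b = norm.ppf(1.0 - frac_up)    # ( delta - mu) / sigma
    sigma = 2.0 * delta / (b - a)
    mu = -sigma * (a + b) / 2.0
    return mu, sigma

# e.g. 40% expect a rise, 20% a fall; delta only scales the answer:
print(carlson_parkin(0.40, 0.20))
```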

14.
Additive varying coefficient models are a natural extension of multiple linear regression models, allowing the regression coefficients to be functions of other variables. These models can therefore accommodate more complex dependencies in the data. In this paper we consider the problem of automatically selecting the significant variables among a large set of variables when the interest is in a given response variable. In recent years several grouped regularization methods have been proposed, and we present them under one unified framework in the varying coefficient model context. For each of the grouped regularization methods discussed, we investigate the optimization problem to be solved, possible algorithms for doing so, and the variable-selection and estimation consistency of the method. We investigate the finite-sample performance of these methods in a comparative study and illustrate them on real data examples.
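The common thread of the grouped methods can be written schematically: expand each coefficient function in a basis and penalize its coefficient vector as a group, so a whole function is either kept or set exactly to zero.

```latex
% Additive varying-coefficient model with basis expansions:
y_i = \sum_{j=1}^{p} \beta_j(u_i)\, x_{ij} + \varepsilon_i, \qquad
\beta_j(u) = \sum_{l=1}^{d_j} \gamma_{jl}\, B_{jl}(u).
% Group-penalized least squares (group-lasso form):
\min_{\gamma} \; \sum_i \Big( y_i - \sum_j \beta_j(u_i)\, x_{ij} \Big)^2
  + \lambda \sum_{j=1}^{p} \sqrt{d_j}\, \|\gamma_j\|_2 .
% \gamma_j = 0 removes variable x_j entirely; adaptive and SCAD-type
% group penalties differ only in the penalty function applied to
% \|\gamma_j\|_2.
```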

15.
Traffic particle concentrations show considerable spatial variability within a metropolitan area. We consider latent variable semiparametric regression models for modelling the spatial and temporal variability of black carbon and elemental carbon concentrations in the greater Boston area. Measurements of these pollutants, which are markers of traffic particles, were obtained from several individual exposure studies that were conducted at specific household locations as well as 15 ambient monitoring sites in the area. The models allow for both flexible non-linear effects of covariates and for unexplained spatial and temporal variability in exposure. In addition, the different individual exposure studies recorded different surrogates of traffic particles, with some recording only outdoor concentrations of black or elemental carbon, some recording indoor concentrations of black carbon and others recording both indoor and outdoor concentrations of black carbon. A joint model for outdoor and indoor exposure that specifies a spatially varying latent variable provides greater spatial coverage in the area of interest. We propose a penalized spline formulation of the model that relates to generalized kriging of the latent traffic pollution variable and leads to a natural Bayesian Markov chain Monte Carlo algorithm for model fitting. We propose methods that allow us to control the degrees of freedom of the smoother in a Bayesian framework. Finally, we present results from an analysis that applies the model to data from summer and winter separately.
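The penalized-spline-as-mixed-model device underlying this kind of Bayesian fit can be summarized as follows (generic notation, not the paper's); the same Gaussian random-effect form with spatial basis functions is what links the formulation to kriging of the latent pollution field.

```latex
% Penalized spline in mixed-model form:
f(x) = \sum_{k} \beta_k x^k + \sum_{m} u_m z_m(x), \qquad
u \sim N(0, \sigma_u^2 I),
% so the smoothing parameter is the variance ratio sigma_eps^2 / sigma_u^2,
% estimable by (Bayesian) likelihood methods within MCMC.
% Effective degrees of freedom of the smoother:
\mathrm{df}(\sigma_u^2) = \mathrm{tr}\{ S_{\sigma_u^2} \},
% with S the smoother ("hat") matrix; fixing or bounding df is one way
% to control smoothness in a Bayesian framework.
```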

16.
Parsimonious Gaussian mixture models
Parsimonious Gaussian mixture models are developed using a latent Gaussian model which is closely related to the factor analysis model. These models provide a unified modeling framework which includes the mixtures of probabilistic principal component analyzers and mixtures of factor analyzers models as special cases. In particular, a class of eight parsimonious Gaussian mixture models based on the mixtures of factor analyzers model is introduced, and maximum likelihood estimates of the parameters in these models are found using an AECM algorithm. The class includes parsimonious models that have not previously been developed. The models are applied to the analysis of chemical and physical properties of Italian wines and the chemical properties of coffee; they are shown to give excellent clustering performance.
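Concretely, the family constrains the component covariance matrices of a mixture of factor analyzers; three binary constraints give the eight models (standard PGMM notation):

```latex
% Mixture of factor analyzers: component g has covariance
\Sigma_g = \Lambda_g \Lambda_g^\top + \Psi_g,
% Lambda_g: p x q loading matrix (q << p), Psi_g: diagonal noise matrix.
% The eight parsimonious models arise from all combinations of
\Lambda_g = \Lambda \ (\text{or not}), \quad
\Psi_g = \Psi \ (\text{or not}), \quad
\Psi_g = \psi_g I_p \ (\text{isotropic, or not}),
% i.e. 2 x 2 x 2 = 8 covariance structures, each fitted by AECM.
```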

17.
Rao (1963) formulated a damage model which we call an additive damage model. A related damage model, which we call a multiplicative damage model, was considered by Krishnaji (1970) for income-related problems. In these models, an original observation is subjected to damage, e.g., death or under-reporting, according to a specified probability law. Within the framework of an additive damage model, with a special form of damage, characterizations of the linear and logarithmic exponential families are formulated using regression properties of the damaged part on the undamaged part. The characterizations of the gamma and Pareto distributions, which have been found of some use in the theory of income distributions, are obtained as special cases. Similar results are investigated within the framework of the multiplicative damage model.
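Schematically, the two damage mechanisms are (notation ours, inferred from the abstract):

```latex
% Additive damage model (Rao): an original observation X splits into an
% undamaged part U and a damaged part D,
X = U + D, \qquad D \mid X \sim \text{a specified damage law},
% and the characterizations use regression properties such as E(D \mid U).
% Multiplicative damage model (Krishnaji): only a random fraction of X
% is observed,
Y = R\,X, \qquad R \in (0,1) \ \text{independent of } X,
% the natural model for under-reported incomes.
```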

18.
In this paper, we propose a new semiparametric heteroscedastic regression model allowing for positive and negative skewness and bimodal shapes, using a B-spline basis for the nonlinear effects. The proposed distribution is embedded in the generalized additive models for location, scale and shape (GAMLSS) framework, so that any or all parameters of the distribution can be modeled using parametric linear and/or nonparametric smooth functions of explanatory variables. We motivate the new model by means of Monte Carlo simulations showing that ignoring the skewness and bimodality of the random errors in semiparametric regression models may introduce biases in the parameter estimates and/or in the estimation of the associated variability measures. An iterative estimation process and some diagnostic methods are investigated. Applications to two real data sets are presented and the method is compared with the usual regression methods.
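The GAMLSS structure referred to above gives every distribution parameter its own (semi)parametric predictor; with B-spline terms this reads (generic notation):

```latex
% For each distribution parameter theta_k (location, scale, and the
% shape parameters controlling skewness/bimodality):
g_k(\theta_k) = X_k \beta_k + \sum_{j} h_{kj}(x_{kj}), \qquad
h_{kj}(x) = \sum_{l} \gamma_{kjl}\, B_l(x),
% g_k: known link functions; h_kj: smooth effects in a B-spline basis.
% Heteroscedasticity enters simply by giving the scale parameter its
% own predictor in the covariates.
```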

19.
This paper considers settings where populations of units may experience recurrent events, termed failures for convenience, and where the units are subject to varying levels of usage. We provide joint models for the recurrent events and usage processes, which facilitate analysis of their relationship as well as prediction of failures. Data on usage are often incomplete and we show how to implement maximum likelihood estimation in such cases. Random effects models with linear usage processes and gamma usage processes are considered in some detail. Data on automobile warranty claims are used to illustrate the proposed models and estimation methodology.
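One natural joint specification consistent with the abstract, in our own schematic notation: failures are driven by accumulated usage, the usage paths are modeled explicitly, and a unit-level random effect links the two processes.

```latex
% Unit i, usage path u_i(t), unit-level random effect alpha_i:
N_i(t) \mid \alpha_i, u_i \ \sim\ \text{Poisson process with mean}\
  \alpha_i\, g\{ u_i(t) \},
% Linear usage:  u_i(t) = beta_i t, with beta_i random across units;
% Gamma usage:   u_i(t) a gamma process with independent increments.
% When usage is only partially observed, the likelihood integrates over
% the unobserved portions of u_i, which is where maximum likelihood
% estimation requires care.
```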

20.
The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departures from MAR was assessed by comparing the primary result with those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern mixture models and shared parameter models) and with MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that after dropout the trajectory of drug-treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was -2.79 (p = .013). In placebo multiple imputation, the result was -2.17. Results from the other sensitivity analyses ranged from -2.21 to -3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from MNAR data was found. In the worst reasonable case scenario, the treatment effect was about 80% of the magnitude of the primary result (2.17/2.79 ≈ 0.78). Therefore, it was concluded that a treatment effect existed. This structured sensitivity framework, which uses a worst-reasonable-case result based on a controlled imputation approach with transparent and debatable assumptions, supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework.
