Similar Articles
20 similar articles found (search time: 171 ms)
1.
The multinomial logit model (MNL) is one of the most frequently used statistical models in marketing applications. It allows one to relate an unordered categorical response variable, for example representing the choice of a brand, to a vector of covariates such as the price of the brand or variables characterising the consumer. In its classical form, all covariates enter in strictly parametric, linear form into the utility function of the MNL model. In this paper, we introduce semiparametric extensions, where smooth effects of continuous covariates are modelled by penalised splines. A mixed model representation of these penalised splines is employed to obtain estimates of the corresponding smoothing parameters, leading to a fully automated estimation procedure. To validate semiparametric models against parametric models, we utilise different scoring rules as well as predicted market share and compare parametric and semiparametric approaches for a number of brand choice data sets.
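The linear-utility choice probabilities of the classical MNL model can be sketched as follows; this is a minimal NumPy illustration with made-up covariates and coefficients, not the paper's semiparametric estimator:

```python
import numpy as np

def mnl_probabilities(X, beta):
    """Choice probabilities of a multinomial logit model.

    X    : (J, p) covariate matrix for J alternatives (e.g. price, ad dummy)
    beta : (p,) coefficient vector of the linear utility V_j = x_j' beta
    Returns P_j = exp(V_j) / sum_k exp(V_k).
    """
    v = X @ beta
    v -= v.max()                      # stabilise the exponentials
    expv = np.exp(v)
    return expv / expv.sum()

# two brands described by (price, advertising dummy); numbers are illustrative
X = np.array([[2.0, 1.0],
              [2.5, 0.0]])
beta = np.array([-1.2, 0.8])
p = mnl_probabilities(X, beta)
```

In the semiparametric extension, the linear term `x_j' beta` would be replaced by a sum of penalised-spline smooths of the continuous covariates.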

2.
A data-driven approach for modeling volatility dynamics and co-movements in financial markets is introduced. Special emphasis is given to multivariate conditionally heteroscedastic factor models in which the volatilities of the latent factors depend on their past values, and the parameters are driven by regime switching in a latent state variable. We propose an innovative indirect estimation method based on the generalized EM algorithm principle combined with a structured variational approach that can handle models with large cross-sectional dimensions. Extensive Monte Carlo simulations and preliminary experiments with financial data show promising results.
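The regime-switching ingredient can be illustrated with the classical Hamilton filter for a two-state Gaussian model; this toy sketch only shows the filtering recursion, whereas the paper's variational EM for high-dimensional factor models is far more involved:

```python
import numpy as np

def norm_pdf(y, m, s):
    """Normal density, evaluated elementwise."""
    return np.exp(-0.5 * ((y - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def hamilton_filter(y, P, means, sds):
    """Filtered state probabilities p(s_t | y_1..t) for a Gaussian
    regime-switching model with transition matrix P (rows sum to 1)."""
    k = len(means)
    xi = np.full(k, 1.0 / k)                 # initial state distribution
    out = np.empty((len(y), k))
    for t, yt in enumerate(y):
        pred = P.T @ xi                      # one-step-ahead state probabilities
        lik = pred * norm_pdf(yt, means, sds)
        xi = lik / lik.sum()                 # Bayes update
        out[t] = xi
    return out

# sticky two-regime example: mean 0 in regime 0, mean 5 in regime 1
P = np.array([[0.95, 0.05], [0.05, 0.95]])
y = np.array([0.1, -0.2, 0.0, 5.1, 4.9, 5.3])
probs = hamilton_filter(y, P, means=np.array([0.0, 5.0]), sds=np.array([1.0, 1.0]))
```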

3.
Properties of a specification test for the parametric form of the variance function in diffusion processes are discussed. The test is based on the estimation of certain integrals of the volatility function. If the volatility function does not depend on the state variable x, it is known that the corresponding statistics have an asymptotic normal distribution. However, most models of mathematical finance use a volatility function that depends on the state x. In this paper we prove that in the general case, where σ also depends on x, the estimates of integrals of the volatility converge stably in law to random variables with a non-standard limit distribution. The limit distribution depends on the diffusion process X_t itself, and we use this result to develop a bootstrap test for the parametric form of the volatility function, which is consistent in the general diffusion model.
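The integrals of the volatility function that underlie such test statistics can be estimated from high-frequency increments. A minimal sketch of the idea, using a constant σ so the target value is known (the paper's setting, where σ depends on the state, is what produces the non-standard limit):

```python
import numpy as np

rng = np.random.default_rng(0)

def realized_variance(x):
    """Sum of squared increments: a consistent estimator of the
    integrated variance int_0^T sigma^2(t, X_t) dt as the grid shrinks."""
    dx = np.diff(x)
    return np.sum(dx ** 2)

# Euler path of dX = sigma dW with constant sigma = 0.2 on [0, 1]
n, sigma = 100_000, 0.2
dt = 1.0 / n
x = np.cumsum(sigma * np.sqrt(dt) * rng.standard_normal(n))
rv = realized_variance(np.concatenate(([0.0], x)))
# rv should be close to sigma**2 * T = 0.04
```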

4.
In the current paper, we explore some necessary probabilistic properties for the asymptotic inference of a broad class of periodic bilinear GARCH (PBLGARCH) processes, obtained by adding to the standard periodic GARCH models one or more interaction components between the observed series and its volatility process. In these models, the parameters of the conditional variance are allowed to switch periodically between different regimes. This specification leads to a new model that is able to capture the asymmetry, and hence the leverage effect, characterized by the negative correlation between return shocks and subsequent volatility shocks in seasonal financial time series. The first part of the paper establishes some basic structural properties of PBLGARCH processes needed for the remainder. In the second part, we study the consistency and asymptotic normality of the quasi-maximum likelihood estimator (QMLE), illustrated by a Monte Carlo study and applied to modelling the exchange rate of the Algerian Dinar against the US dollar.
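The QMLE studied here maximizes a Gaussian quasi-likelihood. For the standard (non-periodic, non-bilinear) GARCH(1,1) special case, the objective can be sketched as:

```python
import numpy as np

def garch11_qll(returns, omega, alpha, beta):
    """Gaussian quasi-log-likelihood of a GARCH(1,1),
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1};
    the QMLE maximizes this over (omega, alpha, beta)."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()                      # a common initialization
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return -0.5 * np.sum(np.log(2 * np.pi * sigma2) + r ** 2 / sigma2)

rng = np.random.default_rng(0)
ll = garch11_qll(rng.standard_normal(500) * 0.01, omega=1e-6, alpha=0.05, beta=0.9)
```

In the PBLGARCH case, the parameters (omega, alpha, beta) and the added bilinear interaction terms would vary periodically with the season of t.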

5.
In this paper we develop a measure of polarization for discrete distributions of non-negative grouped data. The measure takes into account the relative sizes and homogeneities of individual groups as well as the heterogeneities between all pairs of groups. It is based on the assumption that the total polarization within the distribution can be understood as a function of the polarizations between all pairs of groups. The measure allows information on existing groups within a population to be used directly to determine the degree of polarization. Thus the impact of various classifications on the degree of polarization can be analysed. The treatment of the distribution’s total polarization as a function of pairwise polarizations allows statements concerning the effect of an individual pair or an individual group on the total polarization.
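The pairwise construction is in the spirit of the classical Esteban-Ray measure, which aggregates size-weighted pairwise antagonisms over all pairs of groups. The sketch below shows that well-known reference point, not the paper's own measure:

```python
import numpy as np

def esteban_ray(pi, y, alpha=1.0):
    """Esteban-Ray (1994) polarization:
    P = sum_i sum_j pi_i^(1+alpha) * pi_j * |y_i - y_j|,
    where pi are group population shares and y group positions/incomes."""
    pi = np.asarray(pi, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.sum(pi[:, None] ** (1 + alpha) * pi[None, :]
                        * np.abs(y[:, None] - y[None, :])))

p2 = esteban_ray([0.5, 0.5], [0.0, 1.0])   # two equal, maximally split groups
p1 = esteban_ray([1.0], [0.5])             # a single group: no polarization
```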

6.
Based on the fact that realized measures of volatility are affected by measurement errors, we introduce a new family of discrete-time stochastic volatility models having two measurement equations relating both observed returns and realized measures to the latent conditional variance. A semi-analytical option pricing framework is developed for this class of models. In addition, we provide analytical filtering and smoothing recursions for the basic specification of the model, and an effective MCMC algorithm for its richer variants. The empirical analysis shows the effectiveness of filtering and smoothing realized measures in inflating the latent volatility persistence, the crucial parameter in pricing Standard and Poor’s 500 Index options.
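The two-measurement-equation structure can be sketched in a toy discrete-time form, where both the return and a noisy realized measure load on the same latent log-variance; all parameter values below are illustrative assumptions, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_rm_sv(n, mu=-1.0, phi=0.95, tau=0.2, xi=0.3):
    """Toy SV model with a second measurement equation:
    h_t      = mu + phi*(h_{t-1} - mu) + tau*eta_t   (latent log-variance)
    r_t      = exp(h_t / 2) * eps_t                  (observed return)
    log RM_t = h_t + xi*u_t                          (noisy realized measure)
    """
    h = np.empty(n)
    h[0] = mu
    for t in range(1, n):
        h[t] = mu + phi * (h[t - 1] - mu) + tau * rng.standard_normal()
    r = np.exp(h / 2) * rng.standard_normal(n)
    log_rm = h + xi * rng.standard_normal(n)
    return r, log_rm, h

r, log_rm, h = simulate_rm_sv(5000)
```

Because the realized measure observes the latent variance directly (up to noise), it is far more informative about h_t than the returns alone, which is what drives the filtering gains described above.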

7.
This paper gives conditions for the consistency of simultaneous redescending M-estimators for location and scale. Consistency requires the uniqueness of the parameters μ and σ, which are defined analogously to the estimators by using the population distribution function instead of the empirical one. The uniqueness of these parameters is not a matter of course, because the redescending ψ- and χ-functions that define them cannot be chosen so that the parameters result from a common minimization problem in which the sum of ρ-functions of standardized residuals is minimized. Instead, the parameters arise from two minimization problems, where the result of one problem enters as a parameter of the other; this can give different solutions. Proceeding from a symmetric unimodal distribution and the usual symmetry assumptions for ψ and χ leads, in most but not all cases, to uniqueness of the parameters. Under this and some other assumptions, we can also prove the consistency of the corresponding M-estimators, although these estimators are usually not unique even when the parameters are. The present article also serves as a basis for a forthcoming paper, which is concerned with a completely outlier-adjusted confidence interval for μ. To this end we introduce a modified sample size ñ in which data points far away from the bulk of the data are not counted at all.
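A standard example of a redescending ψ-function is Tukey's biweight, which is exactly zero beyond a cut-off c. With the scale held fixed, the location M-estimate can be computed by the usual weighted-mean iteration; this is a generic sketch, not the paper's simultaneous location-scale estimator:

```python
import numpy as np

def tukey_psi(u, c=4.685):
    """Tukey's biweight psi: redescending, identically zero for |u| > c."""
    out = u * (1 - (u / c) ** 2) ** 2
    out[np.abs(u) > c] = 0.0
    return out

def m_location(x, scale, c=4.685, tol=1e-10, max_iter=200):
    """Fixed-point iteration for the location M-estimate with given scale;
    uses the weights w(u) = psi(u) / u of the biweight."""
    mu = np.median(x)
    for _ in range(max_iter):
        u = (x - mu) / scale
        w = np.where(np.abs(u) > c, 0.0, (1 - (u / c) ** 2) ** 2)
        if w.sum() == 0:
            break
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            return mu_new
        mu = mu_new
    return mu

x = np.array([0.1, -0.3, 0.2, 0.05, -0.1, 50.0])   # one gross outlier
mad_scale = np.median(np.abs(x - np.median(x))) / 0.6745
mu = m_location(x, scale=mad_scale)
```

Because ψ redescends to zero, the outlier at 50 receives weight exactly 0, which is also the mechanism behind the modified sample size ñ mentioned above.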

8.
Every hedonic price index is an estimate of an unknown economic parameter. It depends, in practice, on one or more random samples of prices and characteristics of a certain good. Bootstrap resampling methods provide a tool for quantifying sampling errors. Following some general reflections on hedonic elementary price indices, this paper proposes a case-based, a model-based, and a wild bootstrap approach for estimating confidence intervals for hedonic price indices. Empirical results are obtained for a data set on used cars in Switzerland. A simple and an enhanced adaptive semi-logarithmic model are fit to monthly samples, and bootstrap confidence intervals are estimated for Jevons-type hedonic elementary price indices.
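The case-based (pairs) bootstrap can be sketched with a one-characteristic semi-log model and synthetic "used car" data; the index definition below (two semi-log fits compared at a common characteristic value) is a simplified stand-in for a Jevons-type hedonic index, and the true index in the simulation is exp(0.05) ≈ 1.051:

```python
import numpy as np

rng = np.random.default_rng(2)

def jevons_hedonic_index(base, current):
    """Index from semi-log fits log p = a + b*x in two periods,
    compared at a common characteristic value xref."""
    prices0, x0s = base
    prices1, x1s = current
    xref = np.mean(np.concatenate([x0s, x1s]))
    b0, a0 = np.polyfit(x0s, np.log(prices0), 1)   # slope first, then intercept
    b1, a1 = np.polyfit(x1s, np.log(prices1), 1)
    return np.exp((a1 + b1 * xref) - (a0 + b0 * xref))

def case_bootstrap_ci(base, current, n_boot=400, level=0.95):
    """Case (pairs) bootstrap: resample whole (price, characteristic)
    records within each period and recompute the index."""
    vals = []
    for _ in range(n_boot):
        i0 = rng.integers(0, len(base[0]), len(base[0]))
        i1 = rng.integers(0, len(current[0]), len(current[0]))
        vals.append(jevons_hedonic_index(
            (base[0][i0], base[1][i0]), (current[0][i1], current[1][i1])))
    return np.quantile(vals, [(1 - level) / 2, (1 + level) / 2])

# synthetic data: log-price falls with mileage; prices rise 5% between periods
n = 200
x0 = rng.uniform(0, 2, n); x1 = rng.uniform(0, 2, n)
p0 = np.exp(10.00 - 0.5 * x0 + 0.1 * rng.standard_normal(n))
p1 = np.exp(10.05 - 0.5 * x1 + 0.1 * rng.standard_normal(n))
lo, hi = case_bootstrap_ci((p0, x0), (p1, x1))
```

The model-based and wild bootstrap variants differ only in how the resampled data are built (resampling or perturbing residuals rather than whole cases).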

9.
Stochastic volatility models have been considered as a real alternative to conditional variance models, assuming that volatility follows a process different from the observed one. However, issues like the unobservable nature of volatility and the creation of “rich” dynamics give rise to the use of non-linear transformations for the volatility process. The Box-Cox transformation and its Yeo-Johnson variation, by nesting both the linear and the non-linear case, can be considered as natural functions to specify non-linear stochastic volatility models. In this framework, a fully Bayesian approach is used for parameter and log-volatility estimation. The new models are then investigated for their within-sample and out-of-sample performance against alternative stochastic volatility models using real financial data series.
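The two transformation families mentioned above are standard and can be written down directly; note that λ = 1 recovers the (shifted) linear case and λ = 0 the logarithm:

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox: (y^lam - 1)/lam for lam != 0, log(y) for lam = 0 (y > 0)."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y ** lam - 1) / lam

def yeo_johnson(y, lam):
    """Yeo-Johnson: a Box-Cox variant defined for all real y."""
    y = np.asarray(y, dtype=float)
    out = np.empty_like(y)
    pos, neg = y >= 0, y < 0
    if lam != 0:
        out[pos] = ((y[pos] + 1) ** lam - 1) / lam
    else:
        out[pos] = np.log1p(y[pos])
    if lam != 2:
        out[neg] = -((-y[neg] + 1) ** (2 - lam) - 1) / (2 - lam)
    else:
        out[neg] = -np.log1p(-y[neg])
    return out
```

In the SV context, the transformation is applied to the volatility process, and λ is treated as a parameter to be estimated within the Bayesian sampler.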

10.
There are several components to every consulting project. The most important ones are the scientific/industrial partners, the background and goals of the project, and data. A statistician has to interact successfully with every component for the project to be a success and an essential step is to encourage the project partners to interact with their own data. There is no better way to ensure that domain knowledge is fully integrated into any analysis. Partners are not always explicit about what they know, about what is possible and about what they want. It is not always clear to them either. Consulting in projects is more of a process than the accomplishment of a task, so continual interaction is needed. Sometimes this is easy, sometimes it is more difficult. This does not always have much to do with the intrinsic difficulty of the subject matter.

11.
This article deals with the estimation of continuous-time stochastic volatility models of option pricing. We argue that option prices are much more informative about the parameters than are asset prices. This is confirmed in a Monte Carlo experiment that compares two very simple strategies based on the different information sets. Both approaches are based on indirect inference and avoid any discretization bias by simulating the continuous-time model. We assume an Ornstein-Uhlenbeck process for the log of the volatility, a zero volatility risk premium, and no leverage effect. We do not pursue asymptotic efficiency or specification issues; rather, we stick to a framework with no overidentifying restrictions and show that, given our option-pricing model, estimation based on option prices is much more precise in samples of typical size, without increasing the computational burden.
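The assumed dynamics (Ornstein-Uhlenbeck log-volatility, no leverage) can be sketched as follows. This is a plain Euler scheme, which, unlike the exact simulation used in the paper, does carry discretization bias; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_ou_sv(n, dt, kappa=2.0, theta=np.log(0.04), sigma_v=0.3, s0=100.0):
    """Euler scheme for an SV model with OU log-variance and no leverage:
    d log V_t = kappa*(theta - log V_t) dt + sigma_v dW^v_t
    d log S_t = -V_t/2 dt + sqrt(V_t) dW^s_t, independent Brownian motions."""
    logv = np.empty(n + 1)
    logs = np.empty(n + 1)
    logv[0], logs[0] = theta, np.log(s0)
    for t in range(n):
        v = np.exp(logv[t])
        logv[t + 1] = (logv[t] + kappa * (theta - logv[t]) * dt
                       + sigma_v * np.sqrt(dt) * rng.standard_normal())
        logs[t + 1] = (logs[t] - 0.5 * v * dt
                       + np.sqrt(v * dt) * rng.standard_normal())
    return np.exp(logs), np.exp(logv)

s, v = simulate_ou_sv(n=2500, dt=1 / 250)   # ten years of daily steps
```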

12.
In this paper, we consider the deterministic trend model where the error process is allowed to be weakly or strongly correlated and subject to non-stationary volatility. Extant estimators of the trend coefficient are analysed. We find that under heteroskedasticity, the Cochrane-Orcutt-type estimator (with some initial condition) could be less efficient than Ordinary Least Squares (OLS) when the process is highly persistent, whereas it is asymptotically equivalent to OLS when the process is less persistent. An efficient non-parametrically weighted Cochrane-Orcutt-type estimator is then proposed. The efficiency is uniform over weak or strong serial correlation and non-stationary volatility of unknown form. The feasible estimator relies on non-parametric estimation of the volatility function, and the asymptotic theory is provided. We use a data-dependent smoothing bandwidth that can automatically adjust for the strength of non-stationarity in volatilities. The implementation does not require pretesting persistence of the process or specification of non-stationary volatility. Finite-sample evaluation via simulations and an empirical application demonstrates the good performance of the proposed estimators.
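The Cochrane-Orcutt-type estimator referenced above, in its textbook homoskedastic form for the trend model y_t = a + b·t + u_t with AR(1) errors (the paper's version adds a nonparametric volatility-based weighting on top of this):

```python
import numpy as np

def ols_trend(y):
    """OLS of y_t = a + b*t + u_t; returns (a, b)."""
    t = np.arange(len(y), dtype=float)
    X = np.column_stack([np.ones_like(t), t])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def cochrane_orcutt_trend(y, n_iter=10):
    """Cochrane-Orcutt: estimate rho from residuals, quasi-difference
    the trend regression y_t - rho*y_{t-1}, and iterate."""
    t = np.arange(len(y), dtype=float)
    a, b = ols_trend(y)
    for _ in range(n_iter):
        u = y - a - b * t
        rho = np.sum(u[1:] * u[:-1]) / np.sum(u[:-1] ** 2)
        ys = y[1:] - rho * y[:-1]
        Xs = np.column_stack([np.full(len(y) - 1, 1 - rho),
                              t[1:] - rho * t[:-1]])
        a, b = np.linalg.lstsq(Xs, ys, rcond=None)[0]
    return a, b, rho

# illustrative data: trend slope 0.5, AR(1) errors with rho = 0.6
rng = np.random.default_rng(4)
n = 500
u = np.zeros(n)
for i in range(1, n):
    u[i] = 0.6 * u[i - 1] + rng.standard_normal()
y = 1.0 + 0.5 * np.arange(n) + u
a, b, rho = cochrane_orcutt_trend(y)
```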

13.
We consider the problem of parameter estimation for inhomogeneous space-time shot-noise Cox point processes. We explore the possibility of using a stepwise estimation method and dimensionality-reducing techniques to estimate different parts of the model separately. We discuss the estimation method using projection processes and propose a refined method that avoids projection to the temporal domain. This remedies the main flaw of the projection-based method: possible overlapping, in the projection process, of clusters that are clearly separated in the original space-time process. This issue is more prominent in the temporal projection process, where the amount of information lost by projection is higher than in the spatial projection process. For the refined method, we derive consistency and asymptotic normality results under increasing domain asymptotics and appropriate moment and mixing assumptions. We also present a simulation study suggesting that cluster overlapping is successfully overcome by the refined method.
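A simple shot-noise Cox process of the Thomas type can be simulated in a few lines; projecting its points onto one axis illustrates how clusters that are well separated in the full space can overlap after projection. All parameter values are illustrative, and edge effects from parents outside the window are ignored:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_thomas(kappa, mu, sigma, window=1.0):
    """Thomas process on [0, window]^2: Poisson(kappa * area) parents,
    each with Poisson(mu) offspring displaced by N(0, sigma^2 I)."""
    n_parents = rng.poisson(kappa * window ** 2)
    parents = rng.uniform(0, window, size=(n_parents, 2))
    pts = []
    for p in parents:
        n_off = rng.poisson(mu)
        pts.append(p + sigma * rng.standard_normal((n_off, 2)))
    return np.vstack(pts) if pts else np.empty((0, 2))

pts = simulate_thomas(kappa=20, mu=10, sigma=0.02)
# pts[:, 0] alone is the projection onto the first axis: clusters whose
# parents differ only in the second coordinate become indistinguishable
```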

14.
This paper addresses, via thresholding, the estimation of a possibly sparse signal observed subject to Gaussian noise. Conceptually, the optimal threshold for such problems depends upon the strength of the underlying signal. We propose two new methods that aim to adapt to potential local variation in this signal strength and select a variable threshold accordingly. Our methods are based upon an empirical Bayes approach with a smoothly variable mixing weight chosen via either spline or kernel based marginal maximum likelihood regression. We demonstrate the excellent performance of our methods in both one and two-dimensional estimation when compared to various alternative techniques. In addition, we consider the application to wavelet denoising where reconstruction quality is significantly improved with local adaptivity.  
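The idea of a locally varying threshold can be illustrated with soft thresholding whose level adapts to a crude moving-average estimate of signal strength; this is a simplistic stand-in for the paper's empirical Bayes mixing-weight regression:

```python
import numpy as np

rng = np.random.default_rng(5)

def soft_threshold(x, t):
    """Soft thresholding: shrink towards zero by t; values in [-t, t] become 0."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def local_variable_threshold(x, window=25, sigma=1.0):
    """Crude variable threshold: start from the universal threshold
    sigma*sqrt(2 log n) and shrink it where a moving-average estimate of
    local signal energy exceeds the noise level."""
    n = len(x)
    energy = np.convolve(x ** 2, np.ones(window) / window, mode="same")
    strength = np.clip(energy - sigma ** 2, 0.0, None)   # excess over noise
    t_max = sigma * np.sqrt(2 * np.log(n))
    return t_max / (1.0 + strength / sigma ** 2)

# sparse signal: one burst in the middle of pure noise
n = 512
signal = np.zeros(n)
signal[200:240] = 5.0
x = signal + rng.standard_normal(n)
xhat = soft_threshold(x, local_variable_threshold(x))
```

Where the signal is absent the threshold stays near the universal level and kills the noise; inside the burst it shrinks towards zero and leaves the signal almost untouched.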

15.
Popular parametric diffusion processes are often assumed as the underlying diffusion processes. This paper considers an important case where both the drift and volatility functions of the underlying diffusion process are unknown functions of the underlying process, and proposes two novel testing procedures for the parametric specification of both the drift and diffusion functions. The finite-sample properties of the proposed tests are assessed using data generated from four popular parametric models. In our implementation, we suggest using a simulated critical value for each case in addition to an asymptotic critical value. Our detailed studies show that there is little size distortion when using a simulated critical value, while the proposed tests have some size distortion when using an asymptotic critical value.
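Simulated critical values of the kind used above follow a generic Monte Carlo recipe: simulate the model under the null, recompute the statistic, and take an upper quantile. A toy sketch with a statistic whose asymptotic 5% critical value is known (1.96) for comparison:

```python
import numpy as np

rng = np.random.default_rng(8)

def simulated_critical_value(stat_fn, simulate_null, n_rep=500, level=0.05):
    """Monte Carlo critical value: simulate under the null n_rep times,
    recompute the statistic, and return its upper (1 - level) quantile."""
    stats = np.array([stat_fn(simulate_null()) for _ in range(n_rep)])
    return np.quantile(stats, 1 - level)

# toy example: statistic sqrt(n)*|mean|, null = iid N(0, 1), n = 100;
# the asymptotic 5% critical value is the normal quantile 1.96
n = 100
crit = simulated_critical_value(
    stat_fn=lambda x: np.sqrt(len(x)) * abs(x.mean()),
    simulate_null=lambda: rng.standard_normal(n))
```

In the diffusion-testing context, `simulate_null` would generate a path from the fitted parametric model rather than i.i.d. normals.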

17.
For the analysis of square contingency tables with ordered categories, Tomizawa (1991) considered the diagonal uniform association symmetry (DUS) model, which has a multiplicative form for cell probabilities and has the structure of uniform association in the tables constructed using two diagonals that are equidistant from the main diagonal. This paper proposes another DUS model which has a similar multiplicative form for cumulative probabilities. The model indicates that the odds that an observation will fall in row category i or below and column category i+k or above, instead of in column category i or below and row category i+k or above, increase (decrease) exponentially as the cutpoint i increases for a fixed k. Examples are given.
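The odds described in the abstract can be computed directly from a table of cell probabilities; the sketch below uses 0-based indices and checks the intuitive fact that under complete symmetry the odds equal 1:

```python
import numpy as np

def dus_odds(p, i, k):
    """Odds that an observation falls in row category <= i and column
    category >= i+k, against column <= i and row >= i+k (0-based indices)."""
    num = p[:i + 1, i + k:].sum()
    den = p[i + k:, :i + 1].sum()
    return num / den

# a symmetric 4x4 table of cell probabilities
a = np.arange(1.0, 17.0).reshape(4, 4)
p = (a + a.T) / (a + a.T).sum()
odds = dus_odds(p, i=0, k=2)
```

The proposed model constrains how these odds change in the cutpoint i for fixed k (exponential increase or decrease).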

18.
With the growing availability of high-frequency data, long memory has become a popular topic in finance research. The Fractionally Integrated GARCH (FIGARCH) model is a standard approach to studying the long memory of financial volatility. The original specification of the FIGARCH model uses the Normal distribution, which cannot accommodate the fat tails commonly found in financial time series. Traditionally, the Student-t distribution and the Generalized Error Distribution (GED) are used instead to address that problem. However, a recent study points out that the Student-t lacks stability, and the Stable distribution has been introduced instead. The issue with the Stable distribution is that its second moment does not exist. To overcome this new problem, the tempered stable distribution, which retains the most attractive characteristics of the Stable distribution but has well-defined moments, is a natural candidate. In this paper, we describe the estimation procedure for the FIGARCH model with the tempered stable distribution and conduct a series of simulation studies demonstrating that it consistently outperforms FIGARCH models with the Normal, Student-t and GED distributions. Empirical evidence from S&P 500 hourly returns is also provided, with robust results. We therefore argue that the tempered stable distribution could be a widely useful tool for modelling high-frequency financial volatility in general contexts with a FIGARCH-type specification.
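The long-memory ingredient of any FIGARCH-type model, regardless of the innovation distribution, is the fractional difference operator (1 - L)^d, whose lag weights follow a simple recursion:

```python
import numpy as np

def frac_diff_weights(d, n):
    """Coefficients pi_0..pi_n of (1 - L)^d via the recursion
    pi_0 = 1, pi_j = pi_{j-1} * (j - 1 - d) / j,
    used in the ARCH(infinity) expansion of FIGARCH models."""
    pi = np.empty(n + 1)
    pi[0] = 1.0
    for j in range(1, n + 1):
        pi[j] = pi[j - 1] * (j - 1 - d) / j
    return pi

w = frac_diff_weights(d=0.4, n=1000)
# weights decay hyperbolically (long memory), not geometrically as in GARCH
```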

19.
The German Microcensus (MC) is a large-scale rotating panel survey spanning three years. The MC is attractive for longitudinal analysis over the entire participation duration because of the mandatory participation and the very high case numbers (about 200,000 respondents). However, as a consequence of the area sampling used for the MC, residential mobility is not covered, and statistical information at the new residence is consequently lacking in the MC sample. This raises the question of whether longitudinal analyses, such as transitions between labour market states, are biased, and how well different methods that promise to reduce such a bias perform. Similar problems also occur for other national Labour Force Surveys (LFS) that are rotating panels and do not cover residential mobility; see Clarke and Tate (2002). Based on data from the German Socio-Economic Panel (SOEP), which covers residential mobility, we analysed the effects of the missing data for residential movers on the estimation of labour force flows. By comparing the results from the complete SOEP sample with the results from the SOEP restricted to non-movers, we concluded that the non-coverage of residential movers cannot be ignored in Rubin’s sense. As correction methods, we analysed weighting by inverse mobility scores and log-linear models for partially observed contingency tables. Our results indicate that weighting by inverse mobility scores reduces the bias to about 60%, whereas the official longitudinal weights obtained by calibration result in a bias reduction of about 80%. The estimation of log-linear models for non-ignorable non-response leads to very unstable results.
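Weighting by inverse mobility scores is a form of inverse-probability weighting. A toy sketch in which mobility and labour-market transitions share a common covariate, so that the naive stayers-only estimate is biased while the weighted one is not; all probabilities are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

def ipw_transition_rate(moved_prob, observed, transition):
    """Weight each observed respondent by the inverse of the probability of
    remaining observed (1 - mobility score), correcting for movers dropping
    out of an address-based panel."""
    w = 1.0 / (1.0 - moved_prob[observed])
    return np.average(transition[observed], weights=w)

# covariate x drives both mobility and the transition probability
n = 200_000
x = rng.integers(0, 2, n)
move_p = np.where(x == 1, 0.4, 0.1)                  # movers drop out
observed = rng.random(n) >= move_p                   # stayers only
transition = (rng.random(n) < np.where(x == 1, 0.5, 0.2)).astype(float)

naive = transition[observed].mean()                  # biased: expected 0.32
corrected = ipw_transition_rate(move_p, observed, transition)
# population transition rate is 0.5*0.5 + 0.5*0.2 = 0.35
```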

20.
Multi-asset modelling is of fundamental importance to financial applications such as risk management and portfolio selection. In this article, we propose a multivariate stochastic volatility modelling framework with a parsimonious and interpretable correlation structure. Building on well-established evidence of common volatility factors among individual assets, we consider a multivariate diffusion process with a common-factor structure in the volatility innovations. Upon substituting an observable market proxy for the common volatility factor, we markedly improve the estimation of several model parameters and latent volatilities. The model is applied to a portfolio of several important constituents of the S&P500 in the financial sector, with the VIX index as the common-factor proxy. We find that the prediction intervals for asset forecasts are comparable to those of more complex dependence models, but that option-pricing uncertainty can be greatly reduced by adopting a common-volatility structure. The Canadian Journal of Statistics 48: 36–61; 2020 © 2020 Statistical Society of Canada
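The common-volatility-factor idea can be sketched in a toy discrete-time form: with a single latent log-variance factor driving both assets, their squared returns co-move even though the return shocks are independent. Parameters and loadings below are illustrative, and the observable proxy (the VIX in the paper) would replace the latent factor f in estimation:

```python
import numpy as np

rng = np.random.default_rng(9)

def simulate_common_factor_sv(n, dt=1 / 252, rho_f=0.9, betas=(1.0, 0.6)):
    """Two assets whose log-variances load on one AR(1) common factor f_t:
    vol_i,t = 0.2 * exp(0.5 * beta_i * f_t); conditionally Gaussian returns."""
    f = np.zeros(n)
    for t in range(1, n):
        f[t] = rho_f * f[t - 1] + 0.3 * rng.standard_normal()
    r = np.empty((n, len(betas)))
    for i, b in enumerate(betas):
        vol = 0.2 * np.exp(0.5 * b * f)
        r[:, i] = vol * np.sqrt(dt) * rng.standard_normal(n)
    return r, f

r, f = simulate_common_factor_sv(20_000)
corr_sq = np.corrcoef(r[:, 0] ** 2, r[:, 1] ** 2)[0, 1]
```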
