Similar literature
20 similar articles found.
1.
Estimation of a regression function from independent and identically distributed data is considered. The L2 error, integrated with respect to the design measure, is used as the error criterion. Upper bounds on the L2 error of least squares regression estimates are presented which bound the error of the estimate when the values of the independent and dependent variables in the sample given to the estimate are perturbed by some arbitrary procedure. The bounds are applied to analyze regression-based Monte Carlo methods for pricing American options in the case of errors in modelling the price process.

2.
The theoretical price of a financial option is given by the expectation of its discounted expiry-time payoff. The computation of this expectation depends on the density of the value of the underlying instrument at expiry time. This density depends both on the parametric model assumed for the behaviour of the underlying and on the values of parameters within the model, such as volatility. However, neither the model nor the parameter values are known. Common practice when pricing options is to assume a specific model, such as geometric Brownian motion, and to use point estimates of the model parameters, thereby precisely defining a density function. We explicitly acknowledge the uncertainty of model and parameters by constructing the predictive density of the underlying as an average of model predictive densities, weighted by each model's posterior probability. A model's predictive density is constructed by integrating its transition density function against the posterior distribution of its parameters. This is an extension of Bayesian model averaging. Sampling importance-resampling and Monte Carlo algorithms implement the computation. The advantage of this method is that rather than falsely assuming the model and parameter values are known, inherent ignorance is acknowledged and dealt with in a mathematically logical manner, which utilises all information from past and current observations to generate and update option prices. Moreover, point estimates for parameters are unnecessary. We use this method to price a European call option on a share index.
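A minimal Monte Carlo sketch of the model-averaging idea described above (not the authors' sampling importance-resampling implementation): the call price is computed under each candidate model by averaging discounted payoffs over posterior parameter draws, and the results are combined with posterior model weights. The two candidate models, the volatility posteriors and the posterior probabilities below are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, T = 100.0, 105.0, 0.02, 0.5        # hypothetical contract data
n_paths, n_param_draws = 20_000, 200

def gbm_terminal(sigma):
    """Terminal price under geometric Brownian motion with volatility sigma."""
    z = rng.standard_normal(n_paths)
    return S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

def price_under_model(sigma_draws):
    """Average the discounted payoff over parameter draws from a posterior,
    i.e. integrate the transition density against the parameter posterior."""
    prices = []
    for sigma in sigma_draws:
        ST = gbm_terminal(sigma)
        prices.append(np.exp(-r * T) * np.mean(np.maximum(ST - K, 0.0)))
    return np.mean(prices)

# Hypothetical posterior draws of volatility under two candidate models and
# hypothetical posterior model probabilities.
sigma_post_m1 = rng.normal(0.20, 0.02, n_param_draws)
sigma_post_m2 = rng.normal(0.25, 0.03, n_param_draws)
post_prob = np.array([0.6, 0.4])

bma_price = (post_prob[0] * price_under_model(sigma_post_m1)
             + post_prob[1] * price_under_model(sigma_post_m2))
print(f"model-averaged call price: {bma_price:.3f}")
```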

3.
Several variations of monotone nonparametric regression have been developed over the past 30 years. One approach is to first apply nonparametric regression to the data and then monotone smooth the initial estimates to “iron out” violations of the assumed order. Here, such estimators are considered, where local polynomial regression is used first, followed by either least squares isotonic regression or a monotone method using simple averages. The primary focus of this work is to evaluate different types of confidence intervals for these monotone nonparametric regression estimators through Monte Carlo simulation. Most of the confidence intervals use bootstrap or jackknife procedures. Estimation of a response variable as a function of two continuous predictor variables is considered, where the estimation is performed at the observed values of the predictors (instead of on a grid). The methods are then applied to data on subjects who worked at plants that use beryllium metal and who have developed chronic beryllium disease.
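The two-stage construction can be sketched in a few lines: an unconstrained local linear fit, followed by least squares isotonic regression to enforce the assumed order. This is a one-predictor illustration with a hypothetical increasing mean function and bandwidth; the paper itself works with two predictors and also studies a simple-averages monotonizer and the associated confidence intervals.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(1)
n = 200
x = np.sort(rng.uniform(0, 1, n))
y = np.log1p(5 * x) + rng.normal(0, 0.2, n)      # increasing mean function

def local_linear(x0, x, y, h):
    """Unconstrained local linear estimate at x0 with a Gaussian kernel."""
    sw = np.sqrt(np.exp(-0.5 * ((x - x0) / h) ** 2))   # square-root kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta[0]

h = 0.08                                         # hypothetical bandwidth
initial = np.array([local_linear(x0, x, y, h) for x0 in x])

# Step 2: "iron out" order violations with least squares isotonic regression.
monotone = IsotonicRegression(increasing=True).fit_transform(x, initial)
```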

4.
American options in discrete time can be priced by solving optimal stopping problems. This can be done by computing so-called continuation values, which we represent as regression functions defined recursively using the continuation values of the next time step. We use Monte Carlo to generate data, and then we apply smoothing spline regression estimates to estimate the continuation values from these data. All parameters of the estimate are chosen in a data-dependent way. We present results concerning consistency and the estimates’ rate of convergence.
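A compact sketch of one regression-based backward induction with smoothing splines for a Bermudan put under geometric Brownian motion. It is not necessarily the authors' exact scheme: SciPy's make_smoothing_spline (smoothing level chosen by generalized cross-validation, available in SciPy 1.10+) stands in for their data-dependent smoothing spline, and the contract parameters are hypothetical.

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

rng = np.random.default_rng(2)
S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.25, 1.0   # hypothetical Bermudan put
n_paths, n_steps = 5_000, 50
dt = T / n_steps
disc = np.exp(-r * dt)

# Monte Carlo paths of geometric Brownian motion at the exercise dates.
z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))

def payoff(s):
    return np.maximum(K - s, 0.0)

# Backward induction: the continuation value at each date is a regression of the
# discounted value carried forward on the current stock price.
value = payoff(S[:, -1])
for t in range(n_steps - 2, -1, -1):
    value *= disc
    order = np.argsort(S[:, t])
    spline = make_smoothing_spline(S[order, t], value[order])
    continuation = spline(S[:, t])
    exercise = payoff(S[:, t])
    value = np.where(exercise > continuation, exercise, value)

price = disc * value.mean()
print(f"estimated American put price: {price:.3f}")
```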

5.
Estimating a Convex Function in Nonparametric Regression
Abstract.  A new nonparametric estimate of a convex regression function is proposed and its stochastic properties are studied. The method starts with an unconstrained estimate of the derivative of the regression function, which is first isotonized and then integrated. We prove asymptotic normality of the new estimate and show that it is first-order asymptotically equivalent to the initial unconstrained estimate if the regression function is in fact convex. If convexity is not present, the method estimates a convex function whose derivative has the same Lp-norm as the derivative of the (non-convex) underlying regression function. The finite sample properties of the new estimate are investigated by means of a simulation study and it is compared with a least squares approach to convex estimation. The application of the new method is demonstrated in two data examples.
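The construction can be illustrated directly: estimate the derivative without constraints, isotonize it (a nondecreasing derivative is equivalent to convexity), and integrate. In this sketch the unconstrained derivative estimate is simply the slope of a local linear fit, and the data and bandwidth are hypothetical.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(3)
n = 300
x = np.sort(rng.uniform(-1, 1, n))
y = x**2 + rng.normal(0, 0.1, n)                 # convex mean function

def local_slope(x0, x, y, h):
    """Unconstrained derivative estimate: slope of a local linear fit at x0."""
    sw = np.sqrt(np.exp(-0.5 * ((x - x0) / h) ** 2))
    X = np.column_stack([np.ones_like(x), x - x0])
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta[1]

h = 0.15                                         # hypothetical bandwidth
deriv = np.array([local_slope(x0, x, y, h) for x0 in x])

# Convexity <=> nondecreasing derivative: isotonize, then integrate (trapezoid rule).
deriv_iso = IsotonicRegression(increasing=True).fit_transform(x, deriv)
fhat = np.concatenate([[0.0],
                       np.cumsum(0.5 * (deriv_iso[1:] + deriv_iso[:-1]) * np.diff(x))])
fhat += y.mean() - fhat.mean()                   # pin down the integration constant
```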

6.
The bootstrap, like the jackknife, is a technique for estimating standard errors. The idea is to use Monte Carlo simulation, based on a nonparametric estimate of the underlying error distribution. The bootstrap will be applied to an econometric model describing the demand for capital, labor, energy, and materials. The model is fitted by three-stage least squares. In sharp contrast with previous results, the coefficient estimates and the estimated standard errors perform very well. However, the model's forecasts show serious bias and large random errors, significantly understated by the conventional standard error of forecast.

7.
O. Bunke & M. Möhner, Statistics, 2013, 47(4): 471-482
This paper deals with the estimation of a regression function f of unknown form by shrunken least squares estimates calculated on the basis of possibly replicated observations. The estimation loss is chosen in a somewhat more realistic manner than the usual quadratic losses and is given by an adequately weighted sum of squared errors in estimating the values of f at the design points, normalized by the squared norm of the regression function. Shrunken least squares estimates (as special ridge estimators) have been proved by the authors in special cases to be minimax among all estimators.

We investigate the shrinking of least squares estimators with the objective of minimizing the least favourable risk. Here we assume a known lower bound for the magnitude of f and a known upper bound for the difference between f and some simple function approximating f, e.g. we know that f is the sum of a quadratic polynomial and of some …

8.
This paper derives EM and generalized EM (GEM) algorithms for calculating least absolute deviations (LAD) estimates of the parameters of linear and nonlinear regression models. It shows that Schlossmacher's iteratively reweighted least squares algorithm for calculating LAD estimates (E.J. Schlossmacher, Journal of the American Statistical Association 68: 857–859, 1973) is an EM algorithm. A GEM algorithm for computing LAD estimates of the parameters of nonlinear regression models is also provided and is applied in some examples.
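A sketch of Schlossmacher's iteratively reweighted least squares iteration for linear LAD regression, which the paper interprets as an EM algorithm. The small constant eps guarding against zero residuals and the heavy-tailed test data are illustrative choices.

```python
import numpy as np

def lad_irls(X, y, n_iter=100, eps=1e-6):
    """Least absolute deviations via iteratively reweighted least squares
    (Schlossmacher, 1973); each pass is a weighted least squares step."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]            # OLS start
    for _ in range(n_iter):
        r = np.abs(y - X @ beta)
        w = 1.0 / np.maximum(r, eps)                        # reciprocal-residual weights
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < 1e-8:
            break
        beta = beta_new
    return beta

# Hypothetical example with heavy-tailed noise.
rng = np.random.default_rng(4)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=2, size=n)
print(lad_irls(X, y))
```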

9.
We derive an asymptotic theory of nonparametric estimation for the time series regression model Z_t = f(X_t) + W_t, where {X_t} and {Z_t} are observed nonstationary processes and {W_t} is an unobserved stationary process. The class of nonstationary processes allowed for {X_t} is a subclass of the class of null recurrent Markov chains. This subclass contains the random walk, unit root processes and nonlinear processes. The process {W_t} is assumed to be linear and stationary.

10.
We define a parametric proportional odds frailty model to describe lifetime data incorporating heterogeneity between individuals. An unobserved individual random effect, called frailty, acts multiplicatively on the odds of failure by time t. We investigate fitting by maximum likelihood and by least squares. For the latter, the parametric survivor function is fitted to the nonparametric Kaplan–Meier estimate at the observed failure times. Bootstrap standard errors and confidence intervals are obtained for the least squares estimates. The models are applied successfully to simulated data and to two real data sets. Least squares estimates appear to have smaller bias than maximum likelihood.
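A stripped-down sketch of the least squares step only: a log-logistic survivor function (a basic proportional odds survival model, with covariates and the frailty term omitted) is fitted to a hand-computed Kaplan–Meier estimate at the observed failure times. The data-generating mechanism, the parametrization S(t) = 1/(1 + (t/a)^b) and the starting values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)
n = 300
true_t = rng.weibull(1.5, n) * 10.0            # hypothetical event times
cens_t = rng.exponential(15.0, n)              # hypothetical censoring times
time = np.minimum(true_t, cens_t)
event = (true_t <= cens_t).astype(int)

# Kaplan-Meier product-limit estimate evaluated at the observed failure times.
order = np.argsort(time)
time, event = time[order], event[order]
at_risk = n - np.arange(n)
km = np.cumprod(1.0 - event / at_risk)
fail = event == 1
t_fail, km_fail = time[fail], km[fail]

# Least squares fit of a log-logistic survivor function to the KM estimate.
def surv(t, a, b):
    return 1.0 / (1.0 + (t / a) ** b)

(a_hat, b_hat), _ = curve_fit(surv, t_fail, km_fail, p0=[5.0, 1.0],
                              bounds=([1e-6, 1e-6], [np.inf, np.inf]))
print(f"fitted scale {a_hat:.3f}, shape {b_hat:.3f}")
```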

11.
In this paper we explore statistical properties of some difference-based approaches to estimating the error variance in small samples, in a nonparametric regression setting where the mean function satisfies a Lipschitz condition. Our study is motivated by Tong and Wang (2005), who estimated the error variance using a least squares approach. They treated the error variance as the intercept in a simple linear regression obtained from the expectation of their lag-k Rice estimator. Their variance estimators depend strongly on the choice of regressor and weight in this simple linear regression. Although the regressor and weight can be varied according to the characteristics of the unknown nonparametric mean function, Tong and Wang (2005) used a fixed regressor and weight for large samples and gave no indication of how to determine them. In this paper, we propose a new approach via local quadratic approximation to determine this regressor and weight. Using the proposed regressor and weight, we estimate the error variance as the intercept of a simple linear regression fitted by both ordinary least squares and weighted least squares. Our approach applies to both small and large samples, while most existing difference-based methods are appropriate solely for large samples. We compare the performance of our approach with other existing approaches in an extensive simulation study. The advantage of our approach is demonstrated using a real data set.
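A sketch of the difference-based idea being refined here: compute lag-k Rice estimators, regress them on a regressor suggested by a first-order expansion, and read off the error variance as the intercept. The regressor d_k = (k/n)^2 and the weights n - k below are illustrative fixed choices of the kind the paper seeks to replace with its local quadratic approximation; they are not the paper's proposal.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100
x = np.arange(1, n + 1) / n
sigma = 0.5
y = np.sin(2 * np.pi * x) + rng.normal(0, sigma, n)     # hypothetical small sample

# Lag-k Rice estimators s_k = sum_i (y_{i+k} - y_i)^2 / (2(n-k)).
K = 20                                                  # hypothetical number of lags
k = np.arange(1, K + 1)
s = np.array([np.sum((y[j:] - y[:-j]) ** 2) / (2 * (n - j)) for j in k])

# First-order expansion: E(s_k) is roughly sigma^2 plus a bias term proportional
# to (k/n)^2, so sigma^2 is estimated as the intercept of a weighted linear
# regression of s_k on d_k = (k/n)^2 with weights proportional to n - k.
d = (k / n) ** 2
w = n - k
X = np.column_stack([np.ones_like(d), d])
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(X * sw[:, None], s * sw, rcond=None)
print(f"weighted LS variance estimate: {beta[0]:.4f} (true {sigma**2:.4f})")
```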

12.
The aim of our paper is to elaborate a theoretical methodology, based on the Malliavin calculus, to calculate the conditional expectation E(P_t(X_t) | X_s) for s ≤ t, where the only state variable follows a J-process [Jerbi Y. A new closed-form solution as an extension of the Black-Scholes formula allowing smile curve plotting. Quant Finance. 2013; Online First Article. doi:10.1080/14697688.2012.762458]. The theoretical results are applied to American option pricing, extending the work of Bally et al. [Pricing and hedging American options by Monte Carlo methods using a Malliavin calculus approach. Monte Carlo Methods Appl. 2005;11-2:97–133], just as the J-process (with additional parameters λ and θ) is an extension of the Wiener process. The introduction of these parameters induces skewness and kurtosis effects, i.e. a smile curve, allowing the model to fit the reality of the financial market. Jerbi [Jerbi Y. A new closed-form solution as an extension of the Black-Scholes formula allowing smile curve plotting. Quant Finance. 2013; Online First Article. doi:10.1080/14697688.2012.762458] showed that using the J-process is equivalent to using a stochastic volatility model based on the Wiener process, as in Heston's model. The present work extends this result to American options. We study the influence of the parameters λ and θ on the American option price and find empirical results consistent with option theory.

13.
Summary.  We construct approximate confidence intervals for a nonparametric regression function, using polynomial splines with free-knot locations. The number of knots is determined by generalized cross-validation. The estimates of knot locations and coefficients are obtained through a non-linear least squares solution that corresponds to the maximum likelihood estimate. Confidence intervals are then constructed based on the asymptotic distribution of the maximum likelihood estimator. Average coverage probabilities and the accuracy of the estimate are examined via simulation. This includes comparisons between our method and some existing methods such as smoothing splines and variable-knot selection, as well as a Bayesian version of the variable-knot method. Simulation results indicate that our method works well for smooth underlying functions and also reasonably well for discontinuous functions. It also performs well for fairly small sample sizes.

14.
Our goal is to find a regression technique that can be used in a small-sample situation with possible model misspecification. The development of a new bandwidth selector allows nonparametric regression (in conjunction with least squares) to be used in this small-sample problem, where nonparametric procedures have previously proven to be inadequate. Considered here are two new semiparametric (model-robust) regression techniques that combine parametric and nonparametric techniques when there is partial information present about the underlying model. A general overview is given of how typical concerns for bandwidth selection in nonparametric regression extend to the model-robust procedures. A new penalized PRESS criterion (with a graphical selection strategy for applications) is developed that overcomes these concerns and is able to maintain the beneficial mean squared error properties of the new model-robust methods. It is shown that this new selector outperforms standard and recently improved bandwidth selectors. Comparisons of the selectors are made via numerous generated data examples and a small simulation study.
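A sketch of a penalized leave-one-out criterion for bandwidth selection in a kernel smoother: PRESS divided by n minus the trace of the hat matrix. This is in the spirit of, but not identical to, the paper's penalized PRESS criterion, and a plain Nadaraya-Watson smoother stands in for the model-robust estimators; data and bandwidth grid are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 60                                           # small sample, as in the paper's setting
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)

def nw_hat_matrix(x, h):
    """Hat matrix of a Nadaraya-Watson smoother with a Gaussian kernel."""
    d = (x[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * d ** 2)
    return K / K.sum(axis=1, keepdims=True)

def penalized_press(x, y, h):
    """Leave-one-out PRESS divided by (n - trace(H)): a simple penalized
    criterion, not the paper's exact PRESS** formula."""
    H = nw_hat_matrix(x, h)
    resid = y - H @ y
    loo = resid / (1.0 - np.diag(H))
    return np.sum(loo ** 2) / (len(y) - np.trace(H))

bandwidths = np.linspace(0.02, 0.3, 30)
scores = [penalized_press(x, y, h) for h in bandwidths]
h_best = bandwidths[int(np.argmin(scores))]
print(f"selected bandwidth: {h_best:.3f}")
```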

15.
The bootstrap is a methodology for estimating standard errors. The idea is to use a Monte Carlo simulation experiment based on a nonparametric estimate of the error distribution. The main objective of this article is to demonstrate the use of the bootstrap to attach standard errors to coefficient estimates in a second-order autoregressive model fitted by least squares and by maximum likelihood. Additionally, a comparison of the bootstrap and the conventional methodology is made. As it turns out, the conventional asymptotic formulae (for both the least squares and maximum likelihood estimates) appear to overestimate the true standard errors. But there are two problems: (i) the first two observations y_1 and y_2 have been fixed, and (ii) the residuals have not been inflated. After these two factors are accounted for in the trial and bootstrap experiment, both the conventional maximum likelihood and bootstrap estimates of the standard errors appear to perform quite well.
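A sketch of a residual bootstrap for an AR(2) model fitted by least squares, keeping the first two observations fixed (one of the two issues the article raises; the other, inflating the residuals, is omitted here). The simulated series and the number of replications are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(8)

def fit_ar2(y):
    """Least squares fit of y_t = c + phi1*y_{t-1} + phi2*y_{t-2} + e_t."""
    Y = y[2:]
    X = np.column_stack([np.ones(len(Y)), y[1:-1], y[:-2]])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta, Y - X @ beta

# Hypothetical AR(2) series.
n, phi1, phi2 = 200, 0.5, -0.3
y = np.zeros(n)
for t in range(2, n):
    y[t] = phi1 * y[t - 1] + phi2 * y[t - 2] + rng.standard_normal()

beta_hat, resid = fit_ar2(y)
resid = resid - resid.mean()        # centred residuals (not inflated, see the abstract)

# Residual bootstrap: rebuild the series from resampled residuals with the first
# two observations held fixed, refit, and report the spread of the coefficients.
B = 500
boot = np.empty((B, 3))
for b in range(B):
    e = rng.choice(resid, size=n, replace=True)
    yb = np.zeros(n)
    yb[:2] = y[:2]
    for t in range(2, n):
        yb[t] = beta_hat[0] + beta_hat[1] * yb[t - 1] + beta_hat[2] * yb[t - 2] + e[t]
    boot[b] = fit_ar2(yb)[0]

print("bootstrap standard errors (c, phi1, phi2):", boot.std(axis=0, ddof=1))
```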

16.
A frequency domain bootstrap (FDB) is a common technique for applying Efron's independent and identically distributed resampling technique (Efron, 1979) to periodogram ordinates – especially normalized periodogram ordinates – by using spectral density estimates. The FDB method is applicable to several classes of statistics, such as estimators of the normalized spectral mean, the autocorrelation (but not the autocovariance), the normalized spectral density function, and Whittle parameters. While the FDB method has been studied extensively for short-range dependent processes, there is a dearth of research on its use with long-range dependent processes. We therefore propose an FDB methodology for ratio statistics under long-range dependence, using semi- and nonparametric spectral density estimates as a normalizing factor. It is shown that the FDB approximation allows valid distribution estimation for a broad class of stationary, long-range (or short-range) dependent linear processes, without any stringent assumptions on the distribution of the underlying process. The results of a large simulation study show that the FDB approximation using a semi- or nonparametric spectral density estimator is often robust across values of the long-memory parameter reflecting the magnitude of dependence. We apply the proposed procedure to two data examples.
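A compact FDB sketch for one ratio statistic, the lag-1 autocorrelation written as a ratio of spectral means: normalize the periodogram by a nonparametric spectral density estimate, resample the roughly i.i.d. ratios, and rebuild bootstrap periodograms. A short-memory AR(1) series and a simple moving-average spectral estimate are used for illustration; the paper's contribution concerns validity under long-range dependence with semi- or nonparametric spectral estimates.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 512
# Hypothetical short-memory AR(1) series.
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + rng.standard_normal()

# Periodogram at the positive Fourier frequencies 2*pi*j/n, j = 1..n//2.
freqs = 2 * np.pi * np.arange(1, n // 2 + 1) / n
dft = np.fft.fft(y - y.mean())[1:n // 2 + 1]
I = np.abs(dft) ** 2 / (2 * np.pi * n)

# Nonparametric spectral density estimate: moving-average smoothing of I.
m = 15
kernel = np.ones(2 * m + 1) / (2 * m + 1)
f_hat = np.convolve(np.pad(I, m, mode="edge"), kernel, mode="valid")

def acf1(periodogram):
    """Lag-1 autocorrelation as a ratio of spectral means."""
    return np.sum(np.cos(freqs) * periodogram) / np.sum(periodogram)

# FDB: resample the (approximately i.i.d.) normalized ordinates I / f_hat.
eps = I / f_hat
eps = eps / eps.mean()
B = 500
boot = np.array([acf1(f_hat * rng.choice(eps, size=len(eps), replace=True))
                 for _ in range(B)])
print(f"rho(1) estimate {acf1(I):.3f}, FDB standard error {boot.std(ddof=1):.3f}")
```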

17.
Brownian-Laplace motion is a Lévy process which has both continuous (Brownian) and discontinuous (Laplace motion) components. The increments of the process follow a generalized normal Laplace (GNL) distribution, which exhibits positive kurtosis and can be either symmetric or skewed. The degree of kurtosis in the increments increases as the time between observations decreases. This and other properties make Brownian-Laplace motion a good candidate model for the motion of logarithmic stock prices. An option pricing formula for European call options is derived and used to calculate the value of such an option numerically, using both nominal parameter values (to explore the dependence of the price on them) and values estimated from real stock price data.

18.
19.
Errors-in-variables (EIV) regression is often used to gauge the linear relationship between two variables that both suffer from measurement and other errors, for example when comparing two measurement platforms (e.g., RNA sequencing vs. microarray). Scientists are often at a loss as to which EIV regression model to use, for there are infinitely many choices. We provide sound guidelines toward viable solutions to this dilemma by introducing two general nonparametric EIV regression frameworks: the compound regression and the constrained regression. It is shown that these approaches are equivalent to each other and to the general parametric structural modeling approach. The advantages of these methods lie in their intuitive geometric representations, their distribution-free nature, and their ability to offer candidate solutions with various optimal properties when the ratio of the error variances is unknown. Each includes the classic nonparametric regression methods of ordinary least squares, geometric mean regression (GMR), and orthogonal regression as special cases. Under these general frameworks, one can readily uncover some surprising optimal properties of the GMR and truly comprehend the benefit of data normalization. Supplementary materials for this article are available online.
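The three classic special cases mentioned above differ only in how the slope is formed, which a few lines make concrete; the simulated data with errors in both variables are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 500
true_x = rng.normal(0, 1, n)
x = true_x + rng.normal(0, 0.5, n)       # both variables measured with error
y = 2.0 * true_x + rng.normal(0, 0.8, n)

# Ordinary least squares of y on x (ignores the error in x).
b_ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Geometric mean regression: slope = sign(corr) * sd(y) / sd(x).
r = np.corrcoef(x, y)[0, 1]
b_gmr = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)

# Orthogonal (total least squares) regression via the first principal axis.
centered = np.column_stack([x - x.mean(), y - y.mean()])
_, _, vt = np.linalg.svd(centered, full_matrices=False)
b_tls = vt[0, 1] / vt[0, 0]

print(f"OLS {b_ols:.3f}, GMR {b_gmr:.3f}, orthogonal {b_tls:.3f}")
```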

20.
The authors consider the problem of simulating the times of events such as extremes and barrier crossings in diffusion processes. They develop a rejection sampler, based on Shepp [Journal of Applied Probability 1979; 16:423–427], for simulating an extreme of a Brownian motion and use it in a general recursive scheme for more complex simulations, including simultaneous simulation of the minimum and maximum and application to more general diffusions. They price exotic options that are difficult to price analytically: a knock-out barrier option with a modified payoff function, a lookback option that includes discounting at the risk-free interest rate, and a chooser option where the choice is made at the time of a barrier crossing. The Canadian Journal of Statistics 38: 738–755; 2010 © 2010 Statistical Society of Canada
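A related but simpler construction, shown only for orientation: instead of the authors' Shepp-based rejection sampler for the time of an extreme, this sketch uses the standard exact draw of the running maximum of (scaled) Brownian motion given its endpoint to price a plain up-and-out barrier call under geometric Brownian motion. The contract parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)
S0, K, B, r, sigma, T = 100.0, 100.0, 120.0, 0.03, 0.2, 1.0   # hypothetical contract
n_paths = 200_000

# Terminal log-return under risk-neutral geometric Brownian motion.
xT = (r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * rng.standard_normal(n_paths)

# Exact draw of the running maximum of the log-return given its endpoint:
# conditionally on X_T = x, the maximum M can be sampled as
#   M = (x + sqrt(x^2 - 2 * sigma^2 * T * log(U))) / 2,  U ~ Uniform(0, 1),
# because the bridge of a drifted Brownian motion is a Brownian bridge.
U = rng.uniform(size=n_paths)
M = 0.5 * (xT + np.sqrt(xT**2 - 2.0 * sigma**2 * T * np.log(U)))

# Up-and-out call: the payoff is killed if the price ever reaches the barrier.
alive = M < np.log(B / S0)
payoff = np.where(alive, np.maximum(S0 * np.exp(xT) - K, 0.0), 0.0)
price = np.exp(-r * T) * payoff.mean()
print(f"up-and-out call price: {price:.3f}")
```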
