Similar Literature
20 similar records found.
1.
The authors consider the problem of simulating the times of events such as extremes and barrier crossings in diffusion processes. They develop a rejection sampler based on Shepp [Shepp, Journal of Applied Probability 1979; 16:423–427] for simulating an extreme of a Brownian motion and use it in a general recursive scheme for more complex simulations, including simultaneous simulation of the minimum and maximum and application to more general diffusions. They price exotic options that are difficult to price analytically: a knock‐out barrier option with a modified payoff function, a lookback option that includes discounting at the risk‐free interest rate, and a chooser option where the choice is made at the time of a barrier crossing. The Canadian Journal of Statistics 38: 738–755; 2010 © 2010 Statistical Society of Canada
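Since this abstract turns on the joint law of a Brownian motion and its extreme, a minimal NumPy sketch of the standard companion technique may help: drawing the running maximum jointly with the endpoint from the known conditional law P(M > m | W_T = w) = exp(−2m(m − w)/(σ²T)). This is a textbook identity, not the authors' Shepp-based rejection sampler (which also delivers the time of the extreme); the function name and the knock-out example below are illustrative only.

```python
import numpy as np

def brownian_terminal_and_max(T, sigma=1.0, n=100_000, rng=None):
    """Draw (W_T, max_{0<=t<=T} W_t) for a driftless Brownian motion.

    Uses the classical conditional law of the running maximum given the
    endpoint: P(M > m | W_T = w) = exp(-2 m (m - w) / (sigma^2 T)).
    This is a textbook identity, not the paper's Shepp-based rejection
    sampler (which also simulates the *time* of the extreme).
    """
    rng = np.random.default_rng(rng)
    w = sigma * np.sqrt(T) * rng.standard_normal(n)                  # terminal value
    u = rng.uniform(size=n)
    m = 0.5 * (w + np.sqrt(w**2 - 2.0 * sigma**2 * T * np.log(u)))   # inverse-CDF draw of the max
    return w, m

# Example: crude Monte Carlo value of an up-and-out claim on W (illustrative barrier/strike).
if __name__ == "__main__":
    wT, mT = brownian_terminal_and_max(T=1.0)
    barrier, strike = 1.5, 0.0
    payoff = np.where(mT < barrier, np.maximum(wT - strike, 0.0), 0.0)
    print("up-and-out payoff estimate:", payoff.mean())
```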

2.
This article investigates the possibility, raised by Perron and by Rappoport and Reichlin, that aggregate economic time series can be characterized as being stationary around broken trend lines. Unlike those authors, we treat the break date as unknown a priori. Asymptotic distributions are developed for recursive, rolling, and sequential tests for unit roots and/or changing coefficients in time series regressions. The recursive and rolling tests are based on changing subsamples of the data. The sequential statistics are computed using the full data set and a sequence of regressors indexed by a “break” date. When applied to data on real postwar output from seven Organization for Economic Cooperation and Development countries, these techniques fail to reject the unit-root hypothesis for five countries (including the United States) but suggest stationarity around a shifted trend for Japan.
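A hedged sketch of the recursive and rolling idea is given below, using the augmented Dickey–Fuller test from statsmodels. It only computes ADF t-statistics over growing and sliding subsamples; the paper's sequential statistics and the asymptotic critical values for minima over subsamples are not reproduced, and the subsample fractions are illustrative.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def recursive_and_rolling_adf(y, min_frac=0.25, window_frac=0.33):
    """Recursive and rolling augmented Dickey-Fuller statistics.

    Illustrative only: the most negative subsample statistic must be
    compared with the paper's asymptotic critical values, not with the
    standard Dickey-Fuller ones.  regression='ct' allows a linear trend.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    k0 = int(min_frac * n)
    w = int(window_frac * n)
    recursive = [adfuller(y[:k], regression="ct", autolag="AIC")[0]
                 for k in range(k0, n + 1)]
    rolling = [adfuller(y[s:s + w], regression="ct", autolag="AIC")[0]
               for s in range(0, n - w + 1)]
    return np.min(recursive), np.min(rolling)   # most negative = strongest evidence against a unit root

# Example with a simulated random walk (the unit root should not be rejected).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rw = np.cumsum(rng.standard_normal(200))
    print(recursive_and_rolling_adf(rw))
```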

3.
Zhou Jing et al., 《统计研究》 (Statistical Research), 2015, 32(4): 51-58
This paper presents the first nonlinear econometric estimates of the parameters of three nested forms of the CES production function in capital, energy and labour for 36 major Chinese industrial sectors over 1980-2011, and, taking the capital/labour ratio as the threshold variable, examines threshold effects in China's CES production functions. The findings are as follows. (1) For all three CES nestings, most industries reject the null hypothesis that the elasticity of substitution equals 1, which calls into question the common practice of adopting a Cobb-Douglas functional form when building models. (2) For all three nestings, only a very small number of industries fail to reject the null hypothesis that returns to scale equal 1, so assuming constant returns to scale when building models carries considerable risk. (3) Overall, most industries reject the null hypothesis of placing the three factors in a single nest, so an appropriate nesting of the CES function is necessary. (4) Taken together, the (KE)L nesting, in which capital and energy are aggregated first and then combined with labour, best matches the reality of Chinese industry. (5) Threshold effects are significant for most industries, and most threshold values occur after the 1990s, indicating that once the capital/labour ratio grows beyond a certain level, the substitution elasticities, technical-progress parameters and returns to scale change markedly.
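A sketch of how a (KE)L nested CES function might be fitted by nonlinear least squares is given below. The parameterization (an inner K-E aggregate combined with labour, returns-to-scale parameter ν, substitution elasticities σ = 1/(1+ρ)) is one common choice assumed here, not taken from the paper, and the threshold specification is not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_ces_KE_L(X, lnA, a, b, rho_in, rho_out, nu):
    """log-output of a two-level (KE)L nested CES function.

    Capital and energy are aggregated first, then combined with labour.
    nu is returns to scale and sigma = 1/(1+rho) the substitution
    elasticities.  The Cobb-Douglas limit rho -> 0 needs a separate
    branch and is not handled here.  One common parameterization, not
    necessarily the exact form estimated in the paper.
    """
    K, E, L = X
    ke = (a * K**(-rho_in) + (1 - a) * E**(-rho_in))**(-1.0 / rho_in)
    y = (b * ke**(-rho_out) + (1 - b) * L**(-rho_out))**(-nu / rho_out)
    return lnA + np.log(y)

# Hypothetical usage with industry time-series arrays K, E, L, Y:
# p0 = [0.0, 0.5, 0.5, 0.5, 0.5, 1.0]
# bounds = ([-np.inf, 0, 0, 0.01, 0.01, 0.1], [np.inf, 1, 1, 10, 10, 3])
# params, cov = curve_fit(log_ces_KE_L, (K, E, L), np.log(Y), p0=p0, bounds=bounds)
# sigma_KE = 1.0 / (1.0 + params[3])   # capital-energy substitution elasticity
```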

4.
We study the properties of truncated gamma distributions and derive simulation algorithms which dominate the standard algorithms for these distributions. For the right-truncated gamma distribution, an optimal accept-reject algorithm is based on the fact that its density can be expressed as an infinite mixture of beta distributions. For integer values of the parameters, the density of the left-truncated distribution can be rewritten as a mixture which can be easily generated. We give an optimal accept-reject algorithm for the other values of the parameter. We compare the efficiency of our algorithm with that of the previous method and show the improvement in terms of the minimum acceptance probability. The algorithm proposed here has an acceptance probability greater than e/4.
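For a baseline, the two simple samplers below (inverse-CDF for the right-truncated case, naive rejection for the left-truncated case) are the kind of standard algorithms that the paper's optimal accept-reject schemes are designed to dominate. Function names are illustrative.

```python
import numpy as np
from scipy import stats

def right_truncated_gamma(shape, trunc, size=1, rng=None):
    """Draw from a Gamma(shape, 1) restricted to (0, trunc].

    Simple inverse-CDF sketch for illustration; the paper's optimal
    accept-reject algorithm (based on an infinite beta mixture) is more
    efficient when the truncation is severe.
    """
    rng = np.random.default_rng(rng)
    u = rng.uniform(0.0, stats.gamma.cdf(trunc, shape), size=size)
    return stats.gamma.ppf(u, shape)

def left_truncated_gamma_rejection(shape, trunc, size=1, rng=None):
    """Naive rejection from the untruncated gamma, keeping draws above trunc.

    The acceptance probability equals the upper tail mass, so this degrades
    badly for large trunc; the paper's algorithms dominate it.
    """
    rng = np.random.default_rng(rng)
    out = np.empty(size)
    filled = 0
    while filled < size:
        x = rng.gamma(shape, size=size)
        keep = x[x > trunc]
        take = min(len(keep), size - filled)
        out[filled:filled + take] = keep[:take]
        filled += take
    return out
```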

5.
Single-arm one- or multi-stage study designs are commonly used in phase II oncology development when the primary outcome of interest is tumor response, a binary variable. Both two- and three-outcome designs are available. The Simon two-stage design is a well-known example of a two-outcome design. The objective of a two-outcome trial is either to reject the null hypothesis that the objective response rate (ORR) is less than or equal to a pre-specified low, uninteresting rate, or to reject the alternative hypothesis that the ORR is greater than or equal to some target rate. Three-outcome designs proposed by Sargent et al. allow a middle gray decision zone which rejects neither hypothesis, in order to reduce the required study size. We propose new two- and three-outcome designs with continual monitoring based on Bayesian posterior probability that meet frequentist specifications such as type I and II error rates. Futility and/or efficacy boundaries are based on confidence functions, which can require higher levels of evidence for early versus late stopping and have clear and intuitive interpretations. We search in a class of such procedures for optimal designs that minimize a given loss function, such as the average sample size under the null hypothesis. We present several examples, compare our design with other procedures in the literature, and show that our design has good operating characteristics.
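A hedged sketch of the core monitoring quantity follows: the Beta-Binomial posterior probability that the ORR exceeds the uninteresting rate, with a toy simulation of the frequentist type I error of a simple stop-early rule. The prior, look schedule and cut-offs are illustrative placeholders, not the paper's optimized confidence-function boundaries.

```python
import numpy as np
from scipy import stats

def posterior_prob_exceeds(responses, n, p0, a=1.0, b=1.0):
    """P(ORR > p0 | data) under a Beta(a, b) prior (conjugate update)."""
    return 1.0 - stats.beta.cdf(p0, a + responses, b + n - responses)

def simulate_type1(p0=0.20, n_max=40, looks=range(10, 41, 5),
                   eff_cut=0.95, fut_cut=0.05, n_sim=20_000, rng=None):
    """Monte Carlo type I error of a simple continual-monitoring rule at ORR = p0.

    eff_cut/fut_cut are hypothetical thresholds; the paper calibrates its
    boundaries so that pre-specified type I and II error rates are met and
    then searches for the design minimizing a loss such as expected sample
    size under the null.
    """
    rng = np.random.default_rng(rng)
    rejections = 0
    for _ in range(n_sim):
        outcomes = rng.uniform(size=n_max) < p0        # responses under the null rate
        for n in looks:
            pp = posterior_prob_exceeds(outcomes[:n].sum(), n, p0)
            if pp > eff_cut:                           # declare efficacy -> type I error
                rejections += 1
                break
            if pp < fut_cut:                           # stop early for futility
                break
    return rejections / n_sim

if __name__ == "__main__":
    print("simulated type I error:", simulate_type1())
```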

6.
The autoregressive conditional intensity model proposed by Russell (1998) is a promising option for fitting multivariate high frequency irregularly spaced data. The authors acknowledge the validity of this model by showing the independence of its generalized residuals, a crucial assumption of the model formulation not readily recognized by researchers. The authors derive the large‐sample distribution of the autocorrelations of the generalized residual series and use it to construct a goodness‐of‐fit test for the model. Empirical results compare the performance of their test with other off‐the‐shelf tests such as the Ljung–Box test. They illustrate the use of their test with transaction records of the HSBC stock.
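The sketch below only computes the residual autocorrelations and the off-the-shelf Ljung–Box statistic that the paper benchmarks against; the paper's own test replaces the Ljung–Box reference distribution with the correct large-sample law of these autocorrelations under the ACI model, which is model-specific and not reproduced here. The input `generalized_residuals` is a hypothetical array produced by a separate model fit.

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

def residual_acf_check(generalized_residuals, lags=15):
    """Sample autocorrelations of the generalized residuals plus the
    standard Ljung-Box statistics for those lags (benchmark test only)."""
    e = np.asarray(generalized_residuals, dtype=float)
    e = e - e.mean()
    acf = np.array([np.dot(e[:-k], e[k:]) / np.dot(e, e) for k in range(1, lags + 1)])
    lb = acorr_ljungbox(generalized_residuals, lags=lags)   # DataFrame of lb_stat, lb_pvalue
    return acf, lb

# Hypothetical usage: residual_acf_check(fitted_aci_model_residuals)
```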

7.
In this paper we consider the determination of Bayesian life test acceptance sampling plans for finite lots when the underlying lifetime distribution is the two-parameter exponential. It is assumed that the prior distribution is the natural conjugate prior, that the costs associated with the actions accept and reject are known functions of the lifetimes of the items, and that the cost of testing a sample is proportional to the duration of the test. Type 2 censored sampling is considered, where a sample of size n is observed only until the rth failure occurs and the decision of whether to accept or reject the remainder of the lot is made on the basis of the r observed lifetimes. Obtaining the optimal sample size and the optimal censoring number are difficult problems when the location parameter of the distribution is restricted to be non-negative. The case in which the positivity restriction on the location parameter is removed is also investigated. An example is provided for illustration.

8.
Semiparametric methods provide estimates of finite parameter vectors without requiring that the complete data generation process be assumed to lie in a finite-dimensional family. By avoiding bias from incorrect specification, such estimators gain robustness, although usually at the cost of decreased precision. The most familiar semiparametric method in econometrics is ordinary least squares, which estimates the parameters of a linear regression model without requiring that the distribution of the disturbances be in a finite-parameter family. The recent literature in econometric theory has extended semiparametric methods to a variety of non-linear models, including models appropriate for the analysis of censored duration data. Horowitz and Newman make perhaps the first empirical application of these methods, to data on employment duration. Their analysis provides insights into the practical problems of implementing these methods, and limited information on performance. Their data set, containing 226 male controls from the Denver income maintenance experiment of 1971-74, does not show any significant covariates (except race), even when a fully parametric model is assumed. Consequently, the authors are unable to reject the fully parametric model in a test against the alternative semiparametric estimators. This provides some negative, but tenuous, evidence that in practical applications the reduction in bias from using semiparametric estimators is insufficient to offset the loss in precision. Larger samples, and data sets with strongly significant covariates, will be needed to resolve this question.

9.
Pricing of American options in discrete time is considered, where the option is allowed to be based on several underlying stocks. It is assumed that the price processes of the underlying stocks are given by Markov processes. We use the Monte Carlo approach to generate artificial sample paths of these price processes, and then use nonparametric regression estimates to estimate from these data the so-called continuation values, defined as mean values of the American option for given values of the underlying stocks at time t subject to the constraint that the option is not exercised at time t. As nonparametric regression estimates we use least squares estimates with complexity penalties, which include as special cases least squares spline estimates, least squares neural networks, smoothing splines and orthogonal series estimates. General results concerning the rate of convergence are presented and applied to derive results for the special cases mentioned above. Furthermore, the pricing of American options is illustrated with simulated data.
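A minimal Longstaff–Schwartz-style sketch of the regression-based pricing idea follows, using a polynomial least-squares basis on a single stock; the paper studies complexity-penalized least squares, spline and neural-network estimates of the same continuation values, with convergence-rate guarantees. Parameter values are illustrative.

```python
import numpy as np

def american_put_lsmc(S0=36.0, K=40.0, r=0.06, sigma=0.2, T=1.0,
                      n_steps=50, n_paths=100_000, degree=3, rng=None):
    """Least-squares Monte Carlo price of an American put on one stock.

    At each exercise date the continuation value is estimated by a
    polynomial regression on the in-the-money paths; exercise occurs
    when the immediate payoff exceeds the estimated continuation value.
    """
    rng = np.random.default_rng(rng)
    dt = T / n_steps
    disc = np.exp(-r * dt)
    # simulate geometric Brownian motion paths under the pricing measure
    z = rng.standard_normal((n_paths, n_steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
    cash = np.maximum(K - S[:, -1], 0.0)            # payoff if held to expiry
    for t in range(n_steps - 2, -1, -1):
        cash *= disc                                 # discount back one step
        itm = K - S[:, t] > 0
        if itm.sum() > degree + 1:
            X = np.vander(S[itm, t], degree + 1)     # polynomial basis in the stock price
            beta, *_ = np.linalg.lstsq(X, cash[itm], rcond=None)
            continuation = X @ beta
            exercise = np.maximum(K - S[itm, t], 0.0)
            cash[itm] = np.where(exercise > continuation, exercise, cash[itm])
    return disc * cash.mean()

if __name__ == "__main__":
    print("American put (LSMC) estimate:", round(american_put_lsmc(), 3))
```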

10.
The breakdown point of an estimator is the smallest fraction of contamination that can force the value of the estimator beyond the boundary of the parameter space. It is well known that the highest possible breakdown point, under equivariance restrictions, is 50% of the sample. However, this upper bound is not always attainable. We give an example of an estimation problem in which the highest possible attainable breakdown point is much less than 50% of the sample. For hypothesis testing, we discuss the resistance of a test and propose new definitions of resistance. The maximum resistance to rejection (acceptance) is the smallest fraction of contamination necessary to force a test to reject (fail to reject) regardless of the original sample. We derive the maximum resistances of the t-test and sign test in the one-sample problem and of the t-test and Mood test in the two-sample problem. We briefly discuss another measure known as the expected resistance.

11.
ABSTRACT

When the editors of Basic and Applied Social Psychology effectively banned the use of null hypothesis significance testing (NHST) from articles published in their journal, it set off a firestorm of discussions both supporting the decision and defending the utility of NHST in scientific research. At the heart of NHST is the p-value, which is the probability of obtaining an effect equal to or more extreme than the one observed in the sample data, given the null hypothesis and other model assumptions. Although this is conceptually different from the probability of the null hypothesis being true given the sample, p-values nonetheless can provide evidential information toward making an inference about a parameter. Applying a 10,000-case simulation described in this article, the authors found that p-values' inferential signals to either reject or not reject a null hypothesis about the mean (α = 0.05) were consistent with the parameter's true location in the sampled-from population for almost 70% of the cases. Success increases if a hybrid decision criterion, minimum effect size plus p-value (MESP), is used. Here, rejecting the null also requires the difference of the observed statistic from the exact null to be meaningfully large, or practically significant, in the researcher's judgment and experience. The simulation compares the performance of several methods, from p-value- and/or effect-size-based to confidence-interval-based, under various conditions of the true location of the mean, test power, and the comparative sizes of the meaningful distance and the population variability. For any inference procedure that outputs a binary indicator, like flagging whether a p-value is significant, the output of one single experiment is not sufficient evidence for a definitive conclusion. Yet, if a tool like MESP generates a relatively reliable signal and is used knowledgeably as part of a research process, it can provide useful information.
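A small simulation in the spirit of the article's comparison is sketched below: it scores how often a plain p-value rule and the MESP hybrid rule (reject only if p < α and the observed effect is at least a meaningful distance δ from the null) agree with the true location of the mean. The values of δ, σ, n and the range of true means are illustrative choices, not the article's exact settings.

```python
import numpy as np
from scipy import stats

def mesp_simulation(mu0=0.0, delta=0.5, sigma=1.0, n=30,
                    alpha=0.05, n_sim=10_000, rng=None):
    """Proportion of cases in which each decision rule is 'consistent'
    with the truth: reject when |true mean - mu0| >= delta, do not
    reject otherwise."""
    rng = np.random.default_rng(rng)
    true_mu = rng.uniform(mu0 - 2.0, mu0 + 2.0, size=n_sim)   # random true locations
    ok_p, ok_mesp = 0, 0
    for mu in true_mu:
        x = rng.normal(mu, sigma, size=n)
        t, p = stats.ttest_1samp(x, mu0)
        far = abs(mu - mu0) >= delta                          # where the truth actually lies
        reject_p = p < alpha
        reject_mesp = reject_p and abs(x.mean() - mu0) >= delta
        ok_p += (reject_p == far)
        ok_mesp += (reject_mesp == far)
    return ok_p / n_sim, ok_mesp / n_sim

if __name__ == "__main__":
    print("consistency (p-value only, MESP):", mesp_simulation())
```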

12.
In this article, the valuation of power options is investigated when the dynamics of the stock price are governed by a generalized jump-diffusion Markov-modulated model. The systematic risk is characterized by the diffusion part, and the nonsystematic risk by the pure jump process. The jumps are described by a generalized renewal process with generalized jump amplitude. By introducing a NASDAQ index model, the two risk premia are identified separately. A risk-neutral measure is identified by employing the Esscher transform with two families of parameters, which represent the two components of the risk premium. The nonsystematic risk premium is then taken into account, and on this basis the price of the power option is studied under the generalized jump-diffusion Markov-modulated model. In the case of a special renewal process with log double-exponential jump amplitude, exact expressions for the Esscher parameters and the pricing formula are provided. Numerical simulations illustrate how the price of nonsystematic risk and the power index of the option affect the option price.
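To fix ideas about the payoff being priced, here is a deliberately simplified Monte Carlo sketch of a power call, max(S_T^power − K, 0), under plain geometric Brownian motion; the paper prices the same payoff under a Markov-modulated jump-diffusion with the risk-neutral measure identified via an Esscher transform, which is not reproduced here. All parameter values are illustrative.

```python
import numpy as np

def power_call_mc(S0=100.0, K=100.0**2, power=2.0, r=0.03, sigma=0.25,
                  T=1.0, n_paths=200_000, rng=None):
    """Monte Carlo price of a power call under risk-neutral GBM.

    Only the payoff structure matches the paper; the dynamics are a
    simplification (no jumps, no regime switching)."""
    rng = np.random.default_rng(rng)
    z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = np.maximum(ST**power - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

if __name__ == "__main__":
    print("power call estimate:", power_call_mc())
```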

13.
This article considers a simple test for the correct specification of linear spatial autoregressive models, assuming that the spatial weight matrix Wn is correctly chosen. We derive the limiting distributions of the test under the null hypothesis of correct specification and under a sequence of local alternatives. We show that the test is asymptotically free of nuisance parameters under the null and prove its consistency. To improve the finite-sample performance of the test, we also propose a residual-based wild bootstrap and justify its asymptotic validity. We conduct a small set of Monte Carlo simulations to investigate the finite-sample properties of the tests. Finally, we apply the test to two empirical datasets, on votes cast and on economic growth rates; we reject the linear spatial autoregressive model in the votes-cast example but fail to reject it in the economic-growth example. Supplementary materials for this article are available online.
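A generic sketch of a residual-based wild bootstrap is given below. The callables `fit_fn` (returning fitted values and residuals under the null model) and `stat_fn` (returning the test statistic) are hypothetical placeholders; the paper's specific SAR statistic and its asymptotic justification are not reproduced.

```python
import numpy as np

def wild_bootstrap_pvalue(stat_fn, y, X, fit_fn, n_boot=499, rng=None):
    """Residual-based wild bootstrap p-value for a specification test.

    Bootstrap samples are built as fitted + v * residual with Rademacher
    multipliers v, which preserves heteroscedasticity in the errors.
    """
    rng = np.random.default_rng(rng)
    fitted, resid = fit_fn(y, X)                     # null-model fit supplied by the user
    t_obs = stat_fn(y, X)
    count = 0
    for _ in range(n_boot):
        v = rng.choice([-1.0, 1.0], size=len(y))     # Rademacher weights
        y_star = fitted + v * resid
        count += stat_fn(y_star, X) >= t_obs
    return (1 + count) / (1 + n_boot)
```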

14.
Historical linguistics needs procedures to evaluate the similarity between languages through the comparison of specific word lists drawn from the whole vocabulary. The main issue is to evaluate a fair threshold for the number of similar items beyond which it is sensible to reject the hypothesis of chance similarity. After a short review of papers dealing with that problem, in this paper an extension of those methods is proposed which exploits available data in a more efficient way. In particular, the exact distribution of the new test statistics is calculated and the power of the new procedure is compared with the power of the existing method.
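The simplest version of the threshold question can be illustrated with a binomial tail calculation: under a chance-only null in which each word pair is "similar" with some probability, find the smallest count whose upper tail probability falls below the significance level. The chance probability and list length below are illustrative, and the paper's own statistic uses the data more efficiently than this.

```python
from scipy import stats

def similarity_threshold(n_words=200, p_chance=0.02, alpha=0.01):
    """Smallest number of similar items needed to reject chance similarity,
    treating the count as Binomial(n_words, p_chance) under the null."""
    for k in range(n_words + 1):
        if stats.binom.sf(k - 1, n_words, p_chance) <= alpha:   # P(X >= k) <= alpha
            return k
    return None

if __name__ == "__main__":
    print("reject chance similarity if matches >=", similarity_threshold())
```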

15.
Various models have previously been proposed for data comprising m repeated measurements on each of N subjects. Log-likelihood ratio tests may be used to help choose between possible models, but these tests are based on distributions which in theory apply only asymptotically. With small N, the log-likelihood ratio approximation is unreliable, tending to reject the simpler of two models more often than it should. This is shown by reference to three datasets and analogous simulated data. For two of the three datasets, subjects fall into two groups. Log-likelihood ratio tests confirm that for each of these two datasets group means over time differ. Tests suggest that group covariance structures also differ.

16.
This article proposes to use a standardized version of the normal-Laplace mixture distribution for the modeling of tail-fatness in an asset return distribution and for the fitting of volatility smiles implied by option prices. Despite the fact that only two free parameters are used, the proposed distribution allows arbitrarily high kurtosis and uses one shape parameter to adjust the density function within three standard deviations for any specified kurtosis. For an asset price model based on this distribution, closed-form formulas for European option prices are derived, and subsequently the volatility smiles can be easily obtained. A regression analysis is conducted to show that the kurtosis, which is commonly used as an index of tail-fatness, is unable to explain the smiles satisfactorily under the proposed model, because the additional shape parameter also significantly accounts for the deviations revealed in smiles. The effectiveness of the proposed parsimonious model is demonstrated in practical examples where the model is fitted to the volatility smiles implied by foreign exchange options traded on the NASDAQ market.

17.
SUMMARY This paper tests the hypothesis of difference stationarity of macro-economic time series against the alternative of trend stationarity, with and without allowing for possible structural breaks. The methodologies used are that of Dickey and Fuller, popularized by Nelson and Plosser, and the dummy-variable approach popularized by Perron, including the Zivot and Andrews extension of Perron's tests. We have chosen 12 macro-economic variables in the Indian economy during the period 1900-1988 for this study. A study of this nature has not previously been undertaken for the Indian economy. The conventional Dickey-Fuller methodology without allowing for structural breaks cannot reject the unit root hypothesis (URH) for any series. Allowing for exogenous breaks in level and rate of growth in the years 1914, 1939 and 1951, Perron's tests reject the URH for three series after 1951, i.e. the year of the introduction of economic planning in India. The Zivot and Andrews tests for endogenous breaks confirm the Perron tests and lead to the rejection of the URH for three more series.
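A sketch of the test battery for a single series is given below, assuming the `adfuller` and `zivot_andrews` routines available in recent versions of statsmodels.tsa.stattools; Perron's tests with exogenous break dates (1914, 1939, 1951) are not reproduced.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, zivot_andrews

def unit_root_battery(y):
    """ADF test (no break) and Zivot-Andrews test (endogenous break in
    intercept and trend) for one macro-economic series.

    Mirrors the paper's strategy of testing the unit-root hypothesis
    both without and with a single endogenously chosen structural break.
    """
    y = np.asarray(y, dtype=float)
    adf_stat, adf_p, *_ = adfuller(y, regression="ct", autolag="AIC")
    za_stat, za_p, _, _, break_idx = zivot_andrews(y, regression="ct", autolag="AIC")
    return {"ADF": (adf_stat, adf_p), "Zivot-Andrews": (za_stat, za_p, break_idx)}
```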

18.
The theoretical price of a financial option is given by the expectation of its discounted expiry-time payoff. The computation of this expectation depends on the density of the value of the underlying instrument at expiry. This density depends both on the parametric model assumed for the behaviour of the underlying and on the values of parameters within the model, such as volatility. However, neither the model nor the parameter values are known. Common practice when pricing options is to assume a specific model, such as geometric Brownian motion, and to use point estimates of the model parameters, thereby precisely defining a density function. We explicitly acknowledge the uncertainty of model and parameters by constructing the predictive density of the underlying as an average of model predictive densities, weighted by each model's posterior probability. A model's predictive density is constructed by integrating its transition density function against the posterior distribution of its parameters. This is an extension of Bayesian model averaging. Sampling importance-resampling and Monte Carlo algorithms implement the computation. The advantage of this method is that rather than falsely assuming the model and parameter values are known, inherent ignorance is acknowledged and dealt with in a mathematically logical manner, which utilises all information from past and current observations to generate and update option prices. Moreover, point estimates for parameters are unnecessary. We use this method to price a European call option on a share index.
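A sketch of the idea is given below: instead of plugging in a point estimate of volatility, average the option price over a posterior sample of the parameter, and with several candidate models weight each model's average by its posterior probability. The single-model example uses Black–Scholes pricing for concreteness; the posterior draws and model weights are hypothetical inputs produced by a separate Bayesian fit, and the paper's sampling importance-resampling machinery is not reproduced.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes call price, used for the per-draw valuation."""
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def predictive_call_price(S0, K, r, T, sigma_posterior, model_weights=None):
    """Option price under parameter (and, optionally, model) uncertainty.

    sigma_posterior is either one array of posterior volatility draws or a
    list of such arrays, one per candidate model; model_weights are the
    posterior model probabilities in the latter case.
    """
    if model_weights is None:
        draws = np.asarray(sigma_posterior)
        return float(np.mean([bs_call(S0, K, r, s, T) for s in draws]))
    prices = [np.mean([bs_call(S0, K, r, s, T) for s in draws])
              for draws in sigma_posterior]
    return float(np.dot(model_weights, prices))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sigma_draws = rng.normal(0.2, 0.02, size=2_000)   # toy posterior for sigma
    print(predictive_call_price(100.0, 100.0, 0.03, 0.5, sigma_draws))
```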

19.
The critical values for various tests based on U-statistics to detect a possible change are obtained through permutations of the observations. We obtain the same approximations for the permuted U-statistics under the no-change null hypothesis as well as under the alternative of exactly one change. The results are used to show that the simulated critical values are asymptotically valid under the null hypothesis and that the tests reject with probability tending to one under the alternative.
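The permutation scheme can be illustrated with a simple CUSUM-type statistic for a change in mean (a special case of the U-statistic framework); the statistic and the minimum segment length below are illustrative choices, not the paper's.

```python
import numpy as np

def change_point_stat(x):
    """Max over k of the scaled difference between the means of x[:k] and x[k:]."""
    n = len(x)
    values = []
    for k in range(5, n - 5):                               # avoid very short segments
        d = x[:k].mean() - x[k:].mean()
        values.append(abs(d) * np.sqrt(k * (n - k) / n))
    return max(values)

def permutation_test(x, n_perm=999, rng=None):
    """Permutation p-value for 'no change' against 'exactly one change'.

    Critical values come from recomputing the statistic on random
    permutations of the observations, as in the paper's scheme.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    t_obs = change_point_stat(x)
    count = sum(change_point_stat(rng.permutation(x)) >= t_obs
                for _ in range(n_perm))
    return (1 + count) / (1 + n_perm)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    x = np.concatenate([rng.normal(0, 1, 60), rng.normal(1, 1, 60)])  # one change in mean
    print("permutation p-value:", permutation_test(x))
```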

20.
ABSTRACT

This study develops and implements methods for determining whether introducing new securities or relaxing investment constraints improves the investment opportunity set for all risk averse investors. We develop a test procedure for “stochastic spanning” for two nested portfolio sets based on subsampling and linear programming. The test is statistically consistent and asymptotically exact for a class of weakly dependent processes. A Monte Carlo simulation experiment shows good statistical size and power properties in finite samples of realistic dimensions. In an application to standard datasets of historical stock market returns, we accept market portfolio efficiency but reject two-fund separation, which suggests an important role for higher-order moment risk in portfolio theory and asset pricing. Supplementary materials for this article are available online.

