20 similar documents found.
1.
This article describes a maximum likelihood method for estimating the parameters of the standard square-root stochastic volatility model and a variant of the model that includes jumps in equity prices. The model is fitted to data on the S&P 500 Index and the prices of vanilla options written on the index, for the period 1990 to 2011. The method is able to estimate both the parameters of the physical measure (associated with the index) and the parameters of the risk-neutral measure (associated with the options), including the volatility and jump risk premia. The estimation is implemented using a particle filter whose efficacy is demonstrated under simulation. The computational load of this estimation method, which previously has been prohibitive, is managed by the effective use of parallel computing using graphics processing units (GPUs). The empirical results indicate that the parameters of the models are reliably estimated and consistent with values reported in previous work. In particular, both the volatility risk premium and the jump risk premium are found to be significant.
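As an illustrative sketch only (not the authors' estimation code), the square-root stochastic volatility dynamics can be simulated with a simple Euler scheme; all parameter values below are arbitrary placeholders.

```python
import numpy as np

# Euler discretization of the square-root (Heston-type) stochastic volatility model:
#   dS_t = mu * S_t dt + sqrt(v_t) * S_t dW1_t
#   dv_t = kappa * (theta - v_t) dt + sigma_v * sqrt(v_t) dW2_t,  corr(dW1, dW2) = rho
# Parameter values are illustrative placeholders, not estimates from the paper.
rng = np.random.default_rng(0)
mu, kappa, theta, sigma_v, rho = 0.05, 3.0, 0.04, 0.3, -0.7
dt, n_steps = 1.0 / 252, 252
S, v = np.empty(n_steps + 1), np.empty(n_steps + 1)
S[0], v[0] = 100.0, theta

for t in range(n_steps):
    z1 = rng.standard_normal()
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal()
    v_pos = max(v[t], 0.0)                      # full truncation keeps the variance non-negative
    S[t + 1] = S[t] * np.exp((mu - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
    v[t + 1] = v[t] + kappa * (theta - v_pos) * dt + sigma_v * np.sqrt(v_pos * dt) * z2

print(f"terminal price {S[-1]:.2f}, terminal variance {v[-1]:.4f}")
```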
2.
In estimating the population median, it is common to encounter estimators which are linear combinations of a small number of central observations. Sample medians, sample quasi medians, trimmed means, jackknifed (and delete-d jackknifed) medians and jackknifed quasi medians are all familiar examples. The objective of this paper is to show that within this class the quasi medians turn out to have the best asymptotic mean squared error.
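A minimal simulation sketch of the comparison (illustrative only): for odd n = 2m + 1, the quasi-median of order k averages the (m+1−k)-th and (m+1+k)-th order statistics, and the code below compares its mean squared error with that of the sample median under a standard normal population.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 41, 20            # n = 2m + 1 observations per sample
k = 2                    # order of the quasi-median (k = 0 gives the sample median)
n_rep = 20_000
true_median = 0.0        # population median of the standard normal

medians, quasi = np.empty(n_rep), np.empty(n_rep)
for r in range(n_rep):
    x = np.sort(rng.standard_normal(n))
    medians[r] = x[m]                               # sample median = (m+1)-th order statistic
    quasi[r] = 0.5 * (x[m - k] + x[m + k])          # quasi-median of order k

print("MSE sample median:", np.mean((medians - true_median) ** 2))
print("MSE quasi-median :", np.mean((quasi - true_median) ** 2))
```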
3.
In this article, we present the problem of selecting a good stochastic system with high probability and minimum total simulation cost when the number of alternatives is very large. We propose a sequential approach that starts with the Ordinal Optimization procedure to select a subset that overlaps with the set of the actual best m% systems with high probability. Then we use Optimal Computing Budget Allocation to allocate the available computing budget in a way that maximizes the Probability of Correct Selection. This is followed by a Subset Selection procedure to obtain a smaller subset that contains the best system from the subset selected earlier. Finally, the Indifference-Zone procedure is used to select the best system among the survivors of the previous stage. Numerical tests involving all of these procedures show that a good stochastic system can be selected with high probability and with a minimum number of simulation samples when the number of alternatives is large. The results also show that the proposed approach is able to identify a good system in a very short simulation time.
4.
The case fatality rate is an important indicator of the severity of a disease, and unbiased and accurate estimates of it during an outbreak are important in the study of epidemic diseases, including severe acute respiratory syndrome (SARS). In this paper, estimation methods are developed using a constant cure-death hazard ratio. A semiparametric model is presented, in which the cure-death hazard ratio is a parameter of interest, and a profile likelihood-based technique is proposed for estimating the case fatality rate. An extensive simulation was carried out to investigate the performance of this technique for small and medium sample sizes, using both summary and individual data. The results show that the performance depends on the model validity but is not heavily dependent on the sample size. The method was applied to summary SARS data obtained from Hong Kong and Singapore.
5.
Homer F. Walker, Communications in Statistics – Theory and Methods, 2013, 42(8): 837-849
We address the problem of estimating the proportions of two statistical populations in a given mixture on the basis of an unlabeled sample of n-dimensional observations on the mixture. Assuming that the expected values of observations on the two populations are known, we show that almost any linear map from R^n to R^1 yields an unbiased consistent estimate of the proportion of one population in a very easy way. We then find that linear map for which the resulting proportion estimate has minimum variance among all estimates so obtained. After deriving a simple expression for the minimum-variance estimate, we discuss practical aspects of obtaining this and related estimates.
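A small sketch of the idea under stated assumptions (known component means mu1 and mu2; the choice of linear map below is convenient, not necessarily the minimum-variance one derived in the paper): for any vector a with a'(mu1 − mu2) ≠ 0, E[a'X] = p a'mu1 + (1 − p) a'mu2, so solving for p gives an unbiased estimator.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, p_true = 3, 5_000, 0.3
mu1, mu2 = np.array([1.0, 2.0, 0.5]), np.array([-1.0, 0.0, 1.5])   # assumed known component means

# Simulate an unlabeled sample from the two-component mixture.
labels = rng.random(n) < p_true
X = np.where(labels[:, None], mu1, mu2) + rng.standard_normal((n, d))

# Any linear map a with a'(mu1 - mu2) != 0 yields an unbiased proportion estimate:
#   p_hat = (a' xbar - a' mu2) / (a' (mu1 - mu2))
a = mu1 - mu2                      # one convenient choice of linear map
xbar = X.mean(axis=0)
p_hat = (a @ xbar - a @ mu2) / (a @ (mu1 - mu2))
print(f"true p = {p_true}, estimated p = {p_hat:.3f}")
```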
6.
Markus Abt, Scandinavian Journal of Statistics, 1999, 26(4): 563-578
Given one or more realizations from the finite dimensional marginal distribution of a stochastic process, we consider the problem of estimating the squared prediction error when predicting the process at unobserved locations. An approximation taking into account the additional variability due to estimating parameters involved in the correlation structure was developed by Kackar & Harville (1984) and was revisited by Harville & Jeske (1992) as well as Zimmerman & Cressie (1992). The present paper discusses an extension of these methods. The approaches will be compared via an extensive simulation study for models with and without random error term. Effects due to the designs used for prediction and for model fitting as well as due to the strength of the correlation between neighbouring observations of the stochastic process are investigated. The results show that considering the additional variability in the predictor due to estimating the covariance structure is of great importance and should not be neglected in practical applications.
7.
8.
Statistics & Information Forum, 2019, (8): 3-11
Panel ordered response models have been widely applied to the study of subjective evaluation problems, but several estimation techniques exist for this model, differing in how they handle the ordinal dependent variable. It is therefore necessary to design Monte Carlo simulation experiments that jointly examine how these methods perform in both parameter estimation and hypothesis testing, so that their relative merits can be judged; the results also provide important guidance for choosing a method in applied research. The simulation results show that the OMD method suffers from serious estimation bias and size distortion of hypothesis tests in small samples; with very large samples it can produce valid estimates and test conclusions, but the efficiency gain is modest. The DvS, BUC and CML methods all yield robust and consistent estimates under every sample condition considered, but for tests of parameter restrictions the CML method can produce clearly erroneous conclusions. Based on these results, the recommendation is to prefer the DvS or BUC method for estimating panel ordered response models and for the subsequent hypothesis testing.
9.
Edward W. Frees, Journal of Business & Economic Statistics, 2013, 31(1): 79-86
The cost of certain types of warranties is closely related to functions that arise in renewal theory. The problem of estimating the warranty cost for a random sample of size n can be reduced to estimating these functions. In an earlier paper, I gave several methods of estimating the expected number of renewals, called the renewal function. This answered an important accounting question of how to arrive at a good approximation of the expected warranty cost. In this article, estimation of the renewal function is reviewed and several extensions are given. In particular, a resampling estimator of the renewal function is introduced. Further, I argue that managers may wish to examine other summary measures of the warranty cost, in particular the variability. To estimate this variability, I introduce estimators, both parametric and nonparametric, of the variance associated with the number of renewals. Several numerical examples are provided.
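As a hedged illustration of the quantities involved (not the paper's estimators), the renewal function M(t) = E[N(t)] and the variance of the renewal count N(t) can be approximated by direct Monte Carlo simulation of the renewal process; the lifetime distribution and horizon below are arbitrary.

```python
import numpy as np

def simulate_renewal_count(t_horizon, draw_lifetime, rng):
    """Count renewals in [0, t_horizon] for one simulated item."""
    count, total = 0, draw_lifetime(rng)
    while total <= t_horizon:
        count += 1
        total += draw_lifetime(rng)
    return count

rng = np.random.default_rng(3)
draw = lambda r: r.weibull(1.5) * 2.0      # illustrative lifetime distribution (Weibull, scale 2)
t_w = 3.0                                  # warranty horizon
counts = np.array([simulate_renewal_count(t_w, draw, rng) for _ in range(20_000)])

print("estimated renewal function M(t):", counts.mean())   # expected number of renewals by t_w
print("estimated Var[N(t)]           :", counts.var(ddof=1))
```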
10.
Estimation of the population mean based on right censored observations is considered. The naive sample mean will be an inconsistent and asymptotically biased estimator in this case. An estimate suggested in textbooks is to compute the area under a Kaplan–Meier curve. In this note, two more seemingly different approaches are introduced. Students’ reaction to these approaches was very positive in an introductory survival analysis course the author recently taught.
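A self-contained sketch of the textbook approach mentioned above (assuming distinct event times and made-up data): compute the Kaplan–Meier survival curve from right-censored observations and take the area under it as the mean estimate, restricted to the largest observed time.

```python
import numpy as np

def km_mean(times, events):
    """Area under the Kaplan-Meier curve (restricted mean up to the largest observed time)."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = len(times)
    surv, mean, prev_t = 1.0, 0.0, 0.0
    for i, (t, d) in enumerate(zip(times, events)):
        mean += surv * (t - prev_t)          # accumulate area under the survival step function
        prev_t = t
        if d:                                # event (not censored): survival drops
            at_risk = n - i
            surv *= 1.0 - 1.0 / at_risk
    return mean

# Illustrative right-censored data: event indicator 1 = observed event, 0 = censored.
t = np.array([2.0, 3.0, 4.0, 5.0, 8.0, 10.0])
d = np.array([1,   0,   1,   1,   0,   1])
print("Kaplan-Meier based mean estimate:", km_mean(t, d))
```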
11.
12.
The ruin probability of insurance companies and its analysis by stochastic simulation
In the existing domestic and international literature, most authors solve for the ruin probability via the adjustment coefficient or by approximation. This paper instead estimates the ruin probability by stochastic simulation, which not only avoids the trouble of computing the adjustment coefficient but also serves essentially the same purpose as solving for the ruin probability through the adjustment coefficient. The simulation results carry a certain cautionary and guiding significance for the development of China's insurance industry.
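A minimal Monte Carlo sketch of the idea described above, using the classical Cramér–Lundberg surplus process with compound Poisson claims; all parameter values are illustrative and not taken from the paper.

```python
import numpy as np

def ruin_probability(u0, premium_rate, claim_rate, mean_claim, horizon, n_paths, rng):
    """Monte Carlo estimate of the finite-horizon ruin probability for the classical
    surplus process U(t) = u0 + c*t - (sum of exponential claims up to t).
    Ruin can only occur at claim instants, so the surplus is checked there."""
    ruined = 0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / claim_rate)      # waiting time to the next claim
            if t > horizon:
                break
            claims += rng.exponential(mean_claim)        # claim size
            if u0 + premium_rate * t - claims < 0:       # surplus goes negative -> ruin
                ruined += 1
                break
    return ruined / n_paths

rng = np.random.default_rng(4)
p = ruin_probability(u0=10.0, premium_rate=1.2, claim_rate=1.0, mean_claim=1.0,
                     horizon=100.0, n_paths=20_000, rng=rng)
print("estimated finite-horizon ruin probability:", p)
```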
13.
14.
Quick, efficient estimates are proposed for the standard deviation of a circular bivariate population. Two procedures based on extreme observations are considered. The first employs the 100p% largest observations, while the second utilizes the extreme observations in k radial sectors.
15.
Tiejun Tong, Zeny Feng, Julia S. Hilton, Hongyu Zhao, Journal of Applied Statistics, 2013, 40(9): 1949-1964
Estimating the proportion of true null hypotheses, π0, has attracted much attention in the recent statistical literature. Besides its apparent relevance for a set of specific scientific hypotheses, an accurate estimate of this parameter is key for many multiple testing procedures. Most existing methods for estimating π0 in the literature are motivated by the independence assumption on the test statistics, which is often not true in reality. Simulations indicate that, in the presence of dependence among test statistics, most existing estimators can perform poorly, mainly due to increased variation in these estimators. In this paper, we propose several data-driven methods for estimating π0 by incorporating the distribution pattern of the observed p-values as a practical approach to address potential dependence among test statistics. Specifically, we use a linear fit to give a data-driven estimate of the proportion of true-null p-values in (λ, 1] over the whole range [0, 1], instead of using the expected proportion at 1 − λ. We find that the proposed estimators may substantially decrease the variance of the estimated true null proportion and thus improve the overall performance.
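As a hedged baseline sketch (the standard single-λ threshold estimator rather than the paper's linear-fit refinement over the whole range of λ): under the two-group model, p-values above a threshold λ come mostly from true nulls, giving the estimator shown below.

```python
import numpy as np

def pi0_lambda(pvals, lam=0.5):
    """Standard lambda-threshold estimator of the true-null proportion pi0:
    pi0_hat(lambda) = #{p_i > lambda} / (m * (1 - lambda)).
    (The paper refines this by fitting the proportion of p-values in (lambda, 1]
    over a range of lambda values instead of using a single point.)"""
    m = len(pvals)
    return np.sum(pvals > lam) / (m * (1.0 - lam))

# Simulated p-values: 80% true nulls (uniform), 20% alternatives concentrated near 0.
rng = np.random.default_rng(5)
m, pi0_true = 5_000, 0.8
null_p = rng.uniform(size=int(m * pi0_true))
alt_p = rng.beta(0.2, 5.0, size=m - len(null_p))      # skewed toward small p-values
pvals = np.concatenate([null_p, alt_p])

print("true pi0 =", pi0_true, " estimated pi0 =", round(pi0_lambda(pvals), 3))
```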
16.
Journal of Statistical Computation and Simulation, 2012, 82(8): 973-983
In prospective cohort studies, individuals are usually recruited according to a certain cross-sectional sampling criterion. The prevalent cohort is defined as a group of individuals who are alive but possibly with disease at the beginning of the study. It is appealing to incorporate the prevalent cases to estimate the incidence rate of disease before enrollment. The method of back-calculation of the incidence rate has been used to estimate the incubation time from human immunodeficiency virus (HIV) infection to AIDS, with the time origin defined as the time of HIV infection. In aging cohort studies, the primary time scale is age at disease onset; subjects have to survive a certain number of years to be enrolled into the study, thus creating left truncation (delayed entry). Current methods usually assume that either the disease incidence is rare or the excess mortality due to disease is small compared with that of healthy subjects. So far, the validity of results based on these assumptions has not been examined. In this paper, a simple alternative method is proposed to estimate the dementia incidence rate before enrollment using prevalent cohort data with left truncation. Furthermore, simulations are used to examine the performance of the estimation of disease incidence under different assumptions about disease incidence rates and excess mortality hazards due to disease. As an application, the method is applied to the prevalent cases of dementia from the Honolulu-Asia Aging Study to estimate the dementia incidence rate and to assess the effect of hypertension, Apoe 4 and education on dementia onset.
17.
Julio López-Laborda, Carmen Marín-González, Jorge Onrubia-Fernández, Journal of Applied Statistics, 2021, 48(16): 3233
Microdata are required to evaluate the distributive impact of the taxation system as a whole (direct and indirect taxes) on individuals or households. However, in European Union countries this information is usually spread across two separate surveys: the Household Budget Surveys (HBS), which record total household expenditure and its composition, and the EU Statistics on Income and Living Conditions (EU-SILC), which record detailed information about households' income and the direct (but not indirect) taxes they pay. We present a parametric statistical matching procedure to merge the two surveys. For the first stage of matching, we propose estimating total household expenditure in the HBS (Engel curves) with a GLM estimator instead of the traditionally used OLS method. This is a better alternative insofar as it can deal with the heteroskedasticity problem of the OLS estimates while making it unnecessary to retransform regressors estimated in logarithms. To evaluate these advantages of the GLM estimator, we conducted a computational Monte Carlo simulation. In addition, when an error term is added to the deterministic imputation of expenditure in the EU-SILC, we propose replacing the usual Normal distribution of the error with a chi-square type, which allows a better approximation to the variance of the original expenditures in the HBS. An empirical analysis is provided using Spanish surveys for the years 2012–2016. In addition, we extend the empirical analysis to the rest of the European Union countries, using the surveys provided by Eurostat (EU-SILC, 2011; HBS, 2010).
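A minimal sketch of the second proposal (adding a centred chi-square-type error to a deterministic expenditure imputation): the fitted values, residual standard deviation and degrees of freedom below are illustrative placeholders, not the authors' specification.

```python
import numpy as np

rng = np.random.default_rng(6)

# Placeholder deterministic expenditure imputations for EU-SILC households
# (e.g. from an Engel curve estimated on the HBS) and a placeholder residual
# standard deviation taken from the HBS.
fitted = rng.gamma(shape=5.0, scale=4_000.0, size=10_000)
sigma = 3_000.0
df = 4                                                # illustrative degrees of freedom

# Centred, scaled chi-square error: mean 0, standard deviation sigma, right-skewed,
# which mimics the skewness of expenditure better than a symmetric Normal error.
err = sigma * (rng.chisquare(df, size=fitted.size) - df) / np.sqrt(2 * df)
imputed = fitted + err

print("std of imputed expenditure:", imputed.std().round(1))
print("sign of the error's third moment (skewness):", np.sign(np.mean(err**3)))
```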
18.
Yu, B., Journal of Statistical Computation and Simulation, 2011, 81(8): 973-983
In prospective cohort studies, individuals are usually recruited according to a certain cross-sectional sampling criterion. The prevalent cohort is defined as a group of individuals who are alive but possibly with disease at the beginning of the study. It is appealing to incorporate the prevalent cases to estimate the incidence rate of disease before the enrollment. The method of back-calculation of the incidence rate has been used to estimate the incubation time from HIV infection to AIDS, with the time origin defined as the time of HIV infection. In aging cohort studies, the primary time scale is age at disease onset; subjects have to survive a certain number of years to be enrolled into the study, thus creating left truncation (delayed entry). The current methods usually assume that either the disease incidence is rare or the excess mortality due to disease is small compared with that of healthy subjects. So far, the validity of the results based on these assumptions has not been examined. In this paper, a simple alternative method is proposed to estimate the dementia incidence rate before enrollment using prevalent cohort data with left truncation. Furthermore, simulations are used to examine the performance of the estimation of disease incidence under different assumptions about disease incidence rates and excess mortality hazards due to disease. As an application, the method is applied to the prevalent cases of dementia from the Honolulu-Asia Aging Study to estimate the dementia incidence rate and to assess the effect of hypertension, Apoe 4 and education on dementia onset.
19.
Unless all of a drug is eliminated during each dosing interval, the plasma concentrations within a dosing interval will increase until the time course of change in plasma concentrations becomes invariant from one dosing interval to the next, resulting in steady state. A simple method is presented for estimating the time to steady state of drug concentrations, based on the multiple-dose area under the plasma concentration–time curve and the effective rate of drug accumulation. Several point estimates and confidence intervals for the time to 90% of steady state are compared, and a recommendation is made on how to summarize and present the results. Copyright © 2009 John Wiley & Sons, Ltd.
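A short worked illustration of the underlying kinetics (a standard one-compartment, first-order result, not the paper's estimator): with elimination rate constant k, the fraction of steady state reached by time t is 1 − exp(−kt), so the time to 90% of steady state is ln(10)/k, about 3.32 elimination half-lives.

```python
import numpy as np

# One-compartment, first-order elimination: fraction of steady state at time t is 1 - exp(-k*t).
half_life = 12.0                        # illustrative elimination half-life, in hours
k = np.log(2.0) / half_life             # elimination rate constant
t90 = np.log(10.0) / k                  # time to reach 90% of steady state

print(f"time to 90% of steady state: {t90:.1f} h  (= {t90 / half_life:.2f} half-lives)")
```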
20.
The basic assumption underlying the concept of ranked set sampling is that actual measurement of units is expensive, whereas ranking is cheap. This may not be true in reality in certain cases where ranking may be moderately expensive. In such situations, based on total cost considerations, k-tuple ranked set sampling is known to be a viable alternative, where one selects k units (instead of one) from each ranked set. In this article, we consider estimation of the distribution function based on k-tuple ranked set samples when the cost of selecting and ranking units is not ignorable. We investigate estimation both in the balanced and unbalanced data case. Properties of the estimation procedure in the presence of ranking error are also investigated. Results of simulation studies as well as an application to a real data set are presented to illustrate some of the theoretical findings.
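An illustrative sketch of the balanced ranked-set-sampling idea with single-unit selection per set and perfect ranking (the paper's k-tuple variant instead keeps k units from each ranked set): the empirical CDF of the resulting sample averages indicators over all judgment-order strata.

```python
import numpy as np

def balanced_rss(draw, set_size, n_cycles, rng):
    """Balanced ranked set sample: in each cycle, for each rank r = 1..H,
    draw H units, rank them, and keep the r-th smallest (perfect ranking assumed)."""
    sample = []
    for _ in range(n_cycles):
        for r in range(set_size):
            units = np.sort(draw(rng, set_size))
            sample.append(units[r])
    return np.array(sample)

rng = np.random.default_rng(7)
draw = lambda r, size: r.standard_normal(size)        # illustrative parent population
rss = balanced_rss(draw, set_size=4, n_cycles=50, rng=rng)

t = 0.5
f_hat = np.mean(rss <= t)               # empirical CDF estimate at t from the RSS sample
print(f"F_hat({t}) = {f_hat:.3f}  (true value 0.691 for the standard normal)")
```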