Similar Articles
20 similar articles found (search time: 187 ms)
1.
The lognormal distribution is applied to the approximate calculation of short-term aggregate claims that follow a compound Poisson distribution, and formulas for the parameters of the shifted lognormal distribution are derived. Taking individual claim amounts that follow an exponential distribution as an example, the resulting distribution function is compared with that of the shifted gamma distribution, showing that the shifted lognormal approximation is a feasible method for short-term aggregate claims.
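A minimal sketch of the moment-matching step described above, assuming exponentially distributed individual claims with mean m and a Poisson(lam) claim count; the function and variable names are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def shifted_lognormal_params(lam, m):
    """Moment-match a shifted lognormal x0 + LN(mu, sigma^2) to a compound
    Poisson total with Poisson(lam) claim count and Exp(mean=m) claim sizes."""
    k1 = lam * m              # mean:          lam * E[X]
    k2 = lam * 2 * m ** 2     # variance:      lam * E[X^2]
    k3 = lam * 6 * m ** 3     # 3rd cumulant:  lam * E[X^3]
    gamma = k3 / k2 ** 1.5    # skewness of the aggregate claims

    # lognormal skewness is (w + 2) * sqrt(w - 1), where w = exp(sigma^2)
    w = brentq(lambda w: (w + 2) * np.sqrt(w - 1) - gamma, 1 + 1e-12, 50.0)
    sigma2 = np.log(w)
    mu = 0.5 * np.log(k2 / (w * (w - 1)))        # matches the variance
    x0 = k1 - np.exp(mu + sigma2 / 2)            # shift so the mean matches
    return x0, mu, np.sqrt(sigma2)

def approx_cdf(s, lam, m):
    """P(S <= s) under the shifted lognormal approximation."""
    x0, mu, sigma = shifted_lognormal_params(lam, m)
    return norm.cdf((np.log(s - x0) - mu) / sigma)

print(approx_cdf(60.0, lam=50, m=1.0))   # approximate tail probability of aggregate claims
```

The skewness equation fixes sigma, after which the variance and mean give mu and the shift x0.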

2.
Monte Carlo simulation is used to study the statistical properties of the sample moments of STAR (smooth transition autoregressive) models. The analysis shows that the sample mean, sample variance, sample skewness and sample kurtosis of a STAR model are all asymptotically normally distributed; that even when the data-generating process contains no constant term the population mean may still be nonzero, a clear difference from linear ARMA models; and that even when the error term of the data-generating process is normally distributed, the data may still be skewed.
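A small Monte Carlo sketch of the kind of experiment described above, using an LSTAR(1) data-generating process with no constant term and Gaussian errors; the coefficient values are illustrative assumptions:

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)

def simulate_lstar(T, phi1=0.3, phi2=-0.6, gamma=5.0, c=0.0, burn=200):
    """Simulate an LSTAR(1) process with no constant term:
    y_t = phi1*y_{t-1} + phi2*y_{t-1}*G(y_{t-1}) + eps_t,
    G(y) = 1 / (1 + exp(-gamma*(y - c))),  eps_t ~ N(0, 1)."""
    y = np.zeros(T + burn)
    eps = rng.standard_normal(T + burn)
    for t in range(1, T + burn):
        G = 1.0 / (1.0 + np.exp(-gamma * (y[t - 1] - c)))
        y[t] = phi1 * y[t - 1] + phi2 * y[t - 1] * G + eps[t]
    return y[burn:]

# Monte Carlo distribution of the four sample moments
R, T = 1000, 500
moms = np.array([[y.mean(), y.var(ddof=1), skew(y), kurtosis(y)]
                 for y in (simulate_lstar(T) for _ in range(R))])

for name, col in zip(["mean", "variance", "skewness", "excess kurtosis"], moms.T):
    print(f"sample {name:16s}: MC mean {col.mean():7.4f},  MC std {col.std(ddof=1):7.4f}")
```

Even with no intercept and normal errors, the Monte Carlo means of the sample mean and sample skewness are typically nonzero, which is the contrast with linear ARMA models noted above.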

3.
With the arrival of the big-data era and the improvement of statistical systems, empirical research in macro-finance increasingly relies on large-dimensional panel data, and the theory of large-dimensional panel data models has become a focus of modern econometric research. This paper develops the asymptotic theory of discrete choice models for nonstationary large-dimensional panel data. The main findings are that, under the hypothesis that the true regression parameter equals zero, the maximum likelihood estimator is consistent and asymptotically normal, and that the conventional Wald statistic for significance testing asymptotically follows a chi-square distribution.

4.
The classical bioequivalence test is the average bioequivalence test, but it ignores the variability of the effect and possible subject-by-drug interactions. This paper builds statistical models in which the mean drug concentration follows a one-compartment pharmacokinetic model and the error variable follows a normal, lognormal or Weibull distribution, respectively. Multiple testing is used to compare the concentration-time profiles of the two drugs directly, and the bootstrap is used to obtain the upper confidence limits needed in the multiple testing procedure. Simulation analysis shows that the method improves the power of the test.

5.
This paper studies parameter estimation for two lognormal populations with partially missing data, together with the hypothesis test that the parameters of the two populations are equal; the strong consistency and asymptotic normality of the estimators are proved, and a test statistic for the equality of the two populations' parameters is given along with its limiting distribution.

6.
The discrete choice data frequently encountered in consumer behavior research are multinomial response data, which are usually handled with a multinomial logit linear regression model. When some of the regressors are nonlinearly related to the log-odds vector while the remaining regressors are linearly related to it, however, a generalized semiparametric regression model with the log-odds vector as the response is needed. Using data from a survey of mobile phone users' lifestyles as an example, this paper discusses the application of vector generalized semiparametric regression models in consumer behavior research.
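For reference, a sketch of the baseline parametric multinomial logit fit on hypothetical survey-style data; the semiparametric vector extension discussed above is not reproduced here:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Hypothetical data: two covariates, three choice categories
n = 1000
X = rng.standard_normal((n, 2))
eta = np.column_stack([np.zeros(n),                      # baseline category
                       0.8 * X[:, 0] - 0.5 * X[:, 1],
                       -0.4 * X[:, 0] + 1.0 * X[:, 1]])
p = np.exp(eta) / np.exp(eta).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=pi) for pi in p])

# Multinomial logit: the log-odds of each category vs. the baseline are linear in X
model = sm.MNLogit(y, sm.add_constant(X))
result = model.fit(disp=False)
print(result.summary())
```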

7.
Drawing on earlier studies, this paper examines GMM, MLE and MCMC methods for estimating the parameters of a CEV process and carries out an empirical analysis with CSI 300 index data. A comparison of standard errors shows that the MLE and MCMC estimates outperform GMM, and the estimated parameter satisfies β < 2, indicating that the elasticity of the volatility of CSI 300 returns is not zero; that is, the CSI 300 index is not lognormally distributed and the CEV process is not equivalent to geometric Brownian motion.
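A sketch of one simple way to estimate CEV parameters, using a Gaussian quasi-likelihood built from the Euler discretization of dS = μS dt + σ S^(β/2) dW; this is not the GMM/MCMC machinery of the paper, and the price path below is simulated rather than the actual CSI 300 series:

```python
import numpy as np
from scipy.optimize import minimize

def cev_neg_loglik(params, S, dt):
    """Gaussian quasi-log-likelihood of the Euler scheme for the CEV model
    dS = mu*S*dt + sigma*S**(beta/2)*dW."""
    mu, log_sigma, beta = params
    sigma = np.exp(log_sigma)                    # keeps sigma positive
    mean = S[:-1] + mu * S[:-1] * dt
    var = sigma ** 2 * S[:-1] ** beta * dt
    resid = S[1:] - mean
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid ** 2 / var)

# `prices` stands in for the CSI 300 level series (daily, dt = 1/252); simulated here
rng = np.random.default_rng(2)
dt, n = 1 / 252, 1500
prices = np.empty(n)
prices[0] = 3000.0
for t in range(1, n):
    prices[t] = (prices[t - 1] + 0.05 * prices[t - 1] * dt
                 + 1.5 * prices[t - 1] ** 0.75 * np.sqrt(dt) * rng.standard_normal())

fit = minimize(cev_neg_loglik, x0=[0.05, np.log(1.0), 1.0],
               args=(prices, dt), method="Nelder-Mead")
mu_hat, sigma_hat, beta_hat = fit.x[0], np.exp(fit.x[1]), fit.x[2]
print(f"mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}, beta = {beta_hat:.3f}")
```

An estimate of β well below 2 corresponds to the paper's finding, since geometric Brownian motion is the special case β = 2.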

8.
Quantile regression is used to study estimation and testing in nonparametric fixed-effects panel data models, and the asymptotic normality and convergence rate of the parameter estimators are obtained. A rank score statistic is also constructed to test the fixed effects of the model, and this statistic is shown to be asymptotically standard normal.
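A minimal sketch of quantile regression with unit dummies on a hypothetical balanced panel; the paper's estimator is nonparametric and its rank score test is not reproduced here:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(3)

# Hypothetical balanced panel: N units, T periods, one regressor, unit fixed effects
N, T = 30, 20
unit = np.repeat(np.arange(N), T)
alpha = rng.normal(0, 1, N)[unit]                 # unit-specific fixed effects
x = rng.standard_normal(N * T)
y = alpha + 1.5 * x + rng.standard_t(df=5, size=N * T)

# Fixed effects enter as unit dummies; q is the quantile of interest
X = pd.get_dummies(pd.Series(unit), prefix="unit", drop_first=True).astype(float)
X.insert(0, "x", x)
X = sm.add_constant(X)

fit = QuantReg(y, X).fit(q=0.5)
print("median-regression slope:", round(fit.params["x"], 3))
```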

9.
This paper analyzes the efficiency of the securities market and points out that the linear paradigm does not match actual market conditions. Traditional finance assumes that security returns are lognormally distributed, yet a large body of empirical evidence shows that, compared with the normal distribution, the return distribution is leptokurtic and fat-tailed and has a fractal structure. On this basis, the paper examines the shortcomings of the traditional Black-Scholes warrant pricing model and, using fractional Brownian motion in a fractal market, proposes a fractal-theory-based Black-Scholes pricing model for equity warrants that accounts for the equity dilution effect. Because pricing an equity warrant requires the firm's equity value and its volatility, while the firm value is itself a function of the warrant price, a numerical method is used to estimate the firm-value volatility from the stock price and its volatility, and a practical case study is given.
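One common form of the fractional Black-Scholes call formula under fractional Brownian motion with Hurst exponent H is sketched below; the paper's full equity-warrant model with dilution and the firm-value iteration is not reproduced, and all inputs are illustrative:

```python
import numpy as np
from scipy.stats import norm

def fractional_bs_call(S, K, r, sigma, t, T, H):
    """European call under fractional Brownian motion with Hurst exponent H.
    With H = 0.5 this reduces to the classical Black-Scholes formula."""
    tau2 = T ** (2 * H) - t ** (2 * H)            # variance accumulated over [t, T]
    vol = sigma * np.sqrt(tau2)
    d1 = (np.log(S / K) + r * (T - t) + 0.5 * sigma ** 2 * tau2) / vol
    d2 = d1 - vol
    return S * norm.cdf(d1) - K * np.exp(-r * (T - t)) * norm.cdf(d2)

# sanity check: H = 0.5 at t = 0 gives the usual Black-Scholes price
print(fractional_bs_call(S=10.0, K=10.0, r=0.03, sigma=0.3, t=0.0, T=1.0, H=0.5))
print(fractional_bs_call(S=10.0, K=10.0, r=0.03, sigma=0.3, t=0.0, T=1.0, H=0.7))
```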

10.
Martingale pricing of convertible bonds (cited 8 times: 0 self-citations, 8 by others)
This paper analyzes the value components of convertible bonds from a quantitative perspective and, under the assumption that the stock price is lognormally distributed, uses the martingale pricing method to derive a pricing formula for convertible bonds.
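A heavily simplified sketch of the value decomposition mentioned above: a straight coupon bond plus a Black-Scholes call on the conversion option, with the stock price lognormal. This illustrates the decomposition only, not the paper's martingale pricing formula, and all inputs are hypothetical:

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def convertible_bond_value(face, coupon, r, T, S, sigma, conv_ratio):
    """Simplified convertible bond value: a straight coupon bond discounted at r,
    plus conv_ratio European calls struck at the implied conversion price."""
    times = np.arange(1, int(T) + 1)
    straight = np.sum(coupon * face * np.exp(-r * times)) + face * np.exp(-r * T)
    conv_price = face / conv_ratio                # strike implied by the conversion terms
    option = conv_ratio * bs_call(S, conv_price, r, sigma, T)
    return straight + option

print(convertible_bond_value(face=100, coupon=0.02, r=0.03, T=5,
                             S=9.0, sigma=0.35, conv_ratio=10))
```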

11.
The lognormal distribution is quite commonly used as a lifetime distribution. Data arising from life-testing and reliability studies are often left-truncated and right-censored. Here, the EM algorithm is used to estimate the parameters of the lognormal model based on left-truncated and right-censored data. The maximization step of the algorithm is carried out by two alternative methods, one involving approximation using a Taylor series expansion (leading to approximate maximum likelihood estimates) and the other based on the EM gradient algorithm (Lange, 1995). These two methods are compared based on Monte Carlo simulations. The Fisher scoring method for obtaining the maximum likelihood estimates exhibits convergence problems under this setup, except when the truncation percentage is small. The asymptotic variance-covariance matrix of the MLEs is derived using the missing information principle (Louis, 1982), and asymptotic confidence intervals for the scale and shape parameters are then obtained and compared with corresponding bootstrap confidence intervals. Finally, some numerical examples are given to illustrate all the methods of inference developed here.
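For comparison, a direct numerical MLE of the left-truncated, right-censored lognormal likelihood is easy to sketch; this is not the EM or EM-gradient algorithm of the paper, and the data below are simulated:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(params, t, delta, tau):
    """Left-truncated, right-censored lognormal log-likelihood.
    t: observed time (failure or censoring), delta: 1 = failure, 0 = censored,
    tau: left-truncation time (> 0) for each unit."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    z = (np.log(t) - mu) / sigma
    log_f = norm.logpdf(z) - np.log(sigma) - np.log(t)      # lognormal density
    log_S = norm.logsf(z)                                   # lognormal survival
    log_S_tau = norm.logsf((np.log(tau) - mu) / sigma)      # truncation correction
    return -np.sum(delta * log_f + (1 - delta) * log_S - log_S_tau)

# Hypothetical life-test data with truncation at tau0 and censoring at cens
rng = np.random.default_rng(4)
mu_true, sigma_true, tau0, cens = 2.0, 0.5, 3.0, 12.0
life = np.exp(mu_true + sigma_true * rng.standard_normal(5000))
life = life[life > tau0][:300]                  # only units observed past the truncation time
delta = (life <= cens).astype(float)
t = np.minimum(life, cens)
tau = np.full_like(t, tau0)

fit = minimize(neg_loglik, x0=[np.log(t).mean(), 0.0], args=(t, delta, tau), method="Nelder-Mead")
print("mu_hat =", round(fit.x[0], 3), " sigma_hat =", round(np.exp(fit.x[1]), 3))
```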

12.
In many engineering problems it is necessary to draw statistical inferences on the mean of a lognormal distribution based on a complete sample of observations. Statistical demonstration of mean time to repair (MTTR) is one example. Although optimum confidence intervals and hypothesis tests for the lognormal mean have been developed, they are difficult to use, requiring extensive tables and/or a computer. In this paper, simplified conservative methods for calculating confidence intervals or hypothesis tests for the lognormal mean are presented. Here, “conservative” refers to confidence intervals (hypothesis tests) whose infimum coverage probability (supremum probability of rejecting the null hypothesis taken over parameter values under the null hypothesis) equals the nominal level. The term “conservative” has obvious implications for confidence intervals (they are “wider” in some sense than their optimum or exact counterparts). Applying the term “conservative” to hypothesis tests should cause no confusion once it is remembered that their equivalent confidence intervals are conservative. No implication of optimality is intended for these conservative procedures. It is emphasized that these are direct statistical inference methods for the lognormal mean, as opposed to the already well-known methods for the parameters of the underlying normal distribution. The method currently employed in MIL-STD-471A for statistical demonstration of MTTR is analyzed and compared to the new method in terms of asymptotic relative efficiency. The new methods are also compared to the optimum methods derived by Land (1971, 1973).
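As an illustration of the kind of MTTR demonstration test discussed above, the following sketch uses the familiar approximation based on log E[X] = mu + sigma^2/2 under a lognormal model; it is not one of the conservative procedures developed in the paper, and the data are hypothetical:

```python
import numpy as np
from scipy.stats import norm

def demonstrate_mttr(repair_times, m0, alpha=0.10):
    """One-sided test of H0: mean repair time >= m0 vs H1: mean < m0 under a
    lognormal model, using the usual estimate of log E[X] = mu + sigma^2/2
    and its approximate (Cox-type) standard error."""
    y = np.log(np.asarray(repair_times))
    n = y.size
    ybar, s2 = y.mean(), y.var(ddof=1)
    theta_hat = ybar + s2 / 2                         # estimate of log(mean repair time)
    se = np.sqrt(s2 / n + s2 ** 2 / (2 * (n - 1)))
    z = (theta_hat - np.log(m0)) / se
    return z < norm.ppf(alpha), z                     # True -> MTTR requirement demonstrated

rng = np.random.default_rng(8)
times = np.exp(rng.normal(np.log(0.4), 0.6, size=30))   # hypothetical repair times (hours)
print(demonstrate_mttr(times, m0=1.0))
```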

13.
In this paper, we consider a constant-stress accelerated life test terminated by hybrid Type-I censoring at the first stress level. The model is based on a general log-location-scale lifetime distribution with mean life being a linear function of stress and with constant scale. We obtain the maximum likelihood estimators (MLEs) and the approximate maximum likelihood estimators (AMLEs) of the model parameters. Approximate confidence intervals, likelihood ratio tests and two bootstrap methods are used to construct confidence intervals for the unknown parameters of the Weibull and lognormal distributions using the MLEs. Finally, a simulation study and two illustrative examples are provided to demonstrate the performance of the developed inferential methods.

14.
A two-stage hierarchical model for the analysis of discrete data with extra-Poisson variation is examined. The model consists of a Poisson distribution with a lognormal mixing distribution for the mean. A method of approximate maximum likelihood estimation of the parameters is proposed. The method uses the EM algorithm, and approximations that facilitate its implementation are derived. Approximate standard errors of the estimates are provided, and a numerical example is used to illustrate the method.
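Instead of the approximate EM of the paper, the marginal likelihood of the Poisson-lognormal model can also be maximized directly with Gauss-Hermite quadrature; a sketch on simulated counts, with all names and values illustrative:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.optimize import minimize
from scipy.special import gammaln

nodes, weights = hermgauss(30)   # Gauss-Hermite rule for integrating against exp(-x^2)

def neg_loglik(params, y):
    """Marginal likelihood of the Poisson-lognormal model:
    y_i | lambda_i ~ Poisson(lambda_i),  log(lambda_i) ~ N(mu, sigma^2)."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    lam = np.exp(mu + sigma * np.sqrt(2.0) * nodes)              # (K,) quadrature values of lambda
    log_pois = (y[:, None] * np.log(lam)[None, :] - lam[None, :]
                - gammaln(y + 1)[:, None])                       # (n, K) Poisson log-pmf
    marg = np.exp(log_pois) @ (weights / np.sqrt(np.pi))         # integrate over the mixing normal
    return -np.sum(np.log(marg))

# Hypothetical overdispersed count data
rng = np.random.default_rng(6)
y = rng.poisson(np.exp(rng.normal(1.0, 0.7, size=400)))

fit = minimize(neg_loglik, x0=[0.0, np.log(0.5)], args=(y,), method="Nelder-Mead")
print("mu_hat =", round(fit.x[0], 3), " sigma_hat =", round(np.exp(fit.x[1]), 3))
```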

15.
The performance of six confidence intervals for estimating the arithmetic mean of a lognormal distribution is compared using simulated data. The first interval considered is based on an exact method and is recommended in U.S. EPA guidance documents for calculating upper confidence limits for contamination data. Two intervals are based on asymptotic properties due to the Central Limit Theorem, and the other three are based on transformations and maximum likelihood estimation. The effects of departures from lognormality on the performance of these intervals are also investigated, with the gamma distribution used to represent such departures. The average width and coverage of each confidence interval are reported for varying mean, variance, and sample size. In the lognormal case, the exact interval gives good coverage, but for small sample sizes and large variances the confidence intervals are too wide. In these cases, an approximation that incorporates the sampling variability of the sample variance tends to perform better. When the underlying distribution is a gamma distribution, the intervals based upon the Central Limit Theorem tend to perform better than those based upon lognormal assumptions.
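A small simulation in the spirit of the comparison above: coverage of a CLT-based t interval and a Cox-type interval for the arithmetic mean, under lognormal data and under gamma data with the same mean. This reproduces neither the exact (Land-type) interval nor all six intervals from the paper:

```python
import numpy as np
from scipy.stats import norm, t as t_dist

rng = np.random.default_rng(7)

def t_interval(x, conf=0.95):
    """Naive CLT-based interval for the mean."""
    n, m, s = x.size, x.mean(), x.std(ddof=1)
    q = t_dist.ppf(0.5 + conf / 2, df=n - 1)
    return m - q * s / np.sqrt(n), m + q * s / np.sqrt(n)

def cox_interval(x, conf=0.95):
    """Cox-type interval for the lognormal mean exp(mu + sigma^2/2)."""
    y = np.log(x); n = y.size
    ybar, s2 = y.mean(), y.var(ddof=1)
    se = np.sqrt(s2 / n + s2 ** 2 / (2 * (n - 1)))
    z = norm.ppf(0.5 + conf / 2)
    return np.exp(ybar + s2 / 2 - z * se), np.exp(ybar + s2 / 2 + z * se)

def coverage(sampler, true_mean, n=25, reps=5000):
    hits = np.zeros(2)
    for _ in range(reps):
        x = sampler(n)
        for j, ci in enumerate((t_interval(x), cox_interval(x))):
            hits[j] += ci[0] <= true_mean <= ci[1]
    return hits / reps

mu, sigma = 0.0, 1.0
lognorm_mean = np.exp(mu + sigma ** 2 / 2)
print("lognormal data (CLT, Cox):", coverage(lambda n: rng.lognormal(mu, sigma, n), lognorm_mean))
# gamma data with the same mean, mimicking a departure from lognormality
k, theta = 2.0, lognorm_mean / 2.0
print("gamma data     (CLT, Cox):", coverage(lambda n: rng.gamma(k, theta, n), k * theta))
```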

16.
A new estimator of the regression parameters is introduced in a multivariate multiple-regression model in which both the vector of explanatory variables and the vector of response variables are assumed to be random. The affine equivariant estimate matrix is constructed using the sign covariance matrix (SCM), where the sign concept is based on Oja's criterion function. The influence function and asymptotic theory are developed to consider robustness and limiting efficiencies of the SCM regression estimate. The estimate is shown to be consistent with a limiting multinormal distribution. The influence function, as a function of the length of the contamination vector, is shown to be linear in elliptic cases; for the least squares (LS) estimate it is quadratic. The asymptotic relative efficiencies with respect to the LS estimate are given in the multivariate normal as well as the t-distribution case. The SCM regression estimate is highly efficient in the multivariate normal case and, for heavy-tailed distributions, it performs better than the LS estimate. Simulations are used to consider finite-sample efficiencies, with similar results. The theory is illustrated with an example.

17.
Compared with ordinary Archimedean copulas, the hierarchical Archimedean copula (HAC) has a more general structure, and compared with elliptical copulas it has fewer parameters to estimate. A two-stage maximum likelihood method is used to estimate the HAC: the marginal distribution of each component is estimated first, and the copula function is then estimated on that basis. In the empirical analysis, Clayton- and Gumbel-type HACs are used to analyze the dependence among four stock price series. Before determining the HAC structure and estimating its parameters, an ARMA-GARCH process is applied to remove the autocorrelation and conditional heteroskedasticity in the series. A comparison of Akaike information criteria suggests that a fully nested Gumbel-type HAC best captures this dependence.
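The full nested HAC with ARMA-GARCH filtering is too long for a sketch, but the two-stage idea (margins first, copula second) can be illustrated with a bivariate Clayton copula fitted by maximum likelihood to pseudo-observations; the GARCH filtering step is omitted and the data below are simulated:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

def clayton_neg_loglik(theta, u, v):
    """Negative log-likelihood of the bivariate Clayton copula (theta > 0)."""
    if theta <= 0:
        return np.inf
    s = u ** (-theta) + v ** (-theta) - 1.0
    logc = (np.log(1 + theta) - (1 + theta) * (np.log(u) + np.log(v))
            - (2 + 1 / theta) * np.log(s))
    return -np.sum(logc)

# Stage 1 (simplified): transform each return series to pseudo-observations on (0, 1)
# via its empirical CDF.  (The paper first filters each series with ARMA-GARCH and
# uses the standardized residuals; that step is omitted here.)
rng = np.random.default_rng(9)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=1000)   # stand-in for two return series
u = rankdata(z[:, 0]) / (len(z) + 1)
v = rankdata(z[:, 1]) / (len(z) + 1)

# Stage 2: maximize the copula likelihood given the estimated margins
fit = minimize_scalar(clayton_neg_loglik, bounds=(1e-4, 20.0), args=(u, v), method="bounded")
print("Clayton theta_hat =", round(fit.x, 3), " Kendall tau =", round(fit.x / (fit.x + 2), 3))
```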

18.
To remedy the mismatch between the grey differential equation and the whitening equation in the GM(1,1) power model, an unbiased GM(1,1) power model is established by reconstructing the grey differential equation. This makes the parameters of the difference equation more consistent with their counterparts in the differential equation. The unbiased GM(1,1) power model is applied to forecasting tourist arrivals, and the example shows that its forecasting accuracy is higher than that of the GM(1,1) model.
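The reconstruction behind the unbiased GM(1,1) power model is not reproduced here, but the classical GM(1,1) forecast used as the benchmark above can be sketched as follows; the tourist-arrival series is hypothetical:

```python
import numpy as np

def gm11_forecast(x0, steps=3):
    """Classical GM(1,1) grey forecast.
    x0: positive original series; returns the grey coefficients (a, b),
    plus fitted values and `steps` forecasts on the original scale."""
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    x1 = np.cumsum(x0)                               # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                    # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]      # grey development / control coefficients

    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time-response function
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)]) # back to the original series
    return a, b, x0_hat

# Hypothetical annual tourist-arrival series
arrivals = [412, 448, 487, 531, 585, 642]
a, b, fitted = gm11_forecast(arrivals, steps=2)
print("a =", round(a, 4), " b =", round(b, 2))
print("fitted + forecasts:", np.round(fitted, 1))
```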

19.
The Burr XII distribution offers a more flexible alternative to the lognormal, log-logistic and Weibull distributions. Outliers can occur during reliability life testing, so an efficient method is needed to estimate the parameters of the Burr XII distribution from censored data with outliers. The objective of this paper is to present a robust regression (RR) method, the M-estimator, to estimate the parameters of a two-parameter Burr XII distribution based on the probability plotting procedure for both complete and multiply-censored data with outliers. The simulation results show that the RR method outperforms the unweighted least squares and maximum likelihood methods in most cases in terms of bias and root mean square error.

20.
National high-tech industrial development zones play an important role in promoting economic development, research and development and technological innovation, job creation, talent cultivation, attracting foreign investment, technology introduction and diffusion, industrial clustering, export expansion, entrepreneurship and institutional innovation. Data envelopment analysis (DEA) is used to evaluate the performance of 56 national high-tech development zones. The study finds that the mean technical efficiency (crste) of the 56 zones is 0.768, the mean pure technical efficiency (vrste) is 0.838, and the mean scale efficiency (scale) is 0.920. The evaluation results are classified with the natural breaks method, and a spatial analysis of the classification shows a gradient pattern in which performance is higher in the east and lower in the west.
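A sketch of the input-oriented CCR model behind technical-efficiency (crste) scores of the kind reported above, solved as one linear program per decision-making unit; the inputs and outputs shown are hypothetical. The VRS variant (adding the constraint sum(lambda) = 1) would give vrste, with scale efficiency = crste / vrste:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y):
    """Input-oriented CCR (constant returns to scale) DEA efficiency scores.
    X: (n, m) inputs, Y: (n, s) outputs for n decision-making units.
    For each DMU o:  min theta  s.t.  sum_j lam_j * x_j <= theta * x_o,
                                      sum_j lam_j * y_j >= y_o,  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        # decision variables: [theta, lam_1, ..., lam_n]
        c = np.zeros(1 + n); c[0] = 1.0
        A_ub = np.zeros((m + s, 1 + n))
        b_ub = np.zeros(m + s)
        A_ub[:m, 0] = -X[o]              # sum_j lam_j x_j - theta * x_o <= 0
        A_ub[:m, 1:] = X.T
        A_ub[m:, 1:] = -Y.T              # -sum_j lam_j y_j <= -y_o
        b_ub[m:] = -Y[o]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (1 + n), method="highs")
        scores[o] = res.x[0]
    return scores

# Hypothetical data for 5 zones: 2 inputs (R&D spending, staff), 1 output (revenue)
X = np.array([[120, 300], [150, 260], [90, 200], [200, 400], [110, 280]], dtype=float)
Y = np.array([[560], [610], [400], [700], [430]], dtype=float)
print(np.round(ccr_input_efficiency(X, Y), 3))   # 1.0 marks an efficient zone
```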
