Similar literature
20 similar articles found (search time: 375 ms)
1.
The development of VaR has been driven by the ongoing search for faithful risk measurement; VaR is a method for overcoming or reducing risk. Starting from the limitations of the normality assumption in the nonlinear Delta-Gamma model for computing VaR, this paper reviews the classical Delta-Gamma-Johnson transformation functions together with transformation functions constructed from the Delta-Gamma-Cornish-Fisher expansion, and examines VaR estimation for the Chinese stock market from an empirical perspective. The empirical results show that the SU-type transformation within the Delta-Gamma-Johnson family is broadly suitable for normalizing Chinese stock-market sample data, and that VaR values computed from SU-transformed sample data markedly improve the measurement of risk in the Chinese stock market.
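The Cornish-Fisher component of this line of work can be illustrated with a minimal sketch: adjust the normal quantile for skewness and excess kurtosis, then convert it to a loss figure. This is only the textbook expansion, not the authors' Delta-Gamma model, and all parameter values are hypothetical.

```python
from statistics import NormalDist

def cornish_fisher_var(mu, sigma, skew, kurt_excess, alpha=0.01):
    """VaR at level alpha via the Cornish-Fisher expansion: adjust the
    normal quantile for skewness and excess kurtosis, then convert to
    a loss figure. Textbook sketch, not the authors' Delta-Gamma model."""
    z = NormalDist().inv_cdf(alpha)
    z_cf = (z
            + (z ** 2 - 1) * skew / 6
            + (z ** 3 - 3 * z) * kurt_excess / 24
            - (2 * z ** 3 - 5 * z) * skew ** 2 / 36)
    return -(mu + sigma * z_cf)
```

With skew = kurt_excess = 0 this reduces to ordinary normal VaR; negative skew and heavy tails raise the figure, which is exactly the non-normality effect the abstract is concerned with.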

2.
One of the most important steps in the design of a pharmaceutical clinical trial is the estimation of the sample size. For a superiority trial the sample size formula (to achieve a stated power) would be based on a given clinically meaningful difference and a value for the population variance. The formula is typically used as though this population variance is known, whereas in reality it is unknown and is replaced by an estimate with its associated uncertainty. The variance estimate would be derived from an earlier similarly designed study (or an overall estimate from several previous studies) and its precision would depend on its degrees of freedom. This paper provides a solution for the calculation of sample sizes that allows for the imprecision in the estimate of the sample variance, and shows how traditional formulae give sample sizes that are too small because they do not allow for this uncertainty, the deficiency being more acute with fewer degrees of freedom. It is recommended that the methodology described in this paper be used whenever the sample variance has fewer than 200 degrees of freedom.
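The deficiency described here can be made concrete by comparing the traditional formula with a simple allowance for variance imprecision. The sketch below uses the standard two-group superiority formula; the t-quantile inflation is an illustrative adjustment, not the paper's exact correction.

```python
import math
from scipy import stats

def classic_sample_size(delta, sigma, alpha=0.05, power=0.80):
    """Traditional per-group n for a two-arm superiority trial,
    treating the variance estimate as if it were the known truth."""
    za = stats.norm.ppf(1 - alpha / 2)
    zb = stats.norm.ppf(power)
    return math.ceil(2 * (za + zb) ** 2 * sigma ** 2 / delta ** 2)

def inflated_sample_size(delta, s, df, alpha=0.05, power=0.80):
    """Crude allowance for the imprecision of s^2 (df degrees of
    freedom): use t rather than normal quantiles. Illustrative only;
    the paper derives its own correction."""
    ta = stats.t.ppf(1 - alpha / 2, df)
    tb = stats.t.ppf(power, df)
    return math.ceil(2 * (ta + tb) ** 2 * s ** 2 / delta ** 2)
```

The gap between the two shrinks as df grows, echoing the paper's finding that the deficiency is worst at low degrees of freedom.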

3.
Studies of seasonal variation are valuable in biomedical research because they can help to uncover the etiology of diseases that are not well understood. The data in these studies generally have characteristics that require specialized tests and methods for the statistical analysis, but the effectiveness of these specialized tests varies, particularly with the form of the seasonal variation, the size of its amplitude, and the sample size. The purpose of this paper is to present a test and methods appropriate for analyzing and modeling data whose seasonal variation has small amplitude and whose sample size is small. The test can detect different kinds of seasonal variation. Results from a simulation study show that the test performs very well, and the application of the methods is illustrated with two examples.

4.
The most popular goodness-of-fit test for a multinomial distribution is the chi-square test, but this test is generally biased if observations are subject to misclassification. In this paper we discuss how to define a new test procedure when we have double-sample data obtained from both a true and a fallible measuring device. An adjusted chi-square test based on the imputation method and the likelihood ratio test are considered. Asymptotically, these two procedures are equivalent; however, an example and simulation results show that the former is not only computationally simpler but also more powerful in finite sample situations.
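For context, here is a minimal sketch of the standard Pearson chi-square goodness-of-fit test that the paper takes as its starting point; the adjusted and likelihood-ratio procedures themselves are not reproduced.

```python
import numpy as np
from scipy.stats import chi2

def chisq_gof(observed, probs):
    """Pearson chi-square goodness-of-fit test for multinomial counts;
    this is the test the paper shows is biased when the counts come
    from a fallible (misclassifying) measuring device."""
    observed = np.asarray(observed, dtype=float)
    expected = observed.sum() * np.asarray(probs, dtype=float)
    stat = float(((observed - expected) ** 2 / expected).sum())
    pval = float(chi2.sf(stat, df=len(observed) - 1))
    return stat, pval
```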

5.
This paper considers the problem where the linear discriminant rule is formed from training data that are only partially classified with respect to the two groups of origin. A further complication is that the data of unknown origin do not constitute an observed random sample from a mixture of the two underlying groups. Under the assumption of a homoscedastic normal model, the overall error rate of the sample linear discriminant rule formed by maximum likelihood from the partially classified training data is derived up to and including terms of the first order in the case of univariate feature data. This first-order expansion of the sample rule is used to define its asymptotic efficiency relative both to the rule formed from a completely classified random training set and to the rule formed from a completely unclassified random set.

6.
Using micro-level sample data from a survey of a commercial bank in Yantai, this paper empirically studies the determinants of prepayment of personal housing mortgage loans in China. The results show that older and better-educated borrowers are more likely to prepay; larger loan amounts and longer loan terms increase the probability of prepayment; borrowers with higher down-payment ratios prepay more often; non-local borrowers are more likely to prepay than local ones; and the longer a loan has been outstanding, the more likely prepayment becomes. The borrower's gender, marital status, household size, industry of employment, and ratio of monthly repayment to household income have no significant effect on prepayment.

7.
巩红禹, 陈雅. 《统计研究》(Statistical Research), 2018, 35(12): 113-122
This paper addresses two problems: improving sample representativeness and multi-objective surveys. First, it proposes a new multi-objective sampling method that improves representativeness by combining an increased sample size with an adjusted sample structure: a balanced design with supplementary sampling, in which additional units are drawn so that the supplementary units combined with the original sample form a new balanced sample, reducing the structural deviation between sample and population relative to the initial sample. A balanced sample is one in which the Horvitz-Thompson estimator of the totals of the auxiliary variables equals the true population totals. Second, by choosing auxiliary variables correlated with several target parameters, a balanced sample gives a single sample good representativeness for different target parameters, thereby supporting multi-objective surveys. Using county-level data from the 2010 Sixth Population Census and several target parameters, a post-hoc evaluation of the supplemented balanced sample shows that the supplementary balanced design effectively improves the sample structure, bringing it close to the population structure and reducing the error of the target estimates; it also shows that balanced sampling designs can support multi-objective surveys and improve the efficiency with which a sample is used.

8.
Estimation of the standard deviation of a normal population is an important practical problem that in industrial practice must often be done from small and possibly contaminated data sets. Using multiple estimators is useful, as differences in the estimates may indicate whether the data set is contaminated and the form of the contamination. In this paper, finite sample correction factors have been estimated by simulation for several simple robust estimators of the standard deviation of a normal population. The estimators are the median absolute deviation, interquartile range, shortest half interval (Shorth), and median moving range. Finite sample correction factors have also been estimated for the commonly used non-robust estimators: mean absolute deviation and mean moving range. The simulation has been benchmarked against finite sample correction factors for the sample standard deviation and the sample range.
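The simulation approach described here can be sketched in a few lines: estimate the correction factor for an estimator of σ by averaging it over many N(0, 1) samples. The estimator and settings below are illustrative, not the paper's exact protocol.

```python
import numpy as np

def mad(x):
    """Median absolute deviation, without any consistency factor."""
    return np.median(np.abs(x - np.median(x)))

def correction_factor(estimator, n, reps=20000, seed=0):
    """Monte-Carlo correction factor c_n such that c_n * estimator is
    unbiased for sigma on N(0, 1) samples of size n (true sigma = 1)."""
    rng = np.random.default_rng(seed)
    vals = [estimator(rng.standard_normal(n)) for _ in range(reps)]
    return 1.0 / float(np.mean(vals))
```

For large n the factor for the MAD approaches the familiar asymptotic 1.4826; for small n it is noticeably larger, which is exactly the finite-sample effect the paper tabulates.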

9.
Time-varying VaR with normality transformation of sample data (cited by 2: 0 self-citations, 2 by others)
Although abandoning the normality assumption in estimating VaR has become an important and popular topic in modern finance, first normalizing the sample data and then applying normality-based VaR estimation methods to correct the VaR estimate is an effective route to more precise VaR computation. Taking the time-varying character of the return distribution into account, this paper uses normality transformation of the sample data to refine and improve VaR estimation across different phases of China's securities market. The empirical analysis shows that returns in China's securities market not only have a time-varying distribution, but also exhibit significantly different VaR values in bull and bear markets. The SU type of the Johnson transformation family is a suitable normality transformation for the sample data; VaR estimated from the transformed data is both more accurate and more effectively refined.
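The normality-transformation idea can be sketched with scipy's Johnson-SU distribution. The parameters below are invented for illustration; in practice they would be fitted to the return series, and the paper's time-varying refinements are not reproduced.

```python
import numpy as np
from scipy import stats

# Invented Johnson-SU parameters (gamma, delta, xi, lam); in practice
# they would be fitted to the observed return series.
g, d, xi, lam = -0.5, 1.5, 0.0, 0.01
rng = np.random.default_rng(42)
returns = stats.johnsonsu.rvs(g, d, loc=xi, scale=lam, size=2000,
                              random_state=rng)

# SU normality transform: z = gamma + delta * asinh((x - xi) / lam)
z = g + d * np.arcsinh((returns - xi) / lam)    # approximately N(0, 1)

# Normal-based VaR on the transformed scale, mapped back to return units
q = stats.norm.ppf(0.01)                        # 1% normal quantile
var_99 = -(xi + lam * np.sinh((q - g) / d))     # 99% VaR of the returns
```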

10.
Family migration of rural migrant workers to cities is the necessary route to genuinely completing the transfer of surplus rural labor and advancing urbanization, and the new generation of migrant workers is gradually becoming the backbone of that labor force. Their willingness to migrate to cities with their families is shaped by many factors, and these factors are precisely what governments need in order to craft policies that guide orderly migration. Taking Xi'an as an example, a questionnaire survey of the city's six urban districts yielded 1,040 migrant-worker observations. The analytic hierarchy process is used to analyze the factors influencing new-generation migrant workers' willingness to migrate with their families and to rank the importance of each factor. Occupational income, time spent working away from home, educational attainment, and the social security system have important effects on family migration, whereas age, marital status, and the household registration (hukou) system have no obvious effect.

11.
In outcome‐dependent sampling, the continuous or binary outcome variable in a regression model is available in advance to guide selection of a sample on which explanatory variables are then measured. Selection probabilities may either be a smooth function of the outcome variable or be based on a stratification of the outcome. In many cases, only data from the final sample is accessible to the analyst. A maximum likelihood approach for this data configuration is developed here for the first time. The likelihood for fully general outcome‐dependent designs is stated, then the special case of Poisson sampling is examined in more detail. The maximum likelihood estimator differs from the well‐known maximum sample likelihood estimator, and an information bound result shows that the former is asymptotically more efficient. A simulation study suggests that the efficiency difference is generally small. Maximum sample likelihood estimation is therefore recommended in practice when only sample data is available. Some new smooth sample designs show considerable promise.

12.
SUMMARY Ranked-set sampling is a widely used sampling procedure when sample observations are expensive or difficult to obtain. It departs from simple random sampling by seeking to spread the observations in the sample widely over the distribution or population. This is achieved by ranking methods which may need to employ concomitant information. The ranked-set sample mean is known to be more efficient than the corresponding simple random sample mean. Instead of the ranked-set sample mean, this paper considers the corresponding optimal estimator: the ranked-set best linear unbiased estimator. This is shown to be more efficient, even for normal data, but particularly for skew data, such as from an exponential distribution. The corresponding forms of the estimators are quite distinct from the ranked-set sample mean. The improvement holds whether the ordering is perfect or imperfect, with the prospect of imperfect ordering being explored through the use of concomitants. In addition, the corresponding optimal linear estimator of a scale parameter is also discussed. The results are applied to a biological problem that involves the estimation of root weights for experimental plants, where the expense of measurement implies the need to minimize the number of observations taken.
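A small simulation makes the claimed efficiency gain concrete: under perfect ranking, the ranked-set sample mean has smaller variance than the simple random sample mean of the same size. This sketch uses exponential data, where the abstract notes the improvement is largest; the set size and cycle count are illustrative.

```python
import numpy as np

def rss_sample(rng, draw, m, k):
    """One ranked-set sample of size m*k under perfect ranking: for
    each of k cycles and each rank r = 1..m, draw m units and keep
    the r-th order statistic."""
    vals = [np.sort(draw(rng, m))[r] for _ in range(k) for r in range(m)]
    return np.array(vals)

rng = np.random.default_rng(1)
draw = lambda rng, n: rng.exponential(1.0, n)   # skew data, mean 1
m, k, reps = 4, 5, 4000
rss_means = [rss_sample(rng, draw, m, k).mean() for _ in range(reps)]
srs_means = [draw(rng, m * k).mean() for _ in range(reps)]
# The RSS mean stays unbiased while its variance is visibly smaller
print(np.var(rss_means), np.var(srs_means))
```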

13.
A meta-elliptical model is a distribution function whose copula is that of an elliptical distribution. The tail dependence function in such a bivariate model has a parametric representation with two parameters: a tail parameter and a correlation parameter. The correlation parameter can be estimated by robust methods based on the whole sample. Using the estimated correlation parameter as a plug-in estimator, we then estimate the tail parameter by applying a modification of the method-of-moments approach proposed by Einmahl et al. (2008). We show that such an estimator is consistent and asymptotically normal, and we derive the joint limit distribution of the estimators of the two parameters. We illustrate the small-sample behavior of the estimator of the tail parameter by a simulation study and on real data, and we compare its performance to that of competing estimators.

14.
On the planning and design of sample surveys (cited by 1: 1 self-citation, 0 by others)
Surveys rely on structured questions used to map out reality, using sample observations from a population frame, into data that can be statistically analyzed. This paper focuses on the planning and design of surveys, making a distinction between individual surveys, household surveys and establishment surveys. Knowledge from cognitive science is used to provide guidelines on questionnaire design. Non-standard, but simple, statistical methods are described for analyzing survey results. The paper is based on experience gained by conducting over 150 customer satisfaction surveys in Europe, America and the Far East.

15.
A belief function measure of uncertainty, associated with a network based knowledge structure, effectively defines an artificial analyst (e.g., an expert system) capable of making uncertain judgements. In practice, a belief function is typically constructed by combining independent components developed on local areas of the network from inputs such as accepted causal theories, probabilistic judgements by experts, or empirical sample data. Representation of the network in a special form called a ‘tree of cliques’ leads to a locally controlled algorithm that propagates locally defined beliefs through the tree, and fuses the resulting beliefs at the nodes, in such a way as to simultaneously compute marginal beliefs over all nodes of the tree. The paper develops a simple hypothetical example from the field of reliability to illustrate these ideas.

16.
Supremum score test statistics are often used to evaluate hypotheses with unidentifiable nuisance parameters under the null hypothesis. Although these statistics provide an attractive framework to address non‐identifiability under the null hypothesis, little attention has been paid to their distributional properties in small to moderate sample size settings. In situations where there are identifiable nuisance parameters under the null hypothesis, these statistics may behave erratically in realistic samples as a result of a non‐negligible bias induced by substituting these nuisance parameters by their estimates under the null hypothesis. In this paper, we propose an adjustment to the supremum score statistics by subtracting the expected bias from the score processes and show that this adjustment does not alter the limiting null distribution of the supremum score statistics. Using a simple example from the class of zero‐inflated regression models for count data, we show empirically and theoretically that the adjusted tests are superior in terms of size and power. The practical utility of this methodology is illustrated using count data in HIV research.

17.
This paper develops a method for assessing the risk for rare events based on the following scenario. There exists a large population with an unknown percentage p of defects. A sample of size N is drawn from the population and, in the sample, 0 defects are drawn. Given these data, we want to determine the probability that no more than n defects will be found in another random sample of N drawn from the population. Estimates on the range of p and n are calculated from a derived joint distribution which depends on p, n and N. Asymptotic risk results based on an infinite sample are then developed. It is shown that these results are applicable even with relatively small sample spaces.
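One simple way to make this scenario computable (not necessarily the paper's derivation) is a Bayesian sketch: with 0 defects observed in N under a uniform prior on p, the posterior is Beta(1, N+1), and the defect count in a fresh sample of N is beta-binomial.

```python
from math import comb, exp, lgamma

def prob_at_most(n, N):
    """P(at most n defects in a fresh sample of N | 0 defects in a
    first sample of N), assuming a uniform prior on the defect rate p:
    posterior Beta(1, N+1), new count beta-binomial(N, 1, N+1).
    A Bayesian illustration, not the paper's joint-distribution result."""
    def log_beta(a, b):
        return lgamma(a) + lgamma(b) - lgamma(a + b)
    return sum(comb(N, x) * exp(log_beta(x + 1, 2 * N + 1 - x)
                                - log_beta(1, N + 1))
               for x in range(n + 1))
```

For n = 0 this collapses to (N+1)/(2N+1), just over one half: having seen no defects, it is only slightly better than even odds that the next equally sized sample is also defect-free. The classical "rule of three" (p ≲ 3/N at 95% confidence) gives a comparable frequentist bound on p itself.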

18.
A qualitative evaluation of enterprise credit status: an analysis based on a logistic regression model (cited by 1: 0 self-citations, 1 by others)
Using the consolidated financial data of 100 listed companies in the materials and machinery manufacturing industries as sample data, principal component analysis and a logistic regression model are applied to evaluate enterprise credit risk qualitatively and to give a brief rating of each enterprise's creditworthiness. Corporate credit is influenced mainly by solvency, which is closely related to the liquidity of funds and to operating performance. Conclusions and recommendations are given to guide the investment decisions of creditors, investors, and trading partners.
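The two-stage pipeline the abstract describes (principal components, then logistic regression) can be sketched as follows. The real 100-firm financial data are not public, so the data here are synthetic stand-ins with correlated columns mimicking financial ratios.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the 100-firm panel of financial ratios:
# 8 correlated indicators plus a binary creditworthiness label.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))
X[:, 1:4] += X[:, [0]]                 # induce correlation, as with real ratios
y = (X[:, 0] + 0.5 * rng.standard_normal(100) > 0).astype(int)

# Principal components to compress the correlated ratios, then logistic
# regression on the components to score creditworthiness
model = make_pipeline(StandardScaler(), PCA(n_components=3),
                      LogisticRegression())
model.fit(X, y)
print(f"in-sample accuracy: {model.score(X, y):.2f}")
```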

19.
Length-biased data, which are often encountered in engineering, economics and epidemiology studies, are generally subject to right censoring caused by the end of the study or loss to follow-up. The structure of length-biased data is distinct from conventional survival data, since the independent censoring assumption is often violated by the biased sampling. In this paper, a proportional hazards model with varying coefficients is considered for length-biased and right-censored data. A local composite likelihood procedure is put forward for the estimation of the unknown coefficient functions in the model, and large sample properties of the proposed estimators are obtained. Additionally, extensive simulation studies are conducted to assess the finite sample performance of the proposed method, and a data set from the Academy Awards is analyzed.

20.
Summary.  Statistical agencies make changes to the data collection methodology of their surveys to improve the quality of the data collected or to improve the efficiency with which they are collected. For reasons of cost it may not be possible to estimate the effect of such a change on survey estimates or response rates reliably, without conducting an experiment that is embedded in the survey which involves enumerating some respondents by using the new method and some under the existing method. Embedded experiments are often designed for repeated and overlapping surveys; however, previous methods use sample data from only one occasion. The paper focuses on estimating the effect of a methodological change on estimates in the case of repeated surveys with overlapping samples from several occasions. Efficient design of an embedded experiment that covers more than one time point is also mentioned. All inference is unbiased over an assumed measurement model, the experimental design and the complex sample design. Other benefits of the approach proposed include the following: it exploits the correlation between the samples on each occasion to improve estimates of treatment effects; treatment effects are allowed to vary over time; it is robust against incorrectly rejecting the null hypothesis of no treatment effect; it allows a wide set of alternative experimental designs. This paper applies the methodology proposed to the Australian Labour Force Survey to measure the effect of replacing pen-and-paper interviewing with computer-assisted interviewing. This application considered alternative experimental designs in terms of their statistical efficiency and their risks to maintaining a consistent series. The approach proposed is significantly more efficient than using only 1 month of sample data in estimation.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号