Similar Articles
20 similar articles found.
1.
The accuracy of a binary diagnostic test is usually measured in terms of its sensitivity and its specificity, or through positive and negative predictive values. Another way to describe the validity of a binary diagnostic test is the risk of error and the kappa coefficient of the risk of error. The risk of error is the average loss that is caused when incorrectly classifying a non-diseased or a diseased patient, and the kappa coefficient of the risk of error is a measure of the agreement between the diagnostic test and the gold standard. In the presence of partial verification of the disease, the disease status of some patients is unknown, and therefore the evaluation of a diagnostic test cannot be carried out through the traditional method. In this paper, we have deduced the maximum likelihood estimators and variances of the risk of error and of the kappa coefficient of the risk of error in the presence of partial verification of the disease. Simulation experiments have been carried out to study the effect of the verification probabilities on the coverage of the confidence interval of the kappa coefficient.
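As a hedged illustration of the quantities discussed above (not the authors' estimators under partial verification), the sketch below computes sensitivity, specificity, predictive values, a loss-weighted risk of error, and the kappa coefficient from a fully verified 2×2 table; the counts and the loss weights `c_fn`, `c_fp` are hypothetical.

```python
import numpy as np

# Hypothetical fully verified 2x2 table: test result vs gold standard
tp, fp, fn, tn = 90, 15, 10, 185
n = tp + fp + fn + tn

sens = tp / (tp + fn)            # sensitivity
spec = tn / (tn + fp)            # specificity
ppv  = tp / (tp + fp)            # positive predictive value
npv  = tn / (tn + fn)            # negative predictive value

# One common formulation of the risk of error: expected loss from
# misclassifying diseased (c_fn) and non-diseased (c_fp) patients.
c_fn, c_fp = 1.0, 1.0            # assumed loss weights
prev = (tp + fn) / n
risk_of_error = c_fn * prev * (1 - sens) + c_fp * (1 - prev) * (1 - spec)

# Cohen's kappa between test and gold standard (agreement beyond chance)
po = (tp + tn) / n
pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
kappa = (po - pe) / (1 - pe)

print(f"Se={sens:.3f} Sp={spec:.3f} PPV={ppv:.3f} NPV={npv:.3f}")
print(f"risk of error={risk_of_error:.3f} kappa={kappa:.3f}")
```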

2.
Profit-Maximization Location Theory and the Roots of Guangzhou's High Housing Prices (Total citations: 1; self-citations: 0; citations by others: 1)
张立建 《统计研究》2008,25(9):16-23
Using profit-maximization location theory, this paper builds a housing-price model to investigate empirically the roots of the sustained rise in Guangzhou's housing prices. The main factor found to drive prices is the shortage of housing supply; secondary causes are high costs and severe income polarization. The institutional root lies in the contradiction between a freely competitive demand market for real estate and a centrally planned supply market; the policy root lies in the government acting as an "economic man" and single-mindedly running the city as a business; the economic root lies in the industrial polarization produced by competition and monopoly power; and the social root lies in the unreasonable housing-consumption habits of Guangzhou residents. In the short term, therefore, increasing land supply, reforming the way land-use rights are sold, and introducing a progressive/regressive real-estate tax are the keys to restraining prices; in the long term, a freely competitive real-estate supply market should be established, the "economic man" government should be turned into a service-oriented government, the industrial structure should be optimized, and the privileges of the state-backed ("国字头") industries should be abolished.

3.
Structural Analysis of Regression Models for Fuzzy Data (Total citations: 4; self-citations: 1; citations by others: 3)
李竹渝  张成 《统计研究》2008,25(8):74-78
Based on samples of symmetric triangular fuzzy numbers, this paper proposes a general structure for regression models of fuzzy data. When the fuzzy regression coefficients are estimated by linear programming (LP), a criterion for evaluating the goodness of fit of the model using the sample's average closeness degree is given, following the nearest-fuzzy-set principle. Through a worked example, the FLP and FLS methods for estimating the unknown parameters of the fuzzy-sample regression model are compared.
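The abstract does not spell out the FLP formulation; as a hedged sketch only, the following linear program implements a Tanaka-type possibilistic regression with symmetric triangular fuzzy coefficients (centres `a`, spreads `c`), which is one common way an LP-based fuzzy regression is set up. The data are simulated.

```python
import numpy as np
from scipy.optimize import linprog

def tanaka_fuzzy_regression(X, y, h=0.0):
    """Possibilistic (Tanaka-type) fuzzy linear regression via LP.

    Returns centres a and non-negative spreads c of symmetric triangular
    fuzzy coefficients such that every y_i lies inside the predicted fuzzy
    interval at level h, with total spread minimised.
    """
    n, _ = X.shape
    Xd = np.column_stack([np.ones(n), X])          # add intercept column
    absX = np.abs(Xd)
    k = Xd.shape[1]

    # objective: minimise total spread  sum_i sum_j c_j |x_ij|
    obj = np.concatenate([np.zeros(k), absX.sum(axis=0)])

    # inclusion constraints rewritten as A_ub @ z <= b_ub, z = [a, c]
    A_up = np.hstack([-Xd, -(1 - h) * absX])       # y_i <= a.x + (1-h) c.|x|
    A_lo = np.hstack([Xd, -(1 - h) * absX])        # y_i >= a.x - (1-h) c.|x|
    A_ub = np.vstack([A_up, A_lo])
    b_ub = np.concatenate([-y, y])

    bounds = [(None, None)] * k + [(0, None)] * k  # spreads are non-negative
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:k], res.x[k:]

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(30, 1))
y = 2.0 + 1.5 * X[:, 0] + rng.normal(0, 0.5, 30)
a, c = tanaka_fuzzy_regression(X, y)
print("centres:", a, "spreads:", c)
```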

4.
Compared with the ASRF model, the GCRM model yields an amount of capital consistent with the objective of economic capital measurement, but the existing literature does not explicitly characterize the yield to maturity under the GCRM model. By assuming that an asset's yield to maturity is related to its credit economic capital, this paper derives a GCRM-based method for measuring credit economic capital and pricing loans. The method captures how the borrower's probability of default and loss given default, together with the commercial bank's risk appetite (target solvency probability) and cost of capital funding, affect economic capital and loan pricing, providing a reference for decision-making in the relevant areas of commercial banking.
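The GCRM model itself is not specified in the abstract; as a point of reference only, the sketch below computes the ASRF (Vasicek) economic-capital charge that the abstract uses as its benchmark. The values of PD, LGD, asset correlation and the confidence level are illustrative assumptions, not figures from the paper.

```python
from scipy.stats import norm

def asrf_capital(pd_, lgd, rho, q=0.999):
    """ASRF (Vasicek) capital charge per unit exposure: LGD times the
    q-quantile of the conditional default rate, less expected loss."""
    wcdr = norm.cdf((norm.ppf(pd_) + rho**0.5 * norm.ppf(q)) / (1 - rho)**0.5)
    return lgd * (wcdr - pd_)

# Illustrative inputs (assumed, not from the paper)
pd_, lgd, rho, exposure = 0.02, 0.45, 0.15, 1_000_000
k = asrf_capital(pd_, lgd, rho)
print(f"capital ratio = {k:.4f}, economic capital = {k * exposure:,.0f}")
```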

5.
胡帆 《统计研究》2010,27(11):53-56
Drawing on the concept of a total quality management (TQM) system, this paper analyses the elements of statistical survey data quality management that run through the entire statistical workflow and the roles they play. It focuses on the TQM process and the organization of its key tasks; in connection with the development of statistical information systems, it pays particular attention to the role of work standards and application software, and to the construction and use of data resources.

6.
A Study of Sampling Estimation of Seasonal Indices Based on Stratified Random Sampling (Total citations: 1; self-citations: 0; citations by others: 0)
邓明 《统计研究》2008,25(7):70-73
Because the traditional method of seasonal index analysis is purely descriptive, this paper proposes a seasonal index estimator based on stratified random sampling, derives the estimator's bias and mean squared error together with an estimator of that mean squared error, and on this basis discusses hypothesis testing for seasonal indices and the determination of the optimal estimator.
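The paper's stratified estimator is not reproduced here; as a minimal hedged sketch, the code below treats each month as a stratum, estimates each month's mean from a simple random sample within the stratum, and forms the classical seasonal index as the month mean divided by the overall mean. All data are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical monthly populations (e.g. daily sales): 12 strata of 200 units
true_index = 1 + 0.3 * np.sin(2 * np.pi * np.arange(12) / 12)
populations = [100 * s + rng.normal(0, 10, 200) for s in true_index]

n_per_stratum = 30
month_means = np.array([
    rng.choice(pop, size=n_per_stratum, replace=False).mean()   # SRS within stratum
    for pop in populations
])
seasonal_index = month_means / month_means.mean()               # estimated indices
print(np.round(seasonal_index, 3))
```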

7.
External auditors such as the National Audit Office (NAO) are the final arbiters of the level of error in the accounts presented to them by their clients, and of the accuracy, or otherwise, of individual transactions. In coming to a view on the level of error, they are expected to carry out the audit effectively and efficiently, and therefore need to make the best possible use of all the information at their disposal, even when some of that information may not be totally accurate. We consider the particular situation where the NAO is given access to the results of tests on a relatively large random sample of transactions, typically conducted by the client's internal auditors. A two-phase sampling scheme arises when the NAO subsequently assesses the quality of the client's data by retesting a subsample of these transactions. The paper discusses methodologies for combining the two sets of data to produce optimum estimates of the proportion of transactions in error (the error rate) and of the level of monetary error in the account. Although a maximum likelihood approach yields a relatively straightforward solution to the error rate problem, there is no uniformly optimum way to estimate the monetary error. Three possible methods are proposed, and the results of a series of simulation experiments comparing their performance under a variety of audit conditions are described.
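The paper's estimators are not given in the abstract; the sketch below illustrates one natural double-sampling estimator under invented data: the internal auditors' classification defines two groups, the NAO retests a subsample within each group, and the overall error rate is estimated by weighting the NAO-observed error rates by the group proportions.

```python
import numpy as np

# Phase 1 (internal audit): classification of a large random sample
n1 = 2000
internal_flagged = 140                   # transactions the client flagged as errors
internal_clear = n1 - internal_flagged

# Phase 2 (NAO retest): subsamples drawn from each internal classification
m_flagged, errors_in_flagged = 60, 51    # NAO confirms 51/60 flagged items are errors
m_clear, errors_in_clear = 120, 3        # NAO finds 3/120 "clear" items in error

w_flagged = internal_flagged / n1        # group proportions from phase 1
w_clear = internal_clear / n1

# Double-sampling (post-stratified) estimate of the true error rate
p_hat = (w_flagged * errors_in_flagged / m_flagged
         + w_clear * errors_in_clear / m_clear)

# Rough variance ignoring phase-1 sampling error in the weights
var_hat = (w_flagged**2 * (errors_in_flagged / m_flagged)
           * (1 - errors_in_flagged / m_flagged) / m_flagged
           + w_clear**2 * (errors_in_clear / m_clear)
           * (1 - errors_in_clear / m_clear) / m_clear)

print(f"estimated error rate = {p_hat:.4f} (se ~ {var_hat**0.5:.4f})")
```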

8.
This paper describes the various stages in building a statistical model to predict temperatures in the core of a reactor, and compares the benefits of this model with those of a physical model. We give a brief background to this study and the applications of the model to rapid online monitoring and safe operation of the reactor. We describe the methods of correlation and two-dimensional spectral analysis that we use to identify the effects incorporated in a spatial regression model for the measured temperatures. These effects are related to the age of the reactor fuel and the spatial geometry of the reactor. A remaining component of the temperature variation is a slowly varying temperature surface, modelled by smooth functions with constrained coefficients. We assess the accuracy of the model for interpolating temperatures throughout the reactor when measurements are available only at a reduced set of spatial locations, as is the case in most reactors. Further possible improvements to the model are discussed.

9.
In multi-parameter (multivariate) estimation, the Stein rule provides minimax and admissible estimators, generally compromising on their unbiasedness. On the other hand, the primary aim of jackknifing is to reduce the bias of an estimator (without necessarily compromising its efficacy), and, at the same time, jackknifing provides an estimator of the sampling variance of the estimator as well. In shrinkage estimation (where minimization of a suitably defined risk function is the basic goal), one may wonder how far the bias-reduction objective of jackknifing incorporates the dual objectives of minimaxity (or admissibility) and estimation of the risk of the estimator. A critical appraisal of this basic role of jackknifing in shrinkage estimation is made here. Restricted, semi-restricted and the usual versions of jackknifed shrinkage estimators are considered and their performance characteristics are studied. It is shown that for Pitman-type (local) alternatives, jackknifing usually fails to provide a consistent estimator of the (asymptotic) risk of the shrinkage estimator, and a degenerate asymptotic situation arises in the usual fixed-alternative case.
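As a hedged illustration of the Stein rule discussed above (not the restricted or jackknifed versions studied in the paper), the simulation below compares the empirical risk of the maximum likelihood estimator and the James–Stein estimator for a multivariate normal mean; the true mean and dimension are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
p, sigma2, reps = 10, 1.0, 5000
theta = np.full(p, 0.5)                        # true mean vector (assumed)

# One multivariate normal observation per Monte Carlo replicate
x = theta + rng.normal(0, sigma2**0.5, size=(reps, p))

# James-Stein estimator shrinking towards the origin (requires p >= 3)
shrink = 1 - (p - 2) * sigma2 / np.sum(x**2, axis=1, keepdims=True)
js = shrink * x

risk_mle = np.mean(np.sum((x - theta)**2, axis=1))
risk_js = np.mean(np.sum((js - theta)**2, axis=1))
print(f"empirical risk: MLE = {risk_mle:.3f}, James-Stein = {risk_js:.3f}")
```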

10.
李月 《统计研究》2010,27(9):16-25
Since the start of reform and opening up, China's economy has achieved remarkable success, yet many development problems lie behind the high GDP growth rates. This paper introduces the concept of effective economic growth and attempts to re-examine China's growth record from a new angle. By constructing a dynamic model of effective economic growth and methods for measuring the related variables, it empirically analyses the trend characteristics of China's effective economic growth since the reform era and argues that raising the share of excess per-capita consumption within the consumption ratio is the key to remedying the current shortfall in effective demand.

11.
Assessment of efficacy in important subgroups – such as those defined by sex, age, race and region – in confirmatory trials is typically performed using separate analysis of the specific subgroup. This ignores relevant information from the complementary subgroup. Bayesian dynamic borrowing uses an informative prior based on analysis of the complementary subgroup and a weak prior distribution centred on a mean of zero to construct a robust mixture prior. This combination of priors allows for dynamic borrowing of prior information; the analysis learns how much of the complementary subgroup prior information to borrow based on the consistency between the subgroup of interest and the complementary subgroup. A tipping point analysis can be carried out to identify how much prior weight needs to be placed on the complementary subgroup component of the robust mixture prior to establish efficacy in the subgroup of interest. An attractive feature of the tipping point analysis is that it enables the evidence from the source subgroup, the evidence from the target subgroup, and the combined evidence to be displayed alongside each other. This method is illustrated with an example trial in severe asthma where efficacy in the adolescent subgroup was assessed using a mixture prior combining an informative prior from the adult data in the same trial with a non-informative prior.
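A minimal sketch of the dynamic borrowing idea with a normal approximation to the subgroup treatment-effect estimate; all numbers are hypothetical, and the posterior mixture weight shows how much of the complementary-subgroup prior is effectively borrowed at each prior weight in a tipping-point-style sweep.

```python
import numpy as np
from scipy.stats import norm

def robust_mixture_posterior(y, se, m_inf, s_inf, w, m_weak=0.0, s_weak=10.0):
    """Posterior for a normal mean under a two-component (robust) mixture prior.

    y, se       : estimate and standard error from the target subgroup
    m_inf, s_inf: informative prior from the complementary subgroup
    w           : prior weight on the informative component
    Returns (posterior component weights, component means, component sds).
    """
    comps = [(m_inf, s_inf, w), (m_weak, s_weak, 1 - w)]
    post_w, post_m, post_s = [], [], []
    for m0, s0, wk in comps:
        marg = norm.pdf(y, loc=m0, scale=np.hypot(s0, se))   # prior predictive
        prec = 1 / s0**2 + 1 / se**2
        post_m.append((m0 / s0**2 + y / se**2) / prec)
        post_s.append(prec**-0.5)
        post_w.append(wk * marg)
    post_w = np.array(post_w) / np.sum(post_w)
    return post_w, np.array(post_m), np.array(post_s)

# Hypothetical data: adults (complementary subgroup) inform the adolescent analysis
for w in [0.2, 0.5, 0.8]:                 # sweep of prior weight (tipping point style)
    pw, pm, ps = robust_mixture_posterior(y=0.15, se=0.20, m_inf=0.40, s_inf=0.10, w=w)
    print(f"prior weight {w:.1f}: borrowed weight {pw[0]:.2f}, "
          f"posterior mean {np.sum(pw * pm):.3f}")
```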

12.
For fixed size sampling designs with high entropy, it is well known that the variance of the Horvitz–Thompson estimator can be approximated by the Hájek formula. The interest of this asymptotic variance approximation is that it only involves the first order inclusion probabilities of the statistical units. We extend this variance formula when the variable under study is functional, and we prove, under general conditions on the regularity of the individual trajectories and the sampling design, that we can get a uniformly convergent estimator of the variance function of the Horvitz–Thompson estimator of the mean function. Rates of convergence to the true variance function are given for the rejective sampling. We deduce, under conditions on the entropy of the sampling design, that it is possible to build confidence bands whose coverage is asymptotically the desired one via simulation of Gaussian processes with variance function given by the Hájek formula. Finally, the accuracy of the proposed variance estimator is evaluated on samples of electricity consumption data measured every half an hour over a period of 1 week.
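For scalar (non-functional) data, the Hájek variance approximation mentioned above has a well-known sample version; as a hedged sketch, the code below computes the Horvitz–Thompson estimator of a population mean and a Hájek-type variance estimator that uses only first-order inclusion probabilities. The population and the (approximate) unequal-probability draw are simulated stand-ins, not the electricity-consumption data of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 10_000, 400
y_pop = rng.gamma(shape=2.0, scale=50.0, size=N)       # synthetic study variable

# Target first-order inclusion probabilities proportional to an auxiliary size
size = rng.uniform(1, 5, N)
pi = n * size / size.sum()

# Crude approximation to a fixed-size unequal-probability (pi-ps) draw
sample = rng.choice(N, size=n, replace=False, p=size / size.sum())
y, p = y_pop[sample], pi[sample]

# Horvitz-Thompson estimator of the population mean
ht_mean = np.sum(y / p) / N

# Hajek-type variance estimator for fixed-size, high-entropy designs
B = np.sum((1 - p) * y / p) / np.sum(1 - p)
v_total = n / (n - 1) * np.sum((1 - p) * (y / p - B) ** 2)
v_mean = v_total / N**2

print(f"HT mean = {ht_mean:.2f}, true mean = {y_pop.mean():.2f}, "
      f"estimated variance = {v_mean:.4f}")
```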

13.
The availability of intraday data on the prices of speculative assets means that we can use quadratic variation-like measures of activity in financial markets, called realized volatility, to study the stochastic properties of returns. Here, under the assumption of a rather general stochastic volatility model, we derive the moments and the asymptotic distribution of the realized volatility error—the difference between realized volatility and the discretized integrated volatility (which we call actual volatility). These properties allow us to estimate the parameters of stochastic volatility models without recourse to simulation-intensive methods.
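A minimal numerical illustration of the realized volatility error under an assumed constant-volatility model (a special case of the stochastic volatility setting above): realized volatility, the sum of squared intraday returns, is compared with the known integrated variance over one day. The sampling frequency and volatility level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 0.20                          # annualised volatility (assumed constant)
one_day = 1 / 252
m = 288                               # number of intraday returns (illustrative)
dt = one_day / m

# Intraday log-returns of a driftless diffusion with constant volatility
r = rng.normal(0.0, sigma * np.sqrt(dt), size=m)

realized_var = np.sum(r**2)           # realized volatility (variance form)
integrated_var = sigma**2 * one_day   # actual (integrated) variance over the day
error = realized_var - integrated_var
print(f"RV = {realized_var:.6e}, IV = {integrated_var:.6e}, error = {error:.2e}")
```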

14.
Consider a set of points in the plane randomly perturbed about a mean configuration by Gaussian errors. In this paper a Procrustes statistic based on the shapes of subsets of the points is studied, and its approximate distribution is found for small variations. We derive various properties of the distribution, including the first two moments, a central limit result and a scaled χ² approximation. We concentrate on the independent isotropic Gaussian error case, although the results are valid for general covariance structures. We investigate triangle subsets in detail, and in particular the situation where the population mean is regular (i.e. a Delaunay triangulation of the mean of the process consists of equilateral triangles of the same size). We examine the variance of the statistic for differently shaped regions and provide an asymptotic result for general shaped regions. The results are applied to an investigation of regularity in human muscle fibre cross-sections.
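As a hedged sketch of the basic Procrustes comparison underlying such a statistic (not the paper's distributional results), the code below perturbs an equilateral mean triangle with isotropic Gaussian errors and computes the Procrustes disparity between the perturbed and mean shapes using `scipy.spatial.procrustes`; the perturbation scale is arbitrary.

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(4)

# Mean configuration: an equilateral triangle (the "regular" case above)
mean_config = np.array([[0.0, 0.0],
                        [1.0, 0.0],
                        [0.5, np.sqrt(3) / 2]])

sigma = 0.05                                      # isotropic Gaussian perturbation sd
disparities = []
for _ in range(1000):
    perturbed = mean_config + rng.normal(0, sigma, mean_config.shape)
    _, _, d = procrustes(mean_config, perturbed)  # squared Procrustes distance
    disparities.append(d)

print(f"mean disparity = {np.mean(disparities):.4f}, var = {np.var(disparities):.2e}")
```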

15.
Statistical matching consists of estimating the joint characteristics of two variables that are observed, respectively, in two distinct and independent sample surveys. In a parametric setup, ranges of estimates for non-identifiable parameters are the only estimable items, unless restrictive assumptions on the probabilistic relationship between the variables that are not jointly observed are imposed. These ranges correspond to the uncertainty due to the absence of joint observations on the pair of variables of interest. The aim of this paper is to analyze the uncertainty in statistical matching in a nonparametric setting. A measure of uncertainty is introduced and its properties studied: this measure captures the "intrinsic" association between the pair of variables, which is constant and equal to 1/6, whatever the form of the marginal distribution functions of the two variables, when the two samples provide the only available knowledge on the pair. The measure becomes useful in the context of the reduction of uncertainty due to knowledge beyond the data themselves, as in the case of structural zeros. In this case the proposed measure detects how the introduction of further knowledge shrinks the intrinsic uncertainty from 1/6 towards smaller values, zero being the case of no uncertainty. Sampling properties of the uncertainty measure and of the bounds of the uncertainty intervals are also proved.
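The constant 1/6 can be checked numerically: on the copula scale, the pointwise gap between the Fréchet–Hoeffding upper and lower bounds averages to 1/6 over the unit square, which is one way to read the "intrinsic" uncertainty when only the two marginal samples are available. The sketch below is an independent numerical check under that reading, not the authors' estimator.

```python
import numpy as np

# Average width of the Frechet-Hoeffding band  M(u,v) - W(u,v)  over [0,1]^2
g = np.linspace(0, 1, 1001)
u, v = np.meshgrid(g, g)
upper = np.minimum(u, v)                 # M(u, v) = min(u, v)
lower = np.maximum(u + v - 1, 0.0)       # W(u, v) = max(u + v - 1, 0)
avg_width = np.mean(upper - lower)
print(f"average width ~ {avg_width:.4f}  (theory: 1/6 = {1/6:.4f})")
```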

16.
A popular wavelet method for estimating jumps in functions is through the use of the translation-invariant (TI) estimator. The TI estimator addresses a particular problem, the susceptibility of the wavelet estimates to the location of the features in a function with respect to the support of the wavelet basis functions. The TI estimator reduces this reliance by cycling the data through a set of shifts, thus changing the relation between the wavelet support and the jump location. However, a drawback of the TI estimator is that it includes every shifted analysis in the reconstruction, even those that may reduce, rather than improve, the effectiveness of the method. In this paper, we propose a method that modifies the TI estimator to improve the jump reconstruction in terms of the mean squared errors of the reconstructions and visual performance. Information from the set of shifted data sets is used to mimic the performance of an oracle which knows exactly which are the best TI shifts to retain in the reconstruction. The TI estimate is a special case of the proposed method. A simulation study comparing this proposed method to the existing wavelet estimators and the oracle is provided.
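A minimal sketch of the standard TI (cycle-spinning) wavelet estimator that the paper takes as its starting point, using `pywt`: the signal is circularly shifted, denoised by soft thresholding, unshifted, and the reconstructions are averaged over all shifts. The paper's modification (retaining only the best shifts) is not implemented here; the test signal, wavelet and threshold are standard choices, not the paper's settings.

```python
import numpy as np
import pywt

def ti_denoise(x, wavelet="db4", level=4):
    """Translation-invariant denoising by cycle spinning over all circular shifts."""
    n = len(x)
    # Donoho-Johnstone noise-scale estimate from the finest detail coefficients
    detail = pywt.wavedec(x, wavelet, level=1, mode="periodization")[-1]
    sigma = np.median(np.abs(detail)) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(n))              # universal threshold
    out = np.zeros(n)
    for s in range(n):
        shifted = np.roll(x, s)
        coeffs = pywt.wavedec(shifted, wavelet, level=level, mode="periodization")
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        rec = pywt.waverec(coeffs, wavelet, mode="periodization")
        out += np.roll(rec, -s)
    return out / n

# Noisy piecewise-constant test signal with jumps (length a power of two)
rng = np.random.default_rng(5)
t = np.linspace(0, 1, 512)
signal = np.where(t < 0.3, 0.0, 1.0) + np.where(t > 0.7, -0.5, 0.0)
noisy = signal + rng.normal(0, 0.1, t.size)
denoised = ti_denoise(noisy)
print(f"MSE noisy = {np.mean((noisy - signal)**2):.4f}, "
      f"MSE TI = {np.mean((denoised - signal)**2):.4f}")
```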

17.
In recent years, the Quintile Share Ratio (or QSR) has become a very popular measure of inequality. In 2001, the European Council decided that income inequality in European Union member states should be described using two indicators: the Gini Index and the QSR. The QSR is generally defined as the ratio of the total income earned by the richest 20% of the population to that earned by the poorest 20%. Thus, it can be expressed using quantile shares, where a quantile share is the share of total income earned by all of the units up to a given quantile. The aim of this paper is to propose an improved methodology for the estimation and variance estimation of the QSR in a complex sampling design framework. Because the QSR is a non-linear function of interest, the estimation of its sampling variance requires advanced methodology. Moreover, a non-trivial obstacle in the estimation of quantile shares in finite populations is the non-unique definition of a quantile. Thus, two different conceptions of the quantile share are presented in the paper, leading to two different estimators of the QSR. Regarding variance estimation, Osier (2006, 2009) proposed a variance estimator based on linearization techniques. However, his method involves Gaussian kernel smoothing of cumulative distribution functions. Our approach, also based on linearization, shows that no smoothing is needed. The construction of confidence intervals is discussed, and a proposal is made to account for the skewness of the sampling distribution of the QSR. Finally, simulation studies are run to assess the relevance of our theoretical results.
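A minimal sketch of the point estimate itself, ignoring the complex-design weighting and variance issues the paper addresses: the QSR as the income share of the top 20% divided by that of the bottom 20%, computed from a hypothetical equal-weight sample. The simple floor-based cut is one convention for the quantile ambiguity the abstract mentions.

```python
import numpy as np

def qsr(income):
    """Quintile Share Ratio: income share of the richest 20% over the poorest 20%."""
    income = np.sort(np.asarray(income, dtype=float))
    cut = int(np.floor(0.2 * income.size))   # one simple quantile convention
    return income[-cut:].sum() / income[:cut].sum()

rng = np.random.default_rng(6)
sample = rng.lognormal(mean=10, sigma=0.8, size=5000)   # synthetic incomes
print(f"QSR = {qsr(sample):.2f}")
```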

18.
The National Institute of Mental Health (NIMH) Collaborative Study of Long-Term Maintenance Drug Therapy in Recurrent Affective Illness was a multicenter randomized controlled clinical trial designed to determine the efficacy of a pharmacotherapy for the prevention of the recurrence of unipolar affective disorders. The outcome of interest in this study was the time until the recurrence of a depressive episode. The data show much heterogeneity between centers for the placebo group. The aim of this paper is to use Bayesian hierarchical survival models to investigate the heterogeneity of placebo effects among centers in the NIMH study. This heterogeneity is explored in terms of the marginal posterior distributions of parameters of interest and predictive distributions of future observations. The Gibbs sampling algorithm is used to approximate posterior and predictive distributions. Sensitivity of results to the assumption of a constant hazard survival distribution at the first stage of the hierarchy is examined by comparing results derived from a two component exponential mixture and a two component exponential changepoint model to the results derived from an exponential model. The second component of the mixture and changepoint models is assumed to be a surviving fraction. For each of these first stage parametric models sensitivity of results to second stage prior distributions is also examined.
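A heavily simplified sketch of the first-stage model only: under a constant-hazard (exponential) survival model with a conjugate gamma prior, each centre's posterior recurrence rate is available in closed form, which makes between-centre heterogeneity easy to inspect. The hierarchical, mixture and changepoint extensions fitted by Gibbs sampling in the paper are not reproduced here, and the per-centre summaries below are invented.

```python
import numpy as np
from scipy.stats import gamma

# Hypothetical per-centre placebo-arm summaries: recurrences and total follow-up (years)
events   = np.array([12, 4, 19, 7, 3])
exposure = np.array([40., 35., 38., 42., 30.])

a0, b0 = 0.5, 1.0                      # weakly informative Gamma(shape, rate) prior
post_shape = a0 + events               # conjugate update for an exponential hazard
post_rate = b0 + exposure

for c, (sh, rt) in enumerate(zip(post_shape, post_rate), start=1):
    med = gamma.ppf(0.5, a=sh, scale=1 / rt)
    lo, hi = gamma.ppf([0.025, 0.975], a=sh, scale=1 / rt)
    print(f"centre {c}: posterior median rate = {med:.3f} (95% CrI {lo:.3f}-{hi:.3f})")
```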

19.
A three-parameter generalisation of the beta-binomial distribution (BBD) derived by Chandon (1976) is examined. We obtain the maximum likelihood estimates of the parameters and give the elements of the information matrix. To exhibit the applicability of the generalised distribution we show how it gives an improved fit over the BBD for magazine exposure and consumer purchasing data. Finally we derive an empirical Bayes estimate of a binomial proportion based on the generalised beta distribution used in this study.
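A minimal sketch of maximum likelihood fitting for the two-parameter beta-binomial baseline (Chandon's three-parameter generalisation is not implemented here), using `scipy.stats.betabinom` and a log-parameterisation to keep the shape parameters positive; the exposure counts are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom

# Hypothetical magazine-exposure data: exposures out of n_issues = 6 issues
n_issues = 6
counts = np.array([200, 110, 70, 45, 30, 25, 20])   # frequency of 0,1,...,6 exposures
k = np.repeat(np.arange(n_issues + 1), counts)

def neg_loglik(params):
    a, b = np.exp(params)                            # enforce a, b > 0
    return -np.sum(betabinom.logpmf(k, n_issues, a, b))

res = minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)
print(f"MLE: alpha = {a_hat:.3f}, beta = {b_hat:.3f}, logLik = {-res.fun:.2f}")
```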

20.
In the present article we study the asymptotic behaviour of the sequence of vectors whose components are the expectations, variances and covariances of the state sizes of a semi-Markov system. To this end, we transform the semi-Markov system into a Markov system with a different though equivalent state space, and express the transition probabilities of the resulting Markov system as functions of the parameters of the semi-Markov system. We also study the asymptotic behaviour of the sequence of vectors whose components are the variances and covariances of the duration state sizes of the related Markov system under perturbation of the transition probability matrices. We use these results to study the asymptotic behaviour, under perturbation, of the sequence of vectors whose components are the variances and covariances of the state sizes of the semi-Markov system.
