Similar Documents
20 similar documents found (search time: 109 ms)
1.
Two discrete-time insurance models are studied in the framework of the cost approach. Since the models are non-deterministic, one deals with decision making under uncertainty. Three situations are investigated: (1) the underlying processes are stochastic and their probability distributions are given; (2) information concerning the distribution laws is incomplete; (3) nothing is known about the processes under consideration. Mathematical methods useful for establishing the (asymptotically) optimal control are demonstrated in each case. Algorithms for calculating the critical levels are proposed, and numerical results are presented.

2.
Recent approaches to the statistical analysis of adverse event (AE) data in clinical trials have proposed the use of groupings of related AEs, such as by system organ class (SOC). These methods make it possible to scan large numbers of AEs while controlling for multiple comparisons, so the comparative performance of the different methods, in terms of AE detection and error rates, is of interest to investigators. We apply two Bayesian models and two procedures for controlling the false discovery rate (FDR), each of which uses groupings of AEs, to real clinical trial safety data. We find that while the Bayesian models are appropriate for the full data set, the error-controlling methods give results similar to the Bayesian methods only when low-incidence AEs are removed. A simulation study is used to compare the relative performance of the methods, both over full trial data sets and over data sets with low-incidence AEs and SOCs removed. We find that while removing low-incidence AEs increases the power of the error-controlling procedures, the estimated power of the Bayesian methods remains relatively constant over all data sizes. However, automatic removal of low-incidence AEs does affect the error rates of all the methods, and a clinically guided approach to their removal is needed. Overall, we find the Bayesian approaches particularly useful for scanning the large amounts of AE data gathered.
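Entry 2 above compares Bayesian models with error-controlling procedures for grouped AE data. As an illustration only (not the grouped procedures used in the paper), here is a minimal sketch of the classical Benjamini-Hochberg step-up procedure for controlling the FDR over a flat list of p-values; the p-values shown are made up.

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Indices rejected by the Benjamini-Hochberg step-up procedure
    at false discovery rate level alpha."""
    m = len(pvalues)
    # Sort p-values ascending, remembering original positions.
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha.
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank / m * alpha:
            k_max = rank
    # Reject every hypothesis whose sorted rank is <= k_max.
    return {idx for rank, idx in enumerate(order, start=1) if rank <= k_max}

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.216]
print(sorted(benjamini_hochberg(pvals, alpha=0.05)))
```

Note the step-up character of the procedure: a hypothesis can be rejected even though its own p-value exceeds its threshold, as long as some larger-ranked p-value passes.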

3.
For uncertain decision problems in which the criterion preference values are interval numbers, a multi-criteria decision-making method based on prospect theory is proposed. The method treats each criterion value as N equally spaced random numbers within the given interval and uses the normal distribution function to describe how the criterion values are distributed over the interval. Given a reference point for each criterion, the prospect value of each alternative under each criterion is computed via the value function and the decision weighting function, and the overall prospect value of each alternative is obtained by weighted averaging. All alternatives are then ranked by their overall prospect values and the best alternative is selected. The feasibility of the method is demonstrated with a simple example.
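Entry 3 builds each alternative's prospect value from a value function and a decision weighting function. A minimal sketch of those two ingredients, using the standard Tversky-Kahneman (1992) parameter values; the paper's own calibration and its normal-distribution treatment of interval criterion values are not reproduced here.

```python
import math

# Standard Tversky-Kahneman (1992) parameter values; the paper's own
# calibration may differ.
ALPHA, BETA, LAMBDA, GAMMA = 0.88, 0.88, 2.25, 0.61

def value(x):
    """Prospect-theory value of a deviation x from the reference point:
    concave for gains, convex and loss-averse (factor LAMBDA) for losses."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** BETA

def weight(p):
    """Probability weighting function pi(p), overweighting small probabilities."""
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

def prospect_value(outcomes, probs, reference):
    """Overall prospect value: weighted sum of values of deviations
    from the reference point."""
    return sum(weight(p) * value(x - reference) for x, p in zip(outcomes, probs))

# A symmetric 50/50 gamble has negative prospect value due to loss aversion.
print(prospect_value([10.0, -10.0], [0.5, 0.5], reference=0.0))
```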

4.
To assess the value of a continuous marker in predicting the risk of a disease, a graphical tool called the predictiveness curve has been proposed. It characterizes the marker's predictiveness, or capacity to risk stratify the population by displaying the distribution of risk endowed by the marker. Methods for making inference about the curve and for comparing curves in a general population have been developed. However, knowledge about a marker's performance in the general population only is not enough. Since a marker's effect on the risk model and its distribution can both differ across subpopulations, its predictiveness may vary when applied to different subpopulations. Moreover, information about the predictiveness of a marker conditional on baseline covariates is valuable for individual decision making about having the marker measured or not. Therefore, to fully realize the usefulness of a risk prediction marker, it is important to study its performance conditional on covariates. In this article, we propose semiparametric methods for estimating covariate-specific predictiveness curves for a continuous marker. Unmatched and matched case-control study designs are accommodated. We illustrate application of the methodology by evaluating serum creatinine as a predictor of risk of renal artery stenosis.
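Entry 4 concerns the predictiveness curve R(v): the v-th quantile of the population distribution of risk endowed by the marker. A minimal empirical sketch on made-up risk values; the paper's semiparametric, covariate-specific estimators are far more involved.

```python
def predictiveness_curve(risks, v):
    """Empirical predictiveness curve: R(v) is the v-th quantile of the
    distribution of risk(marker) in the population. A curve that is flat
    at the disease prevalence means the marker does not risk-stratify."""
    xs = sorted(risks)
    idx = min(int(v * len(xs)), len(xs) - 1)
    return xs[idx]

# Hypothetical modeled risks for ten subjects.
risks = [0.01, 0.02, 0.05, 0.10, 0.20, 0.35, 0.50, 0.70, 0.80, 0.95]
print([predictiveness_curve(risks, v) for v in (0.1, 0.5, 0.9)])
```

A steeply rising curve indicates good risk stratification: low-risk subjects get very low risks and high-risk subjects very high ones.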

5.
A dynamic regime provides a sequence of treatments that are tailored to patient-specific characteristics and outcomes. In 2004, James Robins proposed g-estimation using structural nested mean models (SNMMs) for making inference about the optimal dynamic regime in a multi-interval trial. The method provides clear advantages over traditional parametric approaches. Robins' g-estimation method always yields consistent estimators, but these can be asymptotically biased under a given SNMM for certain longitudinal distributions of the treatments and covariates, termed exceptional laws. In fact, under the null hypothesis of no treatment effect, every distribution constitutes an exceptional law under SNMMs which allow for interaction of current treatment with past treatments or covariates. This paper provides an explanation of exceptional laws and describes a new approach to g-estimation which we call Zeroing Instead of Plugging In (ZIPI). ZIPI provides nearly identical estimators to recursive g-estimators at non-exceptional laws while providing a substantial reduction in bias at an exceptional law when decision rule parameters are not shared across intervals.

6.
Pretest-posttest designs serve as building blocks for other, more complicated repeated-measures designs. In settings where subjects are independent and errors follow a bivariate normal distribution, data analysis may consist of a univariate repeated-measures analysis or an analysis of covariance. Another possible approach is seemingly unrelated regression (SUR). The purpose of this article is to help guide the statistician toward an appropriate choice of analysis. Assumptions, estimates, and test statistics for each analysis are approached in a systematic manner. On the basis of these results, the crucial choice of analysis is whether differences in pretest group means are conceived to be real or the result of pure measurement error. Direct consultation of the statistician with a subject-matter person is important in making the right choice. If pretest group differences are real, then a univariate repeated-measures analysis is recommended. If pretest group differences are the result of pure measurement error, then a conditional analysis or SUR analysis should be used; the two will produce similar results. Smaller variance estimates can be expected from the SUR analysis, but this gain is partially offset by the lack of an exact distribution for the test statistics.

7.
In this work, we define a new ranked set sampling (RSS) scheme suited to the case in which the characteristic Y of primary interest is jointly distributed with an auxiliary characteristic X that can be measured on any number of units: units having record values on X alone are ranked and retained for measurement on Y. We call this scheme concomitant record ranked set sampling (CRRSS). We propose estimators, based on CRRSS observations, of the parameters associated with the variable Y of primary interest; these apply to a very large class of distributions, viz. the Morgenstern family. We illustrate the application of CRRSS and our estimation technique when the underlying distribution is the Morgenstern-type bivariate logistic distribution. A primary data set collected by the CRRSS method is presented and used to illustrate the results developed in this work.

8.
Ji Meifeng & Wang Jun, 《统计研究》 (Statistical Research), 2007, 24(8): 57-59
This paper studies the statistical properties of the fluctuations of the Shenzhen and Shanghai real-estate indices, focusing on the daily returns of the two indices over 2001-2006. First, treating the series as stationary, skewness-kurtosis tests and the Kolmogorov-Smirnov test are applied to the real-estate index return distributions of the two markets; the results show that the composite real-estate index returns on the Chinese securities markets deviate measurably from the Gaussian distribution. Further statistical analysis shows that the real-estate index returns of both markets follow a power-law distribution. Finally, the corresponding index prices are analyzed statistically and their statistical regularities are discussed.
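Entry 8 tests index returns for departure from the Gaussian via skewness and kurtosis. A stdlib-only sketch of the sample skewness and excess kurtosis on which such tests are based, run on simulated (not market) data; the paper's full skewness-kurtosis and Kolmogorov-Smirnov testing machinery is not reproduced.

```python
import math
import random

def moments(xs):
    """Sample skewness and excess kurtosis; for Gaussian data both are near 0,
    while heavy-tailed return series show large positive excess kurtosis."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
    skew = sum(((x - mean) / sd) ** 3 for x in xs) / n
    kurt = sum(((x - mean) / sd) ** 4 for x in xs) / n - 3.0
    return skew, kurt

random.seed(0)
gauss_sample = [random.gauss(0, 1) for _ in range(20000)]
heavy_sample = [random.gauss(0, 1) ** 3 for _ in range(20000)]  # heavy-tailed proxy

print(moments(gauss_sample))  # both close to 0
print(moments(heavy_sample))  # large excess kurtosis
```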

9.
In the previous installment of this column, Donnice Cochenour wrote about Project Muse, Johns Hopkins University Press's project to provide electronic access to its journals. This column will explore OCLC's collaboration with publishers who are making traditional print publications available electronically. Serials Review interviewed Andrea Keyhani, Manager of Electronic Publishing at OCLC, about traditional print publishers' interests in electronic distribution of journals, OCLC's solution to publishers' migration to electronic distribution, enhancements to their Guidon software, and libraries' costs and archive concerns.

10.
Traditional statistical modeling of continuous outcome variables relies heavily on the assumption of a normal distribution. However, in some applications, such as the analysis of microRNA (miRNA) data, normality may not hold. Skewed distributions play an important role in such studies and can yield robust results in the presence of extreme outliers. We apply a skew-normal (SN) distribution, indexed by three parameters (location, scale and shape), in the context of miRNA studies. We develop a test statistic for comparing the means of two conditions, replacing the normal assumption with the SN distribution, and compare its performance with other Wald-type statistics through simulations. Two real miRNA datasets are analyzed to illustrate the methods. Our simulation findings show that use of the SN distribution can improve identification of differentially expressed miRNAs, especially with markedly skewed data and when the two groups have different variances. The statistic with the SN assumption performs comparably with other Wald-type statistics irrespective of sample size or distribution, and the real data analyses suggest it can be used effectively to identify important miRNAs. Overall, the statistic with the SN distribution is useful when the data are asymmetric and the two groups have different variances.
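Entry 10 replaces the normal assumption with the three-parameter skew-normal (SN) distribution. A sketch of its density, f(z) = 2·φ(z)·Φ(shape·z)/scale with z = (x − loc)/scale, which reduces to the normal density when shape = 0; the paper's test statistic itself is not reproduced here.

```python
import math

def skew_normal_pdf(x, loc=0.0, scale=1.0, shape=0.0):
    """Skew-normal density: f(z) = 2 * phi(z) * Phi(shape * z) / scale,
    where phi and Phi are the standard normal pdf and cdf.
    shape = 0 recovers the ordinary normal density."""
    z = (x - loc) / scale
    phi = math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(shape * z / math.sqrt(2.0)))
    return 2.0 * phi * Phi / scale

# shape = 0: standard normal density at 0; shape > 0 shifts mass to the right.
print(skew_normal_pdf(0.0))
print(skew_normal_pdf(1.0, shape=2.0), skew_normal_pdf(-1.0, shape=2.0))
```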

11.
Application of quantile regression models with measurement error in the predictors is becoming increasingly popular. High-leverage points in the predictors can have substantial impacts on these models. Here, we propose a predictive leverage statistic for such models, assuming the measurement errors follow a multivariate normal distribution, and derive its exact distribution. We compare its performance with known predictive leverage statistics using simulation and a real dataset. The proposed statistic is shown to have desirable features; it is also the first predictive leverage statistic whose distribution has been derived in closed form.

12.
This paper focuses on inference based on the confidence distributions of the nonparametric regression function and its derivatives, in which dependent inferences are combined by obtaining information about their dependency structure. We first give a motivating example from a production operation system to illustrate the practical necessity of the problems studied in this paper. A goodness-of-fit test for the polynomial regression model is proposed on the basis of combined confidence distribution inference, which reduces to Fisher's combination statistic in some cases. On the basis of the test results, a combined estimator for the p-th order derivative of the nonparametric regression function is provided, together with its large-sample properties. The performance of the proposed test and estimation method is illustrated by three specific examples, and the motivating example is then analyzed in detail. The simulated and real data examples illustrate the good performance and practicability of the proposed confidence-distribution-based methods.

13.
For a fixed positive integer k, the limit laws of linearly normalized kth upper order statistics are well known. In this article, a comprehensive study of the tail behaviour of these limit laws under fixed and random sample sizes is carried out using tail equivalence, leading to some interesting results on the tails of the limit laws and to definitive answers about their max domains of attraction. Stochastic ordering properties of the limit laws are also studied. The results obtained do not depend on linear norming, apply to power norming as well, and generalize some results already available in the literature. The proofs given here are elementary.

14.
The author proposes a reduced, three-parameter version of the new modified Weibull (NMW) distribution in order to avoid certain estimation problems. The mathematical properties and maximum-likelihood estimation of the reduced version are studied. Four real data sets (complete and censored) are used to compare the flexibility of the reduced version with that of the NMW distribution. It is shown that the reduced version retains the desirable properties of the NMW distribution despite having two fewer parameters; the NMW distribution did not provide a significantly better fit than the reduced version.


16.
Evidence-based quantitative methodologies have been proposed to inform decision-making in drug development, such as metrics for go/no-go decisions or predictions of success, the latter identified with statistical significance of future clinical trials. While these methodologies appropriately address some critical questions about the potential of a drug, they either consider past evidence without predicting the outcome of future trials or focus only on efficacy, failing to account for the multifaceted aspects of successful drug development. As quantitative benefit-risk assessments could enhance decision-making, we propose a more comprehensive approach using a composite definition of success based not only on the statistical significance of the treatment effect on the primary endpoint but also on its clinical relevance and on a favorable benefit-risk balance in the next pivotal studies. For one drug, we can thus study several development strategies before starting the pivotal trials by comparing their predictive probabilities of success. The predictions are based on the available evidence from the previous trials, to which new hypotheses on the future development can be added. The resulting predictive probability of composite success provides a useful summary to support the discussions of the decision-makers. We present a fictitious, but realistic, example in major depressive disorder inspired by a real decision-making case.
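Entry 16 proposes a predictive probability of composite success: significance and clinical relevance in the next pivotal trial. A Monte Carlo sketch of that idea for a two-arm trial with a normal posterior on the treatment effect; all numbers (posterior, sample size, outcome SD, minimal clinically important difference) are hypothetical, and the paper's full benefit-risk component is omitted.

```python
import math
import random
import statistics

def predictive_success(mu_post, sd_post, n_per_arm, sigma,
                       mcid=0.0, alpha=0.025, draws=20000, seed=1):
    """Monte Carlo predictive probability that a future two-arm trial with
    n_per_arm patients per arm is both statistically significant (one-sided
    level alpha) and clinically relevant (observed effect above the minimal
    clinically important difference, mcid): a composite success criterion."""
    rng = random.Random(seed)
    se = sigma * math.sqrt(2.0 / n_per_arm)          # SE of the effect estimate
    z = statistics.NormalDist().inv_cdf(1 - alpha)   # one-sided critical value
    cutoff = max(z * se, mcid)                       # composite success threshold
    hits = 0
    for _ in range(draws):
        true_effect = rng.gauss(mu_post, sd_post)    # draw from the current posterior
        estimate = rng.gauss(true_effect, se)        # simulate the future trial estimate
        hits += estimate > cutoff
    return hits / draws

print(predictive_success(mu_post=3.0, sd_post=1.5, n_per_arm=150, sigma=8.0, mcid=2.0))
```

Comparing this quantity across candidate designs (sample sizes, endpoints) is what supports the strategy discussion the abstract describes.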

17.
The problem of making statistical inference about θ = P(X > Y) has been investigated extensively in the literature using simple random sampling (SRS) data. The problem arises naturally in the area of reliability, for a system with strength X and stress Y. In this study, we consider making statistical inference about θ using ranked set sampling (RSS) data. Several estimators of θ based on RSS are proposed; their properties are investigated and compared with those of known estimators based on SRS data. The proposed estimators based on RSS dominate those based on SRS. A motivating example using a real data set is given to illustrate the computation of the newly suggested estimators.
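Entry 17 estimates θ = P(X > Y) for a strength-stress system. The standard SRS estimator against which RSS estimators are usually compared is the proportion of concordant pairs (a rescaled Mann-Whitney statistic); a minimal sketch on made-up strength and stress measurements:

```python
def theta_hat(xs, ys):
    """Nonparametric SRS estimator of theta = P(X > Y): the proportion of
    pairs (x, y) with x > y, i.e. the Mann-Whitney statistic scaled to [0, 1]."""
    return sum(x > y for x in xs for y in ys) / (len(xs) * len(ys))

strength = [5.1, 6.0, 7.2, 4.8]   # hypothetical strength measurements (X)
stress = [4.0, 5.5, 3.9]          # hypothetical stress measurements (Y)
print(theta_hat(strength, stress))  # 10 of 12 pairs have x > y
```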

18.
Minimax estimation of a binomial probability under LINEX loss function is considered. It is shown that no equalizer estimator is available in the statistical decision problem under consideration. It is pointed out that the problem can be solved by determining the Bayes estimator with respect to a least favorable distribution having finite support. In this situation, the optimal estimator and the least favorable distribution can be determined only by using numerical methods. Some properties of the minimax estimators and the corresponding least favorable prior distributions are provided depending on the parameters of the loss function. The properties presented are exploited in computing the minimax estimators and the least favorable distributions. The results obtained can be applied to determine minimax estimators of a cumulative distribution function and minimax estimators of a survival function.
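Entry 18 works with the LINEX loss, under which the Bayes estimator is −(1/a)·log E[exp(−aθ)], the expectation taken over the posterior. A numerical sketch for an illustrative Beta(2, 2) posterior on a probability θ; the paper's minimax/least-favorable-prior computation is a different and harder problem.

```python
import math

def linex_loss(delta, theta, a=1.0):
    """LINEX loss exp(a*d) - a*d - 1 with d = estimate - parameter:
    asymmetric, penalizing over- and under-estimation differently."""
    d = delta - theta
    return math.exp(a * d) - a * d - 1.0

def bayes_linex(posterior_pdf, a=1.0, grid=10000):
    """Bayes estimator of theta in (0, 1) under LINEX loss:
    -(1/a) * log E[exp(-a*theta)], the posterior expectation evaluated
    by a midpoint Riemann sum."""
    h = 1.0 / grid
    num = sum(math.exp(-a * ((i + 0.5) * h)) * posterior_pdf((i + 0.5) * h) * h
              for i in range(grid))
    den = sum(posterior_pdf((i + 0.5) * h) * h for i in range(grid))
    return -math.log(num / den) / a

# Illustrative Beta(2, 2) posterior; with a > 0 the LINEX Bayes estimate
# falls slightly below the posterior mean 0.5, guarding against overestimation.
beta22 = lambda t: 6.0 * t * (1.0 - t)
print(bayes_linex(beta22, a=1.0))
```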

19.
A Statistical Study of 《统计研究》 (cited by: 4; self-citations: 2; citations by others: 2)
刘传哲 (Liu Chuanzhe), 《统计研究》 (Statistical Research), 1998, 15(5): 62-65
1. Introduction. 《统计研究》 (Statistical Research) is the academic journal sponsored by the China Statistical Association and the Statistical Science Research Institute of the National Bureau of Statistics. Since its founding in 1981, it has played an extremely important role in enlivening the academic atmosphere of China's statistical community, publishing the latest results in the field, promoting academic exchange at home and abroad, cultivating and discovering talent, and advancing the development of statistics in China. To further demonstrate the journal's...

20.
Statistical models are sometimes incorporated into computer software for making predictions about future observations. When the computer model consists of a single statistical model this corresponds to estimation of a function of the model parameters. This paper is concerned with the case that the computer model implements multiple, individually-estimated statistical sub-models. This case frequently arises, for example, in models for medical decision making that derive parameter information from multiple clinical studies. We develop a method for calculating the posterior mean of a function of the parameter vectors of multiple statistical models that is easy to implement in computer software, has high asymptotic accuracy, and has a computational cost linear in the total number of model parameters. The formula is then used to derive a general result about posterior estimation across multiple models. The utility of the results is illustrated by application to clinical software that estimates the risk of fatal coronary disease in people with diabetes.
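Entry 20 computes the posterior mean of a function of parameters from several independently estimated sub-models. The paper derives a fast asymptotic formula; the target quantity itself can be sketched by brute-force Monte Carlo over independent posterior draws. The samplers and the function f below are purely hypothetical.

```python
import random

def posterior_mean_of_function(samplers, f, draws=20000, seed=7):
    """Monte Carlo posterior mean of f(theta_1, ..., theta_k) when the k
    sub-models were estimated independently: draw each parameter from its
    own posterior and average f over the joint draws."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(draws):
        total += f(*(s(rng) for s in samplers))
    return total / draws

# Hypothetical example: two sub-models with normal posteriors; the software's
# prediction is their product, so the posterior mean is near 1.0 * 2.0 = 2.0.
samplers = [lambda rng: rng.gauss(1.0, 0.1), lambda rng: rng.gauss(2.0, 0.1)]
print(posterior_mean_of_function(samplers, lambda a, b: a * b))
```

The paper's contribution is avoiding exactly this sampling cost with a formula whose cost is linear in the total number of parameters.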


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.), 京ICP备09084417号