Similar Literature
20 similar documents retrieved.
1.
In this article we consider the problem of estimating the mean of a univariate normal population with unknown variance when uncertain non-sample prior information about the mean is available. We compare four estimators of the mean, including pretest and shrinkage estimators. The performances of the estimators are compared using the multiple criteria decision making (MCDM) procedure in order to identify the best estimator.
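As a hedged illustration only (not the authors' four estimators), the Python sketch below shows a generic pretest estimator, which retains a hypothesised mean mu0 unless a one-sample t-test rejects it, and a simple shrinkage estimator that pulls the sample mean toward mu0; the significance level and shrinkage weight are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def pretest_estimator(x, mu0, alpha=0.05):
    """Return mu0 unless a two-sided t-test of H0: mu = mu0 rejects at level alpha."""
    res = stats.ttest_1samp(x, mu0)
    return np.mean(x) if res.pvalue < alpha else mu0

def shrinkage_estimator(x, mu0, weight=0.5):
    """Convex combination of the sample mean and the non-sample prior guess mu0."""
    return weight * np.mean(x) + (1 - weight) * mu0

rng = np.random.default_rng(0)
x = rng.normal(loc=2.3, scale=1.5, size=25)  # variance treated as unknown in the analysis
print(pretest_estimator(x, mu0=2.0), shrinkage_estimator(x, mu0=2.0))
```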

2.
The relationship Y = RX between two random variables X and Y, where R is distributed independently of X on (0, 1), is known to have important consequences in different fields such as income distribution analysis, inventory decision models, etc.

In this paper it is shown that when X and Y are discrete random variables, relationships of a similar nature lead to Yule-type distributions. The implications of the results are studied in connection with problems of income underreporting and inventory decision making.
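As a quick illustrative simulation (not taken from the paper, which treats the discrete case), the continuous analogue of the Y = RX relationship can be explored numerically: reported income Y is true income X multiplied by an independent under-reporting factor R in (0, 1). The Pareto and Beta choices below are assumptions for the example; independence implies E[Y] = E[R]E[X].

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# True income X: classical Pareto tail with scale 20,000 (illustrative choice).
x = (1 + rng.pareto(a=2.5, size=n)) * 20_000
# Under-reporting factor R in (0, 1), drawn independently of X (illustrative Beta choice).
r = rng.beta(8, 2, size=n)

y = r * x  # reported income

print(f"mean true income:     {x.mean():.0f}")
print(f"mean reported income: {y.mean():.0f}")
print(f"E[Y]/E[X] = {y.mean() / x.mean():.3f}  vs  E[R] = {r.mean():.3f}")
```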

3.
In this paper, we define a multiple cases deletion model (MCDM) in linear measurement error models (LMEMs). Then, using the corrected score method of Nakamura (1990), we obtain the parameter estimates. Furthermore, based on the MCDM, we provide computationally inexpensive deletion diagnostic tools for LMEMs. An example illustrates that our method is useful for diagnosing influential subsets of observations.

4.
This paper illustrates an approach to setting the decision framework for a study in early clinical drug development. It shows how the criteria for a go and a stop decision are calculated based on pre-specified target and lower reference values. The framework can lead to a three-outcome approach by including a consider zone; this could enable smaller studies to be performed in early development, with other information either external to or within the study used to reach a go or stop decision. In this way, Phase I/II trials can be geared towards providing actionable decision-making rather than the traditional focus on statistical significance. The example provided illustrates how the decision criteria were calculated for a Phase II study, including an interim analysis, and how the operating characteristics were assessed to ensure the decision criteria were robust.
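The paper specifies its own criteria; purely as a hedged sketch of how such a three-outcome rule can be operationalised, the snippet below compares a confidence interval for the treatment effect with a target value (TV) and a lower reference value (LRV). The particular rule, confidence level, and numbers are assumptions for illustration.

```python
from scipy import stats

def three_outcome_decision(effect_hat, se, tv, lrv, level=0.80):
    """Illustrative go/consider/stop rule (not the paper's exact criteria):
    'go'   if the CI clears the LRV and the point estimate meets the TV,
    'stop' if the CI upper bound falls short of the LRV,
    'consider' otherwise."""
    z = stats.norm.ppf(0.5 + level / 2)          # two-sided confidence interval
    lo, hi = effect_hat - z * se, effect_hat + z * se
    if lo > lrv and effect_hat >= tv:
        return "go"
    if hi < lrv:
        return "stop"
    return "consider"

print(three_outcome_decision(effect_hat=0.35, se=0.10, tv=0.30, lrv=0.15))  # -> 'go'
```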

5.
In this article we discuss variable selection for decision making with a focus on decisions regarding when to provide treatment and which treatment to provide. Current variable selection techniques were developed for use in a supervised learning setting where the goal is prediction of the response. These techniques often downplay the importance of interaction variables that have small predictive ability but that are critical when the ultimate goal is decision making rather than prediction. We propose two new techniques designed specifically to find variables that aid in decision making. Simulation results are given along with an application of the methods to data from a randomized controlled trial for the treatment of depression.

6.
The paper looks at the problem of comparing two treatments, for a particular population of patients, where one is the current standard treatment and the other a possible alternative under investigation. With limited (finite) financial resources, the decision whether to replace one with the other will not be based on health benefits alone. This motivates an economic evaluation of the two competing treatments in which the cost of any gain in health benefit is scrutinized; whether this cost is acceptable to the relevant authorities determines whether the new treatment can become the standard. We adopt a Bayesian decision theoretic framework in which a utility function describes the consequences of making a particular decision when the true state of nature is expressed via an unknown parameter θ (this parameter denotes cost, effectiveness, etc.). The decision rule selects the treatment providing the maximum posterior expected utility, with expectations taken over the posterior distribution of θ.
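A minimal sketch of the posterior-expected-utility rule, assuming a net-monetary-benefit-style utility and illustrative posterior draws of cost and effectiveness; neither the utility function nor the numbers come from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative posterior draws of (cost, effectiveness) for each treatment.
post = {
    "standard": {"cost": rng.normal(10_000, 1_000, 5_000), "eff": rng.normal(0.60, 0.05, 5_000)},
    "new":      {"cost": rng.normal(13_000, 1_500, 5_000), "eff": rng.normal(0.68, 0.05, 5_000)},
}

def utility(cost, eff, wtp=50_000):
    """Assumed utility: willingness-to-pay per unit of effectiveness minus cost."""
    return wtp * eff - cost

expected_u = {t: utility(d["cost"], d["eff"]).mean() for t, d in post.items()}
print(expected_u, "-> choose:", max(expected_u, key=expected_u.get))
```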

7.
Addressing the homogeneity of decision-making units in multi-input, multi-output evaluation systems, and drawing on the development of multi-stage DEA models, this paper proposes a six-stage DEA model that combines Tobit and SFA multiple linear regression analysis with the DEA model. For the first time, external environmental variables are distinguished into positive and negative environmental variables; input-redundancy and output-shortfall slack variables are fully exploited to readjust the inputs or outputs, removing the influence of environmental variables, random error, and managerial inefficiency on the evaluation of system efficiency and yielding a pure managerial efficiency. An empirical analysis using 2009 data on commercial banks confirms that, as a continuation of the multi-stage DEA model, the six-stage DEA model can serve as a reference criterion for judging the homogeneity of decision-making units in an evaluation system and helps to build a systematic and comprehensive evaluation indicator system. The method can be extended to panel data and is also instructive for system evaluation outside the DEA framework.

8.
Decision making is a critical component of a new drug development process. Based on results from an early clinical trial such as a proof of concept trial, the sponsor can decide whether to continue, stop, or defer the development of the drug. To simplify and harmonize the decision-making process, decision criteria have been proposed in the literature. One of them is to examine the location of a confidence bar relative to the target value and lower reference value of the treatment effect. In this research, we modify an existing approach by moving some of the "stop" decisions to "consider" decisions so that the chance of directly terminating the development of a potentially valuable drug is reduced. As Bayesian analysis offers certain flexibilities and can borrow historical information through an inferential prior, we apply Bayesian analysis to trial planning and decision making. Via a design prior, we can also calculate the probabilities of various decision outcomes in relation to the sample size and other parameters to support the study design. An example and a series of computations are used to illustrate the applications, assess the operating characteristics, and compare the performances of different approaches.
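As a hedged sketch of the Bayesian flavour of such criteria (the normal-normal model, the probability thresholds, and the numbers are illustrative assumptions rather than the paper's specification), the decision can be keyed to posterior probabilities of exceeding the target value (TV) and falling below the lower reference value (LRV):

```python
from scipy import stats

def bayes_three_outcome(y_bar, se, prior_mean, prior_sd, tv, lrv,
                        go_prob=0.30, stop_prob=0.70):
    """Conjugate normal-normal update with illustrative thresholds:
    'go'   if P(theta > TV)  >= go_prob,
    'stop' if P(theta < LRV) >= stop_prob,
    'consider' otherwise."""
    prec = 1 / prior_sd**2 + 1 / se**2
    post_mean = (prior_mean / prior_sd**2 + y_bar / se**2) / prec
    post_sd = prec ** -0.5
    p_above_tv = 1 - stats.norm.cdf(tv, post_mean, post_sd)
    p_below_lrv = stats.norm.cdf(lrv, post_mean, post_sd)
    if p_above_tv >= go_prob:
        return "go"
    if p_below_lrv >= stop_prob:
        return "stop"
    return "consider"

print(bayes_three_outcome(y_bar=0.28, se=0.12, prior_mean=0.0, prior_sd=1.0,
                          tv=0.30, lrv=0.10))
```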

9.
Predictive enrichment strategies use biomarkers to selectively enroll oncology patients into clinical trials to more efficiently demonstrate therapeutic benefit. Because the enriched population differs from the patient population eligible for screening with the biomarker assay, there is potential for bias when estimating clinical utility for the screening-eligible population if the selection process is ignored. We write estimators of clinical utility as integrals averaging regression model predictions over the conditional distribution of the biomarker scores defined by the assay cutoff, and we discuss the conditions under which consistent estimation can be achieved while accounting for some nuances that may arise as the biomarker assay progresses toward a companion diagnostic. We outline and implement a Bayesian approach to estimating these clinical utility measures and use simulations to illustrate performance and the potential biases when estimation naively ignores enrichment. Results suggest that the proposed integral representation of clinical utility, in combination with Bayesian methods, provides a practical strategy to facilitate cutoff decision-making in this setting.
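A hedged, much simplified sketch of the integral idea (the biomarker distribution, outcome model, and cutoff below are invented for illustration): clinical utility in the enriched population is approximated by averaging outcome-model predictions over the biomarker scores that clear the assay cutoff, i.e. a Monte Carlo version of the conditional integral.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative screening-eligible population of biomarker scores.
biomarker = rng.normal(0.0, 1.0, 200_000)

def predicted_response(x, treated):
    """Assumed outcome model: treatment benefit grows with the biomarker score."""
    return 0.20 + treated * (0.05 + 0.10 * x)

cutoff = 0.5
enrolled = biomarker[biomarker >= cutoff]  # predictive enrichment

# Average treatment-vs-control difference over the conditional biomarker distribution.
utility_enriched = np.mean(predicted_response(enrolled, 1) - predicted_response(enrolled, 0))
utility_overall = np.mean(predicted_response(biomarker, 1) - predicted_response(biomarker, 0))
print(f"utility above cutoff: {utility_enriched:.3f}  vs  whole population: {utility_overall:.3f}")
```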

10.
A popular model for competing risks postulates the existence of a latent unobserved failure time for each risk. Assuming that these underlying failure times are independent is attractive since it allows standard statistical tools for right-censored lifetime data to be used in the analysis. This paper proposes simple independence score tests for the validity of this assumption when the individual risks are modeled using semiparametric proportional hazards regressions. It assumes that covariates are available, making the model identifiable. The score tests are derived for alternatives that specify that copulas are responsible for a possible dependency between the competing risks. The test statistics are constructed by adding to the partial likelihoods for the individual risks an explanatory variable for the dependency between the risks. A variance estimator is derived by writing the score function and the Fisher information matrix for the marginal models as stochastic integrals. Pitman efficiencies are used to compare test statistics. A simulation study and a numerical example illustrate the methodology proposed in this paper.

11.
Decision making with adaptive utility provides a generalisation of classical Bayesian decision theory, allowing the creation of a normative theory for decision selection when preferences are initially uncertain. In this paper we address some of the foundational issues of adaptive utility as seen from the perspective of a Bayesian statistician. The implications that such a generalisation has for the traditional utility concepts of value of information and risk aversion are also explored, and a new concept of trial aversion is introduced that is similar to risk aversion but concerns a decision maker's aversion to selecting decisions with high uncertainty over the resulting utility.

12.
Model-informed drug discovery and development offers the promise of more efficient clinical development, with increased productivity and reduced cost through scientific decision making and risk management. Go/no-go development decisions in the pharmaceutical industry are often driven by effect size estimates, with the goal of meeting commercially generated target profiles. Sufficient efficacy is critical for eventual success, but the decision to advance to the next development phase also depends on adequate knowledge of the appropriate dose and dose-response. Doses that are too high or too low pose a risk of clinical or commercial failure. This paper addresses this issue and continues the evolution of formal decision frameworks in drug development. Here, we consider the integration of both efficacy and dose-response estimation accuracy into the go/no-go decision process, using a model-based approach. Using prespecified target and lower reference values associated with both efficacy and dose accuracy, we build a decision framework to more completely characterize development risk. Given the limited knowledge of dose response in early development, our approach incorporates a set of dose-response models and uses model averaging. The approach and its operating characteristics are illustrated through simulation. Finally, we demonstrate the decision approach on a post hoc analysis of the phase 2 data for naloxegol (a drug approved for opioid-induced constipation).
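As a hedged sketch of the model-averaging ingredient only (the candidate models, the simulated data, and the AIC weighting below are illustrative assumptions, not the paper's prespecified framework), several dose-response models can be fitted and their predictions combined:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
dose = np.array([0, 10, 25, 50, 100, 150], dtype=float)
resp = 0.1 + 0.9 * dose / (40 + dose) + rng.normal(0, 0.05, dose.size)  # simulated data

def emax(d, e0, emax_, ed50):
    return e0 + emax_ * d / (ed50 + d)

def linear(d, e0, slope):
    return e0 + slope * d

def aic(y, yhat, n_par):
    n = y.size
    return n * np.log(np.sum((y - yhat) ** 2) / n) + 2 * n_par

fits = {}
for name, f, p0 in [("emax", emax, [0.1, 1.0, 30.0]), ("linear", linear, [0.1, 0.005])]:
    popt, _ = curve_fit(f, dose, resp, p0=p0, maxfev=10_000)
    fits[name] = (f, popt, aic(resp, f(dose, *popt), len(popt)))

# AIC weights and a model-averaged prediction at an untested dose.
aics = np.array([a for _, _, a in fits.values()])
w = np.exp(-0.5 * (aics - aics.min()))
w /= w.sum()
pred75 = sum(wi * f(75.0, *popt) for wi, (f, popt, _) in zip(w, fits.values()))
print(dict(zip(fits, w.round(3))), f"model-averaged prediction at dose 75: {pred75:.3f}")
```

Averaging in this way lets the prediction at a new dose reflect uncertainty about the dose-response shape through the weights rather than relying on a single fitted model.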

13.
Many tasks in image analysis can be formulated as problems of discrimination or, more generally, of pattern recognition. A pattern-recognition system is normally considered to comprise two processing stages: the feature selection and extraction stage, which attempts to reduce the dimensionality of the pattern to be classified, and the classification stage, the purpose of which is to assign the pattern to its perceptually meaningful category. This paper gives an overview of the various approaches to designing statistical pattern recognition schemes. The problem of feature selection and extraction is introduced. The discussion then focuses on statistical decision theoretic rules and their implementation. Both parametric and non-parametric classification methods are covered. The emphasis then switches to decision making in context. Two basic formulations of contextual pattern classification are put forward, and the various methods developed from these two formulations are reviewed. These include the method of hidden Markov chains, the Markov random field approach, Markov meshes, and probabilistic and discrete relaxation.

14.
This paper is devoted to applications of the Choquet integral with respect to monotone set functions in economics. We present applications in decision making, finance, insurance, social welfare, and quality of life. The Choquet integral is used as the numerical representation of a preference relation in decision making, as the "expected value" of a future price in financial decision problems, as the insurance premium, and as the social evaluation function.
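As a hedged illustration of the underlying tool (the criteria names and capacity values are invented for the example), the discrete Choquet integral of a score vector x with respect to a monotone set function v sums the increments of the sorted scores, each weighted by the capacity of the set of criteria still at or above that level:

```python
def choquet(x, capacity):
    """Discrete Choquet integral of x (dict: criterion -> score) with respect to a
    monotone set function `capacity` (dict: frozenset of criteria -> value in [0, 1])."""
    order = sorted(x, key=x.get)              # criteria sorted by score, ascending
    prev, total = 0.0, 0.0
    for i, c in enumerate(order):
        active = frozenset(order[i:])         # criteria whose score is >= x[c]
        total += (x[c] - prev) * capacity[active]
        prev = x[c]
    return total

# Invented example: two criteria with an interacting (non-additive) capacity.
x = {"return": 0.7, "risk_control": 0.4}
capacity = {
    frozenset(): 0.0,
    frozenset({"return"}): 0.5,
    frozenset({"risk_control"}): 0.4,
    frozenset({"return", "risk_control"}): 1.0,
}
print(choquet(x, capacity))   # 0.4 * 1.0 + (0.7 - 0.4) * 0.5 = 0.55
```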

16.
In this paper the jackknife estimate of the covariance of two Kaplan–Meier integrals with covariates is introduced. Its strong consistency is established under mild conditions. Several applications of the estimator are discussed.

17.
ROC analysis involving two large datasets is an important method for analyzing statistics of interest for classifier decision making in many disciplines. Data dependency due to multiple uses of the same subjects, done to generate more samples when resources are limited, is ubiquitous. Hence, a two-layer data structure is constructed and the nonparametric two-sample two-layer bootstrap is employed to estimate standard errors of statistics of interest derived from two sets of data, such as a weighted sum of two probabilities. In this article, to reduce the bootstrap variance and ensure the accuracy of computation, Monte Carlo studies of bootstrap variability were carried out to determine the appropriate number of bootstrap replications in ROC analysis with data dependency. It is suggested that, with a tolerance of 0.02 for the coefficient of variation, 2,000 bootstrap replications are appropriate under such circumstances.
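A hedged one-sample sketch of a two-layer nonparametric bootstrap (the data, the statistic, and the defaults are illustrative; the paper's two-sample setting and its statistic of interest differ): subjects are resampled in the first layer and each selected subject's scores in the second, with 2,000 replications as suggested above.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative two-layer data: scores nested within subjects (dependency within subject).
n_subjects, scores_per_subject = 40, 5
subject_effect = rng.normal(0, 1, n_subjects)
data = [rng.normal(mu, 0.5, scores_per_subject) for mu in subject_effect]

def two_layer_bootstrap(data, stat=np.mean, n_boot=2_000, rng=rng):
    """Layer 1: resample subjects with replacement; layer 2: resample each chosen
    subject's scores with replacement; return the bootstrap SE of the pooled statistic."""
    stats_boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(data), len(data))                                      # layer 1
        resampled = [rng.choice(data[i], size=len(data[i]), replace=True) for i in idx]  # layer 2
        stats_boot[b] = stat(np.concatenate(resampled))
    return stats_boot.std(ddof=1)

print(f"two-layer bootstrap SE of the mean: {two_layer_bootstrap(data):.4f}")
```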

18.
Addressing the normalization problem that arises when multi-objective decision values are interval numbers, a [-1, 1] linear transformation operator with a "reward-the-good, punish-the-bad" property is proposed to normalize the raw decision information. The operator is applied to multi-objective grey situation decision making in which the objective weights are to be determined and the attribute values are interval numbers, yielding an interval-number multi-objective grey situation decision method based on the "reward-the-good, punish-the-bad" operator. The selection of an air-to-ship missile design scheme is used as an application case. The results show that the method is convenient to operate, computationally simple, and easy to implement, and it provides an effective, scientific, and practical approach to some uncertain decision problems with interval values.
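The operator itself is defined in the paper; the snippet below is only a hypothetical sketch of the general "reward-the-good, punish-the-bad" idea for a benefit-type criterion, mapping interval midpoints above the column mean into (0, 1] and those below into [-1, 0). The function name and the rescaling formula are assumptions for illustration.

```python
import numpy as np

def reward_punish_normalise(intervals):
    """Hypothetical [-1, 1] rescaling of interval values [a_i, b_i] for a benefit-type
    criterion (illustration only, not the paper's operator): midpoints above the mean
    map to (0, 1], those below to [-1, 0)."""
    mids = np.mean(intervals, axis=1)
    m_bar, m_max, m_min = mids.mean(), mids.max(), mids.min()
    return np.where(mids >= m_bar,
                    (mids - m_bar) / (m_max - m_bar),
                    (mids - m_bar) / (m_bar - m_min))

intervals = np.array([[2.0, 4.0], [5.0, 7.0], [1.0, 2.0], [6.0, 9.0]])
print(reward_punish_normalise(intervals).round(3))   # [-0.5, 0.5, -1.0, 1.0]
```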

19.
This paper reviews two types of geometric methods proposed in recent years for defining statistical decision rules based on 2-dimensional parameters that characterize treatment effect in a medical setting. A common example is that of making decisions, such as comparing treatments or selecting a best dose, based on both the probability of efficacy and the probability of toxicity. In most applications, the 2-dimensional parameter is defined in terms of a model parameter of higher dimension including effects of treatment and possibly covariates. Each method uses a geometric construct in the 2-dimensional parameter space, based on a set of elicited parameter pairs, as a basis for defining decision rules. The first construct is a family of contours that partitions the parameter space, with the contours constructed so that all parameter pairs on a given contour are equally desirable. The partition is used to define statistical decision rules that discriminate between parameter pairs in terms of their desirabilities. The second construct is a convex 2-dimensional set of desirable parameter pairs, with decisions based on posterior probabilities of this set for given combinations of treatments and covariates under a Bayesian formulation. A general framework for all of these methods is provided, and each method is illustrated by one or more applications.
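As a hedged sketch of the second construct only (independent Beta posteriors, the rectangular desirable region, and the 0.80 threshold are illustrative assumptions, not the paper's elicited sets), the posterior probability that (p_efficacy, p_toxicity) falls in a desirable set can be estimated by simulation:

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative trial data for one dose: responses and toxicities out of n patients.
n, n_eff, n_tox = 30, 18, 5

# Independent Beta(1, 1) priors lead to Beta posteriors (illustrative model).
p_eff = rng.beta(1 + n_eff, 1 + n - n_eff, 100_000)
p_tox = rng.beta(1 + n_tox, 1 + n - n_tox, 100_000)

# Hypothetical convex desirable set: efficacy above 0.45 and toxicity below 0.30.
desirable = (p_eff > 0.45) & (p_tox < 0.30)
prob = desirable.mean()
print(f"posterior probability of the desirable region: {prob:.3f}",
      "-> acceptable" if prob > 0.80 else "-> not acceptable")
```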

20.
陈骥  王炳兴 《统计研究》2012,29(7):91-95
Addressing the "insufficient representativeness" defect that arises when interval data are reduced to point values, a point-valuation method based on the normal distribution is proposed and applied to interval principal component evaluation. A comparison with the interval principal component method based on centre-point valuation leads to three main conclusions: first, the normal-distribution-based point valuation steers each sample's point value toward the indicator mean rather than the centre point of the interval; second, the normal-distribution-based point values carry more of the data's information; third, interval principal component evaluation based on normal-distribution point valuation improves dimension reduction and gives better factor-naming ability. The application results show that, when the normal distribution is taken into account, the point-valuation treatment of interval data performs well, and the normal-distribution-based approach can be extended to evaluation and decision problems based on interval numbers.
