Similar documents
 20 similar documents retrieved
1.
Franz Pfuff 《Statistics》2013,47(2):195-209
In this paper, problems of sequential decision theory are considered by extending the definition of the Bayes rule and treating ε-Bayes rules. This generalisation is quite useful in practice, since in many cases only ε-Bayes rules can be calculated. The conditions under which such sequential decision procedures exist are demonstrated, as well as how to construct them via a scheme of backward induction, leading to the conclusion that the existence of ε-Bayes rules requires essentially weaker assumptions than the existence of Bayes rules. Furthermore, methods are sought to simplify the construction of optimal stopping rules. Some illustrative examples are given.
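The backward-induction construction mentioned in the abstract can be illustrated in a much simpler setting than the paper's: finite-horizon optimal stopping of iid Uniform(0, 1) draws with a per-observation cost. The Python sketch below is purely illustrative (the uniform distribution, horizon, and cost are my choices, not the paper's); it computes the stage-wise stopping thresholds and the value of the optimal rule.

```python
import numpy as np

def uniform_stopping_thresholds(horizon=10, cost=0.05):
    """Backward induction for optimally stopping iid Uniform(0, 1) draws.

    At stage k you see x_k and either stop (reward x_k) or pay `cost` and
    continue.  V[k] is the expected reward of playing optimally from stage k
    on, evaluated before x_k is observed; the optimal rule stops as soon as
    x_k >= thresholds[k].
    """
    V = np.zeros(horizon + 1)
    thresholds = np.zeros(horizon + 1)
    V[horizon] = 0.5                           # last draw must be accepted: E[X] = 1/2
    for k in range(horizon - 1, 0, -1):
        t = np.clip(V[k + 1] - cost, 0.0, 1.0)  # continuation value; stop iff x_k >= t
        thresholds[k] = t
        V[k] = (1.0 + t ** 2) / 2.0             # E[max(X, t)] for X ~ U(0, 1)
    return thresholds[1:], V[1]

thr, value = uniform_stopping_thresholds()
print("stage-wise stopping thresholds:", np.round(thr, 3))
print("expected reward of the optimal rule:", round(value, 3))
```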

2.
In this article we discuss variable selection for decision making with focus on decisions regarding when to provide treatment and which treatment to provide. Current variable selection techniques were developed for use in a supervised learning setting where the goal is prediction of the response. These techniques often downplay the importance of interaction variables that have small predictive ability but that are critical when the ultimate goal is decision making rather than prediction. We propose two new techniques designed specifically to find variables that aid in decision making. Simulation results are given along with an application of the methods on data from a randomized controlled trial for the treatment of depression.

3.
A compound decision problem, with the component decision problem being the classification of a random sample as having come from one of a finite number of univariate populations, is investigated. The Bayesian approach is discussed. A distribution-free decision rule is presented which has asymptotic risk equal to zero. The asymptotic efficiencies of these rules are discussed.

The results of a computer simulation are presented which compare the Bayes rule to the distribution-free rule under the assumption of normality. It is found that the distribution-free rule can be recommended in situations where certain key location parameters are not known precisely and/or when certain distributional assumptions are not satisfied.

4.
This paper reviews two types of geometric methods proposed in recent years for defining statistical decision rules based on 2-dimensional parameters that characterize treatment effect in a medical setting. A common example is that of making decisions, such as comparing treatments or selecting a best dose, based on both the probability of efficacy and the probability of toxicity. In most applications, the 2-dimensional parameter is defined in terms of a model parameter of higher dimension including effects of treatment and possibly covariates. Each method uses a geometric construct in the 2-dimensional parameter space, based on a set of elicited parameter pairs, as a basis for defining decision rules. The first construct is a family of contours that partitions the parameter space, with the contours constructed so that all parameter pairs on a given contour are equally desirable. The partition is used to define statistical decision rules that discriminate between parameter pairs in terms of their desirabilities. The second construct is a convex 2-dimensional set of desirable parameter pairs, with decisions based on posterior probabilities of this set for given combinations of treatments and covariates under a Bayesian formulation. A general framework for all of these methods is provided, and each method is illustrated by one or more applications.
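As a rough sketch of the second construct (a decision based on the posterior probability that the efficacy/toxicity pair lies in a desirable set), and not the models reviewed in the paper, the probability can be computed by Monte Carlo under independent Beta posteriors. The counts, priors, desirable region, and decision cutoff below are all invented for illustration.

```python
import numpy as np
rng = np.random.default_rng(0)

# Hypothetical single-arm data: 14/20 responses, 3/20 toxicities.
n, eff_successes, tox_events = 20, 14, 3

# Independent Beta(0.5, 0.5) priors -> Beta posteriors (an assumption, not the paper's model).
p_eff = rng.beta(0.5 + eff_successes, 0.5 + n - eff_successes, size=100_000)
p_tox = rng.beta(0.5 + tox_events,   0.5 + n - tox_events,   size=100_000)

# Hypothetical desirable (convex) set: efficacy above 0.5 and toxicity below 0.3.
desirable = (p_eff > 0.5) & (p_tox < 0.3)
print("posterior P(desirable region) ~", desirable.mean())
# A decision rule might declare the treatment acceptable if this probability
# exceeds, say, 0.8 -- the cutoff is again purely illustrative.
```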

5.
Likelihood-based, mixed-effects models for repeated measures (MMRMs) are occasionally used in primary analyses for group comparisons of incomplete continuous longitudinal data. Although MMRM analysis is generally valid under missing-at-random assumptions, it is invalid under not-missing-at-random (NMAR) assumptions. We consider the possibility of bias in the estimated treatment effect under standard MMRM analysis in a motivating case, and propose simple and easily implementable pattern mixture models within the framework of mixed-effects modeling to handle NMAR data with differential missingness between treatment groups. The proposed models are a new form of pattern mixture model that employs a categorical time variable when modeling the outcome and a continuous time variable when modeling the missingness-data patterns. The models can directly provide an overall estimate of the treatment effect of interest, using the average over the distribution of the missingness indicator and a categorical time variable, in the same manner as MMRM analysis. Our simulation results indicate that the bias of the treatment effect for MMRM analysis was considerably larger than that for the pattern mixture model analysis under NMAR assumptions. In the case study, it would be dangerous to interpret only the results of the MMRM analysis, and the proposed pattern mixture model would be useful as a sensitivity analysis for treatment effect evaluation.

6.
Consider a randomized trial in which time to the occurrence of a particular disease, say Pneumocystis pneumonia in an AIDS trial or breast cancer in a mammographic screening trial, is the failure time of primary interest. Suppose that time to disease is subject to informative censoring by the minimum of time to death, loss to follow-up, and end of follow-up. In such a trial, the potential censoring time is observed for all study subjects, including failures. In the presence of informative censoring, it is not possible to consistently estimate the effect of treatment on time to disease without imposing additional non-identifiable assumptions. Robins (1995) specified two non-identifiable assumptions that allow one to test for and estimate an effect of treatment on time to disease in the presence of informative censoring. The goal of this paper is to provide a class of consistent and reasonably efficient semiparametric tests and estimators for the treatment effect under these assumptions. The tests in our class, like standard weighted log-rank tests, are asymptotically distribution-free α-level tests under the null hypothesis of no causal effect of treatment on time to disease whenever the censoring and failure distributions are conditionally independent given treatment arm. However, our tests remain asymptotically distribution-free α-level tests in the presence of informative censoring provided either of our assumptions is true. In contrast, a weighted log-rank test will be an α-level test in the presence of informative censoring only if (1) one of our two non-identifiable assumptions holds, and (2) the distribution of time to censoring is the same in the two treatment arms. We also study the estimation, in the presence of informative censoring, of the effect of treatment on the evolution over time of the mean of a repeated measures outcome such as CD4 count.

7.
In a response-adaptive design, we review and update the trial on the basis of outcomes in order to achieve a specific goal. In clinical trials our goal is to allocate a larger number of patients to the better treatment. In the present paper, we use a response-adaptive design in a two-treatment two-period crossover trial where the treatment responses are continuous. We provide probability measures to choose between the possible treatment combinations AA, AB, BA, or BB. The goal is to use the better treatment combination a larger number of times. We calculate the allocation proportions to the possible treatment combinations and their standard errors. We also derive some asymptotic results and provide solutions to related inferential problems. The proposed procedure is compared with a possible competitor. Finally, we use a data set to illustrate the applicability of our proposed design.

8.
Traditional credit risk assessment models do not consider the time factor; they consider only whether a customer will default, not when the default will occur, so the result cannot help a manager make a profit-maximizing decision. In fact, even if a customer defaults, the financial institution can still profit under some conditions. Much recent research has applied the Cox proportional hazards model to credit scoring, predicting the time at which a customer is most likely to default. However, in order to fully exploit the dynamic capability of the Cox proportional hazards model, time-varying macroeconomic variables are required, which involves more advanced data collection. Since short-term defaults are the ones that bring a great loss to a financial institution, rather than predicting when a loan will default, a loan manager is more interested in identifying, when approving loan applications, those applications that may default within a short period of time. This paper proposes a decision tree-based short-term default credit risk assessment model. The goal is to use the decision tree to filter short-term defaults and produce a highly accurate model that can distinguish default lending. The paper integrates bootstrap aggregating (bagging) with the synthetic minority over-sampling technique (SMOTE) into the credit risk model to improve the stability of the decision tree and its performance on unbalanced data. Finally, a real case of small and medium enterprise loan data drawn from a local financial institution in Taiwan is presented to further illustrate the proposed approach. Comparing the results of the proposed approach with the logistic regression and Cox proportional hazards models shows that the recall and precision of the proposed model are clearly superior.
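As a rough illustration of the bagging-plus-SMOTE idea described above, and not the paper's exact pipeline or data, a minimal scikit-learn/imbalanced-learn sketch on a synthetic, imbalanced data set might look as follows; the feature set and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score
from imblearn.over_sampling import SMOTE

# Synthetic stand-in for a loan data set: 10% "short-term default" cases.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Over-sample the minority (default) class on the training split only.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

# Bagged decision trees trained on the balanced data.
model = BaggingClassifier(DecisionTreeClassifier(max_depth=5),
                          n_estimators=50, random_state=0)
model.fit(X_bal, y_bal)

pred = model.predict(X_te)
print("recall:   ", round(recall_score(y_te, pred), 3))
print("precision:", round(precision_score(y_te, pred), 3))
```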

9.
One way to analyze the AB-BA crossover trial with a multivariate response is proposed. The multivariate model is given and the assumptions discussed. Two possibilities for the treatment effects hypothesis are considered. The statistical tests include the use of Hotelling's T2 statistic, and a transformation equivalent to that of Jones and Kenward for the univariate case. Data from a nutrition experiment in Mexico illustrate the method. The multiple comparisons are carried out using Bonferroni intervals and the validity of the assumptions is explored. The main conclusions include the finding that some of the assumptions are not a requirement for the multivariate analysis; however, the sample sizes are important.
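For orientation, a generic two-sample Hotelling's T2 test (not the specific Jones-and-Kenward-type transformation used in the paper) can be computed directly with NumPy/SciPy; here it is applied to simulated period differences of a bivariate response for the two sequence groups, with all data invented.

```python
import numpy as np
from scipy import stats

def hotelling_two_sample(X1, X2):
    """Two-sample Hotelling T2 test with pooled covariance; returns (T2, F, p)."""
    n1, p = X1.shape
    n2 = X2.shape[0]
    d = X1.mean(axis=0) - X2.mean(axis=0)
    S_pooled = ((n1 - 1) * np.cov(X1, rowvar=False) +
                (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)
    T2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(S_pooled, d)
    F = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * T2
    pval = stats.f.sf(F, p, n1 + n2 - p - 1)
    return T2, F, pval

rng = np.random.default_rng(1)
# Simulated period differences (two outcomes) for the AB and BA sequence groups.
diff_AB = rng.multivariate_normal([0.5, 0.3], np.eye(2), size=15)
diff_BA = rng.multivariate_normal([-0.5, -0.3], np.eye(2), size=15)
print("T2 = %.2f, F = %.2f, p = %.4f" % hotelling_two_sample(diff_AB, diff_BA))
```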

10.
Various exact tests for statistical inference are available for powerful and accurate decision rules provided that the corresponding critical values are tabulated or evaluated via Monte Carlo methods. This article introduces a novel hybrid method for computing p-values of exact tests by combining Monte Carlo simulations and statistical tables generated a priori. To use the data from Monte Carlo generations and tabulated critical values jointly, we employ kernel density estimation within Bayesian-type procedures. The p-values are linked to the posterior means of quantiles. In this framework, we present relevant information from the Monte Carlo experiments via likelihood-type functions, whereas tabulated critical values are used to reflect prior distributions. The local maximum likelihood technique is employed to compute functional forms of prior distributions from statistical tables. Empirical likelihood functions are proposed to replace parametric likelihood functions within the structure of the posterior mean calculations, providing a Bayesian-type procedure with a distribution-free set of assumptions. We derive the asymptotic properties of the proposed nonparametric posterior means of quantiles process. Using the theoretical propositions, we calculate the minimum number of Monte Carlo resamples needed for a desired level of accuracy on the basis of distances between actual data characteristics (e.g. sample sizes) and the characteristics of the data used to tabulate the corresponding critical values. The proposed approach makes practical applications of exact tests simple and rapid. Implementations of the proposed technique are easily carried out via the recently developed STATA and R statistical packages.
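The Monte Carlo side of this hybrid idea can be sketched in a bare-bones way (omitting the tabulated priors and the posterior-mean-of-quantiles machinery the abstract describes): simulate the null distribution of a test statistic, smooth it with a Gaussian kernel density estimate, and read off the tail-area p-value. The test statistic, null model, and resample count below are arbitrary choices.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

def statistic(x):
    # Toy test statistic: absolute standardized sample mean.
    return abs(x.mean()) / (x.std(ddof=1) / np.sqrt(len(x)))

# Observed data (hypothetical) and its statistic.
x_obs = rng.normal(loc=0.4, scale=1.0, size=30)
t_obs = statistic(x_obs)

# Monte Carlo resamples under the null (standard normal data of the same size).
t_null = np.array([statistic(rng.normal(size=30)) for _ in range(20_000)])

# Raw Monte Carlo p-value and a kernel-smoothed version of the tail area.
p_mc = (t_null >= t_obs).mean()
kde = gaussian_kde(t_null)
p_kde = kde.integrate_box_1d(t_obs, np.inf)

print(f"Monte Carlo p-value: {p_mc:.4f},  KDE-smoothed p-value: {p_kde:.4f}")
```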

11.
Adaptive designs are sometimes used in a phase III clinical trial with the goal of allocating a larger number of patients to the better treatment. In the present paper we use some adaptive designs in a two-treatment two-period crossover trial in the presence of possible carry-over effects, where the treatment responses are binary. We use some simple designs to choose between the possible treatment combinations AA, AB, BA, or BB. The goal is to use the better treatment a larger proportion of times. We calculate the allocation proportions to the possible treatment combinations and their standard deviations. We also investigate related inferential problems, for which the relevant asymptotics are derived. The proposed procedure is compared with a possible competitor. Finally, we use real data sets to illustrate the applicability of our proposed design.
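The abstract does not spell out its allocation rule, so as a stand-in here is the classical randomized play-the-winner urn for two treatments with binary outcomes, which conveys what response-adaptive allocation looks like in code; the success probabilities and urn parameters are invented.

```python
import numpy as np
rng = np.random.default_rng(3)

def randomized_play_the_winner(p_success, n_patients=200, alpha=1, beta=1):
    """Classical RPW(alpha, beta) urn for two treatments with binary outcomes.

    Start with `alpha` balls of each type; after each response add `beta` balls
    of the successful treatment's type (or of the other type on failure).
    Returns the proportion of patients allocated to each treatment.
    """
    urn = np.array([alpha, alpha], dtype=float)
    allocated = np.zeros(2)
    for _ in range(n_patients):
        arm = rng.choice(2, p=urn / urn.sum())
        allocated[arm] += 1
        success = rng.random() < p_success[arm]
        urn[arm if success else 1 - arm] += beta
    return allocated / n_patients

# Treatment A is better (0.7 vs 0.4): the design should favour it.
print("allocation proportions (A, B):",
      np.round(randomized_play_the_winner([0.7, 0.4]), 3))
```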

12.
The empirical likelihood (EL) technique is a powerful nonparametric method with wide theoretical and practical applications. In this article, we use the EL methodology to develop simple and efficient goodness-of-fit tests for normality based on the dependence between moments that characterizes normal distributions. The new empirical likelihood ratio (ELR) tests are exact and are shown to be very powerful decision rules for small to moderate sample sizes. Asymptotic results related to the Type I error rates of the proposed tests are presented. We present a broad Monte Carlo comparison between different tests for normality, confirming the preference for the proposed method from a power perspective. A real data example is provided.
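The paper's ELR construction is not reproduced here; as a much simpler example of a normality test built on a moment relation that characterizes the normal distribution, the sketch below uses Geary's ratio of mean absolute deviation to standard deviation (approximately sqrt(2/pi) under normality) with a Monte Carlo-calibrated null distribution.

```python
import numpy as np
rng = np.random.default_rng(4)

def geary_ratio(x):
    """Mean absolute deviation over standard deviation; ~0.7979 for normal data."""
    return np.mean(np.abs(x - x.mean())) / x.std(ddof=1)

def moment_ratio_normality_test(x, n_mc=10_000):
    """Two-sided Monte Carlo test of normality based on Geary's moment ratio."""
    n = len(x)
    t_obs = geary_ratio(x)
    t_null = np.array([geary_ratio(rng.normal(size=n)) for _ in range(n_mc)])
    # Two-sided p-value: how extreme is the observed ratio relative to its null spread?
    p = np.mean(np.abs(t_null - t_null.mean()) >= abs(t_obs - t_null.mean()))
    return t_obs, p

x = rng.exponential(size=50)          # clearly non-normal sample
print("ratio = %.3f, p = %.4f" % moment_ratio_normality_test(x))
x = rng.normal(size=50)               # normal sample
print("ratio = %.3f, p = %.4f" % moment_ratio_normality_test(x))
```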

13.
This paper describes a nonparametric approach to make inferences for aggregate loss models in the insurance framework. We assume that an insurance company provides a historical sample of claims given by claim occurrence times and claim sizes. Furthermore, information may be incomplete as claims may be censored and/or truncated. In this context, the main goal of this work consists of fitting a probability model for the total amount that will be paid on all claims during a fixed future time period. In order to solve this prediction problem, we propose a new methodology based on nonparametric estimators for the density functions with censored and truncated data, the use of Monte Carlo simulation methods and bootstrap resampling. The developed methodology is useful to compare alternative pricing strategies in different insurance decision problems. The proposed procedure is illustrated with a real dataset provided by the insurance department of an international commercial company.
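The paper's nonparametric, censoring-aware estimators are not reproduced here; as a bare-bones stand-in, the sketch below simulates the aggregate loss for a future period under a compound Poisson model with lognormal claim sizes, from which premium or capital quantiles can be read off. All parameters are invented.

```python
import numpy as np
rng = np.random.default_rng(5)

def aggregate_loss_simulation(rate=30.0, mu=8.0, sigma=1.2, n_sim=100_000):
    """Simulate total claims paid over one period under a compound Poisson model.

    Claim count ~ Poisson(rate); claim sizes ~ lognormal(mu, sigma).
    Returns simulated totals, from which pricing quantities can be computed.
    """
    counts = rng.poisson(rate, size=n_sim)
    totals = np.array([rng.lognormal(mu, sigma, size=k).sum() for k in counts])
    return totals

totals = aggregate_loss_simulation()
print("mean aggregate loss:      %.0f" % totals.mean())
print("99.5%% quantile (capital): %.0f" % np.quantile(totals, 0.995))
```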

14.
Multi-stage time-evolving models are common statistical models for biological systems, especially insect populations. In stage-duration distribution models, parameter estimation for these models uses the Laplace transform method. This method involves assumptions such as known constant shapes, known constant rates, or the same overall hazard rate for all stages. These assumptions are strong and restrictive. The main aim of this paper is to weaken these assumptions by using a Bayesian approach. In particular, a Metropolis-Hastings algorithm based on deterministic transformations is used to estimate parameters. We use two models: one with no hazard rates, and the other with stage-wise constant hazard rates. These methods are validated in simulation studies, followed by a case study of cattle parasites. The results show that the proposed methods estimate the parameters comparably well to the Laplace transform methods.
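Leaving aside the deterministic-transformation refinement and the stage-duration likelihoods of the paper, the underlying Metropolis-Hastings machinery can be sketched generically as a random-walk sampler; the Gamma duration model, flat priors on the log scale, and step size below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
data = rng.gamma(shape=3.0, scale=2.0, size=100)   # toy "stage duration" data

def log_post(theta):
    """Log posterior for (log-shape, log-scale) of a Gamma duration model
    with flat priors on the log-transformed parameters (an illustrative choice)."""
    shape, scale = np.exp(theta)
    return stats.gamma.logpdf(data, a=shape, scale=scale).sum()

def metropolis_hastings(log_post, init, n_iter=20_000, step=0.1):
    """Random-walk Metropolis-Hastings with a Gaussian proposal."""
    theta = np.asarray(init, dtype=float)
    lp = log_post(theta)
    samples = np.empty((n_iter, len(theta)))
    for i in range(n_iter):
        prop = theta + step * rng.normal(size=theta.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:     # accept / reject
            theta, lp = prop, lp_prop
        samples[i] = theta
    return np.exp(samples[n_iter // 2:])            # back-transform, drop burn-in

draws = metropolis_hastings(log_post, init=[0.0, 0.0])
print("posterior means (shape, scale):", draws.mean(axis=0).round(2))
```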

15.
Teaching how to derive minimax decision rules can be challenging because of the lack of examples that are simple enough to be used in the classroom. Motivated by this challenge, we provide a new example that illustrates the use of standard techniques in the derivation of optimal decision rules under the Bayes and minimax approaches. We discuss how to predict the value of an unknown quantity, θ ∈ {0, 1}, given the opinions of n experts. An important example of such a crowdsourcing problem occurs in modern cosmology, where θ indicates whether a given galaxy is merging or not, and Y1, …, Yn are the opinions of n astronomers regarding θ. We use the obtained prediction rules to discuss advantages and disadvantages of the Bayes and minimax approaches to decision theory. The material presented here is intended to be taught to first-year graduate students.
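A toy version of this prediction problem, under assumptions that are mine rather than necessarily the article's (experts vote independently and each is correct with a known probability p), reduces the Bayes rule to a weighted vote, as in this sketch:

```python
import numpy as np

def bayes_predict(votes, p=0.7, prior=0.5):
    """Bayes rule for theta in {0, 1} from n independent expert votes.

    Each expert is assumed correct with probability p (the same for both values
    of theta); `prior` is P(theta = 1).  Returns (posterior P(theta = 1), decision).
    """
    votes = np.asarray(votes)
    k = votes.sum()                      # number of experts voting "1"
    n = len(votes)
    like1 = p ** k * (1 - p) ** (n - k)  # P(votes | theta = 1)
    like0 = p ** (n - k) * (1 - p) ** k  # P(votes | theta = 0)
    post1 = prior * like1 / (prior * like1 + (1 - prior) * like0)
    return post1, int(post1 >= 0.5)

# Seven astronomers, four say "merging" (theta = 1).
post, decision = bayes_predict([1, 1, 0, 1, 0, 0, 1])
print(f"P(theta = 1 | votes) = {post:.3f}, Bayes decision: {decision}")
# With a uniform prior and common accuracy this is just majority vote;
# asymmetric priors or losses shift the cutoff away from 0.5.
```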

16.
As a tool for approximate reasoning, rough sets are mainly used for decision analysis under uncertainty and require no prior assumptions about the data. However, current mainstream rough set classification methods still require a preliminary discretization step, which discards the high-quality information carried by numerical variables. This paper gives the membership function a new probabilistic definition and proposes a rough set classification technique based on a Bayesian probabilistic boundary region, which largely resolves a series of problems faced by current rough set methods, such as their poor fit for classifying numerical attributes and the incompleteness of their classification rules.

17.
In many longitudinal studies of recurrent events there is an interest in assessing how recurrences vary over time and across treatments or strata in the population. Usual analyses of such data assume a parametric form for the distribution of the recurrences over time. Here, we consider a semiparametric model for the analysis of such longitudinal studies where data are collected as panel counts. The model is a non-homogeneous Poisson process with a multiplicative intensity incorporating covariates through a proportionality assumption. Heterogeneity is accounted for in the model through subject-specific random effects. The key feature of the model is the use of regression splines to model the distribution of recurrences over time. This provides a flexible and robust method of relaxing parametric assumptions. In addition, quasi-likelihood methods are proposed for estimation, requiring only first and second moment assumptions to obtain consistent estimates. Simulations demonstrate that the method produces estimators of the rate with low bias and whose standardized distributions are well approximated by the normal. The usefulness of this approach, especially as an exploratory tool, is illustrated by analyzing a study designed to assess the effectiveness of a pheromone treatment in disturbing the mating habits of the Cherry Bark Tortrix moth.
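The spline-based estimation and quasi-likelihood methods themselves are not sketched here, but the model class is easy to simulate from: the snippet below generates event times from a non-homogeneous Poisson process by Lewis-Shedler thinning, with a made-up seasonal rate function.

```python
import numpy as np
rng = np.random.default_rng(7)

def simulate_nhpp(rate, rate_max, t_end):
    """Simulate event times of a non-homogeneous Poisson process on [0, t_end]
    by thinning: propose homogeneous events at rate `rate_max`, keep each
    with probability rate(t) / rate_max."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > t_end:
            return np.array(times)
        if rng.random() < rate(t) / rate_max:
            times.append(t)

# Hypothetical seasonal recurrence rate over one year (time in years).
rate = lambda t: 2.0 + 1.5 * np.sin(2 * np.pi * t)
events = simulate_nhpp(rate, rate_max=3.5, t_end=1.0)
print(f"{len(events)} recurrences; first few: {np.round(events[:5], 3)}")
```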

18.
We can use wavelet shrinkage to estimate a possibly multivariate regression function g under the general regression setup y = g + ε. We propose an enhanced wavelet-based denoising methodology based on Bayesian adaptive multiresolution shrinkage, an effective Bayesian shrinkage rule combined with a semi-supervised learning mechanism. The Bayesian shrinkage rule is enhanced by a semi-supervised learning method in which the neighboring structure of a wavelet coefficient is exploited and an appropriate decision function is derived. According to the decision function, each wavelet coefficient follows one of two prespecified Bayesian rules obtained using different settings of the related parameters. The decision for a wavelet coefficient depends not only on its magnitude, but also on the neighboring structure in which the coefficient is located. We discuss the theoretical properties of the suggested method and provide recommended parameter settings. We show that the proposed method is often superior to several existing wavelet denoising methods through extensive experimentation.
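The proposed Bayesian adaptive multiresolution shrinkage is not reproduced here; for orientation, the sketch below performs classical universal-threshold soft shrinkage (Donoho-Johnstone style) with PyWavelets, one of the standard wavelet denoising baselines such a method would be compared against. The test signal and noise level are invented.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(8)

# Noisy 1-D signal: y = g + eps, with a smooth-plus-jump test function g.
t = np.linspace(0, 1, 1024)
g = np.sin(4 * np.pi * t) + 0.5 * np.sign(t - 0.5)
y = g + 0.3 * rng.normal(size=t.size)

# Classical universal-threshold soft shrinkage.
coeffs = pywt.wavedec(y, 'db4', level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate from finest level
thr = sigma * np.sqrt(2 * np.log(y.size))           # universal threshold
denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft')
                                 for c in coeffs[1:]]
g_hat = pywt.waverec(denoised_coeffs, 'db4')[:y.size]

print("RMSE noisy:    %.3f" % np.sqrt(np.mean((y - g) ** 2)))
print("RMSE denoised: %.3f" % np.sqrt(np.mean((g_hat - g) ** 2)))
```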

19.
The rise over recent years in the use of network meta-analyses (NMAs) in clinical research and health economic analysis has been little short of meteoric, driven in part by a desire from decision makers to extend inferences beyond direct comparisons in controlled clinical trials. But is the increased use of and reliance on NMAs justified? Do such analyses provide a reliable basis for the relative effectiveness assessment of medicines and, in turn, for critical decisions relating to healthcare access and provisioning? And can such analyses also be used earlier, as part of the evidence base for licensure? Despite several important publications highlighting the inherently unverifiable assumptions underpinning NMAs, these assumptions and the associated potential for serious bias are often overlooked in the reporting and interpretation of NMAs. A more cautious, and better informed, approach to the use and interpretation of NMAs in clinical research is warranted given the assumptions that sit behind such analyses. Copyright © 2015 John Wiley & Sons, Ltd.

20.
Owing to increased costs and competition pressure, drug development is becoming more and more challenging. Therefore, there is a strong need to improve the efficiency of clinical research by developing and applying methods for quantitative decision making. In this context, the integrated planning of phase II/III programs plays an important role, as numerous quantities can be varied that are crucial for cost, benefit, and program success. Recently, a utility-based framework has been proposed for the optimal planning of phase II/III programs that puts the choice of decision boundaries and phase II sample sizes on a quantitative basis. However, this method is restricted to studies with a single time-to-event endpoint. We generalize this procedure to the setting of clinical trials with multiple endpoints and (asymptotically) normally distributed test statistics. Optimal phase II sample sizes and go/no-go decision rules are provided for both the "all-or-none" and "at-least-one" win criteria. Application of the proposed method is illustrated by drug development programs in the fields of Alzheimer disease and oncology.
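A drastically simplified, single-endpoint caricature of utility-based phase II/III planning, and not the multiple-endpoint generalization the abstract describes, can be written as a grid search over the phase II sample size and the go/no-go boundary; all costs, gains, the assumed effect size, and design constants below are invented.

```python
import numpy as np
from scipy.stats import norm

def expected_utility(n2, kappa, delta=0.3, n3=300,
                     cost2_per_pat=0.01, cost3=3.0, gain=10.0):
    """Expected utility (arbitrary monetary units) of a toy phase II/III program.

    Phase II: n2 patients per arm; "go" if the phase II z-statistic exceeds kappa.
    Phase III: n3 patients per arm, one-sided alpha = 0.025, assumed true
    standardized effect `delta`.  Everything here is a toy assumption.
    """
    p_go = norm.sf(kappa - delta * np.sqrt(n2 / 2))                  # P(go decision)
    p_success = norm.sf(norm.ppf(0.975) - delta * np.sqrt(n3 / 2))   # phase III power
    return -cost2_per_pat * 2 * n2 + p_go * (-cost3 + p_success * gain)

# Grid search over phase II sample size and go/no-go boundary.
n2_grid = np.arange(20, 201, 10)
kappa_grid = np.linspace(0.0, 2.5, 26)
util = np.array([[expected_utility(n2, k) for k in kappa_grid] for n2 in n2_grid])
i, j = np.unravel_index(util.argmax(), util.shape)
print(f"optimal n2 per arm = {n2_grid[i]}, go threshold kappa = {kappa_grid[j]:.2f}, "
      f"expected utility = {util[i, j]:.2f}")
```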
