Similar Documents
20 similar documents found (search time: 0 ms)
1.
2.
3.
The Wilcoxon-Mann-Whitney statistic is commonly used for a distribution-free comparison of two groups. One requirement for its use is that the sample sizes of the two groups are fixed. This requirement is violated in some applications, such as medical imaging studies and diagnostic marker studies: in the former, the number of correctly localized abnormal images is random, while in the latter some subjects may not have observable measurements. For this reason, we propose a random-sum Wilcoxon statistic for comparing two groups in the presence of ties, and derive its variance as well as its asymptotic distribution for large sample sizes. The proposed statistic includes the regular Wilcoxon rank-sum statistic as a special case. Finally, we apply the proposed statistic to summarize location response operating characteristic data from a liver computed tomography study, and to summarize the diagnostic accuracy of biomarker data.
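As context, here is a minimal sketch of the fixed-sample-size Wilcoxon rank-sum test with midranks for ties, the special case that the proposed random-sum statistic contains; the data, the tie-corrected variance and the normal approximation are standard, but this is not the paper's random-sum procedure itself.

```python
import numpy as np
from scipy.stats import rankdata, norm

def wilcoxon_rank_sum(x, y):
    """Regular Wilcoxon rank-sum test with midranks for ties.
    The paper's random-sum statistic generalizes this by letting the
    group sizes themselves be random; this covers only the fixed-size case."""
    m, n = len(x), len(y)
    combined = np.concatenate([x, y])
    ranks = rankdata(combined)               # average ranks (midranks) for ties
    w = ranks[:m].sum()                      # rank sum of the first group
    mean_w = m * (m + n + 1) / 2.0
    # Tie-corrected variance of the rank sum
    _, counts = np.unique(combined, return_counts=True)
    tie_term = (counts**3 - counts).sum() / ((m + n) * (m + n - 1))
    var_w = m * n / 12.0 * (m + n + 1 - tie_term)
    z = (w - mean_w) / np.sqrt(var_w)
    return w, 2 * norm.sf(abs(z))            # statistic and two-sided p-value

x = np.array([1.2, 3.4, 3.4, 5.6])           # hypothetical group 1
y = np.array([2.1, 3.4, 4.8, 6.0, 7.2])      # hypothetical group 2
print(wilcoxon_rank_sum(x, y))
```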

4.

Although no universally accepted definition of causality exists, in practice one is often faced with the question of statistically assessing causal relationships in different settings. We present a uniform general approach to causality problems derived from the axiomatic foundations of the Bayesian statistical framework. In this approach, causality statements are viewed as hypotheses, or models, about the world, and the fundamental object to be computed is the posterior distribution of the causal hypotheses given the data and the background knowledge. Computation of the posterior, illustrated here in simple examples, may involve complex probabilistic modeling, but this is no different from any other Bayesian modeling situation. The main advantage of the approach is its connection to the axiomatic foundations of the Bayesian framework, and the general uniformity with which it can be applied to a variety of causality settings, ranging from specific to general cases, or from causes of effects to effects of causes.
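To make the "causal hypotheses as models" viewpoint concrete, the following toy sketch computes a posterior over two rival hypotheses about a binary exposure and outcome, using conjugate Beta-Binomial marginal likelihoods; the counts, uniform priors and equal prior hypothesis weights are all illustrative assumptions, not the paper's examples.

```python
import numpy as np
from scipy.special import betaln

def log_marglik_binom(k, n, a=1.0, b=1.0):
    """Log marginal likelihood of k successes in n trials under a
    Beta(a, b) prior; binomial coefficients cancel between hypotheses."""
    return betaln(a + k, b + n - k) - betaln(a, b)

# Hypothetical 2x2 data: outcome Y under exposure X = 0 and X = 1
k0, n0 = 12, 40   # Y = 1 among the unexposed
k1, n1 = 25, 40   # Y = 1 among the exposed

# H0: X has no effect on Y (one common success rate)
log_m0 = log_marglik_binom(k0 + k1, n0 + n1)
# H1: X affects Y (separate success rates)
log_m1 = log_marglik_binom(k0, n0) + log_marglik_binom(k1, n1)

# Posterior over the two causal hypotheses with equal prior weights
log_ms = np.array([log_m0, log_m1])
post = np.exp(log_ms - log_ms.max())
post /= post.sum()
print(dict(zip(["no-effect", "effect"], np.round(post, 3))))
```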

5.
In hypothesis testing involving censored lifetime data that are independently distributed according to an accelerated-failure-time model, it is often of interest to predict whether continuation of the experiment will significantly alter the inferences drawn at an interim point. Approaching the problem from a Bayesian viewpoint, we suggest a possible solution based on Laplace approximations to the posterior distribution of the parameters of interest and on Markov-chain Monte Carlo. We apply our results to Weibull data from a carcinogenesis experiment on mice.
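As a sketch of the Laplace-approximation ingredient, the code below finds the posterior mode for right-censored Weibull data under flat priors on the log parameters and treats the BFGS inverse Hessian as the Gaussian (Laplace) covariance; the simulated data and censoring time are assumptions, and the paper's interim-prediction and MCMC steps are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical right-censored Weibull lifetimes (administrative censoring at 8)
rng = np.random.default_rng(0)
t = rng.weibull(1.5, 50) * 10.0
d = t < 8.0                              # event indicator
t = np.minimum(t, 8.0)                   # observed times

def neg_log_post(theta):
    """Negative log posterior for (log shape, log scale), flat priors."""
    k, lam = np.exp(theta)
    z = t / lam
    # Events contribute the log density, censored points the log survival
    loglik = np.sum(d * (np.log(k / lam) + (k - 1) * np.log(z)) - z**k)
    return -loglik

res = minimize(neg_log_post, x0=[0.0, 1.0], method="BFGS")
mode, cov = res.x, res.hess_inv          # Laplace: Gaussian centered at the mode
print("posterior mode (log k, log lam):", mode)
print("approximate posterior covariance:\n", cov)
```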

6.
In this paper we introduce a broad family of loss functions based on the concept of Bregman divergence. We deal with both Bayesian estimation and prediction problems and show that all Bayes solutions associated with loss functions belonging to the introduced family satisfy the same equation. We further concentrate on the concept of robust Bayesian analysis and provide one equation that explicitly leads to robust Bayes solutions. The results are model-free and include many existing results from the Bayesian and robust Bayesian literature.
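In the canonical case where the loss is the Bregman divergence D_φ(θ, a) of the parameter from the action, the common equation is solved by the posterior mean for every convex generator φ; the sketch below checks this numerically on an assumed gamma posterior sample with two different generators.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
theta = rng.gamma(3.0, 2.0, 100_000)     # assumed posterior sample, mean 6

def bregman(phi, dphi, t, a):
    """Bregman divergence D_phi(t, a) = phi(t) - phi(a) - phi'(a)(t - a)."""
    return phi(t) - phi(a) - dphi(a) * (t - a)

# Two convex generators: squared loss and a KL-type loss
generators = {
    "squared": (lambda u: u**2, lambda u: 2 * u),
    "KL-type": (lambda u: u * np.log(u), lambda u: np.log(u) + 1.0),
}

for name, (phi, dphi) in generators.items():
    risk = lambda a: bregman(phi, dphi, theta, a).mean()
    best = minimize_scalar(risk, bounds=(0.1, 30.0), method="bounded").x
    print(f"{name:8s} Bayes action {best:.3f} vs posterior mean {theta.mean():.3f}")
```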

7.
When applicable, an assumed monotonicity property of the regression function with respect to covariates has a strong stabilizing effect on the estimates. Because of this, other parametric or structural assumptions may not be needed at all. Although monotonic regression in one dimension is well studied, the question remains whether one can find computationally feasible generalizations to multiple dimensions. Here, we propose a non-parametric monotonic regression model for one or more covariates and a Bayesian estimation procedure. The monotonic construction is based on marked point processes, where the random point locations and the associated marks (function levels) together form piecewise constant realizations of the regression surfaces. The actual inference is based on model-averaged results over the realizations. Monotonicity is enforced by partial ordering constraints, which allow the construction, as the density of support points increases, to asymptotically approximate the family of all bounded, continuous monotonic functions.
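For intuition, the classical one-dimensional isotonic fit below produces exactly the kind of piecewise-constant monotone function that a single realization of the paper's marked-point-process construction takes; the simulated data are an assumption, and the Bayesian model averaging over realizations is not implemented here.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Noisy observations of an increasing function (hypothetical data)
rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 10.0, 80))
y = np.log1p(x) + rng.normal(0.0, 0.2, 80)

# One-dimensional monotonic regression: piecewise-constant, non-decreasing
iso = IsotonicRegression(increasing=True)
y_fit = iso.fit_transform(x, y)
print("fitted levels non-decreasing:", bool(np.all(np.diff(y_fit) >= -1e-12)))
```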

8.
It is known that the problem of combining a number of expert probability evaluations is frequently solved with additive or multiplicative rules. In this paper we try to show, with the help of a behavioural model, that the additive rule (or linear pooling) derives from the application of Bayesian reasoning. The multiplicative rule will be discussed on another occasion.
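A minimal numerical contrast of the two rules, with assumed expert probabilities and weights; the multiplicative (log-linear) pool is renormalized for a binary event and appears here only for comparison.

```python
import numpy as np

# Hypothetical probability evaluations from three experts for one event
p = np.array([0.60, 0.75, 0.50])
w = np.array([0.5, 0.3, 0.2])                 # normalized expert weights

linear_pool = np.sum(w * p)                   # additive rule (linear pooling)

# Multiplicative (log-linear) rule, renormalized for a binary event
num = np.prod(p ** w)
mult_pool = num / (num + np.prod((1 - p) ** w))

print(f"linear pool: {linear_pool:.3f}, multiplicative pool: {mult_pool:.3f}")
```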

9.
Parameter estimation of the generalized Pareto distribution—Part II
This is the second part of a paper which reviews methods for estimating the parameters of the generalized Pareto distribution (GPD). The GPD is a very important distribution in the extreme value context and is commonly used for modeling observations that exceed very high thresholds. The ultimate success of the GPD in applications evidently depends on the parameter estimation process. Quite a few methods for estimating the GPD parameters exist in the literature. Estimation procedures such as maximum likelihood (ML), the method of moments (MOM) and the probability-weighted moments (PWM) method were described in Part I of the paper. Here we continue the review with methods that are robust and procedures that use the Bayesian methodology. As in Part I, we focus on those that are relatively simple and straightforward to apply to real-world data.
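As a baseline from Part I, here is a sketch of the maximum-likelihood route using scipy's generalized Pareto fit on exceedances over a high threshold; the simulated exponential data and the 95% threshold are assumptions for illustration.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(3)
data = rng.standard_exponential(10_000)   # hypothetical raw observations
u = np.quantile(data, 0.95)               # high threshold
exceedances = data[data > u] - u

# ML fit of the GPD to the exceedances, location fixed at zero
xi, _, sigma = genpareto.fit(exceedances, floc=0)
print(f"shape xi = {xi:.3f}, scale sigma = {sigma:.3f}")   # xi near 0 here
```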

10.
We propose a Bayesian computation and inference method for the Pearson-type chi-squared goodness-of-fit test with right-censored survival data. Our test statistic is derived from the classical Pearson chi-squared test using the differences between the observed and expected counts in the partitioned bins. In the Bayesian paradigm, we generate posterior samples of the model parameter using a Markov chain Monte Carlo procedure. By replacing the maximum likelihood estimator in the quadratic form with a random observation from the posterior distribution of the model parameter, we can easily construct a chi-squared test statistic. The degrees of freedom of the test equal the number of bins and are thus independent of the dimensionality of the underlying parameter vector. The test statistic recovers the conventional Pearson-type chi-squared structure. Moreover, the proposed algorithm circumvents the burden of evaluating the Fisher information matrix, its inverse and the rank of the variance–covariance matrix. We examine the proposed model diagnostic method using simulation studies and illustrate it with a real data set from a prostate cancer study.
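A deliberately rough sketch of the plug-in idea for a censored exponential model with a conjugate gamma posterior: a posterior draw replaces the MLE inside the quadratic form. The data, prior, bins, and the crude definition of expected counts under censoring are all assumptions; the paper's construction is more careful.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical right-censored exponential data
t = rng.exponential(2.0, 200)                 # true event times
c = rng.exponential(6.0, 200)                 # censoring times
y, d = np.minimum(t, c), (t <= c)

# Conjugate posterior for the rate under a Gamma(1, 1) prior
lam = rng.gamma(1 + d.sum(), 1.0 / (1 + y.sum()))   # one posterior draw

edges = np.array([0.0, 1.0, 2.0, 4.0, np.inf])      # partition bins
obs, _ = np.histogram(y[d], bins=edges)             # observed event counts

# Expected event counts from the drawn rate (ignores censoring within bins)
cdf = 1.0 - np.exp(-lam * edges)
expc = d.sum() * np.diff(cdf)
chi2 = ((obs - expc) ** 2 / expc).sum()
print(f"posterior-draw chi-squared = {chi2:.2f} over {len(obs)} bins")
```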

11.
This paper deals with the problem of robustness of Bayesian regression with respect to the data. We first give a formal definition of Bayesian robustness to data contamination, prove that robustness according to the definition cannot be obtained by using heavy-tailed error distributions in linear regression models and propose a heteroscedastic approach to achieve the desired Bayesian robustness.

12.

Likelihood ratio tests for a change in mean in a sequence of independent, normal random variables are based on the maximum two-sample t-statistic, where the maximum is taken over all possible changepoints. The maximum t-statistic has the undesirable characteristic that Type I errors are not uniformly distributed across possible changepoints. False positives occur more frequently near the ends of the sequence and less frequently near the middle. In this paper we describe an alternative statistic that is based upon a minimum p-value, where the minimum is taken over all possible changepoints. The p-value at any particular changepoint is based upon both the two-sample t-statistic at that changepoint and the probability that the maximum two-sample t-statistic is achieved at that changepoint. The new statistic has a more uniform distribution of Type I errors across potential changepoints and it compares favorably with respect to statistical power, false discovery rates, and the mean square error of changepoint estimates.
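The classical scan that the paper improves on computes the two-sample t-statistic at every possible changepoint and takes the maximum, as in the sketch below; the simulated sequence is an assumption, and the min-p reweighting by the probability that the maximum lands at each changepoint is not implemented here.

```python
import numpy as np
from scipy import stats

def max_t_changepoint(x):
    """Maximum two-sample t-statistic over all changepoints (classical scan)."""
    n = len(x)
    best_t, best_k = 0.0, None
    for k in range(2, n - 1):                 # at least two points per side
        t_stat, _ = stats.ttest_ind(x[:k], x[k:])
        if abs(t_stat) > abs(best_t):
            best_t, best_k = t_stat, k
    return best_t, best_k

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0.0, 1.0, 30), rng.normal(1.0, 1.0, 30)])
print(max_t_changepoint(x))                   # statistic and estimated changepoint
```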

13.
Probabilistic sensitivity analysis of complex models: a Bayesian approach
In many areas of science and technology, mathematical models are built to simulate complex real-world phenomena. Such models are typically implemented in large computer programs and are also very complex, such that the way the model responds to changes in its inputs is not transparent. Sensitivity analysis is concerned with understanding how changes in the model inputs influence the outputs. This may be motivated simply by a wish to understand the implications of a complex model, but often arises because there is uncertainty about the true values of the inputs that should be used for a particular application. A broad range of measures has been advocated in the literature to quantify and describe the sensitivity of a model's output to variation in its inputs. In practice the most commonly used measures are those based on formulating uncertainty in the model inputs by a joint probability distribution and then analysing the induced uncertainty in the outputs, an approach known as probabilistic sensitivity analysis. We present a Bayesian framework which unifies the various tools of probabilistic sensitivity analysis. The Bayesian approach is computationally highly efficient: it allows effective sensitivity analysis to be achieved by using far smaller numbers of model runs than standard Monte Carlo methods. Furthermore, all measures of interest may be computed from a single set of runs.
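For contrast with the Bayesian framework, the sketch below estimates crude main-effect (first-order) sensitivity indices Var(E[Y | X_i]) / Var(Y) by plain Monte Carlo binning; the toy simulator and standard normal inputs are assumptions, and this brute-force route needs the very large number of runs the paper's approach is designed to avoid.

```python
import numpy as np

def model(x1, x2, x3):
    """Toy simulator with three uncertain inputs (hypothetical)."""
    return x1 + 2.0 * x2**2 + 0.5 * x1 * x3

rng = np.random.default_rng(6)
N = 100_000
X = rng.normal(0.0, 1.0, (N, 3))
y = model(*X.T)

# Main-effect index per input: variance of binned conditional means / Var(Y)
for i in range(3):
    edges = np.quantile(X[:, i], np.linspace(0, 1, 51))
    idx = np.clip(np.digitize(X[:, i], edges) - 1, 0, 49)
    bin_mean = np.bincount(idx, weights=y, minlength=50) / np.bincount(idx, minlength=50)
    s_i = np.var(bin_mean[idx]) / np.var(y)
    print(f"main-effect index for x{i+1}: {s_i:.2f}")
```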

14.
This paper presents a kernel estimation of the distribution of the scale parameter of the inverse Gaussian distribution under type II censoring together with the distribution of the remaining time. Estimation is carried out via the Gibbs sampling algorithm combined with a missing data approach. Estimates and confidence intervals for the parameters of interest are also presented.

15.
Typically, in the practice of causal inference from observational studies, a parametric model is assumed for the joint population density of potential outcomes and treatment assignments, and possibly this is accompanied by the assumption of no hidden bias. However, both assumptions are questionable for real data, the accuracy of causal inference is compromised when the data violates either assumption, and the parametric assumption precludes capturing a more general range of density shapes (e.g., heavier tail behavior and possible multi-modalities). We introduce a flexible, Bayesian nonparametric causal model to provide more accurate causal inferences. The model makes use of a stick-breaking prior, which has the flexibility to capture any multi-modalities, skewness and heavier tail behavior in this joint population density, while accounting for hidden bias. We prove the asymptotic consistency of the posterior distribution of the model, and illustrate our causal model through the analysis of small and large observational data sets.
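A minimal sketch of the stick-breaking construction that underlies the prior, truncated at K atoms; the standard normal base measure and the concentration parameter are assumptions, and the full causal model with hidden-bias adjustment is far beyond this fragment.

```python
import numpy as np

def stick_breaking(alpha, base_sampler, K=100, rng=None):
    """Truncated stick-breaking draw from a Dirichlet process DP(alpha, H):
    weights w_k = v_k * prod_{j<k}(1 - v_j) with v_k ~ Beta(1, alpha)."""
    rng = rng or np.random.default_rng()
    v = rng.beta(1.0, alpha, K)
    w = v * np.concatenate([[1.0], np.cumprod(1.0 - v)[:-1]])
    atoms = base_sampler(K, rng)              # atom locations drawn from H
    return atoms, w

atoms, w = stick_breaking(2.0, lambda k, r: r.normal(0.0, 1.0, k),
                          rng=np.random.default_rng(7))
print("five largest weights:", np.sort(w)[::-1][:5])
```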

16.
This paper proposes an affine-invariant test extending the univariate Wilcoxon signed-rank test to the bivariate location problem. It gives two versions of the null distribution of the test statistic. The first version leads to a conditionally distribution-free test which can be used with any sample size. The second version can be used for larger sample sizes and has a limiting chi-squared distribution with two degrees of freedom under the null hypothesis. The paper investigates the relationship with a test proposed by Jan & Randles (1994). It shows that the Pitman efficiency of this test relative to the new test is equal to 1 for elliptical distributions but that the two tests are not necessarily equivalent for non-elliptical distributions. These facts are also demonstrated empirically in a simulation study. The new test has the advantage of not requiring the assumption of elliptical symmetry, which is needed to perform the asymptotic version of the Jan and Randles test.
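For reference, a run of the univariate Wilcoxon signed-rank test that the bivariate affine-invariant test extends; the paired differences are simulated, and the bivariate spatial-rank machinery itself is not sketched here.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(8)
diffs = rng.normal(0.3, 1.0, 40)         # hypothetical paired differences
stat, p = wilcoxon(diffs)                # H0: differences symmetric about zero
print(f"signed-rank statistic = {stat}, p-value = {p:.4f}")
```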

17.
Testing for differences between two groups is a fundamental problem in statistics, and due to developments in Bayesian nonparametrics and semiparametrics there has been renewed interest in approaches to this problem. Here we describe a new approach to developing such tests and introduce a class of tests that takes advantage of developments in Bayesian nonparametric computing. This class of tests uses the connection between the Dirichlet process (DP) prior and the Wilcoxon rank-sum test, but extends the idea to the DP mixture prior. Tests are developed that have appropriate frequentist sampling properties for large samples but have the potential to outperform the usual frequentist tests. Extensions to interval and right censoring are considered, and an application to a high-dimensional data set obtained from an RNA-Seq investigation demonstrates the practical utility of the method.
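The connection runs through the functional P(X < Y) that the Wilcoxon statistic estimates; at the limiting Bayesian-bootstrap end of the DP spectrum, its posterior can be sampled with Dirichlet weights, as in the sketch below. The simulated data are assumptions, and the DP mixture extension and censoring cases are not implemented.

```python
import numpy as np

def bayes_boot_p_less(x, y, draws=2000, rng=None):
    """Posterior draws of P(X < Y) with Bayesian-bootstrap (limiting DP)
    weights on both samples; assumes no ties across groups."""
    rng = rng or np.random.default_rng()
    less = (x[:, None] < y[None, :]).astype(float)
    out = np.empty(draws)
    for i in range(draws):
        wx = rng.dirichlet(np.ones(len(x)))
        wy = rng.dirichlet(np.ones(len(y)))
        out[i] = wx @ less @ wy
    return out

rng = np.random.default_rng(9)
post = bayes_boot_p_less(rng.normal(0, 1, 30), rng.normal(0.5, 1, 30), rng=rng)
lo, hi = np.quantile(post, [0.025, 0.975])
print(f"posterior mean of P(X<Y): {post.mean():.3f}  (95% interval {lo:.3f}-{hi:.3f})")
```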

18.
Recently, the world has experienced an increased number of major earthquakes. The Zagros belt is among the most seismically active mountain ranges in the world. Due to Kuwait's location southwest of the Zagros belt, it is affected by relative tectonic movements in the neighboring region. It is vital to assess the Zagros seismic risks in Kuwait using recent data and to coordinate with the competent authorities to reduce those risks. Using body-wave magnitude (Mb) data collected in Kuwait, we assess recent changes in earthquake magnitudes and their variation in Kuwait's vicinity. We build a change-point model to detect significant changes in its parameters. This paper applies a hierarchical Bayesian technique and derives the marginal posterior density function for the Mb. Our interest lies in identifying shifts in the mean at a single or multiple change points, as well as changes in the variation. Building upon the model and its parameters for the 2002–2003 data, we detected three change points, occurring in September 2002, April 2003 and August 2003, respectively.
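A minimal single-change-point sketch for a normal mean with known variance and flat priors on the segment means; the paper's hierarchical model for earthquake magnitudes, with multiple change points and variance shifts, is considerably richer, and the data here are simulated.

```python
import numpy as np

def changepoint_posterior(x, sigma=1.0):
    """Posterior over the location of one change in mean, flat priors on
    the two segment means, known variance sigma**2."""
    n = len(x)
    logp = np.full(n, -np.inf)
    for k in range(2, n - 1):
        a, b = x[:k], x[k:]
        rss = ((a - a.mean())**2).sum() + ((b - b.mean())**2).sum()
        # Log marginal after integrating the two means under flat priors
        logp[k] = -rss / (2.0 * sigma**2) - 0.5 * np.log(len(a) * len(b))
    p = np.exp(logp - logp.max())
    return p / p.sum()

rng = np.random.default_rng(10)
x = np.concatenate([rng.normal(0.0, 1.0, 40), rng.normal(1.5, 1.0, 40)])
print("MAP change point:", int(changepoint_posterior(x).argmax()))
```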

19.

In survival analysis, individuals may fail due to multiple causes of failure, a setting known as competing risks. Standard parametric models such as the Weibull model are not improper and therefore ignore the presence of multiple failure causes. In this study, a novel extension of the Weibull distribution is proposed that is improper and can therefore be incorporated into the competing risks framework. This model coincides with the original Weibull model before a pre-specified time point and takes an exponential form in the tail of the time axis. A Bayesian approach is used for parameter estimation. A simulation study performed to evaluate the proposed model showed identifiability and appropriate convergence. The proposed model and the three-parameter Gompertz model, another improper parametric distribution, are fitted to an acute lymphoblastic leukemia dataset.

20.
Recent changes in European family dynamics are often linked to common latent trends of economic and ideational change. Using Bayesian factor analysis, we extract three latent variables from eight socio-demographic indicators related to family formation, dissolution and the gender system, collected for 19 European countries at four time points (1970, 1980, 1990, 1998). The flexibility of the Bayesian approach allows us to introduce an innovative temporal factor model, adding the temporal dimension to traditional factor analysis. The underlying structure of the proposed Bayesian factor model reflects our idea of an autoregressive pattern in the latent variables across adjacent time periods. The results we obtain are consistent with current interpretations of European demographic trends.
