Similar Literature
20 similar documents found.
1.
Bayesian inclusion probabilities have become a popular tool for variable assessment. From a frequentist perspective, these probabilities are often difficult to evaluate, as typically no Type I error rates are considered and the power of the methods is not explored. This paper considers how a frequentist may evaluate Bayesian inclusion probabilities for screening predictors. The evaluation covers both unrestricted and restricted model spaces and develops a framework in which a frequentist can use inclusion probabilities while preserving Type I error rates. The framework is then applied to an analysis of Arabidopsis thaliana aimed at determining quantitative trait loci associated with cotyledon opening angle.
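A minimal sketch of how such a frequentist calibration could look, assuming a BIC-based approximation to posterior model probabilities and a global-null simulation (not the paper's exact procedure): enumerate the model space, compute each predictor's inclusion probability, and choose a screening threshold from the null distribution of the largest inclusion probability.

```python
# Hypothetical sketch: calibrate an inclusion-probability threshold so that
# screening predictors preserves a familywise Type I error rate under the null.
import itertools
import numpy as np

rng = np.random.default_rng(0)

def inclusion_probs(X, y):
    """BIC-approximated posterior inclusion probabilities over all 2^p submodels."""
    n, p = X.shape
    weights, included = [], []
    for subset in itertools.chain.from_iterable(
            itertools.combinations(range(p), k) for k in range(p + 1)):
        cols = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
        resid = y - cols @ np.linalg.lstsq(cols, y, rcond=None)[0]
        bic = n * np.log(resid @ resid / n) + cols.shape[1] * np.log(n)
        weights.append(np.exp(-0.5 * bic))
        included.append(set(subset))
    weights = np.array(weights) / np.sum(weights)
    return np.array([weights[[j in s for s in included]].sum() for j in range(p)])

# Calibration: distribution of the largest inclusion probability when no
# predictor is truly associated with the response.
n, p, alpha = 50, 5, 0.05
max_probs = []
for _ in range(200):                      # use more replicates in practice
    X, y = rng.standard_normal((n, p)), rng.standard_normal(n)
    max_probs.append(inclusion_probs(X, y).max())
threshold = np.quantile(max_probs, 1 - alpha)
print(f"screen a predictor only if its inclusion probability exceeds {threshold:.2f}")
```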

2.
3.
In the problem of parametric statistical inference with a finite parameter space, we propose some simple rules for defining posterior upper and lower probabilities directly from the observed likelihood function, without using any prior information. The rules satisfy the likelihood principle and a basic consistency principle ('avoiding sure loss'), they produce vacuous inferences when the likelihood function is constant, and they have other symmetry, monotonicity and continuity properties. One of the rules also satisfies fundamental frequentist principles. The rules can be used to eliminate nuisance parameters, and to interpret the likelihood function and to use it in making decisions. To compare the rules, they are applied to the problem of sampling from a finite population. Our results indicate that there are objective statistical methods which can reconcile three general approaches to statistical inference: likelihood inference, coherent inference and frequentist inference.
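One well-known rule of this general type, the relative-likelihood rule, takes the upper probability of a hypothesis to be the supremum of the normalized likelihood over it. The sketch below uses that rule purely for illustration (it is an assumption, not necessarily one of the paper's rules), with a binomial likelihood over a finite parameter grid.

```python
# Sketch of a likelihood-based upper/lower probability on a finite parameter
# space: upper P(A) = max relative likelihood over A, lower P(A) = 1 - upper P(A^c).
# The "relative likelihood" rule shown here is one classical choice and is used
# only to illustrate the idea; it need not coincide with the paper's rules.
import numpy as np
from scipy.stats import binom

theta = np.linspace(0.05, 0.95, 19)          # finite parameter grid
lik = binom.pmf(7, 10, theta)                # observed: 7 successes in 10 trials
rel_lik = lik / lik.max()                    # relative likelihood, sup = 1

def upper(A):
    A = np.asarray(A)
    return rel_lik[A].max() if A.any() else 0.0

def lower(A):
    return 1.0 - upper(~np.asarray(A))

A = theta > 0.5                              # hypothesis: success probability > 1/2
print(f"lower P(A) = {lower(A):.3f}, upper P(A) = {upper(A):.3f}")
# A constant likelihood gives lower = 0 and upper = 1: a vacuous inference.
```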

4.
We prove weak and strong laws of large numbers for coherent lower previsions, where the lower prevision of a random variable is given a behavioural interpretation as a subject's supremum acceptable price for buying it. Our laws are a consequence of the rationality criterion of coherence, and they can be proven under assumptions that are surprisingly weak when compared to the standard formulation of the laws in more classical approaches to probability theory.

5.
A Monte Carlo study was made of the effects of using simple linear regression, on the appropriate probability paper, to estimate parameters, quantiles and cumulative probability for several distributions. These distributions were the Normal, Weibull (shape parameters 1, 2, and 4) and the Type I largest extreme-value distributions. The specific objective was to observe differences arising from the choice of plotting positions. Plotting positions used were i/(n+1), (i-.3)/(n+.4), (i-.5)/n, either (i-.375)/(n+.25) or (i-.4)/(n+.2), and either F[E(Yi)] or F[E(ln Yi)]. For each combination of 4 sample sizes (n = 10(10)40), distribution, and plotting position, regression lines were found for each of N = 9999 samples. Each regression line was used to estimate: (1) quantiles of 9 specific probabilities, (2) probabilities of 9 specific quantiles, and (3) return periods corresponding to 9 specific quantiles. Comparison of the means, variances, mean square errors and medians of these estimates and of the regression coefficients confirms some results of Harter [Commun. Statist. A13(13), 1984] and provides further insight.
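A sketch of the kind of computation being compared, assuming a Weibull fit by least squares on Weibull probability paper with the (i - 0.3)/(n + 0.4) plotting position; any of the other positions can be substituted on the marked line.

```python
# Sketch: estimate Weibull parameters by simple linear regression on
# probability paper, using one of the plotting positions compared in the study.
import numpy as np

rng = np.random.default_rng(1)
shape_true, scale_true, n = 2.0, 10.0, 20
x = np.sort(scale_true * rng.weibull(shape_true, size=n))

i = np.arange(1, n + 1)
p = (i - 0.3) / (n + 0.4)                 # plotting position; try i/(n+1), (i-0.5)/n, ...

# Weibull probability paper: ln(-ln(1 - p)) is linear in ln(x),
# with slope = shape and intercept = -shape * ln(scale).
y = np.log(-np.log(1.0 - p))
slope, intercept = np.polyfit(np.log(x), y, 1)
shape_hat = slope
scale_hat = np.exp(-intercept / slope)
print(f"shape estimate {shape_hat:.2f}, scale estimate {scale_hat:.2f}")

# The fitted line also yields quantile and probability estimates, e.g. the
# quantile of probability q:
q = 0.9
print("0.9 quantile estimate:", scale_hat * (-np.log(1 - q)) ** (1 / shape_hat))
```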

6.
By representing fair betting odds according to one or more pairs of confidence set estimators, dual parameter distributions called confidence posteriors secure the coherence of actions without any prior distribution. This theory reduces to the maximization of expected utility when the pair of posteriors is induced by an exact or approximate confidence set estimator or when a reduction rule is applied to the pair. Unlike the p-value, the confidence posterior probability of an interval hypothesis is suitable as an estimator of the indicator of hypothesis truth since it converges to 1 if the hypothesis is true or to 0 otherwise.

7.
This paper sets out to identify the abilities that a person needs to be able to successfully use an experimental device, such as a probability wheel or balls in an urn, for the elicitation of subjective probabilities. It is assumed that the successful use of the device requires that the person elicits unique probability values that obey the standard probability laws. This leads to a definition of probability based on the idea of the similarity between the likeliness of events, and this concept is naturally extended to the idea that probabilities have strengths, which relates to information about the likeliness of an event that lies beyond a simple probability value. The latter notion is applied to the problem of explaining the Ellsberg paradox. To avoid the definition of probability being circular, probabilities are defined such that they depend on the choice of a reference set of events R which, in simple cases, corresponds to the raw outcomes produced by using an experimental device. However, it is shown that even when the events in R are considered as having an "equal chance" of occurring, the values and/or strengths of probabilities can still be affected by the choice of the set R.

8.
We propose a flexible method to approximate the subjective cumulative distribution function of an economic agent about the future realization of a continuous random variable. The method can closely approximate a wide variety of distributions while maintaining weak assumptions on the shape of distribution functions. We show how moments and quantiles of general functions of the random variable can be computed analytically and/or numerically. We illustrate the method by revisiting the determinants of income expectations in the United States. A Monte Carlo analysis suggests that a quantile-based flexible approach can be used to successfully deal with censoring and possible rounding levels present in the data. Finally, our analysis suggests that the performance of our flexible approach matches that of a correctly specified parametric approach and is clearly better than that of a misspecified parametric approach.
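A sketch of a quantile-based flexible approximation, assuming only that a respondent has supplied a few (probability, value) points; the monotone interpolation scheme and the example numbers below are illustrative, not the authors' specification.

```python
# Sketch: build a subjective CDF from a handful of elicited quantiles with a
# monotone interpolant, then compute moments of functions of the random
# variable numerically from the implied quantile function.
import numpy as np
from scipy.interpolate import PchipInterpolator
from scipy.integrate import trapezoid

# Elicited points: P(income <= x) = p at the following (illustrative) values.
p = np.array([0.0, 0.10, 0.25, 0.50, 0.75, 0.90, 1.0])
x = np.array([0.0, 18e3, 28e3, 45e3, 70e3, 105e3, 200e3])  # assumed support bounds

quantile = PchipInterpolator(p, x)          # monotone, passes through all points

grid = np.linspace(0.0, 1.0, 10001)
q = quantile(grid)
mean = trapezoid(q, grid)                   # E[X] = integral of Q(p) dp over [0, 1]
second = trapezoid(q**2, grid)              # E[X^2]; the same idea works for any g(X)
print(f"mean {mean:,.0f}, sd {np.sqrt(second - mean**2):,.0f}")
print("median:", float(quantile(0.5)))
```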

9.
When the attributes of data elements are uncertain, common clustering methods are not applicable. In recent years, much research has used fuzzy concepts to represent this uncertainty, but when the attributes have probabilistic distributions, the uncertainty cannot be interpreted by fuzzy theory. In this article, a new concept for clustering elements whose attributes have predefined probabilistic distributions is proposed, so that each observation is a member of a cluster with a certain probability. Two metaheuristic algorithms are applied to deal with the problem. Squared Euclidean distance is used to calculate the similarity of data elements to cluster centers. A sensitivity analysis shows that the proposed approach converges to the results of the classical approaches as the variance of each point tends to zero. Moreover, numerical analysis confirms that the proposed approach is efficient in clustering probabilistic data.
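A sketch of the core distance computation, assuming each element's attributes are given by a mean vector and a variance vector (e.g., independent attribute distributions). This is a k-means-style hard-assignment variant rather than the paper's metaheuristic, probability-membership formulation; the expected squared Euclidean distance is the distance between means plus the total variance, which makes the stated zero-variance convergence visible.

```python
# Sketch: k-means-style clustering of "probabilistic" points, where each point
# is a distribution with known mean vector mu_i and variance vector var_i.
# E||X_i - c||^2 = ||mu_i - c||^2 + sum(var_i), so assignments reduce to
# classical k-means on the means as the variances shrink to zero.
import numpy as np

def expected_sq_dist(mu, var, centers):
    d = ((mu[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d + var.sum(axis=1, keepdims=True)

def cluster(mu, var, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = mu[rng.choice(len(mu), size=k, replace=False)]
    for _ in range(iters):
        labels = expected_sq_dist(mu, var, centers).argmin(axis=1)
        centers = np.array([mu[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

rng = np.random.default_rng(42)
mu = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(5, 1, (30, 2))])
var = rng.uniform(0.0, 0.5, size=mu.shape)      # per-attribute variances
labels, centers = cluster(mu, var, k=2)
print(centers.round(2))
```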

10.

The systematic sampling (SYS) design (Madow and Madow, 1944) is widely used by statistical offices due to its simplicity and efficiency (e.g., Iachan, 1982). But it suffers from a serious defect, namely, that it is impossible to unbiasedly estimate the sampling variance (Iachan, 1982) and usual variance estimators (Yates and Grundy, 1953) are inadequate and can overestimate the variance significantly (Särndal et al., 1992). We propose a novel variance estimator which is less biased and that can be implemented with any given population order. We will justify this estimator theoretically and with a Monte Carlo simulation study.
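A sketch of the problem being addressed, assuming a 1-in-k systematic sample and using the classical successive-difference variance estimator as a stand-in; the paper's proposed estimator is not reproduced here.

```python
# Sketch: draw a systematic sample and estimate the variance of the sample
# mean with the successive-difference estimator, a common workaround for the
# fact that no unbiased design-based variance estimator exists under SYS.
import numpy as np

rng = np.random.default_rng(3)
N, n = 1000, 50
k = N // n
y = np.sort(rng.gamma(2.0, 10.0, size=N))        # ordered population (a hard case)

start = rng.integers(k)                          # random start in 0..k-1
sample = y[start::k][:n]
y_bar = sample.mean()

# Successive-difference estimator of Var(y_bar), with finite population correction.
sd2 = np.sum(np.diff(sample) ** 2) / (2 * (n - 1))
var_hat = (1 - n / N) * sd2 / n
print(f"estimated mean {y_bar:.2f}, estimated variance of the mean {var_hat:.4f}")

# Check: the true design variance over all k equally likely systematic samples.
means = np.array([y[s::k][:n].mean() for s in range(k)])
print("true design variance:", means.var())
```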

11.
We study distributional properties of generalized order statistics (gos) related by a random shift or scaling scheme in the continuous and discrete case, respectively. In the continuous case, we obtain new characterizations of distributions relating non-neighbouring gos, extending some results given in the literature for the neighbouring cases. On the other hand, in the discrete case, we investigate the existence and uniqueness of a discrete parent distribution supported on the integers whose gos are related by a random translation.

12.
This paper examines the nature of statistical methods from the perspective of logical reasoning. It finds that descriptive statistical methods belong to inductive logic, that parameter estimation in inferential statistics also belongs to inductive logic, that statistical hypothesis testing belongs to deductive logic, and that statistical forecasting combines the two. Whether the reasoning is statistical induction or statistical deduction, the conclusions it yields are accompanied by a certain probability scale; they are not fully certain and exhibit a probabilistic character.

13.
For multivariate probit models, Spiess and Tutz suggest three alternative performance measures, which are all based on the decomposition of the variation. The multivariate probit model can be seen as a special case of the discrete copula model. This paper proposes some new measures based on the value of the likelihood function and the prediction-realization table. In addition, it generalizes the measures from Spiess and Tutz for the discrete copula model. Results of a simulation study designed to compare the different measures in various situations are presented.

14.
First- and second-order reliability algorithms (FORM and SORM) have been adapted for use in modeling uncertainty and sensitivity related to flow in porous media. They are called reliability algorithms because they were developed originally for the analysis of structural reliability. FORM and SORM utilize a general joint probability model, the Nataf model, as a basis for transforming the original problem formulation into uncorrelated standard normal space, where a first-order or second-order estimate of the probability related to some failure criterion can easily be made. Sensitivity measures that incorporate the probabilistic nature of the uncertain variables are also evaluated, and are quite useful in indicating which uncertain variables contribute most to the probabilistic outcome. In this paper the reliability approach is reviewed, and its advantages and disadvantages are compared to those of other probabilistic techniques typically used for modeling flow and transport. Some example applications of FORM and SORM from recent research by the authors and others are reviewed. FORM and SORM have been shown to provide an attractive alternative to other probabilistic modeling techniques in some situations.
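A sketch of a first-order reliability (FORM) computation in standard normal space using the Hasofer–Lind–Rackwitz–Fiessler iteration; the limit-state function here is a made-up example, not one from the reviewed applications.

```python
# Sketch of FORM: find the most probable failure point (design point) in
# standard normal space with the HL-RF iteration, then approximate the
# failure probability as Phi(-beta), where beta is the distance of that
# point from the origin.
import numpy as np
from scipy.stats import norm

def g(u):
    # Illustrative limit state: "failure" when g(u) <= 0.
    return 3.0 - u[0] - 0.5 * u[1] ** 2

def grad_g(u, eps=1e-6):
    return np.array([(g(u + eps * e) - g(u - eps * e)) / (2 * eps)
                     for e in np.eye(len(u))])

u = np.zeros(2)
for _ in range(50):                       # HL-RF fixed-point iteration
    grad = grad_g(u)
    u_new = (grad @ u - g(u)) / (grad @ grad) * grad
    if np.linalg.norm(u_new - u) < 1e-8:
        u = u_new
        break
    u = u_new

beta = np.linalg.norm(u)
print(f"design point {u.round(3)}, beta = {beta:.3f}, Pf approx {norm.cdf(-beta):.4f}")
# Sensitivity: the unit vector u / beta ranks how much each standardized
# uncertain variable contributes to the probabilistic outcome.
```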

15.
This expository paper provides a framework for analysing de Finetti's representation theorem for exchangeable finitely additive probabilities. Such an analysis is justified by reasoning of a statistical nature, since it is shown that abandoning the axiom of σ-additivity has some noteworthy consequences for the common interpretation of the Bayesian paradigm. The usual (strong) formulation of de Finetti's theorem is deduced from the finitely additive (weak) formulation, and it is used to establish the existence of a stochastic process, with given finite-dimensional probability distributions, whose sample paths are probability distributions. This is of importance, in particular, for specifying prior distributions in nonparametric Bayesian inferential problems. Research partially supported by MPI (40% 1990, Gruppo Nazionale "Modelli Probabilistici e Statistica Matematica").

16.
We present global and local likelihood-based tests to evaluate stationarity in transition models. Three motivational studies are considered. A simulation study was carried out to assess the performance of the proposed tests. The results showed that they perform well, controlling the Type I error especially for ordinal responses and the Type II error especially for nominal responses. Asymptotically, their performance is close to that of the classical test. The tests can be executed in a single framework without the need to estimate the transition probabilities, can incorporate both categorical and continuous covariates, and can be used to identify sources of non-stationarity.
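As a baseline for what a stationarity test in a transition model does, here is the classical Anderson–Goodman likelihood-ratio test for time-homogeneity of a Markov chain, sketched on simulated data; the paper's own tests avoid estimating the transition probabilities and handle covariates, which this sketch does not.

```python
# Sketch: classical likelihood-ratio test of stationarity for a finite-state
# Markov chain: compare time-specific transition matrices against a single
# time-homogeneous matrix (Anderson-Goodman test).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(7)
m, T, n_subj = 3, 6, 400
P = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])            # true (stationary) transition matrix

# Simulate n_subj chains of length T, all starting in state 0.
states = np.zeros((n_subj, T), dtype=int)
for t in range(1, T):
    for i in range(n_subj):
        states[i, t] = rng.choice(m, p=P[states[i, t - 1]])

# Transition counts per time point and pooled over time.
counts_t = np.zeros((T - 1, m, m))
for t in range(1, T):
    np.add.at(counts_t[t - 1], (states[:, t - 1], states[:, t]), 1)
pooled = counts_t.sum(axis=0)
p_pooled = pooled / pooled.sum(axis=1, keepdims=True)

G2 = 0.0
for t in range(T - 1):
    row_tot = counts_t[t].sum(axis=1, keepdims=True)
    p_t = np.divide(counts_t[t], row_tot, out=np.zeros((m, m)), where=row_tot > 0)
    with np.errstate(divide="ignore", invalid="ignore"):
        term = counts_t[t] * np.log(np.where(counts_t[t] > 0, p_t / p_pooled, 1.0))
    G2 += 2 * np.nansum(term)

df = (T - 2) * m * (m - 1)                 # (T-1)m(m-1) free vs m(m-1) restricted
print(f"G2 = {G2:.2f}, df = {df}, p-value = {chi2.sf(G2, df):.3f}")
```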

17.
This article describes a new Monte Carlo method for the evaluation of orthant probabilities by sampling first passage times of a non-singular Gaussian discrete time series across an absorbing boundary. The procedure simulates several time-series sample paths and records their first crossing instants, so the computation of the orthant probabilities is traced back to the accurate simulation of a non-singular Gaussian discrete time series. Moreover, if the simulation is also efficient, this method is shown to be speedier than others proposed in the literature. As an example, we use the Davies–Harte algorithm in the evaluation of the orthant probabilities associated with the ARFIMA(0, d, 0) model. Test results are presented that compare this method with currently available software.
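A sketch of the basic idea with an AR(1) Gaussian series standing in for the ARFIMA(0, d, 0) model: simulate sample paths, record whether each stays above zero over the horizon (equivalently, whether its first passage below zero occurs after the horizon), and average.

```python
# Sketch: estimate the orthant probability P(X_1 > 0, ..., X_d > 0) for a
# stationary Gaussian series by Monte Carlo on first crossing times of the
# boundary 0. An AR(1) process is used here instead of ARFIMA(0, d, 0).
import numpy as np

rng = np.random.default_rng(11)
phi, d, n_paths = 0.6, 10, 200_000

# Stationary AR(1) paths: X_t = phi * X_{t-1} + e_t, with Var(X_t) = 1/(1 - phi^2).
x = np.empty((n_paths, d))
x[:, 0] = rng.normal(0.0, 1.0 / np.sqrt(1 - phi**2), n_paths)
for t in range(1, d):
    x[:, t] = phi * x[:, t - 1] + rng.normal(0.0, 1.0, n_paths)

# A path contributes to the orthant event iff its first passage below 0
# happens after time d, i.e. it never crosses the absorbing boundary.
never_crossed = (x > 0).all(axis=1)
est = never_crossed.mean()
se = np.sqrt(est * (1 - est) / n_paths)
print(f"P(all d values > 0) is approximately {est:.4f} +/- {2 * se:.4f}")
```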

18.
Received: August 5, 1999; revised version: June 14, 2000

19.
Modern desk calculators compute distribution functions for many of the standard tabled distributions. Two such machines and some of their capabilities are discussed. Generally, more is available from the calculators than is found in voluminous tables. One of the biggest advantages of the machines over tables arises from their capacity to compute probabilities for the two-parameter F distribution, a set of values that is cumbersome to tabulate.
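The same capability is a one-liner in a modern statistics library; a minimal illustration with SciPy (not the calculators discussed in the article).

```python
# Sketch: cumulative probabilities and quantiles of the two-parameter
# F distribution, the values the article notes are cumbersome to tabulate.
from scipy.stats import f

dfn, dfd = 4, 27
print(f.cdf(2.73, dfn, dfd))        # P(F <= 2.73)
print(f.sf(2.73, dfn, dfd))         # upper-tail probability
print(f.ppf(0.95, dfn, dfd))        # 95th percentile (critical value)
```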

20.
An objective of record linkage is to link two data files by identifying common elements. A popular model for this separation is the probabilistic one of Fellegi and Sunter. To estimate the parameters needed for the model, a mixture model is usually constructed and the EM algorithm is applied. For simplification, the assumption of conditional independence is often made: if several attributes of elements in the data are compared, the results of the comparisons on the several attributes are assumed independent within the mixture classes. A mixture model constructed with this assumption has often been used. This article introduces a straightforward extension of the model which allows for conditional dependencies but is heavily dependent on the choice of the starting value. Therefore, an estimation procedure for the EM algorithm's starting value is also proposed. The two models are compared empirically in a simulation study based on telephone book entries. In particular, the effect of different starting values and of conditional dependencies on the matching results is investigated.
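A sketch of the conditional-independence baseline that the article extends: a two-class mixture of independent Bernoulli comparison vectors fitted by EM. The sensitivity to starting values can be seen by changing `m_init` and `u_init`; the dependence-allowing extension and the starting-value procedure of the article are not reproduced, and the simulated data are illustrative rather than telephone-book entries.

```python
# Sketch: EM for the Fellegi-Sunter mixture under conditional independence.
# Each record pair yields a binary comparison vector gamma over K attributes;
# the model is a two-class mixture (match / non-match) of independent Bernoullis.
import numpy as np

def em_fellegi_sunter(gamma, m_init, u_init, p_init=0.1, iters=200):
    m, u, p = np.array(m_init, float), np.array(u_init, float), p_init
    for _ in range(iters):
        # E-step: posterior probability that each pair is a match.
        lm = (gamma * np.log(m) + (1 - gamma) * np.log(1 - m)).sum(axis=1)
        lu = (gamma * np.log(u) + (1 - gamma) * np.log(1 - u)).sum(axis=1)
        w = p * np.exp(lm) / (p * np.exp(lm) + (1 - p) * np.exp(lu))
        # M-step: update agreement probabilities and the match proportion.
        m = (w[:, None] * gamma).sum(axis=0) / w.sum()
        u = ((1 - w)[:, None] * gamma).sum(axis=0) / (1 - w).sum()
        p = w.mean()
    return m, u, p, w

# Simulated comparison vectors (illustrative only).
rng = np.random.default_rng(5)
K, n_match, n_non = 4, 300, 5000
true_m, true_u = np.array([0.95, 0.9, 0.85, 0.8]), np.array([0.1, 0.05, 0.2, 0.15])
gamma = np.vstack([rng.random((n_match, K)) < true_m,
                   rng.random((n_non, K)) < true_u]).astype(float)

m_init, u_init = [0.8] * 4, [0.2] * 4      # different starts can give different fits
m_hat, u_hat, p_hat, w = em_fellegi_sunter(gamma, m_init, u_init)
print("m:", m_hat.round(2), "u:", u_hat.round(2), "match share:", round(p_hat, 3))
```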

