Similar Articles
20 similar articles found.
1.
One important type of question in statistical inference is how to interpret data as evidence. The law of likelihood provides a satisfactory answer in interpreting data as evidence for simple hypotheses, but remains silent for composite hypotheses. This article examines how the law of likelihood can be extended to composite hypotheses within the scope of the likelihood principle. From a system of axioms, we conclude that the strength of evidence for a composite hypothesis should be represented by the interval between the lower and upper profile likelihoods. This article is intended to reveal the connection between profile likelihoods and the law of likelihood under the likelihood principle rather than to argue in favor of the use of profile likelihoods in addressing general questions of statistical inference. The interpretation of the result is also discussed.
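A minimal numerical sketch of this representation (the normal model, the nuisance treatment, and the hypothesis interval are illustrative assumptions of mine, not the article's): the evidence for a composite hypothesis H is reported as the range of the profile likelihood over H.

import numpy as np

def profile_loglik(theta, x):
    # Profile log-likelihood for a normal mean: the nuisance variance is
    # maximized out analytically at sigma2_hat = mean((x - theta)^2).
    n = len(x)
    sigma2_hat = np.mean((x - theta) ** 2)
    return -0.5 * n * (np.log(2 * np.pi * sigma2_hat) + 1)

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=50)

# Composite hypothesis H: theta in [0, 1]
grid = np.linspace(0.0, 1.0, 201)
prof = np.array([profile_loglik(t, x) for t in grid])

# Evidence for H reported as the interval between the lower and upper
# profile (log-)likelihoods over H.
print(f"[{prof.min():.2f}, {prof.max():.2f}]")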

2.
A multi‐level model allows the possibility of marginalization across levels in different ways, yielding more than one possible marginal likelihood. Since log‐likelihoods are often used in classical model comparison, the question to ask is which likelihood should be chosen for a given model. The authors employ a Bayesian framework to shed some light on qualitative comparison of the likelihoods associated with a given model. They connect these results to related issues of the effective number of parameters, penalty function, and consistent definition of a likelihood‐based model choice criterion. In particular, with a two‐stage model they show that, very generally, regardless of hyperprior specification or how much data is collected or what the realized values are, a priori, the first‐stage likelihood is expected to be smaller than the marginal likelihood. A posteriori, these expectations are reversed and the disparities worsen with increasing sample size and with increasing number of model levels.
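To fix notation for the comparison (the symbols below are mine): in a two-stage model with data y, first-stage parameter \(\theta\), and hyperparameter \(\eta\), the two candidate likelihoods are

\[
L_1(\theta) = f(y \mid \theta),
\qquad
L_2(\eta) = \int f(y \mid \theta)\,\pi(\theta \mid \eta)\,d\theta ,
\]

the first-stage likelihood and the marginal likelihood obtained by integrating the first stage against the second; the authors' comparison concerns the prior and posterior expectations of \(\log L_1\) and \(\log L_2\).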

3.
Data comprising colony counts, or a binary variable representing fertile (or sterile) samples, as a dilution series of the containing medium are analysed by using extended Poisson process modelling. These models form a class of flexible probability distributions that are widely applicable to count and grouped binary data. Standard distributions such as the Poisson and binomial, and those representing overdispersion and underdispersion relative to these distributions, can be expressed within this class. For all the models in the class, likelihoods can be obtained. These models have not been widely used because of the perceived difficulty of performing the calculations and the lack of associated software. Exact calculation of the probabilities that are involved can be time consuming, although accurate approximations that use considerably less computational time are available. Although dilution series data are the focus here, the models are applicable to any count or binary data. A benefit of the approach is the ability to draw likelihood-based inferences from the data.
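A minimal sketch of how such count probabilities can be computed (my construction, following the standard pure-birth formulation of extended Poisson process models, not the authors' software): the count distribution is the time-1 state distribution of a pure-birth process; constant birth rates recover the Poisson case, while increasing or decreasing rates give over- or underdispersion.

import numpy as np
from scipy.linalg import expm
from scipy.stats import poisson

def count_distribution(lam):
    # P(N = 0..m) at time 1 for a pure-birth process with birth rates
    # lam[0..m-1]; state m is absorbing, so m should be chosen large
    # enough that P(N = m) is negligible.
    m = len(lam)
    Q = np.zeros((m + 1, m + 1))
    for k in range(m):
        Q[k, k] = -lam[k]
        Q[k, k + 1] = lam[k]
    return expm(Q)[0]  # row for the process started in state 0

# Constant rates recover the Poisson distribution:
probs = count_distribution([2.0] * 30)
print(np.allclose(probs[:10], poisson.pmf(np.arange(10), 2.0), atol=1e-8))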

4.
A drawback of a new method for integrating abundance and mark–recapture–recovery data is the need to combine likelihoods describing the different data sets. Often these likelihoods will be formed by using specialist computer programs, which is an obstacle to the joint analysis. This difficulty is easily circumvented by the use of a multivariate normal approximation. We show that it is only necessary to make the approximation for the parameters of interest in the joint analysis. The approximation is evaluated on data sets for two bird species and is shown to be efficient and accurate.
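A sketch of the approximation in code (the numbers and the two-parameter model are invented for illustration; the paper's models are richer): one component likelihood, fitted separately by specialist software, is replaced by a multivariate normal in the parameters of interest, centred at that component's MLE with covariance given by the inverse observed information, and is then simply added to the other component's log-likelihood.

import numpy as np
from scipy.stats import multivariate_normal

# Output of the specialist program for the mark-recapture-recovery data
# (illustrative values): the component MLE and inverse observed information.
theta_hat = np.array([0.55, 0.12])
cov_hat = np.array([[4.0e-4, 1.0e-5],
                    [1.0e-5, 9.0e-5]])

def recovery_loglik_approx(theta):
    # Multivariate normal approximation to the component log-likelihood,
    # up to an additive constant that does not affect maximization.
    return multivariate_normal.logpdf(theta, mean=theta_hat, cov=cov_hat)

def joint_loglik(theta, abundance_loglik):
    # Joint analysis: exact abundance log-likelihood plus the approximated
    # mark-recapture-recovery component.
    return abundance_loglik(theta) + recovery_loglik_approx(theta)

print(joint_loglik(theta_hat, lambda t: 0.0))  # placeholder abundance term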

5.
In this paper, we propose new estimation techniques in connection with the system of S-distributions. Besides “exact” maximum likelihood (ML), we propose simulated ML and a characteristic function-based procedure. The “exact” and simulated likelihoods can be used to provide numerical, MCMC-based Bayesian inferences.

6.
Statistical inferences for probability distributions involving truncation parameters have received recent attention in the literature. One aspect of these inferences is the question of shortest confidence intervals for parameters or parametric functions of these models. The topic is a classical one, and the approach follows the usual theory. In all literature treatments the authors consider specific models and derive confidence intervals (not necessarily shortest). All of these models can, however, be considered as special cases of a more general one. The use of this model enables one to obtain easily shortest confidence intervals and unify the different approaches. In addition, it provides a useful technique for classroom presentation of the topic.
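A textbook special case illustrates the construction (this example is standard and is mine, not quoted from the article): for \(X_1,\dots,X_n\) i.i.d. Uniform\((0,\theta)\), the pivot \(X_{(n)}/\theta\) has density \(n t^{\,n-1}\) on \((0,1)\). Because this density is increasing, pushing the probability mass against the upper endpoint yields the shortest confidence interval:

\[
P_\theta\!\left(\alpha^{1/n} \le \frac{X_{(n)}}{\theta} \le 1\right) = 1-\alpha
\quad\Longrightarrow\quad
\theta \in \left[\,X_{(n)},\; \alpha^{-1/n} X_{(n)}\,\right].
\]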

7.
Relative surprise inferences are based on how beliefs change from a priori to a posteriori. As they are based on the posterior distribution of the integrated likelihood, inferences of this type are invariant under relabellings of the parameter of interest. The authors demonstrate that these inferences possess a certain optimality property. Further, they develop computational techniques for implementing them, provided that algorithms are available to sample from the prior and posterior distributions.
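In this framework the change in belief is measured by a ratio of posterior to prior densities (standard notation in this literature, not quoted from the paper):

\[
RB(\psi) \;=\; \frac{\pi(\psi \mid x)}{\pi(\psi)},
\]

where \(\pi(\psi \mid x)\) is based on the integrated likelihood for the parameter of interest \(\psi\); since a reparameterization of \(\psi\) multiplies numerator and denominator by the same Jacobian, the ratio, and hence the resulting inferences, are invariant under relabellings.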

8.
It is well known that curved exponential families can have multimodal likelihoods. We investigate the relationship between flat or multimodal likelihoods and model lack of fit, the latter measured by the score (Rao) test statistic W_U of the curved model as embedded in the corresponding full model. When data yield a locally flat or convex likelihood (root of multiplicity >1, terrace point, saddle point, local minimum), we provide a formula for W_U at such points, or a lower bound for it. The formula is related to the statistical curvature of the model, and it depends on the amount of Fisher information. We use three models as examples, including the Behrens–Fisher model, to see how a flat likelihood, etc., can by itself indicate a bad fit of the model. The results are related (dual) to classical results by Efron from 1978.
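For reference, the score statistic takes its usual form (this is the generic definition; the paper's contribution is its value and lower bound at flat points):

\[
W_U \;=\; U(\hat\theta_0)^{\top}\, I(\hat\theta_0)^{-1}\, U(\hat\theta_0),
\]

where \(U\) and \(I\) are the score vector and Fisher information of the full exponential family, both evaluated at the maximum likelihood estimate under the curved submodel.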

9.
The authors achieve robust estimation of parametric models through the use of weighted maximum likelihood techniques. A new estimator is proposed and its good properties illustrated through examples. Ease of implementation is an attractive property of the new estimator. The new estimator downweights with respect to the model and can be used for complicated likelihoods such as those involved in bivariate extreme value problems. New weight functions, tailored for these problems, are constructed. The increased insight provided by our robust fits to these bivariate extreme value models is exhibited through the analysis of sea levels at two East Coast sites in the United Kingdom.

10.
The focus of this paper is objective priors for spatially correlated data with nugget effects. In addition to the Jeffreys priors and commonly used reference priors, two types of “exact” reference priors are derived based on improper marginal likelihoods. An “equivalence” theorem is developed in the sense that the expectation of any function of the score functions of the marginal likelihood function can be taken under marginal likelihoods. Interestingly, these two types of reference priors are identical.

11.
Considerable attention has been directed in the statistical literature towards the construction of confidence bands for a simple linear regression model. These confidence bands allow the experimenter to make inferences about the model over a particular region of interest. However, in practice an experimenter will usually first check the significance of the regression line before proceeding with any further inferences such as those provided by the confidence bands. From a theoretical point of view, this raises the question of what the conditional confidence level of the confidence bands might be, and from a practical point of view it is unsatisfactory if the confidence bands contain lines that are inconsistent with the directional decision on the slope. In this paper it is shown how confidence bands can be modified to alleviate these two problems.

12.
Nowadays, Bayesian methods are routinely used for estimating parameters of item response theory (IRT) models. However, marginal likelihoods are still rarely used for comparing IRT models due to their complexity and the relatively high dimension of the model parameters. In this paper, we review Monte Carlo (MC) methods developed in the literature in recent years and provide a detailed development of how these methods are applied to IRT models. In particular, we focus on the “best possible” implementation of these MC methods for IRT models. These MC methods are used to compute the marginal likelihoods under the one-parameter IRT model with the logistic link (1PL model) and the two-parameter logistic IRT model (2PL model) for a real English examination dataset. We further use the widely applicable information criterion (WAIC) and deviance information criterion (DIC) to compare the 1PL and 2PL models. The 2PL model is favored by all three Bayesian model comparison criteria for the English examination data.
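As a point of reference for what such MC methods compute, here is the most naive estimator (my illustration, with invented item difficulties and responses; the paper's point is precisely that better implementations than this exist): the marginal likelihood of one examinee's responses under a 1PL model with known item difficulties, estimated by averaging the likelihood over draws from the N(0, 1) ability prior.

import numpy as np

rng = np.random.default_rng(1)
b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])  # item difficulties (assumed known)
y = np.array([1, 1, 1, 0, 0])              # one examinee's responses

def loglik(theta):
    # 1PL (Rasch-type) response probabilities with a logistic link.
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

thetas = rng.standard_normal(50_000)       # draws from the ability prior
m_hat = np.mean([np.exp(loglik(t)) for t in thetas])
print(f"marginal likelihood estimate: {m_hat:.5f}")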

13.
Bayesian inference for pairwise interacting point processes
Pairwise interacting point processes are commonly used to model spatial point patterns. To perform inference, the established frequentist methods can produce good point estimates when the interaction in the data is moderate, but some methods may produce severely biased estimates when the interaction is strong. Furthermore, because the sampling distributions of the estimates are unclear, interval estimates are typically obtained by parametric bootstrap methods. In the current setting, however, the behavior of such estimates is not well understood. In this article we propose Bayesian methods for obtaining inferences in pairwise interacting point processes. The requisite application of Markov chain Monte Carlo (MCMC) techniques is complicated by an intractable function of the parameters in the likelihood. The acceptance probability in a Metropolis-Hastings algorithm involves the ratio of two likelihoods evaluated at differing parameter values. The intractable functions do not cancel, and hence an intractable ratio r must be estimated within each iteration of a Metropolis-Hastings sampler. We propose the use of importance sampling techniques within MCMC to address this problem. While r may be estimated by other methods, these, in general, are not readily applied in a Bayesian setting. We demonstrate the validity of our importance sampling approach with a small simulation study. Finally, we analyze the Swedish pine sapling dataset (Strand 1972) and contrast the results with those in the literature.
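A schematic of this construction (the interfaces and names are placeholders of mine, not the authors' code): the intractable ratio of normalizing constants entering the acceptance probability is replaced, at every iteration, by an importance-sampling estimate built from draws generated under the current parameter value.

import numpy as np

def estimate_Z_ratio(q_unnorm, theta_new, theta_old, samples):
    # Z(theta_new) / Z(theta_old) estimated by importance sampling: the
    # weights are ratios of *unnormalized* densities at x ~ p(. | theta_old).
    w = [q_unnorm(x, theta_new) / q_unnorm(x, theta_old) for x in samples]
    return np.mean(w)

def mh_step(theta, x_obs, q_unnorm, log_prior, propose, sample_model, rng, S=200):
    # One Metropolis-Hastings step (a symmetric proposal is assumed).
    theta_new = propose(theta, rng)
    samples = [sample_model(theta, rng) for _ in range(S)]  # e.g. via an inner MCMC run
    r_hat = estimate_Z_ratio(q_unnorm, theta_new, theta, samples)
    # Acceptance ratio: the analytic terms use unnormalized likelihoods, and
    # the estimated r_hat stands in for the intractable Z(theta_new)/Z(theta).
    log_alpha = (np.log(q_unnorm(x_obs, theta_new)) + log_prior(theta_new)
                 - np.log(q_unnorm(x_obs, theta)) - log_prior(theta)
                 - np.log(r_hat))
    return theta_new if np.log(rng.uniform()) < log_alpha else theta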

14.
How often would investigators be misled if they took advantage of the likelihood principle and used likelihood ratios—which need not be adjusted for multiple looks at the data—to frequently examine accumulating data? The answer, perhaps surprisingly, is not often. As expected, the probability of observing misleading evidence does increase with each additional examination. However, the amount by which this probability increases converges to zero as the sample size grows. As a result, the probability of observing misleading evidence remains bounded—and therefore controllable—even with an infinite number of looks at the data. Here we use boundary crossing results to detail how often misleading likelihood ratios arise in sequential designs. We find that the probability of observing a misleading likelihood ratio is often much less than its universal bound. Additionally, we find that in the presence of fixed-dimensional nuisance parameters, profile likelihoods are to be preferred over estimated likelihoods which result from replacing the nuisance parameters by their global maximum likelihood estimates.
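The universal bound alluded to here has a one-line derivation via Markov's inequality (standard, not specific to this paper): under \(H_0\),

\[
P_{H_0}\!\left(\frac{L_1(X)}{L_0(X)} \ge k\right)
\;\le\; \frac{1}{k}\,E_{H_0}\!\left[\frac{L_1(X)}{L_0(X)}\right]
\;=\; \frac{1}{k}\int \frac{f_1(x)}{f_0(x)}\,f_0(x)\,dx
\;=\; \frac{1}{k},
\]

so the probability of misleading evidence at threshold \(k\) never exceeds \(1/k\) for a single look; the paper quantifies how far below this bound the probability stays under repeated looks.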

15.
In a previous study, the effect of rounding on classical statistical techniques was considered. Here, we consider how rounded data may affect the posterior distribution and, thus, any Bayesian inferences made. The results in this paper indicate that Bayesian inferences can be sensitive to the rounding process.
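One standard way to make the rounding explicit in the posterior (a sketch under my assumption of rounding to the nearest multiple of a width \(h\); the paper's formulation may differ): each recorded value contributes an interval probability instead of a density ordinate,

\[
\pi(\theta \mid x) \;\propto\; \pi(\theta)\,\prod_{i=1}^{n}
\Bigl[F_\theta\!\bigl(x_i + \tfrac{h}{2}\bigr) - F_\theta\!\bigl(x_i - \tfrac{h}{2}\bigr)\Bigr],
\]

and the sensitivity in question is the gap between inferences drawn from this posterior and from the usual density-based one, \(\pi(\theta)\prod_i f_\theta(x_i)\).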

16.
The underlying statistical concept that animates empirical strategies for extracting causal inferences from observational data is that observational data may be adjusted to resemble data that might have originated from a randomized experiment. This idea has driven the literature on matching methods. We explore an unmined idea for making causal inferences with observational data: that any given observational study may contain a large number of indistinguishably balanced matched designs. We demonstrate how the absence of a unique best solution presents an opportunity for greater information retrieval in causal inference analysis, based on the principle that many solutions teach us more about a given scientific hypothesis than a single study and improve our discernment with observational studies. The implementation can be achieved by integrating the statistical theory and models within a computational optimization framework.

17.
Poisson regression and case-crossover are frequently used methods to estimate transient risks of environmental exposures such as particulate air pollution on acute events such as mortality. Roughly speaking, a case-crossover design results from a Poisson regression by conditioning on the total number of failures. We show that the case-crossover design is somewhat more generally applicable than Poisson regression. Stratification in the case-crossover design is analogous to Poisson regression with dummy variables, or to a marked Poisson regression. Poisson regression makes it possible to express case-crossover likelihood functions as multinomial likelihoods without making reference to cases, controls, or matching. This derivation avoids the counterintuitive notion of basing inferences on exposures that occur post-failure.
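The conditioning step mentioned here is the classical Poisson–multinomial relationship:

\[
N_j \sim \operatorname{Poisson}(\lambda_j)\ \text{independently}
\;\Longrightarrow\;
(N_1,\dots,N_J)\,\Bigl|\,\sum_{j} N_j = n
\;\sim\; \operatorname{Multinomial}\!\Bigl(n;\; \tfrac{\lambda_1}{\sum_k \lambda_k},\dots,\tfrac{\lambda_J}{\sum_k \lambda_k}\Bigr),
\]

which is how a case-crossover likelihood emerges as a multinomial likelihood from a Poisson regression.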

18.
Numerical methods are needed to obtain maximum-likelihood estimates (MLEs) in many problems. Computation time can be an issue for some likelihoods even with modern computing power. We consider one such problem where the assumed model is a random-clumped multinomial distribution. We compute MLEs for this model in parallel using the Toolkit for Advanced Optimization software library. The computations are performed on a distributed-memory cluster with a low-latency interconnect. We demonstrate that for larger problems, scaling up the number of processes improves wall-clock time significantly. An illustrative example shows how parallel MLE computation can be useful in a large data analysis. Our experience with a direct numerical approach indicates that more substantial gains may be obtained by making use of the specific structure of the random-clumped model.
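A generic sketch of the idea, not the paper's TAO-based implementation (the normal model, chunk count, and optimizer below are illustrative assumptions): because a log-likelihood for independent observations is a sum, each evaluation inside the optimizer can be split into partial sums computed by worker processes.

import numpy as np
from multiprocessing import Pool
from scipy.optimize import minimize
from scipy.stats import norm

def chunk_negloglik(args):
    # Partial negative log-likelihood for one chunk of the data.
    mu, log_sigma, chunk = args
    return -np.sum(norm.logpdf(chunk, loc=mu, scale=np.exp(log_sigma)))

if __name__ == "__main__":
    data = np.random.default_rng(2).normal(1.0, 2.0, size=500_000)
    chunks = np.array_split(data, 8)
    with Pool(processes=8) as pool:
        def negloglik(params):
            tasks = [(params[0], params[1], c) for c in chunks]
            return sum(pool.map(chunk_negloglik, tasks))
        fit = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
    print(fit.x)  # roughly [1.0, log 2.0] = [1.0, 0.693]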

19.
A stochastic epidemic model is defined in which each individual belongs to a household, a secondary grouping (typically school or workplace) and also the community as a whole. Moreover, infectious contacts take place in these three settings according to potentially different rates. For this model, we consider how different kinds of data can be used to estimate the infection rate parameters with a view to understanding what can and cannot be inferred. Among other things we find that temporal data can be of considerable inferential benefit compared with final size data, that the degree of heterogeneity in the data can have a considerable effect on inference for non‐household transmission, and that inferences can be materially different from those obtained from a model with only two levels of mixing. We illustrate our findings by analysing a highly detailed dataset concerning a measles outbreak in Hagelloch, Germany.

20.
Hypoelliptic diffusion processes can be used to model a variety of phenomena in applications ranging from molecular dynamics to audio signal analysis. We study parameter estimation for such processes in situations where we observe some components of the solution at discrete times. Since exact likelihoods for the transition densities are typically not known, approximations are used that are expected to work well in the limit of small intersample times Δt and large total observation times NΔt. Hypoellipticity together with partial observation leads to ill conditioning, requiring a judicious combination of approximate likelihoods for the various parameters to be estimated. We combine these in a deterministic scan Gibbs sampler alternating between missing data in the unobserved solution components and parameters. Numerical experiments illustrate asymptotic consistency of the method when applied to simulated data. The paper concludes with an application of the Gibbs sampler to molecular dynamics data.
