Search results: 5 articles (Sociology 1, Statistics 4; by year: 2008 x1, 2007 x1, 2002 x2, 1990 x1).
1.
Research and operational applications in weather forecasting are reviewed, with emphasis on statistical issues. It is argued that the deterministic approach has dominated weather forecasting, although forecasting is by nature a probabilistic problem. The reason has been the successful application of numerical weather prediction techniques over the 50 years since the introduction of computers. A gradual change towards more probabilistic methods has occurred over the last decade; in particular, meteorological data assimilation, ensemble forecasting and post-processing of model output have been influenced by ideas from statistics and control theory.
2.
Confidence intervals for a single parameter are spanned by quantiles of a confidence distribution, and one-sided p-values are cumulative confidences. Confidence distributions are thus a unifying format for representing frequentist inference for a single parameter. The confidence distribution, which depends on the data, is exact (unbiased) when its cumulative distribution function, evaluated at the true parameter, is uniformly distributed over the unit interval. A new version of the Neyman–Pearson lemma is given, showing that the confidence distribution based on the natural statistic in exponential models with continuous data is less dispersed than all other confidence distributions, regardless of how dispersion is measured. Approximations are necessary for discrete data, and also in many models with nuisance parameters. Approximate pivots might then be useful. A pivot based on a scalar statistic determines a likelihood in the parameter of interest along with a confidence distribution. This proper likelihood is reduced of all nuisance parameters and is appropriate for meta-analysis and for updating of information. The reduced likelihood is generally different from the confidence density. Confidence distributions and reduced likelihoods are rooted in Fisher–Neyman statistics. This frequentist methodology has many of the Bayesian attractions, and the two approaches are briefly compared. Concepts, methods and techniques of this brand of Fisher–Neyman statistics are presented. Asymptotics and bootstrapping are used to find pivots and their distributions, and hence reduced likelihoods and confidence distributions. A simple method of inverting bootstrap distributions to obtain approximate pivots of the abc type is proposed. Our material is illustrated in a number of examples and in an application to multiple capture data for bowhead whales.
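
As an illustrative aside (a minimal Python sketch, not taken from the paper): for the textbook case of a normal mean with known standard deviation, the confidence distribution is C(mu) = Phi(sqrt(n) (mu - xbar) / sigma); its quantiles span the usual confidence intervals, and C evaluated at the true mu is uniform over repeated samples, which is the exactness property described above. All function names here are of my own choosing.

    # Minimal sketch (assumed textbook example, not the paper's own code):
    # the confidence distribution for a normal mean with known sigma.
    import numpy as np
    from scipy.stats import norm

    def confidence_distribution(x, sigma, mu_grid):
        """Cumulative confidence C(mu) = Phi(sqrt(n) * (mu - xbar) / sigma)."""
        n, xbar = len(x), np.mean(x)
        return norm.cdf(np.sqrt(n) * (mu_grid - xbar) / sigma)

    def confidence_interval(x, sigma, level=0.95):
        """Equal-tailed interval read off as quantiles of the confidence distribution."""
        n, xbar = len(x), np.mean(x)
        alpha = 1.0 - level
        z = norm.ppf([alpha / 2.0, 1.0 - alpha / 2.0])
        return xbar + z * sigma / np.sqrt(n)

    rng = np.random.default_rng(0)
    x = rng.normal(loc=2.0, scale=1.0, size=50)
    print(confidence_interval(x, sigma=1.0))                           # 95% interval for mu
    print(confidence_distribution(x, 1.0, np.array([1.8, 2.0, 2.2])))  # cumulative confidences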
3.
The traditional Cox proportional hazards regression model uses an exponential relative risk function. We argue that under various plausible scenarios the relative risk part of the model should be bounded, suggesting also that the traditional model often might overdramatize the hazard rate assessment for individuals with unusual covariates. This motivates our working with proportional hazards models where the relative risk function takes a logistic form. We provide frequentist methods, based on the partial likelihood, and then go on to semiparametric Bayesian constructions. These involve a Beta process for the cumulative baseline hazard function and, for the regression coefficients, any prior with a density, for example one dictated by a Jeffreys-type argument. The posterior is derived using machinery for Lévy processes, and a simulation recipe is devised for sampling from the posterior distribution of any quantity. Our methods are illustrated on real data. A Bernstein–von Mises theorem is reached for our class of semiparametric priors, guaranteeing asymptotic normality of the posterior processes.
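
To make the boundedness argument concrete, here is a minimal sketch contrasting the two relative risk shapes in h(t | x) = h0(t) r(x'beta); the particular logistic parametrization r(u) = exp(u) / (1 + exp(u)) is an assumption for illustration and may differ in detail from the authors' specification.

    # Illustrative sketch only: an unbounded exponential relative risk versus a
    # bounded logistic one. The exact logistic form used in the paper may differ.
    import numpy as np

    def relative_risk_exponential(u):
        """Traditional Cox form: grows without bound in the linear predictor u."""
        return np.exp(u)

    def relative_risk_logistic(u):
        """A bounded alternative: stays below 1 however extreme u becomes."""
        return np.exp(u) / (1.0 + np.exp(u))

    u = np.linspace(-4.0, 4.0, 9)
    print(relative_risk_exponential(u))  # explodes for large u
    print(relative_risk_logistic(u))     # levels off, tempering unusual covariates

For an individual with an extreme covariate profile, the exponential form multiplies the baseline hazard by an arbitrarily large factor, whereas a logistic form caps the multiplier; this is the "overdramatizing" contrast the abstract points to.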
4.
5.
In many spatial and spatial-temporal models, and more generally in models with complex dependencies, it may be too difficult to carry out full maximum-likelihood (ML) analysis. Remedies include the use of pseudo-likelihood (PL) and quasi-likelihood (QL), the latter also called the composite likelihood. The present paper studies the ML, PL and QL methods for general Markov chain models, partly motivated by the desire to understand the precise behaviour of the PL and QL methods in settings where this can be analysed. We present limiting normality results and compare performances in different settings. For Markov chain models, the PL and QL methods can be seen as maximum penalized likelihood methods. We find that QL is typically preferable to PL, and that it loses very little to ML while sometimes gaining in model robustness. It also has appeal and potential as a modelling tool. Our methods are illustrated for consonant-vowel transitions in poetry and for the analysis of DNA sequence evolution-type models.
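
For concreteness, here is a hedged Python sketch of the three criteria for a first-order Markov chain on a finite state space; the pairwise form chosen for the composite likelihood and all names are illustrative assumptions, not the paper's notation.

    # Hedged sketch: ML, pairwise composite (QL) and pseudo (PL) log-likelihoods
    # for a first-order Markov chain with transition matrix P, stationary
    # distribution pi and observed state sequence x.
    import numpy as np

    def log_ml(P, pi, x):
        """Full log-likelihood: initial state, then each transition."""
        ll = np.log(pi[x[0]])
        for a, b in zip(x[:-1], x[1:]):
            ll += np.log(P[a, b])
        return ll

    def log_ql(P, pi, x):
        """Pairwise composite log-likelihood: joint density pi[a] * P[a, b]
        of every consecutive pair, summed along the chain."""
        return sum(np.log(pi[a] * P[a, b]) for a, b in zip(x[:-1], x[1:]))

    def log_pl(P, pi, x):
        """Pseudo log-likelihood: each interior state conditioned on both
        of its neighbours, using the Markov property."""
        m, ll = len(pi), 0.0
        for t in range(1, len(x) - 1):
            a, b, c = x[t - 1], x[t], x[t + 1]
            cond = P[a, b] * P[b, c] / sum(P[a, k] * P[k, c] for k in range(m))
            ll += np.log(cond)
        return ll

    P = np.array([[0.7, 0.3], [0.4, 0.6]])  # toy two-state transition matrix
    pi = np.array([4.0 / 7.0, 3.0 / 7.0])   # its stationary distribution
    x = [0, 0, 1, 0, 1, 1, 0]
    print(log_ml(P, pi, x), log_ql(P, pi, x), log_pl(P, pi, x))

The three criteria weight the data differently, which is the kind of difference the paper's efficiency and robustness comparisons quantify.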