Search results: 19 articles in total (19 subscription full text, 0 free). By subject: Statistics, 17; Theory and methodology, 2. By year: 2015 (1), 2014 (1), 2013 (3), 2010 (3), 2009 (1), 2008 (1), 2006 (2), 2004 (1), 1999 (1), 1998 (1), 1993 (1), 1990 (1), 1985 (1), 1984 (1).
1.
2.
We explore the application of dynamic graphics to the exploratory analysis of spatial data. We introduce a number of new tools and illustrate their use with prototype software developed at Trinity College, Dublin. These tools are used to examine local variability (anomalies) through plots of the data that display its marginal and multivariate distributions, through interactive smoothers, and through plots motivated by the spatial auto-covariance ideas implicit in the variogram. We regard these as alternative and linked views of the data. We conclude that the most important single view is the Map View: all other views must be cross-referred to it, and the software must encourage this. The Map View can be enriched by overlaying other pertinent spatial information. We draw attention to the possibilities of one-to-many linking, to the use of line objects to link pairs of data points, and to the parallels with work on Geographical Information Systems.
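The prototype software is not reproduced here, but the variogram-motivated view and the idea of linking pairs of data points back to the Map View can be illustrated with a short, self-contained sketch. Everything in it (the synthetic data, the binning, the variable names) is an assumption made for illustration, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic spatial data: site coordinates and an attribute value per site.
n = 60
coords = rng.uniform(0.0, 10.0, size=(n, 2))
values = np.sin(coords[:, 0] / 2.0) + 0.3 * rng.standard_normal(n)

# Variogram cloud: for every pair (i, j), the separation distance and
# half the squared difference of the attribute values.
i_idx, j_idx = np.triu_indices(n, k=1)
dists = np.linalg.norm(coords[i_idx] - coords[j_idx], axis=1)
gammas = 0.5 * (values[i_idx] - values[j_idx]) ** 2

# Empirical variogram: average the cloud within distance bins.
bins = np.linspace(0.0, dists.max(), 11)
which_bin = np.digitize(dists, bins)
emp_variogram = [gammas[which_bin == b].mean()
                 for b in range(1, len(bins)) if np.any(which_bin == b)]

# "Line objects": the pairs of sites behind the largest cloud values,
# which an interactive tool could highlight on the Map View.
top = np.argsort(gammas)[-5:]
anomalous_pairs = list(zip(i_idx[top], j_idx[top]))

print("empirical variogram:", np.round(emp_variogram, 3))
print("site pairs with largest dissimilarity:", anomalous_pairs)
```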
3.
Bayesian palaeoclimate reconstruction
Summary. We consider the problem of reconstructing prehistoric climates by using fossil data that have been extracted from lake sediment cores. Such reconstructions promise to provide one of the few ways to validate modern models of climate change. A hierarchical Bayesian modelling approach is presented and its use, inversely, is demonstrated in a relatively small but statistically challenging exercise: the reconstruction of prehistoric climate at Glendalough in Ireland from fossil pollen. This computationally intensive method extends current approaches by explicitly modelling uncertainty and by reconstructing entire climate histories. The statistical issues that are raised relate to the use of compositional data (pollen) with covariates (climate) which are available at many modern sites but are missing for the fossil data. The compositional data arise as mixtures and the missing covariates have a temporal structure. Novel aspects of the analysis include a spatial process model for compositional data, local modelling of lattice data, the use, as a prior, of a random walk with long-tailed increments, a two-stage implementation of the Markov chain Monte Carlo approach and a fast approximate procedure for cross-validation in inverse problems. We present some details, contrasting the method's reconstructions with those generated by a method in current use in the palaeoclimatology literature. We suggest that it provides a basis for resolving important challenges in palaeoclimate research, and we draw attention to several statistical issues that remain to be overcome.
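The full hierarchical model is far beyond a short example, but the prior mentioned above, a random walk with long-tailed increments, is easy to illustrate. The sketch below, with made-up scale and degrees-of-freedom values, simply contrasts such a prior (Student-t increments) with a Gaussian random walk: the long-tailed version gives non-negligible prior mass to occasional abrupt climate shifts.

```python
import numpy as np

rng = np.random.default_rng(1)

# Prior simulation for a climate history on a grid of T time slices.
# The scale and degrees of freedom below are illustrative assumptions.
T, scale, df = 200, 0.1, 3.0

# Gaussian random-walk prior: increments are Normal(0, scale^2).
gaussian_walk = np.cumsum(scale * rng.standard_normal(T))

# Long-tailed random-walk prior: increments are scaled Student-t (df = 3),
# so occasional large jumps receive non-negligible prior probability.
heavy_walk = np.cumsum(scale * rng.standard_t(df, size=T))

# The heavy-tailed prior typically shows a few much larger single steps.
print("largest |increment|, Gaussian :", np.max(np.abs(np.diff(gaussian_walk))))
print("largest |increment|, Student-t:", np.max(np.abs(np.diff(heavy_walk))))
```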
4.
Summary. We propose a new and simple continuous Markov monotone stochastic process and use it to make inference on a partially observed monotone stochastic process. The process is piecewise linear, based on additive independent gamma increments arriving in a Poisson fashion. An independent increments variation allows very simple conditional simulation of sample paths given known values of the process. We take advantage of a reparameterization involving the Tweedie distribution to provide efficient computation. The motivating problem is the establishment of a chronology for samples taken from lake sediment cores, i.e. the attribution of a set of dates to samples of the core given their depths, knowing that the age–depth relationship is monotone. The chronological information arises from radiocarbon (14C) dating at a subset of depths. We use the process to model the stochastically varying rate of sedimentation.
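As a rough illustration (not the authors' implementation, and with illustrative parameter values only), the sketch below simulates a piecewise-linear monotone path by accumulating independent gamma increments at Poisson-distributed depths and interpolating linearly between them; age then increases monotonically with depth, as required of a chronology.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_monotone_path(depth_max=10.0, rate=2.0, shape=1.5, scale=0.8):
    """Piecewise-linear monotone path: gamma jumps at Poisson event depths.

    rate         : intensity of the Poisson process of jump depths
    shape, scale : parameters of the i.i.d. gamma increments
    (all values here are illustrative assumptions, not the paper's).
    """
    n_events = rng.poisson(rate * depth_max)
    event_depths = np.sort(rng.uniform(0.0, depth_max, size=n_events))
    increments = rng.gamma(shape, scale, size=n_events)

    # Knots of the piecewise-linear path: cumulative age at each event depth.
    knot_depths = np.concatenate(([0.0], event_depths, [depth_max]))
    knot_ages = np.concatenate(([0.0], np.cumsum(increments),
                                [np.sum(increments)]))

    def age_at(depth):
        # Linear interpolation between knots keeps the path monotone.
        return np.interp(depth, knot_depths, knot_ages)

    return age_at

age_at = simulate_monotone_path()
depths = np.linspace(0.0, 10.0, 6)
print(np.round(age_at(depths), 3))   # ages are non-decreasing in depth
```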
5.
6.
We consider equalities between the ordinary least squares estimator ($\mathrm{OLSE}$), the best linear unbiased estimator ($\mathrm{BLUE}$) and the best linear unbiased predictor ($\mathrm{BLUP}$) in the general linear model $\{\mathbf{y}, \mathbf{X}\boldsymbol{\beta}, \mathbf{V}\}$ extended with the new unobservable future value $\mathbf{y}_{*}$ of the response, whose expectation is $\mathbf{X}_{*}\boldsymbol{\beta}$. Our aim is to provide some new insight and new proofs for the equalities under consideration. We also collect together various expressions, without rank assumptions, for the $\mathrm{BLUP}$, and provide new results giving upper bounds for the Euclidean norm of the difference between $\mathrm{BLUP}(\mathbf{y}_{*})$ and $\mathrm{BLUE}(\mathbf{X}_{*}\boldsymbol{\beta})$ and between $\mathrm{BLUP}(\mathbf{y}_{*})$ and $\mathrm{OLSE}(\mathbf{X}_{*}\boldsymbol{\beta})$. A remark is made on the application to small area estimation.
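The upper bounds themselves are not reproduced here. The sketch below, which assumes a nonsingular covariance matrix so that the familiar generalised-least-squares forms apply, simply computes the three quantities in a small simulated instance of the extended model and evaluates the two Euclidean-norm differences that the bounds concern.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, p = 30, 3, 4                      # observed rows, future rows, parameters

X = rng.standard_normal((n, p))
X_star = rng.standard_normal((m, p))
beta = rng.standard_normal(p)

# A positive-definite covariance for the joint vector (y, y_star); this
# particular construction is an arbitrary illustrative assumption.
A = rng.standard_normal((n + m, n + m))
V_full = A @ A.T + 0.5 * np.eye(n + m)
V, V21 = V_full[:n, :n], V_full[n:, :n]

# One realisation of (y, y_star) from the extended model.
L = np.linalg.cholesky(V_full)
eps = L @ rng.standard_normal(n + m)
y, y_star = X @ beta + eps[:n], X_star @ beta + eps[n:]

# OLSE and BLUE (generalised least squares; V assumed nonsingular here).
Vinv = np.linalg.inv(V)
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
beta_blue = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)

# BLUP of the unobserved y_star: BLUE of X_star beta plus the regression
# of the prediction error on the observed GLS residual.
blup = X_star @ beta_blue + V21 @ Vinv @ (y - X @ beta_blue)

print("||BLUP(y*) - BLUE(X* beta)|| =",
      np.linalg.norm(blup - X_star @ beta_blue))
print("||BLUP(y*) - OLSE(X* beta)|| =",
      np.linalg.norm(blup - X_star @ beta_ols))
```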
7.
8.
In this paper we consider the estimation of regression coefficients in two partitioned linear models that differ only in their covariance matrices. We call these the full models, and associate with each a corresponding small model. We give a necessary and sufficient condition for the equality between the best linear unbiased estimators (BLUEs) of $\mathbf{X}_1\boldsymbol{\beta}_1$ under the two full models. In particular, we consider the equality of the BLUEs under the full models assuming that they are equal under the small models.
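The paper's necessary and sufficient condition is not reproduced here. As a numerical sanity check built on a different, classical sufficient condition (Rao's: replacing V1 by V2 = V1 + XAX' with A nonnegative definite leaves the BLUE unchanged), the sketch below shows two full models with different covariance matrices nevertheless yielding the same BLUE of X1β1; the design is assumed to have full column rank and the covariance matrices to be nonsingular.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p1, p2 = 40, 3, 2

X1 = rng.standard_normal((n, p1))
X2 = rng.standard_normal((n, p2))
X = np.hstack([X1, X2])                 # full-column-rank design (assumed)

# First covariance matrix V1 (positive definite).
B = rng.standard_normal((n, n))
V1 = B @ B.T + np.eye(n)

# Second covariance matrix built via Rao's sufficient condition
# V2 = V1 + X A X' with A nonnegative definite, which preserves the BLUE.
C = rng.standard_normal((p1 + p2, p1 + p2))
V2 = V1 + X @ (C @ C.T) @ X.T

# Any response vector will do: the equality below is an algebraic identity.
y = X @ rng.standard_normal(p1 + p2) + B @ rng.standard_normal(n)

def blue_of_X1beta1(V):
    """GLS in the full model, then keep the X1 beta1 component."""
    Vinv = np.linalg.inv(V)
    beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
    return X1 @ beta[:p1]

diff = np.linalg.norm(blue_of_X1beta1(V1) - blue_of_X1beta1(V2))
print("||BLUE of X1 beta1 under V1 - under V2|| =", diff)  # numerically negligible
```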
9.
This paper presents a simple computational procedure for generating ‘matching’ or ‘cloning’ datasets so that they have exactly the same fitted multiple linear regression equation. The method is simple to implement and provides an alternative to generating datasets under an assumed model. The advantage is that, unlike the straightforward model-based alternative, parameter estimates from the original and the generated data do not include any model error. This distinction suggests that ‘same fit’ procedures may provide a general and useful alternative to model-based procedures, with a wide range of applications. For example, as well as being useful for teaching, cloned datasets can provide a model-free way of confidentializing data.
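The paper's own procedure is not reproduced here; the sketch below shows one simple 'same fit' construction of my own devising: keep the design matrix, and set the cloned response to the original fitted values plus fresh residuals projected onto the orthogonal complement of the column space of X and rescaled to the original residual norm. The cloned data then reproduce the fitted coefficients, and the residual sum of squares, exactly.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 50, 4

# Original data and its least-squares fit.
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
y = X @ np.array([1.0, 2.0, -0.5, 0.3]) + rng.standard_normal(n)
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta_hat
resid = y - fitted

# Cloned response: same fitted values, new residuals that are (i) orthogonal
# to the column space of X and (ii) rescaled to the original residual norm.
z = rng.standard_normal(n)
H = X @ np.linalg.solve(X.T @ X, X.T)          # hat matrix
new_resid = (np.eye(n) - H) @ z
new_resid *= np.linalg.norm(resid) / np.linalg.norm(new_resid)
y_clone = fitted + new_resid

beta_clone, *_ = np.linalg.lstsq(X, y_clone, rcond=None)
print("max |beta_hat - beta_clone| :", np.max(np.abs(beta_hat - beta_clone)))
print("RSS original vs clone       :",
      np.sum(resid**2), np.sum((y_clone - X @ beta_clone)**2))
```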
10.
Given time series data over a fixed interval t = 1, 2, …, M with non-autocorrelated innovations, the regression formulae for the best linear unbiased parameter estimates at each time t are given by the Kalman filter fixed-interval smoothing equations. Formulae for the variance of such parameter estimates are well documented. However, formulae for the covariance between these fixed-interval best linear parameter estimates have previously been derived only for lag one. In this paper more general formulae for the covariance between fixed-interval best linear unbiased estimates at times t and t - l are derived for t = 1, 2, …, M and l = 0, 1, …, t - 1. Under Gaussian assumptions, these formulae are also those for the corresponding conditional covariances between the fixed-interval best linear unbiased parameter estimates given the data to time M. They have application, for example, in the determination, via the expectation-maximisation (EM) algorithm, of exact maximum likelihood parameter estimates for ARMA processes expressed in state-space form when multiple observations are available at each time point.
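The recursive formulae are not repeated here. As a brute-force check of the quantity they deliver, the sketch below sets up a toy AR(1)-plus-noise state-space model (all parameter values are assumptions), conditions the joint Gaussian of the states on all M observations, and reads off the smoothed covariance between states at times t and t - l for any lag l.

```python
import numpy as np

# Toy state-space model: x_t a stationary AR(1) state, y_t = x_t + noise.
# phi, sig_w, sig_v and M are illustrative assumptions.
phi, sig_w, sig_v, M = 0.8, 1.0, 0.5, 30

# Joint covariance of the states x_1..x_M under stationarity:
# Cov(x_t, x_s) = sig_w^2 / (1 - phi^2) * phi^{|t-s|}.
t = np.arange(M)
Sxx = (sig_w**2 / (1 - phi**2)) * phi ** np.abs(t[:, None] - t[None, :])

# y = x + v with v ~ N(0, sig_v^2 I), so Cov(x, y) = Sxx and
# Cov(y, y) = Sxx + sig_v^2 I.
Syy = Sxx + sig_v**2 * np.eye(M)

# Fixed-interval (smoothed) covariance of the states given all M observations:
# Cov(x | y_{1:M}) = Sxx - Sxx Syy^{-1} Sxx  (standard Gaussian conditioning).
P_smooth = Sxx - Sxx @ np.linalg.solve(Syy, Sxx)

# Covariance between smoothed state estimates at times t and t - l.
t_idx, lag = 20, 3
print("Var(x_t | y_1:M)          :", P_smooth[t_idx, t_idx])
print("Cov(x_t, x_{t-l} | y_1:M) :", P_smooth[t_idx, t_idx - lag])
```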