1.
It is often of interest to find the maximum or near maxima among a set of vector-valued parameters in a statistical model; in disease mapping, for example, these correspond to relative-risk "hotspots" where public-health intervention may be needed. The general problem is one of estimating nonlinear functions of the ensemble of relative risks, but biased estimates result if posterior means are simply substituted into these nonlinear functions. The authors obtain better estimates of extrema from a new, weighted-ranks squared-error loss function. The derivation of these Bayes estimators assumes a hidden-Markov random-field model for relative risks, and their behaviour is illustrated with real and simulated data.
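The plug-in bias mentioned above is easy to reproduce. Below is a minimal sketch under an assumed conjugate Gamma-Poisson disease-mapping model (not the authors' hidden-Markov random-field model, and not their weighted-ranks loss): because the maximum is a convex functional, the maximum of the posterior means understates the posterior mean of the maximum, so a plug-in "hotspot" estimate is biased low.

```python
# Illustration only (not the authors' weighted-ranks estimator): plug-in bias
# when a nonlinear functional (the maximum) is applied to posterior means.
# The conjugate Gamma-Poisson model below is assumed purely for convenience.
import numpy as np

rng = np.random.default_rng(1)
n, a, b, E = 100, 2.0, 2.0, 10.0          # areas, Gamma(a, b) prior, expected counts
theta = rng.gamma(a, 1.0 / b, size=n)     # true relative risks
y = rng.poisson(E * theta)                # observed disease counts

# Posterior for each area: Gamma(a + y_i, b + E)
post_mean = (a + y) / (b + E)

# Plug-in estimate of the maximum risk vs. posterior mean of the maximum
plug_in_max = post_mean.max()
draws = rng.gamma(a + y, 1.0 / (b + E), size=(5000, n))  # joint posterior draws
post_mean_of_max = draws.max(axis=1).mean()

print(f"true max risk         : {theta.max():.3f}")
print(f"plug-in max of means  : {plug_in_max:.3f}")   # systematically too small
print(f"posterior mean of max : {post_mean_of_max:.3f}")
```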
2.
Remote sensing of the earth with satellites yields datasets that can be massive in size, nonstationary in space, and non-Gaussian in distribution. To overcome the computational challenges, we use the reduced-rank spatial random effects (SRE) model in a statistical analysis of cloud-mask data from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) instrument on board NASA's Terra satellite. Parameterisations of cloud processes are the biggest source of uncertainty and sensitivity in different climate models' future projections of Earth's climate. An accurate quantification of the spatial distribution of clouds, as well as a rigorously estimated pixel-scale clear-sky-probability process, is needed to establish reliable estimates of cloud-distributional changes and trends caused by climate change. Here we give a hierarchical spatial-statistical modelling approach for a very large spatial dataset of 2.75 million pixels, corresponding to a granule of MODIS cloud-mask data, and we use spatial change-of-support relationships to estimate cloud fraction at coarser resolutions. Our model is non-Gaussian; it postulates a hidden process for the clear-sky probability that makes use of the SRE model, EM estimation, and optimal (empirical Bayes) spatial prediction of the clear-sky-probability process. Measures of prediction uncertainty are also given.
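The change-of-support step can be illustrated on its own. The sketch below aggregates a synthetic pixel-scale clear-sky-probability surface to coarser-resolution cloud fractions by block averaging; the SRE model, EM estimation, and empirical-Bayes prediction that produce such a surface in the paper are not reproduced here.

```python
# Sketch of the change-of-support step only: a pixel-scale clear-sky
# probability field is aggregated to coarse-cell cloud fractions by block
# averaging. The field below is synthetic, standing in for the paper's
# estimated hidden process.
import numpy as np

ny, nx, block = 512, 512, 8               # fine grid and aggregation factor

# Synthetic smooth clear-sky-probability surface
yy, xx = np.mgrid[0:ny, 0:nx]
p_clear = 1.0 / (1.0 + np.exp(-(np.sin(xx / 40.0) + np.cos(yy / 60.0))))

# Cloud fraction over a coarse cell = spatial average of (1 - p_clear)
cloud = 1.0 - p_clear
coarse = cloud.reshape(ny // block, block, nx // block, block).mean(axis=(1, 3))

print(coarse.shape)                        # (64, 64) coarse-resolution fractions
print(round(coarse.min(), 3), round(coarse.max(), 3))
```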
3.
Chemical analyses of ice cores, drilled deep into an ice sheet, provide a historical record of the earth's atmosphere that dates back as far as 400,000-500,000 years. Although the atmosphere mixes quite well, it is recognized that spatial variability associated with ice-core locations should be allowed for. In this article, spatial statistical methodology is applied to the design question of finding the best spacing of ice-core locations on a partial transect of Antarctica.
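As a rough illustration of this kind of design question (the covariance model, transect length, and candidate designs below are illustrative assumptions, not those of the article), candidate spacings can be scored by the average kriging variance they leave along the transect:

```python
# Compare equally spaced 1-D designs by mean simple-kriging variance under an
# assumed exponential covariance. All numbers here are illustrative.
import numpy as np

sigma2, phi, L = 1.0, 200.0, 2000.0       # sill, range (km), transect length (km)

def cov(h):
    return sigma2 * np.exp(-np.abs(h) / phi)

def avg_kriging_var(sites, targets):
    C = cov(sites[:, None] - sites[None, :])      # design-design covariances
    c0 = cov(sites[:, None] - targets[None, :])   # design-target covariances
    w = np.linalg.solve(C, c0)                    # simple-kriging weights
    return np.mean(sigma2 - np.sum(w * c0, axis=0))

targets = np.linspace(0.0, L, 401)                # where prediction quality matters
for n in (5, 9, 17):                              # candidate numbers of cores
    sites = np.linspace(0.0, L, n)
    print(n, "cores -> mean kriging variance", round(avg_kriging_var(sites, targets), 4))
```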
4.
Wald and Wolfowitz (1948) have shown that the Sequential Probability Ratio Test (SPRT) for deciding between two simple hypotheses is, under very restrictive conditions, optimal in three attractive senses. First, it can be a Bayes-optimal rule. Second, of all level-α tests having the same power, the test with the smallest joint-expected number of observations is the SPRT, where this expectation is taken jointly with respect to both the data and the prior over the two hypotheses. Third, the level-α test needing the fewest conditional-expected number of observations is the SPRT, where this expectation is now taken with respect to the data conditional on either hypothesis being true. Principal among the strong restrictions is that sampling can proceed only in a one-at-a-time manner. In this paper, we relax some of the conditions and show that there are sequential procedures that strictly dominate the SPRT in all three senses. We conclude that the third type of optimality occurs rarely and that decision-makers are better served by looking for sequential procedures that possess the first two types of optimality. By relaxing the one-at-a-time sampling restriction, we obtain optimal (in the first two senses) variable-sample-size sequential probability ratio tests.
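For reference, here is a minimal one-at-a-time SPRT for two simple Bernoulli hypotheses, using Wald's approximate thresholds A = (1 - β)/α and B = β/(1 - α); it shows the classical procedure the paper starts from, not the variable-sample-size tests it goes on to derive.

```python
# Classical one-at-a-time SPRT for H0: p = p0 vs H1: p = p1 with Bernoulli data.
import numpy as np

def sprt_bernoulli(xs, p0=0.4, p1=0.6, alpha=0.05, beta=0.05):
    upper = np.log((1 - beta) / alpha)    # accept H1 when crossed
    lower = np.log(beta / (1 - alpha))    # accept H0 when crossed
    llr = 0.0
    for n, x in enumerate(xs, start=1):   # strictly one observation at a time
        llr += np.log(p1 / p0) if x else np.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "undecided", len(xs)

rng = np.random.default_rng(3)
print(sprt_bernoulli(rng.random(1000) < 0.6))   # data generated under H1
```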
5.
Markov random fields (MRFs) express spatial dependence through conditional distributions, although their stochastic behavior is defined by their joint distribution. These joint distributions are typically difficult to obtain in closed form, the problem being a normalizing constant that is a function of unknown parameters. The Gaussian MRF (or conditional autoregressive model) is one case where the normalizing constant is available in closed form; however, when sample sizes are moderate to large (thousands to tens of thousands), and beyond, its computation can be problematic. Because the conditional autoregressive (CAR) model is often used for spatial-data modeling, we develop likelihood-inference methodology for this model in situations where the sample size is too large for its normalizing constant to be computed directly. In particular, we use simulation methodology to obtain maximum likelihood estimators of mean, variance, and spatial-dependence parameters (including their asymptotic variances and covariances) of CAR models.
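The computational bottleneck is concrete: for a Gaussian CAR model, the log-likelihood contains log |Q|, whose direct evaluation is O(n³). The sketch below evaluates it directly for a toy graph under one common proper-CAR parameterisation, Q = τ(D - φW); it is this determinant term that the paper's simulation methodology avoids at large n.

```python
# Direct Gaussian CAR log-likelihood for moderate n. The slogdet term is the
# normalizing-constant computation that becomes impractical at the sample
# sizes the paper targets. Q = tau * (D - phi * W) is one common choice.
import numpy as np

def car_loglik(y, mu, W, tau, phi):
    D = np.diag(W.sum(axis=1))            # neighbour counts on the diagonal
    Q = tau * (D - phi * W)               # CAR precision matrix
    r = y - mu
    sign, logdet = np.linalg.slogdet(Q)   # O(n^3): the computational bottleneck
    return 0.5 * (logdet - r @ Q @ r - len(y) * np.log(2 * np.pi))

# Tiny 4-site chain graph as a check
W = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
y = np.array([0.3, -0.1, 0.2, 0.5])
print(car_loglik(y, mu=0.0, W=W, tau=1.0, phi=0.9))
```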
6.
This paper reviews what is currently known about the behaviour of the t-statistic when one is no longer sampling from a normal distribution. Suppose Y is a batch of data on which the t-test is performed. Briefly then, heavy-tailed components of Y give a light-tailed t, positive correlation among Y gives a heavy-tailed t, and positively skewed components of Y give a negatively skewed t. The emphasis is on understanding why one gets this type of behaviour, although some numerical tables are presented to illustrate the conclusions.
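The review's qualitative claims are easy to check by Monte Carlo. The sketch below, with illustrative sample size and replication count, shows the skewness reversal: positively skewed (exponential) data yield a negatively skewed t.

```python
# Monte Carlo check: positively skewed data produce a negatively skewed t.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n, reps = 20, 20000
Y = rng.exponential(scale=1.0, size=(reps, n))    # positively skewed samples
t = (Y.mean(axis=1) - 1.0) / (Y.std(axis=1, ddof=1) / np.sqrt(n))

print("skewness of the data :", stats.skew(Y.ravel()).round(2))   # ~ +2
print("skewness of t        :", stats.skew(t).round(2))           # negative
```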
7.
The focus of geographical studies in epidemiology has recently moved towards looking for effects of exposures based on data taken at local levels of aggregation (i.e. small areas). This paper investigates how regression coefficients measuring covariate effects at the point level are modified under aggregation. Changing the level of aggregation can lead to completely different conclusions about exposure-effect relationships, a phenomenon often referred to as ecological bias. With partial knowledge of the within-area distribution of the exposure variable, the notion of maximum entropy can be used to approximate that part of the distribution that is unknown. From the approximation, an expression for the ecological bias is obtained; simulations and an example show that the maximum-entropy approximation is often better than other commonly used approximations.
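Ecological bias itself is simple to demonstrate by simulation (the sketch below illustrates the phenomenon, not the paper's maximum-entropy correction): a logistic exposure effect fitted at the point level recovers the true slope, while the same model fitted naively to area-level averages does not.

```python
# Simulated ecological bias: point-level logistic fit vs. naive area-level fit.
# All model settings below are illustrative assumptions.
import numpy as np
from scipy import optimize

rng = np.random.default_rng(5)
n_areas, n_per = 200, 500
beta0, beta1 = -2.0, 1.0

area_mu = rng.normal(0.0, 1.0, n_areas)                    # area mean exposures
x = rng.normal(area_mu[:, None], 1.5, (n_areas, n_per))    # within-area exposures
p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))
y = rng.random((n_areas, n_per)) < p                       # point-level outcomes

def fit_logistic(x, y):
    def nll(b):                                            # Bernoulli/quasi neg. log-lik.
        eta = b[0] + b[1] * x
        return np.sum(np.logaddexp(0.0, eta) - y * eta)
    return optimize.minimize(nll, np.zeros(2)).x

b_point = fit_logistic(x.ravel(), y.ravel().astype(float))
b_eco = fit_logistic(x.mean(axis=1), y.mean(axis=1))       # area rates on area means

print("point-level slope :", b_point[1].round(3))          # ~ 1.0, the truth
print("ecological slope  :", b_eco[1].round(3))            # attenuated under aggregation
```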
8.
Asymptotics for REML estimation of spatial covariance parameters
In agricultural field trials, restricted maximum likelihood (REML) estimation of the spatial covariance parameters is often preferred to maximum likelihood estimation. Although it has either been conjectured or assumed that REML estimators are asymptotically Gaussian, conditions under which such asymptotic results hold are clearly needed. This article gives checkable conditions for spatial regression when sampling locations are either on a rectangular grid or are irregularly spaced but satisfy certain growth conditions.
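For concreteness, here is a sketch of the REML criterion for a spatial linear model y = Xb + e with e ~ N(0, Σ(θ)), under an assumed exponential covariance; the article's contribution is the asymptotic theory for the resulting estimators, which is not reproduced here.

```python
# Negative restricted (REML) log-likelihood, up to an additive constant, for a
# spatial linear model with an assumed exponential covariance. Illustrative only.
import numpy as np
from scipy.optimize import minimize

def neg_restricted_loglik(theta, y, X, D):
    sigma2, phi = np.exp(theta)               # log-parameterised, so both stay positive
    S = sigma2 * np.exp(-D / phi)             # exponential covariance matrix
    Si = np.linalg.inv(S)
    XtSiX = X.T @ Si @ X
    b = np.linalg.solve(XtSiX, X.T @ Si @ y)  # GLS fixed-effect estimate
    r = y - X @ b
    _, ld_S = np.linalg.slogdet(S)
    _, ld_X = np.linalg.slogdet(XtSiX)        # the extra 'error contrast' term REML adds
    return 0.5 * (ld_S + ld_X + r @ Si @ r)

rng = np.random.default_rng(2)
s = rng.uniform(0.0, 10.0, size=(30, 2))                    # irregularly spaced sites
D = np.linalg.norm(s[:, None, :] - s[None, :, :], axis=2)   # inter-site distances
X = np.column_stack([np.ones(30), s])                       # planar-trend regressors
y = X @ np.array([1.0, 0.2, -0.1]) + rng.multivariate_normal(
    np.zeros(30), 2.0 * np.exp(-D / 3.0))
fit = minimize(neg_restricted_loglik, np.log([1.0, 1.0]), args=(y, X, D),
               method="Nelder-Mead")
print("REML estimates (sigma2, phi):", np.exp(fit.x).round(2))
```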
9.
In any other circumstance, it might make sense to define the extent of the terrain (Data Science) first, and then locate and describe the landmarks (Principles). But this data revolution we are experiencing defies a cadastral survey. Areas are continually being annexed into Data Science. For example, biometrics was traditionally statistics for agriculture in all its forms, but now, in Data Science, it means the study of characteristics that can be used to identify an individual. Examples of non-intrusive measurements include height, weight, fingerprints, retina scan, voice, photograph/video (facial landmarks and facial expressions), and gait. A multivariate analysis of such data would be a complex project for a statistician, but a software engineer might appear to have no trouble with it at all. In any applied-statistics project, the statistician worries about uncertainty and quantifies it by modelling data as realisations generated from a probability space. Another approach to uncertainty quantification is to find similar data sets, and then use the variability of results between these data sets to capture the uncertainty. Both approaches allow 'error bars' to be put on estimates obtained from the original data set, although the interpretations are different. A third approach, which concentrates on giving a single answer and gives up on uncertainty quantification, could be considered as Data Engineering, although it has staked a claim in the Data Science terrain. This article presents a few (actually nine) statistical principles for data scientists that have helped me, and continue to help me, when I work on complex interdisciplinary projects.
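The two routes to 'error bars' contrasted above can be put side by side in a toy example, where bootstrap resamples play the role of 'similar data sets':

```python
# Toy contrast of the two uncertainty-quantification routes in the abstract:
# a probability-model-based standard error versus variability across
# resampled ("similar") data sets.
import numpy as np

rng = np.random.default_rng(42)
data = rng.lognormal(mean=0.0, sigma=1.0, size=200)   # one observed data set

# Route 1: model-based error bar for the mean
se_model = data.std(ddof=1) / np.sqrt(len(data))

# Route 2: variability of the estimate across similar (bootstrap) data sets
boots = rng.choice(data, size=(2000, len(data)), replace=True)
se_boot = boots.mean(axis=1).std(ddof=1)

print(f"estimate {data.mean():.3f}, model SE {se_model:.3f}, bootstrap SE {se_boot:.3f}")
```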
10.
A large body of theory exists for statistics based on gaps, or spacings. A natural generalization to gaps of higher order m leads to statistics for testing uniformity that have higher power. In particular, the null and alternative distributions of the minimum gap statistic are derived, and the power of the test is seen to increase with m.
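A Monte Carlo version of the test is straightforward (the paper derives the null and alternative distributions analytically; the choices of n, m, and the clumped alternative below are illustrative):

```python
# Test of uniformity based on the minimum gap of order m, i.e. the smallest
# spacing u_(i+m) - u_(i) between order statistics m apart, with a simulated
# null distribution in place of the paper's analytical one.
import numpy as np

def min_gap(u, m):
    s = np.sort(u)
    return np.min(s[m:] - s[:-m])             # smallest m-th order gap

rng = np.random.default_rng(9)
n, m, reps = 50, 3, 10000

# Null distribution of the statistic under Uniform(0, 1)
null = np.array([min_gap(rng.random(n), m) for _ in range(reps)])

# A clumped alternative: half the points crowded into a short subinterval
x = np.concatenate([rng.uniform(0.0, 1.0, n // 2), rng.uniform(0.4, 0.45, n - n // 2)])
p_value = np.mean(null <= min_gap(x, m))      # small gaps signal clustering
print(f"observed statistic {min_gap(x, m):.5f}, Monte Carlo p-value {p_value:.4f}")
```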