Similar Documents
20 similar documents found
1.
Summary.  Climatic phenomena such as the El Niño–Southern Oscillation and the North Atlantic Oscillation are results of complex interactions between atmospheric and oceanic processes. Understanding the interactions has enabled scientists to give early warning of the forthcoming phenomena, thereby reducing the damage caused by them. Statistical methods have played an important role in revealing the effects of these phenomena on different regions of the world. One such method is maximum covariance analysis (MCA). Two apparent weaknesses are associated with MCA. Firstly, it tends to produce estimates with a low signal-to-noise ratio, especially when the sample size is small. Secondly, there has been no objective way of incorporating incomplete records, which are frequently encountered in climatological and oceanographic databases. We introduce an MCA which incorporates a smoothing procedure on the estimates. The introduction of the smoothing procedure is shown to improve the signal-to-noise ratio of the estimates. The estimation of smoothing parameters is carried out by using a penalized likelihood approach, which makes the inclusion of incomplete records quite straightforward. The methodology is applied to investigate the association between Irish winter precipitation and sea surface temperature anomalies around the world. The results show relationships between Irish precipitation anomalies and the El Niño–Southern Oscillation and North Atlantic Oscillation phenomena.
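At its core, MCA extracts coupled patterns from a singular value decomposition of the cross-covariance matrix between two fields. The Python sketch below uses synthetic data (not the Irish precipitation or SST records) and omits the smoothing and penalized-likelihood handling of incomplete records described above:

```python
# A minimal sketch of maximum covariance analysis (MCA), assuming two
# hypothetical anomaly fields: X (n x p, e.g. gridded SST anomalies) and
# Y (n x q, e.g. station precipitation anomalies) observed at the same n times.
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 200, 30, 8
X = rng.standard_normal((n, p))
Y = 0.5 * X[:, :q] + rng.standard_normal((n, q))   # weakly coupled fields

Xc = X - X.mean(axis=0)                  # centre each column
Yc = Y - Y.mean(axis=0)
C = Xc.T @ Yc / (n - 1)                  # p x q cross-covariance matrix

# MCA = singular value decomposition of the cross-covariance matrix.
U, s, Vt = np.linalg.svd(C, full_matrices=False)
patterns_X, patterns_Y = U, Vt.T         # coupled spatial patterns (modes)
ec_X, ec_Y = Xc @ U, Yc @ Vt.T           # expansion-coefficient time series
scf = s**2 / np.sum(s**2)                # squared covariance fraction per mode
print(f"leading mode: {100 * scf[0]:.1f}% of squared covariance")
```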

2.
Max-stable processes have proved to be useful for the statistical modeling of spatial extremes. For statistical inference it is often assumed that there is no temporal dependence; i.e., that the observations at spatial locations are independent in time. In a first approach we construct max-stable space–time processes as limits of rescaled pointwise maxima of independent Gaussian processes, where the space–time covariance functions satisfy weak regularity conditions. This leads to so-called Brown–Resnick processes. In a second approach, we extend Smith's storm profile model to a space–time setting. We provide explicit expressions for the bivariate distribution functions, which are equal under appropriate choice of the parameters. We also show how the space–time covariance function of the underlying Gaussian process can be interpreted in terms of the tail dependence function in the limiting max-stable space–time process.

3.
Pre-election surveys are usually conducted several times to forecast election results before the actual voting. It is common that each survey includes a substantial number of non-responses and that the successive survey results are seen as a stochastic multinomial time series evolving over time. We propose a dynamic Bayesian model to examine how multinomial time series evolve over time for irregularly observed contingency tables and to determine how sensitively the dynamic structure reacts to an unexpected event, such as a candidate scandal. Further, we test whether non-responses are non-ignorable, to determine whether non-responses need to be imputed for a better forecast. We also suggest a Bayesian method that overcomes the boundary solution problem and show that the proposed method outperforms previous Bayesian methods. Our dynamic Bayesian model is applied to two pre-election surveys, for the 2007 Korean presidential election and for the 1998 Ohio general election.
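The full dynamic model is beyond a short example, but its basic Bayesian building block — a Dirichlet prior updated by multinomial poll counts, with non-response kept as an explicit category — can be sketched as follows; all counts and prior weights below are hypothetical:

```python
# A much-simplified static sketch: Dirichlet-multinomial updating of candidate
# shares from one poll wave, keeping non-response as an explicit category.
# This is only the conjugate building block, not the dynamic model or the
# non-ignorable non-response test described in the abstract.
import numpy as np

categories = ["candidate A", "candidate B", "other", "non-response"]
prior = np.array([1.0, 1.0, 1.0, 1.0])        # flat Dirichlet prior (hypothetical)
counts = np.array([420, 380, 90, 110])        # hypothetical poll counts

posterior = prior + counts                    # Dirichlet posterior parameters
post_mean = posterior / posterior.sum()
for c, m in zip(categories, post_mean):
    print(f"{c:>13s}: posterior mean share {m:.3f}")

# Posterior draws of the shares among respondents only (renormalising away the
# non-response category), e.g. for forecast intervals.
draws = np.random.default_rng(1).dirichlet(posterior, size=5000)[:, :3]
draws = draws / draws.sum(axis=1, keepdims=True)
print("A leads B in %.1f%% of draws" % (100 * (draws[:, 0] > draws[:, 1]).mean()))
```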

4.
Summary.  A spatiotemporal model is developed to analyse epidemics of airborne plant diseases which are spread by spores. The observations consist of measurements of the severity of disease at different times, different locations in the horizontal plane and different heights in the vegetal cover. The model describes the joint distribution of the occurrence and the severity of the disease. The three-dimensional dispersal of spores is modelled by combining a horizontal and a vertical dispersal function. Maximum likelihood combined with a parametric bootstrap is suggested to estimate the model parameters and the uncertainty that is attached to them. The spatiotemporal model is used to analyse a yellow rust epidemic in a wheatfield. In the analysis we pay particular attention to the selection and the estimation of the dispersal functions.

5.
Summary.  The paper considers modelling, estimating and diagnostically verifying the response process generating longitudinal data, with emphasis on association between repeated measures from unbalanced longitudinal designs. Our model is based on separate specifications of the moments for the mean, standard deviation and correlation, with different components possibly sharing common parameters. We propose a general class of correlation structures that comprise random effects, measurement errors and a serially correlated process. These three elements are combined via flexible time-varying weights, whereas the serial correlation can depend flexibly on the mean time and lag. When the measurement schedule is independent of the response process, our estimation procedure yields consistent and asymptotically normal estimates for the mean parameters even when the standard deviation and correlation are misspecified, and for the standard deviation parameters even when the correlation is misspecified. A generic diagnostic method is developed for verifying the models for the mean, standard deviation and, in particular, the correlation, which is applicable even when the data are severely unbalanced. The methodology is illustrated by an analysis of data from a longitudinal study that was designed to characterize pulmonary growth in girls.

6.
7.
Building new and flexible classes of nonseparable spatio-temporal covariances and variograms has become a key research topic in recent years. The goal of this paper is to present an up-to-date overview of recent spatio-temporal covariance models, taking into account the problem of spatial anisotropy. The resulting structures are proved to have certain interesting mathematical properties, together with considerable applicability. In particular, we focus on the problem of modelling anisotropy through isotropy within components. We present the Bernstein class, and a generalisation of Gneiting's (2002a) approach to obtain new classes of space–time covariance functions which are spatially anisotropic. We also discuss some methods for building covariance functions that attain negative values. We finally present several differentiation and integration operators acting on particular space–time covariance classes.
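A commonly cited member of Gneiting's class can be written down directly. The sketch below uses purely illustrative parameter values; the exact parameterisation and admissibility conditions should be checked against Gneiting (2002):

```python
# A sketch of one commonly cited member of the Gneiting (2002) class of
# nonseparable space-time covariance functions. Admissibility constraints on
# (alpha, gamma, beta, tau) should be checked against the source; the values
# below are purely illustrative.
import numpy as np

def gneiting_cov(h, u, sigma2=1.0, a=1.0, alpha=1.0, beta=0.5,
                 c=1.0, gamma=0.5, tau=1.0):
    """C(h, u) for spatial lag norm h >= 0 and temporal lag u."""
    psi = a * np.abs(u) ** (2 * alpha) + 1.0          # temporal scaling term
    return sigma2 / psi ** tau * np.exp(-c * h ** (2 * gamma) / psi ** (beta * gamma))

# Geometric anisotropy can be introduced by using a transformed spatial lag,
# h = sqrt(dx' A dx), before calling the isotropic covariance above.
h = np.linspace(0.0, 3.0, 4)
u = np.linspace(0.0, 3.0, 4)
H, U = np.meshgrid(h, u, indexing="ij")
print(np.round(gneiting_cov(H, U), 3))     # covariance over the lag grid
# beta = 0 removes the space-time interaction (a separable product form).
```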

8.
This paper is mainly concerned with modelling data from degradation sample paths over time. It uses a general growth curve model with Box-Cox transformation, random effects and ARMA(p, q) dependence to analyse a set of such data. A maximum likelihood estimation procedure for the proposed model is derived and future values are predicted, based on the best linear unbiased prediction. The paper compares the proposed model with a nonlinear degradation model from a prediction point of view. Forecasts of failure times with various data lengths in the sample are also compared.

9.
Air quality control usually requires a monitoring system of multiple indicators measured at various points in space and time. Hence, the use of space–time multivariate techniques is of fundamental importance in this context, where decisions and actions regarding environmental protection should be supported by studies based on both inter-variable relations and spatial–temporal correlations. This paper describes how canonical correlation analysis can be combined with space–time geostatistical methods for analysing two spatially and temporally correlated aspects, such as air pollution concentrations and meteorological conditions. Hourly averages of three pollutants (nitric oxide, nitrogen dioxide and ozone) and three atmospheric indicators (temperature, humidity and wind speed) taken for two critical months (February and August) at several monitoring stations are considered, and space–time variograms for the variables are estimated. Simultaneous relationships between such sample space–time variograms are determined through canonical correlation analysis. The most correlated canonical variates are used to summarize the underlying space–time behaviour of the components of the two sets.
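A minimal sketch of the canonical correlation step, applied to synthetic pollutant and meteorological blocks rather than the monitoring records (and omitting the space–time variogram estimation), is:

```python
# A sketch of canonical correlation analysis between a pollutant block and a
# meteorological block, using synthetic hourly data in place of the monitoring
# records analysed in the paper.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 500                                     # hypothetical hourly records
met = rng.standard_normal((n, 3))           # temperature, humidity, wind speed
poll = met @ rng.standard_normal((3, 3)) * 0.6 + rng.standard_normal((n, 3))
# columns of `poll`: NO, NO2, O3 (synthetic)

cca = CCA(n_components=2).fit(poll, met)
U, V = cca.transform(poll, met)             # canonical variate scores
canon_corr = [np.corrcoef(U[:, k], V[:, k])[0, 1] for k in range(2)]
print("canonical correlations:", np.round(canon_corr, 3))
```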

10.
In this paper, we construct a new mixture of geometric INAR(1) process for modeling over-dispersed count time series data, in particular data consisting of a large number of zeros and ones. For some real data sets, the existing INAR(1) processes do not fit well; e.g., the geometric INAR(1) process overestimates the number of zeros and underestimates the number of ones, whereas the Poisson INAR(1) process underestimates the zeros and overestimates the ones. Furthermore, for heavy tails, the PINAR(1) process performs poorly in the tail part. The existing zero-inflated Poisson INAR(1) and compound Poisson INAR(1) processes have the same kinds of limitations. In order to remove this problem of under-fitting at one point and over-fitting at other points, we add some extra probability at one in the geometric INAR(1) process and build a new mixture of geometric INAR(1) process. Surprisingly, for some real data sets, it removes the problem of under- and over-fitting across all the observations to a significant extent. We then study the stationarity and ergodicity of the proposed process. Different methods of parameter estimation, namely the Yule–Walker and quasi-maximum likelihood estimation procedures, are discussed and illustrated using some simulation experiments. Furthermore, we discuss future prediction along with several forecasting accuracy measures. Two real data sets are analyzed to illustrate the effective use of the proposed model.
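A simplified simulation of the central idea — an INAR(1) recursion with binomial thinning whose innovations are geometric with extra probability mass at one — is sketched below; the paper's exact construction and thinning operator may differ:

```python
# A simplified simulation sketch: X_t = alpha o X_{t-1} + eps_t, with binomial
# thinning and innovations from a geometric distribution given extra mass at
# one. This illustrates the "extra probability at one" device only.
import numpy as np

rng = np.random.default_rng(42)

def simulate_one_inflated_inar1(n, alpha=0.4, p=0.6, rho=0.25, x0=0):
    """Simulate n steps; innovations are one-inflated geometric on {0, 1, ...}."""
    x = np.empty(n, dtype=int)
    prev = x0
    for t in range(n):
        survivors = rng.binomial(prev, alpha)           # binomial thinning
        if rng.random() < rho:
            eps = 1                                     # extra mass at one
        else:
            eps = rng.geometric(p) - 1                  # geometric on {0, 1, 2, ...}
        x[t] = survivors + eps
        prev = x[t]
    return x

x = simulate_one_inflated_inar1(2000)
values, freq = np.unique(x, return_counts=True)
print(dict(zip(values[:5].tolist(), freq[:5].tolist())))  # counts of 0, 1, 2, ...
print("mean %.2f  variance %.2f" % (x.mean(), x.var()))   # over-dispersion check
```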

11.
Modelling count data with overdispersion and spatial effects
In this paper we consider regression models for count data allowing for overdispersion in a Bayesian framework. We account for unobserved heterogeneity in the data in two ways. On the one hand, we consider more flexible models than the common Poisson model, allowing for overdispersion in different ways. In particular, the negative binomial and the generalized Poisson (GP) distribution are addressed, where overdispersion is modelled by an additional model parameter. Further, zero-inflated models, in which overdispersion is assumed to be caused by an excessive number of zeros, are discussed. On the other hand, extra spatial variability in the data is taken into account by adding correlated spatial random effects to the models. This approach allows for an underlying spatial dependency structure which is modelled using a conditional autoregressive prior based on Pettitt et al. (Stat Comput 12(4):353–367, 2002). In an application, the presented models are used to analyse the number of invasive meningococcal disease cases in Germany in the year 2004. Models are compared according to the deviance information criterion (DIC) suggested by Spiegelhalter et al. (J R Stat Soc B 64(4):583–640, 2002) and using proper scoring rules, see for example Gneiting and Raftery (Technical Report no. 463, University of Washington, 2004). We observe a rather high degree of overdispersion in the data, which is captured best by the GP model when spatial effects are neglected. While the addition of spatial effects to the models allowing for overdispersion gives no or only little improvement, spatial Poisson models with spatially correlated or uncorrelated random effects are to be preferred over all other models according to the considered criteria.
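The overdispersion comparison can be illustrated in a non-spatial setting by fitting Poisson and negative binomial GLMs to simulated over-dispersed counts; the data, covariate and dispersion parameter below are synthetic, and the spatial CAR random effects are not included:

```python
# A sketch of an overdispersion check: fit Poisson and negative binomial GLMs
# to simulated over-dispersed counts and compare a Pearson dispersion statistic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.standard_normal(n)
mu = np.exp(0.5 + 0.8 * x)
y = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))   # over-dispersed counts

X = sm.add_constant(x)
pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
negb = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()

for name, res in [("Poisson", pois), ("NegBin", negb)]:
    disp = res.pearson_chi2 / res.df_resid              # ~1 if dispersion is right
    print(f"{name:8s} dispersion {disp:.2f}   AIC {res.aic:.1f}")
```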

12.
A parametric modelling for interval data is proposed, assuming a multivariate Normal or Skew-Normal distribution for the midpoints and log-ranges of the interval variables. The intrinsic nature of the interval variables leads to special structures of the variance–covariance matrix, which is represented by five different possible configurations. Maximum likelihood estimation for both models under all considered configurations is studied. The proposed modelling is then considered in the context of analysis of variance and multivariate analysis of variance testing. To assess the behaviour of the proposed methodology, a simulation study is performed. The results show that, for medium or large sample sizes, the tests have good power and their true significance level approaches the nominal level when the constraints assumed for the model are respected; however, for small samples, significance levels close to the nominal level cannot be guaranteed. Applications to Chinese meteorological data in three different regions and to credit card usage variables for different card designations illustrate the proposed methodology.
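The basic data representation is easy to illustrate: each interval is mapped to a midpoint and a log-range, and a joint Gaussian model is fitted. The sketch below uses synthetic intervals and the unconstrained covariance configuration only:

```python
# A sketch of the midpoint/log-range representation of interval data with a
# Gaussian maximum likelihood fit (unconstrained configuration; the paper's
# five special covariance structures are not imposed). Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 300
centre = rng.normal(20.0, 5.0, size=n)           # hypothetical daily mean temperature
half_width = rng.lognormal(1.0, 0.3, size=n)     # hypothetical half-range
lower, upper = centre - half_width, centre + half_width

midpoint = (lower + upper) / 2.0
log_range = np.log(upper - lower)
Z = np.column_stack([midpoint, log_range])

# Gaussian MLE: sample mean and (1/n) covariance of the transformed data.
mu_hat = Z.mean(axis=0)
sigma_hat = np.cov(Z, rowvar=False, bias=True)   # bias=True gives the 1/n MLE
print("mean (midpoint, log-range):", np.round(mu_hat, 3))
print("covariance:\n", np.round(sigma_hat, 3))
```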

13.
In this paper we consider the analysis of recall-based competing risks data. The chance of an individual recalling the exact time to an event depends on the time of occurrence of the event and the time of observation of the individual. In particular, it is assumed that the probability of recall depends on the time elapsed since the occurrence of an event. In this study we consider likelihood-based inference for the analysis of recall-based competing risks data. The likelihood function is constructed by incorporating the information about the probability of recall. We consider maximum likelihood estimation of the parameters. Simulation studies are carried out to examine the performance of the estimators. The proposed estimation procedure is applied to a real-life data set.

14.
Finite mixture methods are applied to bird band-recovery studies to allow for heterogeneity of survival. Birds are assumed to belong to one of finitely many groups, each of which has its own survival rate (or set of survival rates varying by time and/or age). The group to which a specific animal belongs is not known, so its survival probability is a random variable from a finite mixture. Heterogeneity is thus modelled as a latent effect. This gives a wide selection of likelihood-based models, which may be compared using likelihood ratio tests. These models are discussed with reference to real and simulated data, and compared with previous models.
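A much-simplified sketch of the latent-group idea follows: lifetimes are drawn from a two-component mixture of annual survival rates and, unlike real band-recovery data, every death time is assumed observed, so the recovery-probability part of the likelihood is omitted:

```python
# A simplified two-group survival mixture: each bird has annual survival s1 or
# s2 with mixing weight pi, and every death year is assumed observed. This
# illustrates only the finite-mixture likelihood, not the full recovery model.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
true_pi, true_s = 0.3, (0.9, 0.5)
groups = rng.random(1000) < true_pi
surv = np.where(groups, true_s[0], true_s[1])
death_year = rng.geometric(1.0 - surv) - 1          # whole years survived before death

def neg_log_lik(theta):
    pi, s1, s2 = expit(theta)                       # keep parameters in (0, 1)
    lik = (pi * s1 ** death_year * (1 - s1)
           + (1 - pi) * s2 ** death_year * (1 - s2))
    return -np.sum(np.log(lik))

fit = minimize(neg_log_lik, x0=np.array([0.0, 1.5, -1.5]), method="Nelder-Mead")
print("estimates (pi, s1, s2):", np.round(expit(fit.x), 3))
```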

16.
Tobias Niebuhr, Statistics, 2017, 51(5): 1118–1131
We consider time series that are observed at random time points. In addition to Parzen's classical modelling by amplitude-modulating sequences, we state another modelling that uses an integer-valued sequence as the observation times. Limiting results are presented for the sample mean and are generalized to the class of smooth functions of means. Motivated by the complicated limiting behaviour, (moving) block bootstrap possibilities are investigated. Conditional on the modelling used for the irregular spacings, one is led to different interpretations of the block length and hence different bootstrap approaches. The block length can be interpreted either as a time span (resulting in an observation string of fixed length containing a random number of observations) or as a number of observations (resulting in an observation string of variable length containing a fixed number of values). Both bootstrap approaches are shown to be asymptotically valid for the sample mean. Numerical examples and an application to real-world ozone data conclude the study.
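Under the "fixed number of observations" reading of the block length, the moving block bootstrap for the sample mean can be sketched as follows; a regularly indexed synthetic series is used, and the irregular-spacing modelling of the paper is not reproduced:

```python
# A sketch of the moving block bootstrap for the sample mean, with the block
# length interpreted as a fixed number of observations. The alternative
# reading (blocks of fixed time length containing a random number of irregular
# observations) would resample time windows instead.
import numpy as np

rng = np.random.default_rng(0)
n, b = 400, 20                                  # series length, block length
e = rng.standard_normal(n + 1)
y = 5.0 + e[1:] + 0.6 * e[:-1]                  # MA(1) series with mean 5

blocks = np.lib.stride_tricks.sliding_window_view(y, b)  # all overlapping blocks
n_blocks = int(np.ceil(n / b))

boot_means = np.empty(2000)
for r in range(boot_means.size):
    idx = rng.integers(0, blocks.shape[0], size=n_blocks)
    resample = blocks[idx].ravel()[:n]          # concatenate blocks, trim to n
    boot_means[r] = resample.mean()

print("sample mean %.3f, block-bootstrap s.e. %.3f" % (y.mean(), boot_means.std()))
```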

17.
This paper provides a brief structural perspective of discrete weighted distributions in theory and practice. It develops a unified view of previous work involving univariate and bivariate models, with some new results pertaining to mixtures, form-invariance and Bayesian inference.

18.
Ozone and particulate matter PM2.5 are co-pollutants that have long been associated with increased public health risks. Information on concentration levels for both pollutants comes from two sources: monitoring sites and output from complex numerical models that produce concentration surfaces over large spatial regions. In this paper, we offer a fully model-based approach for fusing these two sources of information for the pair of co-pollutants which is computationally feasible over large spatial regions and long periods of time. Due to the association between concentration levels of the two environmental contaminants, it is expected that information regarding one will help to improve prediction of the other. Misalignment is an obvious issue since the monitoring networks for the two contaminants only partly intersect and because the collection rate for PM2.5 is typically less frequent than that for ozone. Extending previous work in Berrocal et al. (2009), we introduce a bivariate downscaler that provides a flexible class of bivariate space-time assimilation models. We discuss computational issues for model fitting and analyze a dataset for ozone and PM2.5 for the 2002 ozone season. We show a modest improvement in predictive performance, which is not surprising in a setting where we can anticipate only a small gain.

19.
Estimation from current-status data in continuous time
The nonparametric maximum likelihood estimator for current-status data has been known for at least 40 years, but only recently have its mathematical-statistical properties been clarified. This note provides a case study in the important and often studied context of estimating age-specific immunization intensities from a seroprevalence survey. Fully parametric and spline-based alternatives (also based on continuous-time models) are given. The basic reproduction number R0 exemplifies estimation of a functional. The limitations implied by the necessarily rather restrictive epidemiological assumptions are briefly discussed.
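The nonparametric maximum likelihood estimator for current-status data reduces to an isotonic regression of the status indicators on age, computable by pool-adjacent-violators; a sketch with simulated survey data (not the seroprevalence records of the note) is:

```python
# A sketch of the NPMLE for current-status data: with observation ages a_i and
# serostatus indicators d_i = 1{infected by age a_i}, the NPMLE of the
# cumulative infection probability F is the isotonic regression of d on a.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(0, 40, size=n)                        # age at survey
infection_age = rng.exponential(scale=15.0, size=n)     # unobserved infection age
seropositive = (infection_age <= age).astype(float)     # current status only

iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
F_hat = iso.fit_transform(age, seropositive)            # NPMLE of F at each age

order = np.argsort(age)
for a in (10, 20, 30):
    j = np.searchsorted(age[order], a)
    print(f"estimated F({a}) ~ {F_hat[order][j]:.2f}  (true {1 - np.exp(-a / 15.0):.2f})")
```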

20.
Interval-censored failure time data and panel count data are two types of incomplete data that commonly occur in event history studies, and many methods have been developed for their analysis separately (Sun in The statistical analysis of interval-censored failure time data. Springer, New York, 2006; Sun and Zhao in The statistical analysis of panel count data. Springer, New York, 2013). Sometimes one may be interested in, or need to conduct, their joint analysis, such as in clinical trials with composite endpoints, for which no established approach seems to exist in the literature. In this paper, a sieve maximum likelihood approach is developed for the joint analysis; in the proposed method, Bernstein polynomials are used to approximate the unknown functions. The asymptotic properties of the resulting estimators are established and, in particular, the proposed estimators of regression parameters are shown to be semiparametrically efficient. In addition, an extensive simulation study was conducted, and the proposed method is applied to a set of real data arising from a skin cancer study.
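The approximation device at the heart of the sieve approach is easy to illustrate: a Bernstein polynomial of degree m built from values of a target function on [0, 1]. In the sketch below the coefficients are taken from a known increasing function, whereas in the paper they are estimated under shape constraints:

```python
# A sketch of Bernstein polynomial approximation on [0, 1]:
# B_m(f)(x) = sum_k f(k/m) * C(m, k) * x^k * (1 - x)^(m - k).
import numpy as np
from scipy.special import comb

def bernstein_approx(f, m, x):
    """Degree-m Bernstein approximation of f, evaluated at points x in [0, 1]."""
    k = np.arange(m + 1)
    basis = comb(m, k) * x[:, None] ** k * (1 - x[:, None]) ** (m - k)
    return basis @ f(k / m)

def target(t):
    return 1 - np.exp(-2 * t)          # a known increasing curve standing in for
                                       # the unknown function to be approximated

x = np.linspace(0, 1, 6)
for m in (5, 20, 80):
    err = np.max(np.abs(bernstein_approx(target, m, x) - target(x)))
    print(f"degree {m:3d}: max abs error on grid {err:.4f}")
```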
