Similar Documents
Found 20 similar documents (search time: 132 ms)
1.
In a wireless sensor network, data collection is relatively cheap whereas data transmission is relatively expensive; thus, preserving battery life is critical. If the process of interest is sufficiently predictable, transmissions can be suppressed to improve the efficiency of the sensor network, since little information is lost. The prime interest lies in finding an inference-efficient way to support suppressed data collection applications. In this paper, we present a suppression scheme for a multiple-node setting with spatio-temporal processes, especially when process knowledge is insufficient. We also explore the impact of suppression schemes on the inference of the regional processes under various suppression levels. Finally, we formalize the hierarchical Bayesian model for these schemes.
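A simple value-based suppression rule of this kind can be sketched as follows (a minimal illustration with a hypothetical fixed threshold; the paper develops a full hierarchical Bayesian scheme rather than this rule):

```python
def suppress(readings, threshold=0.5):
    """Transmit a reading only when it deviates from the last
    transmitted value by more than `threshold`; the sink otherwise
    assumes the last transmitted value still holds."""
    sent = []
    last = None
    for t, x in enumerate(readings):
        if last is None or abs(x - last) > threshold:
            sent.append((t, x))
            last = x
    return sent
```

For a slowly drifting signal most readings are withheld, which is where the battery savings come from; the inferential cost of the withheld readings is what the paper's hierarchical model quantifies.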

2.
Summary. Long-transported air pollution in Europe is monitored by a combination of a highly complex mathematical model and a limited number of measurement stations. The model predicts deposition on a 150 km × 150 km square grid covering the whole of the continent. These predictions can be regarded as spatial averages, with some spatially correlated model error. The measurement stations give a limited number of point estimates, regarded as error free. We combine these two sources of data by assuming that both are observations of an underlying true process. This true deposition is made up of a smooth deterministic trend, due to gradual changes in emissions over space and time, and two stochastic components. One is non-stationary and correlated over long distances; the other describes variation within a grid square. Our approach is through hierarchical modelling with predictions and measurements being independent conditioned on the underlying non-stationary true deposition. We assume Gaussian processes and calculate maximum likelihood estimates through numerical optimization. We find that the variation within a grid square is by far the largest component of the variation in the true deposition. We assume that the mathematical model produces estimates of the mean over an area that is approximately equal to a grid square, and we find that it has an error that is similar to the long-range stochastic component of the true deposition, in addition to a large bias.

3.
In this paper, we describe an analysis for data collected on a three-dimensional spatial lattice with treatments applied at the horizontal lattice points. Spatial correlation is accounted for using a conditional autoregressive model. Observations are defined as neighbours only if they are at the same depth. This allows the corresponding variance components to vary by depth. We use the Markov chain Monte Carlo method with block updating, together with Krylov subspace methods, for efficient estimation of the model. The method is applicable to both regular and irregular horizontal lattices and hence to data collected at any set of horizontal sites for a set of depths or heights, for example, water column or soil profile data. The model for the three-dimensional data is applied to agricultural trial data for five separate days taken roughly six months apart in order to determine possible relationships over time. The purpose of the trial is to determine a form of cropping that leads to less moist soils in the root zone and beyond. We estimate moisture for each date, depth and treatment accounting for spatial correlation and determine relationships of these and other parameters over time.
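The within-depth neighbourhood structure can be illustrated by building a proper CAR precision matrix for a single depth layer on a regular grid (a minimal sketch with illustrative `rho` and `tau`; the paper additionally lets the variance components vary by depth and uses block-updated MCMC for estimation):

```python
import numpy as np

def car_precision(n_side, rho=0.9, tau=1.0):
    """Precision matrix Q = tau * (D - rho * W) of a proper CAR model
    on an n_side x n_side grid, with W the rook-adjacency matrix of
    sites within the same depth layer and D its diagonal degree matrix."""
    n = n_side * n_side
    W = np.zeros((n, n))
    for i in range(n_side):
        for j in range(n_side):
            k = i * n_side + j
            if i + 1 < n_side:          # neighbour below
                W[k, k + n_side] = W[k + n_side, k] = 1.0
            if j + 1 < n_side:          # neighbour to the right
                W[k, k + 1] = W[k + 1, k] = 1.0
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)
```

For |rho| < 1 the matrix D - rho*W is strictly diagonally dominant, so Q is symmetric positive definite and defines a valid Gaussian Markov random field for that layer.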

4.
Abstract. In geophysical and environmental problems, it is common to have multiple variables of interest measured at the same location and time. These multiple variables typically have dependence over space (and/or time). As a consequence, there is a growing interest in developing models for multivariate spatial processes, in particular, the cross‐covariance models. On the other hand, many data sets these days cover a large portion of the Earth such as satellite data, which require valid covariance models on a globe. We present a class of parametric covariance models for multivariate processes on a globe. The covariance models are flexible in capturing non‐stationarity in the data yet computationally feasible and require moderate numbers of parameters. We apply our covariance model to surface temperature and precipitation data from an NCAR climate model output. We compare our model to the multivariate version of the Matérn cross‐covariance function and models based on coregionalization and demonstrate the superior performance of our model in terms of AIC (and/or maximum loglikelihood values) and predictive skill. We also present some challenges in modelling the cross‐covariance structure of the temperature and precipitation data. Based on the fitted results using full data, we give the estimated cross‐correlation structure between the two variables.

5.
In this work we present a flexible class of linear models to treat observations made in discrete time and continuous space, where the regression coefficients vary smoothly in time and space. This kind of model is particularly appealing in situations where the effect of one or more explanatory processes on the response presents substantial heterogeneity in both dimensions. We describe how to perform inference for this class of models and also how to perform forecasting in time and interpolation in space, using simulation techniques. The performance of the algorithm to estimate the parameters of the model and to perform prediction in time is investigated with simulated data sets. The proposed methodology is used to model pollution levels in the Northeast of the United States.

6.
Summary.  We propose an adaptive varying-coefficient spatiotemporal model for data that are observed irregularly over space and regularly in time. The model is capable of catching possible non-linearity (both in space and in time) and non-stationarity (in space) by allowing the auto-regressive coefficients to vary with both spatial location and an unknown index variable. We suggest a two-step procedure to estimate both the coefficient functions and the index variable, which is readily implemented and can be computed even for large spatiotemporal data sets. Our theoretical results indicate that, in the presence of the so-called nugget effect, the errors in the estimation may be reduced via the spatial smoothing, the second step in the estimation procedure proposed. The simulation results reinforce this finding. As an illustration, we apply the methodology to a data set of sea level pressure in the North Sea.

7.
This study utilizes the liquidity risk associated with Treasury bonds to directly determine the degree to which liquidity spreads account for corporate bond spreads. This enhances understanding of their relative contributions to the yield spreads of corporate bonds. To capture time variation on instantaneous spreads and volatility and to reduce modeling bias, semi-parametric techniques are applied to estimate the time-varying intensity process. Empirical results indicate that our semi-parametric model is good at capturing the time variation in default and liquidity intensity processes. The credit spreads are due to default risk and reflect the relative liquidity of the corporate bond market, indicating that liquidity risk plays an important role in corporate bond valuation.

8.
Long-term temporal trends in water temperature in rivers and streams are typically estimated under the assumption of evenly-spaced space-time measurements. However, sampling times and dates associated with historical water temperature datasets and some sampling designs may be haphazard. As a result, trends in temperature may be confounded with trends in time or space of sampling which, in turn, may yield biased trend estimators and thus unreliable conclusions. We address this concern using multilevel (hierarchical) linear models, where time effects are allowed to vary randomly by day and date effects by year. We evaluate the proposed approach by Monte Carlo simulations with imbalance, sparse data and confounding by trend in time and date of sampling. Simulation results indicate unbiased trend estimators while results from a case study of temperature data from the Illinois River, USA conform to river thermal assumptions. We also propose a new nonparametric bootstrap inference on multilevel models that allows for a relatively flexible and distribution-free quantification of uncertainties. The proposed multilevel modeling approach may be elaborated to accommodate nonlinearities within days and years when sampling times or dates typically span temperature extremes.

9.
Abstract: The authors consider a class of models for spatio‐temporal processes based on convolving independent processes with a discrete kernel that is represented by a lower triangular matrix. They study two families of models. In the first one, spatial Gaussian processes with isotropic correlations are convoluted with a kernel that provides temporal dependencies. In the second family, AR(p) processes are convoluted with a kernel providing spatial interactions. The covariance structures associated with these two families are quite rich: they include covariance functions that are stationary and separable in space and time, as well as time-dependent, nonseparable and nonisotropic ones.

10.
Proportional hazard models for survival data, even though popular and numerically handy, suffer from the restrictive assumption that covariate effects are constant over survival time. A number of tests have been proposed to check this assumption. This paper contributes to this area by employing local estimates that allow fitting hazard models in which covariate effects vary smoothly with time. A formal test is derived to check for proportional hazards against smooth hazards as the alternative. The test proves to possess omnibus power in that it is powerful against arbitrary but smooth alternatives. Comparative simulations and two data examples accompany the presentation. Extensions are provided to multiple covariate settings, where the focus of interest is to decide which of the covariate effects vary with time.

11.
Neural networks are a popular machine learning tool, particularly in applications such as protein structure prediction; however, overfitting can pose an obstacle to their effective use. Due to the large number of parameters in a typical neural network, one may obtain a network fit that perfectly predicts the learning data, yet fails to generalize to other data sets. One way of reducing the size of the parameter space is to alter the network topology so that some edges are removed; however, it is often not immediately apparent which edges should be eliminated. We propose a data-adaptive method of selecting an optimal network architecture using a deletion/substitution/addition algorithm. Results of this approach to classification are presented on simulated data and the breast cancer data of Wolberg and Mangasarian [1990. Multisurface method of pattern separation for medical diagnosis applied to breast cytology. Proc. Nat. Acad. Sci. 87, 9193–9196].

12.
The authors propose a new type of scan statistic to test for the presence of space‐time clusters in point process data, when the goal is to identify and evaluate the statistical significance of localized clusters. Their method is based only on point patterns for cases; it does not require any specific knowledge of the underlying population. The authors propose to scan the three‐dimensional space with a score test statistic under the null hypothesis that the underlying point process is an inhomogeneous Poisson point process with space and time separable intensity. The alternative is that there are one or more localized space‐time clusters. Their method has been implemented in a computationally efficient way so that it can be applied routinely. They illustrate their method with space‐time crime data from Belo Horizonte, a Brazilian city, in addition to presenting a Monte Carlo study to analyze the power of their new test.

13.
Summary.  Short-term forecasts of air pollution levels in big cities are now reported in newspapers and other media outlets. Studies indicate that even short-term exposure to high levels of an air pollutant called atmospheric particulate matter can lead to long-term health effects. Data are typically observed at fixed monitoring stations throughout a study region of interest at different time points. Statistical spatiotemporal models are appropriate for modelling these data. We consider short-term forecasting of these spatiotemporal processes by using a Bayesian kriged Kalman filtering model. The spatial prediction surface of the model is built by using the well-known method of kriging for optimum spatial prediction and the temporal effects are analysed by using the models underlying the Kalman filtering method. The full Bayesian model is implemented by using Markov chain Monte Carlo techniques which enable us to obtain the optimal Bayesian forecasts in time and space. A new cross-validation method based on the Mahalanobis distance between the forecasts and observed data is also developed to assess the forecasting performance of the model implemented.
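The spatial half of such a model can be illustrated with a minimal simple-kriging sketch (zero-mean field, exponential covariance with illustrative `sill`, `range_` and `nugget`; the paper's kriged Kalman filter adds a temporal state-space layer and full Bayesian inference on top of this):

```python
import numpy as np

def krige(obs_xy, obs_z, pred_xy, range_=1.0, sill=1.0, nugget=1e-8):
    """Simple kriging predictor: k(pred, obs) @ K(obs, obs)^-1 @ z."""
    def cov(a, b):
        # exponential covariance from pairwise Euclidean distances
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return sill * np.exp(-d / range_)

    K = cov(obs_xy, obs_xy) + nugget * np.eye(len(obs_xy))  # obs covariance
    k = cov(obs_xy, pred_xy)                                # cross-covariance
    return k.T @ np.linalg.solve(K, obs_z)                  # kriging weights applied to data
```

With a negligible nugget the predictor interpolates the monitoring stations exactly, and predictions decay towards the (zero) prior mean far from all stations.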

14.
The main goal of this work is to generalize the autoregressive conditional duration (ACD) model applied to times between trades to the case of time-varying parameters. The use of wavelets allows the parameters to vary through time and makes it possible to model non-stationary processes without preliminary data transformations. The time-varying ACD model estimation was done by maximum likelihood with standard exponential distributed errors. The properties of the estimators were assessed via bootstrap. We present a simulation exercise for a non-stationary process and an empirical application to a real series, namely the TELEMAR stock. Diagnostic and goodness-of-fit analysis suggest that the time-varying ACD model simultaneously modeled the dependence between durations, intra-day seasonality and volatility.
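The constant-parameter ACD(1,1) recursion with unit-exponential errors can be sketched as follows (parameter values are illustrative, and the paper's wavelet-based time variation of the parameters is omitted):

```python
import numpy as np

# ACD(1,1): x_i = psi_i * eps_i with eps_i ~ Exp(1), and
# psi_i = omega + alpha * x_{i-1} + beta * psi_{i-1}  (alpha + beta < 1).

def simulate_acd(omega, alpha, beta, n, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    psi = omega / (1.0 - alpha - beta)  # start at the unconditional mean
    for i in range(n):
        x[i] = psi * rng.exponential(1.0)
        psi = omega + alpha * x[i] + beta * psi
    return x

def acd_loglik(x, omega, alpha, beta):
    # Exponential errors give log f(x_i | psi_i) = -log(psi_i) - x_i / psi_i.
    psi = omega / (1.0 - alpha - beta)
    ll = 0.0
    for xi in x:
        ll += -np.log(psi) - xi / psi
        psi = omega + alpha * xi + beta * psi
    return ll
```

Maximum-likelihood estimation then amounts to maximising `acd_loglik` over (omega, alpha, beta) subject to positivity and alpha + beta < 1.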

15.
Abstract.  Cox's proportional hazards model is routinely used in many applied fields, sometimes, however, with too little emphasis on the fit of the model. In this paper, we suggest some new tests for investigating whether or not covariate effects vary with time. These tests are a natural and integrated part of an extended version of the Cox model. An important new feature of the suggested test is that time constancy for a specific covariate is examined in a model where some effects of other covariates are allowed to vary with time and some are constant, thus making successive testing of time-dependency possible. The proposed techniques are illustrated with the well-known Mayo liver disease data, and a small simulation study investigates the finite sample properties of the tests.

16.
Summary. The paper develops mixture models for spatially indexed data. We confine attention to the case of finite, typically irregular, patterns of points or regions with prescribed spatial relationships, and to problems where it is only the weights in the mixture that vary from one location to another. Our specific focus is on Poisson-distributed data, and applications in disease mapping. We work in a Bayesian framework, with the Poisson parameters drawn from gamma priors, and an unknown number of components. We propose two alternative models for spatially dependent weights, based on transformations of autoregressive Gaussian processes: in one (the logistic normal model), the mixture component labels are exchangeable; in the other (the grouped continuous model), they are ordered. Reversible jump Markov chain Monte Carlo algorithms for posterior inference are developed. Finally, the performances of both of these formulations are examined on synthetic data and real data on mortality from a rare disease.

17.
Variation of marine temperature at different time scales is a central environmental factor in the life cycle of marine organisms, and may have particular importance for various life stages of anadromous species, for example, Atlantic salmon. To understand the salient features of temperature variation we employ scale space multiresolution analysis, which uses differences of smooths of a time series to decompose it as a sum of scale-dependent components. The number of resolved components can be determined either automatically or by exploring a map that visualizes the structure of the time series. The statistical credibility of the features of the components is established with Bayesian inference. The method was applied to analyze a marine temperature time series measured from the Barents Sea and its correlation with the abundance of Atlantic salmon in three Barents Sea rivers. Besides the annual seasonal variation and a linear trend, the method revealed mid time-scale (~10 years) and long time-scale (~30 years) variation. The 10-year quasi-cyclical component of the temperature time series appears to be connected with a similar feature in Atlantic salmon abundance. These findings can provide information about the environmental factors affecting seasonal and periodic variation in survival and migrations of Atlantic salmon and other migratory fish.

18.
Point processes are the stochastic models most suitable for describing physical phenomena that appear at irregularly spaced times, such as earthquakes. These processes are uniquely characterized by their conditional intensity, that is, by the probability that an event will occur in the infinitesimal interval (t, t+Δt), given the history of the process up to t. The seismic phenomenon displays different behaviours on different time and size scales; in particular, the occurrence of destructive shocks over some centuries in a seismogenic region may be explained by the elastic rebound theory. This theory has inspired the so-called stress release models: their conditional intensity translates the idea that an earthquake produces a sudden decrease in the amount of strain accumulated gradually over time along a fault, and the subsequent event occurs when the stress exceeds the strength of the medium. This study has a double objective: the formulation of these models in the Bayesian framework, and the assignment to each event of a mark, that is its magnitude, modelled through a distribution that depends at time t on the stress level accumulated up to that instant. The resulting parameter space is constrained and dependent on the data, complicating Bayesian computation and analysis. We have resorted to Monte Carlo methods to solve these problems.
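The stress release conditional intensity can be sketched as follows (a minimal illustration with hypothetical parameter values: stress X(t) = x0 + rho*t minus the sum of the stress drops s_i of events before t, and intensity lambda(t) = exp(alpha + beta*X(t))):

```python
import math

def conditional_intensity(t, events, rho=1.0, alpha=-1.0, beta=0.5, x0=0.0):
    """lambda(t) for a stress release model.

    events: list of (t_i, s_i) pairs, each an event time and its stress drop.
    Stress accumulates linearly at rate rho and drops by s_i at each event.
    """
    stress = x0 + rho * t - sum(s for (ti, s) in events if ti < t)
    return math.exp(alpha + beta * stress)
```

Between events the intensity grows exponentially with the accumulating stress, and each event resets it downwards, which is the elastic rebound idea in miniature.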

19.
In randomized clinical trials or observational studies, subjects are recruited at multiple treating sites. Factors that vary across sites may have some influence on outcomes; therefore, they need to be taken into account to get better results. We apply the accelerated failure time (AFT) model with linear mixed effects to analyze failure time data, accounting for correlations between outcomes. Specifically, we use a Bayesian approach to fit the data, computing the regression parameters by a Gibbs sampler combined with the Buckley-James method. This approach is compared with the marginal independence approach and other methods through simulations and an application to a real example.

20.
Pricing of American options in discrete time is considered, where the option is allowed to be based on several underlying stocks. It is assumed that the price processes of the underlying stocks are given by Markov processes. We use the Monte Carlo approach to generate artificial sample paths of these price processes, and then we use nonparametric regression estimates to estimate from these data so-called continuation values, which are defined as mean values of the American option for given values of the underlying stocks at time t subject to the constraint that the option is not exercised at time t. As nonparametric regression estimates we use least squares estimates with complexity penalties, which include as special cases least squares spline estimates, least squares neural networks, smoothing splines and orthogonal series estimates. General results concerning rate of convergence are presented and applied to derive results for the special cases mentioned above. Furthermore the pricing of American options is illustrated by simulated data.
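A regression-based Monte Carlo valuation in this spirit can be sketched for a single-asset American put (here with a plain cubic polynomial least-squares basis rather than the penalized estimators studied in the paper; the geometric Brownian motion dynamics and all parameter values are illustrative):

```python
import numpy as np

def price_american_put(S0=36.0, K=40.0, r=0.06, sigma=0.2,
                       T=1.0, steps=50, paths=20000, seed=1):
    rng = np.random.default_rng(seed)
    dt = T / steps
    # Simulate geometric Brownian motion paths at times dt, 2*dt, ..., T.
    z = rng.standard_normal((paths, steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    payoff = np.maximum(K - S[:, -1], 0.0)  # exercise value at maturity
    # Backward induction: regress discounted payoffs on the current price
    # to estimate continuation values, exercising when intrinsic value wins.
    for t in range(steps - 2, -1, -1):
        payoff *= np.exp(-r * dt)
        itm = (K - S[:, t]) > 0.0           # regress on in-the-money paths only
        if itm.any():
            A = np.vander(S[itm, t], 4)     # cubic polynomial basis
            coef, *_ = np.linalg.lstsq(A, payoff[itm], rcond=None)
            cont = A @ coef                 # estimated continuation values
            ex = (K - S[itm, t]) > cont
            idx = np.where(itm)[0][ex]
            payoff[idx] = K - S[idx, t]
    return np.exp(-r * dt) * payoff.mean()
```

The early-exercise premium shows up in the result exceeding the immediate exercise value K - S0; richer, penalized regression bases of the kind the paper analyses simply replace the Vandermonde step.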


Copyright©北京勤云科技发展有限公司  京ICP备09084417号