Similar articles
20 similar articles retrieved (search time: 31 ms)
1.
The study of physical processes is often aided by computer models or codes. Computer models that simulate such processes are sometimes computationally intensive and therefore not very efficient exploratory tools. In this paper, we address computer models characterized by temporal dynamics and propose new statistical correlation structures aimed at modelling their time dependence. These correlations are embedded in regression models with an input-dependent design matrix and input-correlated errors that act as fast statistical surrogates for the computationally intensive dynamical codes. The methods are illustrated with an automotive industry application involving a road load data acquisition computer model.
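The abstract above describes fast statistical surrogates for a slow dynamic code. As a loose, hypothetical illustration of the surrogate idea (a generic Gaussian-process emulator on a made-up simulator, not the authors' input-dependent regression formulation), one might fit an emulator to a small design of code runs over input and time:

```python
# Hypothetical sketch: a Gaussian-process emulator as a fast surrogate for an
# expensive dynamic computer code (illustrative only; not the paper's exact
# input-dependent regression formulation).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def expensive_code(x, t):
    # Stand-in for the slow simulator: output depends on input x and time t.
    return np.sin(3 * x) * np.exp(-0.5 * t) + 0.1 * t

# A small design over (input, time): each row is one code run we can afford.
rng = np.random.default_rng(0)
X_design = rng.uniform([0.0, 0.0], [1.0, 2.0], size=(40, 2))
y_design = np.array([expensive_code(x, t) for x, t in X_design])

# A kernel over both input and time loosely captures the temporal correlation.
gp = GaussianProcessRegressor(kernel=RBF([0.3, 0.5]) + WhiteKernel(1e-4),
                              normalize_y=True)
gp.fit(X_design, y_design)

# The surrogate predicts (with uncertainty) at new inputs/times almost instantly.
X_new = np.array([[0.25, 1.0], [0.8, 0.2]])
mean, sd = gp.predict(X_new, return_std=True)
print(mean, sd)
```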

2.
The paper gives a review of a number of data models for aggregate statistical data which have appeared in the computer science literature in the last ten years. After a brief introduction to the data model in general, the fundamental concepts of statistical data are introduced. These are called statistical objects because they are complex data structures (vectors, matrices, relations, time series, etc.) which may have different possible representations (e.g. tables, relations, vectors, pie-charts, bar-charts, graphs, and so on). For this reason a statistical object is defined by two different types of attribute: a summary attribute, with its own summary type and its own instances (called summary data), and a set of category attributes, which describe the summary attribute. Some conceptual models of statistical data (CSM, SDM4S), some semantic models of statistical data (SCM, SAM*, OSAM*), and some graphical models of statistical data (SUBJECT, GRASS, STORM) are also discussed.
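As a loose, hypothetical illustration of the "statistical object" concept described above (a summary attribute plus category attributes), and not a reproduction of any of the reviewed models, a minimal data structure might look like this:

```python
# Hypothetical sketch of a "statistical object": a summary attribute described
# by a set of category attributes (illustrative; not taken from any reviewed model).
from dataclasses import dataclass, field

@dataclass
class CategoryAttribute:
    name: str              # e.g. "year" or "region"
    categories: list       # the admissible category values

@dataclass
class StatisticalObject:
    summary_name: str                     # e.g. "average income"
    summary_type: str                     # e.g. "mean", "count", "total"
    category_attributes: list             # list of CategoryAttribute
    summary_data: dict = field(default_factory=dict)  # category tuple -> value

    def set_value(self, key: tuple, value: float) -> None:
        self.summary_data[key] = value

# Usage: average income cross-classified by year and region.
obj = StatisticalObject(
    summary_name="average income",
    summary_type="mean",
    category_attributes=[
        CategoryAttribute("year", [2021, 2022]),
        CategoryAttribute("region", ["north", "south"]),
    ],
)
obj.set_value((2021, "north"), 31250.0)
print(obj.summary_data)
```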

3.
This paper presents a methodology for model fitting and inference in the context of Bayesian models of the type f(Y | X, θ)f(X | θ)f(θ), where Y is the (set of) observed data, θ is a set of model parameters and X is an unobserved (latent) stationary stochastic process induced by the first-order transition model f(X^(t+1) | X^(t), θ), where X^(t) denotes the state of the process at time (or generation) t. The crucial feature of the above type of model is that, given θ, the transition model f(X^(t+1) | X^(t), θ) is known but the distribution of the stochastic process in equilibrium, that is f(X | θ), is, except in very special cases, intractable, hence unknown. A further point to note is that the data Y are assumed to be observed when the underlying process is in equilibrium. In other words, the data are not collected dynamically over time. We refer to such a specification as a latent equilibrium process (LEP) model. It is motivated by problems in population genetics (though other applications are discussed), where it is of interest to learn about parameters such as mutation and migration rates and population sizes, given a sample of allele frequencies at one or more loci. In such problems it is natural to assume that the distribution of the observed allele frequencies depends on the true (unobserved) population allele frequencies, whereas the distribution of the true allele frequencies is only indirectly specified through a transition model. As a hierarchical specification, it is natural to fit the LEP within a Bayesian framework. Fitting such models is usually done via Markov chain Monte Carlo (MCMC). However, we demonstrate that, in the case of LEP models, implementation of MCMC is far from straightforward. The main contribution of this paper is to provide a methodology to implement MCMC for LEP models. We demonstrate our approach in population genetics problems with both simulated and real data sets. The resultant model fitting is computationally intensive and thus we also discuss parallel implementation of the procedure in special cases.
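A toy sketch of the latent-equilibrium idea (my own illustrative example, not the paper's MCMC methodology): given θ, the transition model is easy to simulate forward, but the equilibrium law f(X | θ) is only reached by running the chain, for example a Wright–Fisher allele-frequency chain with mutation.

```python
# Toy sketch of a latent equilibrium process (illustrative, not the paper's model):
# a Wright-Fisher allele-frequency chain with symmetric mutation. The transition
# f(X^(t+1) | X^(t), theta) is easy to simulate, but the equilibrium law f(X | theta)
# is not directly available, so it is approached by running the chain.
import numpy as np

def wright_fisher_step(x, pop_size, mu, rng):
    # One generation: mutation nudges the frequency toward 0.5, then binomial drift.
    p = x * (1 - mu) + (1 - x) * mu
    return rng.binomial(pop_size, p) / pop_size

def simulate_to_equilibrium(theta, generations=5000, seed=0):
    pop_size, mu = theta
    rng = np.random.default_rng(seed)
    x = 0.5
    for _ in range(generations):
        x = wright_fisher_step(x, pop_size, mu, rng)
    return x  # an (approximate) draw from f(X | theta)

# Observed allele counts Y would then be modelled through f(Y | X, theta), e.g.
# binomial sampling of individuals from the equilibrium frequency X.
x_eq = simulate_to_equilibrium(theta=(200, 1e-3))
print(x_eq)
```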

4.
ABSTRACT

Inference for epidemic parameters can be challenging, in part because the data are intrinsically stochastic and are typically observed through incomplete, discrete-time sampling. The problem is particularly acute when the likelihood of the data is computationally intractable, so that standard statistical techniques become too complicated to implement effectively. In this work, we develop a Bayesian method for susceptible–infected–removed stochastic epidemic models via data-augmented Markov chain Monte Carlo. The technique treats the missing values and the model parameters as random variables and samples both. The routines are based on approximating the discrete-time epidemic by a diffusion process. We illustrate the techniques using simulated epidemics and finally apply them to real data from the Eyam plague.
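As a small illustrative sketch under standard SIR assumptions (not the paper's sampler), the diffusion approximation of a discrete-time SIR epidemic can be simulated with an Euler–Maruyama step; transitions of this kind are what a data-augmented MCMC would work with.

```python
# Illustrative Euler-Maruyama simulation of an SIR diffusion approximation
# (a generic sketch under standard assumptions; not the paper's exact scheme).
import numpy as np

def simulate_sir_diffusion(beta, gamma, s0, i0, n_pop, dt=0.1, steps=500, seed=1):
    rng = np.random.default_rng(seed)
    s, i = float(s0), float(i0)
    path = [(s, i)]
    for _ in range(steps):
        inf_rate = beta * s * i / n_pop   # infection intensity
        rem_rate = gamma * i              # removal intensity
        # Gaussian approximation to the numbers of events in (t, t + dt]
        d_inf = inf_rate * dt + np.sqrt(max(inf_rate * dt, 0.0)) * rng.standard_normal()
        d_rem = rem_rate * dt + np.sqrt(max(rem_rate * dt, 0.0)) * rng.standard_normal()
        s = max(s - d_inf, 0.0)
        i = max(i + d_inf - d_rem, 0.0)
        path.append((s, i))
    return np.array(path)

path = simulate_sir_diffusion(beta=0.5, gamma=0.2, s0=995, i0=5, n_pop=1000)
print(path[-1])   # susceptible and infected counts at the end of the run
```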

5.
The design parameters of the economic and economic statistical designs of control charts depend on the distribution of the process failure mechanism, or shock model. So far, only a small number of failure distributions, such as the exponential, gamma, and Weibull with constant or increasing hazard rates, have been used as shock models in the economic and economic statistical designs of Hotelling T² control charts. For both theoretical and practical reasons, the lifetime of the process under study may not follow a distribution with a constant or increasing hazard rate. A suitable alternative in this situation is the Burr distribution, whose hazard rate can be constant, increasing, decreasing, unimodal, or even U-shaped. In this article, economic and economic statistical designs of Hotelling T² control charts under Burr XII shock models were proposed, constructed, and compared under uniform and non-uniform sampling schemes. The resulting design models were implemented in a numerical example, and a sensitivity analysis was conducted to evaluate the effect of changes in the shock-model distribution parameters on the optimum values of the proposed designs. The results showed, first, that the proposed designs under the non-uniform sampling scheme perform better and, second, that the optimum values of the designs are not significantly sensitive to changes in the Burr XII distribution parameters. We also showed that the obtained design models hold for the beta Burr XII shock model as well.
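To make the hazard-shape claim concrete, here is a small sketch (using the standard two-parameter Burr XII form with illustrative parameter values) that evaluates the hazard rate h(x) = c k x^(c-1) / (1 + x^c) for a few choices of (c, k):

```python
# Sketch: hazard rate of the standard two-parameter Burr XII distribution,
# h(x) = c*k*x**(c-1) / (1 + x**c), evaluated for a few (c, k) choices to show
# how the shape changes with the parameters (illustrative values only).
import numpy as np

def burr_xii_hazard(x, c, k):
    return c * k * x ** (c - 1) / (1.0 + x ** c)

x = np.linspace(0.05, 5.0, 100)
for c, k in [(0.8, 1.0), (1.0, 2.0), (3.0, 0.5)]:
    h = burr_xii_hazard(x, c, k)
    print(f"c={c}, k={k}: h(0.05)={h[0]:.3f}, maximum at x={x[h.argmax()]:.2f}")
```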

6.
ABSTRACT

We develop a new score-driven model for the joint dynamics of fat-tailed realized covariance matrix observations and daily returns. The score dynamics for the unobserved true covariance matrix are robust to outliers and incidental large observations in both types of data by assuming a matrix-F distribution for the realized covariance measures and a multivariate Student's t distribution for the daily returns. The filter for the unknown covariance matrix has a computationally efficient matrix formulation, which proves beneficial for estimation and simulation purposes. We formulate parameter restrictions for stationarity and positive definiteness. Our simulation study shows that the new model is able to deal with high-dimensional settings (50 or more) and captures unobserved volatility dynamics even if the model is misspecified. We provide an empirical application to daily equity returns and realized covariance matrices up to 30 dimensions. The model statistically and economically outperforms competing multivariate volatility models out-of-sample. Supplementary materials for this article are available online.
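As a loose univariate analogue of the score-driven idea (not the paper's matrix-F model, and with arbitrary illustrative parameter values), a GAS-type volatility filter driven by the score of a Student's t density down-weights outlying returns when updating the variance:

```python
# Univariate sketch of a score-driven (GAS) volatility filter with Student-t
# observations (a loose analogue of the paper's matrix-valued model, not the
# model itself; parameter values are arbitrary for illustration).
import numpy as np

def t_gas_filter(returns, omega=1e-5, alpha=0.05, beta=0.93, nu=8.0):
    f = np.empty(len(returns) + 1)
    f[0] = np.var(returns)                 # initialise at the sample variance
    for t, y in enumerate(returns):
        # Scaled score of the Student-t log-density: large |y| are down-weighted,
        # which makes the filtered variance robust to incidental outliers.
        s = (nu + 1.0) * y**2 / (nu + y**2 / f[t]) - f[t]
        f[t + 1] = omega + beta * f[t] + alpha * s
    return f

rng = np.random.default_rng(0)
r = 0.01 * rng.standard_t(df=5, size=1000)   # toy fat-tailed return series
filtered_var = t_gas_filter(r)
print(filtered_var[-5:])
```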

7.
The problem of statistically calibrating a measuring instrument can be framed in both a statistical and an engineering context. In the statistical context, the problem is addressed by distinguishing between the 'classical' approach and the 'inverse' regression approach. Both are static models used to estimate exact measurements from measurements affected by error. In the engineering context, the variables of interest are regarded as evolving over time and are considered at the moment they are observed. The Bayesian time-series method of Dynamic Linear Models can be used to monitor the evolution of the measurements, thus introducing a dynamic approach to statistical calibration. The research presented here employs this new approach to statistical calibration. A simulation study in the context of microwave radiometry compares the dynamic model with traditional static frequentist and Bayesian approaches. The focus of the study is to understand how well the dynamic statistical calibration method performs under various signal-to-noise ratios r.
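A minimal sketch of the dynamic ingredient (a generic local-level dynamic linear model filtered with the Kalman recursions and assumed-known noise variances, not the authors' calibration model):

```python
# Minimal sketch of a local-level dynamic linear model (DLM) filtered with the
# Kalman recursions -- a generic illustration of the dynamic approach, not the
# authors' calibration model. Noise variances are assumed known here.
import numpy as np

def local_level_kalman(y, obs_var, state_var, m0=0.0, c0=1e6):
    m, c = m0, c0                       # prior mean and variance of the state
    means, variances = [], []
    for obs in y:
        # Predict: theta_t = theta_{t-1} + w_t, with w_t ~ N(0, state_var)
        a, r = m, c + state_var
        # Update with y_t = theta_t + v_t, with v_t ~ N(0, obs_var)
        k = r / (r + obs_var)           # Kalman gain
        m = a + k * (obs - a)
        c = (1.0 - k) * r
        means.append(m)
        variances.append(c)
    return np.array(means), np.array(variances)

rng = np.random.default_rng(2)
truth = np.cumsum(rng.normal(0.0, 0.05, size=200)) + 10.0   # slowly drifting signal
y = truth + rng.normal(0.0, 0.5, size=200)                   # noisy instrument readings
m, c = local_level_kalman(y, obs_var=0.25, state_var=0.0025)
print(m[-1], truth[-1])
```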

8.
ABSTRACT

The bootstrap is typically less reliable in the context of time-series models with serial correlation of unknown form than when regularity conditions for the conventional IID bootstrap apply. It is, therefore, useful to have diagnostic techniques capable of evaluating bootstrap performance in specific cases. Those suggested in this paper are closely related to the fast double bootstrap (FDB) and are not computationally intensive. They can also be used to gauge the performance of the FDB itself. Examples of bootstrapping time series are presented, which illustrate the diagnostic procedures, and show how the results can cast light on bootstrap performance.
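A rough sketch of the fast double bootstrap p-value in a toy iid setting (my own illustration; the paper is concerned with time-series bootstraps and with diagnostics built on the FDB): each first-level bootstrap sample produces one second-level statistic, and the FDB p-value adjusts the ordinary bootstrap p-value using the empirical distribution of those second-level statistics.

```python
# Rough sketch of the fast double bootstrap (FDB) p-value in a toy iid setting
# (illustrative only; the paper is concerned with time-series bootstraps).
import numpy as np

def t_stat(x):
    return np.sqrt(len(x)) * x.mean() / x.std(ddof=1)

def fdb_pvalue(x, n_boot=999, seed=0):
    rng = np.random.default_rng(seed)
    tau_hat = t_stat(x)
    centred = x - x.mean()              # impose the null (mean zero) on the resamples
    tau1 = np.empty(n_boot)             # first-level bootstrap statistics
    tau2 = np.empty(n_boot)             # one second-level statistic per first-level sample
    for j in range(n_boot):
        xb = rng.choice(centred, size=len(x), replace=True)
        tau1[j] = t_stat(xb)
        xbb = rng.choice(xb - xb.mean(), size=len(x), replace=True)
        tau2[j] = t_stat(xbb)
    p1 = np.mean(np.abs(tau1) >= abs(tau_hat))          # ordinary bootstrap p-value
    q = np.quantile(np.abs(tau2), 1.0 - p1)             # (1 - p1) quantile, second level
    return np.mean(np.abs(tau1) >= q)                   # FDB-adjusted p-value

x = np.random.default_rng(1).normal(0.2, 1.0, size=60)
print(fdb_pvalue(x))
```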

9.
Abstract

The mixture of time-varying effect model (MixTVEM) was proposed to handle both nonlinearity and heterogeneity in describing the complex patterns of change over time in the analysis of intensive longitudinal data (ILD). We conducted simulation studies to assess the performance of the MixTVEM. We found that in most cases the MixTVEM correctly identified the number of latent classes and accurately recovered the coefficient functions. However, the estimation accuracy and the feasibility of the computation could be affected by the sample size. Moreover, the MixTVEM is computationally much more intensive than the original TVEM.

10.
11.
Mixed effects models or random effects models are popular for the analysis of longitudinal data. In practice, longitudinal data are often complex since there may be outliers in both the response and the covariates and there may be measurement errors. The likelihood method is a common approach for these problems, but it can be computationally very intensive and sometimes even infeasible. In this article, we consider approximate robust methods for nonlinear mixed effects models to simultaneously address outliers and measurement errors. The approximate methods are computationally very efficient. We show the consistency and asymptotic normality of the approximate estimates. The methods can also be extended to missing data problems. An example is used to illustrate the methods and a simulation is conducted to evaluate them.

12.
The Gaussian graphical model (GGM) is one of the well-known modelling approaches for describing biological networks under the steady-state condition via the precision matrix of the data. In the literature there are different methods to infer model parameters based on the GGM. The neighbourhood selection with the lasso regression and the graphical lasso method are the most common techniques among these alternative estimation methods, but they can be computationally demanding as the system's dimension increases. Here, we suggest a non-parametric statistical approach, the multivariate adaptive regression splines (MARS), as an alternative to the GGM. To compare the performance of both models, we evaluate the findings for normal and non-normal data via specificity, precision, F-measures, and computational cost. The results show that MARS performs well, making it a plausible alternative to the GGM for the construction of complex biological systems.
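For reference, the graphical-lasso estimate of a sparse precision matrix mentioned above can be obtained as in the following generic sketch (simulated data and scikit-learn's GraphicalLasso; this is not the authors' code and does not implement the MARS alternative):

```python
# Generic sketch: sparse precision-matrix estimation with the graphical lasso
# (one of the standard GGM estimators named above), on simulated data.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Simulate data from a known sparse precision matrix (a simple chain structure).
p = 6
precision = np.eye(p)
for i in range(p - 1):
    precision[i, i + 1] = precision[i + 1, i] = 0.4
cov = np.linalg.inv(precision)
X = rng.multivariate_normal(np.zeros(p), cov, size=500)

model = GraphicalLasso(alpha=0.05).fit(X)
est_precision = model.precision_

# Edges of the estimated network: off-diagonal entries not shrunk to zero.
edges = np.argwhere(np.triu(np.abs(est_precision) > 1e-3, k=1))
print(edges)
```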

13.
Abstract. It is well known that curved exponential families can have multimodal likelihoods. We investigate the relationship between flat or multimodal likelihoods and model lack of fit, the latter measured by the score (Rao) test statistic W_U of the curved model as embedded in the corresponding full model. When data yield a locally flat or convex likelihood (root of multiplicity >1, terrace point, saddle point, local minimum), we provide a formula for W_U at such points, or a lower bound for it. The formula is related to the statistical curvature of the model, and it depends on the amount of Fisher information. We use three models as examples, including the Behrens–Fisher model, to see how a flat likelihood, etc., can by itself indicate a bad fit of the model. The results are related (dual) to classical results by Efron from 1978.

14.
Contours may be viewed as the 2D outline of the image of an object. This type of data arises in medical imaging as well as in computer vision and can be modeled as data on a manifold and studied using statistical shape analysis. Practically speaking, each observed contour, while theoretically infinite dimensional, must be discretized for computations. As such, the coordinates of each contour are obtained at k sampling points, so that the contour is represented as a k-dimensional complex vector. While choosing large values of k will result in closer approximations to the original contour, it will also result in higher computational costs in the subsequent analysis. The goal of this study is to determine reasonable values for k so as to keep the computational cost low while maintaining accuracy. To do this, we consider two methods for selecting sample points and determine lower bounds for k for obtaining a desired level of approximation error using two different criteria. Because this process is computationally inefficient to perform on a large scale, we then develop models for predicting the lower bounds for k based on simple characteristics of the contours.
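A small illustrative sketch (a synthetic contour and a simple gap criterion of my own, not the study's selection methods or error criteria): discretize a closed contour at k equally spaced points, represent it as a complex vector, and watch the approximation error shrink as k grows.

```python
# Illustrative sketch: represent a closed contour as a k-dimensional complex
# vector and examine the approximation error as k grows (synthetic contour and
# a simple error criterion of my own; not the study's selection methods).
import numpy as np

def contour(t):
    # A synthetic closed contour in the complex plane (a wobbly circle).
    r = 1.0 + 0.3 * np.cos(5 * t)
    return r * np.exp(1j * t)

def sample_contour(k):
    t = np.linspace(0.0, 2.0 * np.pi, k, endpoint=False)
    return contour(t)

# Error of a k-point representation relative to a dense reference discretization.
t_ref = np.linspace(0.0, 2.0 * np.pi, 5000, endpoint=False)
z_ref = contour(t_ref)

for k in (10, 25, 50, 100, 200):
    z_k = sample_contour(k)
    # Distance from each reference point to the nearest sampled vertex
    # (a crude proxy for how faithfully the k points trace the contour).
    d = np.abs(z_ref[:, None] - z_k[None, :]).min(axis=1)
    print(f"k={k:4d}  max gap = {d.max():.4f}")
```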

15.
Abstract

The frailties, representing extra variation due to unobserved measurements, are often assumed to be iid in shared frailty models. In medical applications, however, a speculation can arise that a data set might violate the iid assumption. In this paper we investigate this conjecture through an analysis of the kidney infection data in McGilchrist and Aisbett (McGilchrist, C. A., Aisbett, C. W. (1991). Regression with frailty in survival analysis. Biometrics 47:461–466). As a test procedure, we consider the cusum of squares test, which is frequently used for monitoring a variance change in statistical models. Our result strongly supports the heterogeneity of the frailty distribution.
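A minimal sketch of the cusum-of-squares statistic mentioned above (the generic Inclán–Tiao form on toy data; the frailty-model embedding and the critical values used in the paper are not reproduced here):

```python
# Minimal sketch of a cusum-of-squares test for a variance change in a series
# (the generic Inclan-Tiao form on toy data; not the frailty-model analysis itself).
import numpy as np

def cusum_of_squares(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    c = np.cumsum(x**2) / np.sum(x**2)          # C_k
    d = c - np.arange(1, n + 1) / n             # D_k = C_k - k/n
    stat = np.sqrt(n / 2.0) * np.max(np.abs(d)) # compare to roughly 1.36 (approx. 5% level)
    return stat, int(np.argmax(np.abs(d)))      # statistic and candidate change point

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1.0, 150), rng.normal(0, 2.0, 150)])
stat, k_hat = cusum_of_squares(x)
print(stat, k_hat)
```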

16.
Nonparametric regression models are often used to check or suggest a parametric model. Several methods have been proposed to test the hypothesis of a parametric regression function against an alternative smoothing spline model. Some tests, such as the locally most powerful (LMP) test by Cox et al. (Cox, D., Koh, E., Wahba, G. and Yandell, B. (1988). Testing the (parametric) null model hypothesis in (semiparametric) partial and generalized spline models. Ann. Stat., 16, 113–119), the generalized maximum likelihood (GML) ratio test and the generalized cross validation (GCV) test by Wahba (Wahba, G. (1990). Spline models for observational data. CBMS-NSF Regional Conference Series in Applied Mathematics, SIAM), were developed from the corresponding Bayesian models. Their frequentist properties have not been studied. We conduct simulations to evaluate and compare their finite-sample performances. Simulation results show that the performances of these tests depend on the shape of the true function: the LMP and GML tests are more powerful for low-frequency functions, while the GCV test is more powerful for high-frequency functions. For all test statistics, the distributions under the null hypothesis are complicated, and computationally intensive Monte Carlo methods can be used to calculate the null distributions. We also propose approximations to these null distributions and evaluate their performance by simulation.
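As a generic sketch of the Monte Carlo approach to null distributions referred to above (a toy straight-line null and a placeholder roughness statistic of my own, not the GML, GCV, or LMP statistics themselves):

```python
# Generic sketch of computing a test's null distribution by Monte Carlo
# (the broad idea referred to above; not the GML/GCV/LMP statistics themselves).
import numpy as np

def monte_carlo_null(statistic, simulate_null, n_sim=2000, seed=0):
    rng = np.random.default_rng(seed)
    return np.array([statistic(simulate_null(rng)) for _ in range(n_sim)])

# Toy example: the null model is a straight line plus Gaussian noise; the
# "statistic" is a crude placeholder roughness measure of the residuals.
x = np.linspace(0, 1, 100)

def simulate_null(rng):
    return 1.0 + 2.0 * x + rng.normal(0, 0.3, size=x.size)

def statistic(y):
    resid = y - np.poly1d(np.polyfit(x, y, 1))(x)
    return np.abs(np.diff(resid, 2)).mean()

null_draws = monte_carlo_null(statistic, simulate_null)
crit = np.quantile(null_draws, 0.95)             # Monte Carlo 5% critical value
print(crit)
```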

17.
Abstract

Augmented mixed beta regression models are suitable choices for modeling continuous response variables on the closed interval [0, 1]. The random effects in these models are typically assumed to be normally distributed, but this assumption is frequently violated in applied studies. In this paper, an augmented mixed beta regression model with skew-normal independent distributions for the random effects is used. We then adopt a Bayesian approach to parameter estimation using an MCMC algorithm. The methods are evaluated through intensive simulation studies. Finally, the proposed models are applied to a dataset from an Iranian Labor Force Survey.

18.
Clustered count data are commonly analysed by the generalized linear mixed model (GLMM). Here, the correlation due to clustering and some overdispersion is captured by the inclusion of cluster-specific normally distributed random effects. Often, the model does not capture the variability completely. Therefore, the GLMM can be extended by including a set of gamma random effects. Routinely, the GLMM is fitted by maximizing the marginal likelihood. However, this process is computationally intensive. Although feasible with medium to large data, it can be too time-consuming or computationally intractable with very large data. Therefore, a fast two-stage estimator for correlated, overdispersed count data is proposed. It is rooted in the split-sample methodology. Based on a simulation study, it shows good statistical properties. Furthermore, it is computationally much faster than the full maximum likelihood estimator. The approach is illustrated using a large dataset belonging to a network of Belgian general practices.
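As a generic sketch of the split-sample idea underlying a fast two-stage estimator (partition the clusters, fit a simple model on each partition, combine by averaging; not the authors' exact estimator for the gamma/normal random-effects model):

```python
# Generic sketch of the split-sample idea behind a fast two-stage estimator:
# partition the clusters, fit a simple model on each partition, then combine the
# estimates by averaging (not the authors' exact GLMM estimator).
import numpy as np
import statsmodels.api as sm

def split_sample_poisson(y, X, cluster_ids, n_splits=5, seed=0):
    rng = np.random.default_rng(seed)
    clusters = np.unique(cluster_ids)
    rng.shuffle(clusters)
    parts = np.array_split(clusters, n_splits)
    estimates = []
    for part in parts:
        mask = np.isin(cluster_ids, part)
        fit = sm.GLM(y[mask], X[mask], family=sm.families.Poisson()).fit()
        estimates.append(fit.params)
    estimates = np.vstack(estimates)
    # Combine: average across splits; between-split spread gives a crude SE.
    return estimates.mean(axis=0), estimates.std(axis=0, ddof=1) / np.sqrt(n_splits)

# Toy data: 200 clusters of 10 observations each with one covariate.
rng = np.random.default_rng(1)
cluster_ids = np.repeat(np.arange(200), 10)
x = rng.normal(size=cluster_ids.size)
u = np.repeat(rng.normal(0, 0.3, size=200), 10)       # cluster random effects
y = rng.poisson(np.exp(0.5 + 0.8 * x + u))
X = sm.add_constant(x)
coef, se = split_sample_poisson(y, X, cluster_ids)
print(coef, se)
```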

19.
Combining statistical models is a useful approach in all research areas where a global picture of the problem needs to be constructed by binding together evidence from different sources [M.S. Massa and S.L. Lauritzen, Combining Statistical Models, in M. Viana and H. Wynn, eds., American Mathematical Society, Providence, RI, 2010, pp. 239–259]. In this paper, we investigate the effectiveness of combining a fixed number of Gaussian graphical models that respect some consistency assumptions in problems of model building. In particular, we use the meta-Markov combination of Gaussian graphical models as detailed in Massa and Lauritzen and compare model selection results obtained by combining selections over smaller sets of variables with selection results over all variables of interest. To do so, we carry out simulation studies in which different criteria are considered for the selection procedures. We conclude that the combination generally performs better than global estimation, is computationally simpler by virtue of having fewer and simpler models to work with, and has intuitive appeal in a wide variety of contexts.

20.
Measuring a statistical model's complexity is important for model criticism and comparison. However, it is unclear how to do this for hierarchical models due to uncertainty about how to count the random effects. The authors develop a complexity measure for generalized linear hierarchical models based on linear model theory. They demonstrate the new measure for binomial and Poisson observables modeled using various hierarchical structures, including a longitudinal model and an areal-data model having both spatial clustering and pure heterogeneity random effects. They compare their new measure to a Bayesian index of model complexity, the effective number pD of parameters (Spiegelhalter, Best, Carlin & van der Linde 2002); the comparisons are made in the binomial and Poisson cases via simulation and two real data examples. The two measures are usually close, but differ markedly in some instances where pD is arguably inappropriate. Finally, the authors show how the new measure can be used to approach the difficult task of specifying prior distributions for variance components, and in the process cast further doubt on the commonly-used vague inverse gamma prior.
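For orientation, the Bayesian index pD referred to above is computed from posterior draws as the mean posterior deviance minus the deviance at the posterior mean; a generic sketch on a toy normal model (not the paper's hierarchical examples):

```python
# Generic sketch of the effective number of parameters pD from posterior draws:
# pD = mean posterior deviance minus deviance at the posterior mean
# (toy normal model with known variance; not the paper's hierarchical examples).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
y = rng.normal(1.0, 1.0, size=30)            # data with known unit variance

# Conjugate posterior for the mean under a N(0, 10^2) prior.
prior_var, like_var = 100.0, 1.0
post_var = 1.0 / (1.0 / prior_var + len(y) / like_var)
post_mean = post_var * y.sum() / like_var
theta_draws = rng.normal(post_mean, np.sqrt(post_var), size=5000)

def deviance(theta):
    return -2.0 * norm.logpdf(y, loc=theta, scale=1.0).sum()

mean_dev = np.mean([deviance(t) for t in theta_draws])
dev_at_mean = deviance(theta_draws.mean())
p_d = mean_dev - dev_at_mean                  # should be close to 1 for one parameter
print(p_d)
```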
