Similar Documents
20 similar documents found; search time: 46 ms
1.
There are many instances when texture contains valuable information in images, and various methods have been used for texture analysis. We distinguish between micro-textures and macro-textures. The paper models micro-texture using the general spin Ising model from statistical mechanics. This model allows for any number of grey levels and any set of pair interactions. For a given texture, we select an appropriate set of pair interactions and estimate the corresponding parameter values, using linked cluster expansions of the auto-covariances and the partition function. The series expansions are valid for parameters smaller than the critical parameters for which an infinite system would exhibit a phase transition. Hence, sufficiently small-grained micro-textures may be modelled. To ensure that the data meet this requirement, we simulate the model using the Markov chain Monte Carlo method and estimate its critical parameters using the series expansions. We demonstrate these methods on both real and simulated images.
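A minimal simulation sketch in that spirit, using a Potts-type special case of the general pair-interaction model (the lattice size, number of grey levels q, and interaction strength beta below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def simulate_ising_texture(shape=(32, 32), q=4, beta=0.3, sweeps=30, seed=0):
    """Metropolis sampling of a q-grey-level pair-interaction texture.

    Energy: -beta * 1{s_i == s_j} summed over 4-nearest-neighbour pairs
    (a Potts-type special case); keeping beta below its critical value
    yields a fine-grained texture, matching the paper's small-parameter regime.
    """
    rng = np.random.default_rng(seed)
    s = rng.integers(0, q, size=shape)
    rows, cols = shape
    for _ in range(sweeps * rows * cols):
        i, j = rng.integers(0, rows), rng.integers(0, cols)
        proposal = rng.integers(0, q)
        nbrs = (s[(i - 1) % rows, j], s[(i + 1) % rows, j],
                s[i, (j - 1) % cols], s[i, (j + 1) % cols])
        # Energy change: each matching neighbour lowers the energy by beta.
        dE = beta * (sum(n == s[i, j] for n in nbrs)
                     - sum(n == proposal for n in nbrs))
        if dE <= 0 or rng.random() < np.exp(-dE):
            s[i, j] = proposal
    return s

texture = simulate_ising_texture()
print(texture[:5, :5])
```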

2.
We present a mathematical theory of objective, frequentist chance phenomena that uses as a model a set of probability measures. In this work, sets of measures are not viewed as a statistical compound hypothesis or as a tool for modeling imprecise subjective behavior. Instead we use sets of measures to model stable (although not stationary in the traditional stochastic sense) physical sources of finite time series data that have highly irregular behavior. Such models give a coarse-grained picture of the phenomena, keeping track of the range of the possible probabilities of the events. We present methods to simulate finite data sequences coming from a source modeled by a set of probability measures, and to estimate the model from finite time series data. The estimation of the set of probability measures is based on the analysis of a set of relative frequencies of events taken along subsequences selected by a collection of rules. In particular, we provide a universal methodology for finding a family of subsequence selection rules that can estimate any set of probability measures with high probability.
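A hedged toy illustration of estimating a probability interval from relative frequencies along rule-selected subsequences; the drifting source and the three causal selection rules are invented for this sketch, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
# A stable but non-stationary source: the success probability drifts
# inside [0.3, 0.7], so no single measure describes the data.
p_t = 0.5 + 0.2 * np.sin(np.linspace(0, 20, 5000))
x = (rng.random(5000) < p_t).astype(int)

# Causal selection rules: each decides from the past alone whether the
# next observation enters its subsequence.
rules = {
    "all":       lambda t, past: True,
    "after a 1": lambda t, past: t > 0 and past[-1] == 1,
    "even t":    lambda t, past: t % 2 == 0,
}

freqs = {name: np.mean([x[t] for t in range(len(x)) if rule(t, x[:t])])
         for name, rule in rules.items()}
print(freqs)
print("range of estimated probabilities:",
      (min(freqs.values()), max(freqs.values())))
```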

3.
Time series regression models have been widely studied in the literature by several authors. However, statistical analysis of replicated time series regression models has received little attention. In this paper, we study the application of the quasi-least squares method to estimate the parameters in a replicated time series model with errors that follow an autoregressive process of order p. We also discuss two other established methods for estimating the parameters: maximum likelihood assuming normality and the Yule-Walker method. When the number of repeated measurements is bounded and the number of replications n goes to infinity, the regression and the autocorrelation parameters are consistent and asymptotically normal for all three methods of estimation. Essentially, the three methods estimate the regression parameter equally efficiently and differ only in how they estimate the autocorrelation. For p=2 and normal data, we use simulations to show that the quasi-least squares estimate of the autocorrelation is clearly better than the Yule-Walker estimate, and is as good as the maximum likelihood estimate over almost the entire parameter space.
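A sketch of the Yule-Walker step in this replicated setting, assuming p = 2 and averaging autocovariances over replications (all simulation settings below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 200, 10                       # replications, repeated measurements
beta_true, phi = np.array([1.0, 0.5]), np.array([0.6, -0.3])   # AR(2) errors

t = np.arange(T)
X = np.column_stack([np.ones(T), t / T])
Y = np.empty((n, T))
for r in range(n):
    e = np.zeros(T + 50)             # 50 burn-in steps for stationarity
    z = rng.normal(size=T + 50)
    for s in range(2, T + 50):
        e[s] = phi[0] * e[s - 1] + phi[1] * e[s - 2] + z[s]
    Y[r] = X @ beta_true + e[50:]

# Step 1: pooled least squares for the regression parameter.
beta_hat = np.linalg.lstsq(np.kron(np.ones((n, 1)), X), Y.ravel(),
                           rcond=None)[0]
R = Y - X @ beta_hat                 # residuals, one row per replication

# Step 2: Yule-Walker with autocovariances averaged over replications.
def acov(k):
    return np.mean([np.mean(r[k:] * r[:T - k]) for r in R])

g = np.array([acov(0), acov(1), acov(2)])
G = np.array([[g[0], g[1]], [g[1], g[0]]])
phi_hat = np.linalg.solve(G, g[1:])
print(beta_hat, phi_hat)
```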

4.
In this paper we consider the linear compartment model and study the estimation of its different parameters. We discuss a method to obtain initial estimators, which can be used to start any iterative procedure for computing the least-squares estimators. Four different types of confidence intervals are discussed and compared by computer simulations. We propose different methods to estimate the number of components of the linear compartment model. One data set is used to show how the different methods work in practice.
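A hedged sketch of this workflow for a two-compartment (bi-exponential) special case: crude "curve-peeling" initial estimators start an iterative least-squares fit, and Wald-type intervals come from the estimated covariance. The model and all settings are illustrative, not the paper's:

```python
import numpy as np
from scipy.optimize import curve_fit

# A two-compartment (bi-exponential) mean curve; all values are illustrative.
def model(t, a1, b1, a2, b2):
    return a1 * np.exp(-b1 * t) + a2 * np.exp(-b2 * t)

rng = np.random.default_rng(3)
t = np.linspace(0.1, 10, 60)
y = model(t, 4.0, 1.5, 2.0, 0.2) + rng.normal(scale=0.1, size=t.size)

# Initial estimators by "curve peeling": the slow exponential dominates
# the tail; peel it off, then fit the fast one to the head.
slow = np.polyfit(t[t > 5], np.log(np.clip(y[t > 5], 1e-6, None)), 1)
a2_0, b2_0 = np.exp(slow[1]), -slow[0]
resid = y - a2_0 * np.exp(-b2_0 * t)
head = (t < 2) & (resid > 0)
fast = np.polyfit(t[head], np.log(resid[head]), 1)
p0 = [np.exp(fast[1]), -fast[0], a2_0, b2_0]

# Iterative least squares started from the initial estimators.
popt, pcov = curve_fit(model, t, y, p0=p0)
se = np.sqrt(np.diag(pcov))
print(popt)
print(popt - 1.96 * se, popt + 1.96 * se)   # Wald-type 95% intervals
```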

5.
In recent years, Bayesian statistical methods in neuroscience have advanced considerably. In particular, detection of brain signals for studying the complexity of the brain is an active area of research. Functional magnetic resonance imaging (fMRI) is an important tool to determine which parts of the brain are activated by different types of physical behavior. According to recent results, there is evidence that the values of the connectivity brain signal parameters are close to zero, and given the high-frequency behavior of fMRI time series data, Bayesian dynamic models that can identify sparsity are especially valuable. We propose a multivariate Bayesian dynamic approach for model selection and shrinkage estimation of the connectivity parameters. We describe the coupling or lead-lag between any pair of regions by using mixture priors for the connectivity parameters and propose a new weakly informative default prior for the state variances. This framework produces one-step-ahead proper posterior predictive results and induces shrinkage and robustness suitable for fMRI data in the presence of sparsity. To explore the performance of the proposed methodology, we present simulation studies and an application to functional magnetic resonance imaging data.

6.
In this paper, we describe an analysis for data collected on a three-dimensional spatial lattice with treatments applied at the horizontal lattice points. Spatial correlation is accounted for using a conditional autoregressive model. Observations are defined as neighbours only if they are at the same depth. This allows the corresponding variance components to vary by depth. We use the Markov chain Monte Carlo method with block updating, together with Krylov subspace methods, for efficient estimation of the model. The method is applicable to both regular and irregular horizontal lattices and hence to data collected at any set of horizontal sites for a set of depths or heights, for example, water column or soil profile data. The model for the three-dimensional data is applied to agricultural trial data for five separate days taken roughly six months apart in order to determine possible relationships over time. The purpose of the trial is to determine a form of cropping that leads to less moist soils in the root zone and beyond. We estimate moisture for each date, depth and treatment accounting for spatial correlation and determine relationships of these and other parameters over time.
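A minimal sketch of the Krylov idea on a CAR-type precision matrix for one depth layer, assuming Q = tau(D - rho W) on a regular horizontal lattice; tau, rho and the lattice size are assumed values, not the paper's:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# CAR precision Q = tau * (D - rho * W) on a 20 x 20 horizontal lattice,
# with W the same-depth neighbour adjacency and D its row sums.
m = 20
idx = lambda i, j: i * m + j
rows, cols = [], []
for i in range(m):
    for j in range(m):
        for di, dj in ((0, 1), (1, 0)):
            if i + di < m and j + dj < m:
                rows += [idx(i, j), idx(i + di, j + dj)]
                cols += [idx(i + di, j + dj), idx(i, j)]
W = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(m * m, m * m))
D = sp.diags(np.asarray(W.sum(axis=1)).ravel())
Q = 2.0 * (D - 0.95 * W)          # tau = 2, rho = 0.95 (assumed values)

# The Krylov step: solve Q x = b iteratively instead of factorising Q,
# which is what keeps block updates cheap on large irregular lattices.
b = np.random.default_rng(4).normal(size=m * m)
x, info = cg(Q, b)
print(info, np.linalg.norm(Q @ x - b))
```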

7.
In this paper, we consider the problem of estimating the parameters of a matrix normal dynamic linear model when the variance and covariance matrices of its error terms are unknown and may be changing over time. Since the analysis is not conjugate, we use simulation methods based on Markov chain Monte Carlo to estimate the parameters of the model. This analysis allows us to carry out a dynamic principal components analysis of a set of multivariate time series. Furthermore, it permits the treatment of series of different lengths and with missing data. The methodology is illustrated with two empirical examples: the value added distribution of the firms operating in the manufacturing sector of the countries participating in the BACH project, and the joint evolution of a set of international stock-market indices.

8.
In this article, we define and study a new three-parameter model called the Marshall–Olkin extended generalized Lindley distribution. We derive various structural properties of the proposed model, including expansions for the density function, ordinary moments, moment generating function, quantile function, mean deviations, Bonferroni and Lorenz curves, order statistics and their moments, Rényi entropy and reliability. We estimate the model parameters by maximum likelihood and assess the performance of the maximum likelihood estimators in a simulation study. Finally, by means of two real datasets, we illustrate the usefulness of the new model.
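A hedged maximum likelihood sketch: it assumes the exponentiated ("generalized") Lindley CDF as baseline and the standard Marshall-Olkin survival transform; the baseline parameterization and the simulated stand-in data are assumptions, not necessarily the paper's:

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, x):
    alpha, theta, lam = np.exp(params)       # log-parameterised: keep > 0
    # Assumed baseline: exponentiated ("generalized") Lindley.
    u = 1.0 - (1.0 + lam * x / (1.0 + lam)) * np.exp(-lam * x)  # Lindley CDF
    f0 = lam**2 / (1.0 + lam) * (1.0 + x) * np.exp(-lam * x)    # Lindley pdf
    F, f = u**theta, theta * u**(theta - 1.0) * f0
    # Marshall-Olkin transform of the density.
    dens = alpha * f / (1.0 - (1.0 - alpha) * (1.0 - F))**2
    return -np.sum(np.log(dens + 1e-300))

rng = np.random.default_rng(5)
x = rng.gamma(2.0, 1.0, size=500)            # stand-in positive data
fit = minimize(neg_loglik, x0=np.zeros(3), args=(x,), method="Nelder-Mead")
print(np.exp(fit.x))                         # (alpha, theta, lambda) estimates
```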

9.
We derive two methods to estimate the logistic regression coefficients in a meta-analysis when only the 'aggregate' data (mean values) from each study are available. The proposed estimators are the discriminant function estimator and the reverse Taylor series approximation. The two methods gave similar estimates in an example with individual-level data; when aggregate data were used, however, the discriminant function estimates differed markedly from the other two estimates. A simulation study was then performed to evaluate the performance of these two estimators as well as the estimator obtained from the model that simply uses the aggregate data in a logistic regression model. The simulation study showed that all three estimators are biased. The bias increases as the variance of the covariate increases, and the distribution type of the covariates also affects the bias. In general, the estimator from the logistic regression using the aggregate data has less bias and better coverage probabilities than the other two estimators. We conclude that analysts should be cautious in using aggregate data to estimate the parameters of a logistic regression model for the underlying individual data.
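A sketch of the discriminant function estimator, which under normal covariates with a common covariance recovers the logistic slope from group means and a pooled covariance alone (the simulated data below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
n0, n1, p = 400, 300, 2
X0 = rng.normal(0.0, 1.0, size=(n0, p))      # controls (y = 0)
X1 = rng.normal(0.6, 1.0, size=(n1, p))      # cases (y = 1), shifted means

m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
# Pooled covariance from the two groups' aggregate summaries.
S = ((n0 - 1) * np.cov(X0.T) + (n1 - 1) * np.cov(X1.T)) / (n0 + n1 - 2)

beta = np.linalg.solve(S, m1 - m0)                 # slope estimate
beta0 = np.log(n1 / n0) - 0.5 * (m0 + m1) @ beta   # intercept estimate
print(beta0, beta)
```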

10.
In this paper we consider the statistical analysis of multivariate multiple nonlinear regression models with correlated errors, using finite Fourier transforms. Consistency and asymptotic normality of the weighted least squares estimates are established under various conditions on the regressor variables. These conditions involve different types of scalings, and the scaling factors are obtained explicitly for various types of nonlinear regression models, including an interesting model which requires the estimation of unknown frequencies. The estimation of frequencies is a classical problem occurring in many areas such as signal processing, environmental time series, astronomy and other areas of the physical sciences. We illustrate our methodology using two real data sets taken from geophysics and environmental sciences. The geophysical data concern polar motion (now widely known as the "Chandler Wobble"), where one has to estimate the drift parameters, the offset parameters and the two periodicities associated with elliptical motion. The data were first analyzed by Arato, Kolmogorov and Sinai, who treated them as a bivariate time series satisfying a finite-order time series model and estimated the periodicities using the coefficients of the fitted models. Our analysis shows that the two dominant periodicities are 12 months and 410 days. The second example we consider is the minimum/maximum monthly temperatures observed at the Antarctic Peninsula (Faraday/Vernadsky station). It is now widely believed that over the past 50 years there has been a steady warming in this region; if true, the warming has serious consequences for ecology and marine life, as it can result in the melting of ice shelves and glaciers. Our objective here is to estimate any existing temperature trend in the data, and we use the nonlinear regression methodology developed here to achieve that goal.
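A sketch of the unknown-frequency estimation step for a single sinusoid plus mean: profile out the linear parameters and minimise the residual sum of squares over the frequency, with a grid search before local refinement because the criterion is very spiky in the frequency (all settings are illustrative):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
T = 500
t = np.arange(T)
w_true = 2 * np.pi / 37.0
y = (1.0 + 2.0 * np.cos(w_true * t) + 1.0 * np.sin(w_true * t)
     + rng.normal(size=T))

def profile_rss(w):
    # Given the frequency, the mean and sinusoid amplitudes are linear.
    X = np.column_stack([np.ones(T), np.cos(w * t), np.sin(w * t)])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return np.sum(resid**2)

# Coarse grid first (the criterion is very spiky in w), then refine locally.
grid = np.linspace(0.01, np.pi, 4000)
w0 = grid[np.argmin([profile_rss(w) for w in grid])]
fit = minimize_scalar(profile_rss, bounds=(w0 - 0.01, w0 + 0.01),
                      method="bounded")
print(fit.x, w_true)
```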

11.
Traditionally, time series analysis involves building an appropriate model and using either parametric or nonparametric methods to make inference about the model parameters. Motivated by recent developments in dimension reduction for time series, this article presents an empirical application of sufficient dimension reduction (SDR) to nonlinear time series modelling. Here, we use the time series central subspace as a tool for SDR and estimate it using a mutual information index. In particular, to reduce the computational complexity, we propose an efficient method for estimating the minimal dimension and lag using a modified Schwarz–Bayesian criterion when either the dimension or the lag is unknown. Through simulations and real data analysis, the approach presented in this article performs well in autoregression and volatility estimation.

12.
The moment method is a well-known astronomical mode identification technique in asteroseismology which uses a time series of the first three moments of a spectral line to estimate the discrete oscillation mode parameters l and m. The method, in contrast with many other mode identification techniques, also provides estimates of other important continuous parameters such as the inclination angle α and the rotational velocity v_e. We developed a statistical formalism for the moment method based on so-called generalized estimating equations. This formalism allows an estimation of the uncertainty of the continuous parameters, taking into account that the different moments of a line profile are correlated and that the uncertainty of the observed moments also depends on the model parameters. Furthermore, we set up a procedure to take into account the mode uncertainty, i.e. the fact that often several modes (l, m) can adequately describe the data. We also introduce a new lack-of-fit function which works at least as well as a previous discriminant function, and which in addition allows us to identify the sign of the azimuthal order m. We applied our method to the star HD181558 by using several numerical methods, from which we learned that numerically solving the estimating equations is an intensive task. We report on the numerical results, from which we gain insight into the statistical uncertainties of the physical parameters involved in the moment method.

13.
Kalman filtering techniques are widely used by engineers to recursively estimate random signal parameters, which are essentially coefficients in a large-scale time series regression model. These Bayesian estimators depend on the values assumed for the mean and covariance parameters associated with the initial state of the random signal. This paper considers a likelihood approach to estimation and tests of hypotheses involving the critical initial means and covariances. A computationally simple convergent iterative algorithm is used to generate estimators which depend only on standard Kalman filter outputs at each successive stage. Conditions are given under which the maximum likelihood estimators are consistent and asymptotically normal. The procedure is illustrated using a typical large-scale data set involving 10-dimensional signal vectors.
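A minimal sketch of the likelihood idea for a scalar local-level model: the innovations produced by the Kalman filter define a log-likelihood that can be maximised over the initial mean and variance (the direct optimiser below stands in for the paper's iterative algorithm; the model and settings are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
T, q, r = 200, 0.1, 1.0          # state and observation noise variances
alpha = 5.0 + np.cumsum(rng.normal(scale=np.sqrt(q), size=T))
y = alpha + rng.normal(scale=np.sqrt(r), size=T)

def neg_loglik(params):
    m0, log_p0 = params
    m, P = m0, np.exp(log_p0)    # initial state mean and variance
    ll = 0.0
    for t in range(T):
        S = P + r                # innovation variance
        v = y[t] - m             # innovation (one-step prediction error)
        ll += -0.5 * (np.log(2 * np.pi * S) + v * v / S)
        K = P / S                # Kalman gain
        m, P = m + K * v, (1 - K) * P + q    # update, then predict ahead
    return -ll

fit = minimize(neg_loglik, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
print(fit.x[0], np.exp(fit.x[1]))   # estimated initial mean and variance
```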

14.
"Modern time series methods are applied to the analysis of annual demographic data for England, 1541-1800. Evidence is found of non-stationarity in the series and of co-integration among the series. Building on economic models of historical demography, optimal inferential procedures are implemented to estimate the structural parameters of long-term equilibria among the variables. Evidence is found for a small, but significant, Malthusian 'preventive check' as well as interactions between fertility, mortality and nuptiality that are consistent with the predictions often made in demographic studies. Tentative experiments to detect the influence of environmental factors fail to reveal any significant impact on the estimates obtained."  相似文献   

15.
We develop Bayesian inference methods for a recently emerging type of epigenetic data to study the transmission fidelity of DNA methylation patterns over cell divisions. The data consist of parent-daughter double-stranded DNA methylation patterns, with each pattern coming from a single cell and represented as an unordered pair of binary strings. The data are technically difficult and time-consuming to collect, putting a premium on an efficient inference method. Our aim is to estimate rates for the maintenance and de novo methylation events that gave rise to the observed patterns, while accounting for measurement error. We model data at multiple sites jointly, thus using whole-strand information, and considerably reduce confounding between parameters. We also adopt a hierarchical structure that allows for variation in rates across sites without an explosion in the effective number of parameters. Our context-specific priors capture the expected stationarity, or near-stationarity, of the stochastic process that generated the data analyzed here; this expected stationarity is shown to greatly increase the precision of the estimation. Applying our model to a data set collected at the human FMR1 locus, we find that measurement errors, generally ignored in similar studies, occur at a non-trivial rate (inappropriate bisulfite conversion error: 1.6% with 80% CI 0.9-2.3%). Accounting for these errors has a substantial impact on estimates of key biological parameters: the estimated average failure-of-maintenance rate and daughter de novo rate decline from 0.04 to 0.024 and from 0.14 to 0.07, respectively, when errors are accounted for. Our results also provide evidence that de novo events may occur on both parent and daughter strands: the median parent and daughter de novo rates are 0.08 (80% CI: 0.04-0.13) and 0.07 (80% CI: 0.04-0.11), respectively.

16.
The problem of constructing classification methods based on both labeled and unlabeled data sets is considered for analyzing data with complex structures. We introduce a semi-supervised logistic discriminant model with Gaussian basis expansions. Unknown parameters included in the logistic model are estimated by a regularization method in combination with the EM algorithm. For selecting the adjustment parameters, we derive a model selection criterion from a Bayesian viewpoint. Numerical studies are conducted to investigate the effectiveness of the proposed modeling procedures.

17.
Conditional probability distributions have been commonly used in modeling Markov chains. In this paper we consider an alternative approach based on copulas to investigate Markov-type dependence structures. Based on the realization of a single Markov chain, we estimate the parameters using one- and two-stage estimation procedures. We derive asymptotic properties of the marginal and copula parameter estimators and compare the performance of the estimation procedures based on Monte Carlo simulations. At low and moderate levels of dependence, the two-stage estimation performs comparably to maximum likelihood estimation. In addition, we propose a parametric pseudo-likelihood ratio test for copula model selection under the two-stage procedure. We apply the proposed methods to an environmental data set.
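A sketch of the two-stage procedure, assuming an exponential marginal and a Gaussian copula for consecutive pairs; both are illustrative model choices, not necessarily the paper's:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(9)
T, rho_true = 2000, 0.5
z = np.empty(T)
z[0] = rng.normal()
for t in range(1, T):                # Gaussian AR(1) -> Gaussian copula chain
    z[t] = rho_true * z[t - 1] + np.sqrt(1 - rho_true**2) * rng.normal()
x = stats.expon.ppf(stats.norm.cdf(z), scale=2.0)   # exponential marginal

# Stage 1: marginal MLE (exponential scale = sample mean).
scale_hat = x.mean()
u = stats.expon.cdf(x, scale=scale_hat)
q = stats.norm.ppf(np.clip(u, 1e-12, 1 - 1e-12))

def neg_copula_loglik(rho):
    # Gaussian copula log-density summed over consecutive pairs.
    a, b = q[:-1], q[1:]
    return -np.sum(-0.5 * np.log(1 - rho**2)
                   - (rho**2 * (a**2 + b**2) - 2 * rho * a * b)
                   / (2 * (1 - rho**2)))

# Stage 2: copula parameter by one-dimensional optimisation.
fit = minimize_scalar(neg_copula_loglik, bounds=(-0.99, 0.99),
                      method="bounded")
print(scale_hat, fit.x)
```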

18.
In this article, we derive explicit expansions for the moments of beta generalized distributions from power series expansions for the quantile functions of the baseline distributions. We apply our formula to the beta normal, beta Student t, beta gamma and beta beta generalized distributions. We propose a simple way to express the quantile function of any beta generalized distribution as a power series expansion with known coefficients.
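A numerical check of the representation underlying such expansions: if X is beta-G distributed then X = Q_G(U) with U ~ Beta(a, b), so moments follow by integrating the baseline quantile function against the beta density (beta-normal case, with assumed a and b):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

a, b, n = 2.0, 3.0, 2           # assumed beta parameters and moment order

# E[X^n] for the beta-normal law, via the baseline quantile function.
moment = quad(lambda u: stats.norm.ppf(u)**n * stats.beta.pdf(u, a, b),
              0.0, 1.0, limit=200)[0]

# Monte Carlo cross-check: draw U ~ Beta(a, b) and transform by Q_G.
u = np.random.default_rng(10).beta(a, b, size=200_000)
print(moment, np.mean(stats.norm.ppf(u)**n))
```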

19.
Bayesian methods are often used to reduce the sample sizes and/or increase the power of clinical trials. The right choice of the prior distribution is a critical step in Bayesian modeling. If the prior is not completely specified, historical data may be used to estimate it; in an empirical Bayesian analysis, the resulting prior can then be used to produce the posterior distribution. In this paper, we describe a Bayesian Poisson model with a conjugate Gamma prior. The parameters of the Gamma distribution are estimated in the empirical Bayesian framework under two estimation schemes. A straightforward numerical search for the maximum likelihood (ML) solution using the marginal negative binomial distribution is occasionally infeasible, so we propose a simplification of the maximization procedure. The Markov chain Monte Carlo method is used to create a set of Poisson parameters from the historical count data; these Poisson parameters uniquely define the Gamma likelihood function, and easily computable approximation formulae may be used to find the ML estimates of the Gamma parameters. For the sample size calculations, the ML solution is replaced by its upper confidence limit to reflect an incomplete exchangeability of historical trials as opposed to current studies; the exchangeability is measured by the confidence interval for the historical rate of the events. With this prior, the formula for the sample size calculation is completely defined.
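A sketch of the empirical Bayes step via the marginal likelihood: counts that are Poisson given a Gamma(shape a, rate b) rate are marginally negative binomial, so a and b can be fit by numerical ML. The direct search below is the baseline that the paper simplifies; the historical counts are simulated stand-ins:

```python
import numpy as np
from scipy import special
from scipy.optimize import minimize

def neg_marginal_loglik(params, y):
    a, b = np.exp(params)                       # keep a, b positive
    # log NB pmf: Gamma(y+a)/(Gamma(a) y!) * (b/(b+1))^a * (1/(b+1))^y
    ll = (special.gammaln(y + a) - special.gammaln(a)
          - special.gammaln(y + 1)
          + a * np.log(b / (b + 1.0)) - y * np.log(b + 1.0))
    return -np.sum(ll)

rng = np.random.default_rng(11)
lam = rng.gamma(3.0, 1.0 / 2.0, size=40)        # historical event rates
y = rng.poisson(lam)                            # historical count data

fit = minimize(neg_marginal_loglik, x0=np.zeros(2), args=(y,),
               method="Nelder-Mead")
a_hat, b_hat = np.exp(fit.x)
print(a_hat, b_hat, a_hat / b_hat)              # prior mean a/b
```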

20.
We develop and show applications of two new test statistics for deciding whether one ARIMA model provides significantly better h-step-ahead forecasts than another, as measured by the difference of approximations to their asymptotic mean square forecast errors. The two statistics differ in the variance estimates used for normalization. Both variance estimates are consistent even when the models considered are incorrect. Our main variance estimate is further distinguished by accounting for parameter estimation, while the simpler variance estimate treats parameters as fixed. Their broad consistency properties offer improvements over tests of the Diebold and Mariano (1995) type, which treat parameters as fixed and use variance estimates that are generally not consistent in our context. We show how these statistics can be calculated for any pair of ARIMA models with the same differencing operator.
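A hedged sketch of the simpler, parameters-treated-as-fixed variant of such a statistic on squared h-step forecast-error differentials; the truncation of the long-run variance at lag h-1 is the conventional choice, and the error series are simulated stand-ins:

```python
import numpy as np

def dm_type_statistic(e1, e2, h=1):
    """Diebold-Mariano-type statistic on squared h-step forecast errors,
    treating model parameters as fixed (the simpler of the two variance
    choices discussed above); long-run variance truncated at lag h-1.
    """
    d = e1**2 - e2**2                   # loss differential series
    n = d.size
    dbar = d.mean()
    var = np.mean((d - dbar)**2)        # lag-0 autocovariance
    for k in range(1, h):               # add lags 1, ..., h-1
        gk = np.mean((d[k:] - dbar) * (d[:-k] - dbar))
        var += 2 * gk
    return dbar / np.sqrt(var / n)      # approx N(0,1) under equal accuracy

# Illustrative use with simulated one-step errors from two models.
rng = np.random.default_rng(12)
e1 = rng.normal(scale=1.0, size=300)
e2 = rng.normal(scale=1.1, size=300)
print(dm_type_statistic(e1, e2, h=1))
```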
