Similar literature
A total of 20 similar documents were found (search time: 268 ms).
1.
2.
Abstract.  This paper considers the problem of mapping spatial variation of yield in a field using data from a yield monitoring system on a combine harvester. The unobserved yield is assumed to be a Gaussian random field and the yield monitoring system data is modelled as a convolution of the yield and an impulse response function. This results in an unusual spatial covariance structure (depending on the driving pattern of the combine harvester) for the yield monitoring system data. Parameters of the impulse response function and the spatial covariance function of the yield are estimated using maximum likelihood methods. The fitted model is assessed using certain empirical directional covariograms and the yield is finally predicted using the inferred statistical model.
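The convolution measurement mechanism described above can be sketched in a few lines. The following is a minimal, hypothetical illustration along a single combine pass; the exponential impulse response, the covariance range, and the noise level are all assumptions for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

# true yield along one combine pass (a smooth 1-D Gaussian process here,
# standing in for a transect of the 2-D yield field)
n = 300
t = np.arange(n)
cov = np.exp(-np.abs(t[:, None] - t[None, :]) / 20.0)   # exponential covariance
yield_true = 5.0 + np.linalg.cholesky(cov + 1e-8 * np.eye(n)) @ rng.standard_normal(n)

# monitor reading = convolution of the yield with an impulse response
# (grain takes time to travel through the machine), plus measurement noise
h = np.exp(-np.arange(12) / 4.0)
h /= h.sum()                                            # unit-gain response
monitor = np.convolve(yield_true, h)[:n] + rng.normal(0, 0.1, n)
```

Because the response integrates over the recent driving path, the covariance of `monitor` depends on the driving pattern, which is the unusual structure the paper exploits.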

3.
Data envelopment analysis (DEA) is a deterministic econometric model for calculating efficiency by using data from an observed set of decision-making units (DMUs). We propose a method for calculating the distribution of efficiency scores. Our framework relies on estimating data from an unobserved set of DMUs. The model provides posterior predictive data for the unobserved DMUs to augment the frontier in the DEA that provides a posterior predictive distribution for the efficiency scores. We explore the method on a multiple-input and multiple-output DEA model. The data for the example are from a comprehensive examination of how nursing homes complete a standardized mandatory assessment of residents.
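The deterministic DEA scores that this approach builds on can be computed with one linear program per DMU. Below is a hedged sketch of the standard input-oriented CCR model (not the paper's Bayesian augmentation), with toy two-DMU data chosen for illustration:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y):
    """Input-oriented CCR efficiency for each DMU.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                  # minimize theta
        # inputs: sum_j lam_j x_ij - theta * x_io <= 0
        A_in = np.hstack([-X[o:o + 1].T, X.T])
        b_in = np.zeros(m)
        # outputs: -sum_j lam_j y_rj <= -y_ro
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        b_out = -Y[o]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[b_in, b_out],
                      bounds=[(0, None)] * (1 + n))
        scores.append(res.fun)
    return np.array(scores)

# toy data: DMU 0 is efficient; DMU 1 uses twice the input for the same output
X = np.array([[2.0], [4.0]])
Y = np.array([[2.0], [2.0]])
scores = dea_ccr_efficiency(X, Y)
```

The paper's method would then add posterior predictive DMUs to `X` and `Y` before re-solving, yielding a distribution of scores rather than a single number.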

4.
In this paper we model the Gaussian errors in the standard Gaussian linear state space model as stochastic volatility processes. We show that conventional MCMC algorithms for this class of models are ineffective, but that the problem can be alleviated by reparameterizing the model. Instead of sampling the unobserved variance series directly, we sample in the space of the disturbances, which proves to lower correlation in the sampler and thus increases the quality of the Markov chain.

Using our reparameterized MCMC sampler, it is possible to estimate an unobserved factor model for exchange rates between a group of n countries. The underlying n + 1 country-specific currency strength factors and the n + 1 currency volatility factors can be extracted using the new methodology. With the factors, a more detailed image of the events around the 1992 EMS crisis is obtained.

We assess the fit of competitive models on the panels of exchange rates with an effective particle filter and find that indeed the factor model is strongly preferred by the data.

5.
The unknown or unobservable risk factors in survival analysis cause heterogeneity between individuals. Frailty models are used in survival analysis to account for the unobserved heterogeneity in individual risks of disease and death. To analyze bivariate data on related survival times, shared frailty models have been suggested. The most common shared frailty model is one in which the frailty acts multiplicatively on the hazard function. In this paper, we introduce the shared gamma frailty model and the inverse Gaussian frailty model with the reversed hazard rate. We introduce a Bayesian estimation procedure using the Markov chain Monte Carlo (MCMC) technique to estimate the parameters involved in the models. We present a simulation study to compare the true values of the parameters with the estimated values. We also apply the proposed models to the Australian twin data set, and a better model is suggested.
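The multiplicative shared gamma frailty construction can be simulated directly: a common gamma frailty multiplies a baseline hazard shared by both members of a pair. This sketch uses a constant baseline hazard and illustrative parameter values (none taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, lam0, n = 1.0, 0.5, 100_000   # frailty variance, baseline hazard, pairs

# shared frailty Z ~ Gamma(1/theta, scale=theta): mean 1, variance theta
Z = rng.gamma(shape=1 / theta, scale=theta, size=n)

# conditional on Z, both lifetimes in a pair are Exponential with hazard Z*lam0,
# so the frailty acts multiplicatively on the hazard and induces dependence
T1 = rng.exponential(1.0 / (Z * lam0))
T2 = rng.exponential(1.0 / (Z * lam0))
```

Integrating out the frailty gives the marginal survival S(t) = (1 + theta*lam0*t)^(-1/theta), so with these values the median lifetime is 2, and pairs are positively dependent because they share `Z`.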

6.
An exact maximum likelihood method is developed for the estimation of parameters in a non-Gaussian nonlinear density function that depends on a latent Gaussian dynamic process with long-memory properties. Our method relies on the method of importance sampling and on a linear Gaussian approximating model from which the latent process can be simulated. Given the presence of a latent long-memory process, we require a modification of the importance sampling technique. In particular, the long-memory process needs to be approximated by a finite dynamic linear process. Two possible approximations are discussed and are compared with each other. We show that an autoregression obtained from minimizing mean squared prediction errors leads to an effective and feasible method. In our empirical study, we analyze ten daily log-return series from the S&P 500 stock index by univariate and multivariate long-memory stochastic volatility models. We compare the in-sample and out-of-sample performance of a number of models within the class of long-memory stochastic volatility models.

7.
We propose a hidden Markov model for longitudinal count data where sources of unobserved heterogeneity arise, making data overdispersed. The observed process, conditionally on the hidden states, is assumed to follow an inhomogeneous Poisson kernel, where the unobserved heterogeneity is modeled in a generalized linear model (GLM) framework by adding individual-specific random effects in the link function. Due to the complexity of the likelihood within the GLM framework, model parameters may be estimated by numerical maximization of the log-likelihood function or by simulation methods; we propose a more flexible approach based on the Expectation Maximization (EM) algorithm. Parameter estimation is carried out using a non-parametric maximum likelihood (NPML) approach in a finite mixture context. Simulation results and two empirical examples are provided.

8.
In this paper, we address the problem of simulating from a data-generating process for which the observed data do not follow a regular probability distribution. One existing method for doing this is bootstrapping, but it is incapable of interpolating between observed data. For univariate or bivariate data, in which a mixture structure can easily be identified, we could instead simulate from a Gaussian mixture model. In general, though, we would have the problem of identifying and estimating the mixture model. Instead of these, we introduce a non-parametric method for simulating datasets like this: Kernel Carlo Simulation. Our algorithm begins by using kernel density estimation to build a target probability distribution. Then, an envelope function that is guaranteed to be higher than the target distribution is created. We then use simple accept–reject sampling. Our approach is more flexible than others, can simulate intelligently across gaps in the data, and requires no subjective modelling decisions. With several univariate and multivariate examples, we show that our method returns simulated datasets that, compared with the observed data, retain the covariance structures and have distributional characteristics that are remarkably similar.
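The KDE-plus-accept-reject pipeline described above can be sketched in the univariate case. This is only one possible reading of the algorithm, with a flat envelope and an illustrative bimodal dataset; the paper's envelope construction may differ, and the evaluation range deliberately extends a little beyond the data so the kernel tails are (almost entirely) covered:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 0.5, 150), rng.normal(3, 1.0, 150)])

kde = gaussian_kde(data)                      # step 1: KDE target density
grid = np.linspace(data.min() - 3, data.max() + 3, 2000)
M = 1.1 * kde(grid).max()                     # step 2: flat envelope above target

def kernel_carlo(n):
    """Step 3: accept-reject sampling from the KDE under a uniform envelope."""
    out = []
    lo, hi = grid[0], grid[-1]
    while len(out) < n:
        x = rng.uniform(lo, hi, size=2 * n)   # proposals from the envelope
        u = rng.uniform(0, M, size=2 * n)
        out.extend(x[u < kde(x)])             # keep points under the KDE curve
    return np.array(out[:n])

sims = kernel_carlo(5000)
```

Unlike a bootstrap resample, `sims` contains values strictly between (and slightly beyond) the observed data points, which is the interpolation property the abstract emphasizes.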

9.
Bayesian analysis of single-molecule experimental data
Summary.  Recent advances in experimental technologies allow scientists to follow biochemical processes on a single-molecule basis, which provides much richer information about chemical dynamics than traditional ensemble-averaged experiments but also raises many new statistical challenges. The paper provides the first likelihood-based statistical analysis of the single-molecule fluorescence lifetime experiment designed to probe the conformational dynamics of a single deoxyribonucleic acid (DNA) hairpin molecule. The conformational change is initially treated as a continuous time two-state Markov chain, which is not observable and must be inferred from changes in photon emissions. This model is further complicated by unobserved molecular Brownian diffusions. Beyond the simple two-state model, a competing model that models the energy barrier between the two states of the DNA hairpin as an Ornstein–Uhlenbeck process has been suggested in the literature. We first derive the likelihood function of the simple two-state model and then generalize the method to handle complications such as unobserved molecular diffusions and the fluctuating energy barrier. The data augmentation technique and Markov chain Monte Carlo methods are developed to sample from the desired posterior distribution. The Bayes factor calculation and posterior estimates of relevant parameters indicate that the fluctuating barrier model fits the data better than the simple two-state model.

10.

In time series analysis, a signal extraction model (SEM) is used to estimate an unobserved signal component from observed time series data. Since the parameters of the components in an SEM are often unknown in practice, a commonly used method is to estimate the unobserved signal component using the maximum likelihood estimates (MLEs) of the component parameters. This paper explores an alternative way to estimate the unobserved signal component when the component parameters are unknown. The suggested method makes use of importance sampling (IS) with Bayesian inference. The basic idea is to treat the parameters of the components in the SEM as a random vector and compute a posterior probability density function of the parameters using Bayesian inference. The IS method is then applied to integrate out the parameters, so that estimates of the unobserved signal component, unconditional on the parameters, can be obtained. The method is illustrated with real time series data. A Monte Carlo study with four different types of time series models is then carried out to compare the performance of this method with that of a commonly used method. The study shows that the IS method with Bayesian inference is computationally feasible and robust, and more efficient in terms of mean square error (MSE) than the commonly used method.
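The idea of integrating parameters out by importance sampling can be sketched on a toy local-level SEM: for each parameter draw from a prior (used here as the proposal, so the importance weight reduces to the likelihood), run the Kalman filter, then weight the conditional signal estimates. All model choices below (observation variance 1, exponential prior on the signal-to-noise ratio) are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# toy local-level model: y_t = mu_t + eps_t,  mu_t = mu_{t-1} + eta_t
T = 200
true_q = 0.1                                  # var(eta)/var(eps), unknown in practice
mu = np.cumsum(rng.normal(0, np.sqrt(true_q), T))
y = mu + rng.normal(0, 1, T)

def kalman(y, q):
    """Filter for the local-level model (obs var 1, state var q).
    Returns filtered state means and the log-likelihood."""
    a, p, ll, means = 0.0, 1e7, 0.0, []       # diffuse initialization
    for yt in y:
        f = p + 1.0                           # prediction variance
        v = yt - a                            # innovation
        ll += -0.5 * (np.log(2 * np.pi * f) + v * v / f)
        a, p = a + p * v / f, p - p * p / f   # measurement update
        means.append(a)
        p += q                                # state transition
    return np.array(means), ll

# importance sampling over q: proposal = prior = Exponential(mean 0.2),
# so the (unnormalized) importance weight is just the likelihood
qs = rng.exponential(0.2, size=500)
ests, logw = [], []
for q in qs:
    m, ll = kalman(y, q)
    ests.append(m)
    logw.append(ll)
w = np.exp(np.array(logw) - max(logw))
w /= w.sum()
signal_est = (w[:, None] * np.array(ests)).sum(axis=0)  # q integrated out
```

The resulting `signal_est` is unconditional on the signal-to-noise ratio, in contrast to the common plug-in approach of filtering once at the MLE of `q`.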

11.
We propose an estimation method that incorporates the correlation/covariance structure between repeated measurements in covariate-adjusted regression models for distorted longitudinal data. In this distorted data setting, neither the longitudinal response nor (possibly time-varying) predictors are directly observable. The unobserved response and predictors are assumed to be distorted/contaminated by unknown functions of a common observable confounder. The proposed estimation methodology adjusts for the distortion effects both in estimation of the covariance structure and in the regression parameters using generalized least squares. The finite-sample performance of the proposed estimators is studied numerically by means of simulations. The consistency and convergence rates of the proposed estimators are also established. The proposed method is illustrated with an application to data from a longitudinal study of cognitive and social development in children.

12.
In this paper, we propose a defective model induced by a frailty term for modeling the proportion of cured individuals. Unlike most cure rate models, defective models have the advantage of modeling the cure rate without adding any extra parameter to the model. The introduction of unobserved heterogeneity among individuals has brought advantages to the estimated model. The influence of unobserved covariates is incorporated using a proportional hazards model. The frailty term, assumed to follow a gamma distribution, is introduced on the hazard rate to control the unobservable heterogeneity of the patients. We assume that the baseline distribution follows either a Gompertz or an inverse Gaussian defective distribution. Thus, we propose and discuss two defective regression models: the gamma-Gompertz and the gamma-inverse Gaussian. Simulation studies are performed to verify the asymptotic properties of the maximum likelihood estimator. Lastly, to illustrate the proposed models, we present three applications to real data sets, one of which is analyzed here for the first time, related to a study of breast cancer at the A.C.Camargo Cancer Center, São Paulo, Brazil.
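The defining feature of a defective distribution is that its survival function has a positive limit, which is read off as the cure fraction with no extra parameter. Under the usual Gompertz parameterization S(t) = exp(-(b/a)(e^{at} - 1)) (an assumption here, as conventions vary), a negative shape `a` makes the distribution defective with cure fraction exp(b/a):

```python
import numpy as np

def gompertz_survival(t, a, b):
    """S(t) = exp(-(b/a) * (exp(a*t) - 1)); defective (improper) when a < 0."""
    return np.exp(-(b / a) * (np.exp(a * t) - 1.0))

a, b = -0.5, 0.3                 # a < 0 => defective; values are illustrative
cure_fraction = np.exp(b / a)    # limit of S(t) as t -> infinity
tail = gompertz_survival(1e4, a, b)   # survival far in the tail
```

With these values the implied cure fraction is exp(-0.6), about 55%, and the tail of the survival curve flattens at exactly that level instead of decaying to zero.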

13.
In this article, we estimate structural labor supply with piecewise-linear budgets and nonseparable endogenous unobserved heterogeneity. We propose a two-stage method to address the endogeneity issue that comes from the correlation between the covariates and unobserved heterogeneity. In the first stage, Evdokimov's nonparametric deconvolution method serves to identify the conditional distribution of unobserved heterogeneity from the quasi-reduced model that uses panel data. In the second stage, the conditional distribution is plugged into the original structural model to estimate labor supply. We apply this methodology to estimate the labor supply of U.S. married men in 2004 and 2005. Our empirical work demonstrates that ignoring the correlation between the covariates and unobserved heterogeneity will bias the estimates of wage elasticities upward. The labor elasticity estimated from a fixed effects model is less than half of that obtained from a random effects model.

14.
For big data analysis, high computational cost for Bayesian methods often limits their applications in practice. In recent years, there have been many attempts to improve computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.

15.
We consider estimation of the number of cells in a multinomial distribution. This is one version of the species problem, which has many applications, such as estimating the number of unobserved species of animals, estimating vocabulary size, etc. We describe the results of a simulation comparison of three principal frequentist procedures for estimating the number of cells (or species). The first procedure postulates a functional form for the cell probabilities; the second procedure approximates the distribution of the probabilities by a parametric probability density function; and the third procedure is based on an estimate of the sample coverage, i.e. the sum of the probabilities of the observed cells. Among the procedures studied, we find that the third (non-parametric) method is globally preferable; the second (functional parametric) method cannot be recommended; and, when based on the inverse Gaussian density, the first method is competitive in some cases with the third method. We also discuss Sichel's recent generalized inverse Gaussian-based procedure which, with some refinement, promises to perform at least as well as the non-parametric method in all cases.
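The third, coverage-based procedure can be illustrated with the classical Good-Turing coverage estimate: the coverage is one minus the fraction of singleton observations, and dividing the observed species count by it gives a simple coverage-based richness estimate. This is one standard coverage estimator, used here as a sketch; the paper's exact variant may differ:

```python
from collections import Counter

def coverage_estimate(sample):
    """Good-Turing sample coverage and a coverage-based species estimate."""
    counts = Counter(sample)
    n = len(sample)
    f1 = sum(1 for c in counts.values() if c == 1)   # singletons
    C_hat = 1.0 - f1 / n          # estimated sample coverage
    D = len(counts)               # observed number of species
    return C_hat, D / C_hat       # estimated total number of species

# toy sample: 6 observed species with counts a:3, b:2, c:1, d:4, e:2, f:1
sample = list("aaabbcddddeef")
C_hat, n_species = coverage_estimate(sample)
```

Here two of thirteen observations are singletons, so the estimated coverage is 11/13 and the richness estimate is slightly above the six species actually observed, reflecting the mass of unseen cells.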

16.
A method is proposed for estimating regression parameters from data containing covariate measurement errors by using Stein estimates of the unobserved true covariates. The method produces consistent estimates for the slope parameter in the classical linear errors-in-variables model and applies to a broad range of nonlinear regression problems, provided the measurement error is Gaussian with known variance. Simulations are used to examine the performance of the estimates in a nonlinear regression problem and to compare them with the usual naive ones obtained by ignoring error and with other estimates proposed recently in the literature.
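In the linear case, the flavor of the approach can be sketched as follows: the naive slope is attenuated toward zero, while regressing on Stein-type shrinkage estimates of the true covariates removes the attenuation when the measurement-error variance is known. The simulation below is an illustration of this general mechanism, not a reproduction of the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta, sigma_u = 50_000, 2.0, 1.0      # sigma_u: known measurement-error sd

x = rng.normal(0, 1.5, n)                # unobserved true covariate
w = x + rng.normal(0, sigma_u, n)        # observed, error-contaminated version
y = beta * x + rng.normal(0, 0.5, n)

# naive slope: attenuated by the factor var(x) / (var(x) + sigma_u^2)
naive = np.cov(w, y)[0, 1] / np.var(w)

# Stein-type shrinkage estimates of the true covariates, then regress on them
shrink = 1.0 - sigma_u**2 / np.var(w)
x_hat = w.mean() + shrink * (w - w.mean())
corrected = np.cov(x_hat, y)[0, 1] / np.var(x_hat)
```

With var(x) = 2.25 and unit error variance, the naive slope converges to about 1.38 rather than the true 2, while the shrinkage-based slope is consistent.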

17.
Summary.  To analyse functional status transitions in the older population better, we fit a semi-Markov process model to data from the 1992–2002 Medicare Current Beneficiary Survey. We used an analogue of the stochastic EM algorithm to address the problem of left censoring of spells in longitudinal data. The iterative algorithm converged robustly under various initial values for the unobserved elapsed durations of spells in progress at base-line. Results on life expectancy and recovery from functional limitations based on the semi-Markov process model differ from those based on the traditional multistate life-table method. The proposed treatment of left-censored spells has the potential to expand the modelling capability that is available to researchers in fields where left censoring is a concern.

18.
We describe a class of random field models for geostatistical count data based on Gaussian copulas. Unlike hierarchical Poisson models often used to describe this type of data, Gaussian copula models allow a more direct modelling of the marginal distributions and association structure of the count data. We study in detail the correlation structure of these random fields when the family of marginal distributions is either negative binomial or zero‐inflated Poisson; these represent two types of overdispersion often encountered in geostatistical count data. We also contrast the correlation structure of one of these Gaussian copula models with that of a hierarchical Poisson model having the same family of marginal distributions, and show that the former is more flexible than the latter in terms of range of feasible correlation, sensitivity to the mean function and modelling of isotropy. An exploratory analysis of a dataset of Japanese beetle larvae counts illustrates some of the findings. All of these investigations show that Gaussian copula models are useful alternatives to hierarchical Poisson models, especially for geostatistical count data that display substantial correlation and small overdispersion.
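The Gaussian copula construction for counts can be sketched at two correlated locations: draw a correlated Gaussian pair, map it to uniforms through the normal CDF, and push the uniforms through the negative binomial quantile function. The correlation and negative binomial parameters below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm, nbinom

rng = np.random.default_rng(4)

# two sites with correlation rho in the underlying Gaussian field
rho, n = 0.7, 100_000
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
z = rng.standard_normal((n, 2)) @ L.T

# copula transform: u = Phi(z), then count = F_NB^{-1}(u)
r, p = 5, 0.4                    # negative binomial marginal parameters
u = norm.cdf(z)
counts = nbinom.ppf(u, r, p).astype(int)
```

The marginals are exactly negative binomial by construction (mean r(1-p)/p = 7.5 here), while the dependence is controlled separately through `rho`; the count-scale correlation is slightly attenuated relative to `rho` because of the discreteness of the margins.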

19.
In this article, we extend the Gaussian process for regression model by assuming a skew Gaussian process prior on the input function and a skew Gaussian white noise on the error term. Under these assumptions, the predictive density of the output function at a new fixed input is obtained in a closed form. Also, we study the Gaussian process predictor when the errors depart from the Gaussianity to the skew Gaussian white noise. The bias is derived in a closed form and is studied for some special cases. We conduct a simulation study to compare the empirical distribution function of the Gaussian process predictor under Gaussian white noise and skew Gaussian white noise.

20.
State-space models provide an important body of techniques for analyzing time-series, but their use requires estimating unobserved states. The optimal estimate of the state is its conditional expectation given the observation histories, and computing this expectation is hard when there are nonlinearities. Existing filtering methods, including sequential Monte Carlo, tend to be either inaccurate or slow. In this paper, we study a nonlinear filter for nonlinear/non-Gaussian state-space models, which uses Laplace's method, an asymptotic series expansion, to approximate the state's conditional mean and variance, together with a Gaussian conditional distribution. This Laplace-Gaussian filter (LGF) gives fast, recursive, deterministic state estimates, with an error which is set by the stochastic characteristics of the model and is, we show, stable over time. We illustrate the estimation ability of the LGF by applying it to the problem of neural decoding and compare it to sequential Monte Carlo both in simulations and with real data. We find that the LGF can deliver superior results in a small fraction of the computing time.
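A first-order version of this idea can be sketched for a one-dimensional random-walk state with Poisson observations (a common neural-decoding toy model): at each step, find the mode of the filtering log-posterior by Newton's method and use the curvature there as the Gaussian approximation's variance. This is only a minimal Laplace-approximation filter in the spirit of the LGF, not the paper's higher-order expansion, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# nonlinear state-space model: x_t = x_{t-1} + eta_t, y_t ~ Poisson(exp(x_t))
T, q = 300, 0.05
x = np.cumsum(rng.normal(0, np.sqrt(q), T)) + 1.0
y = rng.poisson(np.exp(x))

def laplace_filter(y, q, m0=0.0, P0=10.0):
    """First-order Laplace filter: Gaussian approximation at the
    mode of each filtering distribution (deterministic and recursive)."""
    m, P, means = m0, P0, []
    for yt in y:
        mp, Pp = m, P + q                      # predict
        xh = mp
        for _ in range(20):                    # Newton iterations for the mode
            g = -(xh - mp) / Pp + yt - np.exp(xh)      # gradient of log-posterior
            h = -1.0 / Pp - np.exp(xh)                 # (negative) curvature
            xh -= g / h
        m = xh
        P = 1.0 / (1.0 / Pp + np.exp(xh))      # variance from curvature at mode
        means.append(m)
    return np.array(means)

m_est = laplace_filter(y, q)
```

Because each update is a handful of Newton steps rather than a particle population, the filter is deterministic and runs in a small fraction of the time a sequential Monte Carlo pass would take on the same model.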


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号