Similar Articles
20 similar articles found (search time: 15 ms)
1.
A model involving autocorrelated random effects and sampling errors is proposed for small-area estimation, using both time-series and cross-sectional data. The sampling errors are assumed to have a known block-diagonal covariance matrix. This model is an extension of a well-known model, due to Fay and Herriot (1979), for cross-sectional data. A two-stage estimator of a small-area mean for the current period is obtained under the proposed model with known autocorrelation, by first deriving the best linear unbiased prediction estimator assuming known variance components, and then replacing them with their consistent estimators. Extending the approach of Prasad and Rao (1986, 1990) for the Fay-Herriot model, an estimator of the mean squared error (MSE) of the two-stage estimator, correct to a second-order approximation for a small or moderate number of time points, T, and a large number of small areas, m, is obtained. The case of unknown autocorrelation is also considered. Limited simulation results on the efficiency of the two-stage estimators and the accuracy of the proposed MSE estimator are presented.
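The two-stage idea in the cross-sectional Fay-Herriot case can be sketched in a few lines. The following Python sketch is not from the paper: it ignores the time-series extension and autocorrelation, uses a deliberately crude method-of-moments step for the model variance (a simplification of the Prasad-Rao estimator), and all data and names are illustrative assumptions.

```python
import numpy as np

def fay_herriot_eblup(y, X, D):
    """Two-stage (EBLUP) small-area estimates under the Fay-Herriot model
    y_i = x_i' beta + v_i + e_i, with known sampling variances D_i.
    The model variance A is estimated by a simple method-of-moments step."""
    m, p = X.shape
    # Stage 1: crude method-of-moments estimate of the model variance A
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_ols
    A_hat = max((resid @ resid - D.sum()) / (m - p), 0.0)
    # Stage 2: weighted least squares for beta, then shrinkage to the synthetic part
    w = 1.0 / (A_hat + D)
    XtW = X.T * w
    beta = np.linalg.solve(XtW @ X, XtW @ y)
    gamma = A_hat / (A_hat + D)                 # area-specific shrinkage weights
    theta = gamma * y + (1 - gamma) * (X @ beta)
    return theta, A_hat

rng = np.random.default_rng(0)
m = 200
X = np.column_stack([np.ones(m), rng.normal(size=m)])
D = rng.uniform(0.3, 1.0, size=m)               # known sampling variances
theta_true = X @ np.array([1.0, 2.0]) + rng.normal(scale=np.sqrt(0.5), size=m)
y = theta_true + rng.normal(scale=np.sqrt(D))   # direct estimates

est, A_hat = fay_herriot_eblup(y, X, D)
mse_direct = np.mean((y - theta_true) ** 2)
mse_eblup = np.mean((est - theta_true) ** 2)
```

The shrinkage estimates trade variance for a little bias, so over many areas their average squared error is smaller than that of the direct estimates.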

2.
Summary In this paper we analyse the consequences of model overidentification for testing exogeneity when maximum likelihood techniques are used for estimation and inference. This situation is viewed as a particular case of the more general problem of how restrictions on nuisance parameters can help in making inference on the parameters of interest. First, a general model is considered. A suitable factorization of the likelihood function allows a simple derivation of the information matrix and of other tools useful for building joint tests of exogeneity and overidentifying restrictions, of both the Wald and Lagrange multiplier type. The asymptotic local power of the exogeneity test in the just-identified model is compared with that in the overidentified one, when we assume that the latter is the true model. The pseudo-likelihood framework is then used to derive the consequences of working with a model in which overidentifying restrictions are erroneously imposed. The inconsistency introduced by imposing false restrictions is analysed, and the consequences of the misspecification for the exogeneity test are carefully examined.

3.
This note considers how hypotheses of invariance and super exogeneity may be formulated and tested in elliptical linear regression models. It is demonstrated that for jointly elliptical random variables super exogeneity will only hold under normality.

4.
Bayesian methods have been used extensively in small area estimation. A linear model incorporating autocorrelated random effects and sampling errors was previously proposed for small area estimation using both cross-sectional and time-series data in the Bayesian paradigm. In many situations, however, we have time-related counts or proportions in small area estimation; for example, monthly counts of incident cases in small areas. This article considers hierarchical Bayes generalized linear models for a unified analysis of both discrete and continuous data, incorporating cross-sectional and time-series data. The performance of the proposed approach is evaluated through several simulation studies and a real dataset.

5.
Data is rapidly increasing in volume and velocity, and the Internet of Things (IoT) is one important source of this data. The IoT is a collection of connected devices (things) which constantly record data from their surroundings using on-board sensors. These devices can record and stream data to the cloud at a very high rate, leading to high storage and analysis costs. In order to ameliorate these costs, the data is modelled as a stream and analysed online to learn about the underlying process, perform interpolation and smoothing, and make forecasts and predictions. Conventional state space modelling tools assume the observations occur on a fixed regular time grid. However, many sensors change their sampling frequency, sometimes adaptively, or get interrupted and re-started out of sync with the previous sampling grid, or simply generate event data at irregular times. It is therefore desirable to model the system as a partially and irregularly observed Markov process which evolves in continuous time. Both the process and the observation model are potentially non-linear, so particle filters represent the simplest approach to online analysis. A functional Scala library of composable continuous-time Markov process models has been developed in order to model the wide variety of data captured in the IoT.
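The irregular-grid setting can be illustrated with a minimal bootstrap particle filter. The library described is in Scala; the sketch below is an illustrative Python analogue (the Ornstein-Uhlenbeck latent process, Gaussian observation model, and all parameter values are assumptions for the demo). Because the OU transition over any time gap is exactly Gaussian, particles can be propagated across arbitrary inter-observation intervals.

```python
import numpy as np

rng = np.random.default_rng(1)

# Latent state: OU process dx = -theta * x dt + sigma dW, observed with
# Gaussian noise at irregular times; the exact transition over a gap dt
# is Gaussian, so no fixed grid is needed.
theta, sigma, obs_sd = 0.5, 1.0, 0.5

def ou_step(x, dt):
    mean = x * np.exp(-theta * dt)
    var = sigma ** 2 * (1 - np.exp(-2 * theta * dt)) / (2 * theta)
    return mean + np.sqrt(var) * rng.normal(size=x.shape)

# Simulate irregular observation times and noisy observations
times = np.cumsum(rng.exponential(0.4, size=100))
x, prev, xs = 0.0, 0.0, []
for t in times:
    x = ou_step(np.array([x]), t - prev)[0]
    xs.append(x)
    prev = t
xs = np.array(xs)
ys = xs + obs_sd * rng.normal(size=xs.size)

# Bootstrap particle filter: propagate over each irregular gap, reweight
# by the observation likelihood, resample, record the filtering mean.
n_part = 1000
particles = np.zeros(n_part)
prev, means = 0.0, []
for t, y in zip(times, ys):
    particles = ou_step(particles, t - prev)
    logw = -0.5 * ((y - particles) / obs_sd) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(n_part, size=n_part, p=w)   # multinomial resampling
    particles = particles[idx]
    means.append(particles.mean())
    prev = t
means = np.array(means)
rmse = np.sqrt(np.mean((means - xs) ** 2))
```

The filtering means should track the latent state more closely than the raw observations do, since the filter pools the observation with the OU forecast.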

6.
In this paper we develop a Bayesian approach to detecting unit roots in autoregressive panel data models. Our method is based on comparing stationary autoregressive models, with and without individual deterministic trends, to their counterpart models with a unit autoregressive root. This is done under cross-sectional dependence among the error terms of the panel units. Simulation experiments are conducted to assess the performance of the suggested inferential procedure and to investigate whether the Bayesian model comparison approach can distinguish unit root models from stationary autoregressive models under cross-sectional dependence. The approach is applied to real exchange rate series for a panel of the G7 countries and to a panel of US nominal interest rate data.

7.
Efficient estimation of the regression coefficients in longitudinal data analysis requires a correct specification of the covariance structure; misspecification may lead to inefficient or biased estimators of the mean parameters. One of the most commonly used methods for handling the covariance matrix is simultaneous modelling based on its Cholesky decomposition. In this paper, we therefore reparameterize covariance structures in longitudinal data analysis through a modified Cholesky decomposition of the within-subject covariance matrix. Under this decomposition, the within-subject covariance matrix factors into a unit lower triangular matrix involving moving average coefficients and a diagonal matrix involving innovation variances, which are modelled as linear functions of covariates. We then propose a fully Bayesian inference for joint mean and covariance models based on this decomposition. A computationally efficient Markov chain Monte Carlo method combining the Gibbs sampler and the Metropolis–Hastings algorithm is implemented to obtain the Bayesian estimates of the unknown parameters together with their standard deviation estimates. Finally, several simulation studies and a real example illustrate the proposed methodology.
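The moving-average form of the modified Cholesky decomposition (Sigma = L D L' with L unit lower triangular and D diagonal) can be obtained directly from the ordinary Cholesky factor. A minimal sketch follows; the AR(1)-type example covariance is an assumption, and the paper's covariate modelling of the entries of L and the log-innovation variances is omitted.

```python
import numpy as np

def modified_cholesky(Sigma):
    """Decompose a covariance matrix as Sigma = L D L', with L unit lower
    triangular (moving average coefficients) and D diagonal (innovation
    variances), by rescaling the standard Cholesky factor."""
    C = np.linalg.cholesky(Sigma)   # Sigma = C C', C lower triangular
    d = np.diag(C)
    L = C / d                       # divide column j by C[j, j] -> unit diagonal
    D = np.diag(d ** 2)
    return L, D

# Example: AR(1)-type within-subject covariance for T = 5 time points
T, rho = 5, 0.6
Sigma = rho ** np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
L, D = modified_cholesky(Sigma)
assert np.allclose(np.diag(L), 1.0)
assert np.allclose(L @ D @ L.T, Sigma)
```

The unconstrained entries of L and the positive diagonal of D are what make this parameterisation convenient for regression-style modelling of the covariance.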

8.
Latent Markov models (LMMs) are widely used in the analysis of heterogeneous longitudinal data. However, most existing LMMs are developed for fully observed data without missing entries. The main objective of this study is to develop a Bayesian approach for analyzing LMMs with non-ignorable missing data. Bayesian methods for estimation and model comparison are discussed. The empirical performance of the proposed methodology is evaluated through simulation studies. An application to a dataset derived from the National Longitudinal Survey of Youth 1997 is presented.

9.
In the analysis of censored survival data, the Cox (1972) proportional hazards model is extremely popular among practitioners. However, in many real-life situations the proportionality of the hazard ratios does not seem to be an appropriate assumption. To overcome this problem, we consider a class of nonproportional hazards models known as the generalized odds-rate class of regression models. The class is general enough to include several commonly used models, such as the proportional hazards model, the proportional odds model, and the accelerated life-time model. The theoretical and computational properties of these models are re-examined, and the propriety of the posterior is established under mild conditions. A simulation study is conducted, and a detailed analysis of data from a prostate cancer study further illustrates the proposed methodology.

10.
An important goal of research involving gene expression data for outcome prediction is to establish the ability of genomic data to define clinically relevant risk factors. Recent studies have demonstrated that microarray data can successfully cluster patients into low- and high-risk categories. However, the need exists for models which examine how genomic predictors interact with existing clinical factors and provide personalized outcome predictions. We have developed clinico-genomic tree models for survival outcomes which use recursive partitioning to subdivide the current data set into homogeneous subgroups of patients, each with a specific Weibull survival distribution. These trees can provide personalized predictive distributions of the probability of survival for individuals of interest. Our strategy is to fit multiple models; within each model we adopt a prior on the Weibull scale parameter and update this prior via Empirical Bayes whenever the sample is split at a given node. The decision to split is based on a Bayes factor criterion. The resulting trees are weighted according to their relative likelihood values and predictions are made by averaging over models. In a pilot study of survival in advanced stage ovarian cancer we demonstrate that clinical and genomic data are complementary sources of information relevant to survival, and we use the exploratory nature of the trees to identify potential genomic biomarkers worthy of further study.

11.
In some fields, we are forced to work with missing data in multivariate time series, and the data analysis in this context cannot be carried out in the same way as with complete data. To deal with this problem, a Bayesian analysis of multivariate threshold autoregressive models with exogenous inputs and missing data is carried out. Markov chain Monte Carlo methods are used to obtain samples from the posterior distributions involved, including those of the threshold values and the missing data. In order to identify the autoregressive orders, we adapt the Bayesian variable selection method to this class of multivariate processes. The number of regimes is estimated using marginal likelihood or product parameter-space strategies.

12.
Existing studies on the spatial dynamic panel data model (SDPDM) mainly focus on the normality assumption for response variables and random effects, which may be inappropriate in some applications. This paper proposes a new SDPDM in which response variables and random effects follow the multivariate skew-normal distribution. A Markov chain Monte Carlo algorithm combining the Gibbs sampler and the Metropolis–Hastings algorithm is developed to evaluate Bayesian estimates of the unknown parameters and random effects in the skew-normal SDPDM. A Bayesian local influence analysis method is developed to simultaneously assess the effect of minor perturbations to the data, priors and sampling distributions. Simulation studies are conducted to investigate the finite-sample performance of the proposed methodologies, and a real example illustrates them.

13.
The zero-truncated inverse Gaussian–Poisson model, obtained by first mixing the Poisson model, assuming that its expected value has an inverse Gaussian distribution, and then truncating the model at zero, is very useful for modelling frequency count data. A Bayesian analysis based on this statistical model is implemented on the word frequency counts of various texts, and its validity is checked by exploring the posterior distribution of the Pearson errors and by implementing posterior predictive consistency checks. The analysis based on this model is useful because it allows one to use the posterior distribution of the model mixing density as an approximation of the posterior distribution of the density of the word frequencies of the author's vocabulary, which helps characterize the author's style. The posterior distributions of the expectation and of measures of variability of that mixing distribution can be used to assess the size and diversity of the vocabulary. An alternative analysis is proposed based on the inverse Gaussian–zero-truncated Poisson mixture model, obtained by switching the order of the mixing and truncation stages. Even though this second model fits some of the word frequency data sets more accurately than the first, in practice the analysis based on it is not as useful because it does not allow one to estimate the word frequency distribution of the vocabulary.
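The two models differ only in the order of mixing and zero-truncation, and a quick numerical check shows they are genuinely different distributions. The sketch below is illustrative, not from the paper: the IG(mean = 2, shape = 3) mixing density, the integration grid, and all names are assumptions.

```python
import numpy as np
from scipy import stats

# Inverse Gaussian mixing density for the Poisson mean lambda.
# scipy's invgauss(mu=m/shape, scale=shape) has mean m and IG shape `shape`.
m_, shape_ = 2.0, 3.0
grid = np.linspace(1e-6, 60.0, 6000)
dx = grid[1] - grid[0]
g = stats.invgauss.pdf(grid, mu=m_ / shape_, scale=shape_)

ks = np.arange(0, 30)
pois = stats.poisson.pmf(ks[:, None], grid)   # pois[k, j] = P(K = k | lambda = grid[j])
p_mix = (pois * g).sum(axis=1) * dx           # IG-Poisson mixture pmf (Riemann sum)

# Model 1: mix first, then truncate the mixture at zero
p_trunc_mix = p_mix[1:] / (1.0 - p_mix[0])

# Model 2: truncate each Poisson at zero, then mix
zt_pois = pois[1:] / (1.0 - np.exp(-grid))
p_mix_trunc = (zt_pois * g).sum(axis=1) * dx

# Both are proper pmfs on {1, 2, ...}, but they do not coincide
diff = np.abs(p_trunc_mix - p_mix_trunc).max()
```

Intuitively, truncating each Poisson component before mixing up-weights the small-lambda components (each is divided by its own small P(K >= 1)), whereas truncating after mixing rescales all components by the same constant.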

14.
15.
Summary It is known that the problem of combining a number of expert probability evaluations is frequently solved with additive or multiplicative rules. In this paper we show, with the help of a behavioural model, that the additive rule (or linear pooling) derives from the application of Bayesian reasoning. On another occasion we will discuss the multiplicative rule. Research supported by C.N.R. and the Ministry of University and Technological and Scientific Research.
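The additive rule itself is a one-liner. A minimal sketch (the expert evaluations and the weights are illustrative assumptions; one Bayesian reading treats the weights as the decision maker's probabilities over which expert is best calibrated):

```python
def linear_pool(probs, weights):
    """Additive combination of expert probability evaluations:
    P(A_j) = sum_i w_i * P_i(A_j), with non-negative weights summing to one."""
    assert abs(sum(weights) - 1.0) < 1e-12
    return [sum(w * p[j] for w, p in zip(weights, probs))
            for j in range(len(probs[0]))]

# Three experts evaluate the probabilities of three exclusive events
experts = [
    [0.70, 0.20, 0.10],
    [0.50, 0.30, 0.20],
    [0.60, 0.25, 0.15],
]
weights = [0.5, 0.3, 0.2]   # illustrative reliability weights
pooled = linear_pool(experts, weights)   # [0.62, 0.24, 0.14]
```

A convenient property of the linear pool is that the combined evaluation is automatically a probability distribution whenever each expert's is.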

16.
In this paper, a new small-domain estimator for area-level data is proposed. The proposed estimator is motivated by a real problem: estimating the mean price of housing transactions at the regional level in a European country, using data collected in a longitudinal survey conducted by a national statistical office. At the desired level of inference, accurate direct estimates cannot be provided because the sample sizes in these domains are very small. An area-level model with a heterogeneous covariance structure for the random effects underpins the proposed combined estimator. This model extends the model of Fay and Herriot [5] by integrating information across domains and over several periods of time. In addition, a modified method of estimating the variance components for time-series and cross-sectional area-level models is proposed, which incorporates the design weights. A Monte Carlo simulation, based on real data, is conducted to investigate the performance of the proposed estimators in comparison with other estimators frequently used in small area estimation problems; in particular, we compare them with the estimator based on the Rao–Yu model [23]. The simulation study also assesses the performance of the modified variance component estimators in comparison with the traditional ANOVA method. The simulation results show that the proposed estimators perform better than the others in terms of both precision and bias.

17.
In this article, we present a Bayesian model for response variables restricted to the interval (0, 1), such as proportions and rates, using the simplex distribution for data with a longitudinal structure, taking random effects into account. To investigate the stability of the posterior distribution, we study, through a sensitivity analysis, the effect of three different uniparametric prior distributions for the variance parameters of the random effects on the final estimates. For this purpose, we consider homogeneous and heterogeneous structures for the parameters in the location and dispersion submodels. The models and results are illustrated with simulated data and a real-data application.

18.
A multivariate GARCH model is used to investigate Granger causality in the conditional variance of time series. Parametric restrictions for the hypothesis of noncausality in conditional variances between two groups of variables, when there are other variables in the system as well, are derived. These novel conditions are convenient for the analysis of potentially large systems of economic variables. To evaluate hypotheses of noncausality, a Bayesian testing procedure is proposed. It avoids the singularity problem that may appear in the Wald test, and it relaxes the assumption of the existence of higher-order moments of the residuals required in classical tests.

19.
20.
In this paper, we discuss fully Bayesian quantile inference using Markov chain Monte Carlo (MCMC) methods for longitudinal data models with random effects. Under the assumption that the error term follows an asymmetric Laplace distribution, we establish a hierarchical Bayesian model and obtain the posterior distribution of the unknown parameters at the τ-th quantile level. We overcome the current computational limitations using two approaches: a general MCMC technique with the Metropolis–Hastings algorithm, and Gibbs sampling from the full conditional distributions. These two methods outperform traditional frequentist methods under a wide array of simulated data models, and are flexible enough to easily accommodate changes in the number of random effects and in their assumed distribution. We apply the Gibbs sampling method to analyse a mouse growth dataset, and some conclusions different from those in the literature are obtained.
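The asymmetric Laplace device can be illustrated without the random effects. The sketch below is a simplified illustration, not the paper's sampler: flat prior, ALD scale fixed at one, no random effects, and a plain random-walk Metropolis step instead of the Gibbs sampler built on the ALD's normal-exponential mixture representation; all data and tuning values are assumptions. It targets the τ = 0.9 regression quantile.

```python
import numpy as np

rng = np.random.default_rng(2)

def check_loss(u, tau):
    """Quantile check loss rho_tau(u) = u * (tau - I(u < 0))."""
    return u * (tau - (u < 0))

def ald_log_post(beta, X, y, tau):
    """Log posterior for beta under an asymmetric Laplace working
    likelihood (scale fixed at 1) and a flat prior."""
    return -np.sum(check_loss(y - X @ beta, tau))

# Heteroscedastic data: upper-quantile slopes exceed the mean slope of 2
n = 500
x = rng.uniform(0.0, 2.0, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + (0.5 + x) * rng.normal(size=n)

tau = 0.9
beta = np.linalg.lstsq(X, y, rcond=None)[0]   # start at the OLS fit
lp = ald_log_post(beta, X, y, tau)
samples = []
for it in range(6000):
    prop = beta + 0.05 * rng.normal(size=2)   # random-walk proposal
    lp_prop = ald_log_post(prop, X, y, tau)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        beta, lp = prop, lp_prop
    if it >= 1000:                            # discard burn-in
        samples.append(beta.copy())
post_mean = np.mean(samples, axis=0)
# Roughly a fraction tau of residuals at the posterior mean lie below zero
frac_below = np.mean(y - X @ post_mean < 0)
```

Maximising this working log-likelihood is equivalent to minimising the check loss, which is why the ALD posterior concentrates around the τ-th regression quantile even when the ALD is not the true error law.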


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号