Similar Documents
1.
For the analysis of binary data, various deterministic models have been proposed, which are generally simpler to fit and easier to understand than probabilistic models. We claim that corresponding to any deterministic model is an implicit stochastic model in which the deterministic model fits imperfectly, with errors occurring at random. In the context of binary data, we consider a model in which the probability of error depends on the model prediction. We show how to fit this model using a stochastic modification of deterministic optimization schemes. The advantages of fitting the stochastic model explicitly (rather than implicitly, by simply fitting a deterministic model and accepting the occurrence of errors) include quantification of uncertainty in the deterministic model’s parameter estimates, better estimation of the true model error rate, and the ability to check the fit of the model nontrivially. We illustrate this with a simple theoretical example of item response data and with empirical examples from archeology and the psychology of choice.
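
As a rough illustration of the idea (not the authors' implementation), the sketch below wraps a deterministic Guttman-type item response rule in an explicit error model whose error rate depends on the predicted value; for simplicity the deterministic parameters are held fixed and only the two error rates are estimated by maximum likelihood.

```python
# Illustrative sketch: a deterministic rule plus an explicit, prediction-
# dependent error model for binary data. All names and values are made up.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_persons, n_items = 200, 10
ability = rng.normal(size=n_persons)
difficulty = np.linspace(-1.5, 1.5, n_items)

def predict(ability, difficulty):
    # deterministic Guttman rule: correct iff ability exceeds item difficulty
    return (ability[:, None] > difficulty[None, :]).astype(int)

# simulate from the implicit stochastic model: predictions flip with
# probability eps1 (when the prediction is 1) or eps0 (when it is 0)
true_eps0, true_eps1 = 0.10, 0.20
y_hat = predict(ability, difficulty)
flip = rng.uniform(size=y_hat.shape) < np.where(y_hat == 1, true_eps1, true_eps0)
y = np.where(flip, 1 - y_hat, y_hat)

def neg_loglik(logit_eps, y, y_hat):
    eps0, eps1 = 1 / (1 + np.exp(-np.asarray(logit_eps)))
    p_match = np.where(y_hat == 1, 1 - eps1, 1 - eps0)   # P(y == y_hat)
    p = np.where(y == y_hat, p_match, 1 - p_match)
    return -np.sum(np.log(p))

fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(y, y_hat))
print("estimated error rates:", 1 / (1 + np.exp(-fit.x)))
```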

2.
We discuss the development of dynamic factor models for multivariate financial time series, and the incorporation of stochastic volatility components for latent factor processes. Bayesian inference and computation is developed and explored in a study of the dynamic factor structure of daily spot exchange rates for a selection of international currencies. The models are direct generalizations of univariate stochastic volatility models and represent specific varieties of models recently discussed in the growing multivariate stochastic volatility literature. We discuss model fitting based on retrospective data and sequential analysis for forward filtering and short-term forecasting. Analyses are compared with results from the much simpler method of dynamic variance-matrix discounting that, for over a decade, has been a standard approach in applied financial econometrics. We study these models in analysis, forecasting, and sequential portfolio allocation for a selected set of international exchange-rate-return time series. Our goals are to understand a range of modeling questions arising in using these factor models and to explore empirical performance in portfolio construction relative to discount approaches. We report on our experiences and conclude with comments about the practical utility of structured factor models and on future potential model extensions.

3.
Cluster analysis is one of the most widely used methods in statistical analysis, identifying homogeneous subgroups within a heterogeneous population. Because mixed continuous and discrete data arise in many applications, ordinary clustering methods such as hierarchical methods, k-means, and model-based methods have been extended to the analysis of mixed data. However, in the available model-based clustering methods the number of parameters grows with the number of continuous variables, and identifying and fitting an appropriate model may become difficult. In this paper, to reduce the number of parameters, a set of parsimonious models is introduced for model-based clustering of mixed continuous (normal) and nominal data. The models use the general location model approach for the joint distribution of the mixed variables and apply a factor-analyzer structure to the covariance matrices. The ECM algorithm is used to estimate the parameters of these models. The performance of the proposed models for clustering is illustrated with results from simulation studies and the analysis of two real data sets.

4.
Model-based clustering methods for continuous data are well established and commonly used in a wide range of applications. However, model-based clustering methods for categorical data are less standard. Latent class analysis is a commonly used method for model-based clustering of binary data and/or categorical data, but due to an assumed local independence structure there may not be a correspondence between the estimated latent classes and groups in the population of interest. The mixture of latent trait analyzers model extends latent class analysis by assuming a model for the categorical response variables that depends on both a categorical latent class and a continuous latent trait variable; the discrete latent class accommodates group structure and the continuous latent trait accommodates dependence within these groups. Fitting the mixture of latent trait analyzers model is potentially difficult because the likelihood function involves an integral that cannot be evaluated analytically. We develop a variational approach for fitting the mixture of latent trait models and this provides an efficient model fitting strategy. The mixture of latent trait analyzers model is demonstrated on the analysis of data from the National Long Term Care Survey (NLTCS) and voting in the U.S. Congress. The model is shown to yield intuitive clustering results and it gives a much better fit than either latent class analysis or latent trait analysis alone.
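
The sketch below is a hypothetical simulation, with made-up parameter values, of the response model described above: item probabilities depend on a discrete class and a continuous trait, and the shared trait induces dependence among the binary items within each class.

```python
# Illustrative simulation of a mixture-of-latent-trait-analyzers response model.
import numpy as np

rng = np.random.default_rng(1)
n, n_items, G = 500, 6, 2
eta = np.array([0.6, 0.4])                   # class proportions
b = rng.normal(0, 1, size=(G, n_items))      # class-specific item intercepts
w = rng.normal(0, 1, size=(G, n_items))      # class-specific loadings (1-dim trait)

g = rng.choice(G, size=n, p=eta)             # latent class membership
z = rng.normal(size=n)                       # continuous latent trait
logits = b[g] + w[g] * z[:, None]
y = (rng.uniform(size=logits.shape) < 1 / (1 + np.exp(-logits))).astype(int)

# items are correlated within a class because they share the latent trait z
print(np.round(np.corrcoef(y[g == 0], rowvar=False), 2))
```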

5.
The evaluation of the performance of a continuous diagnostic measure is a commonly encountered task in medical research. We develop Bayesian non-parametric models that use Dirichlet process mixtures and mixtures of Polya trees for the analysis of continuous serologic data. The modelling approach differs from traditional approaches to the analysis of receiver operating characteristic curve data in that it incorporates a stochastic ordering constraint for the distributions of serologic values for the infected and non-infected populations. Biologically such a constraint is virtually always feasible because serologic values from infected individuals tend to be higher than those for non-infected individuals. The models proposed provide data-driven inferences for the infected and non-infected population distributions, and for the receiver operating characteristic curve and corresponding area under the curve. We illustrate and compare the predictive performance of the Dirichlet process mixture and mixture of Polya trees approaches by using serologic data for Johne's disease in dairy cattle.
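
As a point of reference for the quantities being modelled, the sketch below computes an empirical ROC curve and AUC for simulated serologic values in which the infected group is stochastically larger; the Dirichlet process mixture and Polya tree machinery itself is not shown.

```python
# Empirical ROC curve and AUC for a continuous marker, on simulated data.
import numpy as np

rng = np.random.default_rng(2)
non_infected = rng.normal(0.0, 1.0, size=300)
infected = rng.normal(1.5, 1.2, size=100)     # stochastically larger values

# (fpr, tpr) pairs trace the empirical ROC curve as the cut-off varies
cutoffs = np.sort(np.concatenate([non_infected, infected]))[::-1]
tpr = np.array([(infected >= c).mean() for c in cutoffs])      # sensitivity
fpr = np.array([(non_infected >= c).mean() for c in cutoffs])  # 1 - specificity

# AUC = P(infected value > non-infected value)
auc = (infected[:, None] > non_infected[None, :]).mean()
print("sensitivity at 90% specificity:", round(float(tpr[fpr <= 0.10][-1]), 3))
print("empirical AUC:", round(float(auc), 3))
```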

6.
Several models are available for longitudinal data with nonrandom missingness; the selection model of Diggle and Kenward is one of them. Many authors have noted that this model relies on modelling assumptions, such as the response distribution, that cannot be tested from the observed data, so a sensitivity analysis of the study’s conclusions with respect to such assumptions is needed. The stochastic EM algorithm is proposed and developed to handle continuous longitudinal data with nonrandom intermittent missing values when the responses have a non-normal distribution. This is a step toward investigating the sensitivity of the parameter estimates to changes in the assumed response distribution. The proposed technique is applied to real data from the International Breast Cancer Study Group.
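
The sketch below illustrates the stochastic EM idea on a toy normal-mixture problem (with latent labels playing the role of the missing data), not the paper's longitudinal selection model: each iteration simulates the missing quantities from their conditional distribution and then maximizes the completed-data likelihood.

```python
# Toy stochastic EM: simulate the "missing" labels, then maximize.
import numpy as np

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 300)])

mu = np.array([-0.5, 0.5])   # crude starting values (unit variances assumed known)
pi = np.array([0.5, 0.5])
for it in range(200):
    # S-step: draw the missing labels from their conditional distribution
    dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2)
    prob = dens / dens.sum(axis=1, keepdims=True)
    z = (rng.uniform(size=len(x)) < prob[:, 1]).astype(int)
    # M-step: maximize the completed-data likelihood
    for k in (0, 1):
        mu[k] = x[z == k].mean()
        pi[k] = (z == k).mean()
print("means:", np.round(mu, 2), "weights:", np.round(pi, 2))
```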

7.
Non-Gaussian processes of Ornstein–Uhlenbeck (OU) type offer the possibility of capturing important distributional deviations from Gaussianity and of flexible modelling of dependence structures. This paper develops this potential, drawing on and extending powerful results from probability theory for applications in statistical analysis. Their power is illustrated by a sustained application of OU processes within the context of finance and econometrics. We construct continuous time stochastic volatility models for financial assets where the volatility processes are superpositions of positive OU processes, and we study these models in relation to financial data and theory.
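
A minimal sketch of one such positive OU process, with illustrative parameter values: a Gamma-OU volatility process driven by a compound-Poisson subordinator, used here as the instantaneous variance of simulated returns.

```python
# Illustrative Gamma-OU volatility process and returns driven by it.
import numpy as np

rng = np.random.default_rng(4)
T, n = 250, 5000                      # horizon and simulation grid
dt = T / n
lam, nu, alpha = 0.05, 2.0, 4.0       # decay rate; Gamma(nu, alpha) marginal law

sigma2 = np.empty(n)
sigma2[0] = nu / alpha                # start at the stationary mean
for i in range(1, n):
    # exponential decay plus positive jumps of the subordinator over (t, t+dt]
    n_jumps = rng.poisson(lam * nu * dt)
    jumps = rng.exponential(1 / alpha, size=n_jumps).sum()
    sigma2[i] = np.exp(-lam * dt) * sigma2[i - 1] + jumps

returns = rng.normal(0.0, np.sqrt(sigma2 * dt))     # returns over each grid step
kurt = ((returns - returns.mean()) ** 4).mean() / returns.var() ** 2
print("mean of sigma^2 (theory %.2f): %.3f" % (nu / alpha, sigma2.mean()))
print("excess kurtosis of returns: %.2f" % (kurt - 3))
```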

8.
We describe studies in molecular profiling and biological pathway analysis that use sparse latent factor and regression models for microarray gene expression data. We discuss breast cancer applications and key aspects of the modeling and computational methodology. Our case studies aim to investigate and characterize heterogeneity of structure related to specific oncogenic pathways, as well as links between aggregate patterns in gene expression profiles and clinical biomarkers. Based on the metaphor of statistically derived "factors" as representing biological "subpathway" structure, we explore the decomposition of fitted sparse factor models into pathway subcomponents and investigate how these components overlay multiple aspects of known biological activity. Our methodology is based on sparsity modeling of multivariate regression, ANOVA, and latent factor models, as well as a class of models that combines all components. Hierarchical sparsity priors address questions of dimension reduction and multiple comparisons, as well as scalability of the methodology. The models include practically relevant non-Gaussian/nonparametric components for latent structure, underlying often quite complex non-Gaussianity in multivariate expression patterns. Model search and fitting are addressed through stochastic simulation and evolutionary stochastic search methods that are exemplified in the oncogenic pathway studies. Supplementary supporting material provides more details of the applications, as well as examples of the use of freely available software tools for implementing the methodology.

9.
The effectiveness and safety of implantable medical devices is a critical public health concern. We consider analysis of data in which it is of interest to compare devices but some individuals may be implanted with two or more devices. Our motivating example is based on orthopedic devices, where the same individual can be implanted with as many as two devices for the same joint but on different sides of the body, referred to as bilateral cases. Different methods of analysis are considered in a simulation study and a real data example, including both marginal and conditional survival models, fitting a single model or separate models for bilateral and non-bilateral cases, and combining estimates from the two separate models. The results of the simulations suggest that in the context of orthopedic devices, where implant failures are rare, models fit on both bilateral and non-bilateral cases simultaneously can be quite misleading, and that combined estimates from fitting two separate models performed better under homogeneity. A real data example illustrates the issues surrounding analysis of orthopedic device data with bilateral cases. Our findings suggest that research studies of orthopedic devices should at minimum consider fitting separate models to bilateral and non-bilateral cases.

10.
A common assumption in fitting panel data models is normality of the stochastic subject effects. This can be extremely restrictive, obscuring potential features of the true distribution. The objective of this article is to propose a modeling strategy, from a semi-parametric Bayesian perspective, for specifying a flexible distribution for the random effects in dynamic panel data models. This is addressed here by assuming a Dirichlet process mixture model, that is, by placing a Dirichlet process prior on the random-effects distribution. We address the role of initial conditions in dynamic processes, emphasizing joint modeling of the start-up and subsequent responses. We adopt Gibbs sampling techniques to approximate posterior estimates. These topics are illustrated by a simulation study and by testing hypothetical models in two empirical contexts drawn from economic studies. We use modified versions of information criteria to compare the fitted models.
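
As a rough illustration of the modeling strategy, the sketch below uses a truncated stick-breaking construction, with assumed concentration and base-measure values, to draw a flexible, non-normal random-effects distribution of the kind a Dirichlet process prior induces.

```python
# Truncated stick-breaking draw of a random-effects distribution (illustrative).
import numpy as np

rng = np.random.default_rng(9)
alpha, K, n_subjects = 1.0, 50, 200            # concentration, truncation level, panel size

# stick-breaking weights and normal base-measure atoms
v = rng.beta(1, alpha, size=K)
w = v * np.concatenate([[1.0], np.cumprod(1 - v[:-1])])
atoms = rng.normal(0, 2, size=K)

# subject-specific effects drawn from the (almost surely discrete) DP draw
cluster = rng.choice(K, size=n_subjects, p=w / w.sum())
b = atoms[cluster]
print("distinct random-effect values among subjects:", len(np.unique(b)))
```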

11.
In this paper we show that fully likelihood-based estimation and comparison of multivariate stochastic volatility (SV) models can be easily performed via the freely available Bayesian software package WinBUGS. Moreover, we introduce to the literature several new specifications that are natural extensions of certain existing models, one of which allows for time-varying correlation coefficients. Ideas are illustrated by fitting, to a bivariate time series of weekly exchange rates, nine multivariate SV models, including specifications with Granger causality in volatility, time-varying correlations, heavy-tailed error distributions, additive factor structure, and multiplicative factor structure. Empirical results suggest that the best specifications are those that allow for time-varying correlation coefficients.

12.
Product quality in the paper-making industry can be assessed by opacity of a linear trace through continuous production sheets, summarized in spectral form. We adopt a class of non-Gaussian stochastic models for continuous spatial variation to describe data of this type. The model has flexible covariance structure, physically interpretable parameters and allows several scales of variation for the underlying process. We derive the spectral properties of the model, and develop methods of parameter estimation based on maximum likelihood in the frequency domain. The methods are illustrated using sample data from a UK mill.
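
The sketch below illustrates frequency-domain (Whittle) estimation in general, using an AR(1) spectral density as a stand-in for the paper's multi-scale spatial model; all values are simulated.

```python
# Whittle (frequency-domain) maximum likelihood for a simulated AR(1) series.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n, phi_true = 2000, 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.normal()

# periodogram at the positive Fourier frequencies
freqs = 2 * np.pi * np.arange(1, n // 2) / n
I = np.abs(np.fft.fft(x)[1:n // 2]) ** 2 / (2 * np.pi * n)

def whittle_neglog(params):
    phi, log_sig2 = np.tanh(params[0]), params[1]       # keep |phi| < 1
    f = np.exp(log_sig2) / (2 * np.pi * (1 - 2 * phi * np.cos(freqs) + phi ** 2))
    return np.sum(np.log(f) + I / f)

fit = minimize(whittle_neglog, x0=[0.0, 0.0])
print("phi, sigma^2:", np.tanh(fit.x[0]).round(3), np.exp(fit.x[1]).round(3))
```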

13.
There has recently been growing interest in modeling and estimating alternative continuous time multivariate stochastic volatility models. We propose a continuous time fractionally integrated Wishart stochastic volatility (FIWSV) process, and derive the conditional Laplace transform of the FIWSV model in order to obtain a closed form expression of moments. A two-step procedure is used, namely estimating the parameter of fractional integration via the local Whittle estimator in the first step, and estimating the remaining parameters via the generalized method of moments in the second step. Monte Carlo results for the procedure show a reasonable performance in finite samples. The empirical results for the S&P 500 and FTSE 100 indexes show that the data favor the new FIWSV process rather than the one-factor and two-factor models of the Wishart autoregressive process for the covariance structure.
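
A minimal sketch of the first step only, run on simulated white noise (so the estimate should be near zero): the local Whittle estimator of the fractional-integration parameter d from low-frequency periodogram ordinates. The second, GMM step for the remaining FIWSV parameters is not shown, and the input series is a placeholder.

```python
# Local Whittle estimation of the memory parameter d (first step of the
# two-step procedure); the series here is a stand-in for, e.g., realized volatility.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(6)
x = rng.normal(size=4096)                       # placeholder series
n = len(x)
m = int(n ** 0.65)                              # bandwidth: number of low frequencies
freqs = 2 * np.pi * np.arange(1, m + 1) / n
I = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2 * np.pi * n)

def local_whittle(d):
    g = np.mean(freqs ** (2 * d) * I)
    return np.log(g) - 2 * d * np.mean(np.log(freqs))

fit = minimize_scalar(local_whittle, bounds=(-0.49, 0.99), method="bounded")
print("estimated d:", round(fit.x, 3))
```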

14.
In this paper new filters for removing an unspecified form of heteroscedasticity are proposed. The filters build on the assumption that the variance of a pre-whitened time series can be viewed as a latent stochastic process in its own right. This makes the filters flexible and useful in many situations. A simulation study shows that removing heteroscedasticity before fitting a model leads to efficiency gains and bias reductions when estimating the parameters of ARMA models. A real data study shows that pre-filtering can increase the forecasting precision of quarterly US GDP growth.
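
Since the filters themselves are not specified in the abstract, the sketch below shows only the generic pre-filtering idea: treat the local variance as a latent process, estimate it by exponentially weighted smoothing of squared values, and rescale the series before fitting an ARMA model. The smoothing constant and data are assumed for illustration.

```python
# Generic variance pre-filter (not the paper's specific filters).
import numpy as np

rng = np.random.default_rng(7)
n = 600
scale = np.exp(np.cumsum(rng.normal(0, 0.05, n)))   # slowly varying latent scale
x = scale * rng.normal(size=n)                      # heteroscedastic series

lam = 0.94                                          # smoothing constant (assumed)
var_hat = np.empty(n)
var_hat[0] = x[:50].var()
for t in range(1, n):
    var_hat[t] = lam * var_hat[t - 1] + (1 - lam) * x[t - 1] ** 2

x_filtered = x / np.sqrt(var_hat)
raw_halves = [round(float(h.var()), 2) for h in np.array_split(x, 2)]
flt_halves = [round(float(h.var()), 2) for h in np.array_split(x_filtered, 2)]
print("half-sample variances, raw:     ", raw_halves)
print("half-sample variances, filtered:", flt_halves)
```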

15.
Multilevel models have been widely applied to analyze data sets that present a hierarchical structure. In this paper we propose a generalization of normal multilevel models, named elliptical multilevel models. This proposal suggests the use of distributions in the elliptical class, thus encompassing all symmetric continuous distributions and including the normal distribution as a particular case. Elliptical distributions may have lighter or heavier tails than the normal one. For normal error models in the presence of outlying observations, heavy-tailed error models may be applied to accommodate such observations. In particular, we discuss aspects of elliptical multilevel models such as maximum likelihood estimation and residual analysis for assessing features related to the fit and the model assumptions. Finally, two motivating examples previously analyzed under normal multilevel models are reanalyzed under Student-t and power exponential multilevel models. Comparisons with the normal multilevel model are performed by using residual analysis.

16.
In interpreting the binary regression models often used in the analysis of dose-response data, it is common to introduce the idea of an underlying continuous tolerance distribution. Different choices of link function lead to different tolerance distributions. A useful way of comparing these alternatives is to compare the hazard functions or tail functions associated with each tolerance distribution. Tail functions can also be applied to give numerically preferable formulas for the iterative weights and the adjusted dependent variable in the fitting of binary regression models by the iteratively reweighted least-squares algorithm.
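
A minimal sketch of fitting a probit dose-response model by iteratively reweighted least squares, so the underlying tolerance distribution is normal; the weights and adjusted dependent variable use the standard density-based formulas rather than the numerically preferable tail-function forms mentioned above. Dose levels and counts are made up.

```python
# IRLS for a probit dose-response model (illustrative data).
import numpy as np
from scipy.stats import norm

dose = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
n_trials = np.array([40, 40, 40, 40, 40, 40])
n_resp = np.array([3, 8, 17, 26, 33, 38])
X = np.column_stack([np.ones_like(dose), dose])
y = n_resp / n_trials

beta = np.zeros(2)
for _ in range(25):
    eta = X @ beta
    mu = norm.cdf(eta)                            # tolerance distribution function
    dmu = norm.pdf(eta)                           # tolerance density
    w = n_trials * dmu ** 2 / (mu * (1 - mu))     # iterative weights
    z = eta + (y - mu) / dmu                      # adjusted dependent variable
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ z)    # weighted least-squares update
print("intercept, slope:", np.round(beta, 3))
```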

17.
A stochastic calculus for a family of continuous measure-valued Markov processes is developed. Such processes arise naturally in the construction of stochastic models of spatially distributed populations. The stochastic calculus is a tool whereby a class of density-dependent models can be studied in terms of the multiplicative measure diffusion process. In this paper the stochastic integral is introduced in the space-time setting and a Cameron-Martin-Girsanov theorem is established.

18.
Continuous time models with sampled data possess several advantages over conventional discrete time series and panel models (cf., e.g., the special issue of Stat. Neerl. 62(1), 2008). For example, data with unequal time intervals between the waves can be treated efficiently, since the parameters of the dynamical system model are not affected by the measurement process. The continuous-discrete state space model is a combination of continuous time dynamics (stochastic differential equations, SDEs) and discrete time noisy measurements.
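
A minimal sketch of the continuous-discrete idea for a scalar Ornstein-Uhlenbeck state observed with noise at unequally spaced times: each prediction step uses the exact discretization for that interval, so the continuous-time parameters are untouched by the sampling scheme. All parameter values are illustrative.

```python
# Continuous-discrete Kalman filter for dx = a*x dt + s*dW observed at
# irregular times with measurement noise (illustrative values).
import numpy as np

rng = np.random.default_rng(8)
a, s, r = -0.5, 1.0, 0.3                       # drift, diffusion, measurement sd
times = np.sort(rng.uniform(0, 30, size=60))   # irregular measurement times

def step(dt):
    # exact discretization of the OU transition over an interval of length dt
    phi = np.exp(a * dt)
    q = s**2 * (np.exp(2 * a * dt) - 1) / (2 * a)
    return phi, q

# simulate the latent state exactly at the measurement times
x = np.empty(len(times))
x[0] = rng.normal(0, np.sqrt(-s**2 / (2 * a)))
for i in range(1, len(times)):
    phi, q = step(times[i] - times[i - 1])
    x[i] = phi * x[i - 1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, r, size=len(x))

# filter: predict with interval-specific (phi, q), then update with y[i]
m, P = 0.0, -s**2 / (2 * a)
filtered = []
for i in range(len(times)):
    if i > 0:
        phi, q = step(times[i] - times[i - 1])
        m, P = phi * m, phi**2 * P + q
    k = P / (P + r**2)                         # Kalman gain
    m, P = m + k * (y[i] - m), (1 - k) * P
    filtered.append(m)
rmse = np.sqrt(np.mean((np.array(filtered) - x) ** 2))
print("RMSE of filtered state:", round(float(rmse), 3))
```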

19.
Classical inferential procedures induce conclusions from a set of data to a population of interest, accounting for the imprecision resulting from the stochastic component of the model. Less attention is devoted to the uncertainty arising from (unplanned) incompleteness in the data. Through the choice of an identifiable model for non-ignorable non-response, one narrows the possible data-generating mechanisms to the point where inference only suffers from imprecision. Some proposals have been made for assessing the sensitivity to these modelling assumptions; many are based on fitting several plausible but competing models. For example, we could assume that the missing data are missing at random in one model, and then fit an additional model where non-random missingness is assumed. On the basis of data from a Slovenian plebiscite, conducted in 1991 to prepare for independence, it is shown that such an ad hoc procedure may be misleading. We propose an approach which identifies and incorporates both sources of uncertainty in inference: imprecision due to finite sampling and ignorance due to incompleteness. A simple sensitivity analysis considers a finite set of plausible models. We take this idea one step further by considering more degrees of freedom than the data support. This produces sets of estimates (regions of ignorance) and sets of confidence regions (combined into regions of uncertainty).
