Similar Documents
20 similar documents found (search time: 93 ms)
1.
Multivariate Dispersion Models Generated From Gaussian Copula   (Cited by: 5; self-citations: 0; citations by others: 5)
In this paper a class of multivariate dispersion models generated from the multivariate Gaussian copula is presented. Being a multivariate extension of Jørgensen's (1987a) dispersion models, this class of multivariate models is parametrized by marginal position, dispersion and dependence parameters, producing a large variety of multivariate discrete and continuous models including the multivariate normal as a special case. Properties of the multivariate distributions are investigated, some of which are similar to those of the multivariate normal distribution, which makes these models potentially useful for the analysis of correlated non-normal data in a way analogous to that of multivariate normal data. As an example, we illustrate an application of the models to the regression analysis of longitudinal data, and establish an asymptotic relationship between the likelihood equation and the generalized estimating equation of Liang & Zeger (1986).
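The copula construction behind this abstract can be sketched in a few lines: draw a correlated Gaussian pair, map it to uniforms through the normal CDF, and then impose non-normal margins by inverting their CDFs. This is a minimal illustration, not the authors' implementation; the exponential margins and all names are illustrative.

```python
import math
import random

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gaussian_copula_exponential(n, rho, rate1=1.0, rate2=1.0, seed=0):
    """Draw n pairs whose dependence comes from a Gaussian copula with
    correlation rho and whose margins are exponential with the given rates."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        u1, u2 = phi(z1), phi(z2)          # correlated uniforms
        # invert the exponential CDF to impose the desired margins
        x1 = -math.log(1.0 - u1) / rate1
        x2 = -math.log(1.0 - u2) / rate2
        out.append((x1, x2))
    return out

sample = gaussian_copula_exponential(2000, rho=0.8)
```

With rho = 0.8 the resulting exponential pairs are positively dependent even though neither margin is normal, which is the point of the construction.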

2.
This paper presents a Markov chain Monte Carlo algorithm for a class of multivariate diffusion models with unobserved paths. This class is of high practical interest as it includes most diffusion-driven stochastic volatility models. The algorithm is based on a data augmentation scheme in which the paths are treated as missing data. However, unless these paths are transformed so that the dominating measure is independent of any parameters, the algorithm becomes reducible. The methodology developed in Roberts and Stramer [2001a. On inference for partially observed nonlinear diffusion models using the Metropolis–Hastings algorithm. Biometrika 88(3): 603–621] circumvents the problem for scalar diffusions. We extend this framework to the class of models of this paper by introducing an appropriate reparametrisation of the likelihood that can be used to construct an irreducible data augmentation scheme. Practical implementation issues are considered and the methodology is applied to simulated data from the Heston model.
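The Heston model mentioned at the end can be simulated, for example, with a simple Euler scheme; the following sketch generates the kind of simulated data the abstract refers to, but it is only the forward simulation step, not the data augmentation MCMC itself, and all parameter values and names are illustrative.

```python
import math
import random

def simulate_heston(s0=100.0, v0=0.04, mu=0.05, kappa=2.0, theta=0.04,
                    xi=0.3, rho=-0.7, T=1.0, n_steps=1000, seed=1):
    """Euler discretisation of the Heston stochastic volatility model:
        dS = mu*S dt + sqrt(v)*S dW1,   dv = kappa*(theta - v) dt + xi*sqrt(v) dW2,
    with corr(dW1, dW2) = rho.  The variance is floored at 0 (full truncation)."""
    rng = random.Random(seed)
    dt = T / n_steps
    s, v = s0, v0
    path = [(s, v)]
    for _ in range(n_steps):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        vp = max(v, 0.0)                    # keep sqrt well defined
        s += mu * s * dt + math.sqrt(vp) * s * math.sqrt(dt) * z1
        v += kappa * (theta - vp) * dt + xi * math.sqrt(vp) * math.sqrt(dt) * z2
        path.append((s, v))
    return path

path = simulate_heston()
```

The Euler scheme is the simplest choice and can let the variance dip below zero, which is why the drift and diffusion use the truncated value; more refined discretisations exist but are beyond this sketch.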

3.
Artur J. Lemonte. Statistics, 2013, 47(6): 1249–1265
The class of generalized linear models with dispersion covariates, which allows us to model the mean and dispersion parameters jointly, is a natural extension of the classical generalized linear models. In this paper, we derive the asymptotic expansions under a sequence of Pitman alternatives (up to order n−1/2) for the nonnull distribution functions of the likelihood ratio, Wald, Rao score and gradient statistics in this class of models. The asymptotic distributions of these statistics are obtained for testing a subset of regression parameters and for testing a subset of dispersion parameters. Based on these nonnull asymptotic expansions, the powers of all four tests, which are equivalent to first order, are compared. Furthermore, we conduct Monte Carlo simulations to compare the finite-sample performance of these tests in this class of models. We present empirical applications to two real data sets for illustrative purposes.

4.
This paper considers nonlinear regression analysis with a scalar response and multiple predictors. An unknown regression function is approximated by radial basis function models, with the coefficients estimated in the context of M-estimation. It is known that ordinary M-estimation leads to overfitting in nonlinear regression, and the purpose of this paper is to construct a smooth estimator. The proposed method is a two-step procedure. First, sufficient dimension reduction methods are applied to the response and the radial basis functions, transforming the large number of radial bases into a small number of linear combinations of them without loss of information. In the second step, a multiple linear regression model between the response and the transformed radial bases is assumed and ordinary M-estimation is applied, so the final estimator is again obtained as a linear combination of radial bases. The validity and asymptotic properties of the proposed method are explored. A simulation study and a data example are presented to confirm the behaviour of the proposed method.

5.
In this paper we investigate the application of stochastic complexity theory to classification problems. In particular, we define the notion of admissible models as a function of problem complexity, the number of data points N, and prior belief. This allows us to derive general bounds relating classifier complexity with data-dependent parameters such as sample size, class entropy and the optimal Bayes error rate. We discuss the application of these results to a variety of problems, including decision tree classifiers, Markov models for image segmentation, and feedforward multilayer neural network classifiers.

6.
In this paper we obtain asymptotic expansions, up to order n−1/2 and under a sequence of Pitman alternatives, for the nonnull distribution functions of the likelihood ratio, Wald, score and gradient test statistics in the class of symmetric linear regression models. This is a wide class of models that encompasses the t model and several other symmetric distributions with longer-than-normal tails. The asymptotic distributions of all four statistics are obtained for testing a subset of regression parameters. Furthermore, Monte Carlo simulations are presented to compare the finite-sample performance of these tests in this class of models. An empirical application to a real data set is considered for illustrative purposes.

7.
Marginalised models, also known as marginally specified models, have recently become a popular tool for the analysis of discrete longitudinal data. Being a relatively novel methodology, these models involve complex constraint equations and model-fitting algorithms, and there is a lack of publicly available software to fit them. In this paper, we propose a three-level marginalised model for the analysis of multivariate longitudinal binary outcomes. The implicit function theorem is used to solve the marginal constraint equations approximately but explicitly, and the probit link enables direct solutions to the convolution equations. Parameters are estimated by maximum likelihood via a Fisher scoring algorithm. A simulation study is conducted to examine the finite-sample properties of the estimator. We illustrate the model with an application to data from the Iowa Youth and Families Project. The R package pnmtrem is prepared to fit the model.
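The probit link that makes the convolution equations tractable in this abstract is simply the standard normal CDF applied to a linear predictor. A minimal sketch (function names are illustrative, not from the pnmtrem package):

```python
import math

def probit(eta):
    """Probit link: maps a linear predictor eta to a probability Phi(eta),
    where Phi is the standard normal CDF."""
    return 0.5 * (1.0 + math.erf(eta / math.sqrt(2.0)))

def loglik_bernoulli_probit(y, eta):
    """Log-likelihood contribution of a single binary outcome y in {0, 1}
    under a probit regression model with linear predictor eta."""
    p = probit(eta)
    return math.log(p) if y == 1 else math.log(1.0 - p)
```

Summing `loglik_bernoulli_probit` over observations gives the likelihood that a Fisher scoring algorithm, as used in the paper, would maximize.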

8.
In rating surveys, people are asked to evaluate objects, items, services, and so on by choosing among a list of ordered categories. In some circumstances, a subset of respondents may select a specific option simply to avoid a more demanding choice. In this context, we generalize a class of ordinal data models (called CUB models, and proven effective for fitting and interpretation) to take the possible presence of such a shelter choice into account. After a discussion of interpretative and inferential issues, the usefulness of the approach is checked against real case studies and by means of a simulation experiment. Some final remarks end the paper.
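Under the usual CUB formulation, the response over m ordered categories is a mixture of a shifted binomial (the "feeling" component) and a discrete uniform (the "uncertainty" component); the shelter extension adds a point mass at the shelter category. The sketch below follows that standard formulation as an illustration; the parametrisation details of the paper's own generalization may differ.

```python
import math

def cub_shelter_pmf(m, pi, xi, delta, c):
    """PMF over ordered categories 1..m of a CUB model with a shelter
    category c: with weight delta, a point mass at c; with weight 1-delta,
    the usual mixture of a shifted binomial (feeling parameter xi, weight pi)
    and a discrete uniform (weight 1-pi)."""
    pmf = []
    for r in range(1, m + 1):
        shifted_binom = (math.comb(m - 1, r - 1)
                         * (1.0 - xi) ** (r - 1) * xi ** (m - r))
        cub = pi * shifted_binom + (1.0 - pi) / m
        pmf.append(delta * (1.0 if r == c else 0.0) + (1.0 - delta) * cub)
    return pmf
```

Setting delta = 0 recovers the plain CUB model, which is how the shelter effect can be tested.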

9.
This paper considers the modelling of the process of corrective and condition-based preventive maintenance for complex repairable systems. In order to take into account the dependency between the two types of maintenance and the possibility of imperfect maintenance, Generalized Competing Risks models were introduced by Doyen and Gaudoin (J Appl Probab 43:825–839, 2006). In this paper, we study two classes of these models, the Generalized Random Sign and Generalized Alert Delay models. A Generalized Competing Risks model can be built as a generalization of a particular Usual Competing Risks model, either within a virtual age framework or not. The models' properties are studied and their parameterizations are discussed. Finally, simulation results and an application to real data are presented.

10.
Failure time data occur in many areas and under various forms of censoring, and many models have been proposed for their regression analysis, such as the proportional hazards model and the proportional odds model. Another choice discussed in the literature is a general class of semiparametric transformation models, which includes the two models above and many others as special cases. In this paper, we consider this class of models for a general type of censored data, case K informatively interval-censored data, for which no established inference procedure seems to exist. For this problem, we present a two-step estimation procedure that is quite flexible and easily implemented, and the consistency and asymptotic normality of the proposed estimators of the regression parameters are established. In addition, an extensive simulation study is conducted and suggests that the proposed procedure works well in practical situations. An application is also provided.

11.
Joint modeling of degradation and failure time data   (Cited by: 1; self-citations: 0; citations by others: 1)
This paper surveys some approaches to modelling the relationship between failure time data and covariate data such as internal degradation and external environmental processes. These models, which reflect the dependency between system state and system reliability, include threshold models and hazard-based models. In particular, we consider the class of degradation–threshold–shock (DTS) models, in which failure is due to the competing causes of degradation and trauma. For this class of reliability models we express the failure time in terms of degradation and covariates, compute the survival function of the resulting failure time, and derive the likelihood function for the joint observation of failure times and degradation data at discrete times. We then consider a special class of DTS models in which degradation is modelled by a process with stationary independent increments, related to external covariates through a random time scale, and extend this model class to repairable items by a marked point process approach. The proposed model class provides a rich conceptual framework for the study of degradation–failure issues.

12.
Point process models are a natural approach for modelling data that arise as point events. In the case of Poisson counts, they may be fitted easily as a weighted Poisson regression. Point processes, however, lack the notion of sample size. This is problematic for model selection, because various classical criteria such as the Bayesian information criterion (BIC) are a function of the sample size, n, and are derived in an asymptotic framework where n tends to infinity. In this paper, we develop an asymptotic result for Poisson point process models in which the observed number of point events, m, plays the role that the sample size does in the classical regression context. Following from this result, we derive a version of BIC for point process models and, when the model is fitted via penalised likelihood, conditions on the LASSO penalty that ensure consistency in estimation and the oracle property. We discuss the challenges of extending these results to the wider class of Gibbs models, of which the Poisson point process model is a special case.
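The central idea, using the number of observed events m in place of the sample size n in the BIC penalty, can be illustrated with the simplest case, a homogeneous Poisson process on a window of known area. This sketch is a plausible reading of that idea, not the paper's derivation; all names are illustrative.

```python
import math

def homogeneous_poisson_loglik(m, intensity, area):
    """Log-likelihood of a homogeneous Poisson point process with the given
    intensity, for m events observed in a window of the given area
    (up to an additive term not involving the intensity)."""
    return m * math.log(intensity) - intensity * area

def point_process_bic(loglik, n_params, m):
    """BIC-type criterion in which the observed number of events m plays
    the role that the sample size n plays in classical regression."""
    return -2.0 * loglik + n_params * math.log(m)

m, area = 120, 10.0
lam_hat = m / area                       # MLE of the intensity
bic = point_process_bic(homogeneous_poisson_loglik(m, lam_hat, area), 1, m)
```

Competing models (for example, an inhomogeneous intensity with extra parameters) would then be compared on this criterion, with log(m) supplying the penalty scale.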

13.
In this paper, we employ the generalized linear model (GLM), in the form ℓij = …, to decompose the symmetry model into the class of models discussed in Tomizawa (1992, Quasi-diagonals-parameter symmetry model for square contingency tables with ordered categories, Calcutta Statist. Assoc. Bull. 39: 53–61). In this formulation, the random component is the observed counts fij with an underlying Poisson distribution. This approach utilizes the non-standard log-linear model, and our focus in this paper therefore relates to models that are decompositions of the complete symmetry model, that is, models implied by the symmetry model. We develop the factor and regression variables required for implementing these models in SAS PROC GENMOD and SPSS PROC GENLOG. We apply this methodology to analyse three 4×4 contingency tables, one of which is the Japanese unaided distance vision data. Results obtained in this study are consistent with the extensive literature on the subject. We further extend our applications to the 6×6 Brazilian social mobility data and find that both the quasi linear diagonal-parameters symmetry (QLDPS) and the quasi 2-ratios parameter symmetry (Q2RPS) models fit the Brazilian data very well, the most parsimonious models being QLDPS and the quasi-conditional symmetry (QCS) model. The SAS and SPSS programs for implementing the models discussed in this paper are presented in Appendices A, B and C.

14.

In this paper we propose a class of skewed t link models for analysing binary response data with covariates. It is a class of asymmetric link models designed to improve the overall fit when commonly used symmetric links, such as the logit and probit links, do not provide the best fit available for a given binary response dataset. We develop the class of models by introducing a skewed t distribution for the underlying latent variable. For the analysis of the models, Bayesian and non-Bayesian methods are pursued using a Markov chain Monte Carlo (MCMC) sampling-based approach. The necessary theory for modelling and computation is provided. Finally, a simulation study and a real data example are used to illustrate the proposed methodology.

15.
Traditional classification is based on the assumption that the distribution of the indicator variable X within a class is homogeneous. However, when the data in a class come from a heterogeneous distribution, the likelihood ratio of the two classes is not unique. In this paper, we construct a classifier via an ambiguity criterion for the case where the distribution of X is heterogeneous within a single class. Separate historical data from each situation are used to estimate the corresponding thresholds, and the final boundary is chosen from the maximum and minimum of the thresholds over all situations. Our approach attains minimum ambiguity with high classification accuracy, allowing a precise decision. In addition, nonparametric estimation of the classification region and its theoretical properties are derived. A simulation study and a real data analysis are reported to demonstrate the effectiveness of our method.

16.
In this paper we outline a class of fully parametric proportional hazards models in which the baseline hazard is assumed to be a power transform of the time scale, corresponding to the assumption that survival times follow a Weibull distribution. Such a class of models allows for time-varying hazard rates but assumes a constant hazard ratio. We outline how Bayesian inference proceeds for this class of models using asymptotic approximations that require only the ability to maximize the joint log posterior density. We apply these models to a clinical trial assessing the efficacy of neutron therapy compared with conventional treatment for patients with tumours of the pelvic region. In this trial there was prior information about the log hazard ratio, both in terms of elicited clinical beliefs and the results of previous studies. Finally, we consider a number of extensions to this class of models, in particular the use of alternative baseline functions and the extension to multi-state data.
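The Weibull proportional hazards structure described above, a power-of-time baseline hazard multiplied by a covariate factor, is easy to write down concretely. A minimal sketch under one common Weibull parametrisation (the paper's exact parametrisation may differ, and all names are illustrative):

```python
import math

def weibull_ph_hazard(t, shape, scale, beta, x):
    """Hazard of a Weibull proportional hazards model: power-of-time
    baseline h0(t) = shape * scale * t**(shape - 1), times exp(x'beta)."""
    h0 = shape * scale * t ** (shape - 1)
    return h0 * math.exp(sum(b * xi for b, xi in zip(beta, x)))

def weibull_ph_survival(t, shape, scale, beta, x):
    """Matching survival function: S(t) = exp(-scale * t**shape * exp(x'beta)),
    obtained by integrating the hazard above."""
    lin = sum(b * xi for b, xi in zip(beta, x))
    return math.exp(-scale * t ** shape * math.exp(lin))
```

The defining property of the class is visible directly: the hazard varies with t through the baseline, but the ratio of hazards for two covariate vectors is exp((x1 - x2)'beta), constant in time.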

17.
Sun L, Su B. Lifetime Data Analysis, 2008, 14(3): 357–375
In this article, we propose a general class of accelerated means regression models for recurrent event data. The class includes the proportional means model, the accelerated failure time model and the accelerated rates model as special cases. The new model offers great flexibility in formulating the effects of covariates on the mean functions of counting processes while leaving the stochastic structure completely unspecified. For inference on the model parameters, estimating equation approaches are developed, and both large- and finite-sample properties of the proposed estimators are established. In addition, some graphical and numerical procedures are presented for model checking. An illustration with multiple-infection data from a clinical study on chronic granulomatous disease is also provided.

18.
The development of models and methods for cure rate estimation has recently burgeoned into an important subfield of survival analysis. Much of the literature focuses on the standard mixture model, but process-based models have recently been suggested. We focus on several models based on first passage times of Wiener processes; Whitmore and others have studied these models in a variety of contexts, and Lee and Whitmore (Stat Sci 21(4):501–513, 2006) give a comprehensive review of first hitting time models and briefly discuss their potential as cure rate models. In this paper, we study the Wiener process with negative drift as a possible cure rate model, but the resulting defective inverse Gaussian model is found to provide a poor fit in some cases. Several modifications that improve on the defective inverse Gaussian model are then suggested: the inverse Gaussian cure rate mixture model; a mixture of two inverse Gaussian models; incorporation of heterogeneity in the drift parameter; and the addition of a second absorbing barrier to the Wiener process, representing an immunity threshold. This class of process-based models is a useful alternative to the standard model and provides an improved fit on many of the datasets that we have studied. Implementation is facilitated by expectation-maximization (EM) algorithms and variants thereof, including the gradient EM algorithm. Parameter estimates for each of these EM algorithms are given, and the proposed models are applied to both real and simulated data, where they perform well.
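The "defective" part of the defective inverse Gaussian model has a simple closed form: for a Wiener process starting at x0 > 0 with a failure barrier at 0, the first-passage probability is less than one whenever the drift points away from the barrier, and the deficit is the cure fraction. The sketch below uses the standard Brownian first-passage formula under one sign convention (drift > 0 meaning away from the barrier); the paper's convention may differ, and the names are illustrative.

```python
import math

def hitting_probability(x0, drift, sigma):
    """P(the process x0 + drift*t + sigma*W(t) ever hits 0).
    If the drift points toward the barrier (drift <= 0) the hit is certain;
    otherwise the first-passage (inverse Gaussian) distribution is defective,
    with total mass exp(-2 * x0 * drift / sigma**2)."""
    if drift <= 0:
        return 1.0
    return math.exp(-2.0 * x0 * drift / sigma ** 2)

def cure_fraction(x0, drift, sigma):
    """Cured fraction: probability the process never reaches the barrier."""
    return 1.0 - hitting_probability(x0, drift, sigma)
```

Larger starting health x0 or stronger drift away from the barrier both increase the cure fraction, which is the behaviour a cure rate model needs.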

19.
Over the past decade or so, research on models for evaluating outstanding claims reserves has made considerable progress. Although this work includes discussions of the relationships among the various evaluation models, such as comparisons of different stochastic models and comparisons of reserve evaluation models based on the Bornhuetter–Ferguson (B-F) method, it is still not sufficiently comprehensive or systematic. This paper systematically classifies and reviews reserve evaluation models from several perspectives and, for the first time, builds a unified framework on the basis of the most fundamental chain ladder model. Within this framework, common reserve evaluation models are compared and analysed, some important relationships among them are revealed, and some suggestions are given for choosing a reserve evaluation model in practice.
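The chain ladder model that anchors the abstract's unified framework can be stated in a few lines: estimate volume-weighted development factors from a cumulative run-off triangle, project each accident year to ultimate, and sum the outstanding amounts. A minimal textbook sketch (illustrative names, basic chain ladder only):

```python
def chain_ladder_reserve(triangle):
    """Complete a cumulative run-off triangle with volume-weighted
    development factors and return the total outstanding reserve.
    `triangle` is a list of rows (accident years) of cumulative claims;
    later accident years have fewer observed development periods."""
    n = len(triangle)
    # volume-weighted development factors f_j = sum_i C[i][j+1] / sum_i C[i][j]
    factors = []
    for j in range(n - 1):
        num = sum(row[j + 1] for row in triangle if len(row) > j + 1)
        den = sum(row[j] for row in triangle if len(row) > j + 1)
        factors.append(num / den)
    reserve = 0.0
    for row in triangle:
        ultimate = row[-1]
        for j in range(len(row) - 1, n - 1):
            ultimate *= factors[j]        # project to ultimate
        reserve += ultimate - row[-1]     # outstanding for this accident year
    return reserve
```

The stochastic models surveyed in the paper (including B-F-based ones) can be read as refinements of exactly this projection step.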

20.
Bayesian hierarchical spatio-temporal models are becoming increasingly important owing to the growing availability of space-time data in various domains. In this paper we develop a user-friendly R package, spTDyn, for spatio-temporal modelling. It can be used to fit models with spatially varying and temporally dynamic coefficients. The former is used for modelling the spatially varying impact of explanatory variables on the response caused by spatial misalignment; this issue can arise when the covariates only vary over time, or when they are measured over a grid and hence do not match the locations of the point-level response data. The latter is used to examine the temporally varying impact of explanatory variables in space-time data due, for example, to seasonality or other time-varying effects. The spTDyn package uses Markov chain Monte Carlo sampling written in C, which makes computations highly efficient, and the interface is written in R, making these sophisticated modelling techniques easily accessible to statistical analysts. The models and software, and their advantages, are illustrated using temperature and ozone space-time data.
