Similar Articles
1.
GARCH models capture most of the stylized facts of financial time series and have been widely used to analyse discrete financial time series. In recent years, continuous-time models based on discrete GARCH models, such as the COGARCH model driven by Lévy processes, have also been proposed to deal with non-equally spaced observations. In this paper, we propose using the data cloning methodology to obtain estimators of GARCH and COGARCH model parameters. Data cloning uses a Bayesian approach to obtain approximate maximum likelihood estimators while avoiding numerical maximization of the pseudo-likelihood function. After a simulation study for both GARCH and COGARCH models using data cloning, we apply this technique to model the behaviour of several NASDAQ time series.
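The pseudo-likelihood that data cloning is designed to approximate can be illustrated with a plain GARCH(1,1) fit. The sketch below is a minimal Gaussian quasi-maximum-likelihood estimator, not the data cloning procedure itself; the parameter values, starting point, and bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_neg_loglik(params, r):
    """Negative Gaussian pseudo-log-likelihood of a GARCH(1,1):
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    omega, alpha, beta = params
    sigma2 = np.empty_like(r)
    sigma2[0] = np.var(r)                       # initialize at the sample variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r ** 2 / sigma2)

# simulate a GARCH(1,1) path with omega=0.1, alpha=0.1, beta=0.8
rng = np.random.default_rng(0)
n, omega, alpha, beta = 2000, 0.1, 0.1, 0.8
r = np.empty(n)
s2 = omega / (1 - alpha - beta)                 # stationary variance
for t in range(n):
    r[t] = rng.normal(0.0, np.sqrt(s2))
    s2 = omega + alpha * r[t] ** 2 + beta * s2

fit = minimize(garch11_neg_loglik, x0=[0.05, 0.05, 0.9], args=(r,),
               bounds=[(1e-6, None), (1e-6, 0.999), (1e-6, 0.999)])
```

This direct numerical maximization is precisely the step that the data cloning approach replaces with MCMC sampling of cloned data sets.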

2.
The maximum likelihood and maximum partial likelihood approaches to the proportional hazards model are unified. The purpose is to give a general approach to the analysis of the proportional hazards model, whether the baseline distribution is absolutely continuous, discrete, or a mixture. The advantage is that heavily tied data are analyzed with a discrete-time model, while data with no ties are analyzed with ordinary Cox regression. Data sets in between are treated by a compromise between the discrete-time model and Efron's approach to tied data in survival analysis, and the transitions between modes are automatic. A simulation study is conducted comparing the proposed approach to standard methods of handling ties. A recent suggestion that revives Breslow's approach to tied data is also discussed.

3.
The modelling of discrete time series, such as binary time series, is not as easy as that of continuous time series. This is because there is no unique way to model the correlation structure of repeated binary data. Some models may also produce a complicated correlation structure with narrow ranges for the correlations. In this paper, we consider a nonlinear dynamic binary time series model that provides a correlation structure which is easy to interpret, and whose correlations span the full −1 to 1 range. For the estimation of the parameters of this nonlinear model, we use a conditional generalized quasi-likelihood (CGQL) approach, which provides the same estimates as the well-known maximum likelihood approach. Furthermore, we consider a competitive linear dynamic binary time series model and examine the performance of the CGQL approach through a simulation study in estimating the parameters of this linear model. The effects of model mis-specification on estimation as well as forecasting are also examined through simulations.

4.
Count data with excess zeros are common in many biomedical and public health applications. The zero-inflated Poisson (ZIP) regression model has been widely used in practice to analyze such data. In this paper, we extend the classical ZIP regression framework to model count time series with excess zeros. A Markov regression model is presented and developed, and the partial likelihood is employed for statistical inference. Partial likelihood inference has been successfully applied in modeling time series where the conditional distribution of the response lies within the exponential family. Extending this approach to ZIP time series poses methodological and theoretical challenges, since the ZIP distribution is a mixture and therefore lies outside the exponential family. In the partial likelihood framework, we develop an EM algorithm to compute the maximum partial likelihood estimator (MPLE). We establish the asymptotic theory of the MPLE under mild regularity conditions and investigate its finite sample behavior in a simulation study. The performance of different partial-likelihood-based model selection criteria is compared in the presence of model misspecification. Finally, we present an epidemiological application to illustrate the proposed methodology.
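The EM idea behind the MPLE can be illustrated on the simpler i.i.d. zero-inflated Poisson case (no covariates or time dependence, unlike the Markov regression model of the paper). In the E-step each observed zero is attributed a posterior probability of being a structural zero; the M-step has closed-form updates. All names and simulated values below are illustrative.

```python
import numpy as np

def zip_em(y, n_iter=200):
    """EM for an i.i.d. zero-inflated Poisson:
    P(Y=0) = pi + (1-pi)*exp(-lam), P(Y=k) = (1-pi)*Poisson(k; lam), k >= 1."""
    pi, lam = 0.3, y[y > 0].mean()              # crude starting values
    for _ in range(n_iter):
        # E-step: posterior probability that each observed zero is structural
        z = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
        # M-step: update mixing weight and Poisson mean in closed form
        pi = z.mean()
        lam = ((1 - z) * y).sum() / (1 - z).sum()
    return pi, lam

rng = np.random.default_rng(1)
n, true_pi, true_lam = 5000, 0.25, 2.0
structural_zero = rng.random(n) < true_pi
y = np.where(structural_zero, 0, rng.poisson(true_lam, n))
pi_hat, lam_hat = zip_em(y)
```

The ZIP time series version replaces these closed-form updates with weighted Poisson and binomial partial-likelihood regressions at each M-step.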

5.
Joint modeling of degradation and failure time data
This paper surveys some approaches to model the relationship between failure time data and covariate data such as internal degradation and external environmental processes. These models, which reflect the dependency between system state and system reliability, include threshold models and hazard-based models. In particular, we consider the class of degradation–threshold–shock models (DTS models) in which failure is due to the competing causes of degradation and trauma. For this class of reliability models we express the failure time in terms of degradation and covariates. We compute the survival function of the resulting failure time and derive the likelihood function for the joint observation of failure times and degradation data at discrete times. We consider a special class of DTS models where degradation is modeled by a process with stationary independent increments and related to external covariates through a random time scale, and extend this model class to repairable items by a marked point process approach. The proposed model class provides a rich conceptual framework for the study of degradation–failure issues.

6.
Using a Yamaguchi‐type generalized gamma failure‐time mixture model, we analyse the data from a study of autologous and allogeneic bone marrow transplantation in the treatment of high‐risk refractory acute lymphoblastic leukaemia, focusing on the time to recurrence of disease. We develop maximum likelihood techniques for the joint estimation of the surviving fractions and the survivor functions. This includes an approximation to the derivative of the survivor function with respect to the shape parameter. We obtain the maximum likelihood estimates of the model parameters. We also compute the variance‐covariance matrix of the parameter estimators. The extended family of generalized gamma failure‐time mixture models is flexible enough to include many commonly used failure‐time distributions as special cases. Yet these models are not used in practice because of computational difficulties. We claim that we have overcome this problem. The proposed approximation to the derivative of the survivor function with respect to the shape parameter can be used in any statistical package. We also address the issue of lack of identifiability. We point out that there can be a substantial advantage to using the gamma failure‐time mixture models over nonparametric methods. Copyright © 2003 John Wiley & Sons, Ltd.

7.
Many economic duration variables are often available only up to intervals, and not up to exact points. However, continuous time duration models are conceptually superior to discrete ones. Hence, in duration analyses, one faces a situation with discrete data and a continuous model. This paper discusses (i) the asymptotic bias of a conventional approximation procedure in which a discrete duration is treated as an exact observation; and (ii) the efficiency of a correct maximum likelihood estimator which appropriately accounts for the discrete nature of the data.

9.
Sometimes it is appropriate to model survival and failure time data by a distribution with a non-monotonic failure rate. This may be desirable when the course of a disease is such that mortality reaches a peak after some finite period and then slowly declines. In this paper we study the Burr type XII model, whose failure rate exhibits this behavior. The locations of the critical points (at which the monotonicity changes) of both the failure rate and the mean residual life function (MRLF) are studied. A procedure is described for estimating these critical points. Necessary and sufficient conditions for the existence and uniqueness of the maximum likelihood estimates are provided, and it is shown that the conditions provided by Wingo (1993) are not sufficient. A data set pertaining to fibre failure strengths is analyzed and the maximum likelihood estimates of the critical points are obtained.
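For the Burr XII distribution with survival function S(x) = (1 + x^c)^(−k), the failure rate h(x) = c k x^(c−1) / (1 + x^c) is unimodal when c > 1, with its critical point at x* = (c − 1)^(1/c), a standard closed form that does not depend on k. A quick numerical check (parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def burr_hazard(x, c, k):
    """Failure rate of Burr XII with S(x) = (1 + x**c)**(-k)."""
    return c * k * x ** (c - 1) / (1 + x ** c)

c, k = 3.0, 0.5
x_star = (c - 1) ** (1 / c)      # closed-form location of the hazard peak
res = minimize_scalar(lambda x: -burr_hazard(x, c, k),
                      bounds=(1e-6, 10), method="bounded")
print(res.x, x_star)             # both ≈ 1.2599 for c = 3
```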

10.
We consider ordered bivariate gap times where data on the first gap time are unobservable. This study is motivated by an HIV/AIDS study in which the initial HIV contraction time is unavailable but the diagnosis times for HIV and AIDS are available. We are interested in the risk factors for the gap time between initial HIV contraction and HIV diagnosis, and for the gap time between HIV and AIDS diagnoses. The association between the two gap times is also of interest. Accordingly, the data analysis faces a two-fold complexity: data on the first gap time are completely missing, and the second gap time is subject to induced informative censoring due to dependence between the two gap times. We propose a modeling framework for regression analysis of bivariate gap times under this complexity. The estimating equations for the covariate effects on, as well as the association between, the two gap times are derived through maximum likelihood and suitable counting processes. Large sample properties of the resulting estimators are developed via martingale theory. Simulations are performed to examine the performance of the proposed analysis procedure. An application to data from the HIV/AIDS study mentioned above is reported for illustration.

11.
Motivated by a specific problem concerning the relationship between radar reflectance and rainfall intensity, the paper develops a space–time model for use in environmental monitoring applications. The model is cast as a high dimensional multivariate state space time series model, in which the cross-covariance structure is derived from the spatial context of the component series, in such a way that its interpretation is essentially independent of the particular set of spatial locations at which the data are recorded. We develop algorithms for estimating the parameters of the model by maximum likelihood, and for making spatial predictions of the radar calibration parameters by using real-time computations. We apply the model to data from a weather radar station in Lancashire, England, and demonstrate through empirical validation the predictive performance of the model.

12.
In this paper, we introduce a new family of discrete distributions and study its properties. It is shown that the new family is a generalization of the discrete Marshall-Olkin family of distributions. In particular, we study the generalized discrete Weibull distribution in detail. The discrete Marshall-Olkin Weibull distribution, exponentiated discrete Weibull distribution, discrete Weibull distribution, discrete Marshall-Olkin generalized exponential distribution, exponentiated geometric distribution, generalized discrete exponential distribution, discrete Marshall-Olkin Rayleigh distribution and exponentiated discrete Rayleigh distribution are sub-models of the generalized discrete Weibull distribution. We derive some basic distributional properties such as the probability generating function, moments, hazard rate and quantiles of the generalized discrete Weibull distribution. The hazard rate function can be decreasing, increasing, bathtub-shaped or upside-down bathtub-shaped. Estimation of the parameters is carried out using the maximum likelihood method. A real data set is analyzed to illustrate the suitability of the proposed model.

13.
In forestry, many processes of interest are binary and can be modeled using lifetime analysis. However, available data are often incomplete, being interval- and right-censored as well as left-truncated, which may lead to biased parameter estimates. While censoring can easily be considered in lifetime analysis, left truncation is more complicated when individual age at selection is unknown. In this study, we designed and tested a maximum likelihood estimator that deals with left truncation by taking advantage of prior knowledge about the time when the individuals enter the experiment. Whenever a model is available for predicting the time of selection, the distribution of the delayed entries can be obtained using Bayes' theorem. It is then possible to marginalize the likelihood function over the distribution of the delayed entries in the experiment to assess the joint distribution of time of selection and time to event. This estimator was tested with continuous and discrete Gompertz-distributed lifetimes. It was then compared with two other estimators: a standard one in which left truncation was not considered, and a second estimator that implemented an analytical correction. Our new estimator yielded unbiased parameter estimates with empirical coverage of confidence intervals close to their nominal value. The standard estimator led to an overestimation of the long-term probability of survival.
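A minimal sketch of a left-truncated likelihood, assuming (unlike the paper, which marginalizes over unknown Gompertz entry times) that entry times are known and lifetimes are exponential: each subject contributes f(t_i)/S(a_i), where a_i is the entry time, so ignoring the S(a_i) term is what biases the standard estimator. Names and simulated values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_loglik(lam, t, a, d):
    """Exponential log-likelihood under left truncation at a_i and right censoring:
    sum_i [ d_i*log(lam) - lam*t_i + lam*a_i ]; the +lam*a_i term divides out S(a_i)."""
    return -np.sum(d * np.log(lam) - lam * t + lam * a)

rng = np.random.default_rng(3)
n, lam_true = 5000, 0.5
a = rng.uniform(0, 1, n)                        # known entry (truncation) times
# by memorylessness, residual life beyond a known entry time is again Exp(lam)
t = a + rng.exponential(1 / lam_true, n)
d = np.ones(n)                                  # no censoring in this sketch
res = minimize_scalar(lambda l: neg_loglik(l, t, a, d),
                      bounds=(0.01, 5), method="bounded")
```

Here the MLE has the closed form n / sum(t − a); the numerical optimizer is used only to mirror the general estimation setting.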

14.
This paper deals with the prediction of time series with missing data using an alternative formulation of Holt's model with additive errors. This formulation simplifies both the calculation of maximum likelihood estimates of all the unknowns in the model and the calculation of point forecasts. In the presence of missing data, the EM algorithm is used to obtain maximum likelihood estimates and point forecasts. Based on this application we propose a leave-one-out algorithm for the data transformation selection problem, which allows us to analyse Holt's model with multiplicative errors. Some numerical results show the performance of these procedures for obtaining robust forecasts.
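A minimal sketch of Holt's linear trend recursions with additive errors (without the EM handling of missing data described in the paper); the smoothing constants here are fixed illustrative values rather than maximum likelihood estimates.

```python
def holt_forecast(y, alpha, beta, h=1):
    """Holt's linear trend recursions; returns the h-step-ahead point forecast."""
    level, trend = y[0], y[1] - y[0]            # initialize from the first two points
    for t in range(1, len(y)):
        prev_level = level
        level = alpha * y[t] + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + h * trend

y = [10.0, 12.0, 14.0, 16.0, 18.0]              # a noiseless linear trend
print(holt_forecast(y, alpha=0.5, beta=0.5, h=1))  # → 20.0
```

With a missing y[t], the EM approach effectively replaces the observation by its one-step-ahead prediction before running these same recursions.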

15.
A power transformation regression model is considered for exponentially distributed time to failure data with right censoring. Procedures for estimation of parameters by maximum likelihood and assessment of goodness of model fit are described and illustrated.
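A sketch of the right-censored exponential likelihood under a log-linear rate; the power transformation parameter of the paper is omitted here for brevity and could be appended to the parameter vector. All names and simulated values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(beta, t, d, x):
    """Right-censored exponential regression with rate lam_i = exp(b0 + b1*x_i):
    log-likelihood sum_i [ d_i*log(lam_i) - lam_i*t_i ]."""
    lam = np.exp(beta[0] + beta[1] * x)
    return -np.sum(d * np.log(lam) - lam * t)

rng = np.random.default_rng(2)
n, b0, b1 = 4000, 0.2, 0.7
x = rng.uniform(0, 2, n)
failure = rng.exponential(1 / np.exp(b0 + b1 * x))
censor = rng.exponential(2.0, n)                # independent right censoring
t = np.minimum(failure, censor)
d = (failure <= censor).astype(float)           # 1 = failure observed, 0 = censored
fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(t, d, x))
```

Censored observations enter only through the survival term −lam_i*t_i, which is why the likelihood stays this simple for the exponential model.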

16.
Information from multiple informants is frequently used to assess psychopathology. We consider marginal regression models with multiple informants as discrete predictors and a time to event outcome. We fit these models to data from the Stirling County Study; specifically, the models predict mortality from self report of psychiatric disorders and also predict mortality from physician report of psychiatric disorders. Previously, Horton et al. found little relationship between self and physician reports of psychopathology, but that the relationship of self report of psychopathology with mortality was similar to that of physician report of psychopathology with mortality. Generalized estimating equations (GEE) have been used to fit marginal models with multiple informant covariates; here we develop a maximum likelihood (ML) approach and show how it relates to the GEE approach. In a simple setting using a saturated model, the ML approach can be constructed to provide estimates that match those found using GEE. We extend the ML technique to consider multiple informant predictors with missingness and compare the method to using inverse probability weighted (IPW) GEE. Our simulation study illustrates that IPW GEE loses little efficiency compared with ML in the presence of monotone missingness. Our example data exhibit non-monotone missingness; in this case, ML offers a modest decrease in variance compared with IPW GEE, particularly for estimating covariates in the marginal models. In more general settings, e.g., categorical predictors and piecewise exponential models, the likelihood parameters from the ML technique do not have the same interpretation as the GEE. Thus, the GEE is recommended for fitting marginal models for its flexibility, ease of interpretation and comparable efficiency to ML in the presence of missing data.

17.
Longitudinal count responses are often analyzed with a Poisson mixed model. However, under overdispersion, these responses are better described by a negative binomial mixed model. Estimators of the corresponding parameters are usually obtained by the maximum likelihood method. To investigate the stability of these maximum likelihood estimators, we propose a methodology of sensitivity analysis using local influence. As count responses are discrete, we are unable to perturb them with the standard scheme used in local influence. We therefore consider an appropriate perturbation of the means of these responses. The proposed methodology is useful in a range of applications, but particularly when medical data are analyzed, because the removal of influential cases can change the statistical results and hence the medical decision. We study the performance of the methodology using Monte Carlo simulation and apply it to real medical data related to epilepsy and headache. All of these numerical studies show the good performance and potential of the proposed methodology.

19.
This paper is mainly concerned with modelling data from degradation sample paths over time. It uses a general growth curve model with Box‐Cox transformation, random effects and ARMA(p, q) dependence to analyse a set of such data. A maximum likelihood estimation procedure for the proposed model is derived and future values are predicted, based on the best linear unbiased prediction. The paper compares the proposed model with a nonlinear degradation model from a prediction point of view. Forecasts of failure times with various data lengths in the sample are also compared.

20.
In this paper, we discuss some theoretical results and properties of the discrete Weibull distribution, which was introduced by Nakagawa and Osaki [The discrete Weibull distribution. IEEE Trans Reliab. 1975;24:300–301]. We study the monotonicity of the probability mass, survival and hazard functions. Moreover, reliability, moments, p-quantiles, entropies and order statistics are also studied. We consider likelihood-based methods to estimate the model parameters based on complete and censored samples, and to derive confidence intervals. We also consider two additional methods to estimate the model parameters. The uniqueness of the maximum likelihood estimate of one of the parameters that index the discrete Weibull model is discussed. Numerical evaluation of the considered model is performed by Monte Carlo simulations. For illustrative purposes, two real data sets are analyzed.
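The Nakagawa–Osaki discrete Weibull has survival function P(X ≥ x) = q^(x^β) for x = 0, 1, 2, …, so the pmf telescopes and the hazard has a simple closed form. A quick numerical check of these identities (parameter values are illustrative):

```python
import numpy as np

def dw_pmf(x, q, beta):
    """Nakagawa-Osaki discrete Weibull pmf: P(X = x) = q**(x**beta) - q**((x+1)**beta)."""
    return q ** (x ** beta) - q ** ((x + 1) ** beta)

def dw_hazard(x, q, beta):
    """Discrete hazard P(X = x | X >= x) = 1 - q**((x+1)**beta - x**beta)."""
    return 1.0 - q ** ((x + 1) ** beta - x ** beta)

q, beta = 0.9, 1.5
x = np.arange(200)
p = dw_pmf(x, q, beta)
print(p.sum())   # the pmf telescopes to 1 - q**(200**beta), numerically ~1
```

For β > 1 the exponent (x+1)^β − x^β is increasing in x, so the hazard is increasing; β = 1 recovers the geometric distribution with constant hazard 1 − q.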
