1.
Research on methods for studying time-to-event data (survival analysis) has been extensive in recent years. The basic model in use today represents the hazard function for an individual through a proportional hazards model (Cox, 1972). Typically, it is assumed that a covariate's effect on the hazard function is constant throughout the course of the study. In this paper we propose a method to allow for possible deviations from the standard Cox model, by allowing the effect of a covariate to vary over time. This method is based on a dynamic linear model. We present our method in terms of a Bayesian hierarchical model. We fit the model to the data using Markov chain Monte Carlo methods. Finally, we illustrate the approach with several examples. This revised version was published online in July 2006 with corrections to the Cover Date.
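For the standard constant-coefficient Cox model that the paper generalizes, the partial likelihood can be maximized directly. A minimal numpy/scipy sketch on simulated, uncensored data with one binary covariate (not the paper's Bayesian dynamic-coefficient method):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n = 500
x = rng.binomial(1, 0.5, n).astype(float)        # binary covariate
beta_true = 1.0
# Exponential event times with hazard exp(beta * x) (exponential baseline hazard)
t = rng.exponential(scale=1.0 / np.exp(beta_true * x))

order = np.argsort(t)
x_sorted = x[order]

def neg_log_partial_likelihood(beta):
    eta = beta * x_sorted
    # Risk set at the i-th smallest event time = subjects i, ..., n-1
    log_risk = np.log(np.cumsum(np.exp(eta)[::-1])[::-1])
    return -(eta - log_risk).sum()

res = minimize_scalar(neg_log_partial_likelihood, bounds=(-5.0, 5.0), method="bounded")
beta_hat = res.x   # should be near beta_true
```

The reverse-cumsum trick computes every risk-set sum in one pass instead of an O(n^2) loop.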

2.
This paper introduces an extension of the Markov switching GARCH model in which the volatility in each state is a convex combination of two different GARCH components with time-varying weights. This dynamic structure allows the model to capture different kinds of shocks. The asymptotic behavior of the second moment is investigated and an appropriate upper bound for it is derived. A dynamic method for estimating the parameters is proposed, using the Bayesian approach via a Gibbs sampling algorithm. Finally, we illustrate the efficiency of the model by simulation and by considering two different sets of empirical financial data. We show that this model provides much better volatility forecasts than the Markov switching GARCH model.

3.
This paper concerns a method of estimating variance components in a random-effects linear model. It is mainly a resampling method and relies on the jackknife principle. The derived estimators are presented as least squares estimators in an appropriate linear model, and one of them appears as a MINQUE (Minimum Norm Quadratic Unbiased Estimation) estimator. Our resampling method is illustrated by an example given by C. R. Rao [7], and some optimal properties of our estimator are derived for this example. In the last part, this method is used to derive an estimator of the variance components in a random-effects linear model when one of the components is assumed to be known.
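The jackknife principle the method relies on can be illustrated generically. This is the plain delete-one jackknife for an arbitrary statistic, not the paper's variance-component estimator:

```python
import numpy as np

def jackknife(stat, data):
    """Delete-one jackknife: bias-corrected estimate and variance estimate of `stat`."""
    n = len(data)
    loo = np.array([stat(np.delete(data, i)) for i in range(n)])  # leave-one-out values
    theta = stat(data)
    bias = (n - 1) * (loo.mean() - theta)
    var = (n - 1) / n * ((loo - loo.mean()) ** 2).sum()
    return theta - bias, var

rng = np.random.default_rng(1)
y = rng.normal(loc=10.0, scale=2.0, size=100)
est, var = jackknife(np.mean, y)
# For the sample mean the jackknife variance equals s^2 / n exactly
```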

4.
We consider the semiparametric regression model introduced by Li (1991) and add to this model some linear constraints on the slope parameters. These constraints can be identifiability conditions, or they may carry additional information on the slope parameters. Using a geometric argument, we develop a method to estimate the slope parameters. This link-free and distribution-free method splits into two steps: the first is a Sliced Inverse Regression (SIR); Canonical Analysis is then used at the second step to transform the SIR estimates so that they satisfy the constraints. We establish √n-consistency and obtain the asymptotic distribution of the estimates.

This estimation method is applied to the general sample selection model which is very useful in Econometrics. A simulation study shows that the method performs well in the example considered.
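The first (SIR) step can be sketched on simulated data. This is generic sliced inverse regression with an unknown monotone link; the constraint-projection second step is omitted, so it is not the authors' full procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, H = 2000, 5, 10                          # sample size, dimension, number of slices
X = rng.normal(size=(n, p))
b = np.array([1.0, -1.0, 0.0, 0.0, 0.0])
b /= np.linalg.norm(b)
y = np.exp(X @ b) + 0.1 * rng.normal(size=n)   # link treated as unknown

# Standardize X, slice on y, eigen-decompose the covariance of the slice means
mu = X.mean(axis=0)
L = np.linalg.cholesky(np.linalg.inv(np.cov(X.T)))
Z = (X - mu) @ L                               # whitened predictors
slices = np.array_split(np.argsort(y), H)
M = sum(len(s) / n * np.outer(Z[s].mean(axis=0), Z[s].mean(axis=0)) for s in slices)
_, V = np.linalg.eigh(M)
beta_hat = L @ V[:, -1]                        # leading direction, back on the X scale
beta_hat /= np.linalg.norm(beta_hat)
# |cos angle| between beta_hat and b should be close to 1
```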

5.
This paper proposes an identification method of ARIMA models for seasonal time series using an intermediary model and a filtering method. This method is found to be useful when conventional methods, such as using sample ACF and PACF, fail to reveal a clear-cut model. This filtering identification method is also found to be particularly effective when a seasonal time series is subjected to calendar variations, moving-holiday effects, and interventions.
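Conventional identification starts from the sample ACF mentioned above; a minimal sketch computing it for a simulated AR(1) series, whose lag-1 autocorrelation should be close to the AR coefficient:

```python
import numpy as np

rng = np.random.default_rng(4)
n, phi = 5000, 0.7
e = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):               # AR(1): x_t = phi * x_{t-1} + e_t
    x[t] = phi * x[t - 1] + e[t]

def sample_acf(x, lag):
    """Sample autocorrelation at the given lag."""
    xc = x - x.mean()
    return (xc[:-lag] @ xc[lag:]) / (xc @ xc)

r1 = sample_acf(x, 1)   # should be near phi
```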

6.
The partially linear model is an important class of semiparametric regression models: because it contains both a parametric part and a nonparametric part, it is more flexible and has greater explanatory power than the conventional linear model. This paper studies statistical inference for fixed-effects partially linear panel data models with locally stationary covariates. We first propose a two-stage estimation method to obtain estimators of the unknown parameters and the nonparametric function in the model, and establish the asymptotic properties of the estimators; we then use an invariance principle to construct uniform confidence bands for the nonparametric function; finally, the effectiveness of the method is verified through simulation studies and a real-data example.

7.
A method for robustness in linear models is to assume a mixture of standard and outlier observations, with a different error variance for each class. For generalised linear models (GLMs) the mixture-model approach is more difficult, as the error variance for many distributions has a fixed relationship to the mean. The model is extended to GLMs by redefining the classes: the standard class is a standard GLM, and the outlier class is an overdispersed GLM obtained by including a random-effect term in the linear predictor. The advantages of this method are that it can be extended to any model with a linear predictor, and that outlier observations can be easily identified. Using simulation, the model is compared to an M-estimator and found to have improved bias and coverage. The method is demonstrated on three examples.

8.
This paper considers a nonlinear quantile model with change-points. The quantile estimation method, which includes the median model as a particular case, is more robust than other traditional methods when the model errors contain outliers. Under relatively weak assumptions, the convergence rate and asymptotic distribution of the change-point and regression parameter estimators are obtained. A numerical study using Monte Carlo simulations shows the performance of the proposed method for nonlinear models with change-points.

9.
The bivariate plane is symmetrically partitioned into fine rectangular regions, and a symmetric uniform association model is used to represent the resulting discretized bivariate normal probabilities. A new algorithm is developed by utilizing a quadrature and the above association model to approximate the diagonal probabilities. The off-diagonal probabilities are then approximated using the model. This method is an alternative to Wang's (1987) approach, computationally advantageous and relatively easy to extend to higher dimensions. Bivariate and trivariate normal probabilities approximated by our method are observed to agree very closely with the corresponding known results.
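The idea of recovering bivariate normal probabilities from a fine rectangular discretization can be illustrated directly. This is a plain midpoint-rule sum over the cells, not the paper's association-model algorithm:

```python
import numpy as np
from scipy.stats import multivariate_normal

rho = 0.5
mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

# Fine rectangular partition of [-5, 5]^2; midpoint rule on each cell
h = 0.025
centers = np.arange(-5 + h / 2, 5, h)
xx, yy = np.meshgrid(centers, centers)
cell_prob = mvn.pdf(np.dstack([xx, yy])) * h * h   # probability mass per cell

# P(X <= 1, Y <= 1): sum the cells lying in the lower-left region
approx = cell_prob[(xx < 1) & (yy < 1)].sum()
exact = mvn.cdf([1.0, 1.0])
```

The mass outside [-5, 5]^2 is negligible at this truncation, so the cell sum tracks the exact rectangle probability closely.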

10.
The generalized Birnbaum–Saunders distribution pertains to a class of lifetime models including both lighter and heavier tailed distributions. This model adapts well to lifetime data, even when outliers exist, and has other good theoretical properties and application perspectives. However, statistical inference tools may not exist in closed form for this model. Hence, simulation and numerical studies are needed, which require a random number generator. Three different ways to generate observations from this model are considered here. These generators are compared by utilizing a goodness-of-fit procedure as well as their effectiveness in predicting the true parameter values by using Monte Carlo simulations. This goodness-of-fit procedure may also be used as an estimation method. The quality of this estimation method is studied here. Finally, through a real data set, the generalized and classical Birnbaum–Saunders models are compared by using this estimation method.
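One standard generator for the classical Birnbaum–Saunders distribution uses its normal representation; the generalized version replaces the normal kernel with another symmetric law, which is omitted in this sketch:

```python
import numpy as np

def rbirnbaum_saunders(alpha, beta, size, rng):
    """Classical Birnbaum-Saunders variates via the normal representation:
    T = beta * (alpha*Z/2 + sqrt((alpha*Z/2)^2 + 1))^2,  Z ~ N(0, 1)."""
    z = rng.standard_normal(size)
    w = alpha * z / 2.0
    return beta * (w + np.sqrt(w ** 2 + 1.0)) ** 2

rng = np.random.default_rng(5)
t = rbirnbaum_saunders(alpha=0.5, beta=2.0, size=200_000, rng=rng)
# E[T] = beta * (1 + alpha^2 / 2), i.e. 2.25 for these parameters
```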

11.
The theoretical price of a financial option is given by the expectation of its discounted expiry-time payoff. The computation of this expectation depends on the density of the value of the underlying instrument at expiry time. This density depends both on the parametric model assumed for the behaviour of the underlying and on the values of parameters within the model, such as volatility. However, neither the model nor the parameter values are known. Common practice when pricing options is to assume a specific model, such as geometric Brownian motion, and to use point estimates of the model parameters, thereby precisely defining a density function. We explicitly acknowledge the uncertainty of model and parameters by constructing the predictive density of the underlying as an average of model predictive densities, weighted by each model's posterior probability. A model's predictive density is constructed by integrating its transition density function over the posterior distribution of its parameters. This is an extension of Bayesian model averaging. Sampling importance-resampling and Monte Carlo algorithms implement the computation. The advantage of this method is that rather than falsely assuming the model and parameter values are known, inherent ignorance is acknowledged and dealt with in a mathematically logical manner, which utilises all information from past and current observations to generate and update option prices. Moreover, point estimates for parameters are unnecessary. We use this method to price a European call option on a share index.
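The effect of averaging prices over parameter uncertainty can be shown with a toy version: Black-Scholes call prices averaged over stand-in volatility draws. The uniform draws below are purely illustrative; the paper averages over genuine posterior samples and over models:

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, sigma, T):
    """Black-Scholes European call price under geometric Brownian motion."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(6)
# Stand-in "posterior" draws of volatility (illustrative, not an MCMC output)
sigma_draws = rng.uniform(0.15, 0.35, size=2000)
price = bs_call(100.0, 100.0, 0.02, sigma_draws, 1.0).mean()
```

Because the call price is increasing in volatility, the averaged price lies between the prices at the smallest and largest draws, rather than equalling the price at a single point estimate.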

12.
This article presents a mixture of three-parameter Weibull distributions to model wind speed data. The parameters are estimated by the maximum likelihood (ML) method, in which the maximization problem is treated as a nonlinear program with only inequality constraints and is solved numerically by the interior-point method. By applying the model to four lattice-point wind speed sequences extracted from National Centers for Environmental Prediction (NCEP) reanalysis data, it is observed that the proposed mixture of three-parameter Weibull distributions provides a better fit than existing Weibull models for the wind speed data under study.
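A simplified version of the fit can be sketched with a two-parameter (zero-location) Weibull mixture and a bounded quasi-Newton optimizer standing in for the interior-point method; the data here are synthetic:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(7)
# Synthetic "wind speeds": a mixture of two Weibull components
v = np.concatenate([weibull_min.rvs(2.0, scale=5.0, size=600, random_state=rng),
                    weibull_min.rvs(3.5, scale=10.0, size=400, random_state=rng)])

def neg_log_lik(theta):
    w, k1, s1, k2, s2 = theta         # mixing weight, shapes, scales
    mix = (w * weibull_min.pdf(v, k1, scale=s1)
           + (1 - w) * weibull_min.pdf(v, k2, scale=s2))
    return -np.log(mix + 1e-300).sum()

x0 = [0.5, 2.0, 4.0, 3.0, 9.0]
# Inequality constraints enter as box bounds on the parameters
res = minimize(neg_log_lik, x0=x0, method="L-BFGS-B",
               bounds=[(0.01, 0.99), (0.1, 20), (0.1, 50), (0.1, 20), (0.1, 50)])
```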

13.
The goal of this paper is to compare several widely used Bayesian model selection methods in practical model selection problems, highlight their differences and give recommendations about the preferred approaches. We focus on variable subset selection for regression and classification and perform several numerical experiments using both simulated and real-world data. The results show that optimizing a utility estimate such as the cross-validation (CV) score is liable to find overfitted models, due to the relatively high variance of the utility estimates when data are scarce. This can also lead to substantial selection-induced bias and optimism in the performance evaluation of the selected model. From a predictive viewpoint, the best results are obtained by accounting for model uncertainty with a full encompassing model, such as the Bayesian model averaging solution over the candidate models. If the encompassing model is too complex, it can be robustly simplified by the projection method, in which the information in the full model is projected onto the submodels. This approach is substantially less prone to overfitting than selection based on the CV score. Overall, the projection method also appears to outperform the maximum a posteriori model and the selection of the most probable variables. The study also demonstrates that model selection can greatly benefit from using cross-validation outside the search process, both for guiding the choice of model size and for assessing the predictive performance of the finally selected model.

14.
The envelope method produces efficient estimation in multivariate linear regression and is widely applied in biology, psychology, and economics. This paper estimates parameters through a model averaging methodology and improves the predictive ability of envelope models. We propose a frequentist model averaging method that minimizes a cross-validation criterion. When all the candidate models are misspecified, the proposed model averaging estimator is shown to be asymptotically optimal. When correct candidate models exist, the coefficient estimator is shown to be consistent, and the sum of the weights assigned to the correct models converges to one in probability. Simulations and an empirical application demonstrate the effectiveness of the proposed method.

15.
Experimental design and Taguchi's parameter design are widely employed by industry to optimize processes and products. However, censored data are often observed in product lifetime testing during the experiments. After implementing a repetitious experiment with type II censored data, the censored observations are usually estimated by establishing a complex statistical model. However, using the incomplete data to fit a model may not estimate the censored data accurately. Moreover, the model-fitting process is complicated for a practitioner who has only limited statistical training. This study proposes a less complex approach to analyzing censored data, using the least squares estimation method and Torres's analysis of unreplicated factorials with possible abnormalities. This study also presents an effective method, based on least squares estimation, for analyzing censored data from Taguchi's parameter design. Finally, examples are given to illustrate the effectiveness of the proposed methods.

16.
A method of regularized discriminant analysis for discrete data, denoted DRDA, is proposed. This method is related to the regularized discriminant analysis conceived by Friedman (1989) in a Gaussian framework for continuous data. Here, we are concerned with discrete data and consider the classification problem using the multinomial distribution. DRDA is designed for the small-sample, high-dimensional setting. The method occupies a median position between multinomial discrimination, the first-order independence model, and kernel discrimination. DRDA is characterized by two parameters, the values of which are calculated by minimizing a sample-based estimate of future misclassification risk by cross-validation. The first parameter is a complexity parameter, which provides class-conditional probabilities as a convex combination of those derived from the full multinomial model and the first-order independence model. The second parameter is a smoothing parameter associated with the discrete kernel of Aitchison and Aitken (1976). The optimal complexity parameter is calculated first; then, holding this parameter fixed, the optimal smoothing parameter is determined. A modified approach, in which the smoothing parameter is chosen first, is discussed. The efficiency of the method is compared with that of other classical methods through application to data.

17.
In this paper, a test is derived to assess the validity of heteroscedastic nonlinear regression models by a non‐parametric cosine regression method. For order selection, the paper proposes a data‐driven method that uses the parametric null model optimal order. This method yields a test that is asymptotically normally distributed under the null hypothesis and is consistent against any fixed alternative. Simulation studies that test the lack of fit of a generalized linear model are conducted to compare the performance of the proposed test with that of an existing non‐parametric kernel test. A dataset of esterase levels is used to demonstrate the proposed method in practice.

18.
The kernel function method developed by Yamato (1971) to estimate a probability density function is essentially a way of smoothing the empirical distribution function. This paper shows how this method can be generalized to estimate signals in a semimartingale model. A recursive convolution-smoothed estimate is used to obtain an absolutely continuous estimate of an absolutely continuous signal of a semimartingale model. It is also shown that the estimator obtained has a smaller asymptotic variance than the one obtained in Thavaneswaran (1988).
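Yamato's kernel smoothing of the empirical distribution function can be sketched for i.i.d. data (the semimartingale extension needs much more machinery); the bandwidth `h` below is an illustrative choice:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
x = rng.standard_normal(5000)

def kernel_cdf(t, data, h=0.2):
    """Smoothed empirical distribution: average of Gaussian kernel CDFs
    centred at the data points (reduces to the EDF as h -> 0)."""
    return norm.cdf((t - data) / h).mean()

f0 = kernel_cdf(0.0, x)   # should be close to the true N(0,1) CDF at 0, i.e. 0.5
```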

19.
Shi, Wang, Murray-Smith and Titterington (Biometrics 63:714–723, 2007) proposed a Gaussian process functional regression (GPFR) model to model functional response curves with a set of functional covariates. Two main problems are addressed by their method: modelling a nonlinear and nonparametric regression relationship, and modelling the covariance structure and mean structure simultaneously. The method gives very good results for curve fitting and prediction but side-steps the problem of heterogeneity. In this paper we present a new method for modelling functional data with ‘spatially’ indexed data, i.e., where the heterogeneity depends on factors such as region and individual patient information. For data collected from different sources, we assume that the data corresponding to each curve (or batch) follow a Gaussian process functional regression model as a lower-level model, and introduce an allocation model for the latent indicator variables as a higher-level model. This higher-level model depends on the information related to each batch. The method takes advantage of both GPFR and mixture models and therefore improves the accuracy of predictions. The mixture model has also been used for curve clustering, but here the focus is on clustering the functional relationships between the response curve and the covariates, i.e. the clustering is based on the surface shape of the functional response against the set of functional covariates. The model is examined on simulated data and real data.

20.
Small area estimation is one of the active topics in survey sampling, and its mainstream line of development is model-based small area estimation. Such methods, however, depend on model assumptions, and if the assumed model is wrong the estimates perform poorly. This paper therefore uses a logarithmic transformation together with the sampling design weights to obtain robust estimators of the target variable in small areas, and shows through simulated examples that the log-transformation-based method is a robust and effective approach to small area estimation.
