Similar Articles
1.
In this article, we consider Bayesian inference procedures to test for a unit root in Stochastic Volatility (SV) models. Unit-root tests for the persistence parameter of SV models, based on the Bayes Factor (BF), have recently been introduced in the literature. In contrast, we propose a flexible class of priors that is non-informative over the entire support of the persistence parameter (including the non-stationarity region). In addition, we show that our model fitting procedure is computationally efficient (using the software WinBUGS). Finally, we show that our proposed test procedures have good frequentist properties, achieving high statistical power while maintaining low total error rates. We illustrate these features of our method by extensive simulation studies, followed by an application to a real data set on exchange rates.
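The abstract does not detail how the BF is computed; one standard Monte Carlo route to a Bayes factor for a point null such as a unit root (phi = 1 for the persistence parameter) is the Savage-Dickey density ratio. The Python sketch below is a generic illustration of that swapped-in technique, not the authors' construction; the KDE posterior-density estimate, the toy draws, and the prior density value at 1 are all assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def savage_dickey_bf(phi_draws, prior_density_at_1):
    """BF for H0: phi = 1 (unit root) against the encompassing model, via the
    Savage-Dickey ratio: posterior density at phi = 1 over prior density there."""
    posterior_at_1 = gaussian_kde(phi_draws)(1.0)[0]  # KDE of the MCMC draws
    return posterior_at_1 / prior_density_at_1

# toy usage with fake posterior draws of the persistence parameter
rng = np.random.default_rng(0)
phi_draws = rng.normal(0.97, 0.02, 10_000)  # stand-in for real MCMC output
print(savage_dickey_bf(phi_draws, prior_density_at_1=0.5))
```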

2.

The simplex regression model is often employed to analyze continuous proportion data. In this paper, we relax the assumption of a constant dispersion parameter (homogeneity) to allow a varying dispersion parameter (heterogeneity) in the simplex regression model, and use B-splines to approximate the unknown smooth function within the Bayesian framework. A hybrid algorithm combining the block Gibbs sampler and the Metropolis-Hastings algorithm is presented for sampling from the posterior distribution. Procedures for computing model comparison criteria such as the conditional predictive ordinate statistic, the deviance information criterion, and the averaged mean squared error are presented. We also develop a computationally feasible Bayesian case-deletion influence measure based on the Kullback-Leibler divergence. Several simulation studies and a real example illustrate the proposed methodologies.
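As a minimal sketch of such a hybrid sampler (an exact full-conditional draw for one block, a random-walk Metropolis-Hastings update for another), the toy example below targets a plain normal model rather than the heteroscedastic simplex regression posterior; the model, flat priors, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.5, 100)  # toy data
n, ybar = len(y), y.mean()

def log_post(mu, log_sig):
    """Log posterior for (mu, log sigma) under flat priors (toy model)."""
    return -n * log_sig - np.sum((y - mu) ** 2) / (2 * np.exp(2 * log_sig))

mu, log_sig, draws = ybar, 0.0, []
for _ in range(5000):
    # Gibbs block: mu | sigma, y is exactly N(ybar, sigma^2 / n) here
    mu = rng.normal(ybar, np.exp(log_sig) / np.sqrt(n))
    # Metropolis-Hastings block: random-walk proposal for log sigma
    prop = log_sig + 0.2 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(mu, prop) - log_post(mu, log_sig):
        log_sig = prop
    draws.append((mu, np.exp(log_sig)))
print(np.mean(draws, axis=0))  # posterior means of (mu, sigma)
```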

3.
In this paper, the generalized varying-coefficient single-index model is discussed based on penalized likelihood. All the unknown functions are fitted by penalized splines. The estimates of the unknown parameters and the unknown coefficient functions are obtained, and the estimation approach is fast and computationally stable. Under some mild conditions, the consistency and asymptotic normality of the resulting estimators are established. Two simulation studies illustrate the performance of the estimates. An application of the model to Hong Kong environmental data further demonstrates the potential of the proposed modelling procedures.
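The penalized-spline building block itself is compact: a B-spline design matrix plus a difference penalty on the coefficients (a P-spline), fitted by penalized least squares. The sketch below shows only this Gaussian-response ingredient, not the paper's generalized varying-coefficient single-index estimator; the knot count, penalty order, and smoothing parameter are assumptions, and new evaluation points must stay inside the fitted range.

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_fit(x, y, n_knots=20, degree=3, lam=1.0):
    """P-spline: minimize ||y - B a||^2 + lam ||D2 a||^2 with a second-order
    difference penalty D2 on the B-spline coefficients a."""
    inner = np.linspace(x.min(), x.max(), n_knots)
    t = np.r_[[inner[0]] * degree, inner, [inner[-1]] * degree]  # clamped knots
    B = BSpline.design_matrix(x, t, degree).toarray()
    D = np.diff(np.eye(B.shape[1]), n=2, axis=0)  # second differences
    a = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    return lambda xn: BSpline.design_matrix(np.asarray(xn), t, degree).toarray() @ a

# toy usage
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 200)
fhat = pspline_fit(x, y, lam=0.5)
```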

4.
The main interest of prediction intervals lies in the results of a future sample from a previously sampled population. In this article, we develop procedures for prediction intervals that contain all of a fixed number of future observations for general balanced linear random models. Two methods based on the concept of a generalized pivotal quantity (GPQ) and one based on ANOVA estimators are presented. A simulation study using the balanced one-way random model is conducted to evaluate the proposed methods. It is shown that one of the two GPQ-based methods and the ANOVA-based method are computationally efficient and successfully maintain the simulated coverage probabilities close to the nominal confidence level; hence, they are recommended for practical use. An example illustrates the applicability of the recommended methods.
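A Monte Carlo GPQ construction for the balanced one-way random model is easy to sketch for the simpler task of covering a single future observation (the paper's intervals must contain all of a fixed number of future observations, which changes the final step). Below, standard pivots for the between- and within-group sums of squares yield GPQs for the variance components and the mean; the number of draws and the seed are arbitrary.

```python
import numpy as np

def gpq_prediction_interval(y, alpha=0.05, B=20_000, seed=0):
    """Monte Carlo GPQ interval for ONE future observation from the balanced
    one-way random model y_ij = mu + a_i + e_ij; y is an (a, n) array."""
    rng = np.random.default_rng(seed)
    a, n = y.shape
    gbar = y.mean(axis=1)
    ssa = n * np.sum((gbar - y.mean()) ** 2)      # between-group SS
    sse = np.sum((y - gbar[:, None]) ** 2)        # within-group SS
    u1 = rng.chisquare(a - 1, B)                  # pivot for SSA
    u2 = rng.chisquare(a * (n - 1), B)            # pivot for SSE
    z, z2 = rng.standard_normal((2, B))
    r_tau = ssa / u1                              # GPQ for n*sig_a^2 + sig_e^2
    r_e = sse / u2                                # GPQ for sig_e^2
    r_a = np.maximum((r_tau - r_e) / n, 0.0)      # GPQ for sig_a^2
    r_mu = y.mean() - z * np.sqrt(r_tau / (a * n))
    r_future = r_mu + z2 * np.sqrt(r_a + r_e)     # GPQ for a new observation
    return np.quantile(r_future, [alpha / 2, 1 - alpha / 2])
```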

5.
Tests are proposed for validating the hypothesis that a partial linear regression model adequately describes the structure of a given data set. The test statistics are formulated following the approach of Fourier-type conditional expectations first suggested by Bierens [Consistent model specification tests. J Econometr. 1982;20:105–134]. The proposed procedures are computationally convenient and, under fairly mild conditions, lead to consistent tests. Corresponding bootstrap versions are compared with alternative procedures for a wide selection of different estimators of the underlying partial linear model.
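The Fourier-type moment behind such tests can be shown directly: under a correct model, E[e · exp(i t x̃)] = 0 for almost every t, where x̃ is a bounded one-to-one transform of the covariate. The sketch below computes the corresponding sup statistic over a fixed grid for a scalar covariate with an arctan transform; these choices are illustrative, and in practice critical values come from the bootstrap versions mentioned in the abstract.

```python
import numpy as np

def bierens_stat(resid, x, t_grid):
    """sup over t of |n^{-1/2} * sum_i e_i * exp(i * t * arctan(x_i))|,
    a Fourier-type (Bierens) specification statistic for a scalar covariate."""
    n = len(resid)
    xt = np.arctan(x)  # bounded one-to-one transform of the covariate
    vals = np.abs([resid @ np.exp(1j * t * xt) for t in t_grid]) / np.sqrt(n)
    return vals.max()
```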

6.
Ordinary differential equations are arguably the most popular and useful mathematical tool for describing physical and biological processes in the real world. Often, these processes are observed with errors, in which case the most natural way to model such data is via regression where the mean function is defined by an ordinary differential equation believed to describe the underlying process. These regression-based dynamical models are called differential equation models. Parameter inference from differential equation models poses computational challenges, mainly because analytic solutions to most differential equations are not available. In this paper, we propose an approximation method for obtaining the posterior distribution of parameters in differential equation models. The approximation proceeds in two steps. In the first step, the solution of a differential equation is approximated by the general one-step method, a class of numerical methods for ordinary differential equations that includes the Euler and Runge-Kutta procedures; in the second step, nuisance parameters are marginalized using a Laplace approximation. The proposed Laplace approximated posterior provides a computationally fast alternative to the full Bayesian computational scheme (such as Markov chain Monte Carlo) and produces more accurate and stable estimators than popular smoothing methods (called collocation methods) based on frequentist procedures. As theoretical support for the proposed method, we prove that the Laplace approximated posterior converges to the actual posterior under certain conditions and analyze the relation between the order of the numerical error and that of its Laplace approximation. The proposed method is tested on simulated data sets and compared with existing methods.
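Step one of the two-step scheme can be sketched concretely: replace the exact ODE solution in the regression mean by a one-step numerical solution (Euler here, the simplest member of the class) and evaluate the resulting Gaussian likelihood. The Laplace marginalization of step two is not shown, and the logistic-growth example, grid, and noise level are assumptions.

```python
import numpy as np

def euler_solve(f, x0, theta, t_grid):
    """Approximate the ODE solution x'(t) = f(x, t, theta) by Euler steps,
    the simplest instance of a general one-step method."""
    x = np.empty(len(t_grid))
    x[0] = x0
    for k in range(len(t_grid) - 1):
        h = t_grid[k + 1] - t_grid[k]
        x[k + 1] = x[k] + h * f(x[k], t_grid[k], theta)
    return x

def log_lik(theta, y, t_grid, x0, sigma, f):
    """Gaussian log-likelihood with the Euler-approximated mean function."""
    mu = euler_solve(f, x0, theta, t_grid)
    return -0.5 * np.sum(((y - mu) / sigma) ** 2) - len(y) * np.log(sigma)

# toy example: logistic growth x' = theta * x * (1 - x)
f = lambda x, t, th: th * x * (1 - x)
t = np.linspace(0, 10, 51)
rng = np.random.default_rng(0)
y = euler_solve(f, 0.1, 0.8, t) + rng.normal(0, 0.05, t.size)
print(log_lik(0.8, y, t, 0.1, 0.05, f))
```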

7.
The use of general linear modeling (GLM) procedures based on log-rank scores is proposed for the analysis of survival data and compared to standard survival analysis procedures. For the comparison of two groups, this approach performed similarly to the traditional log-rank test. In the case of more complicated designs without ties in the survival times, the approach was only marginally less powerful than tests from proportional hazards models, and clearly less powerful than a likelihood ratio test for a fully parametric model; with ties in the survival times, however, the approach proved more powerful than tests from Cox's semi-parametric proportional hazards procedure. The method appears to provide a reasonably powerful alternative for the analysis of survival data: it is easily used in complicated study designs, avoids (semi-)parametric assumptions, and is computationally simple and inexpensive to employ.
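A minimal sketch of the idea: compute log-rank scores in their martingale form (the event indicator minus the Nelson-Aalen cumulative hazard at the observed time) and regress them on design variables by ordinary least squares. The no-ties simplification and the two-group toy data are assumptions, and the single OLS fit below stands in for the richer GLM machinery of the paper.

```python
import numpy as np

def logrank_scores(time, event):
    """Log-rank scores delta_i - H_hat(t_i), with H_hat the Nelson-Aalen
    cumulative hazard (assumes no tied event times, for brevity)."""
    order = np.argsort(time)
    d = event[order]
    n = len(d)
    at_risk = n - np.arange(n)       # risk-set size at each sorted time
    na = np.cumsum(d / at_risk)      # Nelson-Aalen estimate
    scores = np.empty(n)
    scores[order] = d - na           # back to the original ordering
    return scores

# toy two-group comparison via OLS on the scores
rng = np.random.default_rng(1)
g = np.repeat([0.0, 1.0], 30)
time = rng.exponential(scale=np.where(g == 0, 1.0, 1.5))
event = (rng.uniform(size=60) < 0.8).astype(float)   # ~20% censoring
X = np.column_stack([np.ones(60), g])
beta = np.linalg.lstsq(X, logrank_scores(time, event), rcond=None)[0]
print(beta)  # the slope plays the role of the log-rank group contrast
```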

8.
Variable selection in cluster analysis is important yet challenging. It can be achieved by regularization methods, which realize a trade-off between clustering accuracy and the number of selected variables by using a lasso-type penalty. However, the calibration of the penalty term is open to criticism. Model selection methods are an efficient alternative, yet they require a difficult optimization of an information criterion that involves combinatorial problems: most of the optimization algorithms are based on a suboptimal procedure (e.g. a stepwise method), and they are often computationally expensive because they need multiple calls to EM algorithms. Here we propose to use a new information criterion based on the integrated complete-data likelihood. It does not require the maximum likelihood estimate, and its maximization is simple and computationally efficient. The original contribution of our approach is to perform model selection without requiring any parameter estimation; parameter inference is then needed only for the single selected model. This approach is used for variable selection in a Gaussian mixture model with conditional independence assumed. Numerical experiments on simulated and benchmark datasets show that the proposed method often outperforms two classical approaches for variable selection. The proposed approach is implemented in the R package VarSelLCM, available on CRAN.

9.
Usual fitting methods for the nested error linear regression model are known to be very sensitive to the effect of even a single outlier. Robust approaches for the unbalanced nested error model with proven robustness and efficiency properties, such as M-estimators, are typically obtained through iterative algorithms. These algorithms are often computationally intensive and require robust estimates of the same parameters to initialize them, but so far no robust starting values have been proposed for this model. This paper proposes computationally fast robust estimators for the variance components under an unbalanced nested error model, based on a simple robustification of the fitting-of-constants method (Henderson method III). These estimators can be used as starting values for other iterative methods. Our simulations show that they are highly robust to various types of contamination of different magnitudes.

10.
Hea-Jung Kim & Taeyoung Roh. Statistics, 2013, 47(5):1082–1111
In regression analysis, a sample selection scheme often applies to the response variable, resulting in observations that are missing not at random. In this case, a regression analysis using only the selected cases would lead to biased results. This paper proposes a Bayesian methodology to correct this bias based on a semiparametric Bernstein polynomial regression model that incorporates the sample selection scheme into a stochastic monotone trend constraint, variable selection, and robustness against departures from the normality assumption. We present the basic theoretical properties of the proposed model, including its stochastic representation, quantification of the sample selection bias, a hierarchical model specification that handles the stochastic monotone trend constraint in the nonparametric component, simple bias-corrected estimation, and variable selection for the linear components. We then develop computationally feasible Markov chain Monte Carlo methods for semiparametric Bernstein polynomial functions with stochastically constrained parameter estimation and variable selection procedures. We demonstrate the finite-sample performance of the proposed model compared to existing methods using simulation studies and illustrate its use in two real data applications.
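The monotone Bernstein component can be imposed by a simple reparameterization: a fit sum_k a_k b_{k,N}(x) is nondecreasing whenever the coefficients a_k are nondecreasing, i.e. a_k = c + sum_{j<=k} d_j with d_j >= 0. The least-squares sketch below (solved with NNLS after splitting the free constant c) illustrates only this building block, not the paper's Bayesian hierarchical model with variable selection; x is assumed scaled to [0, 1] and the degree N is arbitrary.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.special import comb

def bernstein_basis(x, N):
    """(len(x), N + 1) Bernstein design matrix on [0, 1]."""
    k = np.arange(N + 1)
    return comb(N, k) * x[:, None] ** k * (1 - x[:, None]) ** (N - k)

def monotone_bernstein_fit(x, y, N=10):
    """Nondecreasing Bernstein regression via a_k = c + cumsum(d), d >= 0."""
    B = bernstein_basis(x, N)
    cum = np.cumsum(B[:, ::-1], axis=1)[:, ::-1]   # cum[:, j] = sum_{k>=j} b_k
    ones = np.ones_like(x)
    M = np.column_stack([ones, -ones, cum[:, 1:]]) # split c = c_plus - c_minus
    coef, _ = nnls(M, y)                           # all coefficients >= 0
    c, d = coef[0] - coef[1], coef[2:]
    a = c + np.concatenate([[0.0], np.cumsum(d)])  # nondecreasing by design
    return lambda xn: bernstein_basis(np.asarray(xn), N) @ a
```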

11.
B. Jones & J. Wang. Statistics and Computing, 1999, 9(3):209–218
We consider some computational issues that arise when searching for optimal designs for pharmacokinetic (PK) studies. Special factors that distinguish such designs are: (i) repeated observations are taken from each subject, and the observations are usually described by a nonlinear mixed model (NLMM); (ii) design criteria depend on the model fitting procedure; (iii) in addition to providing efficient parameter estimates, the design must also permit model checking; (iv) in practice there are several design constraints; (v) the design criteria are computationally expensive to evaluate and often require numerical integration; and (vi) local optimisation procedures may fail to converge or get trapped at local optima. We review current optimal design algorithms and explore the possibility of using global optimisation procedures, which we use to find some optimal designs. For multi-purpose designs we suggest two surrogate design criteria for model checking and illustrate their use.
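Point (vi) is what motivates global optimisers. As a hedged illustration, the sketch below uses differential evolution to choose sampling times maximizing the log-determinant of a local Fisher information (local D-optimality) for an assumed one-compartment mean function A·exp(−k·t); the model, parameter values, number of sampling times, and time range are all assumptions, and realistic NLMM design criteria are far more expensive to evaluate.

```python
import numpy as np
from scipy.optimize import differential_evolution

def neg_log_det_info(times, A=10.0, k=0.3):
    """Negative log-det of the Fisher information for the mean mu(t) = A*exp(-k*t)
    at assumed parameter values (local D-optimality criterion)."""
    t = np.asarray(times)
    g = np.column_stack([np.exp(-k * t), -A * t * np.exp(-k * t)])  # d mu / d(A, k)
    sign, logdet = np.linalg.slogdet(g.T @ g)
    return np.inf if sign <= 0 else -logdet

# global search for 4 sampling times in [0, 24] hours
res = differential_evolution(neg_log_det_info, bounds=[(0.0, 24.0)] * 4, seed=0)
print(np.sort(res.x), -res.fun)
```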

12.
Prediction under model uncertainty is an important and difficult issue. Traditional prediction methods (such as pretesting) are based on model selection followed by prediction in the selected model, but the reported prediction and its variance ignore the uncertainty introduced by the selection procedure. This article proposes a weighted-average least squares (WALS) prediction procedure that is not conditional on the selected model. Taking both model and error uncertainty into account, we also propose an appropriate estimator of the variance of the WALS predictor. Correlations among the random errors are explicitly allowed. Compared to other prediction averaging methods, the WALS predictor has important theoretical and computational advantages. Simulation studies show that the WALS predictor generally produces lower mean squared prediction errors than its competitors, and that the proposed estimator for the prediction variance performs particularly well when model uncertainty increases.
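The WALS weights rest on a particular semi-orthogonal transformation that is not reproduced here. As a stand-in that conveys the "predict without conditioning on one selected model" idea, the sketch below averages predictions from all intercept-retaining OLS submodels with smoothed-AIC weights; the weighting scheme is an assumption, not the WALS construction.

```python
import itertools
import numpy as np

def averaged_prediction(X, y, x_new):
    """Average OLS predictions over all submodels keeping the intercept
    (column 0), weighted by smoothed AIC."""
    n, p = X.shape
    aic, pred = [], []
    for r in range(p):  # r non-intercept covariates in the submodel
        for cols in itertools.combinations(range(1, p), r):
            idx = [0, *cols]
            beta = np.linalg.lstsq(X[:, idx], y, rcond=None)[0]
            rss = np.sum((y - X[:, idx] @ beta) ** 2)
            aic.append(n * np.log(rss / n) + 2 * len(idx))
            pred.append(x_new[idx] @ beta)
    aic, pred = np.array(aic), np.array(pred)
    w = np.exp(-0.5 * (aic - aic.min()))           # smoothed-AIC weights
    return float((w / w.sum()) @ pred)
```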

13.
The weaknesses of established model selection procedures based on hypothesis testing and similar criteria are discussed, and an alternative based on synthetic (composite) estimation is proposed. It is developed for the problem of prediction in ordinary regression, and its properties are explored by simulation for simple regression. Extensions to a general setting are described and an example with multiple regression is analysed. Arguments are presented against using a selected model for any inferences.

14.
The problem of comparing the linear calibration equations of several measuring methods, each designed to measure the same characteristic on a common group of individuals, is discussed. We consider the factor analysis version of the model and propose to estimate the model parameters using the EM algorithm. The equations that define the M-step are simple to implement and computationally inexpensive, requiring no additional maximization procedures. The derivation of the complete-data log-likelihood function makes it possible to obtain the expected and observed information matrices in closed form for any number p (> 3) of instruments, upon which large-sample inference on the parameters can be based. Re-analysis of two actual data sets is presented.

15.
Semiparametric accelerated failure time (AFT) models directly relate expected failure times to covariates and are a useful alternative to models that work on the hazard function or the survival function. For case-cohort data, much less development has been done with AFT models. In addition to covariates being missing for controls outside the subcohort, the challenges of AFT model inference with a full cohort remain: the regression parameter estimator is hard to compute because the most widely used rank-based estimating equations are not smooth, and its variance depends on the unspecified error distribution, so most methods rely on computationally intensive bootstrap to estimate it. We propose fast rank-based inference procedures for AFT models, applying recent methodological advances to the context of case-cohort data. Parameters are estimated with an induced smoothing approach that smooths the estimating functions and facilitates the numerical solution. Variance estimators are obtained through efficient resampling methods for nonsmooth estimating functions that avoid a full-blown bootstrap. Simulation studies suggest that the recommended procedure provides fast and valid inference among several competing procedures. Application to a tumor study demonstrates the utility of the proposed method in routine data analysis.

16.
In many medical studies, event times are recorded in an interval-censored (IC) format. For example, in numerous cancer trials, time to disease relapse is only known to have occurred between two consecutive clinic visits. Many existing modeling methods in the IC context are computationally intensive and usually require numerous assumptions that can be unrealistic or difficult to verify in practice. We propose a flexible and computationally efficient modeling strategy based on jackknife pseudo-observations (POs). The POs, obtained from nonparametric estimators of the survival function, are employed as outcomes in an equivalent yet simpler regression model that produces consistent covariate effect estimates. Hence, instead of operating in the IC context, the problem is translated into the realm of generalized linear models, where numerous options are available. Outcome transformations via appropriate link functions lead to familiar modeling contexts such as proportional hazards and proportional odds, and the methods developed are not limited to these settings. Simulation studies show that the proposed methods produce virtually unbiased covariate effect estimates, even for moderate sample sizes. An example from the International Breast Cancer Study Group (IBCSG) Trial VI further illustrates the practical advantages of this new approach.
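The PO construction is short: with S_hat a nonparametric survival estimator and a landmark time t0, subject i's pseudo-observation is n·S_hat(t0) − (n−1)·S_hat^(−i)(t0). The sketch below uses a Kaplan-Meier estimator for right-censored data for brevity; the paper's interval-censored setting would substitute an IC nonparametric estimator (e.g. Turnbull's), and t0 is an assumed choice.

```python
import numpy as np

def km_surv(time, event, t0):
    """Kaplan-Meier survival probability at t0 (right-censored data)."""
    s = 1.0
    for t in np.sort(np.unique(time[event == 1])):
        if t > t0:
            break
        s *= 1.0 - np.sum((time == t) & (event == 1)) / np.sum(time >= t)
    return s

def jackknife_pseudo_obs(time, event, t0):
    """PO_i = n * S_hat(t0) - (n - 1) * S_hat^(-i)(t0) for each subject i."""
    n = len(time)
    s_full = km_surv(time, event, t0)
    po = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i                   # leave subject i out
        po[i] = n * s_full - (n - 1) * km_surv(time[keep], event[keep], t0)
    return po
```

The resulting POs can then serve as outcomes in a standard GLM; a complementary log-log link, for instance, yields a proportional-hazards-type interpretation of the covariate effects.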

17.
In the present paper we examine finite mixtures of multivariate Poisson distributions as an alternative class of models for multivariate count data. The proposed models allow for both overdispersion in the marginal distributions and negative correlation, while remaining computationally tractable using standard ideas from finite mixture modelling. An EM-type algorithm for maximum likelihood (ML) estimation of the parameters is developed, the identifiability of this class of mixtures is proved, and properties of the ML estimators are derived. A real data application concerning model-based clustering of multivariate counts of different types of crime illustrates the practical potential of the proposed class of models.
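The EM iteration alternates posterior component probabilities (E-step) with weighted moment updates (M-step). The sketch below shows the shape of the algorithm for a univariate Poisson mixture only; the paper's multivariate Poisson mixture requires an augmented E-step over latent common-shock variables, which is not reproduced here.

```python
import numpy as np
from scipy.stats import poisson

def em_poisson_mixture(y, K, n_iter=200, seed=0):
    """EM for a K-component univariate Poisson mixture."""
    rng = np.random.default_rng(seed)
    lam = np.sort(rng.uniform(y.min() + 0.5, y.max() + 0.5, K))  # rate starts
    pi = np.full(K, 1.0 / K)                                     # mixing weights
    for _ in range(n_iter):
        # E-step: posterior component probabilities (log-space for stability)
        logp = np.log(pi) + poisson.logpmf(y[:, None], lam)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted updates of mixing weights and rates
        nk = r.sum(axis=0)
        pi = nk / len(y)
        lam = (r * y[:, None]).sum(axis=0) / nk
    return pi, lam

# toy usage: two clearly separated components
rng = np.random.default_rng(1)
y = np.concatenate([rng.poisson(2, 300), rng.poisson(9, 200)])
print(em_poisson_mixture(y, K=2))
```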

18.
An efficient optimization algorithm is proposed for identifying the best least squares regression model under the constraint of non-negative coefficients. The algorithm derives its solution via unrestricted least squares and is based on regression-tree and branch-and-bound techniques for computing the best subset regression. The aim is to fill a gap in computationally tractable solutions to the non-negative least squares problem and model selection. The proposed method is illustrated with a real dataset. Experimental results on real and artificial random datasets confirm the computational efficacy of the new strategy and demonstrate its ability to solve large model selection problems that are subject to non-negativity constraints.
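A brute-force stand-in for the proposed branch-and-bound, feasible only for small p, is exhaustive enumeration of subsets, each fitted by non-negative least squares, compared by an information criterion; the BIC used below is an assumption.

```python
import itertools
import numpy as np
from scipy.optimize import nnls

def best_nonneg_subset(X, y):
    """Exhaustive best-subset search under non-negativity: fit every subset
    with NNLS and keep the one with the smallest BIC (small p only)."""
    n, p = X.shape
    best = (np.inf, None, None)
    for r in range(1, p + 1):
        for cols in itertools.combinations(range(p), r):
            coef, rnorm = nnls(X[:, list(cols)], y)
            bic = n * np.log(max(rnorm**2, 1e-12) / n) + r * np.log(n)
            if bic < best[0]:
                best = (bic, cols, coef)
    return best  # (BIC, selected columns, non-negative coefficients)
```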

19.
Often in observational studies of time to an event, the study population is a biased (i.e., unrepresentative) sample of the target population. In the presence of biased samples, it is common to weight subjects by the inverse of their respective selection probabilities. Pan and Schaubel (Can J Stat 36:111–127, 2008) recently proposed inference procedures for an inverse selection probability weighted (ISPW) Cox model, applicable when selection probabilities are not treated as fixed but estimated empirically. The proposed weighting procedure requires auxiliary data to estimate the weights and is computationally more intensive than unweighted estimation. The ignorability of the sample selection process, in terms of parameter estimators and predictions, is often of interest from several perspectives: e.g., to determine whether weighting makes a significant difference to the analysis at hand, which would in turn address whether the collection of auxiliary data is required in future studies, or to evaluate previous studies that did not correct for selection bias. In this article, we propose methods to quantify the degree of bias corrected by the weighting procedure in the partial likelihood and Breslow-Aalen estimators. Asymptotic properties of the proposed test statistics are derived. The finite-sample significance level and power are evaluated through simulation. The proposed methods are then applied to data from a national organ failure registry to evaluate the bias in a post-kidney-transplant survival model.
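The weighting step itself is simple to sketch: model the selection indicator on auxiliary covariates with logistic regression and weight each selected subject by the inverse of its fitted probability, the weights then entering a weighted Cox partial likelihood (not shown). The self-contained IRLS fit below is generic and schematic.

```python
import numpy as np

def logistic_irls(X, s, n_iter=25):
    """Fit P(selected = 1 | X) by logistic regression via Newton/IRLS."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (s - p))
    return beta

def ispw_weights(X_aux, selected):
    """Inverse selection probability weights for the selected subjects."""
    X = np.column_stack([np.ones(len(selected)), X_aux])
    beta = logistic_irls(X, selected.astype(float))
    p_hat = 1.0 / (1.0 + np.exp(-X @ beta))
    return 1.0 / p_hat[selected]   # one weight per selected subject
```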

20.
This paper develops a novel weighted composite quantile regression (CQR) method for estimating a linear model when some covariates are missing at random and the missingness mechanism can be modelled parametrically. By incorporating the unbiased estimating equations of the incomplete data into empirical likelihood (EL), we obtain EL-based weights and then re-adjust the inverse probability weighted CQR for estimating the vector of regression coefficients. Theoretical results show that the proposed method achieves semiparametric efficiency if the selection probability function is correctly specified; the EL-weighted CQR is therefore more efficient than the inverse probability weighted CQR. Moreover, our algorithm is computationally simple and easy to implement. Simulation studies examine the finite-sample performance of the proposed procedures. Finally, we apply the new method to analyse the US News college data.
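A minimal sketch of the weighted CQR objective: a common slope vector with one intercept per quantile level, and per-subject weights multiplying the check losses. The direct Nelder-Mead minimization below is purely illustrative; the weights would be the EL-calibrated inverse-probability weights, and linear programming is the usual computational route.

```python
import numpy as np
from scipy.optimize import minimize

def weighted_cqr_fit(X, y, w, taus):
    """Weighted composite quantile regression: common slope beta, one
    intercept b_k per level tau_k, per-subject weights w on the check loss."""
    n, p = X.shape
    K = len(taus)

    def check(u, tau):
        return u * (tau - (u < 0))        # quantile check loss

    def loss(theta):
        b, beta = theta[:K], theta[K:]
        return sum(np.sum(w * check(y - b[k] - X @ beta, tau))
                   for k, tau in enumerate(taus))

    theta0 = np.concatenate([np.quantile(y, taus), np.zeros(p)])
    res = minimize(loss, theta0, method="Nelder-Mead",
                   options={"maxiter": 20000, "fatol": 1e-8})
    return res.x[:K], res.x[K:]           # (intercepts, common slope)
```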
