Similar Documents
20 similar documents found (search time: 46 ms)
1.
Two new methods for improving prediction regions in the context of vector autoregressive (VAR) models are proposed. These methods, which are based on the bootstrap technique, take into account the uncertainty associated with the estimation of the model order and parameters. In particular, by exploiting an independence property of the prediction error, we introduce a bootstrap procedure that allows for better estimates of the forecasting distribution, in the sense that the variability of its quantile estimators is substantially reduced without requiring additional bootstrap replications. The proposed methods perform well even when the distribution of the disturbances is not Gaussian. An application to a real data set is presented.
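The core resampling idea can be illustrated on a univariate AR(1) model standing in for the full VAR setting. This is a minimal sketch of a residual-bootstrap prediction interval that reflects parameter estimation uncertainty, not the authors' exact procedure; the model, coefficient, and sample size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) series y_t = 0.6 * y_{t-1} + e_t
n = 300
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + rng.normal()

# Fit AR(1) by least squares and collect centred residuals
phi_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
resid = y[1:] - phi_hat * y[:-1]
resid -= resid.mean()

# Bootstrap the one-step-ahead forecast distribution: rebuild a
# series from resampled residuals, refit to capture parameter
# uncertainty, then add a resampled future innovation.
B = 500
forecasts = np.empty(B)
for b in range(B):
    e = rng.choice(resid, size=n)
    ys = np.zeros(n)
    for t in range(1, n):
        ys[t] = phi_hat * ys[t - 1] + e[t]
    phi_b = np.sum(ys[1:] * ys[:-1]) / np.sum(ys[:-1] ** 2)
    forecasts[b] = phi_b * y[-1] + rng.choice(resid)

lo, hi = np.quantile(forecasts, [0.025, 0.975])
print(f"95% bootstrap prediction interval: [{lo:.2f}, {hi:.2f}]")
```

Because the interval is built from quantiles of resampled forecasts, it needs no Gaussian assumption on the disturbances, which is the robustness property the abstract emphasizes.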

2.
ABSTRACT

This paper presents methods for constructing prediction limits for a step-stress model in accelerated life testing. An exponential life distribution with a mean that is a log-linear function of stress, together with a cumulative exposure model, is assumed. Two prediction problems are discussed: the prediction of the life at a design stress, and the prediction of a future life during the step-stress testing. Both predictions require knowledge of some model parameters. When estimates of the model parameters are available, a simulation-based calibration method is proposed for correcting the prediction intervals (regions) obtained by treating the parameter estimates as the true parameter values. Finally, a numerical example is given to illustrate the prediction procedure.

3.
This paper considers a multiple regression model with multivariate spherically symmetric errors and determines optimal β-expectation tolerance regions for the future regression vector (FRV) and the future residual sum of squares (FRSS) by using the prediction distributions of appropriate functions of the future responses. The prediction distribution of the FRV, conditional on the observed responses, is a multivariate Student-t distribution; similarly, the prediction distribution of the FRSS is a beta distribution. The optimal β-expectation tolerance regions for the FRV and FRSS are obtained from the F-distribution and the beta distribution, respectively. The results apply to multiple regression models with normal as well as Student-t errors.

4.
Abstract

This paper deals with Bayesian estimation and prediction for the inverse Weibull distribution with shape parameter α and scale parameter λ under general progressive censoring. We prove that the conditional posterior density functions of α and λ are both log-concave, under the assumption that λ has a gamma prior and α follows a prior with log-concave density. We then present a Gibbs sampling strategy to estimate, under squared-error loss, any function of the unknown parameter vector (α, λ), to find credible intervals, and to obtain prediction intervals for future order statistics. Monte Carlo simulations compare the performance of the Bayesian estimators derived via Gibbs sampling with the corresponding maximum likelihood estimators, and a real data analysis illustrates the proposed procedure. Finally, we extend the methodology to other two-parameter distributions, including the Weibull, Burr type XII, and flexible Weibull distributions, and to general progressive hybrid censoring.

5.
In the context of local interpolators, radial basis functions (RBFs) are known to reduce computational time by using only a subset of the data for prediction. In this paper, we propose a new distance-based spatial RBF method for modeling continuous spatial random variables. The trend is incorporated into the RBF through a detrending procedure with mixed variables, which may include categorical variables. To evaluate the efficiency of the proposed method, a simulation study is carried out over a variety of practical scenarios for five distinct RBFs, incorporating principal coordinates. Finally, the proposed method is illustrated with an application predicting calcium concentration measured at a depth of 0–20 cm in Brazil, with the smoothing parameter selected by cross-validation.

6.
This paper describes Bayesian inference and prediction for the two-parameter Weibull distribution when the data are Type-II censored. The aim of this paper is twofold. First, we consider Bayesian inference of the unknown parameters under different loss functions. The Bayes estimates cannot be obtained in closed form, so we use a Gibbs sampling procedure to draw Markov chain Monte Carlo (MCMC) samples, which are used to compute the Bayes estimates and to construct symmetric credible intervals. Second, we consider Bayesian prediction of future order statistics based on the observed sample: we derive the posterior predictive density of the future observations and construct a predictive interval with a given coverage probability. Monte Carlo simulations are performed to compare the different methods, and a data analysis is presented for illustration.

7.
In this paper, we consider the prediction of a future observation based on a Type-I hybrid censored sample when the lifetime distribution of the experimental units is assumed to be Weibull. Different classical and Bayesian point predictors are obtained; the Bayesian predictors use squared-error and linear-exponential loss functions. We also provide a simulation-consistent method for computing Bayesian prediction intervals. Monte Carlo simulations are performed to compare the performance of the different methods, and a data analysis is presented for illustrative purposes.

8.
In this paper, we introduce a new partially functional linear varying-coefficient model, where the response is a scalar and some of the covariates are functional. By means of functional principal components analysis and local linear smoothing techniques, we obtain estimators of the coefficient functions of both the function-valued and the real-valued variables. The rates of convergence of the proposed estimators and the mean squared prediction error are then established under some regularity conditions. Moreover, we develop a hypothesis test for the model and employ a bootstrap procedure to evaluate the null distribution of the test statistic and the p-value of the test. Finally, we illustrate the finite-sample performance of our methods with simulation studies and a real data application.

9.
An interpretation of Akaike's minimum AIC procedure as maximizing the expected entropy of the predictive distribution is exploited for the modeling and prediction of time series with trend and seasonal mean value functions and stationary covariances. The best one-step-ahead and best twelve-step-ahead prediction models under the AIC criterion can differ, and the different models exhibit the relative optimality properties for which they were designed. The results are related to open questions on optimal trend estimation and optimal seasonal adjustment of time series.
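AIC-based model selection of the kind exploited here can be sketched for a simple autoregression: fit candidate orders by least squares and pick the order minimizing AIC. The simulated AR(2) model below is an illustrative assumption, not an example from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate an AR(2) series with coefficients 0.5 and -0.3
n = 500
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

def ar_aic(y, p):
    """Least-squares AR(p) fit; AIC = m * log(sigma^2) + 2 * (p + 1)."""
    m = len(y) - p
    Y = y[p:]
    X = np.column_stack([y[p - k : len(y) - k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    sigma2 = np.mean((Y - X @ beta) ** 2)
    return m * np.log(sigma2) + 2 * (p + 1)

aics = {p: ar_aic(y, p) for p in range(1, 6)}
best_p = min(aics, key=aics.get)
print("AIC-selected order:", best_p)
```

For multi-step prediction one would replace the one-step residual variance with the h-step prediction error, which is why, as the abstract notes, the AIC-best one-step and twelve-step models can differ.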

10.
We develop our previous work on identifying the collection of significant factors determining a (generally non-binary) random response variable. Such identification is important, for example, in biological and medical studies. Our approach is to examine the quality of prediction of the response variable by functions of (a certain subset of) the factors. Estimating the prediction error requires a cross-validation procedure, a prediction algorithm, and estimation of the penalty function. Using simulated data, we demonstrate the efficiency of our method. We also prove a new central limit theorem for the introduced regularized estimates under natural conditions on arrays of exchangeable random variables.
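The prediction-error approach to factor identification can be sketched with ordinary least squares standing in for the prediction algorithm and K-fold cross-validation as the error estimate; the data-generating model and candidate factor subsets below are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# Response depends on factors 0 and 1 only; factor 2 is pure noise
n = 200
X = rng.normal(size=(n, 3))
y = 2 * X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=n)

def cv_error(X, y, cols, k=5):
    """k-fold cross-validated squared prediction error of a
    least-squares fit using only the columns in `cols`."""
    idx = np.arange(len(y))
    err = 0.0
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        beta, *_ = np.linalg.lstsq(X[train][:, cols], y[train], rcond=None)
        err += np.sum((y[fold] - X[fold][:, cols] @ beta) ** 2)
    return err / len(y)

candidates = [(0,), (0, 1), (0, 1, 2)]
scores = {s: cv_error(X, y, list(s)) for s in candidates}
best = min(scores, key=scores.get)
print("CV-selected factor set:", best)
```

Dropping a truly significant factor inflates the cross-validated error sharply, which is the signal the identification procedure exploits.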

11.
This paper is concerned with an estimation procedure for a class of single-index varying-coefficient models with right-censored data. An adjusted empirical log-likelihood ratio for the index parameters, which are of primary interest, is proposed using a synthetic data approach. The adjusted empirical likelihood is shown to have a standard chi-squared limiting distribution. Furthermore, we improve the accuracy of the proposed confidence regions by using the constraint that the index has norm 1. Simulation studies highlight the performance of the proposed method compared with the traditional normal approximation method.

12.
In this paper, we propose a novel procedure for the estimation of semiparametric survival functions. The proposed technique adapts penalized likelihood survival models to the context of lifetime value modeling. The method extends the classical Cox model by introducing a smoothing parameter that can be estimated by penalized maximum likelihood. Markov chain Monte Carlo methods, combining Metropolis–Hastings and Gibbs sampling, are employed to estimate this smoothing parameter effectively. Our proposal is contextualized and compared with conventional models in a marketing application involving the prediction of customer lifetime value.

13.
We introduce a two-step procedure for ultra-high dimensional additive models that aims to reduce the size of the covariate vector and to distinguish linear from nonlinear effects among the nonzero components. In the first step, our screening procedure is constructed from the cumulative distribution function and the conditional expectation of the response, within the framework of marginal correlation; B-splines and empirical distribution functions are used to estimate these two measures. The sure screening property of this procedure is established. In the second step, a double-penalization procedure is applied to identify nonzero and linear components simultaneously. The performance of the method is examined on several test functions to show its capabilities against competing methods as the error distribution varies. Simulation studies show that the proposed screening procedure can be applied to ultra-high dimensional data, detects influential covariates well, and outperforms existing methods. The method is also applied to identify the genes most influential for overexpression of a G protein-coupled receptor in mice.
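The flavor of the first-step marginal screening can be sketched with plain Pearson correlation as the marginal utility (the paper's procedure instead uses a CDF-based measure estimated with B-splines); the data-generating model and the top-n/log(n) cutoff below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Ultra-high dimensional setting: p >> n, only three active covariates
n, p = 100, 1000
X = rng.normal(size=(n, p))
y = 3 * X[:, 0] + 2 * X[:, 1] - 2 * X[:, 2] + rng.normal(size=n)

# Screen by ranking covariates on absolute marginal correlation with
# the response and keeping the top n / log(n) of them.
cors = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
d = int(n / np.log(n))
keep = np.argsort(cors)[::-1][:d]
print("kept", d, "covariates")
```

A second-stage penalized fit would then be run on only the `d` retained covariates, which is what makes the two-step design feasible when p is in the thousands.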

14.
Massive correlated data with many inputs are often generated from computer experiments to study complex systems. The Gaussian process (GP) model is a widely used tool for the analysis of computer experiments. Although GPs provide a simple and effective approximation to computer experiments, two critical issues remain unresolved. One is the computational cost of GP estimation and prediction, which requires intensive manipulation of a large correlation matrix; for a large sample size and a large number of variables, this task is often unstable or infeasible. The other is how to improve the naive plug-in predictive distribution, which is known to underestimate the uncertainty. In this article, we introduce a unified framework that tackles both issues simultaneously. It consists of a sequential split-and-conquer procedure, an information-combining technique using confidence distributions (CDs), and a frequentist predictive distribution based on the combined CD. The proposed method is shown to maintain the same asymptotic efficiency as conventional likelihood inference under mild conditions while dramatically reducing the computation in both estimation and prediction. The predictive distribution contains comprehensive information for inference and quantifies predictive uncertainty better than the plug-in approach. Simulations compare the estimation and prediction accuracy with existing methods and illustrate the computational advantage of the proposed approach, which is demonstrated on a real data example based on tens of thousands of computer experiments from a computational fluid dynamics simulator.

15.
ABSTRACT

This article considers nonparametric regression problems and develops a model-averaging procedure for smoothing spline regression. Unlike most smoothing parameter selection studies, which determine a single optimum smoothing parameter, our focus here is on prediction accuracy for the true conditional mean of Y given a predictor X. Our method consists of two steps. The first step constructs a class of smoothing spline regression models based on nonparametric bootstrap samples, each with an appropriate smoothing parameter. The second step averages the bootstrap smoothing spline estimates of different smoothness to form a final improved estimate. To minimize the prediction error, we estimate the model weights using a delete-one-out cross-validation procedure. A simulation study, performed using a program written in R, compares the well-known cross-validation (CV) and generalized cross-validation (GCV) criteria with the proposed method. The new method is straightforward to implement and gives reliable performance in simulations.

16.
In this paper, we propose robust randomized quantile regression estimators for the mean and (conditional) variance functions of the popular heteroskedastic nonparametric regression model. Unlike classical approaches, which treat the quantile as a fixed quantity, our method treats the quantile as a uniformly distributed random variable. The proposed method can be employed to estimate the error distribution, which can significantly improve prediction results. An automatic bandwidth selection scheme is discussed, and asymptotic properties and relative efficiencies of the proposed estimators are investigated. Our empirical results show that the proposed estimators work well even for random errors with infinite variance. Various numerical simulations and two real data examples demonstrate our methodologies.

17.
The exponential distribution has extensive applications in reliability, and introducing a shape parameter to this distribution has produced various distribution functions. In 2009, Gupta and Kundu introduced another such distribution using Azzalini's method, the weighted exponential (WE) distribution, which is applicable in reliability. The parameters of this distribution were recently estimated by those authors in the classical framework. In this paper, Bayesian estimates of the parameters are derived. To this end, we use Lindley's approximation for the integrals that cannot be solved in closed form. Furthermore, a Gibbs sampling procedure is used to draw Markov chain Monte Carlo samples from the posterior distribution, from which the Bayes estimates of the parameters are derived. The estimation of the reliability and hazard functions is also discussed. Finally, classical and Bayesian estimation methods are compared in a Monte Carlo simulation study covering both complete and Type-II censored samples.

18.
Summary. The paper presents a general strategy for selecting the bandwidth of nonparametric regression estimators and specializes it to local linear regression smoothers. The procedure requires the sample to be divided into a training sample and a testing sample. Using the training sample, we first compute a family of regression smoothers indexed by their bandwidths; we then select the bandwidth by minimizing the empirical quadratic prediction error on the testing sample. The resulting bandwidth satisfies a finite-sample oracle inequality that holds for all bounded regression functions, which permits asymptotically optimal estimation for nearly any regression function. The practical performance of the method is illustrated by a simulation study showing good finite-sample behaviour compared with other bandwidth selection procedures.
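The train/test selection strategy can be sketched with a Nadaraya–Watson smoother standing in for the local linear smoother of the paper; the data-generating curve, split sizes, and bandwidth grid are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Noisy observations of a smooth curve
n = 400
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=n)

# Divide the sample into a training sample and a testing sample
x_tr, y_tr, x_te, y_te = x[:300], y[:300], x[300:], y[300:]

def nw_smoother(x_tr, y_tr, x_new, h):
    """Nadaraya-Watson estimate with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x_new[:, None] - x_tr[None, :]) / h) ** 2)
    return (w @ y_tr) / w.sum(axis=1)

# Select the bandwidth minimizing the empirical quadratic prediction
# error on the testing sample.
grid = [0.01, 0.03, 0.05, 0.1, 0.3, 1.0]
errs = {h: np.mean((y_te - nw_smoother(x_tr, y_tr, x_te, h)) ** 2)
        for h in grid}
h_best = min(errs, key=errs.get)
print("selected bandwidth:", h_best)
```

Very large bandwidths oversmooth toward the sample mean and very small ones overfit the noise, so the test-sample prediction error is minimized at an intermediate bandwidth.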

19.
Qiu and Sheng proposed a powerful and robust two-stage procedure for comparing two hazard rate functions. In this paper, we improve their method by using Fisher's test to combine the asymptotically independent p-values obtained from the two stages of their procedure. In addition, we extend the procedure to situations with multiple hazard rate functions. A comprehensive simulation study shows that the proposed method performs well in terms of both controlling the type I error rate and detection power. Three real data applications illustrate the use of the new method.
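Fisher's test combines k independent p-values via -2 Σ log p_i, which is chi-squared with 2k degrees of freedom under the null. A self-contained sketch (using the closed-form chi-square survival function available for even degrees of freedom; the example p-values are illustrative):

```python
import math

def fisher_combine(p_values):
    """Fisher's method: under H0, -2 * sum(log p_i) is chi-squared
    with 2k degrees of freedom for k independent p-values."""
    k = len(p_values)
    stat = -2.0 * sum(math.log(p) for p in p_values)
    # Chi-square survival function with even df 2k:
    # P(X > s) = exp(-s/2) * sum_{j=0}^{k-1} (s/2)^j / j!
    x = stat / 2.0
    term, total = 1.0, 1.0
    for j in range(1, k):
        term *= x / j
        total += term
    return math.exp(-x) * total  # combined p-value

# Two moderately small stage p-values combine into stronger evidence
print(fisher_combine([0.04, 0.09]))  # ≈ 0.024
```

With two stage p-values (k = 2), as in the procedure described above, the combined statistic is referred to a chi-square distribution with 4 degrees of freedom.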

20.
One of the most important issues in toxicity studies is the identification of the equivalence of treatments with a placebo. Because it is unacceptable to declare non-equivalent treatments equivalent, it is important to adopt a reliable statistical method that properly controls the family-wise error rate (FWER). In dealing with this issue, it is important to keep in mind that overestimating toxicity equivalence is a more serious error than underestimating it; consequently, asymmetric loss functions are more appropriate than symmetric ones. Recently, Tao, Tang & Shi (2010) developed a new procedure based on an asymmetric loss function. However, their procedure assumes that the variances at the various dose levels are known, an assumption that is restrictive in some applications. In this study, we propose an improved approach based on asymmetric confidence intervals without this restrictive assumption. The asymmetry guarantees reliability in the sense that the FWER is well controlled. Although our procedure is developed assuming that the variances at the various dose levels are unknown but equal, simulation studies show that it still performs quite well when the variances are unequal.
