Similar Articles
20 similar articles found.
1.
In this paper, we consider the Marshall–Olkin extended exponential (MOEE) distribution, which is capable of modelling various shapes of failure rates and aging criteria. The purpose of this paper is threefold. First, we derive the maximum likelihood estimators of the unknown parameters and the observed Fisher information matrix from progressively type-II censored data. Second, the Bayes estimates are evaluated by applying Lindley's approximation method and the Markov chain Monte Carlo method under the squared error loss function. We have performed a simulation study in order to compare the proposed Bayes estimators with the maximum likelihood estimators. We also compute the 95% asymptotic confidence interval and symmetric credible interval along with the coverage probability. Third, we consider one-sample and two-sample prediction problems based on the observed sample and provide appropriate predictive intervals under both the classical and the Bayesian framework. Finally, we analyse a real data set to illustrate the results derived.
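A minimal sketch of the first step above, under stated assumptions: for progressive type-II censoring, each observed failure x_i contributes log f(x_i) to the log-likelihood and its R_i withdrawn units contribute R_i log S(x_i). The MOEE density and survival function used here follow the standard Marshall–Olkin construction with an exponential baseline; the data, censoring scheme, and choice of SciPy's Nelder–Mead optimizer are all illustrative, not the paper's.

```python
# Sketch: numerical MLE for the MOEE distribution under progressive
# type-II censoring (hypothetical data; not the paper's code).
import numpy as np
from scipy.optimize import minimize

def moee_logpdf(x, alpha, lam):
    # f(x) = alpha * lam * exp(-lam x) / (1 - (1 - alpha) exp(-lam x))^2
    z = 1.0 - (1.0 - alpha) * np.exp(-lam * x)
    return np.log(alpha) + np.log(lam) - lam * x - 2.0 * np.log(z)

def moee_logsf(x, alpha, lam):
    # S(x) = alpha * exp(-lam x) / (1 - (1 - alpha) exp(-lam x))
    z = 1.0 - (1.0 - alpha) * np.exp(-lam * x)
    return np.log(alpha) - lam * x - np.log(z)

def neg_loglik(theta, x, R):
    alpha, lam = theta
    if alpha <= 0 or lam <= 0:
        return np.inf
    # Observed failures contribute log f; withdrawals contribute R_i log S.
    return -np.sum(moee_logpdf(x, alpha, lam) + R * moee_logsf(x, alpha, lam))

# Hypothetical observed failure times and progressive censoring scheme R.
x = np.array([0.2, 0.5, 0.9, 1.4, 2.1])
R = np.array([1, 0, 2, 0, 2])
fit = minimize(neg_loglik, x0=[1.0, 1.0], args=(x, R), method="Nelder-Mead")
print("MLE (alpha, lambda):", fit.x)
```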

2.
The authors show how an adjusted pseudo-empirical likelihood ratio statistic that is asymptotically distributed as a chi-square random variable can be used to construct confidence intervals for a finite population mean or a finite population distribution function from complex survey samples. They consider both non-stratified and stratified sampling designs, with or without auxiliary information. They examine the behaviour of estimates of the mean and the distribution function at specific points using simulations calling on the Rao–Sampford method of unequal probability sampling without replacement. They conclude that the pseudo-empirical likelihood ratio confidence intervals are superior to those based on the normal approximation, whether in terms of coverage probability, tail error rates or average length of the intervals.
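For intuition, here is a sketch of an ordinary (equal-weight) empirical likelihood ratio interval for a mean on a plain i.i.d. sample, inverted against the chi-square calibration mentioned above. The paper's pseudo-EL version additionally handles survey weights, stratification, and auxiliary information, which this sketch deliberately omits; the data and grid resolution are illustrative.

```python
# Sketch: Owen-style empirical likelihood interval for a mean.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_stat(x, mu):
    # Profile the Lagrange multiplier: solve sum d_i / (1 + lam * d_i) = 0.
    d = x - mu
    if d.min() >= 0 or d.max() <= 0:
        return np.inf  # mu outside the convex hull of the data
    eps = 1e-10
    lo, hi = -1.0 / d.max() + eps, -1.0 / d.min() - eps
    lam = brentq(lambda l: np.sum(d / (1.0 + l * d)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))  # -2 log EL ratio

rng = np.random.default_rng(0)
x = rng.exponential(2.0, size=50)
cut = chi2.ppf(0.95, df=1)                     # chi-square calibration
grid = np.linspace(x.min(), x.max(), 2000)
inside = [mu for mu in grid if el_stat(x, mu) <= cut]
print("95%% EL interval for the mean: (%.3f, %.3f)"
      % (min(inside), max(inside)))
```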

3.
This paper concerns prediction from the frequentist point of view. The aim is to define a well-calibrated predictive distribution giving prediction intervals, and in particular prediction limits, with coverage probability equal to or close to the target nominal value. This predictive distribution can be considered in a number of situations, including discrete data and non-regular cases, and it is founded on the idea of calibrating prediction limits to control the associated coverage probability. Whenever the computation of the proposed distribution is not feasible, it can be approximated using a suitable bootstrap simulation procedure or by considering high-order asymptotic expansions, giving predictive distributions already known in the literature. Examples and applications of the results to different contexts show the wide applicability and the very good performance of the proposed predictive distribution.
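The calibration idea can be illustrated with a toy sketch: adjust the nominal level of a naive (estimative) prediction limit until its bootstrap-estimated coverage hits the target. The normal model, parametric bootstrap, and all settings below are illustrative assumptions, not the paper's general construction.

```python
# Sketch: bootstrap calibration of an upper prediction limit (normal model).
import numpy as np
from scipy.stats import norm

def naive_limit(sample, a):
    # Estimative upper prediction limit: plug in the MLEs.
    return sample.mean() + norm.ppf(a) * sample.std(ddof=0)

rng = np.random.default_rng(1)
x = rng.normal(10.0, 2.0, size=15)
target, B = 0.95, 4000
mu_hat, sd_hat = x.mean(), x.std(ddof=0)

levels = np.linspace(0.90, 0.999, 200)
coverage = np.zeros_like(levels)
for _ in range(B):
    xb = rng.normal(mu_hat, sd_hat, size=x.size)   # bootstrap sample
    yb = rng.normal(mu_hat, sd_hat)                # bootstrap "future" obs
    coverage += (yb <= naive_limit(xb, levels))
coverage /= B

a_star = levels[np.argmin(np.abs(coverage - target))]  # calibrated level
print("calibrated nominal level:", a_star)
print("naive 95%% limit: %.3f  calibrated limit: %.3f"
      % (naive_limit(x, target), naive_limit(x, a_star)))
```

The calibrated limit is typically wider than the naive one, compensating for the undercoverage caused by plugging in estimated parameters.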

4.
In many applications, a finite population contains a large proportion of zero values that make the population distribution severely skewed. An unequal-probability sampling plan compounds the problem, and as a result the normal approximation to the distribution of various estimators has poor precision. The central-limit-theorem-based confidence intervals for the population mean are hence unsatisfactory. Complex designs also make it hard to pin down useful likelihood functions, so a direct likelihood approach is not an option. In this paper, we propose a pseudo-likelihood approach. The proposed pseudo-log-likelihood function is an unbiased estimator of the log-likelihood function that would be obtained if the entire population were sampled. Simulations have been carried out. When the inclusion probabilities are related to the unit values, the pseudo-likelihood intervals are superior to existing methods in terms of coverage probability, the balance of non-coverage rates on the lower and upper sides, and interval length. An application with a data set from the Canadian Labour Force Survey 2000 also shows that the pseudo-likelihood method performs more appropriately than other methods. The Canadian Journal of Statistics 38: 582–597; 2010 © 2010 Statistical Society of Canada

5.
Massive correlated data with many inputs are often generated from computer experiments to study complex systems. The Gaussian process (GP) model is a widely used tool for the analysis of computer experiments. Although GPs provide a simple and effective approximation to computer experiments, two critical issues remain unresolved. One is the computational issue in GP estimation and prediction, where intensive manipulations of a large correlation matrix are required; for a large sample size and a large number of variables, this task is often unstable or infeasible. The other issue is how to improve the naive plug-in predictive distribution, which is known to underestimate the uncertainty. In this article, we introduce a unified framework that tackles both issues simultaneously. It consists of a sequential split-and-conquer procedure, an information-combining technique using confidence distributions (CDs), and a frequentist predictive distribution based on the combined CD. It is shown that the proposed method maintains the same asymptotic efficiency as conventional likelihood inference under mild conditions, but dramatically reduces the computation in both estimation and prediction. The predictive distribution contains comprehensive information for inference and provides a better quantification of predictive uncertainty compared with the plug-in approach. Simulations are conducted to compare the estimation and prediction accuracy with some existing methods, and the computational advantage of the proposed method is also illustrated. The proposed method is demonstrated by a real data example based on tens of thousands of computer experiments generated from a computational fluid dynamics simulator.

6.
This article reviews several techniques useful for forming point and interval predictions in regression models with Box–Cox transformed variables. The techniques reviewed (plug-in, mean squared error analysis, predictive likelihood, and stochastic simulation) take account of nonnormality and parameter uncertainty in varying degrees. A Monte Carlo study examining their small-sample accuracy indicates that uncertainty about the Box–Cox transformation parameter may be relatively unimportant. For certain parameters, deterministic point predictions are biased, and plug-in prediction intervals are also biased. Stochastic simulation, as usually carried out, leads to badly biased predictions. A modification of the usual approach renders stochastic simulation predictions largely unbiased.
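A minimal sketch of the plug-in technique reviewed above: estimate the Box–Cox parameter, fit OLS on the transformed response, form a normal prediction interval on the transformed scale, and back-transform it. Ignoring the uncertainty in the transformation parameter is exactly the simplification the article examines; the data and the evaluation point x0 are illustrative.

```python
# Sketch: plug-in Box-Cox prediction interval for a regression response.
import numpy as np
from scipy.stats import boxcox, norm
from scipy.special import inv_boxcox

rng = np.random.default_rng(2)
x = rng.uniform(0, 5, size=80)
y = np.exp(0.3 + 0.4 * x + rng.normal(0, 0.2, size=80))  # positive response

z, lam = boxcox(y)                      # transformed response, MLE of lambda
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, z, rcond=None)
resid = z - X @ beta
sigma = resid.std(ddof=X.shape[1])      # residual scale on transformed axis

x0 = 2.5                                # new design point (illustrative)
zhat = beta[0] + beta[1] * x0
lo, hi = zhat + norm.ppf([0.025, 0.975]) * sigma
print("lambda-hat: %.3f" % lam)
print("95%% plug-in interval for y at x0: (%.3f, %.3f)"
      % (inv_boxcox(lo, lam), inv_boxcox(hi, lam)))
```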

7.
We construct bootstrap confidence intervals for smoothing spline estimates based on Gaussian data, and for penalized likelihood smoothing spline estimates based on data from exponential families. Several variations of bootstrap confidence intervals are considered and compared. We find that the commonly used bootstrap percentile intervals are inferior to the T intervals and to intervals based on bootstrap estimation of mean squared errors. The best variations of the bootstrap confidence intervals behave similarly to the well-known Bayesian confidence intervals. These bootstrap confidence intervals have an average coverage probability across the function being estimated, as opposed to a pointwise property.

8.
This paper describes the Bayesian inference and prediction of the two-parameter Weibull distribution when the data are Type-II censored. The aim of this paper is twofold. First, we consider the Bayesian inference of the unknown parameters under different loss functions. The Bayes estimates cannot be obtained in closed form, so we use a Gibbs sampling procedure to draw Markov chain Monte Carlo (MCMC) samples, which are then used to compute the Bayes estimates and to construct symmetric credible intervals. Second, we consider the Bayes prediction of future order statistics based on the observed sample. We consider the posterior predictive density of the future observations and construct a predictive interval with a given coverage probability. Monte Carlo simulations are performed to compare the different methods, and one data analysis is performed for illustration purposes.
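A minimal MCMC sketch of this setting: a two-parameter Weibull with Type-II censoring, where the first r of n order statistics are observed, so the likelihood is the product of the r densities times S(x_(r))^(n-r). The paper uses Gibbs sampling; here a generic random-walk Metropolis on the log-parameters stands in, and the Gamma(1,1) priors, proposal scale, and data are all assumptions of the sketch.

```python
# Sketch: random-walk Metropolis for a Weibull model under Type-II censoring.
import numpy as np

def log_post(log_theta, x, n):
    k, lam = np.exp(log_theta)           # shape, scale
    r = x.size
    loglik = (r * np.log(k / lam) + (k - 1) * np.sum(np.log(x / lam))
              - np.sum((x / lam) ** k)   # observed failures
              - (n - r) * (x[-1] / lam) ** k)  # censored tail at x_(r)
    # Gamma(1,1) priors on k and lam (assumption), plus log-scale Jacobian.
    logprior = -k - lam + np.sum(log_theta)
    return loglik + logprior

rng = np.random.default_rng(3)
full = np.sort(rng.weibull(1.5, size=30) * 2.0)
n, r = 30, 20
x = full[:r]                             # Type-II censored at the r-th failure

theta = np.log([1.0, 1.0])
lp = log_post(theta, x, n)
draws = []
for it in range(20000):
    prop = theta + rng.normal(0, 0.1, size=2)
    lp_prop = log_post(prop, x, n)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        theta, lp = prop, lp_prop
    if it >= 5000:                             # discard burn-in
        draws.append(np.exp(theta))
draws = np.array(draws)
print("posterior means (shape, scale):", draws.mean(axis=0))
print("95% credible interval for shape:",
      np.percentile(draws[:, 0], [2.5, 97.5]))
```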

9.
A generalized version of the inverted exponential distribution (IED) is considered in this paper. This lifetime distribution is capable of modeling various shapes of failure rates, and hence various shapes of aging criteria. The model can be considered as another useful two-parameter generalization of the IED. Maximum likelihood and Bayes estimates for the two parameters of the generalized inverted exponential distribution (GIED) are obtained on the basis of a progressively type-II censored sample. We also show the existence, uniqueness and finiteness of the maximum likelihood estimates of the parameters of the GIED based on progressively type-II censored data. Bayesian estimates are obtained under the squared error loss function. These Bayesian estimates are evaluated by applying Lindley's approximation method and via an importance sampling technique. The importance sampling technique is used to compute the Bayes estimates and the associated credible intervals. We further consider the Bayes prediction problem based on the observed samples, and provide the appropriate predictive intervals. Monte Carlo simulations are performed to compare the performances of the proposed methods, and a data set has been analyzed for illustrative purposes.

10.
In this paper, bootstrap prediction is adapted to resolve some problems that arise with small-sample datasets. The bootstrap predictive distribution is obtained by applying Breiman's bagging to the plug-in distribution with the maximum likelihood estimator. The effectiveness of bootstrap prediction has previously been shown, but some problems may arise when bootstrap prediction is constructed from small samples. In this paper, the Bayesian bootstrap is used to resolve these problems, and the effectiveness of Bayesian bootstrap prediction is confirmed by some examples. Analysis of small-sample data is increasingly important in various fields, and several such datasets are analyzed here. For real datasets, it is shown that plug-in prediction and bootstrap prediction perform very poorly when the sample size is close to the dimension of the parameter, while Bayesian bootstrap prediction provides stable predictions.
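A minimal sketch of the Bayesian-bootstrap predictive idea for a normal model: instead of resampling with replacement, draw Dirichlet(1,...,1) weights (Rubin's Bayesian bootstrap), compute the weighted MLE, and average the resulting plug-in densities. The normal model and sample below are illustrative assumptions.

```python
# Sketch: Bayesian bootstrap predictive density versus the plug-in density.
import numpy as np
from scipy.stats import norm

def bb_predictive(x, grid, B=2000, rng=None):
    rng = rng or np.random.default_rng()
    dens = np.zeros_like(grid)
    for _ in range(B):
        w = rng.dirichlet(np.ones(x.size))       # Bayesian-bootstrap weights
        mu = np.sum(w * x)                       # weighted MLE of the mean
        sd = np.sqrt(np.sum(w * (x - mu) ** 2))  # weighted MLE of the sd
        dens += norm.pdf(grid, mu, sd)           # average plug-in densities
    return dens / B

rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, size=8)                 # deliberately small sample
grid = np.linspace(-4, 4, 201)
plug_in = norm.pdf(grid, x.mean(), x.std(ddof=0))
bb = bb_predictive(x, grid, rng=rng)
# The BB predictive has heavier tails than the plug-in density, reflecting
# the parameter uncertainty that the plug-in approach ignores.
print("plug-in density at 3:", plug_in[np.searchsorted(grid, 3.0)])
print("BB predictive at 3: ", bb[np.searchsorted(grid, 3.0)])
```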

11.
The mean past lifetime (MPL) function (also known as the expected inactivity time function) is of interest in many fields, such as reliability theory and survival analysis, actuarial studies, and forensic science. Several procedures for estimating the MPL function have been proposed in the literature. In this paper, we give a central limit theorem for the estimator of the MPL function based on a right-censored random sample from an unknown distribution. The limiting distribution is used to construct a normal-approximation-based confidence interval for the MPL. Furthermore, we use the empirical likelihood ratio procedure to obtain a confidence interval for the MPL function. These two intervals are compared with each other through a simulation study in terms of coverage probability. Finally, a couple of numerical examples illustrating the theory are also given.
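To fix ideas, here is the empirical MPL for a complete (uncensored) sample: m(t) = E[t - X | X <= t] is estimated by averaging t - x_i over the observations at or below t. The paper's estimator additionally handles right censoring, which this sketch omits; the Weibull data are illustrative.

```python
# Sketch: empirical mean past lifetime (uncensored simplification).
import numpy as np

def mpl_hat(t, x):
    past = x[x <= t]                      # observations "already failed" by t
    return np.nan if past.size == 0 else np.mean(t - past)

rng = np.random.default_rng(8)
x = rng.weibull(1.5, size=100) * 10.0
for t in (5.0, 10.0, 15.0):
    print("MPL-hat(%.0f) = %.3f" % (t, mpl_hat(t, x)))
```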

12.
In this article, we develop a novel kernel density estimator for a sum of weighted averages from a single population, based on combining the well-defined kernel density estimator with classic inversion theory. This idea is further developed into a kernel density estimator for the difference of weighted averages from two independent populations. The resulting estimator is “bootstrap-like” in terms of its properties with respect to the derivation of approximate confidence intervals via a “plug-in” approach. This new approach is distinct from the bootstrap methodology in that it is analytically and computationally feasible to provide an exact estimate of the distribution function through direct calculation. Thus, our approach eliminates the error due to Monte Carlo resampling that arises within the context of simulation-based approaches, which are often necessary in order to derive bootstrap-based confidence intervals for statistics involving weighted averages of i.i.d. random variables. We provide several examples and carry out a simulation study to show that our kernel density estimator performs better than the standard central-limit-theorem-based approximation in terms of coverage probability.
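A rough sketch of the inversion idea, under stated assumptions: fit a Gaussian-kernel KDE to the sample, use the closed-form characteristic function of that KDE, multiply characteristic functions across the weights to get the CF of a weighted average of i.i.d. draws, and invert numerically. The bandwidth rule, integration grid, and equal weights are illustrative choices, not the paper's estimator.

```python
# Sketch: density of a weighted average via KDE characteristic-function inversion.
import numpy as np

def kde_cf(t, x, h):
    # CF of a Gaussian-kernel KDE: empirical CF damped by the kernel's CF.
    return (np.mean(np.exp(1j * np.outer(t, x)), axis=1)
            * np.exp(-0.5 * (h * t) ** 2))

def weighted_avg_density(grid, x, w, h):
    t = np.linspace(1e-6, 60.0, 6000)            # integration grid for t > 0
    dt = t[1] - t[0]
    phi = np.ones_like(t, dtype=complex)
    for wk in w:                                 # CF of sum_k w_k * Y_k
        phi *= kde_cf(wk * t, x, h)
    # Fourier inversion: f(v) = (1/pi) * Int_0^inf Re[exp(-i t v) phi(t)] dt
    return np.array([np.sum(np.real(np.exp(-1j * t * v) * phi)) * dt / np.pi
                     for v in grid])

rng = np.random.default_rng(5)
x = rng.gamma(2.0, 1.0, size=40)
h = 1.06 * x.std() * x.size ** (-0.2)            # Silverman-type bandwidth
w = np.full(10, 0.1)                             # equal weights, sum to 1
grid = np.linspace(0.5, 4.0, 8)
print(weighted_avg_density(grid, x, w, h))
```

Because the whole density comes from direct numerical integration, there is no Monte Carlo resampling error, which is the point the abstract emphasizes.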

13.
Multivariate model validation is a complex decision-making problem involving the comparison of multiple correlated quantities, based upon the available information and prior knowledge. This paper presents a Bayesian risk-based decision method for validation assessment of multivariate predictive models under uncertainty. A generalized likelihood ratio is derived as a quantitative validation metric based on Bayes' theorem and a Gaussian distribution assumption for the errors between validation data and model prediction. The multivariate model is then assessed by comparing the likelihood ratio with a Bayesian decision threshold, a function of the decision costs and the prior of each hypothesis. The probability density function of the likelihood ratio is constructed using the statistics of multiple response quantities and Monte Carlo simulation. The proposed methodology is implemented in the validation of a transient heat conduction model, using a multivariate data set from experiments. The Bayesian methodology provides a quantitative approach to facilitate rational decisions in multivariate model assessment under uncertainty.
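A minimal sketch of the decision rule described above: compare a Gaussian likelihood ratio for the data-minus-prediction errors against a Bayesian threshold built from priors and decision costs. The error covariance, the alternative mean shift, and the priors and costs below are all illustrative assumptions, not values from the paper.

```python
# Sketch: Gaussian likelihood-ratio validation metric with a Bayes threshold.
import numpy as np
from scipy.stats import multivariate_normal

d = np.array([0.8, -0.3, 0.5])            # observed multivariate errors
Sigma = np.diag([0.5, 0.4, 0.6])          # assumed error covariance
delta = np.array([1.0, 1.0, 1.0])         # assumed shift under "model invalid"

# Likelihood ratio: H0 "model valid" (zero-mean errors)
# versus H1 "model invalid" (shifted errors).
lr = (multivariate_normal.pdf(d, mean=np.zeros(3), cov=Sigma)
      / multivariate_normal.pdf(d, mean=delta, cov=Sigma))

p0, p1 = 0.5, 0.5                         # prior probabilities (assumed)
c01, c10 = 1.0, 1.0                       # costs of the two error types
threshold = (p1 * c01) / (p0 * c10)       # Bayes decision threshold
print("likelihood ratio: %.3f, threshold: %.3f -> %s"
      % (lr, threshold, "accept model" if lr > threshold else "reject model"))
```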

14.
In several statistical problems, nonparametric confidence intervals for population quantiles can be constructed and their coverage probabilities computed exactly, but the coverage cannot in general be rendered equal to a pre-determined level. The same difficulty arises for coverage probabilities of nonparametric prediction intervals for future observations. One solution is to interpolate between intervals whose coverage probabilities are closest to the pre-determined level from above and from below. In this paper, confidence intervals for population quantiles are constructed based on interpolated upper and lower records. Subsequently, prediction intervals are obtained for future upper records based on interpolated upper records. Additionally, we derive upper bounds for the coverage error of these confidence and prediction intervals. Finally, our results are applied to some real data sets, and a simulation study compares them with similar classical intervals obtained previously.
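A minimal sketch of the interpolation idea, for an ordinary i.i.d. sample rather than records: the exact coverage of an order-statistic interval for the p-quantile is a binomial probability, so one brackets the nominal level with a wider and a narrower interval and interpolates the endpoints. The linear-in-coverage interpolation rule is a simplifying assumption of the sketch.

```python
# Sketch: interpolated order-statistic confidence interval for a quantile.
import numpy as np
from scipy.stats import binom

def quantile_ci(x, p=0.5, level=0.95):
    x = np.sort(x)
    n = x.size
    # Coverage of (X_(r), X_(s)): P(r <= B <= s-1) with B ~ Binom(n, p),
    # i.e. binom.cdf(s-1) - binom.cdf(r-1).
    r = int(binom.ppf(0.5 * (1 - level), n, p))
    s = int(binom.ppf(1 - 0.5 * (1 - level), n, p)) + 1
    # (Assumes 1 <= r and s <= n, which holds for central p and moderate n.)
    def cov(r, s):
        return binom.cdf(s - 1, n, p) - binom.cdf(r - 1, n, p)
    outer = cov(r, s)                 # coverage >= level (wider interval)
    inner = cov(r + 1, s - 1)         # coverage <  level (narrower interval)
    g = (outer - level) / (outer - inner)   # interpolation weight
    lo = (1 - g) * x[r - 1] + g * x[r]      # between X_(r) and X_(r+1)
    hi = (1 - g) * x[s - 1] + g * x[s - 2]  # between X_(s) and X_(s-1)
    return lo, hi, outer, inner

rng = np.random.default_rng(6)
x = rng.normal(size=30)
lo, hi, outer, inner = quantile_ci(x, p=0.5, level=0.95)
print("coverages bracketing 0.95: %.4f / %.4f" % (outer, inner))
print("interpolated 95%% CI for the median: (%.3f, %.3f)" % (lo, hi))
```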

15.
This paper concerns the specification of multivariate prediction regions, which may be useful in time series applications whenever we aim to consider not just a single forecast but a group of consecutive forecasts. We review a general result on improved multivariate prediction and use it to calculate conditional prediction intervals for Markov process models so that the associated coverage probability is close to the target value. This improved solution is asymptotically superior to the estimative one, which is simpler but may lead to unreliable predictive conclusions. An application to general autoregressive models is presented, focusing in particular on AR and ARCH models.

16.
Confidence Intervals Based on Local Linear Smoother
Point-wise confidence intervals for a non-parametric regression function, used in conjunction with the popular local linear smoother, are considered. The confidence intervals are based on the asymptotic normal distribution of the local linear smoother. Their coverage accuracy is evaluated by developing an Edgeworth expansion for the coverage probability. It is found that the coverage error near the boundary of the support of the regression function is of a larger order than in the interior, which implies that the local linear smoother is not adaptive to the boundary in terms of coverage. This is quite unexpected, as the local linear smoother is adaptive to the boundary in terms of mean squared error.

17.
In this paper we consider the problems of estimation and prediction when the observed data from a lognormal distribution are based on lower record values and lower record values with inter-record times. We compute maximum likelihood estimates and asymptotic confidence intervals for the model parameters. We also obtain Bayes estimates and the highest posterior density (HPD) intervals using noninformative and informative priors under squared error and LINEX loss functions. Furthermore, for the problem of Bayesian prediction under the one-sample and two-sample frameworks, we obtain predictive estimates and the associated predictive equal-tail and HPD intervals. Finally, for illustration purposes a real data set is analyzed and a simulation study is conducted to compare the methods of estimation and prediction.

18.
In this paper, nonparametric methods are proposed to construct prediction intervals for the lifetime of a coherent system with known signature. An explicit expression for the coverage probability of the prediction intervals is presented based on Samaniego's signature. The existence and optimality of these intervals are discussed. In our derivation, we also obtain an exact expression for the marginal distribution of the i-th order statistic from a pooled sample.
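A minimal sketch of the signature representation underlying such intervals: for a coherent system with signature s, the lifetime T satisfies P(T > t) = sum_i s_i P(X_(i:n) > t), and each order-statistic survival probability is a binomial tail. Here the component lifetime distribution is taken as known (standard exponential) purely for illustration; the paper itself works nonparametrically.

```python
# Sketch: system survival from a signature, and an equal-tail interval.
import numpy as np
from scipy.stats import binom, expon
from scipy.optimize import brentq

# Signature of the 3-component system T = max(X1, min(X2, X3)): (0, 2/3, 1/3).
s = np.array([0.0, 2.0 / 3.0, 1.0 / 3.0])
n = s.size

def system_sf(t):
    # P(X_(i:n) > t) = P(at most i-1 of n components have failed by t).
    F = expon.cdf(t)
    return sum(s[i] * binom.cdf(i, n, F) for i in range(n))

lo = brentq(lambda t: system_sf(t) - 0.975, 1e-9, 50.0)
hi = brentq(lambda t: system_sf(t) - 0.025, 1e-9, 50.0)
print("95%% equal-tail interval for the system lifetime: (%.3f, %.3f)"
      % (lo, hi))
```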

19.
Estimation and prediction in generalized linear mixed models are often hampered by intractable high-dimensional integrals. This paper provides a framework to resolve this intractability, using asymptotic expansions when the number of random effects is large. To that end, we first derive a modified Laplace approximation for the case in which the number of random effects increases at a lower rate than the sample size. Second, we propose an approximate likelihood method based on the asymptotic expansion of the log-likelihood using the modified Laplace approximation, which is maximized using a quasi-Newton algorithm. Finally, we define the second-order plug-in predictive density based on a similar expansion to the plug-in predictive density and show that it is a normal density. Our simulations show that our method performs better than other approximations. Our methods are readily applied to non-Gaussian spatial data and, as an example, an analysis of the rhizoctonia root rot data is presented.

20.
The problem of building bootstrap confidence intervals for small probabilities with count data is addressed. The law of the independent observations is assumed to be a mixture of a given family of power series distributions. The mixing distribution is estimated by nonparametric maximum likelihood, and the corresponding mixture is used for resampling. We build percentile-t and Efron percentile bootstrap confidence intervals for the probabilities and prove their consistency in probability. The new theoretical results are supported by simulation experiments for Poisson and geometric mixtures. We compare the percentile-t and Efron percentile bootstrap intervals with eight other bootstrap or asymptotic-theory-based intervals. It appears that the Efron percentile bootstrap intervals outperform the competitors in terms of coverage probability and length.
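A minimal sketch of the Efron percentile interval for a small probability p = P(X = 0) with count data. The paper resamples from a mixture fitted by nonparametric maximum likelihood; this sketch resamples from the empirical distribution instead, which is a deliberate simplification, and the Poisson data are illustrative.

```python
# Sketch: Efron percentile bootstrap interval for a small count probability.
import numpy as np

rng = np.random.default_rng(7)
x = rng.poisson(4.0, size=200)             # illustrative count data
p_hat = np.mean(x == 0)                    # point estimate of P(X = 0)

B = 5000
boot = np.empty(B)
for b in range(B):
    xb = rng.choice(x, size=x.size, replace=True)  # nonparametric resample
    boot[b] = np.mean(xb == 0)

lo, hi = np.percentile(boot, [2.5, 97.5])  # Efron percentile interval
print("estimate: %.4f, 95%% percentile interval: (%.4f, %.4f)"
      % (p_hat, lo, hi))
```

With P(X = 0) around 0.02 and only a few zeros in the sample, the bootstrap distribution is visibly discrete, which illustrates why small probabilities are the hard case this paper targets.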
