Similar Documents
A total of 20 similar documents were found.
1.
The non-homogeneous Poisson process (NHPP) model is a very important class of software reliability models and is widely used in software reliability engineering. NHPPs are characterized by their intensity functions. In the literature it is usually assumed that the functional form of the intensity function is known and only some of its parameters are unknown, so that parametric statistical methods can be applied to estimate or test the reliability models. In realistic situations, however, the functional form of the failure intensity is often poorly known or completely unknown, and functional (non-parametric) estimation methods must be used instead. Non-parametric techniques require no preliminary assumptions about the software model and can therefore reduce parametric modeling bias. The existing non-parametric methods in the statistical literature are usually not applicable to software reliability data. In this paper we construct non-parametric methods for estimating the failure intensity function of the NHPP model, taking the particularities of software failure data into consideration.
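
As a concrete illustration of the non-parametric idea (a generic kernel estimator, not the specific construction of the paper), the intensity of an NHPP can be estimated by smoothing the observed failure times directly; the failure times and bandwidth below are hypothetical.

```python
import numpy as np

def kernel_intensity(t, failure_times, h):
    """Gaussian-kernel estimate of the NHPP intensity lambda(t): each observed
    failure contributes a bump of total mass one centred at its failure time."""
    u = (t[:, None] - failure_times[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (h * np.sqrt(2.0 * np.pi))

# Hypothetical software failure times (hours); real data would replace these.
failures = np.array([5.2, 9.8, 17.1, 21.4, 24.0, 31.7, 33.2, 40.5])
grid = np.linspace(0.0, 45.0, 200)
lam = kernel_intensity(grid, failures, h=4.0)    # bandwidth chosen by eye
print(lam.max(), np.trapz(lam, grid))            # integral ~ number of failures
```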

2.
李小胜  王申令 《统计研究》2016,33(11):85-92
This paper first constructs the sample likelihood function of a multivariate linear regression model under linear constraints and uses the Lagrange method to prove its validity. It then discusses, from the likelihood perspective, how the linear constraints affect the model parameters, and improves the parameter estimates obtained from classical theory through Bayesian and empirical Bayesian methods. For the Bayesian improvement, a matrix normal-Wishart distribution is taken as the joint conjugate prior for the model parameters and the precision matrix; combined with the constructed likelihood function, the posterior distribution of the parameters is derived and the Bayes estimates are computed. For the empirical Bayes improvement, the sample is split into groups, the influence of the subsample estimates on the full-sample estimate is examined from the variance perspective, and the empirical Bayes estimates are computed. Finally, simulations are run with random matrices generated in Matlab. The results show that both improved estimators are more accurate than the estimates obtained from classical theory, with smaller error ratios in the fitted results and higher credibility, and that the method computes faster in large-data settings.
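
A minimal sketch of the classical (pre-Bayesian) starting point described above: equality-constrained least squares via a Lagrange multiplier, shown here for a single response rather than the full multivariate matrix normal-Wishart setting; all data are simulated for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, 2.0, -3.0])        # satisfies the constraint below
y = X @ beta_true + rng.normal(scale=0.5, size=n)

R = np.ones((1, p))                           # constraint R beta = r:
r = np.zeros(1)                               # coefficients sum to zero

XtX_inv = np.linalg.inv(X.T @ X)
b_ols = XtX_inv @ X.T @ y
# Lagrange-multiplier solution: project the OLS estimate onto the constraint.
adjust = XtX_inv @ R.T @ np.linalg.solve(R @ XtX_inv @ R.T, R @ b_ols - r)
b_con = b_ols - adjust
print("OLS:", b_ols, "constrained:", b_con, "R @ b_con =", R @ b_con)
```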

3.
The failure rate function commonly has a bathtub shape in practice. In this paper we discuss a regression model based on the new Weibull extended distribution developed by Xie et al. (2002), which can be used to model this type of failure rate function. Assuming censored data, we discuss parameter estimation by the maximum likelihood method and by a Bayesian approach in which Gibbs sampling with Metropolis steps is used to obtain the posterior summaries of interest. We derive the appropriate matrices for assessing the local influence on the parameter estimates under different perturbation schemes, and we also present some ways to perform global influence analysis. Case-deletion influence diagnostics are also developed for the joint posterior distribution based on the Kullback–Leibler divergence. In addition, for various parameter settings, sample sizes and censoring percentages, we perform simulations and compare the empirical distribution of the martingale-type residual with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to the martingale-type residual in log-Weibull extended models with censored data. Finally, we analyze a real data set under a log-Weibull extended regression model; we perform diagnostic analysis and model checking based on the martingale-type residual to select an appropriate model.

4.
We propose a fully Bayesian model with a non-informative prior for analyzing misclassified binary data with a validation substudy. In addition, we derive a closed-form algorithm for drawing all parameters from the posterior distribution and making statistical inference on odds ratios. Our algorithm draws each parameter from a beta distribution, avoids the specification of initial values, and does not have convergence issues. We apply the algorithm to a data set and compare the results with those obtained by other methods. Finally, the performance of our algorithm is assessed using simulation studies.
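
The closed-form algorithm itself is paper-specific, but a simple Monte Carlo scheme in the same spirit, drawing sensitivity and specificity from beta posteriors based on hypothetical validation counts and correcting the apparent rate draw by draw, looks roughly like this (an illustrative sketch, not the authors' exact algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical validation substudy (true status known for a subsample).
tp, fn = 45, 5        # truly positive subjects: test +, test -
tn, fp = 90, 10       # truly negative subjects: test -, test +
# Hypothetical main study, where only the misclassified outcome is seen.
y_pos, m = 130, 400

draws = 10_000
se = rng.beta(1 + tp, 1 + fn, draws)             # sensitivity | validation data
sp = rng.beta(1 + tn, 1 + fp, draws)             # specificity | validation data
q = rng.beta(1 + y_pos, 1 + m - y_pos, draws)    # apparent positive rate

# Rogan-Gladen style correction applied draw by draw, clipped to [0, 1].
p = np.clip((q + sp - 1) / (se + sp - 1), 0.0, 1.0)
print("prevalence: mean %.3f, 95%% interval (%.3f, %.3f)"
      % (p.mean(), *np.quantile(p, [0.025, 0.975])))
```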

5.
A number of nonstationary models have been developed to estimate extreme events as functions of covariates. A quantile regression (QR) model is a statistical approach for estimating and conducting inference about conditional quantile functions. In this article, we focus on simultaneous variable selection and parameter estimation through penalized quantile regression, and we compare regularized quantile regression models with B-splines in a Bayesian framework. Regularization is based on a penalty and aims to favor parsimonious models, especially when the parameter space is of large dimension. The prior distributions related to the penalties are detailed. Five penalties (Lasso, Ridge, SCAD0, SCAD1 and SCAD2) are considered, with their equivalent expressions in the Bayesian framework. The regularized quantile estimates are then compared to the maximum likelihood estimates with respect to the sample size. Markov chain Monte Carlo (MCMC) algorithms are developed for each hierarchical model to simulate the conditional posterior distribution of the quantiles. Results indicate that SCAD0 and Lasso perform best for quantile estimation according to the relative mean bias (RMB) and relative mean error (RME) criteria, especially in the case of heavy-tailed errors. A case study of the annual maximum precipitation at Charlo, Eastern Canada, with the Pacific North Atlantic climate index as covariate is presented.
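
As a frequentist counterpart to the Bayesian penalties compared above, a Lasso-penalized quantile regression can be sketched by directly minimizing the check loss plus an L1 penalty; the data, quantile level and penalty weight below are arbitrary choices for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    """Quantile (pinball) loss rho_tau(u)."""
    return np.where(u >= 0, tau * u, (tau - 1) * u)

def lasso_qr(X, y, tau, lam):
    """Lasso-penalized quantile regression by direct minimization of the
    check loss; the intercept (first column) is left unpenalized."""
    def obj(beta):
        return check_loss(y - X @ beta, tau).sum() + lam * np.abs(beta[1:]).sum()
    beta0 = np.zeros(X.shape[1])
    return minimize(obj, beta0, method="Nelder-Mead",
                    options={"maxiter": 20000, "xatol": 1e-6}).x

rng = np.random.default_rng(2)
n = 300
x = rng.normal(size=(n, 2))
X = np.column_stack([np.ones(n), x])                    # intercept + 2 covariates
y = 1.0 + 2.0 * x[:, 0] + rng.standard_t(df=3, size=n)  # heavy-tailed errors
print(lasso_qr(X, y, tau=0.9, lam=2.0))   # coefficient on the inactive x2 shrinks
```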

6.
Population-parameter mapping (PPM) is a method for estimating the parameters of latent scientific models that describe the statistical likelihood function. The PPM method involves Bayesian inference in terms of the statistical parameters and the mapping from the statistical parameter space to the parameter space of the latent scientific parameters, and it obtains a model coherence estimate, P(coh). The P(coh) statistic can be valuable for designing experiments and comparing competing models, and can be helpful in redesigning flawed models. Examples are provided in which, for small sample sizes, the PPM point estimates achieved greater precision than the maximum likelihood estimator (MLE).

7.
In recent years, a number of statistical models have been proposed for high-level image analysis tasks such as object recognition. In general, however, these models remain hard to use in practice, partly as a result of their complexity and partly through lack of software. In this paper we concentrate on a particular deformable template model which has proved potentially useful for locating and labelling cells in microscope slides (Rue and Hurn, 1999). This model requires the specification of a number of rather non-intuitive parameters which control the shape variability of the deformed templates. Our goal is to arrange the estimation of these parameters in such a way that the microscope user's expertise is exploited to provide the necessary training data graphically, by identifying a number of cells displayed on a computer screen, with no additional statistical input required. We use maximum likelihood estimation incorporating the error structure in the generation of our training data.

8.
Autoregressive moving average (ARMA) time series model fitting is a procedure often based on aggregate data, where parameter estimation plays a key role. We therefore analyze the effect of temporal aggregation on the accuracy of parameter estimation for mixed ARMA and MA models. We derive the expressions required to compute the parameter values of the aggregate models as functions of the basic model parameters in order to compare their estimation accuracy. A simulation experiment shows that aggregation causes a severe loss of accuracy that increases with the order of aggregation.
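
A toy simulation conveys the phenomenon for an AR(1) with non-overlapping sums as the aggregation scheme (an illustration of the accuracy loss, not the paper's analytical derivation): the aggregated series is shorter and its lag-1 autocorrelation estimate noisier.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_ar1(phi, n):
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.normal()
    return y

def lag1_corr(y):
    return np.corrcoef(y[:-1], y[1:])[0, 1]

phi, n, k = 0.8, 3000, 3       # k = order of aggregation (non-overlapping sums)
basic, aggr = [], []
for _ in range(200):
    y = simulate_ar1(phi, n)
    basic.append(lag1_corr(y))
    z = y[: n - n % k].reshape(-1, k).sum(axis=1)    # aggregated series
    aggr.append(lag1_corr(z))
print("disaggregate lag-1 corr: mean %.3f, sd %.3f" % (np.mean(basic), np.std(basic)))
print("aggregated   lag-1 corr: mean %.3f, sd %.3f" % (np.mean(aggr), np.std(aggr)))
```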

9.
Multiple-membership logit models with random effects are models for clustered binary data where each statistical unit can belong to more than one group. The likelihood function of these models is analytically intractable. We propose two different approaches for parameter estimation: indirect inference and data cloning (DC). The former is a non-likelihood-based method which uses an auxiliary model to select reasonable estimates. We propose an auxiliary model with the same dimension of parameter space as the target model, which makes it particularly convenient to reach good estimates quickly. The latter method computes maximum likelihood estimates through the posterior distribution of an adequate Bayesian model fitted to cloned data. We implement a DC algorithm specifically for multiple-membership models. A Monte Carlo experiment compares the two methods on simulated data. For further comparison, we also report Bayesian posterior mean and Integrated Nested Laplace Approximation hybrid DC estimates. Simulations show a negligible loss of efficiency for the indirect inference estimator, compensated by a substantial computational gain. The approaches are then illustrated with two real examples on matched paired data.
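
The data-cloning idea is easiest to see in a conjugate toy model (not the multiple-membership implementation of the paper): with K clones of the data the posterior concentrates at the MLE, and K times the posterior variance approaches the asymptotic MLE variance.

```python
from scipy import stats

# Data cloning in a conjugate toy model: Bernoulli probability with a
# Beta(2, 2) prior. The posterior given K clones is Beta(a + K*s, b + K*(n-s)).
s, n = 13, 40                      # 13 successes in 40 trials; MLE = 0.325
a, b = 2.0, 2.0
for K in (1, 10, 100, 1000):
    post = stats.beta(a + K * s, b + K * (n - s))
    # Posterior mean -> MLE; K * posterior variance -> asymptotic MLE variance.
    print("K=%4d  mean=%.4f  K*var=%.5f" % (K, post.mean(), K * post.var()))
print("MLE = %.4f, p(1-p)/n = %.5f" % (s / n, (s / n) * (1 - s / n) / n))
```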

10.
The transformed likelihood approach to estimating fixed-effects dynamic panel data models has been shown to have very good inferential properties, but it is not directly implemented in the most widely used statistical software. This paper shows how a simple model reformulation can recast the problem in terms of classical linear mixed models. The transformed likelihood approach is based on the first-differences data transformation; the results that follow derive from a convenient reformulation in terms of deviations from the first observations. Given the invariance to data transformation, the likelihood functions defined in the two cases coincide. Because it results in a classical random-effects linear model, the proposed approach significantly enlarges the set of available estimation procedures and provides a straightforward interpretation for the parameters. Moreover, the proposed model specification allows one to exploit all the estimation improvements typical of the random-effects model literature. Simulation studies are conducted to study the robustness of the estimation method to violations of mean stationarity.

11.
The Box–Jenkins methodology for modeling and forecasting from univariate time series models has long been considered a standard to which other forecasting techniques have been compared. To a Bayesian statistician, however, the method lacks an important facet: a provision for modeling uncertainty about parameter estimates. We present a technique called sampling the future for including this feature in both the estimation and forecasting stages. Although it is relatively easy to use Bayesian methods to estimate the parameters in an autoregressive integrated moving average (ARIMA) model, there are severe difficulties in producing forecasts from such a model. The multiperiod predictive density does not have a convenient closed form, so approximations are needed. In this article, exact Bayesian forecasting is approximated by simulating the joint predictive distribution. First, parameter sets are randomly generated from the joint posterior distribution. These are then used to simulate future paths of the time series. This bundle of many possible realizations is used to project the future in several ways. Highest probability forecast regions are formed and portrayed with computer graphics. The predictive density's shape is explored. Finally, we discuss a method that allows the analyst to subjectively modify the posterior distribution on the parameters and produce alternate forecasts.
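
A compact sketch of "sampling the future" for an AR(1) under a flat prior and the conditional likelihood (a simplification of the ARIMA setting in the article): draw parameter sets from the posterior, then let each draw generate one future path; all data here are simulated.

```python
import numpy as np

rng = np.random.default_rng(5)

# A simulated AR(1) series stands in for real data.
n, phi_true = 200, 0.7
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal()

# Conditional-likelihood regression y_t = c + phi*y_{t-1} + e_t, flat prior:
# sigma^2 | data is scaled inverse chi-square, (c, phi) | sigma^2 is normal.
X = np.column_stack([np.ones(n - 1), y[:-1]])
Y = y[1:]
XtX_inv = np.linalg.inv(X.T @ X)
b_hat = XtX_inv @ X.T @ Y
ssr = ((Y - X @ b_hat) ** 2).sum()
nu = len(Y) - 2
chol = np.linalg.cholesky(XtX_inv)

H, draws = 12, 5000                 # forecast horizon, posterior sample size
paths = np.empty((draws, H))
for i in range(draws):
    sigma2 = ssr / rng.chisquare(nu)                     # draw sigma^2 | data
    c, phi = b_hat + np.sqrt(sigma2) * (chol @ rng.normal(size=2))
    yprev = y[-1]
    for h in range(H):                                   # one simulated future
        yprev = c + phi * yprev + rng.normal(scale=np.sqrt(sigma2))
        paths[i, h] = yprev

lo, hi = np.quantile(paths, [0.05, 0.95], axis=0)        # 90% predictive band
print("lower:", np.round(lo, 2))
print("upper:", np.round(hi, 2))
```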

12.
The paper introduces DT-optimum designs that provide a specified balance between model discrimination and parameter estimation. An equivalence theorem is presented for the case of two models and extended to an arbitrary number of models and of combinations of parameters. A numerical example shows the properties of the procedure. The relationship with other design procedures for parameter estimation and model discrimination is discussed.

13.
Because spatial regression models incorporate spatial geographic information, their parameter estimation becomes complicated; since maximum likelihood is the main approach used, it is commonly believed that least squares has no place in estimating spatial regression models. An analysis of the estimation techniques for spatial regression models shows that least squares and maximum likelihood are in fact used to estimate different parameters of the model, and only by combining the two can the full set of parameters be estimated quickly and effectively. Mathematical derivation shows that the least-squares estimator of the regression parameters in a spatial regression model is the best linear unbiased estimator. The regression parameters can be tested for significance under normality of the estimators, whereas the spatial-effect parameters cannot be tested in this way.
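
One standard way to combine the two estimators as described above, sketched for a spatial error model on simulated data: ordinary least squares recovers the regression coefficients for any candidate value of the spatial parameter, while a concentrated (profile) likelihood, maximized by grid search, picks the spatial parameter itself.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy spatial error model y = X*b + u, u = rho*W*u + e, on a ring of n sites
# with row-standardised nearest-neighbour weights (all values hypothetical).
n = 60
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5
X = np.column_stack([np.ones(n), rng.normal(size=n)])
b_true, rho_true = np.array([1.0, 2.0]), 0.6
u = np.linalg.solve(np.eye(n) - rho_true * W, rng.normal(size=n))
y = X @ b_true + u

def profile_loglik(rho):
    """Concentrated log-likelihood: beta and sigma^2 are profiled out by an
    OLS step on the spatially filtered data, leaving a 1-D search over rho."""
    A = np.eye(n) - rho * W
    ys, Xs = A @ y, A @ X
    b = np.linalg.lstsq(Xs, ys, rcond=None)[0]   # OLS gives beta given rho
    s2 = ((ys - Xs @ b) ** 2).mean()
    logdet = np.linalg.slogdet(A)[1]             # Jacobian term from ML
    return logdet - 0.5 * n * np.log(s2), b

grid = np.linspace(-0.9, 0.9, 181)
lls = [profile_loglik(r)[0] for r in grid]
rho_hat = grid[int(np.argmax(lls))]
print("rho_hat = %.2f, beta_hat =" % rho_hat, profile_loglik(rho_hat)[1])
```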

14.
Breast cancer is one of the diseases with the most profound impact on health in developed countries, and mammography is the most popular method for detecting breast cancer at a very early stage. This paper focuses on the waiting period from a positive mammogram until a confirmatory diagnosis is carried out in hospital. Generalized linear mixed models are used to perform the statistical analysis, always within the Bayesian framework. Markov chain Monte Carlo algorithms, run through the free software WinBUGS, are applied for estimation by simulating the posterior distribution of the parameters and hyperparameters of the model.

15.
To address the problems that the traditional cross-classified credibility model is computationally complex and cannot yield unbiased posterior estimates of the structural parameters when prior information is insufficient, this paper applies MCMC simulation and GLMM methods to an empirical analysis of the cross-classified credibility model and demonstrates the model's validity. The results show that the MCMC-based approach can dynamically simulate the posterior distributions of the parameters and improve estimation precision, while the GLMM-based approach greatly simplifies the computation and is convenient to operate, allowing model selection through graphics and other diagnostic tools and an assessment of the model's practical usefulness.

16.
Hierarchical models are popular in many applied statistics fields, including small area estimation. One well-known model in this field is the Fay–Herriot model, in which the unobservable parameters are assumed to be Gaussian. In hierarchical models, assumptions about unobservable quantities are difficult to check. For a special case of the Fay–Herriot model, Sinharay and Stern [2003. Posterior predictive model checking in hierarchical models. J. Statist. Plann. Inference 111, 209–221] showed that violations of the assumptions about the random effects are difficult to detect using posterior predictive checks. In the present paper we consider two extensions of the Fay–Herriot model in which the random effects are assumed to follow either an exponential power (EP) distribution or a skewed EP distribution. We aim to explore the robustness of the Fay–Herriot model for the estimation of individual area means as well as of the empirical distribution function of their 'ensemble'. Our findings, based on a simulation experiment, are largely consistent with those of Sinharay and Stern as far as the efficient estimation of individual small area parameters is concerned. However, when estimating the empirical distribution function of the 'ensemble' of small area parameters, results are more sensitive to failures of the distributional assumptions.
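
A small simulation sketch of the basic Gaussian Fay–Herriot machinery underlying the extensions above: a moment estimate of the random-effect variance followed by empirical Bayes shrinkage of the direct area estimates (OLS rather than GLS is used for the regression coefficients, for brevity).

```python
import numpy as np

rng = np.random.default_rng(7)

# Fay-Herriot model: y_i = theta_i + e_i with theta_i = x_i'b + v_i,
# v_i ~ N(0, A) and e_i ~ N(0, D_i), the D_i being known sampling variances.
m = 30
x = np.column_stack([np.ones(m), rng.normal(size=m)])
b_true, A_true = np.array([2.0, 1.0]), 0.5
D = rng.uniform(0.2, 1.0, m)                          # known design variances
theta = x @ b_true + rng.normal(scale=np.sqrt(A_true), size=m)
y = theta + rng.normal(scale=np.sqrt(D), size=m)

# Moment (Prasad-Rao type) estimate of A from OLS residuals, truncated at 0.
b_ols = np.linalg.lstsq(x, y, rcond=None)[0]
resid = y - x @ b_ols
lev = np.diag(x @ np.linalg.inv(x.T @ x) @ x.T)       # leverages
A_hat = max(0.0, (resid @ resid - (D * (1 - lev)).sum()) / (m - x.shape[1]))

# Empirical Bayes shrinkage of each direct estimate toward the synthetic part.
gamma = A_hat / (A_hat + D)
theta_eb = gamma * y + (1 - gamma) * (x @ b_ols)
print("A_hat = %.3f" % A_hat)
print("MSE: direct %.3f vs EB %.3f"
      % (((y - theta) ** 2).mean(), ((theta_eb - theta) ** 2).mean()))
```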

17.
Bayesian analysis of single-molecule experimental data
Recent advances in experimental technologies allow scientists to follow biochemical processes on a single-molecule basis, which provides much richer information about chemical dynamics than traditional ensemble-averaged experiments but also raises many new statistical challenges. The paper provides the first likelihood-based statistical analysis of the single-molecule fluorescence lifetime experiment designed to probe the conformational dynamics of a single deoxyribonucleic acid (DNA) hairpin molecule. The conformational change is initially treated as a continuous-time two-state Markov chain, which is not observable and must be inferred from changes in photon emissions. This model is further complicated by unobserved molecular Brownian diffusions. Beyond the simple two-state model, a competing model that treats the energy barrier between the two states of the DNA hairpin as an Ornstein–Uhlenbeck process has been suggested in the literature. We first derive the likelihood function of the simple two-state model and then generalize the method to handle complications such as unobserved molecular diffusions and the fluctuating energy barrier. Data augmentation techniques and Markov chain Monte Carlo methods are developed to sample from the desired posterior distribution. The Bayes factor calculation and posterior estimates of relevant parameters indicate that the fluctuating-barrier model fits the data better than the simple two-state model.

18.
We consider a general class of prior distributions for nonparametric Bayesian estimation which uses finite random series with a random number of terms. A prior is constructed through distributions on the number of basis functions and the associated coefficients. We derive a general result on adaptive posterior contraction rates for all smoothness levels of the target function in the true model by constructing an appropriate 'sieve' and applying the general theory of posterior contraction rates. We apply this general result to several statistical problems such as density estimation, various nonparametric regressions, classification, spectral density estimation and functional regression. The prior can be viewed as an alternative to the commonly used Gaussian process prior, but properties of the posterior distribution can be analysed by relatively simpler techniques. An interesting approximation property of B-spline basis expansions established in this paper permits a canonical choice of prior on the coefficients in a random series and a simple computational approach that avoids Markov chain Monte Carlo methods. A simulation study shows that the accuracy of the Bayesian estimators based on the random series prior and the Gaussian process prior is comparable. We apply the method to the Tecator data using functional regression models.

19.
In this paper, a novel Bayesian framework is used to derive the posterior density function and the predictive densities for a single future response, a bivariate future response, and several future responses from the exponentiated Weibull model (EWM). We study three related models, the exponentiated exponential, the exponentiated Weibull, and the beta generalized exponential, all of which are fitted to two real data sets to assess goodness of fit. The statistical analysis indicates that the EWM best fits both data sets. We determine the predictive means, standard deviations, highest predictive density intervals, and the shape characteristics for a single future response. We also consider a new parameterization method to determine the posterior kernel densities of the parameters. The summary results for the parameters are calculated using the Markov chain Monte Carlo method.

20.
Approximate Bayesian computation (ABC) is a popular technique for analysing data for complex models where the likelihood function is intractable. It involves using simulation from the model to approximate the likelihood, with this approximate likelihood then being used to construct an approximate posterior. In this paper, we consider methods that estimate the parameters by maximizing the approximate likelihood used in ABC. We give a theoretical analysis of the asymptotic properties of the resulting estimator. In particular, we derive results analogous to those of consistency and asymptotic normality for standard maximum likelihood estimation. We also discuss how sequential Monte Carlo methods provide a natural method for implementing our likelihood-based ABC procedures.
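
A toy sketch of the maximize-the-ABC-likelihood idea on a deliberately tractable model, so the answer can be checked against the exact MLE; a simple grid search stands in for the sequential Monte Carlo implementation discussed in the paper, and the bandwidth and simulation budget are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)

# Observed data from a model we pretend has an intractable likelihood.
theta_true, n = 1.5, 50
x_obs = rng.normal(theta_true, 1.0, n)
s_obs = x_obs.mean()                    # summary statistic

def abc_loglik(theta, sims=2000, h=0.05):
    """ABC approximate log-likelihood: a Gaussian-kernel density estimate of
    the simulated summary statistics, evaluated at the observed summary."""
    s = rng.normal(theta, 1.0, (sims, n)).mean(axis=1)
    k = np.exp(-0.5 * ((s_obs - s) / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    return np.log(k.mean() + 1e-300)

grid = np.linspace(0.5, 2.5, 101)
ll = [abc_loglik(t) for t in grid]
print("ABC-MLE ~ %.3f (exact MLE = %.3f)" % (grid[int(np.argmax(ll))], s_obs))
```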
