20 similar documents retrieved; search took 31 ms
1.
Bayesian estimation for the exponentiated Weibull model under Type II progressive censoring 总被引:1,自引:1,他引:0
Based on progressive Type II censored samples, we have derived the maximum likelihood and Bayes estimators for the two shape
parameters and the reliability function of the exponentiated Weibull lifetime model. We obtained Bayes estimators using both
the symmetric and asymmetric loss functions via squared error loss and linex loss functions. This was done with respect to
the conjugate priors for the two shape parameters. We used an approximation based on the Lindley (Trabajos de Estadística 21, 223–237,
1980) method for obtaining Bayes estimates under these loss functions. We made comparisons between these estimators and the
maximum likelihood estimators using a Monte Carlo simulation study.
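As a concrete illustration of the asymmetric loss mentioned above (a minimal Monte Carlo sketch, not the authors' Lindley-based computation): under the linex loss L(δ, θ) = exp(a(δ − θ)) − a(δ − θ) − 1, the Bayes estimator is δ = −(1/a) log E[exp(−aθ) | data]. The gamma posterior and all parameter values below are purely illustrative.

```python
import numpy as np

# Linex-loss Bayes estimator: delta = -(1/a) * log E[exp(-a*theta) | data].
# Illustration with a known posterior: theta | data ~ Gamma(shape=5, rate=2),
# whose posterior mean (squared-error Bayes estimate) is 5/2 = 2.5.
rng = np.random.default_rng(42)
shape_, rate_ = 5.0, 2.0
draws = rng.gamma(shape_, 1.0 / rate_, size=200_000)  # posterior sample

def linex_bayes(theta_draws, a):
    """Monte Carlo Bayes estimate under linex loss with asymmetry a != 0."""
    return -np.log(np.mean(np.exp(-a * theta_draws))) / a

post_mean = draws.mean()                 # squared-error Bayes estimate
delta_linex = linex_bayes(draws, a=1.0)  # a > 0 penalizes over-estimation more

# Check: for Gamma(k, rate b), E[exp(-a*theta)] = (b/(b+a))^k exactly,
# so delta = (k/a) * log(1 + a/b) = 5 * log(1.5) here.
print(post_mean, delta_linex)
```

With a > 0 the linex estimate sits below the posterior mean, reflecting the heavier penalty on over-estimation.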
2.
The present study deals with the method of estimation of the parameters of k-components load-sharing parallel system model
in which each component’s failure time distribution is assumed to be geometric. The maximum likelihood estimates of the load-share
parameters with their standard errors are obtained. (1 − γ) 100% joint, Bonferroni simultaneous and two bootstrap confidence intervals for the parameters have been constructed. Further,
recognizing the fact that life testing experiments are time consuming, it seems realistic to consider the load-share parameters
to be random variables. Therefore, Bayes estimates along with their standard errors of the parameters are obtained by assuming
Jeffrey's invariant and gamma priors for the unknown parameters. Since the Bayes estimators cannot be found in closed-form expressions,
Tierney and Kadane's approximation method has been used to compute Bayes estimates and standard errors of the parameters.
Markov Chain Monte Carlo technique such as Gibbs sampler is also used to obtain Bayes estimates and highest posterior density
credible intervals of the load-share parameters. Metropolis–Hastings algorithm is used to generate samples from the posterior
distributions of the unknown parameters.
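The Metropolis–Hastings step used above can be sketched generically (this is a minimal illustration, not the load-sharing model of the abstract): sample the posterior of an exponential rate λ under a Gamma(a, b) prior, where the exact posterior Gamma(a + n, b + Σx) is available as a check. Data and prior values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: exponential lifetimes with true rate 2.0
x = rng.exponential(scale=0.5, size=50)
n, s = len(x), x.sum()
a, b = 2.0, 1.0  # Gamma(a, b) prior on the rate lam

def log_post(lam):
    # log posterior of Gamma(a + n, b + s), up to an additive constant
    return (a + n - 1) * np.log(lam) - (b + s) * lam

# random-walk Metropolis-Hastings on lam
chain, lam = [], 1.0
lp = log_post(lam)
for _ in range(30_000):
    prop = lam + rng.normal(0.0, 0.3)
    if prop > 0:  # reject proposals outside the support
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            lam, lp = prop, lp_prop
    chain.append(lam)

post = np.array(chain[5_000:])        # drop burn-in
exact_mean = (a + n) / (b + s)        # conjugate posterior mean, for checking
print(post.mean(), exact_mean)
```

The chain mean should agree with the conjugate posterior mean to Monte Carlo accuracy; in non-conjugate models such as the load-sharing system, the same loop applies with `log_post` replaced by the actual log posterior.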
3.
Andrew Redd 《Statistics and Computing》2012,22(1):251-257
Through the use of a matrix representation for B-splines presented by Qin (Vis. Comput. 16:177–186, 2000) we are able to reexamine calculus operations on B-spline basis functions. In this matrix framework the problem associated
with generating orthogonal splines is reexamined, and we show that this approach can simplify the operations involved to linear
matrix operations. We apply these results to a recent paper (Zhou et al. in Biometrika 95:601–619, 2008) on hierarchical functional data analysis using a principal components approach, where a numerical integration scheme was
used to orthogonalize a set of B-spline basis functions. These orthogonalized basis functions, along with their estimated
derivatives, are then used to construct estimates of mean functions and functional principal components. By applying the methods
presented here such algorithms can benefit from increased speed and precision. An R package is available to do the computations.
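The linear-algebra route can be sketched numerically: build a B-spline basis, form its Gram matrix of inner products on a fine grid, and orthonormalize through a Cholesky factor, so orthogonalization reduces to one triangular solve. This is a generic numerical sketch, not Redd's matrix representation; the knots, degree, and grid below are illustrative.

```python
import numpy as np

def bspline_basis(x, t, k):
    """Evaluate all degree-k B-spline basis functions (Cox-de Boor) at points x."""
    x = np.asarray(x, float)[:, None]
    t = np.asarray(t, float)
    B = ((t[:-1] <= x) & (x < t[1:])).astype(float)  # degree-0 indicators
    for d in range(1, k + 1):
        den_l = t[d:-1] - t[:-d - 1]
        den_r = t[d + 1:] - t[1:-d]
        # guard the 0/0 cases caused by repeated (clamped) knots
        left = np.where(den_l > 0, (x - t[:-d - 1]) / np.where(den_l > 0, den_l, 1.0), 0.0)
        right = np.where(den_r > 0, (t[d + 1:] - x) / np.where(den_r > 0, den_r, 1.0), 0.0)
        B = left * B[:, :-1] + right * B[:, 1:]
    return B

k = 3                                                          # cubic splines
t = np.r_[np.zeros(k), np.linspace(0.0, 1.0, 8), np.ones(k)]   # clamped knots
x = np.linspace(0.0, 1.0, 4000, endpoint=False)
dx = x[1] - x[0]

B = bspline_basis(x, t, k)       # n_points x n_basis
G = dx * B.T @ B                 # Gram matrix of L2 inner products
L = np.linalg.cholesky(G)        # G = L L^T
Q = np.linalg.solve(L, B.T).T    # orthonormalized basis: B L^{-T}

print(np.max(np.abs(dx * Q.T @ Q - np.eye(Q.shape[1]))))
```

Because `Q` is a fixed linear transform of `B`, derivatives of the orthonormal basis follow from the same triangular solve applied to the B-spline derivatives, avoiding per-function numerical integration.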
4.
This paper is concerned with a partially explosive linear model with polynomial regression components generating a pair of
related time series. The least squares estimates of the coefficients are shown to be √N-consistent and asymptotically singular
normal when the degrees of the polynomial regression components are the same, thus generalising a result due to Venkataraman (1974).
5.
Michael Kohler 《AStA Advances in Statistical Analysis》2008,92(2):153-178
American options in discrete time can be priced by solving optimal stopping problems. This can be done by computing so-called
continuation values, which we represent as regression functions defined recursively by using the continuation values of the
next time step. We use Monte Carlo to generate data, and then we apply smoothing spline regression estimates to estimate the
continuation values from these data. All parameters of the estimate are chosen data dependent. We present results concerning
consistency and the estimates’ rate of convergence.
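The backward recursion described above can be sketched with a simple regression stand-in (ordinary polynomial least squares rather than the smoothing splines studied in the paper); the Bermudan put, the geometric Brownian motion dynamics, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative Bermudan put under geometric Brownian motion
s0, strike, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 50, 20_000
dt = T / n_steps
disc = np.exp(-r * dt)

# simulate price paths (columns are the exercise dates after time 0)
z = rng.standard_normal((n_paths, n_steps))
log_s = np.log(s0) + np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
s = np.exp(log_s)

payoff = lambda x: np.maximum(strike - x, 0.0)

# backward induction: regress discounted continuation values on current price
value = payoff(s[:, -1])
for t in range(n_steps - 2, -1, -1):
    value *= disc
    itm = payoff(s[:, t]) > 0                             # regress on in-the-money paths only
    coeffs = np.polyfit(s[itm, t] / strike, value[itm], 3)
    cont = np.polyval(coeffs, s[:, t] / strike)           # estimated continuation value
    exercise = itm & (payoff(s[:, t]) > cont)
    value[exercise] = payoff(s[exercise, t])

price = disc * value.mean()
print(price)
```

Swapping the cubic polynomial for a data-driven smoother (as in the paper) changes only the two regression lines inside the loop.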
6.
Kyeongjun Lee 《Journal of applied statistics》2017,44(5):811-832
In this paper, the estimation of the parameters, reliability and hazard functions of an inverted exponentiated half logistic distribution (IEHLD) from progressive Type II censored data has been considered. The Bayes estimates for the progressive Type II censored IEHLD under asymmetric and symmetric loss functions such as the squared error, general entropy and linex loss functions are provided. The Bayes estimates for the progressive Type II censored IEHLD parameters, reliability and hazard functions are also obtained under balanced loss functions. Since the Bayes estimates cannot be obtained explicitly, the Lindley approximation method and an importance sampling procedure are considered to obtain them. Furthermore, the asymptotic normality of the maximum likelihood estimates is used to obtain approximate confidence intervals. The highest posterior density credible intervals of the parameters based on the importance sampling procedure are computed. Simulations are performed to assess the performance of the proposed estimates. For illustrative purposes, two data sets have been analyzed.
7.
Bhupendra Singh Shubhi Rathi Sachin Kumar 《Journal of Statistical Computation and Simulation》2013,83(1):1-24
This study focuses on the classical and Bayesian analysis of a k-components load-sharing parallel system in which components have time-dependent failure rates. In the classical set up, the maximum likelihood estimates of the load-share parameters with their standard errors (SEs) are obtained. (1 − γ) 100% simultaneous and two bootstrap confidence intervals for the parameters and system reliability and hazard functions have been constructed. Further, on recognizing the fact that life-testing experiments are very time consuming, the parameters involved in the failure time distribution of the system are expected to follow some random variations. Therefore, Bayes estimates along with their posterior SEs of the parameters and system reliability and hazard functions are obtained by assuming gamma and Jeffrey's priors for the unknown parameters. Markov chain Monte Carlo techniques such as the Gibbs sampler have been used to obtain Bayes estimates and highest posterior density credible intervals.
8.
In this paper we have considered the problem of finding admissible estimates for a fairly general class of parametric functions in the so-called "non-regular" type of densities. Following Karlin's (1958) technique, we have established the admissibility of generalized Bayes estimates and Pitman estimates. Some examples are discussed.
9.
D. F. Andrews 《Statistics and Computing》2001,11(1):7-16
This paper shows how procedures for computing moments and cumulants may themselves be computed from a few elementary identities. Many parameters, such as variance, may be expressed or approximated as linear combinations of products of expectations. The estimates of such parameters may be expressed as the same linear combinations of products of averages. The moments and cumulants of such estimates may be computed in a straightforward way if the terms of the estimates, moments and cumulants are represented as lists and the expectation operation defined as a transformation of lists. Vector space considerations lead to a unique representation of terms and hence to a simplification of results. Basic identities relating variables and their expectations induce transformations of lists, which transformations may be computed from the identities. In this way procedures for complex calculations are computed from basic identities. The procedures permit the calculation of results which would otherwise involve complementary set partitions, k-statistics, and pattern functions. The examples include the calculation of unbiased estimates of cumulants, of cumulants of these, and of moments of bootstrap estimates.
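The first two k-statistics mentioned above are, for instance, k2 = n·m2/(n − 1) and k3 = n²·m3/((n − 1)(n − 2)), which are exactly unbiased for the second and third cumulants; the sketch below verifies this by simulation (the exponential data and sample sizes are illustrative).

```python
import numpy as np

def k_stats(x):
    """First unbiased cumulant estimates (k-statistics) of a 2-D array, row-wise."""
    n = x.shape[1]
    c = x - x.mean(axis=1, keepdims=True)
    m2 = (c**2).mean(axis=1)
    m3 = (c**3).mean(axis=1)
    k2 = n * m2 / (n - 1)                    # unbiased for kappa_2 (variance)
    k3 = n**2 * m3 / ((n - 1) * (n - 2))     # unbiased for kappa_3
    return k2, k3

rng = np.random.default_rng(7)
# Exponential(1) has cumulants kappa_2 = 1 and kappa_3 = 2
samples = rng.exponential(1.0, size=(200_000, 10))
k2, k3 = k_stats(samples)
print(k2.mean(), k3.mean())  # averages over replications approach 1 and 2
```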
10.
Saralees Nadarajah 《Statistical Papers》2009,50(3):605-615
The Student’s t distribution has become increasingly prominent and is considered as a competitor to the normal distribution. Motivated by
real examples in Physics, decision sciences and Bayesian statistics, a new t distribution is introduced by taking the product of two Student’s t pdfs. Various structural properties of this distribution are derived, including its cdf, moments, mean deviation about the
mean, mean deviation about the median, entropy, asymptotic distribution of the extreme order statistics, maximum likelihood
estimates and the Fisher information matrix. Finally, an application to a Bayesian testing problem is illustrated.
11.
Husam Awni Bayoud 《Communications in Statistics – Theory and Methods》2013,42(1):71-82
The shape parameter of the Topp–Leone distribution is estimated in this article from the Bayesian viewpoint under the assumption of a known scale parameter. Bayes and empirical Bayes estimates of the unknown parameter are proposed under noninformative and suitable conjugate priors. These estimates are derived under the squared error and linear-exponential (linex) loss functions. The risk functions of the proposed estimates are derived in analytical forms. It is shown that the proposed estimates are minimax and admissible. The consistency of the proposed estimates under the squared error loss function is also proved. Numerical examples are provided.
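The conjugate structure can be sketched concretely: with cdf F(x) = (x(2 − x))^ν on (0, 1), the likelihood is an exponential family in T = −Σ log(xᵢ(2 − xᵢ)), so a Gamma(a, b) prior on ν yields a Gamma(a + n, b + T) posterior, and the squared-error Bayes estimate is (a + n)/(b + T). The prior values below are illustrative and not tied to the article's choices.

```python
import numpy as np

rng = np.random.default_rng(9)

nu_true, n = 2.0, 500
# inverse-cdf sampling: F(x) = (x*(2-x))**nu  =>  x = 1 - sqrt(1 - u**(1/nu))
u = rng.uniform(size=n)
x = 1.0 - np.sqrt(1.0 - u ** (1.0 / nu_true))

T = -np.sum(np.log(x * (2.0 - x)))   # sufficient statistic

a, b = 1.0, 1.0                      # illustrative Gamma(a, b) prior on nu
nu_bayes = (a + n) / (b + T)         # posterior mean = squared-error Bayes estimate
nu_mle = n / T                       # maximum likelihood estimate, for comparison
print(nu_bayes, nu_mle)
```

For large n the Bayes estimate and the MLE agree closely, as the data term T dominates the prior.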
12.
Annaliisa Kankainen Sara Taskinen Hannu Oja 《Statistical Methods and Applications》2007,16(3):357-379
Classical univariate measures of asymmetry such as Pearson’s (mean-median)/σ or (mean-mode)/σ often measure the standardized
distance between two separate location parameters and have been widely used in assessing univariate normality. Similarly,
measures of univariate kurtosis are often just ratios of two scale measures. The classical standardized fourth moment and
the ratio of the mean deviation to the standard deviation serve as examples. In this paper we consider tests of multinormality
which are based on the Mahalanobis distance between two multivariate location vector estimates or on the (matrix) distance
between two scatter matrix estimates, respectively. Asymptotic theory is developed to provide approximate null distributions
as well as to consider asymptotic efficiencies. Limiting Pitman efficiencies for contiguous sequences of contaminated normal
distributions are calculated and the efficiencies are compared to those of the classical tests by Mardia. Simulations are
used to compare finite sample efficiencies. The theory is also illustrated by an example.
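For reference, the classical Mardia kurtosis statistic mentioned above is b₂,d = n⁻¹ Σᵢ ((xᵢ − x̄)ᵀ S⁻¹ (xᵢ − x̄))², which is close to d(d + 2) in expectation under multinormality; a minimal check (dimension and sample size illustrative):

```python
import numpy as np

def mardia_kurtosis(x):
    """Mardia's multivariate kurtosis b_{2,d} for an n x d data matrix."""
    n, d = x.shape
    c = x - x.mean(axis=0)
    s_inv = np.linalg.inv(c.T @ c / n)           # inverse sample covariance
    md2 = np.einsum("ij,jk,ik->i", c, s_inv, c)  # squared Mahalanobis distances
    return np.mean(md2 ** 2)

rng = np.random.default_rng(3)
x = rng.standard_normal((20_000, 3))
b2 = mardia_kurtosis(x)
print(b2)  # close to d*(d+2) = 15 under trivariate normality
```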
13.
Piero Demetrio Falorsi Giorgio Alleva Fabio Bacchini Roberto Iannaccone 《Statistical Methods and Applications》2005,14(1):83-99
Various approaches to obtaining estimates based on preliminary data are outlined. A case is then considered which frequently
arises when selecting a subsample of units, the information for which is collected within a deadline that allows preliminary
estimates to be produced. At the moment when these estimates have to be produced it often occurs that, although the collection
of data on subsample units is still not complete, information is available on a set of units which does not belong to the
sample selected for the production of the preliminary estimates. An estimation method is proposed which allows all the data
available on a given date to be used in full, and the expressions for the expectation and variance are derived. The proposal
is based on two-phase sampling theory and on the hypothesis that the response mechanism is the result of random processes
whose parameters can be suitably estimated. An empirical analysis of the performance of the estimator on the Italian Survey
on building permits concludes the work.
The Sects. 1,2,3,4 and the technical appendixes have been developed by Giorgio Alleva and Piero Demetrio Falorsi; Sect. 5
has been done by Fabio Bacchini and Roberto Iannaccone.
Piero Demetrio Falorsi is chief statistician at the Italian National Institute of Statistics (ISTAT); Giorgio Alleva is Professor
of Statistics at the University "La Sapienza" of Rome; Fabio Bacchini and Roberto Iannaccone are researchers at ISTAT.
14.
A loss function proposed by Wasan (1970) is well suited as a measure of inaccuracy for an estimator of a scale parameter of a distribution defined on R+ = (0, ∞). We refer to this loss function as the K-loss function. A relationship between the K-loss and squared error loss functions is discussed, and an optimal estimator for a scale parameter with known coefficient of variation under the K-loss function is presented.
15.
We evaluate MCMC sampling schemes for a variety of link functions in generalized linear models with Dirichlet process random
effects. First, we find that there is a large amount of variability in the performance of MCMC algorithms, with the slice
sampler typically being less desirable than either a Kolmogorov–Smirnov mixture representation or a Metropolis–Hastings algorithm.
Second, in fitting the Dirichlet process, dealing with the precision parameter has troubled model specifications in the past.
Here we find that incorporating this parameter into the MCMC sampling scheme is not only computationally feasible, but also
results in a more robust set of estimates, in that they are marginalized over rather than conditioned upon. Applications are
provided with social science problems in areas where the data can be difficult to model, and we find that the nonparametric
nature of the Dirichlet process priors for the random effects leads to improved analyses with more reasonable inferences.
16.
M. Gharib 《Australian & New Zealand Journal of Statistics》1998,40(1):95-102
This paper obtains some estimates for the rate of convergence in the multi-dimensional central limit theorem for vector-valued functions of a homogeneous Markov chain without assuming the finiteness of their absolute third moment. These estimates have a universal character and generalize the results that hold when the third moments are finite.
17.
Shahjahan Khan 《Statistical Papers》2009,50(3):511-525
This paper considers multiple regression model with multivariate spherically symmetric errors to determine optimal β-expectation
tolerance regions for the future regression vector (FRV) and future residual sum of squares (FRSS) by using the prediction
distributions of some appropriate functions of future responses. The prediction distribution of the FRV, conditional on the
observed responses, is multivariate Student-t distribution. Similarly, the prediction distribution of the FRSS is a beta distribution. The optimal β-expectation tolerance
regions for the FRV and FRSS have been obtained based on the F-distribution and the beta distribution, respectively. The results in this paper are applicable to the multiple regression model with normal and Student-t errors.
18.
Tanmay Kayal Devendra Pratap Singh Manoj Kumar Rastogi 《Journal of Statistical Computation and Simulation》2017,87(2):348-366
We consider estimation of the unknown parameters of the Chen distribution [Chen Z. A new two-parameter lifetime distribution with bathtub shape or increasing failure rate function. Statist Probab Lett. 2000;49:155–161] with bathtub shape using progressively censored samples. We obtain maximum likelihood estimates by making use of an expectation–maximization algorithm. Different Bayes estimates are derived under squared error and balanced squared error loss functions. It is observed that the associated posterior distribution appears in an intractable form, so we have used an approximation method to compute these estimates. A Metropolis–Hastings algorithm is also proposed and some more approximate Bayes estimates are obtained. Asymptotic confidence intervals are constructed using the observed Fisher information matrix. Bootstrap intervals are proposed as well. Samples generated from the MH algorithm are further used in the construction of HPD intervals. We have also obtained prediction intervals and estimates for future observations in one- and two-sample situations. A numerical study is conducted to compare the performance of the proposed methods using simulations. Finally, we analyse real data sets for illustration purposes.
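The Chen lifetime model above has cdf F(x) = 1 − exp(λ(1 − exp(x^β))); for complete (uncensored) data the likelihood can be maximized directly, a much simpler setting than the progressively censored EM treatment of the paper. The parameter values, starting point, and optimizer choice below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

lam_true, beta_true = 1.0, 0.5      # beta < 1 gives the bathtub-shaped hazard

# inverse-cdf sampling from F(x) = 1 - exp(lam * (1 - exp(x**beta)))
u = rng.uniform(size=2000)
x = (np.log(1.0 - np.log(1.0 - u) / lam_true)) ** (1.0 / beta_true)

def neg_loglik(theta):
    lam, beta = np.exp(theta)       # optimize on the log scale to keep both > 0
    xb = x ** beta
    # log f(x) = log lam + log beta + (beta-1) log x + x^beta + lam (1 - exp(x^beta))
    return -np.sum(np.log(lam) + np.log(beta) + (beta - 1.0) * np.log(x)
                   + xb + lam * (1.0 - np.exp(xb)))

res = minimize(neg_loglik, x0=np.log([0.5, 1.0]), method="Nelder-Mead")
lam_hat, beta_hat = np.exp(res.x)
print(lam_hat, beta_hat)
```

With censoring, the same log-likelihood gains survival-function terms for the censored units, which is where the EM machinery of the paper comes in.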
19.
In the case where lagged dependent variables are included in the regression model, it is known that the ordinary least
squares estimates (OLSE) are biased in small samples and that the bias increases as the number of irrelevant variables
increases. In this paper, based on the bootstrap methods, an attempt is made to obtain the unbiased estimates in autoregressive
and non-Gaussian cases. We propose the residual-based bootstrap method in this paper. Some simulation studies are performed
to examine whether the proposed estimation procedure works well or not. We obtain the results that it is possible to recover
the true parameter values and that the proposed procedure gives us less biased estimators than OLSE.
This paper is a substantial revision of Tanizaki (2000). The normality assumption is adopted in Tanizaki (2000), but it is
not required in this paper. The authors are grateful to an anonymous referee for valuable suggestions and comments. This research
was partially supported by Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research (C)(2) #14530033,
2002–2005, for H. Tanizaki and Grants-in-Aid for the 21st Century COE Program.
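A minimal version of the residual-based idea for an AR(1) fitted by OLS (the paper's setting is more general): since the OLS autoregressive coefficient is biased downward in small samples, the bootstrap distribution of re-estimates locates that bias and can remove it. The series length and coefficient below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

def fit_ar1(y):
    """OLS fit of y_t = c + phi * y_{t-1} + e_t; returns (c, phi, residuals)."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return coef[0], coef[1], y[1:] - X @ coef

# simulate a short AR(1) series, where the OLS bias is noticeable
phi_true, n = 0.8, 50
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.standard_normal()

c_hat, phi_hat, resid = fit_ar1(y)

# residual-based bootstrap: regenerate series from the fitted model
boot = []
for _ in range(1000):
    e = rng.choice(resid - resid.mean(), size=n, replace=True)
    yb = np.zeros(n)
    for t in range(1, n):
        yb[t] = c_hat + phi_hat * yb[t - 1] + e[t]
    boot.append(fit_ar1(yb)[1])

bias = np.mean(boot) - phi_hat   # bootstrap estimate of the OLS bias
phi_bc = phi_hat - bias          # bias-corrected estimate
print(phi_hat, phi_bc)
```

Because the bootstrap resamples the fitted residuals rather than assuming a parametric error law, the same recipe carries over to the non-Gaussian cases discussed in the paper.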
20.
Nikolay Robinzonov Gerhard Tutz Torsten Hothorn 《AStA Advances in Statistical Analysis》2012,96(1):99-122
Many of the popular nonlinear time series models require a priori the choice of parametric functions which are assumed to be appropriate in specific applications. This approach is mainly
used in financial applications, when sufficient knowledge is available about the nonlinear structure between the covariates
and the response. One principal strategy to investigate a broader class of nonlinear time series is the Nonlinear Additive
AutoRegressive (NAAR) model. The NAAR model estimates the lags of a time series as flexible functions in order to detect non-monotone
relationships between current and past observations. We consider linear and additive models for identifying nonlinear relationships.
A componentwise boosting algorithm is applied for simultaneous model fitting, variable selection, and model choice. Thus,
with the application of boosting for fitting potentially nonlinear models we address the major issues in time series modelling:
lag selection and nonlinearity. By means of simulation we compare boosting to alternative nonparametric methods. Boosting
shows a strong overall performance in terms of precise estimations of highly nonlinear lag functions. The forecasting potential
of boosting is examined on the German industrial production (IP); to improve the model’s forecasting quality we include additional
exogenous variables. Thus we address the second major aspect in this paper which concerns the issue of high dimensionality
in models. Allowing additional inputs in the model extends the NAAR model to a broader class of models, namely the NAARX model.
We show that boosting can cope with large models which have many covariates compared to the number of observations.
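Componentwise L2-boosting for lag selection can be sketched in a few lines: each iteration fits every candidate lag separately to the current residuals, keeps only the best one, and takes a small shrinkage step, so relevant lags accumulate coefficient mass while irrelevant lags stay near zero. The data-generating lags and tuning values below are illustrative, not those of the paper (which uses nonlinear base learners).

```python
import numpy as np

rng = np.random.default_rng(8)

# series depending on lags 1 and 3 only
n, p = 600, 6
y = np.zeros(n)
for t in range(3, n):
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 3] + 0.5 * rng.standard_normal()

# design matrix of candidate lags 1..p
X = np.column_stack([y[p - j:n - j] for j in range(1, p + 1)])
target = y[p:]

# componentwise L2-boosting with shrinkage nu
nu, n_iter = 0.1, 500
coef = np.zeros(p)
resid = target - target.mean()
for _ in range(n_iter):
    # fit each lag separately; pick the one that reduces the residuals most
    betas = X.T @ resid / np.einsum("ij,ij->j", X, X)
    scores = betas * (X.T @ resid)       # squared-error decrease per component
    j = np.argmax(scores)
    coef[j] += nu * betas[j]
    resid = resid - nu * betas[j] * X[:, j]

print(np.round(coef, 2))  # coefficient mass concentrates on lags 1 and 3
```

Replacing the one-dimensional linear fit with a smooth base learner per lag yields the NAAR-style additive fits described above, with lag selection falling out of which components boosting chooses to update.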