Similar Articles
 20 similar articles retrieved.
1.
In many linear inverse problems the unknown function f (or its discrete approximation Θ (p×1)), which needs to be reconstructed, is subject to non-negative constraints; we call these problems non-negative linear inverse problems (NNLIPs). This article considers NNLIPs. However, the error distribution is not confined to the traditional Gaussian or Poisson distributions; we adopt the exponential family of distributions, of which Gaussian and Poisson are special cases. We search for the non-negative maximum penalized likelihood (NNMPL) estimate of Θ. The size of Θ often prohibits direct implementation of traditional methods for constrained optimization. Given that the measurements and point-spread-function (PSF) values are all non-negative, we propose a simple multiplicative iterative algorithm. We show that if there is no penalty, then this algorithm converges almost surely; otherwise a relaxation or line search is needed to ensure convergence.
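The unpenalized case of such a multiplicative scheme is, for Poisson errors, the classical Richardson-Lucy/EM iteration. Below is a minimal NumPy sketch of that special case, not the authors' penalized algorithm; the function name and the Poisson specialization are illustrative.

```python
import numpy as np

def multiplicative_nnls(A, y, theta0, n_iter=200):
    """Multiplicative iteration for a non-negative linear inverse problem
    with Poisson errors (Richardson-Lucy/EM form). Non-negativity of A, y
    and theta0 guarantees every iterate stays non-negative."""
    theta = np.asarray(theta0, dtype=float).copy()
    col_sums = A.sum(axis=0)                  # A^T 1; assumed strictly positive
    for _ in range(n_iter):
        yhat = A @ theta                      # predicted mean counts
        ratio = y / np.maximum(yhat, 1e-12)   # guard against division by zero
        theta *= (A.T @ ratio) / col_sums     # multiplicative update
    return theta
```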

2.
ABSTRACT

A third-order accurate approximation to the p-value for testing either the location or scale parameter in a location-scale model with Student(λ) errors is introduced. The third-order approximation is developed via an asymptotic method based on exponential models and the saddlepoint approximation. Techniques are presented for the numerical computation of all quantities required for the third-order approximation. To compare the accuracy of various asymptotic methods, a numerical example and a simulation study are included; they illustrate that the third-order method presented leads to a more accurate p-value approximation than first-order methods in Student(λ) models with small samples.
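For context, the first-order building block underlying such third-order refinements is the Lugannani & Rice saddlepoint tail approximation. A minimal sketch, with the cumulant generating function supplied by the caller and a Gamma tail as an illustrative accuracy check:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def lugannani_rice_tail(K, dK, d2K, x, n=1, bracket=(-50, 50)):
    """First-order saddlepoint (Lugannani & Rice) approximation to
    P(mean of n i.i.d. variables >= x), given the cumulant generating
    function K and its first two derivatives. Assumes x is not the mean
    (so w != 0)."""
    s = brentq(lambda t: dK(t) - x, *bracket)          # saddlepoint: K'(s) = x
    w = np.sign(s) * np.sqrt(2 * n * (s * x - K(s)))   # signed root statistic
    u = s * np.sqrt(n * d2K(s))
    return norm.sf(w) + norm.pdf(w) * (1.0 / u - 1.0 / w)

# Illustrative check against an exact Gamma tail: X ~ Gamma(shape=a, rate=1)
# has K(t) = -a*log(1-t) for t < 1.
from scipy.stats import gamma
a = 3.0
approx = lugannani_rice_tail(lambda t: -a * np.log(1 - t),
                             lambda t: a / (1 - t),
                             lambda t: a / (1 - t) ** 2,
                             x=5.0, bracket=(-50, 0.999))
print(approx, gamma.sf(5.0, a))   # both are about 0.1247
```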

3.
The posterior predictive p-value (ppp) was invented as a Bayesian counterpart to classical p-values. The methodology can be applied to discrepancy measures involving both data and parameters and can, hence, be targeted to check various modeling assumptions. The interpretation can, however, be difficult, since the distribution of the ppp value under the modeling assumptions varies substantially between cases. A calibration procedure has been suggested, treating the ppp value as a test statistic in a prior predictive test. In this paper, we suggest that a prior predictive test may instead be based on the expected posterior discrepancy, which is somewhat simpler both conceptually and computationally. Since both of these methods require the simulation of a large posterior parameter sample for each member of an equally large prior predictive data sample, we furthermore suggest looking for ways to match the given discrepancy by a computation-saving conflict measure. This approach is also based on simulations but only requires sampling from two distributions representing two contrasting information sources about a model parameter. The conflict measure methodology is also more flexible in that it handles non-informative priors without difficulty. We compare the different approaches theoretically in some simple models and in a more complex applied example.
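As a concrete illustration of the quantity being calibrated, here is a toy ppp computation in a conjugate normal model with known variance; the model, prior, and discrepancy (the sample variance) are illustrative choices, not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(0)

def ppp_value(y, sigma=1.0, mu0=0.0, tau0=1.0, M=500):
    """Posterior predictive p-value for the discrepancy D(y) = sample
    variance, in a normal model with known sigma and a conjugate
    N(mu0, tau0^2) prior on the mean."""
    n = len(y)
    # conjugate posterior for the mean theta
    tau_n2 = 1.0 / (1.0 / tau0**2 + n / sigma**2)
    mu_n = tau_n2 * (mu0 / tau0**2 + y.sum() / sigma**2)
    count = 0
    for _ in range(M):
        theta = rng.normal(mu_n, np.sqrt(tau_n2))    # posterior draw
        y_rep = rng.normal(theta, sigma, size=n)     # replicated data
        if y_rep.var() >= y.var():                   # compare discrepancies
            count += 1
    return count / M
```

Calibrating this quantity, as the paper discusses, means repeating the whole computation over many prior predictive data sets, which is what makes the cheaper conflict measure attractive.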

4.
In this paper, a small-sample asymptotic method is proposed for higher-order inference in the stress-strength reliability model R = P(Y < X), where X and Y are independently distributed as Burr-type X distributions. In a departure from the current literature, we allow the scale parameters of the two distributions to differ, and the likelihood-based third-order inference procedure is applied to obtain inference for R. The difficulty in implementing the method lies in obtaining the constrained maximum likelihood estimates (MLEs); a penalized likelihood method is proposed to handle the numerical complications of maximizing the constrained likelihood. The proposed procedures are illustrated using a sample of carbon fibre strength data. Our simulation studies comparing the coverage probabilities of the proposed small-sample asymptotic method with existing large-sample asymptotic methods show that the proposed method is very accurate even when the sample sizes are small.

5.
The two-parameter Gamma distribution is widely used for modeling lifetime distributions in reliability theory. There is much literature on inference for the individual parameters of the Gamma distribution, namely the shape parameter k and the scale parameter θ, when the other parameter is known. However, reliability professionals are usually chiefly interested in statistical inference about the mean lifetime μ, which for the Gamma distribution equals the product θk. The problem of inference on the mean μ when both θ and k are unknown has received less attention in the literature. In this paper we review the existing methods for interval estimation of μ. A comparative study indicates that the existing methods are either too approximate, yielding less reliable confidence intervals, or computationally quite complicated, requiring advanced computing facilities. We propose a new, simple method for interval estimation of the Gamma mean and compare its performance with the existing methods. The comparative study showed that the newly proposed, computationally simple optimum power-normal approximation method works best, even for small sample sizes.
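One computationally heavier baseline that such comparisons typically include is a parametric bootstrap interval. A sketch of that baseline, not of the proposed power-normal method, whose details the abstract does not give:

```python
import numpy as np
from scipy import stats

def gamma_mean_ci(x, level=0.95, B=2000, rng=None):
    """Parametric-bootstrap confidence interval for the Gamma mean
    mu = k * theta, with both shape and scale unknown."""
    rng = np.random.default_rng(rng)
    k, _, theta = stats.gamma.fit(x, floc=0)     # MLEs of shape and scale
    n = len(x)
    boot = np.empty(B)
    for b in range(B):
        xb = rng.gamma(k, theta, size=n)         # resample from fitted model
        kb, _, tb = stats.gamma.fit(xb, floc=0)
        boot[b] = kb * tb                        # bootstrap replicate of mu
    alpha = 1 - level
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])
```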

6.
The authors show how saddlepoint techniques lead to highly accurate approximations for Bayesian predictive densities and cumulative distribution functions in stochastic model settings where the prior is tractable, but not necessarily the likelihood or the predictand distribution. They consider more specifically models involving predictions associated with waiting times for semi‐Markov processes whose distributions are indexed by an unknown parameter θ. Bayesian prediction for such processes when they are not stationary is also addressed and the inverse‐Gaussian based saddlepoint approximation of Wood, Booth & Butler (1993) is shown to accurately deal with the nonstationarity whereas the normal‐based Lugannani & Rice (1980) approximation cannot, Their methods are illustrated by predicting various waiting times associated with M/M/q and M/G/1 queues. They also discuss modifications to the matrix renewal theory needed for computing the moment generating functions that are used in the saddlepoint methods.  相似文献   

7.
Guogen Shan. Statistics, 2018, 52(5): 1086-1095
In addition to a point estimate of the probability of response in a two-stage design (e.g. Simon's two-stage design for binary endpoints), confidence limits should be computed and reported. The current method of inverting the p-value function to compute the confidence interval does not guarantee coverage probability in a two-stage setting. The existing exact approach to calculating one-sided limits orders the sample space by the overall number of responses; this approach can be conservative because many sample points share the same limits. We propose a new exact one-sided interval that uses the p-value to order the sample space. Exact intervals are computed directly from binomial distributions, instead of a normal approximation, and both exact intervals preserve the nominal confidence level. The proposed exact interval based on the p-value generally performs better than the other exact interval with regard to the expected length and the simple average length of confidence intervals.
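The single-stage analogue of such an exact limit is the Clopper-Pearson one-sided bound, found by bisection on the binomial tail; the two-stage intervals in the paper replace the simple count ordering below with orderings by overall responses or by p-value. A sketch:

```python
from scipy.stats import binom

def exact_lower_limit(k, n, alpha=0.05, tol=1e-8):
    """One-sided exact (Clopper-Pearson-type) lower confidence limit for a
    binomial proportion: the smallest p with P(X >= k | p) > alpha."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:                     # bisection on p
        p = (lo + hi) / 2
        if binom.sf(k - 1, n, p) > alpha:    # P(X >= k | p) too large
            hi = p
        else:
            lo = p
    return lo

print(exact_lower_limit(k=7, n=20))          # e.g. 7 responses in 20 patients
```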

8.
Observations collected over time are often autocorrelated rather than independent, and sometimes include observations below or above detection limits (i.e. censored values reported as less than or more than a level of detection) and/or missing data. Practitioners commonly discard censored cases or replace these observations with some function of the limit of detection, which often results in biased estimates. Moreover, parameter estimation can be greatly affected by the presence of influential observations in the data. In this paper we derive local influence diagnostic measures for censored regression models with autoregressive errors of order p (hereafter, AR(p)-CR models) on the basis of the Q-function under three useful perturbation schemes. To account for censoring in a likelihood-based estimation procedure for AR(p)-CR models, we use a stochastic approximation version of the expectation-maximisation algorithm. The accuracy of the local influence diagnostic measures in detecting influential observations is explored through empirical studies. The proposed methods are illustrated using data, from a study of total phosphorus concentration, that contain left-censored observations. The methods are implemented in the R package ARCensReg.

9.
Following Genton and Loperfido [Generalized skew-elliptical distributions and their quadratic forms, Ann. Inst. Statist. Math. 57 (2005), pp. 389-401], we say that Z has a generalized skew-normal distribution if its probability density function (p.d.f.) is given by f(z) = 2φ_p(z; ξ, Ω)π(z − ξ), z ∈ ℝ^p, where φ_p(·; ξ, Ω) is the p-dimensional normal p.d.f. with location vector ξ and scale matrix Ω, ξ ∈ ℝ^p, Ω > 0, and π is a skewing function from ℝ^p to ℝ, that is, 0 ≤ π(z) ≤ 1 and π(−z) = 1 − π(z) for all z ∈ ℝ^p. First the distributions of linear transformations of Z are studied, and some moments of Z and its quadratic forms are derived. Next we obtain the joint moment-generating functions (m.g.f.'s) of linear and quadratic forms of Z and investigate conditions for their independence. Finally, explicit forms for the above distributions, m.g.f.'s and moments are derived when π(z) = κ(α⊤z), where α ∈ ℝ^p and κ is the normal, Laplace, logistic or uniform distribution function.
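For the skew-normal case in the final sentence, with κ the standard normal distribution function, the density is straightforward to evaluate; a minimal sketch:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def gsn_pdf(z, xi, Omega, alpha):
    """Density f(z) = 2 * phi_p(z; xi, Omega) * pi(z - xi) with the
    skewing function pi(z) = Phi(alpha' z), i.e. the multivariate
    skew-normal member of the family."""
    z, xi, alpha = (np.asarray(a, dtype=float) for a in (z, xi, alpha))
    base = multivariate_normal.pdf(z, mean=xi, cov=Omega)  # phi_p(z; xi, Omega)
    return 2.0 * base * norm.cdf(alpha @ (z - xi))         # skewing factor
```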

10.
Abstract. We propose an information-theoretic approach to approximating the asymptotic distributions of statistics using maximum entropy (ME) densities. Conventional ME densities are typically defined on a bounded support. For distributions defined on unbounded supports, we use an asymptotically negligible dampening function for the ME approximation such that it is well defined on the real line. We establish order n⁻¹ asymptotic equivalence between the proposed method and the classical Edgeworth approximation for general statistics that are smooth functions of sample means. Numerical examples are provided to demonstrate the efficacy of the proposed method.

11.
Just as frequentist hypothesis tests have been developed to check model assumptions, prior predictive p-values and other Bayesian p-values check prior distributions as well as other model assumptions. These model checks not only suffer from the usual threshold dependence of p-values, but also from the suppression of model uncertainty in subsequent inference. One solution is to transform Bayesian and frequentist p-values for model assessment into a fiducial distribution across the models. Averaging the Bayesian or frequentist posterior distributions with respect to the fiducial distribution can reproduce results from Bayesian model averaging or classical fiducial inference.

12.
For a multivariate linear model, Wilks' likelihood ratio test (LRT) constitutes one of the cornerstone tools. However, the computation of its quantiles under the null or the alternative hypothesis requires complex analytic approximations, and more importantly, these distributional approximations are feasible only for moderate dimensions of the dependent variable, say p ≤ 20. On the other hand, assuming that the data dimension p as well as the number q of regression variables are fixed while the sample size n grows, several asymptotic approximations have been proposed in the literature for Wilks' Λ, including the widely used chi-square approximation. In this paper, we consider necessary modifications to Wilks' test in a high-dimensional context, specifically assuming a high data dimension p and a large sample size n. Based on recent random matrix theory, the correction we propose to Wilks' test is asymptotically Gaussian under the null hypothesis, and simulations demonstrate that the corrected LRT has very satisfactory size and power, certainly in the large-p, large-n context, but also for moderately large data dimensions such as p = 30 or p = 50. As a byproduct, we explain why the standard chi-square approximation fails for high-dimensional data. We also introduce a new procedure for the classical multiple-sample significance test in multivariate analysis of variance which is valid for high-dimensional data.
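For concreteness, here is the classical Bartlett chi-square approximation for one-way MANOVA, the approximation the paper shows breaking down as p grows; group layout and shapes are illustrative:

```python
import numpy as np
from scipy.stats import chi2

def wilks_manova(groups):
    """One-way MANOVA via Wilks' Lambda with Bartlett's chi-square
    approximation. groups: list of (n_g x p) arrays; requires n - g > p
    so the within-group SSCP matrix W is nonsingular."""
    g = len(groups)
    X = np.vstack(groups)
    n, p = X.shape
    grand = X.mean(axis=0)
    W = sum((grp - grp.mean(axis=0)).T @ (grp - grp.mean(axis=0))
            for grp in groups)                            # within-group SSCP
    B = sum(len(grp) * np.outer(grp.mean(axis=0) - grand,
                                grp.mean(axis=0) - grand)
            for grp in groups)                            # between-group SSCP
    lam = np.linalg.det(W) / np.linalg.det(W + B)         # Wilks' Lambda
    stat = -(n - 1 - (p + g) / 2.0) * np.log(lam)         # Bartlett's factor
    df = p * (g - 1)
    return lam, stat, chi2.sf(stat, df)
```

Running this with p comparable to n makes W nearly singular and the chi-square p-value unreliable, which is the regime the corrected, asymptotically Gaussian statistic is designed for.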

13.
14.
In many applications, the parameters of interest are estimated by solving non-smooth estimating functions with U-statistic structure. Because the asymptotic covariance matrix of the estimator generally involves the underlying density function, resampling methods are often used to bypass the difficulty of non-parametric density estimation. Despite its simplicity, the resulting covariance matrix estimator depends on the nature of the resampling, and the method can be time-consuming when the number of replications is large. Furthermore, the inferences are based on a normal approximation that may not be accurate for practical sample sizes. In this paper, we propose a jackknife empirical likelihood-based inferential procedure for non-smooth estimating functions. Standard chi-square distributions are used to calculate the p-value and to construct confidence intervals. Extensive simulation studies and two real examples are provided to illustrate its practical utility.
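A sketch of the generic jackknife empirical likelihood recipe for a scalar parameter: form jackknife pseudo-values, then apply standard empirical likelihood with a chi-square calibration. The Lagrange-multiplier solve is the only numerical step; the names are illustrative.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def jel_pvalue(estimate_fn, data, theta0):
    """Jackknife empirical likelihood p-value for H0: theta = theta0
    (scalar case). estimate_fn maps an array to a scalar, e.g. np.median."""
    n = len(data)
    full = estimate_fn(data)
    # jackknife pseudo-values V_i = n*T - (n-1)*T_(-i)
    V = np.array([n * full - (n - 1) * estimate_fn(np.delete(data, i))
                  for i in range(n)])
    u = V - theta0
    if u.min() >= 0 or u.max() <= 0:       # theta0 outside the convex hull
        return 0.0
    eps = 1e-10                            # keep 1 + lam*u_i strictly positive
    lo = (-1 + eps) / u.max()
    hi = (-1 + eps) / u.min()
    lam = brentq(lambda l: np.sum(u / (1 + l * u)), lo, hi)
    lr = 2 * np.sum(np.log1p(lam * u))     # -2 log empirical likelihood ratio
    return chi2.sf(lr, df=1)
```

For example, `jel_pvalue(np.median, x, 0.0)` tests whether the median of `x` is zero without any density estimation or resampling.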

15.
V. Nekoukhou & H. Bidram. Statistics, 2013, 47(4): 876-887
In this paper, we attempt to introduce another discrete analogue of the generalized exponential distribution of Gupta and Kundu [Generalized exponential distributions, Aust. N. Z. J. Stat. 41(2) (1999), pp. 173-188], different from that of Nekoukhou et al. [A discrete analogue of the generalized exponential distribution, Comm. Stat. Theory Methods, to appear (2011)]. This new discrete distribution, which we call the discrete generalized exponential distribution of the second type (DGE2(α, p)), can be viewed as another generalization of the geometric distribution. We first study some basic distributional and moment properties, as well as the order statistics distributions, of this new family. Certain compounded DGE2(α, p) distributions are also discussed, from which some previous lifetime distributions, such as that of Adamidis and Loukas [A lifetime distribution with decreasing failure rate, Statist. Probab. Lett. 39 (1998), pp. 35-42], follow as corollaries. We then investigate estimation of the parameters involved. Finally, we examine the model with a real data set.
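A hedged reading of the construction: taking the distribution function F(x) = (1 − p^(x+1))^α on the non-negative integers gives a pmf that reduces to the geometric when α = 1, matching the abstract's "generalization of the geometric distribution". A sketch under that assumption:

```python
import numpy as np

def dge2_pmf(x, alpha, p):
    """PMF of a discrete generalized exponential distribution of the
    second type, read as F(x) = (1 - p**(x+1))**alpha on x = 0, 1, 2, ...
    With alpha = 1 this collapses to the geometric pmf p**x * (1 - p)."""
    x = np.asarray(x)
    return (1 - p ** (x + 1)) ** alpha - (1 - p ** x) ** alpha

print(dge2_pmf(np.arange(5), alpha=1.0, p=0.4))   # geometric check
```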

16.
Among reliability systems, one of the most basic is the parallel system. In this article, we consider a parallel system consisting of n identical components with independent lifetimes having a common distribution function F. Under the condition that the system has failed by time t, with t being the 100p-th percentile of F (t = F⁻¹(p), 0 < p < 1), we characterize the probability distributions based on the mean past lifetime of the components of the system. These distributions are described by a specific shape to the left of t and an arbitrary continuous function on the right tail.
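The mean past lifetime that drives this characterization has the closed form E[t − T | T ≤ t] = ∫₀ᵗ F(u) du / F(t), which follows by integration by parts; a direct numerical version:

```python
import math
from scipy.integrate import quad

def mean_past_lifetime(F, t):
    """Mean past lifetime E[t - T | T <= t] = (1/F(t)) * int_0^t F(u) du
    for a lifetime distribution function F with F(t) > 0."""
    integral, _ = quad(F, 0.0, t)
    return integral / F(t)

# Example: unit-rate exponential lifetimes, evaluated at its median.
F = lambda u: 1.0 - math.exp(-u)
print(mean_past_lifetime(F, math.log(2)))
```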

17.
ABSTRACT

In this paper, the stress-strength reliability R is estimated from type II censored samples from Pareto distributions. The classical inference includes the maximum likelihood estimator, an exact confidence interval, and confidence intervals based on the Wald and signed log-likelihood ratio statistics. The Bayesian inference includes the Bayes estimator, the equi-tailed credible interval, and the highest posterior density (HPD) interval, under both informative and non-informative prior distributions. The Bayes estimator of R is obtained using four methods: Lindley's approximation, the Tierney-Kadane method, Monte Carlo integration, and MCMC. We also compare the proposed methods through a simulation study and provide a real example to illustrate them.
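A quick Monte Carlo check that can be run against any of these estimators, for two Pareto laws with a common scale, in which case R = α_y/(α_x + α_y) in closed form; the parameter names are illustrative:

```python
import numpy as np

def stress_strength_mc(alpha_x, alpha_y, scale=1.0, B=100_000, rng=None):
    """Monte Carlo evaluation of R = P(Y < X) for X ~ Pareto(alpha_x) and
    Y ~ Pareto(alpha_y) with a common scale. Closed form for comparison:
    R = alpha_y / (alpha_x + alpha_y)."""
    rng = np.random.default_rng(rng)
    x = scale * (1 + rng.pareto(alpha_x, B))   # classical Pareto samples
    y = scale * (1 + rng.pareto(alpha_y, B))
    return np.mean(y < x)

print(stress_strength_mc(2.0, 3.0), 3.0 / (2.0 + 3.0))
```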

18.
In multiple hypothesis testing, an important problem is estimating the proportion of true null hypotheses. Existing methods are mainly based on the p-values of the individual tests. In this paper, we propose two new estimators of this proportion. One is a natural extension of the commonly used p-value-based methods; the other is based on a mixture distribution. Simulations show that the first method is comparable with existing methods and performs better in some cases, while the method based on a mixture distribution yields accurate estimates even when the variance of the data is large or the difference between the null and alternative hypotheses is very small.
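The best-known member of the p-value-based family that the first estimator extends is Storey's estimator; a sketch:

```python
import numpy as np

def storey_pi0(pvals, lam=0.5):
    """Storey-type estimate of the proportion of true nulls: p-values
    above the threshold lam are dominated by true nulls, whose p-values
    are uniform, so #{p > lam} / (n * (1 - lam)) estimates pi0."""
    pvals = np.asarray(pvals)
    return min(1.0, np.mean(pvals > lam) / (1.0 - lam))
```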

19.
Let X1, …, Xp be independent random variables, all having the same distribution up to a possibly varying unspecified parameter, where each of the p distributions belongs to the family of one-parameter discrete exponential distributions. The problem is to estimate the unknown parameters simultaneously. Hudson (1978) shows that the minimum variance unbiased estimator (MVUE) of the parameters is inadmissible under squared error loss, and proposes estimators better than the MVUE. Essentially, these estimators shrink the MVUE towards the origin. In this paper, we show that estimators shifting the MVUE towards a point other than the origin, or towards a point determined by the observations, can also be obtained.
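A hedged sketch of this idea for the Poisson member of the family: with target = 0 and c = p − 1 the function below recovers the Clevenson-Zidek estimator, which shrinks toward the origin, while the defaults shift toward a data-determined point; the constants are illustrative and this is not the paper's estimator.

```python
import numpy as np

def shrink_mvue(x, target=None, c=None):
    """Shrink the Poisson MVUE (the observations themselves) toward a
    point. target=0, c=p-1 gives the Clevenson-Zidek estimator
    (1 - (p-1)/(sum(x) + p - 1)) * x; the defaults shrink toward the
    grand mean instead, a data-determined point."""
    x = np.asarray(x, dtype=float)
    p = len(x)
    if target is None:
        target = x.mean()                  # data-determined shrinkage point
    if c is None:
        c = p - 1                          # illustrative shrinkage constant
    s = np.sum(np.abs(x - target))
    w = c / (c + s) if s > 0 else 0.0      # shrinkage weight in [0, 1)
    return x - w * (x - target)
```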

20.
In socioeconomic areas, functional observations may be collected with weights; we call these weighted functional data. In this paper, we deal with a general linear hypothesis testing (GLHT) problem in the framework of functional analysis of variance with weighted functional data. With the weights taken into account, we obtain unbiased and consistent estimators of the group mean and covariance functions. For the GLHT problem, we obtain a pointwise F-test statistic and build two global tests, by integrating the pointwise F-test statistic or by taking its supremum over an interval of interest. The asymptotic distributions of the test statistics under the null and some local alternatives are derived, and methods for approximating their null distributions are discussed. An application of the proposed methods to density function data is also presented. Intensive simulation studies and two real data examples show that the proposed tests substantially outperform existing competitors in terms of size control and power.
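An unweighted sketch of the pointwise F-statistic and the two global statistics built from it; the paper's versions additionally carry the sampling weights:

```python
import numpy as np

def pointwise_F(groups):
    """Pointwise one-way ANOVA F-statistic for functional data observed on
    a common grid; groups is a list of (n_g x T) arrays. Returns the curve
    F(t); integrating it or taking its supremum gives the two global
    statistics."""
    k = len(groups)
    n = sum(g.shape[0] for g in groups)
    grand = np.vstack(groups).mean(axis=0)                  # overall mean curve
    ssb = sum(g.shape[0] * (g.mean(axis=0) - grand) ** 2 for g in groups)
    ssw = sum(((g - g.mean(axis=0)) ** 2).sum(axis=0) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))                # F(t) on the grid

# Global test statistics: np.trapz(F_curve) (integrated version) or
# F_curve.max() (supremum version), calibrated as the paper describes.
```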
