Similar Articles
20 similar articles retrieved (search time: 531 ms)
1.
Microcomputer-based algorithms are presented for estimating the parameters (shift, scale, initial shape, and terminal shape) of the hyper-Gamma distribution class. They are based on the moment equations and on the logarithmic likelihood function (LLF) associated with the hyper-Gamma density. The maximum-likelihood approach is implemented both through the derivative equations resulting from the LLF and, independently, through direct optimization of the LLF. Program options include estimation of (i) four parameters, (ii) three parameters (shift known), and (iii) two parameters (shift known, initial shape zero). A program diskette with a user's guide will be made available upon request.
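As a hedged illustration of the direct-optimization option above, the sketch below fits a four-parameter generalized gamma distribution (shift/location, scale, and two shape parameters) by minimizing the negative log-likelihood with scipy. scipy's gengamma is used only as a stand-in for the hyper-Gamma class, whose exact parameterization may differ, and the starting values are crude placeholders rather than the paper's moment estimates.

```python
# Sketch: maximum-likelihood fit of a four-parameter generalized-gamma family
# (loc = shift, scale, and two shape parameters) by direct optimization of the
# log-likelihood, in the spirit of option (i) above. scipy's gengamma is only
# an illustrative stand-in for the hyper-Gamma class.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
data = stats.gengamma.rvs(a=2.0, c=1.5, loc=1.0, scale=3.0, size=500, random_state=rng)

def neg_loglik(params):
    a, c, loc, scale = params
    if a <= 0 or c <= 0 or scale <= 0 or loc >= data.min():
        return np.inf                      # enforce the support constraint loc < min(x)
    return -np.sum(stats.gengamma.logpdf(data, a, c, loc=loc, scale=scale))

# Crude starting values, then direct optimization of the LLF.
start = np.array([1.0, 1.0, data.min() - 0.1, data.std()])
res = optimize.minimize(neg_loglik, start, method="Nelder-Mead",
                        options={"xatol": 1e-6, "fatol": 1e-6, "maxiter": 5000})
print("MLE (shape a, shape c, shift, scale):", np.round(res.x, 3))
```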

2.
This study is a Bayesian analysis of a regression model with autocorrelated errors that exhibits one change in the regression parameters and in which the autocorrelation parameter is unknown.

Using a normal-gamma prior for all the parameters except the shift point, which has a uniform distribution, the marginal posterior distributions of the regression parameters, the shift point, and the precision of the errors are found. It is important to know where the shift occurred; the main emphasis is therefore on the posterior distribution of the shift point.

A numerical study assesses the effect of the location of the shift point and the magnitude of the shift on the posterior distribution of the shift point. The posterior distribution of the shift point is more sensitive to a change that occurs in the middle of the observations than to one that occurs at an extreme of the data.
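A much-simplified companion to this analysis is sketched below: it computes the marginal posterior of a single shift point in a linear regression under a conjugate normal-inverse-gamma prior and a uniform prior on the shift point. The autocorrelated-error component of the paper's model is deliberately omitted, and the hyperparameters and toy data are invented, so this is an illustrative simplification rather than the paper's derivation.

```python
# Sketch: marginal posterior of a single shift point in a linear regression,
# using a conjugate normal-inverse-gamma prior on (beta, sigma^2) and a uniform
# prior on the shift point. The autocorrelation in the paper's error model is
# deliberately ignored here.
import numpy as np
from scipy.special import gammaln

def log_marginal_likelihood(y, X, m0, V0, a0, b0):
    """log p(y) for y ~ N(X beta, sigma^2 I), beta | sigma^2 ~ N(m0, sigma^2 V0),
    sigma^2 ~ InvGamma(a0, b0)."""
    n = len(y)
    V0_inv = np.linalg.inv(V0)
    Vn_inv = V0_inv + X.T @ X
    Vn = np.linalg.inv(Vn_inv)
    mn = Vn @ (V0_inv @ m0 + X.T @ y)
    an = a0 + n / 2.0
    bn = b0 + 0.5 * (y @ y + m0 @ V0_inv @ m0 - mn @ Vn_inv @ mn)
    _, logdet_Vn = np.linalg.slogdet(Vn)
    _, logdet_V0 = np.linalg.slogdet(V0)
    return (-0.5 * n * np.log(2 * np.pi) + 0.5 * (logdet_Vn - logdet_V0)
            + a0 * np.log(b0) - an * np.log(bn) + gammaln(an) - gammaln(a0))

def shift_point_posterior(y, x, a0=2.0, b0=1.0):
    """Posterior over the shift point m: observations before index m follow one
    line, observations from m onward another, with a uniform prior on m."""
    n = len(y)
    logpost = np.full(n, -np.inf)
    for m in range(2, n - 1):                      # keep at least 2 points per regime
        X = np.zeros((n, 4))
        X[:m, 0], X[:m, 1] = 1.0, x[:m]            # intercept/slope before the shift
        X[m:, 2], X[m:, 3] = 1.0, x[m:]            # intercept/slope after the shift
        m0, V0 = np.zeros(4), 10.0 * np.eye(4)
        logpost[m] = log_marginal_likelihood(y, X, m0, V0, a0, b0)
    logpost -= logpost.max()
    post = np.exp(logpost)
    return post / post.sum()

# Toy data with a shift in the middle of the series.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 60)
y = np.where(np.arange(60) < 30, 1.0 + 2.0 * x, 3.0 - 1.0 * x) + rng.normal(0, 0.3, 60)
post = shift_point_posterior(y, x)
print("Most probable shift point:", post.argmax())
```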

3.
To compare two samples under Type I censorship, this article proposes a method of semiparametric inference for the two-sample location-scale problem when the model for the two samples is characterized by an unknown distribution and two unknown parameters. Simultaneous estimators for both the location-shift and scale-change parameters are given. It is shown that the two estimators are strongly consistent and asymptotically normal. The approach in this article can also be used for scale-shape models. Monte Carlo studies indicate that the proposed estimation procedure performs well in finite, heavily censored samples, maintains high relative efficiencies over a wide range of censoring proportions, is robust to model misspecification, and outperforms competing estimators.

4.
We demonstrate how to perform direct simulation from the posterior distribution of a class of multiple changepoint models where the number of changepoints is unknown. The class of models assumes independence between the posterior distributions of the parameters associated with segments of data between successive changepoints. This approach is based on the use of recursions, and is related to work on product partition models. The computational complexity of the approach is quadratic in the number of observations, but an approximate version, which introduces negligible error and whose computational cost is roughly linear in the number of observations, is also possible. Our approach can be useful, for example, within an MCMC algorithm, even when the independence assumptions do not hold. We demonstrate our approach on coal-mining disaster data and on well-log data. Our method can cope with a range of models, and exact simulation from the posterior distribution is possible in a matter of minutes.
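A minimal version of the recursion-plus-simulation idea is sketched below for the simplest case: Gaussian segments with a known noise variance, a conjugate normal prior on each segment mean, and a geometric prior on segment lengths. The paper's framework is more general, so the sketch is only meant to illustrate the quadratic-cost backward recursion and the exact forward simulation of changepoints; all constants are placeholders.

```python
# Sketch: exact simulation of multiple changepoints via backward recursions, for
# Gaussian segments with known noise variance sigma2, a N(0, tau2) prior on each
# segment mean, and geometric segment lengths. Only the quadratic-cost recursion
# is shown; the approximate linear-cost version is not reproduced here.
import numpy as np
from scipy.special import logsumexp

def seg_loglik(cum_y, cum_y2, s, t, sigma2=1.0, tau2=5.0):
    """log marginal likelihood of y[s..t] (inclusive) under the conjugate model."""
    m = t - s + 1
    sy = cum_y[t + 1] - cum_y[s]
    syy = cum_y2[t + 1] - cum_y2[s]
    return (-0.5 * m * np.log(2 * np.pi * sigma2)
            + 0.5 * np.log(sigma2 / (sigma2 + m * tau2))
            - syy / (2 * sigma2)
            + tau2 * sy**2 / (2 * sigma2 * (sigma2 + m * tau2)))

def changepoint_recursion(y, p=0.05):
    """Backward recursion: logQ[t] = log P(y[t:] | a segment starts at t)."""
    n = len(y)
    cum_y, cum_y2 = np.zeros(n + 1), np.zeros(n + 1)
    cum_y[1:], cum_y2[1:] = np.cumsum(y), np.cumsum(y**2)
    logQ = np.zeros(n + 1)                                   # logQ[n] = 0 by convention
    for t in range(n - 1, -1, -1):
        terms = [seg_loglik(cum_y, cum_y2, t, s)             # segment y[t..s] ends at s,
                 + np.log(p) + (s - t) * np.log(1 - p)       # length s-t+1 ~ geometric(p),
                 + logQ[s + 1] for s in range(t, n - 1)]     # then a new segment starts
        terms.append(seg_loglik(cum_y, cum_y2, t, n - 1)     # or y[t..n-1] is the final
                     + (n - 1 - t) * np.log(1 - p))          # segment (length >= n - t)
        logQ[t] = logsumexp(terms)
    return logQ, cum_y, cum_y2

def simulate_changepoints(y, logQ, cum_y, cum_y2, p=0.05, rng=None):
    """Draw one set of changepoint locations exactly from the posterior."""
    rng, n, t, cps = rng or np.random.default_rng(), len(y), 0, []
    while t < n:
        logw = [seg_loglik(cum_y, cum_y2, t, s) + np.log(p)
                + (s - t) * np.log(1 - p) + logQ[s + 1] for s in range(t, n - 1)]
        logw.append(seg_loglik(cum_y, cum_y2, t, n - 1) + (n - 1 - t) * np.log(1 - p))
        w = np.exp(np.array(logw) - logsumexp(logw))
        s = t + int(rng.choice(len(w), p=w / w.sum()))       # sampled end of this segment
        if s < n - 1:
            cps.append(s)
        t = s + 1
    return cps

rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(m, 1.0, 50) for m in (0.0, 3.0, -2.0)])
logQ, cy, cy2 = changepoint_recursion(y)
print("One posterior draw of changepoints:", simulate_changepoints(y, logQ, cy, cy2, rng=rng))
```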

5.
A Bayesian least squares approach is taken here to estimate certain parameters in generalized linear models for dichotomous response data. The method requires that only the first and second moments of the probability distribution representing the prior information be specified. Examples are presented to illustrate situations that admit direct estimates as well as those that require approximate or iterative solutions.

6.
The vast majority of the literature on the design of sampling plans by variables assumes that the distribution of the quality characteristic is normal and that only its mean varies, while its variance is known and remains constant. For many processes, however, the quality variable is nonnormal, and either one or both of its mean and variance can vary randomly. In this paper, an optimal economic approach is developed for designing plans for acceptance sampling by variables having Inverse Gaussian (IG) distributions. The advantage of developing an IG-distribution-based model is that it can be used for diverse quality variables, ranging from highly skewed to almost symmetrical. We assume that the process has two independent assignable causes, one of which shifts the mean of the quality characteristic variable of a product and the other shifts the variance. Since a product quality variable may be affected by either one or both of the assignable causes, three likely cases of shift (mean shift only, variance shift only, and both mean and variance shift) are considered in the modeling process. For all of these scenarios, mathematical models giving the cost of using a variables acceptance sampling plan are developed. The cost models are optimized to select the optimal sampling plan parameters, such as the sample size and the upper and lower acceptance limits. A large set of numerical example problems is solved for all the cases. Some of these numerical examples are also used to depict the consequences of (1) assuming that the quality variable is normally distributed when the true distribution is IG, and (2) using sampling plans from existing standards instead of the optimal plans derived by the methodology developed in this paper. Sensitivities of some of the model input parameters are also studied using the analysis of variance technique. The information obtained on parameter sensitivities can help model users allocate resources prudently for the estimation of input parameters.

7.
Summary. The retrieval of wind vectors from satellite scatterometer observations is a non-linear inverse problem. A common approach to solving inverse problems is to adopt a Bayesian framework and to infer the posterior distribution of the parameters of interest given the observations by using a likelihood model relating the observations to the parameters, and a prior distribution over the parameters. We show how Gaussian process priors can be used efficiently with a variety of likelihood models, using local forward (observation) models and direct inverse models for the scatterometer. We present an enhanced Markov chain Monte Carlo method to sample from the resulting multimodal posterior distribution. We go on to show how the computational complexity of the inference can be controlled by using a sparse, sequential Bayes algorithm for estimation with Gaussian processes. This helps to overcome the most serious barrier to the use of probabilistic, Gaussian process methods in remote sensing inverse problems, which is the prohibitively large size of the data sets. We contrast the sampling results with the approximations that are found by using the sparse, sequential Bayes algorithm.

8.
A Wald-test-based approach to power and sample size calculations has recently been presented for logistic and Poisson regression models using the asymptotic normal distribution of the maximum likelihood estimator; that approach is applicable to tests of a single parameter. Unlike the previous procedures involving the use of score and likelihood ratio statistics, there is no simple and direct extension of this approach to tests of more than a single parameter. In this article, we present a method for computing sample size and statistical power employing the discrepancy between the noncentral and central chi-square approximations to the distribution of the Wald statistic with unrestricted and restricted parameter estimates, respectively. The distinguishing features of the proposed approach are the accommodation of tests about multiple parameters, the flexibility of covariate configurations, and the generality of overall response levels within the framework of generalized linear models. The general procedure is illustrated with some special situations that have motivated this research. Monte Carlo simulation studies are conducted to assess and compare its accuracy with existing approaches under several model specifications and covariate distributions.
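The core calculation described here, a central chi-square approximation under the null and a noncentral chi-square approximation under the alternative, can be sketched as follows. In practice the noncentrality would come from the restricted and unrestricted model fits for the chosen covariate configuration, so the per-observation value used below is purely illustrative.

```python
# Sketch: power and sample size for a Wald test of q parameters, using a
# central chi-square(q) null approximation and a noncentral chi-square(q)
# alternative. The per-observation noncentrality 'delta' would in practice be
# derived from the model and covariate configuration; the value here is made up.
from scipy import stats

def wald_power(n, q, delta, alpha=0.05):
    """Power of the q-df Wald test with noncentrality n * delta."""
    crit = stats.chi2.ppf(1 - alpha, df=q)            # critical value under H0
    return stats.ncx2.sf(crit, df=q, nc=n * delta)    # P(reject) under H1

def wald_sample_size(q, delta, power=0.80, alpha=0.05):
    """Smallest n whose Wald test attains the target power."""
    n = 1
    while wald_power(n, q, delta, alpha) < power:
        n += 1
    return n

q, delta = 3, 0.04            # 3 tested parameters, illustrative noncentrality per observation
n_req = wald_sample_size(q, delta)
print("n for 80% power:", n_req)
print("power at that n:", round(wald_power(n_req, q, delta), 3))
```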

9.
This work examines the problem of locating changes in the distribution of a compound Poisson process in which the variables being summed are iid normal and the number of variables follows a Poisson distribution. A Bayesian approach is developed to identify the location of significant changes in any of the parameters of the distribution, and a sliding-window algorithm is used to identify multiple change points. These results can be applied in any field of study where there is interest in locating changes not only in the parameters of a normally distributed data set but also in the rate of their occurrence. The method has direct application to the study of DNA copy number variations in cancer research, where it is known that the distances between genes can affect their intensity levels.

10.
Bivariate count data arise in several different disciplines (epidemiology, marketing, and sports statistics, to name a few), and the bivariate Poisson distribution, being a generalization of the Poisson distribution, plays an important role in modelling such data. In the present paper we develop a Bayesian estimation approach for the parameters of the bivariate Poisson model and provide the posterior distributions in closed form. It is shown that the joint posterior distributions are finite mixtures of conditionally independent gamma distributions whose full form can be easily deduced by a recursive updating scheme. Thus, the need to apply computationally demanding MCMC schemes for Bayesian inference in such models is removed, since direct sampling from the posterior becomes available, even in cases where the posterior distribution of functions of the parameters is not available in closed form. In addition, we define a class of prior distributions that possess an interesting conjugacy property which extends the typical notion of conjugacy, in the sense that both the priors and the posteriors belong to the same family of finite mixture models but with different numbers of components. Extensions to certain other models, including multivariate models and models with other marginal distributions, are discussed.

11.
In this paper, we present a statistical inference procedure for the step-stress accelerated life testing (SSALT) model with a Weibull failure time distribution and interval censoring, via a generalized linear model (GLM) formulation. The likelihood function of an interval-censored SSALT is in general too complicated to yield analytical results. However, by transforming the failure time to an exponential distribution and using a binomial random variable for the failure counts occurring in the inspection intervals, a GLM formulation with a complementary log-log link function can be constructed. The estimates of the regression coefficients used for the Weibull scale parameter are obtained through the iterative weighted least squares (IWLS) method, and the shape parameter is updated by direct maximum likelihood (ML) estimation. Confidence intervals for these parameters are obtained through bootstrapping. The application of the proposed GLM approach is demonstrated with an industrial example.
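To make the GLM formulation concrete, the sketch below treats the failure counts in each inspection interval as binomial with a complementary log-log link and fits the coefficients by direct maximum likelihood with scipy rather than IWLS. The stress coding, inspection times, and counts are invented, and the paper's exact time transformation and offsets are not reproduced.

```python
# Sketch: binomial failure counts per inspection interval with a complementary
# log-log link, fitted by direct maximum likelihood. The linear predictor
# (intercept, stress level, log inspection time) and the data are invented for
# illustration; the paper's exact transformation and IWLS updates differ.
import numpy as np
from scipy import optimize

# One row per (stress step, inspection interval): units at risk, failures observed.
stress   = np.array([1.0, 1.0, 1.0, 2.0, 2.0, 2.0])    # coded step-stress levels
log_time = np.log([10., 20., 30., 40., 50., 60.])      # inspection times
at_risk  = np.array([100, 91, 80, 70, 52, 30])
failures = np.array([  9, 11, 10, 18, 22, 15])
X = np.column_stack([np.ones_like(stress), stress, log_time])

def neg_loglik(beta):
    eta = X @ beta
    p = 1.0 - np.exp(-np.exp(eta))                      # inverse complementary log-log link
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(failures * np.log(p) + (at_risk - failures) * np.log1p(-p))

res = optimize.minimize(neg_loglik, x0=np.zeros(3), method="BFGS")
print("coefficient estimates (intercept, stress, log time):", np.round(res.x, 3))
```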

12.
Recently, mixture distributions have become more and more popular in many scientific fields. Statistical computation and analysis of mixture models, however, are extremely complex because of the large number of parameters involved. Both EM algorithms for likelihood inference and MCMC procedures for Bayesian analysis have various difficulties in dealing with mixtures with an unknown number of components. In this paper, we propose a direct sampling approach to the computation of Bayesian finite mixture models with a varying number of components. This approach requires only knowledge of the density function up to a multiplicative constant. It is easy to implement, numerically efficient, and very practical in real applications. A simulation study shows that it performs quite satisfactorily on relatively high-dimensional distributions. A well-known genetic data set is used to demonstrate the simplicity of this method and its power for the computation of high-dimensional Bayesian mixture models.

13.
Binary dynamic fixed and mixed logit models are extensively studied in the literature. These models are developed to examine the effects of certain fixed covariates through a parametric regression function that forms part of the model. However, there are situations where one may wish to include more covariates in the model even though their direct effect is not of interest. In this paper we propose a generalization of the existing binary dynamic logit (BDL) models to the semi-parametric longitudinal setup to address this issue of additional covariates. The regression function involved in such a semi-parametric BDL model contains (i) a parametric linear regression function in some primary covariates, and (ii) a non-parametric function in certain secondary covariates. We use a simple semi-parametric conditional quasi-likelihood approach for consistent estimation of the non-parametric function, and a semi-parametric likelihood approach for the joint estimation of the main regression and dynamic dependence parameters of the model. The finite-sample performance of the estimation approaches is examined through a simulation study. The asymptotic properties of the estimators are also discussed. The proposed model and the estimation approaches are illustrated by reanalysing a longitudinal infectious disease data set.

14.
Quick detection of unanticipated changes in a financial sequence is a critical problem for practitioners in the finance industry. Based on refined logarithmic moment estimators for the four parameters of a stable distribution, this article presents a stable-distribution-based multi-CUSUM chart that consists of several CUSUM charts and detects changes in the four parameters of an independent and identically distributed random sequence with a stable distribution. Numerical results for the average run lengths show that the multi-CUSUM chart is, on the whole, superior (robust and quick) to a single CUSUM chart in detecting shifts in the four parameters. A real example that monitors changes in IBM's stock returns is used to demonstrate the performance of the proposed method.
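For intuition about the building block of the multi-CUSUM chart, here is a generic two-sided tabular CUSUM for a mean shift together with a simulated average run length. The logarithmic-moment estimators of the stable-distribution parameters and the article's specific reference and decision values are not reproduced, so the constants below are placeholders.

```python
# Sketch: a generic two-sided tabular CUSUM for a mean shift, plus a simulated
# average run length (ARL). The multi-CUSUM chart in the article combines several
# such charts driven by stable-distribution parameter estimates; only the basic
# CUSUM recursion and the ARL-by-simulation idea are shown here.
import numpy as np

def cusum_run_length(x, k=0.5, h=5.0):
    """Return the first index at which the two-sided CUSUM signals (or len(x))."""
    c_plus = c_minus = 0.0
    for i, xi in enumerate(x):
        c_plus = max(0.0, c_plus + xi - k)      # upper CUSUM with reference value k
        c_minus = max(0.0, c_minus - xi - k)    # lower CUSUM
        if c_plus > h or c_minus > h:
            return i + 1
    return len(x)

def simulated_arl(shift=0.0, n_runs=1000, run_cap=10000, rng=None):
    """Average run length for standardized observations with a given mean shift."""
    rng = rng or np.random.default_rng(0)
    lengths = [cusum_run_length(rng.normal(shift, 1.0, run_cap)) for _ in range(n_runs)]
    return float(np.mean(lengths))

print("in-control ARL (shift 0):     ", round(simulated_arl(0.0), 1))
print("out-of-control ARL (shift 1.0):", round(simulated_arl(1.0), 1))
```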

15.
The problem of testing whether there is a change in location in a sequence of random variables taken over time has been discussed in several papers. In this paper we develop a conservative nonparametric, distribution-free confidence bound for the amount of shift and give some Monte Carlo results to show how conservative the bound is.

16.
Abstract. This paper deals with the issue of performing a default Bayesian analysis on the shape parameter of the skew-normal distribution. Our approach is based on a suitable pseudo-likelihood function and a matching prior distribution for this parameter when the location (or regression) and scale parameters are unknown. This approach is important for both theoretical and practical reasons. From a theoretical perspective, it is shown that the proposed matching prior is proper, thus inducing a proper posterior distribution for the shape parameter even when the likelihood is monotone. From a practical perspective, the proposed approach has the advantages of avoiding prior elicitation for the nuisance parameters and the computation of multidimensional integrals.

17.
In this article, a transformation method based on principal component analysis is first applied to remove the existing autocorrelation within each profile in Phase I monitoring of autocorrelated simple linear profiles. This easy-to-use approach is independent of the autocorrelation coefficient. Moreover, since it is a model-free method, it can be used in Phase I monitoring procedures. Then, five control schemes are proposed to monitor the parameters of the profile with uncorrelated error terms. The performances of the proposed control charts are evaluated and compared through simulation experiments based on different values of the autocorrelation coefficient as well as different shift scenarios in the parameters of the profile, in terms of the probability of receiving an out-of-control signal.

18.
Statistical design is applied to a multivariate exponentially weighted moving average (MEWMA) control chart. The chart parameters are the control limit H and the smoothing constant r. The choices of these parameters depend on the number of variables p and the size of the process mean shift δ. The MEWMA statistic is modeled as a Markov chain, and the Markov chain approach is used to determine the properties of the chart. Although the average run length has become a traditional measure of the performance of control schemes, some authors have suggested other measures, such as the median and other percentiles of the run length distribution, to describe the run length properties of a control scheme. This allows a more thorough study of the performance of the control scheme; consequently, conclusions based on these measures provide a better and more comprehensive understanding of a scheme. In this article, we present the performance of the MEWMA control chart as measured by the average run length and the median run length. Graphs are given so that the chart parameters of an optimal MEWMA chart can be determined easily.
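For readers who want the chart mechanics, the MEWMA statistic itself is easy to compute; the sketch below uses the exact time-varying covariance of the EWMA vector and an arbitrary placeholder control limit, whereas in the article H and r are chosen from the Markov chain ARL analysis.

```python
# Sketch: computing the MEWMA statistic T_i^2 for multivariate observations and
# flagging the first out-of-control signal. The control limit H is an arbitrary
# placeholder; in the article H and r come from the Markov chain ARL design.
import numpy as np

def mewma_signal(X, sigma, r=0.1, H=11.0):
    """Return (T2 values, index of first signal or None) for the rows of X."""
    p = X.shape[1]
    sigma_inv = np.linalg.inv(sigma)
    z = np.zeros(p)
    t2 = np.empty(len(X))
    for i, x in enumerate(X, start=1):
        z = r * x + (1 - r) * z                               # EWMA of the observation vectors
        var_factor = r * (1 - (1 - r) ** (2 * i)) / (2 - r)   # exact covariance factor of z
        t2[i - 1] = z @ (sigma_inv / var_factor) @ z
        if t2[i - 1] > H:
            return t2[:i], i
    return t2, None

rng = np.random.default_rng(3)
p, sigma = 3, np.eye(3)
in_control = rng.multivariate_normal(np.zeros(p), sigma, size=50)
shifted = rng.multivariate_normal(np.full(p, 0.8), sigma, size=50)   # process mean shift
t2, first = mewma_signal(np.vstack([in_control, shifted]), sigma)
print("first out-of-control signal at observation:", first)
```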

19.
Empirical Bayes approaches have often been applied to the problem of estimating small-area parameters. As a compromise between synthetic and direct survey estimators, an estimator based on an empirical Bayes procedure is not subject to the large bias that is sometimes associated with a synthetic estimator, nor is it as variable as a direct survey estimator. Although the point estimates perform very well, naïve empirical Bayes confidence intervals tend to be too short to attain the desired coverage probability, since they fail to incorporate the uncertainty that results from having to estimate the prior distribution. Several alternative methodologies for interval estimation which correct for the deficiencies associated with the naïve approach have been suggested. Laird and Louis (1987) proposed three types of bootstrap for correcting naïve empirical Bayes confidence intervals. Calling the methodology of Laird and Louis (1987) an unconditional bias-corrected naïve approach, Carlin and Gelfand (1991) suggested a modification to the Type III parametric bootstrap which corrects for bias in the naïve intervals by conditioning on the data. Here we empirically evaluate the Type II and Type III bootstraps of Laird and Louis, as well as the modification suggested by Carlin and Gelfand (1991), with the objective of examining the coverage properties of empirical Bayes confidence intervals for small-area proportions.

20.
Proschan, Brittain, and Kammerman made a very interesting observation: for some examples of unequal-allocation minimization, the mean of the unconditional randomization distribution is shifted away from 0. Kuznetsova and Tymofyeyev linked this phenomenon to variations in the allocation ratio from allocation to allocation in the examples considered by Proschan et al. and advocated the use of unequal-allocation procedures that preserve the allocation ratio at every step. In this paper, we show that the shift phenomenon extends to very common settings: using a conditional randomization test in a study with equal allocation. This phenomenon has the same, previously unnoted cause: variations in the allocation ratio among the allocation sequences in the conditional reference set. We consider two kinds of conditional randomization tests. The first kind is the often-used randomization test that conditions on the treatment group totals; we describe the variations in the conditional allocation ratio with this test using examples of permuted block randomization and biased coin randomization. The second kind is the randomization test proposed by Zheng and Zelen for a multicenter trial with permuted block central allocation, which conditions on the within-center treatment totals. On the basis of the sequence of conditional allocation ratios, we derive the value of the shift in the conditional randomization distribution for a specific vector of responses and the expected value of the shift when responses are independent identically distributed random variables. We discuss the asymptotic behavior of the shift for the two types of tests.
