Similar literature
20 similar records found (search time: 31 ms)
1.
In this paper, we extend Bernstein's theorem using basic tools of calculus on time scales and, as a further application, introduce the discrete nabla and delta Mittag-Leffler distributions via their Laplace transforms on the discrete time scale. For these discrete distributions, infinite divisibility and geometric infinite divisibility are proved along with some statistical properties. The delta and nabla Mittag-Leffler processes are defined.

2.
In this article, a new mixed Poisson distribution is introduced. It is obtained by a mixing process with the Poisson as the mixed distribution and the transmuted exponential as the mixing distribution. Distributional properties such as unimodality, moments, over-dispersion, and infinite divisibility are studied. Three methods, viz. the method of moments, the method of moments and proportions, and maximum likelihood, are used for parameter estimation. Further, an actuarial application in the context of the aggregate claim distribution is presented. Finally, to show the applicability and superiority of the proposed model, we discuss count-data and count-regression modeling and compare with some well-established models.

3.
A new discrete distribution involving the geometric and discrete Pareto as special cases is introduced. The distribution possesses many interesting properties, such as a decreasing hazard rate, zero-vertex unimodality, over-dispersion, infinite divisibility, and a compound Poisson representation, which make it well suited for count-data modeling. Other issues, including the closure property under minima and comparison of its distribution tail with those of other distributions via actuarial indices, are discussed. The method of proportions and the maximum likelihood method are presented for parameter estimation. Finally, the performance of the proposed distribution relative to other classical and newly proposed infinitely divisible distributions is discussed.

4.
This paper shows that a normalization of the Hurwitz zeta function is a characteristic function. This generalizes the 1938 result of Khinchine about the Riemann zeta function. The paper investigates the infinite divisibility of the resulting distribution.

5.
The K-means algorithm and the normal mixture model method are two common clustering methods. The K-means algorithm is a popular heuristic approach which gives reasonable clustering results if the component clusters are ball-shaped. Currently, there are no analytical results for this algorithm when the component distributions deviate from the ball shape. This paper analytically studies how the K-means algorithm changes its classification rule as the normal component distributions become more elongated under the homoscedastic assumption, and compares this rule with the Bayes rule from the mixture model method. We show that the classification rules of both methods are linear, but the slopes of the two classification lines change in opposite directions as the component distributions become more elongated. The classification performance of the K-means algorithm is then compared to that of the mixture model method via simulation. The comparison, which is limited to two clusters, shows that the K-means algorithm consistently provides poor classification performance as the component distributions become more elongated, while the mixture model method can potentially, but not necessarily, take advantage of this change and provide much better classification performance.
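As a point of reference for the comparison above, here is a minimal, pure-Python sketch of Lloyd's K-means algorithm (an illustrative toy, not the authors' implementation); with well-separated, roughly ball-shaped clusters it recovers the component means, which is exactly the regime in which the paper finds K-means competitive.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal Lloyd's algorithm: alternate nearest-centroid assignment
    and centroid recomputation until the centroids stop moving."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[j].append(p)
        new = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[j]
               for j, cl in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return centroids

# Two well-separated, ball-shaped Gaussian clusters in the plane.
rng = random.Random(1)
pts = [(rng.gauss(-3, 0.5), rng.gauss(0, 0.5)) for _ in range(200)]
pts += [(rng.gauss(3, 0.5), rng.gauss(0, 0.5)) for _ in range(200)]
cents = sorted(kmeans(pts, 2), key=lambda c: c[0])
```

Elongating the component covariances (e.g. inflating the second coordinate's standard deviation) is the scenario in which the paper shows this perpendicular-bisector rule diverges from the Bayes rule.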

6.
Renewal-type equations are frequently encountered in the study of reliability, warranty analysis, replacement and maintenance policies, and inventory control. Renewal equations usually do not have analytical solutions, and hence bounds or approximations are very useful. In this article, analytical bounds are studied based on a simple iterative procedure which provides some analytical results and nice convergence properties as the number of iterations increases. Bounds and approximations are also investigated for a recursive algorithm for numerical computation. In addition, some interesting monotonicity properties are introduced and discussed. The approximation error, which is important for determining the stopping rule of the iterative procedure and the numerical algorithm, is also studied.
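As an illustration of the kind of recursive numerical scheme the abstract refers to (a generic sketch, not the article's specific algorithm), the renewal function M(t) solving M(t) = F(t) + ∫₀ᵗ M(t−x) dF(x) can be computed on a grid by replacing the integral with a Riemann–Stieltjes sum over the increments of F:

```python
import math

def renewal_function(F, t_max, h):
    """Discretized recursion for the renewal equation
    M(t) = F(t) + integral_0^t M(t - x) dF(x),
    approximating the integral by Riemann-Stieltjes sums on a grid of width h."""
    n = int(round(t_max / h))
    M = [0.0] * (n + 1)
    for k in range(1, n + 1):
        s = F(k * h)
        # M at earlier grid points is already known, so this is a pure recursion.
        for j in range(1, k + 1):
            s += M[k - j] * (F(j * h) - F((j - 1) * h))
        M[k] = s
    return M

# Sanity check: exponential inter-arrivals with rate 1 give exactly M(t) = t.
F = lambda t: 1.0 - math.exp(-t)
M = renewal_function(F, t_max=2.0, h=0.01)
```

The discretization error here is O(h), which is why analytical bounds and error estimates of the kind studied in the article are useful for choosing the grid and the stopping rule.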

7.
It has been common practice to recommend one distribution for a large class of problems. An example is the use of the logarithmic distribution for count data; the point of concern in this paper is its recommendation without regard to the size of the experimental unit. References for this particular distribution go back to Fisher et al. [1943]; see Douglas [1980], Patil et al. [1984], and Kotz and Johnson [1985] for additional references in this area, especially through searching the database of the American Mathematical Society. We show a fallacy in this structure, provide a computer algorithm to find the actual distributions, and then check on the divisibility. The language used is APL2, but users can write their own programs.

8.
A composite endpoint consists of multiple endpoints combined in one outcome. It is frequently used as the primary endpoint in randomized clinical trials. There are two main disadvantages associated with the use of composite endpoints: a) in conventional analyses, all components are treated as equally important; and b) in time-to-event analyses, the first event considered may not be the most important component. Recently, Pocock et al. (2012) introduced the win ratio method to address these disadvantages. This method has two alternative approaches: the matched pair approach and the unmatched pair approach. In the unmatched pair approach, the confidence interval is constructed based on bootstrap resampling, and the hypothesis testing is based on the non-parametric method of Finkelstein and Schoenfeld (1999). Luo et al. (2015) developed a closed-form variance estimator of the win ratio for the unmatched pair approach, based on a composite endpoint with two components and a specific algorithm determining winners, losers, and ties. We extend the unmatched pair approach to provide a generalized analytical solution to both hypothesis testing and confidence interval construction for the win ratio, based on its logarithmic asymptotic distribution. This asymptotic distribution is derived via U-statistics following Wei and Johnson (1985). We perform simulations assessing the confidence intervals constructed based on our approach versus those per bootstrap resampling and per Luo et al. We have also applied our approach to a liver transplant Phase III study. This application and the simulation studies show that the win ratio can be a better statistical measure than the odds ratio when the importance order among components matters, and that the methods per our approach and per Luo et al., although derived from large sample theory, are not limited to large samples but are also good for relatively small sample sizes. Different from Pocock et al. and Luo et al., our approach is a generalized analytical method, which is valid for any algorithm determining winners, losers, and ties. Copyright © 2016 John Wiley & Sons, Ltd.
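The counting behind the unmatched-pairs win ratio can be sketched as follows. This is a toy with fully observed ordinal components where larger is better; real applications use censoring-aware rules for time-to-event components, and the data here are made up for illustration:

```python
def win_ratio(treatment, control):
    """Unmatched-pairs win ratio: compare every treatment subject with every
    control subject, first on the most important component and then, on ties,
    on the next component (a simple hierarchical two-component rule)."""
    wins = losses = 0
    for t in treatment:
        for c in control:
            for t_comp, c_comp in zip(t, c):
                if t_comp > c_comp:
                    wins += 1
                    break
                if t_comp < c_comp:
                    losses += 1
                    break
            # a full tie on all components contributes to neither count
    return wins / losses

# Each subject: (primary outcome, secondary outcome); larger is better on both.
treat = [(5, 2), (7, 1), (6, 3)]
ctrl = [(4, 2), (6, 1), (5, 3)]
wr = win_ratio(treat, ctrl)
```

The hierarchical comparison is what lets the more important component dominate the result, which is the property the abstract contrasts with the odds ratio.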

9.
This study investigates the Bayesian approach to the analysis of paired responses when the responses are categorical. Using resampling and analytical procedures, inferences for homogeneity and agreement are developed. The posterior analysis is based on the Dirichlet distribution, from which repeated samples can be generated with a random number generator. Resampling and analytical techniques are employed to make Bayesian inferences, and when it is not appropriate to use analytical procedures, resampling techniques are easily implemented. The Bayesian methodology is illustrated with several examples, and the results show that these are exact small-sample procedures that can easily solve inference problems for matched designs.

10.
Families of Repeated Measurements Designs balanced for residual effects are constructed (whenever the divisibility conditions allow), under the assumption that the number of periods is less than the number of treatments and that each treatment precedes each other treatment once. These designs are then shown to be connected for both residual and direct treatment effects.

11.
We propose a new multivariate extension of the inverse Gaussian distribution derived from a certain multivariate inverse relationship. First we define a multivariate extension of the inverse relationship between two sets of multivariate distributions, then define a reduced inverse relationship between two multivariate distributions. We derive the multivariate continuous distribution that has the reduced multivariate inverse relationship with a multivariate normal distribution and call it a multivariate inverse Gaussian distribution. This distribution is also characterized as the distribution of the location of a multivariate Brownian motion at some stopping time. The marginal distribution in one direction is the inverse Gaussian distribution, and the conditional distribution in the space perpendicular to this direction is a multivariate normal distribution. Mean, variance, and higher order cumulants are derived from the multivariate inverse relationship with a multivariate normal distribution. Other properties such as reproductivity and infinite divisibility are also given.

12.
Markov chain Monte Carlo (MCMC) methods, including the Gibbs sampler and the Metropolis–Hastings algorithm, are very commonly used in Bayesian statistics for sampling from complicated, high-dimensional posterior distributions. A continuing source of uncertainty is how long such a sampler must be run in order to converge approximately to its target stationary distribution. A method has previously been developed to compute rigorous theoretical upper bounds on the number of iterations required to achieve a specified degree of convergence in total variation distance by verifying drift and minorization conditions. We propose the use of auxiliary simulations to estimate the numerical values needed in this theorem. Our simulation method makes it possible to compute quantitative convergence bounds for models for which the requisite analytical computations would be prohibitively difficult or impossible. On the other hand, although our method appears to perform well in our example problems, it cannot provide the guarantees offered by analytical proof.

13.
The Gibbs sampler, a computer-intensive algorithm, is an important statistical tool in both applied and theoretical work. The algorithm is, in many cases, time-consuming; this paper extends the steady-state ranked simulated sampling approach, utilized in Monte Carlo methods by Samawi [On the approximation of multiple integrals using steady state ranked simulated sampling, 2010, submitted for publication], to improve the well-known Gibbs sampling algorithm. It is demonstrated that this approach provides unbiased estimators when estimating means and distribution functions, and substantially improves the performance and convergence of the Gibbs sampling algorithm, resulting in a significant reduction in the cost and time required to attain a given level of accuracy. Similar to Casella and George [Explaining the Gibbs sampler, Am. Statist. 46(3) (1992), pp. 167–174], we provide some analytical properties in simple cases and compare the performance of our method using the same illustrations.
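For readers unfamiliar with the baseline being improved, a standard Gibbs sampler for a toy target looks like this: for a standard bivariate normal with correlation ρ, each full conditional is N(ρ · other, 1 − ρ²). This is a generic sketch of plain Gibbs sampling, unrelated to the steady-state ranked variant the paper proposes:

```python
import random

def gibbs_bivariate_normal(rho, n_samples, burn_in=500, seed=42):
    """Plain Gibbs sampler for a standard bivariate normal with correlation
    rho: alternately draw x | y ~ N(rho*y, 1-rho^2) and y | x ~ N(rho*x, 1-rho^2)."""
    rng = random.Random(seed)
    x = y = 0.0
    sd = (1.0 - rho * rho) ** 0.5
    out = []
    for i in range(burn_in + n_samples):
        x = rng.gauss(rho * y, sd)
        y = rng.gauss(rho * x, sd)
        if i >= burn_in:
            out.append((x, y))
    return out

samples = gibbs_bivariate_normal(rho=0.8, n_samples=20000)
mean_x = sum(s[0] for s in samples) / len(samples)
mean_xy = sum(s[0] * s[1] for s in samples) / len(samples)  # estimates rho
```

The strong autocorrelation of such chains (the higher ρ is, the slower the mixing) is precisely what motivates variance-reduction schemes like the one in the paper.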

14.
The research described herein was motivated by a study of the relationship between the performance of students in senior high schools and at universities in China. A special linear structural equation model is established, in which some parameters are known and both the responses and the covariables are measured with errors. To explore the relationship between the true responses and latent covariables and to estimate the parameters, we suggest a non-iterative estimation approach that can account for the external dependence between the true responses and latent covariables. This approach can also deal with the collinearity problem, because the use of dimension-reduction techniques removes redundant variables. Combining this with the information that some of the parameters are given, we can estimate the other unknown parameters. An easily implemented algorithm is provided. A simulation is carried out to provide evidence of the performance of the approach and to compare it with existing methods. The approach is applied to the education example for illustration, and it can be readily extended to more general models.

15.
Celebrating the 20th anniversary of the presentation of the paper by Dempster, Laird and Rubin which popularized the EM algorithm, we investigate, after a brief historical account, strategies that aim to make the EM algorithm converge faster while maintaining its simplicity and stability (e.g. automatic monotone convergence in likelihood). First we introduce the idea of a 'working parameter' to facilitate the search for efficient data augmentation schemes and thus fast EM implementations. Second, summarizing various recent extensions of the EM algorithm, we formulate a general alternating expectation–conditional maximization (AECM) algorithm that couples flexible data augmentation schemes with model reduction schemes to achieve efficient computations. We illustrate these methods using multivariate t-models with known or unknown degrees of freedom and Poisson models for image reconstruction. We show, through both empirical and theoretical evidence, the potential for a dramatic reduction in computational time with little increase in human effort. We also discuss the intrinsic connection between EM-type algorithms and the Gibbs sampler, and the possibility of using the techniques presented here to speed up the latter. The main conclusion of the paper is that, with the help of statistical considerations, it is possible to construct algorithms that are simple, stable and fast.

16.
A faster alternative to the EM algorithm in finite mixture distributions is described, which alternates EM iterations with Gauss-Newton iterations using the observed information matrix. At the expense of modest additional analytical effort in obtaining the observed information, the hybrid algorithm reduces the computing time required and provides asymptotic standard errors at convergence. The algorithm is illustrated on the two-component normal mixture.
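The EM half of such a hybrid can be sketched in a few lines for the two-component normal mixture. This is plain EM only, on synthetic data; the Gauss-Newton acceleration step using the observed information matrix is omitted:

```python
import math
import random

def em_two_normal(data, iters=200):
    """Plain EM for a univariate two-component normal mixture:
    E-step computes responsibilities, M-step updates weighted
    means, standard deviations, and the mixing proportion."""
    lo, hi = min(data), max(data)
    mu1, mu2 = lo, hi          # crude initialization from the data range
    s1 = s2 = (hi - lo) / 4
    pi = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation
        r = []
        for x in data:
            d1 = pi * math.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
            d2 = (1 - pi) * math.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2
            r.append(d1 / (d1 + d2))
        # M-step: responsibility-weighted parameter updates
        n1 = sum(r)
        n2 = len(data) - n1
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
        s1 = max(1e-6, math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1))
        s2 = max(1e-6, math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / n2))
        pi = n1 / len(data)
    return pi, (mu1, s1), (mu2, s2)

rng = random.Random(7)
data = [rng.gauss(0, 1) for _ in range(300)] + [rng.gauss(5, 1) for _ in range(300)]
pi, c1, c2 = em_two_normal(data)
```

Near convergence EM's linear rate slows down markedly, which is where swapping in Gauss-Newton steps pays off.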

17.
The purpose of this article is to study Kataoka's safety-first (KSF) model, a representative of the safety-first models that are among the most popular models in modern portfolio selection. We obtain conditions that guarantee that the KSF model has a finite optimal solution without the normality assumption. When short-selling is allowed, we provide an explicit analytical solution of the KSF model in two cases. When short-selling is not allowed, we propose an iterative algorithm for finding the optimal portfolios of the KSF model. We also investigate a KSF model with a mean-return constraint and obtain an explicit analytical expression for the optimal portfolio.

18.
This article proposes maximum likelihood estimation based on the bare bones particle swarm optimization (BBPSO) algorithm for estimating the parameters of the Weibull distribution with censored data, which is widely used in lifetime data analysis. This approach can produce more accurate parameter estimates for the Weibull distribution. Additionally, confidence intervals for the estimators are obtained. The simulation results show that the BBPSO algorithm outperforms the Newton–Raphson method in most cases in terms of bias, root mean square error, and coverage rate. Two examples are used to demonstrate the performance of the proposed approach. The results show that the maximum likelihood estimates via the BBPSO algorithm perform well for estimating the Weibull parameters with censored data.
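A bare bones PSO iteration (Kennedy's velocity-free variant) resamples each particle coordinate from a Gaussian centred between its personal best and the global best. The following is a hedged sketch for complete (uncensored) Weibull data only; the article's censored-data likelihood and tuning details are not reproduced, and the bounds and sample below are illustrative choices:

```python
import math
import random

def weibull_nll(params, data):
    """Negative log-likelihood for complete Weibull(shape k, scale lam) data."""
    k, lam = params
    if k <= 0 or lam <= 0:
        return float("inf")  # reject infeasible parameter proposals
    nll = 0.0
    for x in data:
        nll -= math.log(k / lam) + (k - 1) * math.log(x / lam) - (x / lam) ** k
    return nll

def bbpso(objective, bounds, n_particles=20, iters=150, seed=3):
    """Bare bones PSO: each coordinate is resampled from a Gaussian centred
    midway between the particle's personal best and the global best, with
    standard deviation equal to their distance (no velocity terms)."""
    rng = random.Random(seed)
    pbest = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    pcost = [objective(p) for p in pbest]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n_particles):
            pos = [rng.gauss((pb + gb) / 2.0, abs(pb - gb))
                   for pb, gb in zip(pbest[i], gbest)]
            c = objective(pos)
            if c < pcost[i]:
                pbest[i], pcost[i] = pos, c
                if c < gcost:
                    gbest, gcost = pos[:], c
    return gbest, gcost

rng = random.Random(11)
data = [rng.weibullvariate(2.0, 1.5) for _ in range(300)]  # scale 2.0, shape 1.5
est, best_nll = bbpso(lambda p: weibull_nll(p, data),
                      bounds=[(0.1, 5.0), (0.1, 10.0)])
k_hat, lam_hat = est
```

Because the proposal distribution needs no step-size or inertia parameters, BBPSO is attractive precisely where Newton-Raphson is fragile, e.g. poor starting values for the shape parameter.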

19.
In this article, we employ the variational Bayesian method to study parameter estimation for the linear regression model in which some regressors are Gaussian with nonzero prior means. We obtain an analytical expression for the posterior parameter distribution and then propose an iterative algorithm for the model. Simulations are carried out to test the performance of the proposed algorithm, and the results confirm both its effectiveness and its reliability.

20.
Genetic algorithms (GAs) are adaptive search techniques designed to find near-optimal solutions of large-scale optimization problems with multiple local maxima. Standard versions of the GA are defined for objective functions which depend on a vector of binary variables. The problem of finding the maximum a posteriori (MAP) estimate of a binary image in Bayesian image analysis appears to be well suited to a GA, as images have a natural binary representation and the posterior image probability is a multi-modal objective function. We use the numerical optimization problem posed in MAP image estimation as a test-bed on which to compare GAs with simulated annealing (SA), another all-purpose global optimization method. Our conclusions are that the GAs we have applied perform poorly, even after adaptation to this problem. This is somewhat unexpected, given the widespread claims of GAs' effectiveness, but it is in keeping with work by Jennison and Sheehan (1995) which suggests that GAs are not adept at handling problems involving a great many variables of roughly equal influence. We reach more positive conclusions concerning the use of the GA's crossover operation in recombining near-optimal solutions obtained by other methods. We propose a hybrid algorithm in which crossover is used to combine subsections of image reconstructions obtained using SA, and we show that this algorithm is more effective and efficient than SA or a GA individually.
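The hybrid idea of using crossover only to recombine near-optimal reconstructions can be illustrated on a toy 1-D binary "image". This is an illustrative sketch with a made-up objective, not the paper's SA-based pipeline: two candidate reconstructions that are each good on a different region are recombined by one-point crossover, and the best offspring is kept.

```python
def score(x, y, beta=1.0):
    """Toy MAP objective for a 1-D binary image: agreement with the observed
    data y plus an Ising-style smoothness reward (higher is better)."""
    data_term = sum(1 for xi, yi in zip(x, y) if xi == yi)
    smoothness = beta * sum(1 for a, b in zip(x, x[1:]) if a == b)
    return data_term + smoothness

def crossover_recombine(x1, x2, y):
    """One-point crossover of two candidate reconstructions (e.g. from two
    SA runs): try every cut point, keep the best offspring or parent."""
    best = max([x1, x2], key=lambda x: score(x, y))
    for cut in range(1, len(y)):
        for child in (x1[:cut] + x2[cut:], x2[:cut] + x1[cut:]):
            if score(child, y) > score(best, y):
                best = child
    return best

# Two reconstructions, each wrong on a different half of the signal:
y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]    # noiseless observation, for simplicity
x1 = [0, 0, 0, 0, 0, 1, 0, 1, 0, 1]   # errors in the right half
x2 = [1, 0, 1, 0, 0, 1, 1, 1, 1, 1]   # errors in the left half
best = crossover_recombine(x1, x2, y)
```

When the parents' errors live in disjoint subsections, a single cut can splice the good halves together, which is why crossover helps as a recombination step even though the full GA performs poorly as a search method here.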

