Similar Literature
20 similar records found.
1.
2.
Summary: In this paper we show that the bivariate mixed Poisson process arises in a natural way from the univariate mixed Poisson process, which is used in several areas for counting events. We also state some properties of the bivariate process. In the second part of the paper we illustrate how, by means of the bivariate mixed Poisson process, a bonus–malus system that distinguishes between types of accident can be derived from the classical bonus–malus system in third-party liability insurance. To this end we first check the model against the given data, then estimate the distribution parameters and compute net premiums for different mixing distributions, and finally test the prediction accuracy.
* Lecture at the Dresdner Forum zur Versicherungsmathematik: Tarifierung in Erst- und Rückversicherung on 25 June 2004. For their support of this work the author warmly thanks Lothar Partzsch, Klaus D. Schmidt (both Dresden), and Friedemann Spies (Munich).

3.
ABSTRACT

In practice, it is often impossible to find a family of distributions that fits the sample distribution with high precision. In such cases it seems opportune to search for the best approximation by a family of distributions instead of an exact fit. In this paper, we consider the Anderson–Darling statistic with a plugged-in minimum distance estimator for the parameter vector. We prove asymptotic normality of the Anderson–Darling statistic, which is used for a test of goodness of approximation. Moreover, we introduce a measure of discrepancy between the sample distribution and the model class.
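The Anderson–Darling statistic that the paper builds on can be sketched as follows. This is a minimal stdlib-Python illustration with a fully specified cdf F; the paper's version instead plugs a minimum distance estimate of the parameter vector into F.

```python
import math

def anderson_darling(sample, cdf):
    """A_n^2 = -n - (1/n) * sum_i (2i-1) * [ln F(x_(i)) + ln(1 - F(x_(n+1-i)))],
    where x_(1) <= ... <= x_(n) are the order statistics."""
    x = sorted(sample)
    n = len(x)
    s = sum(
        (2 * i - 1) * (math.log(cdf(x[i - 1])) + math.log(1.0 - cdf(x[n - i])))
        for i in range(1, n + 1)
    )
    return -n - s / n
```

For example, `anderson_darling([0.25, 0.5, 0.75], lambda u: u)` evaluates the statistic for a small sample against the uniform cdf.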

4.
ABSTRACT

Mixed Poisson distributions are widely used in applications to count data, mainly when extra variation is present. This paper introduces an extension, in terms of a mixed strategy, to deal jointly with extra-Poisson variation and zero-inflated counts. In particular, we propose the Poisson log-skew-normal distribution, which uses the log-skew-normal as a mixing prior, and present its main properties. The distribution is obtained by adding a hierarchy level to the lognormal prior and includes the Poisson lognormal distribution as a special case. Two numerical methods are developed for evaluating the associated likelihoods, based on Gauss–Hermite quadrature and the Lambert W function. Through simulation studies, we show that the proposed distribution performs better than several commonly used distributions that allow for over-dispersion or zero inflation. The usefulness of the proposed distribution in empirical work is highlighted by the analysis of a real data set from health economics.
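For the Poisson lognormal special case mentioned above, the Gauss–Hermite evaluation of the likelihood can be sketched as follows. This is our illustration (assuming NumPy for the quadrature nodes), not the paper's code; the paper's log-skew-normal prior adds a skewness component on top of this.

```python
import math
import numpy as np

def poisson_lognormal_pmf(k, mu, sigma, n_nodes=30):
    """P(Y = k) = integral of Poisson(k; lam) * LogNormal(lam; mu, sigma) d(lam),
    computed by Gauss-Hermite quadrature via the substitution
    lam = exp(mu + sqrt(2) * sigma * x)."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    total = 0.0
    for x, w in zip(nodes, weights):
        lam = math.exp(mu + math.sqrt(2.0) * sigma * x)
        # Work on the log scale to avoid overflow in lam**k / k!.
        log_pois = k * math.log(lam) - lam - math.lgamma(k + 1)
        total += w * math.exp(log_pois)
    return total / math.sqrt(math.pi)
```

As a sanity check, the pmf should sum to 1 over k and have mean exp(mu + sigma^2 / 2), the mean of the mixing lognormal.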

5.
Abstract

In this paper, we study Pareto-optimal reinsurance policies from the perspectives of an insurer and a reinsurer, assuming that the reinsurance premium principles satisfy risk loading and preserve the stop-loss order. Using a geometric approach, we determine the forms of the optimal policies within two classes of ceded loss functions: the class of increasing convex ceded loss functions, and the class in which the constraints on both ceded and retained loss functions are relaxed to increasing functions. We then demonstrate the applicability of our results by giving the parameters of the optimal ceded loss functions under the Dutch premium principle and Wang's premium principle.

6.
Abstract

The compound Poisson Omega model is considered in the presence of a three-step premium rate. First, the integral equations and integro-differential equations for the Gerber–Shiu expected discounted penalty function are derived. Second, the integro-differential equations for the Gerber–Shiu expected discounted penalty function are determined under three different initial conditions. The results are then used to find the bankruptcy probability. Finally, the special case where the claim size distribution is exponential is discussed in some detail to illustrate the effect of the three-step premium rate.

7.
ABSTRACT

It is well known that the Hodges–Lehmann estimator is asymptotically efficient for the location parameter of the logistic distribution. In this article we give a simple and direct proof that, under mild conditions, this property characterizes the logistic among all symmetric location distributions. Using pseudolikelihood, we also show how to obtain from the Hodges–Lehmann estimator an asymptotically efficient estimator of the scale parameter of the logistic distribution.
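The Hodges–Lehmann location estimator itself is simple to state: it is the median of all pairwise (Walsh) averages of the sample. A minimal stdlib sketch:

```python
from itertools import combinations_with_replacement
from statistics import median

def hodges_lehmann(sample):
    """Median of (x_i + x_j) / 2 over all pairs with i <= j (Walsh averages)."""
    walsh = [(a + b) / 2 for a, b in combinations_with_replacement(sample, 2)]
    return median(walsh)
```

For instance, a single gross outlier barely moves the estimate: `hodges_lehmann([1, 2, 3, 4, 100])` still returns 3.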

8.
Abstract

In this paper, we consider the optimal investment and premium control problem for insurers who worry about model ambiguity. In contrast to previous works, we assume that the insurer's surplus process is described by a non-homogeneous compound Poisson model and that the insurer faces ambiguity in both the financial market and the insurance market. Our purpose is to determine the impact of model ambiguity on the optimal policies. With the objective of maximizing the expected utility of terminal wealth, closed-form solutions for the optimal investment and premium policies are obtained by solving the HJB equations. Finally, numerical examples are given to illustrate the results.

9.
ABSTRACT

This paper deals with Bayes, robust Bayes, and minimax predictions in a subfamily of scale parameters under an asymmetric precautionary loss function. In Bayesian statistical inference, the goal is to obtain optimal rules under a specified loss function and an explicit prior distribution over the parameter space. In practice, however, we may be unable to specify the prior completely; or, when a problem must be solved by two statisticians, they may agree on the choice of the prior but not on the values of the hyperparameters. A common approach to prior uncertainty in Bayesian analysis is to choose a class of prior distributions and compute some functional quantity over it. This is known as robust Bayesian analysis, which incorporates prior knowledge through a class of priors Γ as a safeguard against bad choices of the hyperparameters. Under a scale-invariant precautionary loss function, we deal with robust Bayes predictions of Y based on X. We carry out a simulation study and a real data analysis to illustrate the practical utility of the prediction procedure.

10.
ABSTRACT

In this article, Bayesian estimation of the expected cell counts for log-linear models is considered. A prior specified for the log-linear parameters is used to induce a prior for the expected cell counts, by means of the family and parameters of the prior distributions. This approach is more cost-effective than working directly with cell counts, because converting prior information into a prior distribution on the log-linear parameters is easier than doing so on the expected cell counts. In passing from the prior on the log-linear parameters to the prior on the expected cell counts, we face a singularity problem in the variance matrix of the prior distribution, and we add a new precision parameter to solve it. A numerical example is given to illustrate the use of the new parameter.

11.
Abstract

This article studies E-Bayesian estimation and its E-posterior risk for the failure rate derived from the exponential distribution, in the case of two hyperparameters. To measure the estimation risk, the E-posterior risk (expected posterior risk) is defined on the basis of the E-Bayesian estimation. Moreover, under different prior distributions of the hyperparameters, formulas for the E-Bayesian estimators and for the E-posterior risk are given; these estimators are derived from a conjugate prior distribution for the unknown parameter under the squared error loss function. Monte Carlo simulations are performed to compare the performance of the proposed estimation methods, and a real data set is analyzed for illustrative purposes, with the results compared on the basis of E-posterior risk.

12.
The most popular method for detecting an association between two random variables is to test H0: ρ = 0, the hypothesis that Pearson's correlation is equal to zero. It is well known, however, that Pearson's correlation is not robust, meaning roughly that small changes in any distribution, including any bivariate normal distribution as a special case, can alter its value. Moreover, the usual estimate r of ρ is sensitive to only a few outliers, which can mask a true association. A simple alternative to testing H0: ρ = 0 is to switch to a measure of association that guards against outliers among the marginal distributions, such as Kendall's tau, Spearman's rho, a Winsorized correlation, or a so-called percentage bend correlation. But these methods are known to fail to take into account the overall structure of the data. Many measures of association that do take into account the overall structure of the data have been proposed, but it seems that nothing is known about how they might be used to detect dependence. One such measure of association is selected, designed so that under bivariate normality its estimator gives a reasonably accurate estimate of ρ. Methods for testing the hypothesis of a zero correlation are then studied.
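The masking effect described above is easy to reproduce. In this toy illustration (our data, not the article's), a single gross outlier reverses the sign of Pearson's r, while a rank-based measure such as Kendall's tau still detects the monotone association:

```python
import math
from itertools import combinations

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def kendall_tau(x, y):
    """Kendall's tau for tie-free data: (concordant - discordant) / total pairs."""
    c = d = 0
    for (x1, y1), (x2, y2) in combinations(zip(x, y), 2):
        s = (x1 - x2) * (y1 - y2)
        c += s > 0
        d += s < 0
    return (c - d) / (c + d)

x = list(range(10)) + [100]   # ten points lying exactly on y = x ...
y = list(range(10)) + [-100]  # ... plus one gross outlier
```

Here `pearson_r(x, y)` is negative despite ten of eleven points lying on y = x, while `kendall_tau(x, y)` remains clearly positive (35/55 ≈ 0.64).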

13.
We discuss the problem of selecting among alternative parametric models within the Bayesian framework. For model selection problems involving non-nested models, the common objective choice of a prior on the model space is the uniform distribution; the same applies when the models are nested. It is our contention that assigning equal prior probability to each model is oversimplistic. Consequently, we introduce a novel approach to determine model prior probabilities objectively, conditionally on the choice of priors for the parameters of the models. The idea is based on the notion of the worth of having each model within the selection process. At the heart of the procedure is the measurement of this worth using the Kullback–Leibler divergence between densities from different models.
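The Kullback–Leibler ingredient can be illustrated with the well-known closed form for two normal densities. This is our example of the divergence itself, not the paper's construction of model prior probabilities:

```python
import math

def kl_normal(mu1, s1, mu2, s2):
    """KL( N(mu1, s1^2) || N(mu2, s2^2) )
    = ln(s2/s1) + (s1^2 + (mu1 - mu2)^2) / (2 s2^2) - 1/2."""
    return math.log(s2 / s1) + (s1**2 + (mu1 - mu2) ** 2) / (2 * s2**2) - 0.5
```

The divergence is zero iff the two densities coincide, e.g. `kl_normal(0, 1, 0, 1)` returns 0, while `kl_normal(1, 1, 0, 1)` returns 0.5.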

14.

Recently, exact confidence bounds and exact likelihood inference have been developed based on hybrid censored samples by Chen and Bhattacharyya [Chen, S. and Bhattacharyya, G.K. (1998). Exact confidence bounds for an exponential parameter under hybrid censoring. Communications in Statistics – Theory and Methods, 17, 1857–1870.], Childs et al. [Childs, A., Chandrasekar, B., Balakrishnan, N. and Kundu, D. (2003). Exact likelihood inference based on Type-I and Type-II hybrid censored samples from the exponential distribution. Annals of the Institute of Statistical Mathematics, 55, 319–330.], and Chandrasekar et al. [Chandrasekar, B., Childs, A. and Balakrishnan, N. (2004). Exact likelihood inference for the exponential distribution under generalized Type-I and Type-II hybrid censoring. Naval Research Logistics, 51, 994–1004.] for the case of the exponential distribution. In this article, we propose a unified hybrid censoring scheme (HCS) that includes many cases considered earlier as special cases. We then derive the exact distribution of the maximum likelihood estimator as well as exact confidence intervals for the mean of the exponential distribution under this general unified HCS. Finally, we present some examples to illustrate all the methods of inference developed here.
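One of the special cases, Type-I hybrid censoring, stops observation at min(r-th failure time, T); the MLE of the exponential mean is then the total time on test divided by the number of observed failures. A hedged sketch (our code and variable names, not the article's):

```python
def type1_hybrid_mle(lifetimes, r, T):
    """MLE of the exponential mean under Type-I hybrid censoring:
    observation stops at stop = min(x_(r), T); units still alive at `stop`
    contribute censored time `stop` each."""
    x = sorted(lifetimes)
    n = len(x)
    stop = min(x[r - 1], T)
    observed = [t for t in x if t <= stop]
    d = len(observed)
    if d == 0:
        raise ValueError("MLE undefined: no failures before the stopping time")
    return (sum(observed) + (n - d) * stop) / d
```

For example, with lifetimes [0.5, 1.0, 2.0, 4.0], r = 3 and T = 1.5, observation stops at T = 1.5, two failures are seen, and the MLE is (0.5 + 1.0 + 2 × 1.5) / 2 = 2.25.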

15.
Abstract

In order to discriminate between two probability distributions, extensions of Kullback–Leibler (KL) information have been proposed in the literature. In recent years, an extension called cumulative Kullback–Leibler (CKL) information, which is closely related to equilibrium distributions, has been considered. In this paper, we propose an adjusted version of CKL based on equilibrium distributions and investigate some properties of the proposed divergence measure. A test of exponentiality based on the adjusted measure is proposed. The empirical power of the test is calculated and compared with some existing standard tests of exponentiality. The results show that, for some important alternative distributions, our proposed test performs better than some of the existing tests.

16.
ABSTRACT

This article addresses the problem of repeat detection used in the comparison of significant repeats in sequences. The case of self-overlapping leftmost repeats in large sequences generated by a homogeneous stationary Markov chain has not been treated in the literature. In this work, we are interested in approximating the distribution of the number of self-overlapping leftmost sufficiently long repeats in a homogeneous stationary Markov chain. Using the Chen–Stein method, we show that this distribution is approximated by the Poisson distribution. Moreover, we show that the approximation extends to the case where the sequences are generated by an m-order Markov chain.

17.
ABSTRACT

On the basis of Csiszar's φ-divergence discrimination information, we propose a measure of discrepancy between the equilibrium distributions associated with two distributions. After proving that a distribution can be characterized by its associated equilibrium distribution, we construct a Rényi distance between equilibrium distributions, which leads us to propose an EDF-based goodness-of-fit test for the exponential distribution. To compare the performance of the proposed test, some well-known EDF-based tests and some entropy-based tests are considered. Based on the simulation results, the proposed test has better power than the competing entropy-based tests against alternatives with decreasing hazard rate functions. The use of the proposed test is demonstrated in an illustrative example.

18.
Abstract

The Kruskal–Wallis test is a popular nonparametric test for comparing k independent samples. In this article we propose a new algorithm to compute the exact null distribution of the Kruskal–Wallis test. Generating the exact null distribution is needed in order to compare several approximation methods. As by-products, we obtain the 5% cut-off points of the exact null distribution, which StatXact cannot produce. We also investigate graphically a reason why the exact and approximate distributions differ, and hope this will be a useful tutorial tool for teaching the Kruskal–Wallis test in undergraduate courses.
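In the spirit of the problem described, the exact null distribution for very small group sizes can be obtained by brute-force enumeration of rank assignments. This sketch is ours, not the paper's algorithm (which is presumably far more efficient):

```python
from itertools import combinations
from collections import Counter

def exact_kw_distribution(n1, n2, n3):
    """Exact null distribution of the Kruskal-Wallis statistic
    H = 12 / (N (N+1)) * sum_i R_i^2 / n_i - 3 (N+1)
    for three groups, by enumerating all assignments of ranks 1..N."""
    N = n1 + n2 + n3
    ranks = set(range(1, N + 1))
    counts = Counter()
    total = 0
    for g1 in combinations(sorted(ranks), n1):
        rest = ranks - set(g1)
        for g2 in combinations(sorted(rest), n2):
            g3 = rest - set(g2)
            h = 12.0 / (N * (N + 1)) * (
                sum(g1) ** 2 / n1 + sum(g2) ** 2 / n2 + sum(g3) ** 2 / n3
            ) - 3 * (N + 1)
            counts[round(h, 10)] += 1  # round to merge float duplicates
            total += 1
    return {h: c / total for h, c in counts.items()}

dist = exact_kw_distribution(2, 2, 2)
```

With group sizes (2, 2, 2) there are C(6,2) × C(4,2) = 90 assignments; the largest value H = 32/7 occurs for the 6 orderings of the rank sets {1,2}, {3,4}, {5,6}, so its exact probability is 6/90.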

19.
ABSTRACT

Conditional tests are constructed by conditioning a fit measure on a minimal sufficient statistic. To calculate the p-value of these tests, Monte Carlo methods with co-sufficient samples can be used. In this paper we show how to simulate co-sufficient samples when the data distribution belongs to the exponential family with doubly transitive sufficient statistics. The proposed method is illustrated using the beta distribution.
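For intuition, a textbook special case of a co-sufficient sample is the exponential model (our example; the paper illustrates its method with the beta distribution): conditional on the sufficient statistic T = Σ x_i, the sample is distributed as T times uniform spacings on [0, 1].

```python
import random

def cosufficient_exponential_sample(data, rng=None):
    """Draw a new sample with the same sum as `data`, distributed according
    to the exponential model conditional on that sum: T times the spacings
    of n - 1 uniform points on [0, 1]."""
    rng = rng or random.Random()
    n, total = len(data), sum(data)
    cuts = sorted(rng.random() for _ in range(n - 1))
    points = [0.0] + cuts + [1.0]
    return [total * (b - a) for a, b in zip(points, points[1:])]
```

Every draw has the same length and (up to rounding) the same sum as the original data, so any fit measure computed on these draws yields a conditional Monte Carlo p-value.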

20.
Abstract

In this article, we propose a two-stage generalized case–cohort design and develop an efficient inference procedure for data collected under this design. In the first stage, we observe the failure time, the censoring indicator, and covariates that are easy or cheap to measure; in the second stage, we select a subcohort by simple random sampling, together with a subset of the failures among the remaining first-stage subjects, and observe their exposures, which are difficult or expensive to measure. We derive estimators for the regression parameters in the accelerated failure time model under the two-stage generalized case–cohort design through an estimated augmented estimating equation and the kernel function method. The resulting estimators are shown to be consistent and asymptotically normal. The finite-sample performance of the proposed method is evaluated through simulation studies. The proposed method is applied to a real data set from the National Wilms Tumor Study Group.

