Similar Articles
20 similar articles found (search time: 312 ms).
1.
ABSTRACT

Life tables used in life insurance are often calibrated to show the survival function of the age-at-death distribution at exact integer ages. Actuaries usually make fractional age assumptions (FAAs) when valuing payments that are not restricted to integer ages. Traditional FAAs have the advantage of simplicity but cannot guarantee to capture the real trends of the survival function precisely, and sometimes even produce a non-intuitive overall shape of the force of mortality. In fact, an FAA is an interpolation between integer-age values that are accepted as given. In this article, we introduce the Kriging model, widely used as a metamodel for expensive simulations, to fit the survival function at integer ages, and then use the precisely constructed survival function to build the force of mortality and the life expectancy. Experimental results from a simulated life table (a Makehamized life table) and two real life tables (the Chinese and US life tables) show that the actuarial quantities (survival function, force of mortality, and life expectancy) produced by the Kriging model are much more accurate than those produced by commonly used FAAs: the uniform distribution of deaths (UDD) assumption, the constant force assumption, and the Balducci assumption.

2.
Zhou Xiaojian et al. Statistical Research (《统计研究》), 2014, 31(9): 102-106
Life tables provide the survival function only at integer ages, so computing it at non-integer ages requires a fractional age assumption. Classical fractional age assumptions are mathematically tractable, but they tend to make the force of mortality discontinuous and, more importantly, cannot guarantee accuracy at fractional ages. A fractional age assumption is, in essence, an interpolation technique. This study introduces the Kriging model, an interpolator with excellent performance, into fractional age assumptions: the survival function is interpolated at integer ages, and the well-fitted survival function is then used to construct the force of mortality and the expectation of remaining life. The validity of the Kriging-based fractional age assumption is verified against a survival function following Makeham's law; the results show that the interpolation performance of the Kriging model far exceeds that of the classical fractional age assumption models.

3.
This paper introduces a quadratic fractional age assumption which makes the force of mortality and the survival function continuous at all ages. The necessary and sufficient condition for the assumption to be valid is derived. Important life table parameters are estimated and applications are shown using several life tables.
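The paper's exact quadratic construction is not spelled out in the abstract; as an illustration only, a quadratic through three consecutive integer-age survival values already gives a force of mortality that is continuous inside the two-year span:

```python
def quadratic_survival(s0, s1, s2, t):
    """Quadratic (Lagrange) interpolation of the survival function through
    S(x) = s0, S(x+1) = s1, S(x+2) = s2, at fractional age t in [0, 2].
    Illustrative only -- not necessarily the paper's exact quadratic FAA."""
    return (s0 * (t - 1) * (t - 2) / 2
            - s1 * t * (t - 2)
            + s2 * t * (t - 1) / 2)

def force_of_mortality(s0, s1, s2, t, h=1e-6):
    """mu(x+t) = -S'(x+t) / S(x+t), via a central difference."""
    s = quadratic_survival(s0, s1, s2, t)
    ds = (quadratic_survival(s0, s1, s2, t + h)
          - quadratic_survival(s0, s1, s2, t - h)) / (2 * h)
    return -ds / s
```

By construction the quadratic reproduces the three integer-age values exactly, and its derivative (hence the force of mortality) has no jump at the interior age x+1.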

4.
To obtain maximum likelihood (ML) estimation in factor analysis (FA), we propose in this paper a novel and fast conditional maximization (CM) algorithm, with quadratic and monotone convergence, consisting of a sequence of CM log-likelihood (CML) steps. The main contribution of this algorithm is that the closed-form expression for the parameter to be updated in each step can be obtained explicitly, without resorting to any numerical optimization methods. In addition, a new ECME algorithm similar to that of Liu (Biometrika 81:633–648, 1994) is obtained as a by-product; it turns out to be very close to the simple iteration algorithm proposed by Lawley (Proc. R. Soc. Edinb. 60:64–82, 1940), but our algorithm is guaranteed to increase the log-likelihood at every iteration and hence to converge. Both algorithms inherit the simplicity and stability of EM, but their convergence behaviors are quite different, as revealed in our extensive simulations: (1) in most situations, ECME and EM perform similarly; (2) CM outperforms EM and ECME substantially in all situations, whether assessed by CPU time or by the number of iterations. Especially in cases close to the well-known Heywood case, it accelerates EM by factors of around 100 or more. Also, CM is much less sensitive to the choice of starting values than EM and ECME.

5.
Consider a system where units having independent and identically distributed lifetimes enter according to a nonhomogeneous Poisson process. After the unit's life in the system, the unit departs. For a fixed system time, this paper relates the units' common underlying life distribution to the distribution of the ages of units in the system, the distribution of the system life of units that departed the system, and the distribution of the system life of units that have recently departed. The results can be used to estimate the underlying life distribution, or a truncated version of it, based on the ages and/or most recent ages at death in both one-sample and two-sample situations. Results include a complete characterization of the possible distributions of the ages of units in the system, how to estimate the underlying life distribution from the most recent ages at death, and how to test for an underlying monotone failure rate function based on independent samples from the ages and most recent ages at death. Two-sample inferences that involve a likelihood ratio ordering make use of the results in Dykstra et al. (1995, J. Am. Stat. Assoc. 90(431):1030–1040), which provides the maximum likelihood estimators and a likelihood ratio test when the two distributions satisfy a likelihood ratio ordering. For the ages of the active units and the ages at death among the departed units, limits for their distributions and strong limiting results for their empirical distributions are provided. Disclaimer: The views expressed in this paper are those of the author and may not represent the views of the U.S. FDA.

6.
Summary. On the basis of serological data from prevalence studies of rubella, mumps and hepatitis A, the paper describes a flexible local maximum likelihood method for the estimation of the rate at which susceptible individuals acquire infection at different ages. In contrast with parametric models that have been used before in the literature, the local polynomial likelihood method allows this age-dependent force of infection to be modelled without making any assumptions about the parametric structure. Moreover, this method allows for simultaneous nonparametric estimation of age-specific incidence and prevalence. Unconstrained models may lead to negative estimates for the force of infection at certain ages. To overcome this problem and to guarantee maximal flexibility, the local smoother can be constrained to be monotone. It turns out that different parametric and nonparametric estimates of the force of infection can exhibit considerably different qualitative features like location and the number of maxima, emphasizing the importance of a well-chosen flexible statistical model.

7.
This paper proposes a new probabilistic classification algorithm using a Markov random field approach. The joint distribution of class labels is explicitly modelled using the distances between feature vectors. Intuitively, a class label should depend more on class labels which are closer in the feature space, than those which are further away. Our approach builds on previous work by Holmes and Adams (J. R. Stat. Soc. Ser. B 64:295–306, 2002; Biometrika 90:99–112, 2003) and Cucala et al. (J. Am. Stat. Assoc. 104:263–273, 2009). Our work shares many of the advantages of these approaches in providing a probabilistic basis for the statistical inference. In comparison to previous work, we present a more efficient computational algorithm to overcome the intractability of the Markov random field model. The results of our algorithm are encouraging in comparison to the k-nearest neighbour algorithm.
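As a point of comparison, the k-nearest-neighbour baseline mentioned at the end of this abstract can be sketched in a few lines (the Markov random field model itself is not reproduced here; the training data below are made up):

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points. `train` is a list of (feature_vector, label) pairs.
    A plain baseline -- not the paper's Markov random field model."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    nearest = sorted(train, key=lambda item: dist2(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((0.2, 0.1), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
print(knn_classify(train, (0.0, 0.1)))   # the nearest neighbours are all "a"
```

The MRF approach replaces this hard vote with an explicit joint probability model over all labels, weighted by feature-space distance.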

8.
Application of the Bootstrap Method to Mortality Models
Sun Jiamei, Duan Baige. Statistical Research (《统计研究》), 2010, 27(6): 100-105
Because mortality improvement differs across countries, the mortality models used around the world differ as well, and different age ranges call for different models. In practice, the Gompertz, Makeham, and Weibull models are commonly used to fit mortality at advanced ages, but because death data at advanced ages are sparse, the goodness of fit of these models is rarely examined from a statistical standpoint. This study therefore proposes using the Bootstrap method to test mortality model assumptions, covering goodness-of-fit testing, parameter estimation, and hypothesis tests on the parameters. Finally, using crude death rates for the Chinese population aged 65-89 over 1997-2007, we select a suitable mortality model and illustrate the full Bootstrap-based model-checking procedure.
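A minimal sketch of the kind of procedure this abstract describes: fit a Gompertz law to crude rates at advanced ages, then residual-bootstrap a parameter to assess its sampling variability. The synthetic data, noise level, and bootstrap scheme below are illustrative assumptions, not the paper's actual data or exact method:

```python
import math
import random

def fit_gompertz(ages, rates):
    """OLS fit of log mu(x) = a + b*x, i.e. the Gompertz law mu = B * c**x
    with a = log(B) and b = log(c)."""
    n = len(ages)
    y = [math.log(r) for r in rates]
    xbar = sum(ages) / n
    ybar = sum(y) / n
    b = (sum((x - xbar) * (v - ybar) for x, v in zip(ages, y))
         / sum((x - xbar) ** 2 for x in ages))
    return ybar - b * xbar, b

def bootstrap_slope(ages, rates, n_boot=200, seed=1):
    """Residual bootstrap of the Gompertz slope b."""
    rng = random.Random(seed)
    a, b = fit_gompertz(ages, rates)
    resid = [math.log(r) - (a + b * x) for x, r in zip(ages, rates)]
    slopes = []
    for _ in range(n_boot):
        y_star = [a + b * x + rng.choice(resid) for x in ages]
        slopes.append(fit_gompertz(ages, [math.exp(v) for v in y_star])[1])
    return slopes

# Synthetic crude rates for ages 65-89: mu(x) = 0.0001 * 1.1**x plus noise
ages = list(range(65, 90))
noise = random.Random(0)
rates = [0.0001 * 1.1 ** x * math.exp(noise.gauss(0, 0.02)) for x in ages]
slopes = bootstrap_slope(ages, rates)
# the bootstrap distribution centres near the true slope log(1.1)
```

The spread of `slopes` gives a bootstrap standard error for the slope, from which confidence intervals or parameter hypothesis tests follow.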

9.
In randomized clinical trials, we are often concerned with comparing two-sample survival data. Although the log-rank test is usually suitable for this purpose, it may result in substantial power loss when the two groups have nonproportional hazards. In a more general class of survival models of Yang and Prentice (Biometrika 92:1–17, 2005), which includes the log-rank test as a special case, we improve model efficiency by incorporating auxiliary covariates that are correlated with the survival times. In a model-free form, we augment the estimating equation with auxiliary covariates, and establish the efficiency improvement using the semiparametric theories in Zhang et al. (Biometrics 64:707–715, 2008) and Lu and Tsiatis (Biometrics 95:674–679, 2008). Under minimal assumptions, our approach produces an unbiased, asymptotically normal estimator with additional efficiency gain. Simulation studies and an application to a leukemia study show the satisfactory performance of the proposed method.

10.
A fast new algorithm is proposed for numerical computation of (approximate) D-optimal designs. This cocktail algorithm extends the well-known vertex direction method (VDM; Fedorov in Theory of Optimal Experiments, 1972) and the multiplicative algorithm (Silvey et al. in Commun. Stat. Theory Methods 14:1379–1389, 1978), and shares their simplicity and monotonic convergence properties. Numerical examples show that the cocktail algorithm can lead to dramatically improved speed, sometimes by orders of magnitude, relative to either the multiplicative algorithm or the vertex exchange method (a variant of VDM). Key to the improved speed is a new nearest neighbor exchange strategy, which acts locally and complements the global effect of the multiplicative algorithm. Possible extensions to related problems such as nonparametric maximum likelihood estimation are mentioned.
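The multiplicative update at the heart of this family of methods can be sketched for a simple linear model. The candidate points, iteration count, and two-parameter model below are illustrative assumptions; the cocktail algorithm's vertex-direction and nearest-neighbour exchange steps, which give it its speed, are omitted:

```python
def d_optimal_weights(xs, n_iter=500):
    """Multiplicative algorithm for an approximate D-optimal design under
    the simple linear model f(x) = (1, x). Sketch only: the cocktail
    algorithm layers vertex-direction and nearest-neighbour exchange
    steps on top of this update."""
    n = len(xs)
    w = [1.0 / n] * n          # start from the uniform design
    p = 2                      # number of model parameters
    for _ in range(n_iter):
        # information matrix M(w) = sum_i w_i f(x_i) f(x_i)^T (2x2, symmetric)
        m00 = sum(w)
        m01 = sum(wi * x for wi, x in zip(w, xs))
        m11 = sum(wi * x * x for wi, x in zip(w, xs))
        det = m00 * m11 - m01 * m01
        # variance function d(x_i, w) = f(x_i)^T M(w)^{-1} f(x_i)
        d = [(m11 - 2 * m01 * x + m00 * x * x) / det for x in xs]
        # multiplicative update w_i <- w_i * d(x_i, w) / p; weights stay summed to 1
        w = [wi * di / p for wi, di in zip(w, d)]
    return w

xs = [-1.0, -0.5, 0.0, 0.5, 1.0]
w = d_optimal_weights(xs)
print([round(wi, 3) for wi in w])  # mass concentrates on the endpoints -1 and 1
```

For a straight-line model on a symmetric interval the D-optimal design places half its mass on each endpoint, which the iteration recovers; its slow geometric removal of interior weight is exactly what the exchange strategies accelerate.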

11.
This paper examines the finite-sample behavior of the Lagrange Multiplier (LM) test for fractional integration proposed by Breitung and Hassler (J. Econom. 110:167–185, 2002). We find by extensive Monte Carlo simulations that size distortions can be quite large in small samples. These are caused by a finite-sample bias towards the alternative. Analytic expressions for this bias are derived, based on which the test can easily be corrected.

12.
If biological aging is understood as a process of damage accumulation, it does not necessarily lead to an increasing mortality rate. Within the framework of the suggested models and relevant examples we show that even for monotonically increasing degradation, the mortality rate can ultimately decrease. Aging properties of systems with imperfect repair are also studied. It is shown that for some models of imperfect repair the corresponding age process is monotone and stable. This means that as t → ∞, degradation slows down, which results in mortality rate deceleration and its possible convergence to a constant.

13.
Consider a planner choosing treatments for observationally identical persons who vary in their response to treatment. There are two treatments with binary outcomes. One is a status quo with known population success rate. The other is an innovation for which the data are the outcomes of an experiment. Karlin and Rubin [1956. The theory of decision procedures for distributions with monotone likelihood ratio. Ann. Math. Statist. 27, 272–299] assumed that the objective is to maximize the population success rate and showed that the admissible rules are the KR-monotone rules. These assign everyone to the status quo if the number of experimental successes is below a specified threshold and everyone to the innovation if experimental success exceeds the threshold. We assume that the objective is to maximize a concave-monotone function f(·) of the success rate and show that admissibility depends on the curvature of f(·). Let a fractional monotone rule be one where the fraction of persons assigned to the innovation weakly increases with the number of experimental successes. We show that the class of fractional monotone rules is complete if f(·) is concave and strictly monotone. Define an M-step monotone rule to be a fractional monotone rule with an interior fractional treatment assignment for no more than M consecutive values of the number of experimental successes. The M-step monotone rules form a complete class if f(·) is differentiable and has sufficiently weak curvature. Bayes rules and the minimax-regret rule depend on the curvature of the welfare function.
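The two rule classes this abstract contrasts are easy to make concrete. The threshold convention and the example schedule below are hypothetical illustrations, not values derived from the paper's welfare function:

```python
def kr_monotone_rule(successes, threshold):
    """KR-monotone rule as described above: everyone stays on the status
    quo below the threshold, everyone gets the innovation at or above it.
    Returns the fraction assigned to the innovation (0 or 1)."""
    return 0.0 if successes < threshold else 1.0

def fractional_monotone_rule(successes, schedule):
    """A fractional monotone rule: schedule[s] is the fraction assigned
    to the innovation after s experimental successes, and must be weakly
    increasing in s. The schedule passed in below is a made-up example."""
    assert all(a <= b for a, b in zip(schedule, schedule[1:]))
    return schedule[successes]

# Experiment with 4 subjects, threshold at 2 successes
print([kr_monotone_rule(s, 2) for s in range(5)])
print([fractional_monotone_rule(s, [0.0, 0.0, 0.5, 1.0, 1.0]) for s in range(5)])
```

A KR-monotone rule is the special case of a fractional monotone rule whose schedule jumps straight from 0 to 1; the paper's M-step rules allow up to M consecutive interior (fractional) values between those extremes.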

14.
Linear increments (LI) are used to analyse repeated outcome data with missing values. Previously, two LI methods have been proposed, one allowing non‐monotone missingness but not independent measurement error and one allowing independent measurement error but only monotone missingness. In both, it was suggested that the expected increment could depend on current outcome. We show that LI can allow non‐monotone missingness and either independent measurement error of unknown variance or dependence of expected increment on current outcome but not both. A popular alternative to LI is a multivariate normal model ignoring the missingness pattern. This gives consistent estimation when data are normally distributed and missing at random (MAR). We clarify the relation between MAR and the assumptions of LI and show that for continuous outcomes multivariate normal estimators are also consistent under (non‐MAR and non‐normal) assumptions not much stronger than those of LI. Moreover, when missingness is non‐monotone, they are typically more efficient.

15.
Clusters of galaxies are a useful proxy to trace the distribution of mass in the universe. By measuring the mass of clusters of galaxies on different scales, one can follow the evolution of the mass distribution (Martínez and Saar, Statistics of the Galaxy Distribution, 2002). It can be shown that finding galaxy clusters is equivalent to finding density contour clusters (Hartigan, Clustering Algorithms, 1975): connected components of the level set S c ≡{f>c} where f is a probability density function. Cuevas et al. (Can. J. Stat. 28, 367–382, 2000; Comput. Stat. Data Anal. 36, 441–459, 2001) proposed a nonparametric method for density contour clusters, attempting to find density contour clusters by the minimal spanning tree. While their algorithm is conceptually simple, it requires intensive computations for large datasets. We propose a more efficient clustering method based on their algorithm with the Fast Fourier Transform (FFT). The method is applied to a study of galaxy clustering on large astronomical sky survey data.

16.
This article proposes a new fractional age assumption (FAA) based on cubic polynomial interpolation (CPI) and applies it to estimate the mortality rate and related actuarial quantities. The validity of the method under CPI is proved theoretically, and the advantages of the CPI assumption are discussed from three perspectives: the death information utilized, the properties of the force of mortality, and an optimality criterion. The results show that the CPI assumption has clear advantages over the other FAAs in the literature. Finally, under the CPI assumption we study the calculation of some important actuarial quantities in life contingencies.
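As an illustration of CPI-style interpolation (not necessarily the paper's exact construction), a cubic Lagrange polynomial through four consecutive integer-age survival values gives fractional-age values with a smooth force of mortality inside the span:

```python
def cubic_survival(s, t):
    """Cubic Lagrange interpolation of the survival function through four
    consecutive integer-age values s = [S(x), S(x+1), S(x+2), S(x+3)],
    evaluated at fractional age t in [0, 3]. Illustrative sketch of
    CPI-style interpolation only."""
    nodes = [0.0, 1.0, 2.0, 3.0]
    total = 0.0
    for i, si in enumerate(s):
        li = 1.0                       # i-th Lagrange basis polynomial at t
        for j, xj in enumerate(nodes):
            if j != i:
                li *= (t - xj) / (nodes[i] - xj)
        total += si * li
    return total
```

Compared with the one-year FAAs, a cubic uses death information from four ages at once, which is one of the three perspectives the article discusses.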

17.
In order to study developmental variables, for example, neuromotor development of children and adolescents, monotone fitting is typically needed. Most methods for estimating a monotone regression function non-parametrically, however, are not straightforward to implement, a difficult issue being the choice of smoothing parameters. In this paper, a convenient implementation of the monotone B-spline estimates of Ramsay [Monotone regression splines in action (with discussion), Stat. Sci. 3 (1988), pp. 425–461] and Kelly and Rice [Monotone smoothing with application to dose-response curves and the assessment of synergism, Biometrics 46 (1990), pp. 1071–1085] is proposed and applied to neuromotor data. Knots are selected adaptively using ideas found in Friedman and Silverman [Flexible parsimonious smoothing and additive modelling (with discussion), Technometrics 31 (1989), pp. 3–39], yielding a flexible algorithm to automatically and accurately estimate a monotone regression function. Using splines also allows other aspects to be included in the estimation problem simultaneously, such as modelling a constant difference between two groups or a known jump in the regression function. Finally, an estimate is derived which is not only monotone but also 'levels off' (i.e. becomes constant after some point). This is useful when the developmental variable is known to attain a maximum/minimum within the interval of observation.

18.
Summary. The World Health Organization revises the international classification of diseases about every 10 years to stay abreast of advances in medical science and to compare international health statistics. However, the new revision (i.e. the 10th revision) introduces discontinuities in mortality trends, making it impossible to compare the mortality statistics before and after the revision directly. The US National Center for Health Statistics published comparability ratios to correct the discontinuities between the two sets of mortality data: one coded by the ninth revision and the other by the 10th revision. We propose a parametric two-stage model to produce new comparability ratios and use these ratios to correct the discontinuities. The asymptotic behaviour of the comparability ratios is investigated. Our model not only measures the extent of discontinuities in mortality trends but can also be used to forecast future mortality. Compared with the National Center for Health Statistics' ratios, our comparability ratios smooth out the discontinuities better for most causes.

19.
In empirical Bayes inference one is typically interested in sampling from the posterior distribution of a parameter with a hyper-parameter set to its maximum likelihood estimate. This is often problematic, particularly when the likelihood function of the hyper-parameter is not available in closed form and the posterior distribution is intractable. Previous work has dealt with this problem using a multi-step approach based on the EM algorithm and Markov chain Monte Carlo (MCMC). We propose a framework based on recent developments in adaptive MCMC, where this problem is addressed more efficiently using a single Monte Carlo run. We discuss the convergence of the algorithm and its connection with the EM algorithm. We apply our algorithm to the Bayesian Lasso of Park and Casella (J. Am. Stat. Assoc. 103:681–686, 2008) and to the empirical Bayes variable selection of George and Foster (J. Am. Stat. Assoc. 87:731–747, 2000).

20.
This paper examines the existence of time trends in the infant mortality rates in a number of countries in the twentieth century. We test for the presence of deterministic trends by adopting a linear model for the log-transformed data. Instead of assuming that the error term is a stationary I(0), or alternatively, a non-stationary I(1) process, we allow for the possibility of fractional integration and hence for a much greater degree of flexibility in the dynamic specification of the series. Indeed, once the linear trend is removed, all series appear to be I(d) with 0<d<1, implying long-range dependence. As expected, the time trend coefficients are significantly negative, although of a different magnitude from those obtained assuming integer orders of differentiation.
