Similar Literature
20 similar documents found
1.
Summary. The paper analyses a time series of infant mortality rates in the north of England from 1921 to the early 1970s at a spatial scale that is more disaggregated than in previous studies of infant mortality trends in this period. The paper describes regression methods to obtain mortality gradients over socioeconomic indicators from the censuses of 1931, 1951, 1961 and 1971 and to assess whether there is any evidence for widening spatial inequalities in infant mortality outcomes against a background of an overall reduction in the infant mortality rate. Changes in the degree of inequality are also formally assessed by inequality measures such as the Gini and Theil indices, for which sampling densities are obtained and significant changes assessed. The analysis concerns a relatively infrequent outcome (especially towards the end of the period that is considered) and a high proportion of districts with small populations, so necessitating the use of appropriate methods for deriving indices of inequality and for regression modelling.
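As a concrete illustration of the inequality measures mentioned above, the following minimal Python sketch computes population-weighted Gini and Theil indices from district-level rates. The data, function names and weighting scheme are hypothetical, and the sketch gives point estimates only; the paper's derivation of sampling densities for these indices is not reproduced here.

```python
import numpy as np

def gini(rates, weights):
    """Population-weighted Gini index of district mortality rates.

    Computed from all pairwise absolute differences, weighted by
    district population shares; 0 means perfect equality.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    r = np.asarray(rates, dtype=float)
    mean = np.sum(w * r)
    # G = sum_{i,j} w_i w_j |r_i - r_j| / (2 * mean)
    diff = np.abs(r[:, None] - r[None, :])
    return np.sum(w[:, None] * w[None, :] * diff) / (2.0 * mean)

def theil(rates, weights):
    """Population-weighted Theil index: sum_i w_i (r_i/mu) log(r_i/mu)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    r = np.asarray(rates, dtype=float)
    mu = np.sum(w * r)
    ratio = r / mu
    return np.sum(w * ratio * np.log(ratio))

# Hypothetical example: infant deaths per 1,000 births in five districts.
rates = np.array([45.0, 60.0, 38.0, 72.0, 50.0])
births = np.array([1200, 300, 2500, 150, 900])   # small districts included
print(gini(rates, births), theil(rates, births))
```

Weighting by births matters here because, as the abstract notes, many districts have small populations whose crude rates are noisy.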

2.
Impacts of complex emergencies or relief interventions have often been evaluated by comparing absolute mortality to international standardized mortality rates. A better evaluation would compare with the local baseline mortality of the affected populations. A projection of population-based survival data into the period of the emergency or intervention, based on information from before the emergency, can create a local baseline reference. We find that a log-transformed Gaussian time series model in which the standard errors of the estimated rates are included in the variance has the best forecasting capacity. However, if time-at-risk during the forecasted period is known, then forecasting may be done using a Poisson time series model with overdispersion. In either case, the standard error of the estimated rates must be included in the variance of the model, either additively in a Gaussian model or multiplicatively, via overdispersion, in a Poisson model. The data on which the forecast is based must be modelled carefully, with respect not only to calendar-time trends but also to periods with an excessive frequency of events (epidemics) and to seasonal variations, in order to eliminate residual autocorrelation and to provide a proper reference for comparison that reflects changes over time during the emergency. Hence, when modelled properly, it is possible to predict a reference for an emergency-affected population based on local conditions. We predicted childhood mortality during the 1998-1999 war in Guinea-Bissau. We found increased mortality in the first half-year of the war and mortality corresponding to the expected level in the last half-year of the war.
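A rough sketch of how such a baseline projection could be set up in Python with statsmodels, using a quasi-Poisson (overdispersed Poisson) regression with a log time-at-risk offset and seasonal terms. All data are simulated and the covariate choices are assumptions for illustration; this is not the authors' fitted model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical monthly pre-emergency data: deaths and person-months
# at risk over 60 pre-war months.
months = np.arange(60)
exposure = rng.uniform(900, 1100, size=60)           # child person-months
X = pd.DataFrame({
    "const": 1.0,
    "trend": months,                                  # calendar-time trend
    "sin": np.sin(2 * np.pi * months / 12),           # seasonal variation
    "cos": np.cos(2 * np.pi * months / 12),
})
true_rate = np.exp(-4.0 - 0.01 * months + 0.3 * X["sin"])
deaths = rng.poisson(true_rate * exposure)

# Quasi-Poisson fit: scale='X2' estimates an overdispersion factor from
# the Pearson chi-square, inflating standard errors multiplicatively.
model = sm.GLM(deaths, X, family=sm.families.Poisson(),
               offset=np.log(exposure))
res = model.fit(scale="X2")

# Forecast expected deaths for the emergency period (next 12 months),
# assuming time-at-risk there is known.
fm = np.arange(60, 72)
Xf = pd.DataFrame({"const": 1.0, "trend": fm,
                   "sin": np.sin(2 * np.pi * fm / 12),
                   "cos": np.cos(2 * np.pi * fm / 12)})
pred = res.get_prediction(Xf, offset=np.log(np.full(12, 1000.0)))
print(pred.summary_frame())   # baseline reference with uncertainty bands
```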

3.
We develop a continuous-time model for analyzing and valuing catastrophe mortality contingent claims based on stochastic modeling of the force of mortality. We derive parameter estimates from a 105-year time series of U.S. population mortality data using a simulated maximum likelihood approach based on a particle filter. Relying on the resulting parameters, we calculate loss profiles for a representative catastrophe mortality transaction and compare them to the “official” loss profiles that are provided by the issuers to investors and rating agencies. We find that although the loss profiles are subject to great uncertainties, the official figures fall significantly below the corresponding risk statistics based on our model. In particular, we find that the annualized incidence probability of a mortality catastrophe, defined as a 15% increase in aggregated mortality probabilities, is about 1.4%, compared to about 0.1% according to the official loss profiles.
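The abstract's simulated maximum likelihood via a particle filter can be illustrated with a toy example. The sketch below evaluates a bootstrap particle filter log-likelihood for a deliberately simplified state-space model (a random-walk-with-drift mortality index observed with Gaussian noise), not the authors' continuous-time model of the force of mortality; all names and values are hypothetical.

```python
import numpy as np

def particle_loglik(y, theta, n_particles=2000, seed=1):
    """Bootstrap particle filter log-likelihood for a toy mortality model.

    Latent state: k_t = k_{t-1} + mu + sigma_k * eps_t   (random walk, drift)
    Observation:  y_t = k_t + sigma_y * eta_t            (log mortality)
    """
    mu, sigma_k, sigma_y = theta
    rng = np.random.default_rng(seed)
    particles = np.full(n_particles, y[0])   # crude initialisation
    loglik = 0.0
    for t in range(1, len(y)):
        # Propagate particles through the state equation.
        particles = particles + mu + sigma_k * rng.standard_normal(n_particles)
        # Weight by the Gaussian observation density (up to a constant
        # that does not affect comparisons across theta).
        w = np.exp(-0.5 * ((y[t] - particles) / sigma_y) ** 2) / sigma_y
        loglik += np.log(w.mean() + 1e-300)
        # Multinomial resampling.
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        particles = particles[idx]
    return loglik

# Simulated MLE would maximise this over theta with a fixed random seed
# (common random numbers) to keep the likelihood surface smooth.
y = np.cumsum(np.random.default_rng(7).normal(-0.01, 0.05, 105)) - 5.0
print(particle_loglik(y, theta=(-0.01, 0.05, 0.02)))
```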

4.
Lee and Carter proposed in 1992 the non-linear model m_{x,t} = exp(a_x + b_x k_t + ε_{x,t}) for fitting and forecasting age-specific mortality rates at age x and time t. For parameter estimation they employed the singular value decomposition method to find a least squares solution. However, the singular value decomposition algorithm does not provide the standard errors of the estimated parameters, making it impossible to assess their accuracy. This article describes the Lee-Carter model and the technical procedures used to fit and extrapolate it. To estimate the precision of the parameter estimates of the Lee-Carter model, we propose a binomial framework, in which point estimates are obtained by maximum likelihood and interval estimates by a bootstrap approach. The model is used to fit mortality data in England and Wales from 1951 to 1990 and to forecast mortality change from 1991 to 2020. The Lee-Carter model fits these mortality data very well, with R² = 0.9980. The estimated overall age pattern of mortality, a_x, is very robust, whereas there is considerable uncertainty in b_x (changes in the age pattern over time) and k_t (overall change in mortality). The fitted log age-specific mortality rates declined linearly from 1951 to 1990 at age-specific paces, and the projected rates continue to decline in the same way over the 30-year prediction period.
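A minimal sketch of the SVD-based least squares fit that the abstract describes, with a drift-based extrapolation of k_t. The synthetic data and the identifiability constraints (b_x summing to 1, k_t summing to 0) follow common practice rather than the article's exact implementation, and the proposed binomial bootstrap for interval estimates is not reproduced.

```python
import numpy as np

def fit_lee_carter(log_m):
    """Least-squares Lee-Carter fit via SVD.

    log_m: (ages x years) matrix of log central mortality rates.
    Returns a_x (age pattern), b_x (age response), k_t (time index),
    under the constraints sum(b_x) = 1 and sum(k_t) = 0.
    """
    a = log_m.mean(axis=1)                      # a_x: average log rate by age
    resid = log_m - a[:, None]                  # centred matrix
    U, s, Vt = np.linalg.svd(resid, full_matrices=False)
    b = U[:, 0]
    k = s[0] * Vt[0]
    # Normalise: shifts are absorbed into a_x, scale into k_t.
    a = a + b * k.mean()
    k = k - k.mean()
    scale = b.sum()
    b, k = b / scale, k * scale
    return a, b, k

# Hypothetical demo on synthetic rates for 10 age groups over 40 years.
rng = np.random.default_rng(3)
ages, years = 10, 40
true_k = -0.5 * np.arange(years)
log_m = (-4 + 0.3 * np.arange(ages))[:, None] \
        + np.linspace(0.05, 0.15, ages)[:, None] * true_k \
        + rng.normal(0, 0.02, (ages, years))
a, b, k = fit_lee_carter(log_m)
# Forecast: extrapolate k_t as a random walk with drift.
drift = np.diff(k).mean()
k_future = k[-1] + drift * np.arange(1, 31)     # 30-year projection
print(drift, k_future[-1])
```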

5.
Abstract. In a case–cohort design, a random sample from the study cohort, referred to as the subcohort, together with all the cases outside the subcohort, is selected for the collection of extra covariate data. The union of the selected subcohort and all cases is referred to as the case–cohort set. Such a design is generally employed when collecting information on an extra covariate for the whole study cohort is expensive. An advantage of the case–cohort design over the more traditional case–control and nested case–control designs is that it provides a set of controls that can be used for multiple end-points, in which case there is information on some covariates and event follow-up for the whole study cohort. Here, we propose a Bayesian approach that analyses such a case–cohort design as a cohort design with incomplete data on the extra covariate. We construct likelihood expressions when multiple end-points are of simultaneous interest and propose a Bayesian data augmentation method to estimate the model parameters. A simulation study illustrates the method, and the results are compared with a complete cohort analysis.

6.
Researchers have proposed that hospitals with excessive statistically unexplained mortality rates are more likely to have quality-of-care problems. The U.S. Health Care Financing Administration currently uses this statistical “outlier” approach to screen for poor quality in hospitals. Little is known, however, about the validity of this technique, since direct measures of quality are difficult to obtain. We use Monte Carlo methods to evaluate the performance of the outlier technique as parameters of the true mortality process are varied. Results indicate that the screening ability of the technique may be very sensitive to how widespread quality-related mortality is among hospitals but insensitive to other factors generally thought to be important.
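The logic of such a Monte Carlo evaluation can be sketched as follows: simulate hospitals with and without quality-related excess mortality, apply a simple statistical outlier screen, and measure its operating characteristics. All parameter values are hypothetical, and the screen is a plain normal-approximation bound, not the Health Care Financing Administration's actual adjustment model.

```python
import numpy as np

rng = np.random.default_rng(42)
n_hospitals, n_patients = 500, 400
base_p = 0.10                         # case-mix-adjusted expected mortality
prevalence = 0.20                     # share of hospitals with quality problems
excess = 0.03                         # extra mortality at poor-quality hospitals

# Simulate true quality status and observed deaths.
poor = rng.random(n_hospitals) < prevalence
p = base_p + excess * poor
deaths = rng.binomial(n_patients, p)

# Outlier screen: flag hospitals whose death count exceeds the upper
# normal-approximation bound around the expected rate.
z = 1.96
threshold = n_patients * base_p + z * np.sqrt(n_patients * base_p * (1 - base_p))
flagged = deaths > threshold

sensitivity = flagged[poor].mean()    # poor-quality hospitals caught
specificity = 1 - flagged[~poor].mean()
ppv = poor[flagged].mean() if flagged.any() else float("nan")
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} ppv={ppv:.2f}")
# Re-running with different `prevalence` values shows how strongly the
# screen's performance depends on how widespread quality-related
# mortality is among hospitals.
```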

7.
A large number of studies have shown a gradual fall in the stomach cancer mortality rate during the last decade. Here we analyze the pattern of stomach cancer mortality rates in Japanese aged over 85 years from 1970 to 1995, using data for the entire population of Japan. The magnitude of change was measured by relative risks and cause-elimination life tables, to distinguish time trends in stomach cancer mortality for individuals over 85 years of age from those in other age groups (55–84 years). In the over-85 age group, stomach cancer mortality increased from 374 per 100,000 in 1970 to 662 per 100,000 in 1995 (a 77% increase) for males, and from 232 to 296 per 100,000 (a 27% increase) for females. Using the 55–59 years group as the reference category, the relative risk increased from 2.3 to 9.9 in men and from 2.8 to 11.1 in women. The effect of this mortality on life expectancy also increased, by factors of 1.5 and 1.1, respectively. Our results show a rise in stomach cancer mortality among Japanese aged over 85 years, paralleled by an increase in relative risk and a growing negative contribution to life expectancy. While mortality in the younger age groups is decreasing, the changeover from increase to decrease in the over-85 age group is only just beginning.

8.
Compositional time series are multivariate time series which at each time point are proportions that sum to a constant. Accurate inference for such series, which occur in disciplines such as geology, economics and ecology, is important in practice. Usual multivariate statistical procedures ignore the inherently constrained nature of these observations as parts of a whole and may lead to inaccurate estimation and prediction. In this article, a regression model with vector autoregressive moving average (VARMA) errors is fitted to the compositional time series after an additive log ratio (ALR) transformation. Inference is carried out in a hierarchical Bayesian framework using Markov chain Monte Carlo techniques. The approach is illustrated on compositional time series of mortality events in Los Angeles in order to investigate the dependence of different categories of mortality on air quality.
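The ALR transformation at the heart of the approach is easy to demonstrate. The sketch below maps a simulated compositional series to unconstrained space, fits a plain VAR (a simpler stand-in for the article's hierarchical Bayesian regression with VARMA errors), and back-transforms the forecasts to the simplex; all data and settings are hypothetical.

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(5)

# Hypothetical compositional series: weekly shares of mortality in three
# categories (cardiovascular, respiratory, other) summing to 1.
T, D = 200, 3
raw = rng.gamma(shape=[8.0, 3.0, 10.0], scale=1.0, size=(T, D))
comp = raw / raw.sum(axis=1, keepdims=True)

# Additive log-ratio (ALR) transform with the last part as reference:
# y_i = log(x_i / x_D), i = 1..D-1, mapping the simplex to R^{D-1}.
alr = np.log(comp[:, :-1] / comp[:, [-1]])

# Fit a VAR to the transformed series.
res = VAR(alr).fit(2)
fc = res.forecast(alr[-res.k_ar:], steps=8)

# Back-transform forecasts to the simplex via the inverse ALR.
expf = np.exp(fc)
shares = np.column_stack([expf, np.ones(len(fc))])
shares /= shares.sum(axis=1, keepdims=True)
print(shares)
```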

9.
There are several ways to handle within‐subject correlations with a longitudinal discrete outcome, such as mortality. The most frequently used models are either marginal or random‐effects types. This paper deals with a random‐effects‐based approach. We propose a nonparametric regression model having time‐varying mixed effects for longitudinal cancer mortality data. The time‐varying mixed effects in the proposed model are estimated by combining kernel‐smoothing techniques and a growth‐curve model. As an illustration based on real data, we apply the proposed method to a set of prefecture‐specific data on mortality from large‐bowel cancer in Japan.
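As a rough illustration of the kernel-smoothing ingredient, the following sketch applies a Nadaraya-Watson smoother to a simulated prefecture-level mortality series. It shows only the smoothing step, not the full time-varying mixed-effects model combined with a growth-curve model; all data and values are hypothetical.

```python
import numpy as np

def kernel_smooth(t_grid, t_obs, y_obs, bandwidth):
    """Nadaraya-Watson estimate of a smoothly time-varying mean,
    a minimal stand-in for the kernel step of the mixed-effects model."""
    # Gaussian kernel weights, shape (len(t_grid), len(t_obs)).
    u = (t_grid[:, None] - t_obs[None, :]) / bandwidth
    w = np.exp(-0.5 * u ** 2)
    return (w * y_obs).sum(axis=1) / w.sum(axis=1)

# Hypothetical data: annual large-bowel cancer mortality per 100,000
# in one prefecture, 1970-2000, with noise.
rng = np.random.default_rng(11)
years = np.arange(1970, 2001)
rates = 10 + 0.4 * (years - 1970) + rng.normal(0, 1.0, len(years))
grid = np.linspace(1970, 2000, 121)
smooth = kernel_smooth(grid, years.astype(float), rates, bandwidth=3.0)
print(smooth[:5])
```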

10.
The graduation of mortality data aims to estimate the probabilities of death at age x, q(x), by means of an age-dependent function whose parameters are adjusted from the crude probabilities obtained directly from the data. However, current life tables have a problem: the need for periodic updates due to changes in mortality over short periods of time. The table containing mortality rates for different ages in different years, q(x,t), is called a dynamic life table, and it captures mortality variation over time. This paper reviews the most commonly used dynamic models and compares the results obtained by each of them when applied to mortality data from the Valencia Region (Spain). The comparison leads us to conclude that the Lee-Carter method offers the best results for both sexes taken together, while the method based on Heligman and Pollard functions provides the best fit for men alone. Our working method is of additional interest as it may be applied to mortality data for a wide range of ages in any geographical location, allowing the most appropriate dynamic life table to be selected for the case at hand.
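For reference, the Heligman-Pollard law mentioned in the comparison can be written down compactly. The sketch below evaluates the common eight-parameter form, in which the modelled quantity is the odds q(x)/(1 - q(x)); the parameter values are illustrative orders of magnitude only, not estimates from the Valencia Region data.

```python
import numpy as np

def heligman_pollard(x, A, B, C, D, E, F, G, H):
    """Eight-parameter Heligman-Pollard law of mortality.

    The three terms capture infant mortality, the accident hump,
    and senescent (Gompertz-like) mortality; q_x is recovered from
    the modelled odds q_x / (1 - q_x).
    """
    odds = (A ** ((x + B) ** C)
            + D * np.exp(-E * (np.log(x) - np.log(F)) ** 2)
            + G * H ** x)
    return odds / (1.0 + odds)

# Illustrative parameter values of a typical order of magnitude only.
x = np.arange(1, 100, dtype=float)
qx = heligman_pollard(x, A=5e-4, B=0.02, C=0.10, D=1e-4,
                      E=10.0, F=20.0, G=5e-5, H=1.10)
```

Graduation would then adjust the eight parameters to the crude probabilities, for example by weighted nonlinear least squares.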

11.
Abstract. Case–cohort sampling aims at reducing the data-collection burden and costs of large cohort studies. It is therefore important to estimate the parameters of interest as efficiently as possible. We present a maximum likelihood estimator (MLE) for a case–cohort study based on the proportional hazards assumption. The estimator shows finite sample properties that improve on those of the Self & Prentice estimator [Ann. Statist. 16 (1988)]. The size of the gain from the MLE varies with the level of the disease incidence and the variability of the relative risk over the considered population. The gain tends to be small when the disease incidence is low. The MLE is found by a simple EM algorithm that is easy to implement. Standard errors are estimated by a profile likelihood approach based on EM-aided differentiation.

12.
This paper deals with the specification of probability distributions expressing ignorance concerning annual or otherwise discretized failure or mortality rates, when these rates can safely be assumed to be increasing and convex but are completely unknown otherwise. Such distributions can be used as noninformative priors for Bayesian analysis of failure data. We demonstrate why a uniform distribution used in earlier work is unsatisfactory, especially from the point of view of insensitivity with respect to the time scale chosen for the problem at hand. We suggest alternative distributions based on Dirichlet-distributed weights for the extreme points of the relevant convex sets, and discuss the consequences that a requirement of scale neutrality has for the choice of Dirichlet parameters.
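One way to make the construction concrete is to mix "hinge" sequences with Dirichlet weights: each hinge is increasing and convex, and both properties survive convex combination, so every draw is an admissible rate sequence. This simplified basis is an assumption standing in for the paper's exact extreme-point parameterisation.

```python
import numpy as np

def sample_convex_increasing_rates(n_ages, alpha, n_draws, seed=0):
    """Draw increasing, convex rate sequences q_1 <= ... <= q_n in [0, 1]
    as Dirichlet-weighted mixtures of hinge sequences.

    Each hinge is itself increasing and convex with maximum value 1, and
    both properties are preserved under convex combinations, so every
    draw is a valid increasing convex rate sequence.
    """
    rng = np.random.default_rng(seed)
    i = np.arange(1, n_ages + 1, dtype=float)
    # Hinge j rises linearly from 0 after position j, normalised to 1 at
    # the last age; a constant-1 sequence is included as well.
    hinges = [np.ones(n_ages)]
    for j in range(n_ages - 1):
        hinges.append(np.maximum(0.0, i - (j + 1)) / (n_ages - (j + 1)))
    V = np.column_stack(hinges)                    # (n_ages, n_basis)
    W = rng.dirichlet(alpha * np.ones(V.shape[1]), size=n_draws)
    return W @ V.T                                 # (n_draws, n_ages)

# Small Dirichlet alpha spreads mass toward single hinges (extreme,
# kinked shapes); large alpha concentrates draws near their average.
draws = sample_convex_increasing_rates(n_ages=20, alpha=0.5, n_draws=1000)
print(draws.mean(axis=0))
```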

13.
We analyse the (age, time)-specific incidence of diabetes based on retrospective data obtained from a prevalent cohort that includes only survivors to a particular date. The observed point pattern is assumed to be generated from underlying point processes, whose intensities correspond to the (age, time)-specific incidence rates, by an independent thinning process with parameters (assumed known) depending on population density and on the survival probability to the sampling date. A Bayesian procedure is carried out for the optimal adjustment and comparison of isotropic and anisotropic smoothing priors for the intensity functions, as well as for the decomposition of the intensity on the (time, age) Lexis diagram into the three factors of age, period and cohort.

14.
Cox's (1972) proportional hazards failure time model, already widely used in the analysis of clinical trials, also provides an elegant formalization of the epidemiologic concept of relative risk. When used to compare the disease experience of a study cohort with that of an external control population, it generalizes the notions of the standardized morbidity ratio (SMR) and the proportional morbidity ratio (PMR). For studies in which matched sets of cases and controls are sampled retrospectively from the population at risk, the model provides a flexible tool for the regression analysis of multiple risk factors.
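The SMR that the model generalizes is itself a one-line computation: observed deaths divided by the deaths expected under external reference rates. The numbers below are hypothetical.

```python
import numpy as np

# Hypothetical cohort data: observed deaths and person-years by age band,
# plus reference mortality rates from an external standard population.
person_years = np.array([4000.0, 3500.0, 2500.0, 1200.0])
observed_deaths = np.array([6, 12, 25, 40])
reference_rates = np.array([0.001, 0.003, 0.008, 0.025])  # deaths per py

expected = (person_years * reference_rates).sum()
smr = observed_deaths.sum() / expected
print(f"SMR = {smr:.2f}")   # >1 means excess mortality vs the standard

# The abstract's point: in a proportional hazards formulation with the
# external population's hazard as baseline, this ratio generalises to a
# regression on multiple risk factors.
```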

15.
The thin plate volume matching and volume smoothing histosplines are described. These histosplines are suitable for estimating densities or incidence rates as a function of position on the plane when data are aggregated by area, for example by county. We give a numerical algorithm for the volume matching histospline and for the volume smoothing histospline, using generalized cross validation to estimate the smoothing parameter. Some numerical experiments were performed using synthetic data, population data and SMRs (standardized mortality ratios) aggregated by county over the state of Wisconsin. The method turns out to be not particularly suited to obtaining population density maps where the population density can vary by two orders of magnitude, because the histospline can be negative in areas of very low density.

16.
In this paper, we propose a two-stage functional principal component analysis method for age–period–cohort (APC) analysis. The first stage of the method considers the age–period effect, with the fitted values treated as an offset; the second stage considers the residual age–cohort effect conditional on the already estimated age–period effect. An APC version of the model in functional data analysis provides an improved fit to the data, especially when the data are sparse and irregularly spaced. We demonstrate the effectiveness of the proposed method using body mass index data stratified by gender and ethnicity.

17.
A cohort of 300 women with breast cancer who underwent surgery is analysed using a non-homogeneous Markov process. Three states are considered: no relapse, relapse and death. As relapse times change over time, we extend previous approaches for a time-homogeneous model to a non-homogeneous multistate process. The hazard rate functions of transitions between states first increase and then decrease, indicating that a changepoint can be considered. Piecewise Weibull distributions are introduced as transition intensity functions, and covariates corresponding to treatments are incorporated in the model multiplicatively via these functions. The likelihood function is built for a general model with k changepoints and applied to the data set; the parameters are estimated, and life tables and transition probabilities for treatments in different periods of time are given. The survival probability functions for different treatments are plotted and compared with the corresponding function for the homogeneous model. The survival functions for the various treatment cohorts are fitted to the empirical survival functions.
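A minimal sketch of a piecewise Weibull transition intensity with a single changepoint, which can reproduce the rise-then-fall hazard shape the abstract describes. Parameter values are hypothetical, and any continuity constraints at the changepoint that a full model might impose are omitted here.

```python
import numpy as np

def piecewise_weibull_hazard(t, tau, shape1, scale1, shape2, scale2):
    """Hazard following one Weibull law before the changepoint tau
    and another after it, allowing a rise-then-fall transition intensity.

    Weibull hazard: h(t) = (shape/scale) * (t/scale)**(shape-1).
    """
    t = np.asarray(t, dtype=float)
    h1 = (shape1 / scale1) * (t / scale1) ** (shape1 - 1)
    h2 = (shape2 / scale2) * (t / scale2) ** (shape2 - 1)
    return np.where(t < tau, h1, h2)

# Covariates (e.g. treatment) act multiplicatively on this hazard,
# h(t | z) = h(t) * exp(beta * z), as in the paper's model.
t = np.linspace(0.1, 10, 200)
h = piecewise_weibull_hazard(t, tau=3.0, shape1=1.8, scale1=4.0,
                             shape2=0.7, scale2=5.0)
```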

18.
In this paper, we propose a methodology to analyze longitudinal data through distances between pairs of observations (or individuals) with regard to the explanatory variables used to fit continuous response variables. Restricted maximum likelihood and generalized least squares are used to estimate the parameters of the model. We applied this new approach to study the effect of gender and exposure on deviant behavior with respect to tolerance, for a group of youths studied over a period of 5 years. We performed simulations comparing our distance-based method with classical longitudinal analysis under both AR(1) and compound symmetry correlation structures, judged by the Akaike and Bayesian information criteria and by the relative efficiency of the generalized variance of each model's errors. We found small gains in fit for the proposed model relative to the classical methodology, particularly in small samples, regardless of the variance, correlation, autocorrelation structure and number of time measurements.

19.
When modeling correlated binary data in the presence of informative cluster sizes, generalized estimating equations with either resampling or inverse weighting are often used to correct for estimation bias. However, existing methods for the clustered longitudinal setting assume constant cluster sizes over time. We present a subject-weighted generalized estimating equations scheme that provides valid parameter estimation for the clustered longitudinal setting while allowing cluster sizes to change over time. We compare, via simulation, the performance of existing methods with our subject-weighted approach. The subject-weighted approach was the only method that showed negligible bias, with excellent coverage, for all model parameters.

20.
Human activities have indirectly modified the dynamics of many populations, considerably accelerating the natural rate of species extinction and raising strong concerns about biodiversity. In many such cases, the underlying ‘natural’ dynamics of a population has been modified by human-induced increases in mortality, even if the population is not exploited or harvested in the strict sense. Both dynamical and statistical models are needed to investigate the consequences of human-induced mortality for the overall dynamics of a population. This paper reviews existing approaches and the potential of recent developments to help form a conceptual and practical framework for analysing the dynamics of exploited populations. It examines both the simple case of an extra source of mortality acting at an instant in time and the theory involved when both risks compete over a continuous time scale. This basic theory is expanded to structured populations using matrix population models, with applications to the conservation biology of long-lived vertebrates. The type and degree of compensation expected, and approaches to detecting it, are reviewed, and ways of handling uncertainty are discussed.
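The final step, adding an extra human-induced mortality hazard to a matrix population model, can be sketched with a small stage-structured example. The vital rates are hypothetical, and no compensation is assumed; the extra hazard simply acts multiplicatively on survival, as in a continuous-time competing-risks setting.

```python
import numpy as np

# Hypothetical 3-stage matrix model for a long-lived vertebrate:
# juveniles, subadults, adults (fecundity only in adults).
A = np.array([
    [0.00, 0.00, 0.50],   # fecundity
    [0.60, 0.00, 0.00],   # juvenile -> subadult survival
    [0.00, 0.80, 0.92],   # subadult -> adult, adult survival
])

def growth_rate(M):
    """Asymptotic population growth rate: dominant eigenvalue of M."""
    return np.max(np.abs(np.linalg.eigvals(M)))

# Impose an extra human-induced mortality hazard mu on adults, acting
# multiplicatively on survival (competing risks in continuous time):
# s' = s * exp(-mu), with no compensation assumed.
for mu in [0.0, 0.05, 0.10, 0.20]:
    B = A.copy()
    B[2, 1:] *= np.exp(-mu)
    print(f"mu={mu:.2f}  lambda={growth_rate(B):.3f}")
```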
