Similar Documents
20 similar documents found (search time: 534 ms)
1.
Length of stay in hospital (LOS) is a widely used outcome measure in Health Services research, often acting as a surrogate for resource consumption or as a measure of efficiency. The distribution of LOS is typically highly skewed, with a few large observations. An interesting feature is the presence of multiple outcomes (e.g. healthy discharge, death in hospital, transfer to another institution). Health Services researchers are interested in modeling the dependence of LOS on covariates, often using administrative data collected for other purposes, such as calculating fees for doctors. Even after all available covariates have been included in the model, unexplained heterogeneity usually remains. In this article, we develop a parametric regression model for LOS that addresses these features. The model is based on the time, T, that a Wiener process with drift (representing an unobserved health level process) hits one of two barriers, one representing healthy discharge and the other death in hospital. Our approach to analyzing event times has many parallels with competing risks analysis (Kalbfleisch and Prentice, The Statistical Analysis of Failure Time Data, New York: John Wiley and Sons, 1980), and can be seen as a way of formalizing a competing risks situation. The density of T is an infinite series, and we outline a proof that the density and its derivatives are absolutely and uniformly convergent, and regularity conditions are satisfied. Expressions for the expected value of T, the conditional expectation of T given outcome, and the probability of each outcome are available in terms of model parameters. The proposed regression model uses an approximation to the density formed by truncating the series, and its parameters are estimated by maximum likelihood. An extension to allow a third outcome (e.g. transfers out of hospital) is discussed, as well as a mixture model that addresses the issue of unexplained heterogeneity.
The model is illustrated using administrative data.
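The two-barrier hitting-time mechanism described above can be sketched with a small Monte Carlo simulation. This is an Euler discretization with invented drift and barrier values, illustrative only, not the authors' series-based density:

```python
import math
import random

def first_passage(mu, sigma, lower, upper, dt=0.01, rng=None, max_steps=1_000_000):
    """Simulate a Wiener process with drift mu and volatility sigma, starting at 0,
    until it hits the lower barrier ("death") or the upper barrier ("healthy
    discharge"). Returns (hitting_time, outcome)."""
    rng = rng or random.Random(0)
    x, t = 0.0, 0.0
    for _ in range(max_steps):
        x += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
        if x >= upper:
            return t, "discharge"
        if x <= lower:
            return t, "death"
    return t, "censored"

rng = random.Random(42)
# positive drift toward the "discharge" barrier; all values hypothetical
samples = [first_passage(0.5, 1.0, lower=-2.0, upper=1.0, rng=rng) for _ in range(2000)]
p_discharge = sum(1 for _, o in samples if o == "discharge") / len(samples)
mean_los = sum(t for t, _ in samples) / len(samples)
```

Tabulating hitting times by outcome mimics the competing-risks decomposition of LOS: the empirical mean of T within each outcome approximates the conditional expectations the authors derive in closed form.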

2.
Impacts of complex emergencies or relief interventions have often been evaluated by absolute mortality compared to international standardized mortality rates. A better evaluation would be to compare with the local baseline mortality of the affected populations. A projection of population-based survival data into the time of emergency or intervention, based on information from before the emergency, may create a local baseline reference. We find that a log-transformed Gaussian time series model, where standard errors of the estimated rates are included in the variance, has the best forecasting capacity. However, if time-at-risk during the forecasted period is known, then forecasting might be done using a Poisson time series model with overdispersion. In either case, the standard error of the estimated rates must be included in the variance of the model, either in an additive form in a Gaussian model or in a multiplicative form by overdispersion in a Poisson model. Data on which the forecasting is based must be modelled carefully, concerning not only calendar-time trends but also periods with excessive frequency of events (epidemics) and seasonal variations, to eliminate residual autocorrelation and to make a proper reference for comparison, reflecting changes over time during the emergency. Hence, when modelled properly, it is possible to predict a reference for an emergency-affected population based on local conditions. We predicted childhood mortality during the war in Guinea-Bissau in 1998-1999. We found increased mortality in the first half-year of the war and mortality corresponding to the expected level in the last half-year of the war.
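The variance-inflation idea, Poisson forecasting with the rate's estimation error absorbed through an overdispersion factor, can be sketched as follows. The counts and time-at-risk figures are invented for illustration (a quasi-Poisson moment estimate, not the authors' fitted time series model):

```python
import math

# hypothetical monthly death counts and child-years at risk before the emergency
deaths = [12, 9, 15, 11, 14, 10, 13, 16, 9, 12, 11, 14]
at_risk = [1000.0] * 12

rate = sum(deaths) / sum(at_risk)               # pooled baseline mortality rate
expected = [rate * n for n in at_risk]
# overdispersion estimated from Pearson residuals (quasi-Poisson)
phi = sum((d - e) ** 2 / e for d, e in zip(deaths, expected)) / (len(deaths) - 1)
phi = max(phi, 1.0)                              # never shrink below Poisson variance

# forecast for a period with known time-at-risk: mean and approximate 95% interval
n_new = 1200.0
mean_new = rate * n_new
se_new = math.sqrt(phi * mean_new)               # Poisson variance inflated by phi
interval = (mean_new - 1.96 * se_new, mean_new + 1.96 * se_new)
```

Observed wartime deaths falling outside this interval would indicate mortality beyond the local baseline, which is the comparison the abstract advocates.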

3.
Lee and Carter proposed in 1992 the non-linear model m(x,t) = exp(a_x + b_x k_t + ε(x,t)) for fitting and forecasting age-specific mortality rates at age x and time t. For the model parameter estimation, they employed the singular value decomposition method to find a least squares solution. However, the singular value decomposition algorithm does not provide the standard errors of estimated parameters, making it impossible to assess the accuracy of model parameters. This article describes the Lee-Carter model and the technical procedures to fit and extrapolate it. To estimate the precision of the parameter estimates of the Lee-Carter model, we propose a binomial framework, whose parameter point estimates can be obtained by the maximum likelihood approach and interval estimates by a bootstrap approach. This model is used to fit mortality data in England and Wales from 1951 to 1990 and to forecast mortality change from 1991 to 2020. The Lee-Carter model fits these mortality data very well, with R² = 0.9980. The estimated overall age pattern of mortality a_x is very robust, whereas there is considerable uncertainty in b_x (changes in the age pattern over time) and k_t (overall change in mortality). The fitted log age-specific mortality rates have been declining linearly from 1951 to 1990 at different paces, and the projected rates will continue to decline in this way over the 30-year prediction period.
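The first stage of the Lee-Carter fit, a_x from row means and (b_x, k_t) from a rank-1 fit of the centered log-rate matrix, can be sketched in a few lines. Alternating least squares stands in here for the leading SVD term, and the toy mortality surface is invented:

```python
import math

# toy log-mortality surface: ages (rows) x years (cols), invented for illustration
ages, years = 5, 20
true_a = [math.log(0.001 * (1.5 ** x)) for x in range(ages)]
true_b = [0.1, 0.15, 0.2, 0.25, 0.3]
true_k = [10.0 - t for t in range(years)]
logm = [[true_a[x] + true_b[x] * true_k[t] for t in range(years)] for x in range(ages)]

# Lee-Carter structure: log m(x,t) = a_x + b_x * k_t
a = [sum(row) / years for row in logm]                        # a_x: row means
z = [[logm[x][t] - a[x] for t in range(years)] for x in range(ages)]

# rank-1 fit of the centered matrix by alternating least squares
# (equivalent to the leading SVD term for this purpose)
b = [1.0 / ages] * ages
k = [0.0] * years
for _ in range(50):
    ssb = sum(bx * bx for bx in b)
    k = [sum(z[x][t] * b[x] for x in range(ages)) / ssb for t in range(years)]
    ssk = sum(kt * kt for kt in k)
    b = [sum(z[x][t] * k[t] for t in range(years)) / ssk for x in range(ages)]

# identifiability constraints: sum(b_x) = 1, sum(k_t) = 0
s = sum(b)
b = [bx / s for bx in b]
k = [kt * s for kt in k]
kbar = sum(k) / years
k = [kt - kbar for kt in k]
a = [a[x] + b[x] * kbar for x in range(ages)]
```

On real data the residual ε(x,t) is nonzero, and the article's binomial/bootstrap machinery is what supplies standard errors for b_x and k_t, which this least-squares step alone cannot.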

4.
Hospital admission rates are often used as a proxy to reflect patterns of morbidity or health need in population subgroups or across geographic areas. This paper considers the estimation of small area variations in relative health need, as measured by routinely collected hospital admissions data, after allowing for variation in general practice (primary care) and hospital (supply) effects. A fully Bayesian hierarchical modelling framework is adopted, using combinations of electoral ward populations and general practice patients' lists to define catchment groups for analysis. Hospitals create a further stratum, with flows of patients between catchment groups and hospitals being represented by a gravity model. Variations in health outcomes are modelled by using a range of random-effects structures for each cross-classification of strata, together with a consideration of ward, practice, hospital and crossed level covariates. The approach is applied to case-studies of child respiratory and total emergency hospital admissions for residents in a London health authority.

5.
6.
Monitoring health care performance outcomes such as post-operative mortality rates has recently become more common, spurring new statistical methodologies designed for this purpose. One such methodology is the Risk-adjusted Cumulative Sum chart (RA-CUSUM) for monitoring binary outcomes such as mortality after cardiac surgery. When building RA-CUSUMs, independence and model correctness are assumed. We carry out a simulation study to examine the effect of violating these two assumptions on the chart's performance.
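A minimal sketch of a Steiner-type RA-CUSUM for binary mortality outcomes. The risk model, odds-ratio alternative, and control limit below are illustrative assumptions, not values from the paper:

```python
import math
import random

def racusum(risks, outcomes, odds_ratio=2.0, h=4.5):
    """Risk-adjusted CUSUM testing H0: OR = 1 vs H1: OR = odds_ratio.
    risks: predicted death probabilities; outcomes: 0/1 observed deaths.
    Returns the chart values and the first index exceeding limit h (or None)."""
    w, chart, signal = 0.0, [], None
    for i, (p, y) in enumerate(zip(risks, outcomes)):
        denom = 1.0 - p + odds_ratio * p
        score = math.log(odds_ratio / denom) if y else math.log(1.0 / denom)
        w = max(0.0, w + score)                   # one-sided chart, resets at 0
        chart.append(w)
        if signal is None and w > h:
            signal = i
    return chart, signal

rng = random.Random(1)
risks = [rng.uniform(0.05, 0.3) for _ in range(500)]                   # hypothetical case mix
in_control = [1 if rng.random() < p else 0 for p in risks]             # risk model correct
chart_ic, sig_ic = racusum(risks, in_control)
doubled_odds = [1 if rng.random() < (2 * p) / (1 + p) else 0 for p in risks]  # true OR = 2
chart_oc, sig_oc = racusum(risks, doubled_odds)
```

The simulation study in the abstract would perturb this setup, e.g. by correlating outcomes or miscalibrating `risks`, and measure how the chart's run lengths degrade.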

7.
Identifying the distribution of the incidence rate of a disease over a region is a prediction problem where area‐specific random effects are to be estimated. The authors consider the inclusion of such effects at different levels of a hierarchical health administrative structure. They develop inference procedures for this type of multi‐level model and show that the predicted rates are approximately weighted sums of the crude rates obtained by pooling data on each level of the hierarchy. Their techniques are illustrated using infant mortality data from British Columbia.

8.
Summary.  Following several recent inquiries in the UK into medical malpractice and failures to deliver appropriate standards of health care, there is pressure to introduce formal monitoring of performance outcomes routinely throughout the National Health Service. Statistical process control (SPC) charts have been widely used to monitor medical outcomes in a variety of contexts and have been specifically advocated for use in clinical governance. However, previous applications of SPC charts in medical monitoring have focused on surveillance of a single process over time. We consider some of the methodological and practical aspects that surround the routine surveillance of health outcomes and, in particular, we focus on two important methodological issues that arise when attempting to extend SPC charts to monitor outcomes at more than one unit simultaneously (where a unit could be, for example, a surgeon, general practitioner or hospital): the need to acknowledge the inevitable between-unit variation in 'acceptable' performance outcomes due to the net effect of many small unmeasured sources of variation (e.g. unmeasured case mix and data errors), and the problem of multiple testing over units as well as time. We address the former by using quasi-likelihood estimates of overdispersion, and the latter by using recently developed methods based on estimation of false discovery rates. We present an application of our approach to annual monitoring of 'all-cause' mortality data between 1995 and 2000 from 169 National Health Service hospital trusts in England and Wales.
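The multiple-testing side of this approach can be illustrated with the Benjamini-Hochberg step-up procedure for controlling the false discovery rate across units. The p-values below are invented; the paper's actual method also involves overdispersion-adjusted test statistics:

```python
def benjamini_hochberg(pvals, q=0.05):
    """Return indices of hypotheses rejected at false-discovery rate q
    (Benjamini-Hochberg step-up procedure)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:   # compare i-th smallest p to q*i/m
            k = rank                   # step-up: keep the largest qualifying rank
    return sorted(order[:k])

# hypothetical one-sided p-values for excess mortality at 10 hospital trusts
pvals = [0.001, 0.008, 0.039, 0.041, 0.09, 0.21, 0.35, 0.48, 0.62, 0.9]
flagged = benjamini_hochberg(pvals, q=0.05)
```

Note how trusts with p = 0.039 and 0.041 would be "significant" at the naive 0.05 level yet are not flagged: with many units monitored simultaneously, FDR control tempers the flood of false alarms that per-unit testing would produce.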

9.
This paper considers the problem of obtaining a dynamic prediction for 5-year failure-free survival after bone marrow transplantation in ALL patients using data from the EBMT, the European Group for Blood and Marrow Transplantation. The paper compares the new landmark methodology as developed by the first author and the established multi-state modeling as described in a recent Tutorial in Biostatistics in Statistics in Medicine by the second author and colleagues. As expected, the two approaches give similar results. The landmark methodology does not need complex modeling and leads to easy prediction rules. On the other hand, it does not give the insight into the biological processes that the multi-state model provides.

10.
Although heterogeneity across individuals may be reduced when a two-state process is extended into a multi-state process, the discrepancy between observed and predicted values for some states may still exist, owing to two possibilities: an unobserved mixture distribution in the initial state, and the effect of measured covariates on subsequent multi-state disease progression. In the present study, we developed a mixture Markov exponential regression model to take account of the above-mentioned heterogeneity across individuals (subject-to-subject variability), with systematic model selection based on the likelihood ratio test. The model was successfully demonstrated by an empirical example on surveillance of patients with small hepatocellular carcinoma treated by non-surgical methods. The estimated results suggested that the model incorporating the unobserved mixture distribution behaves better than the one without it. Complete and partial effects of risk factors on different subsequent multi-state transitions were identified using a homogeneous Markov model. The combination of an initial mixture distribution and a homogeneous Markov exponential regression model makes a significant contribution to reducing heterogeneity across individuals and over time for disease progression.

11.
There are several ways to handle within‐subject correlations with a longitudinal discrete outcome, such as mortality. The most frequently used models are either marginal or random‐effects types. This paper deals with a random‐effects‐based approach. We propose a nonparametric regression model having time‐varying mixed effects for longitudinal cancer mortality data. The time‐varying mixed effects in the proposed model are estimated by combining kernel‐smoothing techniques and a growth‐curve model. As an illustration based on real data, we apply the proposed method to a set of prefecture‐specific data on mortality from large‐bowel cancer in Japan.

12.
The prediction of sea state based on field measurements of wave and meteorological factors is a topic of interest from the standpoints of navigation safety and fisheries. Various statistical methods have been considered for predicting the distribution of sea surface elevation. However, prediction of sea state in the transitional situation when waves are being developed by blowing wind has remained a difficult problem, because the statistical expression of the dynamic mechanism in this situation is very complicated. In this article, we address this problem through the development of a statistical model. More precisely, we develop a model for predicting the time-varying distribution of sea surface elevation, based on a non-homogeneous hidden Markov model in which the time-varying structures are influenced by wind speed and wind direction. Our prediction experiments suggest that the proposed model improves prediction accuracy compared with a homogeneous hidden Markov model. Furthermore, we found that the prediction accuracy is influenced by the choice of circular distribution in the hidden Markov model for the directional (wind-direction) time series.

13.
This paper presents a Bayesian method for the analysis of toxicological multivariate mortality data when the discrete mortality rate for each family of subjects at a given time depends on familial random effects and the toxicity level experienced by the family. Our aim is to model and analyse one set of such multivariate mortality data with large family sizes: the potassium thiocyanate (KSCN) tainted fish tank data of O'Hara Hines. The model used is based on a discretized hazard with additional time-varying familial random effects. A similar previous study (using sodium thiocyanate (NaSCN)) is used to construct a prior for the parameters in the current study. A simulation-based approach is used to compute posterior estimates of the model parameters and mortality rates and several other quantities of interest. Recent tools in Bayesian model diagnostics and variable subset selection have been incorporated to verify important modelling assumptions regarding the effects of time and heterogeneity among the families on the mortality rate. Further, Bayesian methods using predictive distributions are used for comparing several plausible models.

14.
Analyzing center specific outcomes in hematopoietic cell transplantation
Reporting transplant center-specific survival rates after hematopoietic cell transplantation is required in the United States. We describe a method to report 1-year survival outcomes by center, as well as to quantify center performance relative to the transplant center network average, which can be reliably used with censored data and for small center sizes. Each center's observed 1-year survival outcome is compared to a predicted survival outcome adjusted for patient characteristics using a pseudovalue regression technique. A 95% prediction interval for 1-year survival assuming no center effect is computed for each center by bootstrapping the scaled residuals from the regression model, and the observed 1-year survival is compared to this prediction interval to determine center performance. We illustrate the technique using a recent center specific analysis performed by the Center for International Blood and Marrow Transplant Research, and study the performance of this method using simulation.

15.
In this paper we use bootstrap methodology to achieve accurate estimated prediction intervals for recovery rates. In the framework of the LossCalc model, which is the Moody's KMV model to predict loss given default, a single beta distribution is usually assumed to model the behaviour of recovery rates and, hence, to construct prediction intervals. We evaluate the coverage properties of beta estimated prediction intervals for multimodal recovery rates. We carry out a simulation study, and our results show that bootstrap versions of beta mixture prediction intervals exhibit the best coverage properties.
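A sketch of the bootstrap idea for a single-beta prediction interval: method-of-moments fitting on an invented bimodal sample, with resampling to propagate parameter uncertainty. The paper's beta-mixture version would replace the single beta fit:

```python
import random

def beta_mom(xs):
    """Method-of-moments fit of a beta distribution; returns (alpha, beta)."""
    n = len(xs)
    m = sum(xs) / n
    v = sum((x - m) ** 2 for x in xs) / (n - 1)
    common = m * (1 - m) / v - 1        # valid only when v < m*(1-m)
    return m * common, (1 - m) * common

rng = random.Random(7)
# bimodal recovery rates: a mixture of low and high recoveries (illustrative)
data = [rng.betavariate(2, 8) for _ in range(150)] + \
       [rng.betavariate(8, 2) for _ in range(150)]

# bootstrap percentile prediction interval:
# resample the data, refit the beta, draw one "new" recovery rate
draws = []
for _ in range(2000):
    boot = [rng.choice(data) for _ in data]
    a, b = beta_mom(boot)
    draws.append(rng.betavariate(a, b))
draws.sort()
pi = (draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))])
```

Checking what fraction of fresh mixture draws land inside `pi` is exactly the coverage experiment the abstract describes; on bimodal data, a single beta tends to produce intervals that are wide yet mis-centered, which motivates the mixture version.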

16.
In many medical studies, there are covariates that change their values over time and their analysis is most often modeled using the Cox regression model. However, many of these time-dependent covariates can be expressed as an intermediate event, which can be modeled using a multi-state model. Using the relationship of time-dependent (discrete) covariates and multi-state models, we compare (via simulation studies) the Cox model with time-dependent covariates with the most frequently used multi-state regression models. This article also details the procedures for generating survival data arising from all approaches, including the Cox model with time-dependent covariates.
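Generating survival data with a binary time-dependent covariate (the intermediate event) can be sketched via piecewise-exponential hazards, exploiting the memorylessness of the exponential distribution. All rates, the effect size, and the censoring time below are invented:

```python
import math
import random

def simulate_subject(rng, h0=0.1, beta=0.7, h_event=0.05, cens=30.0):
    """One subject under a Cox-type model with a time-dependent 0/1 covariate:
    baseline hazard h0 while Z(t) = 0; h0*exp(beta) after the intermediate
    event, which itself occurs at rate h_event (illness-death structure).
    Returns (observed time, death indicator, covariate ever switched on)."""
    t_event = rng.expovariate(h_event)          # time the covariate switches on
    t_death0 = rng.expovariate(h0)              # death time if Z stayed 0
    if t_death0 <= t_event:
        t_death, z = t_death0, 0
    else:
        # survived to t_event; restart the clock at the raised hazard
        t_death = t_event + rng.expovariate(h0 * math.exp(beta))
        z = 1
    time = min(t_death, cens)
    status = 1 if t_death <= cens else 0
    return time, status, z

rng = random.Random(3)
sample = [simulate_subject(rng) for _ in range(1000)]
death_rate = sum(s for _, s, _ in sample) / len(sample)
```

The same generated data can be analyzed two ways, as in the article: as a Cox model where Z(t) jumps from 0 to 1 at `t_event`, or as a three-state (healthy, post-event, dead) multi-state model with transition-specific hazards.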

17.
Summary.  The paper analyses a time series of infant mortality rates in the north of England from 1921 to the early 1970s at a spatial scale that is more disaggregated than in previous studies of infant mortality trends in this period. The paper describes regression methods to obtain mortality gradients over socioeconomic indicators from the censuses of 1931, 1951, 1961 and 1971 and to assess whether there is any evidence for widening spatial inequalities in infant mortality outcomes against a background of an overall reduction in the infant mortality rate. Changes in the degree of inequality are also formally assessed by inequality measures such as the Gini and Theil indices, for which sampling densities are obtained and significant changes assessed. The analysis concerns a relatively infrequent outcome (especially towards the end of the period that is considered) and a high proportion of districts with small populations, so necessitating the use of appropriate methods for deriving indices of inequality and for regression modelling.

18.
We propose a state-space approach for GARCH models with time-varying parameters able to deal with the non-stationarity that is usually observed in a wide variety of time series. The parameters of the non-stationary model are allowed to vary smoothly over time through non-negative deterministic functions. We implement the estimation of the time-varying parameters in the time domain through Kalman filter recursive equations, finding a state-space representation of a class of time-varying GARCH models. We provide prediction intervals for time-varying GARCH models and, additionally, we propose a simple methodology for handling missing values. Finally, the proposed methodology is applied to the Chilean Stock Market index (IPSA) and to the American Standard & Poor's 500 index (S&P 500).
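For orientation, the constant-parameter GARCH(1,1) recursion that the time-varying model generalizes can be sketched as follows. This is the plain filter with invented parameters and a simulated path standing in for the IPSA/S&P 500 returns, not the paper's Kalman-filter state-space version:

```python
import math
import random

def garch_filter(returns, omega, alpha, beta):
    """GARCH(1,1) conditional-variance recursion:
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}.
    Returns in-sample variances and the one-step-ahead variance forecast."""
    s2 = omega / (1 - alpha - beta)     # start at the unconditional variance
    sig2 = []
    for r in returns:
        sig2.append(s2)
        s2 = omega + alpha * r * r + beta * s2
    return sig2, s2

rng = random.Random(11)
omega, alpha, beta = 0.05, 0.1, 0.85    # hypothetical, stationary (alpha+beta < 1)

# simulate a GARCH(1,1) return path
s2, rets = omega / (1 - alpha - beta), []
for _ in range(500):
    r = math.sqrt(s2) * rng.gauss(0, 1)
    rets.append(r)
    s2 = omega + alpha * r * r + beta * s2

sig2, forecast = garch_filter(rets, omega, alpha, beta)
# approximate 95% prediction interval for the next return, given Gaussian innovations
pi = (-1.96 * math.sqrt(forecast), 1.96 * math.sqrt(forecast))
```

In the paper's extension, (omega, alpha, beta) become smooth non-negative functions of time and the recursion is embedded in a state-space model, so the Kalman filter replaces this fixed-parameter pass.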

19.
Mortality rates reflect the level of mortality in a population. Accurate mortality forecasting is a central topic in demography and population economics, and an essential data foundation for measuring longevity risk. Building on the Lee-Carter model, this paper explores the correlation between mortality in mainland China and Taiwan. Cointegration analysis is used to capture the long-run equilibrium relationship between the two regions' mortality rates, and a correlation-based vector error correction model (VECM) is constructed, overcoming the limitation of the traditional autoregressive integrated moving average (ARIMA) model when forecasting from limited data. With mean squared prediction error as the evaluation criterion, the results show that forecasts based on the VECM outperform the traditional approach. The long-run equilibrium relationship between mortality in mainland China and Taiwan can provide an important reference for pricing joint longevity bonds for the two regions.

20.
In clinical research, study subjects may experience multiple events that are observed and recorded periodically. To analyze transition patterns of disease processes, it is desirable to use those multiple events over time in the analysis. This study proposes a multi-state Markov model with piecewise transition probability, which is able to accommodate periodically observed clinical data without a time homogeneity assumption. Models with ordinal outcomes that incorporate covariates are also discussed. The proposed models are illustrated by an analysis of the severity of morbidity in a monthly follow-up study for patients with spontaneous intracerebral hemorrhage.
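The piecewise transition-probability idea can be sketched with two monthly transition matrices, one for an early period and one for a later period. The three states and all probabilities are invented for illustration:

```python
def mat_mult(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# hypothetical 3-state monthly transition matrices (mild, severe, dead),
# piecewise constant: one matrix for months 1-3, another afterwards
P_early = [[0.80, 0.15, 0.05],
           [0.20, 0.65, 0.15],
           [0.00, 0.00, 1.00]]   # death is absorbing
P_late  = [[0.90, 0.08, 0.02],
           [0.25, 0.65, 0.10],
           [0.00, 0.00, 1.00]]

# state-occupancy distribution after 6 monthly follow-ups, starting in "mild"
P = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]  # identity
for month in range(6):
    P = mat_mult(P, P_early if month < 3 else P_late)

start = [1.0, 0.0, 0.0]
dist = [sum(start[i] * P[i][j] for i in range(3)) for j in range(3)]
```

Chaining period-specific matrices in this way is what relaxes the time-homogeneity assumption: a single matrix raised to the 6th power would force the same monthly dynamics throughout follow-up.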


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号