Similar Articles
20 similar records found.
1.
To allow more accurate prediction of hospital length of stay (LOS) after serious injury or illness, a multi-state model is proposed, in which transitions from the hospitalized state to three possible outcome states (home, long-term care, or death) are assumed to follow constant rates for each of a limited number of time periods. This results in a piecewise exponential (PWE) model for each outcome. Transition rates may be affected by time-varying covariates, which can be estimated from a reference database using standard statistical software and Poisson regression. A PWE model combining the three outcomes allows prediction of LOS. Records of 259,941 injured patients from the US Nationwide Inpatient Sample were used to create such a multi-state PWE model with four time periods. Hospital mortality and LOS for patient subgroups were calculated from this model, and time-varying covariate effects were estimated. Early mortality was increased by anatomic injury severity or penetrating mechanism, but these effects diminished with time; age and male sex remained strong predictors of mortality in all time periods. Rates of discharge home decreased steadily with time, while rates of transfer to long-term care peaked at five days. Predicted and observed LOS and mortality were similar for multiple subgroups. Conceptual background and methods of calculation are discussed and demonstrated. Multi-state PWE models may be useful to describe hospital outcomes, especially when many patients are not discharged home.
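The competing-risks structure of such a multi-state PWE model can be sketched by simulation: within each time period, the total exit rate is constant, so waiting times are exponential, and the destination is chosen in proportion to the outcome-specific rates. All rates and period breakpoints below are invented for illustration, not estimates from the paper.

```python
import math
import random

# Hypothetical piecewise-constant daily transition rates out of the
# hospitalized state for four time periods [0,2), [2,7), [7,14), [14,inf).
BREAKS = [0.0, 2.0, 7.0, 14.0]
RATES = {
    "home":      [0.05, 0.15, 0.12, 0.08],
    "long_term": [0.01, 0.03, 0.05, 0.04],
    "death":     [0.04, 0.02, 0.01, 0.01],
}

def period(t):
    """Index of the time period containing day t."""
    for i in range(len(BREAKS) - 1, 0, -1):
        if t >= BREAKS[i]:
            return i
    return 0

def simulate_stay(rng):
    """Simulate one stay; return (length_of_stay_days, outcome)."""
    t = 0.0
    while True:
        i = period(t)
        total = sum(r[i] for r in RATES.values())
        wait = rng.expovariate(total)
        nxt = BREAKS[i + 1] if i + 1 < len(BREAKS) else math.inf
        if t + wait >= nxt:
            t = nxt       # crossed a period boundary: memorylessness of the
            continue      # exponential lets us simply redraw at the new rates
        t += wait
        u = rng.random() * total    # destination proportional to its rate
        for state, r in RATES.items():
            u -= r[i]
            if u <= 0.0:
                return t, state

rng = random.Random(42)
stays = [simulate_stay(rng) for _ in range(1000)]
mean_los = sum(t for t, _ in stays) / len(stays)
```

Averaging simulated stays by subgroup reproduces the model-based LOS and mortality summaries the abstract describes.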

2.
Patient flow modeling is a growing field of interest in health services research. Several techniques have been applied to model movement of patients within and between health-care facilities. However, individual patient experience during the delivery of care has always been overlooked. In this work, a random effects model is introduced to patient flow modeling and applied to data from a London hospital neonatal unit. In particular, a random effects multinomial logit model is used to capture individual patient trajectories in the process of care with patient frailties modeled as random effects. Intuitively, both operational and clinical patient flow are modeled, the former being physical and the latter latent. Two variants of the model are proposed, one based on mere patient pathways and the other based on patient characteristics. Our technique could identify interesting pathways such as those that result in high probability of death (survival), pathways incurring the least (highest) cost of care or pathways with the least (highest) length of stay. Patient-specific discharge probabilities from the health care system could also be predicted. These are of interest to health-care managers in planning the scarce resources needed to run health-care institutions.
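The core computation in a random effects multinomial logit is the vector of destination probabilities given a patient's covariates and a patient-level random effect (frailty). The sketch below uses invented coefficients and a simple shared frailty added to each non-baseline linear predictor; it is an illustration of the model form, not the authors' fitted model.

```python
import math
import random

def multinomial_logit_probs(x, coefs, b):
    """Probabilities over K outcome states for covariates x and a patient
    random effect b; category 0 is the baseline (linear predictor 0) and
    coefs[k-1] holds the coefficient vector for category k."""
    etas = [0.0] + [b + sum(c * xi for c, xi in zip(beta, x)) for beta in coefs]
    m = max(etas)                          # subtract max for numerical stability
    exps = [math.exp(e - m) for e in etas]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical 3-destination example (e.g. stay, step down, discharge home)
rng = random.Random(1)
coefs = [[0.8, -0.5], [0.3, 0.9]]   # invented coefficients
b = rng.gauss(0.0, 0.5)             # patient frailty ~ N(0, 0.5^2), assumed
p = multinomial_logit_probs([1.0, 0.2], coefs, b)
```

Sampling a destination from `p` at each stage of care, with `b` held fixed for the patient, generates the correlated individual trajectories the paper models.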

3.
Mixture distribution survival trees are constructed by approximating different nodes in the tree by distinct types of mixture distributions to improve within-node homogeneity. Previously, we proposed a mixture distribution survival tree-based method for determining clinically meaningful patient groups from a given dataset of patients’ length of stay. This article extends this approach to examine the interrelationship between length of stay in hospital, outcome measures, and other covariates. We describe an application of this approach to patient pathway and examine the relationship between length of stay in hospital and/or treatment outcome using five years’ retrospective data of stroke patients.
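Within a node, a mixture distribution for LOS can be fitted by EM. As a minimal stand-in for the node-level mixtures described above (not the authors' exact algorithm), the sketch below fits a two-component exponential mixture to simulated LOS data with short-stay and long-stay subgroups; all rates are invented.

```python
import math
import random

def em_two_exponentials(los, iters=300):
    """EM for a two-component exponential mixture of LOS (illustrative)."""
    srt = sorted(los)
    half = len(srt) // 2
    # Initialise one rate from the short stays, one from the long stays
    lam = [half / sum(srt[:half]), (len(srt) - half) / sum(srt[half:])]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of component 0 for each observation
        r0 = []
        for x in los:
            f0 = pi[0] * lam[0] * math.exp(-lam[0] * x)
            f1 = pi[1] * lam[1] * math.exp(-lam[1] * x)
            r0.append(f0 / (f0 + f1))
        # M-step: update mixing weights and rates
        n0 = sum(r0)
        pi = [n0 / len(los), 1.0 - n0 / len(los)]
        lam[0] = n0 / sum(r * x for r, x in zip(r0, los))
        lam[1] = (len(los) - n0) / sum((1.0 - r) * x for r, x in zip(r0, los))
    return pi, lam

# Simulated LOS: a short-stay group (rate 1.0) and a long-stay group (rate 0.1)
rng = random.Random(7)
los = [rng.expovariate(1.0) for _ in range(400)] + \
      [rng.expovariate(0.1) for _ in range(200)]
pi, lam = em_two_exponentials(los)
```

The recovered components correspond to clinically distinct stay-length groups, which is the grouping the tree construction exploits.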

4.
Hospital admission rates are often used as a proxy to reflect patterns of morbidity or health need in population subgroups or across geographic areas. This paper considers the estimation of small area variations in relative health need, as measured by routinely collected hospital admissions data, after allowing for variation in general practice (primary care) and hospital (supply) effects. A fully Bayesian hierarchical modelling framework is adopted, using combinations of electoral ward populations and general practice patients' lists to define catchment groups for analysis. Hospitals create a further stratum, with flows of patients between catchment groups and hospitals being represented by a gravity model. Variations in health outcomes are modelled by using a range of random-effects structures for each cross-classification of strata, together with a consideration of ward, practice, hospital and crossed level covariates. The approach is applied to case-studies of child respiratory and total emergency hospital admissions for residents in a London health authority.
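The gravity-model component allocates each catchment's admissions across hospitals in proportion to hospital attractiveness discounted by distance. The sketch below uses invented populations, attractiveness scores, distances, an assumed admission rate, and a hypothetical distance-decay parameter `beta`; it illustrates the flow allocation only, not the full Bayesian hierarchy.

```python
import math

# Hypothetical catchments, hospital attractiveness scores and distances (km)
populations = {"ward_a": 12000, "ward_b": 8000}
attract = {"hosp_1": 1.5, "hosp_2": 1.0}
dist = {("ward_a", "hosp_1"): 2.0, ("ward_a", "hosp_2"): 6.0,
        ("ward_b", "hosp_1"): 5.0, ("ward_b", "hosp_2"): 3.0}

def gravity_flows(beta=0.4, rate=0.05):
    """Expected admissions from each catchment to each hospital:
    flow proportional to attractiveness * exp(-beta * distance), normalised so
    each catchment's flows sum to its total admissions (rate * population)."""
    flows = {}
    for w, pop in populations.items():
        weights = {h: attract[h] * math.exp(-beta * dist[(w, h)]) for h in attract}
        z = sum(weights.values())
        for h, wt in weights.items():
            flows[(w, h)] = rate * pop * wt / z
    return flows

flows = gravity_flows()
```

In the paper these flows enter the hierarchy as one stratum, with `beta`-like decay and the attractiveness terms estimated rather than fixed.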

5.
This paper describes a Bayesian approach to modelling carcinogenicity in animal studies where the data consist of counts of the number of tumours present over time. It compares two autoregressive hidden Markov models. One of them models the transitions between three latent states: an inactive transient state, a multiplying state for increasing counts and a reducing state for decreasing counts. The second model introduces a fourth tied state to describe non-zero observations that are neither increasing nor decreasing. Both models can describe the length of stay upon entry into a state. A discrete constant-hazards waiting time distribution is used to model the time to onset of tumour growth. Our models describe between-animal variability by a single hierarchy of random effects and the within-animal variation by first-order serial dependence. They can be extended to higher-order serial dependence and multi-level hierarchies. Analysis of data from animal experiments comparing the influence of two genes leads to conclusions that differ from those of Dunson (2000). The observed data likelihood defines an information criterion to assess the predictive properties of the three- and four-state models. The deviance information criterion is appropriately defined for discrete parameters.
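The observed-data likelihood of a hidden Markov model for a count series is computed with the forward algorithm. The sketch below uses plain Poisson emissions as a simplified stand-in for the paper's autoregressive models; the three latent states, their rates, and the transition matrix are all invented for illustration.

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def forward_loglik(counts, init, trans, lams):
    """Scaled forward algorithm: log-likelihood of a tumour-count series
    under an HMM with Poisson emission rates lams per latent state."""
    n_states = len(init)
    alpha = [init[s] * poisson_pmf(counts[0], lams[s]) for s in range(n_states)]
    ll = 0.0
    for y in counts[1:]:
        c = sum(alpha)                 # scale to avoid underflow,
        ll += math.log(c)              # accumulating the log normaliser
        alpha = [a / c for a in alpha]
        alpha = [sum(alpha[r] * trans[r][s] for r in range(n_states))
                 * poisson_pmf(y, lams[s]) for s in range(n_states)]
    return ll + math.log(sum(alpha))

# Invented three-state example: inactive, multiplying, reducing
init = [0.8, 0.1, 0.1]
trans = [[0.90, 0.08, 0.02],
         [0.05, 0.85, 0.10],
         [0.05, 0.15, 0.80]]
lams = [0.1, 4.0, 1.5]
ll = forward_loglik([0, 0, 1, 3, 5, 4, 2, 1], init, trans, lams)
```

This observed-data log-likelihood is the quantity the abstract uses to define an information criterion for comparing the three- and four-state models.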

6.
This paper examines and applies methods for modelling longitudinal binary data subject to both intermittent missingness and dropout. The paper is based around the analysis of data from a study into the health impact of a sanitation programme carried out in Salvador, Brazil. Our objective was to investigate risk factors associated with incidence and prevalence of diarrhoea in children up to 3 years old. In total, 926 children were followed up at home twice a week from October 2000 to January 2002 and for each child daily occurrence of diarrhoea was recorded. A challenging factor in analysing these data is the presence of between-subject heterogeneity not explained by known risk factors, combined with significant loss of observed data through either intermittent missingness (average of 78 days per child) or dropout (21% of children). We discuss modelling strategies and show the advantages of taking an event history approach with an additive discrete time regression model.

7.
Because exposure to radon gas in buildings is a likely risk factor for lung cancer, estimation of residential radon levels is an important public health endeavour. Radon originates from uranium, and therefore data on the geographical distribution of uranium in the Earth's surface may inform about radon levels. We fit a Bayesian geostatistical model that appropriately combines data on uranium with measurements of indoor home radon in the state of Iowa, thereby obtaining more accurate and precise estimation of the geographic distribution of average residential radon levels than would be possible by using radon data alone.

8.
Evaluation of the impact of nosocomial infection on duration of hospital stay usually relies on estimates obtained in prospective cohort studies. However, the statistical methods used to estimate the extra length of stay are usually not adequate. A naive comparison of duration of stay in infected and non-infected patients is not adequate to estimate the extra hospitalisation time due to nosocomial infections. Matching for duration of stay prior to infection can compensate in part for the bias of ad hoc methods. New model-based approaches have been developed to estimate the excess length of stay. It will be demonstrated that statistical models based on multivariate counting processes provide an appropriate framework to analyse the occurrence and impact of nosocomial infections. We will propose and investigate new approaches to estimate the extra time spent in hospitals attributable to nosocomial infections based on functionals of the transition probabilities in multistate models. Additionally, within the class of structural nested failure time models an alternative approach to estimate the extra stay due to nosocomial infections is derived. The methods are illustrated using data from a cohort study on 756 patients admitted to intensive care units at the University Hospital in Freiburg.
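The multistate idea can be illustrated with a deliberately simplified discrete-time sketch: two transient states (in hospital without infection, in hospital with infection) and one absorbing discharge/death state, with invented daily transition probabilities. This is a toy version of the transition-probability functionals, not the paper's counting-process estimators.

```python
# Invented daily transition probabilities:
#   state 0 = in hospital, uninfected; state 1 = in hospital, infected;
#   discharge/death is absorbing.
p_stay0, p_infect, p_out0 = 0.83, 0.02, 0.15
p_stay1, p_out1 = 0.92, 0.08

# Expected remaining days in hospital, by first-step analysis:
e1 = 1.0 / (1.0 - p_stay1)                    # once infected
e0 = (1.0 + p_infect * e1) / (1.0 - p_stay0)  # from admission

# Hypothetical stay if infection were impossible (infection mass moved to
# "stay", discharge probability unchanged)
e0_no_inf = 1.0 / p_out0

# Population-level extra stay attributable to nosocomial infection
extra = e0 - e0_no_inf
```

The comparison of `e0` and `e0_no_inf` mirrors the matched-for-time-of-infection logic: naively contrasting infected versus uninfected patients would instead compare `e1` with `e0_no_inf` and overstate the excess stay.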

9.
Measuring the process of care in substance abuse treatment requires analysing repeated client assessments at critical time points during treatment tenure. Assessments are frequently censored because of early departure from treatment. Most analyses accounting for informative censoring define the censoring time to be that of the last observed assessment. However, if missing assessments for those who remain in treatment are attributable to logistical reasons rather than to the underlying treatment process being measured, then the length of stay in treatment might better characterize censoring than would time of measurement. Bayesian variable selection is incorporated in the conditional linear model to assess whether time of measurement or length of stay better characterizes informative censoring. Marginal posterior distributions of the trajectory of treatment process scores are obtained that incorporate model uncertainty. The methodology is motivated by data from an ongoing study of the quality of care in in-patient substance abuse treatment.

10.
We introduce a flexible marginal modelling approach for statistical inference for clustered and longitudinal data under minimal assumptions. This estimated estimating equations approach is semiparametric and the proposed models are fitted by quasi-likelihood regression, where the unknown marginal means are a function of the fixed effects linear predictor with unknown smooth link, and variance–covariance is an unknown smooth function of the marginal means. We propose to estimate the nonparametric link and variance–covariance functions via smoothing methods, whereas the regression parameters are obtained via the estimated estimating equations. These are score equations that contain nonparametric function estimates. The proposed estimated estimating equations approach is motivated by its flexibility and easy implementation. Moreover, if data follow a generalized linear mixed model, with either a specified or an unspecified distribution of random effects and link function, the model proposed emerges as the corresponding marginal (population-average) version and can be used to obtain inference for the fixed effects in the underlying generalized linear mixed model, without the need to specify any other components of this generalized linear mixed model. Among marginal models, the estimated estimating equations approach provides a flexible alternative to modelling with generalized estimating equations. Applications of estimated estimating equations include diagnostics and link selection. The asymptotic distribution of the proposed estimators for the model parameters is derived, enabling statistical inference. Practical illustrations include Poisson modelling of repeated epileptic seizure counts and simulations for clustered binomial responses.

11.
I review the use of auxiliary variables in capture-recapture models for estimation of demographic parameters (e.g. capture probability, population size, survival probability, and recruitment, emigration and immigration numbers). I focus on what has been done in current research and what still needs to be done. Typically in the literature, covariate modelling has made capture and survival probabilities functions of covariates, but there are good reasons also to make other parameters functions of covariates as well. The types of covariates considered include environmental covariates that may vary by occasion but are constant over animals, and individual animal covariates that are usually assumed constant over time. I also discuss the difficulties of using time-dependent individual animal covariates and some possible solutions. Covariates are usually assumed to be measured without error, and that may not be realistic. For closed populations, one approach to modelling heterogeneity in capture probabilities uses observable individual covariates and is thus related to the primary purpose of this paper. The now standard Huggins-Alho approach conditions on the captured animals and then uses a generalized Horvitz-Thompson estimator to estimate population size. This approach has the advantage of simplicity in that one does not have to specify a distribution for the covariates, and the disadvantage is that it does not use the full likelihood to estimate population size. Alternatively one could specify a distribution for the covariates and implement a full likelihood approach to inference to estimate the capture function, the covariate probability distribution, and the population size. The general Jolly-Seber open model enables one to estimate capture probability, population sizes, survival rates, and birth numbers.
Much of the focus on modelling covariates in program MARK has been for survival and capture probability in the Cormack-Jolly-Seber model and its generalizations (including tag-return models). These models condition on the number of animals marked and released. A related, but distinct, topic is radio telemetry survival modelling that typically uses a modified Kaplan-Meier method and Cox proportional hazards model for auxiliary variables. Recently there has been an emphasis on integration of recruitment in the likelihood, and research on how to implement covariate modelling for recruitment and perhaps population size is needed. The combined open and closed 'robust' design model can also benefit from covariate modelling and some important options have already been implemented into MARK. Many models are usually fitted to one data set. This has necessitated development of model selection criteria based on the AIC (Akaike information criterion) and the alternative of averaging over reasonable models. The special problems of estimating over-dispersion when covariates are included in the model and then adjusting for over-dispersion in model selection could benefit from further research.
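The generalized Horvitz-Thompson step of the Huggins-Alho approach can be sketched directly: each captured animal is weighted by the inverse of its covariate-dependent capture probability. For simplicity the sketch plugs in the true logistic coefficients rather than estimating them from the conditional likelihood, and the population size and coefficients are invented.

```python
import math
import random

def capture_prob(x, b0, b1):
    """Logistic capture probability for individual covariate x (illustrative)."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

# Simulate a closed population with heterogeneous capture probabilities
rng = random.Random(13)
N = 500
b0, b1 = -0.5, 1.0                       # invented coefficients
xs = [rng.gauss(0.0, 1.0) for _ in range(N)]
captured = [x for x in xs if rng.random() < capture_prob(x, b0, b1)]

# Horvitz-Thompson-type estimate of N from the captured animals only
n_hat = sum(1.0 / capture_prob(x, b0, b1) for x in captured)
```

In practice `b0`, `b1` come from maximising the likelihood conditional on capture, which is why no distribution for the covariates needs to be specified.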

12.
Previously, we developed a modelling framework which classifies individuals with respect to their length of stay (LOS) in the transient states of a continuous-time Markov model with a single absorbing state; phase-type models are used for each class of the Markov model. We here add costs and obtain results for moments of total costs in (0, t], for an individual, for a cohort arriving at time zero, and when arrivals are Poisson. Based on stroke patient data from the Belfast City Hospital, we use the overall modelling framework to obtain results for total cost in a given time interval.

14.
This paper develops a Bayesian procedure for estimation and forecasting of the volatility of multivariate time series. The foundation of this work is the matrix-variate dynamic linear model, for the volatility of which we adopt a multiplicative stochastic evolution, using Wishart and singular multivariate beta distributions. A diagonal matrix of discount factors is employed in order to discount the variances element by element and therefore allowing a flexible and pragmatic variance modelling approach. Diagnostic tests and sequential model monitoring are discussed in some detail. The proposed estimation theory is applied to a four-dimensional time series, comprising spot prices of aluminium, copper, lead and zinc of the London metal exchange. The empirical findings suggest that the proposed Bayesian procedure can be effectively applied to financial data, overcoming many of the disadvantages of existing volatility models.

15.
State space modelling and Bayesian analysis are both active areas of applied research in fisheries stock assessment. Combining these two methodologies facilitates the fitting of state space models that may be non-linear and have non-normal errors, and hence it is particularly useful for modelling fisheries dynamics. Here, this approach is demonstrated by fitting a non-linear surplus production model to data on South Atlantic albacore tuna (Thunnus alalunga). The state space approach allows for random variability in both the data (the measurement of relative biomass) and in annual biomass dynamics of the tuna stock. Sampling from the joint posterior distribution of the unobservables was achieved by using Metropolis-Hastings within Gibbs sampling.
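The Metropolis step can be illustrated on a simpler target where the exact posterior is known: a random-walk Metropolis sampler for an exponential rate under a gamma prior. The paper's actual target is the joint posterior of the biomass states and production parameters; prior values and step size below are invented.

```python
import math
import random

def log_post(lam, data, a=2.0, b=1.0):
    """Unnormalised log-posterior for an exponential rate, Gamma(a, b) prior
    (a conjugate stand-in target, so the answer can be checked exactly)."""
    if lam <= 0.0:
        return -math.inf
    return (a - 1 + len(data)) * math.log(lam) - lam * (b + sum(data))

def metropolis(data, n_iter=20000, step=0.3, seed=3):
    rng = random.Random(seed)
    lam = 1.0
    lp = log_post(lam, data)
    samples = []
    for _ in range(n_iter):
        prop = lam + rng.gauss(0.0, step)              # random-walk proposal
        lp_prop = log_post(prop, data)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            lam, lp = prop, lp_prop                    # accept
        samples.append(lam)
    return samples[n_iter // 2:]                       # discard burn-in

rng = random.Random(11)
data = [rng.expovariate(2.0) for _ in range(50)]
draws = metropolis(data)
post_mean = sum(draws) / len(draws)
```

In the paper this Metropolis kernel is embedded in a Gibbs scan, updating each unobservable (biomass states, parameters) conditional on the rest.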

16.
Evidence of communication traffic complexity reveals correlation within a queue and heterogeneity among queues. We show how a random-effect model can be used to accommodate these kinds of phenomena. We use a Pareto distribution for the arrival (service) times of an individual queue given its arrival (service) rate. For modelling potential correlation in arrival (service) times within a queue and heterogeneity of the arrival (service) rates among queues, we use an inverse gamma distribution. This modelling approach is then applied to the cache access log data processed through an Internet server. We believe that our approach is potentially useful in the area of network resource management.  
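One plausible reading of the construction above can be sketched by simulation: each queue draws a scale from an inverse gamma distribution, and its times are then conditionally Pareto given that scale, which induces correlation within a queue and heterogeneity across queues. All parameter values (and this specific Pareto parametrisation) are assumptions for illustration.

```python
import random

def pareto_time(scale, shape, rng):
    """Pareto draw via the inverse CDF: x = scale * U**(-1/shape), U in (0, 1]."""
    return scale * (1.0 - rng.random()) ** (-1.0 / shape)

def simulate_queue_times(n, a, b, shape, rng):
    """Times for one queue: the queue-level scale is drawn once from an
    inverse-gamma(a, b), then n conditionally independent Pareto times."""
    scale = b / rng.gammavariate(a, 1.0)   # b / Gamma(a, 1) ~ InvGamma(a, b)
    return [pareto_time(scale, shape, rng) for _ in range(n)]

rng = random.Random(21)
# 200 hypothetical queues, 50 observed times each
queues = [simulate_queue_times(50, a=3.0, b=2.0, shape=2.5, rng=rng)
          for _ in range(200)]
queue_means = [sum(q) / len(q) for q in queues]
```

The spread of `queue_means` reflects the across-queue heterogeneity that the inverse gamma random effect is meant to capture.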

17.
We will pursue a Bayesian nonparametric approach in the hierarchical mixture modelling of lifetime data in two situations: density estimation, when the distribution is a mixture of parametric densities with a nonparametric mixing measure, and accelerated failure time (AFT) regression modelling, when the same type of mixture is used for the distribution of the error term. The Dirichlet process is a popular choice for the mixing measure, yielding a Dirichlet process mixture model for the error; as an alternative, we also allow the mixing measure to be equal to a normalized inverse-Gaussian prior, built from normalized inverse-Gaussian finite dimensional distributions, as recently proposed in the literature. Markov chain Monte Carlo techniques will be used to estimate the predictive distribution of the survival time, along with the posterior distribution of the regression parameters. A comparison between the two models will be carried out on the grounds of their predictive power and their ability to identify the number of components in a given mixture density.
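A Dirichlet process mixing measure induces a random clustering of observations that can be sampled directly via the Chinese restaurant process, which is the mechanism that lets the mixture choose its own number of components. A minimal sketch (concentration parameter and sample size invented):

```python
import random

def crp_partition(n, alpha, rng):
    """Sample a partition of n items from the Chinese restaurant process,
    the clustering induced by a Dirichlet process with concentration alpha."""
    tables = []      # tables[k] = number of items currently in cluster k
    labels = []
    for i in range(n):
        # item i joins cluster k with prob tables[k]/(i+alpha),
        # or opens a new cluster with prob alpha/(i+alpha)
        u = rng.random() * (i + alpha)
        k = None
        acc = 0.0
        for j, c in enumerate(tables):
            acc += c
            if u < acc:
                k = j
                break
        if k is None:
            k = len(tables)
            tables.append(0)
        tables[k] += 1
        labels.append(k)
    return labels, tables

rng = random.Random(5)
labels, tables = crp_partition(200, 1.0, rng)
```

In the mixture model each cluster carries its own parametric lifetime density; the normalized inverse-Gaussian alternative changes these allocation probabilities but keeps the same overall structure.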

18.
In this paper we present Bayesian analysis of finite mixtures of multivariate Poisson distributions with an unknown number of components. The multivariate Poisson distribution can be regarded as the discrete counterpart of the multivariate normal distribution, which is suitable for modelling multivariate count data. Mixtures of multivariate Poisson distributions allow for overdispersion and for negative correlations between variables. To perform Bayesian analysis of these models we adopt a reversible jump Markov chain Monte Carlo (MCMC) algorithm with birth and death moves for updating the number of components. We present results obtained from applying our modelling approach to simulated and real data. Furthermore, we apply our approach to a problem in multivariate disease mapping, namely joint modelling of diseases with correlated counts.

19.
A fully parametric first-order autoregressive (AR(1)) model is proposed to analyse binary longitudinal data. By using a discretized version of a copula, the modelling approach allows one to construct separate models for the marginal response and for the dependence between adjacent responses. In particular, the transition model considered here discretizes the Gaussian copula in such a way that the marginal is a Bernoulli distribution. A probit link is used to take into account concomitant information in the behaviour of the underlying marginal distribution. Fixed and time-varying covariates can be included in the model. The method is simple and is a natural extension of the AR(1) model for Gaussian series. Since the approach put forward is likelihood-based, it allows interpretations and inferences to be made that are not possible with semi-parametric approaches such as those based on generalized estimating equations. Data from a study designed to reduce the exposure of children to the sun are used to illustrate the methods.
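The discretized Gaussian copula construction can be sketched generatively: a latent stationary Gaussian AR(1) series is thresholded at the probit quantile of the target marginal probability, giving binary responses with exactly Bernoulli(p) marginals and serial dependence controlled by the latent autocorrelation. The marginal probability and autocorrelation below are invented.

```python
import random
from statistics import NormalDist

def binary_ar1(n, p, rho, seed=0):
    """Binary series with Bernoulli(p) marginals whose dependence comes from
    discretising a latent Gaussian AR(1) with autocorrelation rho
    (Gaussian-copula construction with a probit threshold)."""
    rng = random.Random(seed)
    thresh = NormalDist().inv_cdf(p)      # P(Z <= thresh) = p for Z ~ N(0, 1)
    z = rng.gauss(0.0, 1.0)               # start from the stationary distribution
    ys = []
    for _ in range(n):
        ys.append(1 if z <= thresh else 0)
        # stationary AR(1) update keeps Var(z) = 1
        z = rho * z + (1.0 - rho ** 2) ** 0.5 * rng.gauss(0.0, 1.0)
    return ys

ys = binary_ar1(20000, p=0.3, rho=0.7)
phat = sum(ys) / len(ys)
```

Covariates enter by letting `p` (hence the threshold) vary with a probit linear predictor at each time point, as in the paper.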

20.
The purpose of this paper is to build a model for aggregate losses, which constitutes a crucial step in evaluating premiums for health insurance systems. It aims at obtaining the predictive distribution of the aggregate loss within each age class of insured persons over the planning time horizon, employing Bayesian methodology. The proposed model is a generalization of the collective risk model, a commonly used model for analysing the risk of an insurance system. Aggregate loss prediction is based on past information on size of loss, number of losses and size of population at risk. In modelling the frequency and severity of losses, the number of losses is assumed to follow a negative binomial distribution, individual loss sizes are independent and identically distributed exponential random variables, while the number of insured persons in a finite number of possible age groups is assumed to follow the multinomial distribution. Prediction of aggregate losses is based on the Gibbs sampling algorithm which incorporates the missing data approach.
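The collective risk model's frequency/severity structure can be sketched by simulation: a negative binomial number of claims with i.i.d. exponential claim sizes. All parameter values are invented, and the sketch omits the multinomial age-group layer and the posterior updating that the paper's Gibbs sampler performs.

```python
import random

def neg_binomial(r, p, rng):
    """Number of failures before the r-th success (integer r):
    the sum of r independent geometric counts."""
    total = 0
    for _ in range(r):
        while rng.random() > p:   # count failures until a success occurs
            total += 1
    return total

def aggregate_loss(r, p, mean_loss, rng):
    """One draw of aggregate loss: negative binomial claim count,
    i.i.d. exponential claim sizes with the given mean."""
    n_claims = neg_binomial(r, p, rng)
    return sum(rng.expovariate(1.0 / mean_loss) for _ in range(n_claims))

rng = random.Random(9)
sims = [aggregate_loss(r=3, p=0.4, mean_loss=500.0, rng=rng)
        for _ in range(5000)]
mean_agg = sum(sims) / len(sims)
# By Wald's identity, E[S] = E[N] * E[X] = r*(1-p)/p * mean_loss = 2250
```

The empirical distribution of `sims` plays the role of the predictive distribution from which premiums for an age class would be set.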


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号