Similar Literature
Found 20 similar documents (search time: 31 ms)
1.
We describe recent developments in the POPAN system for the analysis of mark-recapture data from Jolly-Seber type experiments. The previous versions (POPAN-3 for SUN/OS workstations and POPAN-PC for IBM-PCs running DOS or Windows) included general statistics-gathering and testing procedures, a wide range of analysis options for estimating population abundance, survival and birth parameters, and a general simulation capability. POPAN-4 adds a very general procedure for fitting constrained models based on a new unified theory for Jolly-Seber models. Users can impose constraints on capture, survival and birth rates over time and/or across attribute groups (e.g. sex or age groups) and can model these rates using covariate models involving auxiliary variables (e.g. sampling effort).
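Covariate constraints of the kind described above are typically imposed through a link function; below is a minimal sketch of survival modelled on the logit scale as a linear function of sampling effort. The logit link and all parameter values are illustrative assumptions, not POPAN-4 internals.

```python
import math

def logit_survival(beta0, beta1, effort):
    """Survival probability modelled on the logit scale as a linear
    function of an auxiliary covariate (e.g. per-occasion sampling effort).
    Returns one probability per occasion, each guaranteed to lie in (0, 1)."""
    return [1.0 / (1.0 + math.exp(-(beta0 + beta1 * x))) for x in effort]

# Hypothetical per-occasion effort values; beta values are made up for illustration.
phi = logit_survival(0.2, 0.3, [1.0, 2.0, 0.5])
```

The link function is what lets a single pair of coefficients constrain survival across all occasions, instead of one free parameter per occasion.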

2.
In recent years, survival analysis of radio-tagged animals has developed using methods based on the Kaplan-Meier estimator used in medical and engineering applications (Pollock et al., 1989a, b). An important assumption of this approach is that every tagged animal with a functioning radio can be relocated at each sampling time with probability 1. This assumption may not always be reasonable in practice. In this paper, we show how a general capture-recapture model can be derived that allows the probability of relocating an animal to be less than one. This model is not simply a Jolly-Seber model, because both dead and live animals can be relocated, unlike with traditional tagging. The model can also be viewed as a generalization of the Kaplan-Meier procedure, thus linking the Jolly-Seber and Kaplan-Meier approaches to survival estimation. We present maximum likelihood estimators and discuss testing between submodels. We also discuss model assumptions and their validity in practice. An example is presented based on canvasback data collected by G. M. Haramis of Patuxent Wildlife Research Center, Laurel, Maryland, USA.
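For reference, the standard Kaplan-Meier product-limit estimator that this model generalizes can be sketched as follows. This bare-bones version assumes every animal is relocated with probability 1, which is exactly the assumption the paper relaxes; the relaxation itself is not implemented here.

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate evaluated at each death time.
    events[i] is 1 if the i-th time is an observed death, 0 if censored."""
    # Sort by time; at ties, process deaths before censorings (standard convention).
    pairs = sorted(zip(times, events), key=lambda p: (p[0], -p[1]))
    n_at_risk = len(pairs)
    s = 1.0
    death_times, survival = [], []
    for t, d in pairs:
        if d == 1:
            s *= (n_at_risk - 1) / n_at_risk  # one death among those at risk
            death_times.append(t)
            survival.append(s)
        n_at_risk -= 1  # the animal leaves the risk set either way
    return death_times, survival
```

For example, with times [2, 3, 3, 5] and events [1, 1, 0, 1], the curve steps to 0.75, 0.5, and 0.0.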

3.
I review the use of auxiliary variables in capture-recapture models for estimation of demographic parameters (e.g. capture probability, population size, survival probability, and recruitment, emigration and immigration numbers). I focus on what has been done in current research and what still needs to be done. Typically in the literature, covariate modelling has made capture and survival probabilities functions of covariates, but there are good reasons to make other parameters functions of covariates as well. The types of covariates considered include environmental covariates that may vary by occasion but are constant over animals, and individual animal covariates that are usually assumed constant over time. I also discuss the difficulties of using time-dependent individual animal covariates and some possible solutions. Covariates are usually assumed to be measured without error, which may not be realistic. For closed populations, one approach to modelling heterogeneity in capture probabilities uses observable individual covariates and is thus related to the primary purpose of this paper. The now standard Huggins-Alho approach conditions on the captured animals and then uses a generalized Horvitz-Thompson estimator to estimate population size. This approach has the advantage of simplicity, in that one does not have to specify a distribution for the covariates, and the disadvantage that it does not use the full likelihood to estimate population size. Alternatively, one could specify a distribution for the covariates and implement a full likelihood approach to inference to estimate the capture function, the covariate probability distribution, and the population size. The general Jolly-Seber open model enables one to estimate capture probability, population sizes, survival rates, and birth numbers.
Much of the focus on modelling covariates in program MARK has been for survival and capture probability in the Cormack-Jolly-Seber model and its generalizations (including tag-return models). These models condition on the number of animals marked and released. A related, but distinct, topic is radio-telemetry survival modelling, which typically uses a modified Kaplan-Meier method and a Cox proportional hazards model for auxiliary variables. Recently there has been an emphasis on integration of recruitment in the likelihood, and research on how to implement covariate modelling for recruitment, and perhaps population size, is needed. The combined open and closed 'robust' design model can also benefit from covariate modelling, and some important options have already been implemented in MARK. Many models are usually fitted to one data set. This has necessitated the development of model selection criteria based on AIC (the Akaike Information Criterion) and the alternative of averaging over reasonable models. The special problems of estimating over-dispersion when covariates are included in the model, and then adjusting for over-dispersion in model selection, could benefit from further research.
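The generalized Horvitz-Thompson step of the Huggins-Alho approach amounts to summing, over the captured animals only, the inverse of each animal's probability of being caught at least once. A sketch in which the per-occasion capture probabilities are simply assumed given (in practice they come from a fitted covariate model):

```python
def horvitz_thompson_N(p_capture, n_occasions):
    """Generalized Horvitz-Thompson abundance estimate, conditioning on the
    animals actually captured: N_hat = sum over captured animals of 1/p*_i,
    where p*_i is animal i's probability of being caught at least once."""
    n_hat = 0.0
    for p in p_capture:  # one fitted capture probability per captured animal
        p_star = 1.0 - (1.0 - p) ** n_occasions
        n_hat += 1.0 / p_star
    return n_hat

# Illustrative fitted probabilities for three captured animals over 3 occasions.
N_hat = horvitz_thompson_N([0.2, 0.5, 0.5], n_occasions=3)
```

Animals that were hard to catch contribute more than one "unit" each, correcting for the similar animals that were never caught at all.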

4.
I review the use of auxiliary variables in capture-recapture models for estimation of demographic parameters (e.g. capture probability, population size, survival probability, and recruitment, emigration and immigration numbers). I focus on what has been done in current research and what still needs to be done. Typically in the literature, covariate modelling has made capture and survival probabilities functions of covariates, but there are good reasons to make other parameters functions of covariates as well. The types of covariates considered include environmental covariates that may vary by occasion but are constant over animals, and individual animal covariates that are usually assumed constant over time. I also discuss the difficulties of using time-dependent individual animal covariates and some possible solutions. Covariates are usually assumed to be measured without error, which may not be realistic. For closed populations, one approach to modelling heterogeneity in capture probabilities uses observable individual covariates and is thus related to the primary purpose of this paper. The now standard Huggins-Alho approach conditions on the captured animals and then uses a generalized Horvitz-Thompson estimator to estimate population size. This approach has the advantage of simplicity, in that one does not have to specify a distribution for the covariates, and the disadvantage that it does not use the full likelihood to estimate population size. Alternatively, one could specify a distribution for the covariates and implement a full likelihood approach to inference to estimate the capture function, the covariate probability distribution, and the population size. The general Jolly-Seber open model enables one to estimate capture probability, population sizes, survival rates, and birth numbers.
Much of the focus on modelling covariates in program MARK has been for survival and capture probability in the Cormack-Jolly-Seber model and its generalizations (including tag-return models). These models condition on the number of animals marked and released. A related, but distinct, topic is radio-telemetry survival modelling, which typically uses a modified Kaplan-Meier method and a Cox proportional hazards model for auxiliary variables. Recently there has been an emphasis on integration of recruitment in the likelihood, and research on how to implement covariate modelling for recruitment, and perhaps population size, is needed. The combined open and closed 'robust' design model can also benefit from covariate modelling, and some important options have already been implemented in MARK. Many models are usually fitted to one data set. This has necessitated the development of model selection criteria based on AIC (the Akaike Information Criterion) and the alternative of averaging over reasonable models. The special problems of estimating over-dispersion when covariates are included in the model, and then adjusting for over-dispersion in model selection, could benefit from further research.

5.
The multinomial-binomial approach to the Jolly-Seber capture-recapture model is used as a basis to derive explicit probability distributions for special cases of the Jolly-Seber model: no recruitment, or no mortality. Also given are the residual distributions that allow tests of these restricted models against the general Jolly-Seber model. Losses on capture are allowed. The special-case distribution is also derived for no recruitment and no mortality, but allowing losses on capture; this is a generalized version of Darroch's closed capture-recapture model. Here, however, it was not possible to obtain a closed-form residual distribution.

6.
Jolly-Seber models A, B, D and 2 were used to investigate capture-recapture data. The standard Jolly-Seber model A (time-dependent survival φ and capture probability p) fits capture-recapture data of migrating passerines. Captures from a long-term mist-netting study at a stop-over site (Mettnau Peninsula, SW Germany) were used to estimate stop-over length from survival rate between days and capture probability. For some data, model 2 could be used, indicating a temporary reduction in 'survival' rate. Application of models B and D gave poor results. The total number of birds stopping over, i.e. population size, was estimated from captures of 1-5 line transects of nets in the spatial trapping design. Behaviour, movements within the stop-over site, catchability and ecophysiological covariables such as moult, fat deposition and climatic parameters are likely to have a strong influence on the estimation of capture parameters.

7.
In recent years, survival analysis of radio-tagged animals has developed using methods based on the Kaplan-Meier estimator used in medical and engineering applications (Pollock et al., 1989a, b). An important assumption of this approach is that every tagged animal with a functioning radio can be relocated at each sampling time with probability 1. This assumption may not always be reasonable in practice. In this paper, we show how a general capture-recapture model can be derived that allows the probability of relocating an animal to be less than one. This model is not simply a Jolly-Seber model, because both dead and live animals can be relocated, unlike with traditional tagging. The model can also be viewed as a generalization of the Kaplan-Meier procedure, thus linking the Jolly-Seber and Kaplan-Meier approaches to survival estimation. We present maximum likelihood estimators and discuss testing between submodels. We also discuss model assumptions and their validity in practice. An example is presented based on canvasback data collected by G. M. Haramis of Patuxent Wildlife Research Center, Laurel, Maryland, USA.

8.
Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw & Caswell, 1996).

9.
We propose a mixture model that combines a discrete-time survival model for analyzing the correlated times between recurrent events, e.g. births, with a logistic regression model for the probability of never experiencing the event of interest, i.e., being a long-term survivor. The proposed survival model incorporates both observed and unobserved heterogeneity in the probability of experiencing the event of interest. We use Gibbs sampling for the fitting of such mixture models, which leads to a computationally intensive solution to the problem of fitting survival models for multiple event time data with long-term survivors. We illustrate our Bayesian approach through an analysis of Hutterite birth histories.
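The mixture structure (a long-term-survivor fraction that never experiences the event, plus a susceptible fraction with its own survival curve) can be sketched as follows; the continuous-time exponential distribution for the susceptible group is an illustrative stand-in for the paper's discrete-time model.

```python
import math

def cure_mixture_survival(t, pi_cure, hazard):
    """Population survival under a long-term-survivor ('cure') mixture:
    a fraction pi_cure never experiences the event, while the remaining
    1 - pi_cure follow an exponential(hazard) time-to-event distribution."""
    s_susceptible = math.exp(-hazard * t)
    return pi_cure + (1.0 - pi_cure) * s_susceptible
```

The survival curve therefore plateaus at pi_cure rather than decaying to zero, which is the signature of long-term survivors in the data.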

10.
We discuss the analysis of random effects in capture-recapture models, and outline Bayesian and frequentist approaches to their analysis. Under a normal model, random-effects estimators derived from Bayesian or frequentist considerations have a common form as shrinkage estimators. We discuss some of the difficulties of analysing random effects using traditional methods, and argue that a Bayesian formulation provides a rigorous framework for dealing with these difficulties. In capture-recapture models, random effects may provide a parsimonious compromise between constant and completely time-dependent models for the parameters (e.g. survival probability). We consider the application of random effects to band-recovery models, although the principles apply to more general situations, such as Cormack-Jolly-Seber models. We illustrate these ideas using a commonly analysed band-recovery data set.
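Under the normal model, the common shrinkage form shared by the Bayesian and frequentist estimators pulls each raw estimate toward the overall mean by a factor determined by the ratio of sampling variance to total variance. A minimal sketch; treating both variance components as known numbers is an assumption made only for illustration.

```python
def shrink(estimates, sampling_var, between_var):
    """Normal-model shrinkage estimator: each raw estimate is pulled toward
    the overall mean by B = sampling_var / (sampling_var + between_var).
    B near 1 (noisy data) shrinks hard; B near 0 leaves estimates alone."""
    mu = sum(estimates) / len(estimates)
    B = sampling_var / (sampling_var + between_var)
    return [mu + (1.0 - B) * (e - mu) for e in estimates]

# Illustrative yearly survival estimates with equal variance components (B = 0.5).
shrunk = shrink([0.2, 0.4, 0.9], sampling_var=1.0, between_var=1.0)
```

This is the sense in which random effects sit between the constant model (B = 1, everything collapses to the mean) and the fully time-dependent model (B = 0).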

11.
We discuss the analysis of random effects in capture-recapture models, and outline Bayesian and frequentist approaches to their analysis. Under a normal model, random-effects estimators derived from Bayesian or frequentist considerations have a common form as shrinkage estimators. We discuss some of the difficulties of analysing random effects using traditional methods, and argue that a Bayesian formulation provides a rigorous framework for dealing with these difficulties. In capture-recapture models, random effects may provide a parsimonious compromise between constant and completely time-dependent models for the parameters (e.g. survival probability). We consider the application of random effects to band-recovery models, although the principles apply to more general situations, such as Cormack-Jolly-Seber models. We illustrate these ideas using a commonly analysed band-recovery data set.

12.
An individual measure of relative survival
Summary. Relative survival techniques are used to compare survival experience in a study cohort with that expected if background population rates apply. The techniques are especially useful when cause-specific death information is not accurate or not available, as they provide a measure of excess mortality in a group of patients with a certain disease. Whereas these methods are based on group comparisons, we present here a transformation approach which instead gives each individual an outcome measure relative to the appropriate background population. The new outcome measure is easily interpreted and can be analysed by using standard survival analysis techniques. It provides additional information on relative survival and gives new options in regression analysis. For example, one can estimate the proportion of patients who survived longer than a given percentile of the respective general population, or compare survival experience of individuals while accounting for the population differences. The regression models for the new outcome measure are different from existing models, thus providing new possibilities in analysing relative survival data. One distinctive feature of our approach is that we adjust for expected survival before modelling. The paper is motivated by a study into the survival of patients after acute myocardial infarction.
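The transformation idea can be made concrete in a deliberately minimal form: map each observed survival time to the fraction of the matched background population that the individual outlived. The constant background mortality rate used here is purely an illustrative assumption; real applications draw expected survival from population life tables matched on age, sex, and calendar period.

```python
import math

def population_survival(t, pop_rate):
    """Background population survival, assuming (for illustration only)
    a constant mortality rate rather than a real life table."""
    return math.exp(-pop_rate * t)

def outlived_fraction(t_observed, pop_rate):
    """Individual, population-adjusted outcome: the fraction of the matched
    background population that the individual outlived by time t_observed.
    This is the kind of per-individual measure the transformation approach
    yields, in a simplified form."""
    return 1.0 - population_survival(t_observed, pop_rate)
```

An individual whose value exceeds 0.5 survived past the median of the comparable background population, regardless of their age or sex.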

13.
Frailty models are used in survival analysis to account for unobserved heterogeneity in individual risks of disease and death. To analyze bivariate data on related survival times (e.g., matched-pairs experiments, twin or family data), shared frailty models were suggested. In this article, we introduce shared gamma frailty models with the reversed hazard rate. We develop a Bayesian estimation procedure using the Markov chain Monte Carlo (MCMC) technique to estimate the parameters involved in the model. We present a simulation study to compare the true values of the parameters with the estimated values. We apply the model to a real-life bivariate survival dataset.
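Under the usual gamma-frailty algebra, a shared gamma frailty acting multiplicatively on the reversed hazard rate induces, after the frailty is integrated out, a joint distribution function of Clayton form in the marginal CDFs. A sketch treating the marginal CDF values F1, F2 and the dependence parameter theta as given numbers:

```python
def shared_gamma_frailty_joint_cdf(F1, F2, theta):
    """Joint CDF of a pair of related lifetimes implied by a shared gamma
    frailty on the reversed hazard rate (Clayton form).  theta > 0 is the
    frailty variance; larger theta means stronger within-pair dependence."""
    return (F1 ** (-theta) + F2 ** (-theta) - 1.0) ** (-1.0 / theta)
```

For theta approaching zero the expression tends to the independence product F1 * F2; positive theta pushes the joint CDF above that product, reflecting the shared risk within a pair.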

14.
Summary. Clinical trials of micronutrient supplementation are aimed at reducing the risk of infant mortality by increasing birth weight. Because infant mortality is greatest among low birth weight (LBW) infants (2500 g or under), an effective intervention increases the birth weight among the smallest babies. The paper defines population and counterfactual parameters for estimating the treatment effects on birth weight and on survival as functions of the percentiles of the birth weight distribution. We use a Bayesian approach with data augmentation to approximate the posterior distributions of the parameters, taking into account the uncertainty associated with the imputation of the counterfactuals. This approach is particularly suitable for exploring the sensitivity of the results to unverifiable modelling assumptions and other prior beliefs. We estimate that the average causal effect of the treatment on birth weight is 72 g (95% posterior region 33–110 g) and that this causal effect is largest among the LBW infants. Posterior inferences about average causal effects of the treatment on birth weight are robust to modelling assumptions. However, inferences about causal effects for babies at the tails of the birth weight distribution can be highly sensitive to the unverifiable assumption about the correlation between the observed and counterfactual birth weights. Among the LBW infants who have a large causal effect of the treatment on birth weight, we estimate that a baby receiving the treatment has a 5% lower chance of death than if the same baby had received the control. Among the LBW infants, we found weak evidence supporting an additional beneficial effect of the treatment on mortality independent of birth weight.

15.
The use of the Cormack-Jolly-Seber model under a standard sampling scheme of one sample per time period, when the Jolly-Seber assumption that all emigration is permanent does not hold, leads to the confounding of temporary emigration probabilities with capture probabilities. This biases the estimates of capture probability when temporary emigration is a completely random process, and both capture and survival probabilities when there is a temporary trap response in temporary emigration, or it is Markovian. The use of secondary capture samples over a shorter interval within each period, during which the population is assumed to be closed (Pollock's robust design), provides a second source of information on capture probabilities. This solves the confounding problem, and thus temporary emigration probabilities can be estimated. This process can be accomplished in an ad hoc fashion for completely random temporary emigration and to some extent in the temporary trap response case, but modelling the complete sampling process provides more flexibility and permits direct estimation of variances. For the case of Markovian temporary emigration, a full likelihood is required.
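The confounding described here can be made concrete: with one sample per period and completely random temporary emigration, the data only identify the product of the availability probability and the true capture probability. A one-line sketch (the function and parameter names are ours):

```python
def apparent_capture_prob(p, gamma):
    """Under completely random temporary emigration, a single sample per
    period confounds the true capture probability p with the temporary
    emigration probability gamma: only p' = (1 - gamma) * p is estimable."""
    return (1.0 - gamma) * p
```

Any pair (p, gamma) with the same product is indistinguishable from single samples; the robust design's secondary samples estimate p separately within each closed period, which is what breaks the confounding and lets gamma be recovered.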

16.
Developing new medical tests and identifying single biomarkers or panels of biomarkers with superior accuracy over existing classifiers promotes lifelong health of individuals and populations. Before a medical test can be routinely used in clinical practice, its accuracy within diseased and non-diseased populations must be rigorously evaluated. We introduce a method for sample size determination for studies designed to test hypotheses about medical test or biomarker sensitivity and specificity. We show how a sample size can be determined to guard against making type I and/or type II errors by calculating Bayes factors from multiple data sets simulated under null and/or alternative models. The approach can be implemented across a variety of study designs, including investigations into one test or two conditionally independent or dependent tests. We focus on a general setting that involves non-identifiable models for data when true disease status is unavailable due to the nonexistence of or undesirable side effects from a perfectly accurate (i.e. 'gold standard') test; special cases of the general method apply to identifiable models with or without gold-standard data. Calculation of Bayes factors is performed by incorporating prior information for model parameters (e.g. sensitivity, specificity, and disease prevalence) and augmenting the observed test-outcome data with unobserved latent data on disease status to facilitate Gibbs sampling from posterior distributions. We illustrate our methods using a thorough simulation study and an application to toxoplasmosis.
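For the simplest identifiable special case (gold-standard disease status, a single test, a hypothesis about sensitivity), the Bayes factor has closed form, and the simulation idea reduces to generating many null datasets and recording how often the Bayes factor crosses a decision threshold. The Beta(1, 1) prior and the threshold of 3 are illustrative choices, not the paper's.

```python
import math
import random
from math import comb, lgamma

def log_beta(a, b):
    """log of the Beta function via log-gamma."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bayes_factor_10(x, n, p0, a=1.0, b=1.0):
    """Bayes factor for H1: sensitivity ~ Beta(a, b) against H0: sensitivity = p0,
    given x positive test results among n truly diseased subjects."""
    log_m1 = math.log(comb(n, x)) + log_beta(x + a, n - x + b) - log_beta(a, b)
    log_m0 = math.log(comb(n, x)) + x * math.log(p0) + (n - x) * math.log(1.0 - p0)
    return math.exp(log_m1 - log_m0)

def type_one_error(n, p0, threshold=3.0, sims=2000, seed=0):
    """Monte Carlo estimate of how often BF10 exceeds the threshold when
    the data are actually generated under H0; used to pick n."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        x = sum(rng.random() < p0 for _ in range(n))
        hits += bayes_factor_10(x, n, p0) > threshold
    return hits / sims
```

In the paper's non-identifiable setting without a gold standard, the marginal likelihoods have no closed form and are obtained through the latent-data Gibbs sampling described above; the sample-size logic, however, is the same.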

17.
Frailty models are used in survival analysis to account for unobserved heterogeneity in individual risks of disease and death. To analyze bivariate data on related survival times (e.g., matched-pairs experiments, twin or family data), shared frailty models were suggested. Shared frailty models are used despite their limitations; to overcome their disadvantages, correlated frailty models may be used. In this article, we introduce gamma correlated frailty models with two different baseline distributions, namely the generalized log-logistic and the generalized Weibull. We introduce a Bayesian estimation procedure using the Markov chain Monte Carlo (MCMC) technique to estimate the parameters involved in these models. We present a simulation study to compare the true values of the parameters with the estimated values. We also apply these models to a real-life bivariate survival dataset on kidney infection, and a better model is suggested for the data.

18.
The joint survival function of the latent lifetimes in the dependent competing risks set-up is nonidentifiable from the joint probability distribution of the observation (failure time, cause of failure). This paper concentrates on the hazard gradients associated with the joint survival function, for which we provide bounds in the general dependent case and improve these when the type of dependence (e.g., RTI, RCSI) is known. This leads to bounds on net (marginal) hazard rates in terms of cause-specific hazard rates, which are identifiable. The problem of estimating these bounds is discussed at the end.

19.
Selection of a parsimonious model as a basis for statistical inference from capture-recapture data is critical, especially when using open models in the analysis of multiple, interrelated data sets (e.g. males and females, with two to three age classes, over three to five areas and 10-15 years). The global (i.e. most general) model for such data sets might contain hundreds of survival and recapture parameters. Here, we focus on a series of nested models of the Cormack-Jolly-Seber type, wherein the likelihood arises from products of multinomial distributions whose cell probabilities are reparameterized in terms of survival (φ) and mean capture (p) probabilities. This paper presents numerical results on two information-theoretic methods for model selection when the capture probabilities are heterogeneous over individual animals: Akaike's Information Criterion (AIC) and a dimension-consistent criterion (CAIC), derived from a Bayesian viewpoint. Quality of model selection was evaluated based on the relative Euclidean distance between the standardized θ̂ and θ (the parameter θ is vector-valued and contains the survival (φ) and mean capture (p) probabilities); this quantity, RSS = Σ_i {(θ̂_i − θ_i)/θ_i}², is a sum of squared bias and variance. Thus, the quality of inference (RSS) was judged by comparing the performance of the two information criteria, and the use of the true model (used to generate the data), in relation to the model that provided the smallest RSS. We found that heterogeneity in the capture probabilities had a negligible effect on model selection using AIC or CAIC. Model size increased as sample size increased with both AIC- and CAIC-selected models.
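The two criteria and the RSS quality measure can be written out directly. We take CAIC in Bozdogan's dimension-consistent form, -2 ln L + k(ln n + 1), which we assume is the variant meant; note its penalty per parameter exceeds AIC's once n > e, so it favours smaller models as sample size grows.

```python
import math

def aic(log_lik, k):
    """Akaike's Information Criterion for a model with k parameters."""
    return -2.0 * log_lik + 2.0 * k

def caic(log_lik, k, n):
    """Dimension-consistent criterion (Bozdogan's CAIC, assumed form):
    the per-parameter penalty grows with the log of the sample size n."""
    return -2.0 * log_lik + k * (math.log(n) + 1.0)

def rss(theta_hat, theta):
    """Relative squared distance between estimated and true parameter
    vectors: sum over i of ((theta_hat_i - theta_i) / theta_i) ** 2."""
    return sum(((h - t) / t) ** 2 for h, t in zip(theta_hat, theta))
```

In the study's design, each candidate model is scored by AIC and CAIC, and the model each criterion selects is then judged by how small its RSS is relative to the best achievable.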

20.
Abstract

Frailty models are used in survival analysis to account for unobserved heterogeneity in individual risks of disease and death. To analyze bivariate data on related survival times (e.g., matched-pairs experiments, twin or family data), shared frailty models were suggested, and they are frequently used to model heterogeneity in survival analysis. The most common shared frailty model is one in which the hazard function is the product of a random factor (the frailty) and a baseline hazard function common to all individuals, under certain assumptions about the baseline distribution and the distribution of the frailty. In this paper, we introduce shared gamma frailty models with the reversed hazard rate. We introduce a Bayesian estimation procedure using the Markov chain Monte Carlo (MCMC) technique to estimate the parameters involved in the model. We present a simulation study to compare the true values of the parameters with the estimated values. We also apply the proposed model to the Australian twin data set.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号