Similar Documents
1.
In capture–recapture experiments the capture probabilities may depend on individual covariates such as an individual's weight or age. Typically this dependence is modelled through simple parametric functions of the covariates. Here we first demonstrate that misspecification of the model can produce biased estimates, and we then develop a non-parametric procedure to estimate the functional relationship between the probability of capture and a single covariate. This estimator is incorporated into a Horvitz–Thompson estimator of the population size. The resulting estimators are evaluated in a simulation study and applied to a data set on captures of the Mountain Pygmy Possum.
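A minimal sketch of the two steps, assuming a Gaussian-kernel Nadaraya–Watson smoother for the capture–covariate relationship (the smoother, bandwidth and toy data are illustrative assumptions, not the authors' choices; it also smooths over captured animals only, so it omits the truncation correction the paper handles carefully):

```python
import numpy as np

def nw_capture_prob(x_obs, y_rate, x_eval, bandwidth):
    """Nadaraya-Watson estimate of the per-occasion capture
    probability as a smooth function of one covariate."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_obs[None, :]) / bandwidth) ** 2)
    return (w @ y_rate) / w.sum(axis=1)

def horvitz_thompson_N(x_captured, n_captures, T, bandwidth=0.5):
    """Population size as a sum of inverse inclusion probabilities.
    x_captured: covariate values of the D captured animals;
    n_captures: occasions (out of T) on which each was caught."""
    per_occasion = nw_capture_prob(x_captured, n_captures / T,
                                   x_captured, bandwidth)
    p_at_least_once = 1.0 - (1.0 - per_occasion) ** T
    return np.sum(1.0 / p_at_least_once)

# toy data: heavier animals are easier to catch
rng = np.random.default_rng(1)
N_true, T = 500, 6
x = rng.normal(0.0, 1.0, N_true)                 # covariate, e.g. weight
p = 1.0 / (1.0 + np.exp(-(-1.0 + 1.2 * x)))      # true per-occasion capture prob
caught = rng.binomial(T, p)                      # captures per animal
seen = caught > 0
print(round(horvitz_thompson_N(x[seen], caught[seen], T)))
```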

2.
I review the use of auxiliary variables in capture-recapture models for estimation of demographic parameters (e.g. capture probability, population size, survival probability, and recruitment, emigration and immigration numbers). I focus on what has been done in current research and what still needs to be done. Typically in the literature, covariate modelling has made capture and survival probabilities functions of covariates, but there are good reasons to make other parameters functions of covariates as well. The types of covariates considered include environmental covariates that may vary by occasion but are constant over animals, and individual animal covariates that are usually assumed constant over time. I also discuss the difficulties of using time-dependent individual animal covariates and some possible solutions. Covariates are usually assumed to be measured without error, which may not be realistic. For closed populations, one approach to modelling heterogeneity in capture probabilities uses observable individual covariates and is thus related to the primary purpose of this paper. The now standard Huggins-Alho approach conditions on the captured animals and then uses a generalized Horvitz-Thompson estimator to estimate population size. This approach has the advantage of simplicity, in that one does not have to specify a distribution for the covariates, and the disadvantage that it does not use the full likelihood to estimate population size. Alternatively, one could specify a distribution for the covariates and implement a full likelihood approach to inference to estimate the capture function, the covariate probability distribution, and the population size. The general Jolly-Seber open model enables one to estimate capture probability, population sizes, survival rates, and birth numbers. Much of the focus on modelling covariates in program MARK has been on survival and capture probability in the Cormack-Jolly-Seber model and its generalizations (including tag-return models). These models condition on the number of animals marked and released. A related, but distinct, topic is radio telemetry survival modelling, which typically uses a modified Kaplan-Meier method and a Cox proportional hazards model for auxiliary variables. Recently there has been an emphasis on integrating recruitment into the likelihood, and research is needed on how to implement covariate modelling for recruitment and perhaps population size. The combined open and closed 'robust' design model can also benefit from covariate modelling, and some important options have already been implemented in MARK. Many models are usually fitted to one data set; this has necessitated the development of model selection criteria based on the AIC (Akaike Information Criterion) and the alternative of averaging over reasonable models. The special problems of estimating over-dispersion when covariates are included in the model, and then adjusting for over-dispersion in model selection, could benefit from further research.
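The Huggins–Alho conditional-likelihood step described above fits in a compact sketch. The logistic per-occasion capture model, the optimizer, and the function name below are my assumptions; the idea (condition on being seen at least once, then sum inverse inclusion probabilities) follows the abstract:

```python
import numpy as np
from scipy.optimize import minimize

def huggins_alho_N(X, y, T):
    """Conditional-likelihood fit of a logistic per-occasion capture
    model p_i = expit(X @ beta), conditioning on >= 1 capture, then a
    generalized Horvitz-Thompson sum for the population size.
    X: (D, k) design matrix for the D captured animals (include a 1s column);
    y: (D,) number of occasions, out of T, on which each was caught."""
    def neg_cond_loglik(beta):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        p_seen = 1.0 - (1.0 - p) ** T            # P(captured at least once)
        ll = y * np.log(p) + (T - y) * np.log1p(-p) - np.log(p_seen)
        return -ll.sum()
    fit = minimize(neg_cond_loglik, np.zeros(X.shape[1]), method="BFGS")
    p = 1.0 / (1.0 + np.exp(-X @ fit.x))
    return np.sum(1.0 / (1.0 - (1.0 - p) ** T))  # generalized HT estimate
```

In real use the log terms would need guarding against fitted probabilities numerically reaching 0 or 1.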

3.
We use a class of parametric counting process regression models, commonly employed in the analysis of failure time data, to formulate the subject-specific capture probabilities for removal and recapture studies conducted in continuous time. We estimate the regression parameters by modifying the conventional likelihood score function for left-truncated and right-censored data to accommodate an unknown population size and missing covariates on uncaptured subjects, and we subsequently estimate the population size by a martingale-based estimating function. The resulting estimators for the regression parameters and population size are consistent and asymptotically normal under appropriate regularity conditions. We assess the small-sample properties of the proposed estimators through Monte Carlo simulation and present an application to a bird banding exercise.

4.
Capture–recapture experiments are commonly used to estimate the size of a closed population. However, the associated estimators of the population size are well known to be highly sensitive to misspecification of the capture probabilities. To address this, we present a general semiparametric framework for the analysis of capture–recapture experiments in which the capture probability depends on individual characteristics, time effects and behavioural response. This generalizes well-known parametric capture–recapture models and extends previous semiparametric models in which there is no time dependence or behavioural response. The method is evaluated in simulations and applied to two real data sets.

5.
In this article, we highlight some interesting facts about Bayesian variable selection methods for linear regression models in settings where the design matrix exhibits strong collinearity. We first demonstrate, via real data analysis and simulation studies, that summaries of the posterior distribution based on marginal and joint distributions may give conflicting results when assessing the importance of strongly correlated covariates. The natural question is which one should be used in practice. The simulation studies suggest that posterior inclusion probabilities and Bayes factors that evaluate the importance of correlated covariates jointly are more appropriate, and that some priors may be more adversely affected in such a setting. To better understand the phenomenon, we study some toy examples with Zellner's g-prior. The results show that strong collinearity may lead to a multimodal posterior distribution over models, in which case joint summaries are more appropriate than marginal summaries. We therefore recommend a routine examination of the correlation matrix and calculation of the joint inclusion probabilities for correlated covariates, in addition to marginal inclusion probabilities, when assessing the importance of covariates in Bayesian variable selection.
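The marginal-versus-joint contrast is easy to reproduce by enumerating submodels under a g-prior. A self-contained sketch, assuming centred predictors, a uniform model prior and the unit-information choice g = n (these settings and the toy data are my assumptions, not the article's); it uses the standard closed-form Bayes factor of each submodel against the null:

```python
import numpy as np
from itertools import combinations

def gprior_model_probs(X, y, g=None):
    """Enumerate all submodels under Zellner's g-prior and return
    posterior model probabilities under a uniform model prior, via
    BF(M:null) = (1+g)^((n-1-k)/2) / (1 + g*(1-R^2))^((n-1)/2)."""
    n, p = X.shape
    g = g if g is not None else n                 # unit-information default
    yc = y - y.mean()
    tss = yc @ yc
    models, log_bf = [], []
    for k in range(p + 1):
        for S in combinations(range(p), k):
            if k == 0:
                r2 = 0.0
            else:
                Xs = X[:, S] - X[:, S].mean(axis=0)
                coef, *_ = np.linalg.lstsq(Xs, yc, rcond=None)
                r2 = 1 - ((yc - Xs @ coef) @ (yc - Xs @ coef)) / tss
            models.append(S)
            log_bf.append(0.5 * (n - 1 - k) * np.log1p(g)
                          - 0.5 * (n - 1) * np.log1p(g * (1 - r2)))
    w = np.exp(log_bf - np.max(log_bf))
    return models, w / w.sum()

# two strongly collinear covariates: compare marginal and joint summaries
rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)               # corr(x1, x2) near 0.999
y = x1 + rng.normal(size=n)
models, probs = gprior_model_probs(np.column_stack([x1, x2]), y)
marg1 = sum(p for m, p in zip(models, probs) if 0 in m)
joint = sum(p for m, p in zip(models, probs) if 0 in m and 1 in m)
either = sum(p for m, p in zip(models, probs) if m)
print(f"P(x1 in)={marg1:.2f}  P(both)={joint:.2f}  P(at least one)={either:.2f}")
```

In runs of this toy example the posterior mass tends to split between the two single-covariate models, so each marginal inclusion probability sits near 0.5 while the probability that at least one of the pair is included stays near 1, which is the conflict the article describes.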

6.
For capture–recapture models in which covariates are subject to measurement errors and missing data, a set of estimating equations is constructed to estimate the population size and the relevant parameters. These estimating equations can be solved by an algorithm similar to the EM algorithm. The proposed method is also applicable when covariates with no measurement errors have missing data. Simulation studies are used to assess the performance of the proposed estimator. The estimator is also applied to a capture–recapture experiment on the bird species Prinia flaviventris in Hong Kong. The Canadian Journal of Statistics 37: 645–658; 2009.

7.
This paper deals with the regression analysis of failure time data when there are censoring and multiple types of failure. We propose a semiparametric generalization of a parametric mixture model of Larson & Dinse (1985), for which the marginal probabilities of the various failure types are logistic functions of the covariates. Given the type of failure, the conditional distribution of the time to failure follows a proportional hazards model. A marginal likelihood approach to estimating the regression parameters is suggested, whereby the baseline hazard functions are eliminated as nuisance parameters. The Monte Carlo method is used to approximate the marginal likelihood; the resulting function is maximized easily using existing software. Some guidelines for choosing the number of Monte Carlo replications are given. Fixing the regression parameters at their estimated values, the full likelihood is maximized via an EM algorithm to estimate the baseline survivor functions. The methods suggested are illustrated using the Stanford heart transplant data.
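To make the mixture structure concrete, here is a small simulation from a Larson–Dinse-style model with two failure types; the exponential baseline hazards, parameter values and function name are illustrative assumptions (the paper treats the baselines semiparametrically):

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_mixture_competing_risks(X, alpha, lam, beta):
    """Simulate from a two-type mixture: the failure type J is
    logistic in x, and given J = j the failure time follows a
    proportional-hazards form with an exponential baseline:
    hazard_j(t | x) = lam[j] * exp(x @ beta[j])."""
    p_type1 = 1.0 / (1.0 + np.exp(-X @ alpha))       # P(J = 1 | x)
    J = rng.binomial(1, p_type1)                     # failure type 0/1
    rate = lam[J] * np.exp(np.einsum("ij,ij->i", X, beta[J]))
    T = rng.exponential(1.0 / rate)
    return T, J

X = rng.normal(size=(1000, 2))
alpha = np.array([0.8, -0.5])                        # type mixing coefficients
lam = np.array([0.2, 0.5])                           # baseline rates by type
beta = np.array([[0.3, 0.0], [0.0, -0.4]])           # PH coefficients by type
T, J = simulate_mixture_competing_risks(X, alpha, lam, beta)
print(f"type-1 fraction: {J.mean():.2f}, median time: {np.median(T):.2f}")
```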

8.
Empirical Bayes methods and a bootstrap bias adjustment procedure are used to estimate the size of a closed population when the individual capture probabilities are independently and identically distributed with a Beta distribution. The method is examined in simulations and applied to several well-known datasets. The simulations show the estimator performs as well as several other proposed parametric and non-parametric estimators.
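Under this Beta model the capture count of each animal over T occasions is Beta-binomial, so the population size can be recovered from the unseen zero class. The paper's estimator is empirical Bayes with a bootstrap bias adjustment; the sketch below shows the simpler zero-truncated maximum-likelihood variant of the same setup (the function name, optimizer and starting values are mine):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln, comb

def betabin_N(counts, T):
    """Fit Beta(a, b) capture heterogeneity to the zero-truncated
    capture counts by ML, then N-hat = D / P(seen at least once).
    counts: captures (1..T) for each of the D observed animals."""
    counts = np.asarray(counts)
    D = counts.size

    def log_pmf(k, a, b):                  # Beta-binomial log pmf
        return np.log(comb(T, k)) + betaln(k + a, T - k + b) - betaln(a, b)

    def nll(theta):
        a, b = np.exp(theta)               # keep a, b positive
        log_p0 = log_pmf(0, a, b)          # P(never captured)
        # zero-truncated log-likelihood of the observed counts
        return -(log_pmf(counts, a, b) - np.log1p(-np.exp(log_p0))).sum()

    fit = minimize(nll, np.log([1.0, 1.0]), method="Nelder-Mead")
    a, b = np.exp(fit.x)
    return D / (1.0 - np.exp(log_pmf(0, a, b)))

# toy check against a known population size
rng = np.random.default_rng(6)
T, N_true = 6, 300
p = rng.beta(1.0, 3.0, N_true)             # heterogeneous capture probs
k = rng.binomial(T, p)
print(round(betabin_N(k[k > 0], T)))       # compare with N_true = 300
```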

9.
We consider the Arnason-Schwarz model, usually used to estimate survival and movement probabilities from capture-recapture data. A missing data structure of this model is constructed which allows a clear separation of the information relative to capture and relative to movement. Extensions of the Arnason-Schwarz model are considered. For example, we consider a model that takes into account both the individual migration history and the individual reproduction history. The biological assumptions of these extensions are summarized via a directed graph. Owing to missing data, the posterior distribution of the parameters is numerically intractable. To overcome these computational difficulties we advocate a Gibbs sampling algorithm that takes advantage of the missing data structure inherent in capture-recapture models. Prior information on survival, capture and movement probabilities typically consists of a prior mean and a prior 95% credible interval. Dirichlet distributions are used to incorporate this prior information on capture, survival and movement probabilities. Finally, the influence of the prior on the Bayesian estimates of the movement probabilities is examined.
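Turning a prior mean and a prior 95% credible interval into a concrete prior is a small numerical exercise. The paper elicits Dirichlet priors; the sketch below shows the two-category (Beta) special case, matching the prior mean exactly and the interval width numerically (the concentration parameterization and bracketing constants are my assumptions):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import beta

def beta_from_mean_ci(mean, lo, hi, level=0.95):
    """Find Beta(a, b) with the given mean whose central credible
    interval has (approximately) the width hi - lo. Parameterize by
    the concentration c: a = mean*c, b = (1-mean)*c; the interval
    width is monotone decreasing in c, so a root-finder suffices."""
    tail = (1 - level) / 2
    def width_gap(log_c):
        c = np.exp(log_c)
        a, b = mean * c, (1 - mean) * c
        return (beta.ppf(1 - tail, a, b) - beta.ppf(tail, a, b)) - (hi - lo)
    c = np.exp(brentq(width_gap, np.log(0.1), np.log(1e4)))
    return mean * c, (1 - mean) * c

# e.g. prior mean 0.8 and a 95% interval of (0.65, 0.95) for a survival prob
a, b = beta_from_mean_ci(0.8, 0.65, 0.95)
print(round(a, 2), round(b, 2), beta.interval(0.95, a, b))
```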

10.
In this article, we consider a panel data partially linear regression model with a fixed effect and a nonparametric time trend function. The data may be dependent across individuals through the linear regressor and the error components. Unlike methods using nonparametric smoothing techniques, a difference-based method is proposed to estimate the linear regression coefficients of the model, which avoids bandwidth selection. Here the difference technique is employed to completely eliminate the effect of the nonparametric function, not the fixed effects, on the estimation of the linear regression coefficients. A more efficient estimator for the parametric part is therefore anticipated, which is confirmed by the simulation results. For the nonparametric component, the polynomial spline technique is implemented. The asymptotic properties of the estimators for the parametric and nonparametric parts are presented. We also show how to select informative covariates in the linear part by using smoothly clipped absolute deviation (SCAD)-penalized estimators on a difference-based least-squares objective function; the resulting estimators perform asymptotically as well as the oracle procedure in terms of selecting the correct model.
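The core of the difference idea fits in a few lines. This sketch shows the simpler cross-section case (sort on the smooth covariate, first-difference to cancel the nonparametric trend, then ordinary least squares); the panel and fixed-effect layers of the article are omitted, and the toy data are mine:

```python
import numpy as np

def difference_estimator(y, X, z):
    """Difference-based estimate of beta in y = X @ beta + f(z) + eps:
    sort by z, first-difference so the smooth f (approximately)
    cancels, then run OLS on the differenced data. No bandwidth needed."""
    order = np.argsort(z)
    y_d = np.diff(y[order])
    X_d = np.diff(X[order], axis=0)
    beta, *_ = np.linalg.lstsq(X_d, y_d, rcond=None)
    return beta

rng = np.random.default_rng(2)
n = 400
z = rng.uniform(0, 1, n)                          # smooth-trend covariate
X = rng.normal(size=(n, 2))
y = X @ np.array([1.5, -0.5]) + np.sin(2 * np.pi * z) + 0.3 * rng.normal(size=n)
print(difference_estimator(y, X, z))              # close to [1.5, -0.5]
```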

11.
A drawback of nonparametric estimators of the size of a closed population in the presence of heterogeneous capture probabilities has been their lack of analytic tractability. Here we show that the martingale estimating function/sample coverage approach to estimating the size of a closed population with heterogeneous capture probabilities is mathematically tractable, and we develop its large-sample properties.

12.
This paper proposes Bayesian nonparametric mixing for some well-known and popular models. The distribution of the observations is assumed to contain an unknown mixed effects term that includes a fixed effects term, a function of the observed covariates, and an additive or multiplicative random effects term. Typically these random effects are assumed to be independent of the observed covariates and independent and identically distributed from a distribution in some known parametric family. This assumption may be suspect if there is interaction between observed and unobserved covariates or if the fixed effects predictor of the observed covariates is misspecified. Another cause for concern is simply that the covariates may affect more than just the location of the mixed effects distribution. As a consequence, the distribution of the random effects could be highly irregular in modality and skewness, leaving parametric families unable to model it adequately. This paper therefore proposes a Bayesian nonparametric prior for the random effects to capture possible deviations in modality and skewness and to explore the effect of the observed covariates on the distribution of the mixed effects.

13.
Ion Grama, Statistics, 2019, 53(4): 807–838
We propose an extension of the standard Cox proportional hazards model which allows the estimation of the probabilities of rare events. It is known that when the data are heavily censored, the estimation of the tail of the survival distribution is not reliable. To improve the estimate of the baseline survival function in the range of the largest observations, and to extend it beyond them, we adjust the tail of the baseline distribution beyond some threshold by an extreme value model under appropriate assumptions. The survival distributions conditional on the covariates are easily computed from the baseline. A procedure allowing an automatic choice of the threshold and an aggregated estimate of the survival probabilities are also proposed. The performance is studied by simulations, and an application to two data sets is given.
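The tail-adjustment idea can be sketched with a peaks-over-threshold splice: fit a generalized Pareto tail to exceedances of a threshold and glue it onto the empirical survival curve. Unlike the paper, this sketch ignores censoring and covariates and fixes the threshold by hand (all simplifying assumptions of the sketch):

```python
import numpy as np
from scipy.stats import genpareto

def spliced_survival(times, tau, t_grid):
    """Empirical survival curve up to the threshold tau, spliced with
    a generalized Pareto tail fitted to the exceedances over tau:
    S(t) = S_emp(tau) * S_GPD(t - tau) for t > tau."""
    times = np.asarray(times)
    S_emp = lambda t: np.mean(times > t)
    exc = times[times > tau] - tau
    xi, _, sigma = genpareto.fit(exc, floc=0)      # tail fit, location fixed at 0
    return np.where(t_grid <= tau,
                    [S_emp(t) for t in t_grid],
                    S_emp(tau) * genpareto.sf(np.maximum(t_grid - tau, 0),
                                              xi, scale=sigma))

t = np.linspace(0, 15, 100)
times = np.random.default_rng(7).weibull(1.5, 500) * 4
S = spliced_survival(times, tau=np.quantile(times, 0.9), t_grid=t)
print(S[-5:])                                      # extrapolated tail probabilities
```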

14.
There are various settings in which researchers are interested in the assessment of the correlation between repeated measurements that are taken within the same subject (i.e., reliability). For example, the same rating scale may be used to assess the symptom severity of the same patients by multiple physicians, or the same outcome may be measured repeatedly over time in the same patients. Reliability can be estimated in various ways, for example, using the classical Pearson correlation or the intra‐class correlation in clustered data. However, contemporary data often have a complex structure that goes well beyond the restrictive assumptions that are needed with the more conventional methods to estimate reliability. In the current paper, we propose a general and flexible modeling approach that allows for the derivation of reliability estimates, standard errors, and confidence intervals – appropriately taking hierarchies and covariates in the data into account. Our methodology is developed for continuous outcomes together with covariates of an arbitrary type. The methodology is illustrated in a case study, and a Web Appendix is provided which details the computations using the R package CorrMixed and the SAS software.
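In the simplest hierarchy (repeated measurements nested in subjects), the reliability in question is the intra-class correlation from a random-intercept model: the between-subject share of the total variance after adjusting for covariates. A minimal sketch using statsmodels (the paper's own tooling is the R package CorrMixed; the toy data and covariate here are mine):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# toy repeated-measures data: 50 subjects, 4 measurements each
rng = np.random.default_rng(3)
subj = np.repeat(np.arange(50), 4)
u = rng.normal(0, 1.0, 50)[subj]          # between-subject random effect
age = rng.normal(50, 10, subj.size)       # covariate
y = 2 + 0.05 * age + u + rng.normal(0, 0.8, subj.size)
df = pd.DataFrame({"y": y, "age": age, "subject": subj})

# random-intercept model; reliability = between-subject variance share
fit = smf.mixedlm("y ~ age", df, groups=df["subject"]).fit()
var_between = fit.cov_re.iloc[0, 0]       # subject (random-intercept) variance
var_within = fit.scale                    # residual variance
icc = var_between / (var_between + var_within)
print(f"ICC adjusted for age: {icc:.2f}")
```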

15.
The authors consider a semiparametric partially linear regression model with serially correlated errors. They propose a new way of estimating the error structure which has the advantage that it does not involve any nonparametric estimation. This allows them to develop an inference procedure consisting of a bandwidth selection method, an efficient semiparametric generalized least squares estimator of the parametric component, a goodness‐of‐fit test based on the bootstrap, and a technique for selecting significant covariates in the parametric component. They assess their approach through simulation studies and illustrate it with a concrete application.

16.
This study investigates the empirical likelihood method for partially linear additive models in which certain covariates are measured with additive errors. An empirical log-likelihood ratio for the parametric component is proposed based on the profile procedure, and a nonparametric version of Wilks' theorem is derived. Confidence regions for the parametric component with asymptotically correct coverage probabilities are then constructed from these results. Furthermore, a simulation study is conducted to illustrate the performance of the proposed method.
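The Wilks-type calibration at the heart of this approach is easy to demonstrate in one dimension. A sketch of Owen-style empirical likelihood for a mean; the profiling over the nonparametric additive part and the measurement-error correction of the paper are omitted, and the bracketing constants are mine:

```python
import numpy as np
from scipy.optimize import brentq

def el_stat(z):
    """-2 log empirical likelihood ratio for H0: E[Z] = 0 (1-d case).
    Owen's weights are w_i = 1 / (n * (1 + t*z_i)), with the Lagrange
    multiplier t solving sum(z_i / (1 + t*z_i)) = 0; the statistic is
    asymptotically chi-square(1) (the Wilks-type theorem)."""
    z = np.asarray(z, dtype=float)
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                      # 0 outside the convex hull
    eps = 1e-9
    lo = (-1 + eps) / z.max()              # keep every 1 + t*z_i > 0
    hi = (-1 + eps) / z.min()
    t = brentq(lambda t: np.sum(z / (1 + t * z)), lo, hi)
    return 2 * np.sum(np.log1p(t * z))

x = np.random.default_rng(4).normal(0.2, 1.0, 80)
print(el_stat(x - 0.2) < 3.84)   # true mean: usually inside the 95% region
print(el_stat(x - 1.0) < 3.84)   # wrong mean: usually rejected
```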

17.
Graphical representation of survival curves is often used to illustrate associations between exposures and time-to-event outcomes. However, when exposures are time-dependent, the calculation of survival probabilities is not straightforward. Our aim was to develop a method to estimate time-dependent survival probabilities and represent them graphically. Cox models with time-dependent indicators representing state changes were fitted, and survival probabilities were plotted using pre-specified times of state changes. Time-varying hazard ratios for the state change were also explored. The method was applied to data from the Adult-to-Adult Living Donor Liver Transplantation Cohort Study (A2ALL). Survival curves showing a 'split' at a pre-specified time t allow a qualitative comparison of survival probabilities between patients with similar baseline covariates who do and do not experience a state change at time t. Interactions with time since the state change can be represented visually to reflect hazard ratios that change over time. The A2ALL results showed differences in survival probabilities among patients who did not receive a transplant, received a living donor transplant, or received a deceased donor transplant. These graphical representations of survival curves with time-dependent indicators improve upon previous methods and allow a clinically meaningful interpretation.
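The 'split' construction follows directly from the time-dependent Cox model: before the state change the cumulative hazard accrues at the baseline rate, and after it the rate is multiplied by the hazard ratio. A plotting sketch under assumed values of the baseline cumulative hazard and log hazard ratio (in practice both would come from the fitted model):

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed ingredients (in practice, from a fitted Cox model with a
# time-dependent indicator): baseline cumulative hazard H0 and the
# log hazard ratio beta for the post-change state.
H0 = lambda t: 0.1 * t
beta, t_star = 0.7, 3.0                   # state change at t* = 3

t = np.linspace(0.0, 10.0, 200)
# cumulative hazard: baseline up to t*, multiplied by exp(beta) afterwards
H_change = (H0(np.minimum(t, t_star))
            + np.exp(beta) * np.clip(H0(t) - H0(t_star), 0.0, None))

plt.plot(t, np.exp(-H0(t)), label="no state change")
plt.plot(t, np.exp(-H_change), label="state change at t = 3")
plt.xlabel("time"); plt.ylabel("survival probability"); plt.legend()
plt.show()
```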

18.
A smoothing procedure for discrete-time failure data is proposed which allows for the inclusion of covariates. This purely nonparametric method is based on discrete or continuous kernel smoothing techniques and gives a compromise between fidelity to the data and smoothness. The method may be used as an exploratory tool to uncover the underlying structure or as an alternative to parametric methods when prediction is the primary objective. Confidence intervals are considered, and alternative cross-validation-based choices of the smoothing parameters are investigated.
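A minimal life-table version of such a smoother, without covariates (the paper also brings covariates into the kernel; the Gaussian kernel, bandwidth and toy data are my choices): the hazard at each time is a kernel-weighted occurrence/exposure rate.

```python
import numpy as np

def smoothed_discrete_hazard(deaths, at_risk, bandwidth=2.0):
    """Kernel-smoothed discrete-time hazard: at each time t,
    h(t) = sum_s K((t-s)/b)*deaths[s] / sum_s K((t-s)/b)*at_risk[s],
    i.e. a locally weighted events-over-exposure ratio."""
    s = np.arange(len(deaths))
    K = np.exp(-0.5 * ((s[:, None] - s[None, :]) / bandwidth) ** 2)
    return (K @ deaths) / (K @ at_risk)

deaths = np.array([2, 3, 1, 0, 4, 2, 1, 0, 1, 0])
at_risk = np.array([100, 95, 90, 88, 85, 78, 74, 71, 69, 67])
print(np.round(smoothed_discrete_hazard(deaths, at_risk), 3))
```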
