Similar Documents
20 similar documents found (search time: 31 ms)
1.
Capture–recapture experiments are commonly used to estimate the size of a closed population. However, the associated estimators of the population size are well known to be highly sensitive to misspecification of the capture probabilities. To address this, we present a general semiparametric framework for the analysis of capture–recapture experiments when the capture probability depends on individual characteristics, time effects and behavioural response. This generalizes well‐known general parametric capture–recapture models and extends previous semiparametric models in which there is no time dependence or behavioural response. The method is evaluated in simulations and applied to two real data sets.

2.
Empirical Bayes methods and a bootstrap bias adjustment procedure are used to estimate the size of a closed population when the individual capture probabilities are independently and identically distributed with a Beta distribution. The method is examined in simulations and applied to several well-known datasets. The simulations show the estimator performs as well as several other proposed parametric and non-parametric estimators.
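The data-generating model described in this abstract is easy to sketch. The following is a minimal simulation of a closed population in which each individual's capture probability is an independent Beta draw; the function name, parameter values and seed are illustrative assumptions, not taken from the paper.

```python
import random

def simulate_beta_capture(N, t, a, b, seed=0):
    """Simulate a closed population of size N over t capture occasions,
    where each individual's capture probability is an independent
    Beta(a, b) draw; return the capture counts of observed individuals."""
    rng = random.Random(seed)
    histories = []
    for _ in range(N):
        p = rng.betavariate(a, b)            # heterogeneous capture probability
        captures = sum(1 for _ in range(t) if rng.random() < p)
        if captures > 0:                     # uncaptured individuals stay unobserved
            histories.append(captures)
    return histories

histories = simulate_beta_capture(N=200, t=5, a=1.0, b=3.0)
print(len(histories), "of 200 individuals observed at least once")
```

An estimator of N must work from `histories` alone, since the number of never-captured individuals is unknown.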

3.
This paper compares the properties of various estimators for a beta‐binomial model for estimating the size of a heterogeneous population. It is found that maximum likelihood and conditional maximum likelihood estimators perform well for a large population with a large capture proportion. The jackknife and the sample coverage estimators are biased for low capture probabilities. The performance of the martingale estimator is satisfactory, but it requires full capture histories. The Gibbs sampler and Metropolis‐Hastings algorithm provide reasonable posterior estimates for informative priors.

4.
If the capture probabilities in a capture‐recapture experiment depend on covariates, parametric models may be fitted and the population size may then be estimated. Here a semiparametric model for the capture probabilities that allows both continuous and categorical covariates is developed. Kernel smoothing and profile estimating equations are used to estimate the nonparametric and parametric components. Analytic forms of the standard errors are derived, which allows an empirical bias bandwidth selection procedure to be used to estimate the bandwidth. The method is evaluated in simulations and is applied to a real data set concerning captures of Prinia flaviventris, a common bird species in Southeast Asia.

5.
The author is concerned with log‐linear estimators of the size N of a population in a capture‐recapture experiment featuring heterogeneity in the individual capture probabilities and a time effect. He also considers models where the first capture influences the probability of subsequent captures. He derives several results from a new inequality associated with a dispersive ordering for discrete random variables. He shows that in a log‐linear model with inter‐individual heterogeneity, the estimator of N is an increasing function of the heterogeneity parameter. He also shows that the inclusion of a time effect in the capture probabilities decreases the estimator of N in models without heterogeneity. He further argues that a model featuring heterogeneity can accommodate a time effect through a small change in the heterogeneity parameter. He demonstrates these results using an inequality for the estimators of the heterogeneity parameters and illustrates them in a Monte Carlo experiment.

6.
ABSTRACT

A drawback of nonparametric estimators of the size of a closed population in the presence of heterogeneous capture probabilities has been their lack of analytic tractability. Here we show that the martingale estimating function/sample coverage approach to estimating the size of a closed population with heterogeneous capture probabilities is mathematically tractable, and we develop its large-sample properties.

7.
Nuisance parameter elimination is a central problem in capture–recapture modelling. In this paper, we consider a closed population capture–recapture model which assumes the capture probabilities vary only with the sampling occasion. In this model, the capture probabilities are regarded as nuisance parameters and the unknown number of individuals is the parameter of interest. In order to eliminate the nuisance parameters, the likelihood function is integrated with respect to a weight function (uniform and Jeffreys) of the nuisance parameters, resulting in an integrated likelihood function depending only on the population size. For these integrated likelihood functions, analytical expressions for the maximum likelihood estimates are obtained and it is proved that they are always finite and unique. Variance estimates of the proposed estimators are obtained via a parametric bootstrap resampling procedure. The proposed methods are illustrated on a real data set and their frequentist properties are assessed by means of a simulation study.
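The integration step this abstract describes can be sketched numerically. Below, each per-occasion capture probability in the occasion-dependent (M_t) likelihood is integrated out against a uniform weight, which reduces each factor to a Beta function evaluated via `lgamma`; the maximizer over N is then found by grid search. The toy counts are hypothetical and the paper's exact expressions may differ.

```python
import math

def integrated_loglik(N, nj, n):
    """Log integrated likelihood for model M_t with a uniform weight on each
    per-occasion capture probability. nj = captures on each occasion,
    n = number of distinct individuals observed. Each uniform integral
    of p^c (1-p)^(N-c) is the Beta function B(c+1, N-c+1)."""
    if N < n:
        return float("-inf")
    ll = math.lgamma(N + 1) - math.lgamma(N - n + 1)
    for c in nj:
        ll += math.lgamma(c + 1) + math.lgamma(N - c + 1) - math.lgamma(N + 2)
    return ll

def mle_population_size(nj, n, N_max=10_000):
    """Grid-search maximizer of the integrated likelihood over N."""
    return max(range(n, N_max + 1), key=lambda N: integrated_loglik(N, nj, n))

# toy data: 4 occasions, 60 distinct individuals observed
print(mle_population_size([30, 25, 28, 22], n=60))
```

Because the leading combinatorial term grows like n·log N while the integrated Beta factors shrink faster, the integrated likelihood has an interior maximum, consistent with the finiteness result the abstract reports.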

8.
When there are frequent capture occasions, both semiparametric and nonparametric estimators for the size of an open population have been proposed using kernel smoothing methods. While kernel smoothing methods are mathematically tractable, fitting them to data is computationally intensive. Here, we use smoothing splines in the form of P-splines to provide an alternative, less computationally intensive method of fitting these models to capture–recapture data from open populations with frequent capture occasions. We fit the model to capture data collected over 64 occasions and model the population size as a function of time, seasonal effects and an environmental covariate. A small simulation study is also conducted to examine the performance of the estimators and their standard errors.

9.
The logistic regression model has become a standard tool to investigate the relationship between a binary outcome and a set of potential predictors. When analyzing binary data, it often arises that the observed proportion of zeros is greater than expected under the postulated logistic model. Zero-inflated binomial (ZIB) models have been developed to fit binary data that contain too many zeros. Maximum likelihood estimators in these models have been proposed and their asymptotic properties established. Several aspects of ZIB models still deserve attention, however, such as the estimation of odds ratios and event probabilities. In this article, we propose estimators of these quantities and we investigate their properties both theoretically and via simulations. Based on these results, we provide recommendations about the range of conditions (minimum sample size, maximum proportion of zeros in excess) under which a reliable statistical inference on the odds ratios and event probabilities can be obtained in a ZIB regression model. A real-data example illustrates the proposed estimators.

10.
Summary.  In capture–recapture experiments the capture probabilities may depend on individual covariates such as an individual's weight or age. Typically this dependence is modelled through simple parametric functions of the covariates. Here we first demonstrate that misspecification of the model can produce biased estimates and subsequently develop a non-parametric procedure to estimate the functional relationship between the probability of capture and a single covariate. This estimator is then incorporated in a Horvitz–Thompson estimator to estimate the size of the population. The resulting estimators are evaluated in a simulation study and applied to a data set on captures of the Mountain Pygmy Possum.
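The Horvitz–Thompson step in this abstract is standard and can be sketched in a few lines: each observed individual is weighted by the inverse of its estimated probability of being caught at least once. The capture probabilities below are hypothetical stand-ins for the smoothed estimates the paper would produce.

```python
def horvitz_thompson_N(p_hat, t):
    """Population-size estimate from per-individual capture-probability
    estimates p_hat over t occasions: each observed individual is weighted
    by 1 / P(caught at least once) = 1 / (1 - (1 - p)^t)."""
    return sum(1.0 / (1.0 - (1.0 - p) ** t) for p in p_hat)

# toy example: 5 observed individuals with smoothed capture probabilities
print(round(horvitz_thompson_N([0.2, 0.3, 0.25, 0.4, 0.15], t=4)))  # → 8
```

Individuals with low estimated capture probability receive large weights, since each one observed represents many that were likely missed.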

11.
We use a class of parametric counting process regression models that are commonly employed in the analysis of failure time data to formulate the subject-specific capture probabilities for removal and recapture studies conducted in continuous time. We estimate the regression parameters by modifying the conventional likelihood score function for left-truncated and right-censored data to accommodate an unknown population size and missing covariates on uncaptured subjects, and we subsequently estimate the population size by a martingale-based estimating function. The resultant estimators for the regression parameters and population size are consistent and asymptotically normal under appropriate regularity conditions. We assess the small sample properties of the proposed estimators through Monte Carlo simulation and we present an application to a bird banding exercise.

12.
Estimating the parameter of a Dirichlet distribution is an interesting question, since this distribution arises in many situations of applied probability. Classical procedures are based on a sample from the Dirichlet distribution. In this paper we exhibit five different estimators based on only one observation. They rely either on residual allocation model decompositions or on sampling properties of Dirichlet distributions. Two approaches are investigated: the first uses fragment sizes and the second uses size-biased permutations of a partition. Numerical computations based on simulations are supplied. The estimators are finally used to estimate birth probabilities per month.

13.
A model for analyzing release-recapture data is presented that generalizes a previously existing individual covariate model to include multiple groups of animals. As in the previous model, the generalized version includes selection parameters that relate individual covariates to survival potential. Significance of the selection parameters was equivalent to significance of the individual covariates. Simulation studies were conducted to investigate three inferential properties with respect to the selection parameters: (1) sample size requirements, (2) validity of the likelihood ratio test (LRT) and (3) power of the LRT. When the survival and capture probabilities ranged from 0.5 to 1.0, a total sample size of 300 was necessary to achieve a power of 0.80 at a significance level of 0.1 when testing the significance of the selection parameters. However, only half that (a total of 150) was necessary for the distribution of the maximum likelihood estimators of the selection parameters to approximate their asymptotic distributions. In general, as the survival and capture probabilities decreased, the sample size requirements increased. The validity of the LRT for testing the significance of the selection parameters was confirmed because the LRT statistic was distributed as theoretically expected under the null hypothesis, i.e. like a χ² random variable. When the baseline survival model was fully parameterized with population and interval effects, the LRT was also valid in the presence of unaccounted-for random variation. The power of the LRT for testing the selection parameters was unaffected by over-parameterization of the baseline survival and capture models. The simulation studies showed that for testing the significance of individual covariates to survival the LRT was remarkably robust to assumption violations.
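The likelihood ratio test used throughout this abstract reduces to a one-line statistic compared against a χ² critical value. The log-likelihood values and degrees of freedom below are hypothetical, purely to illustrate the mechanics.

```python
def lrt_statistic(ll_full, ll_reduced):
    """Likelihood ratio test statistic: twice the log-likelihood gain of the
    full model over the reduced (null) model. Under the null it is
    approximately chi-squared with df = difference in parameter counts."""
    return 2.0 * (ll_full - ll_reduced)

# 95th percentiles of the chi-squared distribution by degrees of freedom
CHI2_95 = {1: 3.841, 2: 5.991, 3: 7.815}

# hypothetical fits: full model adds one selection parameter
stat = lrt_statistic(ll_full=-1230.4, ll_reduced=-1233.9)
print(stat > CHI2_95[1])  # → True: reject at the 5% level with 1 df
```

The simulation finding above, that the statistic behaves like a χ² variable under the null, is exactly what justifies comparing `stat` to these critical values.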

15.
In a capture–recapture experiment, the number of measurements for individual covariates usually equals the number of captures. This creates a heteroscedastic measurement error problem and the usual surrogate condition does not hold in the context of a measurement error model. This study adopts a small measurement error assumption to approximate the conventional estimating functions and the population size estimator. This study also investigates the biases of the resulting estimators. In addition, modifications for two common approximation methods, regression calibration and simulation extrapolation, to accommodate heteroscedastic measurement error are also discussed. These estimation methods are examined through simulations and illustrated by analysing a capture–recapture data set.

16.
A two-phase approach for sampling with unequal inclusion probabilities and fixed sample size is presented. The expansion estimator using target inclusion probabilities is suggested for estimation of population parameters. As an alternative, the estimator for two-phase sampling can be used for estimation. Inclusion probabilities are shown to be asymptotically equivalent to the targeted inclusion probabilities. By means of simulation, the associated estimators are shown to work well with respect to bias and precision.

17.
One of the main aims of a recapture experiment is to estimate the unknown size, N, of a closed population. Under the so-called behavioural model, individual capture probabilities change after the first capture. Unfortunately, the maximum likelihood estimator given by Zippin (1956) may give an infinite result and often has poor precision. Chaiyapong & Lloyd (1997) have given formulae for the asymptotic bias and variance as well as for the probability that the estimate is infinite.
The purpose of this article is to tabulate the inversions of the above-cited formulae so that practitioners can plan the required capture effort. This paper develops simple approximations for the minimum capture effort required to achieve (i) no more than a certain probability of breakdown, (ii) a given relative standard error.

18.
Pradel's (1996) temporal symmetry model, permitting direct estimation and modelling of the population growth rate λi, provides a potentially useful tool for the study of population dynamics using marked animals. Because of its recent publication date, the approach has not seen much use, and there have been virtually no investigations directed at the robustness of the resulting estimators. Here we consider several potential sources of bias, all motivated by specific uses of this estimation approach. We consider sampling situations in which the study area expands with time and present an analytic expression for the bias in λi. We next consider trap response in capture probabilities and heterogeneous capture probabilities and compute large-sample and simulation-based approximations of the resulting bias in λi. These approximations indicate that trap response is an especially important assumption violation that can produce substantial bias. Finally, we consider losses on capture and emphasize the importance of selecting the estimator for λi that is appropriate to the question being addressed. For studies based on only sighting and resighting data, Pradel's (1996) λi′ is the appropriate estimator.

19.
This note considers variance estimation for population size estimators based on capture–recapture experiments. Whereas a diversity of estimators of the population size has been suggested, the question of estimating the associated variances is less frequently addressed. This note points out that the technique of conditioning can be applied here successfully, which also allows us to identify sources of variation: the variance due to estimation of the model parameters and the binomial variance due to sampling n units from a population of size N. It is applied to estimators typically used in capture–recapture experiments in continuous time, including the estimators of Zelterman and Chao, and improves upon previously used variance estimators. In addition, knowledge of the variances associated with the estimators of Zelterman and Chao allows the suggestion of a new estimator as the weighted sum of the two. The decomposition of the variance into the two sources also allows a new understanding of how resampling techniques like the bootstrap could be used appropriately. Finally, the sample size question for capture–recapture experiments is addressed. Since the variance of population size estimators increases with the sample size, it is suggested to use relative measures such as the observed-to-hidden ratio or the completeness of identification proportion for approaching the question of sample size choice.
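The Chao estimator named in this abstract has a standard closed form, and the binomial source of variation the note identifies can be illustrated directly. The frequency counts below are hypothetical, and the binomial term shown is an illustrative rendering of the sampling component of the decomposition, not the note's exact variance formula.

```python
def chao_estimate(f1, f2, n):
    """Chao's lower-bound estimator of population size: the n distinct
    individuals observed, plus a correction from the singleton (f1)
    and doubleton (f2) frequencies."""
    return n + f1 * f1 / (2.0 * f2)

def binomial_variance_component(n_hat, n):
    """Illustrative binomial sampling-variance term from the conditioning
    decomposition: variance of observing n units out of an estimated
    n_hat, with detection proportion p = n / n_hat."""
    p = n / n_hat
    return n_hat * p * (1.0 - p)

# hypothetical counts: 100 distinct individuals, 40 seen once, 20 seen twice
n_hat = chao_estimate(f1=40, f2=20, n=100)
print(n_hat)                                        # → 140.0
print(round(binomial_variance_component(n_hat, 100), 2))  # → 28.57
```

The other source of variation, uncertainty in the estimated model parameters, would be added to this binomial term to obtain the total variance.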

20.
SUMMARY We compare properties of parameter estimators under Akaike information criterion (AIC) and 'consistent' AIC (CAIC) model selection in a nested sequence of open population capture-recapture models. These models consist of product multinomials, where the cell probabilities are parameterized in terms of survival (φi) and capture (pi) probabilities for each time interval i. The sequence of models is derived from 'treatment' effects that might be (1) absent, model H0; (2) only acute, model H2p; or (3) acute and chronic, lasting several time intervals, model H3. Using a 3^5 factorial design, 1000 repetitions were simulated for each of 243 cases. The true number of parameters ranged from 7 to 42, and the sample size ranged from approximately 470 to 55 000 per case. We focus on the quality of the inference about the model parameters and model structure that results from the two selection criteria. We use achieved confidence interval coverage as an integrating metric to judge what constitutes a 'properly parsimonious' model, and contrast the performance of these two model selection criteria for a wide range of models, sample sizes, parameter values and study interval lengths. AIC selection resulted in models in which the parameters were estimated with relatively little bias. However, these models exhibited asymptotic sampling variances that were somewhat too small, and achieved confidence interval coverage that was somewhat below the nominal level. In contrast, CAIC-selected models were too simple, the parameter estimators were often substantially biased, the asymptotic sampling variances were substantially too small and the achieved coverage was often substantially below the nominal level. An example case illustrates a pattern: with 20 capture occasions, 300 previously unmarked animals are released at each occasion, and the survival and capture probabilities in the control group on each occasion were 0.9 and 0.8 respectively, using model H3.
There was a strong acute treatment effect on the first survival (φ1) and first capture probability (p2), and smaller, chronic effects on the second and third survival probabilities (φ2 and φ3) as well as on the second capture probability (p3); the sample size for each repetition was approximately 55 000. CAIC selection led to a model with exactly these effects in only nine of the 1000 repetitions, compared with 467 times under AIC selection. Under CAIC selection, even the two acute effects were detected only 555 times, compared with 998 for AIC selection. AIC selection exhibited a balance between underfitted and overfitted models (270 versus 263), while CAIC tended strongly to select underfitted models. CAIC-selected models were overly parsimonious and poor as a basis for statistical inferences about important model parameters or structure. We recommend the use of the AIC and not the CAIC for analysis and inference from capture-recapture data sets.
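The two criteria compared in this abstract differ only in their penalty terms, which is enough to reproduce the qualitative pattern reported. The log-likelihoods and parameter counts below are hypothetical values for a nested model sequence, not results from the paper; the criteria formulas (AIC = -2lnL + 2k, and Bozdogan's CAIC = -2lnL + k(ln n + 1)) are standard.

```python
import math

def aic(loglik, k):
    """Akaike information criterion: constant penalty of 2 per parameter."""
    return -2.0 * loglik + 2.0 * k

def caic(loglik, k, n):
    """'Consistent' AIC: penalty grows with the log of the sample size."""
    return -2.0 * loglik + k * (math.log(n) + 1.0)

# hypothetical nested models: name -> (log-likelihood, number of parameters)
models = {"H0": (-5200.0, 7), "H2p": (-5150.0, 12), "H3": (-5140.0, 20)}
n = 55_000  # sample size, as in the abstract's example case

best_aic = min(models, key=lambda m: aic(*models[m]))
best_caic = min(models, key=lambda m: caic(models[m][0], models[m][1], n))
print(best_aic, best_caic)  # → H3 H2p
```

With n = 55 000 the CAIC penalty is about 11.9 per parameter versus 2 for AIC, so CAIC drops to the simpler H2p, mirroring the underfitting tendency the simulations document.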


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号