Similar Documents
20 similar documents found (search time: 0 ms)
1.
There are a number of statistical techniques for analysing epidemic outbreaks. However, many diseases are endemic within populations, and the analysis of such diseases is complicated by changing population demography. Motivated by the spread of cowpox among rodent populations, a combined mathematical model for population and disease dynamics is introduced. An MCMC algorithm is then constructed to make statistical inference for the model based on data obtained from a capture–recapture experiment. The statistical analysis is used to identify the key elements in the spread of the cowpox virus.

2.
In principle it is possible to use recently derived procedures to determine whether or not all the parameters of particular complex ecological models can be estimated using classical methods of statistical inference. If it is not possible to estimate all the parameters, the model is parameter redundant. Furthermore, one can investigate whether derived results hold for such models for all lengths of study, and also how the results might change for specific data sets. In this paper we show how to apply these approaches to entire families of capture–recapture and capture–recapture–recovery models. This results in comprehensive tables providing the definitive parameter redundancy status for such models. Parameter redundancy can also be caused by the data rather than the model, and how to investigate this is demonstrated through two applications: one to recapture data on dippers, and one to recapture–recovery data on great cormorants.

3.
4.
Distance sampling and capture–recapture are the two most widely used wildlife abundance estimation methods. Capture–recapture methods have only recently incorporated models for spatial distribution, and there is an increasing tendency for distance sampling methods to incorporate spatial models rather than to rely on partly design-based spatial inference. In this overview we show how spatial models are central to modern distance sampling and that spatial capture–recapture models arise as an extension of distance sampling methods. Depending on the type of data recorded, they can be viewed as particular kinds of hierarchical binary regression, Poisson regression, survival or time-to-event models, with individuals’ locations as latent variables and a spatial model as the latent variable distribution. Incorporation of spatial models in these two methods provides new opportunities for drawing explicitly spatial inferences. Areas of likely future development include more sophisticated spatial and spatio-temporal modelling of individuals’ locations and movements, new methods for integrating spatial capture–recapture and other kinds of ecological survey data, and methods for dealing with the recapture uncertainty that often arises when “capture” consists of detection by a remote device like a camera trap or microphone.

5.
The Conway–Maxwell–Poisson estimator is considered in this paper as the population size estimator. The benefit of using the Conway–Maxwell–Poisson distribution is that it includes the Bernoulli, the Geometric and the Poisson distributions as special cases and, furthermore, allows for heterogeneity. Little emphasis is typically placed on the variability associated with the population size estimate. This paper provides a deep and extensive comparison of bootstrap methods in the capture–recapture setting. It deals with the classical bootstrap approach using the true population size (the true bootstrap) and the classical bootstrap using the observed sample size (the reduced bootstrap). Furthermore, the imputed bootstrap, as well as approximating forms in terms of standard errors and confidence intervals for the population size under the Conway–Maxwell–Poisson distribution, are investigated and discussed. These methods are illustrated in a simulation study and in benchmark real data examples.
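A minimal sketch of the reduced-bootstrap idea described above (resampling only the observed units): for brevity it is applied to Chao's lower-bound estimator rather than to the Conway–Maxwell–Poisson estimator used in the paper, and the frequency data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def chao(counts):
    # Chao's lower-bound estimator from individual capture counts:
    # observed units plus f1**2 / (2 * f2), where f1 and f2 are the
    # numbers of units captured exactly once and exactly twice.
    f1 = int(np.sum(counts == 1))
    f2 = int(np.sum(counts == 2))
    return len(counts) + f1 ** 2 / (2 * f2)

def reduced_bootstrap_se(counts, B=1000):
    # "Reduced" bootstrap: resample the n *observed* units with
    # replacement; the "true" bootstrap would instead simulate N units
    # from the fitted model, which requires the unknown true N.
    ests = [chao(rng.choice(counts, size=len(counts), replace=True))
            for _ in range(B)]
    return float(np.std(ests, ddof=1))
```

The same resampling loop works for any plug-in population-size estimator; only the `chao` function would change.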

6.
Nuisance parameter elimination is a central problem in capture–recapture modelling. In this paper, we consider a closed population capture–recapture model which assumes that the capture probabilities vary only with the sampling occasions. In this model, the capture probabilities are regarded as nuisance parameters and the unknown number of individuals is the parameter of interest. In order to eliminate the nuisance parameters, the likelihood function is integrated with respect to a weight function (uniform or Jeffreys) on the nuisance parameters, resulting in an integrated likelihood function depending only on the population size. For these integrated likelihood functions, analytical expressions for the maximum likelihood estimates are obtained and it is proved that they are always finite and unique. Variance estimates of the proposed estimators are obtained via a parametric bootstrap resampling procedure. The proposed methods are illustrated on a real data set and their frequentist properties are assessed by means of a simulation study.
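Under a uniform weight on each occasion's capture probability, the integration can be carried out in closed form via Beta functions. A numerical sketch, assuming the classical model in which the capture probability varies only by occasion (my reading of the abstract; the function names and grid search are illustrative):

```python
import math

def integrated_log_lik(N, occasion_counts, D):
    # Integrating each occasion's capture probability p_j over a
    # uniform weight on [0, 1] gives
    #   L(N) proportional to N!/(N-D)! * prod_j B(n_j+1, N-n_j+1),
    # where D is the number of distinct individuals ever captured
    # and n_j the number captured on occasion j.
    ll = math.lgamma(N + 1) - math.lgamma(N - D + 1)
    for n_j in occasion_counts:
        ll += (math.lgamma(n_j + 1) + math.lgamma(N - n_j + 1)
               - math.lgamma(N + 2))
    return ll

def integrated_mle(occasion_counts, D, n_max=10000):
    # The abstract states the maximiser is finite and unique, so a
    # simple grid search over N >= D suffices for a sketch.
    return max(range(D, n_max),
               key=lambda N: integrated_log_lik(N, occasion_counts, D))
```

For example, two occasions with 30 captures each and 45 distinct individuals give an integrated-likelihood estimate close to the classical two-sample estimate of 60.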

7.
In this paper we introduce a three-parameter lifetime distribution following the Marshall and Olkin [New method for adding a parameter to a family of distributions with application to the exponential and Weibull families. Biometrika. 1997;84(3):641–652] approach. The proposed distribution is a compound of the Lomax and Logarithmic distributions (LLD). We provide a comprehensive study of the mathematical properties of the LLD. In particular, the density function, the shape of the hazard rate function, a general expansion for moments, the density of the rth order statistic, and the mean and median deviations of the LLD are derived and studied in detail. The maximum likelihood estimators of the three unknown parameters of the LLD are obtained. The asymptotic confidence intervals for the parameters are also obtained based on the asymptotic variance–covariance matrix. Finally, a real data set is analysed to show the potential of the new proposed distribution.

8.
9.
We provide a closed-form likelihood expression for multi-state capture–recapture–recovery data when the state of an individual may be only partially observed. The corresponding sufficient statistics are presented, in addition to a matrix formulation which facilitates an efficient calculation of the likelihood. This provides a consistent and unified framework, with many standard models applied to capture–recapture–recovery data arising as special cases.

10.
In a capture–recapture experiment, the number of measurements for individual covariates usually equals the number of captures. This creates a heteroscedastic measurement error problem, and the usual surrogate condition does not hold in the context of a measurement error model. This study adopts a small measurement error assumption to approximate the conventional estimating functions and the population size estimator. This study also investigates the biases of the resulting estimators. In addition, modifications for two common approximation methods, regression calibration and simulation extrapolation, to accommodate heteroscedastic measurement error are discussed. These estimation methods are examined through simulations and illustrated by analysing a capture–recapture data set.
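For reference, the classical (homoscedastic) simulation-extrapolation idea mentioned above can be sketched for a simple attenuated regression slope; the heteroscedastic modifications discussed in the paper are not attempted here, and all names and simulation settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def simex_slope(x_obs, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), B=100):
    # SIMEX sketch: re-add measurement noise with variance
    # lam * sigma_u**2, record how the naive slope decays with lam,
    # then extrapolate the trend back to lam = -1 (no error).
    lam_grid = [0.0] + list(lambdas)
    slopes = []
    for lam in lam_grid:
        if lam == 0.0:
            slopes.append(np.polyfit(x_obs, y, 1)[0])
            continue
        extra_sd = np.sqrt(lam) * sigma_u
        bs = [np.polyfit(x_obs + rng.normal(0, extra_sd, len(x_obs)), y, 1)[0]
              for _ in range(B)]
        slopes.append(np.mean(bs))
    # Quadratic extrapolant evaluated at lam = -1.
    return float(np.polyval(np.polyfit(lam_grid, slopes, 2), -1.0))
```

With a true slope of 1 and measurement-error standard deviation 0.5 on a unit-variance covariate, the naive slope attenuates toward 0.8, and the extrapolated SIMEX estimate moves most of the way back to 1.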

11.
12.
This paper investigates the applications of capture–recapture methods to human populations. Capture–recapture methods are commonly used in estimating the size of wildlife populations but can also be used in epidemiology and the social sciences, for estimating the prevalence of a particular disease or the size of the homeless population in a certain area. Here we focus on estimating the prevalence of infectious diseases. Several estimators of population size are considered: the Lincoln–Petersen estimator and its modified version, the Chapman estimator, Chao’s lower bound estimator, Zelterman’s estimator, McKendrick’s moment estimator and the maximum likelihood estimator. In order to evaluate these estimators, they are applied to real, three-source, capture–recapture data. By conditioning on each of the sources of the three-source data, we have been able to compare the estimators with the true value that they are estimating. The Chapman and Chao estimators were compared in terms of their relative bias. A variance formula derived through conditioning is suggested for Chao’s estimator, and normal 95% confidence intervals are calculated for this and the Chapman estimator. We then compare the coverage of the respective confidence intervals. Furthermore, a simulation study is included to compare Chao’s and Chapman’s estimators. Results indicate that Chao’s estimator is less biased than Chapman’s estimator unless both sources are independent. Chao’s estimator also has the smaller mean squared error. Finally, the implications and limitations of the above methods are discussed, with suggestions for further development. We are grateful to the Medical Research Council for supporting this work.
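The simplest of the estimators named above have short closed forms; a two-source sketch (the numbers in the usage example are illustrative, not from the paper's data):

```python
def lincoln_petersen(n1, n2, m):
    # n1, n2: units seen by source 1 / source 2; m: seen by both.
    # Classical two-source estimate N_hat = n1 * n2 / m (needs m > 0).
    return n1 * n2 / m

def chapman(n1, n2, m):
    # Chapman's bias-corrected modification; defined even when m = 0.
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

def chao_lower_bound(n_observed, f1, f2):
    # Chao's lower bound from singleton (f1) and doubleton (f2) counts.
    return n_observed + f1 ** 2 / (2 * f2)
```

For example, with 100 units in source 1, 80 in source 2, and 20 in both, Lincoln–Petersen gives 400 while Chapman gives a slightly smaller, bias-corrected value.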

13.
In recent years there has been significant development of several procedures for inferring which extremal model most conveniently describes the distribution function of the underlying population from a data set. The problem of choosing one of the three extremal types, giving preference to the Gumbel model for the null hypothesis, has frequently received the general designation of statistical choice of extremal models and has been handled under different set-ups by numerous authors. Recently, a test procedure proposed by Hasofer and Wang (1992) gave rise to a comparison with some other connected perspectives. That topic, jointly with some suggestions for applicability to real data, is the theme of the present paper.

14.
The purpose of the study is to estimate the population size under a truncated count model that accounts for heterogeneity. The proposed estimator is based on the Conway–Maxwell–Poisson distribution. The benefit of using the Conway–Maxwell–Poisson distribution is that it includes the Bernoulli, the Geometric and the Poisson distributions as special cases and, furthermore, allows for heterogeneity. Parameter estimates can be obtained by exploiting the ratios of successive frequency counts in a weighted linear regression framework. The results of the comparisons with Turing’s, the maximum likelihood Poisson, Zelterman’s and Chao’s estimators reveal that our proposal can be beneficially used. Furthermore, our proposal outperforms its competitors under all heterogeneous settings. The empirical examples consider the homogeneous case and several heterogeneous cases, each with its own features, and provide interesting insights on the behavior of the estimators.
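The ratio idea above follows from the Conway–Maxwell–Poisson probabilities satisfying p_{x+1}/p_x = λ/(x+1)^ν, so log frequency ratios are approximately linear in log(x+1). A sketch using ordinary (unweighted) least squares for brevity, where the paper uses a weighted regression; names and the example frequencies are illustrative:

```python
import numpy as np

def cmp_ratio_population(freqs):
    # freqs: dict mapping capture count x (x = 1, 2, ..., consecutive)
    # to the number of units observed exactly x times.
    # Under Conway-Maxwell-Poisson, p_{x+1}/p_x = lam / (x+1)**nu, so
    #   log(f_{x+1}/f_x) ~ log(lam) - nu * log(x+1).
    xs = sorted(freqs)[:-1]
    logx = np.log([x + 1 for x in xs])
    y = np.log([freqs[x + 1] / freqs[x] for x in xs])
    slope, intercept = np.polyfit(logx, y, 1)
    lam, nu = float(np.exp(intercept)), float(-slope)
    f0_hat = freqs[1] / lam        # since p_1 / p_0 = lam / 1**nu = lam
    return sum(freqs.values()) + f0_hat, lam, nu
```

On frequencies generated from exact Poisson ratios (λ = 2, ν = 1), the fit recovers the parameters exactly and adds f1/λ unseen units.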

15.
This paper deals with estimation of a green tree frog population in an urban setting using a repeated capture–mark–recapture (CMR) method over several weeks with an individual tagging system, which gives rise to a complicated generalization of the hypergeometric distribution. Based on maximum likelihood estimation, a parametric bootstrap approach is adopted to obtain interval estimates of the weekly population size, which is the main objective of our work. The method is computation-based and programming-intensive, as an algorithm must be implemented for the re-sampling. This method can be applied to estimate the population size of any species based on a repeated CMR method at multiple time points. Further, it is pointed out that the well-known Jolly–Seber method, which is based on some strong assumptions, either produces unrealistic estimates or encounters situations where its assumptions are not valid for our observed data set.
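A generic parametric-bootstrap interval can be sketched in a few lines; a simple binomial-detection model stands in here for the paper's generalized hypergeometric likelihood, and all names and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def parametric_bootstrap_ci(N_hat, p_hat, B=2000, alpha=0.05):
    # Simulate B detected counts from the fitted model, re-estimate
    # the population size on each replicate (Horvitz-Thompson style:
    # detected / detection probability), and take percentile limits.
    boots = rng.binomial(int(round(N_hat)), p_hat, size=B) / p_hat
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)
```

For a fitted size of 1000 with detection probability 0.5, this returns a 95% interval of roughly 1000 ± 60.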

16.
In this paper, a new censoring scheme, named the adaptive progressive interval censoring scheme, is introduced. The competing risks data come from a Marshall–Olkin extended Chen distribution under the new censoring scheme with random removals. We obtain the maximum likelihood estimators of the unknown parameters and the reliability function by using the EM algorithm based on the failure data. In addition, the bootstrap percentile confidence intervals and bootstrap-t confidence intervals of the unknown parameters are obtained. To test the equality of the competing risks, likelihood ratio tests are performed. Then, a Monte Carlo simulation is conducted to evaluate the performance of the estimators under different sample sizes and removal schemes. Finally, a real data set is analyzed for illustration purposes.

17.
A pioneer first enters the market as the monopolist and later faces competition when a similar product is brought to the market by a competitor. In 2012, Wang and Xie suggested decomposing the pioneer's survival into “monopoly” and “competition” durations and estimating the two survivals of the pioneer individually, along with the competitor's survival, via regression analysis. In their article, several regression analyses were performed to study the effect of order of entry on the pioneer and the later entrant under different market statuses. Using the same datasets from their study, our main interest is to investigate the interdependent relationship between two competitive firms and to study whether the market pioneer and the later entrant can benefit from the competition. The major contribution of this article is that the interdependence between two competitive firms is explicitly expressed and three survival durations can be estimated in one model. The proposed method relates the survival times of two competitive firms to the pioneer's monopoly time and some observable covariates via a proportional hazards model, and incorporates frailty variables to capture the interdependence in the competition. This article demonstrates a new method that formulates the interdependence between competitive firms and shows data analyses in the newspaper and high-technology industries.

18.
Assume that a sample is available from a population having an exponential distribution, and that ℓ future samples are to be taken from the same population. This paper provides a formula for computing a one-sided lower simultaneous prediction limit which is to be below the (ki − mi + 1)-st order statistic of a future sample of size ki, for i = 1, …, ℓ, based on the sample mean of a past sample. Tables of factors for one-sided lower simultaneous prediction limits are provided. Such limits are of practical importance in determining acceptance criteria and predicting system survival times.

19.
The pretest–posttest design is widely used to investigate the effect of an experimental treatment in biomedical research. The treatment effect may be assessed using analysis of variance (ANOVA) or analysis of covariance (ANCOVA). The normality assumption for parametric ANOVA and ANCOVA may be violated due to outliers and skewness of the data. Nonparametric methods, robust statistics, and data transformation may be used to address the nonnormality issue. However, there has been no simultaneous comparison of these four statistical approaches in terms of empirical type I error probability and statistical power. We studied 13 ANOVA and ANCOVA models based on the parametric approach, rank and normal score-based nonparametric approaches, Huber M-estimation, and the Box–Cox transformation, using normal data with and without outliers and lognormal data. We found that ANCOVA models preserve the nominal significance level better and are more powerful than their ANOVA counterparts when the dependent variable and covariate are correlated. Huber M-estimation is the most liberal method. Nonparametric ANCOVA, especially ANCOVA based on the normal score transformation, preserves the nominal significance level, has good statistical power, and is robust to the data distribution.
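A minimal parametric-ANCOVA sketch with simulated data: the covariate-adjusted treatment effect is the group coefficient in a regression of posttest on group and pretest. The rank, normal-score and robust variants compared in the paper would transform the scores or replace the fitting step; all names and numbers here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def ancova_effect(pre, post, group):
    # Regress the posttest score on an intercept, the group indicator,
    # and the pretest covariate; the group coefficient is the
    # covariate-adjusted treatment effect.
    X = np.column_stack([np.ones_like(pre), group, pre])
    beta, *_ = np.linalg.lstsq(X, post, rcond=None)
    return float(beta[1])
```

With a true treatment effect of 2 and a correlated pretest covariate, the adjusted estimate recovers the effect far more precisely than a posttest-only comparison would.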

20.
Searching for regions of the input space where a statistical model is inappropriate is useful in many applications. This study proposes an algorithm for finding local departures from a regression-type prediction model. The algorithm returns low-dimensional hypercubes where the average prediction error clearly departs from zero. The study describes the developed algorithm and shows successful applications on simulated and real data from steel plate production. Algorithms originally developed for searching for high-response regions of the input space are reviewed and considered as alternative methods for locating model departures. The proposed algorithm succeeds in locating the model departure regions better than the compared alternatives. The algorithm can be utilized in sequential follow-up of a model as time goes on and new data are observed.
