Similar Articles
20 similar articles found (search took 15 ms)
1.
We discuss the analysis of mark-recapture data when the aim is to quantify density dependence between survival rate and abundance. We describe an analysis for a random effects model that includes a linear relationship between abundance and survival, using an errors-in-variables regression estimator with an analytical adjustment for approximate bias. The analysis is illustrated using data from short-tailed shearwaters banded for 48 consecutive years at Fisher Island, Tasmania, and Hutton's shearwaters banded for nine consecutive years at Kaikoura, New Zealand. The Fisher Island data provided no evidence of a density-dependent relationship between abundance and survival, and the confidence interval widths rule out anything but small density-dependent effects. The Hutton's shearwater data were equivocal, with the analysis unable to rule out anything but a very strong density-dependent relationship between survival and abundance.

2.
3.
A method for nonparametric estimation of density based on a randomly censored sample is presented. The density is expressed as a linear combination of cubic M-splines, and the coefficients are determined by pseudo-maximum-likelihood estimation (the likelihood is maximized conditionally on data-dependent knots). By using regression splines (a small number of knots) it is possible to reduce the estimation problem to a space of low dimension while preserving flexibility, thus striking a compromise between parametric approaches and ordinary nonparametric approaches based on spline smoothing. The number of knots is determined by the minimum AIC. Examples of simulated and real data are presented. Asymptotic theory and the bootstrap indicate that the precision and the accuracy of the estimates are satisfactory.
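The paper's cubic M-spline pseudo-maximum-likelihood estimator is not reproduced here, but its key model-selection idea — keep the estimation space low-dimensional and choose its dimension by minimum AIC — can be sketched with a much simpler histogram density estimator, where the bin count plays the role of the knot count (an illustrative stand-in, not the paper's method):

```python
import numpy as np

def histogram_loglik(sample, bins):
    """Log-likelihood of a histogram density estimate (bin heights are the MLEs)."""
    counts, edges = np.histogram(sample, bins=bins)
    heights = counts / (len(sample) * np.diff(edges))
    nz = counts > 0  # empty bins contribute nothing for the observed points
    return float(np.sum(counts[nz] * np.log(heights[nz])))

def choose_bins_aic(sample, candidates=range(2, 31)):
    """Pick the number of bins minimizing AIC = -2*loglik + 2*(free parameters)."""
    # a k-bin histogram density has k - 1 free heights (they must integrate to 1)
    aic = {k: -2 * histogram_loglik(sample, k) + 2 * (k - 1) for k in candidates}
    return min(aic, key=aic.get)
```

In the actual method the heights would be M-spline coefficients fitted by censored-data pseudo-maximum likelihood, but the AIC selection step works the same way.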

4.
We describe recent developments in the POPAN system for the analysis of mark-recapture data from Jolly-Seber type experiments. The previous versions (POPAN-3 for SUN/OS workstations and POPAN-PC for IBM-PC running DOS or Windows) included general statistics gathering and testing procedures, a wide range of analysis options for estimating population abundance, survival and birth parameters, and a general simulation capability. POPAN-4 adds a very general procedure for fitting constrained models based on a new unified theory for Jolly-Seber models. Users can impose constraints on capture, survival and birth rates over time and/or across attribute groups (e.g. sex or age groups) and can model these rates using covariate models involving auxiliary variables (e.g. sampling effort).

6.
Nonparametric estimation of the probability density function f° of a lifetime distribution based on arbitrarily right-censored observations from f° has been studied extensively in recent years. In this paper the density estimators from censored data that have been obtained to date are outlined. Histogram, kernel-type, maximum likelihood, series-type, and Bayesian nonparametric estimators are included. Since estimation of the hazard rate function can be considered as giving a density estimate, all known results concerning nonparametric hazard rate estimation from censored samples are also briefly mentioned.

7.
In this paper, the kernel density estimator for negatively superadditive dependent random variables is studied. Exponential inequalities and the exponential convergence rate for the kernel density estimator, in a uniform version over compact sets, are investigated. The optimal bandwidth rate of the estimator is also obtained using the mean integrated squared error. The results generalize and improve those obtained for the case of associated sequences. As an application, FGM sequences that fulfil our assumptions are investigated. The convergence rate of the kernel density estimator is also illustrated via a simulation study, and a real data analysis is presented.
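As a concrete reference point, here is a minimal Gaussian-kernel density estimator with a rule-of-thumb bandwidth of the MISE-optimal order n^(-1/5). This is a sketch for i.i.d. data; the paper's results concern the same estimator under negatively superadditive dependence:

```python
import numpy as np

def kde(x_grid, sample, h):
    """Gaussian-kernel estimate: f_hat(x) = (1/(n h)) * sum_i K((x - X_i)/h)."""
    u = (x_grid[:, None] - sample[None, :]) / h
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return k.mean(axis=1) / h

def rule_of_thumb_bandwidth(sample):
    """Silverman's rule: the MISE-optimal n^(-1/5) rate, normal-reference constant."""
    return 1.06 * sample.std(ddof=1) * len(sample) ** (-0.2)

rng = np.random.default_rng(0)
sample = rng.normal(size=500)            # i.i.d. stand-in for the dependent case
h = rule_of_thumb_bandwidth(sample)
grid = np.linspace(-3.0, 3.0, 121)
f_hat = kde(grid, sample, h)
```

The bandwidth rule here is the classical MISE-based one; the dependence structure studied in the paper changes the constants but not the basic form of the estimator.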

8.
The analysis of time-indexed categorical data is important in many fields, e.g., telecommunication network monitoring, manufacturing process control, and ecology. Primary interest is in detecting and measuring serial associations and dependencies in such data. For cardinal time series analysis, autocorrelation is a convenient and informative measure of serial association; for categorical time series analysis, an analogously convenient measure and corresponding concepts of weak stationarity have not been provided. For two categorical variables, several ways of measuring association have been suggested. This paper reviews such measures and investigates their properties in a serial context. We discuss concepts of weak stationarity of a categorical time series, in particular of stationarity in association measures. Serial association and weak stationarity are studied in the class of discrete ARMA processes introduced by Jacobs and Lewis (J. Time Ser. Anal. 4(1):19–36, 1983). "An intrinsic feature of a time series is that, typically, adjacent observations are dependent. The nature of this dependence among observations of a time series is of considerable practical interest. Time series analysis is concerned with techniques for the analysis of this dependence." (Box et al. 1994, p. 1)
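A discrete ARMA process of the Jacobs-Lewis type is easy to simulate, and a Cohen's-kappa-style statistic gives one serial-association measure of the kind the paper reviews. A sketch (in the DAR(1) special case below, the lag-1 kappa equals the dependence parameter rho):

```python
import numpy as np

def dar1(rng, n, rho, pi):
    """Jacobs-Lewis DAR(1): keep the previous state w.p. rho, else draw fresh from pi."""
    cats = np.arange(len(pi))
    x = np.empty(n, dtype=np.int64)
    x[0] = rng.choice(cats, p=pi)
    for t in range(1, n):
        x[t] = x[t - 1] if rng.random() < rho else rng.choice(cats, p=pi)
    return x

def lag1_kappa(x, n_cats):
    """Cohen's-kappa-style serial association at lag 1: agreement beyond chance."""
    agree = float(np.mean(x[1:] == x[:-1]))
    p = np.bincount(x, minlength=n_cats) / len(x)
    chance = float(np.sum(p**2))
    return (agree - chance) / (1 - chance)
```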

9.
We propose a new parametric survival model for cancer prevention studies. The formulation of the model is in the spirit of stochastic modelling of the occurrences of tumours through two stages: initiation of an undetected tumour and promotion of the tumour to a detectable cancer. Several novel properties of the proposed model are derived. In addition, we examine the relationship of our model with the existing lagged regression model of Zucker and Lakatos. We also bridge the difference between two distinct stochastic modelling methods for cancer data, one used primarily for cancer therapeutic trials and the other for cancer prevention trials.

10.
When observational data are used to compare treatment-specific survivals, regular two-sample tests, such as the log-rank test, need to be adjusted for the imbalance between treatments with respect to baseline covariate distributions. In addition, the standard assumption that survival time and censoring time are conditionally independent given the treatment, required for the regular two-sample tests, may not be realistic in observational studies. Moreover, treatment-specific hazards are often non-proportional, resulting in low power for the log-rank test. In this paper, we propose a set of adjusted weighted log-rank tests, and their supremum versions, based on inverse probability of treatment and censoring weighting, to compare treatment-specific survivals using data from observational studies. These tests are proven to be asymptotically correct. Simulation studies show that, with realistic sample sizes and censoring rates, the proposed tests have the desired Type I error probabilities and are more powerful than the adjusted log-rank test when the treatment-specific hazards differ in non-proportional ways. A real data example illustrates the practical utility of the new methods.
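The flavour of an inverse-probability-weighted log-rank test can be sketched as follows. The weights are taken as given (in practice they would come from models for treatment assignment and censoring), and the variance used here is the naive weighted log-rank variance, not the asymptotically correct one derived in the paper:

```python
import numpy as np

def weighted_logrank(time, event, group, weights):
    """
    Two-sample weighted log-rank z-statistic. `group` is a 0/1 treatment
    indicator; `weights` are subject-level weights, e.g. inverse probability
    of treatment weights. NOTE: the variance below is the naive one.
    """
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=bool)
    group = np.asarray(group)
    weights = np.asarray(weights, dtype=float)
    U, V = 0.0, 0.0
    for t in np.unique(time[event]):
        at_risk = time >= t
        w_all = weights[at_risk].sum()
        w_trt = weights[at_risk & (group == 1)].sum()
        dying = (time == t) & event
        d_all = weights[dying].sum()
        d_trt = weights[dying & (group == 1)].sum()
        p = w_trt / w_all                 # weighted share of risk set in group 1
        U += d_trt - d_all * p            # observed minus expected (weighted)
        V += d_all * p * (1.0 - p)
    return U / np.sqrt(V)
```

With all weights equal to 1 and no ties this reduces to the standard log-rank statistic.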

11.
In statistical practice, it is quite common that some data are unknown or disregarded for various reasons. In the present paper, on the basis of a multiply censored sample from a Pareto population, the problem of finding the highest posterior density (HPD) estimates of the inequality and precision parameters is discussed, assuming a natural joint conjugate prior. HPD estimates are obtained in closed form for complete or right-censored data. In the general multiple-censoring case, the existence and uniqueness of the estimates are established, and explicit lower and upper bounds are provided. Owing to the unimodality of the posterior, HPD credibility regions are simply connected sets. Two numerical examples are included for illustration.

12.
We consider the problem of estimating a density function based on aggregated data in which the data group sizes may differ from each other. The reconstruction of the target density can be regarded as a nonlinear statistical inverse problem. We introduce estimation procedures that are able to use the observations from all groups via some nonstandard deconvolution techniques. General consistency and rate optimality under common smoothness constraints are developed. We give some numerical simulations and a data-driven bandwidth selector.

13.
In recent years, regression models have been shown to be useful for predicting the long-term survival probabilities of patients in clinical trials. The importance of a regression model is that once the regression parameters are estimated, information about the regressed quantity is immediate. A simple estimator is proposed for the regression parameters in a model for the long-term survival rate. The proposed estimator arises from an estimating function that has the missing information principle underlying its construction. When the covariate takes values in a finite set, the proposed estimating function is equivalent to an ad hoc estimating function proposed in the literature. In general, however, the two estimating functions lead to different estimators of the regression parameter. For discrete covariates, the asymptotic covariance matrix of the proposed estimator is simple to calculate using standard techniques involving the predictable covariation process of martingale transforms. An ad hoc extension to the case of a one-dimensional continuous covariate is proposed. Simplicity and generalizability are two attractive features of the proposed approach; the latter feature is not enjoyed by the other estimator.

14.
Consideration of coverage yields a new class of estimators of population size for the standard mark-recapture model which permits heterogeneity of capture probabilities. Real data and simulation studies are used to assess these coverage-adjusted estimators. The simulations highlight the need for estimators that perform well for a wide range of values of the mean and coefficient of variation of the capture probabilities. When judged for this type of robustness, the simulations provide good grounds for preferring the new estimators to earlier ones for this model, except when the number of sampling occasions is large. A bootstrapping approach is used to estimate the standard errors of the new estimators, and to obtain confidence intervals for the population size.
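A sample-coverage-adjusted estimator in this spirit (a Chao-Lee-style sketch, not the paper's exact estimator), together with a simple bootstrap standard error:

```python
import numpy as np

def coverage_estimator(freqs, t):
    """
    Sample-coverage-adjusted population size (Chao-Lee-style sketch).
    freqs[i] = number of the t occasions on which captured animal i was seen.
    """
    freqs = np.asarray(freqs)
    D = len(freqs)                      # distinct animals captured
    n = freqs.sum()                     # total captures
    f1 = int((freqs == 1).sum())        # animals captured exactly once
    C = 1.0 - f1 / n                    # estimated sample coverage
    N0 = D / C
    # squared CV of capture probabilities: adjustment for heterogeneity
    k = np.arange(1, t + 1)
    fk = np.array([(freqs == j).sum() for j in k])
    gamma2 = max(N0 * (k * (k - 1) * fk).sum() / (n * (n - 1)) - 1.0, 0.0)
    return N0 + f1 * gamma2 / C

def bootstrap_se(freqs, t, B=200, seed=0):
    """Bootstrap SE: resample the captured animals' frequencies with replacement."""
    rng = np.random.default_rng(seed)
    reps = [coverage_estimator(rng.choice(freqs, size=len(freqs)), t)
            for _ in range(B)]
    return float(np.std(reps, ddof=1))
```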

15.
In kernel density estimation, a criticism of bandwidth selection techniques that minimize squared-error expressions is that they perform poorly when estimating the tails of probability density functions. Techniques minimizing absolute-error expressions are thought to give more uniform performance and to be potentially superior. An asymptotic mean absolute error expression for nonparametric kernel density estimators from right-censored data is developed here. This expression is used to obtain local and global bandwidths that are optimal in the sense that they minimize the asymptotic mean absolute error and the integrated asymptotic mean absolute error, respectively. These estimators are illustrated for eight data sets from known distributions. Computer simulation results are discussed, comparing the estimation methods with squared-error-based bandwidth selection for right-censored data.

16.
Large-scale sample surveys often collect survival times that are clustered at a number of hierarchical levels. Only the case where three levels are nested is considered here: individual response times (level 1) are grouped into larger units (level 2), which in turn are grouped into much larger units (level 3). It is assumed that individuals in a unit share a common, unobservable, unit-specific random frailty, which induces an association between survival times within the unit. A Bayesian hierarchical analysis of the data is examined, modelling the survival times (level 1) with a semiparametric Cox proportional hazards model; the level-2 and level-3 random frailty effects are assumed independent and are modelled with gamma distributions. The complete posterior distribution of all the model parameters is estimated using the Gibbs sampler, a Monte Carlo method.

17.
In some experiments, such as destructive stress testing and industrial quality control experiments, only values smaller than all previous ones are observed. Here, for such record-breaking data, kernel estimation of the cumulative distribution function and smooth density estimation is considered. For a single record-breaking sample, consistent estimation is not possible, and replication is required for global results. For m independent record-breaking samples, the proposed distribution function and density estimators are shown to be strongly consistent and asymptotically normal as m → ∞. Also, for small m, the mean squared errors and biases of the estimators and their smoothing parameters are investigated through computer simulations.
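Record-breaking data of this kind are easy to simulate: from a stream of measurements, only the successive minima are retained, so a single sample is very short (roughly the logarithm of the number of draws), which is why replication across m samples is needed for consistent estimation. A sketch:

```python
import numpy as np

def record_breaking_sample(rng, draw, n_draws=1000):
    """Retain only the successive minima (lower records) from a stream of draws."""
    records, current_min = [], np.inf
    for _ in range(n_draws):
        x = draw(rng)
        if x < current_min:   # a new record: strictly smaller than all before it
            records.append(x)
            current_min = x
    return np.array(records)
```

The expected number of records from n draws is the harmonic number H_n (about 7.5 for n = 1000), illustrating why one sample alone cannot support consistent estimation.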

18.
Frailty models for survival data
A frailty model is a random effects model for time variables, where the random effect (the frailty) has a multiplicative effect on the hazard. It can be used for univariate (independent) failure times, i.e. to describe the influence of unobserved covariates in a proportional hazards model. More interesting, however, is to consider multivariate (dependent) failure times generated as conditionally independent times given the frailty. This approach can be used both for survival times for individuals, like twins or family members, and for repeated events for the same individual. The standard assumption is to use a gamma distribution for the frailty, but this is a restriction that implies that the dependence is most important for late events. More generally, the distribution can be stable, inverse Gaussian, or follow a power variance function exponential family. Theoretically, large differences are seen between the choices. In practice, using the largest model makes it possible to allow for more general dependence structures, without making the formulas too complicated. This paper is a revised version of a review, which together with ten papers by the author made up a thesis for a Doctor of Science degree at the University of Copenhagen.
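The shared gamma frailty construction can be sketched directly: draw a mean-one gamma frailty per cluster and, given it, generate conditionally independent event times whose hazard is multiplied by the frailty. The positive within-cluster dependence the frailty induces is visible empirically (theta is the frailty variance; theta = 0.2 below is an arbitrary illustrative value):

```python
import numpy as np

def gamma_frailty_times(rng, n_clusters, cluster_size, theta, base_rate=1.0):
    """
    Shared gamma frailty: Z ~ Gamma(shape=1/theta, scale=theta), so E[Z] = 1 and
    Var(Z) = theta. Given Z, times within a cluster are i.i.d. exponential with
    hazard Z * base_rate (the frailty acts multiplicatively on the hazard).
    """
    Z = rng.gamma(shape=1.0 / theta, scale=theta, size=n_clusters)
    rates = np.repeat(Z * base_rate, cluster_size)
    return rng.exponential(1.0 / rates).reshape(n_clusters, cluster_size)
```

Larger theta gives stronger within-cluster association; theta → 0 recovers independent exponential times.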

19.
Some concepts of stochastic dependence for continuous bivariate distribution functions are investigated by defining a convex transformation on their reliability or survival functions. We also study notions of bivariate hazard rate and hazard dependence. Some dependence orderings are characterized by using convex transformation. To clarify the discussions, illustrative examples are given.

20.
The medical costs in an ageing society substantially increase when the incidences of chronic diseases, disabilities and inability to live independently are high. Healthy lifestyles not only affect elderly individuals but also influence the entire community. When assessing treatment efficacy, survival and quality of life should be considered simultaneously. This paper proposes the joint likelihood approach for modelling survival and longitudinal binary covariates simultaneously. Because some unobservable information is present in the model, the Monte Carlo EM algorithm and Metropolis-Hastings algorithm are used to find the estimators. Monte Carlo simulations are performed to evaluate the performance of the proposed model based on the accuracy and precision of the estimates. Real data are used to demonstrate the feasibility of the proposed model.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号