Similar Articles
1.
There has been growing interest in the estimation of transition probabilities among stages (Hestbeck et al., 1991; Brownie et al., 1993; Schwarz et al., 1993) in tag-return and capture-recapture models. This has been driven by the increasing interest in meta-population models in ecology and the need for parameter estimates to use in these models. These transition probabilities are composed of survival and movement rates, which can only be estimated separately when an additional assumption is made (Brownie et al., 1993). Brownie et al. (1993) assumed that movement occurs at the end of the interval between time i and i + 1. We generalize this work to allow different movement patterns in the interval for multiple tag-recovery and capture-recapture experiments. The time of movement is a random variable with a known distribution. The model formulations can be viewed as matrix extensions to the model formulations of single open-population capture-recapture and tag-recovery experiments (Jolly, 1965; Seber, 1965; Brownie et al., 1985). We also present the results of a small simulation study for the tag-return model when movement time follows a beta distribution, and a further simulation study for the capture-recapture model when movement time follows a uniform distribution. The simulation studies use a modified version of program SURVIV (White, 1983). Relative standard errors (RSEs) of the estimates under high and low movement rates are presented. We show that there are strong correlations between movement and survival estimates when the movement rate is high. We also show that estimators of movement rates to different areas and estimators of survival rates in different areas have substantial correlations.
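The composition of a transition probability from survival before and after a random movement time can be sketched by Monte Carlo integration. The function below is only an illustration under an assumed parameterization (survival rates `surv_r` and `surv_s` in the two areas, movement probability `move_rs`, and a Beta-distributed movement time on the unit interval), not the authors' actual estimator:

```python
import random

def transition_prob(surv_r, surv_s, move_rs, alpha, beta, n=100_000, seed=1):
    """Monte Carlo sketch of a transition probability from area r to area s
    when movement time tau in (0, 1) follows a Beta(alpha, beta) law:
    the animal survives at rate surv_r before moving and surv_s after,
    so the expected transition probability is move_rs * E[surv_r**tau * surv_s**(1-tau)]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        tau = rng.betavariate(alpha, beta)
        total += surv_r ** tau * surv_s ** (1.0 - tau)
    return move_rs * total / n

# When survival is equal in both areas the integrand is constant,
# so the result collapses to S * move_rs regardless of the movement-time law.
p = transition_prob(0.8, 0.8, 0.3, 2.0, 5.0)
```

The Brownie et al. (1993) assumption of end-of-interval movement is recovered as the degenerate case tau = 1.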

2.
Ring-recovery methodology has been widely used to estimate survival rates in multi-year ringing studies of wildlife and fish populations (Youngs & Robson, 1975; Brownie et al., 1985). The Brownie et al. (1985) methodology is often used, but its formulation does not account for the fact that rings may be returned in two ways. Sometimes hunters are solicited by a wildlife management officer or scientist and asked whether they shot any ringed birds. Alternatively, a hunter may voluntarily report the ring to the Bird Banding Laboratory (US Fish and Wildlife Service, Laurel, MD, USA), as is requested on the ring. Because the Brownie et al. (1985) models only consider reported rings, Conroy (1985) and Conroy et al. (1989) generalized their models to permit solicited rings. Pollock et al. (1991) considered a very similar model for fish-tagging studies, which might be combined with angler surveys. Pollock et al. (1994) showed how to apply their generalized formulation, with some modification to allow for crippling losses, to wildlife ringing studies. Provided an estimate of the ring-reporting rate is available, separation of hunting and natural mortality estimates is possible, which provides important management information. Here we review this material and then discuss possible methods of estimating the reporting rate, which include: (1) reward ring studies; (2) use of planted rings; (3) hunter surveys; and (4) pre- and post-hunting-season ringings. We compare and contrast the four methods in terms of their model assumptions and practicality. We also discuss the estimation of crippling loss using pre- and post-season ringing in combination with a reward ringing study to estimate the reporting rate.
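The reward-ring idea in (1) can be sketched in a few lines. Assuming (as is standard for this design, though the numbers below are invented) that reward rings are reported with probability one, the ratio of the recovery rate of standard rings to that of reward rings estimates the reporting rate of ordinary rings:

```python
def reporting_rate(std_recovered, std_ringed, rew_recovered, rew_ringed):
    """Reward-ring estimator sketch: if reward rings are always reported,
    the standard-to-reward ratio of recovery rates estimates the
    reporting rate lambda of ordinary rings."""
    f_std = std_recovered / std_ringed   # recovery rate of standard rings
    f_rew = rew_recovered / rew_ringed   # recovery rate of reward rings
    return f_std / f_rew

# Hypothetical study: 1000 birds ringed in each group.
lam = reporting_rate(32, 1000, 80, 1000)
```

With an estimate of lambda in hand, the total harvest rate can be split into reported and unreported components, which is what allows the separation of hunting and natural mortality described above.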

4.
Since Smith (1918) initiated the theory of optimal design, many optimality criteria have been introduced. Atkinson et al. (2007) used the definition of compound design criteria to combine two optimality criteria and introduced the DT- and CD-optimality criteria. This paper introduces the CDT-optimum design, which provides a specified balance between model discrimination, parameter estimation and estimation of a parametric function such as the area under the curve in models for drug absorbance. An equivalence theorem is presented for the case of two models.
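One common way to build a compound criterion in the spirit of Atkinson et al. (2007) is a weighted geometric mean of the component efficiencies. The sketch below is a hypothetical illustration of that idea (the efficiency values and equal weights are invented, and the exact functional form used in the paper may differ):

```python
import math

def compound_criterion(effs, weights):
    """Hypothetical compound-criterion sketch: a weighted geometric mean of
    component design efficiencies (e.g. discrimination, parameter estimation,
    AUC estimation), to be maximized over designs. Weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return math.exp(sum(w * math.log(e) for w, e in zip(weights, effs)))

# A design that is 90%, 80% and 70% efficient for the three goals,
# weighted equally:
score = compound_criterion([0.9, 0.8, 0.7], [1 / 3, 1 / 3, 1 / 3])
```

Changing the weights moves the optimum design along the trade-off between discrimination, estimation, and estimation of the parametric function.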

5.
For right-censored data, Zeng et al. [Semiparametric transformation models with random effects for clustered data. Statist Sin. 2008;18:355–377] proposed a class of semiparametric transformation models with random effects to formulate the effects of possibly time-dependent covariates on clustered failure times. In this article, we demonstrate that the approach of Zeng et al. can be extended to analyse clustered doubly censored data. The asymptotic properties of the nonparametric maximum likelihood estimators of the model parameters are derived. A simulation study is conducted to investigate the performance of the proposed estimators.

6.
We show that the asymptotic mean of the log-likelihood ratio in a misspecified model is a differential geometric quantity related to the exponential curvature of Efron (1978), Amari (1982), and the preferred point geometry of Critchley et al. (1993, 1994). The mean is invariant with respect to reparameterization, which leads to the differential geometrical approach in which coordinate-system invariant quantities like statistical curvatures play an important role. When models are misspecified, the likelihood ratios do not have the chi-squared asymptotic limit, and the asymptotic mean of the likelihood ratio depends on two geometric factors: the departure of models from exponential families (i.e. the exponential curvature) and the departure of embedding spaces from being totally flat in the sense of Critchley et al. (1994). As a special case, the mean becomes the mean of the usual chi-squared limit (i.e. half the degrees of freedom) when these two curvatures vanish. The effect of curvatures is shown in the non-nested hypothesis testing approach of Vuong (1989), and we correct the numerator of the test statistic with an estimated asymptotic mean of the log-likelihood ratio to improve the asymptotic approximation to the sampling distribution of the test statistic.
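The mean-corrected numerator idea can be sketched as follows. This is a generic Vuong-type statistic with a pluggable mean correction, written as an illustration (the function name and the way the correction is estimated are assumptions, not the paper's notation):

```python
import math
import statistics

def corrected_vuong(loglik_f, loglik_g, mean_correction=0.0):
    """Sketch of a Vuong (1989)-style non-nested test statistic with a
    corrected numerator: per-observation log-likelihood differences d_i
    have an estimated asymptotic mean subtracted before standardizing
    by sqrt(n) times the empirical standard deviation of the d_i."""
    d = [f - g for f, g in zip(loglik_f, loglik_g)]
    n = len(d)
    num = sum(d) - mean_correction      # corrected numerator
    den = math.sqrt(n) * statistics.pstdev(d)
    return num / den
```

Setting `mean_correction=0.0` recovers the uncorrected Vuong statistic; plugging in an estimate of the asymptotic mean (driven by the two curvature terms discussed above) recenters the statistic.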

7.
The purpose of this paper is to relate a number of multinomial models currently in use for ordinal response data in a unified manner. By studying generalized logit models, proportional generalized odds ratio models and proportional generalized hazard models under different parameterizations, we conclude that there are only four different models and that they can be specified generically in a uniform way. These four models all possess the same stochastic ordering property, and we compare them graphically in a simple case. Data from the NHLBI TYPE II study (Brensike et al., 1984) are used to illustrate these models. We show that the BMDP programs LE and PR can be employed in computing maximum likelihood estimators for these four models.
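The best-known member of this family, the proportional odds (cumulative logit) model, makes the shared stochastic ordering property concrete: a single slope shifts all cumulative probabilities in the same direction. The sketch below uses invented cutpoints and is a generic illustration, not the parameterization of the paper:

```python
import math

def prop_odds_probs(cutpoints, eta):
    """Proportional odds sketch: cumulative logits
    logit P(Y <= j) = cutpoints[j] - eta share one linear predictor eta.
    Returns the category probabilities for an ordinal response with
    len(cutpoints) + 1 categories."""
    expit = lambda z: 1.0 / (1.0 + math.exp(-z))
    cum = [expit(c - eta) for c in cutpoints] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

p_base = prop_odds_probs([-1.0, 1.0], 0.0)   # eta = 0
p_high = prop_odds_probs([-1.0, 1.0], 2.0)   # larger eta shifts mass upward
```

Increasing eta lowers every cumulative probability P(Y <= j) simultaneously, which is exactly the stochastic ordering shared by all four models discussed above.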

8.
The Weibull distribution is one of the most important distributions in reliability. For the first time, we introduce the beta exponentiated Weibull distribution which extends recent models by Lee et al. [Beta-Weibull distribution: some properties and applications to censored data, J. Mod. Appl. Statist. Meth. 6 (2007), pp. 173–186] and Barreto-Souza et al. [The beta generalized exponential distribution, J. Statist. Comput. Simul. 80 (2010), pp. 159–172]. The new distribution is an important competitive model to the Weibull, exponentiated exponential, exponentiated Weibull, beta exponential and beta Weibull distributions since it contains all these models as special cases. We demonstrate that the density of the new distribution can be expressed as a linear combination of Weibull densities. We provide the moments and two closed-form expressions for the moment-generating function. Explicit expressions are derived for the mean deviations, Bonferroni and Lorenz curves, reliability and entropies. The density of the order statistics can also be expressed as a linear combination of Weibull densities. We obtain the moments of the order statistics. The expected information matrix is derived. We define a log-beta exponentiated Weibull regression model to analyse censored data. The estimation of the parameters is approached by the method of maximum likelihood. The usefulness of the new distribution to analyse positive data is illustrated in two real data sets.
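A density of this beta-generated type can be sketched directly: apply the beta generator with parameters (a, b) to an exponentiated Weibull baseline. The parameterization below (shape alpha, Weibull shape c, scale lam) is an assumption for illustration and should be checked against the paper before use:

```python
import math

def bew_pdf(x, a, b, alpha, c, lam):
    """Sketch of a beta exponentiated Weibull density: beta generator with
    parameters (a, b) applied to the exponentiated Weibull baseline
    G(x) = (1 - exp(-(lam*x)**c))**alpha. Reduces to the plain Weibull
    density when a = b = alpha = 1."""
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)   # beta function
    u = 1.0 - math.exp(-((lam * x) ** c))                   # Weibull cdf
    G = u ** alpha                                          # exp. Weibull cdf
    g = (alpha * c * lam * (lam * x) ** (c - 1)
         * math.exp(-((lam * x) ** c)) * u ** (alpha - 1))  # exp. Weibull pdf
    return g * G ** (a - 1) * (1.0 - G) ** (b - 1) / B
```

The special-case structure claimed in the abstract is visible here: with a = b = alpha = 1 the beta and exponentiation layers vanish and only the Weibull density remains.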

9.
The methods developed by John and Draper et al. of partitioning the blends (runs) of four mixture components into two or more orthogonal blocks when fitting quadratic models are extended to mixtures of five components. The characteristics of Latin squares of side five are used to derive rules for reliably and quickly obtaining designs with specific properties. The designs also produce orthogonal blocks when higher order models are fitted.
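The combinatorial object underlying these rules, a Latin square of side five, is easy to construct; a cyclic construction is sketched below (this shows only the square itself, not the paper's blocking rules, which require specific properties of the squares chosen):

```python
def cyclic_latin_square(n=5):
    """Sketch: a cyclic Latin square of side n, the kind of array whose
    rows and columns each contain every symbol exactly once and which is
    used here to assign five-component mixture blends to orthogonal blocks."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

square = cyclic_latin_square(5)
```

Each row and each column is a permutation of the five symbols, which is the property exploited when deriving the blocking rules.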

10.
The Cox proportional frailty model with a random effect has been proposed for the analysis of right-censored data which consist of a large number of small clusters of correlated failure time observations. For right-censored data, Cai et al. [3] proposed a class of semiparametric mixed-effects models which provides useful alternatives to the Cox model. We demonstrate that the approach of Cai et al. [3] can be used to analyze clustered doubly censored data when both left- and right-censoring variables are always observed. The asymptotic properties of the proposed estimator are derived. A simulation study is conducted to investigate the performance of the proposed estimator.

11.

In this paper, we consider testing for linearity against a well-known class of regime-switching models known as the smooth transition autoregressive (STAR) models. Apart from model selection issues, one reason for interest in testing for linearity in time-series models is that non-linear models such as the STAR are considerably more difficult to use. This testing problem is non-standard because a nuisance parameter becomes unidentified under the null hypothesis. In this paper, we further explore the class of tests proposed by Luukkonen, Saikkonen and Terasvirta (1988), who proposed LM tests for linearity against STAR models. A potential difficulty here is that the linear approximation introduces high leverage points, and hence outliers are likely to be quite influential. To overcome this difficulty, we use the same approximating linear model of Luukkonen et al. (1988), but we apply Wald and F-tests based on L1- and bounded-influence estimates. The efficiency gains of this procedure cannot be easily deduced from the existing theoretical results because the test is based on a misspecified model under H1. Therefore, we carried out a simulation study, in which we observed that the robust tests have desirable properties compared to the test of Luukkonen et al. (1988) for a range of error distributions in the STAR model; in particular, the robust tests have power advantages over the LM test.
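The approximating linear model at the heart of these tests replaces the unidentified smooth transition function by a third-order Taylor expansion, yielding an auxiliary regression of y_t on the AR lags and their interactions with powers of the transition variable y_{t-d}. The sketch below builds only that regressor matrix (the test statistic itself, LM or robust Wald/F, is then computed from fitting this regression, which is omitted here):

```python
def star_lm_regressors(y, d=1, p=1):
    """Sketch of the auxiliary regressors behind Luukkonen et al. (1988)
    type linearity tests: for an AR(p) model with transition variable
    y[t-d], each row is [1, lags, lag*s, lag*s**2, lag*s**3] where
    s = y[t-d] is the transition variable."""
    rows = []
    start = max(p, d)
    for t in range(start, len(y)):
        lags = [y[t - j] for j in range(1, p + 1)]
        s = y[t - d]                      # transition variable
        row = [1.0] + lags
        for x in lags:
            row += [x * s, x * s ** 2, x * s ** 3]
        rows.append(row)
    return rows
```

Under the null of linearity the interaction columns have zero coefficients, so testing linearity reduces to a standard zero-restriction test in this (misspecified-under-H1) linear regression.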

12.
Log Gaussian Cox processes as introduced in Moller et al. (1998) are extended to space-time models called log Gaussian Cox birth processes. These processes allow modelling of spatial and temporal heterogeneity in time series of increasing point processes consisting of different types of points. The models are shown to be easy to analyse yet flexible enough for a detailed statistical analysis of a particular agricultural experiment concerning the development of two weed species on an organic barley field. In particular, estimation, model validation and prediction of the intensity surface are discussed.

13.
Urn models are popular for response adaptive designs in clinical studies. Among different urn models, Ivanova's drop-the-loser rule is capable of producing superior adaptive treatment allocation schemes. Ivanova [2003. A play-the-winner-type urn model with reduced variability. Metrika 58, 1–13] obtained the asymptotic normality only for two treatments. Recently, Zhang et al. [2007. Generalized drop-the-loser urn for clinical trials with delayed responses. Statist. Sinica, in press] extended the drop-the-loser rule to tackle more general circumstances. However, their discussion is also limited to only two treatments. In this paper, the drop-the-loser rule is generalized to multi-treatment clinical trials, and delayed responses are allowed. Moreover, the rule can be used to target any desired pre-specified allocation proportion. Asymptotic properties, including strong consistency and asymptotic normality, are also established for general multi-treatment cases.
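The basic drop-the-loser mechanism can be sketched by simulation: treatment balls are removed on failure and replenished through an immigration ball, which skews allocation toward arms with higher success probabilities. This is a minimal illustration of the plain (two-arm-or-more, immediate-response) rule, not the generalized targeted or delayed-response version developed in the paper:

```python
import random

def drop_the_loser(p_success, n_patients=10_000, seed=7):
    """Simulation sketch of a drop-the-loser urn (Ivanova, 2003) for K
    treatments: draw a ball; a treatment ball assigns a patient and is
    dropped on failure; the immigration ball adds one ball of each
    treatment. Returns the realized allocation proportions."""
    rng = random.Random(seed)
    K = len(p_success)
    urn = [1] * K + [1]            # one ball per treatment + immigration ball
    counts = [0] * K
    treated = 0
    while treated < n_patients:
        ball = rng.choices(range(K + 1), weights=urn)[0]
        if ball == K:              # immigration ball: add one ball of each arm
            for k in range(K):
                urn[k] += 1
        else:
            counts[ball] += 1
            treated += 1
            if rng.random() >= p_success[ball]:
                urn[ball] -= 1     # failure: drop the ball
    return [c / n_patients for c in counts]

props = drop_the_loser([0.8, 0.5])
```

With success probabilities 0.8 and 0.5 the better arm receives the larger share of patients, the low-variability behaviour that makes the rule attractive.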

14.
The continual reassessment method (CRM) was first introduced by O’Quigley et al. [1990. Continual reassessment method: a practical design for Phase I clinical trials in cancer. Biometrics 46, 33–48]. Many articles followed adding to the original ideas, among which are articles by Babb et al. [1998. Cancer Phase I clinical trials: efficient dose escalation with overdose control. Statist. Med. 17, 1103–1120], Braun [2002. The bivariate continual reassessment method: extending the CRM to phase I trials of two competing outcomes. Controlled Clin. Trials 23, 240–256], Chevret [1993. The continual reassessment method in cancer phase I clinical trials: a simulation study. Statist. Med. 12, 1093–1108], Faries [1994. Practical modifications of the continual reassessment method for phase I cancer clinical trials. J. Biopharm. Statist. 4, 147–164], Goodman et al. [1995. Some practical improvements in the continual reassessment method for phase I studies. Statist. Med. 14, 1149–1161], Ishizuka and Ohashi [2001. The continual reassessment method and its applications: a Bayesian methodology for phase I cancer clinical trials. Statist. Med. 20, 2661–2681], Legedza and Ibrahim [2002. Longitudinal design for phase I trials using the continual reassessment method. Controlled Clin. Trials 21, 578–588], Mahmood [2001. Application of preclinical data to initiate the modified continual reassessment method for maximum tolerated dose-finding trial. J. Clin. Pharmacol. 41, 19–24], Moller [1995. An extension of the continual reassessment method using a preliminary up and down design in a dose finding study in cancer patients in order to investigate a greater number of dose levels. Statist. Med. 14, 911–922], O’Quigley [1992. Estimating the probability of toxicity at the recommended dose following a Phase I clinical trial in cancer. Biometrics 48, 853–862], O’Quigley and Shen [1996. Continual reassessment method: a likelihood approach. Biometrics 52, 163–174], O’Quigley et al. (1999), O’Quigley et al. [2002. Non-parametric optimal design in dose finding studies. Biostatistics 3, 51–56], O’Quigley and Paoletti [2003. Continual reassessment method for ordered groups. Biometrics 59, 429–439], Piantadosi et al. [1998. Practical implementation of a modified continual reassessment method for dose-finding trials. Cancer Chemother. Pharmacol. 41, 429–436] and Whitehead and Williamson [1998. Bayesian decision procedures based on logistic regression models for dose-finding studies. J. Biopharm. Statist. 8, 445–467]. The method is broadly described by Storer [1989. Design and analysis of Phase I clinical trials. Biometrics 45, 925–937]. Whether likelihood- or Bayesian-based, inference poses particular theoretical difficulties because the working models are under-parameterized. Nonetheless, CRM models have proven themselves to be of practical use and, in this work, the aim is to turn the spotlight on the main theoretical ideas underpinning the approach, obtaining results which can provide guidance in practice. Stemming from this theoretical framework are a number of results and some further development, in particular a way to structure a randomized allocation of subjects as well as a more robust approach to the problem of dealing with patient heterogeneity.
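A minimal CRM iteration can be sketched with the classic one-parameter power working model, whose deliberate under-parameterization is the source of the theoretical difficulties mentioned above. The prior, grid bounds and skeleton below are invented for illustration, and this is the plain Bayesian CRM rather than any of the specific extensions cited:

```python
import math

def crm_next_dose(skeleton, doses_given, tox, target=0.25, grid=200):
    """One-parameter CRM sketch: power working model
    p_i(a) = skeleton[i] ** exp(a), standard normal prior on a,
    posterior computed by numerical integration on a grid; the next
    dose is the one whose posterior mean toxicity is closest to target."""
    a_grid = [-3.0 + 6.0 * j / (grid - 1) for j in range(grid)]
    post = []
    for a in a_grid:
        ll = -0.5 * a * a                           # N(0, 1) log-prior
        for d, y in zip(doses_given, tox):
            p = skeleton[d] ** math.exp(a)
            ll += math.log(p) if y else math.log(1.0 - p)
        post.append(math.exp(ll))                   # unnormalized posterior
    z = sum(post)
    p_hat = [sum(w * skeleton[i] ** math.exp(a) for w, a in zip(post, a_grid)) / z
             for i in range(len(skeleton))]
    return min(range(len(skeleton)), key=lambda i: abs(p_hat[i] - target))
```

Observed toxicities pull the posterior toward higher toxicity probabilities at every dose, so the recommendation moves down; non-toxic outcomes move it up.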

15.
The analysis of exogeneity in econometric time-series models as formalized in the seminal paper by Engle et al. [Econometrica 51 (1983), 277–304] is extended to cover a more general class of models, including error-components models. The Bayesian framework adopted here allows us to take full advantage of a number of statistical tools, related to the reduction of Bayesian experiments, and motivates a careful consideration of prediction issues, leading to a concept of predictive exogeneity. We also adapt the formal definitions of weak and strong exogeneity introduced in Engle et al. (1983), and provide a naturally nested set of definitions for exogeneity. An example highlights the main implications of our analysis for econometric modelling.

16.
In this paper, it is proposed to modify autoregressive fractionally integrated moving average (ARFIMA) processes by introducing an additional parameter, in response to the criticism of Hauser et al. (1999) that ARFIMA processes are not appropriate for the estimation of persistence because of the degenerate behavior of their spectral densities at frequency zero. When fitting these modified ARFIMA processes to the US GNP, it turns out that the estimated spectra are very similar to those obtained with conventional ARFIMA models, indicating that, in this special case, the disadvantage of ARFIMA models cited by Hauser et al. (1999) does not seriously affect the estimation of persistence. However, according to the results of a goodness-of-fit test applied to the estimated spectra, both the ARFIMA models and the modified ARFIMA models seem to overfit the data in the neighborhood of frequency zero.
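The degenerate behaviour at frequency zero is easy to see in the simplest fractional case. The sketch below is the standard ARFIMA(0, d, 0) spectral density (it does not include the paper's additional modifying parameter, which is what the authors introduce to change this behaviour):

```python
import math

def arfima_spectrum(lam, d, sigma2=1.0):
    """Spectral density of a pure fractional ARFIMA(0, d, 0) process:
    f(lam) = sigma2 / (2*pi) * |2*sin(lam/2)|**(-2*d).
    For d > 0 this diverges as lam -> 0, the degeneracy criticized
    by Hauser et al. (1999); for d = 0 it is the flat white-noise spectrum."""
    return sigma2 / (2.0 * math.pi) * abs(2.0 * math.sin(lam / 2.0)) ** (-2.0 * d)
```

Plotting this for d > 0 on a grid of frequencies approaching zero shows the unbounded spike at the origin that motivates the modification.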

17.
The generalized negative exponential disparity, discussed in Bhandari et al. (Robust inference in parametric models using the family of generalized negative exponential disparities, 2006, ANZJS, 48, 95–114), represents an important class of disparity measures that generates efficient estimators and tests with strong robustness properties. In their paper, however, Bhandari et al. failed to provide a sharp lower bound for the power breakdown point of the corresponding tests. This was acknowledged by the authors, who indicated the possible existence of a sharper bound but noted that they did not “have a proof at this point”. In this paper we provide an improved bound for this power breakdown point, and show with an example how it can enhance the existing results.

18.
A composite endpoint consists of multiple endpoints combined in one outcome. It is frequently used as the primary endpoint in randomized clinical trials. There are two main disadvantages associated with the use of composite endpoints: (a) in conventional analyses, all components are treated as equally important; and (b) in time-to-event analyses, the first event considered may not be the most important component. Recently, Pocock et al. (2012) introduced the win ratio method to address these disadvantages. This method has two alternative approaches: the matched-pair approach and the unmatched-pair approach. In the unmatched-pair approach, the confidence interval is constructed based on bootstrap resampling, and the hypothesis testing is based on the non-parametric method of Finkelstein and Schoenfeld (1999). Luo et al. (2015) developed a closed-form variance estimator of the win ratio for the unmatched-pair approach, based on a composite endpoint with two components and a specific algorithm determining winners, losers and ties. We extend the unmatched-pair approach to provide a generalized analytical solution to both hypothesis testing and confidence interval construction for the win ratio, based on its logarithmic asymptotic distribution. This asymptotic distribution is derived via U-statistics following Wei and Johnson (1985). We perform simulations assessing the confidence intervals constructed based on our approach versus those from the bootstrap resampling and from Luo et al. We have also applied our approach to a liver transplant Phase III study. This application and the simulation studies show that the win ratio can be a better statistical measure than the odds ratio when the importance order among components matters, and that the method of our approach and that of Luo et al., although derived from large-sample theory, are not limited to large samples but also perform well for relatively small sample sizes. Different from Pocock et al. and Luo et al., our approach is a generalized analytical method, which is valid for any algorithm determining winners, losers and ties. Copyright © 2016 John Wiley & Sons, Ltd.
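The unmatched-pair win ratio itself is simple to compute once a comparison rule is fixed, which is the point of the "any algorithm determining winners, losers and ties" generality. The sketch below uses an invented rule (compare components in order of importance, fewer events wins) purely for illustration:

```python
def win_ratio(treatment, control, compare):
    """Unmatched-pairs win ratio sketch (Pocock et al., 2012): compare
    every treatment subject with every control subject using a rule
    returning +1 (treatment wins), -1 (treatment loses) or 0 (tie);
    the win ratio is total wins divided by total losses."""
    wins = losses = 0
    for t in treatment:
        for c in control:
            r = compare(t, c)
            if r > 0:
                wins += 1
            elif r < 0:
                losses += 1
    return wins / losses

def rule(t, c):
    """Hypothetical comparison rule: walk the components in order of
    clinical importance; the subject with fewer events on the first
    differing component wins. Ties if all components are equal."""
    for a, b in zip(t, c):
        if a != b:
            return 1 if a < b else -1
    return 0

# Tiny invented example: each subject is (fatal events, non-fatal events).
wr = win_ratio([(0, 1), (1, 0)], [(1, 1), (0, 2)], rule)
```

Because `compare` is a parameter, the same code covers the two-component algorithm of Luo et al. (2015) and any other hierarchy of components.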

19.
Using generalized linear models (GLMs), Jalaludin et al. (2006; J. Exposure Analysis and Epidemiology 16, 225–237) studied the association between the daily number of visits to emergency departments for cardiovascular disease by the elderly (65+) and five measures of ambient air pollution. Bayesian methods provide an alternative approach to classical time-series modelling and are starting to be more widely used. This paper considers Bayesian methods using the dataset of Jalaludin et al. (2006) and compares the results from the Bayesian methods with those obtained by Jalaludin et al. (2006) using GLM methods.

20.
A robust procedure is developed for testing the equality of means in the two-sample normal model. This is based on the weighted likelihood estimators of Basu et al. (1993). When the normal model is true, the proposed tests have the same asymptotic power as the two-sample Student's t-statistic in the equal-variance case. However, when the normality assumptions are only approximately true, the proposed tests can be substantially more powerful than the classical tests. In a Monte Carlo study for the equal-variance case under various outlier models, the proposed test using the Hellinger-distance-based weighted likelihood estimator compared favorably with the classical test as well as with the robust test proposed by Tiku (1980).
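The general idea of downweighting observations that are surprising under the model can be illustrated with a simple iteratively reweighted location estimate. This is a generic Huber-type stand-in for illustration only; it is not the Hellinger-distance weighted likelihood estimator of Basu et al. (1993), whose weights come from a disparity between data and model densities:

```python
import statistics

def weighted_mean(x, c=1.5, iters=10):
    """Illustrative robust location estimate: iteratively reweighted mean
    with Huber-type weights on standardized residuals (scale from the
    median absolute deviation). Downweights outliers, a property shared
    with weighted likelihood estimators."""
    mu = statistics.median(x)
    s = statistics.median([abs(v - mu) for v in x]) / 0.6745 or 1.0
    for _ in range(iters):
        w = [1.0 if abs((v - mu) / s) <= c else c / abs((v - mu) / s) for v in x]
        mu = sum(wi * vi for wi, vi in zip(w, x)) / sum(w)
    return mu
```

On clean data the weights are all one and the estimate is the ordinary mean, which mirrors the abstract's claim of full asymptotic efficiency at the normal model; a single gross outlier, by contrast, is sharply downweighted.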
