Similar Articles
20 similar articles found.
1.
2.
There has been growing interest in the estimation of transition probabilities among stages (Hestbeck et al., 1991; Brownie et al., 1993; Schwarz et al., 1993) in tag-return and capture-recapture models, driven by the increasing interest in meta-population models in ecology and the need for parameter estimates to use in these models. These transition probabilities are composed of survival and movement rates, which can only be estimated separately when an additional assumption is made (Brownie et al., 1993). Brownie et al. (1993) assumed that movement occurs at the end of the interval between time i and i + 1. We generalize this work to allow different movement patterns within the interval for multiple tag-recovery and capture-recapture experiments. The time of movement is a random variable with a known distribution. The model formulations can be viewed as matrix extensions of the model formulations for single open-population capture-recapture and tag-recovery experiments (Jolly, 1965; Seber, 1965; Brownie et al., 1985). We also present the results of a small simulation study for the tag-return model when movement time follows a beta distribution, and of another simulation study for the capture-recapture model when movement time follows a uniform distribution. The simulation studies use a modified version of the program SURVIV (White, 1983). The relative standard errors (RSEs) of estimates under high and low movement rates are presented. We show that there are strong correlations between movement and survival estimates when the movement rate is high, and that estimators of movement rates to different areas and estimators of survival rates in different areas also have substantial correlations.
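For orientation, here is a minimal Monte Carlo sketch in Python of the kind of quantity involved. It assumes, purely for illustration, that an animal moving at a random fraction T of the interval survives at the origin-area rate before T and at the destination-area rate after it; this geometric splitting rule and all parameter values are assumptions of the sketch, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(2024)

def expected_transition_survival(S_from, S_to, a, b, n_mc=100_000):
    """Monte Carlo estimate of E[S_from**T * S_to**(1 - T)], where the
    movement time T (as a fraction of the interval) follows Beta(a, b).
    Splitting interval survival geometrically between the two areas is an
    illustrative assumption, not the paper's exact model."""
    T = rng.beta(a, b, size=n_mc)
    return np.mean(S_from ** T * S_to ** (1.0 - T))

# Annual survival 0.8 in area r, 0.6 in area s; movement probability 0.3.
S_r, S_s, m_rs = 0.8, 0.6, 0.3

# Brownie et al. (1993): movement at the end of the interval, so the
# whole interval is survived at the origin-area rate.
psi_end = m_rs * S_r

# Movement time ~ Beta(2, 2): movement tends to occur mid-interval.
psi_beta = m_rs * expected_transition_survival(S_r, S_s, 2, 2)

print(f"psi (end-of-interval movement): {psi_end:.4f}")
print(f"psi (Beta(2,2) movement time):  {psi_beta:.4f}")
```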

3.
BMDP, Version 7.0: Available from BMDP, Statistical Software, Inc., 12121 Wilshire Blvd., Suite 300, Los Angeles, CA 90025; phone: 310-207-8800; fax: 310-207-8844. $695.

NCSS, Version 5.3: Available from NCSS, Attn: Dr. Jerry Hintze, 329 N. 1000 East, Kaysville, UT 84037; phone: 801-546-0445; fax: 801-546-3907. $295.

SAS, Version 6.07: Available from SAS Institute, Inc., SAS Campus Drive, Cary, NC 27513; phone: 919-677-8200; fax: 919-677-8123. $2525 (first year); $1175 (renewal).

SPSS, Version 6.0: Available from SPSS Inc., 444 N. Michigan Ave., Chicago, IL 60611; phone: 800-543-5831; fax: 800-841-0064. $1190.

4.
5.
Sargent et al. (J Clin Oncol 23:8664–8670, 2005) concluded that 3-year disease-free survival (DFS) can be considered a valid surrogate (replacement) endpoint for 5-year overall survival (OS) in clinical trials of adjuvant chemotherapy for colorectal cancer. We address the question of whether this conclusion holds for trials involving classes of treatments other than those considered by Sargent et al. Additionally, we assess whether the 3-year cutpoint is optimal. To this aim, we investigate whether the results reported by Sargent et al. could have been used to predict treatment effects in three centrally randomized adjuvant colorectal cancer trials performed by the Japanese Foundation for Multidisciplinary Treatment for Cancer (JFMTC) (Sakamoto et al. J Clin Oncol 22:484–492, 2004). Our analysis supports the conclusion of Sargent et al. and shows that using DFS at 2 or 3 years would be the best option for the prediction of OS at 5 years.

6.
The continual reassessment method (CRM) was first introduced by O’Quigley et al. [1990. Continual reassessment method: a practical design for Phase I clinical trials in cancer. Biometrics 46, 33–48]. Many articles followed, adding to the original ideas, among which are articles by Babb et al. [1998. Cancer Phase I clinical trials: efficient dose escalation with overdose control. Statist. Med. 17, 1103–1120], Braun [2002. The bivariate continual reassessment method: extending the CRM to phase I trials of two competing outcomes. Controlled Clin. Trials 23, 240–256], Chevret [1993. The continual reassessment method in cancer phase I clinical trials: a simulation study. Statist. Med. 12, 1093–1108], Faries [1994. Practical modifications of the continual reassessment method for phase I cancer clinical trials. J. Biopharm. Statist. 4, 147–164], Goodman et al. [1995. Some practical improvements in the continual reassessment method for phase I studies. Statist. Med. 14, 1149–1161], Ishizuka and Ohashi [2001. The continual reassessment method and its applications: a Bayesian methodology for phase I cancer clinical trials. Statist. Med. 20, 2661–2681], Legedza and Ibrahim [2002. Longitudinal design for phase I trials using the continual reassessment method. Controlled Clin. Trials 21, 578–588], Mahmood [2001. Application of preclinical data to initiate the modified continual reassessment method for maximum tolerated dose-finding trial. J. Clin. Pharmacol. 41, 19–24], Moller [1995. An extension of the continual reassessment method using a preliminary up and down design in a dose finding study in cancer patients in order to investigate a greater number of dose levels. Statist. Med. 14, 911–922], O’Quigley [1992. Estimating the probability of toxicity at the recommended dose following a Phase I clinical trial in cancer. Biometrics 48, 853–862], O’Quigley and Shen [1996. Continual reassessment method: a likelihood approach. Biometrics 52, 163–174], O’Quigley et al. (1999), O’Quigley et al. [2002. Non-parametric optimal design in dose finding studies. Biostatistics 3, 51–56], O’Quigley and Paoletti [2003. Continual reassessment method for ordered groups. Biometrics 59, 429–439], Piantadosi et al. [1998. Practical implementation of a modified continual reassessment method for dose-finding trials. Cancer Chemother. Pharmacol. 41, 429–436] and Whitehead and Williamson [1998. Bayesian decision procedures based on logistic regression models for dose-finding studies. J. Biopharm. Statist. 8, 445–467]. The method is broadly described by Storer [1989. Design and analysis of Phase I clinical trials. Biometrics 45, 925–937]. Whether likelihood- or Bayesian-based, inference poses particular theoretical difficulties because the working models are under-parameterized. Nonetheless, CRM models have proven to be of practical use, and in this work the aim is to turn the spotlight on the main theoretical ideas underpinning the approach, obtaining results that can provide guidance in practice. Stemming from this theoretical framework are a number of results and some further developments, in particular a way to structure randomized allocation of subjects as well as a more robust approach to dealing with patient heterogeneity.
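For readers new to the method, the following is a minimal one-parameter Bayesian CRM sketch in Python, using the common "power" working model and a normal prior evaluated on a grid. The skeleton, prior standard deviation and target toxicity rate are illustrative assumptions, and none of the cited papers' specific refinements are implemented.

```python
import numpy as np

def crm_next_dose(skeleton, doses_given, tox_outcomes, target=0.25,
                  prior_sd=1.34, grid=None):
    """One-parameter Bayesian CRM sketch with the 'power' working model
    p_i(a) = skeleton_i ** exp(a) and a normal prior on a, evaluated on
    a grid. A simplified illustration, not any one paper's exact design."""
    if grid is None:
        grid = np.linspace(-4.0, 4.0, 801)
    skeleton = np.asarray(skeleton, dtype=float)
    log_post = -0.5 * (grid / prior_sd) ** 2            # log prior
    for d, y in zip(doses_given, tox_outcomes):
        p = skeleton[d] ** np.exp(grid)                 # model toxicity prob
        log_post += np.where(y, np.log(p), np.log1p(-p))
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    # Posterior-mean toxicity probability at each dose level.
    p_hat = np.array([(skeleton[i] ** np.exp(grid) * post).sum()
                      for i in range(len(skeleton))])
    # Recommend the dose whose estimated toxicity is closest to target.
    return int(np.argmin(np.abs(p_hat - target))), p_hat

# Skeleton of prior toxicity guesses for five dose levels; three patients
# treated at level 2 with one toxicity observed.
next_dose, p_hat = crm_next_dose([0.05, 0.12, 0.25, 0.40, 0.55],
                                 doses_given=[2, 2, 2],
                                 tox_outcomes=[0, 0, 1])
print("recommended next dose level:", next_dose)
print("posterior toxicity estimates:", np.round(p_hat, 3))
```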

7.
This paper is concerned with developing procedures for constructing confidence intervals for the scale parameter θ of the two-parameter exponential lifetime model when the data are time censored, procedures that would hold approximately equal tail probabilities and coverage probabilities close to the nominal level. We use a conditional approach to eliminate the nuisance parameter and develop several procedures based on the conditional likelihood. The methods are (a) a method based on the likelihood ratio, (b) a method based on the skewness-corrected score (Bartlett, Biometrika 40 (1953), 12–19), (c) a method based on an adjustment to the signed root likelihood ratio (DiCiccio, Field et al., Biometrika 77 (1990), 77–95), and (d) a method based on parameter transformation to the normal approximation. The performance of these procedures is then compared, through simulations, with the usual likelihood-based procedure. The skewness-corrected score procedure performs best in terms of holding both equal tail probabilities and nominal coverage probabilities, even for small samples.
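A much-simplified sketch of method (a) in Python: a likelihood-ratio interval for the exponential scale under Type-I (time) censoring, treating the threshold parameter as known and equal to zero, and therefore omitting the conditional elimination of the nuisance parameter that the paper develops.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def exp_scale_lr_ci(times, events, level=0.95):
    """Likelihood-ratio CI for the exponential scale theta under Type-I
    censoring. One-parameter sketch: the two-parameter model's threshold
    is taken as known (zero), unlike the conditional approach above."""
    d = int(np.sum(events))            # observed failures
    T = float(np.sum(times))           # total time on test
    theta_hat = T / d                  # MLE of the scale
    loglik = lambda th: -d * np.log(th) - T / th
    cut = loglik(theta_hat) - 0.5 * chi2.ppf(level, df=1)
    g = lambda th: loglik(th) - cut    # positive inside the interval
    lo = brentq(g, theta_hat * 1e-3, theta_hat)
    hi = brentq(g, theta_hat, theta_hat * 1e3)
    return theta_hat, (lo, hi)

rng = np.random.default_rng(7)
t = rng.exponential(scale=10.0, size=40)
c = 12.0                               # fixed censoring time
times, events = np.minimum(t, c), (t <= c).astype(int)
print(exp_scale_lr_ci(times, events))
```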

8.
A composite endpoint combines multiple endpoints in one outcome and is frequently used as the primary endpoint in randomized clinical trials. There are two main disadvantages associated with the use of composite endpoints: (a) in conventional analyses, all components are treated as equally important; and (b) in time-to-event analyses, the first event considered may not be the most important component. Recently, Pocock et al. (2012) introduced the win ratio method to address these disadvantages. This method has two alternative approaches: the matched-pair approach and the unmatched-pair approach. In the unmatched-pair approach, the confidence interval is constructed based on bootstrap resampling, and the hypothesis testing is based on the non-parametric method of Finkelstein and Schoenfeld (1999). Luo et al. (2015) developed a closed-form variance estimator of the win ratio for the unmatched-pair approach, based on a composite endpoint with two components and a specific algorithm determining winners, losers and ties. We extend the unmatched-pair approach to provide a generalized analytical solution to both hypothesis testing and confidence interval construction for the win ratio, based on its logarithmic asymptotic distribution, derived via U-statistics following Wei and Johnson (1985). We perform simulations assessing the confidence intervals constructed based on our approach against those from bootstrap resampling and from Luo et al., and we apply our approach to a Phase III liver transplant study. The application and the simulation studies show that the win ratio can be a better statistical measure than the odds ratio when the importance order among components matters, and that the method per our approach and that of Luo et al., although derived from large-sample theory, perform well even for relatively small sample sizes. Different from Pocock et al. and Luo et al., our approach is a generalized analytical method, valid for any algorithm determining winners, losers and ties.
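A sketch in Python of the unmatched-pair win ratio with a percentile-bootstrap interval. The two-component hierarchy and the rule for winners, losers and ties here are illustrative choices, not the generalized analytical method described above.

```python
import numpy as np

def win_ratio(trt, ctl):
    """Unmatched-pairs win ratio for a two-component composite endpoint.
    Rows are (time_to_death, time_to_hosp); larger times are better.
    Deaths are compared first and hospitalizations break death ties.
    This rule is illustrative; the method above allows any algorithm."""
    d1 = trt[:, None, 0] - ctl[None, :, 0]   # component 1 differences
    d2 = trt[:, None, 1] - ctl[None, :, 1]   # component 2 differences
    wins = ((d1 > 0) | ((d1 == 0) & (d2 > 0))).sum()
    losses = ((d1 < 0) | ((d1 == 0) & (d2 < 0))).sum()
    return wins / losses

def bootstrap_ci(trt, ctl, level=0.95, B=1000, seed=0):
    """Percentile bootstrap CI for the win ratio on the log scale,
    resampling patients independently within each arm."""
    rng = np.random.default_rng(seed)
    logs = [np.log(win_ratio(trt[rng.integers(len(trt), size=len(trt))],
                             ctl[rng.integers(len(ctl), size=len(ctl))]))
            for _ in range(B)]
    lo, hi = np.percentile(logs, [50 * (1 - level), 50 * (1 + level)])
    return np.exp(lo), np.exp(hi)

rng = np.random.default_rng(1)
trt = np.column_stack([rng.exponential(12, 60), rng.exponential(8, 60)])
ctl = np.column_stack([rng.exponential(9, 60), rng.exponential(6, 60)])
print("win ratio:", round(win_ratio(trt, ctl), 3),
      "95% CI:", bootstrap_ci(trt, ctl))
```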

9.
The program Data Muncher (Version 1.0) and the SAS (Version 6.10) procedures PROC TABULATE and PROC REPORT are compared with respect to their ability to generate statistical tables that summarize data for both analysis (continuous) and classification (categorical) variables. The comparison was made on an IBM-compatible PC running Windows. The functioning of the programs as well as many formatting and layout options are compared. It is concluded that each product has its advantages and its limitations. Data Muncher is available from Conceptual Software, Inc. [(713) 721–4200 or (800) 328–2686]. The SAS System is available from SAS Institute Inc. [(919) 677–8000].

10.
Singh et al. (Stat Trans 6(4):515–522, 2003) proposed a modified unrelated-question procedure and demonstrated that it is capable of producing a more efficient estimator of the population parameter π_A, namely the proportion of persons in a community bearing a sensitive character A, when π_A < 0.50. The development of Singh et al. (2003) is based on simple random samples with replacement and on the assumption that π_B, the proportion of individuals bearing an unrelated innocuous character B, is known. Due to these limitations, the procedure of Singh et al. (2003) cannot be used in practical surveys, where the sample units are usually chosen with varying selection probabilities. In this article, following Singh et al. (2003), we propose an alternative RR procedure assuming that the population units are sampled with unequal selection probabilities and that the value of π_B is unknown. A numerical example comparing the performance of the proposed RR procedure under alternative sampling designs is also reported.
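For context, a sketch in Python of the classical unrelated-question estimator under simple random sampling with replacement and known π_B, i.e. the baseline setting that Singh et al. (2003) modify; the unequal-probability procedure proposed in the article is not reproduced here. All parameter values are illustrative.

```python
import numpy as np

def unrelated_question_estimate(yes, n, p, pi_B):
    """Classical unrelated-question RR estimator with pi_B known.
    Each respondent answers the sensitive question with probability p
    and the innocuous question otherwise, so
        P(yes) = p * pi_A + (1 - p) * pi_B.
    Returns the moment estimator of pi_A and its estimated SE."""
    lam_hat = yes / n
    pi_A_hat = (lam_hat - (1 - p) * pi_B) / p
    se_hat = np.sqrt(lam_hat * (1 - lam_hat) / (n * p ** 2))
    return pi_A_hat, se_hat

# Simulated survey: pi_A = 0.20, pi_B = 0.60 known, p = 0.7, n = 500.
rng = np.random.default_rng(11)
n, p, pi_A, pi_B = 500, 0.7, 0.20, 0.60
asks_sensitive = rng.random(n) < p
yes = int(np.sum(np.where(asks_sensitive,
                          rng.random(n) < pi_A,
                          rng.random(n) < pi_B)))
print(unrelated_question_estimate(yes, n, p, pi_B))
```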

11.
There is an emerging consensus in empirical finance that realized volatility series typically display long-range dependence, with a memory parameter (d) around 0.4 (Andersen et al., 2001; Martens et al., 2004). The present article provides some illustrative analysis of how long memory may arise from the accumulative process underlying realized volatility. The article also uses results in Lieberman and Phillips (2004, 2005) to refine statistical inference about d by higher-order theory. Standard asymptotic theory has an O(n^{-1/2}) error rate for rejection probabilities; the theory used here refines the approximation to an error rate of o(n^{-1/2}). The new formula is independent of unknown parameters, simple to calculate, and user-friendly. The method is applied to test whether the long memory parameter estimates reported by Andersen et al. (2001) and Martens et al. (2004) differ significantly from the lower boundary (d = 0.5) of nonstationary long memory, and generally confirms earlier findings.
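For orientation, a standard first-order log-periodogram (GPH-type) estimate of d in Python on simulated ARFIMA(0, d, 0) data; the bandwidth choice is a common convention, and the higher-order refinement of Lieberman and Phillips is not implemented here.

```python
import numpy as np

def gph_estimate(x, m=None):
    """Log-periodogram (GPH) estimate of the memory parameter d: regress
    the log periodogram at the first m Fourier frequencies on
    -log(4 sin^2(w/2)). First-order estimator with its usual asymptotic
    standard error pi / sqrt(24 m)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(n ** 0.5)                      # common bandwidth choice
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    pgram = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    regressor = -np.log(4 * np.sin(freqs / 2) ** 2)
    d_hat = np.polyfit(regressor, np.log(pgram), 1)[0]
    se = np.pi / np.sqrt(24 * m)
    return d_hat, se

# Simulate ARFIMA(0, d, 0) with d = 0.4 via its truncated MA expansion,
# psi_k = Gamma(k + d) / (Gamma(d) Gamma(k + 1)).
rng = np.random.default_rng(3)
n, d = 4096, 0.4
k = np.arange(n)
psi = np.cumprod(np.concatenate(([1.0], (k[1:] - 1 + d) / k[1:])))
x = np.convolve(rng.standard_normal(2 * n), psi, mode="full")[n:2 * n]
print(gph_estimate(x))
```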

12.
Accurate diagnosis of a molecularly defined subtype of cancer is often an important step toward its effective control and treatment. For the diagnosis of some subtypes of a cancer, a gold standard with perfect sensitivity and specificity may be unavailable. In those scenarios, tumor subtype status is commonly measured by multiple imperfect diagnostic markers. Additionally, in many such studies, some subjects are only measured by a subset of diagnostic tests and the missing probabilities may depend on the unknown disease status. In this paper, we present statistical methods based on the EM algorithm to evaluate incomplete multiple imperfect diagnostic tests under a missing at random assumption and one missing not at random scenario. We apply the proposed methods to a real data set from the National Cancer Institute (NCI) colon cancer family registry on diagnosing microsatellite instability for hereditary non-polyposis colorectal cancer to estimate diagnostic accuracy parameters (i.e. sensitivities and specificities), prevalence, and potential differential missing probabilities for 11 biomarker tests. Simulations are also conducted to evaluate the small-sample performance of our methods.
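A complete-data sketch in Python of the underlying EM algorithm for a two-class latent class model with conditionally independent binary tests. The handling of missing-at-random and missing-not-at-random test results, which is the paper's contribution, is omitted, and the simulated parameter values are illustrative.

```python
import numpy as np

def em_latent_class(Y, n_iter=200):
    """EM for a two-class latent class model with conditionally
    independent binary tests (rows: subjects, columns: tests).
    Estimates prevalence, sensitivities and specificities from
    complete data only."""
    n, K = Y.shape
    prev, sens, spec = 0.5, np.full(K, 0.8), np.full(K, 0.8)
    for _ in range(n_iter):
        # E-step: posterior probability of disease for each subject.
        l1 = prev * np.prod(sens**Y * (1 - sens)**(1 - Y), axis=1)
        l0 = (1 - prev) * np.prod((1 - spec)**Y * spec**(1 - Y), axis=1)
        w = l1 / (l1 + l0)
        # M-step: weighted updates of prevalence, sensitivity, specificity.
        prev = w.mean()
        sens = (w[:, None] * Y).sum(0) / w.sum()
        spec = ((1 - w)[:, None] * (1 - Y)).sum(0) / (1 - w).sum()
    return prev, sens, spec

# Simulate 11 imperfect tests (as in the MSI application), re-estimate.
rng = np.random.default_rng(5)
n, K, prev0 = 1000, 11, 0.3
true_sens = rng.uniform(0.7, 0.95, K)
true_spec = rng.uniform(0.8, 0.99, K)
D = rng.random(n) < prev0
Y = np.where(D[:, None], rng.random((n, K)) < true_sens,
             rng.random((n, K)) >= true_spec).astype(int)
print("estimated prevalence:", round(em_latent_class(Y)[0], 3))
```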

13.
The operating characteristics (OCs) of an indifference-zone ranking and selection procedure are derived for randomized response binomial data. The OCs include tables and figures to facilitate tradeoffs between sample size and a stated probability of a correct selection, i.e., correctly identifying the binomial population (out of k ≥ 2) characterized by the largest probability of success. Measures of efficiency are provided to assist the analyst in selection of an appropriate randomized response design for the collection of the data. A hybrid randomized response model, which includes the Warner model and the Greenberg et al. model, is introduced to facilitate comparisons among a wider range of statistical designs than previously available. An example comparing failure rates of contraceptive methods is used to illustrate the use of these new results.

14.
Approximate Bayesian computation (ABC) is a popular approach to address inference problems where the likelihood function is intractable or expensive to calculate. To improve over Markov chain Monte Carlo (MCMC) implementations of ABC, the use of sequential Monte Carlo (SMC) methods has recently been suggested. Most effective SMC algorithms currently available for ABC have a computational complexity that is quadratic in the number of Monte Carlo samples (Beaumont et al., Biometrika 96:983–990, 2009; Peters et al., Technical report, 2008; Toni et al., J. Roy. Soc. Interface 6:187–202, 2009) and require the careful choice of simulation parameters. In this article an adaptive SMC algorithm is proposed which admits a computational complexity that is linear in the number of samples and adaptively determines the simulation parameters. We demonstrate our algorithm on a toy example and on a birth-death-mutation model arising in epidemiology.
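As a baseline for comparison, a plain ABC rejection sampler in Python on a toy normal-mean example; the adaptive SMC algorithm proposed in the article is considerably more elaborate and is not reproduced here. The model, prior and distance function are illustrative choices.

```python
import numpy as np

def abc_rejection(y_obs, prior_sample, simulate, distance,
                  n_keep=500, n_prop=20_000, seed=0):
    """Plain ABC rejection sampler: draw parameters from the prior,
    simulate data, and keep the proposals whose simulated data lie
    closest to y_obs (an implicit tolerance)."""
    rng = np.random.default_rng(seed)
    thetas = prior_sample(rng, n_prop)
    dists = np.array([distance(simulate(rng, th), y_obs) for th in thetas])
    keep = np.argsort(dists)[:n_keep]
    return thetas[keep]

# Toy model: y ~ N(theta, 1), theta ~ N(0, 10); summary = sample mean.
rng = np.random.default_rng(42)
y_obs = rng.normal(2.0, 1.0, size=50)
post = abc_rejection(
    y_obs,
    prior_sample=lambda r, n: r.normal(0.0, np.sqrt(10.0), n),
    simulate=lambda r, th: r.normal(th, 1.0, size=50),
    distance=lambda y, y0: abs(y.mean() - y0.mean()),
)
print("ABC posterior mean/sd:", post.mean().round(3), post.std().round(3))
```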

15.
Generally, confidence regions for the probabilities of a multinomial population are constructed based on the Pearson χ2 statistic. Morales et al. (Bootstrap confidence regions in multinomial sampling. Appl Math Comput. 2004;155:295–315) considered bootstrap and asymptotic confidence regions based on a broader family of test statistics known as power-divergence test statistics. In this study, we extend their work and propose confidence regions based on penalized power-divergence test statistics. We consider only small sample sizes, where asymptotic properties fail and alternative methods are needed. Both bootstrap and asymptotic confidence regions are constructed. We consider the percentile and the bias-corrected and accelerated bootstrap confidence regions; the latter has not previously been studied for the power-divergence statistics, much less for the penalized ones. Designed simulation studies are carried out to calculate average coverage probabilities, and the mean absolute deviation between actual and nominal coverage probabilities is used to compare the proposed confidence regions.
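A sketch in Python of the Cressie-Read power-divergence statistic on which such confidence regions are built: a region collects all probability vectors whose statistic falls below a chi-square or bootstrap cutoff. The penalized variants proposed in the paper are not shown.

```python
import numpy as np

def power_divergence(obs, probs, lam=2/3):
    """Cressie-Read power-divergence statistic for multinomial counts:
    lam = 1 gives Pearson's chi-square; lam -> 0 gives the likelihood
    ratio statistic G^2; lam -> -1 gives modified G^2.  Assumes all
    observed counts are positive in the limiting cases."""
    obs = np.asarray(obs, dtype=float)
    exp = obs.sum() * np.asarray(probs, dtype=float)
    if np.isclose(lam, 0):                       # G^2 limit
        return 2 * np.sum(obs * np.log(obs / exp))
    if np.isclose(lam, -1):                      # modified G^2 limit
        return 2 * np.sum(exp * np.log(exp / obs))
    return 2 / (lam * (lam + 1)) * np.sum(obs * ((obs / exp) ** lam - 1))

counts = np.array([18, 7, 5])                    # a small-sample multinomial
p0 = np.array([0.5, 0.3, 0.2])                   # hypothesized probabilities
for lam in (1.0, 2/3, 0.0):
    print(f"lambda={lam:5.2f}: PD = {power_divergence(counts, p0, lam):.3f}")
```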

16.
This paper develops the theory of calibration estimation and proposes a calibration approach, alternative to existing calibration estimators, for estimating the population mean of the study variable using an auxiliary variable in stratified sampling. The theory of the new calibration estimation is given and optimum weights are derived. A simulation study is carried out to compare the performance of the proposed calibration estimators with other existing calibration estimators. The results reveal that the proposed calibration estimators are more efficient than the calibration estimators of Tracy et al., Singh et al., and Singh for the population mean.
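For concreteness, a sketch in Python of calibration weighting under the chi-square distance, which has a closed-form solution and reproduces the classical (GREG-type) calibration estimator. The stratified estimators compared in the paper differ in their constraints and tuning, and the toy population and sample design below are assumptions of the sketch.

```python
import numpy as np

def calibrate_chisq(d, x, X_total):
    """Calibration weights under the chi-square distance: minimize
    sum((w - d)^2 / d) subject to sum(w * x) = X_total, which has the
    closed form w_i = d_i * (1 + lam * x_i)."""
    d, x = np.asarray(d, float), np.asarray(x, float)
    lam = (X_total - np.sum(d * x)) / np.sum(d * x ** 2)
    return d * (1 + lam * x)

# Toy population with a known auxiliary total; SRS of n = 100 from N = 2000.
rng = np.random.default_rng(9)
N, n = 2000, 100
x_pop = rng.gamma(4.0, 2.0, N)
y_pop = 3.0 + 1.5 * x_pop + rng.normal(0, 2.0, N)
idx = rng.choice(N, n, replace=False)
d = np.full(n, N / n)                            # design weights
w = calibrate_chisq(d, x_pop[idx], x_pop.sum())
print("HT estimate of mean Y:         ", (d * y_pop[idx]).sum() / N)
print("calibration estimate of mean Y:", (w * y_pop[idx]).sum() / N)
print("true mean Y:                   ", y_pop.mean())
```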

17.
This paper offers a procedure for specifying probabilities for students to select answers on a multiple-choice test that, unlike previous procedures, satisfies all three of the following structural consistency conditions: (1) for any student, the sum over questions of the probabilities that the student will use the correct answers is the student's score on the test; (2) for any student, the sum over possible answers of the probabilities of using the answers is 1.0; and (3) for any answer to any question, the sum over students of the probabilities of using that answer is the number of students who used that answer. When applied to an exam, these fully consistent probabilities had the same power to identify cheaters as the probabilities proposed by Wesolowsky, and noticeably better power than the probabilities suggested by Frary et al.

18.
In this paper, using the notion of independence for random variables under upper expectations, we derive a strong law of large numbers for non-additive probabilities. This result can be seen as an extension of Theorem 3.1 of Chen et al. [A strong law of large numbers for non-additive probabilities. Int J Approx Reason. 2013;54:365–377]. Furthermore, two applications of our result are given.

19.
In the present paper, we consider the situation of a multi-character survey where the study variables, besides being poorly correlated with the selection probabilities, are also sensitive in nature. The randomized response technique (RRT) proposed by Chaudhuri and Adhikary (1990) is used to elicit information on the sensitive character. The empirical study carried out shows the relative efficiency of the transformation suggested by Bansal and Singh (1985) over the transformations suggested by Rao (1966) and Amahia et al. (1989) under a superpopulation model.

20.
Characterization theorems in probability and statistics are widely appreciated for their role in clarifying the structure of the families of probability distributions. Less well known is the role characterization theorems have as a natural, logical and effective starting point for constructing goodness-of-fit tests. The characteristic independence of the mean and variance and of the mean and the third central moment of a normal sample were used, respectively, by Lin and Mudholkar [1980. A simple test for normality against asymmetric alternatives. Biometrika 67, 455–461] and by Mudholkar et al. [2002a. Independence characterizations and testing normality against skewness-kurtosis alternatives. J. Statist. Plann. Inference 104, 485–501] for developing tests of normality. The characteristic independence of the maximum likelihood estimates of the population parameters was similarly used by Mudholkar et al. [2002b. Independence characterization and inverse Gaussian goodness-of-fit. Sankhya A 63, 362–374] to develop a test of the composite inverse Gaussian hypothesis. The gamma models are extensively used for applied research in the areas of econometrics, engineering and biomedical sciences; but there are few goodness-of-fit tests available to test if the data indeed come from a gamma population. In this paper we employ Hwang and Hu's [1999. On a characterization of the gamma distribution: the independence of the sample mean and the sample coefficient of variation. Ann. Inst. Statist. Math. 51, 749–753] characterization of the gamma population in terms of the independence of sample mean and coefficient of variation for developing such a test. The asymptotic null distribution of the proposed test statistic is obtained and empirically refined for use with samples of moderate size.
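A quick simulation in Python illustrating the characterization itself: across replications from a gamma population, the sample mean and sample coefficient of variation are essentially uncorrelated, while a lognormal alternative induces correlation. This illustrates the idea behind the test, not the proposed test statistic or its null distribution; the sample sizes and distributions are illustrative.

```python
import numpy as np

def mean_cv_correlation(sampler, n=50, reps=20_000, seed=0):
    """Correlation between the sample mean and the sample coefficient of
    variation across many replications. Under a gamma population these
    are independent (Hwang and Hu, 1999), so the correlation should be
    near zero."""
    rng = np.random.default_rng(seed)
    xs = sampler(rng, (reps, n))
    means = xs.mean(axis=1)
    cvs = xs.std(axis=1, ddof=1) / means
    return np.corrcoef(means, cvs)[0, 1]

print("gamma(shape=2):",
      round(mean_cv_correlation(lambda r, s: r.gamma(2.0, 1.0, s)), 4))
print("lognormal:     ",
      round(mean_cv_correlation(lambda r, s: r.lognormal(0.0, 0.8, s)), 4))
```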
