Similar Documents
13 similar documents found.
1.
Abstract.  In many epidemiological studies, disease occurrences and their rates are naturally modelled by counting processes and their intensities, allowing an analysis based on martingale methods. Applied to the Mantel–Haenszel estimator, these methods lend themselves to the analysis of general control selection sampling designs and the accommodation of time-varying exposures.
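In its classical stratified 2×2 form (which the paper's martingale treatment generalizes), the Mantel–Haenszel estimator is a ratio of stratum-wise cross-product sums. A minimal sketch, with made-up counts:

```python
# Classical Mantel-Haenszel common odds-ratio estimator over K strata.
# Each stratum is (a, b, c, d): exposed cases, exposed controls,
# unexposed cases, unexposed controls. Counts are illustrative.
def mantel_haenszel_or(strata):
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

strata = [(10, 5, 8, 8), (4, 10, 6, 30)]
print(mantel_haenszel_or(strata))   # → 2.0
```

The strength of the estimator is that it pools evidence across strata without requiring the stratum-specific tables to be large.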

2.
This paper presents a study of the performance of simple and counter-matched nested case-control sampling relative to a full cohort study. First we review methods for estimating the regression parameters and the integrated baseline hazard for Cox's proportional hazards model from cohort and case-control data. Then the asymptotic distributional properties of these estimators are recapitulated, and relative efficiency results are presented both for regression and baseline hazard estimation.

3.
Many late-onset diseases are caused by what appears to be a combination of a genetic predisposition to disease and environmental factors. The use of existing cohort studies provides an opportunity to infer genetic predisposition to disease on a representative sample of a study population, now that many such studies are gathering genetic information on the participants. One complication of using existing cohorts is that subjects may be censored due to death prior to genetic sampling, adding a layer of complexity to the analysis. We develop a statistical framework to infer parameters of a latent variables model for disease onset. The latent variables model describes the role of genetic and modifiable risk factors on the onset ages of multiple diseases, and accounts for right-censoring of disease onset ages. The framework also allows for missing genetic information by inferring a subject's unknown genotype through appropriately incorporated covariate information. The model is applied to data gathered in the Framingham Heart Study for measuring the effect of different Apo-E genotypes on the occurrence of various cardiovascular disease events.

4.
Logistic regression analysis is widely applicable to epidemiologic studies concerned with quantifying an association between a study factor (i.e., an exposure variable) and a health outcome (i.e., disease status). This paper reviews the general characteristics of the logistic model and illustrates its use in epidemiologic inquiry. Particular emphasis is given to the control of extraneous variables in the context of follow-up and case-control studies. Techniques for both unconditional and conditional maximum likelihood estimation of the parameters in the logistic model are described and illustrated. A general analysis strategy is also presented which incorporates the assessment of both interaction and confounding in quantifying an exposure-disease association of interest.
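As a rough illustration of unconditional maximum likelihood for the logistic model, the Newton–Raphson sketch below fits an intercept and one exposure coefficient to simulated data; the data and names are illustrative, not taken from the paper:

```python
import numpy as np

# Unconditional maximum likelihood for logit P(D=1|x) = b0 + b1*x,
# fitted by Newton-Raphson (iteratively reweighted least squares).
def fit_logistic(x, y, iters=25):
    X = np.column_stack([np.ones(len(x)), x])     # add intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))       # fitted probabilities
        W = p * (1 - p)                           # IRLS weights
        H = X.T @ (X * W[:, None])                # observed information
        beta += np.linalg.solve(H, X.T @ (y - p)) # Newton step on the score
    return beta

# Simulated exposure-disease data: exposure doubles the odds of disease
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 500)
p_true = 1 / (1 + np.exp(-(-1.0 + np.log(2) * x)))
y = (rng.random(500) < p_true).astype(float)
b0, b1 = fit_logistic(x, y)
print(f"estimated odds ratio: {np.exp(b1):.2f}")
```

With a matched design, the conditional likelihood would replace the unconditional one above; the mechanics of the Newton step are analogous.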

5.
Group sequential trials with time to event end points can be complicated to design. Not only are there unlimited choices for the number of events required at each stage, but for each of these choices, there are unlimited combinations of accrual and follow-up at each stage that provide the required events. Methods are presented for determining optimal combinations of accrual and follow-up for two-stage clinical trials with time to event end points. Optimization is based on minimizing the expected total study length as a function of the expected accrual duration or sample size while providing an appropriate overall size and power. Optimal values of expected accrual duration and minimum expected total study length are given assuming an exponential proportional hazards model comparing two treatment groups. The expected total study length can be substantially decreased by including a follow-up period during which accrual is suspended. Conditions that warrant an interim follow-up period are considered, and the gain in efficiency achieved by including an interim follow-up period is quantified. The gain in efficiency should be weighed against the practical difficulties in implementing such designs. An example is given to illustrate the use of these techniques in designing a clinical trial to compare two chemotherapy regimens for lung cancer. Practical considerations of including an interim follow-up period are discussed.
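Under the exponential model, the expected number of events at a given analysis time has a closed form, so the trade-off between accrual and follow-up can be explored numerically. The single-stage sketch below uses made-up rates and targets; the paper's two-stage optimization is more involved:

```python
import math

# Expected number of events when n patients accrue uniformly over [0, a]
# and the analysis is at calendar time a + f, with exponential survival
# of hazard lam (events per patient-year).
def expected_events(n, a, f, lam):
    return n * (1 - (math.exp(-lam * f) - math.exp(-lam * (a + f))) / (lam * a))

# Shortest total study length a + f giving >= target events, when the
# accrual duration is fixed by the accrual rate (illustrative numbers).
def shortest_design(n=200, rate=50.0, lam=0.3, target=100):
    a = n / rate
    f = 0.0
    while expected_events(n, a, f, lam) < target:
        f += 0.01            # extend follow-up until enough events expected
    return a, f, a + f

a, f, total = shortest_design()
print(f"accrue {a:.1f}y, follow up {f:.2f}y, total {total:.2f}y")
```

The same event-expectation function is the building block for comparing designs that do or do not suspend accrual during an interim follow-up period.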

6.
Millions of smart meters that are able to collect individual load curves, that is, electricity consumption time series, of residential and business customers at fine scale time grids are now deployed by electricity companies all around the world. It may be complex and costly to transmit and exploit such a large quantity of information, therefore it can be relevant to use survey sampling techniques to estimate mean load curves of specific groups of customers. Data collection, like every mass process, may undergo technical problems at every point of the metering and collection chain resulting in missing values. We consider imputation approaches (linear interpolation, kernel smoothing, nearest neighbours, principal analysis by conditional estimation) that take advantage of the specificities of the data, that is to say the strong relation between the consumption at different instants of time. The performance of these techniques is compared on a real example of Irish electricity load curves under various scenarios of missing data. A general variance approximation of total estimators is also given which encompasses nearest neighbours, kernel smoothers imputation and linear imputation methods. The Canadian Journal of Statistics 47: 65–89; 2019 © 2018 Statistical Society of Canada
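Of the imputation approaches listed, linear interpolation is the simplest to sketch: each missing instant is filled in from the nearest observed instants, exploiting the strong dependence between consecutive time points. The curve below is a toy example, not the Irish data:

```python
import numpy as np

# Linear-interpolation imputation for a load curve: NaN entries are
# replaced by straight-line interpolation between observed instants.
def impute_linear(curve):
    y = np.array(curve, dtype=float)      # copy so the input is untouched
    idx = np.arange(len(y))
    ok = ~np.isnan(y)
    y[~ok] = np.interp(idx[~ok], idx[ok], y[ok])
    return y

curve = [2.0, np.nan, np.nan, 5.0, 4.0]
print(impute_linear(curve))   # → [2. 3. 4. 5. 4.]
```

Kernel smoothing or nearest-neighbour imputation would replace the straight-line fill with a weighted average over neighbouring instants or similar curves.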

7.
This research focuses on the estimation of tumor incidence rates from long-term animal studies which incorporate interim sacrifices. A nonparametric stochastic model is described with transition rates between states corresponding to the tumor incidence rate, the overall death rate, and the death rate for tumor-free animals. Exact analytic solutions for the maximum likelihood estimators of the hazard rates are presented, and their application to data from a long-term animal study is illustrated by an example. Unlike many common methods for estimation and comparison of tumor incidence rates among treatment groups, the estimators derived in this paper require no assumptions regarding tumor lethality or treatment lethality. The small sample operating characteristics of these estimators are evaluated using Monte Carlo simulation studies.

8.
In clinical trials with binary endpoints, the required sample size depends not only on the specified type I error rate, the desired power and the treatment effect, but also on the overall event rate, which, however, is usually uncertain. The internal pilot study design has been proposed to overcome this difficulty. Here, nuisance parameters required for sample size calculation are re-estimated during the ongoing trial and the sample size is recalculated accordingly. We performed extensive simulation studies to investigate the characteristics of the internal pilot study design for two-group superiority trials where the treatment effect is captured by the relative risk. As the performance of the sample size recalculation procedure crucially depends on the accuracy of the applied sample size formula, we first explored the precision of three approximate sample size formulae proposed in the literature for this situation. It turned out that the unequal variance asymptotic normal formula outperforms the other two, especially in the case of unbalanced sample size allocation. Using this formula for sample size recalculation in the internal pilot study design assures that the desired power is achieved even if the overall rate is mis-specified in the planning phase. The maximum inflation of the type I error rate observed for the internal pilot study design is small and lies below the maximum excess that occurred for the fixed sample size design.
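An unpooled ("unequal variance") normal-approximation sample size for two proportions can be written down directly; the abstract does not give the authors' exact formula, so the version and the planning numbers below are illustrative only:

```python
from math import ceil
from statistics import NormalDist

# Unpooled normal-approximation sample size per group for comparing two
# proportions, with the effect specified as a relative risk rr (p2 = p1*rr).
def n_per_group(p1, rr, alpha=0.05, power=0.80):
    z = NormalDist().inv_cdf
    p2 = p1 * rr
    var = p1 * (1 - p1) + p2 * (1 - p2)          # unpooled variance term
    return ceil((z(1 - alpha / 2) + z(power)) ** 2 * var / (p1 - p2) ** 2)

# Internal pilot idea: re-estimate the overall rate mid-trial, recompute n
n_planned = n_per_group(0.30, rr=0.5)   # planning assumption: control rate 0.30
n_revised = n_per_group(0.40, rr=0.5)   # pilot data suggest the rate is 0.40
print(n_planned, n_revised)
```

The recalculation step is exactly this: plug the re-estimated nuisance rate into the same formula and continue the trial with the revised target.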

9.
For a group‐sequential trial with two pre‐planned analyses, stopping boundaries can be calculated using a simple SAS® programme on the basis of the asymptotic bivariate normality of the interim and final test statistics. Given the simplicity and transparency of this approach, it is appropriate for researchers to apply their own bespoke spending function as long as the rate of alpha spend is pre‐specified. One such application could be an oncology trial where progression free survival (PFS) is the primary endpoint and overall survival (OS) is also assessed, both at the same time as the analysis of PFS and also later following further patient follow‐up. In many circumstances it is likely, if PFS is significantly extended, that the protocol will be amended to allow patients in the control arm to start receiving the experimental regimen. Such an eventuality is likely to result in the diminution of any effect on OS. It is shown that spending a greater proportion of alpha at the first analysis of OS, using either Pocock or bespoke boundaries, will maintain and in some cases result in greater power given a fixed number of events. Copyright © 2009 John Wiley & Sons, Ltd.
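The bivariate-normal calculation behind such boundaries can be reproduced in any numerical environment; the abstract mentions a SAS programme, and the SciPy sketch below is an independent reconstruction for two equally spaced analyses, where corr(Z1, Z2) = sqrt(1/2):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import multivariate_normal

# Two-sided crossing probability for a common boundary c at two analyses:
# (Z1, Z2) are bivariate normal with correlation sqrt(t1/t2).
def crossing_prob(c, corr):
    mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, corr], [corr, 1.0]])
    inside = (mvn.cdf([c, c]) - mvn.cdf([c, -c])
              - mvn.cdf([-c, c]) + mvn.cdf([-c, -c]))
    return 1.0 - inside          # P(|Z1| > c or |Z2| > c)

corr = np.sqrt(0.5)              # interim at half the final information
c = brentq(lambda x: crossing_prob(x, corr) - 0.05, 1.5, 3.0)
print(f"common (Pocock-type) boundary: {c:.3f}")   # about 2.18
```

A bespoke spending function only changes the target spent at the first analysis; the second boundary is then solved from the same bivariate-normal identity.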

10.
Lin Shaogong (林少宫), Statistical Research (《统计研究》), 2007, 24(2): 84-86
Abstract: This paper discusses the important role of experimental design thinking in microeconometric analysis. Starting from the three principles of experimental design (local control, randomization and replication), it reinterprets methodological issues in microeconometrics, in particular "causal chain" analysis. The author expects experimental economics and microeconometrics to advance each other through the ideas of experimental design, and anticipates the appearance of readings of the type "econometric design and analysis".

11.
When characterizing a therapy, efficacy and safety are the two major aspects under consideration. In prescribing a therapy to a patient, a clinician puts the two aspects together and makes a decision based on a consolidated thought process. The global benefit-risk (GBR) measures proposed by Chuang-Stein et al. (Stat. Med. 1991; 10:1349-1359) are useful in facilitating this thinking and in creating the framework for making statistical comparisons from a benefit-risk point of view. This article describes how a GBR linear score was defined and used as the primary outcome measure in a clinical trial design. The robustness of the definitions of 'benefit' and 'risk' is evaluated using different criteria. The sensitivity of the pre-specified weights is also analyzed using alternative weights, one of which was determined by the ridit ('relative to an identified distribution') transformation approach (Biometrics 1958; 14:18-38). Statistical considerations are illustrated using pooled data from clinical trials studying an antidepressant. The pros and cons of using GBR assessments in the setting of clinical trials are discussed.
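A GBR linear score of the Chuang-Stein type assigns each patient to one of a few ordered benefit-risk categories and averages pre-specified weights over an arm. The categories and weights below are illustrative, not those of the trial described:

```python
# Global benefit-risk linear score: each patient falls in one ordered
# category; arms are compared on the mean weighted score.
# Category labels and weights here are hypothetical examples.
WEIGHTS = {
    "benefit, no AE": 2,
    "benefit, with AE": 1,
    "no benefit, no AE": 0,
    "no benefit, with AE": -1,
    "withdrawal due to AE": -2,
}

def gbr_score(categories):
    return sum(WEIGHTS[c] for c in categories) / len(categories)

treatment = ["benefit, no AE"] * 6 + ["benefit, with AE"] * 2 + ["no benefit, with AE"] * 2
control = ["benefit, no AE"] * 3 + ["no benefit, no AE"] * 5 + ["withdrawal due to AE"] * 2
print(gbr_score(treatment), gbr_score(control))
```

The sensitivity analyses mentioned in the abstract amount to re-running this comparison with alternative weight vectors, such as ridit-based weights.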

12.
We study a factor analysis model with two normally distributed observations and one factor. In the case when the errors have equal variance, the maximum likelihood estimate of the factor loading is given in closed form. Exact and approximate distributions of the maximum likelihood estimate are considered. The exact distribution function is given in a complex form that involves the incomplete Beta function. Approximations to the distribution function are given for the cases of large sample sizes and small error variances. The accuracy of the approximations is discussed.
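Under this model, x1 = λf + e1 and x2 = λf + e2 with f standard normal, so Cov(x1, x2) = λ² and a closed-form loading estimate is the square root of the sample covariance. The simulation check below uses this moment form, which may differ in small details from the paper's exact ML expression:

```python
import numpy as np

# One-factor model with two observed variables and equal error variance:
#   x1 = lam*f + e1,  x2 = lam*f + e2,  f ~ N(0,1), e_j ~ N(0, s2).
# Since Cov(x1, x2) = lam^2, estimate the loading from the sample
# covariance (identified only up to sign).
def estimate_loading(x1, x2):
    s12 = np.cov(x1, x2)[0, 1]
    return np.sqrt(max(s12, 0.0))   # truncate at 0 if covariance is negative

rng = np.random.default_rng(1)
n, lam, sd = 5000, 0.8, 0.5
f = rng.standard_normal(n)
x1 = lam * f + sd * rng.standard_normal(n)
x2 = lam * f + sd * rng.standard_normal(n)
print(round(estimate_loading(x1, x2), 2))
```

The truncation at zero is exactly where the sampling distribution becomes non-standard, which is why the exact distribution involves the incomplete Beta function.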

13.
The authors consider the effect of orchard attributes and landscape in a heterogeneous area on the efficacy of a control program for the codling moth in apple orchards in British Columbia. The context is first presented, along with a set of questions of importance to the Okanagan Valley Sterile Insect Release program. Two groups of analysts then address a number of these issues using methods for spatial‐temporal data including counts, proportions and Bernoulli variables. The models are then compared and the relevance of the results to this operational program is discussed.

