Similar Articles
13 similar articles found.
1.
Abstract. Family‐based case–control designs are commonly used in epidemiological studies to evaluate the role of genetic susceptibility and environmental exposure to risk factors in the etiology of rare diseases. Within this framework, it is often reasonable to assume that genetic susceptibility and environmental exposure are conditionally independent of each other within families in the source population. We focus on this setting to explore the situation in which measurement error affects the assessment of the environmental exposure. We correct for measurement error through a likelihood‐based method, exploiting a conditional likelihood approach to relate the probability of disease to the genetic and environmental risk factors. We show that this approach provides less biased and more efficient results than those based on logistic regression. Regression calibration, in contrast, provides severely biased estimators of the parameters. The correction methods are compared through simulation under common measurement error structures.
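To make the comparison concrete, the following is a minimal sketch of regression calibration, the comparison method that the abstract finds badly biased in this setting; the likelihood-based correction the paper actually proposes is not reproduced here. All variable names, parameter values and simulated data are hypothetical, and the measurement error variance is taken as known.

```python
# Regression calibration for a mismeasured exposure in logistic regression:
# replace the error-prone measurement W by E[X | W] before fitting.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
G = rng.binomial(1, 0.3, n)            # genetic susceptibility indicator
X = rng.normal(0.0, 1.0, n)            # true (unobserved) exposure
W = X + rng.normal(0.0, 0.5, n)        # classical measurement error, var 0.25

# Hypothetical true disease model: logit P(D = 1) = -2 + 0.8 X + 0.6 G.
D = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-2.0 + 0.8 * X + 0.6 * G))))

# Calibration step: E[X | W] = mu + k (W - mu), k = var(X) / (var(X) + var(U)).
var_x, var_u = 1.0, 0.25
k = var_x / (var_x + var_u)
X_hat = W.mean() + k * (W - W.mean())

naive = LogisticRegression(C=1e6).fit(np.column_stack([W, G]), D)   # large C ~ no penalty
calib = LogisticRegression(C=1e6).fit(np.column_stack([X_hat, G]), D)
print("naive exposure slope:     ", naive.coef_[0][0])   # attenuated
print("calibrated exposure slope:", calib.coef_[0][0])   # closer to 0.8
```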

2.
Testing the goodness‐of‐fit of commonly used genetic models is of critical importance in many applications, including association studies and testing for departure from Hardy–Weinberg equilibrium. The case–control design has become widely used in population genetics and genetic epidemiology, so it is of interest to develop powerful goodness‐of‐fit tests for genetic models using case–control data. This paper develops a likelihood ratio test (LRT) for testing recessive and dominant models in case–control studies. The LRT statistic has a closed‐form formula with a simple $\chi^{2}(1)$ null asymptotic distribution, so it is easy to implement even for genome‐wide association studies. Moreover, it has the same power and optimality as when the disease prevalence in the population is known. The Canadian Journal of Statistics 41: 341–352; 2013 © 2013 Statistical Society of Canada
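The paper's closed-form statistic is not reproduced here, but the sketch below illustrates the general idea of a 1-df likelihood ratio test of a recessive model, comparing it against the saturated (codominant) genotype model via prospective logistic fits; the simulated counts and effect sizes are hypothetical.

```python
# Likelihood-ratio check of a recessive genetic model against the general
# two-parameter genotype model: 1 degree of freedom, chi-square(1) reference.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(1)
# Genotype = number of risk-allele copies, Hardy-Weinberg with frequency 0.3.
g = rng.choice([0, 1, 2], size=4000, p=[0.49, 0.42, 0.09])
logit = -1.0 + 1.2 * (g == 2)          # data generated under a recessive model
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Recessive model: a single indicator for two copies of the risk allele.
X_rec = sm.add_constant((g == 2).astype(float))
ll_rec = sm.Logit(y, X_rec).fit(disp=0).llf

# General (codominant) model: separate indicators for one and two copies.
X_gen = sm.add_constant(np.column_stack([(g == 1).astype(float),
                                         (g == 2).astype(float)]))
ll_gen = sm.Logit(y, X_gen).fit(disp=0).llf

lrt = 2 * (ll_gen - ll_rec)
print("LRT =", lrt, " p =", chi2.sf(lrt, df=1))
```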

3.
We extend the goodness-of-fit test of Qin and Zhang [1997. A goodness-of-fit test for logistic regression models based on case–control data. Biometrika 84, 609–618] for logistic regression under case–control data to continuation ratio logistic regression (CRLR) models. We first show that the retrospective CRLR model, which is valid for case–control data (the null hypothesis H0), is equivalent to an I-sample semiparametric model. Then, under H0, we find the semiparametric profile empirical likelihood estimators of the distributions of the covariate conditional on each response category and use them to define a Kolmogorov–Smirnov type test for assessing the global fit of CRLR models under case–control data. Unlike prospective CRLR models, retrospective CRLR models cannot be partitioned into a series of retrospective binary logistic regression models of the kind studied by Qin and Zhang (1997).

4.
Abstract. We consider a bidimensional Ornstein–Uhlenbeck process to describe tissue microvascularization in anti‐cancer therapy. Data are discrete, partial and noisy observations of this stochastic differential equation (SDE), and our aim is to estimate the SDE parameters. Exploiting the fact that the observation is one‐dimensional, we compute the exact likelihood through the Kalman filter recursion, which makes numerical maximization of the likelihood straightforward. Furthermore, we establish the link between the observations and an ARMA process, from which we deduce the asymptotic properties of the maximum likelihood estimator; this ARMA property generalizes to a higher‐dimensional underlying Ornstein–Uhlenbeck diffusion. We compare this estimator with the one obtained by the well‐known expectation‐maximization algorithm on simulated data. Our estimation methods can be applied directly to other biological contexts such as drug pharmacokinetics or hormone secretion.
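Here is a minimal sketch of the exact-likelihood computation via the Kalman filter for a partially observed Ornstein–Uhlenbeck process; the drift and diffusion matrices, noise level and time step below are hypothetical, not the paper's microvascularization model. Wrapping `kalman_loglik` in a numerical optimizer over the parameters would give the MLE.

```python
# Exact Gaussian log-likelihood for dX = -B X dt + S dW observed through
# y_k = H X(t_k) + eps_k at equal time steps, via Kalman filtering.
import numpy as np
from scipy.linalg import expm

def discretize_ou(B, SS, dt):
    """Exact discretization X_{k+1} = Ad X_k + eta_k, eta_k ~ N(0, Qd),
    computed with Van Loan's matrix-exponential trick (SS = S S^T)."""
    n = B.shape[0]
    M = np.zeros((2 * n, 2 * n))
    M[:n, :n], M[:n, n:], M[n:, n:] = B, SS, -B.T
    F = expm(M * dt)
    Ad = F[n:, n:].T                 # = expm(-B dt)
    Qd = Ad @ F[:n, n:]
    return Ad, Qd

def kalman_loglik(y, Ad, Qd, H, R, x0, P0):
    """Log-likelihood of scalar observations y; H has shape (1, n)."""
    x, P, ll = x0, P0, 0.0
    for yk in y:
        x = Ad @ x                               # predict
        P = Ad @ P @ Ad.T + Qd
        S = float(H @ P @ H.T) + R               # innovation variance
        v = yk - float(H @ x)                    # innovation
        K = (P @ H.T) / S                        # Kalman gain, shape (n, 1)
        x = x + (K * v).ravel()                  # update
        P = P - K @ (H @ P)
        ll += -0.5 * (np.log(2 * np.pi * S) + v ** 2 / S)
    return ll

# Hypothetical parameters; simulate one trajectory and evaluate the likelihood.
B = np.array([[1.0, -0.5], [0.0, 2.0]])
SS = np.diag([0.5, 0.8])
Ad, Qd = discretize_ou(B, SS, dt=0.1)
H = np.array([[1.0, 0.0]])                       # only coordinate 1 observed
rng = np.random.default_rng(9)
x, ys = np.zeros(2), []
for _ in range(200):
    x = Ad @ x + rng.multivariate_normal(np.zeros(2), Qd)
    ys.append(float(H @ x) + rng.normal(0, 0.1))
print(kalman_loglik(np.array(ys), Ad, Qd, H, R=0.01,
                    x0=np.zeros(2), P0=np.eye(2)))
```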

5.
Abstract. We investigate non‐parametric estimation of a monotone baseline hazard and a decreasing baseline density within the Cox model. Two estimators of a non‐decreasing baseline hazard function are proposed: the non‐parametric maximum likelihood estimator, and a Grenander type estimator defined as the left‐hand slope of the greatest convex minorant of the Breslow estimator. We demonstrate that the two estimators are strongly consistent and asymptotically equivalent, and derive their common limit distribution at a fixed point. Estimators of a non‐increasing baseline hazard and their asymptotic properties are obtained in a similar manner. Furthermore, we introduce a Grenander type estimator for a non‐increasing baseline density, defined as the left‐hand slope of the least concave majorant of an estimator of the baseline cumulative distribution function derived from the Breslow estimator. We show that this estimator is strongly consistent and derive its asymptotic distribution at a fixed point.
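The classical i.i.d. version of the construction gives the flavor: the sketch below computes the Grenander estimator of a non-increasing density as the left-hand slope of the least concave majorant of the empirical CDF. It is a simplified stand-in for the paper's estimator (which replaces the empirical CDF by one derived from the Breslow estimator), and it assumes distinct observations.

```python
# Grenander estimator of a non-increasing density on [0, inf): slopes of the
# least concave majorant (= upper convex hull) of the empirical CDF.
import numpy as np

def grenander(x):
    """Returns hull knots and the piecewise-constant density between them."""
    x = np.sort(np.asarray(x, dtype=float))      # assumes distinct values
    n = len(x)
    px = np.concatenate([[0.0], x])              # ECDF points, origin prepended
    py = np.arange(n + 1) / n
    hull = [0]
    for i in range(1, n + 1):
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            # pop b if the chord a->b is no steeper than a->i (b lies below)
            if (py[b] - py[a]) * (px[i] - px[a]) <= (py[i] - py[a]) * (px[b] - px[a]):
                hull.pop()
            else:
                break
        hull.append(i)
    hull = np.array(hull)
    slopes = np.diff(py[hull]) / np.diff(px[hull])   # non-increasing by construction
    return px[hull], slopes

rng = np.random.default_rng(2)
knots, dens = grenander(rng.exponential(1.0, size=500))
# dens[k] estimates the density on (knots[k], knots[k+1]]; true f(0) = 1 here.
print(dens[:3])
```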

6.
This paper deals with statistical inference on the parameters of a stochastic model, based on multivariate autoregressive processes, that describes curved fibrous objects in three dimensions. The model is fitted to experimental data consisting of a large number of short, independently sampled trajectories of multivariate autoregressive processes. We discuss relevant statistical properties of the maximum likelihood (ML) estimators for such processes, e.g. their asymptotic behaviour as the number of trajectories tends to infinity. Numerical studies are also performed to analyse some of the less tractable properties of the ML estimators. Finally, the whole methodology, i.e. the fibre model and its statistical inference, is applied to the tracking of fibres in real materials.
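As a sketch of the pooled-trajectory idea, the code below computes the conditional ML estimator of a zero-mean Gaussian VAR(1) from many short, independently sampled trajectories; conditioning on each trajectory's first point reduces the MLE of the coefficient matrix to pooled least squares. The dimension, trajectory lengths and true coefficient matrix are hypothetical.

```python
# Pooled conditional ML for a zero-mean Gaussian VAR(1): X_t = A X_{t-1} + e_t.
import numpy as np

def fit_var1(trajectories):
    """trajectories: list of (T_j, d) arrays. Returns (A_hat, Sigma_hat)."""
    d = trajectories[0].shape[1]
    Sxx = np.zeros((d, d))          # sum of x_{t-1} x_{t-1}^T over all steps
    Syx = np.zeros((d, d))          # sum of x_t x_{t-1}^T over all steps
    n_steps = 0
    for x in trajectories:
        past, now = x[:-1], x[1:]
        Sxx += past.T @ past
        Syx += now.T @ past
        n_steps += len(past)
    A = Syx @ np.linalg.inv(Sxx)
    Sigma = np.zeros((d, d))        # conditional MLE of innovation covariance
    for x in trajectories:
        r = x[1:] - x[:-1] @ A.T
        Sigma += r.T @ r
    return A, Sigma / n_steps

# Simulate many short trajectories under a hypothetical A and check recovery.
rng = np.random.default_rng(3)
A_true = np.array([[0.8, 0.1, 0.0], [0.0, 0.7, 0.2], [0.0, 0.0, 0.9]])
trajs = []
for _ in range(2000):                      # many trajectories ...
    x = np.zeros((10, 3))                  # ... each of length 10
    for t in range(1, 10):
        x[t] = A_true @ x[t - 1] + rng.normal(0, 0.1, 3)
    trajs.append(x)
print(np.round(fit_var1(trajs)[0], 2))
```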

7.
Small area estimation has long been a popular and important research topic owing to growing demand in the public and private sectors. We consider the basic area level model, popularly known as the Fay–Herriot model. Although much current research focuses on second order unbiased estimation of mean squared prediction errors, we concentrate on developing confidence intervals (CIs) for the small area means that are second order correct. The corrected CI can be readily implemented, because it only requires quantities that are already estimated as part of the mean squared error estimation. We extend the approach to a CI for the difference of two small area means. The findings are illustrated with a simulation study.
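For orientation, here is a minimal sketch of the Fay–Herriot EBLUP with a simple Prasad–Rao moment estimate of the model variance; the paper's second-order CI correction is not shown, and the covariates, sampling variances and simulated data are hypothetical.

```python
# EBLUP under the Fay-Herriot area-level model y_i = x_i' beta + v_i + e_i,
# with v_i ~ N(0, A) and known sampling variances D_i for e_i.
import numpy as np

def fay_herriot_eblup(y, X, D):
    m, p = X.shape
    # Prasad-Rao moment estimator of the model variance A, truncated at zero.
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta_ols
    h = np.einsum('ij,ji->i', X @ np.linalg.inv(X.T @ X), X.T)   # leverages
    A = max(0.0, (resid @ resid - (D * (1 - h)).sum()) / (m - p))
    # GLS estimate of beta given A, then shrink toward the synthetic part.
    W = 1.0 / (A + D)
    beta = np.linalg.solve((X.T * W) @ X, (X.T * W) @ y)
    gamma = A / (A + D)                        # shrinkage weights
    theta = gamma * y + (1 - gamma) * (X @ beta)
    return theta, beta, A

rng = np.random.default_rng(8)
m = 30
X = np.column_stack([np.ones(m), rng.normal(0, 1, m)])
D = rng.uniform(0.3, 1.0, m)                   # known sampling variances
theta_true = X @ np.array([1.0, 0.5]) + rng.normal(0, np.sqrt(0.5), m)
y = theta_true + rng.normal(0, np.sqrt(D))
theta_hat, beta_hat, A_hat = fay_herriot_eblup(y, X, D)
print("A_hat =", A_hat, " beta_hat =", beta_hat)
```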

8.
This paper considers the maximin approach for designing clinical studies. A maximin efficient design maximizes the smallest efficiency, compared with a standard design, as the parameters vary over a specified subset of the parameter space. To specify this subset of parameters in a real situation, a four‐step elicitation procedure based on expert opinion is proposed. We also describe why and how we extend the initially chosen subset of parameters to a much larger set in our procedure; this makes the maximin approach feasible for dose‐finding studies. Maximin efficient designs have proved numerically difficult to construct; however, a new algorithm, the H‐algorithm, considerably simplifies their construction. We exemplify the maximin efficient approach with a sigmoid Emax model describing a dose–response relationship and compare the inferential precision with that obtained using a uniform design. The design obtained is shown to be at least 15% more efficient than the uniform design. © 2014 The Authors. Pharmaceutical Statistics Published by John Wiley & Sons Ltd.

9.
A complication that may arise in some bioequivalence studies is that of ‘incomplete subject profiles’, caused by missing values at one or more sampling points in the concentration–time curve for some study subjects. We assess the impact of incomplete subject profiles on the assessment of bioequivalence in a standard two‐period crossover design. The specific aim of the investigation is to assess the impact of four different patterns of missing concentration values on the coverage level of a 90% nominal two‐sided confidence interval for the ratio of geometric means, and then to consider the impact on the probability of concluding bioequivalence. An overall conclusion from the results is that random missingness – that is, missingness for reasons unrelated to the bioavailability of the formulation involved or, more generally, to any aspect of the study design and conduct – damages the study conclusions only when the number of missing values is fairly large. On the other hand, a missingness pattern that is potentially very damaging to the study conclusions arises when values are missing ‘late’ in the concentration–time curve. Copyright © 2005 John Wiley & Sons, Ltd.
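As context for the coverage assessment, here is a minimal sketch of the standard analysis the missingness disturbs: a 90% two-sided CI for the ratio of geometric means from a 2x2 crossover, computed on the log scale from complete profiles. The data, sample sizes and variance components are hypothetical, and period effects are removed by the usual sequence-wise half differences.

```python
# 90% CI for the test/reference ratio of geometric means, 2x2 crossover.
import numpy as np
from scipy import stats

def be_ci_2x2(logp1_TR, logp2_TR, logp1_RT, logp2_RT, level=0.90):
    """Inputs: per-subject log AUC (or Cmax) by sequence and period.
    Sequence TR receives test in period 1; sequence RT in period 2."""
    d_TR = (logp1_TR - logp2_TR) / 2.0    # half period differences per subject
    d_RT = (logp1_RT - logp2_RT) / 2.0
    n1, n2 = len(d_TR), len(d_RT)
    est = d_TR.mean() - d_RT.mean()       # log(GM ratio); period effects cancel
    s2 = ((n1 - 1) * d_TR.var(ddof=1) + (n2 - 1) * d_RT.var(ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(s2 * (1 / n1 + 1 / n2))
    t = stats.t.ppf(1 - (1 - level) / 2, df=n1 + n2 - 2)
    return np.exp(est), np.exp(est - t * se), np.exp(est + t * se)

rng = np.random.default_rng(4)
subj = rng.normal(0, 0.3, 24)                      # subject effects, 12/sequence
eps = rng.normal(0, 0.15, (24, 2))                 # within-subject error
true_ratio = np.log(0.95)
logp1_TR = subj[:12] + true_ratio + eps[:12, 0]    # test in period 1
logp2_TR = subj[:12] + eps[:12, 1]                 # reference in period 2
logp1_RT = subj[12:] + eps[12:, 0]                 # reference in period 1
logp2_RT = subj[12:] + true_ratio + eps[12:, 1]    # test in period 2
ratio, lo, hi = be_ci_2x2(logp1_TR, logp2_TR, logp1_RT, logp2_RT)
print(f"GM ratio {ratio:.3f}, 90% CI ({lo:.3f}, {hi:.3f})")  # BE if in (0.80, 1.25)
```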

10.
Linear mixed‐effects models (LMEMs) of concentration–double‐delta QTc (ddQTc) intervals (QTc intervals corrected for placebo and baseline effects) assume that the concentration measurement error is negligible, which is incorrect. Previous studies of linear models have shown that error in the independent variable can attenuate the slope estimate, with a corresponding increase in the intercept. Monte Carlo simulation was used to examine the impact of assay measurement error (AME) on the parameter estimates of LMEM and nonlinear MEM (NMEM) concentration–ddQTc interval models from a ‘typical’ thorough QT study. For the LMEM, the type I error rate was unaffected by AME. Significant slope attenuation (>10%) occurred when the AME exceeded 40%, independent of the sample size. Increasing AME also decreased the between‐subject variance of the slope, increased the residual variance, and had no effect on the between‐subject variance of the intercept. For a typical analytical assay with an AME of less than 15%, the relative bias in the estimates of the model parameters and variance components was less than 15% in all cases. The NMEM appeared to be more robust to AME, as most parameters were unaffected by measurement error. Monte Carlo simulation was then used to determine whether the simulation–extrapolation (SIMEX) method of parameter bias correction could be applied to cases of large AME in LMEMs. For analytical assays with large AME (>30%), the simulation–extrapolation method could correct biased model parameter estimates to near‐unbiased levels. Copyright © 2013 John Wiley & Sons, Ltd.
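The simulation–extrapolation idea is easy to demonstrate in a plain linear model: inflate the measurement error in steps, track how the slope estimate degrades, and extrapolate the trend back to the error-free case. The sketch below uses hypothetical data and a known error variance, not the LMEM of a thorough QT study.

```python
# SIMEX correction of attenuation from additive measurement error.
import numpy as np

rng = np.random.default_rng(5)
n, beta = 500, 1.0
x = rng.normal(0, 1, n)                    # true concentration (unobserved)
w = x + rng.normal(0, 0.6, n)              # observed, known error sd 0.6
y = beta * x + rng.normal(0, 0.5, n)

def slope(w, y):
    return np.cov(w, y)[0, 1] / np.var(w, ddof=1)

# Step 1: add extra error of variance lambda * 0.6^2 and refit repeatedly.
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
sims = []
for lam in lams:
    est = [slope(w + np.sqrt(lam) * 0.6 * rng.normal(0, 1, n), y)
           for _ in range(200)]
    sims.append(np.mean(est))

# Step 2: extrapolate the trend back to lambda = -1 (no measurement error),
# here with a quadratic in lambda.
coef = np.polyfit(lams, sims, deg=2)
print("naive slope:", slope(w, y))              # attenuated, about 0.74
print("SIMEX slope:", np.polyval(coef, -1.0))   # approximately beta = 1.0
```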

11.
In recent years, immunological science has evolved, and cancer vaccines have become available for treating existing cancers. Because cancer vaccines require time to elicit an immune response, a delayed treatment effect is expected. Accordingly, the use of weighted log‐rank tests with the Fleming–Harrington class of weights is proposed for the evaluation of survival endpoints. We present a method for calculating the sample size under the assumption of a piecewise exponential distribution for the cancer vaccine group and an exponential distribution for the placebo group as the survival model. The impact of the timing of the delayed effect on both the choice of the Fleming–Harrington weights and the increase in the required number of events is discussed. Copyright © 2014 John Wiley & Sons, Ltd.
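To illustrate the test being sized, the sketch below computes the Fleming–Harrington G(rho, gamma) weighted log-rank statistic and applies it to simulated data with a delayed effect; the event rates, delay and censoring scheme are hypothetical, and the paper's sample-size formula is not reproduced.

```python
# Fleming-Harrington G(rho, gamma) weighted log-rank test; the weights
# S(t-)^rho * (1 - S(t-))^gamma with rho = 0, gamma = 1 emphasize late
# differences, matching a delayed vaccine effect.
import numpy as np
from scipy import stats

def fh_logrank(time, event, group, rho=0.0, gamma=1.0):
    order = np.argsort(time)
    time, event, group = time[order], event[order], group[order]
    n = len(time)
    s_left = 1.0                  # pooled Kaplan-Meier S(t-)
    U = V = 0.0
    i = 0
    while i < n:
        j = i
        while j < n and time[j] == time[i]:
            j += 1                # handle tied event times together
        at_risk = n - i
        at_risk1 = int((group[i:] == 1).sum())
        d = int(event[i:j].sum())
        d1 = int(event[i:j][group[i:j] == 1].sum())
        if d > 0:
            w = s_left ** rho * (1.0 - s_left) ** gamma
            U += w * (d1 - d * at_risk1 / at_risk)
            if at_risk > 1:
                V += (w ** 2 * d * (at_risk1 / at_risk)
                      * (1 - at_risk1 / at_risk) * (at_risk - d) / (at_risk - 1))
            s_left *= 1.0 - d / at_risk
        i = j
    z = U / np.sqrt(V)
    return z, 2 * stats.norm.sf(abs(z))

# Delayed effect: no benefit before t = 0.5, hazard ratio 0.5 afterwards.
rng = np.random.default_rng(6)
n = 300
grp = np.repeat([0, 1], n // 2)
t0 = rng.exponential(1.0, n)                            # control hazard 1
t1 = np.where(t0 < 0.5, t0, 0.5 + rng.exponential(2.0, n))
t = np.where(grp == 1, t1, t0)
c = rng.uniform(1.0, 3.0, n)                            # administrative censoring
print(fh_logrank(np.minimum(t, c), (t <= c).astype(int), grp))
```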

12.
In software reliability theory many different models have been proposed and investigated. Some of these models intuitively match reality better than others, and the properties of statistical estimation procedures used with these models are likewise model-dependent. In this paper we investigate how well the maximum likelihood estimation procedure and the parametric bootstrap behave for the well-known software reliability model suggested by Jelinski and Moranda (1972). The study makes use of simulated data.
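For reference, here is a minimal sketch of both ingredients on simulated data: the Jelinski–Moranda MLE (the i-th inter-failure time is exponential with rate phi*(N-i+1)) and a parametric bootstrap of the fitted model. The true values N = 50, phi = 0.02, the grid cap and the bootstrap size are hypothetical choices.

```python
# Jelinski-Moranda maximum likelihood plus a parametric bootstrap.
import numpy as np

def jm_loglik(t, N, phi):
    i = np.arange(1, len(t) + 1)
    lam = phi * (N - i + 1)
    return np.sum(np.log(lam) - lam * t)

def jm_mle(t, N_max=1000):
    """Profile phi out (phi_hat(N) = n / sum((N-i+1) t_i)), grid-search N.
    The MLE of N can be degenerate (infinite); the grid cap handles that crudely."""
    n = len(t)
    i = np.arange(1, n + 1)
    best = (None, None, -np.inf)
    for N in range(n, N_max + 1):
        phi = n / np.sum((N - i + 1) * t)
        ll = jm_loglik(t, N, phi)
        if ll > best[2]:
            best = (N, phi, ll)
    return best[:2]

def simulate_jm(N, phi, n, rng):
    i = np.arange(1, n + 1)
    return rng.exponential(1.0 / (phi * (N - i + 1)))

rng = np.random.default_rng(7)
t = simulate_jm(N=50, phi=0.02, n=35, rng=rng)    # observe 35 of 50 failures
N_hat, phi_hat = jm_mle(t)

# Parametric bootstrap: resample from the fitted model and refit.
boot = [jm_mle(simulate_jm(N_hat, phi_hat, len(t), rng))[0] for _ in range(200)]
print("N_hat =", N_hat, " bootstrap 90% interval for N:",
      np.percentile(boot, [5, 95]))
```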
