Similar literature
 20 similar documents found (search time: 31 ms)
1.
In this article, we present the EM algorithm for maximum likelihood estimation of an asymmetric linear calibration model under the assumption of skew-normally distributed errors. A simulation study evaluates the performance of the calibration estimator in both interpolation and extrapolation situations. As an application to a real data set, we fit the model to a dimensional measurement method in which testicular volume is calculated with a caliper and calibrated against ultrasonography as the standard method. With this methodology, we do not need to transform the variables to obtain symmetrical errors. Another interesting aspect of the approach is that the transformation developed to make the information matrix nonsingular when the skewness parameter is near zero leaves the parameter of interest unchanged. Model fitting is implemented, and the best choice between the usual calibration model and the model proposed in this article is evaluated using the Akaike information criterion, Schwarz's Bayesian information criterion and the Hannan–Quinn criterion.
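The three criteria named in the closing sentence reward fit while penalizing parameters in different ways. A minimal sketch of the comparison (the log-likelihood values below are made-up placeholders, not results from the article):

```python
import math

def information_criteria(loglik, k, n):
    """Return (AIC, BIC, HQC) for a fitted model.

    loglik: maximized log-likelihood, k: number of free parameters,
    n: sample size. Lower values are better for all three criteria.
    """
    aic = 2 * k - 2 * loglik
    bic = k * math.log(n) - 2 * loglik
    hqc = 2 * k * math.log(math.log(n)) - 2 * loglik
    return aic, bic, hqc

# Hypothetical fits: the usual (normal-error) calibration model versus the
# skew-normal model, which carries one extra skewness parameter.
ic_normal = information_criteria(loglik=-120.4, k=3, n=50)
ic_skew = information_criteria(loglik=-112.9, k=4, n=50)
best = "skew-normal" if ic_skew[0] < ic_normal[0] else "normal"
```

The extra skewness parameter is retained only when the gain in log-likelihood outweighs the criterion's penalty.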

2.
Measurement error, the difference between a measured (observed) value of a quantity and its true value, is a possible source of estimation bias in many surveys. To correct for such bias, a validation sample can be used in addition to the original sample. Depending on the type of validation sample, either an internal or an external calibration approach can be applied. Motivated by the Korean Longitudinal Study of Aging (KLoSA), we propose a novel application of fractional imputation to correct for measurement error in the analysis of survey data. The proposed method creates imputed values of the unobserved true variables, which are mismeasured in the main study, by using the validation subsample. Furthermore, the proposed method is directly applicable when the measurement error model is a mixture distribution. Variance estimation using Taylor linearization is developed. Results from a limited simulation study are also presented.
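The internal-calibration idea behind this abstract can be illustrated with a toy simulation; the linear model for E[X | W] and all numbers below are my own assumptions, not the fractional-imputation machinery or the KLoSA data of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated study: outcome y depends on the true covariate x, but the main
# sample only records the error-prone measurement w = x + u.
n, n_val = 2000, 400
x = rng.normal(2.0, 1.0, n)
w = x + rng.normal(0.0, 0.7, n)
y = 1.0 + 0.5 * x + rng.normal(0.0, 0.3, n)
val = np.arange(n) < n_val          # validation subsample: x observed here

# Internal calibration: fit E[X | W] on the validation subsample ...
b, a = np.polyfit(w[val], x[val], 1)
# ... and impute the unobserved true values in the rest of the sample.
x_imp = np.where(val, x, a + b * w)

naive_slope = np.polyfit(w, y, 1)[0]          # attenuated toward zero
corrected_slope = np.polyfit(x_imp, y, 1)[0]  # close to the true 0.5
```

Fractional imputation would instead carry several weighted imputed values per subject, but the bias mechanism and the role of the validation subsample are the same.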

3.
Summary.  We present an approach for correcting for interobserver measurement error in an ordinal logistic regression model that also takes into account the variability of the estimated correction terms. The different scoring behaviour of the 16 examiners complicated the identification of a geographical trend in a recent study on caries experience in 7-year-old Flemish children (Belgium). Since the measurement error is on the response, the factor 'examiner' could be included in the regression model to correct for its confounding effect; however, controlling for examiner largely removed the geographical east–west trend. Instead, we suggest a (Bayesian) ordinal logistic model that corrects for the scoring error (compared with a gold standard) using a calibration data set. The marginal posterior distribution of the regression parameters of interest is obtained by integrating out the correction terms pertaining to the calibration data set. This is done by processing two Markov chains sequentially: one Markov chain samples the correction terms, and each sampled correction term is imputed into the Markov chain pertaining to the regression parameters. The model was fitted to the oral health data of the Signal–Tandmobiel® study. A WinBUGS program was written to perform the analysis.

4.
A correlation curve measures the strength of the association between two variables locally at different values of the covariate. This paper studies how to estimate the correlation curve under the multiplicative distortion measurement errors setting, in which the unobservable variables are both distorted in a multiplicative fashion by an observed confounding variable. We obtain asymptotic normality results for the estimated correlation curve and conduct Monte Carlo simulation experiments to examine the performance of the proposed estimator. The estimated correlation curve is applied to a real dataset for illustration.

5.
The problems of estimating the mean and an upper percentile of a lognormal population with nonnegative values are considered. For estimating the mean of such a population based on data that include zeros, we propose a simple confidence interval (CI) obtained by modifying Tian's [Inferences on the mean of zero-inflated lognormal data: the generalized variable approach. Stat Med. 2005;24:3223–3232] generalized CI. A fiducial upper confidence limit (UCL) and a closed-form approximate UCL for an upper percentile are developed. Our simulation studies indicate that the proposed methods are very satisfactory in terms of coverage probability and precision, and better than existing methods at maintaining balanced tail error rates. The proposed CI and UCL are simple and easy to calculate. All the methods considered are illustrated using samples of data on airborne chlorine concentrations and on diagnostic test costs.
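A sketch of the point estimate underlying zero-inflated lognormal inference of this kind; the interval construction via generalized/fiducial quantities is more involved and omitted here, and the data are simulated, not the chlorine measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Zero-inflated lognormal sample: a point mass at zero plus lognormal
# positives (e.g. exposure data with non-detects recorded as zero).
n, p, mu, sigma = 5000, 0.8, 0.0, 0.5
positive = rng.random(n) < p
y = np.where(positive, rng.lognormal(mu, sigma, n), 0.0)

# Point estimate of the overall mean: P(Y > 0) * E[Y | Y > 0].
p_hat = (y > 0).mean()
logs = np.log(y[y > 0])
mu_hat, s2_hat = logs.mean(), logs.var(ddof=1)
mean_hat = p_hat * np.exp(mu_hat + s2_hat / 2.0)

true_mean = p * np.exp(mu + sigma**2 / 2.0)   # target of estimation
```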

6.
In this article, we propose a beta regression model with multiplicative log-normal measurement errors. Three estimation methods are presented, namely, naive, calibration regression, and pseudo likelihood. The nuisance parameters are estimated from a system of estimation equations using replicated data and these estimates are used to propose a pseudo likelihood function. A simulation study was performed to assess some properties of the proposed methods. Results from an example with a real dataset, including diagnostic tools, are also reported.  相似文献   

7.
A non-Bayesian predictive approach for statistical calibration is introduced. This is based on particularizing to the calibration setting the general definition of non-Bayesian (or frequentist) predictive probability density proposed by Harris [Predictive fit for natural exponential families, Biometrika 76 (1989), pp. 675–684]. The new method is elaborated in detail in case of Gaussian linear univariate calibration. Through asymptotic analysis and simulation results with moderate sample size, it is shown that the non-Bayesian predictive estimator of the unknown parameter of interest in calibration (commonly, a substance concentration) favourably compares with previous estimators such as the classical and inverse estimators, especially for extrapolation problems. A further advantage of the non-Bayesian predictive approach is that it provides not only point estimates but also a predictive likelihood function that allows the researcher to explore the plausibility of any possible parameter value, which is also briefly illustrated. Furthermore, the introduced approach offers a general framework that can be applied for calibrating on the basis of any parametric statistical model, so making it potentially useful for nonlinear and non-Gaussian calibration problems.
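For reference, the two baseline estimators the predictive approach is compared against can be sketched on simulated calibration data; the non-Bayesian predictive estimator itself is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

# Calibration experiment: known concentrations x, noisy instrument readings y
# following the assumed line y = 1 + 2x + error.
x = np.linspace(0.0, 10.0, 25)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.4, x.size)

y0 = 15.0            # new reading whose concentration x0 is unknown (true x0 = 7)

# Classical estimator: fit y on x, then invert the fitted line.
b1, b0 = np.polyfit(x, y, 1)
x0_classical = (y0 - b0) / b1

# Inverse estimator: regress x on y and predict x0 directly.
d1, d0 = np.polyfit(y, x, 1)
x0_inverse = d0 + d1 * y0
```

The two estimators disagree most in extrapolation, which is exactly where the abstract reports the predictive estimator compares favourably.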

8.
This paper develops the theory of calibration estimation and proposes a calibration approach, as an alternative to existing calibration estimators, for estimating the population mean of the study variable using an auxiliary variable in stratified sampling. The theory of the new calibration estimation is given and optimum weights are derived. A simulation study is carried out to compare the performance of the proposed calibration estimator with other existing calibration estimators. The results reveal that the proposed calibration estimators of the population mean are more efficient than the calibration estimators of Tracy et al., Singh et al. and Singh.
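A calibration estimator of this general kind can be sketched for a single stratum: design weights are adjusted, under a chi-square distance, so that the weighted auxiliary total matches its known population value. All numbers are illustrative, and this is the generic GREG-type weighting, not the specific estimators proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Population with auxiliary variable x (total known) and study variable y.
N = 10000
x_pop = rng.gamma(4.0, 1.0, N)
y_pop = 3.0 + 1.5 * x_pop + rng.normal(0.0, 1.0, N)
X_total = x_pop.sum()

# Simple random sample with equal design weights d = N/n.
n = 200
idx = rng.choice(N, n, replace=False)
xs, ys, d = x_pop[idx], y_pop[idx], np.full(n, N / n)

# Chi-square-distance calibration: w = d * (1 + lam * x), with lam chosen so
# that the calibrated x-total hits the known population total exactly.
lam = (X_total - (d * xs).sum()) / (d * xs * xs).sum()
w = d * (1.0 + lam * xs)

ht_mean = (d * ys).sum() / N    # Horvitz-Thompson estimate of the mean of y
cal_mean = (w * ys).sum() / N   # calibration (GREG-type) estimate
```

Because y is strongly related to x, forcing the weights to reproduce the known x-total typically reduces the variance of the estimated mean of y.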

9.
The restrictive properties of compositional data, that is multivariate data with positive parts that carry only relative information in their components, call for special care to be taken while performing standard statistical methods, for example, regression analysis. Among the special methods suitable for handling this problem is the total least squares procedure (TLS, orthogonal regression, regression with errors in variables, calibration problem), performed after an appropriate log-ratio transformation. The difficulty or even impossibility of deeper statistical analysis (confidence regions, hypotheses testing) using the standard TLS techniques can be overcome by calibration solution based on linear regression. This approach can be combined with standard statistical inference, for example, confidence and prediction regions and bounds, hypotheses testing, etc., suitable for interpretation of results. Here, we deal with the simplest TLS problem where we assume a linear relationship between two errorless measurements of the same object (substance, quantity). We propose an iterative algorithm for estimating the calibration line and also give confidence ellipses for the location of unknown errorless results of measurement. Moreover, illustrative examples from the fields of geology, geochemistry and medicine are included. It is shown that the iterative algorithm converges to the same values as those obtained using the standard TLS techniques. Fitted lines and confidence regions are presented for both original and transformed compositional data. The paper contains basic principles of linear models and addresses many related problems.
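The standard TLS fit that the paper's iterative calibration algorithm is shown to agree with can be computed directly from an SVD. A minimal sketch with simulated (untransformed) measurements, assuming equal error variances on both axes:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two noisy measurements of the same quantity: u = t + e1, v = 2t + e2.
t = rng.uniform(0.0, 10.0, 300)
u = t + rng.normal(0.0, 0.2, t.size)
v = 2.0 * t + rng.normal(0.0, 0.2, t.size)

# Orthogonal (TLS) line through the centroid: the direction of largest
# variance is the first right singular vector of the centred data matrix.
A = np.column_stack([u - u.mean(), v - v.mean()])
_, _, vt = np.linalg.svd(A, full_matrices=False)
dx, dy = vt[0]
slope_tls = dy / dx                         # errors-in-variables slope (~2)
intercept_tls = v.mean() - slope_tls * u.mean()
```

Unlike ordinary least squares, this minimizes perpendicular distances, so the slope is not attenuated by the measurement error in u.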

10.
Comparison of groups in longitudinal studies is often conducted using the area under the outcome versus time curve. However, outcomes may be subject to censoring due to a limit of detection and specific methods that take informative missingness into account need to be applied. In this article, we present a unified model‐based method that accounts for both the within‐subject variability in the estimation of the area under the curve as well as the missingness mechanism in the event of censoring. Simulation results demonstrate that our proposed method has a significant advantage over traditionally implemented methods with regards to its inferential properties. A working example from an AIDS study is presented to demonstrate the applicability of our approach. Copyright © 2015 John Wiley & Sons, Ltd.
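The quantity being compared across groups is the per-subject area under the outcome-versus-time curve. A minimal sketch of the traditional trapezoidal summary that the model-based method improves on (illustrative numbers; censoring at a detection limit is not handled here):

```python
import numpy as np

def auc_trapezoid(times, values):
    """Area under an outcome-versus-time curve by the trapezoidal rule."""
    times = np.asarray(times, dtype=float)
    values = np.asarray(values, dtype=float)
    return float(np.sum((values[1:] + values[:-1]) * np.diff(times)) / 2.0)

# One subject's longitudinal measurements (e.g. a log viral load profile).
t = [0.0, 2.0, 4.0, 8.0]
y = [5.0, 3.0, 2.5, 2.0]
auc = auc_trapezoid(t, y)   # 8.0 + 5.5 + 9.0 = 22.5
```

When later values fall below a detection limit, naively substituting the limit (or half of it) into this formula is what induces the bias the article's unified model avoids.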

11.
Bioequivalence (BE) is required for approving a generic drug. The two one‐sided tests procedure (TOST, or the 90% confidence interval approach) has been used as the mainstream methodology to test average BE (ABE) on pharmacokinetic parameters such as the area under the blood concentration‐time curve and the peak concentration. However, for highly variable drugs (%CV > 30%), it is difficult to demonstrate ABE in a standard cross‐over study with the typical number of subjects using the TOST because of lack of power. Recently, the US Food and Drug Administration and the European Medicines Agency recommended similar but not identical reference‐scaled average BE (RSABE) approaches to address this issue. Although the power is improved, the new approaches may not guarantee a high level of confidence for the true difference between two drugs at the ABE boundaries. It is also difficult for these approaches to address the issues of population BE (PBE) and individual BE (IBE). We advocate the use of a likelihood approach for representing and interpreting BE data as evidence. Using example data from a full replicate 2 × 4 cross‐over study, we demonstrate how to present evidence using the profile likelihoods for the mean difference and standard deviation ratios of the two drugs for the pharmacokinetic parameters. With this approach, we present evidence for PBE and IBE as well as ABE within a unified framework. Our simulations show that the operating characteristics of the proposed likelihood approach are comparable with the RSABE approaches when the same criteria are applied. Copyright © 2014 John Wiley & Sons, Ltd.
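The evidential object in this approach is a profile likelihood. A minimal sketch for the mean of within-subject log differences under a normal model, with simulated data and a 1/8 likelihood interval as one conventional evidence threshold; the paper's full replicate-design likelihoods for variance ratios are more involved:

```python
import numpy as np

rng = np.random.default_rng(5)

# Within-subject log(test) - log(reference) differences for, say, log(AUC).
d = rng.normal(0.02, 0.15, 24)
n = d.size

def profile_loglik(delta):
    """Normal-model log-likelihood for the mean, profiled over sigma."""
    return -0.5 * n * np.log(np.mean((d - delta) ** 2))

grid = np.linspace(-0.3, 0.3, 601)
ll = np.array([profile_loglik(g) for g in grid])
lr = np.exp(ll - ll.max())          # standardized profile likelihood

delta_hat = grid[lr.argmax()]       # essentially the sample mean
# 1/8 likelihood interval: parameter values not strongly disfavoured
# relative to the maximum, one way to summarize the evidence about delta.
support = grid[lr >= 1.0 / 8.0]
interval = (support.min(), support.max())
```

Presenting the whole curve, rather than a single accept/reject decision, is what lets the same framework address ABE, PBE and IBE together.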

12.
In this article, we propose a flexible parametric (FP) approach for adjusting for covariate measurement errors in regression that can accommodate replicated measurements on the surrogate (mismeasured) version of the unobserved true covariate, taken either on all the study subjects or on a sub-sample of them, as error assessment data. We utilize the general framework of the FP approach proposed by Hossain and Gustafson in 2009 for adjusting for covariate measurement errors in regression. The FP approach is first compared with existing non-parametric approaches when error assessment data are available on the entire sample of study subjects (complete error assessment data), considering covariate measurement error in a multiple logistic regression model. We then develop the FP approach for the case where error assessment data are available on a sub-sample of the study subjects (partial error assessment data) and investigate its performance using both simulated and real life data. Simulation results reveal that, in comparable situations, the FP approach performs as well as or better than the competing non-parametric approaches in eliminating the bias that arises in the estimated regression parameters due to covariate measurement errors, and it yields more efficient parameter estimates. Finally, the FP approach is found to perform adequately well in terms of bias correction, confidence coverage, and statistical power under partial error assessment data.

13.
This paper is motivated from a neurophysiological study of muscle fatigue, in which biomedical researchers are interested in understanding the time-dependent relationships of handgrip force and electromyography measures. A varying coefficient model is appealing here to investigate the dynamic pattern in the longitudinal data. The response variable in the study is continuous but bounded on the standard unit interval (0, 1) over time, while the longitudinal covariates are contaminated with measurement errors. We propose a generalization of varying coefficient models for the longitudinal proportional data with errors-in-covariates. We describe two estimation methods with penalized splines, which are formalized under a Bayesian inferential perspective. The first method is an adaptation of the popular regression calibration approach. The second method is based on a joint likelihood under the hierarchical Bayesian model. A simulation study is conducted to evaluate the efficacy of the proposed methods under different scenarios. The analysis of the neurophysiological data is presented to demonstrate the use of the methods.

14.
In some clinical trials and epidemiologic studies, investigators are interested in knowing whether the variability of a biomarker is independently predictive of clinical outcomes. This question is often addressed via a naïve approach in which a sample-based estimate (e.g., the standard deviation) is calculated as a surrogate for the "true" variability and then used in regression models as a covariate assumed to be free of measurement error. However, it is well known that measurement error in covariates causes underestimation of the true association, and the underestimation can be substantial when precision is low because of a limited number of measures per subject. The joint analysis of survival and longitudinal data enables one to account for the measurement error in longitudinal data and has received substantial attention in recent years. In this paper we propose a joint model to assess the predictive effect of biomarker variability. The joint model consists of two linked sub-models: a linear mixed model with patient-specific variance for the longitudinal data and a fully parametric Weibull model for the survival data; the association between the two sub-models is induced by a latent Gaussian process. Parameters in the joint model are estimated under a Bayesian framework and implemented using Markov chain Monte Carlo (MCMC) methods with the WinBUGS software. The method is illustrated in the Ocular Hypertension Treatment Study to assess whether the variability of intraocular pressure is an independent risk factor for primary open-angle glaucoma. The performance of the method is also assessed by simulation studies.

15.
ABSTRACT

We propose a new unsupervised learning algorithm to fit regression mixture models with an unknown number of components. The developed approach consists of penalized maximum likelihood estimation carried out by a robust expectation–maximization (EM)-like algorithm, which we derive for polynomial, spline, and B-spline regression mixtures. The proposed learning approach is unsupervised in two senses: (i) it simultaneously infers the model parameters and the optimal number of regression mixture components from the data as the learning proceeds, rather than in a two-fold scheme as in standard model-based clustering using model selection criteria afterwards, and (ii) it does not require accurate initialization, unlike the standard EM for regression mixtures. The developed approach is applied to curve clustering problems. Numerical experiments on simulated and real data show that the proposed algorithm performs well, provides accurate clustering results, and is beneficial for practical applications.
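For contrast with the proposed one-pass algorithm, the standard EM for a two-component mixture of linear regressions (fixed number of components, no penalty, simulated data) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two latent regression lines generating the same scatter.
n = 400
z = rng.random(n) < 0.5
x = rng.uniform(0.0, 1.0, n)
y = np.where(z, 1.0 + 2.0 * x, 4.0 - 1.0 * x) + rng.normal(0.0, 0.1, n)
X = np.column_stack([np.ones(n), x])

def normal_pdf(resid, s2):
    return np.exp(-0.5 * resid * resid / s2) / np.sqrt(2.0 * np.pi * s2)

# Crude initialization: two flat lines at different heights.
pi = np.array([0.5, 0.5])
beta = np.array([[0.0, 0.0], [5.0, 0.0]])   # (intercept, slope) per component
s2 = np.array([1.0, 1.0])

for _ in range(200):
    # E-step: posterior component responsibilities for each point.
    dens = np.column_stack(
        [pi[k] * normal_pdf(y - X @ beta[k], s2[k]) for k in range(2)]
    )
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted least squares per component.
    for k in range(2):
        W = r[:, k]
        XtW = X.T * W
        beta[k] = np.linalg.solve(XtW @ X, XtW @ y)
        resid = y - X @ beta[k]
        s2[k] = (W * resid * resid).sum() / W.sum()
    pi = r.mean(axis=0)

slopes = sorted(beta[:, 1])   # should recover roughly -1 and 2
```

Note the two choices the article's algorithm removes: the number of components is fixed in advance here, and a poor initialization can trap this standard EM in a bad local maximum.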

16.
Regression parameter estimation in the Cox failure time model is considered when regression variables are subject to measurement error. Assuming that repeat regression vector measurements adhere to a classical measurement model, we can consider an ordinary regression calibration approach in which the unobserved covariates are replaced by an estimate of their conditional expectation given available covariate measurements. However, since the rate of withdrawal from the risk set across the time axis, due to failure or censoring, will typically depend on covariates, we may improve the regression parameter estimator by recalibrating within each risk set. The asymptotic and small sample properties of such a risk set regression calibration estimator are studied. A simple estimator based on a least squares calibration in each risk set appears able to eliminate much of the bias that attends the ordinary regression calibration estimator under extreme measurement error circumstances. Corresponding asymptotic distribution theory is developed, small sample properties are studied using computer simulations and an illustration is provided.

17.
In this paper, we develop a numerical method for evaluating the large sample bias in estimated regression coefficients that arises from exposure model misspecification when adjusting for measurement errors in errors-in-variables regression. The application of the proposed method is demonstrated for a logistic errors-in-variables regression model. The method is based on a combination of Monte Carlo, numerical and, in some special cases, analytic integration techniques. It facilitates the investigation of the limiting bias in the estimated regression parameters based on a single data set rather than on the repeated data sets required by the conventional repeated sample method. Simulation studies demonstrate that the proposed method provides very similar estimates of the bias in the estimated regression parameters under exposure model misspecification in logistic errors-in-variables regression, with a higher degree of precision, compared with the conventional repeated sample method.

18.
Implementation of the Gibbs sampler for estimating the accuracy of multiple binary diagnostic tests in one population has been investigated. This method, proposed by Joseph, Gyorkos and Coupal, uses a Bayesian approach to estimate, in the absence of a gold standard, the prevalence and the sensitivity and specificity of medical diagnostic tests. The expressions that allow the method to be implemented for an arbitrary number of tests are given. Using the convergence diagnostics procedure of Raftery and Lewis, the relation between the number of Gibbs sampling iterations and the precision of the estimated quantiles of the posterior distributions is derived. An example is presented concerning a data set of gastro-esophageal reflux disease patients collected to evaluate the accuracy of the water siphon test compared with 24 h pH-monitoring, endoscopy and histology tests. The main message that emerges from our analysis is that implementing the Gibbs sampler to estimate the parameters of multiple binary diagnostic tests can be critical, and convergence diagnostics are advised for this method. The factors that affect the convergence of the chains to the posterior distributions and those that influence the precision of their quantiles are analyzed.
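The data-augmentation Gibbs sampler can be sketched for two conditionally independent tests with uniform priors; note that with so few tests the model is only weakly identified, so in practice informative priors are essential (simulated data, and all variable names are my own):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated data: two conditionally independent binary tests, no gold standard.
n, prev = 1000, 0.30
se, sp = np.array([0.9, 0.8]), np.array([0.95, 0.9])
disease = rng.random(n) < prev
tests = np.empty((n, 2), dtype=bool)
for j in range(2):
    tests[:, j] = np.where(disease, rng.random(n) < se[j], rng.random(n) > sp[j])

# Gibbs sampler in the spirit of Joseph, Gyorkos and Coupal: augment with the
# latent true status, then draw from Beta(1, 1)-prior full conditionals.
pi, S, C = 0.5, np.array([0.7, 0.7]), np.array([0.7, 0.7])
keep = []
for it in range(3000):
    # Sample each subject's latent disease status given the test results.
    p1 = pi * np.prod(np.where(tests, S, 1 - S), axis=1)
    p0 = (1 - pi) * np.prod(np.where(tests, 1 - C, C), axis=1)
    d = rng.random(n) < p1 / (p1 + p0)
    # Conjugate Beta updates for prevalence, sensitivities, specificities.
    pi = rng.beta(1 + d.sum(), 1 + n - d.sum())
    for j in range(2):
        S[j] = rng.beta(1 + (d & tests[:, j]).sum(), 1 + (d & ~tests[:, j]).sum())
        C[j] = rng.beta(1 + (~d & ~tests[:, j]).sum(), 1 + (~d & tests[:, j]).sum())
    if it >= 1000:   # discard a burn-in period
        keep.append([pi, S[0], S[1], C[0], C[1]])

post_mean = np.array(keep).mean(axis=0)
```

This weak identifiability is exactly why the abstract stresses convergence diagnostics: a chain can look stable while drifting between the solution and its mirror image (disease and non-disease labels swapped).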

19.
The problem of heavy tails in regression models is studied. It is proposed that regression models be estimated by a standard procedure and that a statistical check for heavy tails using the residuals be conducted as a regression diagnostic. Using the peaks-over-threshold approach, the generalized Pareto distribution quantifies the degree of tail heaviness through the extreme value index. The number of excesses is determined by means of an innovative threshold model which partitions the random sample into extreme values and ordinary values. The overall decision on a significant heavy tail is justified by both a statistical test and a quantile–quantile plot. The usefulness of the approach includes justification of the goodness of fit of the estimated regression model and quantification of the occurrence of extremal events. The proposed methodology is illustrated with surface ozone levels in the city centre of Leeds.
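The peaks-over-threshold step can be sketched with simple moment estimators of the generalized Pareto parameters applied to residual exceedances; the residuals are simulated here, and the paper's threshold model and formal test are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(8)

# Residuals from some fitted regression model; here simulated heavy-tailed
# (Student t with 6 df, so the true extreme value index is 1/6).
resid = rng.standard_t(6, 20000)

# Peaks over threshold: excesses above a high threshold are approximately GPD.
u = np.quantile(resid, 0.95)
excess = resid[resid > u] - u

# Moment estimators of the GPD shape (extreme value index) and scale, from
#   mean = s/(1-xi),  var = s^2 / ((1-xi)^2 (1-2 xi)).
m, v = excess.mean(), excess.var(ddof=1)
xi_hat = 0.5 * (1.0 - m * m / v)
scale_hat = 0.5 * m * (1.0 + m * m / v)

heavy_tail = xi_hat > 0.0      # a positive shape estimate signals a heavy tail
```

A formal decision, as in the paper, would add a test of the shape parameter and a quantile-quantile plot of the excesses against the fitted GPD.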

20.
Time-varying parameter models with stochastic volatility are widely used to study macroeconomic and financial data. These models are almost exclusively estimated using Bayesian methods. A common practice is to focus on prior distributions that themselves depend on relatively few hyperparameters such as the scaling factor for the prior covariance matrix of the residuals governing time variation in the parameters. The choice of these hyperparameters is crucial because their influence is sizeable for standard sample sizes. In this article, we treat the hyperparameters as part of a hierarchical model and propose a fast, tractable, easy-to-implement, and fully Bayesian approach to estimate those hyperparameters jointly with all other parameters in the model. We show via Monte Carlo simulations that, in this class of models, our approach can drastically improve on using fixed hyperparameters previously proposed in the literature. Supplementary materials for this article are available online.
