Similar Articles
20 similar articles found.
1.
This paper generalizes the tolerance interval approach for assessing agreement between two methods of continuous measurement to repeated measurement data, a common scenario in applications. The repeated measurements may be longitudinal or they may be replicates of the same underlying measurement. Our approach is to first model the data using a mixed model and then construct a relevant asymptotic tolerance interval (or band) for the distribution of appropriately defined differences. We present the methodology in the general context of a mixed model that can incorporate covariates, heteroscedasticity and serial correlation in the errors. Simulation for the no-covariate case shows good small-sample performance of the proposed methodology. For longitudinal data, we also describe an extension to the case when the observed time profiles are modelled nonparametrically through penalized splines. Two real data applications are presented.
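A minimal numpy/scipy sketch of the core idea under simplifying assumptions (balanced replicate differences, a random subject intercept, normal errors): estimate the variance components by the method of moments and form a naive plug-in interval for the distribution of differences. All data-generating values are illustrative, and the plug-in construction understates the width of the paper's asymptotic tolerance interval, which also accounts for estimation uncertainty.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated agreement data: n subjects, m replicated differences each
# (difference = method 1 reading minus method 2 reading at one occasion).
n, m = 50, 3
subj = rng.normal(0.2, 0.5, size=n)                    # between-subject effect
d = subj[:, None] + rng.normal(0.0, 0.3, size=(n, m))  # within-subject error

# One-way random-effects ANOVA (method of moments) variance components.
grand = d.mean()
msb = m * np.sum((d.mean(axis=1) - grand) ** 2) / (n - 1)
msw = np.sum((d - d.mean(axis=1, keepdims=True)) ** 2) / (n * (m - 1))
sigma2_b = max((msb - msw) / m, 0.0)       # between-subject variance
total_sd = np.sqrt(sigma2_b + msw)         # SD of a single difference

# Naive plug-in interval meant to contain ~95% of differences; the
# paper's asymptotic tolerance interval widens this to account for the
# estimation uncertainty in (grand, total_sd).
z = stats.norm.ppf(0.975)
print(f"plug-in interval: ({grand - z*total_sd:.3f}, {grand + z*total_sd:.3f})")
```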

2.
Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively little attention. In this paper, we investigate this problem by exploring the effects of measurement error on parameter estimation and on the induced change in the hazard function. New insights into measurement error effects are revealed, in contrast to the well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate their finite sample performance.

3.
Regression parameter estimation in the Cox failure time model is considered when the regression variables are subject to measurement error. Assuming that repeat regression vector measurements adhere to a classical measurement model, one can consider an ordinary regression calibration approach in which the unobserved covariates are replaced by an estimate of their conditional expectation given the available covariate measurements. However, since the rate of withdrawal from the risk set across the time axis, due to failure or censoring, will typically depend on covariates, the regression parameter estimator may be improved by recalibrating within each risk set. The asymptotic and small sample properties of such a risk set regression calibration estimator are studied. A simple estimator based on a least squares calibration in each risk set appears able to eliminate much of the bias that attends the ordinary regression calibration estimator under extreme measurement error. Corresponding asymptotic distribution theory is developed, small sample properties are studied using computer simulations, and an illustration is provided.
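A toy numpy sketch of the ordinary regression calibration step under the classical model W_ij = X_i + U_ij with k replicates (all numbers illustrative): estimate the error and true-covariate variances by moments and shrink each subject's replicate mean towards the grand mean. The paper's refinement, recalibrating within each risk set, is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

# Classical measurement model: W_ij = X_i + U_ij, k replicates per subject.
n, k = 200, 2
x = rng.normal(0.0, 1.0, n)                      # true, unobserved covariate
w = x[:, None] + rng.normal(0.0, 0.7, (n, k))    # replicate measurements

wbar = w.mean(axis=1)
sigma2_u = np.sum((w - wbar[:, None]) ** 2) / (n * (k - 1))  # error variance
sigma2_x = max(wbar.var(ddof=1) - sigma2_u / k, 0.0)         # true-X variance

# Ordinary regression calibration: estimated E[X | Wbar] under normality.
lam = sigma2_x / (sigma2_x + sigma2_u / k)       # shrinkage factor
x_hat = wbar.mean() + lam * (wbar - wbar.mean())

# x_hat would replace the mismeasured covariate in the Cox fit; the
# risk set version recalibrates in the same way within each risk set.
print(f"estimated shrinkage factor: {lam:.3f}")
```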

4.
A common problem with laboratory assays is that a measurement of a substance in a test sample becomes relatively imprecise as the concentration decreases. A standard solution is to establish lower limits for reliable measurement. A quantitation limit is a level above which a measurement has sufficient precision to be reliably reported. The paper proposes a new approach to defining the limit of quantitation for the case where a linear calibration curve is used to estimate actual concentrations from measured values. The approach is based on the relative precision of the estimated concentration, with the delta method used to approximate that precision. A graphical display is proposed for the assessment of estimated concentrations, as well as of the overall reliability of the calibration curve. Our research is motivated by a clinical inhalation experiment. Comparisons are made between the proposed approach and two standard methods, using both real and simulated data.
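A hedged numpy sketch of the delta-method calculation on simulated calibration data (the curve, error SD, grid and 10% threshold are all illustrative choices, not the paper's): the variance of the estimated concentration combines a new-measurement term with the uncertainty of the fitted coefficients, and the quantitation limit is read off as the smallest concentration meeting the relative-precision target.

```python
import numpy as np

rng = np.random.default_rng(2)

# Calibration standards: known concentrations x, measured responses y.
x = np.repeat(np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0]), 3)
y = 0.4 + 1.1 * x + rng.normal(0.0, 0.15, x.size)

X = np.column_stack([np.ones_like(x), x])
beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1 = beta
sigma2 = res[0] / (x.size - 2)              # residual variance
cov = sigma2 * np.linalg.inv(X.T @ X)       # Cov(b0_hat, b1_hat)

def rsd(y0):
    """Delta-method relative SD of the estimated concentration
    x_hat = (y0 - b0) / b1 for a single new measured value y0."""
    x_hat = (y0 - b0) / b1
    g = np.array([-1.0 / b1, -x_hat / b1])  # gradient wrt (b0, b1)
    var = sigma2 / b1**2 + g @ cov @ g      # measurement part + curve part
    return np.sqrt(var) / abs(x_hat)

# Quantitation limit: smallest concentration whose estimate keeps, say,
# <= 10% relative SD (assumes the threshold is crossed within the grid).
grid = np.linspace(0.05, 5.0, 500)
rsd_vals = np.array([rsd(b0 + b1 * c) for c in grid])
print(f"approximate quantitation limit: {grid[np.argmax(rsd_vals < 0.10)]:.2f}")
```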

5.
Measurement error, the difference between a measured (observed) value of a quantity and its true value, is perceived as a possible source of estimation bias in many surveys. To correct for such bias, a validation sample can be used in addition to the original sample for adjustment of measurement error. Depending on the type of validation sample, either the internal calibration approach or the external calibration approach can be used. Motivated by the Korean Longitudinal Study of Aging (KLoSA), we propose a novel application of fractional imputation to correct for measurement error in the analysis of survey data. The proposed method creates imputed values of the unobserved true variables, which are mismeasured in the main study, by using the validation subsample. Furthermore, the proposed method is directly applicable when the measurement error model is a mixture distribution. Variance estimation using Taylor linearization is developed. Results from a limited simulation study are also presented.

6.
Nakamura (1990) introduced an approach to estimation in measurement error models based on a corrected score function, and claimed that the estimators obtained are consistent for functional models. The proof of the claim essentially assumed the existence of a corrected log-likelihood for which differentiation with respect to the model parameters can be interchanged with conditional expectation taken with respect to the measurement error distributions, given the response variables and true covariates. This paper deals with simple yet practical models for which the above assumption is false, i.e. models for which a corrected score function exists but cannot be obtained by differentiating a corrected log-likelihood. Alternative regularity conditions that make no reference to a log-likelihood are given, under which the corrected score functions yield consistent and asymptotically normal estimators. Application to functional comparative calibration yields interesting results.

7.
A common problem in medical statistics is the discrimination between two groups on the basis of diagnostic information. Information on patient characteristics is used to classify individuals into one of two groups, diseased or disease-free, often with respect to a particular disease. This discrimination has two probabilistic components: (1) the discrimination is not without error, and (2) in many cases the a priori chance of disease can be estimated. Logistic models (Cox 1970; Anderson 1972) provide methods for incorporating both of these components. The a posteriori probability of disease may be estimated for a patient on the basis of both current measurements of patient characteristics and prior information. The parameters of the logistic model may be estimated from a calibration trial. In practice, not one but several sets of measurements of a single patient characteristic may be made on a questionable case. These measurements are typically correlated, far from independent. How should such correlated measurements be used? This paper presents a method for incorporating several sets of measurements in the classification of a case.

8.
It is often necessary to compare two measurement methods in medicine and other experimental sciences. This problem covers a broad range of data. Many authors have explored ways of assessing the agreement of two sets of measurements, but relatively little attention has been paid to determining the sample size for designing an agreement study. In this paper, a method using the interval approach for concordance is proposed to calculate the sample size for an agreement study. The philosophy is that, since a discordant pair is much easier to define, concordance is taken to be satisfied when no more than a pre-specified number k of discordances are found in a reasonably large sample of size n; the goal is to find such a reasonably large n. The sample size calculation is based on two quantities, the discordance rate and the tolerance probability, which together quantify an agreement study. The proposed approach is demonstrated on a real data set.
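One way to make this operational, sketched below with scipy (the exact criterion in the paper may differ): choose the smallest n for which, under an unacceptably high discordance rate p1, the chance of still observing no more than k discordances falls below 1 − tolerance. The values of k, p1 and the tolerance probability are illustrative.

```python
from scipy.stats import binom

def agreement_sample_size(k, p1, tol=0.95):
    """Smallest n such that, when the true discordance rate is an
    unacceptable p1, the chance of still seeing <= k discordances
    (and so wrongly declaring agreement) is at most 1 - tol."""
    n = k + 1
    while binom.cdf(k, n, p1) > 1.0 - tol:
        n += 1
    return n

# e.g. tolerate at most k = 2 discordant pairs; guard against a 10% rate
print(agreement_sample_size(k=2, p1=0.10, tol=0.95))
```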

9.
A mixture measurement error model built upon skew normal and normal distributions is developed to evaluate the impacts of measurement errors on parameter inference in logistic regression. Data generated from survey questionnaires are usually error contaminated. We consider two types of errors: person-specific bias and random errors. Person-specific bias is modelled using a skew normal distribution, and the distribution of random errors is described by a normal distribution. Intensive simulations are conducted to evaluate the contribution of each component in the mixture to the outcomes of interest. The proposed method is then applied to a questionnaire data set from a neural tube defect study. The simulation results and the real data application indicate that ignoring measurement errors or misspecifying the measurement error components can both produce misleading results, especially when the measurement errors are actually skew distributed. The inferred parameters can be attenuated or inflated depending on how the measurement error components are specified. We expect these findings to underscore the importance of adjusting for measurement errors and thus to benefit future data collection efforts.

10.
In a physical activity (PA) study, the 7-day PA log, viewed as an alloyed gold standard, was used to correct the measurement error in a physical activity questionnaire. Because of correlation between the errors in the two measurements, the usual regression calibration (RC) may yield a biased estimate of the calibration factor. We propose a method for removing the correlation through an orthogonal decomposition of the errors, after which the usual RC can be applied. Simulation studies show that our method can effectively correct the bias.
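A toy simulation of the problem together with a simple moment-style correction, shown as a stand-in for (not a reproduction of) the paper's orthogonal-decomposition algorithm: the error covariance is taken as known here, whereas in practice it must be estimated, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# T: true activity; Q: questionnaire; G: 7-day log ("alloyed" standard).
n = 5000
t = rng.normal(0.0, 1.0, n)
cov_e = np.array([[0.6, 0.25], [0.25, 0.4]])   # correlated error covariance
e = rng.multivariate_normal([0.0, 0.0], cov_e, n)
q, g = t + e[:, 0], t + e[:, 1]

# Naive RC factor regresses the log on the questionnaire; the shared
# error component pushes it away from the correct attenuation factor.
lam_naive = np.cov(q, g)[0, 1] / q.var(ddof=1)

# Moment correction: subtract the error covariance term (known here;
# in practice it would be estimated, e.g. from replicate data).
lam_corr = (np.cov(q, g)[0, 1] - cov_e[0, 1]) / q.var(ddof=1)

print(f"naive RC factor:     {lam_naive:.3f}")
print(f"corrected RC factor: {lam_corr:.3f} (target {1 / 1.6:.3f})")
```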

11.
The restrictive properties of compositional data, that is, multivariate data with positive parts that carry only relative information in their components, call for special care when standard statistical methods, for example regression analysis, are performed. Among the special methods suitable for handling this problem is the total least squares procedure (TLS; also known as orthogonal regression, regression with errors in variables, or the calibration problem), performed after an appropriate log-ratio transformation. The difficulty, or even impossibility, of deeper statistical analysis (confidence regions, hypothesis testing) using the standard TLS techniques can be overcome by a calibration solution based on linear regression. This approach can be combined with standard statistical inference, for example confidence and prediction regions and bounds, hypothesis testing, etc., suitable for interpretation of results. Here, we deal with the simplest TLS problem, where we assume a linear relationship between two errorless measurements of the same object (substance, quantity). We propose an iterative algorithm for estimating the calibration line and also give confidence ellipses for the location of the unknown errorless results of measurement. Illustrative examples from the fields of geology, geochemistry and medicine are included. It is shown that the iterative algorithm converges to the same values as those obtained using the standard TLS techniques. Fitted lines and confidence regions are presented for both the original and the transformed compositional data. The paper covers the basic principles of linear models and addresses many related problems.
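For reference, the standard TLS fit to which the paper's iterative algorithm is said to converge can be computed directly from a singular value decomposition; a minimal numpy sketch with simulated, purely illustrative data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two error-prone measurements of the same quantity (in the
# compositional setting these would be log-ratio transformed values).
truth = np.linspace(1.0, 10.0, 40)
x = truth + rng.normal(0.0, 0.3, truth.size)
y = 0.5 + 1.2 * truth + rng.normal(0.0, 0.3, truth.size)

# Orthogonal (total least squares) line: the direction minimising
# perpendicular distances is the leading singular vector of the
# centred data matrix.
xc, yc = x - x.mean(), y - y.mean()
_, _, vt = np.linalg.svd(np.column_stack([xc, yc]), full_matrices=False)
dx, dy = vt[0]                    # first right-singular vector
slope = dy / dx
intercept = y.mean() - slope * x.mean()
print(f"TLS line: y = {intercept:.3f} + {slope:.3f} x")
```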

12.
We present an approach for correcting for interobserver measurement error in an ordinal logistic regression model that also takes into account the variability of the estimated correction terms. The different scoring behaviour of the 16 examiners complicated the identification of a geographical trend in a recent study on caries experience in 7-year-old Flemish children (Belgium). Since the measurement error is on the response, the factor 'examiner' could be included in the regression model to correct for its confounding effect; however, controlling for examiner largely removed the geographical east–west trend. Instead, we suggest a (Bayesian) ordinal logistic model which corrects for the scoring error (relative to a gold standard) using a calibration data set. The marginal posterior distribution of the regression parameters of interest is obtained by integrating out the correction terms pertaining to the calibration data set. This is done by processing two Markov chains sequentially: one Markov chain samples the correction terms, and the sampled correction terms are imputed into the Markov chain for the regression parameters. The model was fitted to the oral health data of the Signal–Tandmobiel® study, and a WinBUGS program was written to perform the analysis.

13.
In the field of education, it is often of great interest to estimate the percentage of students who start out in the top test quantile at time 1 and remain there at time 2, termed the “persistence rate,” as a measure of students' academic growth. One common difficulty is that students' performance may be subject to measurement error. We therefore consider a correlation calibration method and the simulation–extrapolation (SIMEX) method for correcting the measurement errors. Simulation studies are presented to compare the various measurement error correction methods in estimating the persistence rate.
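SIMEX is generic: re-add known amounts of measurement error, track how the naive estimate degrades, and extrapolate the trend back to zero total error. A minimal numpy sketch on a simple attenuated slope (the persistence-rate version follows the same remeasure-and-extrapolate recipe); the error SD is assumed known and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# True scores, outcome, and an error-prone observed score with known
# measurement error SD.
n, sigma_u = 2000, 0.5
x = rng.normal(0.0, 1.0, n)
y = 0.8 * x + rng.normal(0.0, 0.6, n)
w = x + rng.normal(0.0, sigma_u, n)

def naive_slope(w_l):
    return np.cov(w_l, y)[0, 1] / w_l.var(ddof=1)

# SIMEX: add extra error at levels lambda, average the naive estimate
# over B remeasurements, then extrapolate back to lambda = -1.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
B = 100
est = [np.mean([naive_slope(w + rng.normal(0.0, np.sqrt(l) * sigma_u, n))
                for _ in range(B)]) for l in lambdas]

coef = np.polyfit(lambdas, est, deg=2)      # quadratic extrapolant
print(f"naive: {est[0]:.3f}, SIMEX: {np.polyval(coef, -1.0):.3f} (truth 0.8)")
```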

14.
The use of the cumulative average model to investigate the association between disease incidence and repeated measurements of exposures in medical follow-up studies dates back to the 1960s (Kahn and Dawber, J Chron Dis 19:611–620, 1966). This model takes advantage of all prior data and thus should provide a statistically more powerful test of disease–exposure associations. Measurement error in covariates is common in medical follow-up studies, and many methods have been proposed to correct for it. To the best of our knowledge, however, no method has yet been proposed to correct for measurement error in the cumulative average model. In this article, we propose a regression calibration approach to correct relative risk estimates for measurement error. The approach is illustrated with data from the Nurses' Health Study relating incident breast cancer between 1980 and 2002 to time-dependent measures of calorie-adjusted saturated fat intake, controlling for total caloric intake, alcohol intake, and baseline age.

15.
The paper describes two regression models—principal components and maximum-likelihood factor analysis—which may be used when the stochastic predictor variables are highly intercorrelated and/or contain measurement error. The two problems can occur jointly, for example in social-survey data where the true (but unobserved) covariance matrix can be singular. Departure from singularity of the sample dispersion matrix is then due to measurement error. We first consider the more elementary principal components regression model, and show that it can be derived as a special case of (i) canonical correlation and (ii) restricted least squares. The second part treats the more general maximum-likelihood factor-analysis regression model, which is derived from the generalized inverse of the product of two singular matrices. It is also proved that factor-analysis regression can be considered an instrumental variables estimator and therefore does not depend on whether the factors have been “properly” identified in terms of substantive behaviour. Consequently the additional task of rotating factors to “simple structure” does not arise.

16.
This paper considers nonlinear regression models in which neither the response variable nor the covariates can be directly observed, but are instead measured with both multiplicative and additive distortion measurement errors. We propose conditional variance and conditional mean calibration estimation methods for the unobserved variables, and then a nonlinear least squares estimator. For hypothesis testing of the parameters, a restricted estimator under the null hypothesis and a test statistic are proposed, and the asymptotic properties of the estimator and test statistic are established. Lastly, a residual-based empirical process test statistic, marked by suitable functions of the regressors, is proposed for the model checking problem, together with a bootstrap procedure to calculate critical values. Simulation studies demonstrate the performance of the proposed procedures, and a real example is analysed to illustrate their practical usage.

17.
The problem of statistical calibration of a measuring instrument can be framed in both a statistical and an engineering context. In the first, the problem is dealt with by distinguishing between the 'classical' approach and the 'inverse' regression approach; both are static models used to estimate exact measurements from measurements that are affected by error. In the engineering context, the variables of interest are considered at the time at which they are observed. The Bayesian time series method of dynamic linear models can be used to monitor the evolution of the measures, thus introducing a dynamic approach to statistical calibration. The research presented here employs this new approach to statistical calibration. A simulation study in the context of microwave radiometry compares the dynamic model with traditional static frequentist and Bayesian approaches, focusing on how well the dynamic statistical calibration method performs under various signal-to-noise ratios, r.
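At the core of any dynamic linear model is the Kalman recursion; below is a minimal numpy sketch of the simplest local-level case, tracking a slowly drifting measurand through noisy radiometer-style readings. The variances and simulated data are illustrative, and the paper's calibration DLM is richer than this.

```python
import numpy as np

rng = np.random.default_rng(6)

# Noisy readings of a slowly drifting true signal.
T = 200
truth = np.cumsum(rng.normal(0.0, 0.05, T)) + 10.0
y = truth + rng.normal(0.0, 0.5, T)

# Local-level DLM: theta_t = theta_{t-1} + w_t, y_t = theta_t + v_t,
# filtered with the standard Kalman recursions.
W, V = 0.05**2, 0.5**2            # assumed evolution / observation variances
m, C = y[0], 1.0                  # prior mean and variance
filtered = np.empty(T)
for t in range(T):
    R = C + W                     # prior variance at time t
    K = R / (R + V)               # Kalman gain
    m = m + K * (y[t] - m)        # posterior mean given y_t
    C = (1 - K) * R               # posterior variance
    filtered[t] = m

print(f"RMSE raw: {np.sqrt(np.mean((y - truth)**2)):.3f}, "
      f"filtered: {np.sqrt(np.mean((filtered - truth)**2)):.3f}")
```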

18.
Liu Yunxia et al. 《统计研究》 (Statistical Research), 2021, 38(12): 77–88
This paper provides a fairly systematic analysis of the problems in previous measurements of China's total factor productivity (TFP) and attempts to distill a measurement method that is both scientifically sound and operational. On the basis of distinguishing between actual and effective capital stock, and drawing on estimates of China's capital stock, it uses a first-difference log model and related econometric methods to estimate the output elasticities of capital and labour, avoiding potential problems such as spurious regression, serial correlation, multicollinearity and heteroscedasticity, so that the estimated output elasticities both accord with economic theory and pass econometric tests. The paper also clarifies the connection and the distinction between TFP and technical progress in the broad sense, and uses empirical analysis to reveal why the two indicators diverge in different periods. The empirical results show that growth in TFP has played an important role in promoting China's economic growth since the reform and opening-up.
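A minimal numpy sketch of the generic first-difference log specification described above, on simulated growth series (the elasticities, TFP growth and noise levels are assumptions for illustration, not estimates from the paper): differencing the logs removes the common trends behind spurious regressions, and TFP growth is recovered as the Solow residual.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative annual growth rates (first differences of logs), 30 years.
T = 30
dlk = rng.normal(0.08, 0.02, T)            # capital growth, d ln K
dll = rng.normal(0.02, 0.01, T)            # labour growth, d ln L
dly = 0.5 * dlk + 0.5 * dll + 0.02 + rng.normal(0.0, 0.01, T)

# First-difference log model: d ln Y = c + a d ln K + b d ln L + e.
X = np.column_stack([np.ones(T), dlk, dll])
(c, a, b), *_ = np.linalg.lstsq(X, dly, rcond=None)

# Solow-residual TFP growth implied by the estimated elasticities.
tfp_growth = dly - a * dlk - b * dll
print(f"elasticities: K={a:.3f}, L={b:.3f}; "
      f"mean TFP growth={tfp_growth.mean():.3%}")
```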

19.
Restricted factor analysis can be used to investigate measurement bias. A prerequisite for the detection of measurement bias through factor analysis is the correct specification of the measurement model. We applied restricted factor analysis to two subtests of a Dutch cognitive ability test; these two examples serve to illustrate the relationship between multidimensionality and measurement bias. We conclude that measurement bias implies multidimensionality, whereas multidimensionality shows up as measurement bias only if it is not properly accounted for in the measurement model.

20.
We consider logistic regression with covariate measurement error. Most existing approaches require replicates of the error-contaminated covariates, which may not be available in the data. We propose generalized method of moments (GMM) nonparametric correction approaches that use instrumental variables observed in a calibration subsample. The instrumental variable is related to the underlying true covariates through a general nonparametric model, and the probability of being in the calibration subsample may depend on the observed variables. We first take a simple approach that adopts the inverse selection probability weighting technique on the calibration subsample, and then improve it with a GMM estimator that uses the whole sample. The asymptotic properties are derived, and the finite sample performance is evaluated through simulation studies and an application to a real data set.
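A toy numpy sketch of the first, simple estimator only: the inverse-selection-probability weighting step on the calibration subsample (the GMM refinement is not shown). Selection here depends on the observed outcome, the selection probabilities are taken as known, and all data-generating values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

# Full sample with true covariate X; only a calibration subsample is
# retained, with selection probability depending on the observed outcome.
n = 5000
x = rng.normal(0.0, 1.0, n)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.3 + 0.9 * x))))
pi = np.where(y == 1, 0.5, 0.2)            # known selection probabilities
in_cal = rng.binomial(1, pi) == 1

def weighted_logit(X, y, w, iters=25):
    """Weighted logistic regression via Newton-Raphson (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (w * (y - p))
        hess = (X * (w * p * (1 - p))[:, None]).T @ X
        beta += np.linalg.solve(hess, grad)
    return beta

# Outcome-dependent selection distorts the unweighted fit (mainly the
# intercept here); weighting each unit by 1/pi restores it.
Xc = np.column_stack([np.ones(in_cal.sum()), x[in_cal]])
naive = weighted_logit(Xc, y[in_cal], np.ones(in_cal.sum()))
ipw = weighted_logit(Xc, y[in_cal], 1.0 / pi[in_cal])
print(f"unweighted: {naive.round(3)}, IPW: {ipw.round(3)} (truth [0.3, 0.9])")
```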
