Similar Articles
 20 similar records found
1.
Different approaches for estimating change in biomass between two points in time by means of airborne laser scanner data were tested. Both field and laser data were collected on two occasions on 52 sample plots in a mountain forest in southeastern Norway. In the first approach, biomass change was estimated as the difference between predicted biomass for the two measurement occasions; joint models for the biomass at both occasions were fitted using different height and density variables from the laser data as explanatory variables. The second approach modelled the observed change directly, using the change in different variables extracted from the laser data as explanatory variables. In the third approach we modelled the relative change in biomass, with the explanatory variables also expressed as relative change between measurement occasions. In all approaches we allowed spline terms to be entered. We also investigated the aptness of models in which the residual variance was allowed to be proportional to the area of the plot on which biomass was assessed. All alternative models were initially assessed by AIC, and all models were also evaluated by estimating biomass change on the model development data. This evaluation indicated that the two direct approaches (approaches 2 and 3) were better than modelling biomass at both occasions and taking change as the difference between the biomass estimates. Approach 2 seemed to be slightly better than approach 3 based on assessments of bias in the evaluation.
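As a rough illustration of the contrast between the indirect approach (approach 1) and the direct approach (approach 2) described above, here is a minimal Python sketch. All data are simulated and all names (`h1`, `b1`, etc.) are hypothetical stand-ins for a single laser height metric and field-measured biomass; the actual study used many more laser-derived variables plus spline terms.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 52  # number of sample plots, as in the study

# Hypothetical plot-level laser canopy-height metric at the two occasions
h1 = rng.uniform(5, 25, n)
h2 = h1 + rng.uniform(0.5, 3.0, n)            # growth between occasions
b1 = 10.0 + 4.0 * h1 + rng.normal(0, 5, n)    # field-measured biomass, t1
b2 = 10.0 + 4.0 * h2 + rng.normal(0, 5, n)    # field-measured biomass, t2

# Approach 1 (indirect): model biomass at each occasion, difference the predictions
X1 = np.column_stack([np.ones(n), h1])
X2 = np.column_stack([np.ones(n), h2])
c1, *_ = np.linalg.lstsq(X1, b1, rcond=None)
c2, *_ = np.linalg.lstsq(X2, b2, rcond=None)
change_indirect = X2 @ c2 - X1 @ c1

# Approach 2 (direct): regress observed change on the change in the laser metric
Xd = np.column_stack([np.ones(n), h2 - h1])
cd, *_ = np.linalg.lstsq(Xd, b2 - b1, rcond=None)
change_direct = Xd @ cd
```

On the model development data both approaches reproduce the observed mean change (each OLS fit with an intercept preserves the mean of its response), so the differences the abstract reports show up at the plot level, in bias and variance, rather than in the overall mean.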

2.
Restricted factor analysis can be used to investigate measurement bias. A prerequisite for the detection of measurement bias through factor analysis is the correct specification of the measurement model. We applied restricted factor analysis to two subtests of a Dutch cognitive ability test. These two examples serve to illustrate the relationship between multidimensionality and measurement bias. We conclude that measurement bias implies multidimensionality, whereas multidimensionality shows up as measurement bias only if multidimensionality is not properly accounted for in the measurement model.

3.
Factor analysis is an established technique for the detection of measurement bias. Multigroup factor analysis (MGFA) can detect both uniform and nonuniform bias. Restricted factor analysis (RFA) can also be used to detect measurement bias, albeit only uniform measurement bias. Latent moderated structural equations (LMS) enable the estimation of nonlinear interaction effects in structural equation modelling. By extending the RFA method with LMS, the RFA method should be suited to detect nonuniform as well as uniform bias. In a simulation study, the RFA/LMS method and the MGFA method are compared in detecting uniform and nonuniform measurement bias under various conditions, varying the size of uniform bias, the size of nonuniform bias, the sample size, and the ability distribution. For each condition, 100 sets of data were generated and analysed with both detection methods. The RFA/LMS and MGFA methods turned out to perform equally well. Percentages of items correctly identified as biased (true positives) generally varied between 92% and 100%, except in small-sample conditions in which the bias was nonuniform and small. For both methods, the percentages of false positives were generally higher than the nominal levels of significance.

4.
The linear structural model provides one way of modelling a linear relationship between two random variables. It is well known that problems of unidentifiability arise for unreplicated observations and normal error structure. As in all data sets, outliers can arise, and methods are needed for detecting and testing them. An outlier-generating model of mean-slippage type can be used to characterise four different forms of outlier manifestation. Interestingly, the unidentifiability problem presents no obstacle to detecting or testing the outliers for three of the four forms. Detection principles, and specific discordancy tests, are derived and illustrated by application to some data on physical measurements of Pacific squid.

5.
Latent variable structural models and the partial least-squares (PLS) estimation procedure have attracted increased interest since their use in the context of customer satisfaction measurement. The well-known property that the estimates of the inner structure model are inconsistent implies biased estimates for finite sample sizes. A simplified version of the structural model that is used for the Swedish Customer Satisfaction Index (SCSI) system has been used to generate simulated data and to study the PLS algorithm in the presence of three inadequacies: (i) skew instead of symmetric distributions for manifest variables; (ii) multicollinearity within blocks of manifest variables and between latent variables; and (iii) misspecification of the structural model (omission of regressors). The simulation results show that the PLS method is quite robust against these inadequacies. The bias that is caused by the inconsistency of PLS estimates is substantially increased only for extremely skewed distributions and for the erroneous omission of a highly relevant latent regressor variable. The estimated scores of the latent variables are always in very good agreement with the true values and seem to be unaffected by the inadequacies under investigation.

6.
Method effects often occur when different methods are used for measuring the same construct. We present a new approach for modelling this kind of phenomenon, consisting of a definition of method effects and a first model, the method effect model, which can be used for data analysis. This model may be applied to multitrait-multimethod data or to longitudinal data where the same construct is measured with at least two methods on all occasions. In this new approach, the definition of the method effects is based on the theory of individual causal effects by Neyman and Rubin. Method effects are accordingly conceptualized as the individual effects of applying measurement method j instead of k. They are modelled as latent difference scores in structural equation models. A reference method needs to be chosen against which all other methods are compared. The model fit is invariant to the choice of the reference method. The model allows the estimation of the average of the individual method effects, their variance, their correlation with the traits (and other latent variables) and the correlation of different method effects with each other. Furthermore, since the definition of the method effects is in line with the theory of causality, the method effects may (under certain conditions) be interpreted as causal effects of the method. The method effect model is compared with traditional multitrait-multimethod models. An example illustrates the application of the model to longitudinal data, analysing the effect of negatively formulated items (such as 'feel bad') as compared with positively formulated items (such as 'feel good') for measuring mood states.

7.
Longitudinal health-related quality of life data arise naturally from studies of progressive and neurodegenerative diseases. In such studies, patients' mental and physical conditions are measured over their follow-up periods, and the resulting data are often complicated by subject-specific measurement times and possible terminal events associated with the outcome variables. Motivated by the "Predictor's Cohort" study on patients with advanced Alzheimer disease, we propose in this paper a semiparametric modeling approach to longitudinal health-related quality of life data. It builds upon and extends some recent developments for longitudinal data with irregular observation times. The new approach handles possibly dependent terminal events. It allows one to examine time-dependent covariate effects on the evolution of the outcome variable and to assess nonparametrically the change in outcome measurement that is due to factors not incorporated in the covariates. The usual large-sample properties for parameter estimation are established. In particular, it is shown that relevant parameter estimators are asymptotically normal and the asymptotic variances can be estimated consistently by the simple plug-in method. A general procedure for testing a specific parametric form in the nonparametric component is also developed. Simulation studies show that the proposed approach performs well in practical settings. The method is applied to the motivating example.

8.
Measurement error is well known to cause bias in estimated regression coefficients and a loss of power for detecting associations. Methods commonly used to correct for bias often require auxiliary data. We develop a solution for investigating associations between the change in an imprecisely measured outcome and precisely measured predictors, adjusting for the baseline value of the outcome when auxiliary data are not available. We require the specification of ranges for the reliability or the measurement error variance. The solution allows one to investigate the associations for change and to assess the impact of the measurement error.
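The core of such a correction can be sketched with the classical attenuation formula: under classical measurement error, the naive regression slope is deflated by the reliability ratio var(true)/var(observed), so dividing by an assumed reliability undoes the bias. The snippet below is a simplified, hypothetical sketch (single predictor, known error variance), not the authors' full procedure for change scores:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x_true = rng.normal(0.0, 1.0, n)
x_obs = x_true + rng.normal(0.0, 0.5, n)   # classical measurement error, variance 0.25
y = 2.0 * x_true + rng.normal(0.0, 1.0, n)

# The naive slope is attenuated by the reliability ratio var(true)/var(observed)
beta_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)
reliability = 1.0 / (1.0 + 0.25)           # assumed known; in practice varied over a range
beta_corrected = beta_naive / reliability
```

Specifying a range of plausible reliabilities, as the abstract describes, amounts to repeating the last line for each value in the range and reporting the resulting band of corrected estimates.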

9.
Measurement error, the difference between a measured (observed) value of a quantity and its true value, is perceived as a possible source of estimation bias in many surveys. To correct for such bias, a validation sample can be used in addition to the original sample for adjustment of measurement error. Depending on the type of validation sample, we can use either the internal calibration approach or the external calibration approach. Motivated by the Korean Longitudinal Study of Aging (KLoSA), we propose a novel application of fractional imputation to correct for measurement error in the analysis of survey data. The proposed method creates imputed values of the unobserved true variables, which are mismeasured in the main study, by using the validation subsample. Furthermore, the proposed method is directly applicable when the measurement error model is a mixture distribution. Variance estimation using Taylor linearization is developed. Results from a limited simulation study are also presented.

10.
For estimation of population totals, dual-system estimation (DSE) is often used. Such a procedure is known to suffer from bias under certain conditions. In the following, a simple model is proposed that combines three conditions under which bias of the DSE can result; the conditions relate to response correlation, classification error and matching error. The resulting bias is termed model bias. The effects of model bias and synthetic bias are illustrated in a small area estimation application. The illustration uses simulated population data.
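The dual-system (capture-recapture) estimator behind DSE is the Lincoln-Petersen formula. The toy numbers below are invented and serve only to illustrate the formula and one of the bias conditions the abstract mentions, matching error:

```python
def dual_system_estimate(n1, n2, m):
    """Lincoln-Petersen dual-system estimate of a population total.

    n1, n2: counts from two independent capture lists
    m:      number of matched (captured-in-both) units
    """
    return n1 * n2 / m

# Hypothetical counts: 800 on list 1, 750 on list 2, 600 matched
N_hat = dual_system_estimate(800, 750, 600)

# Matching error: undetected matches shrink m and inflate the estimate,
# one of the three bias sources combined in the abstract's model
N_biased = dual_system_estimate(800, 750, 550)
```

Response correlation and classification error similarly distort the inputs (n1, n2, m), which is why the abstract's model treats the three sources jointly.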

11.
This paper develops a bias correction scheme for a multivariate heteroskedastic errors-in-variables model. The applicability of this model is justified in areas such as astrophysics, epidemiology and analytical chemistry, where the variables are subject to measurement errors and the variances vary with the observations. We conduct Monte Carlo simulations to investigate the performance of the corrected estimators. The numerical results show that the bias correction scheme yields nearly unbiased estimates. We also give an application to a real data set.

12.
The paper investigates a Bayesian hierarchical model for the analysis of categorical longitudinal data from a large social survey of immigrants to Australia. Data for each subject are observed on three separate occasions, or waves, of the survey. One of the features of the data set is that observations for some variables are missing for at least one wave. A model for the employment status of immigrants is developed by introducing, at the first stage of a hierarchical model, a multinomial model for the response; subsequent terms are then introduced to explain wave and subject effects. To estimate the model, we use the Gibbs sampler, which allows missing data for both the response and the explanatory variables to be imputed at each iteration of the algorithm, given some appropriate prior distributions. After accounting for significant covariate effects in the model, results show that the relative probability of remaining unemployed diminished with time following arrival in Australia.

13.
Factor analysis is a flexible technique for assessment of multivariate dependence and codependence. Besides being an exploratory tool used to reduce the dimensionality of multivariate data, it allows estimation of common factors that often have an interesting theoretical interpretation in real problems. However, standard factor analysis is only applicable when the variables are scaled, which is often inappropriate, for example, in data obtained from questionnaires in the field of psychology, where the variables are often categorical. In this framework, we propose a factor model for the analysis of multivariate ordered and non-ordered polychotomous data. The inference procedure is carried out under the Bayesian approach via Markov chain Monte Carlo methods. Two Monte Carlo simulation studies are presented to investigate the performance of this approach in terms of estimation bias, precision and assessment of the number of factors. We also apply the proposed method to analyze participants' responses to the Motivational State Questionnaire dataset, developed to study emotions in laboratory and field settings.

14.
We consider varying coefficient models, which are an extension of the classical linear regression models in the sense that the regression coefficients are replaced by functions of certain variables (for example, time); the covariates are also allowed to depend on other variables. Varying coefficient models are popular in longitudinal data and panel data studies, and have been applied in fields such as finance and health sciences. We consider longitudinal data and estimate the coefficient functions by the flexible B-spline technique. An important question in a varying coefficient model is whether an estimated coefficient function is statistically different from a constant (or zero). We develop testing procedures based on the estimated B-spline coefficients by making use of nice properties of a B-spline basis. Our method allows longitudinal data where repeated measurements for an individual can be correlated. We obtain the asymptotic null distribution of the test statistic. The power of the proposed testing procedures is illustrated on simulated data, where we highlight the importance of including the correlation structure of the response variable, and on real data.
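The estimation step can be sketched as follows: expand the time-varying coefficient in a B-spline basis and fit the spline coefficients by least squares; a coefficient function is constant exactly when all its spline coefficients are equal, which is what the paper's test examines. This toy version assumes independent errors (the paper's procedure additionally accounts for within-subject correlation), and all knots and simulation settings are invented:

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(1)
n = 500
times = np.sort(rng.uniform(0.0, 1.0, n))    # irregular observation times
x = rng.normal(0.0, 1.0, n)                  # covariate
beta_true = np.sin(2 * np.pi * times)        # time-varying coefficient
y = beta_true * x + rng.normal(0.0, 0.2, n)

k = 3                                        # cubic B-splines
knots = np.concatenate([np.zeros(k + 1), [0.25, 0.5, 0.75], np.ones(k + 1)])
nbasis = len(knots) - k - 1                  # 7 basis functions

# Design columns: basis function j evaluated at the times, times the covariate
B = np.column_stack([BSpline(knots, np.eye(nbasis)[j], k)(times)
                     for j in range(nbasis)])
coef, *_ = np.linalg.lstsq(B * x[:, None], y, rcond=None)
beta_hat = B @ coef                          # estimated coefficient function

# Equal spline coefficients would mean a constant beta(t); a large spread
# is informal evidence of genuine time variation
spread = coef.max() - coef.min()
```

The paper's actual test statistic is a formal function of these spline coefficients with a derived asymptotic null distribution; `spread` here is only an informal stand-in.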

15.
Missing data are a common problem in almost all areas of empirical research. Ignoring the missing data mechanism, especially when data are missing not at random (MNAR), can result in biased and/or inefficient inference. Because an MNAR mechanism is not verifiable from the observed data, sensitivity analysis is often used to assess it. Current sensitivity analysis methods primarily assume a model for the response mechanism in conjunction with a measurement model and examine sensitivity to the missing data mechanism via the parameters of the response model. Recently, Jamshidian and Mata (Post-modelling sensitivity analysis to detect the effect of missing data mechanism, Multivariate Behav. Res. 43 (2008), pp. 432-452) introduced a new method of sensitivity analysis that does not require the difficult task of modelling the missing data mechanism. In this method, a single measurement model is fitted to all of the data and to a sub-sample of the data. Discrepancy in the parameter estimates obtained from the two data sets is used as a measure of sensitivity to the missing data mechanism. Jamshidian and Mata describe their method mainly in the context of detecting data that are missing completely at random (MCAR). They used a bootstrap-type method, which relies on heuristic input from the researcher, to test for discrepancy between the parameter estimates. Instead of using the bootstrap, the current article obtains confidence intervals for parameter differences between the two samples based on an asymptotic approximation. Because it does not use the bootstrap, the developed procedure avoids the convergence problems that bootstrap methods are prone to; it does not require heuristic input from the researcher and can be readily implemented in statistical software. The article also discusses methods of obtaining sub-samples that may be used to test for missing at random (MAR) in addition to MCAR.
An application of the developed procedure to a real data set, from the first wave of an ongoing longitudinal study on aging, is presented. Simulation studies are performed as well, using two methods of missing data generation, which show promise for the proposed sensitivity method. One method of missing data generation is also new and interesting in its own right.
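The flavour of this discrepancy-based sensitivity check can be sketched as follows: estimate the same parameter (here just a mean, for illustration) from all units and from the complete cases, and compare. Everything below is a hypothetical toy, not the Jamshidian-Mata procedure itself, and the z statistic is crude because it ignores the overlap between the two samples:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5_000
x = rng.normal(0.0, 1.0, n)
y = 0.7 * x + rng.normal(0.0, 1.0, n)

# y goes missing with probability increasing in x: MAR, not MCAR
p_miss = 1.0 / (1.0 + np.exp(-2.0 * x))
missing = rng.uniform(size=n) < p_miss
complete = ~missing

# Discrepancy between the full-sample and complete-case estimates of E[x]
mean_full = x.mean()
mean_cc = x[complete].mean()
# Crude standard error treating the samples as independent (they overlap)
se = np.sqrt(x.var(ddof=1) / n + x[complete].var(ddof=1) / complete.sum())
z = (mean_cc - mean_full) / se
```

Under MCAR the complete cases are a random sub-sample and the two estimates agree up to sampling error; here the missingness depends on x, so the discrepancy is large and the mechanism is flagged.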

16.
Prognostic studies are essential to understand the role of particular prognostic factors and, thus, improve prognosis. In most studies, the disease progression trajectories of individual patients may end with one of several mutually exclusive endpoints or may involve a sequence of different events.

One challenge in such studies concerns separating the effects of putative prognostic factors on these different endpoints and testing the differences between these effects.

In this article, we systematically evaluate and compare, through simulations, the performance of three alternative multivariable regression approaches in analyzing competing risks and multiple-event longitudinal data. The three approaches are: (1) fitting separate event-specific Cox proportional hazards models; (2) the extension of the Cox model to competing risks proposed by Lunn and McNeil; and (3) a Markov multi-state model.

The simulation design is based on a prognostic study of cancer progression, and several simulated scenarios help investigate different methodological issues relevant to the modeling of multiple-event processes of disease progression. The results highlight some practically important issues. Specifically, decreased precision in the observed timing of intermediary (non-fatal) events has a strong negative impact on the accuracy of regression coefficients estimated with either the Cox or Lunn-McNeil models, while the Markov model appears quite robust under the same circumstances. Furthermore, tests based on both the Markov and Lunn-McNeil models had similar power for detecting a difference between the effects of the same covariate on the hazards of two mutually exclusive events. The Markov approach also yields an accurate Type I error rate and good empirical power for testing the hypothesis that the effect of a prognostic factor changes after an intermediary event, which cannot be directly tested with the Lunn-McNeil method. Bootstrap-based standard errors improve the coverage rates for Markov model estimates. Overall, the results of our simulations validate the Markov multi-state model for a wide range of data structures encountered in prognostic studies of disease progression, and may guide end users regarding the choice of the model(s) most appropriate for their specific application.

17.
We propose a flexible functional approach for modelling generalized longitudinal data and survival time using principal components. In the proposed model the longitudinal observations can be continuous or categorical data, such as Gaussian, binomial or Poisson outcomes. We generalize the traditional joint models, which treat categorical data such as CD4 counts as continuous after transformation. The proposed model is data-adaptive: it does not require pre-specified functional forms for the longitudinal trajectories and automatically detects characteristic patterns. The longitudinal trajectories, observed with measurement error or random error, are represented by flexible basis functions through a possibly nonlinear link function, combining dimension reduction techniques resulting from functional principal component (FPC) analysis. The relationship between the longitudinal process and event history is assessed using a Cox regression model. Although the proposed model inherits the flexibility of non-parametric methods, the estimation procedure based on the EM algorithm is still parametric in computation, and thus simple and easy to implement. The computation is simplified by dimension reduction for the random coefficients, or FPC scores. An iterative selection procedure based on the Akaike information criterion (AIC) is proposed to choose the tuning parameters, such as the knots of the spline basis and the number of FPCs, so that an appropriate degree of smoothness and fluctuation can be achieved. The effectiveness of the proposed approach is illustrated through a simulation study, followed by an application to longitudinal CD4 counts and survival data collected in a recent clinical trial comparing the efficacy and safety of two antiretroviral drugs.

18.
In longitudinal studies, missing data are the rule, not the exception. We consider the analysis of longitudinal binary data with non-monotone missingness that is thought to be non-ignorable. In this setting a full likelihood approach is complicated algebraically and can be computationally prohibitive when there are many measurement occasions. We propose a 'protective' estimator that assumes that the probability that a response is missing at any occasion depends, in a completely unspecified way, on the value of that variable alone. Relying on this 'protectiveness' assumption, we describe a pseudolikelihood estimator of the regression parameters under non-ignorable missingness, without having to model the missing data mechanism directly. The proposed method is applied to CD4 cell count data from two longitudinal clinical trials of patients infected with the human immunodeficiency virus.

19.
This work presents a procedure that aims to eliminate or reduce the bias caused by omitted variables by means of so-called regime-switching regressions. Estimation bias arises whenever the statistical (linear) model is under-specified, that is, when some variables are omitted and they are correlated with the regressors. This work shows how an appropriate specification of a regime-switching model (independent or Markov-switching) can eliminate or reduce this correlation, and hence the estimation bias. A demonstration is given, together with some Monte Carlo simulations. An empirical verification, based on Fisher's equation, is also provided.
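The omitted-variable bias the abstract refers to is easy to reproduce numerically. The sketch below shows only the bias mechanism, not the regime-switching correction, and all coefficients are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
z = rng.normal(0.0, 1.0, n)                  # omitted variable
x = 0.8 * z + rng.normal(0.0, 0.6, n)        # regressor correlated with z
y = 1.0 * x + 2.0 * z + rng.normal(0.0, 1.0, n)

# Under-specified model: y regressed on x alone
beta_short = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Classical omitted-variable bias: gamma * cov(x, z) / var(x)
bias_theory = 2.0 * 0.8 / (0.8**2 + 0.6**2)  # = 1.6, so beta_short ~ 2.6
```

A regime-switching specification, as proposed in the paper, aims to absorb the part of the omitted variable that is correlated with the regressors into regime-specific parameters, shrinking `bias_theory` toward zero.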

20.
While most of the literature on measurement error focuses on additive measurement error, in this paper we consider the multiplicative case. We apply the simulation extrapolation (SIMEX) method, a procedure originally proposed by Cook and Stefanski (J. Am. Stat. Assoc. 89:1314-1328, 1994) to correct the bias due to additive measurement error, to the case where data are perturbed by multiplicative noise, and we present several approaches to accounting for multiplicative noise in the SIMEX procedure. Furthermore, we analyze how well these approaches reduce the bias caused by multiplicative perturbation. Using a binary probit model, we produce Monte Carlo evidence on how the loss of data quality can be minimized.
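A minimal SIMEX sketch for the multiplicative case, assuming the noise variance on the log scale is known: multiplicative noise is additive in logs, so extra noise is injected there at increasing intensities and the trend in the naive estimate is extrapolated back to the no-noise limit. This is just one plausible way to adapt SIMEX to multiplicative noise (a linear model is used instead of the paper's probit model), not necessarily any of the authors' approaches:

```python
import numpy as np

rng = np.random.default_rng(7)
n, sigma_u = 2_000, 0.3
x_true = rng.lognormal(0.0, 0.5, n)
x_obs = x_true * rng.lognormal(0.0, sigma_u, n)   # multiplicative noise
y = 1.5 * np.log(x_true) + rng.normal(0.0, 0.5, n)

def slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# SIMEX on the log scale: for each lam, inflate the log-noise variance by a
# factor (1 + lam), average the naive estimate over B replications, then
# extrapolate the trend back to lam = -1 (the error-free limit)
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
B = 200
naive = []
for lam in lambdas:
    reps = [slope(np.log(x_obs) + np.sqrt(lam) * rng.normal(0.0, sigma_u, n), y)
            for _ in range(B)]
    naive.append(np.mean(reps))

coeffs = np.polyfit(lambdas, naive, 2)            # quadratic extrapolant
beta_simex = np.polyval(coeffs, -1.0)
```

The quadratic extrapolant removes most but not all of the attenuation (the true attenuation curve is a hyperbola in lam), which mirrors the general finding that SIMEX reduces rather than fully eliminates measurement error bias.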
