Similar Documents
20 similar documents found.
1.
Overcoming biases and misconceptions in ecological studies
The aggregate data study design provides an alternative group level analysis to ecological studies in the estimation of individual level health risks. An aggregate model is derived by aggregating a plausible individual level relative rate model within groups, such that population-based disease rates are modelled as functions of individual level covariate data. We apply an aggregate data method to a series of fictitious examples from a review paper by Greenland and Robins which illustrated the problems that can arise when using the results of ecological studies to make inference about individual health risks. We use simulated data based on their examples to demonstrate that the aggregate data approach can address many of the sources of bias that are inherent in typical ecological analyses, even though the limited between-region covariate variation in these examples reduces the efficiency of the aggregate study. The aggregate method has the potential to estimate exposure effects of interest in the presence of non-linearity, confounding at individual and group levels, effect modification, classical measurement error in the exposure and non-differential misclassification in the confounder.

2.
Age–period–cohort decomposition requires an identification assumption because there is a linear relationship between age, survey period, and birth cohort (age+cohort=period). This paper proposes new decomposition methods based on factor models such as the principal components model and the partial least squares model. Although factor models have been applied to overcome the problem of many observed variables with possible co-linearity, here they are applied to overcome the perfect co-linearity among age, period, and cohort dummy variables. Since any unobserved factor in the factor model is represented as a linear combination of the observed variables, the parameter estimates for age, period, and cohort effects are automatically obtained after the application of these factor models. Simulation results suggest that in almost all cases, the performance of the proposed method is better than that of a conventional econometric method. Empirical examples are also provided.
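As an illustration of the principal-components idea described above, the following sketch (not the authors' code; the simulated data and variable names are assumptions) regresses an outcome on the principal components of perfectly collinear age, period, and cohort dummies and then maps the component coefficients back to the original dummies.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Lexis-style grid: cohort = period - age, so the three dummy blocks are
# perfectly collinear by construction.
ages = np.arange(20, 70, 5)
periods = np.arange(1980, 2020, 5)
grid = pd.DataFrame([(a, p, p - a) for a in ages for p in periods],
                    columns=["age", "period", "cohort"])
y = (0.03 * grid["age"] + 0.01 * (grid["period"] - 1980)
     + rng.normal(0, 0.1, len(grid))).to_numpy()

# Dummy-code all three time scales; the design matrix is rank deficient.
X = pd.get_dummies(grid.astype("category")).to_numpy(dtype=float)

# Regress on the principal-component scores with non-negligible variance.
pca = PCA().fit(X)
keep = pca.explained_variance_ > 1e-8
scores = pca.transform(X)[:, keep]
gamma = LinearRegression().fit(scores, y).coef_

# Map back to coefficients on the original age/period/cohort dummies; this is
# one particular solution to the identification problem, not a unique one.
beta = pca.components_[keep].T @ gamma
print(beta[:len(ages)])  # implied age-effect contrasts (up to centering)
```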

3.
In large cohort studies it can be impractical to report individual data, so that only summary or aggregated data are available. Using aggregated data from Bernoulli trials is expected to result in overdispersion, so that a quasi-binomial approach would seem feasible. We show that when applied to aggregated data arising from cohorts of individuals according to a chain binomial model, the quasi-binomial model results in biased estimates. We propose an alternative calibration estimator and demonstrate its improved performance by simulations. The calibration method is then applied to model the probability of leaving a personal emergency link service in Hong Kong.
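For readers unfamiliar with the quasi-binomial baseline that this abstract critiques, here is a hedged sketch on simulated aggregated data; the within-cohort dependence step and the variable names are assumptions, and the authors' calibration estimator is not reproduced.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_groups, n_per_group = 50, 40
x = rng.normal(size=n_groups)                       # one group-level covariate
p = 1 / (1 + np.exp(-(-1.0 + 0.5 * x)))             # nominal leaving probability
# Within-cohort dependence (e.g., chain-binomial dynamics) inflates variance
# relative to independent Bernoulli trials.
p_actual = np.clip(p * rng.uniform(0.7, 1.3, n_groups), 0.01, 0.99)
successes = rng.binomial(n_per_group, p_actual)
failures = n_per_group - successes

endog = np.column_stack([successes, failures])      # (successes, failures) per group
exog = sm.add_constant(x)
# Quasi-binomial fit: binomial GLM with dispersion estimated from Pearson chi2.
quasi = sm.GLM(endog, exog, family=sm.families.Binomial()).fit(scale="X2")
print(quasi.params, quasi.scale)                    # scale > 1 signals overdispersion
```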

4.
We present a methodology for rating in real-time the creditworthiness of public companies in the U.S. from the prices of traded assets. Our approach uses asset pricing data to impute a term structure of risk neutral survival functions or default probabilities. Firms are then clustered into ratings categories based on their survival functions using a functional clustering algorithm. This allows all public firms whose assets are traded to be directly rated by market participants. For firms whose assets are not traded, we show how they can be indirectly rated by matching them to firms that are traded based on observable characteristics. We also show how the resulting ratings can be used to construct loss distributions for portfolios of bonds. Finally, we compare our ratings to Standard & Poor's and find that, over the period 2005 to 2011, our ratings lead theirs for firms that ultimately default.
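A toy sketch of the clustering step only (assumed flat hazards rather than market-implied survival curves, and k-means in place of the paper's functional clustering algorithm): firms are bucketed by the shape of their discretized survival functions and the buckets are ordered to mimic rating categories.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(9)
horizons = np.linspace(0.5, 10, 20)                 # rating horizons in years
hazards = rng.gamma(2.0, 0.01, size=200)            # one flat hazard per firm (assumed)
curves = np.exp(-np.outer(hazards, horizons))       # survival S(t) = exp(-h t)

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(curves)

# Order clusters from safest (highest average survival) to riskiest so that
# cluster labels behave like rating categories.
order = np.argsort(-km.cluster_centers_.mean(axis=1))
rating = {cluster: rank for rank, cluster in enumerate(order)}
print([rating[c] for c in km.labels_[:10]])         # ratings for the first ten firms
```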

5.
Method effects often occur when different methods are used for measuring the same construct. We present a new approach for modelling this kind of phenomenon, consisting of a definition of method effects and a first model, the method effect model, that can be used for data analysis. This model may be applied to multitrait–multimethod data or to longitudinal data where the same construct is measured with at least two methods at all occasions. In this new approach, the definition of the method effects is based on the theory of individual causal effects by Neyman and Rubin. Method effects are accordingly conceptualized as the individual effects of applying measurement method j instead of k. They are modelled as latent difference scores in structural equation models. A reference method needs to be chosen against which all other methods are compared. The model fit is invariant to the choice of the reference method. The model allows the estimation of the average of the individual method effects, their variance, their correlation with the traits (and other latent variables) and the correlation of different method effects among each other. Furthermore, since the definition of the method effects is in line with the theory of causality, the method effects may (under certain conditions) be interpreted as causal effects of the method. The method effect model is compared with traditional multitrait–multimethod models. An example illustrates the application of the model to longitudinal data, analysing the effect of negatively formulated items (such as 'feel bad') as compared with positively formulated items (such as 'feel good') in measuring mood states.

6.
Assessing long-term efficacy in psychiatric drugs involves a number of complex questions, and the priority of these questions is different for different disorders and for different stakeholders. Therefore, it is essential that we not adopt a one-method-fits-all approach, but rather adapt the specific details of the designs and analysis of data from long-term trials to individual disease states. Randomized withdrawal (RW) designs, even though addressing a specific question of particular interest, face some difficult methodological and ethical challenges. A less common alternative design, termed the double-blind long-term efficacy (DBLE) design, is logistically similar to traditional responder extension designs. However, use of an analytic approach that includes all randomized patients rather than only the selected subset that continued beyond acute treatment avoids the major criticism of the extender design. The present paper illustrates the attributes of the RW and DBLE designs by analyzing data from trials adopting these designs. The RW and DBLE designs address different questions, and are thus not directly comparable. Potential benefits of the DBLE design include: (1) the parsimonious use of patients and the resultant reduced exposure to placebo, as each patient can contribute to multiple developmental objectives; (2) the results are generalizable to actual clinical practice, as the design matches treatment guidelines; and (3) results of safety assessments are meaningful, as they involve all randomized patients. Our case study suggests that the DBLE design can provide definitive answers to important questions and may be a useful design for assessing long-term treatment effects.

7.
Current status data arise in studies where the target measurement is the time of occurrence of some event, but observations are limited to indicators of whether or not the event has occurred at the time the sample is collected - only the current status of each individual with respect to event occurrence is observed. Examples of such data arise in several fields, including demography, epidemiology, econometrics and bioassay. Although estimation of the marginal distribution of times of event occurrence is well understood, techniques for incorporating covariate information are not well developed. This paper proposes a semiparametric approach to estimation for regression models of current status data, using techniques from generalized additive modeling and isotonic regression. This procedure provides simultaneous estimates of the baseline distribution of event times and covariate effects. No parametric assumptions about the form of the baseline distribution are required. The results are illustrated using data from a demographic survey of breastfeeding practices in developing countries, and from an epidemiological study of heterosexual Human Immunodeficiency Virus (HIV) transmission.
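The nonparametric piece of such an analysis can be sketched as follows (simulated data; an assumption, not the paper's code): with current status observations, the NPMLE of the event-time distribution is the isotonic regression of the event indicators on the monitoring times. The paper's covariate (generalized additive) component is not reproduced here.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(2)
n = 300
event_time = rng.exponential(scale=2.0, size=n)       # unobserved times of interest
monitor_time = rng.uniform(0, 6, size=n)              # one inspection per subject
delta = (event_time <= monitor_time).astype(float)    # current status indicator

# Pool-adjacent-violators: monotone fit of the indicators on monitoring times.
iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
order = np.argsort(monitor_time)
F_hat = iso.fit_transform(monitor_time[order], delta[order])

# F_hat[i] estimates F(monitor_time[order][i]); compare with the true CDF.
true_F = 1 - np.exp(-monitor_time[order] / 2.0)
print(np.abs(F_hat - true_F).mean())
```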

8.
Patterns of sexual mixing and the sexual partner network are important determinants of the spread of all sexually transmitted diseases (STDs), including the human immunodeficiency virus. Novel statistical problems arise in the analysis and interpretation of studies aimed at measuring patterns of sexual mixing and sexual partner networks. Samples of mixing patterns and network structures derived from randomly sampling individuals are not themselves random samples of measures of partnerships or networks. In addition, the sensitive nature of questions on sexual activity will result in the introduction of non-response biases, which in estimating network structures are likely to be non-ignorable. Adjusting estimates for these biases by using standard statistical approaches is complicated by the complex interactions between the mechanisms generating bias and the non-independent nature of network data. Using a two-step Monte Carlo simulation approach, we have shown that measures of mixing patterns and the network structure that do not account for missing data and non-random sampling are severely biased. Here, we use this approach to adjust raw estimates in data to incorporate these effects. The results suggest that the risk for transmission of STDs in empirical data is underestimated by ignoring missing data and non-random sampling.

9.
10.
This study examines the relationship between accounting and market-value measures of profitability for individual firms. Differences in measures of profitability are observed both between and within industry groups and thus cannot be explained by differences in uncontrolled industry-specific influences. We suggest that both accounting and market-value measures can be used as unique but imperfect indicators of profitability. Using a LISREL model approach, we find R & D intensity, television advertising intensity, leverage, and industry growth to be important determinants of firm profits.

11.
In many prospective clinical and biomedical studies, longitudinal biomarkers are repeatedly measured as health indicators to evaluate disease progression when patients are followed up over a period of time. Patient visiting times can be referred to as informative observation times if they are assumed to carry information in addition to that of the longitudinal biomarker measures alone. Irregular visiting times may reflect compliance with physician instruction, disease progression and symptom severity. When the follow-up time may be stopped by competing terminal events, it is possible that patient observation times may correlate with the competing terminal events themselves, thus making the observation times difficult to assess. To explicitly account for the impact of competing terminal events and dependent observation times on the longitudinal data analysis in the context of such complex data, we propose a joint model using latent random effects to describe the association among them. A likelihood-based approach is derived for statistical inference. Extensive simulation studies reveal that the proposed approach performs well for practical situations, and an analysis of patients with chronic kidney disease in a cohort study is presented to illustrate the proposed method.

12.
We propose an approach for estimating the age at first lower endoscopy examination from current status data that were collected via two series of cross-sectional surveys. To model the national probability of ever having a lower endoscopy examination, we incorporate birth cohort effects into a mixed influence diffusion model. We link a state-specific model to the national level diffusion model by using a marginalized modelling approach. In future research, results from our model will be used as microsimulation model inputs to estimate the contribution of endoscopy examinations to observed changes in colorectal cancer incidence and mortality.

13.
Correlated survival data arise frequently in biomedical and epidemiologic research, because each patient may experience multiple events or because there exists clustering of patients or subjects, such that failure times within the cluster are correlated. In this paper, we investigate the appropriateness of the semi-parametric Cox regression and of the generalized estimating equations as models for clustered failure time data that arise from an epidemiologic study in veterinary medicine. The semi-parametric approach is compared with a proposed fully parametric frailty model. The frailty component is assumed to follow a gamma distribution. Estimates of the fixed covariate effects were obtained by maximizing the likelihood function, while an estimate of the variance component (frailty parameter) was obtained from a profile likelihood construction.
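A compact sketch of the fully parametric route, under stated assumptions (exponential baseline hazard, simulated clusters; not the authors' data or code): the gamma frailty is integrated out analytically, the fixed effects are maximized for each candidate frailty variance, and the resulting profile is scanned for the variance estimate.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(3)
n_clusters, m = 100, 4
theta_true, beta_true, lam0 = 0.5, 0.7, 0.2
w = rng.gamma(1 / theta_true, theta_true, n_clusters)   # mean-1 gamma frailties
x = rng.binomial(1, 0.5, (n_clusters, m))
rate = lam0 * np.exp(beta_true * x) * w[:, None]        # conditional hazards
t = rng.exponential(1 / rate)
c = rng.uniform(0, 8, (n_clusters, m))
time, delta = np.minimum(t, c), (t <= c).astype(float)  # right-censored data

def neg_loglik(params, theta):
    """Marginal log-likelihood with the gamma frailty integrated out."""
    log_lam0, beta = params
    lam = np.exp(log_lam0 + beta * x)                   # conditional hazards
    d = delta.sum(axis=1)                               # events per cluster
    A = (lam * time).sum(axis=1)                        # cumulative hazard per cluster
    ll = (delta * np.log(lam)).sum(axis=1)
    ll += gammaln(1 / theta + d) - gammaln(1 / theta)
    ll -= (1 / theta) * np.log(theta)
    ll -= (1 / theta + d) * np.log(A + 1 / theta)
    return -ll.sum()

# Profile likelihood over a grid of frailty variances.
grid = np.linspace(0.05, 1.5, 30)
profile = [-minimize(neg_loglik, x0=[np.log(0.2), 0.0], args=(th,)).fun for th in grid]
print("profile estimate of theta ~", grid[int(np.argmax(profile))])
```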

14.
Left-truncated data often arise in epidemiology and individual follow-up studies due to a biased sampling plan, since subjects with shorter survival times tend to be excluded from the sample. Moreover, the survival times of recruited subjects are often subject to right censoring. In this article, a general class of semiparametric transformation models that includes the proportional hazards model and the proportional odds model as special cases is studied for the analysis of left-truncated and right-censored data. We propose a conditional likelihood approach and develop the conditional maximum likelihood estimators (cMLE) for the regression parameters and cumulative hazard function of these models. The derived score equations for the regression parameters and the infinite-dimensional function suggest an iterative algorithm for the cMLE. The cMLE is shown to be consistent and asymptotically normal. The limiting variances for the estimators can be consistently estimated using the inverse of the negative Hessian matrix. Intensive simulation studies are conducted to investigate the performance of the cMLE. An application to the Channing House data is given to illustrate the methodology.
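As a minimal sketch of the conditioning idea in the proportional-hazards special case (simulated data and names are assumptions; this is not the paper's general transformation-model estimator), subjects enter the risk set only after their truncation times:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
n = 400
x = rng.binomial(1, 0.5, n)
t = rng.exponential(1 / (0.3 * np.exp(0.8 * x)))      # latent survival times
entry = rng.uniform(0, 2, n)                          # left-truncation times
keep = t > entry                                      # only these subjects are observed
t, x, entry = t[keep], x[keep], entry[keep]
c = entry + rng.uniform(0, 6, keep.sum())             # right censoring after entry
time, delta = np.minimum(t, c), (t <= c).astype(float)

def neg_partial_loglik(beta):
    """Cox partial likelihood with risk sets {j : entry_j < t_i <= time_j}."""
    eta = beta * x
    ll = 0.0
    for i in np.where(delta == 1)[0]:
        at_risk = (entry < time[i]) & (time >= time[i])
        ll += eta[i] - np.log(np.exp(eta[at_risk]).sum())
    return -ll

fit = minimize_scalar(neg_partial_loglik)
print("beta_hat =", fit.x)   # should be near the true value 0.8
```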

15.
A model which is an alternative to the age-period-cohort (APC) model is proposed for analyzing (age, period)-tabulated data on breast cancer deaths. The result of fitting the proposed model to the data for females in Japan is shown, where it is seen that the proposed model provides a better fit to the data than the APC model in terms of AIC. Moreover, the ML estimates of the parameters in the model suggest that the risks of breast cancer death existing in the environment have been rising rapidly since the period of high economic growth in Japan.

16.
The problem of interpreting lung-function measurements in industrial workers is examined. Two common lung-function measurements (FEV1 and FVC) are described. The standard method currently used in the analysis of such cross-sectional survey data is discussed. The basic assumption of a linear decline with age is questioned on the basis of large sets of data from a variety of industries in British Columbia. It is shown that, while the linear assumption holds approximately in unexposed, healthy, nonsmoking individuals, a quadratic age effect is often observed in smokers and/or in individuals who are industrially exposed to certain fumes or dusts. Recognizing this accelerated rate of deterioration in the lungs is of fundamental importance both to the identification of affected individuals and to the understanding of the process involved. An attempt is made to interpret the variety of nonlinear situations observed, by appealing to population selection mechanisms, individual variations in susceptibility, and the effects due to various levels of stimulus strength.
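To make the linear-versus-quadratic comparison concrete, here is a hedged illustration on simulated data (the decline parameters are assumptions, not estimates from the British Columbia surveys):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
age = rng.uniform(20, 65, 500)
# Assumed data-generating curve: accelerated decline at older ages, as might be
# seen in smokers or exposed workers.
fev1 = 4.5 - 0.02 * age - 0.0006 * (age - 20) ** 2 + rng.normal(0, 0.4, age.size)

X_lin = sm.add_constant(np.column_stack([age]))
X_quad = sm.add_constant(np.column_stack([age, age ** 2]))
fit_lin = sm.OLS(fev1, X_lin).fit()
fit_quad = sm.OLS(fev1, X_quad).fit()

print("AIC linear:", fit_lin.aic, " AIC quadratic:", fit_quad.aic)
print(fit_quad.compare_f_test(fit_lin))   # F-test of the added age^2 term
```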

17.
In this paper, we propose a two-stage functional principal component analysis method in age–period–cohort (APC) analysis. The first stage of the method considers the age–period effect with the fitted values treated as an offset; and the second stage of the method considers the residual age–cohort effect conditional on the already estimated age–period effect. An APC version of the model in functional data analysis provides an improved fit to the data, especially when the data are sparse and irregularly spaced. We demonstrate the effectiveness of the proposed method using body mass index data stratified by gender and ethnicity.
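A rough two-stage sketch under simplifying assumptions (a dense, regular grid and a plain SVD standing in for the paper's sparse functional principal component analysis): stage one removes an age-period fit, and stage two decomposes the age-by-cohort residual table.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
ages = np.arange(20, 60, 5)
periods = np.arange(1990, 2020, 5)
df = pd.DataFrame([(a, p, p - a) for a in ages for p in periods],
                  columns=["age", "period", "cohort"])
df["bmi"] = (22 + 0.05 * df["age"] + 0.02 * (df["period"] - 1990)
             + 0.5 * np.sin((df["cohort"] - 1930) / 10.0)
             + rng.normal(0, 0.3, len(df)))

# Stage 1: age-period fit (simple linear terms here); residuals carried forward.
X1 = np.column_stack([np.ones(len(df)), df["age"], df["period"]])
coef1, *_ = np.linalg.lstsq(X1, df["bmi"].to_numpy(), rcond=None)
df["resid"] = df["bmi"] - X1 @ coef1

# Stage 2: principal components (via SVD) of residuals arranged age-by-cohort;
# missing age/cohort combinations are filled with zero in this toy example.
tab = df.pivot_table(index="age", columns="cohort", values="resid").fillna(0.0)
u, s, vt = np.linalg.svd(tab.to_numpy(), full_matrices=False)
print("leading residual cohort component:", np.round(vt[0], 2))
```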

18.
Several studies have shown that at the individual level there exists a negative relationship between age at first birth and completed fertility. Using twin data in order to control for unobserved heterogeneity as a possible source of bias, Kohler et al. (2001) showed the significant presence of such a "postponement effect" at the micro level. In this paper, we apply sample selection models, where selection is based on having or not having had a first birth at all, to estimate the impact of postponing first births on subsequent fertility for four European nations, three of which now have lowest-low fertility levels. We use data from a set of comparative surveys (Fertility and Family Surveys), and we apply sample selection models to the logarithm of total fertility and to the progression to the second birth. Our results show that postponement effects are only very slightly affected by sample selection biases, so that sample selection models do not significantly improve on the results of standard regression techniques applied to selected samples. Our results confirm that the postponement effect is higher in countries with lowest-low fertility levels.
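A schematic two-step (Heckman-type) selection sketch on simulated data, standing in for the abstract's sample selection models (covariates, parameter values, and the single selection variable are assumptions): stage one is a probit for ever having a first birth, stage two adds the inverse Mills ratio to the outcome regression on the selected sample.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(7)
n = 2000
educ = rng.normal(size=n)
# Correlated errors create the selection bias that the two-step corrects for.
u, e = rng.multivariate_normal([0, 0], [[1.0, 0.4], [0.4, 0.25]], n).T
has_birth = (0.5 + 0.6 * educ + u > 0).astype(int)        # selection equation
age_first = 24 + 2 * educ + rng.normal(0, 2, n)
log_fert = 1.2 - 0.03 * (age_first - 24) + e              # outcome equation

# Step 1: probit for selection, then the inverse Mills ratio.
Z = sm.add_constant(educ)
probit = sm.Probit(has_birth, Z).fit(disp=0)
xb = Z @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)

# Step 2: outcome regression on the selected sample with the IMR as a control.
sel = has_birth == 1
X = sm.add_constant(np.column_stack([age_first[sel], imr[sel]]))
fit = sm.OLS(log_fert[sel], X).fit()
print(fit.params)   # postponement coefficient plus the selection-correction term
```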

19.
Using data on more than 1.7 million manufacturing firms over 1999-2007, this paper studies, from the perspective of firm heterogeneity, the technology spillover effects of FDI on domestic firms through horizontal and vertical linkages. The study not only answers whether FDI technology spillovers exist for China's manufacturing firms, but also examines in depth how such spillovers take place. The results show that: (1) overall, foreign-invested firms promote the productivity of local firms mainly through forward and backward linkage effects, while within industries a market crowding-out effect is present; (2) compared with private firms, foreign investment generates no positive technology spillovers to state-owned enterprises through either horizontal or vertical linkages, and the smaller the technology gap between domestic and foreign firms, the more conducive it is to FDI spillovers; (3) the heterogeneity of foreign investment affects the spillover effect: export-oriented FDI has a significant horizontal crowding-out effect on domestic firms, and the higher the degree of foreign control, the weaker the spillover effect.

20.
One of the main advantages of factorial experiments is the information that they can offer on interactions. When there are many factors to be studied, some or all of this information is often sacrificed to keep the size of an experiment economically feasible. Two strategies for group screening are presented for a large number of factors, over two stages of experimentation, with particular emphasis on the detection of interactions. One approach estimates only main effects at the first stage (classical group screening), whereas the other new method (interaction group screening) estimates both main effects and key two-factor interactions at the first stage. Three criteria are used to guide the choice of screening technique, and also the size of the groups of factors for study in the first-stage experiment. The criteria seek to minimize the expected total number of observations in the experiment, the probability that the size of the experiment exceeds a prespecified target and the proportion of active individual factorial effects which are not detected. To implement these criteria, results are derived on the relationship between the grouped and individual factorial effects, and the probability distributions of the numbers of grouped factors whose main effects or interactions are declared active at the first stage. Examples are used to illustrate the methodology, and some issues and open questions for the practical implementation of the results are discussed.
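The stage-one logic of the interaction group screening strategy can be sketched as follows (group sizes, active effects, and the full factorial in grouped factors are illustrative assumptions rather than the designs studied in the paper):

```python
import itertools
import numpy as np

rng = np.random.default_rng(8)
n_groups, per_group = 4, 4                       # 16 individual factors in 4 groups
design = np.array(list(itertools.product([-1, 1], repeat=n_groups)))  # 16 runs

# Individual-factor levels: every factor inherits its group's +/-1 level.
X_ind = np.repeat(design, per_group, axis=1)     # shape (16, 16)

# Assumed true model on individual factors: two main effects (factors 2 and 6)
# and one cross-group interaction (factors 2 and 9), plus noise.
y = (2.0 * X_ind[:, 2] + 1.5 * X_ind[:, 6] + 1.8 * X_ind[:, 2] * X_ind[:, 9]
     + rng.normal(0, 0.5, len(design)))

# Grouped model matrix: intercept, 4 grouped main effects, 6 grouped 2FIs.
cols = [np.ones(len(design))] + [design[:, g] for g in range(n_groups)]
pairs = list(itertools.combinations(range(n_groups), 2))
cols += [design[:, a] * design[:, b] for a, b in pairs]
M = np.column_stack(cols)
est, *_ = np.linalg.lstsq(M, y, rcond=None)

print("grouped main effects:", est[1:1 + n_groups].round(2))
print("grouped interactions:", dict(zip(pairs, est[1 + n_groups:].round(2))))
# Large grouped effects flag groups (and pairs of groups) for follow-up at stage two.
```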
