1.
This paper extends methods for nonlinear regression analysis that have been developed for the analysis of clustered data. Its novelty lies in its dual incorporation of random cluster effects and structural error in the measurement of the explanatory variables. Moments up to second order are assumed to have been specified for the latter, enabling a generalized estimating equations approach to be used for fitting and testing nonlinear models linking the response to these explanatory variables and random effects. Taylor expansion methods are used, and a difficulty with earlier approaches is overcome. Finally, we describe an application of this methodology to indicate how it can be used: the degree of association between hospital admissions for acute respiratory health problems and air pollution.

2.
In this article, we introduce a new weighted quantile regression method. Traditionally, the estimation of the parameters involved in quantile regression is obtained by minimizing a loss function based on absolute distances with weights independent of explanatory variables. Specifically, we study a new estimation method using a weighted loss function with the weights associated with explanatory variables so that the performance of the resulting estimation can be improved. In full generality, we derive the asymptotic distribution of the weighted quantile regression estimators for any uniformly bounded positive weight function independent of the response. Two practical weighting schemes are proposed, each for a certain type of data. Monte Carlo simulations are carried out for comparing our proposed methods with the classical approaches. We also demonstrate the proposed methods using two real-life data sets from the literature. Both our simulation study and the results from these examples show that our proposed method outperforms the classical approaches when the relative efficiency is measured by the mean-squared errors of the estimators.
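The estimation idea can be sketched by minimising a weighted check (pinball) loss directly. The constant weights and all data-generating values below are illustrative assumptions, not taken from the article:

```python
import numpy as np
from scipy.optimize import minimize

def weighted_quantile_reg(X, y, tau, weights):
    """Minimise the weighted check loss  sum_i w_i * rho_tau(y_i - x_i'b)."""
    def loss(beta):
        r = y - X @ beta
        return np.sum(weights * np.where(r >= 0, tau * r, (tau - 1.0) * r))
    beta0 = np.zeros(X.shape[1])
    return minimize(loss, beta0, method="Nelder-Mead").x

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), x])
# symmetric errors, so the median regression line equals the mean line
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)
beta_hat = weighted_quantile_reg(X, y, tau=0.5, weights=np.ones(n))
```

With non-constant weights (e.g. weights depending on x, as the article proposes), only the `weights` argument changes.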

3.
The so-called “fixed effects” approach to the estimation of panel data models suffers from the limitation that it is not possible to estimate the coefficients on explanatory variables that are time-invariant. This is in contrast to a “random effects” approach, which achieves this by making much stronger assumptions on the relationship between the explanatory variables and the individual-specific effect. In a linear model, it is possible to obtain the best of both worlds by making random effects-type assumptions on the time-invariant explanatory variables while maintaining the flexibility of a fixed effects approach when it comes to the time-varying covariates. This article attempts to do the same for some popular nonlinear models.
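Why fixed effects cannot identify time-invariant coefficients is easy to see numerically: the within (demeaning) transformation annihilates any time-invariant column. A minimal sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, n_periods = 50, 4
unit = np.repeat(np.arange(n_units), n_periods)
x_tv = rng.normal(size=n_units * n_periods)            # time-varying regressor
x_ti = np.repeat(rng.normal(size=n_units), n_periods)  # time-invariant regressor

def within(v, unit):
    """Demean within each unit (the fixed-effects transformation)."""
    means = np.array([v[unit == g].mean() for g in np.unique(unit)])
    return v - means[unit]

# The within transformation wipes out any time-invariant column, which is
# why its coefficient cannot be estimated by fixed effects alone.
print(np.abs(within(x_ti, unit)).max())   # numerically zero
print(np.abs(within(x_tv, unit)).max())   # clearly non-zero
```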

4.
Multilevel models are popular for analysing data with a hierarchical structure and are used in fields as diverse as the social, medical, economic and biological sciences. Parameter estimation in these models becomes problematic when there are measurement errors in either the explanatory or the response variables. A common approach to tackling this obstacle is to introduce pseudo-variables and use simulation methods to estimate the parameters. We propose a new algorithm that alternates iterative estimation steps with simulation extrapolation steps. Various simulation studies are conducted to evaluate the proposed algorithm. Moreover, we illustrate our method on a real data set concerning the cost and expenditure of households in Tehran in 2007.
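The simulation extrapolation (SIMEX) idea the algorithm builds on can be sketched for a simple linear model: add extra measurement error at several inflation levels, track how the naive estimate degrades, and extrapolate back to the no-error level. All numbers below are illustrative assumptions, not the article's household data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta_true, sigma_u = 2000, 2.0, 0.5
x = rng.normal(size=n)
w = x + rng.normal(0, sigma_u, n)        # covariate observed with error
y = beta_true * x + rng.normal(0, 0.3, n)

def ols_slope(w, y):
    return np.cov(w, y)[0, 1] / np.var(w)

# Simulation step: inflate the measurement-error variance by a factor (1 + lam)
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lams:
    sims = [ols_slope(w + rng.normal(0, np.sqrt(lam) * sigma_u, n), y)
            for _ in range(50)]
    slopes.append(np.mean(sims))

# Extrapolation step: fit a quadratic in lam and evaluate at lam = -1 (no error)
coef = np.polyfit(lams, slopes, 2)
beta_simex = np.polyval(coef, -1.0)
naive = ols_slope(w, y)
```

The naive slope is attenuated towards zero; the extrapolated SIMEX estimate recovers most of that bias.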

5.
Most system identification approaches and statistical inference methods rely on analytic knowledge of the probability distribution of the system output variables. For dynamic systems modelled by hidden Markov chains or stochastic nonlinear state-space models, these distributions, as well as those of the state variables themselves, can be unknown or intractable. In that situation, the usual particle Monte Carlo filters for system identification, and likelihood-based inference and model selection methods, must rely on hazardous approximations whenever possible and are often unreliable. This review shows how a recent nonparametric particle filtering approach can be used efficiently in that context, not only for consistent filtering of these systems but also to restore these statistical inference methods, allowing, for example, consistent particle estimation of Bayes factors or the generalisation of sequential tests for model parameter change detection. Real-life applications of these particle approaches to a microbiological growth model are given as illustrations.
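A bootstrap particle filter of the kind this setting relies on can be sketched on a deliberately simple linear-Gaussian state-space model (chosen so the code stays short; the review's models are nonlinear). The parameter values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
T, N = 100, 500
phi, sig_x, sig_y = 0.9, 0.5, 0.5

# Simulate the hidden states and the noisy observations
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal(0, sig_x)
y = x + rng.normal(0, sig_y, T)

# Bootstrap particle filter: propagate, weight by the observation density, resample
particles = rng.normal(0, 1, N)
loglik, means = 0.0, []
for t in range(T):
    particles = phi * particles + rng.normal(0, sig_x, N)
    logw = -0.5 * ((y[t] - particles) / sig_y) ** 2
    w = np.exp(logw - logw.max())
    loglik += np.log(w.mean()) + logw.max() - 0.5 * np.log(2 * np.pi * sig_y**2)
    w /= w.sum()
    means.append(np.sum(w * particles))
    particles = rng.choice(particles, size=N, p=w)   # multinomial resampling
means = np.array(means)
rmse = np.sqrt(np.mean((means - x) ** 2))
```

The accumulated `loglik` is the particle estimate of the likelihood, which is what likelihood-based inference and Bayes factor estimation build on.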

6.
Method effects often occur when different methods are used to measure the same construct. We present a new approach for modelling this kind of phenomenon, consisting of a definition of method effects and a first model, the method effect model, that can be used for data analysis. This model may be applied to multitrait–multimethod data or to longitudinal data where the same construct is measured with at least two methods on all occasions. In this new approach, the definition of method effects is based on the theory of individual causal effects of Neyman and Rubin. Method effects are accordingly conceptualized as the individual effects of applying measurement method j instead of k. They are modelled as latent difference scores in structural equation models. A reference method needs to be chosen against which all other methods are compared; the model fit is invariant to this choice. The model allows estimation of the average of the individual method effects, their variance, their correlation with the traits (and other latent variables) and the correlations of different method effects with each other. Furthermore, since the definition of method effects is in line with the theory of causality, method effects may, under certain conditions, be interpreted as causal effects of the method. The method effect model is compared with traditional multitrait–multimethod models. An example illustrates the application of the model to longitudinal data, analysing the effect of negatively formulated items (such as 'feel bad') as compared with positively formulated items (such as 'feel good') for measuring mood states.

7.
Dependence in outcome variables can pose formidable difficulties in analysing data from longitudinal studies. Most previous studies attempted to address this problem using marginal models alone, but with these it is difficult to specify measures of dependence in outcomes arising from associations between outcomes as well as between outcomes and explanatory variables. In this paper, a generalized approach is demonstrated using both conditional and marginal models. The model uses link functions to test for dependence in outcome variables. The estimation and test procedures are illustrated with an application to the mobility index data from the Health and Retirement Survey, and simulations are performed for correlated binary data generated from bivariate Bernoulli distributions. The results indicate the usefulness of the proposed method.

8.
The analysis of failure time data often involves two strong assumptions. The proportional hazards assumption postulates that hazard rates corresponding to different levels of explanatory variables are proportional. The additive effects assumption specifies that the effect associated with a particular explanatory variable does not depend on the levels of other explanatory variables. A hierarchical Bayes model is presented, under which both assumptions are relaxed. In particular, time-dependent covariate effects are explicitly modelled, and the additivity of effects is relaxed through the use of a modified neural network structure. The hierarchical nature of the model is useful in that it parsimoniously penalizes violations of the two assumptions, with the strength of the penalty being determined by the data.

9.
We propose a three-step procedure to investigate measurement bias and response shift, a special case of measurement bias in longitudinal data. Structural equation modelling is used in each of the three steps, which can be described as (1) establishing a measurement model using confirmatory factor analysis, (2) detecting measurement bias by testing the equivalence of model parameters across measurement occasions, and (3) detecting measurement bias with respect to additional exogenous variables by testing their direct effects on the indicator variables. The resulting model can be used to investigate true change in the attributes of interest by testing changes in common factor means. Solutions for the issues of constraint interaction and of chance capitalisation in model specification searches are discussed as part of the procedure. The procedure is illustrated by applying it to longitudinal health-related quality-of-life data of HIV/AIDS patients, collected at four semi-annual measurement occasions.

10.
Bayesian model building techniques are developed for data that have a strong time series structure and possibly exogenous explanatory variables with strong explanatory and predictive power. The emphasis is on determining whether, when the data have a strong time series structure, any explanatory variables should also be included in the model. We use a time series model that is linear in past observations and that can capture stochastic and deterministic trend, seasonality and serial correlation. We propose plotting absolute predictive error against predictive standard deviation; a series of such plots is used to determine which of several nested and non-nested models is optimal in terms of minimizing the dispersion of the predictive distribution and restricting predictive outliers. We apply the techniques to modelling monthly counts of fatal road crashes in Australia, where economic, consumption and weather variables are available, and find that three such variables should be included in addition to the time series filter. The approach leads to graphical techniques for determining the strength of relationships between the dependent variable and covariates and for detecting model inadequacy, as well as useful numerical summaries.

11.
Beta regression models provide an adequate approach for modeling continuous outcomes limited to the interval (0, 1). This paper deals with an extension of beta regression models that allow for explanatory variables to be measured with error. The structural approach, in which the covariates measured with error are assumed to be random variables, is employed. Three estimation methods are presented, namely maximum likelihood, maximum pseudo-likelihood and regression calibration. Monte Carlo simulations are used to evaluate the performance of the proposed estimators and the naïve estimator. Also, a residual analysis for beta regression models with measurement errors is proposed. The results are illustrated in a real data set.
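Of the three estimation methods, regression calibration is the simplest to sketch: replace the error-prone covariate by its conditional expectation given the observation. The sketch below uses a linear model with known error variances purely for illustration; the article applies the idea to beta regression:

```python
import numpy as np

rng = np.random.default_rng(4)
n, beta, sig_x, sig_u = 5000, 1.5, 1.0, 0.6
x = rng.normal(0, sig_x, n)
w = x + rng.normal(0, sig_u, n)          # error-prone surrogate for x
y = beta * x + rng.normal(0, 0.3, n)

# Regression calibration: replace w by E[x | w] under joint normality
lam = sig_x**2 / (sig_x**2 + sig_u**2)   # reliability ratio (assumed known here)
x_cal = lam * w                          # E[x | w], with zero means

naive = (w @ y) / (w @ w)                # attenuated towards zero
calibrated = (x_cal @ y) / (x_cal @ x_cal)
```

In the beta regression setting the same calibrated covariate would be plugged into the logit-linked mean model instead of a least squares fit.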

12.
Suppose that the conditional density of a response variable given a vector of explanatory variables is parametrically modelled, and that data are collected by a two-phase sampling design. First, a simple random sample is drawn from the population. The stratum membership in a finite number of strata of the response and explanatory variables is recorded for each unit. Second, a subsample is drawn from the phase-one sample such that the selection probability is determined by the stratum membership. The response and explanatory variables are fully measured at this phase. We synthesize existing results on nonparametric likelihood estimation and present a streamlined approach for the computation and the large sample theory of profile likelihood in four different situations. The amount of information in terms of data and assumptions varies depending on whether the phase-one data are retained, the selection probabilities are known, and/or the stratum probabilities are known. We establish and illustrate numerically the order of efficiency among the maximum likelihood estimators, according to the amount of information utilized, in the four situations.

13.
In recent decades, the number of variables explaining observations in practical applications has grown steadily. This has led to heavy computational tasks, despite the widespread use of provisional variable selection methods in data processing. Consequently, more methodological techniques have appeared for reducing the number of explanatory variables without losing much information. Two distinct approaches are apparent in these techniques: ‘shrinkage regression’ and ‘sufficient dimension reduction’. Surprisingly, there has been little communication or comparison between these two methodological categories, and it is not clear when each approach is appropriate. In this paper, we fill some of this gap by first reviewing each category in brief, paying special attention to its most commonly used methods. We then compare commonly used methods from both categories on their accuracy, computation time, and ability to select effective variables. A simulation study of the performance of the methods in each category is conducted as well. The selected methods are also tested on two real data sets, which allows us to recommend conditions under which each approach is more appropriate for high-dimensional data.
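As a concrete instance of the shrinkage-regression category, a lasso fit via cyclic coordinate descent with soft-thresholding can be sketched in a few lines. The method choice and the simulated data are illustrative, not necessarily those compared in the article:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent with soft-thresholding."""
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual for x_j
            rho = X[:, j] @ r_j
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ss[j]
    return beta

rng = np.random.default_rng(5)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.array([3.0, -2.0] + [0.0] * 8)
y = X @ beta_true + rng.normal(0, 0.5, n)
beta_hat = lasso_cd(X, y, lam=50.0)      # zeroes out the irrelevant variables
```

The exact zeros produced by soft-thresholding are what makes shrinkage regression a variable selection device as well as an estimator.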

14.
State space modelling and Bayesian analysis are both active areas of applied research in fisheries stock assessment. Combining the two facilitates the fitting of state space models that may be non-linear and have non-normal errors, and hence is particularly useful for modelling fisheries dynamics. Here, this approach is demonstrated by fitting a non-linear surplus production model to data on South Atlantic albacore tuna (Thunnus alalunga). The state space approach allows for random variability both in the data (the measurement of relative biomass) and in the annual biomass dynamics of the tuna stock. Sampling from the joint posterior distribution of the unobservables was achieved by using Metropolis-Hastings within Gibbs sampling.
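The Metropolis-Hastings kernel used inside such Gibbs sweeps can be sketched on a deliberately simple target, a normal mean with a vague normal prior, where the posterior is known. All values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
y = rng.normal(2.0, 1.0, 50)             # data with known unit variance

def log_post(mu):
    # N(0, 10^2) prior on mu plus a Gaussian log-likelihood
    return -0.5 * (mu / 10.0) ** 2 - 0.5 * np.sum((y - mu) ** 2)

# Random-walk Metropolis: the accept/reject step reused inside Gibbs sweeps
mu, chain = 0.0, []
lp = log_post(mu)
for _ in range(20000):
    prop = mu + rng.normal(0, 0.5)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        mu, lp = prop, lp_prop
    chain.append(mu)
chain = np.array(chain[5000:])           # discard burn-in
```

In a within-Gibbs scheme, one such update per parameter (or state variable) is performed in turn, conditioning on the current values of all the others.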

15.
This paper considers the problem of estimating the linear parameters of a generalised linear model (GLM) when the explanatory variable is subject to measurement error. In this situation the induced model for dependence on the approximate explanatory variable is not usually of GLM form. However, when the distribution of the measurement error is known or estimated from replicated measurements, application of the GLIM iteratively reweighted least squares algorithm with transformed data and weighting is shown to produce maximum quasi-likelihood estimates in many cases. Details of this approach are given for two particular generalized linear models; simulation results illustrate the usefulness of the theory for these models.
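The iteratively reweighted least squares algorithm referred to here can be sketched for a logistic GLM without measurement error (the paper's transformed-data weighting is omitted; the simulated data are invented):

```python
import numpy as np

def irls_logistic(X, y, n_iter=25):
    """Fisher scoring / IRLS for a logistic GLM."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))
        w = mu * (1.0 - mu)                    # GLM working weights
        z = eta + (y - mu) / w                 # working response
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (w * z))
    return beta

rng = np.random.default_rng(7)
n = 2000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x)))
y = (rng.uniform(size=n) < p).astype(float)
beta_hat = irls_logistic(X, y)
```

The paper's approach keeps this same loop but replaces the data and weights by transformed versions that account for the measurement-error distribution.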

16.
Evaluation of trace evidence in the form of multivariate data
The evaluation of measurements on characteristics of trace evidence found at a crime scene and on a suspect is an important part of forensic science. Five methods of assessing the value of the evidence for multivariate data are described: two based on significance tests and three on the evaluation of likelihood ratios. The likelihood ratio, which compares the probability of the measurements on the evidence assuming a common source for the crime scene and suspect evidence with the probability assuming different sources, is a well-documented measure of the value of the evidence. One of the likelihood ratio approaches transforms the data to a univariate projection based on the first principal component. The other two versions of the likelihood ratio for multivariate data account for correlation among the variables and for two levels of variation: that between sources and that within sources. One version models between-source variability with a multivariate normal distribution; the other with a multivariate kernel density estimate. Results are compared from the analysis of measurements on the elemental composition of glass.
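The two-level likelihood ratio can be sketched in one dimension with a normal between-source model and known variances (the article's glass data are multivariate; all numbers here are invented):

```python
import numpy as np
from scipy.stats import norm

mu, tau, sigma = 0.0, 2.0, 0.5   # between-source and within-source sds (assumed known)

def log_marginal(y):
    # Marginally, a single measurement is N(mu, tau^2 + sigma^2)
    return norm.logpdf(y, mu, np.sqrt(tau**2 + sigma**2))

def log_joint_same(y1, y2):
    # Same source: y1, y2 share theta ~ N(mu, tau^2), giving a bivariate
    # normal with variance tau^2 + sigma^2 and covariance tau^2
    var, cov = tau**2 + sigma**2, tau**2
    det = var**2 - cov**2
    q = (var * (y1 - mu)**2 - 2 * cov * (y1 - mu) * (y2 - mu)
         + var * (y2 - mu)**2) / det
    return -np.log(2 * np.pi) - 0.5 * np.log(det) - 0.5 * q

def likelihood_ratio(y1, y2):
    # Common-source density over the product of different-source densities
    return np.exp(log_joint_same(y1, y2) - log_marginal(y1) - log_marginal(y2))

lr_close = likelihood_ratio(1.00, 1.05)   # nearly identical measurements
lr_far = likelihood_ratio(1.00, -2.50)    # very different measurements
```

Values above 1 support the common-source proposition; values below 1 support different sources.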

17.
Multivariate extreme events are typically modelled using multivariate extreme value distributions. Unfortunately, no finite parametrization exists for the class of multivariate extreme value distributions. One common approach is to model extreme events using some flexible parametric subclass. This approach has been limited to two or three dimensions, primarily because suitably flexible high-dimensional parametric models have prohibitively complex density functions. We present an approach that allows a number of popular flexible models to be used in arbitrarily high dimensions. The approach easily handles missing and censored data, and can be employed when modelling componentwise maxima and multivariate threshold exceedances. It is based on a representation using conditionally independent marginal components, conditioning on positive stable random variables. We use Bayesian inference, treating the conditioning variables as auxiliary variables within Markov chain Monte Carlo simulations. We demonstrate these methods with an application to sea levels, using data collected at 10 sites on the east coast of England.

18.
This note considers a method for estimating regression parameters from data containing measurement errors, using natural estimates of the unobserved explanatory variables. It is shown that the resulting estimator is consistent not only in the usual linear regression model but also in the probit model and in regression models with censoring or truncation. However, it fails to be consistent in nonlinear regression models except in special cases.

19.
The classical approach to the analysis of data from repeated surveys, based on Patterson (1950), has recently been extended by the work of Blight and Scott (1973) and Scott and Smith (1974), by assuming that a time series relationship exists between the population parameters at different times. The purpose of this paper is to compare the efficiencies of these three approaches by computing the mean square error (MSE) of the estimators of the current mean and of the change in mean on the last two occasions.
