1.
We propose a consistent and locally efficient method of estimating the model parameters of a logistic mixed effect model with random slopes. Our approach relaxes two typical assumptions: the random effects being normally distributed, and the covariates and random effects being independent of each other. Adhering to these assumptions is particularly difficult in health studies where, in many cases, we have limited resources to design experiments and gather data in long-term studies, while new findings from other fields might emerge, suggesting the violation of such assumptions. So it is crucial to have an estimator that is robust to such violations; then we could make better use of current data harvested using various valuable resources. Our method generalizes the framework presented in Garcia & Ma (2016), which also deals with a logistic mixed effect model but only considers a random intercept. A simulation study reveals that our proposed estimator remains consistent even when the independence and normality assumptions are violated. This contrasts favourably with the traditional maximum likelihood estimator, which is likely to be inconsistent when there is dependence between the covariates and random effects. Application of this work to a study of Huntington's disease reveals that disease diagnosis can be enhanced using assessments of cognitive performance. The Canadian Journal of Statistics 47: 140–156; 2019 © 2019 Statistical Society of Canada

2.
Most parametric statistical methods are based on a set of assumptions: normality, linearity and homoscedasticity. Transformation of a metric response is a popular way to meet these assumptions; in particular, transforming the response of a linear model is a common approach to satisfying the Gaussian assumptions on the error components. A particular problem with common transformations such as the logarithm or the Box–Cox family is that negative and zero data values cannot be transformed. This paper proposes a new transformation which allows negative and zero data values. The method for estimating the transformation parameter considers an objective criterion based on kurtosis and skewness for achieving normality. Use of the new transformation and the method for estimating the transformation parameter are illustrated with three data sets.
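The paper's specific transformation is not reproduced here, but the estimation idea can be sketched with the Yeo-Johnson family, which likewise accepts zero and negative values: choose the transformation parameter by minimizing an objective built from the skewness and excess kurtosis of the transformed data. The choice of the Yeo-Johnson family and the equal weighting of the two terms are illustrative assumptions, not the authors' method.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from scipy.optimize import minimize_scalar

def yeo_johnson(y, lam):
    """Yeo-Johnson transform; defined for zero and negative values."""
    out = np.empty_like(y, dtype=float)
    pos, neg = y >= 0, y < 0
    if abs(lam) > 1e-8:
        out[pos] = ((y[pos] + 1.0) ** lam - 1.0) / lam
    else:
        out[pos] = np.log1p(y[pos])
    if abs(lam - 2.0) > 1e-8:
        out[neg] = -(((-y[neg] + 1.0) ** (2.0 - lam) - 1.0) / (2.0 - lam))
    else:
        out[neg] = -np.log1p(-y[neg])
    return out

def normality_objective(lam, y):
    """Penalize skewness and excess kurtosis of the transformed data."""
    z = yeo_johnson(y, lam)
    return skew(z) ** 2 + kurtosis(z) ** 2  # kurtosis() is excess kurtosis

rng = np.random.default_rng(1)
y = rng.gamma(2.0, 3.0, size=500) - 1.0   # skewed toy data with negative values
res = minimize_scalar(normality_objective, bounds=(-2, 4),
                      args=(y,), method="bounded")
print("estimated transformation parameter:", res.x)
```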

3.
Since departures from the classical assumptions regarding the disturbances in a linear regression model arise frequently in empirical applications, several computationally straightforward procedures are presented in this paper for testing non-nested models when the disturbances of these models follow first- or higher-order autoregressive processes. An empirical example is used to illustrate how the procedures may be used to test competing Keynesian and New Classical non-nested models of unemployment for the U.S. using annual time series data for 1955–85.

4.
Multi-stage time-evolving models are common statistical models for biological systems, especially insect populations. In stage-duration distribution models, parameter estimation typically uses the Laplace transform method. This method involves assumptions such as known constant shapes, known constant rates, or the same overall hazard rate for all stages. These assumptions are strong and restrictive. The main aim of this paper is to weaken them by using a Bayesian approach. In particular, a Metropolis-Hastings algorithm based on deterministic transformations is used to estimate parameters. We use two models: one with no hazard rates, and one with stage-wise constant hazard rates. These methods are validated in simulation studies followed by a case study of cattle parasites. The results show that the proposed methods estimate the parameters comparably well to the Laplace transform methods.
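As a rough illustration of the sampling machinery involved (not the deterministic-transformation sampler of the paper), the sketch below runs a plain random-walk Metropolis-Hastings algorithm for the shape and scale of gamma-distributed stage durations; the data, flat priors on the log scale, and tuning constant are all placeholders.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(7)
data = rng.gamma(shape=3.0, scale=2.0, size=200)   # toy stage durations

def log_post(theta):
    """Log-posterior for (log-shape, log-scale) with flat priors on the logs."""
    shape, scale = np.exp(theta)
    return gamma.logpdf(data, a=shape, scale=scale).sum()

theta = np.zeros(2)                 # start at shape = scale = 1
samples, lp = [], log_post(theta)
for _ in range(5000):               # plain random-walk Metropolis
    prop = theta + 0.05 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(np.exp(theta))
samples = np.array(samples[1000:])  # drop burn-in
print("posterior means (shape, scale):", samples.mean(axis=0))
```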

5.
A fundamental problem with the latent-time framework in competing risks is the lack of identifiability of the joint distribution. Given observed covariates, along with assumptions on the form of their effect, identifiability may be obtained. However, it is difficult to check any assumptions about form, since a more general model may lose identifiability. This paper considers a general framework for modelling the effect of covariates, with the single assumption that the copula dependency structure of the latent times is invariant to the covariates. This framework consists of a set of functions: the covariate-time transformations. The main result produces bounds on these functions, which are derived solely from the crude incidence functions. These bounds are a useful model-checking tool when considering the covariate-time transformation resulting from any particular set of further assumptions. An example is given where the widely used assumption of independent competing risks is checked.

6.
An often-used scenario in marketing is that of individuals purchasing in a Poisson manner, with their purchasing rates distributed gamma across the population of customers. Ehrenberg (1959) introduced the marketing community to this story and the resulting negative binomial distribution (NBD), and during the past 30 years the NBD model has been shown to work quite well. But the basic gamma/Poisson assumptions lack some face validity. In many product categories, customers purchase more regularly than exponential interpurchase times imply. There are some individuals who will never purchase. The purpose of this article is to review briefly the literature that addresses these and other issues. The tractable results presented arise when the basic gamma/Poisson assumptions are relaxed one issue at a time. Some conjectures are made about the robustness of the NBD when multiple deviations occur together. The NBD may work, but there are still opportunities for working on variations of the NBD theme.
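The gamma/Poisson story behind the NBD is easy to verify numerically: mixing individual Poisson purchase counts over gamma-distributed rates reproduces the negative binomial probabilities. The parameter values below are illustrative.

```python
import numpy as np
from scipy.stats import nbinom

rng = np.random.default_rng(0)
r, alpha = 2.0, 0.5                      # gamma shape and rate (illustrative)
lam = rng.gamma(shape=r, scale=1.0 / alpha, size=100_000)  # heterogeneity
x = rng.poisson(lam)                     # each customer buys Poisson(lam)

# Mixing a Poisson over a gamma gives NBD(r, p) with p = alpha / (alpha + 1)
p = alpha / (alpha + 1.0)
for k in range(5):
    print(k, (x == k).mean(), nbinom.pmf(k, r, p))
```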

7.
The need to use rigorous, transparent, clearly interpretable, and scientifically justified methodology for preventing and dealing with missing data in clinical trials has been a focus of much attention from regulators, practitioners, and academicians in recent years. New guidelines and recommendations emphasize the importance of minimizing the amount of missing data and carefully selecting primary analysis methods on the basis of assumptions regarding the missingness mechanism suitable for the study at hand, as well as the need to stress-test the results of the primary analysis under different sets of assumptions through a range of sensitivity analyses. Some methods that could be effectively used for dealing with missing data have not yet gained widespread usage, partly because of their underlying complexity and partly because of lack of relatively easy approaches to their implementation. In this paper, we explore several strategies for missing data on the basis of pattern mixture models that embody clear and realistic clinical assumptions. Pattern mixture models provide a statistically reasonable yet transparent framework for translating clinical assumptions into statistical analyses. Implementation details for some specific strategies are provided in an Appendix (available online as Supporting Information), whereas the general principles of the approach discussed in this paper can be used to implement various other analyses with different sets of assumptions regarding missing data. Copyright © 2013 John Wiley & Sons, Ltd.
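One concrete pattern-mixture-type strategy in this spirit is the delta adjustment: impute missing outcomes under a missing-at-random model, then shift the dropouts' imputed values by a clinically chosen offset delta. The sketch below uses a single crude normal imputation for brevity; a real analysis would use proper multiple imputation and a fitted imputation model.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
y = rng.normal(1.0, 1.0, n)            # complete outcomes (unobserved truth)
dropout = rng.uniform(size=n) < 0.3    # 30% of subjects drop out
y_obs = np.where(dropout, np.nan, y)

mu, sd = np.nanmean(y_obs), np.nanstd(y_obs)   # crude MAR imputation model
for delta in [0.0, -0.25, -0.5]:       # delta = 0 recovers the MAR analysis
    y_imp = y_obs.copy()
    # impute under MAR, then shift the dropouts' values by the chosen delta
    y_imp[dropout] = rng.normal(mu, sd, dropout.sum()) + delta
    print(f"delta={delta:+.2f}: estimated mean = {y_imp.mean():.3f}")
```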

8.
We first describe the time series modeling problem in a general way. Then some specific assumptions and observations which are pertinent to the application of these models are made. We next propose a specific approach to the modeling problem, one which yields efficient, easily calculated estimators of all parameters (under the stated assumptions). Finally, the technique is applied to the problem of modeling the census of a particular hospital.

9.
Female labor participation models have usually been studied through probit and logit specifications. Little attention has been paid to verifying the assumptions used in these sorts of models, basically distributional assumptions and homoskedasticity. In this paper we apply semiparametric methods to test these hypotheses. We also estimate a Spanish female labor participation model using both parametric and semiparametric approaches. The parametric models include fixed- and random-coefficient probit specifications. The estimation procedures are parametric maximum likelihood for both probit and logit models, and semiparametric quasi-maximum likelihood following Klein and Spady (1993). The results depend critically on the assumed model.

10.
The present work focuses on extensions of the posterior predictive p-value (ppp-value) to models with hierarchical structure, designed for testing assumptions made on underlying processes. The ppp-values are popular as tools for model criticism, yet their lack of a common interpretation limits their practical use. We discuss different extensions of ppp-values to hierarchical models, allowing for discrepancy measures that can be used for checking properties of the model at all stages. Through analytical derivations and simulation studies on simple models, we show that, like the standard ppp-values, these extensions are typically far from uniformly distributed under the model assumptions and can give poor power in a hypothesis-testing framework. We propose a calibration of the p-values, making the resulting calibrated p-values uniformly distributed under the model conditions. Illustrations are made through a real example of multinomial regression to age distributions of fish.
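A minimal sketch of a ppp-value computation, for a conjugate normal-mean model with the sample maximum as the discrepancy (both choices are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(11)
y = rng.normal(0.3, 1.0, size=50)      # observed data (toy)

# Conjugate model: y_i ~ N(mu, 1), prior mu ~ N(0, 10^2)
n, s2, tau2 = len(y), 1.0, 100.0
post_var = 1.0 / (n / s2 + 1.0 / tau2)
post_mean = post_var * y.sum() / s2

def discrepancy(data):
    return data.max()                  # illustrative discrepancy measure

# ppp-value: P(D(y_rep) >= D(y_obs)) over the posterior predictive
count, draws = 0, 4000
for _ in range(draws):
    mu = rng.normal(post_mean, np.sqrt(post_var))
    y_rep = rng.normal(mu, np.sqrt(s2), size=n)
    count += discrepancy(y_rep) >= discrepancy(y)
print("ppp-value:", count / draws)
# Calibration repeats this over datasets simulated from the model itself,
# then reports the rank of the observed ppp-value among the simulated ones.
```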

11.
A variety of statistical regression models have been proposed for the comparison of ROC curves for different markers across covariate groups. Pepe developed parametric models for the ROC curve that induce a semiparametric model for the marker distributions, relaxing the strong assumptions in fully parametric models. We investigate the analysis of the power ROC curve using these ROC-GLM models compared with the parametric exponential model and the estimating equations derived from the usual partial likelihood methods in time-to-event analyses. In exploring the robustness to violations of distributional assumptions, we find that the ROC-GLM provides an extra measure of robustness.
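The ROC-GLM machinery is not reproduced here, but the "power" ROC family the abstract refers to, ROC(t) = t^theta, can be fitted crudely from an empirical ROC curve; the simulated marker data and the log-scale least-squares fit below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(5)
x0 = rng.normal(0.0, 1.0, 400)     # marker values for controls
x1 = rng.normal(1.0, 1.0, 400)     # marker values for cases

# Empirical ROC: true-positive fraction at each false-positive fraction
fpf = np.linspace(0.01, 0.99, 99)
thresholds = np.quantile(x0, 1.0 - fpf)
tpf = (x1[:, None] >= thresholds[None, :]).mean(axis=0)
tpf = np.clip(tpf, 1e-6, 1.0)      # guard against log(0)

# Power ROC model ROC(t) = t**theta; least squares on logs through the origin
theta = np.sum(np.log(fpf) * np.log(tpf)) / np.sum(np.log(fpf) ** 2)
print("fitted theta:", theta, "(theta < 1 means better than chance)")
```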

12.
Under mild distributional assumptions, Langberg, Proschan, and Quinzi demonstrated how to change a dependent competing failure mode system into an independent one. In this paper, their procedure is utilized to construct dependent failure mode models. The common reliability models (exponential, Weibull, and extreme value) are examined with this procedure, and an example is discussed.

13.
Goodness-of-fit Tests for Mixed Models
Mixed linear models have become a very useful tool for modelling experiments with dependent observations within subjects, but to establish their appropriateness several assumptions have to be checked. In this paper, we focus on the normality assumptions, using goodness-of-fit tests that make allowance for possible design imbalance. These tests rely on asymptotic results, which are established via empirical process theory. The power of the tests is explored empirically, and examples illustrate some aspects of the usage of the tests.

14.
This paper discusses a general strategy for reducing measurement-error-induced bias in statistical models. It is assumed that the measurement error is unbiased with a known variance, although no other distributional assumptions on the measurement error are employed.

Using a preliminary fit of the model to the observed data, a transformation of the variable measured with error is estimated. The transformation is constructed so that the estimates obtained by refitting the model to the ‘corrected’ data have smaller bias.

Whereas the general strategy can be applied in a number of settings, this paper focuses on the problem of covariate measurement error in generalized linear models. Two estimators are derived and their effectiveness at reducing bias is demonstrated in a Monte Carlo study.
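The paper's transformation-based estimators are not reproduced here, but the bias being attacked is the classical attenuation effect, which a toy linear-model example with known error variance makes visible; the correction shown is the standard method-of-moments one, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(10)
n, beta = 5000, 1.5
x = rng.normal(0, 1, n)                   # true covariate
w = x + rng.normal(0, 0.8, n)             # observed with known error var 0.64
y = beta * x + rng.normal(0, 0.5, n)

b_naive = np.dot(w, y) / np.dot(w, w)     # attenuated toward zero
# Method-of-moments correction: divide by the reliability ratio
lam = (w.var() - 0.64) / w.var()
print("naive:", b_naive, " corrected:", b_naive / lam, " true:", beta)
```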

15.
We generalize Wedderburn's (1974) notion of quasi-likelihood to define a quasi-Bayesian approach for nonlinear estimation problems, by allowing the full distributional assumptions about the random component in the classical Bayesian approach to be replaced by much weaker assumptions in which only the first and second moments of the prior distribution are specified. The formulas given are based on the Gauss-Newton estimating procedure and require only the first and second moments of the distributions involved. The use of the GLIM package to solve the estimation problems considered is discussed. Applications are made to estimation problems in inverse linear regression, to regression models with both variables subject to error, and to the estimation of the size of animal populations. Some numerical illustrations are reported. For the inverse linear regression problem, comparisons with ordinary Bayesian and other techniques are considered.
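The quasi-Bayesian formulas themselves are not reproduced here, but the Gauss-Newton iteration they build on is short; the sketch below applies it to a toy exponential-decay mean function, using only residuals and the Jacobian of the mean function.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 4, 60)
beta_true = np.array([2.0, 0.7])
y = beta_true[0] * np.exp(-beta_true[1] * x) + rng.normal(0, 0.05, x.size)

def mean_fn(beta):
    return beta[0] * np.exp(-beta[1] * x)

def jacobian(beta):
    e = np.exp(-beta[1] * x)
    return np.column_stack([e, -beta[0] * x * e])

beta = np.array([1.0, 1.0])            # starting value
for _ in range(20):                    # Gauss-Newton iterations
    r = y - mean_fn(beta)
    J = jacobian(beta)
    step, *_ = np.linalg.lstsq(J, r, rcond=None)
    beta = beta + step
print("Gauss-Newton estimate:", beta)
```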

16.
The strong consistency of the least-squares estimates in regression models is obtained when the errors are i.i.d. with absolute moment of order r, 0 < r ≤ 2. The assumptions placed on the random error sequence permit us to weaken the conditions on the regressors required for strong consistency of the least-squares estimates in linear and nonlinear regression models.

17.
This paper shows how the bootstrap method can be used to estimate the joint distribution of sample autocorrelations and partial autocorrelations. The exact joint distribution of sample autocorrelations is mathematically intractable and attempts at workable approximations are difficult and rely on special assumptions. The bootstrap offers an accurate solution to this problem without requiring special assumptions and in a way that avoids theoretical difficulties. The bootstrap-estimated joint distributions of the autocorrelations and partial autocorrelations of time series are shown to lead to better ARMA model identification. This is demonstrated using simulated series.
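A minimal residual-bootstrap sketch of the joint sampling distribution of the first few autocorrelations of an AR(1) series (a full identification exercise would also bootstrap the partial autocorrelations and use many more replications):

```python
import numpy as np

def acf(x, nlags):
    """Sample autocorrelations r_1, ..., r_nlags."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, nlags + 1)])

rng = np.random.default_rng(4)
n, phi = 300, 0.6
e = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):                        # simulate an AR(1) series
    y[t] = phi * y[t - 1] + e[t]

# Fit AR(1) by least squares and bootstrap its residuals
phi_hat = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])
resid = y[1:] - phi_hat * y[:-1]
boot = []
for _ in range(1000):
    e_star = rng.choice(resid, size=n, replace=True)
    y_star = np.zeros(n)
    for t in range(1, n):
        y_star[t] = phi_hat * y_star[t - 1] + e_star[t]
    boot.append(acf(y_star, 3))
boot = np.array(boot)                        # joint draws of (r1, r2, r3)
print("bootstrap means:", boot.mean(axis=0))
print("bootstrap corr of r1, r2:", np.corrcoef(boot[:, 0], boot[:, 1])[0, 1])
```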

18.
When the target variable exhibits a semicontinuous behavior (a point mass at a single value and a continuous distribution elsewhere), parametric “two-part models” have been extensively used and investigated. The applications have mainly been related to nonnegative variables with a point mass at zero (zero-inflated data). In this article, a semiparametric Bayesian two-part model for dealing with such variables is proposed. The model allows a semiparametric expression for the two parts of the model by using Dirichlet processes. A motivating example, based on grape wine production in Tuscany (an Italian region), is used to show the capabilities of the model. Finally, two simulation experiments evaluate the model. Results show a satisfactory performance of the suggested approach for modeling and predicting semicontinuous data when parametric assumptions are not reasonable.
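The Dirichlet-process formulation is not sketched here, but the two-part decomposition itself is simple: one part models the probability of a positive response, the other models the response given that it is positive. Below is a fully parametric toy version with a Bernoulli occurrence part and a lognormal amount part; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2000
positive = rng.uniform(size=n) < 0.7                      # part 1: occurrence
y = np.where(positive, rng.lognormal(0.5, 0.8, n), 0.0)   # part 2: amount

# Fit the two parts separately
p_hat = (y > 0).mean()                         # occurrence probability
logy = np.log(y[y > 0])
mu_hat, sd_hat = logy.mean(), logy.std()       # lognormal parameters

# E[Y] combines both parts: P(Y > 0) * E[Y | Y > 0]
ey = p_hat * np.exp(mu_hat + 0.5 * sd_hat**2)
print("estimated E[Y]:", ey, " empirical mean:", y.mean())
```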

19.
As in many studies, the data collected are limited and an exact value is recorded only if it falls within an interval range; hence, the responses can be left, interval, or right censored. Linear (and nonlinear) regression models are routinely used to analyze these types of data and are based on normality assumptions for the error terms. However, such analyses might not provide robust inference when the normality assumptions are questionable. In this article, we develop a Bayesian framework for censored linear regression models by replacing the Gaussian assumptions for the random errors with scale mixtures of normal (SMN) distributions. The SMN is an attractive class of symmetric heavy-tailed densities that includes the normal, Student-t, Pearson type VII, slash, and contaminated normal distributions as special cases. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo algorithm is introduced to carry out posterior inference. A new hierarchical prior distribution is suggested for the degrees-of-freedom parameter in the Student-t distribution. The likelihood function is utilized to compute not only some Bayesian model selection measures but also to develop Bayesian case-deletion influence diagnostics based on the q-divergence measure. The proposed Bayesian methods are implemented in the R package BayesCR. The newly developed procedures are illustrated with applications using real and simulated data.
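The scale-mixture representation that drives this class is easy to check in a few lines: if U ~ Gamma(nu/2, nu/2) and Z | U ~ N(0, 1/U), then marginally Z is Student-t with nu degrees of freedom. The check below is in Python rather than the BayesCR package, and the value of nu is arbitrary.

```python
import numpy as np
from scipy.stats import t as student_t, kstest

rng = np.random.default_rng(8)
nu, n = 4.0, 200_000
u = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)  # Gamma(nu/2, rate nu/2)
z = rng.normal(0.0, 1.0 / np.sqrt(u))                  # normal given the scale

# The marginal of z should be Student-t with nu degrees of freedom
print(kstest(z, student_t(df=nu).cdf))
```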

20.
In responding to a rating question, an individual may give answers either according to his/her knowledge/awareness or to his/her level of indecision/uncertainty, typically driven by a response style. As ignoring this dual behavior may lead to misleading results, we define a multivariate model for ordinal rating responses by introducing, for every item and every respondent, a binary latent variable that discriminates aware from uncertain responses. Some independence assumptions among latent and observable variables characterize the uncertain behavior and make the model easier to interpret. Uncertain responses are modeled by specifying probability distributions that can depict different response styles. A marginal parameterization allows a simple and direct interpretation of the parameters in terms of association among aware responses and their dependence on explanatory factors. The effectiveness of the proposed model is demonstrated through an application to real data and supported by a Monte Carlo study.
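A minimal simulation of the aware/uncertain mixture idea. The concrete distributions below (a shifted binomial for aware responses and a uniform response style for uncertain ones) are one common choice in this literature, not necessarily the authors' specification.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(9)
m, n = 7, 5000                      # 7-point scale, n responses
pi, xi = 0.8, 0.3                   # P(aware) and "feeling" parameter

aware = rng.uniform(size=n) < pi    # latent aware/uncertain indicator
# aware responses: shifted binomial on {1, ..., m}
r_aware = 1 + rng.binomial(m - 1, 1 - xi, size=n)
# uncertain responses: here a uniform response style over the scale
r_unc = rng.integers(1, m + 1, size=n)
r = np.where(aware, r_aware, r_unc)

# Mixture pmf implied by the model, compared with simulated frequencies
k = np.arange(1, m + 1)
pmf = pi * binom.pmf(k - 1, m - 1, 1 - xi) + (1 - pi) / m
print("model pmf:    ", np.round(pmf, 3))
print("simulated freq:", np.round(np.bincount(r, minlength=m + 1)[1:] / n, 3))
```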
