Similar Literature
20 similar documents were retrieved (search time: 343 ms).
1.
We consider the use of Monte Carlo methods to obtain maximum likelihood estimates for random effects models and distinguish between the pointwise and functional approaches. We explore the relationship between the two approaches and compare them with the EM algorithm. The functional approach is more ambitious, but the approximation is local in nature, which we demonstrate graphically using two simple examples. A remedy is to obtain successively better approximations of the relative likelihood function near the true maximum likelihood estimate. To save computing time, we use only one Newton iteration to approximate the maximiser of each Monte Carlo likelihood and show that this is equivalent to the pointwise approach. The procedure is applied to fit a latent process model to a set of polio incidence data. The paper ends with a comparison between the marginal likelihood and the recently proposed hierarchical likelihood, which avoids integration altogether.
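The one-step Newton shortcut described above can be sketched on a toy Gaussian random-intercept model. Everything below (the model, the simulated data, the step size) is an illustrative assumption rather than material from the paper: the marginal log-likelihood is approximated by plain Monte Carlo integration over the random effect, and a single Newton iteration with finite-difference derivatives updates the starting value.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    # Toy clustered data: 20 clusters of 5 observations sharing a random intercept.
    y = [rng.normal(1.0 + rng.normal(0.0, 0.5), 1.0, size=5) for _ in range(20)]
    sigma, sigma_b, M = 1.0, 0.5, 2000
    b_draws = rng.normal(0.0, sigma_b, size=M)   # common draws, reused for every mu

    def mc_loglik(mu):
        # Monte Carlo approximation of the marginal log-likelihood: for each
        # cluster, average the conditional likelihood over draws of the random effect.
        total = 0.0
        for yi in y:
            cond = norm.logpdf(yi[:, None], loc=mu + b_draws[None, :], scale=sigma).sum(axis=0)
            total += np.log(np.exp(cond).mean())
        return total

    # One Newton step from a starting value, with a finite-difference score and
    # negative curvature standing in for the observed information.
    mu0, h = 0.0, 1e-2
    score = (mc_loglik(mu0 + h) - mc_loglik(mu0 - h)) / (2 * h)
    info = -(mc_loglik(mu0 + h) - 2 * mc_loglik(mu0) + mc_loglik(mu0 - h)) / h**2
    mu1 = mu0 + score / info
    print("one-step Newton update of mu:", mu1)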

2.
Scientific experiments commonly result in clustered discrete and continuous data. Existing methods for analyzing such data include the use of quasi-likelihood procedures and generalized estimating equations to estimate marginal mean response parameters. In applications to areas such as developmental toxicity studies, where discrete and continuous measurements are recorded on each fetus, or clinical ophthalmologic trials, where different types of observations are made on each eye, the assumption that data within a cluster are exchangeable is often very reasonable. We use this assumption to formulate fully parametric regression models for clusters of bivariate data with binary and continuous components. The regression models proposed have marginal interpretations and reproducible model structures. Tractable expressions for the likelihood equations are derived and iterative schemes are given for computing efficient estimates (MLEs) of the marginal means, correlations, variances and higher moments. We demonstrate the use of the ‘exchangeable’ procedure with an application to a developmental toxicity study involving fetal weight and malformation data.

3.
A variable screening procedure via correlation learning was proposed in Fan and Lv (2008) to reduce dimensionality in sparse ultra-high dimensional models. Even when the true model is linear, the marginal regression can be highly nonlinear. To address this issue, we further extend correlation learning to marginal nonparametric learning. Our nonparametric independence screening, called NIS, is a specific member of the sure independence screening family. Several closely related variable screening procedures are proposed. For general nonparametric models, it is shown that, under mild technical conditions, the proposed independence screening methods enjoy a sure screening property. The extent to which the dimensionality can be reduced by independence screening is also explicitly quantified. As a methodological extension, a data-driven thresholding rule and an iterative nonparametric independence screening (INIS) procedure are also proposed to enhance the finite sample performance for fitting sparse additive models. The simulation results and a real data analysis demonstrate that the proposed procedure works well with moderate sample sizes and large dimensions and performs better than competing methods.
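A minimal sketch of marginal screening in this spirit: each covariate is ranked by how well a small marginal nonparametric fit of the response on that covariate alone explains the data, and the top-ranked covariates are retained. The data-generating model, the use of a low-degree polynomial basis in place of a spline basis, and the cutoff d are illustrative assumptions, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(1)
    n, p, d = 200, 1000, 20                 # sample size, dimension, variables kept
    X = rng.standard_normal((n, p))
    y = 2 * np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.5 * rng.standard_normal(n)   # sparse nonlinear truth

    def marginal_rss(xj, y, degree=3):
        # Fit y on a small polynomial basis of one covariate (a crude stand-in
        # for the spline basis) and return the residual sum of squares.
        B = np.vander(xj, degree + 1)       # columns: xj^3, xj^2, xj, 1
        coef, *_ = np.linalg.lstsq(B, y, rcond=None)
        resid = y - B @ coef
        return resid @ resid

    rss = np.array([marginal_rss(X[:, j], y) for j in range(p)])
    selected = np.argsort(rss)[:d]          # smallest RSS = largest marginal contribution
    print("indices surviving the screen:", np.sort(selected))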

4.
In this paper, we discuss the inference problem for the Box-Cox transformation model when one faces left-truncated and right-censored data, which often occur, for example, in studies involving a cross-sectional sampling scheme. It is well known that the Box-Cox transformation model includes many commonly used models as special cases, such as the proportional hazards model and the additive hazards model. For inference, a Bayesian estimation approach is proposed in which a piecewise function is used to approximate the baseline hazard function. The conditional marginal prior, whose marginal part is free of any constraints, is employed to deal with the many computational challenges caused by the constraints on the parameters, and an MCMC sampling procedure is developed. A simulation study is conducted to assess the finite sample performance of the proposed method and indicates that it works well in practical situations. We apply the approach to a set of data arising from a retirement center.

5.
Analyses of randomised trials are often based on regression models which adjust for baseline covariates, in addition to randomised group. Based on such models, one can obtain estimates of the marginal mean outcome for the population under assignment to each treatment, by averaging the model-based predictions across the empirical distribution of the baseline covariates in the trial. We identify under what conditions such estimates are consistent, and in particular show that for canonical generalised linear models, the resulting estimates are always consistent. We show that a recently proposed variance estimator underestimates the variance of the estimator around the true marginal population mean when the baseline covariates are not fixed in repeated sampling, and provide a simple adjustment to remedy this. We also describe an alternative semiparametric estimator, which is consistent even when the outcome regression model used is misspecified. The different estimators are compared through simulations and application to a recently conducted trial in asthma.
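A minimal sketch of the covariate-averaging step described above, assuming a canonical logistic working model fitted with statsmodels; the simulated trial, the single baseline covariate, and the coefficient values are illustrative. The marginal mean under each assignment is obtained by predicting for every participant with the treatment indicator set to that arm and averaging over the empirical covariate distribution.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)

    # Simulated two-arm trial with a binary outcome and one baseline covariate.
    n = 500
    z = rng.integers(0, 2, size=n)                      # randomised group
    x = rng.standard_normal(n)                          # baseline covariate
    pr = 1 / (1 + np.exp(-(-0.5 + 1.0 * z + 0.8 * x)))
    y = rng.binomial(1, pr)

    # Canonical logistic model adjusting for the baseline covariate.
    X_obs = np.column_stack([np.ones(n), z, x])
    fit = sm.GLM(y, X_obs, family=sm.families.Binomial()).fit()

    # Standardisation: predict for everyone with the treatment indicator set to 1
    # and to 0, then average over the empirical covariate distribution.
    X1 = np.column_stack([np.ones(n), np.ones(n), x])
    X0 = np.column_stack([np.ones(n), np.zeros(n), x])
    mu1, mu0 = fit.predict(X1).mean(), fit.predict(X0).mean()
    print("marginal means:", mu1, mu0, "  marginal risk difference:", mu1 - mu0)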

6.
Conditional probability distributions have commonly been used in modeling Markov chains. In this paper we consider an alternative approach based on copulas to investigate Markov-type dependence structures. Based on the realization of a single Markov chain, we estimate the parameters using one- and two-stage estimation procedures. We derive asymptotic properties of the marginal and copula parameter estimators and compare the performance of the estimation procedures using Monte Carlo simulations. At low and moderate levels of dependence, the two-stage estimation performs comparably to maximum likelihood estimation. In addition, we propose a parametric pseudo-likelihood ratio test for copula model selection under the two-stage procedure. We apply the proposed methods to an environmental data set.
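A minimal sketch of the two-stage idea for a Markov chain, under the simplifying assumptions of a normal marginal and a Gaussian pair copula (the chain is simulated as a Gaussian AR(1) process): stage one fits the marginal, stage two plugs the fitted marginal CDF into a pseudo-likelihood of consecutive pairs and maximises it over the copula parameter. None of these modelling choices are taken from the paper.

    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(3)

    # Simulate a stationary Gaussian AR(1) chain (a Markov chain whose consecutive
    # pairs follow a Gaussian copula with normal marginals).
    T, rho_true, mu_true, sd_true = 1000, 0.6, 2.0, 1.5
    z = np.empty(T)
    z[0] = rng.standard_normal()
    for t in range(1, T):
        z[t] = rho_true * z[t - 1] + np.sqrt(1 - rho_true ** 2) * rng.standard_normal()
    x = mu_true + sd_true * z

    # Stage 1: fit the marginal distribution (here normal) by maximum likelihood.
    mu_hat, sd_hat = x.mean(), x.std()

    # Stage 2: transform with the fitted marginal CDF and maximise the copula
    # pseudo-likelihood of consecutive pairs over the copula parameter rho.
    u = stats.norm.cdf(x, loc=mu_hat, scale=sd_hat)
    q = stats.norm.ppf(u)                     # normal scores (here just standardisation)
    pairs = np.column_stack([q[:-1], q[1:]])

    def neg_copula_loglik(rho):
        cov = np.array([[1.0, rho], [rho, 1.0]])
        joint = stats.multivariate_normal.logpdf(pairs, mean=[0.0, 0.0], cov=cov)
        marg = stats.norm.logpdf(pairs).sum(axis=1)
        return -(joint - marg).sum()          # negative log copula density of the pairs

    res = optimize.minimize_scalar(neg_copula_loglik, bounds=(-0.99, 0.99), method="bounded")
    print("two-stage estimates -- mu:", mu_hat, " sd:", sd_hat, " rho:", res.x)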

7.
Bayesian model learning based on a parallel MCMC strategy
We introduce a novel Markov chain Monte Carlo algorithm for the estimation of posterior probabilities over discrete model spaces. Our learning approach is applicable to families of models for which the marginal likelihood can be calculated analytically, either exactly or approximately, given any fixed structure. It is argued that for certain model neighborhood structures, the ordinary reversible Metropolis-Hastings algorithm does not yield an appropriate solution to the estimation problem. Therefore, we develop an alternative, non-reversible algorithm which can avoid the scaling effect of the neighborhood. To explore a model space efficiently, a finite number of interacting parallel stochastic processes is utilized. Our interaction scheme enables exploration of several local neighborhoods of a model space simultaneously, while preventing the absorption of any particular process into a relatively inferior state. We illustrate the advantages of our method with an application to a classification model. In particular, we use an extensive bacterial database and compare our results with those obtained by different methods on the same data.

8.
In many applications, a single Box–Cox transformation cannot necessarily produce normality, constancy of variance and linearity of the systematic effects all at once. In this paper, by establishing a heterogeneous linear regression model for the Box–Cox transformed response, we propose a hybrid strategy in which variable selection is employed to reduce the dimension of the explanatory variables in joint mean and variance models, and a Box–Cox transformation is applied to remedy the response. We propose a unified procedure that can simultaneously select significant variables in the joint mean and variance models of the Box–Cox transformation, which provides a useful extension of ordinary normal linear regression models. With an appropriate choice of the tuning parameters, we establish the consistency of this procedure and the oracle property of the obtained estimators. Moreover, we also consider the maximum profile likelihood estimator of the Box–Cox transformation parameter. Simulation studies and a real example are used to illustrate the application of the proposed methods.
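Profiling the Box–Cox transformation parameter can be sketched for a plain homoscedastic linear model, without the variable selection or the variance model discussed above; the simulated data and the grid of candidate values are illustrative assumptions. For each candidate lambda, the regression coefficients and error variance are profiled out by ordinary least squares, and the Jacobian term (lambda - 1) * sum(log y) is added.

    import numpy as np

    rng = np.random.default_rng(4)

    # Illustrative data: a positive response whose logarithm is linear in x,
    # so the true transformation parameter is close to 0.
    n = 300
    x = rng.uniform(0.5, 3.0, size=n)
    y = np.exp(0.3 + 0.8 * x + 0.2 * rng.standard_normal(n))
    X = np.column_stack([np.ones(n), x])

    def boxcox(y, lam):
        return np.log(y) if abs(lam) < 1e-8 else (y ** lam - 1.0) / lam

    def profile_loglik(lam):
        # Profile out the regression coefficients and error variance by OLS;
        # (lam - 1) * sum(log y) is the Jacobian of the transformation.
        z = boxcox(y, lam)
        beta, *_ = np.linalg.lstsq(X, z, rcond=None)
        rss = np.sum((z - X @ beta) ** 2)
        return -0.5 * n * np.log(rss / n) + (lam - 1.0) * np.log(y).sum()

    grid = np.linspace(-1.0, 1.5, 101)
    ll = np.array([profile_loglik(lam) for lam in grid])
    print("profile-likelihood estimate of lambda:", grid[ll.argmax()])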

9.
In this paper, a unified maximum marginal likelihood estimation procedure is proposed for the analysis of right-censored data using general partially linear varying-coefficient transformation models (GPLVCTM), which are flexible enough to include many survival models as special cases. Unknown functional coefficients in the models are approximated by cubic B-spline polynomials. We estimate the B-spline coefficients and regression parameters by maximizing the marginal likelihood function. One advantage of this procedure is that it is free of both the baseline and the censoring distributions. Through simulation studies and a real data application (the VA data from the Veterans' Administration Lung Cancer Study Clinical Trial), we illustrate that the proposed estimation procedure is accurate, stable and practical.

10.
One of the most important agents responsible for the high pollution levels in Tehran is carbon monoxide. Predicting carbon monoxide is of immense help in protecting the inhabitants’ health. In this paper, motivated by the statistical analysis of carbon monoxide using the empirical Bayes approach, we deal with the issue of prior specification for the model parameters. In fact, the hyperparameters (the parameters of the prior law) are estimated by a sampling-based method that depends only on the specification of the marginal spatial and temporal correlation structures. We compare the predictive performance of this approach with that of the type II maximum likelihood method. Results indicate that the proposed procedure performs better for this data set.

11.

Motivated by a breast cancer research program, this paper is concerned with the joint survivor function of multiple event times when their observations are subject to informative censoring caused by a terminating event. We formulate the correlation of the multiple event times, together with the time to the terminating event, by an Archimedean copula to account for the informative censoring. Adapting the widely used two-stage procedure under a copula model, we propose an easy-to-implement pseudo-likelihood-based procedure for estimating the model parameters. The approach yields a new estimator for the marginal distribution of a single event time with semicompeting-risks data. We establish asymptotic results and conduct simulation studies to examine the consistency, efficiency and robustness of the proposed approach. Data from the breast cancer program are employed to illustrate this research.

12.
In most software reliability models that utilize the nonhomogeneous Poisson process (NHPP), the intensity function of the counting process is usually assumed to be continuous and monotone. However, for various practical reasons, change points may exist in the intensity function, and the assumption of a continuous and monotone intensity function may therefore be unrealistic in many real situations. In this article, a Bayesian change-point approach using beta mixtures to model an intensity function with possible change points is proposed. A hidden Markov model with nonconstant transition probabilities is applied to the beta mixture to detect the change points of the parameters. The estimation and interpretation of the model are illustrated using the Naval Tactical Data System (NTDS) data. The proposed change-point model is also compared with competing models via the marginal likelihood; it has the highest marginal likelihood and outperforms the competing models.

13.
For many forms of cancer, patients will receive the initial regimen of treatments, then experience cancer progression and eventually die of the disease. Understanding the disease process in patients with cancer is essential in clinical, epidemiological and translational research. One challenge in analyzing such data is that death dependently censors cancer progression (e.g., recurrence), whereas progression does not censor death. We deal with the informative censoring by first selecting a suitable copula model through an exploratory diagnostic approach and then developing an inference procedure to simultaneously estimate the marginal survival function of cancer relapse and an association parameter in the copula model. We show that the proposed estimators possess consistency and weak convergence. We use simulation studies to evaluate the finite sample performance of the proposed method, and illustrate it through an application to data from a study of early stage breast cancer.

14.
New data collection and storage technologies have given rise to a new field of streaming data analytics: real-time statistical methodology for online data analyses. Most existing online learning methods are based on homogeneity assumptions, which require the samples in a sequence to be independent and identically distributed. However, inter-batch correlation and dynamically evolving batch-specific effects are among the key defining features of real-world streaming data such as electronic health records and mobile health data. This article is built on a state-space mixed model framework in which the observed data stream is driven by a latent state process that follows a Markov process. In this setting, online maximum likelihood estimation is made challenging by high-dimensional integrals and complex covariance structures. We develop a real-time Kalman-filter-based regression analysis method that updates both point estimates and their standard errors for the fixed population-average effects while adjusting for dynamic hidden effects. Both theoretical justification and numerical experiments demonstrate that the proposed online method has statistical properties similar to those of its offline counterpart and enjoys great computational efficiency. We also apply this method to analyze an electronic health record dataset.
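A minimal sketch of the online idea under strong simplifying assumptions: the fixed regression effect and a random-walk batch effect are stacked into one state vector and updated observation by observation with a standard Kalman filter, so point estimates and standard errors for the fixed effect are refreshed in real time. The model, all variance settings, and the simulated stream are assumptions for illustration and do not reproduce the authors' algorithm.

    import numpy as np

    rng = np.random.default_rng(5)

    # Simulated stream: y_t = x_t' beta + alpha_t + noise, where alpha_t is a
    # slowly drifting (random-walk) batch effect.
    T, p = 2000, 2
    beta_true = np.array([1.0, -0.5])
    x_stream = rng.standard_normal((T, p))
    alpha = np.cumsum(0.05 * rng.standard_normal(T))
    y_stream = x_stream @ beta_true + alpha + 0.3 * rng.standard_normal(T)

    # State vector = (beta, alpha_t): beta is static, alpha_t follows a random walk.
    dim = p + 1
    s = np.zeros(dim)                         # state mean
    P = np.eye(dim) * 10.0                    # state covariance (vague start)
    Q = np.zeros((dim, dim))
    Q[-1, -1] = 0.05 ** 2                     # state noise: only alpha_t moves
    R = 0.3 ** 2                              # observation noise variance

    for x, y in zip(x_stream, y_stream):
        P = P + Q                             # predict (identity transition)
        h = np.append(x, 1.0)                 # observation row (x_t, 1)
        v = y - h @ s                         # innovation
        S = h @ P @ h + R                     # innovation variance
        K = P @ h / S                         # Kalman gain
        s = s + K * v                         # updated point estimate
        P = P - np.outer(K, h @ P)            # updated covariance

    beta_hat, beta_se = s[:p], np.sqrt(np.diag(P)[:p])
    print("online estimate of beta:", beta_hat, "  standard errors:", beta_se)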

15.
The objective of this paper is to investigate through simulation the possible presence of the incidental parameters problem when performing frequentist model discrimination with stratified data. In this context, model discrimination amounts to considering a structural parameter taking values in a finite space with k points, k ≥ 2. This setting does not seem to have been considered yet in the literature on the Neyman–Scott phenomenon. Here we provide Monte Carlo evidence of the severity of the incidental parameters problem also in the model discrimination setting and propose a remedy for a special class of models. In particular, we focus on models that are scale families in each stratum. We consider traditional model selection procedures, such as the Akaike and Takeuchi information criteria, together with the best frequentist selection procedure based on maximization of the marginal likelihood induced by the maximal invariant, or of its Laplace approximation. Results of two Monte Carlo experiments indicate that when the sample size in each stratum is fixed and the number of strata increases, the correct selection probabilities of traditional model selection criteria may approach zero, unlike what happens for model discrimination based on exact or approximate marginal likelihoods. Finally, two examples with real data sets are given.

16.
We introduce a flexible marginal modelling approach for statistical inference for clustered and longitudinal data under minimal assumptions. This estimated estimating equations approach is semiparametric and the proposed models are fitted by quasi-likelihood regression, where the unknown marginal means are a function of the fixed effects linear predictor with unknown smooth link, and variance–covariance is an unknown smooth function of the marginal means. We propose to estimate the nonparametric link and variance–covariance functions via smoothing methods, whereas the regression parameters are obtained via the estimated estimating equations. These are score equations that contain nonparametric function estimates. The proposed estimated estimating equations approach is motivated by its flexibility and easy implementation. Moreover, if data follow a generalized linear mixed model, with either a specified or an unspecified distribution of random effects and link function, the model proposed emerges as the corresponding marginal (population-average) version and can be used to obtain inference for the fixed effects in the underlying generalized linear mixed model, without the need to specify any other components of this generalized linear mixed model. Among marginal models, the estimated estimating equations approach provides a flexible alternative to modelling with generalized estimating equations. Applications of estimated estimating equations include diagnostics and link selection. The asymptotic distribution of the proposed estimators for the model parameters is derived, enabling statistical inference. Practical illustrations include Poisson modelling of repeated epileptic seizure counts and simulations for clustered binomial responses.

17.
In longitudinal studies, because repeated observations are made on the same individual, the response variables will usually be correlated. In analyzing such data, this dependence must be taken into account to avoid misleading inferences. The focus of this paper is to apply the logistic marginal model with Markovian dependence proposed by Azzalini [A. Azzalini, Logistic regression for autocorrelated data with application to repeated measures, Biometrika 81 (1994) 767–775] to study the influence of time-dependent covariates on the marginal distribution of the binary response in serially correlated binary data. We show how to construct the model so that the covariates relate only to the mean value of the process, independently of the association parameters. After formulating the proposed model for repeated measures data, the same approach is applied to missing data. An application is provided to the diabetes mellitus data of registered patients at the Bangladesh Institute of Research and Rehabilitation in Diabetes, Endocrine and Metabolic Disorders (BIRDEM) in 1984, using both time-stationary and time-varying covariates.

18.
We consider a Bayesian method for analyzing paired survival data using the bivariate exponential model proposed by Moran (1967, Biometrika 54:385–394). Important features of Moran’s model are that the marginal distributions are exponential and that the correlation coefficient ranges between 0 and 1. These properties contrast with those of the popular exponential model with gamma frailty. Despite these nice properties, statistical analysis with Moran’s model has been hampered by the lack of a closed-form likelihood function. In this paper, we introduce a latent variable to circumvent the difficulty in the Bayesian computation. We also consider a model checking procedure using the predictive Bayesian P-value.

19.
The varying-coefficient model is an important nonparametric statistical model since it allows appreciable flexibility in the structure of the fitted model. For ultra-high dimensional heterogeneous data, it is necessary to examine how the effects of covariates vary with the exposure variable at the quantile levels of interest. In this paper, we extend marginal screening methods to examine and select variables by ranking a measure of the nonparametric marginal contribution of each covariate given the exposure variable. Spline approximations are employed to model the marginal effects and to select the set of active variables in a quantile-adaptive framework. This ensures the sure screening property in the quantile-adaptive varying-coefficient model. Numerical studies demonstrate that the proposed procedure works well for heteroscedastic data.
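A loose sketch of quantile-based marginal screening, far simpler than the quantile-adaptive varying-coefficient setup above: each covariate is ranked by how much a marginal quantile regression of the response on a small basis of that covariate reduces the check (pinball) loss at the quantile level of interest, and the top-ranked covariates are kept. The simulated heteroscedastic data, the quadratic basis standing in for splines, and the statsmodels QuantReg fit are all illustrative assumptions.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    tau = 0.75                                   # quantile level of interest
    n, p, d = 200, 400, 15
    X = rng.standard_normal((n, p))
    # Heteroscedastic truth: X[:, 1] acts on the scale, so it matters at tau = 0.75.
    y = 1.5 * X[:, 0] + (1.0 + 0.8 * X[:, 1] ** 2) * rng.standard_normal(n)

    def check_loss(u, tau):
        # Pinball (check) loss averaged over the sample.
        return np.mean(u * (tau - (u < 0)))

    base = check_loss(y - np.quantile(y, tau), tau)   # intercept-only fit

    def marginal_gain(xj):
        # Marginal quantile regression of y on a small basis of one covariate.
        B = sm.add_constant(np.column_stack([xj, xj ** 2]))
        fit = sm.QuantReg(y, B).fit(q=tau)
        return base - check_loss(y - fit.predict(B), tau)

    gain = np.array([marginal_gain(X[:, j]) for j in range(p)])
    selected = np.argsort(gain)[::-1][:d]        # largest reduction in check loss
    print("indices surviving the quantile screen:", np.sort(selected))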

20.
In this paper, we present an adaptive evolutionary Monte Carlo algorithm (AEMC), which combines a tree-based predictive model with an evolutionary Monte Carlo sampling procedure for the purpose of global optimization. Our development is motivated by sensor placement applications in engineering, which require optimizing complicated “black-box” objective functions. The proposed method is able to enhance optimization efficiency and effectiveness compared with several alternative strategies. AEMC falls into the category of adaptive Markov chain Monte Carlo (MCMC) algorithms and is the first adaptive MCMC algorithm that simulates multiple Markov chains in parallel. A theorem on the ergodicity of the AEMC algorithm is stated and proved. We demonstrate the advantages of the proposed method by applying it to a sensor placement problem in a manufacturing process, as well as to the standard Griewank test function.
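The Griewank test function mentioned above, together with a much-simplified stand-in for sampling-based optimization: several plain random-walk Metropolis chains run in parallel on a tempered target exp(-f(x)/temp), and the best point visited is recorded. This is not the AEMC algorithm (there is no tree-based surrogate, no evolutionary interaction, no adaptation); the dimension, temperature, and step size are illustrative choices.

    import numpy as np

    def griewank(x):
        # Standard Griewank test function; the global minimum is 0 at the origin.
        i = np.arange(1, x.size + 1)
        return 1.0 + np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

    rng = np.random.default_rng(7)
    dim, n_chains, n_iter, temp, step = 5, 8, 5000, 1.0, 0.5

    # Several independent random-walk Metropolis chains targeting exp(-f(x)/temp).
    chains = rng.uniform(-10.0, 10.0, size=(n_chains, dim))
    energy = np.array([griewank(c) for c in chains])
    best_f, best_x = energy.min(), chains[energy.argmin()].copy()

    for _ in range(n_iter):
        proposals = chains + step * rng.standard_normal(chains.shape)
        prop_energy = np.array([griewank(prop) for prop in proposals])
        accept = np.log(rng.random(n_chains)) < -(prop_energy - energy) / temp
        chains[accept] = proposals[accept]
        energy[accept] = prop_energy[accept]
        if energy.min() < best_f:
            best_f, best_x = energy.min(), chains[energy.argmin()].copy()

    print("best Griewank value found:", best_f, "at", best_x)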
