Similar Documents
20 similar documents found.
1.
A multi-level model allows marginalization across levels in different ways, yielding more than one possible marginal likelihood. Since log-likelihoods are often used in classical model comparison, the question is which likelihood should be chosen for a given model. The authors employ a Bayesian framework to shed light on a qualitative comparison of the likelihoods associated with a given model. They connect these results to related issues of the effective number of parameters, the penalty function, and a consistent definition of a likelihood-based model choice criterion. In particular, with a two-stage model they show that, very generally, regardless of the hyperprior specification, how much data are collected, or what the realized values are, the first-stage likelihood is a priori expected to be smaller than the marginal likelihood. A posteriori, these expectations are reversed, and the disparities worsen with increasing sample size and with an increasing number of model levels.
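The prior/posterior ordering of the two likelihoods can be seen in a toy normal-normal model. The sketch below is purely illustrative (it is not taken from the paper): with y | θ ~ N(θ, σ²) and θ ~ N(μ, τ²), the marginal is y ~ N(μ, σ² + τ²); Jensen's inequality gives E_prior[log f(y|θ)] ≤ log m(y), while E_posterior[log f(y|θ)] − log m(y) equals the Kullback-Leibler divergence from the prior to the posterior, which is non-negative.

```python
# Illustrative toy model, not from the paper: y | theta ~ N(theta, sigma^2),
# theta ~ N(mu, tau^2), so the marginal is y ~ N(mu, sigma^2 + tau^2).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, tau, sigma, y = 0.0, 1.0, 1.0, 1.5        # y is a fixed observed datum
n = 200_000

log_marginal = norm.logpdf(y, mu, np.sqrt(sigma**2 + tau**2))

theta_prior = rng.normal(mu, tau, n)           # draws from the prior
post_var = 1.0 / (1.0 / tau**2 + 1.0 / sigma**2)
post_mean = post_var * (mu / tau**2 + y / sigma**2)
theta_post = rng.normal(post_mean, np.sqrt(post_var), n)  # posterior draws

# E_prior[log f(y|theta)] <= log m(y) <= E_post[log f(y|theta)]:
print(norm.logpdf(y, theta_prior, sigma).mean(),
      log_marginal,
      norm.logpdf(y, theta_post, sigma).mean())
```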

2.
Generalized linear models provide a useful tool for analyzing data from quality-improvement experiments. We discuss why the analysis must be based on all the data, not just on summary quantities, and show by example how residuals can be used for model checking. A restricted-maximum-likelihood-type adjustment for the dispersion analysis is developed.
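As a concrete illustration of residual-based model checking in a GLM, a minimal sketch with statsmodels follows; the gamma/log-link model and the simulated factors are hypothetical, not the designs analysed in the paper.

```python
# A minimal sketch with hypothetical data, not the paper's analysis: fit a
# gamma GLM with a log link and extract deviance residuals for model checking.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 80
X = sm.add_constant(rng.normal(size=(n, 2)))   # two hypothetical design factors
mu = np.exp(X @ np.array([1.0, 0.5, -0.3]))    # mean on the log-link scale
y = rng.gamma(shape=4.0, scale=mu / 4.0)       # gamma responses with mean mu

fit = sm.GLM(y, X, family=sm.families.Gamma(link=sm.families.links.Log())).fit()
resid = fit.resid_deviance                     # deviance residuals
print(resid[:5])
# Plot resid against fitted values and factor levels to check the model fit.
```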

3.
Abstract. Generalized cross-validation (GCV) is frequently applied to select the bandwidth when kernel methods are used to estimate non-parametric mixed-effect models, in which non-parametric mean functions model covariate effects and additive random effects account for overdispersion and correlation; the optimality of the GCV, however, has not yet been explored. In this article, we construct a kernel estimator of the non-parametric mean function. An equivalence between the kernel estimator and a weighted least-squares-type estimator is established, and the optimality of the GCV-based bandwidth is investigated. The theoretical derivations also show that kernel-based and spline-based GCV give very similar asymptotic results, providing a solid basis for using kernel estimation in mixed-effect models. Simulation studies are undertaken to investigate the empirical performance of the GCV, and a real data example is analysed for illustration.
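For readers unfamiliar with GCV, the following self-contained sketch applies the criterion to an ordinary Nadaraya-Watson smoother (without random effects, so it is a simplification of the models treated in the article): GCV(h) = n⁻¹‖y − S_h y‖² / (1 − tr(S_h)/n)², minimized over the bandwidth h.

```python
# Self-contained illustration (a simplification, not the article's estimator):
# choose the bandwidth of a Nadaraya-Watson smoother by generalized
# cross-validation, where S is the smoother matrix mapping y to fitted values.
import numpy as np

def gcv_score(x, y, h):
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)  # Gaussian kernel
    S = K / K.sum(axis=1, keepdims=True)                     # smoother matrix
    resid = y - S @ y
    return np.mean(resid**2) / (1.0 - np.trace(S) / len(y)) ** 2

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 100))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 100)

grid = np.linspace(0.01, 0.3, 60)
h_opt = grid[np.argmin([gcv_score(x, y, h) for h in grid])]
print("GCV-selected bandwidth:", h_opt)
```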

4.
Abstract. This paper addresses a default Bayesian analysis of the shape parameter of the skew-normal distribution. Our approach is based on a suitable pseudo-likelihood function and a matching prior distribution for this parameter when the location (or regression) and scale parameters are unknown. The approach is important for both theoretical and practical reasons. From a theoretical perspective, the proposed matching prior is shown to be proper, thus inducing a proper posterior distribution for the shape parameter even when the likelihood is monotone. From a practical perspective, the approach avoids elicitation of priors on the nuisance parameters and the computation of multidimensional integrals.
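The monotone-likelihood problem that the matching prior resolves is easy to reproduce numerically. In the hypothetical example below, every standardized observation is positive, so the skew-normal log-likelihood increases without a finite maximizer as the shape parameter grows; this is a known pathology of the MLE, not an analysis from the paper.

```python
# Illustration of the monotone-likelihood pathology (hypothetical data): for an
# all-positive standardized sample, the SN(0, 1, alpha) log-likelihood is
# increasing in alpha, so no finite MLE of the shape parameter exists.
import numpy as np
from scipy.stats import skewnorm

z = np.array([0.3, 0.8, 1.1, 1.9, 2.4])
for a in [0.0, 1.0, 5.0, 20.0, 100.0]:
    print(a, skewnorm.logpdf(z, a).sum())   # keeps increasing with a
```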

5.
The generalized estimating equations (GEE) approach has attracted considerable interest for the analysis of correlated response data. This paper considers a model selection criterion based on the multivariate quasi-likelihood (MQL), to which the GEE approach is closely related. We derive a necessary and sufficient condition for the uniqueness of the risk function based on the MQL using properties of differential geometry. Furthermore, we give a formal derivation of the model selection criterion as an asymptotically unbiased estimator of the prediction risk under this condition, explicitly taking into account the effect of estimating the correlation matrix used in the GEE procedure.

6.
The evolution of opinion as to how to analyse AB/BA cross-over trials is described by examining the recommendations of three key papers. The impact of these papers on the medical literature is analysed by looking at citation rates as a function of various factors. It is concluded that amongst practitioners there is a highly imperfect appreciation of the issues raised by the possibility of carry-over. Copyright © 2004 John Wiley & Sons, Ltd.

7.
The authors describe a model-based kappa statistic for binary classifications which is interpretable in the same manner as Scott's pi and Cohen's kappa, yet does not suffer from the same flaws. They compare this statistic with the data-driven and population-based forms of Scott's pi in a population-based setting where many raters and subjects are involved, and inference regarding the underlying diagnostic procedure is of interest. The authors show that Cohen's kappa and Scott's pi seriously underestimate agreement between experts classifying subjects for a rare disease; in contrast, the new statistic is robust to changes in prevalence. The performance of the three statistics is illustrated with simulations and prostate cancer data.
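The prevalence sensitivity of the classical statistics is easy to reproduce. The sketch below computes Cohen's kappa and Scott's pi from a hypothetical 2×2 table for a rare condition (it does not implement the authors' model-based statistic): despite raw agreement above 99%, both chance-corrected measures sit far below it.

```python
# Hypothetical 2x2 rater-agreement table for a rare disease; computes raw
# agreement, Cohen's kappa and Scott's pi (not the authors' new statistic).
import numpy as np

table = np.array([[8, 4],      # rows: rater 1 (positive / negative)
                  [3, 985]])   # cols: rater 2
n = table.sum()
po = np.trace(table) / n                    # observed agreement

p1 = table.sum(axis=1) / n                  # rater 1 marginals
p2 = table.sum(axis=0) / n                  # rater 2 marginals
pe_cohen = np.sum(p1 * p2)                  # chance agreement, Cohen
kappa = (po - pe_cohen) / (1 - pe_cohen)

pbar = (p1 + p2) / 2                        # pooled marginals, Scott
pe_scott = np.sum(pbar**2)
pi = (po - pe_scott) / (1 - pe_scott)

print(f"observed agreement {po:.3f}, Cohen kappa {kappa:.3f}, Scott pi {pi:.3f}")
```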

8.
The authors consider the empirical likelihood method for the regression model of mean quality-adjusted lifetime with right censoring. They show that an empirical log-likelihood ratio for the vector of regression parameters is asymptotically a weighted sum of independent chi-squared random variables. They adjust this empirical log-likelihood ratio so that the limiting distribution is a standard chi-square and construct the corresponding confidence regions. Simulation studies lead them to conclude that empirical likelihood methods outperform normal approximation methods in terms of coverage probability. They illustrate their methods with data from a breast cancer clinical trial.
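For background, the sketch below implements Owen's empirical log-likelihood ratio for a simple uncensored mean, the building block that the authors extend to censored quality-adjusted lifetime regression; the data and the bracketing constants are illustrative.

```python
# Background sketch: Owen's empirical log-likelihood ratio for an uncensored
# mean. lambda solves sum d_i/(1 + lambda*d_i) = 0 with d_i = x_i - mu, and
# -2 log R(mu) = 2 * sum log(1 + lambda*d_i) is asymptotically chi-squared(1).
import numpy as np
from scipy.optimize import brentq

def el_ratio(x, mu):
    d = x - mu                                  # requires min(x) < mu < max(x)
    lo = (-1.0 + 1e-8) / d.max()                # keep all 1 + lambda*d_i > 0
    hi = (-1.0 + 1e-8) / d.min()
    lam = brentq(lambda l: np.sum(d / (1.0 + l * d)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))

x = np.random.default_rng(3).exponential(2.0, 60)
print(el_ratio(x, mu=2.0))                      # compare with chi2(1) quantiles
```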

9.
Functional data analysis has become an important area of research because of its ability to handle high-dimensional and complex data structures. However, developments are limited in the context of linear mixed-effect models and, in particular, of small area estimation, for which linear mixed-effect models are the backbone. In this article, we consider area-level data and fit a varying-coefficient linear mixed-effect model in which the varying coefficients are semiparametrically modelled via B-splines. We propose a method for estimating the fixed-effect parameters and consider prediction of the random effects that can be implemented using standard software. For measuring prediction uncertainty, we derive an analytical expression for the mean squared error and propose a method for estimating it. The procedure is illustrated via a real data example, and the operating characteristics of the method are judged using finite-sample simulation studies.
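A schematic of the modelling idea, heavily simplified from the paper (unit-level replicates, hypothetical dimensions, and statsmodels' MixedLM in place of the authors' estimator): a varying coefficient β(t) is expanded in a B-spline basis, so the fixed-effect design contains the columns x·B_j(t), while area effects enter as random intercepts.

```python
# Simplified sketch, not the paper's area-level estimator: beta(t) is
# represented by a cubic B-spline basis; area effects are random intercepts.
import numpy as np
import statsmodels.api as sm
from scipy.interpolate import BSpline

rng = np.random.default_rng(4)
m, n_per = 30, 8                                  # hypothetical areas and units
area = np.repeat(np.arange(m), n_per)
t = rng.uniform(0, 1, m * n_per)                  # index of the varying coefficient
x = rng.normal(size=m * n_per)

k, inner = 3, np.linspace(0.2, 0.8, 4)
knots = np.r_[[0.0] * (k + 1), inner, [1.0] * (k + 1)]
B = BSpline.design_matrix(t, knots, k).toarray()  # n x (len(inner)+k+1) basis

beta_t = np.sin(2 * np.pi * t)                    # true varying coefficient
y = x * beta_t + rng.normal(0, 0.5, m)[area] + rng.normal(0, 0.3, m * n_per)

exog = np.column_stack([np.ones_like(t), x[:, None] * B])
fit = sm.MixedLM(y, exog, groups=area).fit()
print(fit.fe_params[:4])                          # spline coefficients of beta(t)
```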

10.
11.
12.
In this paper, we consider the estimation of both the parameters and the nonparametric link function in partially linear single-index models for longitudinal data that may be unbalanced. In particular, a new three-stage approach is proposed to estimate the nonparametric link function using marginal kernel regression and the parametric components with generalized estimating equations. The resulting estimators properly account for the within-subject correlation. We show that the parameter estimators are asymptotically semiparametrically efficient. We also show that the asymptotic variance of the link function estimator is minimized when the working error covariance matrices are correctly specified. The new estimators are more efficient than those in the existing literature. These asymptotic results are obtained without assuming normality. The finite-sample performance of the proposed method is demonstrated by simulation studies. In addition, two real-data examples are analyzed to illustrate the methodology.

13.
In phase II single-arm studies, the response rate of the experimental treatment is typically compared with a fixed target value that should ideally represent the true response rate of the standard-of-care therapy. This target value is generally estimated from previous data, but the inherent variability in the historical response rate is not taken into account. In this paper, we present a Bayesian procedure for constructing single-arm two-stage designs that incorporates uncertainty about the response rate of the standard treatment. In both stages, the sample size determination criterion is based on the concepts of conditional and predictive Bayesian power functions. Different kinds of prior distributions, which play different roles in the designs, are introduced, and guidelines for their elicitation are described. Finally, numerical results on the performance of the designs are provided and a real data example is illustrated. Copyright © 2016 John Wiley & Sons, Ltd.
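The flavour of a predictive Bayesian power calculation can be conveyed in a few lines. The sketch below is a generic single-stage Monte Carlo version, not the authors' two-stage criterion; the prior hyperparameters, sample size and decision threshold are all hypothetical.

```python
# A hedged, single-stage Monte Carlo sketch (not the authors' two-stage
# criterion); priors, sample size and threshold are hypothetical.
import numpy as np

rng = np.random.default_rng(5)
n, threshold, sims = 40, 0.90, 5000
a_e, b_e = 1, 1          # flat prior on the experimental response rate
a_s, b_s = 12, 28        # informative prior on the standard-of-care rate

success = 0
for _ in range(sims):
    p_e = rng.beta(a_e, b_e)                         # predictive draw of the rate
    x = rng.binomial(n, p_e)                         # predictive trial outcome
    post_e = rng.beta(a_e + x, b_e + n - x, 2000)    # posterior of p_exp
    post_s = rng.beta(a_s, b_s, 2000)                # uncertainty in the target
    if np.mean(post_e > post_s) > threshold:         # end-of-trial success rule
        success += 1
print("Bayesian predictive probability of success:", success / sims)
```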

14.
A version of the nonparametric bootstrap that resamples entire subjects from the original data, called the case bootstrap, has been used increasingly for estimating the uncertainty of parameters in mixed-effects models. It is usually applied to obtain more robust estimates of the parameters and more realistic confidence intervals (CIs). Alternative bootstrap methods, such as the residual bootstrap and the parametric bootstrap, which resample both random effects and residuals, have been proposed to better account for the hierarchical structure of multi-level and longitudinal data. However, few studies have compared these approaches. In this study, we used simulation to evaluate bootstrap methods proposed for linear mixed-effect models, and we compared the results obtained by maximum likelihood (ML) and restricted maximum likelihood (REML). Our simulation studies demonstrated the good performance of the case bootstrap as well as of the bootstraps of both random effects and residuals. On the other hand, the bootstrap methods that resample only the residuals, and the bootstraps combining cases and residuals, performed poorly. REML and ML provided similar bootstrap estimates of uncertainty, but there was slightly more bias and poorer coverage for the variance parameters with ML in the sparse design. We applied the proposed methods to a real dataset from a study investigating the natural evolution of Parkinson's disease and were able to confirm that the methods provide plausible estimates of uncertainty. Given that most real-life datasets tend to exhibit heterogeneity in sampling schedules, the residual bootstraps would be expected to perform better than the case bootstrap. Copyright © 2013 John Wiley & Sons, Ltd.
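A minimal sketch of the case bootstrap for a linear mixed-effects model follows (simulated data, statsmodels' mixedlm, REML); resampled subjects are relabelled so that duplicated subjects are treated as distinct groups.

```python
# A minimal sketch of the case bootstrap for a linear mixed-effects model;
# the data, model and bootstrap size are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n_subj, n_obs = 40, 5
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_subj), n_obs),
    "time": np.tile(np.arange(n_obs), n_subj).astype(float),
})
u = rng.normal(0, 1.0, n_subj)                       # random intercepts
df["y"] = 2.0 + 0.5 * df["time"] + u[df["id"]] + rng.normal(0, 0.7, len(df))

slopes = []
for _ in range(100):                                 # bootstrap replicates
    ids = rng.choice(n_subj, n_subj, replace=True)   # resample whole subjects
    pieces = []
    for new_id, i in enumerate(ids):
        block = df[df["id"] == i].copy()
        block["id"] = new_id                         # relabel duplicated subjects
        pieces.append(block)
    sample = pd.concat(pieces, ignore_index=True)
    fit = smf.mixedlm("y ~ time", sample, groups=sample["id"]).fit(reml=True)
    slopes.append(fit.fe_params["time"])
print("case-bootstrap SE of the slope:", np.std(slopes, ddof=1))
```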

15.
More flexible semiparametric linear-index regression models are proposed to describe the conditional distribution of a response. Such a formulation captures the varying effects of covariates over the support of the response variable, offers an alternative perspective on dimension reduction, and covers many widely used parametric and semiparametric regression models. A feasible pseudo-likelihood approach, accompanied by a simple and easily implemented algorithm, is further developed for the mixed case with both varying and invariant coefficients. By establishing theoretical properties on Banach spaces, the uniform consistency and the asymptotic Gaussian process of the proposed estimator are also derived in this article. In addition, under monotonicity of the distribution in the linear index, we develop an alternative approach based on maximizing a varying accuracy measure. By virtue of an asymptotic recursion relation for the estimators, we show the convergence of the iterative computation procedure and establish the large-sample properties of the resulting estimator. Notably, our theoretical framework is helpful in constructing confidence bands for the parameters of interest and tests for hypotheses about various qualitative structures of the distribution. The developed estimation and inference procedures perform satisfactorily in the conducted simulations and are shown to be useful in reanalysing data from the Boston house price study and the World Values Survey.

16.
This paper proposes the use of the integrated likelihood for inference on the mean effect in small-sample meta-analysis of continuous outcomes. The method eliminates the nuisance parameters given by the variance components through integration with respect to a suitable weight function, with no need to estimate them. The integrated likelihood approach properly accounts for the estimation uncertainty of the within-study variances, thus providing confidence intervals with empirical coverage closer to nominal levels than standard likelihood methods. The improvement is remarkable when either (i) the number of studies is small to moderate or (ii) the small sample sizes of the studies do not allow the within-study variances to be treated as known, as is common in applications. Moreover, the integrated likelihood avoids numerical pitfalls related to the estimation of variance components that can affect alternative likelihood approaches. The proposed methodology is illustrated via simulation and applied to a meta-analysis study in nutritional science.
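The mechanics of integrating out the within-study variances can be sketched numerically. The code below is a rough illustration, not the paper's method: each study contributes a likelihood in (μ, τ²) with σᵢ² integrated out under a 1/σ² weight (the paper's weight function may differ), using simple quadrature and hypothetical study summaries.

```python
# Rough numerical sketch, not the paper's method: per study, integrate the
# within-study variance out of the normal likelihood under a 1/sigma^2 weight,
# leaving an integrated likelihood in (mu, tau^2) only.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm, chi2

ybar = np.array([0.8, 1.3, 0.4, 1.0])        # hypothetical study means
s2 = np.array([0.9, 1.5, 0.7, 1.2])          # within-study sample variances
n = np.array([8, 12, 6, 10])                 # small study sizes

def integrated_loglik(mu, tau2):
    total = 0.0
    for yb, v, ni in zip(ybar, s2, n):
        def integrand(sig2):
            like_mean = norm.pdf(yb, mu, np.sqrt(tau2 + sig2 / ni))
            # density of s^2 given sigma^2: (n-1)s^2/sigma^2 ~ chi2(n-1)
            like_var = chi2.pdf((ni - 1) * v / sig2, ni - 1) * (ni - 1) / sig2
            return like_mean * like_var / sig2        # 1/sigma^2 weight
        total += np.log(quad(integrand, 1e-4, 50.0)[0])
    return total

print(integrated_loglik(mu=1.0, tau2=0.1))
```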

17.
In many applications, a finite population contains a large proportion of zero values, which makes the population distribution severely skewed. An unequal-probability sampling plan compounds the problem, and as a result the normal approximation to the distribution of various estimators has poor precision, so central-limit-theorem-based confidence intervals for the population mean are unsatisfactory. Complex designs also make it hard to pin down useful likelihood functions, so a direct likelihood approach is not an option. In this paper, we propose a pseudo-likelihood approach in which the pseudo-log-likelihood function is an unbiased estimator of the log-likelihood function that would be obtained if the entire population were sampled. Simulations show that when the inclusion probabilities are related to the unit values, the pseudo-likelihood intervals are superior to existing methods in terms of coverage probability, the balance of non-coverage rates on the lower and upper sides, and interval length. An application to a data set from the Canadian Labour Force Survey 2000 also shows that the pseudo-likelihood method performs more appropriately than other methods. The Canadian Journal of Statistics 38: 582–597; 2010 © 2010 Statistical Society of Canada
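The core construction is a Horvitz-Thompson-type weighted log-likelihood: weighting each sampled unit's log-density by the inverse of its inclusion probability makes the sample sum an unbiased estimator of the census log-likelihood. The sketch below applies this idea to a hypothetical zero-inflated exponential population under size-related Poisson sampling; the model and numbers are illustrative only.

```python
# Schematic sketch of the pseudo-log-likelihood idea with a hypothetical
# zero-inflated exponential model; weights are inverse inclusion probabilities.
import numpy as np
from scipy.optimize import minimize

def neg_pseudo_loglik(params, y, pi):
    p0 = 1.0 / (1.0 + np.exp(-params[0]))     # P(y = 0), logit-parameterized
    rate = np.exp(params[1])                  # exponential rate for y > 0
    ll = np.where(y == 0, np.log(p0),
                  np.log1p(-p0) + np.log(rate) - rate * y)
    return -np.sum(ll / pi)                   # inverse-probability weighting

rng = np.random.default_rng(7)
N = 5000
y_pop = np.where(rng.random(N) < 0.6, 0.0, rng.exponential(2.0, N))
pi = np.clip(0.02 + 0.05 * y_pop / (1 + y_pop), 0.02, 0.25)  # size-related
sampled = rng.random(N) < pi                  # Poisson sampling
y, w = y_pop[sampled], pi[sampled]

fit = minimize(neg_pseudo_loglik, x0=[0.0, 0.0], args=(y, w))
print("P(zero):", 1 / (1 + np.exp(-fit.x[0])), "rate:", np.exp(fit.x[1]))
```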

18.
The rise over recent years in the use of network meta-analyses (NMAs) in clinical research and health economic analysis has been little short of meteoric, driven in part by a desire among decision makers to extend inferences beyond the direct comparisons made in controlled clinical trials. But are the increased use of, and reliance on, NMAs justified? Do such analyses provide a reliable basis for the relative effectiveness assessment of medicines and, in turn, for critical decisions about healthcare access and provisioning? And can such analyses also be used earlier, as part of the evidence base for licensure? Despite several important publications highlighting the inherently unverifiable assumptions underpinning NMAs, these assumptions, and the associated potential for serious bias, are often overlooked in the reporting and interpretation of NMAs. Given the assumptions that sit behind such analyses, a more cautious and better informed approach to their use and interpretation in clinical research is warranted. Copyright © 2015 John Wiley & Sons, Ltd.

19.
Various criteria have been proposed for determining the reliability of noncompartmental pharmacokinetic estimates of the terminal disposition phase half-life (t1/2) and the extrapolated area under the curve (AUCextrap). This simulation study assessed the performance of two frequently used reportability rules: the terminal disposition phase regression adjusted-r2 classification rule and the regression data point time span classification rule. Using simulated data, these rules were assessed in relation to the magnitude of the variability in the terminal disposition phase slope, the length of the terminal disposition phase captured in the concentration-time profile (data span), the number of data points present in the terminal disposition phase, and the type and level of variability in concentration measurement. The accuracy of estimating t1/2 was satisfactory for data spans of 1.5 and longer, given low measurement variability, and for spans of 2.5 and longer, given high measurement variability. Satisfactory accuracy in estimating AUCextrap was achieved only with low measurement variability and spans of 2.5 and longer. Neither classification rule improved the identification of accurate t1/2 and AUCextrap estimates. Based on these findings, a strategy is proposed for determining the reportability of estimates of t1/2 and of the area under the curve extrapolated to infinity.
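For concreteness, the sketch below computes the quantities these rules operate on, from hypothetical terminal-phase data: the log-linear terminal slope, t1/2 = ln(2)/|slope|, the regression adjusted-r2, and the data span expressed in multiples of t1/2.

```python
# A minimal sketch with hypothetical terminal-phase data: terminal slope from
# log-linear regression, t1/2, adjusted r^2, and data span in units of t1/2.
import numpy as np

t = np.array([8.0, 12.0, 16.0, 24.0])          # sampling times (h)
c = np.array([4.1, 2.9, 2.2, 1.1])             # concentrations

slope, intercept = np.polyfit(t, np.log(c), 1)
t_half = np.log(2) / abs(slope)

resid = np.log(c) - (slope * t + intercept)
ss_res = np.sum(resid**2)
ss_tot = np.sum((np.log(c) - np.log(c).mean())**2)
r2 = 1 - ss_res / ss_tot
adj_r2 = 1 - (1 - r2) * (len(t) - 1) / (len(t) - 2)

span = (t[-1] - t[0]) / t_half                  # input to the data-span rule
print(f"t1/2 = {t_half:.2f} h, adjusted r2 = {adj_r2:.3f}, span = {span:.2f}")
```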

20.
Abstract. To improve the ability of several plasma biomarkers to predict coronary artery disease (CAD)-related vital status over time, our research focuses on seeking the combinations of these biomarkers with the highest time-dependent receiver operating characteristic curves. An extended generalized linear model (EGLM) with time-varying coefficients and an unknown bivariate link function is used to characterize the conditional distribution of time to CAD-related death. Based on censored survival data, two non-parametric procedures are proposed to estimate the optimal composite markers, that is, the linear predictors in the EGLM. Estimation methods for the classification accuracies of the optimal composite markers are also proposed. We establish theoretical results for the estimators and examine the corresponding finite-sample properties through a series of simulations with different sample sizes, censoring rates and censoring mechanisms. Our optimization procedures and estimators are further shown to be useful through an application to a prospective cohort study of patients undergoing angiography.
