Similar Documents
20 similar documents found.
1.
A two-stage procedure is described for assessing subject-specific and marginal agreement in data from a test-retest reliability study of a binary classification procedure. Subject-specific agreement is parametrized through the log odds ratio, while marginal agreement is reflected by the log ratio of the off-diagonal Poisson means. A family of agreement measures on the interval [-1, 1] is presented for both types of agreement, and the conditioning argument described facilitates exact inference. The proposed methodology is demonstrated with hypothetical data chosen for illustration and with data from a National Health Survey study (Rogot and Goldberg 1966).
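A minimal sketch of the two quantities in play, using a hypothetical 2x2 table and the classical exact conditional (McNemar-type) test; the paper's [-1, 1] agreement family and its exact procedure are not reproduced here:

```python
# Hypothetical 2x2 test-retest table: rows = first rating, cols = second rating.
import numpy as np
from scipy import stats

n = np.array([[45, 5],
              [8, 42]])  # illustrative counts, not from the paper

# Subject-specific agreement: log odds ratio of the 2x2 table.
log_or = np.log(n[0, 0] * n[1, 1] / (n[0, 1] * n[1, 0]))

# Marginal agreement: log ratio of the off-diagonal counts
# (a value near 0 indicates marginally homogeneous classifications).
log_marg = np.log(n[0, 1] / n[1, 0])

# Exact conditional inference: given the off-diagonal total, n[0, 1] is
# Binomial(n01 + n10, 1/2) under marginal homogeneity (exact McNemar test).
n01, n10 = n[0, 1], n[1, 0]
p_exact = stats.binomtest(n01, n01 + n10, 0.5).pvalue

print(f"log OR = {log_or:.3f}, off-diagonal log ratio = {log_marg:.3f}, "
      f"exact p = {p_exact:.3f}")
```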

2.
3.
Cohen's kappa is probably the most widely used measure of agreement. Interest typically centres on measuring the degree of agreement or disagreement between two raters in a square contingency table. Modeling the agreement provides more information on its pattern than summarizing it with a kappa coefficient alone, but the disagreement models proposed in the literature apply only to nominal scales. In this paper, a symmetric disagreement plus uniform association model is proposed for ordinal agreement data; it combines a disagreement model with a uniform association model and aims to separate the association from the disagreement. The proposed model is applied to real uterine cancer data.
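For reference, a small sketch of the kappa coefficient that the proposed model is meant to go beyond, computed on an illustrative 4x4 ordinal table (not the paper's uterine cancer data):

```python
import numpy as np

def cohen_kappa(table):
    """Cohen's kappa from a square contingency table of rater counts."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_obs = np.trace(table) / n                    # observed agreement
    p_exp = (table.sum(0) @ table.sum(1)) / n**2   # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Illustrative 4x4 ordinal table (invented for this example).
tab = [[22, 2,  2,  0],
       [ 5, 7, 14,  0],
       [ 0, 2, 36,  0],
       [ 0, 1, 17, 10]]
print(f"kappa = {cohen_kappa(tab):.3f}")
```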

4.
Biomedical and psychosocial researchers increasingly use multiple indicators to assess an outcome of interest. We apply the ordinal estimating equations model to analyse this kind of measurement, detail the special complexities of using it to analyse clustered non-identical items, and propose a workable model-building strategy. Three graphical methods (cumulative log-odds, partial residual, and Pearson residual plots) are developed to diagnose model adequacy. The benefit of incorporating interitem associations and the trade-off between simple and complex models are evaluated. Throughout the paper, an analysis of how measured impairments affect visual disability is used for illustration.
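A toy sketch of the cumulative log-odds diagnostic idea on simulated ordinal data: empirical cumulative logits are computed within covariate bins, and roughly parallel rows support a cumulative-logit specification. The data-generating mechanism and binning are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated ordinal data: a latent proportional-odds mechanism, 4 categories.
n, K = 2000, 4
x = rng.uniform(0, 1, n)
latent = 1.5 * x + rng.logistic(size=n)
y = np.digitize(latent, [0.5, 1.0, 1.5]) + 1  # categories 1..4

# Empirical cumulative log-odds of {Y <= j} within covariate bins.
bins = np.quantile(x, np.linspace(0, 1, 6))
which = np.clip(np.digitize(x, bins) - 1, 0, 4)
for j in range(1, K):
    p = np.array([np.mean(y[which == b] <= j) for b in range(5)])
    logit = np.log(p / (1 - p))
    print(f"Y<={j}:", np.round(logit, 2))
# Under a cumulative-logit model the rows should be roughly parallel in x.
```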

5.
We study methods to estimate regression and variance parameters for over-dispersed and correlated count data from highly stratified surveys. Our application involves counts of fish catches from stratified research surveys, and we propose a novel model in fisheries science to address changes in survey protocols. A challenge with this model is the large number of nuisance parameters, which leads to computational issues and biased statistical inference. We use a computationally efficient profile generalized estimating equation (GEE) method and compare it with marginal maximum likelihood (ML) and restricted maximum likelihood (REML) methods, using REML to address the bias and inaccurate confidence intervals caused by the many nuisance parameters. The marginal ML and REML approaches involve intractable integrals, and we used a new R package designed for estimating complex nonlinear models that may include random effects. We conclude from simulation analyses that the REML method provides the most reliable statistical inference of the three methods investigated.
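A hedged sketch of the marginal estimating-equation route using statsmodels' GEE on simulated overdispersed counts; this is generic GEE, not the paper's profile GEE, and the survey-protocol setup is invented for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Simulated stratified counts with extra-Poisson variation (hypothetical).
n_strata, n_tows = 40, 6
strata = np.repeat(np.arange(n_strata), n_tows)
protocol = rng.integers(0, 2, n_strata)[strata]       # old vs new protocol
mu = np.exp(1.0 + 0.4 * protocol + rng.normal(0, 0.5, n_strata)[strata])
y = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))  # overdispersed counts

df = pd.DataFrame({"y": y, "protocol": protocol, "stratum": strata})
X = sm.add_constant(df[["protocol"]])

# Marginal fit with exchangeable within-stratum correlation.
model = sm.GEE(df["y"], X, groups=df["stratum"],
               family=sm.families.Poisson(),
               cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```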

6.
In this work, a generalization of the Goodman association model to the case of q > 2 categorical variables is introduced, based on the idea of marginal modelling discussed by Glonek and McCullagh; the differences between the proposed generalization and two models previously introduced by Becker and by Colombi are discussed. Becker's generalization is not a marginal model because it does not imply logit models for the marginal probabilities and because it models the association through the conditional approach. Colombi's model is only partially marginal: it uses simple logit models for the univariate marginal probabilities but still models the association conditionally. It is also shown that maximum likelihood estimation of the parameters of the new model is feasible, and an algorithm is proposed that is a numerically convenient compromise between the constrained optimization approach of Lang and the straightforward use of the Fisher scoring algorithm suggested by Glonek and McCullagh. Finally, the proposed model is used to analyze a data set concerning work accidents involving workers at some Italian firms during the years 1994–1996.
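As a concrete reference point for the estimation machinery, here is plain Fisher scoring (IRLS) for a simple logit model; the proposed algorithm builds on this basic iteration but adds the marginal-model constraints, which are not shown:

```python
import numpy as np

rng = np.random.default_rng(10)

# Simulated binary data for a one-covariate logit model.
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.2])
y = rng.uniform(size=n) < 1 / (1 + np.exp(-X @ beta_true))

beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)                        # Fisher information weights
    score = X.T @ (y - p)
    info = X.T @ (X * W[:, None])
    step = np.linalg.solve(info, score)    # one Fisher scoring step
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:
        break
print("beta_hat =", beta.round(3))
```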

7.
Kappa and B assess agreement between two observers independently classifying N units into k categories. We study their behavior under zero cells in the contingency table and unbalanced, asymmetric marginal distributions. Zero cells arise when a cross-classification is never endorsed by both observers; biased marginal distributions occur when the observers prefer some categories differently. Simulations studied the distributions of the unweighted and weighted statistics for k = 4 under fixed proportions of diagonal agreement and different off-diagonal patterns, with various sample sizes and various zero-cell scenarios. Marginal distributions were first uniform and homogeneous, then unbalanced and asymmetric. Results for the unweighted kappa and B statistics were comparable to the work of Muñoz and Bangdiwala, even with zero cells, with slightly increased variation as the sample size decreased. Weighted statistics did show greater variation as the number of zero cells increased, with weighted kappa increasing substantially more than weighted B. Under biased marginal distributions, weighted kappa with Cicchetti weights was higher than with squared weights. Both statistics behaved well under zero cells, and the weighted B was less variable than the weighted kappa under similar circumstances and different weights. In general, B's performance and graphical interpretation make it preferable to kappa under the studied scenarios.
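A minimal sketch computing unweighted kappa and Bangdiwala's B from a k = 4 table containing a zero cell; the weighted versions (Cicchetti or squared weights) studied in the simulations are omitted:

```python
import numpy as np

def kappa_and_B(table):
    """Unweighted Cohen's kappa and Bangdiwala's B from a k x k table."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    row, col = t.sum(1), t.sum(0)
    p_obs = np.trace(t) / n
    p_exp = row @ col / n**2
    kappa = (p_obs - p_exp) / (1 - p_exp)
    B = (np.diag(t) ** 2).sum() / (row * col).sum()  # Bangdiwala's statistic
    return kappa, B

# A k = 4 table with zero cross-classification cells, as in the simulations.
tab = [[30, 3,  0,  0],
       [ 4, 25, 2,  1],
       [ 0, 5, 20,  3],
       [ 1, 0,  2, 14]]
k, B = kappa_and_B(tab)
print(f"kappa = {k:.3f}, B = {B:.3f}")
```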

8.
For the assessment of agreement using probability criteria, we obtain an exact test and, for sample sizes exceeding 30, a bootstrap-t test that is remarkably accurate. We show that for assessing agreement, the total deviation index approach of Lin [2000. Total deviation index for measuring individual agreement with applications in laboratory performance and bioequivalence. Statist. Med. 19, 255–270] is not consistent and may not preserve its asymptotic nominal level, and that the coverage probability approach of Lin et al. [2002. Statistical methods in assessing agreement: models, issues and tools. J. Amer. Statist. Assoc. 97, 257–270] is overly conservative for moderate sample sizes. We also show that the nearly unbiased test of Wang and Hwang [2001. A nearly unbiased test for individual bioequivalence problems using probability criteria. J. Statist. Plann. Inference 99, 41–58] may be liberal for large sample sizes, and we suggest a minor modification that gives a numerically equivalent approximation to the exact test for sample sizes of 30 or less. We present a simple and accurate sample size formula for planning studies on assessing agreement and illustrate our methodology with a real data set from the literature.
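A generic bootstrap-t sketch for a probability criterion P(|D| < delta), with simulated differences and an assumed margin delta; it illustrates the mechanics only and is not the paper's exact test or its specific bootstrap-t construction:

```python
import numpy as np

rng = np.random.default_rng(2)

# Paired differences between two assays (simulated, for illustration).
d = rng.normal(0.1, 0.6, size=60)
delta = 1.0                        # agreement margin (assumed)
theta_hat = np.mean(np.abs(d) < delta)

# Bootstrap-t lower confidence limit for theta = P(|D| < delta).
B = 4000
t_stats = np.empty(B)
se = np.sqrt(theta_hat * (1 - theta_hat) / len(d))
for b in range(B):
    db = rng.choice(d, size=len(d), replace=True)
    th_b = np.mean(np.abs(db) < delta)
    se_b = np.sqrt(max(th_b * (1 - th_b), 1e-12) / len(d))
    t_stats[b] = (th_b - theta_hat) / se_b
lower = theta_hat - np.quantile(t_stats, 0.95) * se
print(f"theta_hat = {theta_hat:.3f}, 95% lower bootstrap-t limit = {lower:.3f}")
```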

9.
For square contingency tables with ordered categories, this paper proposes a measure to represent the degree of departure from the marginal homogeneity model. It is expressed as a weighted sum of the power-divergence or Patil–Taillie diversity index and is a function of the marginal log odds ratios. The measure represents the degree of departure from equality, for every i, of the log odds that the row variable is i or below rather than i+1 or above and the corresponding log odds for the column variable. The measure is also extended to multi-way tables, and examples are given.
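A small sketch of the building blocks: the cumulative marginal log odds for rows and columns of an illustrative square table, whose pointwise equality is marginal homogeneity. The paper's divergence-based aggregation of these discrepancies is not reproduced:

```python
import numpy as np

# Square ordinal table (illustrative). Rows and columns share categories.
t = np.array([[20, 5,  1],
              [ 8, 30, 6],
              [ 2, 10, 25]], dtype=float)
n = t.sum()
row_cum = np.cumsum(t.sum(1))[:-1] / n   # P(row <= i), i = 1..k-1
col_cum = np.cumsum(t.sum(0))[:-1] / n   # P(col <= i)

row_logit = np.log(row_cum / (1 - row_cum))
col_logit = np.log(col_cum / (1 - col_cum))

# Marginal homogeneity corresponds to equal cumulative log odds at every i.
print("row cumulative logits:", np.round(row_logit, 3))
print("col cumulative logits:", np.round(col_logit, 3))
print("differences:", np.round(row_logit - col_logit, 3))
```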

10.
André Lucas. Econometric Reviews (1998), 17(2), 185–214.
This paper considers Lagrange multiplier (LM) and likelihood ratio (LR) tests for determining the cointegrating rank of a vector autoregressive system. In order to deal with outliers and possible fat-tailedness of the error process, non-Gaussian likelihoods are used to carry out the estimation. The limiting distributions of the tests based on these non-Gaussian (pseudo-)likelihoods are derived; these distributions depend on nuisance parameters, and an operational procedure is proposed to perform inference. The tests based on non-Gaussian pseudo-likelihoods prove much more powerful than their Gaussian counterparts when the errors are fat-tailed. Moreover, the operational LM-type test has a better overall performance than the LR-type test. Copyright © 1998 by Marcel Dekker, Inc.
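For orientation, a sketch of the Gaussian benchmark, the Johansen trace (LR) test, on simulated fat-tailed data via statsmodels; the paper's non-Gaussian pseudo-likelihood LM and LR tests are not implemented here:

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(9)

# Simulated bivariate system with one cointegrating relation and
# fat-tailed innovations in the common stochastic trend.
n = 400
common = np.cumsum(rng.standard_t(df=4, size=n))
y1 = common + rng.normal(0, 1, n)
y2 = 0.5 * common + rng.normal(0, 1, n)

# Gaussian LR (Johansen trace) test for cointegrating rank -- the baseline
# whose power the non-Gaussian tests improve on under fat tails.
res = coint_johansen(np.column_stack([y1, y2]), det_order=0, k_ar_diff=1)
print("trace statistics:", res.lr1.round(2))
print("95% critical values:", res.cvt[:, 1])
```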

11.
When confronted with multiple covariates and a response variable, analysts sometimes apply a variable-selection algorithm to the covariate-response data to identify a subset of covariates potentially associated with the response, and then wish to make inferences about parameters in a model for the marginal association between the selected covariates and the response. If an independent data set were available, the parameters of interest could be estimated by fitting the postulated marginal model to it with standard inference methods. However, when applied to the same data set used by the variable selector, standard ("naive") methods can lead to distorted inferences. The authors develop testing and interval estimation methods for parameters reflecting the marginal association between the selected covariates and the response variable, based on the same data set used for variable selection. They provide theoretical justification for the proposed methods, present results to guide their implementation, and use simulations to assess and compare their performance with a sample-splitting approach. The methods are illustrated with data from a recent AIDS study. The Canadian Journal of Statistics 37: 625–644; 2009. © 2009 Statistical Society of Canada
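A minimal sketch of the sample-splitting benchmark the authors compare against: variables are screened on one half of the data and the marginal model is fitted on the other half. The marginal-correlation screen and its cutoff are illustrative assumptions:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Simulated data: 20 candidate covariates, 2 truly associated with y.
n, p = 400, 20
X = rng.normal(size=(n, p))
y = X[:, 0] - 0.8 * X[:, 1] + rng.normal(size=n)

# Sample splitting: select on one half, do standard inference on the other.
half = n // 2
corr = np.abs([np.corrcoef(X[:half, j], y[:half])[0, 1] for j in range(p)])
selected = np.argsort(corr)[-3:]          # simple marginal screen (assumed)

fit = sm.OLS(y[half:], sm.add_constant(X[half:, selected])).fit()
print("selected columns:", sorted(selected))
print(fit.summary().tables[1])
# Fitting the same marginal model on the *selection* half instead would give
# the distorted "naive" inferences the paper's methods are designed to fix.
```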

12.
We compare results for stochastic volatility models in which the underlying volatility process has generalized inverse Gaussian (GIG) or tempered stable marginal laws. We use a continuous-time stochastic volatility model where the volatility follows an Ornstein–Uhlenbeck stochastic differential equation driven by a Lévy process. A model for long-range dependence is also considered, and its merit and practical relevance are discussed. We find that the full GIG marginal distribution, and its special case the inverse gamma, accurately fit real data. Inference is carried out in a Bayesian framework, with computation using Markov chain Monte Carlo (MCMC); we develop an MCMC algorithm that can be used for a general marginal model.
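A toy simulation of an Ornstein–Uhlenbeck volatility process driven by a compound Poisson subordinator with exponential jumps; the GIG and tempered stable marginals of the paper require more elaborate driving processes, so this is a generic sketch only, with invented parameter values:

```python
import numpy as np

rng = np.random.default_rng(4)

# Euler-type simulation of d(sigma^2) = -lam * sigma^2 dt + dz,
# z a compound Poisson subordinator with Exp jumps.
T, dt, lam = 10.0, 1e-3, 2.0           # horizon, step, mean-reversion rate
steps = int(T / dt)
jump_rate, jump_mean = 4.0, 0.25       # subordinator intensity, mean jump

sigma2 = np.empty(steps)
sigma2[0] = jump_rate * jump_mean / lam   # start near the stationary mean
for t in range(1, steps):
    n_jumps = rng.poisson(jump_rate * dt)
    jumps = rng.exponential(jump_mean, n_jumps).sum()
    sigma2[t] = sigma2[t - 1] - lam * sigma2[t - 1] * dt + jumps

# Log returns under this stochastic volatility.
r = np.sqrt(sigma2 * dt) * rng.normal(size=steps)
kurt = np.mean(r**4) / np.mean(r**2) ** 2
print(f"mean sigma^2 = {sigma2.mean():.3f}, return kurtosis proxy = {kurt:.2f}")
```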

13.
Inference in hybrid Bayesian networks using dynamic discretization
We consider approximate inference in hybrid Bayesian networks (BNs) and present a new iterative algorithm that efficiently combines dynamic discretization with robust propagation algorithms on junction trees. Our approach offers a significant extension to Bayesian network theory and practice, providing a flexible way of modeling continuous nodes conditioned on complex configurations of evidence and intermixed with discrete nodes as both parents and children of continuous nodes. The algorithm is implemented in a commercial Bayesian network software package, AgenaRisk, which allows model construction and testing to be carried out easily. The results from the empirical trials clearly show that the software can deal effectively with different types of hybrid models containing elements of expert judgment as well as statistical inference. In particular, the rapid convergence of the algorithm towards zones of high probability density makes robust inference possible even in situations where, owing to the lack of information in both prior and data, robust sampling becomes infeasible.
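A toy sketch of the dynamic-discretization idea: bins are repeatedly split where an error proxy is largest, so the grid concentrates in high-density zones. The paper's algorithm interleaves this with junction-tree propagation and uses a more principled error bound; both are simplified away here:

```python
import numpy as np
from scipy import stats

target = stats.norm(loc=2.0, scale=0.7)   # stand-in for a posterior density
edges = np.linspace(-5, 9, 5)             # coarse initial discretization

for _ in range(20):
    probs = np.diff(target.cdf(edges))
    widths = np.diff(edges)
    err = probs * widths                  # error proxy: mass spread too thinly
    i = np.argmax(err)
    edges = np.insert(edges, i + 1, 0.5 * (edges[i] + edges[i + 1]))

probs = np.diff(target.cdf(edges))
print(f"{len(edges) - 1} bins; finest bin width = {np.diff(edges).min():.3f}")
print("five largest bin masses:", np.sort(probs)[-5:].round(3))
```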

14.
15.
Statistics and Computing: We propose a method for inference on moderately high-dimensional, nonlinear, non-Gaussian, partially observed Markov process models for which the transition density is...

16.
Generalized additive mixed models are proposed for overdispersed and correlated data, which arise frequently in studies involving clustered, hierarchical and spatial designs. This class of models allows flexible functional dependence of an outcome variable on covariates through nonparametric regression, while accounting for correlation between observations through random effects. We estimate the nonparametric functions using smoothing splines and jointly estimate the smoothing parameters and variance components using marginal quasi-likelihood. Because maximizing the objective functions often requires numerical integration, double penalized quasi-likelihood is proposed for approximate inference, and frequentist and Bayesian inferences are compared. A key feature of the proposed method is that it allows systematic inference on all model components within a unified parametric mixed-model framework and can be implemented easily by fitting a working generalized linear mixed model with existing statistical software. A bias correction procedure is also proposed to improve the performance of double penalized quasi-likelihood for sparse data. We illustrate the method with an application to infectious disease data and evaluate its performance through simulation.
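As a self-contained illustration of one ingredient, a penalized-spline smoother with a fixed smoothing parameter; the paper's double penalized quasi-likelihood additionally estimates the smoothing parameter and variance components within a working GLMM, which is not attempted here:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated smooth signal plus noise.
n = 300
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)

# Truncated-line basis with 20 interior knots; ridge penalty on knot terms.
knots = np.linspace(0, 1, 22)[1:-1]
B = np.column_stack([np.ones(n), x] +
                    [np.clip(x - k, 0, None) for k in knots])
D = np.diag([0.0, 0.0] + [1.0] * len(knots))  # penalize knot coefficients only
lam = 1.0                                     # smoothing parameter, fixed here

beta = np.linalg.solve(B.T @ B + lam * D, B.T @ y)
fitted = B @ beta
print(f"residual SD = {np.std(y - fitted):.3f}")
```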

17.
This paper generalizes the tolerance interval approach for assessing agreement between two methods of continuous measurement to repeated measurement data, a common scenario in applications. The repeated measurements may be longitudinal or may be replicates of the same underlying measurement. Our approach is first to model the data using a mixed model and then to construct a relevant asymptotic tolerance interval (or band) for the distribution of appropriately defined differences. We present the methodology in the general context of a mixed model that can incorporate covariates, heteroscedasticity and serial correlation in the errors. Simulation for the no-covariate case shows good small-sample performance of the proposed methodology. For longitudinal data, we also describe an extension to the case when the observed time profiles are modelled nonparametrically through penalized splines. Two real data applications are presented.
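A hedged sketch of a normal-theory tolerance interval for paired differences using the standard Howe-type approximate factor; the paper's construction is asymptotic and mixed-model based, so this is a simplified stand-in on simulated data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Paired differences between two measurement methods (simulated).
d = rng.normal(0.2, 0.8, size=80)
mu, s, n = d.mean(), d.std(ddof=1), len(d)

# Two-sided tolerance interval containing proportion p of the difference
# distribution with confidence gamma (Howe's approximate factor).
p, gamma = 0.90, 0.95
z = stats.norm.ppf((1 + p) / 2)
chi2 = stats.chi2.ppf(1 - gamma, n - 1)
k = z * np.sqrt((n - 1) * (1 + 1 / n) / chi2)
print(f"tolerance interval: ({mu - k * s:.3f}, {mu + k * s:.3f})")
# If this interval lies within clinically acceptable limits, the two
# methods can be deemed to agree for most measurements.
```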

18.
Treatment of complex diseases such as cancer, leukaemia, acquired immune deficiency syndrome and depression usually follows complex treatment regimes consisting of time-varying multiple courses of the same or different treatments. The goal is to achieve the largest overall benefit as defined by a common end point such as survival. An adaptive treatment strategy is a sequence of treatments applied at different stages of therapy based on the individual's history of covariates and intermediate responses to earlier treatments; in many cases, however, treatment assignment depends only on intermediate response and prior treatments. Clinical trials are often designed to compare two or more adaptive treatment strategies, commonly via sequential randomization: patients are randomized on entry into available first-stage treatments and then, on the basis of their response to the initial treatments, randomized to second-stage treatments, and so on. Analyses often ignore this feature of the randomization and are frequently conducted separately for each stage. Recent literature has suggested several semiparametric and Bayesian methods for inference about adaptive treatment strategies from sequentially randomized trials. We develop a parametric approach using mixture distributions to model the survival times under different adaptive treatment strategies, and we show that the proposed estimators are asymptotically unbiased and can be easily implemented using existing routines in statistical software packages.
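A toy sketch of the mixture idea: survival under a two-stage strategy modeled as a two-component exponential mixture fitted by maximum likelihood. Censoring and the sequential-randomization structure handled in the paper are ignored, and all parameter values are invented:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Responders to the first-stage treatment (prob pi) follow one exponential
# survival law, non-responders another.
pi_true, r1, r2 = 0.4, 0.2, 0.8
z = rng.uniform(size=500) < pi_true
t = np.where(z, rng.exponential(1 / r1, 500), rng.exponential(1 / r2, 500))

def nll(theta):
    # Unconstrained parametrization: logit mixing weight, log rates.
    pi = 1 / (1 + np.exp(-theta[0]))
    l1, l2 = np.exp(theta[1]), np.exp(theta[2])
    dens = pi * l1 * np.exp(-l1 * t) + (1 - pi) * l2 * np.exp(-l2 * t)
    return -np.log(dens).sum()

res = minimize(nll, x0=[0.0, -1.0, 0.0], method="Nelder-Mead")
pi_hat = 1 / (1 + np.exp(-res.x[0]))
print(f"pi_hat = {pi_hat:.2f}, "
      f"rates = {np.exp(res.x[1]):.2f}, {np.exp(res.x[2]):.2f}")
```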

19.
A Bayesian approach to modeling a rich class of nonconjugate problems is presented. An adaptive Monte Carlo integration technique known as the Gibbs sampler is proposed as a mechanism for implementing a conceptually and computationally simple solution in such a framework. The result is a general strategy for obtaining marginal posterior densities under changing specifications of the model error densities and related prior densities. We illustrate the approach in a nonlinear regression setting, comparing the merits of three candidate error distributions.
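A minimal two-block Gibbs sampler for a normal model with unknown mean and variance, where the full conditionals are available in closed form; the paper's point is that the same mechanism extends to nonconjugate error and prior specifications:

```python
import numpy as np

rng = np.random.default_rng(8)

# Data from a normal model with unknown mean and variance.
y = rng.normal(3.0, 2.0, size=50)
n, ybar = len(y), y.mean()

draws = np.empty((5000, 2))
sigma2 = 1.0
for s in range(5000):
    # mu | sigma2, y   (flat prior on mu)
    mu = rng.normal(ybar, np.sqrt(sigma2 / n))
    # sigma2 | mu, y   (inverse-gamma conditional under p(sigma2) ~ 1/sigma2)
    sigma2 = 1 / rng.gamma(n / 2, 2 / np.sum((y - mu) ** 2))
    draws[s] = mu, sigma2

burn = draws[1000:]
print(f"posterior mean of mu = {burn[:, 0].mean():.2f}, "
      f"of sigma2 = {burn[:, 1].mean():.2f}")
```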

20.