Similar Documents
20 similar documents found.
1.
2.
The Library of Congress (LC) is in the process of developing a new level of MARC 21 and AACR2 cataloging for non-serial Internet resources called “access” level. This article briefly describes the impetus behind the creation of this new standard, the proposed standard itself, and the results of a test conducted at LC using the core data set and cataloging guidelines. The Library's future plans for implementing and possibly expanding the use of access level are identified.

3.
While analyzing 2 × 2 contingency tables, the log odds ratio for measuring the strength of association is often approximated by a normal distribution with some variance. We show that the expression for that variance needs to be modified in the presence of correlation between the two binomial distributions of the contingency table. In the present paper, we derive a correlation-adjusted variance of the limiting normal distribution of the log odds ratio. We also propose a correlation-adjusted test based on the standard odds ratio for analyzing matched-pair studies and any other study settings that induce correlated binary outcomes. We demonstrate that our proposed test outperforms the classical McNemar's test. Simulation studies show that the gains in power are especially pronounced when the sample size is small and the correlation is strong. Two real data sets are used to demonstrate that the proposed method may lead to conclusions significantly different from those reached using McNemar's test.
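For context, a minimal sketch of the classical quantities this article adjusts: the standard (independence) variance 1/a + 1/b + 1/c + 1/d of the log odds ratio, and McNemar's test for matched pairs. The correlation-adjusted variance derived in the article is not reproduced here.

```python
import numpy as np
from scipy import stats

def log_odds_ratio_ci(a, b, c, d, level=0.95):
    """Classical Wald CI for the log odds ratio of a 2x2 table
    [[a, b], [c, d]], using the standard (independence) variance
    1/a + 1/b + 1/c + 1/d.  The article's correction adds a
    correlation term to this variance; that term is not shown here."""
    log_or = np.log(a * d / (b * c))
    se = np.sqrt(1/a + 1/b + 1/c + 1/d)
    z = stats.norm.ppf(0.5 + level / 2)
    return log_or - z * se, log_or + z * se

def mcnemar_test(n01, n10):
    """Classical McNemar chi-square test for matched pairs, based only
    on the two discordant counts (pre=0/post=1 and pre=1/post=0)."""
    chi2 = (n01 - n10) ** 2 / (n01 + n10)
    return chi2, stats.chi2.sf(chi2, df=1)

print(log_odds_ratio_ci(20, 10, 8, 25))
print(mcnemar_test(15, 5))
```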

4.
We propose an approach that utilizes the Delaunay triangulation to identify a robust/outlier-free subsample. Given that the data structure of the non-outlying points is convex (e.g. of elliptical shape), this subsample can then be used to give robust estimates of location and scatter (by applying the classical mean and covariance). The estimators derived from our approach are shown to have a high breakdown point. In addition, we provide a diagnostic plot to expand the initial subset in a data-driven way, further increasing the estimators' efficiency.
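The abstract does not spell out the subsample selection rule, so the following is only a hypothetical heuristic in the same spirit: triangulate the data, flag points whose nearest Delaunay neighbour is far away, and apply the classical estimators to the rest. The function name and trimming rule are illustrative assumptions, not the authors' procedure.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_trimmed_estimates(X, trim=0.2):
    """Hypothetical heuristic (not the article's rule): score each point
    by the length of the shortest Delaunay edge incident to it, drop the
    `trim` fraction with the largest scores, and compute the classical
    mean and covariance of the remaining subsample."""
    tri = Delaunay(X)
    score = np.full(X.shape[0], np.inf)
    # Each simplex contributes edges between all pairs of its vertices.
    for simplex in tri.simplices:
        for i in range(len(simplex)):
            for j in range(i + 1, len(simplex)):
                a, b = simplex[i], simplex[j]
                d = np.linalg.norm(X[a] - X[b])
                score[a] = min(score[a], d)
                score[b] = min(score[b], d)
    keep = score <= np.quantile(score, 1 - trim)
    sub = X[keep]
    return sub.mean(axis=0), np.cov(sub, rowvar=False)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(95, 2)), rng.normal(8, 1, size=(5, 2))])
mean, cov = delaunay_trimmed_estimates(X)
print(mean)
```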

5.
6.
This article studies the construction of a Bayesian confidence interval for the risk difference in a 2 × 2 table with structural zero. The exact posterior distribution of the risk difference is derived under a Dirichlet prior distribution, and a tail-based interval is used to construct the Bayesian confidence interval. The frequentist performance of the tail-based interval is investigated and compared with the score-based interval by simulation. Our results show that the tail-based interval under the Jeffreys prior performs as well as or better than the score-based confidence interval.

7.
This paper studies the construction of a Bayesian confidence interval for the risk ratio (RR) in a 2 × 2 table with structural zero. Under a Dirichlet prior distribution, the exact posterior distribution of the RR is derived, and a tail-based interval is suggested for constructing the Bayesian confidence interval. The frequentist performance of this confidence interval is investigated by simulation and compared with the score-based interval in terms of mean coverage probability and mean expected width. An advantage of the Bayesian confidence interval is that it is well defined for all data structures and has a shorter expected width. Our simulation shows that the Bayesian tail-based interval under the Jeffreys prior performs as well as or better than the score-based confidence interval.
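Items 6 and 7 share the same tail-based machinery, so a single Monte Carlo sketch covers both: draw from the conjugate Dirichlet posterior of the three free cell probabilities and take equal-tail quantiles of the functional of interest. The example functional below is an illustrative assumption; the exact definitions of the risk difference and risk ratio under a structural zero depend on the sampling design and are given in the articles, not here.

```python
import numpy as np

def tail_based_interval(counts, g, alpha_prior=(0.5, 0.5, 0.5),
                        level=0.95, n_draws=100_000, seed=0):
    """Equal-tail ("tail-based") Bayesian interval for a functional g of
    the three free cell probabilities of a 2x2 table with structural zero.
    The Dirichlet prior is conjugate: posterior = Dirichlet(counts + prior);
    with all parameters 0.5 this is the Jeffreys prior used above."""
    rng = np.random.default_rng(seed)
    post = rng.dirichlet(np.asarray(counts) + np.asarray(alpha_prior), n_draws)
    draws = g(post[:, 0], post[:, 1], post[:, 2])
    lo, hi = np.quantile(draws, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

# Illustrative functional only; the articles' own definitions may differ.
risk_ratio = lambda p11, p12, p22: (p11 + p12) / (p11 + p22)
counts = (30, 10, 60)  # the structurally zero cell is omitted
print(tail_based_interval(counts, risk_ratio))
```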

8.
To assess the efficacy of a treatment, patients are administered a pre-test, the treatment, and a post-test (identical to the pre-test). These patients are then categorized according to the outcomes observed on both tests, e.g., (S,S), (S,F), etc. We also observe "incomplete" information: for some patients only the pre-test outcome is known, and for others only the post-test result. A Bayesian framework is fitted to the problem, and Bayes factors, posterior odds ratios, and utility functions are given to evaluate the treatment. A method of assessing the prior distribution is specified and a numerical example is worked.
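As a toy version of the Bayes-factor machinery (a deliberate simplification: the article works with the joint pre/post categories and incomplete data, not the collapsed counts used here), one can compare a common-success-rate model against independent pre- and post-test rates under conjugate Beta priors.

```python
from math import lgamma, exp

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_marginal(successes, failures, a=1.0, b=1.0):
    """Log marginal likelihood of binomial data under a Beta(a, b) prior;
    the binomial coefficient is omitted since it cancels in the ratio."""
    return log_beta(a + successes, b + failures) - log_beta(a, b)

def bayes_factor_improvement(pre_s, pre_f, post_s, post_f):
    """Hypothetical simplification of the article's setup: M1 lets the
    pre- and post-test success rates differ (independent Beta(1,1)
    priors); M0 imposes a single common success rate."""
    log_m1 = log_marginal(pre_s, pre_f) + log_marginal(post_s, post_f)
    log_m0 = log_marginal(pre_s + post_s, pre_f + post_f)
    return exp(log_m1 - log_m0)

print(bayes_factor_improvement(pre_s=12, pre_f=28, post_s=25, post_f=15))
```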

9.
10.
The main objective of this paper is to develop a full Bayesian analysis for the Birnbaum–Saunders (BS) regression model based on scale mixtures of the normal (SMN) distribution with right-censored survival data. BS distributions based on SMN models are a very general approach for analysing lifetime data, having as special cases the Student-t-BS, slash-BS and contaminated normal-BS distributions, and providing a flexible alternative to the corresponding BS distribution or other well-known compatible models, such as the log-normal distribution. A Gibbs sampling algorithm with Metropolis–Hastings steps is used to obtain the Bayesian estimates of the parameters. Moreover, model selection criteria for comparing the fitted models are discussed, and case-deletion influence diagnostics are developed for the joint posterior distribution based on the Kullback–Leibler divergence. The newly developed procedures are illustrated on a real data set previously analysed under BS regression models.
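The building block behind these models is the BS density, which is easiest to write through its standard normal representation; the SMN variants in the article replace the normal kernel with a scale mixture (Student-t, slash, contaminated normal). A minimal sketch of the normal-kernel case only:

```python
import numpy as np
from scipy import stats

def bs_logpdf(t, alpha, beta):
    """Log-density of the Birnbaum-Saunders distribution via its normal
    representation: if T ~ BS(alpha, beta) then
    z(T) = (sqrt(T/beta) - sqrt(beta/T)) / alpha ~ N(0, 1)."""
    t = np.asarray(t, dtype=float)
    z = (np.sqrt(t / beta) - np.sqrt(beta / t)) / alpha
    # Jacobian dz/dt of the transformation above.
    dz_dt = (1.0 / np.sqrt(t * beta) + np.sqrt(beta) / t**1.5) / (2.0 * alpha)
    return stats.norm.logpdf(z) + np.log(dz_dt)

t = np.linspace(0.1, 5, 5)
print(bs_logpdf(t, alpha=0.5, beta=1.0))
```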

11.
In some situations, the distribution of the error terms of a multivariate linear regression model may depart from normality. This problem has been addressed, for example, by specifying a different parametric distribution family for the error terms, such as multivariate skewed and/or heavy-tailed distributions. A new solution is proposed, obtained by modelling the error term distribution through a finite mixture of multi-dimensional Gaussian components. The multivariate linear regression model is studied under this assumption. Identifiability conditions are proved, and maximum likelihood estimation of the model parameters is performed using the EM algorithm. The number of mixture components is chosen through model selection criteria; when this number equals one, the proposal reduces to the classical approach. The performance of the proposed approach is evaluated through Monte Carlo experiments and compared with that of other approaches. Finally, results from the analysis of a real dataset are presented.
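A hedged two-stage stand-in for the idea (the article estimates the regression and the mixture jointly by EM; here the regression is fit first and the mixture only to its residuals, with the number of components chosen by BIC):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
# Errors drawn from a two-component mixture (heavy right tail).
err = np.where(rng.random(500) < 0.8,
               rng.normal(0, 1, 500), rng.normal(4, 2, 500))
y = X @ np.array([1.0, -2.0, 0.5]) + err

# Stage 1: least-squares fit; Stage 2: Gaussian mixture on residuals.
resid = (y - LinearRegression().fit(X, y).predict(X)).reshape(-1, 1)
fits = [GaussianMixture(k, random_state=0).fit(resid) for k in range(1, 5)]
best = min(fits, key=lambda m: m.bic(resid))
print("components chosen by BIC:", best.n_components)
```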

12.
This study estimates default probabilities of 124 emerging countries from 1981 to 2002 as a function of a set of macroeconomic and political variables. The estimated probabilities are then compared with the default rates implied by the sovereign credit ratings of three major international credit rating agencies (CRAs) – Moody's Investors Service, Standard & Poor's and Fitch Ratings. Sovereign debt default probabilities are used by investors in pricing sovereign bonds and loans as well as in determining country risk exposure. The study finds that CRAs tend to underestimate sovereign debt risk, as their ratings are usually too optimistic.

13.
14.
In recent years, many articles have been written about Bayesian model selection. In this article, a different and easier method is proposed and analyzed. The key idea is based on the well-known property that, under the true model, the cumulative distribution function evaluated at the data is distributed uniformly over the interval (0, 1). The method is first introduced for the continuous case and then for the discrete case by smoothing the cumulative distribution function. Some asymptotic properties of the method are obtained by developing an alternative to Helly's theorems. Finally, the performance of the method is evaluated by simulation, showing good behavior.
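A minimal sketch of the uniformity property the method exploits: fit each candidate model, apply its fitted CDF to the data (the probability integral transform), and measure the distance from Uniform(0, 1). The Kolmogorov–Smirnov distance below is assumed purely for illustration; the article's own criterion may differ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.gamma(shape=3.0, scale=2.0, size=300)  # true model: gamma

candidates = {
    "gamma": stats.gamma, "lognorm": stats.lognorm, "expon": stats.expon,
}
for name, dist in candidates.items():
    params = dist.fit(x)                # maximum-likelihood fit
    pit = dist.cdf(x, *params)          # probability integral transform
    ks = stats.kstest(pit, "uniform")   # distance from Uniform(0, 1)
    print(f"{name:8s}  KS statistic = {ks.statistic:.3f}")
```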

15.
In this paper, we propose a new family of distributions, the exponentiated exponential–geometric (E2G) distribution. The E2G distribution is a straightforward generalization of the exponential–geometric (EG) distribution proposed by Adamidis and Loukas [A lifetime distribution with decreasing failure rate, Statist. Probab. Lett. 39 (1998), pp. 35–42], and accommodates increasing, decreasing and unimodal hazard functions. It arises in a latent competing risks scenario, where the lifetime associated with a particular risk is not observable; only the minimum lifetime among all risks is. The properties of the proposed distribution are discussed, including a formal derivation of its probability density function and explicit algebraic formulas for its survival and hazard functions, moments, rth moment of the ith order statistic, mean residual lifetime and modal value. Maximum-likelihood inference is implemented straightforwardly. A mis-specification simulation study was performed to assess the extent of mis-specification errors when testing the EG distribution against the E2G; we observed that it is usually possible to discriminate between the two distributions even for moderate samples in the presence of censoring. The practical importance of the new distribution is demonstrated in three applications in which the E2G distribution is compared with several lifetime distributions.
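The latent competing-risks construction mentioned above is easy to simulate: draw a geometric number of exponential risk times and keep the minimum. This reproduces only the EG construction; the exponentiation step defining the E2G is not reproduced, and the geometric parameterization below is one common convention, assumed for illustration.

```python
import numpy as np

def rEG(n, beta, p, rng):
    """Simulate the exponential-geometric competing-risks construction:
    each lifetime is the minimum of N i.i.d. exponential(beta) latent
    risk times, with N geometric (success probability 1 - p, one common
    parameterization)."""
    N = rng.geometric(1 - p, size=n)   # number of latent risks, N >= 1
    return np.array([rng.exponential(1 / beta, size=k).min() for k in N])

rng = np.random.default_rng(3)
sample = rEG(10_000, beta=1.0, p=0.5, rng=rng)
print(sample.mean())
```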

16.
Several studies have found that occasional-break processes may produce realizations with slowly decaying autocorrelations, which are hard to distinguish from the long memory phenomenon. In this paper we suggest the use of Box–Pierce statistics to discriminate between long memory and occasional-break processes. We conduct an extensive Monte Carlo experiment to examine the finite sample properties of the Box–Pierce and other simple test statistics in this framework. The results allow us to infer important guidelines for applied statisticians in practice.
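The Box–Pierce statistic itself is simple to compute: Q = n Σ_{k=1}^{m} ρ̂_k², asymptotically χ²_m under a short-memory null. A minimal sketch, applied to a series with a single mean break (an occasional-break realization of the kind the paper studies):

```python
import numpy as np

def box_pierce(x, m):
    """Box-Pierce statistic Q = n * sum_{k=1}^{m} rho_k^2, where rho_k
    is the lag-k sample autocorrelation of the demeaned series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    acf = np.array([np.dot(x[:-k], x[k:]) / np.dot(x, x)
                    for k in range(1, m + 1)])
    return n * np.sum(acf ** 2)

# White noise with one mean shift: spuriously persistent autocorrelation.
rng = np.random.default_rng(4)
breaks = np.concatenate([rng.normal(0, 1, 500), rng.normal(2, 1, 500)])
print(box_pierce(breaks, m=20))
```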

17.
This paper studies the performance of specific-to-general construction of forecasting models that accord with (approximate) linear autoregressions. Monte Carlo experiments are complemented with ex-ante forecasting results for 97 macroeconomic time series collected for the G7 economies in Stock and Watson (J. Forecast. 23:405–430, 2004). In small samples, the specific-to-general strategy is superior in ex-ante forecasting performance to the commonly applied strategy of successive model reduction according to weakest parameter significance. Applied to real data, the specific-to-general approach again turns out to be preferable: compared with successive model reduction, successive model expansion is less likely to involve overly large losses in forecast accuracy, and it is particularly recommended when the diagnosed prediction schemes are characterized by a medium to large number of predictors.
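A hypothetical stand-in for the specific-to-general idea (the paper's exact expansion rule is not given in the abstract; BIC-guided forward addition of lags is assumed here purely for illustration):

```python
import numpy as np

def fit_ar_bic(x, lags):
    """Least-squares fit of an AR model on the given lag set; Gaussian BIC."""
    p = max(lags)
    Y = x[p:]
    Z = np.column_stack([np.ones(len(Y))] +
                        [x[p - k:len(x) - k] for k in lags])
    beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    rss = np.sum((Y - Z @ beta) ** 2)
    n = len(Y)
    return n * np.log(rss / n) + (len(lags) + 1) * np.log(n)

def specific_to_general(x, max_lag=12):
    """Start from the empty model and repeatedly add the lag that most
    improves BIC, stopping when no addition helps."""
    chosen, best = [], np.inf
    while True:
        trials = [(fit_ar_bic(x, sorted(chosen + [k])), k)
                  for k in range(1, max_lag + 1) if k not in chosen]
        if not trials:
            return chosen
        bic, k = min(trials)
        if bic >= best:
            return chosen
        chosen, best = sorted(chosen + [k]), bic

rng = np.random.default_rng(5)
x = np.zeros(600)
for t in range(2, 600):            # AR(2) data-generating process
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()
print(specific_to_general(x))
```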

18.
19.
20.
In this article, we consider the problem of constructing simultaneous confidence intervals for odds ratios in 2 × k classification tables with a fixed reference level. We discuss six methods designed to control the familywise error rate and investigate these methods in terms of simultaneous coverage probability and mean interval length. We illustrate the importance and the implementation of these methods using two HIV public health studies.
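As a minimal illustration of the problem setting (not necessarily one of the article's six methods), a Bonferroni-adjusted Wald construction for the k − 1 odds ratios against the reference column:

```python
import numpy as np
from scipy import stats

def bonferroni_or_cis(table, ref=0, level=0.95):
    """Bonferroni-adjusted simultaneous Wald intervals for the odds ratio
    of each column of a 2 x k table against a fixed reference column:
    the per-comparison level is raised so the familywise error rate is
    at most 1 - level."""
    table = np.asarray(table, dtype=float)
    k = table.shape[1]
    m = k - 1                                   # number of comparisons
    z = stats.norm.ppf(1 - (1 - level) / (2 * m))
    a, c = table[0, ref], table[1, ref]
    out = {}
    for j in range(k):
        if j == ref:
            continue
        b, d = table[0, j], table[1, j]
        # Odds of the first-row outcome in column j vs. the reference.
        log_or = np.log((b * c) / (a * d))
        se = np.sqrt(1/a + 1/b + 1/c + 1/d)
        out[j] = (np.exp(log_or - z * se), np.exp(log_or + z * se))
    return out

table = [[30, 18, 12], [20, 32, 38]]            # 2 x 3 table of counts
print(bonferroni_or_cis(table))
```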
