Similar literature
 20 similar documents found (search time: 125 ms)
1.
Tonia Graves. Serials Review, 2017, 43(3-4): 246-250
ABSTRACT

When Old Dominion University Libraries conducted the LibQUAL+ survey in 2015, results indicated a lack of satisfaction in effectively discovering and using our electronic resources. This article is based on a presentation from the 2017 North Carolina Serials Conference describing how Old Dominion University Libraries used data from the LibQUAL+ survey results to help our users connect more effectively with information resources, resulting in an improved user experience and an increased level of satisfaction.

2.
Serials Review, 2012, 38(4): 239-244
Abstract

As physical collections are increasingly pressed for space, libraries continue to look at practices such as weeding and off-site storage, coupled with services like on-demand article-level document delivery, as potential space solutions. As libraries discard long runs of print journals, though, what role do journal backfiles play going forward? When many see backfiles as space hogs, is there data that provides a compelling argument for keeping backfiles of journals, and, if so, how should those backfiles be handled and by whom? Incorporating information from an interview with Glenn Jaeger, owner of Absolute Backorder Service, interlibrary loan data from the University of Prince Edward Island, and examples from the literature, this Balance Point column looks at the role that print journal backfiles play in the library landscape.

3.
4.
5.
6.
If unit‐level data are available, small area estimation (SAE) is usually based on models formulated at the unit level, but they are ultimately used to produce estimates at the area level and thus involve area‐level inferences. This paper investigates the circumstances under which using an area‐level model may be more effective. Linear mixed models (LMMs) fitted using different levels of data are applied in SAE to calculate synthetic estimators and empirical best linear unbiased predictors (EBLUPs). The performance of area‐level models is compared with unit‐level models when both individual and aggregate data are available. A key factor is whether there are substantial contextual effects. Ignoring these effects in unit‐level working models can cause biased estimates of regression parameters. The contextual effects can be automatically accounted for in the area‐level models. Using synthetic and EBLUP techniques, small area estimates based on different levels of LMMs are investigated in this paper by means of a simulation study.
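The area-level shrinkage idea summarized in this abstract can be sketched with a toy Fay–Herriot-style simulation. Everything below (the covariate, the variance values, and the simplifying assumption that the model variance is known) is illustrative, not the authors' actual simulation design:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate m small areas with one area-level covariate (illustrative values).
m = 50
x = rng.uniform(0, 1, m)                   # area-level covariate
beta0, beta1, sigma2_v = 1.0, 2.0, 0.25    # true regression and model variance
theta = beta0 + beta1 * x + rng.normal(0, np.sqrt(sigma2_v), m)  # true area means
psi = np.full(m, 0.5)                      # known sampling variances
y = theta + rng.normal(0, np.sqrt(psi))    # direct survey estimates

# Fit the area-level regression by weighted least squares (sigma2_v is taken
# as known here; in practice it is estimated, e.g. by REML).
X = np.column_stack([np.ones(m), x])
W = np.diag(1.0 / (psi + sigma2_v))
beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Synthetic estimator: regression prediction only.
synthetic = X @ beta_hat

# EBLUP: shrink the direct estimate toward the synthetic part.
gamma = sigma2_v / (sigma2_v + psi)        # shrinkage weights
eblup = gamma * y + (1 - gamma) * synthetic

mse_direct = np.mean((y - theta) ** 2)
mse_eblup = np.mean((eblup - theta) ** 2)
print(mse_direct, mse_eblup)
```

When the area-level model holds, the EBLUP's average squared error is substantially below that of the direct estimator, which is the kind of comparison the simulation study in the paper formalizes.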

7.
8.
In an informal way, some dilemmas in connection with hypothesis testing in contingency tables are discussed. The body of the article concerns the numerical evaluation of Cochran's Rule about the minimum expected value in r × c contingency tables with fixed margins when testing independence with Pearson's X2 statistic using the χ2 distribution.
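As a concrete illustration of the rule being evaluated, the check below runs Pearson's X² test of independence on a small table and then applies the common statement of Cochran's rule (no expected count below 1, at most 20% of cells below 5). The counts are made up for illustration:

```python
import numpy as np
from scipy.stats import chi2_contingency

# A small r x c contingency table (illustrative counts).
table = np.array([[12,  5,  8],
                  [ 9, 14,  6]])

# Pearson's X^2 test of independence; `expected` holds the fitted
# expected counts under the independence hypothesis.
chi2, p, dof, expected = chi2_contingency(table, correction=False)

# Cochran's rule of thumb: the chi-square approximation is considered
# adequate when no expected count is below 1 and at most 20% of cells
# have expected counts below 5.
frac_below_5 = np.mean(expected < 5)
rule_ok = expected.min() >= 1 and frac_below_5 <= 0.20

print(f"X^2 = {chi2:.3f}, df = {dof}, p = {p:.4f}, Cochran rule OK: {rule_ok}")
```

The article's question is precisely how conservative this rule of thumb needs to be for the χ² approximation to the distribution of X² to remain accurate.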

9.
The Statistical Policy Division of the Office of Management and Budget has the overall responsibility for the planning and coordination of U.S. government statistics. The present staff of the Statistical Policy Division is attempting, through an integrated publication entitled “A Framework for Planning U.S. Federal Statistics, 1978–1989,” to state its perspective on necessary developments in the coming years. This article illustrates the character of the Framework materials and outlines the process for public review and comment on this undertaking.

10.
11.
12.
Abstract

Birnbaum and Saunders (1969a) pioneered a lifetime model which is commonly used in reliability studies. Based on this distribution, a new model called the gamma Birnbaum–Saunders distribution is proposed for describing fatigue life data. Several properties of the new distribution, including explicit expressions for the ordinary and incomplete moments, generating and quantile functions, mean deviations, density function of the order statistics, and their moments, are derived. We discuss the method of maximum likelihood and a Bayesian approach to estimate the model parameters. The superiority of the new model is illustrated by means of three real failure data sets. We also propose a new extended regression model based on the logarithm of the new distribution. The latter model can be very useful for the analysis of real data and provides more realistic fits than other special regression models.
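The gamma Birnbaum–Saunders extension itself is not in standard libraries, but the baseline Birnbaum–Saunders maximum likelihood fit, the starting point the abstract builds on, can be sketched with SciPy, where the distribution is exposed as `fatiguelife`. The parameter values are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate fatigue-life data from a baseline Birnbaum-Saunders distribution
# (SciPy's `fatiguelife`: shape alpha, scale beta).
alpha_true, beta_true = 0.5, 100.0
data = stats.fatiguelife.rvs(alpha_true, loc=0, scale=beta_true,
                             size=500, random_state=rng)

# Maximum likelihood fit with the location fixed at 0, as is standard
# for lifetime data.
alpha_hat, loc_hat, beta_hat = stats.fatiguelife.fit(data, floc=0)
print(alpha_hat, beta_hat)
```

Fitting the gamma-extended version proposed in the paper would require coding its log-likelihood by hand and passing it to a numerical optimizer such as `scipy.optimize.minimize`.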

13.
We develop a model of labor force status (federal employment, nonfederal employment, unemployment, and out of the labor force) that depends on human capital variables, local labor market conditions, and personal characteristics. According to the estimated model for white non-Hispanic males and females, a substantial difference exists between blacks and white non-Hispanics even after correction for the control variables. However, the control variables explain almost all of the difference between Hispanics and white non-Hispanics.

14.
15.
Abstract

Serialists have long believed their field is underrepresented in the library and information science (LIS) curriculum. A recent review of Web sites of ALA-accredited LIS programs shows no significant change in the percentage of formal serials courses in those programs. The problem of adequate formal serials education is examined in the broader context of LIS education as a whole. Increasing traditional, formal serials education is an impractical goal. Instead, we should develop continuing education opportunities, and work to dispel some of the mystique of serials.

16.
ABSTRACT

P values linked to null hypothesis significance testing (NHST) are the most widely (mis)used method of statistical inference. Empirical data suggest that across the biomedical literature (1990–2015), when abstracts use P values, 96% of them report P values of 0.05 or less. The same percentage (96%) applies to full-text articles. Among 100 articles in PubMed, 55 report P values, while only 4 present confidence intervals for all the reported effect sizes, none use Bayesian methods, and none use the false-discovery rate. Over 25 years (1990–2015), use of P values in abstracts has doubled for all of PubMed and tripled for meta-analyses, while for some types of designs, such as randomized trials, the majority of abstracts report P values. There is major selective reporting of P values. Abstracts tend to highlight the most favorable P values, and inferences use even further spin to reach exaggerated, unreliable conclusions. The availability of large-scale data on P values from many papers has allowed the development and application of methods that try to detect and model selection biases, for example p-hacking, that cause patterns of excess significance. Inferences need to be cautious, as they depend on the assumptions made by these models and can be affected by the presence of other biases (e.g., confounding in observational studies). While much of the unreliability of past and present research is driven by small, underpowered studies, NHST with P values may also be particularly problematic in the era of overpowered big data. NHST and P values are optimal only in a minority of current research. Using a more stringent threshold, as in the recently proposed shift from P < 0.05 to P < 0.005, is a temporizing measure to contain the flood and death-by-significance. NHST and P values may be replaced in many fields by other, more fit-for-purpose inferential methods. However, curtailing selection biases requires additional measures, beyond changes in inferential methods, in particular reproducible research practices.
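The excess-significance pattern described in this abstract can be reproduced in a small Monte Carlo sketch: honest single-look tests under the null reject at about the nominal 5% rate, while a simple optional-stopping form of p-hacking inflates that rate. The study sizes and number of looks below are arbitrary illustrations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def one_study(n=30):
    """One two-sample t-test with no true effect (single look)."""
    a = rng.normal(size=n)
    b = rng.normal(size=n)
    return stats.ttest_ind(a, b).pvalue

def hacked_study(n=30, extra_batches=5, batch=10):
    """Optional stopping: keep adding observations and re-testing,
    then report the smallest p value seen, a simple form of p-hacking."""
    a = list(rng.normal(size=n))
    b = list(rng.normal(size=n))
    ps = [stats.ttest_ind(a, b).pvalue]
    for _ in range(extra_batches):
        a.extend(rng.normal(size=batch))
        b.extend(rng.normal(size=batch))
        ps.append(stats.ttest_ind(a, b).pvalue)
    return min(ps)

n_sim = 2000
honest = np.mean([one_study() < 0.05 for _ in range(n_sim)])
hacked = np.mean([hacked_study() < 0.05 for _ in range(n_sim)])
print(honest, hacked)
```

The gap between the two rejection rates is exactly the kind of signal that the selection-bias detection methods mentioned in the abstract try to model from published P values.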

17.
18.
19.
20.
Symbolic Itô calculus refers both to the implementation of Itô calculus in a computer algebra package and to its application. This article reports on progress in the implementation of Itô calculus in the powerful and innovative computer algebra package AXIOM, in the context of a decade of previous implementations and applications. It is shown how the elegant algebraic structure underlying the expressive and effective formalism of Itô calculus can be implemented directly in AXIOM using the package's programmable facilities for strong typing of computational objects. An application is given of the use of the implementation to provide calculations for a new proof, based on stochastic differentials, of the Mardia-Dryden distribution from statistical shape theory.
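AXIOM itself would be needed to reproduce the implementation described here, but the core algebraic rule it encodes, the Itô multiplication table (dt·dt = dt·dW = 0, dW·dW = dt) and the resulting Itô lemma, can be sketched symbolically in Python with SymPy. The function name `ito_df` is an illustrative choice, not part of any library:

```python
import sympy as sp

# Symbols for the state and the coefficients of dX = mu dt + sigma dW.
x, mu, sigma = sp.symbols("x mu sigma")

def ito_df(f, drift, vol):
    """Return (dt-coefficient, dW-coefficient) of df for dX = drift dt + vol dW.

    Applying the Ito multiplication table, (dX)^2 contributes vol^2 dt,
    which yields the second-order correction term in the dt coefficient.
    """
    f1 = sp.diff(f, x)
    f2 = sp.diff(f, x, 2)
    dt_coeff = sp.simplify(drift * f1 + sp.Rational(1, 2) * vol**2 * f2)
    dw_coeff = sp.simplify(vol * f1)
    return dt_coeff, dw_coeff

# Example: geometric Brownian motion (drift mu*X, volatility sigma*X)
# applied to f(X) = log(X).
dt_c, dw_c = ito_df(sp.log(x), mu * x, sigma * x)
print(dt_c, dw_c)
```

For this example the sketch recovers the familiar d log X = (mu - sigma²/2) dt + sigma dW; a full computer-algebra implementation such as the AXIOM package described in the article handles general semimartingales and strong typing of the objects involved.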
