Similar Articles

20 similar articles found.
1.
In July 2004, Cindy Hepfer asked friends and colleagues: “What question would you like to ask Clifford Lynch if you had the chance?” As a result, Clifford Lynch discusses a wide variety of topics and issues affecting the serials community, ranging from Open Access, institutional repositories, what we can learn from Google and Amazon, and Shibboleth, to his favorite travel destinations and how he prepares for presentations.

2.
Abstract

As a selector, have you ever wondered whether the resource you requested was ordered? As an acquisitions staff member, are you struggling to keep track of order requests arriving through various channels? As a manager, do you find it challenging to monitor staff work? CORAL, an open source electronic resource management system, proved to be one solution to these concerns for the North Carolina State University (NCSU) Libraries. This article discusses how to manage workflows in CORAL and outlines an NCSU initiative to evolve the tool through collaboration across departments and across the CORAL community.

3.
Abstract

“They Might Be Giants” gets a facelift, tackles a new medium and welcomes a co-editor. Michael Brown and Jessica Teeter bring you this first installment of “From Picas to Pixels: Life in the Trenches of Print and Web Publishing.” This installment features an interview with the publishers of a Web magazine called FILE Magazine, A Collection of Unexpected Photography.

4.
Tabloids     
Abstract

Comic books and libraries do not seem to get along, at least not in North American libraries. Aside from a few dozen specialized, noncirculating research collections, retrospective comic book holdings remain virtually unknown as a library resource. Browsing collections of current comic books are equally rare in public, school, and college libraries. In a 1984 article, comic book bibliographer Randall Scott observed, “In most communities, if you want to read or refer to a comic book, you have to buy it.” Librarian Doug Highsmith concurred, writing in 1992 that public libraries carrying the latest issues of popular comics titles are “still the exception rather than the rule.” Both statements remain fundamentally true today.

5.
Abstract

It is widely acknowledged that the biomedical literature suffers from a surfeit of false positive results. Part of the reason for this is the persistence of the myth that observing p < 0.05 is sufficient justification to claim a discovery. It is hopeless to expect users to abandon their reliance on p-values unless they are offered an alternative way of judging the reliability of their conclusions. If the alternative method is to have a chance of being adopted widely, it will have to be easy to understand and to calculate. One such proposal is based on calculation of the false positive risk (FPR). It is suggested that p-values and confidence intervals should continue to be given, but that they should be supplemented by a single additional number that conveys the strength of the evidence better than the p-value does. This number could be the minimum FPR (calculated on the assumption of a prior probability of 0.5, the largest value that can be assumed in the absence of hard prior data). Alternatively, one could specify the prior probability that it would be necessary to believe in order to achieve an FPR of, say, 0.05.
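The minimum-FPR idea described in this abstract can be sketched in a few lines. This is a minimal illustration of the simple "p-less-than" form of the false positive risk; the function name and the example values (power = 0.8) are our assumptions, not taken from the paper:

```python
def false_positive_risk(alpha, power, prior):
    """False positive risk for the 'p < alpha' interpretation:
    the fraction of 'significant' results that are false positives."""
    false_pos = alpha * (1 - prior)   # rate of false positives among nulls
    true_pos = power * prior          # rate of true positives among real effects
    return false_pos / (false_pos + true_pos)

# Minimum FPR: prior = 0.5, the largest prior defensible without hard data.
min_fpr = false_positive_risk(alpha=0.05, power=0.8, prior=0.5)
print(round(min_fpr, 3))  # 0.059
```

Even under the most generous admissible prior, roughly 6% of results declared significant at p = 0.05 would be false positives in this sketch; with a more skeptical prior the FPR grows quickly.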

6.
Given any generalized inverse (X'X)⁻ appropriate to the normal equations X'Xb⁰ = X'y for the linear model y = Xb + e, a procedure is given for obtaining from it a generalized inverse appropriate to a restricted model having restrictions P'b = 0 for P'b nonestimable.
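As a numerical illustration (ours, not from the paper), numpy's Moore–Penrose pseudoinverse can serve as one particular choice of generalized inverse (X'X)⁻ when the design matrix is rank-deficient and X'X is singular:

```python
import numpy as np

# Rank-deficient design: column 3 = column 1 + column 2, so X'X is singular.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 2))
X = np.column_stack([A, A.sum(axis=1)])
y = rng.standard_normal(10)

XtX = X.T @ X
G = np.linalg.pinv(XtX)   # one choice of generalized inverse (X'X)^-
b0 = G @ X.T @ y          # a solution of the normal equations

# b0 solves X'X b = X'y even though X'X has no ordinary inverse.
print(np.allclose(XtX @ b0, X.T @ y))  # True
```

Any other generalized inverse yields a (generally different) solution b⁰, but estimable functions of b are invariant to the choice.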

7.
DSTAT, Version 1.10: Available from Lawrence Erlbaum Associates, Inc., 10 Industrial Ave., Mahwah, NJ 07430-2262; phone: 800-926-6579

TRUE EPISTAT, Version 4.0: Available from Epistat Services, 2011 Cap Rock Circle, Richardson, TX 75080; phone: 214-680-1376; fax: 214-680-1303.

FAST*PRO, Version 1.0: Available from Academic Press, Inc., 955 Massachusetts Avenue, Cambridge, MA 02139; phone: 800-321-5068; fax: 800-336-7377.

Meta-analysts conduct studies in which the responses are analytic summary measurements, such as risk differences, effect sizes, p values, or z statistics, obtained from a series of independent studies. The motivation for conducting a meta-analysis is to integrate research findings over studies in order to summarize the evidence about treatment efficacy or risk factors. This article presents a comparative review of three meta-analytic software packages: DSTAT, TRUE EPISTAT, and FAST*PRO.

8.
The hierarchically orthogonal functional decomposition of any measurable function η of a random vector X = (X1, …, Xp) consists in decomposing η(X) into a sum of functions of increasing dimension, each depending only on a subvector of X. Even when X1, …, Xp are assumed to be dependent, this decomposition is unique if the components are hierarchically orthogonal, that is, if two components are orthogonal whenever all the variables involved in one of the summands are a subset of the variables involved in the other. Setting Y = η(X), this decomposition leads to the definition of generalized sensitivity indices able to quantify the uncertainty of Y due to each dependent input in X [Chastaing G, Gamboa F, Prieur C. Generalized Hoeffding–Sobol decomposition for dependent variables – application to sensitivity analysis. Electron J Statist. 2012;6:2420–2448]. In this paper, a numerical method is developed to identify the component functions of the decomposition using the hierarchical orthogonality property. Furthermore, the asymptotic properties of the component estimators are studied, as is the numerical estimation of the generalized sensitivity indices on a toy model. Lastly, the method is applied to a model arising from a real-world problem.

9.
10.
ABSTRACT

Background: Many exposures in epidemiological studies have nonlinear effects, and the problem is to choose an appropriate functional relationship between such exposures and the outcome. One common approach is to investigate several parametric transformations of the covariate of interest and to select a posteriori the function that fits the data best. However, such an approach may result in an inflated Type I error. Methods: Through a simulation study, we generated data from Cox models with different transformations of a single continuous covariate. We investigated the Type I error rate and the power of the likelihood ratio test (LRT) corresponding to three different procedures that considered the same set of parametric dose-response functions. The first, unconditional, approach did not involve any model selection, while the second, conditional, approach was based on a posteriori selection of the parametric function. The proposed third approach was similar to the second except that it used a corrected critical value for the LRT to ensure a correct Type I error. Results: The Type I error rate of the second approach was two times higher than the nominal size. For simple monotone dose-response functions, the corrected test had power similar to that of the unconditional approach, while for non-monotone dose-response functions it had higher power. A real-life application focusing on the effect of body mass index on the risk of coronary heart disease death illustrates the advantage of the proposed approach. Conclusion: Our results confirm that a posteriori selection of the functional form of the dose-response relationship induces Type I error inflation. The corrected procedure, which can be applied in a wide range of situations, may provide a good trade-off between Type I error and power.
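The inflation from a posteriori selection can be reproduced in a hypothetical, simplified analogue of the paper's design: ordinary linear regression in place of Cox models, with candidate transformations, sample size, and simulation settings of our own choosing. Under the null, picking the best-fitting transformation and then testing it as if pre-specified rejects more often than the nominal 5%:

```python
import numpy as np

rng = np.random.default_rng(1)
CRIT = 3.841  # 95% critical value of chi-square with 1 df
n, n_sim = 200, 5000
rejections = 0
for _ in range(n_sim):
    x = rng.uniform(0.1, 2.0, n)
    y = rng.standard_normal(n)            # null: outcome independent of x
    rss0 = np.sum((y - y.mean()) ** 2)    # intercept-only model
    lrts = []
    for f in (x, x**2, np.log(x), 1.0 / x):  # candidate dose-response shapes
        X = np.column_stack([np.ones(n), f])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss1 = np.sum((y - X @ beta) ** 2)
        lrts.append(n * np.log(rss0 / rss1))
    # conditional procedure: keep the best-fitting transformation, then
    # test it as though it had been specified in advance
    rejections += max(lrts) > CRIT
rate = rejections / n_sim
print(rate > 0.05)  # True: rejects more often than the nominal level
```

The corrected procedure in the paper amounts to replacing CRIT with a larger critical value calibrated to the selection step.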

11.
A semi-Markovian random walk process X(t) with a generalized beta distribution of chance is considered. The asymptotic expansions for the first four moments of the ergodic distribution of the process are obtained as E(ζn) → ∞, when the random variable ζn has a generalized beta distribution with parameters (s, S, α, β), α, β > 1, 0 ≤ s < S < ∞. Finally, the accuracy of the asymptotic expansions is examined by using the Monte Carlo simulation method.

12.
ABSTRACT

The paper presents an explicit expression for the density of an n-dimensional random vector with a singular elliptical distribution. Based on this, the densities of the generalized chi-squared and generalized t distributions are derived, examining the Pearson Type VII and Kotz Type distributions as specific elliptical distributions. Finally, the results are applied to the study of the distribution of the residuals of an elliptical linear model and of the distribution of the t-statistic based on a sample from an elliptical population.

13.
For X ∼ N_p(μ, Σ), testing H0: Σ = Σ0, with Σ0 known, currently relies on an approximation of the null distribution of the likelihood ratio statistic.

We present here the exact null distribution and its computation, thereby providing a precise tool that can be used in small-sample cases.

14.
In the context of the general linear model Y = Xβ + ε, the matrix P_Z = Z(Z'Z)^(-1)Z', where Z = (X : Y), plays an important role in determining least squares results. In this article we propose two graphical displays for the off-diagonal as well as the diagonal elements of P_Z. The two graphs are based on simple ideas and are useful in the detection of potentially influential subsets of observations in regression. Since P_Z is invariant with respect to permutations of the columns of Z, an added advantage of these graphs is that they can be used to detect outliers in multivariate data, where the rows of Z are usually regarded as a random sample from a multivariate population. We also suggest two calibration points, one for the diagonal elements of P_Z and the other for the off-diagonal elements. The advantage of these calibration points is that they take into consideration the variability of the off-diagonal as well as the diagonal elements of P_Z. They also do not suffer from masking.
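A minimal sketch (ours, with simulated data) of the matrix defined in this abstract, checking the projection properties that make its diagonal elements interpretable as leverages:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((30, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(30)

Z = np.column_stack([X, y])               # Z = (X : Y)
PZ = Z @ np.linalg.inv(Z.T @ Z) @ Z.T     # P_Z = Z(Z'Z)^(-1)Z'

# P_Z is an orthogonal projection: symmetric and idempotent,
# and its diagonal (leverage) elements sum to rank(Z) = 4 here.
print(np.allclose(PZ @ PZ, PZ))           # True
print(round(np.trace(PZ), 6))             # 4.0
```

Unusually large diagonal entries of PZ flag high-leverage rows; the article's graphical displays plot both these and the off-diagonal entries against calibration points.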

15.
A segmented line regression model has been used to describe changes in cancer incidence and mortality trends [Kim, H.-J., Fay, M.P., Feuer, E.J. and Midthune, D.N., 2000, Permutation tests for joinpoint regression with applications to cancer rates. Statistics in Medicine, 19, 335–351; Kim, H.-J., Fay, M.P., Yu, B., Barrett, M.J. and Feuer, E.J., 2004, Comparability of segmented line regression models. Biometrics, 60, 1005–1014]. The least squares fit can be obtained either by using the grid search method proposed by Lerman [Lerman, P.M., 1980, Fitting segmented regression models by grid search. Applied Statistics, 29, 77–84], which is implemented in Joinpoint 3.0, available at http://srab.cancer.gov/joinpoint/index.html, or by using the continuous fitting algorithm proposed by Hudson [Hudson, D.J., 1966, Fitting segmented curves whose join points have to be estimated. Journal of the American Statistical Association, 61, 1097–1129], which will be implemented in the next version of the Joinpoint software. Following the least squares fitting of the model, inference on the parameters can be pursued by using the asymptotic results of Hinkley [Hinkley, D.V., 1971, Inference in two-phase regression. Journal of the American Statistical Association, 66, 736–743] and Feder [Feder, P.I., 1975a, On asymptotic distribution theory in segmented regression problems – identified case. The Annals of Statistics, 3, 49–83; Feder, P.I., 1975b, The log likelihood ratio in segmented regression. The Annals of Statistics, 3, 84–97]. Via simulations, this paper empirically examines the small-sample behavior of these asymptotic results, studies how the two fitting methods, the grid search and Hudson's algorithm, affect these inferential procedures, and assesses the robustness of the asymptotic inferential procedures.
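Lerman-style grid search for a single joinpoint can be sketched as follows; the toy data, grid, and function names are our assumptions, not the Joinpoint implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 200)
# True model: slope changes from 1.0 to -0.5 at the joinpoint tau = 5.
y = x + np.where(x > 5, -1.5 * (x - 5), 0.0) + 0.2 * rng.standard_normal(x.size)

def sse_at(tau):
    """Residual sum of squares of the broken-line fit with joinpoint tau."""
    X = np.column_stack([np.ones_like(x), x, np.clip(x - tau, 0, None)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

grid = np.linspace(1, 9, 161)  # candidate joinpoints, step 0.05
tau_hat = grid[np.argmin([sse_at(t) for t in grid])]
print(tau_hat)  # close to the true joinpoint 5
```

For each candidate joinpoint the model is linear in the remaining parameters, so ordinary least squares applies; the grid search simply minimizes the profiled SSE. Hudson's algorithm instead solves for the join points continuously rather than restricting them to a grid.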

16.
Abstract

Elsevier's Strategic Partners Program (SPP) convened a forum prior to the American Library Association's 2003 midwinter conference in Philadelphia, PA. Two presentations from that forum are published here to indicate the enormous challenges, initiatives, and changes that face all of us in the serials, publishing, and information environments.

17.
Abstract

This installment of “They Might Be Giants” highlights a small online/print zine from New York called Pindeldyboz. Described by its editor Whitney Pastorek as a lit mag, Pindeldyboz publishes new fiction every week or so at the Web site but then publishes longer fiction once a year in the annual print zine of the same name. While this may indicate a split personality on the part of the editor/publisher, the goals of both the print and online versions are the same: to publish good fiction.

18.
ABSTRACT

Background: Instrumental variables (IVs) have become much easier to find in the “Big Data” era, which has increased the number of applications of the two-stage least squares (TSLS) model. With the increased availability of IVs, the possibility that these IVs are weak has also increased. Prior work has suggested a ‘rule of thumb’ that IVs with a first-stage F statistic of at least ten will avoid a relative bias in point estimates greater than 10%. We investigated whether this threshold is also an efficient guarantee of low false rejection rates of the null hypothesis test in TSLS applications with many IVs.

Objective: To test how the ‘rule of thumb’ for weak instruments performs in predicting low false rejection rates in the TSLS model when the number of IVs is large.

Method: We used a Monte Carlo approach to create 28 original data sets for different models with the number of IVs varying from 3 to 30. For each model, we generated 2000 observations for each iteration and conducted 50,000 iterations to reach convergence in rejection rates. The point estimate was set to 0, and probabilities of rejecting this hypothesis were recorded for each model as a measurement of false rejection rate. The relationship between the endogenous variable and IVs was carefully adjusted to let the F statistics for the first stage model equal ten, thus simulating the ‘rule of thumb.’

Results: We found that the false rejection rates (Type I errors) increased as the number of IVs in the TSLS model increased while the first-stage F statistic was held equal to 10. The false rejection rate exceeded 10% when the TSLS model had 24 IVs and exceeded 15% when it had 30 IVs.

Conclusion: When more instrumental variables were applied in the model, the ‘rule of thumb’ was no longer an efficient guarantee of good performance in hypothesis testing. A more restrictive margin for the F statistic is recommended to replace the ‘rule of thumb,’ especially when the number of instrumental variables is large.
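The first-stage F statistic at the heart of the ‘rule of thumb’ is a standard joint-significance test of the instruments in the regression of the endogenous regressor on the IVs. This is a sketch (function name and simulated values are ours, not from the paper):

```python
import numpy as np

def first_stage_F(endog, instruments):
    """F statistic for the joint significance of the instruments in the
    first-stage regression of the endogenous regressor on the IVs."""
    n = endog.size
    X0 = np.ones((n, 1))                         # restricted: intercept only
    X1 = np.column_stack([X0, instruments])      # unrestricted: + instruments
    r0 = endog - X0 @ np.linalg.lstsq(X0, endog, rcond=None)[0]
    r1 = endog - X1 @ np.linalg.lstsq(X1, endog, rcond=None)[0]
    rss0, rss1 = np.sum(r0 ** 2), np.sum(r1 ** 2)
    q = instruments.shape[1]
    return ((rss0 - rss1) / q) / (rss1 / (n - X1.shape[1]))

rng = np.random.default_rng(4)
Zs = rng.standard_normal((2000, 3))              # three instruments
x_endog = Zs @ np.array([0.3, 0.3, 0.3]) + rng.standard_normal(2000)
print(first_stage_F(x_endog, Zs) > 10)  # True for these fairly strong IVs
```

The paper's point is that F = 10 calibrated for bias does not control test size once the instrument count grows, so passing this check alone is not sufficient with many IVs.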

19.
Eric M. Hanson 《Serials Review》2017,43(3-4):278-281
ABSTRACT

In this interview from March 2017, Kay Teel, metadata librarian for serials and arts resources at the Stanford University Libraries, discusses the issues involved in providing access to serials through an institutional repository.

20.
Conclusion. Presto is a software package that automatically generates FORTRAN code corresponding to approximation procedures for the solutions of stochastic differential systems. At present it addresses only a few needs, but a diversification of its users and its simple internal structure could easily allow it to be developed into a more ambitious system. Finally, let us mention that Presto is an INRIA product, free for academic institutions, universities, etc. that already have MAPLE and X-Windows licences. Presto can be run on any UNIX station under an X-Windows environment.
