Similar documents (20 results)
1.
A statistical software package is a collaborative effort between a program's authors and users. When statistical analysis took place exclusively on mainframe computers, the entire statistical community was served by some three to six major packages, which helped to ensure that program errors would be quickly uncovered and corrected. The current trend toward performing statistical analysis on microcomputers has resulted in an explosion of software of varying quality, with more than 200 packages for the IBM PC alone. Since all of these programs are competing for the same base of knowledgeable users, the number of sophisticated users per package is dramatically smaller than for mainframe packages; the net result is that problems in any particular package are more likely to go unnoticed and uncorrected. For example, the most widely used shareware package contains major errors that should cause it to be rejected out of hand, and three best-selling packages analyze unbalanced two-factor experiments using an approximate technique originally developed for hand calculation. Several strategies are offered to help author and user reveal any problems that might be present in their software.

2.
This paper presents a new statistical method and accompanying software for the evaluation of order constrained hypotheses in structural equation models (SEM). The method is based on a large sample approximation of the Bayes factor using a prior with a data-based correlational structure. An efficient algorithm is written into an R package to ensure fast computation. The package, referred to as Bain, is easy to use for applied researchers. Two classical examples from the SEM literature are used to illustrate the methodology and software.
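The abstract does not spell out Bain's computation, but the general idea behind order-constrained Bayes factors can be sketched with the encompassing-prior approach: the Bayes factor of a constrained hypothesis against the unconstrained model is the posterior probability of the constraint divided by its prior probability. The sketch below is illustrative Python, not Bain's large-sample algorithm, and the posterior means and standard deviations are hypothetical.

```python
# Illustrative sketch of an order-constrained Bayes factor via the
# encompassing-prior idea (NOT Bain's algorithm): BF of "mu1 > mu2"
# against the unconstrained model = P(constraint | data) / P(constraint).
# The approximate-normal posterior parameters below are hypothetical.
import random

random.seed(0)
N = 100_000
# Hypothetical posteriors for two group means: N(0.5, 0.2) and N(0.1, 0.2).
post = sum(random.gauss(0.5, 0.2) > random.gauss(0.1, 0.2)
           for _ in range(N)) / N
prior = 0.5            # exchangeable prior: P(mu1 > mu2) = 1/2
bf = post / prior      # values > 1 favour the order constraint
```

Here the posterior strongly supports mu1 > mu2, so the Bayes factor comes out close to its maximum of 1/prior = 2.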

3.
Two useful statistical methods for generating a latent variable are described and extended to incorporate polytomous data and additional covariates. Item response analysis is not well-known outside its area of application, mainly because the procedures to fit the models are computer intensive and not routinely available within general statistical software packages. The linear score technique is less computer intensive, straightforward to implement and has been proposed as a good approximation to item response analysis. Both methods have been implemented in the standard statistical software package GLIM 4.0, and are compared to determine their effectiveness.

4.
This paper surveys commercially available MS-DOS and Microsoft Windows based microcomputer software for survival analysis, especially for Cox proportional hazards regression and parametric survival models. Emphasis is given to functionality, documentation, generality, and flexibility of software. A discussion of the need for software integration is given, which leads to the conclusion that survival analysis software not closely tied to a well-designed package will not meet an analyst's general needs. Some standalone programs are good tools for teaching the theory of some survival analysis procedures, but they may not teach the student good data analysis techniques such as critically examining regression assumptions. We contrast typical software with a general, integrated, modeling framework that is available with S-PLUS.

5.
In this paper we introduce a flexible extension of the Gumbel distribution called the odd log-logistic exponentiated Gumbel distribution. The new model is implemented in the GAMLSS package of the R software, and a brief tutorial on how to use this package is presented throughout the paper. We provide a comprehensive treatment of its general mathematical properties. Further, we propose a new extended regression model considering four regression structures. We discuss estimation methods based on censored and uncensored data. Two simulation studies are presented, and four real data sets are used to illustrate the usefulness of the new model.
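For readers unfamiliar with the construction, one common way to build such a family is to pass an exponentiated Gumbel baseline G through the odd log-logistic transform F(x) = G(x)^a / [G(x)^a + (1 - G(x))^a]. The Python sketch below uses this generic construction with hypothetical parameter names; the paper's exact parameterization may differ.

```python
import math

def olleg_cdf(x, mu=0.0, sigma=1.0, a=1.5, b=2.0):
    """One plausible CDF for an odd log-logistic exponentiated Gumbel.

    Assumed construction (may differ from the paper): Gumbel baseline
    Lambda, exponentiated with shape b, then passed through the odd
    log-logistic transform with shape a.
    """
    lam = math.exp(-math.exp(-(x - mu) / sigma))  # Gumbel CDF
    g = lam ** b                                   # exponentiated Gumbel
    return g ** a / (g ** a + (1.0 - g) ** a)      # odd log-logistic

# With a = b = 1 both transforms are the identity, recovering the Gumbel CDF.
grid = [olleg_cdf(x / 10.0) for x in range(-60, 61)]
```

The extra shape parameters a and b control skewness and tail weight beyond what the Gumbel location and scale allow.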

6.
We describe and analyze a longitudinal diffusion tensor imaging (DTI) study relating changes in the microstructure of intracranial white matter tracts to cognitive disability in multiple sclerosis patients. In this application the scalar outcome and the functional exposure are measured longitudinally. This data structure is new and raises challenges that cannot be addressed with current methods and software. To analyze the data, we introduce a penalized functional regression model and inferential tools designed specifically for these emerging types of data. Our proposed model extends the Generalized Linear Mixed Model by adding functional predictors; this method is computationally feasible and is applicable when the functional predictors are measured densely, sparsely or with error. An online appendix compares two implementations, one likelihood-based and the other Bayesian, and provides the software used in simulations; the likelihood-based implementation is included as the lpfr() function in the R package refund available on CRAN.

7.
This paper presents estimates for the parameters included in the Block and Basu bivariate lifetime distributions in the presence of covariates and cure fraction, applied to analyze survival data when some individuals may never experience the event of interest and two lifetimes are associated with each unit. A Bayesian procedure is used to obtain point and interval estimates for the unknown parameters. Posterior summaries of interest are obtained using standard Markov chain Monte Carlo methods in the rjags package for R. An illustration of the proposed methodology is given for a Diabetic Retinopathy Study data set.

8.
Eight statistical software packages for general use by non-statisticians are reviewed. The packages are GraphPad Prism, InStat, ISP, NCSS, SigmaStat, Statistix, Statmost, and Winks. Summary tables of statistical capabilities and “usability” features are followed by discussions of each package. Discussions include system requirements, data import capabilities, statistical capabilities, and user interface. Recommendations, based on user needs and sophistication, are presented following the reviews.

9.
In many probability and mathematical statistics courses the probability generating function (PGF) is typically overlooked in favor of the more widely used moment generating function. However, for certain types of random variables, the PGF may be more appealing. For example, sums of independent, non-negative, integer-valued random variables with finite support are easily studied via the PGF. In particular, the exact distribution of the sum can easily be calculated. Several illustrative classroom examples, with varying degrees of difficulty, are presented. All of the examples have been implemented using the R statistical software package.
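As a concrete instance of the approach this abstract describes (sketched here in Python rather than R, with a two-dice example of my own): the PGF of a finite-support variable is a polynomial whose coefficient on t^k is P(X = k), so the PGF of a sum of independent variables is the product of the PGFs, i.e. the convolution of the coefficient vectors.

```python
def pgf_product(p, q):
    """Convolve two PGF coefficient vectors: the exact PMF of the sum
    of the two independent random variables."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

die = [0.0] + [1 / 6] * 6          # coefficient on t^k is P(X = k)
two_dice = pgf_product(die, die)   # exact PMF of the sum of two fair dice
# two_dice[7] is P(sum = 7) = 6/36, the mode of the distribution.
```

The same product/convolution extends to sums of any number of finite-support variables, which is exactly what makes the PGF convenient in the classroom examples described.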

10.
This article offers a review of three software packages that estimate directed acyclic graphs (DAGs) from data. The three packages, MIM, Tetrad and WinMine, can help researchers discover underlying causal structure. Although each package uses a different algorithm, the results are to some extent similar. All three packages are free and easy to use. They are likely to be of interest to researchers who do not have strong theory regarding the causal structure in their data. DAG modeling is a powerful analytic tool to consider in conjunction with, or in place of, path analysis, structural equation modeling, and other statistical techniques.

11.
This article focuses on important aspects of microcomputer statistical software. These include documentation, control language, data entry, data listing and editing, data manipulation, graphics, statistical procedures, output, customizing, system environment, and support. The primary concern is that a package encourage good statistical practice.

12.
We demonstrate the use of our R package, gammSlice, for Bayesian fitting and inference in generalised additive mixed model analysis. This class of models includes generalised linear mixed models and generalised additive models as special cases. Accurate Bayesian inference is achievable via sufficiently large Markov chain Monte Carlo (MCMC) samples. Slice sampling is a key component of the MCMC scheme. Comparisons with existing generalised additive mixed model software show that gammSlice offers improved inferential accuracy, albeit at the cost of longer computational time.
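Slice sampling itself is simple to sketch. The following minimal univariate sampler uses the standard stepping-out and shrinkage scheme (Neal, 2003); it is illustrative Python only, not gammSlice's internals, and the standard-normal target is my own test case.

```python
import math
import random

def slice_sample(logf, x0, n, w=1.0):
    """Draw n samples from the density proportional to exp(logf(x)),
    using stepping-out and shrinkage."""
    draws, x = [], x0
    for _ in range(n):
        # Vertical step: draw the slice height under the density at x.
        logy = logf(x) + math.log(random.random())
        # Stepping out: grow an interval of width w until it brackets
        # the slice {x' : logf(x') > logy}.
        left = x - w * random.random()
        right = left + w
        while logf(left) > logy:
            left -= w
        while logf(right) > logy:
            right += w
        # Shrinkage: propose uniformly; shrink the interval on rejection.
        while True:
            x_new = random.uniform(left, right)
            if logf(x_new) > logy:
                x = x_new
                break
            if x_new < x:
                left = x_new
            else:
                right = x_new
        draws.append(x)
    return draws

random.seed(1)
draws = slice_sample(lambda x: -0.5 * x * x, 0.0, 20000)  # standard normal
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

A practical appeal of slice sampling, relevant to its use inside gammSlice, is that it needs no tuning of a proposal distribution, only the interval width w.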

13.
This article develops and investigates a confidence interval and hypothesis testing procedure for a population proportion based on a ranked set sample (RSS). The inference is exact, in the sense that it is based on the exact distribution of the total number of successes observed in the RSS. Furthermore, this distribution can be readily computed with the well-known and freely available R statistical software package. A data example that illustrates the methodology is presented. In addition, the properties of the inference procedures are compared with their simple random sample (SRS) counterparts. With regard to expected lengths of confidence intervals and the power of tests, the RSS inference procedures are superior to the SRS methods.
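The exact distribution referred to can be sketched for one concrete case (an assumption on my part about the setup: balanced RSS with set size k, one cycle, binary data; sketched in Python rather than R). The i-th judgment order statistic of k iid Bernoulli(p) draws equals 1 exactly when at least k - i + 1 of the k draws are successes, so the total number of successes is a sum of independent non-identical Bernoullis, i.e. Poisson-binomial.

```python
from math import comb

def rss_success_pmf(p, k):
    """Exact PMF of the total successes in one cycle of balanced RSS
    with set size k and binary (Bernoulli(p)) data."""
    # Success probability of the i-th judgment order statistic,
    # i = 1..k: P(at least k - i + 1 successes among k draws).
    ps = [sum(comb(k, j) * p ** j * (1 - p) ** (k - j)
              for j in range(k - i + 1, k + 1))
          for i in range(1, k + 1)]
    pmf = [1.0]                       # distribution of the running total
    for pi in ps:                     # convolve one Bernoulli at a time
        new = [0.0] * (len(pmf) + 1)
        for s, prob in enumerate(pmf):
            new[s] += (1 - pi) * prob
            new[s + 1] += pi * prob
        pmf = new
    return pmf

pmf = rss_success_pmf(0.3, 4)
# Sanity check: the order statistics are a rearrangement of the k draws,
# so the mean of the total must still be k * p.
mean = sum(s * q for s, q in enumerate(pmf))
```

Exact tail probabilities from such a PMF are what make the confidence intervals and tests in the article exact rather than normal-approximation based.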

14.
Bayesian hierarchical spatio-temporal models are becoming increasingly important due to the increasing availability of space-time data in various domains. In this paper we develop a user-friendly R package, spTDyn, for spatio-temporal modelling. It can be used to fit models with spatially varying and temporally dynamic coefficients. The former is used for modelling the spatially varying impact of explanatory variables on the response caused by spatial misalignment. This issue can arise when the covariates only vary over time, or when they are measured over a grid and hence do not match the locations of the response point-level data. The latter is to examine the temporally varying impact of explanatory variables in space-time data due, for example, to seasonality or other time-varying effects. The spTDyn package uses Markov chain Monte Carlo sampling written in C, which makes computations highly efficient, and the interface is written in R making these sophisticated modelling techniques easily accessible to statistical analysts. The models and software, and their advantages, are illustrated using temperature and ozone space-time data.

15.
Previous research has proposed a design-based analysis procedure for experiments that are embedded in complex sampling designs, in which the ultimate sampling units of an on-going sample survey are randomized over different treatments according to completely randomized designs or randomized block designs. Design-based Wald and t-statistics are applied to test whether sample means that are observed under various survey implementations are significantly different. This approach is generalized to experimental designs in which clusters of sampling units are randomized over the different treatments. Furthermore, test statistics are derived to test differences between ratios of two sample estimates that are observed under alternative survey implementations. The methods are illustrated with a simulation study and real-life applications of experiments embedded in the Dutch Labour Force Survey. The functionality of a software package that was developed to conduct these analyses is described.

16.
The bootstrap is a powerful non-parametric statistical technique for making probability-based inferences about a population parameter. Through a Monte Carlo resampling simulation, bootstrapping empirically approximates a statistic's entire sampling distribution. From this simulated distribution, inferences can be made about a population parameter. Assumptions about normality are not required. In general, despite its power, bootstrapping has been used relatively infrequently in social science research, and this is particularly true for business research. This under-utilization is likely due to a combination of a general lack of understanding of the bootstrap technique and the difficulty with which it has traditionally been implemented. Researchers in the various fields of business should be familiar with this powerful statistical technique. The purpose of this paper is to explain how this technique works using Lotus 1-2-3, a software package with which business people are very familiar.
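The paper's spreadsheet implementation is not reproduced here, but the same percentile-bootstrap logic can be sketched in a few lines of Python (the data values are made up for illustration).

```python
import random

def percentile_ci(data, stat, reps=5000, alpha=0.05):
    """Percentile bootstrap CI: resample with replacement many times,
    then take the alpha/2 and 1 - alpha/2 quantiles of the simulated
    distribution of the statistic."""
    n = len(data)
    boots = sorted(stat([random.choice(data) for _ in range(n)])
                   for _ in range(reps))
    return (boots[int(reps * alpha / 2)],
            boots[int(reps * (1 - alpha / 2)) - 1])

random.seed(7)
sample = [12.1, 9.8, 11.4, 10.2, 13.0, 9.5, 10.9, 11.7, 10.4, 12.3]
lo, hi = percentile_ci(sample, lambda d: sum(d) / len(d))
```

Because the interval is read directly off the resampled distribution, no normality assumption enters; the same function works for medians, correlations, or any other statistic passed as `stat`.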

17.
The paper is devoted to a new randomization method that yields unbiased adjustments of p-values for linear regression model predictors by incorporating the number of potential explanatory variables, their variance–covariance matrix and its uncertainty, based on the number of observations. This adjustment helps control type I errors in scientific studies, reducing the number of publications that report spurious relationships as genuine. Comparative analysis with existing methods such as the Bonferroni correction and the Shehata and White adjustments shows their shortcomings, especially when the number of observations and the number of potential explanatory variables are approximately equal. The proposed method is easy to program and can be integrated into any statistical software package.
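The proposed randomization adjustment itself is not given in the abstract; as the baseline it is compared against, the Bonferroni correction is easy to state: multiply each p-value by the number of tests m and cap at 1.

```python
def bonferroni(pvals):
    """Bonferroni-adjusted p-values: p * m, capped at 1, where m is the
    number of tests. Controls the family-wise error rate, but is known
    to be conservative when tests are correlated."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

adjusted = bonferroni([0.001, 0.02, 0.04, 0.30])
```

The conservatism visible here, every p-value inflated by the full factor m regardless of the predictors' correlation structure, is exactly the kind of imperfection the paper's covariance-aware adjustment aims to avoid.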

18.
Data from birds ringed as chicks and recaptured during subsequent breeding seasons provide information on avian natal dispersal distances. However, national patterns of ring reports are influenced by recapture rates as well as by dispersal rates. While an extensive methodology has been developed to study survival rates using models that correct for recapture rates, the same is not true for dispersal. Here, we present such a method, showing how corrections for spatial heterogeneity in recapture rate can be built into estimates of dispersal rates if detailed atlas data and ringing totals can be combined with extensive data on birds ringed as chicks and recaptured as breeding adults. We show how the method can be implemented in the software package SURVIV (White, 1992).

19.
Multiplicities are ubiquitous. They threaten every inference in every aspect of life. Despite the focus in statistics on multiplicities, statisticians underestimate their importance. One reason is that the focus is on methodology for known multiplicities. Silent multiplicities are much more important and they are insidious. Both frequentists and Bayesians have important contributions to make regarding problems of multiplicities. But neither group has an inside track. Frequentists and Bayesians working together is a promising way of making inroads into this knotty set of problems. Two experiments with identical results may well lead to very different statistical conclusions. So we will never be able to use a software package with default settings to resolve all problems of multiplicities. Every problem has unique aspects. And all problems require understanding the substantive area of application.

20.
In this paper, we derive an exact formula for the covariance of two innovations computed from a spatial Gibbs point process and suggest a fast method for estimating this covariance. We show how this methodology can be used to estimate the asymptotic covariance matrix of the maximum pseudo-likelihood estimator of the parameters of a spatial Gibbs point process model. This allows us to construct asymptotic confidence intervals for the parameters. We illustrate the efficiency of our procedure in a simulation study for several classical parametric models. The procedure is implemented in the statistical software R and is included in spatstat, an R package for analyzing spatial point patterns.
