Similar Documents
20 similar documents found (search time: 15 ms)
1.
Major factors to consider in selecting a microcomputer SQC package are discussed, and a selection of low-cost, user-friendly software is described. The functions of each package are considered in the context of purchase cost and reproduction arrangements. The focus is on the classroom, but the software is also suitable for use in industry and commerce.

2.
A statistical software package is a collaborative effort between a program's authors and users. When statistical analysis took place exclusively on mainframe computers, the entire statistical community was served by some three to six major packages, which helped to ensure that program errors would be quickly uncovered and corrected. The current trend toward performing statistical analysis on microcomputers has resulted in an explosion of software of varying quality, with more than 200 packages for the IBM PC alone. Since all of these programs compete for the same base of knowledgeable users, the number of sophisticated users per package is dramatically smaller than for mainframe packages; the net result is that problems in any particular package are more likely to go unnoticed and uncorrected. For example, the most widely used shareware package contains major errors that should cause it to be rejected out of hand, and three best-selling packages analyze unbalanced two-factor experiments using an approximate technique originally developed for hand calculation. Several strategies are offered to help authors and users reveal any problems that might be present in their software.

3.
Structural equation models (SEM) have been extensively used in behavioral, social, and psychological research to model relations between latent variables and observations. Most software packages for fitting SEMs rely on frequentist methods. Traditional models and software are not appropriate for the analysis of dependent observations such as time-series data. In this study, a structural equation model with a time-series feature is introduced. A Bayesian approach is used to fit the model with the aid of Markov chain Monte Carlo methods. Bayesian inference and prediction with the proposed time-series structural equation model can also reveal unobserved relationships among the observations. The approach is successfully applied to real Asian, American, and European stock return data.

4.
In this paper, we derive an exact formula for the covariance of two innovations computed from a spatial Gibbs point process and suggest a fast method for estimating this covariance. We show how this methodology can be used to estimate the asymptotic covariance matrix of the maximum pseudo-likelihood estimator of the parameters of a spatial Gibbs point process model. This allows us to construct asymptotic confidence intervals for the parameters. We illustrate the efficiency of our procedure in a simulation study for several classical parametric models. The procedure is implemented in the statistical software R and is included in spatstat, an R package for analyzing spatial point patterns.

5.
The paper is devoted to a new randomization method that yields unbiased p-value adjustments for linear regression model predictors by incorporating the number of potential explanatory variables, their variance-covariance matrix, and its uncertainty, based on the number of observations. This adjustment helps control type I errors in scientific studies, substantially decreasing the number of publications that report spurious relations as authentic. A comparative analysis with existing methods such as the Bonferroni correction and the Shehata and White adjustments shows their shortcomings, especially when the number of observations and the number of potential explanatory variables are approximately equal. The proposed method is easy to program and can be integrated into any statistical software package.
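The Bonferroni baseline that this paper compares against is straightforward to sketch. The p-values below are hypothetical, and this is the classical correction, not the paper's randomization method:

```python
import numpy as np

def bonferroni_adjust(p_values):
    """Classical Bonferroni adjustment: multiply by the number of tests, cap at 1."""
    p = np.asarray(p_values, dtype=float)
    return np.minimum(p * p.size, 1.0)

raw = [0.004, 0.02, 0.03, 0.40]   # hypothetical raw p-values from four tests
adjusted = bonferroni_adjust(raw)
```

Its conservatism when many correlated predictors are screened is exactly the imperfection the proposed method targets.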

6.
We propose a new procedure for combining multiple tests in samples of right-censored observations. The new method is based on multiple constrained censored empirical likelihood, where the constraints are formulated as linear functionals of the cumulative hazard functions. We prove a version of Wilks' theorem for the multiple constrained censored empirical likelihood ratio, which provides a simple reference distribution for the test statistic of our proposed method. A useful application of the proposed method is, for example, examining the survival experience of different populations by combining different weighted log-rank tests. Real data examples are given using the log-rank and Gehan-Wilcoxon tests. In a simulation study of two-sample survival data, we compare the proposed method of combining tests to previously developed procedures. The results demonstrate that, in addition to its computational simplicity, the combined test performs comparably to, and in some situations more reliably than, previously developed procedures. Statistical software is available in the R package 'emplik'.
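A rough intuition for the Wilks-type calibration: if k standardized test statistics were independent standard normal under the null, their quadratic combination would follow a chi-square distribution with k degrees of freedom. The sketch below assumes independence, which the paper's constrained empirical likelihood does not require; the z-values are illustrative only:

```python
import numpy as np
from scipy import stats

# hypothetical standardized statistics (e.g. log-rank and Gehan-Wilcoxon
# z-scores); real statistics are correlated, which this sketch ignores
z = np.array([1.8, 2.1])
combined = float(z @ z)                              # quadratic combination
p_value = float(stats.chi2.sf(combined, df=len(z)))  # chi-square(2) reference
```

The paper's contribution is proving that the empirical likelihood ratio gets this chi-square reference without the independence assumption.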

7.
This paper surveys commercially available MS-DOS and Microsoft Windows microcomputer software for survival analysis, especially for Cox proportional hazards regression and parametric survival models. Emphasis is given to the functionality, documentation, generality, and flexibility of the software. A discussion of the need for software integration is given, which leads to the conclusion that survival analysis software not closely tied to a well-designed package will not meet an analyst's general needs. Some standalone programs are good tools for teaching the theory of some survival analysis procedures, but they may not teach the student good data analysis techniques, such as critically examining regression assumptions. We contrast typical software with the general, integrated modeling framework available in S-PLUS.

8.
Two useful statistical methods for generating a latent variable are described and extended to incorporate polytomous data and additional covariates. Item response analysis is not well known outside its area of application, mainly because the procedures to fit the models are computer intensive and not routinely available within general statistical software packages. The linear score technique is less computer intensive, straightforward to implement, and has been proposed as a good approximation to item response analysis. Both methods have been implemented in the standard statistical software package GLIM 4.0 and are compared to determine their effectiveness.
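The linear score technique can be illustrated in miniature: the latent trait is approximated by a (possibly weighted) sum of item responses. The response matrix below is hypothetical, and the unweighted sum shown is the simplest variant:

```python
import numpy as np

# hypothetical binary item-response matrix: 5 subjects by 4 items
responses = np.array([
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 0, 1],
])
# linear score: sum of item responses per subject, a simple proxy for
# the latent trait (item weights could be introduced for polytomous data)
scores = responses.sum(axis=1)
```

Item response analysis instead estimates subject and item parameters jointly, which is what makes it computer intensive.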

9.
We describe and analyze a longitudinal diffusion tensor imaging (DTI) study relating changes in the microstructure of intracranial white matter tracts to cognitive disability in multiple sclerosis patients. In this application, the scalar outcome and the functional exposure are measured longitudinally. This data structure is new and raises challenges that cannot be addressed with current methods and software. To analyze the data, we introduce a penalized functional regression model and inferential tools designed specifically for these emerging types of data. Our proposed model extends the Generalized Linear Mixed Model by adding functional predictors; the method is computationally feasible and is applicable when the functional predictors are measured densely, sparsely, or with error. An online appendix compares two implementations, one likelihood-based and the other Bayesian, and provides the software used in the simulations; the likelihood-based implementation is included as the lpfr() function in the R package refund, available on CRAN.
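The core computational idea of penalized functional regression (expand the functional predictor on a basis, then estimate the coefficient function with a quadratic penalty) can be sketched as follows. This is a simplified ridge-type illustration on synthetic data, not the lpfr() implementation; the cosine basis and penalty value are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
grid = np.linspace(0.0, 1.0, 100)
# synthetic densely observed functional predictors X_i(t)
X = np.array([np.sin(2 * np.pi * grid * rng.uniform(0.5, 1.5)) for _ in range(n)])
beta_true = np.exp(-((grid - 0.5) ** 2) / 0.02)   # smooth coefficient function
y = X @ beta_true / grid.size + rng.normal(0.0, 0.05, n)

# represent beta(t) on a small cosine basis and fit with a ridge penalty
K = 8
basis = np.array([np.cos(np.pi * k * grid) for k in range(K)]).T   # (100, K)
Z = X @ basis / grid.size                 # reduced design matrix
lam = 1e-3                                # smoothing parameter (fixed here)
coef = np.linalg.solve(Z.T @ Z + lam * np.eye(K), Z.T @ y)
beta_hat = basis @ coef                   # estimated coefficient function
```

In the paper the smoothing parameter is estimated within the mixed-model framework rather than fixed by hand.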

10.
Data from birds ringed as chicks and recaptured during subsequent breeding seasons provide information on avian natal dispersal distances. However, national patterns of ring reports are influenced by recapture rates as well as by dispersal rates. While an extensive methodology has been developed to study survival rates using models that correct for recapture rates, the same is not true for dispersal. Here, we present such a method, showing how corrections for spatial heterogeneity in recapture rate can be built into estimates of dispersal rates if detailed atlas data and ringing totals can be combined with extensive data on birds ringed as chicks and recaptured as breeding adults. We show how the method can be implemented in the software package SURVIV (White, 1992).

11.
Estimation of the lifetime distribution of industrial components and systems yields very important information for manufacturers and consumers. However, obtaining reliability data is time consuming and costly. In this context, degradation tests are a useful alternative to lifetime and accelerated life tests in reliability studies. The approximate method is one of the most widely used techniques for degradation data analysis; it is simple to understand and easy to implement numerically in any statistical software package. This paper uses time-series techniques to propose a modified approximate method (MAM). The MAM improves on the standard method in two respects: (1) it treats previous observations in the degradation path as a Markov process for future prediction, and (2) it does not require a parametric form for the degradation path. Characteristics of interest, such as the mean or median time to failure and percentiles, are obtained using the modified method. A simulation study shows the improved properties of the modified method over the standard one. Both methods are also used to estimate the failure-time distribution of a fatigue-crack-growth data set.
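The standard approximate method that the MAM modifies can be sketched as follows: fit each unit's degradation path, extrapolate the fit to a failure threshold, and treat the crossing times as pseudo failure times. The threshold and paths below are hypothetical, and the linear fit is the parametric step the MAM removes:

```python
import numpy as np

threshold = 10.0                     # hypothetical critical degradation level
times = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
# hypothetical, roughly linear degradation paths for three units
paths = np.array([
    [0.0, 2.1, 3.9, 6.2, 8.0],
    [0.0, 1.5, 3.1, 4.4, 6.1],
    [0.0, 2.6, 5.0, 7.4, 9.9],
])

pseudo_failures = []
for y in paths:
    slope, intercept = np.polyfit(times, y, 1)               # per-unit linear fit
    pseudo_failures.append((threshold - intercept) / slope)  # crossing time
pseudo_failures = np.array(pseudo_failures)
median_ttf = float(np.median(pseudo_failures))   # pseudo median time to failure
```

The pseudo failure times are then treated as an ordinary lifetime sample for estimating the failure-time distribution.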

12.
Panel datasets have been increasingly used in economics to analyze complex economic phenomena. Panel data form a two-dimensional array that combines cross-sectional and time-series data. By constructing a panel data matrix, clustering methods can be applied to panel data analysis; this addresses the heterogeneity of the dependent variable before the analysis. Clustering is a widely used statistical tool for determining subsets of a given dataset. In this article, a mixed panel dataset is clustered by agglomerative hierarchical algorithms based on Gower's distance and by the k-prototypes algorithm. The performance of these algorithms is studied on panel data with mixed numerical and categorical features, and their effectiveness is compared using cluster accuracy. An experimental analysis is illustrated on a real dataset using Stata and R.
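A minimal sketch of agglomerative clustering of mixed data with Gower's distance, assuming one numeric and one categorical feature per unit. The data are hypothetical, and the article's actual analysis uses Stata and R routines:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# hypothetical mixed features for four panel units
num = np.array([1.0, 1.2, 5.0, 5.3])        # numeric feature
cat = np.array(["a", "a", "b", "b"])        # categorical feature

n = len(num)
num_range = num.max() - num.min()
D = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        d_num = abs(num[i] - num[j]) / num_range   # range-scaled numeric distance
        d_cat = 0.0 if cat[i] == cat[j] else 1.0   # simple matching for categories
        D[i, j] = (d_num + d_cat) / 2.0            # Gower: average over features

link = linkage(squareform(D), method="average")    # agglomerative clustering
labels = fcluster(link, t=2, criterion="maxclust")
```

Because Gower's distance handles each feature type on its own scale, no dummy coding of the categorical variable is needed.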

13.
A comparative study is made of three tests developed by James (1951), Welch (1951), and Brown & Forsythe (1974). James presented two methods, of which only one is considered in this paper. It is shown that this method gives better control over the size than the other two tests. None of the methods is uniformly more powerful than the others. In some cases the tests of James and Welch reject a false null hypothesis more often than the test of Brown & Forsythe, but there are also situations in which the reverse holds.

We conclude that, for implementation in a statistical software package, the very complicated test of James is the most attractive. A practical disadvantage of this method can be overcome by a minor modification.
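Welch's (1951) k-sample test, the middle competitor above, is simple enough to implement directly from its published formula. The sketch below uses synthetic groups with unequal variances; the implementation and data are ours, not the paper's:

```python
import numpy as np
from scipy import stats

def welch_anova(groups):
    """Welch's (1951) k-sample test of equal means under unequal variances."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                                  # precision weights
    W = w.sum()
    grand = (w * m).sum() / W                  # weighted grand mean
    A = (w * (m - grand) ** 2).sum() / (k - 1)
    lam = (((1.0 - w / W) ** 2) / (n - 1)).sum()
    F = A / (1.0 + 2.0 * (k - 2) / (k**2 - 1) * lam)
    df2 = (k**2 - 1) / (3.0 * lam)
    return F, float(stats.f.sf(F, k - 1, df2))

rng = np.random.default_rng(2)
groups = [rng.normal(0, 1, 30), rng.normal(0, 3, 30), rng.normal(2, 1, 30)]
F, p = welch_anova(groups)    # the shifted third group should be detected
```

The James test replaces the F reference distribution with a series expansion, which is what makes it more complicated to program.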

14.
This article focuses on important aspects of microcomputer statistical software. These include documentation, control language, data entry, data listing and editing, data manipulation, graphics, statistical procedures, output, customizing, system environment, and support. The primary concern is that a package should encourage good statistical practice.

15.
In this paper we introduce a flexible extension of the Gumbel distribution called the odd log-logistic exponentiated Gumbel distribution. The new model is implemented in the GAMLSS package of the R software, and a brief tutorial on using this package is presented throughout the paper. We provide a comprehensive treatment of its general mathematical properties. Further, we propose a new extended regression model with four regression structures. We discuss estimation methods based on censored and uncensored data. Two simulation studies are presented, and four real data sets are used to illustrate the usefulness of the new model.
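As a point of reference for the extended model, the ordinary two-parameter Gumbel distribution can be fitted by maximum likelihood in a few lines. This fits only the baseline Gumbel on synthetic data, not the odd log-logistic exponentiated extension:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# synthetic data from a Gumbel (maximum) distribution with known parameters
data = stats.gumbel_r.rvs(loc=5.0, scale=2.0, size=500, random_state=rng)
loc_hat, scale_hat = stats.gumbel_r.fit(data)   # maximum-likelihood estimates
```

The extension adds shape parameters on top of this location-scale baseline to gain flexibility in skewness and tail weight.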

16.
The framework for a unified statistical theory of spline regression assuming fixed knots, using the truncated polynomial or "+" function representation, is presented. In particular, a partial ordering of some spline models is introduced to clarify their relationship and to indicate the hypotheses that can be tested by using either standard multiple regression procedures or a little-used conditional test developed by Hotelling (1940). The construction of spline models with polynomial pieces of different degrees is illustrated. A numerical example from a chemical experiment is given using the GLM procedure of the statistical software package SAS (Barr et al. 1976).
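The truncated polynomial ("+" function) representation is easy to demonstrate: a linear spline with one fixed knot is just ordinary least squares on the columns 1, x, and (x - knot)+. The data below are synthetic:

```python
import numpy as np

def plus(x, knot, degree=1):
    """Truncated polynomial basis function (x - knot)_+^degree."""
    return np.where(x > knot, (x - knot) ** degree, 0.0)

rng = np.random.default_rng(4)
x = np.linspace(0.0, 10.0, 200)
# piecewise-linear truth: slope 0.5 before the knot at 5, slope 2.0 after
y = 1.0 + 0.5 * x + 1.5 * plus(x, 5.0) + rng.normal(0.0, 0.2, x.size)

# design matrix of the linear spline with one fixed knot: [1, x, (x-5)_+]
X = np.column_stack([np.ones_like(x), x, plus(x, 5.0)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Testing whether the "+" coefficient is zero is exactly the kind of hypothesis the article's partial ordering organizes: it compares the spline against a single straight line.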

17.
Complex survey sampling is often used to sample a fraction of a large finite population. In general, the survey is conducted so that each unit (e.g. subject) in the sample has a different probability of being selected into the sample. For generalizability of the sample to the population, both the design and the probability of selection must be incorporated in the analysis. In this paper we focus on non-standard regression models for complex survey data. In our motivating example, based on data from the Medical Expenditure Panel Survey, the outcome variable is the subject's total health care expenditures in the year 2002. Previous analyses of medical cost data suggest that the variance is approximately equal to the mean raised to the power 1.5, which is a non-standard variance function. Currently, the regression parameters for this model cannot be easily estimated in standard statistical software packages. We propose a simple two-step method to obtain consistent regression parameter and variance estimates; the proposed method can be implemented within any standard sample survey package. The approach is applicable to complex sample surveys with any number of stages.
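The two-step idea can be sketched under simplifying assumptions (a linear mean model, working variance equal to mean^1.5, and no survey weights or design features): an unweighted first-pass fit supplies fitted means, which then define the weights for a second, weighted fit. The data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
x = rng.uniform(1.0, 4.0, n)
mu = 2.0 + 3.0 * x
# synthetic outcomes whose variance grows as mean**1.5
y = mu + rng.normal(0.0, np.sqrt(mu**1.5), n)

X = np.column_stack([np.ones(n), x])
# step 1: unweighted least squares gives a first-pass mean estimate
b0, *_ = np.linalg.lstsq(X, y, rcond=None)
mu_hat = np.clip(X @ b0, 1e-8, None)
# step 2: refit with weights equal to the inverse working variance
w = 1.0 / mu_hat**1.5
b1 = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
```

In the survey setting, each step is run through the design-aware regression routine of the survey package, which is what makes the method implementable in standard software.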

18.
We demonstrate the use of our R package, gammSlice, for Bayesian fitting and inference in generalised additive mixed model analysis. This class of models includes generalised linear mixed models and generalised additive models as special cases. Accurate Bayesian inference is achievable via sufficiently large Markov chain Monte Carlo (MCMC) samples. Slice sampling is a key component of the MCMC scheme. Comparisons with existing generalised additive mixed model software show that gammSlice offers improved inferential accuracy, albeit at the cost of longer computation time.
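The slice-sampling component can be illustrated with a univariate sampler using the stepping-out and shrinkage procedures of Neal (2003). This is a generic sketch targeting a standard normal density, not the gammSlice implementation:

```python
import numpy as np

def slice_sample(logpdf, x0, n_samples, w=1.0, rng=None):
    """Univariate slice sampler with stepping-out and shrinkage (Neal, 2003)."""
    rng = rng if rng is not None else np.random.default_rng()
    x = x0
    out = np.empty(n_samples)
    for i in range(n_samples):
        # draw the auxiliary level defining the horizontal slice
        logy = logpdf(x) + np.log(rng.uniform())
        # step out an interval of width w until it brackets the slice
        L = x - w * rng.uniform()
        R = L + w
        while logpdf(L) > logy:
            L -= w
        while logpdf(R) > logy:
            R += w
        # sample uniformly, shrinking the interval after each rejection
        while True:
            x_new = rng.uniform(L, R)
            if logpdf(x_new) > logy:
                x = x_new
                break
            if x_new < x:
                L = x_new
            else:
                R = x_new
        out[i] = x
    return out

rng = np.random.default_rng(6)
samples = slice_sample(lambda t: -0.5 * t * t, 0.0, 5000, rng=rng)  # N(0,1) target
```

Because the acceptance step needs only log-density evaluations, slice sampling avoids the step-size tuning of Metropolis updates, which is one reason it suits automated MCMC schemes.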

19.
Empirical likelihood (EL) is an important nonparametric statistical methodology. We develop an R package called el.convex to implement EL for inference about a multivariate mean. The package contains five functions that use different optimization algorithms to pursue the same goal. These functions are based on the theory of convex optimization: Newton, Davidon-Fletcher-Powell, Broyden-Fletcher-Goldfarb-Shanno, conjugate gradient, and damped Newton. We also compare them with the function el.test in the existing R package emplik and discuss their relative advantages and disadvantages.
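For a univariate mean, the EL computation reduces to one-dimensional root finding in the dual (Lagrange) variable, which makes the structure of the convex problem easy to see. The sketch below uses Brent's method rather than any of the five algorithms in el.convex, and the data are synthetic:

```python
import numpy as np
from scipy.optimize import brentq

def el_ratio_stat(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu (Owen's EL)."""
    z = x - mu
    if z.min() >= 0.0 or z.max() <= 0.0:
        return float("inf")        # mu lies outside the convex hull of the data
    eps = 1e-10
    lo = -1.0 / z.max() + eps      # bracket keeping all EL weights positive
    hi = -1.0 / z.min() - eps

    def score(lam):                # dual estimating equation in lambda
        return np.sum(z / (1.0 + lam * z))

    lam = brentq(score, lo, hi)
    return float(2.0 * np.sum(np.log(1.0 + lam * z)))

rng = np.random.default_rng(7)
x = rng.normal(0.0, 1.0, 100)
stat_true = el_ratio_stat(x, 0.0)    # H0 true: statistic should be small
stat_false = el_ratio_stat(x, 1.0)   # H0 false: statistic should be large
```

In the multivariate case the dual variable is a vector, and the five quasi-Newton and gradient algorithms compared in the package are different ways of solving that higher-dimensional convex problem.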

20.