Similar literature
20 similar documents retrieved.
1.
Summary.  The complexities of educational processes and structure and the need for disentangling effects beneath the level of the school or college are discussed. Ordinal response multilevel crossed random-effects models for educational grades are introduced. Weighted random effects for teacher contributions are then added. Estimation methodology is reviewed. Specially written macros for quasi-likelihood with second-order terms are described. The application discusses General Certificate of Education at advanced level grades cross-classified by student and teaching group within a number of institutions. The methods handle teacher effects where several teachers contribute to provision and where each teacher deals with several groups. Some methodological lessons are drawn for sparse data and the use of extra-multinomial variation. Developments of the analysis yield conclusions about the sources of variation in educational progress, and particularly the effect of teachers.
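As a purely illustrative sketch of the model class described above (not the paper's exact specification), a cumulative-logit crossed random-effects model with weighted teacher contributions can be written as

$$\operatorname{logit} P(Y_{sg} \le k) = \theta_k - \Bigl(\mathbf{x}_{sg}^{\top}\boldsymbol{\beta} + u_s + \sum_{t} w_{gt}\, v_t\Bigr), \qquad k = 1,\dots,K-1,$$

where $Y_{sg}$ is the ordinal grade of student $s$ in teaching group $g$, $u_s$ is a student random effect, $v_t$ are teacher random effects, and the weights $w_{gt}$ reflect teacher $t$'s share of the teaching for group $g$; all symbols here are illustrative notation.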

2.
This paper surveys commercially available MS-DOS- and Microsoft Windows-based microcomputer software for survival analysis, especially for Cox proportional hazards regression and parametric survival models. Emphasis is given to functionality, documentation, generality, and flexibility of software. A discussion of the need for software integration is given, which leads to the conclusion that survival analysis software not closely tied to a well-designed package will not meet an analyst's general needs. Some standalone programs are good tools for teaching the theory of some survival analysis procedures, but they may not teach the student good data analysis techniques such as critically examining regression assumptions. We contrast typical software with a general, integrated, modeling framework that is available with S-PLUS.
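As an illustration only (not one of the packages reviewed in the paper), a Cox proportional hazards fit within an integrated modeling environment can be sketched in Python with the lifelines package; the simulated data, column names, and covariates are assumptions for the example.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
age = rng.normal(60, 10, n)
trt = rng.integers(0, 2, n)
# Hypothetical exponential lifetimes whose rate depends on the covariates.
rate = np.exp(-3.5 + 0.03 * (age - 60) - 0.5 * trt)
time = rng.exponential(1 / rate)
censor = rng.exponential(40, n)
df = pd.DataFrame({
    "time": np.minimum(time, censor),
    "event": (time <= censor).astype(int),
    "age": age,
    "trt": trt,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()              # coefficients, hazard ratios, confidence intervals
cph.check_assumptions(df)        # residual-based checks of the proportional hazards assumption
```

The last call illustrates the abstract's point: an integrated framework makes it easy to examine regression assumptions rather than stopping at the coefficient table.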

3.
The goal of this paper is to compare the performance of two estimation approaches, the quasi-likelihood estimating equation and the pseudo-likelihood equation, against model mis-specification for non-separable binary data. This comparison, to the authors’ knowledge, has not been done yet. In this paper, we first extend the quasi-likelihood work on spatial data to non-separable binary data. Some asymptotic properties of the quasi-likelihood estimate are also briefly discussed. We then use the techniques of a truncated Gaussian random field with a quasi-likelihood type model and a Gibbs sampler with a conditional model in the Markov random field to generate spatial–temporal binary data, respectively. For each simulated data set, both estimation methods are used to estimate parameters. Some discussion of the simulation results is also included.
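As a minimal sketch of the truncated-Gaussian construction mentioned above, spatial–temporal binary data can be generated by thresholding a correlated latent Gaussian field at zero. The grid sizes and the covariance (a mixture of two separable exponential components, which is non-separable in general) are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small space-time layout (illustrative sizes).
n_sites, n_times = 25, 10
coords = rng.uniform(0, 1, size=(n_sites, 2))

d_space = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
d_time = np.abs(np.arange(n_times)[:, None] - np.arange(n_times)[None, :])

# A mixture of two separable exponential covariances; the mixture is a valid
# covariance matrix and, in general, is not separable in space and time.
cov = (0.6 * np.kron(np.exp(-d_time / 1.0), np.exp(-d_space / 0.1))
       + 0.4 * np.kron(np.exp(-d_time / 4.0), np.exp(-d_space / 0.5)))
cov += 1e-8 * np.eye(cov.shape[0])            # jitter for numerical stability

# Truncated Gaussian random field: draw the latent field, then threshold at 0.
z = rng.multivariate_normal(np.zeros(n_sites * n_times), cov)
y = (z > 0).astype(int).reshape(n_times, n_sites)
print("marginal prevalence:", y.mean())       # about 0.5 for a zero threshold
```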

4.
This paper considers the use of independent student data collection and analysis projects in the teaching of engineering statistics and quality control. It describes the possible forms such projects can take, gives some details of their use at Iowa State University, and argues for their effectiveness in improving learning.

5.
Summary. Latent class analysis (LCA) is a statistical tool for evaluating the error in categorical data when two or more repeated measurements of the same survey variable are available. This paper illustrates an application of LCA for evaluating the error in self-reports of drug use using data from the 1994, 1995 and 1996 implementations of the US National Household Survey on Drug Abuse. In our application, the LCA approach is used for estimating classification errors, which in turn leads to identifying problems with the questionnaire and adjusting estimates of prevalence of drug use for classification error bias. Some problems in using LCA when the indicators of the use of a particular drug are embedded in a single survey questionnaire, as in the National Household Survey on Drug Abuse, are also discussed.
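A minimal two-class latent class model for three repeated binary indicators, fitted by EM, illustrates the kind of classification-error estimation described above; the simulated data, starting values, and class labels are illustrative assumptions, not the NHSDA analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: 2 latent classes (user / non-user), 3 binary indicators
# observed with some classification error, mimicking repeated self-reports.
n = 2000
true_class = rng.binomial(1, 0.15, n)                       # 1 = user
true_p = np.array([[0.05, 0.03, 0.04],                      # P(yes | non-user)
                   [0.85, 0.80, 0.90]])                     # P(yes | user)
y = rng.binomial(1, true_p[true_class])                     # n x 3 indicators

# EM for a two-class latent class model.
pi = np.array([0.5, 0.5])                                   # class prevalences
p = np.array([[0.2, 0.2, 0.2], [0.7, 0.7, 0.7]])            # item-response probabilities

for _ in range(200):
    # E-step: posterior class membership for each respondent.
    like = np.stack([np.prod(p[c] ** y * (1 - p[c]) ** (1 - y), axis=1)
                     for c in range(2)], axis=1)             # n x 2
    post = like * pi
    post /= post.sum(axis=1, keepdims=True)
    # M-step: update prevalences and item-response (classification-error) probabilities.
    pi = post.mean(axis=0)
    p = (post.T @ y) / post.sum(axis=0)[:, None]

print("estimated prevalences:", pi.round(3))
print("estimated item-response probabilities:\n", p.round(3))
```

The estimated item-response probabilities play the role of classification-error rates, and the estimated prevalence is the error-adjusted prevalence the abstract refers to.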

6.
The combination of log-linear models and correspondence analysis has long been used to decompose contingency tables and aid in their interpretation. Until now, this approach has not been applied to the education Statewide Longitudinal Data System (SLDS), which contains administrative school data at the student level. While some research has been conducted using the SLDS, its primary use is for state education administrative reporting. This article uses the combination of log-linear models and correspondence analysis to gain insight into high school dropouts in two discrete regions in Kentucky, Appalachia and non-Appalachia, defined by the American Community Survey. The individual student records from the SLDS were categorized into one of the two regions, and a log-linear model was used to identify the interactions between the demographic characteristics and the dropout categories, push-out and pull-out. Correspondence analysis was then used to visualize the interactions with the expanded push-out categories (boredom, course selection, expulsion, failing grade, teacher conflict) and pull-out categories (employment, family problems, illness, marriage, pregnancy) to provide insights into the regional differences. In this article, we demonstrate that correspondence analysis can extend the insights gained from SLDS data and provide new perspectives on dropouts. Supplementary materials for this article are available online.
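A compact sketch of simple correspondence analysis of a two-way contingency table, via the SVD of standardized residuals; the table below is invented for illustration and is not the Kentucky SLDS data.

```python
import numpy as np

# Hypothetical region x dropout-reason contingency table (counts) --
# purely illustrative, not the SLDS data.
N = np.array([[40, 25, 10, 30],      # e.g. Appalachia
              [55, 20, 35, 15]])     # e.g. non-Appalachia

P = N / N.sum()                      # correspondence matrix
r = P.sum(axis=1)                    # row masses
c = P.sum(axis=0)                    # column masses

# Standardized residuals  S = D_r^{-1/2} (P - r c') D_c^{-1/2}
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates of rows and columns on the leading axes.
row_coords = (U * sv) / np.sqrt(r)[:, None]
col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]

print("inertia per axis:", (sv ** 2).round(4))   # sums to Pearson chi-square / n
print("row coordinates:\n", row_coords.round(3))
print("column coordinates:\n", col_coords.round(3))
```

Plotting the row and column coordinates on the same axes gives the kind of map the article uses to visualize regional differences in dropout reasons.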

7.
The authors develop a kernel-based estimator of a dynamic reliability measure for use with independent ranked set samples. The estimator is in the form of a ratio, whose numerator and denominator are shown to outperform their rivals based on simple random samples. Some asymptotic properties of the proposed estimator are also established. Simulation studies reveal finite-sample properties of the estimator. The technique is finally applied to an agricultural data set.
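Since the estimator above is built from ranked set samples, a short sketch of how a balanced ranked set sample is drawn may help; the set size, number of cycles, perfect ranking, and the exponential population are all illustrative assumptions, and the reliability estimator itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

def ranked_set_sample(draw, set_size=4, n_cycles=25):
    """Balanced ranked set sample: in each cycle, draw `set_size` sets of
    `set_size` units, rank each set (here by the true values, i.e. perfect
    ranking), and keep the i-th order statistic from the i-th set."""
    sample = []
    for _ in range(n_cycles):
        for i in range(set_size):
            s = np.sort(draw(set_size))
            sample.append(s[i])          # keep the (i+1)-th smallest
    return np.asarray(sample)

# Illustrative lifetime population: exponential with mean 2.
rss = ranked_set_sample(lambda k: rng.exponential(2.0, k))
srs = rng.exponential(2.0, rss.size)     # simple random sample of the same size

# RSS-based estimators of means and related reliability quantities typically
# have smaller variance than their SRS counterparts of the same size.
print("RSS mean:", rss.mean().round(3), " SRS mean:", srs.mean().round(3))
```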

8.
Strong movements in both education research and education reform are emphasizing that teaching should encourage student activity rather than simply aim knowledge in the general direction of a student audience. Yet video, at least in its traditional technological forms, is passive. How can teachers make effective use of an apparently ineffective medium? What role can video best play in new multimedia instructional systems? This article reviews research on learning through television in order to make practical suggestions. Specific examples use two widely distributed series, Against All Odds: Inside Statistics and Statistics: Decisions Through Data.

9.
The use of Monte Carlo methods to generate exam datasets is nowadays a well-established practice among econometrics and statistics examiners all over the world. Its advantages are well known: providing each student a different data set ensures that estimates are actually computed individually, rather than copied from someone sitting nearby. The method, however, has a major fault: initial “random errors,” such as mistakes in downloading the assigned dataset, might generate downward bias in student evaluation. We propose a set of calibration algorithms, typical of indirect estimation methods, that solve the issue of initial “random errors” and reduce evaluation bias. By ensuring round initial estimates of the parameters for each individual dataset, our calibration procedures allow the students to determine whether they have started the exam correctly. When initial estimates are not round numbers, this random error in the initial stage of the exam can be corrected for immediately, thus reducing evaluation bias. The procedure offers the further advantage of easing markers’ lives by allowing them to check round-number answers only, rather than lists of numbers with many decimal digits.
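The paper's calibration algorithms are based on indirect estimation and are more general than what follows; as a minimal sketch of the idea, one simple way to guarantee round initial estimates is to generate each student's regression dataset so that the OLS estimates equal the chosen round numbers exactly, by making the error term orthogonal to the regressors. All names and values here are illustrative assumptions.

```python
import numpy as np

def calibrated_exam_dataset(seed, beta_round=(10.0, 2.0, -3.0), n=100):
    """Generate an individual exam dataset whose OLS estimates equal
    `beta_round` exactly, so a student who loads and fits the data
    correctly sees round numbers at the first step."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
    e = rng.normal(scale=2.0, size=n)
    # Project the errors onto the orthogonal complement of the column space
    # of X, so that X'e = 0 and the OLS estimate is exactly beta_round.
    e = e - X @ np.linalg.lstsq(X, e, rcond=None)[0]
    y = X @ np.asarray(beta_round) + e
    return X, y

# Each student gets a different seed (e.g. a student ID) but the same round target.
X, y = calibrated_exam_dataset(seed=12345)
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta_hat.round(6))   # [10. 2. -3.] -- a botched download would not give round numbers
```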

10.
A hazard model of the probability of medical school drop-out in the UK
Summary.  From individual level longitudinal data for two entire cohorts of medical students in UK universities, we use multilevel models to analyse the probability that an individual student will drop out of medical school. We find that academic preparedness—both in terms of previous subjects studied and levels of attainment therein—is the major influence on withdrawal by medical students. Additionally, males and more mature students are more likely to withdraw than females or younger students respectively. We find evidence that the factors influencing the decision to transfer course differ from those affecting the decision to drop out for other reasons.

11.
Robust splines     
We consider the problem of fitting a cubic spline to data using robust regression techniques. Some important properties of splines are discussed, showing that their use as a regression model is related in principle to the concept of robustness. Methods for fitting splines and interpreting the results are outlined, and an illustrative example is given.
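A small sketch of robust cubic-spline fitting in the spirit of the abstract: a truncated-power cubic spline basis combined with iteratively reweighted least squares using Huber weights. The knot placement, tuning constant, and data are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

def cubic_spline_basis(x, knots):
    """Truncated-power basis for a cubic spline: 1, x, x^2, x^3, (x - k)_+^3."""
    cols = [np.ones_like(x), x, x ** 2, x ** 3]
    cols += [np.clip(x - k, 0, None) ** 3 for k in knots]
    return np.column_stack(cols)

def huber_spline(x, y, knots, c=1.345, n_iter=30):
    """Fit the spline coefficients by IRLS with Huber weights."""
    B = cubic_spline_basis(x, knots)
    beta = np.linalg.lstsq(B, y, rcond=None)[0]          # least-squares start
    for _ in range(n_iter):
        r = y - B @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12        # robust scale (MAD)
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / u)                 # Huber weights
        beta = np.linalg.lstsq(np.sqrt(w)[:, None] * B, np.sqrt(w) * y, rcond=None)[0]
    return beta

# Smooth signal contaminated by a few gross outliers.
x = np.linspace(0, 1, 120)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.15, x.size)
y[::25] += 4.0                                           # outliers
knots = [0.25, 0.5, 0.75]
beta = huber_spline(x, y, knots)
fitted = cubic_spline_basis(x, knots) @ beta
print("max |fit - true curve|:", np.max(np.abs(fitted - np.sin(2 * np.pi * x))).round(3))
```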

12.
Graphical methods have played a central role in the development of statistical theory and practice. This presentation briefly reviews some of the highlights in the historical development of statistical graphics and gives a simple taxonomy that can be used to characterize the current use of graphical methods. This taxonomy is used to describe the evolution of the use of graphics in some major statistical and related scientific journals.

Some recent advances in the use of graphical methods for statistical analysis are reviewed, and several graphical methods for the statistical presentation of data are illustrated, including the use of multicolor maps.

13.
Clustering high-dimensional data is often a challenging task both because of the computational burden required to run any technique, and because the difficulty in interpreting clusters generally increases with the data dimension. In this work, a method for finding low-dimensional representations of high-dimensional data is discussed, specifically conceived to preserve possible clusters in data. It is based on the critical bandwidth, a nonparametric statistic to test unimodality, related to kernel density estimation. Some useful properties of this statistic are highlighted and an adjustment to use it as a basis for reducing dimensionality is suggested. The method is illustrated by simulated and real data examples.
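An illustrative computation of the critical bandwidth for a one-dimensional sample: the smallest Gaussian kernel bandwidth at which the density estimate has at most one mode, found by bisection (the mode count is monotone in the bandwidth for the Gaussian kernel). The grid and tolerances are assumptions; the paper builds its dimension-reduction criterion on this statistic, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

def n_modes(x, h, grid_size=512):
    """Number of local maxima of a Gaussian KDE with bandwidth h."""
    grid = np.linspace(x.min() - 3 * h, x.max() + 3 * h, grid_size)
    dens = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2).sum(axis=1)
    interior = dens[1:-1]
    return int(np.sum((interior > dens[:-2]) & (interior > dens[2:])))

def critical_bandwidth(x, k=1, lo=1e-3, hi=None, tol=1e-4):
    """Smallest bandwidth such that the KDE has at most k modes (bisection)."""
    hi = hi if hi is not None else (x.max() - x.min())
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if n_modes(x, mid) <= k:
            hi = mid          # at most k modes: try a smaller bandwidth
        else:
            lo = mid
    return hi

# A clearly bimodal sample yields a larger critical bandwidth than a unimodal one.
x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(2, 1, 150)])
print("critical bandwidth:", round(critical_bandwidth(x), 4))
```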

14.
Many authors have criticized the use of spreadsheets for statistical data processing and computing because of incorrect statistical functions, the absence of a log file or audit trail, inconsistent behavior of computational dialogs, and poor handling of missing values. Improvements in some spreadsheet processors and the possibility of audit trail facilities suggest that the use of a spreadsheet for some statistical data entry and simple analysis tasks may now be acceptable. A brief outline of some issues and some guidelines for good practice are included.

15.
The log-logistic distribution is one of the popular distributions in life-testing applications. This article develops an acceptance sampling procedure for the log-logistic lifetime distribution based on grouped data when the shape parameter is given. Both producer and consumer risks are considered to develop the ordinary, approximate and simulated sampling plans. Some of the proposed sampling plans are tabulated; moreover, those three types of sampling plans are compared with each other under the same censoring rates. The use of these tables is illustrated by an example.
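A short sketch of the building block behind such plans: with the shape parameter known, the probability that a lifetime fails before the inspection time follows from the log-logistic CDF, and the lot acceptance probability is a binomial tail. The plan parameters (n, c, t) below are illustrative assumptions, not the paper's tabulated plans.

```python
from math import comb

def loglogistic_cdf(t, scale, shape):
    """Log-logistic CDF: F(t) = 1 / (1 + (t / scale) ** (-shape))."""
    return 1.0 / (1.0 + (t / scale) ** (-shape))

def acceptance_probability(n, c, t, scale, shape):
    """Accept the lot if at most c of n items fail before the test time t
    (a single inspection time, i.e. grouped/censored data)."""
    p = loglogistic_cdf(t, scale, shape)
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(c + 1))

# Illustrative plan: n = 20 items, acceptance number c = 2, test time t = 1,
# shape parameter assumed known (shape = 2); vary the scale (quality level).
for scale in (0.8, 1.5, 3.0):
    print(scale, round(acceptance_probability(20, 2, 1.0, scale, 2.0), 4))
```

Evaluating this probability over a range of quality levels traces the operating characteristic curve from which producer and consumer risks are read off.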

16.
This paper explores the use of data augmentation in settings beyond the standard Bayesian one. In particular, we show that, after proposing an appropriate generalised data-augmentation principle, it is possible to extend the range of sampling situations in which fiducial methods can be applied by constructing Markov chains whose stationary distributions represent valid posterior inferences on model parameters. Some properties of these chains are presented and a number of open questions are discussed. We also use the approach to draw out connections between classical and Bayesian approaches in some standard settings.

17.
In this article, we propose a nonparametric approach for estimating the intensity function of temporal point processes based on kernel estimators. In particular, we use asymmetric kernel estimators characterized by the gamma distribution, in order to describe features of observed point patterns adequately. Some characteristics of these estimators are analyzed and discussed both through simulated results and applications to real data from different seismic catalogs.
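A brief sketch of a gamma-kernel intensity estimate for a temporal point pattern, in the spirit of Chen-type asymmetric gamma kernels: each event time contributes a gamma density whose shape parameter depends on the evaluation point, which avoids boundary bias near the origin. The bandwidth, kernel parameterization, and simulated event times are illustrative assumptions, not necessarily the estimator used in the article.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(11)

def gamma_kernel_intensity(event_times, eval_times, b=0.5):
    """Gamma-kernel intensity estimate: at each evaluation time x, every event
    time contributes the gamma density with shape x / b + 1 and scale b."""
    lam = np.empty(eval_times.size)
    for j, x in enumerate(eval_times):
        lam[j] = gamma.pdf(event_times, a=x / b + 1.0, scale=b).sum()
    return lam

# Illustrative temporal point pattern on [0, 10].
event_times = np.sort(rng.uniform(0, 10, 80))
eval_times = np.linspace(0.0, 10.0, 50)
lam_hat = gamma_kernel_intensity(event_times, eval_times)
print(lam_hat.round(2))   # roughly constant near 8 events per unit time here
```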

18.
The paper reviews finite mixture models for binomial counts with concomitant variables. These models are well known in theory, but they are rarely applied. We use a binomial finite mixture to model the number of credits gained by freshmen during the first year at the School of Economics of the University of Florence. The finite mixture approach allows us to appropriately account for the large number of zeroes and the multimodality of the observed distribution. Moreover, we rely on a concomitant variable specification to investigate the role of student background characteristics and of a compulsory pre-enrollment test in predicting gained credits. In the paper, we deal with model selection, including the choice of the number of components, and we devise numerical and graphical summaries of the model results in order to exploit the information content of the concomitant variable specification. The main finding is that the introduction of the pre-enrollment test gives additional information for student tutoring, even if the predictive power is modest.
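A minimal EM fit of a two-component binomial finite mixture for credit counts out of a fixed maximum; the concomitant-variable (covariate) part of the paper's model is omitted, and the simulated data, maximum of 60 credits, and number of components are illustrative assumptions, not the Florence data.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(2)

# Simulated "credits gained out of 60": a low-progress group (many low counts)
# and a regular-progress group -- illustrative only.
m = 60
z = rng.binomial(1, 0.3, 500)                       # latent component indicator
counts = np.where(z == 1, rng.binomial(m, 0.05, 500), rng.binomial(m, 0.7, 500))

# EM for a two-component binomial mixture.
w = np.array([0.5, 0.5])                            # mixing weights
p = np.array([0.2, 0.5])                            # per-credit success probabilities

for _ in range(300):
    like = np.stack([binom.pmf(counts, m, p[c]) for c in range(2)], axis=1)
    post = like * w
    post /= post.sum(axis=1, keepdims=True)         # E-step: responsibilities
    w = post.mean(axis=0)                           # M-step: mixing weights
    p = (post * counts[:, None]).sum(axis=0) / (m * post.sum(axis=0))

print("weights:", w.round(3), "success probabilities:", p.round(3))
```

In the paper's full specification, the mixing weights would additionally depend on student background variables and the pre-enrollment test score (the concomitant variables).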

19.
A regression model, based on the exponentiated-exponential geometric distribution, is defined and studied. The regression model can be applied to count data with under-dispersion or over-dispersion. Some forms of its modifications to truncated or inflated data are mentioned. Some tests to discriminate between the regression model and its competitors are discussed. Real numerical data sets are used to illustrate the applications of the regression model.

20.
Missing values are an inseparable part of longitudinal studies in epidemiology and in medical and clinical research. For simplicity, researchers usually ignore the missingness mechanism, yet ignoring a not-at-random mechanism may lead to misleading results. In this paper, we use a Bayesian paradigm for fitting the Heckman selection model, which allows non-ignorable missingness for longitudinal data. We also use reversible-jump Markov chain Monte Carlo to allow the model to choose between non-ignorable and ignorable structures for the missingness mechanism, and show how this selection can be incorporated. Some simulation studies are performed to illustrate the proposed approach. The approach is also used for analyzing two real data sets.
