Similar Documents
20 similar documents found (search time: 31 ms)
1.
In testing, item response theory models are widely used in order to estimate item parameters and individual abilities. However, even unidimensional models require a considerable sample size so that all parameters can be estimated precisely. The introduction of empirical prior information about candidates and items might reduce the number of candidates needed for parameter estimation. Using data for IQ measurement, this work shows how empirical information about items can be used effectively for item calibration and in adaptive testing. First, we propose multivariate regression trees to predict the item parameters from a set of covariates related to the item-solving process. We then compare item parameter estimates obtained when the tree-fitted values are included in the estimation with those obtained when they are ignored. Model estimation is fully Bayesian and is conducted via Markov chain Monte Carlo methods. The results are two-fold: (a) in item calibration, the introduction of prior information is effective with short test lengths and small sample sizes, and (b) in adaptive testing, using the tree-fitted values instead of the estimated parameters leads to a moderate increase in test length but a considerable saving of resources.
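To make the first step concrete, here is a minimal sketch of predicting item parameters with a multivariate regression tree and using the fitted values as empirical prior means. Everything here (the covariates, the 2PL parameterization, the sample sizes) is a hypothetical stand-in, not the authors' data or code:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Hypothetical calibration set: 200 items with 4 covariates describing
# the item-solving process (e.g., number of rules, abstraction level).
X = rng.normal(size=(200, 4))
a = np.exp(0.3 * X[:, 0] + rng.normal(scale=0.2, size=200))            # discrimination
b = 0.8 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.3, size=200)    # difficulty
Y = np.column_stack([np.log(a), b])   # work with log(a) so predictions stay positive

# One multivariate tree predicts both item parameters jointly.
tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=10).fit(X, Y)

# For a new, uncalibrated item, the tree-fitted values act as empirical
# prior means for the Bayesian (MCMC) calibration step.
log_a0, b0 = tree.predict(rng.normal(size=(1, 4)))[0]
print(f"prior mean discrimination: {np.exp(log_a0):.2f}, difficulty: {b0:.2f}")
```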

2.
A unified approach is developed for testing hypotheses in the general linear model based on the ranks of the residuals. It complements the nonparametric estimation procedures recently reported in the literature. The testing and estimation procedures together provide a robust alternative to least squares. The methods are similar in spirit to least squares so that results are simple to interpret. Hypotheses concerning a subset of specified parameters can be tested, while the remaining parameters are treated as nuisance parameters. Asymptotically, the test statistic is shown to have a chi-square distribution under the null hypothesis. This result is then extended to cover a sequence of contiguous alternatives from which the Pitman efficacy is derived. The general application of the test requires the consistent estimation of a functional of the underlying distribution and one such estimate is furnished.
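The estimation side that these rank tests complement can be sketched with Jaeckel's dispersion function under Wilcoxon scores (a standard R-estimation setup; the abstract does not specify the score function, so that choice is an assumption):

```python
import numpy as np
from scipy.optimize import minimize

def jaeckel_dispersion(beta, X, y):
    """Jaeckel's dispersion with Wilcoxon scores: sum of a(R(e_i)) * e_i.
    Minimizing it gives a rank-based (R) estimate of the regression
    coefficients; the intercept drops out because the scores sum to zero."""
    e = y - X @ beta
    ranks = np.argsort(np.argsort(e)) + 1
    scores = np.sqrt(12) * (ranks / (len(y) + 1) - 0.5)
    return np.sum(scores * e)

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 2))
y = 1.0 * X[:, 0] - 2.0 * X[:, 1] + rng.standard_t(df=3, size=n)  # heavy-tailed errors

fit = minimize(jaeckel_dispersion, x0=np.zeros(2), args=(X, y), method="Nelder-Mead")
print("rank-based estimate:", fit.x.round(2))   # close to (1, -2), robustly
```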

3.
Sliced Inverse Regression (SIR) is an effective method for dimension reduction in high-dimensional regression problems. The original method, however, requires inverting the covariance matrix of the predictors. When the predictors are collinear, or the sample size is small relative to the dimension, this inversion is impossible and a regularization technique must be used. Our approach is based on a Fisher Lecture given by R.D. Cook, in which SIR axes are shown to be solutions of an inverse regression problem. We propose to place a Gaussian prior distribution on the unknown parameters of this inverse regression problem in order to regularize their estimation. We show that several existing SIR regularizations fit within our framework, which permits a unified understanding of these methods. Three new priors are proposed, leading to new regularizations of the SIR method. A comparison on simulated data, as well as an application to the estimation of Mars surface physical properties from hyperspectral images, is provided.
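As a rough illustration of the simplest regularization in this spirit (a ridge-type penalty, which corresponds to an isotropic Gaussian prior; the priors proposed in the paper are richer than this), a regularized SIR might look like:

```python
import numpy as np

def sir_regularized(X, y, n_slices=5, lam=1.0, d=1):
    """SIR with a ridge-type regularization: (Sigma + lam*I)^{-1} replaces
    Sigma^{-1}, the estimate implied by an isotropic Gaussian prior."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    Sigma = Xc.T @ Xc / n
    # Covariance of the slice means of the centered predictors
    M = np.zeros((p, p))
    for idx in np.array_split(np.argsort(y), n_slices):
        m = Xc[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    evals, evecs = np.linalg.eig(np.linalg.solve(Sigma + lam * np.eye(p), M))
    top = np.argsort(evals.real)[::-1][:d]
    return evecs.real[:, top]          # estimated e.d.r. directions

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 100))         # n < p: plain SIR cannot invert Sigma
y = np.sin(X @ (np.ones(100) / 10)) + 0.1 * rng.normal(size=50)
B = sir_regularized(X, y)
```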

4.
The score function is associated with certain optimality features in statistical inference. This review article examines the central role of the score in testing and estimation. Both the maximization of power in testing and the quest for efficiency in estimation lead to the score as a guiding principle. In hypothesis testing, the locally most powerful test statistic is the score test or a transformation of it. In estimation, the optimal estimating function is the score. The same link can be made in the presence of nuisance parameters: the optimal test function should have maximum correlation with the score of the parameter of primary interest. We complement this result by showing that the same criterion should be satisfied in the estimation problem as well.
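As a textbook illustration of the score's role in testing (not an example from the article), the one-parameter Poisson case needs only the score and the Fisher information at the null value:

```python
import numpy as np
from scipy.stats import chi2

def poisson_score_test(x, lam0):
    """Score (Rao) test of H0: lambda = lam0 for i.i.d. Poisson data.
    Only the score U and Fisher information I at the null are needed;
    nothing is fitted under the alternative."""
    n = len(x)
    U = x.sum() / lam0 - n        # d/d(lambda) of the log-likelihood at lam0
    I = n / lam0                  # Fisher information at lam0
    stat = U**2 / I               # asymptotically chi-square(1) under H0
    return stat, chi2.sf(stat, df=1)

rng = np.random.default_rng(3)
x = rng.poisson(lam=2.4, size=100)
stat, pval = poisson_score_test(x, lam0=2.0)
print(f"score statistic = {stat:.2f}, p-value = {pval:.4f}")
```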

5.
In nonignorable missing response problems, we study a semiparametric model with an unspecified missingness mechanism and an exponential family model for the conditional density of the response. Although existing methods can estimate the parameters of the exponential family, nonparametric estimation or testing of the missingness mechanism remains an open problem. By defining a "synthesis" density involving the unknown missingness mechanism and the known baseline "carrier" density of the exponential family model, we treat this "synthesis" density as a legitimate density with a biased-sampling version. We develop maximum pseudo-likelihood estimation procedures, and the resulting estimators are consistent and asymptotically normal. Since the "synthesis" cumulative distribution is a functional of the missingness mechanism and the known carrier density, the proposed method can be used to test the correctness of the missingness mechanism model nonparametrically and indirectly. Simulation studies and a real example demonstrate that the proposed methods perform very well.

6.
This paper discusses visualization methods for discriminant analysis. It does not address numerical methods for classification per se, but rather focuses on graphical methods that can be viewed as pre-processors, aiding the analyst's understanding of the data and the choice of a final classifier. The methods are adaptations of recent results in dimension reduction for regression, including sliced inverse regression and sliced average variance estimation. A permutation test is suggested as a means of determining dimension, and examples are given throughout the discussion.
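One simple way to build such a permutation test of dimension is sketched below using SIR eigenvalues. This is a crude full-permutation variant for illustration only, not the authors' exact construction:

```python
import numpy as np

def sir_eigvals(X, y, n_slices=5):
    """Eigenvalues of the SIR kernel computed on standardized predictors."""
    n, p = X.shape
    L = np.linalg.cholesky(np.cov(X, rowvar=False))
    Z = (X - X.mean(axis=0)) @ np.linalg.inv(L).T
    M = np.zeros((p, p))
    for idx in np.array_split(np.argsort(y), n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    return np.sort(np.linalg.eigvalsh(M))[::-1]

def permutation_dim_test(X, y, d0, n_perm=200, seed=0):
    """P-value for H0: dimension <= d0, using the eigenvalue mass beyond the
    first d0 directions as statistic and full permutations of y as reference."""
    rng = np.random.default_rng(seed)
    stat = sir_eigvals(X, y)[d0:].sum()
    null = [sir_eigvals(X, rng.permutation(y))[d0:].sum() for _ in range(n_perm)]
    return np.mean(np.array(null) >= stat)

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 5))
y = X[:, 0] + 0.3 * rng.normal(size=300)
print(permutation_dim_test(X, y, d0=0))   # small: at least one direction
print(permutation_dim_test(X, y, d0=1))   # large: one direction suffices
```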

7.
Dimension reduction with bivariate responses, especially a mix of continuous and categorical responses, can be of special interest. One immediate application is to regressions with censoring. In this paper, we propose two novel model-free methods to reduce the dimension of the covariates in a bivariate regression. Both methods enjoy a simple asymptotic chi-squared distribution for testing the dimension of the regression, and they also allow us to test the contributions of the covariates easily without pre-specifying a parametric model. The new methods outperform the current one both in simulations and in the analysis of real data. The well-known PBC data are used to illustrate the application of our method to censored regression.

8.
Current methods for testing the equality of conditional correlations of bivariate data given a third variable of interest (a covariate) are limited because a continuous covariate must be discretized. In this study, we propose a linear model approach for estimation and hypothesis testing of the Pearson correlation coefficient, in which the correlation itself can be modeled as a function of continuous covariates. The restricted maximum likelihood method is applied for parameter estimation, and the corrected likelihood ratio test is performed for hypothesis testing. This approach allows flexible and robust inference and prediction of the conditional correlations based on the linear model. Simulation studies show that the proposed method is statistically more powerful and more flexible in accommodating complex covariate patterns than existing methods. In addition, we illustrate the approach by analyzing the correlation between the physical component summary and the mental component summary of the MOS SF-36 form across a number of covariates in national survey data.
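A simplified maximum likelihood analogue of the idea, letting the correlation depend on a covariate through a tanh link on standardized margins, can be sketched as follows (the paper itself uses REML and a corrected likelihood ratio test; this stripped-down version is only illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def negloglik(gamma, x, y, z):
    """Bivariate-normal likelihood with standardized margins, where the
    correlation varies with a covariate z through a tanh link."""
    rho = np.tanh(gamma[0] + gamma[1] * z)
    q = (x**2 - 2 * rho * x * y + y**2) / (1 - rho**2)
    return np.sum(0.5 * np.log(1 - rho**2) + 0.5 * q)

rng = np.random.default_rng(5)
n = 500
z = rng.uniform(-1, 1, size=n)
rho = np.tanh(0.2 + 0.8 * z)               # true covariate-dependent correlation
x = rng.normal(size=n)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)

fit = minimize(negloglik, x0=np.zeros(2), args=(x, y, z))
print("estimated link coefficients:", fit.x.round(2))   # approx. (0.2, 0.8)
```

A likelihood ratio test of "the correlation does not vary with z" then compares this fit against one with gamma[1] fixed at zero.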

9.
To reduce the dimension of the predictors without loss of information on the regression, we develop in this paper a sufficient dimension reduction method that we term cumulative Hessian directions. Unlike many other sufficient dimension reduction methods, our proposal entirely avoids the selection of tuning parameters such as the number of slices in slicing estimation or the bandwidth in kernel smoothing. We also investigate the asymptotic properties of our proposal when the dimension of the predictors diverges. Illustrations through simulations and an application are presented to demonstrate the efficacy of our proposal and to compare it with existing methods.

10.
The single-index model is a useful regression model. In this paper, we propose a nonconcave penalized least squares method to estimate both the parameters and the link function of the single-index model. Compared to other variable selection and estimation methods, the proposed method can estimate parameters and select variables simultaneously. When the dimension of the parameters in the single-index model is a fixed constant, under some regularity conditions, we demonstrate that the proposed parameter estimators have the so-called oracle property; furthermore, we establish their asymptotic normality and develop a sandwich formula to estimate their standard deviations. Simulation studies and a real data analysis are presented to illustrate the proposed methods.
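The abstract does not name its penalty, but the SCAD penalty of Fan and Li is the canonical nonconcave choice yielding the oracle property; for reference:

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty of Fan & Li (2001): lasso-like near zero, constant for
    large coefficients, so big effects are not over-shrunk -- the source of
    the oracle property in nonconcave penalized least squares."""
    t = np.abs(theta)
    return np.where(
        t <= lam,
        lam * t,
        np.where(
            t <= a * lam,
            (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1)),
            lam**2 * (a + 1) / 2,
        ),
    )

# Schematically, a penalized criterion for coefficients beta would be
# np.sum((y - yhat)**2) + n * np.sum(scad_penalty(beta, lam)).
```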

11.
There are a variety of methods in the literature which seek to make iterative estimation algorithms more manageable by breaking the iterations into a greater number of simpler or faster steps. Those algorithms which deal at each step with a proper subset of the parameters are called in this paper partitioned algorithms. Partitioned algorithms in effect replace the original estimation problem with a series of problems of lower dimension. The purpose of the paper is to characterize some of the circumstances under which this process of dimension reduction leads to significant benefits. Four types of partitioned algorithms are distinguished: reduced objective function methods, nested (partial Gauss-Seidel) iterations, zigzag (full Gauss-Seidel) iterations, and leapfrog (non-simultaneous) iterations. Emphasis is given to Newton-type methods using analytic derivatives, but a nested EM algorithm is also given. Nested Newton methods are shown to be equivalent to applying the same Newton method to the reduced objective function, and are applied to separable regression and generalized linear models. Nesting is shown generally to improve the convergence of Newton-type methods, both by improving the quadratic approximation to the log-likelihood and by improving the accuracy with which the observed information matrix can be approximated. Nesting is recommended whenever a subset of parameters is relatively easily estimated. The zigzag method is shown to produce a stable but generally slow iteration; it is fast and recommended when the parameter subsets have approximately uncorrelated estimates. The leapfrog iteration has fewer guaranteed properties in general, but behaves similarly to nesting and zigzagging when the parameter subsets are orthogonal.
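A toy example of the reduced objective function idea for separable regression (hypothetical model, not from the paper): the linear coefficient has a closed form given the nonlinear parameter, so the problem reduces to a one-dimensional optimization.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Separable model: y = beta * exp(-theta * x) + noise (hypothetical).
rng = np.random.default_rng(6)
x = np.linspace(0, 4, 60)
y = 2.5 * np.exp(-0.7 * x) + 0.05 * rng.normal(size=60)

def beta_hat(theta):
    """Inner step: beta enters linearly, so it has a closed-form LS solution."""
    g = np.exp(-theta * x)
    return (g @ y) / (g @ g)

def reduced_objective(theta):
    """Residual sum of squares with beta concentrated out."""
    g = np.exp(-theta * x)
    return np.sum((y - beta_hat(theta) * g) ** 2)

theta = minimize_scalar(reduced_objective, bounds=(0.01, 5), method="bounded").x
print(f"theta = {theta:.3f}, beta = {beta_hat(theta):.3f}")   # near (0.7, 2.5)
```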

12.
Semiparametric Analysis of Truncated Data
Randomly truncated data are frequently encountered in many studies where truncation arises as a result of the sampling design. In the literature, nonparametric and semiparametric methods have been proposed to estimate parameters in one-sample models. This paper considers a semiparametric model and develops an efficient method for the estimation of unknown parameters. The model assumes that K populations have a common probability distribution but the populations are observed subject to different truncation mechanisms. Semiparametric likelihood estimation is studied and the corresponding inferences are derived for both parametric and nonparametric components in the model. The method can also be applied to two-sample problems to test the difference of lifetime distributions. Simulation results and a real data analysis are presented to illustrate the methods.

13.
Traditionally, time series analysis involves building an appropriate model and using either parametric or nonparametric methods to make inference about the model parameters. Motivated by recent developments in dimension reduction for time series, this article presents an empirical application of sufficient dimension reduction (SDR) to nonlinear time series modelling. Here, we use the time series central subspace as a tool for SDR and estimate it using a mutual information index. In particular, to reduce the computational complexity in time series, we propose an efficient method for estimating the minimal dimension and lag using a modified Schwarz–Bayesian criterion when either the dimension or the lag is unknown. Through simulations and real data analysis, the approach presented in this article is shown to perform well in autoregression and volatility estimation.

14.
The paper deals with the introduction of empirical prior information in the estimation of a candidate's ability within computerized adaptive testing (CAT). CAT is generally applied to improve the efficiency of test administration. In this paper, it is shown how the inclusion of background variables in both the initialization and the ability estimation can improve the accuracy of ability estimates. In particular, a Gibbs sampler scheme is proposed for the phases of interim and final ability estimation. Using both simulated and real data, it is shown that the method produces more accurate ability estimates, especially for short tests and when reproducing boundary abilities. This implies that operational problems of CAT related to weak measurement precision under particular conditions can be reduced as well. In the empirical examples, the methods were applied to CAT for intelligence testing in the area of personnel selection and to educational measurement. Other promising applications lie in the medical world, where testing efficiency is also of paramount importance.
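The simplest version of the idea, using a grid-based EAP estimate in place of the paper's Gibbs sampler, is to replace the default prior mean with one predicted from background variables (the item parameters below are hypothetical):

```python
import numpy as np

def eap_ability(u, a, b, prior_mean=0.0, prior_sd=1.0):
    """EAP ability estimate under a 2PL model with a normal prior on a grid.
    Passing a background-predicted prior_mean (instead of the default 0)
    injects the empirical prior information into the estimate."""
    theta = np.linspace(-4, 4, 161)
    P = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))    # grid x items
    like = np.prod(np.where(u == 1, P, 1 - P), axis=1)
    prior = np.exp(-0.5 * ((theta - prior_mean) / prior_sd) ** 2)
    post = like * prior
    return theta @ (post / post.sum())                     # posterior mean

a = np.array([1.2, 0.8, 1.5, 1.0])    # hypothetical discriminations
b = np.array([-0.5, 0.0, 0.4, 1.1])   # hypothetical difficulties
u = np.array([1, 1, 0, 0])            # observed responses on a short test
print(eap_ability(u, a, b))                      # default diffuse prior
print(eap_ability(u, a, b, prior_mean=0.8))      # background-informed prior
```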

15.
Markov chain Monte Carlo (MCMC) algorithms have been shown to be useful for estimation of complex item response theory (IRT) models. Although an MCMC algorithm can be very useful, it also requires care in use and interpretation of results. In particular, MCMC algorithms generally make extensive use of priors on model parameters. In this paper, MCMC estimation is illustrated using a simple mixture IRT model, a mixture Rasch model (MRM), to demonstrate how the algorithm operates and how results may be affected by some commonly used priors. Priors on the probabilities of mixtures, label switching, model selection, metric anchoring, and implementation of the MCMC algorithm using WinBUGS are described, and their effects on parameter recovery in practical testing situations are illustrated. In addition, an example is presented in which an MRM is fitted to a set of educational test data using the MCMC algorithm, and a comparison with results from three existing maximum likelihood estimation methods is illustrated.
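For intuition about how such a sampler operates, here is a bare-bones random-walk Metropolis sampler for an ordinary (non-mixture) Rasch model in NumPy rather than WinBUGS, with normal priors anchoring the latent metric; it is a didactic stand-in, not the MRM estimation described above:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated Rasch data: P(X_ij = 1) = logistic(theta_i - beta_j)
n, J = 150, 8
theta_true = rng.normal(size=n)
beta_true = np.linspace(-1.2, 1.2, J)
X = (rng.uniform(size=(n, J))
     < 1 / (1 + np.exp(-(theta_true[:, None] - beta_true)))).astype(int)

def loglik(theta, beta, axis):
    """Bernoulli log-likelihood contributions, summed per item (axis=0) or
    per person (axis=1); the model factorizes, so elementwise Metropolis
    accept/reject steps are valid."""
    eta = theta[:, None] - beta
    return np.sum(X * eta - np.log1p(np.exp(eta)), axis=axis)

theta, beta = np.zeros(n), np.zeros(J)
keep = []
for it in range(3000):
    # Random-walk Metropolis on item difficulties, N(0, 2^2) prior
    prop = beta + 0.2 * rng.normal(size=J)
    logr = loglik(theta, prop, 0) - loglik(theta, beta, 0) + (beta**2 - prop**2) / 8
    beta = np.where(np.log(rng.uniform(size=J)) < logr, prop, beta)
    # Same move on abilities; the N(0, 1) prior also fixes the latent metric
    prop = theta + 0.5 * rng.normal(size=n)
    logr = loglik(prop, beta, 1) - loglik(theta, beta, 1) + (theta**2 - prop**2) / 2
    theta = np.where(np.log(rng.uniform(size=n)) < logr, prop, theta)
    if it >= 1000:
        keep.append(beta.copy())

print("posterior mean difficulties:", np.mean(keep, axis=0).round(2))
```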

16.
The incidence of most diseases is low enough that in large populations the number of new cases may be considered a Poisson variate. This paper explores models and methods for analyzing such data. Specific cases are the estimation and testing of ratios and of cross-product ratios, both simple and stratified. We assume the Poisson means are exponential functions of the relevant parameters. The resulting sets of sufficient statistics are partitioned into a test statistic and a vector of statistics related to the nuisance parameters. The methods derived are based on the conditional distribution of the test statistic given the other sufficient statistics. The analyses of stratified cross-product ratios are seen to be analogues of the noncentral distribution associated with the analysis of the common odds ratio in several 2×2 tables. The various methods are illustrated in numerical examples involving incidence rates of cancer in two metropolitan areas, adjusting for both age and sex.
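The basic conditional argument being generalized is the classical one for two Poisson counts: conditionally on their total, one count is binomial, so an exact test of the rate ratio reduces to a binomial test. A sketch (the counts and exposures are made up):

```python
from scipy.stats import binomtest

def poisson_ratio_test(x1, x2, t1=1.0, t2=1.0, ratio0=1.0):
    """Exact conditional test of H0: lambda1/lambda2 = ratio0 for two Poisson
    counts with exposures t1, t2: given the total x1 + x2, the count x1 is
    binomial with success probability ratio0*t1 / (ratio0*t1 + t2)."""
    p0 = ratio0 * t1 / (ratio0 * t1 + t2)
    return binomtest(x1, x1 + x2, p0)

# Hypothetical incidence data: 28 cases per 100k person-years in area A
# versus 18 cases per 120k person-years in area B.
print(poisson_ratio_test(28, 18, t1=1.0, t2=1.2).pvalue)
```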

17.
For semiparametric models, interval estimation and hypothesis testing based on the information matrix for the full model is a challenge because of its potentially unlimited dimension. Use of the profile information matrix for a small set of parameters of interest is an appealing alternative. Existing approaches for estimating the profile information matrix are either subject to the curse of dimensionality, or are ad hoc and approximate and can be unstable and numerically inefficient. We propose a numerically stable and efficient algorithm that delivers an exact observed profile information matrix for the regression coefficients in the class of Nonlinear Transformation Models [A. Tsodikov (2003) J R Statist Soc Ser B 65:759-774]. The algorithm deals with the curse of dimensionality and requires neither large matrix inverses nor explicit expressions for the profile surface.

18.
Chen and Balakrishnan [Chen, G. and Balakrishnan, N., 1995, A general purpose approximate goodness-of-fit test. Journal of Quality Technology, 27, 154–161] proposed an approximate method of goodness-of-fit testing that avoids the use of extensive tables. This procedure first transforms the data to normality and subsequently applies the classical tests for normality based on the empirical distribution function and its critical points. In this paper, we investigate the potential of this method in comparison to a corresponding goodness-of-fit test which, instead of the empirical distribution function, utilizes the empirical characteristic function. Both methods are fully general, as they may be applied to arbitrary laws with a continuous distribution function, provided that an efficient method of estimation exists for the parameters of the hypothesized distribution.
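The general recipe being compared can be sketched directly: fit the hypothesized family, apply the probability integral transform, map to normal scores, and run an EDF normality test. (Chen and Balakrishnan give the exact procedure and critical points; this sketch uses stock Anderson-Darling critical values, so it is approximate.)

```python
import numpy as np
from scipy import stats

def transform_to_normality_gof(x, family=stats.gamma):
    """Fit the hypothesized family by ML, transform the data to normality
    via the fitted CDF and the normal quantile function, then apply an
    EDF-based normality test (Anderson-Darling here)."""
    params = family.fit(x)                       # efficient estimation step
    u = family.cdf(x, *params)                   # approx. uniform under H0
    z = stats.norm.ppf(np.clip(u, 1e-10, 1 - 1e-10))
    return stats.anderson(z, dist="norm")

rng = np.random.default_rng(8)
x = rng.gamma(shape=2.0, scale=1.5, size=200)    # data truly gamma
print(transform_to_normality_gof(x))             # should not reject
```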

19.
Sliced average variance estimation is one of many methods for estimating the central subspace. It has been shown to be more comprehensive than sliced inverse regression in the sense that it consistently estimates the central subspace under mild conditions, while sliced inverse regression may estimate only a proper subset of the central subspace. In this paper we extend this method to regressions with qualitative predictors. We also provide tests of dimension and a marginal coordinate hypothesis test. We apply the method to a data set concerning lakes infested by Eurasian watermilfoil, and compare this new method to the partial inverse regression estimator.
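A textbook implementation of sliced average variance estimation (without the paper's extension to qualitative predictors) illustrates why it is more comprehensive: with a symmetric link such as y = x_1^2, slice means vanish and SIR fails, but slice variances still carry the direction:

```python
import numpy as np

def save_directions(X, y, n_slices=5, d=1):
    """Sliced average variance estimation: standardize X, slice on y,
    average (I - Cov(Z | slice))^2, and map the top eigenvectors back."""
    n, p = X.shape
    w, V = np.linalg.eigh(np.cov(X, rowvar=False))
    Sinv_half = V @ np.diag(w**-0.5) @ V.T       # Sigma^{-1/2}
    Z = (X - X.mean(axis=0)) @ Sinv_half
    M = np.zeros((p, p))
    for idx in np.array_split(np.argsort(y), n_slices):
        A = np.eye(p) - np.cov(Z[idx], rowvar=False)
        M += (len(idx) / n) * (A @ A)
    _, evecs = np.linalg.eigh(M)
    return Sinv_half @ evecs[:, ::-1][:, :d]     # directions on the X scale

rng = np.random.default_rng(9)
X = rng.normal(size=(400, 6))
y = X[:, 0] ** 2 + 0.2 * rng.normal(size=400)    # symmetric in x1: SIR is blind
print(save_directions(X, y).ravel().round(2))    # loads mainly on x1
```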

20.
The diagnostic odds ratio is defined as the ratio of the odds of a positive diagnostic test result in the diseased population relative to that in the non-diseased population. It is a function of sensitivity and specificity and can be seen as an indicator of diagnostic accuracy for the evaluation of a biomarker or test. The naïve estimator of the diagnostic odds ratio fails when either sensitivity or specificity is close to one, which drives the denominator of the diagnostic odds ratio to zero. We propose several methods to adjust for this situation. Agresti and Coull's adjustment is a common and straightforward remedy for extreme binomial proportions. Alternatively, estimation methods based on a more advanced sampling design can be applied, which systematically selects samples from the underlying population based on judgment ranks. Under such a design, the odds can be estimated by a sum of indicator functions, which avoids division by zero and provides a valid estimate. The asymptotic mean and variance of the proposed estimators are derived. All methods are readily applied to confidence interval estimation and hypothesis testing for the diagnostic odds ratio. A simulation study is conducted to compare the efficiency of the proposed methods. Finally, the proposed methods are illustrated using a real dataset.
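A sketch of the adjusted estimator with a log-scale confidence interval follows; the 0.5-per-cell correction shown is the classical Haldane-Anscombe version, used here as a simple stand-in for the Agresti-Coull-type adjustment the paper discusses:

```python
import numpy as np
from scipy.stats import norm

def adjusted_dor(tp, fp, fn, tn, alpha=0.05, add=0.5):
    """Diagnostic odds ratio with `add` added to every cell so the estimate
    stays finite when a cell is zero, plus a log-scale Wald interval with
    the standard variance 1/a + 1/b + 1/c + 1/d."""
    a, b, c, d = (v + add for v in (tp, fp, fn, tn))
    dor = (a * d) / (b * c)
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = norm.ppf(1 - alpha / 2)
    return dor, (dor * np.exp(-z * se), dor * np.exp(z * se))

# Perfect observed specificity (FP = 0) breaks the naive estimator:
print(adjusted_dor(tp=45, fp=0, fn=5, tn=50))
```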
