Similar Articles
Found 20 similar articles.
1.
Recently, the empirical best linear unbiased predictor has come into wide use as a practical approach to small area inference. It is also of interest to construct empirical prediction intervals; however, it is not clear which of the several existing prediction intervals should be used. In this article, we first obtain an empirical prediction interval by using the residual maximum likelihood method to estimate the unknown model variance parameters. We then compare this interval with other intervals that also use residual maximum likelihood estimation. In addition, several parametric bootstrap methods for constructing empirical prediction intervals are compared in a simulation study.
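As a hedged illustration of the parametric-bootstrap idea mentioned here, the following toy sketch builds a prediction interval for a future draw from a fitted normal model; the plug-in normal model, sample sizes and seed are illustrative assumptions, and this is not the paper's REML-based small area procedure.

```python
# A toy parametric-bootstrap prediction interval: fit a normal model,
# re-estimate the parameters on bootstrap samples generated under the
# fitted model, and take percentiles of simulated future observations.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(10.0, 2.0, size=50)           # observed sample (simulated)
n = y.size
mu_hat, sigma_hat = y.mean(), y.std(ddof=1)  # plug-in estimates

B = 5000
future = np.empty(B)
for b in range(B):
    # re-estimate parameters from a bootstrap sample under the fitted
    # model, then draw one future observation from the re-fitted model
    yb = rng.normal(mu_hat, sigma_hat, size=n)
    future[b] = rng.normal(yb.mean(), yb.std(ddof=1))

lo, hi = np.percentile(future, [2.5, 97.5])
print(f"95% parametric-bootstrap PI: ({lo:.2f}, {hi:.2f})")
```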

2.
This article examines the use of regression analysis for allocating indirect costs. When multiple regression is used to estimate the weights of several allocation factors, conventional standard errors and correlation coefficients can be misleading with respect to the statistical precision of the cost allocations. This article develops alternative allocation approaches and measures of precision that use linear prediction theory and Bayesian inference. The proposed methods are illustrated using a university indirect cost study.
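As a rough sketch of the basic setup, the snippet below regresses simulated indirect costs on three made-up allocation factors via least squares; all factor values and weights are hypothetical, and the paper's linear-prediction and Bayesian refinements are not implemented.

```python
# A minimal sketch of estimating allocation weights by multiple regression:
# indirect cost per unit regressed on candidate allocation factors.
import numpy as np

rng = np.random.default_rng(1)
n = 40
X = rng.uniform(1, 10, size=(n, 3))          # allocation factors per unit
true_w = np.array([3.0, 1.5, 0.5])           # hypothetical cost per factor unit
cost = X @ true_w + rng.normal(0, 1.0, n)    # observed indirect cost

w_hat, *_ = np.linalg.lstsq(X, cost, rcond=None)
alloc = X @ w_hat                            # allocated cost per unit
print("estimated weights:", np.round(w_hat, 2))
print("allocated shares :", np.round(alloc / alloc.sum(), 3)[:5])
```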

3.
When a number of distinct models contend for use in prediction, the choice of a single model can offer rather unstable predictions. In regression, stochastic search variable selection with Bayesian model averaging offers a cure for this robustness issue but at the expense of requiring very many predictors. Here we look at Bayes model averaging incorporating variable selection for prediction. This offers similar mean-square errors of prediction but with a vastly reduced predictor space. This can greatly aid the interpretation of the model. It also reduces the cost if measured variables have costs. The development here uses decision theory in the context of the multivariate general linear model. In passing, this reduced predictor space Bayes model averaging is contrasted with single-model approximations. A fast algorithm for updating regressions in the Markov chain Monte Carlo searches for posterior inference is developed, allowing many more variables than observations to be contemplated. We discuss the merits of absolute rather than proportionate shrinkage in regression, especially when there are more variables than observations. The methodology is illustrated on a set of spectroscopic data used for measuring the amounts of different sugars in an aqueous solution.
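The following toy sketch illustrates a stochastic search over variable-inclusion vectors in the spirit of Bayesian model averaging with variable selection; it scores models by BIC as a crude stand-in for the marginal likelihood and is not the paper's multivariate decision-theoretic algorithm.

```python
# A toy Metropolis search over variable-inclusion masks ("MC3"-style),
# using exp(-(BIC_new - BIC_old)/2) as an approximate acceptance ratio.
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 8
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)

def bic(mask):
    k = int(mask.sum())
    if k == 0:
        resid = y - y.mean()
    else:
        beta, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
        resid = y - X[:, mask] @ beta
    return n * np.log(resid @ resid / n) + k * np.log(n)

mask = np.zeros(p, dtype=bool)
score = bic(mask)
visits = np.zeros(p)
for it in range(5000):
    j = rng.integers(p)                      # propose flipping one variable
    cand = mask.copy(); cand[j] = ~cand[j]
    s = bic(cand)
    if s <= score or rng.random() < np.exp((score - s) / 2):
        mask, score = cand, s
    visits += mask                           # posterior inclusion frequency

print("inclusion frequencies:", np.round(visits / 5000, 2))
```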

4.
This paper presents an approach for constructing prediction intervals for any given distribution. The approach is based on the principle of fiducial inference. We use several examples, including the normal, binomial, exponential, gamma, and Weibull distributions, to illustrate the proposed procedure.
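For the normal distribution the fiducial predictive distribution is known to reproduce the classical t prediction interval, which the short sketch below computes directly; this is one familiar special case, not the paper's general procedure.

```python
# Fiducial prediction interval for a future normal observation, which
# coincides with ybar +/- t_{n-1,0.975} * s * sqrt(1 + 1/n).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
y = rng.normal(5.0, 1.0, size=20)
n, ybar, s = y.size, y.mean(), y.std(ddof=1)
t = stats.t.ppf(0.975, df=n - 1)
half = t * s * np.sqrt(1 + 1 / n)
print(f"95% fiducial/t prediction interval: ({ybar - half:.2f}, {ybar + half:.2f})")
```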

5.
Estimating standard errors for diagnostic accuracy measures can be challenging under complicated models. Bootstrap methods address this difficulty by approximating the sampling distribution with resampled empirical distributions. We consider two cases where bootstrap methods successfully improve our knowledge of the sampling variability of diagnostic accuracy estimators. The first application is inference for the area under the ROC curve (AUC) resulting from a functional logistic regression model, a flexible device for describing the relationship between a dichotomous response and multiple covariates. We use this regression method to model the predictive effects of multiple independent variables on the occurrence of a disease, and accuracy measures such as the AUC are derived from the functional regression. Asymptotic results for the empirical estimators are provided to facilitate inference. The second application is testing the difference of two weighted areas under the ROC curve (WAUC) from a paired two-sample study. The correlation between the two WAUCs complicates the asymptotic distribution of the test statistic, so we employ bootstrap methods to obtain satisfactory inference. Simulations and examples are supplied in this article to confirm the merits of the bootstrap methods.
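A minimal sketch of the first application's flavour: a percentile bootstrap confidence interval for the AUC of a fitted classifier, using plain logistic regression rather than the paper's functional logistic regression; the data and model are simulated assumptions.

```python
# Percentile bootstrap CI for the AUC of fixed fitted scores.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 300
X = rng.normal(size=(n, 4))
prob = 1 / (1 + np.exp(-(X[:, 0] - 0.8 * X[:, 1])))
y = rng.binomial(1, prob)

score = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
aucs = []
for _ in range(1000):
    idx = rng.integers(0, n, n)              # resample cases with replacement
    if len(np.unique(y[idx])) < 2:           # the AUC needs both classes
        continue
    aucs.append(roc_auc_score(y[idx], score[idx]))
lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUC = {roc_auc_score(y, score):.3f}, 95% bootstrap CI ({lo:.3f}, {hi:.3f})")
```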

6.
The use of surrogate variables has been proposed as a means to capture, for a given observed set of data, sources driving the dependency structure among high-dimensional sets of features and remove the effects of those sources and their potential negative impact on simultaneous inference. In this article we illustrate the potential effects of latent variables on testing dependence and the resulting impact on multiple inference, we briefly review the method of surrogate variable analysis proposed by Leek and Storey (PNAS 2008; 105:18718-18723), and assess that method via simulations intended to mimic the complexity of feature dependence observed in real-world microarray data. The method is also assessed via application to a recent Merck microarray data set. Both simulation and case study results indicate that surrogate variable analysis can offer a viable strategy for tackling the multiple testing dependence problem when the features follow a potentially complex correlation structure, yielding improvements in the variability of false positive rates and increases in power.
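A rough sketch of the surrogate-variable idea, not Leek and Storey's exact algorithm: residualize the feature matrix on the primary variable, then take leading singular vectors of the residuals as surrogate covariates for the latent structure.

```python
# Recover a batch-like latent factor from the residuals of a feature
# matrix after regressing out the primary variable of interest.
import numpy as np

rng = np.random.default_rng(10)
n, p = 20, 200                               # samples x features
group = np.repeat([0.0, 1.0], n // 2)        # primary variable
latent = rng.normal(size=n)                  # unobserved batch-like factor
E = (np.outer(group, rng.normal(size=p) * (rng.random(p) < 0.1))
     + np.outer(latent, rng.normal(size=p))
     + rng.normal(size=(n, p)))

X = np.column_stack([np.ones(n), group])     # design with the primary variable
H = X @ np.linalg.pinv(X)                    # hat matrix
R = E - H @ E                                # residual matrix
U, s, Vt = np.linalg.svd(R, full_matrices=False)
sv = U[:, 0]                                 # first surrogate variable
print("corr(surrogate, latent) =", round(np.corrcoef(sv, latent)[0, 1], 3))
```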

7.
We introduce a fully model-based approach for studying functional relationships between a multivariate circular dependent variable and several circular covariates, enabling inference about all model parameters and related prediction. Two multiple circular regression models are presented for this approach. First, for a univariate circular dependent variable, we propose the least circular mean-square error (LCMSE) estimation method, and asymptotic properties of the LCMSE estimators and inferential methods are developed and illustrated. Second, using a simulation study, we provide some practical suggestions for model selection between the two models. An illustrative example is given using a real data set from a protein structure prediction problem. Finally, a straightforward extension to the case of a multivariate dependent circular variable is provided.
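As a hedged sketch of circular regression under a circular loss, the snippet below minimizes sum(1 - cos(y - mu(x))) with a simple tangent link; the link function and optimizer are illustrative choices, not the paper's two models or its LCMSE asymptotics.

```python
# Fit a toy circular regression mu(x) = mu0 + 2*atan(b*x) by minimizing
# the circular loss sum(1 - cos(y - mu(x))).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
n = 200
x = rng.normal(size=n)
y = (0.5 + 2 * np.arctan(1.2 * x) + rng.vonmises(0, 4, n)) % (2 * np.pi)

def loss(par):
    mu0, b = par
    mu = mu0 + 2 * np.arctan(b * x)
    return np.sum(1 - np.cos(y - mu))        # wrap-invariant circular loss

fit = minimize(loss, x0=[0.0, 1.0])
print("estimated (mu0, b):", np.round(fit.x, 2))
```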

8.
Random forests are widely used in many research fields for prediction and interpretation purposes. Their popularity is rooted in several appealing characteristics, such as their ability to deal with high-dimensional data, complex interactions, and correlations between variables. Another important feature is that random forests provide variable importance measures that can be used to identify the most important predictor variables. However, although there are alternatives such as complete case analysis and imputation, existing methods for computing such measures cannot be applied straightforwardly when the data contain missing values. This paper presents a solution to this pitfall by introducing a new variable importance measure that is applicable to any kind of data, whether or not it contains missing values. An extensive simulation study shows that the new measure meets sensible requirements and has good variable ranking properties. An application to two real data sets also indicates that the new approach may provide a more sensible variable ranking than the widespread complete case analysis. Because it takes the occurrence of missing values into account, its results also differ from those obtained under multiple imputation.
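A sketch of the generic permutation importance measure computed on data that contain missing values, using a NaN-tolerant gradient-boosting learner; this baseline is not the paper's forest-specific proposal, but it shows the kind of ranking such measures produce.

```python
# Permutation importance on data with injected missingness, using a
# learner whose predict() tolerates NaN inputs.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(5)
n = 500
X = rng.normal(size=(n, 5))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)
X[rng.random(X.shape) < 0.15] = np.nan       # inject 15% missing values

model = HistGradientBoostingClassifier(random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for j, (m, s) in enumerate(zip(imp.importances_mean, imp.importances_std)):
    print(f"x{j}: {m:.3f} +/- {s:.3f}")
```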

9.
Exact and approximate Bayesian inference is developed for the prediction problem in finite populations under a linear functional superpopulation model. The models considered are the usual regression models involving two variables, X and Y, where the independent variable X is measured with error. The approach is based on the conditional distribution of Y given X and our predictor is the posterior mean of the quantity of interest (population total and population variance) given the observed data. Empirical investigations about optimal purposive samples and possible model misspecifications based on comparisons with the corresponding models where X is measured without error are also reported.

10.
A new modeling approach called ‘recursive segmentation’ is proposed to support the supervised exploration and identification of subgroups or clusters. It is based on the frameworks of recursive partitioning and the Patient Rule Induction Method (PRIM). By combining these methods, recursive segmentation aims to exploit their respective strengths while reducing their weaknesses. Consequently, recursive segmentation can be applied in a very general way, that is, in any (multivariate) regression, classification or survival (time-to-event) problem, using conditional inference, evolutionary learning or the CART algorithm, with predictor variables of any scale and with missing values. Furthermore, results of a synthetic example and a benchmark application study comprising 26 data sets suggest that recursive segmentation achieves competitive prediction accuracy and provides more accurate definitions of subgroups with less complex models than recursive partitioning and PRIM. An application to the German Breast Cancer Study Group data demonstrates the improved interpretability and reliability of the results produced by the new approach. The method is publicly available through the R package rseg (http://rseg.r-forge.r-project.org/).
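A toy version of the PRIM ‘peeling’ step that recursive segmentation builds on: repeatedly shave a fraction alpha off the box edge that most increases the mean response inside the box. The thresholds and data are illustrative, and this is not the rseg implementation.

```python
# PRIM-style peeling on a 2-D toy problem with a high-response corner.
import numpy as np

rng = np.random.default_rng(6)
n = 1000
X = rng.uniform(0, 1, size=(n, 2))
y = ((X[:, 0] > 0.6) & (X[:, 1] > 0.6)).astype(float) + rng.normal(0, 0.1, n)

box = np.array([[0.0, 1.0], [0.0, 1.0]])     # [low, high] per dimension
alpha, min_support = 0.1, 0.05

def in_box(b):
    return np.all((X >= b[:, 0]) & (X <= b[:, 1]), axis=1)

while in_box(box).mean() > min_support:
    best_mean, best_box = -np.inf, None
    for j in range(2):
        vals = X[in_box(box), j]
        for side in (0, 1):                  # try peeling the low or high edge
            cand = box.copy()
            if side == 0:
                cand[j, 0] = np.quantile(vals, alpha)
            else:
                cand[j, 1] = np.quantile(vals, 1 - alpha)
            m = in_box(cand)
            if m.mean() > min_support and y[m].mean() > best_mean:
                best_mean, best_box = y[m].mean(), cand
    if best_box is None:                     # no admissible peel left
        break
    box = best_box

print("final box:\n", np.round(box, 2))
print("mean response inside:", round(y[in_box(box)].mean(), 3))
```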

11.
The purpose of this paper is to discuss response surface designs for multivariate generalized linear models (GLMs). Such models are considered whenever several response variables can be measured for each setting of a group of control variables, and the response variables are adequately represented by GLMs. The mean-squared error of prediction (MSEP) matrix is used to assess the quality of prediction associated with a given design. The MSEP incorporates both the prediction variance and the prediction bias, which results from using maximum likelihood estimates of the parameters of the fitted linear predictor. For a given design, quantiles of a scalar-valued function of the MSEP are obtained within a certain region of interest. The quantiles depend on the unknown parameters of the linear predictor. The dispersion of these quantiles over the space of the unknown parameters is determined and then depicted by the so-called quantile dispersion graphs. An application of the proposed methodology is presented using the special case of the bivariate binary distribution.

12.
In the problem of parametric statistical inference with a finite parameter space, we propose some simple rules for defining posterior upper and lower probabilities directly from the observed likelihood function, without using any prior information. The rules satisfy the likelihood principle and a basic consistency principle ('avoiding sure loss'), they produce vacuous inferences when the likelihood function is constant, and they have other symmetry, monotonicity and continuity properties. One of the rules also satisfies fundamental frequentist principles. The rules can be used to eliminate nuisance parameters, and to interpret the likelihood function and to use it in making decisions. To compare the rules, they are applied to the problem of sampling from a finite population. Our results indicate that there are objective statistical methods which can reconcile three general approaches to statistical inference: likelihood inference, coherent inference and frequentist inference.
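One concrete rule of this kind (an illustrative choice on my part, since the paper compares several) takes the upper probability of a hypothesis to be the supremum of the relative likelihood over it, and the lower probability to be one minus the supremum over its complement, as sketched below for a binomial parameter.

```python
# Likelihood-based upper/lower probabilities for a binomial parameter
# (one normalized-likelihood rule; not necessarily the paper's rules).
import numpy as np

n, k = 20, 14                                # 14 successes in 20 trials
theta = np.linspace(1e-4, 1 - 1e-4, 10_001)
loglik = k * np.log(theta) + (n - k) * np.log(1 - theta)
rel = np.exp(loglik - loglik.max())          # relative likelihood in [0, 1]

H = theta > 0.5                              # hypothesis: theta > 1/2
upper = rel[H].max()                         # sup of relative likelihood on H
lower = 1 - rel[~H].max()                    # 1 - sup on the complement
print(f"lower = {lower:.3f}, upper = {upper:.3f}")
```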

13.
Probabilistic graphical models offer a powerful framework for accounting for the dependence structure between variables, which is represented as a graph. However, the dependence between variables may render inference tasks intractable. In this paper, we review techniques for exact inference that exploit the graph structure, borrowed from optimisation and computer science. They are built on the principle of variable elimination, whose complexity is dictated in an intricate way by the order in which variables are eliminated. The so-called treewidth of the graph characterises this algorithmic complexity: low-treewidth graphs can be processed efficiently. The first point we illustrate is therefore that, for inference in graphical models, the number of variables is not the limiting factor, and it is worth checking the width of several tree decompositions of the graph before resorting to approximate methods. We show how algorithms providing an upper bound on the treewidth can be exploited to derive a 'good' elimination order that enables exact inference. The second point is that when the treewidth is too large, algorithms for approximate inference linked to the principle of variable elimination, such as loopy belief propagation and variational approaches, can lead to accurate results while being much less time consuming than Monte Carlo approaches. We illustrate the reviewed techniques on benchmark inference problems in genetic linkage analysis and computer vision, as well as on hidden variable restoration in coupled Hidden Markov Models.
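A compact sketch of the standard min-fill heuristic for choosing an elimination order: eliminate, at each step, the vertex whose neighbourhood needs the fewest fill edges, and record the largest clique created along the way, which (minus one) upper-bounds the treewidth. The graph used is a small illustrative example.

```python
# Min-fill elimination order and the induced width it produces.
import itertools

def min_fill_order(adj):
    """adj: dict vertex -> set of neighbours (undirected graph)."""
    adj = {v: set(nb) for v, nb in adj.items()}
    order, width = [], 0
    while adj:
        def fill_cost(v):
            nb = adj[v]
            return sum(1 for a, b in itertools.combinations(nb, 2)
                       if b not in adj[a])
        v = min(adj, key=fill_cost)          # vertex needing fewest fill edges
        nb = adj[v]
        width = max(width, len(nb))          # clique size minus one at this step
        for a, b in itertools.combinations(nb, 2):
            adj[a].add(b); adj[b].add(a)     # add fill edges among neighbours
        for u in nb:
            adj[u].discard(v)
        del adj[v]
        order.append(v)
    return order, width

g = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}  # a 4-cycle: treewidth 2
print(min_fill_order(g))
```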

14.
This paper develops inference for the significance of features such as peaks and valleys observed in additive modeling, extending the SiZer-type methodology of Chaudhuri and Marron (1999) and Godtliebsen et al. (2002, 2004) to the case where the outcome is discrete. We consider the problem of determining the significance of features such as peaks or valleys in observed covariate effects, both when the main predictor of interest is univariate and when it is geographical location, in which case the features of interest include peaks, inclines, ridges and valleys. We work with low-rank radial spline smoothers to allow the handling of sparse designs and large sample sizes. Reducing the problem to a generalised linear mixed model (GLMM) framework enables the derivation of simulation-based critical value approximations and guards against the problem of multiple inferences over a range of predictor values. Such a reduction also allows easy adjustment for confounders, including those with an unknown or complex effect on the outcome. A simulation study indicates that our method has satisfactory power. Finally, we illustrate our methodology on several data sets.

15.
A nonparametric inference algorithm developed by Davis and Geman (1983) is extended and applied to a medical prediction problem. The algorithm employs an estimation procedure for acquiring pairwise statistics among variables of a binary data set, allows for the data-driven creation of interaction terms among the variables, and employs a decision rule which asymptotically gives the minimum expected error. The inference procedure was designed for large data sets but has been extended via the method of cross-validation to encompass smaller data sets.

16.
Statistically matched files are created in an attempt to solve the practical problem that exists when no single file has the full set of variables needed for drawing important inferences. Previous methods of file matching are reviewed, and the method of file concatenation with adjusted weights and multiple imputations is described and illustrated on an artificial example. A major benefit of this approach is the ability to display sensitivity of inference to untestable assumptions being made when creating the matched file.

17.
Recently, the orthodox best linear unbiased predictor (BLUP) method was introduced for inference about random effects in Tweedie mixed models. With the use of h-likelihood, we illustrate that the standard likelihood procedures, developed for inference about fixed unknown parameters, can be used for inference about random effects. We show that the necessary standard error for the prediction interval of the random effect can be computed from the Hessian matrix of the h-likelihood. We also show numerically that the h-likelihood provides a prediction interval that maintains a more precise coverage probability than the BLUP method.
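A sketch of the classical BLUP being compared here, in a normal one-way random-effects toy model with known variance components (the paper itself works with Tweedie mixed models and h-likelihood): the group-mean deviation is shrunk by the usual ratio of variance components.

```python
# BLUP of u_i in y_ij = mu + u_i + e_ij with known variances:
# u_hat_i = gamma * (ybar_i - mu_hat), gamma = s_u^2 / (s_u^2 + s_e^2/n_i).
import numpy as np

rng = np.random.default_rng(7)
m, n_i = 10, 6
sigma_u, sigma_e, mu = 1.0, 2.0, 5.0
u = rng.normal(0, sigma_u, m)                # true random effects
y = mu + u[:, None] + rng.normal(0, sigma_e, (m, n_i))

ybar = y.mean(axis=1)
mu_hat = y.mean()                            # simple overall-mean estimate
gamma = sigma_u**2 / (sigma_u**2 + sigma_e**2 / n_i)
u_blup = gamma * (ybar - mu_hat)             # shrunken prediction of u_i
print("true u:", np.round(u[:3], 2))
print("BLUP u:", np.round(u_blup[:3], 2))
```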

18.
Statistical agencies are interested in reporting precise estimates of linear parameters for small areas. This goal can be achieved by using model-based inference. Random regression coefficient models provide a flexible way of modelling the relationship between the target and the auxiliary variables, so empirical best linear unbiased predictor (EBLUP) estimators based on these models are introduced. A closed-formula procedure to estimate the mean-squared error of the EBLUP estimators is also given and studied empirically. Results of several simulation studies are reported, along with an application to the estimation of household normalized net annual incomes in the Spanish Living Conditions Survey.
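A hedged sketch of an area-level EBLUP of the Fay-Herriot type, with a crude moment estimate of the area-level variance; the paper's random regression coefficient models and closed-form MSE estimator are not reproduced here.

```python
# EBLUP_i = gamma_i * y_i + (1 - gamma_i) * x_i' beta_hat,
# gamma_i = s_u^2 / (s_u^2 + D_i), with known sampling variances D_i.
import numpy as np

rng = np.random.default_rng(8)
m = 30                                       # number of small areas
x = np.column_stack([np.ones(m), rng.uniform(0, 2, m)])
beta = np.array([1.0, 2.0])
D = rng.uniform(0.3, 0.8, m)                 # known sampling variances
u = rng.normal(0, np.sqrt(0.5), m)           # area-level random effects
y = x @ beta + u + rng.normal(0, np.sqrt(D))

b_ols, *_ = np.linalg.lstsq(x, y, rcond=None)
resid = y - x @ b_ols
# crude moment estimate of sigma_u^2 (a refined Prasad-Rao estimator
# would also subtract leverage terms)
s2u = max(0.0, (resid @ resid - D.sum()) / (m - x.shape[1]))
gamma = s2u / (s2u + D)
eblup = gamma * y + (1 - gamma) * (x @ b_ols)
print("sigma_u^2 estimate:", round(s2u, 3))
print("first EBLUPs:", np.round(eblup[:5], 2))
```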

19.
Modern statistical methods for incomplete data have been increasingly applied to a wide variety of substantive problems. Similarly, receiver operating characteristic (ROC) analysis, a method used to evaluate diagnostic tests and biomarkers in medical research, has become increasingly popular in both its development and its application. While missing-data methods have been applied in ROC analysis, the impact of model mis-specification and of the assumptions underlying the missing data (e.g. missing at random) has not been thoroughly studied. In this work, we study the performance of multiple imputation (MI) inference in ROC analysis. In particular, we investigate parametric and non-parametric techniques for MI inference under common missingness mechanisms. Our results show that, depending on the coherency of the imputation model with the underlying data generation mechanism, MI generally leads to well-calibrated inferences under ignorable missingness mechanisms.
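A generic multiple-imputation recipe for the AUC, not the paper's specific study: impute the covariates M times, refit, and pool the point estimates (a full analysis would also pool within-imputation variances by Rubin's rules).

```python
# Pool AUC estimates across multiply imputed data sets.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(12)
n = 400
X = rng.normal(size=(n, 3))
y = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0] + 0.5 * X[:, 1])))
X[rng.random(X.shape) < 0.2] = np.nan        # 20% values missing (MCAR)

M, aucs = 10, []
for m in range(M):
    Xc = IterativeImputer(sample_posterior=True,
                          random_state=m).fit_transform(X)
    s = LogisticRegression().fit(Xc, y).predict_proba(Xc)[:, 1]
    aucs.append(roc_auc_score(y, s))

print(f"pooled AUC = {np.mean(aucs):.3f}, "
      f"between-imputation SD = {np.std(aucs, ddof=1):.3f}")
```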

20.
In several sciences, especially when dealing with performance evaluation, complex testing problems may arise due in particular to the presence of multidimensional categorical data. In such cases the application of nonparametric methods can represent a reasonable approach. In this paper, we consider the problem of testing whether a “treatment” is stochastically larger than a “control” when univariate and multivariate ordinal categorical data are present. We propose a solution based on the nonparametric combination of dependent permutation tests (Pesarin in Multivariate permutation test with application to biostatistics. Wiley, Chichester, 2001), on variable transformation, and on tests on moments. The solution requires the transformation of categorical response variables into numeric variables and the breaking up of the original problem’s hypotheses into partial sub-hypotheses regarding the moments of the transformed variables. This type of problem is considered to be almost impossible to analyze within likelihood ratio tests, especially in the multivariate case (Wang in J Am Stat Assoc 91:1676–1683, 1996). A comparative simulation study is also presented along with an application example.
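A minimal sketch of the nonparametric combination idea: permutation tests on the first two moments of a numerically scored ordinal response, with the partial p-values recombined by Fisher's rule inside the same set of permutations; the category scores and sample sizes are illustrative.

```python
# Nonparametric combination (NPC) of two moment-based permutation tests.
import numpy as np

rng = np.random.default_rng(9)
treat = rng.choice([1, 2, 3, 4], 60, p=[0.1, 0.2, 0.3, 0.4]).astype(float)
ctrl = rng.choice([1, 2, 3, 4], 60, p=[0.25, 0.25, 0.25, 0.25]).astype(float)
z = np.concatenate([treat, ctrl])
g = np.array([1] * 60 + [0] * 60)

def moment_stats(z, g):
    a, b = z[g == 1], z[g == 0]
    return np.array([a.mean() - b.mean(), (a**2).mean() - (b**2).mean()])

B = 2000
obs = moment_stats(z, g)
perm = np.array([moment_stats(z, rng.permutation(g)) for _ in range(B)])
# one-sided partial p-values ("treatment stochastically larger")
p_obs = (perm >= obs).mean(axis=0)
p_perm = np.array([(perm >= perm[b]).mean(axis=0) for b in range(B)])
fisher = lambda p: -2 * np.log(np.clip(p, 1e-12, 1)).sum()
p_comb = np.mean([fisher(pb) >= fisher(p_obs) for pb in p_perm])
print("partial p-values:", np.round(p_obs, 4), " combined p:", round(p_comb, 4))
```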
