Similar Documents
20 similar documents retrieved.
1.
We propose a novel, efficient approach for obtaining high-quality experimental designs for event-related functional magnetic resonance imaging (ER-fMRI), a popular brain mapping technique. Our proposed approach combines a greedy hill-climbing algorithm and a cyclic permutation method. When searching for optimal ER-fMRI designs, the proposed approach focuses only on a promising restricted class of designs with equal frequency of occurrence across stimulus types. The computational time is significantly reduced. We demonstrate that our proposed approach is very efficient compared with a recently proposed genetic algorithm approach. We also apply our approach in obtaining designs that are robust against misspecification of error correlations.
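As a rough illustration of the search strategy (greedy hill climbing over equal-frequency designs followed by a cyclic permutation step), the sketch below optimizes a toy efficiency criterion built from a crude boxcar convolution. The criterion, HRF stand-in, sequence length, and iteration count are hypothetical simplifications, not the design efficiency used in the paper.

```python
import numpy as np

def design_matrix(seq, hrf_len=5):
    # Toy design matrix: one indicator column per stimulus type, convolved with a
    # boxcar "HRF" (a crude stand-in for the ER-fMRI model used in the paper).
    seq = np.asarray(seq)
    types = sorted(set(seq.tolist()) - {0})           # 0 codes rest
    hrf = np.ones(hrf_len)
    cols = [np.convolve((seq == t).astype(float), hrf)[:len(seq)] for t in types]
    return np.column_stack(cols)

def efficiency(seq):
    # A-optimality style criterion: smaller average variance means higher efficiency.
    X = design_matrix(seq)
    return 1.0 / np.trace(np.linalg.inv(X.T @ X + 1e-8 * np.eye(X.shape[1])))

def search(seq, n_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    best, best_eff = list(seq), efficiency(seq)
    for _ in range(n_iter):                            # greedy hill climbing
        i, j = rng.integers(len(best), size=2)
        cand = best.copy()
        cand[i], cand[j] = cand[j], cand[i]            # swaps keep stimulus frequencies equal
        if (eff := efficiency(cand)) > best_eff:
            best, best_eff = cand, eff
    shifts = [best[k:] + best[:k] for k in range(len(best))]
    best = max(shifts, key=efficiency)                 # cyclic permutation step
    return best, efficiency(best)

seq = [1] * 20 + [2] * 20 + [0] * 20                   # two stimulus types plus rest
np.random.default_rng(1).shuffle(seq)
_, eff = search(seq)
print(round(eff, 4))
```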

2.
We often rely on the likelihood to obtain estimates of regression parameters, but it is not readily available for generalized linear mixed models (GLMMs). Inferences for the regression coefficients and the covariance parameters are key in these models. We present alternative approaches for analyzing binary data from a hierarchical structure that do not rely on any distributional assumptions: a generalized quasi-likelihood (GQL) approach and a generalized method of moments (GMM) approach. These are alternatives to the typical maximum-likelihood approximation approaches in the Statistical Analysis System (SAS), such as the Laplace approximation (LAP). We examine and compare the performance of the GQL and GMM approaches with multiple random effects to the LAP approach as implemented in SAS PROC GLIMMIX. The GQL approach tends to produce unbiased estimates, whereas the LAP approach can lead to highly biased estimates in certain scenarios. The GQL approach produces more accurate estimates of both the regression coefficients and the covariance parameters, with smaller standard errors, than the GMM approach. We find that both the GQL and GMM approaches are less likely to result in non-convergence than the LAP approach. A simulation study is conducted and a numerical example is presented for illustration.

3.
Simon's two-stage designs are widely used in clinical trials to assess the activity of a new treatment. In practice, the second-stage sample size often differs from the planned one, so the planned critical value for the second stage is no longer valid for statistical inference. Existing approaches for making statistical inference are either based on asymptotic methods or not optimal. We propose an approach that maximizes the power of a study while maintaining the type I error rate, where the type I error rate and power are calculated exactly from binomial distributions. The critical values of the proposed approach are searched numerically by an intelligent algorithm over the complete parameter space. The proposed approach is guaranteed to be at least as powerful as the conditional power approach, which is valid but not optimal, and its power gain can be substantial. We apply the proposed approach to a real Phase II clinical trial.
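To make the exact binomial calculation concrete, a minimal sketch follows: it computes the exact rejection probability of a two-stage design whose second-stage sample size differs from the planned one, and then picks the stage-2 critical value with maximal power subject to an exact alpha level. The design parameters and response rates are hypothetical, and the paper's own search covers a larger parameter space.

```python
from scipy.stats import binom

def reject_prob(n1, r1, n2, r, p):
    # Exact probability of declaring activity: continue past stage 1 if X1 > r1,
    # and reject H0 at the end of stage 2 if X1 + X2 > r, at response rate p.
    return sum(binom.pmf(x1, n1, p) * binom.sf(r - x1, n2, p)
               for x1 in range(r1 + 1, n1 + 1))

# Hypothetical design: stage 2 actually enrolled 30 patients instead of the planned size.
n1, r1, n2 = 19, 4, 30
p0, p1 = 0.20, 0.40

# Choose the stage-2 critical value with maximal power subject to the exact alpha level.
feasible = [r for r in range(r1, n1 + n2 + 1) if reject_prob(n1, r1, n2, r, p0) <= 0.05]
r_best = max(feasible, key=lambda r: reject_prob(n1, r1, n2, r, p1))
print(r_best,
      round(reject_prob(n1, r1, n2, r_best, p0), 4),   # exact type I error
      round(reject_prob(n1, r1, n2, r_best, p1), 4))   # exact power
```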

4.
Various methods have been proposed for smoothing under a monotonicity constraint. We review the literature and implement an approach to monotone smoothing with B-splines for a generalized linear model response. The approach is expressed as a quadratic programming problem and is easily solved using the statistical software R. In a simulation study, we find that the approach performs better than competing approaches and with much faster computation. The approach can also be used for smoothing under other shape constraints or mixed constraints. Supplementary materials, comprising the appendices and R code implementing the developed approach, are available online.
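A minimal sketch of the constrained-fit idea for a Gaussian response: a B-spline regression with coefficients constrained to be nondecreasing, which guarantees a monotone fitted curve. It uses a general-purpose solver rather than the authors' R quadratic-programming code, and the data, knots, and spline degree are assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 100))
y = np.sin(np.pi * x / 2) + rng.normal(scale=0.1, size=100)    # monotone signal plus noise

# Cubic B-spline design matrix obtained by evaluating the basis (identity coefficients).
degree, n_interior = 3, 8
knots = np.concatenate(([0.0] * degree, np.linspace(0, 1, n_interior), [1.0] * degree))
n_coef = len(knots) - degree - 1
B = BSpline(knots, np.eye(n_coef), degree)(x)                  # 100 x n_coef basis matrix

# Least squares subject to nondecreasing coefficients, which implies a nondecreasing spline.
obj = lambda c: np.sum((y - B @ c) ** 2)
cons = {"type": "ineq", "fun": np.diff}                        # diff(c) >= 0
c0 = np.linalg.lstsq(B, y, rcond=None)[0]                      # unconstrained start
res = minimize(obj, c0, constraints=[cons], method="SLSQP")
fit = B @ res.x                                                # monotone fitted values
print(bool(np.all(np.diff(fit) >= -1e-8)))
```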

5.
Network meta-analysis synthesizes several studies of multiple treatment comparisons to simultaneously provide inference for all treatments in the network. It can often strengthen inference on pairwise comparisons by borrowing evidence from other comparisons in the network. Current network meta-analysis approaches are derived from either conventional pairwise meta-analysis or hierarchical Bayesian methods. This paper introduces a new approach for network meta-analysis by combining confidence distributions (CDs). Instead of combining point estimators from individual studies in the conventional approach, the new approach combines CDs, which contain richer information than point estimators, and thus achieves greater efficiency in its inference. The proposed CD approach can efficiently integrate all studies in the network and provide inference for all treatments, even when individual studies contain only comparisons of subsets of the treatments. Through numerical studies with real and simulated data sets, the proposed approach is shown to outperform or at least equal the traditional pairwise meta-analysis and a commonly used Bayesian hierarchical model. Although the Bayesian approach may yield comparable results with a suitably chosen prior, it is highly sensitive to the choice of priors (especially for the between-trial covariance structure), which is often subjective. The CD approach is a general frequentist approach and is prior-free. Moreover, it can always provide a proper inference for all the treatment effects regardless of the between-trial covariance structure.
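For intuition about combining confidence distributions, the sketch below treats the simplest non-network case: a single pairwise comparison summarized by normal CDs from three studies, combined with inverse-variance weights into another normal CD. The numbers are hypothetical, and the paper's method combines CDs across an entire treatment network rather than one comparison.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical study-level effect estimates (e.g., log odds ratios) and standard errors.
d = np.array([0.30, 0.45, 0.10])
s = np.array([0.15, 0.20, 0.25])

# Each study's CD for the effect theta is N(d_i, s_i^2); inverse-variance combination
# of these normal CDs yields another normal CD.
w = 1.0 / s ** 2
theta_hat = np.sum(w * d) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))

combined_cd = lambda theta: norm.cdf(theta, loc=theta_hat, scale=se)
ci = norm.ppf([0.025, 0.975], loc=theta_hat, scale=se)    # 95% interval read off the CD
print(round(theta_hat, 3), np.round(ci, 3), round(1 - combined_cd(0.0), 3))
```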

6.
Whittemore (1981) proposed an approach for calculating the sample size needed to test hypotheses with specified significance and power against a given alternative in logistic regression with small response probability. Based on the covariate distribution, which may be either discrete or continuous, this approach first provides a simple closed-form approximation to the asymptotic covariance matrix of the maximum likelihood estimates, and then uses it to calculate the sample size needed to test a hypothesis about the parameter. Self et al. (1992) described a general approach for power and sample size calculations within the framework of generalized linear models, which include logistic regression as a special case. Their approach is based on an approximation to the distribution of the likelihood ratio statistic. Unlike the Whittemore approach, it is not limited to situations of small response probability; however, it is restricted to models with a finite number of covariate configurations. This study compares the two approaches to assess their accuracy in power and sample size calculations for logistic regression models with various response probabilities and covariate distributions. The results indicate that the Whittemore approach has a slight advantage in achieving the nominal power only in one case with small response probability; it is outperformed in all other cases with larger response probabilities. In general, the approach of Self et al. (1992) is recommended for all values of the response probability. However, its extension to logistic regression models with an infinite number of covariate configurations involves an arbitrary categorization decision and leads to a discrete approximation. As shown in this paper, the examined discrete approximations appear to be sufficiently accurate for practical purposes.
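Neither closed-form approximation is reproduced here; the sketch below is the brute-force Monte Carlo check that such approximations are ultimately judged against, simulating logistic-regression data at a candidate sample size and counting rejections. The coefficients, covariate distribution, and sample size are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

def simulated_power(n, beta0, beta1, n_sim=1000, alpha=0.05, seed=0):
    # Monte Carlo power for testing beta1 = 0 in a logistic regression with a single
    # standard-normal covariate; a brute-force check, not either analytic approximation.
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        x = rng.normal(size=n)
        p = 1 / (1 + np.exp(-(beta0 + beta1 * x)))
        y = rng.binomial(1, p)
        fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
        if fit.pvalues[1] < alpha:
            rejections += 1
    return rejections / n_sim

print(simulated_power(n=500, beta0=-2.2, beta1=0.4))    # small response probability case
```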

7.
The article describes an operational Bayesian approach to making inferences for the spectral density function for univariate autoregressive processes and for the AR operator of multivariate autoregressive processes. The derivation of the approach is described. Numerical examples, including the Wolfer Sunspot numbers, are used to demonstrate the practical usefulness of the approach.

8.
This article describes how a frequentist model averaging approach can be used for concentration–QT analyses in the context of thorough QTc studies. Based on simulations, we conclude that, starting from three candidate model families (linear, exponential, and Emax), the model averaging approach leads to treatment effect estimates with quite robust control of the type I error in nearly all simulated scenarios; in particular, with model averaging, the type I error appears less sensitive to model misspecification than with the widely used linear model. We also noticed few performance differences between the model averaging approach and the more classical model selection approach, but we believe that, although both can be recommended in practice, model averaging may be more appealing because of deficiencies of model selection pointed out in the literature. We think that a model averaging or model selection approach should be systematically considered for conducting concentration–QT analyses. Copyright © 2016 John Wiley & Sons, Ltd.
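A minimal sketch of the frequentist model-averaging idea: fit the three candidate families to hypothetical concentration and ΔQTc data, weight them by AIC, and average the predicted effect at a reference concentration. The parameterizations, weighting scheme, and reference concentration are generic assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
conc = np.sort(rng.uniform(0, 10, 120))
dqtc = 8 * conc / (3 + conc) + rng.normal(scale=2, size=120)       # hypothetical ΔQTc data

models = {  # the three candidate families named in the abstract (parameterizations assumed)
    "linear":      (lambda c, a, b: a + b * c,                       [0.0, 1.0]),
    "exponential": (lambda c, a, b, k: a + b * (1 - np.exp(-k * c)), [0.0, 5.0, 0.5]),
    "emax":        (lambda c, e0, em, ec50: e0 + em * c / (ec50 + c), [0.0, 5.0, 2.0]),
}

n, c_ref = len(conc), 8.0                       # reference concentration of interest
aic, pred = {}, {}
for name, (f, p0) in models.items():
    p, _ = curve_fit(f, conc, dqtc, p0=p0, maxfev=10000)
    rss = np.sum((dqtc - f(conc, *p)) ** 2)
    aic[name] = n * np.log(rss / n) + 2 * (len(p) + 1)   # +1 for the residual variance
    pred[name] = f(c_ref, *p)

rel = {m: np.exp(-0.5 * (a - min(aic.values()))) for m, a in aic.items()}
total = sum(rel.values())
weights = {m: r / total for m, r in rel.items()}
dqtc_avg = sum(weights[m] * pred[m] for m in models)     # model-averaged ΔQTc at c_ref
print({m: round(weights[m], 3) for m in models}, round(dqtc_avg, 2))
```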

9.
This article develops a control chart for the generalized variance. A Bayesian approach is used to incorporate parameter uncertainty. Our approach has two stages: (i) construction of the control chart, in which we use a predictive distribution based on a Bayesian approach to derive the rejection region, and (ii) evaluation of the control chart, in which we use a sampling-theory approach to examine its performance under various hypothetical specifications of the data-generating model.
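A simulation sketch of the two stages, assuming a vague inverse-Wishart prior: Phase I data give a posterior for the covariance matrix, the predictive distribution of the subgroup generalized variance |S| is simulated, and its upper quantile serves as the control limit. The prior, dimension, subgroup size, and limit are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)
p, m, n1 = 2, 5, 200                              # dimension, subgroup size, Phase I size
sigma_true = np.array([[1.0, 0.3], [0.3, 2.0]])
phase1 = rng.multivariate_normal(np.zeros(p), sigma_true, size=n1)

# Posterior for Sigma under an assumed vague inverse-Wishart prior (conjugate form).
resid = phase1 - phase1.mean(axis=0)
nu_n, psi_n = n1 + p + 1, resid.T @ resid + np.eye(p)

# Predictive distribution of the generalized variance |S| of a future subgroup of size m.
gv = []
for _ in range(5000):
    sigma = invwishart.rvs(df=nu_n, scale=psi_n, random_state=rng)
    sample = rng.multivariate_normal(np.zeros(p), sigma, size=m)
    gv.append(np.linalg.det(np.cov(sample, rowvar=False)))

ucl = np.quantile(gv, 0.995)                      # upper control limit (rejection region)
print(round(ucl, 3))
```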

10.
This paper proposes a new approach to the genetic algorithm (GA) based on two explicit rules from Mendel's experiments and Mendelian population genetics: the segregation and independent assortment of alleles. This new approach has been simulated for the optimization of certain test functions. The conceptual basis of the GA is improved by recasting it in a Mendelian framework. The new approach differs from the conventional one in its crossover, recombination, and mutation operators. The results obtained here agree with those of the conventional GA, and are even better in some cases. These results suggest that the new approach is overall more sensitive and accurate than the conventional one. Possible ways of improving the approach by including more genetic formulae in the code are also discussed.
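The abstract does not spell out the operators, so the sketch below is only one plausible reading of segregation and independent assortment in a GA: diploid individuals, one randomly segregated allele per locus from each parent, loci assorting independently, and conventional selection and mutation. The test function and all tuning constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                         # test function to minimize
    return np.sum(x ** 2)

def phenotype(ind):                    # diploid individual: 2 alleles per locus
    return ind.mean(axis=0)

def mate(p1, p2):
    # Segregation: each parent passes one randomly chosen allele per locus.
    # Independent assortment: the choice is made independently across loci.
    n_loci = p1.shape[1]
    child = np.empty((2, n_loci))
    child[0] = p1[rng.integers(2, size=n_loci), np.arange(n_loci)]
    child[1] = p2[rng.integers(2, size=n_loci), np.arange(n_loci)]
    return child

n_pop, n_loci, n_gen = 60, 5, 200
pop = rng.normal(scale=3, size=(n_pop, 2, n_loci))
for _ in range(n_gen):
    fitness = np.array([sphere(phenotype(ind)) for ind in pop])
    parents = pop[np.argsort(fitness)[:n_pop // 2]]            # truncation selection
    children = [mate(parents[rng.integers(len(parents))],
                     parents[rng.integers(len(parents))]) for _ in range(n_pop)]
    pop = np.array(children) + rng.normal(scale=0.05, size=(n_pop, 2, n_loci))  # mutation

best = min(pop, key=lambda ind: sphere(phenotype(ind)))
print(round(sphere(phenotype(best)), 4))
```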

11.
In earlier work (Gelfand and Smith, 1990; Gelfand et al., 1990) a sampling-based approach using the Gibbs sampler was offered as a means of developing marginal posterior densities for a wide range of Bayesian problems, several of which were previously inaccessible. Our purpose here is twofold. First, we flesh out the implementation of this approach for the calculation of arbitrary expectations of interest. Second, we offer a comparison with perhaps the most prominent approach for calculating posterior expectations: analytic approximation by the Laplace method. Several illustrative examples are discussed as well. Clear advantages for the sampling-based approach emerge.
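A textbook-style sketch of the sampling-based calculation: a two-block Gibbs sampler for a normal model with unknown mean and variance, with posterior expectations estimated by averaging the draws. The prior values and data are hypothetical and much simpler than the examples in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=3.0, scale=2.0, size=50)        # hypothetical data
n, ybar = len(y), y.mean()

# Flat prior on mu, inverse-gamma(a0, b0) prior on sigma^2 (assumed values).
a0, b0 = 2.0, 2.0
mu, sig2 = ybar, y.var()
draws = []
for t in range(6000):
    # mu | sigma^2, y  ~  N(ybar, sigma^2 / n)
    mu = rng.normal(ybar, np.sqrt(sig2 / n))
    # sigma^2 | mu, y  ~  Inv-Gamma(a0 + n/2, b0 + 0.5 * sum (y_i - mu)^2)
    sig2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + 0.5 * np.sum((y - mu) ** 2)))
    draws.append((mu, sig2))

draws = np.array(draws)[1000:]                     # discard burn-in
print(draws.mean(axis=0))                          # estimates of E[mu | y] and E[sigma^2 | y]
```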

12.
As an alternative to the classical approach for analysing dichotomous choice environmental valuation data, this note develops a Bayesian approach using Gibbs sampling and data augmentation. A by-product of the approach is a welfare measure, such as the mean willingness to pay, and its confidence interval, which can be used for policy analysis.
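A minimal sketch of the Gibbs-sampling-with-data-augmentation idea for dichotomous choice data, in the spirit of Albert and Chib's probit augmentation: latent utilities are drawn from truncated normals, probit coefficients are updated under a flat prior, and mean willingness to pay is computed from each draw. The bid design and response model are hypothetical.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)
n = 300
bid = rng.choice([5.0, 10.0, 20.0, 40.0], size=n)          # hypothetical bid design
wtp_true = rng.normal(18.0, 8.0, size=n)
yes = (wtp_true >= bid).astype(float)                       # "yes" if WTP exceeds the bid

X = np.column_stack([np.ones(n), bid])
XtX_inv = np.linalg.inv(X.T @ X)
theta = np.zeros(2)
mean_wtp = []
for t in range(4000):
    # Data augmentation: latent utilities truncated at 0 according to the observed response.
    mu = X @ theta
    lo = np.where(yes == 1, -mu, -np.inf)
    hi = np.where(yes == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # Gibbs draw of the probit coefficients under a flat prior.
    theta = rng.multivariate_normal(XtX_inv @ (X.T @ z), XtX_inv)
    if t >= 1000:
        mean_wtp.append(-theta[0] / theta[1])               # mean WTP = -alpha / beta

print(round(np.mean(mean_wtp), 2), np.round(np.percentile(mean_wtp, [2.5, 97.5]), 2))
```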

13.
Recommended methods for analyzing unbalanced two-way data may be classified into two major categories: the parametric interpretation approach and the model comparison approach. Each approach has its advantages and its drawbacks. The main drawback of the parametric interpretation approach is non-orthogonality. For the model comparison approach, the main drawback is the dependence of the hypothesis tested on the cell sizes. In this paper we provide examples to illustrate these drawbacks.

14.
We propose a flexible modelling approach for the distribution of random effects when both response variables and covariates have non-ignorable missing values in a longitudinal study. A Bayesian approach is developed with a choice of nonparametric prior for the distribution of random effects. We apply the proposed method to a real data example from a national long-term survey by Statistics Canada. We also design simulation studies to further check the performance of the proposed approach. The results of the simulation studies indicate that the proposed approach outperforms the conventional approach with a normality assumption when heterogeneity in the random-effects distribution is salient.

15.
Abstract: Building on the basic idea of Q-type factor analysis and combining it with correspondence analysis, this paper develops a clustering method suitable for large databases. The method resolves the computational-efficiency problems of the Q-type factor analysis algorithm as well as several shortcomings of traditional correspondence analysis, such as the lack of an objective classification criterion and severe information loss, and it also performs well in empirical analysis.

16.

In this paper, we propose an outlier-detection approach that uses the properties of an intercept estimator in a difference-based regression model (DBRM), which we introduce here. The DBRM is constructed from a multiple linear regression model, and we use it to detect outliers in multiple linear regression. Our outlier-detection approach uses only the intercept; it does not require estimates of the other parameters in the DBRM. This paper is the first to employ a difference-based intercept estimator to study the outlier-detection problem in a multiple regression model. We compared our approach with several existing methods in a simulation study, and the results suggest that our approach outperformed the others. We also demonstrate the advantage of our approach with a real-data application. Our approach can be extended to nonparametric regression models for outlier detection.

17.
Analysis of familial aggregation in the presence of varying family sizes
Summary. Family studies are frequently undertaken as the first step in the search for genetic and/or environmental determinants of disease. Significant familial aggregation of disease is suggestive of a genetic aetiology for the disease and may lead to more focused genetic analysis. Of course, it may also be due to shared environmental factors. Many methods have been proposed in the literature for the analysis of family studies. One model that is appealing for the simplicity of its computation and the conditional interpretation of its parameters is the quadratic exponential model. However, a limiting factor in its application is that it is not reproducible, meaning that all families must be of the same size. To increase the applicability of this model, we propose a hybrid approach in which analysis is based on the assumption of the quadratic exponential model for a selected family size and combines a missing data approach for smaller families with a marginalization approach for larger families. We apply our approach to a family study of colorectal cancer that was sponsored by the Cancer Genetics Network of the National Institutes of Health. We investigate the properties of our approach in simulation studies. Our approach applies more generally to clustered binary data.

18.
In this article, we propose an outlier detection approach for a multiple regression model that uses the properties of a difference-based variance estimator. This type of difference-based variance estimator was originally used to estimate the error variance in a nonparametric regression model without estimating the nonparametric function. This article is the first to employ a difference-based error variance estimator to study the outlier detection problem in a multiple regression model. Our approach uses a leave-one-out method based on the difference-based error variance. Existing leave-one-out outlier detection approaches are highly affected by other outliers, whereas ours is not, because it does not use the regression coefficient estimator. We compared our approach with several existing methods in a simulation study, and the results suggest that our approach outperforms them. The advantages of our approach are demonstrated in a real data application. Our approach can be extended to the nonparametric regression model for outlier detection.

19.
In this paper we propose an alternative procedure for estimating the parameters of the beta regression model, based on the EM algorithm. For this, we take advantage of the stochastic representation of a beta random variable as a ratio involving independent gamma random variables. We present a complete EM-algorithm-based approach, including point and interval estimation and diagnostic tools for detecting outlying observations. As illustrated in this paper, the EM-algorithm approach provides a better estimate of the precision parameter than the direct maximum likelihood (ML) approach. We present the results of Monte Carlo simulations comparing the EM-algorithm and direct ML approaches. Finally, two empirical examples illustrate the full EM-algorithm approach for the beta regression model. Supplementary material is available for this paper.

20.
To study the equality of regression coefficients across several heteroscedastic regression models, we propose a fiducial-based test and theoretically examine its frequentist properties. We numerically compare the performance of the proposed approach with the parametric bootstrap (PB) approach. Simulation results indicate that the fiducial approach controls the type I error rate satisfactorily regardless of the number of regression models and the sample sizes, whereas the PB approach tends to be somewhat liberal in some scenarios. Finally, the proposed approach is applied to the analysis of a real dataset for illustration.
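The fiducial test itself is not reproduced here; the sketch below shows the resampling logic of a simplified parametric-bootstrap comparator for equal coefficients across heteroscedastic regressions, with a generic Wald-type statistic and hypothetical data. It is an assumption-laden stand-in, not the PB procedure evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols(X, y):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    s2 = np.sum((y - X @ beta) ** 2) / (len(y) - X.shape[1])
    return beta, s2 * np.linalg.inv(X.T @ X), s2

def wald(groups):
    # Wald-type statistic: distance of group coefficients from their
    # precision-weighted common value (a generic choice of test statistic).
    fits = [ols(X, y) for X, y in groups]
    W = [np.linalg.inv(cov) for _, cov, _ in fits]
    common = np.linalg.solve(sum(W), sum(w @ b for w, (b, _, _) in zip(W, fits)))
    stat = sum(float((b - common) @ w @ (b - common)) for w, (b, _, _) in zip(W, fits))
    return stat, common, [s2 for _, _, s2 in fits]

# Hypothetical heteroscedastic groups sharing the same coefficients under H0.
Xs = [np.column_stack([np.ones(n), rng.normal(size=n)]) for n in (30, 40, 25)]
beta0, sigmas = np.array([1.0, 2.0]), (1.0, 3.0, 0.5)
groups = [(X, X @ beta0 + rng.normal(scale=s, size=len(X))) for X, s in zip(Xs, sigmas)]

T_obs, beta_common, s2s = wald(groups)

# Parametric bootstrap: regenerate each group under H0 (common coefficients,
# group-specific estimated variances) and compare the statistic to T_obs.
B, count = 2000, 0
for _ in range(B):
    boot = [(X, X @ beta_common + rng.normal(scale=np.sqrt(s2), size=len(X)))
            for (X, _), s2 in zip(groups, s2s)]
    if wald(boot)[0] >= T_obs:
        count += 1
print("parametric bootstrap p-value:", count / B)
```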

