Similar Literature
Found 20 similar documents (search time: 31 ms)
1.
Based on recent developments in operations research, we propose two adaptive resampling algorithms for estimating bootstrap distributions. One applies the principle of the recently proposed cross-entropy (CE) method for rare-event simulation and does not require calculating the resampling probability weights via numerical optimization (e.g., Newton's method); the other can be viewed as a multi-stage extension of the classical two-step variance-minimization approach. Both algorithms are easily used as part of a general algorithm for Monte Carlo calculation of bootstrap confidence intervals and tests, and are especially useful for estimating rare-event probabilities. We analyze the theoretical properties of both algorithms in an idealized setting and carry out simulation studies to demonstrate their performance. Empirical results on one-sample and two-sample problems, as well as a real survival data set, show that the proposed algorithms are not only superior to traditional approaches but can also deliver more than an order of magnitude in computational efficiency gains.
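For context, the baseline these adaptive schemes accelerate is the plain Monte Carlo percentile bootstrap. A minimal sketch (function name and data are illustrative, not from the article):

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap interval via plain Monte Carlo resampling."""
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(stat([data[rng.randrange(n)] for _ in range(n)])
                  for _ in range(n_boot))
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]

data = [4.1, 5.3, 3.8, 6.0, 5.1, 4.7, 5.9, 4.4, 5.6, 4.9]
lo, hi = bootstrap_ci(data)
```

For rare-event tail probabilities this naive resampler needs enormous `n_boot`, which is exactly the inefficiency the adaptive CE-based resampling targets.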

2.
ABSTRACT

Empirical likelihood (EL) is a nonparametric method based on observations. The EL method is defined as a constrained optimization problem, whose solution is usually carried out via a duality approach. In this study, we propose an alternative algorithm to solve this constrained optimization problem, based on a Newton-type algorithm for the Lagrange multipliers. We provide a simulation study and a real-data example to compare the proposed algorithm with the classical one. Both show that the performance of the proposed algorithm is comparable to that of the existing algorithm in terms of efficiency and CPU time.
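As a minimal illustration of the dual computation such algorithms target (not the proposed algorithm itself): for a mean constraint, the EL weights follow from Newton's method on the Lagrange-multiplier equation.

```python
def el_weights(x, mu, tol=1e-10, max_iter=50):
    """Empirical-likelihood weights for the constraint mean(x) = mu,
    found by Newton's method on the dual equation for the Lagrange
    multiplier lam: sum_i (x_i - mu) / (1 + lam * (x_i - mu)) = 0."""
    n = len(x)
    d = [xi - mu for xi in x]
    lam = 0.0
    for _ in range(max_iter):
        g = sum(di / (1.0 + lam * di) for di in d)
        gp = -sum(di * di / (1.0 + lam * di) ** 2 for di in d)
        step = g / gp
        lam -= step
        if abs(step) < tol:
            break
    return [1.0 / (n * (1.0 + lam * di)) for di in d]

x = [1.0, 2.0, 3.0, 4.0]
w = el_weights(x, mu=2.2)
```

At convergence the weights sum to one and the weighted mean equals the hypothesized value, which is what the constrained optimization enforces.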

3.
This article proposes maximum likelihood estimation based on the bare bones particle swarm optimization (BBPSO) algorithm for estimating the parameters of the Weibull distribution with censored data, which is widely used in lifetime data analysis. The approach yields more accurate parameter estimates for the Weibull distribution, and confidence intervals for the estimators are also obtained. Simulation results show that the BBPSO algorithm outperforms the Newton–Raphson method in most cases in terms of bias, root mean square error, and coverage rate. Two examples demonstrate the performance of the proposed approach; the results show that maximum likelihood estimates via the BBPSO algorithm perform well for estimating the Weibull parameters with censored data.
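A minimal sketch of the idea, assuming right-censored data and the standard bare-bones PSO update (new positions drawn from a Gaussian centred between personal and global bests); details of the article's implementation may differ:

```python
import math
import random

def weibull_negll(params, times, delta):
    """Negative log-likelihood of Weibull(shape k, scale lam) under
    right censoring; delta[i] = 1 if times[i] is observed, 0 if censored."""
    k, lam = params
    if not (0 < k < 100 and 0 < lam < 100):
        return float("inf")
    ll = 0.0
    for t, d in zip(times, delta):
        z = (t / lam) ** k
        ll += (math.log(k / lam) + (k - 1) * math.log(t / lam) - z) if d else -z
    return -ll

def bbpso(f, n_particles=30, n_iter=200, seed=1):
    """Bare bones PSO: each new position is a Gaussian draw centred
    midway between a particle's personal best and the global best,
    with spread equal to their distance -- no velocity parameters."""
    rng = random.Random(seed)
    pos = [[rng.uniform(0.1, 5.0), rng.uniform(0.1, 5.0)]
           for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    gi = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[gi][:], pval[gi]
    for _ in range(n_iter):
        for i in range(n_particles):
            cand = [rng.gauss((pbest[i][j] + gbest[j]) / 2,
                              abs(pbest[i][j] - gbest[j]) + 1e-12)
                    for j in range(2)]
            v = f(cand)
            if v < pval[i]:
                pbest[i], pval[i] = cand, v
                if v < gval:
                    gbest, gval = cand[:], v
    return gbest, gval

# simulated Weibull(shape 2, scale 3) lifetimes, right-censored at 4
gen = random.Random(7)
raw = [3.0 * (-math.log(1.0 - gen.random())) ** 0.5 for _ in range(100)]
times = [min(t, 4.0) for t in raw]
delta = [1 if t < 4.0 else 0 for t in raw]
(k_hat, lam_hat), best = bbpso(lambda p: weibull_negll(p, times, delta))
```

Because BBPSO needs only likelihood evaluations, it avoids the derivative computations and starting-value sensitivity of Newton–Raphson.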

4.
Distributed agent-based simulation is a popular method for computational experiments on large-scale artificial societies. The strategy for partitioning the artificial-society models among hosts plays an essential role in the simulation engine's execution efficiency, as it strongly affects communication overhead and computational load balancing during simulation. To address this problem, we first analyze the execution and scheduling of agents during simulation and model it as a wide-sense cyclostationary random process. A static statistical partitioning model is then proposed to obtain the partitioning strategy that minimizes the average communication cost and the load-imbalance factor. To solve this model, we recast it as a graph-partitioning problem and devise a statistical-movement graph-based partitioning algorithm that builds the task-graph model by mining statistical movement information from the simulation model's initialization data. In the experiments, two other popular partitioning methods are used to evaluate the performance of the proposed algorithm, and we further compare graph-partitioning performance under different task-graph models. The results indicate that the proposed statistical-movement graph-based static partitioning method outperforms the other methods in reducing communication overhead while satisfying the load-balance constraint.
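The underlying objective can be made concrete on a toy graph: minimize the weighted edge cut (communication) subject to a balance constraint (load). This exhaustive search is only for illustration; the article's algorithm is of course built to scale far beyond it:

```python
from itertools import combinations

def best_bipartition(n_nodes, edges, imbalance=0):
    """Exhaustive two-way partition minimising the weighted edge cut,
    with the two part sizes allowed to differ by at most `imbalance`."""
    best_cut, best_part = float("inf"), None
    for size in range(1, n_nodes):
        if abs(n_nodes - 2 * size) > imbalance:
            continue
        for group in combinations(range(n_nodes), size):
            a = set(group)
            cut = sum(w for u, v, w in edges if (u in a) != (v in a))
            if cut < best_cut:
                best_cut, best_part = cut, a
    return best_cut, best_part

# two tightly coupled agent clusters (triangles) joined by one light edge
edges = [(0, 1, 5), (1, 2, 5), (0, 2, 5),
         (3, 4, 5), (4, 5, 5), (3, 5, 5), (2, 3, 1)]
cut, part = best_bipartition(6, edges)
```

The optimizer cuts only the light bridge edge, keeping each heavily communicating cluster on one host.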

5.
We present a class of truncated nonlinear regression models for location and scale in which the truncated nature of the data is incorporated into the statistical model by assuming that the response variable follows a truncated distribution. The location parameter of the response is modeled by a continuous nonlinear function of covariates and unknown parameters, and the model likewise allows the scale parameter to be a continuous function of covariates and unknown parameters. Three particular cases are presented, with the response following a truncated normal, truncated skew-normal, or truncated beta distribution. These truncated nonlinear regression models are constructed assuming fixed, known truncation limits, and model parameters are estimated by direct maximization of the log-likelihood using a nonlinear optimization algorithm. Standardized residuals and case-deletion diagnostics are used to verify model adequacy and to detect outliers and influential observations. Results based on simulated data assess the frequentist properties of the estimates, and a real data set on soil-water retention from the Buriti Vermelho River Basin database is analyzed with the proposed methodology.
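The key likelihood ingredient is that truncation to fixed limits [a, b] just subtracts the log of the truncation mass from each normal term. A minimal sketch for the truncated-normal case with constant location and scale, maximized here by crude grid search (the article uses a proper nonlinear optimizer and covariate-dependent parameters):

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def trunc_normal_loglik(mu, sigma, y, a, b):
    """Normal(mu, sigma) log-likelihood truncated to [a, b]: the usual
    normal terms minus log of the truncation mass, once per observation."""
    if sigma <= 0:
        return float("-inf")
    mass = norm_cdf((b - mu) / sigma) - norm_cdf((a - mu) / sigma)
    if mass <= 0:
        return float("-inf")
    ll = 0.0
    for yi in y:
        z = (yi - mu) / sigma
        ll += -0.5 * z * z - math.log(sigma) - 0.5 * math.log(2 * math.pi)
    return ll - len(y) * math.log(mass)

y = [2.0, 3.5, 4.0, 5.5, 6.0]            # responses observed within [0, 10]
grid = [(m / 10, s / 10) for m in range(10, 91) for s in range(5, 41)]
mu_hat, s_hat = max(grid,
                    key=lambda p: trunc_normal_loglik(p[0], p[1], y, 0.0, 10.0))
```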

6.
We present maximum likelihood estimation (MLE) via the particle swarm optimization (PSO) algorithm to estimate the parameters of a mixture of two Weibull distributions with complete and multiply censored data. A simulation study is conducted to assess the performance of the MLE via the PSO algorithm, the quasi-Newton method, and the expectation-maximization (EM) algorithm for different parameter settings and sample sizes in both uncensored and censored cases. The simulation results show that the PSO algorithm outperforms the quasi-Newton method and the EM algorithm in most cases in terms of bias and root mean square error. Two numerical examples are used to demonstrate the performance of the proposed method.

7.
For the problem of optimizing the initial condition of the GM(1,1) power model, we propose a linear-combination optimization method based on the newest and oldest information in the original sequence. With the objective of minimizing the sum of squared simulation errors, we construct an optimization model for the combination weights of the initial condition and derive a closed-form expression for the optimal weights. Finally, using data on China's senior high school enrollment rate, we verify the effectiveness and superiority of the optimized model. The results show that the initial-condition optimization method effectively balances the weights of new and old information and improves the simulation and prediction accuracy of the GM(1,1) power model.
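For context, a sketch of the classical GM(1,1) baseline that the initial-condition optimization refines, using the conventional initial condition x1_hat(1) = x0(1) rather than the optimized weighted combination (the power-model extension is not shown):

```python
import math

def gm11(x0):
    """Classical GM(1,1): least squares on the background values of the
    accumulated series, conventional initial condition x1_hat(1) = x0(1)."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]               # accumulated series
    z = [0.5 * (x1[i] + x1[i + 1]) for i in range(n - 1)]  # background values
    y = x0[1:]
    m = len(z)
    sz, sy = sum(z), sum(y)
    szz = sum(zi * zi for zi in z)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det        # development coefficient
    b = (szz * sy - sz * szy) / det      # grey input
    def predict(k):                      # k is the 1-based period index
        if k == 1:
            return x0[0]
        f = lambda t: (x0[0] - b / a) * math.exp(-a * (t - 1)) + b / a
        return f(k) - f(k - 1)
    return a, b, predict

x0 = [10.0, 12.0, 14.4, 17.28, 20.736]   # geometric growth at rate 1.2
a, b, predict = gm11(x0)
```

On near-exponential data the fitted development coefficient reproduces the growth rate closely; the optimized initial condition further reduces the remaining simulation error.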

8.
In this article, to reduce the computational load of Bayesian variable selection, we use a variant of reversible jump Markov chain Monte Carlo together with the Holmes and Held (HH) algorithm to sample model index variables in logistic mixed models involving a large number of explanatory variables. Furthermore, we propose a simple proposal distribution for the model index variables, and use a simulation study and a real example to compare the performance of the HH algorithm under the proposed and existing proposal distributions. The results show that the HH algorithm with our proposed proposal distribution is a computationally efficient and reliable selection method.

9.
Bayesian hierarchical modeling with Gaussian process random effects is a popular approach for analyzing point-referenced spatial data. For large spatial data sets, however, generic posterior sampling is infeasible because of the extremely high computational burden of decomposing the spatial correlation matrix. In this paper, we propose an efficient algorithm, the adaptive griddy Gibbs (AGG) algorithm, to address the computational issues with large spatial data sets. The proposed algorithm dramatically reduces the computational complexity. We show theoretically that the proposed method approximates the true posterior distribution accurately, and we derive the number of grid points sufficient for a required accuracy. We compare the performance of AGG with that of state-of-the-art methods in simulation studies. Finally, we apply AGG to spatially indexed data on building energy consumption.
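The non-adaptive building block is a griddy Gibbs draw: evaluate the unnormalised conditional on a grid, normalise, and sample by inverse CDF. A minimal sketch with an illustrative Gaussian target (the AGG algorithm additionally adapts the grid, which is not shown):

```python
import math
import random

def griddy_draw(log_density, grid, rng):
    """One griddy Gibbs draw: evaluate an unnormalised log density on a
    fixed grid, normalise, and sample a grid point by inverse CDF."""
    logs = [log_density(g) for g in grid]
    mx = max(logs)                        # stabilise before exponentiating
    w = [math.exp(l - mx) for l in logs]
    u = rng.random() * sum(w)
    acc = 0.0
    for g, wi in zip(grid, w):
        acc += wi
        if acc >= u:
            return g
    return grid[-1]

rng = random.Random(3)
grid = [i / 100 for i in range(-300, 301)]      # grid on [-3, 3]
target = lambda t: -0.5 * (t - 1.0) ** 2        # unnormalised log N(1, 1)
draws = [griddy_draw(target, grid, rng) for _ in range(2000)]
mean = sum(draws) / len(draws)
```

Each draw costs one sweep of density evaluations on the grid, with no matrix decomposition, which is the source of the computational savings.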

10.
In this paper, we propose a lower-bound-based smoothed quasi-Newton algorithm for computing the solution paths of the group bridge estimator in linear regression models. Our method combines a quasi-Newton algorithm applied to a smoothed group bridge penalty with a novel data-driven thresholding rule for the regression coefficients. The rule is derived from a necessary KKT condition of the group bridge optimization problem; it is easy to implement, eliminates groups with zero coefficients, and thereby reduces the dimension of the optimization problem. The proposed algorithm also removes the groupwise orthogonality condition required by coordinate descent and LARS algorithms for group variable selection. Numerical results show that the proposed algorithm outperforms coordinate-descent-based algorithms in both efficiency and accuracy.

11.
The fitting of Lévy processes is an important topic in both option pricing and risk management. The literature offers many fitting methods that require adequate initial values at the start of the optimization procedure. The so-called simplified method of moments (SMoM) generates these initial values for the Variance Gamma process by assuming a symmetric distribution, and the underlying idea transfers easily to the Normal Inverse Gaussian process. The characteristics of the Generalized Hyperbolic process, however, prevent such an easy adaptation. We therefore derive an SMoM for the Generalized Hyperbolic distribution by applying a Taylor-series approximation to the modified Bessel function of the third kind, a Tschirnhaus transformation, and a symmetric-distribution assumption. Our simulation study compares the results of this SMoM with those of maximum likelihood estimation. The results show that the proposed approach is an appropriate and useful way to estimate Generalized Hyperbolic process parameters and significantly reduces estimation time.

12.
Solving label switching is crucial for interpreting the results of fitting Bayesian mixture models. Label switching originates from the invariance of the posterior distribution under permutation of the component labels: the component labels in a Markov chain simulation may switch to another equivalent permutation, so the marginal posterior distributions associated with all labels look alike and are useless for inference about quantities relating to each individual component. In this article, we propose a new, simple labelling method that minimizes the deviance of the class probabilities from a fixed set of reference labels. The reference labels can be chosen before running Markov chain Monte Carlo (MCMC) using optimization methods such as expectation-maximization, so the new labelling method can be implemented as an online algorithm, reducing storage requirements and saving considerable computation time. Using the Acid and Galaxy data sets, we demonstrate that the proposed labelling method successfully removes label switching from the raw MCMC samples.
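The relabelling step itself is simple: for each MCMC draw, choose the permutation of component labels closest to the fixed reference. A minimal sketch using squared distance on illustrative component summaries (the article minimizes a deviance of class probabilities):

```python
from itertools import permutations

def relabel(theta, reference):
    """Relabel one posterior draw: choose the permutation of component
    labels minimising squared distance to fixed reference values."""
    k = len(reference)
    best = min(permutations(range(k)),
               key=lambda p: sum((theta[p[j]] - reference[j]) ** 2
                                 for j in range(k)))
    return [theta[j] for j in best], best

draw = [0.72, 0.11, 0.31]   # component summaries in a switched order
ref = [0.1, 0.3, 0.7]       # reference labelling, e.g. from a pre-run EM fit
relabelled, perm = relabel(draw, ref)
```

Since the reference is fixed before sampling, each draw can be relabelled as it arrives, which is what makes the method online.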

13.
Xiong Wei et al., 《统计研究》 (Statistical Research), 2020, 37(5): 104-116
With the rapid development of computing technology, high-dimensional compositional data are increasingly common and typically contain many near-zero values and missing entries. The high dimensionality poses a serious challenge to traditional statistical methods, while heavy tails and complex covariance structures make theoretical analysis even harder. How to robustly impute the near-zero values of high-dimensional compositional data and uncover the latent intrinsic structure has therefore become a research focus. Combining a modified EM algorithm, this paper proposes a Lasso quantile-regression imputation method based on R-type clustering (SubLQR) to address the near-zero-value problem in high-dimensional compositional data. Compared with existing high-dimensional near-zero imputation methods, SubLQR has the following advantages. (i) Robustness and comprehensiveness: the Lasso quantile-regression approach not only effectively explores the entire conditional distribution of the response, but also recovers more realistic high-dimensional sparsity patterns. (ii) Efficiency and accuracy: imputation based on R-type clustering reduces computational complexity and greatly improves imputation precision. Simulation studies confirm that SubLQR is efficient, flexible, and accurate, with particular advantages when near-zero values and outliers are abundant. Finally, SubLQR is applied to a metabolomics study of a rare disease, further demonstrating the broad applicability of the proposed method.

14.
We consider statistical inference on the unknown parameters of a lognormal distribution under progressive censoring. The maximum likelihood estimates (MLEs) are obtained using the expectation-maximization algorithm, and the observed and expected Fisher information matrices are provided; approximate MLEs of the unknown parameters are also obtained. Bayes and generalized estimates are derived under a squared-error loss function and computed using Lindley's method as well as importance sampling. Highest posterior density intervals and asymptotic interval estimates are constructed for the unknown parameters. A simulation study is conducted to compare the proposed estimates, and a data set is analysed for illustrative purposes. Finally, optimal progressive censoring plans are discussed under different optimality criteria and results are presented.

15.
We propose a fully Bayesian model with a non-informative prior for analyzing misclassified binary data with a validation substudy. In addition, we derive a closed-form algorithm for drawing all parameters from the posterior distribution and making statistical inference on odds ratios. Our algorithm draws each parameter from a beta distribution, avoids the specification of initial values, and does not have convergence issues. We apply the algorithm to a data set and compare the results with those obtained by other methods. Finally, the performance of our algorithm is assessed using simulation studies.
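The flavour of the closed-form sampler can be shown in the simplest case, a single misclassification-free proportion: with a Beta prior, posterior draws come directly from a Beta distribution, with no initial values and no convergence diagnostics. (The article's algorithm extends this idea to all parameters of the misclassification model.)

```python
import random

def posterior_beta_draws(x, n, a=1.0, b=1.0, n_draws=5000, seed=5):
    """With a Beta(a, b) prior on a proportion and x successes in n
    trials, the posterior is Beta(a + x, b + n - x): every draw is a
    direct beta variate, so no MCMC is needed."""
    rng = random.Random(seed)
    return [rng.betavariate(a + x, b + n - x) for _ in range(n_draws)]

draws = posterior_beta_draws(x=30, n=100)
post_mean = sum(draws) / len(draws)      # posterior mean of the proportion
```

Posterior summaries for functions of such draws, like odds ratios, follow by transforming each draw.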

16.
This paper introduces a method for clustering spatially dependent functional data. The idea is to consider the contribution of each curve to the spatial variability. We therefore define a spatial dispersion function associated with each curve and perform a k-means-like clustering algorithm based on optimizing a fitting criterion between the spatial dispersion functions of the curves and the cluster representatives. The performance of the proposed method is illustrated by an application to real data and a simulation study.

17.
A transformation is proposed to convert the nonlinear constraints on the parameters of the mixture transition distribution (MTD) model into box constraints. The proposed transformation removes the difficulties associated with maximum likelihood estimation (MLE) in MTD modeling, so that the MLEs of the parameters can easily be obtained via a hybrid of evolutionary and/or quasi-Newton algorithms for global optimization. Simulation studies demonstrate MTD modeling with the proposed approach through a global search algorithm in the R environment. Finally, the proposed approach is applied to MTD modeling of three real data sets.

18.
Variable selection in finite mixture of regression (FMR) models is frequently used in statistical modeling. Most applications of variable selection in FMR models assume a normal distribution for the regression error, which is unsuitable for data containing groups of observations with heavy tails and outliers. In this paper, we introduce a robust variable selection procedure for FMR models using the t distribution. With appropriate selection of the tuning parameters, the consistency and the oracle property of the regularized estimators are established. To estimate the model parameters, we develop an EM algorithm for the numerical computations and a method for selecting the tuning parameters adaptively. The parameter estimation performance of the proposed model is evaluated through simulation studies, and its application is illustrated by analyzing a real data set.

19.
Estimators are often defined as the solutions to data-dependent optimization problems. A common form of objective function (the function to be optimized) in statistical estimation is the sum of a convex function V and a quadratic complexity penalty; a standard paradigm for creating kernel-based estimators leads to exactly such an optimization problem. This article describes an optimization algorithm designed for unconstrained problems in which the objective is the sum of a nonnegative convex function and a known quadratic penalty. The algorithm is described and compared with BFGS on penalized logistic regression and penalized L3/2 regression problems.
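A minimal instance of this objective class, the logistic negative log-likelihood V plus a quadratic penalty, can be minimised even by plain gradient descent. This sketch (with illustrative data) is only to make the objective concrete; the article's algorithm and BFGS are far more efficient:

```python
import math

def penalized_logistic_gd(xs, ys, lam=0.1, lr=0.1, n_iter=2000):
    """Minimise V(b0, b1) + lam * (b0^2 + b1^2), where V is the convex
    logistic negative log-likelihood, by plain gradient descent."""
    b0 = b1 = 0.0
    for _ in range(n_iter):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))   # fitted probability
            g0 += p - y
            g1 += (p - y) * x
        b0 -= lr * (g0 + 2.0 * lam * b0)
        b1 -= lr * (g1 + 2.0 * lam * b1)
    return b0, b1

xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]        # separable data: the penalty keeps b1 finite
b0, b1 = penalized_logistic_gd(xs, ys)
```

Note the role of the quadratic penalty here: without it, the separable data would drive the slope to infinity.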

20.
In this article, a new two-phase algorithm for expensive simulation problems is presented. In the first phase, acting as a model-based algorithm, the simulation output is used directly in the optimization stage; in the second phase, the simulation model is replaced by a valid metamodel. In addition, a new optimization algorithm is presented. To evaluate its performance, the proposed algorithm is applied to the (s, S) inventory problem as well as to five test functions. Numerical results show that the proposed algorithm finds better solutions with less computation time than the corresponding metamodel-based algorithm.
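The (s, S) test problem can be sketched as a per-period cost simulation that an optimizer would repeatedly evaluate; the cost parameters and demand distribution below are illustrative, not those of the article:

```python
import random

def ss_inventory_cost(s, S, n_periods=5000, seed=11,
                      hold=1.0, shortage=5.0, fixed=10.0, unit=2.0):
    """Average per-period cost of an (s, S) policy: when inventory drops
    to s or below, order up to S; demand each period is uniform on 0..9."""
    rng = random.Random(seed)
    inv, total = S, 0.0
    for _ in range(n_periods):
        if inv <= s:
            total += fixed + unit * (S - inv)   # replenishment cost
            inv = S
        demand = rng.randint(0, 9)
        sold = min(inv, demand)
        total += shortage * (demand - sold)     # lost-sales penalty
        inv -= sold
        total += hold * inv                     # holding cost
    return total / n_periods

# each evaluation is an expensive simulation run the metamodel would replace
costs = {s: ss_inventory_cost(s, 40) for s in (0, 10, 20)}
```

Because each policy evaluation requires a full simulation run, replacing this function with a cheap metamodel in the second phase is what saves computation time.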
