Similar Articles
20 similar articles found
1.
The analysis of human perceptions is often carried out by resorting to surveys and questionnaires, where respondents are asked to express ratings about the objects being evaluated. A class of mixture models, called CUB (Combination of Uniform and shifted Binomial), has recently been proposed in this context. This article focuses on a model of this class, the Nonlinear CUB, and investigates some computational issues concerning parameter estimation, which is performed by maximum likelihood. More specifically, we consider two main approaches to optimizing the log-likelihood: classical numerical optimization methods and the EM algorithm. The classical numerical methods comprise the widely used Nelder–Mead, Newton–Raphson, Broyden–Fletcher–Goldfarb–Shanno (BFGS), Berndt–Hall–Hall–Hausman (BHHH), Simulated Annealing and Conjugate Gradients algorithms, and usually have the advantage of fast convergence. On the other hand, the EM algorithm deserves consideration for some optimality properties in the case of mixture models, but it is slower. This article has a twofold aim: first, we show how to obtain explicit formulas for the implementation of the EM algorithm in nonlinear CUB models and we formally derive the asymptotic variance–covariance matrix of the maximum likelihood estimator; second, we discuss and compare the performance of the two above-mentioned approaches to log-likelihood maximization.
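To make the E- and M-steps concrete, here is a minimal Python sketch of the EM algorithm for the classic two-parameter CUB model (a mixture of a discrete uniform and a shifted binomial), where both M-step updates have closed forms. The nonlinear CUB updates derived in the article are more involved and are not reproduced here; the function name and defaults are illustrative only.

```python
import numpy as np
from scipy.stats import binom

def cub_em(r, m, pi0=0.5, xi0=0.5, tol=1e-8, max_iter=500):
    """EM for the classic CUB model:
    P(R = r) = pi * Binom(r - 1; m - 1, 1 - xi) + (1 - pi) / m,  r = 1..m."""
    r = np.asarray(r)
    pi, xi = pi0, xi0
    for _ in range(max_iter):
        # E-step: posterior probability that each rating comes from the
        # shifted-binomial ("feeling") component rather than the uniform one
        b = binom.pmf(r - 1, m - 1, 1 - xi)
        tau = pi * b / (pi * b + (1 - pi) / m)
        # M-step: closed-form updates of the mixing weight and feeling parameter
        pi_new = tau.mean()
        xi_new = 1 - np.sum(tau * (r - 1)) / ((m - 1) * tau.sum())
        if abs(pi_new - pi) + abs(xi_new - xi) < tol:
            return pi_new, xi_new
        pi, xi = pi_new, xi_new
    return pi, xi
```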

2.
The quasi-likelihood function proposed by Wedderburn [Quasi-likelihood functions, generalized linear models, and the Gauss–Newton method. Biometrika. 1974;61:439–447] broadened the application scope of generalized linear models (GLMs) by specifying only the mean and variance functions instead of the entire distribution. However, in many situations, complete specification of the variance function in the quasi-likelihood approach may not be realistic. Following Fahrmeir's [Maximum likelihood estimation in misspecified generalized linear models. Statistics. 1990;21:487–502] treatment of misspecified GLMs, we define a quasi-likelihood nonlinear model (QLNM) with misspecified variance function by replacing the unknown variance function with a known function. In this paper, we propose some mild regularity conditions under which the existence and the asymptotic normality of the maximum quasi-likelihood estimator (MQLE) are obtained in the QLNM with misspecified variance function. We suggest computing the MQLE of the unknown parameter in this model by the Gauss–Newton iteration procedure and show that it works well in a simulation study.
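As a rough illustration of the Gauss–Newton iteration for quasi-score equations, the sketch below fits a log-linear mean with a user-supplied working variance function. It is only a simple special case (one-dimensional observations, diagonal working variance), not the paper's general QLNM setup, and the function name is hypothetical.

```python
import numpy as np

def quasi_gauss_newton(y, X, beta0, var_fun=lambda mu: mu, tol=1e-8, max_iter=100):
    """Gauss-Newton iteration for the quasi-score equations of a model with
    mean mu_i = exp(x_i' beta) and working variance var_fun(mu_i)."""
    beta = np.asarray(beta0, dtype=float)
    for _ in range(max_iter):
        mu = np.exp(X @ beta)
        D = X * mu[:, None]              # d mu_i / d beta = mu_i * x_i
        w = 1.0 / var_fun(mu)            # inverse working variances
        score = D.T @ (w * (y - mu))     # quasi-score U(beta)
        info = D.T @ (w[:, None] * D)    # Gauss-Newton (expected) information
        step = np.linalg.solve(info, score)
        beta = beta + step
        if np.linalg.norm(step) < tol:
            break
    return beta
```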

3.
The maximum-likelihood estimation technique is known to provide consistent and most efficient regression estimates, but it is often tedious to implement, particularly in the modelling of correlated count responses. To overcome this limitation, researchers have developed semi- or quasi-likelihood functions that depend only on the correct specification of the mean and variance of the responses rather than on the distribution function. Moreover, quasi-likelihood estimation provides consistent estimates that are as efficient as those of the maximum-likelihood approach. Basically, the quasi-likelihood estimating function is a non-linear equation consisting of the gradient, Hessian and basic score matrices. Hence, to obtain estimates of the regression parameters, the quasi-likelihood equation is solved iteratively using the Newton–Raphson technique. However, the inverse of the Jacobian matrix involved in the Newton–Raphson method may not be easy to compute, since the matrix can be very close to singular. In this paper, we consider the use of vector divisions in solving quasi-likelihood equations. The vector divisions are implemented to form secant-method formulas. To assess the performance of the secant method with vector divisions, we generate cross-sectional Poisson counts using different sets of mean parameters. We compute the estimates of the regression parameters using the Newton–Raphson technique and vector divisions, and compare the number of non-convergent simulations under both algorithms.
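The vector-division secant formulas of the paper are not reproduced here; as a generic illustration of replacing Newton–Raphson with a secant-type update when the Jacobian is troublesome, the sketch below solves the Poisson score equations with Broyden's method, which maintains a rank-one approximation to the Jacobian. Function names are illustrative.

```python
import numpy as np

def poisson_score(beta, y, X):
    """Score (estimating) equations of a Poisson log-linear model."""
    mu = np.exp(X @ beta)
    return X.T @ (y - mu)

def broyden_solve(f, x0, tol=1e-8, max_iter=200):
    """Broyden's secant-type method for a nonlinear system f(x) = 0."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                 # initial Jacobian approximation
    fx = f(x)
    for _ in range(max_iter):
        step = np.linalg.solve(B, -fx)
        x_new = x + step
        fx_new = f(x_new)
        if np.linalg.norm(fx_new) < tol:
            return x_new
        # rank-one secant update of the Jacobian approximation
        B = B + np.outer(fx_new - fx - B @ step, step) / (step @ step)
        x, fx = x_new, fx_new
    return x

# usage: beta_hat = broyden_solve(lambda b: poisson_score(b, y, X), np.zeros(X.shape[1]))
```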

4.
This article studies a method to estimate the parameters governing the distribution of a stationary marked Gibbs point process. This procedure, known as the Takacs–Fiksel method, is based on estimating the left- and right-hand sides of the Georgii–Nguyen–Zessin formula and leads to a family of estimators owing to the possible choices of test functions. We propose several examples illustrating the interest and flexibility of this procedure. We also provide sufficient conditions, based on the model and the test functions, to derive asymptotic properties (consistency and asymptotic normality) of the resulting estimator. The different assumptions are discussed for exponential family models and for a large class of test functions. A short simulation study is presented to assess the correctness of the methodology and the asymptotic results.

5.
We derive explicit formulas for Sobol's sensitivity indices (SSIs) under generalized linear models (GLMs) with independent or multivariate normal inputs. We argue that the main-effect SSIs provide a powerful tool for variable selection under GLMs with identity links in polynomial regressions. We also show via examples that the SSI-based variable selection results are similar to those obtained by the random forest algorithm, but without the computational burden of data permutation. Finally, applying our results to the problem of gene network discovery, we identify through the SSI analysis of a public microarray dataset several novel higher-order gene–gene interactions missed by more standard inference methods. The relevant functions for SSI analysis derived here under GLMs with identity, log, and logit links are implemented and made available in the R package Sobol sensitivity.
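The closed-form GLM expressions of the paper are not reproduced here; as background, the sketch below estimates first-order (main-effect) Sobol indices by the generic pick-freeze Monte Carlo scheme for a toy polynomial surface with independent standard-normal inputs. Names and the example surface are illustrative.

```python
import numpy as np

def main_effect_sobol(model, d, n=100_000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices for a
    model with d independent standard-normal inputs."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, d))
    B = rng.standard_normal((n, d))
    y_a = model(A)
    var_y = y_a.var()
    s = np.empty(d)
    for i in range(d):
        AB = B.copy()
        AB[:, i] = A[:, i]                     # freeze input i at its "A" value
        s[i] = np.cov(y_a, model(AB))[0, 1] / var_y
    return s

# toy polynomial regression surface with identity link
print(main_effect_sobol(lambda X: 2 * X[:, 0] + X[:, 1] ** 2 + 0.5 * X[:, 2], d=3))
```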

6.
Most mainstream efficacy (scoring) functions currently in use are convex, and they perform unsatisfactorily on the right-skewed samples that are common in socio-economic data sets. By systematically examining the respective applicability of convex and concave efficacy functions, we argue that concave functions are necessary in practice. Three criteria (skewness, discriminability and P-P plots) are summarized and used to assess distributional shape, and common efficacy functions are compared in terms of how they adjust the distribution of raw indicator data. On this basis, an improved concave exponential efficacy function is proposed; it handles right-skewed data effectively and is more applicable and convenient than a convex efficacy function combined with a logarithmic pre-transformation.
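The article's improved function is not reproduced here; as a purely hypothetical illustration of how a concave exponential scoring (efficacy) function compresses a long right tail, the sketch below maps an indicator onto [0, 100].

```python
import numpy as np

def concave_exp_score(x, x_min, x_max, k=3.0):
    """A generic concave exponential efficacy function onto [0, 100].
    Larger k bends the curve more strongly, spreading out the dense
    low end of a right-skewed indicator (illustrative form only)."""
    z = np.clip((np.asarray(x, dtype=float) - x_min) / (x_max - x_min), 0.0, 1.0)
    return 100.0 * (1.0 - np.exp(-k * z)) / (1.0 - np.exp(-k))

# right-skewed sample: most mass near zero, a few large values
x = np.random.default_rng(0).lognormal(mean=0.0, sigma=1.0, size=1000)
scores = concave_exp_score(x, x.min(), x.max())
```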

7.
Pretest–posttest studies are an important and popular method for assessing the effectiveness of a treatment or an intervention in many scientific fields. While the treatment effect, measured as the difference between the two mean responses, is of primary interest, testing the difference of the two distribution functions for the treatment and the control groups is also an important problem. The Mann–Whitney test has been a standard tool for testing the difference of distribution functions with two independent samples. We develop empirical likelihood (EL) based methods for the Mann–Whitney test to incorporate the two unique features of pretest–posttest studies: (i) the availability of baseline information for both groups; and (ii) the missing-by-design structure of the data. Our proposed methods combine the standard Mann–Whitney test with the EL method of Huang, Qin and Follmann [(2008), ‘Empirical Likelihood-Based Estimation of the Treatment Effect in a Pretest–Posttest Study’, Journal of the American Statistical Association, 103(483), 1270–1280], the imputation-based empirical likelihood method of Chen, Wu and Thompson [(2015), ‘An Imputation-Based Empirical Likelihood Approach to Pretest–Posttest Studies’, The Canadian Journal of Statistics, accepted for publication], and the jackknife empirical likelihood method of Jing, Yuan and Zhou [(2009), ‘Jackknife Empirical Likelihood’, Journal of the American Statistical Association, 104, 1224–1232]. Theoretical results are presented and the finite-sample performance of the proposed methods is evaluated through simulation studies.
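For reference, the quantity behind the Mann–Whitney test is theta = P(Y > X), plus half the probability of ties, and its plug-in estimate is a double average over the two samples, as in the sketch below. The EL, imputation-based and jackknife EL refinements of the paper build on this same kernel and are not reproduced.

```python
import numpy as np

def mann_whitney_theta(x, y):
    """Plug-in estimate of theta = P(Y > X) + 0.5 * P(Y = X) from two
    independent samples x (control) and y (treatment)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    diff = y[:, None] - x[None, :]          # all pairwise differences
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)
```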

8.
We address the issue of recovering the structure of large sparse directed acyclic graphs from noisy observations of the system. We propose a novel procedure based on a specific formulation of the ℓ1-norm regularized maximum likelihood, which decomposes the graph estimation into two optimization sub-problems: topological structure and node order learning. We provide convergence inequalities for the graph estimator, as well as an algorithm to solve the induced optimization problem, in the form of a convex program embedded in a genetic algorithm. We apply our method to various data sets (including data from the DREAM4 challenge) and show that it compares favorably to state-of-the-art methods. The algorithm is available on CRAN as the R package GADAG.

9.
This paper develops an algorithm for uniform random generation over a constrained simplex, which is the intersection of a standard simplex and a given set. Uniform sampling from constrained simplexes has numerous applications in different fields, such as portfolio optimization, stochastic multi-criteria decision analysis, experimental design with mixtures and decision problems involving discrete joint distributions with imprecise probabilities. The proposed algorithm is developed by combining the acceptance–rejection and conditional methods along with the use of optimization tools. The acceptance rate of the algorithm is analytically compared to that of a crude acceptance–rejection algorithm, which generates points over the simplex and then rejects any points falling outside the intersecting set. Finally, using convex optimization, the setup phase of the algorithm is detailed for the special cases where the intersecting set is a general convex set, a convex set defined by a finite number of convex constraints or a polyhedron.
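The crude acceptance-rejection baseline that the paper compares against is easy to sketch: draw uniformly from the standard simplex (a symmetric Dirichlet) and keep the points that fall inside the intersecting set. The setup-phase optimizations of the proposed algorithm are not reproduced here; in_set is a user-supplied membership test and all names are illustrative.

```python
import numpy as np

def sample_constrained_simplex(d, in_set, n, seed=0, max_draws=1_000_000):
    """Crude acceptance-rejection sampler over a constrained simplex:
    uniform draws from the standard (d-1)-simplex, rejecting points
    that fall outside the intersecting set."""
    rng = np.random.default_rng(seed)
    kept, draws = [], 0
    while len(kept) < n and draws < max_draws:
        p = rng.dirichlet(np.ones(d))       # uniform on the standard simplex
        draws += 1
        if in_set(p):
            kept.append(p)
    return np.array(kept)

# example: simplex points whose first coordinate is at most 0.2
pts = sample_constrained_simplex(4, lambda p: p[0] <= 0.2, n=1000)
```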

10.
Value at risk (VaR) and expected shortfall (ES) are widely used measures of the risk of loss on a specific portfolio of financial assets. Adjusted empirical likelihood (AEL) is an important nonparametric likelihood method developed from empirical likelihood (EL); it overcomes the convex-hull limitation of EL. In this paper, we use the AEL method to estimate confidence regions for VaR and ES. Theoretically, we find that AEL has the same large-sample statistical properties as EL and guarantees a solution to the EL estimating equations. In addition, simulation results indicate that the coverage probabilities of the new confidence regions are higher than those of the original EL at the same nominal level. These results show that the AEL estimation of VaR and ES can be recommended for real applications.
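For context, the point estimates around which such confidence regions are built are the plain empirical VaR and ES of a loss sample; a minimal sketch follows. The AEL confidence-region construction itself is a separate inferential step and is not reproduced here.

```python
import numpy as np

def var_es_empirical(losses, alpha=0.95):
    """Empirical value at risk and expected shortfall at level alpha."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)          # empirical VaR
    es = losses[losses >= var].mean()         # mean loss beyond VaR
    return var, es
```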

11.
In this article, the operational details of the R package MultiOrd, which is designed for the generation of correlated ordinal data, are described, and examples of some important functions are given. The package provides a valuable tool, previously lacking, for generating multivariate ordinal data.

12.
Empirical likelihood (EL) is a nonparametric method based on observations. The EL method is formulated as a constrained optimization problem, whose solution is usually obtained via a duality approach. In this study, we propose an alternative algorithm to solve this constrained optimization problem, based on a Newton-type algorithm for the Lagrange multipliers. We provide a simulation study and a real-data example to compare the performance of the proposed algorithm with the classical algorithm. The simulation and real-data results show that the performance of the proposed algorithm is comparable with that of the existing algorithm in terms of efficiency and CPU time.
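As a concrete instance of solving the EL dual problem with Newton iterations on the Lagrange multiplier, the sketch below computes the empirical log-likelihood ratio for a mean. It is a minimal illustration (no step-halving or positivity safeguard on the weights), not the algorithm proposed in the paper, and the function name is hypothetical.

```python
import numpy as np

def el_log_ratio(x, mu, max_iter=50, tol=1e-10):
    """-2 log empirical likelihood ratio for H0: E[X] = mu, with the
    Lagrange multiplier found by Newton iterations on the dual problem."""
    x = np.asarray(x, dtype=float)
    z = x.reshape(len(x), -1) - np.atleast_1d(mu)
    lam = np.zeros(z.shape[1])
    for _ in range(max_iter):
        w = 1.0 + z @ lam                         # weights' denominators; must stay positive
        grad = (z / w[:, None]).sum(axis=0)       # dual score in lambda
        hess = -(z / w[:, None] ** 2).T @ z       # dual Hessian in lambda
        step = np.linalg.solve(hess, -grad)
        lam = lam + step
        if np.linalg.norm(grad) < tol:
            break
    return 2.0 * np.sum(np.log1p(z @ lam))
```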

13.
CVX‐based numerical algorithms are widely and freely available for solving convex optimization problems, but their application to optimal design problems has been limited. Using the CVX programs in MATLAB, we demonstrate their utility and flexibility over traditional algorithms in statistics for finding different types of optimal approximate designs under a convex criterion for nonlinear models. They are generally fast and easy to implement for any model and any convex optimality criterion. We derive theoretical properties of the algorithms and use them to generate new A‐, c‐, D‐ and E‐optimal designs for various nonlinear models, including multi‐stage and multi‐objective optimal designs. We report properties of the optimal designs and provide sample CVX program codes for some of our examples that users can amend to find tailored optimal designs for their problems. The Canadian Journal of Statistics 47: 374–391; 2019 © 2019 Statistical Society of Canada
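The paper's examples use CVX in MATLAB; a rough Python analogue with cvxpy is sketched below for a D-optimal approximate design of a quadratic regression on [-1, 1], where the known answer places weight 1/3 at each of -1, 0 and 1. The grid, model and tolerance are illustrative, and cvxpy is assumed to be installed with a conic solver that supports log_det.

```python
import numpy as np
import cvxpy as cp

# candidate points and regression vectors f(x) = (1, x, x^2); for a nonlinear
# model these rows would be gradients of the mean function at a nominal
# parameter value (a locally optimal design)
x_grid = np.linspace(-1.0, 1.0, 41)
F = np.column_stack([np.ones_like(x_grid), x_grid, x_grid ** 2])

w = cp.Variable(len(x_grid), nonneg=True)                          # design weights
M = sum(w[i] * np.outer(F[i], F[i]) for i in range(len(x_grid)))   # information matrix
prob = cp.Problem(cp.Maximize(cp.log_det(M)), [cp.sum(w) == 1])    # D-optimality
prob.solve()

support = x_grid[w.value > 1e-3]    # expected support: approximately {-1, 0, 1}
```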

14.
Recently, Bolfarine et al. [Bimodal symmetric-asymmetric power-normal families. Commun Statist Theory Methods. Forthcoming. doi:10.1080/03610926.2013.765475] introduced a bimodal asymmetric model having the normal and skew-normal as special cases. Here, we prove a stochastic representation for their bimodal asymmetric model and use it to generate random numbers from that model. It is shown how the resulting algorithm can be seen as an improvement over the rejection method. We also discuss practical and numerical aspects of estimating the model parameters by maximum likelihood under simple random sampling. We show that a unique stationary point of the likelihood equations exists except when all observations have the same sign. However, the location-scale extension of the model usually presents two or more roots, and this fact is illustrated here. The standard maximization routines available in the R system (Broyden–Fletcher–Goldfarb–Shanno (BFGS), Trust, Nelder–Mead) were considered in our implementations and exhibited similar performance. We show the usefulness of inspecting profile log-likelihoods as a method to obtain starting values for maximization and illustrate data analysis with the location-scale model in the presence of multiple roots. A simple Bayesian model is discussed in the context of a data set that presents a flat likelihood in the direction of the skewness parameter.

15.
This article presents a class of novel penalties defined under a unified framework, which includes lasso, SCAD and ridge as special cases, as well as novel functions such as the asymmetric quantile check function. The proposed class is capable of producing differentiable alternatives to the lasso penalty. We mainly focus on this case: we show its desirable properties, propose an efficient algorithm for parameter estimation and prove the theoretical properties of the resulting estimators. Moreover, we exploit the differentiability of the penalty function by deriving a novel Generalized Information Criterion (GIC) for model selection. The method is implemented in the R package DLASSO, freely available from CRAN, http://CRAN.R-project.org/package=DLASSO.
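The exact penalty family of the article is not reproduced here; to illustrate the general idea of a differentiable alternative to the lasso, the sketch below fits penalized least squares with the smooth surrogate sqrt(b^2 + eps^2) for |b| by plain gradient descent. Columns of X are assumed standardized, and the names and step size are illustrative.

```python
import numpy as np

def smooth_l1_fit(X, y, lam, eps=1e-4, lr=0.01, n_iter=5000):
    """Least squares with a differentiable lasso surrogate, fitted by
    gradient descent; sqrt(b^2 + eps^2) approaches |b| as eps -> 0."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        resid = y - X @ beta
        # gradient of 0.5/n * ||y - Xb||^2 + lam * sum sqrt(b^2 + eps^2)
        grad = -X.T @ resid / n + lam * beta / np.sqrt(beta ** 2 + eps ** 2)
        beta = beta - lr * grad
    return beta
```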

16.
We consider the problem of variable selection for a class of varying-coefficient models with instrumental variables. We focus on the case where some covariates are endogenous and some auxiliary instrumental variables are available. An instrumental-variable-based variable selection procedure is proposed using modified smooth-threshold estimating equations (SEEs). The proposed procedure can automatically eliminate the irrelevant covariates by setting the corresponding coefficient functions to zero, and simultaneously estimate the nonzero regression coefficients by solving the smooth-threshold estimating equations. The proposed variable selection procedure avoids a convex optimization problem, and is flexible and easy to implement. Simulation studies are carried out to assess the performance of the proposed variable selection method.

17.
Some concepts of stochastic dependence for continuous bivariate distribution functions are investigated by defining a convex transformation on their reliability or survival functions. We also study notions of bivariate hazard rate and hazard dependence. Some dependence orderings are characterized by using convex transformations. Illustrative examples are given to clarify the discussion.

18.
Value at risk and expected shortfall are the two most popular measures of financial risk, but the available R packages for their computation are limited. Here, we introduce a contributed R package written by the authors. It computes the two measures for over 100 parametric distributions, including all commonly known distributions. We expect that the R package will be useful to researchers and to the financial community.
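As one example of the closed-form expressions such a package evaluates, the VaR and ES of a normally distributed loss have the well-known forms sketched below; the package itself covers many more parametric families.

```python
from scipy.stats import norm

def var_es_normal(mu, sigma, alpha=0.99):
    """Closed-form VaR and ES at level alpha for a N(mu, sigma^2) loss."""
    z = norm.ppf(alpha)
    var = mu + sigma * z
    es = mu + sigma * norm.pdf(z) / (1 - alpha)   # tail expectation of the normal
    return var, es
```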

19.
We review sequential designs, including group sequential and two-stage designs, for testing or estimating a single binomial parameter. We use this simple case to introduce ideas common to many sequential designs, which in this case can be explained without explicitly using stochastic processes. We focus on methods provided by our newly developed R package, binseqtest, which exactly bound the Type I error rate of tests and exactly maintain proper coverage of confidence intervals. Within this framework, we review some allowable practical adaptations of the sequential design. We explore issues such as the following: How should the design be modified if no assessment was made at one of the planned sequential stopping times? How should the parameter be estimated if the study needs to be stopped early? What reasons for stopping early are allowed? How should inferences be made when the study is stopped for crossing the boundary, but later information is collected about responses of subjects who had enrolled before the decision to stop but had not responded by that time? Answers to these questions are demonstrated using basic methods available in our binseqtest R package. Supplementary materials for this article are available online.

20.
This article describes the operational details of the R package PoisNor, which is designed for simulating multivariate data with count and continuous variables and a prespecified correlation matrix, and gives examples of some important functions. The data-generation mechanism combines the “NORmal To Anything” principle with a recently established connection between Poisson and normal correlations. The package provides a unique and useful tool, previously lacking, for generating multivariate mixed data with Poisson and normal components.
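A bare-bones sketch of the NORTA transformation for one Poisson and one normal margin is given below. It shows only the transformation step: the resulting Poisson-normal correlation differs from the latent normal correlation rho_z, and the package's contribution is precisely the adjustment needed to hit a prespecified target correlation, which is not reproduced here. Names are illustrative.

```python
import numpy as np
from scipy.stats import norm, poisson

def norta_pois_normal(n, lam, mu, sigma, rho_z, seed=0):
    """Generate (Poisson(lam), Normal(mu, sigma^2)) pairs from a latent
    bivariate normal with correlation rho_z via the NORTA transformation."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho_z], [rho_z, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n)
    x_pois = poisson.ppf(norm.cdf(z[:, 0]), lam).astype(int)   # count margin
    x_norm = mu + sigma * z[:, 1]                               # normal margin
    return x_pois, x_norm
```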
