Similar Literature
20 similar documents found (search time: 171 ms)
1.
Based on the varying-coefficient model, this paper studies the problem of variable selection. The coefficient functions in the model are approximated by B-spline functions, and variable selection is carried out with a group coordinate descent algorithm combined with the LASSO, SCAD, and MCP penalty functions. Simulations compare the performance of the three penalties; the results confirm the effectiveness of the proposed method and show that MCP and SCAD outperform LASSO.
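For reference, the three penalties compared above have simple closed forms. A minimal NumPy sketch, where the parameter names and the conventional defaults a = 3.7 (SCAD) and a = 3 (MCP) are our choices rather than values from the paper; in the paper's group coordinate descent each penalty would act on the norm of a group of B-spline coefficients rather than on a single coefficient:

```python
import numpy as np

def lasso_pen(t, lam):
    """LASSO penalty: lam * |t| (constant shrinkage, hence more bias)."""
    return lam * np.abs(t)

def scad_pen(t, lam, a=3.7):
    """SCAD penalty: linear near zero, then tapering, then constant."""
    t = np.abs(t)
    mid = (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1))
    return np.where(t <= lam, lam * t,
                    np.where(t <= a * lam, mid, lam**2 * (a + 1) / 2))

def mcp_pen(t, lam, a=3.0):
    """MCP penalty: quadratically tapers to the constant a*lam^2/2."""
    t = np.abs(t)
    return np.where(t <= a * lam, lam * t - t**2 / (2 * a), a * lam**2 / 2)
```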

2.
《统计与信息论坛》2019,(11):122-128
Online public opinion spreads through media reports and platforms such as stock forums, Weibo, and blogs, influencing investor sentiment and behavior and thereby triggering fluctuations in financial markets. As attention to financial online public opinion has grown, scholars at home and abroad have studied its impact on financial markets both theoretically and empirically and have made considerable progress, but the literature still lacks a systematic review. This paper first summarizes measurement methods for online public opinion from three aspects: opinion data collection, keyword selection, and opinion quantification. It then discusses the effects of online public opinion on the stock market, derivative financial markets, and financial market stability, together with the empirical methods used. Finally, it argues that future research should focus on three areas: (1) using data mining techniques to construct multidimensional online opinion indices that fully reflect opinion information; (2) paying particular attention to the impact of online public opinion on systemic financial risk; and (3) adopting models that combine indicators of different time frequencies in empirical work.

3.
Predicting the Development of Weibo Public Opinion Hotspots Based on the Logistic Model   (Cited by 1: 0 self-citations, 1 by others)
Weibo is an important platform for online public opinion. A Weibo opinion hotspot generally develops through stages of emergence, diffusion, mitigation, and decline, so that its scale traces an S-shaped curve. A prediction model based on the Logistic model is therefore built, and a Sina Weibo case demonstrates that it can effectively predict the development of Weibo opinion hotspots in a self-organizing state.
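The S-shaped trajectory described above is the classic logistic growth curve. A toy sketch of fitting it to synthetic hotspot-scale data with SciPy; the data, starting values, and parameter names are ours, not the paper's:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic curve: scale saturates at K, growth rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

t = np.arange(24.0)                                  # hours since outbreak
y = logistic(t, 5000, 0.6, 10) + np.random.default_rng(0).normal(0, 80, 24)

(K, r, t0), _ = curve_fit(logistic, t, y, p0=[y.max(), 0.5, t.mean()])
print(f"predicted final scale K={K:.0f}, growth r={r:.2f}, midpoint t0={t0:.1f}")
```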

4.
Based on 41 typical public-opinion cases in the food and pharmaceutical industries from 2010 to 2013, this paper quantifies the impact of negative public opinion on company stock prices and builds a multiple regression model to analyze stock price performance and the effectiveness of corporate responses along four dimensions: response attitude, corporate reputation, trading-halt strategy, and regulatory measures. The empirical results show that sample firms exhibit significantly negative abnormal returns on the day an opinion event breaks out. A positive response attitude, a good corporate reputation, and rapid, effective measures all mitigate the event's impact. Listed companies should pay attention to product quality, maintain a good reputation, respond positively to opinion events, formulate corresponding measures, and strengthen information disclosure.
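The significantly negative abnormal return on the event day is the standard market-model event-study quantity. A schematic sketch; the 120-day estimation window, function names, and simulated data are our assumptions:

```python
import numpy as np

def abnormal_return(stock, market, event_idx, est_win=120):
    """Market model: fit alpha/beta on a pre-event window, then
    AR_t = R_t - (alpha + beta * Rm_t); return the event-day AR."""
    est = slice(event_idx - est_win, event_idx)
    beta, alpha = np.polyfit(market[est], stock[est], 1)
    return (stock - (alpha + beta * market))[event_idx]

rng = np.random.default_rng(0)
rm = rng.normal(0, 0.01, 160)
ri = 0.8 * rm + rng.normal(0, 0.005, 160)
ri[150] -= 0.04                      # simulated negative event-day shock
print(abnormal_return(ri, rm, event_idx=150))
```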

5.
This paper studies the multi-period portfolio selection problem under a conditional value-at-risk (CVaR) constraint. With short selling prohibited and risk controlled at a given level, a multi-period portfolio model is built with terminal wealth maximization as the objective; an auxiliary problem is constructed through a penalty-function mechanism, and a differential evolution algorithm solves the new model, yielding the optimal portfolio strategy for each period. Empirical analysis shows the model is reasonable and offers investors a way to choose portfolio strategies that match their risk preferences.
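A single-period sketch of the penalty mechanism described above: the CVaR cap and the no-short-selling constraint are folded into the objective so that differential evolution can search the weight space directly. All names, the toy scenarios, and the penalty weight rho are illustrative; the paper's model is multi-period:

```python
import numpy as np
from scipy.optimize import differential_evolution

def cvar(losses, alpha=0.95):
    """Sample CVaR: mean loss beyond the alpha-quantile (the VaR)."""
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

def penalized_objective(w, scenarios, cvar_cap, rho=1e3):
    """Maximize expected return subject to CVaR <= cvar_cap via a penalty."""
    w = np.abs(w) / (np.abs(w).sum() + 1e-12)  # no short selling, fully invested
    losses = -scenarios @ w
    violation = max(cvar(losses) - cvar_cap, 0.0)
    return -(scenarios @ w).mean() + rho * violation   # DE minimizes

scenarios = np.random.default_rng(0).normal(0.001, 0.02, (500, 4))
res = differential_evolution(penalized_objective, bounds=[(0, 1)] * 4,
                             args=(scenarios, 0.03), seed=0)
print(res.x / res.x.sum())                     # optimal portfolio weights
```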

6.
Analysis of a Risk Model with an Erlang(2) Claim Process under a Constant Dividend Barrier   (Cited by 1: 0 self-citations, 1 by others)
This paper analyzes the model through the familiar Gerber-Shiu discounted penalty function, solving for it via the relationship between particular and general solutions of the associated differential equation together with some existing results. It studies a risk model whose claim process follows an Erlang(2) distribution under a constant dividend barrier, as well as related ruin problems.
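The abstract treats this model analytically through the Gerber-Shiu function; as a complementary illustration, here is a Monte-Carlo sketch of the same surplus process, assuming exponential claim sizes (the claim distribution and all numbers are our illustrative choices):

```python
import numpy as np

def ruin_probability(u, b, c, lam, claim_mean, n_paths=5000, horizon=100.0):
    """Surplus starts at u, earns premiums at rate c capped at the dividend
    barrier b, with Erlang(2, lam) inter-claim times and exponential claims;
    estimates the finite-horizon ruin probability."""
    rng = np.random.default_rng(0)
    ruined = 0
    for _ in range(n_paths):
        x, t = u, 0.0
        while t < horizon:
            w = rng.gamma(2, 1 / lam)           # Erlang(2) waiting time
            x = min(x + c * w, b)               # surplus above b paid as dividends
            x -= rng.exponential(claim_mean)    # claim arrives
            t += w
            if x < 0:
                ruined += 1
                break
    return ruined / n_paths

print(ruin_probability(u=10.0, b=30.0, c=1.5, lam=2.0, claim_mean=1.0))
```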

7.
柯赟 《统计与决策》2016,(20):26-28
With the emergence of new media, judging the evolution of online public opinion on emergencies earlier and more accurately makes reasonable prediction and monitoring possible. This paper builds a prediction and monitoring model for such opinion based on a dynamic Bayesian network. Node variables with causal relationships in the network are predicted by computing association probabilities. Taking the 2014 Shanghai stampede as an example, the node variables of the event are identified, a concrete prediction procedure based on scores from ten experts is given, and the reasonably accurate predictions obtained demonstrate the feasibility and practicality of the method.
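A minimal forward-filtering sketch of a two-state dynamic Bayesian network of the kind described; the conditional probability tables and observation coding are invented for illustration, not the paper's expert-elicited values:

```python
import numpy as np

trans = np.array([[0.8, 0.2],   # P(state_t | state_{t-1}); 0 = calm, 1 = escalating
                  [0.3, 0.7]])
emit = np.array([[0.9, 0.1],    # P(media signal | state); 0 = quiet, 1 = intense
                 [0.2, 0.8]])

belief = np.array([0.5, 0.5])   # prior over the initial state
for obs in [1, 1, 0, 1]:        # toy observation sequence
    belief = trans.T @ belief   # predict one step ahead
    belief *= emit[:, obs]      # condition on the new observation
    belief /= belief.sum()
print(belief)                   # posterior P(state | observations so far)
```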

8.
Because information about online public opinion events is incomplete in the short run, emergency decisions on opinion crises often involve intuitionistic fuzzy indicators such as opinion popularity and the impact of the triggering event. This paper builds a multi-indicator emergency decision model from the intuitionistic information entropy of these indicators to obtain reasonable indicator weights, then aggregates each event's overall crisis level with intuitionistic fuzzy aggregation operators. Ranking the events by the score and accuracy of their intuitionistic fuzzy crisis values helps the government prioritize its emergency responses.
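A sketch of the entropy-weighting step, using one entropy measure from the intuitionistic fuzzy literature; the choice of measure, the toy indicator matrix, and all numbers are our assumptions, not the paper's:

```python
import numpy as np

def ifs_entropy(mu, nu):
    """Intuitionistic fuzzy entropy: near 1 when mu and nu are balanced
    (maximal indeterminacy), near 0 when one clearly dominates."""
    pi = 1.0 - mu - nu                        # hesitancy degree
    return (np.minimum(mu, nu) + pi) / (np.maximum(mu, nu) + pi)

# Rows: opinion events; columns: indicators (e.g. popularity, impact).
mu = np.array([[0.7, 0.5], [0.4, 0.8]])       # membership degrees
nu = np.array([[0.2, 0.3], [0.4, 0.1]])       # non-membership degrees

e = ifs_entropy(mu, nu).mean(axis=0)          # mean entropy per indicator
w = (1 - e) / (1 - e).sum()                   # low entropy -> high weight
print(w)
```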

9.
Online Public Opinion and Destination Image Communication: A Scale Developed with Grounded Theory   (Cited by 1: 0 self-citations, 1 by others)
This paper develops an online public opinion crisis scale using grounded theory and studies the crisis's effect on destination image. Apart from the tourism brand image, the destination's cognitive image has a significantly positive effect on the communication of its conative image; the online opinion crisis has a significantly negative effect on cognitive-image communication; and, except for the event-attribution and opinion-diffusion dimensions, the crisis also has a significantly negative effect on conative-image communication. Overall, except for opinion diffusion, the online opinion crisis significantly and negatively affects destination image communication. Destinations should therefore embrace the concept of "all-for-one tourism", balance media supervision with netizen criticism, respond scientifically to online opinion crises, and raise the media literacy of all actors in the new-media era, so as to foster positive destination image communication.

10.
Based on the conditional value-at-risk (CVaR) risk measure, this paper builds a cardinality-constrained portfolio optimization model in an integer programming setting, with risk minimization as the objective. The model is solved with a differential evolution algorithm, using penalty functions to handle the inequality constraints. An empirical study on sixteen stocks from the Shanghai and Shenzhen markets shows that the model is reasonable and the algorithm feasible.
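A sketch of how both the budget and the cardinality bound can be folded into a penalized CVaR objective for a population heuristic such as the differential evolution the paper uses (the threshold, penalty weight, and names are ours); it plugs into `scipy.optimize.differential_evolution` exactly as in the sketch under entry 5:

```python
import numpy as np

def objective(w, scenarios, k_max, alpha=0.95, rho=1e2, tol=1e-3):
    """Minimize sample CVaR subject to holding at most k_max assets,
    with the cardinality violation added as a penalty term."""
    w = np.abs(w) / (np.abs(w).sum() + 1e-12)
    losses = -scenarios @ w
    var = np.quantile(losses, alpha)
    cvar = losses[losses >= var].mean()
    excess = max(int((w > tol).sum()) - k_max, 0)
    return cvar + rho * excess
```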

11.
Examining the monitoring of online opinion orientation from a statistical perspective, this paper proposes building an orientation classification model on a rough classifier, reducing a complex early-warning indicator system to a single, intuitive tracking statistic, and using that statistic to dynamically track the trajectory of opinion orientation. The empirical study uses news comments related to the 2011 "Guo Meimei incident". The analysis shows that the negative orientation of online opinion grew continuously over the course of the event, in line with the basic facts, confirming the feasibility and applicability of the method.

12.
Based on panel data for 2008 to 2014, a regression model with tourism revenue and tourist arrivals as independent variables and regional GDP as the dependent variable is built to analyze the spillover effect of the tourism industry empirically. The study finds that tourism revenue and arrivals generate positive spillovers to the local economy, making the industry suitable for cluster development. The degree of tourism industry clustering in the 11 prefecture-level cities of Shanxi Province is then comprehensively measured and evaluated, both qualitatively and quantitatively, using urban tourism function, location quotient, and industrial spatial linkage methods, and recommendations are offered on this basis.
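The location quotient used in the clustering measure has a simple closed form; a toy computation with made-up figures:

```python
def location_quotient(region_sector, region_total, nation_sector, nation_total):
    """LQ = (e_i / e) / (E_i / E); values above 1 mean the sector is more
    concentrated in the region than in the reference economy."""
    return (region_sector / region_total) / (nation_sector / nation_total)

# Hypothetical: a city's tourism revenue share vs. the whole province's.
print(location_quotient(12.0, 150.0, 300.0, 6000.0))   # -> 1.6
```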

13.
The expectation maximization (EM) algorithm is a widely used approach for estimating the parameters of multivariate multinomial mixtures in a latent class model. However, its computing efficiency is unsatisfactory. This study proposes a fuzzy clustering algorithm (FCA) based on both the maximum penalized likelihood (MPL) for the latent class model and the modified penalty fuzzy c-means (PFCM) for normal mixtures. Numerical examples confirm that the FCA-MPL algorithm is more efficient (requiring fewer iterations) and more effective (as measured by the approximate relative ratio of accurate classification) than the EM algorithm.
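The PFCM variant the authors modify builds on the standard fuzzy c-means membership update; a background sketch of that update only, not the paper's FCA-MPL algorithm:

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Fuzzy c-means update: membership u_ik proportional to
    d_ik^(-2/(m-1)), normalized across the clusters k."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    u = d ** (-2.0 / (m - 1))
    return u / u.sum(axis=1, keepdims=True)

X = np.random.default_rng(0).normal(size=(6, 2))
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
print(fcm_memberships(X, centers).round(2))
```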

14.
Variable selection in cluster analysis is important yet challenging. It can be achieved by regularization methods, which realize a trade-off between clustering accuracy and the number of selected variables by using a lasso-type penalty. However, the calibration of the penalty term is open to criticism. Model selection methods are an efficient alternative, yet they require the difficult optimization of an information criterion involving combinatorial problems. First, most of these optimization algorithms are based on a suboptimal procedure (e.g. a stepwise method). Second, the algorithms are often computationally expensive because they need multiple calls of EM algorithms. Here we propose a new information criterion based on the integrated complete-data likelihood. It does not require the maximum likelihood estimate, and its maximization is simple and computationally efficient. The original contribution of our approach is to perform model selection without requiring any parameter estimation; parameter inference is then needed only for the single selected model. This approach is used for variable selection in a Gaussian mixture model with conditional independence assumed. Numerical experiments on simulated and benchmark datasets show that the proposed method often outperforms two classical approaches to variable selection. The approach is implemented in the R package VarSelLCM, available on CRAN.

15.
16.
In Wu and Zen (1999), a linear model selection procedure based on M-estimation was proposed which includes many classical model selection criteria as special cases, and the procedure was shown to be strongly consistent for a variety of penalty functions. In this paper, we investigate its small-sample performance for some choices of fixed penalty functions. The performance varies with the choice of penalty. Hence, a randomized penalty based on the observed data is proposed, which preserves the consistency property and provides improved performance over fixed choices of the penalty.

17.
The aim of this study is to assign weights w_1, …, w_m to m clustering variables Z_1, …, Z_m so that the k groups uncovered reveal more meaningful within-group coherence. We propose a new criterion to be minimized: the sum of the weighted within-cluster sums of squares plus a penalty for heterogeneity in the variable weights w_1, …, w_m. We present the computing algorithm for this k-means clustering, a working procedure for determining a suitable value of the penalty constant, and numerical examples, one simulated and two real.
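A sketch of evaluating a criterion of this shape; the paper's exact heterogeneity penalty is not reproduced here, so squared deviation from uniform weights stands in for it, and all names are ours:

```python
import numpy as np

def criterion(X, labels, centers, w, gamma):
    """Weighted within-cluster sum of squares plus gamma times a
    stand-in heterogeneity penalty on the variable weights."""
    wss = sum((((X[labels == k] - c) ** 2) * w).sum()
              for k, c in enumerate(centers))
    heterogeneity = ((w - 1.0 / len(w)) ** 2).sum()
    return wss + gamma * heterogeneity
```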

18.
In cancer diagnosis studies, high-throughput gene profiling has been extensively conducted, searching for genes whose expressions may serve as markers. Data generated from such studies have the "large d, small n" feature, with the number of genes profiled much larger than the sample size. Penalization has been extensively adopted for simultaneous estimation and marker selection. Because of small sample sizes, markers identified from the analysis of single data sets can be unsatisfactory. A cost-effective remedy is to conduct integrative analysis of multiple heterogeneous data sets. In this article, we investigate composite penalization methods for estimation and marker selection in integrative analysis. The proposed methods use the minimax concave penalty (MCP) as the outer penalty. Under the homogeneity model, the ridge penalty is adopted as the inner penalty. Under the heterogeneity model, the Lasso penalty and MCP are adopted as the inner penalty. Effective computational algorithms based on coordinate descent are developed. Numerical studies, including simulation and analysis of practical cancer data sets, show satisfactory performance of the proposed methods.
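A sketch of the composite structure described: an outer MCP applied to an inner penalty accumulated over one gene's coefficients across the M datasets (ridge inner under homogeneity; Lasso or MCP inner under heterogeneity). This is one plausible parameterization with our own names, not the paper's code:

```python
import numpy as np

def mcp(t, lam, a=3.0):
    """Minimax concave penalty."""
    t = np.abs(t)
    return np.where(t <= a * lam, lam * t - t**2 / (2 * a), a * lam**2 / 2)

def composite_penalty(beta_g, lam_out, lam_in=1.0, inner="ridge"):
    """Outer MCP of the summed inner penalty of one gene's coefficients."""
    if inner == "ridge":
        s = np.sum(beta_g ** 2)
    elif inner == "lasso":
        s = np.sum(np.abs(beta_g))
    else:                                   # inner MCP
        s = np.sum(mcp(beta_g, lam_in))
    return float(mcp(s, lam_out))
```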

19.
This paper proposes an adaptive model selection criterion with a data-driven penalty term. We treat model selection as an equality-constrained minimization problem and develop an adaptive model selection procedure based on the Lagrange optimization method. In contrast to Akaike's information criterion (AIC), the Bayesian information criterion (BIC), and most other existing criteria, the new criterion minimizes the model size while taking a measure of lack-of-fit as an adaptive penalty. Both theoretical results and simulations illustrate the power of this criterion with respect to consistency and pointwise asymptotic loss efficiency in the parametric and nonparametric cases.

20.
Cox's proportional hazards model is the most common way to analyze survival data. The model can be extended to include a ridge penalty in the presence of collinearity, or when a very large number of coefficients (e.g. with microarray data) has to be estimated. To maximize the penalized likelihood, an optimal weight for the ridge penalty has to be obtained, yet there is no definite rule for choosing it. One approach maximizes the leave-one-out cross-validated partial likelihood, but this is time-consuming and computationally expensive, especially in large datasets. We suggest modelling survival data through a Poisson model, whose log-likelihood is maximized by standard iterative weighted least squares. We illustrate this simple approach, which includes smoothing of the hazard function, and then add a ridge term to the likelihood, maximizing it with tools from generalized linear mixed models. We show that the optimal value of the penalty is found simply by computing the hat matrix of the system of linear equations and dividing its trace by a product of the estimated coefficients.
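A sketch of the core machinery: iterative weighted least squares for a ridge-penalized Poisson GLM, returning the trace of the hat matrix (the effective degrees of freedom that enters the penalty-selection rule). The survival-specific step of expanding follow-up into Poisson pseudo-observations with log-time offsets, and the paper's final division of the trace by a product of estimated coefficients, are omitted; all names are ours:

```python
import numpy as np

def ridge_poisson_iwls(X, y, offset, lam, n_iter=25):
    """Maximize the ridge-penalized Poisson log-likelihood by IWLS;
    return the coefficients and the trace of the hat matrix."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        mu = np.exp(X @ beta + offset)
        z = X @ beta + (y - mu) / mu               # working response
        A = X.T @ (mu[:, None] * X) + lam * np.eye(p)
        beta = np.linalg.solve(A, X.T @ (mu * z))
    mu = np.exp(X @ beta + offset)                 # weights at convergence
    A = X.T @ (mu[:, None] * X) + lam * np.eye(p)
    H = X @ np.linalg.solve(A, (mu[:, None] * X).T)
    return beta, np.trace(H)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = rng.poisson(np.exp(X @ np.array([0.5, -0.3, 0.0])))
beta, edf = ridge_poisson_iwls(X, y, np.zeros(200), lam=5.0)
print(beta.round(3), round(edf, 2))
```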
