Similar Articles
20 similar articles found.
1.
In today's increasingly fierce market competition, more and more companies have shifted from product-driven to customer-driven competitive strategies. With the rapid development of computer technology and data mining methods, direct marketing has also received growing attention, and the problem of selecting which target customers should be mailed a catalogue has attracted increasing interest from direct marketers. The scoring-model-with-gains-table selection method and the single-population prediction and selection method are the two customer selection methods in common use today. Under the assumption of a stationary market, customers' purchasing patterns follow the well-known theory of repeat buying. Based on this theory, this paper proposes a new customer selection method, the two-population prediction and selection method. A comparison of the three methods on real customer data shows that the new method performs best.
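For readers unfamiliar with the scoring-model-and-gains-table approach mentioned above, the following is a minimal sketch of the general idea using scikit-learn and pandas; the simulated features, the logistic scoring model and the decile cutoffs are illustrative assumptions, not the models or data used in the paper.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical customer data: three behavioural features and a response flag.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                     # e.g. recency, frequency, monetary value
y = (rng.random(1000) < 1 / (1 + np.exp(-X[:, 1]))).astype(int)

# Scoring model: predicted response probability for each customer.
model = LogisticRegression().fit(X, y)
score = model.predict_proba(X)[:, 1]

# Gains table: sort customers by score, split into deciles, and tabulate
# the cumulative share of responders captured by the top deciles.
df = pd.DataFrame({"score": score, "response": y})
df["decile"] = pd.qcut(df["score"].rank(method="first"), 10, labels=False)
gains = (df.sort_values("decile", ascending=False)
           .groupby("decile", sort=False)["response"]
           .agg(["count", "sum"]))
gains["cum_capture"] = gains["sum"].cumsum() / gains["sum"].sum()
print(gains)
```

A direct marketer would then mail only the top deciles, where the cumulative capture rate is highest.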

2.
In this paper we propose a robust Bayesian procedure for the estimation, testing, validation and selection of spatio-temporal autoregressive (STAR) models with neighbourhood effects, applied to the appraisal of dwelling prices. The methodology does not depend on asymptotic results and, unlike procedures previously proposed in the literature, takes into account the uncertainty associated with the estimation of the neighbourhood parameters of the model, giving more realism to the analysis. Moreover, a sequential algorithm for fast on-line forecasting is provided. The methodology is illustrated by means of a practical case from the real estate market of Zaragoza.

3.
杨青  王晨蔚 《统计研究》2019,36(3):65-77
As one of the classic models of deep learning, the long short-term memory (LSTM) neural network has a strong advantage in capturing long-range dependence in sequential data. Drawing on deep neural network optimization techniques, this paper builds a deep LSTM network and applies it to forecasting 30 global stock indices over three different horizons. The results show that: (1) the LSTM network generalizes well, giving stable forecasts for all indices at every horizon; (2) the LSTM network achieves excellent forecast accuracy, improving the average accuracy across all indices at every horizon relative to three benchmark models (SVR, MLP and ARIMA); and (3) the LSTM network effectively controls error variability, with its average forecast stability across all indices also higher than that of the three benchmarks at every horizon. Given its advantages in both accuracy and stability, the LSTM network has broad prospects for application in financial forecasting.
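As a rough sketch of what a deep LSTM forecaster of this kind can look like, the snippet below stacks two LSTM layers over sliding windows of a simulated price series using Keras; the window length, layer sizes and training settings are arbitrary assumptions and not the architecture reported in the paper.

```python
import numpy as np
import tensorflow as tf

# Hypothetical daily index levels; real index data would be used in practice.
prices = np.cumsum(np.random.randn(1200)) + 100.0

# Sliding windows: the previous 30 observations predict the next value.
window = 30
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]
X = X[..., np.newaxis]                         # shape (samples, timesteps, features)

# A small stacked ("deep") LSTM network with a single regression output.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32, return_sequences=True),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:-100], y[:-100], epochs=5, batch_size=32, verbose=0)

# One-step-ahead forecasts on the held-out tail of the series.
pred = model.predict(X[-100:], verbose=0)
```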

4.
This paper deals with the problem of selecting the best population from among k(≥ 2) two-parameter exponential populations. New selection procedures are proposed for selecting the unique best. The procedures include preliminary tests which allow the experimenter the option not to select if the statistical evidence is not significant. Two probabilities, the probability of making a selection and the probability of a correct selection, are controlled by these selection procedures. Comparisons between the proposed selection procedures and certain earlier procedures are also made. The results show the superiority of the proposed selection procedures in terms of the required sample size.

5.
1. Introduction. Since research on new-product market diffusion was introduced into technology forecasting and marketing statistics in the 1960s, using innovation diffusion models to accurately forecast the market diffusion of new products has attracted wide interest. Over decades of research, Western economists have built numerous new-product diffusion models and many parameter estimation methods for them. Reference [1] discusses and evaluates in detail the applicability conditions and scope of these estimation methods, dividing them into two broad classes: time-invariant estimation procedures, including ordinary least squares (OLS) [2], maximum likelihood estimation (MLE) [3] and nonlinear least squares (NLS) [4], and time-varying estimation procedures, including those developed by Bretschne…
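A concrete example of the diffusion models discussed here is the Bass model, whose parameters are often estimated by the nonlinear least squares (NLS) approach mentioned above. The sketch below fits the cumulative-adoption form of the Bass model with scipy on simulated data; the parameter values and sample are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, m, p, q):
    """Cumulative adoptions under the Bass diffusion model."""
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

# Simulated adoption history (assumed true values: m=5000, p=0.03, q=0.4).
t = np.arange(1, 16, dtype=float)
N = bass_cumulative(t, 5000.0, 0.03, 0.4) + np.random.normal(scale=50.0, size=t.size)

# Nonlinear least squares (NLS) estimation of (m, p, q).
est, cov = curve_fit(bass_cumulative, t, N, p0=(4000.0, 0.01, 0.3))
print("estimated m, p, q:", est)
```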

6.
Jing Yang  Fang Lu  Hu Yang 《Statistics》2017,51(6):1179-1199
In this paper, we develop a new estimation procedure based on quantile regression for semiparametric partially linear varying-coefficient models. The proposed estimation approach is empirically shown to be much more efficient than the popular least squares method for non-normal error distributions, while losing almost no efficiency for normal errors. Asymptotic normality of the proposed estimators for both the parametric and nonparametric parts is established. To achieve sparsity when irrelevant variables are present in the model, two variable selection procedures based on adaptive penalties are developed to select important parametric covariates as well as significant nonparametric functions. Moreover, both variable selection procedures are shown to enjoy the oracle property under some regularity conditions. Monte Carlo simulations are conducted to assess the finite-sample performance of the proposed estimators, and a real-data example illustrates the application of the proposed methods.
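The efficiency comparison described above can be previewed in its simplest linear special case: median (quantile) regression versus least squares under normal and heavy-tailed errors. The sketch below uses statsmodels for this toy comparison; it is not the semiparametric varying-coefficient estimator of the paper, and the data-generating values are assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, reps, slope = 200, 500, 2.0

def slope_mse(heavy_tailed):
    """Monte Carlo MSE of the slope estimate for OLS and median regression."""
    ols_err, lad_err = [], []
    for _ in range(reps):
        x = rng.normal(size=n)
        eps = rng.standard_t(df=3, size=n) if heavy_tailed else rng.normal(size=n)
        y = 1.0 + slope * x + eps
        X = sm.add_constant(x)
        ols_err.append((sm.OLS(y, X).fit().params[1] - slope) ** 2)
        lad_err.append((sm.QuantReg(y, X).fit(q=0.5).params[1] - slope) ** 2)
    return np.mean(ols_err), np.mean(lad_err)

print("normal errors (OLS MSE, median-regression MSE):", slope_mse(False))
print("t(3) errors   (OLS MSE, median-regression MSE):", slope_mse(True))
```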

7.
In this article, we propose a new empirical information criterion (EIC) for model selection which penalizes the likelihood of the data by a non-linear function of the number of parameters in the model. It is designed to be used where there are a large number of time series to be forecast. However, a bootstrap version of the EIC can be used where there is a single time series to be forecast. The EIC provides a data-driven model selection tool that can be tuned to the particular forecasting task.

We compare the EIC with other model selection criteria including Akaike’s information criterion (AIC) and Schwarz’s Bayesian information criterion (BIC). The comparisons show that for the M3 forecasting competition data, the EIC outperforms both the AIC and BIC, particularly for longer forecast horizons. We also compare the criteria on simulated data and find that the EIC does better than existing criteria in that case also.
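All three criteria share the same template, minimizing -2 log-likelihood plus a penalty in the number of parameters; the EIC differs in that its penalty is a non-linear, data-driven function. The sketch below selects an AR lag order with a generic criterion of this form; the particular non-linear penalty shown is a made-up placeholder, not the EIC penalty derived in the paper.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

y = np.cumsum(np.random.randn(300))        # hypothetical series to forecast

def criterion(loglike, k, penalty):
    """Generic information criterion: -2 log-likelihood + penalty(k)."""
    return -2.0 * loglike + penalty(k)

# Placeholder non-linear penalty; the actual EIC penalty is estimated from the data.
nonlinear_penalty = lambda k: 2.0 * k + 0.5 * k ** 2

scores = {}
for p in range(1, 9):
    fit = AutoReg(y, lags=p).fit()
    scores[p] = criterion(fit.llf, p + 2, nonlinear_penalty)   # +2: intercept and variance

best_lag = min(scores, key=scores.get)
print("selected lag order:", best_lag)
```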

8.
Stock & Watson (1999) consider the relative quality of different univariate forecasting techniques. This paper extends their study on forecasting practice, comparing the forecasting performance of two popular model selection procedures, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). This paper considers several topics: how AIC and BIC choose lags in autoregressive models on actual series, how models so selected forecast relative to an AR(4) model, the effect of using a maximum lag on model selection, and the forecasting performance of combining AR(4), AIC, and BIC models with an equal weight.
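A hedged sketch of this kind of comparison using statsmodels: select AR lag lengths by AIC and by BIC up to a maximum lag, forecast from the selected models and from an AR(4) benchmark, and combine the forecasts with equal weights. The series here is simulated, not the Stock & Watson data, and the horizon and maximum lag are assumptions.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg, ar_select_order

y = np.cumsum(np.random.randn(400))        # hypothetical macroeconomic series
train, horizon, maxlag = y[:-12], 12, 12

# Lag lengths chosen by AIC and by BIC, searching up to the maximum lag.
lags_aic = ar_select_order(train, maxlag=maxlag, ic="aic").ar_lags
lags_bic = ar_select_order(train, maxlag=maxlag, ic="bic").ar_lags

forecasts = {}
for name, lags in {"AIC": lags_aic, "BIC": lags_bic, "AR(4)": 4}.items():
    forecasts[name] = AutoReg(train, lags=lags).fit().forecast(steps=horizon)

# Equal-weight combination of the AR(4), AIC- and BIC-selected forecasts.
forecasts["combined"] = np.mean(
    [forecasts["AR(4)"], forecasts["AIC"], forecasts["BIC"]], axis=0
)
```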

9.
This article shows how to account for nonsample information in the classical forecasting framework. We explicitly incorporate two elements: a default decision and a probability reflecting the confidence associated with it. Starting from the default decision, the new estimator increases the objective function only as long as its first derivatives are statistically different from zero. It includes the classical estimator as a special case and has clear analogies with Bayesian estimators. The properties of the new estimator are studied with a detailed risk analysis. Finally, we illustrate its performance with applications to mean-variance portfolio selection and to GDP forecasting.

10.
In a two-treatment trial, a two-sided test is often used to reach a conclusion. Usually we are interested in a two-sided test because there is no prior preference between the two treatments and we want a three-decision framework. When a standard control is just as good as the new experimental treatment (which has the same toxicity and cost), we accept both treatments; only when the standard control is clearly worse or better than the new experimental treatment do we choose a single treatment. In this paper, we extend the concept of a two-sided test to the multiple-treatment trial where three or more treatments are involved. The procedure turns out to be a subset selection procedure; however, the theoretical framework and performance requirements are different from those of existing subset selection procedures. Two procedures (exclusion or inclusion) are developed here for the case of normal data with equal known variance. If the sample size is large, they can be applied with unknown variance and with binomial data or survival data with random censoring.

11.
Using Granger causality tests to identify causal relationships among economic variables is an extremely common mode of analysis in economic research. In practice, however, the power of the Granger causality test is affected by the choice of model form and by the testing strategy. This paper therefore analyses the principles underlying the choice among three model forms for Granger causality testing (a VAR in levels, a VAR in differences, and a VEC model), and discusses four testing strategies related to model selection: the choice of the number of variables, the choice of lag order, testing the integration order of the variables, and choosing the dimension of the cointegration space. A relatively robust practical procedure for Granger causality testing is then given.
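A rough sketch of such a practical procedure, using statsmodels on placeholder data: test each variable for a unit root, select the VAR lag order, and then run the Granger causality test on the chosen specification. The example fits a level VAR for simplicity; with integrated or cointegrated variables, a differenced VAR or VEC model would be considered instead, as the article discusses.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.api import VAR

# Placeholder bivariate data; in practice two economic time series would be used.
rng = np.random.default_rng(2)
x = rng.normal(size=300).cumsum()
y = 0.5 * np.concatenate([[0.0], x[:-1]]) + rng.normal(size=300)
data = pd.DataFrame({"x": x, "y": y})

# Step 1: unit-root (integration) test on each variable.
for col in data.columns:
    print(col, "ADF p-value:", round(adfuller(data[col])[1], 3))

# Step 2: lag-order selection for the VAR.
p = VAR(data).select_order(maxlags=8).selected_orders["aic"]

# Step 3: Granger causality test in the chosen specification (a level VAR here;
# with unit roots or cointegration a differenced VAR or VEC model may be preferred).
res = VAR(data).fit(p)
print(res.test_causality("y", ["x"], kind="f").summary())
```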

12.
In the framework of cluster analysis based on Gaussian mixture models, it is usually assumed that all the variables provide information about the clustering of the sample units. Several variable selection procedures are available to detect the structure of interest for the clustering when this structure is contained in a variable sub-vector. Currently, in these procedures a variable is assumed to play one of (up to) three roles: (1) informative, (2) uninformative and correlated with some informative variables, (3) uninformative and uncorrelated with any informative variable. A more general approach for modelling the role of a variable is proposed by taking into account the possibility that the variable vector provides information about more than one structure of interest for the clustering. This approach is developed by assuming that such information is given by non-overlapping and possibly correlated sub-vectors of variables; it is also assumed that the model for the variable vector is equal to a product of conditionally independent Gaussian mixture models (one for each variable sub-vector). Details about model identifiability, parameter estimation and model selection are provided. The usefulness and effectiveness of the described methodology are illustrated using simulated and real datasets.
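As a heavily simplified illustration of how one might ask which variable subsets carry clustering information, the sketch below compares, for each candidate sub-vector, the BIC of a one-component and a two-component Gaussian mixture fitted with scikit-learn. This is not the product-of-mixtures model proposed in the paper; the data, the number of components and the exhaustive subset search are all illustrative assumptions.

```python
from itertools import combinations

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Columns 0-1 carry a two-cluster structure; column 2 is uninformative noise.
informative = np.vstack([rng.normal(0.0, 1.0, (100, 2)), rng.normal(4.0, 1.0, (100, 2))])
X = np.hstack([informative, rng.normal(size=(200, 1))])

# For each variable subset, compare a one-component and a two-component mixture.
for k in (1, 2, 3):
    for subset in combinations(range(X.shape[1]), k):
        cols = list(subset)
        bic1 = GaussianMixture(n_components=1, random_state=0).fit(X[:, cols]).bic(X[:, cols])
        bic2 = GaussianMixture(n_components=2, random_state=0).fit(X[:, cols]).bic(X[:, cols])
        # A negative difference favours the clustered (two-component) description.
        print(subset, "BIC(2) - BIC(1) =", round(bic2 - bic1, 1))
```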

13.
We study the simultaneous occurrence of long memory and nonlinear effects, such as parameter changes and threshold effects, in time series models and apply our modeling framework to daily realized measures of integrated variance. We develop asymptotic theory for parameter estimation and propose two model-building procedures. The methodology is applied to stocks of the Dow Jones Industrial Average during the period 2000 to 2009. We find strong evidence of nonlinear effects in financial volatility. An out-of-sample analysis shows that modeling these effects can improve forecast performance. Supplementary materials for this article are available online.

14.
A number of goodness-of-fit and model selection procedures related to the Weibull distribution are reviewed. These procedures include probability plotting, correlation-type goodness-of-fit tests, and chi-square goodness-of-fit tests. The Kolmogorov-Smirnov, Kuiper, and Cramér-von Mises test statistics for a completely specified hypothesis based on censored data are also reviewed, and these test statistics based on complete samples for the unspecified-parameters case are considered. Goodness-of-fit tests based on sample spacings, and a goodness-of-fit test for the Weibull process, are also discussed.

Model selection procedures for selecting between a Weibull and gamma model, a Weibull and lognormal model, and for selecting from among all three models are considered. Also tests of exponential versus Weibull and Weibull versus generalized gamma are mentioned.
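A minimal sketch of one naive version of such checks with scipy: fit Weibull, gamma and lognormal models to the same sample, apply a Kolmogorov-Smirnov check to each fit, and compare maximized log-likelihoods. Fixing the location parameter at zero, and applying the KS test with estimated parameters (a situation the reviewed literature treats more carefully), are simplifying assumptions made only for illustration.

```python
import numpy as np
from scipy import stats

data = stats.weibull_min.rvs(c=1.8, scale=2.0, size=200, random_state=0)

candidates = {
    "weibull":   stats.weibull_min,
    "gamma":     stats.gamma,
    "lognormal": stats.lognorm,
}
for name, dist in candidates.items():
    params = dist.fit(data, floc=0)                    # fix the location parameter at 0
    ks = stats.kstest(data, dist.cdf, args=params)
    loglik = np.sum(dist.logpdf(data, *params))
    print(f"{name:9s}  KS p-value={ks.pvalue:.3f}  log-likelihood={loglik:.1f}")
```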

15.
The problem of selecting the best population from among a finite number of populations in the presence of uncertainty arises in many scientific investigations and has been studied extensively. Many selection procedures have been derived for different selection goals. However, most of these selection procedures, being frequentist in nature, do not indicate how to incorporate the information in a particular sample to give a data-dependent measure of the correct selection achieved for that sample. They often assign the same decision and probability of correct selection to two different sample values, one of which actually seems intuitively much more conclusive than the other. The methodology of conditional inference offers an approach which achieves both frequentist interpretability and a data-dependent measure of conclusiveness. By partitioning the sample space into a family of subsets, the achieved probability of correct selection is computed by conditioning on which subset the sample falls in. In this paper, the partition considered is the so-called continuum partition, while the selection rules are both fixed-size and random-size subset selection rules. Under the distributional assumption of monotone likelihood ratio, results on the least favourable configuration and alpha-correct selection are established. These results are not only useful in themselves, but are also used to design a new sequential procedure with elimination for selecting the best of k binomial populations. Comparisons between this new procedure and some other sequential selection procedures with regard to total expected sample size and some risk functions are carried out by simulations.

16.
Although a large number of selection procedures have been published in the statistics literature, the selection approach has received only limited use in applications. One drawback to the use of such procedures has been the lack of parameter estimates, which prevents quantitative comparisons among the treatments. To partially address this criticism, we present a general method for constructing unbiased estimators of the success probabilities after the termination of a sequential experiment involving two or more Bernoulli populations. Some theoretical properties are presented, and examples are provided for several different selection procedures.

17.
Assume that a k-element vector time series follows a vector autoregressive (VAR) model. Obtaining simultaneous forecasts of the k elements of the vector time series is an important problem. Based on the Bonferroni inequality, Lutkepohl (1991) derived procedures which construct conservative joint forecast regions for the VAR model. In this paper, we propose to use an exact method which provides shorter prediction intervals than does the Bonferroni method. Three illustrative examples are given for comparison of the various VAR forecasting procedures.
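To make the Bonferroni construction concrete, the sketch below builds a conservative joint forecast region for a k-variable VAR by widening each marginal prediction interval to level 1 - alpha/k, using statsmodels; the simulated data, lag order and coverage level are assumptions, and the exact method proposed in the paper is not shown.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(4)
data = pd.DataFrame(rng.normal(size=(250, 3)).cumsum(axis=0), columns=["y1", "y2", "y3"])

k, alpha, steps = data.shape[1], 0.05, 4
res = VAR(data).fit(2)

# Marginal prediction intervals at level 1 - alpha/k give a conservative
# (Bonferroni) joint forecast region with coverage of at least 1 - alpha.
point, lower, upper = res.forecast_interval(
    data.values[-res.k_ar:], steps=steps, alpha=alpha / k
)
print("lower bounds:\n", lower)
print("upper bounds:\n", upper)
```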

18.
In this paper, we seek to establish asymptotic results for selective inference procedures, removing the assumption of Gaussianity. The class of selection procedures we consider is determined by affine inequalities, which we refer to as affine selection procedures. Examples of affine selection procedures include selective inference along the solution path of the least absolute shrinkage and selection operator (LASSO), as well as selective inference after fitting the LASSO at a fixed value of the regularization parameter. We also consider some tests in penalized generalized linear models. Our result proves asymptotic convergence in the high-dimensional setting where n<p, and where n can be as small as a logarithmic factor of the dimension p for some procedures.
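As a minimal illustration of the second example mentioned above, the sketch below fits the LASSO at a fixed regularization value with scikit-learn and records the active set; that active set is the event the affine selection inequalities describe. The selective p-value computation itself is not shown, and the data and regularization value are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n, p = 50, 200                                 # high-dimensional setting: n < p
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = 2.0
y = X @ beta + rng.normal(size=n)

# LASSO at a fixed regularization value; the resulting active set is the event
# that the affine selection inequalities describe and that inference conditions on.
fit = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(fit.coef_)
print("selected variables:", selected)
```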

19.
In this paper, we examine the potential determinants of foreign direct investment. For this purpose, we apply new exact subset selection procedures, which are based on idealized assumptions, as well as their possibly more plausible empirical counterparts to an international data set to select the optimal set of predictors. Unlike the standard model selection procedures AIC and BIC, which penalize only the number of variables included in a model, and the subset selection procedures RIC and MRIC, which also consider the total number of available candidate variables, our data-specific procedures even take the correlation structure of all candidate variables into account. Our main focus is on a new procedure, which we have designed for situations where some of the potential predictors are certain to be included in the model. For a sample of 73 developing countries, this procedure selects only four variables, namely imports, net income from abroad, gross capital formation, and GDP per capita. An important secondary finding of our study is that the data-specific procedures, which are based on extensive simulations and are therefore very time-consuming, can be approximated reasonably well by the much simpler exact methods.

20.
Summary: A class of selection procedures for selecting the least dispersive distribution from k available distributions is proposed. This problem finds applications in reliability and engineering. In engineering, for example, the goal of the experimenter is to select, from the available set of competing firms manufacturing components of the desired specifications for the same purpose, the firm whose components have the least dispersive distribution. The proposed procedures can be used even when the underlying distributions belong to different families. Applications of the proposed selection procedures are discussed by taking exponential, gamma and Lehmann-type distributions. Performance of the proposed selection procedures is assessed through a simulation study. Implementation of the proposed selection procedure is illustrated through an example.
