Similar Articles
Found 20 similar articles.
1.
ABSTRACT

Online consumer product ratings data are increasing rapidly. While most current graphical displays represent only average ratings, Ho and Quinn proposed an easily interpretable graphical display based on an ordinal item response theory (IRT) model, which successfully accounts for systematic interrater differences. Conventionally, the discrimination parameters in IRT models are constrained to be positive, particularly when modeling scored data from educational tests. In this article, we use real-world ratings data to demonstrate that such a constraint can greatly affect parameter estimation, and we explain this impact through rater behavior. We also discuss correlation among raters and assess the prediction accuracy of both the constrained and the unconstrained models. The results show that the unconstrained model performs better when a larger fraction of rater pairs exhibit negatively correlated ratings.
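To make the role of the sign constraint concrete, the following minimal sketch (not the authors' code) computes rating-category probabilities in an ordinal probit IRT model with rater-specific cutpoints; leaving the discrimination parameter unconstrained lets the model represent raters who systematically rate against the consensus. All names are illustrative:

```python
import numpy as np
from scipy.stats import norm

def ordinal_irt_probs(theta, a, cutpoints):
    """Category probabilities in an ordinal (probit) IRT rating model.

    theta     : latent product quality
    a         : rater discrimination -- its sign is left unconstrained here
    cutpoints : increasing rater-specific cutpoints, shape (K-1,)
    """
    bounds = np.concatenate(([-np.inf], cutpoints, [np.inf]))
    cdf = norm.cdf(bounds - a * theta)   # P(Y <= k) at each category boundary
    return np.diff(cdf)                  # P(Y = k) for k = 1, ..., K

cut = np.array([-1.0, 0.0, 1.0])
# A rater with a > 0 rates better products higher; a < 0 reverses the pattern,
# which the conventional positivity constraint would rule out.
print(ordinal_irt_probs(1.0, 1.5, cut).round(3))    # mass on high categories
print(ordinal_irt_probs(1.0, -1.5, cut).round(3))   # mass on low categories
```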

2.
We propose model-free measures of Granger causality in mean between random variables. Unlike existing measures, ours can detect and quantify nonlinear causal effects. The new measures are based on nonparametric regressions and are defined as logarithmic functions of restricted and unrestricted mean square forecast errors. They are easily and consistently estimated by replacing the unknown mean square forecast errors with their nonparametric kernel estimates. We derive the asymptotic normality of the nonparametric estimator of the causality measures, which we use to build tests of their statistical significance. We also establish the validity of a smoothed local bootstrap that can be used to perform the tests in finite samples. Monte Carlo simulations reveal that the proposed test has good finite-sample size and power properties for a variety of data-generating processes and sample sizes. Finally, we illustrate the empirical importance of measuring nonlinear causality in mean by quantifying the degree of nonlinear predictability of the equity risk premium using the variance risk premium. Our empirical results show that the variance risk premium is a very good predictor of the risk premium at horizons of less than 6 months; we also find a high degree of predictability at the 1-month horizon that can be attributed to a nonlinear causal effect. Supplementary materials for this article are available online.
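A minimal sketch of the logarithmic MSFE-ratio idea, using leave-one-out Nadaraya–Watson estimates with an ad hoc bandwidth; the paper's asymptotic theory and bootstrap test are omitted, and all names are illustrative:

```python
import numpy as np

def nw_loo(X, y, h):
    """Leave-one-out Nadaraya-Watson fit with a Gaussian product kernel."""
    d2 = ((X[:, None, :] - X[None, :, :]) / h) ** 2
    K = np.exp(-0.5 * d2.sum(axis=2))
    np.fill_diagonal(K, 0.0)             # leave-one-out
    return K @ y / K.sum(axis=1)

def causality_measure(y, x, h=0.5):
    """ln(restricted MSFE / unrestricted MSFE): > 0 suggests x causes y in mean."""
    yt, y1, x1 = y[1:], y[:-1], x[:-1]
    mse_r = np.mean((yt - nw_loo(y1[:, None], yt, h)) ** 2)   # own past only
    mse_u = np.mean((yt - nw_loo(np.column_stack([y1, x1]), yt, h)) ** 2)
    return np.log(mse_r / mse_u)

rng = np.random.default_rng(0)
x = rng.standard_normal(400)
y = np.empty(400); y[0] = 0.0
for t in range(1, 400):                   # nonlinear causal effect of x on y
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] ** 2 + 0.5 * rng.standard_normal()
print(causality_measure(y, x))            # positive for this DGP
```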

3.
Smoothing Splines and Shape Restrictions   (Total citations: 2; self-citations: 0; citations by others: 2)
Constrained smoothing splines are discussed under order restrictions on the shape of the function m. We consider shape constraints of the type m^(r) ≥ 0, i.e. positivity, monotonicity, convexity, and so on. (Here, for an integer r ≥ 0, m^(r) denotes the r-th derivative of m.) The paper contains three results: (1) constrained smoothing splines achieve optimal rates in shape-restricted Sobolev classes; (2) they are equivalent to two-step procedures of the following type: (a) in a first step the unconstrained smoothing spline is calculated; (b) in a second step the unconstrained smoothing spline is "projected" onto the constrained set, where the projection is taken with respect to a Sobolev-type norm. This result may motivate new algorithmic approaches, and it helps to understand the form of the estimator and its asymptotic properties. (3) The infinite number of constraints can be replaced by a finite number with only a small loss of accuracy; this is discussed for estimation of a convex function.
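The two-step structure in result (2) can be sketched as follows. Note that the paper projects with respect to a Sobolev-type norm; the plain isotonic (L2) projection below is only a stand-in to illustrate the "fit, then project" idea, and the smoothing level is ad hoc:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 200))
y = np.log1p(5 * x) + 0.15 * rng.standard_normal(200)   # monotone truth

# Step (a): unconstrained smoothing spline.
spline = UnivariateSpline(x, y, s=200 * 0.15 ** 2)
grid = np.linspace(0, 1, 400)
unconstrained = spline(grid)

# Step (b): "project" the unconstrained fit onto the monotone set.
# Plain isotonic regression is an L2 projection; the paper's Sobolev-type
# projection would also control derivatives.
monotone = IsotonicRegression().fit_transform(grid, unconstrained)
```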

4.
We propose a modification of the local polynomial estimation procedure that accounts for the "within-subject" correlation present in panel data. The proposed procedure is simple to compute and has a closed-form expression. We study its asymptotic bias and variance and show that it outperforms the working-independence estimator uniformly up to the first order. A simulation study shows that the efficiency gains of the proposed method in the presence of "within-subject" correlation can be significant in small samples. For illustration, the procedure is applied to explore the impact of market concentration on airfare.

5.
Abstract. Zero-inflated data abound in ecological studies as well as in other scientific fields. Nonparametric regression with a zero-inflated response may be studied via the zero-inflated generalized additive model (ZIGAM), which uses a probabilistic mixture of a degenerate distribution at zero and a regular exponential family component. We propose the (partially) constrained ZIGAM, which assumes that some covariates affect the probability of non-zero-inflation and the mean of the regular exponential family component proportionally on the link scales. When this assumption holds, the new approach provides a unified framework for modelling zero-inflated data that is more parsimonious and efficient than the unconstrained ZIGAM. We develop an iterative estimation algorithm and discuss the construction of confidence intervals for the estimator. Some asymptotic properties are derived. We also propose a Bayesian model selection criterion for choosing between the unconstrained and the constrained ZIGAM. The new methods are illustrated with both simulated data and a real application to jellyfish abundance data.
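A minimal sketch of the proportionality idea for a zero-inflated Poisson response, with the paper's smooth predictor simplified to a line: one linear predictor eta(x) drives the Poisson mean, and the same eta, scaled by delta, drives the probability of the non-degenerate component on the logit scale. Names, simulated data, and starting values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

def negloglik(params, x, y):
    """Constrained zero-inflated Poisson: log(mu) = eta(x) and
    logit(P(non-degenerate)) = alpha + delta * eta(x)."""
    b0, b1, alpha, delta = params
    eta = b0 + b1 * x
    mu = np.exp(eta)
    p = expit(alpha + delta * eta)        # probability of the Poisson component
    zero = y == 0
    ll = np.sum(np.log((1 - p[zero]) + p[zero] * np.exp(-mu[zero])))
    ll += np.sum(np.log(p[~zero]) - mu[~zero] + y[~zero] * eta[~zero]
                 - gammaln(y[~zero] + 1))
    return -ll

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 500)
eta = 0.5 + 1.0 * x
keep = rng.random(500) < expit(-0.5 + 2.0 * eta)      # proportional link scales
y = np.where(keep, rng.poisson(np.exp(eta)), 0)
fit = minimize(negloglik, np.zeros(4), args=(x, y), method="Nelder-Mead")
print(fit.x.round(2))                     # roughly (0.5, 1.0, -0.5, 2.0)
```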

6.
Summary. We consider the analysis of extreme shapes rather than the more usual mean- and variance-based shape analysis. In particular, we consider extreme shape analysis in two applications: human muscle fibre images, where we compare healthy and diseased muscles, and temporal sequences of DNA shapes from molecular dynamics simulations. One feature of the shape space is that it is bounded, so we consider estimators which use prior knowledge of the upper bound when present. Peaks-over-threshold methods and maximum-likelihood-based inference are used. We introduce fixed end point and constrained maximum likelihood estimators, and we discuss their asymptotic properties for large samples. It is shown that in some cases the constrained estimators have half the mean-square error of the unconstrained maximum likelihood estimators. The new estimators are applied to the muscle and DNA data, and practical conclusions are given.

7.
In many economic models, theory restricts the shape of functions through monotonicity or curvature conditions. This article reviews and presents a framework for constrained estimation and inference to test for shape conditions in parametric models. We show that "regional" shape-restricting estimators have important advantages in model fit and flexibility over standard "local" or "global" shape-restricting estimators. Our empirical illustration is the first to impose and test all shape restrictions required by economic theory simultaneously in the "Berndt and Wood" data. We find that this dataset is consistent with duality theory, whereas previous studies have found violations of economic theory. We discuss policy consequences for key parameters, such as whether energy and capital are complements or substitutes.

8.
Conventional approaches to inference about efficiency in parametric stochastic frontier (PSF) models are based on percentiles of the estimated distribution of the one-sided error term, conditional on the composite error. When used as prediction intervals, their coverage is poor when the signal-to-noise ratio is low and improves only slowly as the sample size increases. We show that prediction intervals estimated by bagging yield much better coverage than the conventional approach, even with low signal-to-noise ratios. We also present a bootstrap method that gives confidence-interval estimates for (conditional) expectations of efficiency, with good coverage properties that improve with sample size. In addition, researchers who estimate PSF models typically reject models, samples, or both when residuals are skewed in the "wrong" direction, i.e., in a direction that would seem to indicate an absence of inefficiency. We show that correctly specified models can generate samples with "wrongly" skewed residuals, even when the variance of the inefficiency process is nonzero. Both our bagging and bootstrap methods provide useful information about inefficiency and model parameters irrespective of whether the residuals are skewed in the desired direction.

9.
This article uses Bayesian marginal likelihood analysis to compare univariate models of stock return behavior and to test for structural breaks in the equity premium. The analysis favors a model that relates the equity premium to Markov-switching changes in the level of market volatility and accommodates volatility feedback. For this model, there is evidence of a one-time structural break in the equity premium in the 1940s, with no evidence of additional breaks in the postwar period. The break in the 1940s corresponds to a permanent reduction in the general level of stock market volatility. Meanwhile, there appears to be no change in the underlying risk preferences relating the equity premium to market volatility. The estimated unconditional equity premium drops from an annualized 12% before the break to 9% after the break.

10.
A survey on health insurance was conducted in July and August of 2011 in three major cities in China. In this study, we analyze the household coverage rate, which is an important index of the quality of health insurance. The coverage rate is restricted to the unit interval [0, 1], and it differs from other rate data in that the "two corners" are nonzero: there are nonzero probabilities of both zero and full coverage. Such data are also encountered in economics, finance, medicine, and many other areas, and existing approaches may not accommodate them properly. In this study, we develop a three-part model that properly describes fractional response variables with non-ignorable zeros and ones. We investigate estimation and inference under two proportional constraints on the regression parameters. Such constraints can lead to more lucid interpretations and fewer unknown parameters, and hence more accurate estimation. A simulation study compares the performance of the constrained and unconstrained models and shows that estimation under the constraints can be more efficient. The analysis of the household health insurance coverage data suggests that household size, income, expenses, and the presence of chronic disease are associated with insurance coverage.
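A minimal sketch of the unconstrained three-part likelihood (a multinomial logit for the zero and one corners plus a beta regression on the interior); the paper's proportional constraints would additionally tie the slope vectors together. The layout, names, and simulated data are illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln, expit

def negloglik(params, X, y):
    """Three-part model for fractional responses with non-ignorable 0s and 1s.

    Illustrative layout: g0, g1 are multinomial-logit coefficients for the
    zero and one parts; b gives the beta-regression mean on (0, 1); the last
    entry is the log precision of the beta component.
    """
    p = X.shape[1]
    g0, g1, b = params[:p], params[p:2 * p], params[2 * p:3 * p]
    phi = np.exp(params[-1])
    e0, e1 = np.exp(X @ g0), np.exp(X @ g1)
    denom = 1.0 + e0 + e1
    p0, p1 = e0 / denom, e1 / denom                   # P(y = 0), P(y = 1)
    zero, one = y == 0, y == 1
    mid = ~zero & ~one
    mu = expit(X[mid] @ b)                            # beta mean for interior y
    a_, b_ = mu * phi, (1.0 - mu) * phi
    ll = np.log(p0[zero]).sum() + np.log(p1[one]).sum()
    ll += (np.log(1.0 / denom[mid])                   # P(0 < y < 1) = 1/denom
           + (a_ - 1) * np.log(y[mid]) + (b_ - 1) * np.log(1 - y[mid])
           - betaln(a_, b_)).sum()
    return -ll

rng = np.random.default_rng(0)
n = 800
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = rng.beta(2, 2, n)
y[rng.random(n) < 0.15] = 0.0                         # exact zeros
y[rng.random(n) < 0.10] = 1.0                         # full coverage
fit = minimize(negloglik, np.zeros(3 * 2 + 1), args=(X, y), method="BFGS")
print(np.round(fit.x, 2))
```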

11.
The penalized spline is a popular method for function estimation when the assumption of "smoothness" is valid. In this paper, methods for estimation and inference are proposed for penalized splines under additional shape constraints, such as monotonicity or convexity. The constrained penalized spline estimator is shown to have the same convergence rates as the corresponding unconstrained penalized spline, although in practice the squared error loss is typically smaller for the constrained versions. The penalty parameter may be chosen by generalized cross-validation, which also provides a method for determining whether the shape restrictions hold. The method is not a formal hypothesis test but is shown to have good large-sample properties, and simulations show that it compares well with existing tests for monotonicity. Extensions to the partial linear model, the generalized regression model, and the varying-coefficient model are given, and examples demonstrate the utility of the methods. The Canadian Journal of Statistics 40: 190–206; 2012 © 2012 Statistical Society of Canada

12.
This paper concerns model selection for autoregressive time series when the observations are contaminated by a trend. We propose an adaptive least absolute shrinkage and selection operator (LASSO) type model selection method, in which the trend is estimated by B-splines, the detrended residuals are calculated, and the residuals are then used as if they were observations to optimize an adaptive LASSO type objective function. The oracle properties of this adaptive LASSO model selection procedure are established; that is, the proposed method identifies the true model with probability approaching one as the sample size increases, and the asymptotic properties of the estimators are not affected by replacing the observations with detrended residuals. Intensive simulation studies of several constrained and unconstrained autoregressive models confirm the theoretical results. The method is illustrated with two time series data sets: annual U.S. tobacco production and annual tree-ring width measurements.
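A minimal sketch of the two-stage procedure, assuming a spline-based trend fit and the standard reweighting trick for the adaptive LASSO; the smoothing and penalty levels here are ad hoc rather than data-driven, and all names are illustrative:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(2)
n, p = 400, 6                        # sample size, candidate AR order
t = np.arange(n)
trend = 0.002 * t + np.sin(2 * np.pi * t / n)
e = np.empty(n); e[:2] = 0.0
for i in range(2, n):                # true model: AR(2)
    e[i] = 0.6 * e[i - 1] - 0.3 * e[i - 2] + rng.standard_normal()
x = trend + e

# Step 1: estimate the trend with a smoothing spline and detrend.
resid = x - UnivariateSpline(t, x, s=n)(t)

# Step 2: adaptive LASSO on the detrended residuals, via the usual
# reweighting: scale each lag column by |OLS pilot coefficient|.
Y = resid[p:]
Z = np.column_stack([resid[p - k: n - k] for k in range(1, p + 1)])
w = np.abs(LinearRegression().fit(Z, Y).coef_)        # pilot weights
lasso = Lasso(alpha=0.05, fit_intercept=False).fit(Z * w, Y)
coef = lasso.coef_ * w               # adaptive-LASSO AR coefficients
print(np.round(coef, 2))             # nonzero entries indicate selected lags
```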

13.
ABSTRACT

Logit-linear and probit-linear two-part models can be used to analyze data that are a mixture of zeros and positive continuous responses. The slopes in the linear part of the model can be constrained to be proportional to the slopes in the logit or probit part. In this article, it is shown that imposing such a constraint decreases (in the Loewner ordering) the asymptotic covariance matrix of the maximum likelihood estimates. A case study is provided using coronary artery calcification data from the Multi-Ethnic Study of Atherosclerosis.
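A minimal sketch of a logit-linear two-part likelihood with the proportionality constraint imposed: the linear-part slopes are a single scalar c times the logit-part slopes, so the constrained model has fewer free parameters. The parameter layout and simulated data are illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import norm

def negloglik(params, X, y):
    """Logit-linear two-part model with proportional slopes.

    Illustrative layout: gamma = (g0, g_slopes) for the logit part; the
    linear part uses intercept b0 and slopes c * g_slopes, so one scalar c
    replaces a full slope vector. First column of X is assumed to be 1s.
    """
    p = X.shape[1]
    gamma = params[:p]
    b0, c, log_sigma = params[p:p + 3]
    sigma = np.exp(log_sigma)
    prob_pos = expit(X @ gamma)          # P(y > 0 | x)
    mean_pos = b0 + c * (X[:, 1:] @ gamma[1:])
    pos = y > 0
    ll = np.sum(np.log1p(-prob_pos[~pos]))
    ll += np.sum(np.log(prob_pos[pos])
                 + norm.logpdf(y[pos], mean_pos[pos], sigma))
    return -ll

rng = np.random.default_rng(3)
n = 600
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
pos = rng.random(n) < expit(X @ np.array([0.2, 1.0]))
y = np.where(pos, 1.5 + 2.0 * X[:, 1] + 0.5 * rng.standard_normal(n), 0.0)
fit = minimize(negloglik, np.zeros(2 + 3), args=(X, y), method="BFGS")
print(np.round(fit.x, 2))                # c should come out near 2.0
```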

14.
In this paper, a method for estimating monotone, convex, and log-concave densities is proposed. The estimation procedure consists of an unconstrained kernel estimator that is modified in a second step to satisfy the desired shape constraint using monotone rearrangements. It is shown that the resulting estimate is itself a density and shares the asymptotic properties of the unconstrained estimate. A short simulation study illustrates the finite-sample behavior.
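The rearrangement step is especially simple on an equally spaced grid, where it amounts to sorting the values of the unconstrained estimate; a minimal sketch for a decreasing density:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
x = rng.exponential(size=300)            # true density is decreasing on [0, inf)

grid = np.linspace(0, 6, 500)
f_hat = gaussian_kde(x)(grid)            # step 1: unconstrained kernel estimate

# Step 2: monotone (decreasing) rearrangement -- sort the estimated values
# in decreasing order along the equally spaced grid. The values themselves
# are unchanged, so the estimate keeps (approximately) the same total mass.
f_mono = np.sort(f_hat)[::-1]
```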

15.
A nonparametric mixture model specifies that observations arise from a mixture distribution ∫ f(x, θ) dG(θ), where the mixing distribution G is completely unspecified. A number of algorithms have been developed to obtain unconstrained maximum-likelihood estimates of G, but none of these algorithms leads to estimates when functional constraints are present. In many cases, there is a natural interest in functionals φ(G) of the mixing distribution, such as its mean and variance, and profile likelihoods and confidence intervals for φ(G) are desired. In this paper we develop a penalized generalization of the ISDM algorithm of Kalbfleisch and Lesperance (1992) that can be used to solve the problem of constrained estimation. We also discuss its use in various applications. Convergence results and numerical examples are given for the generalized ISDM algorithm, and asymptotic results are developed for the likelihood-ratio test statistics in the multinomial case.

16.
P. Reimnitz, Statistics, 2013, 47(2): 245–263
The classical "Two-Armed Bandit" problem with Bernoulli-distributed outcomes is considered. First, the terms "asymptotic near-admissibility" and "asymptotic near-optimality" are defined. A nontrivial strategy that is asymptotically nearly admissible and (with respect to a certain Bayes risk) asymptotically nearly optimal is presented, and these properties are proved. Finally, it is discussed how these results generalize to non-Bernoulli cases and to the "k-Armed Bandit" problem (k ≥ 2).

17.
We introduce a new test of isotropy or uniformity on the circle based on the Gini mean difference of the sample arc-lengths, and we obtain both the exact and asymptotic distributions under the null hypothesis of circular uniformity. We also provide a table of upper percentile values of the exact distribution for small to moderate sample sizes. Illustrative examples in circular data analysis are given. It is shown that a "generalized" Gini mean difference test has better asymptotic efficiency than the corresponding "generalized" Rao test in the sense of Pitman asymptotic relative efficiency.
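A minimal sketch of the test statistic: sort the angles, form the n arc-lengths (including the wraparound arc), and compute their Gini mean difference. Critical values would come from the exact or asymptotic null distributions in the paper, which are not reproduced here:

```python
import numpy as np

def gini_arc_statistic(angles):
    """Gini mean difference of the sample arc-lengths (angles in radians)."""
    a = np.sort(np.asarray(angles) % (2 * np.pi))
    arcs = np.diff(np.append(a, a[0] + 2 * np.pi))   # n arcs, incl. wraparound
    n = arcs.size
    return np.abs(arcs[:, None] - arcs[None, :]).sum() / (n * (n - 1))

rng = np.random.default_rng(5)
# Clustered samples leave a few long arcs and many short ones, inflating
# the Gini mean difference relative to the uniform case.
print(gini_arc_statistic(rng.uniform(0, 2 * np.pi, 50)))   # smaller
print(gini_arc_statistic(rng.vonmises(0.0, 4.0, 50)))      # larger
```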

18.
To improve the goodness of fit between a regression model and observations, one can make the model more complex; however, this reduces statistical power when the added complexity does not significantly improve the model. In the context of two-phase (segmented) logistic regression, model evaluation must include testing a simple (one-phase) against a two-phase logistic regression model. In this article, we propose and examine a class of likelihood-ratio-type tests for detecting a change in the logistic regression parameters that splits the model into two phases. We show that the proposed tests, based on Shiryayev–Roberts-type statistics, are on average the most powerful. The article also argues for a new approach to controlling the Type I error of tests when the parameters of the null hypothesis are unknown. Although the suggested approach is partly based on Bayes-factor-type testing procedures, the classical significance levels of the proposed tests remain under control. We demonstrate applications of the average most powerful tests to an epidemiologic study entitled "Time to pregnancy and multiple births."
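A minimal sketch of the underlying likelihood-ratio scan for a single covariate, with the Shiryayev–Roberts-type weighting replaced by a plain maximum over candidate changepoints for brevity; near-unpenalized logistic fits stand in for maximum likelihood, and all names are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def loglik(X, y):
    """Log-likelihood of a near-unpenalized logistic fit."""
    p = LogisticRegression(C=1e6).fit(X, y).predict_proba(X)[:, 1]
    return -log_loss(y, p, normalize=False)

def two_phase_scan(x, y, taus):
    """Max log-likelihood ratio of two-phase vs. one-phase logistic models,
    scanning candidate changepoints tau in the covariate x."""
    X = x[:, None]
    base = loglik(X, y)
    best = -np.inf
    for tau in taus:
        lo = x <= tau
        if y[lo].min() == y[lo].max() or y[~lo].min() == y[~lo].max():
            continue                  # need both classes in each segment
        best = max(best, loglik(X[lo], y[lo]) + loglik(X[~lo], y[~lo]) - base)
    return best

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 400)
p = 1 / (1 + np.exp(-np.where(x < 0, -1.0 + 0.2 * x, 1.0 + 2.0 * x)))
y = (rng.random(400) < p).astype(int)     # true changepoint at x = 0
print(two_phase_scan(x, y, np.quantile(x, np.linspace(0.1, 0.9, 17))))
```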

19.
The three-factor model proposed by Fama and French explains the risk premium in stock returns well. Within a state-space framework, this article treats the risk-factor coefficients as state variables and the market risk premium as the observed variable, constructing a time-varying three-factor model to capture the time-varying behavior of stock market prices. The results show that estimating the time-varying risk-factor coefficients with the Kalman filter improves the accuracy and coherence of the estimates; the evolution of the factor coefficients is consistent with policy and environmental influences in China's A-share market, and the time-varying three-factor model has greater explanatory power once irrational noise is removed.
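A minimal sketch of the state-space recursion, assuming random-walk factor loadings and fixed, illustrative noise variances (in practice these would be estimated, e.g., by maximum likelihood); the data and function names are illustrative:

```python
import numpy as np

def kalman_tv_betas(r, F, q=1e-4, h=1e-4):
    """Kalman filter for a time-varying three-factor model.

    Observation: r_t = F_t' beta_t + eps_t,   eps_t ~ N(0, h)
    State:       beta_t = beta_{t-1} + eta_t, eta_t ~ N(0, q * I)
    r : (T,) excess returns; F : (T, 3) factor returns (MKT, SMB, HML).
    """
    T, k = F.shape
    beta, P = np.zeros(k), np.eye(k)
    betas = np.empty((T, k))
    for t in range(T):
        P = P + q * np.eye(k)                 # predict state covariance
        f = F[t]
        S = f @ P @ f + h                     # innovation variance
        K = P @ f / S                         # Kalman gain
        beta = beta + K * (r[t] - f @ beta)   # update state estimate
        P = P - np.outer(K, f) @ P
        betas[t] = beta
    return betas

rng = np.random.default_rng(6)
T = 300
F = rng.standard_normal((T, 3)) * 0.02
true = np.cumsum(rng.standard_normal((T, 3)) * 0.01, axis=0) + [1.0, 0.4, -0.2]
r = (F * true).sum(axis=1) + 0.01 * rng.standard_normal(T)
betas = kalman_tv_betas(r, F)
print(betas[-1].round(2), true[-1].round(2))  # filtered betas roughly track truth
```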

20.
The generalized weighted premium principle includes many classical premium principles; most importantly, some of them have a positive safety loading. Although some work has been done on this principle, the credibility premium derived under it cannot be applied directly in practice because of the difficulty of the calculations and of estimating the structural parameters. In this article, we consider a new form of credibility estimator under the generalized weighted premium principle. In addition, the consistency of the estimator is shown, and simulations compare it with previous results. The results show that the new estimator is better than existing estimators in the mean-squared-error sense. Finally, the structural parameters in the credibility factor are estimated in models with multiple contracts.

