Similar Documents
 20 similar documents retrieved (search time: 31 ms)
1.
We develop a general approach to estimation and inference for income distributions using grouped or aggregate data that are typically available in the form of population shares and class mean incomes, with unknown group bounds. We derive generic moment conditions and an optimal weight matrix that can be used for generalized method-of-moments (GMM) estimation of any parametric income distribution. Our derivation of the weight matrix and its inverse allows us to express the seemingly complex GMM objective function in a relatively simple form that facilitates estimation. We show that our proposed approach, which incorporates information on class means as well as population proportions, is more efficient than maximum likelihood estimation of the multinomial distribution, which uses only population proportions. In contrast to the earlier work of Chotikapanich, Griffiths, and Rao, and Chotikapanich, Griffiths, Rao, and Valencia, which did not specify a formal GMM framework, did not provide methodology for obtaining standard errors, and restricted the analysis to the beta-2 distribution, we provide standard errors for estimated parameters and relevant functions of them, such as inequality and poverty measures, and we provide methodology for all distributions. A test statistic for testing the adequacy of a distribution is proposed. Using eight countries/regions for the year 2005, we show how the methodology can be applied to estimate the parameters of the generalized beta distribution of the second kind (GB2), and its special-case distributions, the beta-2, Singh–Maddala, Dagum, generalized gamma, and lognormal distributions. We test the adequacy of each distribution and compare predicted and actual income shares, where the number of groups used for prediction can differ from the number used in estimation. Estimates and standard errors for inequality and poverty measures are provided. Supplementary materials for this article are available online.  相似文献   
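
As a rough illustration of the grouped-data idea (and not the paper's optimal-weight GMM), the sketch below fits a lognormal distribution by treating the unknown group bounds as the quantiles implied by the candidate parameters at the observed cumulative population shares, and matching the observed class mean incomes with an identity weight matrix. The data, function names, and starting values are purely illustrative.

import numpy as np
from scipy import optimize
from scipy.stats import norm

# Illustrative grouped data: population shares and class mean incomes
shares = np.array([0.2, 0.2, 0.2, 0.2, 0.2])
class_means = np.array([1.1, 2.0, 2.9, 4.2, 8.3])

def implied_class_means(mu, sigma):
    # Cut points of the candidate lognormal at the cumulative population shares
    cum = np.concatenate(([1e-12], np.cumsum(shares)[:-1], [1 - 1e-12]))
    z = norm.ppf(cum)
    # E[X | class j] for X ~ lognormal(mu, sigma) on the implied quantile interval
    overall = np.exp(mu + 0.5 * sigma ** 2)
    num = norm.cdf(z[1:] - sigma) - norm.cdf(z[:-1] - sigma)
    return overall * num / shares

def objective(theta):
    mu, log_sigma = theta
    g = implied_class_means(mu, np.exp(log_sigma)) - class_means   # moment conditions
    return g @ g                                                   # identity weight matrix

res = optimize.minimize(objective, x0=[np.log(class_means.mean()), np.log(0.7)],
                        method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)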

2.
We propose a flexible method to approximate the subjective cumulative distribution function of an economic agent about the future realization of a continuous random variable. The method can closely approximate a wide variety of distributions while maintaining weak assumptions on the shape of distribution functions. We show how moments and quantiles of general functions of the random variable can be computed analytically and/or numerically. We illustrate the method by revisiting the determinants of income expectations in the United States. A Monte Carlo analysis suggests that a quantile-based flexible approach can be used to successfully deal with censoring and possible rounding levels present in the data. Finally, our analysis suggests that the performance of our flexible approach matches that of a correctly specified parametric approach and is clearly better than that of a misspecified parametric approach.  相似文献   
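
A minimal sketch of the general quantile-based idea, not the authors' exact approximation: interpolate a handful of elicited CDF points with a monotone, shape-preserving spline and compute moments and quantiles of the implied subjective distribution numerically. The elicited points, the assumption that all probability mass lies within the elicited range, and the variable names are illustrative.

import numpy as np
from scipy.interpolate import PchipInterpolator
from scipy.integrate import quad
from scipy.optimize import brentq

# Elicited points of a subjective CDF: income thresholds (in $1000) and the
# reported probabilities that next year's income falls below each threshold.
x = np.array([20.0, 35.0, 50.0, 75.0, 100.0])
p = np.array([0.00, 0.30, 0.60, 0.90, 1.00])

cdf = PchipInterpolator(x, p)   # monotone, shape-preserving interpolant of the CDF

# Subjective mean, assuming all probability mass lies in [x[0], x[-1]]:
# E[X] = x_min + integral of (1 - F(x)) over the support
mean = x[0] + quad(lambda v: 1.0 - float(cdf(v)), x[0], x[-1])[0]

# Subjective median by inverting the interpolated CDF
median = brentq(lambda v: float(cdf(v)) - 0.5, x[0], x[-1])
print(mean, median)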

3.
Bandwidth plays an important role in determining the performance of nonparametric estimators, such as the local constant estimator. In this article, we propose a Bayesian approach to bandwidth estimation for local constant estimators of time-varying coefficients in time series models. We establish a large sample theory for the proposed bandwidth estimator and Bayesian estimators of the unknown parameters involved in the error density. A Monte Carlo simulation study shows that (i) the proposed Bayesian estimators for bandwidth and parameters in the error density have satisfactory finite sample performance; and (ii) our proposed Bayesian approach achieves better performance in estimating the bandwidths than the normal reference rule and cross-validation. Moreover, we apply our proposed Bayesian bandwidth estimation method for the time-varying coefficient models that explain Okun’s law and the relationship between consumption growth and income growth in the U.S. For each model, we also provide calibrated parametric forms of the time-varying coefficients. Supplementary materials for this article are available online.  相似文献   
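
For concreteness, a minimal sketch of the local constant estimator that the bandwidth feeds into, with the bandwidth simply fixed rather than estimated by the paper's Bayesian approach; the simulated data and the function name are illustrative.

import numpy as np

def local_constant_tvc(y, X, tau, h):
    """Local constant (Nadaraya-Watson) estimate of the time-varying coefficient
    vector beta(tau) in y_t = x_t' beta(t/T) + e_t, using a Gaussian kernel with
    bandwidth h (fixed here for illustration)."""
    T = len(y)
    t = np.arange(1, T + 1) / T
    w = np.exp(-0.5 * ((t - tau) / h) ** 2)        # Gaussian kernel weights
    XtWX = X.T @ (w[:, None] * X)
    XtWy = X.T @ (w * y)
    return np.linalg.solve(XtWX, XtWy)

# Illustrative data: one regressor plus intercept, slope drifting over time
rng = np.random.default_rng(0)
T = 400
x = rng.normal(size=T)
beta_true = 1.0 + np.sin(2 * np.pi * np.arange(1, T + 1) / T)
y = 0.5 + beta_true * x + 0.3 * rng.normal(size=T)
X = np.column_stack([np.ones(T), x])

grid = np.linspace(0.05, 0.95, 19)
betas = np.array([local_constant_tvc(y, X, tau, h=0.1) for tau in grid])
print(betas[:, 1])   # estimated slope path along the sample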

4.
Specification of household Engel curves by nonparametric regression
This paper demonstrates the usefulness of nonparametric regression analysis for the functional specification of household Engel curves.

After a brief review in Section 2 of the literature on demand functions and equivalence scales and the functional specifications used, we first discuss in Section 3 the issues of using income versus total expenditure, the origin and nature of the error terms in the light of utility theory, and the interpretation of empirical demand functions. We reach the unorthodox view that household demand functions should be interpreted as conditional expectations relative to prices, household composition, and either income or the conditional expectation of total expenditure (rather than total expenditure itself), where the latter conditional expectation is taken relative to income, prices, and household composition. These two forms appear to be equivalent. This result also solves the simultaneity problem: the error variance matrix is no longer singular. Moreover, the errors are in general heteroskedastic.

In Section 4 we discuss the model and the data, and in Section 5 we review the nonparametric kernel regression approach.

In Section 6 we derive the functional form of our household Engel curves from nonparametric regression results, using the 1980 budget survey for the Netherlands, in order to avoid model misspecification. Thus the model is derived directly from the data, without restricting its functional form. The nonparametric regression results are then translated into suitable parametric functional specifications, i.e., we choose parametric functional forms in accordance with the nonparametric regression results. These parametric specifications are estimated by least squares, and various parameter restrictions are tested in order to simplify the models. This yields very simple final specifications of the household Engel curves involved, namely linear functions of income and the number of children in two age groups.  相似文献

5.
For right-censored data, the accelerated failure time (AFT) model is an alternative to the commonly used proportional hazards regression model. It is a linear model for the (log-transformed) outcome of interest, and is particularly useful for censored outcomes that are not time-to-event, such as laboratory measurements. We provide a general and easily computable definition of the R2 measure of explained variation under the AFT model for right-censored data. We study its behavior under different censoring scenarios and under different error distributions; in particular, we also study its robustness when the parametric error distribution is misspecified. Based on Monte Carlo investigation results, we recommend the log-normal distribution as a robust error distribution to be used in practice for the parametric AFT model, when the R2 measure is of interest. We apply our methodology to an alcohol consumption during pregnancy data set from Ukraine.  相似文献   
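
A minimal sketch of fitting a right-censored log-normal AFT model by maximum likelihood, together with one common explained-variation summary, var(Xb)/(var(Xb) + sigma^2); this illustrates that particular definition and is not necessarily the paper's R2 measure, and the simulated data are illustrative.

import numpy as np
from scipy import optimize
from scipy.stats import norm

def neg_loglik(theta, logt, X, delta):
    """Negative log-likelihood of a log-normal AFT model with right censoring:
    log T = X beta + sigma * eps, eps ~ N(0, 1)."""
    beta, log_sigma = theta[:-1], theta[-1]
    sigma = np.exp(log_sigma)
    z = (logt - X @ beta) / sigma
    ll_event = norm.logpdf(z) - np.log(sigma)   # observed events: normal density of log T
    ll_cens = norm.logsf(z)                     # right-censored observations: survival term
    return -np.sum(np.where(delta == 1, ll_event, ll_cens))

# Illustrative data with some right censoring
rng = np.random.default_rng(1)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
logT = X @ np.array([1.0, 0.8]) + 0.5 * rng.normal(size=n)
logC = np.log(rng.exponential(np.exp(1.8), size=n))
logt = np.minimum(logT, logC)
delta = (logT <= logC).astype(int)

res = optimize.minimize(neg_loglik, x0=np.zeros(3), args=(logt, X, delta), method="BFGS")
beta_hat, sigma_hat = res.x[:-1], np.exp(res.x[-1])
lin_pred = X @ beta_hat
r2 = lin_pred.var() / (lin_pred.var() + sigma_hat ** 2)   # one common explained-variation summary
print(beta_hat, sigma_hat, r2)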

6.
We consider inference in randomized longitudinal studies with missing data that is generated by skipped clinic visits and loss to follow-up. In this setting, it is well known that full data estimands are not identified unless unverified assumptions are imposed. We assume a non-future dependence model for the drop-out mechanism and partial ignorability for the intermittent missingness. We posit an exponential tilt model that links non-identifiable distributions and distributions identified under partial ignorability. This exponential tilt model is indexed by non-identified parameters, which are assumed to have an informative prior distribution, elicited from subject-matter experts. Under this model, full data estimands are shown to be expressed as functionals of the distribution of the observed data. To avoid the curse of dimensionality, we model the distribution of the observed data using a Bayesian shrinkage model. In a simulation study, we compare our approach to a fully parametric and a fully saturated model for the distribution of the observed data. Our methodology is motivated by, and applied to, data from the Breast Cancer Prevention Trial.  相似文献   

7.
The semiparametric LABROC approach of fitting a binormal model is justified for estimating the AUC as a global index of accuracy (except for bimodal forms), but for estimating a local index of accuracy such as the TPF it may be biased when the data depart severely from binormality. We extended parametric ROC analysis for quantitative data to the case where one or both members of the pair of distributions are mixtures of Gaussians (MG), in particular bimodal forms. We showed analytically that the overall AUC and TPF are weighted mixtures of the AUCs and TPFs of the components of the underlying mixture distributions. In a simulation study of six configurations of MG distributions, {bimodal, normal} and {bimodal, bimodal} pairs, the parameters of the MG distributions were estimated using the EM algorithm. The results showed that the estimated AUC from our proposed model was essentially unbiased, and that the bias in the estimated TPF at a clinically relevant range of FPF was roughly 0.01 for a sample size of n=100/100. In practice, with severe departures from binormality, we recommend extending LABROC, and developing the accompanying software in future research, to allow each member of the pair of distributions to be a mixture of Gaussians, a more flexible parametric form.  相似文献
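
The decomposition described in the abstract is easy to use in practice: for mixture-of-Gaussians pairs, the AUC is the weighted sum of pairwise binormal AUCs, and the TPF and FPF at a threshold are the corresponding mixtures of normal survival probabilities. A minimal sketch with illustrative parameters (in the paper these would come from the EM fit):

import numpy as np
from scipy.stats import norm

# Mixture-of-Gaussians ROC: non-diseased X and diseased Y are each Gaussian
# mixtures (weights, means, standard deviations); values below are illustrative.
w, mu, sd = np.array([0.6, 0.4]), np.array([0.0, 2.5]), np.array([1.0, 0.8])    # non-diseased
v, nu, tau = np.array([0.5, 0.5]), np.array([1.5, 4.0]), np.array([1.0, 1.2])   # diseased

# AUC = P(Y > X) is the weighted sum of pairwise binormal AUCs
auc = sum(w[i] * v[j] * norm.cdf((nu[j] - mu[i]) / np.hypot(sd[i], tau[j]))
          for i in range(len(w)) for j in range(len(v)))

def fpf(c):   # false-positive fraction at threshold c
    return np.sum(w * norm.sf((c - mu) / sd))

def tpf(c):   # true-positive fraction at threshold c
    return np.sum(v * norm.sf((c - nu) / tau))

print(auc, fpf(2.0), tpf(2.0))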

8.
When computing the disparity of a metric variable we frequently have to deal with grouped data. It has been generally assumed that the sums of the values in each class are given. Dropping this assumption we usually resort to working with the class mark as the representative value in each class. This paper presents three approaches to the computation of the bounds of the Gini index from grouped data with incomplete information of different degree. Numerical results based on income distributions of the Federal Republic of Germany demonstrate the effects of different degrees of information on a frequency distribution and, consequently, the problems associated with comparing the disparity of various frequency distributions.  相似文献   
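
As a rough illustration of the kind of bounds involved, the sketch below assumes one particular information level, known class bounds, population shares, and class means, and computes a lower bound from the grouped Lorenz curve (all incomes set to the class mean) plus an approximate upper bound that adds the maximal within-class inequality via the non-overlapping Gini decomposition. The data are illustrative, and the paper's three approaches are not reproduced here.

import numpy as np

# Illustrative grouped income data: class bounds, population shares, class means
lower = np.array([0.0, 10.0, 20.0, 35.0, 60.0])
upper = np.array([10.0, 20.0, 35.0, 60.0, 120.0])
pop = np.array([0.25, 0.30, 0.25, 0.15, 0.05])       # population share per class
mean = np.array([6.0, 14.5, 26.0, 44.0, 80.0])       # class mean income

inc_share = pop * mean / np.sum(pop * mean)           # income share per class
P = np.concatenate(([0.0], np.cumsum(pop)))           # cumulative population shares
L = np.concatenate(([0.0], np.cumsum(inc_share)))     # cumulative income shares (Lorenz points)

# Lower bound: Gini of the grouped Lorenz curve (everyone at the class mean)
gini_lower = 1.0 - np.sum((P[1:] - P[:-1]) * (L[1:] + L[:-1]))

# Approximate upper bound: add the maximal within-class inequality, obtained when
# incomes in each class pile up at the class bounds while preserving the class mean
lam = (mean - lower) / (upper - lower)                 # mass placed at the upper bound
gini_within_max = lam * (1.0 - lam) * (upper - lower) / mean
gini_upper = gini_lower + np.sum(pop * inc_share * gini_within_max)

print(gini_lower, gini_upper)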

9.
Insurance and economic data are frequently characterized by positivity, skewness, leptokurtosis, and multi-modality; although many parametric models have been used in the literature, these peculiarities often call for more flexible approaches. Here, we propose a finite mixture of contaminated gamma distributions that provides a better characterization of the data. It sits between parametric and non-parametric density estimation and strikes a balance between these alternatives, as a large class of densities can be accommodated. We adopt a maximum likelihood approach to estimate the model parameters, providing the likelihood and the expectation-maximization (EM) algorithm used to estimate all unknown parameters. We apply our approach to an artificial dataset and to two well-known datasets, the workers' compensation data and the healthcare expenditure data taken from the Medical Expenditure Panel Survey. The Value-at-Risk is evaluated and comparisons with other benchmark models are provided.  相似文献
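
A minimal sketch under one plausible parameterization of a contaminated gamma component, a two-part gamma mixture whose contaminated part has an inflated scale; the exact contamination scheme, the parameter values, and the function names below are assumptions made for illustration, and the Value-at-Risk is obtained by numerically inverting the mixture CDF rather than from a fitted model.

import numpy as np
from scipy.stats import gamma
from scipy.optimize import brentq

# One plausible parameterization of a "contaminated gamma" component:
# (1 - delta) * Gamma(shape, scale) + delta * Gamma(shape, eta * scale), eta > 1.
# The mixture below has K = 2 such components with illustrative parameters.
weights = np.array([0.7, 0.3])
shape = np.array([2.0, 5.0])
scale = np.array([1.0, 3.0])
delta = np.array([0.10, 0.05])     # contamination proportions
eta = np.array([4.0, 3.0])         # scale inflation of the contaminated part

def mix_cdf(x):
    good = gamma.cdf(x, a=shape, scale=scale)
    bad = gamma.cdf(x, a=shape, scale=eta * scale)
    return np.sum(weights * ((1 - delta) * good + delta * bad))

def value_at_risk(level):
    # Invert the mixture CDF numerically to get the VaR at the given level
    return brentq(lambda x: mix_cdf(x) - level, 1e-8, 1e4)

print(value_at_risk(0.95), value_at_risk(0.99))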

10.
A Monte Carlo algorithm is said to be adaptive if it automatically calibrates its current proposal distribution using past simulations. The choice of the parametric family that defines the set of proposal distributions is critical for good performance. In this paper, we present such a parametric family for adaptive sampling on high dimensional binary spaces. A practical motivation for this problem is variable selection in a linear regression context. We want to sample from a Bayesian posterior distribution on the model space using an appropriate version of Sequential Monte Carlo. Raw versions of Sequential Monte Carlo are easily implemented using binary vectors with independent components. For high dimensional problems, however, these simple proposals do not yield satisfactory results. The key to an efficient adaptive algorithm is a binary parametric family that takes correlations into account, analogously to the multivariate normal distribution on continuous spaces. We provide a review of models for binary data and make one of them work in the context of Sequential Monte Carlo sampling. Computational studies on real-life data with about a hundred covariates suggest that, on difficult instances, our Sequential Monte Carlo approach clearly outperforms standard techniques based on Markov chain exploration.  相似文献

11.
罗幼喜  张敏  田茂再 《统计研究》2020,37(2):105-118
Within a Bayesian framework, this paper develops a quantile regression approach for additive models with panel data. The nonparametric model is first converted into a parametric one through a low-rank thin-plate penalized spline expansion and the introduction of dummy variables for individual effects; a Bayesian hierarchical quantile regression model is then constructed under the assumption that the random errors follow an asymmetric Laplace distribution. By decomposing the asymmetric Laplace distribution, the paper derives the conditional posterior distributions of all parameters to be estimated and constructs a Gibbs sampling algorithm for their estimation. Simulation results show that the proposed method clearly outperforms the traditional mean regression approach for additive models in terms of estimation robustness. Finally, using panel data on consumption expenditure, the paper studies how the income structure of rural residents in China affects their consumption expenditure. It finds that, for rural residents in the high-, middle-, and low-consumption groups alike, increases in wage income and in net business income have the more pronounced positive effect on consumption expenditure. Moreover, compared with high-consumption rural residents, the consumption expenditure of low-consumption rural residents rises more slowly as income increases.  相似文献
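
The asymmetric Laplace working likelihood behind the Bayesian model corresponds to the familiar check (pinball) loss of quantile regression. The sketch below illustrates only that correspondence for a simple linear quantile regression; the paper's penalized splines, individual effects, and Gibbs sampler are not reproduced, and the simulated data and function names are illustrative.

import numpy as np
from scipy.optimize import minimize

def check_loss(u, p):
    """Pinball / check loss; minimizing it is equivalent to maximizing an
    asymmetric-Laplace likelihood at quantile level p."""
    return np.sum(u * (p - (u < 0)))

def quantile_reg(y, X, p):
    obj = lambda beta: check_loss(y - X @ beta, p)
    start = np.linalg.lstsq(X, y, rcond=None)[0]       # OLS start values
    return minimize(obj, start, method="Nelder-Mead").x

# Illustrative data with skewed, heteroskedastic errors, so quantile slopes differ across p
rng = np.random.default_rng(2)
n = 500
x = rng.uniform(0, 2, size=n)
y = 1.0 + 2.0 * x + rng.gamma(2.0, 1.0, size=n) * (0.2 + 0.4 * x)
X = np.column_stack([np.ones(n), x])

for p in (0.25, 0.5, 0.75):
    print(p, quantile_reg(y, X, p))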

12.
It may sometimes be clear from background knowledge that a population under investigation proportionally consists of a known number of subpopulations, whose distributions belong to the same, yet unknown, family. While a parametric family is commonly used in practice, one can also consider some nonparametric families to avoid distributional misspecification. In this article, we propose a solution using a mixture-based nonparametric family for the component distribution in a finite mixture model as opposed to some recent research that utilizes a kernel-based approach. In particular, we present a semiparametric maximum likelihood estimation procedure for the model parameters and tackle the bandwidth parameter selection problem via some popular means for model selection. Empirical comparisons through simulation studies and three real data sets suggest that estimators based on our mixture-based approach are more efficient than those based on the kernel-based approach, in terms of both parameter estimation and overall density estimation.  相似文献   

13.
Comparing treatment means from populations that follow independent normal distributions is a common statistical problem. Many frequentist solutions exist to test for significant differences amongst the treatment means. A different approach would be to determine how likely it is that particular means are grouped as equal. We developed a fiducial framework for this situation. Our method provides fiducial probabilities that any number of means are equal based on the data and the assumed normal distributions. The methodology was developed for both constant and non-constant variance across populations. Simulations suggest that our method selects the correct grouping of means at a relatively high rate for small sample sizes, and asymptotic calculations demonstrate good properties. Additionally, we demonstrate the flexibility of the method by calculating the fiducial probability for any number of equal means. This was done by analyzing a simulated data set and a data set measuring the nitrogen levels of red clover plants that were inoculated with different treatments.  相似文献

14.
This paper studies a functional coefficient time series model with trending regressors, where the coefficients are unknown functions of time and random variables. We propose a local linear estimation method to estimate the unknown coefficient functions, and establish the corresponding asymptotic theory under mild conditions. We also develop a test procedure to see if the functional coefficients take particular parametric forms. For practical use, we further propose a Bayesian approach to select the bandwidths, and conduct several numerical experiments to examine the finite sample performance of our proposed local linear estimator and the test procedure. The results show that the local linear estimator works well and the proposed test has satisfactory size and power. In addition, our simulation studies show that the Bayesian bandwidth selection method performs better than the cross-validation method. Furthermore, we use the functional coefficient model to study the relationship between consumption per capita and income per capita in the United States, and show that the functional coefficient model with our proposed local linear estimator and Bayesian bandwidth selection method performs well in both in-sample fitting and out-of-sample forecasting.  相似文献

15.
This paper demonstrates the usefulness of nonparametric regression analysis for the functional specification of household Engel curves.

After a brief review in Section 2 of the literature on demand functions and equivalence scales and the functional specifications used, we first discuss in Section 3 the issues of using income versus total expenditure, the origin and nature of the error terms in the light of utility theory, and the interpretation of empirical demand functions. We reach the unorthodox view that household demand functions should be interpreted as conditional expectations relative to prices, household composition, and either income or the conditional expectation of total expenditure (rather than total expenditure itself), where the latter conditional expectation is taken relative to income, prices, and household composition. These two forms appear to be equivalent. This result also solves the simultaneity problem: the error variance matrix is no longer singular. Moreover, the errors are in general heteroskedastic.

In Section 4 we discuss the model and the data, and in Section 5 we review the nonparametric kernel regression approach.

In Section 6 we derive the functional form of our household Engel curves from nonparametric regression results, using the 1980 budget survey for the Netherlands, in order to avoid model misspecification. Thus the model is derived directly from the data, without restricting its functional form. The nonparametric regression results are then translated into suitable parametric functional specifications, i.e., we choose parametric functional forms in accordance with the nonparametric regression results. These parametric specifications are estimated by least squares, and various parameter restrictions are tested in order to simplify the models. This yields very simple final specifications of the household Engel curves involved, namely linear functions of income and the number of children in two age groups.  相似文献

16.
In this article, we consider inference about the correlation coefficients of several bivariate normal distributions. We first propose computational approach tests for testing the equality of the correlation coefficients. These approaches are in fact parametric bootstrap tests, and simulation studies show that they perform very satisfactorily, with actual sizes better than those of other existing approaches. We also present a computational approach test and a parametric bootstrap confidence interval for inference about the common correlation coefficient. Finally, all the approaches are illustrated using two real examples.  相似文献
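
A generic parametric bootstrap along these lines, not necessarily the authors' exact test statistic: use the spread of the Fisher z-transformed sample correlations as the statistic and simulate its null distribution from bivariate normals that share a pooled correlation. The statistic, the data, and the function names are illustrative.

import numpy as np

rng = np.random.default_rng(3)

def sample_corr(xy):
    return np.corrcoef(xy[:, 0], xy[:, 1])[0, 1]

def fisher_z(r):
    return np.arctanh(r)

def equal_corr_pboot(samples, B=2000):
    """Parametric bootstrap test of H0: all bivariate-normal populations share one
    correlation. Statistic: variance of the Fisher z-transformed sample correlations
    (a generic construction, not necessarily the authors' exact statistic)."""
    ns = np.array([len(s) for s in samples])
    rs = np.array([sample_corr(s) for s in samples])
    stat_obs = np.var(fisher_z(rs))
    # Pooled correlation estimate under H0 (weighted average of z-transforms)
    r0 = np.tanh(np.sum((ns - 3) * fisher_z(rs)) / np.sum(ns - 3))
    cov0 = np.array([[1.0, r0], [r0, 1.0]])
    stats = np.empty(B)
    for b in range(B):
        rs_b = [sample_corr(rng.multivariate_normal([0, 0], cov0, size=n)) for n in ns]
        stats[b] = np.var(fisher_z(np.array(rs_b)))
    return np.mean(stats >= stat_obs)   # bootstrap p-value

# Illustrative data: three bivariate normal samples with different correlations
def bvn(n, rho):
    return rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)

samples = [bvn(40, 0.3), bvn(60, 0.5), bvn(50, 0.7)]
print(equal_corr_pboot(samples))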

17.
Stationary time series models built from parametric distributions are, in general, limited in scope due to the assumptions imposed on the residual distribution and autoregression relationship. We present a modeling approach for univariate time series data, which makes no assumptions of stationarity, and can accommodate complex dynamics and capture non-standard distributions. The model for the transition density arises from the conditional distribution implied by a Bayesian nonparametric mixture of bivariate normals. This results in a flexible autoregressive form for the conditional transition density, defining a time-homogeneous, non-stationary Markovian model for real-valued data indexed in discrete time. To obtain a computationally tractable algorithm for posterior inference, we utilize a square-root-free Cholesky decomposition of the mixture kernel covariance matrix. Results from simulated data suggest that the model is able to recover challenging transition densities and non-linear dynamic relationships. We also illustrate the model on time intervals between eruptions of the Old Faithful geyser. Extensions to accommodate higher order structure and to develop a state-space model are also discussed.  相似文献   
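
For intuition, the transition density implied by a bivariate normal mixture has a closed form: the conditional density is itself a normal mixture whose weights, means, and variances depend on the conditioning value. A minimal sketch with a fixed two-component mixture (in the paper the mixture is Bayesian nonparametric and fitted to data); the parameter values are illustrative.

import numpy as np
from scipy.stats import norm

# Mixture of bivariate normals on (y_{t-1}, y_t): weights, mean vectors, covariances
weights = np.array([0.6, 0.4])
means = np.array([[0.0, 0.5], [3.0, 2.0]])
covs = np.array([[[1.0, 0.6], [0.6, 1.0]],
                 [[1.5, -0.4], [-0.4, 0.8]]])

def transition_density(y, x):
    """Conditional density f(y_t = y | y_{t-1} = x) implied by the bivariate mixture."""
    # Component responsibilities given the conditioning value x
    marg = norm.pdf(x, means[:, 0], np.sqrt(covs[:, 0, 0]))
    q = weights * marg
    q /= q.sum()
    # Conditional mean and variance of each bivariate normal component
    slope = covs[:, 0, 1] / covs[:, 0, 0]
    cond_mean = means[:, 1] + slope * (x - means[:, 0])
    cond_var = covs[:, 1, 1] - covs[:, 0, 1] ** 2 / covs[:, 0, 0]
    return np.sum(q * norm.pdf(y, cond_mean, np.sqrt(cond_var)))

ygrid = np.linspace(-3, 6, 7)
print([round(transition_density(y, x=1.0), 4) for y in ygrid])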

18.
We consider a two-component mixture model where one component distribution is known while the mixing proportion and the other component distribution are unknown. These kinds of models were first introduced in biology to study the differences in expression between genes. The various estimation methods proposed till now have all assumed that the unknown distribution belongs to a parametric family. In this paper, we show how this assumption can be relaxed. First, we note that generally the above model is not identifiable, but we show that under moment and symmetry conditions some 'almost everywhere' identifiability results can be obtained. Where such identifiability conditions are fulfilled we propose an estimation method for the unknown parameters which is shown to be strongly consistent under mild conditions. We discuss applications of our method to microarray data analysis and to the training data problem. We compare our method to the parametric approach using simulated data and, finally, we apply our method to real data from microarray experiments.  相似文献

19.
Internet traffic data is characterized by some unusual statistical properties, in particular, the presence of heavy-tailed variables. A typical model for heavy-tailed distributions is the Pareto distribution although this is not adequate in many cases. In this article, we consider a mixture of two-parameter Pareto distributions as a model for heavy-tailed data and use a Bayesian approach based on the birth-death Markov chain Monte Carlo algorithm to fit this model. We estimate some measures of interest related to the queueing system k-Par/M/1 where k-Par denotes a mixture of k Pareto distributions. Heavy-tailed variables are difficult to model in such queueing systems because of the lack of a simple expression for the Laplace Transform (LT). We use a procedure based on recent LT approximating results for the Pareto/M/1 system. We illustrate our approach with both simulated and real data.  相似文献   

20.
Nonparametric approaches to classification have gained significant attention in the last two decades. In this paper, we propose a classification methodology based on multivariate rank functions and show that it is a Bayes rule for spherically symmetric distributions with a location shift. We show that a rank-based classifier is equivalent to the optimal Bayes rule under suitable conditions. We also present an affine-invariant version of the classifier. To accommodate different covariance structures, we construct a classifier based on the central rank region. Asymptotic properties of these classification methods are studied. We illustrate the performance of our proposed methods in comparison with some other depth-based classifiers using simulated and real data sets.  相似文献
