Similar Documents (20 results)
1.
We introduce the entropic measure transform (EMT) problem for a general process and prove the existence of a unique optimal measure characterizing the solution. The density process of the optimal measure is characterized using a semimartingale BSDE under general conditions. The EMT is used to reinterpret the conditional entropic risk-measure and to obtain a convenient formula for the conditional expectation of a process that admits an affine representation under a related measure. The EMT is then used to provide a new characterization of defaultable bond prices, forward prices and futures prices when a jump-diffusion drives the asset. The characterization of these pricing problems in terms of the EMT provides economic interpretations as maximizing the returns subject to a penalty for removing financial risk as expressed through the aggregate relative entropy. The EMT is shown to extend the optimal stochastic control characterization of default-free bond prices of Gombani & Runggaldier (2013). These methods are illustrated numerically with an example in the defaultable bond setting. The Canadian Journal of Statistics 48: 97–129; 2020 © 2020 Statistical Society of Canada
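In the unconditional static case, the entropic risk measure that the EMT reinterprets has the familiar closed form ρ_γ(X) = (1/γ) log E[exp(−γX)]. A minimal discrete sketch (the function name and example are ours, illustrative only):

```python
import math

def entropic_risk(outcomes, probs, gamma):
    """Entropic risk measure rho_gamma(X) = (1/gamma) * log E[exp(-gamma * X)]."""
    mgf = sum(p * math.exp(-gamma * x) for x, p in zip(outcomes, probs))
    return math.log(mgf) / gamma

# As gamma -> 0 the measure approaches -E[X]; larger gamma penalizes
# downside outcomes more heavily.
```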

2.
Earlier attempts at reconciling disparate substitution elasticity estimates examined differences in separability hypotheses, databases, and estimation techniques, as well as the methods employed to construct capital service prices. Although these studies showed that differences in elasticity estimates between two or three studies may be attributable to the aforementioned features of the econometric models, they were unable to demonstrate this link statistically or to establish the existence of systematic relationships between features of the econometric models and the perception of production technologies generated by those models. Using sectoral data covering the entire production side of the U.S. economy, we estimate 34 production models for alternative definitions of the capital service price. We employ substitution elasticities calculated from these models as dependent variables in a statistical search for systematic relationships between features of the econometric models and perceptions of the sectoral technology as characterized by the elasticities. Statistically significant systematic effects of the service price and technical-change specifications are found on the monotonicity and concavity properties of the cost functions, as well as on the substitution elasticities.

3.
Reporting sampling errors of survey estimates is a problem that is commonly addressed when compiling a survey report. Because of the vast number of study variables, population characteristics, and domains of interest in a survey, it is almost impossible to calculate and publish the standard errors for each statistic. A way of overcoming this problem is to estimate the sampling errors indirectly by using generalized variance functions, which define a statistical relationship between the sampling errors and the corresponding estimates. One problem with this approach is that the model specification has to be consistent with a roughly constant design effect. If the design effects vary greatly across estimates, as is the case in business surveys, the prediction model is not correctly specified and least-squares estimation is biased. In this paper, we present an extension of generalized variance functions that addresses these problems and can be used in contexts similar to those encountered in business surveys. The proposed method has been applied to the Italian Structural Business Statistics Survey.
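Generalized variance functions are often taken to be log-linear in the estimate; a minimal illustration of fitting and using such a function (the log-linear form and function names are our assumption, not the paper's exact specification):

```python
import math

def fit_gvf(estimates, variances):
    """Fit log(variance) = a + b * log(estimate) by ordinary least squares.
    A common generalized-variance-function form; illustrative only."""
    xs = [math.log(e) for e in estimates]
    ys = [math.log(v) for v in variances]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    return a, b

def predict_variance(a, b, estimate):
    """Indirect sampling-error prediction for an estimate not in the fit set."""
    return math.exp(a + b * math.log(estimate))
```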

4.
In a multivariate stratified sample survey with L strata, suppose that p characteristics are defined on each unit of the population. To estimate the unknown p population means of the characteristics, a random sample is taken from the population. In a multivariate stratified sample survey, the optimum allocation for one characteristic may not be optimum for the others. The problem therefore arises of finding an allocation that is optimum for all characteristics in some sense, and a compromise criterion is needed to work out such an allocation. In this paper, the procedure for estimating the p population means is discussed in the presence of nonresponse when the use of a linear cost function is not advisable. A solution procedure is suggested by formulating a lexicographic goal programming problem. Numerical illustrations are given to show its practical utility.
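As a simple baseline for the compromise problem described above, one can average the per-characteristic Neyman allocations (the paper itself uses lexicographic goal programming; this sketch only illustrates why a single compromise allocation is needed):

```python
def neyman_allocation(N_h, S_h, n):
    """Neyman allocation: n_h proportional to N_h * S_h for one characteristic."""
    w = [Nh * Sh for Nh, Sh in zip(N_h, S_h)]
    total = sum(w)
    return [n * wh / total for wh in w]

def compromise_allocation(N_h, S_matrix, n):
    """Average the per-characteristic Neyman allocations across the p
    characteristics -- one simple compromise criterion, not the paper's."""
    allocs = [neyman_allocation(N_h, S_h, n) for S_h in S_matrix]
    p = len(allocs)
    return [sum(a[h] for a in allocs) / p for h in range(len(N_h))]
```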

5.
This paper provides an examination of the problem of heteroscedasticity as it relates to estimating park use, although the results can also be applied to a wide variety of flow problems involving traffic, people, or commodities. The major issue is that estimates of flows obtained using ordinary least squares (OLS) often yield statistically significant results while still giving rise to large differences between observed and predicted flows (residuals). The paper presents results showing that, for the flow estimation problem of concern, more accurate use estimates may be obtained with generalized least squares (GLS) than with OLS. Weights for the GLS regression are developed taking into account the variance to be expected in origin–destination flows. It is shown that deriving the correct weights (estimates of the variances) for use in a regression analysis yields an 'absolute' test for the structural appropriateness of the regression model. Tests related to the 'absolute' adequacy test are introduced, and their use in identifying specific structural problems with a model is illustrated.
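With a diagonal error covariance, the GLS estimator reduces to weighted least squares, weighting each origin–destination flow by the reciprocal of its expected variance. A minimal sketch (names and setup are ours):

```python
import numpy as np

def wls(X, y, weights):
    """Weighted least squares: minimize sum_i w_i * (y_i - x_i' beta)^2.
    Equivalent to GLS when the error covariance is diagonal with
    variances 1 / w_i (illustrative)."""
    W = np.diag(weights)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

Setting all weights equal recovers OLS, so the gain from GLS here comes entirely from how well the variance model matches the data.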

6.
This paper demonstrates the usefulness of nonparametric regression analysis for the functional specification of household Engel curves.

After a brief review in Section 2 of the literature on demand functions and equivalence scales and the functional specifications used, we first discuss in Section 3 the issues of using income versus total expenditure, the origin and nature of the error terms in the light of utility theory, and the interpretation of empirical demand functions. We reach the unorthodox view that household demand functions should be interpreted as conditional expectations relative to prices, household composition, and either income or the conditional expectation of total expenditure (rather than total expenditure itself), where the latter conditional expectation is taken relative to income, prices, and household composition. These two forms appear to be equivalent. This result also solves the simultaneity problem: the error variance matrix is no longer singular. Moreover, the errors are in general heteroskedastic.

In Section 4 we discuss the model and the data, and in Section 5 we review the nonparametric kernel regression approach.

In Section 6 we derive the functional form of our household Engel curves from nonparametric regression results, using the 1980 budget survey for the Netherlands, in order to avoid model misspecification. Thus the model is derived directly from the data, without restricting its functional form. The nonparametric regression results are then translated into suitable parametric functional specifications, i.e., we choose parametric functional forms in accordance with the nonparametric regression results. These parametric specifications are estimated by least squares, and various parameter restrictions are tested in order to simplify the models. This yields very simple final specifications of the household Engel curves involved, namely linear functions of income and the number of children in two age groups.

7.
Combining information from multiple samples is often needed in biomedical and economic studies, but differences between these samples must be appropriately taken into account in the analysis of the combined data. We study estimation for moment restriction models with data combined from two samples under an ignorability-type assumption while allowing for different marginal distributions of variables common to both samples. Suppose that an outcome regression (OR) model and a propensity score (PS) model are specified. By leveraging semiparametric efficiency theory, we derive an augmented inverse probability-weighted (AIPW) estimator that is locally efficient and doubly robust with respect to these models. Furthermore, we develop calibrated regression and likelihood estimators that are not only locally efficient and doubly robust but also intrinsically efficient, achieving smaller variances than the AIPW estimator when the PS model is correctly specified but the OR model may be misspecified. As an important application, we study the two-sample instrumental variable problem and derive the corresponding estimators while allowing for incompatible distributions of variables common to the two samples. Finally, we provide a simulation study and an econometric application on public housing projects to demonstrate the superior performance of our improved estimators. The Canadian Journal of Statistics 48: 259–284; 2020 © 2019 Statistical Society of Canada

8.
A new optimization algorithm is presented to solve the stratification problem. Assuming that the number L of strata and the total sample size n are fixed, we obtain strata boundaries by using an objective function associated with the variance. In this problem, strata boundaries must be determined so that the elements in each stratum are more homogeneous among themselves. To produce more homogeneous strata, this paper proposes a new algorithm based on the Greedy Randomized Adaptive Search Procedure (GRASP) methodology. Computational results are presented for a set of problems, comparing the new algorithm with algorithms from the literature.
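A GRASP-style search for strata boundaries on sorted data might look as follows: repeated randomized construction of L − 1 boundary indices, each followed by a local search that shifts boundaries while the within-stratum variance sum improves. An illustrative sketch, not the paper's exact algorithm:

```python
import random

def within_variance_sum(data, boundaries):
    """Objective: sum of within-stratum variances; data assumed sorted and
    split at the given boundary indices."""
    cuts = [0] + list(boundaries) + [len(data)]
    total = 0.0
    for lo, hi in zip(cuts, cuts[1:]):
        seg = data[lo:hi]
        mean = sum(seg) / len(seg)
        total += sum((x - mean) ** 2 for x in seg) / len(seg)
    return total

def grasp_stratify(data, L, iters=100, seed=0):
    """GRASP-style sketch: randomized construction plus local search."""
    rng = random.Random(seed)
    data = sorted(data)
    n = len(data)
    best_bounds, best_val = None, float("inf")
    for _ in range(iters):
        # construction phase: random strictly increasing boundary indices
        bounds = sorted(rng.sample(range(1, n), L - 1))
        val = within_variance_sum(data, bounds)
        # local search phase: shift each boundary by +/-1 while improving
        improved = True
        while improved:
            improved = False
            for i in range(L - 1):
                for delta in (-1, 1):
                    cand = list(bounds)
                    cand[i] += delta
                    if not (1 <= cand[i] <= n - 1) or cand != sorted(set(cand)):
                        continue
                    v = within_variance_sum(data, cand)
                    if v < val:
                        bounds, val, improved = cand, v, True
        if val < best_val:
            best_bounds, best_val = bounds, val
    return best_bounds, best_val
```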

9.
Specification of household Engel curves by nonparametric regression
This paper demonstrates the usefulness of nonparametric regression analysis for the functional specification of household Engel curves.

After a brief review in Section 2 of the literature on demand functions and equivalence scales and the functional specifications used, we first discuss in Section 3 the issues of using income versus total expenditure, the origin and nature of the error terms in the light of utility theory, and the interpretation of empirical demand functions. We reach the unorthodox view that household demand functions should be interpreted as conditional expectations relative to prices, household composition, and either income or the conditional expectation of total expenditure (rather than total expenditure itself), where the latter conditional expectation is taken relative to income, prices, and household composition. These two forms appear to be equivalent. This result also solves the simultaneity problem: the error variance matrix is no longer singular. Moreover, the errors are in general heteroskedastic.

In Section 4 we discuss the model and the data, and in Section 5 we review the nonparametric kernel regression approach.

In Section 6 we derive the functional form of our household Engel curves from nonparametric regression results, using the 1980 budget survey for the Netherlands, in order to avoid model misspecification. Thus the model is derived directly from the data, without restricting its functional form. The nonparametric regression results are then translated into suitable parametric functional specifications, i.e., we choose parametric functional forms in accordance with the nonparametric regression results. These parametric specifications are estimated by least squares, and various parameter restrictions are tested in order to simplify the models. This yields very simple final specifications of the household Engel curves involved, namely linear functions of income and the number of children in two age groups.

10.
Research has shown that applying the T2 control chart with a variable-parameters (VP) scheme yields rapid detection of out-of-control states. In this paper, the problem of economic-statistical design of the VP T2 control chart is considered as a double-objective minimization problem, with the statistical objective being the adjusted average time to signal and the economic objective being the expected cost per hour. We then find the Pareto-optimal designs, in which the two objectives are met simultaneously, by using a multi-objective genetic algorithm. Through an illustrative example, we show that relatively large benefits can be achieved by applying the VP scheme compared with the usual schemes and, in addition, that the multi-objective approach provides the user with designs that are flexible and adaptive.
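Extracting the Pareto-optimal designs from a set of candidate (time-to-signal, cost) pairs amounts to a non-dominance filter when both objectives are minimized; a minimal sketch (names are ours):

```python
def pareto_front(points):
    """Return the non-dominated points when both objectives are minimized,
    e.g. (adjusted average time to signal, expected cost per hour) pairs."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front
```

In the paper this filtering is performed inside a multi-objective genetic algorithm; the filter above is only the selection criterion, not the search itself.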

11.
The aim of this work is, first, to study the dependence between oil and some commodity prices (cotton, rice, wheat, sugar, coffee, and silver) using copula theory and, second, to determine the optimal hedging strategy for an oil–commodity portfolio against the risk of negative variation in commodity market prices. The model is implemented with an AR-GARCH specification with t-distributed innovations for the marginal distributions and an extreme-value copula for the joint distribution; the parameters and dependence indices are re-estimated each day, which allows nonlinear dependence, tail behaviour, and their development over time to be taken into account. Various copula functions are used to model the dependence structure between oil and commodity markets. Empirical results show an increase in dependence during the last six years, with volatility in commodity prices reaching record levels as uncertainty increased. The optimal hedging ratio varies over time as a consequence of changes in the dependence structure.
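The minimum-variance hedge ratio underlying such strategies is h* = Cov(s, f)/Var(f); a static sketch (the paper re-estimates the dependence daily via copulas, which this deliberately omits):

```python
def hedge_ratio(spot_returns, futures_returns):
    """Minimum-variance hedge ratio h* = Cov(s, f) / Var(f) computed from
    return samples (static version; a time-varying model would re-estimate
    this each day)."""
    n = len(spot_returns)
    ms = sum(spot_returns) / n
    mf = sum(futures_returns) / n
    cov = sum((s - ms) * (f - mf)
              for s, f in zip(spot_returns, futures_returns)) / n
    var = sum((f - mf) ** 2 for f in futures_returns) / n
    return cov / var
```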

12.
Approximate normality and unbiasedness of the maximum likelihood estimate (MLE) of the long-memory parameter H of a fractional Brownian motion hold reasonably well for sample sizes as small as 20 if the mean and scale parameter are known. We show in a Monte Carlo study that if the latter two parameters are unknown the bias and variance of the MLE of H both increase substantially. We also show that the bias can be reduced by using a parametric bootstrap procedure. In very large samples, maximum likelihood estimation becomes problematic because of the large dimension of the covariance matrix that must be inverted. To overcome this difficulty, we propose a maximum likelihood method based upon first differences of the data. These first differences form a short-memory process. We split the data into a number of contiguous blocks consisting of a relatively small number of observations. Computation of the likelihood function in a block then presents no computational problem. We form a pseudo-likelihood function consisting of the product of the likelihood functions in each of the blocks and provide a formula for the standard error of the resulting estimator of H. This formula is shown in a Monte Carlo study to provide a good approximation to the true standard error. The computation time required to obtain the estimate and its standard error from large data sets is an order of magnitude less than that required to obtain the widely used Whittle estimator. Application of the methodology is illustrated on two data sets.
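The blockwise pseudo-likelihood can be sketched as follows: the first differences of fractional Brownian motion form fractional Gaussian noise, whose autocovariance is known in closed form, so only a block-sized covariance matrix is ever inverted. An illustrative sketch (function names and defaults are ours):

```python
import math
import numpy as np

def fgn_autocov(k, H, sigma2=1.0):
    """Autocovariance of fractional Gaussian noise (first differences of
    fractional Brownian motion) at lag k."""
    k = abs(k)
    return 0.5 * sigma2 * ((k + 1) ** (2 * H)
                           - 2 * k ** (2 * H)
                           + abs(k - 1) ** (2 * H))

def block_pseudo_loglik(diffs, H, block_len=50, sigma2=1.0):
    """Sum of Gaussian log-likelihoods over contiguous blocks of the
    first-differenced data; only block_len x block_len covariance matrices
    are inverted, sketching the paper's blockwise idea."""
    total = 0.0
    for start in range(0, len(diffs) - block_len + 1, block_len):
        x = np.asarray(diffs[start:start + block_len])
        cov = np.array([[fgn_autocov(i - j, H, sigma2)
                         for j in range(block_len)] for i in range(block_len)])
        sign, logdet = np.linalg.slogdet(cov)
        total += -0.5 * (logdet + x @ np.linalg.solve(cov, x)
                         + block_len * math.log(2 * math.pi))
    return total
```

Maximizing this pseudo-likelihood over H (e.g. by a grid or one-dimensional optimizer) gives the blockwise estimate; at H = 0.5 the increments reduce to white noise.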

13.
The Jeffreys-rule prior and the marginal independence Jeffreys prior were recently proposed in Fonseca et al. [Objective Bayesian analysis for the Student-t regression model, Biometrika 95 (2008), pp. 325–333] as objective priors for the Student-t regression model. The authors showed that these priors provide proper posterior distributions and perform favourably in parameter estimation. Motivated by a practical financial risk management application, we compare the performance of the two Jeffreys priors with other priors proposed in the literature in a problem of estimating high quantiles for the Student-t model with unknown degrees of freedom. Through an asymptotic analysis and a simulation study, we show that both Jeffreys priors perform better when a specific quantile of the Bayesian predictive distribution is used to approximate the true quantile.

14.
This article extends the empirical martingale simulation (EMS) method from using a risk-neutral measure to using a dynamic measure for financial derivative pricing. Although the EMS is capable of obtaining consistent estimates of financial derivative prices more efficiently than the standard Monte Carlo simulation procedure, it can proceed only under a risk-neutral framework. In practice, however, it is cumbersome to obtain the explicit expression of a risk-neutral model when dealing with a complex model. To alleviate this difficulty, we compute the financial derivative prices under the dynamic model and impose the martingale property on the simulated sample paths of both the change-of-measure process and the underlying asset prices under the dynamic P measure. Hence, we call this modification the empirical P-martingale simulation (EPMS). The strong consistency of the EPMS is established, and its efficiency is demonstrated by simulation in the GARCH framework. Simulation results show that the EPMS achieves variance reduction similar to the EMS method in option pricing when the risk-neutral model can be obtained, and it is more efficient than standard Monte Carlo simulation in most cases.
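The martingale adjustment behind the EMS can be illustrated in its simplest one-step form: rescale the simulated terminal prices so that the discounted sample mean equals the spot price, which then holds exactly rather than only on average. A sketch only; the full EMS/EPMS applies a recursive, path-by-path adjustment:

```python
import numpy as np

def empirical_martingale_adjust(terminal_prices, s0, r, T):
    """Rescale simulated terminal prices so the discounted empirical mean
    equals the spot price s0 (one-step sketch of the EMS idea)."""
    prices = np.asarray(terminal_prices, dtype=float)
    target = s0 * np.exp(r * T)  # forward value the sample mean should hit
    return prices * target / prices.mean()
```

Pricing a derivative with the adjusted sample removes the Monte Carlo error in the underlying's mean, which is the source of the variance reduction.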

15.
Summary. The paper reports on a study that tests the anecdotal hypothesis that parents are willing to pay a premium to secure places for their children in popular and oversubscribed comprehensive schools. Since many local education authorities use admissions policies that are based on catchment areas and places in popular schools are very difficult to obtain from outside these areas—but very easy from within them—parents have an incentive to move house for the sake of their children's education. This would be expected to be reflected in house prices. The study uses a cross-sectional sample based on two popular schools in one local education authority area, Coventry. Differences in quality of housing are dealt with by using the technique of hedonic regression and differences in location by sample selection within a block sample design. The sample was chosen from a limited number of locations spanning different catchment areas to reduce both observable and unobservable variability in nuisance effects while maximizing the variation in catchment areas. The results suggest that there are strong school catchment area effects. For one of the two popular schools we find a 20% premium and for the other a 16% premium on house prices ceteris paribus.
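A hedonic regression with a catchment-area dummy can be sketched as follows: regress log price on quality covariates plus the dummy, and read the premium off as exp(coefficient) − 1 (function and data names are ours, illustrative only):

```python
import numpy as np

def catchment_premium(log_prices, covariates, in_catchment):
    """Hedonic regression sketch: OLS of log price on an intercept, quality
    covariates, and a catchment-area dummy; returns the implied premium
    exp(dummy coefficient) - 1."""
    X = np.column_stack([np.ones(len(log_prices)), covariates, in_catchment])
    beta, *_ = np.linalg.lstsq(X, np.asarray(log_prices), rcond=None)
    return np.exp(beta[-1]) - 1.0
```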

16.
Multivariate temporal disaggregation deals with the historical reconstruction and nowcasting of economic variables subject to temporal and contemporaneous aggregation constraints. The problem involves a system of time series that are related not only by a dynamic model but also by accounting constraints. The paper introduces two fundamental (and realistic) models that implement the multivariate best linear unbiased estimation approach, with potential application to the temporal disaggregation of national accounts series. The multivariate regression model with random walk disturbances is most suitable for dealing with chain-linked volumes (as the nature of the national accounts time series suggests); in this case, however, the accounting constraints are not binding, and the discrepancy has to be modeled by either a trend-stationary or an integrated process. The tiny size of the discrepancy, compared with the other driving disturbances, prevents maximum-likelihood estimation from being carried out, and the parameters have to be estimated separately. The multivariate disaggregation with integrated random walk disturbances is suitable for national accounts aggregates expressed at current prices, in which case the accounting constraints are binding.

17.
In socioeconomic areas, functional observations may be collected with weights, called weighted functional data. In this paper, we deal with a general linear hypothesis testing (GLHT) problem in the framework of functional analysis of variance with weighted functional data. With weights taken into account, we obtain unbiased and consistent estimators of the group mean and covariance functions. For the GLHT problem, we obtain a pointwise F-test statistic and build two global tests, respectively, via integrating the pointwise F-test statistic or taking its supremum over an interval of interest. The asymptotic distributions of test statistics under the null and some local alternatives are derived. Methods for approximating their null distributions are discussed. An application of the proposed methods to density function data is also presented. Intensive simulation studies and two real data examples show that the proposed tests outperform the existing competitors substantially in terms of size control and power.

18.
Missing data are a common problem in almost all areas of empirical research. Ignoring the missing data mechanism, especially when data are missing not at random (MNAR), can result in biased and/or inefficient inference. Because an MNAR mechanism is not verifiable based on the observed data, sensitivity analysis is often used to assess it. Current sensitivity analysis methods primarily assume a model for the response mechanism in conjunction with a measurement model and examine sensitivity to the missing data mechanism via the parameters of the response model. Recently, Jamshidian and Mata (Post-modelling sensitivity analysis to detect the effect of missing data mechanism, Multivariate Behav. Res. 43 (2008), pp. 432–452) introduced a new method of sensitivity analysis that does not require the difficult task of modelling the missing data mechanism. In this method, a single measurement model is fitted to all of the data and to a sub-sample of the data, and the discrepancy in the parameter estimates obtained from the two data sets is used as a measure of sensitivity to the missing data mechanism. Jamshidian and Mata describe their method mainly in the context of detecting data that are missing completely at random (MCAR), and they used a bootstrap-type method, which relies on heuristic input from the researcher, to test for the discrepancy of the parameter estimates. Instead of using the bootstrap, the current article obtains confidence intervals for parameter differences between the two samples based on an asymptotic approximation. Because it does not use the bootstrap, the developed procedure avoids the likely convergence problems of bootstrap methods; it does not require heuristic input from the researcher and can be readily implemented in statistical software. The article also discusses methods of obtaining sub-samples that may be used to test for missing at random (MAR) in addition to MCAR.
An application of the developed procedure to a real data set, from the first wave of an ongoing longitudinal study on aging, is presented. Simulation studies are performed as well, using two methods of missing data generation, which show promise for the proposed sensitivity method. One method of missing data generation is also new and interesting in its own right.

19.
Biased sampling occurs often in observational studies. With one biased sample, the problem of nonparametrically estimating both a target density function and a selection bias function is unidentifiable. This paper studies the nonparametric estimation problem when there are two biased samples that have some overlapping observations (i.e. recaptures) from a finite population. Since an intelligent subject sampled previously may experience a memory effect if sampled again, two general 2-stage models that incorporate both a selection bias and a possible memory effect are proposed. Nonparametric estimators of the target density, selection bias, and memory functions, as well as the population size are developed. Asymptotic properties of these estimators are studied and confidence bands for the selection function and memory function are provided. Our procedures are compared with those ignoring the memory effect or the selection bias in finite sample situations. A nonparametric model selection procedure is also given for choosing a model from the two 2-stage models and a mixture of these two models. Our procedures work well with or without a memory effect, and with or without a selection bias. The paper concludes with an application to a real survey data set.
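With two overlapping samples and neither selection bias nor a memory effect, the classical Lincoln–Petersen estimator gives the baseline population-size estimate that the paper's 2-stage models generalize:

```python
def lincoln_petersen(n1, n2, m):
    """Lincoln-Petersen estimate of population size from two samples of
    sizes n1 and n2 with m recaptures: N_hat = n1 * n2 / m. Assumes no
    selection bias and no memory effect, the two complications the
    paper's 2-stage models are designed to handle."""
    if m == 0:
        raise ValueError("no recaptures: population size not estimable")
    return n1 * n2 / m
```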

20.
This paper contains an application of the asymptotic expansion of a generalized hypergeometric function pFp(·) to a problem encountered in econometrics. In particular, we consider an approximation of the distribution function of the limited information maximum likelihood (LIML) identifiability test statistic using the method of moments. An expression for the Sth-order asymptotic approximation of the moments of the LIML identifiability test statistic is derived and tabulated. The exact distribution function of the test statistic is approximated by a member of the class of F (variance ratio) distribution functions having the same first two integer moments. Some tabulations of the approximating distribution function are included.

