Similar documents (20 results)
1.
Using monthly consumer price index (CPI) data from January 1990 onward, this paper applies random-field regression models, a series of random-field nonlinearity tests, and Bayesian estimation methods to analyze empirically the relationship between China's inflation rate and inflation uncertainty. The study finds a bidirectional relationship between the inflation rate and inflation uncertainty: inflation raises inflation uncertainty, with the two exhibiting a U-shaped relationship, while higher inflation uncertainty causes the inflation rate to rise first and then fall, an inverted-U relationship.
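A minimal, illustrative stand-in only: the paper's random-field regression and Bayesian estimation are not reproduced here. This sketch computes monthly inflation from simulated CPI, proxies inflation uncertainty by a rolling standard deviation, and fits a quadratic regression to look for the U-shaped link the paper reports; all data and parameter choices are assumptions for illustration.

```python
# Stand-in sketch: rolling-window uncertainty proxy and a quadratic regression,
# NOT the random-field / Bayesian methodology used in the paper.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(10)
T = 360
cpi = 100 * np.cumprod(1 + rng.normal(0.004, 0.006, T))   # simulated monthly CPI level
infl = pd.Series(cpi).pct_change() * 100                   # monthly inflation rate, %

uncert = infl.rolling(12).std()                             # crude inflation-uncertainty proxy
data = pd.DataFrame({"infl": infl, "uncert": uncert}).dropna()

# Quadratic specification: a positive coefficient on the squared term is
# consistent with a U-shaped inflation/uncertainty relationship.
X = sm.add_constant(pd.DataFrame({"infl": data["infl"], "infl_sq": data["infl"] ** 2}))
print(sm.OLS(data["uncert"], X).fit().params.round(4))
```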

2.
A popular account for the demise of the U.K.’s monetary targeting regime in the 1980s blames the fluctuating predictive relationships between broad money and inflation and real output growth. Yet ex post policy analysis based on heavily revised data suggests no fluctuations in the predictive content of money. In this paper, we investigate the predictive relationships for inflation and output growth using both real-time and heavily revised data. We consider a large set of recursively estimated vector autoregressive (VAR) and vector error correction models (VECM). These models differ in terms of lag length and the number of cointegrating relationships. We use Bayesian model averaging (BMA) to demonstrate that real-time monetary policymakers faced considerable model uncertainty. The in-sample predictive content of money fluctuated during the 1980s as a result of data revisions in the presence of model uncertainty. This feature is only apparent with real-time data as heavily revised data obscure these fluctuations. Out-of-sample predictive evaluations rarely suggest that money matters for either inflation or real output. We conclude that both data revisions and model uncertainty contributed to the demise of the U.K.’s monetary targeting regime.
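A minimal sketch of the averaging idea, under simplifying assumptions: simulated data instead of the authors' real-time UK series, single-equation inflation regressions instead of VAR/VECM systems, and BIC weights as a crude approximation to posterior model probabilities.

```python
# BIC-weighted model averaging over inflation equations that differ in how many
# lags of money growth they include (a simplified stand-in for the paper's BMA over VARs).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 240
d_money = rng.normal(0, 1, T)
inflation = 0.4 * np.roll(d_money, 2) + rng.normal(0, 1, T)
df = pd.DataFrame({"inflation": inflation, "d_money": d_money})

lags = [1, 2, 4]                                   # candidate lag lengths (model space)
maxp = max(lags)
lagged = pd.concat({f"l{k}": df["d_money"].shift(k) for k in range(1, maxp + 1)},
                   axis=1).dropna()                # common estimation sample for all models
y = df["inflation"].loc[lagged.index]
fits = {p: sm.OLS(y, sm.add_constant(lagged.iloc[:, :p])).fit() for p in lags}

bic = np.array([fits[p].bic for p in lags])
w = np.exp(-0.5 * (bic - bic.min()))
w /= w.sum()                                       # approximate posterior model probabilities

# Model-averaged coefficient on the first lag of money growth.
beta_l1 = sum(w[i] * fits[p].params["l1"] for i, p in enumerate(lags))
print({p: round(float(wi), 3) for p, wi in zip(lags, w)}, round(float(beta_l1), 3))
```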

3.
In many industrial and biological experiments, the recorded data consist of the number of observations falling in an interval. In this paper, we develop two test statistics to test whether the grouped observations come from an exponential distribution. Following the procedure of Damianou and Kemp (1990, "New goodness-of-fit statistics for discrete and continuous data," American Journal of Mathematical and Management Sciences, 10:275–307), Kolmogorov–Smirnov-type statistics are developed with the maximum likelihood estimator of the scale parameter substituted for the true unknown scale. The asymptotic theory for both statistics is studied, and power studies are carried out via simulations.
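A minimal sketch of the ingredients (not the authors' exact statistics): a Kolmogorov–Smirnov-type discrepancy for interval counts with the grouped-data MLE of the exponential scale plugged in, calibrated here by parametric bootstrap rather than the paper's asymptotic theory.

```python
# KS-type goodness-of-fit check for exponentiality with grouped (interval) data.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

inner = np.arange(0.0, 11.0)                      # finite boundaries 0, 1, ..., 10
edges = np.append(inner, np.inf)                  # last class [10, inf) is open-ended

def group(x):
    """Turn raw observations into interval counts on `edges`."""
    counts = np.histogram(x, bins=inner)[0]
    return np.append(counts, np.sum(x >= inner[-1]))

def grouped_mle(counts):
    """MLE of the exponential scale from grouped counts (numerical maximization)."""
    def negll(scale):
        cell = np.diff(1.0 - np.exp(-edges / scale))
        return -np.sum(counts * np.log(np.clip(cell, 1e-300, None)))
    return minimize_scalar(negll, bounds=(1e-3, 100.0), method="bounded").x

def ks_grouped(counts, scale):
    """Max |empirical - fitted| CDF discrepancy at the finite cell boundaries."""
    emp = np.cumsum(counts)[:-1] / counts.sum()
    fit = 1.0 - np.exp(-inner[1:] / scale)
    return np.max(np.abs(emp - fit))

counts = group(rng.exponential(2.0, 300))
scale_hat = grouped_mle(counts)
stat = ks_grouped(counts, scale_hat)

# Parametric bootstrap null distribution, re-estimating the scale in each replicate,
# because plugging in an estimated scale changes the statistic's null distribution.
boot = [ks_grouped(c, grouped_mle(c))
        for c in (group(rng.exponential(scale_hat, counts.sum())) for _ in range(500))]
print(round(scale_hat, 3), round(stat, 4), float(np.mean(np.array(boot) >= stat)))
```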

4.
In this article, tests are developed which can be used to investigate the goodness-of-fit of the skew-normal distribution in the context most relevant to the data analyst, namely that in which the parameter values are unknown and are estimated from the data. We consider five test statistics chosen from the broad Cramér–von Mises and Kolmogorov–Smirnov families, based on measures of disparity between the distribution function of a fitted skew-normal population and the empirical distribution function. The sampling distributions of the proposed test statistics are approximated using Monte Carlo techniques and summarized in easy-to-use tabular form. We also present results obtained from simulation studies designed to explore the true size of the tests and their power against various asymmetric alternative distributions.
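A minimal sketch of the Kolmogorov–Smirnov variant only (not the authors' tabulated critical values): the skew-normal parameters are estimated from the sample, so the usual KS table is invalid and the null distribution is approximated by Monte Carlo with the parameters re-estimated in every replicate.

```python
# KS goodness-of-fit for a fitted skew-normal, calibrated by parametric Monte Carlo.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = stats.skewnorm.rvs(a=4, loc=0, scale=1, size=150, random_state=rng)

def ks_stat(sample):
    a, loc, scale = stats.skewnorm.fit(sample)               # MLE of shape, location, scale
    d = stats.kstest(sample, stats.skewnorm(a, loc, scale).cdf).statistic
    return d, (a, loc, scale)

stat, (a, loc, scale) = ks_stat(x)

# Monte Carlo null distribution: re-estimating the parameters in each replicate is
# exactly what makes the standard KS critical values inapplicable here.
null = np.array([ks_stat(stats.skewnorm.rvs(a, loc, scale, size=len(x),
                                             random_state=rng))[0]
                 for _ in range(300)])
print("D =", round(stat, 4), " p =", round(float(np.mean(null >= stat)), 3))
```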

5.
The starting point in uncertainty quantification is a stochastic model, which is fitted to a technical system in a suitable way, and prediction of uncertainty is carried out within this stochastic model. In any application such a model will not be perfect, so any uncertainty quantification based on it has to take the inadequacy of the model into account. In this paper, we rigorously show how the observed data of the technical system can be used to build a conservative non-asymptotic confidence interval on quantiles related to experiments with the technical system. The construction of this confidence interval is based on concentration inequalities and order statistics. An asymptotic bound on the length of this confidence interval is presented. Here we assume that engineers use more and more of their knowledge to build models whose errors are of bounded order. The results are illustrated by applying the newly proposed approach to real and simulated data.
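A minimal sketch of the order-statistics ingredient only (the paper's concentration-inequality construction and model-error terms are not reproduced): a distribution-free, conservative confidence interval for a quantile built purely from order statistics of the observed data.

```python
# Distribution-free confidence interval for a quantile from order statistics.
import numpy as np
from scipy.stats import binom

def quantile_ci(x, p=0.95, alpha=0.05):
    """Conservative (1 - alpha) CI for the p-quantile of continuous data."""
    x = np.sort(np.asarray(x))
    n = len(x)
    # Choose order-statistic indices l < u (1-based) so that
    # P(X_(l) <= q_p <= X_(u)) = P(l <= B <= u-1) >= 1 - alpha, with B ~ Binomial(n, p).
    lo = int(max(binom.ppf(alpha / 2, n, p), 1))
    hi = int(min(binom.ppf(1 - alpha / 2, n, p) + 1, n))
    return x[lo - 1], x[hi - 1]

rng = np.random.default_rng(3)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=400)   # stand-in for system output data
print(quantile_ci(sample, p=0.95, alpha=0.05))
```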

6.
In this article we introduce a probability distribution generated by a mixture of discrete random variables to capture uncertainty, feeling, and overdispersion, possibly present in ordinal data surveys. The choice of the components of the new model is motivated by a study on the data generating process. Inferential issues concerning the maximum likelihood estimates and the validation steps are presented; then, some empirical analyses are given to support the usefulness of the approach. Discussion on further extensions of the model ends the article.
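A minimal sketch of the general idea with one common choice of components, not necessarily the exact specification introduced in the article: ordinal ratings on 1..m modeled as a mixture of a shifted binomial component (the structured "feeling" part) and a discrete uniform component (the "uncertainty" part), fitted by maximum likelihood.

```python
# Two-component mixture for ordinal ratings: shifted binomial + discrete uniform.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

m = 7                                                    # number of ordinal categories

def pmf(pi, xi):
    """P(R = r), r = 1..m, for mixing weight pi and feeling parameter xi."""
    r = np.arange(1, m + 1)
    shifted_binom = binom.pmf(r - 1, m - 1, 1 - xi)      # shifted binomial on 1..m
    return pi * shifted_binom + (1 - pi) / m             # plus discrete uniform

def negloglik(theta, counts):
    pi, xi = theta
    return -np.sum(counts * np.log(pmf(pi, xi)))

rng = np.random.default_rng(4)
counts = rng.multinomial(500, pmf(0.7, 0.3))             # simulated survey responses

res = minimize(negloglik, x0=[0.5, 0.5], args=(counts,),
               bounds=[(1e-4, 1 - 1e-4), (1e-4, 1 - 1e-4)])
pi_hat, xi_hat = res.x
print(round(pi_hat, 3), round(xi_hat, 3))
```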

7.
A crisis of validity has emerged from three related crises of science: the crises of statistical significance and complete randomization, of replication, and of reproducibility. Guinnessometrics takes commonplace assumptions and methods of statistical science and stands them on their head, from little p-values to unstructured Big Data. Guinnessometrics focuses instead on the substantive significance which emerges from a small series of independent and economical yet balanced and repeated experiments. Originally developed and market-tested by William S. Gosset, aka "Student," in his job as Head Experimental Brewer at the Guinness Brewery in Dublin, Gosset's economic and common-sense approach to statistical inference and scientific method has been unwisely neglected. In many areas of science and life, the 10 principles of Guinnessometrics or G-values outlined here can help. Other things equal, the larger the G-values, the better the science and judgment. By now a colleague, neighbor, or YouTube junkie has probably shown you one of those wacky psychology experiments in a video involving a gorilla, testing the limits of human cognition. In one video, a person wearing a gorilla suit suddenly appears on the scene among humans, who are themselves engaged in some ordinary, mundane activity such as passing a basketball. The funny thing is, prankster researchers have discovered, when observers are asked to think about the mundane activity (such as by counting the number of observed passes of a basketball), the unexpected gorilla frequently goes unseen (for discussion see Kahneman, D. (2011), Thinking, Fast and Slow, New York: Farrar, Straus and Giroux). The gorilla is invisible. People don't see it.

8.
Healthcare economic evaluation is an analytical tool used with increasing frequency to assist decision making in the choice and financing of interventions and technologies in the healthcare system. The objective of this article is to analyze the different methods of handling sampling uncertainty in healthcare cost-effectiveness evaluations when patient-level data are available, focusing on the strengths and weaknesses of each method in order to facilitate the task of those who must base their choices on studies of this kind.
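A minimal sketch (simulated patient-level data) of one of the resampling approaches such reviews typically cover: a nonparametric bootstrap of incremental costs and effects, giving a percentile interval for the ICER and the probability of cost-effectiveness at a willingness-to-pay threshold. Sample sizes, cost distributions, and the threshold are assumptions for illustration.

```python
# Bootstrap handling of sampling uncertainty in a two-arm cost-effectiveness comparison.
import numpy as np

rng = np.random.default_rng(5)
n = 200
cost_new, eff_new = rng.gamma(2, 2500, n), rng.normal(0.70, 0.2, n)   # new intervention
cost_old, eff_old = rng.gamma(2, 2000, n), rng.normal(0.62, 0.2, n)   # comparator

def increments(rng):
    i = rng.integers(0, n, n)                       # resample patients within each arm
    j = rng.integers(0, n, n)
    return (cost_new[i].mean() - cost_old[j].mean(),
            eff_new[i].mean() - eff_old[j].mean())

boot = np.array([increments(rng) for _ in range(2000)])
d_cost, d_eff = boot[:, 0], boot[:, 1]

icer_ci = np.percentile(d_cost / d_eff, [2.5, 97.5])   # crude: assumes d_eff stays positive
threshold = 30000                                       # willingness to pay per unit of effect
prob_ce = np.mean(threshold * d_eff - d_cost > 0)       # one point on the acceptability curve
print(icer_ci.round(0), round(float(prob_ce), 3))
```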

9.
Toxicologists and pharmacologists often describe toxicity of a chemical using parameters of a nonlinear regression model. Thus estimation of parameters of a nonlinear regression model is an important problem. The estimates of the parameters and their uncertainty estimates depend upon the underlying error variance structure in the model. Typically, a priori the researcher would not know if the error variances are homoscedastic (i.e., constant across dose) or heteroscedastic (i.e., the variance is a function of dose). Motivated by this concern, in this paper we introduce an estimation procedure based on a preliminary test, which selects an appropriate estimation procedure accounting for the underlying error variance structure. Since outliers and influential observations are common in toxicological data, the proposed methodology uses M-estimators. The asymptotic properties of the preliminary test estimator are investigated; in particular its asymptotic covariance matrix is derived. The performance of the proposed estimator is compared with several standard estimators using simulation studies. The proposed methodology is also illustrated using a data set obtained from the National Toxicology Program.
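A minimal sketch of the idea, with simple stand-ins rather than the authors' exact procedure: a nonlinear dose-response curve is fitted with a robust (Huber) loss, a crude preliminary check for heteroscedasticity is run on the residuals, and only if it rejects is the model refitted with inverse-scale weights. The dose-response model, variance proxy, and test are all assumptions for illustration.

```python
# Preliminary-test flavored robust estimation for a nonlinear dose-response model.
import numpy as np
from scipy.optimize import least_squares
from scipy import stats

rng = np.random.default_rng(6)
dose = np.repeat(np.array([0.0, 1.0, 2.0, 4.0, 8.0]), 10)
y = 100 * np.exp(-0.4 * dose) + rng.normal(0, 1 + 0.5 * dose)   # heteroscedastic errors

def model(theta, d):
    return theta[0] * np.exp(-theta[1] * d)

def fit(weights):
    res = least_squares(lambda th: (y - model(th, dose)) * weights,
                        x0=[80.0, 0.2], loss="huber", f_scale=5.0)
    return res.x

# Stage 1: unweighted robust (M-type) fit.
theta0 = fit(np.ones_like(y))

# Preliminary check: does residual spread depend on dose?
# (Spearman correlation of |residuals| with dose as a simple stand-in for a formal test.)
resid = y - model(theta0, dose)
rho, pval = stats.spearmanr(np.abs(resid), dose)

if pval < 0.05:
    # Stage 2: estimate a per-dose scale and refit with inverse-scale weights.
    scale = np.array([np.median(np.abs(resid[dose == d])) for d in dose])
    theta_final = fit(1.0 / np.maximum(scale, 1e-8))
else:
    theta_final = theta0

print(theta0.round(3), theta_final.round(3), round(float(pval), 4))
```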

10.
Conformance to a specified part geometry is key to achieving product quality. Geometric tolerance control on Coordinate Measuring Machines is a critical issue, as parsimony in the set of probed points dictated by economic considerations conflicts with the requirement of full-field inspection mandated by tolerance standards. Evaluating the uncertainty caused by sampling errors is therefore a high priority. The case of position tolerance control on a hole axis and the related uncertainty analysis are examined in the paper via Monte Carlo simulation. Results exhibit remarkable uncertainty arising from a number of steps involved in the control method. A comprehensive statistical analysis is shown to be required if the risk of failing to reach the correct decision in assessing part conformance is to be kept under control.
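A minimal sketch of the Monte Carlo idea with illustrative values (not the paper's measurement setup): a hole cross-section is probed at only a few points with measurement noise, a least-squares circle is fitted, and the simulated spread of the estimated position error shows how much uncertainty sparse sampling injects into the conformance decision.

```python
# Monte Carlo study of sampling uncertainty in position-error estimation from few probed points.
import numpy as np

rng = np.random.default_rng(7)
true_center = np.array([0.010, -0.005])          # mm, true deviation from nominal (0, 0)
radius, n_points, probe_sigma = 5.0, 6, 0.002    # mm

def fitted_position_error():
    ang = rng.uniform(0, 2 * np.pi, n_points)
    pts = true_center + radius * np.column_stack([np.cos(ang), np.sin(ang)])
    pts += rng.normal(0, probe_sigma, pts.shape)
    # Algebraic (Kasa) least-squares circle fit: x^2 + y^2 = 2*a*x + 2*b*y + c.
    A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(n_points)])
    b = (pts ** 2).sum(axis=1)
    a_hat, b_hat, _ = np.linalg.lstsq(A, b, rcond=None)[0]
    return 2 * np.hypot(a_hat, b_hat)            # position error (diameter convention)

errors = np.array([fitted_position_error() for _ in range(5000)])
tolerance = 0.025                                # mm, specified position tolerance
print("mean:", round(float(errors.mean()), 4),
      " 95th pct:", round(float(np.percentile(errors, 95)), 4),
      " P(reject conforming part):", round(float(np.mean(errors > tolerance)), 3))
```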

11.
The variance of the error term in ordinary regression models and linear smoothers is usually estimated by adjusting the average squared residual for the trace of the smoothing matrix (the degrees of freedom of the predicted response). However, other types of variance estimators are needed when using monotonic regression (MR) models, which are particularly suitable for estimating response functions with pronounced thresholds. Here, we propose a simple bootstrap estimator to compensate for the over-fitting that occurs when MR models are estimated from empirical data. Furthermore, we show that, in the case of one or two predictors, the performance of this estimator can be enhanced by introducing adjustment factors that take into account the slope of the response function and characteristics of the distribution of the explanatory variables. Extensive simulations show that our estimators perform satisfactorily for a great variety of monotonic functions and error distributions.
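A minimal sketch of the problem and one generic bootstrap fix (not necessarily the exact estimator or adjustment factors proposed in the paper): isotonic (monotonic) regression overfits, so the naive residual variance is biased downward; an optimism-style bootstrap estimates how much and adds it back.

```python
# Bootstrap correction of the residual-variance estimate in monotonic (isotonic) regression.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(8)
n, sigma = 100, 1.0
x = np.sort(rng.uniform(0, 10, n))
y = np.where(x > 5, 3.0, 0.0) + rng.normal(0, sigma, n)     # pronounced threshold response

fit = IsotonicRegression().fit(x, y)
naive_var = np.mean((y - fit.predict(x)) ** 2)               # biased low because of overfitting

optimism = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    boot_fit = IsotonicRegression(out_of_bounds="clip").fit(x[idx], y[idx])
    in_boot = np.mean((y[idx] - boot_fit.predict(x[idx])) ** 2)   # apparent error
    on_orig = np.mean((y - boot_fit.predict(x)) ** 2)             # error on original data
    optimism.append(on_orig - in_boot)

corrected_var = naive_var + np.mean(optimism)
print(round(float(naive_var), 3), round(float(corrected_var), 3), "true:", sigma ** 2)
```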

12.
In the context of the Cardiovascular Health Study, a comprehensive investigation into the risk factors for strokes, we apply Bayesian model averaging to the selection of variables in Cox proportional hazard models. We use an extension of the leaps-and-bounds algorithm for locating the models that are to be averaged over and make available S-PLUS software to implement the methods. Bayesian model averaging provides a posterior probability that each variable belongs in the model, a more directly interpretable measure of variable importance than a P-value. P-values from models preferred by stepwise methods tend to overstate the evidence for the predictive value of a variable and do not account for model uncertainty. We introduce the partial predictive score to evaluate predictive performance. For the Cardiovascular Health Study, Bayesian model averaging predictively outperforms standard model selection and does a better job of assessing who is at high risk for a stroke.
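A minimal sketch of the averaging idea under several substitutions: the Python lifelines package and its bundled recidivism data stand in for the authors' S-PLUS software and the Cardiovascular Health Study, exhaustive enumeration of a few covariates replaces the leaps-and-bounds search, BIC weights approximate posterior model probabilities, and the empty model is omitted for simplicity.

```python
# BIC-approximated Bayesian model averaging over Cox proportional hazards models,
# reporting each variable's posterior inclusion probability.
import itertools
import numpy as np
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()                                   # recidivism data shipped with lifelines
candidates = ["fin", "age", "prio", "wexp"]

models = []
for k in range(1, len(candidates) + 1):
    for subset in itertools.combinations(candidates, k):
        cph = CoxPHFitter().fit(df[list(subset) + ["week", "arrest"]],
                                duration_col="week", event_col="arrest")
        # BIC based on the partial log-likelihood, penalized by the number of events.
        bic = -2 * cph.log_likelihood_ + len(subset) * np.log(df["arrest"].sum())
        models.append((subset, bic))

bics = np.array([b for _, b in models])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()

# Posterior probability that each variable belongs in the model (BMA inclusion probability).
incl = {v: float(sum(wi for (s, _), wi in zip(models, w) if v in s)) for v in candidates}
print({v: round(p, 3) for v, p in incl.items()})
```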

13.
In biomedical studies, frailty models are commonly used in analyzing multivariate survival data, where the objective of the study is to estimate both the covariate effect and the dependence between the multivariate survival times. However, inference based on these models is dependent on the distributional assumption of the frailty. We propose a diagnostic plot for assessing the frailty assumption. The proposed method is based on the cross-ratio function and the diagnostic plot suggested by Oakes (1989). We use kernel regression smoothing, with bandwidth chosen by cross-validation, to obtain the proposed plot. The resulting plot is capable of differentiating between the gamma and positive stable frailty models when strong association is present. We illustrate the feasibility of our method using simulation studies under known frailty distributions. The approach is applied to data on blindness for each eye of diabetic patients with adult-onset diabetes, and a reasonable fit to the gamma frailty model is found.

14.
Sample attrition is a potentially serious problem for analysis of panel data, particularly experimental panel data. In this article, a variety of estimation procedures are used to assess the importance of attrition bias in labor supply response to the Seattle and Denver Income Maintenance Experiments (SIME/DIME). Data from Social Security Administration earnings records and the SIME/DIME public use file are used to test various hypotheses concerning attrition bias. The study differs from previous research in that data on both attriters and nonattriters are used to estimate the experimental labor supply response. Although not conclusive, the analysis suggests that attrition bias is probably not a serious enough problem in the SIME/DIME data to warrant extensive correction procedures. The methodology used in this study could be applied to other panel data sets.

15.
In this paper we examine maximum likelihood estimation procedures in multilevel models for two-level nesting structures. Usually, for fixed effects and variance components estimation, level-one error terms and random effects are assumed to be normally distributed. Nevertheless, in some circumstances this assumption might not be realistic, especially with regard to the random effects. Thus we assume for the random effects the family of multivariate exponential power distributions (MEP); subsequently, by means of Monte Carlo simulation, we study the robustness of maximum likelihood estimators obtained under the normality assumption when the random effects are actually MEP distributed.
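A minimal sketch of the simulation idea with a simplification: random intercepts are drawn from a univariate exponential power distribution (scipy's gennorm) rather than the multivariate family, and the two-level model is fitted by Gaussian maximum likelihood with statsmodels MixedLM to see how the estimates behave when normality of the random effects fails.

```python
# Simulate a two-level model with non-normal (exponential power) random intercepts
# and fit it under the usual normality assumption.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import gennorm

rng = np.random.default_rng(11)
groups, per_group = 50, 20
shape = 1.0                                    # gennorm shape: 2 = normal, 1 = heavier-tailed
u = gennorm.rvs(shape, scale=1.0, size=groups, random_state=rng)   # random intercepts
g = np.repeat(np.arange(groups), per_group)
x = rng.normal(size=groups * per_group)
y = 1.0 + 0.5 * x + u[g] + rng.normal(0, 1.0, groups * per_group)

df = pd.DataFrame({"y": y, "x": x, "g": g})
fit = sm.MixedLM.from_formula("y ~ x", groups="g", data=df).fit()
print(fit.params.round(3))                     # fixed effects and group-variance estimate
```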

16.
Estimation of nonlinear functions of a multinomial parameter vector is necessary in many categorical data problems. The first- and second-order jackknife are explored for the purpose of bias reduction. The second-order jackknife of a function g(·) of a multinomial parameter p is shown to be asymptotically normal if all second-order partial derivatives ∂²g(p)/∂p_i ∂p_j obey a Hölder condition with exponent α > 1/2. Numerical results for the estimation of the log odds ratio in a 2×2 table demonstrate the efficiency of the jackknife method for reduction of mean squared error and the construction of approximate confidence intervals.
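A minimal sketch of the first-order jackknife only (the article also studies the second-order version) applied to the log odds ratio of a 2×2 table; the counts are illustrative.

```python
# First-order jackknife bias reduction for the log odds ratio of a 2x2 table.
import numpy as np

def log_or(counts):
    a, b, c, d = counts
    return np.log(a * d / (b * c))

def jackknife_log_or(counts):
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    theta = log_or(counts)
    # Only four distinct leave-one-out tables exist; weight them by the cell counts.
    loo = []
    for j in range(4):
        cj = counts.copy()
        cj[j] -= 1
        loo.append(log_or(cj))
    mean_loo = np.dot(counts, loo) / n
    return n * theta - (n - 1) * mean_loo        # first-order jackknife estimate

table = [20, 15, 8, 30]                           # cell counts a, b, c, d
print(round(float(log_or(table)), 4), round(float(jackknife_log_or(table)), 4))
```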

17.
Many diagnostic tests may be available to identify a particular disease. Diagnostic performance can potentially be improved by combining tests. "Either positive" and "both positive" strategies for combining tests have been discussed in the literature, where the gain in diagnostic performance is measured by the ratio of the positive (negative) likelihood ratio of the combined test to that of an individual test. Normal-theory and bootstrap confidence intervals are constructed for the gains in likelihood ratios. The performance (coverage probability, width) of the two methods is compared via simulation. All confidence intervals perform satisfactorily for large samples, while the bootstrap performs better in smaller samples in terms of coverage and width.
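A minimal sketch (simulated data, bootstrap interval only) of the quantity involved: the gain in positive likelihood ratio when two tests are combined with the "either positive" rule, relative to test 1 alone, with a percentile bootstrap confidence interval. The sensitivities and specificities are assumptions for illustration.

```python
# Bootstrap CI for the gain in LR+ of an "either positive" combination over test 1 alone.
import numpy as np

rng = np.random.default_rng(9)
n_d, n_h = 150, 300                                   # diseased / healthy subjects
# Simulated results: rows are subjects, columns are tests 1 and 2.
t_d = rng.random((n_d, 2)) < [0.80, 0.70]             # sensitivities of the two tests
t_h = rng.random((n_h, 2)) < [0.10, 0.15]             # 1 - specificities of the two tests

def lr_gain(t_d, t_h):
    either_d = t_d.any(axis=1).mean()                 # sensitivity of "either positive"
    either_h = t_h.any(axis=1).mean()                 # 1 - specificity of "either positive"
    lr_comb = either_d / either_h
    lr_one = t_d[:, 0].mean() / t_h[:, 0].mean()      # LR+ of test 1 alone
    return lr_comb / lr_one

gain = lr_gain(t_d, t_h)
boot = [lr_gain(t_d[rng.integers(0, n_d, n_d)], t_h[rng.integers(0, n_h, n_h)])
        for _ in range(2000)]
print(round(float(gain), 3), np.percentile(boot, [2.5, 97.5]).round(3))
```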

18.
We present a three-stage, nonparametric estimation procedure to recover willingness to pay for housing attributes. In the first stage we estimate a nonparametric hedonic home price function. In the second stage we recover each consumer's taste parameters for product characteristics using first-order conditions for utility maximization. Finally, in the third stage we estimate the distribution of household tastes as a function of household demographics. As an application of our methods, we compare alternative explanations for why blacks choose to live in center cities while whites suburbanize.

19.
In logistic regression with nonignorable missing responses, Ibrahim and Lipsitz proposed a method for estimating regression parameters. It is known that the regression estimates obtained by using this method are biased when the sample size is small. Another complexity arises when the iterative estimation process encounters separation in estimating the regression coefficients. In this article, we propose a method to improve the estimation of the regression coefficients. In our likelihood-based method, we penalize the likelihood by multiplying it by a noninformative Jeffreys prior as a penalty term. The proposed method reduces bias and is able to handle the issue of separation. Simulation results show substantial bias reduction for the proposed method as compared to the existing method. Analyses using real-world data also support the simulation findings. An R package called brlrmr has been developed that implements both the proposed method and the Ibrahim and Lipsitz method.
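A minimal sketch of the Jeffreys-prior (Firth-type) penalty in the complete-data case only, not the authors' full treatment of nonignorable missing responses (their R package brlrmr implements that): the score equation is modified so that separation no longer sends coefficients to infinity.

```python
# Firth-penalized (Jeffreys-prior) logistic regression via a modified Newton-Raphson.
import numpy as np

def firth_logistic(X, y, n_iter=50, tol=1e-8):
    """Firth-penalized logistic regression; returns the coefficient vector."""
    n, k = X.shape
    beta = np.zeros(k)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)
        XtWX_inv = np.linalg.inv(X.T @ (W[:, None] * X))
        # Leverages h_i of the weighted hat matrix.
        Xw = X * np.sqrt(W)[:, None]
        h = np.einsum("ij,jk,ik->i", Xw, XtWX_inv, Xw)
        # Firth-modified score: U*(beta) = X'(y - p + h (1/2 - p)).
        score = X.T @ (y - p + h * (0.5 - p))
        step = XtWX_inv @ score
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Tiny example with complete separation: ordinary ML diverges, the penalized fit does not.
X = np.column_stack([np.ones(8), np.array([-3., -2., -1., -0.5, 0.5, 1., 2., 3.])])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(firth_logistic(X, y).round(3))
```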

20.
Consider the model Y = β′X + ε. Let F₀ be the unknown cumulative distribution function of the random variable ε. Consistency of the semi-parametric maximum likelihood estimator of (β, F₀) has not been established under any interval censorship (IC) model. We prove in this paper that this estimator is consistent under the mixed case IC model and some mild assumptions.
