Similar Documents
20 similar documents found (search time: 656 ms)
1.
Summary. Forecasts of future dangerousness are often used to inform the sentencing decisions of convicted offenders. For individuals who are sentenced to probation or paroled to community supervision, such forecasts affect the conditions under which they are to be supervised. The statistical criterion for these forecasts is commonly called recidivism, which is defined as a charge or conviction for any new offence, no matter how minor. Only rarely do such forecasts make distinctions on the basis of the seriousness of offences. Yet seriousness may be central to public concerns, and judges are increasingly required by law and sentencing guidelines to make assessments of seriousness. At the very least, information about seriousness is essential for allocating scarce resources for community supervision of convicted offenders. The paper focuses only on murderous conduct by individuals on probation or parole. Using data on a population of over 60,000 cases from Philadelphia's Adult Probation and Parole Department, we forecast whether each offender will be charged with a homicide or attempted homicide within 2 years of beginning community supervision. We use a statistical learning approach that makes no assumptions about how predictors are related to the outcome. We also build in the costs of false negatives and false positives, use half of the data to build the forecasting model, and use the other half to evaluate the quality of the forecasts. Forecasts based on this approach offer the possibility of concentrating rehabilitation, treatment and surveillance resources on a small subset of convicted offenders who may be in greatest need, and who pose the greatest risk to society.
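The cost asymmetry at the heart of this approach can be illustrated with the textbook Bayes decision rule: with false-negative cost c_fn and false-positive cost c_fp, a case is flagged when its predicted risk probability exceeds c_fp/(c_fp + c_fn). This is a minimal sketch of the cost-ratio idea only, not the paper's actual statistical learning procedure; the probabilities and the 10:1 cost ratio below are hypothetical.

```python
# Cost-sensitive classification rule: with false-negative cost c_fn and
# false-positive cost c_fp, the Bayes-optimal rule flags a case as
# high-risk when its predicted probability p satisfies
#   p >= c_fp / (c_fp + c_fn).

def cost_threshold(c_fn, c_fp):
    """Probability threshold implied by the cost ratio."""
    return c_fp / (c_fp + c_fn)

def classify(probs, c_fn=10.0, c_fp=1.0):
    """Flag cases whose risk probability exceeds the cost-implied threshold."""
    t = cost_threshold(c_fn, c_fp)
    return [int(p >= t) for p in probs]

# A 10:1 false-negative-to-false-positive cost ratio lowers the
# threshold from 0.5 to 1/11, flagging far more offenders.
probs = [0.02, 0.08, 0.10, 0.30, 0.60]           # hypothetical risk scores
flags_symmetric = classify(probs, c_fn=1.0, c_fp=1.0)    # threshold 0.5
flags_asymmetric = classify(probs, c_fn=10.0, c_fp=1.0)  # threshold ~0.091
```

Raising the relative cost of a false negative lowers the flagging threshold, which is precisely why building costs into the forecast changes which offenders are singled out for intensive supervision.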

2.
Stochastic population forecasts based on conditional expert opinions
The paper develops and applies an expert-based stochastic population forecasting method, which can also be used to obtain a probabilistic version of scenario-based official forecasts. The full probability distribution of population forecasts is specified by starting from expert opinions on the future development of demographic components. Expert opinions are elicited as conditional on the realization of scenarios, in a two-step (or multiple-step) fashion. The method is applied to develop a stochastic forecast for the Italian population, starting from official scenarios from the Italian National Statistical Office.

3.
Some governments rely on centralized, official sets of population forecasts for planning capital facilities. But the nature of population forecasting, as well as the milieu of government forecasting in general, can lead to the creation of extrapolative forecasts not well suited to long-range planning. This report discusses these matters, and suggests that custom-made forecasts and the use of forecast guidelines and a review process stressing forecast assumption justification may be a more realistic basis for planning individual facilities than general-purpose, official forecasts.

4.
In human mortality modelling, if a population consists of several subpopulations it can be desirable to model their mortality rates simultaneously while taking into account the heterogeneity among them. Mortality forecasting methods tend to produce divergent forecasts for subpopulations when independence is assumed. However, under closely related social, economic and biological backgrounds, the mortality patterns of these subpopulations are expected to be non-divergent in the future. In this article, we propose a new method for coherent modelling and forecasting of mortality rates for multiple subpopulations, in the sense of non-divergent life expectancy among subpopulations. The mortality rates of subpopulations are treated as multilevel functional data, and a weighted multilevel functional principal component (wMFPCA) approach is proposed to model and forecast them. The proposed model is applied to sex-specific data for nine developed countries, and the results show that, in terms of overall forecasting accuracy, it outperforms the independent model and the Product-Ratio model, as well as the unweighted multilevel functional principal component approach.

5.
The effects of data uncertainty on real-time decision-making can be reduced by predicting data revisions to U.S. GDP growth. We show that survey forecasts efficiently predict the revision implicit in the second estimate of GDP growth, but that forecasting models incorporating monthly economic indicators and daily equity returns provide superior forecasts of the data revision implied by the release of the third estimate. We use forecasting models to measure the impact of surprises in GDP announcements on equity markets, and to analyze the effects of anticipated future revisions on announcement-day returns. We show that the publication of better than expected third-release GDP figures provides a boost to equity markets, and if future upward revisions are expected, the effects are enhanced during recessions.

6.
"The base period of a population forecast is the time period from which historical data are collected for the purpose of forecasting future population values. The length of the base period is one of the fundamental decisions made in preparing population forecasts, yet very few studies have investigated the effects of this decision on population forecast errors. In this article the relationship between the length of the base period and population forecast errors is analyzed, using three simple forecasting techniques and data from 1900 to 1980 for states in the United States. It is found that increasing the length of the base period up to 10 years improves forecast accuracy, but that further increases generally have little additional effect. The only exception to this finding is long-range forecasts of rapidly growing states, in which a longer base period substantially improves forecast accuracy for two of the forecasting techniques."
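The base-period experiment can be mimicked with the simplest class of techniques the article mentions: straight-line extrapolation fitted to the last `base` observations. The series and base lengths below are invented for illustration; they show how a short base period can win when early data no longer reflect the current trend.

```python
# Linear-extrapolation forecaster: fit a least-squares line to the last
# `base` observations and project it `h` steps ahead. Varying `base`
# mimics the article's experiment on base-period length.
def linear_extrapolate(series, base, h):
    ys = series[-base:]
    n = len(ys)
    xs = list(range(n))
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
            / sum((x - xbar) ** 2 for x in xs)
    intercept = ybar - slope * xbar
    return intercept + slope * (n - 1 + h)

# Hypothetical population series: noisy and flat early on, then a steady
# +10/period trend. A short base period ignores the stale early data.
history = [100, 101, 99, 104, 120, 130, 140, 150, 160, 170]
f_short = linear_extrapolate(history, base=5, h=1)   # uses recent trend only
f_long = linear_extrapolate(history, base=10, h=1)   # diluted by early data
```

Here the 5-period base yields a one-step forecast of 180, continuing the recent trend, while the 10-period base drags the forecast down; whether more history helps depends, as the article finds, on how stable the underlying trend is.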

7.
"Errors in population forecasts arise from errors in the jump-off population and errors in the predictions of future vital rates. The propagation of these errors through the linear (Leslie) growth model is studied, and prediction intervals for future population are developed. For U.S. national forecasts, the prediction intervals are compared with the U.S. Census Bureau's high-low intervals." In order to assess the accuracy of the predictions of vital rates, the authors "derive the predictions from a parametric statistical model and estimate the extent of model misspecification and errors in parameter estimates. Subjective, expert opinion, so important in real forecasting, is incorporated with the technique of mixed estimation. A robust regression model is used to assess the effects of model misspecification."

8.
Current official population forecasts differ little from those that Whelpton made 50 years ago either in the cohort–component methodology used or in the arguments used to motivate the assumptions. However, Whelpton produced some of the most erroneous forecasts of this century. This suggests that current forecasters should ensure that they give users an assessment of the uncertainty of their forecasts. We show how simple statistical methods can be combined with expert judgment to arrive at an overall predictive distribution for the future population. We apply the methods to a world population forecast that was made in 1994. Accepting that point forecast, we find that the probability is only about 2% that the world population in the year 2030 will be less than the low scenario of 8317 million. The probability that the world population will exceed the high scenario of 10 736 million is about 13%. Similarly, the probability is only about 51% that the high–low interval of a recent United Nations (UN) forecast will contain the true population in the year 2025. Even if we consider the UN high–low intervals as conditional on the possible future policies of its member states, they appear to have a relatively small probability of encompassing the future population.
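Scenario probabilities of this kind are simply a predictive distribution evaluated at the scenario bounds. As an illustration only, assuming a normal predictive distribution with a hypothetical mean and standard deviation (the paper's own predictive distribution is different), the calculation looks like this:

```python
import math

# Normal CDF via the error function; used to evaluate a predictive
# distribution at scenario bounds. The mean and standard deviation below
# are hypothetical stand-ins, not the paper's fitted values.
def norm_cdf(x, mu, sd):
    return 0.5 * (1 + math.erf((x - mu) / (sd * math.sqrt(2))))

mu, sd = 9500.0, 600.0                      # hypothetical 2030 mean / sd (millions)
p_below_low = norm_cdf(8317.0, mu, sd)      # P(population < low scenario)
p_above_high = 1 - norm_cdf(10736.0, mu, sd)  # P(population > high scenario)
p_inside = 1 - p_below_low - p_above_high   # P(high-low interval covers truth)
```

The point the paper makes is that once such a distribution is specified, the coverage probability of any high–low scenario interval can be computed directly rather than left implicit.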

9.
Autoregressive Forecasting of Some Functional Climatic Variations
Many variations such as the annual cycle in sea surface temperatures can be considered to be smooth functions and are appropriately described using methods from functional data analysis. This study defines a class of functional autoregressive (FAR) models which can be used as robust predictors for making forecasts of entire smooth functions in the future. The methods are illustrated and compared with pointwise predictors such as SARIMA by applying them to forecasting the entire annual cycle of climatological El Niño–Southern Oscillation (ENSO) time series one year ahead. Forecasts for the period 1987–1996 suggest that the FAR functional predictors show some promising skill, compared to traditional scalar SARIMA forecasts which perform poorly.

10.
Traffic flow data are routinely collected for many networks worldwide. These invariably large data sets can be used as part of a traffic management system, for which good traffic flow forecasting models are crucial. The linear multiregression dynamic model (LMDM) has been shown to be promising for forecasting flows, accommodating multivariate flow time series, while being a computationally simple model to use. While statistical flow forecasting models usually base their forecasts on flow data alone, data for other traffic variables are also routinely collected. This paper shows how cubic splines can be used to incorporate extra variables into the LMDM in order to enhance flow forecasts. Cubic splines are also introduced into the LMDM to parsimoniously accommodate the daily cycle exhibited by traffic flows. The proposed methodology allows the LMDM to provide more accurate forecasts when forecasting flows in a real high‐dimensional traffic data set. The resulting extended LMDM can deal with some important traffic modelling issues not usually considered in flow forecasting models. Additionally, the model can be implemented in a real‐time environment, a crucial requirement for traffic management systems designed to support decisions and actions to alleviate congestion and keep traffic flowing.

11.
We consider Pitman closeness to evaluate the performance of univariate and multivariate forecasting methods. Optimal weights for the combination of forecasts are calculated with respect to this criterion. These weights depend on the assumed distribution of the individual forecast errors. In the normal case they coincide with the optimal weights under the MSE criterion (univariate case) and under the MMSE criterion (multivariate case). We also present a simple example showing how the different combination techniques perform, and how much the optimal multivariate combination can outperform other combinations. Multivariate forecasts arise in practice, for example in econometrics, where forecasting institutes routinely estimate several economic variables.
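In the normal case the Pitman-optimal weights reduce to the familiar MSE-optimal (Bates–Granger) combination weights. A sketch for two unbiased forecasts, with invented error variances:

```python
# MSE-optimal weights for combining two unbiased forecasts with error
# variances v1, v2 and error covariance c (Bates-Granger weights):
#   w1 = (v2 - c) / (v1 + v2 - 2c),  w2 = 1 - w1.
def combination_weights(v1, v2, c=0.0):
    denom = v1 + v2 - 2 * c
    w1 = (v2 - c) / denom
    return w1, 1 - w1

def combined_variance(v1, v2, c, w1):
    """Error variance of the weighted combination."""
    w2 = 1 - w1
    return w1 ** 2 * v1 + w2 ** 2 * v2 + 2 * w1 * w2 * c

# Two uncorrelated forecasts with error variances 1 and 4: the better
# forecast gets weight 0.8, and the combination beats both inputs.
w1, w2 = combination_weights(1.0, 4.0, 0.0)
var_comb = combined_variance(1.0, 4.0, 0.0, w1)
```

The combined error variance (0.8) is below that of either input forecast, which is the basic reason forecast combination is worthwhile; the multivariate version of the abstract generalizes these weights to weight matrices.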

12.
"A model for birth forecasting based on prediction of the so-called 'birth order probabilities' is constructed. The relation between this model and recent models of fertility prediction is derived. Birth forecasts with approximate probability limits for the U.S. for the period 1983–1997 are generated. The performance of the proposed model in predicting future fertility is tested by fitting time series models to part of the available series (1917–1982) and ultimately generating birth forecasts for the remainder of the period, then comparing these forecasts with the actual data." The accuracy of the resulting fertility forecasts is also compared with that of forecasts made by other methods.

13.
"Projecting populations that have sparse or unreliable data, such as those of many developing countries, presents a challenge to demographers. The assumptions that they make to project data-poor populations frequently fall into the realm of 'educated guesses', and the resulting projections, often regarded as forecasts, are valid only to the extent that the assumptions on which they are based reasonably represent the past or future, as the case may be. These traditional projection techniques do not incorporate a demographer's assessment of uncertainty in the assumptions. Addressing the challenges of forecasting a data-poor population, we project the Iraqi Kurdish population using a Bayesian approach. This approach incorporates a demographer's uncertainty about past and future characteristics of the population in the form of elicited prior distributions."

14.
Methods for national population forecasts: a review
"Three widely used classes of methods for forecasting national populations are reviewed: demographic accounting/cohort-component methods for long-range projections, statistical time series methods for short-range forecasts, and structural modeling methods for the simulation and forecasting of the effects of policy changes. In each case, the major characteristics, strengths, and weaknesses of the methods are described. Factors that place intrinsic limits on the accuracy of population forecasts are articulated. Promising lines of additional research by statisticians and demographers are identified for each class of methods and for population forecasting generally."

15.
A number of volatility forecasting studies have led to the perception that ARCH- and stochastic-volatility-type models provide poor out-of-sample forecasts of volatility. This perception rests primarily on traditional forecast evaluation criteria concerning the accuracy and unbiasedness of forecasts. In this paper we provide an analytical assessment of volatility forecasting performance. Working in both the volatility and the log-volatility framework, we prove how the inherent noise in approximating the true, unobservable volatility by the squared return leads to a misleading forecast evaluation, inflating the observed mean squared forecast error and invalidating the Diebold–Mariano statistic. We analytically characterize this noise and explicitly quantify its effects assuming normal errors. We extend our results to more general error structures such as the Compound Normal and Gram–Charlier classes of distributions. We argue that evaluation problems are likely to be exacerbated by non-normality of the shocks, and that non-linear and utility-based criteria can be more suitable for the evaluation of volatility forecasts.
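The inflation of the mean squared forecast error by the proxy noise is easy to reproduce by simulation: even a perfect variance forecast looks poor when scored against squared returns. A sketch under the abstract's normal-error assumption, with an invented latent-variance process:

```python
import random

random.seed(0)

# r_t = sigma_t * eps_t with eps ~ N(0,1): the squared return r_t^2 is an
# unbiased but very noisy proxy for sigma_t^2 (its conditional variance is
# 2 * sigma_t^4). Even a *perfect* variance forecast therefore shows a large
# mean squared error when evaluated against squared returns.
n = 50_000
true_var = [0.5 + 0.5 * random.random() for _ in range(n)]    # latent sigma_t^2
sq_ret = [v * random.gauss(0.0, 1.0) ** 2 for v in true_var]  # proxy r_t^2

perfect_forecast = true_var  # an oracle that knows sigma_t^2 exactly
mse_vs_truth = sum((f - v) ** 2 for f, v in zip(perfect_forecast, true_var)) / n
mse_vs_proxy = sum((f - r) ** 2 for f, r in zip(perfect_forecast, sq_ret)) / n
```

Against the truth the oracle's MSE is exactly zero, but against the squared-return proxy it is roughly E[2·sigma^4] (about 1.17 for this latent process), which is the noise floor the paper characterizes analytically.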

16.
Exponential smoothing is the most common model-free means of forecasting a future realization of a time series. It requires the specification of a smoothing factor which is usually chosen from the data to minimize the average squared residual of previous one-step-ahead forecasts. In this paper we show that exponential smoothing can be put into a nonparametric regression framework and gain some interesting insights into its performance through this interpretation. We also use theoretical developments from the kernel regression field to derive, for the first time, asymptotic properties of exponential smoothing forecasters.
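A minimal sketch of the procedure described in the first two sentences: the recursion f_{t+1} = alpha * y_t + (1 - alpha) * f_t, with alpha chosen from the data by minimizing the sum of squared one-step-ahead residuals (here over a simple grid; the series is invented):

```python
# Simple exponential smoothing: the one-step forecast is
#   f_{t+1} = alpha * y_t + (1 - alpha) * f_t.
def ses_forecasts(ys, alpha):
    f = [ys[0]]                       # initialise with the first observation
    for y in ys[:-1]:
        f.append(alpha * y + (1 - alpha) * f[-1])
    return f                          # f[t] is the one-step forecast of ys[t]

def sse(ys, alpha):
    """Sum of squared one-step-ahead residuals for a given alpha."""
    return sum((y - f) ** 2 for y, f in zip(ys, ses_forecasts(ys, alpha)))

def best_alpha(ys, grid=None):
    """Pick alpha from the data by grid search, as described in the abstract."""
    grid = grid or [i / 100 for i in range(1, 100)]
    return min(grid, key=lambda a: sse(ys, a))

# A strongly trending series pushes alpha toward 1, so the forecast
# tracks the latest observation as closely as possible.
trending = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
alpha_star = best_alpha(trending)
```

The paper's contribution is to reinterpret exactly this forecaster as a one-sided kernel regression smoother, which is what makes its asymptotic analysis possible.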

17.
Many important variables in business and economics are neither measured nor measurable but are simply defined in terms of other measured variables. For instance, the real interest rate is defined as the difference between the nominal interest rate and the inflation rate. There are two ways to forecast a defined variable: one can directly forecast the variable itself, or one can derive the forecast of the defined variable indirectly from the forecasts of the constituent variables. Using Box-Jenkins univariate time series analysis for four defined variables—real interest rate, money multiplier, real GNP, and money velocity—the forecasting accuracy of the two methods is compared. The results show that indirect forecasts tend to outperform direct methods for these defined variables.
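A toy sketch of the two routes for a defined variable, using naive mean forecasts in place of the paper's Box-Jenkins models; the series are invented. With identical linear forecasting rules the two routes coincide exactly, so the accuracy differences the paper reports arise because the univariate models fitted to the components differ from the model fitted to the defined variable itself.

```python
# A defined variable such as the real interest rate r_t = i_t - pi_t can be
# forecast directly (model r_t itself) or indirectly (model i_t and pi_t
# separately, then subtract the forecasts).
def mean_forecast(series):
    """Naive stand-in for a fitted univariate model: forecast the mean."""
    return sum(series) / len(series)

nominal = [5.0, 6.0, 5.5, 6.5, 6.0]      # hypothetical nominal rates
inflation = [2.0, 3.0, 2.5, 3.5, 3.0]    # hypothetical inflation rates
real = [i - p for i, p in zip(nominal, inflation)]  # defined variable

direct = mean_forecast(real)                                  # route 1
indirect = mean_forecast(nominal) - mean_forecast(inflation)  # route 2
```

Because the mean is linear, `direct` and `indirect` agree here; swapping in ARIMA models of different orders for the components breaks this equality and creates the direct-versus-indirect comparison the paper studies.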

18.
Econometric Reviews, 2013, 32(3): 175–198
A number of volatility forecasting studies have led to the perception that ARCH- and stochastic-volatility-type models provide poor out-of-sample forecasts of volatility. This perception rests primarily on traditional forecast evaluation criteria concerning the accuracy and unbiasedness of forecasts. In this paper we provide an analytical assessment of volatility forecasting performance. Working in both the volatility and the log-volatility framework, we prove how the inherent noise in approximating the true, unobservable volatility by the squared return leads to a misleading forecast evaluation, inflating the observed mean squared forecast error and invalidating the Diebold–Mariano statistic. We analytically characterize this noise and explicitly quantify its effects assuming normal errors. We extend our results to more general error structures such as the Compound Normal and Gram–Charlier classes of distributions. We argue that evaluation problems are likely to be exacerbated by non-normality of the shocks, and that non-linear and utility-based criteria can be more suitable for the evaluation of volatility forecasts.

19.
The importance of interval forecasts is reviewed. Several general approaches to calculating such forecasts are described and compared. They include the use of theoretical formulas based on a fitted probability model (with or without a correction for parameter uncertainty), various “approximate” formulas (which should be avoided), and empirically based, simulation, and resampling procedures. The latter are useful when theoretical formulas are not available or there are doubts about some model assumptions. The distinction between a forecasting method and a forecasting model is expounded. For large groups of series, a forecasting method may be chosen in a fairly ad hoc way. With appropriate checks, it may be possible to base interval forecasts on the model for which the method is optimal. It is certainly unsound to use a model for which the method is not optimal, but, strangely, this is sometimes done. Some general comments are made as to why prediction intervals tend to be too narrow in practice to encompass the required proportion of future observations. An example demonstrates the overriding importance of careful model specification. In particular, when data are “nearly nonstationary,” the difference between fitting a stationary and a nonstationary model is critical.
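One of the resampling procedures described above can be sketched for an AR(1) model: simulate many one-step futures by adding bootstrap-resampled residuals to the point forecast, then take empirical quantiles. The series is invented, and parameter uncertainty is ignored in this sketch.

```python
import random

random.seed(1)

# Fit a zero-intercept AR(1) by least squares and return the coefficient
# together with the in-sample residuals.
def ar1_fit(ys):
    num = sum(a * b for a, b in zip(ys[1:], ys[:-1]))
    den = sum(b * b for b in ys[:-1])
    phi = num / den
    resid = [a - phi * b for a, b in zip(ys[1:], ys[:-1])]
    return phi, resid

# Empirically based one-step prediction interval: resample residuals onto
# the point forecast and take empirical quantiles of the simulated futures.
def prediction_interval(ys, level=0.9, n_sims=5000):
    phi, resid = ar1_fit(ys)
    point = phi * ys[-1]
    sims = sorted(point + random.choice(resid) for _ in range(n_sims))
    lo = sims[int(n_sims * (1 - level) / 2)]
    hi = sims[int(n_sims * (1 + level) / 2) - 1]
    return lo, point, hi

series = [0.5, 0.8, 0.3, 0.9, 0.4, 0.7, 0.2, 0.6, 0.5, 0.8]  # invented data
lo, point, hi = prediction_interval(series)
```

Because this ignores parameter uncertainty, the interval is exactly the kind that the review warns tends to be too narrow in practice; adding a bootstrap over the fitted coefficient would widen it.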

20.
The Box–Jenkins methodology for modeling and forecasting from univariate time series models has long been considered a standard to which other forecasting techniques have been compared. To a Bayesian statistician, however, the method lacks an important facet—a provision for modeling uncertainty about parameter estimates. We present a technique called sampling the future for including this feature in both the estimation and forecasting stages. Although it is relatively easy to use Bayesian methods to estimate the parameters in an autoregressive integrated moving average (ARIMA) model, there are severe difficulties in producing forecasts from such a model. The multiperiod predictive density does not have a convenient closed form, so approximations are needed. In this article, exact Bayesian forecasting is approximated by simulating the joint predictive distribution. First, parameter sets are randomly generated from the joint posterior distribution. These are then used to simulate future paths of the time series. This bundle of many possible realizations is used to project the future in several ways. Highest probability forecast regions are formed and portrayed with computer graphics. The predictive density's shape is explored. Finally, we discuss a method that allows the analyst to subjectively modify the posterior distribution on the parameters and produce alternate forecasts.
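A stripped-down sketch of "sampling the future" for an AR(1) process: each simulated path uses its own parameter draw, so the bundle of paths reflects both innovation noise and parameter uncertainty. The "posterior" here is a stand-in normal distribution, not a real ARIMA posterior.

```python
import random

random.seed(2)

# Sampling the future: draw a parameter set per path from an (assumed,
# stand-in) posterior, then simulate a future trajectory with that draw.
def sample_future(last_y, phi_mean, phi_sd, sigma, horizon, n_paths):
    paths = []
    for _ in range(n_paths):
        phi = random.gauss(phi_mean, phi_sd)   # posterior draw (stand-in)
        y, path = last_y, []
        for _ in range(horizon):
            y = phi * y + random.gauss(0.0, sigma)  # innovation noise
            path.append(y)
        paths.append(path)
    return paths

paths = sample_future(last_y=1.0, phi_mean=0.8, phi_sd=0.05,
                      sigma=0.2, horizon=5, n_paths=2000)

# Empirical 90% band at each horizon, read straight off the bundle of paths.
def band(paths, h, level=0.9):
    vals = sorted(p[h] for p in paths)
    n = len(vals)
    return vals[int(n * (1 - level) / 2)], vals[int(n * (1 + level) / 2) - 1]

lo1, hi1 = band(paths, 0)   # one step ahead
lo5, hi5 = band(paths, 4)   # five steps ahead
```

The band widens with the horizon, as both parameter and innovation uncertainty accumulate; the article builds its highest-probability forecast regions from exactly this kind of simulated bundle, drawn from a genuine joint posterior.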
