Similar Documents
20 similar documents retrieved.
1.
Real-time estimates of output gaps and inflation gaps differ from the values that are obtained using data available long after the event. Part of the problem is that the data on which the real-time estimates are based are subsequently revised. We show that vector-autoregressive models of data vintages provide forecasts of post-revision values of future observations and of already-released observations capable of improving estimates of output and inflation gaps in real time. Our findings indicate that annual revisions to output and inflation data are in part predictable based on their past vintages. This article has online supplementary materials.
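A minimal sketch of the underlying idea (not the authors' vintage-VAR specification): with hypothetical simulated vintages, the revision to an initial release is regressed on the previous release and the previous revision, so significant slopes indicate that revisions are partly predictable.

```python
# Sketch only: simulated "vintages" with a persistent measurement error,
# then an OLS test of whether revisions are predictable from past vintages.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 200
truth = rng.normal(2.0, 1.0, T)                    # hypothetical "final" values
noise = rng.normal(0.0, 0.5, T)
first = truth + 0.3 * np.roll(noise, 1) + noise    # initial releases, serially correlated error
revision = truth - first                           # what later vintages add

# Regress this period's revision on last period's initial release and revision.
X = sm.add_constant(np.column_stack([first[:-1], revision[:-1]]))
fit = sm.OLS(revision[1:], X).fit()
print(fit.summary())   # significant slopes => revisions partly predictable
```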

2.
The balanced iterative reducing and clustering using hierarchies (BIRCH) algorithm handles massive datasets by reading the data file only once, clustering the data as it is read, and retaining only a few clustering features to summarize the data read so far. Using BIRCH allows one to analyse datasets that are too large to fit in the computer's main memory. We propose estimates of Spearman's ρ and Kendall's τ that are calculated from a BIRCH output and assess their performance through Monte Carlo studies. The numerical results show that the BIRCH-based estimates can achieve the same efficiency as the usual estimates of ρ and τ while using only a fraction of the memory otherwise required.
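A rough sketch under the assumption that the retained subcluster centroids and their counts can stand in for the raw data; the paper's ρ and τ estimators are more refined than this centroid-replication approximation, and the second `predict` pass used below to recover counts is only for illustration.

```python
# Crude BIRCH-based Spearman's rho: replicate centroids by their counts and
# compute rho on the (much smaller) summary instead of the full dataset.
import numpy as np
from scipy.stats import spearmanr
from sklearn.cluster import Birch

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=100_000)

brc = Birch(threshold=0.3, n_clusters=None).fit(X)
centroids = brc.subcluster_centers_
counts = np.bincount(brc.predict(X), minlength=len(centroids))  # 2nd pass, illustration only

expanded = np.repeat(centroids, counts, axis=0)   # cheap: few centroids remain
rho_birch, _ = spearmanr(expanded[:, 0], expanded[:, 1])
rho_full, _ = spearmanr(X[:, 0], X[:, 1])
print(rho_birch, rho_full)
```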

3.
The labour input method is a new approach to national accounts estimation: it uses data on labour input together with output (value added) per unit of labour input to estimate output (value added). The method helps make national accounts exhaustive. Taking Italy as an example, this paper describes in detail how the labour input method is applied in national accounting and clarifies several additional issues, in the hope of providing a useful reference for national accounts compilation in China.
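A toy illustration of the accounting identity with made-up numbers (all figures hypothetical): value added per industry is labour input times value added per unit of labour input, summed over industries.

```python
# Hypothetical labour-input-method arithmetic:
# value added = labour input (FTE units) x value added per FTE, by industry.
labour_fte = {"manufacturing": 4_200, "trade": 3_100, "services": 5_600}
va_per_fte = {"manufacturing": 52.0, "trade": 38.5, "services": 44.0}  # thousand EUR per FTE

value_added = {k: labour_fte[k] * va_per_fte[k] for k in labour_fte}
total = sum(value_added.values())
print(value_added, total)   # industry-level and total value added estimates
```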

4.
We apply some log-linear modelling methods, which have been proposed for treating non-ignorable non-response, to some data on voting intention from the British General Election Survey. We find that, although some non-ignorable non-response models fit the data very well, they may generate implausible point estimates and predictions. Some explanation is provided for the extreme behaviour of the maximum likelihood estimates for the most parsimonious model. We conclude that point estimates for such models must be treated with great caution. To allow for the uncertainty about the non-response mechanism we explore the use of profile likelihood inference and find the likelihood surfaces to be very flat and the interval estimates to be very wide. To reduce the width of these intervals we propose constraining confidence regions to values where the parameters governing the non-response mechanism are plausible and study the effect of such constraints on inference. We find that the widths of these intervals are reduced but remain wide.
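A hedged sketch of the flat-profile phenomenon in a toy binary-outcome version of non-ignorable non-response (not the paper's log-linear voting model): for each fixed non-response slope g, the remaining parameters can fit the observed counts, so the profile log-likelihood is nearly flat and intervals for g are wide.

```python
# Toy non-ignorable non-response model: P(Y=1)=p, P(respond|Y=y)=expit(a+g*y).
# Profile the log-likelihood over the non-response slope g.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

n1, n0, m = 420, 380, 200   # hypothetical responder counts and non-respondents

def negll(theta, g):
    p, a = expit(theta[0]), theta[1]
    r1, r0 = expit(a + g), expit(a)
    lik_miss = p * (1 - r1) + (1 - p) * (1 - r0)
    return -(n1 * np.log(p * r1) + n0 * np.log((1 - p) * r0) + m * np.log(lik_miss))

profile = []
for g in np.linspace(-2, 2, 41):
    best = minimize(negll, x0=[0.0, 0.5], args=(g,), method="Nelder-Mead")
    profile.append(-best.fun)
print(np.round(np.array(profile) - max(profile), 3))  # near-flat => wide intervals
```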

5.
The estimation of the incidence of tumours in an animal carcinogenicity study is complicated by the occult nature of the tumours involved (i.e. tumours are not observable before an animal's death). Also, the lethality of tumours is generally unknown, making the tumour incidence function non-identifiable without interim sacrifices, cause-of-death data or modelling assumptions. Although Kaplan–Meier curves for overall survival are typically displayed, obtaining analogous plots for tumour incidence generally requires fairly elaborate model fitting. We present a case-study of tetrafluoroethylene to illustrate a simple method for estimating the incidence of tumours as a function of more easily estimable components. One of the components, tumour prevalence, is modelled by using a generalized additive model, which leads to estimates that are more flexible than those derived under the usual parametric models. A multiplicative assumption for tumour lethality allows for the incorporation of concomitant information, such as the size of tumours. Our approach requires only terminal sacrifice data although additional sacrifice data are easily accommodated. Simulations are used to illustrate the estimator proposed and to evaluate its properties. The method also yields a simple summary measure of tumour lethality, which can be helpful in interpreting the results of a study.

6.
We consider the problem of robust M-estimation of a vector of regression parameters, when the errors are dependent. We assume a weakly stationary, but otherwise quite general dependence structure. Our model allows for the representation of the correlations of any time series of finite length. We first construct initial estimates of the regression, scale, and autocorrelation parameters. The initial autocorrelation estimates are used to transform the model to one of approximate independence. In this transformed model, final one-step M-estimates are calculated. Under appropriate assumptions, the regression estimates so obtained are asymptotically normal, with a variance-covariance structure identical to that in the case in which the autocorrelations are known a priori. The results of a simulation study are given. Two versions of our estimator are compared with the L1-estimator and several Huber-type M-estimators. In terms of bias and mean squared error, the estimators are generally very close. In terms of the coverage probabilities of confidence intervals, our estimators appear to be quite superior to both the L1-estimator and the other estimators. The simulations also indicate that the approach to normality is quite fast.
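A minimal sketch of the two-stage idea with an AR(1) working correlation; `HuberT` is one possible ψ-function, and the paper's one-step refinement is replaced by a full robust re-fit for brevity.

```python
# Estimate AR(1) correlation from a preliminary robust fit, quasi-difference
# the model, then take a Huber-type M-estimate on the transformed data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
x = rng.normal(size=n)
e = np.zeros(n)
for t in range(1, n):                      # AR(1) errors with heavy tails
    e[t] = 0.6 * e[t - 1] + rng.standard_t(3)
y = 1.0 + 2.0 * x + e

X = sm.add_constant(x)
res0 = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()   # initial robust fit
r = res0.resid
rho = np.sum(r[1:] * r[:-1]) / np.sum(r[:-1] ** 2)      # crude AR(1) estimate

ys = y[1:] - rho * y[:-1]                               # quasi-differencing
Xs = X[1:] - rho * X[:-1]
final = sm.RLM(ys, Xs, M=sm.robust.norms.HuberT()).fit()
print(rho, final.params)
```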

7.
We have observations from a t distribution with unknown mean, variance, and degrees of freedom, each of which we wish to estimate. The major problem lies in the estimation of the degrees of freedom. We show that a relatively efficient yet very simple estimator is a given function of the ratio of percentile estimates. We derive the appropriate estimator, provide equations for transformation and standard errors, contrast this with other estimators, and give examples.
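A sketch of a percentile-ratio estimator under assumptions of my own choosing (the 90th/75th centred percentile ratio; the paper derives its own function with standard errors): for a location-scale t, the ratio of centred percentiles depends only on the degrees of freedom, so the sample ratio can be matched to the theoretical one.

```python
# Match the sample (q90-q50)/(q75-q50) ratio to its theoretical value in df.
import numpy as np
from scipy.stats import t
from scipy.optimize import brentq

rng = np.random.default_rng(3)
sample = 5.0 + 2.0 * rng.standard_t(df=6, size=5000)

q50, q75, q90 = np.percentile(sample, [50, 75, 90])
obs_ratio = (q90 - q50) / (q75 - q50)

def gap(df):
    return t.ppf(0.90, df) / t.ppf(0.75, df) - obs_ratio

# The theoretical ratio decreases in df; bracket and invert (may fail if the
# sample ratio falls outside the achievable range -- check in practice).
df_hat = brentq(gap, 2.01, 500.0)
print(df_hat)
```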

8.
We propose a method that uses a sequential design instead of a space filling design for estimating tuning parameters of a complex computer model. The goal is to bring the computer model output closer to the real system output. The method fits separate Gaussian process (GP) models to the available data from the physical experiment and the computer experiment and minimizes the discrepancy between the predictions from the GP models to obtain estimates of the tuning parameters. A criterion based on the discrepancy between the predictions from the two GP models and the standard error of prediction for the computer experiment output is then used to obtain a design point for the next run of the computer experiment. The tuning parameters are re-estimated using the augmented data set. The steps are repeated until the budget for the computer experiment data is exhausted. Simulation studies show that the proposed method performs better in bringing a computer model closer to the real system than methods that use a space filling design.
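A compact sketch of the loop on a one-dimensional toy problem; the criterion below (absolute discrepancy plus prediction standard error) is a stand-in for the paper's exact criterion, and the toy functions are assumptions.

```python
# Fit separate GPs to physical and computer data, pick the tuning value that
# minimizes the squared discrepancy, then add the computer run where the
# discrepancy/uncertainty criterion is largest.
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(4)
def real_system(x):       return np.sin(3 * x) + 0.05 * rng.normal(size=np.shape(x))
def computer_model(x, t): return np.sin(t * x)          # tuning parameter t, truth t = 3

x_phys = np.linspace(0, 2, 15)
y_phys = real_system(x_phys)
D = [(x, t) for x in np.linspace(0, 2, 5) for t in (2.0, 3.5, 4.5)]   # initial design

for _ in range(10):
    Xc = np.array(D)
    yc = np.array([computer_model(x, t) for x, t in D])
    gp_phys = GaussianProcessRegressor().fit(x_phys[:, None], y_phys)
    gp_comp = GaussianProcessRegressor().fit(Xc, yc)

    grid = np.linspace(0, 2, 50)
    def disc(t):   # discrepancy between the two GP predictions at tuning value t
        Z = np.column_stack([grid, np.full_like(grid, t)])
        return np.mean((gp_phys.predict(grid[:, None]) - gp_comp.predict(Z)) ** 2)
    t_hat = minimize_scalar(disc, bounds=(1.0, 6.0), method="bounded").x

    # Next run: point with the largest discrepancy + prediction-error criterion.
    Z = np.column_stack([grid, np.full_like(grid, t_hat)])
    mu, se = gp_comp.predict(Z, return_std=True)
    crit = np.abs(gp_phys.predict(grid[:, None]) - mu) + se
    D.append((grid[np.argmax(crit)], t_hat))
print(t_hat)
```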

9.
We propose replacing the usual Student's t statistic, which tests for equality of means of two distributions and is used to construct a confidence interval for the difference, by a biweight-“t” statistic. The biweight-“t” is a ratio of the difference of the biweight estimates of location from the two samples to an estimate of the standard error of this difference. Three forms of the denominator are evaluated: weighted variance estimates using both pooled and unpooled scale estimates, and unweighted variance estimates using an unpooled scale estimate. Monte Carlo simulations reveal that the resulting confidence intervals are highly efficient on moderate sample sizes, and that nominal levels are nearly attained, even when considering extreme percentage points.
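A sketch using one common textbook form of the biweight location and midvariance (the tuning constants c=6 and c=9 are assumptions here, not taken from the paper); the paper also evaluates pooled and weighted variants of the denominator.

```python
# Biweight-"t": difference of Tukey biweight locations over an unpooled
# standard error built from the biweight midvariance.
import numpy as np

def biweight_loc(x, c=6.0):
    M = np.median(x)
    mad = np.median(np.abs(x - M))
    u = (x - M) / (c * mad)
    w = (np.abs(u) < 1) * (1 - u**2) ** 2
    return M + np.sum(w * (x - M)) / np.sum(w)

def biweight_midvar(x, c=9.0):
    M = np.median(x)
    mad = np.median(np.abs(x - M))
    u = (x - M) / (c * mad)
    keep = np.abs(u) < 1
    num = len(x) * np.sum(((x - M) ** 2 * (1 - u**2) ** 4)[keep])
    den = np.sum(((1 - u**2) * (1 - 5 * u**2))[keep]) ** 2
    return num / den

rng = np.random.default_rng(5)
a, b = rng.standard_t(3, 40) + 0.5, rng.standard_t(3, 50)
t_bi = (biweight_loc(a) - biweight_loc(b)) / np.sqrt(
    biweight_midvar(a) / len(a) + biweight_midvar(b) / len(b))
print(t_bi)
```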

10.
Analyses of carcinogenicity experiments involving occult (hidden) tumours are usually based on cause-of-death information or the results of many interim sacrifices. A simple compartmental model is described that does not involve the cause of death. The method of analysis requires only one interim sacrifice, in addition to the usual terminal kill, to ensure that the tumour incidence rates can be estimated. One advantage of the approach is demonstrated in the analysis of glomerulosclerosis following exposure to ionizing radiation. Although the semiparametric model involves fewer parameters, estimates of key functions derived in this analysis are similar to those obtained previously by using a nonparametric method that involves many more parameters.

11.
In observational studies of the interaction between exposures on a dichotomous outcome, usually one parameter of a regression model is used to describe the interaction, leading to a single measure of it. In this article we use the conditional risk of the outcome given exposures and covariates to describe the interaction and obtain five different measures: the difference between the marginal risk differences, the ratio of the marginal risk ratios, the ratio of the marginal odds ratios, the ratio of the conditional risk ratios, and the ratio of the conditional odds ratios. These measures reflect different aspects of the interaction. Using only one regression model for the conditional risk, we obtain maximum-likelihood (ML) based point and interval estimates of these measures, which are efficient by the nature of ML: the ML estimates of the model parameters yield ML estimates of the measures, and the approximate normal distribution of the parameter estimates yields approximate (non-normal) distributions of the measures and hence confidence intervals. The method can be easily implemented and is presented via a medical example.
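A sketch of the standardization calculation from a single logistic model, with hypothetical variable names and simulated data; four of the five measures are shown (the ratio of conditional risk ratios would additionally need conditional risks at chosen covariate values).

```python
# One logistic model with an interaction term; marginal risks by averaging
# predicted risks over the sample covariates (g-computation).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 5000
df = pd.DataFrame({
    "x1": rng.integers(0, 2, n), "x2": rng.integers(0, 2, n),
    "age": rng.normal(50, 10, n)})
lin = -2 + 0.6 * df.x1 + 0.4 * df.x2 + 0.5 * df.x1 * df.x2 + 0.02 * (df.age - 50)
df["y"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

fit = smf.logit("y ~ x1 * x2 + age", data=df).fit(disp=0)

def marg_risk(a, b):        # standardized risk with exposures set to (a, b)
    return fit.predict(df.assign(x1=a, x2=b)).mean()

p00, p10, p01, p11 = marg_risk(0, 0), marg_risk(1, 0), marg_risk(0, 1), marg_risk(1, 1)
odds = lambda p: p / (1 - p)
print("diff of marginal RDs:", (p11 - p01) - (p10 - p00))
print("ratio of marginal RRs:", (p11 / p01) / (p10 / p00))
print("ratio of marginal ORs:", (odds(p11) / odds(p01)) / (odds(p10) / odds(p00)))
print("ratio of conditional ORs:", np.exp(fit.params["x1:x2"]))
```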

12.
Building on the Feldstein-Horioka puzzle and related research, this paper uses Chinese provincial data to analyse capital mobility across regions by specifying, estimating, and testing panel cointegration models. The results show that capital mobility is highest in eastern China, intermediate in the central region, and lowest in the west; provinces in the east are mainly net capital importers, while those in the west are mainly net capital exporters. The degree of free capital mobility should therefore be raised, in particular by channelling capital toward the central and western regions, so as to accelerate the integration of the Chinese economy and narrow regional gaps in economic development.

13.
We consider a sequence of contingency tables whose cell probabilities may vary randomly. The distribution of cell probabilities is modelled by a Dirichlet distribution. Bayes and empirical Bayes estimates of the log odds ratio are obtained. Emphasis is placed on estimating the risks associated with the Bayes, empirical Bayes and maximum likelihood estimates of the log odds ratio.
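A minimal sketch of the Bayes part: with a Dirichlet prior on the 2×2 cell probabilities the posterior is again Dirichlet, so the posterior of the log odds ratio can be summarized by simulation (counts and prior below are hypothetical).

```python
# Dirichlet posterior for 2x2 cell probabilities; simulate the log odds ratio.
import numpy as np

counts = np.array([30, 10, 15, 25])          # cells (11, 12, 21, 22), hypothetical
prior = np.ones(4)                           # symmetric Dirichlet prior
rng = np.random.default_rng(7)

p = rng.dirichlet(prior + counts, size=100_000)
log_or = np.log(p[:, 0] * p[:, 3] / (p[:, 1] * p[:, 2]))
print(log_or.mean(), np.percentile(log_or, [2.5, 97.5]))
```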

14.
Nonlinear mixed-effects models are being widely used for the analysis of longitudinal data, especially from pharmaceutical research. They use random effects which are latent and unobservable variables so the random-effects distribution is subject to misspecification in practice. In this paper, we first study the consequences of misspecifying the random-effects distribution in nonlinear mixed-effects models. Our study is focused on Gauss-Hermite quadrature, which is now the routine method for calculation of the marginal likelihood in mixed models. We then present a formal diagnostic test to check the appropriateness of the assumed random-effects distribution in nonlinear mixed-effects models, which is very useful for real data analysis. Our findings show that the estimates of fixed-effects parameters in nonlinear mixed-effects models are generally robust to deviations from normality of the random-effects distribution, but the estimates of variance components are very sensitive to the distributional assumption of random effects. Furthermore, a misspecified random-effects distribution will either overestimate or underestimate the predictions of random effects. We illustrate the results using a real data application from an intensive pharmacokinetic study.
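A sketch of the Gauss-Hermite step for one subject in a random-intercept logistic model, assuming b_i ~ N(0, σ²); the data and parameter values are hypothetical.

```python
# Gauss-Hermite quadrature replaces the integral over the random intercept
# with a weighted sum over transformed nodes: int f(b) dN(b;0,s^2)
#   ~ (1/sqrt(pi)) * sum_q w_q f(sqrt(2)*s*x_q).
import numpy as np
from scipy.special import expit

nodes, weights = np.polynomial.hermite.hermgauss(20)   # 20 quadrature points

def marginal_loglik(y_i, x_i, beta, sigma):
    """Log marginal likelihood of one subject's binary responses."""
    b = np.sqrt(2.0) * sigma * nodes                   # change of variables
    eta = x_i[:, None] * beta + b[None, :]             # n_j x Q linear predictors
    p = expit(eta)
    lik_b = np.prod(np.where(y_i[:, None] == 1, p, 1 - p), axis=0)
    return np.log(np.sum(weights * lik_b) / np.sqrt(np.pi))

y_i = np.array([1, 0, 1, 1])
x_i = np.array([0.5, -1.0, 0.2, 1.3])
print(marginal_loglik(y_i, x_i, beta=0.8, sigma=1.2))
```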

15.
We propose a model of transitions into and out of low paid employment that accounts for non-ignorable panel dropout, employment retention and base year low pay status ('initial conditions'). The model is fitted to data for men from the British Household Panel Survey. Initial conditions and employment retention are found to be non-ignorable selection processes. Whether panel dropout is found to be ignorable depends on how item non-response on pay is treated. Notwithstanding these results, we also find that models incorporating a simpler approach to accounting for non-ignorable selections provide estimates of covariate effects that differ very little from the estimates from the general model.

16.
Typically, regression analysis for multistate models has been based on regression models for the transition intensities. These models lead to highly nonlinear and very complex models for the effects of covariates on state occupation probabilities. We present a technique that models the state occupation or transition probabilities in a multistate model directly. The method is based on the pseudo-values from a jackknife statistic constructed from non-parametric estimators for the probability in question. These pseudo-values are used as outcome variables in a generalized estimating equation to obtain estimates of model parameters. We examine this approach and its properties in detail for two special multistate model probabilities, the cumulative incidence function in competing risks and the current leukaemia-free survival used in bone marrow transplants. The latter is the probability a patient is alive and in either a first or second post-transplant remission. The techniques are illustrated on a dataset of leukaemia patients given a marrow transplant. We also discuss extensions of the model that are of current research interest.
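A sketch of the pseudo-value construction for a survival probability at a fixed time, with ordinary least squares (robust standard errors) standing in for the GEE fit; data are simulated and the censoring scheme is an assumption.

```python
# Leave-one-out Kaplan-Meier estimates give pseudo-observations, which are
# then regressed on covariates.
import numpy as np
import statsmodels.api as sm

def km_surv(time, event, t0):
    """Kaplan-Meier survival estimate at t0."""
    s = 1.0
    for u in np.unique(time[event == 1]):
        if u > t0:
            break
        at_risk = np.sum(time >= u)
        d = np.sum((time == u) & (event == 1))
        s *= 1 - d / at_risk
    return s

rng = np.random.default_rng(8)
n = 200
x = rng.normal(size=n)
time = rng.exponential(np.exp(-0.5 * x))
event = (rng.uniform(size=n) < 0.8).astype(int)      # ~20% random censoring
t0 = np.median(time)

s_all = km_surv(time, event, t0)
mask = np.ones(n, dtype=bool)
pseudo = np.empty(n)
for i in range(n):
    mask[i] = False
    pseudo[i] = n * s_all - (n - 1) * km_surv(time[mask], event[mask], t0)
    mask[i] = True

fit = sm.OLS(pseudo, sm.add_constant(x)).fit(cov_type="HC1")  # OLS ~ GEE with identity link
print(fit.params, fit.bse)
```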

17.
We address the problem of estimating the proportions of two statistical populations in a given mixture on the basis of an unlabeled sample of n-dimensional observations on the mixture. Assuming that the expected values of observations on the two populations are known, we show that almost any linear map from R^n to R^1 yields an unbiased consistent estimate of the proportion of one population in a very easy way. We then find that linear map for which the resulting proportion estimate has minimum variance among all estimates so obtained. After deriving a simple expression for the minimum-variance estimate, we discuss practical aspects of obtaining this and related estimates.
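A sketch of the estimator, assuming a common known covariance Σ for the minimum-variance direction (in practice Σ would be estimated): any linear map a gives p̂ = (a'x̄ - a'μ₂)/(a'μ₁ - a'μ₂), and taking a ∝ Σ⁻¹(μ₁ - μ₂) minimizes its variance.

```python
# Proportion estimate from a linear map of the sample mean of a mixture.
import numpy as np

rng = np.random.default_rng(9)
mu1, mu2 = np.array([1.0, 2.0]), np.array([-1.0, 0.0])
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
p_true = 0.3

z = rng.uniform(size=10_000) < p_true
X = np.where(z[:, None],
             rng.multivariate_normal(mu1, Sigma, 10_000),
             rng.multivariate_normal(mu2, Sigma, 10_000))

a = np.linalg.solve(Sigma, mu1 - mu2)        # minimum-variance direction
p_hat = (a @ (X.mean(axis=0) - mu2)) / (a @ (mu1 - mu2))
print(p_hat)
```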

18.
Generalized linear mixed models (GLMM) are commonly used to model the treatment effect over time while controlling for important clinical covariates. Standard software procedures often provide estimates of the outcome based on the mean of the covariates; however, these estimates will be biased for the true group means in the GLMM. Implementing GLMM in the frequentist framework can also lead to issues of convergence. We present a simulation study demonstrating the use of fully Bayesian GLMM for providing unbiased estimates of group means. These models are very straightforward to implement and can be used for a broad variety of outcomes (e.g., binary, categorical, and count data) that arise in clinical trials. We demonstrate the proposed method on a data set from a clinical trial in diabetes.
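A quick numeric illustration of the bias the paper targets: with a nonlinear inverse link such as the logit, the inverse link evaluated at the mean linear predictor differs from the mean of the inverse link across subjects.

```python
# Jensen-type gap: expit(mean(eta)) != mean(expit(eta)) when eta varies.
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(10)
eta = -1.0 + rng.normal(0, 1.5, 100_000)     # subject-level linear predictors
print(expit(eta.mean()))                     # estimate at the mean covariate
print(expit(eta).mean())                     # true group mean response
```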

19.
We discuss maximum likelihood and estimating equations methods for combining results from multiple studies in pooling projects and data consortia using a meta-analysis model, when the multivariate estimates with their covariance matrices are available. The estimates to be combined are typically regression slopes, often from relative risk models in biomedical and epidemiologic applications. We generalize the existing univariate meta-analysis model and investigate the efficiency advantages of the multivariate methods, relative to the univariate ones. We generalize a popular univariate test for between-studies homogeneity to a multivariate test. The methods are applied to a pooled analysis of type of carotenoids in relation to lung cancer incidence from seven prospective studies. In these data, the expected gain in efficiency was evident, sometimes to a large extent. Finally, we study the finite sample properties of the estimators and compare the multivariate ones to their univariate counterparts.
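A sketch of fixed-effect multivariate pooling and a multivariate homogeneity statistic, with hypothetical study estimates (the paper also covers likelihood and estimating-equations versions with between-studies heterogeneity).

```python
# GLS pooling of study-level slope vectors b_i with weights V_i^{-1}, plus a
# multivariate Q statistic for between-studies homogeneity.
import numpy as np
from scipy.stats import chi2

# Hypothetical estimates (k studies, p parameters) and their covariances.
b = [np.array([0.20, -0.10]), np.array([0.35, -0.02]), np.array([0.15, -0.12])]
V = [np.diag([0.01, 0.02]), np.diag([0.02, 0.02]), np.diag([0.015, 0.01])]

W = [np.linalg.inv(v) for v in V]
Sw = np.linalg.inv(sum(W))                   # covariance of the pooled estimate
beta = Sw @ sum(w @ bi for w, bi in zip(W, b))

Q = sum((bi - beta) @ w @ (bi - beta) for bi, w in zip(b, W))
dof = (len(b) - 1) * len(beta)
print(beta, Q, 1 - chi2.cdf(Q, dof))
```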

20.
The problem of estimating the mean average total cost of each output for multiproduct firms in an industry is addressed. The identity that defines total cost for each firm as the product of output levels multiplied by their respective average total costs is viewed as a random-coefficients model. A random coefficients regression estimator is used to estimate mean average total output costs. Solutions to problems arising with this method in empirical studies are discussed. An application of the approach to data from cash grain farms in Illinois shows that the method gives reliable estimates for per-unit output production costs with considerably fewer data requirements than current methods of cost estimation.
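A sketch of the random-coefficients reading of the cost identity on simulated data; robust OLS standard errors are used here where the paper's estimator takes a feasible-GLS step, so this is an illustration of the identity rather than the full method.

```python
# TC_i = sum_k y_ik * c_ik with random unit costs c_ik: regressing total cost
# on output quantities with no intercept estimates the mean per-unit costs.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n, gamma = 400, np.array([3.0, 5.0])                  # mean per-unit costs
Y = rng.uniform(10, 100, size=(n, 2))                 # output levels (two products)
C = gamma + rng.normal(0, [0.5, 0.8], size=(n, 2))    # firm-specific unit costs
tc = np.sum(Y * C, axis=1)                            # total cost identity

fit = sm.OLS(tc, Y).fit(cov_type="HC1")               # errors are heteroskedastic
print(fit.params)                                     # estimates of mean unit costs
```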
