Similar Documents
20 similar documents found (search time: 921 ms).
1.
王华  郭红丽 《统计研究》2011,28(12):29-35
A statistical-user satisfaction survey was conducted to measure users' perceived overall quality of the various statistical data products produced by government statistical agencies, as well as their perceived quality through the main release channels. Analysis of the survey data shows that perceived overall quality differs markedly across data products, that perceived quality is positively correlated with frequency of use, and that the quality of individual products also varies across release channels. These findings help identify the key priorities for statistical data quality management.

2.
This article presents an empirical analysis of firms' order backlogs, inventories, production, and price adjustments to unanticipated demand shocks. The data are obtained from quarterly INSEE Business Survey Tests on firms' realizations, expectations, and appraisals of various economic variables. The analysis is based on the formulation and estimation of a recursive system of conditional log-linear probability models.

3.
SEMIFAR forecasts, with applications to foreign exchange rates
SEMIFAR models introduced in Beran (1997, Estimating trends, long-range dependence and nonstationarity, preprint) provide a semiparametric modelling framework that enables the data analyst to separate deterministic and stochastic trends as well as short- and long-memory components in an observed time series. A correct distinction between these components, and in particular the decision as to which components may be present in the data, has an important impact on forecasts. In this paper, forecasts and forecast intervals for SEMIFAR models are obtained. The forecasts are based on an extrapolation of the nonparametric trend function and optimal forecasts of the stochastic component. In the data analytical part of the paper, the proposed method is applied to foreign exchange rates from Europe and Asia.

4.
In the frailty Cox model, frequentist approaches often present problems of numerical resolution, convergence, and variance calculation. The Bayesian approach offers an alternative. The goal of this study was to compare, using real (calf gastroenteritis) and simulated data, the results obtained with the MCMC method used in the Bayesian approach versus two frequentist approaches: the Newton–Raphson algorithm to solve a penalized likelihood and the EM algorithm. The results obtained showed that when the number of groups in the population decreases, the Bayesian approach gives a less biased estimation of the frailty variance and of the group fixed effect than the frequentist approaches.

5.
This article describes a convenient method of selecting Metropolis–Hastings proposal distributions for multinomial logit models. There are two key ideas involved. The first is that multinomial logit models have a latent variable representation similar to that exploited by Albert and Chib (J Am Stat Assoc 88:669–679, 1993) for probit regression. Augmenting the latent variables replaces the multinomial logit likelihood function with the complete data likelihood for a linear model with extreme value errors. While no conjugate prior is available for this model, a least squares estimate of the parameters is easily obtained. The asymptotic sampling distribution of the least squares estimate is Gaussian with known variance. The second key idea in this paper is to generate a Metropolis–Hastings proposal distribution by conditioning on the estimator instead of the full data set. The resulting sampler has many of the benefits of so-called tailored or approximation Metropolis–Hastings samplers. However, because the proposal distributions are available in closed form, they can be implemented without numerical methods for exploring the posterior distribution. The algorithm is geometrically ergodic, its computational burden is minor, and it requires minimal user input. Improvements to the sampler's mixing rate are investigated. The algorithm is also applied to partial credit models describing ordinal item response data from the 1998 National Assessment of Educational Progress. Its applications to hierarchical models and Poisson regression are briefly discussed.
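As a much simpler stand-in for the tailored latent-variable proposal described in this abstract, the sketch below runs a plain random-walk Metropolis sampler on a toy one-parameter logistic posterior; the data, prior, and step size are all assumptions made for illustration, not details from the article.

```python
import math
import random

random.seed(11)

# Toy data for a one-parameter logistic regression (assumed for this sketch).
x = [-2.0, -1.0, 0.0, 1.0, 2.0]
y = [0, 0, 0, 1, 1]

def log_post(b):
    # Bernoulli log-likelihood plus an assumed N(0, 10^2) prior on the slope.
    ll = sum(yi * b * xi - math.log(1.0 + math.exp(b * xi))
             for xi, yi in zip(x, y))
    return ll - b * b / 200.0

b, lp, chain = 0.0, log_post(0.0), []
for _ in range(20000):
    prop = b + random.gauss(0.0, 1.0)        # plain random-walk proposal
    lp_prop = log_post(prop)
    if math.log(random.random()) < lp_prop - lp:
        b, lp = prop, lp_prop                # accept
    chain.append(b)

post_mean = sum(chain[2000:]) / len(chain[2000:])  # slope is clearly positive
```

The tailored proposals of the article would replace the random-walk step with a closed-form approximation to the posterior, improving the acceptance rate and mixing.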

6.
Recently, Perron has carried out tests of the unit-root hypothesis against the alternative hypothesis of trend stationarity with a break in the trend occurring at the Great Crash of 1929 or at the 1973 oil-price shock. His analysis covers the Nelson–Plosser macroeconomic data series as well as a postwar quarterly real gross national product (GNP) series. His tests reject the unit-root null hypothesis for most of the series. This article takes issue with the assumption used by Perron that the Great Crash and the oil-price shock can be treated as exogenous events. A variation of Perron's test is considered in which the breakpoint is estimated rather than fixed. We argue that this test is more appropriate than Perron's because it circumvents the problem of data mining. The asymptotic distribution of the estimated-breakpoint test statistic is determined, and the data series considered by Perron are reanalyzed using this test statistic. The empirical results make use of the asymptotics developed for the test statistic as well as extensive finite-sample corrections obtained by simulation. The effect of fat-tailed and temporally dependent innovations on the empirical results is also investigated. In brief, by treating the breakpoint as endogenous, we find that there is less evidence against the unit-root hypothesis than Perron finds for many of the data series, but stronger evidence against it for several of the series, including the Nelson–Plosser industrial-production, nominal-GNP, and real-GNP series.

7.
We derive asymptotic expansions for the nonnull distribution functions of the likelihood ratio, Wald, score, and gradient test statistics in the class of dispersion models, under a sequence of Pitman alternatives. The asymptotic distributions of these statistics are obtained for testing a subset of regression parameters and for testing the precision parameter. Based on these nonnull asymptotic expansions, the powers of all four tests, which are equivalent to first order, are compared. Furthermore, in order to compare the finite-sample performance of these tests in this class of models, Monte Carlo simulations are presented. An empirical application to a real data set is considered for illustrative purposes.

8.
Consider a two-by-two factorial experiment with more than one replicate. Suppose that we have uncertain prior information that the two-factor interaction is zero. We describe new simultaneous frequentist confidence intervals for the four population cell means, with simultaneous confidence coefficient 1 − α, that utilize this prior information in the following sense. These simultaneous confidence intervals define a cube with expected volume that (a) is relatively small when the two-factor interaction is zero and (b) has maximum value that is not too large. Also, these intervals coincide with the standard simultaneous confidence intervals obtained by Tukey's method, with simultaneous confidence coefficient 1 − α, when the data strongly contradict the prior information that the two-factor interaction is zero. We illustrate the application of these new simultaneous confidence intervals to a real data set.

9.

This paper deals with the problem of local sensitivity analysis in regression, i.e., how sensitive the results of a regression model (objective function, parameters, and dual variables) are to changes in the data. We use a general formula for local sensitivities in optimization problems to calculate the sensitivities in three standard regression problems (least squares, minimax, and least absolute values). Closed formulas for all sensitivities are derived. Sensitivity contours are presented to help in assessing the sensitivity of each observation in the sample. The dual problems of the minimax and least absolute values are obtained and interpreted. The proposed sensitivity measures are shown to deal more effectively with the masking problem than the existing methods. The methods are illustrated by their application to some examples and graphical illustrations are given.
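For the least-squares case, the sensitivity of the coefficient estimates to the responses has the well-known closed form ∂β̂/∂y = (X'X)⁻¹X'. A minimal numerical check of that formula (the design matrix and noise level below are assumptions, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # assumed design
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.1, size=n)

# Closed-form local sensitivity of the OLS estimate to the responses:
# d beta_hat / d y = (X'X)^{-1} X', a p x n matrix.
S = np.linalg.solve(X.T @ X, X.T)

# Finite-difference check: perturb one response and compare columns.
beta = np.linalg.lstsq(X, y, rcond=None)[0]
eps = 1e-6
y_pert = y.copy()
y_pert[3] += eps
fd = (np.linalg.lstsq(X, y_pert, rcond=None)[0] - beta) / eps
```

Because OLS is linear in y, the finite difference matches the corresponding column of S up to rounding; the minimax and least-absolute-values cases treated in the paper need the general optimization-sensitivity formula instead.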

10.
Multi-state models help predict future numbers of patients requiring specific treatments, but these models require exhaustive incidence data. Deriving reliable predictions from repeated-prevalence data would be helpful. A new method to model the number of patients that switch between therapeutic modalities using repeated-prevalence data is presented and illustrated. The parameters and goodness of fit obtained with the new method and repeated-prevalence data were compared to those obtained with the classical method and incidence data. The multi-state model parameters' confidence intervals obtained with annually collected repeated-prevalence data were wider than those obtained with incidence data, and six out of nine pairs of confidence intervals did not overlap. However, most parameters were of the same order of magnitude, and the predicted patient distributions among various renal replacement therapies were similar regardless of the type of data used. In the absence of incidence data, a multi-state model can still be successfully built with annually collected repeated-prevalence data to predict the numbers of patients requiring specific treatments. This modeling technique can be extended to other chronic diseases.

11.
The construction of a joint model for mixed discrete and continuous random variables that accounts for their associations is an important statistical problem in many practical applications. In this paper, we use copulas to construct a class of joint distributions of mixed discrete and continuous random variables. In particular, we employ the Gaussian copula to generate joint distributions for mixed variables. Examples include the robit-normal and probit-normal-exponential distributions, the first for modelling the distribution of mixed binary-continuous data and the second for a mixture of continuous, binary and trichotomous variables. The new class of joint distributions is general enough to include many mixed-data models currently available. We study properties of the distributions and outline likelihood estimation; a small simulation study is used to investigate the finite-sample properties of estimates obtained by full and pairwise likelihood methods. Finally, we present an application to discriminant analysis of multiple correlated binary and continuous data from a study involving advanced breast cancer patients.
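A minimal sketch of the Gaussian-copula construction for one continuous and one binary margin: sample a correlated bivariate normal, transform each coordinate to a uniform, and then apply the desired marginal quantile functions. The correlation and the particular margins below are assumptions for illustration, not the models fitted in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rho = 0.6                                   # assumed latent Gaussian correlation
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(np.zeros(2), cov, size=50_000)
u = stats.norm.cdf(z)                       # uniform margins via the probability transform

x_cont = stats.expon.ppf(u[:, 0], scale=2.0)  # continuous margin: Exponential(mean 2)
x_bin = (u[:, 1] > 0.7).astype(int)           # binary margin: Bernoulli(0.3)
```

The margins are preserved exactly while the Gaussian copula induces dependence between the continuous and binary coordinates, which is the mechanism the paper exploits for mixed data.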

12.
We propose a new class of continuous distributions with two extra shape parameters named the generalized odd log-logistic family of distributions. The proposed family contains as special cases the proportional reversed hazard rate and odd log-logistic classes. Its density function can be expressed as a linear combination of exponentiated densities based on the same baseline distribution. Some of its mathematical properties including ordinary moments, quantile and generating functions, two entropy measures and order statistics are obtained. We derive a power series for the quantile function. We discuss the method of maximum likelihood to estimate the model parameters. We study the behaviour of the estimators by means of Monte Carlo simulations. We introduce the log-odd log-logistic Weibull regression model with censored data based on the odd log-logistic Weibull distribution. The importance of the new family is illustrated using three real data sets. These applications indicate that this family can provide better fits than other well-known classes of distributions. The beauty and importance of the proposed family lie in its ability to model different types of real data.
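As a hedged illustration of the odds construction behind such families, the one-parameter odd log-logistic special case maps any baseline CDF G to F = Gᵃ / (Gᵃ + (1 − G)ᵃ), with a = 1 recovering the baseline; the generalized family of the abstract adds a second shape parameter to this map. The Weibull baseline and its parameters below are assumptions for the sketch.

```python
import numpy as np

def oll_cdf(x, a, baseline_cdf):
    """One-parameter odd log-logistic transform of a baseline CDF G:
    F(x) = G^a / (G^a + (1 - G)^a).  a = 1 recovers G."""
    g = baseline_cdf(x)
    return g**a / (g**a + (1.0 - g)**a)

# Assumed Weibull baseline with scale 2 and shape 1.5.
weibull = lambda x: 1.0 - np.exp(-(x / 2.0) ** 1.5)

x = np.linspace(0.0, 20.0, 2000)
F = oll_cdf(x, a=0.5, baseline_cdf=weibull)   # a valid, monotone CDF
```

The transform keeps the support of the baseline while reshaping its hazard, which is what lets the family fit data the baseline alone cannot.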

13.
The binomial thinning operator plays a major role in modeling one-dimensional integer-valued autoregressive time series models. The purpose of this article is to extend the use of this operator to define a new stationary first-order spatial nonnegative integer-valued autoregressive model, SINAR(1, 1). We study some properties of this model, such as the mean, variance, and autocorrelation function. The Yule–Walker estimator of the model parameters is also obtained. Some numerical results for the model are presented and, moreover, the model is applied to a real data set.
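The binomial thinning operator is α∘X = the sum of X independent Bernoulli(α) draws. As a hedged sketch, the simulation below runs the one-dimensional INAR(1) recursion Xₜ = α∘Xₜ₋₁ + εₜ with Poisson innovations; the spatial SINAR(1, 1) model of the abstract thins lattice neighbours instead, and the parameter values are assumptions.

```python
import math
import random

random.seed(42)

def thin(x, alpha):
    """Binomial thinning: alpha∘x = sum of x independent Bernoulli(alpha) draws."""
    return sum(random.random() < alpha for _ in range(x))

def poisson(lam):
    """Knuth's Poisson sampler (adequate for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

alpha, lam, T = 0.5, 2.0, 20000
x, path = 0, []
for _ in range(T):
    x = thin(x, alpha) + poisson(lam)   # X_t = alpha∘X_{t-1} + eps_t
    path.append(x)

mean = sum(path) / T   # stationary mean is lam / (1 - alpha) = 4
```

Thinning keeps the state integer-valued, which is why it replaces the scalar multiplication of a Gaussian AR(1) in integer-valued models.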

14.
15.
A compound class of zero-truncated Poisson and lifetime distributions is introduced. A specialization leads to a new three-parameter distribution, called the doubly Poisson-exponential distribution, which may represent the lifetime of units connected in a series-parallel system. The new distribution can be obtained by compounding two zero-truncated Poisson distributions with an exponential distribution. Among its motivations is that its hazard rate function can take different shapes, such as decreasing, increasing, and upside-down bathtub, depending on the values of its parameters. Several properties of the new distribution are discussed. Based on progressive type-II censoring, six estimation methods [maximum likelihood, moments, least squares, weighted least squares, and Bayes estimation under linear-exponential and general entropy loss functions] are used to estimate the involved parameters. The performance of these methods is investigated through a simulation study. The Bayes estimates are obtained using a Markov chain Monte Carlo algorithm. In addition, confidence intervals, symmetric credible intervals, and highest posterior density credible intervals of the parameters are obtained. Finally, an application to a real data set is used to compare the new distribution with five other distributions.
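One plausible reading of the series-parallel representation, offered purely as an assumption: a zero-truncated Poisson number of parallel blocks connected in series, each block containing a zero-truncated Poisson number of exponential components, so the system lifetime is a min of maxes. The exact compounding in the article may differ.

```python
import math
import random

random.seed(7)

def zt_poisson(lam):
    """Zero-truncated Poisson via rejection of zeros (fine for moderate lam)."""
    while True:
        L, k, p = math.exp(-lam), 0, 1.0   # Knuth's Poisson sampler
        while True:
            p *= random.random()
            if p <= L:
                break
            k += 1
        if k > 0:
            return k

def system_lifetime(lam1, lam2, theta):
    """T = min over N series blocks of the max of M_i Exp(theta) component
    lifetimes, with N ~ ZTP(lam1) and each M_i ~ ZTP(lam2).  (Assumed
    structure for the series-parallel reading.)"""
    n = zt_poisson(lam1)
    return min(
        max(random.expovariate(1.0 / theta) for _ in range(zt_poisson(lam2)))
        for _ in range(n)
    )

samples = [system_lifetime(2.0, 3.0, 1.0) for _ in range(5000)]
```

Zero truncation matters here: it guarantees at least one block and at least one component per block, so the lifetime is always well defined and strictly positive.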

16.
Variable selection in the presence of outliers may be performed by using a robust version of Akaike's information criterion (AIC). In this paper, explicit expressions are obtained for such criteria when S- and MM-estimators are used. The performance of these criteria is compared with the existing AIC based on M-estimators and with the classical non-robust AIC. In a simulation study and in data examples, we observe that the proposed AIC with S- and MM-estimators selects more appropriate models when outliers are present.

17.
In this article, we consider the problem of model selection/discrimination among three different positively skewed lifetime distributions. All three distributions, namely the Weibull, log-normal, and log-logistic, have been used quite effectively to analyze positively skewed lifetime data. We use three different methods to discriminate among these distributions. We use the maximized likelihood method to choose the correct model and compute the asymptotic probability of correct selection. We further obtain the Fisher information matrices of these three distributions and compare them for complete and censored observations. These measures can be used to discriminate among the three distributions. We also propose using the Kolmogorov–Smirnov distance to choose the correct model. Extensive simulations have been performed to compare the performances of the three methods. It is observed that each method performs better than the other two for some distributions and for certain ranges of parameters. Further, the loss of information due to censoring is compared for the three distributions. The analysis of a real dataset is performed for illustrative purposes.
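A minimal sketch of the Kolmogorov–Smirnov selection step: fit each candidate family and pick the one with the smallest KS distance. It assumes SciPy's `weibull_min`, `lognorm`, and `fisk` (SciPy's name for the log-logistic) and uses synthetic log-normal "lifetimes"; the fitting details of the article may differ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = rng.lognormal(mean=1.0, sigma=0.8, size=2000)   # assumed synthetic lifetimes

candidates = {
    "weibull": stats.weibull_min,
    "log-normal": stats.lognorm,
    "log-logistic": stats.fisk,    # SciPy's log-logistic distribution
}

ks = {}
for name, dist in candidates.items():
    params = dist.fit(data, floc=0)                    # fix location at 0 for lifetimes
    ks[name] = stats.kstest(data, dist.cdf, args=params).statistic

best = min(ks, key=ks.get)   # smallest KS distance wins
```

Because all three families are positively skewed and can look very similar, the KS distances are often close, which is exactly why the article also studies the probability of correct selection.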

18.
A bivariate generalisation of Consul's (1974) quasi-binomial distribution (QBD) is obtained with the help of an urn model for explaining data arising from four-fold sampling. The distribution is expected to cover a very wide range of situations in four-fold sampling. The first- and second-order moments of the distribution are obtained. As an illustration, the distribution is fitted to an observed data set, and its limiting form is also derived.

19.
In this paper, we introduce a new two-parameter generalization of the Bilal distribution. We show that its failure rate function can be upside-down bathtub shaped; the failure rate can also be decreasing or increasing. A comprehensive mathematical treatment of the new distribution is provided. Estimation by maximum likelihood is discussed, and a closed-form expression for Fisher's information matrix is obtained. Asymptotic interval estimators for both unknown parameters are also given. A simulation study is conducted and applications to real data sets are presented.

20.
A new method of statistical classification (discrimination) is proposed. The method is most effective for high dimension, low sample size data. It uses a robust mean difference as the direction vector and locates the classification boundary by minimizing the error rates. Asymptotic results for assessment and comparison to several popular methods are obtained by using a type of asymptotics of finite sample size and infinite dimensions. The value of the proposed approach is demonstrated by simulations. Real data examples are used to illustrate the performance of different classification methods.
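A hedged sketch of the idea: take a robust mean difference (coordinate-wise medians here; the paper's exact robust estimator may differ) as the direction vector, project the data onto it, and place the boundary at the cut that minimises training error. The simulated high-dimension, low-sample-size data are an assumption.

```python
import numpy as np

rng = np.random.default_rng(5)
d, n = 500, 20                               # high dimension, low sample size
X0 = rng.normal(0.0, 1.0, size=(n, d))       # class 0
X1 = rng.normal(0.6, 1.0, size=(n, d))       # class 1, shifted in every coordinate

# Direction vector: difference of coordinate-wise medians (a robust mean difference).
w = np.median(X1, axis=0) - np.median(X0, axis=0)
w /= np.linalg.norm(w)

s0, s1 = X0 @ w, X1 @ w                      # projected scores
# Locate the boundary by minimising training error over candidate cuts.
cuts = np.sort(np.concatenate([s0, s1]))
errors = [np.mean(s0 >= c) + np.mean(s1 < c) for c in cuts]
cut = cuts[int(np.argmin(errors))]

train_err = (np.mean(s0 >= cut) + np.mean(s1 < cut)) / 2
```

Even a small per-coordinate shift accumulates across hundreds of dimensions, so the projected classes separate cleanly, which is the HDLSS effect the asymptotics of the paper formalise.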
