Similar Articles
20 similar articles found (search time: 31 ms).
1.
In this article, an integer-valued self-exciting threshold model with a finite range based on the binomial INARCH(1) model is proposed. Important stochastic properties are derived, and approaches for parameter estimation are discussed. A real-data example about the regional spread of public drunkenness in Pittsburgh demonstrates the applicability of the new model in comparison to existing models. Feasible modifications of the model are presented, which are designed to handle special features such as zero-inflation.
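To make the threshold mechanism concrete: the conditional distribution is Bin(n, π_t), and the intercept/slope pair defining π_t switches according to whether the previous count exceeds a threshold R. The function name, parameter values and exact parametrisation below are illustrative assumptions, not the article's specification.

```python
import random

def simulate_threshold_binarch(T, n, below=(0.15, 0.30), above=(0.05, 0.50), R=5, seed=1):
    """Sketch of a self-exciting threshold binomial INARCH(1):
    X_t | X_{t-1} ~ Bin(n, pi_t) with pi_t = a + b * X_{t-1}/n, where the
    pair (a, b) switches at the threshold R (hypothetical parametrisation)."""
    rng = random.Random(seed)
    x, path = n // 2, []
    for _ in range(T):
        a, b = below if x <= R else above
        pi = min(max(a + b * x / n, 1e-6), 1 - 1e-6)  # keep pi inside (0, 1)
        x = sum(rng.random() < pi for _ in range(n))  # one Bin(n, pi) draw
        path.append(x)
    return path

path = simulate_threshold_binarch(T=500, n=10)
```

Because the support is {0, …, n}, the simulated path automatically has the finite range the abstract refers to.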

2.
In this paper, we present a fractional decomposition of the probability generating function of the innovation process of the first-order non-negative integer-valued autoregressive [INAR(1)] process to obtain the corresponding probability mass function. We also provide a comprehensive review of integer-valued time series models based on the concept of thinning operators with geometric-type marginals. In particular, we develop two fractional approaches to obtain the distribution of innovation processes of the INAR(1) model and show that the innovation sequence has a geometric-type distribution. These approaches are discussed in detail and illustrated through a few examples.
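For readers unfamiliar with thinning operators, the INAR(1) recursion X_t = α∘X_{t−1} + ε_t can be simulated directly; here the innovations are geometric on {0, 1, …}, an illustrative choice in the spirit of the geometric-type marginals discussed (all parameter values are assumptions):

```python
import random

def thin(alpha, x, rng):
    """Binomial thinning alpha∘x: a sum of x independent Bernoulli(alpha) draws."""
    return sum(rng.random() < alpha for _ in range(x))

def simulate_inar1(T, alpha=0.5, p=0.6, seed=7):
    """INAR(1): X_t = alpha∘X_{t-1} + eps_t, with geometric(p) innovations
    supported on {0, 1, 2, ...} (illustrative, not the paper's exact law)."""
    rng = random.Random(seed)
    x, path = 0, []
    for _ in range(T):
        eps = 0
        while rng.random() >= p:  # geometric draw: failures before a success
            eps += 1
        x = thin(alpha, x, rng) + eps
        path.append(x)
    return path

path = simulate_inar1(T=1000)
```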

3.
The Poisson distribution is a simple and popular model for count-data random variables, but it suffers from the equidispersion requirement, which is often not met in practice. While models for overdispersed counts have been discussed intensively in the literature, the opposite phenomenon, underdispersion, has received only little attention, especially in a time series context. We start with a detailed survey of distribution models allowing for underdispersion, discuss their properties and highlight possible disadvantages. Having identified two model families that combine attractive properties with only two model parameters, we embed these models in the INAR(1) model (integer-valued autoregressive), which is particularly well suited to obtaining autocorrelated counts with underdispersion. Properties of the resulting stationary INAR(1) models and approaches for parameter estimation are considered, as well as possible extensions to higher-order autoregressions. Three real-data examples illustrate the application of the models in practice.
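Underdispersion in an INAR(1) process is easy to exhibit numerically. With Bernoulli innovations (an illustrative choice, not one of the two families singled out in the abstract), the stationary marginal is a sum of independent thinned Bernoullis and therefore has dispersion index below one:

```python
import random

def thin(alpha, x, rng):
    """Binomial thinning alpha∘x."""
    return sum(rng.random() < alpha for _ in range(x))

def dispersion_index(xs):
    """Sample index of dispersion, variance/mean; values below 1 indicate
    underdispersion (the Poisson benchmark is exactly 1)."""
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return v / m

# INAR(1) with Bernoulli(q) innovations; parameter values are assumptions
rng = random.Random(8)
alpha, q = 0.5, 0.6
x, path = 0, []
for _ in range(5000):
    x = thin(alpha, x, rng) + (1 if rng.random() < q else 0)
    path.append(x)
idx = dispersion_index(path)
```

For these settings the theoretical index is 0.6, so the sample index should land well below the Poisson value of 1.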

4.
The accelerated failure time (AFT) models have proved useful in many contexts, though heavy censoring (as, for example, in cancer survival) and high dimensionality (as, for example, in microarray data) cause difficulties for model fitting and model selection. We propose new approaches to variable selection for censored data, based on AFT models optimized using regularized weighted least squares. The regularization uses a mixture of \(\ell _1\) and \(\ell _2\) norm penalties under two proposed elastic net type approaches: the adaptive elastic net and the weighted elastic net. These extend the original approaches proposed by Ghosh (Adaptive elastic net: an improvement of elastic net to achieve oracle properties, Technical Report, 2007) and Hong and Zhang (Math Model Nat Phenom 5(3):115–133, 2010), respectively. We further extend the two proposed approaches by adding censored observations as constraints in their model optimization frameworks. The approaches are evaluated on microarray data and by simulation. We compare the performance of these approaches with six other variable selection techniques: three that are generally used for censored data and three correlation-based greedy methods used for high-dimensional data.
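The elastic-net penalty itself can be made concrete with a small coordinate-descent solver for the plain, uncensored least-squares case; the adaptive/weighted variants and the censoring constraints of the proposed approaches are beyond this sketch, and all names and values below are illustrative:

```python
import random

def elastic_net_cd(X, y, lam=0.1, alpha=0.5, n_iter=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam*(alpha*||b||_1
    + (1-alpha)/2*||b||_2^2) -- the plain elastic net, without the adaptive
    weights or censoring constraints of the article's proposals."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # correlation of feature j with the partial residual
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j))
                      for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n + lam * (1 - alpha)
            # soft-thresholding update
            if rho > lam * alpha:
                b[j] = (rho - lam * alpha) / z
            elif rho < -lam * alpha:
                b[j] = (rho + lam * alpha) / z
            else:
                b[j] = 0.0
    return b

# toy data: only the first of two features is relevant
rng = random.Random(0)
X = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(100)]
y = [2.0 * a + 0.1 * rng.gauss(0, 1) for a, _ in X]
b = elastic_net_cd(X, y)
```

The ℓ1 part zeroes out the irrelevant coefficient, while the ℓ2 part stabilises the active one (shrunk slightly below the true value 2).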

5.
In this article, we introduce a new distribution for modeling positive data sets with high kurtosis, the modified slashed generalized exponential distribution. The new model can be seen as a modified version of the slashed generalized exponential distribution. It arises as a quotient of two independent random variables: a generalized exponential distribution in the numerator and a power of the exponential distribution in the denominator. We study various structural properties (such as the stochastic representation, density function, hazard rate function and moments) and discuss moment and maximum likelihood estimation approaches. Two real data sets illustrate the utility of the new model for analysing data with high kurtosis.
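One way to see the quotient construction in action is by direct simulation. The sketch below draws the numerator by inverting the generalized exponential CDF F(x) = (1 − e^{−λx})^α and divides by a power of an exponential draw; the exact modification used in the article may differ, so the parametrisation is an assumption:

```python
import math
import random

def rgen_exp(alpha, lam, rng):
    """Generalized exponential draw by inversion of F(x) = (1 - exp(-lam*x))**alpha."""
    u = rng.random()
    return -math.log(1.0 - u ** (1.0 / alpha)) / lam

def rslashed_ge(alpha=1.5, lam=1.0, q=2.0, size=1000, seed=3):
    """Slashed-type construction T = X / W: X generalized exponential in the
    numerator, W a power of an exponential variable in the denominator.
    A sketch of the idea only, not the paper's exact modified distribution."""
    rng = random.Random(seed)
    out = []
    for _ in range(size):
        x = rgen_exp(alpha, lam, rng)
        w = rng.expovariate(1.0) ** (1.0 / q)  # power of an Exp(1) draw
        out.append(x / w)
    return out

t = rslashed_ge()
```

Small denominators occasionally produce very large quotients, which is exactly the heavy-tail, high-kurtosis behaviour the model targets.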

6.
In this paper, we focus on Pitman closeness probabilities when the estimators are symmetrically distributed about the unknown parameter θ. We first consider two symmetric estimators θ̂1 and θ̂2 and obtain necessary and sufficient conditions for θ̂1 to be Pitman closer to the common median θ than θ̂2. We then establish some properties in the context of estimation under the Pitman closeness criterion. We define a Pitman closeness probability that measures the frequency with which an individual order statistic is Pitman closer to θ than some symmetric estimator. We show that, for symmetric populations, the sample median is Pitman closer to the population median than any other independent and symmetrically distributed estimator of θ. Finally, we discuss the use of Pitman closeness probabilities in the determination of an optimal ranked set sampling (RSS) scheme for the estimation of the population median when the underlying distribution is symmetric. We show that the best RSS scheme from symmetric populations in the sense of Pitman closeness is the median RSS for odd sample sizes and the randomized median RSS for even sample sizes.
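The criterion itself can be estimated by Monte Carlo: the Pitman closeness probability of θ̂1 over θ̂2 is P(|θ̂1 − θ| < |θ̂2 − θ|). The example below compares the sample median against the sample mean for a symmetric Laplace population; this is only an illustration of the criterion, whereas the article's results for order statistics and RSS are analytic:

```python
import random
import statistics

def pitman_closeness(est1, est2, sampler, theta=0.0, reps=4000, n=15, seed=11):
    """Monte-Carlo estimate of the Pitman closeness probability
    P(|est1 - theta| < |est2 - theta|) for samples drawn from `sampler`."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(reps):
        x = [sampler(rng) for _ in range(n)]
        if abs(est1(x) - theta) < abs(est2(x) - theta):
            wins += 1
    return wins / reps

# Laplace(0, 1) via a difference of two exponentials; median vs. mean
laplace = lambda rng: rng.expovariate(1.0) - rng.expovariate(1.0)
p = pitman_closeness(statistics.median, statistics.fmean, laplace)
```

For a heavy-tailed symmetric population such as the Laplace, the median wins more than half the time, in line with the abstract's message about the sample median.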

7.
Few approaches for monitoring autocorrelated attribute data have been proposed in the literature. If the marginal process distribution is binomial, then the binomial AR(1) model as a realistic and well-interpretable process model may be adequate. Based on known and newly derived statistical properties of this model, we shall develop approaches to monitor a binomial AR(1) process, and investigate their performance in a simulation study. A case study demonstrates the applicability of the binomial AR(1) model and of the proposed control charts to problems from statistical process control.
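A minimal sketch of the monitored process and of a Shewhart-type chart with 3σ limits derived from the Bin(n, π) marginal (all numerical settings are assumptions; the article's charts are more refined):

```python
import random

def simulate_binomial_ar1(T, n, pi, rho, seed=5):
    """Binomial AR(1): X_t = alpha∘X_{t-1} + beta∘(n - X_{t-1}) with
    beta = pi*(1-rho) and alpha = beta + rho, so the marginal is Bin(n, pi)."""
    rng = random.Random(seed)
    beta = pi * (1 - rho)
    alpha = beta + rho
    thin = lambda a, k: sum(rng.random() < a for _ in range(k))
    x = sum(rng.random() < pi for _ in range(n))  # initialise from the marginal
    path = []
    for _ in range(T):
        x = thin(alpha, x) + thin(beta, n - x)
        path.append(x)
    return path

# Shewhart chart: signal whenever a count leaves the 3-sigma band of Bin(n, pi)
n, pi = 20, 0.25
mu, sd = n * pi, (n * pi * (1 - pi)) ** 0.5
path = simulate_binomial_ar1(T=300, n=n, pi=pi, rho=0.4)
alarms = [t for t, v in enumerate(path) if v > mu + 3 * sd or v < mu - 3 * sd]
```

In control, alarms should be rare; shifting π or ρ in the simulator and re-running shows how quickly the chart reacts.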

8.
In this paper, we discuss fully Bayesian quantile inference using the Markov chain Monte Carlo (MCMC) method for longitudinal data models with random effects. Under the assumption that the error term follows an asymmetric Laplace distribution, we establish a hierarchical Bayesian model and obtain the posterior distribution of the unknown parameters at the τ-th level. We overcome the current computational limitations using two approaches: the general MCMC technique with the Metropolis–Hastings algorithm, and Gibbs sampling from the full conditional distributions. These two methods outperform traditional frequentist methods under a wide array of simulated data models and are flexible enough to easily accommodate changes in the number of random effects and in their assumed distribution. We apply the Gibbs sampling method to analyse mouse growth data and obtain some conclusions that differ from those in the literature.
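The Metropolis–Hastings route can be illustrated in miniature: a random-walk sampler for a single τ-th quantile under a working asymmetric-Laplace likelihood with a flat prior (no random effects, scale fixed at 1; all tuning values are assumptions):

```python
import math
import random

def check_loss(u, tau):
    """Quantile check loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def mh_quantile(y, tau=0.5, n_iter=3000, step=0.5, seed=2):
    """Random-walk Metropolis-Hastings for the tau-th quantile under a
    working asymmetric-Laplace likelihood -- a bare-bones sketch of the
    MH route in the abstract, stripped of the hierarchical structure."""
    rng = random.Random(seed)
    theta = sum(y) / len(y)
    loglik = lambda th: -sum(check_loss(v - th, tau) for v in y)
    ll = loglik(theta)
    draws = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)
        ll_prop = loglik(prop)
        if math.log(rng.random()) < ll_prop - ll:  # MH accept/reject
            theta, ll = prop, ll_prop
        draws.append(theta)
    return draws

rng = random.Random(0)
y = [rng.gauss(10.0, 2.0) for _ in range(200)]
draws = mh_quantile(y)
burned = sorted(draws[1000:])
post_med = burned[len(burned) // 2]
```

With τ = 0.5 the posterior concentrates around the sample median; other τ values target other quantiles.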

9.
In this article, we deal with the problem of testing a point null hypothesis for the mean of a multivariate power exponential distribution. We study the conditions under which Bayesian and frequentist approaches can match. In this comparison it is observed that the tails of the model are the key to explain the reconciliability or irreconciliability between the two approaches.

10.
Multiple-membership logit models with random effects are models for clustered binary data where each statistical unit can belong to more than one group. The likelihood function of these models is analytically intractable. We propose two different approaches for parameter estimation: indirect inference and data cloning (DC). The former is a non-likelihood-based method that uses an auxiliary model to select reasonable estimates. We propose an auxiliary model whose parameter space has the same dimension as that of the target model, which makes it particularly convenient for reaching good estimates quickly. The latter method computes maximum likelihood estimates through the posterior distribution of an adequate Bayesian model fitted to cloned data. We implement a DC algorithm specifically for multiple-membership models. A Monte Carlo experiment compares the two methods on simulated data. For further comparison, we also report Bayesian posterior means and Integrated Nested Laplace Approximation hybrid DC estimates. Simulations show a negligible loss of efficiency for the indirect inference estimator, offset by a substantial computational gain. The approaches are then illustrated with two real examples on matched-pair data.
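The data-cloning idea is easiest to see in a conjugate toy case: replicating the data K times makes the posterior concentrate at the maximum likelihood estimate. A Bernoulli–Beta sketch (the article applies the same principle, via MCMC, to the intractable multiple-membership likelihood):

```python
def data_cloning_bernoulli(successes, n, clones, a=2.0, b=2.0):
    """Data-cloning illustration with a conjugate Bernoulli-Beta pair:
    cloning the data K times yields the posterior Beta(a + K*s, b + K*(n-s)),
    whose mean approaches the MLE s/n as K grows (prior a, b arbitrary)."""
    a_post = a + clones * successes
    b_post = b + clones * (n - successes)
    return a_post / (a_post + b_post)  # posterior mean

mle = 7 / 10
est_k1 = data_cloning_bernoulli(7, 10, clones=1)      # prior still visible
est_k100 = data_cloning_bernoulli(7, 10, clones=100)  # essentially the MLE
```

As K grows, the prior's influence is washed out and the posterior mean converges to the MLE, which is exactly why DC delivers maximum likelihood estimates through Bayesian machinery.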

11.
This paper analyzes the forecasting performance of an open economy dynamic stochastic general equilibrium (DSGE) model, estimated with Bayesian methods, for the Euro area during 1994Q1–2002Q4. We compare the DSGE model and a few variants of this model to various reduced-form forecasting models such as vector autoregressions (VARs) and vector error correction models (VECM), estimated both by maximum likelihood and two different Bayesian approaches, and traditional benchmark models, e.g., the random walk. The accuracy of point forecasts, interval forecasts and the predictive distribution as a whole are assessed in an out-of-sample rolling event evaluation using several univariate and multivariate measures. The results show that the open economy DSGE model compares well with more empirical models and thus that the tension between rigor and fit in older generations of DSGE models is no longer present. We also critically examine the role of Bayesian model probabilities and other frequently used low-dimensional summaries, e.g., the log determinant statistic, as measures of overall forecasting performance.  相似文献   
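The rolling out-of-sample design can be illustrated with a univariate toy version: re-estimate on a moving window, forecast one step ahead, and compare accuracy against the random-walk benchmark. The data and settings below are simulated assumptions, not Euro-area series:

```python
import random

def rolling_rmse(y, window, forecaster):
    """One-step-ahead rolling (pseudo out-of-sample) RMSE: at each date t,
    fit on y[t-window:t] and forecast y[t]."""
    errs = [(forecaster(y[t - window:t]) - y[t]) ** 2 for t in range(window, len(y))]
    return (sum(errs) / len(errs)) ** 0.5

random_walk = lambda hist: hist[-1]  # naive benchmark: no change

def ar1_forecast(hist):
    """OLS AR(1) one-step forecast from the estimation window."""
    x, z = hist[:-1], hist[1:]
    mx, mz = sum(x) / len(x), sum(z) / len(z)
    num = sum((a - mx) * (b - mz) for a, b in zip(x, z))
    den = sum((a - mx) ** 2 for a in x) or 1.0
    phi = num / den
    return mz + phi * (hist[-1] - mx)

# simulate an AR(1) series with persistence 0.8
rng = random.Random(4)
y, v = [], 0.0
for _ in range(200):
    v = 0.8 * v + rng.gauss(0, 1)
    y.append(v)
rmse_rw = rolling_rmse(y, 50, random_walk)
rmse_ar = rolling_rmse(y, 50, ar1_forecast)
```

The same harness extends to interval forecasts and multivariate measures, which is the shape of the evaluation the paper carries out for the DSGE model and its competitors.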

12.
Forecasting Performance of an Open Economy DSGE Model
《Econometric Reviews》2007, 26(2): 289–328

13.
This study concerns semiparametric approaches to estimating discrete multivariate count regression functions. The semiparametric approaches investigated combine discrete multivariate nonparametric kernel estimation with parametric estimation so that (i) prior knowledge of the conditional distribution of the model response may be incorporated and (ii) the bias of the traditional Nadaraya–Watson nonparametric kernel regression estimator may be reduced. We are specifically interested in this combination of the two estimation approaches and in the asymptotic properties of the resulting estimators. Asymptotic normality results are shown for the nonparametric correction terms of the parametric start function of the estimators. The performance of the discrete semiparametric multivariate kernel estimators studied is illustrated using simulations and real count data. In addition, diagnostic checks are performed to test the adequacy of the parametric start model for the true discrete regression model. Finally, using discrete semiparametric multivariate kernel estimators provides a bias reduction when the parametric multivariate regression model used as the start regression function belongs to a neighborhood of the true regression model.
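A univariate sketch of the "parametric start plus nonparametric correction" idea, using the Aitchison–Aitken kernel as one standard discrete kernel: the parametric start is deliberately misspecified, and the Nadaraya–Watson smooth of its residuals removes the bias. The bandwidth, support size and start model are all illustrative assumptions:

```python
import random

def aitchison_aitken(x, xi, h, c):
    """Aitchison-Aitken discrete kernel on a support of c categories:
    weight 1-h on a match, h/(c-1) otherwise."""
    return 1.0 - h if x == xi else h / (c - 1)

def semiparam_regression(xs, ys, start, x, h=0.3, c=6):
    """Semiparametric estimate: parametric start m_theta(x) plus a
    Nadaraya-Watson smooth of its residuals (illustrative univariate form
    of the combined approach)."""
    w = [aitchison_aitken(x, xi, h, c) for xi in xs]
    corr = sum(wi * (yi - start(xi)) for wi, xi, yi in zip(w, xs, ys)) / sum(w)
    return start(x) + corr

# true regression m(x) = 1 + 0.5x on support {0,...,5}; start misses the intercept
rng = random.Random(6)
xs = [rng.randrange(6) for _ in range(300)]
ys = [1.0 + 0.5 * xi + rng.gauss(0, 0.5) for xi in xs]
start = lambda x: 0.5 * x
est = semiparam_regression(xs, ys, start, x=2)
```

The correction term recovers the missing intercept, so the estimate at x = 2 lands near the true value 2.0 despite the biased start.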

14.
In cost-effectiveness analyses of drugs or health technologies, estimates of life years saved or quality-adjusted life years saved are required. Randomised controlled trials can provide an estimate of the average treatment effect; for survival data, the treatment effect is the difference in mean survival. However, typically not all patients will have reached the endpoint of interest at the close-out of a trial, making it difficult to estimate the difference in mean survival. In this situation, it is common to report the more readily estimable difference in median survival. Alternative approaches to estimating the mean have also been proposed. We conducted a simulation study to investigate the bias and precision of the three most commonly used sample measures of absolute survival gain – difference in median, restricted mean and extended mean survival – when used as estimates of the true mean difference, under different censoring proportions, while assuming a range of survival patterns, represented by Weibull survival distributions with constant, increasing and decreasing hazards. Our study showed that the three commonly used methods tended to underestimate the true treatment effect; consequently, the incremental cost-effectiveness ratio (ICER) would be overestimated. Of the three methods, the least biased is the extended mean survival, which perhaps should be used as the point estimate of the treatment effect to be input into the ICER, while the other two approaches could be used in sensitivity analyses. More work on the trade-offs between simple extrapolation using the exponential distribution and more complicated extrapolation using other methods would be valuable. Copyright © 2015 John Wiley & Sons, Ltd.
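Of the three estimands, the restricted mean is the area under the Kaplan–Meier curve up to a horizon τ. A bare-bones implementation for untied survival times (the extended-mean and extrapolation variants are beyond this sketch; the test data are an assumption):

```python
import random

def km_restricted_mean(times, events, tau):
    """Restricted mean survival time: area under the Kaplan-Meier estimate
    up to tau.  Assumes no tied event times, so each observation is
    processed individually."""
    data = sorted(zip(times, events))
    n_risk = len(data)
    s, area, last_t = 1.0, 0.0, 0.0
    for t, d in data:
        if t > tau:
            break
        area += s * (t - last_t)      # rectangle under the current step
        if d:                         # event (not censored): curve drops
            s *= 1.0 - 1.0 / n_risk
        n_risk -= 1                   # censored or not, one fewer at risk
        last_t = t
    return area + s * (tau - last_t)  # final rectangle up to the horizon

rng = random.Random(9)
times = [rng.expovariate(1.0) for _ in range(500)]    # Exp(1) survival, no censoring
rmst = km_restricted_mean(times, [1] * 500, tau=2.0)  # true value: 1 - e^-2 ≈ 0.865
```

The treatment effect in this metric is the difference of two such areas, one per arm; its downward bias relative to the unrestricted mean difference is precisely what the simulation study quantifies.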

15.
Estimators for quantiles based on linear combinations of order statistics have been proposed by Harrell and Davis (1982) and Kaigh and Lachenbruch (1982). Both estimators have been demonstrated to be at least as efficient for small-sample point estimation as an ordinary sample quantile estimator based on one or two order statistics. Distribution-free confidence intervals for quantiles can be constructed using either of the two approaches. By means of a simulation study, these confidence intervals have been compared with several other methods of constructing confidence intervals for quantiles in small samples. For the median, the Kaigh and Lachenbruch method performed fairly well. For other quantiles, no method performed better than the method that uses pairs of order statistics.
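The Harrell–Davis estimator is a weighted average of all order statistics with Beta((n+1)p, (n+1)(1−p)) weights. In the stdlib-only sketch below, the exact incomplete-beta weight increments are approximated by the Beta density evaluated at the midpoints (i − 0.5)/n, a simplification rather than the authors' exact formula:

```python
import math

def harrell_davis(xs, p):
    """Harrell-Davis quantile estimate: Beta((n+1)p, (n+1)(1-p))-weighted
    average of the order statistics, with density-at-midpoint weights as a
    stdlib-only approximation to the exact incomplete-beta increments."""
    n = len(xs)
    a, b = (n + 1) * p, (n + 1) * (1 - p)
    log_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    w = []
    for i in range(1, n + 1):
        u = (i - 0.5) / n
        w.append(math.exp((a - 1) * math.log(u) + (b - 1) * math.log(1 - u) - log_beta))
    xs_sorted = sorted(xs)
    return sum(wi * xi for wi, xi in zip(w, xs_sorted)) / sum(w)

est = harrell_davis(list(range(1, 101)), 0.5)  # median of the data 1..100
```

Because every order statistic contributes, the estimator is smooth in p, which is what drives its small-sample efficiency gains over single-order-statistic quantiles.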

16.
Methods for analyzing and modeling count data time series are used in various fields of practice, and they are particularly relevant for applications in finance and economics. We consider the binomial AR(1) model for count data processes with a first-order autoregressive dependence structure and a binomial marginal distribution. We present four approaches for estimating its model parameters from given time series data, and we derive expressions for the asymptotic distributions of these estimators. We then investigate the finite-sample performance of the estimators and of the respective asymptotic approximations in a simulation study, including a discussion of the 2-block jackknife. We illustrate our methods and findings with a real-data example about transactions at the Korean stock market. We conclude with an application of our results to obtaining reliable estimates of process capability indices.
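One of the simplest estimation routes is moment-based: π from the sample mean and ρ from the lag-1 sample autocorrelation. The sketch below simulates a binomial AR(1) path (X_t = α∘X_{t−1} + β∘(n − X_{t−1}) with β = π(1 − ρ), α = β + ρ) and recovers the parameters; all numerical settings are illustrative, and the paper's other three estimators are not shown:

```python
import random

def moment_estimates(xs, n):
    """Moment estimators for the binomial AR(1) model: pi from the sample
    mean, rho from the lag-1 sample autocorrelation."""
    T = len(xs)
    m = sum(xs) / T
    num = sum((xs[t] - m) * (xs[t + 1] - m) for t in range(T - 1))
    den = sum((v - m) ** 2 for v in xs)
    return m / n, num / den

rng = random.Random(42)
n, pi, rho = 15, 0.4, 0.3
beta = pi * (1 - rho)
alpha = beta + rho
thin = lambda a, k: sum(rng.random() < a for _ in range(k))
x, path = 6, []
for _ in range(4000):
    x = thin(alpha, x) + thin(beta, n - x)
    path.append(x)
pi_hat, rho_hat = moment_estimates(path, n)
```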

17.
In this work we study robustness in Bayesian models through a generalization of the Normal distribution. We present appropriate new techniques for dealing with this distribution in Bayesian inference. We then propose two approaches for deciding, in some applications, whether we should replace the usual Normal model by this generalization. First, we pose this dilemma as a model rejection problem, using diagnostic measures. In the second approach, we evaluate the model's predictive efficiency. We illustrate these perspectives with a simulation study, a nonlinear model and a longitudinal data model.

18.
The negative hypergeometric distribution arises as a waiting time distribution when we sample without replacement from a finite population. It has applications in many areas such as inspection sampling and estimation of wildlife populations. However, as is well known, the negative hypergeometric distribution is over-dispersed in the sense that its variance is greater than its mean. To make it more flexible and versatile, we propose a modified version called the COM-negative hypergeometric distribution (COM-NH) by introducing a shape parameter, as in the COM-Poisson and COM-binomial distributions. It is shown that under some limiting conditions, the COM-NH approaches a distribution that we call the COM-negative binomial (COM-NB), which in turn approaches the COM-Poisson distribution. For the proposed model, we investigate the dispersion characteristics and the shape of the probability mass function for different combinations of parameters. We also develop statistical inference for this model, including parameter estimation and hypothesis tests. In particular, we investigate properties such as the bias, MSE, and coverage probabilities of the maximum likelihood estimators of its parameters by Monte Carlo simulation, and we use a likelihood ratio test to assess the shape parameter of the underlying model. We present illustrative data examples with discussion.
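The over-dispersion claim is easy to verify numerically from the classical negative hypergeometric pmf, with X the number of failures drawn before the r-th success (the parameter values below are arbitrary; the COM modification itself is not implemented here):

```python
from math import comb

def neg_hypergeom_pmf(k, N, K, r):
    """P(X = k) for the negative hypergeometric distribution: k failures
    are drawn before the r-th success when sampling without replacement
    from N items of which K are successes."""
    return comb(k + r - 1, k) * comb(N - r - k, K - r) / comb(N, K)

N, K, r = 30, 10, 3
support = range(N - K + 1)  # X can be 0 .. N-K failures
pmf = [neg_hypergeom_pmf(k, N, K, r) for k in support]
mean = sum(k * p for k, p in zip(support, pmf))
var = sum((k - mean) ** 2 * p for k, p in zip(support, pmf))
```

The computed variance exceeds the mean, which is exactly the over-dispersion that motivates adding a shape parameter in the COM-NH proposal.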

19.
We employ two different approaches to derive single and product moments of order statistics from a truncated Laplace distribution. A direct evaluation method establishes recurrence relations whereas the more general non-overlapping mixture model incorporates the truncated Laplace distribution as a special case. The results are thereafter applied to estimate location and scale parameters of such distributions.

20.
This paper explores the utility of different approaches for modeling longitudinal count data with dropouts, arising from a clinical study for the treatment of actinic keratosis lesions on the face and balding scalp. A feature of these data is that, as the disease improves for subjects on the active arm, their data show larger dispersion than those on the vehicle arm, exhibiting over-dispersion relative to the Poisson distribution. After fitting the marginal (or population-averaged) model using generalized estimating equations (GEE), we note that inferences from such a model might be biased, as dropouts are treatment related. We then consider a weighted GEE (WGEE), where each subject's contribution to the analysis is weighted inversely by the subject's probability of dropout. Based on the model findings, we argue that the WGEE might not address the concerns about the impact of dropouts on the efficacy findings when dropouts are treatment related. As an alternative, we consider likelihood-based inference, where random effects are added to the model to allow for heterogeneity across subjects. Finally, we consider a transition model where, unlike the previous approaches that model the log-link function of the mean response, we model the subject's actual lesion counts. This model is an extension of the Poisson autoregressive model of order 1, where the autoregressive parameter is taken to be a function of treatment as well as other covariates, to induce different dispersions and correlations for the two treatment arms. We conclude with a discussion of model selection. Published in 2009 by John Wiley & Sons, Ltd.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)