Similar Documents (20 results found)
1.
The objective of this article is to evaluate the performance of the COM‐Poisson GLM for analyzing crash data exhibiting underdispersion (when conditional on the mean). The COM‐Poisson distribution, originally developed in 1962, has recently been reintroduced by statisticians for analyzing count data subject to either over‐ or underdispersion. Over the last year, the COM‐Poisson GLM has been evaluated in the context of crash data analysis and it has been shown that the model performs as well as the Poisson‐gamma model for crash data exhibiting overdispersion. To accomplish the objective of this study, several COM‐Poisson models were estimated using crash data collected at 162 railway‐highway crossings in South Korea between 1998 and 2002. This data set has been shown to exhibit underdispersion when models linking crash data to various explanatory variables are estimated. The modeling results were compared to those produced from the Poisson and gamma probability models documented in a previously published study. The results of this research show that the COM‐Poisson GLM can handle crash data when the modeling output shows signs of underdispersion. Finally, they also show that the model proposed in this study provides better statistical performance than the gamma probability and the traditional Poisson models, at least for this data set.
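The COM‐Poisson distribution mentioned above has probability mass proportional to λ^y / (y!)^ν, where the extra parameter ν controls dispersion: ν = 1 recovers the ordinary Poisson, while ν > 1 pushes the variance below the mean (underdispersion). A minimal numerical sketch of that mechanism, using a truncated normalizing sum (illustrative only, not the authors' GLM):

```python
def com_poisson_pmf(lam, nu, y_max=50):
    """COM-Poisson probabilities P(Y = y) proportional to lam**y / (y!)**nu,
    built recursively and normalized by truncating the infinite sum at y_max."""
    weights = [1.0]
    for y in range(1, y_max + 1):
        weights.append(weights[-1] * lam / y ** nu)
    z = sum(weights)
    return [w / z for w in weights]

def mean_var(pmf):
    """Mean and variance of a discrete distribution given as a probability list."""
    mean = sum(y * p for y, p in enumerate(pmf))
    var = sum((y - mean) ** 2 * p for y, p in enumerate(pmf))
    return mean, var
```

With ν = 1 the variance equals the mean, as for a Poisson; raising ν to 2 drops the variance below the mean, which is the underdispersion case this abstract addresses.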

2.
The hyper‐Poisson distribution can handle both over‐ and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation‐specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper‐Poisson distribution in analyzing motor vehicle crash count data. The hyper‐Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway‐highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness‐of‐fit measures indicated that the hyper‐Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper‐Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway‐Maxwell‐Poisson model previously developed for the same data set. The advantages of the hyper‐Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper‐Poisson model can handle both over‐ and underdispersed crash data. Although not a major issue for the Conway‐Maxwell‐Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model.

3.
Quantitative microbial risk assessment (QMRA) is widely accepted for characterizing the microbial risks associated with food, water, and wastewater. Single‐hit dose‐response models are the most commonly used dose‐response models in QMRA. Denoting P(d) as the probability of infection at a given mean dose d, a three‐parameter generalized QMRA beta‐Poisson dose‐response model, P(d | α, β, ρ), is proposed in which the minimum number of organisms required for causing infection, Kmin, is not fixed, but a random variable following a geometric distribution with parameter ρ. The single‐hit beta‐Poisson model, P(d | α, β), is a special case of the generalized model with Kmin = 1 (which implies ρ = 1). The generalized beta‐Poisson model is based on a conceptual model with greater detail in the dose‐response mechanism. Since a maximum likelihood solution is not easily available, a likelihood‐free approximate Bayesian computation (ABC) algorithm is employed for parameter estimation. By fitting the generalized model to four experimental data sets from the literature, this study reveals that the posterior median estimates produced fall short of meeting the required condition of ρ = 1 for the single‐hit assumption. However, three out of four data sets fitted by the generalized models could not achieve an improvement in goodness of fit. These combined results imply that, at least in some cases, a single‐hit assumption for characterizing the dose‐response process may not be appropriate, but that the more complex models may be difficult to support especially if the sample size is small. The three‐parameter generalized model provides a possibility to investigate the mechanism of a dose‐response process in greater detail than is possible under a single‐hit model.
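The likelihood‐free ABC step can be illustrated with a generic rejection sampler: draw a parameter from the prior, simulate data from the model, and keep the draw only if the simulation matches the observation. The sketch below uses a deliberately simplified setting (a single binomial infection experiment with a uniform prior on the infection probability); it shows the ABC idea only, not the authors' algorithm or data:

```python
import random

def abc_rejection(observed_k, n, n_draws=20000, tol=0, seed=1):
    """Rejection ABC for an infection probability p with a Uniform(0, 1) prior:
    draw p, simulate k ~ Binomial(n, p), and keep p when |k - observed_k| <= tol.
    The accepted draws approximate the posterior of p."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        p = rng.random()
        k = sum(rng.random() < p for _ in range(n))  # one binomial simulation
        if abs(k - observed_k) <= tol:
            accepted.append(p)
    return accepted
```

With an exact match (tol = 0) this reproduces the analytic Beta posterior; widening the tolerance trades accuracy for acceptance rate, which is the usual ABC compromise when the likelihood is intractable.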

4.
For dose–response analysis in quantitative microbial risk assessment (QMRA), the exact beta‐Poisson model is a two‐parameter mechanistic dose–response model with parameters α and β, which involves the Kummer confluent hypergeometric function. Evaluation of a hypergeometric function is a computational challenge. Denoting P(d) as the probability of infection at a given mean dose d, the widely used dose–response model P(d) = 1 − (1 + d/β)^(−α) is an approximate formula for the exact beta‐Poisson model. Notwithstanding the required conditions α ≪ β and β ≫ 1, issues related to the validity and approximation accuracy of this approximate formula have remained largely ignored in practice, partly because these conditions are too general to provide clear guidance. Consequently, this study proposes a probability measure Pr(0 < r < 1 | α̂, β̂) as a validity measure (r is a random variable that follows a gamma distribution; α̂ and β̂ are the maximum likelihood estimates of α and β in the approximate model); and the constraint conditions on α̂ and β̂ as a rule of thumb to ensure an accurate approximation (e.g., Pr(0 < r < 1 | α̂, β̂) > 0.99). This validity measure and rule of thumb were validated by application to all the completed beta‐Poisson models (related to 85 data sets) from the QMRA community portal (QMRA Wiki). The results showed that the higher the probability Pr(0 < r < 1 | α̂, β̂), the better the approximation. The results further showed that, among the total 85 models examined, 68 models were identified as valid approximate model applications, which all had a near perfect match to the corresponding exact beta‐Poisson model dose–response curve.
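The approximate model and a Monte Carlo version of the validity measure can be sketched as follows. The sketch assumes the standard reading of the approximation, in which the Beta(α, β) single‐hit probability is replaced by a Gamma(shape = α, scale = 1/β) variable r, which is only coherent when r stays in (0, 1):

```python
import random

def approx_beta_poisson(d, alpha, beta):
    """Approximate beta-Poisson dose-response: P(d) = 1 - (1 + d/beta)**(-alpha)."""
    return 1.0 - (1.0 + d / beta) ** (-alpha)

def validity_measure(alpha, beta, n=100000, seed=2):
    """Monte Carlo estimate of Pr(0 < r < 1) for r ~ Gamma(shape=alpha, scale=1/beta),
    the variable the approximate formula implicitly substitutes for the
    Beta(alpha, beta) single-hit probability (standard gamma-for-beta reading)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if rng.gammavariate(alpha, 1.0 / beta) < 1.0)
    return hits / n
```

When β is large relative to α (e.g., α = 0.2, β = 100) the measure is essentially 1 and the approximation is trustworthy; when the conditions fail (e.g., α = 2, β = 1) a large share of the gamma mass escapes (0, 1) and the approximate curve should not be used.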

5.
A Fading‐Memory Adaptive Parameter Estimation Algorithm for Linear Regression Models and Its Applications
This paper discusses the necessity of adjusting an existing linear regression model as economic information is continually updated, introduces a fading‐memory adaptive parameter estimation algorithm for linear regression models, and gives an application example.
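The standard realization of a fading‐memory estimator is recursive least squares with a forgetting factor, which geometrically down‐weights older observations so the fit tracks drifting parameters. The sketch below is a generic forgetting‐factor RLS for a scalar model y = a·x + b, illustrative only and not necessarily the paper's exact scheme:

```python
def rls_forgetting(xs, ys, lam=0.98, delta=1000.0):
    """Recursive least squares with forgetting factor lam for y = a*x + b.
    Each step multiplies the weight of all past data by lam (fading memory).
    Returns the parameter estimate theta = [a, b]."""
    theta = [0.0, 0.0]
    P = [[delta, 0.0], [0.0, delta]]  # large initial covariance = weak prior
    for x, y in zip(xs, ys):
        phi = [x, 1.0]                                     # regressor vector
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
        K = [Pphi[0] / denom, Pphi[1] / denom]             # Kalman-style gain
        err = y - (theta[0] * phi[0] + theta[1] * phi[1])  # prediction error
        theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
        # covariance update: P <- (P - K * (P phi)^T) / lam
        P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(2)]
             for i in range(2)]
    return theta
```

Setting lam = 1 recovers ordinary recursive least squares; values below 1 shorten the effective memory, which is what allows the model to adapt as new economic information arrives.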

6.
In this article, a classification model based on the majority rule sorting (MR‐Sort) method is employed to evaluate the vulnerability of safety‐critical systems with respect to malevolent intentional acts. The model is built on the basis of a (limited‐size) set of data representing (a priori known) vulnerability classification examples. The empirical construction of the classification model introduces a source of uncertainty into the vulnerability analysis process: a quantitative assessment of the performance of the classification model (in terms of accuracy and confidence in the assignments) is thus in order. Three different approaches are here considered to this aim: (i) a model‐retrieval‐based approach, (ii) the bootstrap method, and (iii) the leave‐one‐out cross‐validation technique. The analyses are presented with reference to an exemplificative case study involving the vulnerability assessment of nuclear power plants.

7.
An asymptotically efficient likelihood‐based semiparametric estimator is derived for the censored regression (tobit) model, based on a new approach for estimating the density function of the residuals in a partially observed regression. Smoothing the self‐consistency equation for the nonparametric maximum likelihood estimator of the distribution of the residuals yields an integral equation, which in some cases can be solved explicitly. The resulting estimated density is smooth enough to be used in a practical implementation of the profile likelihood estimator, but is sufficiently close to the nonparametric maximum likelihood estimator to allow estimation of the semiparametric efficient score. The parameter estimates obtained by solving the estimated score equations are then asymptotically efficient. A summary of analogous results for truncated regression is also given.

8.
This paper investigates asymptotic properties of the maximum likelihood estimator and the quasi‐maximum likelihood estimator for the spatial autoregressive model. The rates of convergence of those estimators may depend on some general features of the spatial weights matrix of the model. It is important to make the distinction with different spatial scenarios. Under the scenario that each unit will be influenced by only a few neighboring units, the estimators may have a √n‐rate of convergence and be asymptotically normal. When each unit can be influenced by many neighbors, irregularity of the information matrix may occur and various components of the estimators may have different rates of convergence.

9.
This paper develops a new estimation procedure for characteristic‐based factor models of stock returns. We treat the factor model as a weighted additive nonparametric regression model, with the factor returns serving as time‐varying weights and a set of univariate nonparametric functions relating security characteristics to the associated factor betas. We use a time‐series and cross‐sectional pooled weighted additive nonparametric regression methodology to simultaneously estimate the factor returns and characteristic‐beta functions. By avoiding the curse of dimensionality, our methodology allows for a larger number of factors than existing semiparametric methods. We apply the technique to the three‐factor Fama–French model, Carhart's four‐factor extension of it that adds a momentum factor, and a five‐factor extension that adds an own‐volatility factor. We find that momentum and own‐volatility factors are at least as important, if not more important, than size and value in explaining equity return comovements. We test the multifactor beta pricing theory against a general alternative using a new nonparametric test.

10.
In statistical applications, logistic regression is a popular method for analyzing binary data accompanied by explanatory variables. But when one of the two outcomes is rare, the estimation of model parameters has been shown to be severely biased and hence estimating the probability of rare events occurring based on a logistic regression model would be inaccurate. In this article, we focus on estimating the probability of rare events occurring based on logistic regression models. Instead of selecting a best model, we propose a local model averaging procedure based on a data perturbation technique applied to different information criteria to obtain different probability estimates of rare events occurring. Then an approximately unbiased estimator of Kullback‐Leibler loss is used to choose the best one among them. We design complete simulations to show the effectiveness of our approach. For illustration, a necrotizing enterocolitis (NEC) data set is analyzed.

11.
In order to develop a dose‐response model for SARS coronavirus (SARS‐CoV), the pooled data sets for infection of transgenic mice susceptible to SARS‐CoV and infection of mice with murine hepatitis virus strain 1, which may be a clinically relevant model of SARS, were fit to beta‐Poisson and exponential models with the maximum likelihood method. The exponential model (k = 4.1 × 10²) could describe the dose‐response relationship of the pooled data sets. The beta‐Poisson model did not provide a statistically significant improvement in fit. With the exponential model, the infectivity of SARS‐CoV was calculated and compared with those of other coronaviruses. The doses of SARS‐CoV corresponding to 10% and 50% responses (illness) were estimated at 43 and 280 PFU, respectively. Its estimated infectivity was comparable to that of HCoV‐229E, known as an agent of the human common cold, and also similar to those of some animal coronaviruses belonging to the same genetic group. Moreover, the exponential model was applied to the analysis of the epidemiological data of the SARS outbreak that occurred at an apartment complex in Hong Kong in 2003. The estimated dose of SARS‐CoV for apartment residents during the outbreak, which was back‐calculated from the reported number of cases, ranged from 16 to 160 PFU/person, depending on the floor. The exponential model developed here is the sole dose‐response model for SARS‐CoV at present and would enable us to understand the possibility for reemergence of SARS.
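The exponential dose‐response model has the closed form P(d) = 1 − exp(−d/k), so the 10% and 50% illness doses quoted above follow directly from the fitted k = 4.1 × 10² by inverting the formula (a sketch of the fitted curve, not a re‐estimation from the data):

```python
import math

def exp_dose_response(d, k):
    """Exponential dose-response model: P(d) = 1 - exp(-d / k)."""
    return 1.0 - math.exp(-d / k)

def dose_for_response(p, k):
    """Invert the model: the dose giving response probability p is -k * ln(1 - p)."""
    return -k * math.log(1.0 - p)
```

With k = 410, the inversion gives roughly 43 PFU for a 10% response and about 280 PFU (k · ln 2) for a 50% response, matching the figures reported in the abstract.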

12.
Risk factor selection is very important in the insurance industry, as it supports precise rate making and the study of the features of high‐quality insureds. Zero‐inflated data are common in insurance, such as claim frequency data, and zero‐inflation makes the selection of risk factors quite difficult. In this article, we propose a new risk factor selection approach, EM adaptive LASSO, for a zero‐inflated Poisson regression model, which combines the EM algorithm and the adaptive LASSO penalty. Under some regularity conditions, we show that, with probability approaching 1, important factors are selected and the redundant factors are excluded. We investigate the finite sample performance of the proposed method through a simulation study and the analysis of car insurance data from the SAS Enterprise Miner database.
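The zero‐inflated Poisson model behind this approach mixes a structural‐zero state (probability π) with an ordinary Poisson(λ) count, and the EM algorithm's E‐step computes the posterior probability that an observed zero is structural. A minimal sketch of those two ingredients (the plain mixture, not the adaptive‐LASSO penalized regression version):

```python
import math

def zip_pmf(y, lam, pi):
    """Zero-inflated Poisson: a structural zero with probability pi,
    otherwise an ordinary Poisson(lam) count."""
    poisson = math.exp(-lam) * lam ** y / math.factorial(y)
    return pi * (1.0 if y == 0 else 0.0) + (1.0 - pi) * poisson

def estep_structural_zero(lam, pi):
    """EM E-step: posterior probability that an observed zero is a structural zero,
    pi / (pi + (1 - pi) * exp(-lam))."""
    return pi / (pi + (1.0 - pi) * math.exp(-lam))
```

The E‐step quantity is what lets the M‐step separate "no claims because the insured never claims" from "no claims by Poisson chance," which is exactly why zero‐inflation complicates factor selection.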

13.
When a continuous‐time diffusion is observed only at discrete dates, in most cases the transition distribution and hence the likelihood function of the observations is not explicitly computable. Using Hermite polynomials, I construct an explicit sequence of closed‐form functions and show that it converges to the true (but unknown) likelihood function. I document that the approximation is very accurate and prove that maximizing the sequence results in an estimator that converges to the true maximum likelihood estimator and shares its asymptotic properties. Monte Carlo evidence reveals that this method outperforms other approximation schemes in situations relevant for financial models.

14.
Electric power is a critical infrastructure service after hurricanes, and rapid restoration of electric power is important in order to minimize losses in the impacted areas. However, rapid restoration of electric power after a hurricane depends on obtaining the necessary resources, primarily repair crews and materials, before the hurricane makes landfall and then appropriately deploying these resources as soon as possible after the hurricane. This, in turn, depends on having sound estimates of both the overall severity of the storm and the relative risk of power outages in different areas. Past studies have developed statistical, regression-based approaches for estimating the number of power outages in advance of an approaching hurricane. However, these approaches have either not been applicable for future events or have had lower predictive accuracy than desired. This article shows that a different type of regression model, a generalized additive model (GAM), can outperform the types of models used previously. This is done by developing and validating a GAM based on power outage data during past hurricanes in the Gulf Coast region and comparing the results from this model to the previously used generalized linear models.

15.
Analysis of competing hypotheses, a method for evaluating explanations of observed evidence, is used in numerous fields, including counterterrorism, psychology, and intelligence analysis. We propose a Bayesian extension of the methodology, posing the problem in terms of a multinomial‐Dirichlet hierarchical model. The yet‐to‐be observed true hypothesis is regarded as a multinomial random variable and the evaluation of the evidence is treated as a structured elicitation of a prior distribution on the probabilities of the hypotheses. This model provides the user with measures of uncertainty for the probabilities of the hypotheses. We discuss inference, such as point and interval estimates of hypothesis probabilities, ratios of hypothesis probabilities, and Bayes factors. A simple example involving the stadium relocation of the San Diego Chargers is used to illustrate the method. We also present several extensions of the model that enable it to handle special types of evidence, including evidence that is irrelevant to one or more hypotheses, evidence against hypotheses, and evidence that is subject to deception.
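The multinomial‐Dirichlet machinery can be sketched with a Monte Carlo summary of the posterior: evidence elicited as Dirichlet pseudo‐counts yields point estimates and credible intervals for each hypothesis probability. The pseudo‐counts below are hypothetical and the sampler is generic, not the paper's elicitation protocol:

```python
import random

def dirichlet_sample(alphas, rng):
    """Draw one probability vector from Dirichlet(alphas) via normalized gammas."""
    gs = [rng.gammavariate(a, 1.0) for a in alphas]
    s = sum(gs)
    return [g / s for g in gs]

def hypothesis_intervals(alphas, n=5000, level=0.9, seed=3):
    """Posterior means and equal-tailed credible intervals for each hypothesis
    probability under a Dirichlet(alphas) posterior (evidence as pseudo-counts).
    Returns a list of (mean, lower, upper) tuples, one per hypothesis."""
    rng = random.Random(seed)
    draws = [dirichlet_sample(alphas, rng) for _ in range(n)]
    lo_i, hi_i = int(n * (1 - level) / 2), int(n * (1 + level) / 2)
    out = []
    for j in range(len(alphas)):
        col = sorted(d[j] for d in draws)
        mean = sum(col) / n
        out.append((mean, col[lo_i], col[hi_i]))
    return out
```

The interval widths are the "measures of uncertainty" the abstract refers to: sparse or conflicting evidence (small pseudo‐counts) produces wide intervals even when the point estimates favor one hypothesis.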

16.
We examine the role of knowledge diversity among unit members on an organizational unit's productivity. Utilizing a proprietary data set of corrective maintenance tasks from a large software‐services firm, we investigate the impact of two key within‐unit diversity metrics: interpersonal diversity and intrapersonal diversity. We analyze the independent influence of interpersonal diversity and the interactive influence of interpersonal diversity and intrapersonal diversity on organizational unit's productivity. Finally, we examine how diversity moderates productivity of an organizational unit when employee turnover occurs. Our analysis reveals the following key insights: (a) interpersonal diversity has an inverted U‐shaped effect on organizational unit's productivity; (b) intrapersonal diversity moderates the influence of interpersonal diversity on organizational‐unit productivity; (c) at higher levels of interpersonal diversity, rate of decrease in productivity of the organizational unit due to turnover is higher. We discuss the resulting theoretical and managerial insights associated with these findings.

17.
This study introduces a universal "Dome" appointment rule that can be parameterized through a planning constant for different clinics characterized by the environmental factors—no‐shows, walk‐ins, number of appointments per session, variability of service times, and the ratio of the cost of the doctor's time to that of the patients' time. Simulation and nonlinear regression are used to derive an equation to predict the planning constant as a function of the environmental factors. We also introduce an adjustment procedure for appointment systems to explicitly minimize the disruptive effects of no‐shows and walk‐ins. The procedure adjusts the mean and standard deviation of service times based on the expected probabilities of no‐shows and walk‐ins for a given target number of patients to be served, and it is thus relevant for any appointment rule that uses the mean and standard deviation of service times to construct an appointment schedule. The results show that our Dome rule with the adjustment procedure performs better than the traditional rules in the literature, with a lower total system cost calculated as a weighted sum of patients' waiting time, doctor's idle time, and doctor's overtime. An open‐source decision‐support tool is also provided so that healthcare managers can easily develop appointment schedules for their clinical environment.
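The general shape of such an adjustment‐then‐schedule procedure can be sketched as follows. Both formulas here are hypothetical illustrations of the idea (scale the service‐time mean and SD by the expected show rate, then lengthen each interval by a fraction of the SD); they are not the paper's actual equations or its Dome planning constant:

```python
import math

def adjusted_service_params(mu, sigma, p_noshow, p_walkin):
    """Illustrative (hypothetical) adjustment: scale the service-time mean and SD
    by the expected net show rate, so the schedule targets the number of patients
    actually expected to be served rather than the number booked."""
    show_rate = 1.0 - p_noshow + p_walkin
    mu_adj = mu * show_rate
    sigma_adj = sigma * math.sqrt(show_rate)
    return mu_adj, sigma_adj

def build_schedule(n_slots, mu_adj, sigma_adj, c=0.5):
    """Appointment times where each interval is the adjusted mean plus a fraction c
    of the adjusted SD -- a common interval-lengthening device in appointment rules."""
    times, t = [], 0.0
    for _ in range(n_slots):
        times.append(t)
        t += mu_adj + c * sigma_adj
    return times
```

The key point the abstract makes survives the simplification: any rule built from a service‐time mean and SD can absorb no‐show and walk‐in behavior simply by feeding it adjusted moments, without changing the rule itself.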

18.
This paper uses a structural model to understand, predict, and evaluate the impact of an exogenous microcredit intervention program, the Thai Million Baht Village Fund program. We model household decisions in the face of borrowing constraints, income uncertainty, and high‐yield indivisible investment opportunities. After estimation of parameters using preprogram data, we evaluate the model's ability to predict and interpret the impact of the village fund intervention. Simulations from the model mirror the data in yielding a greater increase in consumption than credit, which is interpreted as evidence of credit constraints. A cost–benefit analysis using the model indicates that some households value the program much more than its per household cost, but overall the program costs 30 percent more than the sum of these benefits.

19.
In this paper we derive the asymptotic properties of within groups (WG), GMM, and LIML estimators for an autoregressive model with random effects when both T and N tend to infinity. GMM and LIML are consistent and asymptotically equivalent to the WG estimator. When T/N → 0 the fixed‐T results for GMM and LIML remain valid, but WG, although consistent, has an asymptotic bias in its asymptotic distribution. When T/N tends to a positive constant, the WG, GMM, and LIML estimators exhibit negative asymptotic biases of order 1/T, 1/N, and 1/(2NT), respectively. In addition, the crude GMM estimator that neglects the autocorrelation in first differenced errors is inconsistent when T/N → c > 0, despite being consistent for fixed T. Finally, we discuss the properties of a random effects pseudo MLE with unrestricted initial conditions when both T and N tend to infinity.

20.
This paper develops the fixed‐smoothing asymptotics in a two‐step generalized method of moments (GMM) framework. Under this type of asymptotics, the weighting matrix in the second‐step GMM criterion function converges weakly to a random matrix and the two‐step GMM estimator is asymptotically mixed normal. Nevertheless, the Wald statistic, the GMM criterion function statistic, and the Lagrange multiplier statistic remain asymptotically pivotal. It is shown that critical values from the fixed‐smoothing asymptotic distribution are high order correct under the conventional increasing‐smoothing asymptotics. When an orthonormal series covariance estimator is used, the critical values can be approximated very well by the quantiles of a noncentral F distribution. A simulation study shows that statistical tests based on the new fixed‐smoothing approximation are much more accurate in size than existing tests.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号