Similar Articles
 17 similar articles found.
1.
Uncertainty and sensitivity analysis is an essential ingredient of model development and applications. For many uncertainty and sensitivity analysis techniques, sensitivity indices are calculated based on a relatively large sample to measure the importance of parameters in their contributions to uncertainties in model outputs. To statistically compare their importance, it is necessary that uncertainty and sensitivity analysis techniques provide standard errors of estimated sensitivity indices. In this paper, a delta method is used to analytically approximate standard errors of estimated sensitivity indices for a popular sensitivity analysis method, the Fourier amplitude sensitivity test (FAST). Standard errors estimated based on the delta method were compared with those estimated based on 20 sample replicates. We found that the delta method can provide a good approximation for the standard errors of both first-order and higher-order sensitivity indices. Finally, based on the standard error approximation, we also propose a method to determine a minimum sample size to achieve the desired estimation precision for a specified sensitivity index. The standard error estimation method presented in this paper can make the FAST analysis computationally much more efficient for complex models.
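For context, the first-order FAST index this abstract builds on is computed from Fourier coefficients of the model output along a space-filling search curve. The sketch below is a minimal classical FAST estimator, not the paper's delta-method error formulas; the driver frequencies, harmonic count, and test model are illustrative assumptions.

```python
import numpy as np

def fast_first_order(f, freqs, n=1025, M=4):
    """First-order FAST sensitivity indices (minimal sketch).

    f     : vectorized model mapping an (n, k) array in [0, 1]^k to n outputs
    freqs : one integer driver frequency per input (chosen interference-free)
    n     : number of points along the search curve
    M     : number of harmonics attributed to each input
    """
    freqs = np.asarray(freqs)
    s = np.linspace(-np.pi, np.pi, n, endpoint=False)
    # Search curve with uniform marginals: x_i(s) = 1/2 + arcsin(sin(w_i s)) / pi
    x = 0.5 + np.arcsin(np.sin(np.outer(s, freqs))) / np.pi
    y = f(x)
    D = y.var()                                   # total output variance
    A = lambda w: np.mean(y * np.cos(w * s))      # half the cosine coefficient
    B = lambda w: np.mean(y * np.sin(w * s))      # half the sine coefficient
    # Variance attributed to input i: spectral power at harmonics of its frequency
    Vi = [sum(2 * (A(p * w) ** 2 + B(p * w) ** 2) for p in range(1, M + 1))
          for w in freqs]
    return np.array(Vi) / D
```

Applied to an additive model such as `y = x1 + 0.1 * x2`, the estimator should attribute nearly all of the output variance to the first input.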

2.
As modeling efforts expand to a broader spectrum of areas, the amount of computer time required to exercise the corresponding computer codes has become quite costly (several hours for a single run is not uncommon). This cost can be tied directly to the complexity of the modeling and to the large number of input variables (often numbering in the hundreds). Further, the complexity of the modeling (usually involving systems of differential equations) makes the relationships among the input variables mathematically intractable. In this setting it is desired to perform sensitivity studies of the input-output relationships. Hence, a judicious procedure for selecting the values of input variables is required. Latin hypercube sampling has been shown to work well on this type of problem.

However, a variety of situations require that decisions and judgments be made in the face of uncertainty. The source of this uncertainty may be lack of knowledge about probability distributions associated with input variables, or about different hypothesized future conditions, or may be present as a result of different strategies associated with a decision-making process. In this paper a generalization of Latin hypercube sampling is given that allows these areas to be investigated without making additional computer runs. In particular, it is shown how weights associated with Latin hypercube input vectors may be changed to reflect different probability distribution assumptions on key input variables and yet provide an unbiased estimate of the cumulative distribution function of the output variable. This allows different distribution assumptions on input variables to be studied without additional computer runs and without fitting a response surface. In addition, these same weights can be used in a modified nonparametric Friedman test to compare treatments. Sample size requirements needed to apply the results of this work are also considered. The procedures presented in this paper are illustrated using a model associated with the risk assessment of geologic disposal of radioactive waste.
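The reweighting idea can be sketched in a few lines: run the model once on a Latin hypercube design drawn under U(0, 1) inputs, then attach likelihood-ratio weights to study a different input distribution without new runs. The toy model, the Beta(2, 1) alternative, and the sample size below are illustrative assumptions, not the paper's application.

```python
import numpy as np

def latin_hypercube(n, k, rng):
    """n stratified points in [0, 1]^k: each column visits every one of
    the n strata exactly once, jittered uniformly within its stratum."""
    perms = rng.permuted(np.tile(np.arange(n), (k, 1)), axis=1).T
    return (perms + rng.random((n, k))) / n

rng = np.random.default_rng(42)
n, k = 2000, 2
x = latin_hypercube(n, k, rng)          # inputs drawn as U(0, 1)
y = x[:, 0] + 2.0 * x[:, 1]             # stand-in for one expensive model run

# Reuse the same runs under a Beta(2, 1) assumption on input 1 (pdf 2u on
# [0, 1]) via importance weights w_i proportional to f_new(x_i1) / f_old(x_i1):
w = 2.0 * x[:, 0]                       # f_old = 1 for U(0, 1)
w /= w.sum()

def weighted_cdf(t):
    """Estimate of P(Y <= t) under the reweighted input distribution."""
    return w[y <= t].sum()

# Under Beta(2, 1), E[X1] = 2/3, so the weighted mean of Y should be near
# 2/3 + 2 * 1/2 = 5/3 instead of the unweighted 3/2.
weighted_mean = np.sum(w * y)
```

No response surface is fitted and no new model evaluations are needed; only the weights change when the input distribution assumption changes.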

3.
The main aim of this paper is to assess the sensitivity of a Bayesian analysis of STAR models to the specification of the prior distributions. To this end, the joint posterior distribution of the model order, coefficients, and implicit parameters of the logistic STAR model is first presented. The conditional posterior distributions are then derived, followed by the design of a posterior simulator that combines Metropolis-Hastings, Gibbs sampling, RJMCMC, and Multiple-Try Metropolis algorithms. Finally, simulation studies and a case study on the prior sensitivity of the implicit parameters are detailed.

4.
An improved likelihood-based method is proposed to test for the significance of the first-order moving average model. Compared with commonly used tests which depend on the asymptotic properties of the maximum likelihood estimate and the likelihood ratio statistic, the proposed method has remarkable accuracy. Application of the method to a data set on book sales is presented to demonstrate the implementation of the method. Simulation studies are subsequently performed to illustrate the accuracy of the method compared to the traditional methods. Additionally, a simple and effective correction is used to deal with the boundary problem.

5.
Overdispersion is a common phenomenon in Poisson modeling. The generalized Poisson (GP) regression model accommodates both overdispersion and underdispersion in count data, and is an increasingly popular platform for modeling overdispersed counts. Since the Poisson model is a special case of GP regression, a test of overdispersion can be derived that compares the equi-dispersed Poisson model against the more general GP regression model. The score test has an advantage over the likelihood ratio test (LRT) and the Wald test in that it only requires the parameter of interest to be estimated under the null hypothesis (the Poisson model). Herein, we propose a score test for overdispersion based on the GP model (specifically the GP-2 model) and compare its power with that of the LRT and Wald tests. A simulation study indicates that the proposed score test, based on the asymptotic standard normal distribution, is more appropriate in practical applications.
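To make the "estimate only under the null" advantage concrete, a score-type overdispersion statistic has a closed form in the intercept-only case. The sketch below uses a Dean-type statistic in the overdispersion direction as an illustration; it is not the paper's GP-2 derivation, and the simulated data are assumptions.

```python
import numpy as np

def overdispersion_score(y):
    """Score statistic for equi-dispersion in an intercept-only Poisson
    model (Dean-type test). Only the null (Poisson) fit mu_hat = ybar is
    needed; the statistic is asymptotically N(0, 1) under the null."""
    y = np.asarray(y, dtype=float)
    n, mu = len(y), y.mean()
    return ((y - mu) ** 2 - y).sum() / np.sqrt(2.0 * n * mu ** 2)

rng = np.random.default_rng(1)
t_null = overdispersion_score(rng.poisson(5.0, 5000))               # ~ N(0, 1)
t_over = overdispersion_score(rng.negative_binomial(5, 0.5, 5000))  # mean 5, var 10
```

Against the equi-dispersed sample the statistic stays near zero; against the overdispersed negative-binomial sample it is far out in the right tail, so the null is rejected.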

6.
An improved likelihood-based method based on Fraser et al. (1999) is proposed in this paper to test the significance of the second lag of the stationary AR(2) model. Compared with the test proposed by Fan and Yao (2003) and the signed log-likelihood ratio test, the proposed method has remarkable accuracy. Simulation studies are performed to illustrate the accuracy of the proposed method, and an application to historical data demonstrates its implementation. Furthermore, the method can be extended to the general AR(p) model.

7.
One of the most important factors in building and changing communication mechanisms in social networks is the features of the network's members. Most existing network-monitoring methods do not consider the effects of these features on network formation mechanisms, and others do not yield reliable results when the features are numerous or correlated. In this article, we combine principal component analysis (PCA) with a likelihood method to monitor the underlying network model when individuals have many features, some of which are highly correlated.

8.
A random-effects transition model is proposed to model the economic activity status of household members. The model accounts for two kinds of correlation: one due to the longitudinal nature of the study, handled by a transition parameter, and the other due to correlation between responses of members of the same household, handled by introducing random coefficients into the model. Results are presented for homogeneous (parameters do not change over time) and non-homogeneous Markov models with random coefficients. A Bayesian approach via Gibbs sampling is used for parameter estimation. Using the deviance information criterion, the random-effects transition model is compared with three other models that exclude random effects and/or transition effects; the full model gains precision by accounting for all aspects of the process that generated the data. To illustrate the utility of the proposed model, a longitudinal data set extracted from the Iranian Labour Force Survey is analysed to explore the simultaneous effect of covariates on current economic activity as a nominal response. Sensitivity analyses are also performed to assess the robustness of the posterior estimates of the transition parameters to perturbations of the prior parameters.

9.
One deficit of the common Bollinger band is that it fails to account for the fat tails (leptokurtosis) that often exist in financial time series. An adjusted Bollinger band generated by a rolling GARCH regression method is proposed in this study. The performance of the adjusted Bollinger band strategy is evaluated on EUR, GBP, JPY, and AUD vs. USD foreign exchange trading. Results show that, in general, the adjusted Bollinger band outperforms the traditional one in terms of success ratio, net successes, and profit. Moreover, with or without transaction costs, only adjusted Bollinger strategies are recommended for investors: those with MA 5 or MA 10 for EUR, GBP, and JPY, and the one with MA 20 for AUD.
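The adjustment amounts to replacing the rolling standard deviation in the band half-width with a conditional volatility estimate. The sketch below uses an EWMA (RiskMetrics-style) variance as a lightweight stand-in for the paper's rolling GARCH fit; the window, band width, and decay parameter are illustrative assumptions.

```python
import numpy as np

def adjusted_bollinger(prices, window=20, width=2.0, lam=0.94):
    """Bollinger bands whose half-width uses an EWMA volatility of log
    returns (a stand-in for a rolling GARCH variance), mapped back to the
    price scale. Returns (moving average, upper band, lower band)."""
    prices = np.asarray(prices, dtype=float)
    ma = np.convolve(prices, np.ones(window) / window, mode="valid")
    ret = np.diff(np.log(prices))
    var = np.empty(len(ret))
    var[0] = ret.var()                    # initialize at the sample variance
    for t in range(1, len(ret)):          # EWMA recursion on squared returns
        var[t] = lam * var[t - 1] + (1 - lam) * ret[t - 1] ** 2
    # Align with the moving average and convert return vol to price vol.
    sigma = np.sqrt(var)[window - 2:] * prices[window - 1:]
    return ma, ma + width * sigma, ma - width * sigma
```

In calm periods the EWMA bands tighten faster than fixed-window bands, and in volatile periods they widen faster, which is the effect the adjusted band exploits.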

10.
Bridge penalized regression has many desirable statistical properties, such as unbiasedness, sparsity, and the oracle property. In the Bayesian framework, the bridge penalty can be implemented through a generalized Gaussian distribution (GGD) prior. In this paper, we incorporate the Bayesian bridge-randomized penalty and its adaptive version into quantile regression (QR) models with autoregressive perturbations to conduct Bayesian penalized estimation. Employing the working likelihood of asymmetric Laplace distribution (ALD) perturbations, Bayesian joint hierarchical models are established. Based on the mixture representations of the ALD and the GGD priors on the coefficients, hybrid algorithms combining the Gibbs sampler and the Metropolis-Hastings sampler are provided for fully Bayesian posterior estimation. Finally, the proposed Bayesian procedures are illustrated by simulation examples and applied to a real data set on electricity consumption.

11.
A study is made of Neyman's C(α) test for testing independence in nonnormal situations. It is shown that it performs very well, both in terms of the level of significance and the power, even for small sample sizes. Also, in the case of the bivariate Poisson distribution, it is shown that Fisher's z and Student's t transforms of the sample correlation coefficient are good competitors to Neyman's procedure.
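Of the competitors mentioned, Fisher's z transform is the easiest to sketch: under independence, the transformed sample correlation scaled by sqrt(n - 3) is approximately standard normal. The sketch below is the classical bivariate-normal version, not the bivariate Poisson case the paper studies, and the simulated data are assumptions.

```python
import numpy as np

def fisher_z_stat(x, y):
    """Fisher's z transform of the sample correlation r.
    Under independence, sqrt(n - 3) * z is approximately N(0, 1)."""
    n = len(x)
    r = np.corrcoef(x, y)[0, 1]
    z = 0.5 * np.log((1 + r) / (1 - r))
    return np.sqrt(n - 3) * z

rng = np.random.default_rng(7)
x = rng.standard_normal(1000)
stat_indep = fisher_z_stat(x, rng.standard_normal(1000))    # ~ N(0, 1)
stat_dep = fisher_z_stat(x, x + rng.standard_normal(1000))  # r near 0.7
```

Comparing the statistic with standard normal quantiles gives the test; the dependent pair above lands far outside any conventional critical value.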

12.
This article develops limit theory for likelihood analysis of weak exogeneity in I(2) cointegrated vector autoregressive (VAR) models incorporating deterministic terms. Conditions for weak exogeneity in I(2) VAR models are reviewed, and the asymptotic properties of conditional maximum likelihood estimators and a likelihood-based weak exogeneity test are then investigated. It is demonstrated that weak exogeneity in I(2) VAR models allows us to conduct asymptotic conditional inference based on mixed Gaussian distributions. It is then proved that a log-likelihood ratio test statistic for weak exogeneity in I(2) VAR models is asymptotically χ2-distributed. The article also presents an empirical illustration of the proposed test for weak exogeneity using Japan's macroeconomic data.

13.
In this paper, we consider estimation of the unknown parameters of a conditional Gaussian MA(1) model. In most cases a maximum-likelihood estimator is chosen because it is consistent; however, for small sample sizes the error is large, because the estimator has a bias of O(n^{-1}). We therefore derive the O(n^{-1}) bias of the maximum-likelihood estimator for the conditional Gaussian MA(1) model, and propose new estimators for the unknown parameters based on this bias correction. We investigate the properties of the bias, as well as the asymptotic variance of the maximum-likelihood estimators, through simulations. Finally, we demonstrate the validity of the new estimators in the same simulation study.

14.
The INAR(1) model (integer-valued autoregressive) is commonly used to model serially dependent processes of Poisson counts. We propose several asymptotic simultaneous confidence regions for the two parameters of a Poisson INAR(1) model, and investigate their performance and robustness for finite-length time series in a simulation study. Practical recommendations are derived, and the application of the confidence regions is illustrated by a real-data example.
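A Poisson INAR(1) path is simple to simulate via binomial thinning, and its two parameters can be recovered by moment estimators. The sketch below gives point estimates only; the simultaneous confidence regions proposed in the paper are not reproduced, and the parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_inar1(alpha, lam, n, rng):
    """Poisson INAR(1): X_t = alpha o X_{t-1} + e_t, where the thinning
    alpha o X is Binomial(X, alpha) and e_t ~ Poisson(lam)."""
    x = np.empty(n, dtype=np.int64)
    x[0] = rng.poisson(lam / (1.0 - alpha))      # stationary marginal mean
    for t in range(1, n):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
    return x

def inar1_moment_estimates(x):
    """Moment estimators: alpha from the lag-1 autocorrelation, lambda
    from the stationary mean mu = lam / (1 - alpha)."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    alpha_hat = (xc[1:] * xc[:-1]).sum() / (xc ** 2).sum()
    return alpha_hat, x.mean() * (1.0 - alpha_hat)
```

On a long simulated path the pair (alpha_hat, lambda_hat) lands close to the true values; confidence regions would then quantify the joint uncertainty around this point.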

15.
In the present study, the stochastic process X(t) describing an inventory model of type (s, S) with heavy-tailed demands is considered. Asymptotic expansions, for sufficiently large values of the parameter β = S − s, are obtained for the ergodic distribution and the nth-order moments of the process X(t), based on the main results of Teugels (1968) and Geluk and Frenk (2011).

16.
For the restricted parameter space (0, 1), we propose Zhang's loss function, which satisfies all seven properties of a good loss function on (0, 1). We then calculate the Bayes rule (estimator), the posterior expectation, the integrated risk, and the Bayes risk of the parameter in (0, 1) under Zhang's loss function. We also calculate the usual Bayes estimator under the squared error loss function, which is shown to underestimate the Bayes estimator under Zhang's loss function. Finally, numerical simulations and a real data example on monthly magazine exposure illustrate the theoretical results on the order relationships between the two Bayes estimators and the posterior expected Zhang's losses (PEZLs).
