1.
Complex models can only be realized a limited number of times due to large computational requirements. Methods exist for generating input parameters for model realizations, including Monte Carlo simulation (MCS) and Latin hypercube sampling (LHS). Recent algorithms such as maximinLHS seek to maximize the minimum distance between model inputs in the multivariate space. A novel extension of Latin hypercube sampling (LHSMDU) for multivariate models is developed here that increases the multidimensional uniformity of the input parameters through sequential realization elimination. Correlations are introduced into the LHSMDU sampling matrix using a Cholesky decomposition of the correlation matrix. Computer code implementing the proposed algorithm supplements this article. A simulation study comparing MCS, LHS, maximinLHS and LHSMDU demonstrates that increased multidimensional uniformity can significantly improve realization efficiency and that LHSMDU is effective for large multivariate problems.
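The stratified construction that all LHS variants share can be sketched in a few lines. This is a minimal illustration of plain LHS, not the article's LHSMDU code; the function name and seeding convention are assumptions. LHSMDU would additionally generate surplus candidate realizations and sequentially eliminate the closest pairs to improve multidimensional uniformity.

```python
import random

def latin_hypercube(n, d, seed=0):
    """Draw n points in [0,1)^d with exactly one point per stratum
    in each dimension (basic LHS, perfect one-dimensional uniformity)."""
    rng = random.Random(seed)
    columns = []
    for _ in range(d):
        # one uniform draw inside each of the n equal-width strata
        column = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(column)  # decouple the pairing across dimensions
        columns.append(column)
    # transpose: return n points, each with d coordinates
    return [tuple(col[i] for col in columns) for i in range(n)]
```

Each marginal is perfectly stratified by construction; the multidimensional spread, which LHSMDU targets, depends on the random shuffles.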

2.
Several estimators are examined for the simple linear regression model under a controlled, experimental situation with multiple observations at each design point. The model is examined under normal and non-normal error distributions and mild heterogeneity of variances across the chosen design points. We consider the ordinary, generalized, and estimated generalized least squares estimators and several examples of M estimators. The asymptotic properties of the M estimator using the Huber ψ are presented under these conditions for the multiple regression model. A simulation study is also presented which indicates that the M estimator possesses strong robustness properties under the presence of both non-normality and mild heteroscedasticity of errors. Finally, the M estimates are compared to the least squares estimates in two examples.
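The Huber M estimator mentioned above is typically computed by iteratively reweighted least squares. The sketch below, for the simple (one-covariate) case with MAD-based scale, is an assumed implementation for illustration, not the article's exact estimator:

```python
import statistics

def huber_line(x, y, c=1.345, iters=50):
    """Fit y ~ a + b*x by IRLS with the Huber psi: residuals within
    c*scale get full weight, larger ones are downweighted to c*scale/|r|."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        r = [yi - a - b * xi for xi, yi in zip(x, y)]
        # robust scale: normalized median absolute deviation (fallback 1.0)
        s = statistics.median(map(abs, r)) / 0.6745 or 1.0
        w = [1.0 if abs(ri) <= c * s else c * s / abs(ri) for ri in r]
        sw = sum(w)
        xb = sum(wi * xi for wi, xi in zip(w, x)) / sw
        yb = sum(wi * yi for wi, yi in zip(w, y)) / sw
        sxx = sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x))
        sxy = sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(w, x, y))
        b = sxy / sxx
        a = yb - b * xb
    return a, b
```

On clean data the weights are all 1 and the fit reduces to ordinary least squares; a gross outlier is progressively downweighted, which is the robustness property the simulation study examines.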

3.
Summary.  We develop an efficient way to select the best subset autoregressive model with exogenous variables and generalized autoregressive conditional heteroscedasticity errors. One main feature of our method is to select important autoregressive and exogenous variables, and at the same time to estimate the unknown parameters. The proposed method uses the stochastic search idea. By adopting Markov chain Monte Carlo techniques, we can identify the best subset model from a large number of possible choices. A simulation experiment shows that the method is very effective. Misspecification in the mean equation can also be detected by our model selection method. In the application to the stock-market data of seven countries, the lag-1 US return is found to have a strong influence on the other stock-market returns.

4.
In this article, we consider ranked set sampling (RSS) and investigate seven tests for normality under RSS. Each test is described, and the power of each test is then obtained by Monte Carlo simulation under various alternatives. Finally, the powers of the tests based on RSS are compared with the powers of the tests based on simple random sampling, and the results are discussed.
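The RSS design itself is simple to simulate under perfect ranking: for set size k, draw k independent sets of k units, rank each set, and keep the i-th order statistic from the i-th set. The sketch below (function name and seeding are mine) ranks by the observed values, whereas in practice ranking is often by judgment or an auxiliary variable:

```python
import random

def ranked_set_sample(population, k, cycles=1, seed=0):
    """Perfect-ranking RSS: per cycle, measure only the i-th order
    statistic of the i-th set, yielding k measured units per cycle."""
    rng = random.Random(seed)
    out = []
    for _ in range(cycles):
        for i in range(k):
            judged = sorted(rng.sample(population, k))
            out.append(judged[i])  # i-th order statistic of the i-th set
    return out
```

Only k units per cycle are actually measured even though k² are ranked, which is why RSS-based tests can gain power over simple random sampling at the same measurement cost.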

5.
In this paper, we propose and evaluate the performance of different parametric and nonparametric estimators of the population coefficient of variation under ranked set sampling (RSS), assuming a normal distribution. The performance of the proposed estimators was assessed in terms of the bias and relative efficiency obtained from a Monte Carlo simulation study. An application to anthropometric measurement data from a human population is also presented. The results showed that the proposed RSS estimators have an expressively lower mean squared error than the usual estimator obtained via simple random sampling. The maximum likelihood estimator was also found to be superior, provided the necessary assumptions of normality and perfect ranking are met.
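The usual (simple random sampling) benchmark that these RSS proposals compete against is the plug-in ratio of sample standard deviation to sample mean. A minimal sketch, not one of the article's RSS estimators:

```python
import statistics

def coef_variation(sample):
    """Naive plug-in estimator CV = s / xbar (assumes a nonzero mean)."""
    return statistics.stdev(sample) / statistics.mean(sample)
```

Under RSS the same statistic is computed on the ranked-set sample; the article reports that doing so markedly reduces the mean squared error.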

6.
In this paper the problem of statistical hypothesis testing under weighted sampling is considered, with the aim of obtaining the most powerful test. Simulated powers of the tests are computed by the Monte Carlo method. Using a convenience sample of the specialist physicians of the Social Security Organization of Ahvaz, Iran, two weighted sampling schemes are tested against random sampling. Among the three sampling schemes, size-biased sampling of order 0.2 is the most appropriate for this data-collection mechanism.

7.
8.
This paper studies bandwidth selection for kernel estimation of derivatives of multidimensional conditional densities, a non-parametric realm unexplored in the literature. This paper extends Baird [Cross validation bandwidth selection for derivatives of multidimensional densities. RAND Working Paper series, WR-1060; 2014] in its examination of conditional multivariate densities, derives and presents criteria for arbitrary kernel order and density dimension, shows consistency of the estimators, and investigates a minimization criterion which jointly estimates numerator and denominator bandwidths. I conduct a Monte Carlo simulation study for various orders of kernels in the Gaussian family and compare the new cross validation criterion with those implied by Baird [2014]. The paper finds that higher order kernels become increasingly important as the dimension of the distribution increases. I find that the cross validation criterion developed in this paper, which jointly estimates the derivative of the joint density (numerator) and the marginal density (denominator), does orders of magnitude better than criteria that estimate the bandwidths separately. I further find that using the infinite order Dirichlet kernel tends to give the best results.

9.
The problem of sampling random variables with overlapping pdfs subject to inequality constraints is addressed. Often, the values of physical variables in an engineering model are interrelated. This mutual dependence imposes inequality constraints on the random variables representing these parameters. Ignoring the interdependencies and sampling the variables independently can lead to inconsistency/bias. We propose an algorithm to generate samples of constrained random variables that are characterized by typical continuous probability distributions and are subject to different kinds of inequality constraints. The sampling procedure is illustrated for various representative cases and one realistic application to simulation of structural natural frequencies.
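A naive baseline for this problem is rejection sampling: draw each variable from its unconstrained marginal and discard joint draws that violate the constraints. This is a hedged sketch for illustration, not the article's algorithm; the marginals and the constraint X < Y below are invented:

```python
import random

def sample_with_constraints(marginals, constraint, n, seed=0, max_tries=100000):
    """Rejection sampling: keep a joint draw only if it satisfies the
    inequality constraint, yielding the correct conditional distribution
    (unlike sampling independently and then clipping, which biases)."""
    rng = random.Random(seed)
    out, tries = [], 0
    while len(out) < n and tries < max_tries:
        draw = tuple(m(rng) for m in marginals)
        if constraint(draw):
            out.append(draw)
        tries += 1
    return out

# hypothetical example: two normal variables constrained so that X < Y
pairs = sample_with_constraints(
    [lambda r: r.gauss(0.0, 1.0), lambda r: r.gauss(1.0, 1.0)],
    lambda v: v[0] < v[1],
    n=500,
)
```

The acceptance rate collapses when constraints are tight or numerous, which is precisely what motivates a more structured algorithm like the one the article proposes.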

10.
Using mean absolute deviation, we compare the efficacy of two new parametric conditional error rate estimators with six others, four of which are well known. The performance of both new estimators is found to be superior to that of the six competing estimators examined in this paper, especially when the ratio of the training sample size to the feature dimensionality is small.

11.
In this paper we present a perfect simulation method for obtaining perfect samples from collections of correlated Poisson random variables conditioned to be positive. We show how to use this method to produce a perfect sample from a Boolean model conditioned to cover a set of points: in W.S. Kendall and E. Thönnes (Pattern Recognition 32(9): 1569–1586, 1999), this special case was treated in a more complicated way. The method is applied to several simple examples where exact calculations can be made, so as to check correctness of the program using χ²-tests, and some small-scale experiments are carried out to explore the behaviour of the conditioned Boolean model.

12.
ABSTRACT

There is a widespread perception that standard unit-root tests have poor discriminatory power when they are applied to time series with nonlinear dynamics. Via Monte Carlo simulations, this study re-examines the finite-sample properties of selected univariate tests for unit roots and stationarity under a broad class of nonlinear dynamic models. Our simulation experiments produce a couple of interesting findings. First, the performance of the tests is driven by the degree of underlying persistence rather than the nonlinear dynamics per se. The tests under study exhibit reasonable performance for nonlinear models with mild persistence, while the accuracy of inference deteriorates substantially when the models are highly persistent, regardless of linearity. Second, on the question of whether to test for linearity or stationarity first, our results suggest conducting the linearity test first to enhance the reliability of test inference.
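The flavour of such an experiment can be reproduced with the simplest member of the unit-root family, the Dickey–Fuller regression Δy_t = α + β·y_{t−1} + e_t, whose t-statistic on β is strongly negative for stationary series. A minimal sketch; the study's actual test battery and nonlinear data-generating processes are richer:

```python
import math
import random

def df_stat(y):
    """Dickey-Fuller t-statistic (intercept, no trend, no lagged
    differences); large negative values reject the unit-root null."""
    x = y[:-1]
    dy = [y[t + 1] - y[t] for t in range(len(y) - 1)]
    n = len(x)
    xb, db = sum(x) / n, sum(dy) / n
    sxx = sum((xi - xb) ** 2 for xi in x)
    beta = sum((xi - xb) * (di - db) for xi, di in zip(x, dy)) / sxx
    alpha = db - beta * xb
    resid = [di - alpha - beta * xi for xi, di in zip(x, dy)]
    s2 = sum(r * r for r in resid) / (n - 2)
    return beta / math.sqrt(s2 / sxx)

rng = random.Random(1)
e = [rng.gauss(0.0, 1.0) for _ in range(500)]
rw, ar = [0.0], [0.0]
for t in range(499):
    rw.append(rw[-1] + e[t])        # unit root: maximal persistence
    ar.append(0.5 * ar[-1] + e[t])  # stationary AR(1): mild persistence
```

Driving the AR coefficient toward 1 makes the stationary series' statistic drift toward the random walk's, illustrating the study's point that persistence, not nonlinearity, governs test performance.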

13.
14.
In this paper, we study E-Bayesian and hierarchical Bayesian estimation of a parameter of the Pareto distribution under different loss functions. The definition of the E-Bayesian estimator of the parameter is provided. For the Pareto distribution with known scale parameter, formulas for the E-Bayesian and hierarchical Bayesian estimators of the shape parameter are derived under the different loss functions. Two properties of the E-Bayesian estimators are established: (i) the relationship between the E-Bayesian estimators under different loss functions, and (ii) the relationship between the E-Bayesian and hierarchical Bayesian estimators under the same loss function. A simulation example using the Monte Carlo method is given. Finally, the methods are applied to a practical problem involving golfers' income data; the results show that the proposed approach is feasible and convenient in application.
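The E-Bayesian idea can be sketched for the Pareto shape parameter under squared-error loss. With scale μ known, T = Σ log(x_i/μ) is sufficient, and a Gamma(a, b) prior (rate b) gives the Bayes estimate (n + a)/(b + T); the E-Bayesian estimate averages this over a hyper-prior on b. The choice b ~ Uniform(0, c) with a fixed is an assumed setup for illustration, not necessarily the article's exact one:

```python
import math
import random

def e_bayes_shape(x, mu, a=1.0, c=2.0, draws=200000, seed=0):
    """Monte Carlo E-Bayesian estimate of the Pareto shape parameter:
    average the conjugate Bayes estimate (n+a)/(b+T) over b ~ U(0, c)."""
    rng = random.Random(seed)
    n = len(x)
    T = sum(math.log(xi / mu) for xi in x)
    est = 0.0
    for _ in range(draws):
        b = rng.uniform(0.0, c)   # hyper-prior draw
        est += (n + a) / (b + T)  # Bayes estimate for this b
    return est / draws

# the same average in closed form: (n + a) / c * log((c + T) / T)
```

Averaging the analytic Bayes estimate over the hyper-prior is what makes E-Bayesian estimation cheap relative to a full hierarchical posterior computation.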

15.
This paper develops a new test for the parametric volatility function of a diffusion model based on nonparametric estimation techniques. The proposed test imposes no restriction on the functional form of the drift function and has an asymptotically standard normal distribution under the null hypothesis of correct specification. It is consistent against any fixed alternatives and has nontrivial asymptotic power against a class of local alternatives with proper rates. Monte Carlo simulations show that the test performs well in finite samples and generally has better power performance than the nonparametric test of Li (2007) and the stochastic process-based tests of Dette and Podolskij (2008). When the test is applied to high-frequency EUR/USD exchange-rate data, the empirical results show that the commonly used volatility functions fit more poorly as the data frequency becomes higher, and that general volatility functions fit relatively better than the constant volatility function.

16.
17.
Abstract. The use of auxiliary variables for generating proposal variables within a Metropolis–Hastings setting has been suggested in many different settings. This has been of particular interest for simulation from complex distributions, such as multimodal distributions, or in transdimensional approaches. For many of these approaches, the acceptance probabilities that are used appear somewhat magical, and different proofs of their validity have been given in each case. In this article, we present a general framework for the construction of acceptance probabilities in auxiliary variable proposal generation. In addition to showing the similarities between many of the algorithms proposed in the literature, the framework also demonstrates that there is great flexibility in how acceptance probabilities can be constructed. With this flexibility, alternative acceptance probabilities are suggested. Some numerical experiments are also reported.
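The standard special case that the general auxiliary-variable constructions reduce to is random-walk Metropolis, where a symmetric proposal gives the acceptance probability min(1, π(y)/π(x)). A minimal sketch (names and target are mine, for illustration):

```python
import math
import random

def metropolis_hastings(log_target, n, step=1.0, seed=0):
    """Random-walk Metropolis: accept y = x + N(0, step) with
    probability min(1, pi(y)/pi(x)), computed on the log scale."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        y = x + rng.gauss(0.0, step)  # symmetric proposal
        if math.log(rng.random()) < log_target(y) - log_target(x):
            x = y                     # accept; otherwise keep x
        out.append(x)
    return out

# target: standard normal, log pi(x) = -x^2/2 up to a constant
chain = metropolis_hastings(lambda v: -0.5 * v * v, n=20000)
```

The framework in the article generalizes exactly this acceptance step to proposals built from auxiliary variables, where the ratio is no longer π(y)/π(x) alone.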

18.
19.
The hazard function describes the instantaneous rate of failure at a time t, given that the individual survives up to t. In applications, the effects of covariates produce changes in the hazard function. In survival analysis it is therefore of interest to identify when a change point in time has occurred. In this work, covariates and censored variables are considered in order to estimate a change point in the Weibull regression hazard model, which is a generalization of the exponential model. For this more general model, it is possible to obtain maximum likelihood estimators for the change point and for the parameters involved. A Monte Carlo simulation study shows that it is indeed possible to implement this model in practice. An application to clinical trial data from a treatment of chronic granulomatous disease is also included.

20.